From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYK-00041L-20; Wed, 01 Jan 2014 04:37:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zO-Py
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.139.211:34009] by server-13.bemta-5.messagelabs.com id
	AF/A2-11357-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388551021!7311023!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13257 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvCc028792
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZucB022169
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZuwU022156; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 140331C016E; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:38 -0500
Message-Id: <1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for event
	channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH is a PV guest with a twist - certain things in it work
like HVM and others like PV. There is a similar mode - PVHVM,
where we run in HVM mode with PV code enabled - and this
patch builds on that.

The most notable PV interfaces are the XenBus and event channels.

We piggyback on how the event channel mechanism is used in
PVHVM - that is, we keep the normal native IRQ mechanism and
install a vector (the HVM callback) through which we invoke
the event channel mechanism.

This means that from a pvops perspective, we can use
native_irq_ops instead of the Xen PV-specific ones. In the
future we could also support pirq_eoi_map, but that is a
feature request that can be shared with PVHVM.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/irq.c   |  5 ++++-
 drivers/xen/events.c | 16 ++++++++++------
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 0da7f86..76ca326 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -5,6 +5,7 @@
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/features.h>
 #include <xen/events.h>
 
 #include <asm/xen/hypercall.h>
@@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
 
 void __init xen_init_irq_ops(void)
 {
-	pv_irq_ops = xen_irq_ops;
+	/* For PVH we use default pv_irq_ops settings. */
+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4035e83..bf8fb29 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1908,20 +1908,24 @@ void __init xen_init_IRQ(void)
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
 #ifdef CONFIG_X86
-	if (xen_hvm_domain()) {
+	if (xen_pv_domain()) {
+		irq_ctx_init(smp_processor_id());
+		if (xen_initial_domain())
+			pci_xen_initial_domain();
+	}
+	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_callback_vector();
+
+	if (xen_hvm_domain()) {
 		native_init_IRQ();
 		/* pci_xen_hvm_init must be called after native_init_IRQ so that
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
-	} else {
+	} else if (!xen_pvh_domain()) {
+		/* TODO: No PVH support for PIRQ EOI */
 		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
-		irq_ctx_init(smp_processor_id());
-		if (xen_initial_domain())
-			pci_xen_initial_domain();
-
 		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYN-00042F-22; Wed, 01 Jan 2014 04:37:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-000401-Mj
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.143.35:39030] by server-2.bemta-4.messagelabs.com id
	C6/EE-11386-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1388551022!9004156!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23602 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwRF028804
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvLP021130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvBY021125; Wed, 1 Jan 2014 04:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3DB7A1C02CC; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:43 -0500
Message-Id: <1388550945-25499-17-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 16/18] xen/pvh: Piggyback on PVHVM XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH is a PV guest with a twist - certain things in it work
like HVM and others like PV. For the XenBus mechanism we want
to use the PVHVM path.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/xenbus/xenbus_client.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index ec097d6..7f7c454 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -45,6 +45,7 @@
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>
 #include <xen/xen.h>
+#include <xen/features.h>
 
 #include "xenbus_probe.h"
 
@@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
 
 void __init xenbus_ring_ops_init(void)
 {
-	if (xen_pv_domain())
+	if (xen_pv_domain() && !xen_feature(XENFEAT_auto_translated_physmap))
 		ring_ops = &ring_ops_pv;
 	else
 		ring_ops = &ring_ops_hvm;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYJ-000410-Ac; Wed, 01 Jan 2014 04:37:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zJ-OZ
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:27097] by server-13.bemta-3.messagelabs.com id
	90/EA-28603-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1388551021!6677103!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9609 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zu87003671
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtYp021092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtJl022119; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BAD2A1BFB0D; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:29 -0500
Message-Id: <1388550945-25499-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 02/18] xen/pvh/x86: Define what a PVH guest
	is (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

A PVH guest is a PV guest with auto page translation enabled
and with the vector callback. It is a cross between PVHVM and PV.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

We don't yet have a Kconfig entry set up, as we do not have
all the parts ready for it - so we piggyback on the PVHVM
config option. This scaffolding will be removed later.

Note that on ARM the concept of PVH is non-existent. As Ian
put it: "an ARM guest is neither PV nor HVM nor PVHVM.
It's a bit like PVH but is different also (it's further towards
the H end of the spectrum than even PVH)." As such, these
options (PVHVM, PVH) are never enabled nor seen on ARM
compilations.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 include/xen/xen.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/xen/xen.h b/include/xen/xen.h
index a74d436..c4ab644 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,4 +29,20 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
+#ifdef CONFIG_XEN_PVHVM
+/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
+
+/* This functionality exists only for x86. The XEN_PVHVM support exists
+ * only in x86 world - hence on ARM it will be always disabled.
+ * N.B. ARM guests are neither PV nor HVM nor PVHVM.
+ * It's a bit like PVH but is different also (it's further towards the H
+ * end of the spectrum than even PVH).
+ */
+#include <xen/features.h>
+#define xen_pvh_domain() (xen_pv_domain() && \
+			  xen_feature(XENFEAT_auto_translated_physmap) && \
+			  xen_have_vector_callback)
+#else
+#define xen_pvh_domain()	(0)
+#endif
 #endif	/* _XEN_XEN_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYI-00040h-Ic; Wed, 01 Jan 2014 04:37:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zM-Nt
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.143.35:10467] by server-1.bemta-4.messagelabs.com id
	EA/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388551021!8964325!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31472 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZuGO028788
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zt7L022130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtJv022120; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B1D281BFB0C; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:28 -0500
Message-Id: <1388550945-25499-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 01/18] xen/p2m: Check for auto-xlat when
	doing mfn_to_local_pfn.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

Most of the functions in page.h are prefaced with
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

The exception is mfn_to_local_pfn. At first sight, the
function should work without this patch, as 'mfn_to_mfn' has
a similar check. But there is no such check in the
'get_phys_to_machine' function, so we would crash there.

Fix this by following the convention of having the
auto-xlat check in these static functions.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/include/asm/xen/page.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..4a092cc 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -167,7 +167,12 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
  */
 static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
 {
-	unsigned long pfn = mfn_to_pfn(mfn);
+	unsigned long pfn;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return mfn;
+
+	pfn = mfn_to_pfn(mfn);
 	if (get_phys_to_machine(pfn) != mfn)
 		return -1; /* force !pfn_valid() */
 	return pfn;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYN-00042r-SF; Wed, 01 Jan 2014 04:37:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYI-00040I-2S
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:06 +0000
Received: from [85.158.143.35:39041] by server-3.bemta-4.messagelabs.com id
	09/96-32360-17B93C25; Wed, 01 Jan 2014 04:37:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1388551023!9045216!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22490 invoked from network); 1 Jan 2014 04:37:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwgS028822
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvCL022190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zv4X021128; Wed, 1 Jan 2014 04:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4F0621C02CF; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:45 -0500
Message-Id: <1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 18/18] xen/pvh: Support ParaVirtualized
	Hardware extensions (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH allows a PV Linux guest to utilize hardware extended capabilities,
such as running MMU updates in an HVM container.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

Use .ascii and .asciz to define the Xen feature string. Note that the
PVH string must be on a single line (not split across multiple lines
with \) to keep the assembler from emitting a NUL character after each
line's string. This patch allows PVH to be configured and enabled.
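
The .ascii/.asciz distinction can be illustrated with a toy C model of
how the assembler lays out the bytes (not kernel code; the abbreviated
feature names "!wpt" and "|atp" are placeholders for the real strings):

```c
#include <assert.h>
#include <string.h>

/* ".ascii" emits the characters with no terminator; ".asciz" appends a
 * single NUL.  One .ascii followed by a final .asciz therefore yields a
 * single NUL at the very end, while using .asciz for every fragment
 * would embed a NUL after each one, truncating the feature string. */
size_t emit_ascii(char *buf, size_t off, const char *s)
{
	size_t n = strlen(s);

	memcpy(buf + off, s, n);	/* no trailing NUL */
	return off + n;
}

size_t emit_asciz(char *buf, size_t off, const char *s)
{
	size_t n = strlen(s) + 1;

	memcpy(buf + off, s, n);	/* includes the trailing NUL */
	return off + n;
}

/* .ascii "!wpt"; .asciz "|atp"  ->  one string, one final NUL */
const char *single_line_layout(void)
{
	static char buf[64];
	size_t off = 0;

	off = emit_ascii(buf, off, "!wpt");
	(void)emit_asciz(buf, off, "|atp");
	return buf;
}

/* .asciz "!wpt"; .asciz "|atp"  ->  a NUL is embedded after "!wpt" */
const char *split_line_layout(void)
{
	static char buf[64];
	size_t off = 0;

	off = emit_asciz(buf, off, "!wpt");
	(void)emit_asciz(buf, off, "|atp");
	return buf;
}
```

A reader that stops at the first NUL sees the whole string in the first
layout but only the first fragment in the second, which is why the PVH
feature string has to stay on one line.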

Lastly remove some of the scaffolding.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/Kconfig       | 8 ++++++++
 arch/x86/xen/grant-table.c | 2 +-
 arch/x86/xen/xen-head.S    | 8 +++++++-
 include/xen/xen.h          | 4 +---
 4 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 1a3c765..161cc34 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -51,3 +51,11 @@ config XEN_DEBUG_FS
 	  Enable statistics output and various tuning options in debugfs.
 	  Enabling this option may incur a significant performance overhead.
 
+config XEN_PVH
+	bool "Support for running as a PVH guest"
+	depends on X86_64 && XEN && XEN_PVHVM
+	default n
+	help
+	   This option enables support for running as a PVH guest (PV guest
+	   using hardware extensions) under a suitably capable hypervisor.
+	   If unsure, say N.
diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 040e064..42635e9 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -125,7 +125,7 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	apply_to_page_range(&init_mm, (unsigned long)shared,
 			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
 }
-#ifdef CONFIG_XEN_PVHVM
+#ifdef CONFIG_XEN_PVH
 #include <xen/balloon.h>
 #include <linux/slab.h>
 static int __init xlated_setup_gnttab_pages(void)
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 7faed58..56f42c0 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -13,6 +13,12 @@
 #include <xen/interface/elfnote.h>
 #include <asm/xen/interface.h>
 
+#ifdef CONFIG_XEN_PVH
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
+#else
+#define PVH_FEATURES_STR  ""
+#endif
+
 	__INIT
 ENTRY(startup_xen)
 	cld
@@ -95,7 +101,7 @@ NEXT_HYPERCALL(arch_6)
 #endif
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
-	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .asciz "!writable_page_tables|pae_pgdir_above_4gb")
+	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/xen.h b/include/xen/xen.h
index c4ab644..0c0e3ef 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,9 +29,7 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
-#ifdef CONFIG_XEN_PVHVM
-/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
-
+#ifdef CONFIG_XEN_PVH
 /* This functionality exists only for x86. The XEN_PVHVM support exists
  * only in x86 world - hence on ARM it will be always disabled.
  * N.B. ARM guests are neither PV nor HVM nor PVHVM.
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYI-00040h-Ic; Wed, 01 Jan 2014 04:37:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zM-Nt
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.143.35:10467] by server-1.bemta-4.messagelabs.com id
	EA/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388551021!8964325!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31472 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZuGO028788
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zt7L022130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtJv022120; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B1D281BFB0C; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:28 -0500
Message-Id: <1388550945-25499-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 01/18] xen/p2m: Check for auto-xlat when
	doing mfn_to_local_pfn.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

Most of the functions in page.h are prefaced with
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

Except mfn_to_local_pfn. At first sight, the function should
work without this patch, as 'mfn_to_pfn' has a similar check.
But there is no such check in the 'get_phys_to_machine'
function, so we would crash in there.

This patch fixes it by following the convention of having the
check for auto-xlat in these static functions.
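
The guard can be sketched with a toy p2m model (illustrative names and
values only, not the real kernel implementation; the real p2m is a
multi-level tree, modeled here as a flat array):

```c
#include <assert.h>

#define TOY_INVALID ((unsigned long)-1)

/* Toy pfn -> mfn table; only built for non-auto-translated guests. */
static const unsigned long toy_p2m[4] = { 10, 11, 12, 13 };

unsigned long toy_mfn_to_pfn(int auto_xlat, unsigned long mfn)
{
	unsigned long pfn;

	/* Auto-translated (PVH) guests are identity-mapped. */
	if (auto_xlat)
		return mfn;

	for (pfn = 0; pfn < 4; pfn++)
		if (toy_p2m[pfn] == mfn)
			return pfn;
	return TOY_INVALID;
}

unsigned long toy_mfn_to_local_pfn(int auto_xlat, unsigned long mfn)
{
	unsigned long pfn;

	/* The guard this patch adds: bail out before any p2m access,
	 * because for auto-xlat guests the table was never built. */
	if (auto_xlat)
		return mfn;

	pfn = toy_mfn_to_pfn(0, mfn);
	/* Mirrors the get_phys_to_machine() cross-check in page.h. */
	if (pfn == TOY_INVALID || toy_p2m[pfn] != mfn)
		return TOY_INVALID;	/* force !pfn_valid() */
	return pfn;
}
```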

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/include/asm/xen/page.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..4a092cc 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -167,7 +167,12 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
  */
 static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
 {
-	unsigned long pfn = mfn_to_pfn(mfn);
+	unsigned long pfn;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return mfn;
+
+	pfn = mfn_to_pfn(mfn);
 	if (get_phys_to_machine(pfn) != mfn)
 		return -1; /* force !pfn_valid() */
 	return pfn;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYM-000421-9x; Wed, 01 Jan 2014 04:37:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zR-5u
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [193.109.254.147:9194] by server-13.bemta-14.messagelabs.com id
	71/77-19374-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1388551022!8300925!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5496 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZuPr028787
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zten021088
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:55 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtoW021081; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C37191BFB0E; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:30 -0500
Message-Id: <1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in PV
	code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

In the bootup code for PVH we can trap cpuid via a VM exit, so we
don't need to use the emulated-prefix call. We also check for the
vector callback early on, as it is a required feature. PVH also runs
at the default kernel IOPL.

Finally, pure PV settings are moved to a separate function that is
only called for pure PV, i.e. PV with pvmmu. They are also guarded
by #ifdef CONFIG_XEN_PVMMU.
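
The two mode-dependent choices above can be sketched as a small
dispatch (a toy model with illustrative names, not the enlighten.c
code itself):

```c
#include <assert.h>
#include <string.h>

enum guest_mode { GUEST_PV, GUEST_PVH };

/* PVH executes the plain cpuid instruction, which the hypervisor
 * traps via a VM exit; classic PV must use the emulated-prefix form
 * that Xen recognizes and handles. */
const char *cpuid_path(enum guest_mode mode)
{
	if (mode == GUEST_PVH)
		return "native";		/* trapped via VM exit */
	return "xen-emulate-prefix";		/* handled by the hypervisor */
}

/* PVH runs at the default kernel IOPL of 0; classic PV raises it to 1
 * early in boot (the commit message's early_amd_init/0xcf8 case). */
int desired_iopl(enum guest_mode mode)
{
	return mode == GUEST_PVH ? 0 : 1;
}
```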

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 63 +++++++++++++++++++++++++++++++++---------------
 arch/x86/xen/setup.c     | 18 +++++++++-----
 2 files changed, 56 insertions(+), 25 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..755e5bb 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -46,6 +46,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/features.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -262,8 +263,9 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	pr_info("Booting paravirtualized kernel %son %s\n",
+		xen_feature(XENFEAT_auto_translated_physmap) ?
+			"with PVH extensions " : "", pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
 		break;
 	}
 
-	asm(XEN_EMULATE_PREFIX "cpuid"
-		: "=a" (*ax),
-		  "=b" (*bx),
-		  "=c" (*cx),
-		  "=d" (*dx)
-		: "0" (*ax), "2" (*cx));
+	if (xen_pvh_domain())
+		native_cpuid(ax, bx, cx, dx);
+	else
+		asm(XEN_EMULATE_PREFIX "cpuid"
+			: "=a" (*ax),
+			"=b" (*bx),
+			"=c" (*cx),
+			"=d" (*dx)
+			: "0" (*ax), "2" (*cx));
 
 	*bx &= maskebx;
 	*cx &= maskecx;
@@ -1420,6 +1425,19 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_early_guest_init(void)
+{
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+#ifdef CONFIG_X86_32
+	BUG(); /* PVH: Implement proper support. */
+#endif
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1431,13 +1449,18 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
+	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (xen_pvh_domain())
+		pv_cpu_ops.cpuid = xen_cpuid;
+	else
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1469,8 +1492,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1548,14 +1569,18 @@ asmlinkage void __init xen_start_kernel(void)
 	/* set the limit of our address space */
 	xen_reserve_top();
 
-	/* We used to do this in xen_arch_setup, but that is too late on AMD
-	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
-	 * which pokes 0xcf8 port.
-	 */
-	set_iopl.iopl = 1;
-	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
-	if (rc != 0)
-		xen_raw_printk("physdev_op failed %d\n", rc);
+	/* PVH: runs at default kernel iopl of 0 */
+	if (!xen_pvh_domain()) {
+		/*
+		 * We used to do this in xen_arch_setup, but that is too late
+		 * on AMD were early_cpu_init (run before ->arch_setup()) calls
+		 * early_amd_init which pokes 0xcf8 port.
+		 */
+		set_iopl.iopl = 1;
+		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
+		if (rc != 0)
+			xen_raw_printk("physdev_op failed %d\n", rc);
+	}
 
 #ifdef CONFIG_X86_32
 	/* set up basic CPUID stuff */
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..2137c51 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -563,16 +563,13 @@ void xen_enable_nmi(void)
 		BUG();
 #endif
 }
-void __init xen_arch_setup(void)
+void __init xen_pvmmu_arch_setup(void)
 {
-	xen_panic_handler_init();
-
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
 
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		HYPERVISOR_vm_assist(VMASST_CMD_enable,
-				     VMASST_TYPE_pae_extended_cr3);
+	HYPERVISOR_vm_assist(VMASST_CMD_enable,
+			     VMASST_TYPE_pae_extended_cr3);
 
 	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
 	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
@@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
 	xen_enable_sysenter();
 	xen_enable_syscall();
 	xen_enable_nmi();
+}
+
+/* This function is not called for HVM domains */
+void __init xen_arch_setup(void)
+{
+	xen_panic_handler_init();
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		xen_pvmmu_arch_setup();
+
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
 		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYO-00043x-NV; Wed, 01 Jan 2014 04:37:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYI-00040V-G5
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:06 +0000
Received: from [85.158.143.35:39024] by server-1.bemta-4.messagelabs.com id
	2B/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388551022!8891666!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28728 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zvj0028791
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zu3U028178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZuBR021103; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 24A581C0175; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:40 -0500
Message-Id: <1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 13/18] xen/grant-table: Refactor gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We have an odd scenario where for the PV paths we take a shortcut,
but for the HVM paths we first ioremap xen_hvm_resume_frames and
then assign it to gnttab_shared.addr. This is needed because
gnttab_map uses gnttab_shared.addr.

Instead of having:
	if (pv)
		return gnttab_map
	if (hvm)
		...

	gnttab_map

let's move the HVM part before the gnttab_map call and remove the
first call to gnttab_map.
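
The resulting single-exit control flow can be sketched like this (a
toy model with stand-in names; the real function returns -ENOSYS and
-ENOMEM and calls xen_remap()):

```c
#include <assert.h>
#include <stddef.h>

int map_calls;			/* counts gnttab_map invocations */
void *shared_addr;		/* stands in for gnttab_shared.addr */
static char fake_frames[16];	/* stands in for the remapped frames */

static int toy_gnttab_map(void)
{
	map_calls++;
	return 0;
}

int toy_gnttab_setup(int auto_xlat)
{
	/* HVM/auto-translated setup happens first, once. */
	if (auto_xlat && shared_addr == NULL) {
		shared_addr = fake_frames;	/* xen_remap() in the driver */
		if (shared_addr == NULL)
			return -1;		/* -ENOMEM in the driver */
	}
	/* Single exit: every mode ends with exactly one map call,
	 * instead of PV returning early and HVM mapping separately. */
	return toy_gnttab_map();
}
```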

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/grant-table.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 99399cb..cc1b4fa 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
 	if (max_nr_gframes < nr_grant_frames)
 		return -ENOSYS;
 
-	if (xen_pv_domain())
-		return gnttab_map(0, nr_grant_frames - 1);
-
-	if (gnttab_shared.addr == NULL) {
+	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
+	{
 		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
-						PAGE_SIZE * max_nr_gframes);
+					       PAGE_SIZE * max_nr_gframes);
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
 					xen_hvm_resume_frames);
 			return -ENOMEM;
 		}
 	}
-
-	gnttab_map(0, nr_grant_frames - 1);
-
-	return 0;
+	return gnttab_map(0, nr_grant_frames - 1);
 }
 
 int gnttab_resume(void)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYK-00041Z-R6; Wed, 01 Jan 2014 04:37:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zK-Qz
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.137.68:61380] by server-17.bemta-3.messagelabs.com id
	36/5C-15965-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388551021!6743490!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11696 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvTZ003680
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvoO021120
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zunv022157; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1C59D1C016F; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:39 -0500
Message-Id: <1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 12/18] xen/grants: Remove
	gnttab_max_grant_frames dependency on gnttab_init.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function gnttab_max_grant_frames() returns the maximum number
of frames (pages) of grants we can have. Unfortunately it depended
on gnttab_init() having been run first to initialize the boot-time
maximum (boot_max_nr_grant_frames).

This meant that callers of gnttab_max_grant_frames() would always
get a zero value if they ran before gnttab_init(), such as
'platform_pci_init' (drivers/xen/platform-pci.c).
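
The fix latches the boot-time maximum on first use inside the accessor
itself. A minimal sketch of that lazy-latching pattern (illustrative
names; fake_xen_max stands in for the __max_nr_grant_frames()
hypervisor query):

```c
#include <assert.h>

unsigned int fake_xen_max = 32;	/* stands in for __max_nr_grant_frames() */

unsigned int toy_max_grant_frames(void)
{
	unsigned int xen_max = fake_xen_max;
	static unsigned int boot_max;	/* zero until the first call */

	/* First call: latch the boot-time maximum right here, so callers
	 * that run before gnttab_init() still see a sane non-zero value. */
	if (!boot_max)
		boot_max = xen_max;

	/* Never report more frames than were available at boot. */
	return xen_max > boot_max ? boot_max : xen_max;
}
```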

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/grant-table.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..99399cb 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -62,7 +62,6 @@
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
-static unsigned int boot_max_nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
@@ -827,6 +826,11 @@ static unsigned int __max_nr_grant_frames(void)
 unsigned int gnttab_max_grant_frames(void)
 {
 	unsigned int xen_max = __max_nr_grant_frames();
+	static unsigned int boot_max_nr_grant_frames;
+
+	/* First time, initialize it properly. */
+	if (!boot_max_nr_grant_frames)
+		boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	if (xen_max > boot_max_nr_grant_frames)
 		return boot_max_nr_grant_frames;
@@ -1227,13 +1231,12 @@ int gnttab_init(void)
 
 	gnttab_request_version();
 	nr_grant_frames = 1;
-	boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	/* Determine the maximum number of frames required for the
 	 * grant reference free list on the current hypervisor.
 	 */
 	BUG_ON(grefs_per_grant_frame == 0);
-	max_nr_glist_frames = (boot_max_nr_grant_frames *
+	max_nr_glist_frames = (gnttab_max_grant_frames() *
 			       grefs_per_grant_frame / RPP);
 
 	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYJ-00041B-MR; Wed, 01 Jan 2014 04:37:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zI-Or
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:61375] by server-9.bemta-3.messagelabs.com id
	21/A5-13104-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388551021!3059736!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27048 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvLR028790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZudU028168
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zu2Y028160; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D53F41BFB13; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:32 -0500
Message-Id: <1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
	xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The revectoring and copying of the P2M only happen when
!auto-xlat and on 64-bit builds. That is not obvious from
the code, so let's have separate 32-bit and 64-bit functions.

We also invert the check for auto-xlat to make the code
flow simpler.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 73 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..d792a69 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
 	 * instead of somewhere later and be confusing. */
 	xen_mc_flush();
 }
-#endif
-static void __init xen_pagetable_init(void)
+static void __init xen_pagetable_p2m_copy(void)
 {
-#ifdef CONFIG_X86_64
 	unsigned long size;
 	unsigned long addr;
-#endif
-	paging_init();
-	xen_setup_shared_info();
-#ifdef CONFIG_X86_64
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		unsigned long new_mfn_list;
+	unsigned long new_mfn_list;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+	/* On 32-bit, we get zero so this never gets executed. */
+	new_mfn_list = xen_revector_p2m_tree();
+	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+		/* using __ka address and sticking INVALID_P2M_ENTRY! */
+		memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+		/* We should be in __ka space. */
+		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+		addr = xen_start_info->mfn_list;
+		/* We roundup to the PMD, which means that if anybody at this stage is
+		 * using the __ka address of xen_start_info or xen_start_info->shared_info
+		 * they are in going to crash. Fortunatly we have already revectored
+		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+		size = roundup(size, PMD_SIZE);
+		xen_cleanhighmap(addr, addr + size);
 
 		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+		memblock_free(__pa(xen_start_info->mfn_list), size);
+		/* And revector! Bye bye old array */
+		xen_start_info->mfn_list = new_mfn_list;
+	} else
+		return;
 
-		/* On 32-bit, we get zero so this never gets executed. */
-		new_mfn_list = xen_revector_p2m_tree();
-		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-			/* using __ka address and sticking INVALID_P2M_ENTRY! */
-			memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-			/* We should be in __ka space. */
-			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-			addr = xen_start_info->mfn_list;
-			/* We roundup to the PMD, which means that if anybody at this stage is
-			 * using the __ka address of xen_start_info or xen_start_info->shared_info
-			 * they are in going to crash. Fortunatly we have already revectored
-			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-			size = roundup(size, PMD_SIZE);
-			xen_cleanhighmap(addr, addr + size);
-
-			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-			memblock_free(__pa(xen_start_info->mfn_list), size);
-			/* And revector! Bye bye old array */
-			xen_start_info->mfn_list = new_mfn_list;
-		} else
-			goto skip;
-	}
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
@@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
 	 * anything at this stage. */
 	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
 #endif
-skip:
+}
+#else
+static void __init xen_pagetable_p2m_copy(void)
+{
+	/* Nada! */
+}
 #endif
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+	xen_pagetable_p2m_copy();
 	xen_post_allocator_init();
 }
 static void xen_write_cr2(unsigned long cr2)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYL-00041g-6d; Wed, 01 Jan 2014 04:37:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zP-1t
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.139.211:48622] by server-9.bemta-5.messagelabs.com id
	CA/9A-15098-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388551021!7137212!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23120 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZuoI003670
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zt2Q028149
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtAG022117; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AC7C71BF850; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:27 -0500
Message-Id: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The patches, also available at
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12

implement the necessary functionality to boot a PV guest in PVH mode.

This blog has a great description of what PVH is:
http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

These patches are based on v3.13-rc6.

Changelog of v12: [http://mid.gmane.org/1387313503-31362-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per Stefano's review.
 - Split some patches up for easier review.
 - Bugs fixed.

Changelog of v11 as compared to v10: [https://lkml.org/lkml/2013/12/12/625]:
 - Split patches in a more logical sense, squash some
 - Dropped Acked-by's from folks
 - Fleshed out descriptions

Regression-wise: there are no bugs with Xen 4.2 and Xen 4.3.

That is, whether you compile/boot it with CONFIG_XEN_PVH=y or
"# CONFIG_XEN_PVH is not set" - in both cases, as either dom0 or domU,
there are no bugs. I also launched it as 32/64-bit dom0 with 32/64-bit
domU as PV or PVHVM, along with SLES11, SLES12, F15->F19 (32/64), OL5,
OL6, RHEL5 (32/64), FreeBSD HVM, and NetBSD PV, without issues.

With Xen 4.1, there is a regression (see
http://mid.gmane.org/20131220175735.GA619@phenom.dumpdata.com)
and it is unclear at this time what the right way is to fix the PVH ABI
to work around it. Once that has been cleared up, some of the patches:

 [PATCH v12 02/18] xen/pvh/x86: Define what an PVH guest is (v2).
 [PATCH v12 03/18] xen/pvh: Early bootup changes in PV code (v2).
 [PATCH v12 07/18] xen/pvh: Setup up shared_info.
 [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for event channels (v2)
 [PATCH v12 18/18] xen/pvh: Support ParaVirtualized Hardware

will have to be reworked.

The only things needed to make this work as PVH are:

 0) Get the latest version of Xen and compile/install it.
    See http://wiki.xen.org/wiki/Compiling_Xen_From_Source for details

 1) Clone above mentioned tree

    See http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs#Configuring_the_Kernel
    for details. The steps are:

	cd $HOME
	git clone  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git linux
	cd linux
	git checkout origin/stable/pvh.v11

 2) Compile with CONFIG_XEN_PVH=y

    a) From scratch:

	make defconfig
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Paravirtualization layer for spinlocks
		 Xen guest support	(which will now show you:)
		 Support for running as a PVH guest (NEW)

	In case you prefer to edit .config directly, the options are:

	CONFIG_HYPERVISOR_GUEST=y
	CONFIG_PARAVIRT=y
	CONFIG_PARAVIRT_GUEST=y
	CONFIG_PARAVIRT_SPINLOCKS=y
	CONFIG_XEN=y
	CONFIG_XEN_PVH=y

	You will also have to enable the block and network drivers, console, etc.,
	which are in different submenus.

    b) Based on your current distro:

	cp /boot/config-`uname -r` $HOME/linux/.config
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Support for running as a PVH guest (NEW)

 3) Launch it with 'pvh=1' in your guest config (for example):

	extra="console=hvc0 debug  kgdboc=hvc0 nokgdbroundup  initcall_debug debug"
	kernel="/mnt/lab/latest/vmlinuz"
	ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
	memory=1024
	vcpus=4
	name="pvh"
	vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
	vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
	disk=['phy:/dev/sdb1,xvda,w']
	pvh=1
	on_reboot="preserve"
	on_crash="preserve"
	on_poweroff="preserve"

    using 'xl'. Xend 'xm' does not have PVH support.

It will boot up as a normal PV guest, but 'xen-detect' will report it as an HVM
guest.

The functionality that is turned off is:
 - VCPU hotplug. You can try it, but it will not be allowed:
   'echo 0 > /sys/bus/cpu/devices/cpu4/online' will error out.


Items that have not been tested extensively or at all:
  - Migration (xl save && xl restore for example).

  - 32-bit guests (won't even present you with a CONFIG_XEN_PVH option)

  - PCI passthrough

  - Running it in dom0 mode (as the patches for that are not yet in Xen upstream).
    If you want to try that, you can merge/pull Mukesh's branch:

	cd $HOME/xen
	git pull git://oss.oracle.com/git/mrathor/xen.git dom0pvh-v6

    .. and use this bootup parameter ("dom0pvh=1"). Remember to recompile
    and install the new version of Xen. This patchset
    does not contain the patches necessary to set up guests - but I can
    create one easily enough.

  - Memory ballooning

  - Multiple VBDs, NICs, etc.

If you encounter errors, please email me with the following (please note that
the guest config has 'on_reboot="preserve"' and 'on_crash="preserve"' - which
you should have in your guest config to retain the memory of the guest):

 a) xl dmesg
 b) xl list
 c) xenctx -s $HOME/linux/System.map -f -a -C <domain id>
    [xenctx is sometimes found in  /usr/lib/xen/bin/xenctx ]
 d) the console output from the guest
 e) Anything else you can think of.

Stash away your vmlinux file (it is too big to send via email), as I might
need it later on.


That is it!

Thank you!

 arch/arm/xen/enlighten.c           |   9 +-
 arch/x86/include/asm/xen/page.h    |   7 +-
 arch/x86/xen/Kconfig               |   8 ++
 arch/x86/xen/enlighten.c           | 115 ++++++++++++++++++++------
 arch/x86/xen/grant-table.c         |  64 +++++++++++++++
 arch/x86/xen/irq.c                 |   5 +-
 arch/x86/xen/mmu.c                 | 164 ++++++++++++++++++++++---------------
 arch/x86/xen/p2m.c                 |  15 +++-
 arch/x86/xen/setup.c               |  41 ++++++++--
 arch/x86/xen/smp.c                 |  49 +++++++----
 arch/x86/xen/xen-head.S            |   8 +-
 arch/x86/xen/xen-ops.h             |   1 +
 drivers/xen/cpu_hotplug.c          |   4 +-
 drivers/xen/events.c               |  16 ++--
 drivers/xen/gntdev.c               |   2 +-
 drivers/xen/grant-table.c          |  76 ++++++++++++-----
 drivers/xen/platform-pci.c         |  10 ++-
 drivers/xen/xenbus/xenbus_client.c |   3 +-
 include/xen/grant_table.h          |   9 +-
 include/xen/xen.h                  |  14 ++++
 20 files changed, 462 insertions(+), 158 deletions(-)

Konrad Rzeszutek Wilk (6):
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct.
      xen/pvh: Piggyback on PVHVM for grant driver (v2)

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v2).
      xen/pvh: Early bootup changes in PV code (v2).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh/arm/arm64: Disable PV code that does not work with PVH (v2)
      xen/pvh: Support ParaVirtualized Hardware extensions (v2).



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYN-00042r-SF; Wed, 01 Jan 2014 04:37:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYI-00040I-2S
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:06 +0000
Received: from [85.158.143.35:39041] by server-3.bemta-4.messagelabs.com id
	09/96-32360-17B93C25; Wed, 01 Jan 2014 04:37:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1388551023!9045216!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22490 invoked from network); 1 Jan 2014 04:37:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwgS028822
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvCL022190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zv4X021128; Wed, 1 Jan 2014 04:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4F0621C02CF; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:45 -0500
Message-Id: <1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 18/18] xen/pvh: Support ParaVirtualized
	Hardware extensions (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH allows a PV Linux guest to utilize hardware extended capabilities,
such as running MMU updates in an HVM container.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
the callback vector."

Use .ascii and .asciz to define the Xen feature string. Note that the PVH
string must be on a single line (not multiple lines with \) to keep the
assembler from putting a null char after each string segment.
This patch allows PVH to be configured and enabled.

Lastly remove some of the scaffolding.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/Kconfig       | 8 ++++++++
 arch/x86/xen/grant-table.c | 2 +-
 arch/x86/xen/xen-head.S    | 8 +++++++-
 include/xen/xen.h          | 4 +---
 4 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 1a3c765..161cc34 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -51,3 +51,11 @@ config XEN_DEBUG_FS
 	  Enable statistics output and various tuning options in debugfs.
 	  Enabling this option may incur a significant performance overhead.
 
+config XEN_PVH
+	bool "Support for running as a PVH guest"
+	depends on X86_64 && XEN && XEN_PVHVM
+	default n
+	help
+	   This option enables support for running as a PVH guest (PV guest
+	   using hardware extensions) under a suitably capable hypervisor.
+	   If unsure, say N.
diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 040e064..42635e9 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -125,7 +125,7 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	apply_to_page_range(&init_mm, (unsigned long)shared,
 			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
 }
-#ifdef CONFIG_XEN_PVHVM
+#ifdef CONFIG_XEN_PVH
 #include <xen/balloon.h>
 #include <linux/slab.h>
 static int __init xlated_setup_gnttab_pages(void)
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 7faed58..56f42c0 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -13,6 +13,12 @@
 #include <xen/interface/elfnote.h>
 #include <asm/xen/interface.h>
 
+#ifdef CONFIG_XEN_PVH
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
+#else
+#define PVH_FEATURES_STR  ""
+#endif
+
 	__INIT
 ENTRY(startup_xen)
 	cld
@@ -95,7 +101,7 @@ NEXT_HYPERCALL(arch_6)
 #endif
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
-	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .asciz "!writable_page_tables|pae_pgdir_above_4gb")
+	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/xen.h b/include/xen/xen.h
index c4ab644..0c0e3ef 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,9 +29,7 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
-#ifdef CONFIG_XEN_PVHVM
-/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
-
+#ifdef CONFIG_XEN_PVH
 /* This functionality exists only for x86. The XEN_PVHVM support exists
  * only in x86 world - hence on ARM it will be always disabled.
  * N.B. ARM guests are neither PV nor HVM nor PVHVM.
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYP-00044u-CF; Wed, 01 Jan 2014 04:37:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYJ-00040r-9o
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:07 +0000
Received: from [85.158.143.35:10477] by server-1.bemta-4.messagelabs.com id
	6B/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388551022!8891667!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28738 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwrR028799
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zvsr021121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zuwx022159; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2D0C81C0176; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:41 -0500
Message-Id: <1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant frame
	array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The 'xen_hvm_resume_frames' variable used to be an 'unsigned long'
containing the virtual address of the grants. That was OK
for most architectures (PVHVM, ARM), where the grants are contiguous
in memory. That, however, is not the case for PVH - in which case
we will have to do a lookup for each virtual address to find the PFN.

Instead of doing that, let's make it a structure which will contain
the array of PFNs, the virtual address, and the count of said PFNs.

Also provide generic functions, gnttab_setup_auto_xlat_frames and
gnttab_free_auto_xlat_frames, to populate said structure with
appropriate values for PVHVM and ARM.

To round it off, change the name from 'xen_hvm_resume_frames' to
a more descriptive one - 'xen_auto_xlat_grant_frames'.

For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
we will populate the 'xen_auto_xlat_grant_frames' by ourselves.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/xen/enlighten.c   |  9 +++++++--
 drivers/xen/grant-table.c  | 45 ++++++++++++++++++++++++++++++++++++++++-----
 drivers/xen/platform-pci.c | 10 +++++++---
 include/xen/grant_table.h  |  9 ++++++++-
 4 files changed, 62 insertions(+), 11 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..2162172 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
 	const char *version = NULL;
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
+	unsigned long grant_frames;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
 	}
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
-	xen_hvm_resume_frames = res.start;
+	grant_frames = res.start;
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
-			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
+			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
 	if (xen_vcpu_info == NULL)
 		return -ENOMEM;
 
+	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
+		free_percpu(xen_vcpu_info);
+		return -ENOMEM;
+	}
 	gnttab_init();
 	if (!xen_initial_domain())
 		xenbus_probe(NULL);
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index cc1b4fa..b117fd6 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -65,8 +65,8 @@ static unsigned int nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
-unsigned long xen_hvm_resume_frames;
-EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
+struct grant_frames xen_auto_xlat_grant_frames;
+EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);
 
 static union {
 	struct grant_entry_v1 *v1;
@@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
+int gnttab_setup_auto_xlat_frames(unsigned long addr)
+{
+	xen_pfn_t *pfn;
+	unsigned int max_nr_gframes = __max_nr_grant_frames();
+	int i;
+
+	if (xen_auto_xlat_grant_frames.count)
+		return -EINVAL;
+
+	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
+	if (!pfn)
+		return -ENOMEM;
+	for (i = 0; i < max_nr_gframes; i++)
+		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
+
+	xen_auto_xlat_grant_frames.vaddr = addr;
+	xen_auto_xlat_grant_frames.pfn = pfn;
+	xen_auto_xlat_grant_frames.count = max_nr_gframes;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
+
+void gnttab_free_auto_xlat_frames(void)
+{
+	if (!xen_auto_xlat_grant_frames.count)
+		return;
+	kfree(xen_auto_xlat_grant_frames.pfn);
+	xen_auto_xlat_grant_frames.pfn = NULL;
+	xen_auto_xlat_grant_frames.count = 0;
+	xen_auto_xlat_grant_frames.vaddr = 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
+
 /* Handling of paged out grant targets (GNTST_eagain) */
 #define MAX_DELAY 256
 static inline void
@@ -1068,6 +1102,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
+		BUG_ON(xen_auto_xlat_grant_frames.count < nr_gframes);
 		/*
 		 * Loop backwards, so that the first hypercall has the largest
 		 * index, ensuring that the table will grow only once.
@@ -1076,7 +1111,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 			xatp.domid = DOMID_SELF;
 			xatp.idx = i;
 			xatp.space = XENMAPSPACE_grant_table;
-			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
+			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
 			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
 			if (rc != 0) {
 				pr_warn("grant table add_to_physmap failed, err=%d\n",
@@ -1175,11 +1210,11 @@ static int gnttab_setup(void)
 
 	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
 	{
-		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
+		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
 					       PAGE_SIZE * max_nr_gframes);
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
-					xen_hvm_resume_frames);
+					xen_auto_xlat_grant_frames.vaddr);
 			return -ENOMEM;
 		}
 	}
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 2f3528e..f1947ac 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
 	long ioaddr;
 	long mmio_addr, mmio_len;
 	unsigned int max_nr_gframes;
+	unsigned long grant_frames;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
 	}
 
 	max_nr_gframes = gnttab_max_grant_frames();
-	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	if (gnttab_setup_auto_xlat_frames(grant_frames))
+		goto out;
 	ret = gnttab_init();
 	if (ret)
-		goto out;
+		goto grant_out;
 	xenbus_probe(NULL);
 	return 0;
-
+grant_out:
+	gnttab_free_auto_xlat_frames();
 out:
 	pci_release_region(pdev, 0);
 mem_out:
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..a997406 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
 			   grant_status_t **__shared);
 void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
 
-extern unsigned long xen_hvm_resume_frames;
+struct grant_frames {
+	xen_pfn_t *pfn;
+	int count;
+	unsigned long vaddr;
+};
+extern struct grant_frames xen_auto_xlat_grant_frames;
 unsigned int gnttab_max_grant_frames(void);
+int gnttab_setup_auto_xlat_frames(unsigned long addr);
+void gnttab_free_auto_xlat_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
-- 
1.8.3.1
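As an illustrative aside, the bookkeeping that gnttab_setup_auto_xlat_frames() introduces above - precomputing the PFN of each grant frame from one contiguous region - can be sketched in plain userspace C. Everything below (the struct and function names, the fixed 4 KiB page size) is a simplified stand-in for illustration, not the kernel API:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT 12                      /* assume 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)    /* mirrors the kernel macro */

/* Simplified stand-in for struct grant_frames. */
struct grant_frames_sketch {
	uint64_t *pfn;
	int count;
	unsigned long vaddr;
};

/* Same contract as the kernel function: 0 on success, -EINVAL if the
 * frames were already set up, -ENOMEM if the PFN array cannot be
 * allocated. Each frame is one page, so the PFNs are consecutive. */
static int setup_auto_xlat_sketch(struct grant_frames_sketch *gf,
				  unsigned long addr, int max_nr_gframes)
{
	int i;

	if (gf->count)
		return -EINVAL;
	gf->pfn = calloc(max_nr_gframes, sizeof(gf->pfn[0]));
	if (!gf->pfn)
		return -ENOMEM;
	for (i = 0; i < max_nr_gframes; i++)
		gf->pfn[i] = PFN_DOWN(addr + (unsigned long)i * PAGE_SIZE);
	gf->vaddr = addr;
	gf->count = max_nr_gframes;
	return 0;
}
```

Calling it a second time fails with -EINVAL, matching the guard at the top of the real function.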


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:38 -0500
Message-Id: <1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for event
	channels (v2)

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH is a PV guest with a twist - certain things work in it
as they do in HVM and others as in PV. There is a similar
mode - PVHVM - where we run in HVM mode with PV code
enabled, and this patch builds on that.

The most notable PV interfaces are the XenBus and event channels.

We piggyback on how the event channel mechanism is used in
PVHVM - that is, we keep the normal native IRQ mechanism and
install a vector (the HVM callback) through which we invoke
the event channel mechanism.

This means that from a pvops perspective, we can use
native_irq_ops instead of the Xen PV-specific ones. In the
future we could also support pirq_eoi_map, but that is a
feature request that can be shared with PVHVM.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/irq.c   |  5 ++++-
 drivers/xen/events.c | 16 ++++++++++------
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 0da7f86..76ca326 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -5,6 +5,7 @@
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/features.h>
 #include <xen/events.h>
 
 #include <asm/xen/hypercall.h>
@@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
 
 void __init xen_init_irq_ops(void)
 {
-	pv_irq_ops = xen_irq_ops;
+	/* For PVH we use default pv_irq_ops settings. */
+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4035e83..bf8fb29 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1908,20 +1908,24 @@ void __init xen_init_IRQ(void)
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
 #ifdef CONFIG_X86
-	if (xen_hvm_domain()) {
+	if (xen_pv_domain()) {
+		irq_ctx_init(smp_processor_id());
+		if (xen_initial_domain())
+			pci_xen_initial_domain();
+	}
+	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_callback_vector();
+
+	if (xen_hvm_domain()) {
 		native_init_IRQ();
 		/* pci_xen_hvm_init must be called after native_init_IRQ so that
 		 * __acpi_register_gsi can point at the right function */
 		pci_xen_hvm_init();
-	} else {
+	} else if (!xen_pvh_domain()) {
+		/* TODO: No PVH support for PIRQ EOI */
 		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
-		irq_ctx_init(smp_processor_id());
-		if (xen_initial_domain())
-			pci_xen_initial_domain();
-
 		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
-- 
1.8.3.1
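The key decision in this patch - keep the native IRQ ops whenever the hypervisor offers an HVM callback vector, and only install the Xen PV ops otherwise - can be sketched as a small feature-gated table selection. The struct and names below are simplified stand-ins, not the kernel's pv_irq_ops:

```c
/* Simplified stand-in for a paravirt ops table. */
struct irq_ops_sketch {
	const char *name;
};

static const struct irq_ops_sketch native_ops = { "native" };
static const struct irq_ops_sketch xen_pv_ops = { "xen-pv" };

/* Mirrors the decision in xen_init_irq_ops(): only classic PV replaces
 * the IRQ ops; PVH/PVHVM guests (which have the HVM callback vector
 * feature) keep the native table and receive events via the callback. */
static const struct irq_ops_sketch *
pick_irq_ops(int has_hvm_callback_vector)
{
	if (!has_hvm_callback_vector)
		return &xen_pv_ops;
	return &native_ops;
}
```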



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:40 -0500
Message-Id: <1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 13/18] xen/grant-table: Refactor gnttab_init

We have an odd scenario where for the PV path we take a shortcut,
but for the HVM path we first ioremap xen_hvm_resume_frames and
then assign it to gnttab_shared.addr. This is needed because
gnttab_map uses gnttab_shared.addr.

Instead of having:
	if (pv)
		return gnttab_map
	if (hvm)
		...

	gnttab_map

let's move the HVM part before gnttab_map and remove the
first call to gnttab_map.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/grant-table.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 99399cb..cc1b4fa 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
 	if (max_nr_gframes < nr_grant_frames)
 		return -ENOSYS;
 
-	if (xen_pv_domain())
-		return gnttab_map(0, nr_grant_frames - 1);
-
-	if (gnttab_shared.addr == NULL) {
+	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
+	{
 		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
-						PAGE_SIZE * max_nr_gframes);
+					       PAGE_SIZE * max_nr_gframes);
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
 					xen_hvm_resume_frames);
 			return -ENOMEM;
 		}
 	}
-
-	gnttab_map(0, nr_grant_frames - 1);
-
-	return 0;
+	return gnttab_map(0, nr_grant_frames - 1);
 }
 
 int gnttab_resume(void)
-- 
1.8.3.1
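The shape of the refactor - gate the HVM-only remap on a feature check and give both paths a single exit through gnttab_map() - can be sketched in userspace C. The names below and the stand-in for xen_remap() are illustrative only:

```c
#include <errno.h>
#include <stddef.h>

static int map_frames(void)
{
	return 0;	/* stands in for gnttab_map(0, nr_grant_frames - 1) */
}

/* After the refactor there is no early "if (pv) return": the
 * auto-translated (HVM) case does its remap first, and every guest
 * type falls through to the single map_frames() tail call. */
static int gnttab_setup_sketch(int auto_translated, void **shared_addr,
			       void *remapped)
{
	if (auto_translated && *shared_addr == NULL) {
		*shared_addr = remapped;	/* stands in for xen_remap() */
		if (*shared_addr == NULL)
			return -ENOMEM;
	}
	return map_frames();
}
```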



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:35 -0500
Message-Id: <1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 08/18] xen/pvh: Load GDT/GS in early PV
	bootup code for BSP.

From: Mukesh Rathor <mukesh.rathor@oracle.com>

During early bootup we start life using the Xen-provided
GDT, which means that we are running with the %cs segment set
to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).

But for PVH we want to use the HVM-type mechanism for
segment operations. As such we need to switch to the HVM
GDT and also reload ourselves with __KERNEL_CS:eip so that
we run with the proper GDT and segment.

For HVM this is usually done in secondary_startup_64
(head_64.S), but since we are not taking that bootup
path (we start in PV - xen_start_kernel) we need to do
it in the early PV bootup paths.

For good measure we also zero out %fs, %ds, and %es
(not strictly needed, as Xen has already cleared them
for us). The %gs is loaded by switch_to_new_gdt.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 4cdc483..7690484 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1414,8 +1414,43 @@ static void __init xen_boot_params_init_edd(void)
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
  */
-static void __init xen_setup_stackprotector(void)
+static void __init xen_setup_gdt(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+#ifdef CONFIG_X86_64
+		unsigned long dummy;
+
+		switch_to_new_gdt(0); /* GDT and GS set */
+
+		/* We are switching from the Xen-provided GDT to our HVM-mode
+		 * GDT. The new GDT has __KERNEL_CS with CS.L = 1
+		 * and we are jumping to reload it.
+		 */
+		asm volatile ("pushq %0\n"
+			      "leaq 1f(%%rip),%0\n"
+			      "pushq %0\n"
+			      "lretq\n"
+			      "1:\n"
+			      : "=&r" (dummy) : "0" (__KERNEL_CS));
+
+		/*
+		 * While not needed, we also set the %es, %ds, and %fs
+		 * to zero. We don't care about %ss as it is NULL.
+		 * Strictly speaking this is not needed as Xen zeros those
+		 * out (and also MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE)
+		 *
+		 * Linux zeros them in cpu_init() and in secondary_startup_64
+		 * (for BSP).
+		 */
+		loadsegment(es, 0);
+		loadsegment(ds, 0);
+		loadsegment(fs, 0);
+#else
+		/* PVH: TODO Implement. */
+		BUG();
+#endif
+		return;
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1500,7 +1535,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_stackprotector();
+	xen_setup_gdt();
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
-- 
1.8.3.1
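As background for the __KERNEL_CS reload above: an x86 segment selector packs a descriptor index, a table indicator, and a requested privilege level. A quick decoder makes the difference between the selectors visible. The field layout follows the x86 architecture; the 0xe033 value is the FLAT_RING3_CS64 selector quoted in the commit message, and __KERNEL_CS on x86-64 Linux is 0x10:

```c
#include <stdint.h>

struct selector_fields {
	unsigned index;	/* bits 3-15: descriptor index into GDT/LDT */
	unsigned ti;	/* bit 2: table indicator, 0 = GDT, 1 = LDT */
	unsigned rpl;	/* bits 0-1: requested privilege level */
};

static struct selector_fields decode_selector(uint16_t sel)
{
	struct selector_fields f;

	f.rpl = sel & 0x3;
	f.ti = (sel >> 2) & 0x1;
	f.index = sel >> 3;
	return f;
}
```

Decoding 0xe033 yields RPL 3 - under classic 64-bit PV the guest kernel runs in ring 3 from the CPU's point of view - while __KERNEL_CS (0x10) decodes to GDT index 2, RPL 0.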



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:33 -0500
Message-Id: <1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

.. which are surprisingly small compared to the amount needed
for the PV code.

PVH uses mostly native MMU ops: we leave the generic
(native_*) ones in place for the majority and only override
the baremetal ones we need.

We also optimize one - the TLB flush. The native operation
would needlessly IPI offline VCPUs, causing extra wakeups.
Using the Xen one avoids that and lets the hypervisor
determine which VCPU needs the TLB flush.

At startup we are running with pre-allocated page tables,
courtesy of the toolstack. But we still need to graft them
into the Linux initial page tables. However, there is no need
to unpin/pin them or change them to R/O or R/W.

Note that xen_pagetable_init, thanks to commit
7836fec9d0994cc9c9150c5a33f0eb0eb08a335a ("xen/mmu/p2m:
Refactor the xen_pagetable_init code."), does not need any
changes - we just need to make sure that
xen_post_allocator_init does not alter the pvops from the
default native ones.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 90 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 55 insertions(+), 35 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d792a69..d9ac620 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1760,6 +1760,10 @@ static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* For PVH no need to set R/O or R/W to pin them or unpin them. */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags))
 		BUG();
 }
@@ -1870,6 +1874,7 @@ static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native.
  */
 void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
@@ -1891,17 +1896,18 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	/* L4[272] -> level3_ident_pgt
-	 * L4[511] -> level3_kernel_pgt */
-	convert_pfn_mfn(init_level4_pgt);
-
-	/* L3_i[0] -> level2_ident_pgt */
-	convert_pfn_mfn(level3_ident_pgt);
-	/* L3_k[510] -> level2_kernel_pgt
-	 * L3_i[511] -> level2_fixmap_pgt */
-	convert_pfn_mfn(level3_kernel_pgt);
-
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		/* L4[272] -> level3_ident_pgt
+		 * L4[511] -> level3_kernel_pgt */
+		convert_pfn_mfn(init_level4_pgt);
+
+		/* L3_i[0] -> level2_ident_pgt */
+		convert_pfn_mfn(level3_ident_pgt);
+		/* L3_k[510] -> level2_kernel_pgt
+		 * L3_i[511] -> level2_fixmap_pgt */
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1925,31 +1931,33 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	copy_page(level2_fixmap_pgt, l2);
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Make pagetable pieces RO */
+		set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+		set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Make pagetable pieces RO */
-	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
-	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
-
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
-
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
-
-	/*
-	 * At this stage there can be no user pgd, and no page
-	 * structure to attach it to, so make sure we just set kernel
-	 * pgd.
-	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(init_level4_pgt));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+		/*
+		 * At this stage there can be no user pgd, and no page
+		 * structure to attach it to, so make sure we just set kernel
+		 * pgd.
+		 */
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(init_level4_pgt));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	} else
+		native_write_cr3(__pa(init_level4_pgt));
 
 	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
 	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
@@ -2110,6 +2118,9 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 static void __init xen_post_allocator_init(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	pv_mmu_ops.set_pte = xen_set_pte;
 	pv_mmu_ops.set_pmd = xen_set_pmd;
 	pv_mmu_ops.set_pud = xen_set_pud;
@@ -2214,6 +2225,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;
+
+	/* Optimization - we can use the HVM one but it has no idea which
+	 * VCPUs are descheduled - which means that it will needlessly IPI
+	 * them. Xen knows so let it do the job.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+		return;
+	}
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:39 -0500
Message-Id: <1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 12/18] xen/grants: Remove
	gnttab_max_grant_frames dependency on gnttab_init.

The function gnttab_max_grant_frames() returns the maximum number
of frames (pages) of grants we can have. Unfortunately, it depended
on gnttab_init() having been run first to initialize the boot-time
maximum (boot_max_nr_grant_frames).

This meant that callers of gnttab_max_grant_frames() would always
get a zero value if they called it before gnttab_init() - such as
'platform_pci_init' (drivers/xen/platform-pci.c).

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/grant-table.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..99399cb 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -62,7 +62,6 @@
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
-static unsigned int boot_max_nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
@@ -827,6 +826,11 @@ static unsigned int __max_nr_grant_frames(void)
 unsigned int gnttab_max_grant_frames(void)
 {
 	unsigned int xen_max = __max_nr_grant_frames();
+	static unsigned int boot_max_nr_grant_frames;
+
+	/* First time, initialize it properly. */
+	if (!boot_max_nr_grant_frames)
+		boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	if (xen_max > boot_max_nr_grant_frames)
 		return boot_max_nr_grant_frames;
@@ -1227,13 +1231,12 @@ int gnttab_init(void)
 
 	gnttab_request_version();
 	nr_grant_frames = 1;
-	boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	/* Determine the maximum number of frames required for the
 	 * grant reference free list on the current hypervisor.
 	 */
 	BUG_ON(grefs_per_grant_frame == 0);
-	max_nr_glist_frames = (boot_max_nr_grant_frames *
+	max_nr_glist_frames = (gnttab_max_grant_frames() *
 			       grefs_per_grant_frame / RPP);
 
 	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:30 -0500
Message-Id: <1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in PV
	code (v2).

From: Mukesh Rathor <mukesh.rathor@oracle.com>

In the bootup code for PVH we can trap cpuid via vmexit, so we don't
need to use the emulated prefix call. We also check for the vector
callback early on, as it is a required feature. PVH also runs at the
default kernel IOPL.

Finally, pure PV settings are moved to a separate function that is
only called for pure PV, i.e., PV with the PV MMU. They are also
guarded by #ifdef CONFIG_XEN_PVMMU.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 63 +++++++++++++++++++++++++++++++++---------------
 arch/x86/xen/setup.c     | 18 +++++++++-----
 2 files changed, 56 insertions(+), 25 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..755e5bb 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -46,6 +46,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/features.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -262,8 +263,9 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	pr_info("Booting paravirtualized kernel %son %s\n",
+		xen_feature(XENFEAT_auto_translated_physmap) ?
+			"with PVH extensions " : "", pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
 		break;
 	}
 
-	asm(XEN_EMULATE_PREFIX "cpuid"
-		: "=a" (*ax),
-		  "=b" (*bx),
-		  "=c" (*cx),
-		  "=d" (*dx)
-		: "0" (*ax), "2" (*cx));
+	if (xen_pvh_domain())
+		native_cpuid(ax, bx, cx, dx);
+	else
+		asm(XEN_EMULATE_PREFIX "cpuid"
+			: "=a" (*ax),
+			"=b" (*bx),
+			"=c" (*cx),
+			"=d" (*dx)
+			: "0" (*ax), "2" (*cx));
 
 	*bx &= maskebx;
 	*cx &= maskecx;
@@ -1420,6 +1425,19 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_early_guest_init(void)
+{
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+#ifdef CONFIG_X86_32
+	BUG(); /* PVH: Implement proper support. */
+#endif
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1431,13 +1449,18 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
+	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (xen_pvh_domain())
+		pv_cpu_ops.cpuid = xen_cpuid;
+	else
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1469,8 +1492,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1548,14 +1569,18 @@ asmlinkage void __init xen_start_kernel(void)
 	/* set the limit of our address space */
 	xen_reserve_top();
 
-	/* We used to do this in xen_arch_setup, but that is too late on AMD
-	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
-	 * which pokes 0xcf8 port.
-	 */
-	set_iopl.iopl = 1;
-	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
-	if (rc != 0)
-		xen_raw_printk("physdev_op failed %d\n", rc);
+	/* PVH: runs at default kernel iopl of 0 */
+	if (!xen_pvh_domain()) {
+		/*
+		 * We used to do this in xen_arch_setup, but that is too late
+		 * on AMD where early_cpu_init (run before ->arch_setup()) calls
+		 * early_amd_init which pokes 0xcf8 port.
+		 */
+		set_iopl.iopl = 1;
+		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
+		if (rc != 0)
+			xen_raw_printk("physdev_op failed %d\n", rc);
+	}
 
 #ifdef CONFIG_X86_32
 	/* set up basic CPUID stuff */
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..2137c51 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -563,16 +563,13 @@ void xen_enable_nmi(void)
 		BUG();
 #endif
 }
-void __init xen_arch_setup(void)
+void __init xen_pvmmu_arch_setup(void)
 {
-	xen_panic_handler_init();
-
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
 
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		HYPERVISOR_vm_assist(VMASST_CMD_enable,
-				     VMASST_TYPE_pae_extended_cr3);
+	HYPERVISOR_vm_assist(VMASST_CMD_enable,
+			     VMASST_TYPE_pae_extended_cr3);
 
 	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
 	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
@@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
 	xen_enable_sysenter();
 	xen_enable_syscall();
 	xen_enable_nmi();
+}
+
+/* This function is not called for HVM domains */
+void __init xen_arch_setup(void)
+{
+	xen_panic_handler_init();
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		xen_pvmmu_arch_setup();
+
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
 		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:35 -0500
Message-Id: <1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 08/18] xen/pvh: Load GDT/GS in early PV
	bootup code for BSP.

From: Mukesh Rathor <mukesh.rathor@oracle.com>

During early bootup we start life using the Xen-provided
GDT, which means that we are running with the %cs segment set
to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).

But for PVH we want to use the HVM-type mechanism for
segment operations. As such we need to switch to the HVM
GDT and also reload ourselves with __KERNEL_CS:eip
to run with the proper GDT and segment.

For HVM this is usually done in 'secondary_startup_64'
(in head_64.S), but since we are not taking that bootup
path (we start in PV - xen_start_kernel) we need to do
it in the early PV bootup paths.

For good measure we also zero out %fs, %ds, and %es
(not strictly needed, as Xen has already cleared them
for us). The %gs is loaded by 'switch_to_new_gdt'.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 4cdc483..7690484 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1414,8 +1414,43 @@ static void __init xen_boot_params_init_edd(void)
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
  */
-static void __init xen_setup_stackprotector(void)
+static void __init xen_setup_gdt(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+#ifdef CONFIG_X86_64
+		unsigned long dummy;
+
+		switch_to_new_gdt(0); /* GDT and GS set */
+
+		/* We are switching from the Xen-provided GDT to our HVM mode
+		 * GDT. The new GDT has __KERNEL_CS with CS.L = 1
+		 * and we are jumping to reload it.
+		 */
+		asm volatile ("pushq %0\n"
+			      "leaq 1f(%%rip),%0\n"
+			      "pushq %0\n"
+			      "lretq\n"
+			      "1:\n"
+			      : "=&r" (dummy) : "0" (__KERNEL_CS));
+
+		/*
+		 * While not needed, we also set the %es, %ds, and %fs
+		 * to zero. We don't care about %ss as it is NULL.
+		 * Strictly speaking this is not needed as Xen zeros those
+		 * out (and also MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE)
+		 *
+		 * Linux zeros them in cpu_init() and in secondary_startup_64
+		 * (for BSP).
+		 */
+		loadsegment(es, 0);
+		loadsegment(ds, 0);
+		loadsegment(fs, 0);
+#else
+		/* PVH: TODO Implement. */
+		BUG();
+#endif
+		return;
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1500,7 +1535,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_stackprotector();
+	xen_setup_gdt();
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:27 -0500
Message-Id: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [PATCH v12] Linux Xen PVH support.

The patches, also available at
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12

implement the necessary functionality to boot a PV guest in PVH mode.

This blog has a great description of what PVH is:
http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

These patches are based on v3.13-rc6.

Changelog of v12: [http://mid.gmane.org/1387313503-31362-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per Stefano's review.
 - Split some patches up for easier review.
 - Bugs fixed.

Changelog of v11 as compared to v10: [https://lkml.org/lkml/2013/12/12/625]:
 - Split patches in a more logical sense, squash some
 - Dropped Acked-by's from folks
 - Fleshed out descriptions

Regression-wise, there are no bugs with Xen 4.2 and Xen 4.3.

That is, if you compile/boot it with CONFIG_XEN_PVH=y or
"# CONFIG_XEN_PVH is not set" - in both cases, as either dom0 or domU,
there are no bugs. I also launched it as 32/64-bit dom0 with 32/64-bit
domU as PV or PVHVM, along with SLES11, SLES12, F15->F19 (32/64), OL5,
OL6, RHEL5 (32/64), FreeBSD HVM, and NetBSD PV without issues.

With Xen 4.1, there is a regression (see
http://mid.gmane.org/20131220175735.GA619@phenom.dumpdata.com)
and it is unclear at this time what the right way is to fix the PVH ABI
to work around it. When that has been cleared up, some of the patches:

 [PATCH v12 02/18] xen/pvh/x86: Define what an PVH guest is (v2).
 [PATCH v12 03/18] xen/pvh: Early bootup changes in PV code (v2).
 [PATCH v12 07/18] xen/pvh: Setup up shared_info.
 [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for event channels (v2)
 [PATCH v12 18/18] xen/pvh: Support ParaVirtualized Hardware

will have to be reworked.

The only things needed to make this work as PVH are:

 0) Get the latest version of Xen and compile/install it.
    See http://wiki.xen.org/wiki/Compiling_Xen_From_Source for details

 1) Clone the above-mentioned tree

    See http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs#Configuring_the_Kernel
    for details. The steps are:

	cd $HOME
	git clone  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git linux
	cd linux
	git checkout origin/stable/pvh.v11

 2) Compile with CONFIG_XEN_PVH=y

    a) From scratch:

	make defconfig
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Paravirtualization layer for spinlocks
		 Xen guest support	(which will now show you:)
		 Support for running as a PVH guest (NEW)

	in case you like to edit .config, it is:

	CONFIG_HYPERVISOR_GUEST=y
	CONFIG_PARAVIRT=y
	CONFIG_PARAVIRT_GUEST=y
	CONFIG_PARAVIRT_SPINLOCKS=y
	CONFIG_XEN=y
	CONFIG_XEN_PVH=y

	You will also have to enable the block, network drivers, console, etc
	which are in different submenus.

    b) Based on your current distro:

	cp /boot/config-`uname -r` $HOME/linux/.config
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Support for running as a PVH guest (NEW)

 3) Launch it with 'pvh=1' in your guest config (for example):

	extra="console=hvc0 debug  kgdboc=hvc0 nokgdbroundup  initcall_debug debug"
	kernel="/mnt/lab/latest/vmlinuz"
	ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
	memory=1024
	vcpus=4
	name="pvh"
	vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
	vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
	disk=['phy:/dev/sdb1,xvda,w']
	pvh=1
	on_reboot="preserve"
	on_crash="preserve"
	on_poweroff="preserve"

    using 'xl'. Xend 'xm' does not have PVH support.

It will bootup as a normal PV guest, but 'xen-detect' will report it as an HVM
guest.

The functionality that is turned off is:
 - VCPU hotplug. You can try it but it should not allow you to do it.
   So 'echo 0 > /sys/bus/cpu/devices/cpu4/online' will error out.


Items that have not been tested extensively or at all:
  - Migration (xl save && xl restore for example).

  - 32-bit guests (won't even present you with a CONFIG_XEN_PVH option)

  - PCI passthrough

  - Running it in dom0 mode (as the patches for that are not yet in Xen upstream).
    If you want to try that, you can merge/pull Mukesh's branch:

	cd $HOME/xen
	git pull git://oss.oracle.com/git/mrathor/xen.git dom0pvh-v6

    .. and use this bootup parameter ("dom0pvh=1"). Remember to recompile
    and install the new version of Xen. This patchset
    does not contain the patches necessary to set up guests - but I can
    create one easily enough.

  - Memory ballooning

  - Multiple VBDs, NICs, etc.

If you encounter errors, please email with the following (please note that
the guest config has 'on_reboot="preserve", on_crash="preserve"' - which you
should have in your guest config to preserve the memory of the guest):

 a) xl dmesg
 b) xl list
 c) xenctx -s $HOME/linux/System.map -f -a -C <domain id>
    [xenctx is sometimes found in /usr/lib/xen/bin/xenctx]
 d) the console output from the guest
 e) Anything else you can think of.

Stash away your vmlinux file (it is too big to send via email) - as I might
need it later on.


That is it!

Thank you!

 arch/arm/xen/enlighten.c           |   9 +-
 arch/x86/include/asm/xen/page.h    |   7 +-
 arch/x86/xen/Kconfig               |   8 ++
 arch/x86/xen/enlighten.c           | 115 ++++++++++++++++++++------
 arch/x86/xen/grant-table.c         |  64 +++++++++++++++
 arch/x86/xen/irq.c                 |   5 +-
 arch/x86/xen/mmu.c                 | 164 ++++++++++++++++++++++---------------
 arch/x86/xen/p2m.c                 |  15 +++-
 arch/x86/xen/setup.c               |  41 ++++++++--
 arch/x86/xen/smp.c                 |  49 +++++++----
 arch/x86/xen/xen-head.S            |   8 +-
 arch/x86/xen/xen-ops.h             |   1 +
 drivers/xen/cpu_hotplug.c          |   4 +-
 drivers/xen/events.c               |  16 ++--
 drivers/xen/gntdev.c               |   2 +-
 drivers/xen/grant-table.c          |  76 ++++++++++++-----
 drivers/xen/platform-pci.c         |  10 ++-
 drivers/xen/xenbus/xenbus_client.c |   3 +-
 include/xen/grant_table.h          |   9 +-
 include/xen/xen.h                  |  14 ++++
 20 files changed, 462 insertions(+), 158 deletions(-)

Konrad Rzeszutek Wilk (6):
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct.
      xen/pvh: Piggyback on PVHVM for grant driver (v2)

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v2).
      xen/pvh: Early bootup changes in PV code (v2).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh/arm/arm64: Disable PV code that does not work with PVH (v2)
      xen/pvh: Support ParaVirtualized Hardware extensions (v2).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYJ-00041B-MR; Wed, 01 Jan 2014 04:37:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zI-Or
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:61375] by server-9.bemta-3.messagelabs.com id
	21/A5-13104-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388551021!3059736!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27048 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvLR028790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZudU028168
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zu2Y028160; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D53F41BFB13; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:32 -0500
Message-Id: <1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
	xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The revectoring and copying of the P2M only happen when
!auto-xlat and on 64-bit builds. That is not obvious from
the code, so let's have separate 32-bit and 64-bit functions.

We also invert the check for auto-xlat to make the code
flow simpler.
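The inversion is the classic guard-clause refactor: bail out early on the
uninteresting case instead of nesting the work under a negated condition.
A standalone sketch of the pattern (toy names only - the real check,
xen_feature(XENFEAT_auto_translated_physmap), queries the hypervisor's
feature map):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the auto-translate feature flag; illustrative only. */
static bool auto_translated = false;
static int copies_done = 0;

/* Before: the work sits nested under a negated condition. */
static void p2m_copy_nested(void)
{
	if (!auto_translated) {
		copies_done++;	/* ... revector and copy the P2M ... */
	}
}

/* After: inverted guard clause - return early, keep the body flat. */
static void p2m_copy_guard(void)
{
	if (auto_translated)
		return;
	copies_done++;		/* ... revector and copy the P2M ... */
}
```

Both versions do the same work; the guard-clause form just reads top to
bottom without the extra indentation level.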

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 73 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 40 insertions(+), 33 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..d792a69 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
 	 * instead of somewhere later and be confusing. */
 	xen_mc_flush();
 }
-#endif
-static void __init xen_pagetable_init(void)
+static void __init xen_pagetable_p2m_copy(void)
 {
-#ifdef CONFIG_X86_64
 	unsigned long size;
 	unsigned long addr;
-#endif
-	paging_init();
-	xen_setup_shared_info();
-#ifdef CONFIG_X86_64
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		unsigned long new_mfn_list;
+	unsigned long new_mfn_list;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+	/* On 32-bit, we get zero so this never gets executed. */
+	new_mfn_list = xen_revector_p2m_tree();
+	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+		/* using __ka address and sticking INVALID_P2M_ENTRY! */
+		memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+		/* We should be in __ka space. */
+		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+		addr = xen_start_info->mfn_list;
+		/* We roundup to the PMD, which means that if anybody at this stage is
+		 * using the __ka address of xen_start_info or xen_start_info->shared_info
+		 * they are in going to crash. Fortunatly we have already revectored
+		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+		size = roundup(size, PMD_SIZE);
+		xen_cleanhighmap(addr, addr + size);
 
 		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+		memblock_free(__pa(xen_start_info->mfn_list), size);
+		/* And revector! Bye bye old array */
+		xen_start_info->mfn_list = new_mfn_list;
+	} else
+		return;
 
-		/* On 32-bit, we get zero so this never gets executed. */
-		new_mfn_list = xen_revector_p2m_tree();
-		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-			/* using __ka address and sticking INVALID_P2M_ENTRY! */
-			memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-			/* We should be in __ka space. */
-			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-			addr = xen_start_info->mfn_list;
-			/* We roundup to the PMD, which means that if anybody at this stage is
-			 * using the __ka address of xen_start_info or xen_start_info->shared_info
-			 * they are in going to crash. Fortunatly we have already revectored
-			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-			size = roundup(size, PMD_SIZE);
-			xen_cleanhighmap(addr, addr + size);
-
-			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-			memblock_free(__pa(xen_start_info->mfn_list), size);
-			/* And revector! Bye bye old array */
-			xen_start_info->mfn_list = new_mfn_list;
-		} else
-			goto skip;
-	}
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
@@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
 	 * anything at this stage. */
 	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
 #endif
-skip:
+}
+#else
+static void __init xen_pagetable_p2m_copy(void)
+{
+	/* Nada! */
+}
 #endif
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+	xen_pagetable_p2m_copy();
 	xen_post_allocator_init();
 }
 static void xen_write_cr2(unsigned long cr2)
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYJ-000410-Ac; Wed, 01 Jan 2014 04:37:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zJ-OZ
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:27097] by server-13.bemta-3.messagelabs.com id
	90/EA-28603-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1388551021!6677103!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9609 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zu87003671
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtYp021092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtJl022119; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BAD2A1BFB0D; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:29 -0500
Message-Id: <1388550945-25499-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 02/18] xen/pvh/x86: Define what an PVH guest
	is (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

A PVH guest is a PV guest with auto page translation enabled
and with the vector callback. It is a cross between PVHVM and PV.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
the callback vector."

We don't yet have a Kconfig entry set up, as we do not
have all the parts ready for it - so we piggyback
on the PVHVM config option. This scaffolding will
be removed later.

Note that on ARM the concept of PVH is non-existent. As Ian
put it: "an ARM guest is neither PV nor HVM nor PVHVM.
It's a bit like PVH but is different also (it's further towards
the H end of the spectrum than even PVH)." As such, these
options (PVHVM, PVH) are never enabled nor seen on ARM
compilations.
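The definition boils down to a three-way conjunction. A toy model of the
predicate (plain booleans standing in for xen_pv_domain(), the
auto-translate feature check and xen_have_vector_callback; illustrative
only, not the kernel macro itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Each flag models one of the three conditions the real
 * xen_pvh_domain() macro tests. */
struct domain_flags {
	bool pv;		/* xen_pv_domain() */
	bool auto_translated;	/* XENFEAT_auto_translated_physmap */
	bool vector_callback;	/* xen_have_vector_callback */
};

/* A domain is PVH only when all three hold at once. */
static bool is_pvh(const struct domain_flags *d)
{
	return d->pv && d->auto_translated && d->vector_callback;
}
```

Dropping any one flag falls back to classic PV (no auto translate) or to
an HVM-style guest (not PV), which is why the macro needs all three.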

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 include/xen/xen.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/include/xen/xen.h b/include/xen/xen.h
index a74d436..c4ab644 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,4 +29,20 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
+#ifdef CONFIG_XEN_PVHVM
+/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
+
+/* This functionality exists only for x86. The XEN_PVHVM support exists
+ * only in x86 world - hence on ARM it will be always disabled.
+ * N.B. ARM guests are neither PV nor HVM nor PVHVM.
+ * It's a bit like PVH but is different also (it's further towards the H
+ * end of the spectrum than even PVH).
+ */
+#include <xen/features.h>
+#define xen_pvh_domain() (xen_pv_domain() && \
+			  xen_feature(XENFEAT_auto_translated_physmap) && \
+			  xen_have_vector_callback)
+#else
+#define xen_pvh_domain()	(0)
+#endif
 #endif	/* _XEN_XEN_H */
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYP-00044u-CF; Wed, 01 Jan 2014 04:37:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYJ-00040r-9o
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:07 +0000
Received: from [85.158.143.35:10477] by server-1.bemta-4.messagelabs.com id
	6B/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388551022!8891667!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28738 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwrR028799
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zvsr021121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zuwx022159; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2D0C81C0176; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:41 -0500
Message-Id: <1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant frame
	array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The 'xen_hvm_resume_frames' used to be an 'unsigned long'
containing the virtual address of the grants. That was OK
for most architectures (PVHVM, ARM) where the grants are contiguous
in memory. That however is not the case for PVH - in which case
we will have to do a lookup for each virtual address to find the PFN.

Instead of doing that, let's make it a structure which contains
the array of PFNs, the virtual address and the count of said PFNs.

Also provide generic functions - gnttab_setup_auto_xlat_frames and
gnttab_free_auto_xlat_frames - to populate said structure with
appropriate values for PVHVM and ARM.

To round it off, change the name from 'xen_hvm_resume_frames' to
a more descriptive one - 'xen_auto_xlat_grant_frames'.

For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
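For the contiguous (PVHVM/ARM) case the setup helper just derives each
PFN from the base address. A simplified, self-contained model of that
logic (names and the user-space calloc are stand-ins, not the kernel
API):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

typedef unsigned long xen_pfn_t;

/* Mirrors the struct grant_frames this patch adds. */
struct grant_frames {
	xen_pfn_t *pfn;
	int count;
	unsigned long vaddr;
};

/* Model of gnttab_setup_auto_xlat_frames() for contiguous frames:
 * frame i lives one page above frame i-1, so its PFN is computed
 * directly from the base address. */
static int setup_frames(struct grant_frames *gf, unsigned long addr, int nr)
{
	int i;

	gf->pfn = calloc(nr, sizeof(gf->pfn[0]));
	if (!gf->pfn)
		return -1;
	for (i = 0; i < nr; i++)
		gf->pfn[i] = PFN_DOWN(addr + (unsigned long)i * PAGE_SIZE);
	gf->vaddr = addr;
	gf->count = nr;
	return 0;
}
```

PVH cannot use this path because its frames are not contiguous; there
the pfn array has to be filled by a per-frame lookup instead.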

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/xen/enlighten.c   |  9 +++++++--
 drivers/xen/grant-table.c  | 45 ++++++++++++++++++++++++++++++++++++++++-----
 drivers/xen/platform-pci.c | 10 +++++++---
 include/xen/grant_table.h  |  9 ++++++++-
 4 files changed, 62 insertions(+), 11 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..2162172 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
 	const char *version = NULL;
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
+	unsigned long grant_frames;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
 	}
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
-	xen_hvm_resume_frames = res.start;
+	grant_frames = res.start;
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
-			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
+			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
 	if (xen_vcpu_info == NULL)
 		return -ENOMEM;
 
+	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
+		free_percpu(xen_vcpu_info);
+		return -ENOMEM;
+	}
 	gnttab_init();
 	if (!xen_initial_domain())
 		xenbus_probe(NULL);
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index cc1b4fa..b117fd6 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -65,8 +65,8 @@ static unsigned int nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
-unsigned long xen_hvm_resume_frames;
-EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
+struct grant_frames xen_auto_xlat_grant_frames;
+EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);
 
 static union {
 	struct grant_entry_v1 *v1;
@@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
+int gnttab_setup_auto_xlat_frames(unsigned long addr)
+{
+	xen_pfn_t *pfn;
+	unsigned int max_nr_gframes = __max_nr_grant_frames();
+	int i;
+
+	if (xen_auto_xlat_grant_frames.count)
+		return -EINVAL;
+
+	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
+	if (!pfn)
+		return -ENOMEM;
+	for (i = 0; i < max_nr_gframes; i++)
+		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
+
+	xen_auto_xlat_grant_frames.vaddr = addr;
+	xen_auto_xlat_grant_frames.pfn = pfn;
+	xen_auto_xlat_grant_frames.count = max_nr_gframes;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
+
+void gnttab_free_auto_xlat_frames(void)
+{
+	if (!xen_auto_xlat_grant_frames.count)
+		return;
+	kfree(xen_auto_xlat_grant_frames.pfn);
+	xen_auto_xlat_grant_frames.pfn = NULL;
+	xen_auto_xlat_grant_frames.count = 0;
+	xen_auto_xlat_grant_frames.vaddr = 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
+
 /* Handling of paged out grant targets (GNTST_eagain) */
 #define MAX_DELAY 256
 static inline void
@@ -1068,6 +1102,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
+		BUG_ON(xen_auto_xlat_grant_frames.count < nr_gframes);
 		/*
 		 * Loop backwards, so that the first hypercall has the largest
 		 * index, ensuring that the table will grow only once.
@@ -1076,7 +1111,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 			xatp.domid = DOMID_SELF;
 			xatp.idx = i;
 			xatp.space = XENMAPSPACE_grant_table;
-			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
+			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
 			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
 			if (rc != 0) {
 				pr_warn("grant table add_to_physmap failed, err=%d\n",
@@ -1175,11 +1210,11 @@ static int gnttab_setup(void)
 
 	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
 	{
-		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
+		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
 					       PAGE_SIZE * max_nr_gframes);
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
-					xen_hvm_resume_frames);
+					xen_auto_xlat_grant_frames.vaddr);
 			return -ENOMEM;
 		}
 	}
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 2f3528e..f1947ac 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
 	long ioaddr;
 	long mmio_addr, mmio_len;
 	unsigned int max_nr_gframes;
+	unsigned long grant_frames;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
 	}
 
 	max_nr_gframes = gnttab_max_grant_frames();
-	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	if (gnttab_setup_auto_xlat_frames(grant_frames))
+		goto out;
 	ret = gnttab_init();
 	if (ret)
-		goto out;
+		goto grant_out;
 	xenbus_probe(NULL);
 	return 0;
-
+grant_out:
+	gnttab_free_auto_xlat_frames();
 out:
 	pci_release_region(pdev, 0);
 mem_out:
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..a997406 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
 			   grant_status_t **__shared);
 void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
 
-extern unsigned long xen_hvm_resume_frames;
+struct grant_frames {
+	xen_pfn_t *pfn;
+	int count;
+	unsigned long vaddr;
+};
+extern struct grant_frames xen_auto_xlat_grant_frames;
 unsigned int gnttab_max_grant_frames(void);
+int gnttab_setup_auto_xlat_frames(unsigned long addr);
+void gnttab_free_auto_xlat_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYK-00041S-Dt; Wed, 01 Jan 2014 04:37:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zN-OJ
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.143.35:39020] by server-2.bemta-4.messagelabs.com id
	46/EE-11386-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388551021!9042957!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11533 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvO5003673
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvnA022174
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zufm022155; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DE3401BFB49; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:33 -0500
Message-Id: <1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

.. which are surprisingly small compared to the amount of PV code.

PVH uses mostly native MMU ops: we leave the generic (native_*) ones for
the majority and just override the baremetal ones with those we need.

We also optimize one - the TLB flush. The native operation would
needlessly IPI offline VCPUs causing extra wakeups. Using the
Xen one avoids that and lets the hypervisor determine which
VCPU needs the TLB flush.

At startup, we are running with pre-allocated page-tables,
courtesy of the toolstack. But we still need to graft them
into the Linux initial pagetables. However, there is no need to
unpin/pin them or change them to R/O or R/W.

Note that, due to 7836fec9d0994cc9c9150c5a33f0eb0eb08a335a
("xen/mmu/p2m: Refactor the xen_pagetable_init code."), xen_pagetable_init
does not need any changes - we just need to make sure that xen_post_allocator_init
does not alter the pvops from the default native one.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 90 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 55 insertions(+), 35 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index d792a69..d9ac620 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1760,6 +1760,10 @@ static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* For PVH no need to set R/O or R/W to pin them or unpin them. */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags))
 		BUG();
 }
@@ -1870,6 +1874,7 @@ static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native.
  */
 void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
@@ -1891,17 +1896,18 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	/* L4[272] -> level3_ident_pgt
-	 * L4[511] -> level3_kernel_pgt */
-	convert_pfn_mfn(init_level4_pgt);
-
-	/* L3_i[0] -> level2_ident_pgt */
-	convert_pfn_mfn(level3_ident_pgt);
-	/* L3_k[510] -> level2_kernel_pgt
-	 * L3_i[511] -> level2_fixmap_pgt */
-	convert_pfn_mfn(level3_kernel_pgt);
-
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		/* L4[272] -> level3_ident_pgt
+		 * L4[511] -> level3_kernel_pgt */
+		convert_pfn_mfn(init_level4_pgt);
+
+		/* L3_i[0] -> level2_ident_pgt */
+		convert_pfn_mfn(level3_ident_pgt);
+		/* L3_k[510] -> level2_kernel_pgt
+		 * L3_i[511] -> level2_fixmap_pgt */
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1925,31 +1931,33 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	copy_page(level2_fixmap_pgt, l2);
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Make pagetable pieces RO */
+		set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+		set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Make pagetable pieces RO */
-	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
-	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
-
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
-
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
-
-	/*
-	 * At this stage there can be no user pgd, and no page
-	 * structure to attach it to, so make sure we just set kernel
-	 * pgd.
-	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(init_level4_pgt));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+		/*
+		 * At this stage there can be no user pgd, and no page
+		 * structure to attach it to, so make sure we just set kernel
+		 * pgd.
+		 */
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(init_level4_pgt));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	} else
+		native_write_cr3(__pa(init_level4_pgt));
 
 	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
 	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
@@ -2110,6 +2118,9 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 static void __init xen_post_allocator_init(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	pv_mmu_ops.set_pte = xen_set_pte;
 	pv_mmu_ops.set_pmd = xen_set_pmd;
 	pv_mmu_ops.set_pud = xen_set_pud;
@@ -2214,6 +2225,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;
+
+	/* Optimization - we can use the HVM one but it has no idea which
+	 * VCPUs are descheduled - which means that it will needlessly IPI
+	 * them. Xen knows so let it do the job.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+		return;
+	}
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYL-00041u-UQ; Wed, 01 Jan 2014 04:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zQ-4c
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.143.35:10474] by server-3.bemta-4.messagelabs.com id
	78/96-32360-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388551022!9042958!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11546 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zxl8028828
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:36:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zww3028233
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZvWI022187; Wed, 1 Jan 2014 04:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 355F51C01B2; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:42 -0500
Message-Id: <1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for grant
	driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In PVH the shared grant frame is a PFN, not an MFN,
hence it is mapped via the same code path as HVM.

The allocation of the grant frames is done differently - we do
not use the early platform-pci driver with an ioremap area -
instead we use balloon memory and stitch all of the
non-contiguous pages into a virtually contiguous area.

That means that when we call the hypervisor to replace the GMFN
with a XENMAPSPACE_grant_table type, we need to look up the
old PFN on every iteration instead of assuming a flat
contiguous PFN allocation.

Lastly, we only use v1 for grants. This is because PVHVM
is not able to use v2, as there are no XENMEM_add_to_physmap
calls for the error status page (see commit
69e8f430e243d657c2053f097efebc2e2cd559f0
"xen/granttable: Disable grant v2 for HVM domains.")

Until that is implemented, this workaround has to
be in place.

Also, per suggestions by Stefano, we utilize the PVHVM paths
as they share common functionality.

v2 of this patch moves most of the PVH code out into the
arch/x86/xen/grant-table driver and touches the generic
driver only minimally.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/grant-table.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/gntdev.c       |  2 +-
 drivers/xen/grant-table.c  | 13 ++++++----
 3 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 3a5f55d..040e064 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -125,3 +125,67 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	apply_to_page_range(&init_mm, (unsigned long)shared,
 			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
 }
+#ifdef CONFIG_XEN_PVHVM
+#include <xen/balloon.h>
+#include <linux/slab.h>
+static int __init xlated_setup_gnttab_pages(void)
+{
+	struct page **pages;
+	xen_pfn_t *pfns;
+	int rc, i;
+	unsigned long nr_grant_frames = gnttab_max_grant_frames();
+
+	BUG_ON(nr_grant_frames == 0);
+	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
+	if (!pfns) {
+		kfree(pages);
+		return -ENOMEM;
+	}
+	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
+	if (rc) {
+		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		kfree(pages);
+		kfree(pfns);
+		return rc;
+	}
+	for (i = 0; i < nr_grant_frames; i++)
+		pfns[i] = page_to_pfn(pages[i]);
+
+	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
+				    (void *)&xen_auto_xlat_grant_frames.vaddr);
+
+	kfree(pages);
+	if (rc) {
+		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		free_xenballooned_pages(nr_grant_frames, pages);
+		kfree(pfns);
+		return rc;
+	}
+
+	xen_auto_xlat_grant_frames.pfn = pfns;
+	xen_auto_xlat_grant_frames.count = nr_grant_frames;
+
+	return 0;
+}
+
+static int __init xen_pvh_gnttab_setup(void)
+{
+	if (!xen_domain())
+		return -ENODEV;
+
+	if (!xen_pv_domain())
+		return -ENODEV;
+
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		return -ENODEV;
+
+	return xlated_setup_gnttab_pages();
+}
+core_initcall(xen_pvh_gnttab_setup); /* Call it _before_ __gnttab_init */
+#endif
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..073b4a1 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -846,7 +846,7 @@ static int __init gntdev_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	use_ptemod = xen_pv_domain();
+	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
 
 	err = misc_register(&gntdev_miscdev);
 	if (err != 0) {
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index b117fd6..2fa3a4c 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1098,7 +1098,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 	unsigned int nr_gframes = end_idx + 1;
 	int rc;
 
-	if (xen_hvm_domain()) {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
@@ -1174,7 +1174,7 @@ static void gnttab_request_version(void)
 	int rc;
 	struct gnttab_set_version gsv;
 
-	if (xen_hvm_domain())
+	if (xen_feature(XENFEAT_auto_translated_physmap))
 		gsv.version = 1;
 	else
 		gsv.version = 2;
@@ -1210,8 +1210,11 @@ static int gnttab_setup(void)
 
 	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
 	{
-		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
-					       PAGE_SIZE * max_nr_gframes);
+		if (xen_hvm_domain()) {
+			gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
+						       PAGE_SIZE * max_nr_gframes);
+		} else
+			gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
 					xen_auto_xlat_grant_frames.vaddr);
@@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
 	return gnttab_init();
 }
 
-core_initcall(__gnttab_init);
+core_initcall_sync(__gnttab_init);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYI-00040t-UB; Wed, 01 Jan 2014 04:37:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zH-OP
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:61372] by server-3.bemta-3.messagelabs.com id
	61/06-10658-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388551021!3059565!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19420 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvM9003672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtoP022121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtXD028140; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CC3A71BFB12; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:31 -0500
Message-Id: <1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 04/18] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The P2M is not available for PVH. Fortunately for us, the
P2M code already has most of the support for auto-xlat guests, thanks
to commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
"grant-table: call set_phys_to_machine after mapping grant refs",
which "introduces set_phys_to_machine calls for auto_translated
guests (even on x86) in gnttab_map_refs and gnttab_unmap_refs ...
translated by swiotlb-xen ...", so we don't need to muck much.

With the above-mentioned commit "you'll get set_phys_to_machine calls
from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
anything with them" (Stefano Stabellini), which is OK - we want
them to be NOPs.

This is because we assume that an "IOMMU is always present on the
platform and Xen is going to make the appropriate IOMMU pagetable
changes in the hypercall implementation of GNTTABOP_map_grant_ref
and GNTTABOP_unmap_grant_ref, then everything should be transparent
from PVH privileged point of view and DMA transfers involving
foreign pages keep working with no issues.

Otherwise we would need a P2M (and an M2P) for PVH privileged to
track these foreign pages ... (see arch/arm/xen/p2m.c)"
(Stefano Stabellini).

We still have to inhibit the building of the P2M tree.
That had been done in the past by not calling
xen_build_dynamic_phys_to_machine (which sets up the P2M tree
and gives us virtual addresses at which to access it). But we were
missing a check in xen_build_mfn_list_list - which would continue
setting up the P2M tree and blow up when trying to get the virtual
address of p2m_missing (which would have been set up by
xen_build_dynamic_phys_to_machine).

Hence a check is needed to not call xen_build_mfn_list_list when
running in auto-xlat mode.
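
Why the tree is unnecessary in auto-xlat mode can be sketched in
userspace (illustrative only; lookup_mfn and demo_p2m are made-up
names, and the auto_xlat flag mimics
XENFEAT_auto_translated_physmap):

```c
#include <assert.h>

/* Toy P2M array for the classic PV case (values are made up). */
static const unsigned long demo_p2m[] = { 5, 6, 7 };

/* In auto-translated mode the hypervisor handles the translation,
 * so pfn == mfn from the guest's view and no P2M tree is needed. */
static unsigned long lookup_mfn(int auto_xlat,
				const unsigned long *p2m,
				unsigned long pfn)
{
	if (auto_xlat)
		return pfn;	/* identity - no tree to consult */
	return p2m[pfn];	/* PV: consult the real P2M */
}
```

Since the auto-xlat branch never touches the table, allocating and
populating one would only waste memory - hence the early return.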

Instead of replicating the auto-xlat check in enlighten.c,
do it in the p2m.c code. The reason is that xen_build_mfn_list_list
is also called in xen_arch_post_suspend without any check for
auto-xlat. So for PVH, or PV with auto-xlat, we would needlessly
allocate space for a P2M tree.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |  3 +--
 arch/x86/xen/p2m.c       | 12 ++++++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 755e5bb..ab4dd70 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1493,8 +1493,7 @@ asmlinkage void __init xen_start_kernel(void)
 	x86_configure_nx();
 
 	/* Get mfn list */
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		xen_build_dynamic_phys_to_machine();
+	xen_build_dynamic_phys_to_machine();
 
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..fb7ee0a 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
 {
 	unsigned long pfn;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
@@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
 /* Set up p2m_top to point to the domain-builder provided p2m pages */
 void __init xen_build_dynamic_phys_to_machine(void)
 {
-	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
-	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
+	unsigned long *mfn_list;
+	unsigned long max_pfn;
 	unsigned long pfn;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	mfn_list = (unsigned long *)xen_start_info->mfn_list;
+	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
 	xen_max_p2m_pfn = max_pfn;
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYI-00040t-UB; Wed, 01 Jan 2014 04:37:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYG-0003zH-OP
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:04 +0000
Received: from [85.158.137.68:61372] by server-3.bemta-3.messagelabs.com id
	61/06-10658-F6B93C25; Wed, 01 Jan 2014 04:37:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388551021!3059565!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19420 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvM9003672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtoP022121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZtXD028140; Wed, 1 Jan 2014 04:35:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CC3A71BFB12; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:31 -0500
Message-Id: <1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 04/18] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The P2M tree is not available for PVH guests. Fortunately for us, the
P2M code already has most of the support for auto-xlat guests, thanks to
commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
"grant-table: call set_phys_to_machine after mapping grant refs",
which "introduces set_phys_to_machine calls for auto_translated guests
(even on x86) in gnttab_map_refs and gnttab_unmap_refs.
... translated by swiotlb-xen ...", so we don't need to change much.

With the above-mentioned commit "you'll get set_phys_to_machine calls
from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
anything with them" (Stefano Stabellini), which is OK - we want
them to be NOPs.

This is because we assume that an "IOMMU is always present on the
platform and Xen is going to make the appropriate IOMMU pagetable
changes in the hypercall implementation of GNTTABOP_map_grant_ref
and GNTTABOP_unmap_grant_ref; then everything should be transparent
from the PVH privileged point of view, and DMA transfers involving
foreign pages keep working with no issues.

Otherwise we would need a P2M (and an M2P) for the PVH privileged
domain to track these foreign pages ... (see arch/arm/xen/p2m.c)."
(Stefano Stabellini).

We still have to inhibit the building of the P2M tree. In the past
that was done by not calling xen_build_dynamic_phys_to_machine
(which sets up the P2M tree and gives us virtual addresses to access
it). But we were missing a check in xen_build_mfn_list_list - which
continued to set up the P2M tree and would blow up when trying to
get the virtual address of p2m_missing (which would have been set up
by xen_build_dynamic_phys_to_machine).

Hence a check is needed to not call xen_build_mfn_list_list when
running in auto-xlat mode.

Instead of replicating the auto-xlat check in enlighten.c, do it in
the p2m.c code. The reason is that xen_build_mfn_list_list is also
called in xen_arch_post_suspend without any check for auto-xlat, so
for PVH (or PV with auto-xlat) we would needlessly allocate space
for a P2M tree.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c |  3 +--
 arch/x86/xen/p2m.c       | 12 ++++++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 755e5bb..ab4dd70 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1493,8 +1493,7 @@ asmlinkage void __init xen_start_kernel(void)
 	x86_configure_nx();
 
 	/* Get mfn list */
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		xen_build_dynamic_phys_to_machine();
+	xen_build_dynamic_phys_to_machine();
 
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..fb7ee0a 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
 {
 	unsigned long pfn;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
@@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
 /* Set up p2m_top to point to the domain-builder provided p2m pages */
 void __init xen_build_dynamic_phys_to_machine(void)
 {
-	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
-	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
+	unsigned long *mfn_list;
+	unsigned long max_pfn;
 	unsigned long pfn;
 
+	 if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	mfn_list = (unsigned long *)xen_start_info->mfn_list;
+	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
 	xen_max_p2m_pfn = max_pfn;
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYL-00041n-IV; Wed, 01 Jan 2014 04:37:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zS-6u
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [193.109.254.147:26189] by server-16.bemta-14.messagelabs.com
	id 1C/F9-20600-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1388551021!6783742!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1025 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvEP028789
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZueW021102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:56 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZueV021099; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0BCA91C016D; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:37 -0500
Message-Id: <1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with PVH
	(v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

In xen_add_extra_mem() we can skip updating P2M as it's managed
by Xen. PVH maps the entire IO space, but only RAM pages need
to be repopulated.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/setup.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2137c51..dd5f905 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -27,6 +27,7 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/physdev.h>
 #include <xen/features.h>
+#include "mmu.h"
 #include "xen-ops.h"
 #include "vdso.h"
 
@@ -81,6 +82,9 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 
 	memblock_reserve(start, size);
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	xen_max_p2m_pfn = PFN_DOWN(start + size);
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
@@ -103,6 +107,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 		.domid        = DOMID_SELF
 	};
 	unsigned long len = 0;
+	int xlated_phys = xen_feature(XENFEAT_auto_translated_physmap);
 	unsigned long pfn;
 	int ret;
 
@@ -116,7 +121,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 				continue;
 			frame = mfn;
 		} else {
-			if (mfn != INVALID_P2M_ENTRY)
+			if (!xlated_phys && mfn != INVALID_P2M_ENTRY)
 				continue;
 			frame = pfn;
 		}
@@ -154,6 +159,13 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 static unsigned long __init xen_release_chunk(unsigned long start,
 					      unsigned long end)
 {
+	/*
+	 * Xen already ballooned out the E820 non RAM regions for us
+	 * and set them up properly in EPT.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return end - start;
+
 	return xen_do_chunk(start, end, true);
 }
 
@@ -222,7 +234,13 @@ static void __init xen_set_identity_and_release_chunk(
 	 * (except for the ISA region which must be 1:1 mapped) to
 	 * release the refcounts (in Xen) on the original frames.
 	 */
-	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
+
+	/*
+	 * PVH E820 matches the hypervisor's P2M which means we need to
+	 * account for the proper values of *release and *identity.
+	 */
+	for (pfn = start_pfn; !xen_feature(XENFEAT_auto_translated_physmap) &&
+	     pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
 		pte_t pte = __pte_ma(0);
 
 		if (pfn < PFN_UP(ISA_END_ADDRESS))

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYM-000428-MT; Wed, 01 Jan 2014 04:37:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zz-K9
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.143.35:10468] by server-1.bemta-4.messagelabs.com id
	0B/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388551021!9007609!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6428 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zv1x028798
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zvhw028191
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zu8Z022158; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 037DE1BFB4C; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:36 -0500
Message-Id: <1388550945-25499-10-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 09/18] xen/pvh: Secondary VCPU bringup
	(non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

The VCPU bringup protocol follows the PV one, with certain twists.
From xen/include/public/arch-x86/xen.h:

Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
for HVM and PVH guests, not all information in this structure is updated:

 - For HVM guests, the structures read include: fpu_ctxt (if
 VGCT_I387_VALID is set), flags, user_regs, debugreg[*]

 - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
 set cr3. All other fields not used should be set to 0.

This is what we do. We piggyback on 'xen_setup_gdt' - but modify it
a bit - we need to call 'load_percpu_segment' so that
'switch_to_new_gdt' can load the per-cpu data structures. It has no
effect on VCPU0.

We also piggyback on the %rdi register to pass in the CPU number, so
that when we boot up a new CPU, cpu_bringup_and_idle will receive the
CPU number as its first parameter (via %rdi on 64-bit).

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 11 ++++++++---
 arch/x86/xen/smp.c       | 49 ++++++++++++++++++++++++++++++++----------------
 arch/x86/xen/xen-ops.h   |  1 +
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 7690484..8493653 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1413,14 +1413,19 @@ static void __init xen_boot_params_init_edd(void)
  * Set up the GDT and segment registers for -fstack-protector.  Until
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
+ *
+ * Note, that it is refok - b/c the only caller of this after init
+ * is PVH which is not going to use xen_load_gdt_boot or other
+ * __init functions.
  */
-static void __init xen_setup_gdt(void)
+void __init_refok xen_setup_gdt(int cpu)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 #ifdef CONFIG_X86_64
 		unsigned long dummy;
 
-		switch_to_new_gdt(0); /* GDT and GS set */
+		load_percpu_segment(cpu); /* We need to access per-cpu area */
+		switch_to_new_gdt(cpu); /* GDT and GS set */
 
 		/* We are switching of the Xen provided GDT to our HVM mode
 		 * GDT. The new GDT has  __KERNEL_CS with CS.L = 1
@@ -1535,7 +1540,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_gdt();
+	xen_setup_gdt(0);
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index c36b325..5e46190 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -73,9 +73,11 @@ static void cpu_bringup(void)
 	touch_softlockup_watchdog();
 	preempt_disable();
 
-	xen_enable_sysenter();
-	xen_enable_syscall();
-
+	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
+	if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
+		xen_enable_sysenter();
+		xen_enable_syscall();
+	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
 	cpu_data(cpu).x86_max_cores = 1;
@@ -97,8 +99,14 @@ static void cpu_bringup(void)
 	wmb();			/* make sure everything is out */
 }
 
-static void cpu_bringup_and_idle(void)
+/* Note: cpu parameter is only relevant for PVH */
+static void cpu_bringup_and_idle(int cpu)
 {
+#ifdef CONFIG_X86_64
+	if (xen_feature(XENFEAT_auto_translated_physmap) &&
+	    xen_feature(XENFEAT_supervisor_mode_kernel))
+		xen_setup_gdt(cpu);
+#endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
 }
@@ -274,9 +282,10 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	native_smp_prepare_boot_cpu();
 
 	if (xen_pv_domain()) {
-		/* We've switched to the "real" per-cpu gdt, so make sure the
-		   old memory can be recycled */
-		make_lowmem_page_readwrite(xen_initial_gdt);
+		if (!xen_feature(XENFEAT_writable_page_tables))
+			/* We've switched to the "real" per-cpu gdt, so make
+			 * sure the old memory can be recycled. */
+			make_lowmem_page_readwrite(xen_initial_gdt);
 
 #ifdef CONFIG_X86_32
 		/*
@@ -360,22 +369,21 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	gdt = get_cpu_gdt_table(cpu);
 
-	ctxt->flags = VGCF_IN_KERNEL;
-	ctxt->user_regs.ss = __KERNEL_DS;
 #ifdef CONFIG_X86_32
+	/* Note: PVH is not yet supported on x86_32. */
 	ctxt->user_regs.fs = __KERNEL_PERCPU;
 	ctxt->user_regs.gs = __KERNEL_STACK_CANARY;
-#else
-	ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 	ctxt->user_regs.eip = (unsigned long)cpu_bringup_and_idle;
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
-	{
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		ctxt->flags = VGCF_IN_KERNEL;
 		ctxt->user_regs.eflags = 0x1000; /* IOPL_RING1 */
 		ctxt->user_regs.ds = __USER_DS;
 		ctxt->user_regs.es = __USER_DS;
+		ctxt->user_regs.ss = __KERNEL_DS;
 
 		xen_copy_trap_info(ctxt->trap_ctxt);
 
@@ -396,18 +404,27 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 #ifdef CONFIG_X86_32
 		ctxt->event_callback_cs     = __KERNEL_CS;
 		ctxt->failsafe_callback_cs  = __KERNEL_CS;
+#else
+		ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 		ctxt->event_callback_eip    =
 					(unsigned long)xen_hypervisor_callback;
 		ctxt->failsafe_callback_eip =
 					(unsigned long)xen_failsafe_callback;
+		ctxt->user_regs.cs = __KERNEL_CS;
+		per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
+#ifdef CONFIG_X86_32
 	}
-	ctxt->user_regs.cs = __KERNEL_CS;
+#else
+	} else
+		/* N.B. The user_regs.eip (cpu_bringup_and_idle) is called with
+		 * %rdi having the cpu number - which means are passing in
+		 * as the first parameter the cpu. Subtle!
+		 */
+		ctxt->user_regs.rdi = cpu;
+#endif
 	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
-
-	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
-
 	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
 		BUG();
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 95f8c61..9059c24 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,4 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
+void xen_setup_gdt(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYM-000428-MT; Wed, 01 Jan 2014 04:37:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-0003zz-K9
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.143.35:10468] by server-1.bemta-4.messagelabs.com id
	0B/21-02132-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388551021!9007609!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6428 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014Zv1x028798
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zvhw028191
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zu8Z022158; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 037DE1BFB4C; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:36 -0500
Message-Id: <1388550945-25499-10-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 09/18] xen/pvh: Secondary VCPU bringup
	(non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

The VCPU bringup protocol follows the PV protocol with certain twists.
From xen/include/public/arch-x86/xen.h:

Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
for HVM and PVH guests, not all information in this structure is updated:

 - For HVM guests, the structures read include: fpu_ctxt (if
 VGCF_I387_VALID is set), flags, user_regs, debugreg[*]

 - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
 set cr3. All other fields not used should be set to 0.

This is what we do. We piggyback on 'xen_setup_gdt', but modify it
slightly: we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
can load the per-cpu data structures. This has no effect on VCPU0.

We also use the %rdi register to pass in the CPU number, so that when
we boot a new CPU, cpu_bringup_and_idle receives the CPU number as its
first parameter (via %rdi on 64-bit).
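The register-passing trick above can be sketched as a small toy model. The struct and names below are illustrative stand-ins for Xen's vcpu_guest_context, not the kernel code itself; the entry-point address is a made-up placeholder:

```c
#include <assert.h>
#include <string.h>

/* Toy stand-in for Xen's vcpu_guest_context; field names mirror the
 * real ones but this is only a sketch of the bringup idea. */
struct toy_user_regs {
	unsigned long eip;	/* entry point: cpu_bringup_and_idle */
	unsigned long rdi;	/* first argument in the x86-64 SysV ABI */
};

struct toy_vcpu_context {
	struct toy_user_regs user_regs;
};

/* Hypothetical entry-point address, used only for this sketch. */
#define TOY_BRINGUP_ENTRY 0x1000UL

/* For a PVH (auto-translated) guest, the only register state we need
 * to hand a new VCPU is the entry point plus the CPU number in %rdi,
 * so that cpu_bringup_and_idle(cpu) receives it as its first argument. */
static void toy_init_pvh_context(struct toy_vcpu_context *ctxt,
				 unsigned int cpu)
{
	memset(ctxt, 0, sizeof(*ctxt));
	ctxt->user_regs.eip = TOY_BRINGUP_ENTRY;
	ctxt->user_regs.rdi = cpu;	/* becomes the first parameter */
}
```

This leans on the x86-64 SysV calling convention: whatever the hypervisor loads into %rdi before jumping to the entry point is seen by a C function as its first integer argument.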

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 11 ++++++++---
 arch/x86/xen/smp.c       | 49 ++++++++++++++++++++++++++++++++----------------
 arch/x86/xen/xen-ops.h   |  1 +
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 7690484..8493653 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1413,14 +1413,19 @@ static void __init xen_boot_params_init_edd(void)
  * Set up the GDT and segment registers for -fstack-protector.  Until
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
+ *
+ * Note that this is __init_refok because the only caller of this
+ * after init is PVH, which is not going to use xen_load_gdt_boot or
+ * other __init functions.
  */
-static void __init xen_setup_gdt(void)
+void __init_refok xen_setup_gdt(int cpu)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 #ifdef CONFIG_X86_64
 		unsigned long dummy;
 
-		switch_to_new_gdt(0); /* GDT and GS set */
+		load_percpu_segment(cpu); /* We need to access per-cpu area */
+		switch_to_new_gdt(cpu); /* GDT and GS set */
 
 		/* We are switching of the Xen provided GDT to our HVM mode
 		 * GDT. The new GDT has  __KERNEL_CS with CS.L = 1
@@ -1535,7 +1540,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_gdt();
+	xen_setup_gdt(0);
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index c36b325..5e46190 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -73,9 +73,11 @@ static void cpu_bringup(void)
 	touch_softlockup_watchdog();
 	preempt_disable();
 
-	xen_enable_sysenter();
-	xen_enable_syscall();
-
+	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
+	if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
+		xen_enable_sysenter();
+		xen_enable_syscall();
+	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
 	cpu_data(cpu).x86_max_cores = 1;
@@ -97,8 +99,14 @@ static void cpu_bringup(void)
 	wmb();			/* make sure everything is out */
 }
 
-static void cpu_bringup_and_idle(void)
+/* Note: cpu parameter is only relevant for PVH */
+static void cpu_bringup_and_idle(int cpu)
 {
+#ifdef CONFIG_X86_64
+	if (xen_feature(XENFEAT_auto_translated_physmap) &&
+	    xen_feature(XENFEAT_supervisor_mode_kernel))
+		xen_setup_gdt(cpu);
+#endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
 }
@@ -274,9 +282,10 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	native_smp_prepare_boot_cpu();
 
 	if (xen_pv_domain()) {
-		/* We've switched to the "real" per-cpu gdt, so make sure the
-		   old memory can be recycled */
-		make_lowmem_page_readwrite(xen_initial_gdt);
+		if (!xen_feature(XENFEAT_writable_page_tables))
+			/* We've switched to the "real" per-cpu gdt, so make
+			 * sure the old memory can be recycled. */
+			make_lowmem_page_readwrite(xen_initial_gdt);
 
 #ifdef CONFIG_X86_32
 		/*
@@ -360,22 +369,21 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	gdt = get_cpu_gdt_table(cpu);
 
-	ctxt->flags = VGCF_IN_KERNEL;
-	ctxt->user_regs.ss = __KERNEL_DS;
 #ifdef CONFIG_X86_32
+	/* Note: PVH is not yet supported on x86_32. */
 	ctxt->user_regs.fs = __KERNEL_PERCPU;
 	ctxt->user_regs.gs = __KERNEL_STACK_CANARY;
-#else
-	ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 	ctxt->user_regs.eip = (unsigned long)cpu_bringup_and_idle;
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
-	{
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		ctxt->flags = VGCF_IN_KERNEL;
 		ctxt->user_regs.eflags = 0x1000; /* IOPL_RING1 */
 		ctxt->user_regs.ds = __USER_DS;
 		ctxt->user_regs.es = __USER_DS;
+		ctxt->user_regs.ss = __KERNEL_DS;
 
 		xen_copy_trap_info(ctxt->trap_ctxt);
 
@@ -396,18 +404,27 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 #ifdef CONFIG_X86_32
 		ctxt->event_callback_cs     = __KERNEL_CS;
 		ctxt->failsafe_callback_cs  = __KERNEL_CS;
+#else
+		ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 		ctxt->event_callback_eip    =
 					(unsigned long)xen_hypervisor_callback;
 		ctxt->failsafe_callback_eip =
 					(unsigned long)xen_failsafe_callback;
+		ctxt->user_regs.cs = __KERNEL_CS;
+		per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
+#ifdef CONFIG_X86_32
 	}
-	ctxt->user_regs.cs = __KERNEL_CS;
+#else
+	} else
+		/* N.B. user_regs.eip (cpu_bringup_and_idle) is called with
+		 * %rdi holding the cpu number - which means we are passing
+		 * the cpu in as the first parameter. Subtle!
+		 */
+		ctxt->user_regs.rdi = cpu;
+#endif
 	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
-
-	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
-
 	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
 		BUG();
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 95f8c61..9059c24 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,4 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
+void xen_setup_gdt(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:37:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDYN-00042i-F9; Wed, 01 Jan 2014 04:37:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDYH-000400-No
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:37:05 +0000
Received: from [85.158.137.68:56938] by server-3.bemta-3.messagelabs.com id
	F1/06-10658-07B93C25; Wed, 01 Jan 2014 04:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388551022!6709948!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9303 invoked from network); 1 Jan 2014 04:37:03 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZwvY003687
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zvmj022189
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014Zv0r021126; Wed, 1 Jan 2014 04:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4662A1C02CD; Tue, 31 Dec 2013 23:35:54 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:44 -0500
Message-Id: <1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 17/18] xen/pvh/arm/arm64: Disable PV code
	that does not work with PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

Disable the PV VCPU hotplug code for PVH guests, as we do not yet
have a mechanism for that.

This also impacts the ARM/ARM64 code (which does not have
hotplug support yet).
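The guard the patch adds can be modelled as a tiny sketch. The function and flags below are hypothetical stand-ins for setup_vcpu_hotplug_event(), xen_pv_domain() and xen_feature(), used only to show the bail-out condition:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy model of the hotplug-event registration guard: the xenstore-based
 * VCPU hotplug path only works for classic PV x86 guests, so PVH and
 * ARM/ARM64 (both auto-translated) must bail out with -ENODEV. */
static int toy_setup_vcpu_hotplug_event(bool pv_domain, bool auto_translated)
{
	if (!pv_domain || auto_translated)
		return -ENODEV;	/* PVH/ARM/ARM64: no mechanism yet */
	return 0;		/* PV x86: register the xenstore watcher */
}
```

The point of keying off the auto-translated-physmap feature rather than an architecture check is that one condition covers PVH on x86 and the ARM/ARM64 ports alike.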

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/cpu_hotplug.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
index cc6513a..5f80802 100644
--- a/drivers/xen/cpu_hotplug.c
+++ b/drivers/xen/cpu_hotplug.c
@@ -4,6 +4,7 @@
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
+#include <xen/features.h>
 
 #include <asm/xen/hypervisor.h>
 #include <asm/cpu.h>
@@ -102,7 +103,8 @@ static int __init setup_vcpu_hotplug_event(void)
 	static struct notifier_block xsn_cpu = {
 		.notifier_call = setup_cpu_watcher };
 
-	if (!xen_pv_domain())
+	/* PVH/ARM/ARM64 TBD/FIXME: future work */
+	if (!xen_pv_domain() || xen_feature(XENFEAT_auto_translated_physmap))
 		return -ENODEV;
 
 	register_xenstore_notifier(&xsn_cpu);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 04:44:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 04:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyDes-0006VL-JS; Wed, 01 Jan 2014 04:43:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyDer-0006VD-Ak
	for xen-devel@lists.xenproject.org; Wed, 01 Jan 2014 04:43:53 +0000
Received: from [85.158.137.68:14975] by server-17.bemta-3.messagelabs.com id
	63/CD-15965-80D93C25; Wed, 01 Jan 2014 04:43:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388551021!6729890!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16015 invoked from network); 1 Jan 2014 04:37:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 04:37:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s014ZvXg003679
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 1 Jan 2014 04:35:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZuRk028187
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 1 Jan 2014 04:35:57 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s014ZuYR022154; Wed, 1 Jan 2014 04:35:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 31 Dec 2013 20:35:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E6B011BFB4A; Tue, 31 Dec 2013 23:35:53 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
Date: Tue, 31 Dec 2013 23:35:34 -0500
Message-Id: <1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

For PVHVM the shared_info structure is provided in the same way
as for normal PV guests (see include/xen/interface/xen.h).

That is, during bootup we get 'xen_start_info' via the %esi register
in startup_xen. Later we extract 'shared_info' from that structure
(in xen_setup_shared_info) and start using it.

'xen_setup_shared_info' is already set up to work with auto-xlat
guests, but two functions it calls are not:
xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
This patch modifies them to work in auto-xlat mode.
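The auto-xlat adjustment to xen_setup_mfn_list_list is an early-return guard, which can be sketched as a toy model. The names below stand in for XENFEAT_auto_translated_physmap and the shared_info update; they are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the auto-translated-guest guard added to
 * xen_setup_mfn_list_list(): PVH guests have no p2m list to publish,
 * so the function must return before touching shared_info. */
static bool toy_auto_translated;	/* XENFEAT_auto_translated_physmap */
static int toy_mfn_list_published;	/* counts shared_info updates */

static void toy_setup_mfn_list_list(void)
{
	if (toy_auto_translated)
		return;			/* PVH: nothing to publish */
	toy_mfn_list_published++;	/* PV: write pfn_to_mfn list */
}
```

The same pattern (check the feature flag, return early) is what keeps the rest of the shared_info setup path usable by both guest types.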

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 5 +++--
 arch/x86/xen/p2m.c       | 3 +++
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ab4dd70..4cdc483 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
 		xen_vcpu_setup(cpu);
 
 	/* xen_vcpu_setup managed to place the vcpu_info within the
-	   percpu area for all cpus, so make use of it */
-	if (have_vcpu_info_placement) {
+	 * percpu area for all cpus, so make use of it. Note that for
+	 * PVH we want to use native IRQ mechanism. */
+	if (have_vcpu_info_placement && !xen_pvh_domain()) {
 		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
 		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
 		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index fb7ee0a..696c694 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -339,6 +339,9 @@ void __ref xen_build_mfn_list_list(void)
 
 void xen_setup_mfn_list_list(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
 
 	HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 09:06:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 09:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyHkC-0000ZS-3v; Wed, 01 Jan 2014 09:05:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1VyHkB-0000ZN-82
	for xen-devel@lists.xensource.com; Wed, 01 Jan 2014 09:05:39 +0000
Received: from [85.158.139.211:27745] by server-12.bemta-5.messagelabs.com id
	5D/1E-30017-26AD3C25; Wed, 01 Jan 2014 09:05:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388567136!7366506!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23742 invoked from network); 1 Jan 2014 09:05:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Jan 2014 09:05:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,584,1384300800"; d="scan'208";a="86829233"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Jan 2014 09:05:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 1 Jan 2014 04:05:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1VyHk5-0004lq-VE;
	Wed, 01 Jan 2014 09:05:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1VyHk5-0005j9-Ut;
	Wed, 01 Jan 2014 09:05:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-23724-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 1 Jan 2014 09:05:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 23724: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 23724 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/23724/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install       fail pass in 23035
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 23035 pass in 23724

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 23035
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22691

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 23035 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 23035 never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 16:11:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyONz-0000sI-Op; Wed, 01 Jan 2014 16:11:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1VxxAT-0003L8-Bd; Tue, 31 Dec 2013 11:07:25 +0000
Received: from [85.158.143.35:17659] by server-1.bemta-4.messagelabs.com id
	34/03-02132-C65A2C25; Tue, 31 Dec 2013 11:07:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388488042!8872368!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13966 invoked from network); 31 Dec 2013 11:07:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2013 11:07:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,580,1384300800"; d="scan'208";a="86606833"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Dec 2013 11:07:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 31 Dec 2013 06:07:21 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1VxxAP-0004Me-R3;
	Tue, 31 Dec 2013 11:07:21 +0000
Date: Tue, 31 Dec 2013 11:07:21 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Liang Yan <liayan@mtu.edu>
Message-ID: <20131231110721.GC19696@zion.uk.xensource.com>
References: <CAKVivRM+MjPwHAXf3DGFmUOeRSR1LO=BrpNM=LkoLS2JsEi8QA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAKVivRM+MjPwHAXf3DGFmUOeRSR1LO=BrpNM=LkoLS2JsEi8QA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
X-Mailman-Approved-At: Wed, 01 Jan 2014 16:11:10 +0000
Cc: xen-users@lists.xen.org, wei.liu2@citrix.com
Subject: Re: [Xen-devel] xen 4.2.1 could not access to a ubuntu qcow image
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move Xen-devel to BCC, add Xen-users to CC.

This post belongs on Xen-users.

On Tue, Dec 24, 2013 at 06:40:59AM -0500, Liang Yan wrote:
> Dom0 ubuntu 12.04
> domu 12.04
> xen 4.2.1
> 
> HVM works well for the base img,
> but I could not access a qcow img based on the base img.
> 
> 
> log:
> 
> Unknown PV product 3 loaded in guest
> PV driver build 1
> region type 0 at [f3000000,f3020000).
> squash iomem [f3000000, f3020000).
> region type 1 at [c100,c140).
> 
> 
> cfg:
> 
> disk = [ 'tap:qcow:/var/station1/vmdisk-station1.qcow,ioemu:sda1,w' ]
> 
> the disk type of the domU is xvda; I could only get this type when creating
> an Ubuntu HVM.
> 

Maybe have a look at the various xl config file examples in /etc/xen? The
syntax you used looks outdated to me.
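For comparison, here is a sketch of the same disk line in the newer xl
key=value syntax; the exact keys depend on your Xen version, so treat this as
an assumption to check against docs/misc/xl-disk-configuration.txt in your
tree:

```
# Sketch of the newer xl disk specification (verify keys against your
# Xen version's docs/misc/xl-disk-configuration.txt):
disk = [ 'format=qcow, vdev=xvda, access=rw, target=/var/station1/vmdisk-station1.qcow' ]
```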

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 16:11:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyOOZ-0000ty-7l; Wed, 01 Jan 2014 16:11:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vigorousfish@gmail.com>) id 1VxpYz-0003xs-CM
	for xen-devel@lists.xen.org; Tue, 31 Dec 2013 03:00:13 +0000
Received: from [85.158.139.211:49199] by server-15.bemta-5.messagelabs.com id
	ED/9F-08490-C3332C25; Tue, 31 Dec 2013 03:00:12 +0000
X-Env-Sender: vigorousfish@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1388458811!7032288!1
X-Originating-IP: [209.85.215.41]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30824 invoked from network); 31 Dec 2013 03:00:12 -0000
Received: from mail-la0-f41.google.com (HELO mail-la0-f41.google.com)
	(209.85.215.41)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2013 03:00:12 -0000
Received: by mail-la0-f41.google.com with SMTP id c6so806107lan.28
	for <xen-devel@lists.xen.org>; Mon, 30 Dec 2013 19:00:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Rp3FIICtJOo8LrjBrv5beO0HWxZA4o68WN0TRMznokY=;
	b=d8bssIAzWosN4HKqYie8DkepMKQZus6nuaveR9rKA7gygaht+eJbpdtPyj3SC5n+Se
	HpFBxI4ZEXMpaB6NCqOwBXJRKqkBKMkCZKy6lgOImCs9Z8c58zPVGGItmZncM6mTzG6K
	5H/ggQW+BhIvqIk7xvXwIZUCrxnqJ1K3kW+dP968a6x/Mz8pDp3Bm8AOu1Of687cVTvW
	M9OzQAJ6r3a31A3mRvDD53TT9YbAkk2+RYRpQk+vQIf8+4TDxXPzPcr4KUF79N8/y0Cr
	Fty9gbpDmdro04Sxm6COwksXs9AMx5+1mlSjKDBoMLE4n1/qmiGZSCQsoezFQfDjaw7h
	SCNw==
MIME-Version: 1.0
X-Received: by 10.152.2.5 with SMTP id 5mr28733907laq.21.1388458811308; Mon,
	30 Dec 2013 19:00:11 -0800 (PST)
Received: by 10.227.240.211 with HTTP; Mon, 30 Dec 2013 19:00:11 -0800 (PST)
Date: Tue, 31 Dec 2013 11:00:11 +0800
Message-ID: <CA+RiofjFjkieeD0ZbPudwpK5omQY8=qAc2UCbnP9MbMZcNEMRQ@mail.gmail.com>
From: =?GB2312?B?0+C+og==?= <vigorousfish@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 01 Jan 2014 16:11:45 +0000
Subject: [Xen-devel] git server maybe faild
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2721699646952402877=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2721699646952402877==
Content-Type: multipart/alternative; boundary=089e013c66403bbdd504eecbc231

--089e013c66403bbdd504eecbc231
Content-Type: text/plain; charset=ISO-8859-1

Hi all, I can't build Xen these days. After I run "make tools" it always
stops at the "Cloning into 'seabios-dir-remote.tmp'..." step and then
prints "fatal: read error: Connection reset by peer".

The Xen git server seems to have gone down several days ago. Can someone
bring it back up?
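A quick way to check whether the server itself is reachable, independent of
the Xen build system (the clone URL and the SEABIOS_UPSTREAM_URL variable
name below are assumptions based on the Config.mk of trees from this era;
verify them in your checkout):

```
# Probe the git server directly, bypassing "make tools":
git ls-remote git://xenbits.xen.org/seabios.git

# If only that host is down, the build can be pointed at a mirror,
# assuming your Config.mk exposes an overridable SEABIOS_UPSTREAM_URL:
make tools SEABIOS_UPSTREAM_URL=https://github.com/coreboot/seabios.git
```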

Thanx.

-- 
jacky

--089e013c66403bbdd504eecbc231--


--===============2721699646952402877==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2721699646952402877==--



From xen-devel-bounces@lists.xen.org Wed Jan 01 16:11:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyOOZ-0000u8-Kp; Wed, 01 Jan 2014 16:11:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.bin18@zte.com.cn>) id 1Vxtf7-0003vk-Ac
	for xen-devel@lists.xen.org; Tue, 31 Dec 2013 07:22:49 +0000
Received: from [85.158.139.211:6024] by server-5.bemta-5.messagelabs.com id
	17/AC-14928-8C072C25; Tue, 31 Dec 2013 07:22:48 +0000
X-Env-Sender: yang.bin18@zte.com.cn
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388474565!7201572!1
X-Originating-IP: [63.217.80.70]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjMuMjE3LjgwLjcwID0+IDMyODg5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7146 invoked from network); 31 Dec 2013 07:22:46 -0000
Received: from mx5.zte.com.cn (HELO mx5.zte.com.cn) (63.217.80.70)
	by server-8.tower-206.messagelabs.com with SMTP;
	31 Dec 2013 07:22:46 -0000
Received: from mse02.zte.com.cn (unknown [10.30.3.21])
	by Websense Email Security Gateway with ESMTPS id 76A0A12C6AB0;
	Tue, 31 Dec 2013 15:22:31 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239])
	by mse02.zte.com.cn with ESMTP id rBV7MTbI097119;
	Tue, 31 Dec 2013 15:22:29 +0800 (GMT-8)
	(envelope-from yang.bin18@zte.com.cn)
In-Reply-To: <1387191383.20076.72.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
MIME-Version: 1.0
X-KeepSent: 7C8CC333:2C8AB855-48257C52:002835FF;
 type=4; name=$KeepSent
X-Mailer: Lotus Notes Release 6.5.6 March 06, 2007
Message-ID: <OF7C8CC333.2C8AB855-ON48257C52.002835FF-48257C52.00288C35@zte.com.cn>
From: yang.bin18@zte.com.cn
Date: Tue, 31 Dec 2013 15:22:38 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.3FP1
	HF212|May 23, 2012) at 2013-12-31 15:22:26,
	Serialize complete at 2013-12-31 15:22:26
X-MAIL: mse02.zte.com.cn rBV7MTbI097119
X-Mailman-Approved-At: Wed, 01 Jan 2014 16:11:45 +0000
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] =?gb2312?b?tPC4tDogUmU6ILTwuLQ6IFJlOiAgeGVuIDQuMyBi?=
 =?gb2312?b?dWlsZCBmYWlsZWQ=?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7355998446571134160=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

--===============7355998446571134160==
Content-Type: multipart/alternative; boundary="=_alternative 00288C3448257C52_="

This is a multipart message in MIME format.

--=_alternative 00288C3448257C52_=
Content-Type: text/plain; charset="GB2312"
Content-Transfer-Encoding: base64

Tm93IEkgaGF2ZSBmaWd1cmVkIG91dCBhcyBiZWxsb3c6DQoNCjEuIGRvd25sb2FkIC4vdG9vbHMv
ZmlybXdhcmUvc2VhYmlvcy1kaXKjrA0KLi90b29scy9xZW11LXhlbi10cmFkaXRpb25hbC1kaXIs
Li90b29zL3FlbXUteGVuLWRpcg0KMi5tb2RpZnkgdG9vbHMvTWFrZWZpbGUgYW5kIHJlbW92ZSBh
Y3Rpb24gb2YgcWVtdS14ZW4tdHJhZGl0aW9uYWwtZGlyLWZpbmQgDQphbmQgcWVtdS14ZW4tZGly
LWZpbmQNCg0KDQpUaGFuayB5b3UgYW5kIHdlaSAuDQoNCg0KDQoNCg0KDQoNCklhbiBDYW1wYmVs
bCA8SWFuLkNhbXBiZWxsQGNpdHJpeC5jb20+IA0KMjAxMy0xMi0xNiAxODo1Ng0KDQrK1bz+yMsN
Cjx5YW5nLmJpbjE4QHp0ZS5jb20uY24+DQqzrcvNDQo8eGVuLWRldmVsQGxpc3RzLnhlbi5vcmc+
DQrW98ziDQpSZTogtPC4tDogUmU6IFtYZW4tZGV2ZWxdIHhlbiA0LjMgYnVpbGQgZmFpbGVkDQoN
Cg0KDQoNCg0KDQpPbiBNb24sIDIwMTMtMTItMTYgYXQgMTg6NDggKzA4MDAsIHlhbmcuYmluMThA
enRlLmNvbS5jbiB3cm90ZToNCj4gSSB1c2UgdGhlIHNuYXBzaG90IG9mDQo+IKGwDQpodHRwOi8v
eGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c2hvcnRsb2c7aD1yZWZzL2hlYWRz
L3N0YWJsZS00LjMgDQqhsaGjQW5kIGhhdmUgbm8gbG9jYWwgbWlycm9yIHlldC4NCg0KSSB0aGlu
ayB5b3Ugd291bGQgYmUgYmV0dGVyIG9mZiB3aXRoIHRoZSB0YXJiYWxsIHJlbGVhc2VzIHRoZW4u
DQoNCklhbi4NCg0KDQoNCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tDQpaVEUgSW5mb3JtYXRpb24gU2VjdXJpdHkgTm90aWNlOiBUaGUgaW5m
b3JtYXRpb24gY29udGFpbmVkIGluIHRoaXMgbWFpbCAoYW5kIGFueSBhdHRhY2htZW50IHRyYW5z
bWl0dGVkIGhlcmV3aXRoKSBpcyBwcml2aWxlZ2VkIGFuZCBjb25maWRlbnRpYWwgYW5kIGlzIGlu
dGVuZGVkIGZvciB0aGUgZXhjbHVzaXZlIHVzZSBvZiB0aGUgYWRkcmVzc2VlKHMpLiAgSWYgeW91
IGFyZSBub3QgYW4gaW50ZW5kZWQgcmVjaXBpZW50LCBhbnkgZGlzY2xvc3VyZSwgcmVwcm9kdWN0
aW9uLCBkaXN0cmlidXRpb24gb3Igb3RoZXIgZGlzc2VtaW5hdGlvbiBvciB1c2Ugb2YgdGhlIGlu
Zm9ybWF0aW9uIGNvbnRhaW5lZCBpcyBzdHJpY3RseSBwcm9oaWJpdGVkLiAgSWYgeW91IGhhdmUg
cmVjZWl2ZWQgdGhpcyBtYWlsIGluIGVycm9yLCBwbGVhc2UgZGVsZXRlIGl0IGFuZCBub3RpZnkg
dXMgaW1tZWRpYXRlbHkuDQo=

--=_alternative 00288C3448257C52_=
Content-Type: text/html; charset="GB2312"
Content-Transfer-Encoding: base64

DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPjxicj4NCjwvZm9udD4NCjx0YWJs
ZT4NCjx0cj4NCjx0ZD48Zm9udCBzaXplPTI+Tm93IEkgaGF2ZSBmaWd1cmVkIG91dCBhcyBiZWxs
b3c6PC9mb250Pg0KPGJyPg0KPGJyPjxmb250IHNpemU9Mj4xLiBkb3dubG9hZCAuL3Rvb2xzL2Zp
cm13YXJlL3NlYWJpb3MtZGlyo6wuL3Rvb2xzL3FlbXUteGVuLXRyYWRpdGlvbmFsLWRpciwuL3Rv
b3MvcWVtdS14ZW4tZGlyPC9mb250Pg0KPGJyPjxmb250IHNpemU9Mj4yLm1vZGlmeSB0b29scy9N
YWtlZmlsZSBhbmQgcmVtb3ZlIGFjdGlvbiBvZiBxZW11LXhlbi10cmFkaXRpb25hbC1kaXItZmlu
ZA0KYW5kIHFlbXUteGVuLWRpci1maW5kPGJyPg0KPC9mb250Pg0KPGJyPg0KPGJyPjxmb250IHNp
emU9Mj5UaGFuayB5b3UgYW5kIHdlaSAuPC9mb250Pg0KPHRhYmxlIHdpZHRoPTEwMCU+DQo8dHI+
DQo8dGQgd2lkdGg9MzQlPg0KPHRkIHdpZHRoPTY1JT48L3RhYmxlPg0KPGJyPjwvdGFibGU+DQo8
YnI+DQo8YnI+DQo8YnI+DQo8YnI+DQo8dGFibGUgd2lkdGg9MTAwJT4NCjx0ciB2YWxpZ249dG9w
Pg0KPHRkIHdpZHRoPTM2JT48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+PGI+SWFuIENh
bXBiZWxsICZsdDtJYW4uQ2FtcGJlbGxAY2l0cml4LmNvbSZndDs8L2I+DQo8L2ZvbnQ+DQo8cD48
Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+MjAxMy0xMi0xNiAxODo1NjwvZm9udD4NCjx0
ZCB3aWR0aD02MyU+DQo8dGFibGUgd2lkdGg9MTAwJT4NCjx0ciB2YWxpZ249dG9wPg0KPHRkPg0K
PGRpdiBhbGlnbj1yaWdodD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+ytW8/sjLPC9m
b250PjwvZGl2Pg0KPHRkPjxmb250IHNpemU9MSBmYWNlPSJzYW5zLXNlcmlmIj4mbHQ7eWFuZy5i
aW4xOEB6dGUuY29tLmNuJmd0OzwvZm9udD4NCjx0ciB2YWxpZ249dG9wPg0KPHRkPg0KPGRpdiBh
bGlnbj1yaWdodD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+s63LzTwvZm9udD48L2Rp
dj4NCjx0ZD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+Jmx0O3hlbi1kZXZlbEBsaXN0
cy54ZW4ub3JnJmd0OzwvZm9udD4NCjx0ciB2YWxpZ249dG9wPg0KPHRkPg0KPGRpdiBhbGlnbj1y
aWdodD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+1vfM4jwvZm9udD48L2Rpdj4NCjx0
ZD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+UmU6ILTwuLQ6IFJlOiBbWGVuLWRldmVs
XSB4ZW4gNC4zDQpidWlsZCBmYWlsZWQ8L2ZvbnQ+PC90YWJsZT4NCjxicj4NCjx0YWJsZT4NCjx0
ciB2YWxpZ249dG9wPg0KPHRkPg0KPHRkPjwvdGFibGU+DQo8YnI+PC90YWJsZT4NCjxicj4NCjxi
cj4NCjxicj48dHQ+PGZvbnQgc2l6ZT0yPk9uIE1vbiwgMjAxMy0xMi0xNiBhdCAxODo0OCArMDgw
MCwgeWFuZy5iaW4xOEB6dGUuY29tLmNuDQp3cm90ZTo8YnI+DQomZ3Q7IEkgdXNlIHRoZSBzbmFw
c2hvdCBvZjxicj4NCiZndDsgobBodHRwOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c2hvcnRsb2c7aD1yZWZzL2hlYWRzL3N0YWJsZS00LjMNCqGxoaNBbmQgaGF2ZSBubyBs
b2NhbCBtaXJyb3IgeWV0Ljxicj4NCjxicj4NCkkgdGhpbmsgeW91IHdvdWxkIGJlIGJldHRlciBv
ZmYgd2l0aCB0aGUgdGFyYmFsbCByZWxlYXNlcyB0aGVuLjxicj4NCjxicj4NCklhbi48YnI+DQo8
YnI+DQo8L2ZvbnQ+PC90dD4NCjxicj4NCg0KPGJyPjxwcmU+PGZvbnQgY29sb3I9ImJsdWUiPg0K
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0N
ClpURSBJbmZvcm1hdGlvbiBTZWN1cml0eSBOb3RpY2U6IFRoZSBpbmZvcm1hdGlvbiBjb250YWlu
ZWQgaW4gdGhpcyBtYWlsIChhbmQgYW55IGF0dGFjaG1lbnQgdHJhbnNtaXR0ZWQgaGVyZXdpdGgp
IGlzIHByaXZpbGVnZWQgYW5kIGNvbmZpZGVudGlhbCBhbmQgaXMgaW50ZW5kZWQgZm9yIHRoZSBl
eGNsdXNpdmUgdXNlIG9mIHRoZSBhZGRyZXNzZWUocykuICBJZiB5b3UgYXJlIG5vdCBhbiBpbnRl
bmRlZCByZWNpcGllbnQsIGFueSBkaXNjbG9zdXJlLCByZXByb2R1Y3Rpb24sIGRpc3RyaWJ1dGlv
biBvciBvdGhlciBkaXNzZW1pbmF0aW9uIG9yIHVzZSBvZiB0aGUgaW5mb3JtYXRpb24gY29udGFp
bmVkIGlzIHN0cmljdGx5IHByb2hpYml0ZWQuICBJZiB5b3UgaGF2ZSByZWNlaXZlZCB0aGlzIG1h
aWwgaW4gZXJyb3IsIHBsZWFzZSBkZWxldGUgaXQgYW5kIG5vdGlmeSB1cyBpbW1lZGlhdGVseS4N
Cg0KPC9mb250PjwvcHJlPjxicj4NCg==

--=_alternative 00288C3448257C52_=--


--===============7355998446571134160==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7355998446571134160==--


From xen-devel-bounces@lists.xen.org Wed Jan 01 16:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyOOa-0000ur-5T; Wed, 01 Jan 2014 16:11:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <svpcom@gmail.com>) id 1Vy0Eh-0003dT-Ft
	for xen-devel@lists.xen.org; Tue, 31 Dec 2013 14:23:59 +0000
Received: from [85.158.137.68:6824] by server-5.bemta-3.messagelabs.com id
	4D/FE-25188-E73D2C25; Tue, 31 Dec 2013 14:23:58 +0000
X-Env-Sender: svpcom@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388499837!6670645!1
X-Originating-IP: [209.85.215.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7207 invoked from network); 31 Dec 2013 14:23:58 -0000
Received: from mail-la0-f48.google.com (HELO mail-la0-f48.google.com)
	(209.85.215.48)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Dec 2013 14:23:58 -0000
Received: by mail-la0-f48.google.com with SMTP id n7so6160983lam.21
	for <xen-devel@lists.xen.org>; Tue, 31 Dec 2013 06:23:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=Dy1mnJtjrcQW/SGqsChecT3ttP+6Sv4MOwR5OoADlOM=;
	b=TBd3BGkEqUg7ZzfMbIknQqjoG1v1Yej6wB6cMYP+1dbpnNCEpgKuGqmgLJPn+Wx51Z
	kTbWeXOOs85IM2zEG9lenZTswMrfKHP359kve4JChCjl8k9z4PBFo3jtrMy2WofBHHsu
	8AjyB2PxBtXxnJJT0lzuNM0mmln62DdQ6shIj88DvemAneUPnJuwIhXkF4OjrMC/ykcb
	sVaqwy0G3FpdGBvbv/9fWLfJR8aXU+YH+xbfPJOWdIebu1kWj3a6S2ORNIBIY5khCND5
	HPt3yUa2jYxgGET++pJ8cWXnYscW1AJlfd2spNwmxH3pCRa/fdqqpSOKscfaiZqIRxV2
	Q6GA==
X-Received: by 10.112.16.137 with SMTP id g9mr27771219lbd.1.1388499837372;
	Tue, 31 Dec 2013 06:23:57 -0800 (PST)
Received: from hzone3.svpcom.msiu.ru ([176.14.131.139])
	by mx.google.com with ESMTPSA id e10sm38759409laa.6.2013.12.31.06.23.55
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 31 Dec 2013 06:23:56 -0800 (PST)
Message-ID: <52C2D37A.1020000@gmail.com>
Date: Tue, 31 Dec 2013 18:23:54 +0400
From: Vasily Evseenko <svpcom@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20130110 Thunderbird/17.0.2
MIME-Version: 1.0
To: William Dauchy <wdauchy@gmail.com>
References: <52BD5FDD.6060009@gmail.com>
	<20131227115345.GR25969@zion.uk.xensource.com>
	<52BD70B6.7040300@gmail.com>
	<CAJ75kXaT7cRnR6cjs2hF6N4Xwy7S6bd84Q-FWQznUE5GvCe4bw@mail.gmail.com>
In-Reply-To: <CAJ75kXaT7cRnR6cjs2hF6N4Xwy7S6bd84Q-FWQznUE5GvCe4bw@mail.gmail.com>
X-Enigmail-Version: 1.5
X-Mailman-Approved-At: Wed, 01 Jan 2014 16:11:45 +0000
Cc: Wei Liu <wei.liu2@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] domU crash with kernel BUG at
	drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've found a workaround:
Run "ethtool -K vifX.Y tx off tso off gso off"
on the dom0 side (in addition to "ethtool -K eth0 tx off tso off gso off"
on the domU).
Disabling offloading only in the domU is not sufficient.


On 12/31/2013 04:56 PM, William Dauchy wrote:
> On Fri, Dec 27, 2013 at 1:21 PM, Vasily Evseenko <svpcom@gmail.com> wrote:
>> There are no clear steps to reproduce.
>> Bug triggered only by high tcp network load (webserver pattern - many
>> small connections) every one - two days.
>> See xen, dom0 and domU's info in attachment. I can provide any
>> additional info.
>> I've tried dom0/domU kernels 3.10.23 (vanilla, from centos-xen) and
>> 3.10.25,  xen 4.2.3 and 4.3.1.
> maybe testing with kmemleak enabled on the domU will help getting more info.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 16:52:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyP1P-0002tQ-48; Wed, 01 Jan 2014 16:51:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VyP1N-0002tL-3p
	for xen-devel@lists.xen.org; Wed, 01 Jan 2014 16:51:53 +0000
Received: from [85.158.137.68:26253] by server-11.bemta-3.messagelabs.com id
	56/7D-19379-8A744C25; Wed, 01 Jan 2014 16:51:52 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388595109!6756794!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2026 invoked from network); 1 Jan 2014 16:51:51 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 16:51:51 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 01 Jan 2014 16:51:46 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,586,1384300800"; d="scan'208";a="622777549"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi03.verizon.com with ESMTP; 01 Jan 2014 16:51:45 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Wed,  1 Jan 2014 11:51:36 -0500
Message-Id: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, Don Slutz <dslutz@verizon.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Revert of commit 7113a45451a9f656deeff070e47672043ed83664

Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9

With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)

~/kexec/build/sbin/kexec -p '--command-line=placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off' --initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img /boot/vmlinuz-3.8.11-100.fc17.x86_64

Without it:

(XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
(XEN) [2014-01-01 15:40:12] CPU:    5
(XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
(XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
(XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
(XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
(XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
(XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
(XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
(XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
(XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
(XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
(XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
(XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
(XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
(XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
(XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
(XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
(XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
(XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
(XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
(XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
(XEN) [2014-01-01 15:40:12] Xen call trace:
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
(XEN) [2014-01-01 15:40:12]
(XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
(XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-01 15:40:16]
(XEN) [2014-01-01 15:40:16] ****************************************
(XEN) [2014-01-01 15:40:16] Panic on CPU 5:
(XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
(XEN) [2014-01-01 15:40:16] [error_code=0002]
(XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
(XEN) [2014-01-01 15:40:17] ****************************************
(XEN) [2014-01-01 15:40:17]
(XEN) [2014-01-01 15:40:17] Reboot in five seconds...

With this patch applied, Xen no longer panics and the crash kernel works.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/setup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 4833ca3..90f3294 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1098,6 +1098,10 @@ void __init __start_xen(unsigned long mbi_p)
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
 
+    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                     kexec_crash_area.start >> PAGE_SHIFT,
+                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
+
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 16:52:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 16:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyP1P-0002tQ-48; Wed, 01 Jan 2014 16:51:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VyP1N-0002tL-3p
	for xen-devel@lists.xen.org; Wed, 01 Jan 2014 16:51:53 +0000
Received: from [85.158.137.68:26253] by server-11.bemta-3.messagelabs.com id
	56/7D-19379-8A744C25; Wed, 01 Jan 2014 16:51:52 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388595109!6756794!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2026 invoked from network); 1 Jan 2014 16:51:51 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Jan 2014 16:51:51 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 01 Jan 2014 16:51:46 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,586,1384300800"; d="scan'208";a="622777549"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi03.verizon.com with ESMTP; 01 Jan 2014 16:51:45 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Wed,  1 Jan 2014 11:51:36 -0500
Message-Id: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, Don Slutz <dslutz@verizon.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Revert of commit 7113a45451a9f656deeff070e47672043ed83664

Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9

With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)

~/kexec/build/sbin/kexec -p '--command-line=placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off' --initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img /boot/vmlinuz-3.8.11-100.fc17.x86_64

Without it:

(XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
(XEN) [2014-01-01 15:40:12] CPU:    5
(XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
(XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
(XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
(XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
(XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
(XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
(XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
(XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
(XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
(XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
(XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
(XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
(XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
(XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
(XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
(XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
(XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
(XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
(XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
(XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
(XEN) [2014-01-01 15:40:12] Xen call trace:
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
(XEN) [2014-01-01 15:40:12]
(XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
(XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-01 15:40:16]
(XEN) [2014-01-01 15:40:16] ****************************************
(XEN) [2014-01-01 15:40:16] Panic on CPU 5:
(XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
(XEN) [2014-01-01 15:40:16] [error_code=0002]
(XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
(XEN) [2014-01-01 15:40:17] ****************************************
(XEN) [2014-01-01 15:40:17]
(XEN) [2014-01-01 15:40:17] Reboot in five seconds...

With this patch applied, Xen no longer panics and the crash kernel works.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/setup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 4833ca3..90f3294 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1098,6 +1098,10 @@ void __init __start_xen(unsigned long mbi_p)
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
 
+    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                     kexec_crash_area.start >> PAGE_SHIFT,
+                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
+
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 17:47:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 17:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyPt4-00050H-GG; Wed, 01 Jan 2014 17:47:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VyPt3-00050C-1x
	for xen-devel@lists.xen.org; Wed, 01 Jan 2014 17:47:21 +0000
Received: from [85.158.143.35:43555] by server-3.bemta-4.messagelabs.com id
	5C/6A-32360-8A454C25; Wed, 01 Jan 2014 17:47:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388598438!9114416!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1381 invoked from network); 1 Jan 2014 17:47:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Jan 2014 17:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,586,1384300800"; d="scan'208";a="89076748"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Jan 2014 17:47:17 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 1 Jan 2014 12:47:16 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 1 Jan 2014
	18:47:15 +0100
Message-ID: <52C454A4.1030408@citrix.com>
Date: Wed, 1 Jan 2014 17:47:16 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/2014 16:51, Don Slutz wrote:
> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
>
> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>
> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
>
> ~/kexec/build/sbin/kexec -p '--command-line=placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off' --initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img /boot/vmlinuz-3.8.11-100.fc17.x86_64
>
> Without it:
>
> (XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
> (XEN) [2014-01-01 15:40:12] CPU:    5
> (XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
> (XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
> (XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
> (XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
> (XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
> (XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
> (XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
> (XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
> (XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
> (XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
> (XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
> (XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
> (XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
> (XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
> (XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
> (XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
> (XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
> (XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
> (XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
> (XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
> (XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
> (XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
> (XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
> (XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
> (XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
> (XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
> (XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
> (XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
> (XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
> (XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
> (XEN) [2014-01-01 15:40:12] Xen call trace:
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
> (XEN) [2014-01-01 15:40:12]
> (XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
> (XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
> (XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
> (XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff
> (XEN) [2014-01-01 15:40:16]
> (XEN) [2014-01-01 15:40:16] ****************************************
> (XEN) [2014-01-01 15:40:16] Panic on CPU 5:
> (XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
> (XEN) [2014-01-01 15:40:16] [error_code=0002]
> (XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
> (XEN) [2014-01-01 15:40:17] ****************************************
> (XEN) [2014-01-01 15:40:17]
> (XEN) [2014-01-01 15:40:17] Reboot in five seconds...
>
> With this patch no panic and crash kernel works.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Commit 7113a45451a9f656deeff070e47672043ed83664 was clearly not tested. 
kimage_alloc_crash_control_page() explicitly chooses a page inside the
crash region and clears it.

However, the sentiment of the commit is certainly desirable, to prevent
accidental playing in the crash region.

As the mappings are removed from Xen's directmap region,
map_domain_page() doesn't work (unless the debug highmem barrier is
sufficiently low that the crash region ends up above it, and the
virtual address ends up coming from the mapcache).

This means that both clear_domain_page() here, and later
machine_kexec_load() where the code is copied in, are vulnerable to this
page fault.

The solution which would leave the fewest mappings in place would be to
have kimage_alloc_crash_control_page() map each individual control
page into the main Xen pagetables, at which point a call to
map_domain_page() on it would work correctly.  This would need an
equivalent call to destroy_xen_mappings() in kimage_free().

However, it is far from neat.

I defer to others as to which approach is better, but suggest that one
way or another, the problem gets fixed very quickly, even if that means
taking this complete reversion now and submitting a proper fix in due
course.

~Andrew

> ---
>  xen/arch/x86/setup.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 4833ca3..90f3294 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1098,6 +1098,10 @@ void __init __start_xen(unsigned long mbi_p)
>                           PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
>      }
>  
> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                     kexec_crash_area.start >> PAGE_SHIFT,
> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
> +
>      xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
>                     ~((1UL << L2_PAGETABLE_SHIFT) - 1);
>      destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 01 17:47:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jan 2014 17:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyPt4-00050H-GG; Wed, 01 Jan 2014 17:47:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VyPt3-00050C-1x
	for xen-devel@lists.xen.org; Wed, 01 Jan 2014 17:47:21 +0000
Received: from [85.158.143.35:43555] by server-3.bemta-4.messagelabs.com id
	5C/6A-32360-8A454C25; Wed, 01 Jan 2014 17:47:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388598438!9114416!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1381 invoked from network); 1 Jan 2014 17:47:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Jan 2014 17:47:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,586,1384300800"; d="scan'208";a="89076748"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Jan 2014 17:47:17 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 1 Jan 2014 12:47:16 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 1 Jan 2014
	18:47:15 +0100
Message-ID: <52C454A4.1030408@citrix.com>
Date: Wed, 1 Jan 2014 17:47:16 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/2014 16:51, Don Slutz wrote:
> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
>
> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>
> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
>
> ~/kexec/build/sbin/kexec -p '--command-line=placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off' --initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img /boot/vmlinuz-3.8.11-100.fc17.x86_64
>
> Without it:
>
> (XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
> (XEN) [2014-01-01 15:40:12] CPU:    5
> (XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
> (XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
> (XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
> (XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
> (XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
> (XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
> (XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
> (XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
> (XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
> (XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
> (XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
> (XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
> (XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
> (XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
> (XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
> (XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
> (XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
> (XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
> (XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
> (XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
> (XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
> (XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
> (XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
> (XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
> (XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
> (XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
> (XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
> (XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
> (XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
> (XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
> (XEN) [2014-01-01 15:40:12] Xen call trace:
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
> (XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
> (XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
> (XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
> (XEN) [2014-01-01 15:40:12]
> (XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
> (XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
> (XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
> (XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff
> (XEN) [2014-01-01 15:40:16]
> (XEN) [2014-01-01 15:40:16] ****************************************
> (XEN) [2014-01-01 15:40:16] Panic on CPU 5:
> (XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
> (XEN) [2014-01-01 15:40:16] [error_code=0002]
> (XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
> (XEN) [2014-01-01 15:40:17] ****************************************
> (XEN) [2014-01-01 15:40:17]
> (XEN) [2014-01-01 15:40:17] Reboot in five seconds...
>
> With this patch no panic and crash kernel works.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Commit 7113a45451a9f656deeff070e47672043ed83664 was clearly not tested. 
kimage_alloc_crash_control_page() explicitly chooses a page inside the
crash region and clears it.

However, the sentiment of the commit is certainly desirable, to prevent
accidental playing in the crash region.

As the mappings are removed from Xen's directmap region,
map_domain_page() doesn't work (unless the debug highmem barrier is
sufficiently low that the crash region ends up above it, and the
virtual address ends up coming from the mapcache).

This means that both this path here in clear_domain_page(), and the
later path in machine_kexec_load() where the code is copied in, are
vulnerable to this pagefault.

The solution to this problem which would leave the fewest mappings would
be to have kimage_alloc_crash_control_page() map the individual control
page into the main Xen pagetables, at which point a call to
map_domain_page() on it will work correctly.  This would need an
equivalent call to destroy_xen_mappings() in kimage_free().

However, it is far from neat.

I defer to others as to which approach is better, but suggest that one
way or another, the problem gets fixed very quickly, even if that means
taking this complete reversion now and submitting a proper fix in due
course.

~Andrew

> ---
>  xen/arch/x86/setup.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 4833ca3..90f3294 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1098,6 +1098,10 @@ void __init __start_xen(unsigned long mbi_p)
>                           PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
>      }
>  
> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                     kexec_crash_area.start >> PAGE_SHIFT,
> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
> +
>      xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
>                     ~((1UL << L2_PAGETABLE_SHIFT) - 1);
>      destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 01:43:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 01:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyXJh-0005Af-3x; Thu, 02 Jan 2014 01:43:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1VyXJf-0005Aa-5q
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 01:43:19 +0000
Received: from [193.109.254.147:62774] by server-1.bemta-14.messagelabs.com id
	99/96-15600-634C4C25; Thu, 02 Jan 2014 01:43:18 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1388626996!8384866!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2516 invoked from network); 2 Jan 2014 01:43:17 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 01:43:17 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s021gArf031919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 01:42:11 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s021g8s3010232
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 01:42:09 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s021g8xC004555; Thu, 2 Jan 2014 01:42:08 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Jan 2014 17:42:07 -0800
Date: Thu, 2 Jan 2014 02:41:34 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140102014134.GA3371@olila.local.net-space.pl>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C454A4.1030408@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C454A4.1030408@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 01, 2014 at 05:47:16PM +0000, Andrew Cooper wrote:
> On 01/01/2014 16:51, Don Slutz wrote:

[...]

> > With this patch no panic and crash kernel works.
> >
> > Signed-off-by: Don Slutz <dslutz@verizon.com>
>
> Commit 7113a45451a9f656deeff070e47672043ed83664 was clearly not tested.
> kimage_alloc_crash_control_page() explicitly chooses a page inside the
> crash region and clears it.

I tested this patch earlier and again now with the latest Xen and
kexec-tools commits. I am not able to reproduce this issue on my machines.
Don, could you provide more details about your system and how you built
your Xen and kexec-tools (configure, make options, etc.)?

Andrew, David, did you run kexec tests in your automated test environment
with commit 7113a45451a9f656deeff070e47672043ed83664 applied? Could you
share the results?

> However, the sentiment of the commit is certainly desirable, to prevent
> accidental playing in the crash region.
>
> As the mappings are removed from Xen's directmap region,
> map_domain_page() doesn't work (unless the debug highmem barrier is
> sufficiently low that the crash region ends up above it, and the
> virtual address ends up coming from the mapcache).
>
> This means that both this path here in clear_domain_page(), and the
> later path in machine_kexec_load() where the code is copied in, are
> vulnerable to this pagefault.
>
> The solution to this problem which would leave the fewest mappings would
> be to have kimage_alloc_crash_control_page() map the individual control
> page into the main Xen pagetables, at which point a call to
> map_domain_page() on it will work correctly.  This would need an
> equivalent call to destroy_xen_mappings() in kimage_free().
>
> However, it is far from neat.
>
> I defer to others as to which approach is better, but suggest that one
> way or another, the problem gets fixed very quickly, even if that means
> taking this complete reversion now and submitting a proper fix in due
> course.

I am on holiday until 6th January 2014 and am not able to investigate
this issue more deeply right now. If you feel that it is better to revert
this patch and make a second attempt at removing this mapping later, I do
not object.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 05:10:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 05:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyaXH-0005EW-D7; Thu, 02 Jan 2014 05:09:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1VyaXF-0005ER-Jl
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 05:09:33 +0000
Received: from [85.158.143.35:56848] by server-3.bemta-4.messagelabs.com id
	C8/E3-32360-C84F4C25; Thu, 02 Jan 2014 05:09:32 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388639370!9172132!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16543 invoked from network); 2 Jan 2014 05:09:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 05:09:31 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0259R9B026782
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 05:09:29 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0259Q94000294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 05:09:27 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0259QSW001469; Thu, 2 Jan 2014 05:09:26 GMT
Received: from [10.182.36.125] (/10.182.36.125)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Jan 2014 21:09:26 -0800
Message-ID: <52C4F48F.5090003@oracle.com>
Date: Thu, 02 Jan 2014 13:09:35 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Vasily Evseenko <svpcom@gmail.com>
References: <52BD5FDD.6060009@gmail.com>
In-Reply-To: <52BD5FDD.6060009@gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
	drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2013/12/27 19:09, Vasily Evseenko wrote:
> Hi,
>
> I've got domU crash (~ every 1-2 days under high network (tcp) load)
> with message:
>
> -----
> [2013-12-26 03:53:18] kernel BUG at drivers/net/xen-netfront.c:305!
> [2013-12-26 03:53:18] invalid opcode: 0000 [#1] SMP
> [2013-12-26 03:53:18] Modules linked in: ipt_REJECT iptable_filter
> xt_set xt_REDIRECT iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4
> nf_nat_ipv4 nf_nat
> ip_tables ip_set_hash_net ip_set_hash_ip ip_set nfnetlink ip6t_REJECT
> nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
> ip6_table
> s ipv6 ext3 jbd xen_netfront coretemp hwmon crc32_pclmul crc32c_intel
> ghash_clmulni_intel microcode pcspkr ext4 jbd2 mbcache aesni_intel
> ablk_helper c
> ryptd lrw gf128mul glue_helper aes_x86_64 xen_blkfront dm_mirror
> dm_region_hash dm_log dm_mod
> [2013-12-26 03:53:18] CPU: 0 PID: 15126 Comm: python Not tainted
> 3.10.25-11.x86_64 #1
> [2013-12-26 03:53:18] task: ffff8801e5d68ac0 ti: ffff8801e7392000
> task.ti: ffff8801e7392000
> [2013-12-26 03:53:18] RIP: e030:[<ffffffffa015d637>]
> [<ffffffffa015d637>] xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> [2013-12-26 03:53:18] RSP: e02b:ffff8801f2e03ce0  EFLAGS: 00010282
> [2013-12-26 03:53:18] RAX: 00000000000001d4 RBX: ffff8801e5438800 RCX:
> 0000000000000001
> [2013-12-26 03:53:18] RDX: 000000000000002a RSI: 0000000000000000 RDI:
> 0000000000002200
> [2013-12-26 03:53:18] RBP: ffff8801f2e03d40 R08: 0000000000000000 R09:
> 0000000000001000
> [2013-12-26 03:53:18] R10: ffff8801000083c0 R11: dead000000200200 R12:
> 0000000000000220
> [2013-12-26 03:53:18] R13: ffff8801e6eec0c0 R14: 000000000000002a R15:
> 000000000239642a
> [2013-12-26 03:53:18] FS:  00007f4cf48d57e0(0000)
> GS:ffff8801f2e00000(0000) knlGS:0000000000000000
> [2013-12-26 03:53:18] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> [2013-12-26 03:53:18] CR2: ffffffffff600400 CR3: 00000001e0db3000 CR4:
> 0000000000042660
> [2013-12-26 03:53:18] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [2013-12-26 03:53:18] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [2013-12-26 03:53:18] Stack:
> [2013-12-26 03:53:18]  ffff8801f2e03df0 02396417e5438000
> ffff8801e5439d58 ffff8801e54394f0
> [2013-12-26 03:53:18]  ffff8801e5438000 002affff00000013
> ffff8801f2e03d40 ffff8801f2e03db0
> [2013-12-26 03:53:18]  0000000000000010 ffff8800655e6ac0
> ffff8801e5438800 ffff8801e511a000
> [2013-12-26 03:53:18] Call Trace:
> [2013-12-26 03:53:18]  <IRQ>
> [2013-12-26 03:53:18]  [<ffffffffa015dc44>] xennet_poll+0x2f4/0x630
> [xen_netfront]
> [2013-12-26 03:53:18]  [<ffffffff810640a9>] ? raise_softirq_irqoff+0x9/0x50
> [2013-12-26 03:53:18]  [<ffffffff8152050c>] ? dev_kfree_skb_irq+0x5c/0x70
> [2013-12-26 03:53:18]  [<ffffffff810e4fb9>] ?
> handle_irq_event_percpu+0xc9/0x210
> [2013-12-26 03:53:18]  [<ffffffff81528022>] net_rx_action+0x112/0x290
> [2013-12-26 03:53:18]  [<ffffffff810e514d>] ? handle_irq_event+0x4d/0x70
> [2013-12-26 03:53:18]  [<ffffffff81063c97>] __do_softirq+0xf7/0x270
> [2013-12-26 03:53:18]  [<ffffffff81600edc>] call_softirq+0x1c/0x30
> [2013-12-26 03:53:18]  [<ffffffff81014505>] do_softirq+0x65/0xa0
> [2013-12-26 03:53:18]  [<ffffffff810639c5>] irq_exit+0xc5/0xd0
> [2013-12-26 03:53:18]  [<ffffffff81351e45>] xen_evtchn_do_upcall+0x35/0x50
> [2013-12-26 03:53:18]  [<ffffffff81600f3e>]
> xen_do_hypervisor_callback+0x1e/0x30
> [2013-12-26 03:53:18]  <EOI>
> [2013-12-26 03:53:18] Code: 8b 35 ee f9 bb e1 48 8d bb 08 0d 00 00 48 83
> c6 64 e8 2e f2 f0 e0 8b 83 ec 0c 00 00 31 d2 89 c1 d1 e9 39 d1 76 9e e9
> 5a ff ff ff <0f> 0b eb fe 0f 0b 0f 1f 00 eb fb 66 66 66 66 66 2e 0f 1f
> 84 00
> [2013-12-26 03:53:18] RIP  [<ffffffffa015d637>]
> xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> [2013-12-26 03:53:18]  RSP <ffff8801f2e03ce0>
> ------------
>
> dom0 and domU kernels are vanilla 3.10.25
> host server has 4 cores x 2 threads with mapping: 4 - dom0, 2 - domU, 2
> - domU
> i've tried xen versions: 4.2.3 and 4.3.1
> also i've tried to disable offloading on domU:  ethtool -K eth0 tx off
> tso off gso off   ----  no effect
>
> domU's are under high TCP load (a lot of small tcp connections (web server))
> sometimes  i've got on dom0:
> ---
> [2013-12-26 00:16:30] (XEN) grant_table.c:289:d0 Increased maptrack size
> to 2 frames
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
>
> ---
>
> It seems the root of the problem is in the dom0 messages above. Is it a
> HW failure or an overflow of some internal kernel structures?
Judging from the stack, this issue looks very likely to be the same as
one which has already been fixed. There was a bug in counting slots in
netback, so responses could overlap requests in the ring; grantcopy then
picked up a wrong grant reference and reported the error in
grant_table.c. See
http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
There was some back-and-forth work on this issue, but the fix has been in
the tree since v3.12-rc4. Would you like to try a newer kernel version?

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 05:10:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 05:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyaXH-0005EW-D7; Thu, 02 Jan 2014 05:09:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1VyaXF-0005ER-Jl
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 05:09:33 +0000
Received: from [85.158.143.35:56848] by server-3.bemta-4.messagelabs.com id
	C8/E3-32360-C84F4C25; Thu, 02 Jan 2014 05:09:32 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388639370!9172132!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16543 invoked from network); 2 Jan 2014 05:09:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 05:09:31 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0259R9B026782
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 05:09:29 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0259Q94000294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 05:09:27 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0259QSW001469; Thu, 2 Jan 2014 05:09:26 GMT
Received: from [10.182.36.125] (/10.182.36.125)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Jan 2014 21:09:26 -0800
Message-ID: <52C4F48F.5090003@oracle.com>
Date: Thu, 02 Jan 2014 13:09:35 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Vasily Evseenko <svpcom@gmail.com>
References: <52BD5FDD.6060009@gmail.com>
In-Reply-To: <52BD5FDD.6060009@gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
	drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2013/12/27 19:09, Vasily Evseenko wrote:
> Hi,
>
> I've got domU crash (~ every 1-2 days under high network (tcp) load)
> with message:
>
> -----
> [2013-12-26 03:53:18] kernel BUG at drivers/net/xen-netfront.c:305!
> [2013-12-26 03:53:18] invalid opcode: 0000 [#1] SMP
> [2013-12-26 03:53:18] Modules linked in: ipt_REJECT iptable_filter
> xt_set xt_REDIRECT iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4
> nf_nat_ipv4 nf_nat
> ip_tables ip_set_hash_net ip_set_hash_ip ip_set nfnetlink ip6t_REJECT
> nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
> ip6_table
> s ipv6 ext3 jbd xen_netfront coretemp hwmon crc32_pclmul crc32c_intel
> ghash_clmulni_intel microcode pcspkr ext4 jbd2 mbcache aesni_intel
> ablk_helper c
> ryptd lrw gf128mul glue_helper aes_x86_64 xen_blkfront dm_mirror
> dm_region_hash dm_log dm_mod
> [2013-12-26 03:53:18] CPU: 0 PID: 15126 Comm: python Not tainted
> 3.10.25-11.x86_64 #1
> [2013-12-26 03:53:18] task: ffff8801e5d68ac0 ti: ffff8801e7392000
> task.ti: ffff8801e7392000
> [2013-12-26 03:53:18] RIP: e030:[<ffffffffa015d637>]
> [<ffffffffa015d637>] xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> [2013-12-26 03:53:18] RSP: e02b:ffff8801f2e03ce0  EFLAGS: 00010282
> [2013-12-26 03:53:18] RAX: 00000000000001d4 RBX: ffff8801e5438800 RCX:
> 0000000000000001
> [2013-12-26 03:53:18] RDX: 000000000000002a RSI: 0000000000000000 RDI:
> 0000000000002200
> [2013-12-26 03:53:18] RBP: ffff8801f2e03d40 R08: 0000000000000000 R09:
> 0000000000001000
> [2013-12-26 03:53:18] R10: ffff8801000083c0 R11: dead000000200200 R12:
> 0000000000000220
> [2013-12-26 03:53:18] R13: ffff8801e6eec0c0 R14: 000000000000002a R15:
> 000000000239642a
> [2013-12-26 03:53:18] FS:  00007f4cf48d57e0(0000)
> GS:ffff8801f2e00000(0000) knlGS:0000000000000000
> [2013-12-26 03:53:18] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> [2013-12-26 03:53:18] CR2: ffffffffff600400 CR3: 00000001e0db3000 CR4:
> 0000000000042660
> [2013-12-26 03:53:18] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [2013-12-26 03:53:18] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [2013-12-26 03:53:18] Stack:
> [2013-12-26 03:53:18]  ffff8801f2e03df0 02396417e5438000
> ffff8801e5439d58 ffff8801e54394f0
> [2013-12-26 03:53:18]  ffff8801e5438000 002affff00000013
> ffff8801f2e03d40 ffff8801f2e03db0
> [2013-12-26 03:53:18]  0000000000000010 ffff8800655e6ac0
> ffff8801e5438800 ffff8801e511a000
> [2013-12-26 03:53:18] Call Trace:
> [2013-12-26 03:53:18]  <IRQ>
> [2013-12-26 03:53:18]  [<ffffffffa015dc44>] xennet_poll+0x2f4/0x630
> [xen_netfront]
> [2013-12-26 03:53:18]  [<ffffffff810640a9>] ? raise_softirq_irqoff+0x9/0x50
> [2013-12-26 03:53:18]  [<ffffffff8152050c>] ? dev_kfree_skb_irq+0x5c/0x70
> [2013-12-26 03:53:18]  [<ffffffff810e4fb9>] ?
> handle_irq_event_percpu+0xc9/0x210
> [2013-12-26 03:53:18]  [<ffffffff81528022>] net_rx_action+0x112/0x290
> [2013-12-26 03:53:18]  [<ffffffff810e514d>] ? handle_irq_event+0x4d/0x70
> [2013-12-26 03:53:18]  [<ffffffff81063c97>] __do_softirq+0xf7/0x270
> [2013-12-26 03:53:18]  [<ffffffff81600edc>] call_softirq+0x1c/0x30
> [2013-12-26 03:53:18]  [<ffffffff81014505>] do_softirq+0x65/0xa0
> [2013-12-26 03:53:18]  [<ffffffff810639c5>] irq_exit+0xc5/0xd0
> [2013-12-26 03:53:18]  [<ffffffff81351e45>] xen_evtchn_do_upcall+0x35/0x50
> [2013-12-26 03:53:18]  [<ffffffff81600f3e>]
> xen_do_hypervisor_callback+0x1e/0x30
> [2013-12-26 03:53:18]  <EOI>
> [2013-12-26 03:53:18] Code: 8b 35 ee f9 bb e1 48 8d bb 08 0d 00 00 48 83
> c6 64 e8 2e f2 f0 e0 8b 83 ec 0c 00 00 31 d2 89 c1 d1 e9 39 d1 76 9e e9
> 5a ff ff ff <0f> 0b eb fe 0f 0b 0f 1f 00 eb fb 66 66 66 66 66 2e 0f 1f
> 84 00
> [2013-12-26 03:53:18] RIP  [<ffffffffa015d637>]
> xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> [2013-12-26 03:53:18]  RSP <ffff8801f2e03ce0>
> ------------
>
> dom0 and domU kernels are vanilla 3.10.25
> host server has 4 cores x 2 threads with mapping: 4 - dom0, 2 - domU, 2
> - domU
> i've tried xen versions: 4.2.3 and 4.3.1
> also i've tried to disable offloaing on domU:  ethtool -K eth0 tx off
> tso off gso off   ----  no effects
>
> domU's are under high TCP load (a lot of small tcp connections (web server))
> sometimes  i've got on dom0:
> ---
> [2013-12-26 00:16:30] (XEN) grant_table.c:289:d0 Increased maptrack size
> to 2 frames
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 43646979
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
> [2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> 99221507
>
> ---
>
> It seems the root of the problem is in the dom0 messages above. Is it a
> HW failure or an overflow of some internal kernel structures?
 From the stack trace, this looks very much like an issue that has
already been fixed. There was a bug in the slot counting in netback: 
responses could overlap requests in the ring, so grantcopy picked up a 
wrong grant reference and grant_table.c reported the error. See 
http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
There was some back-and-forth on this issue, but the fix has been in 
mainline since v3.12-rc4. Would you like to try a newer kernel version?

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 06:26:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 06:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vybj2-0008Ee-G7; Thu, 02 Jan 2014 06:25:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1Vybj0-0008EZ-Uz
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 06:25:47 +0000
Received: from [85.158.137.68:6039] by server-16.bemta-3.messagelabs.com id
	57/46-26128-A6605C25; Thu, 02 Jan 2014 06:25:46 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388643943!6809398!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6271 invoked from network); 2 Jan 2014 06:25:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 06:25:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s026PgQ1009643
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 06:25:43 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s026Pfqx011940
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 06:25:42 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s026PeNU001083; Thu, 2 Jan 2014 06:25:40 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 01 Jan 2014 22:25:40 -0800
Message-ID: <52C50661.7060900@oracle.com>
Date: Thu, 02 Jan 2014 14:25:37 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
In-Reply-To: <52BBEBEF.8040509@zynstra.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 12/26/2013 04:42 PM, James Dingwall wrote:
> Bob Liu wrote:
>> On 12/20/2013 03:08 AM, James Dingwall wrote:
>>> Bob Liu wrote:
>>>> On 12/12/2013 12:30 AM, James Dingwall wrote:
>>>>> Bob Liu wrote:
>>>>>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>>>>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>>>>>> Konrad Rzeszutek Wilk wrote:
>>>>>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>>>>>> triggers in my Xen guest domains which use ballooning to
>>>>>>>>>> increase/decrease their memory allocation according to their
>>>>>>>>>> requirements.  One example domain I have has a maximum memory
>>>>>>>>>> setting of ~1.5Gb but it usually idles at ~300Mb, it is also
>>>>>>>>>> configured with 2Gb swap which is almost 100% free.
>>>>>>>>>>
>>>>>>>>>> # free
>>>>>>>>>>                 total       used       free     shared    buffers
>>>>>>>>>> cached
>>>>>>>>>> Mem:        272080     248108      23972          0 1448     
>>>>>>>>>> 63064
>>>>>>>>>> -/+ buffers/cache:     183596      88484
>>>>>>>>>> Swap:      2097148          8    2097140
>>>>>>>>>>
>>>>>>>>>> There is plenty of available free memory in the hypervisor to
>>>>>>>>>> balloon to the maximum size:
>>>>>>>>>> # xl info | grep free_mem
>>>>>>>>>> free_memory            : 14923
>>>>>>>>>>
>>>>>>>>>> An example trace (they are always the same) from the oom
>>>>>>>>>> killer in
>>>>>>>>>> 3.12 is added below.  So far I have not been able to reproduce
>>>>>>>>>> this
>>>>>>>>>> at will so it is difficult to start bisecting it to see if a
>>>>>>>>>> particular change introduced this.  However it does seem that the
>>>>>>>>>> behaviour is wrong because a) ballooning could give the guest
>>>>>>>>>> more
>>>>>>>>>> memory, b) there is lots of swap available which could be used
>>>>>>>>>> as a
>>>>>>>>>> fallback.
>>>>>>> Keep in mind that swap with tmem is actually no more swap. Heh, that
>>>>>>> sounds odd -but basically pages that are destined for swap end up
>>>>>>> going in the tmem code which pipes them up to the hypervisor.
>>>>>>>
>>>>>>>>>> If other information could help or there are more tests that I
>>>>>>>>>> could
>>>>>>>>>> run then please let me know.
>>>>>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>>>>>> the guest right?
>>>>>>>> Yes, domU and dom0 both have the tmem module loaded and  tmem
>>>>>>>> tmem_dedup=on tmem_compress=on is given on the xen command line.
>>>>>>> Excellent. The odd thing is that your swap is not used that much,
>>>>>>> but
>>>>>>> it should be (as that is part of what the self-balloon is suppose to
>>>>>>> do).
>>>>>>>
>>>>>>> Bob, you had a patch for the logic of how self-balloon is suppose
>>>>>>> to account for the slab - would this be relevant to this problem?
>>>>>>>
>>>>>> Perhaps, I have attached the patch.
>>>>>> James, could you please apply it and try your application again? You
>>>>>> have to rebuild the guest kernel.
>>>>>> Oh, and also take a look at whether frontswap is in use, you can
>>>>>> check
>>>>>> it by watching "cat /sys/kernel/debug/frontswap/*".
>>>>> I have tested this patch with a workload where I have previously seen
>>>>> failures and so far so good.  I'll try to keep a guest with it
>>>>> stressed
>>>>> to see if I do get any problems.  I don't know if it is expected but I
>>>> By the way, besides longer time of kswapd, is this patch work well
>>>> during your stress testing?
>>>>
>>>> Have you seen the OOM killer triggered quite frequently again?(with
>>>> selfshrink=true)
>>>>
>>>> Thanks,
>>>> -Bob
>>> It was looking good until today (selfshrink=true).  The trace below is
>>> during a compile of subversion, it looks like the memory has ballooned
>>> to almost the maximum permissible but even under pressure the swap disk
>>> has hardly come in to use.
>>>
>> So if without selfshrink the swap disk can be used a lot?
>>
>> If that's the case, I'm afraid the frontswap-selfshrink in
>> xen-selfballoon did something incorrect.
>>
>> Could you please try this patch which make the frontswap-selfshrink
>> slower and add a printk for debug.
>> Please still keep selfshrink=true in your test but can with or without
>> my previous patch.
>> Thanks a lot!
>>
> The oom trace below was triggered during a compile of gcc.  I have the
> full dmesg from boot which shows all the printks, please let me know if
> you would like to see that.
> 

Sorry for the late response.
Could you confirm that this problem does not occur if you load tmem with
selfshrinking=0 while compiling gcc? It seems you are compiling
different packages during your testing.
This will help figure out whether selfshrinking is the root cause.

Thanks,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 06:46:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 06:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyc3D-0000cp-K4; Thu, 02 Jan 2014 06:46:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <manohar.vanga@gmail.com>) id 1Vyc3B-0000ck-Rc
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 06:46:38 +0000
Received: from [85.158.143.35:29821] by server-1.bemta-4.messagelabs.com id
	BD/50-02132-D4B05C25; Thu, 02 Jan 2014 06:46:37 +0000
X-Env-Sender: manohar.vanga@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388645194!9169336!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10436 invoked from network); 2 Jan 2014 06:46:34 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 06:46:34 -0000
Received: by mail-we0-f174.google.com with SMTP id q58so11953013wes.5
	for <xen-devel@lists.xen.org>; Wed, 01 Jan 2014 22:46:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=XHlisisqB8+XfJagaGfN20HILAIg1CeALGtB6kxyOxc=;
	b=qLxxNzscbjbxTzfDEBBTCpupPve16L7mojczukGebBn0b/LXTDPuhRjW88HK6SdjTl
	gfM8IR9eOsyovXQDLiw5Apfwh6yEojJ1YiHHMQkQpZS6dcAJZLyVPaKcUHKn+FGNeNkU
	Tk4hyst+QYuUzOiZUWUU6JRyx4ULB75w7Ciyany+9xTObHUIhzUfLEjoALwrGap/iKoV
	QHLOzV/0XUv4I6rYtI7ra8YGZmmiue1bFAPXeQapyKhdT/dTBuB9ItZunclvc6FoBgA7
	IyOCwgqh/CZ262CIdxl9dM07Bw6B/yjlVrTkL2WZv4AK2qYbjZedQ2N2jWVSDlUk75PI
	hcmA==
X-Received: by 10.194.189.42 with SMTP id gf10mr55265588wjc.24.1388645194213; 
	Wed, 01 Jan 2014 22:46:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.216.167.66 with HTTP; Wed, 1 Jan 2014 22:46:14 -0800 (PST)
From: Manohar Vanga <manohar.vanga@gmail.com>
Date: Thu, 2 Jan 2014 07:46:14 +0100
Message-ID: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=047d7bb04bd285447604eef72741
Subject: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bb04bd285447604eef72741
Content-Type: multipart/alternative; boundary=047d7bb04bd285447304eef7273f

--047d7bb04bd285447304eef7273f
Content-Type: text/plain; charset=UTF-8

Hi all,

I've spent the last few weeks trying to debug a weird issue with a new
scheduler I'm developing for Xen. I have written a barebones round-robin
scheduler which seems to work fine when starting up Dom0, but then at some
point during the boot everything just hangs (somewhat deterministically
from what I can tell from a week of debugging; see below).

I've inlined my source code below. I don't expect anyone to read the whole
thing (although it's quite minimal) so here are the key points:

   - I've implemented the following callbacks: init_domain, destroy_domain,
   insert_vcpu, remove_vcpu, sleep, wake, yield, pick_cpu, do_schedule, init,
   deinit, alloc_vdata, free_vdata, alloc_pdata, free_pdata, alloc_domdata,
   free_domdata. Most of these are minimal (or in some cases do nothing). Am I
   missing anything critical?
   - The hang occurs even if I'm running Dom0 with just a single vcpu.
   Nothing hangs if I choose a stock scheduler. Either I'm doing something
   foolish that is causing a deadlock (less likely, since the code structure is
   borrowed from sched_credit.c), or I'm *not* doing something I should be,
   leading to Dom0 crashing and the vcpu simply dying.

If you do suspect some specific issue please let me know. Below are some of
the possible issues that I've investigated but hit dead ends on:

   - Checking if my debug printk statements were leading to a deadlock due
   to sleeps in interrupt mode. This doesn't seem to be the case since Dom0
   hangs during boot even if I disable all debug output.
   - I suspected incorrect queuing operations that might be corrupting
   memory somewhere. However, my debug logs tell me that this is not the case.
   There is at most one element in the runqueue at all times (I use Dom0 with
   1 vcpu).
   - I also suspected a deadlock due to incorrect locking. However, based
   on what the credit scheduler does in sched_credit.c, I don't seem to be
   doing anything significantly different. In general though, which callbacks
   run in interrupt context?
   - In the end, I stuck debug statements in tick_suspend and tick_resume
   and after the hang, those get called repeatedly, which suggests the
   physical CPU has gone idle. Is this correct? In that case, *what am I doing
   wrong in the scheduler* to cause Dom0 to crash?
   - The hang occurs around 3-5 seconds into the boot process quite
   deterministically. Could it be some periodic timer going off and
   interacting with my code in weird and wonderful ways?

Also, how do the sleep/wake/yield callbacks work? When do they get called?
Is there any documentation on the different callbacks with regards to when
they are called? If I understand everything correctly after this, I would
gladly create a wiki page explaining this (and perhaps a tutorial on
writing a simple scheduler; something I wish existed!).

I hope the description was enough to help understand my problem. If not,
feel free to ask for more details :-)

Thanks for reading this far! Source code follows

-- 
/mvanga


---------- SOURCE CODE BEGINS ----------
/****************************************************************************
 * (C) 2013 - Manohar Vanga - MPI-SWS
 ****************************************************************************
 *
 *        File: common/sched_xfair.c
 *      Author: Manohar Vanga
 *
 * Description: Table driven scheduler for Xen
 */

#include <xen/config.h>
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/sched.h>
#include <xen/domain.h>
#include <xen/delay.h>
#include <xen/event.h>
#include <xen/time.h>
#include <xen/sched-if.h>
#include <xen/softirq.h>
#include <asm/atomic.h>
#include <asm/div64.h>
#include <xen/errno.h>
#include <xen/keyhandler.h>
#include <xen/trace.h>
#include <xen/list.h>

/* Default timeslice: 30ms */
#define XFAIR_DEFAULT_TSLICE_MS    30

/* Some useful macros */
/* Get the private data from a set of ops */
#define XFAIR_PRIV(_ops)   \
    ((struct xfair_private *)((_ops)->sched_data))
/* Get the PCPU structure for a given CPU number */
#define XFAIR_PCPU(_c)     \
    ((struct xfair_pcpu *)per_cpu(schedule_data, _c).sched_priv)
/* Get the XFair VCPU structure for a given Xen VCPU */
#define XFAIR_VCPU(_vcpu)  ((struct xfair_vcpu *) (_vcpu)->sched_priv)
/* Get the XFair dom structure for a given Xen dom */
#define XFAIR_DOM(_dom)    ((struct xfair_dom *) (_dom)->sched_priv)
/* Get the runqueue for a particular CPU */
#define RUNQ(_cpu)          (&(XFAIR_PCPU(_cpu)->runq))
/* Is the first element of _cpu's runq its idle vcpu? */
#define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
                             is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))


/* Xfair tracing events */
#define TRC_XFAIR_SCHED_START   TRC_SCHED_CLASS_EVT(XFAIR, 1)
#define TRC_XFAIR_SCHED_END     TRC_SCHED_CLASS_EVT(XFAIR, 2)

/* Physical CPU */
struct xfair_pcpu {
    struct list_head runq;
#if 0
    struct timer ticker;
    unsigned int tick;
#endif
};

/* Virtual CPU */
struct xfair_vcpu {
    struct xfair_dom *domain; /* The domain this VCPU belongs to */
    struct vcpu *vcpu; /* The core Xen VCPU structure */
    struct list_head runq_elem; /* List element for adding to runqueue */
};

/* Domain */
struct xfair_dom {
    struct domain *dom; /* The core Xen domain structure */
};

/* System-wide private data */
struct xfair_private {
    spinlock_t lock;
};

static inline int __vcpu_on_runq(struct xfair_vcpu *vcpu)
{
    return !list_empty(&vcpu->runq_elem);
}

static inline struct xfair_vcpu *__runq_elem(struct list_head *elem)
{
    return list_entry(elem, struct xfair_vcpu, runq_elem);
}

static inline void __runq_insert(unsigned int cpu, struct xfair_vcpu *vcpu)
{
    struct list_head *runq = RUNQ(cpu);

    BUG_ON(__vcpu_on_runq(vcpu));
    BUG_ON(cpu != vcpu->vcpu->processor);

    /* Add back at the end of the list */
    list_add_tail(&vcpu->runq_elem, runq);
}

static inline void
__runq_remove(struct xfair_vcpu *vcpu)
{
    BUG_ON(!__vcpu_on_runq(vcpu));
    list_del_init(&vcpu->runq_elem);
}

static inline void print_runq(unsigned int cpu)
{
    struct xfair_vcpu *c;
    struct list_head *runq = RUNQ(cpu);

    debug("RUNQ: ");
    list_for_each_entry(c, runq, runq_elem)
        debug("(%d.%d) ", c->domain->dom->domain_id, c->vcpu->vcpu_id);
    debug("\n");
}

/* Allocate a structure for a physical CPU */
static void *xfair_alloc_pdata(const struct scheduler *ops, int cpu)
{
    struct xfair_pcpu *pcpu;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    /* Allocate per-PCPU info */
    pcpu = xzalloc(struct xfair_pcpu);
    if (pcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&pcpu->runq);
    /* schedule.c expects this to not be NULL (for some reason) */
    if (per_cpu(schedule_data, cpu).sched_priv == NULL)
        per_cpu(schedule_data, cpu).sched_priv = pcpu;

    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));

    return pcpu;
}

static void xfair_free_pdata(const struct scheduler *ops, void *pc, int cpu)
{
    struct xfair_pcpu *pcpu = pc;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    if (pcpu)
        xfree(pcpu);
}

static void *xfair_alloc_vdata(const struct scheduler *ops, struct vcpu *vc,
    void *dd)
{
    struct xfair_vcpu *vcpu;

    /* Allocate per-VCPU info */
    vcpu = xzalloc(struct xfair_vcpu);
    if (vcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&vcpu->runq_elem);
    vcpu->domain = dd;
    vcpu->vcpu = vc;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vc->vcpu_id);

    return vcpu;
}

static void xfair_free_vdata(const struct scheduler *ops, void *vc)
{
    struct xfair_vcpu *vcpu = vc;

    if (!vcpu)
        return;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    BUG_ON(!list_empty(&vcpu->runq_elem));
    xfree(vcpu);
}

static void xfair_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu *vcpu = vc->sched_priv;

    BUG_ON(!vcpu);

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (!vc->is_running && vcpu_runnable(vc) && !__vcpu_on_runq(vcpu))
        __runq_insert(vc->processor, vcpu);
}

static void xfair_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
    struct xfair_dom * const dom = vcpu->domain;

    BUG_ON(!vcpu);

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);

    BUG_ON(dom == NULL);
    BUG_ON(!list_empty(&vcpu->runq_elem));
}

static void xfair_sleep(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));
    BUG_ON(vcpu_runnable(vc));

    /* If the vcpu is the current VCPU on the processor, it is guaranteed
     * not to be on the runqueue of that processor.  However, if it was not
     * the current task and was instead waiting in the runqueue, it must be
     * removed.
     */
    if (curr_on_cpu(vc->processor) == vc)
        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
    else if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);
}

static void xfair_wake(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));

    if (unlikely(curr_on_cpu(vc->processor) == vc)) {
        debug("woke vcpu=%d that is currently running on cpu=%d\n",
              vc->vcpu_id, vc->processor);
        return;
    }

    if (unlikely(__vcpu_on_runq(vcpu))) {
        debug("vcpu=%d is already on runqueue of cpu=%d\n", vc->vcpu_id,
              vc->processor);
        return;
    }

    if (!__vcpu_on_runq(vcpu) && vcpu_runnable(vc) && !vc->is_running)
        __runq_insert(vc->processor, vcpu);

    cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
}

static void xfair_yield(const struct scheduler *ops, struct vcpu *vc)
{
#ifdef RTS_CONFIG_DEBUG
    /* Only used by debug(); debug() must compile away when unset. */
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);
}

static void *xfair_alloc_domdata(const struct scheduler *ops, struct domain
*d)
{
    struct xfair_dom *dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    dom = xzalloc(struct xfair_dom);
    if (dom == NULL)
        return NULL;

    dom->dom = d;

    return (void *)dom;
}

static void xfair_free_domdata(const struct scheduler *ops, void *d)
{
#ifdef RTS_CONFIG_DEBUG
    struct xfair_dom *dom = d;
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", dom->dom->domain_id);

    xfree(d);
}

static int xfair_dom_init(const struct scheduler *ops, struct domain *d)
{
    struct xfair_dom *dom;

    if (is_idle_domain(d))
        return 0;

    dom = xfair_alloc_domdata(ops, d);
    if (dom == NULL)
        return -ENOMEM;

    d->sched_priv = dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    return 0;
}

static void
xfair_dom_destroy(const struct scheduler *ops, struct domain *d)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    xfair_free_domdata(ops, XFAIR_DOM(d));
}

static int xfair_pick_cpu(const struct scheduler *ops, struct vcpu *v)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d, pcpu picked=%d\n", v->vcpu_id, v->processor);
    return v->processor;
}

/*
 * This function is in the critical path. It is designed to be simple and
 * fast for the common case.
 */
static struct task_slice
xfair_schedule(
    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
{
    const int cpu = smp_processor_id();
    struct list_head * const runq = RUNQ(cpu);
    struct xfair_vcpu * const scurr = XFAIR_VCPU(current);
    struct xfair_vcpu *snext;
    struct task_slice ret;
    s_time_t tslice = MILLISECS(XFAIR_DEFAULT_TSLICE_MS);

    /* Add the current VCPU back into the runqueue */
    if (!__vcpu_on_runq(scurr) && vcpu_runnable(current)
        && !is_idle_vcpu(current))
        __runq_insert(cpu, scurr);

    print_runq(cpu);

    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
    if (tasklet_work_scheduled) {
        debug(KERN_INFO "%s: ", __func__);
        debug("tasklet work scheduled. idling.\n");
        snext = XFAIR_VCPU(idle_vcpu[cpu]);
    } else if (!list_empty(runq)) {
        /* Select next runnable local VCPU (i.e. head of the local runq) */
        snext = __runq_elem(runq->next);
        __runq_remove(snext);
    } else {
        snext = XFAIR_VCPU(idle_vcpu[cpu]);
    }

    print_runq(cpu);

    /* Initialize, check and return the task to run next */
    ret.task = snext->vcpu;
    ret.time = is_idle_vcpu(snext->vcpu) ? -1 : tslice;
    ret.migrated = 0;

    if (snext->vcpu != current) {
        debug(KERN_INFO "%s: ", __func__);
        if (!is_idle_vcpu(snext->vcpu))
            debug("CPU %d picked(dom.vcpu)=%d.%d\n", cpu,
                  snext->domain->dom->domain_id, snext->vcpu->vcpu_id);
        else
            debug("CPU %d picked(dom.vcpu)=idle.%d\n", cpu,
                  snext->vcpu->vcpu_id);
    }

    return ret;
}

static int
xfair_init(struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = xzalloc(struct xfair_private);
    if (priv == NULL)
        return -ENOMEM;

    ops->sched_data = priv;
    spin_lock_init(&priv->lock);
    debugtrace_toggle();

    return 0;
}

static void
xfair_deinit(const struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = XFAIR_PRIV(ops);
    if (priv)
        xfree(priv);
}

static struct xfair_private _xfair_priv;

const struct scheduler sched_xfair_def = {
    .name           = "XFair Table Driver Scheduler",
    .opt_name       = "xfair",
    .sched_id       = XEN_SCHEDULER_XFAIR,
    .sched_data     = &_xfair_priv,

    .init_domain    = xfair_dom_init,
    .destroy_domain = xfair_dom_destroy,

    .insert_vcpu    = xfair_vcpu_insert,
    .remove_vcpu    = xfair_vcpu_remove,

    .sleep          = xfair_sleep,
    .wake           = xfair_wake,
    .yield          = xfair_yield,

    .pick_cpu       = xfair_pick_cpu,
    .do_schedule    = xfair_schedule,
    .init           = xfair_init,
    .deinit         = xfair_deinit,

    .alloc_vdata    = xfair_alloc_vdata,
    .free_vdata     = xfair_free_vdata,
    .alloc_pdata    = xfair_alloc_pdata,
    .free_pdata     = xfair_free_pdata,
    .alloc_domdata  = xfair_alloc_domdata,
    .free_domdata   = xfair_free_domdata,
};

---------- SOURCE CODE ENDS ----------


<div><font face=3D"courier new, monospace"><span class=3D"" style=3D"white-=
space:pre">			</span>__runq_insert(vc-&gt;processor, vcpu);</font></div><di=
v><font face=3D"courier new, monospace"><br></font></div><div><font face=3D=
"courier new, monospace"><span class=3D"" style=3D"white-space:pre">	</span=
>cpu_raise_softirq(vc-&gt;processor, SCHEDULE_SOFTIRQ);<span class=3D"" sty=
le=3D"white-space:pre">	</span></font></div>

<div><font face=3D"courier new, monospace">}</font></div><div><font face=3D=
"courier new, monospace"><br></font></div><div><font face=3D"courier new, m=
onospace">static void xfair_yield(const struct scheduler *ops, struct vcpu =
*vc)</font></div>

<div><font face=3D"courier new, monospace">{</font></div><div><font face=3D=
"courier new, monospace">#ifdef RTS_CONFIG_DEBUG</font></div><div><font fac=
e=3D"courier new, monospace"><span class=3D"" style=3D"white-space:pre">	</=
span>struct xfair_vcpu * const vcpu =3D XFAIR_VCPU(vc);</font></div>

<div><font face=3D"courier new, monospace">#endif</font></div><div><font fa=
ce=3D"courier new, monospace"><br></font></div><div><font face=3D"courier n=
ew, monospace">=C2=A0 =C2=A0 debug(KERN_INFO &quot;%s: &quot;, __func__);</=
font></div>
<div>
<font face=3D"courier new, monospace"><span class=3D"" style=3D"white-space=
:pre">	</span>debug(&quot;dom=3D%d, vcpu=3D%d\n&quot;, vcpu-&gt;domain-&gt;=
dom-&gt;domain_id, vcpu-&gt;vcpu-&gt;vcpu_id);</font></div><div><font face=
=3D"courier new, monospace">}=C2=A0</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">static void *xfair_alloc_domdata(const struct s=
cheduler *ops, struct domain *d)</font></div><div><font face=3D"courier new=
, monospace">{</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 struct xfair_dom *=
dom;</font></div><div><font face=3D"courier new, monospace"><br></font></di=
v><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 debug(KERN_INFO =
&quot;%s: &quot;, __func__);</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 debug(&quot;dom=3D=
%d\n&quot;, d-&gt;domain_id);</font></div><div><font face=3D"courier new, m=
onospace"><br></font></div><div><font face=3D"courier new, monospace">=C2=
=A0 =C2=A0 dom =3D xzalloc(struct xfair_dom);</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 if (dom =3D=3D NUL=
L)</font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =C2=
=A0 =C2=A0 return NULL;</font></div><div><font face=3D"courier new, monospa=
ce"><br></font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=
=A0 dom-&gt;dom =3D d;</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 return (void *)dom;</font></div><=
div><font face=3D"courier new, monospace">}</font></div><div><font face=3D"=
courier new, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">static void xfair_f=
ree_domdata(const struct scheduler *ops, void *d)</font></div><div><font fa=
ce=3D"courier new, monospace">{</font></div><div><font face=3D"courier new,=
 monospace">#ifdef RTS_CONFIG_DEBUG</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 struct xfair_dom *=
dom =3D d;</font></div><div><font face=3D"courier new, monospace">#endif</f=
ont></div><div><font face=3D"courier new, monospace"><br></font></div><div>=
<font face=3D"courier new, monospace">=C2=A0 =C2=A0 debug(KERN_INFO &quot;%=
s: &quot;, __func__);</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 debug(&quot;dom=3D=
%d\n&quot;, dom-&gt;dom-&gt;domain_id);</font></div><div><font face=3D"cour=
ier new, monospace"><br></font></div><div><font face=3D"courier new, monosp=
ace">=C2=A0 =C2=A0 xfree(d);</font></div>

<div><font face=3D"courier new, monospace">}</font></div><div><font face=3D=
"courier new, monospace"><br></font></div><div><font face=3D"courier new, m=
onospace">static int xfair_dom_init(const struct scheduler *ops, struct dom=
ain *d)</font></div>

<div><font face=3D"courier new, monospace">{</font></div><div><font face=3D=
"courier new, monospace">=C2=A0 =C2=A0 struct xfair_dom *dom;</font></div><=
div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 if (is_idle_domain(d))</font></di=
v>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =C2=A0 =C2=A0 retu=
rn 0;</font></div><div><font face=3D"courier new, monospace"><br></font></d=
iv><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 dom =3D xfair_a=
lloc_domdata(ops, d);</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 if (dom =3D=3D NUL=
L)</font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =C2=
=A0 =C2=A0 return -ENOMEM;</font></div><div><font face=3D"courier new, mono=
space"><br></font></div><div><font face=3D"courier new, monospace">=C2=A0 =
=C2=A0 d-&gt;sched_priv =3D dom;</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 debug(KERN_INFO &quot;%s: &quot;,=
 __func__);</font></div><div><font face=3D"courier new, monospace">=C2=A0 =
=C2=A0 debug(&quot;dom=3D%d\n&quot;, d-&gt;domain_id);</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 return 0;</font></div><div><font =
face=3D"courier new, monospace">}</font></div><div><font face=3D"courier ne=
w, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">static void</font><=
/div><div><font face=3D"courier new, monospace">xfair_dom_destroy(const str=
uct scheduler *ops, struct domain *d)</font></div><div><font face=3D"courie=
r new, monospace">{</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 debug(KERN_INFO &q=
uot;%s: &quot;, __func__);</font></div><div><font face=3D"courier new, mono=
space">=C2=A0 =C2=A0 debug(&quot;dom=3D%d\n&quot;, d-&gt;domain_id);</font>=
</div><div><font face=3D"courier new, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 xfair=
_free_domdata(ops, XFAIR_DOM(d));</font></div><div><font face=3D"courier ne=
w, monospace">}</font></div><div><font face=3D"courier new, monospace"><br>=
</font></div>

static int xfair_pick_cpu(const struct scheduler *ops, struct vcpu *v)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d, pcpu picked=%d\n", v->vcpu_id, v->processor);
    return v->processor;
}

/*
 * This function is in the critical path. It is designed to be simple and
 * fast for the common case.
 */
static struct task_slice
xfair_schedule(
    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
{
    const int cpu = smp_processor_id();
    struct list_head * const runq = RUNQ(cpu);
    struct xfair_vcpu * const scurr = XFAIR_VCPU(current);
    struct xfair_vcpu *snext;
    struct task_slice ret;
    s_time_t tslice = MILLISECS(30);

    /* Add this VCPU back into the runqueue */
    if (!__vcpu_on_runq(scurr) && vcpu_runnable(current)
            && !is_idle_vcpu(current))
        __runq_insert(cpu, scurr);

    print_runq(cpu);

    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
    if (tasklet_work_scheduled) {
        debug(KERN_INFO "%s: ", __func__);
        debug("tasklet work scheduled. idling.\n");
        snext = XFAIR_VCPU(idle_vcpu[cpu]);
    } else {
        /* Select next runnable local VCPU (i.e. top of local runq) */
        if (!list_empty(runq)) {
            snext = __runq_elem(runq->next);
            if (__vcpu_on_runq(snext))
                __runq_remove(snext);
        } else {
            snext = XFAIR_VCPU(idle_vcpu[cpu]);
        }
    }

    print_runq(cpu);

    /* Initialize, check and return task to run next */
    ret.task = snext->vcpu;
    ret.time = (is_idle_vcpu(snext->vcpu) ? -1 : tslice);
    ret.migrated = 0;

    if (snext && snext->vcpu != current) {
        debug(KERN_INFO "%s: ", __func__);
        if (!is_idle_vcpu(snext->vcpu))
            debug("CPU %d picked(dom.vcpu)=%d.%d\n", cpu,
                  snext->domain->dom->domain_id, snext->vcpu->vcpu_id);
        else
            debug("CPU %d picked(dom.vcpu)=idle.%d\n", cpu,
                  snext->vcpu->vcpu_id);
    }

    return ret;
}

static int
xfair_init(struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = xzalloc(struct xfair_private);
    if (priv == NULL)
        return -ENOMEM;

    ops->sched_data = priv;
    spin_lock_init(&priv->lock);
    debugtrace_toggle();

    return 0;
}

static void
xfair_deinit(const struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = XFAIR_PRIV(ops);
    if (priv)
        xfree(priv);
}

static struct xfair_private _xfair_priv;

const struct scheduler sched_xfair_def = {
    .name           = "XFair Table Driver Scheduler",
    .opt_name       = "xfair",
    .sched_id       = XEN_SCHEDULER_XFAIR,
    .sched_data     = &_xfair_priv,

    .init_domain    = xfair_dom_init,
    .destroy_domain = xfair_dom_destroy,

    .insert_vcpu    = xfair_vcpu_insert,
    .remove_vcpu    = xfair_vcpu_remove,

    .sleep          = xfair_sleep,
    .wake           = xfair_wake,
    .yield          = xfair_yield,

    .pick_cpu       = xfair_pick_cpu,
    .do_schedule    = xfair_schedule,

    .init           = xfair_init,
    .deinit         = xfair_deinit,

    .alloc_vdata    = xfair_alloc_vdata,
    .free_vdata     = xfair_free_vdata,
    .alloc_pdata    = xfair_alloc_pdata,
    .free_pdata     = xfair_free_pdata,
    .alloc_domdata  = xfair_alloc_domdata,
    .free_domdata   = xfair_free_domdata,
};

---------- SOURCE CODE ENDS ----------

Ki8KICAgIGlmICghX192Y3B1X29uX3J1bnEoc2N1cnIpICYmIHZjcHVfcnVubmFibGUoY3VycmVu
dCkKCQkJJiYgIWlzX2lkbGVfdmNwdShjdXJyZW50KSkKCQlfX3J1bnFfaW5zZXJ0KGNwdSwgc2N1
cnIpOwoKCXByaW50X3J1bnEoY3B1KTsKCgkvKiBUYXNrbGV0IHdvcmsgKHdoaWNoIHJ1bnMgaW4g
aWRsZSBWQ1BVIGNvbnRleHQpIG92ZXJyaWRlcyBhbGwgZWxzZS4gKi8KICAgIGlmICh0YXNrbGV0
X3dvcmtfc2NoZWR1bGVkKSB7CiAgICAJZGVidWcoS0VSTl9JTkZPICIlczogIiwgX19mdW5jX18p
OwogICAJCWRlYnVnKCJ0YXNrbGV0IHdvcmsgc2NoZWR1bGVkLiBpZGxpbmcuXG4iKTsKCQlzbmV4
dCA9IFhGQUlSX1ZDUFUoaWRsZV92Y3B1W2NwdV0pOwoJfSBlbHNlIHsKCQkvKiBTZWxlY3QgbmV4
dCBydW5uYWJsZSBsb2NhbCBWQ1BVIChpZSB0b3Agb2YgbG9jYWwgcnVucSkgKi8KCQlpZiAoIWxp
c3RfZW1wdHkocnVucSkpIHsKCQkJc25leHQgPSBfX3J1bnFfZWxlbShydW5xLT5uZXh0KTsKCQkJ
aWYgKF9fdmNwdV9vbl9ydW5xKHNuZXh0KSkKCQkJCV9fcnVucV9yZW1vdmUoc25leHQpOwoJCX0g
ZWxzZSB7CgkJCXNuZXh0ID0gWEZBSVJfVkNQVShpZGxlX3ZjcHVbY3B1XSk7CgkJfQoJfQoKCXBy
aW50X3J1bnEoY3B1KTsKCgkvKiBJbml0aWFsaXplLCBjaGVjayBhbmQgcmV0dXJuIHRhc2sgdG8g
cnVuIG5leHQgKi8KICAgIHJldC50YXNrID0gc25leHQtPnZjcHU7CiAgICByZXQudGltZSA9IChp
c19pZGxlX3ZjcHUoc25leHQtPnZjcHUpID8gLTEgOiB0c2xpY2UpOwogICAgcmV0Lm1pZ3JhdGVk
ID0gMDsKCglpZiAoc25leHQgJiYgc25leHQtPnZjcHUgIT0gY3VycmVudCkgewogICAgCWRlYnVn
KEtFUk5fSU5GTyAiJXM6ICIsIF9fZnVuY19fKTsKCQlpZiAoIWlzX2lkbGVfdmNwdShzbmV4dC0+
dmNwdSkpCiAgICAJCWRlYnVnKCJDUFUgJWQgcGlja2VkKGRvbS52Y3B1KT0lZC4lZFxuIiwgY3B1
LCBzbmV4dC0+ZG9tYWluLT5kb20tPmRvbWFpbl9pZCwgc25leHQtPnZjcHUtPnZjcHVfaWQpOwoJ
CWVsc2UKICAgIAkJZGVidWcoIkNQVSAlZCBwaWNrZWQoZG9tLnZjcHUpPWlkbGUuJWRcbiIsIGNw
dSwgc25leHQtPnZjcHUtPnZjcHVfaWQpOwoJfQoKICAgIHJldHVybiByZXQ7Cn0KCnN0YXRpYyBp
bnQKeGZhaXJfaW5pdChzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMpCnsKICAgIHN0cnVjdCB4ZmFpcl9w
cml2YXRlICpwcml2OwoKICAgIHByaXYgPSB4emFsbG9jKHN0cnVjdCB4ZmFpcl9wcml2YXRlKTsK
ICAgIGlmIChwcml2ID09IE5VTEwpCiAgICAgICAgcmV0dXJuIC1FTk9NRU07CgogICAgb3BzLT5z
Y2hlZF9kYXRhID0gcHJpdjsKICAgIHNwaW5fbG9ja19pbml0KCZwcml2LT5sb2NrKTsKCWRlYnVn
dHJhY2VfdG9nZ2xlKCk7CgogICAgcmV0dXJuIDA7Cn0KCnN0YXRpYyB2b2lkCnhmYWlyX2RlaW5p
dChjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMpCnsKICAgIHN0cnVjdCB4ZmFpcl9wcml2YXRl
ICpwcml2OwoKICAgIHByaXYgPSBYRkFJUl9QUklWKG9wcyk7CiAgICBpZiAocHJpdikKICAgICAg
ICB4ZnJlZShwcml2KTsKfQoKc3RhdGljIHN0cnVjdCB4ZmFpcl9wcml2YXRlIF94ZmFpcl9wcml2
OwoKY29uc3Qgc3RydWN0IHNjaGVkdWxlciBzY2hlZF94ZmFpcl9kZWYgPSB7CiAgICAubmFtZSAg
ICAgICAgICAgPSAiWEZhaXIgVGFibGUgRHJpdmVyIFNjaGVkdWxlciIsCiAgICAub3B0X25hbWUg
ICAgICAgPSAieGZhaXIiLAogICAgLnNjaGVkX2lkICAgICAgID0gWEVOX1NDSEVEVUxFUl9YRkFJ
UiwKICAgIC5zY2hlZF9kYXRhICAgICA9ICZfeGZhaXJfcHJpdiwKCiAgICAuaW5pdF9kb21haW4g
ICAgPSB4ZmFpcl9kb21faW5pdCwKICAgIC5kZXN0cm95X2RvbWFpbiA9IHhmYWlyX2RvbV9kZXN0
cm95LAoKICAgIC5pbnNlcnRfdmNwdSAgICA9IHhmYWlyX3ZjcHVfaW5zZXJ0LAogICAgLnJlbW92
ZV92Y3B1ICAgID0geGZhaXJfdmNwdV9yZW1vdmUsCgoJLnNsZWVwCQkJPSB4ZmFpcl9zbGVlcCwK
CS53YWtlCQkJPSB4ZmFpcl93YWtlLAoJLnlpZWxkCQkJPSB4ZmFpcl95aWVsZCwKCiAgICAucGlj
a19jcHUgICAgICAgPSB4ZmFpcl9waWNrX2NwdSwKICAgIC5kb19zY2hlZHVsZSAgICA9IHhmYWly
X3NjaGVkdWxlLAoJCiAgICAuaW5pdCAgICAgICAgICAgPSB4ZmFpcl9pbml0LAogICAgLmRlaW5p
dCAgICAgICAgID0geGZhaXJfZGVpbml0LAoKICAgIC5hbGxvY192ZGF0YSAgICA9IHhmYWlyX2Fs
bG9jX3ZkYXRhLAogICAgLmZyZWVfdmRhdGEgICAgID0geGZhaXJfZnJlZV92ZGF0YSwKICAgIC5h
bGxvY19wZGF0YSAgICA9IHhmYWlyX2FsbG9jX3BkYXRhLAogICAgLmZyZWVfcGRhdGEgICAgID0g
eGZhaXJfZnJlZV9wZGF0YSwKICAgIC5hbGxvY19kb21kYXRhICA9IHhmYWlyX2FsbG9jX2RvbWRh
dGEsCiAgICAuZnJlZV9kb21kYXRhICAgPSB4ZmFpcl9mcmVlX2RvbWRhdGEsCn07Cg==
--047d7bb04bd285447604eef72741
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bb04bd285447604eef72741--


From xen-devel-bounces@lists.xen.org Thu Jan 02 06:46:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 06:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyc3D-0000cp-K4; Thu, 02 Jan 2014 06:46:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <manohar.vanga@gmail.com>) id 1Vyc3B-0000ck-Rc
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 06:46:38 +0000
Received: from [85.158.143.35:29821] by server-1.bemta-4.messagelabs.com id
	BD/50-02132-D4B05C25; Thu, 02 Jan 2014 06:46:37 +0000
X-Env-Sender: manohar.vanga@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388645194!9169336!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10436 invoked from network); 2 Jan 2014 06:46:34 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 06:46:34 -0000
Received: by mail-we0-f174.google.com with SMTP id q58so11953013wes.5
	for <xen-devel@lists.xen.org>; Wed, 01 Jan 2014 22:46:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=XHlisisqB8+XfJagaGfN20HILAIg1CeALGtB6kxyOxc=;
	b=qLxxNzscbjbxTzfDEBBTCpupPve16L7mojczukGebBn0b/LXTDPuhRjW88HK6SdjTl
	gfM8IR9eOsyovXQDLiw5Apfwh6yEojJ1YiHHMQkQpZS6dcAJZLyVPaKcUHKn+FGNeNkU
	Tk4hyst+QYuUzOiZUWUU6JRyx4ULB75w7Ciyany+9xTObHUIhzUfLEjoALwrGap/iKoV
	QHLOzV/0XUv4I6rYtI7ra8YGZmmiue1bFAPXeQapyKhdT/dTBuB9ItZunclvc6FoBgA7
	IyOCwgqh/CZ262CIdxl9dM07Bw6B/yjlVrTkL2WZv4AK2qYbjZedQ2N2jWVSDlUk75PI
	hcmA==
X-Received: by 10.194.189.42 with SMTP id gf10mr55265588wjc.24.1388645194213; 
	Wed, 01 Jan 2014 22:46:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.216.167.66 with HTTP; Wed, 1 Jan 2014 22:46:14 -0800 (PST)
From: Manohar Vanga <manohar.vanga@gmail.com>
Date: Thu, 2 Jan 2014 07:46:14 +0100
Message-ID: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=047d7bb04bd285447604eef72741
Subject: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bb04bd285447604eef72741
Content-Type: multipart/alternative; boundary=047d7bb04bd285447304eef7273f

--047d7bb04bd285447304eef7273f
Content-Type: text/plain; charset=UTF-8

Hi all,

I've spent the last few weeks trying to debug a weird issue with a new
scheduler I'm developing for Xen. I have written a barebones round-robin
scheduler which seems to work fine when starting up Dom0, but then at some
point during the boot everything just hangs (somewhat deterministically
from what I can tell from a week of debugging; see below).

I've inlined my source code below. I don't expect anyone to read the whole
thing (although it's quite minimal) so here are the key points:

   - I've implemented the following callbacks: init_domain, destroy_domain,
   insert_vcpu, remove_vcpu, sleep, wake, yield, pick_cpu, do_schedule, init,
   deinit, alloc_vdata, free_vdata, alloc_pdata, free_pdata, alloc_domdata,
   free_domdata. Most of these are minimal (or in some cases do nothing). Am I
   missing anything critical?
   - The hang occurs even if I'm running Dom0 with just a single vcpu.
   Nothing hangs if I choose a stock scheduler. Either I'm doing something
   foolish that is causing a deadlock (less likely, since the code structure
   is borrowed from sched_credit.c), or I'm *not* doing something I should
   be, leading to Dom0 crashing and the vcpu simply dying.

If you do suspect some specific issue please let me know. Below are some of
the possible issues that I've investigated but hit dead ends on:

   - Checking if my debug printk statements were leading to a deadlock due
   to sleeps in interrupt mode. This doesn't seem to be the case since Dom0
   hangs during boot even if I disable all debug output.
   - I suspected incorrect queuing operations that might be corrupting
   memory somewhere. However, my debug logs tell me that this is not the case.
   There is at most one element in the runqueue at all times (I use Dom0 with
   1 vcpu).
   - I also suspected a deadlock due to incorrect locking. However, based
   on what the credit scheduler does in sched_credit.c, I don't seem to be
   doing anything significantly different. In general though, which callbacks
   run in interrupt context?
   - In the end, I stuck debug statements in tick_suspend and tick_resume;
   after the hang, those get called continuously, which suggests the
   physical CPU has gone idle. Is this correct? In that case, *what am I
   doing wrong in the scheduler* to cause Dom0 to crash?
   - The hang occurs around 3-5 seconds into the boot process, quite
   deterministically. Could it be some periodic timer going off and
   interfering with my code in weird and wonderful ways?
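
One thing I did convince myself of while checking the queuing code: the
membership test `__vcpu_on_runq()` (i.e. `!list_empty(&vcpu->runq_elem)`)
is only valid because every removal goes through `list_del_init()`, which
re-points the element at itself. Here is the tiny standalone sketch I used
to convince myself (a hand-rolled stand-in for the list.h primitives, not
the actual Xen implementation):

```c
#include <assert.h>

/* Hand-rolled stand-in for Xen's <xen/list.h> intrusive list primitives
 * (same semantics, simplified; not the real implementation). */
struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *e, struct list_head *h)
{
    e->prev = h->prev;
    e->next = h;
    h->prev->next = e;
    h->prev = e;
}

/* Unlink AND re-initialize the element to point at itself, so that a
 * later list_empty(&elem) correctly reports "not queued". A plain
 * list_del() would leave stale pointers and break the membership test. */
static void list_del_init(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
    INIT_LIST_HEAD(e);
}

struct toy_vcpu { struct list_head runq_elem; };

/* Same membership test my scheduler uses. */
static int toy_on_runq(struct toy_vcpu *v)
{
    return !list_empty(&v->runq_elem);
}
```

So as far as I can tell, the single-element runqueue bookkeeping itself
is sound.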

Also, how do the sleep/wake/yield callbacks work? When do they get called?
Is there any documentation on the different callbacks with regard to when
they are called? If I understand everything correctly after this, I would
gladly create a wiki page explaining this (and perhaps a tutorial on
writing a simple scheduler; something I wish existed!).
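
For what it's worth, here is my current (possibly wrong) mental model of
how the generic layer drives the sleep/wake hooks, written as a standalone
stub so I could reason about it. The names mirror vcpu_sleep_nosync() and
vcpu_wake() in common/schedule.c, but the types and guards are simplified
guesses, not real Xen code:

```c
#include <assert.h>

/* Simplified model of a vcpu and a scheduler's sleep/wake hooks. */
struct model_vcpu { int runnable; };

static int sleep_hook_calls, wake_hook_calls;

static void ops_sleep(struct model_vcpu *v) { (void)v; sleep_hook_calls++; }
static void ops_wake(struct model_vcpu *v)  { (void)v; wake_hook_calls++; }

/* Model of vcpu_sleep_nosync(): runs when a vcpu blocks or is paused;
 * the scheduler's sleep hook fires only if the vcpu is not runnable.
 * (My understanding is that in Xen this happens under the per-cpu
 * schedule lock with IRQs disabled.) */
static void model_vcpu_sleep(struct model_vcpu *v)
{
    if (!v->runnable)
        ops_sleep(v);
}

/* Model of vcpu_wake(): runs when an event makes a vcpu runnable again;
 * the wake hook fires only if the vcpu is actually runnable. */
static void model_vcpu_wake(struct model_vcpu *v)
{
    if (v->runnable)
        ops_wake(v);
}
```

If this model is wrong I'd love to be corrected; it would make a good
first section for such a wiki page.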

I hope the description was enough to help understand my problem. If not,
feel free to ask for more details :-)

Thanks for reading this far! Source code follows

-- 
/mvanga


---------- SOURCE CODE BEGINS ----------
/****************************************************************************
 * (C) 2013 - Manohar Vanga - MPI-SWS
 ****************************************************************************
 *
 *        File: common/sched_xfair.c
 *      Author: Manohar Vanga
 *
 * Description: Table driven scheduler for Xen
 */

#include <xen/config.h>
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/sched.h>
#include <xen/domain.h>
#include <xen/delay.h>
#include <xen/event.h>
#include <xen/time.h>
#include <xen/sched-if.h>
#include <xen/softirq.h>
#include <asm/atomic.h>
#include <asm/div64.h>
#include <xen/errno.h>
#include <xen/keyhandler.h>
#include <xen/trace.h>
#include <xen/list.h>

/* Default timeslice: 30ms */
#define XFAIR_DEFAULT_TSLICE_MS    30

/* Some useful macros */
/* Get the private data from a set of ops */
#define XFAIR_PRIV(_ops)   \
    ((struct xfair_private *)((_ops)->sched_data))
/* Get the PCPU structure for a given CPU number */
#define XFAIR_PCPU(_c)     \
    ((struct xfair_pcpu *)per_cpu(schedule_data, _c).sched_priv)
/* Get the XFair VCPU structure for a given Xen VCPU */
#define XFAIR_VCPU(_vcpu)  ((struct xfair_vcpu *) (_vcpu)->sched_priv)
/* Get the XFair dom structure for a given Xen dom */
#define XFAIR_DOM(_dom)    ((struct xfair_dom *) (_dom)->sched_priv)
/* Get the runqueue for a particular CPU */
#define RUNQ(_cpu)          (&(XFAIR_PCPU(_cpu)->runq))
/* Is the first element of _cpu's runq its idle vcpu? */
#define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
                             is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))


/* Xfair tracing events */
#define TRC_XFAIR_SCHED_START   TRC_SCHED_CLASS_EVT(XFAIR, 1)
#define TRC_XFAIR_SCHED_END     TRC_SCHED_CLASS_EVT(XFAIR, 2)

/* Physical CPU */
struct xfair_pcpu {
    struct list_head runq;
#if 0
    struct timer ticker;
    unsigned int tick;
#endif
};

/* Virtual CPU */
struct xfair_vcpu {
    struct xfair_dom *domain; /* The domain this VCPU belongs to */
    struct vcpu *vcpu; /* The core Xen VCPU structure */
    struct list_head runq_elem; /* List element for adding to runqueue */
};

/* Domain */
struct xfair_dom {
    struct domain *dom; /* The core Xen domain structure */
};

/* System-wide private data */
struct xfair_private {
    spinlock_t lock;
};

static inline int __vcpu_on_runq(struct xfair_vcpu *vcpu)
{
    return !list_empty(&vcpu->runq_elem);
}

static inline struct xfair_vcpu *__runq_elem(struct list_head *elem)
{
    return list_entry(elem, struct xfair_vcpu, runq_elem);
}

static inline void __runq_insert(unsigned int cpu, struct xfair_vcpu *vcpu)
{
    struct list_head *runq = RUNQ(cpu);

    BUG_ON(__vcpu_on_runq(vcpu));
    BUG_ON(cpu != vcpu->vcpu->processor);

    /* Add back at the end of the list */
    list_add_tail(&vcpu->runq_elem, runq);
}

static inline void
__runq_remove(struct xfair_vcpu *vcpu)
{
    BUG_ON(!__vcpu_on_runq(vcpu));
    list_del_init(&vcpu->runq_elem);
}

static inline void print_runq(unsigned int cpu)
{
    struct xfair_vcpu *c;
    struct list_head *runq = RUNQ(cpu);

    debug("RUNQ: ");
    list_for_each_entry(c, runq, runq_elem)
        debug("(%d.%d) ", c->domain->dom->domain_id, c->vcpu->vcpu_id);
    debug("\n");
}

/* Allocate a structure for a physical CPU */
static void *xfair_alloc_pdata(const struct scheduler *ops, int cpu)
{
    struct xfair_pcpu *pcpu;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    /* Allocate per-PCPU info */
    pcpu = xzalloc(struct xfair_pcpu);
    if (pcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&pcpu->runq);
    /* schedule.c expects this to not be NULL (for some reason) */
    if (per_cpu(schedule_data, cpu).sched_priv == NULL)
        per_cpu(schedule_data, cpu).sched_priv = pcpu;

    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));

    return pcpu;
}

static void xfair_free_pdata(const struct scheduler *ops, void *pc, int cpu)
{
    struct xfair_pcpu *pcpu = pc;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    if (pcpu)
        xfree(pcpu);
}

static void *xfair_alloc_vdata(const struct scheduler *ops, struct vcpu *vc,
    void *dd)
{
    struct xfair_vcpu *vcpu;

    /* Allocate per-VCPU info */
    vcpu = xzalloc(struct xfair_vcpu);
    if (vcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&vcpu->runq_elem);
    vcpu->domain = dd;
    vcpu->vcpu = vc;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vc->vcpu_id);

    return vcpu;
}

static void xfair_free_vdata(const struct scheduler *ops, void *vc)
{
    struct xfair_vcpu *vcpu = vc;

    if (!vcpu)
        return;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    BUG_ON(!list_empty(&vcpu->runq_elem));
    xfree(vcpu);
}

static void xfair_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu *vcpu = vc->sched_priv;

    BUG_ON(!vcpu);

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (!vc->is_running && vcpu_runnable(vc) && !__vcpu_on_runq(vcpu))
        __runq_insert(vc->processor, vcpu);
}

static void xfair_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
    struct xfair_dom *dom;

    /* Check the pointer before dereferencing it for the domain. */
    BUG_ON(!vcpu);
    dom = vcpu->domain;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);

    BUG_ON(dom == NULL);
    BUG_ON(!list_empty(&vcpu->runq_elem));
}

static void xfair_sleep(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));
    BUG_ON(vcpu_runnable(vc));

    /*
     * If the vcpu is the current VCPU on the processor, it is guaranteed
     * not to be on the runqueue of that processor. If it was instead
     * waiting in the runqueue, it must be removed.
     */
    if (curr_on_cpu(vc->processor) == vc)
        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
    else if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);
}

static void xfair_wake(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));

    if (unlikely(curr_on_cpu(vc->processor) == vc)) {
        debug("woke vcpu=%d that is currently running on cpu=%d\n",
              vc->vcpu_id, vc->processor);
        return;
    }

    if (unlikely(__vcpu_on_runq(vcpu))) {
        debug("vcpu=%d is already on runqueue of cpu=%d\n", vc->vcpu_id,
              vc->processor);
        return;
    }

    /* Not current and not queued: insert if runnable, then reschedule. */
    if (vcpu_runnable(vc) && !vc->is_running)
        __runq_insert(vc->processor, vcpu);

    cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
}

static void xfair_yield(const struct scheduler *ops, struct vcpu *vc)
{
#ifdef RTS_CONFIG_DEBUG
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
          vcpu->vcpu->vcpu_id);
}

static void *xfair_alloc_domdata(const struct scheduler *ops,
                                 struct domain *d)
{
    struct xfair_dom *dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    dom = xzalloc(struct xfair_dom);
    if (dom == NULL)
        return NULL;

    dom->dom = d;

    return (void *)dom;
}

static void xfair_free_domdata(const struct scheduler *ops, void *d)
{
#ifdef RTS_CONFIG_DEBUG
    struct xfair_dom *dom = d;
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", dom->dom->domain_id);

    xfree(d);
}

static int xfair_dom_init(const struct scheduler *ops, struct domain *d)
{
    struct xfair_dom *dom;

    if (is_idle_domain(d))
        return 0;

    dom = xfair_alloc_domdata(ops, d);
    if (dom == NULL)
        return -ENOMEM;

    d->sched_priv = dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    return 0;
}

static void
xfair_dom_destroy(const struct scheduler *ops, struct domain *d)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    xfair_free_domdata(ops, XFAIR_DOM(d));
}

static int xfair_pick_cpu(const struct scheduler *ops, struct vcpu *v)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d, pcpu picked=%d\n", v->vcpu_id, v->processor);
    return v->processor;
}

/*
 * This function is in the critical path. It is designed to be simple and
 * fast for the common case.
 */
static struct task_slice
xfair_schedule(
    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
{
    const int cpu = smp_processor_id();
    struct list_head * const runq = RUNQ(cpu);
    struct xfair_vcpu * const scurr = XFAIR_VCPU(current);
    struct xfair_vcpu *snext;
    struct task_slice ret;
    s_time_t tslice = MILLISECS(XFAIR_DEFAULT_TSLICE_MS);

    /* Add this VCPU back into the runqueue */
    if (!__vcpu_on_runq(scurr) && vcpu_runnable(current)
            && !is_idle_vcpu(current))
        __runq_insert(cpu, scurr);

    print_runq(cpu);

    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
    if (tasklet_work_scheduled) {
        debug(KERN_INFO "%s: ", __func__);
        debug("tasklet work scheduled. idling.\n");
        snext = XFAIR_VCPU(idle_vcpu[cpu]);
    } else {
        /* Select next runnable local VCPU (ie top of local runq) */
        if (!list_empty(runq)) {
            snext = __runq_elem(runq->next);
            if (__vcpu_on_runq(snext))
                __runq_remove(snext);
        } else {
            snext = XFAIR_VCPU(idle_vcpu[cpu]);
        }
    }

    print_runq(cpu);

    /* Initialize, check and return task to run next */
    ret.task = snext->vcpu;
    ret.time = (is_idle_vcpu(snext->vcpu) ? -1 : tslice);
    ret.migrated = 0;

    if (snext && snext->vcpu != current) {
        debug(KERN_INFO "%s: ", __func__);
        if (!is_idle_vcpu(snext->vcpu))
            debug("CPU %d picked(dom.vcpu)=%d.%d\n", cpu,
                  snext->domain->dom->domain_id, snext->vcpu->vcpu_id);
        else
            debug("CPU %d picked(dom.vcpu)=idle.%d\n", cpu,
                  snext->vcpu->vcpu_id);
    }

    return ret;
}

static int
xfair_init(struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = xzalloc(struct xfair_private);
    if (priv == NULL)
        return -ENOMEM;

    ops->sched_data = priv;
    spin_lock_init(&priv->lock);
    debugtrace_toggle();

    return 0;
}

static void
xfair_deinit(const struct scheduler *ops)
{
    struct xfair_private *priv;

    priv = XFAIR_PRIV(ops);
    if (priv)
        xfree(priv);
}

static struct xfair_private _xfair_priv;

const struct scheduler sched_xfair_def = {
    .name           = "XFair Table Driver Scheduler",
    .opt_name       = "xfair",
    .sched_id       = XEN_SCHEDULER_XFAIR,
    .sched_data     = &_xfair_priv,

    .init_domain    = xfair_dom_init,
    .destroy_domain = xfair_dom_destroy,

    .insert_vcpu    = xfair_vcpu_insert,
    .remove_vcpu    = xfair_vcpu_remove,

    .sleep          = xfair_sleep,
    .wake           = xfair_wake,
    .yield          = xfair_yield,

    .pick_cpu       = xfair_pick_cpu,
    .do_schedule    = xfair_schedule,

    .init           = xfair_init,
    .deinit         = xfair_deinit,

    .alloc_vdata    = xfair_alloc_vdata,
    .free_vdata     = xfair_free_vdata,
    .alloc_pdata    = xfair_alloc_pdata,
    .free_pdata     = xfair_free_pdata,
    .alloc_domdata  = xfair_alloc_domdata,
    .free_domdata   = xfair_free_domdata,
};

---------- SOURCE CODE ENDS ----------

--047d7bb04bd285447304eef7273f
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi all,<div><br></div><div>I&#39;ve spent the last few wee=
ks trying to debug a weird issue with a new scheduler I&#39;m developing fo=
r Xen. I have written a barebones round-robin scheduler which seems to work=
 fine when starting up Dom0, but then at some point during the boot everyth=
ing just hangs (somewhat deterministically from what I can tell from a week=
 of debugging; see below).</div>

<div><br></div><div>I&#39;ve inlined my source code below. I don&#39;t expe=
ct anyone to read the whole thing (although it&#39;s quite minimal) so here=
 are the key points:</div><div><ul><li>I&#39;ve implemented the following c=
allbacks: init_domain, destroy_domain, insert_vcpu, remove_vcpu, sleep, wak=
e, yield, pick_cpu, do_schedule, init, deinit, alloc_vdata, free_vdata, all=
oc_pdata, free_pdata, alloc_domdata, free_domdata. Most of these are minima=
l (or in some cases do nothing). Am I missing anything critical?</li>

<li>The hang occurs even if I&#39;m running Dom0 with just a single vcpu. N=
othing hangs if I choose a stock scheduler. Either I&#39;m doing something =
foolish that is causing a deadlock (less likely since the code structure is=
 borrowed from sched_credit.c) or I&#39;m *not* doing something leading to =
Dom0 crashing and the vcpu just dying.</li>

</ul><div>If you do suspect some specific issue please let me know. Below a=
re some of the possible issues that I&#39;ve investigated but hit dead ends=
 on:</div></div><div><ul><li>Checking if my debug printk statements were le=
ading to a deadlock due to sleeps in interrupt mode. This doesn&#39;t seem =
to be the case since Dom0 hangs during boot even if I disable all debug out=
put.</li>

<li>I suspected incorrect queuing operations that might be corrupting memor=
y somewhere. However, my debug logs tell me that this is not the case. Ther=
e is at most one element in the runqueue at all times (I use Dom0 with 1 vc=
pu).</li>

<li>I also suspected a deadlock due to incorrect locking. However, based on=
 what the credit scheduler does in sched_credit.c, I&#39;m don&#39;t seem t=
o be doing anything significantly different. In general though, which callb=
acks run in interrupt context?</li>

<li>In the end, I stuck debug statements in tick_suspend and tick_resume an=
d after the hang, those get called infinitely which seems like the physical=
 CPU has gone idle. Is this correct? In that case, *what am I doing wrong i=
n the scheduler* to cause Dom0 to crash?</li>

<li>The hang occurs around 3-5 seconds into the boot process quite determin=
istically. Could it be some periodic timer going off and bugging with my co=
de in weird and wonderful ways?</li></ul><div>Also, how do the sleep/wake/y=
ield callbacks work? When do they get called? Is there any documentation on=
 the different callbacks with regards to when they are called? If I underst=
and everything correctly after this, I would gladly create a wiki page expl=
aining this (and perhaps a tutorial on writing a simple scheduler; somethin=
g I wish existed!).</div>

<div><br></div><div>I hope the description was enough to help understand my=
 problem. If not, feel free to ask for more details :-)</div></div><div><br=
></div><div>Thanks for reading this far! Source code follows</div><div>

<div><br></div>-- <br>/mvanga</div><div><br></div><div><br></div><div>-----=
----- SOURCE CODE BEGINS ----------</div><div><div><font face=3D"courier ne=
w, monospace">/************************************************************=
****************</font></div>

<div><font face=3D"courier new, monospace">=C2=A0* (C) 2013 - Manohar Vanga=
 - MPI-SWS</font></div><div><font face=3D"courier new, monospace">=C2=A0***=
*************************************************************************</=
font></div>

<div><font face=3D"courier new, monospace">=C2=A0*</font></div><div><font f=
ace=3D"courier new, monospace">=C2=A0* =C2=A0 =C2=A0 =C2=A0 =C2=A0File: com=
mon/sched_xfair.c</font></div><div><font face=3D"courier new, monospace">=
=C2=A0* =C2=A0 =C2=A0 =C2=A0Author: Manohar Vanga</font></div>

<div><font face=3D"courier new, monospace">=C2=A0*</font></div><div><font f=
ace=3D"courier new, monospace">=C2=A0* Description: Table driven scheduler =
for Xen</font></div><div><font face=3D"courier new, monospace">=C2=A0*/</fo=
nt></div><div><font face=3D"courier new, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">#include &lt;xen/co=
nfig.h&gt;</font></div><div><font face=3D"courier new, monospace">#include =
&lt;xen/init.h&gt;</font></div><div><font face=3D"courier new, monospace">#=
include &lt;xen/lib.h&gt;</font></div>

<div><font face=3D"courier new, monospace">#include &lt;xen/sched.h&gt;</fo=
nt></div><div><font face=3D"courier new, monospace">#include &lt;xen/domain=
.h&gt;</font></div><div><font face=3D"courier new, monospace">#include &lt;=
xen/delay.h&gt;</font></div>

<div><font face=3D"courier new, monospace">#include &lt;xen/event.h&gt;</fo=
nt></div><div><font face=3D"courier new, monospace">#include &lt;xen/time.h=
&gt;</font></div><div><font face=3D"courier new, monospace">#include &lt;xe=
n/sched-if.h&gt;</font></div>

<div><font face=3D"courier new, monospace">#include &lt;xen/softirq.h&gt;</=
font></div><div><font face=3D"courier new, monospace">#include &lt;asm/atom=
ic.h&gt;</font></div><div><font face=3D"courier new, monospace">#include &l=
t;asm/div64.h&gt;</font></div>

<div><font face=3D"courier new, monospace">#include &lt;xen/errno.h&gt;</fo=
nt></div><div><font face=3D"courier new, monospace">#include &lt;xen/keyhan=
dler.h&gt;</font></div><div><font face=3D"courier new, monospace">#include =
&lt;xen/trace.h&gt;</font></div>

<div><font face=3D"courier new, monospace">#include &lt;xen/list.h&gt;</fon=
t></div><div><font face=3D"courier new, monospace"><br></font></div><div><f=
ont face=3D"courier new, monospace">/* Default timeslice: 30ms */</font></d=
iv>

<div><font face=3D"courier new, monospace">#define XFAIR_DEFAULT_TSLICE_MS =
=C2=A0 =C2=A030</font></div><div><font face=3D"courier new, monospace"><br>=
</font></div><div><font face=3D"courier new, monospace">/* Some useful macr=
os */</font></div>

<div><font face=3D"courier new, monospace">/* Get the private data from a s=
et of ops */</font></div><div><font face=3D"courier new, monospace">#define=
 XFAIR_PRIV(_ops) =C2=A0 \</font></div><div><font face=3D"courier new, mono=
space">=C2=A0 =C2=A0 ((struct xfair_private *)((_ops)-&gt;sched_data))</fon=
t></div>

<div><font face=3D"courier new, monospace">/* Get the PCPU structure for a =
given CPU number */</font></div><div><font face=3D"courier new, monospace">=
#define XFAIR_PCPU(_c) =C2=A0 =C2=A0 \</font></div><div><font face=3D"couri=
er new, monospace">=C2=A0 =C2=A0 ((struct xfair_pcpu *)per_cpu(schedule_dat=
a, _c).sched_priv)</font></div>

<div><font face=3D"courier new, monospace">/* Get the XFair VCPU structure =
for a given Xen VCPU */</font></div><div><font face=3D"courier new, monospa=
ce">#define XFAIR_VCPU(_vcpu) =C2=A0((struct xfair_vcpu *) (_vcpu)-&gt;sche=
d_priv)</font></div>

<div><font face=3D"courier new, monospace">/* Get the XFair dom structure f=
or a given Xen dom */</font></div><div><font face=3D"courier new, monospace=
">#define XFAIR_DOM(_dom) =C2=A0 =C2=A0((struct xfair_dom *) (_dom)-&gt;sch=
ed_priv)</font></div>

<div><font face=3D"courier new, monospace">/* Get the runqueue for a partic=
ular CPU */</font></div><div><font face=3D"courier new, monospace">#define =
RUNQ(_cpu) =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0(&amp;(XFAIR_PCPU(_cpu)-&gt;ru=
nq))</font></div><div><font face=3D"courier new, monospace">/* Is the first=
 element of _cpu&#39;s runq its idle vcpu? */</font></div>

<div><font face=3D"courier new, monospace">#define IS_RUNQ_IDLE(_cpu) =C2=
=A0(list_empty(RUNQ(_cpu)) || \</font></div><div><font face=3D"courier new,=
 monospace">=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0is_idle_vcpu(__runq_elem(RUNQ(_cpu=
)-&gt;next)-&gt;vcpu))</font></div>

/* Xfair tracing events */
#define TRC_XFAIR_SCHED_START   TRC_SCHED_CLASS_EVT(XFAIR, 1)
#define TRC_XFAIR_SCHED_END     TRC_SCHED_CLASS_EVT(XFAIR, 2)

/* Physical CPU */
struct xfair_pcpu {
    struct list_head runq;
#if 0
    struct timer ticker;
    unsigned int tick;
#endif
};

/* Virtual CPU */
struct xfair_vcpu {
    struct xfair_dom *domain;      /* The domain this VCPU belongs to */
    struct vcpu *vcpu;             /* The core Xen VCPU structure */
    struct list_head runq_elem;    /* List element for adding to runqueue */
};

/* Domain */
struct xfair_dom {
    struct domain *dom;            /* The core Xen domain structure */
};

/* System-wide private data */
struct xfair_private {
    spinlock_t lock;
};

static inline int __vcpu_on_runq(struct xfair_vcpu *vcpu)
{
    return !list_empty(&vcpu->runq_elem);
}

static inline struct xfair_vcpu *__runq_elem(struct list_head *elem)
{
    return list_entry(elem, struct xfair_vcpu, runq_elem);
}

static inline void __runq_insert(unsigned int cpu, struct xfair_vcpu *vcpu)
{
    struct list_head *runq = RUNQ(cpu);

    BUG_ON(__vcpu_on_runq(vcpu));
    BUG_ON(cpu != vcpu->vcpu->processor);

    /* Add back at the end of the list */
    list_add_tail(&vcpu->runq_elem, runq);
}

static inline void
__runq_remove(struct xfair_vcpu *vcpu)
{
    BUG_ON(!__vcpu_on_runq(vcpu));
    list_del_init(&vcpu->runq_elem);
}

static inline void print_runq(unsigned int cpu)
{
    struct xfair_vcpu *c;
    struct list_head *runq = RUNQ(cpu);

    debug("RUNQ: ");
    list_for_each_entry(c, runq, runq_elem)
        debug("(%d.%d) ", c->domain->dom->domain_id, c->vcpu->vcpu_id);
    debug("\n");
}

/* Allocate a structure for a physical CPU */
static void *xfair_alloc_pdata(const struct scheduler *ops, int cpu)
{
    struct xfair_pcpu *pcpu;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    /* Allocate per-PCPU info */
    pcpu = xzalloc(struct xfair_pcpu);
    if (pcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&pcpu->runq);

    /* schedule.c expects this to not be NULL (for some reason) */
    if (per_cpu(schedule_data, cpu).sched_priv == NULL)
        per_cpu(schedule_data, cpu).sched_priv = pcpu;

    BUG_ON(!is_idle_vcpu(curr_on_cpu(cpu)));

    return pcpu;
}

static void xfair_free_pdata(const struct scheduler *ops, void *pc, int cpu)
{
    struct xfair_pcpu *pcpu = pc;

    debug(KERN_INFO "%s: ", __func__);
    debug("cpu=%d\n", cpu);

    if (pcpu)
        xfree(pcpu);
}

static void *xfair_alloc_vdata(const struct scheduler *ops, struct vcpu *vc,
    void *dd)
{
    struct xfair_vcpu *vcpu;

    /* Allocate per-VCPU info */
    vcpu = xzalloc(struct xfair_vcpu);
    if (vcpu == NULL)
        return NULL;

    INIT_LIST_HEAD(&vcpu->runq_elem);
    vcpu->domain = dd;
    vcpu->vcpu = vc;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vc->vcpu_id);

    return vcpu;
}

static void xfair_free_vdata(const struct scheduler *ops, void *vc)
{
    struct xfair_vcpu *vcpu = vc;

    if (!vcpu)
        return;

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    BUG_ON(!list_empty(&vcpu->runq_elem));
    xfree(vcpu);
}

static void xfair_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu *vcpu = vc->sched_priv;

    BUG_ON(!vcpu);

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (!vc->is_running && vcpu_runnable(vc) && !__vcpu_on_runq(vcpu))
        __runq_insert(vc->processor, vcpu);
}

static void xfair_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
    struct xfair_dom * const dom = vcpu->domain;

    BUG_ON(!vcpu);

    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d\n", vcpu->vcpu->vcpu_id);

    if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);

    BUG_ON(dom == NULL);
    BUG_ON(!list_empty(&vcpu->runq_elem));
}

static void xfair_sleep(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
        vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));
    BUG_ON(vcpu_runnable(vc));

    /* If the vcpu is the current VCPU on the processor, it is guaranteed
     * not to be on that processor's runqueue. If it was instead waiting
     * in the runqueue, it must be removed. */
    if (curr_on_cpu(vc->processor) == vc)
        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
    else if (__vcpu_on_runq(vcpu))
        __runq_remove(vcpu);
}

static void xfair_wake(const struct scheduler *ops, struct vcpu *vc)
{
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
        vcpu->vcpu->vcpu_id);

    BUG_ON(is_idle_vcpu(vc));

    if (unlikely(curr_on_cpu(vc->processor) == vc)) {
        debug("woke vcpu=%d that is currently running on cpu=%d\n",
            vc->vcpu_id, vc->processor);
        return;
    }

    if (unlikely(__vcpu_on_runq(vcpu))) {
        debug("vcpu=%d is already on runqueue of cpu=%d\n",
            vc->vcpu_id, vc->processor);
        return;
    }

    if (!__vcpu_on_runq(vcpu) && vcpu_runnable(vc) && !vc->is_running)
        __runq_insert(vc->processor, vcpu);

    cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
}

static void xfair_yield(const struct scheduler *ops, struct vcpu *vc)
{
#ifdef RTS_CONFIG_DEBUG
    struct xfair_vcpu * const vcpu = XFAIR_VCPU(vc);
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d, vcpu=%d\n", vcpu->domain->dom->domain_id,
        vcpu->vcpu->vcpu_id);
}

static void *xfair_alloc_domdata(const struct scheduler *ops, struct domain *d)
{
    struct xfair_dom *dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    dom = xzalloc(struct xfair_dom);
    if (dom == NULL)
        return NULL;

    dom->dom = d;

    return (void *)dom;
}

static void xfair_free_domdata(const struct scheduler *ops, void *d)
{
#ifdef RTS_CONFIG_DEBUG
    struct xfair_dom *dom = d;
#endif

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", dom->dom->domain_id);

    xfree(d);
}

static int xfair_dom_init(const struct scheduler *ops, struct domain *d)
{
    struct xfair_dom *dom;

    if (is_idle_domain(d))
        return 0;

    dom = xfair_alloc_domdata(ops, d);
    if (dom == NULL)
        return -ENOMEM;

    d->sched_priv = dom;

    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    return 0;
}

static void
xfair_dom_destroy(const struct scheduler *ops, struct domain *d)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("dom=%d\n", d->domain_id);

    xfair_free_domdata(ops, XFAIR_DOM(d));
}

static int xfair_pick_cpu(const struct scheduler *ops, struct vcpu *v)
{
    debug(KERN_INFO "%s: ", __func__);
    debug("vcpu=%d, pcpu picked=%d\n", v->vcpu_id, v->processor);
    return v->processor;
}

/*
 * This function is in the critical path. It is designed to be simple and
 * fast for the common case.
 */
static struct task_slice
xfair_schedule(
    const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
{
    const int cpu = smp_processor_id();
    struct list_head * const runq = RUNQ(cpu);
    struct xfair_vcpu * const scurr = XFAIR_VCPU(current);
    struct xfair_vcpu *snext;
    struct task_slice ret;
    s_time_t tslice = MILLISECS(30);

    /* Add this VCPU back into the runqueue */
    if (!__vcpu_on_runq(scurr) && vcpu_runnable(current)
            && !is_idle_vcpu(current))
        __runq_insert(cpu, scurr);

    print_runq(cpu);

    /* Tasklet work (which runs in idle VCPU context) overrides all else. */
    if (tasklet_work_scheduled) {
        debug(KERN_INFO "%s: ", __func__);
        debug("tasklet work scheduled. idling.\n");
        snext = XFAIR_VCPU(idle_vcpu[cpu]);
    } else {
        /* Select next runnable local VCPU (ie top of local runq) */
        if (!list_empty(runq)) {
            snext = __runq_elem(runq->next);
            if (__vcpu_on_runq(snext))
                __runq_remove(snext);
        } else {
            snext = XFAIR_VCPU(idle_vcpu[cpu]);
        }
    }

    print_runq(cpu);

    /* Initialize, check and return task to run next */
    ret.task = snext->vcpu;
    ret.time = (is_idle_vcpu(snext->vcpu) ? -1 : tslice);
    ret.migrated = 0;

    if (snext && snext->vcpu != current) {
        debug(KERN_INFO "%s: ", __func__);
        if (!is_idle_vcpu(snext->vcpu))
            debug("CPU %d picked(dom.vcpu)=%d.%d\n", cpu,
                snext->domain->dom->domain_id, snext->vcpu->vcpu_id);
        else
ace">=C2=A0 =C2=A0 <span class=3D"" style=3D"white-space:pre">		</span>debu=
g(&quot;CPU %d picked(dom.vcpu)=3Didle.%d\n&quot;, cpu, snext-&gt;vcpu-&gt;=
vcpu_id);</font></div>

<div><font face=3D"courier new, monospace"><span class=3D"" style=3D"white-=
space:pre">	</span>}</font></div><div><font face=3D"courier new, monospace"=
><br></font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =
return ret;</font></div>

<div><font face=3D"courier new, monospace">}</font></div><div><font face=3D=
"courier new, monospace"><br></font></div><div><font face=3D"courier new, m=
onospace">static int</font></div><div><font face=3D"courier new, monospace"=
>xfair_init(struct scheduler *ops)</font></div>

<div><font face=3D"courier new, monospace">{</font></div><div><font face=3D=
"courier new, monospace">=C2=A0 =C2=A0 struct xfair_private *priv;</font></=
div><div><font face=3D"courier new, monospace"><br></font></div><div><font =
face=3D"courier new, monospace">=C2=A0 =C2=A0 priv =3D xzalloc(struct xfair=
_private);</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 if (priv =3D=3D NU=
LL)</font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =
=C2=A0 =C2=A0 return -ENOMEM;</font></div><div><font face=3D"courier new, m=
onospace"><br></font></div><div><font face=3D"courier new, monospace">=C2=
=A0 =C2=A0 ops-&gt;sched_data =3D priv;</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 spin_lock_init(&am=
p;priv-&gt;lock);</font></div><div><font face=3D"courier new, monospace"><s=
pan class=3D"" style=3D"white-space:pre">	</span>debugtrace_toggle();</font=
></div><div><font face=3D"courier new, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 retur=
n 0;</font></div><div><font face=3D"courier new, monospace">}</font></div><=
div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">static void</font></div>

<div><font face=3D"courier new, monospace">xfair_deinit(const struct schedu=
ler *ops)</font></div><div><font face=3D"courier new, monospace">{</font></=
div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 struct xfair_p=
rivate *priv;</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 priv =3D XFAIR_PRIV(ops);</font><=
/div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 if (priv)</fo=
nt></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 xfree(priv);</font></div>

<div><font face=3D"courier new, monospace">}</font></div><div><font face=3D=
"courier new, monospace"><br></font></div><div><font face=3D"courier new, m=
onospace">static struct xfair_private _xfair_priv;</font></div><div><font f=
ace=3D"courier new, monospace"><br>

</font></div><div><font face=3D"courier new, monospace">const struct schedu=
ler sched_xfair_def =3D {</font></div><div><font face=3D"courier new, monos=
pace">=C2=A0 =C2=A0 .name =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =3D &quot;XFai=
r Table Driver Scheduler&quot;,</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 .opt_name =C2=A0 =
=C2=A0 =C2=A0 =3D &quot;xfair&quot;,</font></div><div><font face=3D"courier=
 new, monospace">=C2=A0 =C2=A0 .sched_id =C2=A0 =C2=A0 =C2=A0 =3D XEN_SCHED=
ULER_XFAIR,</font></div><div><font face=3D"courier new, monospace">=C2=A0 =
=C2=A0 .sched_data =C2=A0 =C2=A0 =3D &amp;_xfair_priv,</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 .init_domain =C2=A0 =C2=A0=3D xfa=
ir_dom_init,</font></div><div><font face=3D"courier new, monospace">=C2=A0 =
=C2=A0 .destroy_domain =3D xfair_dom_destroy,</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 .insert_vcpu =C2=A0 =C2=A0=3D xfa=
ir_vcpu_insert,</font></div><div><font face=3D"courier new, monospace">=C2=
=A0 =C2=A0 .remove_vcpu =C2=A0 =C2=A0=3D xfair_vcpu_remove,</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace"><span style=3D"white-space:pre">=C2=A0   </span=
>.sleep<span class=3D"" style=3D"white-space:pre">			</span>=3D xfair_sleep=
,</font></div>

<div><font face=3D"courier new, monospace"><span style=3D"white-space:pre">=
=C2=A0   </span>.wake<span class=3D"" style=3D"white-space:pre">			</span>=
=3D xfair_wake,</font></div><div><font face=3D"courier new, monospace"><spa=
n style=3D"white-space:pre">=C2=A0   </span>.yield<span class=3D"" style=3D=
"white-space:pre">			</span>=3D xfair_yield,</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 .pick_cpu =C2=A0 =C2=A0 =C2=A0 =
=3D xfair_pick_cpu,</font></div><div><font face=3D"courier new, monospace">=
=C2=A0 =C2=A0 .do_schedule =C2=A0 =C2=A0=3D xfair_schedule,</font></div>

<div><span class=3D"" style=3D"white-space:pre"><font face=3D"courier new, =
monospace">	</font></span></div><div><font face=3D"courier new, monospace">=
=C2=A0 =C2=A0 .init =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =3D xfair_init,</fon=
t></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 .deinit =
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =3D xfair_deinit,</font></div>

<div><font face=3D"courier new, monospace"><br></font></div><div><font face=
=3D"courier new, monospace">=C2=A0 =C2=A0 .alloc_vdata =C2=A0 =C2=A0=3D xfa=
ir_alloc_vdata,</font></div><div><font face=3D"courier new, monospace">=C2=
=A0 =C2=A0 .free_vdata =C2=A0 =C2=A0 =3D xfair_free_vdata,</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 .alloc_pdata =C2=
=A0 =C2=A0=3D xfair_alloc_pdata,</font></div><div><font face=3D"courier new=
, monospace">=C2=A0 =C2=A0 .free_pdata =C2=A0 =C2=A0 =3D xfair_free_pdata,<=
/font></div><div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 .alloc=
_domdata =C2=A0=3D xfair_alloc_domdata,</font></div>

<div><font face=3D"courier new, monospace">=C2=A0 =C2=A0 .free_domdata =C2=
=A0 =3D xfair_free_domdata,</font></div><div><font face=3D"courier new, mon=
ospace">};</font></div></div><div><br></div><div>---------- SOURCE CODE END=
S ----------<br></div>

<div><br></div></div>

--047d7bb04bd285447304eef7273f--
--047d7bb04bd285447604eef72741
Content-Type: text/x-csrc; charset=US-ASCII; name="sched_xfair.c"
Content-Disposition: attachment; filename="sched_xfair.c"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hpxnc67d0

LyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioKICogKEMpIDIwMTMgLSBNYW5vaGFyIFZhbmdhIC0gTVBJLVNX
UwogKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKgogKgogKiAgICAgICAgRmlsZTogY29tbW9uL3NjaGVkX3hm
YWlyLmMKICogICAgICBBdXRob3I6IE1hbm9oYXIgVmFuZ2EKICoKICogRGVzY3JpcHRpb246IFRh
YmxlIGRyaXZlbiBzY2hlZHVsZXIgZm9yIFhlbgogKi8KCiNpbmNsdWRlIDx4ZW4vY29uZmlnLmg+
CiNpbmNsdWRlIDx4ZW4vaW5pdC5oPgojaW5jbHVkZSA8eGVuL2xpYi5oPgojaW5jbHVkZSA8eGVu
L3NjaGVkLmg+CiNpbmNsdWRlIDx4ZW4vZG9tYWluLmg+CiNpbmNsdWRlIDx4ZW4vZGVsYXkuaD4K
I2luY2x1ZGUgPHhlbi9ldmVudC5oPgojaW5jbHVkZSA8eGVuL3RpbWUuaD4KI2luY2x1ZGUgPHhl
bi9zY2hlZC1pZi5oPgojaW5jbHVkZSA8eGVuL3NvZnRpcnEuaD4KI2luY2x1ZGUgPGFzbS9hdG9t
aWMuaD4KI2luY2x1ZGUgPGFzbS9kaXY2NC5oPgojaW5jbHVkZSA8eGVuL2Vycm5vLmg+CiNpbmNs
dWRlIDx4ZW4va2V5aGFuZGxlci5oPgojaW5jbHVkZSA8eGVuL3RyYWNlLmg+CiNpbmNsdWRlIDx4
ZW4vbGlzdC5oPgoKLyogRGVmYXVsdCB0aW1lc2xpY2U6IDMwbXMgKi8KI2RlZmluZSBYRkFJUl9E
RUZBVUxUX1RTTElDRV9NUyAgICAzMAoKLyogU29tZSB1c2VmdWwgbWFjcm9zICovCi8qIEdldCB0
aGUgcHJpdmF0ZSBkYXRhIGZyb20gYSBzZXQgb2Ygb3BzICovCiNkZWZpbmUgWEZBSVJfUFJJVihf
b3BzKSAgIFwKICAgICgoc3RydWN0IHhmYWlyX3ByaXZhdGUgKikoKF9vcHMpLT5zY2hlZF9kYXRh
KSkKLyogR2V0IHRoZSBQQ1BVIHN0cnVjdHVyZSBmb3IgYSBnaXZlbiBDUFUgbnVtYmVyICovCiNk
ZWZpbmUgWEZBSVJfUENQVShfYykgICAgIFwKICAgICgoc3RydWN0IHhmYWlyX3BjcHUgKilwZXJf
Y3B1KHNjaGVkdWxlX2RhdGEsIF9jKS5zY2hlZF9wcml2KQovKiBHZXQgdGhlIFhGYWlyIFZDUFUg
c3RydWN0dXJlIGZvciBhIGdpdmVuIFhlbiBWQ1BVICovCiNkZWZpbmUgWEZBSVJfVkNQVShfdmNw
dSkgICgoc3RydWN0IHhmYWlyX3ZjcHUgKikgKF92Y3B1KS0+c2NoZWRfcHJpdikKLyogR2V0IHRo
ZSBYRmFpciBkb20gc3RydWN0dXJlIGZvciBhIGdpdmVuIFhlbiBkb20gKi8KI2RlZmluZSBYRkFJ
Ul9ET00oX2RvbSkgICAgKChzdHJ1Y3QgeGZhaXJfZG9tICopIChfZG9tKS0+c2NoZWRfcHJpdikK
LyogR2V0IHRoZSBydW5xdWV1ZSBmb3IgYSBwYXJ0aWN1bGFyIENQVSAqLwojZGVmaW5lIFJVTlEo
X2NwdSkgICAgICAgICAgKCYoWEZBSVJfUENQVShfY3B1KS0+cnVucSkpCi8qIElzIHRoZSBmaXJz
dCBlbGVtZW50IG9mIF9jcHUncyBydW5xIGl0cyBpZGxlIHZjcHU/ICovCiNkZWZpbmUgSVNfUlVO
UV9JRExFKF9jcHUpICAobGlzdF9lbXB0eShSVU5RKF9jcHUpKSB8fCBcCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgaXNfaWRsZV92Y3B1KF9fcnVucV9lbGVtKFJVTlEoX2NwdSktPm5leHQp
LT52Y3B1KSkKCgovKiBYZmFpciB0cmFjaW5nIGV2ZW50cyAqLwojZGVmaW5lIFRSQ19YRkFJUl9T
Q0hFRF9TVEFSVCAgIFRSQ19TQ0hFRF9DTEFTU19FVlQoWEZBSVIsIDEpCiNkZWZpbmUgVFJDX1hG
QUlSX1NDSEVEX0VORCAgICAgVFJDX1NDSEVEX0NMQVNTX0VWVChYRkFJUiwgMikKCi8qIFBoeXNp
Y2FsIENQVSAqLwpzdHJ1Y3QgeGZhaXJfcGNwdSB7CiAgICBzdHJ1Y3QgbGlzdF9oZWFkIHJ1bnE7
CiNpZiAwCiAgICBzdHJ1Y3QgdGltZXIgdGlja2VyOwogICAgdW5zaWduZWQgaW50IHRpY2s7CiNl
bmRpZgp9OwoKLyogVmlydHVhbCBDUFUgKi8Kc3RydWN0IHhmYWlyX3ZjcHUgewogICAgc3RydWN0
IHhmYWlyX2RvbSAqZG9tYWluOwkJLyogVGhlIGRvbWFpbiB0aGlzIFZDUFUgYmVsb25ncyB0byAq
LwogICAgc3RydWN0IHZjcHUgKnZjcHU7CQkJCS8qIFRoZSBjb3JlIFhlbiBWQ1BVIHN0cnVjdHVy
ZSAqLwogICAgc3RydWN0IGxpc3RfaGVhZCBydW5xX2VsZW07CQkvKiBMaXN0IGVsZW1lbnQgZm9y
IGFkZGluZyB0byBydW5xdWV1ZSAqLwp9OwoKLyogRG9tYWluICovCnN0cnVjdCB4ZmFpcl9kb20g
ewogICAgc3RydWN0IGRvbWFpbiAqZG9tOwkJCQkvKiBUaGUgY29yZSBYZW4gZG9tYWluIHN0cnVj
dHVyZSAqLwp9OwoKLyogU3lzdGVtLXdpZGUgcHJpdmF0ZSBkYXRhICovCnN0cnVjdCB4ZmFpcl9w
cml2YXRlIHsKICAgIHNwaW5sb2NrX3QgbG9jazsKfTsKCnN0YXRpYyBpbmxpbmUgaW50IF9fdmNw
dV9vbl9ydW5xKHN0cnVjdCB4ZmFpcl92Y3B1ICp2Y3B1KQp7CiAgICByZXR1cm4gIWxpc3RfZW1w
dHkoJnZjcHUtPnJ1bnFfZWxlbSk7Cn0KCnN0YXRpYyBpbmxpbmUgc3RydWN0IHhmYWlyX3ZjcHUg
Kl9fcnVucV9lbGVtKHN0cnVjdCBsaXN0X2hlYWQgKmVsZW0pCnsKICAgIHJldHVybiBsaXN0X2Vu
dHJ5KGVsZW0sIHN0cnVjdCB4ZmFpcl92Y3B1LCBydW5xX2VsZW0pOwp9CgpzdGF0aWMgaW5saW5l
IHZvaWQgX19ydW5xX2luc2VydCh1bnNpZ25lZCBpbnQgY3B1LCBzdHJ1Y3QgeGZhaXJfdmNwdSAq
dmNwdSkKewogICAgc3RydWN0IGxpc3RfaGVhZCAqcnVucSA9IFJVTlEoY3B1KTsKICAgIAoJQlVH
X09OKF9fdmNwdV9vbl9ydW5xKHZjcHUpKTsKICAgIEJVR19PTihjcHUgIT0gdmNwdS0+dmNwdS0+
cHJvY2Vzc29yKTsKCiAgICAvKiBBZGQgYmFjayBhdCB0aGUgZW5kIG9mIHRoZSBsaXN0ICovCiAg
ICBsaXN0X2FkZF90YWlsKCZ2Y3B1LT5ydW5xX2VsZW0sIHJ1bnEpOwp9CgpzdGF0aWMgaW5saW5l
IHZvaWQKX19ydW5xX3JlbW92ZShzdHJ1Y3QgeGZhaXJfdmNwdSAqdmNwdSkKewoJQlVHX09OKCFf
X3ZjcHVfb25fcnVucSh2Y3B1KSk7CiAgICBsaXN0X2RlbF9pbml0KCZ2Y3B1LT5ydW5xX2VsZW0p
Owp9CgpzdGF0aWMgaW5saW5lIHZvaWQgcHJpbnRfcnVucSh1bnNpZ25lZCBpbnQgY3B1KQp7Cglz
dHJ1Y3QgeGZhaXJfdmNwdSAqYzsKICAgIHN0cnVjdCBsaXN0X2hlYWQgKnJ1bnEgPSBSVU5RKGNw
dSk7CiAgICAKCWRlYnVnKCJSVU5ROiAiKTsKCWxpc3RfZm9yX2VhY2hfZW50cnkoYywgcnVucSwg
cnVucV9lbGVtKQoJCWRlYnVnKCIoJWQuJWQpICIsIGMtPmRvbWFpbi0+ZG9tLT5kb21haW5faWQs
IGMtPnZjcHUtPnZjcHVfaWQpOwoJZGVidWcoIlxuIik7Cn0KCi8qIEFsbG9jYXRlIGEgc3RydWN0
dXJlIGZvciBhIHBoeXNpY2FsIENQVSAqLwpzdGF0aWMgdm9pZCAqeGZhaXJfYWxsb2NfcGRhdGEo
Y29uc3Qgc3RydWN0IHNjaGVkdWxlciAqb3BzLCBpbnQgY3B1KQp7CiAgICBzdHJ1Y3QgeGZhaXJf
cGNwdSAqcGNwdTsKCiAgICBkZWJ1ZyhLRVJOX0lORk8gIiVzOiAiLCBfX2Z1bmNfXyk7CiAgICBk
ZWJ1ZygiY3B1PSVkXG4iLCBjcHUpOwoKICAgIC8qIEFsbG9jYXRlIHBlci1QQ1BVIGluZm8gKi8K
ICAgIHBjcHUgPSB4emFsbG9jKHN0cnVjdCB4ZmFpcl9wY3B1KTsKICAgIGlmIChwY3B1ID09IE5V
TEwpCiAgICAgICAgcmV0dXJuIE5VTEw7CgogICAgSU5JVF9MSVNUX0hFQUQoJnBjcHUtPnJ1bnEp
OwoJLyogc2NoZWR1bGUuYyBleHBlY3RzIHRoaXMgdG8gbm90IGJlIE5VTEwgKGZvciBzb21lIHJl
YXNvbikgKi8KICAgIGlmIChwZXJfY3B1KHNjaGVkdWxlX2RhdGEsIGNwdSkuc2NoZWRfcHJpdiA9
PSBOVUxMKQogICAgICAgIHBlcl9jcHUoc2NoZWR1bGVfZGF0YSwgY3B1KS5zY2hlZF9wcml2ID0g
cGNwdTsKCiAgICBCVUdfT04oIWlzX2lkbGVfdmNwdShjdXJyX29uX2NwdShjcHUpKSk7CgogICAg
cmV0dXJuIHBjcHU7Cn0KCnN0YXRpYyB2b2lkIHhmYWlyX2ZyZWVfcGRhdGEoY29uc3Qgc3RydWN0
IHNjaGVkdWxlciAqb3BzLCB2b2lkICpwYywgaW50IGNwdSkKewogICAgc3RydWN0IHhmYWlyX3Bj
cHUgKnBjcHUgPSBwYzsKCiAgICBkZWJ1ZyhLRVJOX0lORk8gIiVzOiAiLCBfX2Z1bmNfXyk7CiAg
ICBkZWJ1ZygiY3B1PSVkXG4iLCBjcHUpOwoKCWlmIChwY3B1KQoJCXhmcmVlKHBjcHUpOwp9Cgpz
dGF0aWMgdm9pZCAqeGZhaXJfYWxsb2NfdmRhdGEoY29uc3Qgc3RydWN0IHNjaGVkdWxlciAqb3Bz
LCBzdHJ1Y3QgdmNwdSAqdmMsCiAgICB2b2lkICpkZCkKewogICAgc3RydWN0IHhmYWlyX3ZjcHUg
KnZjcHU7CgogICAgLyogQWxsb2NhdGUgcGVyLVZDUFUgaW5mbyAqLwogICAgdmNwdSA9IHh6YWxs
b2Moc3RydWN0IHhmYWlyX3ZjcHUpOwogICAgaWYgKHZjcHUgPT0gTlVMTCkKICAgICAgICByZXR1
cm4gTlVMTDsKCiAgICBJTklUX0xJU1RfSEVBRCgmdmNwdS0+cnVucV9lbGVtKTsKICAgIHZjcHUt
PmRvbWFpbiA9IGRkOwogICAgdmNwdS0+dmNwdSA9IHZjOwoKICAgIGRlYnVnKEtFUk5fSU5GTyAi
JXM6ICIsIF9fZnVuY19fKTsKICAgIGRlYnVnKCJ2Y3B1PSVkXG4iLCB2Yy0+dmNwdV9pZCk7Cgog
ICAgcmV0dXJuIHZjcHU7Cn0KCnN0YXRpYyB2b2lkIHhmYWlyX2ZyZWVfdmRhdGEoY29uc3Qgc3Ry
dWN0IHNjaGVkdWxlciAqb3BzLCB2b2lkICp2YykKewogICAgc3RydWN0IHhmYWlyX3ZjcHUgKnZj
cHUgPSB2YzsKCglpZiAoIXZjcHUpCgkJcmV0dXJuOwoKICAgIGRlYnVnKEtFUk5fSU5GTyAiJXM6
ICIsIF9fZnVuY19fKTsKCWRlYnVnKCJ2Y3B1PSVkXG4iLCB2Y3B1LT52Y3B1LT52Y3B1X2lkKTsK
CiAgICBCVUdfT04oIWxpc3RfZW1wdHkoJnZjcHUtPnJ1bnFfZWxlbSkpOwogICAgeGZyZWUodmNw
dSk7Cn0KCnN0YXRpYyB2b2lkIHhmYWlyX3ZjcHVfaW5zZXJ0KGNvbnN0IHN0cnVjdCBzY2hlZHVs
ZXIgKm9wcywgc3RydWN0IHZjcHUgKnZjKQp7CiAgICBzdHJ1Y3QgeGZhaXJfdmNwdSAqdmNwdSA9
IHZjLT5zY2hlZF9wcml2OwoKCUJVR19PTighdmNwdSk7CiAgICAKICAgIGRlYnVnKEtFUk5fSU5G
TyAiJXM6ICIsIF9fZnVuY19fKTsKCWRlYnVnKCJ2Y3B1PSVkXG4iLCB2Y3B1LT52Y3B1LT52Y3B1
X2lkKTsKCglpZiAoIXZjLT5pc19ydW5uaW5nICYmIHZjcHVfcnVubmFibGUodmMpICYmICFfX3Zj
cHVfb25fcnVucSh2Y3B1KSkKCQlfX3J1bnFfaW5zZXJ0KHZjLT5wcm9jZXNzb3IsIHZjcHUpOwp9
CgpzdGF0aWMgdm9pZCB4ZmFpcl92Y3B1X3JlbW92ZShjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpv
cHMsIHN0cnVjdCB2Y3B1ICp2YykKewogICAgc3RydWN0IHhmYWlyX3ZjcHUgKiBjb25zdCB2Y3B1
ID0gWEZBSVJfVkNQVSh2Yyk7CiAgICBzdHJ1Y3QgeGZhaXJfZG9tICogY29uc3QgZG9tID0gdmNw
dS0+ZG9tYWluOwoKCUJVR19PTighdmNwdSk7CgogICAgZGVidWcoS0VSTl9JTkZPICIlczogIiwg
X19mdW5jX18pOwoJZGVidWcoInZjcHU9JWRcbiIsIHZjcHUtPnZjcHUtPnZjcHVfaWQpOwoKICAg
IGlmIChfX3ZjcHVfb25fcnVucSh2Y3B1KSkKICAgICAgICBfX3J1bnFfcmVtb3ZlKHZjcHUpOwoK
ICAgIEJVR19PTihkb20gPT0gTlVMTCk7CiAgICBCVUdfT04oIWxpc3RfZW1wdHkoJnZjcHUtPnJ1
bnFfZWxlbSkpOwp9CgpzdGF0aWMgdm9pZCB4ZmFpcl9zbGVlcChjb25zdCBzdHJ1Y3Qgc2NoZWR1
bGVyICpvcHMsIHN0cnVjdCB2Y3B1ICp2YykKewoJc3RydWN0IHhmYWlyX3ZjcHUgKiBjb25zdCB2
Y3B1ID0gWEZBSVJfVkNQVSh2Yyk7CgogICAgZGVidWcoS0VSTl9JTkZPICIlczogIiwgX19mdW5j
X18pOwoJZGVidWcoImRvbT0lZCwgdmNwdT0lZFxuIiwgdmNwdS0+ZG9tYWluLT5kb20tPmRvbWFp
bl9pZCwgdmNwdS0+dmNwdS0+dmNwdV9pZCk7CgogICAgQlVHX09OKGlzX2lkbGVfdmNwdSh2Yykp
OwoJQlVHX09OKHZjcHVfcnVubmFibGUodmMpKTsKCgkvKiBJZiB0aGUgdmNwdSBpcyB0aGUgY3Vy
cmVudCBWQ1BVIG9uIHRoZSBwcm9jZXNzb3IsIGl0IGlzIGd1YXJhbnRlZWQgdG8KCSAqIG5vdCBi
ZSBvbiB0aGUgcnVucXVldWUgb2YgdGhhdCBwcm9jZXNzb3IuCgkgKiBIb3dldmVyLCBub3cgd2Ug
bmVlZCB0byBtYWtlIHN1cmUgdGhhdCBpZiBpdCB3YXNuJ3QgdGhlIGN1cnJlbnQgdGFzawoJICog
YW5kIHdhcyBpbnN0ZWFkIHdhaXRpbmcgaW4gdGhlIHJ1bnF1ZXVlLCBpdCBzaG91bGQgYmUgcmVt
b3ZlZAoJICovCiAgICBpZiAoY3Vycl9vbl9jcHUodmMtPnByb2Nlc3NvcikgPT0gdmMpCgkJY3B1
X3JhaXNlX3NvZnRpcnEodmMtPnByb2Nlc3NvciwgU0NIRURVTEVfU09GVElSUSk7CgllbHNlIGlm
IChfX3ZjcHVfb25fcnVucSh2Y3B1KSkKCQkJX19ydW5xX3JlbW92ZSh2Y3B1KTsKfQoKc3RhdGlj
IHZvaWQgeGZhaXJfd2FrZShjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMsIHN0cnVjdCB2Y3B1
ICp2YykKewogICAgc3RydWN0IHhmYWlyX3ZjcHUgKiBjb25zdCB2Y3B1ID0gWEZBSVJfVkNQVSh2
Yyk7CgogICAgZGVidWcoS0VSTl9JTkZPICIlczogIiwgX19mdW5jX18pOwoJZGVidWcoImRvbT0l
ZCwgdmNwdT0lZFxuIiwgdmNwdS0+ZG9tYWluLT5kb20tPmRvbWFpbl9pZCwgdmNwdS0+dmNwdS0+
dmNwdV9pZCk7CgkKCUJVR19PTihpc19pZGxlX3ZjcHUodmMpKTsKCQogICAgaWYgKHVubGlrZWx5
KGN1cnJfb25fY3B1KHZjLT5wcm9jZXNzb3IpID09IHZjKSkgewoJCWRlYnVnKCJ3b2tlIHZjcHU9
JWQgdGhhdCBpcyBjdXJyZW50bHkgcnVubmluZyBvbiBjcHU9JWRcbiIsIHZjLT52Y3B1X2lkLAoJ
CQl2Yy0+cHJvY2Vzc29yKTsKICAgICAgICByZXR1cm47Cgl9CgogICAgaWYgKHVubGlrZWx5KF9f
dmNwdV9vbl9ydW5xKHZjcHUpKSkgewoJCWRlYnVnKCJ2Y3B1PSVkIGlzIGFscmVhZHkgb24gcnVu
cXVldWUgb2YgY3B1PSVkXG4iLCB2Yy0+dmNwdV9pZCwKCQkJdmMtPnByb2Nlc3Nvcik7CiAgICAg
ICAgcmV0dXJuOwoJfQoJCglpZiAoIV9fdmNwdV9vbl9ydW5xKHZjcHUpICYmIHZjcHVfcnVubmFi
bGUodmMpICYmICF2Yy0+aXNfcnVubmluZyApCgkJCV9fcnVucV9pbnNlcnQodmMtPnByb2Nlc3Nv
ciwgdmNwdSk7CgoJY3B1X3JhaXNlX3NvZnRpcnEodmMtPnByb2Nlc3NvciwgU0NIRURVTEVfU09G
VElSUSk7CQp9CgpzdGF0aWMgdm9pZCB4ZmFpcl95aWVsZChjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVy
ICpvcHMsIHN0cnVjdCB2Y3B1ICp2YykKewojaWZkZWYgUlRTX0NPTkZJR19ERUJVRwoJc3RydWN0
IHhmYWlyX3ZjcHUgKiBjb25zdCB2Y3B1ID0gWEZBSVJfVkNQVSh2Yyk7CiNlbmRpZgoKICAgIGRl
YnVnKEtFUk5fSU5GTyAiJXM6ICIsIF9fZnVuY19fKTsKCWRlYnVnKCJkb209JWQsIHZjcHU9JWRc
biIsIHZjcHUtPmRvbWFpbi0+ZG9tLT5kb21haW5faWQsIHZjcHUtPnZjcHUtPnZjcHVfaWQpOwp9
IAoKc3RhdGljIHZvaWQgKnhmYWlyX2FsbG9jX2RvbWRhdGEoY29uc3Qgc3RydWN0IHNjaGVkdWxl
ciAqb3BzLCBzdHJ1Y3QgZG9tYWluICpkKQp7CiAgICBzdHJ1Y3QgeGZhaXJfZG9tICpkb207Cgog
ICAgZGVidWcoS0VSTl9JTkZPICIlczogIiwgX19mdW5jX18pOwogICAgZGVidWcoImRvbT0lZFxu
IiwgZC0+ZG9tYWluX2lkKTsKCiAgICBkb20gPSB4emFsbG9jKHN0cnVjdCB4ZmFpcl9kb20pOwog
ICAgaWYgKGRvbSA9PSBOVUxMKQogICAgICAgIHJldHVybiBOVUxMOwoKICAgIGRvbS0+ZG9tID0g
ZDsKCiAgICByZXR1cm4gKHZvaWQgKilkb207Cn0KCnN0YXRpYyB2b2lkIHhmYWlyX2ZyZWVfZG9t
ZGF0YShjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMsIHZvaWQgKmQpCnsKI2lmZGVmIFJUU19D
T05GSUdfREVCVUcKICAgIHN0cnVjdCB4ZmFpcl9kb20gKmRvbSA9IGQ7CiNlbmRpZgoKICAgIGRl
YnVnKEtFUk5fSU5GTyAiJXM6ICIsIF9fZnVuY19fKTsKICAgIGRlYnVnKCJkb209JWRcbiIsIGRv
bS0+ZG9tLT5kb21haW5faWQpOwoKICAgIHhmcmVlKGQpOwp9CgpzdGF0aWMgaW50IHhmYWlyX2Rv
bV9pbml0KGNvbnN0IHN0cnVjdCBzY2hlZHVsZXIgKm9wcywgc3RydWN0IGRvbWFpbiAqZCkKewog
ICAgc3RydWN0IHhmYWlyX2RvbSAqZG9tOwoKICAgIGlmIChpc19pZGxlX2RvbWFpbihkKSkKICAg
ICAgICByZXR1cm4gMDsKCiAgICBkb20gPSB4ZmFpcl9hbGxvY19kb21kYXRhKG9wcywgZCk7CiAg
ICBpZiAoZG9tID09IE5VTEwpCiAgICAgICAgcmV0dXJuIC1FTk9NRU07CgogICAgZC0+c2NoZWRf
cHJpdiA9IGRvbTsKCiAgICBkZWJ1ZyhLRVJOX0lORk8gIiVzOiAiLCBfX2Z1bmNfXyk7CiAgICBk
ZWJ1ZygiZG9tPSVkXG4iLCBkLT5kb21haW5faWQpOwoKICAgIHJldHVybiAwOwp9CgpzdGF0aWMg
dm9pZAp4ZmFpcl9kb21fZGVzdHJveShjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMsIHN0cnVj
dCBkb21haW4gKmQpCnsKICAgIGRlYnVnKEtFUk5fSU5GTyAiJXM6ICIsIF9fZnVuY19fKTsKICAg
IGRlYnVnKCJkb209JWRcbiIsIGQtPmRvbWFpbl9pZCk7CgogICAgeGZhaXJfZnJlZV9kb21kYXRh
KG9wcywgWEZBSVJfRE9NKGQpKTsKfQoKc3RhdGljIGludCB4ZmFpcl9waWNrX2NwdShjb25zdCBz
dHJ1Y3Qgc2NoZWR1bGVyICpvcHMsIHN0cnVjdCB2Y3B1ICp2KQp7CiAgICBkZWJ1ZyhLRVJOX0lO
Rk8gIiVzOiAiLCBfX2Z1bmNfXyk7CiAgICBkZWJ1ZygidmNwdT0lZCwgcGNwdSBwaWNrZWQ9JWRc
biIsIHYtPnZjcHVfaWQsIHYtPnByb2Nlc3Nvcik7CglyZXR1cm4gdi0+cHJvY2Vzc29yOwp9Cgov
KgogKiBUaGlzIGZ1bmN0aW9uIGlzIGluIHRoZSBjcml0aWNhbCBwYXRoLiBJdCBpcyBkZXNpZ25l
ZCB0byBiZSBzaW1wbGUgYW5kCiAqIGZhc3QgZm9yIHRoZSBjb21tb24gY2FzZS4KICovCnN0YXRp
YyBzdHJ1Y3QgdGFza19zbGljZQp4ZmFpcl9zY2hlZHVsZSgKICAgIGNvbnN0IHN0cnVjdCBzY2hl
ZHVsZXIgKm9wcywgc190aW1lX3Qgbm93LCBib29sX3QgdGFza2xldF93b3JrX3NjaGVkdWxlZCkK
ewogICAgY29uc3QgaW50IGNwdSA9IHNtcF9wcm9jZXNzb3JfaWQoKTsKICAgIHN0cnVjdCBsaXN0
X2hlYWQgKiBjb25zdCBydW5xID0gUlVOUShjcHUpOwogICAgc3RydWN0IHhmYWlyX3ZjcHUgKiBj
b25zdCBzY3VyciA9IFhGQUlSX1ZDUFUoY3VycmVudCk7CiAgICBzdHJ1Y3QgeGZhaXJfdmNwdSAq
c25leHQ7CiAgICBzdHJ1Y3QgdGFza19zbGljZSByZXQ7CiAgICBzX3RpbWVfdCB0c2xpY2UgPSBN
SUxMSVNFQ1MoMzApOwoJCgkvKiBBZGQgdGhpcyBWQ1BVIGJhY2sgaW50byB0aGUgcnVucXVldWUg
Ki8KICAgIGlmICghX192Y3B1X29uX3J1bnEoc2N1cnIpICYmIHZjcHVfcnVubmFibGUoY3VycmVu
dCkKCQkJJiYgIWlzX2lkbGVfdmNwdShjdXJyZW50KSkKCQlfX3J1bnFfaW5zZXJ0KGNwdSwgc2N1
cnIpOwoKCXByaW50X3J1bnEoY3B1KTsKCgkvKiBUYXNrbGV0IHdvcmsgKHdoaWNoIHJ1bnMgaW4g
aWRsZSBWQ1BVIGNvbnRleHQpIG92ZXJyaWRlcyBhbGwgZWxzZS4gKi8KICAgIGlmICh0YXNrbGV0
X3dvcmtfc2NoZWR1bGVkKSB7CiAgICAJZGVidWcoS0VSTl9JTkZPICIlczogIiwgX19mdW5jX18p
OwogICAJCWRlYnVnKCJ0YXNrbGV0IHdvcmsgc2NoZWR1bGVkLiBpZGxpbmcuXG4iKTsKCQlzbmV4
dCA9IFhGQUlSX1ZDUFUoaWRsZV92Y3B1W2NwdV0pOwoJfSBlbHNlIHsKCQkvKiBTZWxlY3QgbmV4
dCBydW5uYWJsZSBsb2NhbCBWQ1BVIChpZSB0b3Agb2YgbG9jYWwgcnVucSkgKi8KCQlpZiAoIWxp
c3RfZW1wdHkocnVucSkpIHsKCQkJc25leHQgPSBfX3J1bnFfZWxlbShydW5xLT5uZXh0KTsKCQkJ
aWYgKF9fdmNwdV9vbl9ydW5xKHNuZXh0KSkKCQkJCV9fcnVucV9yZW1vdmUoc25leHQpOwoJCX0g
ZWxzZSB7CgkJCXNuZXh0ID0gWEZBSVJfVkNQVShpZGxlX3ZjcHVbY3B1XSk7CgkJfQoJfQoKCXBy
aW50X3J1bnEoY3B1KTsKCgkvKiBJbml0aWFsaXplLCBjaGVjayBhbmQgcmV0dXJuIHRhc2sgdG8g
cnVuIG5leHQgKi8KICAgIHJldC50YXNrID0gc25leHQtPnZjcHU7CiAgICByZXQudGltZSA9IChp
c19pZGxlX3ZjcHUoc25leHQtPnZjcHUpID8gLTEgOiB0c2xpY2UpOwogICAgcmV0Lm1pZ3JhdGVk
ID0gMDsKCglpZiAoc25leHQgJiYgc25leHQtPnZjcHUgIT0gY3VycmVudCkgewogICAgCWRlYnVn
KEtFUk5fSU5GTyAiJXM6ICIsIF9fZnVuY19fKTsKCQlpZiAoIWlzX2lkbGVfdmNwdShzbmV4dC0+
dmNwdSkpCiAgICAJCWRlYnVnKCJDUFUgJWQgcGlja2VkKGRvbS52Y3B1KT0lZC4lZFxuIiwgY3B1
LCBzbmV4dC0+ZG9tYWluLT5kb20tPmRvbWFpbl9pZCwgc25leHQtPnZjcHUtPnZjcHVfaWQpOwoJ
CWVsc2UKICAgIAkJZGVidWcoIkNQVSAlZCBwaWNrZWQoZG9tLnZjcHUpPWlkbGUuJWRcbiIsIGNw
dSwgc25leHQtPnZjcHUtPnZjcHVfaWQpOwoJfQoKICAgIHJldHVybiByZXQ7Cn0KCnN0YXRpYyBp
bnQKeGZhaXJfaW5pdChzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMpCnsKICAgIHN0cnVjdCB4ZmFpcl9w
cml2YXRlICpwcml2OwoKICAgIHByaXYgPSB4emFsbG9jKHN0cnVjdCB4ZmFpcl9wcml2YXRlKTsK
ICAgIGlmIChwcml2ID09IE5VTEwpCiAgICAgICAgcmV0dXJuIC1FTk9NRU07CgogICAgb3BzLT5z
Y2hlZF9kYXRhID0gcHJpdjsKICAgIHNwaW5fbG9ja19pbml0KCZwcml2LT5sb2NrKTsKCWRlYnVn
dHJhY2VfdG9nZ2xlKCk7CgogICAgcmV0dXJuIDA7Cn0KCnN0YXRpYyB2b2lkCnhmYWlyX2RlaW5p
dChjb25zdCBzdHJ1Y3Qgc2NoZWR1bGVyICpvcHMpCnsKICAgIHN0cnVjdCB4ZmFpcl9wcml2YXRl
ICpwcml2OwoKICAgIHByaXYgPSBYRkFJUl9QUklWKG9wcyk7CiAgICBpZiAocHJpdikKICAgICAg
ICB4ZnJlZShwcml2KTsKfQoKc3RhdGljIHN0cnVjdCB4ZmFpcl9wcml2YXRlIF94ZmFpcl9wcml2
OwoKY29uc3Qgc3RydWN0IHNjaGVkdWxlciBzY2hlZF94ZmFpcl9kZWYgPSB7CiAgICAubmFtZSAg
ICAgICAgICAgPSAiWEZhaXIgVGFibGUgRHJpdmVyIFNjaGVkdWxlciIsCiAgICAub3B0X25hbWUg
ICAgICAgPSAieGZhaXIiLAogICAgLnNjaGVkX2lkICAgICAgID0gWEVOX1NDSEVEVUxFUl9YRkFJ
UiwKICAgIC5zY2hlZF9kYXRhICAgICA9ICZfeGZhaXJfcHJpdiwKCiAgICAuaW5pdF9kb21haW4g
ICAgPSB4ZmFpcl9kb21faW5pdCwKICAgIC5kZXN0cm95X2RvbWFpbiA9IHhmYWlyX2RvbV9kZXN0
cm95LAoKICAgIC5pbnNlcnRfdmNwdSAgICA9IHhmYWlyX3ZjcHVfaW5zZXJ0LAogICAgLnJlbW92
ZV92Y3B1ICAgID0geGZhaXJfdmNwdV9yZW1vdmUsCgoJLnNsZWVwCQkJPSB4ZmFpcl9zbGVlcCwK
CS53YWtlCQkJPSB4ZmFpcl93YWtlLAoJLnlpZWxkCQkJPSB4ZmFpcl95aWVsZCwKCiAgICAucGlj
a19jcHUgICAgICAgPSB4ZmFpcl9waWNrX2NwdSwKICAgIC5kb19zY2hlZHVsZSAgICA9IHhmYWly
X3NjaGVkdWxlLAoJCiAgICAuaW5pdCAgICAgICAgICAgPSB4ZmFpcl9pbml0LAogICAgLmRlaW5p
dCAgICAgICAgID0geGZhaXJfZGVpbml0LAoKICAgIC5hbGxvY192ZGF0YSAgICA9IHhmYWlyX2Fs
bG9jX3ZkYXRhLAogICAgLmZyZWVfdmRhdGEgICAgID0geGZhaXJfZnJlZV92ZGF0YSwKICAgIC5h
bGxvY19wZGF0YSAgICA9IHhmYWlyX2FsbG9jX3BkYXRhLAogICAgLmZyZWVfcGRhdGEgICAgID0g
eGZhaXJfZnJlZV9wZGF0YSwKICAgIC5hbGxvY19kb21kYXRhICA9IHhmYWlyX2FsbG9jX2RvbWRh
dGEsCiAgICAuZnJlZV9kb21kYXRhICAgPSB4ZmFpcl9mcmVlX2RvbWRhdGEsCn07Cg==
--047d7bb04bd285447604eef72741
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bb04bd285447604eef72741--


From xen-devel-bounces@lists.xen.org Thu Jan 02 07:44:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 07:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vycwy-000366-7h; Thu, 02 Jan 2014 07:44:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1Vycww-00035t-Ez
	for xen-devel@lists.xensource.com; Thu, 02 Jan 2014 07:44:14 +0000
Received: from [85.158.139.211:43097] by server-2.bemta-5.messagelabs.com id
	CB/E8-29392-DC815C25; Thu, 02 Jan 2014 07:44:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388648651!4782681!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13095 invoked from network); 2 Jan 2014 07:44:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 07:44:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,589,1384300800"; d="scan'208";a="89154392"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 07:44:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 2 Jan 2014 02:44:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Vycwr-00033d-K9;
	Thu, 02 Jan 2014 07:44:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Vycwq-0000IK-Ru;
	Thu, 02 Jan 2014 07:44:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-23827-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Jan 2014 07:44:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 23827: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 23827 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/23827/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 23724
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 23035

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 07:44:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 07:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vycwy-000366-7h; Thu, 02 Jan 2014 07:44:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1Vycww-00035t-Ez
	for xen-devel@lists.xensource.com; Thu, 02 Jan 2014 07:44:14 +0000
Received: from [85.158.139.211:43097] by server-2.bemta-5.messagelabs.com id
	CB/E8-29392-DC815C25; Thu, 02 Jan 2014 07:44:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388648651!4782681!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13095 invoked from network); 2 Jan 2014 07:44:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 07:44:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,589,1384300800"; d="scan'208";a="89154392"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 07:44:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 2 Jan 2014 02:44:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Vycwr-00033d-K9;
	Thu, 02 Jan 2014 07:44:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Vycwq-0000IK-Ru;
	Thu, 02 Jan 2014 07:44:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-23827-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 2 Jan 2014 07:44:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 23827: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 23827 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/23827/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 23724
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 23035

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 09:59:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 09:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyf3n-0000Eg-DH; Thu, 02 Jan 2014 09:59:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vyf3m-0000Eb-7O
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 09:59:26 +0000
Received: from [193.109.254.147:54174] by server-10.bemta-14.messagelabs.com
	id AC/82-20752-D7835C25; Thu, 02 Jan 2014 09:59:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388656763!8423130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27321 invoked from network); 2 Jan 2014 09:59:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 09:59:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208,217";a="86986682"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 09:59:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 04:59:21 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vyf3h-0006Vy-Vh;
	Thu, 02 Jan 2014 09:59:21 +0000
Message-ID: <52C53879.7090609@citrix.com>
Date: Thu, 2 Jan 2014 09:59:21 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Manohar Vanga <manohar.vanga@gmail.com>
References: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
In-Reply-To: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2085162731150597960=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2085162731150597960==
Content-Type: multipart/alternative;
	boundary="------------000102070207020906060106"

--------------000102070207020906060106
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 02/01/14 06:46, Manohar Vanga wrote:
> Hi all,
>
> I've spent the last few weeks trying to debug a weird issue with a new
> scheduler I'm developing for Xen. I have written a barebones
> round-robin scheduler which seems to work fine when starting up Dom0,
> but then at some point during the boot everything just hangs (somewhat
> deterministically from what I can tell from a week of debugging; see
> below).
>
> I've inlined my source code below. I don't expect anyone to read the
> whole thing (although it's quite minimal) so here are the key points:
>
>   * I've implemented the following callbacks: init_domain,
>     destroy_domain, insert_vcpu, remove_vcpu, sleep, wake, yield,
>     pick_cpu, do_schedule, init, deinit, alloc_vdata, free_vdata,
>     alloc_pdata, free_pdata, alloc_domdata, free_domdata. Most of
>     these are minimal (or in some cases do nothing). Am I missing
>     anything critical?
>   * The hang occurs even if I'm running Dom0 with just a single vcpu.
>     Nothing hangs if I choose a stock scheduler. Either I'm doing
>     something foolish that is causing a deadlock (less likely since
>     the code structure is borrowed from sched_credit.c) or I'm *not*
>     doing something leading to Dom0 crashing and the vcpu just dying.
>
> If you do suspect some specific issue please let me know. Below are
> some of the possible issues that I've investigated but hit dead ends on:
>
>   * Checking if my debug printk statements were leading to a deadlock
>     due to sleeps in interrupt mode. This doesn't seem to be the case
>     since Dom0 hangs during boot even if I disable all debug output.
>   * I suspected incorrect queuing operations that might be corrupting
>     memory somewhere. However, my debug logs tell me that this is not
>     the case. There is at most one element in the runqueue at all
>     times (I use Dom0 with 1 vcpu).
>   * I also suspected a deadlock due to incorrect locking. However,
>     based on what the credit scheduler does in sched_credit.c, I
>     don't seem to be doing anything significantly different. In
>     general though, which callbacks run in interrupt context?
>   * In the end, I stuck debug statements in tick_suspend and
>     tick_resume and after the hang, those get called infinitely which
>     seems like the physical CPU has gone idle. Is this correct? In
>     that case, *what am I doing wrong in the scheduler* to cause Dom0
>     to crash?
>   * The hang occurs around 3-5 seconds into the boot process quite
>     deterministically. Could it be some periodic timer going off and
>     bugging with my code in weird and wonderful ways?
>
> Also, how do the sleep/wake/yield callbacks work? When do they get
> called? Is there any documentation on the different callbacks with
> regards to when they are called? If I understand everything correctly
> after this, I would gladly create a wiki page explaining this (and
> perhaps a tutorial on writing a simple scheduler; something I wish
> existed!).
>
> I hope the description was enough to help understand my problem. If
> not, feel free to ask for more details :-)
>
> Thanks for reading this far! Source code follows

Using printk()s in the code is going to skew the timing terribly.

A serial console and the 'q' debug key are probably a good start, to see
some vcpu state.

'watchdog' on the Xen command line will enable NMI watchdogs which will
catch deadlocks, but as I don't see a single use of spinlocks in your
code, I doubt this is your issue.

Beyond that, writing a custom keyhandler to dump all of the xfair state
is probably the next thing to try.

~Andrew

--------------000102070207020906060106--


--===============2085162731150597960==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2085162731150597960==--


From xen-devel-bounces@lists.xen.org Thu Jan 02 10:38:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 10:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyffT-0001em-3n; Thu, 02 Jan 2014 10:38:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <manohar.vanga@gmail.com>) id 1VyffR-0001e0-PT
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 10:38:22 +0000
Received: from [85.158.137.68:15479] by server-14.bemta-3.messagelabs.com id
	5A/F9-06105-D9145C25; Thu, 02 Jan 2014 10:38:21 +0000
X-Env-Sender: manohar.vanga@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1388659100!5716078!1
X-Originating-IP: [209.85.212.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32301 invoked from network); 2 Jan 2014 10:38:20 -0000
Received: from mail-wi0-f174.google.com (HELO mail-wi0-f174.google.com)
	(209.85.212.174)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 10:38:20 -0000
Received: by mail-wi0-f174.google.com with SMTP id z2so18544802wiv.1
	for <xen-devel@lists.xen.org>; Thu, 02 Jan 2014 02:38:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=XxPVPiBNip13xlglv9ZSFBoBs6rTCeGV8AQ4CRi3cA8=;
	b=awPuMJ6w0F0YSpbpCL0PWf5qUmS5NmJIyVGDuqcDA3CC4T6myet0x97hU/TssCWdgB
	iy5Al+DG60wpjRlWm22EPlw38mJDYYh4jRCk8N35pwztFdvHZthONyFe4XnobNoiLhmZ
	hWzmbqOClf3wKq20E7OG5zsXc9f8YsZRmEGqd8aNmkKCUoJ8sB2ZcA7oBg+qtlPG8RDb
	fg1mWJ6UkEI1/365w5CD9gYy7J3ERaC7+LpZxG/UZLMuplAMIuspZf+NSKP/eJreuBhF
	H5aJZzTig/8Dp6UDhRr8n7k4oltN68eU/dV3coHGmHJHQ/luleUrAiYLXODYX/m7rEH5
	6aVg==
X-Received: by 10.180.9.110 with SMTP id y14mr53457673wia.61.1388659100059;
	Thu, 02 Jan 2014 02:38:20 -0800 (PST)
MIME-Version: 1.0
Received: by 10.216.167.66 with HTTP; Thu, 2 Jan 2014 02:37:59 -0800 (PST)
In-Reply-To: <52C53879.7090609@citrix.com>
References: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
	<52C53879.7090609@citrix.com>
From: Manohar Vanga <manohar.vanga@gmail.com>
Date: Thu, 2 Jan 2014 11:37:59 +0100
Message-ID: <CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9098740302499748454=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9098740302499748454==
Content-Type: multipart/alternative; boundary=001a11c2b8985f90e004eefa64fe

--001a11c2b8985f90e004eefa64fe
Content-Type: text/plain; charset=UTF-8

Hi Andrew,

Thanks for the quick reply! My comments are inline:


>  Using printk()s in the code is going to skew the timing terribly.
>

Would this affect the boot process of dom0 in any way? Other than that, it
doesn't seem like it would cause any problems, right?


> A serial console and the 'q' debug key is probably a good start, to see
> some vcpu state.
>

I'll give this a try but the problem is that I'm not entirely sure what I'm
looking for. As I mentioned, the vcpu of dom0 goes idle after the
crash-point.


> Beyond that, writing a custom keyhandler to dump all of the xfair state is
> probably the next thing to try.
>

Hmm, any hints on what I should be looking for? It's clear to me that the
vcpu goes idle after that point where dom0 stops booting. Is there anything
else I need to keep an eye out for? I'll also look into whether the skewed
timing can possibly lead to a broken dom0 boot.

Also, in case you glanced at the code, do you happen to see any glaringly
obvious bugs that you might have encountered? Otherwise I guess I'm looking
for some odd corner-case behavior.

Thanks and best regards!

-- 
/mvanga

--001a11c2b8985f90e004eefa64fe--


--===============9098740302499748454==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9098740302499748454==--


From xen-devel-bounces@lists.xen.org Thu Jan 02 10:46:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 10:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyfnS-00028S-6P; Thu, 02 Jan 2014 10:46:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VyfnR-00028N-AH
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 10:46:37 +0000
Received: from [85.158.139.211:16398] by server-5.bemta-5.messagelabs.com id
	26/BD-14928-C8345C25; Thu, 02 Jan 2014 10:46:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1388659594!7492097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22439 invoked from network); 2 Jan 2014 10:46:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 10:46:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="86994795"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 10:46:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	05:46:33 -0500
Message-ID: <52C54388.105@citrix.com>
Date: Thu, 2 Jan 2014 10:46:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 16:51, Don Slutz wrote:
> Revert of commit 7113a45451a9f656deeff070e47672043ed83664

Since this commit introduced a regression, a revert is the best thing to
do here.

Acked-by: David Vrabel <david.vrabel@citrix.com>

> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
> 
> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)

I guess Daniel tested a debug build without this crashkernel option.
This would place the crash region above the direct mapping region and
map_domain_page() would do the right thing.


> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                     kexec_crash_area.start >> PAGE_SHIFT,
> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
> +

This should be made conditional on the location of the crash region --
it is wrong to do this for portions of the crash region that are outside
the crash region.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 10:48:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 10:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyfp0-0002E4-Q9; Thu, 02 Jan 2014 10:48:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vyfoy-0002Dx-Uj
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 10:48:13 +0000
Received: from [85.158.143.35:24017] by server-2.bemta-4.messagelabs.com id
	6C/CB-11386-CE345C25; Thu, 02 Jan 2014 10:48:12 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388659690!9134510!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15749 invoked from network); 2 Jan 2014 10:48:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 10:48:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89183925"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 10:48:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 05:48:09 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vyfov-0007FL-Cs;
	Thu, 02 Jan 2014 10:48:09 +0000
Message-ID: <52C543E9.1050701@citrix.com>
Date: Thu, 2 Jan 2014 10:48:09 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C54388.105@citrix.com>
In-Reply-To: <52C54388.105@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 10:46, David Vrabel wrote:
> On 01/01/14 16:51, Don Slutz wrote:
>> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
> Since this commit introduced a regression, a revert is the best thing to
> do here.
>
> Acked-by: David Vrabel <david.vrabel@citrix.com>
>
>> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>>
>> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
> I guess Daniel tested a debug build without this crashkernel option.
> This would place the crash region above the direct mapping region and
> map_domain_page() would do the right thing.
>
>
>> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>> +                     kexec_crash_area.start >> PAGE_SHIFT,
>> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>> +
> This should be made conditional on the location of the crash region --
> it is wrong to do this for portions of the crash region that are outside
> the crash region.

Presume you mean "outside the direct-map region"?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:14:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:14:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygDw-0003ZH-LQ; Thu, 02 Jan 2014 11:14:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygDu-0003ZC-R7
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:13:59 +0000
Received: from [85.158.139.211:5179] by server-14.bemta-5.messagelabs.com id
	21/C4-24200-5F945C25; Thu, 02 Jan 2014 11:13:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1388661235!7490048!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22708 invoked from network); 2 Jan 2014 11:13:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:13:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87000073"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:13:55 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:13:54 -0500
Message-ID: <52C549F1.8070005@citrix.com>
Date: Thu, 2 Jan 2014 11:13:53 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-3-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-3-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 02/18] xen/pvh/x86: Define what an PVH
	guest is (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> Which is a PV guest with auto page translation enabled
> and with vector callback. It is a cross between PVHVM and PV.
> 
> The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
> with modifications):
> 
> "* the guest uses auto translate:
>  - p2m is managed by Xen
>  - pagetables are owned by the guest
>  - mmu_update hypercall not available
> * it uses event callback and not vlapic emulation,
> * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
> 
> For a full list of hcalls supported for PVH, see pvh_hypercall64_table
> in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
> PV guest with auto translate, although it does use hvm_op for setting
> callback vector."
> 
> We don't have yet a Kconfig entry setup as we do not
> have all the parts ready for it - so we piggyback
> on the PVHVM config option. This scaffolding will
> be removed later.
> 
> Note that on ARM the concept of PVH is non-existent. As Ian
> put it: "an ARM guest is neither PV nor HVM nor PVHVM.
> It's a bit like PVH but is different also (it's further towards
> the H end of the spectrum than even PVH).". As such these
> options (PVHVM, PVH) are never enabled nor seen on ARM
> compilations.
[...]
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -29,4 +29,20 @@ extern enum xen_domain_type xen_domain_type;
>  #define xen_initial_domain()	(0)
>  #endif	/* CONFIG_XEN_DOM0 */
>  
> +#ifdef CONFIG_XEN_PVHVM
> +/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */

This is a bit confusing.  I think it would be better to add the
CONFIG_XEN_PVH option with this patch but make it default n and not
possible to enable.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:16:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygGS-0003fq-97; Thu, 02 Jan 2014 11:16:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VygGQ-0003fk-AG
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 11:16:34 +0000
Received: from [85.158.139.211:9071] by server-15.bemta-5.messagelabs.com id
	8E/02-08490-19A45C25; Thu, 02 Jan 2014 11:16:33 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388661391!7500744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23641 invoked from network); 2 Jan 2014 11:16:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:16:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208,217";a="89189587"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:16:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 06:16:30 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1VygGM-0007hy-BR;
	Thu, 02 Jan 2014 11:16:30 +0000
Message-ID: <52C54A8E.9070100@citrix.com>
Date: Thu, 2 Jan 2014 11:16:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Manohar Vanga <manohar.vanga@gmail.com>
References: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
	<52C53879.7090609@citrix.com>
	<CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com>
In-Reply-To: <CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6773105296145627903=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6773105296145627903==
Content-Type: multipart/alternative;
	boundary="------------040202090004010709080303"

--------------040202090004010709080303
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 02/01/14 10:37, Manohar Vanga wrote:
> Hi Andrew,
>
> Thanks for the quick reply! My comments are inline:
>  
>
>     Using printk()s in the code is going to skew the timing terribly.
>
>
> Would this affect the boot process of dom0 in any way? Other than
> that, it doesn't seem like it would cause any problems right?

Not directly, but it means the symptoms are likely to change simply by
looking for them.

>  
>
>     A serial console and the 'q' debug key is probably a good start,
>     to see some vcpu state.
>
>
> I'll give this a try but the problem is that I'm not entirely sure
> what I'm looking for. As I mentioned, the vcpu of dom0 goes idle after
> the crash-point.

The pause_count and pause_flags are what you are looking for to start with.

>  
>
>     Beyond that, writing a custom keyhandler to dump all of the xfair
>     state is probably the next thing to try.
>
>
> Hmm any hints on what I should be looking for? It's clear to me that
> the vcpu goes idle after that point where dom0 stops booting. Is there
> anything else I need to keep an eye out for? I'll also look into
> whether the skewed timing can possibly lead to a broken dom0 boot.

You have cause and effect mixed up.  (Presumably) the scheduler makes a
mistake and fails to reschedule dom0's vcpu, at which point dom0 makes
no further progress booting.

You will need to work out why the scheduler is not rescheduling the
vcpu, which involves knowing the complete state of the scheduler, and
the vcpu scheduling information.

>
> Also, in case you glanced at the code, do you happen to see any
> glaringly obvious bugs that you might have encountered? Otherwise I
> guess I'm looking for some odd corner-case behavior.

Writing a scheduler is probably the single most complicated task
available.  It is absolutely full of subtle logic issues which are not
obvious in the slightest.

I have no specific knowledge about schedulers or writing them, so I really
can't help you too much with the specifics of your code.

~Andrew

--------------040202090004010709080303--


--===============6773105296145627903==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6773105296145627903==--


From xen-devel-bounces@lists.xen.org Thu Jan 02 11:16:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygGS-0003fq-97; Thu, 02 Jan 2014 11:16:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VygGQ-0003fk-AG
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 11:16:34 +0000
Received: from [85.158.139.211:9071] by server-15.bemta-5.messagelabs.com id
	8E/02-08490-19A45C25; Thu, 02 Jan 2014 11:16:33 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388661391!7500744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23641 invoked from network); 2 Jan 2014 11:16:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:16:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208,217";a="89189587"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:16:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 06:16:30 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1VygGM-0007hy-BR;
	Thu, 02 Jan 2014 11:16:30 +0000
Message-ID: <52C54A8E.9070100@citrix.com>
Date: Thu, 2 Jan 2014 11:16:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Manohar Vanga <manohar.vanga@gmail.com>
References: <CAEktxaGXyoBO9bdjxAcO=BH45-cbD52VLreGWxbAG+J_fRRRvA@mail.gmail.com>
	<52C53879.7090609@citrix.com>
	<CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com>
In-Reply-To: <CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem with simple scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6773105296145627903=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6773105296145627903==
Content-Type: multipart/alternative;
	boundary="------------040202090004010709080303"

--------------040202090004010709080303
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 02/01/14 10:37, Manohar Vanga wrote:
> Hi Andrew,
>
> Thanks for the quick reply! My comments are inline:
>  
>
>     Using printk()s in the code is going to skew the timing terribly.
>
>
> Would this affect the boot process of dom0 in any way? Other than
> that, it doesn't seem like it would cause any problems right?

Not directly, but it means the symptoms are likely to change simply by
looking for them.

>  
>
>     A serial console and the 'q' debug key is probably a good start,
>     to see some vcpu state.
>
>
> I'll give this a try but the problem is that I'm not entirely sure
> what I'm looking for. As I mentioned, the vcpu of dom0 goes idle after
> the crash-point.

The pause_count and pause_flags are what you are looking for to start with.

>  
>
>     Beyond that, writing a custom keyhandler to dump all of the xfair
>     state is probably the next thing to try.
>
>
> Hmm any hints on what I should be looking for? It's clear to me that
> the vcpu goes idle after that point where dom0 stops booting. Is there
> anything else I need to keep an eye out for? I'll also look into
> whether the skewed timing can possibly lead to a broken dom0 boot.

You have cause and effect mixed up.  (Presumably) the scheduler makes a
mistake and fails to reschedule dom0's vcpu, at which point dom0 makes
no further progress booting.

You will need to work out why the scheduler is not rescheduling the
vcpu, which involves knowing the complete state of the scheduler, and
the vcpu scheduling information.

>
> Also, in case you glanced at the code, do you happen to see any
> glaringly obvious bugs that you might have encountered? Otherwise I
> guess I'm looking for some odd corner-case behavior.

Writing a scheduler is probably the single most complicated task
available.  It is absolutely full of subtle logic issues which are not
obvious in the slightest.

I have no specific knowledge about schedulers or writing them, so really
cant help you too much with the specifics of your code.

~Andrew

--------------040202090004010709080303
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 02/01/14 10:37, Manohar Vanga wrote:<br>
    </div>
    <blockquote
cite="mid:CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>Hi Andrew,</div>
            <div><br>
            </div>
            <div>Thanks for the quick reply! My comments are inline:</div>
            <div>Â </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
              <div text="#000000" bgcolor="#FFFFFF"> Using printk()s in
                the code is going to skew the timing terribly.<br>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>Would this affect the boot process of dom0 in any way?
              Other than that, it doesn't seem like it would cause any
              problems right?</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    Not directly, but it means the symptoms are likely to change simply
    by looking for them.<br>
    <br>
    <blockquote
cite="mid:CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>Â </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
              <div text="#000000" bgcolor="#FFFFFF"> A serial console
                and the 'q' debug key is probably a good start, to see
                some vcpu state.<br>
              </div>
            </blockquote>
            <div><br>
            </div>
            <div>I'll give this a try but the problem is that I'm not
              entirely sure what I'm looking for. As I mentioned, the
              vcpu of dom0 goes idle after the crash-point.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    The pause_count and pause_flags are what you are looking for to
    start with.<br>
    <br>
    <blockquote
cite="mid:CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>Â </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
              <div text="#000000" bgcolor="#FFFFFF"> Beyond that,
                writing a custom keyhandler to dump all of the xfair
                state is probably the next thing to try.</div>
            </blockquote>
            <div><br>
            </div>
            <div>Hmm any hints on what I should be looking for? It's
              clear to me that the vcpu goes idle after that point where
              dom0 stops booting. Is there anything else I need to keep
              an eye out for? I'll also look into whether the skewed
              timing can possibly lead to a broken dom0 boot.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    You have cause and effect mixed up.  (Presumably) the scheduler
    makes a mistake and fails to reschedule dom0's vcpu, at which point
    dom0 makes no further progress booting.<br>
    <br>
    You will need to work out why the scheduler is not rescheduling the
    vcpu, which involves knowing the complete state of the scheduler,
    and the vcpu scheduling information.<br>
    <br>
    <blockquote
cite="mid:CAEktxaFL=3cmU4vZS2akiAR2vG-3d+9HwTZvBvf5JXuThHoOKg@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <div>Also, in case you glanced at the code, do you happen to
              see any glaringly obvious bugs that you might have
              encountered? Otherwise I guess I'm looking for some odd
              corner-case behavior.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    Writing a scheduler is probably the single most complicated task
    available.  It is absolutely full of subtle logic issues which are
    not obvious in the slightest.<br>
    <br>
    I have no specific knowledge about schedulers or writing them, so I
    really can't help you too much with the specifics of your code.<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------040202090004010709080303--


--===============6773105296145627903==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6773105296145627903==--


From xen-devel-bounces@lists.xen.org Thu Jan 02 11:17:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygHg-0003mR-Lw; Thu, 02 Jan 2014 11:17:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygHg-0003mL-1t
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:17:52 +0000
Received: from [85.158.143.35:12174] by server-1.bemta-4.messagelabs.com id
	22/A8-02132-FDA45C25; Thu, 02 Jan 2014 11:17:51 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388661463!9183539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32637 invoked from network); 2 Jan 2014 11:17:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:17:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87000733"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:17:43 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:17:42 -0500
Message-ID: <52C54AD5.6060501@citrix.com>
Date: Thu, 2 Jan 2014 11:17:41 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 04/18] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
[...]
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:22:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygLl-0004Jz-EB; Thu, 02 Jan 2014 11:22:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygLg-0004Js-Li
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:22:00 +0000
Received: from [85.158.139.211:9690] by server-11.bemta-5.messagelabs.com id
	0F/BC-23268-7DB45C25; Thu, 02 Jan 2014 11:21:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388661718!7506258!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17599 invoked from network); 2 Jan 2014 11:21:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:21:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89190547"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:21:57 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:21:56 -0500
Message-ID: <52C54BD3.30101@citrix.com>
Date: Thu, 2 Jan 2014 11:21:55 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
 xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> The revector and copying of the P2M only happens when
> !auto-xlat and on 64-bit builds. It is not obvious from
> the code, so let's have separate 32 and 64-bit functions.
> 
> We also invert the check for auto-xlat to make the code
> flow simpler.
[...]
> @@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
>  	 * anything at this stage. */
>  	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
>  #endif
> -skip:
> +}
> +#else
> +static void __init xen_pagetable_p2m_copy(void)
> +{
> +	/* Nada! */
> +}
>  #endif
> +
> +static void __init xen_pagetable_init(void)
> +{
> +	paging_init();
> +	xen_setup_shared_info();

I would prefer

#ifdef CONFIG_X86_64

> +	xen_pagetable_p2m_copy();

#endif

rather than the empty stub function.  I think this makes it clearer what
is 64-bit specific.

>  	xen_post_allocator_init();
>  }
>  static void xen_write_cr2(unsigned long cr2)

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygOX-0004SD-3A; Thu, 02 Jan 2014 11:24:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygOV-0004S4-E8
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:24:55 +0000
Received: from [85.158.137.68:6964] by server-15.bemta-3.messagelabs.com id
	0F/22-11556-68C45C25; Thu, 02 Jan 2014 11:24:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388661892!6836646!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20147 invoked from network); 2 Jan 2014 11:24:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89191456"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:24:52 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:24:51 -0500
Message-ID: <52C54C82.5010802@citrix.com>
Date: Thu, 2 Jan 2014 11:24:50 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> .. which are surprisingly small compared to the amount for PV code.
> 
> PVH uses mostly native mmu ops, we leave the generic (native_*) for
> the majority and just overwrite the baremetal with the ones we need.
> 
> We also optimize one - the TLB flush. The native operation would
> needlessly IPI offline VCPUs causing extra wakeups. Using the
> Xen one avoids that and lets the hypervisor determine which
> VCPU needs the TLB flush.

This TLB flush optimization should be a separate patch.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:28:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygRX-0004bI-MU; Thu, 02 Jan 2014 11:28:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygRV-0004bD-KE
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:28:01 +0000
Received: from [193.109.254.147:61047] by server-12.bemta-14.messagelabs.com
	id A2/B4-13681-04D45C25; Thu, 02 Jan 2014 11:28:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1388662079!8464590!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8075 invoked from network); 2 Jan 2014 11:28:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:28:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89191828"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:27:58 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:27:57 -0500
Message-ID: <52C54D3C.4050101@citrix.com>
Date: Thu, 2 Jan 2014 11:27:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> For PVHVM the shared_info structure is provided via the same way
> as for normal PV guests (see include/xen/interface/xen.h).
> 
> That is during bootup we get 'xen_start_info' via the %esi register
> in startup_xen. Then later we extract the 'shared_info' from said
> structure (in xen_setup_shared_info) and start using it.
> 
> The 'xen_setup_shared_info' is all setup to work with auto-xlat
> guests, but there are two functions which it calls that are not:
> xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
> This patch modifies those to work in auto-xlat mode.
[...]
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
>  		xen_vcpu_setup(cpu);
>  
>  	/* xen_vcpu_setup managed to place the vcpu_info within the
> -	   percpu area for all cpus, so make use of it */
> -	if (have_vcpu_info_placement) {
> +	 * percpu area for all cpus, so make use of it. Note that for
> +	 * PVH we want to use native IRQ mechanism. */
> +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
>  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
>  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
>  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);

Should this be in a separate patch: "xen/pvh: use native irq ops"?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:28:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygRX-0004bI-MU; Thu, 02 Jan 2014 11:28:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygRV-0004bD-KE
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:28:01 +0000
Received: from [193.109.254.147:61047] by server-12.bemta-14.messagelabs.com
	id A2/B4-13681-04D45C25; Thu, 02 Jan 2014 11:28:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1388662079!8464590!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8075 invoked from network); 2 Jan 2014 11:28:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:28:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89191828"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:27:58 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:27:57 -0500
Message-ID: <52C54D3C.4050101@citrix.com>
Date: Thu, 2 Jan 2014 11:27:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> For PVHVM the shared_info structure is provided in the same way
> as for normal PV guests (see include/xen/interface/xen.h).
> 
> That is, during bootup we get 'xen_start_info' via the %esi register
> in startup_xen. Then later we extract the 'shared_info' from said
> structure (in xen_setup_shared_info) and start using it.
> 
> The 'xen_setup_shared_info' function is all set up to work with
> auto-xlat guests, but there are two functions which it calls that are
> not: xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
> This patch modifies those to work in auto-xlat mode.
[...]
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
>  		xen_vcpu_setup(cpu);
>  
>  	/* xen_vcpu_setup managed to place the vcpu_info within the
> -	   percpu area for all cpus, so make use of it */
> -	if (have_vcpu_info_placement) {
> +	 * percpu area for all cpus, so make use of it. Note that for
> +	 * PVH we want to use native IRQ mechanism. */
> +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
>  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
>  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
>  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);

Should this be in a separate patch: "xen/pvh: use native irq ops"?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:31:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygUh-00055y-Kj; Thu, 02 Jan 2014 11:31:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygUf-00055n-DI
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:31:17 +0000
Received: from [85.158.143.35:39339] by server-2.bemta-4.messagelabs.com id
	F0/6A-11386-40E45C25; Thu, 02 Jan 2014 11:31:16 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388662274!9143829!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14445 invoked from network); 2 Jan 2014 11:31:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:31:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87003144"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:31:14 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:31:13 -0500
Message-ID: <52C54E00.7010508@citrix.com>
Date: Thu, 2 Jan 2014 11:31:12 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 08/18] xen/pvh: Load GDT/GS in early PV
 bootup code for BSP.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> During early bootup we start life using the Xen provided
> GDT, which means that we are running with %cs segment set
> to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).
> 
> But for PVH we want to use the HVM type mechanism for
> segment operations. As such we need to switch to the HVM
> one and also reload ourselves with __KERNEL_CS:eip
> to run with the proper GDT and segment.
> 
> For HVM this is usually done in 'secondary_startup_64'
> (in head_64.S) but since we are not taking that bootup
> path (we start in PV - xen_start_kernel) we need to do
> it in the early PV bootup paths.
> 
> For good measure we also zero out the %fs, %ds, and %es
> (not strictly needed as Xen has already cleared them
> for us). The %gs is loaded by 'switch_to_new_gdt'.
[...]
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1414,8 +1414,43 @@ static void __init xen_boot_params_init_edd(void)
>   * we do this, we have to be careful not to call any stack-protected
>   * function, which is most of the kernel.
>   */
> -static void __init xen_setup_stackprotector(void)
> +static void __init xen_setup_gdt(void)
>  {
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> +#ifdef CONFIG_X86_64
> +		unsigned long dummy;
> +
> +		switch_to_new_gdt(0); /* GDT and GS set */
> +
> +		/* We are switching of the Xen provided GDT to our HVM mode
> +		 * GDT. The new GDT has  __KERNEL_CS with CS.L = 1
> +		 * and we are jumping to reload it.
> +		 */
> +		asm volatile ("pushq %0\n"
> +			      "leaq 1f(%%rip),%0\n"
> +			      "pushq %0\n"
> +			      "lretq\n"
> +			      "1:\n"
> +			      : "=&r" (dummy) : "0" (__KERNEL_CS));
> +
> +		/*
> +		 * While not needed, we also set the %es, %ds, and %fs
> +		 * to zero. We don't care about %ss as it is NULL.
> +		 * Strictly speaking this is not needed as Xen zeros those
> +		 * out (and also MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE)
> +		 *
> +		 * Linux zeros them in cpu_init() and in secondary_startup_64
> +		 * (for BSP).
> +		 */
> +		loadsegment(es, 0);
> +		loadsegment(ds, 0);
> +		loadsegment(fs, 0);
> +#else
> +		/* PVH: TODO Implement. */
> +		BUG();
> +#endif
> +		return;
> +	}
>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;

If PVH uses native GDT why are these (and possibly other?) GDT ops needed?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:38:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygbY-0005Ic-P5; Thu, 02 Jan 2014 11:38:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygbX-0005IX-0u
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:38:23 +0000
Received: from [193.109.254.147:54983] by server-9.bemta-14.messagelabs.com id
	EB/B6-13957-EAF45C25; Thu, 02 Jan 2014 11:38:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388662700!8451678!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6918 invoked from network); 2 Jan 2014 11:38:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89193547"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:38:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:38:19 -0500
Message-ID: <52C54FAA.4030206@citrix.com>
Date: Thu, 2 Jan 2014 11:38:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 12/18] xen/grants: Remove
 gnttab_max_grant_frames dependency on gnttab_init.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> The function gnttab_max_grant_frames() returns the maximum number
> of frames (pages) of grants we can have. Unfortunately it was
> dependent on gnttab_init() having been run first to initialize
> the boot max value (boot_max_nr_grant_frames).
> 
> This meant that users of gnttab_max_grant_frames would always
> get a zero value if they were called before gnttab_init() - such as
> 'platform_pci_init' (drivers/xen/platform-pci.c).
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

You can pull this out of the PVH series and merge early if you prefer.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:40:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:40:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygdG-0005jm-9t; Thu, 02 Jan 2014 11:40:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygdE-0005jd-TM
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:40:09 +0000
Received: from [85.158.137.68:53044] by server-14.bemta-3.messagelabs.com id
	6A/D0-06105-81055C25; Thu, 02 Jan 2014 11:40:08 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388662806!6878637!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2281 invoked from network); 2 Jan 2014 11:40:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:40:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87004693"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:40:00 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:39:59 -0500
Message-ID: <52C5500E.5010303@citrix.com>
Date: Thu, 2 Jan 2014 11:39:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 13/18] xen/grant-table: Refactor
	gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> We have this odd scenario where for the PV paths we take a shortcut
> but for the HVM paths we first ioremap xen_hvm_resume_frames, then
> assign it to gnttab_shared.addr. This is needed because gnttab_map
> uses gnttab_shared.addr.
> 
> Instead of having:
> 	if (pv)
> 		return gnttab_map
> 	if (hvm)
> 		...
> 
> 	gnttab_map
> 
> Let's move the HVM part before the gnttab_map and remove the
> first call to gnttab_map.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Again, feel free to apply early.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:40:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vygdi-0005nL-NP; Thu, 02 Jan 2014 11:40:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Vygdh-0005n8-2Q
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 11:40:37 +0000
Received: from [85.158.143.35:32043] by server-1.bemta-4.messagelabs.com id
	DD/83-02132-43055C25; Thu, 02 Jan 2014 11:40:36 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-21.messagelabs.com!1388662835!9225208!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29080 invoked from network); 2 Jan 2014 11:40:35 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 11:40:35 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 767B21A25E8;
	Thu,  2 Jan 2014 13:40:34 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6ABD436C01F; Thu,  2 Jan 2014 13:40:34 +0200 (EET)
Date: Thu, 2 Jan 2014 13:40:34 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: annie li <annie.li@oracle.com>
Message-ID: <20140102114034.GT2924@reaktio.net>
References: <52BD5FDD.6060009@gmail.com>
 <52C4F48F.5090003@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C4F48F.5090003@oracle.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Vasily Evseenko <svpcom@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
 drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 01:09:35PM +0800, annie li wrote:
> 
> On 2013/12/27 19:09, Vasily Evseenko wrote:
> >Hi,
> >
> >I've got domU crash (~ every 1-2 days under high network (tcp) load)
> >with message:
> >
> >-----
> >[2013-12-26 03:53:18] kernel BUG at drivers/net/xen-netfront.c:305!
> >[2013-12-26 03:53:18] invalid opcode: 0000 [#1] SMP
> >[2013-12-26 03:53:18] Modules linked in: ipt_REJECT iptable_filter
> >xt_set xt_REDIRECT iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4
> >nf_nat_ipv4 nf_nat
> >ip_tables ip_set_hash_net ip_set_hash_ip ip_set nfnetlink ip6t_REJECT
> >nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
> >ip6_table
> >s ipv6 ext3 jbd xen_netfront coretemp hwmon crc32_pclmul crc32c_intel
> >ghash_clmulni_intel microcode pcspkr ext4 jbd2 mbcache aesni_intel
> >ablk_helper c
> >ryptd lrw gf128mul glue_helper aes_x86_64 xen_blkfront dm_mirror
> >dm_region_hash dm_log dm_mod
> >[2013-12-26 03:53:18] CPU: 0 PID: 15126 Comm: python Not tainted
> >3.10.25-11.x86_64 #1
> >[2013-12-26 03:53:18] task: ffff8801e5d68ac0 ti: ffff8801e7392000
> >task.ti: ffff8801e7392000
> >[2013-12-26 03:53:18] RIP: e030:[<ffffffffa015d637>]
> >[<ffffffffa015d637>] xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> >[2013-12-26 03:53:18] RSP: e02b:ffff8801f2e03ce0  EFLAGS: 00010282
> >[2013-12-26 03:53:18] RAX: 00000000000001d4 RBX: ffff8801e5438800 RCX:
> >0000000000000001
> >[2013-12-26 03:53:18] RDX: 000000000000002a RSI: 0000000000000000 RDI:
> >0000000000002200
> >[2013-12-26 03:53:18] RBP: ffff8801f2e03d40 R08: 0000000000000000 R09:
> >0000000000001000
> >[2013-12-26 03:53:18] R10: ffff8801000083c0 R11: dead000000200200 R12:
> >0000000000000220
> >[2013-12-26 03:53:18] R13: ffff8801e6eec0c0 R14: 000000000000002a R15:
> >000000000239642a
> >[2013-12-26 03:53:18] FS:  00007f4cf48d57e0(0000)
> >GS:ffff8801f2e00000(0000) knlGS:0000000000000000
> >[2013-12-26 03:53:18] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
> >[2013-12-26 03:53:18] CR2: ffffffffff600400 CR3: 00000001e0db3000 CR4:
> >0000000000042660
> >[2013-12-26 03:53:18] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> >0000000000000000
> >[2013-12-26 03:53:18] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> >0000000000000400
> >[2013-12-26 03:53:18] Stack:
> >[2013-12-26 03:53:18]  ffff8801f2e03df0 02396417e5438000
> >ffff8801e5439d58 ffff8801e54394f0
> >[2013-12-26 03:53:18]  ffff8801e5438000 002affff00000013
> >ffff8801f2e03d40 ffff8801f2e03db0
> >[2013-12-26 03:53:18]  0000000000000010 ffff8800655e6ac0
> >ffff8801e5438800 ffff8801e511a000
> >[2013-12-26 03:53:18] Call Trace:
> >[2013-12-26 03:53:18]  <IRQ>
> >[2013-12-26 03:53:18]  [<ffffffffa015dc44>] xennet_poll+0x2f4/0x630
> >[xen_netfront]
> >[2013-12-26 03:53:18]  [<ffffffff810640a9>] ? raise_softirq_irqoff+0x9/0x50
> >[2013-12-26 03:53:18]  [<ffffffff8152050c>] ? dev_kfree_skb_irq+0x5c/0x70
> >[2013-12-26 03:53:18]  [<ffffffff810e4fb9>] ?
> >handle_irq_event_percpu+0xc9/0x210
> >[2013-12-26 03:53:18]  [<ffffffff81528022>] net_rx_action+0x112/0x290
> >[2013-12-26 03:53:18]  [<ffffffff810e514d>] ? handle_irq_event+0x4d/0x70
> >[2013-12-26 03:53:18]  [<ffffffff81063c97>] __do_softirq+0xf7/0x270
> >[2013-12-26 03:53:18]  [<ffffffff81600edc>] call_softirq+0x1c/0x30
> >[2013-12-26 03:53:18]  [<ffffffff81014505>] do_softirq+0x65/0xa0
> >[2013-12-26 03:53:18]  [<ffffffff810639c5>] irq_exit+0xc5/0xd0
> >[2013-12-26 03:53:18]  [<ffffffff81351e45>] xen_evtchn_do_upcall+0x35/0x50
> >[2013-12-26 03:53:18]  [<ffffffff81600f3e>]
> >xen_do_hypervisor_callback+0x1e/0x30
> >[2013-12-26 03:53:18]  <EOI>
> >[2013-12-26 03:53:18] Code: 8b 35 ee f9 bb e1 48 8d bb 08 0d 00 00 48 83
> >c6 64 e8 2e f2 f0 e0 8b 83 ec 0c 00 00 31 d2 89 c1 d1 e9 39 d1 76 9e e9
> >5a ff ff ff <0f> 0b eb fe 0f 0b 0f 1f 00 eb fb 66 66 66 66 66 2e 0f 1f
> >84 00
> >[2013-12-26 03:53:18] RIP  [<ffffffffa015d637>]
> >xennet_alloc_rx_buffers+0x347/0x360 [xen_netfront]
> >[2013-12-26 03:53:18]  RSP <ffff8801f2e03ce0>
> >------------
> >
> >dom0 and domU kernels are vanilla 3.10.25
> >host server has 4 cores x 2 threads with mapping: 4 - dom0, 2 - domU, 2
> >- domU
> >i've tried xen versions: 4.2.3 and 4.3.1
> >also i've tried to disable offloading on domU:  ethtool -K eth0 tx off
> >tso off gso off   ----  no effects
> >
> >domU's are under high TCP load (a lot of small tcp connections (web server))
> >sometimes  i've got on dom0:
> >---
> >[2013-12-26 00:16:30] (XEN) grant_table.c:289:d0 Increased maptrack size
> >to 2 frames
> >[2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> >99221507
> >[2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> >43646979
> >[2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> >43646979
> >[2013-12-26 03:53:18] (XEN) grant_table.c:1858:d0 Bad grant reference
> >99221507
> >[2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> >43646979
> >[2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> >99221507
> >[2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> >99221507
> >[2013-12-26 06:15:14] (XEN) grant_table.c:1858:d0 Bad grant reference
> >99221507
> >
> >---
> >
> >It seems the root of the problem is in the dom0 messages above. Is it a
> >HW failure or an overflow of some internal kernel structure?
> From the stack trace, this issue looks very likely to be the same as
> one that has already been fixed. There was something wrong with
> counting slots in netback: responses then overlap requests in the
> ring, the grant copy picks up a bogus grant reference, and
> grant_table.c reports the error. See
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
> There was some back-and-forth work on this issue, but the fix has
> been in mainline since v3.12-rc4. Would you like to try a newer
> kernel version?
> 
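
The overlap described above can be modeled with a toy ring (a deliberately simplified sketch: the flat array and the `post_requests`/`write_responses` helpers are illustrative assumptions, not the real netif ring ABI, which uses a union of requests and responses with free-running producer/consumer indices):

```c
#include <assert.h>

#define TOY_RING_SIZE 4

/* Toy model of a shared ring: each slot holds a grant reference
 * posted by the frontend and still owned by it until a response
 * for that slot arrives. */
static void post_requests(unsigned int ring[TOY_RING_SIZE])
{
	for (unsigned int i = 0; i < TOY_RING_SIZE; i++)
		ring[i] = i + 1;	/* grant refs 1..4, all live */
}

/* A backend that miscounts slots and writes more responses than
 * there are requests wraps around and clobbers a slot whose grant
 * reference is still live; a later grant copy then uses a garbage
 * reference, which Xen logs as "Bad grant reference". */
static void write_responses(unsigned int ring[TOY_RING_SIZE], unsigned int n)
{
	for (unsigned int i = 0; i < n; i++)
		ring[i % TOY_RING_SIZE] = 0;	/* response overwrites slot */
}
```

With four references posted, writing three responses leaves slot 3's reference intact; writing one response too many wraps onto slot 3 and destroys a reference the frontend still considers live, which matches the hypervisor log above.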

If that patch fixes the bug, it sounds like it needs to be backported
to at least 3.10.x as well.

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:43:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyggS-00061F-Bg; Thu, 02 Jan 2014 11:43:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VyggQ-000613-Sc
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:43:26 +0000
Received: from [85.158.143.35:3051] by server-2.bemta-4.messagelabs.com id
	12/2D-11386-DD055C25; Thu, 02 Jan 2014 11:43:25 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1388663004!2096379!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7449 invoked from network); 2 Jan 2014 11:43:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:43:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89194957"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:43:09 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:43:09 -0500
Message-ID: <52C550CC.7010201@citrix.com>
Date: Thu, 2 Jan 2014 11:43:08 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-17-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-17-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 16/18] xen/pvh: Piggyback on PVHVM
	XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - certain things in it work as
> they do under HVM and others as under PV. For the XenBus
> mechanism we want to use the PVHVM path.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:44:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyghI-000671-Q5; Thu, 02 Jan 2014 11:44:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VyghH-00066o-8s
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:44:19 +0000
Received: from [85.158.137.68:46382] by server-6.bemta-3.messagelabs.com id
	4B/10-04868-21155C25; Thu, 02 Jan 2014 11:44:18 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388663056!6879512!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21898 invoked from network); 2 Jan 2014 11:44:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:44:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87005408"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:44:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:44:15 -0500
Message-ID: <52C5510E.1080503@citrix.com>
Date: Thu, 2 Jan 2014 11:44:14 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 17/18] xen/pvh/arm/arm64: Disable PV
 code that does not work with PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> As we do not yet have a mechanism for that.
> 
> This also impacts the ARM/ARM64 code (which does not have
> hotplug support yet).

Subject needs to be "xen/pvh: disable VCPU hotplug with PVH" or similar
since it's only disabling this one feature.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:49:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygmE-0006gw-Kq; Thu, 02 Jan 2014 11:49:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygmD-0006gV-Br
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:49:25 +0000
Received: from [85.158.137.68:42001] by server-12.bemta-3.messagelabs.com id
	46/D0-20055-44255C25; Thu, 02 Jan 2014 11:49:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388663362!5722982!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31487 invoked from network); 2 Jan 2014 11:49:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:49:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89196057"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:48:51 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:48:50 -0500
Message-ID: <52C55222.7070801@citrix.com>
Date: Thu, 2 Jan 2014 11:48:50 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 18/18] xen/pvh: Support ParaVirtualized
 Hardware extensions (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH allows a PV Linux guest to utilize extended hardware capabilities,
> such as running MMU updates in an HVM container.
> 
> The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
> with modifications):
> 
> "* the guest uses auto translate:
>  - p2m is managed by Xen
>  - pagetables are owned by the guest
>  - mmu_update hypercall not available
> * it uses event callback and not vlapic emulation,
> * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
> 
> For a full list of hcalls supported for PVH, see pvh_hypercall64_table
> in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
> PV guest with auto translate, although it does use hvm_op for setting
> callback vector."
> 
> Use .ascii and .asciz to define the Xen feature string. Note that the PVH
> string must be on a single line (not split across lines with \) to keep the
> assembler from putting a null char after each string segment before the \.
> This patch allows PVH to be configured and enabled.
> 
> Lastly remove some of the scaffolding.
[...]
> --- a/arch/x86/xen/Kconfig
> +++ b/arch/x86/xen/Kconfig
> @@ -51,3 +51,11 @@ config XEN_DEBUG_FS
>  	  Enable statistics output and various tuning options in debugfs.
>  	  Enabling this option may incur a significant performance overhead.
>  
> +config XEN_PVH
> +	bool "Support for running as a PVH guest"
> +	depends on X86_64 && XEN && XEN_PVHVM

Would select XEN_PVHVM be more useful?  It may not be obvious to a user
that PV with hardware extension depends on HVM with PV extensions.
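For illustration only (a sketch against the hunk above, not a tested patch), the `select` form would look like:

```kconfig
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on X86_64 && XEN
	select XEN_PVHVM
	default n
```

With `select`, enabling XEN_PVH forces XEN_PVHVM on, instead of hiding the PVH option until the user discovers and enables XEN_PVHVM themselves.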

> +	default n
> +	help
> +	   This option enables support for running as a PVH guest (PV guest
> +	   using hardware extensions) under a suitably capable hypervisor.
> +	   If unsure, say N.

This help text needs to clearly state that PVH support is experimental
or a tech preview and the ABI is subject to change and PVH guests may
not run on newer hypervisors.  Unless the plan is to only merge the
Linux support once the hypervisor ABI is finalized.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:49:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vygmg-0006im-22; Thu, 02 Jan 2014 11:49:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vygme-0006iX-T9
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 11:49:53 +0000
Received: from [193.109.254.147:50093] by server-11.bemta-14.messagelabs.com
	id 91/D7-20576-06255C25; Thu, 02 Jan 2014 11:49:52 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388663390!4952006!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22412 invoked from network); 2 Jan 2014 11:49:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:49:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89196204"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 11:49:49 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:49:49 -0500
Message-ID: <52C5525B.4040704@citrix.com>
Date: Thu, 2 Jan 2014 11:49:47 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C54388.105@citrix.com> <52C543E9.1050701@citrix.com>
In-Reply-To: <52C543E9.1050701@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 10:48, Andrew Cooper wrote:
> On 02/01/14 10:46, David Vrabel wrote:
>> On 01/01/14 16:51, Don Slutz wrote:
>>> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
>> Since this commit introduced a regression, a revert is the best thing to
>> do here.
>>
>> Acked-by: David Vrabel <david.vrabel@citrix.com>
>>
>>> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>>>
>>> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
>> I guess Daniel tested a debug build without this crashkernel option.
>> This would place the crash region above the direct mapping region and
>> map_domain_page() would do the right thing.
>>
>>
>>> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>>> +                     kexec_crash_area.start >> PAGE_SHIFT,
>>> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>>> +
>> This should be made conditional on the location of the crash region --
>> it is wrong to do this for portions of the crash region that are outside
>> the crash region.
> 
> Presume you mean "outside the direct-map region"?

Yes.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 11:50:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 11:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygnX-0006qB-Rf; Thu, 02 Jan 2014 11:50:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VygnV-0006py-Hi
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 11:50:45 +0000
Received: from [85.158.137.68:54126] by server-13.bemta-3.messagelabs.com id
	45/4D-28603-49255C25; Thu, 02 Jan 2014 11:50:44 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388663442!6841550!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13546 invoked from network); 2 Jan 2014 11:50:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 11:50:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87006710"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 11:50:42 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	06:50:41 -0500
Message-ID: <52C55290.8070703@citrix.com>
Date: Thu, 2 Jan 2014 11:50:40 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ben Hutchings <ben@decadent.org.uk>
References: <1388519187.2900.99.camel@deadeye.wl.decadent.org.uk>
In-Reply-To: <1388519187.2900.99.camel@deadeye.wl.decadent.org.uk>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen/pci: Fix build on non-x86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/12/13 19:46, Ben Hutchings wrote:
> We can't include <asm/pci_x86.h> if this isn't x86, and we only need
> it if CONFIG_PCI_MMCONFIG is enabled.
> 
> Fixes: 8deb3eb1461e ('xen/mcfg: Call PHYSDEVOP_pci_mmcfg_reserved for MCFG areas.')
> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

> --- a/drivers/xen/pci.c
> +++ b/drivers/xen/pci.c
> @@ -26,7 +26,9 @@
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/hypercall.h>
>  #include "../pci/pci.h"
> +#ifdef CONFIG_PCI_MMCONFIG
>  #include <asm/pci_x86.h>
> +#endif
>  
>  static bool __read_mostly pci_seg_supported = true;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 12:02:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 12:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygyI-0007aK-Ul; Thu, 02 Jan 2014 12:01:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1VygyI-0007aD-5g
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 12:01:54 +0000
Received: from [193.109.254.147:40743] by server-16.bemta-14.messagelabs.com
	id C3/55-20600-13555C25; Thu, 02 Jan 2014 12:01:53 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388664111!4954506!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31381 invoked from network); 2 Jan 2014 12:01:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 12:01:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89198021"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 12:01:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 07:01:50 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1VygyE-0000hj-LS;
	Thu, 02 Jan 2014 12:01:50 +0000
Date: Thu, 2 Jan 2014 12:01:50 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: annie li <annie.li@oracle.com>
Message-ID: <20140102120150.GA1444@zion.uk.xensource.com>
References: <52BD5FDD.6060009@gmail.com>
 <52C4F48F.5090003@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C4F48F.5090003@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Vasily Evseenko <svpcom@gmail.com>, wei.liu2@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
 drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 01:09:35PM +0800, annie li wrote:
[...]
> >It seems the root of the problem is in the dom0 messages above. Is it a HW
> >failure or an overflow of some internal kernel structure?
> From the stack, this issue looks very likely to be the same as one that
> has already been fixed. There was a bug in counting slots in netback:
> responses then overlapped requests in the ring, so the grant copy got a
> wrong grant reference and grant_table.c threw the error. See
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
> There was some back-and-forth work on this issue, but the fix has been
> in place since v3.12-rc4. Would you like to try a newer kernel version?
> 

FWIW the patch you mentioned was backported to the kernel he used.

Wei.

> Thanks
> Annie
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 12:02:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 12:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VygyI-0007aK-Ul; Thu, 02 Jan 2014 12:01:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1VygyI-0007aD-5g
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 12:01:54 +0000
Received: from [193.109.254.147:40743] by server-16.bemta-14.messagelabs.com
	id C3/55-20600-13555C25; Thu, 02 Jan 2014 12:01:53 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388664111!4954506!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31381 invoked from network); 2 Jan 2014 12:01:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 12:01:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89198021"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 12:01:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 07:01:50 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1VygyE-0000hj-LS;
	Thu, 02 Jan 2014 12:01:50 +0000
Date: Thu, 2 Jan 2014 12:01:50 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: annie li <annie.li@oracle.com>
Message-ID: <20140102120150.GA1444@zion.uk.xensource.com>
References: <52BD5FDD.6060009@gmail.com>
 <52C4F48F.5090003@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C4F48F.5090003@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Vasily Evseenko <svpcom@gmail.com>, wei.liu2@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
 drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 01:09:35PM +0800, annie li wrote:
[...]
> >It seems the root of the problem is in the dom0 messages above. Is it
> >a HW failure or an overflow of some internal kernel structure?
> From the stack, this issue looks very likely to be the same as one
> that has already been fixed. There was something wrong with counting
> slots in netback: responses overlapped requests in the ring, so
> grantcopy picked up a wrong grant reference and threw an error in
> grant_table.c. See
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
> There was some back-and-forth work on this issue, but the fix has
> been in place since v3.12-rc4. Would you like to try a newer kernel
> version?
> 

FWIW the patch you mentioned was backported to the kernel he used.

Wei.

> Thanks
> Annie
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 12:53:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 12:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyhlb-0001F3-OW; Thu, 02 Jan 2014 12:52:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyhlb-0001Ey-6a
	for xen-devel@lists.xensource.com; Thu, 02 Jan 2014 12:52:51 +0000
Received: from [193.109.254.147:61092] by server-7.bemta-14.messagelabs.com id
	EA/FB-15500-22165C25; Thu, 02 Jan 2014 12:52:50 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388667168!8446710!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5056 invoked from network); 2 Jan 2014 12:52:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 12:52:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87020409"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 12:52:48 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	07:52:48 -0500
Message-ID: <52C5611D.3020600@citrix.com>
Date: Thu, 2 Jan 2014 12:52:45 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <alpine.DEB.2.02.1312171738550.8667@kaball.uk.xensource.com>
	<20131231145001.GA3349@phenom.dumpdata.com>
In-Reply-To: <20131231145001.GA3349@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: boris.ostrovsky@oracle.com, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] allow xenfb initialization for hvm guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/12/13 14:50, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 17, 2013 at 05:53:13PM +0000, Stefano Stabellini wrote:
>> There is no reason why an HVM guest shouldn't be allowed to use xenfb.
>> As a matter of fact, ARM guests (HVM from Linux's point of view) can
>> use xenfb. Given that no Xen toolstack configures a xenfb backend for
>> x86 HVM guests, they are not affected.
> 
> Could you reference the upstream git commit that enables this?
> 
> Also, please CC the maintainers of drivers/video/*.
> 
> And lastly, let's also CC David and Boris on it.
> 
>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: David Vrabel <david.vrabel@citrix.com>

David


From xen-devel-bounces@lists.xen.org Thu Jan 02 13:53:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 13:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyii3-0003yV-J4; Thu, 02 Jan 2014 13:53:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyii2-0003yP-Nd
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 13:53:14 +0000
Received: from [85.158.137.68:9131] by server-17.bemta-3.messagelabs.com id
	E6/2A-15965-A4F65C25; Thu, 02 Jan 2014 13:53:14 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388670792!6931577!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24865 invoked from network); 2 Jan 2014 13:53:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 13:53:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="87034223"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 13:53:09 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	08:53:09 -0500
Message-ID: <52C56F43.6030804@citrix.com>
Date: Thu, 2 Jan 2014 13:53:07 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Levente Kurusa <levex@linux.com>
References: <1387465429-3568-2-git-send-email-levex@linux.com>
	<1387465429-3568-27-git-send-email-levex@linux.com>
In-Reply-To: <1387465429-3568-27-git-send-email-levex@linux.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 26/38] xen: xenbus: add missing put_device
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/13 15:03, Levente Kurusa wrote:
> This is required so that we give up the last reference to the device.
> 
> Signed-off-by: Levente Kurusa <levex@linux.com>
> ---
>  drivers/xen/xenbus/xenbus_probe.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> index 3c0a74b..4abb9ee 100644
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -465,8 +465,10 @@ int xenbus_probe_node(struct xen_bus_type *bus,
>  
>  	/* Register with generic device framework. */
>  	err = device_register(&xendev->dev);
> -	if (err)
> +	if (err) {
> +		put_device(&xendev->dev);
>  		goto fail;
> +	}
>  
>  	return 0;
>  fail:

There is a kfree(xendev) here so this introduces a double-free.

David



From xen-devel-bounces@lists.xen.org Thu Jan 02 14:18:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 14:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyj6U-0004h8-5p; Thu, 02 Jan 2014 14:18:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyj6S-0004h3-Ne
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 14:18:28 +0000
Received: from [193.109.254.147:63051] by server-12.bemta-14.messagelabs.com
	id BF/32-13681-43575C25; Thu, 02 Jan 2014 14:18:28 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1388672306!8495622!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 758 invoked from network); 2 Jan 2014 14:18:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 14:18:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,590,1384300800"; d="scan'208";a="89229417"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 14:18:25 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	09:18:25 -0500
Message-ID: <52C5752F.3080005@citrix.com>
Date: Thu, 2 Jan 2014 14:18:23 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
References: <52652534.2040303@oracle.com>	<526545E002000078000FC5F1@nat28.tlf.novell.com>	<52652E95.3020305@oracle.com>
	<20131021140607.GQ20913@ics.muni.cz>	<20131021141855.GA4211@phenom.dumpdata.com>	<5265560602000078000FC73E@nat28.tlf.novell.com>	<20131021144407.GC4560@phenom.dumpdata.com>	<5265609802000078000FC7B7@nat28.tlf.novell.com>	<20131023153645.GA28011@phenom.dumpdata.com>	<5269A865.2010100@cantab.net>	<20131025142147.GB3742@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99CE00@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A99CE00@SHSMSX104.ccr.corp.intel.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: "roland@kernel.org" <roland@kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Lukas Hejtmanek <xhejtman@ics.muni.cz>, Jan
	Beulich <JBeulich@suse.com>, "Liu, Jijiang" <jijiang.liu@intel.com>
Subject: Re: [Xen-devel] BUG: bad page map under Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/12/13 06:39, Zhang, Yang Z wrote:
> 
> Any conclusion on this issue? Our customer also saw the same problem:
> they want to map MMIO into a userspace address (through the UIO
> approach), but the current ->mmap implementation calls remap_pfn_range
> without setting _PAGE_IOMAP, which crashes the host. It seems any
> userspace device driver that tries to map a device's MMIO will crash
> the host.
> 
> They are using a 3.10 kernel.

My first attempt at fixing this was broken and I need to find time for
another look.

There were problems with pages that started out pre-ballooned.

David


From xen-devel-bounces@lists.xen.org Thu Jan 02 14:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 14:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyjfg-0006d0-Ta; Thu, 02 Jan 2014 14:54:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyjff-0006cv-DJ
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 14:54:51 +0000
Received: from [85.158.139.211:62458] by server-11.bemta-5.messagelabs.com id
	3B/19-23268-ABD75C25; Thu, 02 Jan 2014 14:54:50 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1388674488!7553372!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24961 invoked from network); 2 Jan 2014 14:54:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 14:54:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89239041"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 14:54:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	09:54:19 -0500
Message-ID: <52C57D9A.8050804@citrix.com>
Date: Thu, 2 Jan 2014 14:54:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388502919-8601-1-git-send-email-konrad.wilk@oracle.com>
	<1388502919-8601-2-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388502919-8601-2-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, annie.li@oracle.com,
	linux-kernel@vger.kernel.org, msw@amazon.com
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: Force to use v1 of grants.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/12/13 15:15, Konrad Rzeszutek Wilk wrote:
> We have the framework to use v2, but there are no backends that
> actually use it. The end result is that on PV we use v2 grants
> while on PVHVM we use v1. v1 has a capacity of 512 grants per page
> while v2 has 256 grants per page. This means we lose about 50% of
> our capacity with v2 - and if we want more than 16 VIFs (each VIF
> takes 512 grants), we hit the per-guest maximum of 32 grant frames.
> 
> Oracle-bug: 16039922
> CC: annie.li@oracle.com
> CC: msw@amazon.com
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

What does v2 add anyway? Do we want to remove all the v2-related code,
or are we expecting to make use of it in the near future?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyjom-0007Ai-6o; Thu, 02 Jan 2014 15:04:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1Vyjoj-0007Ad-Fq
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:04:14 +0000
Received: from [85.158.139.211:12083] by server-1.bemta-5.messagelabs.com id
	AB/79-21065-CEF75C25; Thu, 02 Jan 2014 15:04:12 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388675048!7552105!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23190 invoked from network); 2 Jan 2014 15:04:09 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 15:04:09 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 02 Jan 2014 15:04:05 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; 
	d="txt'?scan'208,223";a="623226737"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.26])
	by fldsmtpi03.verizon.com with ESMTP; 02 Jan 2014 15:04:04 +0000
Message-ID: <52C57FE3.3020502@terremark.com>
Date: Thu, 02 Jan 2014 10:04:03 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C454A4.1030408@citrix.com>
	<20140102014134.GA3371@olila.local.net-space.pl>
In-Reply-To: <20140102014134.GA3371@olila.local.net-space.pl>
Content-Type: multipart/mixed; boundary="------------040403030203090802050606"
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------040403030203090802050606
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/01/14 20:41, Daniel Kiper wrote:
> On Wed, Jan 01, 2014 at 05:47:16PM +0000, Andrew Cooper wrote:
>> On 01/01/2014 16:51, Don Slutz wrote:
> [...]
>
>>> With this patch no panic and crash kernel works.
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> Commit 7113a45451a9f656deeff070e47672043ed83664 was clearly not tested.
>> kimage_alloc_crash_control_page() explicitly chooses a page inside the
>> crash region and clears it.
> I tested this patch earlier and now with the latest Xen and kexec-tools
> commits. I am not able to reproduce this issue on my machines. Don, could
> you provide more details about your system and how you built your
> Xen and kexec-tools (configure, make options, etc.)?

It is an older Fedora 17 system.

dcs-xen-54:~/xen>cat /etc/default/grub
GRUB_TIMEOUT=15
GRUB_DISTRIBUTOR="Fedora"
GRUB_DEFAULT=2
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600"
GRUB_CMDLINE_LINUX="rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 rd_NO_PLYMOUTH"
#GRUB_THEME="/boot/grub2/themes/system/theme.txt"
GRUB_CMDLINE_XEN="dom0_mem=2G loglvl=all guest_loglvl=all console_timestamps=1 com1=9600,8n1 console=com1 apic_verbosity=verbose crashkernel=256M@256M"
GRUB_CMDLINE_LINUX_XEN_REPLACE="rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=xen rd_NO_PLYMOUTH"

dcs-xen-54:~/xen>cat .config
CONFIG_QEMU = http://xenbits.xen.org/git-http/qemu-xen-unstable.git
QEMU_UPSTREAM_EXTRA_CONFIG = --with-pkgversion=qemu-xen-4.4.0-rc1-0-gb97307e
QEMU_UPSTREAM_REVISION = qemu-xen-4.4.0-rc1
QEMU_UPSTREAM_URL = git@githq.cloudswitch.com:qemu.git
SEABIOS_UPSTREAM_TAG = rel-1.7.3.1
SEABIOS_UPSTREAM_URL = git@githq.cloudswitch.com:seabios-george.git
debug = n

dcs-xen-54:~/xen>uname -a
Linux dcs-xen-54 3.8.11-100.fc17.x86_64 #1 SMP Wed May 1 19:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Commands used to build (I use rpmbuild):

./configure --prefix=/usr --disable-stubdom
make dist
make -C xen MAP

The last step is part of enabling the xen crashdump analyser; see:

http://lists.xen.org/archives/html/xen-devel/2013-02/msg01606.html

as are the patches:

* 588e9ba Adjust xen-crashdump-analyser info for 4.4.0
* f53550b Add new xen-crashdump-analyser info.
* a1f92d3 Introduce more offsets, and embed all offsets into the symbol file

Attached are these patches, including the patch I use to enable rpmbuild 
(dcs-xen-54:~/xen>rpmbuild -bb xen-4.4.spec).  Not that I expect it to 
matter.  The "crashkernel=256M@256M" may be the key to reproducing it.

Also attached is the complete console output from my test (kexec-broken.txt).

    -Don Slutz
> Andrew, David, Did you run kexec tests in your automated test environment
> with commit 7113a45451a9f656deeff070e47672043ed83664 applied? Could you
> tell something about results?
>
>> However, the sentiment of the commit is certainly desirable, to prevent
>> accidental playing in the crash region.
>>
>> As the mappings are removed from Xen's directmap region,
>> map_domain_page() doesn't work (unless the debug highmem barrier is
>> sufficiently low that the crash region ends up above it, and the
>> virtual address ends up coming from the mapcache).
>>
>> This means that both here in clear_domain_page(), and later in
>> machine_kexec_load() where the code is copied in, are vulnerable to this
>> page fault.
>>
>> The solution to this problem which would leave the fewest mappings would
>> be to have kimage_alloc_crash_control_page() map the individual control
>> page into the main Xen pagetables, at which point a call to
>> map_domain_page() on it will work correctly.  This would need an
>> equivalent call to destroy_xen_mappings() in kimage_free().
>>
>> However, it is far from neat.
>>
>> I defer to others as to which approach is better, but suggest that one
>> way or another, the problem gets fixed very quickly, even if that means
>> taking this complete reversion now and submitting a proper fix in due
>> course.
> I am on holiday until 6th January 2014 and I am not able to investigate
> this issue more deeply right now. If you feel that it is better to revert
> this patch and later make a second attempt to remove this mapping, I do
> not object.
>
> Daniel


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0001-Adjust-to-use-rpmbuild.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="0001-Adjust-to-use-rpmbuild.patch"

>From 6654e8a1ba0077217c5b3c9d17ec2d154d199dc3 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Wed, 30 Oct 2013 13:33:07 -0400
Subject: [PATCH 1/5] Adjust to use rpmbuild

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 gen_version.pl   | 391 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 update_config.pl | 109 ++++++++++++++++
 xen-4.4.spec     | 131 +++++++++++++++++++
 3 files changed, 631 insertions(+)
 create mode 100755 gen_version.pl
 create mode 100755 update_config.pl
 create mode 100644 xen-4.4.spec

diff --git a/gen_version.pl b/gen_version.pl
new file mode 100755
index 0000000..08ab7e6
--- /dev/null
+++ b/gen_version.pl
@@ -0,0 +1,391 @@
+#! /usr/bin/perl -w
+# -*- Mode: cperl; coding: utf-8; cperl-indent-level: 2 -*-
+
+use strict;
+
+my $debug = 0;
+my $cygwin = $^O eq 'cygwin';
+my $brief = 0;
+my $major = 0;
+my $minor = 0;
+my $patch = 0;
+my $clean = 0;
+my $update = 0;
+my $tags = ' --tags';
+my $xen_conf = 0;
+
+while ((defined $ARGV[0]) &&
+       (substr($ARGV[0],0,1) eq '-')) {
+  my $opts = shift @ARGV;
+  if ($opts =~ /^-d(\d+)$/) {
+    $debug = $1;
+  } elsif ($opts eq '-d') {
+    $debug = shift @ARGV;
+  } elsif ($opts eq '-u') {
+    $cygwin = 0;
+  } elsif ($opts eq '-b') {
+    $brief = 1;
+  } elsif ($opts eq '-nt') {
+    $tags = '';
+  } elsif ($opts eq '-c') {
+    $xen_conf = 1;
+  } elsif ($opts eq '-major') {
+    $major = 1;
+  } elsif ($opts eq '-minor') {
+    $minor = 1;
+  } elsif ($opts eq '-patch') {
+    $patch = 1;
+  } elsif ($opts eq '-clean') {
+    $clean = 1;
+  } elsif ($opts eq '-update') {
+    $update = 1;
+  } else {
+    print "Unknown options \"$opts\" ignored\n";
+  }
+}
+
+die "Only one of -major -minor -patch" if ($major + $minor + $patch) > 1;
+
+$/ = "\cJ" if $cygwin;
+
+my $jenkins_build_number = $ENV{'BUILD_NUMBER'};
+$jenkins_build_number = '0000'
+  if ! defined $jenkins_build_number;
+
+sub clean_line {
+  my $line = shift @_;
+
+  chomp $line;
+  chop($line) if substr($line,-1,1) eq "\cM";
+  return $line;
+}
+
+sub three_parts {
+  my $ver = shift @_;
+  my @parts = split /-/, $ver;
+
+  while ($#parts > 2) {
+    my $old = shift @parts;
+    $parts[0] = $old.'-'. $parts[0];
+  }
+  return @parts;
+}
+
+my $branch = `basename \$(git rev-parse --abbrev-ref HEAD)`;
+chomp $branch;
+if ($debug =~ /br/) {
+  print STDERR "branch=$branch\n";
+}
+
+my %conf =
+  (
+   'QEMU_UPSTREAM_URL' => 'git@githq.cloudswitch.com:qemu.git',
+   'QEMU_UPSTREAM_REVISION' => 'origin/'.$branch,
+   'QEMU_UPSTREAM_EXTRA_CONFIG' => '--with-pkgversion=dirty',
+   'SEABIOS_UPSTREAM_URL' => 'git@githq.cloudswitch.com:seabios-george.git',
+   'SEABIOS_UPSTREAM_TAG' => 'origin/'.$branch,
+  );
+
+if ($xen_conf) {
+  open CONFIG, "Config.mk"
+    || die "Failed to open Config.mk: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^QEMU_UPSTREAM_REVISION\s*\?=\s*(.*)$/) {
+      $conf{'QEMU_UPSTREAM_REVISION'} = $1;
+    } elsif ($line =~ /^SEABIOS_UPSTREAM_TAG\s*\?=\s*(.*)$/) {
+      $conf{'SEABIOS_UPSTREAM_TAG'} = $1;
+    }
+  }
+  close CONFIG;
+}
+
+my $base_xen_ver;
+my $xen_ver;
+my $qemu_ver;
+my $seabios_ver;
+my $uname_ver = "Unknown";
+
+open (UNAME, "uname -a|")
+  || die "Failed to open uname -a|: $!";
+$uname_ver = clean_line(<UNAME>);
+close UNAME;
+
+
+if (-f ".config") {
+  open (CONFIG, ".config")
+    || die "Failed to open .config: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^(.*?)\s*=\s*(.*)$/) {
+      $conf{$1} = $2;
+    }
+  }
+  close CONFIG;
+}
+
+if ($debug =~ /cf/) {
+  for my $key (sort keys %conf) {
+    print STDERR $key,' = ',$conf{$key},$/;
+  }
+}
+
+if ($clean) {
+  my $cmd = "git clean -fdxq";
+  open (GIT, "$cmd|")
+    || die "Failed to git clean($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+  for my $git_dir ('firmware/seabios-dir-remote', 'qemu-xen-dir-remote', 'qemu-xen-traditional-dir-remote') {
+    if (-d "tools/$git_dir") {
+      $cmd = "rm -rf tools/$git_dir";
+      open (GIT, "$cmd|")
+        || die "Failed to clean dir($cmd): $!";
+      print STDERR "$cmd\n";
+      while (<GIT>) {
+        print STDERR;
+      }
+      close GIT;
+    }
+  }
+  open (CONFIG, ">.config")
+    || die "Failed to open for write .config: $!";
+  for my $key (sort keys %conf) {
+    print CONFIG $key,' = ',$conf{$key},$/;
+  }
+  close CONFIG;
+  $cmd = "mkdir -p rpmbuild_dir/BUILD rpmbuild_dir/BUILDROOT rpmbuild_dir/RPMS/x86_64";
+  open (GIT, "$cmd|")
+    || die "Failed to create jenkins rpm dirs dir($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+if ($update) {
+  for my $git_dir ('firmware/seabios-dir-remote', 'qemu-xen-dir-remote', 'qemu-xen-traditional-dir-remote') {
+    if (-d "tools/$git_dir") {
+      my $cmd = "cd tools/$git_dir;git pull";
+      open (GIT, "$cmd|")
+        || die "Failed to update dir($cmd): $!";
+      print STDERR "$cmd\n";
+      while (<GIT>) {
+        print STDERR;
+      }
+      close GIT;
+    }
+  }
+}
+
+if (!-d 'tools/qemu-xen-dir') {
+  my $cmd = "export GIT=git;cd tools;../scripts/git-checkout.sh $conf{'QEMU_UPSTREAM_URL'} $conf{'QEMU_UPSTREAM_REVISION'} qemu-xen-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get qemu($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+if (!-d 'tools/firmware/seabios-dir') {
+  my $cmd = "export GIT=git;cd tools/firmware;../../scripts/git-checkout.sh $conf{'SEABIOS_UPSTREAM_URL'} $conf{'SEABIOS_UPSTREAM_TAG'} seabios-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get seabios($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+open (GIT, "git fetch --tags|")
+  || die "Failed to fetch tags via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  print STDERR;
+}
+close GIT;
+
+my @xen_ver_tags = ();
+open (GIT, "git log --oneline --decorate=full -20|")
+  || die "Failed to get xen version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @xen_ver_tags, clean_line($_);
+}
+close GIT;
+
+my @qemu_ver_tags = ();
+open (GIT, "cd tools/qemu-xen-dir;git log --oneline --decorate=full -20|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @qemu_ver_tags, clean_line($_);
+}
+close GIT;
+
+my @seabios_ver_tags = ();
+open (GIT, "cd tools/firmware/seabios-dir;git log --oneline --decorate=full -20|")
+  || die "Failed to get seabios version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @seabios_ver_tags, clean_line($_);
+}
+close GIT;
+
+open (GIT, "git describe $tags --long --dirty|")
+  || die "Failed to get xen version via git: $!";
+binmode (GIT) if $cygwin;
+$xen_ver = clean_line(<GIT>);
+close GIT;
+$base_xen_ver = $xen_ver;
+if ($xen_ver =~ /^jenkins/) {
+  my $ghash;
+  for my $i (0 .. $#xen_ver_tags) {
+    my $line = $xen_ver_tags[$i];
+    if (($i == 0) && ($line =~ /^(\S+)/)) {
+      $ghash = 'g'.$1;
+    }
+    if ($line =~ m-\((.*)\)-) {
+      my @tags = split(/, /, $1);
+      my $cur = 0;
+      my $pre;
+      for my $part (@tags) {
+        if ($part =~ m-^tag: refs/tags/(ver_\d+\.\d+\.)(\d+)-) {
+          if ($2 >= $cur) {
+            $pre = $1;
+            $cur = $2;
+          }
+        }
+      }
+      if (defined $pre) {
+        $xen_ver = $pre.$cur.'-'.$i.'-'.$ghash;
+        $xen_ver .= '-dirty'
+          if $base_xen_ver =~ /dirty$/;
+        last;
+      }
+    }
+  }
+}
+
+open (GIT, "cd tools/qemu-xen-dir;git describe $tags --long --dirty|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+$qemu_ver = clean_line(<GIT>);
+close GIT;
+
+open (GIT, "cd tools/firmware/seabios-dir;git describe $tags --long --dirty|")
+  || die "Failed to get seabios version via git: $!";
+binmode (GIT) if $cygwin;
+$seabios_ver = clean_line(<GIT>);
+close GIT;
+
+if ($debug =~ /sv/) {
+  print STDERR "xen_ver=$xen_ver qemu_ver=$qemu_ver seabios_ver=$seabios_ver\n";
+}
+
+my @xen_vers = three_parts($xen_ver);
+my @qemu_vers = three_parts($qemu_ver);
+my @seabios_vers = three_parts($seabios_ver);
+
+if ($debug =~ /pv/) {
+  for my $i (0..$#xen_vers) {
+    print STDERR "xen_vers[$i] = $xen_vers[$i]\n";
+  }
+  for my $i (0..$#qemu_vers) {
+    print STDERR "qemu_vers[$i] = $qemu_vers[$i]\n";
+  }
+  for my $i (0..$#seabios_vers) {
+    print STDERR "seabios_vers[$i] = $seabios_vers[$i]\n";
+  }
+}
+
+my $out = $xen_vers[0];
+if (($out ne $qemu_vers[0]) || ($out ne $seabios_vers[0])) {
+  die "Complex version: xen_ver=$xen_ver qemu_ver=$qemu_ver seabios_ver=$seabios_ver"
+    if $major || $minor || $patch || $clean;
+  if ($xen_vers[0] ne $qemu_vers[0]) {
+    $out .= '_'.$qemu_vers[0];
+  } else {
+    $out .= '_';
+  }
+  if ($xen_vers[0] ne $seabios_vers[0]) {
+    $out .= '_'.$seabios_vers[0];
+  } else {
+    $out .= '_';
+  }
+}
+if (($xen_vers[1] ne '0') || ($qemu_vers[1] ne '0') || ($seabios_vers[1] ne '0')) {
+  $out .= '_'.$xen_vers[1];
+  $out .= '_'.$qemu_vers[1];
+  $out .= '_'.$seabios_vers[1];
+}
+my $short_out = $out;
+$short_out .= '-dirty'
+  if $base_xen_ver =~ /dirty$/;
+$short_out =~ s/-/_/g;
+$out .= '_'.$xen_vers[2];
+$out .= '_'.$qemu_vers[2];
+$out .= '_'.$seabios_vers[2];
+$out =~ s/-/_/g;
+if (!-d 'dist/install/etc/xen') {
+  my $rc = system 'mkdir', '-p', 'dist/install/etc/xen';
+  die "Failed to create dist/install/etc/xen: $!"
+    if $rc != 0;
+}
+open(VERFILE, ">dist/install/etc/xen/gen_version")
+  || die "Failed to create dist/install/etc/xen/gen_version: $!";
+print VERFILE  $out,$/;
+print VERFILE  'all=',$out,$/;
+print VERFILE  'brief=',$short_out,$/;
+print VERFILE  'jenkins_build_number=',$jenkins_build_number,$/;
+print VERFILE  'base_xen_ver=',$base_xen_ver,$/;
+print VERFILE  'xen_ver=',$xen_ver,$/;
+print VERFILE  'qemu_ver=',$qemu_ver,$/;
+print VERFILE  'seabios_ver=',$seabios_ver,$/;
+print VERFILE  'uname_ver=',$uname_ver,$/;
+print VERFILE  'HOSTNAME=',$ENV{'HOSTNAME'},$/;
+for my $key (sort keys %conf) {
+  print VERFILE $key,'=',$conf{$key},$/;
+}
+for my $i (0 .. $#xen_ver_tags) {
+  print VERFILE  'xen_ver_tags['.$i.']=',$xen_ver_tags[$i],$/;
+}
+for my $i (0 .. $#qemu_ver_tags) {
+  print VERFILE  'qemu_ver_tags['.$i.']=',$qemu_ver_tags[$i],$/;
+}
+for my $i (0 .. $#seabios_ver_tags) {
+  print VERFILE  'seabios_ver_tags['.$i.']=',$seabios_ver_tags[$i],$/;
+}
+close VERFILE;
+if ($major) {
+  if ($short_out =~ /ver_(\d+)\.\d+\.\d+/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($minor) {
+  if ($short_out =~ /ver_\d+\.(\d+)\.\d+/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($patch) {
+  if ($short_out =~ /ver_\d+\.\d+\.(\d+)/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($brief) {
+  print $short_out,$/;
+} else {
+  print $out,$/;
+}
+
+1;
diff --git a/update_config.pl b/update_config.pl
new file mode 100755
index 0000000..097f1da
--- /dev/null
+++ b/update_config.pl
@@ -0,0 +1,109 @@
+#! /usr/bin/perl -w
+# -*- Mode: cperl; coding: utf-8; cperl-indent-level: 2 -*-
+
+use strict;
+
+my $debug = 0;
+my $cygwin = $^O eq 'cygwin';
+
+while ((defined $ARGV[0]) &&
+       (substr($ARGV[0],0,1) eq '-')) {
+  my $opts = shift @ARGV;
+  if ($opts =~ /^-d(\d+)$/) {
+    $debug = $1;
+  } elsif ($opts eq '-d') {
+    $debug = shift @ARGV;
+  } elsif ($opts eq '-u') {
+    $cygwin = 0;
+  } else {
+    print "Unknown options \"$opts\" ignored\n";
+  }
+}
+
+$/ = "\cJ" if $cygwin;
+
+sub clean_line {
+  my $line = shift @_;
+
+  chomp $line;
+  chop($line) if substr($line,-1,1) eq "\cM";
+  return $line;
+}
+
+my %conf =
+  (
+   'QEMU_UPSTREAM_URL' => 'git@githq.cloudswitch.com:qemu.git',
+   'QEMU_UPSTREAM_REVISION' => 'origin/dev-integration',
+   'QEMU_UPSTREAM_EXTRA_CONFIG' => '--with-pkgversion=dirty',
+   'SEABIOS_UPSTREAM_URL' => 'git@githq.cloudswitch.com:seabios-george.git',
+   'SEABIOS_UPSTREAM_TAG' => 'origin/dev-integration',
+  );
+my %conf_seen = ();
+my @configs = ();
+
+if (-f ".config") {
+  open (CONFIG, ".config")
+    || die "Failed to open .config: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^(.*?)\s*=\s*(.*)$/) {
+      $conf{$1} = $2;
+      $conf_seen{$1} = 1;
+    }
+    push @configs, $line;
+  }
+}
+close CONFIG;
+
+for my $key (keys %conf) {
+  if (!defined $conf_seen{$key}) {
+    push @configs, "$key = $conf{$key}";
+  }
+}
+
+if (!-d 'tools/qemu-xen-dir') {
+  my $cmd = "export GIT=git;cd tools;../scripts/git-checkout.sh $conf{'QEMU_UPSTREAM_URL'} $conf{'QEMU_UPSTREAM_REVISION'} qemu-xen-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get qemu($cmd): $!";
+  print "$cmd\n";
+  while (<GIT>) {
+    print;
+  }
+  close GIT;
+}
+
+open (GIT, "cd tools/qemu-xen-dir;git describe --tags --long --dirty|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+my $qemu_ver = clean_line(<GIT>);
+close GIT;
+
+print STDERR "qemu_ver=$qemu_ver\n"
+  if $debug =~ /qv/;
+
+for my $i (0..$#configs) {
+  my $line = $configs[$i];
+  if ($line =~ /^QEMU_UPSTREAM_EXTRA_CONFIG/) {
+    print STDERR "line=$line\n"
+      if $debug =~ /qc/;
+    my @parts = split / +/, $line;
+    my $found_pkgversion = 0;
+    for my $i (0..$#parts) {
+      if ($parts[$i] =~ /^--with-pkgversion=/) {
+        $parts[$i] = "--with-pkgversion=$qemu_ver";
+        $found_pkgversion = 1;
+      }
+    }
+    push @parts,  "--with-pkgversion=$qemu_ver"
+      if !$found_pkgversion;
+    $configs[$i] = join ' ', @parts;
+  }
+}
+
+open (CONFIG, ">.config")
+  || die "Failed to open for write .config: $!";
+for my $line (@configs) {
+  print CONFIG $line,$/;
+}
+close CONFIG;
+1;
diff --git a/xen-4.4.spec b/xen-4.4.spec
new file mode 100644
index 0000000..38ff5ba
--- /dev/null
+++ b/xen-4.4.spec
@@ -0,0 +1,131 @@
+Summary: The Xen Hypervisor, modified by CloudSwitch, Inc.
+Name: xen
+Version: 4.4.unstable
+Release: %(./gen_version.pl -b -update)
+License: GPLv2
+Group: System Environment/Kernel
+URL: http://www.xen.org
+#Source0: %{name}-%{version}.tar.gz
+BuildRoot: rpmbuild/BUILDROOT/%{name}-%{version}-%{release}
+
+# $ cd <xen source tree top directory>
+# $ rpmbuild -bb xen-4.4.spec
+
+%debug_package
+
+%description
+Xen Hypervisor
+
+%prep
+#%setup -q
+
+%build
+AT_DIR=%(pwd)
+rm -rf $RPM_BUILD_ROOT
+cd $AT_DIR
+./update_config.pl
+./configure --prefix=/usr --disable-stubdom
+./gen_version.pl
+make dist
+make -C xen MAP
+
+%install
+AT_DIR=%(pwd)
+mkdir -p $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/boot $RPM_BUILD_ROOT
+cp -a $AT_DIR/xen/System.map $RPM_BUILD_ROOT/boot/System.map-%{name}-%{version}-%{release}
+cp -a $AT_DIR/dist/install/etc $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/usr $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/var $RPM_BUILD_ROOT
+mkdir -p $RPM_BUILD_ROOT/var/log/xen/console
+mkdir -p $RPM_BUILD_ROOT/boot/flask
+# mv $RPM_BUILD_ROOT/boot/xenpolicy.24 $RPM_BUILD_ROOT/boot/flask/
+
+%post
+if [ $1 = 1 -a -f /sbin/grub2-mkconfig -a -f /boot/grub2/grub.cfg ]; then
+  /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+fi
+ldconfig
+chkconfig --add xencommons
+
+%preun
+chkconfig --del xencommons
+
+%postun
+if [ -f /sbin/grub2-mkconfig -a -f /boot/grub2/grub.cfg ]; then
+  /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+fi
+ldconfig
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+
+%files
+%defattr(-,root,root,-)
+%config /etc/rc.d/init.d/xencommons
+%doc
+/boot
+/etc
+/var
+/usr/bin
+/usr/etc/qemu/target-x86_64.conf
+/usr/include
+/usr/sbin
+/usr/share
+/usr/lib64
+/usr/libexec
+/usr/lib/fs/ext2fs/fsimage.so
+/usr/lib/fs/fat/fsimage.so
+/usr/lib/fs/iso9660/fsimage.so
+/usr/lib/fs/reiserfs/fsimage.so
+/usr/lib/fs/ufs/fsimage.so
+/usr/lib/fs/xfs/fsimage.so
+/usr/lib/fs/zfs/fsimage.so
+/usr/lib/libblktapctl.a
+/usr/lib/libblktapctl.so
+/usr/lib/libblktapctl.so.1.0
+/usr/lib/libblktapctl.so.1.0.0
+/usr/lib/libfsimage.so
+/usr/lib/libfsimage.so.1.0
+/usr/lib/libfsimage.so.1.0.0
+/usr/lib/libvhd.a
+/usr/lib/libvhd.so
+/usr/lib/libvhd.so.1.0
+/usr/lib/libvhd.so.1.0.0
+/usr/lib/libxenctrl.a
+/usr/lib/libxenctrl.so
+/usr/lib/libxenctrl.so.4.3
+/usr/lib/libxenctrl.so.4.3.0
+/usr/lib/libxenguest.a
+/usr/lib/libxenguest.so
+/usr/lib/libxenguest.so.4.3
+/usr/lib/libxenguest.so.4.3.0
+/usr/lib/libxenlight.a
+/usr/lib/libxenlight.so
+/usr/lib/libxenlight.so.4.3
+/usr/lib/libxenlight.so.4.3.0
+/usr/lib/libxenstat.a
+/usr/lib/libxenstat.so
+/usr/lib/libxenstat.so.0
+/usr/lib/libxenstat.so.0.0
+/usr/lib/libxenstore.a
+/usr/lib/libxenstore.so
+/usr/lib/libxenstore.so.3.0
+/usr/lib/libxenstore.so.3.0.3
+/usr/lib/libxenvchan.a
+/usr/lib/libxenvchan.so
+/usr/lib/libxenvchan.so.1.0
+/usr/lib/libxenvchan.so.1.0.0
+/usr/lib/libxlutil.a
+/usr/lib/libxlutil.so
+/usr/lib/libxlutil.so.4.3
+/usr/lib/libxlutil.so.4.3.0
+/usr/lib/xen
+
+%changelog
+* Thu Oct  3 2013 Don Slutz <Don@CloudSwitch.com> - 4.3.0-%(./gen_version.pl -b -update)
+- Move xenpolicy.24 out of /boot
+
+* Wed Oct  3 2012 Harry Hart <hhart@harry-laptop.cloudswitch.com> - 4.2-1
+- Initial build.
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0002-Introduce-more-offsets-and-embed-all-offsets-into-th.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0002-Introduce-more-offsets-and-embed-all-offsets-into-th.pa";
 filename*1="tch"

>From a1f92d31a1c64fe97a3f831fe0b7f1bb6fa4940b Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 20 Feb 2013 22:45:32 +0000
Subject: [PATCH 2/5] Introduce more offsets, and embed all offsets into the
 symbol file

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/Makefile                      |  1 +
 xen/arch/x86/x86_64/asm-offsets.c | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/xen/Makefile b/xen/Makefile
index 1ea2717..e48e89f 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -197,6 +197,7 @@ _cscope:
 .PHONY: _MAP
 _MAP:
 	$(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
+	cat include/asm/asm-offsets.h | awk '/^#define __ASM_OFFSETS_H__/ { next } ; /^#define / { printf "%016x - +%s\n", $$3, $$2 }' >> System.map
 
 .PHONY: FORCE
 FORCE:
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index b0098b3..4168b98 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -54,8 +54,38 @@ void __dummy__(void)
     DEFINE(UREGS_user_sizeof, sizeof(struct cpu_user_regs));
     BLANK();
 
+    OFFSET(DOMAIN_id, struct domain, domain_id);
+    OFFSET(DOMAIN_shared_info, struct domain, shared_info);
+    OFFSET(DOMAIN_next, struct domain, next_in_list);
+    OFFSET(DOMAIN_max_vcpus, struct domain, max_vcpus);
+    OFFSET(DOMAIN_vcpus, struct domain, vcpu);
+    OFFSET(DOMAIN_is_hvm, struct domain, is_hvm);
+    OFFSET(DOMAIN_is_privileged, struct domain, is_privileged);
+    OFFSET(DOMAIN_tot_pages, struct domain, tot_pages);
+    OFFSET(DOMAIN_max_pages, struct domain, max_pages);
+    OFFSET(DOMAIN_shr_pages, struct domain, shr_pages);
+    OFFSET(DOMAIN_has_32bit_shinfo, struct domain, arch.has_32bit_shinfo);
+    OFFSET(DOMAIN_handle, struct domain, handle);
+    OFFSET(DOMAIN_paging_mode, struct domain, arch.paging.mode);
+    DEFINE(DOMAIN_sizeof, sizeof(struct domain));
+    BLANK();
+
+    OFFSET(SHARED_max_pfn, struct shared_info, arch.max_pfn);
+    OFFSET(SHARED_pfn_to_mfn_list_list, struct shared_info, arch.pfn_to_mfn_frame_list_list);
+    BLANK();
+
+    DEFINE(XEN_virt_start, __XEN_VIRT_START);
+    DEFINE(XEN_page_offset, __PAGE_OFFSET);
+#ifndef NDEBUG
+    DEFINE(XEN_DEBUG, 1);
+#else
+    DEFINE(XEN_DEBUG, 0);
+#endif
+    BLANK();
+
     OFFSET(irq_caps_offset, struct domain, irq_caps);
     OFFSET(next_in_list_offset, struct domain, next_in_list);
+    OFFSET(VCPU_vcpu_id, struct vcpu, vcpu_id);
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
     OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
@@ -86,7 +116,10 @@ void __dummy__(void)
     OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
     OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.vgc_flags);
+    OFFSET(VCPU_user_regs, struct vcpu, arch.user_regs);
+    OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
+    OFFSET(VCPU_pause_flags, struct vcpu, pause_flags);
     OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
     OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
     OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
@@ -95,6 +128,7 @@ void __dummy__(void)
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_failsafe_disables_events, _VGCF_failsafe_disables_events);
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
+    DEFINE(VCPU_sizeof, sizeof(struct vcpu));
     BLANK();
 
     OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
@@ -134,6 +168,7 @@ void __dummy__(void)
     OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
     OFFSET(CPUINFO_processor_id, struct cpu_info, processor_id);
     OFFSET(CPUINFO_current_vcpu, struct cpu_info, current_vcpu);
+    OFFSET(CPUINFO_per_cpu_offset, struct cpu_info, per_cpu_offset);
     DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info));
     BLANK();
 
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0003-Add-new-xen-crashdump-analyser-info.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="0003-Add-new-xen-crashdump-analyser-info.patch"

From f53550bba266c10b463bd5bb14ba761de986b393 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Mon, 11 Nov 2013 14:36:40 -0500
Subject: [PATCH 3/5] Add new xen-crashdump-analyser info.

But do not delete old stuff.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/x86_64/asm-offsets.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 4168b98..c819127 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -65,6 +65,7 @@ void __dummy__(void)
     OFFSET(DOMAIN_max_pages, struct domain, max_pages);
     OFFSET(DOMAIN_shr_pages, struct domain, shr_pages);
     OFFSET(DOMAIN_has_32bit_shinfo, struct domain, arch.has_32bit_shinfo);
+    OFFSET(DOMAIN_pause_count, struct domain, pause_count);
     OFFSET(DOMAIN_handle, struct domain, handle);
     OFFSET(DOMAIN_paging_mode, struct domain, arch.paging.mode);
     DEFINE(DOMAIN_sizeof, sizeof(struct domain));
@@ -76,10 +77,23 @@ void __dummy__(void)
 
     DEFINE(XEN_virt_start, __XEN_VIRT_START);
     DEFINE(XEN_page_offset, __PAGE_OFFSET);
-#ifndef NDEBUG
-    DEFINE(XEN_DEBUG, 1);
+    DEFINE(VIRT_XEN_START, XEN_VIRT_START);
+    DEFINE(VIRT_XEN_END, XEN_VIRT_END);
+    DEFINE(VIRT_DIRECTMAP_START, DIRECTMAP_VIRT_START);
+    DEFINE(VIRT_DIRECTMAP_END, DIRECTMAP_VIRT_END);
+
+    DEFINE(XEN_DEBUG, debug_build());
+    DEFINE(XEN_STACK_SIZE, STACK_SIZE);
+    DEFINE(XEN_PRIMARY_STACK_SIZE, PRIMARY_STACK_SIZE);
+#ifdef MEMORY_GUARD
+    DEFINE(XEN_MEMORY_GUARD, 1);
 #else
-    DEFINE(XEN_DEBUG, 0);
+    DEFINE(XEN_MEMORY_GUARD, 0);
+#endif
+#ifdef CONFIG_FRAME_POINTER
+    DEFINE(XEN_FRAME_POINTER, 1);
+#else
+    DEFINE(XEN_FRAME_POINTER, 0);
 #endif
     BLANK();
 
@@ -120,6 +134,7 @@ void __dummy__(void)
     OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
     OFFSET(VCPU_pause_flags, struct vcpu, pause_flags);
+    OFFSET(VCPU_pause_count, struct vcpu, pause_count);
     OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
     OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
     OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0004-Adjust-xen-crashdump-analyser-info-for-4.4.0.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0004-Adjust-xen-crashdump-analyser-info-for-4.4.0.patch"

From 588e9baf3ee74c0efcbd91a639f74525a431f507 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Wed, 20 Nov 2013 15:31:39 -0500
Subject: [PATCH 4/5] Adjust xen-crashdump-analyser info for 4.4.0

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/x86_64/asm-offsets.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index c819127..a7da0a3 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -59,7 +59,7 @@ void __dummy__(void)
     OFFSET(DOMAIN_next, struct domain, next_in_list);
     OFFSET(DOMAIN_max_vcpus, struct domain, max_vcpus);
     OFFSET(DOMAIN_vcpus, struct domain, vcpu);
-    OFFSET(DOMAIN_is_hvm, struct domain, is_hvm);
+    OFFSET(DOMAIN_guest_type, struct domain, guest_type);
     OFFSET(DOMAIN_is_privileged, struct domain, is_privileged);
     OFFSET(DOMAIN_tot_pages, struct domain, tot_pages);
     OFFSET(DOMAIN_max_pages, struct domain, max_pages);
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/plain; charset=us-ascii;
 name="kexec-broken.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="kexec-broken.txt"



Loading Xen xen ...
Loading Linux 3.8.11-100.fc17.x86_64 ...
Loading initial ramdisk ...
error: Can't get controller info..
 __  __            _  _   _  _                      _        _     _      
 \ \/ /___ _ __   | || | | || |     _   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \  | || |_| || |_ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|__   _|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_) |_|     \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
                                                                          
(XEN) Xen version 4.4-unstable (don@culpepper.cloudswitch.com) (gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)) debug=n Tue Dec 31 11:33:10 EST 2013
(XEN) Latest ChangeSet: Wed Nov 20 15:31:39 2013 -0500 git:588e9ba
(XEN) Bootloader: GRUB 2.00~beta6
(XEN) Command line: placeholder dom0_mem=2G loglvl=all guest_loglvl=all console_timestamps=1 com1=9600,8n1 console=com1 apic_verbosity=verbose crashkernel=256M@256M
(XEN) Video information:
(XEN)  No VGA detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009b800 (usable)
(XEN)  000000000009b800 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000bf63f000 (usable)
(XEN)  00000000bf63f000 - 00000000bf6bf000 (reserved)
(XEN)  00000000bf6bf000 - 00000000bf7bf000 (ACPI NVS)
(XEN)  00000000bf7bf000 - 00000000bf7ff000 (ACPI data)
(XEN)  00000000bf7ff000 - 00000000bf800000 (usable)
(XEN)  00000000bf800000 - 00000000c0000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000feb00000 - 00000000feb04000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed10000 - 00000000fed1a000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ffd80000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000840000000 (usable)
(XEN) Kdump: 256MB (262144kB) at 0x10000000
(XEN) ACPI: RSDP 000FE020, 0024 (r2 INSYDE)
(XEN) ACPI: XSDT BF7FE170, 009C (r1 INSYDE BROMOLOW        1       1000013)
(XEN) ACPI: FACP BF7FC000, 00F4 (r4 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: DSDT BF7F2000, 5A40 (r1 INSYDE BROMOLOW        0 ACPI    40000)
(XEN) ACPI: FACS BF76E000, 0040
(XEN) ACPI: ASF! BF7FD000, 00A5 (r32 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: HPET BF7FB000, 0038 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: APIC BF7FA000, 0092 (r2 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: MCFG BF7F9000, 003C (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: MSDM BF7F8000, 0055 (r3 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SLIC BF7F1000, 0176 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: WDAT BF7F0000, 0224 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: BOOT BF7EE000, 0028 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SSDT BF7ED000, 02F6 (r1 INSYDE BROMOLOW     1000 ACPI    40000)
(XEN) ACPI: SPCR BF7EC000, 0050 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: ASPT BF7EB000, 0034 (r7 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SSDT BF7EA000, 0804 (r1 INSYDE BROMOLOW     3000 ACPI    40000)
(XEN) ACPI: SSDT BF7E9000, 0996 (r1 INSYDE BROMOLOW     3000 ACPI    40000)
(XEN) ACPI: DMAR BF7E8000, 0088 (r1 INTEL      SNB         1 INTL        1)
(XEN) System RAM: 32757MB (33544044kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000840000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fe1b0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x408
(XEN) ACPI: SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
(XEN) ACPI:             wakeup_vec[bf76e00c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
(XEN) Processor #1 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
(XEN) Processor #4 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
(XEN) Processor #5 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] enabled)
(XEN) Processor #6 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 6:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) mapped APIC to ffff82cfffdfb000 (fee00000)
(XEN) mapped IOAPIC to ffff82cfffdfa000 (fec00000)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2400.077 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Suppress EOI broadcast on CPU#0
(XEN) enabled ExtINT on CPU#0
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) init IO_APIC IRQs
(XEN)  IO-APIC (apicid-pin) 0-0, 0-16, 0-17, 0-18, 0-19, 0-20, 0-21, 0-22, 0-23 not connected.
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #0 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #0......
(XEN) .... register #00: 00000000
(XEN) .......    : physical APIC id: 00
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:   
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    30
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    38
(XEN)  04 000 00  0    0    0   0   0    1    1    F1
(XEN)  05 000 00  0    0    0   0   0    1    1    40
(XEN)  06 000 00  0    0    0   0   0    1    1    48
(XEN)  07 000 00  0    0    0   0   0    1    1    50
(XEN)  08 000 00  0    0    0   0   0    1    1    58
(XEN)  09 000 00  1    1    0   0   0    1    1    60
(XEN)  0a 000 00  0    0    0   0   0    1    1    68
(XEN)  0b 000 00  0    0    0   0   0    1    1    70
(XEN)  0c 000 00  0    0    0   0   0    1    1    78
(XEN)  0d 000 00  0    0    0   0   0    1    1    88
(XEN)  0e 000 00  0    0    0   0   0    1    1    90
(XEN)  0f 000 00  0    0    0   0   0    1    1    98
(XEN)  10 000 00  1    0    0   0   0    0    0    00
(XEN)  11 025 05  1    0    0   0   0    1    2    C9
(XEN)  12 0E5 05  1    0    0   0   0    1    2    4D
(XEN)  13 000 00  1    0    0   0   0    0    0    00
(XEN)  14 000 00  1    0    0   0   0    0    0    00
(XEN)  15 0B1 01  1    0    0   0   0    1    2    C9
(XEN)  16 000 00  1    0    0   0   0    0    0    00
(XEN)  17 0B8 08  1    0    0   0   0    1    2    85
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ48 -> 0:1
(XEN) IRQ56 -> 0:3
(XEN) IRQ241 -> 0:4
(XEN) IRQ64 -> 0:5
(XEN) IRQ72 -> 0:6
(XEN) IRQ80 -> 0:7
(XEN) IRQ88 -> 0:8
(XEN) IRQ96 -> 0:9
(XEN) IRQ104 -> 0:10
(XEN) IRQ112 -> 0:11
(XEN) IRQ120 -> 0:12
(XEN) IRQ136 -> 0:13
(XEN) IRQ144 -> 0:14
(XEN) IRQ152 -> 0:15
(XEN) .................................... done.
(XEN) Using local APIC timer interrupts.
(XEN) calibrating APIC timer ...
(XEN) ..... CPU clock speed is 2400.0502 MHz.
(XEN) ..... host bus clock speed is 100.0019 MHz.
(XEN) ..... bus_scale = 0x6669
(XEN) TSC deadline timer enabled
(XEN) [2014-01-01 15:37:20] Platform timer is 14.318MHz HPET
(XEN) [2014-01-01 15:37:20] Allocated console ring of 64 KiB.
(XEN) [2014-01-01 15:37:20] mwait-idle: MWAIT substates: 0x1120
(XEN) [2014-01-01 15:37:20] mwait-idle: v0.4 model 0x2a
(XEN) [2014-01-01 15:37:20] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-01 15:37:20] VMX: Supported advanced features:
(XEN) [2014-01-01 15:37:20]  - APIC MMIO access virtualisation
(XEN) [2014-01-01 15:37:20]  - APIC TPR shadow
(XEN) [2014-01-01 15:37:20]  - Extended Page Tables (EPT)
(XEN) [2014-01-01 15:37:20]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-01 15:37:20]  - Virtual NMI
(XEN) [2014-01-01 15:37:20]  - MSR direct-access bitmap
(XEN) [2014-01-01 15:37:20]  - Unrestricted Guest
(XEN) [2014-01-01 15:37:20] HVM: ASIDs enabled.
(XEN) [2014-01-01 15:37:20] HVM: VMX enabled
(XEN) [2014-01-01 15:37:20] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-01 15:37:20] HVM: HAP page sizes: 4kB, 2MB
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#1
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#1
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#2
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#2
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#3
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#3
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#4
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#4
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#5
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#5
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#6
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#6
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#7
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#7
(XEN) [2014-01-01 15:37:20] Brought up 8 CPUs
(XEN) [2014-01-01 15:37:20] ACPI sleep modes: S3
(XEN) [2014-01-01 15:37:20] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-01 15:37:20] *** LOADING DOMAIN 0 ***
(XEN) [2014-01-01 15:37:20]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-01 15:37:20]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2391000
(XEN) [2014-01-01 15:37:20] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-01 15:37:20]  Dom0 alloc.:   000000081c000000->0000000820000000 (489617 pages to be allocated)
(XEN) [2014-01-01 15:37:20]  Init. ramdisk: 000000083b891000->000000083ffffa00
(XEN) [2014-01-01 15:37:20] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-01 15:37:20]  Loaded kernel: ffffffff81000000->ffffffff82391000
(XEN) [2014-01-01 15:37:20]  Init. ramdisk: ffffffff82391000->ffffffff86affa00
(XEN) [2014-01-01 15:37:20]  Phys-Mach map: ffffffff86b00000->ffffffff86f00000
(XEN) [2014-01-01 15:37:20]  Start info:    ffffffff86f00000->ffffffff86f004b4
(XEN) [2014-01-01 15:37:20]  Page tables:   ffffffff86f01000->ffffffff86f3c000
(XEN) [2014-01-01 15:37:20]  Boot stack:    ffffffff86f3c000->ffffffff86f3d000
(XEN) [2014-01-01 15:37:20]  TOTAL:         ffffffff80000000->ffffffff87000000
(XEN) [2014-01-01 15:37:20]  ENTRY ADDRESS: ffffffff81cff210
(XEN) [2014-01-01 15:37:20] Dom0 has maximum 8 VCPUs
(XEN) [2014-01-01 15:37:24] Scrubbing Free RAM: ...........................................................................................................................................................................................................................................................................................................done.
(XEN) [2014-01-01 15:37:29] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-01 15:37:29] Std. Loglevel: All
(XEN) [2014-01-01 15:37:29] Guest Loglevel: All
(XEN) [2014-01-01 15:37:29] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-01 15:37:29] Freed 292kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.8.11-100.fc17.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 4.7.2 20120921 (Red Hat 4.7.2-2) (GCC) ) #1 SMP Wed May 1 19:31:26 UTC 2013
[    0.000000] Command line: placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=xen rd_NO_PLYMOUTH
[    0.000000] Freeing 9b-100 pfn range: 101 pages freed
[    0.000000] Released 101 pages of unused memory
[    0.000000] Set 264741 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80065 pfn range: 101 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x000000000009afff] usable
[    0.000000] Xen: [mem 0x000000000009b800-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000bf63efff] usable
[    0.000000] Xen: [mem 0x00000000bf63f000-0x00000000bf6befff] reserved
[    0.000000] Xen: [mem 0x00000000bf6bf000-0x00000000bf7befff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bf7bf000-0x00000000bf7fefff] ACPI data
[    0.000000] Xen: [mem 0x00000000bf7ff000-0x00000000bf7fffff] usable
[    0.000000] Xen: [mem 0x00000000bf800000-0x00000000bfffffff] reserved
[    0.000000] Xen: [mem 0x00000000e0000000-0x00000000efffffff] reserved
[    0.000000] Xen: [mem 0x00000000feb00000-0x00000000feb03fff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed10000-0x00000000fed19fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ffd80000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x00000005c0e16fff] usable
[    0.000000] Xen: [mem 0x00000005c0e17000-0x000000083fffffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x5c0e17 max_arch_pfn = 0x400000000
[    0.000000] x2apic enabled by BIOS, switching to x2apic ops
[    0.000000] e820: last_pfn = 0xbf800 max_arch_pfn = 0x400000000
[    0.000000] init_memory_mapping: [mem 0x00000000-0xbf7fffff]
[    0.000000] init_memory_mapping: [mem 0x100000000-0x5c0e16fff]
[    0.000000] RAMDISK: [mem 0x02391000-0x06afffff]
[    0.000000] ACPI: RSDP 00000000000fe020 00024 (v02 INSYDE)
[    0.000000] ACPI: XSDT 00000000bf7fe170 0009C (v01 INSYDE BROMOLOW 00000001      01000013)
[    0.000000] ACPI: FACP 00000000bf7fc000 000F4 (v04 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: DSDT 00000000bf7f2000 05A40 (v01 INSYDE BROMOLOW 00000000 ACPI 00040000)
[    0.000000] ACPI: FACS 00000000bf76e000 00040
[    0.000000] ACPI: ASF! 00000000bf7fd000 000A5 (v32 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: HPET 00000000bf7fb000 00038 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: APIC 00000000bf7fa000 00092 (v02 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: MCFG 00000000bf7f9000 0003C (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: MSDM 00000000bf7f8000 00055 (v03 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SLIC 00000000bf7f1000 00176 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: WDAT 00000000bf7f0000 00224 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: BOOT 00000000bf7ee000 00028 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7ed000 002F6 (v01 INSYDE BROMOLOW 00001000 ACPI 00040000)
[    0.000000] ACPI: SPCR 00000000bf7ec000 00050 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: ASPT 00000000bf7eb000 00034 (v07 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7ea000 00804 (v01 INSYDE BROMOLOW 00003000 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7e9000 00996 (v01 INSYDE BROMOLOW 00003000 ACPI 00040000)
[    0.000000] ACPI: XMAR 00000000bf7e8000 00088 (v01 INTEL      SNB  00000001 INTL 00000001)
[    0.000000] Setting APIC routing to cluster x2apic.
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x00000005c0e16fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x5c0e16fff]
[    0.000000]   NODE_DATA [mem 0x7da34000-0x7da47fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x5c0e16fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009afff]
[    0.000000]   node   0: [mem 0x00100000-0xbf63efff]
[    0.000000]   node   0: [mem 0xbf7ff000-0xbf7fffff]
[    0.000000]   node   0: [mem 0x100000000-0x5c0e16fff]
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] PM: Registered nosave memory: 000000000009b000 - 000000000009c000
[    0.000000] PM: Registered nosave memory: 000000000009c000 - 0000000000100000
[    0.000000] PM: Registered nosave memory: 00000000bf63f000 - 00000000bf6bf000
[    0.000000] PM: Registered nosave memory: 00000000bf6bf000 - 00000000bf7bf000
[    0.000000] PM: Registered nosave memory: 00000000bf7bf000 - 00000000bf7ff000
[    0.000000] PM: Registered nosave memory: 00000000bf800000 - 00000000c0000000
[    0.000000] PM: Registered nosave memory: 00000000c0000000 - 00000000e0000000
[    0.000000] PM: Registered nosave memory: 00000000e0000000 - 00000000f0000000
[    0.000000] PM: Registered nosave memory: 00000000f0000000 - 00000000feb00000
[    0.000000] PM: Registered nosave memory: 00000000feb00000 - 00000000feb04000
[    0.000000] PM: Registered nosave memory: 00000000feb04000 - 00000000fec00000
[    0.000000] PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
[    0.000000] PM: Registered nosave memory: 00000000fec01000 - 00000000fed10000
[    0.000000] PM: Registered nosave memory: 00000000fed10000 - 00000000fed1a000
[    0.000000] PM: Registered nosave memory: 00000000fed1a000 - 00000000fed1c000
[    0.000000] PM: Registered nosave memory: 00000000fed1c000 - 00000000fed20000
[    0.000000] PM: Registered nosave memory: 00000000fed20000 - 00000000fee00000
[    0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fef00000
[    0.000000] PM: Registered nosave memory: 00000000fef00000 - 00000000ffd80000
[    0.000000] PM: Registered nosave memory: 00000000ffd80000 - 0000000100000000
[    0.000000] e820: [mem 0xc0000000-0xdfffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-unstable (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff880066c00000 s84608 r8192 d21888 u262144
[   11.222615] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 5676548
[   11.222619] Policy zone: Normal
[   11.222622] Kernel command line: placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=xen rd_NO_PLYMOUTH
[   11.223443] PID hash table entries: 4096 (order: 3, 32768 bytes)
[   11.223450] __ex_table already sorted, skipping sort
[   11.223493] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[   11.255707] software IO TLB [mem 0x62c00000-0x66c00000] (64MB) mapped at [ffff880062c00000-ffff880066bfffff]
[   11.269831] Memory: 1528328k/24131676k available (6512k kernel code, 1059028k absent, 21544320k reserved, 6706k data, 1080k init)
[   11.269946] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[   11.269998] Hierarchical RCU implementation.
[   11.270001]  RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=8.
[   11.270015] NR_IRQS:8448 nr_irqs:744 16
[   11.270119] xen: sci override: global_irq=9 trigger=0 polarity=0
[   11.270162] xen: acpi sci 9
[   11.270550] Console: colour dummy device 80x25
[   11.270555] console [hvc0] enabled, bootconsole disabled
[   11.270555] console [hvc0] enabled, bootconsole disabled
[   11.283830] allocated 92798976 bytes of page_cgroup
[   11.283837] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[   11.283912] installing Xen timer for CPU 0
[   11.283951] tsc: Detected 2400.076 MHz processor
[   11.283959] Calibrating delay loop (skipped), value calculated using timer frequency.. 4800.15 BogoMIPS (lpj=2400076)
[   11.283966] pid_max: default: 32768 minimum: 301
[   11.284030] Security Framework initialized
[   11.284040] SELinux:  Initializing.
[   11.291424] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
[   11.303321] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
[   11.307426] Mount-cache hash table entries: 256
[   11.307763] Initializing cgroup subsys cpuacct
[   11.307768] Initializing cgroup subsys memory
[   11.307800] Initializing cgroup subsys devices
[   11.307804] Initializing cgroup subsys freezer
[   11.307809] Initializing cgroup subsys net_cls
[   11.307813] Initializing cgroup subsys blkio
[   11.307816] Initializing cgroup subsys perf_event
[   11.307912] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[   11.307912] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[   11.307922] CPU: Physical Processor ID: 0
[   11.307925] CPU: Processor Core ID: 0
[   11.307929] mce: CPU supports 2 MCE banks
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.307972] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[   11.307972] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[   11.307972] tlb_flushall_shift: 5
[   11.308107] Freeing SMP alternatives: 24k freed
[   11.310286] ACPI: Core revision 20121018
[   11.327224] ftrace: allocating 24318 entries in 95 pages
[   11.345107] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
[   11.346751] NMI watchdog: disabled (cpu0): hardware events not enabled
[   11.346848] installing Xen timer for CPU 1
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.347319] installing Xen timer for CPU 2
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.347717] installing Xen timer for CPU 3
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348059] installing Xen timer for CPU 4
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348442] installing Xen timer for CPU 5
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348779] installing Xen timer for CPU 6
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.349136] installing Xen timer for CPU 7
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.349343] Brought up 8 CPUs
[   11.349825] devtmpfs: initialized
[   11.350345] PM: Registering ACPI NVS region [mem 0xbf6bf000-0xbf7befff] (1048576 bytes)
[   11.351325] atomic64 test passed for x86-64 platform with CX8 and with SSE
[   11.351354] Grant tables using version 2 layout.
[   11.351374] Grant table initialized
[   11.351414] RTC time: 15:37:31, date: 01/01/14
[   11.351473] NET: Registered protocol family 16
[   11.351906] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[   11.351913] ACPI: bus type pci registered
[   11.352129] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[   11.352139] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
[   11.411030] PCI: Using configuration type 1 for base access
[   11.412351] bio: create slab <bio-0> at 0
[   11.412497] ACPI: Added _OSI(Module Device)
[   11.412502] ACPI: Added _OSI(Processor Device)
[   11.412505] ACPI: Added _OSI(3.0 _SCP Extensions)
[   11.412509] ACPI: Added _OSI(Processor Aggregator Device)
[   11.415302] ACPI: Executed 1 blocks of module-level executable AML code
[   11.421127] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[   11.421668] ACPI: SSDT 00000000bf6b0798 00727 (v01  PmRef  Cpu0Cst 00003001 INTL 20100121)
[   11.422023] ACPI: Dynamic OEM Table Load:
[   11.422029] ACPI: SSDT           (null) 00727 (v01  PmRef  Cpu0Cst 00003001 INTL 20100121)
[   11.424574] ACPI: SSDT 00000000bf6b1a98 00303 (v01  PmRef    ApIst 00003000 INTL 20100121)
[   11.424962] ACPI: Dynamic OEM Table Load:
[   11.424967] ACPI: SSDT           (null) 00303 (v01  PmRef    ApIst 00003000 INTL 20100121)
[   11.427551] ACPI: SSDT 00000000bf6afd98 00119 (v01  PmRef    ApCst 00003000 INTL 20100121)
[   11.427897] ACPI: Dynamic OEM Table Load:
[   11.427903] ACPI: SSDT           (null) 00119 (v01  PmRef    ApCst 00003000 INTL 20100121)
[   11.431978] ACPI: Interpreter enabled
[   11.431986] ACPI: (supports S0 S1 S3 S4 S5)
[   11.432018] ACPI: Using IOAPIC for interrupt routing
[   11.436050] ACPI: Power Resource [FN00] (off)
[   11.436145] ACPI: Power Resource [FN01] (off)
[   11.436251] ACPI: Power Resource [FN02] (off)
[   11.436343] ACPI: Power Resource [FN03] (off)
[   11.436433] ACPI: Power Resource [FN04] (off)
[   11.436875] ACPI: No dock devices found.
[   11.436882] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[   11.437237] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
[   11.438076] PCI host bridge to bus 0000:00
[   11.438083] pci_bus 0000:00: root bus resource [bus 00-fe]
[   11.438088] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[   11.438092] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[   11.438097] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[   11.438102] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfeafffff]
[   11.443293] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.445422] pci 0000:00:01.1: PCI bridge to [bus 02]
[   11.447553] pci 0000:00:01.2: PCI bridge to [bus 03]
[   11.448712] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.449873] pci 0000:00:1c.1: PCI bridge to [bus 05]
[   11.451138] pci 0000:00:1c.2: PCI bridge to [bus 06]
[   11.453932] pci 0000:00:1c.4: PCI bridge to [bus 07]
[   11.457128] pci 0000:00:1c.5: PCI bridge to [bus 08]
[   11.457348] pci 0000:00:1e.0: PCI bridge to [bus 09] (subtractive decode)
[   11.457946]  pci0000:00: ACPI _OSC support notification failed, disabling PCIe ASPM
[   11.457952]  pci0000:00: Unable to request _OSC control (_OSC support mask: 0x08)
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:00.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.2
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.2
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.4
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.5
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1e.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1f.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1f.3
(XEN) [2014-01-01 15:37:31] PCI add device 0000:01:00.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:01:00.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:02:00.0
(XEN) [2014-01-01 15:37:31] PCI add dev[   20.095173] IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
[   20.095671] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.095969] IPv6: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
network[738]: Bringing up interface eth3:  [  OK  ]
[   20.328040] IPv6: ADDRCONF(NETDEV_UP): eth4: link is not ready
[   20.328699] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.329009] IPv6: ADDRCONF(NETDEV_CHANGE): eth4: link becomes ready
network[738]: Bringing up interface eth4:  [  OK  ]
[   20.561955] IPv6: ADDRCONF(NETDEV_UP): eth5: link is not ready
[   20.562695] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.563003] IPv6: ADDRCONF(NETDEV_CHANGE): eth5: link becomes ready
network[738]: Bringing up interface eth5:  [  OK  ]
[   20.794772] IPv6: ADDRCONF(NETDEV_UP): eth6: link is not ready
[   20.795704] e1000: eth6 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.796014] IPv6: ADDRCONF(NETDEV_CHANGE): eth6: link becomes ready
network[738]: Bringing up interface eth6:  [  OK  ]
[   21.028076] IPv6: ADDRCONF(NETDEV_UP): eth7: link is not ready
[   21.028724] e1000: eth7 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   21.029023] IPv6: ADDRCONF(NETDEV_CHANGE): eth7: link becomes ready
network[738]: Bringing up interface eth7:  [  OK  ]
[   22.118825] 8021q: 802.1Q VLAN Support v1.8
[   22.118846] 8021q: adding VLAN 0 to HW filter on device eth0
[   22.118869] 8021q: adding VLAN 0 to HW filter on device eth1
[   22.118890] 8021q: adding VLAN 0 to HW filter on device eth3
[   22.118918] 8021q: adding VLAN 0 to HW filter on device eth4
[   22.118947] 8021q: adding VLAN 0 to HW filter on device eth5
[   22.118975] 8021q: adding VLAN 0 to HW filter on device eth6
[   22.119004] 8021q: adding VLAN 0 to HW filter on device eth7
[   22.183337] device eth3.23 entered promiscuous mode
[   22.183388] device eth3 entered promiscuous mode
network[738]: Bringing up interface eth3.23:  [  OK  ]
[   22.294045] device eth4.24 entered promiscuous mode
[   22.294067] device eth4 entered promiscuous mode
network[738]: Bringing up interface eth4.24:  [  OK  ]
[   22.405486] device eth5.25 entered promiscuous mode
[   22.405507] device eth5 entered promiscuous mode
network[738]: Bringing up interface eth5.25:  [  OK  ]
[   22.514847] device eth6.26 entered promiscuous mode
[   22.514869] device eth6 entered promiscuous mode
network[738]: Bringing up interface eth6.26:  [  OK  ]
[   22.624938] device eth7.27 entered promiscuous mode
[   22.624960] device eth7 entered promiscuous mode
network[738]: Bringing up interface eth7.27:  [  OK  ]
network[738]: Bringing up interface xenbr0:
[   22.712708] xenbr0: port 1(eth0) entered forwarding state
[   22.712721] xenbr0: port 1(eth0) entered forwarding state
network[738]: Determining IP information for xenbr0... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr1:
[   25.186092] xenbr1: port 1(eth1) entered forwarding state
[   25.186110] xenbr1: port 1(eth1) entered forwarding state
network[738]: Determining IP information for xenbr1... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr3:
[   27.707860] xenbr3: port 1(eth3.23) entered forwarding state
[   27.707880] xenbr3: port 1(eth3.23) entered forwarding state
network[738]: Determining IP information for xenbr3... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr4:
[   30.224960] xenbr4: port 1(eth4.24) entered forwarding state
[   30.224979] xenbr4: port 1(eth4.24) entered forwarding state
network[738]: Determining IP information for xenbr4... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr5:
[   32.742655] xenbr5: port 1(eth5.25) entered forwarding state
[   32.742674] xenbr5: port 1(eth5.25) entered forwarding state
network[738]: Determining IP information for xenbr5... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr6:
[   35.257714] xenbr6: port 1(eth6.26) entered forwarding state
[   35.257734] xenbr6: port 1(eth6.26) entered forwarding state
network[738]: Determining IP information for xenbr6... done.
network[738]: [  OK  ]
[   37.720117] xenbr0: port 1(eth0) entered forwarding state
network[738]: Bringing up interface xenbr7:
[   37.775576] xenbr7: port 1(eth7.27) entered forwarding state
[   37.775595] xenbr7: port 1(eth7.27) entered forwarding state
network[738]: Determining IP information for xenbr7... done.
network[738]: [  OK  ]
[   40.216233] xenbr1: port 1(eth1) entered forwarding state
[  OK  ] Started LSB: Bring up/down networking.
[  OK  ] Reached target Network.
         Starting OpenSSH server daemon...
         Starting OpenVPN Robust And Highly Flexible Tunnelin...e/client/udp...
         Starting RPC bind service...
[  OK  ] Started OpenSSH server daemon.
[  OK  ] Started OpenVPN Robust And Highly Flexible Tunneling...rge/client/udp.
[  OK  ] Started RPC bind service.
         Starting NFS file locking service....
[  OK  ] Started NFS file locking service..
[  OK  ] Reached target Remote File Systems (Pre).
         Mounting /filer...
         Mounting /scratch...
[   40.491025] FS-Cache: Loaded
[   40.491596] Key type dns_resolver registered
[   40.493979] FS-Cache: Netfs 'nfs' registered for caching
         Mounting /isos...
[   40.502181] NFS: Registering the id_resolver key type
[   40.502201] Key type id_resolver registered
[   40.502205] Key type id_legacy registered

Fedora release 17 (Beefy Miracle)
Kernel 3.8.11-100.fc17.x86_64 on an x86_64 (hvc0)

dcs-xen-54 login: [  133.931861] device vif1.0 entered promiscuous mode
[  133.935393] IPv6: ADDRCONF(NETDEV_UP): vif1.0: link is not ready
[  134.070915] device vif1.0-emu entered promiscuous mode
[  134.074008] xenbr1: port 3(vif1.0-emu) entered forwarding state
[  134.074019] xenbr1: port 3(vif1.0-emu) entered forwarding state
(d1) [2014-01-01 15:39:34] HVM Loader
(d1) [2014-01-01 15:39:34] Detected Xen v4.4-unstable
(d1) [2014-01-01 15:39:34] Xenbus rings @0xfeffc000, event channel 4
(d1) [2014-01-01 15:39:34] System requested SeaBIOS
(d1) [2014-01-01 15:39:34] CPU speed is 2400 MHz
(d1) [2014-01-01 15:39:34] Relocating guest memory for lowmem MMIO space disabled
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(d1) [2014-01-01 15:39:34] PCI-ISA link 0 routed to IRQ5
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(d1) [2014-01-01 15:39:34] PCI-ISA link 1 routed to IRQ10
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(d1) [2014-01-01 15:39:34] PCI-ISA link 2 routed to IRQ11
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(d1) [2014-01-01 15:39:34] PCI-ISA link 3 routed to IRQ5
(d1) [2014-01-01 15:39:34] pci dev 01:3 INTA->IRQ10
(d1) [2014-01-01 15:39:34] pci dev 02:0 INTA->IRQ11
(d1) [2014-01-01 15:39:34] pci dev 04:0 INTA->IRQ5
(d1) [2014-01-01 15:39:34] RAM in high memory; setting high_mem resource base to 10c000000
(d1) [2014-01-01 15:39:34] pci dev 02:0 bar 14 size 001000000: 0f0000008
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 10 size 001000000: 0f1000008
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 30 size 000040000: 0f2000000
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 10 size 000020000: 0f2040000
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 30 size 000010000: 0f2060000
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 18 size 000001000: 0f2070000
(d1) [2014-01-01 15:39:34] pci dev 02:0 bar 10 size 000000100: 00000c001
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 14 size 000000040: 00000c101
(d1) [2014-01-01 15:39:34] pci dev 01:1 bar 20 size 000000010: 00000c141
(d1) [2014-01-01 15:39:34] Multiprocessor initialisation:
(d1) [2014-01-01 15:39:34]  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1) [2014-01-01 15:39:34]  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1) [2014-01-01 15:39:34] Writing SMBIOS tables ...
(d1) [2014-01-01 15:39:34] Loading SeaBIOS ...
(d1) [2014-01-01 15:39:34] Creating MP tables ...
(d1) [2014-01-01 15:39:34] Loading ACPI ...
(d1) [2014-01-01 15:39:34] vm86 TSS at fc00a080
(d1) [2014-01-01 15:39:34] BIOS map:
(d1) [2014-01-01 15:39:34]  10000-100d3: Scratch space
(d1) [2014-01-01 15:39:34]  e0000-fffff: Main BIOS
(d1) [2014-01-01 15:39:34] E820 table:
(d1) [2014-01-01 15:39:34]  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(d1) [2014-01-01 15:39:34]  HOLE: 00000000:000a0000 - 00000000:000e0000
(d1) [2014-01-01 15:39:34]  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d1) [2014-01-01 15:39:34]  [02]: 00000000:00100000 - 00000000:f0000000: RAM
(d1) [2014-01-01 15:39:34]  HOLE: 00000000:f0000000 - 00000000:fc000000
(d1) [2014-01-01 15:39:34]  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d1) [2014-01-01 15:39:34]  [04]: 00000001:00000000 - 00000001:0c000000: RAM
(d1) [2014-01-01 15:39:34] Invoking SeaBIOS ...
(d1) [2014-01-01 15:39:34] SeaBIOS (version rel-1.7.3.1-0-g7d9cbe6-20131231_113608-dcs-xen-54)
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:34] Found Xen hypervisor signature at 40000000
(d1) [2014-01-01 15:39:34] xen: copy e820...
(d1) [2014-01-01 15:39:34] Relocating init from 0x000e2001 to 0xeffe0600 (size 63795)
(d1) [2014-01-01 15:39:34] CPU Mhz=2401
(d1) [2014-01-01 15:39:34] Found 7 PCI devices (max PCI bus is 00)
(d1) [2014-01-01 15:39:34] Allocated Xen hypercall page at effff000
(d1) [2014-01-01 15:39:34] Detected Xen v4.4-unstable
(d1) [2014-01-01 15:39:34] xen: copy BIOS tables...
(d1) [2014-01-01 15:39:34] Copying SMBIOS entry point from 0x00010010 to 0x000f1920
(d1) [2014-01-01 15:39:34] Copying MPTABLE from 0xfc001170/fc001180 to 0x000f1820
(d1) [2014-01-01 15:39:34] Copying PIR from 0x00010030 to 0x000f17a0
(d1) [2014-01-01 15:39:34] Copying ACPI RSDP from 0x000100b0 to 0x000f1770
(d1) [2014-01-01 15:39:34] Using pmtimer, ioport 0xb008, freq 3579 kHz
(d1) [2014-01-01 15:39:34] Scan for VGA option rom
(d1) [2014-01-01 15:39:34] WARNING! Found unaligned PCI rom (vd=1234:1111)
(d1) [2014-01-01 15:39:34] Running option rom at c000:0003
(XEN) [2014-01-01 15:39:34] stdvga.c:147:d1 entering stdvga and caching modes
(d1) [2014-01-01 15:39:34] Turning on vga text mode console
(d1) [2014-01-01 15:39:34] SeaBIOS (version rel-1.7.3.1-0-g7d9cbe6-20131231_113608-dcs-xen-54)
(d1) [2014-01-01 15:39:34] Machine UUID d60df350-6607-4732-bb87-cf0a43ef0c78
(d1) [2014-01-01 15:39:34] Found 0 lpt ports
(d1) [2014-01-01 15:39:34] Found 1 serial ports
(d1) [2014-01-01 15:39:34] ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
(d1) [2014-01-01 15:39:34] ATA controller 2 at 170/374/0 (irq 15 dev 9)
(d1) [2014-01-01 15:39:34] ata0-0: QEMU HARDDISK ATA-7 Hard-Disk (20480 MiBytes)
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@0/disk@0
(d1) [2014-01-01 15:39:34] DVD/CD [ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD]
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@0
(d1) [2014-01-01 15:39:34] DVD/CD [ata1-1: QEMU DVD-ROM ATAPI-4 DVD/CD]
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@1
(d1) [2014-01-01 15:39:34] PS2 keyboard initialized
(d1) [2014-01-01 15:39:34] All threads complete.
(d1) [2014-01-01 15:39:34] Scan for option roms
(d1) [2014-01-01 15:39:34] Running option rom at ca00:0003
(d1) [2014-01-01 15:39:34] pmm call arg1=1
(d1) [2014-01-01 15:39:34] pmm call arg1=0
(d1) [2014-01-01 15:39:34] pmm call arg1=1
(d1) [2014-01-01 15:39:34] pmm call arg1=0
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@4
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:34] Press F12 for boot menu.
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:36] Searching bootorder for: HALT
(d1) [2014-01-01 15:39:36] drive 0x000f1720: PCHS=16383/16/63 translation=lba LCHS=1024/255/63 s=41943040
(d1) [2014-01-01 15:39:36] Space available for UMB: cb000-ee800, f0000-f1690
(d1) [2014-01-01 15:39:36] Returned 61440 bytes of ZoneHigh
(d1) [2014-01-01 15:39:36] e820 map has 7 items:
(d1) [2014-01-01 15:39:36]   0: 0000000000000000 - 000000000009fc00 = 1 RAM
(d1) [2014-01-01 15:39:36]   1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   3: 0000000000100000 - 00000000effff000 = 1 RAM
(d1) [2014-01-01 15:39:36]   4: 00000000effff000 - 00000000f0000000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   5: 00000000fc000000 - 0000000100000000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   6: 0000000100000000 - 000000010c000000 = 1 RAM
(d1) [2014-01-01 15:39:36] enter handle_19:
(d1) [2014-01-01 15:39:36]   NULL
(d1) [2014-01-01 15:39:36] Booting from Hard Disk...
(d1) [2014-01-01 15:39:36] Booting from 0000:7c00
[  149.112714] xenbr1: port 3(vif1.0-emu) entered forwarding state
(XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
(XEN) [2014-01-01 15:40:12] CPU:    5
(XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
(XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
(XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
(XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
(XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
(XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
(XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
(XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
(XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
(XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
(XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
(XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
(XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
(XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
(XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
(XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
(XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
(XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
(XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
(XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
(XEN) [2014-01-01 15:40:12] Xen call trace:
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
(XEN) [2014-01-01 15:40:12] 
(XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
(XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff 
(XEN) [2014-01-01 15:40:16] 
(XEN) [2014-01-01 15:40:16] ****************************************
(XEN) [2014-01-01 15:40:16] Panic on CPU 5:
(XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
(XEN) [2014-01-01 15:40:16] [error_code=0002]
(XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
(XEN) [2014-01-01 15:40:17] ****************************************
(XEN) [2014-01-01 15:40:17] 
(XEN) [2014-01-01 15:40:17] Reboot in five seconds...



--------------040403030203090802050606
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------040403030203090802050606--


From xen-devel-bounces@lists.xen.org Thu Jan 02 15:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyjom-0007Ai-6o; Thu, 02 Jan 2014 15:04:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1Vyjoj-0007Ad-Fq
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:04:14 +0000
Received: from [85.158.139.211:12083] by server-1.bemta-5.messagelabs.com id
	AB/79-21065-CEF75C25; Thu, 02 Jan 2014 15:04:12 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388675048!7552105!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23190 invoked from network); 2 Jan 2014 15:04:09 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 15:04:09 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 02 Jan 2014 15:04:05 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; 
	d="txt'?scan'208,223";a="623226737"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.26])
	by fldsmtpi03.verizon.com with ESMTP; 02 Jan 2014 15:04:04 +0000
Message-ID: <52C57FE3.3020502@terremark.com>
Date: Thu, 02 Jan 2014 10:04:03 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C454A4.1030408@citrix.com>
	<20140102014134.GA3371@olila.local.net-space.pl>
In-Reply-To: <20140102014134.GA3371@olila.local.net-space.pl>
Content-Type: multipart/mixed; boundary="------------040403030203090802050606"
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------040403030203090802050606
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/01/14 20:41, Daniel Kiper wrote:
> On Wed, Jan 01, 2014 at 05:47:16PM +0000, Andrew Cooper wrote:
>> On 01/01/2014 16:51, Don Slutz wrote:
> [...]
>
>>> With this patch there is no panic and the crash kernel works.
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> Commit 7113a45451a9f656deeff070e47672043ed83664 was clearly not tested.
>> kimage_alloc_crash_control_page() explicitly chooses a page inside the
>> crash region and clears it.
> I tested this patch earlier and now with the latest Xen and kexec-tools commits.
> I am not able to reproduce this issue on my machines. Don, could you
> provide more details about your system and how you built your
> Xen and kexec-tools (configure, make options, etc.)?

It is an older Fedora 17 system.

dcs-xen-54:~/xen>cat /etc/default/grub
GRUB_TIMEOUT=15
GRUB_DISTRIBUTOR="Fedora"
GRUB_DEFAULT=2
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600"
GRUB_CMDLINE_LINUX="rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap 
KEYTABLE=us SYSFONT=True rd.luks=0 console=ttyS0,9600n8 
rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=ttyS0 
rd_NO_PLYMOUTH"
#GRUB_THEME="/boot/grub2/themes/system/theme.txt"
GRUB_CMDLINE_XEN="dom0_mem=2G loglvl=all guest_loglvl=all 
console_timestamps=1 com1=9600,8n1 console=com1 apic_verbosity=verbose 
crashkernel=256M@256M"
GRUB_CMDLINE_LINUX_XEN_REPLACE="rd.md=0 rd.dm=0 
rd.lvm.lv=vg_f17-xen/lv_swap  KEYTABLE=us SYSFONT=True rd.luks=0 
console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 
earlyprintk=xen rd_NO_PLYMOUTH"

dcs-xen-54:~/xen>cat .config
CONFIG_QEMU = http://xenbits.xen.org/git-http/qemu-xen-unstable.git
QEMU_UPSTREAM_EXTRA_CONFIG = --with-pkgversion=qemu-xen-4.4.0-rc1-0-gb97307e
QEMU_UPSTREAM_REVISION = qemu-xen-4.4.0-rc1
QEMU_UPSTREAM_URL = git@githq.cloudswitch.com:qemu.git
SEABIOS_UPSTREAM_TAG = rel-1.7.3.1
SEABIOS_UPSTREAM_URL = git@githq.cloudswitch.com:seabios-george.git
debug = n

dcs-xen-54:~/xen>uname -a
Linux dcs-xen-54 3.8.11-100.fc17.x86_64 #1 SMP Wed May 1 19:31:26 UTC 
2013 x86_64 x86_64 x86_64 GNU/Linux

Commands used to build (I use rpmbuild):

./configure --prefix=/usr --disable-stubdom
make dist
make -C xen MAP

The last command is part of enabling the xen crashdump analyser; see:

http://lists.xen.org/archives/html/xen-devel/2013-02/msg01606.html

as are the patches:

* 588e9ba Adjust xen-crashdump-analyser info for 4.4.0
* f53550b Add new xen-crashdump-analyser info.
* a1f92d3 Introduce more offsets, and embed all offsets into the symbol file

Attached are these patches, including the patch I use to enable rpmbuild 
(dcs-xen-54:~/xen>rpmbuild -bb xen-4.4.spec), not that I expect it to 
matter.  The "crashkernel=256M@256M" may be the key to reproducing it.

Also attached is the complete console output from my test (kexec-broken.txt).

    -Don Slutz
> Andrew, David, did you run kexec tests in your automated test environment
> with commit 7113a45451a9f656deeff070e47672043ed83664 applied? Could you
> tell us something about the results?
>
>> However, the sentiment of the commit is certainly desirable, to prevent
>> accidental playing in the crash region.
>>
>> As the mappings are removed from Xen's directmap region,
>> map_domain_page() doesn't work (unless the debug highmem barrier is
>> sufficiently low that the crash region ends up above it, and the
>> virtual address ends up coming from the mapcache).
>>
>> This means that both here in clear_domain_page(), and later in
>> machine_kexec_load() where the code is copied in, are vulnerable to this
>> pagefault.
>>
>> The solution to this problem which would leave the fewest mappings would
>> be to have kimage_alloc_crash_control_page() map the individual control
>> page into the main Xen pagetables, at which point a call to
>> map_domain_page() on it will work correctly.  This would need an
>> equivalent call to destroy_xen_mappings() in kimage_free().
>>
>> However, it is far from neat.
>>
>> I defer to others as to which approach is better, but suggest that one
>> way or another, the problem gets fixed very quickly, even if that means
>> taking this complete reversion now and submitting a proper fix in due
>> course.
> I am on holiday until 6th January 2014 and am not able to investigate
> this issue more deeply right now. If you feel that it is better to revert
> this patch and later make a second attempt at removing this mapping, I do
> not object.
>
> Daniel


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0001-Adjust-to-use-rpmbuild.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="0001-Adjust-to-use-rpmbuild.patch"

>From 6654e8a1ba0077217c5b3c9d17ec2d154d199dc3 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Wed, 30 Oct 2013 13:33:07 -0400
Subject: [PATCH 1/5] Adjust to use rpmbuild

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 gen_version.pl   | 391 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 update_config.pl | 109 ++++++++++++++++
 xen-4.4.spec     | 131 +++++++++++++++++++
 3 files changed, 631 insertions(+)
 create mode 100755 gen_version.pl
 create mode 100755 update_config.pl
 create mode 100644 xen-4.4.spec

diff --git a/gen_version.pl b/gen_version.pl
new file mode 100755
index 0000000..08ab7e6
--- /dev/null
+++ b/gen_version.pl
@@ -0,0 +1,391 @@
+#! /usr/bin/perl -w
+# -*- Mode: cperl; coding: utf-8; cperl-indent-level: 2 -*-
+
+use strict;
+
+my $debug = 0;
+my $cygwin = $^O eq 'cygwin';
+my $brief = 0;
+my $major = 0;
+my $minor = 0;
+my $patch = 0;
+my $clean = 0;
+my $update = 0;
+my $tags = ' --tags';
+my $xen_conf = 0;
+
+while ((defined $ARGV[0]) &&
+       (substr($ARGV[0],0,1) eq '-')) {
+  my $opts = shift @ARGV;
+  if ($opts =~ /^-d(\d+)$/) {
+    $debug = $1;
+  } elsif ($opts eq '-d') {
+    $debug = shift @ARGV;
+  } elsif ($opts eq '-u') {
+    $cygwin = 0;
+  } elsif ($opts eq '-b') {
+    $brief = 1;
+  } elsif ($opts eq '-nt') {
+    $tags = '';
+  } elsif ($opts eq '-c') {
+    $xen_conf = 1;
+  } elsif ($opts eq '-major') {
+    $major = 1;
+  } elsif ($opts eq '-minor') {
+    $minor = 1;
+  } elsif ($opts eq '-patch') {
+    $patch = 1;
+  } elsif ($opts eq '-clean') {
+    $clean = 1;
+  } elsif ($opts eq '-update') {
+    $update = 1;
+  } else {
+    print "Unknown option \"$opts\" ignored\n";
+  }
+}
+
+die "Only one of -major -minor -patch" if ($major + $minor + $patch) > 1;
+
+$/ = "\cJ" if $cygwin;
+
+my $jenkins_build_number = $ENV{'BUILD_NUMBER'};
+$jenkins_build_number = '0000'
+  if ! defined $jenkins_build_number;
+
+sub clean_line {
+  my $line = shift @_;
+
+  chomp $line;
+  chop($line) if substr($line,-1,1) eq "\cM";
+  return $line;
+}
+
+sub three_parts {
+  my $ver = shift @_;
+  my @parts = split /-/, $ver;
+
+  while ($#parts > 2) {
+    my $old = shift @parts;
+    $parts[0] = $old.'-'. $parts[0];
+  }
+  return @parts;
+}
+
+my $branch = `basename \$(git rev-parse --abbrev-ref HEAD)`;
+chomp $branch;
+if ($debug =~ /br/) {
+  print STDERR "branch=$branch\n";
+}
+
+my %conf =
+  (
+   'QEMU_UPSTREAM_URL' => 'git@githq.cloudswitch.com:qemu.git',
+   'QEMU_UPSTREAM_REVISION' => 'origin/'.$branch,
+   'QEMU_UPSTREAM_EXTRA_CONFIG' => '--with-pkgversion=dirty',
+   'SEABIOS_UPSTREAM_URL' => 'git@githq.cloudswitch.com:seabios-george.git',
+   'SEABIOS_UPSTREAM_TAG' => 'origin/'.$branch,
+  );
+
+if ($xen_conf) {
+  open (CONFIG, "Config.mk")
+    || die "Failed to open Config.mk: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^QEMU_UPSTREAM_REVISION\s*\?=\s*(.*)$/) {
+      $conf{'QEMU_UPSTREAM_REVISION'} = $1;
+    } elsif ($line =~ /^SEABIOS_UPSTREAM_TAG\s*\?=\s*(.*)$/) {
+      $conf{'SEABIOS_UPSTREAM_TAG'} = $1;
+    }
+  }
+  close CONFIG;
+}
+
+my $base_xen_ver;
+my $xen_ver;
+my $qemu_ver;
+my $seabios_ver;
+my $uname_ver = "Unknown";
+
+open (UNAME, "uname -a|")
+  || die "Failed to open uname -a|: $!";
+$uname_ver = clean_line(<UNAME>);
+close UNAME;
+
+
+if (-f ".config") {
+  open (CONFIG, ".config")
+    || die "Failed to open .config: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^(.*?)\s*=\s*(.*)$/) {
+      $conf{$1} = $2;
+    }
+  }
+  close CONFIG;
+}
+
+if ($debug =~ /cf/) {
+  for my $key (sort keys %conf) {
+    print STDERR $key,' = ',$conf{$key},$/;
+  }
+}
+
+if ($clean) {
+  my $cmd = "git clean -fdxq";
+  open (GIT, "$cmd|")
+    || die "Failed to git clean($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+  for my $git_dir ('firmware/seabios-dir-remote', 'qemu-xen-dir-remote', 'qemu-xen-traditional-dir-remote') {
+    if (-d "tools/$git_dir") {
+      $cmd = "rm -rf tools/$git_dir";
+      open (GIT, "$cmd|")
+        || die "Failed to clean dir($cmd): $!";
+      print STDERR "$cmd\n";
+      while (<GIT>) {
+        print STDERR;
+      }
+      close GIT;
+    }
+  }
+  open (CONFIG, ">.config")
+    || die "Failed to open for write .config: $!";
+  for my $key (sort keys %conf) {
+    print CONFIG $key,' = ',$conf{$key},$/;
+  }
+  close CONFIG;
+  $cmd = "mkdir -p rpmbuild_dir/BUILD rpmbuild_dir/BUILDROOT rpmbuild_dir/RPMS/x86_64";
+  open (GIT, "$cmd|")
+    || die "Failed to create jenkins rpm dirs dir($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+if ($update) {
+  for my $git_dir ('firmware/seabios-dir-remote', 'qemu-xen-dir-remote', 'qemu-xen-traditional-dir-remote') {
+    if (-d "tools/$git_dir") {
+      my $cmd = "cd tools/$git_dir;git pull";
+      open (GIT, "$cmd|")
+        || die "Failed to update dir($cmd): $!";
+      print STDERR "$cmd\n";
+      while (<GIT>) {
+        print STDERR;
+      }
+      close GIT;
+    }
+  }
+}
+
+if (!-d 'tools/qemu-xen-dir') {
+  my $cmd = "export GIT=git;cd tools;../scripts/git-checkout.sh $conf{'QEMU_UPSTREAM_URL'} $conf{'QEMU_UPSTREAM_REVISION'} qemu-xen-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get qemu($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+if (!-d 'tools/firmware/seabios-dir') {
+  my $cmd = "export GIT=git;cd tools/firmware;../../scripts/git-checkout.sh $conf{'SEABIOS_UPSTREAM_URL'} $conf{'SEABIOS_UPSTREAM_TAG'} seabios-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get seabios($cmd): $!";
+  print STDERR "$cmd\n";
+  while (<GIT>) {
+    print STDERR;
+  }
+  close GIT;
+}
+
+open (GIT, "git fetch --tags|")
+  || die "Failed to fetch tags via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  print STDERR;
+}
+close GIT;
+
+my @xen_ver_tags = ();
+open (GIT, "git log --oneline --decorate=full -20|")
+  || die "Failed to get xen version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @xen_ver_tags, clean_line($_);
+}
+close GIT;
+
+my @qemu_ver_tags = ();
+open (GIT, "cd tools/qemu-xen-dir;git log --oneline --decorate=full -20|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @qemu_ver_tags, clean_line($_);
+}
+close GIT;
+
+my @seabios_ver_tags = ();
+open (GIT, "cd tools/firmware/seabios-dir;git log --oneline --decorate=full -20|")
+  || die "Failed to get seabios version via git: $!";
+binmode (GIT) if $cygwin;
+while(<GIT>) {
+  push @seabios_ver_tags, clean_line($_);
+}
+close GIT;
+
+open (GIT, "git describe $tags --long --dirty|")
+  || die "Failed to get xen version via git: $!";
+binmode (GIT) if $cygwin;
+$xen_ver = clean_line(<GIT>);
+close GIT;
+$base_xen_ver = $xen_ver;
+if ($xen_ver =~ /^jenkins/) {
+  my $ghash;
+  for my $i (0 .. $#xen_ver_tags) {
+    my $line = $xen_ver_tags[$i];
+    if (($i == 0) && ($line =~ /^(\S+)/)) {
+      $ghash = 'g'.$1;
+    }
+    if ($line =~ m-\((.*)\)-) {
+      my @tags = split(/, /, $1);
+      my $cur = 0;
+      my $pre;
+      for my $part (@tags) {
+        if ($part =~ m-^tag: refs/tags/(ver_\d+\.\d+\.)(\d+)-) {
+          if ($2 >= $cur) {
+            $pre = $1;
+            $cur = $2;
+          }
+        }
+      }
+      if (defined $pre) {
+        $xen_ver = $pre.$cur.'-'.$i.'-'.$ghash;
+        $xen_ver .= '-dirty'
+          if $base_xen_ver =~ /dirty$/;
+        last;
+      }
+    }
+  }
+}
+
+open (GIT, "cd tools/qemu-xen-dir;git describe $tags --long --dirty|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+$qemu_ver = clean_line(<GIT>);
+close GIT;
+
+open (GIT, "cd tools/firmware/seabios-dir;git describe $tags --long --dirty|")
+  || die "Failed to get seabios version via git: $!";
+binmode (GIT) if $cygwin;
+$seabios_ver = clean_line(<GIT>);
+close GIT;
+
+if ($debug =~ /sv/) {
+  print STDERR "xen_ver=$xen_ver qemu_ver=$qemu_ver seabios_ver=$seabios_ver\n";
+}
+
+my @xen_vers = three_parts($xen_ver);
+my @qemu_vers = three_parts($qemu_ver);
+my @seabios_vers = three_parts($seabios_ver);
+
+if ($debug =~ /pv/) {
+  for my $i (0..$#xen_vers) {
+    print STDERR "xen_vers[$i] = $xen_vers[$i]\n";
+  }
+  for my $i (0..$#qemu_vers) {
+    print STDERR "qemu_vers[$i] = $qemu_vers[$i]\n";
+  }
+  for my $i (0..$#seabios_vers) {
+    print STDERR "seabios_vers[$i] = $seabios_vers[$i]\n";
+  }
+}
+
+my $out = $xen_vers[0];
+if (($out ne $qemu_vers[0]) || ($out ne $seabios_vers[0])) {
+  die "Complex version: xen_ver=$xen_ver qemu_ver=$qemu_ver seabios_ver=$seabios_ver"
+    if $major || $minor || $patch || $clean;
+  if ($xen_vers[0] ne $qemu_vers[0]) {
+    $out .= '_'.$qemu_vers[0];
+  } else {
+    $out .= '_';
+  }
+  if ($xen_vers[0] ne $seabios_vers[0]) {
+    $out .= '_'.$seabios_vers[0];
+  } else {
+    $out .= '_';
+  }
+}
+if (($xen_vers[1] ne '0') || ($qemu_vers[1] ne '0') || ($seabios_vers[1] ne '0')) {
+  $out .= '_'.$xen_vers[1];
+  $out .= '_'.$qemu_vers[1];
+  $out .= '_'.$seabios_vers[1];
+}
+my $short_out = $out;
+$short_out .= '-dirty'
+  if $base_xen_ver =~ /dirty$/;
+$short_out =~ s/-/_/g;
+$out .= '_'.$xen_vers[2];
+$out .= '_'.$qemu_vers[2];
+$out .= '_'.$seabios_vers[2];
+$out =~ s/-/_/g;
+if (!-d 'dist/install/etc/xen') {
+  my $rc = system 'mkdir', '-p', 'dist/install/etc/xen';
+  die "Failed to create dist/install/etc/xen: $!"
+    if $rc != 0;
+}
+open(VERFILE, ">dist/install/etc/xen/gen_version")
+  || die "Failed to create dist/install/etc/xen/gen_version: $!";
+print VERFILE  $out,$/;
+print VERFILE  'all=',$out,$/;
+print VERFILE  'brief=',$short_out,$/;
+print VERFILE  'jenkins_build_number=',$jenkins_build_number,$/;
+print VERFILE  'base_xen_ver=',$base_xen_ver,$/;
+print VERFILE  'xen_ver=',$xen_ver,$/;
+print VERFILE  'qemu_ver=',$qemu_ver,$/;
+print VERFILE  'seabios_ver=',$seabios_ver,$/;
+print VERFILE  'uname_ver=',$uname_ver,$/;
+print VERFILE  'HOSTNAME=',$ENV{'HOSTNAME'},$/;
+for my $key (sort keys %conf) {
+  print VERFILE $key,'=',$conf{$key},$/;
+}
+for my $i (0 .. $#xen_ver_tags) {
+  print VERFILE  'xen_ver_tags['.$i.']=',$xen_ver_tags[$i],$/;
+}
+for my $i (0 .. $#qemu_ver_tags) {
+  print VERFILE  'qemu_ver_tags['.$i.']=',$qemu_ver_tags[$i],$/;
+}
+for my $i (0 .. $#seabios_ver_tags) {
+  print VERFILE  'seabios_ver_tags['.$i.']=',$seabios_ver_tags[$i],$/;
+}
+close VERFILE;
+if ($major) {
+  if ($short_out =~ /ver_(\d+)\.\d+\.\d+/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($minor) {
+  if ($short_out =~ /ver_\d+\.(\d+)\.\d+/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($patch) {
+  if ($short_out =~ /ver_\d+\.\d+\.(\d+)/) {
+    print $1,$/;
+  } else {
+    die "Failed to parse:$short_out";
+  }
+} elsif ($brief) {
+  print $short_out,$/;
+} else {
+  print $out,$/;
+}
+
+1;
diff --git a/update_config.pl b/update_config.pl
new file mode 100755
index 0000000..097f1da
--- /dev/null
+++ b/update_config.pl
@@ -0,0 +1,109 @@
+#! /usr/bin/perl -w
+# -*- Mode: cperl; coding: utf-8; cperl-indent-level: 2 -*-
+
+use strict;
+
+my $debug = 0;
+my $cygwin = $^O eq 'cygwin';
+
+while ((defined $ARGV[0]) &&
+       (substr($ARGV[0],0,1) eq '-')) {
+  my $opts = shift @ARGV;
+  if ($opts =~ /^-d(\d+)$/) {
+    $debug = $1;
+  } elsif ($opts eq '-d') {
+    $debug = shift @ARGV;
+  } elsif ($opts eq '-u') {
+    $cygwin = 0;
+  } else {
+    print "Unknown option \"$opts\" ignored\n";
+  }
+}
+
+$/ = "\cJ" if $cygwin;
+
+sub clean_line {
+  my $line = shift @_;
+
+  chomp $line;
+  chop($line) if substr($line,-1,1) eq "\cM";
+  return $line;
+}
+
+my %conf =
+  (
+   'QEMU_UPSTREAM_URL' => 'git@githq.cloudswitch.com:qemu.git',
+   'QEMU_UPSTREAM_REVISION' => 'origin/dev-integration',
+   'QEMU_UPSTREAM_EXTRA_CONFIG' => '--with-pkgversion=dirty',
+   'SEABIOS_UPSTREAM_URL' => 'git@githq.cloudswitch.com:seabios-george.git',
+   'SEABIOS_UPSTREAM_TAG' => 'origin/dev-integration',
+  );
+my %conf_seen = ();
+my @configs = ();
+
+if (-f ".config") {
+  open (CONFIG, ".config")
+    || die "Failed to open .config: $!";
+  while(<CONFIG>) {
+    my $line = clean_line($_);
+    if ($line =~ /^(.*?)\s*=\s*(.*)$/) {
+      $conf{$1} = $2;
+      $conf_seen{$1} = 1;
+    }
+    push @configs, $line;
+  }
+  close CONFIG;
+}
+
+for my $key (keys %conf) {
+  if (!defined $conf_seen{$key}) {
+    push @configs, "$key = $conf{$key}";
+  }
+}
+
+if (!-d 'tools/qemu-xen-dir') {
+  my $cmd = "export GIT=git;cd tools;../scripts/git-checkout.sh $conf{'QEMU_UPSTREAM_URL'} $conf{'QEMU_UPSTREAM_REVISION'} qemu-xen-dir";
+  open (GIT, "$cmd|")
+    || die "Failed to get qemu($cmd): $!";
+  print "$cmd\n";
+  while (<GIT>) {
+    print;
+  }
+  close GIT;
+}
+
+open (GIT, "cd tools/qemu-xen-dir;git describe --tags --long --dirty|")
+  || die "Failed to get qemu version via git: $!";
+binmode (GIT) if $cygwin;
+my $qemu_ver = clean_line(<GIT>);
+close GIT;
+
+print STDERR "qemu_ver=$qemu_ver\n"
+  if $debug =~ /qv/;
+
+for my $i (0..$#configs) {
+  my $line = $configs[$i];
+  if ($line =~ /^QEMU_UPSTREAM_EXTRA_CONFIG/) {
+    print STDERR "line=$line\n"
+      if $debug =~ /qc/;
+    my @parts = split / +/, $line;
+    my $found_pkgversion = 0;
+    for my $i (0..$#parts) {
+      if ($parts[$i] =~ /^--with-pkgversion=/) {
+        $parts[$i] = "--with-pkgversion=$qemu_ver";
+        $found_pkgversion = 1;
+      }
+    }
+    push @parts,  "--with-pkgversion=$qemu_ver"
+      if !$found_pkgversion;
+    $configs[$i] = join ' ', @parts;
+  }
+}
+
+open (CONFIG, ">.config")
+  || die "Failed to open for write .config: $!";
+for my $line (@configs) {
+  print CONFIG $line,$/;
+}
+close CONFIG;
+1;
diff --git a/xen-4.4.spec b/xen-4.4.spec
new file mode 100644
index 0000000..38ff5ba
--- /dev/null
+++ b/xen-4.4.spec
@@ -0,0 +1,131 @@
+Summary: The Xen Hypervisor, modified by CloudSwitch, Inc.
+Name: xen
+Version: 4.4.unstable
+Release: %(./gen_version.pl -b -update)
+License: GPLv2
+Group: System Environment/Kernel
+URL: http://www.xen.org
+#Source0: %{name}-%{version}.tar.gz
+BuildRoot: rpmbuild/BUILDROOT/%{name}-%{version}-%{release}
+
+# $ cd <xen source tree top directory>
+# $ rpmbuild -bb xen-4.4.spec
+
+%debug_package
+
+%description
+Xen Hypervisor
+
+%prep
+#%setup -q
+
+%build
+AT_DIR=%(pwd)
+rm -rf $RPM_BUILD_ROOT
+cd $AT_DIR
+./update_config.pl
+./configure --prefix=/usr --disable-stubdom
+./gen_version.pl
+make dist
+make -C xen MAP
+
+%install
+AT_DIR=%(pwd)
+mkdir -p $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/boot $RPM_BUILD_ROOT
+cp -a $AT_DIR/xen/System.map $RPM_BUILD_ROOT/boot/System.map-%{name}-%{version}-%{release}
+cp -a $AT_DIR/dist/install/etc $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/usr $RPM_BUILD_ROOT
+cp -a $AT_DIR/dist/install/var $RPM_BUILD_ROOT
+mkdir -p $RPM_BUILD_ROOT/var/log/xen/console
+mkdir -p $RPM_BUILD_ROOT/boot/flask
+# mv $RPM_BUILD_ROOT/boot/xenpolicy.24 $RPM_BUILD_ROOT/boot/flask/
+
+%post
+if [ $1 = 1 -a -f /sbin/grub2-mkconfig -a -f /boot/grub2/grub.cfg ]; then
+  /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+fi
+ldconfig
+chkconfig --add xencommons
+
+%preun
+chkconfig --del xencommons
+
+%postun
+if [ -f /sbin/grub2-mkconfig -a -f /boot/grub2/grub.cfg ]; then
+  /sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+fi
+ldconfig
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+
+%files
+%defattr(-,root,root,-)
+%config /etc/rc.d/init.d/xencommons
+%doc
+/boot
+/etc
+/var
+/usr/bin
+/usr/etc/qemu/target-x86_64.conf
+/usr/include
+/usr/sbin
+/usr/share
+/usr/lib64
+/usr/libexec
+/usr/lib/fs/ext2fs/fsimage.so
+/usr/lib/fs/fat/fsimage.so
+/usr/lib/fs/iso9660/fsimage.so
+/usr/lib/fs/reiserfs/fsimage.so
+/usr/lib/fs/ufs/fsimage.so
+/usr/lib/fs/xfs/fsimage.so
+/usr/lib/fs/zfs/fsimage.so
+/usr/lib/libblktapctl.a
+/usr/lib/libblktapctl.so
+/usr/lib/libblktapctl.so.1.0
+/usr/lib/libblktapctl.so.1.0.0
+/usr/lib/libfsimage.so
+/usr/lib/libfsimage.so.1.0
+/usr/lib/libfsimage.so.1.0.0
+/usr/lib/libvhd.a
+/usr/lib/libvhd.so
+/usr/lib/libvhd.so.1.0
+/usr/lib/libvhd.so.1.0.0
+/usr/lib/libxenctrl.a
+/usr/lib/libxenctrl.so
+/usr/lib/libxenctrl.so.4.3
+/usr/lib/libxenctrl.so.4.3.0
+/usr/lib/libxenguest.a
+/usr/lib/libxenguest.so
+/usr/lib/libxenguest.so.4.3
+/usr/lib/libxenguest.so.4.3.0
+/usr/lib/libxenlight.a
+/usr/lib/libxenlight.so
+/usr/lib/libxenlight.so.4.3
+/usr/lib/libxenlight.so.4.3.0
+/usr/lib/libxenstat.a
+/usr/lib/libxenstat.so
+/usr/lib/libxenstat.so.0
+/usr/lib/libxenstat.so.0.0
+/usr/lib/libxenstore.a
+/usr/lib/libxenstore.so
+/usr/lib/libxenstore.so.3.0
+/usr/lib/libxenstore.so.3.0.3
+/usr/lib/libxenvchan.a
+/usr/lib/libxenvchan.so
+/usr/lib/libxenvchan.so.1.0
+/usr/lib/libxenvchan.so.1.0.0
+/usr/lib/libxlutil.a
+/usr/lib/libxlutil.so
+/usr/lib/libxlutil.so.4.3
+/usr/lib/libxlutil.so.4.3.0
+/usr/lib/xen
+
+%changelog
+* Thu Oct  3 2013 Don Slutz <Don@CloudSwitch.com> - 4.3.0-%(./gen_version.pl -b -update)
+- Move xenpolicy.24 out of /boot
+
+* Wed Oct  3 2012 Harry Hart <hhart@harry-laptop.cloudswitch.com> - 4.2-1
+- Initial build.
-- 
1.7.11.7
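A note for reviewers of gen_version.pl above: the slightly opaque three_parts() helper folds any extra '-'-separated fields of a `git describe --long` string into the first element, so the result is always three parts (tag, commits-since-tag, g<hash>; a trailing "dirty" field shifts the hash into the first element). This is a hedged, illustrative Python mirror of that logic, not part of the patch:

```python
def three_parts(ver):
    # Split a 'git describe --long' string on '-' and repeatedly fold
    # the leading field into the next one, so the list collapses to
    # exactly three elements, matching the Perl three_parts() above.
    parts = ver.split('-')
    while len(parts) > 3:
        parts[0:2] = ['-'.join(parts[0:2])]
    return parts

# e.g. a tag containing '-' folds into the first element:
print(three_parts('jenkins-build-5-gdeadbee'))
```

Note the consequence: with `--dirty`, the "dirty" suffix becomes parts[2] and the commit hash folds into parts[0], which is why the script separately re-checks `$base_xen_ver =~ /dirty$/`.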


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0002-Introduce-more-offsets-and-embed-all-offsets-into-th.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0002-Introduce-more-offsets-and-embed-all-offsets-into-th.pa";
 filename*1="tch"

>From a1f92d31a1c64fe97a3f831fe0b7f1bb6fa4940b Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 20 Feb 2013 22:45:32 +0000
Subject: [PATCH 2/5] Introduce more offsets, and embed all offsets into the
 symbol file

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/Makefile                      |  1 +
 xen/arch/x86/x86_64/asm-offsets.c | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/xen/Makefile b/xen/Makefile
index 1ea2717..e48e89f 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -197,6 +197,7 @@ _cscope:
 .PHONY: _MAP
 _MAP:
 	$(NM) -n $(TARGET)-syms | grep -v '\(compiled\)\|\(\.o$$\)\|\( [aUw] \)\|\(\.\.ng$$\)\|\(LASH[RL]DI\)' > System.map
+	cat include/asm/asm-offsets.h | awk '/^#define __ASM_OFFSETS_H__/ { next } ; /^#define / { printf "%016x - +%s\n", $$3, $$2 }' >> System.map
 
 .PHONY: FORCE
 FORCE:
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index b0098b3..4168b98 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -54,8 +54,38 @@ void __dummy__(void)
     DEFINE(UREGS_user_sizeof, sizeof(struct cpu_user_regs));
     BLANK();
 
+    OFFSET(DOMAIN_id, struct domain, domain_id);
+    OFFSET(DOMAIN_shared_info, struct domain, shared_info);
+    OFFSET(DOMAIN_next, struct domain, next_in_list);
+    OFFSET(DOMAIN_max_vcpus, struct domain, max_vcpus);
+    OFFSET(DOMAIN_vcpus, struct domain, vcpu);
+    OFFSET(DOMAIN_is_hvm, struct domain, is_hvm);
+    OFFSET(DOMAIN_is_privileged, struct domain, is_privileged);
+    OFFSET(DOMAIN_tot_pages, struct domain, tot_pages);
+    OFFSET(DOMAIN_max_pages, struct domain, max_pages);
+    OFFSET(DOMAIN_shr_pages, struct domain, shr_pages);
+    OFFSET(DOMAIN_has_32bit_shinfo, struct domain, arch.has_32bit_shinfo);
+    OFFSET(DOMAIN_handle, struct domain, handle);
+    OFFSET(DOMAIN_paging_mode, struct domain, arch.paging.mode);
+    DEFINE(DOMAIN_sizeof, sizeof(struct domain));
+    BLANK();
+
+    OFFSET(SHARED_max_pfn, struct shared_info, arch.max_pfn);
+    OFFSET(SHARED_pfn_to_mfn_list_list, struct shared_info, arch.pfn_to_mfn_frame_list_list);
+    BLANK();
+
+    DEFINE(XEN_virt_start, __XEN_VIRT_START);
+    DEFINE(XEN_page_offset, __PAGE_OFFSET);
+#ifndef NDEBUG
+    DEFINE(XEN_DEBUG, 1);
+#else
+    DEFINE(XEN_DEBUG, 0);
+#endif
+    BLANK();
+
     OFFSET(irq_caps_offset, struct domain, irq_caps);
     OFFSET(next_in_list_offset, struct domain, next_in_list);
+    OFFSET(VCPU_vcpu_id, struct vcpu, vcpu_id);
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
     OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
@@ -86,7 +116,10 @@ void __dummy__(void)
     OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
     OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.vgc_flags);
+    OFFSET(VCPU_user_regs, struct vcpu, arch.user_regs);
+    OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
+    OFFSET(VCPU_pause_flags, struct vcpu, pause_flags);
     OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
     OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
     OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
@@ -95,6 +128,7 @@ void __dummy__(void)
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_failsafe_disables_events, _VGCF_failsafe_disables_events);
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
+    DEFINE(VCPU_sizeof, sizeof(struct vcpu));
     BLANK();
 
     OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
@@ -134,6 +168,7 @@ void __dummy__(void)
     OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
     OFFSET(CPUINFO_processor_id, struct cpu_info, processor_id);
     OFFSET(CPUINFO_current_vcpu, struct cpu_info, current_vcpu);
+    OFFSET(CPUINFO_per_cpu_offset, struct cpu_info, per_cpu_offset);
     DEFINE(CPUINFO_sizeof, sizeof(struct cpu_info));
     BLANK();
 
-- 
1.7.11.7
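For anyone inspecting the System.map produced by the extended _MAP rule above: the awk one-liner rewrites each `#define NAME VALUE` line of the generated asm-offsets.h into an nm-style entry (value, dummy type, name prefixed with '+'), skipping the include guard. A rough Python equivalent for illustration only (the helper name is made up, not part of the patch):

```python
import re

def offsets_to_map_lines(header_lines):
    """Rewrite '#define NAME VALUE' lines from asm-offsets.h into
    nm-style System.map entries, as the awk filter in the _MAP rule
    does; the include guard line is skipped."""
    out = []
    for line in header_lines:
        if line.startswith('#define __ASM_OFFSETS_H__'):
            continue  # include guard has no value field
        m = re.match(r'#define\s+(\S+)\s+(\S+)', line)
        if m:
            name, value = m.groups()
            # base 0 accepts both decimal and 0x-prefixed values
            out.append('%016x - +%s' % (int(value, 0), name))
    return out
```

The '+' prefix keeps these synthetic entries distinguishable from real symbols when the crashdump analyser reads the map.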


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0003-Add-new-xen-crashdump-analyser-info.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="0003-Add-new-xen-crashdump-analyser-info.patch"

>From f53550bba266c10b463bd5bb14ba761de986b393 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Mon, 11 Nov 2013 14:36:40 -0500
Subject: [PATCH 3/5] Add new xen-crashdump-analyser info.

But do not delete old stuff.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/x86_64/asm-offsets.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 4168b98..c819127 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -65,6 +65,7 @@ void __dummy__(void)
     OFFSET(DOMAIN_max_pages, struct domain, max_pages);
     OFFSET(DOMAIN_shr_pages, struct domain, shr_pages);
     OFFSET(DOMAIN_has_32bit_shinfo, struct domain, arch.has_32bit_shinfo);
+    OFFSET(DOMAIN_pause_count, struct domain, pause_count);
     OFFSET(DOMAIN_handle, struct domain, handle);
     OFFSET(DOMAIN_paging_mode, struct domain, arch.paging.mode);
     DEFINE(DOMAIN_sizeof, sizeof(struct domain));
@@ -76,10 +77,23 @@ void __dummy__(void)
 
     DEFINE(XEN_virt_start, __XEN_VIRT_START);
     DEFINE(XEN_page_offset, __PAGE_OFFSET);
-#ifndef NDEBUG
-    DEFINE(XEN_DEBUG, 1);
+    DEFINE(VIRT_XEN_START, XEN_VIRT_START);
+    DEFINE(VIRT_XEN_END, XEN_VIRT_END);
+    DEFINE(VIRT_DIRECTMAP_START, DIRECTMAP_VIRT_START);
+    DEFINE(VIRT_DIRECTMAP_END, DIRECTMAP_VIRT_END);
+
+    DEFINE(XEN_DEBUG, debug_build());
+    DEFINE(XEN_STACK_SIZE, STACK_SIZE);
+    DEFINE(XEN_PRIMARY_STACK_SIZE, PRIMARY_STACK_SIZE);
+#ifdef MEMORY_GUARD
+    DEFINE(XEN_MEMORY_GUARD, 1);
 #else
-    DEFINE(XEN_DEBUG, 0);
+    DEFINE(XEN_MEMORY_GUARD, 0);
+#endif
+#ifdef CONFIG_FRAME_POINTER
+    DEFINE(XEN_FRAME_POINTER, 1);
+#else
+    DEFINE(XEN_FRAME_POINTER, 0);
 #endif
     BLANK();
 
@@ -120,6 +134,7 @@ void __dummy__(void)
     OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
     OFFSET(VCPU_pause_flags, struct vcpu, pause_flags);
+    OFFSET(VCPU_pause_count, struct vcpu, pause_count);
     OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
     OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
     OFFSET(VCPU_mce_old_mask, struct vcpu, mce_state.old_mask);
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/x-patch;
 name="0004-Adjust-xen-crashdump-analyser-info-for-4.4.0.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0004-Adjust-xen-crashdump-analyser-info-for-4.4.0.patch"

>From 588e9baf3ee74c0efcbd91a639f74525a431f507 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Wed, 20 Nov 2013 15:31:39 -0500
Subject: [PATCH 4/5] Adjust xen-crashdump-analyser info for 4.4.0

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/x86_64/asm-offsets.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index c819127..a7da0a3 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -59,7 +59,7 @@ void __dummy__(void)
     OFFSET(DOMAIN_next, struct domain, next_in_list);
     OFFSET(DOMAIN_max_vcpus, struct domain, max_vcpus);
     OFFSET(DOMAIN_vcpus, struct domain, vcpu);
-    OFFSET(DOMAIN_is_hvm, struct domain, is_hvm);
+    OFFSET(DOMAIN_guest_type, struct domain, guest_type);
     OFFSET(DOMAIN_is_privileged, struct domain, is_privileged);
     OFFSET(DOMAIN_tot_pages, struct domain, tot_pages);
     OFFSET(DOMAIN_max_pages, struct domain, max_pages);
-- 
1.7.11.7


--------------040403030203090802050606
Content-Type: text/plain; charset=us-ascii;
 name="kexec-broken.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="kexec-broken.txt"



Loading Xen xen ...
Loading Linux 3.8.11-100.fc17.x86_64 ...
Loading initial ramdisk ...
error: Can't get controller info..
 __  __            _  _   _  _                      _        _     _      
 \ \/ /___ _ __   | || | | || |     _   _ _ __  ___| |_ __ _| |__ | | ___ 
  \  // _ \ '_ \  | || |_| || |_ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|__   _|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_) |_|     \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
                                                                          
(XEN) Xen version 4.4-unstable (don@culpepper.cloudswitch.com) (gcc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)) debug=n Tue Dec 31 11:33:10 EST 2013
(XEN) Latest ChangeSet: Wed Nov 20 15:31:39 2013 -0500 git:588e9ba
(XEN) Bootloader: GRUB 2.00~beta6
(XEN) Command line: placeholder dom0_mem=2G loglvl=all guest_loglvl=all console_timestamps=1 com1=9600,8n1 console=com1 apic_verbosity=verbose crashkernel=256M@256M
(XEN) Video information:
(XEN)  No VGA detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009b800 (usable)
(XEN)  000000000009b800 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000bf63f000 (usable)
(XEN)  00000000bf63f000 - 00000000bf6bf000 (reserved)
(XEN)  00000000bf6bf000 - 00000000bf7bf000 (ACPI NVS)
(XEN)  00000000bf7bf000 - 00000000bf7ff000 (ACPI data)
(XEN)  00000000bf7ff000 - 00000000bf800000 (usable)
(XEN)  00000000bf800000 - 00000000c0000000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000feb00000 - 00000000feb04000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed10000 - 00000000fed1a000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ffd80000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000840000000 (usable)
(XEN) Kdump: 256MB (262144kB) at 0x10000000
(XEN) ACPI: RSDP 000FE020, 0024 (r2 INSYDE)
(XEN) ACPI: XSDT BF7FE170, 009C (r1 INSYDE BROMOLOW        1       1000013)
(XEN) ACPI: FACP BF7FC000, 00F4 (r4 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: DSDT BF7F2000, 5A40 (r1 INSYDE BROMOLOW        0 ACPI    40000)
(XEN) ACPI: FACS BF76E000, 0040
(XEN) ACPI: ASF! BF7FD000, 00A5 (r32 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: HPET BF7FB000, 0038 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: APIC BF7FA000, 0092 (r2 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: MCFG BF7F9000, 003C (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: MSDM BF7F8000, 0055 (r3 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SLIC BF7F1000, 0176 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: WDAT BF7F0000, 0224 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: BOOT BF7EE000, 0028 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SSDT BF7ED000, 02F6 (r1 INSYDE BROMOLOW     1000 ACPI    40000)
(XEN) ACPI: SPCR BF7EC000, 0050 (r1 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: ASPT BF7EB000, 0034 (r7 INSYDE BROMOLOW        1 ACPI    40000)
(XEN) ACPI: SSDT BF7EA000, 0804 (r1 INSYDE BROMOLOW     3000 ACPI    40000)
(XEN) ACPI: SSDT BF7E9000, 0996 (r1 INSYDE BROMOLOW     3000 ACPI    40000)
(XEN) ACPI: DMAR BF7E8000, 0088 (r1 INTEL      SNB         1 INTL        1)
(XEN) System RAM: 32757MB (33544044kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000840000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fe1b0
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x408
(XEN) ACPI: SLEEP INFO: pm1x_cnt[404,0], pm1x_evt[400,0]
(XEN) ACPI:             wakeup_vec[bf76e00c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
(XEN) Processor #1 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
(XEN) Processor #2 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
(XEN) Processor #3 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
(XEN) Processor #4 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
(XEN) Processor #5 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] enabled)
(XEN) Processor #6 6:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 6:10 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) mapped APIC to ffff82cfffdfb000 (fee00000)
(XEN) mapped IOAPIC to ffff82cfffdfa000 (fec00000)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2400.077 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) Suppress EOI broadcast on CPU#0
(XEN) enabled ExtINT on CPU#0
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) init IO_APIC IRQs
(XEN)  IO-APIC (apicid-pin) 0-0, 0-16, 0-17, 0-18, 0-19, 0-20, 0-21, 0-22, 0-23 not connected.
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) number of MP IRQ sources: 15.
(XEN) number of IO-APIC #0 registers: 24.
(XEN) testing the IO APIC.......................
(XEN) IO APIC #0......
(XEN) .... register #00: 00000000
(XEN) .......    : physical APIC id: 00
(XEN) .......    : Delivery Type: 0
(XEN) .......    : LTS          : 0
(XEN) .... register #01: 00170020
(XEN) .......     : max redirection entries: 0017
(XEN) .......     : PRQ implemented: 0
(XEN) .......     : IO APIC version: 0020
(XEN) .... IRQ redirection table:
(XEN)  NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect:   
(XEN)  00 000 00  1    0    0   0   0    0    0    00
(XEN)  01 000 00  0    0    0   0   0    1    1    30
(XEN)  02 000 00  0    0    0   0   0    1    1    F0
(XEN)  03 000 00  0    0    0   0   0    1    1    38
(XEN)  04 000 00  0    0    0   0   0    1    1    F1
(XEN)  05 000 00  0    0    0   0   0    1    1    40
(XEN)  06 000 00  0    0    0   0   0    1    1    48
(XEN)  07 000 00  0    0    0   0   0    1    1    50
(XEN)  08 000 00  0    0    0   0   0    1    1    58
(XEN)  09 000 00  1    1    0   0   0    1    1    60
(XEN)  0a 000 00  0    0    0   0   0    1    1    68
(XEN)  0b 000 00  0    0    0   0   0    1    1    70
(XEN)  0c 000 00  0    0    0   0   0    1    1    78
(XEN)  0d 000 00  0    0    0   0   0    1    1    88
(XEN)  0e 000 00  0    0    0   0   0    1    1    90
(XEN)  0f 000 00  0    0    0   0   0    1    1    98
(XEN)  10 000 00  1    0    0   0   0    0    0    00
(XEN)  11 025 05  1    0    0   0   0    1    2    C9
(XEN)  12 0E5 05  1    0    0   0   0    1    2    4D
(XEN)  13 000 00  1    0    0   0   0    0    0    00
(XEN)  14 000 00  1    0    0   0   0    0    0    00
(XEN)  15 0B1 01  1    0    0   0   0    1    2    C9
(XEN)  16 000 00  1    0    0   0   0    0    0    00
(XEN)  17 0B8 08  1    0    0   0   0    1    2    85
(XEN) Using vector-based indexing
(XEN) IRQ to pin mappings:
(XEN) IRQ240 -> 0:2
(XEN) IRQ48 -> 0:1
(XEN) IRQ56 -> 0:3
(XEN) IRQ241 -> 0:4
(XEN) IRQ64 -> 0:5
(XEN) IRQ72 -> 0:6
(XEN) IRQ80 -> 0:7
(XEN) IRQ88 -> 0:8
(XEN) IRQ96 -> 0:9
(XEN) IRQ104 -> 0:10
(XEN) IRQ112 -> 0:11
(XEN) IRQ120 -> 0:12
(XEN) IRQ136 -> 0:13
(XEN) IRQ144 -> 0:14
(XEN) IRQ152 -> 0:15
(XEN) .................................... done.
(XEN) Using local APIC timer interrupts.
(XEN) calibrating APIC timer ...
(XEN) ..... CPU clock speed is 2400.0502 MHz.
(XEN) ..... host bus clock speed is 100.0019 MHz.
(XEN) ..... bus_scale = 0x6669
(XEN) TSC deadline timer enabled
(XEN) [2014-01-01 15:37:20] Platform timer is 14.318MHz HPET
(XEN) [2014-01-01 15:37:20] Allocated console ring of 64 KiB.
(XEN) [2014-01-01 15:37:20] mwait-idle: MWAIT substates: 0x1120
(XEN) [2014-01-01 15:37:20] mwait-idle: v0.4 model 0x2a
(XEN) [2014-01-01 15:37:20] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-01 15:37:20] VMX: Supported advanced features:
(XEN) [2014-01-01 15:37:20]  - APIC MMIO access virtualisation
(XEN) [2014-01-01 15:37:20]  - APIC TPR shadow
(XEN) [2014-01-01 15:37:20]  - Extended Page Tables (EPT)
(XEN) [2014-01-01 15:37:20]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-01 15:37:20]  - Virtual NMI
(XEN) [2014-01-01 15:37:20]  - MSR direct-access bitmap
(XEN) [2014-01-01 15:37:20]  - Unrestricted Guest
(XEN) [2014-01-01 15:37:20] HVM: ASIDs enabled.
(XEN) [2014-01-01 15:37:20] HVM: VMX enabled
(XEN) [2014-01-01 15:37:20] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-01 15:37:20] HVM: HAP page sizes: 4kB, 2MB
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#1
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#1
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#2
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#2
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#3
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#3
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#4
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#4
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#5
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#5
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#6
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#6
(XEN) [2014-01-01 15:37:19] Suppress EOI broadcast on CPU#7
(XEN) [2014-01-01 15:37:19] masked ExtINT on CPU#7
(XEN) [2014-01-01 15:37:20] Brought up 8 CPUs
(XEN) [2014-01-01 15:37:20] ACPI sleep modes: S3
(XEN) [2014-01-01 15:37:20] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-01 15:37:20] *** LOADING DOMAIN 0 ***
(XEN) [2014-01-01 15:37:20]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-01 15:37:20]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2391000
(XEN) [2014-01-01 15:37:20] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-01 15:37:20]  Dom0 alloc.:   000000081c000000->0000000820000000 (489617 pages to be allocated)
(XEN) [2014-01-01 15:37:20]  Init. ramdisk: 000000083b891000->000000083ffffa00
(XEN) [2014-01-01 15:37:20] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-01 15:37:20]  Loaded kernel: ffffffff81000000->ffffffff82391000
(XEN) [2014-01-01 15:37:20]  Init. ramdisk: ffffffff82391000->ffffffff86affa00
(XEN) [2014-01-01 15:37:20]  Phys-Mach map: ffffffff86b00000->ffffffff86f00000
(XEN) [2014-01-01 15:37:20]  Start info:    ffffffff86f00000->ffffffff86f004b4
(XEN) [2014-01-01 15:37:20]  Page tables:   ffffffff86f01000->ffffffff86f3c000
(XEN) [2014-01-01 15:37:20]  Boot stack:    ffffffff86f3c000->ffffffff86f3d000
(XEN) [2014-01-01 15:37:20]  TOTAL:         ffffffff80000000->ffffffff87000000
(XEN) [2014-01-01 15:37:20]  ENTRY ADDRESS: ffffffff81cff210
(XEN) [2014-01-01 15:37:20] Dom0 has maximum 8 VCPUs
(XEN) [2014-01-01 15:37:24] Scrubbing Free RAM: ...........................................................................................................................................................................................................................................................................................................done.
(XEN) [2014-01-01 15:37:29] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-01 15:37:29] Std. Loglevel: All
(XEN) [2014-01-01 15:37:29] Guest Loglevel: All
(XEN) [2014-01-01 15:37:29] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-01 15:37:29] Freed 292kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.8.11-100.fc17.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 4.7.2 20120921 (Red Hat 4.7.2-2) (GCC) ) #1 SMP Wed May 1 19:31:26 UTC 2013
[    0.000000] Command line: placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=xen rd_NO_PLYMOUTH
[    0.000000] Freeing 9b-100 pfn range: 101 pages freed
[    0.000000] Released 101 pages of unused memory
[    0.000000] Set 264741 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80065 pfn range: 101 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x000000000009afff] usable
[    0.000000] Xen: [mem 0x000000000009b800-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000bf63efff] usable
[    0.000000] Xen: [mem 0x00000000bf63f000-0x00000000bf6befff] reserved
[    0.000000] Xen: [mem 0x00000000bf6bf000-0x00000000bf7befff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000bf7bf000-0x00000000bf7fefff] ACPI data
[    0.000000] Xen: [mem 0x00000000bf7ff000-0x00000000bf7fffff] usable
[    0.000000] Xen: [mem 0x00000000bf800000-0x00000000bfffffff] reserved
[    0.000000] Xen: [mem 0x00000000e0000000-0x00000000efffffff] reserved
[    0.000000] Xen: [mem 0x00000000feb00000-0x00000000feb03fff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed10000-0x00000000fed19fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ffd80000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x00000005c0e16fff] usable
[    0.000000] Xen: [mem 0x00000005c0e17000-0x000000083fffffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x5c0e17 max_arch_pfn = 0x400000000
[    0.000000] x2apic enabled by BIOS, switching to x2apic ops
[    0.000000] e820: last_pfn = 0xbf800 max_arch_pfn = 0x400000000
[    0.000000] init_memory_mapping: [mem 0x00000000-0xbf7fffff]
[    0.000000] init_memory_mapping: [mem 0x100000000-0x5c0e16fff]
[    0.000000] RAMDISK: [mem 0x02391000-0x06afffff]
[    0.000000] ACPI: RSDP 00000000000fe020 00024 (v02 INSYDE)
[    0.000000] ACPI: XSDT 00000000bf7fe170 0009C (v01 INSYDE BROMOLOW 00000001      01000013)
[    0.000000] ACPI: FACP 00000000bf7fc000 000F4 (v04 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: DSDT 00000000bf7f2000 05A40 (v01 INSYDE BROMOLOW 00000000 ACPI 00040000)
[    0.000000] ACPI: FACS 00000000bf76e000 00040
[    0.000000] ACPI: ASF! 00000000bf7fd000 000A5 (v32 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: HPET 00000000bf7fb000 00038 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: APIC 00000000bf7fa000 00092 (v02 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: MCFG 00000000bf7f9000 0003C (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: MSDM 00000000bf7f8000 00055 (v03 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SLIC 00000000bf7f1000 00176 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: WDAT 00000000bf7f0000 00224 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: BOOT 00000000bf7ee000 00028 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7ed000 002F6 (v01 INSYDE BROMOLOW 00001000 ACPI 00040000)
[    0.000000] ACPI: SPCR 00000000bf7ec000 00050 (v01 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: ASPT 00000000bf7eb000 00034 (v07 INSYDE BROMOLOW 00000001 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7ea000 00804 (v01 INSYDE BROMOLOW 00003000 ACPI 00040000)
[    0.000000] ACPI: SSDT 00000000bf7e9000 00996 (v01 INSYDE BROMOLOW 00003000 ACPI 00040000)
[    0.000000] ACPI: XMAR 00000000bf7e8000 00088 (v01 INTEL      SNB  00000001 INTL 00000001)
[    0.000000] Setting APIC routing to cluster x2apic.
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x00000005c0e16fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x5c0e16fff]
[    0.000000]   NODE_DATA [mem 0x7da34000-0x7da47fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x5c0e16fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009afff]
[    0.000000]   node   0: [mem 0x00100000-0xbf63efff]
[    0.000000]   node   0: [mem 0xbf7ff000-0xbf7fffff]
[    0.000000]   node   0: [mem 0x100000000-0x5c0e16fff]
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] PM: Registered nosave memory: 000000000009b000 - 000000000009c000
[    0.000000] PM: Registered nosave memory: 000000000009c000 - 0000000000100000
[    0.000000] PM: Registered nosave memory: 00000000bf63f000 - 00000000bf6bf000
[    0.000000] PM: Registered nosave memory: 00000000bf6bf000 - 00000000bf7bf000
[    0.000000] PM: Registered nosave memory: 00000000bf7bf000 - 00000000bf7ff000
[    0.000000] PM: Registered nosave memory: 00000000bf800000 - 00000000c0000000
[    0.000000] PM: Registered nosave memory: 00000000c0000000 - 00000000e0000000
[    0.000000] PM: Registered nosave memory: 00000000e0000000 - 00000000f0000000
[    0.000000] PM: Registered nosave memory: 00000000f0000000 - 00000000feb00000
[    0.000000] PM: Registered nosave memory: 00000000feb00000 - 00000000feb04000
[    0.000000] PM: Registered nosave memory: 00000000feb04000 - 00000000fec00000
[    0.000000] PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
[    0.000000] PM: Registered nosave memory: 00000000fec01000 - 00000000fed10000
[    0.000000] PM: Registered nosave memory: 00000000fed10000 - 00000000fed1a000
[    0.000000] PM: Registered nosave memory: 00000000fed1a000 - 00000000fed1c000
[    0.000000] PM: Registered nosave memory: 00000000fed1c000 - 00000000fed20000
[    0.000000] PM: Registered nosave memory: 00000000fed20000 - 00000000fee00000
[    0.000000] PM: Registered nosave memory: 00000000fee00000 - 00000000fef00000
[    0.000000] PM: Registered nosave memory: 00000000fef00000 - 00000000ffd80000
[    0.000000] PM: Registered nosave memory: 00000000ffd80000 - 0000000100000000
[    0.000000] e820: [mem 0xc0000000-0xdfffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-unstable (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff880066c00000 s84608 r8192 d21888 u262144
[   11.222615] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 5676548
[   11.222619] Policy zone: Normal
[   11.222622] Kernel command line: placeholder root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 console=hvc0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 earlyprintk=xen rd_NO_PLYMOUTH
[   11.223443] PID hash table entries: 4096 (order: 3, 32768 bytes)
[   11.223450] __ex_table already sorted, skipping sort
[   11.223493] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[   11.255707] software IO TLB [mem 0x62c00000-0x66c00000] (64MB) mapped at [ffff880062c00000-ffff880066bfffff]
[   11.269831] Memory: 1528328k/24131676k available (6512k kernel code, 1059028k absent, 21544320k reserved, 6706k data, 1080k init)
[   11.269946] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[   11.269998] Hierarchical RCU implementation.
[   11.270001]  RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=8.
[   11.270015] NR_IRQS:8448 nr_irqs:744 16
[   11.270119] xen: sci override: global_irq=9 trigger=0 polarity=0
[   11.270162] xen: acpi sci 9
[   11.270550] Console: colour dummy device 80x25
[   11.270555] console [hvc0] enabled, bootconsole disabled
[   11.270555] console [hvc0] enabled, bootconsole disabled
[   11.283830] allocated 92798976 bytes of page_cgroup
[   11.283837] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[   11.283912] installing Xen timer for CPU 0
[   11.283951] tsc: Detected 2400.076 MHz processor
[   11.283959] Calibrating delay loop (skipped), value calculated using timer frequency.. 4800.15 BogoMIPS (lpj=2400076)
[   11.283966] pid_max: default: 32768 minimum: 301
[   11.284030] Security Framework initialized
[   11.284040] SELinux:  Initializing.
[   11.291424] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
[   11.303321] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
[   11.307426] Mount-cache hash table entries: 256
[   11.307763] Initializing cgroup subsys cpuacct
[   11.307768] Initializing cgroup subsys memory
[   11.307800] Initializing cgroup subsys devices
[   11.307804] Initializing cgroup subsys freezer
[   11.307809] Initializing cgroup subsys net_cls
[   11.307813] Initializing cgroup subsys blkio
[   11.307816] Initializing cgroup subsys perf_event
[   11.307912] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[   11.307912] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[   11.307922] CPU: Physical Processor ID: 0
[   11.307925] CPU: Processor Core ID: 0
[   11.307929] mce: CPU supports 2 MCE banks
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.307972] Last level iTLB entries: 4KB 512, 2MB 0, 4MB 0
[   11.307972] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32
[   11.307972] tlb_flushall_shift: 5
[   11.308107] Freeing SMP alternatives: 24k freed
[   11.310286] ACPI: Core revision 20121018
[   11.327224] ftrace: allocating 24318 entries in 95 pages
[   11.345107] Performance Events: unsupported p6 CPU model 42 no PMU driver, software events only.
[   11.346751] NMI watchdog: disabled (cpu0): hardware events not enabled
[   11.346848] installing Xen timer for CPU 1
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.347319] installing Xen timer for CPU 2
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.347717] installing Xen timer for CPU 3
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348059] installing Xen timer for CPU 4
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348442] installing Xen timer for CPU 5
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.348779] installing Xen timer for CPU 6
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.349136] installing Xen timer for CPU 7
(XEN) [2014-01-01 15:37:31] traps.c:2516:d0 Domain attempted WRMSR 000000000000082f from 0x00000000000000f2 to 0x00000000000000f9.
[   11.349343] Brought up 8 CPUs
[   11.349825] devtmpfs: initialized
[   11.350345] PM: Registering ACPI NVS region [mem 0xbf6bf000-0xbf7befff] (1048576 bytes)
[   11.351325] atomic64 test passed for x86-64 platform with CX8 and with SSE
[   11.351354] Grant tables using version 2 layout.
[   11.351374] Grant table initialized
[   11.351414] RTC time: 15:37:31, date: 01/01/14
[   11.351473] NET: Registered protocol family 16
[   11.351906] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[   11.351913] ACPI: bus type pci registered
[   11.352129] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[   11.352139] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
[   11.411030] PCI: Using configuration type 1 for base access
[   11.412351] bio: create slab <bio-0> at 0
[   11.412497] ACPI: Added _OSI(Module Device)
[   11.412502] ACPI: Added _OSI(Processor Device)
[   11.412505] ACPI: Added _OSI(3.0 _SCP Extensions)
[   11.412509] ACPI: Added _OSI(Processor Aggregator Device)
[   11.415302] ACPI: Executed 1 blocks of module-level executable AML code
[   11.421127] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[   11.421668] ACPI: SSDT 00000000bf6b0798 00727 (v01  PmRef  Cpu0Cst 00003001 INTL 20100121)
[   11.422023] ACPI: Dynamic OEM Table Load:
[   11.422029] ACPI: SSDT           (null) 00727 (v01  PmRef  Cpu0Cst 00003001 INTL 20100121)
[   11.424574] ACPI: SSDT 00000000bf6b1a98 00303 (v01  PmRef    ApIst 00003000 INTL 20100121)
[   11.424962] ACPI: Dynamic OEM Table Load:
[   11.424967] ACPI: SSDT           (null) 00303 (v01  PmRef    ApIst 00003000 INTL 20100121)
[   11.427551] ACPI: SSDT 00000000bf6afd98 00119 (v01  PmRef    ApCst 00003000 INTL 20100121)
[   11.427897] ACPI: Dynamic OEM Table Load:
[   11.427903] ACPI: SSDT           (null) 00119 (v01  PmRef    ApCst 00003000 INTL 20100121)
[   11.431978] ACPI: Interpreter enabled
[   11.431986] ACPI: (supports S0 S1 S3 S4 S5)
[   11.432018] ACPI: Using IOAPIC for interrupt routing
[   11.436050] ACPI: Power Resource [FN00] (off)
[   11.436145] ACPI: Power Resource [FN01] (off)
[   11.436251] ACPI: Power Resource [FN02] (off)
[   11.436343] ACPI: Power Resource [FN03] (off)
[   11.436433] ACPI: Power Resource [FN04] (off)
[   11.436875] ACPI: No dock devices found.
[   11.436882] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[   11.437237] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
[   11.438076] PCI host bridge to bus 0000:00
[   11.438083] pci_bus 0000:00: root bus resource [bus 00-fe]
[   11.438088] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[   11.438092] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[   11.438097] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[   11.438102] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfeafffff]
[   11.443293] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.445422] pci 0000:00:01.1: PCI bridge to [bus 02]
[   11.447553] pci 0000:00:01.2: PCI bridge to [bus 03]
[   11.448712] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.449873] pci 0000:00:1c.1: PCI bridge to [bus 05]
[   11.451138] pci 0000:00:1c.2: PCI bridge to [bus 06]
[   11.453932] pci 0000:00:1c.4: PCI bridge to [bus 07]
[   11.457128] pci 0000:00:1c.5: PCI bridge to [bus 08]
[   11.457348] pci 0000:00:1e.0: PCI bridge to [bus 09] (subtractive decode)
[   11.457946]  pci0000:00: ACPI _OSC support notification failed, disabling PCIe ASPM
[   11.457952]  pci0000:00: Unable to request _OSC control (_OSC support mask: 0x08)
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:00.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:01.2
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.2
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.4
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1c.5
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1e.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1f.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:00:1f.3
(XEN) [2014-01-01 15:37:31] PCI add device 0000:01:00.0
(XEN) [2014-01-01 15:37:31] PCI add device 0000:01:00.1
(XEN) [2014-01-01 15:37:31] PCI add device 0000:02:00.0
(XEN) [2014-01-01 15:37:31] PCI add dev[   20.095173] IPv6: ADDRCONF(NETDEV_UP): eth3: link is not ready
[   20.095671] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.095969] IPv6: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
network[738]: Bringing up interface eth3:  [  OK  ]
[   20.328040] IPv6: ADDRCONF(NETDEV_UP): eth4: link is not ready
[   20.328699] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.329009] IPv6: ADDRCONF(NETDEV_CHANGE): eth4: link becomes ready
network[738]: Bringing up interface eth4:  [  OK  ]
[   20.561955] IPv6: ADDRCONF(NETDEV_UP): eth5: link is not ready
[   20.562695] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.563003] IPv6: ADDRCONF(NETDEV_CHANGE): eth5: link becomes ready
network[738]: Bringing up interface eth5:  [  OK  ]
[   20.794772] IPv6: ADDRCONF(NETDEV_UP): eth6: link is not ready
[   20.795704] e1000: eth6 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   20.796014] IPv6: ADDRCONF(NETDEV_CHANGE): eth6: link becomes ready
network[738]: Bringing up interface eth6:  [  OK  ]
[   21.028076] IPv6: ADDRCONF(NETDEV_UP): eth7: link is not ready
[   21.028724] e1000: eth7 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   21.029023] IPv6: ADDRCONF(NETDEV_CHANGE): eth7: link becomes ready
network[738]: Bringing up interface eth7:  [  OK  ]
[   22.118825] 8021q: 802.1Q VLAN Support v1.8
[   22.118846] 8021q: adding VLAN 0 to HW filter on device eth0
[   22.118869] 8021q: adding VLAN 0 to HW filter on device eth1
[   22.118890] 8021q: adding VLAN 0 to HW filter on device eth3
[   22.118918] 8021q: adding VLAN 0 to HW filter on device eth4
[   22.118947] 8021q: adding VLAN 0 to HW filter on device eth5
[   22.118975] 8021q: adding VLAN 0 to HW filter on device eth6
[   22.119004] 8021q: adding VLAN 0 to HW filter on device eth7
[   22.183337] device eth3.23 entered promiscuous mode
[   22.183388] device eth3 entered promiscuous mode
network[738]: Bringing up interface eth3.23:  [  OK  ]
[   22.294045] device eth4.24 entered promiscuous mode
[   22.294067] device eth4 entered promiscuous mode
network[738]: Bringing up interface eth4.24:  [  OK  ]
[   22.405486] device eth5.25 entered promiscuous mode
[   22.405507] device eth5 entered promiscuous mode
network[738]: Bringing up interface eth5.25:  [  OK  ]
[   22.514847] device eth6.26 entered promiscuous mode
[   22.514869] device eth6 entered promiscuous mode
network[738]: Bringing up interface eth6.26:  [  OK  ]
[   22.624938] device eth7.27 entered promiscuous mode
[   22.624960] device eth7 entered promiscuous mode
network[738]: Bringing up interface eth7.27:  [  OK  ]
network[738]: Bringing up interface xenbr0:
[   22.712708] xenbr0: port 1(eth0) entered forwarding state
[   22.712721] xenbr0: port 1(eth0) entered forwarding state
network[738]: Determining IP information for xenbr0... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr1:
[   25.186092] xenbr1: port 1(eth1) entered forwarding state
[   25.186110] xenbr1: port 1(eth1) entered forwarding state
network[738]: Determining IP information for xenbr1... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr3:
[   27.707860] xenbr3: port 1(eth3.23) entered forwarding state
[   27.707880] xenbr3: port 1(eth3.23) entered forwarding state
network[738]: Determining IP information for xenbr3... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr4:
[   30.224960] xenbr4: port 1(eth4.24) entered forwarding state
[   30.224979] xenbr4: port 1(eth4.24) entered forwarding state
network[738]: Determining IP information for xenbr4... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr5:
[   32.742655] xenbr5: port 1(eth5.25) entered forwarding state
[   32.742674] xenbr5: port 1(eth5.25) entered forwarding state
network[738]: Determining IP information for xenbr5... done.
network[738]: [  OK  ]
network[738]: Bringing up interface xenbr6:
[   35.257714] xenbr6: port 1(eth6.26) entered forwarding state
[   35.257734] xenbr6: port 1(eth6.26) entered forwarding state
network[738]: Determining IP information for xenbr6... done.
network[738]: [  OK  ]
[   37.720117] xenbr0: port 1(eth0) entered forwarding state
network[738]: Bringing up interface xenbr7:
[   37.775576] xenbr7: port 1(eth7.27) entered forwarding state
[   37.775595] xenbr7: port 1(eth7.27) entered forwarding state
network[738]: Determining IP information for xenbr7... done.
network[738]: [  OK  ]
[   40.216233] xenbr1: port 1(eth1) entered forwarding state
[  OK  ] Started LSB: Bring up/down networking.
[  OK  ] Reached target Network.
         Starting OpenSSH server daemon...
         Starting OpenVPN Robust And Highly Flexible Tunnelin...e/client/udp...
         Starting RPC bind service...
[  OK  ] Started OpenSSH server daemon.
[  OK  ] Started OpenVPN Robust And Highly Flexible Tunneling...rge/client/udp.
[  OK  ] Started RPC bind service.
         Starting NFS file locking service....
[  OK  ] Started NFS file locking service..
[  OK  ] Reached target Remote File Systems (Pre).
         Mounting /filer...
         Mounting /scratch...
[   40.491025] FS-Cache: Loaded
[   40.491596] Key type dns_resolver registered
[   40.493979] FS-Cache: Netfs 'nfs' registered for caching
         Mounting /isos...
[   40.502181] NFS: Registering the id_resolver key type
[   40.502201] Key type id_resolver registered
[   40.502205] Key type id_legacy registered

Fedora release 17 (Beefy Miracle)
Kernel 3.8.11-100.fc17.x86_64 on an x86_64 (hvc0)

dcs-xen-54 login: [  133.931861] device vif1.0 entered promiscuous mode
[  133.935393] IPv6: ADDRCONF(NETDEV_UP): vif1.0: link is not ready
[  134.070915] device vif1.0-emu entered promiscuous mode
[  134.074008] xenbr1: port 3(vif1.0-emu) entered forwarding state
[  134.074019] xenbr1: port 3(vif1.0-emu) entered forwarding state
(d1) [2014-01-01 15:39:34] HVM Loader
(d1) [2014-01-01 15:39:34] Detected Xen v4.4-unstable
(d1) [2014-01-01 15:39:34] Xenbus rings @0xfeffc000, event channel 4
(d1) [2014-01-01 15:39:34] System requested SeaBIOS
(d1) [2014-01-01 15:39:34] CPU speed is 2400 MHz
(d1) [2014-01-01 15:39:34] Relocating guest memory for lowmem MMIO space disabled
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(d1) [2014-01-01 15:39:34] PCI-ISA link 0 routed to IRQ5
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(d1) [2014-01-01 15:39:34] PCI-ISA link 1 routed to IRQ10
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(d1) [2014-01-01 15:39:34] PCI-ISA link 2 routed to IRQ11
(XEN) [2014-01-01 15:39:34] irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(d1) [2014-01-01 15:39:34] PCI-ISA link 3 routed to IRQ5
(d1) [2014-01-01 15:39:34] pci dev 01:3 INTA->IRQ10
(d1) [2014-01-01 15:39:34] pci dev 02:0 INTA->IRQ11
(d1) [2014-01-01 15:39:34] pci dev 04:0 INTA->IRQ5
(d1) [2014-01-01 15:39:34] RAM in high memory; setting high_mem resource base to 10c000000
(d1) [2014-01-01 15:39:34] pci dev 02:0 bar 14 size 001000000: 0f0000008
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 10 size 001000000: 0f1000008
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 30 size 000040000: 0f2000000
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 10 size 000020000: 0f2040000
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 30 size 000010000: 0f2060000
(d1) [2014-01-01 15:39:34] pci dev 03:0 bar 18 size 000001000: 0f2070000
(d1) [2014-01-01 15:39:34] pci dev 02:0 bar 10 size 000000100: 00000c001
(d1) [2014-01-01 15:39:34] pci dev 04:0 bar 14 size 000000040: 00000c101
(d1) [2014-01-01 15:39:34] pci dev 01:1 bar 20 size 000000010: 00000c141
(d1) [2014-01-01 15:39:34] Multiprocessor initialisation:
(d1) [2014-01-01 15:39:34]  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1) [2014-01-01 15:39:34]  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d1) [2014-01-01 15:39:34] Writing SMBIOS tables ...
(d1) [2014-01-01 15:39:34] Loading SeaBIOS ...
(d1) [2014-01-01 15:39:34] Creating MP tables ...
(d1) [2014-01-01 15:39:34] Loading ACPI ...
(d1) [2014-01-01 15:39:34] vm86 TSS at fc00a080
(d1) [2014-01-01 15:39:34] BIOS map:
(d1) [2014-01-01 15:39:34]  10000-100d3: Scratch space
(d1) [2014-01-01 15:39:34]  e0000-fffff: Main BIOS
(d1) [2014-01-01 15:39:34] E820 table:
(d1) [2014-01-01 15:39:34]  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(d1) [2014-01-01 15:39:34]  HOLE: 00000000:000a0000 - 00000000:000e0000
(d1) [2014-01-01 15:39:34]  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d1) [2014-01-01 15:39:34]  [02]: 00000000:00100000 - 00000000:f0000000: RAM
(d1) [2014-01-01 15:39:34]  HOLE: 00000000:f0000000 - 00000000:fc000000
(d1) [2014-01-01 15:39:34]  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d1) [2014-01-01 15:39:34]  [04]: 00000001:00000000 - 00000001:0c000000: RAM
(d1) [2014-01-01 15:39:34] Invoking SeaBIOS ...
(d1) [2014-01-01 15:39:34] SeaBIOS (version rel-1.7.3.1-0-g7d9cbe6-20131231_113608-dcs-xen-54)
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:34] Found Xen hypervisor signature at 40000000
(d1) [2014-01-01 15:39:34] xen: copy e820...
(d1) [2014-01-01 15:39:34] Relocating init from 0x000e2001 to 0xeffe0600 (size 63795)
(d1) [2014-01-01 15:39:34] CPU Mhz=2401
(d1) [2014-01-01 15:39:34] Found 7 PCI devices (max PCI bus is 00)
(d1) [2014-01-01 15:39:34] Allocated Xen hypercall page at effff000
(d1) [2014-01-01 15:39:34] Detected Xen v4.4-unstable
(d1) [2014-01-01 15:39:34] xen: copy BIOS tables...
(d1) [2014-01-01 15:39:34] Copying SMBIOS entry point from 0x00010010 to 0x000f1920
(d1) [2014-01-01 15:39:34] Copying MPTABLE from 0xfc001170/fc001180 to 0x000f1820
(d1) [2014-01-01 15:39:34] Copying PIR from 0x00010030 to 0x000f17a0
(d1) [2014-01-01 15:39:34] Copying ACPI RSDP from 0x000100b0 to 0x000f1770
(d1) [2014-01-01 15:39:34] Using pmtimer, ioport 0xb008, freq 3579 kHz
(d1) [2014-01-01 15:39:34] Scan for VGA option rom
(d1) [2014-01-01 15:39:34] WARNING! Found unaligned PCI rom (vd=1234:1111)
(d1) [2014-01-01 15:39:34] Running option rom at c000:0003
(XEN) [2014-01-01 15:39:34] stdvga.c:147:d1 entering stdvga and caching modes
(d1) [2014-01-01 15:39:34] Turning on vga text mode console
(d1) [2014-01-01 15:39:34] SeaBIOS (version rel-1.7.3.1-0-g7d9cbe6-20131231_113608-dcs-xen-54)
(d1) [2014-01-01 15:39:34] Machine UUID d60df350-6607-4732-bb87-cf0a43ef0c78
(d1) [2014-01-01 15:39:34] Found 0 lpt ports
(d1) [2014-01-01 15:39:34] Found 1 serial ports
(d1) [2014-01-01 15:39:34] ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
(d1) [2014-01-01 15:39:34] ATA controller 2 at 170/374/0 (irq 15 dev 9)
(d1) [2014-01-01 15:39:34] ata0-0: QEMU HARDDISK ATA-7 Hard-Disk (20480 MiBytes)
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@0/disk@0
(d1) [2014-01-01 15:39:34] DVD/CD [ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD]
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@0
(d1) [2014-01-01 15:39:34] DVD/CD [ata1-1: QEMU DVD-ROM ATAPI-4 DVD/CD]
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@1
(d1) [2014-01-01 15:39:34] PS2 keyboard initialized
(d1) [2014-01-01 15:39:34] All threads complete.
(d1) [2014-01-01 15:39:34] Scan for option roms
(d1) [2014-01-01 15:39:34] Running option rom at ca00:0003
(d1) [2014-01-01 15:39:34] pmm call arg1=1
(d1) [2014-01-01 15:39:34] pmm call arg1=0
(d1) [2014-01-01 15:39:34] pmm call arg1=1
(d1) [2014-01-01 15:39:34] pmm call arg1=0
(d1) [2014-01-01 15:39:34] Searching bootorder for: /pci@i0cf8/*@4
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:34] Press F12 for boot menu.
(d1) [2014-01-01 15:39:34] 
(d1) [2014-01-01 15:39:36] Searching bootorder for: HALT
(d1) [2014-01-01 15:39:36] drive 0x000f1720: PCHS=16383/16/63 translation=lba LCHS=1024/255/63 s=41943040
(d1) [2014-01-01 15:39:36] Space available for UMB: cb000-ee800, f0000-f1690
(d1) [2014-01-01 15:39:36] Returned 61440 bytes of ZoneHigh
(d1) [2014-01-01 15:39:36] e820 map has 7 items:
(d1) [2014-01-01 15:39:36]   0: 0000000000000000 - 000000000009fc00 = 1 RAM
(d1) [2014-01-01 15:39:36]   1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   3: 0000000000100000 - 00000000effff000 = 1 RAM
(d1) [2014-01-01 15:39:36]   4: 00000000effff000 - 00000000f0000000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   5: 00000000fc000000 - 0000000100000000 = 2 RESERVED
(d1) [2014-01-01 15:39:36]   6: 0000000100000000 - 000000010c000000 = 1 RAM
(d1) [2014-01-01 15:39:36] enter handle_19:
(d1) [2014-01-01 15:39:36]   NULL
(d1) [2014-01-01 15:39:36] Booting from Hard Disk...
(d1) [2014-01-01 15:39:36] Booting from 0000:7c00
[  149.112714] xenbr1: port 3(vif1.0-emu) entered forwarding state
(XEN) [2014-01-01 15:40:12] ----[ Xen-4.4-unstable  x86_64  debug=n  Not tainted ]----
(XEN) [2014-01-01 15:40:12] CPU:    5
(XEN) [2014-01-01 15:40:12] RIP:    e008:[<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12] RFLAGS: 0000000000010216   CONTEXT: hypervisor
(XEN) [2014-01-01 15:40:12] rax: 0000000000000000   rbx: ffff8300104c6000   rcx: 00000000000000ff
(XEN) [2014-01-01 15:40:12] rdx: ffff830000000000   rsi: ffffffffffffffff   rdi: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] rbp: 0000000000000007   rsp: ffff830823fdfcf0   r8:  00000000000104c6
(XEN) [2014-01-01 15:40:12] r9:  00000000104c7000   r10: 0000000000000000   r11: 00000000004c6000
(XEN) [2014-01-01 15:40:12] r12: ffff83083fb1bdb0   r13: ffff83083fb1bcc0   r14: 00000000104c6000
(XEN) [2014-01-01 15:40:12] r15: 0000000000100000   cr0: 0000000080050033   cr4: 00000000000426f0
(XEN) [2014-01-01 15:40:12] cr3: 0000000650813000   cr2: ffff8300104c6000
(XEN) [2014-01-01 15:40:12] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-01 15:40:12] Xen stack trace from rsp=ffff830823fdfcf0:
(XEN) [2014-01-01 15:40:12]    ffff82d080161dd1 ffff83083fb1bdb0 ffff82d080114d96 ffff830823fb7000
(XEN) [2014-01-01 15:40:12]    ffff82e0002098c0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bdb0 0000000000000007 ffff830823fdfde0 ffff83083fb1bcc0
(XEN) [2014-01-01 15:40:12]    ffff82d0801150f4 0000000000000010 ffff83083fb1bd80 00000000000000e0
(XEN) [2014-01-01 15:40:12]    00000000fffffff2 0000000010000000 0000000020000000 ffff830823fdfde0
(XEN) [2014-01-01 15:40:12]    000000000000003e 0000000000000003 ffff82d0801152ec 00007f0df697d004
(XEN) [2014-01-01 15:40:12]    ffff83083fb1bcc0 00007ffffa0b1bd0 ffff880055784228 00007ffffa0b1bd0
(XEN) [2014-01-01 15:40:12]    ffff82d0801143e1 0000000000000002 0000000000000000 ffff83083f4bebe8
(XEN) [2014-01-01 15:40:12]    ffff82d08017c28a 0000000000000000 ffff830823fb70b0 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000000083f4be ffff82d08012a6cb ffff8300bf2f9060 ffff82d0802ea620
(XEN) [2014-01-01 15:40:12]    ffff830823fd8000 00000007003e0001 00007f0df697e004 000000001ff53720
(XEN) [2014-01-01 15:40:12]    0000000100000007 ffff82e0107e97c0 ffff830823fb7000 0000000000000007
(XEN) [2014-01-01 15:40:12]    0000000000000001 ffff83083f4be000 ffff8300bf2f9000 000000000083f4be
(XEN) [2014-01-01 15:40:12]    ffff82d080218a58 ffff830823fdff18 ffff82d080218b32 ffff830823fd8000
(XEN) [2014-01-01 15:40:12]    0000000000000000 0000000000000217 00000032fd4ee0a7 0000000000000100
(XEN) [2014-01-01 15:40:12]    00000032fd4ee0a7 0000000000000033 ffff8300bf2f9000 ffff880057605e88
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1bd0 ffff880055784228 ffff82d0801144ab 0000000000000000
(XEN) [2014-01-01 15:40:12]    ffff82d08021df79 00000026d8eb3c18 00000026d8f9b240 0000000000000000
(XEN) [2014-01-01 15:40:12]    000000280f8095dc ffff880057605e88 ffff880005331c00 0000000000000286
(XEN) [2014-01-01 15:40:12]    00007ffffa0b1d90 ffff88000561c180 000000001ff53720 0000000000000025
(XEN) [2014-01-01 15:40:12] Xen call trace:
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021f1c9>] clear_page_sse2+0x9/0x30
(XEN) [2014-01-01 15:40:12]    [<ffff82d080161dd1>] clear_domain_page+0x11/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d080114d96>] kimage_alloc_control_page+0x246/0x2d0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801150f4>] do_kimage_alloc+0x1c4/0x2e0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801152ec>] kimage_alloc+0xdc/0x100
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801143e1>] do_kexec_op_internal+0x5f1/0x6b0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08017c28a>] do_mmu_update+0x34a/0x1bf0
(XEN) [2014-01-01 15:40:12]    [<ffff82d08012a6cb>] add_entry+0x4b/0xb0
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218a58>] toggle_guest_mode+0x28/0x40
(XEN) [2014-01-01 15:40:12]    [<ffff82d080218b32>] do_iret+0xc2/0x1a0
(XEN) [2014-01-01 15:40:12]    [<ffff82d0801144ab>] do_kexec_op+0xb/0x20
(XEN) [2014-01-01 15:40:12]    [<ffff82d08021df79>] syscall_enter+0xa9/0xae
(XEN) [2014-01-01 15:40:12] 
(XEN) [2014-01-01 15:40:12] Pagetable walk from ffff8300104c6000:
(XEN) [2014-01-01 15:40:12]  L4[0x106] = 00000000bf468063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L3[0x000] = 00000000bf462063 ffffffffffffffff
(XEN) [2014-01-01 15:40:12]  L2[0x082] = 0000000000000000 ffffffffffffffff 
(XEN) [2014-01-01 15:40:16] 
(XEN) [2014-01-01 15:40:16] ****************************************
(XEN) [2014-01-01 15:40:16] Panic on CPU 5:
(XEN) [2014-01-01 15:40:16] FATAL PAGE FAULT
(XEN) [2014-01-01 15:40:16] [error_code=0002]
(XEN) [2014-01-01 15:40:16] Faulting linear address: ffff8300104c6000
(XEN) [2014-01-01 15:40:17] ****************************************
(XEN) [2014-01-01 15:40:17] 
(XEN) [2014-01-01 15:40:17] Reboot in five seconds...



--------------040403030203090802050606
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------040403030203090802050606--


From xen-devel-bounces@lists.xen.org Thu Jan 02 15:10:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyjuK-0007Vz-IM; Thu, 02 Jan 2014 15:10:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VyjuI-0007Vt-UM
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 15:09:59 +0000
Received: from [193.109.254.147:35209] by server-15.bemta-14.messagelabs.com
	id D3/8C-22186-64185C25; Thu, 02 Jan 2014 15:09:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388675396!4993737!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10937 invoked from network); 2 Jan 2014 15:09:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:09:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89246025"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:09:47 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	10:09:46 -0500
Message-ID: <52C58138.1030301@citrix.com>
Date: Thu, 2 Jan 2014 15:09:44 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1387206250-13963-1-git-send-email-konrad.wilk@oracle.com>	<1387206250-13963-2-git-send-email-konrad.wilk@oracle.com>	<52B01F67.1030108@m2r.biz>	<20131217145150.GC4683@phenom.dumpdata.com>	<20131217212333.GA31966@phenom.dumpdata.com>
	<20131231143258.GA3018@phenom.dumpdata.com>
In-Reply-To: <20131231143258.GA3018@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: leosilva@linux.vnet.ibm.com, linux-fbdev@vger.kernel.org,
	linux-pci@vger.kernel.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	peterhuewe@gmx.de, tomi.valkeinen@ti.com,
	linux-input@vger.kernel.org, xen-devel@lists.xenproject.org,
	plagnioj@jcrosoft.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, ashley@ashleylai.com,
	tpmdd@selhorst.net, bhelgaas@google.com,
	boris.ostrovsky@oracle.com, axboe@kernel.dk,
	netdev@vger.kernel.org, dmitry.torokhov@gmail.com,
	linux-kernel@vger.kernel.org, mail@srajiv.net,
	tpmdd-devel@lists.sourceforge.net, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v3 1/2] xen/pvhvm: If xen_platform_pci=0 is
 set don't blow up (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/12/13 14:32, Konrad Rzeszutek Wilk wrote:
>> That is because 'disks' is incorrect. It should have been 'ide-disks'
>>
>> [    0.000000] unrecognised option 'disks' in parameter 'xen_emul_unplug'
>>
>> With the 'ide-disks' it should work. I will update the description to
>> mention 'ide-disks' instead of 'disks'. Thank you for finding this!
>>
> 
> I've v4 with said update and will push it to Linus shortly.
> 
> Thanks!
> 
> P.S.
> Here is v4:
> 
>>From 275a81e7496d3532e5b4752703c50a7c8355a6c7 Mon Sep 17 00:00:00 2001
> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Date: Tue, 26 Nov 2013 15:05:40 -0500
> Subject: [PATCH] xen/pvhvm: If xen_platform_pci=0 is set don't blow up (v4).
> 
> The user has the option of disabling the platform driver:
> 00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
> 
> which is used to unplug the emulated drivers (IDE, Realtek 8169, etc)
> and allow the PV drivers to take over. If the user wishes
> to disable that they can set:
> 
>   xen_platform_pci=0
>   (in the guest config file)
> 
> or
>   xen_emul_unplug=never
>   (on the Linux command line)
> 
> except it does not work properly. The PV drivers still try to
> load and since the Xen platform driver is not run - and it
> has not initialized the grant tables, most of the PV drivers
> stumble upon:
> 
> input: Xen Virtual Keyboard as /devices/virtual/input/input5
> input: Xen Virtual Pointer as /devices/virtual/input/input6M
> ------------[ cut here ]------------
> kernel BUG at /home/konrad/ssd/konrad/linux/drivers/xen/grant-table.c:1206!
> invalid opcode: 0000 [#1] SMP
> Modules linked in: xen_kbdfront(+) xenfs xen_privcmd
> CPU: 6 PID: 1389 Comm: modprobe Not tainted 3.13.0-rc1upstream-00021-ga6c892b-dirty #1
> Hardware name: Xen HVM domU, BIOS 4.4-unstable 11/26/2013
> RIP: 0010:[<ffffffff813ddc40>]  [<ffffffff813ddc40>] get_free_entries+0x2e0/0x300
> Call Trace:
>  [<ffffffff8150d9a3>] ? evdev_connect+0x1e3/0x240
>  [<ffffffff813ddd0e>] gnttab_grant_foreign_access+0x2e/0x70
>  [<ffffffffa0010081>] xenkbd_connect_backend+0x41/0x290 [xen_kbdfront]
>  [<ffffffffa0010a12>] xenkbd_probe+0x2f2/0x324 [xen_kbdfront]
>  [<ffffffff813e5757>] xenbus_dev_probe+0x77/0x130
>  [<ffffffff813e7217>] xenbus_frontend_dev_probe+0x47/0x50
>  [<ffffffff8145e9a9>] driver_probe_device+0x89/0x230
>  [<ffffffff8145ebeb>] __driver_attach+0x9b/0xa0
>  [<ffffffff8145eb50>] ? driver_probe_device+0x230/0x230
>  [<ffffffff8145eb50>] ? driver_probe_device+0x230/0x230
>  [<ffffffff8145cf1c>] bus_for_each_dev+0x8c/0xb0
>  [<ffffffff8145e7d9>] driver_attach+0x19/0x20
>  [<ffffffff8145e260>] bus_add_driver+0x1a0/0x220
>  [<ffffffff8145f1ff>] driver_register+0x5f/0xf0
>  [<ffffffff813e55c5>] xenbus_register_driver_common+0x15/0x20
>  [<ffffffff813e76b3>] xenbus_register_frontend+0x23/0x40
>  [<ffffffffa0015000>] ? 0xffffffffa0014fff
>  [<ffffffffa001502b>] xenkbd_init+0x2b/0x1000 [xen_kbdfront]
>  [<ffffffff81002049>] do_one_initcall+0x49/0x170
> 
> .. snip..
> 
> which is hardly nice. This patch fixes this by having each
> PV driver check for:
>  - if running in PV, then it is fine to execute (as that is their
>    native environment).
>  - if running in HVM, check if user wanted 'xen_emul_unplug=never',
>    in which case bail out and don't load any PV drivers.
>  - if running in HVM, and if PCI device 5853:0001 (xen_platform_pci)
>    does not exist, then bail out and not load PV drivers.
>  - (v2) if running in HVM, and if the user wanted 'xen_emul_unplug=ide-disks',
>    then bail out for all PV devices _except_ the block one.
>    Ditto for the network one ('nics').
>  - (v2) if running in HVM, and if the user wanted 'xen_emul_unplug=unnecessary'
>    then load block PV driver, and also setup the legacy IDE paths.
>    In (v3) make it actually load PV drivers.
[...]
> --- a/arch/x86/xen/platform-pci-unplug.c
> +++ b/arch/x86/xen/platform-pci-unplug.c
> @@ -69,6 +69,80 @@ static int check_platform_magic(void)
>  	return 0;
>  }
>  
> +bool xen_has_pv_devices()
> +{
> +	if (!xen_domain())
> +		return false;
> +
> +	/* PV domains always have them. */
> +	if (xen_pv_domain())
> +		return true;
> +
> +	/* And user has xen_platform_pci=0 set in guest config as
> +	 * driver did not modify the value. */
> +	if (xen_platform_pci_unplug == 0)
> +		return false;
> +
> +	if (xen_platform_pci_unplug & XEN_UNPLUG_NEVER)
> +		return false;
> +
> +	if (xen_platform_pci_unplug & XEN_UNPLUG_ALL)
> +		return true;
> +
> +	/* This is an odd one - we are going to run legacy
> +	 * and PV drivers at the same time. */
> +	if (xen_platform_pci_unplug & XEN_UNPLUG_UNNECESSARY)
> +		return true;
> +
> +	/* And the caller has to follow with xen_pv_{disk,nic}_devices
> +	 * to be certain which driver can load. */
> +	return false;

This may result in:

xen_has_pv_devices() == false
xen_has_pv_disk_devices() == true

which looks odd to me.  Surely xen_has_pv_*_devices() is a subset of
xen_has_pv_devices()?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:10:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyjul-0007as-0x; Thu, 02 Jan 2014 15:10:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyjuk-0007aj-9G
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:10:26 +0000
Received: from [193.109.254.147:43301] by server-11.bemta-14.messagelabs.com
	id 9F/73-20576-16185C25; Thu, 02 Jan 2014 15:10:25 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1388675423!8477080!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6460 invoked from network); 2 Jan 2014 15:10:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:10:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87057608"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:10:22 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	10:10:21 -0500
Message-ID: <52C5815C.5020908@citrix.com>
Date: Thu, 2 Jan 2014 15:10:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1387387710.28680.88.camel@kazak.uk.xensource.com>	
	<52B1EC43.7030408@cantab.net>
	<1387447808.9925.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1387447808.9925.24.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Julien
	Grall <julien.grall@citrix.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>,
	David Vrabel <dvrabel@cantab.net>, Stefano.Stabellini@citrix.com, Boris
	Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC] xen: arm: use uncached foreign mappings
 when building guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/12/13 10:10, Ian Campbell wrote:
> On Wed, 2013-12-18 at 18:41 +0000, David Vrabel wrote:
>> On 18/12/2013 17:28, Ian Campbell wrote:
>>> When building an ARM guest we need to take care of cache maintenance
>>> because the guest starts with MMU and cache disabled, which means we
>>> need to make sure that the initial images (kernel, initrd, dtb) which we
>>> write to guest memory are not in the cache.
>>>
>>> We thought we had solved this with "tools: libxc: flush data cache after
>>> loading images into guest memory" (a0035ecc0d82) however it turns out
>>> that there are a couple of issues with this approach:
>>>
>>> Firstly we need to do a cache flush from userspace, on arm64 this is
>>> possible by directly using the instructions from userspace, but on arm32
>>> this is not possible and so we need to use a system call. Unfortunately
>>> the system call provided by Linux for this purpose does not flush far
>>> enough down the cache hierarchy. Extending the system call would not be
>>> an insurmountable barrier, were it not for the second issue:
>>>
>>> Secondly, and more importantly, Catalin Marinas points out (via Marc
>>> Zyngier) that there is a race between the cache flush and the point
>>> where we tear down the mappings, where the processor might speculatively
>>> pull some data into the cache (cache flushes are by Virtual Address, so
>>> this race is unavoidable).
>>>
>>> If this happens then guest kernels which modify some code/data before
>>> enabling MMUs + caches may see stale data in the cache.
>>
>> Would this same problem with occur with save/restore of a guest that has
>> caching disabled?
> 
> We basically only support guests running without caches at start of day.
> We expect and require them to come up and enable caches and then keep
> them enabled. Fortunately this is normal practice...

There is still a (small) window where a guest is running with caches
disabled and it may be saved/migrated at this point.  Is this a concern?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:32:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykGG-00008t-C3; Thu, 02 Jan 2014 15:32:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VykGE-00008o-2X
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 15:32:38 +0000
Received: from [193.109.254.147:50453] by server-16.bemta-14.messagelabs.com
	id D0/07-20600-59685C25; Thu, 02 Jan 2014 15:32:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1388676755!8510123!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11843 invoked from network); 2 Jan 2014 15:32:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:32:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87065090"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:32:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	10:32:34 -0500
Message-ID: <52C58691.4040502@citrix.com>
Date: Thu, 2 Jan 2014 15:32:33 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
	PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> In the bootup code for PVH we can trap cpuid via vmexit, so we don't
> need to use the emulated prefix call. We also check for the vector
> callback early on, as it is a required feature. PVH also runs at the
> default kernel IOPL.
> 
> Finally, pure PV settings are moved to a separate function that is
> only called for pure PV, i.e., PV with pvmmu. They are also #ifdef'd
> with CONFIG_XEN_PVMMU.
[...]
> @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
>  		break;
>  	}
>  
> -	asm(XEN_EMULATE_PREFIX "cpuid"
> -		: "=a" (*ax),
> -		  "=b" (*bx),
> -		  "=c" (*cx),
> -		  "=d" (*dx)
> -		: "0" (*ax), "2" (*cx));
> +	if (xen_pvh_domain())
> +		native_cpuid(ax, bx, cx, dx);
> +	else
> +		asm(XEN_EMULATE_PREFIX "cpuid"
> +			: "=a" (*ax),
> +			"=b" (*bx),
> +			"=c" (*cx),
> +			"=d" (*dx)
> +			: "0" (*ax), "2" (*cx));

For this one-off cpuid call it seems preferable to me to use the
emulate prefix rather than diverge from PV.

> @@ -1431,13 +1449,18 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
> +	xen_pvh_early_guest_init();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (xen_pvh_domain())
> +		pv_cpu_ops.cpuid = xen_cpuid;
> +	else
> +		pv_cpu_ops = xen_cpu_ops;

If cpuid is trapped for PVH guests why does PVH need non-native cpuid op?
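
The pattern in the hunk above (PVH overrides a single pvops member, classic PV replaces the whole table) can be modeled with a small self-contained sketch; the struct and function names below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of a pvops table: a struct of function pointers,
 * defaulting to native implementations.  Names are illustrative. */
struct cpu_ops {
    int (*cpuid)(unsigned int leaf);
    int (*read_cr0)(void);
};

static int native_cpuid_op(unsigned int leaf) { return (int)leaf; }
static int native_read_cr0(void)              { return 0; }
static int xen_cpuid_op(unsigned int leaf)    { return (int)leaf + 1000; }
static int xen_read_cr0(void)                 { return 1; }

static struct cpu_ops pv_cpu_ops = { native_cpuid_op, native_read_cr0 };
static const struct cpu_ops xen_cpu_ops = { xen_cpuid_op, xen_read_cr0 };

static void install_ops(bool pvh)
{
    if (pvh)
        pv_cpu_ops.cpuid = xen_cpuid_op; /* override one member only */
    else
        pv_cpu_ops = xen_cpu_ops;        /* replace the whole table */
}
```

Under PVH every other member stays native, which is what makes the question above pointed: if cpuid traps anyway, even the single override may be unnecessary.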

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:43:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:43:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykQi-0000qn-Pn; Thu, 02 Jan 2014 15:43:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VykQh-0000qi-SA
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 15:43:28 +0000
Received: from [85.158.139.211:27165] by server-12.bemta-5.messagelabs.com id
	04/B4-30017-F1985C25; Thu, 02 Jan 2014 15:43:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1388677405!6325775!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9382 invoked from network); 2 Jan 2014 15:43:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070849"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:24 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	10:43:24 -0500
Message-ID: <52C5891A.7040603@citrix.com>
Date: Thu, 2 Jan 2014 15:43:22 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for
 event channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - there are certain things
> that work in it like HVM and some like PV. There is
> a similar mode - PVHVM, where we run in HVM mode with
> PV code enabled - and this patch builds on that.
> 
> The most notable PV interfaces are the XenBus and event channels.
> 
> We will piggyback on how the event channel mechanism is
> used in PVHVM - that is, we want the normal native IRQ mechanism,
> and we will install a vector (the HVM callback) from which we
> will call into the event channel mechanism.
> 
> This means that from a pvops perspective, we can use
> native_irq_ops instead of the Xen PV-specific ones. In the
> future we could also support pirq_eoi_map, but that is
> a feature request that can be shared with PVHVM.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
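
The dispatch the commit message describes (native IRQ vector fires, handler scans the Xen event-channel pending bits) can be sketched with a toy two-level bitmap; field and function names are illustrative and event masking is omitted:

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/* Toy model of the shared-info pending state: one selector bit per
 * pending word, one bit per event channel.  Not the real Xen layout. */
struct shared_info_model {
    uint64_t evtchn_pending_sel;
    uint64_t evtchn_pending[64];
};

typedef void (*evtchn_handler_t)(unsigned int port);

/* Called from the HVM callback vector handler: scan the selector,
 * then each pending word, delivering one callback per pending port. */
static void scan_pending(struct shared_info_model *s, evtchn_handler_t h)
{
    while (s->evtchn_pending_sel) {
        unsigned int w = (unsigned int)__builtin_ctzll(s->evtchn_pending_sel);
        s->evtchn_pending_sel &= ~(1ULL << w);
        while (s->evtchn_pending[w]) {
            unsigned int b =
                (unsigned int)__builtin_ctzll(s->evtchn_pending[w]);
            s->evtchn_pending[w] &= ~(1ULL << b);
            h(w * BITS_PER_WORD + b); /* deliver event on this port */
        }
    }
}
```

The point of the piggybacking is that only the entry into `scan_pending` differs: PV enters via the hypervisor upcall, while PVH and PVHVM enter via a normal native interrupt vector.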

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRF-0000uB-7Q; Thu, 02 Jan 2014 15:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRD-0000tp-Kj
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:43:59 +0000
Received: from [85.158.139.211:56814] by server-9.bemta-5.messagelabs.com id
	06/3F-15098-E3985C25; Thu, 02 Jan 2014 15:43:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1388677437!7554277!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30976 invoked from network); 2 Jan 2014 15:43:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259040"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:55 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykR9-00043n-GF;
	Thu, 02 Jan 2014 15:43:55 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:34 +0100
Message-ID: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v9 00/19] FreeBSD PVH DomU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series is a split of the previous patch "Xen x86 DomU PVH
support", with the aim of making the code easier to review.

The series can also be found on my git repo:

git://xenbits.xen.org/people/royger/freebsd.git pvh_v9

or

http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/pvh_v9

PVH mode is basically a PV guest inside an HVM container, and shares
a great amount of code with PVHVM. The main difference is the way the
guest is started: PVH uses the PV start sequence, jumping directly
into the kernel entry point in long mode and with page tables already
set up. The main work of this series consists of making the
environment as similar as possible to what native FreeBSD expects,
and then adding hooks into the PV ops where necessary.

This new version of the series (v9) addresses the comments on the
previously posted version (v7).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRH-0000v0-2w; Thu, 02 Jan 2014 15:44:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRE-0000tw-Gq
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:00 +0000
Received: from [85.158.139.211:56854] by server-9.bemta-5.messagelabs.com id
	59/3F-15098-F3985C25; Thu, 02 Jan 2014 15:43:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388677437!7549188!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20649 invoked from network); 2 Jan 2014 15:43:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070930"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:56 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRA-00043n-2D;
	Thu, 02 Jan 2014 15:43:56 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:35 +0100
Message-ID: <1388677433-49525-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_01/19=5D_xen=3A_add_PV/PVH_kern?=
	=?utf-8?q?el_entry_point?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHRoZSBQVi9QVkggZW50cnkgcG9pbnQgYW5kIHRoZSBsb3cgbGV2ZWwgZnVuY3Rpb25zIGZv
ciBQVkgKaW5pdGlhbGl6YXRpb24uCi0tLQogc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TICAgICB8
ICAgIDEgKwogc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUyB8ICAgODMgKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysKIHN5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmggfCAgIDI2ICsr
KysrKysrKwogc3lzL2NvbmYvZmlsZXMuYW1kNjQgICAgICAgICB8ICAgIDIgKwogc3lzL2kzODYv
eGVuL3hlbl9tYWNoZGVwLmMgICB8ICAgIDIgKwogc3lzL3g4Ni94ZW4vaHZtLmMgICAgICAgICAg
ICB8ICAgIDEgKwogc3lzL3g4Ni94ZW4vcHYuYyAgICAgICAgICAgICB8ICAxMTkgKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrCiBzeXMveGVuL3hlbi1vcy5oICAgICAg
ICAgICAgIHwgICAgNCArKwogOCBmaWxlcyBjaGFuZ2VkLCAyMzggaW5zZXJ0aW9ucygrKSwgMCBk
ZWxldGlvbnMoLSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMvYW1kNjQvYW1kNjQveGVuLWxvY29y
ZS5TCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94ZW4vcHYuYwoKZGlmZiAtLWdpdCBhL3N5
cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUyBiL3N5cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUwppbmRleCA1
NWNkYTNhLi40YWNlZjk3IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbG9jb3JlLlMKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TCkBAIC04NCw1ICs4NCw2IEBAIE5PTl9HUFJPRl9F
TlRSWShidGV4dCkKIAogCS5ic3MKIAlBTElHTl9EQVRBCQkJLyoganVzdCB0byBiZSBzdXJlICov
CisJLmdsb2JsCWJvb3RzdGFjawogCS5zcGFjZQkweDEwMDAJCQkvKiBzcGFjZSBmb3IgYm9vdHN0
YWNrIC0gdGVtcG9yYXJ5IHN0YWNrICovCiBib290c3RhY2s6CmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQveGVuLWxvY29yZS5TIGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpuZXcg
ZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi44NDI4N2M0Ci0tLSAvZGV2L251bGwKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpAQCAtMCwwICsxLDgzIEBACisvKi0KKyAq
IENvcHlyaWdodCAoYykgMjAwMyBQZXRlciBXZW1tIDxwZXRlckBGcmVlQlNELm9yZz4KKyAqIENv
cHlyaWdodCAoYykgMjAxMyBSb2dlciBQYXUgTW9ubmUgPHJveWdlckBGcmVlQlNELm9yZz4KKyAq
IEFsbCByaWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBz
b3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24s
IGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAq
IGFyZSBtZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRh
aW4gdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0
aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25z
IGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAg
IG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xh
aW1lciBpbiB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBw
cm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQ
Uk9WSURFRCBCWSBUSEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgYGBBUyBJUycnIEFORAorICog
QU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElN
SVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFO
RCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJ
TiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAq
IEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZ
LCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRF
RCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExP
U1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisg
KiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIg
SU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVH
TElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBV
U0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBP
RgorICogU1VDSCBEQU1BR0UuCisgKgorICogJEZyZWVCU0QkCisgKi8KKworI2luY2x1ZGUgPG1h
Y2hpbmUvYXNtYWNyb3MuaD4KKyNpbmNsdWRlIDxtYWNoaW5lL3BzbC5oPgorI2luY2x1ZGUgPG1h
Y2hpbmUvcG1hcC5oPgorI2luY2x1ZGUgPG1hY2hpbmUvc3BlY2lhbHJlZy5oPgorCisjaW5jbHVk
ZSA8eGVuL3hlbi1vcy5oPgorI2RlZmluZSBfX0FTU0VNQkxZX18KKyNpbmNsdWRlIDx4ZW4vaW50
ZXJmYWNlL2VsZm5vdGUuaD4KKworI2luY2x1ZGUgImFzc3ltLnMiCisKKy5zZWN0aW9uIF9feGVu
X2d1ZXN0CisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0dVRVNUX09TLCAgICAgICAuYXNjaXos
ICJGcmVlQlNEIikKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfR1VFU1RfVkVSU0lPTiwgIC5h
c2NpeiwgIkhFQUQiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9YRU5fVkVSU0lPTiwgICAg
LmFzY2l6LCAieGVuLTMuMCIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX1ZJUlRfQkFTRSwg
ICAgICAucXVhZCwgIEtFUk5CQVNFKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9QQUREUl9P
RkZTRVQsICAgLnF1YWQsICBLRVJOQkFTRSkgLyogWGVuIGhvbm91cnMgZWxmLT5wX3BhZGRyOyBj
b21wZW5zYXRlIGZvciB0aGlzICovCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0VOVFJZLCAg
ICAgICAgICAucXVhZCwgIHhlbl9zdGFydCkKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfSFlQ
RVJDQUxMX1BBR0UsIC5xdWFkLAkgaHlwZXJjYWxsX3BhZ2UpCisJRUxGTk9URShYZW4sIFhFTl9F
TEZOT1RFX0hWX1NUQVJUX0xPVywgICAucXVhZCwgIEhZUEVSVklTT1JfVklSVF9TVEFSVCkKKwlF
TEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfRkVBVFVSRVMsICAgICAgIC5hc2NpeiwgIndyaXRhYmxl
X2Rlc2NyaXB0b3JfdGFibGVzfGF1dG9fdHJhbnNsYXRlZF9waHlzbWFwfHN1cGVydmlzb3JfbW9k
ZV9rZXJuZWx8aHZtX2NhbGxiYWNrX3ZlY3RvciIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RF
X1BBRV9NT0RFLCAgICAgICAuYXNjaXosICJ5ZXMiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9U
RV9MMV9NRk5fVkFMSUQsICAgLmxvbmcsICBQR19WLCBQR19WKQorCUVMRk5PVEUoWGVuLCBYRU5f
RUxGTk9URV9MT0FERVIsICAgICAgICAgLmFzY2l6LCAiZ2VuZXJpYyIpCisJRUxGTk9URShYZW4s
IFhFTl9FTEZOT1RFX1NVU1BFTkRfQ0FOQ0VMLCAubG9uZywgIDApCisJRUxGTk9URShYZW4sIFhF
Tl9FTEZOT1RFX0JTRF9TWU1UQUIsCSAuYXNjaXosICJ5ZXMiKQorCisJLnRleHQKKy5wMmFsaWdu
IFBBR0VfU0hJRlQsIDB4OTAJLyogSHlwZXJjYWxsX3BhZ2UgbmVlZHMgdG8gYmUgUEFHRSBhbGln
bmVkICovCisKK05PTl9HUFJPRl9FTlRSWShoeXBlcmNhbGxfcGFnZSkKKwkuc2tpcAkweDEwMDAs
IDB4OTAJLyogRmlsbCB3aXRoICJub3AicyAqLworCitOT05fR1BST0ZfRU5UUlkoeGVuX3N0YXJ0
KQorCS8qIERvbid0IHRydXN0IHdoYXQgdGhlIGxvYWRlciBnaXZlcyBmb3IgcmZsYWdzLiAqLwor
CXB1c2hxCSRQU0xfS0VSTkVMCisJcG9wZnEKKworCS8qIFBhcmFtZXRlcnMgZm9yIHRoZSB4ZW4g
aW5pdCBmdW5jdGlvbiAqLworCW1vdnEJJXJzaSwgJXJkaQkJLyogc2hhcmVkX2luZm8gKGFyZyAx
KSAqLworCW1vdnEJJXJzcCwgJXJzaQkJLyogeGVuc3RhY2sgICAgKGFyZyAyKSAqLworCisJLyog
VXNlIG91ciBvd24gc3RhY2sgKi8KKwltb3ZxCSRib290c3RhY2ssJXJzcAorCXhvcmwJJWVicCwg
JWVicAorCisJLyogdV9pbnQ2NF90IGhhbW1lcl90aW1lX3hlbihzdGFydF9pbmZvX3QgKnNpLCB1
X2ludDY0X3QgeGVuc3RhY2spOyAqLworCWNhbGwJaGFtbWVyX3RpbWVfeGVuCisJbW92cQklcmF4
LCAlcnNwCQkvKiBzZXQgdXAga3N0YWNrIGZvciBtaV9zdGFydHVwKCkgKi8KKwljYWxsCW1pX3N0
YXJ0dXAJCS8qIGF1dG9jb25maWd1cmF0aW9uLCBtb3VudHJvb3QgZXRjICovCisKKwkvKiBOT1RS
RUFDSEVEICovCiswOglobHQKKwlqbXAgCTBiCmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVk
ZS9hc21hY3Jvcy5oIGIvc3lzL2FtZDY0L2luY2x1ZGUvYXNtYWNyb3MuaAppbmRleCAxZmI1OTJh
Li5jZThkY2U0IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvaW5jbHVkZS9hc21hY3Jvcy5oCisrKyBi
L3N5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmgKQEAgLTIwMSw0ICsyMDEsMzAgQEAKIAogI2Vu
ZGlmIC8qIExPQ09SRSAqLwogCisjaWZkZWYgX19TVERDX18KKyNkZWZpbmUgRUxGTk9URShuYW1l
LCB0eXBlLCBkZXNjdHlwZSwgZGVzY2RhdGEuLi4pIFwKKy5wdXNoc2VjdGlvbiAubm90ZS5uYW1l
ICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKyAgLmFsaWduIDQgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICA7ICAgICAgIFwKKyAgLmxvbmcgMmYgLSAxZiAgICAgICAgIC8qIG5hbWVzeiAq
LyAgICA7ICAgICAgIFwKKyAgLmxvbmcgNGYgLSAzZiAgICAgICAgIC8qIGRlc2NzeiAqLyAgICA7
ICAgICAgIFwKKyAgLmxvbmcgdHlwZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAg
IFwKKzE6LmFzY2l6ICNuYW1lICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzI6
LmFsaWduIDQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzM6ZGVzY3R5
cGUgZGVzY2RhdGEgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzQ6LmFsaWduIDQgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKy5wb3BzZWN0aW9uCisjZWxzZSAv
KiAhX19TVERDX18sIGkuZS4gLXRyYWRpdGlvbmFsICovCisjZGVmaW5lIEVMRk5PVEUobmFtZSwg
dHlwZSwgZGVzY3R5cGUsIGRlc2NkYXRhKSBcCisucHVzaHNlY3Rpb24gLm5vdGUubmFtZSAgICAg
ICAgICAgICAgICAgOyAgICAgICBcCisgIC5hbGlnbiA0ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgOyAgICAgICBcCisgIC5sb25nIDJmIC0gMWYgICAgICAgICAvKiBuYW1lc3ogKi8gICAg
OyAgICAgICBcCisgIC5sb25nIDRmIC0gM2YgICAgICAgICAvKiBkZXNjc3ogKi8gICAgOyAgICAg
ICBcCisgIC5sb25nIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisx
Oi5hc2NpeiAibmFtZSIgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisyOi5hbGln
biA0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCiszOmRlc2N0eXBlIGRl
c2NkYXRhICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCis0Oi5hbGlnbiA0ICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisucG9wc2VjdGlvbgorI2VuZGlmIC8qIF9f
U1REQ19fICovCisKICNlbmRpZiAvKiAhX01BQ0hJTkVfQVNNQUNST1NfSF8gKi8KZGlmZiAtLWdp
dCBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0IGIvc3lzL2NvbmYvZmlsZXMuYW1kNjQKaW5kZXggZDFi
ZGNkOS4uMTYwMjlkOCAxMDA2NDQKLS0tIGEvc3lzL2NvbmYvZmlsZXMuYW1kNjQKKysrIGIvc3lz
L2NvbmYvZmlsZXMuYW1kNjQKQEAgLTExOSw2ICsxMTksNyBAQCBhbWQ2NC9hbWQ2NC9pbl9ja3N1
bS5jCQlvcHRpb25hbAlpbmV0IHwgaW5ldDYKIGFtZDY0L2FtZDY0L2luaXRjcHUuYwkJc3RhbmRh
cmQKIGFtZDY0L2FtZDY0L2lvLmMJCW9wdGlvbmFsCWlvCiBhbWQ2NC9hbWQ2NC9sb2NvcmUuUwkJ
c3RhbmRhcmQJbm8tb2JqCithbWQ2NC9hbWQ2NC94ZW4tbG9jb3JlLlMJb3B0aW9uYWwJeGVuaHZt
CiBhbWQ2NC9hbWQ2NC9tYWNoZGVwLmMJCXN0YW5kYXJkCiBhbWQ2NC9hbWQ2NC9tZW0uYwkJb3B0
aW9uYWwJbWVtCiBhbWQ2NC9hbWQ2NC9taW5pZHVtcF9tYWNoZGVwLmMJc3RhbmRhcmQKQEAgLTU2
NiwzICs1NjcsNCBAQCB4ODYveDg2L25leHVzLmMJCQlzdGFuZGFyZAogeDg2L3g4Ni90c2MuYwkJ
CXN0YW5kYXJkCiB4ODYveGVuL2h2bS5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYveGVuL3hlbl9p
bnRyLmMJCW9wdGlvbmFsCXhlbiB8IHhlbmh2bQoreDg2L3hlbi9wdi5jCQkJb3B0aW9uYWwJeGVu
aHZtCmRpZmYgLS1naXQgYS9zeXMvaTM4Ni94ZW4veGVuX21hY2hkZXAuYyBiL3N5cy9pMzg2L3hl
bi94ZW5fbWFjaGRlcC5jCmluZGV4IDcwNDliZTYuLmZkNTc1ZWUgMTAwNjQ0Ci0tLSBhL3N5cy9p
Mzg2L3hlbi94ZW5fbWFjaGRlcC5jCisrKyBiL3N5cy9pMzg2L3hlbi94ZW5fbWFjaGRlcC5jCkBA
IC04OSw2ICs4OSw3IEBAIElEVFZFQyhkaXYpLCBJRFRWRUMoZGJnKSwgSURUVkVDKG5taSksIElE
VFZFQyhicHQpLCBJRFRWRUMob2ZsKSwKIAogaW50IHhlbmRlYnVnX2ZsYWdzOyAKIHN0YXJ0X2lu
Zm9fdCAqeGVuX3N0YXJ0X2luZm87CitzdGFydF9pbmZvX3QgKkhZUEVSVklTT1Jfc3RhcnRfaW5m
bzsKIHNoYXJlZF9pbmZvX3QgKkhZUEVSVklTT1Jfc2hhcmVkX2luZm87CiB4ZW5fcGZuX3QgKnhl
bl9tYWNoaW5lX3BoeXMgPSBtYWNoaW5lX3RvX3BoeXNfbWFwcGluZzsKIHhlbl9wZm5fdCAqeGVu
X3BoeXNfbWFjaGluZTsKQEAgLTkyNyw2ICs5MjgsNyBAQCBpbml0dmFsdWVzKHN0YXJ0X2luZm9f
dCAqc3RhcnRpbmZvKQogCUhZUEVSVklTT1Jfdm1fYXNzaXN0KFZNQVNTVF9DTURfZW5hYmxlLCBW
TUFTU1RfVFlQRV80Z2Jfc2VnbWVudHNfbm90aWZ5KTsJCiAjZW5kaWYJCiAJeGVuX3N0YXJ0X2lu
Zm8gPSBzdGFydGluZm87CisJSFlQRVJWSVNPUl9zdGFydF9pbmZvID0gc3RhcnRpbmZvOwogCXhl
bl9waHlzX21hY2hpbmUgPSAoeGVuX3Bmbl90ICopc3RhcnRpbmZvLT5tZm5fbGlzdDsKIAogCUlk
bGVQVEQgPSAocGRfZW50cnlfdCAqKSgodWludDhfdCAqKXN0YXJ0aW5mby0+cHRfYmFzZSArIFBB
R0VfU0laRSk7CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9odm0uYyBiL3N5cy94ODYveGVuL2h2
bS5jCmluZGV4IDcyODExZGMuLmIzOTc3MjEgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVuL2h2bS5j
CisrKyBiL3N5cy94ODYveGVuL2h2bS5jCkBAIC0xNTksNiArMTU5LDcgQEAgRFBDUFVfREVGSU5F
KHhlbl9pbnRyX2hhbmRsZV90LCBpcGlfaGFuZGxlW25pdGVtcyh4ZW5faXBpcyldKTsKIC8qKiBI
eXBlcmNhbGwgdGFibGUgYWNjZXNzZWQgdmlhIEhZUEVSVklTT1JfKl9vcCgpIG1ldGhvZHMuICov
CiBjaGFyICpoeXBlcmNhbGxfc3R1YnM7CiBzaGFyZWRfaW5mb190ICpIWVBFUlZJU09SX3NoYXJl
ZF9pbmZvOworc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0YXJ0X2luZm87CiAKICNpZmRlZiBT
TVAKIC8qLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBYRU4gUFYgSVBJIEhhbmRsZXJzIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSovCmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9wdi5j
IGIvc3lzL3g4Ni94ZW4vcHYuYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi41
NTcxZWNmCi0tLSAvZGV2L251bGwKKysrIGIvc3lzL3g4Ni94ZW4vcHYuYwpAQCAtMCwwICsxLDEx
OSBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAwNCBDaHJpc3RpYW4gTGltcGFjaC4KKyAqIENv
cHlyaWdodCAoYykgMjAwNC0yMDA2LDIwMDggS2lwIE1hY3kKKyAqIENvcHlyaWdodCAoYykgMjAx
MyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMg
cmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJp
bmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0
ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6Cisg
KiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3Zl
IGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhl
IGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBm
b3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhp
cyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUK
KyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRo
IHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBU
SEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9S
IElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQor
ICogSU1QTElFRCBXQVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1Ig
QSBQQVJUSUNVTEFSIFBVUlBPU0UKKyAqIEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hB
TEwgVEhFIEFVVEhPUiBPUiBDT05UUklCVVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVD
VCwgSU5ESVJFQ1QsIElOQ0lERU5UQUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVO
VElBTAorICogREFNQUdFUyAoSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVN
RU5UIE9GIFNVQlNUSVRVVEUgR09PRFMKKyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFU
QSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVT
RUQgQU5EIE9OIEFOWSBUSEVPUlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBT
VFJJQ1QKKyAqIExJQUJJTElUWSwgT1IgVE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RI
RVJXSVNFKSBBUklTSU5HIElOIEFOWSBXQVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09G
VFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFN
QUdFLgorICovCisKKyNpbmNsdWRlIDxzeXMvY2RlZnMuaD4KK19fRkJTRElEKCIkRnJlZUJTRCQi
KTsKKworI2luY2x1ZGUgPHN5cy9wYXJhbS5oPgorI2luY2x1ZGUgPHN5cy9idXMuaD4KKyNpbmNs
dWRlIDxzeXMva2VybmVsLmg+CisjaW5jbHVkZSA8c3lzL3JlYm9vdC5oPgorI2luY2x1ZGUgPHN5
cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9sb2NrLmg+CisjaW5jbHVkZSA8c3lzL3J3bG9jay5o
PgorCisjaW5jbHVkZSA8dm0vdm0uaD4KKyNpbmNsdWRlIDx2bS92bV9leHRlcm4uaD4KKyNpbmNs
dWRlIDx2bS92bV9rZXJuLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFnZS5oPgorI2luY2x1ZGUgPHZt
L3ZtX21hcC5oPgorI2luY2x1ZGUgPHZtL3ZtX29iamVjdC5oPgorI2luY2x1ZGUgPHZtL3ZtX3Bh
Z2VyLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFyYW0uaD4KKworI2luY2x1ZGUgPHhlbi94ZW4tb3Mu
aD4KKyNpbmNsdWRlIDx4ZW4vaHlwZXJ2aXNvci5oPgorCisvKiBOYXRpdmUgaW5pdGlhbCBmdW5j
dGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJfdGltZSh1X2ludDY0X3QsIHVfaW50NjRf
dCk7CisvKiBYZW4gaW5pdGlhbCBmdW5jdGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJf
dGltZV94ZW4oc3RhcnRfaW5mb190ICosIHVfaW50NjRfdCk7CisKKy8qCisgKiBGaXJzdCBmdW5j
dGlvbiBjYWxsZWQgYnkgdGhlIFhlbiBQVkggYm9vdCBzZXF1ZW5jZS4KKyAqCisgKiBTZXQgc29t
ZSBYZW4gZ2xvYmFsIHZhcmlhYmxlcyBhbmQgcHJlcGFyZSB0aGUgZW52aXJvbm1lbnQgc28gaXQg
aXMKKyAqIGFzIHNpbWlsYXIgYXMgcG9zc2libGUgdG8gd2hhdCBuYXRpdmUgRnJlZUJTRCBpbml0
IGZ1bmN0aW9uIGV4cGVjdHMuCisgKi8KK3VfaW50NjRfdAoraGFtbWVyX3RpbWVfeGVuKHN0YXJ0
X2luZm9fdCAqc2ksIHVfaW50NjRfdCB4ZW5zdGFjaykKK3sKKwl1X2ludDY0X3QgcGh5c2ZyZWU7
CisJdV9pbnQ2NF90ICpQVDQgPSAodV9pbnQ2NF90ICopeGVuc3RhY2s7CisJdV9pbnQ2NF90ICpQ
VDMgPSAodV9pbnQ2NF90ICopKHhlbnN0YWNrICsgUEFHRV9TSVpFKTsKKwl1X2ludDY0X3QgKlBU
MiA9ICh1X2ludDY0X3QgKikoeGVuc3RhY2sgKyAyICogUEFHRV9TSVpFKTsKKwlpbnQgaTsKKwor
CWlmICgoc2kgPT0gTlVMTCkgfHwgKHhlbnN0YWNrID09IDApKSB7CisJCUhZUEVSVklTT1Jfc2h1
dGRvd24oU0hVVERPV05fY3Jhc2gpOworCX0KKworCS8qIFdlIHVzZSAzIHBhZ2VzIG9mIHhlbiBz
dGFjayBmb3IgdGhlIGJvb3QgcGFnZXRhYmxlcyAqLworCXBoeXNmcmVlID0geGVuc3RhY2sgKyAz
ICogUEFHRV9TSVpFIC0gS0VSTkJBU0U7CisKKwkvKiBTZXR1cCBYZW4gZ2xvYmFsIHZhcmlhYmxl
cyAqLworCUhZUEVSVklTT1Jfc3RhcnRfaW5mbyA9IHNpOworCUhZUEVSVklTT1Jfc2hhcmVkX2lu
Zm8gPQorCQkoc2hhcmVkX2luZm9fdCAqKShzaS0+c2hhcmVkX2luZm8gKyBLRVJOQkFTRSk7CisK
KwkvKgorCSAqIFNldHVwIHNvbWUgbWlzYyBnbG9iYWwgdmFyaWFibGVzIGZvciBYZW4gZGV2aWNl
cworCSAqCisJICogWFhYOiBkZXZpY2VzIHRoYXQgbmVlZCB0aGlzIHNwZWNpZmljIHZhcmlhYmxl
cyBzaG91bGQKKwkgKiAgICAgIGJlIHJld3JpdHRlbiB0byBmZXRjaCB0aGlzIGluZm8gYnkgdGhl
bXNlbHZlcyBmcm9tIHRoZQorCSAqICAgICAgc3RhcnRfaW5mbyBwYWdlLgorCSAqLworCXhlbl9z
dG9yZSA9IChzdHJ1Y3QgeGVuc3RvcmVfZG9tYWluX2ludGVyZmFjZSAqKQorCSAgICAgICAgICAg
IChwdG9hKHNpLT5zdG9yZV9tZm4pICsgS0VSTkJBU0UpOworCisJeGVuX2RvbWFpbl90eXBlID0g
WEVOX1BWX0RPTUFJTjsKKwl2bV9ndWVzdCA9IFZNX0dVRVNUX1hFTjsKKworCS8qCisJICogVXNl
IHRoZSBzdGFjayBYZW4gZ2l2ZXMgdXMgdG8gYnVpbGQgdGhlIHBhZ2UgdGFibGVzCisJICogYXMg
bmF0aXZlIEZyZWVCU0QgZXhwZWN0cyB0byBmaW5kIHRoZW0gKGNyZWF0ZWQKKwkgKiBieSB0aGUg
Ym9vdCB0cmFtcG9saW5lKS4KKwkgKi8KKwlmb3IgKGkgPSAwOyBpIDwgNTEyOyBpKyspIHsKKwkJ
LyogRWFjaCBzbG90IG9mIHRoZSBsZXZlbCA0IHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZl
bCAzIHBhZ2UgKi8KKwkJUFQ0W2ldID0gKCh1X2ludDY0X3QpJlBUM1swXSkgLSBLRVJOQkFTRTsK
KwkJUFQ0W2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1U7CisKKwkJLyogRWFjaCBzbG90IG9mIHRo
ZSBsZXZlbCAzIHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZlbCAyIHBhZ2UgKi8KKwkJUFQz
W2ldID0gKCh1X2ludDY0X3QpJlBUMlswXSkgLSBLRVJOQkFTRTsKKwkJUFQzW2ldIHw9IFBHX1Yg
fCBQR19SVyB8IFBHX1U7CisKKwkJLyogVGhlIGxldmVsIDIgcGFnZSBzbG90cyBhcmUgbWFwcGVk
IHdpdGggMk1CIHBhZ2VzIGZvciAxR0IuICovCisJCVBUMltpXSA9IGkgKiAoMiAqIDEwMjQgKiAx
MDI0KTsKKwkJUFQyW2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1BTIHwgUEdfVTsKKwl9CisJbG9h
ZF9jcjMoKCh1X2ludDY0X3QpJlBUNFswXSkgLSBLRVJOQkFTRSk7CisKKwkvKiBOb3cgd2UgY2Fu
IGp1bXAgaW50byB0aGUgbmF0aXZlIGluaXQgZnVuY3Rpb24gKi8KKwlyZXR1cm4gKGhhbW1lcl90
aW1lKDAsIHBoeXNmcmVlKSk7Cit9CmRpZmYgLS1naXQgYS9zeXMveGVuL3hlbi1vcy5oIGIvc3lz
L3hlbi94ZW4tb3MuaAppbmRleCA4NzY0NGU5Li5jNzQ3NGQ4IDEwMDY0NAotLS0gYS9zeXMveGVu
L3hlbi1vcy5oCisrKyBiL3N5cy94ZW4veGVuLW9zLmgKQEAgLTUxLDYgKzUxLDEwIEBACiB2b2lk
IGZvcmNlX2V2dGNobl9jYWxsYmFjayh2b2lkKTsKIAogZXh0ZXJuIHNoYXJlZF9pbmZvX3QgKkhZ
UEVSVklTT1Jfc2hhcmVkX2luZm87CitleHRlcm4gc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0
YXJ0X2luZm87CisKKy8qIFhYWDogd2UgbmVlZCB0byBnZXQgcmlkIG9mIHRoaXMgYW5kIHVzZSBI
WVBFUlZJU09SX3N0YXJ0X2luZm8gZGlyZWN0bHkgKi8KK2V4dGVybiBzdHJ1Y3QgeGVuc3RvcmVf
ZG9tYWluX2ludGVyZmFjZSAqeGVuX3N0b3JlOwogCiBlbnVtIHhlbl9kb21haW5fdHlwZSB7CiAJ
WEVOX05BVElWRSwgICAgICAgICAgICAgLyogcnVubmluZyBvbiBiYXJlIGhhcmR3YXJlICAgICov
Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

eXBlcmNhbGwgdGFibGUgYWNjZXNzZWQgdmlhIEhZUEVSVklTT1JfKl9vcCgpIG1ldGhvZHMuICov
CiBjaGFyICpoeXBlcmNhbGxfc3R1YnM7CiBzaGFyZWRfaW5mb190ICpIWVBFUlZJU09SX3NoYXJl
ZF9pbmZvOworc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0YXJ0X2luZm87CiAKICNpZmRlZiBT
TVAKIC8qLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBYRU4gUFYgSVBJIEhhbmRsZXJzIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSovCmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9wdi5j
IGIvc3lzL3g4Ni94ZW4vcHYuYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi41
NTcxZWNmCi0tLSAvZGV2L251bGwKKysrIGIvc3lzL3g4Ni94ZW4vcHYuYwpAQCAtMCwwICsxLDEx
OSBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAwNCBDaHJpc3RpYW4gTGltcGFjaC4KKyAqIENv
cHlyaWdodCAoYykgMjAwNC0yMDA2LDIwMDggS2lwIE1hY3kKKyAqIENvcHlyaWdodCAoYykgMjAx
MyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMg
cmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJp
bmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0
ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6Cisg
KiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3Zl
IGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhl
IGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBm
b3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhp
cyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUK
KyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRo
IHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBU
SEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9S
IElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQor
ICogSU1QTElFRCBXQVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1Ig
QSBQQVJUSUNVTEFSIFBVUlBPU0UKKyAqIEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hB
TEwgVEhFIEFVVEhPUiBPUiBDT05UUklCVVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVD
VCwgSU5ESVJFQ1QsIElOQ0lERU5UQUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVO
VElBTAorICogREFNQUdFUyAoSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVN
RU5UIE9GIFNVQlNUSVRVVEUgR09PRFMKKyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFU
QSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVT
RUQgQU5EIE9OIEFOWSBUSEVPUlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBT
VFJJQ1QKKyAqIExJQUJJTElUWSwgT1IgVE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RI
RVJXSVNFKSBBUklTSU5HIElOIEFOWSBXQVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09G
VFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFN
QUdFLgorICovCisKKyNpbmNsdWRlIDxzeXMvY2RlZnMuaD4KK19fRkJTRElEKCIkRnJlZUJTRCQi
KTsKKworI2luY2x1ZGUgPHN5cy9wYXJhbS5oPgorI2luY2x1ZGUgPHN5cy9idXMuaD4KKyNpbmNs
dWRlIDxzeXMva2VybmVsLmg+CisjaW5jbHVkZSA8c3lzL3JlYm9vdC5oPgorI2luY2x1ZGUgPHN5
cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9sb2NrLmg+CisjaW5jbHVkZSA8c3lzL3J3bG9jay5o
PgorCisjaW5jbHVkZSA8dm0vdm0uaD4KKyNpbmNsdWRlIDx2bS92bV9leHRlcm4uaD4KKyNpbmNs
dWRlIDx2bS92bV9rZXJuLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFnZS5oPgorI2luY2x1ZGUgPHZt
L3ZtX21hcC5oPgorI2luY2x1ZGUgPHZtL3ZtX29iamVjdC5oPgorI2luY2x1ZGUgPHZtL3ZtX3Bh
Z2VyLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFyYW0uaD4KKworI2luY2x1ZGUgPHhlbi94ZW4tb3Mu
aD4KKyNpbmNsdWRlIDx4ZW4vaHlwZXJ2aXNvci5oPgorCisvKiBOYXRpdmUgaW5pdGlhbCBmdW5j
dGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJfdGltZSh1X2ludDY0X3QsIHVfaW50NjRf
dCk7CisvKiBYZW4gaW5pdGlhbCBmdW5jdGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJf
dGltZV94ZW4oc3RhcnRfaW5mb190ICosIHVfaW50NjRfdCk7CisKKy8qCisgKiBGaXJzdCBmdW5j
dGlvbiBjYWxsZWQgYnkgdGhlIFhlbiBQVkggYm9vdCBzZXF1ZW5jZS4KKyAqCisgKiBTZXQgc29t
ZSBYZW4gZ2xvYmFsIHZhcmlhYmxlcyBhbmQgcHJlcGFyZSB0aGUgZW52aXJvbm1lbnQgc28gaXQg
aXMKKyAqIGFzIHNpbWlsYXIgYXMgcG9zc2libGUgdG8gd2hhdCBuYXRpdmUgRnJlZUJTRCBpbml0
IGZ1bmN0aW9uIGV4cGVjdHMuCisgKi8KK3VfaW50NjRfdAoraGFtbWVyX3RpbWVfeGVuKHN0YXJ0
X2luZm9fdCAqc2ksIHVfaW50NjRfdCB4ZW5zdGFjaykKK3sKKwl1X2ludDY0X3QgcGh5c2ZyZWU7
CisJdV9pbnQ2NF90ICpQVDQgPSAodV9pbnQ2NF90ICopeGVuc3RhY2s7CisJdV9pbnQ2NF90ICpQ
VDMgPSAodV9pbnQ2NF90ICopKHhlbnN0YWNrICsgUEFHRV9TSVpFKTsKKwl1X2ludDY0X3QgKlBU
MiA9ICh1X2ludDY0X3QgKikoeGVuc3RhY2sgKyAyICogUEFHRV9TSVpFKTsKKwlpbnQgaTsKKwor
CWlmICgoc2kgPT0gTlVMTCkgfHwgKHhlbnN0YWNrID09IDApKSB7CisJCUhZUEVSVklTT1Jfc2h1
dGRvd24oU0hVVERPV05fY3Jhc2gpOworCX0KKworCS8qIFdlIHVzZSAzIHBhZ2VzIG9mIHhlbiBz
dGFjayBmb3IgdGhlIGJvb3QgcGFnZXRhYmxlcyAqLworCXBoeXNmcmVlID0geGVuc3RhY2sgKyAz
ICogUEFHRV9TSVpFIC0gS0VSTkJBU0U7CisKKwkvKiBTZXR1cCBYZW4gZ2xvYmFsIHZhcmlhYmxl
cyAqLworCUhZUEVSVklTT1Jfc3RhcnRfaW5mbyA9IHNpOworCUhZUEVSVklTT1Jfc2hhcmVkX2lu
Zm8gPQorCQkoc2hhcmVkX2luZm9fdCAqKShzaS0+c2hhcmVkX2luZm8gKyBLRVJOQkFTRSk7CisK
KwkvKgorCSAqIFNldHVwIHNvbWUgbWlzYyBnbG9iYWwgdmFyaWFibGVzIGZvciBYZW4gZGV2aWNl
cworCSAqCisJICogWFhYOiBkZXZpY2VzIHRoYXQgbmVlZCB0aGlzIHNwZWNpZmljIHZhcmlhYmxl
cyBzaG91bGQKKwkgKiAgICAgIGJlIHJld3JpdHRlbiB0byBmZXRjaCB0aGlzIGluZm8gYnkgdGhl
bXNlbHZlcyBmcm9tIHRoZQorCSAqICAgICAgc3RhcnRfaW5mbyBwYWdlLgorCSAqLworCXhlbl9z
dG9yZSA9IChzdHJ1Y3QgeGVuc3RvcmVfZG9tYWluX2ludGVyZmFjZSAqKQorCSAgICAgICAgICAg
IChwdG9hKHNpLT5zdG9yZV9tZm4pICsgS0VSTkJBU0UpOworCisJeGVuX2RvbWFpbl90eXBlID0g
WEVOX1BWX0RPTUFJTjsKKwl2bV9ndWVzdCA9IFZNX0dVRVNUX1hFTjsKKworCS8qCisJICogVXNl
IHRoZSBzdGFjayBYZW4gZ2l2ZXMgdXMgdG8gYnVpbGQgdGhlIHBhZ2UgdGFibGVzCisJICogYXMg
bmF0aXZlIEZyZWVCU0QgZXhwZWN0cyB0byBmaW5kIHRoZW0gKGNyZWF0ZWQKKwkgKiBieSB0aGUg
Ym9vdCB0cmFtcG9saW5lKS4KKwkgKi8KKwlmb3IgKGkgPSAwOyBpIDwgNTEyOyBpKyspIHsKKwkJ
LyogRWFjaCBzbG90IG9mIHRoZSBsZXZlbCA0IHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZl
bCAzIHBhZ2UgKi8KKwkJUFQ0W2ldID0gKCh1X2ludDY0X3QpJlBUM1swXSkgLSBLRVJOQkFTRTsK
KwkJUFQ0W2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1U7CisKKwkJLyogRWFjaCBzbG90IG9mIHRo
ZSBsZXZlbCAzIHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZlbCAyIHBhZ2UgKi8KKwkJUFQz
W2ldID0gKCh1X2ludDY0X3QpJlBUMlswXSkgLSBLRVJOQkFTRTsKKwkJUFQzW2ldIHw9IFBHX1Yg
fCBQR19SVyB8IFBHX1U7CisKKwkJLyogVGhlIGxldmVsIDIgcGFnZSBzbG90cyBhcmUgbWFwcGVk
IHdpdGggMk1CIHBhZ2VzIGZvciAxR0IuICovCisJCVBUMltpXSA9IGkgKiAoMiAqIDEwMjQgKiAx
MDI0KTsKKwkJUFQyW2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1BTIHwgUEdfVTsKKwl9CisJbG9h
ZF9jcjMoKCh1X2ludDY0X3QpJlBUNFswXSkgLSBLRVJOQkFTRSk7CisKKwkvKiBOb3cgd2UgY2Fu
IGp1bXAgaW50byB0aGUgbmF0aXZlIGluaXQgZnVuY3Rpb24gKi8KKwlyZXR1cm4gKGhhbW1lcl90
aW1lKDAsIHBoeXNmcmVlKSk7Cit9CmRpZmYgLS1naXQgYS9zeXMveGVuL3hlbi1vcy5oIGIvc3lz
L3hlbi94ZW4tb3MuaAppbmRleCA4NzY0NGU5Li5jNzQ3NGQ4IDEwMDY0NAotLS0gYS9zeXMveGVu
L3hlbi1vcy5oCisrKyBiL3N5cy94ZW4veGVuLW9zLmgKQEAgLTUxLDYgKzUxLDEwIEBACiB2b2lk
IGZvcmNlX2V2dGNobl9jYWxsYmFjayh2b2lkKTsKIAogZXh0ZXJuIHNoYXJlZF9pbmZvX3QgKkhZ
UEVSVklTT1Jfc2hhcmVkX2luZm87CitleHRlcm4gc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0
YXJ0X2luZm87CisKKy8qIFhYWDogd2UgbmVlZCB0byBnZXQgcmlkIG9mIHRoaXMgYW5kIHVzZSBI
WVBFUlZJU09SX3N0YXJ0X2luZm8gZGlyZWN0bHkgKi8KK2V4dGVybiBzdHJ1Y3QgeGVuc3RvcmVf
ZG9tYWluX2ludGVyZmFjZSAqeGVuX3N0b3JlOwogCiBlbnVtIHhlbl9kb21haW5fdHlwZSB7CiAJ
WEVOX05BVElWRSwgICAgICAgICAgICAgLyogcnVubmluZyBvbiBiYXJlIGhhcmR3YXJlICAgICov
Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRH-0000vZ-Hn; Thu, 02 Jan 2014 15:44:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRF-0000tx-1l
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:01 +0000
Received: from [85.158.139.211:4156] by server-5.bemta-5.messagelabs.com id
	B1/EC-14928-04985C25; Thu, 02 Jan 2014 15:44:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388677437!7549188!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20689 invoked from network); 2 Jan 2014 15:43:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070933"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:56 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRA-00043n-Mk;
	Thu, 02 Jan 2014 15:43:56 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:36 +0100
Message-ID: <1388677433-49525-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 02/19] xen: add macro to detect if running as
	Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/xen-os.h |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index c7474d8..e8a5a99 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -82,6 +82,13 @@ xen_hvm_domain(void)
 	return (xen_domain_type == XEN_HVM_DOMAIN);
 }
 
+static inline int
+xen_initial_domain(void)
+{
+	return (xen_domain() && HYPERVISOR_start_info &&
+	        HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
+}
+
 #ifndef xen_mb
 #define xen_mb() mb()
 #endif
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRI-0000wL-8R; Thu, 02 Jan 2014 15:44:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRF-0000ty-6S
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:02 +0000
Received: from [193.109.254.147:50321] by server-11.bemta-14.messagelabs.com
	id 62/37-20576-04985C25; Thu, 02 Jan 2014 15:44:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388677438!8503480!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20951 invoked from network); 2 Jan 2014 15:43:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070937"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:57 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRB-00043n-8m;
	Thu, 02 Jan 2014 15:43:57 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:37 +0100
Message-ID: <1388677433-49525-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 03/19] xen: add and enable Xen console for
	PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds and enables the Xen console used by PVH guests.
---
 sys/conf/files                     |    4 +-
 sys/dev/xen/console/console.c      |   37 +++++++++++++++++++++++++++++------
 sys/dev/xen/console/xencons_ring.c |   15 +++++++++----
 sys/i386/include/xen/xen-os.h      |    1 -
 sys/i386/xen/xen_machdep.c         |   17 ----------------
 sys/x86/xen/pv.c                   |    4 +++
 sys/xen/xen-os.h                   |    4 +++
 7 files changed, 50 insertions(+), 32 deletions(-)

diff --git a/sys/conf/files b/sys/conf/files
index a73d31e..f55479d 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -2523,8 +2523,8 @@ dev/xe/if_xe_pccard.c		optional xe pccard
 dev/xen/balloon/balloon.c	optional xen | xenhvm
 dev/xen/blkfront/blkfront.c	optional xen | xenhvm
 dev/xen/blkback/blkback.c	optional xen | xenhvm
-dev/xen/console/console.c	optional xen
-dev/xen/console/xencons_ring.c	optional xen
+dev/xen/console/console.c	optional xen | xenhvm
+dev/xen/console/xencons_ring.c	optional xen | xenhvm
 dev/xen/control/control.c	optional xen | xenhvm
 dev/xen/netback/netback.c	optional xen | xenhvm
 dev/xen/netfront/netfront.c	optional xen | xenhvm
diff --git a/sys/dev/xen/console/console.c b/sys/dev/xen/console/console.c
index 23eaee2..899dffc 100644
--- a/sys/dev/xen/console/console.c
+++ b/sys/dev/xen/console/console.c
@@ -69,11 +69,14 @@ struct mtx              cn_mtx;
 static char wbuf[WBUF_SIZE];
 static char rbuf[RBUF_SIZE];
 static int rc, rp;
-static unsigned int cnsl_evt_reg;
+unsigned int cnsl_evt_reg;
 static unsigned int wc, wp; /* write_cons, write_prod */
 xen_intr_handle_t xen_intr_handle;
 device_t xencons_dev;
 
+/* Virtual address of the shared console page */
+char *console_page;
+
 #ifdef KDB
 static int	xc_altbrk;
 #endif
@@ -110,9 +113,26 @@ static struct ttydevsw xc_ttydevsw = {
         .tsw_outwakeup	= xcoutwakeup,
 };
 
+/*----------------------------- Debug function -------------------------------*/
+#define XC_PRINTF_BUFSIZE 1024
+void
+xc_printf(const char *fmt, ...)
+{
+	static char buf[XC_PRINTF_BUFSIZE];
+	__va_list ap;
+
+	va_start(ap, fmt);
+	vsnprintf(buf, sizeof(buf), fmt, ap);
+	va_end(ap);
+	HYPERVISOR_console_write(buf, strlen(buf));
+}
+
 static void
 xc_cnprobe(struct consdev *cp)
 {
+	if (!xen_pv_domain())
+		return;
+
 	cp->cn_pri = CN_REMOTE;
 	sprintf(cp->cn_name, "%s0", driver_name);
 }
@@ -175,7 +195,7 @@ static void
 xc_cnputc(struct consdev *dev, int c)
 {
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		xc_cnputc_dom0(dev, c);
 	else
 		xc_cnputc_domu(dev, c);
@@ -206,8 +226,7 @@ xcons_putc(int c)
 		xcons_force_flush();
 #endif	    	
 	}
-	if (cnsl_evt_reg)
-		__xencons_tx_flush();
+	__xencons_tx_flush();
 	
 	/* inform start path that we're pretty full */
 	return ((wp - wc) >= WBUF_SIZE - 100) ? TRUE : FALSE;
@@ -217,6 +236,10 @@ static void
 xc_identify(driver_t *driver, device_t parent)
 {
 	device_t child;
+
+	if (!xen_pv_domain())
+		return;
+
 	child = BUS_ADD_CHILD(parent, 0, driver_name, 0);
 	device_set_driver(child, driver);
 	device_set_desc(child, "Xen Console");
@@ -245,7 +268,7 @@ xc_attach(device_t dev)
 	cnsl_evt_reg = 1;
 	callout_reset(&xc_callout, XC_POLLTIME, xc_timeout, xccons);
     
-	if (xen_start_info->flags & SIF_INITDOMAIN) {
+	if (xen_initial_domain()) {
 		error = xen_intr_bind_virq(dev, VIRQ_CONSOLE, 0, NULL,
 		                           xencons_priv_interrupt, NULL,
 		                           INTR_TYPE_TTY, &xen_intr_handle);
@@ -309,7 +332,7 @@ __xencons_tx_flush(void)
 		sz = wp - wc;
 		if (sz > (WBUF_SIZE - WBUF_MASK(wc)))
 			sz = WBUF_SIZE - WBUF_MASK(wc);
-		if (xen_start_info->flags & SIF_INITDOMAIN) {
+		if (xen_initial_domain()) {
 			HYPERVISOR_console_io(CONSOLEIO_write, sz, &wbuf[WBUF_MASK(wc)]);
 			wc += sz;
 		} else {
@@ -424,7 +447,7 @@ xcons_force_flush(void)
 {
 	int        sz;
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		return;
 
 	/* Spin until console data is flushed through to the domain controller. */
diff --git a/sys/dev/xen/console/xencons_ring.c b/sys/dev/xen/console/xencons_ring.c
index 3701551..d826363 100644
--- a/sys/dev/xen/console/xencons_ring.c
+++ b/sys/dev/xen/console/xencons_ring.c
@@ -32,9 +32,9 @@ __FBSDID("$FreeBSD$");
 
 #define console_evtchn	console.domU.evtchn
 xen_intr_handle_t console_handle;
-extern char *console_page;
 extern struct mtx              cn_mtx;
 extern device_t xencons_dev;
+extern int cnsl_evt_reg;
 
 static inline struct xencons_interface *
 xencons_interface(void)
@@ -60,6 +60,8 @@ xencons_ring_send(const char *data, unsigned len)
 	struct xencons_interface *intf; 
 	XENCONS_RING_IDX cons, prod;
 	int sent;
+	struct evtchn_send send = { .port =
+	                            HYPERVISOR_start_info->console_evtchn };
 
 	intf = xencons_interface();
 	cons = intf->out_cons;
@@ -76,7 +78,10 @@ xencons_ring_send(const char *data, unsigned len)
 	wmb();
 	intf->out_prod = prod;
 
-	xen_intr_signal(console_handle);
+	if (cnsl_evt_reg)
+		xen_intr_signal(console_handle);
+	else
+		HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
 
 	return sent;
 
@@ -125,11 +130,11 @@ xencons_ring_init(void)
 {
 	int err;
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return 0;
 
 	err = xen_intr_bind_local_port(xencons_dev,
-	    xen_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
+	    HYPERVISOR_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
 	    INTR_TYPE_MISC | INTR_MPSAFE, &console_handle);
 	if (err) {
 		return err;
@@ -145,7 +150,7 @@ void
 xencons_suspend(void)
 {
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return;
 
 	xen_intr_unbind(&console_handle);
diff --git a/sys/i386/include/xen/xen-os.h b/sys/i386/include/xen/xen-os.h
index a8fba61..3d1ef04 100644
--- a/sys/i386/include/xen/xen-os.h
+++ b/sys/i386/include/xen/xen-os.h
@@ -45,7 +45,6 @@ static inline void rep_nop(void)
 #define cpu_relax() rep_nop()
 
 #ifndef XENHVM
-void xc_printf(const char *fmt, ...);
 
 #ifdef SMP
 extern int gdtset;
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index fd575ee..09c01f1 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -186,21 +186,6 @@ xen_boothowto(char *envp)
 	return howto;
 }
 
-#define XC_PRINTF_BUFSIZE 1024
-void
-xc_printf(const char *fmt, ...)
-{
-        __va_list ap;
-        int retval;
-        static char buf[XC_PRINTF_BUFSIZE];
-
-        va_start(ap, fmt);
-        retval = vsnprintf(buf, XC_PRINTF_BUFSIZE - 1, fmt, ap);
-        va_end(ap);
-        buf[retval] = 0;
-        (void)HYPERVISOR_console_write(buf, retval);
-}
-
 
 #define XPQUEUE_SIZE 128
 
@@ -745,8 +730,6 @@ void initvalues(start_info_t *startinfo);
 struct xenstore_domain_interface;
 extern struct xenstore_domain_interface *xen_store;
 
-char *console_page;
-
 void *
 bootmem_alloc(unsigned int size) 
 {
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 5571ecf..db3b7a3 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -70,9 +70,12 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	int i;
 
 	if ((si == NULL) || (xenstack == 0)) {
+		xc_printf("ERROR: invalid start_info or xen stack, halting\n");
 		HYPERVISOR_shutdown(SHUTDOWN_crash);
 	}
 
+	xc_printf("FreeBSD PVH running on %s\n", si->magic);
+
 	/* We use 3 pages of xen stack for the boot pagetables */
 	physfree = xenstack + 3 * PAGE_SIZE - KERNBASE;
 
@@ -90,6 +93,7 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	 */
 	xen_store = (struct xenstore_domain_interface *)
 	            (ptoa(si->store_mfn) + KERNBASE);
+	console_page = (char *)(ptoa(si->console.domU.mfn) + KERNBASE);
 
 	xen_domain_type = XEN_PV_DOMAIN;
 	vm_guest = VM_GUEST_XEN;
diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index e8a5a99..e005ccd 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -55,6 +55,7 @@ extern start_info_t *HYPERVISOR_start_info;
 
 /* XXX: we need to get rid of this and use HYPERVISOR_start_info directly */
 extern struct xenstore_domain_interface *xen_store;
+extern char *console_page;
 
 enum xen_domain_type {
 	XEN_NATIVE,             /* running on bare hardware    */
@@ -89,6 +90,9 @@ xen_initial_domain(void)
 	        HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
 }
 
+/* Debug function, prints directly to hypervisor console */
+void xc_printf(const char *, ...);
+
 #ifndef xen_mb
 #define xen_mb() mb()
 #endif
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRI-0000wL-8R; Thu, 02 Jan 2014 15:44:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRF-0000ty-6S
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:02 +0000
Received: from [193.109.254.147:50321] by server-11.bemta-14.messagelabs.com
	id 62/37-20576-04985C25; Thu, 02 Jan 2014 15:44:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388677438!8503480!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20951 invoked from network); 2 Jan 2014 15:43:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070937"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:57 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRB-00043n-8m;
	Thu, 02 Jan 2014 15:43:57 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:37 +0100
Message-ID: <1388677433-49525-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 03/19] xen: add and enable Xen console for
	PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds and enables the Xen console for PVH guests: the console code
previously built only for PV (XEN) kernels is now also built for
XENHVM, and the attachment paths are gated on xen_pv_domain().
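
The patch also adds an xc_printf() debug helper that formats into a
fixed-size buffer and hands the result to the hypervisor console. A
minimal userland sketch of that pattern (fake_console_write() is a
hypothetical stand-in for HYPERVISOR_console_write(), used here only to
capture the output):

```c
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for HYPERVISOR_console_write(): capture output. */
static char sink[1024];

static void fake_console_write(const char *s, size_t len)
{
	memcpy(sink, s, len);
	sink[len] = '\0';
}

#define XC_PRINTF_BUFSIZE 1024

/* Same shape as the xc_printf() added by the patch: format into a
 * static fixed buffer, then pass the result to the console backend. */
static void xc_printf_sketch(const char *fmt, ...)
{
	static char buf[XC_PRINTF_BUFSIZE];
	va_list ap;

	va_start(ap, fmt);
	vsnprintf(buf, sizeof(buf), fmt, ap);
	va_end(ap);
	fake_console_write(buf, strlen(buf));
}
```

Because the buffer is static and unlocked, this is only safe very early
in boot, before other CPUs run, which matches how the patch uses it.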
---
 sys/conf/files                     |    4 +-
 sys/dev/xen/console/console.c      |   37 +++++++++++++++++++++++++++++------
 sys/dev/xen/console/xencons_ring.c |   15 +++++++++----
 sys/i386/include/xen/xen-os.h      |    1 -
 sys/i386/xen/xen_machdep.c         |   17 ----------------
 sys/x86/xen/pv.c                   |    4 +++
 sys/xen/xen-os.h                   |    4 +++
 7 files changed, 50 insertions(+), 32 deletions(-)

diff --git a/sys/conf/files b/sys/conf/files
index a73d31e..f55479d 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -2523,8 +2523,8 @@ dev/xe/if_xe_pccard.c		optional xe pccard
 dev/xen/balloon/balloon.c	optional xen | xenhvm
 dev/xen/blkfront/blkfront.c	optional xen | xenhvm
 dev/xen/blkback/blkback.c	optional xen | xenhvm
-dev/xen/console/console.c	optional xen
-dev/xen/console/xencons_ring.c	optional xen
+dev/xen/console/console.c	optional xen | xenhvm
+dev/xen/console/xencons_ring.c	optional xen | xenhvm
 dev/xen/control/control.c	optional xen | xenhvm
 dev/xen/netback/netback.c	optional xen | xenhvm
 dev/xen/netfront/netfront.c	optional xen | xenhvm
diff --git a/sys/dev/xen/console/console.c b/sys/dev/xen/console/console.c
index 23eaee2..899dffc 100644
--- a/sys/dev/xen/console/console.c
+++ b/sys/dev/xen/console/console.c
@@ -69,11 +69,14 @@ struct mtx              cn_mtx;
 static char wbuf[WBUF_SIZE];
 static char rbuf[RBUF_SIZE];
 static int rc, rp;
-static unsigned int cnsl_evt_reg;
+unsigned int cnsl_evt_reg;
 static unsigned int wc, wp; /* write_cons, write_prod */
 xen_intr_handle_t xen_intr_handle;
 device_t xencons_dev;
 
+/* Virtual address of the shared console page */
+char *console_page;
+
 #ifdef KDB
 static int	xc_altbrk;
 #endif
@@ -110,9 +113,26 @@ static struct ttydevsw xc_ttydevsw = {
         .tsw_outwakeup	= xcoutwakeup,
 };
 
+/*----------------------------- Debug function -------------------------------*/
+#define XC_PRINTF_BUFSIZE 1024
+void
+xc_printf(const char *fmt, ...)
+{
+	static char buf[XC_PRINTF_BUFSIZE];
+	__va_list ap;
+
+	va_start(ap, fmt);
+	vsnprintf(buf, sizeof(buf), fmt, ap);
+	va_end(ap);
+	HYPERVISOR_console_write(buf, strlen(buf));
+}
+
 static void
 xc_cnprobe(struct consdev *cp)
 {
+	if (!xen_pv_domain())
+		return;
+
 	cp->cn_pri = CN_REMOTE;
 	sprintf(cp->cn_name, "%s0", driver_name);
 }
@@ -175,7 +195,7 @@ static void
 xc_cnputc(struct consdev *dev, int c)
 {
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		xc_cnputc_dom0(dev, c);
 	else
 		xc_cnputc_domu(dev, c);
@@ -206,8 +226,7 @@ xcons_putc(int c)
 		xcons_force_flush();
 #endif	    	
 	}
-	if (cnsl_evt_reg)
-		__xencons_tx_flush();
+	__xencons_tx_flush();
 	
 	/* inform start path that we're pretty full */
 	return ((wp - wc) >= WBUF_SIZE - 100) ? TRUE : FALSE;
@@ -217,6 +236,10 @@ static void
 xc_identify(driver_t *driver, device_t parent)
 {
 	device_t child;
+
+	if (!xen_pv_domain())
+		return;
+
 	child = BUS_ADD_CHILD(parent, 0, driver_name, 0);
 	device_set_driver(child, driver);
 	device_set_desc(child, "Xen Console");
@@ -245,7 +268,7 @@ xc_attach(device_t dev)
 	cnsl_evt_reg = 1;
 	callout_reset(&xc_callout, XC_POLLTIME, xc_timeout, xccons);
     
-	if (xen_start_info->flags & SIF_INITDOMAIN) {
+	if (xen_initial_domain()) {
 		error = xen_intr_bind_virq(dev, VIRQ_CONSOLE, 0, NULL,
 		                           xencons_priv_interrupt, NULL,
 		                           INTR_TYPE_TTY, &xen_intr_handle);
@@ -309,7 +332,7 @@ __xencons_tx_flush(void)
 		sz = wp - wc;
 		if (sz > (WBUF_SIZE - WBUF_MASK(wc)))
 			sz = WBUF_SIZE - WBUF_MASK(wc);
-		if (xen_start_info->flags & SIF_INITDOMAIN) {
+		if (xen_initial_domain()) {
 			HYPERVISOR_console_io(CONSOLEIO_write, sz, &wbuf[WBUF_MASK(wc)]);
 			wc += sz;
 		} else {
@@ -424,7 +447,7 @@ xcons_force_flush(void)
 {
 	int        sz;
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		return;
 
 	/* Spin until console data is flushed through to the domain controller. */
diff --git a/sys/dev/xen/console/xencons_ring.c b/sys/dev/xen/console/xencons_ring.c
index 3701551..d826363 100644
--- a/sys/dev/xen/console/xencons_ring.c
+++ b/sys/dev/xen/console/xencons_ring.c
@@ -32,9 +32,9 @@ __FBSDID("$FreeBSD$");
 
 #define console_evtchn	console.domU.evtchn
 xen_intr_handle_t console_handle;
-extern char *console_page;
 extern struct mtx              cn_mtx;
 extern device_t xencons_dev;
+extern int cnsl_evt_reg;
 
 static inline struct xencons_interface *
 xencons_interface(void)
@@ -60,6 +60,8 @@ xencons_ring_send(const char *data, unsigned len)
 	struct xencons_interface *intf; 
 	XENCONS_RING_IDX cons, prod;
 	int sent;
+	struct evtchn_send send = { .port =
+	                            HYPERVISOR_start_info->console_evtchn };
 
 	intf = xencons_interface();
 	cons = intf->out_cons;
@@ -76,7 +78,10 @@ xencons_ring_send(const char *data, unsigned len)
 	wmb();
 	intf->out_prod = prod;
 
-	xen_intr_signal(console_handle);
+	if (cnsl_evt_reg)
+		xen_intr_signal(console_handle);
+	else
+		HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
 
 	return sent;
 
@@ -125,11 +130,11 @@ xencons_ring_init(void)
 {
 	int err;
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return 0;
 
 	err = xen_intr_bind_local_port(xencons_dev,
-	    xen_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
+	    HYPERVISOR_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
 	    INTR_TYPE_MISC | INTR_MPSAFE, &console_handle);
 	if (err) {
 		return err;
@@ -145,7 +150,7 @@ void
 xencons_suspend(void)
 {
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return;
 
 	xen_intr_unbind(&console_handle);
diff --git a/sys/i386/include/xen/xen-os.h b/sys/i386/include/xen/xen-os.h
index a8fba61..3d1ef04 100644
--- a/sys/i386/include/xen/xen-os.h
+++ b/sys/i386/include/xen/xen-os.h
@@ -45,7 +45,6 @@ static inline void rep_nop(void)
 #define cpu_relax() rep_nop()
 
 #ifndef XENHVM
-void xc_printf(const char *fmt, ...);
 
 #ifdef SMP
 extern int gdtset;
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index fd575ee..09c01f1 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -186,21 +186,6 @@ xen_boothowto(char *envp)
 	return howto;
 }
 
-#define XC_PRINTF_BUFSIZE 1024
-void
-xc_printf(const char *fmt, ...)
-{
-        __va_list ap;
-        int retval;
-        static char buf[XC_PRINTF_BUFSIZE];
-
-        va_start(ap, fmt);
-        retval = vsnprintf(buf, XC_PRINTF_BUFSIZE - 1, fmt, ap);
-        va_end(ap);
-        buf[retval] = 0;
-        (void)HYPERVISOR_console_write(buf, retval);
-}
-
 
 #define XPQUEUE_SIZE 128
 
@@ -745,8 +730,6 @@ void initvalues(start_info_t *startinfo);
 struct xenstore_domain_interface;
 extern struct xenstore_domain_interface *xen_store;
 
-char *console_page;
-
 void *
 bootmem_alloc(unsigned int size) 
 {
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 5571ecf..db3b7a3 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -70,9 +70,12 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	int i;
 
 	if ((si == NULL) || (xenstack == 0)) {
+		xc_printf("ERROR: invalid start_info or xen stack, halting\n");
 		HYPERVISOR_shutdown(SHUTDOWN_crash);
 	}
 
+	xc_printf("FreeBSD PVH running on %s\n", si->magic);
+
 	/* We use 3 pages of xen stack for the boot pagetables */
 	physfree = xenstack + 3 * PAGE_SIZE - KERNBASE;
 
@@ -90,6 +93,7 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	 */
 	xen_store = (struct xenstore_domain_interface *)
 	            (ptoa(si->store_mfn) + KERNBASE);
+	console_page = (char *)(ptoa(si->console.domU.mfn) + KERNBASE);
 
 	xen_domain_type = XEN_PV_DOMAIN;
 	vm_guest = VM_GUEST_XEN;
diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index e8a5a99..e005ccd 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -55,6 +55,7 @@ extern start_info_t *HYPERVISOR_start_info;
 
 /* XXX: we need to get rid of this and use HYPERVISOR_start_info directly */
 extern struct xenstore_domain_interface *xen_store;
+extern char *console_page;
 
 enum xen_domain_type {
 	XEN_NATIVE,             /* running on bare hardware    */
@@ -89,6 +90,9 @@ xen_initial_domain(void)
 	        HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
 }
 
+/* Debug function, prints directly to hypervisor console */
+void xc_printf(const char *, ...);
+
 #ifndef xen_mb
 #define xen_mb() mb()
 #endif
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRI-0000xI-TD; Thu, 02 Jan 2014 15:44:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRF-0000tz-BK
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:02 +0000
Received: from [85.158.139.211:4164] by server-11.bemta-5.messagelabs.com id
	6E/0D-23268-04985C25; Thu, 02 Jan 2014 15:44:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1388677437!7554277!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31099 invoked from network); 2 Jan 2014 15:43:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259052"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:58 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRC-00043n-Ec;
	Thu, 02 Jan 2014 15:43:58 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:39 +0100
Message-ID: <1388677433-49525-6-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 05/19] xen: rework xen timer so it can be
	used early in boot process
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This should not introduce any functional change. By taking the
vcpu_info as a parameter instead of fetching it internally, the
functions become suitable to be called before the vcpu_info struct has
actually been mapped on a per-cpu basis.
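
The unchanged xen_fetch_vcpu_tinfo() helper that this code relies on
uses a seqlock-style snapshot: copy the struct, then retry if the
version changed (or was odd, i.e. mid-update) during the copy. A
single-threaded userland sketch of that retry shape, with a simplified
stand-in for struct vcpu_time_info:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for struct vcpu_time_info: a version counter
 * (odd while the hypervisor is mid-update) plus a payload. */
struct tinfo {
	uint32_t version;
	uint64_t system_time;
};

/* Snapshot *src into *dst, retrying until a stable (even, unchanged)
 * version is observed; returns the version of the stable snapshot. */
static uint32_t fetch_tinfo(struct tinfo *dst, const struct tinfo *src)
{
	uint32_t pre, post;

	do {
		pre = src->version & ~1u;  /* treat odd as "in progress" */
		memcpy(dst, src, sizeof(*dst));
		post = src->version;
	} while (pre != post);
	return pre;
}
```

In the kernel the copy is bracketed by barriers; this sketch omits them
since the writer side is absent here.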
---
 sys/dev/xen/timer/timer.c |   29 ++++++++++++++++++++---------
 1 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/sys/dev/xen/timer/timer.c b/sys/dev/xen/timer/timer.c
index 354085b..b2f6bcd 100644
--- a/sys/dev/xen/timer/timer.c
+++ b/sys/dev/xen/timer/timer.c
@@ -230,22 +230,22 @@ xen_fetch_vcpu_tinfo(struct vcpu_time_info *dst, struct vcpu_time_info *src)
 /**
  * \brief Get the current time, in nanoseconds, since the hypervisor booted.
  *
+ * \param vcpu		vcpu_info structure to fetch the time from.
+ *
  * \note This function returns the current CPU's idea of this value, unless
  *       it happens to be less than another CPU's previously determined value.
  */
 static uint64_t
-xen_fetch_vcpu_time(void)
+xen_fetch_vcpu_time(struct vcpu_info *vcpu)
 {
 	struct vcpu_time_info dst;
 	struct vcpu_time_info *src;
 	uint32_t pre_version;
 	uint64_t now;
 	volatile uint64_t last;
-	struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);
 
 	src = &vcpu->time;
 
-	critical_enter();
 	do {
 		pre_version = xen_fetch_vcpu_tinfo(&dst, src);
 		barrier();
@@ -266,16 +266,19 @@ xen_fetch_vcpu_time(void)
 		}
 	} while (!atomic_cmpset_64(&xen_timer_last_time, last, now));
 
-	critical_exit();
-
 	return (now);
 }
 
 static uint32_t
 xentimer_get_timecount(struct timecounter *tc)
 {
+	uint32_t xen_time;
 
-	return ((uint32_t)xen_fetch_vcpu_time() & UINT_MAX);
+	critical_enter();
+	xen_time = (uint32_t)xen_fetch_vcpu_time(DPCPU_GET(vcpu_info)) & UINT_MAX;
+	critical_exit();
+
+	return (xen_time);
 }
 
 /**
@@ -305,7 +308,12 @@ xen_fetch_wallclock(struct timespec *ts)
 static void
 xen_fetch_uptime(struct timespec *ts)
 {
-	uint64_t uptime = xen_fetch_vcpu_time();
+	uint64_t uptime;
+
+	critical_enter();
+	uptime = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info));
+	critical_exit();
+
 	ts->tv_sec = uptime / NSEC_IN_SEC;
 	ts->tv_nsec = uptime % NSEC_IN_SEC;
 }
@@ -354,7 +362,7 @@ xentimer_intr(void *arg)
 	struct xentimer_softc *sc = (struct xentimer_softc *)arg;
 	struct xentimer_pcpu_data *pcpu = DPCPU_PTR(xentimer_pcpu);
 
-	pcpu->last_processed = xen_fetch_vcpu_time();
+	pcpu->last_processed = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info));
 	if (pcpu->timer != 0 && sc->et.et_active)
 		sc->et.et_event_cb(&sc->et, sc->et.et_arg);
 
@@ -415,7 +423,10 @@ xentimer_et_start(struct eventtimer *et,
 	do {
 		if (++i == 60)
 			panic("can't schedule timer");
-		next_time = xen_fetch_vcpu_time() + first_in_ns;
+		critical_enter();
+		next_time = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info)) +
+		            first_in_ns;
+		critical_exit();
 		error = xentimer_vcpu_start_timer(cpu, next_time);
 	} while (error == -ETIME);
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRJ-0000xz-CG; Thu, 02 Jan 2014 15:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRF-0000u6-Kd
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:02 +0000
Received: from [85.158.139.211:4123] by server-9.bemta-5.messagelabs.com id
	1C/3F-15098-F3985C25; Thu, 02 Jan 2014 15:43:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1388677437!7554277!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31043 invoked from network); 2 Jan 2014 15:43:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259051"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:57 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRB-00043n-RZ;
	Thu, 02 Jan 2014 15:43:57 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:38 +0100
Message-ID: <1388677433-49525-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 04/19] amd64: introduce hook for custom
	preload metadata parsers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/amd64/machdep.c   |   41 ++++++++++++++++------
 sys/amd64/include/sysarch.h |   12 ++++++
 sys/x86/xen/pv.c            |   82 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 124 insertions(+), 11 deletions(-)
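
The hook works by routing the early preload-metadata parsing through a
table of function pointers: machdep.c installs a native default, and the
Xen PV entry point swaps in its own table before the common boot path
runs. A minimal userland sketch of that pattern (names and return
values are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the init_ops hook: a function-pointer table with a native
 * default that an alternative environment can override early in boot. */
struct init_ops_sketch {
	const char *(*parse_preload_data)(unsigned long modulep);
};

static const char *native_parse(unsigned long modulep)
{
	(void)modulep;
	return "native";
}

static const char *xen_parse(unsigned long modulep)
{
	(void)modulep;
	return "xen";
}

/* Default table, as installed in machdep.c. */
static struct init_ops_sketch init_ops = {
	.parse_preload_data = native_parse,
};

static struct init_ops_sketch xen_init_ops = {
	.parse_preload_data = xen_parse,
};

/* The early Xen entry point overrides the whole table, as
 * xen_pv_set_init_ops() does in the patch. */
static void set_xen_init_ops(void)
{
	init_ops = xen_init_ops;
}
```

The common code then calls init_ops.parse_preload_data() without caring
which environment filled in the table.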

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index eae657b..e073eea 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -126,6 +126,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/reg.h>
 #include <machine/sigframe.h>
 #include <machine/specialreg.h>
+#include <machine/sysarch.h>
 #ifdef PERFMON
 #include <machine/perfmon.h>
 #endif
@@ -165,6 +166,14 @@ static int  set_fpcontext(struct thread *td, const mcontext_t *mcp,
     char *xfpustate, size_t xfpustate_len);
 SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
 
+/* Preload data parse function */
+static caddr_t native_parse_preload_data(u_int64_t);
+
+/* Default init_ops implementation. */
+struct init_ops init_ops = {
+	.parse_preload_data =	native_parse_preload_data,
+};
+
 /*
  * The file "conf/ldscript.amd64" defines the symbol "kernphys".  Its value is
  * the physical address at which the kernel is loaded.
@@ -1683,6 +1692,26 @@ do_next:
 	msgbufp = (struct msgbuf *)PHYS_TO_DMAP(phys_avail[pa_indx]);
 }
 
+static caddr_t
+native_parse_preload_data(u_int64_t modulep)
+{
+	caddr_t kmdp;
+
+	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
+	preload_bootstrap_relocate(KERNBASE);
+	kmdp = preload_search_by_type("elf kernel");
+	if (kmdp == NULL)
+		kmdp = preload_search_by_type("elf64 kernel");
+	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
+	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
+#ifdef DDB
+	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
+	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
+#endif
+
+	return (kmdp);
+}
+
 u_int64_t
 hammer_time(u_int64_t modulep, u_int64_t physfree)
 {
@@ -1707,17 +1736,7 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	 */
 	proc_linkup0(&proc0, &thread0);
 
-	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
-	preload_bootstrap_relocate(KERNBASE);
-	kmdp = preload_search_by_type("elf kernel");
-	if (kmdp == NULL)
-		kmdp = preload_search_by_type("elf64 kernel");
-	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
-	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
-#ifdef DDB
-	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
-	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
-#endif
+	kmdp = init_ops.parse_preload_data(modulep);
 
 	/* Init basic tunables, hz etc */
 	init_param1();
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index cd380d4..58ac8cd 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -4,3 +4,15 @@
 /* $FreeBSD$ */
 
 #include <x86/sysarch.h>
+
+/*
+ * Struct containing pointers to init functions whose
+ * implementation is run time selectable.  Selection can be made,
+ * for example, based on detection of a BIOS variant or
+ * hypervisor environment.
+ */
+struct init_ops {
+	caddr_t	(*parse_preload_data)(u_int64_t);
+};
+
+extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index db3b7a3..908b50b 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -46,6 +46,8 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_pager.h>
 #include <vm/vm_param.h>
 
+#include <machine/sysarch.h>
+
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
 
@@ -54,6 +56,36 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+/*--------------------------- Forward Declarations ---------------------------*/
+static caddr_t xen_pv_parse_preload_data(u_int64_t);
+
+static void xen_pv_set_init_ops(void);
+
+/*-------------------------------- Global Data -------------------------------*/
+/* Xen init_ops implementation. */
+struct init_ops xen_init_ops = {
+	.parse_preload_data =	xen_pv_parse_preload_data,
+};
+
+static struct
+{
+	const char	*ev;
+	int		mask;
+} howto_names[] = {
+	{"boot_askname",	RB_ASKNAME},
+	{"boot_single",		RB_SINGLE},
+	{"boot_nosync",		RB_NOSYNC},
+	{"boot_halt",		RB_ASKNAME},
+	{"boot_serial",		RB_SERIAL},
+	{"boot_cdrom",		RB_CDROM},
+	{"boot_gdb",		RB_GDB},
+	{"boot_gdb_pause",	RB_RESERVED1},
+	{"boot_verbose",	RB_VERBOSE},
+	{"boot_multicons",	RB_MULTIPLE},
+	{NULL,	0}
+};
+
+/*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
  *
@@ -118,6 +150,56 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	}
 	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
 
+	/* Set the hooks for early functions that diverge from bare metal */
+	xen_pv_set_init_ops();
+
 	/* Now we can jump into the native init function */
 	return (hammer_time(0, physfree));
 }
+
+/*-------------------------------- PV specific -------------------------------*/
+/*
+ * Functions to convert the "extra" parameters passed by Xen
+ * into FreeBSD boot options (from the i386 Xen port).
+ */
+static char *
+xen_setbootenv(char *cmd_line)
+{
+	char *cmd_line_next;
+
+	/* Skip leading spaces */
+	for (; *cmd_line == ' '; cmd_line++);
+
+	for (cmd_line_next = cmd_line; strsep(&cmd_line_next, ",") != NULL;);
+	return (cmd_line);
+}
+
+static int
+xen_boothowto(char *envp)
+{
+	int i, howto = 0;
+
+	/* get equivalents from the environment */
+	for (i = 0; howto_names[i].ev != NULL; i++)
+		if (getenv(howto_names[i].ev) != NULL)
+			howto |= howto_names[i].mask;
+	return (howto);
+}
+
+static caddr_t
+xen_pv_parse_preload_data(u_int64_t modulep)
+{
+	/* Parse the extra boot information given by Xen */
+	if (HYPERVISOR_start_info->cmd_line)
+		kern_envp = xen_setbootenv(HYPERVISOR_start_info->cmd_line);
+	boothowto |= xen_boothowto(kern_envp);
+
+	return (NULL);
+}
+
+static void
+xen_pv_set_init_ops(void)
+{
+	/* Init ops for Xen PV */
+	init_ops = xen_init_ops;
+}
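
The xen_boothowto() loop above maps environment variables to RB_* boot
flags by scanning a NULL-terminated name/mask table. A self-contained
userland sketch of the same mapping, using getenv() in place of the
kernel environment (the flag values below are illustrative stand-ins
for the sys/reboot.h constants):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for the RB_* flags from sys/reboot.h. */
#define RB_ASKNAME 0x001
#define RB_SINGLE  0x002
#define RB_VERBOSE 0x800

static const struct {
	const char *ev;
	int mask;
} howto_names_sketch[] = {
	{ "boot_askname", RB_ASKNAME },
	{ "boot_single",  RB_SINGLE },
	{ "boot_verbose", RB_VERBOSE },
	{ NULL, 0 }
};

/* OR together the mask of every boot_* variable present in the
 * environment, as xen_boothowto() does with the kernel environment. */
static int boothowto_sketch(void)
{
	int i, howto = 0;

	for (i = 0; howto_names_sketch[i].ev != NULL; i++)
		if (getenv(howto_names_sketch[i].ev) != NULL)
			howto |= howto_names_sketch[i].mask;
	return howto;
}
```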
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/amd64/machdep.c   |   41 ++++++++++++++++------
 sys/amd64/include/sysarch.h |   12 ++++++
 sys/x86/xen/pv.c            |   82 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 124 insertions(+), 11 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index eae657b..e073eea 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -126,6 +126,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/reg.h>
 #include <machine/sigframe.h>
 #include <machine/specialreg.h>
+#include <machine/sysarch.h>
 #ifdef PERFMON
 #include <machine/perfmon.h>
 #endif
@@ -165,6 +166,14 @@ static int  set_fpcontext(struct thread *td, const mcontext_t *mcp,
     char *xfpustate, size_t xfpustate_len);
 SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
 
+/* Preload data parse function */
+static caddr_t native_parse_preload_data(u_int64_t);
+
+/* Default init_ops implementation. */
+struct init_ops init_ops = {
+	.parse_preload_data =	native_parse_preload_data,
+};
+
 /*
  * The file "conf/ldscript.amd64" defines the symbol "kernphys".  Its value is
  * the physical address at which the kernel is loaded.
@@ -1683,6 +1692,26 @@ do_next:
 	msgbufp = (struct msgbuf *)PHYS_TO_DMAP(phys_avail[pa_indx]);
 }
 
+static caddr_t
+native_parse_preload_data(u_int64_t modulep)
+{
+	caddr_t kmdp;
+
+	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
+	preload_bootstrap_relocate(KERNBASE);
+	kmdp = preload_search_by_type("elf kernel");
+	if (kmdp == NULL)
+		kmdp = preload_search_by_type("elf64 kernel");
+	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
+	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
+#ifdef DDB
+	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
+	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
+#endif
+
+	return (kmdp);
+}
+
 u_int64_t
 hammer_time(u_int64_t modulep, u_int64_t physfree)
 {
@@ -1707,17 +1736,7 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	 */
 	proc_linkup0(&proc0, &thread0);
 
-	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
-	preload_bootstrap_relocate(KERNBASE);
-	kmdp = preload_search_by_type("elf kernel");
-	if (kmdp == NULL)
-		kmdp = preload_search_by_type("elf64 kernel");
-	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
-	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
-#ifdef DDB
-	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
-	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
-#endif
+	kmdp = init_ops.parse_preload_data(modulep);
 
 	/* Init basic tunables, hz etc */
 	init_param1();
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index cd380d4..58ac8cd 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -4,3 +4,15 @@
 /* $FreeBSD$ */
 
 #include <x86/sysarch.h>
+
+/*
+ * Struct containing pointers to init functions whose
+ * implementation is run time selectable.  Selection can be made,
+ * for example, based on detection of a BIOS variant or
+ * hypervisor environment.
+ */
+struct init_ops {
+	caddr_t	(*parse_preload_data)(u_int64_t);
+};
+
+extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index db3b7a3..908b50b 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -46,6 +46,8 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_pager.h>
 #include <vm/vm_param.h>
 
+#include <machine/sysarch.h>
+
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
 
@@ -54,6 +56,36 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+/*--------------------------- Forward Declarations ---------------------------*/
+static caddr_t xen_pv_parse_preload_data(u_int64_t);
+
+static void xen_pv_set_init_ops(void);
+
+/*-------------------------------- Global Data -------------------------------*/
+/* Xen init_ops implementation. */
+struct init_ops xen_init_ops = {
+	.parse_preload_data =	xen_pv_parse_preload_data,
+};
+
+static struct
+{
+	const char	*ev;
+	int		mask;
+} howto_names[] = {
+	{"boot_askname",	RB_ASKNAME},
+	{"boot_single",		RB_SINGLE},
+	{"boot_nosync",		RB_NOSYNC},
+	{"boot_halt",		RB_HALT},
+	{"boot_serial",		RB_SERIAL},
+	{"boot_cdrom",		RB_CDROM},
+	{"boot_gdb",		RB_GDB},
+	{"boot_gdb_pause",	RB_RESERVED1},
+	{"boot_verbose",	RB_VERBOSE},
+	{"boot_multicons",	RB_MULTIPLE},
+	{NULL,	0}
+};
+
+/*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
  *
@@ -118,6 +150,56 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	}
 	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
 
+	/* Set the hooks for early functions that diverge from bare metal */
+	xen_pv_set_init_ops();
+
 	/* Now we can jump into the native init function */
 	return (hammer_time(0, physfree));
 }
+
+/*-------------------------------- PV specific -------------------------------*/
+/*
+ * Functions to convert the "extra" parameters passed by Xen
+ * into FreeBSD boot options (from the i386 Xen port).
+ */
+static char *
+xen_setbootenv(char *cmd_line)
+{
+	char *cmd_line_next;
+
+	/* Skip leading spaces */
+	for (; *cmd_line == ' '; cmd_line++);
+
+	for (cmd_line_next = cmd_line; strsep(&cmd_line_next, ",") != NULL;);
+	return (cmd_line);
+}
+
+static int
+xen_boothowto(char *envp)
+{
+	int i, howto = 0;
+
+	/* get equivalents from the environment */
+	for (i = 0; howto_names[i].ev != NULL; i++)
+		if (getenv(howto_names[i].ev) != NULL)
+			howto |= howto_names[i].mask;
+	return (howto);
+}
+
+static caddr_t
+xen_pv_parse_preload_data(u_int64_t modulep)
+{
+	/* Parse the extra boot information given by Xen */
+	if (HYPERVISOR_start_info->cmd_line)
+		kern_envp = xen_setbootenv(HYPERVISOR_start_info->cmd_line);
+	boothowto |= xen_boothowto(kern_envp);
+
+	return (NULL);
+}
+
+static void
+xen_pv_set_init_ops(void)
+{
+	/* Init ops for Xen PV */
+	init_ops = xen_init_ops;
+}
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRJ-0000yp-VT; Thu, 02 Jan 2014 15:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRG-0000uT-FG
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:02 +0000
Received: from [85.158.139.211:56966] by server-14.bemta-5.messagelabs.com id
	FA/32-24200-14985C25; Thu, 02 Jan 2014 15:44:01 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1388677437!7554277!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31132 invoked from network); 2 Jan 2014 15:44:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:44:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259053"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:43:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:59 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRD-00043n-0m;
	Thu, 02 Jan 2014 15:43:59 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:40 +0100
Message-ID: <1388677433-49525-7-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 06/19] xen: implement an early timer for Xen
	PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When running as a PVH guest, there's no emulated i8254, so we need to
use the Xen PV timer as the early time source for DELAY. This change
makes the early DELAY implementation runtime-selectable via init_ops
and adds a Xen variant.
---
 sys/amd64/amd64/machdep.c   |    6 ++-
 sys/amd64/include/clock.h   |    5 ++
 sys/amd64/include/sysarch.h |    2 +
 sys/conf/files.amd64        |    1 +
 sys/conf/files.i386         |    1 +
 sys/dev/xen/timer/timer.c   |   33 +++++++++++++
 sys/i386/include/clock.h    |    5 ++
 sys/x86/isa/clock.c         |   53 +--------------------
 sys/x86/x86/delay.c         |  112 +++++++++++++++++++++++++++++++++++++++++++
 sys/x86/xen/pv.c            |    3 +
 10 files changed, 167 insertions(+), 54 deletions(-)
 create mode 100644 sys/x86/x86/delay.c

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index e073eea..178d8b3 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -172,6 +172,8 @@ static caddr_t native_parse_preload_data(u_int64_t);
 /* Default init_ops implementation. */
 struct init_ops init_ops = {
 	.parse_preload_data =	native_parse_preload_data,
+	.early_delay_init =	i8254_init,
+	.early_delay =		i8254_delay,
 };
 
 /*
@@ -1820,10 +1822,10 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	lidt(&r_idt);
 
 	/*
-	 * Initialize the i8254 before the console so that console
+	 * Initialize the early delay before the console so that console
 	 * initialization can use DELAY().
 	 */
-	i8254_init();
+	init_ops.early_delay_init();
 
 	/*
 	 * Initialize the console before we print anything out.
diff --git a/sys/amd64/include/clock.h b/sys/amd64/include/clock.h
index d7f7d82..ac8818f 100644
--- a/sys/amd64/include/clock.h
+++ b/sys/amd64/include/clock.h
@@ -25,6 +25,11 @@ extern int	smp_tsc;
 #endif
 
 void	i8254_init(void);
+void	i8254_delay(int);
+#ifdef XENHVM
+void	xen_delay_init(void);
+void	xen_delay(int);
+#endif
 
 /*
  * Driver to clock driver interface.
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 58ac8cd..60fa635 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -13,6 +13,8 @@
  */
 struct init_ops {
 	caddr_t	(*parse_preload_data)(u_int64_t);
+	void	(*early_delay_init)(void);
+	void	(*early_delay)(int);
 };
 
 extern struct init_ops init_ops;
diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index 16029d8..109a796 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -565,6 +565,7 @@ x86/x86/mptable_pci.c		optional	mptable pci
 x86/x86/msi.c			optional	pci
 x86/x86/nexus.c			standard
 x86/x86/tsc.c			standard
+x86/x86/delay.c			standard
 x86/xen/hvm.c			optional	xenhvm
 x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
diff --git a/sys/conf/files.i386 b/sys/conf/files.i386
index eb8697c..790296d 100644
--- a/sys/conf/files.i386
+++ b/sys/conf/files.i386
@@ -600,5 +600,6 @@ x86/x86/mptable_pci.c		optional apic native pci
 x86/x86/msi.c			optional apic pci
 x86/x86/nexus.c			standard
 x86/x86/tsc.c			standard
+x86/x86/delay.c			standard
 x86/xen/hvm.c			optional xenhvm
 x86/xen/xen_intr.c		optional xen | xenhvm
diff --git a/sys/dev/xen/timer/timer.c b/sys/dev/xen/timer/timer.c
index b2f6bcd..96372ab 100644
--- a/sys/dev/xen/timer/timer.c
+++ b/sys/dev/xen/timer/timer.c
@@ -59,6 +59,9 @@ __FBSDID("$FreeBSD$");
 #include <machine/_inttypes.h>
 #include <machine/smp.h>
 
+/* For the declaration of clock_lock */
+#include <isa/rtc.h>
+
 #include "clock_if.h"
 
 static devclass_t xentimer_devclass;
@@ -584,6 +587,36 @@ xentimer_suspend(device_t dev)
 	return (0);
 }
 
+/*
+ * Xen delay early init
+ */
+void xen_delay_init(void)
+{
+	/* Init the clock lock */
+	mtx_init(&clock_lock, "clk", NULL, MTX_SPIN | MTX_NOPROFILE);
+}
+/*
+ * Xen PV DELAY function
+ *
+ * When running in PVH mode we don't have an emulated i8254, so
+ * make use of the Xen time info in order to code a simple DELAY
+ * function that can be used during early boot.
+ */
+void xen_delay(int n)
+{
+	uint64_t end_ns;
+	uint64_t current;
+
+	end_ns = xen_fetch_vcpu_time(&HYPERVISOR_shared_info->vcpu_info[0]);
+	end_ns += n * NSEC_IN_USEC;
+
+	for (;;) {
+		current = xen_fetch_vcpu_time(&HYPERVISOR_shared_info->vcpu_info[0]);
+		if (current >= end_ns)
+			break;
+	}
+}
+
 static device_method_t xentimer_methods[] = {
 	DEVMETHOD(device_identify, xentimer_identify),
 	DEVMETHOD(device_probe, xentimer_probe),
diff --git a/sys/i386/include/clock.h b/sys/i386/include/clock.h
index d980ec7..b831445 100644
--- a/sys/i386/include/clock.h
+++ b/sys/i386/include/clock.h
@@ -22,6 +22,11 @@ extern int	tsc_is_invariant;
 extern int	tsc_perf_stat;
 
 void	i8254_init(void);
+void	i8254_delay(int);
+#ifdef XENHVM
+void	xen_delay_init(void);
+void	xen_delay(int);
+#endif
 
 /*
  * Driver to clock driver interface.
diff --git a/sys/x86/isa/clock.c b/sys/x86/isa/clock.c
index a12e175..a5aed1c 100644
--- a/sys/x86/isa/clock.c
+++ b/sys/x86/isa/clock.c
@@ -247,61 +247,13 @@ getit(void)
 	return ((high << 8) | low);
 }
 
-#ifndef DELAYDEBUG
-static u_int
-get_tsc(__unused struct timecounter *tc)
-{
-
-	return (rdtsc32());
-}
-
-static __inline int
-delay_tc(int n)
-{
-	struct timecounter *tc;
-	timecounter_get_t *func;
-	uint64_t end, freq, now;
-	u_int last, mask, u;
-
-	tc = timecounter;
-	freq = atomic_load_acq_64(&tsc_freq);
-	if (tsc_is_invariant && freq != 0) {
-		func = get_tsc;
-		mask = ~0u;
-	} else {
-		if (tc->tc_quality <= 0)
-			return (0);
-		func = tc->tc_get_timecount;
-		mask = tc->tc_counter_mask;
-		freq = tc->tc_frequency;
-	}
-	now = 0;
-	end = freq * n / 1000000;
-	if (func == get_tsc)
-		sched_pin();
-	last = func(tc) & mask;
-	do {
-		cpu_spinwait();
-		u = func(tc) & mask;
-		if (u < last)
-			now += mask - last + u + 1;
-		else
-			now += u - last;
-		last = u;
-	} while (now < end);
-	if (func == get_tsc)
-		sched_unpin();
-	return (1);
-}
-#endif
-
 /*
  * Wait "n" microseconds.
  * Relies on timer 1 counting down from (i8254_freq / hz)
  * Note: timer had better have been programmed before this is first used!
  */
 void
-DELAY(int n)
+i8254_delay(int n)
 {
 	int delta, prev_tick, tick, ticks_left;
 #ifdef DELAYDEBUG
@@ -317,9 +269,6 @@ DELAY(int n)
 	}
 	if (state == 1)
 		printf("DELAY(%d)...", n);
-#else
-	if (delay_tc(n))
-		return;
 #endif
 	/*
 	 * Read the counter first, so that the rest of the setup overhead is
diff --git a/sys/x86/x86/delay.c b/sys/x86/x86/delay.c
new file mode 100644
index 0000000..d13c727
--- /dev/null
+++ b/sys/x86/x86/delay.c
@@ -0,0 +1,112 @@
+/*-
+ * Copyright (c) 1990 The Regents of the University of California.
+ * Copyright (c) 2010 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * This code is derived from software contributed to Berkeley by
+ * William Jolitz and Don Ahn.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *	from: @(#)clock.c	7.2 (Berkeley) 5/12/91
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+/* Generic x86 routines to handle delay */
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/timetc.h>
+#include <sys/proc.h>
+#include <sys/kernel.h>
+#include <sys/sched.h>
+
+#include <machine/clock.h>
+#include <machine/cpu.h>
+#include <machine/sysarch.h>
+
+static u_int
+get_tsc(__unused struct timecounter *tc)
+{
+
+	return (rdtsc32());
+}
+
+static int
+delay_tc(int n)
+{
+	struct timecounter *tc;
+	timecounter_get_t *func;
+	uint64_t end, freq, now;
+	u_int last, mask, u;
+
+	tc = timecounter;
+	freq = atomic_load_acq_64(&tsc_freq);
+	if (tsc_is_invariant && freq != 0) {
+		func = get_tsc;
+		mask = ~0u;
+	} else {
+		if (tc->tc_quality <= 0)
+			return (0);
+		func = tc->tc_get_timecount;
+		mask = tc->tc_counter_mask;
+		freq = tc->tc_frequency;
+	}
+	now = 0;
+	end = freq * n / 1000000;
+	if (func == get_tsc)
+		sched_pin();
+	last = func(tc) & mask;
+	do {
+		cpu_spinwait();
+		u = func(tc) & mask;
+		if (u < last)
+			now += mask - last + u + 1;
+		else
+			now += u - last;
+		last = u;
+	} while (now < end);
+	if (func == get_tsc)
+		sched_unpin();
+	return (1);
+}
+
+#ifndef XEN
+void
+DELAY(int n)
+{
+
+	if (delay_tc(n))
+		return;
+
+#ifdef __amd64__
+	init_ops.early_delay(n);
+#else
+	i8254_delay(n);
+#endif
+}
+#endif
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 908b50b..0ec4b54 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -47,6 +47,7 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_param.h>
 
 #include <machine/sysarch.h>
+#include <machine/clock.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -65,6 +66,8 @@ static void xen_pv_set_init_ops(void);
 /* Xen init_ops implementation. */
 struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
+	.early_delay_init =	xen_delay_init,
+	.early_delay =		xen_delay,
 };
 
 static struct
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+#include <sys/kernel.h>
+#include <sys/sched.h>
+
+#include <machine/clock.h>
+#include <machine/cpu.h>
+#include <machine/sysarch.h>
+
+static u_int
+get_tsc(__unused struct timecounter *tc)
+{
+
+	return (rdtsc32());
+}
+
+static int
+delay_tc(int n)
+{
+	struct timecounter *tc;
+	timecounter_get_t *func;
+	uint64_t end, freq, now;
+	u_int last, mask, u;
+
+	tc = timecounter;
+	freq = atomic_load_acq_64(&tsc_freq);
+	if (tsc_is_invariant && freq != 0) {
+		func = get_tsc;
+		mask = ~0u;
+	} else {
+		if (tc->tc_quality <= 0)
+			return (0);
+		func = tc->tc_get_timecount;
+		mask = tc->tc_counter_mask;
+		freq = tc->tc_frequency;
+	}
+	now = 0;
+	end = freq * n / 1000000;
+	if (func == get_tsc)
+		sched_pin();
+	last = func(tc) & mask;
+	do {
+		cpu_spinwait();
+		u = func(tc) & mask;
+		if (u < last)
+			now += mask - last + u + 1;
+		else
+			now += u - last;
+		last = u;
+	} while (now < end);
+	if (func == get_tsc)
+		sched_unpin();
+	return (1);
+}
+
+#ifndef XEN
+void
+DELAY(int n)
+{
+
+	if (delay_tc(n))
+		return;
+
+#ifdef __amd64__
+	init_ops.early_delay(n);
+#else
+	i8254_delay(n);
+#endif
+}
+#endif
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 908b50b..0ec4b54 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -47,6 +47,7 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_param.h>
 
 #include <machine/sysarch.h>
+#include <machine/clock.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -65,6 +66,8 @@ static void xen_pv_set_init_ops(void);
 /* Xen init_ops implementation. */
 struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
+	.early_delay_init =	xen_delay_init,
+	.early_delay =		xen_delay,
 };
 
 static struct
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRK-0000zp-H4; Thu, 02 Jan 2014 15:44:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRH-0000uh-9B
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:03 +0000
Received: from [85.158.143.35:26882] by server-1.bemta-4.messagelabs.com id
	B5/48-02132-24985C25; Thu, 02 Jan 2014 15:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388677439!9284366!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31560 invoked from network); 2 Jan 2014 15:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259060"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:44:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:44:00 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRE-00043n-5l;
	Thu, 02 Jan 2014 15:44:00 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:42 +0100
Message-ID: <1388677433-49525-9-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 08/19] xen: use the same hypercall mechanism
	for XEN and XENHVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/include/xen/hypercall.h |    7 -------
 sys/i386/i386/locore.s            |    9 +++++++++
 sys/i386/include/xen/hypercall.h  |    8 --------
 sys/x86/xen/hvm.c                 |   24 ++++++++++--------------
 4 files changed, 19 insertions(+), 29 deletions(-)

diff --git a/sys/amd64/include/xen/hypercall.h b/sys/amd64/include/xen/hypercall.h
index a1b2a5c..499fb4d 100644
--- a/sys/amd64/include/xen/hypercall.h
+++ b/sys/amd64/include/xen/hypercall.h
@@ -51,15 +51,8 @@
 #define CONFIG_XEN_COMPAT	0x030002
 #define __must_check
 
-#ifdef XEN
 #define HYPERCALL_STR(name)					\
 	"call hypercall_page + ("STR(__HYPERVISOR_##name)" * 32)"
-#else
-#define HYPERCALL_STR(name)					\
-	"mov $("STR(__HYPERVISOR_##name)" * 32),%%eax; "\
-	"add hypercall_stubs(%%rip),%%rax; "			\
-	"call *%%rax"
-#endif
 
 #define _hypercall0(type, name)			\
 ({						\
diff --git a/sys/i386/i386/locore.s b/sys/i386/i386/locore.s
index 68cb430..bd136b1 100644
--- a/sys/i386/i386/locore.s
+++ b/sys/i386/i386/locore.s
@@ -898,3 +898,12 @@ done_pde:
 #endif
 
 	ret
+
+#ifdef XENHVM
+/* Xen Hypercall page */
+	.text
+.p2align PAGE_SHIFT, 0x90	/* Hypercall_page needs to be PAGE aligned */
+
+NON_GPROF_ENTRY(hypercall_page)
+	.skip	0x1000, 0x90	/* Fill with "nop"s */
+#endif
diff --git a/sys/i386/include/xen/hypercall.h b/sys/i386/include/xen/hypercall.h
index edc13f4..16b5ee2 100644
--- a/sys/i386/include/xen/hypercall.h
+++ b/sys/i386/include/xen/hypercall.h
@@ -39,16 +39,8 @@
 #define	ENOXENSYS	38
 #define CONFIG_XEN_COMPAT	0x030002
 
-
-#if defined(XEN)
 #define HYPERCALL_STR(name)                                     \
         "call hypercall_page + ("STR(__HYPERVISOR_##name)" * 32)"
-#else
-#define HYPERCALL_STR(name)                                     \
-        "mov hypercall_stubs,%%eax; "                           \
-        "add $("STR(__HYPERVISOR_##name)" * 32),%%eax; "        \
-        "call *%%eax"
-#endif
 
 #define _hypercall0(type, name)                 \
 ({                                              \
diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index b397721..9a0411e 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -157,7 +157,7 @@ DPCPU_DEFINE(xen_intr_handle_t, ipi_handle[nitems(xen_ipis)]);
 
 /*------------------ Hypervisor Access Shared Memory Regions -----------------*/
 /** Hypercall table accessed via HYPERVISOR_*_op() methods. */
-char *hypercall_stubs;
+extern char *hypercall_page;
 shared_info_t *HYPERVISOR_shared_info;
 start_info_t *HYPERVISOR_start_info;
 
@@ -559,7 +559,7 @@ xen_hvm_cpuid_base(void)
  * Allocate and fill in the hypcall page.
  */
 static int
-xen_hvm_init_hypercall_stubs(void)
+xen_hvm_init_hypercall_stubs(enum xen_hvm_init_type init_type)
 {
 	uint32_t base, regs[4];
 	int i;
@@ -568,7 +568,7 @@ xen_hvm_init_hypercall_stubs(void)
 	if (base == 0)
 		return (ENXIO);
 
-	if (hypercall_stubs == NULL) {
+	if (init_type == XEN_HVM_INIT_COLD) {
 		do_cpuid(base + 1, regs);
 		printf("XEN: Hypervisor version %d.%d detected.\n",
 		    regs[0] >> 16, regs[0] & 0xffff);
@@ -578,18 +578,9 @@ xen_hvm_init_hypercall_stubs(void)
 	 * Find the hypercall pages.
 	 */
 	do_cpuid(base + 2, regs);
-	
-	if (hypercall_stubs == NULL) {
-		size_t call_region_size;
-
-		call_region_size = regs[0] * PAGE_SIZE;
-		hypercall_stubs = malloc(call_region_size, M_XENHVM, M_NOWAIT);
-		if (hypercall_stubs == NULL)
-			panic("Unable to allocate Xen hypercall region");
-	}
 
 	for (i = 0; i < regs[0]; i++)
-		wrmsr(regs[1], vtophys(hypercall_stubs + i * PAGE_SIZE) + i);
+		wrmsr(regs[1], vtophys(&hypercall_page + i * PAGE_SIZE) + i);
 
 	return (0);
 }
@@ -692,7 +683,12 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 	if (init_type == XEN_HVM_INIT_CANCELLED_SUSPEND)
 		return;
 
-	error = xen_hvm_init_hypercall_stubs();
+	if (xen_pv_domain()) {
+		/* hypercall page is already set in the PV case */
+		error = 0;
+	} else {
+		error = xen_hvm_init_hypercall_stubs(init_type);
+	}
 
 	switch (init_type) {
 	case XEN_HVM_INIT_COLD:
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRL-00010e-0f; Thu, 02 Jan 2014 15:44:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRH-0000vM-Oj
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:04 +0000
Received: from [85.158.137.68:37023] by server-9.bemta-3.messagelabs.com id
	DF/20-13104-34985C25; Thu, 02 Jan 2014 15:44:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1388677440!6920958!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28501 invoked from network); 2 Jan 2014 15:44:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:44:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87070948"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:44:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:43:59 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRD-00043n-JX;
	Thu, 02 Jan 2014 15:43:59 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:41 +0100
Message-ID: <1388677433-49525-8-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 07/19] xen: implement hook to fetch e820
	memory map
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/amd64/machdep.c   |   50 ++++++++++++++++++++++++++----------------
 sys/amd64/include/pc/bios.h |    2 +
 sys/amd64/include/sysarch.h |    1 +
 sys/x86/xen/pv.c            |   25 +++++++++++++++++++++
 4 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index 178d8b3..f6eef50 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -169,11 +169,15 @@ SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
 /* Preload data parse function */
 static caddr_t native_parse_preload_data(u_int64_t);
 
+/* Native function to fetch and parse the e820 map */
+static void native_parse_memmap(caddr_t, vm_paddr_t *, int *);
+
 /* Default init_ops implementation. */
 struct init_ops init_ops = {
 	.parse_preload_data =	native_parse_preload_data,
 	.early_delay_init =	i8254_init,
 	.early_delay =		i8254_delay,
+	.parse_memmap =		native_parse_memmap,
 };
 
 /*
@@ -1401,21 +1405,12 @@ add_physmap_entry(uint64_t base, uint64_t length, vm_paddr_t *physmap,
 	return (1);
 }
 
-static void
-add_smap_entries(struct bios_smap *smapbase, vm_paddr_t *physmap,
-    int *physmap_idx)
+void
+bios_add_smap_entries(struct bios_smap *smapbase, u_int32_t smapsize,
+                      vm_paddr_t *physmap, int *physmap_idx)
 {
 	struct bios_smap *smap, *smapend;
-	u_int32_t smapsize;
 
-	/*
-	 * Memory map from INT 15:E820.
-	 *
-	 * subr_module.c says:
-	 * "Consumer may safely assume that size value precedes data."
-	 * ie: an int32_t immediately precedes smap.
-	 */
-	smapsize = *((u_int32_t *)smapbase - 1);
 	smapend = (struct bios_smap *)((uintptr_t)smapbase + smapsize);
 
 	for (smap = smapbase; smap < smapend; smap++) {
@@ -1432,6 +1427,29 @@ add_smap_entries(struct bios_smap *smapbase, vm_paddr_t *physmap,
 	}
 }
 
+static void
+native_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct bios_smap *smap;
+	u_int32_t size;
+
+	/*
+	 * Memory map from INT 15:E820.
+	 *
+	 * subr_module.c says:
+	 * "Consumer may safely assume that size value precedes data."
+	 * ie: an int32_t immediately precedes smap.
+	 */
+
+	smap = (struct bios_smap *)preload_search_info(kmdp,
+	    MODINFO_METADATA | MODINFOMD_SMAP);
+	if (smap == NULL)
+		panic("No BIOS smap info from loader!");
+	size = *((u_int32_t *)smap - 1);
+
+	bios_add_smap_entries(smap, size, physmap, physmap_idx);
+}
+
 /*
  * Populate the (physmap) array with base/bound pairs describing the
  * available physical memory in the system, then test this memory and
@@ -1449,19 +1467,13 @@ getmemsize(caddr_t kmdp, u_int64_t first)
 	vm_paddr_t pa, physmap[PHYSMAP_SIZE];
 	u_long physmem_start, physmem_tunable, memtest;
 	pt_entry_t *pte;
-	struct bios_smap *smapbase;
 	quad_t dcons_addr, dcons_size;
 
 	bzero(physmap, sizeof(physmap));
 	basemem = 0;
 	physmap_idx = 0;
 
-	smapbase = (struct bios_smap *)preload_search_info(kmdp,
-	    MODINFO_METADATA | MODINFOMD_SMAP);
-	if (smapbase == NULL)
-		panic("No BIOS smap info from loader!");
-
-	add_smap_entries(smapbase, physmap, &physmap_idx);
+	init_ops.parse_memmap(kmdp, physmap, &physmap_idx);
 
 	/*
 	 * Find the 'base memory' segment for SMP
diff --git a/sys/amd64/include/pc/bios.h b/sys/amd64/include/pc/bios.h
index e7d568e..95ef703 100644
--- a/sys/amd64/include/pc/bios.h
+++ b/sys/amd64/include/pc/bios.h
@@ -106,6 +106,8 @@ struct bios_oem {
 int	bios_oem_strings(struct bios_oem *oem, u_char *buffer, size_t maxlen);
 uint32_t bios_sigsearch(uint32_t start, u_char *sig, int siglen, int paralen,
 	    int sigofs);
+void bios_add_smap_entries(struct bios_smap *smapbase, u_int32_t smapsize,
+	    vm_paddr_t *physmap, int *physmap_idx);
 #endif
 
 #endif /* _MACHINE_PC_BIOS_H_ */
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 60fa635..084223e 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -15,6 +15,7 @@ struct init_ops {
 	caddr_t	(*parse_preload_data)(u_int64_t);
 	void	(*early_delay_init)(void);
 	void	(*early_delay)(int);
+	void	(*parse_memmap)(caddr_t, vm_paddr_t *, int *);
 };
 
 extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 0ec4b54..d11bc1a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -48,6 +48,7 @@ __FBSDID("$FreeBSD$");
 
 #include <machine/sysarch.h>
 #include <machine/clock.h>
+#include <machine/pc/bios.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -57,8 +58,11 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+#define MAX_E820_ENTRIES	128
+
 /*--------------------------- Forward Declarations ---------------------------*/
 static caddr_t xen_pv_parse_preload_data(u_int64_t);
+static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
 
 static void xen_pv_set_init_ops(void);
 
@@ -68,6 +72,7 @@ struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
 	.early_delay_init =	xen_delay_init,
 	.early_delay =		xen_delay,
+	.parse_memmap =		xen_pv_parse_memmap,
 };
 
 static struct
@@ -88,6 +93,8 @@ static struct
 	{NULL,	0}
 };
 
+static struct bios_smap xen_smap[MAX_E820_ENTRIES];
+
 /*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
@@ -201,6 +208,24 @@ xen_pv_parse_preload_data(u_int64_t modulep)
 }
 
 static void
+xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct xen_memory_map memmap;
+	u_int32_t size;
+	int rc;
+
+	/* Fetch the E820 map from Xen */
+	memmap.nr_entries = MAX_E820_ENTRIES;
+	set_xen_guest_handle(memmap.buffer, xen_smap);
+	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+	if (rc)
+		panic("unable to fetch Xen E820 memory map");
+	size = memmap.nr_entries * sizeof(xen_smap[0]);
+
+	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
+}
+
+static void
 xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+
+	bios_add_smap_entries(smap, size, physmap, physmap_idx);
+}
+
 /*
  * Populate the (physmap) array with base/bound pairs describing the
  * available physical memory in the system, then test this memory and
@@ -1449,19 +1467,13 @@ getmemsize(caddr_t kmdp, u_int64_t first)
 	vm_paddr_t pa, physmap[PHYSMAP_SIZE];
 	u_long physmem_start, physmem_tunable, memtest;
 	pt_entry_t *pte;
-	struct bios_smap *smapbase;
 	quad_t dcons_addr, dcons_size;
 
 	bzero(physmap, sizeof(physmap));
 	basemem = 0;
 	physmap_idx = 0;
 
-	smapbase = (struct bios_smap *)preload_search_info(kmdp,
-	    MODINFO_METADATA | MODINFOMD_SMAP);
-	if (smapbase == NULL)
-		panic("No BIOS smap info from loader!");
-
-	add_smap_entries(smapbase, physmap, &physmap_idx);
+	init_ops.parse_memmap(kmdp, physmap, &physmap_idx);
 
 	/*
 	 * Find the 'base memory' segment for SMP
diff --git a/sys/amd64/include/pc/bios.h b/sys/amd64/include/pc/bios.h
index e7d568e..95ef703 100644
--- a/sys/amd64/include/pc/bios.h
+++ b/sys/amd64/include/pc/bios.h
@@ -106,6 +106,8 @@ struct bios_oem {
 int	bios_oem_strings(struct bios_oem *oem, u_char *buffer, size_t maxlen);
 uint32_t bios_sigsearch(uint32_t start, u_char *sig, int siglen, int paralen,
 	    int sigofs);
+void bios_add_smap_entries(struct bios_smap *smapbase, u_int32_t smapsize,
+	    vm_paddr_t *physmap, int *physmap_idx);
 #endif
 
 #endif /* _MACHINE_PC_BIOS_H_ */
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 60fa635..084223e 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -15,6 +15,7 @@ struct init_ops {
 	caddr_t	(*parse_preload_data)(u_int64_t);
 	void	(*early_delay_init)(void);
 	void	(*early_delay)(int);
+	void	(*parse_memmap)(caddr_t, vm_paddr_t *, int *);
 };
 
 extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 0ec4b54..d11bc1a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -48,6 +48,7 @@ __FBSDID("$FreeBSD$");
 
 #include <machine/sysarch.h>
 #include <machine/clock.h>
+#include <machine/pc/bios.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -57,8 +58,11 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+#define MAX_E820_ENTRIES	128
+
 /*--------------------------- Forward Declarations ---------------------------*/
 static caddr_t xen_pv_parse_preload_data(u_int64_t);
+static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
 
 static void xen_pv_set_init_ops(void);
 
@@ -68,6 +72,7 @@ struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
 	.early_delay_init =	xen_delay_init,
 	.early_delay =		xen_delay,
+	.parse_memmap =		xen_pv_parse_memmap,
 };
 
 static struct
@@ -88,6 +93,8 @@ static struct
 	{NULL,	0}
 };
 
+static struct bios_smap xen_smap[MAX_E820_ENTRIES];
+
 /*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
@@ -201,6 +208,24 @@ xen_pv_parse_preload_data(u_int64_t modulep)
 }
 
 static void
+xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct xen_memory_map memmap;
+	u_int32_t size;
+	int rc;
+
+	/* Fetch the E820 map from Xen */
+	memmap.nr_entries = MAX_E820_ENTRIES;
+	set_xen_guest_handle(memmap.buffer, xen_smap);
+	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+	if (rc)
+		panic("unable to fetch Xen E820 memory map");
+	size = memmap.nr_entries * sizeof(xen_smap[0]);
+
+	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
+}
+
+static void
 xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:44:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykRL-00012B-PM; Thu, 02 Jan 2014 15:44:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykRI-0000uw-GE
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:44:04 +0000
Received: from [85.158.139.211:35345] by server-11.bemta-5.messagelabs.com id
	E9/1D-23268-24985C25; Thu, 02 Jan 2014 15:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1388677437!7554277!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31270 invoked from network); 2 Jan 2014 15:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89259063"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:44:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:44:00 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRE-00043n-OH;
	Thu, 02 Jan 2014 15:44:00 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:43 +0100
Message-ID: <1388677433-49525-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_09/19=5D_xen=3A_add_a_apic=5Fen?=
	=?utf-8?q?umerator_for_PVH?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

LS0tCiBzeXMvY29uZi9maWxlcy5hbWQ2NCAgICAgfCAgICAxICsKIHN5cy94ODYveGVuL3B2Y3B1
X2VudW0uYyB8ICAxMzYgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKwogMiBmaWxlcyBjaGFuZ2VkLCAxMzcgaW5zZXJ0aW9ucygrKSwgMCBkZWxldGlvbnMoLSkK
IGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMveDg2L3hlbi9wdmNwdV9lbnVtLmMKCmRpZmYgLS1naXQg
YS9zeXMvY29uZi9maWxlcy5hbWQ2NCBiL3N5cy9jb25mL2ZpbGVzLmFtZDY0CmluZGV4IDEwOWE3
OTYuLmEzNDkxZGEgMTAwNjQ0Ci0tLSBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0CisrKyBiL3N5cy9j
b25mL2ZpbGVzLmFtZDY0CkBAIC01NjksMyArNTY5LDQgQEAgeDg2L3g4Ni9kZWxheS5jCQkJc3Rh
bmRhcmQKIHg4Ni94ZW4vaHZtLmMJCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIu
YwkJb3B0aW9uYWwJeGVuIHwgeGVuaHZtCiB4ODYveGVuL3B2LmMJCQlvcHRpb25hbAl4ZW5odm0K
K3g4Ni94ZW4vcHZjcHVfZW51bS5jCQlvcHRpb25hbAl4ZW5odm0KZGlmZiAtLWdpdCBhL3N5cy94
ODYveGVuL3B2Y3B1X2VudW0uYyBiL3N5cy94ODYveGVuL3B2Y3B1X2VudW0uYwpuZXcgZmlsZSBt
b2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi4wMzg0ODg2Ci0tLSAvZGV2L251bGwKKysrIGIvc3lz
L3g4Ni94ZW4vcHZjcHVfZW51bS5jCkBAIC0wLDAgKzEsMTM2IEBACisvKi0KKyAqIENvcHlyaWdo
dCAoYykgMjAwMyBKb2huIEJhbGR3aW4gPGpoYkBGcmVlQlNELm9yZz4KKyAqIENvcHlyaWdodCAo
YykgMjAxMyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCBy
aWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2Ug
YW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBw
ZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBt
ZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhl
IGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBh
bmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJp
bmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGlj
ZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBp
biB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRl
ZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKiAzLiBOZWl0aGVyIHRoZSBuYW1lIG9mIHRoZSBh
dXRob3Igbm9yIHRoZSBuYW1lcyBvZiBhbnkgY28tY29udHJpYnV0b3JzCisgKiAgICBtYXkgYmUg
dXNlZCB0byBlbmRvcnNlIG9yIHByb21vdGUgcHJvZHVjdHMgZGVyaXZlZCBmcm9tIHRoaXMgc29m
dHdhcmUKKyAqICAgIHdpdGhvdXQgc3BlY2lmaWMgcHJpb3Igd3JpdHRlbiBwZXJtaXNzaW9uLgor
ICoKKyAqIFRISVMgU09GVFdBUkUgSVMgUFJPVklERUQgQlkgVEhFIEFVVEhPUiBBTkQgQ09OVFJJ
QlVUT1JTIGBgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElF
UywgSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQorICogSU1QTElFRCBXQVJSQU5U
SUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBP
U0UKKyAqIEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hBTEwgVEhFIEFVVEhPUiBPUiBD
T05UUklCVVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lE
RU5UQUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTAorICogREFNQUdFUyAo
SU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUg
R09PRFMKKyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1Ig
QlVTSU5FU1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVTRUQgQU5EIE9OIEFOWSBUSEVP
UlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QKKyAqIExJQUJJTElU
WSwgT1IgVE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElO
IEFOWSBXQVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURW
SVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFNQUdFLgorICovCisKKyNpbmNs
dWRlIDxzeXMvY2RlZnMuaD4KK19fRkJTRElEKCIkRnJlZUJTRCQiKTsKKworI2luY2x1ZGUgPHN5
cy9wYXJhbS5oPgorI2luY2x1ZGUgPHN5cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9idXMuaD4K
KyNpbmNsdWRlIDxzeXMva2VybmVsLmg+CisjaW5jbHVkZSA8c3lzL3NtcC5oPgorI2luY2x1ZGUg
PHN5cy9wY3B1Lmg+CisjaW5jbHVkZSA8dm0vdm0uaD4KKyNpbmNsdWRlIDx2bS9wbWFwLmg+CisK
KyNpbmNsdWRlIDxtYWNoaW5lL2ludHJfbWFjaGRlcC5oPgorI2luY2x1ZGUgPG1hY2hpbmUvYXBp
Y3Zhci5oPgorCisjaW5jbHVkZSA8bWFjaGluZS9jcHUuaD4KKyNpbmNsdWRlIDxtYWNoaW5lL3Nt
cC5oPgorCisjaW5jbHVkZSA8eGVuL3hlbi1vcy5oPgorI2luY2x1ZGUgPHhlbi9oeXBlcnZpc29y
Lmg+CisKKyNpbmNsdWRlIDx4ZW4vaW50ZXJmYWNlL3ZjcHUuaD4KKworc3RhdGljIGludCB4ZW5w
dl9wcm9iZSh2b2lkKTsKK3N0YXRpYyBpbnQgeGVucHZfcHJvYmVfY3B1cyh2b2lkKTsKK3N0YXRp
YyBpbnQgeGVucHZfc2V0dXBfbG9jYWwodm9pZCk7CitzdGF0aWMgaW50IHhlbnB2X3NldHVwX2lv
KHZvaWQpOworCitzdGF0aWMgc3RydWN0IGFwaWNfZW51bWVyYXRvciB4ZW5wdl9lbnVtZXJhdG9y
ID0geworCSJYZW4gUFYiLAorCXhlbnB2X3Byb2JlLAorCXhlbnB2X3Byb2JlX2NwdXMsCisJeGVu
cHZfc2V0dXBfbG9jYWwsCisJeGVucHZfc2V0dXBfaW8KK307CisKKy8qCisgKiBUaGlzIGVudW1l
cmF0b3Igd2lsbCBvbmx5IGJlIHJlZ2lzdGVyZWQgb24gUFZICisgKi8KK3N0YXRpYyBpbnQKK3hl
bnB2X3Byb2JlKHZvaWQpCit7CisJcmV0dXJuICgtMTAwKTsKK30KKworLyoKKyAqIFRlc3QgZWFj
aCBwb3NzaWJsZSB2Q1BVIGluIG9yZGVyIHRvIGZpbmQgdGhlIG51bWJlciBvZiB2Q1BVcworICov
CitzdGF0aWMgaW50Cit4ZW5wdl9wcm9iZV9jcHVzKHZvaWQpCit7CisjaWZkZWYgU01QCisJaW50
IGksIHJldDsKKworCWZvciAoaSA9IDA7IGkgPCBNQVhDUFU7IGkrKykgeworCQlyZXQgPSBIWVBF
UlZJU09SX3ZjcHVfb3AoVkNQVU9QX2lzX3VwLCBpLCBOVUxMKTsKKwkJaWYgKHJldCA+PSAwKQor
CQkJY3B1X2FkZCgoaSAqIDIpLCAoaSA9PSAwKSk7CisJfQorI2VuZGlmCisJcmV0dXJuICgwKTsK
K30KKworLyoKKyAqIEluaXRpYWxpemUgdGhlIHZDUFUgaWQgb2YgdGhlIEJTUAorICovCitzdGF0
aWMgaW50Cit4ZW5wdl9zZXR1cF9sb2NhbCh2b2lkKQoreworCVBDUFVfU0VUKHZjcHVfaWQsIDAp
OworCXJldHVybiAoMCk7Cit9CisKKy8qCisgKiBPbiBQVkggZ3Vlc3RzIHRoZXJlJ3Mgbm8gSU8g
QVBJQworICovCitzdGF0aWMgaW50Cit4ZW5wdl9zZXR1cF9pbyh2b2lkKQoreworCXJldHVybiAo
MCk7Cit9CisKK3N0YXRpYyB2b2lkCit4ZW5wdl9yZWdpc3Rlcih2b2lkICpkdW1teSBfX3VudXNl
ZCkKK3sKKwlpZiAoeGVuX3B2X2RvbWFpbigpKSB7CisJCWFwaWNfcmVnaXN0ZXJfZW51bWVyYXRv
cigmeGVucHZfZW51bWVyYXRvcik7CisJfQorfQorU1lTSU5JVCh4ZW5wdl9yZWdpc3RlciwgU0lf
U1VCX1RVTkFCTEVTIC0gMSwgU0lfT1JERVJfRklSU1QsIHhlbnB2X3JlZ2lzdGVyLCBOVUxMKTsK
KworLyoKKyAqIFNldHVwIHBlci1DUFUgdkNQVSBJRHMKKyAqLworc3RhdGljIHZvaWQKK3hlbnB2
X3NldF9pZHModm9pZCAqZHVtbXkpCit7CisJc3RydWN0IHBjcHUgKnBjOworCWludCBpOworCisJ
Q1BVX0ZPUkVBQ0goaSkgeworCQlwYyA9IHBjcHVfZmluZChpKTsKKwkJcGMtPnBjX3ZjcHVfaWQg
PSBpOworCX0KK30KK1NZU0lOSVQoeGVucHZfc2V0X2lkcywgU0lfU1VCX0NQVSwgU0lfT1JERVJf
TUlERExFLCB4ZW5wdl9zZXRfaWRzLCBOVUxMKTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikK
CgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcM-0002nh-F2; Thu, 02 Jan 2014 15:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcK-0002n6-9J
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:28 +0000
Received: from [85.158.137.68:27959] by server-2.bemta-3.messagelabs.com id
	38/70-17329-FEB85C25; Thu, 02 Jan 2014 15:55:27 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14888 invoked from network); 2 Jan 2014 15:55:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074526"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:25 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRI-00043n-7D;
	Thu, 02 Jan 2014 15:44:04 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:49 +0100
Message-ID: <1388677433-49525-16-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_15/19=5D_xen=3A_create_a_Xen_ne?=
	=?utf-8?q?xus_to_use_in_PV/PVH?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SW50cm9kdWNlIGEgWGVuIHNwZWNpZmljIG5leHVzIHRoYXQgaXMgZ29pbmcgdG8gYmUgaW4gY2hh
cmdlIGZvcgphdHRhY2hpbmcgWGVuIHNwZWNpZmljIGRldmljZXMuCi0tLQogc3lzL2NvbmYvZmls
ZXMuYW1kNjQgICAgICAgICAgfCAgICAxICsKIHN5cy9jb25mL2ZpbGVzLmkzODYgICAgICAgICAg
IHwgICAgMSArCiBzeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUuYyB8ICAgIDIgKy0KIHN5cy9k
ZXYveGVuL3RpbWVyL3RpbWVyLmMgICAgIHwgICAgNCArLQogc3lzL2Rldi94ZW4veGVucGNpL3hl
bnBjaS5jICAgfCAgICA0ICsrCiBzeXMveDg2L3hlbi94ZW5fbmV4dXMuYyAgICAgICB8ICAgODIg
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIHN5cy94ODYveGVuL3hl
bnB2LmMgICAgICAgICAgIHwgICAgMSArCiBzeXMveGVuL3hlbnN0b3JlL3hlbnN0b3JlLmMgICB8
ICAgIDYgKy0tCiA4IGZpbGVzIGNoYW5nZWQsIDkzIGluc2VydGlvbnMoKyksIDggZGVsZXRpb25z
KC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94ZW4veGVuX25leHVzLmMKCmRpZmYgLS1n
aXQgYS9zeXMvY29uZi9maWxlcy5hbWQ2NCBiL3N5cy9jb25mL2ZpbGVzLmFtZDY0CmluZGV4IGQ3
Yzk4Y2MuLmYzNzg5ODMgMTAwNjQ0Ci0tLSBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0CisrKyBiL3N5
cy9jb25mL2ZpbGVzLmFtZDY0CkBAIC01NzEsMyArNTcxLDQgQEAgeDg2L3hlbi94ZW5faW50ci5j
CQlvcHRpb25hbAl4ZW4gfCB4ZW5odm0KIHg4Ni94ZW4vcHYuYwkJCW9wdGlvbmFsCXhlbmh2bQog
eDg2L3hlbi9wdmNwdV9lbnVtLmMJCW9wdGlvbmFsCXhlbmh2bQogeDg2L3hlbi94ZW5wdi5jCQkJ
b3B0aW9uYWwJeGVuaHZtCit4ODYveGVuL3hlbl9uZXh1cy5jCQlvcHRpb25hbAl4ZW5odm0KZGlm
ZiAtLWdpdCBhL3N5cy9jb25mL2ZpbGVzLmkzODYgYi9zeXMvY29uZi9maWxlcy5pMzg2CmluZGV4
IDgxMTQyZTMuLjAyODg3YTMzIDEwMDY0NAotLS0gYS9zeXMvY29uZi9maWxlcy5pMzg2CisrKyBi
L3N5cy9jb25mL2ZpbGVzLmkzODYKQEAgLTYwNCwzICs2MDQsNCBAQCB4ODYveDg2L2RlbGF5LmMJ
CQlzdGFuZGFyZAogeDg2L3hlbi9odm0uYwkJCW9wdGlvbmFsIHhlbmh2bQogeDg2L3hlbi94ZW5f
aW50ci5jCQlvcHRpb25hbCB4ZW4gfCB4ZW5odm0KIHg4Ni94ZW4veGVucHYuYwkJCW9wdGlvbmFs
IHhlbiB8IHhlbmh2bQoreDg2L3hlbi94ZW5fbmV4dXMuYwkJb3B0aW9uYWwgeGVuIHwgeGVuaHZt
CmRpZmYgLS1naXQgYS9zeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUuYyBiL3N5cy9kZXYveGVu
L2NvbnNvbGUvY29uc29sZS5jCmluZGV4IDg5OWRmZmMuLjkxNTM4ZmUgMTAwNjQ0Ci0tLSBhL3N5
cy9kZXYveGVuL2NvbnNvbGUvY29uc29sZS5jCisrKyBiL3N5cy9kZXYveGVuL2NvbnNvbGUvY29u
c29sZS5jCkBAIC00NjIsNCArNDYyLDQgQEAgeGNvbnNfZm9yY2VfZmx1c2godm9pZCkKIAl9CiB9
CiAKLURSSVZFUl9NT0RVTEUoeGMsIG5leHVzLCB4Y19kcml2ZXIsIHhjX2RldmNsYXNzLCAwLCAw
KTsKK0RSSVZFUl9NT0RVTEUoeGMsIHhlbnB2LCB4Y19kcml2ZXIsIHhjX2RldmNsYXNzLCAwLCAw
KTsKZGlmZiAtLWdpdCBhL3N5cy9kZXYveGVuL3RpbWVyL3RpbWVyLmMgYi9zeXMvZGV2L3hlbi90
aW1lci90aW1lci5jCmluZGV4IDk2MzcyYWIuLmYxNmY1YTUgMTAwNjQ0Ci0tLSBhL3N5cy9kZXYv
eGVuL3RpbWVyL3RpbWVyLmMKKysrIGIvc3lzL2Rldi94ZW4vdGltZXIvdGltZXIuYwpAQCAtNjM2
LDUgKzYzNiw1IEBAIHN0YXRpYyBkcml2ZXJfdCB4ZW50aW1lcl9kcml2ZXIgPSB7CiAJc2l6ZW9m
KHN0cnVjdCB4ZW50aW1lcl9zb2Z0YyksCiB9OwogCi1EUklWRVJfTU9EVUxFKHhlbnRpbWVyLCBu
ZXh1cywgeGVudGltZXJfZHJpdmVyLCB4ZW50aW1lcl9kZXZjbGFzcywgMCwgMCk7Ci1NT0RVTEVf
REVQRU5EKHhlbnRpbWVyLCBuZXh1cywgMSwgMSwgMSk7CitEUklWRVJfTU9EVUxFKHhlbnRpbWVy
LCB4ZW5wdiwgeGVudGltZXJfZHJpdmVyLCB4ZW50aW1lcl9kZXZjbGFzcywgMCwgMCk7CitNT0RV
TEVfREVQRU5EKHhlbnRpbWVyLCB4ZW5wdiwgMSwgMSwgMSk7CmRpZmYgLS1naXQgYS9zeXMvZGV2
L3hlbi94ZW5wY2kveGVucGNpLmMgYi9zeXMvZGV2L3hlbi94ZW5wY2kveGVucGNpLmMKaW5kZXgg
ZGQyYWQ5Mi4uYTI3YjU0ZiAxMDA2NDQKLS0tIGEvc3lzL2Rldi94ZW4veGVucGNpL3hlbnBjaS5j
CisrKyBiL3N5cy9kZXYveGVuL3hlbnBjaS94ZW5wY2kuYwpAQCAtMjcwLDYgKzI3MCwxMCBAQCB4
ZW5wY2lfYXR0YWNoKGRldmljZV90IGRldikKIAkJZ290byBlcnJleGl0OwogCX0KIAorCS8qIEFk
ZCB0aGUgeGVucHYgZGV2aWNlIHNvIHRvcCBsZXZlbCBYZW4gZGV2aWNlcyBjYW4gYXR0YWNoICov
CisJaWYgKEJVU19BRERfQ0hJTEQoZGV2LCAwLCAieGVucHYiLCAwKSA9PSBOVUxMKQorCQlwYW5p
YygieGVucGNpOiB1bmFibGUgdG8gYWRkIHhlbnB2IGRldmljZSIpOworCiAJcmV0dXJuIChidXNf
Z2VuZXJpY19hdHRhY2goZGV2KSk7CiAKIGVycmV4aXQ6CmRpZmYgLS1naXQgYS9zeXMveDg2L3hl
bi94ZW5fbmV4dXMuYyBiL3N5cy94ODYveGVuL3hlbl9uZXh1cy5jCm5ldyBmaWxlIG1vZGUgMTAw
NjQ0CmluZGV4IDAwMDAwMDAuLmQzNGUzMzMKLS0tIC9kZXYvbnVsbAorKysgYi9zeXMveDg2L3hl
bi94ZW5fbmV4dXMuYwpAQCAtMCwwICsxLDgyIEBACisvKgorICogQ29weXJpZ2h0IChjKSAyMDEz
IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgorICogQWxsIHJpZ2h0cyBy
ZXNlcnZlZC4KKyAqCisgKiBSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJjZSBhbmQgYmlu
YXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQKKyAqIG1vZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRl
ZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucworICogYXJlIG1ldDoKKyAq
IDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBzb3VyY2UgY29kZSBtdXN0IHJldGFpbiB0aGUgYWJvdmUg
Y29weXJpZ2h0CisgKiAgICBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUg
Zm9sbG93aW5nIGRpc2NsYWltZXIuCisgKiAyLiBSZWRpc3RyaWJ1dGlvbnMgaW4gYmluYXJ5IGZv
cm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlz
IGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyIGluIHRoZQor
ICogICAgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGgg
dGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisgKiBUSElTIFNPRlRXQVJFIElTIFBST1ZJREVEIEJZIFRI
RSBBVVRIT1IgQU5EIENPTlRSSUJVVE9SUyBBUyBJUycnIEFORAorICogQU5ZIEVYUFJFU1MgT1Ig
SU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywgVEhFCisg
KiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFORCBGSVRORVNTIEZPUiBB
IFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJTiBOTyBFVkVOVCBTSEFM
TCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAqIEZPUiBBTlkgRElSRUNU
LCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBDT05TRVFVRU5U
SUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywgUFJPQ1VSRU1F
TlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExPU1MgT0YgVVNFLCBEQVRB
LCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisgKiBIT1dFVkVSIENBVVNF
RCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09OVFJBQ1QsIFNU
UklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhF
UldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBVU0UgT0YgVEhJUyBTT0ZU
V0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRgorICogU1VDSCBEQU1B
R0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5oPgorX19GQlNESUQoIiRGcmVlQlNEJCIp
OworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5jbHVkZSA8c3lzL2J1cy5oPgorI2luY2x1
ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNsdWRlIDxzeXMvbW9kdWxlLmg+CisjaW5jbHVkZSA8c3lz
L3N5c2N0bC5oPgorI2luY2x1ZGUgPHN5cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9zbXAuaD4K
KworI2luY2x1ZGUgPG1hY2hpbmUvbmV4dXN2YXIuaD4KKworI2luY2x1ZGUgPHhlbi94ZW4tb3Mu
aD4KKworLyoKKyAqIFhlbiBuZXh1cyg0KSBkcml2ZXIuCisgKi8KK3N0YXRpYyBpbnQKK25leHVz
X3hlbl9wcm9iZShkZXZpY2VfdCBkZXYpCit7CisJaWYgKCF4ZW5fcHZfZG9tYWluKCkpCisJCXJl
dHVybiAoRU5YSU8pOworCisJcmV0dXJuIChCVVNfUFJPQkVfREVGQVVMVCk7Cit9CisKK3N0YXRp
YyBpbnQKK25leHVzX3hlbl9hdHRhY2goZGV2aWNlX3QgZGV2KQoreworCisJbmV4dXNfaW5pdF9y
ZXNvdXJjZXMoKTsKKwlidXNfZ2VuZXJpY19wcm9iZShkZXYpOworCisJLyoKKwkgKiBFeHBsaWNp
dGx5IGFkZCB0aGUgeGVucHYgZGV2aWNlIGhlcmUuIE90aGVyIHRvcCBsZXZlbAorCSAqIFhlbiBk
ZXZpY2VzIHdpbGwgYXR0YWNoIHRvIHRoaXMuCisJICovCisJaWYgKEJVU19BRERfQ0hJTEQoZGV2
LCAwLCAieGVucHYiLCAwKSA9PSBOVUxMKQorCQlwYW5pYygieGVucHY6IGNvdWxkIG5vdCBhdHRh
Y2giKTsKKwlidXNfZ2VuZXJpY19hdHRhY2goZGV2KTsKKwlyZXR1cm4gMDsKK30KKworc3RhdGlj
IGRldmljZV9tZXRob2RfdCBuZXh1c194ZW5fbWV0aG9kc1tdID0geworCS8qIERldmljZSBpbnRl
cmZhY2UgKi8KKwlERVZNRVRIT0QoZGV2aWNlX3Byb2JlLAkJbmV4dXNfeGVuX3Byb2JlKSwKKwlE
RVZNRVRIT0QoZGV2aWNlX2F0dGFjaCwJbmV4dXNfeGVuX2F0dGFjaCksCisKKwl7IDAsIDAgfQor
fTsKKworREVGSU5FX0NMQVNTXzEobmV4dXMsIG5leHVzX3hlbl9kcml2ZXIsIG5leHVzX3hlbl9t
ZXRob2RzLCAxLCBuZXh1c19kcml2ZXIpOworc3RhdGljIGRldmNsYXNzX3QgbmV4dXNfZGV2Y2xh
c3M7CisKK0RSSVZFUl9NT0RVTEUobmV4dXNfeGVuLCByb290LCBuZXh1c194ZW5fZHJpdmVyLCBu
ZXh1c19kZXZjbGFzcywgMCwgMCk7CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi94ZW5wdi5jIGIv
c3lzL3g4Ni94ZW4veGVucHYuYwppbmRleCA0MWQ2NzRmLi43YmQ4MTBiIDEwMDY0NAotLS0gYS9z
eXMveDg2L3hlbi94ZW5wdi5jCisrKyBiL3N5cy94ODYveGVuL3hlbnB2LmMKQEAgLTkwLDYgKzkw
LDcgQEAgc3RhdGljIGRyaXZlcl90IHhlbnB2X2RyaXZlciA9IHsKIHN0YXRpYyBkZXZjbGFzc190
IHhlbnB2X2RldmNsYXNzOwogCiBEUklWRVJfTU9EVUxFKHhlbnB2LCBuZXh1cywgeGVucHZfZHJp
dmVyLCB4ZW5wdl9kZXZjbGFzcywgMCwgMCk7CitEUklWRVJfTU9EVUxFKHhlbnB2LCB4ZW5wY2ks
IHhlbnB2X2RyaXZlciwgeGVucHZfZGV2Y2xhc3MsIDAsIDApOwogCiAvKgogICogRHVtbXkgWGVu
IGNwdSBkZXZpY2UKZGlmZiAtLWdpdCBhL3N5cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYyBiL3N5
cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYwppbmRleCBkNDA0ODYyLi5iNWNmNDEzIDEwMDY0NAot
LS0gYS9zeXMveGVuL3hlbnN0b3JlL3hlbnN0b3JlLmMKKysrIGIvc3lzL3hlbi94ZW5zdG9yZS94
ZW5zdG9yZS5jCkBAIC0xMjYxLDExICsxMjYxLDcgQEAgc3RhdGljIGRldmljZV9tZXRob2RfdCB4
ZW5zdG9yZV9tZXRob2RzW10gPSB7CiBERUZJTkVfQ0xBU1NfMCh4ZW5zdG9yZSwgeGVuc3RvcmVf
ZHJpdmVyLCB4ZW5zdG9yZV9tZXRob2RzLCAwKTsKIHN0YXRpYyBkZXZjbGFzc190IHhlbnN0b3Jl
X2RldmNsYXNzOyAKICAKLSNpZmRlZiBYRU5IVk0KLURSSVZFUl9NT0RVTEUoeGVuc3RvcmUsIHhl
bnBjaSwgeGVuc3RvcmVfZHJpdmVyLCB4ZW5zdG9yZV9kZXZjbGFzcywgMCwgMCk7Ci0jZWxzZQot
RFJJVkVSX01PRFVMRSh4ZW5zdG9yZSwgbmV4dXMsIHhlbnN0b3JlX2RyaXZlciwgeGVuc3RvcmVf
ZGV2Y2xhc3MsIDAsIDApOwotI2VuZGlmCitEUklWRVJfTU9EVUxFKHhlbnN0b3JlLCB4ZW5wdiwg
eGVuc3RvcmVfZHJpdmVyLCB4ZW5zdG9yZV9kZXZjbGFzcywgMCwgMCk7CiAKIC8qLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBTeXNjdGwgRGF0YSAtLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLSovCiAvKiBYWFggU2hvdWxkbid0IHRoZSBub2RlIGJlIHNvbWV3aGVyZSBlbHNl
PyAqLwotLSAKMS43LjcuNSAoQXBwbGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVs
QGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcS-0002pJ-9G; Thu, 02 Jan 2014 15:55:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcP-0002og-Sm
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:34 +0000
Received: from [193.109.254.147:9854] by server-16.bemta-14.messagelabs.com id
	BA/1E-20600-5FB85C25; Thu, 02 Jan 2014 15:55:33 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388678131!6247185!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15609 invoked from network); 2 Jan 2014 15:55:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89262674"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:30 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRJ-00043n-Bv;
	Thu, 02 Jan 2014 15:44:05 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:51 +0100
Message-ID: <1388677433-49525-18-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 17/19] xen: xenstore changes to support PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/xenstore/xenstore.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/sys/xen/xenstore/xenstore.c b/sys/xen/xenstore/xenstore.c
index b5cf413..7fa08cc 100644
--- a/sys/xen/xenstore/xenstore.c
+++ b/sys/xen/xenstore/xenstore.c
@@ -229,13 +229,11 @@ struct xs_softc {
 	 */
 	struct sx xenwatch_mutex;
 
-#ifdef XENHVM
 	/**
 	 * The HVM guest pseudo-physical frame number.  This is Xen's mapping
 	 * of the true machine frame number into our "physical address space".
 	 */
 	unsigned long gpfn;
-#endif
 
 	/**
 	 * The event channel for communicating with the
@@ -1147,13 +1145,15 @@ xs_attach(device_t dev)
 	/* Initialize the interface to xenstore. */
 	struct proc *p;
 
-#ifdef XENHVM
-	xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
-	xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
-	xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
-#else
-	xs.evtchn = xen_start_info->store_evtchn;
-#endif
+	if (xen_hvm_domain()) {
+		xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
+		xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
+		xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
+	} else if (xen_pv_domain()) {
+		xs.evtchn = HYPERVISOR_start_info->store_evtchn;
+	} else {
+		panic("Unknown domain type, cannot initialize xenstore\n");
+	}
 
 	TAILQ_INIT(&xs.reply_list);
 	TAILQ_INIT(&xs.watch_events);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vykca-0002vr-Gk; Thu, 02 Jan 2014 15:55:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcY-0002u5-Ll
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:42 +0000
Received: from [85.158.143.35:21601] by server-3.bemta-4.messagelabs.com id
	3B/07-32360-EFB85C25; Thu, 02 Jan 2014 15:55:42 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7140 invoked from network); 2 Jan 2014 15:55:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074602"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:39 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRG-00043n-GD;
	Thu, 02 Jan 2014 15:44:02 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:46 +0100
Message-ID: <1388677433-49525-13-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_12/19=5D_xen=3A_add_a_hook_to_p?=
	=?utf-8?q?erform_AP_startup?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QVAgc3RhcnR1cCBvbiBQVkggZm9sbG93cyB0aGUgUFYgbWV0aG9kLCBzbyB3ZSBuZWVkIHRvIGFk
ZCBhIGhvb2sgaW4Kb3JkZXIgdG8gZGl2ZXJnZSBmcm9tIGJhcmUgbWV0YWwuCi0tLQogc3lzL2Ft
ZDY0L2FtZDY0L21wX21hY2hkZXAuYyB8ICAgMTQgKysrLS0tCiBzeXMvYW1kNjQvaW5jbHVkZS9j
cHUuaCAgICAgIHwgICAgMSArCiBzeXMvYW1kNjQvaW5jbHVkZS9zbXAuaCAgICAgIHwgICAgMSAr
CiBzeXMveDg2L3hlbi9odm0uYyAgICAgICAgICAgIHwgICAxMiArKysrKy0KIHN5cy94ODYveGVu
L3B2LmMgICAgICAgICAgICAgfCAgIDg1ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKwogc3lzL3hlbi9wdi5oICAgICAgICAgICAgICAgICB8ICAgMzIgKysrKysrKysr
KysrKysrKwogNiBmaWxlcyBjaGFuZ2VkLCAxMzcgaW5zZXJ0aW9ucygrKSwgOCBkZWxldGlvbnMo
LSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMveGVuL3B2LmgKCmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQvbXBfbWFjaGRlcC5jIGIvc3lzL2FtZDY0L2FtZDY0L21wX21hY2hkZXAuYwppbmRl
eCA0ZWY0YjNkLi4wNzM4YTM3IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbXBfbWFjaGRl
cC5jCisrKyBiL3N5cy9hbWQ2NC9hbWQ2NC9tcF9tYWNoZGVwLmMKQEAgLTkwLDcgKzkwLDcgQEAg
ZXh0ZXJuICBzdHJ1Y3QgcGNwdSBfX3BjcHVbXTsKIAogLyogQVAgdXNlcyB0aGlzIGR1cmluZyBi
b290c3RyYXAuICBEbyBub3Qgc3RhdGljaXplLiAgKi8KIGNoYXIgKmJvb3RTVEs7Ci1zdGF0aWMg
aW50IGJvb3RBUDsKK2ludCBib290QVA7CiAKIC8qIEZyZWUgdGhlc2UgYWZ0ZXIgdXNlICovCiB2
b2lkICpib290c3RhY2tzW01BWENQVV07CkBAIC0xMjQsNyArMTI0LDggQEAgc3RhdGljIHVfbG9u
ZyAqaXBpX2hhcmRjbG9ja19jb3VudHNbTUFYQ1BVXTsKIAogLyogRGVmYXVsdCBjcHVfb3BzIGlt
cGxlbWVudGF0aW9uLiAqLwogc3RydWN0IGNwdV9vcHMgY3B1X29wcyA9IHsKLQkuaXBpX3ZlY3Rv
cmVkID0gbGFwaWNfaXBpX3ZlY3RvcmVkCisJLmlwaV92ZWN0b3JlZCA9IGxhcGljX2lwaV92ZWN0
b3JlZCwKKwkuc3RhcnRfYWxsX2FwcyA9IG5hdGl2ZV9zdGFydF9hbGxfYXBzLAogfTsKIAogZXh0
ZXJuIGludGhhbmRfdCBJRFRWRUMoZmFzdF9zeXNjYWxsKSwgSURUVkVDKGZhc3Rfc3lzY2FsbDMy
KTsKQEAgLTEzOCw3ICsxMzksNyBAQCBleHRlcm4gaW50IHBtYXBfcGNpZF9lbmFibGVkOwogc3Rh
dGljIHZvbGF0aWxlIGNwdXNldF90IGlwaV9ubWlfcGVuZGluZzsKIAogLyogdXNlZCB0byBob2xk
IHRoZSBBUCdzIHVudGlsIHdlIGFyZSByZWFkeSB0byByZWxlYXNlIHRoZW0gKi8KLXN0YXRpYyBz
dHJ1Y3QgbXR4IGFwX2Jvb3RfbXR4Oworc3RydWN0IG10eCBhcF9ib290X210eDsKIAogLyogU2V0
IHRvIDEgb25jZSB3ZSdyZSByZWFkeSB0byBsZXQgdGhlIEFQcyBvdXQgb2YgdGhlIHBlbi4gKi8K
IHN0YXRpYyB2b2xhdGlsZSBpbnQgYXBzX3JlYWR5ID0gMDsKQEAgLTE2NSw3ICsxNjYsNiBAQCBz
dGF0aWMgaW50IGNwdV9jb3JlczsJCQkvKiBjb3JlcyBwZXIgcGFja2FnZSAqLwogCiBzdGF0aWMg
dm9pZAlhc3NpZ25fY3B1X2lkcyh2b2lkKTsKIHN0YXRpYyB2b2lkCXNldF9pbnRlcnJ1cHRfYXBp
Y19pZHModm9pZCk7Ci1zdGF0aWMgaW50CXN0YXJ0X2FsbF9hcHModm9pZCk7CiBzdGF0aWMgaW50
CXN0YXJ0X2FwKGludCBhcGljX2lkKTsKIHN0YXRpYyB2b2lkCXJlbGVhc2VfYXBzKHZvaWQgKmR1
bW15KTsKIApAQCAtNTY5LDcgKzU2OSw3IEBAIGNwdV9tcF9zdGFydCh2b2lkKQogCWFzc2lnbl9j
cHVfaWRzKCk7CiAKIAkvKiBTdGFydCBlYWNoIEFwcGxpY2F0aW9uIFByb2Nlc3NvciAqLwotCXN0
YXJ0X2FsbF9hcHMoKTsKKwljcHVfb3BzLnN0YXJ0X2FsbF9hcHMoKTsKIAogCXNldF9pbnRlcnJ1
cHRfYXBpY19pZHMoKTsKIH0KQEAgLTkwOCw4ICs5MDgsOCBAQCBhc3NpZ25fY3B1X2lkcyh2b2lk
KQogLyoKICAqIHN0YXJ0IGVhY2ggQVAgaW4gb3VyIGxpc3QKICAqLwotc3RhdGljIGludAotc3Rh
cnRfYWxsX2Fwcyh2b2lkKQoraW50CituYXRpdmVfc3RhcnRfYWxsX2Fwcyh2b2lkKQogewogCXZt
X29mZnNldF90IHZhID0gYm9vdF9hZGRyZXNzICsgS0VSTkJBU0U7CiAJdV9pbnQ2NF90ICpwdDQs
ICpwdDMsICpwdDI7CmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVkZS9jcHUuaCBiL3N5cy9h
bWQ2NC9pbmNsdWRlL2NwdS5oCmluZGV4IDNkOWZmNTMxLi5lZDlmMWRiIDEwMDY0NAotLS0gYS9z
eXMvYW1kNjQvaW5jbHVkZS9jcHUuaAorKysgYi9zeXMvYW1kNjQvaW5jbHVkZS9jcHUuaApAQCAt
NjQsNiArNjQsNyBAQCBzdHJ1Y3QgY3B1X29wcyB7CiAJdm9pZCAoKmNwdV9pbml0KSh2b2lkKTsK
IAl2b2lkICgqY3B1X3Jlc3VtZSkodm9pZCk7CiAJdm9pZCAoKmlwaV92ZWN0b3JlZCkodV9pbnQs
IGludCk7CisJaW50ICAoKnN0YXJ0X2FsbF9hcHMpKHZvaWQpOwogfTsKIAogZXh0ZXJuIHN0cnVj
dAljcHVfb3BzIGNwdV9vcHM7CmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVkZS9zbXAuaCBi
L3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5oCmluZGV4IGQxYjM2NmIuLjE1YmM4MjMgMTAwNjQ0Ci0t
LSBhL3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5oCisrKyBiL3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5o
CkBAIC03OSw2ICs3OSw3IEBAIHZvaWQJc21wX21hc2tlZF9pbnZscGdfcmFuZ2UoY3B1c2V0X3Qg
bWFzaywgc3RydWN0IHBtYXAgKnBtYXAsCiAJICAgIHZtX29mZnNldF90IHN0YXJ0dmEsIHZtX29m
ZnNldF90IGVuZHZhKTsKIHZvaWQJc21wX2ludmx0bGIoc3RydWN0IHBtYXAgKnBtYXApOwogdm9p
ZAlzbXBfbWFza2VkX2ludmx0bGIoY3B1c2V0X3QgbWFzaywgc3RydWN0IHBtYXAgKnBtYXApOwor
aW50CW5hdGl2ZV9zdGFydF9hbGxfYXBzKHZvaWQpOwogCiAjZW5kaWYgLyogIUxPQ09SRSAqLwog
I2VuZGlmIC8qIFNNUCAqLwpkaWZmIC0tZ2l0IGEvc3lzL3g4Ni94ZW4vaHZtLmMgYi9zeXMveDg2
L3hlbi9odm0uYwppbmRleCBmYjFlZDc5Li40OWNhYWNmIDEwMDY0NAotLS0gYS9zeXMveDg2L3hl
bi9odm0uYworKysgYi9zeXMveDg2L3hlbi9odm0uYwpAQCAtNTMsNiArNTMsOSBAQCBfX0ZCU0RJ
RCgiJEZyZWVCU0QkIik7CiAjaW5jbHVkZSA8eGVuL2h5cGVydmlzb3IuaD4KICNpbmNsdWRlIDx4
ZW4vaHZtLmg+CiAjaW5jbHVkZSA8eGVuL3hlbl9pbnRyLmg+CisjaWZkZWYgX19hbWQ2NF9fCisj
aW5jbHVkZSA8eGVuL3B2Lmg+CisjZW5kaWYKIAogI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvaHZt
L3BhcmFtcy5oPgogI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvdmNwdS5oPgpAQCAtMTE5LDcgKzEy
MiwxMCBAQCBlbnVtIHhlbl9kb21haW5fdHlwZSB4ZW5fZG9tYWluX3R5cGUgPSBYRU5fTkFUSVZF
Owogc3RydWN0IGNwdV9vcHMgeGVuX2h2bV9jcHVfb3BzID0gewogCS5pcGlfdmVjdG9yZWQJPSBs
YXBpY19pcGlfdmVjdG9yZWQsCiAJLmNwdV9pbml0CT0geGVuX2h2bV9jcHVfaW5pdCwKLQkuY3B1
X3Jlc3VtZQk9IHhlbl9odm1fY3B1X3Jlc3VtZQorCS5jcHVfcmVzdW1lCT0geGVuX2h2bV9jcHVf
cmVzdW1lLAorI2lmZGVmIF9fYW1kNjRfXworCS5zdGFydF9hbGxfYXBzID0gbmF0aXZlX3N0YXJ0
X2FsbF9hcHMsCisjZW5kaWYKIH07CiAKIHN0YXRpYyBNQUxMT0NfREVGSU5FKE1fWEVOSFZNLCAi
eGVuX2h2bSIsICJYZW4gSFZNIFBWIFN1cHBvcnQiKTsKQEAgLTY5OCw2ICs3MDQsMTAgQEAgeGVu
X2h2bV9pbml0KGVudW0geGVuX2h2bV9pbml0X3R5cGUgaW5pdF90eXBlKQogCQlzZXR1cF94ZW5f
ZmVhdHVyZXMoKTsKIAkJY3B1X29wcyA9IHhlbl9odm1fY3B1X29wczsKICAJCXZtX2d1ZXN0ID0g
Vk1fR1VFU1RfWEVOOworI2lmZGVmIF9fYW1kNjRfXworCQlpZiAoeGVuX3B2X2RvbWFpbigpKQor
CQkJY3B1X29wcy5zdGFydF9hbGxfYXBzID0geGVuX3B2X3N0YXJ0X2FsbF9hcHM7CisjZW5kaWYK
IAkJYnJlYWs7CiAJY2FzZSBYRU5fSFZNX0lOSVRfUkVTVU1FOgogCQlpZiAoZXJyb3IgIT0gMCkK
ZGlmZiAtLWdpdCBhL3N5cy94ODYveGVuL3B2LmMgYi9zeXMveDg2L3hlbi9wdi5jCmluZGV4IGQx
MWJjMWEuLjIyZmQ2YTYgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVuL3B2LmMKKysrIGIvc3lzL3g4
Ni94ZW4vcHYuYwpAQCAtMzQsOCArMzQsMTEgQEAgX19GQlNESUQoIiRGcmVlQlNEJCIpOwogI2lu
Y2x1ZGUgPHN5cy9rZXJuZWwuaD4KICNpbmNsdWRlIDxzeXMvcmVib290Lmg+CiAjaW5jbHVkZSA8
c3lzL3N5c3RtLmg+CisjaW5jbHVkZSA8c3lzL21hbGxvYy5oPgogI2luY2x1ZGUgPHN5cy9sb2Nr
Lmg+CiAjaW5jbHVkZSA8c3lzL3J3bG9jay5oPgorI2luY2x1ZGUgPHN5cy9tdXRleC5oPgorI2lu
Y2x1ZGUgPHN5cy9zbXAuaD4KIAogI2luY2x1ZGUgPHZtL3ZtLmg+CiAjaW5jbHVkZSA8dm0vdm1f
ZXh0ZXJuLmg+CkBAIC00OSw5ICs1MiwxMyBAQCBfX0ZCU0RJRCgiJEZyZWVCU0QkIik7CiAjaW5j
bHVkZSA8bWFjaGluZS9zeXNhcmNoLmg+CiAjaW5jbHVkZSA8bWFjaGluZS9jbG9jay5oPgogI2lu
Y2x1ZGUgPG1hY2hpbmUvcGMvYmlvcy5oPgorI2luY2x1ZGUgPG1hY2hpbmUvc21wLmg+CiAKICNp
bmNsdWRlIDx4ZW4veGVuLW9zLmg+CiAjaW5jbHVkZSA8eGVuL2h5cGVydmlzb3IuaD4KKyNpbmNs
dWRlIDx4ZW4vcHYuaD4KKworI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvdmNwdS5oPgogCiAvKiBO
YXRpdmUgaW5pdGlhbCBmdW5jdGlvbiAqLwogZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJfdGltZSh1
X2ludDY0X3QsIHVfaW50NjRfdCk7CkBAIC02NSw2ICs3MiwxNSBAQCBzdGF0aWMgY2FkZHJfdCB4
ZW5fcHZfcGFyc2VfcHJlbG9hZF9kYXRhKHVfaW50NjRfdCk7CiBzdGF0aWMgdm9pZCB4ZW5fcHZf
cGFyc2VfbWVtbWFwKGNhZGRyX3QsIHZtX3BhZGRyX3QgKiwgaW50ICopOwogCiBzdGF0aWMgdm9p
ZCB4ZW5fcHZfc2V0X2luaXRfb3BzKHZvaWQpOworLyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tIEV4dGVybiBEZWNsYXJhdGlvbnMgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KKy8q
IFZhcmlhYmxlcyB1c2VkIGJ5IGFtZDY0IG1wX21hY2hkZXAgdG8gc3RhcnQgQVBzICovCitleHRl
cm4gc3RydWN0IG10eCBhcF9ib290X210eDsKK2V4dGVybiB2b2lkICpib290c3RhY2tzW107Citl
eHRlcm4gY2hhciAqZG91YmxlZmF1bHRfc3RhY2s7CitleHRlcm4gY2hhciAqbm1pX3N0YWNrOwor
ZXh0ZXJuIHZvaWQgKmRwY3B1OworZXh0ZXJuIGludCBib290QVA7CitleHRlcm4gY2hhciAqYm9v
dFNUSzsKIAogLyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBHbG9iYWwgRGF0YSAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KIC8qIFhlbiBpbml0X29wcyBpbXBsZW1l
bnRhdGlvbi4gKi8KQEAgLTE2OCw2ICsxODQsNzUgQEAgaGFtbWVyX3RpbWVfeGVuKHN0YXJ0X2lu
Zm9fdCAqc2ksIHVfaW50NjRfdCB4ZW5zdGFjaykKIH0KIAogLyotLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLSBQViBzcGVjaWZpYyAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
Ki8KKworc3RhdGljIGludAorc3RhcnRfeGVuX2FwKGludCBjcHUpCit7CisJc3RydWN0IHZjcHVf
Z3Vlc3RfY29udGV4dCAqY3R4dDsKKwlpbnQgbXMsIGNwdXMgPSBtcF9uYXBzOworCisJY3R4dCA9
IG1hbGxvYyhzaXplb2YoKmN0eHQpLCBNX1RFTVAsIE1fTk9XQUlUIHwgTV9aRVJPKTsKKwlpZiAo
Y3R4dCA9PSBOVUxMKQorCQlwYW5pYygidW5hYmxlIHRvIGFsbG9jYXRlIG1lbW9yeSIpOworCisJ
Y3R4dC0+ZmxhZ3MgPSBWR0NGX0lOX0tFUk5FTDsKKwljdHh0LT51c2VyX3JlZ3MucmlwID0gKHVu
c2lnbmVkIGxvbmcpIGluaXRfc2Vjb25kYXJ5OworCWN0eHQtPnVzZXJfcmVncy5yc3AgPSAodW5z
aWduZWQgbG9uZykgYm9vdFNUSzsKKworCS8qIFNldCB0aGUgQVAgdG8gdXNlIHRoZSBzYW1lIHBh
Z2UgdGFibGVzICovCisJY3R4dC0+Y3RybHJlZ1szXSA9IEtQTUw0cGh5czsKKworCWlmIChIWVBF
UlZJU09SX3ZjcHVfb3AoVkNQVU9QX2luaXRpYWxpc2UsIGNwdSwgY3R4dCkpCisJCXBhbmljKCJ1
bmFibGUgdG8gaW5pdGlhbGl6ZSBBUCMlZFxuIiwgY3B1KTsKKworCWZyZWUoY3R4dCwgTV9URU1Q
KTsKKworCS8qIExhdW5jaCB0aGUgdkNQVSAqLworCWlmIChIWVBFUlZJU09SX3ZjcHVfb3AoVkNQ
VU9QX3VwLCBjcHUsIE5VTEwpKQorCQlwYW5pYygidW5hYmxlIHRvIHN0YXJ0IEFQIyVkXG4iLCBj
cHUpOworCisJLyogV2FpdCB1cCB0byA1IHNlY29uZHMgZm9yIGl0IHRvIHN0YXJ0LiAqLworCWZv
ciAobXMgPSAwOyBtcyA8IDUwMDA7IG1zKyspIHsKKwkJaWYgKG1wX25hcHMgPiBjcHVzKQorCQkJ
cmV0dXJuICgxKTsJLyogcmV0dXJuIFNVQ0NFU1MgKi8KKwkJREVMQVkoMTAwMCk7CisJfQorCisJ
cmV0dXJuICgwKTsKK30KKworaW50Cit4ZW5fcHZfc3RhcnRfYWxsX2Fwcyh2b2lkKQoreworCWlu
dCBjcHU7CisKKwltdHhfaW5pdCgmYXBfYm9vdF9tdHgsICJhcCBib290IiwgTlVMTCwgTVRYX1NQ
SU4pOworCisJZm9yIChjcHUgPSAxOyBjcHUgPCBtcF9uY3B1czsgY3B1KyspIHsKKworCQkvKiBh
bGxvY2F0ZSBhbmQgc2V0IHVwIGFuIGlkbGUgc3RhY2sgZGF0YSBwYWdlICovCisJCWJvb3RzdGFj
a3NbY3B1XSA9ICh2b2lkICopa21lbV9tYWxsb2Moa2VybmVsX2FyZW5hLAorCQkgICAgS1NUQUNL
X1BBR0VTICogUEFHRV9TSVpFLCBNX1dBSVRPSyB8IE1fWkVSTyk7CisJCWRvdWJsZWZhdWx0X3N0
YWNrID0gKGNoYXIgKilrbWVtX21hbGxvYyhrZXJuZWxfYXJlbmEsCisJCSAgICBQQUdFX1NJWkUs
IE1fV0FJVE9LIHwgTV9aRVJPKTsKKwkJbm1pX3N0YWNrID0gKGNoYXIgKilrbWVtX21hbGxvYyhr
ZXJuZWxfYXJlbmEsIFBBR0VfU0laRSwKKwkJICAgIE1fV0FJVE9LIHwgTV9aRVJPKTsKKwkJZHBj
cHUgPSAodm9pZCAqKWttZW1fbWFsbG9jKGtlcm5lbF9hcmVuYSwgRFBDUFVfU0laRSwKKwkJICAg
IE1fV0FJVE9LIHwgTV9aRVJPKTsKKworCQlib290U1RLID0gKGNoYXIgKilib290c3RhY2tzW2Nw
dV0gKyBLU1RBQ0tfUEFHRVMgKiBQQUdFX1NJWkUgLSA4OworCQlib290QVAgPSBjcHU7CisKKwkJ
LyogYXR0ZW1wdCB0byBzdGFydCB0aGUgQXBwbGljYXRpb24gUHJvY2Vzc29yICovCisJCWlmICgh
c3RhcnRfeGVuX2FwKGNwdSkpCisJCQlwYW5pYygiQVAgIyVkIGZhaWxlZCB0byBzdGFydCEiLCBj
cHUpOworCisJCUNQVV9TRVQoY3B1LCAmYWxsX2NwdXMpOwkvKiByZWNvcmQgQVAgaW4gQ1BVIG1h
cCAqLworCX0KKworCXJldHVybiAobXBfbmFwcyk7Cit9CisKIC8qCiAgKiBGdW5jdGlvbnMgdG8g
Y29udmVydCB0aGUgImV4dHJhIiBwYXJhbWV0ZXJzIHBhc3NlZCBieSBYZW4KICAqIGludG8gRnJl
ZUJTRCBib290IG9wdGlvbnMgKGZyb20gdGhlIGkzODYgWGVuIHBvcnQpLgpkaWZmIC0tZ2l0IGEv
c3lzL3hlbi9wdi5oIGIvc3lzL3hlbi9wdi5oCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAw
MDAwMDAuLjQ1Yjc0NzMKLS0tIC9kZXYvbnVsbAorKysgYi9zeXMveGVuL3B2LmgKQEAgLTAsMCAr
MSwzMiBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAxMyBSb2dlciBQYXUgTW9ubsOpIDxyb2dl
ci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0
cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRo
b3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9s
bG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Yg
c291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNl
LCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgor
ICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBh
Ym92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5k
IHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5k
L29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKgor
ICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBUSEUgQVVUSE9SIEFORCBDT05UUklCVVRP
UlMgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5D
TFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQorICogSU1QTElFRCBXQVJSQU5USUVTIE9G
IE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UKKyAq
IEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hBTEwgVEhFIEFVVEhPUiBPUiBDT05UUklC
VVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lERU5UQUws
IFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTAorICogREFNQUdFUyAoSU5DTFVE
SU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUgR09PRFMK
KyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1IgQlVTSU5F
U1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVTRUQgQU5EIE9OIEFOWSBUSEVPUlkgT0Yg
TElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QKKyAqIExJQUJJTElUWSwgT1Ig
VE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElOIEFOWSBX
QVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURWSVNFRCBP
RiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFNQUdFLgorICovCisKKyNpZm5kZWYJX19Y
RU5fUFZfSF9fCisjZGVmaW5lCV9fWEVOX1BWX0hfXworCitpbnQJeGVuX3B2X3N0YXJ0X2FsbF9h
cHModm9pZCk7CisKKyNlbmRpZgkvKiBfX1hFTl9QVl9IX18gKi8KLS0gCjEuNy43LjUgKEFwcGxl
IEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
XwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9s
aXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcX-0002se-0v; Thu, 02 Jan 2014 15:55:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcV-0002qr-UG
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:40 +0000
Received: from [85.158.143.35:52081] by server-2.bemta-4.messagelabs.com id
	13/45-11386-BFB85C25; Thu, 02 Jan 2014 15:55:39 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6552 invoked from network); 2 Jan 2014 15:55:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074592"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:37 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRF-00043n-TG;
	Thu, 02 Jan 2014 15:44:01 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:45 +0100
Message-ID: <1388677433-49525-12-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 11/19] xen: changes to hvm code in order to
	support PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On PVH we don't need to initialize the shared info page or disable emulated
devices. Also, make sure PV IPIs are set up before starting the APs.
---
 sys/x86/xen/hvm.c |   17 ++++++++++++-----
 1 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index 9a0411e..fb1ed79 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -523,7 +523,7 @@ xen_setup_cpus(void)
 {
 	int i;
 
-	if (!xen_hvm_domain() || !xen_vector_callback_enabled)
+	if (!xen_vector_callback_enabled)
 		return;
 
 #ifdef __amd64__
@@ -712,10 +712,13 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 	}
 
 	xen_vector_callback_enabled = 0;
-	xen_domain_type = XEN_HVM_DOMAIN;
-	xen_hvm_init_shared_info_page();
 	xen_hvm_set_callback(NULL);
-	xen_hvm_disable_emulated_devices();
+
+	if (!xen_pv_domain()) {
+		xen_domain_type = XEN_HVM_DOMAIN;
+		xen_hvm_init_shared_info_page();
+		xen_hvm_disable_emulated_devices();
+	}
 } 
 
 void
@@ -746,6 +749,9 @@ xen_set_vcpu_id(void)
 	struct pcpu *pc;
 	int i;
 
+	if (!xen_hvm_domain())
+		return;
+
 	/* Set vcpu_id to acpi_id */
 	CPU_FOREACH(i) {
 		pc = pcpu_find(i);
@@ -789,7 +795,8 @@ xen_hvm_cpu_init(void)
 
 SYSINIT(xen_hvm_init, SI_SUB_HYPERVISOR, SI_ORDER_FIRST, xen_hvm_sysinit, NULL);
 #ifdef SMP
-SYSINIT(xen_setup_cpus, SI_SUB_SMP, SI_ORDER_FIRST, xen_setup_cpus, NULL);
+/* We need to setup IPIs before APs are started */
+SYSINIT(xen_setup_cpus, SI_SUB_SMP-1, SI_ORDER_FIRST, xen_setup_cpus, NULL);
 #endif
 SYSINIT(xen_hvm_cpu_init, SI_SUB_INTR, SI_ORDER_FIRST, xen_hvm_cpu_init, NULL);
 SYSINIT(xen_set_vcpu_id, SI_SUB_CPU, SI_ORDER_ANY, xen_set_vcpu_id, NULL);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcW-0002rK-4X; Thu, 02 Jan 2014 15:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcU-0002qO-Nd
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:39 +0000
Received: from [85.158.143.35:51976] by server-3.bemta-4.messagelabs.com id
	C6/F6-32360-AFB85C25; Thu, 02 Jan 2014 15:55:38 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6459 invoked from network); 2 Jan 2014 15:55:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074581"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:35 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRJ-00043n-Us;
	Thu, 02 Jan 2014 15:44:06 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:52 +0100
Message-ID: <1388677433-49525-19-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 18/19] xen: changes to gnttab for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/gnttab.c |   26 +++++++++++++++++++++-----
 1 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/sys/xen/gnttab.c b/sys/xen/gnttab.c
index 03c32b7..6949be5 100644
--- a/sys/xen/gnttab.c
+++ b/sys/xen/gnttab.c
@@ -25,6 +25,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/lock.h>
 #include <sys/malloc.h>
 #include <sys/mman.h>
+#include <sys/limits.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -607,6 +608,7 @@ gnttab_resume(void)
 {
 	int error;
 	unsigned int max_nr_gframes, nr_gframes;
+	void *alloc_mem;
 
 	nr_gframes = nr_grant_frames;
 	max_nr_gframes = max_nr_grant_frames();
@@ -614,11 +616,25 @@ gnttab_resume(void)
 		return (ENOSYS);
 
 	if (!resume_frames) {
-		error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
-		    &resume_frames);
-		if (error) {
-			printf("error mapping gnttab share frames\n");
-			return (error);
+		if (xen_pv_domain()) {
+			/*
+			 * This is a waste of physical memory,
+			 * we should use ballooned pages instead,
+			 * but it will do for now.
+			 */
+			alloc_mem = contigmalloc(max_nr_gframes * PAGE_SIZE,
+			                         M_DEVBUF, M_NOWAIT, 0,
+			                         ULONG_MAX, PAGE_SIZE, 0);
+			KASSERT((alloc_mem != NULL),
+				("unable to alloc memory for gnttab"));
+			resume_frames = vtophys(alloc_mem);
+		} else {
+			error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
+			    &resume_frames);
+			if (error) {
+				printf("error mapping gnttab share frames\n");
+				return (error);
+			}
 		}
 	}
 
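[Editorial sketch, not part of the patch: on PV the hunk above replaces the xenpci foreign-memory window with a plain physically contiguous, page-aligned allocation of max_nr_gframes pages. The helper below is a user-space illustration of that size/alignment contract only, with posix_memalign() standing in for contigmalloc(9); the function name is made up, and physical contiguity cannot be demonstrated outside the kernel.]

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/*
 * Emulate the allocation contract of the PV branch above: a
 * page-aligned buffer large enough for max_nr_gframes grant-table
 * frames.  The kernel uses contigmalloc(9), which additionally
 * guarantees physical contiguity; posix_memalign() only models the
 * size and alignment part, which is what can be checked here.
 */
static void *
alloc_gnttab_space(unsigned int max_nr_gframes)
{
	void *mem;

	if (posix_memalign(&mem, PAGE_SIZE,
	    (size_t)max_nr_gframes * PAGE_SIZE) != 0)
		return (NULL);
	return (mem);
}
```

In the real code, vtophys(9) then translates the buffer's kernel virtual address into the physical address stored in resume_frames.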
-- 
1.7.7.5 (Apple Git-26)



From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcS-0002pJ-9G; Thu, 02 Jan 2014 15:55:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcP-0002og-Sm
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:34 +0000
Received: from [193.109.254.147:9854] by server-16.bemta-14.messagelabs.com id
	BA/1E-20600-5FB85C25; Thu, 02 Jan 2014 15:55:33 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388678131!6247185!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15609 invoked from network); 2 Jan 2014 15:55:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89262674"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:30 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRJ-00043n-Bv;
	Thu, 02 Jan 2014 15:44:05 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:51 +0100
Message-ID: <1388677433-49525-18-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 17/19] xen: xenstore changes to support PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/xenstore/xenstore.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/sys/xen/xenstore/xenstore.c b/sys/xen/xenstore/xenstore.c
index b5cf413..7fa08cc 100644
--- a/sys/xen/xenstore/xenstore.c
+++ b/sys/xen/xenstore/xenstore.c
@@ -229,13 +229,11 @@ struct xs_softc {
 	 */
 	struct sx xenwatch_mutex;
 
-#ifdef XENHVM
 	/**
 	 * The HVM guest pseudo-physical frame number.  This is Xen's mapping
 	 * of the true machine frame number into our "physical address space".
 	 */
 	unsigned long gpfn;
-#endif
 
 	/**
 	 * The event channel for communicating with the
@@ -1147,13 +1145,15 @@ xs_attach(device_t dev)
 	/* Initialize the interface to xenstore. */
 	struct proc *p;
 
-#ifdef XENHVM
-	xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
-	xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
-	xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
-#else
-	xs.evtchn = xen_start_info->store_evtchn;
-#endif
+	if (xen_hvm_domain()) {
+		xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
+		xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
+		xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
+	} else if (xen_pv_domain()) {
+		xs.evtchn = HYPERVISOR_start_info->store_evtchn;
+	} else {
+		panic("Unknown domain type, cannot initialize xenstore\n");
+	}
 
 	TAILQ_INIT(&xs.reply_list);
 	TAILQ_INIT(&xs.watch_events);
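[Editorial sketch, not part of the patch: the rewritten xs_attach() turns the old compile-time #ifdef XENHVM into a run-time three-way dispatch on the domain type. The stand-alone sketch below mirrors that control flow; the enum and helper are illustrative, not kernel API — the kernel queries xen_hvm_domain()/xen_pv_domain() directly.]

```c
#include <stddef.h>

/* Illustrative domain classification for the dispatch below. */
enum xs_domain { XS_HVM, XS_PV, XS_OTHER };

/*
 * Pick how the xenstore event channel is discovered: HVM guests ask
 * the hypervisor via an HVM parameter, PV guests read the start_info
 * page, and anything else is a fatal configuration error (the kernel
 * code panics; this sketch returns NULL instead).
 */
static const char *
xs_evtchn_source(enum xs_domain d)
{
	switch (d) {
	case XS_HVM:
		return ("HVM_PARAM_STORE_EVTCHN");
	case XS_PV:
		return ("start_info->store_evtchn");
	default:
		return (NULL);
	}
}
```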
-- 
1.7.7.5 (Apple Git-26)



From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcG-0002mQ-4J; Thu, 02 Jan 2014 15:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcE-0002mG-Ux
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:23 +0000
Received: from [85.158.137.68:18925] by server-7.bemta-3.messagelabs.com id
	4C/2A-27599-AEB85C25; Thu, 02 Jan 2014 15:55:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14410 invoked from network); 2 Jan 2014 15:55:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074504"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:18 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRI-00043n-Pz;
	Thu, 02 Jan 2014 15:44:04 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:50 +0100
Message-ID: <1388677433-49525-17-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 16/19] xen: add shutdown hook for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the PV shutdown hook to PVH.
---
 sys/dev/xen/control/control.c |   37 ++++++++++++++++++-------------------
 1 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/sys/dev/xen/control/control.c b/sys/dev/xen/control/control.c
index bc0609d..78894ba 100644
--- a/sys/dev/xen/control/control.c
+++ b/sys/dev/xen/control/control.c
@@ -316,21 +316,6 @@ xctrl_suspend()
 	EVENTHANDLER_INVOKE(power_resume);
 }
 
-static void
-xen_pv_shutdown_final(void *arg, int howto)
-{
-	/*
-	 * Inform the hypervisor that shutdown is complete.
-	 * This is not necessary in HVM domains since Xen
-	 * emulates ACPI in that mode and FreeBSD's ACPI
-	 * support will request this transition.
-	 */
-	if (howto & (RB_HALT | RB_POWEROFF))
-		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
-	else
-		HYPERVISOR_shutdown(SHUTDOWN_reboot);
-}
-
 #else
 
 /* HVM mode suspension. */
@@ -440,6 +425,21 @@ xctrl_crash()
 	panic("Xen directed crash");
 }
 
+static void
+xen_pv_shutdown_final(void *arg, int howto)
+{
+	/*
+	 * Inform the hypervisor that shutdown is complete.
+	 * This is not necessary in HVM domains since Xen
+	 * emulates ACPI in that mode and FreeBSD's ACPI
+	 * support will request this transition.
+	 */
+	if (howto & (RB_HALT | RB_POWEROFF))
+		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
+	else
+		HYPERVISOR_shutdown(SHUTDOWN_reboot);
+}
+
 /*------------------------------ Event Reception -----------------------------*/
 static void
 xctrl_on_watch_event(struct xs_watch *watch, const char **vec, unsigned int len)
@@ -522,10 +522,9 @@ xctrl_attach(device_t dev)
 	xctrl->xctrl_watch.callback_data = (uintptr_t)xctrl;
 	xs_register_watch(&xctrl->xctrl_watch);
 
-#ifndef XENHVM
-	EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
-			      SHUTDOWN_PRI_LAST);
-#endif
+	if (xen_pv_domain())
+		EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
+		                      SHUTDOWN_PRI_LAST);
 
 	return (0);
 }
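[Editorial sketch, not part of the patch: the handler's only input is the howto bit mask, so its behaviour can be pinned down in isolation. RB_HALT/RB_POWEROFF values below are taken from FreeBSD's sys/reboot.h of that era, and the SHUTDOWN_* codes from Xen's public sched.h ABI (poweroff = 0, reboot = 1); treat both as assumptions of this illustration.]

```c
#define RB_HALT		0x008	/* as in sys/reboot.h */
#define RB_POWEROFF	0x4000	/* as in sys/reboot.h */

#define SHUTDOWN_poweroff	0	/* as in Xen's public sched.h */
#define SHUTDOWN_reboot		1

/*
 * The same selection xen_pv_shutdown_final() makes before calling
 * HYPERVISOR_shutdown(): halt/poweroff requests map to poweroff,
 * everything else is treated as a reboot.
 */
static int
xen_shutdown_reason(int howto)
{
	return ((howto & (RB_HALT | RB_POWEROFF)) ?
	    SHUTDOWN_poweroff : SHUTDOWN_reboot);
}
```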
-- 
1.7.7.5 (Apple Git-26)



From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcG-0002mX-Gp; Thu, 02 Jan 2014 15:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcF-0002mH-HS
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:23 +0000
Received: from [85.158.137.68:25031] by server-12.bemta-3.messagelabs.com id
	46/FA-20055-AEB85C25; Thu, 02 Jan 2014 15:55:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14481 invoked from network); 2 Jan 2014 15:55:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074512"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRK-00043n-HW;
	Thu, 02 Jan 2014 15:44:06 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:53 +0100
Message-ID: <1388677433-49525-20-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 19/19] isa: allow ISA bus to attach to xenpv
	device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/x86/isa/isa.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
index 1a57137..9287ff2 100644
--- a/sys/x86/isa/isa.c
+++ b/sys/x86/isa/isa.c
@@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child, int type, int rid,
  * On this platform, isa can also attach to the legacy bus.
  */
 DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
+#ifdef XENHVM
+DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
+#endif
-- 
1.7.7.5 (Apple Git-26)



From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcM-0002nh-F2; Thu, 02 Jan 2014 15:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcK-0002n6-9J
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:28 +0000
Received: from [85.158.137.68:27959] by server-2.bemta-3.messagelabs.com id
	38/70-17329-FEB85C25; Thu, 02 Jan 2014 15:55:27 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14888 invoked from network); 2 Jan 2014 15:55:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074526"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:25 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRI-00043n-7D;
	Thu, 02 Jan 2014 15:44:04 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:49 +0100
Message-ID: <1388677433-49525-16-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
	PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce a Xen specific nexus that is going to be in charge of
attaching Xen specific devices.
---
 sys/conf/files.amd64          |    1 +
 sys/conf/files.i386           |    1 +
 sys/dev/xen/console/console.c |    2 +-
 sys/dev/xen/timer/timer.c     |    4 +-
 sys/dev/xen/xenpci/xenpci.c   |    4 ++
 sys/x86/xen/xen_nexus.c       |   82 +++++++++++++++++++++++++++++++++++++++++
 sys/x86/xen/xenpv.c           |    1 +
 sys/xen/xenstore/xenstore.c   |    6 +--
 8 files changed, 93 insertions(+), 8 deletions(-)
 create mode 100644 sys/x86/xen/xen_nexus.c

diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index d7c98cc..f378983 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -571,3 +571,4 @@ x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
 x86/xen/pvcpu_enum.c		optional	xenhvm
 x86/xen/xenpv.c			optional	xenhvm
+x86/xen/xen_nexus.c		optional	xenhvm
diff --git a/sys/conf/files.i386 b/sys/conf/files.i386
index 81142e3..02887a33 100644
--- a/sys/conf/files.i386
+++ b/sys/conf/files.i386
@@ -604,3 +604,4 @@ x86/x86/delay.c			standard
 x86/xen/hvm.c			optional xenhvm
 x86/xen/xen_intr.c		optional xen | xenhvm
 x86/xen/xenpv.c			optional xen | xenhvm
+x86/xen/xen_nexus.c		optional xen | xenhvm
diff --git a/sys/dev/xen/console/console.c b/sys/dev/xen/console/console.c
index 899dffc..91538fe 100644
--- a/sys/dev/xen/console/console.c
+++ b/sys/dev/xen/console/console.c
@@ -462,4 +462,4 @@ xcons_force_flush(void)
 	}
 }
 
-DRIVER_MODULE(xc, nexus, xc_driver, xc_devclass, 0, 0);
+DRIVER_MODULE(xc, xenpv, xc_driver, xc_devclass, 0, 0);
diff --git a/sys/dev/xen/timer/timer.c b/sys/dev/xen/timer/timer.c
index 96372ab..f16f5a5 100644
--- a/sys/dev/xen/timer/timer.c
+++ b/sys/dev/xen/timer/timer.c
@@ -636,5 +636,5 @@ static driver_t xentimer_driver = {
 	sizeof(struct xentimer_softc),
 };
 
-DRIVER_MODULE(xentimer, nexus, xentimer_driver, xentimer_devclass, 0, 0);
-MODULE_DEPEND(xentimer, nexus, 1, 1, 1);
+DRIVER_MODULE(xentimer, xenpv, xentimer_driver, xentimer_devclass, 0, 0);
+MODULE_DEPEND(xentimer, xenpv, 1, 1, 1);
diff --git a/sys/dev/xen/xenpci/xenpci.c b/sys/dev/xen/xenpci/xenpci.c
index dd2ad92..a27b54f 100644
--- a/sys/dev/xen/xenpci/xenpci.c
+++ b/sys/dev/xen/xenpci/xenpci.c
@@ -270,6 +270,10 @@ xenpci_attach(device_t dev)
 		goto errexit;
 	}
 
+	/* Add the xenpv device so top level Xen devices can attach */
+	if (BUS_ADD_CHILD(dev, 0, "xenpv", 0) == NULL)
+		panic("xenpci: unable to add xenpv device");
+
 	return (bus_generic_attach(dev));
 
 errexit:
diff --git a/sys/x86/xen/xen_nexus.c b/sys/x86/xen/xen_nexus.c
new file mode 100644
index 0000000..d34e333
--- /dev/null
+++ b/sys/x86/xen/xen_nexus.c
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/module.h>
+#include <sys/sysctl.h>
+#include <sys/systm.h>
+#include <sys/smp.h>
+
+#include <machine/nexusvar.h>
+
+#include <xen/xen-os.h>
+
+/*
+ * Xen nexus(4) driver.
+ */
+static int
+nexus_xen_probe(device_t dev)
+{
+	if (!xen_pv_domain())
+		return (ENXIO);
+
+	return (BUS_PROBE_DEFAULT);
+}
+
+static int
+nexus_xen_attach(device_t dev)
+{
+
+	nexus_init_resources();
+	bus_generic_probe(dev);
+
+	/*
+	 * Explicitly add the xenpv device here. Other top level
+	 * Xen devices will attach to this.
+	 */
+	if (BUS_ADD_CHILD(dev, 0, "xenpv", 0) == NULL)
+		panic("xenpv: could not attach");
+	bus_generic_attach(dev);
+	return 0;
+}
+
+static device_method_t nexus_xen_methods[] = {
+	/* Device interface */
+	DEVMETHOD(device_probe,		nexus_xen_probe),
+	DEVMETHOD(device_attach,	nexus_xen_attach),
+
+	{ 0, 0 }
+};
+
+DEFINE_CLASS_1(nexus, nexus_xen_driver, nexus_xen_methods, 1, nexus_driver);
+static devclass_t nexus_devclass;
+
+DRIVER_MODULE(nexus_xen, root, nexus_xen_driver, nexus_devclass, 0, 0);
diff --git a/sys/x86/xen/xenpv.c b/sys/x86/xen/xenpv.c
index 41d674f..7bd810b 100644
--- a/sys/x86/xen/xenpv.c
+++ b/sys/x86/xen/xenpv.c
@@ -90,6 +90,7 @@ static driver_t xenpv_driver = {
 static devclass_t xenpv_devclass;
 
 DRIVER_MODULE(xenpv, nexus, xenpv_driver, xenpv_devclass, 0, 0);
+DRIVER_MODULE(xenpv, xenpci, xenpv_driver, xenpv_devclass, 0, 0);
 
 /*
  * Dummy Xen cpu device
diff --git a/sys/xen/xenstore/xenstore.c b/sys/xen/xenstore/xenstore.c
index d404862..b5cf413 100644
--- a/sys/xen/xenstore/xenstore.c
+++ b/sys/xen/xenstore/xenstore.c
@@ -1261,11 +1261,7 @@ static device_method_t xenstore_methods[] = {
 DEFINE_CLASS_0(xenstore, xenstore_driver, xenstore_methods, 0);
 static devclass_t xenstore_devclass; 
  
-#ifdef XENHVM
-DRIVER_MODULE(xenstore, xenpci, xenstore_driver, xenstore_devclass, 0, 0);
-#else
-DRIVER_MODULE(xenstore, nexus, xenstore_driver, xenstore_devclass, 0, 0);
-#endif
+DRIVER_MODULE(xenstore, xenpv, xenstore_driver, xenstore_devclass, 0, 0);
 
 /*------------------------------- Sysctl Data --------------------------*/
 /* XXX Shouldn't the node be somewhere else? */
-- 
1.7.7.5 (Apple Git-26)

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcM-0002nV-3T; Thu, 02 Jan 2014 15:55:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcK-0002my-2B
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:28 +0000
Received: from [85.158.139.211:25861] by server-12.bemta-5.messagelabs.com id
	D7/44-30017-EEB85C25; Thu, 02 Jan 2014 15:55:26 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1388678124!7367598!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30398 invoked from network); 2 Jan 2014 15:55:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89262655"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:23 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRF-00043n-AO;
	Thu, 02 Jan 2014 15:44:01 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:44 +0100
Message-ID: <1388677433-49525-11-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 10/19] xen: add hook for AP bootstrap memory
	reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This hook will only be implemented for bare metal; Xen doesn't require
any bootstrap code, since APs are started in long mode with paging
enabled.
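The pattern the diff below introduces can be sketched in plain C: the ops
table carries an optional function pointer, and the caller invokes it only
when the platform installed it. The demo_* names here are illustrative
stand-ins, not the real FreeBSD symbols, and the "reserve one page" math is
made up for the example.

```c
#include <stddef.h>

/* Illustrative ops table: a platform such as Xen PV simply leaves the
 * hook NULL (field is zero-initialized when omitted). */
struct demo_init_ops {
	unsigned int (*mp_bootaddress)(unsigned int top_kb);	/* optional */
};

/* Bare-metal hook: carve a hole below top_kb for the AP trampoline and
 * return the new top of usable memory, in KB (illustrative math). */
static unsigned int
demo_native_bootaddress(unsigned int top_kb)
{
	return (top_kb - 4);		/* reserve one 4 KB page */
}

/* Mirrors the getmemsize() change: call the hook only when installed. */
unsigned int
demo_reserve_ap_memory(const struct demo_init_ops *ops, unsigned int top_kb)
{
	if (ops->mp_bootaddress != NULL)
		top_kb = ops->mp_bootaddress(top_kb);
	return (top_kb);
}
```

With the hook left NULL the memory map is returned untouched, which is
exactly why Xen PV can skip implementing it.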
---
 sys/amd64/amd64/machdep.c   |    6 +++++-
 sys/amd64/include/sysarch.h |    1 +
 2 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index f6eef50..babf16d 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -178,6 +178,9 @@ struct init_ops init_ops = {
 	.early_delay_init =	i8254_init,
 	.early_delay =		i8254_delay,
 	.parse_memmap =		native_parse_memmap,
+#ifdef SMP
+	.mp_bootaddress =	mp_bootaddress,
+#endif
 };
 
 /*
@@ -1490,7 +1493,8 @@ getmemsize(caddr_t kmdp, u_int64_t first)
 
 #ifdef SMP
 	/* make hole for AP bootstrap code */
-	physmap[1] = mp_bootaddress(physmap[1] / 1024);
+	if (init_ops.mp_bootaddress)
+		physmap[1] = init_ops.mp_bootaddress(physmap[1] / 1024);
 #endif
 
 	/*
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 084223e..7696064 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -16,6 +16,7 @@ struct init_ops {
 	void	(*early_delay_init)(void);
 	void	(*early_delay)(int);
 	void	(*parse_memmap)(caddr_t, vm_paddr_t *, int *);
+	u_int	(*mp_bootaddress)(u_int);
 };
 
 extern struct init_ops init_ops;
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcN-0002oB-SN; Thu, 02 Jan 2014 15:55:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcM-0002nY-KC
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:30 +0000
Received: from [85.158.137.68:19393] by server-10.bemta-3.messagelabs.com id
	BA/30-23989-1FB85C25; Thu, 02 Jan 2014 15:55:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15151 invoked from network); 2 Jan 2014 15:55:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074550"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:27 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRH-00043n-2S;
	Thu, 02 Jan 2014 15:44:03 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:47 +0100
Message-ID: <1388677433-49525-14-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 13/19] xen: introduce flag to disable the
	local apic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

PVH guests don't have an emulated local APIC, so introduce a run-time
flag (lapic_valid) that gates all local APIC accesses.
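The diff below replaces compile-time "#ifdef XEN" guards with a run-time
flag. A minimal C sketch of that gating, with demo_* names standing in for
the real lapic_valid / init_secondary() symbols:

```c
#include <stdbool.h>

/* Defaults to true (as lapic_valid does in local_apic.c); the Xen PV
 * entry points clear it because there is no LAPIC to program. */
bool demo_lapic_valid = true;

static int demo_lapic_writes;		/* counts register accesses */

static void
demo_lapic_setup(void)
{
	demo_lapic_writes++;		/* stands in for real LAPIC setup */
}

/* AP startup path, mirroring init_secondary(): every local APIC access
 * is gated by the run-time flag instead of a compile-time #ifdef. */
void
demo_init_secondary(void)
{
	if (demo_lapic_valid)
		demo_lapic_setup();
}
```

The run-time check lets one kernel binary boot both on bare metal and as a
PVH guest, which a compile-time #ifdef cannot do.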
---
 sys/amd64/amd64/mp_machdep.c |   10 ++++++----
 sys/amd64/include/apicvar.h  |    1 +
 sys/i386/include/apicvar.h   |    1 +
 sys/i386/xen/xen_machdep.c   |    2 ++
 sys/x86/x86/local_apic.c     |    8 +++++---
 sys/x86/xen/pv.c             |    3 +++
 6 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
index 0738a37..fd6eace 100644
--- a/sys/amd64/amd64/mp_machdep.c
+++ b/sys/amd64/amd64/mp_machdep.c
@@ -707,7 +707,8 @@ init_secondary(void)
 	wrmsr(MSR_SF_MASK, PSL_NT|PSL_T|PSL_I|PSL_C|PSL_D);
 
 	/* Disable local APIC just to be sure. */
-	lapic_disable();
+	if (lapic_valid)
+		lapic_disable();
 
 	/* signal our startup to the BSP. */
 	mp_naps++;
@@ -733,7 +734,7 @@ init_secondary(void)
 
 	/* A quick check from sanity claus */
 	cpuid = PCPU_GET(cpuid);
-	if (PCPU_GET(apic_id) != lapic_id()) {
+	if (lapic_valid && (PCPU_GET(apic_id) != lapic_id())) {
 		printf("SMP: cpuid = %d\n", cpuid);
 		printf("SMP: actual apic_id = %d\n", lapic_id());
 		printf("SMP: correct apic_id = %d\n", PCPU_GET(apic_id));
@@ -749,7 +750,8 @@ init_secondary(void)
 	mtx_lock_spin(&ap_boot_mtx);
 
 	/* Init local apic for irq's */
-	lapic_setup(1);
+	if (lapic_valid)
+		lapic_setup(1);
 
 	/* Set memory range attributes for this CPU to match the BSP */
 	mem_range_AP_init();
@@ -764,7 +766,7 @@ init_secondary(void)
 	if (cpu_logical > 1 && PCPU_GET(apic_id) % cpu_logical != 0)
 		CPU_SET(cpuid, &logical_cpus_mask);
 
-	if (bootverbose)
+	if (lapic_valid && bootverbose)
 		lapic_dump("AP");
 
 	if (smp_cpus == mp_ncpus) {
diff --git a/sys/amd64/include/apicvar.h b/sys/amd64/include/apicvar.h
index e7423a3..c04a238 100644
--- a/sys/amd64/include/apicvar.h
+++ b/sys/amd64/include/apicvar.h
@@ -169,6 +169,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/include/apicvar.h b/sys/i386/include/apicvar.h
index df99ebe..ea8a3c3 100644
--- a/sys/i386/include/apicvar.h
+++ b/sys/i386/include/apicvar.h
@@ -168,6 +168,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index 09c01f1..25b9cfc 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -59,6 +59,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/intr_machdep.h>
 #include <machine/md_var.h>
 #include <machine/asmacros.h>
+#include <machine/apicvar.h>
 
 
 
@@ -912,6 +913,7 @@ initvalues(start_info_t *startinfo)
 #endif	
 	xen_start_info = startinfo;
 	HYPERVISOR_start_info = startinfo;
+	lapic_valid = false;
 	xen_phys_machine = (xen_pfn_t *)startinfo->mfn_list;
 
 	IdlePTD = (pd_entry_t *)((uint8_t *)startinfo->pt_base + PAGE_SIZE);
diff --git a/sys/x86/x86/local_apic.c b/sys/x86/x86/local_apic.c
index 41bd602..fddf1fb 100644
--- a/sys/x86/x86/local_apic.c
+++ b/sys/x86/x86/local_apic.c
@@ -156,6 +156,7 @@ extern inthand_t IDTVEC(rsvd);
 
 volatile lapic_t *lapic;
 vm_paddr_t lapic_paddr;
+bool lapic_valid = true;
 static u_long lapic_timer_divisor;
 static struct eventtimer lapic_et;
 
@@ -1367,9 +1368,10 @@ apic_setup_io(void *dummy __unused)
 	if (retval != 0)
 		printf("%s: Failed to setup I/O APICs: returned %d\n",
 		    best_enum->apic_name, retval);
-#ifdef XEN
-	return;
-#endif
+
+	if (!lapic_valid)
+		return;
+
 	/*
 	 * Finish setting up the local APIC on the BSP once we know how to
 	 * properly program the LINT pins.
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 22fd6a6..6ea1e2a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -53,6 +53,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/clock.h>
 #include <machine/pc/bios.h>
 #include <machine/smp.h>
+#include <machine/apicvar.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -315,4 +316,6 @@ xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
 	init_ops = xen_init_ops;
+	/* Disable lapic */
+	lapic_valid = false;
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcX-0002se-0v; Thu, 02 Jan 2014 15:55:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcV-0002qr-UG
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:40 +0000
Received: from [85.158.143.35:52081] by server-2.bemta-4.messagelabs.com id
	13/45-11386-BFB85C25; Thu, 02 Jan 2014 15:55:39 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6552 invoked from network); 2 Jan 2014 15:55:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074592"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:37 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRF-00043n-TG;
	Thu, 02 Jan 2014 15:44:01 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:45 +0100
Message-ID: <1388677433-49525-12-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 11/19] xen: changes to hvm code in order to
	support PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On PVH we don't need to initialize the shared info page or disable
emulated devices. Also, make sure PV IPIs are set up before starting
the APs.
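The SYSINIT change at the bottom of this diff moves xen_setup_cpus from
SI_SUB_SMP to SI_SUB_SMP-1, so it runs before the handler that starts the
APs. A hypothetical mini-dispatcher showing why the "-1" works: SYSINIT
handlers execute in ascending subsystem order, so a smaller key sorts (and
runs) strictly earlier. All names and the constant are illustrative.

```c
#include <stdlib.h>
#include <string.h>

struct demo_sysinit {
	int subsystem;			/* lower values run earlier */
	void (*func)(char *log);
};

static void demo_setup_ipis(char *log) { strcat(log, "ipi,"); }
static void demo_start_aps(char *log) { strcat(log, "aps,"); }

static int
demo_cmp(const void *a, const void *b)
{
	return (((const struct demo_sysinit *)a)->subsystem -
	    ((const struct demo_sysinit *)b)->subsystem);
}

/* Registering IPI setup at DEMO_SI_SUB_SMP - 1 guarantees it sorts,
 * and therefore runs, before the handler that starts the APs. */
void
demo_boot_order(char *log)
{
	enum { DEMO_SI_SUB_SMP = 2750 };	/* illustrative constant */
	struct demo_sysinit set[] = {
		{ DEMO_SI_SUB_SMP, demo_start_aps },
		{ DEMO_SI_SUB_SMP - 1, demo_setup_ipis },
	};
	int i;

	qsort(set, 2, sizeof(set[0]), demo_cmp);
	for (i = 0; i < 2; i++)
		set[i].func(log);
}
```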
---
 sys/x86/xen/hvm.c |   17 ++++++++++++-----
 1 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index 9a0411e..fb1ed79 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -523,7 +523,7 @@ xen_setup_cpus(void)
 {
 	int i;
 
-	if (!xen_hvm_domain() || !xen_vector_callback_enabled)
+	if (!xen_vector_callback_enabled)
 		return;
 
 #ifdef __amd64__
@@ -712,10 +712,13 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 	}
 
 	xen_vector_callback_enabled = 0;
-	xen_domain_type = XEN_HVM_DOMAIN;
-	xen_hvm_init_shared_info_page();
 	xen_hvm_set_callback(NULL);
-	xen_hvm_disable_emulated_devices();
+
+	if (!xen_pv_domain()) {
+		xen_domain_type = XEN_HVM_DOMAIN;
+		xen_hvm_init_shared_info_page();
+		xen_hvm_disable_emulated_devices();
+	}
 } 
 
 void
@@ -746,6 +749,9 @@ xen_set_vcpu_id(void)
 	struct pcpu *pc;
 	int i;
 
+	if (!xen_hvm_domain())
+		return;
+
 	/* Set vcpu_id to acpi_id */
 	CPU_FOREACH(i) {
 		pc = pcpu_find(i);
@@ -789,7 +795,8 @@ xen_hvm_cpu_init(void)
 
 SYSINIT(xen_hvm_init, SI_SUB_HYPERVISOR, SI_ORDER_FIRST, xen_hvm_sysinit, NULL);
 #ifdef SMP
-SYSINIT(xen_setup_cpus, SI_SUB_SMP, SI_ORDER_FIRST, xen_setup_cpus, NULL);
+/* We need to setup IPIs before APs are started */
+SYSINIT(xen_setup_cpus, SI_SUB_SMP-1, SI_ORDER_FIRST, xen_setup_cpus, NULL);
 #endif
 SYSINIT(xen_hvm_cpu_init, SI_SUB_INTR, SI_ORDER_FIRST, xen_hvm_cpu_init, NULL);
 SYSINIT(xen_set_vcpu_id, SI_SUB_CPU, SI_ORDER_ANY, xen_set_vcpu_id, NULL);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcV-0002qx-NW; Thu, 02 Jan 2014 15:55:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcT-0002pt-W9
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:38 +0000
Received: from [85.158.137.68:28532] by server-16.bemta-3.messagelabs.com id
	39/F4-26128-9FB85C25; Thu, 02 Jan 2014 15:55:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388678133!3268170!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14178 invoked from network); 2 Jan 2014 15:55:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89262682"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:32 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRH-00043n-L9;
	Thu, 02 Jan 2014 15:44:03 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:48 +0100
Message-ID: <1388677433-49525-15-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a dummy
	pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since Xen PVH guests don't have ACPI, we need to create a dummy
bus so top level Xen devices can attach to it (instead of
attaching directly to the nexus) and a pvcpu device that will be used
to fill the pcpu->pc_device field.
---
 sys/conf/files.amd64 |    1 +
 sys/conf/files.i386  |    1 +
 sys/x86/xen/xenpv.c  |  155 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 157 insertions(+), 0 deletions(-)
 create mode 100644 sys/x86/xen/xenpv.c

diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index a3491da..d7c98cc 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -570,3 +570,4 @@ x86/xen/hvm.c			optional	xenhvm
 x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
 x86/xen/pvcpu_enum.c		optional	xenhvm
+x86/xen/xenpv.c			optional	xenhvm
diff --git a/sys/conf/files.i386 b/sys/conf/files.i386
index 790296d..81142e3 100644
--- a/sys/conf/files.i386
+++ b/sys/conf/files.i386
@@ -603,3 +603,4 @@ x86/x86/tsc.c			standard
 x86/x86/delay.c			standard
 x86/xen/hvm.c			optional xenhvm
 x86/xen/xen_intr.c		optional xen | xenhvm
+x86/xen/xenpv.c			optional xen | xenhvm
diff --git a/sys/x86/xen/xenpv.c b/sys/x86/xen/xenpv.c
new file mode 100644
index 0000000..41d674f
--- /dev/null
+++ b/sys/x86/xen/xenpv.c
@@ -0,0 +1,155 @@
+/*
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED T
TywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFORCBGSVRO
RVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJTiBOTyBF
VkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAqIEZPUiBB
TlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBD
T05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywg
UFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExPU1MgT0Yg
VVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisgKiBIT1dF
VkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09O
VFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVHTElHRU5D
RSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBVU0UgT0Yg
VEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRgorICog
U1VDSCBEQU1BR0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5oPgorX19GQlNESUQoIiRG
cmVlQlNEJCIpOworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5jbHVkZSA8c3lzL3N5c3Rt
Lmg+CisjaW5jbHVkZSA8c3lzL2J1cy5oPgorI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNs
dWRlIDxzeXMvbW9kdWxlLmg+CisjaW5jbHVkZSA8c3lzL3BjcHUuaD4KKyNpbmNsdWRlIDxzeXMv
c21wLmg+CisKKyNpbmNsdWRlIDx4ZW4veGVuLW9zLmg+CisKK3N0YXRpYyBpbnQKK3hlbnB2X3By
b2JlKGRldmljZV90IGRldikKK3sKKworCWRldmljZV9zZXRfZGVzYyhkZXYsICJYZW4gUFYgYnVz
Iik7CisJZGV2aWNlX3F1aWV0KGRldik7CisJcmV0dXJuICgwKTsKK30KKworc3RhdGljIGludAor
eGVucHZfYXR0YWNoKGRldmljZV90IGRldikKK3sKKwlkZXZpY2VfdCBjaGlsZDsKKworCS8qCisJ
ICogTGV0IG91ciBjaGlsZCBkcml2ZXJzIGlkZW50aWZ5IGFueSBjaGlsZCBkZXZpY2VzIHRoYXQg
dGhleQorCSAqIGNhbiBmaW5kLiAgT25jZSB0aGF0IGlzIGRvbmUgYXR0YWNoIGFueSBkZXZpY2Vz
IHRoYXQgd2UKKwkgKiBmb3VuZC4KKwkgKi8KKwlidXNfZ2VuZXJpY19wcm9iZShkZXYpOworCWJ1
c19nZW5lcmljX2F0dGFjaChkZXYpOworCisJaWYgKCFkZXZjbGFzc19nZXRfZGV2aWNlKGRldmNs
YXNzX2ZpbmQoImlzYSIpLCAwKSkgeworCQljaGlsZCA9IEJVU19BRERfQ0hJTEQoZGV2LCAwLCAi
aXNhIiwgMCk7CisJCWlmIChjaGlsZCA9PSBOVUxMKQorCQkJcGFuaWMoInhlbnB2X2F0dGFjaCBp
c2EiKTsKKwkJZGV2aWNlX3Byb2JlX2FuZF9hdHRhY2goY2hpbGQpOworCX0KKworCXJldHVybiAw
OworfQorCitzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnB2X21ldGhvZHNbXSA9IHsKKwkvKiBE
ZXZpY2UgaW50ZXJmYWNlICovCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwJCXhlbnB2X3Byb2Jl
KSwKKwlERVZNRVRIT0QoZGV2aWNlX2F0dGFjaCwJeGVucHZfYXR0YWNoKSwKKwlERVZNRVRIT0Qo
ZGV2aWNlX3N1c3BlbmQsCWJ1c19nZW5lcmljX3N1c3BlbmQpLAorCURFVk1FVEhPRChkZXZpY2Vf
cmVzdW1lLAlidXNfZ2VuZXJpY19yZXN1bWUpLAorCisJLyogQnVzIGludGVyZmFjZSAqLworCURF
Vk1FVEhPRChidXNfYWRkX2NoaWxkLAlidXNfZ2VuZXJpY19hZGRfY2hpbGQpLAorCisJREVWTUVU
SE9EX0VORAorfTsKKworc3RhdGljIGRyaXZlcl90IHhlbnB2X2RyaXZlciA9IHsKKwkieGVucHYi
LAorCXhlbnB2X21ldGhvZHMsCisJMSwJCQkvKiBubyBzb2Z0YyAqLworfTsKK3N0YXRpYyBkZXZj
bGFzc190IHhlbnB2X2RldmNsYXNzOworCitEUklWRVJfTU9EVUxFKHhlbnB2LCBuZXh1cywgeGVu
cHZfZHJpdmVyLCB4ZW5wdl9kZXZjbGFzcywgMCwgMCk7CisKKy8qCisgKiBEdW1teSBYZW4gY3B1
IGRldmljZQorICoKKyAqIFNpbmNlIHRoZXJlJ3Mgbm8gQUNQSSBvbiBQVkggZ3Vlc3RzLCB3ZSBu
ZWVkIHRvIGNyZWF0ZSBhIGR1bW15CisgKiBDUFUgZGV2aWNlIGluIG9yZGVyIHRvIGZpbGwgdGhl
IHBjcHUtPnBjX2RldmljZSBmaWVsZC4KKyAqLworCitzdGF0aWMgdm9pZAoreGVucHZjcHVfaWRl
bnRpZnkoZHJpdmVyX3QgKmRyaXZlciwgZGV2aWNlX3QgcGFyZW50KQoreworCWRldmljZV90IGNo
aWxkOworCWludCBpOworCisJLyogT25seSBhdHRhY2ggdG8gUFYgZ3Vlc3RzLCBIVk0gZ3Vlc3Rz
IHVzZSB0aGUgQUNQSSBDUFUgZGV2aWNlcyAqLworCWlmICgheGVuX3B2X2RvbWFpbigpKQorCQly
ZXR1cm47CisKKwlDUFVfRk9SRUFDSChpKSB7CisJCWNoaWxkID0gQlVTX0FERF9DSElMRChwYXJl
bnQsIDAsICJwdmNwdSIsIGkpOworCQlpZiAoY2hpbGQgPT0gTlVMTCkKKwkJCXBhbmljKCJ4ZW5w
dmNwdV9pZGVudGlmeSBhZGQgcHZjcHUiKTsKKwl9Cit9CisKK3N0YXRpYyBpbnQKK3hlbnB2Y3B1
X3Byb2JlKGRldmljZV90IGRldikKK3sKKworCWRldmljZV9zZXRfZGVzYyhkZXYsICJYZW4gUFYg
Q1BVIik7CisJcmV0dXJuICgwKTsKK30KKworc3RhdGljIGludAoreGVucHZjcHVfYXR0YWNoKGRl
dmljZV90IGRldikKK3sKKwlzdHJ1Y3QgcGNwdSAqcGM7CisJaW50IGNwdTsKKworCWNwdSA9IGRl
dmljZV9nZXRfdW5pdChkZXYpOworCXBjID0gcGNwdV9maW5kKGNwdSk7CisJcGMtPnBjX2Rldmlj
ZSA9IGRldjsKKwlyZXR1cm4gKDApOworfQorCitzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnB2
Y3B1X21ldGhvZHNbXSA9IHsKKwlERVZNRVRIT0QoZGV2aWNlX2lkZW50aWZ5LCB4ZW5wdmNwdV9p
ZGVudGlmeSksCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwgeGVucHZjcHVfcHJvYmUpLAorCURF
Vk1FVEhPRChkZXZpY2VfYXR0YWNoLCB4ZW5wdmNwdV9hdHRhY2gpLAorCisJREVWTUVUSE9EX0VO
RAorfTsKKworc3RhdGljIGRyaXZlcl90IHhlbnB2Y3B1X2RyaXZlciA9IHsKKwkicHZjcHUiLAor
CXhlbnB2Y3B1X21ldGhvZHMsCisJMCwKK307CisKK2RldmNsYXNzX3QgeGVucHZjcHVfZGV2Y2xh
c3M7CisKK0RSSVZFUl9NT0RVTEUoeGVucHZjcHUsIHhlbnB2LCB4ZW5wdmNwdV9kcml2ZXIsIHhl
bnB2Y3B1X2RldmNsYXNzLCAwLCAwKTsKK01PRFVMRV9ERVBFTkQoeGVucHZjcHUsIHhlbnB2LCAx
LCAxLCAxKTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcG-0002mQ-4J; Thu, 02 Jan 2014 15:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcE-0002mG-Ux
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:23 +0000
Received: from [85.158.137.68:18925] by server-7.bemta-3.messagelabs.com id
	4C/2A-27599-AEB85C25; Thu, 02 Jan 2014 15:55:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14410 invoked from network); 2 Jan 2014 15:55:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074504"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:18 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRI-00043n-Pz;
	Thu, 02 Jan 2014 15:44:04 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:50 +0100
Message-ID: <1388677433-49525-17-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 16/19] xen: add shutdown hook for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the PV shutdown hook to PVH.
---
 sys/dev/xen/control/control.c |   37 ++++++++++++++++++-------------------
 1 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/sys/dev/xen/control/control.c b/sys/dev/xen/control/control.c
index bc0609d..78894ba 100644
--- a/sys/dev/xen/control/control.c
+++ b/sys/dev/xen/control/control.c
@@ -316,21 +316,6 @@ xctrl_suspend()
 	EVENTHANDLER_INVOKE(power_resume);
 }
 
-static void
-xen_pv_shutdown_final(void *arg, int howto)
-{
-	/*
-	 * Inform the hypervisor that shutdown is complete.
-	 * This is not necessary in HVM domains since Xen
-	 * emulates ACPI in that mode and FreeBSD's ACPI
-	 * support will request this transition.
-	 */
-	if (howto & (RB_HALT | RB_POWEROFF))
-		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
-	else
-		HYPERVISOR_shutdown(SHUTDOWN_reboot);
-}
-
 #else
 
 /* HVM mode suspension. */
@@ -440,6 +425,21 @@ xctrl_crash()
 	panic("Xen directed crash");
 }
 
+static void
+xen_pv_shutdown_final(void *arg, int howto)
+{
+	/*
+	 * Inform the hypervisor that shutdown is complete.
+	 * This is not necessary in HVM domains since Xen
+	 * emulates ACPI in that mode and FreeBSD's ACPI
+	 * support will request this transition.
+	 */
+	if (howto & (RB_HALT | RB_POWEROFF))
+		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
+	else
+		HYPERVISOR_shutdown(SHUTDOWN_reboot);
+}
+
 /*------------------------------ Event Reception -----------------------------*/
 static void
 xctrl_on_watch_event(struct xs_watch *watch, const char **vec, unsigned int len)
@@ -522,10 +522,9 @@ xctrl_attach(device_t dev)
 	xctrl->xctrl_watch.callback_data = (uintptr_t)xctrl;
 	xs_register_watch(&xctrl->xctrl_watch);
 
-#ifndef XENHVM
-	EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
-			      SHUTDOWN_PRI_LAST);
-#endif
+	if (xen_pv_domain())
+		EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
+		                      SHUTDOWN_PRI_LAST);
 
 	return (0);
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcG-0002mX-Gp; Thu, 02 Jan 2014 15:55:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcF-0002mH-HS
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:23 +0000
Received: from [85.158.137.68:25031] by server-12.bemta-3.messagelabs.com id
	46/FA-20055-AEB85C25; Thu, 02 Jan 2014 15:55:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388678119!6954631!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14481 invoked from network); 2 Jan 2014 15:55:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074512"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRK-00043n-HW;
	Thu, 02 Jan 2014 15:44:06 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:53 +0100
Message-ID: <1388677433-49525-20-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 19/19] isa: allow ISA bus to attach to xenpv
	device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/x86/isa/isa.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
index 1a57137..9287ff2 100644
--- a/sys/x86/isa/isa.c
+++ b/sys/x86/isa/isa.c
@@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child, int type, int rid,
  * On this platform, isa can also attach to the legacy bus.
  */
 DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
+#ifdef XENHVM
+DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
+#endif
-- 
1.7.7.5 (Apple Git-26)



From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vykca-0002vr-Gk; Thu, 02 Jan 2014 15:55:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcY-0002u5-Ll
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:42 +0000
Received: from [85.158.143.35:21601] by server-3.bemta-4.messagelabs.com id
	3B/07-32360-EFB85C25; Thu, 02 Jan 2014 15:55:42 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7140 invoked from network); 2 Jan 2014 15:55:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074602"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:39 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRG-00043n-GD;
	Thu, 02 Jan 2014 15:44:02 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:46 +0100
Message-ID: <1388677433-49525-13-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_12/19=5D_xen=3A_add_a_hook_to_p?=
	=?utf-8?q?erform_AP_startup?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QVAgc3RhcnR1cCBvbiBQVkggZm9sbG93cyB0aGUgUFYgbWV0aG9kLCBzbyB3ZSBuZWVkIHRvIGFk
ZCBhIGhvb2sgaW4Kb3JkZXIgdG8gZGl2ZXJnZSBmcm9tIGJhcmUgbWV0YWwuCi0tLQogc3lzL2Ft
ZDY0L2FtZDY0L21wX21hY2hkZXAuYyB8ICAgMTQgKysrLS0tCiBzeXMvYW1kNjQvaW5jbHVkZS9j
cHUuaCAgICAgIHwgICAgMSArCiBzeXMvYW1kNjQvaW5jbHVkZS9zbXAuaCAgICAgIHwgICAgMSAr
CiBzeXMveDg2L3hlbi9odm0uYyAgICAgICAgICAgIHwgICAxMiArKysrKy0KIHN5cy94ODYveGVu
L3B2LmMgICAgICAgICAgICAgfCAgIDg1ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKwogc3lzL3hlbi9wdi5oICAgICAgICAgICAgICAgICB8ICAgMzIgKysrKysrKysr
KysrKysrKwogNiBmaWxlcyBjaGFuZ2VkLCAxMzcgaW5zZXJ0aW9ucygrKSwgOCBkZWxldGlvbnMo
LSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMveGVuL3B2LmgKCmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQvbXBfbWFjaGRlcC5jIGIvc3lzL2FtZDY0L2FtZDY0L21wX21hY2hkZXAuYwppbmRl
eCA0ZWY0YjNkLi4wNzM4YTM3IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbXBfbWFjaGRl
cC5jCisrKyBiL3N5cy9hbWQ2NC9hbWQ2NC9tcF9tYWNoZGVwLmMKQEAgLTkwLDcgKzkwLDcgQEAg
ZXh0ZXJuICBzdHJ1Y3QgcGNwdSBfX3BjcHVbXTsKIAogLyogQVAgdXNlcyB0aGlzIGR1cmluZyBi
b290c3RyYXAuICBEbyBub3Qgc3RhdGljaXplLiAgKi8KIGNoYXIgKmJvb3RTVEs7Ci1zdGF0aWMg
aW50IGJvb3RBUDsKK2ludCBib290QVA7CiAKIC8qIEZyZWUgdGhlc2UgYWZ0ZXIgdXNlICovCiB2
b2lkICpib290c3RhY2tzW01BWENQVV07CkBAIC0xMjQsNyArMTI0LDggQEAgc3RhdGljIHVfbG9u
ZyAqaXBpX2hhcmRjbG9ja19jb3VudHNbTUFYQ1BVXTsKIAogLyogRGVmYXVsdCBjcHVfb3BzIGlt
cGxlbWVudGF0aW9uLiAqLwogc3RydWN0IGNwdV9vcHMgY3B1X29wcyA9IHsKLQkuaXBpX3ZlY3Rv
cmVkID0gbGFwaWNfaXBpX3ZlY3RvcmVkCisJLmlwaV92ZWN0b3JlZCA9IGxhcGljX2lwaV92ZWN0
b3JlZCwKKwkuc3RhcnRfYWxsX2FwcyA9IG5hdGl2ZV9zdGFydF9hbGxfYXBzLAogfTsKIAogZXh0
ZXJuIGludGhhbmRfdCBJRFRWRUMoZmFzdF9zeXNjYWxsKSwgSURUVkVDKGZhc3Rfc3lzY2FsbDMy
KTsKQEAgLTEzOCw3ICsxMzksNyBAQCBleHRlcm4gaW50IHBtYXBfcGNpZF9lbmFibGVkOwogc3Rh
dGljIHZvbGF0aWxlIGNwdXNldF90IGlwaV9ubWlfcGVuZGluZzsKIAogLyogdXNlZCB0byBob2xk
IHRoZSBBUCdzIHVudGlsIHdlIGFyZSByZWFkeSB0byByZWxlYXNlIHRoZW0gKi8KLXN0YXRpYyBz
dHJ1Y3QgbXR4IGFwX2Jvb3RfbXR4Oworc3RydWN0IG10eCBhcF9ib290X210eDsKIAogLyogU2V0
IHRvIDEgb25jZSB3ZSdyZSByZWFkeSB0byBsZXQgdGhlIEFQcyBvdXQgb2YgdGhlIHBlbi4gKi8K
IHN0YXRpYyB2b2xhdGlsZSBpbnQgYXBzX3JlYWR5ID0gMDsKQEAgLTE2NSw3ICsxNjYsNiBAQCBz
dGF0aWMgaW50IGNwdV9jb3JlczsJCQkvKiBjb3JlcyBwZXIgcGFja2FnZSAqLwogCiBzdGF0aWMg
dm9pZAlhc3NpZ25fY3B1X2lkcyh2b2lkKTsKIHN0YXRpYyB2b2lkCXNldF9pbnRlcnJ1cHRfYXBp
Y19pZHModm9pZCk7Ci1zdGF0aWMgaW50CXN0YXJ0X2FsbF9hcHModm9pZCk7CiBzdGF0aWMgaW50
CXN0YXJ0X2FwKGludCBhcGljX2lkKTsKIHN0YXRpYyB2b2lkCXJlbGVhc2VfYXBzKHZvaWQgKmR1
bW15KTsKIApAQCAtNTY5LDcgKzU2OSw3IEBAIGNwdV9tcF9zdGFydCh2b2lkKQogCWFzc2lnbl9j
cHVfaWRzKCk7CiAKIAkvKiBTdGFydCBlYWNoIEFwcGxpY2F0aW9uIFByb2Nlc3NvciAqLwotCXN0
YXJ0X2FsbF9hcHMoKTsKKwljcHVfb3BzLnN0YXJ0X2FsbF9hcHMoKTsKIAogCXNldF9pbnRlcnJ1
cHRfYXBpY19pZHMoKTsKIH0KQEAgLTkwOCw4ICs5MDgsOCBAQCBhc3NpZ25fY3B1X2lkcyh2b2lk
KQogLyoKICAqIHN0YXJ0IGVhY2ggQVAgaW4gb3VyIGxpc3QKICAqLwotc3RhdGljIGludAotc3Rh
cnRfYWxsX2Fwcyh2b2lkKQoraW50CituYXRpdmVfc3RhcnRfYWxsX2Fwcyh2b2lkKQogewogCXZt
X29mZnNldF90IHZhID0gYm9vdF9hZGRyZXNzICsgS0VSTkJBU0U7CiAJdV9pbnQ2NF90ICpwdDQs
ICpwdDMsICpwdDI7CmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVkZS9jcHUuaCBiL3N5cy9h
bWQ2NC9pbmNsdWRlL2NwdS5oCmluZGV4IDNkOWZmNTMxLi5lZDlmMWRiIDEwMDY0NAotLS0gYS9z
eXMvYW1kNjQvaW5jbHVkZS9jcHUuaAorKysgYi9zeXMvYW1kNjQvaW5jbHVkZS9jcHUuaApAQCAt
NjQsNiArNjQsNyBAQCBzdHJ1Y3QgY3B1X29wcyB7CiAJdm9pZCAoKmNwdV9pbml0KSh2b2lkKTsK
IAl2b2lkICgqY3B1X3Jlc3VtZSkodm9pZCk7CiAJdm9pZCAoKmlwaV92ZWN0b3JlZCkodV9pbnQs
IGludCk7CisJaW50ICAoKnN0YXJ0X2FsbF9hcHMpKHZvaWQpOwogfTsKIAogZXh0ZXJuIHN0cnVj
dAljcHVfb3BzIGNwdV9vcHM7CmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVkZS9zbXAuaCBi
L3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5oCmluZGV4IGQxYjM2NmIuLjE1YmM4MjMgMTAwNjQ0Ci0t
LSBhL3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5oCisrKyBiL3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5o
CkBAIC03OSw2ICs3OSw3IEBAIHZvaWQJc21wX21hc2tlZF9pbnZscGdfcmFuZ2UoY3B1c2V0X3Qg
bWFzaywgc3RydWN0IHBtYXAgKnBtYXAsCiAJICAgIHZtX29mZnNldF90IHN0YXJ0dmEsIHZtX29m
ZnNldF90IGVuZHZhKTsKIHZvaWQJc21wX2ludmx0bGIoc3RydWN0IHBtYXAgKnBtYXApOwogdm9p
ZAlzbXBfbWFza2VkX2ludmx0bGIoY3B1c2V0X3QgbWFzaywgc3RydWN0IHBtYXAgKnBtYXApOwor
aW50CW5hdGl2ZV9zdGFydF9hbGxfYXBzKHZvaWQpOwogCiAjZW5kaWYgLyogIUxPQ09SRSAqLwog
I2VuZGlmIC8qIFNNUCAqLwpkaWZmIC0tZ2l0IGEvc3lzL3g4Ni94ZW4vaHZtLmMgYi9zeXMveDg2
L3hlbi9odm0uYwppbmRleCBmYjFlZDc5Li40OWNhYWNmIDEwMDY0NAotLS0gYS9zeXMveDg2L3hl
bi9odm0uYworKysgYi9zeXMveDg2L3hlbi9odm0uYwpAQCAtNTMsNiArNTMsOSBAQCBfX0ZCU0RJ
RCgiJEZyZWVCU0QkIik7CiAjaW5jbHVkZSA8eGVuL2h5cGVydmlzb3IuaD4KICNpbmNsdWRlIDx4
ZW4vaHZtLmg+CiAjaW5jbHVkZSA8eGVuL3hlbl9pbnRyLmg+CisjaWZkZWYgX19hbWQ2NF9fCisj
aW5jbHVkZSA8eGVuL3B2Lmg+CisjZW5kaWYKIAogI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvaHZt
L3BhcmFtcy5oPgogI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvdmNwdS5oPgpAQCAtMTE5LDcgKzEy
MiwxMCBAQCBlbnVtIHhlbl9kb21haW5fdHlwZSB4ZW5fZG9tYWluX3R5cGUgPSBYRU5fTkFUSVZF
Owogc3RydWN0IGNwdV9vcHMgeGVuX2h2bV9jcHVfb3BzID0gewogCS5pcGlfdmVjdG9yZWQJPSBs
YXBpY19pcGlfdmVjdG9yZWQsCiAJLmNwdV9pbml0CT0geGVuX2h2bV9jcHVfaW5pdCwKLQkuY3B1
X3Jlc3VtZQk9IHhlbl9odm1fY3B1X3Jlc3VtZQorCS5jcHVfcmVzdW1lCT0geGVuX2h2bV9jcHVf
cmVzdW1lLAorI2lmZGVmIF9fYW1kNjRfXworCS5zdGFydF9hbGxfYXBzID0gbmF0aXZlX3N0YXJ0
X2FsbF9hcHMsCisjZW5kaWYKIH07CiAKIHN0YXRpYyBNQUxMT0NfREVGSU5FKE1fWEVOSFZNLCAi
eGVuX2h2bSIsICJYZW4gSFZNIFBWIFN1cHBvcnQiKTsKQEAgLTY5OCw2ICs3MDQsMTAgQEAgeGVu
X2h2bV9pbml0KGVudW0geGVuX2h2bV9pbml0X3R5cGUgaW5pdF90eXBlKQogCQlzZXR1cF94ZW5f
ZmVhdHVyZXMoKTsKIAkJY3B1X29wcyA9IHhlbl9odm1fY3B1X29wczsKICAJCXZtX2d1ZXN0ID0g
Vk1fR1VFU1RfWEVOOworI2lmZGVmIF9fYW1kNjRfXworCQlpZiAoeGVuX3B2X2RvbWFpbigpKQor
CQkJY3B1X29wcy5zdGFydF9hbGxfYXBzID0geGVuX3B2X3N0YXJ0X2FsbF9hcHM7CisjZW5kaWYK
IAkJYnJlYWs7CiAJY2FzZSBYRU5fSFZNX0lOSVRfUkVTVU1FOgogCQlpZiAoZXJyb3IgIT0gMCkK
ZGlmZiAtLWdpdCBhL3N5cy94ODYveGVuL3B2LmMgYi9zeXMveDg2L3hlbi9wdi5jCmluZGV4IGQx
MWJjMWEuLjIyZmQ2YTYgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVuL3B2LmMKKysrIGIvc3lzL3g4
Ni94ZW4vcHYuYwpAQCAtMzQsOCArMzQsMTEgQEAgX19GQlNESUQoIiRGcmVlQlNEJCIpOwogI2lu
Y2x1ZGUgPHN5cy9rZXJuZWwuaD4KICNpbmNsdWRlIDxzeXMvcmVib290Lmg+CiAjaW5jbHVkZSA8
c3lzL3N5c3RtLmg+CisjaW5jbHVkZSA8c3lzL21hbGxvYy5oPgogI2luY2x1ZGUgPHN5cy9sb2Nr
Lmg+CiAjaW5jbHVkZSA8c3lzL3J3bG9jay5oPgorI2luY2x1ZGUgPHN5cy9tdXRleC5oPgorI2lu
Y2x1ZGUgPHN5cy9zbXAuaD4KIAogI2luY2x1ZGUgPHZtL3ZtLmg+CiAjaW5jbHVkZSA8dm0vdm1f
ZXh0ZXJuLmg+CkBAIC00OSw5ICs1MiwxMyBAQCBfX0ZCU0RJRCgiJEZyZWVCU0QkIik7CiAjaW5j
bHVkZSA8bWFjaGluZS9zeXNhcmNoLmg+CiAjaW5jbHVkZSA8bWFjaGluZS9jbG9jay5oPgogI2lu
Y2x1ZGUgPG1hY2hpbmUvcGMvYmlvcy5oPgorI2luY2x1ZGUgPG1hY2hpbmUvc21wLmg+CiAKICNp
bmNsdWRlIDx4ZW4veGVuLW9zLmg+CiAjaW5jbHVkZSA8eGVuL2h5cGVydmlzb3IuaD4KKyNpbmNs
dWRlIDx4ZW4vcHYuaD4KKworI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvdmNwdS5oPgogCiAvKiBO
YXRpdmUgaW5pdGlhbCBmdW5jdGlvbiAqLwogZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJfdGltZSh1
X2ludDY0X3QsIHVfaW50NjRfdCk7CkBAIC02NSw2ICs3MiwxNSBAQCBzdGF0aWMgY2FkZHJfdCB4
ZW5fcHZfcGFyc2VfcHJlbG9hZF9kYXRhKHVfaW50NjRfdCk7CiBzdGF0aWMgdm9pZCB4ZW5fcHZf
cGFyc2VfbWVtbWFwKGNhZGRyX3QsIHZtX3BhZGRyX3QgKiwgaW50ICopOwogCiBzdGF0aWMgdm9p
ZCB4ZW5fcHZfc2V0X2luaXRfb3BzKHZvaWQpOworLyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tIEV4dGVybiBEZWNsYXJhdGlvbnMgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KKy8q
IFZhcmlhYmxlcyB1c2VkIGJ5IGFtZDY0IG1wX21hY2hkZXAgdG8gc3RhcnQgQVBzICovCitleHRl
cm4gc3RydWN0IG10eCBhcF9ib290X210eDsKK2V4dGVybiB2b2lkICpib290c3RhY2tzW107Citl
eHRlcm4gY2hhciAqZG91YmxlZmF1bHRfc3RhY2s7CitleHRlcm4gY2hhciAqbm1pX3N0YWNrOwor
ZXh0ZXJuIHZvaWQgKmRwY3B1OworZXh0ZXJuIGludCBib290QVA7CitleHRlcm4gY2hhciAqYm9v
dFNUSzsKIAogLyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBHbG9iYWwgRGF0YSAt
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KIC8qIFhlbiBpbml0X29wcyBpbXBsZW1l
bnRhdGlvbi4gKi8KQEAgLTE2OCw2ICsxODQsNzUgQEAgaGFtbWVyX3RpbWVfeGVuKHN0YXJ0X2lu
Zm9fdCAqc2ksIHVfaW50NjRfdCB4ZW5zdGFjaykKIH0KIAogLyotLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLSBQViBzcGVjaWZpYyAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
Ki8KKworc3RhdGljIGludAorc3RhcnRfeGVuX2FwKGludCBjcHUpCit7CisJc3RydWN0IHZjcHVf
Z3Vlc3RfY29udGV4dCAqY3R4dDsKKwlpbnQgbXMsIGNwdXMgPSBtcF9uYXBzOworCisJY3R4dCA9
IG1hbGxvYyhzaXplb2YoKmN0eHQpLCBNX1RFTVAsIE1fTk9XQUlUIHwgTV9aRVJPKTsKKwlpZiAo
Y3R4dCA9PSBOVUxMKQorCQlwYW5pYygidW5hYmxlIHRvIGFsbG9jYXRlIG1lbW9yeSIpOworCisJ
Y3R4dC0+ZmxhZ3MgPSBWR0NGX0lOX0tFUk5FTDsKKwljdHh0LT51c2VyX3JlZ3MucmlwID0gKHVu
c2lnbmVkIGxvbmcpIGluaXRfc2Vjb25kYXJ5OworCWN0eHQtPnVzZXJfcmVncy5yc3AgPSAodW5z
aWduZWQgbG9uZykgYm9vdFNUSzsKKworCS8qIFNldCB0aGUgQVAgdG8gdXNlIHRoZSBzYW1lIHBh
Z2UgdGFibGVzICovCisJY3R4dC0+Y3RybHJlZ1szXSA9IEtQTUw0cGh5czsKKworCWlmIChIWVBF
UlZJU09SX3ZjcHVfb3AoVkNQVU9QX2luaXRpYWxpc2UsIGNwdSwgY3R4dCkpCisJCXBhbmljKCJ1
bmFibGUgdG8gaW5pdGlhbGl6ZSBBUCMlZFxuIiwgY3B1KTsKKworCWZyZWUoY3R4dCwgTV9URU1Q
KTsKKworCS8qIExhdW5jaCB0aGUgdkNQVSAqLworCWlmIChIWVBFUlZJU09SX3ZjcHVfb3AoVkNQ
VU9QX3VwLCBjcHUsIE5VTEwpKQorCQlwYW5pYygidW5hYmxlIHRvIHN0YXJ0IEFQIyVkXG4iLCBj
cHUpOworCisJLyogV2FpdCB1cCB0byA1IHNlY29uZHMgZm9yIGl0IHRvIHN0YXJ0LiAqLworCWZv
ciAobXMgPSAwOyBtcyA8IDUwMDA7IG1zKyspIHsKKwkJaWYgKG1wX25hcHMgPiBjcHVzKQorCQkJ
cmV0dXJuICgxKTsJLyogcmV0dXJuIFNVQ0NFU1MgKi8KKwkJREVMQVkoMTAwMCk7CisJfQorCisJ
cmV0dXJuICgwKTsKK30KKworaW50Cit4ZW5fcHZfc3RhcnRfYWxsX2Fwcyh2b2lkKQoreworCWlu
dCBjcHU7CisKKwltdHhfaW5pdCgmYXBfYm9vdF9tdHgsICJhcCBib290IiwgTlVMTCwgTVRYX1NQ
SU4pOworCisJZm9yIChjcHUgPSAxOyBjcHUgPCBtcF9uY3B1czsgY3B1KyspIHsKKworCQkvKiBh
bGxvY2F0ZSBhbmQgc2V0IHVwIGFuIGlkbGUgc3RhY2sgZGF0YSBwYWdlICovCisJCWJvb3RzdGFj
a3NbY3B1XSA9ICh2b2lkICopa21lbV9tYWxsb2Moa2VybmVsX2FyZW5hLAorCQkgICAgS1NUQUNL
X1BBR0VTICogUEFHRV9TSVpFLCBNX1dBSVRPSyB8IE1fWkVSTyk7CisJCWRvdWJsZWZhdWx0X3N0
YWNrID0gKGNoYXIgKilrbWVtX21hbGxvYyhrZXJuZWxfYXJlbmEsCisJCSAgICBQQUdFX1NJWkUs
IE1fV0FJVE9LIHwgTV9aRVJPKTsKKwkJbm1pX3N0YWNrID0gKGNoYXIgKilrbWVtX21hbGxvYyhr
ZXJuZWxfYXJlbmEsIFBBR0VfU0laRSwKKwkJICAgIE1fV0FJVE9LIHwgTV9aRVJPKTsKKwkJZHBj
cHUgPSAodm9pZCAqKWttZW1fbWFsbG9jKGtlcm5lbF9hcmVuYSwgRFBDUFVfU0laRSwKKwkJICAg
IE1fV0FJVE9LIHwgTV9aRVJPKTsKKworCQlib290U1RLID0gKGNoYXIgKilib290c3RhY2tzW2Nw
dV0gKyBLU1RBQ0tfUEFHRVMgKiBQQUdFX1NJWkUgLSA4OworCQlib290QVAgPSBjcHU7CisKKwkJ
LyogYXR0ZW1wdCB0byBzdGFydCB0aGUgQXBwbGljYXRpb24gUHJvY2Vzc29yICovCisJCWlmICgh
c3RhcnRfeGVuX2FwKGNwdSkpCisJCQlwYW5pYygiQVAgIyVkIGZhaWxlZCB0byBzdGFydCEiLCBj
cHUpOworCisJCUNQVV9TRVQoY3B1LCAmYWxsX2NwdXMpOwkvKiByZWNvcmQgQVAgaW4gQ1BVIG1h
cCAqLworCX0KKworCXJldHVybiAobXBfbmFwcyk7Cit9CisKIC8qCiAgKiBGdW5jdGlvbnMgdG8g
Y29udmVydCB0aGUgImV4dHJhIiBwYXJhbWV0ZXJzIHBhc3NlZCBieSBYZW4KICAqIGludG8gRnJl
ZUJTRCBib290IG9wdGlvbnMgKGZyb20gdGhlIGkzODYgWGVuIHBvcnQpLgpkaWZmIC0tZ2l0IGEv
c3lzL3hlbi9wdi5oIGIvc3lzL3hlbi9wdi5oCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAw
MDAwMDAuLjQ1Yjc0NzMKLS0tIC9kZXYvbnVsbAorKysgYi9zeXMveGVuL3B2LmgKQEAgLTAsMCAr
MSwzMiBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAxMyBSb2dlciBQYXUgTW9ubsOpIDxyb2dl
ci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0
cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRo
b3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9s
bG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Yg
c291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNl
LCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgor
ICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBh
Ym92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5k
IHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5k
L29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKgor
ICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBUSEUgQVVUSE9SIEFORCBDT05UUklCVVRP
UlMgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElFUywgSU5D
TFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQorICogSU1QTElFRCBXQVJSQU5USUVTIE9G
IE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UKKyAq
IEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hBTEwgVEhFIEFVVEhPUiBPUiBDT05UUklC
VVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lERU5UQUws
IFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTAorICogREFNQUdFUyAoSU5DTFVE
SU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUgR09PRFMK
KyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1IgQlVTSU5F
U1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVTRUQgQU5EIE9OIEFOWSBUSEVPUlkgT0Yg
TElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QKKyAqIExJQUJJTElUWSwgT1Ig
VE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElOIEFOWSBX
QVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURWSVNFRCBP
RiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFNQUdFLgorICovCisKKyNpZm5kZWYJX19Y
RU5fUFZfSF9fCisjZGVmaW5lCV9fWEVOX1BWX0hfXworCitpbnQJeGVuX3B2X3N0YXJ0X2FsbF9h
cHModm9pZCk7CisKKyNlbmRpZgkvKiBfX1hFTl9QVl9IX18gKi8KLS0gCjEuNy43LjUgKEFwcGxl
IEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
XwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9s
aXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcV-0002qx-NW; Thu, 02 Jan 2014 15:55:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcT-0002pt-W9
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:38 +0000
Received: from [85.158.137.68:28532] by server-16.bemta-3.messagelabs.com id
	39/F4-26128-9FB85C25; Thu, 02 Jan 2014 15:55:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388678133!3268170!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14178 invoked from network); 2 Jan 2014 15:55:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89262682"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:32 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRH-00043n-L9;
	Thu, 02 Jan 2014 15:44:03 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:48 +0100
Message-ID: <1388677433-49525-15-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v9_14/19=5D_xen=3A_introduce_xenpv?=
	=?utf-8?q?_bus_and_a_dummy_pvcpu_device?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

U2luY2UgWGVuIFBWSCBndWVzdHMgZG9lc24ndCBoYXZlIEFDUEksIHdlIG5lZWQgdG8gY3JlYXRl
IGEgZHVtbXkKYnVzIHNvIHRvcCBsZXZlbCBYZW4gZGV2aWNlcyBjYW4gYXR0YWNoIHRvIGl0IChp
bnN0ZWFkIG9mCmF0dGFjaGluZyBkaXJlY3RseSB0byB0aGUgbmV4dXMpIGFuZCBhIHB2Y3B1IGRl
dmljZSB0aGF0IHdpbGwgYmUgdXNlZAp0byBmaWxsIHRoZSBwY3B1LT5wY19kZXZpY2UgZmllbGQu
Ci0tLQogc3lzL2NvbmYvZmlsZXMuYW1kNjQgfCAgICAxICsKIHN5cy9jb25mL2ZpbGVzLmkzODYg
IHwgICAgMSArCiBzeXMveDg2L3hlbi94ZW5wdi5jICB8ICAxNTUgKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIDMgZmlsZXMgY2hhbmdlZCwgMTU3IGlu
c2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94
ZW4veGVucHYuYwoKZGlmZiAtLWdpdCBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0IGIvc3lzL2NvbmYv
ZmlsZXMuYW1kNjQKaW5kZXggYTM0OTFkYS4uZDdjOThjYyAxMDA2NDQKLS0tIGEvc3lzL2NvbmYv
ZmlsZXMuYW1kNjQKKysrIGIvc3lzL2NvbmYvZmlsZXMuYW1kNjQKQEAgLTU3MCwzICs1NzAsNCBA
QCB4ODYveGVuL2h2bS5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYveGVuL3hlbl9pbnRyLmMJCW9w
dGlvbmFsCXhlbiB8IHhlbmh2bQogeDg2L3hlbi9wdi5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYv
eGVuL3B2Y3B1X2VudW0uYwkJb3B0aW9uYWwJeGVuaHZtCit4ODYveGVuL3hlbnB2LmMJCQlvcHRp
b25hbAl4ZW5odm0KZGlmZiAtLWdpdCBhL3N5cy9jb25mL2ZpbGVzLmkzODYgYi9zeXMvY29uZi9m
aWxlcy5pMzg2CmluZGV4IDc5MDI5NmQuLjgxMTQyZTMgMTAwNjQ0Ci0tLSBhL3N5cy9jb25mL2Zp
bGVzLmkzODYKKysrIGIvc3lzL2NvbmYvZmlsZXMuaTM4NgpAQCAtNjAzLDMgKzYwMyw0IEBAIHg4
Ni94ODYvdHNjLmMJCQlzdGFuZGFyZAogeDg2L3g4Ni9kZWxheS5jCQkJc3RhbmRhcmQKIHg4Ni94
ZW4vaHZtLmMJCQlvcHRpb25hbCB4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIuYwkJb3B0aW9uYWwg
eGVuIHwgeGVuaHZtCit4ODYveGVuL3hlbnB2LmMJCQlvcHRpb25hbCB4ZW4gfCB4ZW5odm0KZGlm
ZiAtLWdpdCBhL3N5cy94ODYveGVuL3hlbnB2LmMgYi9zeXMveDg2L3hlbi94ZW5wdi5jCm5ldyBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAuLjQxZDY3NGYKLS0tIC9kZXYvbnVsbAorKysg
Yi9zeXMveDg2L3hlbi94ZW5wdi5jCkBAIC0wLDAgKzEsMTU1IEBACisvKgorICogQ29weXJpZ2h0
IChjKSAyMDEzIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgorICogQWxs
IHJpZ2h0cyByZXNlcnZlZC4KKyAqCisgKiBSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJj
ZSBhbmQgYmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQKKyAqIG1vZGlmaWNhdGlvbiwgYXJl
IHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucworICogYXJl
IG1ldDoKKyAqIDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBzb3VyY2UgY29kZSBtdXN0IHJldGFpbiB0
aGUgYWJvdmUgY29weXJpZ2h0CisgKiAgICBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25z
IGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuCisgKiAyLiBSZWRpc3RyaWJ1dGlvbnMgaW4g
YmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90
aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVy
IGluIHRoZQorICogICAgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3Zp
ZGVkIHdpdGggdGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisgKiBUSElTIFNPRlRXQVJFIElTIFBST1ZJ
REVEIEJZIFRIRSBBVVRIT1IgQU5EIENPTlRSSUJVVE9SUyBBUyBJUycnIEFORAorICogQU5ZIEVY
UFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBU
TywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFORCBGSVRO
RVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJTiBOTyBF
VkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAqIEZPUiBB
TlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBD
T05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywg
UFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExPU1MgT0Yg
VVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisgKiBIT1dF
VkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09O
VFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVHTElHRU5D
RSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBVU0UgT0Yg
VEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRgorICog
U1VDSCBEQU1BR0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5oPgorX19GQlNESUQoIiRG
cmVlQlNEJCIpOworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5jbHVkZSA8c3lzL3N5c3Rt
Lmg+CisjaW5jbHVkZSA8c3lzL2J1cy5oPgorI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNs
dWRlIDxzeXMvbW9kdWxlLmg+CisjaW5jbHVkZSA8c3lzL3BjcHUuaD4KKyNpbmNsdWRlIDxzeXMv
c21wLmg+CisKKyNpbmNsdWRlIDx4ZW4veGVuLW9zLmg+CisKK3N0YXRpYyBpbnQKK3hlbnB2X3By
b2JlKGRldmljZV90IGRldikKK3sKKworCWRldmljZV9zZXRfZGVzYyhkZXYsICJYZW4gUFYgYnVz
Iik7CisJZGV2aWNlX3F1aWV0KGRldik7CisJcmV0dXJuICgwKTsKK30KKworc3RhdGljIGludAor
eGVucHZfYXR0YWNoKGRldmljZV90IGRldikKK3sKKwlkZXZpY2VfdCBjaGlsZDsKKworCS8qCisJ
ICogTGV0IG91ciBjaGlsZCBkcml2ZXJzIGlkZW50aWZ5IGFueSBjaGlsZCBkZXZpY2VzIHRoYXQg
dGhleQorCSAqIGNhbiBmaW5kLiAgT25jZSB0aGF0IGlzIGRvbmUgYXR0YWNoIGFueSBkZXZpY2Vz
IHRoYXQgd2UKKwkgKiBmb3VuZC4KKwkgKi8KKwlidXNfZ2VuZXJpY19wcm9iZShkZXYpOworCWJ1
c19nZW5lcmljX2F0dGFjaChkZXYpOworCisJaWYgKCFkZXZjbGFzc19nZXRfZGV2aWNlKGRldmNs
YXNzX2ZpbmQoImlzYSIpLCAwKSkgeworCQljaGlsZCA9IEJVU19BRERfQ0hJTEQoZGV2LCAwLCAi
aXNhIiwgMCk7CisJCWlmIChjaGlsZCA9PSBOVUxMKQorCQkJcGFuaWMoInhlbnB2X2F0dGFjaCBp
c2EiKTsKKwkJZGV2aWNlX3Byb2JlX2FuZF9hdHRhY2goY2hpbGQpOworCX0KKworCXJldHVybiAw
OworfQorCitzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnB2X21ldGhvZHNbXSA9IHsKKwkvKiBE
ZXZpY2UgaW50ZXJmYWNlICovCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwJCXhlbnB2X3Byb2Jl
KSwKKwlERVZNRVRIT0QoZGV2aWNlX2F0dGFjaCwJeGVucHZfYXR0YWNoKSwKKwlERVZNRVRIT0Qo
ZGV2aWNlX3N1c3BlbmQsCWJ1c19nZW5lcmljX3N1c3BlbmQpLAorCURFVk1FVEhPRChkZXZpY2Vf
cmVzdW1lLAlidXNfZ2VuZXJpY19yZXN1bWUpLAorCisJLyogQnVzIGludGVyZmFjZSAqLworCURF
Vk1FVEhPRChidXNfYWRkX2NoaWxkLAlidXNfZ2VuZXJpY19hZGRfY2hpbGQpLAorCisJREVWTUVU
SE9EX0VORAorfTsKKworc3RhdGljIGRyaXZlcl90IHhlbnB2X2RyaXZlciA9IHsKKwkieGVucHYi
LAorCXhlbnB2X21ldGhvZHMsCisJMSwJCQkvKiBubyBzb2Z0YyAqLworfTsKK3N0YXRpYyBkZXZj
bGFzc190IHhlbnB2X2RldmNsYXNzOworCitEUklWRVJfTU9EVUxFKHhlbnB2LCBuZXh1cywgeGVu
cHZfZHJpdmVyLCB4ZW5wdl9kZXZjbGFzcywgMCwgMCk7CisKKy8qCisgKiBEdW1teSBYZW4gY3B1
IGRldmljZQorICoKKyAqIFNpbmNlIHRoZXJlJ3Mgbm8gQUNQSSBvbiBQVkggZ3Vlc3RzLCB3ZSBu
ZWVkIHRvIGNyZWF0ZSBhIGR1bW15CisgKiBDUFUgZGV2aWNlIGluIG9yZGVyIHRvIGZpbGwgdGhl
IHBjcHUtPnBjX2RldmljZSBmaWVsZC4KKyAqLworCitzdGF0aWMgdm9pZAoreGVucHZjcHVfaWRl
bnRpZnkoZHJpdmVyX3QgKmRyaXZlciwgZGV2aWNlX3QgcGFyZW50KQoreworCWRldmljZV90IGNo
aWxkOworCWludCBpOworCisJLyogT25seSBhdHRhY2ggdG8gUFYgZ3Vlc3RzLCBIVk0gZ3Vlc3Rz
IHVzZSB0aGUgQUNQSSBDUFUgZGV2aWNlcyAqLworCWlmICgheGVuX3B2X2RvbWFpbigpKQorCQly
ZXR1cm47CisKKwlDUFVfRk9SRUFDSChpKSB7CisJCWNoaWxkID0gQlVTX0FERF9DSElMRChwYXJl
bnQsIDAsICJwdmNwdSIsIGkpOworCQlpZiAoY2hpbGQgPT0gTlVMTCkKKwkJCXBhbmljKCJ4ZW5w
dmNwdV9pZGVudGlmeSBhZGQgcHZjcHUiKTsKKwl9Cit9CisKK3N0YXRpYyBpbnQKK3hlbnB2Y3B1
X3Byb2JlKGRldmljZV90IGRldikKK3sKKworCWRldmljZV9zZXRfZGVzYyhkZXYsICJYZW4gUFYg
Q1BVIik7CisJcmV0dXJuICgwKTsKK30KKworc3RhdGljIGludAoreGVucHZjcHVfYXR0YWNoKGRl
dmljZV90IGRldikKK3sKKwlzdHJ1Y3QgcGNwdSAqcGM7CisJaW50IGNwdTsKKworCWNwdSA9IGRl
dmljZV9nZXRfdW5pdChkZXYpOworCXBjID0gcGNwdV9maW5kKGNwdSk7CisJcGMtPnBjX2Rldmlj
ZSA9IGRldjsKKwlyZXR1cm4gKDApOworfQorCitzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnB2
Y3B1X21ldGhvZHNbXSA9IHsKKwlERVZNRVRIT0QoZGV2aWNlX2lkZW50aWZ5LCB4ZW5wdmNwdV9p
ZGVudGlmeSksCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwgeGVucHZjcHVfcHJvYmUpLAorCURF
Vk1FVEhPRChkZXZpY2VfYXR0YWNoLCB4ZW5wdmNwdV9hdHRhY2gpLAorCisJREVWTUVUSE9EX0VO
RAorfTsKKworc3RhdGljIGRyaXZlcl90IHhlbnB2Y3B1X2RyaXZlciA9IHsKKwkicHZjcHUiLAor
CXhlbnB2Y3B1X21ldGhvZHMsCisJMCwKK307CisKK2RldmNsYXNzX3QgeGVucHZjcHVfZGV2Y2xh
c3M7CisKK0RSSVZFUl9NT0RVTEUoeGVucHZjcHUsIHhlbnB2LCB4ZW5wdmNwdV9kcml2ZXIsIHhl
bnB2Y3B1X2RldmNsYXNzLCAwLCAwKTsKK01PRFVMRV9ERVBFTkQoeGVucHZjcHUsIHhlbnB2LCAx
LCAxLCAxKTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1k
ZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 02 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykcW-0002rK-4X; Thu, 02 Jan 2014 15:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1VykcU-0002qO-Nd
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 15:55:39 +0000
Received: from [85.158.143.35:51976] by server-3.bemta-4.messagelabs.com id
	C6/F6-32360-AFB85C25; Thu, 02 Jan 2014 15:55:38 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388678136!9200156!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6459 invoked from network); 2 Jan 2014 15:55:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 15:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87074581"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 15:55:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 10:55:35 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1VykRJ-00043n-Us;
	Thu, 02 Jan 2014 15:44:06 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Thu, 2 Jan 2014 16:43:52 +0100
Message-ID: <1388677433-49525-19-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v9 18/19] xen: changes to gnttab for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/gnttab.c |   26 +++++++++++++++++++++-----
 1 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/sys/xen/gnttab.c b/sys/xen/gnttab.c
index 03c32b7..6949be5 100644
--- a/sys/xen/gnttab.c
+++ b/sys/xen/gnttab.c
@@ -25,6 +25,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/lock.h>
 #include <sys/malloc.h>
 #include <sys/mman.h>
+#include <sys/limits.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -607,6 +608,7 @@ gnttab_resume(void)
 {
 	int error;
 	unsigned int max_nr_gframes, nr_gframes;
+	void *alloc_mem;
 
 	nr_gframes = nr_grant_frames;
 	max_nr_gframes = max_nr_grant_frames();
@@ -614,11 +616,25 @@ gnttab_resume(void)
 		return (ENOSYS);
 
 	if (!resume_frames) {
-		error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
-		    &resume_frames);
-		if (error) {
-			printf("error mapping gnttab share frames\n");
-			return (error);
+		if (xen_pv_domain()) {
+			/*
+			 * This is a waste of physical memory,
+			 * we should use ballooned pages instead,
+			 * but it will do for now.
+			 */
+			alloc_mem = contigmalloc(max_nr_gframes * PAGE_SIZE,
+			                         M_DEVBUF, M_NOWAIT, 0,
+			                         ULONG_MAX, PAGE_SIZE, 0);
+			KASSERT((alloc_mem != NULL),
+				("unable to alloc memory for gnttab"));
+			resume_frames = vtophys(alloc_mem);
+		} else {
+			error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
+			    &resume_frames);
+			if (error) {
+				printf("error mapping gnttab share frames\n");
+				return (error);
+			}
 		}
 	}
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:07:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyknd-0004tc-GN; Thu, 02 Jan 2014 16:07:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyknc-0004tU-N7
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:07:08 +0000
Received: from [85.158.139.211:19573] by server-2.bemta-5.messagelabs.com id
	21/03-29392-BAE85C25; Thu, 02 Jan 2014 16:07:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388678825!4877628!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9459 invoked from network); 2 Jan 2014 16:07:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:07:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89267418"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 16:07:05 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:07:04 -0500
Message-ID: <52C58EA7.2090107@citrix.com>
Date: Thu, 2 Jan 2014 16:07:03 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-10-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-10-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 09/18] xen/pvh: Secondary VCPU bringup
 (non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> The VCPU bringup protocol follows the PV with certain twists.
> From xen/include/public/arch-x86/xen.h:
> 
> Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
> for HVM and PVH guests, not all information in this structure is updated:
> 
>  - For HVM guests, the structures read include: fpu_ctxt (if
>  VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
> 
>  - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
>  set cr3. All other fields not used should be set to 0.
> 
> This is what we do. We piggyback on the 'xen_setup_gdt' - but modify
> a bit - we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
> can load per-cpu data-structures. It has no effect on the VCPU0.
> 
> We also piggyback on the %rdi register to pass in the CPU number - so
> that when we bootup a new CPU, the cpu_bringup_and_idle will have
> passed as the first parameter the CPU number (via %rdi for 64-bit).
[...]
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1413,14 +1413,19 @@ static void __init xen_boot_params_init_edd(void)
>   * Set up the GDT and segment registers for -fstack-protector.  Until
>   * we do this, we have to be careful not to call any stack-protected
>   * function, which is most of the kernel.
> + *
> + * Note, that it is refok - b/c the only caller of this after init

Please spell out 'because' in full.  b/c is too hard to read.  Also list
the callers (cpu_bringup_and_idle() I guess).

> + * is PVH which is not going to use xen_load_gdt_boot or other
> + * __init functions.
>   */
> -static void __init xen_setup_gdt(void)
> +void __init_refok xen_setup_gdt(int cpu)

__ref seems to be the correct section marker for this.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:15:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykvL-0005GK-LL; Thu, 02 Jan 2014 16:15:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VykvJ-0005GA-SV
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:15:05 +0000
Received: from [85.158.137.68:52148] by server-13.bemta-3.messagelabs.com id
	20/6E-28603-88095C25; Thu, 02 Jan 2014 16:15:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388679302!6942404!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20447 invoked from network); 2 Jan 2014 16:15:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89270213"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 16:14:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:14:33 -0500
Message-ID: <52C59068.1040603@citrix.com>
Date: Thu, 2 Jan 2014 16:14:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> In xen_add_extra_mem() we can skip updating P2M as it's managed
> by Xen. PVH maps the entire IO space, but only RAM pages need
> to be repopulated.

So this looks minimal but I can't work out what PVH actually needs to do
here.  This code really doesn't need to be made any more confusing.

I don't understand why the guest hasn't been supplied with a sensible
memory map that we can use as-is, without playing all these games.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:15:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VykvL-0005GK-LL; Thu, 02 Jan 2014 16:15:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VykvJ-0005GA-SV
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:15:05 +0000
Received: from [85.158.137.68:52148] by server-13.bemta-3.messagelabs.com id
	20/6E-28603-88095C25; Thu, 02 Jan 2014 16:15:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388679302!6942404!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20447 invoked from network); 2 Jan 2014 16:15:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89270213"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 16:14:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:14:33 -0500
Message-ID: <52C59068.1040603@citrix.com>
Date: Thu, 2 Jan 2014 16:14:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> In xen_add_extra_mem() we can skip updating P2M as it's managed
> by Xen. PVH maps the entire IO space, but only RAM pages need
> to be repopulated.

So this looks minimal but I can't work out what PVH actually needs to do
here.  This code really doesn't need to be made any more confusing.

I don't understand why the guest hasn't been supplied with a sensible
memory map that we can use as-is, without playing all these games.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyl7F-0005Vh-H5; Thu, 02 Jan 2014 16:27:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vyl7E-0005Vc-TN
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:27:25 +0000
Received: from [193.109.254.147:10797] by server-8.bemta-14.messagelabs.com id
	BC/78-30921-C6395C25; Thu, 02 Jan 2014 16:27:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1388680042!8519906!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28994 invoked from network); 2 Jan 2014 16:27:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:27:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="89275865"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 16:27:21 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:27:21 -0500
Message-ID: <52C59367.70707@citrix.com>
Date: Thu, 2 Jan 2014 16:27:19 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant
	frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> and contain the virtual address of the grants. That was OK
> for most architectures (PVHVM, ARM) where the grants are contiguous
> in memory. That however is not the case for PVH - in which case
> we will have to do a lookup of the PFN for each virtual address.
> 
> Instead of doing that, let's make it a structure which will contain
> the array of PFNs, the virtual address and the count of said PFNs.
> 
> Also provide generic functions: gnttab_setup_auto_xlat_frames and
> gnttab_free_auto_xlat_frames to populate said structure with
> appropriate values for PVHVM and ARM.
> 
> To round it off, change the name from 'xen_hvm_resume_frames' to
> a more descriptive one - 'xen_auto_xlat_grant_frames'.
> 
> For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
[...]
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
[...]
> @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>  
> +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> +{
> +	xen_pfn_t *pfn;
> +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> +	int i;
> +
> +	if (xen_auto_xlat_grant_frames.count)
> +		return -EINVAL;
> +
> +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> +	if (!pfn)
> +		return -ENOMEM;
> +	for (i = 0; i < max_nr_gframes; i++)
> +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));

PFN_DOWN(addr) + i looks better to me.
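For what it's worth, the two forms are arithmetically identical for any addr, since adding whole multiples of PAGE_SIZE before shifting is the same as adding the index after. A minimal user-space sketch (PAGE_SHIFT and PFN_DOWN here are stand-ins for the kernel macros, not the real definitions):

```c
#include <assert.h>

/* Stand-ins for the kernel's definitions (x86: 4 KiB pages). */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Original form: shift each per-frame address down to a PFN. */
unsigned long pfn_original(unsigned long addr, unsigned int i)
{
	return PFN_DOWN(addr + (i * PAGE_SIZE));
}

/* Suggested form: shift once, then add the frame index. */
unsigned long pfn_suggested(unsigned long addr, unsigned int i)
{
	return PFN_DOWN(addr) + i;
}
```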

> +
> +	xen_auto_xlat_grant_frames.vaddr = addr;

Huh? addr is a physical address but you're assigning it to a field
called vaddr?  I think you mean to set this field to the result of the
xen_remap() call, yes?

> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
>  			   grant_status_t **__shared);
>  void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
>  
> -extern unsigned long xen_hvm_resume_frames;
> +struct grant_frames {
> +	xen_pfn_t *pfn;
> +	int count;

unsigned int.

> +	unsigned long vaddr;

void * if this is a virtual address.
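Put together, the declaration with both suggested type changes would look something like this (a sketch of the reviewed struct, not the final merged code; the xen_pfn_t typedef is a stand-in):

```c
#include <stddef.h>
#include <assert.h>

typedef unsigned long xen_pfn_t;  /* stand-in for the Xen typedef */

/*
 * grant_frames with the review comments applied: an unsigned
 * frame count, and a void * for the (virtual) mapping address.
 */
struct grant_frames {
	xen_pfn_t *pfn;      /* array of PFNs backing the frames */
	unsigned int count;  /* number of entries in pfn[] */
	void *vaddr;         /* virtual address of the mapping */
};
```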

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:32:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VylBq-0005nY-Oy; Thu, 02 Jan 2014 16:32:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VylBo-0005nG-G5
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:32:08 +0000
Received: from [85.158.137.68:24982] by server-8.bemta-3.messagelabs.com id
	60/14-31081-78495C25; Thu, 02 Jan 2014 16:32:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388680325!6945157!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3127 invoked from network); 2 Jan 2014 16:32:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:32:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87089386"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 16:32:04 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:32:04 -0500
Message-ID: <52C59483.5030607@citrix.com>
Date: Thu, 2 Jan 2014 16:32:03 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> In PVH the shared grant frame is the PFN and not MFN,
> hence its mapped via the same code path as HVM.
> 
> The allocation of the grant frame is done differently - we
> do not use the early platform-pci driver and have an
> ioremap area - instead we use balloon memory and stitch
> all of the non-contingous pages in a virtualized area.
> 
> That means when we call the hypervisor to replace the GMFN
> with a XENMAPSPACE_grant_table type, we need to lookup the
> old PFN for every iteration instead of assuming a flat
> contingous PFN allocation.
> 
> Lastly, we only use v1 for grants. This is because PVHVM
> is not able to use v2 due to no XENMEM_add_to_physmap
> calls on the error status page (see commit
> 69e8f430e243d657c2053f097efebc2e2cd559f0
>  xen/granttable: Disable grant v2 for HVM domains.)
> 
> Until that is implemented this workaround has to
> be in place.
> 
> Also per suggestions by Stefano utilize the PVHVM paths
> as they share common functionality.
> 
> v2 of this patch moves most of the PVH code out in the
> arch/x86/xen/grant-table driver and touches only minimally
> the generic driver.
[...]
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
[...]
> +static int __init xen_pvh_gnttab_setup(void)
> +{
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	if (!xen_pv_domain())
> +		return -ENODEV;
> +
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		return -ENODEV;

Replace all these with if (!xen_pvh_domain()) ?

> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
>  	return gnttab_init();
>  }
>  
> -core_initcall(__gnttab_init);
> +core_initcall_sync(__gnttab_init);

Why has this become _sync?

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 16:50:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 16:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VylTQ-0006wk-Rr; Thu, 02 Jan 2014 16:50:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1VylTP-0006wf-5p
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 16:50:19 +0000
Received: from [85.158.137.68:65387] by server-2.bemta-3.messagelabs.com id
	9D/EB-17329-AC895C25; Thu, 02 Jan 2014 16:50:18 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388681416!6898552!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31311 invoked from network); 2 Jan 2014 16:50:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 16:50:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,591,1384300800"; d="scan'208";a="87095858"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 16:50:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 2 Jan 2014
	11:50:15 -0500
Message-ID: <52C598C6.7040902@citrix.com>
Date: Thu, 2 Jan 2014 16:50:14 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> The patches, also available at
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12
> 
> implement the necessary functionality to boot a PV guest in PVH mode.

In general this looks to be in much better shape now.  Some of the
refactoring patches should be queued for 3.14.

I'm not sure if or when the rest should go in, given that the PVH
hypervisor ABI is not yet finalized and is missing support for a
number of things, with no visible plan for how/when/if this missing
functionality will be implemented.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 17:09:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 17:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vylls-0008Tv-Fx; Thu, 02 Jan 2014 17:09:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kamal@canonical.com>) id 1Vyllq-0008Tq-Pt
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 17:09:22 +0000
Received: from [193.109.254.147:54599] by server-8.bemta-14.messagelabs.com id
	60/3D-30921-24D95C25; Thu, 02 Jan 2014 17:09:22 +0000
X-Env-Sender: kamal@canonical.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388682561!8470502!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13783 invoked from network); 2 Jan 2014 17:09:21 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-2.tower-27.messagelabs.com with SMTP;
	2 Jan 2014 17:09:21 -0000
Received: from c-67-160-231-162.hsd1.ca.comcast.net ([67.160.231.162]
	helo=fourier) by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <kamal@canonical.com>)
	id 1Vyli7-0003Vc-DH; Thu, 02 Jan 2014 17:05:31 +0000
Received: from kamal by fourier with local (Exim 4.80)
	(envelope-from <kamal@whence.com>)
	id 1Vyli4-00089D-WF; Thu, 02 Jan 2014 09:05:29 -0800
From: Kamal Mostafa <kamal@canonical.com>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
	kernel-team@lists.ubuntu.com
Date: Thu,  2 Jan 2014 09:04:43 -0800
Message-Id: <1388682306-30859-69-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1388682306-30859-1-git-send-email-kamal@canonical.com>
References: <1388682306-30859-1-git-send-email-kamal@canonical.com>
X-Extended-Stable: 3.8
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Kamal Mostafa <kamal@canonical.com>, Matt Wilson <msw@amazon.com>
Subject: [Xen-devel] [PATCH 3.8 68/91] xen/gnttab: leave lazy MMU mode in
	the case of a m2p override failure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

3.8.13.15 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Matt Wilson <msw@amazon.com>

commit 14883a75ec76b44759385fb12629f4a0f1aef4e3 upstream.

Commit f62805f1 introduced a bug where lazy MMU mode isn't exited if an
m2p_add_override/m2p_remove_override call fails.

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Anthony Liguori <aliguori@amazon.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Matt Wilson <msw@amazon.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 drivers/xen/grant-table.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 51be226..b146bfd 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -920,9 +920,10 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
 				       &kmap_ops[i] : NULL);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
+ out:
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
@@ -953,9 +954,10 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		ret = m2p_remove_override(pages[i], kmap_ops ?
 				       &kmap_ops[i] : NULL);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
+ out:
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-- 
1.8.3.2
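For clarity, the control-flow change in the patch above can be sketched in
plain C. This is a hypothetical, self-contained model (the names map_refs,
enter_lazy, leave_lazy and the failure simulation are stand-ins, not the
kernel code): the point is that jumping to a common exit label, instead of
returning early, guarantees the lazy-mode teardown runs on both the success
and the failure path.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for arch_enter/leave_lazy_mmu_mode(). */
static bool lazy_mode_active;

static void enter_lazy(void) { lazy_mode_active = true; }
static void leave_lazy(void) { lazy_mode_active = false; }

/* Mimics the fixed gnttab_map_refs() control flow: on a mid-loop
 * failure we jump to the common exit instead of returning early,
 * so leave_lazy() always runs. */
static int map_refs(int count, int fail_at)
{
	int ret = 0, i;

	enter_lazy();
	for (i = 0; i < count; i++) {
		if (i == fail_at) {	/* simulated m2p override failure */
			ret = -1;
			goto out;	/* was: return ret;  (the bug) */
		}
	}
out:
	leave_lazy();
	return ret;
}
```

With the buggy early return, a failure at i == 2 would leave
lazy_mode_active set; with the goto it is always cleared.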


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:24:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VymwX-0003MG-Po; Thu, 02 Jan 2014 18:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VymwV-0003MB-Se
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:24:28 +0000
Received: from [193.109.254.147:54390] by server-11.bemta-14.messagelabs.com
	id 5C/11-20576-BDEA5C25; Thu, 02 Jan 2014 18:24:27 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388687064!8480201!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31790 invoked from network); 2 Jan 2014 18:24:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:24:26 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02INLEg023579
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:23:21 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02INJiq024803
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:23:20 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02INJCW028050; Thu, 2 Jan 2014 18:23:19 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:23:19 -0800
Date: Thu, 2 Jan 2014 13:23:12 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102182311.GA3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
	<52C54D3C.4050101@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C54D3C.4050101@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 11:27:56AM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > For PVHVM the shared_info structure is provided via the same way
> > as for normal PV guests (see include/xen/interface/xen.h).
> > 
> > That is during bootup we get 'xen_start_info' via the %esi register
> > in startup_xen. Then later we extract the 'shared_info' from said
> > structure (in xen_setup_shared_info) and start using it.
> > 
> > The 'xen_setup_shared_info' is all setup to work with auto-xlat
> > guests, but there are two functions which it calls that are not:
> > xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
> > This patch modifies those to work in auto-xlat mode.
> [...]
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
> >  		xen_vcpu_setup(cpu);
> >  
> >  	/* xen_vcpu_setup managed to place the vcpu_info within the
> > -	   percpu area for all cpus, so make use of it */
> > -	if (have_vcpu_info_placement) {
> > +	 * percpu area for all cpus, so make use of it. Note that for
> > +	 * PVH we want to use native IRQ mechanism. */
> > +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
> >  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
> >  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
> >  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
> 
> Should this be in a separate patch: "xen/pvh: use native irq ops"?

Good idea. Initially it was part of the event channel one, but I split
it.
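The gating logic in the hunk quoted above can be modeled with a small
self-contained sketch (hypothetical names; not the kernel code): the direct
paravirt IRQ ops are installed only when vcpu_info placement succeeded AND
the guest is not PVH, since a PVH guest keeps the native IRQ mechanism.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the quoted xen_setup_vcpu_info_placement()
 * condition.  IRQ_OPS_PV_DIRECT stands for patching pv_irq_ops with
 * the *_direct variants; IRQ_OPS_NATIVE leaves the native ops alone. */
enum irq_ops { IRQ_OPS_NATIVE, IRQ_OPS_PV_DIRECT };

static enum irq_ops choose_irq_ops(bool have_vcpu_info_placement,
				   bool pvh_domain)
{
	if (have_vcpu_info_placement && !pvh_domain)
		return IRQ_OPS_PV_DIRECT;	/* classic PV guest */
	return IRQ_OPS_NATIVE;			/* PVH: native IRQs */
}
```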
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:25:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VymxR-0003P2-8a; Thu, 02 Jan 2014 18:25:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VymxQ-0003Oq-IP
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:25:24 +0000
Received: from [85.158.143.35:53124] by server-3.bemta-4.messagelabs.com id
	C3/10-32360-31FA5C25; Thu, 02 Jan 2014 18:25:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388687121!9304276!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24017 invoked from network); 2 Jan 2014 18:25:23 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:25:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02IOIjf021640
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:24:19 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IOHkm000368
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:24:17 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IOG3O001625; Thu, 2 Jan 2014 18:24:16 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:24:16 -0800
Date: Thu, 2 Jan 2014 13:24:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102182413.GB3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
	<52C54E00.7010508@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C54E00.7010508@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 08/18] xen/pvh: Load GDT/GS in early PV
 bootup code for BSP.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > +		loadsegment(es, 0);
> > +		loadsegment(ds, 0);
> > +		loadsegment(fs, 0);
> > +#else
> > +		/* PVH: TODO Implement. */
> > +		BUG();
> > +#endif
> > +		return;    <==============
> > +	}
> >  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
> >  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
> 
> If PVH uses native GDT why are these (and possibly other?) GDT ops needed?

They aren't. There is a 'return' there. I marked it for you with
'<======'.
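The point of the marked early return can be shown with a minimal sketch
(hypothetical names, assuming the structure of the quoted hunk): on the PVH
path the function returns before the boot-time GDT hooks are installed, so
the native GDT ops are never overridden.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the quoted code path. */
static bool gdt_ops_overridden;

static void setup_boot_cpu(bool pvh_domain)
{
	gdt_ops_overridden = false;
	if (pvh_domain) {
		/* segments loaded natively; nothing to override */
		return;		/* the early return marked '<======' */
	}
	/* classic PV: install xen_write_gdt_entry_boot et al. */
	gdt_ops_overridden = true;
}
```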


> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:28:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyn0Q-0003cR-Sy; Thu, 02 Jan 2014 18:28:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vyn0P-0003cI-LA
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:28:29 +0000
Received: from [193.109.254.147:27101] by server-9.bemta-14.messagelabs.com id
	F8/AE-13957-DCFA5C25; Thu, 02 Jan 2014 18:28:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388687306!5026571!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26454 invoked from network); 2 Jan 2014 18:28:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:28:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02IRNpM024898
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:27:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IRMbZ008444
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:27:22 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IRLpL007276; Thu, 2 Jan 2014 18:27:21 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:27:21 -0800
Date: Thu, 2 Jan 2014 13:27:19 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102182719.GC3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
	<52C55222.7070801@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C55222.7070801@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
From xen-devel-bounces@lists.xen.org Thu Jan 02 18:28:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyn0Q-0003cR-Sy; Thu, 02 Jan 2014 18:28:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vyn0P-0003cI-LA
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:28:29 +0000
Received: from [193.109.254.147:27101] by server-9.bemta-14.messagelabs.com id
	F8/AE-13957-DCFA5C25; Thu, 02 Jan 2014 18:28:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388687306!5026571!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26454 invoked from network); 2 Jan 2014 18:28:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:28:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02IRNpM024898
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:27:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IRMbZ008444
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:27:22 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IRLpL007276; Thu, 2 Jan 2014 18:27:21 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:27:21 -0800
Date: Thu, 2 Jan 2014 13:27:19 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102182719.GC3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-19-git-send-email-konrad.wilk@oracle.com>
	<52C55222.7070801@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C55222.7070801@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 18/18] xen/pvh: Support ParaVirtualized
 Hardware extensions (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 11:48:50AM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > PVH allows PV linux guest to utilize hardware extended capabilities,
> > such as running MMU updates in a HVM container.
> > 
> > The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
> > with modifications):
> > 
> > "* the guest uses auto translate:
> >  - p2m is managed by Xen
> >  - pagetables are owned by the guest
> >  - mmu_update hypercall not available
> > * it uses event callback and not vlapic emulation,
> > * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
> > 
> > For a full list of hcalls supported for PVH, see pvh_hypercall64_table
> > in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
> > PV guest with auto translate, although it does use hvm_op for setting
> > callback vector."
> > 
> > Use .ascii and .asciz to define xen feature string. Note, the PVH
> > string must be in a single line (not multiple lines with \) to keep the
> > assembler from putting null char after each string before \.
> > This patch allows it to be configured and enabled.
> > 
> > Lastly remove some of the scaffolding.
> [...]
> > --- a/arch/x86/xen/Kconfig
> > +++ b/arch/x86/xen/Kconfig
> > @@ -51,3 +51,11 @@ config XEN_DEBUG_FS
> >  	  Enable statistics output and various tuning options in debugfs.
> >  	  Enabling this option may incur a significant performance overhead.
> >  
> > +config XEN_PVH
> > +	bool "Support for running as a PVH guest"
> > +	depends on X86_64 && XEN && XEN_PVHVM
> 
> Would select XEN_PVHVM be more useful?  It may not be obvious to a user

Sure.
> that PV with hardware extension depends on HVM with PV extensions.
> 
> > +	default n
> > +	help
> > +	   This option enables support for running as a PVH guest (PV guest
> > +	   using hardware extensions) under a suitably capable hypervisor.
> > +	   If unsure, say N.
> 
> This help text needs to clearly state that PVH support is experimental
> or a tech preview and the ABI is subject to change and PVH guests may
> not run on newer hypervisors.  Unless the plan is to only merge the
> Linux support once the hypervisor ABI is finalized.

I am very much comfortable marking it as experimental / tech preview,
with the caveats that it: 1) will (or probably will) change in future
Xen versions, and 2) won't cause regressions with older hypervisors.
In other words, enabling this option should not make the kernel stop
working with, say, Xen 4.1.

[Which we need to fix of course]
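Putting the two review points together (`select` rather than `depends on`,
plus an explicit experimental warning in the help text), the entry might end
up looking roughly like this -- a sketch of the discussed changes, not the
final merged text:

```
config XEN_PVH
	bool "Support for running as a PVH guest (EXPERIMENTAL)"
	depends on X86_64 && XEN
	select XEN_PVHVM
	default n
	help
	   This option enables support for running as a PVH guest (PV guest
	   using hardware extensions) under a suitably capable hypervisor.

	   This option is EXPERIMENTAL: the hypervisor ABI is not yet
	   finalized, so a PVH guest built against this kernel may not boot
	   on future hypervisors.  If unsure, say N.
```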


> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:33:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyn5K-00049X-Lg; Thu, 02 Jan 2014 18:33:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vyn5I-00049Q-RI
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:33:33 +0000
Received: from [193.109.254.147:54374] by server-14.bemta-14.messagelabs.com
	id E0/18-12628-CF0B5C25; Thu, 02 Jan 2014 18:33:32 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388687609!8505854!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10579 invoked from network); 2 Jan 2014 18:33:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:33:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02IWPcn029705
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:32:26 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IWPjq021916
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:32:25 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IWOlQ021891; Thu, 2 Jan 2014 18:32:24 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:32:24 -0800
Date: Thu, 2 Jan 2014 13:32:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102183221.GD3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C58691.4040502@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
	PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > In the bootup code for PVH we can trap cpuid via vmexit, so don't
> > need to use emulated prefix call. We also check for vector callback
> > early on, as it is a required feature. PVH also runs at default kernel
> > IOPL.
> > 
> > Finally, pure PV settings are moved to a separate function that are
> > only called for pure PV, ie, pv with pvmmu. They are also #ifdef
> > with CONFIG_XEN_PVMMU.
> [...]
> > @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
> >  		break;
> >  	}
> >  
> > -	asm(XEN_EMULATE_PREFIX "cpuid"
> > -		: "=a" (*ax),
> > -		  "=b" (*bx),
> > -		  "=c" (*cx),
> > -		  "=d" (*dx)
> > -		: "0" (*ax), "2" (*cx));
> > +	if (xen_pvh_domain())
> > +		native_cpuid(ax, bx, cx, dx);
> > +	else
> > +		asm(XEN_EMULATE_PREFIX "cpuid"
> > +			: "=a" (*ax),
> > +			"=b" (*bx),
> > +			"=c" (*cx),
> > +			"=d" (*dx)
> > +			: "0" (*ax), "2" (*cx));
> 
> For this one off cpuid call it seems preferrable to me to use the
> emulate prefix rather than diverge from PV.

This was before the PV cpuid was deemed OK to be used on PVH.
Will rip this out to use the same version.

> 
> > @@ -1431,13 +1449,18 @@ asmlinkage void __init xen_start_kernel(void)
> >  
> >  	xen_domain_type = XEN_PV_DOMAIN;
> >  
> > +	xen_setup_features();
> > +	xen_pvh_early_guest_init();
> >  	xen_setup_machphys_mapping();
> >  
> >  	/* Install Xen paravirt ops */
> >  	pv_info = xen_info;
> >  	pv_init_ops = xen_init_ops;
> > -	pv_cpu_ops = xen_cpu_ops;
> >  	pv_apic_ops = xen_apic_ops;
> > +	if (xen_pvh_domain())
> > +		pv_cpu_ops.cpuid = xen_cpuid;
> > +	else
> > +		pv_cpu_ops = xen_cpu_ops;
> 
> If cpuid is trapped for PVH guests why does PVH need non-native cpuid op?

There is some filtering done on the cpuid results. But with HVM I am
not entirely sure whether it is worth preserving that or not.

My fear is that if we switch over to the native one, without the
filtering that the kernel does, we open up a can of worms that had been
closed in the past. The reason is that for dom0 there is no cpuid
filtering being done, so it gets everything that the hypervisor sees.

Which we don't want for APERF (b/c the generic scheduler code will
try to use those MSRs), and then there are the ACPI extended C-states.

Perhaps a better thing is just to still have xen_cpuid, but with a big
comment saying: "/* We should use native, but we need to filter some
cpuids out. TODO */"

?
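To make the filtering concrete, here is a rough user-space sketch of the
kind of masking being discussed -- hiding a capability bit such as
APERF/MPERF (advertised in CPUID.06H:ECX[0]) from the kernel. The leaf
constant and function name are illustrative, not a copy of the kernel's
xen_cpuid():

```c
#include <assert.h>
#include <stdint.h>

/* Leaf 0x6 is the thermal/power management leaf; ECX bit 0 there
 * advertises the IA32_APERF/IA32_MPERF MSRs. */
#define CPUID_THERM_LEAF 0x6

/* Illustrative filter: clear feature bits the guest must not act on,
 * leaving all other leaves untouched. */
static void filter_cpuid(uint32_t leaf, uint32_t *ax, uint32_t *bx,
                         uint32_t *cx, uint32_t *dx)
{
	(void)ax; (void)bx; (void)dx;
	if (leaf == CPUID_THERM_LEAF)
		*cx &= ~(uint32_t)1;	/* hide APERF/MPERF support */
}
```

A wrapper like this would sit between native_cpuid() and the caller, which
is why keeping xen_cpuid with a TODO comment preserves the old behavior.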
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:43:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynEE-0004hs-Tn; Thu, 02 Jan 2014 18:42:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VynEE-0004hn-2X
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:42:46 +0000
Received: from [85.158.139.211:23443] by server-11.bemta-5.messagelabs.com id
	6E/5A-23268-523B5C25; Thu, 02 Jan 2014 18:42:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388688163!7579006!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7334 invoked from network); 2 Jan 2014 18:42:44 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 18:42:44 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02Ifc5N006238
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:41:39 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IfbK2012673
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:41:38 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IfbDc014931; Thu, 2 Jan 2014 18:41:37 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:41:36 -0800
Date: Thu, 2 Jan 2014 13:41:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102184133.GE3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
	<52C59068.1040603@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C59068.1040603@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 04:14:32PM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > In xen_add_extra_mem() we can skip updating P2M as it's managed
> > by Xen. PVH maps the entire IO space, but only RAM pages need
> > to be repopulated.
> 
> So this looks minimal but I can't work out what PVH actually needs to do
> here.  This code really doesn't need to be made any more confusing.

I gather you prefer Mukesh's original version?

https://lkml.org/lkml/2013/12/18/710
> 
> I don't understand why the guest hasn't been supplied with sensible
> memory map that we can use as-is without playing all these games?

This is with dom0_mem=3G,max:7G. The E820 and the P2M set up in the
hypervisor have a sensible layout (i.e., 1-1). But shared_info.nr_pages
doesn't tell us that - it just gives us the number of pages.

Which is OK, but if that differs from what you would expect from the
E820 (as in, the number of E820_RAM pages is different from nr_pages),
then you need to set up some of the E820 regions as balloon memory,
without real memory backing them.

Unless the hypervisor filters the E820 that we get through
'XENMEM_machine_memory_map'?

This should not be (and it did not look to be) a problem with the
E820 that is set up by the toolstack.
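The accounting described above can be sketched as follows. The structure
and constants are simplified stand-ins for the real Xen/Linux types: count
the RAM pages the E820 map claims, and treat any excess over nr_pages as
memory that starts out ballooned:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define E820_RAM   1
#define PAGE_SHIFT 12	/* 4 KiB pages */

struct e820_entry {
	uint64_t addr;
	uint64_t size;
	uint32_t type;
};

/* Total pages the memory map reports as RAM. */
static uint64_t e820_ram_pages(const struct e820_entry *map, size_t n)
{
	uint64_t pages = 0;
	for (size_t i = 0; i < n; i++)
		if (map[i].type == E820_RAM)
			pages += map[i].size >> PAGE_SHIFT;
	return pages;
}

/* Pages present in the map but not backed by real memory (nr_pages):
 * these regions would start life in the balloon. */
static uint64_t balloon_pages(const struct e820_entry *map, size_t n,
                              uint64_t nr_pages)
{
	uint64_t ram = e820_ram_pages(map, n);
	return ram > nr_pages ? ram - nr_pages : 0;
}
```

With a dom0_mem=3G,max:7G style setup, nr_pages would correspond to the 3G
actually populated while the map describes 7G, so the difference is what
must be handed to the balloon driver.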


> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 18:44:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynFW-0004mB-DG; Thu, 02 Jan 2014 18:44:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1VynFV-0004m1-1x
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:44:05 +0000
Received: from [85.158.143.35:10500] by server-2.bemta-4.messagelabs.com id
	17/95-11386-473B5C25; Thu, 02 Jan 2014 18:44:04 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388688242!9226562!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7902 invoked from network); 2 Jan 2014 18:44:03 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:44:03 -0000
Received: from hanvin-mobl6.amr.corp.intel.com (fmdmzpr03-ext.fm.intel.com
	[192.55.54.38]) (authenticated bits=0)
	by mail.zytor.com (8.14.7/8.14.5) with ESMTP id s02IddSV017409
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 10:39:40 -0800
Message-ID: <52C5B266.1090009@zytor.com>
Date: Thu, 02 Jan 2014 10:39:34 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	stefano.stabellini@eu.citrix.com, mukesh.rathor@oracle.com
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
X-Enigmail-Version: 1.6
Subject: Re: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/31/2013 08:35 PM, Konrad Rzeszutek Wilk wrote:
> The patches, also available at
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12
> 
> implement the necessary functionality to boot a PV guest in PVH mode.
> 

As x86 maintainer I would like to see a list of which pvops are necessary
in PVH mode.  Obviously the hope is that the really invasive ones will
not be necessary (and there are good reasons to believe that is within
reach).

	-hpa




From xen-devel-bounces@lists.xen.org Thu Jan 02 18:49:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynKM-00050X-60; Thu, 02 Jan 2014 18:49:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VynKK-00050S-D6
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:49:04 +0000
Received: from [85.158.137.68:3437] by server-10.bemta-3.messagelabs.com id
	8E/79-23989-F94B5C25; Thu, 02 Jan 2014 18:49:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388688541!6914166!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20466 invoked from network); 2 Jan 2014 18:49:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:49:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02Ilv9j012102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:47:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02Iluba027452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:47:57 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02Ilu5w000304; Thu, 2 Jan 2014 18:47:56 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:47:56 -0800
Date: Thu, 2 Jan 2014 13:47:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102184754.GF3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
	<52C59367.70707@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C59367.70707@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement a grant
 frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 04:27:19PM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> > and contain the virtual address of the grants. That was OK
> > for most architectures (PVHVM, ARM) where the grants are contiguous
> > in memory. That however is not the case for PVH - in which case
> > we would have to do a lookup of the PFN for each virtual address.
> > 
> > Instead of doing that, let's make it a structure which will contain
> > the array of PFNs, the virtual address and the count of said PFNs.
> > 
> > Also provide generic functions: gnttab_setup_auto_xlat_frames and
> > gnttab_free_auto_xlat_frames to populate said structure with
> > appropriate values for PVHVM and ARM.
> > 
> > To round it off, change the name from 'xen_hvm_resume_frames' to
> > a more descriptive one - 'xen_auto_xlat_grant_frames'.
> > 
> > For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> > we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
> [...]
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> [...]
> > @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >  
> > +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> > +{
> > +	xen_pfn_t *pfn;
> > +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> > +	int i;
> > +
> > +	if (xen_auto_xlat_grant_frames.count)
> > +		return -EINVAL;
> > +
> > +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> > +	if (!pfn)
> > +		return -ENOMEM;
> > +	for (i = 0; i < max_nr_gframes; i++)
> > +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
> 
> PFN_DOWN(addr) + i looks better to me.
> 
> > +
> > +	xen_auto_xlat_grant_frames.vaddr = addr;
> 
> Huh? addr is a physical address but you're assigning it to a field
> called vaddr?  I think you mean to set this field to the result of the
> xen_remap() call, yes?

It ends up doing that in gnttab_init - not to
xen_auto_xlat_grant_frames.vaddr, but to gnttab_shared.addr.

But not for PVH, which has already done so (via vmap).

It is kind of silly - for PVHVM we use a physical address (the MMIO
of the platform-pci device) and ioremap it. For PVH, we need to use
balloon memory and vmap it. We can't use ioremap on it because those
are RAM pages and ioremap will complain.

We end up with special casing - for PVHVM do ioremap; for PVH, just
assign it to gnttab_shared.addr as it already has a virtual address.

Perhaps I should just make this a union field? 
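Such a union might look roughly like this (a sketch of the idea only, not the actual patch; the typedef is a stand-in for the real xen_pfn_t, and the comments reflect the two uses described above):

```c
typedef unsigned long xen_pfn_t;	/* stand-in for the Xen typedef */

/* Sketch: one overlaid field, interpreted per guest mode. */
struct grant_frames {
	xen_pfn_t *pfn;
	unsigned int count;
	union {
		unsigned long paddr;	/* PVHVM: platform-pci MMIO, later ioremap'd */
		void *vaddr;		/* PVH: balloon pages, already vmap'd */
	};
};
```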
> 
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
> >  			   grant_status_t **__shared);
> >  void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
> >  
> > -extern unsigned long xen_hvm_resume_frames;
> > +struct grant_frames {
> > +	xen_pfn_t *pfn;
> > +	int count;
> 
> unsigned int.
> 
> > +	unsigned long vaddr;
> 
> void * if this is a virtual address.

It is a physical address for PVHVM, and a virtual address for PVH
(see above rant). 
> 
> David
> 


From xen-devel-bounces@lists.xen.org Thu Jan 02 18:51:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 18:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynMl-0005Eh-OX; Thu, 02 Jan 2014 18:51:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VynMk-0005Ea-Ec
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 18:51:34 +0000
Received: from [85.158.143.35:37519] by server-3.bemta-4.messagelabs.com id
	43/DD-32360-535B5C25; Thu, 02 Jan 2014 18:51:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388688691!9307013!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22219 invoked from network); 2 Jan 2014 18:51:33 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 18:51:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02IoSlW018211
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 18:50:29 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IoSwU006626
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 18:50:28 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02IoRFi006607; Thu, 2 Jan 2014 18:50:27 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 10:50:27 -0800
Date: Thu, 2 Jan 2014 13:50:23 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102185023.GG3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C59483.5030607@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > In PVH the shared grant frame is the PFN and not the MFN,
> > hence it's mapped via the same code path as HVM.
> > 
> > The allocation of the grant frame is done differently - we
> > do not use the early platform-pci driver and have an
> > ioremap area - instead we use balloon memory and stitch
> > all of the non-contiguous pages into a virtual area.
> > 
> > That means when we call the hypervisor to replace the GMFN
> > with a XENMAPSPACE_grant_table type, we need to look up the
> > old PFN for every iteration instead of assuming a flat
> > contiguous PFN allocation.
> > 
> > Lastly, we only use v1 for grants. This is because PVHVM
> > is not able to use v2, as there are no XENMEM_add_to_physmap
> > calls for the error status page (see commit
> > 69e8f430e243d657c2053f097efebc2e2cd559f0
> >  xen/granttable: Disable grant v2 for HVM domains.)
> > 
> > Until that is implemented this workaround has to
> > be in place.
> > 
> > Also, per suggestions by Stefano, utilize the PVHVM paths
> > as they share common functionality.
> > 
> > v2 of this patch moves most of the PVH code out into the
> > arch/x86/xen/grant-table driver and touches the generic
> > driver only minimally.
> [...]
> > --- a/arch/x86/xen/grant-table.c
> > +++ b/arch/x86/xen/grant-table.c
> [...]
> > +static int __init xen_pvh_gnttab_setup(void)
> > +{
> > +	if (!xen_domain())
> > +		return -ENODEV;
> > +
> > +	if (!xen_pv_domain())
> > +		return -ENODEV;
> > +
> > +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> > +		return -ENODEV;
> 
> Replace all these with if (!xen_pvh_domain()) ?

Yes.
> 
> > @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> >  	return gnttab_init();
> >  }
> >  
> > -core_initcall(__gnttab_init);
> > +core_initcall_sync(__gnttab_init);
> 
> Why has this become _sync?

It needs to run _after_ xen_pvh_gnttab_setup has run (the structure
is used in gnttab_init):

+core_initcall(xen_pvh_gnttab_setup); /* Call it _before_ __gnttab_init */

Otherwise __gnttab_init will try to use the xen_auto_xlat_grant_frames
that xen_pvh_gnttab_setup has not yet set up.

Do you think I should: a) expand the comment in 'xen_pvh_gnttab_setup'
to mention this, b) put it in the commit description, or c) is what is
there OK?
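The ordering being relied on here - all plain core_initcall() entries run before any core_initcall_sync() entry at the same level - can be modelled with a toy dispatcher (illustration only; the real kernel walks linker-section tables, and the `_model` names are made up):

```c
typedef int (*initcall_t)(void);

static int setup_done;		/* set by the "core_initcall" */
static int init_saw_setup;	/* what the "core_initcall_sync" observed */

static int xen_pvh_gnttab_setup_model(void)	/* stands in for core_initcall */
{
	setup_done = 1;
	return 0;
}

static int gnttab_init_model(void)		/* stands in for core_initcall_sync */
{
	init_saw_setup = setup_done;	/* must already be 1 */
	return 0;
}

/* Within one level, the whole plain list runs before the _sync list. */
static void run_core_level(void)
{
	initcall_t core[]      = { xen_pvh_gnttab_setup_model };
	initcall_t core_sync[] = { gnttab_init_model };
	unsigned int i;

	for (i = 0; i < sizeof(core) / sizeof(core[0]); i++)
		core[i]();
	for (i = 0; i < sizeof(core_sync) / sizeof(core_sync[0]); i++)
		core_sync[i]();
}
```

So moving __gnttab_init to core_initcall_sync() guarantees it observes the frames that xen_pvh_gnttab_setup populated.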
> 
> David
> 


From xen-devel-bounces@lists.xen.org Thu Jan 02 19:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 19:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynYy-00063n-3g; Thu, 02 Jan 2014 19:04:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VynYw-00063i-N9
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 19:04:10 +0000
Received: from [85.158.139.211:16250] by server-2.bemta-5.messagelabs.com id
	26/C8-29392-928B5C25; Thu, 02 Jan 2014 19:04:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388689447!7586631!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12039 invoked from network); 2 Jan 2014 19:04:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 19:04:09 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02J32jq027242
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 19:03:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02J30MT011188
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 19:03:01 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02J30hj006794; Thu, 2 Jan 2014 19:03:00 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 11:03:00 -0800
Date: Thu, 2 Jan 2014 14:02:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>, boris.ostrovsky@oracle.com
Message-ID: <20140102190256.GH3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<52C598C6.7040902@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C598C6.7040902@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 04:50:14PM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > The patches, also available at
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12
> > 
> > implement the necessary functionality to boot a PV guest in PVH mode.
> 
> In general this looks in much better shape now.  Some of the refactoring
> patches should be queued for 3.14.

<nods> Thank you for your review!
> 
> I'm not sure when the rest wants to go in, given that the PVH
> hypervisor ABI is not yet finalized and is missing support for a number
> of things, with no visible plan for how/when/if this missing
> functionality will be implemented.

We could follow the same path that Xen ARM in Linux did.

They put a thick stick in the ground with the caveat that this is
experimental. And we can do the same thing and lift the stick when we
are sure (mostly?) that it all works.

In regards to 'missing how/when/if': as you can see from Xen's TODO
there are quite a few items left - AMD support, shadow, etc. - so it
isn't just in the kernel.

And on the Linux side things need to be tested out and figured out.

I can't give you a 'when', but the 'how/if' will be addressed. We are
under-staffed so it will take time. I sincerely hope that anybody who is
interested in PVH will help as well.

I presume that the path is going to be similar to how dom0 support was
added in Linux - it took a couple of releases before it was OK, and
things kept being added for that extra 'Uh-oh, we forgot that'. There
are still some missing pieces.

Anyhow that said - I believe this decision should be yours and Boris's.

P.S.
Happy 2014!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 19:14:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 19:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyniW-0006ah-A6; Thu, 02 Jan 2014 19:14:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VyniV-0006ac-03
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 19:14:03 +0000
Received: from [85.158.143.35:2517] by server-3.bemta-4.messagelabs.com id
	08/3A-32360-A7AB5C25; Thu, 02 Jan 2014 19:14:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1388690040!9273423!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13704 invoked from network); 2 Jan 2014 19:14:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 19:14:01 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02JCvPo008352
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 19:12:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JCs56028513
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 19:12:55 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JCsUZ002240; Thu, 2 Jan 2014 19:12:54 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 11:12:53 -0800
Date: Thu, 2 Jan 2014 14:12:51 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "H. Peter Anvin" <hpa@zytor.com>
Message-ID: <20140102191250.GI3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<52C5B266.1090009@zytor.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C5B266.1090009@zytor.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 10:39:34AM -0800, H. Peter Anvin wrote:
> On 12/31/2013 08:35 PM, Konrad Rzeszutek Wilk wrote:
> > The patches, also available at
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12
> > 
> > implement the necessary functionality to boot a PV guest in PVH mode.
> > 
> 
> As x86 maintainer I would like to see a list of what pvops are necessary
> in PVH mode.  Obviously the hope is that the really invasive ones will

This patchset uses these:
+               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+               pv_cpu_ops.cpuid = xen_cpuid;

These are still in the code:

	pv_info = xen_info;
	pv_init_ops = xen_init_ops;
	pv_apic_ops = xen_apic_ops;
	pv_time_ops = xen_time_ops;

And the x86_init, apic, and smp_ops structures are still in force.

This is just the first step, so there might be other ones needed that
I failed to enumerate.

We are in the infancy period.

> not be necessary (and there are good reasons to believe that is within
> reach.)
> 
> 	-hpa
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 19:17:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 19:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vynm0-0006i4-Uv; Thu, 02 Jan 2014 19:17:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vynly-0006hx-Mv
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 19:17:38 +0000
Received: from [85.158.139.211:18205] by server-17.bemta-5.messagelabs.com id
	B5/42-19152-25BB5C25; Thu, 02 Jan 2014 19:17:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388690255!7588082!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29115 invoked from network); 2 Jan 2014 19:17:37 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 19:17:37 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02JHWrl013601
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 19:17:33 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JHVPN013818
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 19:17:32 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JHUic014420; Thu, 2 Jan 2014 19:17:31 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 11:17:30 -0800
Date: Thu, 2 Jan 2014 14:17:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102191721.GK3021@pegasus.dumpdata.com>
References: <1388502919-8601-1-git-send-email-konrad.wilk@oracle.com>
	<1388502919-8601-2-git-send-email-konrad.wilk@oracle.com>
	<52C57D9A.8050804@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C57D9A.8050804@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, annie.li@oracle.com,
	linux-kernel@vger.kernel.org, msw@amazon.com
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: Force to use v1 of grants.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 02:54:18PM +0000, David Vrabel wrote:
> On 31/12/13 15:15, Konrad Rzeszutek Wilk wrote:
> > We have the framework to use v2, but there are no backends that
> > actually use it. The end result is that on PV we use v2 grants
> > and on PVHVM v1. The v1 has a capacity of 512 grants per page while
> > the v2 has 256 grants per page. This means we lose about 50%
> > capacity - and if we want more than 16 VIFs (each VIF takes
> > 512 grants), then we are hitting the max per guest of 32.
> > 
> > Oracle-bug: 16039922
> > CC: annie.li@oracle.com
> > CC: msw@amazon.com
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> 
> What does v2 add anyway? Do we want to remove all v2 related code or are

A better status reporting page and two extra types of grants.

> we expecting to make use of it in the near future?

Yes, that is my understanding.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 19:17:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 19:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vynm8-0006jg-L5; Thu, 02 Jan 2014 19:17:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vynm6-0006jI-Ub
	for xen-devel@lists.xenproject.org; Thu, 02 Jan 2014 19:17:47 +0000
Received: from [193.109.254.147:7664] by server-14.bemta-14.messagelabs.com id
	74/5F-12628-A5BB5C25; Thu, 02 Jan 2014 19:17:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1388690263!8548282!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19874 invoked from network); 2 Jan 2014 19:17:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 19:17:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02JGIkd012485
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 19:16:19 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JGEkl007281
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 19:16:14 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02JGDgX007275; Thu, 2 Jan 2014 19:16:13 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 11:16:13 -0800
Date: Thu, 2 Jan 2014 14:16:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102191608.GJ3021@pegasus.dumpdata.com>
References: <1387206250-13963-1-git-send-email-konrad.wilk@oracle.com>
	<1387206250-13963-2-git-send-email-konrad.wilk@oracle.com>
	<52B01F67.1030108@m2r.biz>
	<20131217145150.GC4683@phenom.dumpdata.com>
	<20131217212333.GA31966@phenom.dumpdata.com>
	<20131231143258.GA3018@phenom.dumpdata.com>
	<52C58138.1030301@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C58138.1030301@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: axboe@kernel.dk, leosilva@linux.vnet.ibm.com, linux-fbdev@vger.kernel.org,
	ashley@ashleylai.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, linux-pci@vger.kernel.org,
	tomi.valkeinen@ti.com, dmitry.torokhov@gmail.com,
	tpmdd@selhorst.net, linux-kernel@vger.kernel.org,
	mail@srajiv.net, tpmdd-devel@lists.sourceforge.net,
	plagnioj@jcrosoft.com, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	netdev@vger.kernel.org, bhelgaas@google.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-input@vger.kernel.org, peterhuewe@gmx.de
Subject: Re: [Xen-devel] [PATCH v3 1/2] xen/pvhvm: If xen_platform_pci=0 is
 set don't blow up (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 03:09:44PM +0000, David Vrabel wrote:
> On 31/12/13 14:32, Konrad Rzeszutek Wilk wrote:
> >> That is because 'disks' is incorrect. It should have been 'ide-disks'
> >>
> >> [    0.000000] unrecognised option 'disks' in parameter 'xen_emul_unplug'
> >>
> >> With the 'ide-disks' it should work. I will update the description to
> >> mention 'ide-disks' instead of 'disks'. Thank you for finding this!
> >>
> > 
> > I've v4 with said update and will push it to Linus shortly.
> > 
> > Thanks!
> > 
> > P.S.
> > Here is v4:
> > 
> >>From 275a81e7496d3532e5b4752703c50a7c8355a6c7 Mon Sep 17 00:00:00 2001
> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Date: Tue, 26 Nov 2013 15:05:40 -0500
> > Subject: [PATCH] xen/pvhvm: If xen_platform_pci=0 is set don't blow up (v4).
> > 
> > The user has the option of disabling the platform driver:
> > 00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
> > 
> > which is used to unplug the emulated drivers (IDE, Realtek 8169, etc)
> > and allow the PV drivers to take over. If the user wishes
> > to disable that they can set:
> > 
> >   xen_platform_pci=0
> >   (in the guest config file)
> > 
> > or
> >   xen_emul_unplug=never
> >   (on the Linux command line)
> > 
> > except it does not work properly. The PV drivers still try to
> > load, and since the Xen platform driver has not run - and thus has
> > not initialized the grant tables - most of the PV drivers
> > stumble into:
> > 
> > input: Xen Virtual Keyboard as /devices/virtual/input/input5
> > input: Xen Virtual Pointer as /devices/virtual/input/input6M
> > ------------[ cut here ]------------
> > kernel BUG at /home/konrad/ssd/konrad/linux/drivers/xen/grant-table.c:1206!
> > invalid opcode: 0000 [#1] SMP
> > Modules linked in: xen_kbdfront(+) xenfs xen_privcmd
> > CPU: 6 PID: 1389 Comm: modprobe Not tainted 3.13.0-rc1upstream-00021-ga6c892b-dirty #1
> > Hardware name: Xen HVM domU, BIOS 4.4-unstable 11/26/2013
> > RIP: 0010:[<ffffffff813ddc40>]  [<ffffffff813ddc40>] get_free_entries+0x2e0/0x300
> > Call Trace:
> >  [<ffffffff8150d9a3>] ? evdev_connect+0x1e3/0x240
> >  [<ffffffff813ddd0e>] gnttab_grant_foreign_access+0x2e/0x70
> >  [<ffffffffa0010081>] xenkbd_connect_backend+0x41/0x290 [xen_kbdfront]
> >  [<ffffffffa0010a12>] xenkbd_probe+0x2f2/0x324 [xen_kbdfront]
> >  [<ffffffff813e5757>] xenbus_dev_probe+0x77/0x130
> >  [<ffffffff813e7217>] xenbus_frontend_dev_probe+0x47/0x50
> >  [<ffffffff8145e9a9>] driver_probe_device+0x89/0x230
> >  [<ffffffff8145ebeb>] __driver_attach+0x9b/0xa0
> >  [<ffffffff8145eb50>] ? driver_probe_device+0x230/0x230
> >  [<ffffffff8145eb50>] ? driver_probe_device+0x230/0x230
> >  [<ffffffff8145cf1c>] bus_for_each_dev+0x8c/0xb0
> >  [<ffffffff8145e7d9>] driver_attach+0x19/0x20
> >  [<ffffffff8145e260>] bus_add_driver+0x1a0/0x220
> >  [<ffffffff8145f1ff>] driver_register+0x5f/0xf0
> >  [<ffffffff813e55c5>] xenbus_register_driver_common+0x15/0x20
> >  [<ffffffff813e76b3>] xenbus_register_frontend+0x23/0x40
> >  [<ffffffffa0015000>] ? 0xffffffffa0014fff
> >  [<ffffffffa001502b>] xenkbd_init+0x2b/0x1000 [xen_kbdfront]
> >  [<ffffffff81002049>] do_one_initcall+0x49/0x170
> > 
> > .. snip..
> > 
> > which is hardly nice. This patch fixes this by having each
> > PV driver check for:
> >  - if running in PV, then it is fine to execute (as that is their
> >    native environment).
> >  - if running in HVM, check if user wanted 'xen_emul_unplug=never',
> >    in which case bail out and don't load any PV drivers.
> >  - if running in HVM, and if PCI device 5853:0001 (xen_platform_pci)
> >    does not exist, then bail out and do not load PV drivers.
> >  - (v2) if running in HVM, and if the user wanted 'xen_emul_unplug=ide-disks',
> >    then bail out for all PV devices _except_ the block one.
> >    Ditto for the network one ('nics').
> >  - (v2) if running in HVM, and if the user wanted 'xen_emul_unplug=unnecessary'
> >    then load block PV driver, and also setup the legacy IDE paths.
> >    In (v3) make it actually load PV drivers.
> [...]
> > --- a/arch/x86/xen/platform-pci-unplug.c
> > +++ b/arch/x86/xen/platform-pci-unplug.c
> > @@ -69,6 +69,80 @@ static int check_platform_magic(void)
> >  	return 0;
> >  }
> >  
> > +bool xen_has_pv_devices()
> > +{
> > +	if (!xen_domain())
> > +		return false;
> > +
> > +	/* PV domains always have them. */
> > +	if (xen_pv_domain())
> > +		return true;
> > +
> > +	/* And user has xen_platform_pci=0 set in guest config as
> > +	 * driver did not modify the value. */
> > +	if (xen_platform_pci_unplug == 0)
> > +		return false;
> > +
> > +	if (xen_platform_pci_unplug & XEN_UNPLUG_NEVER)
> > +		return false;
> > +
> > +	if (xen_platform_pci_unplug & XEN_UNPLUG_ALL)
> > +		return true;
> > +
> > +	/* This is an odd one - we are going to run legacy
> > +	 * and PV drivers at the same time. */
> > +	if (xen_platform_pci_unplug & XEN_UNPLUG_UNNECESSARY)
> > +		return true;
> > +
> > +	/* And the caller has to follow with xen_pv_{disk,nic}_devices
> > +	 * to be certain which driver can load. */
> > +	return false;
> 
> This may result in:
> 
> xen_has_pv_devices() == false
> xen_has_pv_disk_devices() == true

Yes.
> 
> which looks odd to me.  Surely xen_has_pv_*_devices() is a subset of
> xen_has_pv_devices()?

I wish it were.  This drives me nuts, and I couldn't come up with a
sensible way to make it work for those special drivers that have their
own xen_emul_unplug option without special-casing
'xen_has_pv_devices'.

Perhaps it should be renamed to 'xen_has_pv_generic_devices' ?

> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 19:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 19:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VynzL-0007nr-Dc; Thu, 02 Jan 2014 19:31:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1VynzJ-0007nm-U2
	for xen-devel@lists.xensource.com; Thu, 02 Jan 2014 19:31:26 +0000
Received: from [85.158.137.68:4104] by server-17.bemta-3.messagelabs.com id
	D2/78-15965-D8EB5C25; Thu, 02 Jan 2014 19:31:25 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388691082!6997780!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 607 invoked from network); 2 Jan 2014 19:31:24 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 19:31:24 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s02JUuSX017856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Thu, 2 Jan 2014 14:30:56 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s02JUquL017854;
	Thu, 2 Jan 2014 14:30:52 -0500
Date: Thu, 2 Jan 2014 15:30:52 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140102193051.GA17665@andromeda.dapyr.net>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20131230195648.GA2937@phenom.dumpdata.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> > On 24/12/2013 12:55, Andrew Cooper wrote:
> > > On 24/12/2013 12:31, David Vrabel wrote:
> > >> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
> > >>> Hey,
> > >>>
> > >>> This is with Linux and
> > >>>
> > >>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
> > >>>
> > >>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
> > >>> compiled with PVH.
> > >>>
> > >>> I think the same problem would show up if I tried to launch a PV guest 
> > >>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
> > >>> with the toolstack.
> > >> If a kernel with both PVH and PV support enabled cannot boot in PV mode
> > >> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
> > >>
> > >> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
> > >> even older are still widely deployed.
> > >>
> > >> David
> > > 
> > > I believe that the problem is because the elf parsing code is not
> > > sufficiently forward-compatible aware, and rejects the PVH kernel
> > > because it has an unrecognised Xen elf note field.  This is not a kernel
> > > bug.
> 
> Xen 4.1 has the logic to ignore unrecognized Xen ELF note fields. But no
> Xen version has the logic to ignore an unrecognized string inside the
> "SUPPORTED_FEATURES" note.
> 
> > > 
> > > The elf parsing should accept unrecognised fields for forward
> > > compatibility, which would then allow a PV & PVH compiled kernel to run
> > > in PV mode.
> > 
> > It should but it doesn't, so a different way needs to be found for the
> > kernel to report (optional) PVH support.  A method that is compatible
> > with older toolstacks.
> 
> Also known as changes to the PVH ABI.
> 
> Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager pro tem), Jan,
> 
> a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
> just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> for PVH guests.
> 
> b) Or dropping that altogether and introducing a new Xen elf note field, say:
> 
> XEN_ELFNOTE_PVH_VERSION
> 

c).

Use 'XEN_ELFNOTE_SUPPORTED_FEATURES', whose header comment says:
 *
 * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
 * kernel to specify support for features that older hypervisors don't
 * know about. The set of features 4.2 and newer hypervisors will
 * consider supported by the kernel is the combination of the sets
 * specified through this and the string note.

for the hvm_callback_vector feature.

> 
> Which way should we do this?

The c) way looks the best. Ian, would you be OK with that idea for 4.4?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 20:27:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 20:27:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyor0-0001ZO-6O; Thu, 02 Jan 2014 20:26:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vyoqz-0001ZJ-9y
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 20:26:53 +0000
Received: from [193.109.254.147:35816] by server-7.bemta-14.messagelabs.com id
	1B/7D-15500-C8BC5C25; Thu, 02 Jan 2014 20:26:52 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1388694410!8532115!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11001 invoked from network); 2 Jan 2014 20:26:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 20:26:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,592,1384300800"; d="scan'208";a="89350059"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 20:26:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 15:26:48 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vyoqv-0008Ax-4p	for
	xen-devel@lists.xen.org; Thu, 02 Jan 2014 20:26:49 +0000
Message-ID: <52C5CB89.70804@citrix.com>
Date: Thu, 2 Jan 2014 20:26:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

For some post-holiday hacking, I tried playing around with getting hwloc
to understand Xen's full system topology, rather than the faked up
topology dom0 receives.

I present here some code which works (on some interestingly shaped
servers in the XenRT test pool), and some discoveries/problems found
along the way.

Code can be found at:
http://xenbits.xen.org/gitweb/?p=people/andrewcoop/hwloc.git;a=shortlog;h=refs/heads/hwloc-xen-topology-v1

You will need a libxc with the following patch:
http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/hwloc-support-experimental

Instructions for use can be found in the commit message of the
hwloc.git tree.  It is worth noting that, with the help of the
hwloc-devel list, v2 is already quite a bit different, but it is still
in progress.


Anyway, on to the Xen issues I encountered.  If memory serves, some of
them might have been brought up on xen-devel in the past.

The first problem, as indicated by the extra patch required against
libxc, is that the current interface for xc_{topology,numa}info() sucks
if you are not libxl.  The current interface forces the caller to handle
hypercall bounce buffering, which is even harder to do sensibly as half
the bounce buffer macros are private to libxc.  Bounce buffering is the
kind of detail which libxc should deal with on behalf of its callers,
and should only be exposed to callers who want to do something special.

My patch implements xc_{topology,numa}info_bounced() (name up for
reconsideration), which takes some uint{32,64}_t arrays (optionally
NULL) and properly bounce buffers them.  As a result, hwloc does not
need to mess around with any of the bounce buffering.
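As a rough sketch of what such a wrapper could look like (the signature
below is my guess at the shape, not the actual patch, and
fake_hypercall() stands in for the real XEN_SYSCTL_topologyinfo call):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real hypercall; pretend Xen fills in the buffers. */
static void fake_hypercall(uint32_t *core, uint32_t *socket, unsigned n)
{
    for (unsigned i = 0; i < n; ++i) {
        core[i] = i;
        socket[i] = i / 4;
    }
}

/* Hypothetical wrapper shape: the caller hands over plain arrays (any of
 * which may be NULL) and the wrapper owns all of the bounce buffering.
 * A real implementation would use hypercall-safe buffers rather than
 * calloc(). */
int xc_topologyinfo_bounced(uint32_t *cpu_to_core, uint32_t *cpu_to_socket,
                            unsigned max_cpus)
{
    uint32_t *b_core = calloc(max_cpus, sizeof(uint32_t));
    uint32_t *b_sock = calloc(max_cpus, sizeof(uint32_t));
    if (!b_core || !b_sock) { free(b_core); free(b_sock); return -1; }

    fake_hypercall(b_core, b_sock, max_cpus);

    /* Copy back only into the arrays the caller actually asked for. */
    if (cpu_to_core)
        memcpy(cpu_to_core, b_core, max_cpus * sizeof(uint32_t));
    if (cpu_to_socket)
        memcpy(cpu_to_socket, b_sock, max_cpus * sizeof(uint32_t));

    free(b_core);
    free(b_sock);
    return 0;
}
```

The point of the shape is that a NULL array simply means "I don't want
this information", and no bounce-buffer macros leak out of libxc.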

The second problem is with the choice of max_node_id, which is
MAX_NUMNODES-1, or 63.  This means that the toolstack has to bounce a
16k buffer (64 * 64 * uint32_t) to get the node-node distances, even on
a single or dual node system.  The issue is less pronounced with the
node_to_mem{size,free} arrays, which only have to be 64 * uint64_t long,
but it is still wasteful, especially if node_to_memfree is being
periodically polled.  Having nr_node_ids set dynamically (similar to
nr_cpu_ids) would alleviate this overhead, as the number of nodes
available on the system is unconditionally static after boot.
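For illustration, the buffer arithmetic above works out as follows (the
helper names are mine, not part of any real API):

```c
#include <stdint.h>
#include <stddef.h>

/* Size of the node-node distance matrix buffer: one uint32_t per pair. */
size_t node_distance_bytes(size_t nr_nodes)
{
    return nr_nodes * nr_nodes * sizeof(uint32_t);
}

/* Size of a node_to_memsize/node_to_memfree array: one uint64_t per node. */
size_t node_to_mem_bytes(size_t nr_nodes)
{
    return nr_nodes * sizeof(uint64_t);
}
```

With the fixed limit of 64 nodes, node_distance_bytes(64) is 16384
bytes, the 16k buffer mentioned above; a dynamic nr_node_ids of 2 would
cut that to 16 bytes.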

The third problem is the one which created the only real bug in my hwloc
implementation.  Cores are numbered per-socket in Xen, while sockets,
numa nodes and cpus are numbered on an absolute scale.  There is
currently a gross hack in my hwloc code which adds (socket_id *
cores_per_socket * threads_per_core) onto each core id to make them
similarly numbered on an absolute scale.  This is fine for a homogeneous
system, but not for a heterogeneous system.
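The hack amounts to something like this (illustrative only; the result
is only unique when every socket has the same geometry):

```c
#include <stdint.h>

/* Renumber a per-socket core id onto an absolute scale by offsetting
 * with the socket id.  This breaks on heterogeneous systems, where
 * cores_per_socket and threads_per_core differ between sockets. */
uint32_t absolute_core_id(uint32_t socket_id, uint32_t core_id,
                          uint32_t cores_per_socket,
                          uint32_t threads_per_core)
{
    return socket_id * cores_per_socket * threads_per_core + core_id;
}
```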

Relatedly, when debugging the third problem on an AMD Opteron 63xx
system, I noticed that it advertises 8 cores per socket and 2 threads
per core, but numbers the cores 1-16 on each socket.  This is broken.
It should either be 16 cores per socket and 1 thread per core, or really
8 cores per socket and 2 threads per core, with the cores numbered 1-8
and each pair of cpus sharing the same core id.

Fourth, the API for identifying offline cpus is broken.  To mark a cpu
as offline, it has its topology information shot, meaning that an
offline cpu cannot be positively located in the topology.  I happen to
know it can be, as Xen writes the records sequentially: a single offline
cpu can be identified from the valid information on either side, but a
block of offline cpus becomes rather harder to locate.  Ideally,
XEN_SYSCTL_topologyinfo should return 4 parameters, with one of them
being a bitmap from 0 to max_cpu_index identifying which cpus are
online, and writing the correct core/socket/node information (when
known) into the other parameters.  However, being an ABI now makes this
somewhat harder to do.
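With such a bitmap, locating an offline cpu becomes a trivial bit test;
a sketch (the bitmap layout here is an assumption, not an existing Xen
structure):

```c
#include <stdint.h>

/* Test bit 'cpu' in a packed online-cpu bitmap running from 0 to
 * max_cpu_index, least significant bit first within each byte. */
int cpu_online(const uint8_t *online_bitmap, unsigned cpu)
{
    return (online_bitmap[cpu / 8] >> (cpu % 8)) & 1;
}
```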

Fifth, Xen has no way of querying the cpu cache information.  hwloc
likes to know the entire cache hierarchy, which is arguably more useful
for its primary purpose of optimising HPC than for simply viewing the
Xen topology, but is nonetheless a missing feature as far as Xen is
concerned.  I was considering adding a sysctl along the lines of "please
execute cpuid with these parameters on that pcpu and give me the answers".
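Such a sysctl might carry a payload along these lines; every name below
is hypothetical, as nothing like this exists in Xen today:

```c
#include <stdint.h>

/* Hypothetical "execute cpuid on that pcpu" sysctl payload: the caller
 * fills in the IN fields, and the hypervisor would run cpuid on the
 * chosen physical cpu and fill in the OUT registers. */
struct xen_sysctl_cpuid_on_pcpu {
    uint32_t pcpu;     /* IN: which physical cpu to execute cpuid on */
    uint32_t leaf;     /* IN: eax input */
    uint32_t subleaf;  /* IN: ecx input */
    uint32_t eax, ebx, ecx, edx;  /* OUT: cpuid results */
};
```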

Sixth and finally, which is also the hardest problem conceptually to
solve, Xen has no notion of IO proximity.  Devices on the system can
report their location using _PXM() methods in the DSDT/SSDTs, but only
dom0 can gather this information, and dom0 itself doesn't have an
accurate view of the NUMA or CPU topology.


Anyway - that is probably enough rambling.  I don't expect much/any of
this to be resolved before the 4.5 dev window opens, but bringing these
issues to light might at least get some of them discussed.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 21:25:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 21:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyplD-0004P5-Tm; Thu, 02 Jan 2014 21:24:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1VyplC-0004P0-Ht
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 21:24:58 +0000
Received: from [85.158.137.68:12532] by server-1.bemta-3.messagelabs.com id
	F4/A0-29598-929D5C25; Thu, 02 Jan 2014 21:24:57 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388697895!3308476!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32177 invoked from network); 2 Jan 2014 21:24:56 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 21:24:56 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 90E258407B;
	Thu,  2 Jan 2014 22:24:55 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id b0oXyd4loigx; Thu,  2 Jan 2014 22:24:55 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 394AF8407A;
	Thu,  2 Jan 2014 22:24:55 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1Vypl7-0005mc-Iz; Thu, 02 Jan 2014 22:24:53 +0100
Date: Thu, 2 Jan 2014 22:24:53 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
References: <52C5CB89.70804@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C5CB89.70804@citrix.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Andrew Cooper, le Thu 02 Jan 2014 20:26:49 +0000, a =E9crit :
> Cores are numbered per-socket in Xen, while sockets,
> numa nodes and cpus are numbered on an absolute scale.  There is
> currently a gross hack in my hwloc code which adds (socket_id *
> cores_per_socket * threads_per_core) onto each core id to make them
> similarly numbered on an absolute scale.  This is fine for a homogeneous
> system, but not for a hetrogeneous system.

BTW, hwloc does not need these physical ids to be unique, it can cope
with duplication and whatnot.  That said, having a coherent interface at
the Xen layer would be a good thing, indeed :)

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 21:25:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 21:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyple-0004QX-Cz; Thu, 02 Jan 2014 21:25:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vyplc-0004Q8-LQ
	for xen-devel@lists.xensource.com; Thu, 02 Jan 2014 21:25:24 +0000
Received: from [193.109.254.147:65294] by server-13.bemta-14.messagelabs.com
	id 2F/41-19374-449D5C25; Thu, 02 Jan 2014 21:25:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388697921!8545389!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24180 invoked from network); 2 Jan 2014 21:25:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 21:25:23 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s02LO8pu005081
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 2 Jan 2014 21:24:08 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02LO5fA026891
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 21:24:06 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s02LO5bx012503; Thu, 2 Jan 2014 21:24:05 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 13:24:04 -0800
Date: Thu, 2 Jan 2014 16:23:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20140102212359.GA11592@pegasus.dumpdata.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102193051.GA17665@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140102193051.GA17665@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 03:30:52PM -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> > > On 24/12/2013 12:55, Andrew Cooper wrote:
> > > > On 24/12/2013 12:31, David Vrabel wrote:
> > > >> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
> > > >>> Hey,
> > > >>>
> > > >>> This is with Linux and
> > > >>>
> > > >>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
> > > >>>
> > > >>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
> > > >>> compiled with PVH.
> > > >>>
> > > >>> I think the same problem would show up if I tried to launch a PV guest 
> > > >>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
> > > >>> with the toolstack.
> > > >> If a kernel with both PVH and PV support enabled cannot boot in PV mode
> > > >> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
> > > >>
> > > >> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
> > > >> even older are still widely deployed.
> > > >>
> > > >> David
> > > > 
> > > > I believe that the problem is because the elf parsing code is not
> > > > sufficiently forward-compatible aware, and rejects the PVH kernel
> > > > because it has an unrecognised Xen elf note field.  This is not a kernel
> > > > bug.
> > 
> > It (Xen 4.1) has the logic to ignore unrecognized Xen elf note fields. But
> > it (all Xen versions) do not have the logic to ignore in the "SUPPORTED_FEATURES"
> > an unrecognized string.
> > 
> > > > 
> > > > The elf parsing should accept unrecognised fields for forward
> > > > compatibility, which would then allow a PV & PVH compiled kernel to run
> > > > in PV mode.
> > > 
> > > It should but it doesn't, so a different way needs to be found for the
> > > kernel to report (optional) PVH support.  A method that is compatible
> > > with older toolstacks.
> > 
> > Also known as changes to the PVH ABI.
> > 
> > Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager-pro-temp), Jan,
> > 
> > a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
> > just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> > for PVH guests.
> > 
> > b) Or dropping that altogether and introducing a new Xen elf note field, say:
> > 
> > XEN_ELFNOTE_PVH_VERSION
> > 
> 
> c).
> 
> Use the 'XEN_ELFNOTE_SUPPORTED_FEATURES' which says:
>  *
>  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
>  * kernel to specify support for features that older hypervisors don't
>  * know about. The set of features 4.2 and newer hypervisors will
>  * consider supported by the kernel is the combination of the sets
>  * specified through this and the string note.
> 
> for hvm_callback_vector parameter.
> 
> > 
> > Which way should we do this?
> 
> The c) way looks the best. Ian, would you be OK with that idea for 4.4?

Seems that not only does it work without any changes in Xen 4.4, but it
is all in the Linux kernel, and it allows us to boot a Linux kernel with
both PV and PVH support:

diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 56f42c0..2ce56bf 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -11,12 +11,22 @@
 #include <asm/page_types.h>
 
 #include <xen/interface/elfnote.h>
+#include <xen/interface/features.h>
 #include <asm/xen/interface.h>
 
 #ifdef CONFIG_XEN_PVH
-#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
+/* Note the lack of 'hvm_callback_vector'. Older hypervisor will
+ * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
+ * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
+ */
+#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
+		      (1 << XENFEAT_auto_translated_physmap) | \
+		      (1 << XENFEAT_supervisor_mode_kernel) | \
+		      (1 << XENFEAT_hvm_callback_vector))
 #else
 #define PVH_FEATURES_STR  ""
+#define PVH_FEATURES (0)
 #endif
 
 	__INIT
@@ -102,6 +112,9 @@ NEXT_HYPERCALL(arch_6)
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
 	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
+	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
+						(1 << XENFEAT_writable_page_tables) |
+						(1 << XENFEAT_dom0))
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/interface/elfnote.h b/include/xen/interface/elfnote.h
index 0360b15..6f4eae3 100644
--- a/include/xen/interface/elfnote.h
+++ b/include/xen/interface/elfnote.h
@@ -140,6 +140,19 @@
  */
 #define XEN_ELFNOTE_SUSPEND_CANCEL 14
 
+/*
+ * The features supported by this kernel (numeric).
+ *
+ * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
+ * kernel to specify support for features that older hypervisors don't
+ * know about. The set of features 4.2 and newer hypervisors will
+ * consider supported by the kernel is the combination of the sets
+ * specified through this and the string note.
+ *
+ * LEGACY: FEATURES
+ */
+#define XEN_ELFNOTE_SUPPORTED_FEATURES 17
+
 #endif /* __XEN_PUBLIC_ELFNOTE_H__ */
 
 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102193051.GA17665@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140102193051.GA17665@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 03:30:52PM -0400, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> > > On 24/12/2013 12:55, Andrew Cooper wrote:
> > > > On 24/12/2013 12:31, David Vrabel wrote:
> > > >> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
> > > >>> Hey,
> > > >>>
> > > >>> This is with Linux and
> > > >>>
> > > >>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
> > > >>>
> > > >>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
> > > >>> compiled with PVH.
> > > >>>
> > > >>> I think the same problem would show up if I tried to launch a PV guest 
> > > >>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
> > > >>> with the toolstack.
> > > >> If a kernel with both PVH and PV support enabled cannot boot in PV mode
> > > >> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
> > > >>
> > > >> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
> > > >> even older are still widely deployed.
> > > >>
> > > >> David
> > > > 
> > > > I believe that the problem is because the elf parsing code is not
> > > > sufficiently forward-compatible aware, and rejects the PVH kernel
> > > > because it has an unrecognised Xen elf note field.  This is not a kernel
> > > > bug.
> > 
> > It (Xen 4.1) has the logic to ignore unrecognized Xen elf note fields. But
> > no Xen version has the logic to ignore an unrecognized string in the
> > "SUPPORTED_FEATURES" note.
> > 
> > > > 
> > > > The elf parsing should accept unrecognised fields for forward
> > > > compatibility, which would then allow a PV & PVH compiled kernel to run
> > > > in PV mode.
> > > 
> > > It should but it doesn't, so a different way needs to be found for the
> > > kernel to report (optional) PVH support.  A method that is compatible
> > > with older toolstacks.
> > 
> > Also known as changes to the PVH ABI.
> > 
> > Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager-pro-temp), Jan,
> > 
> > a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
> > just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> > for PVH guests.
> > 
> > b) Or dropping that altogether and introducing a new Xen elf note field, say:
> > 
> > XEN_ELFNOTE_PVH_VERSION
> > 
> 
> c).
> 
> Use the 'XEN_ELFNOTE_SUPPORTED_FEATURES' which says:
>  *
>  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
>  * kernel to specify support for features that older hypervisors don't
>  * know about. The set of features 4.2 and newer hypervisors will
>  * consider supported by the kernel is the combination of the sets
>  * specified through this and the string note.
> 
> for hvm_callback_vector parameter.
> 
> > 
> > Which way should we do this?
> 
> The c) way looks the best. Ian, would you be OK with that idea for 4.4?

Seems that not only does it work without any changes in Xen 4.4, but it
is all in the Linux kernel, and it allows us to boot a Linux kernel built
with both PV and PVH support:


diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 56f42c0..2ce56bf 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -11,12 +11,22 @@
 #include <asm/page_types.h>
 
 #include <xen/interface/elfnote.h>
+#include <xen/interface/features.h>
 #include <asm/xen/interface.h>
 
 #ifdef CONFIG_XEN_PVH
-#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
+/* Note the lack of 'hvm_callback_vector'. Older hypervisors will
+ * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
+ * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
+ */
+#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
+		      (1 << XENFEAT_auto_translated_physmap) | \
+		      (1 << XENFEAT_supervisor_mode_kernel) | \
+		      (1 << XENFEAT_hvm_callback_vector))
 #else
 #define PVH_FEATURES_STR  ""
+#define PVH_FEATURES (0)
 #endif
 
 	__INIT
@@ -102,6 +112,9 @@ NEXT_HYPERCALL(arch_6)
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
 	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
+	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
+						(1 << XENFEAT_writable_page_tables) |
+						(1 << XENFEAT_dom0))
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/interface/elfnote.h b/include/xen/interface/elfnote.h
index 0360b15..6f4eae3 100644
--- a/include/xen/interface/elfnote.h
+++ b/include/xen/interface/elfnote.h
@@ -140,6 +140,19 @@
  */
 #define XEN_ELFNOTE_SUSPEND_CANCEL 14
 
+/*
+ * The features supported by this kernel (numeric).
+ *
+ * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
+ * kernel to specify support for features that older hypervisors don't
+ * know about. The set of features 4.2 and newer hypervisors will
+ * consider supported by the kernel is the combination of the sets
+ * specified through this and the string note.
+ *
+ * LEGACY: FEATURES
+ */
+#define XEN_ELFNOTE_SUPPORTED_FEATURES 17
+
 #endif /* __XEN_PUBLIC_ELFNOTE_H__ */
 
 /*
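
The combination rule quoted above (the union of the numeric note and the
string note) can be sketched in plain C. The XENFEAT_* bit numbers below
are assumptions mirroring xen/interface/features.h, and the helpers are
illustrative, not Xen code:

```c
#include <stdint.h>

/* Assumed XENFEAT_* bit numbers, mirroring xen/interface/features.h. */
#define XENFEAT_writable_page_tables     0
#define XENFEAT_auto_translated_physmap  2
#define XENFEAT_supervisor_mode_kernel   3
#define XENFEAT_hvm_callback_vector      8

/* A 4.2+ hypervisor considers supported the union of the numeric
 * XEN_ELFNOTE_SUPPORTED_FEATURES bitmap and the bits parsed from the
 * legacy XEN_ELFNOTE_FEATURES string; a pre-4.2 hypervisor sees only
 * the string and silently ignores the unknown numeric note. */
static inline uint32_t supported_features(uint32_t numeric_note,
                                          uint32_t string_note_bits)
{
    return numeric_note | string_note_bits;
}

/* True if every feature the toolstack requires is in the supported set. */
static inline int features_ok(uint32_t supported, uint32_t required)
{
    return (supported & required) == required;
}
```

This is why moving hvm_callback_vector into the numeric note lets the same
kernel boot as plain PV on Xen 4.1: the old toolstack parses only the
string note, which no longer contains anything it rejects.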

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 21:38:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 21:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VypyV-0005Af-5u; Thu, 02 Jan 2014 21:38:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VypyT-00059d-OD
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 21:38:42 +0000
Received: from [85.158.139.211:56491] by server-2.bemta-5.messagelabs.com id
	0E/0A-29392-06CD5C25; Thu, 02 Jan 2014 21:38:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388698718!7382849!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2600 invoked from network); 2 Jan 2014 21:38:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 21:38:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,593,1384300800"; d="scan'208";a="89367394"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 21:38:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 16:38:37 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1VypyP-0000vj-7G	for
	xen-devel@lists.xen.org; Thu, 02 Jan 2014 21:38:37 +0000
Message-ID: <52C5DC5D.3070106@citrix.com>
Date: Thu, 2 Jan 2014 21:38:37 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
References: <52C5CB89.70804@citrix.com>
In-Reply-To: <52C5CB89.70804@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 20:26, Andrew Cooper wrote:
> Hello,
>
> For some post-holiday hacking, I tried playing around with getting hwloc
> to understand Xen's full system topology, rather than the faked up
> topology dom0 receives.
>
> I present here some code which works (on some interestingly shaped
> servers in the XenRT test pool), and some discoveries/problems found
> along the way.
>
> Code can be found at:
> http://xenbits.xen.org/gitweb/?p=people/andrewcoop/hwloc.git;a=shortlog;h=refs/heads/hwloc-xen-topology-v1
>
> You will need a libxc with the following patch:
> http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/hwloc-support-experimental
>
> Instructions for use are in the commit message of the hwloc.git tree.
> It is worth noting that with the help of the hwloc-devel list, v2 is
> already quite a bit different, but is still in progress.
>
>
> Anyway, for the Xen issues I encountered.  If memory serves, some of
> them might have been brought up on xen-devel in the past.
>
> The first problem, as indicated by the extra patch required against
> libxc, is that the current interface for xc_{topology,numa}info() sucks if
> you are not libxl.  The current interface forces the caller to handle
> hypercall bounce buffering, which is even harder to do sensibly as half
> the bounce buffer macros are private to libxc.  Bounce buffering is the
> kind of detail which libxc should deal with on behalf of its callers,
> and should only be exposed to callers who want to do something special.
>
> My patch implements xc_{topology,numa}info_bounced() (name up for
> reconsideration), which takes some uint{32,64}_t arrays (optionally
> NULL) and properly bounce buffers them.  This results in not needing to
> mess around with any of the bounce buffering in hwloc.
>
> The second problem is with the choice of max_node_id, which is
> MAX_NUMNODES-1, or 63.  This means that the toolstack has to bounce a
> 16k buffer (64 * 64 * uint32_t) to get the node-node distances, even on
> a single or dual node system.  The issue is less pronounced with the
> node_to_mem{size,free} arrays, which only have to be 64 * uint64_t long,
> but still wasteful especially if node_to_memfree is being periodically
> polled.  Having nr_node_ids set dynamically (similar to nr_cpu_ids)
> would alleviate this overhead, as the number of nodes available on the
> system will unconditionally be static after boot.
>
> The third problem is the one which created the only real bug in my hwloc
> implementation.  Cores are numbered per-socket in Xen, while sockets,
> numa nodes and cpus are numbered on an absolute scale.  There is
> currently a gross hack in my hwloc code which adds (socket_id *
> cores_per_socket * threads_per_core) onto each core id to make them
> similarly numbered on an absolute scale.  This is fine for a homogeneous
> system, but not for a heterogeneous system.
>
> Relatedly, when debugging the third problem on an AMD Opteron 63xx
> system, I noticed that it advertises 8 cores per socket and 2 threads
> per core, but numbers the cores 1-16 on each socket.  This is broken.
> It should either be 16 cores per socket and 1 thread per core, or really
> 8 cores per socket and 2 threads per core, with the cores numbered 1-8
> and each pair of cpus with the same core id.
>
> Fourth, the API for identifying offline cpus is broken.  To mark a cpu
> as offline, it has its topology information shot, meaning that an
> offline cpu cannot be positively located in the topology.  I happen to
> know it can be, as Xen writes the records sequentially, so a single offline
> cpu can be identified based on the valid information either side, but a
> block of offline cpus becomes rather harder to locate.  Ideally,
> XEN_SYSCTL_topologyinfo should return 4 parameters, with one of them
> being a bitmap from 0 to max_cpu_index identifying which cpus are
> online, and writing the correct core/socket/node information (when
> known) into the other parameters.  However, being an ABI now makes this
> somewhat harder to do.
>
> Fifth, Xen has no way of querying the cpu cache information.  hwloc
> likes to know the entire cache hierarchy, which is arguably more useful
> for its primary purpose of optimising HPC than for simply viewing the
> Xen topology, but is nonetheless a missing feature as far as Xen is
> concerned.  I was considering adding a sysctl along the lines of "please
> execute cpuid with these parameters on that pcpu and give me the answers".
>
> Sixth and finally, which is also the hardest problem conceptually to
> solve, Xen has no notion of IO proximity.  Devices on the system can
> report their location using _PXM() methods in the DSDT/SSDTs, but only
> dom0 can gather this information, and doesn't have an accurate view of
> the NUMA or CPU topology.
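
The per-socket renumbering hack described in the third problem above can
be sketched as follows; the function and parameter names are illustrative,
not actual hwloc or libxc identifiers, and as the text notes this is only
valid on a homogeneous system:

```c
#include <stdint.h>

/* Xen reports core ids per-socket, while sockets, NUMA nodes and cpus
 * are numbered absolutely.  Offset each core id by the number of cpu
 * positions in all preceding sockets to make it globally unique. */
static unsigned absolute_core_id(unsigned socket_id, unsigned core_id,
                                 unsigned cores_per_socket,
                                 unsigned threads_per_core)
{
    return socket_id * cores_per_socket * threads_per_core + core_id;
}
```

With 8 cores and 2 threads per socket, core 3 on socket 1 becomes id 19,
so ids no longer collide across sockets.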

Seventh, from some very up-to-the-minute hacking:

XEN_SYSCTL_numainfo is not giving back valid information.

From a Haswell-EP SDP, running XenServer trunk (xen-4.3 based):

Xen NUMA information:
  numa count 64, max numa id 1
  node[  0], size 19327352832, free 15262810112
  node[  1], size 17179869184, free 15961382912

Which sums to ~2GB more than the total system RAM of:
(XEN) System RAM: 32320MB (33096268kB)

It would appear that a node's memsize includes IO space encompassed by the
node's start/end pfns, rather than just the RAM contained inside those pfns.

(XEN) SRAT: Node 0 PXM 0 0-480000000
(XEN) SRAT: Node 1 PXM 1 480000000-880000000

Is this intentional or an oversight?
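
The RAM-only accounting the message argues for could be computed by
intersecting the node's SRAT range with the machine's RAM map. This is an
illustrative sketch under that assumption, not actual Xen code:

```c
#include <stdint.h>

struct range { uint64_t start, end; };  /* [start, end), in bytes */

/* Sum only the RAM inside the node's SRAT range, instead of taking
 * end - start, which also counts IO holes within the node. */
static uint64_t node_ram_size(struct range node,
                              const struct range *ram, int nr_ram)
{
    uint64_t total = 0;
    int i;

    for (i = 0; i < nr_ram; i++) {
        uint64_t s = node.start > ram[i].start ? node.start : ram[i].start;
        uint64_t e = node.end < ram[i].end ? node.end : ram[i].end;
        if (e > s)
            total += e - s;
    }
    return total;
}
```

With SRAT ranges like those quoted, any IO hole inside node 0 (for
example the usual PCI hole below 4GB) would no longer inflate its size.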

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 21:50:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 21:50:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyq9s-00060W-Fl; Thu, 02 Jan 2014 21:50:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vyq9q-00060R-DR
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 21:50:26 +0000
Received: from [85.158.143.35:63549] by server-1.bemta-4.messagelabs.com id
	FB/37-02132-12FD5C25; Thu, 02 Jan 2014 21:50:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388699423!9341686!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19214 invoked from network); 2 Jan 2014 21:50:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 21:50:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,593,1384300800"; d="scan'208";a="87182158"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Jan 2014 21:50:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 16:50:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vyq9W-00016Y-Se;
	Thu, 02 Jan 2014 21:50:06 +0000
Message-ID: <52C5DF0E.1030500@citrix.com>
Date: Thu, 2 Jan 2014 21:50:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, Xen-devel List
	<xen-devel@lists.xen.org>
References: <52C5CB89.70804@citrix.com>
	<20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 21:24, Samuel Thibault wrote:
> Hello,
>
> Andrew Cooper, on Thu 02 Jan 2014 20:26:49 +0000, wrote:
>> Cores are numbered per-socket in Xen, while sockets,
>> numa nodes and cpus are numbered on an absolute scale.  There is
>> currently a gross hack in my hwloc code which adds (socket_id *
>> cores_per_socket * threads_per_core) onto each core id to make them
>> similarly numbered on an absolute scale.  This is fine for a homogeneous
>> system, but not for a heterogeneous system.
> BTW, hwloc does not need these physical ids to be unique, it can cope
> with duplication and whatnot.  That said, having a coherent interface at
> the Xen layer would be a good thing, indeed :)
>
> Samuel

If I take out the described hack, I am presented with

****************************************************************************
* hwloc has encountered what looks like an error from the operating system.
*
* object (Core P#0 cpuset 0x30000003) intersection without inclusion!
* Error occurred in topology.c line 853
*
* Please report this error message to the hwloc user's mailing list,
* along with the output from the hwloc-gather-topology.sh script.
****************************************************************************

Which I took to mean "I have done something stupid".  I looked and saw
that I was attempting to insert a second Core P#0 object with a
different cpuset and decided to renumber the cores so they didn't
overlap in physical ids.

If you believe that this should indeed work, then I guess I need to
raise a bug...

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 21:24, Samuel Thibault wrote:
> Hello,
>
> Andrew Cooper, le Thu 02 Jan 2014 20:26:49 +0000, a écrit :
>> Cores are numbered per-socket in Xen, while sockets,
>> numa nodes and cpus are numbered on an absolute scale.  There is
>> currently a gross hack in my hwloc code which adds (socket_id *
>> cores_per_socket * threads_per_core) onto each core id to make them
>> similarly numbered on an absolute scale.  This is fine for a homogeneous
>> system, but not for a heterogeneous system.
> BTW, hwloc does not need these physical ids to be unique, it can cope
> with duplication and whatnot.  That said, having a coherent interface at
> the Xen layer would be a good thing, indeed :)
>
> Samuel

If I take out the described hack, I am presented with

****************************************************************************
* hwloc has encountered what looks like an error from the operating system.
*
* object (Core P#0 cpuset 0x30000003) intersection without inclusion!
* Error occurred in topology.c line 853
*
* Please report this error message to the hwloc user's mailing list,
* along with the output from the hwloc-gather-topology.sh script.
****************************************************************************

Which I took to mean "I have done something stupid".  I looked and saw
that I was attempting to insert a second Core P#0 object with a
different cpuset and decided to renumber the cores so they didn't
overlap in physical ids.

If you believe that this should indeed work, then I guess I need to
raise a bug...

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 21:56:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 21:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyqFH-000693-Bk; Thu, 02 Jan 2014 21:56:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1VyqFE-00068w-2L
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 21:56:00 +0000
Received: from [193.109.254.147:5740] by server-3.bemta-14.messagelabs.com id
	F6/B8-11000-F60E5C25; Thu, 02 Jan 2014 21:55:59 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388699758!8525414!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8346 invoked from network); 2 Jan 2014 21:55:58 -0000
Received: from toccata.ens-lyon.fr (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 2 Jan 2014 21:55:58 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id D7FFB84076;
	Thu,  2 Jan 2014 22:55:57 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id 0kcJElrzhjSM; Thu,  2 Jan 2014 22:55:57 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 6412D84074;
	Thu,  2 Jan 2014 22:55:57 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1VyqF8-0005wW-P9; Thu, 02 Jan 2014 22:55:54 +0100
Date: Thu, 2 Jan 2014 22:55:54 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140102215554.GX29132@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
References: <52C5CB89.70804@citrix.com>
	<20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
	<52C5DF0E.1030500@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C5DF0E.1030500@citrix.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper, le Thu 02 Jan 2014 21:50:06 +0000, a écrit :
> On 02/01/14 21:24, Samuel Thibault wrote:
> > Andrew Cooper, le Thu 02 Jan 2014 20:26:49 +0000, a écrit :
> >> Cores are numbered per-socket in Xen, while sockets,
> >> numa nodes and cpus are numbered on an absolute scale.  There is
> >> currently a gross hack in my hwloc code which adds (socket_id *
> >> cores_per_socket * threads_per_core) onto each core id to make them
> >> similarly numbered on an absolute scale.  This is fine for a homogeneous
> >> system, but not for a heterogeneous system.
> > BTW, hwloc does not need these physical ids to be unique, it can cope
> > with duplication and whatnot.  That said, having a coherent interface at
> > the Xen layer would be a good thing, indeed :)
>
> If I take out the described hack, I am presented with
>
> ****************************************************************************
> * hwloc has encountered what looks like an error from the operating system.
> *
> * object (Core P#0 cpuset 0x30000003) intersection without inclusion!
> * Error occurred in topology.c line 853
> *
> * Please report this error message to the hwloc user's mailing list,
> * along with the output from the hwloc-gather-topology.sh script.
> ****************************************************************************
>
> Which I took to mean "I have done something stupid".  I looked and saw
> that I was attempting to insert a second Core P#0 object with a
> different cpuset and decided to renumber the cores so they didn't
> overlap in physical ids.
>
> If you believe that this should indeed work, then I guess I need to
> raise a bug...

Well, logical processor physical ids, i.e. what is used for indexing
physical cpusets, have to be unique. The core/socket/node IDs don't have
to.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 22:01:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 22:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyqKf-0006hF-Lk; Thu, 02 Jan 2014 22:01:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VyqKd-0006hA-Oc
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 22:01:36 +0000
Received: from [85.158.139.211:19085] by server-7.bemta-5.messagelabs.com id
	EB/F1-04824-FB1E5C25; Thu, 02 Jan 2014 22:01:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388700091!7384587!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9131 invoked from network); 2 Jan 2014 22:01:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 22:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,593,1384300800"; d="scan'208";a="89373947"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Jan 2014 22:01:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 2 Jan 2014 17:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1VyqKY-0001Hj-PI;
	Thu, 02 Jan 2014 22:01:30 +0000
Message-ID: <52C5E1BA.6070501@citrix.com>
Date: Thu, 2 Jan 2014 22:01:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, Xen-devel List
	<xen-devel@lists.xen.org>
References: <52C5CB89.70804@citrix.com>
	<20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
	<52C5DF0E.1030500@citrix.com>
	<20140102215554.GX29132@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20140102215554.GX29132@type.youpi.perso.aquilenet.fr>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 21:55, Samuel Thibault wrote:
> Andrew Cooper, le Thu 02 Jan 2014 21:50:06 +0000, a écrit :
>> On 02/01/14 21:24, Samuel Thibault wrote:
>>> Andrew Cooper, le Thu 02 Jan 2014 20:26:49 +0000, a écrit :
>>>> Cores are numbered per-socket in Xen, while sockets,
>>>> numa nodes and cpus are numbered on an absolute scale.  There is
>>>> currently a gross hack in my hwloc code which adds (socket_id *
>>>> cores_per_socket * threads_per_core) onto each core id to make them
>>>> similarly numbered on an absolute scale.  This is fine for a homogeneous
>>>> system, but not for a heterogeneous system.
>>> BTW, hwloc does not need these physical ids to be unique, it can cope
>>> with duplication and whatnot.  That said, having a coherent interface at
>>> the Xen layer would be a good thing, indeed :)
>> If I take out the described hack, I am presented with
>>
>> ****************************************************************************
>> * hwloc has encountered what looks like an error from the operating system.
>> *
>> * object (Core P#0 cpuset 0x30000003) intersection without inclusion!
>> * Error occurred in topology.c line 853
>> *
>> * Please report this error message to the hwloc user's mailing list,
>> * along with the output from the hwloc-gather-topology.sh script.
>> ****************************************************************************
>>
>> Which I took to mean "I have done something stupid".  I looked and saw
>> that I was attempting to insert a second Core P#0 object with a
>> different cpuset and decided to renumber the cores so they didn't
>> overlap in physical ids.
>>
>> If you believe that this should indeed work, then I guess I need to
>> raise a bug...
> Well, logical processor physical ids, i.e. what is used for indexing
> physical cpusets, have to be unique. The core/socket/node IDs don't have
> to.
>
> Samuel

Then a bug needs raising.  My hack only changes the Core physical ID as
far as hwloc is concerned.  The PU physical IDs are unchanged by the
hack, and already unique as presented by Xen.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 22:04:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 22:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyqNo-0006p0-9k; Thu, 02 Jan 2014 22:04:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1VyqNn-0006ou-2D
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 22:04:51 +0000
Received: from [193.109.254.147:46062] by server-3.bemta-14.messagelabs.com id
	3F/7C-11000-282E5C25; Thu, 02 Jan 2014 22:04:50 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388700289!6292457!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3760 invoked from network); 2 Jan 2014 22:04:49 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Jan 2014 22:04:49 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 023298407B;
	Thu,  2 Jan 2014 23:04:49 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id JbCh5c-M2hSn; Thu,  2 Jan 2014 23:04:48 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 9C9BA8407A;
	Thu,  2 Jan 2014 23:04:48 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1VyqNi-0003Ul-Qn; Thu, 02 Jan 2014 23:04:46 +0100
Date: Thu, 2 Jan 2014 23:04:46 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140102220446.GB29132@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>, hwloc-devel@open-mpi.org
References: <52C5CB89.70804@citrix.com>
	<20140102212453.GT29132@type.youpi.perso.aquilenet.fr>
	<52C5DF0E.1030500@citrix.com>
	<20140102215554.GX29132@type.youpi.perso.aquilenet.fr>
	<52C5E1BA.6070501@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C5E1BA.6070501@citrix.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: hwloc-devel@open-mpi.org, Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Hwloc with Xen host topology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper, le Thu 02 Jan 2014 22:01:30 +0000, a =E9crit :
> On 02/01/14 21:55, Samuel Thibault wrote:
> > Andrew Cooper, le Thu 02 Jan 2014 21:50:06 +0000, a =E9crit :
> >> On 02/01/14 21:24, Samuel Thibault wrote:
> >>> Andrew Cooper, le Thu 02 Jan 2014 20:26:49 +0000, a =E9crit :
> >>>> Cores are numbered per-socket in Xen, while sockets,
> >>>> numa nodes and cpus are numbered on an absolute scale.  There is
> >>>> currently a gross hack in my hwloc code which adds (socket_id *
> >>>> cores_per_socket * threads_per_core) onto each core id to make them
> >>>> similarly numbered on an absolute scale.  This is fine for a homogen=
eous
> >>>> system, but not for a hetrogeneous system.
> >>> BTW, hwloc does not need these physical ids to be unique, it can cope
> >>> with duplication and whatnot.  That said, having a coherent interface=
 at
> >>> the Xen layer would be a good thing, indeed :)
> >> If I take out the described hack, I am presented with
> >>
> >> **********************************************************************=
******
> >> * hwloc has encountered what looks like an error from the operating sy=
stem.
> >> *
> >> * object (Core P#0 cpuset 0x30000003) intersection without inclusion!
> >> * Error occurred in topology.c line 853
> >> *
> >> * Please report this error message to the hwloc user's mailing list,
> >> * along with the output from the hwloc-gather-topology.sh script.
> >> **********************************************************************=
******
> >>
> >> Which I took to mean "I have done something stupid".  I looked and saw
> >> that I was attempting to insert a second Core P#0 object with a
> >> different cpuset and decided to renumber the cores so they didn't
> >> overlap in physical ids.
> >>
> >> If you believe that this should indeed work, then I guess I need to
> >> raise a bug...
> > Well, logical processor physical ids, i.e. what is used for indexing
> > physical cpusets, have to be unique. The core/socket/node IDs don't have
> > to.
>

> Then a bug needs raising.  My hack only changes the Core physical ID as
> far as hwloc is concerned.  The PU physical IDs are unchanged by the
> hack, and already unique as presented by Xen.

This needs investigation indeed.  I'm sure we are supposed to support
that kind of case.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 02 22:23:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jan 2014 22:23:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyqg0-0007lj-5A; Thu, 02 Jan 2014 22:23:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <john.stultz@linaro.org>) id 1Vyqfy-0007le-Ra
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 22:23:39 +0000
Received: from [193.109.254.147:16515] by server-9.bemta-14.messagelabs.com id
	25/41-13957-AE6E5C25; Thu, 02 Jan 2014 22:23:38 +0000
X-Env-Sender: john.stultz@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388701415!8550996!1
X-Originating-IP: [209.85.192.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30752 invoked from network); 2 Jan 2014 22:23:37 -0000
Received: from mail-pd0-f182.google.com (HELO mail-pd0-f182.google.com)
	(209.85.192.182)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Jan 2014 22:23:37 -0000
Received: by mail-pd0-f182.google.com with SMTP id v10so14740312pde.27
	for <xen-devel@lists.xen.org>; Thu, 02 Jan 2014 14:23:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=mPCTNMdulc92cAIM65BypEYuq/wH+8iw9O+mFunZkCI=;
	b=XFw6ZoitCjRvKAIBfrPHUk1gs8puAshP31yGqr+UX9ylNyUYf7T+G31PAG0cil5XVP
	HQAV8etL1pH+oEqwv6XLVNSmaWntH/yJDRIDH+O21VwI+sCwqsx9Q91cde2ubMQ3WNzC
	gaHGjJJMU8jZvJ20ZjV4lvT6MBPPiGWxFRMDWWRvbQoIMGkCQeWTqgRRBjNwo+wEZoC2
	OWZgUwx3SOZW2J/nmxUC3mw1yILMRBNTk3yQNzhp8EumMgVU9JacoTziB4GfJ+fd1x/k
	csKwrSjEAlhNqrgtHfLM0EUNrIlbrOMyOByaqMo8c4qJbOPjCDvGtyX7Ev93x7jO0AgZ
	7SDQ==
X-Gm-Message-State: ALoCoQlQzFGMsQOXnFY6bEK5bkI6bBP335unsFeaTDoiWBHFiAZV8mPdioW8VihYqCQOQPOx3HJG
X-Received: by 10.68.88.161 with SMTP id bh1mr91467083pbb.49.1388701414909;
	Thu, 02 Jan 2014 14:23:34 -0800 (PST)
Received: from localhost.localdomain (c-67-170-153-23.hsd1.or.comcast.net.
	[67.170.153.23]) by mx.google.com with ESMTPSA id
	ju10sm104179969pbd.33.2014.01.02.14.23.33 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 02 Jan 2014 14:23:34 -0800 (PST)
From: John Stultz <john.stultz@linaro.org>
To: LKML <linux-kernel@vger.kernel.org>
Date: Thu,  2 Jan 2014 14:23:20 -0800
Message-Id: <1388701407-5029-2-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1388701407-5029-1-git-send-email-john.stultz@linaro.org>
References: <52C5E6A0.7010507@linaro.org>
	<1388701407-5029-1-git-send-email-john.stultz@linaro.org>
Cc: Prarit Bhargava <prarit@redhat.com>,
	Richard Cochran <richardcochran@gmail.com>,
	stable <stable@vger.kernel.org>, xen-devel@lists.xen.org,
	John Stultz <john.stultz@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH 2/9] timekeeping: Fix potential lost pv
	notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation() to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Cc: stable <stable@vger.kernel.org> #3.11+
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 7488f0b..051855f 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
-- 
1.8.3.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 00:06:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 00:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VysH6-0004BV-31; Fri, 03 Jan 2014 00:06:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yinghai@kernel.org>) id 1VysH5-0004BQ-8l
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 00:06:03 +0000
Received: from [85.158.137.68:46810] by server-1.bemta-3.messagelabs.com id
	38/28-29598-AEEF5C25; Fri, 03 Jan 2014 00:06:02 +0000
X-Env-Sender: yinghai@kernel.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388707560!3320379!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24056 invoked from network); 3 Jan 2014 00:06:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 00:06:01 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0305gu0029765
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 00:05:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0305f3p018597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 00:05:42 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0305fPM022418; Fri, 3 Jan 2014 00:05:41 GMT
Received: from linux-siqj.site (/10.132.126.191)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 16:05:41 -0800
From: Yinghai Lu <yinghai@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>, Tony Luck <tony.luck@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>, "Rafael J. Wysocki" <rjw@sisk.pl>
Date: Thu,  2 Jan 2014 16:05:48 -0800
Message-Id: <1388707565-16535-17-git-send-email-yinghai@kernel.org>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	Yinghai Lu <yinghai@kernel.org>
Subject: [Xen-devel] [PATCH v5 16/33] xen,
	irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To make x86 irq allocation the same for the boot path and the ioapic
hot-add path, we pre-reserve irqs for all gsis first.
We have to use irq_alloc_reserved_desc_at() here, otherwise
irq_alloc_desc_at() will fail because the bit is already marked as
pre-reserved in the irq bitmaps.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xensource.com
---
 drivers/xen/events.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4035e83..020cd77 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
 	/* Legacy IRQ descriptors are already allocated by the arch. */
 	if (gsi < NR_IRQS_LEGACY)
 		irq = gsi;
-	else
-		irq = irq_alloc_desc_at(gsi, -1);
+	else {
+		/* for x86, the irq is already reserved for the gsi */
+		irq = irq_alloc_reserved_desc_at(gsi, -1);
+		if (irq < 0)
+			irq = irq_alloc_desc_at(gsi, -1);
+	}
 
 	xen_irq_init(irq);
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 00:06:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 00:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VysH6-0004BV-31; Fri, 03 Jan 2014 00:06:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yinghai@kernel.org>) id 1VysH5-0004BQ-8l
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 00:06:03 +0000
Received: from [85.158.137.68:46810] by server-1.bemta-3.messagelabs.com id
	38/28-29598-AEEF5C25; Fri, 03 Jan 2014 00:06:02 +0000
X-Env-Sender: yinghai@kernel.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388707560!3320379!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24056 invoked from network); 3 Jan 2014 00:06:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 00:06:01 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0305gu0029765
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 00:05:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0305f3p018597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 00:05:42 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0305fPM022418; Fri, 3 Jan 2014 00:05:41 GMT
Received: from linux-siqj.site (/10.132.126.191)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 16:05:41 -0800
From: Yinghai Lu <yinghai@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>, Tony Luck <tony.luck@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>, "Rafael J. Wysocki" <rjw@sisk.pl>
Date: Thu,  2 Jan 2014 16:05:48 -0800
Message-Id: <1388707565-16535-17-git-send-email-yinghai@kernel.org>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	Yinghai Lu <yinghai@kernel.org>
Subject: [Xen-devel] [PATCH v5 16/33] xen,
	irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To make x86 irq allocation to be same with booting path and ioapic
hot add path, We will pre-reserve irq for all gsi at first.
We have to use alloc_reserved here, otherwise irq_alloc_desc_at will fail
because bit is already get marked for pre-reserved in irq bitmaps.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xensource.com
---
 drivers/xen/events.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4035e83..020cd77 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
 	/* Legacy IRQ descriptors are already allocated by the arch. */
 	if (gsi < NR_IRQS_LEGACY)
 		irq = gsi;
-	else
-		irq = irq_alloc_desc_at(gsi, -1);
+	else {
+		/* for x86, the irq is already reserved for the gsi */
+		irq = irq_alloc_reserved_desc_at(gsi, -1);
+		if (irq < 0)
+			irq = irq_alloc_desc_at(gsi, -1);
+	}
 
 	xen_irq_init(irq);
 
-- 
1.8.4
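Outside the kernel tree, the reserved-first ordering above can be sketched as a
standalone program. Note that irq_alloc_reserved_desc_at() and irq_alloc_desc_at()
below are simplified stand-ins for the kernel APIs (the real ones also take a NUMA
node argument and operate on the global irq descriptor bitmaps); only the
"try the pre-reserved slot first, then fall back" ordering is modeled here.

```c
#include <assert.h>

/*
 * Toy model of the allocation fallback in xen_allocate_irq_gsi().
 * The two allocator functions are simplified stand-ins, not the real
 * kernel implementations.
 */

#define NR_IRQS        64
#define NR_IRQS_LEGACY 16

static unsigned char reserved[NR_IRQS];   /* set when x86 pre-reserves a GSI */
static unsigned char allocated[NR_IRQS];

/* Succeeds only for a slot that was pre-reserved and not yet allocated. */
static int irq_alloc_reserved_desc_at(unsigned irq)
{
	if (irq >= NR_IRQS || !reserved[irq] || allocated[irq])
		return -1;
	allocated[irq] = 1;
	return (int)irq;
}

/* Fails when the slot's bit is already marked (reserved or allocated). */
static int irq_alloc_desc_at(unsigned irq)
{
	if (irq >= NR_IRQS || reserved[irq] || allocated[irq])
		return -1;
	allocated[irq] = 1;
	return (int)irq;
}

static int xen_allocate_irq_gsi(unsigned gsi)
{
	int irq;

	/* Legacy IRQ descriptors are already allocated by the arch. */
	if (gsi < NR_IRQS_LEGACY)
		return (int)gsi;

	/* On x86 the irq was pre-reserved for the gsi; try that first. */
	irq = irq_alloc_reserved_desc_at(gsi);
	if (irq < 0)
		irq = irq_alloc_desc_at(gsi);
	return irq;
}
```

This shows why the patch cannot simply call irq_alloc_desc_at(): for a
pre-reserved GSI the bitmap bit is already set, so the plain allocation
would fail, while the reserved variant succeeds.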


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 00:30:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 00:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyseA-0005PZ-GH; Fri, 03 Jan 2014 00:29:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Vyse8-0005PU-M1
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 00:29:52 +0000
Received: from [85.158.137.68:61247] by server-10.bemta-3.messagelabs.com id
	3F/E4-23989-F7406C25; Fri, 03 Jan 2014 00:29:51 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388708989!7022201!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8901 invoked from network); 3 Jan 2014 00:29:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 00:29:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s030RcNB012735
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 00:27:39 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s030Rb24025024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 00:27:38 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s030Rb9O025019; Fri, 3 Jan 2014 00:27:37 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 16:27:37 -0800
Date: Thu, 2 Jan 2014 16:27:35 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140102162735.5a3038c0@mantra.us.oracle.com>
In-Reply-To: <20131230195648.GA2937@phenom.dumpdata.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 30 Dec 2013 14:56:48 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
......
> > > 
> > > The elf parsing should accept unrecognised fields for forward
> > > compatibility, which would then allow a PV & PVH compiled kernel
> > > to run in PV mode.
> > 
> > It should but it doesn't, so a different way needs to be found for
> > the kernel to report (optional) PVH support.  A method that is
> > compatible with older toolstacks.
> 
> Also known as changes to the PVH ABI.
> 
> Mukesh, Roger, George (emailing Ian instead since he is now the
> Release Manager-pro-temp), Jan,
> 
> a).  That means dropping the 'hvm_callback_vector' check from
> xc_dom_core.c and just depending on:
> "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> for PVH guests.

I"m not sure what the state of auto xlated with shadow and Ian's work
for supervisor mode kernel is. If they are obsolete, then we can drop
the hvm_callback_vector check.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 01:36:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 01:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vytg1-0003Bg-HS; Fri, 03 Jan 2014 01:35:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Vytfz-0003Bb-KF
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 01:35:51 +0000
Received: from [85.158.137.68:51952] by server-5.bemta-3.messagelabs.com id
	8F/14-25188-6F316C25; Fri, 03 Jan 2014 01:35:50 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388712947!6974060!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4647 invoked from network); 3 Jan 2014 01:35:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 01:35:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s031YgsQ032555
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 01:34:43 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s031Yfow021235
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 01:34:41 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s031YenO021810; Fri, 3 Jan 2014 01:34:40 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 17:34:39 -0800
Date: Thu, 2 Jan 2014 17:34:38 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140102173438.40612127@mantra.us.oracle.com>
In-Reply-To: <20140102183221.GD3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
	<20140102183221.GD3021@pegasus.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
 PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014 13:32:21 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
> > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > In the bootup code for PVH we can trap cpuid via vmexit, so don't
> > > need to use emulated prefix call. We also check for vector
> > > callback early on, as it is a required feature. PVH also runs at
> > > default kernel IOPL.
> > > 
> > > Finally, pure PV settings are moved to a separate function that
> > > are only called for pure PV, ie, pv with pvmmu. They are also
> > > #ifdef with CONFIG_XEN_PVMMU.
> > [...]
> > > @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax,
> > > unsigned int *bx, break;
> > >  	}
> > >  
> > > -	asm(XEN_EMULATE_PREFIX "cpuid"
> > > -		: "=a" (*ax),
> > > -		  "=b" (*bx),
> > > -		  "=c" (*cx),
> > > -		  "=d" (*dx)
> > > -		: "0" (*ax), "2" (*cx));
> > > +	if (xen_pvh_domain())
> > > +		native_cpuid(ax, bx, cx, dx);
> > > +	else
> > > +		asm(XEN_EMULATE_PREFIX "cpuid"
> > > +			: "=a" (*ax),
> > > +			"=b" (*bx),
> > > +			"=c" (*cx),
> > > +			"=d" (*dx)
> > > +			: "0" (*ax), "2" (*cx));
> > 
> > For this one off cpuid call it seems preferrable to me to use the
> > emulate prefix rather than diverge from PV.
> 
> This was before the PV cpuid was deemed OK to be used on PVH.
> Will rip this out to use the same version.

What's wrong with using native cpuid? Being able to trap cpuid via vmexit
is one of the benefits of PVH, and there is also talk of making the PV
cpuid trap obsolete in the future. I suggest leaving it native.

Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 01:37:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 01:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vythu-0003H4-28; Fri, 03 Jan 2014 01:37:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1Vyths-0003Gy-Sc
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 01:37:49 +0000
Received: from [85.158.143.35:20056] by server-2.bemta-4.messagelabs.com id
	4D/18-11386-C6416C25; Fri, 03 Jan 2014 01:37:48 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388713066!9265557!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2151 invoked from network); 3 Jan 2014 01:37:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 01:37:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s031ahoX001490
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 01:36:44 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s031agRJ023873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 01:36:43 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s031af95024494; Fri, 3 Jan 2014 01:36:42 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 17:36:41 -0800
Date: Thu, 2 Jan 2014 17:36:39 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140102173639.3869d841@mantra.us.oracle.com>
In-Reply-To: <52C54C82.5010802@citrix.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
	<52C54C82.5010802@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014 11:24:50 +0000
David Vrabel <david.vrabel@citrix.com> wrote:

> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > .. which are surprisingly small compared to the amount for PV code.
> > 
> > PVH uses mostly native mmu ops, we leave the generic (native_*) for
> > the majority and just overwrite the baremetal with the ones we need.
> > 
> > We also optimize one - the TLB flush. The native operation would
> > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > Xen one avoids that and lets the hypervisor determine which
> > VCPU needs the TLB flush.
> 
> This TLB flush optimization should be a separate patch.

It's not really an "optimization"; we are using the PV mechanism instead
of the native one because the PV one performs better. So I think it's OK
for it to belong here.

Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 02:13:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 02:13:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VyuFw-0005Lp-44; Fri, 03 Jan 2014 02:13:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VyuFt-0005Li-SJ
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 02:12:58 +0000
Received: from [193.109.254.147:34297] by server-2.bemta-14.messagelabs.com id
	9C/D6-00361-9AC16C25; Fri, 03 Jan 2014 02:12:57 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388715173!8568687!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27707 invoked from network); 3 Jan 2014 02:12:55 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 02:12:55 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 03 Jan 2014 02:12:51 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,594,1384300800"; d="scan'208";a="639688592"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi01.verizon.com with ESMTP; 03 Jan 2014 02:12:50 +0000
From: Don Slutz <dslutz@verizon.com>
To: <qemu-devel@nongnu.org>, <1257099@bugs.launchpad.net>,
	<xen-devel@lists.xensource.com>
Date: Thu,  2 Jan 2014 21:12:46 -0500
Message-Id: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Richard Henderson <rth@twiddle.net>
Subject: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if -fPIE
	does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adjust TMPO and add TMPB, TMPL, and TMPA.  libtool requires the object,
libtool-object, and archive names to share a base name (TMPB).

Add new functions do_libtool and libtool_prog.

Add check for broken gcc and libtool.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
Was posted as an attachment.

https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html

 configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index edfea95..852d021 100755
--- a/configure
+++ b/configure
@@ -12,7 +12,10 @@ else
 fi
 
 TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
-TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
+TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
+TMPO="${TMPDIR1}/${TMPB}.o"
+TMPL="${TMPDIR1}/${TMPB}.lo"
+TMPA="${TMPDIR1}/lib${TMPB}.la"
 TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
 
 # NB: do not call "exit" in the trap handler; this is buggy with some shells;
@@ -86,6 +89,38 @@ compile_prog() {
   do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
 }
 
+do_libtool() {
+    local mode=$1
+    shift
+    # Run the compiler, capturing its output to the log.
+    echo $libtool $mode --tag=CC $cc "$@" >> config.log
+    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
+    # Test passed. If this is an --enable-werror build, rerun
+    # the test with -Werror and bail out if it fails. This
+    # makes warning-generating-errors in configure test code
+    # obvious to developers.
+    if test "$werror" != "yes"; then
+        return 0
+    fi
+    # Don't bother rerunning the compile if we were already using -Werror
+    case "$*" in
+        *-Werror*)
+           return 0
+        ;;
+    esac
+    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
+    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
+    error_exit "configure test passed without -Werror but failed with -Werror." \
+        "This is probably a bug in the configure script. The failing command" \
+        "will be at the bottom of config.log." \
+        "You can run configure with --disable-werror to bypass this check."
+}
+
+libtool_prog() {
+    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
+    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
+}
+
 # symbolically link $1 to $2.  Portable version of "ln -sf".
 symlink() {
   rm -rf "$2"
@@ -1367,6 +1402,32 @@ EOF
   fi
 fi
 
+# check for broken gcc and libtool in RHEL5
+if test -n "$libtool" -a "$pie" != "no" ; then
+  cat > $TMPC <<EOF
+
+void *f(unsigned char *buf, int len);
+void *g(unsigned char *buf, int len);
+
+void *
+f(unsigned char *buf, int len)
+{
+    return (void*)0L;
+}
+
+void *
+g(unsigned char *buf, int len)
+{
+    return f(buf, len);
+}
+
+EOF
+  if ! libtool_prog; then
+    echo "Disabling libtool due to broken toolchain support"
+    libtool=
+  fi
+fi
+
 ##########################################
 # __sync_fetch_and_and requires at least -march=i486. Many toolchains
 # use i686 as default anyway, but for those that don't, an explicit
-- 
1.8.2.1
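The do_libtool helper above follows the same pattern as qemu's existing
do_cc: run the test normally, and on an --enable-werror build rerun it with
-Werror so configure-test code that merely generates warnings fails loudly.
A standalone sketch of that pattern follows; check_tool and the echo-based
fake "compile" step are hypothetical stand-ins (so the script runs anywhere),
not qemu's actual helpers:

```shell
#!/bin/sh
# Fake compile step: fails if its input names a "warning"; under
# -Werror it additionally fails on a "note".
compile() {
    strict=no
    if [ "$1" = "-Werror" ]; then strict=yes; shift; fi
    case "$1" in
        *warning*) return 1 ;;
        *note*) if [ "$strict" = yes ]; then return 1; fi ;;
    esac
    return 0
}

werror=yes

check_tool() {
    # First run: the ordinary configure test.
    compile "$1" || return $?
    # Only --enable-werror builds get the stricter rerun.
    [ "$werror" != "yes" ] && return 0
    # Skip the rerun if -Werror was already on the command line.
    case "$*" in *-Werror*) return 0 ;; esac
    compile -Werror "$1" && return $?
    echo "test passed without -Werror but failed with it" >&2
    return 2
}

check_tool "clean-source" && echo "clean: ok"
check_tool "source-with-note" || echo "note: rejected under -Werror"
```

The real do_libtool also logs each command to config.log before running it;
the sketch drops the logging to stay self-contained.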


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 06:15:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 06:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vyy2N-0004Y0-Kj; Fri, 03 Jan 2014 06:15:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1Vyy2L-0004Xv-Lp
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 06:15:13 +0000
Received: from [193.109.254.147:26913] by server-6.bemta-14.messagelabs.com id
	4A/E0-14958-07556C25; Fri, 03 Jan 2014 06:15:12 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388729710!6329731!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13439 invoked from network); 3 Jan 2014 06:15:11 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 06:15:11 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s036F5sP023817
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 06:15:08 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s036F4fR020595
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 06:15:05 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s036F4eN000298; Fri, 3 Jan 2014 06:15:04 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 02 Jan 2014 22:15:04 -0800
Message-ID: <52C65565.6030608@oracle.com>
Date: Fri, 03 Jan 2014 14:15:01 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <52BD5FDD.6060009@gmail.com> <52C4F48F.5090003@oracle.com>
	<20140102120150.GA1444@zion.uk.xensource.com>
In-Reply-To: <20140102120150.GA1444@zion.uk.xensource.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Vasily Evseenko <svpcom@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] domU crash with kernel BUG at
	drivers/net/xen-netfront.c:305
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/2 20:01, Wei Liu wrote:
> On Thu, Jan 02, 2014 at 01:09:35PM +0800, annie li wrote:
> [...]
>>> It seems the root of problem in dom0 messages above. Is it HW failure or
>>> some internal kernel structures overflow?
>>  From the stack, this issue looks very likely the same as one that
>> has already been fixed. There is something wrong with counting
>> slots in netback: responses overlap requests in the ring,
>> grantcopy gets a wrong grant reference, and an error is thrown in
>> grant_table.c. See
>> http://lists.xen.org/archives/html/xen-devel/2013-09/msg01143.html
>> There was some back-and-forth work on this issue, but the fix
>> has been present since v3.12-rc4. Would you like to try a
>> newer kernel version?
>>
> FWIW the patch you mentioned was backported to the kernel he used.

Yes, it exists in the 3.10.25 kernel he used.
Assuming the slot counting in netback is what causes this issue,
maybe http://www.spinics.net/lists/netdev/msg260017.html is the right
fix. That patch fixed an issue caused by slot counting, and it went
into the net-next tree:
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git

Thanks
Annie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 09:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 09:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz1L4-0002vr-0g; Fri, 03 Jan 2014 09:46:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julian@freebsd.org>) id 1VysWv-00052u-EP
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 00:22:25 +0000
Received: from [85.158.143.35:10612] by server-2.bemta-4.messagelabs.com id
	72/63-11386-0C206C25; Fri, 03 Jan 2014 00:22:24 +0000
X-Env-Sender: julian@freebsd.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388708542!9185591!1
X-Originating-IP: [204.109.63.16]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5812 invoked from network); 3 Jan 2014 00:22:24 -0000
Received: from vps1.elischer.org (HELO vps1.elischer.org) (204.109.63.16)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 00:22:24 -0000
Received: from [192.168.1.73] (254C510A.nat.pool.telekom.hu [37.76.81.10])
	(authenticated bits=0)
	by vps1.elischer.org (8.14.7/8.14.7) with ESMTP id s030MELl019270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 16:22:16 -0800 (PST)
	(envelope-from julian@freebsd.org)
Message-ID: <52C602B0.7060904@freebsd.org>
Date: Fri, 03 Jan 2014 01:22:08 +0100
From: Julian Elischer <julian@freebsd.org>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, xen-devel@lists.xen.org,
	gibbs@freebsd.org, jhb@freebsd.org, kib@freebsd.org,
	julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-20-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1388677433-49525-20-git-send-email-roger.pau@citrix.com>
X-Mailman-Approved-At: Fri, 03 Jan 2014 09:46:43 +0000
Subject: Re: [Xen-devel] [PATCH v9 19/19] isa: allow ISA bus to attach to
	xenpv device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/2/14, 4:43 PM, Roger Pau Monne wrote:
> ---
>   sys/x86/isa/isa.c |    3 +++
>   1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
> index 1a57137..9287ff2 100644
> --- a/sys/x86/isa/isa.c
> +++ b/sys/x86/isa/isa.c
> @@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child, int type, int rid,
>    * On this platform, isa can also attach to the legacy bus.
>    */
>   DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
> +#ifdef XENHVM
> +DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
> +#endif
I read all 19 patches. I'm glad you split them up; it makes them
understandable, even by me :-)
No real negative comments, except a question as to whether there is any
noticeable performance impact on real hardware?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 09:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 09:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz1L4-0002vy-F3; Fri, 03 Jan 2014 09:46:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <groen692@grosc.com>) id 1Vyyaz-0005nX-Pj
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 06:51:03 +0000
Received: from [85.158.137.68:59820] by server-6.bemta-3.messagelabs.com id
	37/8D-04868-4DD56C25; Fri, 03 Jan 2014 06:51:00 +0000
X-Env-Sender: groen692@grosc.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388731855!3349556!1
X-Originating-IP: [74.50.18.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19275 invoked from network); 3 Jan 2014 06:50:57 -0000
Received: from carp.lunarservers.com (HELO carp.lunarservers.com) (74.50.18.10)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 06:50:57 -0000
Received: from [194.219.126.84] (port=50932 helo=[172.16.0.10])
	by carp.lunarservers.com with esmtpsa (TLSv1:DHE-RSA-AES256-SHA:256)
	(Exim 4.82) (envelope-from <groen692@grosc.com>) id 1Vyyak-0005yp-PY
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 22:50:54 -0800
Message-ID: <52C65D06.8080404@grosc.com>
Date: Fri, 03 Jan 2014 07:47:34 +0100
From: Jeroen Groenewegen van der Weyden <groen692@grosc.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary="------------060906080403000401080808"
X-AntiAbuse: This header was added to track abuse,
	please include it with any abuse report
X-AntiAbuse: Primary Hostname - carp.lunarservers.com
X-AntiAbuse: Original Domain - lists.xen.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - grosc.com
X-Get-Message-Sender-Via: carp.lunarservers.com: authenticated_id:
	servicedesk@grosc.com
X-Mailman-Approved-At: Fri, 03 Jan 2014 09:46:43 +0000
Subject: [Xen-devel] BUG: unable to handle kernel NULL pointer dereference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------060906080403000401080808
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi All,

Yesterday my xen machine stopped working after a kernel panic.
It seems to me the problem started at something called xen_spin_kick.

I attached a screenshot of the console of the xen machine.

Is this mailing list the right one to report this bug to?

distro: openSUSE 13.1
kernel: 3.11.6-4-xen
xen: 4.3.1_02-4.4

Regards,
Jeroen

--------------060906080403000401080808
Content-Type: image/jpeg;
 name="Screenshot_console.jpg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="Screenshot_console.jpg"

[base64-encoded JPEG attachment data omitted]
v8I/+iOfBj/wgdH/APkaoLz9jX4SvGdvwe+DIP8A2IGj/wDyNWtF8TTNETVSb4oSJLip+r4f
+Rfch+2r/wA7+9nFeIf2Gfhzdk/Z/hT8Go/+6d6If52tcrd/8E/fBkj/AC/DT4Nr/wB030D/
AOQ69q0/4kGVwDVjUPiMto4zSeDwz15F9yLWKxC05n97PBf+He/g/wD6Jt8G/wDw22gf/IdH
/Dvfwf8A9E2+Df8A4bbQP/kOvbm+LURuAuRzU918TEiiJ46VH1HC/wAi+5FfXMR/M/vPCv8A
h3v4PI/5Jt8HP/DbaB/8h0xv+Ce/g/8A6Jv8HP8Aw22gf/Ide8aV8SlvIj0qprfxO/s+TFH1
HDfyL7kP65if5n954VP/AME+vCa9Phz8HP8Aw22gf/IdZ15+wL4Zj+78PPg7/wCG18P/APyH
Xvlr8VhdDtSXfxDCyDOKPqOG/kX3IPrmJ/mf3nzpN+wj4dHT4ffB7/w2nh//AOQqhP7Cfh8j
j4ffB7/w2fh//wCQq+gb34jRxSfw02H4n2g+8y/jT+oYX+RfcT9dxP8AM/vPn1v2D9BP/NP/
AIPf+Gz8Pf8AyFTR+wboRP8AyIHwe/8ADZ+Hv/kKvoDUPixZwDhk/Oo7D4v2Uw5Zfzo+oYX+
Vfcg+vYn+Z/eeFWv7BugI43fD74On/umnh7/AOQq6nw5+xT4Ns9v2j4afByT1z8ONBH8rSvY
Lf4l2U44dPzqDUviXb2h+8uKtYHCx15F9yJeMxEtOd/ezmtO/ZH+G0aDf8Jfg23/AHT7RR/7
a1dH7JnwxI/5JF8G/wDw3+jf/I1XIvjPaA43irdv8YbMj/WL+dX9Vw38kfuRl7bEfzy+9mNJ
+yX8MSv/ACSP4Nj/ALp/o3/yLWLrn7HPw9u1PkfCr4Nxn/snmiH+drXcf8LZtZmwrr+dbGh+
KE1RMjFNYTDvTkj9yF9Yrr7b+9nzvrX7CPhi5k/dfDn4OoP+ybaAf52dY7/8E/8AQ2PHgD4P
D/umfh7/AOQq+uVmVuwp6yrjpUvLcO/sL7karMMQvtv7z5D/AOHfmiH/AJkH4P8A/hs/D3/y
FR/w770T/oQfg/8A+Gz8Pf8AyFX18syg9Kd5ielL+y8P/IvuD+0cR/O/vPj/AP4d96J/0IPw
f/8ADZ+Hv/kKnf8ADvnRD/zIPwf/APDZeHv/AJCr6+8xPSlWZQelH9l4f+RfcH9o4j+d/efI
A/4J86Jn/kQPg/8A+Gy8Pf8AyFS/8O+dD/6ED4P/APhs/D3/AMhV9gmVAOlZGveJ4tIRi2OK
P7Mwy3gvuD+0MQ/tv7z5VP8AwT60MD/kQPg//wCGz8Pf/IVc54q/Yd0nSSdngL4PD/umXh3/
AOQq+sLH4qW19OEBHJxTPF14l5bs2B0qf7OwvSC+4r6/iVvJ/efGdj+yLpko+bwH8HT/AN0x
8O//ACFV1f2PdHU8+Avg7/4bLw7/APIVfREUqRE8CsTxZ45h0WNuQMCq/s7CpawX3E/X8Tf4
3955JpP7JPh6Fh5nw9+Dr/8AdNPDw/lZV01j+zB4Ohi+f4ZfBtj/ANk40H/5Era8NfFaHUpg
u4da7vT9RS7wR3q44HC20gvuQnjcRfWb+9nmifsr+ELwfJ8MPg4P+6caD/8AIlWbP9kDwmZB
u+GHwdI/7JxoP/yJXvXgyxSeIZANdbHo8QH3BTWBw38i+5C/tDE7cz+8+cF/ZF8EpbH/AItZ
8HN3/ZOtD/8AkSuM8a/sieHXDfZPhp8HYvT/AItroB/nZmvsX+zI8fdFQzeHoZuqCr+pYW1v
Zr7kY/XMVe/O/vZ+e+q/sfIZf3fgD4Pj/ul/h3/5Bqh/wx64P/Ih/B//AMNd4c/+Qa/Q+Twd
asP9Wv5VH/whdp/zyWsf7Nwv8i+40WYYpfbf3n57p+x43fwF8Hv/AA13hz/5BqzF+xuD18A/
B7/w1/hz/wCQa+/x4NtVH+qX8qmg8I2uf9Uv5Uv7Owv8i+4azHFfzP7z4Cj/AGMY2/5kD4O/
+Gv8Of8AyDU0f7F0Wf8Akn/we/8ADX+Hf/kGv0Bi8JWv/PJfyqwnhG1Yf6pfypf2dhf5F9w/
7RxX8z+8/P6L9iu3J5+Hvwd/8Nh4d/8AkGrMX7FFqf8Amnnwd/8ADYeHf/kKvv1PCVr/AM8l
/Kp4/Cdr/wA81/Kj+zsL/IvuF/aOK/mf3s+Ao/2JbI/807+Dv/hsfDv/AMhVIP2JLD/onXwd
/wDDY+Hv/kKv0Aj8J2x/5Zj8qk/4RK2/55p+VP8As7C/yL7g/tDFfzP7z4Ah/Yk0/Iz8Ofg7
/wCGx8Pf/IVbuh/sT6GrDzvhp8HW/wC6aeHx/wC2dfcX/CKW/wDzzX8qenhq3Q/6tfyprAYV
fYX3CePxT+2/vPkvSP2LPBwQeZ8LPg23/dONC/8AkSteP9i7wLt/5JR8Gv8Aw3Wh/wDyJX1G
uiQqPuj8qf8A2NH/AHRWv1XDL/l2vuRl9ZxT+2/vZ8t/8MX+A/8AolHwb/8ADdaH/wDItKf2
LfAeP+SUfBz/AMNzof8A8iV9SDRo/wC6KX+x4v7go+rYb/n2vuQfWcV/O/vZ8oX37Fnggqdn
wq+DY/7pzof/AMiVzGufsT+GGJ8r4YfBxfT/AIttoJ/9s6+1TosJ/gFRv4dgf+AVDwmFf2F9
yKjjMSvtv72fBlz+xFoxb5fhr8HB/wB0z8P/APyFUH/DEek/9E3+Dn/hsvD/AP8AIVfe/wDw
i9v/AM81pR4Wtz/yyX8qz/s/C/yL7karH4n+Z/efA/8AwxHpP/RN/g5/4bLw/wD/ACFSj9iP
SP8Aom/wc/8ADZeH/wD5Cr73/wCEVt/+eS0f8Irb/wDPJaP7Pwv8i+5D+v4n+Z/efBQ/Yh0g
/wDNNvg5/wCGz8P/APyFR/wxDpH/AETb4Of+Gy8P/wDyFX3r/wAIrb/88lo/4RW3/wCeS0f2
fhf5F9yD6/iv5n958Ff8MQ6R/wBE2+Dn/hsvD/8A8hUf8MQaR/0Tb4Of+Gy8P/8AyFX3r/wi
tv8A88lo/wCEVt/+eS0f2fhf5F9yD6/iv5n97Pgr/hiHSP8Aom3wd/8ADZeHv/kKgfsR6QG/
5Jt8Hf8Aw2fh/wD+Qq+9T4Vt8f6paT/hFrb/AJ5rR9Qwv8i+5C+v4n+d/efDuj/sT6Asg834
afBxvX/i2mgD/wBs67bQP2MfBMaDzvhV8HG/7pzoQ/laV9WDwxAv/LNfyoOiRJ0QVrDB4WP/
AC7X3Iyni8TL7b+9nza37HXw+VP+SS/Bv/w3mif/ACLWPrP7HfgfafK+FXwbX3/4V1of/wAi
V9TPpKc/LVefRY2/hFaOhhn/AMu4/ciIYnExd+d/ez4+uf2P/CqyfL8MPg4B/wBk30H/AOQ6
q3n7IvhdIzt+GPwdB/7JvoP/AMh19fTaBF/cH5VTn8PREfcH5Vl9Tw3/AD7X3I6HjsS/tP7z
4j1/9kvRwT5Pw3+Dqj/smnh8/wA7OuXvv2TbYMdvw++Dw/7pj4d/+Qq+9bnwvA5/1a/lVC48
I2//ADzX8qh4HDP7C+5CWNxP87+8+C5P2UYlP/IgfB7/AMNh4d/+Qagk/ZVjB/5EH4Pf+Gw8
O/8AyDX3fN4Rtz/yyX8qqTeEbf8A55r+VR/Z+G/kX3FfX8T/ADP72fCx/ZaRT/yIPwe/8Nf4
c/8AkGrFt+y3bsfm+H/wdP1+F/h3/wCQq+2JPCVv/wA81/Kmp4Ztwf8AVj8qX9n4b+Rfcivr
+J/mf3nyBYfsl6fMBu+HXwb/APDY+Hf/AJCrTi/Y60th/wAk3+Df/hsfD3/yFX15p2g28bfc
H5VvWGg2z/wKfwqfqGG/lX3FfX8T/M/vPi2L9jbSD1+G3wa/8Nj4e/8AkKr1p+xjobHn4Z/B
o/8AdMvD3/yFX2mnhy2H8C/lViDw/bf3F/Kp+oYb+VfcX9exP8z+8+OLT9irw633vhj8Gv8A
w2fh/wD+Q6vw/sS+F8c/C74M/wDhtNA/+Q6+x7bQrfj5F/KrcegQEfcWj6hh/wCVfcH17Efz
P7z4+079ifwgJBv+FfwZI/7JroP/AMh13Xwx/Yx+FEXxr8Kw6v8ACH4PTRTeHfEMskR8B6Ok
Mrx3WgLEzItsFZkE04ViMgSuAfmNfRS6FCv8A/KuG+JWvxeBvi94Qu2haRf+Ec8RJtU4PN74
e/wrixuEpRp+7FXuuh14TFVpVPek7WOkg/Yx+AIX/kiXwT/8IPSP/kepf+GMvgB/0RL4J/8A
hCaT/wDI9cVrP7Tek+HdNlvb9PsNpAN0k9xcLHHGM4yzEgDkjrXON+338Pc/8jJ4d/8ABzbf
/F15Twi7HoxxEnsz1V/2NPgB/wBES+Cn/hB6R/8AI9cb8c/2OPgRa/BzxhPbfBf4LwXEGhX8
0UsXgXSUkidbaRlZWFvlWBAIIOQRWDpv7dPgbWtQitLPW9Fu7q4cRxQw6rA8krHoFUNkn2FP
+Kvx3tdR+E3jCEWU4L+HtS5LD/n0lNQ8LpexpGu+ZXZr/Ez9kf8AZ58DJeXV38EfgmtrFdJb
RxwfDLSbueSSSZYYoo4o7RpJHeR0RVVSzMwABJrmR+zZ8BCef2YvD3/iOMn/AMqa3vjJ8Yba
98Z6PZrayrJF4+8PnezDH7vXrNv/AGWmfHj9vPV/DP7SHj7Rbz4w3/gqz0fUbWDTdMtxocaJ
A+mWUzPm8tJZmLTSzHJcjsAMVzSoPmUIpXtc7KdRODnJvexgal8Af2ddB0y4vtR/Zv8ACOm2
FnE091eXv7PRtra0iUFnkllfSgkaKoJZ2IVQCSQBXdWH7BfwFmvIVb4F/AkqzqCP+FcaIM8/
9e1cvpP7XWqfGL9nf9pXSrjx7cfEDR7H4dM9pcXC6aWs7ie21dJkD2MEKkFYbc4cMR1BAavQ
dB+NdnPrFqv2K4G6ZFzuHdhRChzNprVBUq8qi09z5S8AfBX4O2Pw98LIfg38D5mbQdNleS4+
HehzzSs9nC7M7valmYliSWJJJ5NdFbfBb4MuOfgv8B//AA2ug/8AyJXzlb/tZ6foeheGbZtM
vHKeGdGYsJBg7tNtm/rWxpn7YGnOo/4ld5/38FflmKzuUH/Ef3s/pjK+BqdaEZPDp3S6RPdd
V+EvwO0LSbm+u/g58B4LSzheeeRvhroWI0UFmY/6J2ANR6h8LfgvoGowWms/AP4WeGbi6hln
t01/4Mafo/2pItvmtEbrT4xJsDoW252hgTjNeNT/ALWulXVu8UukXMsUilHR3Uq4PBBHcV9L
/wDBOf4pf8NO+C9U+HPjLwZD4z+Evg+a3vtDvdbAl/sS9idSmmIxOZ4hEzEL/wAsoWaCTfDP
HGvVk2PWOlKj7WXNa6s3b5/8OvvPM4y4e/sOjDGvCUnTvaSaipXe3L+uj77Xtxdp8M/gvcWe
lXb/AAC+GFnp2u7f7M1C9+CthZ2Go7ommXyLmXT1hl3RI8i7HO5VLDIGa0H+CXwXA/5Iv8Cf
/DbaF/8AIleZ/tB/tseI/iJ+0Dq+pfEnQr/TvEmgyyafZaAZAbfwxbtg+XF2keVQjvcj/XDY
VxEsaLzcv7Xmn7f+QVef9/FrlzPOY4fEujTqS93R3b3/AA0PU4c4Klj8uhja+Fp+/quRRtbp
fV69+226PZbn4OfBqIcfBj4Efj8NtB/+RKzPDn7EXwd+O/jbxxp83wg+FItf+Ee8Plo9P8JW
GmtF5l5r6yNFJaxRSQSP5EIaSFkkIiQFsKMeLan+2Jp6D/kFXv8A38WvqP8A4JV/EK2+LfxB
+I1zHbSQLF4Z8M/LIwPXUfE4/wDZa93hjMfrWMjTlK617nxniHwzHLsrniKdJQaa1SS6+R+R
f/BR/wDZlsP+Ce37S914Oiiu7nw7rNlFr/h4zS+dcRWUzPEYpG4JMdxBcxqT8zRpGzEsxor3
v/g5/sVi/bm8BKo4Hw6tP/TtqtFfbVFyzaXc/FqbbimzQ0PTZ7r4TfC3AOD8NfB/T/sXdOr9
Ev2Xfina+Fv+Cdnh61vtf1Hwr9ukv7CHV7W2Mv2ORbuZcsxRkQfKRlsd8MDgj49+GOhWtx8C
vhM7Bdx+G3hHP/ggsK7rwd8YfHvwk0L+yPCfjrWNB0bz5LlbGOzsLmKOSRtzlTcW8jKGYlio
bbkk4ySTjCneKv5P+tH+RrObUnbzX9ar8z2vwB43sb25kh0bxvrXh/VdSvrVkjjudK1FpJxI
XLJbaWC0iFQ8bCYrHiYMVLIuPUP2q9Zku/jz8P7Z886PqkvPfbNYD/2avnPwx+0P8W9WAMvx
U8QKf9jSNGH/ALZV13hO71vxH41g1zxJ4k1nxTqkMDWsE9+tvGLWJ2RnWOOCKKNdxjQsduTs
XJ4rWhgoUuVUYxjFX0Wi1+S9TGti51LutJyk7avy+b9Cz8fvHej+Pf2p/gB4J0TVdN1XxZ4U
8YSeJ9b0m1uo5bvRtMTRdRh+13MQJeKJpbq3jVnADNMoGc14de+Hb5v2/NU/Zf8Ass6+DNb8
c2/xyMm0mBdGH+kXFmD6t4ghRyvTy7hvTFfc3hbxLsscH0rQbW4xIGreOGlGaqLe93/5I19z
pwfnZraRMsTGcOSXay8viu/W05pbWunuj4C/ae1PWW/4XysGpadpHgi5+P8Ao0fjq81TSZtV
0mDRj4W0wE6jbw3Fs8lh9q+yi4HnInllvM3ReYjYWu6D4c/Zc+Bl78cvBvjrwL8Q/B3wy8e2
evtpvwt8NNpfhvSraay/s7U4rGIX13GytFdpcyrBKIxNAxZN5ev0ssfHMdrAVzj8ao6n4wju
Xzx+dT9UqWsnraCT/wACprXXVPkvbTd6sKmJpzmnNae9df4nO7/xWna+uystD8x/gF8PPHXh
n41aB8G9Wa+ufE9i9x8eLhpjuto7260w25tQ+zDqNbuLmZVyCBGOMAZyv2APh1oXiTXvh/r1
t8UfhbF8UNL0nUZPGmhaT4MuLLx5qszWsqX1v4huH1OeU7LzZI0lxbopmiiEewOqn9PX8TRg
9qjk8VxD0/OqeBbjKMdLq3krJxVv+3HGMr3clBXYvruqcnd312V9bu9v7zk42ty8z0b1PgX9
kr4M+Gvhn+w9+yRqWiaJp9hqvijU/Dd7q+oRwL9r1SX+y7kq082N8mwMVQMSEQBVwoAr1D4w
/tW/DDwX/wAFRfhXYat8SPAWmX+i+FvEem6jb3fiC0glsLq4m0dre3lVpAY5ZVViiMAzhSVB
xXufxr+HPhf4+eHrPS/EsOqPb6dfR6laS6brN5pF1a3EYYJJHcWksUykB2HDgEE5zXUx+Nob
SJVUnCgKNzFj+JPJrb6tUlWnVe0qjkl5OEY2v5WfTt6GP1iEVFR/kUfnzSd/P4l+J5P8T72L
9oP/AIKC+DPB+YbnRfgxpx8c6vGULY1W7E1lpaH+H5YhqM2OSrLAeODXkvx3/bD+Gf7WP7UC
fCe6+JXw/wDD/hHwJ4itP7et7/xHaWuqeLNctriOW20m1tZHEpt4p1heaUD95IqQR7v35X6v
m+I6KOv61Qu/iLG5P+NaQwlRct+jb9Xe6f8A26rLbXli31UoeKh7z6tJLyVtberbflzO2tmv
nv8Abx/ar+Hc37EnxtSXxloWmm10zXPB5GpXA0/z9XXT5T9ih8/Z50pDDAj3bucZwa8i+M3x
E8GfFjwl+zd4v/4WDa/8Ku8NX0+m+JPEHh/xAkFvos8ujvFF5+o27h7H94RA0iSRuDcqm9RI
c/aU/wAQIj6fnVc+OYd+eKlZdN35tb8l+143vp2d3p00u3Z3qWPg4xj/ACudu/v8qXzXKter
votLfNPwG+LP9ufs9+MfAlx4z1jW7T4j674h0H4S6nqkt1f3Wt2C6cZwftr7mmiSQXflXE0n
72KOMiR8qTxfwx+MOgfGu4/Zh0zw5dxPq3wkil1bxtp8Sk3Pg1LbQ7rT5bW+jUZt5jczBFif
a0gikZFZUYj7Ti+IcQXHH50SeNYJc9K0jgqsbuL1agrvVv2akk3rq25Ny79LbmDxVJ7/AN/7
ptcy+5JR7W6p2Xyj+0z/AMFHvAPj7TdB8BeG/il4O8HL8Q9Dg1m+8T67qsOkf2Nod0p2y20d
2UaW9nTcsUZXEPMsowqRzQ/tMWnwB+H/AIY0n7P49ufA3jLRPBENl4Gk8O+K7yxvNRsgG+wx
WVrDMIdWzLGu2DyrgOSishEihvrAeLoc9qf/AMJjEvp+dRUy2c4yinZt79UrSSs+8VJ2fnLT
3rKljaaavqktuj1i3fybW3lHXTX4/wDjf+3F9l8B+DfhJ4x8feCPhp8S/FXhayvvHWraxq9t
pUfhmCeELcfZEncCW9lcTJEgysOPNk4WOObttEsNF8JftpfBrT/CrWZ8Maf8KNbtdINnKJbc
2aXmgrD5bgkMmwLggkEYNfRQ8Zwn0rz+f4W+FovjbP8AEQW+qTeK5tOfSlnn1m9ntba2doWk
jgtHlNtBva3hZjFGrMUySSTnrpUKka7rNbuW2m8Zxirdo8/f+Z9bEfWKfJyX2SXfW8W38+X8
Eulzo/E8HxD1jxPpkvhXxT4N0bRYtv8AaNrq3he51O6uvny3kzx6hbrDlOBuikweeR8tN/ah
+JGg/Dj4a3F14i8d/wDCtbC9lSyTxIXtYl02V8lWMl3FLbR7tpUNMhQlgv3mWrul+J1tq0X8
YxSHmsng5uHKv1/4f07AsVBS5n/X9de587/8E89fbXtN+JUmn6lZeL/Dz+LZJtN8b26nPjTd
a2/nXDyB2ineGQNbGW2Edt/o4SKKJYtg+W/2mPg94K8E+Bv2uNbs/CumaReR+PPDVtLqGhaY
tvqqW0p0G5njgkgUTBnmZpcIctKd3LnNfpS3iqEjtXG/HPwDovx9+Hc3hnWJr62sJ7yyvmks
3RJg9rdw3UYBZWG0vCobjJUnBBwRFTAVJctt0lHXt7q8tLK1u2h00cxpxq88tnJN233vf18+
/TofEXifS/hTq3jnxqvwX8SeGvAvwwvvh1dWPjXxN4UKJoVhqEt1brp7zNA6Rm7SNr3zTuWV
YpB5joDGw7D9hXWNIvvAfiq28MeGvh/o1hp2si3bVfAWB4a8TSC1gzeWqqoVD0jkjVpAjxMv
myEE19o6h4qhkiIwK47xDfxT5wvX0FdWHwVSM032t5/E5b+V7Jdt76W4q+NjKL83f8Evvdlr
9yR5tcXFztYMTzXB+OdCuZ7gyJu4r1q9tvMc4WqFzo4uRymfwr3IUfdszxZVfeujwF4tUhmw
PMx0q9bJqixZy9exS+C4XOfK/SnxeEolGPK/Sn7F9xe1XY8etbnVTJz5mK7v4fXV2k8fmFut
dQfB0Q6RfpVmx0EWbAqnSrhTcXcUp3R1lhe5gGT2qSS73CsSO5kjXoaX7ZKexrouYcpeuJ8G
qzXf0qtLcSnsapTXEo9aVwsaLXeaY13WU11KezVG1zIexpcwWLWqMZUODWXa27xzc+tTNcyH
+E0xpZB/CfyqXZ6lK5sWEuJFrsNAn2xg1wWktJLcLwa9D8OaYZrXOKUdWN7G9Z3qiKqOr3ay
Hipk0hkHemHRS8vNaamZ554/0qe9DFM15nceHNQS4/j619Jy+H45l+ZQapTeCLWQ58tfyrB0
Lu5vGtZWPBbKG808/wAYrrPCms3rsvL16Ld/Dq2mP3B+VaHh/wCHdvar90flUewlcr2yMO0v
Li4UZzVv+zZJYS2DXcWXguHA4Falv4RiEWMVp7NkuoeX2sMkZ27TzUtzpkipvKnivS18FQK+
dtTS+E4ZIsbRR7J2F7Q8qsGbzh8p4qLxU8ssmFB6cV6jH4EgjP3f0on8CQTtkrR7J2sHtFe5
4DJYXv20MN3BrTKXTw8hq9o/4V7a5+4Kf/wgNqB9wVPsWV7ZHkGiyT2UZ4apNQD6tKMqa9Zb
wBbBcbBTI/h/bxnhRT9kxe1R43cafNp0oKg0ySaa5b7pr2mXwHbzdUFVm+GtsHyEFHsWHtUe
K6noNzdxFkDZrj9Z0PU4GO3fxX1DH4GgijxsH5Vm6p8PbeRT+7X8qXsGCrWPlC9t9TKkHzM1
QhsNWSYY8zGa+n7z4ZWjtkxL+VVx8NbRT/ql/Kl9Xfcft0eB6auqwTLkyYrcuIL2/tjnfuxX
skfw+tVP+rX8qnj8D2ydEFUqL2J9srnzrP4f1BJ/4+tXLXw/qLp1kr3+TwJasfuL+VPh8E20
f8C/lR9XH7Y8L0zQtQhuVzv616/8NLaWCzG/P41uJ4Ots52CtGx0xLJcKK1p0uVmVSfMXFlx
61IJfrUFKrYrcxJw+acJOKhp4ORTGiTzfrR5v1plFAiQTZGK5T4gaTLqVs4TPI7V0+aVolmH
IqWrqw07M8Z8OeA7y01NXbdgHNdf4hQ21iQfSuy/s+Neij8q5b4gRkW77R2rPk5Uac7bPPZb
jcrD3rz34maDcakH2E118ssscp4PWoLpDdLhlqN42L2Z5Z4F8H3dpeKWLY3V7V4YLWyKGPpW
FbWH2Y5CVpWVzIsy/KaIrlQSbe57D8PZN8AruYv6V5n8O9T8m3G6u7j1tAvWqiyTUorN/txP
71H9uJ/eqrokvsuDSVROuJ/eph15PWoCxoMuRSK2DWc2vIO9N/t9PWgo2onyPerUMma51dfX
Oc1ah8QoO9AzfRtwqVDWJF4gQHrU415B3oA2o2qeN6w019PWpk8QJ60CNqisoeIU/vfrS/8A
CQJ/eoGalFZf9vp60v8Abyf3h+dIDToBxWYNej/vfrS/29H/AHv1qbAaoOaWsr/hIYx/FR/w
kMf96iwzVorK/wCEhj/vUv8AwkCf3qQjUorL/t9D3o/t5PX9aANSisv+3k9f1o/t5PX9aBmp
RWX/AG8nr+tH9vJ6/rQBqUVl/wBvJ6/rR/byev60AaTrTCMiqB19PWmnXU9aBFyRc1BKlV31
1P7361C+uJ6/rQBJPFiq08WRTZNcQ96rT62nrTGJNDxVO4izTptZTHWqk2sp60gIp4tv0qrL
HT59WUjr/wDXqnPqyUDGzRfnVeVMUTasoqrLqq0DLMMuxq1tNvtrCuZbVlVqlttdCt1pDO9t
bgOtWUlwa5PTvEgI61rwa4siCpKTN+CbFXYJ8jrXORawoNWYdcUHrSC50PmZFeVftEwIfHXg
aW4nitobyy1nR4JJW2pJdzS6TPFCGPG947K4Kj+LyyBzxXepr6gdaZqF/Z6tp09pe2tlf2V0
mye2u7dLiCZc5w8bgqwyAeR2rGvSc42W+5tQqqErvY8g174EeKZ0sptOt7iy1LS9SstVtJpr
GSaJJ7S6iuY96BkLKXiAIDqcE4Ir0Q/Gj9pD/oJeD/8AwkdQ/wDljWU3wR+GP/RMvhl/4SOm
/wDxmmN8EfhkP+aZ/DL/AMJHTf8A4zXmVsA6rvUim/V/5Hp0cwVNcsJNfJFP4qX/AMcvjh4O
PhzxPe6BLok17ZXlxHYeGbyCeQ2t1FdIqu97Iq5eFAco3BOMHBHGfE/4R6xpvwp8Xzy6feRx
J4f1Il2hYAf6JKOuK70/BH4Z5/5Jn8Mf/CQ03/4zTT8EvhoP+aZ/DH/wkNN/+MVnHBypxcYR
S+b/AMipY2E5KU23byRgfFf4EeJNd8TXsthbXFtd2muQ6paTS2Uk8QltrxLmPcgKlkLRKCAy
kgnBHWtka98eD/y8eDf/AAkb7/5YVOPgl8NT/wA00+GX/hI6b/8AGaevwQ+GpP8AyTP4Zf8A
hI6b/wDGaxqYNztzxT+b/wAjanjVDSDf3Iw/HegfGn4neBda8N6peeF003xBYT6bdm28J3iT
rDNG0blGa+ZQ+1jglWAOMg9K3fDfwm1qDV7N30+9CpOjMTC2AAw9qki+Bvw0J/5Jn8Mv/CR0
3/4zWhZfAb4Yv1+GPwyP/cpad/8AGaUMNKmrQil82VPFRnbmbfyR+cWm/sk+IPFHhfwpqEGl
6lLDc+FtEKulu7KwGmWo4IGO1buk/sY+JFAzpGqf+Ar/AOFfoBb/ALIvwUkOf+FM/B/Pr/wh
Gl//ABirkP7IXwXA4+DfwiH08E6X/wDGK/O6/h9Krq6i/E/bsJ42VMPFRhTdl6HwFd/sW+JL
izlSPTtUid0KrILVyUJGAenavQvFPhP4seKvhtoPg23tE8JeF/DjrNaWPhfTbvTy8yOJIpZZ
Xmlkd0kBlB3AmU+Y+91Vl+w0/ZE+DB/5o58Iv/CK0z/4xViL9kH4Kkc/Bv4Rf+EXpn/xitMH
wNjMLCVPD4hRUt7LX77XXyZyZt4sYPM6tOtmGE9o6fw3eivv7t+V/NM+K/jV8NfiZ+0TLoVx
4t0bS7nVNCg+zDV7LQ7i11C+i2nMc7CUxMpc+ZgRKFcts2B3VuLm/Yy8RbONJ1P/AMBpP8K/
Q5f2Pfgl/wBEa+EX/hF6Z/8AGKcf2Ovgiw/5Iz8Iv/CL0z/4xWeM4BxGLqqtiKycrW2t+SRt
k/jFRyrDPCZfh3CF27czertfdvttsfmnqf7FXiVm40fVST/06yf4V9Y/8EjPhbdfDrxf8TfM
UNbx6R4d0qWRGDLFeRXWvXMtuxHSRIb60dl6gXEeRyK95P7F/wADZOvwV+Dx+vgnTP8A4xXo
fhfSdI8D+H7XSdD0zS9E0mxUpbWOnWkdpa26kliEijCooJJPAHJJr2Mj4Q/s/EKu5XsfP8Xe
JtXO8E8HKFrtP7j8Uf8Ag6C/5Pq8Cf8AZOrT/wBOuq0VB/wc/XQk/bm8BnPX4dWn/p11WivX
q/HL1Z8NS+CPoju/grrH9ueHvghosskkcGo+BfBFo7RkblWXQtNUkZyM/Ma+wv2cP+Ccnir9
pP8AZ68BfEW28bfDTQrbx74c0/xHDpk3gjVr2TTkvLaO4WBpxr0QlZBIFMgijDFc7FztHxR8
AT/xOPgB/wBih4C/9Mml1+r/APwT4u77UP8Aglr8ELPTNQfS9Rn+Fegw2l6saSGzlOkW4SUK
4KttbBwwIOOQRXm1asoQ5l07HfTpxlKzOC0T/glJ4y0bGPiN8MGx6fD/AFYfz8QGuz8PfsAe
M9CUf8V18NZCO48C6kP564a8l8A/twfEn4m/DT4B6HHrrWHjy8vNXk+IU0Npa+bJb6Gs0F6h
jaJ44fPujbgMiKR5nygDiuK+I3/BWTx/rv7E+oah8P8AS70+JtF+HmneLtZ8RapqVnLeaS15
O0cIjt0sUtrx9sUjSHy7dETlVZsJVxxFVNqEtnbyekndPt7stdNV6XX1WEpRjKNm/wAHzKNn
5pyjor7o+sof2RfHUC4Xxv8ADkf9yRqH/wAuaef2TPHjf8zx8Of/AAiNQ/8AlzXkXxC/4LH+
G/hv8eLnwjc2ugPp2h69p3hnVri58Tw2uuG6u1Q/aLXSzEWuLSJpYRJL5qEZkKo4jyem8Hf8
FM28UeP9Eabwf9k+HPizxhd+BdD8RjV995dalb+Yu6axMKiG3kkguER/PeQlELRIHyukcbiJ
W5Zb7efw/nzRt3voYfVqHLzOPS/p8T/DllftZ3O1P7JHjs/8zx8Of/CI1D/5c01v2Q/HTf8A
M8fDr/wiNQ/+XNezf297/rXzn/wUK/a01v8AZkk+EeoaXe3kOn6344g0vWLS0so7u51W1a0u
n+yxIys3mSSxxKvllXLEAHk1DzCsmryerS+9pfqafUaLTajsm/uV/wBDpT+x/wCOD/zO/wAO
v/CJ1D/5c0xv2OfGz9fG3w7/APCJ1D/5c14Z8Pv+Clms/Df9mrwt8QvG/iPw7res/GbxJNb6
DpWo6hB4e0DwVCnml7O71BrcyhoFhdZZJI5GefCIoX5quWn/AAWs0q4+GMPij/hEFls7nw5r
d9b/AGXXUuUvNZ0u4SGXS4ZEhKyJKJEliuR9+M58oHir+v4hXXM7rfytHme29lpdXTeibbV1
9RoXXu7uy89XFfe099UtXazPZG/Yx8aP/wAzt8PP/CK1D/5c1G37FHjJ/wDmdvh7/wCEXqH/
AMua8J0P/gpT8Sfhj8X/AIxX/jDw7Hq3w88IeK9B0e4K6jDb3PhVL+1tQ6wqtt/poS4nzI0s
sZVWXbuyVX0vwv8A8FOjr3jbSJ5/B32T4ceKPF154H0PxENYDXl1qNt5g3TWTQqsFvJJb3Ea
P9oZ8ohaJA+VFmGJsnz7pfiou3r78fVvS5H1LD6vl0Tf4c3/AMi33SWtjp3/AGHfF0nXxr8P
f/CL1D/5c1E37Cfixv8AmdPh9/4Reof/AC5ryrwd/wAFk/7f1nxhZ3XgnTv+JD4G1Pxvplxp
fiOTUbTU0sTiS0a6FmlqZMkK0lnNeQo6yIXLJg3fGH/BSP4jXfw/0O1034c6JZeMPGvgy/8A
GthDB4sa5Gk6XDaxSLM7PYKkl55s6qtsAYiV+adVNTPNMTGDqc7slf8A9K/+Ql8k3tqaRyzD
ufs+VXvb5+7/APJR+bS30PRW/YK8Ut/zOnw//wDCM1H/AOXNNH7A/igf8zp8P/8AwjNR/wDl
zXa/sZfFzWPir+yN8MfE3iC9/tDXfEHhbTdR1G68tIvtFxLbRvI+xAqLlmJwoAGeAK9A1zxO
9not5LG+2SKB3Q8HBCkitcVmOKw7mpzfu3vbyMaGAw1ZRcYL3rfieGL+wX4pX/mdPh9/4Rmo
/wDy5p4/YT8Vj/mdPh7/AOEXqH/y5r4bi/4KXfHY/wDBOKfXT8QD/wALHbUjq0esf2Jp+9NI
GkPd+X5HkeTj7TE0W/ZuweueK+tde/4KgT+EtR1u7bwj/aXgPwJqmk+HfFXiQ6sILy2v71Ic
tBY+QVlgia5txI7TRH94+xJNnzX9fxfO6XtHzK2nrZfm+V3+15WbHgMMoqXIrNN/d+rXvJb2
+47AfsK+LB/zOnw9/wDCL1H/AOXNL/wwr4s/6HT4e/8AhF6h/wDLmuT+GP8AwVGb4i+NPCGk
f8IVJa/8JV418SeD/MTVvOa2/siGWX7QE8ld5m8vHl5XZu+8+K5f4T/8FntM8ZeD/F/iPXvD
GlaRpXhTQ77W7ix0/wAW2194hsDaXJt5LS/0qWO3uLWctswVE0I3gGUfKWx/tbE25vaO1ub5
cvNf7v8ALfQv+zMPzcvIr3t8+Zx/NP7r7anqn/DC3iz/AKHT4e/+EXqH/wAuaT/hhTxWf+Zz
+Hv/AIReof8Ay5rP/Yt/4KQad+1n438R+F3g8K22u+H9PstX3eGvFUfiTTp7a5DDb9pSKHZc
RSIySRFMLuQq7huPon+3vf8AWtZZhjI7zf8AWn56ERwOFe0F/Wv5Hgw/YV8WL/zOnw9/8IvU
P/lzTv8AhhjxaP8AmdPh7/4Reof/AC5rH8bfFfxv8cf20de+Gfhzxvqvw60DwL4Zs9Xv73Sb
CwutQ1W8vZZlhQG9guIkgiSBiQsYdmcfMAOfMvhH/wAFRviNr2oeG/AcfgPRfG3xAlm8Tafq
GoDWm0TT5pNEuUhMwQW9wy+ekiEADCynbgIdy5LNsS1d1Hs38o35n8rde6tcuWW4dP4Ful85
JNL53/zPZf8Ahhnxb/0Onw9/8IvUf/lzSH9hbxYf+Z0+Hv8A4Reof/LmvnLxX/wVn8beOLvW
/EPh3SrbTfh/B8GJ/HsccGqRRa5b3X2h4CQ0tlcW/mRzQPEqMrR7WaVt52wr6tf/APBUuXQX
1TUh4Pk1D4f+CtQ0fQfFHiKbWVjv7W+vo4DmGzFuEnhha5txLIZYf9Y5jjcJg6xzDGOXKpu/
b/t+UP8A0qNvmu5EsBhYq7gv6jGXre0lpvo+x2j/ALB/ip+vjP4e/wDhF6h/8uaif9gPxM/X
xl8Pf/CL1D/5c1yPww/4Kkap47+O+keF7z4drp2ha94p8Q+EbHVYPEH2q6kutISSV3a1+zoF
jlRPlPmlg+QVIAdtH9in/gpq37X3jqfSG8OeH9CxZ3V2LWHxhb3et6W9vdG2kt9S0ySKC5tZ
c7WyizRfNgyjK7phmmLlblqPVX+Vr/LTv5d0OeW4WF+aC0dvndr80/ufY2v+HfniT/ocfh7/
AOEXqH/y5o/4d+eJP+hx+Hv/AIReof8Ay5r6D/t73/Wvm79qP4n+NdS/bM+EPw+8O+Pde8D6
H4r0jXb7VJtIstNnuZ5LVbUw4a9tbhVA8x8hVGc9eBR/a2M5lFTd3f8ABN/kg/svC2b5Fp/n
b9S9/wAO/PEn/Q4/D3/wi9Q/+XNH/DvzxJ/0OPw9/wDCL1D/AOXNeGeF/wDgsPqXwx8C2nhj
xWfB+t/EPStV8Q6Veapr3iCHwnpOpw6RP5InEximQ3dxvhAhiQRl/OO6JVC16Jpf/BW+w8T+
Ff7T0nwhc3B1rTfDWpeF7efUvKk1wavdmzeNtsT+S9rOriTHmAhQQRuqoZpi5pOFRtO1vPmv
a3rZ+lncmeWYWDanTSte/lZpO/3r77nYD/gn74kH/M4fDz/wi9Q/+XNH/Dv7xJ/0OHw8/wDC
L1D/AOXNef8Awp/4KTeIvGF7p3hPw34bk8YeOdd8QeKI4YPEHiKDTraz0/SrxoC7XFtp/wDE
7RJFH9nZsFvMmO0uyftFf8FibT9nPXTpWteFNK0nWdD8O2viPxJpOu+MLPT9Qt1ncr9k01EW
VNSukVJXKLJEmBGA+59qxHN8U4xn7R2kk1803+Fnfs1Z62L/ALLw3NKPs1eLafyaX4tq3dO6
0PQv+Hf/AIl/6HH4ef8AhF6h/wDLmj/h3/4l/wChx+Hn/hF6h/8ALmvdvD/ju28UaDZanZTe
dZajbx3VvIP+WkbqGU/iCK4j9rDWfGT/ALNnjZ/h5qz6N43t9InudFult4bki5jXzETy5UdG
DldnKn7+RyAaK2b4ylGUpzfu7/IKOV4Wq4qEF71rfM8//wCHf3iT/ocPh5/4Reof/LmmP/wT
38RP18YfD3/wjNR/+XNeM/GT/gov4r8U+GtL8VeD9ZvrDw54e+Dtz8RPE1tpq2S3Fzc3SKlj
brLc21ysDo8V2/3GGY8Ojjiuq8a/8FIPiFLo3xkg8H+BfDlwvwf8K2us3Wta74mkje6luNIN
/HstILIiQoQQymWFW4KsuSFdXNcZTU3Ko/cTb+UpR/OL/DuKjleGqyhGNNXla3zUX+U4/j2O
5P8AwTu8QE/8jh8Pv/CM1H/5c03/AId16/8A9Df8Pv8AwjNR/wDlzXl1n/wVq8R/DX4QanN4
q8FWWreIfCPwu0fx5ez2evbYtWe8m8jyx/oi+USAJWIUhWcoAQoduv8AjZ/wVUHwY/aG0zwf
L4V07UtJutd0rQLy+tdfe41DTpL8J5c01rBayw28eXwq3V1bzS+XK0cTqu46yzHGqqqXtHdu
y82pOP8A6UmtfXYzWBwjp+15Fa1/laMvykvvtudCf+CdOvH/AJm/4ff+EbqP/wAuaQ/8E5te
P/M4fD7/AMI3Uf8A5c18xSf8FV/iT8L9Y0XUdWvr7xbbSX/xCgGh2el2/nau2mXapp0IMUXm
KsUe8u68+Wru+/bXquo/8FItS/Zb+EHw40/xR4u8G/Erxz480K58WtrWs67a+EfD5tFRJFgt
ZxbMXYmWOOBHjLyAO8jpjFYRzjFOnGr7R2aT+/mdvW0W/RGzyrDqbp+zV7tfc0m/S7SXdnpc
P/BO3xBA2V8YfD/j/qTdR/8AlzWpa/sNeK7NMJ408AD/ALkzUP8A5cV5xp//AAWBsvElrpz6
P4Lur2XxVZeGr/wrby6oIptaj1a7a0nDgRMIWs5EfzMGQNheVDZHLfs3/wDBTbxt4cRp/ifp
ial4Q1v4l694O0/xNHdxRXNgbeS4a2hayit1V4BHA0Ym80ylwQYzwx0/tTGRk06jVr+l4uMW
vW8l+K30M/7OwjipKCe33OLkn6Wi/wCrnu3/AAxR4v8A+h28Af8AhGX/AP8ALikP7E3i8/8A
M6+AP/CMv/8A5cVwXgf/AIKpXGr6bo+p+IPAsXh/SPHXhjUvFngyc+Jrdm1S1slEjR3rTJBB
YSvC8Uo/fSxqrPvkUpg8jb/8Fo54/hb8UtZm+H1pdav8N4dEuoLXTtfuXsNeg1O4WCOSC7ub
C3JVSdwkWF4ZVKtHKytuD/tTG35ed37dd3H800+3Uf8AZmEtfkVu/Tp126rXb8T2r/hiXxd/
0OvgH/wjdQ/+XFH/AAxH4t/6HXwB/wCEbqH/AMuK8Q/bb/4KNfFj4cfC/wCJGieHPCmh6H42
8CeFrfxBrWq2HiD+0YdEFzdNFbC2juLBFvGKRO8vmLEsa52mVgAftrTfEjT6dA7tlnjVmPqS
BR/auMtze0dv+C1+aFLLcImoumr/APAi/wApI8U/4Yi8W/8AQ6+Af/CN1D/5cU+L9izxhAfl
8beAP/CM1D/5cVV/4KTftF+Iv2ef2UNS8VeF9Sm0zVrPV9Ig8+KzS8fyZtRt4plWJkfcWid1
AClufl+bFeLeLP8AgrMfhr8b/i7qN42uXHgzw5pHhyDQtI1zRpfDDR6lf3FzCzzSXtvFPFbk
rGXmkV0RI3ZFYgqYjm+Le1R72+dk/wAbpLzKeV4VJy9mtk/vdvw3fke+x/sh+N4unjf4f/8A
hF3/AP8ALipl/ZQ8dL/zPHw+/wDCKv8A/wCXFfOtx/wVf1L4q+Jvh/D4Zu9DsZrH4iz+GfFN
voer2+v6VrFqukXV7E1rfmBS0TlIzvWON1eN0IwPm9E/ZQ/4Ka6v8f8AxB8NrfxJ8PY/Bll8
XNAu9c8NXEPiD+0nkNoU8+G4Q28Plko4kQo0m5PvBD8ouGa42d7VHp+sOdffH56PTa8Sy7CR
aTgtf0lyv7n8tb97ej/8MqeO/wDod/h7/wCEVf8A/wAuKP8AhlTx3/0O/wAPf/CKv/8A5cV6
7/b3v+tH9ve/60v7Wxn/AD8ZX9mYX+RHkX/DKnjv/od/h7/4RV//APLij/hlTx3/ANDv8Pf/
AAir/wD+XFeu/wBve/60f297/rR/a2M/5+MP7Mwv8iPIv+GVPHf/AEO/w9/8Iq//APlxR/wy
p47/AOh3+Hv/AIRV/wD/AC4r13+3vf8AWj+3vf8AWj+1sZ/z8Yf2Zhf5EeRH9lPx2R/yO/w9
/wDCKv8A/wCXFJ/wyj46/wCh4+Hv/hFX/wD8uK9e/t73/Wj+3vf9aP7Wxn/Pxh/ZmF/kR5D/
AMMo+Ov+h4+Hv/hFX/8A8uKP+GUfHX/Q8fD3/wAIq/8A/lxXr39ve/60f297/rR/a2M/5+MP
7Mwn/PtHkB/ZQ8dH/mePh7/4RV//APLio5P2RvG8o58b/D7/AMIu/wD/AJcV7H/b3v8ArR/b
3v8ArR/a2M/5+MP7Mwv8iPFX/Y18ZSdfG3w//wDCMv8A/wCXFRn9ivxg3/M7eAP/AAjL/wD+
XFe3f297/rR/b3v+tH9rYz/n4w/szC/yI8Q/4Yp8Yf8AQ7eAf/CNv/8A5cUo/Yq8YD/mdvAH
/hGX/wD8uK9u/t73/Wj+3vf9aP7Xxn/Pxi/szCf8+0eJf8MW+MP+h28Af+EZqH/y4o/4Yt8Y
/wDQ7eAP/CM1D/5cV7b/AG97/rR/b3v+tH9r4z/n4x/2ZhP+faPEh+xd4xH/ADO3gD/wjNQ/
+XFL/wAMYeMf+h28Af8AhGX/AP8ALivbP7e9/wBaP7e9/wBaP7Xxn/Pxi/szCf8APtHin/DG
XjH/AKHX4f8A/hGX/wD8uKP+GMvGP/Q6/D//AMIy/wD/AJcV7X/b3v8ArR/b3v8ArR/a+M/5
+MP7Lwn/AD7R4r/wxn4y/wCh2+H/AP4Rl/8A/Lij/hjTxmP+Z2+H/wD4Rl//APLivav7e9/1
o/t73/Wj+18Z/wA/GH9l4T/n2jxYfsbeMx/zO3w//wDCMv8A/wCXFH/DG/jP/odvh/8A+EZf
/wDy4r2n+3vf9aP7e9/1o/tfGf8APxh/ZeE/59o8W/4Y38Z/9Dt8P/8AwjL/AP8AlxSj9jjx
mP8Amdvh/wD+EZf/APy4r2j+3vf9aP7e9/1o/tfGf8/GH9l4T/n2jxc/sc+ND/zO3w//APCL
v/8A5cVVvv2IvFuoDEnjT4fHP/Umah/8uK9y/t73/Wj+3vf9aP7Wxn/Pxj/szCf8+0fPL/8A
BPPxDI2T4w+Hv/hGaj/8uaT/AId4+IP+hw+Hv/hGaj/8ua+h/wC3vf8AWj+3vf8AWl/auL/5
+MP7Nwv8iPnj/h3j4g/6HD4e/wDhGaj/APLmlX/gnn4hU5HjD4e/+EZqP/y5r6G/t73/AFo/
t73/AFo/tXF/8/GH9m4X+RHg1p+wl4qslwnjT4fD/uTNQ/8AlzVkfsT+MB/zO3w//wDCM1D/
AOXFe4f297/rR/b3v+tH9q4v/n4w/szC/wAiPD/+GKPGH/Q7fD//AMIzUP8A5cUf8MUeMP8A
odvh/wD+EZqH/wAuK9w/t73/AFo/t73/AFo/tXGf8/GH9mYT/n2jw/8A4Yo8Yf8AQ7fD/wD8
IzUP/lxQf2J/GB/5nb4f/wDhGah/8uK9w/t73/Wj+3vf9aP7Vxn/AD8Yf2Zhf5EeHf8ADE3j
A/8AM7eAP/CM1D/5cUn/AAxJ4v8A+h28Af8AhGah/wDLivcv7e9/1o/t73/Wj+1cX/z8Yf2Z
hf5EeHD9iXxgP+Z28Af+EZqH/wAuKcv7FXjFR/yO/wAP/wDwjNQ/+XFe3/297/rR/b3v+tH9
q4z/AJ+MP7Mwv8iPE1/Yy8Zr/wAzv8P/APwi9Q/+XFPH7HPjUf8AM7/D7/wi9Q/+XFe0/wBv
e/60f297/rR/auL/AOfjD+zcL/Ijxcfsd+Nh/wAzv8Pv/CL1D/5cUv8Awx942/6Hf4ff+EXq
H/y4r2f+3vf9aP7e9/1o/tXGf8/GH9m4X+RHjI/Y/wDG4/5nf4ff+EXqH/y4pR+yD43H/M8f
D7/wi9Q/+XFey/297/rR/b3v+tH9q4z/AJ+MP7Nwv8iPGv8AhkPxx/0PHw+/8IvUP/lxR/wy
H44/6Hj4ff8AhF6h/wDLivZf7e9/1o/t73/Wj+1cX/z8Yf2bhf5EeNf8Mh+OB/zPHw+/8IvU
P/lxS/8ADInjj/od/h7/AOEVqH/y4r2T+3vf9aP7e9/1o/tXF/8APxh/ZuF/kR43/wAMieOP
+h3+Hv8A4RWof/Lij/hkXxz/ANDx8Pv/AAi9Q/8AlxXsn9ve/wCtH9ve/wCtH9q4v/n4w/s3
C/yI8b/4ZF8c/wDQ8fD7/wAIvUP/AJcUf8Mi+Of+h4+H3/hF6h/8uK9k/t73/Wj+3vf9aP7V
xf8Az8Yf2Zhf5EeN/wDDIvjn/oePh9/4Reof/Lil/wCGRvHP/Q8fD3/wir//AOXFex/297/r
R/b3v+tH9q4v/n4w/s3C/wAiPHP+GRvHP/Q8fD3/AMIq/wD/AJcUf8MjeOf+h4+Hv/hFX/8A
8uK9j/t73/Wj+3vf9aP7Vxf/AD8Yf2bhf5EeOf8ADI3jn/oePh7/AOEVf/8Ay4o/4ZG8c/8A
Q8fD3/wir/8A+XFex/297/rR/b3v+tH9q4v/AJ+MP7Nwv8iPHP8Ahkbxz/0PHw9/8Iq//wDl
xR/wyN45/wCh4+Hv/hFX/wD8uK9j/t73/Wj+3vf9aP7Vxf8Az8Yf2bhf5EeOf8MjeOf+h4+H
v/hFX/8A8uKT/hkXxz/0PHw+/wDCL1D/AOXFeyf297/rR/b3v+tH9q4v/n4w/s3C/wAiPGj+
yH44P/M8fD7/AMIvUP8A5cUh/Y/8bn/mePh9/wCEXqH/AMuK9m/t73/Wj+3vf9aX9q4v/n4w
/s3C/wAiPFz+x342P/M7/D7/AMIvUP8A5cUx/wBjXxo/Xxv8Pv8Awi9Q/wDlxXtf9ve/60f2
97/rT/tXF/8APxh/ZuF/kR4kf2LvGR/5nb4f/wDhGah/8uKY37E/jB+vjb4f/wDhGah/8uK9
w/t73/Wj+3vf9aP7Vxf/AD8Yf2bhf5EeGN+xD4ub/mdfAH/hGah/8uKY37DHixj/AMjr4A/8
IzUP/lxXu39ve/60f297/rS/tTF/8/GH9m4X+RHgx/YT8VN/zOvgD/wjdQ/+XFNP7Bvik/8A
M6eAf/CN1D/5cV73/b3v+tH9ve/60f2pi/8An4w/s3C/yI8CP7BPidv+Z08A/wDhG6h/8uKQ
fsD+Jx08aeAf/CN1D/5cV79/b3v+tH9ve/60f2pi/wDn4x/2bhf5EeDRfsKeK4fu+NfAH/hG
ah/8uKsxfsVeMYRx42+H3/hF6h/8uK9w/t73/Wj+3vf9aP7Uxf8Az8Yf2bhf5EeKD9jPxoP+
Z2+Hv/hF6h/8uKUfsb+NR/zO/wAPv/CL1D/5cV7V/b3v+tH9ve/60f2pi/8An4w/s7DfyI8X
/wCGO/G3/Q7/AA+/8Iu//wDlxSf8Mc+NT/zO/wAPv/CL1D/5cV7T/b3v+tH9ve/60f2pi/8A
n4w/s3C/yI8VP7G3jQ/8zv8AD7/wi9Q/+XFJ/wAMaeNMf8jv8Pv/AAi9Q/8AlxXtf9ve/wCt
H9ve/wCtH9qYv+dh/Z2G/kR4p/wxn40/6Hf4ff8AhF6h/wDLij/hjTxp/wBDv8Pv/CL1D/5c
V7X/AG97/rR/b3v+tL+08V/Ox/2fhv5EeK/8MbeNB/zO/wAPv/CL1D/5cU4fsdeNV/5nf4ff
+EXqH/y4r2j+3vf9aP7e9/1pf2jif52P6hh/5UeMr+x/43X/AJnf4e/+EVqH/wAuKkj/AGS/
HcXTxx8PP/CKv/8A5cV7F/b3v+tH9ve/60v7QxH87H9Sofynksf7L3xAi6eOfh3/AOERf/8A
y4qZf2aviGn/ADPXw6/8Ii//APlxXqn9ve/60f297/rR9fxH8wfUqH8p5av7OHxFH/M9fDn/
AMIe/wD/AJcU8fs7fEYf8z18Of8Awh7/AP8AlxXp/wDb3v8ArR/b3v8ArS+vYj+YPqVD+U8x
/wCGefiP/wBD18Of/CHv/wD5cUv/AAz58SP+h6+HP/hD3/8A8uK9N/t73/Wj+3vf9aPr2I/m
D6lQ/lPMv+GfPiQP+Z7+HP8A4Q9//wDLiuesV17w38TNd8K6/f6Fq1xpemabqsN7pWmTacjL
dy6hEYmilubgkobHcHDjPm42/Lk+3f297/rXkGoP/af7UnjJ+uPCvh0df+nvXf8AGuzA4qrO
sozloc2Mw1KFJuMdT8Uf+DnXj9t/wD/2Tq1/9O2rUVa/4OfbbZ+3L4CH/VOrT/07arRTq/xJ
erFS+CPojpfgC3/E5+AH/YoeAv8A0yaXX6cf8E/fEF9Y/wDBNL4JHS4bS71OH4ZaEbS3urlr
aCeYaVBsSSVY5GjQtgFxG5UEkK2MH8wvgJJ/xPfgAP8AqUPAP/pk0uoP2Qf+C8Pir4bfD6H4
XTWHww0S0+Ffwy0Z9Hvtf8QWWmv4kvDpNpJa2CpcvHtMgZw04Z0j2AuF8xRXmtXWp3xetj9C
fhL+zf438CftDeLPGsvg3wZa/wDCwGMGqLF8RLm6j0eGdozdy2UB0OLMspjSRlkmw7xqNyCp
dc/4JU/D7VPhrD4UtfFHxA0bSX8MWvhLVBYX1msmu2VrKZbY3Je2b95G7MQ0Qj3AlWDIdtfC
vwq/4OUfEnxc8GfD3VrKz+EGnP4g1WfT/F1prHiax0mfwZCk8SpchLt4nv0eCRph5CgAxtHu
L5x6B+z5/wAF6NZ/aj+IfhLRfBVz8NdTg1zVdc0/UmmnistR0qGyDvZXMen3DR3N2l5Cqyfu
lxBlw7ExvjOMFFJRe1kvle3/AKU/vNeeXM521evTf3Xf/wAlj9x93n9kvR7H4sah4o0jxh48
8OW+uaja6vrOh6RqUVrp+s3lvGsaTTMIftK7kSMSJFPHHKI13o2W3UfCf7D/AIN8H/Emy1uD
VfE8+kaRrt14o0nwxPdQto+kapcKwlu4QIhPkl5WEbzPEjTOVRTjH5vfCr/g5R8SfFzwZ8Pd
WsrP4Qac/iDVZ9P8XWmseJrHSZ/BkKTxKlyEu3ie/R4JGmHkKADG0e4vnHQeDP8Ag49tvHOo
eGbi11/4V23hvU9d1TR9a1G/uI7C60JIphFpt0LGdkuLiK9V4XZ412Wod/NfEMhVxXLZp7f8
D8uVW7WVrWI+zy20/wCH/wA3fvd33Z+uf9vf7X6143+1j8OfEPxW1z4d3+gaPousTeB9fHiJ
BqHiiXRVW4jieKJTs068M0bLNLuGYipRMM2SB+XfwK/4OcPFnxhl8Z6RqOkfCDwZ4s8Nz2kO
m2mseJbePStZV7sW93KNS2+QEt0ImAj85p03eUr4JryrXv8Ag8C8e6Rrl5aW/wALvBeqQWs7
wxXtrqUqwXiqxAljEtokgRgNwDorYIyqnIC5VdO+zT67p3X4ju7NW3TXyej/AAP1N+Gf7Ctz
d614k1HXBJ8Pb258Vp410GTwp4rTWBompyRSRXklul1pMCRpOhAeOUXCPvYgRFQTofH39iO8
+MC/B7w9LqkuveHvAXilPFGr674l1ye81vUCglLWywrCImSZnUNiSKOJF2pEVAWvyX/4jE/i
F/0R/wAK/wDg0/8Auej/AIjE/iF/0R/wr/4NP/ueiMVHls9mmv8At2yX3csfWyvcJNy5rrdN
f+BXv/6U/vdtz9hPE3/BP3wh4s+KniXxLd+JfGrWnjLXtP8AEWuaALq1/srUrmxWMWyuptzM
I1MSsVSVd5+8WUKosad+wN4FsfGAu5NR8T3nhy31e91+w8KT3kX9j6XqN3GyT3MO2IXALeZM
wRp2iRp3ZEU7dv45/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A3PSVOKjyXVtv
yX/tsbdrK2w+eV3K2r/4P/yT+9rqfrV4V/4Jk+DvDFq1u/jb4j6nap4Jvfh9aQXd7Y7LHR7o
DMMYjtEy0ZGUkk3tk/OXCqF634jfsTeFviFoHhGzj8QeLvD9z4P8OzeE4dQ0q6t0u7/S5oI4
ZrWcyQOpDiJG3Rqjqw3IyGvxo/4jE/iF/wBEf8K/+DT/AO56P+IxP4hf9Ef8K/8Ag0/+56co
qStJ7+v978+aV+/M77hGTi00tvT+7+XLG3aytsfvJ8HfBmn/AAR+E3hrwbpNxdz6Z4V0u20m
0ku3Vp5IoIljQyFVVSxCjJCgZ6AVv32prf2UsDuQkyNGxU8gEY4r+fz/AIjE/iF/0R/wr/4N
P/uej/iMT+IX/RH/AAr/AODT/wC56dVKpfnd7779SYPkSUVax+s6f8Es/hqnw/8A+Ec/tfxl
9h/4Q9/Bm/7bb+YbZpml+0f6jb9oG9kDbduw42Z5rp9e/YN8FeIPGt5qL6t4ph0XWb+w1fWv
DUV3D/ZGuX1kFFvdTqYTMrjy4SywyxxyGBN6Nzn8cf8AiMT+IX/RH/Cv/g0/+56P+IxP4hf9
Ef8ACv8A4NP/ALnp/a576/8ABv8Anr627IHrHla0/wA0k/wVv+HZ+wnhP/gn54O8G/GXTfGF
p4j8aAaNr2q+I9P0X7ZbJptnd6nE8V4V2QLOQwfcu6ZihUbCoLBsrx9+wVpep6Tqt5dal4j+
K2qNoV34a0rTfGviMW1tZWF5MjXEX263spLt2CqNklx9okBjUBlLF6/JD/iMT+IX/RH/AAr/
AODT/wC56P8AiMT+IX/RH/Cv/g0/+56j2cbKN9Erddrcv5Nr0b7l88ubmtrv+PN/6Vr66n7H
/shfD74mfCrXdRHjm/k1i0uLCG2t7u48dyazJbeRtSOGO0TSNPtowVLs8/zTOyqGLjBT3n+3
v9r9a/n3/wCIxP4hf9Ef8K/+DT/7no/4jE/iF/0R/wAK/wDg0/8Auerl7zu2Zx91WSP10/aO
+DXxJ8a/Hj/hMPAltpHhrU7bTf7GXX7LxwdPvtVsm2ymC5tJ9Cv7cCOYyGNlYuASQy7yg8/0
H/gn94rvfHvw0u4rrT/hdpPgTS9c0yeXwz4ol1rVrttQ8l2ujPe6akckssn2jzfMjO3crId2
PL/Mv/iMT+IX/RH/AAr/AODT/wC56P8AiMT+IX/RH/Cv/g0/+56z9nHv379U0/k+Zu3d3Lc5
Pp27dGmvyS9NNj9dvEP/AATW+H+q6JBpen6x4t8P6WvgR/h3c2mn3FqUv9MYyODI01vI4mWW
RpRJGyZYDcGXKnT1H9gLwRqPiie6/tfxXFoWp3mnanrPhuO7g/snXb2xVFt7qdTCZlceXCXW
GWOOQwJvRuc/jt/xGJ/EL/oj/hX/AMGn/wBz0f8AEYn8Qv8Aoj/hX/waf/c9afa5+bX5/wAz
l/6U3L113SJeq5WtNunZR/JJemh+y3hb9h3wb4S8Y6Drdvqnid7rw94n1vxZbK93EEe61WJ4
rhGKxK3lqsjeXtZWUgZZqvfDD9kzSPhz8W9P8Z3ni/x14x1bQtOutJ0X/hIb+C6OkWtxKskk
azJClxcfcRQ11LM4C/eyST+Lf/EYn8Qv+iP+Ff8Awaf/AHPR/wARifxC/wCiP+Ff/Bp/9z1M
YqNmnsrddrcv5ael+45NyvzLf/Pm/wDStfU/oI/t7/a/WvnT9rn4J+P/AIo/G7wd4v8AA01t
peoeDtPu7a11JfFcWmzg3ZUTxtbz6JqMTrshiKvuVss428Bj+QX/ABGJ/EL/AKI/4V/8Gn/3
PR/xGJ/EL/oj/hX/AMGn/wBz0cqunfb17W/Jj5ntb8j9afhn+w8YvB+gzXt7qvw08beHH1GC
PXfCfiZNZvtXgvyst5Jdz3umxxmSa43SELbjy2RDE6A7F9G1v9lTwp4i+IPw18S3t94lu9T+
F8EkGnvc6o9ydSDIqq168oaS4ZGUSKxYESZbnpX4of8AEYn8Qv8Aoj/hX/waf/c9H/EYn8Qv
+iP+Ff8Awaf/AHPTSSs09tvLfbTTd7d2Jtu91ve/nfe/fZb9j9hrD9gbwt4afTbzw74r8beG
PEOkatrOq2uuWE9k96g1WUzXlqVntZIGgL7CoaIuhjUh85Jt6n+xHosur2uo6X4++KPh7U30
W30DWdQsddV7/wAS2sEhkj+1XM8Uk6ygvL++t3hkAlZQwAUL+N3/ABGJ/EL/AKI/4V/8Gn/3
PR/xGJ/EL/oj/hX/AMGn/wBz0oxUUknotvuat6Wb063d9wcm221vv96d/W6Tv5H9AsGri2gS
NXYqihQWcsxA9Sckn3PNZfjrxTrth4VupfDVhpOra0uz7NaanqUmnWsuXUPvnjgnZMJuIxE2
SAOAdw/A7/iMT+IX/RH/AAr/AODT/wC56P8AiMT+IX/RH/Cv/g0/+56Gk93+YJ20SP1D+Dv7
Dms+H/BnxC8C6z4b0PSfB/xVjubbWL7T/H0upajo9q6TGO1sYX0W3j8lJJn2iRyV812JcgKf
ZNJ/ZD8I6VbfE6D7ZrtxB8WdKtdH1pJbiPEUFvYGwTyCsYKsYjkli3zcgAcV+LH/ABGJ/EL/
AKI/4V/8Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0nTi1KLfxJJ76pFKclJTS1TbXq7f5L7j9Y/G
v/BLzwd4+8LjSr/x18SPLn8LW/g7UZ4LjTYZtW062nM1rHMVsgAYWKgNEIy6oBJ5mW3bXjf/
AIJ4eEvGvj6+1seLfH2lW+o+JtP8Yz6TY3toti2rWaxLFc5e2eblYgGj83yzkkKrBWX8gv8A
iMT+IX/RH/Cv/g0/+56P+IxP4hf9Ef8ACv8A4NP/ALnq/tc/Nre/z5ub/wBK19bPoR9nltpt
+HL/AOk6emmx+y3w3/Yc8FfDHx/oviKz1DxDd3WhXev3kMF3PA9vM2szLNdLIohBKqygIARg
4dNMkuIw8ynEDSIGfLDNfJP7WX9h/wDDVHxL/wCEZ/sr/hG/+Er1T+yf7L8v7D9k+1y+T5Hl
/J5Xl7dmz5duMcUhFvUf2X9Wsv2TLD4xR6z4fu/D134kPhWWxhe4Go2N6IJLjEivCsRQwor7
o5X/ANagOGDhef8AhX8K/wDhaX/CR/8AFR+FfDn/AAjmh3Wuf8TzUPsf9qeRt/0O1+U+bdyb
v3cXG7a3IxX3X/wT1+NXw/8Ag/8AsH+G9J8W6v4Vt/EniP4rTXPh66k1OxvLrwPcTaS1na+I
biwklAMVtcxtlbnYFDLMMkRbvP8A9gnW7vwr44/am0TxD8SvCtz/AMJF4G17QDf33jK2t7Hx
ZrcsjJbXEUl1LH9q3/6Uy3BGFWclmTzhuYz4pr0vUf2X9Wsv2TLD4xR6z4fu/D134kPhWWxh
e4Go2N6IJLjEivCsRQwor7o5X/1qA4YOF+oPhN8Udf8A+GOv2fNL+DPxT8K/DjxJoGu63J43
+2eLrPw7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJBjr0H9jH45fD/wCFX7I9pp3i3xF8P7jx
J4o+Ml5eeHtVsJ7ER+EbifTXs7XxKdJk8kR2kFyjMsVzFAI0dJQilYgQD80692+FX7A+vfEr
wZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMe1/Zp/a10D9
krxx8ctM8b6PpXxJ1rxVoeveHP8AhJ7G/vL/APt66uJI1xLN9rhWTT5nieVp40F03mAq+Dge
1/Ar4w+G/HPwi/ZYOlXfwq0yPwJ4r1NvFNlrOv2unTeDreTxLp2qxS2K6hcrPJ/o9uYxKhnJ
ieeNmMhOAD8//HfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr0r
9nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3fa3
wm/aY8Bahrnj/VPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o
8hQhnHin/BPe0udO1z9onU/F/jHwVBqXir4f+JfCkV3rHjjS1n1jWrhoGGHlud0yStvIugTC
53HzTzQB4p8E/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDc
mIvtZgCgDnW+G/8AwTl+KPxH0P4r6mNKtdH034NQagfEV3qFxiD7VZKzT2MDxB1muAqMflPl
gbSzr5ke/wBg/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJ
bpPMkN7AJHS3G2BMyiV6n/BPfwLa/C7XP2ibTU/Gfw0jgv8A4f8AiXwJpd7J4x021g1rUnaA
Q/ZhcTRyPbyhSyXBRYiOrAggAHxpXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1
qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjb+D3jfwh+x14YuPFkNzpXjD42xX09jodkkYvNH8DmF
9h1WSXBt7+7YjNqsLS28YAmdncRxj6V+BXxu0/4l/CL9li5m8TfD/UtV8GeK9Tm8c3fi3xFZ
WWqaJ53iXTtW+3Q/bZ4pZZZIoHDTRCXcktxGfnYgAH5/+O/BGqfDPxxrPhvW7b7FrXh++n02
/t/MST7PcQyNHIm5CVbDqwypIOOCRXpXwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbi
bzY4UaWOztrgWqPJKFQ3JiL7WYAoA5+1vhN+0x4C1DXPH+qeKfHfgq7n8U/EDXtU+Bl1qpS7
PgW9na+b+1b1Gjd7CyluJbFljuVP76P7R5ChDOPCv2KNH8ReA/2gPCvjbUPHnwU1SO98cwv4
zk8Qa3od3rGlfZL5HnuluNRG5/NSSSVLnTZpfMK5L+YigAHn/jL/AIJ7ax8JPAGga94/8efD
/wABf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq8ESEMGBBPODwP/wT21jxnpHgm/l8efD/
AEW1+Juu3WheDHvjqjf8JM9vcx2rTxCGykMETTSqq/ahC/UlFHNfQHgz4o3/AMSvjT4Z8n4g
fBTxZ8CbH4j6oyaT40fRn1jRNHn1gXFxLK+uRLey/aoJfN82GWaRipDssqbB2uh/FH4c+Ibb
9n62+HmtfDQ+D/h18QNejv8A/hI9dtdMvfDWlN4rsNUs7q1TUZ4rh3aytwvmIsrGOSeJv3hY
AA/N/wAd+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEismvQP2svG
+mfEz9qj4l+JNEuftui+IPFeqalYXHlvH9ot5ruWSN9rgMuUZThgCM8gGvP6QgooooAKKKKA
CvQP+GoPGf8Awzx/wqr7bpX/AAgv27+1PsH9h2Hnfa92ftH2nyftHm7f3e/zN3lfus+X8lef
0UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFA
BRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUU
UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABX9Qv/BqF/yit0n/
ALC9/wD+lc9fy9V/UL/wahf8ordJ/wCwvf8A/pXPW1L4Z+n6oyqfFH1/Rn6Z18G/GOymu/iV
4l8rP+og/wDUk8YV95V8b67aLdfEzxVuGcQRf+pJ4ur0MmjzYhI4M2ly0G/66H49f8F+LWWD
9oj4Wq+dw+HcWf8AweazRXRf8HEdusP7U/w0UcAfDuD/ANPWsUVtioWrTXm/zMsPO9GD8l+R
6D8F/Gz2vxA+AlsD8v8AwingEfnomlf41+bfxS/ZV1D9oDxN498b/wDCUeFfCnhv4e+HfB7a
xea0b1sfbtKtYoPLS1tp3b94mD8oxuU9MkfoT8ILPd8TPgG/r4V+H5/8oek18jaPdaf4i/Z4
/aT8G/2/4V0rxJ4s8NfDT+x7PWtfstI/tD7PaW80/lvdSxodkYyfm7qOrAHzJu8Len6no00l
P7/0PmX9nv8AYc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBX
e6b49x8E/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvt
ZgCgDn1b/gl5p9rpGh/HDUNT8ReCtCg8T/DHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqI
sqwLgggdB+z98LI/gb8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3Se
ZIb2ASOluNsCZlEr850nmmi/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXA
D2NtcxRot0/lBpZEyVZhlMOavh//AIJ9ahqHiHw94e1X4l/Crw54z8R67c+GovDF3qN7eapZ
ahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7GXjfxh8LPip4c0jWviH8H5/hV8JPGVxdT32sXu
i6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6sNr72Wr8OvjL4Hl8Y/tL/tAWc+lQePtN12PV/hv
pniB7cyJLqWqTeZeLZliJ7uzhKyJgyRROd7K+1CADlH/AOCX/izRvEOh6V4g8ZfD/wAMX/i3
xXqPg/w5FfS6jN/b13Y3q2MzxG2s5hFF9pcRqbgxMcFtoXDHK8Zf8E9tY+EngDQNe8f+PPh/
4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegfsKfH/AMfah448Iav4t8ff
D/8A4V74O8Vy+ItSufG13o+oatZfvIr2/axjull1TzbgxjabNPnuH3Bg+9x6B4M+PetftC/G
nwzrl14u+Cmp/Bi6+I+qX8/hrxpF4di1jwtpl5rAvLsSrqUQlPnxTl91lNOMoV3K0YUAHz/4
H/4J7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/4SZ7e5jtWniENlIYImmlVV+1CF+pKKOa8U
8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivsv9m34w3OjftGW
smh+Mfhpp37O/wAPfiBf6xo6+LJdLu7/AEjSkulu3XTYL5JdXR5oIohGLaMM1wwORJ5jj5++
IFnY/tbftGfGLxfpmveH/Cumzz6140tIvE16tjPqEJummSxgC71kvXWUBYg2GKvhuMlCD4J/
shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc9D4f/
AOCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgys
QGx6V+zD4Ftfhh+zb4a8S/Dvxn8NLH4t/Eee903U9d1/xjpukXPww01ZvsxNvBNMJvtF0hkd
rqNGljgBSKMNJ5jcr+zDZ+Ff2Z/Dnxx8bT694K1n4ofCqexsPARe9hvNOv7qW+kt7jVLKB9p
vHt4kE0DlWjTesjxsQm1jOJ+IH7DHiT4ReGPFGseL9e8K+GLDQdd1Dw5pjX0t0ZPF97YvLHc
jTY47d3kiSSMRmeZYoQ8saM4bcFqfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqanqeoO8UE7Q
QmdrSAqjGS48sFtuAqgrvdN8e762+DPxysvi58Av2YbC98RfCrXI/D/ivWW+JqePJ9Ckvobe
61iG6eVTq/79/Nt5JWaW0JLMCCxkQBeK/Yy0/wAG6R8VP2otQ8OeIvBWheBPE/g3xX4S8GJr
Himz02e8a4mhawiEN7OlyEaELiWVQuVYM4YMAAeE237Fsth8CfA/xC8SfEX4f+D9F+If2/8A
saHUk1ae6l+xXH2efetpYzqmH2kZbkOO4IFvwD+xBbfEWXwlb2Pxk+D41Lxzqsuj6NpxutUl
vZplu/ssbTRRWDtapMxR4jc+UWjkDELhwvsH7KuteLhB8J/DniXxt+z/AK/8MvCPiubTtW0H
xBP4anuvDlodQjkv2WW/jDTRXCO8iTWE06uFwHVkVRz/AMHNe+F3wh1X9or4o+D7zw//AG74
D1WAfCOw1abzEMN1qUsQvobS5IluLi0tVjkj80OImIkkRmVSoB5/8Hf2B9e+M37UfiD4PWvi
3wVpPjfQ9VvNHhg1B7/yNWmtPtH2hoJYrWRQiLbO2ZvKJDLtBOQPCa+tv+CQ3idbf/goToHx
G8Y+K/D+m6bpE9/da5q/iTxLaWU8813Y3kayf6TMsty7zON7RhypcF8bgT8k0hH0D8Df+Ce2
sfHf4QeHPGdn48+H+j2HijxXH4Is7XUjqguhrEo3RWriGykQb4yriQOYwHAZ1YMoqaN/wT+8
VWdm9x438Q+CvhXBJ4km8KWMviu/miTVb+CVobryDbQzg29vIoSW6bbbozqvmk7gvsHwa+OV
/wDAL/gkzBdeGfEXw/g8cWnxWXxNbaffT6NqeqWlktilql3FZXXmSRyrdRrtZIxMqZkGIiXO
r8PvjF4E/aM/Zx+CNv45u/CviL/hWWu6zH8QbTxfr9xYalNZarqMd/Jq+ntHcwy38qRQzo0a
tLMZZB/o0m9HpjPKvBv/AAS38ceMfA82p/8ACTfD/StVh8V3ngT+wdR1G4gvn8QW8csn9mLL
5BtDLKsX7p/tHlO0kaeYHbaPnTXtCvvC2uXmmanZ3Wnalp072t3aXULQz2syMVeORGAZXVgQ
VIBBBBr9K/henw58BfCuHTtM8e/DTQfDHgr9pa58eWkE/iy1uJ18NadDJGksUKyyXU7yfZxH
CgRpJS8bgFH8yvz/AP2lPiRY/GP9ozx/4v0yK6g03xV4k1HWLSK6VVnjhuLqSZFkCsyhwrgE
BiM5wT1pCPQPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrG
DO0ZJDHGzDHx/wAd+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv
vX9mz4o6D4h+A/7JdtZa18NC/wAOvEmoR+Lv+Ej12w0y98NQt4j03VEurVLueJ3doLdl8yBZ
QY5J4vvkgfGn7WXjfTPiZ+1R8S/EmiXP23RfEHivVNSsLjy3j+0W813LJG+1wGXKMpwwBGeQ
DQB5/RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAegeCP2l/E
Xw/8MW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOV8b+Mrv4geJ7nV7+HS
oLq72b49N0u20y1XaioNlvbRxwpwoztQZOWOSSTk0UAegf8ADUHjP/hnj/hVX23Sv+EF+3f2
p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+iigAr0DwR+0v4i+H/hi20iw074fz2tp
v2Sal4E0PU7ptzs533FzaSTPyxxuc4GFGAAB5/RQBreN/GV38QPE9zq9/DpUF1d7N8em6Xba
ZartRUGy3to44U4UZ2oMnLHJJJyaKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
v6hf+DUL/lFbpP8A2F7/AP8ASuev5eq/qF/4NQv+UVuk/wDYXv8A/wBK562pfDP0/VGVT4o+
v6M/TOvjzUefif4r/wCveL/1JPF1fYdfHt6u74peLf8Ar2h/9STxdXo5L/vKPPzjXDs/Jr/g
4s/5Ot+G3/ZOoP8A086vRSf8HGJ2/tX/AA2/7J1B/wCnnV6K6MV/Hn6v8zHDX9jD0X5He/BG
x83xv8An/wCpT8AH/wAoelV+a/xE/ZO1D476x438Zf8ACUeFfCnhv4c+GvBv9sXmtG9bH23S
bSGDy0tbad3/AHiYPyjG5T0yR+oPwCsfM8QfAF/+pQ8An/yiaXXwppNzp/iH9nr9pTwb/b/h
XSvEnivw18NP7Hs9a1+y0j+0Ps9pbzT+W91LGh2RjJ+bjKjqwB8iXwHqx+P7/wBD5k/Z7/Yc
8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8E/2Qm+N
Vn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOfVv+CXmn2u
kaH8cNQ1PxF4K0KDxP8ADHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLgggdB+z98L
I/gb8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZIb2ASOluNsCZl
Er4G55pov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaW
RMlWYZTDmr4f/wCCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4o
N8uCjySBHRgysQGx6V+xl438YfCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7oupm2ht5o57yTRk
uI5NSKXIgUwtZRASTOrDa+9lq/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3m
Xi2ZYie7s4SsiYMkUTneyvtQgA5R/wDgl/4s0bxDoeleIPGXw/8ADF/4t8V6j4P8ORX0uozf
29d2N6tjM8RtrOYRRfaXEam4MTHBbaFwxyvGX/BPbWPhJ4A0DXvH/jz4f+Av+EjvtW021sNS
OqXl0lxpl41neI5sbK4iG2VeCJCGDAgnnHoH7Cnx/wDH2oeOPCGr+LfH3w//AOFe+DvFcviL
Urnxtd6PqGrWX7yK9v2sY7pZdU824MY2mzT57h9wYPvcegeDPj3rX7Qvxp8M65deLvgpqfwY
uviPql/P4a8aReHYtY8LaZeawLy7Eq6lEJT58U5fdZTTjKFdytGFAB8/+B/+Ce2seM9I8E38
vjz4f6La/E3XbrQvBj3x1Rv+Eme3uY7Vp4hDZSGCJppVVftQhfqSijmvFPHfgjVPhn441nw3
rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo37RlrJofjH4aad+zv8A
D34gX+saOviyXS7u/wBI0pLpbt102C+SXV0eaCKIRi2jDNcMDkSeY4+fviBZ2P7W37Rnxi8X
6Zr3h/wrps8+teNLSLxNerYz6hCbppksYAu9ZL11lAWINhir4bjJQg+Cf7ITfGqz8KBfiN8N
PD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHPQ+H/wDgn1qGoeIfD3h7
VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeKDfLgo8kgR0YMrEBselfsw+BbX4Yf
s2+GvEvw78Z/DSx+LfxHnvdN1PXdf8Y6bpFz8MNNWb7MTbwTTCb7RdIZHa6jRpY4AUijDSeY
3K/sw2fhX9mfw58cfG0+veCtZ+KHwqnsbDwEXvYbzTr+6lvpLe41Sygfabx7eJBNA5Vo03rI
8bEJtYzifiB+wx4k+EXhjxRrHi/XvCvhiw0HXdQ8OaY19LdGTxfe2Lyx3I02OO3d5IkkjEZn
mWKEPLGjOG3Ban7Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBb
bgKoK73TfHu+tvgz8crL4ufAL9mGwvfEXwq1yPw/4r1lvianjyfQpL6G3utYhunlU6v+/fzb
eSVmltCSzAgsZEAXiv2MtP8ABukfFT9qLUPDniLwVoXgTxP4N8V+EvBiax4ps9NnvGuJoWsI
hDezpchGhC4llULlWDOGDAAHhNt+xbLYfAnwP8QvEnxF+H/g/RfiH9v/ALGh1JNWnupfsVx9
nn3raWM6ph9pGW5DjuCBb8A/sQW3xFl8JW9j8ZPg+NS8c6rLo+jacbrVJb2aZbv7LG00UVg7
WqTMUeI3PlFo5AxC4cL7B+yrrXi4QfCfw54l8bfs/wCv/DLwj4rm07VtB8QT+Gp7rw5aHUI5
L9llv4w00VwjvIk1hNOrhcB1ZFUc/wDBzXvhd8IdV/aK+KPg+88P/wBu+A9VgHwjsNWm8xDD
dalLEL6G0uSJbi4tLVY5I/NDiJiJJEZlUqAef/B39gfXvjN+1H4g+D1r4t8FaT430PVbzR4Y
NQe/8jVprT7R9oaCWK1kUIi2ztmbyiQy7QTkDwmvrb/gkN4nW3/4KE6B8RvGPivw/pum6RPf
3Wuav4k8S2llPPNd2N5Gsn+kzLLcu8zje0YcqXBfG4E/JNIR7t8Kv2B9e+JXgz4e61e+LfBX
hFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+P+O/BGqfDPxxrPhvW7b7Fr
Xh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX3B8DvE+g/EL4PfsXGy8V+CrJ/g74y1K68XQ
6x4lsNHn0qGTWrS8SQRXc0TzoYAzBoBIMqy/fBWvVv8AhpnSfGvjb4Ra18Lfid4f8NeHrb4u
+Jdc+IUbeK7fws+p2Vzr8NzbT3VtczW8l6jafgD5JMKpiOGUoGM/LOiv1W+A/wAQLfUPhBZe
L/BnijSvDPgzQP2nbmKxv7jW4fDtrZeEZhFeXFhALiSHbaS4imaxQfOYwxiJQ45X4IfHDT9T
1DUtL8I/Ejwr4B8A3nxH1zVvDur+GPFll4T1DwpE8zG3OraNfi2h1jT3H2OREXfLHDDLCHGP
swLBY/Onwr4E1zx1/aX9iaNqusf2PYy6pf8A2G0kuPsNpFjzLiXYDsiTcu52wq5GSM1k16t8
PdWtF8cfE6W7+Kf/AAhX2rQ9VWG68O6VcxWPjGVpFK6WsECwfZ7S65IEsSRRqihol4UeU0hH
u3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZh
jq6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyV
ZhlMOfoD9mz4o6D4h+A/7JdtZa18NC/w68SahH4u/wCEj12w0y98NQt4j03VEurVLueJ3doL
dl8yBZQY5J4vvkgeg6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZa
xb3trFfSwSl5LWHG+JJP3cs8RAckKxnx9ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE9
5cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1
ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv8A7Q/g342+L/gvrXhfxH8NL7SfDfxO
8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv8A7Q/g342+L/gvrXhf
xH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgH5k+O/B
GqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRXoHwE/ZO1D47/DDx54y
/wCEo8K+FPDfw5/s/wDti81o3rY+2yvDB5aWttO7fvEwflGNynpkjJ/ay8b6Z8TP2qPiX4k0
S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAa9r/YpudP8Q/sJftM+Df7f8K6V4k8
V/8ACLf2PZ61r9lpH9ofZ9Qmmn8t7qWNDsjGT83GVHVgChHlXwR/Yk+J/wC0Z4e0zVfBvhn+
2LDWNdl8NWcv9o2lv52oRWT3zw4llQjFtG8m8gIcbQxYhatfs9/sOePf2k/hX498b6FZ2tp4
T+HWlXOqanqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e772/4JAftTeB/hn+y98PNH8SfFz/
AIR+6sviPqUlxpOpajb2kFvaPol2VhcSXYK6eZmWYSmPab1hH5WT59fOn/BPe4tbrXP2ifEG
p+N/D/keNfh/4l8NaXqHizxNpularrmpXTQSQmaC4uzIrzDLNKWeIPvBmYqTTGeFfsifsnah
+2T8T4vBuheKPCuheJL7d/Z1nrRvU/tLZFLNL5bwW0yL5ccLE+ayZ3KF3HIHlNfW3/BIbT7X
4V/8FCdA8ReJfEXgrQNC8ET39vqt9qHinTbeANLY3luhgZ5wLpDJgb7fzFAZWJCspPyTSEe7
fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGP
a+F/+CM3xz8ZaR4TvNO0XSrmPxNruoeH7sJfbv8AhGbiyuZ7ad75gpVYt9tOVkhaUNsVR+8k
iR+r/YIun8DeGPhpNafEv4f6r4M8ReK3n8d+EvE2pafo8ng14XW3j1OymurmK5F2bSdpo7rT
/LdHgRC0hj2D1bR/2utD+BHhr9lPwl8Mfiv9m8Cw+OddsfEKtq0cEx0T/hKbee0m1FCENv5l
shcs6RExSTrgRySIWM+dNG/4JXeJtUs3kn+IPw0051+IE3wwEVw+rM766krItuPLsHXZIqiR
ZCQgV1DlH3Ivzp478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFf
oX8av2kD8K/2XPihr3gXxf8ADS68TTftD6t4w0uKS80TWb0aad8EOoW1rcGV1f7QqFJI4xKI
z5inymLn86de16+8U65eanqd5dajqWozvdXd3dTNNPdTOxZ5JHYlmdmJJYkkkkmkIqUUUUAF
FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRR
QAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAF
FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRR
QAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAF
FFFABX9Qv/BqF/yit0n/ALC9/wD+lc9fy9V/UL/wahf8ordJ/wCwvf8A/pXPW1L4Z+n6oyqf
FH1/Rn6Z18WeJfFNt4f+K3i0TuqZtYCMn18S+MP/AIk19p1+U37e/ju68JfGTxKtu7Jmx08n
B/veJfHX/wASK6MDW9lPnMMbR9rDkPh3/g4n8Twah+1N8Mpo3UpJ8OYMHPprWsD+lFeAf8Fx
PHFzqvxU+DNzJIS8vw2BJJ648Ra6P6UVpVxXNNy7tkU8NywUeyP0D/Z00zzLj4ASf9Sd4DP/
AJRNLr8o/iJ+ydqHx31jxv4y/wCEo8K+FPDfw58NeDf7YvNaN62Ptuk2kMHlpa207t+8TB+U
Y3KemSP19/Zj0nztM/Z+l9fBfgU/+UTTK/NjSbnT/EP7PX7Sng3+3/CuleJPFfhr4af2PZ61
r9lpH9ofZ7S3mn8t7qWNDsjGT83GVHVgDyy+D7v1OmPx/f8AofMn7Pf7Dnj39pP4V+PfG+hW
draeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3
jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wS80+10jQ/jhqGp+IvBWh
QeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQOg/Z++FkfwN+BOj6r4D8bfC
q1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJbpPMkN7AJHS3G2BMyiV8Dc800X/gmh4yW
Xwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphzV8P/wDB
PrUNQ8Q+HvD2q/Ev4VeHPGfiPXbnw1F4Yu9RvbzVLLUIL02RhuRZWk8UG+XBR5JAjowZWIDY
9K/Yy8b+MPhZ8VPDmka18Q/g/P8ACr4SeMri6nvtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKI
CSZ1YbX3stX4dfGXwPL4x/aX/aAs59Kg8fabrser/DfTPED25kSXUtUm8y8WzLET3dnCVkTB
kiic72V9qEAHKP8A8Ev/ABZo3iHQ9K8QeMvh/wCGL/xb4r1Hwf4civpdRm/t67sb1bGZ4jbW
cwii+0uI1NwYmOC20LhjleMv+Ce2sfCTwBoGveP/AB58P/AX/CR32raba2GpHVLy6S40y8az
vEc2NlcRDbKvBEhDBgQTzj0D9hT4/wDj7UPHHhDV/Fvj74f/APCvfB3iuXxFqVz42u9H1DVr
L95Fe37WMd0suqebcGMbTZp89w+4MH3uPQPBnx71r9oX40+GdcuvF3wU1P4MXXxH1S/n8NeN
IvDsWseFtMvNYF5diVdSiEp8+KcvusppxlCu5WjCgA+f/A//AAT21jxnpHgm/l8efD/RbX4m
67daF4Me+OqN/wAJM9vcx2rTxCGykMETTSqq/ahC/UlFHNeKeO/BGqfDPxxrPhvW7b7FrXh+
+n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX2X+zb8YbnRv2jLWTQ/GPw0079nf4e/EC/1jR18
WS6Xd3+kaUl0t266bBfJLq6PNBFEIxbRhmuGByJPMcfP3xAs7H9rb9oz4xeL9M17w/4V02ef
WvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8NxkoQfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F
0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDnofD/wDwT61DUPEPh7w9qvxL+FXhzxn4
j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2YfAtr8MP2bfDXiX4d+M/
hpY/Fv4jz3um6nruv+MdN0i5+GGmrN9mJt4JphN9oukMjtdRo0scAKRRhpPMblf2YbPwr+zP
4c+OPjafXvBWs/FD4VT2Nh4CL3sN5p1/dS30lvcapZQPtN49vEgmgcq0ab1keNiE2sZxPxA/
YY8SfCLwx4o1jxfr3hXwxYaDruoeHNMa+lujJ4vvbF5Y7kabHHbu8kSSRiMzzLFCHljRnDbg
tT9nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3f
W3wZ+OVl8XPgF+zDYXviL4Va5H4f8V6y3xNTx5PoUl9Db3WsQ3TyqdX/AH7+bbySs0toSWYE
FjIgC8V+xlp/g3SPip+1FqHhzxF4K0LwJ4n8G+K/CXgxNY8U2emz3jXE0LWEQhvZ0uQjQhcS
yqFyrBnDBgADwm2/YtlsPgT4H+IXiT4i/D/wfovxD+3/ANjQ6kmrT3Uv2K4+zz71tLGdUw+0
jLchx3BAt+Af2ILb4iy+Erex+MnwfGpeOdVl0fRtON1qkt7NMt39ljaaKKwdrVJmKPEbnyi0
cgYhcOF9g/ZV1rxcIPhP4c8S+Nv2f9f+GXhHxXNp2raD4gn8NT3Xhy0OoRyX7LLfxhporhHe
RJrCadXC4DqyKo5/4Oa98LvhDqv7RXxR8H3nh/8At3wHqsA+Edhq03mIYbrUpYhfQ2lyRLcX
FparHJH5ocRMRJIjMqlQDz/4O/sD698Zv2o/EHwetfFvgrSfG+h6reaPDBqD3/katNafaPtD
QSxWsihEW2dszeUSGXaCcgeE19bf8EhvE62//BQnQPiN4x8V+H9N03SJ7+61zV/EniW0sp55
ruxvI1k/0mZZbl3mcb2jDlS4L43An5JpCPpb4N/8Enfi/wDH74YfDnxh4T0/StW0H4jX1zYx
3CXTL/YPkSyxvNfAoPLiPkTMrR+ZnaqY8ySKN/Cfiz8OL74OfFTxL4Q1OW1n1Lwrqt1o93La
szQSTW8zwu0ZZVYoWQkEqDjGQOlfdfwO/bLm+D/wH/Yu8O6B8SLXQbGTxJqVv47sbfWIoha2
TeI7S4Q3q7swI0SyEO+3MLzLkxySBvS9e/aA0/Uvil8L7n4a/FHwr4e0XSPjJ4p1T4jrb+Nb
LQI9VtJvEMM9vcTpLcQ/2lE+njCvGJlKKYwcjZTGfGn7JX/Cyf2xdDsf2bdA8TeCtG0LVJ59
ZsbHWNFhQXd7EpmdxeQ2ctytx5KyYkd1/cxtFv2lY2+dK/WH9mD9pD4TfC34qeAde+H3i/wV
4P8AAmp/EDxdcfECJbyDSZ7uOWaeDw4GtZily1lHDdxFUhj+zQEvJKI2ikdOU/Yu+L3g3wDZ
/skz6z418Facnwa1Xxno3i4TeILMPp0+oytFZvGvmbrq3kaeM/abYSwIu93kVI3ZQD4U/Z7/
AGX9W/aR0Px7d6LrPh+xn+Hvhu58VXllqD3CT31lbKTMYDHC8ZdSY12yPHkyrjIDlfNK+tv+
Camn2vgzXP2hdP1rxF4K0ae/+GOueErN9Q8U6bawX+pXLRrDFBNJOI5kYwSfvY2aIDaS4DoW
9g/ZZ/aVuPAX7HXwK0v4Xax8KtN8SaXruryeMv8AhJvG0/heOyuGvoHtLm6giv7M6jEbbYGL
RXY2QeUBkNGUI/Omiv0W/ZU+OF3qfhiDS4PiR8P/AADot5451XVrLV/Aniy28Jt4Und8qb7R
tTFsusaU7/ZZIkO+eO3hkhLgj7MPz/8AHdx9s8cazL9s0rUPNvp3+1aXZ/Y7G5zIx8yCDyov
KibqkflR7VIGxMbQAdr+z3+y/q37SOh+PbvRdZ8P2M/w98N3Piq8stQe4Se+srZSZjAY4XjL
qTGu2R48mVcZAcrz/wAK/hX/AMLS/wCEj/4qPwr4c/4RzQ7rXP8Aieah9j/tTyNv+h2vynzb
uTd+7i43bW5GK+gP+CW1zp+n/wDC+P7S1/wroX9u/CrWfDWnf21r9lpf23ULzyvs8Mf2mWPd
u8l8uPkT5d7LuXNv/glh4nXRtD/aA0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZqEuJo1
kcL9pAkwViErBmTzRuAPkmvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8K
xFDCivujlf8A1qA4YOF+q/2HP2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u7HwxcQW817Gw
Lh00yS4jDzKcQNIgZ8sM17B8J/jV8I/g/wDDfxPpM2r/AA/t/wDhI/2gNaufBF1Yanpt5H4H
Sa1ns9O8Qmw80RG0tpY/lWbYioyTJnEO5jPypr0v9kr9l/Vv2xfjXY+AtA1nw/o2u6pBPNYn
WHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfdfwx/aI1b4Y/A34YaF4C8bfB/WPG2ieMvEX/Ce
63rnxDuNFtLu9Opxtb6nNsv7N9Xt5oNrea8V1ujjCBQd8Z1v2Jf2kPAXwt1z4Ua9ovi/4aeD
9J1Pxl4ouPitFpN4mkwXc8rTQaEIrW6KXjaZGt2hiRI/IgBaSYRvFI6AH5Z11fwr+Ff/AAtL
/hI/+Kj8K+HP+Ec0O61z/ieah9j/ALU8jb/odr8p827k3fu4uN21uRiuf17RpvDmuXmn3D2s
k9hO9vK9rdRXUDMjFSY5omaORMjh0ZlYYIJBBr6r/wCCWHiddG0P9oDTL7xX4f8AD2m+Kfhj
qmiWtprHiW00mDVNVnULZqEuJo1kcL9pAkwViErBmTzRuQj5JrW8K+BNc8df2l/YmjarrH9j
2MuqX/2G0kuPsNpFjzLiXYDsiTcu52wq5GSM19wfsOftKWPw7/ZM+EVvP4/tdD13RvjzZWs0
T64ttd2Phi4gt5r2NgXDppklxGHmU4gaRAz5YZrwrxpfeGLH9qj9oD+xPiP/AMK58Nzf8JFF
of8AwjtrLPY+KIDdn7Po6/ZGVEtLiPbh2zAFjXKkEUAfP1FFfot+yz+0rceAv2OvgVpfwu1j
4Vab4k0vXdXk8Zf8JN42n8Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhoyAfGn7JX7L+rft
i/Gux8BaBrPh/Rtd1SCeaxOsPcJBdtEhleINDDKVfylkcFwq4jYbtxVWP2Sv2X9W/bF+Ndj4
C0DWfD+ja7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3biqt+gP7Ev7SHgL4W658KNe0Xx
f8NPB+k6n4y8UXHxWi0m8TSYLueVpoNCEVrdFLxtMjW7QxIkfkQAtJMI3ikdMn/gnV8XvBv7
M+h/A9LXxr4K8LwWviTxDb/F/b4gs1nv7zbJaaKWPmGS7skFyrK9r5lmhLzuVKPIrGfmTRVv
XtGm8Oa5eafcPayT2E728r2t1FdQMyMVJjmiZo5EyOHRmVhggkEGvuv/AIJ9/FfUdP8AgJ4T
8Nn4l+H/AALpqeJLu+t9a0HxtY+HdX8L3OxRu1nTL4wQ65ZSOLWQbGkkWKGaIS9LcIR8f+Af
gD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/E191/sW/tIa5
qHwC+Pnw3tPjTpXhTxJq19o83gy+fXpPDOh2UQ1iR9SuLEssCWcTRzCVoIo45XjLBYWKlB8P
69p0Wka5eWlvf2uqQWs7wxXtqsqwXiqxAljEqJIEYDcA6K2CMqpyAAegfslfsv6t+2L8a7Hw
FoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXCriNhu3FVbzSv0s/4JmfHfwP8CvA/wAA
r3TfGXhXwnYS67r3/C2luNWt7O+vblo3ttE86KVxcT2iLdKR5CtbRMZJZdjRySLxPwg/aM8S
/slf8E1dPudD8Z+Cm8f+EPic8kOmDxRp1/ejw8DA1xawrFcGcWVxqVpE0sdsy+fGDL80LmQs
Z8qaj+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/wDrUBwwcKfs
lfsv6t+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXCriNhu3FVb7W/Yb/aI
tdU/Zjs44PEfw08GP4g+PLat4t8M3GsabpOnP4XutPSG/gFleShZbLbIY1iAkI8tSgLxhl7X
9kP47/CD4FeOPhpe/Dbxl4V8J/D2Xxz4r/4TtbjVls769gaSa28N+dFduL2e0SC6iI2K0ETG
SWbY8ckigH50/Aj9nDXPj5/b95aXWlaD4b8JWJ1HXvEWsyyQ6Xo8RyIlkeNHdpZpAI4oYkeW
RzhUIVivbfCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGd
oySGONmGPV/sleItJsv2ff2ifgrqeueH9E8WePINLfRb2/1W3j0S5n0m9kuZrY34cwK8yZEM
hbyJGGDKu5C3qvwO8T6D8Qvg9+xcbLxX4Ksn+DvjLUrrxdDrHiWw0efSoZNatLxJBFdzRPOh
gDMGgEgyrL98FaAPl/wH+xz4v+Ifjjxx4QsjpS+PvAv2hZfCr3JbVNZe2kdLuKx2K0NxLAEd
2iEod0VjEsu0geU19rfsvfHLwh4P/wCCo3xE+Pt74i0qLwD4W13xBrsSvOItU19L83kNpBY2
b7ZppXNwjNlUSJAxlePjPxTSEel/s9/sv6t+0jofj270XWfD9jP8PfDdz4qvLLUHuEnvrK2U
mYwGOF4y6kxrtkePJlXGQHK8/wDCv4V/8LS/4SP/AIqPwr4c/wCEc0O61z/ieah9j/tTyNv+
h2vynzbuTd+7i43bW5GK+gP+CW1zp+n/APC+P7S1/wAK6F/bvwq1nw1p39ta/ZaX9t1C88r7
PDH9plj3bvJfLj5E+Xey7lzb/wCCWHiddG0P9oDTL7xX4f8AD2m+KfhjqmiWtprHiW00mDVN
VnULZqEuJo1kcL9pAkwViErBmTzRuAPFNR/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3o
gkuMSK8KxFDCivujlf8A1qA4YOF80r9C/wDgnB8XrXwz+xX4a8PweNfBWiPP8a4rjxbpGueI
NNsU1HwvNpcVtfia2vJFW5t3VyuwK5LKCg3xgrb174o2/wDwqn4X6X+zH8U/Cvw4sNA8c+KZ
NY+2eLofDsf2eTVYX0q5voL2RJtQiWxWIZaK4OyNomBYNHTGfnTRX6A/D39q7X/2ef8Agnim
v+G/Hnw/vvHGifFaa/tbS01Kzt5pvDjvDLPBbWGYbu10+51K2id7SGK3Zoss0axMxo0H496/
46/Zx+C03wZ+IXw/+D/iSHxX4i1LxvYW/iaz8KabY3FzqME1m89nPKpvbSO22ooWO5CxRGHD
FTHSEfL3wT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7
WYAoA51vB37Bera94s03w1rPj34aeEfGGr+JLjwra+HtQ1K4vdRN7DcJasJVsLe5S3Rp2KI0
7x79jOuY8OfS/wBijSr/AME/tAeFfHNp45/Z/v8AQb/xzDJ4j+2f2Npt1o9va3yO1zawapb2
8ttFLFK0sR09QQFVSI5IljT0C10Pw34Zg1nxz8F/iB8P38dfFnxXr1sPFHinxta6dqHw60M6
hLBFLFHez/bWu7yBmle8KvcpDuVEEkhkdjPhTx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ
7iGRo5E3ISrYdWGVJBxwSKya/QHQfE+v+Bf2cfgt4P8Agz8Zfh/4Q8SeCPFfiK18b6hb+NrP
RdNvrj+0YPsd/Ok7xnVLQ2yKVdYZw0SmPaSDHXwp47uPtnjjWZftmlah5t9O/wBq0uz+x2Nz
mRj5kEHlReVE3VI/Kj2qQNiY2hCMmu28A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2
t4o0GWZ2Yu2cBQsL5YMUV/sD/gnp8YfCvgP9nOzsPiL4x8FL4hudVuj8HDqssOoH4e6qbW5S
TUr0BJPsNlJdvZlUnDKZovtIgCobgVP2Lf2jfF938Avj58NNQ+Nn/CM+PtUvtHk8Oajq/jk2
1jauNYkfV7mDUfOMPziYyyGCRnuVLsgm5pjPhSu28A/AHxF8R/hJ498b6fHajw98OILGbV5p
pwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV+U17TotI1y8tLe/tdUgtZ3hivbVZVgvFViBLGJUSQI
wG4B0VsEZVTkD7L/AOCeX7RGuS/slfGP4Y2nxd/4QTxJqP8AYL+DJNX8UyaLY6TEuqM+pSQX
DOqQfu5g8kcR82Zd+1JCCKQj4por728CfEDWdI/ZV+BOgfBH4xeCvBmu+FvEmvDxnet4sg8N
2l5M9/btZX11bXhhlv7c2qqRm3mPloYmTcDEPQP2LvF+swfsmeDfEkfjfw/psHhb9ocWt7ri
6zB4b05fD0kEF3f2tqs5tQllPIqzmxjjTfsBMGUIVjPzJr0v9kr9l/Vv2xfjXY+AtA1nw/o2
u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfot8JP2kPhNP8ZP2e/E+heL/BWjeCfh
v4y+INvfwzXkGlPpNtq95KNKMdlKY5jbulzB88URjgUP5hjEUmzlP+CdXxe8G/sz6H8D0tfG
vgrwvBa+JPENv8X9viCzWe/vNslpopY+YZLuyQXKsr2vmWaEvO5Uo8igH5k0V+ln7CF9r/gr
9jDwBfP4u0rQf+ED/aAj0rUdUuPF1nY2ttoJtre41Cygu3uFintJpUEzW8Dus5QSBH27h8E/
tKaz4e8R/tGeP9Q8IJax+E7/AMSajcaKlram1gWye6ka3EcJVTGnlFMIVXaMDAxikI4mu28A
/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV/sD/AIJ6fGHw
r4D/AGc7Ow+IvjHwUviG51W6PwcOqyw6gfh7qptblJNSvQEk+w2Ul29mVScMpmi+0iAKhuBU
/Yt/aN8X3fwC+Pnw01D42f8ACM+PtUvtHk8Oajq/jk21jauNYkfV7mDUfOMPziYyyGCRnuVL
sgm5pjPhSirevadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBGA3AOitgjKqcgfot/wTy8
T6zpv7Cnw016PxXa+HYPCPx5t7K91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFS
QhHxTqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC5X
gH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiiv8Aot8Cv2g/
g94B+DetQrqngqzg8Y/HnV9R8B3ENzZF/AsNxZzWum+IJNKldAlvayIP3VysQjVklAysW7yn
9kr4+eKrP4PftE/Cm4+Pdrp3ja61XSx4X1698bzWmlEprUjarfWl/I6qElWXz32ETXCMzBJD
uFMZ8f8AgH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiivxN
fcP7B/x31bTf2Zvjf8JtM+Nlr4Y124n0RPBGo3vii40PSrSCPVnOo3NpPP5Rt0eOUSvGFSeV
C2InZWUdV8DrfSfip4W/Yu0Pwv4u8FazffCX4galDr0cmu2+lzlZdetLm3ltra+aC5uUmhG5
BFEzE/IVEgKAA/PSiv128P8AifWdNs9Z16PxXa+HYPCP7WWp2V7qOo+JYNHS08PSSrdX9iks
80Ye3lkVZpLSMnzWTeY2KkjldN/au0bRfAvgmP4Aar8H9Jgi+IHii91hdc8WT+DbSwhl1dJd
NuJrKO+sWvbf7CYRtaC5CxwCEIpV4iWCx+WdFdB8WdZi8R/FTxLqFunh+OC/1W6uIk0G1ltd
KVXmdgLSGVVkjt8H92jqrKm0EAgivtb/AIJ6fGHwr4D/AGc7Ow+IvjHwUviG51W6PwcOqyw6
gfh7qptblJNSvQEk+w2Ul29mVScMpmi+0iAKhuAhHx/4B+APiL4j/CTx7430+O1Hh74cQWM2
rzTThX3Xl0ttbxRoMszsxds4ChYXywYor8TX3X+xb+0b4vu/gF8fPhpqHxs/4Rnx9ql9o8nh
zUdX8cm2sbVxrEj6vcwaj5xh+cTGWQwSM9ypdkE3NfD+vadFpGuXlpb39rqkFrO8MV7arKsF
4qsQJYxKiSBGA3AOitgjKqcgAFSvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogku
MSK8KxFDCivujlf/AFqA4YOF+1v+CeXifWdN/YU+GmvR+K7Xw7B4R+PNvZXuo6j4lg0dLTw9
JaW11f2KSzzRh7eWRVmktIyfNZN5jYqSNX4V/tIeDfhd8PNJl8J+L/BWgaP4k/akGuWVtb3l
nbXNh4XcmEzmAkTafblImibcsJ8iRkb9zMVd2HY/Mmiv1B8R3PhP41/GL9m3Tfhxr/w/ksPh
f8ZPEf23Todf07TfslpP4ngurL7Hbyyxm6ie22mP7IsinGxfmG2rek/FfUdP+Mnjjw2fiX4f
8C6anxd8TX1vrWg+NrHw7q/he5+2SDdrOmXxgh1yykcWsg2NJIsUM0Ql6W4APz+/ZK/Zf1b9
sX412PgLQNZ8P6NruqQTzWJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq3mlfqD+wP8f/AAP8
Ff8AhTep6b4++H+h2F14r8SP8Wrizu7fRv7WuX8220SQWkqwXDaeFuldI4IFtrfc8kqRNFIy
c/8As7fHu++D/wCyr8F/DXw21v4P2Xizw54k1uPxu+ufEBtCtLW6+3wm2upltdRtk1W3NuF/
eKl5GY4BGmfmRgD83qK/QH9kz49eBPF3gebQdV8XfD/wPdeBPjlD8YbhvKuNO0PUtHgjEctv
pEZiMzSgqPJtJI0kaN0CglXCfGn7SnxIsfjH+0Z4/wDF+mRXUGm+KvEmo6xaRXSqs8cNxdST
IsgVmUOFcAgMRnOCetIRxNdt4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMsz
sxds4ChYXywYor/YH/BPT4w+FfAf7OdnYfEXxj4KXxDc6rdH4OHVZYdQPw91U2tykmpXoCSf
YbKS7ezKpOGUzRfaRAFQ3AqfsW/tG+L7v4BfHz4aah8bP+EZ8fapfaPJ4c1HV/HJtrG1caxI
+r3MGo+cYfnExlkMEjPcqXZBNzTGfClerf8ADJ2of8Mgf8Lm/wCEo8K/2D/bv/CNf2Xm9/tT
+0Nvm+Tt+zeR/wAe/wC/3+ds2fLu8z93XmmvadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKi
SBGA3AOitgjKqcgfYHwr8C2vxY/4JSaT4Eg8Z/DTQ/EOq/F0a+YNc8Y6bpz2Wm/2abJruaOS
YSqizA/IEMrKAyRsrKShHlXwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh
9ktZ0hTz5VjBnaMkhjjZhiaN/wAE/vFVnZvceN/EPgr4VwSeJJvCljL4rv5ok1W/glaG68g2
0M4NvbyKElum226M6r5pO4L9QaPbeE/sP7KeheH/AIjfD/V7D4A/EfXYfEeo33iTTtH/ANEH
iC3uIb2KK5uFM8U1shlU25mXqu4sCKt/FP49/CL9q9fCFxd3HgrXvCfgH4geKpPE9j4l1W50
W7fRdY1v+0U1bTI0uLeW6dLWKVDAu+YSSKv2V9yNTGfnp478Eap8M/HGs+G9btvsWteH76fT
b+38xJPs9xDI0cibkJVsOrDKkg44JFZNdt+0po3h7w5+0Z4/0/wg9rJ4TsPEmo2+iva3RuoG
skupFtzHMWYyJ5QTDlm3DByc5r7A/wCCenxh8K+A/wBnOzsPiL4x8FL4hudVuj8HDqssOoH4
e6qbW5STUr0BJPsNlJdvZlUnDKZovtIgCobgIR8f+AfgD4i+I/wk8e+N9PjtR4e+HEFjNq80
04V915dLbW8UaDLM7MXbOAoWF8sGKK/E191/sW/tG+L7v4BfHz4aah8bP+EZ8fapfaPJ4c1H
V/HJtrG1caxI+r3MGo+cYfnExlkMEjPcqXZBNzXw/r2nRaRrl5aW9/a6pBazvDFe2qyrBeKr
ECWMSokgRgNwDorYIyqnIAB6B+z3+y/q37SOh+PbvRdZ8P2M/wAPfDdz4qvLLUHuEnvrK2Um
YwGOF4y6kxrtkePJlXGQHK8/8K/hX/wtL/hI/wDio/Cvhz/hHNDutc/4nmofY/7U8jb/AKHa
/KfNu5N37uLjdtbkYr6A/wCCW1zp+n/8L4/tLX/Cuhf278KtZ8Nad/bWv2Wl/bdQvPK+zwx/
aZY927yXy4+RPl3su5c2/wDglh4nXRtD/aA0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZ
qEuJo1kcL9pAkwViErBmTzRuAPkmur+Ffwr/AOFpf8JH/wAVH4V8Of8ACOaHda5/xPNQ+x/2
p5G3/Q7X5T5t3Ju/dxcbtrcjFfot+x5+3B4b+CX7Kf7M3htvG3hXT7pL7UGvre4W1uZtHuH8
V2EZlmZ1ZrHdo13rQErmIGKWTDZ2Y80/Zi8T6Do37Rn7ZmmaN4r8FeHvBPinw34o0Tw/aSeJ
bDSdK1S6nupF0xbZJJo4pEEPnhJEBjiSXBZBKNzGfH/gH4A+IviP8JPHvjfT47UeHvhxBYza
vNNOFfdeXS21vFGgyzOzF2zgKFhfLBiivxNfa3/BPL9ojXJf2SvjH8MbT4u/8IJ4k1H+wX8G
Sav4pk0Wx0mJdUZ9SkguGdUg/dzB5I4j5sy79qSEEV8aa9p0Wka5eWlvf2uqQWs7wxXtqsqw
XiqxAljEqJIEYDcA6K2CMqpyAhHoGo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXG
JFeFYihhRX3Ryv8A61AcMHC+aV+hf/BOD4vWvhn9ivw14fg8a+CtEef41xXHi3SNc8QabYpq
PhebS4ra/E1teSKtzburldgVyWUFBvjBW3D+1Lb/ALNv7FOra18JPGHhVZPD/wAZL288LaVc
69DJqieDGnikW0W3lmGoRWk95a2zTQL5csqBnkBjZ3LGfCnwr+Ff/C0v+Ej/AOKj8K+HP+Ec
0O61z/ieah9j/tTyNv8Aodr8p827k3fu4uN21uRiug/ZK/Zf1b9sX412PgLQNZ8P6NruqQTz
WJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq30B/wTr+KK+Itc/aauL7WvBXgbTfiF8P9bsrX
RZNdtNB0qbVbtibO3tre4nRdkatcojcrCj7WdfMG72H/AIJmfHfwP8CvA/wCvdN8ZeFfCdhL
ruvf8LaW41a3s769uWje20TzopXFxPaIt0pHkK1tExkll2NHJIoB+adFfqt/wTPudP1f/hmX
wRomv+FX/wCEU13xe/xF0C21+yX+09Qj/faXdNbCX/iaeV5MEkNzAs6RfZ1ZXXysrxX7O37W
WraX+yr8Fx4C8S/DT/hNk8Sa3fePbvxh47uPDrpezX8Mtve30aahaPqaPAV3u8d3lYTGBkPG
wB+b1FdB8WdZi8R/FTxLqFunh+OC/wBVuriJNBtZbXSlV5nYC0hlVZI7fB/do6qyptBAIIr7
W/4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUji1kGxpJFi
hmiEvS3CEfH/AIB+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXyw
Yor8TX3X+xb+0hrmofAL4+fDe0+NOleFPEmrX2jzeDL59ek8M6HZRDWJH1K4sSywJZxNHMJW
gijjleMsFhYqUHoP7CGrap4V/Yw8AanZ+OtKsbDwP+0BHptxr1x4jTRbU+HGtre5vrWCS8eB
2tJ5EW4azChpSodoSyHaxn5p16X+z3+y/q37SOh+PbvRdZ8P2M/w98N3Piq8stQe4Se+srZS
ZjAY4XjLqTGu2R48mVcZAcrlftKaz4e8R/tGeP8AUPCCWsfhO/8AEmo3Gipa2ptYFsnupGtx
HCVUxp5RTCFV2jAwMYr3X/gltc6fp/8Awvj+0tf8K6F/bvwq1nw1p39ta/ZaX9t1C88r7PDH
9plj3bvJfLj5E+Xey7lyhHj/AOz3+y/q37SOh+PbvRdZ8P2M/wAPfDdz4qvLLUHuEnvrK2Um
YwGOF4y6kxrtkePJlXGQHK+aV9V/8EtrnT9P/wCF8f2lr/hXQv7d+FWs+GtO/trX7LS/tuoX
nlfZ4Y/tMse7d5L5cfIny72XcufQPhN8Udf/AOGOv2fNL+DPxT8K/DjxJoGu63J43+2eLrPw
7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJBjoA+FKK1vHdx9s8cazL9s0rUPNvp3+1aXZ/Y7G
5zIx8yCDyovKibqkflR7VIGxMbR+hf8AwTI+Kvw2+Dnwr+FiX/inw/JoXijVdftPihp/iHxZ
NbwaY08KWmnJHpBuYoLq3nR4vNmktbpUBkZ5Ylh/dAHyT+z3+wf4v/aP+G8nijStS8K6RYXG
ujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIHilfZeveIj4c/4IzXngi41z4aSa
/YfE57uXTLXVdEutSbTUhMBuYxE7TSP9sG0TIWla3xhjaEV8aUAFFfpD/wAE8vE+s6b+wp8N
Nej8V2vh2Dwj8ebeyvdR1HxLBo6Wnh6S0trq/sUlnmjD28sirNJaRk+aybzGxUkdr+y58bvg
p8P/ABxpd7oHibwr/wAK28b+OfF6eNtM1fxFPptjpVpcyG20eODQmnggmtJoJLfzJJLO4WJS
5d4Vg/dOw7H5U16XqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90c
r/61AcMHC/VfgTxx4l8Ofsq/Anw18Ifit4K+Hnizwj4k16Px47eONO0e0a6a/tzaXV0rzBNU
txboMSRJdRtGhjG7Gyu2/Yx+OXw/+FX7I9pp3i3xF8P7jxJ4o+Ml5eeHtVsJ7ER+EbifTXs7
sHiIB8PeCP2l/EXw/wDDFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAADlfG
/jK7+IHie51e/h0qC6u9m+PTdLttMtV2oqDZb20ccKcKM7UGTljkkk/qX8Ff2+fCvw1+E/wL
0yLxn8NNLe78Sa1dXNppNnCLDRZpvGFkvmRJcR+dptu2jXmsiJphCRbysDhwuD4SeJ1034ef
294U8V+H/DvhPwj+1JeWVtqK+JbTR9OtPCchjurmxtZXmjjeylKxzG0gJWXYHEbbcgA/J6iu
2/aU1nw94j/aM8f6h4QS1j8J3/iTUbjRUtbU2sC2T3UjW4jhKqY08ophCq7RgYGMV9a/Cb4o
6/8A8Mdfs+aX8Gfin4V+HHiTQNd1uTxv9s8XWfh2P7RJfWz2dzfQTyIdQiW2VRlYrgbI2iwS
DHSEfClFfoD8Pf2rtf8A2ef+CeKa/wCG/Hnw/vvHGifFaa/tbS01Kzt5pvDjvDLPBbWGYbu1
0+51K2id7SGK3Zoss0axMxo0H496/wCOv2cfgtN8GfiF8P8A4P8AiSHxX4i1LxvYW/iaz8Ka
bY3FzqME1m89nPKpvbSO22ooWO5CxRGHDFTHQB8vfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0b
UL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDnW8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPC
tr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M65jw59L/Yo0q/8E/tAeFfHNp45/Z/v9Bv/
ABzDJ4j+2f2Npt1o9va3yO1zawapb28ttFLFK0sR09QQFVSI5IljT0C10Pw34Zg1nxz8F/iB
8P38dfFnxXr1sPFHinxta6dqHw60M6hLBFLFHez/AG1ru8gZpXvCr3KQ7lRBJIZHYz4U8d+C
NU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv6dP+DR3/lGtoX/AGGd
Q/8ASqWv5i/HfhX/AIQXxxrOif2jpWsf2PfT2P2/S7j7RY33lSMnnQSYG+J9u5GwNykHAzX9
On/Bo7/yjW0L/sM6h/6VS16GA+Ct/h/9uiceK3p/4v0Z9+/Fn/j+uPqa+E/E+nfbvjV4q4zj
w3o3/qTeOv8AGvuz4s/8f1x9TXxhb232n41eLhjOPDWj/wDqTeOK6MT/ALtH1/Q5MP8A7w/T
/I/Gn/g5Vt/sv7VfwmTpj4Yw/wDp+1yir/8Awc8w+R+2H8Kl9Phjb/8Ap91uivKPTP3T/wCC
f9xn9jv9n0Z/5p54T/8ATRZV/N3+0D+yI3xt/aH0Pb8Rvhr4e1bxzFoekaLo2oX13cajcTf2
Xp0KNLHZ21wLVHklCobkxF9rMAUAc/0f/wDBPiAt+x3+z0f+qd+Ej/5SLKvwJ8J6Ff8Aw9/b
V8EeM7Txb8FP7Bur/wAKnxHa+ILzRotY8PW9pY6azyqNURJY98UjOkunu5JTBZZI1VS3u/cU
3eS/rsfNng79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FE
ad49+xnXMeHPj/jvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9
12uh+G/DMGs+Ofgv8QPh+/jr4s+K9eth4o8U+NrXTtQ+HWhnUJYIpYo72f7a13eQM0r3hV7l
IdyogkkMjmg+J9f8C/s4/Bbwf8GfjL8P/CHiTwR4r8RWvjfULfxtZ6Lpt9cf2jB9jv50neM6
paG2RSrrDOGiUx7SQY6go/P6ur+Ffwr/AOFpf8JH/wAVH4V8Of8ACOaHda5/xPNQ+x/2p5G3
/Q7X5T5t3Ju/dxcbtrcjFZXju4+2eONZl+2aVqHm307/AGrS7P7HY3OZGPmQQeVF5UTdUj8q
PapA2JjaPqD/AIJYeJ10bQ/2gNMvvFfh/wAPab4p+GOqaJa2mseJbTSYNU1WdQtmoS4mjWRw
v2kCTBWISsGZPNG5CPH/AIa/tt/E/wCEPgew8PeH/E32Kw0f7d/ZMr6daXF9oX22Py7r7Ddy
xNcWXmLkt9nkj+Ys3DMSeg+BWsfESf8AZ98Z+INA0n4aTeE/hbBaTapdax4K0LUb1mvr0QwR
CW4s5Z5nZ3kYF22pHCw3L+7RvoD9hz9pSx+Hf7Jnwit5/H9roeu6N8ebK1mifXFtrux8MXEF
vNexsC4dNMkuIw8ynEDSIGfLDNdB8I/2iPNsf2rvhj4N+LuleBP7R8V2r/DaR/FP9i6HpOnr
4gne8ksbgOsEEXkTI7RwHfNHu2JJgimM/P8A8b+Mrv4geJ7nV7+HSoLq72b49N0u20y1Xaio
NlvbRxwpwoztQZOWOSST7r+z3/wsn9pH9lzx78NdF8TeCrHwb8PdKufH15oeoaLCl7fLbZaa
6guo7N5DcKDHFmSeNjHKsYJiDhfpb9nb9oRvhX+yr8F9A+E3if4Ptrug+JNbHi691zxpd+Fb
QTG/hazvprb7bYS6hbva7D+9t7giOERbFYPEfKf2D/EWk6l8VP2ntQudc+Gnh6DxZ8P/ABHo
ekpHqtvoGlXd7fzK1tBYQ3rwyLbkRPt3KPKQRiTYWUEA+NKKK/Uz/gll4i0nx1of7LPhnw5r
nh/yND1XxXd/EHw6dVt7OfUL0L9o0q5msZHSS/eIRW7xTRxy+QbcHchhO1CPz+1H9l/VrL9k
yw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfNK/Qv9jT9pRfhr+y
Z8N7jVvH/h+18TeJ/wBoex17WJbvXLSfV00qSBIby8uCztPapI8UqSyv5bPFK6sTFcESerfs
9fFX4NfBz4qQJpHinwVJ8PvFHxA8Z2njrT73xY1vpWmW08zWmkJaaQlzFaXVlPA8G+Y2t1Gi
FmaWJIf3TGfm9/w1B4z/AOGeP+FVfbdK/wCEF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8
r91ny/kr0vwzbeOfhj+xJo/xL0yf4P6n4T/4SSTwq1pe+BtM1LW7O9Mct2RPNeacxkTywGDC
eTassaDG1lT3b9hnx9q/gz4QeH/BVz8TvCvgm107xXfSprnhj4gaZo+oeGbtQqF9Wsbl47Xx
Dp8ki20qPDJM3kwyxrMQRb1q/spfH/T/AIL/ALNHgOF/H3w/tNT1r9o+01e+k027srRo9EaJ
YLm8S32xzadaSGGRDuitz9nlMbKsMxRwD8//ABv4yu/iB4nudXv4dKgurvZvj03S7bTLVdqK
g2W9tHHCnCjO1Bk5Y5JJOTXoH7WX9h/8NUfEv/hGf7K/4Rv/AISvVP7J/svy/sP2T7XL5Pke
X8nleXt2bPl24xxX1X8Jvijr/wDwx1+z5pfwZ+KfhX4ceJNA13W5PG/2zxdZ+HY/tEl9bPZ3
N9BPIh1CJbZVGViuBsjaLBIMdIR8v/s9/sv6t+0jofj270XWfD9jP8PfDdz4qvLLUHuEnvrK
2UmYwGOF4y6kxrtkePJlXGQHK+aV9g/8E8dS0+Lxx+0jLqXiv4f2X/CRfDjxB4d066m1Oy8P
WOraheyIbdbSC5+zbIn8pyAIo0hXYHWLcq19AfseftweG/gl+yn+zN4bbxt4V0+6S+1Br63u
FtbmbR7h/FdhGZZmdWax3aNd60BK5iBilkw2dmGM/L6vS/2e/wBl/Vv2kdD8e3ei6z4fsZ/h
74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5X9IfAXx3+Fmh/tAfAjV9K8ZfD/AErw
h8KPHPxE07U449WtLSPTINUvpk0tre33q01o6XEOJrdHgiTczuixuV+Sf+Camn2vgzXP2hdP
1rxF4K0ae/8AhjrnhKzfUPFOm2sF/qVy0awxQTSTiOZGMEn72NmiA2kuA6FkI+Sa6v4V/Cv/
AIWl/wAJH/xUfhXw5/wjmh3Wuf8AE81D7H/ankbf9DtflPm3cm793Fxu2tyMV9l/sOftKWPw
7/ZM+EVvP4/tdD13RvjzZWs0T64ttd2Phi4gt5r2NgXDppklxGHmU4gaRAz5YZo/Zi8T6Do3
7Rn7ZmmaN4r8FeHvBPinw34o0Tw/aSeJbDSdK1S6nupF0xbZJJo4pEEPnhJEBjiSXBZBKNwB
8E17X+zp8Q/EWpeGNa0iw1L4KaJa+EdDvNcSTxd4U0Oe61Xy3DmzguLmylmuLuQyHyomfkKV
BUKBXilfW3/BLDxOujaH+0Bpl94r8P8Ah7TfFPwx1TRLW01jxLaaTBqmqzqFs1CXE0ayOF+0
gSYKxCVgzJ5o3AHy/wCN/GV38QPE9zq9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJ
JJ6DwD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX+wP2HP
2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u7HwxcQW817GwLh00yS4jDzKcQNIgZ8sM10Hwj
/aI82x/au+GPg34u6V4E/tHxXav8NpH8U/2Loek6eviCd7ySxuA6wQReRMjtHAd80e7YkmCK
Yz86a9L/AGe/2X9W/aR0Px7d6LrPh+xn+Hvhu58VXllqD3CT31lbKTMYDHC8ZdSY12yPHkyr
jIDlftb9nb9oRvhX+yr8F9A+E3if4Ptrug+JNbHi691zxpd+FbQTG/hazvprb7bYS6hbva7D
+9t7giOERbFYPEfKf2D/ABFpOpfFT9p7ULnXPhp4eg8WfD/xHoekpHqtvoGlXd7fzK1tBYQ3
rwyLbkRPt3KPKQRiTYWUFCPjSiiv0h/4JkfFX4bfBz4V/CxL/wAU+H5NC8Uarr9p8UNP8Q+L
JreDTGnhS005I9INzFBdW86PF5s0lrdKgMjPLEsP7oA/N6iv0h/Z2+Pd98H/ANlX4L+Gvhtr
fwfsvFnhzxJrcfjd9c+IDaFaWt19vhNtdTLa6jbJqtubcL+8VLyMxwCNM/MjVP2VPjhd6n4Y
g0uD4kfD/wAA6LeeOdV1ay1fwJ4stvCbeFJ3fKm+0bUxbLrGlO/2WSJDvnjt4ZIS4I+zBjPz
prq/hX8K/wDhaX/CR/8AFR+FfDn/AAjmh3Wuf8TzUPsf9qeRt/0O1+U+bdybv3cXG7a3IxX3
B+yx+07Y+B/gJ8O1n+JXh+z13S/2h4YJptP1BdJSLwxcJDNe+VbkQPbaPNcIJXiMUUG5V3xq
y4GV+zF4n0HRv2jP2zNM0bxX4K8PeCfFPhvxRonh+0k8S2Gk6Vql1PdSLpi2ySTRxSIIfPCS
IDHEkuCyCUbkI+Ca6v4V/Cv/AIWl/wAJH/xUfhXw5/wjmh3Wuf8AE81D7H/ankbf9DtflPm3
cm793Fxu2tyMVylfW3/BLDxOujaH+0Bpl94r8P8Ah7TfFPwx1TRLW01jxLaaTBqmqzqFs1CX
E0ayOF+0gSYKxCVgzJ5o3AHhPgj9pfxF8P8AwxbaRYad8P57W037JNS8CaHqd0252c77i5tJ
Jn5Y43OcDCjAAAtwfDPxb+0R4M+I/wATha+H7bSfAcGnza29lZWulQK11PHZ20UFpbRpGHYh
nJVFXEUjM29lD/Vf7Dn7Slj8O/2TPhFbz+P7XQ9d0b482VrNE+uLbXdj4YuILea9jYFw6aZJ
cRh5lOIGkQM+WGa6D4R/tEebY/tXfDHwb8XdK8Cf2j4rtX+G0j+Kf7F0PSdPXxBO95JY3AdY
IIvImR2jgO+aPdsSTBFMZ+dNel/s9/sv6t+0jofj270XWfD9jP8AD3w3c+Kryy1B7hJ76ytl
JmMBjheMupMa7ZHjyZVxkByv2t+zt+0I3wr/AGVfgvoHwm8T/B9td0HxJrY8XXuueNLvwraC
Y38LWd9NbfbbCXULd7XYf3tvcERwiLYrB4j5T+wf4i0nUvip+09qFzrnw08PQeLPh/4j0PSU
j1W30DSru9v5la2gsIb14ZFtyIn27lHlIIxJsLKChHxpXV/Cv4V/8LS/4SP/AIqPwr4c/wCE
c0O61z/ieah9j/tTyNv+h2vynzbuTd+7i43bW5GK5Svrb/glh4nXRtD/AGgNMvvFfh/w9pvi
n4Y6polraax4ltNJg1TVZ1C2ahLiaNZHC/aQJMFYhKwZk80bgDlP2TP+Canir9r74V6p4v0j
xn8NPDOm6RPex3EXiTV5rKdYbOG2mubrCwOv2eJbyAPIWAQuN2MqTVl/4J3eJPDmoX6eL/GH
w/8Ah/YQ+K7nwdpmo+Ib26gtfEN7bTPBcy2nl28j/ZIZECyXUyRQoZFVnDBlXqv+CW1zp+n/
APC+P7S1/wAK6F/bvwq1nw1p39ta/ZaX9t1C88r7PDH9plj3bvJfLj5E+Xey7lzbmt9J/a+/
Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYNUvFuYb+B5mCz28axssqxlp0bbiFlZWLGcpo
v/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTD
nwrx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+9fBOoeDfEfh
b9key8L/ABD8FanpPwQ+IGstr19rGtWfh2dbI69bXVveC1vpo5XSW1XzQIhJtO6MnepWvjT9
rLxvpnxM/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBpCPP69L/AGSv
2X9W/bF+Ndj4C0DWfD+ja7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3biqt9w/sea98KP
iP8As1/szHX/AIjfD/w/4g+E2u6g32LXdXk0+4sr1/ENhqvm7duwxNplpexiWQ+SZ7iGMN5p
+T0D9n79q34aeFvjh4O8Y+EfHnhXQPC3iX4j+MtS+Jc1xqUWnX2qG5uLiLw+80FwUupbRY7y
NwIkNvCzSSzBHikdGM+CfAPiL4q/Ef8AYe8e6Dp9z4fHwk+HE9jrOr282k2C3P228vFt7d45
hAbprhsuPM8wYgheMuFKRP4TX3D+wf8AG/xFoP7M3xv+EKfGC18EeLHn0SHwjNe+MxYaVpax
as51SW0vVl8hEKS+Y4t3LXCbjGsuDXxTr2nRaRrl5aW9/a6pBazvDFe2qyrBeKrECWMSokgR
gNwDorYIyqnICEVK9L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+
6OV/9agOGDhftb/gmR8Vfht8HPhX8LEv/FPh+TQvFGq6/afFDT/EPiya3g0xp4UtNOSPSDcx
QXVvOjxebNJa3SoDIzyxLD+6qf8ABP74iR/Db9kfR/B58cfD/SLqD45D/hMdL1LxVpMdrqfh
l9NjtNQ3pPN5N7aOGZR5fmByoePJQMGM+NP2e/2X9W/aR0Px7d6LrPh+xn+Hvhu58VXllqD3
CT31lbKTMYDHC8ZdSY12yPHkyrjIDlTUf2X9Wsv2TLD4xR6z4fu/D134kPhWWxhe4Go2N6IJ
LjEivCsRQwor7o5X/wBagOGDhfov9i258D6f8cP2qv8AhGtf8K6F4M13wN4m8NeD/wC2tft9
L+2/bLhf7Ohj+3Sxytuih5d/ufL5rKWGfQP+Cevxq+H/AMH/ANg/w3pPi3V/Ctv4k8R/Faa5
8PXUmp2N5deB7ibSWs7XxDcWEkoBitrmNsrc7AoZZhkiLcAfnTX0X4ZtvHPwx/Yk0f4l6ZP8
H9T8J/8ACSSeFWtL3wNpmpa3Z3pjluyJ5rzTmMieWAwYTybVljQY2sqeKfFmzvtO+KniW31P
XrXxVqUGq3Ud3rVretfQaxMJnD3Udw3zTJK2XEh5cMGPWvuH9hb476f8DP2MPhVaP4y8K6Jq
eq/tAabql9C2rWR1Cy0Q2yW9zcSLvM1nExhkikdvLLQyOrEwzkSCBHwp438ZXfxA8T3Or38O
lQXV3s3x6bpdtplqu1FQbLe2jjhThRnagycsckkntf2e/wBl/Vv2kdD8e3ei6z4fsZ/h74bu
fFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5X9C7v9pXTPAWg+FdL+BGsfBTTf7L+I/iyT
XP7S8bP4X02yRtZV9OuXgtr+zXUbQ2XlAERXaeVAIkHDRn50/YP8RaTqXxU/ae1C51z4aeHo
PFnw/wDEeh6Skeq2+gaVd3t/MrW0FhDevDItuRE+3co8pBGJNhZQUI+NKK/SH/gnl4n1nTf2
FPhpr0fiu18OweEfjzb2V7qOo+JYNHS08PSWltdX9iks80Ye3lkVZpLSMnzWTeY2Kkjq9N/a
u0bRfAvgmP4Aar8H9Jgi+IHii91hdc8WT+DbSwhl1dJdNuJrKO+sWvbf7CYRtaC5CxwCEIpV
4i7DsflnRX6F+G/2vtZ+CH7Btz4q8J+LfhpD4w074u3Wq2Gj6VqkEIi8NSyxTS2dlYySJf22
mT6jbQs1qqxSPCNzr5bMx+CvHfir/hOvHGs63/ZulaP/AGxfT332DS7f7PY2PmyM/kwR5OyJ
N21FydqgDJxSEZNFfdfwm+KOv/8ADHX7Pml/Bn4p+Ffhx4k0DXdbk8b/AGzxdZ+HY/tEl9bP
Z3N9BPIh1CJbZVGViuBsjaLBIMda3w9/au1/9nn/AIJ4pr/hvx58P77xxonxWmv7W0tNSs7e
abw47wyzwW1hmG7tdPudStone0hit2aLLNGsTMaAPj7/AIag8Z/8M8f8Kq+26V/wgv27+1Ps
H9h2Hnfa92ftH2nyftHm7f3e/wAzd5X7rPl/JXn9fot8K/Gen/tTWP7HVz4fvfh/peq+B/iP
q194j8Pw6vZaL/Y32zxBa3sMNnZ3MySzRGIkRrbiX7vl5LgrXsHh/wAT6zptnrOvR+K7Xw7B
4R/ay1OyvdR1HxLBo6Wnh6SVbq/sUlnmjD28sirNJaRk+aybzGxUkMZ+f/wK1j4iT/s++M/E
GgaT8NJvCfwtgtJtUutY8FaFqN6zX16IYIhLcWcs8zs7yMC7bUjhYbl/do2T8Afgj4i/4KC/
tJR+GdP1PwV4f8T+IIJJrVJtOGk6dctbw7mijhsLUxRP5Mbvny0VvLcli7AP9V/Ab9qGLxR4
W/ai8BeBfi1a+AIPEHiSxu/hkb3xBL4b0rRtNOvTS3b2jOY0s0+zzo7wxhZZE3BYnKso6v8A
4J1fGbwF+zlofwPbRfiD4K0fSYvEniGH4rXa6oljPrE+2S00KVobryryayC3KOoSLyISzyzL
G8cjIAfmTXV/Cv4V/wDC0v8AhI/+Kj8K+HP+Ec0O61z/AInmofY/7U8jb/odr8p827k3fu4u
N21uRiuf17RpvDmuXmn3D2sk9hO9vK9rdRXUDMjFSY5omaORMjh0ZlYYIJBBr6r/AOCWHidd
G0P9oDTL7xX4f8Pab4p+GOqaJa2mseJbTSYNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG5CPKfg
n+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz5r4
78Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfWv7MPgW1+GH7Nvh
rxL8O/Gfw0sfi38R573TdT13X/GOm6Rc/DDTVm+zE28E0wm+0XSGR2uo0aWOAFIow0nmN22g
+J9f8C/s4/Bbwf8ABn4y/D/wh4k8EeK/EVr431C38bWei6bfXH9owfY7+dJ3jOqWhtkUq6wz
holMe0kGOmM/P6itbx3cfbPHGsy/bNK1Dzb6d/tWl2f2OxucyMfMgg8qLyom6pH5Ue1SBsTG
0fpX+x5+3B4b+CX7Kf7M3htvG3hXT7pL7UGvre4W1uZtHuH8V2EZlmZ1ZrHdo13rQErmIGKW
TDZ2YQj8vq9A8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAA
fpZ4C+O/ws0P9oD4EavpXjL4f6V4Q+FHjn4iadqccerWlpHpkGqX0yaW1vb71aa0dLiHE1uj
wRJuZ3RY3K+KfsJfEjw38IPgf/wj/wATfGnw/PimLXb2D4QSXd/a61D8P9YW3uoptTuZYhNH
bafJeNZsm8yRvLGLpYti/aaYz4U8b+Mrv4geJ7nV7+HSoLq72b49N0u20y1XaioNlvbRxwpw
oztQZOWOSSTk1reO5dUm8cay+t3/APautNfTtf3329NQ+2XBkbzJftKO6z7n3N5quwfO4MQc
1+m3/BLLxFpPjrQ/2WfDPhzXPD/kaHqviu7+IPh06rb2c+oXoX7RpVzNYyOkl+8Qit3imjjl
8g24O5DCdqEflnXV/Cv4V/8AC0v+Ej/4qPwr4c/4RzQ7rXP+J5qH2P8AtTyNv+h2vynzbuTd
+7i43bW5GK/QH9hD4ieJ9X/Yw8AeMr/xx/Z11of7QEaaxrmt+KotNkGiT21vealam4upkMsU
8wE8tsrMZnUuUcqSPPv2RPiB4buvjh+1+nh7xR4V8NeAfHHhTxLpvhuyvtbtfD9jqNxdXD/2
WkVtcyQgbYTMqkoBAshVjH5gDMZ8KUV+hf7JvxiXSPgb+yvbeCPiJ4f8GQeFvGWo3XxSs28Y
Wnht7yF9Ts5IpLqGeeFr9DYqVDKsw2oYuoKD4q/aU1nw94j/AGjPH+oeEEtY/Cd/4k1G40VL
W1NrAtk91I1uI4SqmNPKKYQqu0YGBjFIRU+Ffwr/AOFpf8JH/wAVH4V8Of8ACOaHda5/xPNQ
+x/2p5G3/Q7X5T5t3Ju/dxcbtrcjFegfCT9jFPi5pHgV4fir8KtK1r4h3zabpeg3d5qE+qQ3
AuRbIlzHa2cy23mOyMhldQyOGBwG2+l/8EsPE66Nof7QGmX3ivw/4e03xT8MdU0S1tNY8S2m
kwapqs6hbNQlxNGsjhftIEmCsQlYMyeaN3KfsTa94V+EPwk+LHxRnvPD/wDwtDwHBpQ8BWGr
TQyIbq6umiuL6G0cg3FxaRKJI8h44mIkdGKoVYzxT4s/Di++DnxU8S+ENTltZ9S8K6rdaPdy
2rM0Ek1vM8LtGWVWKFkJBKg4xkDpXP197eBPjf4t8U/sq/AkfC34weH/AAT42svEmvX3xCu9
W8Z2uhvdXtxf28tte6jHcyq+pp5GMv5dzlUaMhiDHXV/Aj41LZ+Bf2cI/BHxQ8FeHYPDfxA1
i9+KS6d4ktPB1pqsMmr2ssVw9lO1m11btYqRGqwEJGvk7EKmIAH5vUV+kOr/ALXy/BD9kXxF
4q+Fni3wVDead8a9Q1Xw1o41S0hvYvB8tzHMLOGx8xLy2sp722t2ltYlid4wWZfKZmNv4Kft
fXCfs4/COb4XTfBTwr4kk8V69qXjKw1LxdP4P03RLifUYprR3s4tQtTe2i2zIgDR3gWK3EIG
VaNgD806K6D4s6zF4j+KniXULdPD8cF/qt1cRJoNrLa6UqvM7AWkMqrJHb4P7tHVWVNoIBBF
fe37LP7Stx4C/Y6+BWl/C7WPhVpviTS9d1eTxl/wk3jafwvHZXDX0D2lzdQRX9mdRiNtsDFo
rsbIPKAyGjKEfGn7JX7L+rfti/Gux8BaBrPh/Rtd1SCeaxOsPcJBdtEhleINDDKVfylkcFwq
4jYbtxVW80r9TP2Jf2kPAXwt1z4Ua9ovi/4aeD9J1Pxl4ouPitFpN4mkwXc8rTQaEIrW6KXj
aZGt2hiRI/IgBaSYRvFI6c//AME5vHnw8/Z58D/DnTNX8WeFbjSta13xFpfxVsdX8bM1jp0s
kaWVgsGnR3aWl/aTq0fmXP2e7iCl3aaNIcxMZ8J/8NQeM/8Ahnj/AIVV9t0r/hBft39qfYP7
DsPO+17s/aPtPk/aPN2/u9/mbvK/dZ8v5K8/r728CeOPEvhz9lX4E+GvhD8VvBXw88WeEfEm
vR+PHbxxp2j2jXTX9ubS6uleYJqluLdBiSJLqNo0MY3Y2V8P+O7j7Z441mX7ZpWoebfTv9q0
uz+x2NzmRj5kEHlReVE3VI/Kj2qQNiY2hCMmvQP+GoPGf/DPH/Cqvtulf8IL9u/tT7B/Ydh5
32vdn7R9p8n7R5u393v8zd5X7rPl/JX6Gf8ABLLxFpPjrQ/2WfDPhzXPD/kaHqviu7+IPh06
rb2c+oXoX7RpVzNYyOkl+8Qit3imjjl8g24O5DCdvP8A7CHxE8T6v+xh4A8ZX/jj+zrrQ/2g
I01jXNb8VRabINEntre81K1NxdTIZYp5gJ5bZWYzOpco5UkMZ+f3wr+Ff/C0v+Ej/wCKj8K+
HP8AhHNDutc/4nmofY/7U8jb/odr8p827k3fu4uN21uRiuUr7r/ZE+IHhu6+OH7X6eHvFHhX
w14B8ceFPEum+G7K+1u18P2Oo3F1cP8A2WkVtcyQgbYTMqkoBAshVjH5gDdB+yb8Yl0j4G/s
r23gj4ieH/BkHhbxlqN18UrNvGFp4be8hfU7OSKS6hnnha/Q2KlQyrMNqGLqCgAPz0rq/hX8
K/8AhaX/AAkf/FR+FfDn/COaHda5/wATzUPsf9qeRt/0O1+U+bdybv3cXG7a3IxVv9pTWfD3
iP8AaM8f6h4QS1j8J3/iTUbjRUtbU2sC2T3UjW4jhKqY08ophCq7RgYGMV9Af8EsPE66Nof7
QGmX3ivw/wCHtN8U/DHVNEtbTWPEtppMGqarOoWzUJcTRrI4X7SBJgrEJWDMnmjchHyTXpeo
/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/rUBwwcL9QfCb4o6
/wD8Mdfs+aX8Gfin4V+HHiTQNd1uTxv9s8XWfh2P7RJfWz2dzfQTyIdQiW2VRlYrgbI2iwSD
HXV/spfH/T/gv+zR4Dhfx98P7TU9a/aPtNXvpNNu7K0aPRGiWC5vEt9sc2nWkhhkQ7orc/Z5
TGyrDMUdjPzprq/hX8K/+Fpf8JH/AMVH4V8Of8I5od1rn/E81D7H/ankbf8AQ7X5T5t3Ju/d
xcbtrcjFav7WX9h/8NUfEv8A4Rn+yv8AhG/+Er1T+yf7L8v7D9k+1y+T5Hl/J5Xl7dmz5duM
cV7t/wAEsPE66Nof7QGmX3ivw/4e03xT8MdU0S1tNY8S2mkwapqs6hbNQlxNGsjhftIEmCsQ
lYMyeaNyEcp+yZ/wTU8VftffCvVPF+keM/hp4Z03SJ72O4i8SavNZTrDZw201zdYWB1+zxLe
QB5CwCFxuxlSasv/AATu8SeHNQv08X+MPh/8P7CHxXc+DtM1HxDe3UFr4hvbaZ4LmW08u3kf
7JDIgWS6mSKFDIqs4YMq9V/wS2udP0//AIXx/aWv+FdC/t34Vaz4a07+2tfstL+26heeV9nh
j+0yx7t3kvlx8ifLvZdy5tzW+k/tffsPfAzwRofi7wV4a8Q/CvVdY07XoPFeu2+iIsGqXi3M
N/A8zBZ7eNY2WVYy06NtxCysrFjOU0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9
TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc+FeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQ
yNHIm5CVbDqwypIOOCRX3r4J1Dwb4j8Lfsj2Xhf4h+CtT0n4IfEDWW16+1jWrPw7Otkdetrq
3vBa300crpLar5oEQk2ndGTvUrXxp+1l430z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNd
yyRvtcBlyjKcMARnkA0hHn9FfcP/AATz+MPgq7/Zz0DwhrvjHw/4L1L4bfF3T/ixeS69K8EG
qaVa2qQzQWRjR2mvQyArb7VaQONhbDhfa9N/bci8TeBfBOtfBvU/hpo2par8QPFGueJ4/GHj
OXwq+nNdaulzYz31tBqNr/aCfZHjD/JeKFgMS9GRmM/LOvS/2Sv2X9W/bF+Ndj4C0DWfD+ja
7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3biqt9l/s6/tXaZ4e+EHgi4Hjzwr4e1W1/aP
VWg0XUn0u10/wtdCKe8itoJSk9vokk6K7RSKsZKJ5i7149W/Z++O/wALPgV8cPB174G8ZfD/
AMJ+DJfiP4y/4WItnq1pZ/bVa4uLbw7iIuJZ9PSK6jKfZla0iy8r7DG8igH5p+CP2l/EXw/8
MW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOV8b+Mrv4geJ7nV7+HSoLq7
2b49N0u20y1XaioNlvbRxwpwoztQZOWOSST+gH7NfxJfwL8Kf2Y9F8IfEfwr4Q/4QjxzqjfF
a1t/HWn6LHfJ/atoySzhrmNdUiNkhVZYfPRkUorHG2rer/tfL8EP2RfEXir4WeLfBUN5p3xr
1DVfDWjjVLSG9i8Hy3Mcws4bHzEvLaynvba3aW1iWJ3jBZl8pmYgH5vUVreO/FX/AAnXjjWd
b/s3StH/ALYvp777Bpdv9nsbHzZGfyYI8nZEm7ai5O1QBk4r7g/4J9/FfUdP+AnhPw2fiX4f
8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUji1kGxpJFihmiEvS3CEfBNFfqD+yX8f8A
wPp+ofsra5qPj74fx2vwt13xvY+KZ47u30iO1l1WZ1sprexkWCb7JKbiNg0NuIoELeYIRFIE
yv2EL7X/AAV+xh4Avn8XaVoP/CB/tAR6VqOqXHi6zsbW20E21vcahZQXb3CxT2k0qCZreB3W
coJAj7dwdh2PzTor9VvgP8QLfUPhBZeL/BnijSvDPgzQP2nbmKxv7jW4fDtrZeEZhFeXFhAL
iSHbaS4imaxQfOYwxiJQ4ytB+Pfhu71r4LTfBn4heFfBHgzRfit4i1LxvYW/ia18IR3OmS67
BNZvPZzy2z3cX9mhUULHIFRDDgFTGCwWPy+rtvAPwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK
+68ultreKNBlmdmLtnAULC+WDFFf7r1f9r5fgh+yL4i8VfCzxb4KhvNO+Neoar4a0capaQ3s
Xg+W5jmFnDY+Yl5bWU97bW7S2sSxO8YLMvlMzHif2Lf2stc8f/AL4+eDbT4l6V8K/Eniq+0f
UfBlm/iKTw7ofhuJ9Ykm1JbF2kCWkSR3ALQxN5skYYKkpUigD4Uoq3r2nRaRrl5aW9/a6pBa
zvDFe2qyrBeKrECWMSokgRgNwDorYIyqnIH6Lf8ABPLxPrOm/sKfDTXo/Fdr4dg8I/Hm3sr3
UdR8SwaOlp4ektLa6v7FJZ5ow9vLIqzSWkZPmsm8xsVJCEfCngH4A+IviP8ACTx7430+O1Hh
74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor8TX6F/Ab9qGLxR4W/ai8BeBfi1a+AIPE
HiSxu/hkb3xBL4b0rRtNOvTS3b2jOY0s0+zzo7wxhZZE3BYnKso6H9hDVtU8K/sYeANTs/HW
lWNh4H/aAj0241648RpotqfDjW1vc31rBJePA7Wk8iLcNZhQ0pUO0JZDtYz806+lv2ENe+K9
74H8fW/w98Z+FdKtfh5Yt8SJtH8QaXHqWXsI2DX1gs9pcQw3cYaNN4aF28yL5mCEx+P/ALSm
s+HvEf7Rnj/UPCCWsfhO/wDEmo3Gipa2ptYFsnupGtxHCVUxp5RTCFV2jAwMYr3X/gltc6fp
/wDwvj+0tf8ACuhf278KtZ8Nad/bWv2Wl/bdQvPK+zwx/aZY927yXy4+RPl3su5coR8v69r1
94p1y81PU7y61HUtRne6u7u6maae6mdizySOxLM7MSSxJJJJNegfC/8Aa58c/BbQ4bPwrdeH
9CntYLm3g1a18M6Yut263CyLKY9S+z/bEcrK6h1mDIpAUqAAPqD4BftS6/8As2/8EudD1rw3
4w8Kr448P/Ef7Za6Vc69ZyaonhxhbyT2i2/nC7itJ9StYmmgh8tpUDOwMTM59W/Z4+JHwo/a
B+EX7P8Armv+NPhV4G8QfD/xXq9//YdzfyaPDol7P4lstZxbw4KLaf2ZbXsMbSN5Ilnt4g3m
/cYz8vq92+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7
RkkMcbMMeJ/ay8b6Z8TP2qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAa
+y/2Tfizptr8Df2V7fQfEvw0jHg7xlqN149i8W6jo4u9DhfU7OaOSyGrN5sCNaqX3adtBkVm
P74E0hHzV4O/YL1bXvFmm+GtZ8e/DTwj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxR
GnePfsZ1zHhzb8Zf8E9tY+EngDQNe8f+PPh/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIb
ZV4IkIYMCCecfQFrofhvwzBrPjn4L/ED4fv46+LPivXrYeKPFPja107UPh1oZ1CWCKWKO9n+
2td3kDNK94Ve5SHcqIJJDI/P/s53PijRtV+GfhHXfiL+zp4z+FXgfxldaTqWnatL4flfSLIa
kj308MmqQRzTW92jNLHLaPLuUAZR0CBjPH/A/wDwT21jxnpHgm/l8efD/RbX4m67daF4Me+O
qN/wkz29zHatPEIbKQwRNNKqr9qEL9SUUc14p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs
9xDI0cibkJVsOrDKkg44JFfpBofxR+HPiG2/Z+tvh5rXw0Pg/wCHXxA16O//AOEj1210y98N
aU3iuw1SzurVNRniuHdrK3C+YiysY5J4m/eFgPgn9rLxvpnxM/ao+JfiTRLn7boviDxXqmpW
Fx5bx/aLea7lkjfa4DLlGU4YAjPIBpCMr4V/Cv8A4Wl/wkf/ABUfhXw5/wAI5od1rn/E81D7
H/ankbf9DtflPm3cm793Fxu2tyMV2vwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibz
Y4UaWOztrgWqPJKFQ3JiL7WYAoA59W/4JYeJ10bQ/wBoDTL7xX4f8Pab4p+GOqaJa2mseJbT
SYNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG7V/Zh8C2vww/Zt8NeJfh34z+Glj8W/iPPe6bqeu
6/4x03SLn4Yaas32Ym3gmmE32i6QyO11GjSxwApFGGk8xmM+SvHfgjVPhn441nw3rdt9i1rw
/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr9AdB8T6/wCBf2cfgt4P+DPxl+H/AIQ8SeCP
FfiK18b6hb+NrPRdNvrj+0YPsd/Ok7xnVLQ2yKVdYZw0SmPaSDHXwp47uPtnjjWZftmlah5t
9O/2rS7P7HY3OZGPmQQeVF5UTdUj8qPapA2JjaEIya9A/wCGoPGf/DPH/Cqvtulf8IL9u/tT
7B/Ydh532vdn7R9p8n7R5u393v8AM3eV+6z5fyV+gP7Hn7cHhv4Jfsp/szeG28beFdPukvtQ
a+t7hbW5m0e4fxXYRmWZnVmsd2jXetASuYgYpZMNnZjq/AXx3+Fmh/tAfAjV9K8ZfD/SvCHw
o8c/ETTtTjj1a0tI9Mg1S+mTS2t7ferTWjpcQ4mt0eCJNzO6LG5VjPyTor9Af2EviR4b+EHw
P/4R/wCJvjT4fnxTFrt7B8IJLu/tdah+H+sLb3UU2p3MsQmjttPkvGs2TeZI3ljF0sWxftNf
CnjuXVJvHGsvrd//AGrrTX07X999vTUPtlwZG8yX7Sjus+59zearsHzuDEHNIRk17X+z3+wf
4v8A2j/hvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB9w/8
EsvEWk+OtD/ZZ8M+HNc8P+Roeq+K7v4g+HTqtvZz6hehftGlXM1jI6SX7xCK3eKaOOXyDbg7
kMJ2/P2vfFm58U/8EZrzTNT8S+CtR8Vaj8Tn167tLrUdLm8QXVg8JV7yRGY3jXDXpIMpBuDC
SCfs1MZ4p+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqC
u903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9r
MAUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSlRF
lWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6Tz
JDewCR0txtgTMolcAqfC79mf4q/D3w54M8PazqXwK8F3nizxJqHhXw7Y+KvBdhreo6re2t8t
rcg3UWl3vyLdymFWnnX7ny/ugrV+9X/BsfHr9n+xrNZ+KrDStK8SaV4n1TTdTs9N0+zsbW2u
Le7lgdEis0S3GGjIJjXDHLZYsWP41/Ajxfpvw78C/s4eHNB8b/B/VB8NfiBrEfj271rWdHnS
xhXV7Vo7rS31YiVLeS1jMqSacFDNlj++zX7P/wDBtjr2jeKf2YvEWp+HLy61Hw9qPj3xDdaX
d3U08091avqdy0Mkj3BMzOyFSWlJkJJLfNmvRwPwVv8AD/7dE5MXvT/xfoz7H+LP/H9cfU18
baCpl+PPjJew8L6Mf/Ln8dV9k/Fn/j+uPqa+RfBFt5/x68acf8yvo3/qT+Oa3xX+7R9f0OPD
/wC8P0Pxr/4OjI/K/bP+Fg/6phbf+n3W6Ksf8HUMflftufC4f9Uvtf8A0+a1RXlHpn7t/wDB
PPQ/+MKP2fHyOPh14V/TSbP/AAr+UT9vCL7P+0tqCf3NG0Nfy0eyr9vP2f8A/gqtr3wvu/gT
8OLa5slsYvCPgayVGhBfbPo2mMef+2pr4s+Nv7WOvfs/fslWHiLwz4w8KDxvo3izQtRtNKud
es5NTTw4+jaVLNaLbiYXcVpPqVtC00EPltKgZ2BiZnOs4ctO/oGjl/XkflrRX6WfBT9r64T9
nH4RzfC6b4KeFfEknivXtS8ZWGpeLp/B+m6JcT6jFNaO9nFqFqb20W2ZEAaO8CxW4hAyrRt+
efxZ1mLxH8VPEuoW6eH44L/Vbq4iTQbWW10pVeZ2AtIZVWSO3wf3aOqsqbQQCCK5izoPBH7S
/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAAHK+N/GV38QPE9zq9/
DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJ+4P+CffxX1HT/gJ4T8Nn4l+H/Aump4k
u7631rQfG1j4d1fwvc7FG7WdMvjBDrllI4tZBsaSRYoZohL0tx23wU/aduPB37OPwjsPhd4k
+Cg8SWHivXp/GV3qXiqfwXpqXD6jFJaXr2MV3pxvLR7YphWtZwkUIhESFWhLGfmnXV/Cv4V/
8LS/4SP/AIqPwr4c/wCEc0O61z/ieah9j/tTyNv+h2vynzbuTd+7i43bW5GK+4PDf7X2s/BD
9g258VeE/Fvw0h8Yad8XbrVbDR9K1SCEReGpZYppbOysZJEv7bTJ9RtoWa1VYpHhG518tmY+
f/8ABOv4or4i1z9pq4vta8FeBtN+IXw/1uytdFk1200HSptVu2Js7e2t7idF2Rq1yiNysKPt
Z18wbkI+NK9L/Z7/AGX9W/aR0Px7d6LrPh+xn+Hvhu58VXllqD3CT31lbKTMYDHC8ZdSY12y
PHkyrjIDlftb/gmR8Vfht8HPhX8LEv8AxT4fk0LxRquv2nxQ0/xD4smt4NMaeFLTTkj0g3MU
F1bzo8XmzSWt0qAyM8sSw/uvFP8Agmpp9r4M1z9oXT9a8ReCtGnv/hjrnhKzfUPFOm2sF/qV
y0awxQTSTiOZGMEn72NmiA2kuA6FgD5Jr1b4a/tt/E/4Q+B7Dw94f8TfYrDR/t39kyvp1pcX
2hfbY/LuvsN3LE1xZeYuS32eSP5izcMxJ8pr7r+E3xR1/wD4Y6/Z80v4M/FPwr8OPEmga7rc
njf7Z4us/Dsf2iS+tns7m+gnkQ6hEtsqjKxXA2RtFgkGOgD4Uor9IfgR8als/Av7OEfgj4oe
CvDsHhv4gaxe/FJdO8SWng601WGTV7WWK4eynaza6t2sVIjVYCEjXydiFTEDV/2vl+CH7Ivi
LxV8LPFvgqG807416hqvhrRxqlpDexeD5bmOYWcNj5iXltZT3ttbtLaxLE7xgsy+UzMWM/N6
iv0s+Cn7X1wn7OPwjm+F03wU8K+JJPFeval4ysNS8XT+D9N0S4n1GKa0d7OLULU3totsyIA0
d4FitxCBlWjb88/izrMXiP4qeJdQt08PxwX+q3VxEmg2strpSq8zsBaQyqskdvg/u0dVZU2g
gEEUhHpf7OnxD8Ral4Y1rSLDUvgpolr4R0O81xJPF3hTQ57rVfLcObOC4ubKWa4u5DIfKiZ+
QpUFQoFW/BX7Ol9+1Vrnh7Vb34gfB/wv4h+IWqro+k6EsLWk80ytDbxs1npVlJDYpI7qqmcQ
mQq8mCp8xvQP+CWHiddG0P8AaA0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZqEuJo1kcL
9pAkwViErBmTzRu1f2YfAtr8MP2bfDXiX4d+M/hpY/Fv4jz3um6nruv+MdN0i5+GGmrN9mJt
4JphN9oukMjtdRo0scAKRRhpPMZjPkrx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5
E3ISrYdWGVJBxwSKya/QHQfE+v8AgX9nH4LeD/gz8Zfh/wCEPEngjxX4itfG+oW/jaz0XTb6
4/tGD7HfzpO8Z1S0NsilXWGcNEpj2kgx18KeO7j7Z441mX7ZpWoebfTv9q0uz+x2NzmRj5kE
HlReVE3VI/Kj2qQNiY2hCNX4V/Cv/haX/CR/8VH4V8Of8I5od1rn/E81D7H/AGp5G3/Q7X5T
5t3Ju/dxcbtrcjFel/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nS
FPPlWMGdoySGONmGPoH/AASw8Tro2h/tAaZfeK/D/h7TfFPwx1TRLW01jxLaaTBqmqzqFs1C
XE0ayOF+0gSYKxCVgzJ5o3eq/sm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1HRxd6HC+p2c
0clkNWbzYEa1Uvu07aDIrMf3wJpjPn/Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLif
U72zu0srgB7G2uYo0W6fyg0siZKswymHOV8N/wDgnL8UfiPofxX1MaVa6Ppvwag1A+IrvULj
EH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/2Wfjd4H+Jep/Aq58H+Jvh/qWg+DPiP4hm1q78W+I
rey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt+8ZgPNP2bvE+g+Nf2jP2u/Glv4r8FWHh
74j+G/GWieG5tY8S2GkT6ndX11HNaqLe6mimRJEORI6LGCGUsGUgAHyT4I/aX8RfD/wxbaRY
ad8P57W037JNS8CaHqd0252c77i5tJJn5Y43OcDCjAAA5Xxv4yu/iB4nudXv4dKgurvZvj03
S7bTLVdqKg2W9tHHCnCjO1Bk5Y5JJNTXtGm8Oa5eafcPayT2E728r2t1FdQMyMVJjmiZo5Ey
OHRmVhggkEGvuv8A4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+ME
OuWUji1kGxpJFihmiEvS3CEfBNFfpZ8FP2nbjwd+zj8I7D4XeJPgoPElh4r16fxld6l4qn8F
6alw+oxSWl69jFd6cby0e2KYVrWcJFCIREhVoTz3hv8Aa+1n4IfsG3Pirwn4t+GkPjDTvi7d
arYaPpWqQQiLw1LLFNLZ2VjJIl/baZPqNtCzWqrFI8I3OvlszFjPh/4V/Cv/AIWl/wAJH/xU
fhXw5/wjmh3Wuf8AE81D7H/ankbf9DtflPm3cm793Fxu2tyMVylfZf8AwTr+KK+Itc/aauL7
WvBXgbTfiF8P9bsrXRZNdtNB0qbVbtibO3tre4nRdkatcojcrCj7WdfMG70D9k34xLpHwN/Z
XtvBHxE8P+DIPC3jLUbr4pWbeMLTw295C+p2ckUl1DPPC1+hsVKhlWYbUMXUFAAfnpRXbftK
az4e8R/tGeP9Q8IJax+E7/xJqNxoqWtqbWBbJ7qRrcRwlVMaeUUwhVdowMDGK+tfgF+1Lr/7
Nv8AwS50PWvDfjDwqvjjw/8AEf7Za6Vc69ZyaonhxhbyT2i2/nC7itJ9StYmmgh8tpUDOwMT
M5Qj4Uor9Nv2A/j/APDbwtofgXWL/V/BWlaF8QfEniV/ih4fvfEk2n6V4ea7VbfTra00Q3Uc
E9lIksSvJJbXaxJuLyxLBmI/Zal+F3iT4D/s5WHif4k/DTQ/E/wX8SanBLa6n4h8o2t//wAJ
HY6mZo5It0E1u+mWd9GlwXNu81zAiuXYFGM/MmivQP2svG+mfEz9qj4l+JNEuftui+IPFeqa
lYXHlvH9ot5ruWSN9rgMuUZThgCM8gGvtb9ln9pW48BfsdfArS/hdrHwq03xJpeu6vJ4y/4S
bxtP4Xjsrhr6B7S5uoIr+zOoxG22Bi0V2NkHlAZDRlCPnT9hO28c/tCfEPSPhV4Rn+D+nalc
wXM2nS+KvA2mag94yB55IjdNp1zMX2CV1MrBQsWwMPkQ+FeN/GV38QPE9zq9/DpUF1d7N8em
6XbaZartRUGy3to44U4UZ2oMnLHJJJ/Tb9iX9pDwF8Ldc+FGvaL4v+Gng/SdT8ZeKLj4rRaT
eJpMF3PK00GhCK1uil42mRrdoYkSPyIAWkmEbxSOnP8A/BObx58PP2efA/w50zV/FnhW40rW
td8RaX8VbHV/GzNY6dLJGllYLBp0d2lpf2k6tH5lz9nu4gpd2mjSHMTGfmnRX3t4E8ceJfDn
7KvwJ8NfCH4reCvh54s8I+JNej8eO3jjTtHtGumv7c2l1dK8wTVLcW6DEkSXUbRoYxuxsr4f
8d3H2zxxrMv2zStQ82+nf7Vpdn9jsbnMjHzIIPKi8qJuqR+VHtUgbExtCEZNFfdfwm+KOv8A
/DHX7Pml/Bn4p+Ffhx4k0DXdbk8b/bPF1n4dj+0SX1s9nc30E8iHUIltlUZWK4GyNosEgx12
3wI+NS2fgX9nCPwR8UPBXh2Dw38QNYvfikuneJLTwdaarDJq9rLFcPZTtZtdW7WKkRqsBCRr
5OxCpiDGfm9RX6Q6v+18vwQ/ZF8ReKvhZ4t8FQ3mnfGvUNV8NaONUtIb2LwfLcxzCzhsfMS8
trKe9trdpbWJYneMFmXymZjb+Cn7X1wn7OPwjm+F03wU8K+JJPFeval4ysNS8XT+D9N0S4n1
GKa0d7OLULU3totsyIA0d4FitxCBlWjYA/NOvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7
gajY3ogkuMSK8KxFDCivujlf/WoDhg4XlPizrMXiP4qeJdQt08PxwX+q3VxEmg2strpSq8zs
BaQyqskdvg/u0dVZU2ggEEV91/8ABOD4vWvhn9ivw14fg8a+CtEef41xXHi3SNc8QabYpqPh
ebS4ra/E1teSKtzburldgVyWUFBvjBVCPz0r3b4Fax8RJ/2ffGfiDQNJ+Gk3hP4WwWk2qXWs
eCtC1G9Zr69EMEQluLOWeZ2d5GBdtqRwsNy/u0b7g+A/x3+EGh/Ej9m/V/DXjLwrpXgH4UeK
/H2nXUd/qy2l1plpql1ImksLe6dbqaJ47iDdMqOsQ3tM6eXIV8K/YP8Ajf4i0H9mb43/AAhT
4wWvgjxY8+iQ+EZr3xmLDStLWLVnOqS2l6svkIhSXzHFu5a4TcY1lwaYz5pg+Gfi39ojwZ8R
/icLXw/baT4Dg0+bW3srK10qBWup47O2igtLaNIw7EM5Koq4ikZm3sofzSvuH9g/476tpv7M
3xv+E2mfGy18Ma7cT6IngjUb3xRcaHpVpBHqznUbm0nn8o26PHKJXjCpPKhbETsrKO1/Zr+K
Nv4Q+FP7Mel+APin4V0Gw8GeOdUk+Jv2fxdD4Zj1e3Oq2jw3M8F3JazX0T2KYUmJyEBiIVlM
YAPzpor9LPhh+1d4Q8PeGNBuPC/jzSvD1ha/tOyLpcEOpDS5NP8ABV08c80SwEo9vpUkiRvJ
EVWEug3ruXj4T/ay/sP/AIao+Jf/AAjP9lf8I3/wleqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2f
LtxjikI8/r0v9nv9l/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx
5Mq4yA5X9Af+CWXiLSfHWh/ss+GfDmueH/I0PVfFd38QfDp1W3s59QvQv2jSrmaxkdJL94hF
bvFNHHL5BtwdyGE7fmn/AIJqeJ11nXP2hdT8S+K/D9nqXi/4Y65okN34k8S2lhPq+q37RtEu
+7mRpHkaOQvJkqpILsu9csZ8k0V+ln7CHx38D6H4Y/Y81fUfGXhXSrX4UX3jDTvFMd/q1vaX
WmS6o5SyYW8jrNNE5uI900KPFEN7SOgjkK1P2dvj3ffB/wDZV+C/hr4ba38H7LxZ4c8Sa3H4
3fXPiA2hWlrdfb4TbXUy2uo2yarbm3C/vFS8jMcAjTPzIwB8PeCP2l/EXw/8MW2kWGnfD+e1
tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOg+JHwR8ReM/wBnOL483Wp+CpNJ1fxIPCtz
puj6cNLnsL1LVpQDaw2sNoqGCJHLQs2TKu75zJt8/wDizrMXiP4qeJdQt08PxwX+q3VxEmg2
strpSq8zsBaQyqskdvg/u0dVZU2ggEEV9w/sLfHfT/gZ+xh8KrR/GXhXRNT1X9oDTdUvoW1a
yOoWWiG2S3ubiRd5ms4mMMkUjt5ZaGR1YmGciQA/P6vS/wBnv9l/Vv2kdD8e3ei6z4fsZ/h7
4bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5X9C7v8AaV0zwFoPhXS/gRrHwU03+y/i
P4sk1z+0vGz+F9NskbWVfTrl4La/s11G0Nl5QBEV2nlQCJBw0Z+dP2D/ABFpOpfFT9p7ULnX
Php4eg8WfD/xHoekpHqtvoGlXd7fzK1tBYQ3rwyLbkRPt3KPKQRiTYWUFCPnT/hqDxn/AMM8
f8Kq+26V/wAIL9u/tT7B/Ydh532vdn7R9p8n7R5u393v8zd5X7rPl/JXn9fpD/wTy8T6zpv7
Cnw016PxXa+HYPCPx5t7K91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFSR1em/t
XaNovgXwTH8ANV+D+kwRfEDxRe6wuueLJ/BtpYQy6ukum3E1lHfWLXtv9hMI2tBchY4BCEUq
8RYz8s6K/Qvw3+19rPwQ/YNufFXhPxb8NIfGGnfF261Ww0fStUghEXhqWWKaWzsrGSRL+20y
fUbaFmtVWKR4RudfLZmPwV478Vf8J1441nW/7N0rR/7Yvp777Bpdv9nsbHzZGfyYI8nZEm7a
i5O1QBk4pCMmvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf
/WoDhg4X6r/4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUj
i1kGxpJFihmiEvS3HQfspfH/AE/4L/s0eA4X8ffD+01PWv2j7TV76TTbuytGj0RolgubxLfb
HNp1pIYZEO6K3P2eUxsqwzFHYz40+F/7XPjn4LaHDZ+Fbrw/oU9rBc28GrWvhnTF1u3W4WRZ
THqX2f7YjlZXUOswZFIClQABU/4ag8Z/8M8f8Kq+26V/wgv27+1PsH9h2Hnfa92ftH2nyftH
m7f3e/zN3lfus+X8lfpD8JPE66b8PP7e8KeK/D/h3wn4R/akvLK21FfEtpo+nWnhOQx3VzY2
srzRxvZSlY5jaQErLsDiNtuR5/47+NS+I/Avw7j/AGa/ih4K+GkGn/EDxZe66reJLTwlaLDP
q8Uul3F1ZXDRNeW62IjAUQTBY4zDsypioA+KdR/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7ga
jY3ogkuMSK8KxFDCivujlf8A1qA4YOF80r9Fv2Uvj/p/wX/Zo8Bwv4++H9pqetftH2mr30mm
3dlaNHojRLBc3iW+2ObTrSQwyId0Vufs8pjZVhmKP6X8JPE66b8PP7e8KeK/D/h3wn4R/akv
LK21FfEtpo+nWnhOQx3VzY2srzRxvZSlY5jaQErLsDiNtuQWCx+T1Fdt+0prPh7xH+0Z4/1D
wglrH4Tv/Emo3Gipa2ptYFsnupGtxHCVUxp5RTCFV2jAwMYr7r/4J5eJ9Z039hT4aa9H4rtf
DsHhH4829le6jqPiWDR0tPD0lpbXV/YpLPNGHt5ZFWaS0jJ81k3mNipIQj83qK/SHx38al8R
+Bfh3H+zX8UPBXw0g0/4geLL3XVbxJaeErRYZ9Xil0u4urK4aJry3WxEYCiCYLHGYdmVMVZP
7G/xg1GP4eaZoB+K/grwrpreMtR1C38SeDvFVj4RudAuSQPO1LRb6Ozh1fTJn+yyxxonmRwR
SwApgWqgHxp4I/aX8RfD/wAMW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAA
Og+JHwR8ReM/2c4vjzdan4Kk0nV/Eg8K3Om6Ppw0uewvUtWlANrDaw2ioYIkctCzZMq7vnMm
37W/Yu8X6zB+yZ4N8SR+N/D+mweFv2hxa3uuLrMHhvTl8PSQQXd/a2qzm1CWU8irObGONN+w
EwZQhdb4FftB/B7wD8G9ahXVPBVnB4x+POr6j4DuIbmyL+BYbizmtdN8QSaVK6BLe1kQfurl
YhGrJKBlYtzGflnRX6F+G/2mfFv7Ln7BtzqWnfE7wV4o+JXhj4u3VwZR4rtdSvdV0LzYnuhD
mYXpsrzVLWOWVI/LeeNjKwMbs5+CvHfir/hOvHGs63/ZulaP/bF9PffYNLt/s9jY+bIz+TBH
k7Ik3bUXJ2qAMnFIR1Xgj9pfxF8P/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5
wMKMAADq9M/4KG/FrQtPittO8QaVpcdl9r/sw2HhrS7STQPtcKwXP9nPHbK2n+ai5f7IYtzs
8h/eOzn62/4JkfFX4bfBz4V/CxL/AMU+H5NC8Uarr9p8UNP8Q+LJreDTGnhS005I9INzFBdW
86PF5s0lrdKgMjPLEsP7o/Z2+Pd98H/2Vfgv4a+G2t/B+y8WeHPEmtx+N31z4gNoVpa3X2+E
211MtrqNsmq25twv7xUvIzHAI0z8yMxnw9/w1B4z/wCGeP8AhVX23Sv+EF+3f2p9g/sOw877
Xuz9o+0+T9o83b+73+Zu8r91ny/krz+v0W/ZU+OF3qfhiDS4PiR8P/AOi3njnVdWstX8CeLL
bwm3hSd3ypvtG1MWy6xpTv8AZZIkO+eO3hkhLgj7MPz/APHdx9s8cazL9s0rUPNvp3+1aXZ/
lsYXuBqNjeiCS4xIrwrEUMKK+6OV/wDWoDhg4Wp/w1B4z/4Z4/4VV9t0r/hBft39qfYP7DsP
O+17s/aPtPk/aPN2/u9/mbvK/dZ8v5K+y/2NPi9a/s0/smfDfQ7vxr4K0jW5v2h7G71KO38Q
abd3NpoywJa3VyZI5Ha3t2aCWNplZBJBIw3NBcfvPjT9rL+w/wDhqj4l/wDCM/2V/wAI3/wl
eqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjimM8/or72/wCCenxh8K+A/wBnOzsPiL4x8FL4
hudVuj8HDqssOoH4e6qbW5STUr0BJPsNlJdvZlUnDKZovtIgCobgdt8FP2mdc8Efs4/CPTfB
Xij4VP4+07xXr03xCvvE3xFk0eP+0H1GJ4b66NvqVsNYieHBabbeo6Q7Fz8yMAfmnXpeo/sv
6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/rUBwwcL9l/s6/tXaZ4
e+EHgi4Hjzwr4e1W1/aPVWg0XUn0u10/wtdCKe8itoJSk9vokk6K7RSKsZKJ5i7147b4JfF7
wz4Z8J+JvD/hLxr8NNE8PT/tLX9xq+kXHiDSbHTtR8FzW/2a5AtriRYrmyeFwqoiuDtUxjdG
CoB+f37Pf7L+rftI6H49u9F1nw/Yz/D3w3c+Kryy1B7hJ76ytlJmMBjheMupMa7ZHjyZVxkB
ypqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/AOtQHDBwv0X+
xbc+B9P+OH7VX/CNa/4V0LwZrvgbxN4a8H/21r9vpf237ZcL/Z0Mf26WOVt0UPLv9z5fNZSw
z6B/wT1+NXw/+D/7B/hvSfFur+FbfxJ4j+K01z4eupNTsby68D3E2ktZ2viG4sJJQDFbXMbZ
W52BQyzDJEW4A/Omiug+LNnfad8VPEtvqevWvirUoNVuo7vWrW9a+g1iYTOHuo7hvmmSVsuJ
Dy4YMetfZfwm+KOv/wDDHX7Pml/Bn4p+Ffhx4k0DXdbk8b/bPF1n4dj+0SX1s9nc30E8iHUI
ltlUZWK4GyNosEgx0hHy/wDslfsv6t+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV
/KWRwXCriNhu3FVbzSv1B/YH+P8A4H+Cv/Cm9T03x98P9DsLrxX4kf4tXFnd2+jf2tcv5tto
kgtJVguG08LdK6RwQLbW+55JUiaKRk80+EH7RniX9kr/AIJq6fc6H4z8FN4/8IfE55IdMHij
Tr+9Hh4GBri1hWK4M4srjUrSJpY7Zl8+MGX5oXMhYz5J8EftL+Ivh/4YttIsNO+H89rab9km
peBND1O6bc7Od9xc2kkz8scbnOBhRgAAcr438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQbL
e2jjhThRnagycsckkn9C/gR+0YviPwL+zhqPgjxn4K+GkGn/ABA1jWPiloeneKLTwlaLDPq9
rcRK9pPcRNeW62IMcYUTBY4/KzlSteq+Ef2+fhz8NfD/AMJNM8J+M/BWl+Hrvxl4hurO0Wzt
QNFhm8aWiwyMkke7TUbQrzVwrOIQIZXAwwTAB+RNerfDX9tv4n/CHwPYeHvD/ib7FYaP9u/s
mV9OtLi+0L7bH5d19hu5YmuLLzFyW+zyR/MWbhmJOT+1l/Yf/DVHxL/4Rn+yv+Eb/wCEr1T+
yf7L8v7D9k+1y+T5Hl/J5Xl7dmz5duMcV9bf8E9PjD4V8B/s52dh8RfGPgpfENzqt0fg4dVl
h1A/D3VTa3KSalegJJ9hspLt7Mqk4ZTNF9pEAVDcBCPgmvQPBH7S/iL4f+GLbSLDTvh/Pa2m
/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAAH3Z8FP2mdc8Efs4/CPTfBXij4VP4+07xXr03x
CvvE3xFk0eP+0H1GJ4b66NvqVsNYieHBabbeo6Q7Fz8yNk/s6/tXaZ4e+EHgi4Hjzwr4e1W1
/aPVWg0XUn0u10/wtdCKe8itoJSk9vokk6K7RSKsZKJ5i714Yz8//G/jK7+IHie51e/h0qC6
u9m+PTdLttMtV2oqDZb20ccKcKM7UGTljkkk5Nfpt4O/aU8K/Duz0238P+P/AA/ocGjftSXF
rp8Wn65DbJY+DbiVJp44gjgJo8kkaO6ri2ZkVjkgGugu/wBpXTPAWg+FdL+BGsfBTTf7L+I/
iyTXP7S8bP4X02yRtZV9OuXgtr+zXUbQ2XlAERXaeVAIkHDRkA/N/wAA/AHxF8R/hJ498b6f
Hajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV+Jr72/ZK/ah1bxR8Hv2ifAWmfFrw
/wCANd8Qarpd34IMHiC48N+HNGgOtSS6i+mtOYzZ2/lzh/JAWeSLIETsrKPhTXtOi0jXLy0t
7+11SC1neGK9tVlWC8VWIEsYlRJAjAbgHRWwRlVOQEI9g/Z7/YP8X/tH/DeTxRpWpeFdIsLj
XR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkDn/AIX/ALXPjn4LaHDZ+Fbrw/oU
9rBc28GrWvhnTF1u3W4WRZTHqX2f7YjlZXUOswZFIClQAB7t/wAJ3H/w5K/4Rn+2fh//AGx/
wsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+PP8jjP2avYP8AgmR8Vfht8HPhX8LEv/FPh+TQ
vFGq6/afFDT/ABD4smt4NMaeFLTTkj0g3MUF1bzo8XmzSWt0qAyM8sSw/umM/N6iv0B/Y6+P
dv8ABj9mj4XaPf8AxB0rR/Enhf8AaAttNu1h8TQmSz8NSxQSX6LJHKQdKluYhJIVY20joHJY
gGvj79rL+w/+GqPiX/wjP9lf8I3/AMJXqn9k/wBl+X9h+yfa5fJ8jy/k8ry9uzZ8u3GOKQjz
+vVvhr+238T/AIQ+B7Dw94f8TfYrDR/t39kyvp1pcX2hfbY/LuvsN3LE1xZeYuS32eSP5izc
MxJ+y/8Agnl4n1nTf2FPhpr0fiu18OweEfjzb2V7qOo+JYNHS08PSWltdX9iks80Ye3lkVZp
LSMnzWTeY2Kkg1f9r5fgh+yL4i8VfCzxb4KhvNO+Neoar4a0capaQ3sXg+W5jmFnDY+Yl5bW
U97bW7S2sSxO8YLMvlMzFjPzeor9IfgR+0YviPwL+zhqPgjxn4K+GkGn/EDWNY+KWh6d4otP
CVosM+r2txEr2k9xE15brYgxxhRMFjj8rOVK0eO/jUviPwL8O4/2a/ih4K+GkGn/ABA8WXuu
q3iS08JWiwz6vFLpdxdWVw0TXlutiIwFEEwWOMw7MqYqAPzeor9LPgp+07ceDv2cfhHYfC7x
J8FB4ksPFevT+MrvUvFU/gvTUuH1GKS0vXsYrvTjeWj2xTCtazhIoRCIkKtCcn9lT44Xep+G
INLg+JHw/wDAOi3njnVdWstX8CeLLbwm3hSd3ypvtG1MWy6xpTv9lkiQ7547eGSEuCPswAPz
pr0v4X/tc+OfgtocNn4VuvD+hT2sFzbwata+GdMXW7dbhZFlMepfZ/tiOVldQ6zBkUgKVAAH
2B+xv8YNRj+HmmaAfiv4K8K6a3jLUdQt/Eng7xVY+EbnQLkkDztS0W+js4dX0yZ/ssscaJ5k
cEUsAKYFqut8CPjUtn4F/Zwj8EfFDwV4dg8N/EDWL34pLp3iS08HWmqwyavayxXD2U7WbXVu
1ipEarAQka+TsQqYgAfm9RX6Q6v+18vwQ/ZF8ReKvhZ4t8FQ3mnfGvUNV8NaONUtIb2LwfLc
xzCzhsfMS8trKe9trdpbWJYneMFmXymZj+enjvxV/wAJ1441nW/7N0rR/wC2L6e++waXb/Z7
Gx82Rn8mCPJ2RJu2ouTtUAZOKQjJrtvAPwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultr
eKNBlmdmLtnAULC+WDFFf7r/AOCZHxV+G3wc+FfwsS/8U+H5NC8Uarr9p8UNP8Q+LJreDTGn
hS005I9INzFBdW86PF5s0lrdKgMjPLEsP7rz/wDYP+N/iLQf2Zvjf8IU+MFr4I8WPPokPhGa
98Ziw0rS1i1ZzqktperL5CIUl8xxbuWuE3GNZcGmM+HqK/Sz/gnN8RPhh8BfA/w5sZ/HHhXW
PDeva74i074lpq/iq7tbGDzY0stNaDR5prdLm0uY2iMk09lP5as5keAQERfm9r2jTeHNcvNP
uHtZJ7Cd7eV7W6iuoGZGKkxzRM0ciZHDozKwwQSCDSEVKK/Uz/gll4i0nx1of7LPhnw5rnh/
yND1XxXd/EHw6dVt7OfUL0L9o0q5msZHSS/eIRW7xTRxy+QbcHchhO3lP2dv2stW0v8AZV+C
48BeJfhp/wAJsniTW77x7d+MPHdx4ddL2a/hlt72+jTULR9TR4Cu93ju8rCYwMh42Yz83qK/
Rb9nX9q7TPD3wg8EXA8eeFfD2q2v7R6q0Gi6k+l2un+FroRT3kVtBKUnt9EknRXaKRVjJRPM
XevHoF3+0rpngLQfCul/AjWPgppv9l/EfxZJrn9peNn8L6bZI2sq+nXLwW1/ZrqNobLygCIr
tPKgESDhoyAflTX9RX/Bo7/yjW0L/sM6h/6VS1+QHhv9r7Wfgh+wbc+KvCfi34aQ+MNO+Lt1
qtho+lapBCIvDUssU0tnZWMkiX9tpk+o20LNaqsUjwjc6+WzMf2V/wCDVjxV/wAJ1+wdHrf9
m6Vo/wDbHifV777Bpdv9nsbHzb6Z/JgjydkSbtqLk7VAGTivQwHwVv8AD/7dE48XvT/xfoz7
n+LP/H9cfU1+En/BwrZC6srIn+G20z/0/eOq/dv4s/8AH9cfU1+En/BwrfC2s7JT/FbaZ/6f
vHf+FdVb+BH1/Q4qX8eXp+qPzT/bDXZ8Kv2eR/1Ty6/9SvxDRR+2K274V/s8n/qnl1/6lfiG
ivJe56a2Pc7uLd+3x8Dj/wBQf4af+mHRK+XP2wf+S0x/9i14d/8ATJY19T3H/J+fwO/7A3w0
/wDTDolfLH7YP/Jao/8AsWvDv/pksa2rfw18jOl/Efz/AEOm+BWsfESf9n3xn4g0DSfhpN4T
+FsFpNql1rHgrQtRvWa+vRDBEJbizlnmdneRgXbakcLDcv7tGyfiR8EfEXjP9nOL483Wp+Cp
NJ1fxIPCtzpuj6cNLnsL1LVpQDaw2sNoqGCJHLQs2TKu75zJt92/4J5ftEa5L+yV8Y/hjafF
3/hBPEmo/wBgv4Mk1fxTJotjpMS6oz6lJBcM6pB+7mDyRxHzZl37UkIIrtf2NPjNpP7On7Jn
w30mL4g+CoLy7/aHsb65nt9UtxcjQhAltPelJdt1ZW8nkSIzypA7QSsrgRTsr8p1H56UV+q1
3+0rpngLQfCul/AjWPgppv8AZfxH8WSa5/aXjZ/C+m2SNrKvp1y8Ftf2a6jaGy8oAiK7TyoB
Eg4aM+aeG/2vtZ+CH7Btz4q8J+LfhpD4w074u3Wq2Gj6VqkEIi8NSyxTS2dlYySJf22mT6jb
Qs1qqxSPCNzr5bMxAPz0or9Vv2eP+ChXhvwZ8C/2f7dvEnw/8L3Wq67q9/faPZ2tqbfwzcXH
i6yBAVxI2nRf2Nea0iM7RjyJHwxIQ1b/AGevir8Gvg58VIE0jxT4Kk+H3ij4geM7Tx1p974s
a30rTLaeZrTSEtNIS5itLqyngeDfMbW6jRCzNLEkP7oA/On9nv8AZf1b9pHQ/Ht3ous+H7Gf
4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjXbI8eTKuMgOV80r62/4Jqafa+DNc/aF0/WvEXgrR
p7/4Y654Ss31DxTptrBf6lctGsMUE0k4jmRjBJ+9jZogNpLgOhb5JpCCivvb/gn38V9R0/4C
eE/DZ+Jfh/wLpqeJLu+t9a0HxtY+HdX8L3OxRu1nTL4wQ65ZSOLWQbGkkWKGaIS9LcdB8K9R
0/472P7HVn4f8V/D/VdV+GnxH1Z/EcEOp2Wg4+0eILW7hks7G5+zSyRSxZaOO3g4/wBXsV1M
YYz4p8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAAcr438ZX
fxA8T3Or38OlQXV3s3x6bpdtplqu1FQbLe2jjhThRnagycsckkn9NtJ+K+o6f8ZPHHhs/Evw
/wCBdNT4u+Jr631rQfG1j4d1fwvc/bJBu1nTL4wQ65ZSOLWQbGkkWKGaIS9LcfmT47uPtnjj
WZftmlah5t9O/wBq0uz+x2NzmRj5kEHlReVE3VI/Kj2qQNiY2hCO11H9l/VrL9kyw+MUes+H
7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfNK/QH9hb476f8DP2MPhVaP4y8
K6Jqeq/tAabql9C2rWR1Cy0Q2yW9zcSLvM1nExhkikdvLLQyOrEwzkSe16dr3wM+I+p/DEwf
Eb4VeH4/hN8R/EjaTZXOr/2fDZF/FtrqsMtusa+SbRtItLuOOVj9nMtxbxq3mH5GM/JOiv2B
07/got4H8PR/DH+xvH/hVbDXfHPiTVF+0Q28s2n/AGrxta7LibzozJp/maJe6xh5fJzFNJzu
2Y4r/hcWg6R42+EVt8IfiJ4K8GeE/C3xd8S3Xjyz07xhYeG7S8059fhktJHheeEX9udOULG0
SzJ5aeUOmwAH5/fC/wDa58c/BbQ4bPwrdeH9CntYLm3g1a18M6Yut263CyLKY9S+z/bEcrK6
h1mDIpAUqAAPNK/Sz4YftXeEPD3hjQbjwv480rw9YWv7Tsi6XBDqQ0uTT/BV08c80SwEo9vp
UkiRvJEVWEug3ruXj4T/AGsv7D/4ao+Jf/CM/wBlf8I3/wAJXqn9k/2X5f2H7J9rl8nyPL+T
yvL27Nny7cY4pCPP6KKKAPQP+GoPGf8Awzx/wqr7bpX/AAgv27+1PsH9h2Hnfa92ftH2nyft
Hm7f3e/zN3lfus+X8lW/hf8Atc+OfgtocNn4VuvD+hT2sFzbwata+GdMXW7dbhZFlMepfZ/t
iOVldQ6zBkUgKVAAHmlFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAegeCP2l/
EXw/8MW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOV8b+Mrv4geJ7nV7+H
SoLq72b49N0u20y1XaioNlvbRxwpwoztQZOWOSSTk0UAFFFFABRRRQAUUUUAFFFFABRRRQB6
B4I/aX8RfD/wxbaRYad8P57W037JNS8CaHqd0252c77i5tJJn5Y43OcDCjAAA5Xxv4yu/iB4
nudXv4dKgurvZvj03S7bTLVdqKg2W9tHHCnCjO1Bk5Y5JJOTRQAV6B4I/aX8RfD/AMMW2kWG
nfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAPP6KANbxv4yu/iB4nudXv4dKgurvZ
vj03S7bTLVdqKg2W9tHHCnCjO1Bk5Y5JJPVf8NQeM/8Ahnj/AIVV9t0r/hBft39qfYP7DsPO
+17s/aPtPk/aPN2/u9/mbvK/dZ8v5K8/ooA9A8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O
6bc7Od9xc2kkz8scbnOBhRgAAcr438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQbLe2jjhTh
RnagycsckknJooAKKKKACiiigAooooA9A8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7
Od9xc2kkz8scbnOBhRgAAcr438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQbLe2jjhThRnag
ycsckknJooAK9A8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7Od9xc2kkz8scbnOBhRg
AAef0UAa3jfxld/EDxPc6vfw6VBdXezfHpul22mWq7UVBst7aOOFOFGdqDJyxySScmiigAoo
ooAKKKKACiiigAr0DwR+0v4i+H/hi20iw074fz2tpv2Sal4E0PU7ptzs533FzaSTPyxxuc4G
FGAAB5/RQBreN/GV38QPE9zq9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJyaKKA
PQPBH7S/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAAHK+N/GV38Q
PE9zq9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJyaKACiiigAooooAKKKKACiii
gD0DwR+0v4i+H/hi20iw074fz2tpv2Sal4E0PU7ptzs533FzaSTPyxxuc4GFGAAByvjfxld/
EDxPc6vfw6VBdXezfHpul22mWq7UVBst7aOOFOFGdqDJyxySScmigAr0DwR+0v4i+H/hi20i
w074fz2tpv2Sal4E0PU7ptzs533FzaSTPyxxuc4GFGAAB5/RQBreN/GV38QPE9zq9/DpUF1d
7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJyaKKACiiigAooooAKKKKAPQPBH7S/iL4f+GLb
SLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAAHK+N/GV38QPE9zq9/DpUF1d7N8
em6XbaZartRUGy3to44U4UZ2oMnLHJJJyaKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigD0D/hqDxn/wzx/wqr7bpX/CC/bv7U+wf2HYed9r
3Z+0fafJ+0ebt/d7/M3eV+6z5fyV5/RRQAUUUUAFFFFABRRRQAUUUUAFFFFABXoHgj9pfxF8
P/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAADz+igD2vTP+Chvxa0LT4r
bTvEGlaXHZfa/wCzDYeGtLtJNA+1wrBc/wBnPHbK2n+ai5f7IYtzs8h/eOznxSiigAooooAK
/qK/4NHf+Ua2hf8AYZ1D/wBKpa/l1r+or/g0d/5RraF/2GdQ/wDSqWvRwHwVv8P/ALdE5MVv
T/xfoz79+LP/AB/XH1Nfg5/wcQWbXEGnsO1tpf8A6f8Ax3/jX7x/Fn/j+uPqa/DD/g4DVW02
0z1+zaZ/6fvHVdVb/d4+v6HHR/jy9P8AI/Mf9sQbfhV+zx/2Tu6/9SvxDRT/ANssY+F/7PX/
AGTy6/8AUr8RUV5L3PSWx7rcrj9vL4G/9gb4af8Aph0SvLfiP+ydqHx31vxx4y/4Sjwr4U8N
/Dnw14N/ti81o3rY+26TaQweWlrbTu37xMH5Rjcp6ZI9SuD/AMZ4/A3/ALA3w0/9MOi0/S7n
T/EP7Pn7Svg3+3/CmleJPFfhr4a/2PZ61r9lpH9ofZ7S3mn8t7qWNDsjGT83dR1YA7Vf4a+R
nS/iP5nzJ+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqC
u903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9r
MAUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSlRF
lWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6Tz
JDewCR0txtgTMolflOo800X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7Syu
AHsba5ijRbp/KDSyJkqzDKYc1fD/APwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvUb281S
y1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6nvtYv
dF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/aAs59Kg8fabrser/D
fTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/wAEv/FmjeIdD0rxB4y+H/hi/wDF
vivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/wCCe2sfCTwBoGveP/Hn
w/8AAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/+PtQ8ceENX8W+
Pvh//wAK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHvWv2h
fjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laMKAD5
/wDA/wDwT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wkz29zHatPEIbKQwRNNKqr9qEL9SU
Uc1ykvxg8bfs56hf+BLzRPh/Df8AhS+udNvItS8EaBq91DcRzOJUe6mtZXl2ybgCZGAAAU7Q
or6L/Zt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYto
wzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16tjPqEJummSxgC71kvXWUB
Yg2GKvhuMlCNbwV+zpfftVa54e1W9+IHwf8AC/iH4haquj6ToSwtaTzTK0NvGzWelWUkNikj
uqqZxCZCryYKnzG9L/Z+8CfE/wCGeoaP4O074j/BT4aeOL3XdQ8GabZzaXaXfiu3u2mNnPu1
Cw0+6ubTdLPJFHLNcxMVUmM+UqtWT/wTs0K/+HvxI+HnjO08W/BQaDdeK7JvEdr4gvNGi1jw
9b2l1EzSqNURJY98UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58TpLu/PirUdN/
t/SNNt9U81Lktrirdh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/h/wHBpeq3u
hWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5ud20/Yz/Yc8e/t1/EO58P8AgiztVTTo
PtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FWueB/D/AMR/
EzeLk8eT6PJfQ6PdeIEuoZVOuf6Y/m2UjM0sBMjMCHYyphef/wCCfn7afhLQf2ytG8CadZ/D
Tw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+
JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV
8Ofh5rniPxvf+H/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/2M
viz4w8N/FTw4Na8S/B/w78Kvhx4yuNVns9Y1HRdXPh+GOaO6vIdGFw11qTJIIlWF7IuJJirC
Qvvkr3bUf2nPC37S1j8E7zRdR+FUnhaD4j+Jr/xvpnjweHo77S9Mv/ECXqBU1MmT57OZyzWL
MCyldxeMAAHwp+yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5r
JncoXccgeU19rf8ABN+58D+Dv+Crs3jLRtf8K+HPhP4X13WfsN5rWv2+mbNPuLa/hsfLS9lS
4mypiBwrum5TJtzk/FNIR7B8K/2NdW+IHw80nxZrnivwV8OPD3iTVRo2g3viu7uLZNcnBIme
AQwTMLeBtqy3MgSCNpFUybgwXtvBH/BL/wAWeK/DFtf3/jL4f+Gbqfxy/wANn03UpdRkurbx
AHZRZO1tZzQncAGEqyNFhhlwcqOgmt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERY
NUvFuYb+B5mCz28axssqxlp0bbiFlZWPa+Ffi3F+zF/wTR1KP4e+Ovh/rGvaP8ZJde0aW+TS
bjVJ9LitRaQanFpl4ZZ7aUzxxsoEYnjQl+IyzljPCvE3/BPLx74K1X4W6fq83h+w1L4r+JL/
AMK6fatdvI+mXtlqSabOLpkjZAgnfhoWlyoJ9AfP/F/7Pnirwt8VPG/hC30u68Qal8PJ7+PW
pdHtpruC1hspjDcXRIQMturDJkdVADLnGcV9l/Cv4iaf8a/h1+x1qs3jjwrJqvwv8c6tqnjm
XxF4qstNvrTz9ctb77QwvZo5brfFvkLxCTLBlJ3grXz/AONPiLoHj79qj9oDxJp/xR1XwNov
ib/hIr7Sbixsbxv+EvSe7MkOlSrGUaKK5RgWacbF2DeueiEfP1e1/CT9jFPi5pHgV4fir8Kt
K1r4h3zabpeg3d5qE+qQ3AuRbIlzHa2cy23mOyMhldQyOGBwG2+KV9F/sTa94V+EPwk+LHxR
nvPD/wDwtDwHBpQ8BWGrTQyIbq6umiuL6G0cg3FxaRKJI8h44mIkdGKoVAPFPiz8OL74OfFT
xL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlew/A3/gntrHx3+EHhzxnZ+PP
h/o9h4o8Vx+CLO11I6oLoaxKN0Vq4hspEG+Mq4kDmMBwGdWDKPCde16+8U65eanqd5dajqWo
zvdXd3dTNNPdTOxZ5JHYlmdmJJYkkkkmvsv4NfHK/wDgF/wSZguvDPiL4fweOLT4rL4mttPv
p9G1PVLSyWxS1S7isrrzJI5Vuo12skYmVMyDERLkA8U+DP7DHiT4ueOPiV4bude8K+D9a+E9
je6l4gt9alun8m3spGjvHja0t7hZPJcKCActvXyw4Dbdb4I/8E7vEnx9+B+meN9E8YfD+CPW
tdl8LWGk6je3VpfXesLbvcR2Cu9v9mEs0SKYi04jZpY0LrIdg7X/AIJqeJ11nXP2hdT8S+K/
D9nqXi/4Y65okN34k8S2lhPq+q37RtEu+7mRpHkaOQvJkqpILsu9c+l/sN6t4b0T9iD4df23
46+H/h2Twz8crf4i39tqPiO1W+j0exsNski2iO1y8rywNFFCsRkkZ4yF8tvMpjPh6z+E3irU
dc17TLfwz4gn1LwrBcXWtWkenTNPo8Nu224kuUC7oUibh2cAIeDiufr3bVvito3xW/aM+Oni
+3+IXiD4Zab4wg1/U9Oijs55J/EgubozRaLci2k2xpOrYdnZ4VMfIbg14TSEe7fCr9gfXviV
4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPj/AI78Eap8
M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfev7NnxR0HxD8B/wBku2st
a+Ghf4deJNQj8Xf8JHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJA+NP2svG+mfEz
9qj4l+JNEuftui+IPFeqalYXHlvH9ot5ruWSN9rgMuUZThgCM8gGgDoPhX+xrq3xA+Hmk+LN
c8V+Cvhx4e8SaqNG0G98V3dxbJrk4JEzwCGCZhbwNtWW5kCQRtIqmTcGC+a+O/BGqfDPxxrP
hvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX1BNb6T+19+w98DPBGh+LvBXhrxD
8K9V1jTteg8V67b6IiwapeLcw38DzMFnt41jZZVjLTo23ELKysfa/gR8S9J8EeBf2cNH+Fvx
c8P6Z4e8E/EDWG+IVw3iW38IPr1l/a9q9teXVnc3EMl2kmnoMDE21QYSdylKYz83qK/Xbwj+
3z8Ofhr4f+EmmeE/GfgrS/D134y8Q3VnaLZ2oGiwzeNLRYZGSSPdpqNoV5q4VnEIEMrgYYJi
3p37cHw/+CUfwx8N+G/G3w/0/RU8c+JGt7ezWxubfR0fxtaxxShgrLYxNod3q4SXMSGCV9rZ
2YAPyU8K+BNc8df2l/YmjarrH9j2MuqX/wBhtJLj7DaRY8y4l2A7Ik3LudsKuRkjNZNfQPjS
+8MWP7VH7QH9ifEf/hXPhub/AISKLQ/+EdtZZ7HxRAbs/Z9HX7IyolpcR7cO2YAsa5Ugivn6
kIKK+9v+CffxX1HT/gJ4T8Nn4l+H/Aump4ku7631rQfG1j4d1fwvc7FG7WdMvjBDrllI4tZB
saSRYoZohL0tx6X+zx4n+FHxT+EX7P8ANr/xS+FVh4g+FfivV5NtzdyaHCl6/iWy1b7Tb2/k
wpHaSaZaXqRmSOKFZbm3hASX5Y2M/L6iv1h/Z6/aM+EWkfFSDxBZ+M/D+o+CfiT8QPGc/wAQ
bfxD4oubOCzhvZmg0ry9GkuIYri3uIZYTLLNaXIjDSGSSEQHyuK/YQvtf8FfsYeAL5/F2laD
/wAIH+0BHpWo6pceLrOxtbbQTbW9xqFlBdvcLFPaTSoJmt4HdZygkCPt3AsFj806K/Vb4D/E
C31D4QWXi/wZ4o0rwz4M0D9p25isb+41uHw7a2XhGYRXlxYQC4kh22kuIpmsUHzmMMYiUOKu
m/tXaNovgXwTH8ANV+D+kwRfEDxRe6wuueLJ/BtpYQy6ukum3E1lHfWLXtv9hMI2tBchY4BC
EUq8RLBY/LOiv0L8N/tfaz8EP2DbnxV4T8W/DSHxhp3xdutVsNH0rVIIRF4allimls7KxkkS
/ttMn1G2hZrVVikeEbnXy2Zj8/8Aw8+PXgr4L2eqfFextPD+p/GbxFqt3c+HtAstJeHQPhuD
KWF95Uq+XPcKWxaQIZIYAgkkZpFSNUI5/wCEn7GKfFzSPArw/FX4VaVrXxDvm03S9Bu7zUJ9
UhuBci2RLmO1s5ltvMdkZDK6hkcMDgNt80+LPw4vvg58VPEvhDU5bWfUvCuq3Wj3ctqzNBJN
bzPC7RllVihZCQSoOMZA6V7t+yJ470PwV4A+Mvxk1rWdK1H4x+FP7NufBsfiC7jupLvUL68d
LvUlt5Tuu7u2T98jNvSN28x0YhSvzpr2vX3inXLzU9TvLrUdS1Gd7q7u7qZpp7qZ2LPJI7Es
zsxJLEkkkk0AfQH7Jn/BNTxV+198K9U8X6R4z+GnhnTdInvY7iLxJq81lOsNnDbTXN1hYHX7
PEt5AHkLAIXG7GVJ8U+LPw4vvg58VPEvhDU5bWfUvCuq3Wj3ctqzNBJNbzPC7RllVihZCQSo
OMZA6V9F/wDBLa50/T/+F8f2lr/hXQv7d+FWs+GtO/trX7LS/tuoXnlfZ4Y/tMse7d5L5cfI
ny72Xcufdv8AgmR8Vfht8HPhX8LEv/FPh+TQvFGq6/afFDT/ABD4smt4NMaeFLTTkj0g3MUF
1bzo8XmzSWt0qAyM8sSw/umM+CfhX8K/+Fpf8JH/AMVH4V8Of8I5od1rn/E81D7H/ankbf8A
Q7X5T5t3Ju/dxcbtrcjFcpX2D/wTOvn8Cf8ADRvhvVvF3hXQrXWvhxq/hyK3vvF2n2ljq2sS
fu7QRM9wsNxgC5CzoWjRZT86iUbu1/Yc/aUsfh3+yZ8Irefx/a6HrujfHmytZon1xba7sfDF
xBbzXsbAuHTTJLiMPMpxA0iBnywzSEfH/gH4A+IviP8ACTx7430+O1Hh74cQWM2rzTThX3Xl
0ttbxRoMszsxds4ChYXywYor8TX6LfCP9ojzbH9q74Y+Dfi7pXgT+0fFdq/w2kfxT/Yuh6Tp
6+IJ3vJLG4DrBBF5EyO0cB3zR7tiSYIq3+zt+0I3wr/ZV+C+gfCbxP8AB9td0HxJrY8XXuue
NLvwraCY38LWd9NbfbbCXULd7XYf3tvcERwiLYrB4ixnxT+yV+y/q37YvxrsfAWgaz4f0bXd
UgnmsTrD3CQXbRIZXiDQwylX8pZHBcKuI2G7cVVuK8K+BNc8df2l/YmjarrH9j2MuqX/ANht
JLj7DaRY8y4l2A7Ik3LudsKuRkjNfpt+xL+0h4C+FuufCjXtF8X/AA08H6TqfjLxRcfFaLSb
xNJgu55Wmg0IRWt0UvG0yNbtDEiR+RAC0kwjeKR0/P74P2kXg/XPiFp+p/Eq6+Hs9t4b1KyV
9H83UYPFU6sijSDNZyeWbe4IOZizwERqTuBU0hHmlewfCv8AY11b4gfDzSfFmueK/BXw48Pe
JNVGjaDe+K7u4tk1ycEiZ4BDBMwt4G2rLcyBII2kVTJuDBfH6+4fhB8RPht8Y/2VfgLoniYe
CtR034Q6rq1j420nxLrM2lXcem6lfxXh1PTFhuYHvHiggmjMSNJL5jqBbSb42oA+NPHfgjVP
hn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJrtv2lNG8PeHP2jPH+n+E
HtZPCdh4k1G30V7W6N1A1kl1ItuY5izGRPKCYcs24YOTnNcTQAUUUUAFFFFABRRRQAUUUUAF
e7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONm
GPhNfe3wO8T6D8Qvg9+xcbLxX4Ksn+DvjLUrrxdDrHiWw0efSoZNatLxJBFdzRPOhgDMGgEg
yrL98FaAPh/x34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK9K+C
f7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHPP8A
7WXjfTPiZ+1R8S/EmiXP23RfEHivVNSsLjy3j+0W813LJG+1wGXKMpwwBGeQDXtf/BOzQr/4
e/Ej4eeM7Txb8FBoN14rsm8R2viC80aLWPD1vaXUTNKo1RElj3xSM6S6e7klMFlkjVVAPmrx
34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK9K/Z7/Yc8e/tJ/Cv
x7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8QPBOjfHv9oz4xan
4c8YeH9O8PadPrXibS7vxNqs8M+vWqXTNDbwPcBpp72ZJFKpKRI5Dlm3Zr1b/gl5p9rpGh/H
DUNT8ReCtCg8T/DHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLgggAHlP7Pf7Dnj39
pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHu8fr7B/4Js+Fo
/Av/AAvz+2/Evw/0f+2Phx4h8FWH27xnpNv9u1SX7P5ccW+5G+J9rbbhcwNg4kOK+dPBPwSm
8X654w0+48TeCtAn8HaVe6pK+pa1EINWa2ZVNpYzReZHc3EhP7pUbbIFJD4wSAaujfsv6tN8
BH+I+uaz4f8ACPh67nmtNBTWHuBd+Kp4UZpksYYYZWdI2VY2mk8uBZJUQyhtwXzSv0L/AGfP
ironjP8AZm/ZU8Ow6t8H7vSfB3iTVrf4hWPjI6AJ7Cyn1aC4zEuqgSlHtXkJezzkrtJ3oAvy
/wDED4y/DXwp8SPFFh4X+E3w/wDE3haDXdQ/sTUtSvPEMd1c6ebqVrXeqajCBthMajMavhRv
y+5iAVP2M/2HPHv7dfxDufD/AIIs7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR2X
x+v0B/4JR/tyWkPx28AeD9d0T4VeBfAPg++1rxEdUm1a50lrOe7t7iJWZri/EN5KomitYzNH
POlvnDfK8lfKkvxc8O+CNQv9Kv8A4NfCrVbqzvrlHuF1jXLiPmZyI45bbVRDJFGCI45FLb0R
GLyEl2APKaK/QH9hL45eEPDXwP8AL8e+I/h/pGtanrt7P8F7O7nGpQ/DHU3t7oPe3PmefJZa
ebp7LYt00hMsIuTEQjXNdX8FP2mdc8Efs4/CPTfBXij4VP4+07xXr03xCvvE3xFk0eP+0H1G
J4b66NvqVsNYieHBabbeo6Q7Fz8yMxn5p0V+i37Ov7V2meHvhB4IuB488K+HtVtf2j1VoNF1
J9LtdP8AC10Ip7yK2glKT2+iSTortFIqxkonmLvXj4p/ay/sP/hqj4l/8Iz/AGV/wjf/AAle
qf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjikI1vgj+xJ8T/2jPD2mar4N8M/2xYaxrsvhqzl
/tG0t/O1CKye+eHEsqEYto3k3kBDjaGLELVr9nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9T
1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3fe3/BID9qbwP8M/2Xvh5o/iT4uf8I/dWXxH1KS4
0nUtRt7SC3tH0S7KwuJLsFdPMzLMJTHtN6wj8rJ8+vnT/gnvcWt1rn7RPiDU/G/h/wAjxr8P
/EvhrS9Q8WeJtN0rVdc1K6aCSEzQXF2ZFeYZZpSzxB94MzFSaYz40oq3r2jTeHNcvNPuHtZJ
7Cd7eV7W6iuoGZGKkxzRM0ciZHDozKwwQSCDVSkIKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACv6iv8Ag0d/
5RraF/2GdQ/9Kpa/l1r+or/g0d/5RraF/wBhnUP/AEqlr0cB8Fb/AA/+3ROTFb0/8X6M+/fi
z/x/XH1NfhD/AMHClw0UdiB3ttL/APT/AOO6/d74s/8AH9cfU1+Ff/BwNZ/abezPpa6Z/wCn
/wAd11Vv4EfX9Djo/wAeXp/kfmf+2X/yS/8AZ5/7J3df+pX4hop37aK7Phn+z0P+qd3P/qVe
IaK8l7npLY9uvGI/b2+Bv/YG+Gn/AKYdErzH4i/snah8d9Z8b+Mv+Eo8K+FPDfw58NeDf7Yv
NaN62Ptuk2kMHlpa207t+8TB+UY3KemSPah4Bv779uD4HXiQyGH+xfhs24DjA0HRc/yrN0+W
w1v9n/8AaW8FHXvCuleJvFPhv4ajSLPWtfstI/tD7Na280/lvdSxIdkYyfm4yo6sAd6q/dr5
GdL+I/mfMX7Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKo
K73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2
swBQBz6t/wAEvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSl
RFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6
TzJDewCR0txtgTMolfkOo800X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK
4AextrmKNFun8oNLImSrMMphzV8P/wDBPrUNQ8Q+HvD2q/Ev4VeHPGfiPXbnw1F4Yu9RvbzV
LLUIL02RhuRZWk8UG+XBR5JAjowZWIDY9K/Yy8b+MPhZ8VPDmka18Q/g/P8ACr4SeMri6nvt
YvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/aAs59Kg8fabrser
/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP8A8Ev/ABZo3iHQ9K8QeMvh/wCG
L/xb4r1Hwf4civpdRm/t67sb1bGZ4jbWcwii+0uI1NwYmOC20LhjleMv+Ce2sfCTwBoGveP/
AB58P/AX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/wDj7UPHHhDV
/Fvj74f/APCvfB3iuXxFqVz42u9H1DVrL95Fe37WMd0suqebcGMbTZp89w+4MH3uPQPBnx71
r9oX40+GdcuvF3wU1P4MXXxH1S/n8NeNIvDsWseFtMvNYF5diVdSiEp8+KcvusppxlCu5WjC
gA+f/A//AAT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wAJM9vcx2rTxCGykMETTSqq/ahC
/UlFHNeKeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX2X+zb8
YbnRv2jLWTQ/GPw0079nf4e/EC/1jR18WS6Xd3+kaUl0t266bBfJLq6PNBFEIxbRhmuGByJP
McfP3xAs7H9rb9oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8Nxk
oQfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgD
nW8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M
65jw57X/AIJ2aFf/AA9+JHw88Z2ni34KDQbrxXZN4jtfEF5o0WseHre0uomaVRqiJLHvikZ0
l093JKYLLJGqr23wBNro37akfxG8I+Ofg/qHgDXPidJd358Vajpv9v6RptvqnmpcltcVbsPL
bS+as1pI8rFf3jCaMKGM+fvEX7FHir4c/DzXPEfje/8AD/gODS9VvdCsbTWJ5nu/Ed/ZmRbq
GxS2imEqRSRiJrhiluJJEXzc7tp+xn+w549/br+Idz4f8EWdqqadB9o1HVdQd4dO0xSG8sSy
KjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f+I/iZvFyePJ9HkvodHuvECXU
MqnXP9MfzbKRmaWAmRmBDsZUwvP/APBPz9tPwloP7ZWjeBNOs/hp4a+CvgXxJ4k1zRfEOoap
daPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5U+FX7A+vfErwZ8PdavfFvgrwinxX1W
fR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMcnxF+xR4q+HPw81zxH43v/AA/4Dg0v
Vb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7b7X+xl8WfGHhv4qeHBrXiX4P+
HfhV8OPGVxqs9nrGo6Lq58PwxzR3V5DowuGutSZJBEqwvZFxJMVYSF98le7aj+054W/aWsfg
neaLqPwqk8LQfEfxNf8AjfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT9k
7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4Jv3
Pgfwd/wVdm8ZaNr/AIV8OfCfwvrus/YbzWtft9M2afcW1/DY+Wl7KlxNlTEDhXdNymTbnJ+K
aQj2DSv2KPFWr+HPgpqcd/4fEHx41W50fQFaebfZzQX0dk7XQ8rCIZJVIMZkO0EkA/KfP/iz
8OL74OfFTxL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlfcPwe/ak0f4e/BL
9h/R7fXfh+39meK9UbxSupWWl3914et31+CRJXkuI3l0/dE0kglRoiQgfd8ilfa/gx8c/hR4
F+OEuoR+NvCuq+EPHXxH8bD4ixav4ykWxtkurh7bSzBpYuo7a9tLmKSEyXDW10iqzs0sSw5i
Yz8k69L/AGSv2X9W/bF+Ndj4C0DWfD+ja7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3bi
qt9V/CD9ozxL+yV/wTV0+50Pxn4Kbx/4Q+JzyQ6YPFGnX96PDwMDXFrCsVwZxZXGpWkTSx2z
L58YMvzQuZD6X+wP+1D4H8Cf8Kb8U6b4g+H/AMOrDXfFfiS++LWm2d9b6Xi5n82HRIRbSyfa
H0+FbxdiwB7aD55ZSrRSSKAfl9RX6bfstS/C7xJ8B/2crDxP8Sfhpofif4L+JNTgltdT8Q+U
bW//AOEjsdTM0ckW6Ca3fTLO+jS4Lm3ea5gRXLsCnV/s9ftGfCLSPipB4gs/Gfh/UfBPxJ+I
HjOf4g2/iHxRc2cFnDezNBpXl6NJcQxXFvcQywmWWa0uRGGkMkkIgPlAH5PUV97fCD9ozxL+
yV/wTV0+50Pxn4Kbx/4Q+JzyQ6YPFGnX96PDwMDXFrCsVwZxZXGpWkTSx2zL58YMvzQuZD23
wU/a+uE/Zx+Ec3wum+CnhXxJJ4r17UvGVhqXi6fwfpuiXE+oxTWjvZxaham9tFtmRAGjvAsV
uIQMq0bAH5p0V0HxZ1mLxH8VPEuoW6eH44L/AFW6uIk0G1ltdKVXmdgLSGVVkjt8H92jqrKm
0EAgivtb9hz9pSx+Hf7Jnwit5/H9roeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynED
SIGfLDNIR8E0V+sP7PXxV+DXwc+KkCaR4p8FSfD7xR8QPGdp460+98WNb6VpltPM1ppCWmkJ
cxWl1ZTwPBvmNrdRohZmliSH91lfstftj6D+zf8As3/s5eC7rxz4KtNS03VdTg1KFbiw1MaT
dHxbYwtM1wvmrbI+j3WtBbjekbwyuyu2YzTGflnWt4V8Ca546/tL+xNG1XWP7HsZdUv/ALDa
SXH2G0ix5lxLsB2RJuXc7YVcjJGa/UC7/aV0zwFoPhXS/gRrHwU03+y/iP4sk1z+0vGz+F9N
skbWVfTrl4La/s11G0Nl5QBEV2nlQCJBw0Z+CfDXifTtV+Knxb1B/iJa/DWDVNK1iS0Twnpl
8mleJWkmDJo0MK+XJBZTg/L9oXaiRoJEzwEI8fr3b4VfsD698SvBnw91q98W+CvCKfFfVZ9H
8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8Jr7W/YIun8DeGPhpNafEv4f6r4M8ReK
3n8d+EvE2pafo8ng14XW3j1OymurmK5F2bSdpo7rT/LdHgRC0hj2AA+P/HfgjVPhn441nw3r
dt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr9IfBNv4N+Kl9+yPofww8XeH9Z0n4
S/E7WYbmPWNds9L1U2UviK2ubGUW100Ety81qFYC3ibL7k2hwUHV6T8V9R0/4yeOPDZ+Jfh/
wLpqfF3xNfW+taD42sfDur+F7n7ZIN2s6ZfGCHXLKRxayDY0kixQzRCXpbhjPzp8A/AHxF8R
/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV+Jr7r/Yt/aQ1zUPgF8f
PhvafGnSvCniTVr7R5vBl8+vSeGdDsohrEj6lcWJZYEs4mjmErQRRxyvGWCwsVKDtv2dv2hG
+Ff7KvwX0D4TeJ/g+2u6D4k1seLr3XPGl34VtBMb+FrO+mtvtthLqFu9rsP723uCI4RFsVg8
RAPzeor9Yfgr+3z4V+Gvwn+BemReM/hppb3fiTWrq5tNJs4RYaLNN4wsl8yJLiPztNt20a81
kRNMISLeVgcOFx+b37WX9h/8NUfEv/hGf7K/4Rv/AISvVP7J/svy/sP2T7XL5PkeX8nleXt2
bPl24xxSEef16B/w1B4z/wCGeP8AhVX23Sv+EF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu
8r91ny/kr9Af2PP24PDfwS/ZT/Zm8Nt428K6fdJfag19b3C2tzNo9w/iuwjMszOrNY7tGu9a
AlcxAxSyYbOzHQfs9fFX4NfBz4qQJpHinwVJ8PvFHxA8Z2njrT73xY1vpWmW08zWmkJaaQlz
FaXVlPA8G+Y2t1GiFmaWJIf3TGfm9/w1B4z/AOGeP+FVfbdK/wCEF+3f2p9g/sOw877Xuz9o
+0+T9o83b+73+Zu8r91ny/krz+v0B/Y6+Pdv8GP2aPhdo9/8QdK0fxJ4X/aAttNu1h8TQmSz
8NSxQSX6LJHKQdKluYhJIVY20joHJYgGva/AXx3+Fmh/tAfAjV9K8ZfD/SvCHwo8c/ETTtTj
j1a0tI9Mg1S+mTS2t7ferTWjpcQ4mt0eCJNzO6LG5UA/P79nv9g/xf8AtH/DeTxRpWpeFdIs
LjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkDxSvsvXvER8Of8EZrzwRca58N
JNfsPic93LplrquiXWpNpqQmA3MYidppH+2DaJkLStb4wxtCK9W/4JkfFX4bfBz4V/CxL/xT
4fk0LxRquv2nxQ0/xD4smt4NMaeFLTTkj0g3MUF1bzo8XmzSWt0qAyM8sSw/ugD83q7bwD8A
fEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX+wPAnjjxL4c/ZV
+BPhr4Q/FbwV8PPFnhHxJr0fjx28cado9o101/bm0urpXmCapbi3QYkiS6jaNDGN2NlVP2Lf
2kNc1D4BfHz4b2nxp0rwp4k1a+0ebwZfPr0nhnQ7KIaxI+pXFiWWBLOJo5hK0EUccrxlgsLF
SgAPhSiv02/Yu+M3gLwZZ/sk3lz8QfBX9m/B/VfGek+JbmbVEsnt21KVo7CeO2ufKupbebz4
m81YSsSlzMYvLk2ef/CD9ozxL+yV/wAE1dPudD8Z+Cm8f+EPic8kOmDxRp1/ejw8DA1xawrF
cGcWVxqVpE0sdsy+fGDL80LmQgHwTRX6WfBT9r64T9nH4RzfC6b4KeFfEknivXtS8ZWGpeLp
/B+m6JcT6jFNaO9nFqFqb20W2ZEAaO8CxW4hAyrRtz3hv9r7Wfgh+wbc+KvCfi34aQ+MNO+L
t1qtho+lapBCIvDUssU0tnZWMkiX9tpk+o20LNaqsUjwjc6+WzMQD89K92+FX7A+vfErwZ8P
davfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/AB34q/4Trxxr
Ot/2bpWj/wBsX0999g0u3+z2Nj5sjP5MEeTsiTdtRcnaoAycV9gfsEXT+BvDHw0mtPiX8P8A
VfBniLxW8/jvwl4m1LT9Hk8GvC628ep2U11cxXIuzaTtNHdaf5bo8CIWkMewIR8f+O/BGqfD
PxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRWTX6Q+Cbfwb8VL79kfQ/hh4
u8P6zpPwl+J2sw3Mesa7Z6XqpspfEVtc2MotrpoJbl5rUKwFvE2X3JtDgoOr0n4r6jp/xk8c
eGz8S/D/AIF01Pi74mvrfWtB8bWPh3V/C9z9skG7WdMvjBDrllI4tZBsaSRYoZohL0twxn50
+AfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/E191/sW/t
Ia5qHwC+Pnw3tPjTpXhTxJq19o83gy+fXpPDOh2UQ1iR9SuLEssCWcTRzCVoIo45XjLBYWKl
B237O37QjfCv9lX4L6B8JvE/wfbXdB8Sa2PF17rnjS78K2gmN/C1nfTW322wl1C3e12H97b3
BEcIi2KweIgH5vUV+sPwV/b58K/DX4T/AAL0yLxn8NNLe78Sa1dXNppNnCLDRZpvGFkvmRJc
R+dptu2jXmsiJphCRbysDhwuPze/ay/sP/hqj4l/8Iz/AGV/wjf/AAleqf2T/Zfl/Yfsn2uX
yfI8v5PK8vbs2fLtxjikI9B+Bv8AwT21j47/AAg8OeM7Px58P9HsPFHiuPwRZ2upHVBdDWJR
uitXENlIg3xlXEgcxgOAzqwZR4p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJ
VsOrDKkg44JFfYHwa+OV/wDAL/gkzBdeGfEXw/g8cWnxWXxNbaffT6NqeqWlktilql3FZXXm
SRyrdRrtZIxMqZkGIiXPQfCD9sPxb8Fv+Caun+LdO8eeH7/4lWHxOfVja6n4ktbrW59ClME1
1AYjP9tS3udUt43nijKPMu+RgY3ZyxnwTXpf7JX7L+rfti/Gux8BaBrPh/Rtd1SCeaxOsPcJ
BdtEhleINDDKVfylkcFwq4jYbtxVW+1vgR+0YviPwL+zhqPgjxn4K+GkGn/EDWNY+KWh6d4o
tPCVosM+r2txEr2k9xE15brYgxxhRMFjj8rOVK16B+zB+0h8Jvhb8VPAOvfD7xf4K8H+BNT+
IHi64+IES3kGkz3ccs08HhwNazFLlrKOG7iKpDH9mgJeSURtFI6AH5PUV+m37F3xe8G+AbP9
kmfWfGvgrTk+DWq+M9G8XCbxBZh9On1GVorN418zddW8jTxn7TbCWBF3u8ipG7L5/wCBPHHi
Xw5+yr8CfDXwh+K3gr4eeLPCPiTXo/Hjt4407R7Rrpr+3NpdXSvME1S3FugxJEl1G0aGMbsb
KQj4/wDAPwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAULC+WDFFfia+
6/2Lf2kNc1D4BfHz4b2nxp0rwp4k1a+0ebwZfPr0nhnQ7KIaxI+pXFiWWBLOJo5hK0EUccrx
lgsLFSg+H9e06LSNcvLS3v7XVILWd4Yr21WVYLxVYgSxiVEkCMBuAdFbBGVU5AAPdvB3/BNT
4l+Pvgz8NPH+lQ6Vd+FvidrqeH7e7SaVv7DuJL/7BE98ojJjikmBCyR+YPuhtruiN498Wfhx
ffBz4qeJfCGpy2s+peFdVutHu5bVmaCSa3meF2jLKrFCyEglQcYyB0r9Af2W/wBvWb9m74Pf
saaRoHjbw/bWN1quvaT470q51CJoLGyutah2T3ke8G3dInkmilfbgB+WjaRW9L+DHxz+FHgX
44S6hH428K6r4Q8dfEfxsPiLFq/jKRbG2S6uHttLMGli6jtr20uYpITJcNbXSKrOzSxLDmJj
PyTor9TP2Wv2x9B/Zv8A2b/2cvBd1458FWmpabqupwalCtxYamNJuj4tsYWma4XzVtkfR7rW
gtxvSN4ZXZXbMZq1d/tK6Z4C0HwrpfwI1j4Kab/ZfxH8WSa5/aXjZ/C+m2SNrKvp1y8Ftf2a
6jaGy8oAiK7TyoBEg4aMgH5U17t8NP2FLr4i+HPhtdyfEj4aaBqXxXne38PaLqE+pSajMy3z
2KmVLazmSFHnRlR5HCthsH5HC9B4a+NPgT9n/UPEPxN0l/Cvij4vaxrt8/hrTdJ0e4t/DPgN
POYrqaQ3USedKcg2UG1kt1CyTZlVYV9A/Z+0N/DPwJ0fxz4O+IHw/f42/Fm+1C217xR4p8ba
fp2ofDqyNwYJJYo7if7Q13eK0sr3iK8yQ7liQPJ5jgHn7/8ABL/xZo3iHQ9K8QeMvh/4Yv8A
xb4r1Hwf4civpdRm/t67sb1bGZ4jbWcwii+0uI1NwYmOC20LhiXX/BJ34v2PhHwbrc2n6Uth
4t8VyeDLh0umm/4RvUE1NtMH24xowSJrlHVZYjKnCgkM8av9LfCbxT4f+Geg/s+eGPBvxA+F
XiDTPhj8R9bh8Zal4k1DRVkt7RNZtmgvdOXVJHkgims4/OB0xtpfJ3NMC1dVYf8ABRPTPgv4
q/ZyT4e/EPSrjwhrfjnxfZ+J7XU9Ve48vR73xIrW13fi4k8+KUW8jTxz3BWT75Ysryq4B8la
[Base64-encoded MIME attachment data omitted]
x2dtcC1R5JQqG5MRfazAFAHPmvjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlW
w6sMqSDjgkV9a/six3OgftOaF8Q9M8Vfs6ReGdZ+ICXWv2btpdg/hy1t9QWQyWUOsxRXNvbv
DKzwNZEyBUVX2SxBF+gdN/ay07S/AvgkfAvxL8NN6fEDxRfa9d+MPHd94ddFm1dJdPvb6NtQ
tLjU0eyMW9547tisJjI3h42APhP9jP8AYc8e/t1/EO58P+CLO1VNOg+0ajquoO8OnaYpDeWJ
ZFR23yMpVEVWZsMcbEdl1vhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7
JazpCnnyrGDO0ZJDHGzDH6g/4Jvft2Wkv7V3hnwrrth8FPC3gHwprviHxIdesrq58OWaT3kd
zGs0MVxdQwz8TxW0CTWzzxWp2hU2Owt/Cf4kaDq3w+/ZlsLK++D9g/w28ZavB4utb/xRYQDw
jCfE+n6mk2nSXd3vuEMFs0a3ED3IeF5k3s7E0Afnp478Eap8M/HGs+G9btvsWteH76fTb+38
xJPs9xDI0cibkJVsOrDKkg44JFZNegftZeN9M+Jn7VHxL8SaJc/bdF8QeK9U1KwuPLeP7Rbz
Xcskb7XAZcoynDAEZ5ANef0hHu3wr/YUuvid8BNJ+I0/xI+GnhXw9q3iQeEgdcn1KF7TUihl
WKZo7OSKNDDiXzTJ5Sqw3urBlHQS/wDBMzWNK8MX+sa18TvhV4csNL8c3Pw6uptSudUjjg1i
B3BjeRbFkSJo080TMwjVGG9kYMq+wfsneJ7nT/8Agl2fCGh+K/g/pnibxh8QNQFzb+KvEul2
z6dot7ocmlT3xWWYT27o7sV8tfOZRwkkUjLIJ46tf2Vf+CZnj3wn4X8Y/DTxbqA+J2t2aLJq
Wmz3uo6FcaVNoralbWwnaeF3dtyGI7/Lbcd9u7l2M+f/AIs/sD698HPgp4l8can4t8FT2fhX
xldeA7uwtXv2vZNVt3cPFHutViKGFDOHMgXy8AkS/uq5/wCBH7FnxD/aR8D6/wCIPCei/brD
Qswxq8oim1q7WMzvY2KHm6u1tkmuGhjywihb+J4kk91+FfgW1+LH/BKTSfAkHjP4aaH4h1X4
ujXzBrnjHTdOey03+zTZNdzRyTCVUWYH5AhlZQGSNlZSfVrf4ufDb4G6N+yL4NXU/hp8QU8D
fEDXrO71+41aaJ/D9sniSIw6kEtr1IokmhUXCm5E0ZWNSNybtwB8ffCv9jXVviB8PNJ8Wa54
r8FfDjw94k1UaNoN74ru7i2TXJwSJngEMEzC3gbastzIEgjaRVMm4MF6vRf+CaHjJZfC1r4o
8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP1D8cfjF8KP2jN
a8O2+p3fw/8AEXhv4ZfEfxdH4rtNc1+Swkm0fVddN+ur6Q1vcwm/2W0MqCOJpZC8iYtpA8b1
z3wcsNB+Efxks7XwL8TvhprPwg1P4namPEHhXXNcsLFPB9rbXjWtnqmnXd3dpdPcGwm86K90
9lkDQRqzylNtAHw/qPwN8Yab448V+G/+Ec1W71rwN9rbX7exgN7/AGUlpJ5dzLK0O5VijcYa
XOwZHzYIrlK9g8NJ4S8NfFT4t2nhz4reIPB3hOPStYtfD17HaXUs/jO2EwFtplyIfKMaXUQV
naVBGpT5oxwB4/SEe7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1n
SFPPlWMGdoySGONmGOrov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9j
bXMUaLdP5QaWRMlWYZTDn6A/Zs+KOg+IfgP+yXbWWtfDQv8ADrxJqEfi7/hI9dsNMvfDULeI
9N1RLq1S7nid3aC3ZfMgWUGOSeL75IHoOv8A7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPi
PWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCsZ8faL/wTQ8ZLL4WtfFHiPwV4D1b
xv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5yvhv8A8E5fij8R9D+K+pjS
rXR9N+DUGoHxFd6hcYg+1WSs09jA8QdZrgKjH5T5YG0s6+ZHv+1tf/aH8G/G3xf8F9a8L+I/
hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IXyn9m7xPoPj
X9oz9rvxpb+K/BVh4e+I/hvxlonhubWPEthpE+p3V9dRzWqi3upopkSRDkSOixghlLBlIAB8
/wDwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZ
hjq6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJV
mGUw5+i/gV440cfCL9ljRYtd+FUt18LPFep2fjJ9Z8T6XbTeHE/4SXTtSW7sXmuEE++G2ZRP
a+cjxSToCS3Hba/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3t
rFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3
tndpZXAD2NtcxRot0/lBpZEyVZhlMOcrwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSu
L3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDn7W1/9ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+
1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IXz7U00GPVfE3xJ+D/j34aD4g
/Gjxl4h8zxLr/iyw0S5+G+iyalLFHJb2txKtytxdwu0jXKxmaKDKRxK8nmMAfBXjvwRqnwz8
caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkVk1reO/Cv/CC+ONZ0T+0dK1j+
x76ex+36XcfaLG+8qRk86CTA3xPt3I2BuUg4GayaQgooooAKKKKACiiigAooooAKKKKAPQPB
H7S/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAAHK+N/GV38QPE9z
q9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJyaKAPQP+GoPGf/DPH/Cqvtulf8IL
9u/tT7B/Ydh532vdn7R9p8n7R5u393v8zd5X7rPl/JXn9FFABXoHgj9pfxF8P/DFtpFhp3w/
ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAADz+igDW8b+Mrv4geJ7nV7+HSoLq72b49N
0u20y1XaioNlvbRxwpwoztQZOWOSSTk0UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFA
BRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUU
UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFA
BX9RX/Bo7/yjW0L/ALDOof8ApVLX8utf1Ff8Gjv/ACjW0L/sM6h/6VS16OA+Ct/h/wDbonJi
t6f+L9GffvxZ/wCP64+pr8M/+C9+ntdy2rD+Gz0v/wBP/jv/ABr9zPiz/wAf1x9TX4lf8Fzm
QQx7v+fPTMf+D/x1XXW/gR9f0OKl/Hl6fqj8vP26U8vwF+z4PT4dXH/qVeIaKl/b148Ffs/Y
/wCid3H/AKlXiGivIe56a2PqHT/GGo2X7YPwMt0kkEH9hfDhQAeMHQtGz/OvnP4ofspaj8fP
EfjzxqfFHhXwp4c+Hfh3wcdZvNaN62Pt2lWsMHlpa207v+8TB+UY3qemSPsDw58OV1f9pb4F
XgA3Dw/8PGz/ALuh6R/hXjAfT9Y+B37Tvgj+3/CuleJPFHh/4bro9nrWv2Wkf2h9mtreafy3
upY0OyMZPzd1HVgD1YhNU438vyMMPZ1HbzPl39nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9
T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3HwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7j
UbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59W/4Jeafa6Rofxw1DU/EXgrQoPE/wAMde8JaWms
eKdN02e81K4W2aGIQ3E6SBGAOJSoiyrAuCCB0H7P3wsj+BvwJ0fVfAfjb4VWvxj8c32oaJrH
iDVvHek2TfDLT4rg2jNZqbgtJLdJ5khvYBI6W42wJmUSvwnaeaaL/wAE0PGSy+FrXxR4j8Fe
A9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavh/wD4J9ahqHiHw94e
1X4l/Crw54z8R67c+GovDF3qN7eapZahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7GXjfxh8L
Pip4c0jWviH8H5/hV8JPGVxdT32sXui6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6sNr72Wr8O
vjL4Hl8Y/tL/ALQFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQgA
5R/+CX/izRvEOh6V4g8ZfD/wxf8Ai3xXqPg/w5FfS6jN/b13Y3q2MzxG2s5hFF9pcRqbgxMc
FtoXDHK8Zf8ABPbWPhJ4A0DXvH/jz4f+Av8AhI77VtNtbDUjql5dJcaZeNZ3iObGyuIhtlXg
iQhgwIJ5x6B+wp8f/H2oeOPCGr+LfH3w/wD+Fe+DvFcviLUrnxtd6PqGrWX7yK9v2sY7pZdU
824MY2mzT57h9wYPvcegeDPj3rX7Qvxp8M65deLvgpqfwYuviPql/P4a8aReHYtY8LaZeawL
y7Eq6lEJT58U5fdZTTjKFdytGFAB8/8Agf8A4J7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/
4SZ7e5jtWniENlIYImmlVV+1CF+pKKOa8U8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4h
kaORNyEq2HVhlSQccEivsv8AZt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W
7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8K6bPPrXjS0i8TXq2M+
oQm6aZLGALvWS9dZQFiDYYq+G4yUIPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3m
xwo0sdnbXAtUeSUKhuTEX2swBQBzreDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18PahqVxe6
ib2G4S1YSrYW9ylujTsURp3j37Gdcx4c9r/wTs0K/wDh78SPh54ztPFvwUGg3XiuybxHa+IL
zRotY8PW9pdRM0qjVESWPfFIzpLp7uSUwWWSNVXtvgCbXRv21I/iN4R8c/B/UPAGufE6S7vz
4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0YUMZ8/eIv2KPFXw5+HmueI/G9/4f8A
AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bT9jP9hzx7+3X8Q7nw/4I
s7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR2X7h1H45eC/i5Y/BOw8K+IvhVrngf
w/8AEfxM3i5PHk+jyX0Oj3XiBLqGVTrn+mP5tlIzNLATIzAh2MqYXn/+Cfn7afhLQf2ytG8C
adZ/DTw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27XcAHyp8Kv
2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY5PiL
9ijxV8Ofh5rniPxvf+H/AAHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5ud
232v9jL4s+MPDfxU8ODWvEvwf8O/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVhey
LiSYqwkL75K921H9pzwt+0tY/BO80XUfhVJ4Wg+I/ia/8b6Z48Hh6O+0vTL/AMQJeoFTUyZP
ns5nLNYswLKV3F4wAAfCn7In7J2oftk/E+LwboXijwroXiS+3f2dZ60b1P7S2RSzS+W8FtMi
+XHCxPmsmdyhdxyB5TX2t/wTfufA/g7/AIKuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0+4tr+
Gx8tL2VLibKmIHCu6blMm3OT8U0hH0D8Df8AgntrHx3+EHhzxnZ+PPh/o9h4o8Vx+CLO11I6
oLoaxKN0Vq4hspEG+Mq4kDmMBwGdWDKMnwH/AME9viv8Rf8AhOE07w3/AKV4CvrjR7u1e6j8
7U9Tt97T6dYgEi8u44Yp52jhLHyoGIJLxLJ9Qf8ABP39oPQf2ff2PPhbb6nqnw0uLzV/jzZ6
hd6drdzYXs+maU9kLV9SaJnMti8E0RdJz5bKURiTFJiTq/Gnx/8Aht8P9f8A2XtEXV/BXxHT
w38TvEn2vxHrniSa51HQ7Y+KEeHU5pba6hiLzw4uPNuY3jk8sOF2FgzGfGnw6/YL8cfFrwx8
LtW8NvpWtWvxT1248O2/2E3Fy3h+7gcbl1Ly4mFvmEtcgAu32eOSTaAtef6j8Gtc/wCE48V6
JokH/CY/8Id9rmv7/wAOpJqFj9ktpNkl8siL/wAen3WEzBV2upOM1+m3wb/a4+HX7OHiz4i6
dqHxDutQg/aW+J3iK1S88MeKrVYvAGnfaJrWHV2LHFtcTSTJIJslGtoYpQxMIjk/PT4e6Haf
Dfxx8TtEu/iz/wAIl/Zmh6rpkN/4dW5v7HxtLHIqLpqyQFP9Eu9pYSygxbUUsvIpCPKa9r/Z
7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkDx
SvsH/hO4/wDhyV/wjP8AbPw//tj/AIWP/an9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7N
QB5T+z3+wf4v/aP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0
VyxIB+z3+wf4v/aP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o
0VyxIHq3/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k348/wAj
jP2aj/hO4/8AhyV/wjP9s/D/APtj/hY/9qf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1
MZ5T+z3+wf4v/aP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0
VyxIFv4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQ
xxswx9L/AOE7j/4clf8ACM/2z8P/AO2P+Fj/ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5
HGfs1elfsm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1HRxd6HC+p2c0clkNWbzYEa1Uvu07
aDIrMf3wJoA+f9F/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijR
bp/KDSyJkqzDKYc5Xw3/AOCcvxR+I+h/FfUxpVro+m/BqDUD4iu9QuMQfarJWaexgeIOs1wF
Rj8p8sDaWdfMj3/Zeg/Hfwnr+tfBa7+HvjL4f3vh3w/8VvEWqeKZvG+radJqWm6fNrsFxa3F
q2uubyPzLICUvZ4dpQ7OTcbjXlP7KF94b1L44ftXeJNP8XeFbTwt458KeL/Dnhe48ReLrWyv
tVuLu4iktAy39wt03mRkEzzjBYNvfeGoA+f/AISfsYp8XNI8CvD8VfhVpWtfEO+bTdL0G7vN
Qn1SG4FyLZEuY7WzmW28x2RkMrqGRwwOA23W0z/gn1qE3xIi8H6j8S/hVovim98V3fg+w0ub
Ub27ury7t7pbQuyWtpMbaKSZtsZu/JZwpcLs+atX9kSXw3+z94A+MvxBv7/wrJ8WPhp/Ztr4
Gs7y/tb61kvZ7x4Lm/tYldkvZbWNBLE6mSFCyylXwjDoP2CjrujfGTwT8Rrjxz8H9Qs9c8ZW
134uPirUdI/t/SFt7yOWa5LasqzB5UlklWawkdmK/MwljCgA8p+Av7FHir48/tR3fwejv/D/
AIZ8bWk97ZNBrE8xge6s9/n24ltopl3qsUzBjiMiJsOSUDW/2e/2D/F/7R/w3k8UaVqXhXSL
C410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA+9v2YP2lPhd8M/ip4B8Q+CfH
/h/RPB+rfEDxddfEa51LXPJ1XVxNNPB4ekuheuL+5txFdxuWAeGN2lmn2yRyyL80694iPhz/
AIIzXngi41z4aSa/YfE57uXTLXVdEutSbTUhMBuYxE7TSP8AbBtEyFpWt8YY2hFAHinwT/ZC
b41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA56vRf+Ca
HjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPoH7
MPgW1+GH7NvhrxL8O/Gfw0sfi38R573TdT13X/GOm6Rc/DDTVm+zE28E0wm+0XSGR2uo0aWO
AFIow0nmN6t8J/E+g6N8Pv2ZfDVl4r+D+sv8IvGWr6b4u1C/8S2FoNIhXxPp+opqGnNdzQvO
ksFq22aBJA0Ms0eA7EAA+adF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0s
rgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3
aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFn
pF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4
kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R
+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPhXjvwRqnwz8caz4
b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV+m2v/tD+Dfjb4v8AgvrXhfxH8NL7
SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJC/nn+1l430z4mft
UfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBlyjKcMARnkA0hHoPwN/4J7ax8d/hB4c8Z
2fjz4f6PYeKPFcfgiztdSOqC6GsSjdFauIbKRBvjKuJA5jAcBnVgyjV8Ef8ABL/xZ4r8MW1/
f+Mvh/4Zup/HL/DZ9N1KXUZLq28QB2UWTtbWc0J3ABhKsjRYYZcHKj3X/gn7+0HoP7Pv7Hnw
tt9T1T4aXF5q/wAebPULvTtbubC9n0zSnshavqTRM5lsXgmiLpOfLZSiMSYpMSeg/AHxponw
x+Hkel6b8R/hpraQftLSeIp77xV4o0C+1G78Own7PJqxa7k3LcO0JdZolS4bd5kXySAljPjS
2/4JqfEu+1DwPZW8OlXt/wCM/Fd/4Lnt7GaW9bwvqdlN5c8OpNDG6Q4jElwNjSE28UkuNq15
TqPwa1z/AITjxXomiQf8Jj/wh32ua/v/AA6kmoWP2S2k2SXyyIv/AB6fdYTMFXa6k4zX6Q/s
wftcfC79nCz8Wad/wsPxBqGk/tLfEDW7WzvNL8VbbnwBpXmy2tvq9ybs+fbXszzRyGaYlmgh
jlZi8PlyfBPw90O0+G/jj4naJd/Fn/hEv7M0PVdMhv8Aw6tzf2PjaWORUXTVkgKf6Jd7Swll
Bi2opZeRSEeU17t8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8
+VYwZ2jJIY42YY+E1+hf7NnxR0HxD8B/2S7ay1r4aF/h14k1CPxd/wAJHrthpl74ahbxHpuq
JdWqXc8Tu7QW7L5kCygxyTxffJAAPn/Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLif
U72zu0srgB7G2uYo0W6fyg0siZKswymHJov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE9
5cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8ABfWvC/iP4aX2k+G/id4k
vteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8ABfWvC/iP
4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFYz4+0X/gmh
4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/w
TQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+w
df8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjf
Ekn7uWeIgOSFNf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721
ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf8Agmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O
9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT
6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+165
8R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF/wX1rwv4j+Gl9pP
hv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwt
a+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wAE0PGS
y+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/2h
/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7
lniIDkhTX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WC
UvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK
4AextrmKNFun8oNLImSrMMphyaL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tn
dpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHr
FnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/wBofwb8bfF/wX1rwv4j+Gl9pPhv
4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+
KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4W
tfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf8A2h/B
vxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeI
gOSFNf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8l
rDjfEkn7uWeIgOSFAPinwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCV
bC3uUt0adiiNO8e/YzrmPDk/Zu/4Jy/FH9p/49+Ivh3oelWthq3g6ee18Q3moXGNO0WaJ3jM
cs0QkDO8kbIixhy+1mHyI7r9K6mmgx6r4m+JPwf8e/DQfEH40eMvEPmeJdf8WWGiXPw30WTU
pYo5Le1uJVuVuLuF2ka5WMzRQZSOJXk8xj/gmD+1TF8J/wBpLwb8KfEs3wfsvBPwt1XXr6Xx
gmuS6fFe3MkNxbfbVle7itb938yOCF3t5JFtmOwIA7gA+P8ARv2X9Wm+Aj/EfXNZ8P8AhHw9
dzzWmgprD3Au/FU8KM0yWMMMMrOkbKsbTSeXAskqIZQ24L5pX6Q/Df4maJ4o+D37N3hiG6+B
UWk+CfGWuWnxC0bXr3QLuDRrKbWop9llJqskss1u1q8m2azll8wIMyu6DHwT8d/+EY/4Xh4y
/wCEI/5Ez+3b3+wP9b/yD/tD/Zv9d+9/1Wz/AFnz/wB7nNIR6t8Df+Ce2sfHf4QeHPGdn48+
H+j2HijxXH4Is7XUjqguhrEo3RWriGykQb4yriQOYwHAZ1YMoqaN/wAE/vFVnZvceN/EPgr4
VwSeJJvCljL4rv5ok1W/glaG68g20M4NvbyKElum226M6r5pO4L9K/8ABP39oPQf2ff2PPhb
b6nqnw0uLzV/jzZ6hd6drdzYXs+maU9kLV9SaJnMti8E0RdJz5bKURiTFJiTn/2m9L8N/tW/
CnwX4A8IfEzwrLf/AAk8c+KdK1PVPGviq1s5NWstS1Vri21lLuRgl7F5cRMzQ5m3sCsLK6sW
M8Vtv+CanxLvtQ8D2VvDpV7f+M/Fd/4Lnt7GaW9bwvqdlN5c8OpNDG6Q4jElwNjSE28UkuNq
15TqPwa1z/hOPFeiaJB/wmP/AAh32ua/v/DqSahY/ZLaTZJfLIi/8en3WEzBV2upOM1+gP7A
/wC0H8NP2J/hBN4LuPibquvWHx38V6jocGq6FrUWmt4J0yENYwa3NbzYfT7ueSSOXLkgW8Uc
m4tb+VJ8U/D3Q7T4b+OPidol38Wf+ES/szQ9V0yG/wDDq3N/Y+NpY5FRdNWSAp/ol3tLCWUG
Laill5FIR5TXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5
VjBnaMkhjjZhj4TX6F/s2fFHQfEPwH/ZLtrLWvhoX+HXiTUI/F3/AAkeu2GmXvhqFvEem6ol
1apdzxO7tBbsvmQLKDHJPF98kAA+f9F/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9T
vbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3l
xPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+
1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF/wAF9a8L+I/h
pfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IVjPj7Rf+CaHj
JZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BN
Dxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1
/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8S
Sfu5Z4iA5IU1/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK
+lglLyWsON8SSfu5Z4iA5IUA+PtF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72
zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPq
d7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7Xrnx
HrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G
/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r
4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/AATQ8ZLL
4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8
G/G3xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uW
eIgOSFNf/aH8G/G3xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS
8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srg
B7G2uYo0W6fyg0siZKswymHJov8AwTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2
llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesW
ekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/i
d4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o
8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha1
8UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/wDaH8G/
G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA
5IU1/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWs
ON8SSfu5Z4iA5IUA+PtF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7
G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVw
A9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7o
Vlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvte
ufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvA
ereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPhXjvwRqnwz8caz4b1u2
+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9wfCn9pjXfGf7Yd/4nHjv4aR/BDw78Tt
W8VW8/iw6RNf2Fk96L+c6ba3cb6pE9xGieWtpEubhx92QOy/H/7SnxIsfjH+0Z4/8X6ZFdQa
b4q8SajrFpFdKqzxw3F1JMiyBWZQ4VwCAxGc4J60hHE0UUUAFFFFABRRRQAUUUUAFFFFABRR
RQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUA
FFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRR
RQAV/UV/waO/8o1tC/7DOof+lUtfy61/UV/waO/8o1tC/wCwzqH/AKVS16OA+Ct/h/8AbonJ
it6f+L9GffvxZ/4/rj6mvwz/AOC9t2YLi0Ud7PS//T/47r9zPiz/AMf1x9TX4Xf8F9Bm7s/+
vPSv/T/48rqr/wC7x9f0OOj/AB5en+R+bX7dxz4F/Z9/7J1cf+pT4hoo/bt/5ET9n3/snVx/
6lPiGivJe56S2PuLwH8QYdN/aT+CFmxG7/hHvh4o/HQtI/xr47+L/wCypqHx/wDGXxC8bDxR
4U8KeGvh5oHhBtXvNaN62Pt2l2sMHlpa207v+8TB+UY3KemSPdrKZ1/bP+Bozx/Yfw3/APTD
o1cvY3Wn+IvgD+0v4N/t/wAK6X4k8WeHPhr/AGPZ61r9lpH9o/Z7W3mn8t7qWNDsjGT83dR1
YA9WIk3Tin5GFCKVR/M+Yv2e/wBhzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQ
FUYyXHlgttwFUFd7pvj3HwT/AGQm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7
a4FqjyShUNyYi+1mAKAOfVv+CXmn2ukaH8cNQ1PxF4K0KDxP8Mde8JaWmseKdN02e81K4W2a
GIQ3E6SBGAOJSoiyrAuCCB0H7P3wsj+BvwJ0fVfAfjb4VWvxj8c32oaJrHiDVvHek2TfDLT4
rg2jNZqbgtJLdJ5khvYBI6W42wJmUSvwnaeaaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabr
E95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5q+H/8Agn1qGoeIfD3h7VfiX8KvDnjPxHrt
z4ai8MXeo3t5qllqEF6bIw3IsrSeKDfLgo8kgR0YMrEBselfsZeN/GHws+KnhzSNa+Ifwfn+
FXwk8ZXF1Pfaxe6LqZtobeaOe8k0ZLiOTUilyIFMLWUQEkzqw2vvZavw6+MvgeXxj+0v+0BZ
z6VB4+03XY9X+G+meIHtzIkupapN5l4tmWInu7OErImDJFE53sr7UIAOUf8A4Jf+LNG8Q6Hp
XiDxl8P/AAxf+LfFeo+D/DkV9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j
4SeANA17x/48+H/gL/hI77VtNtbDUjql5dJcaZeNZ3iObGyuIhtlXgiQhgwIJ5x6B+wp8f8A
x9qHjjwhq/i3x98P/wDhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6WXVPNuDGNps0+e4fcGD7
3HoHgz4961+0L8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXmsC8uxKupRCU+fFOX3WU
04yhXcrRhQAfP/gf/gntrHjPSPBN/L48+H+i2vxN1260LwY98dUb/hJnt7mO1aeIQ2Uhgiaa
VVX7UIX6koo5rxTx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK
+y/2bfjDc6N+0ZayaH4x+Gmnfs7/AA9+IF/rGjr4sl0u7v8ASNKS6W7ddNgvkl1dHmgiiEYt
owzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8K6bPPrXjS0i8TXq2M+oQm6aZLGALvWS9dZQF
iDYYq+G4yUIPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhu
TEX2swBQBzreDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18PahqVxe6ib2G4S1YSrYW9ylujT
sURp3j37Gdcx4c9r/wAE7NCv/h78SPh54ztPFvwUGg3XiuybxHa+ILzRotY8PW9pdRM0qjVE
SWPfFIzpLp7uSUwWWSNVXtvgCbXRv21I/iN4R8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5
La4q3YeW2l81ZrSR5WK/vGE0YUMZ8/eIv2KPFXw5+HmueI/G9/4f8BwaXqt7oVjaaxPM934j
v7MyLdQ2KW0UwlSKSMRNcMUtxJIi+bndtP2M/wBhzx7+3X8Q7nw/4Is7VU06D7RqOq6g7w6d
pikN5YlkVHbfIylURVZmwxxsR2X7h1H45eC/i5Y/BOw8K+IvhVrngfw/8R/EzeLk8eT6PJfQ
6PdeIEuoZVOuf6Y/m2UjM0sBMjMCHYyphef/AOCfn7afhLQf2ytG8CadZ/DTw18FfAviTxJr
mi+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfB
XhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf+
H/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/wBjL4s+MPDfxU8O
DWvEvwf8O/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pz
wt+0tY/BO80XUfhVJ4Wg+I/ia/8AG+mePB4ejvtL0y/8QJeoFTUyZPns5nLNYswLKV3F4wAA
fCn7In7J2oftk/E+LwboXijwroXiS+3f2dZ60b1P7S2RSzS+W8FtMi+XHCxPmsmdyhdxyB5T
X2t/wTfufA/g7/gq7N4y0bX/AAr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6b
lMm3OT8U0hH0D8Df+Ce2sfHf4QeHPGdn48+H+j2HijxXH4Is7XUjqguhrEo3RWriGykQb4yr
iQOYwHAZ1YMo1fBv/BLfxx4x8Dzan/wk3w/0rVYfFd54E/sHUdRuIL5/EFvHLJ/Ziy+QbQyy
rF+6f7R5TtJGnmB22j3X/gn7+0HoP7Pv7Hnwtt9T1T4aXF5q/wAebPULvTtbubC9n0zSnsha
vqTRM5lsXgmiLpOfLZSiMSYpMSdX8L7rwb4T+FcNtcfFnwVcWPhL9pa5+IMuo634ws7vVdU0
LT4ZEN80cTtPc3Fw8GEVIi8zzI4Xy3D0xnx/4Y/4J2/EbxjZ/Dt9PtrW5n+IXiS88JvaRx3T
3PhXUbWULNBqqLCTauI99xs+ZxBFJIVAU15pqPwa1z/hOPFeiaJB/wAJj/wh32ua/v8Aw6km
oWP2S2k2SXyyIv8Ax6fdYTMFXa6k4zX6Q/swftxfDr4VWfiy61Dxj4gt4P2r/iBrciJpfiK1
tbn4Z6dLLLBDqNzGzEWV68lwjF9zIYLeOUOTAI5Pgn4e6HafDfxx8TtEu/iz/wAIl/Zmh6rp
kN/4dW5v7HxtLHIqLpqyQFP9Eu9pYSygxbUUsvIpCPKa92+FX7A+vfErwZ8PdavfFvgrwinx
X1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfCa/Qv9mz4o6D4h+A/7JdtZa18N
C/w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SAAfP+i/8E0PGSy+F
rXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZ
fC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/wC0
P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dy
zxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBK
XktYcb4kk/dyzxEByQrGfH2i/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aW
VwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7
tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz
0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8
SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI
/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWv
ijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb
4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQH
JCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1h
xviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jb
XMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Ae
xtrmKNFun8oNLImSrMMphz9g6/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdC
srnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa
9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8ABNDxksvha18UeI/B
XgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPE
fgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v
+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa
/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+J
JP3cs8RAckKAfH2i/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXM
UaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Aext
rmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90Kyuf
F9lrFve2sV9LBKXktYcb4kk/dyzxEByQvlPwp/aY13xn+2Hf+Jx47+GkfwQ8O/E7VvFVvP4s
OkTX9hZPei/nOm2t3G+qRPcRonlraRLm4cfdkDsoB8P+O/BGqfDPxxrPhvW7b7FrXh++n02/
t/MST7PcQyNHIm5CVbDqwypIOOCRXpX7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIb
yxLIqO2+RlKoiqzNhjjYjsv3Zpv7bkXibwL4J1r4N6n8NNG1LVfiB4o1zxPH4w8Zy+FX05rr
V0ubGe+toNRtf7QT7I8Yf5LxQsBiXoyNxP8AwTe/bstJf2rvDPhXXbD4KeFvAPhTXfEPiQ69
ZXVz4cs0nvI7mNZoYri6hhn4nitoEmtnnitTtCpsdgAfnTRX6A6D8Sdfs/2cfgtovwZ+I/w/
+E/iTw34r8RN43tbfx1Z6TpttcSajA1nLOJ7ljqtpHbKqrKv2wNFEU3SHKn4U8d3H2zxxrMv
2zStQ82+nf7Vpdn9jsbnMjHzIIPKi8qJuqR+VHtUgbExtCEe1/A3/gntrHx3+EHhzxnZ+PPh
/o9h4o8Vx+CLO11I6oLoaxKN0Vq4hspEG+Mq4kDmMBwGdWDKLWi/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfoD/gn7+0HoP7Pv7Hnwtt
9T1T4aXF5q/x5s9Qu9O1u5sL2fTNKeyFq+pNEzmWxeCaIuk58tlKIxJikxIfBxbX4a/GSzXQ
fjT4K8f/AA51v4nam/jDTfGGv6ba3eirBeNbQa9a3txcxXEt7NZTm5S+04RsJIkBZymwMZ8v
/Bn9hjxJ8XPHHxK8N3OveFfB+tfCexvdS8QW+tS3T+Tb2UjR3jxtaW9wsnkuFBAOW3r5YcBt
vlXhXwJrnjr+0v7E0bVdY/sexl1S/wDsNpJcfYbSLHmXEuwHZEm5dzthVyMkZr61/YPt/Bvh
f4qftPWmheLvD9t4T1P4f+I/CvhS98Sa7Z6TPrLXUyiwGLloCXkihLO2xVjJG/y9yg/OnwBv
rSx/4TX7X8R9V+HPneFL6KH7Da3M/wDwlEp8vbo8vkMuyK453PLmIeWNynIpCPP692+FX7A+
vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfCa/Qv9
mz4o6D4h+A/7JdtZa18NC/w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkn
i++SAAfP+i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lB
pZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6f
yg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFv
e2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i
90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQrGfH2i/wDBNDxksvha18UeI/BXgPVvG/iS
+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jf
xJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/D
S+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L
/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCg
Hx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlW
YZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSy
JkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX
0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK
58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6Bp
usT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFd
A03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/8AtD+Dfjb4v+C+teF/Efw0vtJ8
N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/8AtD+Dfjb4v+C+
teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2
i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZ
TDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSr
MMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwS
l5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxf
Zaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/wDBNDxksvha18UeI/BXgPVvG/iS+8K6Bpus
T3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA0
3WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38T
vEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfx
H8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BN
Dxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/
AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc
/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1h
xviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW9
7axX0sEpeS1hxviST93LPEQHJCgHxT8N/wDgnL8UfiPofxX1MaVa6Ppvwag1A+IrvULjEH2q
yVmnsYHiDrNcBUY/KfLA2lnXzI9/hNfe37N3ifQfGv7Rn7XfjS38V+CrDw98R/DfjLRPDc2s
eJbDSJ9Tur66jmtVFvdTRTIkiHIkdFjBDKWDKQPhTXtGm8Oa5eafcPayT2E728r2t1FdQMyM
VJjmiZo5EyOHRmVhggkEGkI92+Bv/BPbWPjv8IPDnjOz8efD/R7DxR4rj8EWdrqR1QXQ1iUb
orVxDZSIN8ZVxIHMYDgM6sGUavg3/glv448Y+B5tT/4Sb4f6VqsPiu88Cf2DqOo3EF8/iC3j
lk/sxZfINoZZVi/dP9o8p2kjTzA7bR7r/wAE/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnqF3p2t
3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJOr+F914N8J/CuG2uPiz4KuLHwl+0tc/EGX
Udb8YWd3quqaFp8MiG+aOJ2nubi4eDCKkReZ5kcL5bh6Yz4p/Z7/AGMde+P+uePdMOt+H/Be
pfDbSrnW9ctPEkV/BPb2tqxW7bZBbTMHgbaHjYLJlwFVsNt818K+BNc8df2l/YmjarrH9j2M
uqX/ANhtJLj7DaRY8y4l2A7Ik3LudsKuRkjNfZf7JXxR0H4p/tGftb+LxrXh/wANab8R/Bvi
qy0OLxJrthpE9zdapdCa0t8TzqpcqjB2VmjjIG5xuUt8vfAG+tLH/hNftfxH1X4c+d4Uvoof
sNrcz/8ACUSny9ujy+Qy7Irjnc8uYh5Y3KcikI8/r3b4VfsD698SvBnw91q98W+CvCKfFfVZ
9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8Jr9C/2bPijoPiH4D/ALJdtZa18NC/
w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SAAfP8Aov8AwTQ8ZLL4
WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMl
l8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD
+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs
8RAckKa/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXk
tYcb4kk/dyzxEByQrGfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD
2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0sr
gB7G2uYo0W6fyg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i
90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvE
l9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wAE0PGSy+FrXxR4
j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4
o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/AO0P4N+N
vi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEBy
Qpr/AO0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYc
b4kk/dyzxEByQoB8faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2N
tcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB
7G2uYo0W6fyg0siZKswymHP2Dr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3Qr
K58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c
+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf8Agmh4yWXwta+KPEfg
Q0PR/C/xw/auuP8AhYHhW90XxB4U8X+DtA1jxF420uO+8R3s1xEbaVmmnRpfPQbzdbRCzFvn
ByAxnz/bfsWy2HwJ8D/ELxJ8Rfh/4P0X4h/b/wCxodSTVp7qX7FcfZ5962ljOqYfaRluQ47g
geaWfwz1bxHrmvWnhy1uvFsHh2C4vru90eyuJoFsoGw96Q0ayR2+CrF5UTaHXcFPFfW37Kut
eLhB8J/DniXxt+z/AK/8MvCPiubTtW0HxBP4anuvDlodQjkv2WW/jDTRXCO8iTWE06uFwHVk
VR4pBfeDLH44fGf/AIRT4j6r8OfBk1jrkXhr7Da38/8AwlFobj/RNHlwyypFcRbdz3GVHljz
FJNIR4pXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBn
aMkhjjZhj4TX6F/s2fFHQfEPwH/ZLtrLWvhoX+HXiTUI/F3/AAkeu2GmXvhqFvEem6ol1apd
zxO7tBbsvmQLKDHJPF98kAA+f9F/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7
SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd
7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+1658
R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF/wAF9a8L+I/hpfaT
4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IVjPj7Rf+CaHjJZfC
1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxks
vha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/wDa
H8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5
Z4iA5IU1/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lgl
LyWsON8SSfu5Z4iA5IUA+PtF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0s
rgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3
aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFn
pF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4
kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R
+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/AATQ8ZLL4Wtf
FHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3
xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgO
SFNf/aH8G/G3xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrD
jfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2
uYo0W6fyg0siZKswymHJov8AwTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcA
PY21zFGi3T+UGlkTJVmGUw5+wdf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXu
hWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kv
teufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+C
vAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI
/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/wDaH8G/G3xf
8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IXy
n4U/tMa74z/bDv8AxOPHfw0j+CHh34nat4qt5/Fh0ia/sLJ70X85021u431SJ7iNE8tbSJc3
Dj7sgdlAPh/x34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK9Vtv
2LZbD4E+B/iF4k+Ivw/8H6L8Q/t/9jQ6kmrT3Uv2K4+zz71tLGdUw+0jLchx3BA+4dN/bci8
TeBfBOtfBvU/hpo2par8QPFGueJ4/GHjOXwq+nNdaulzYz31tBqNr/aCfZHjD/JeKFgMS9GR
vHvgh8TPEXxE8Y+AF1Xxj+zVq3wj0/xze/bfDl5ZaHZR+GNPm1RLi8+yw6zbxXS2k8crPF9n
aRwiKjeXJH5agHivgf8A4J7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/4SZ7e5jtWniENlIY
ImmlVV+1CF+pKKOa8U8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQc
cEiv0g0P4o/DnxDbfs/W3w81r4aHwf8ADr4ga9Hf/wDCR67a6Ze+GtKbxXYapZ3VqmozxXDu
1lbhfMRZWMck8TfvCwHwT+1l430z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBl
yjKcMARnkA0hHn9FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFf1Ff8Gjv/ACjW0L/sM6h/6VS1
/LrX9RX/AAaO/wDKNbQv+wzqH/pVLXo4D4K3+H/26JyYren/AIv0Z9+/Fn/j+uPqa/Dj/gvF
bSz31p5ef+PPSun/AGH/AB3X7j/Fn/j+uPqa/GH/AILO2UV3fQ+Zjiz0zGf+xg8c111VehH1
/Q4qTtWk/L/I/LH9vu2ki8Hfs/hs7h8Op8/+FT4gorov+CkNrHDpXwGVcYHw6lx/4U+v0V5M
viZ6cdkfe2gfshXep/Ev4C+KUhJjfw34Alzj/nno2lL/AOyV+fPxa/ZG1H41+LfiB4tPifwr
4T8OfDTw/wCEE1i81o3rbftml2sEHlpa207t+8jwflGNynpkj+hD9mz4dWviH9n/AOAtw8Sl
18FeEXzj+7pdj/hX4seNxp2p+Av2tPAw17wppXiPxLpnw9TR7PWtfstI/tD7NDDLP5b3UsaH
ZGMn5uMqOrAH1cyw6p0acl1S/I83L67qVpxfRv8AM+Sf2e/2HPHv7Sfwr8e+N9Cs7W08J/Dr
SrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0
bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn1b/gl5p9rpGh/HDUNT8ReCtCg8T/AAx1
7wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxz
fahomseINW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK/insHmmi/8E0PGSy+FrXx
R4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavh//AIJ9ahqH
iHw94e1X4l/Crw54z8R67c+GovDF3qN7eapZahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7GX
jfxh8LPip4c0jWviH8H5/hV8JPGVxdT32sXui6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6sNr
72Wr8OvjL4Hl8Y/tL/tAWc+lQePtN12PV/hvpniB7cyJLqWqTeZeLZliJ7uzhKyJgyRROd7K
+1CADlH/AOCX/izRvEOh6V4g8ZfD/wAMX/i3xXqPg/w5FfS6jN/b13Y3q2MzxG2s5hFF9pcR
qbgxMcFtoXDHK8Zf8E9tY+EngDQNe8f+PPh/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIb
ZV4IkIYMCCecegfsKfH/AMfah448Iav4t8ffD/8A4V74O8Vy+ItSufG13o+oatZfvIr2/axj
ull1TzbgxjabNPnuH3Bg+9x6B4M+PetftC/Gnwzrl14u+Cmp/Bi6+I+qX8/hrxpF4di1jwtp
l5rAvLsSrqUQlPnxTl91lNOMoV3K0YUAHz/4H/4J7ax4z0jwTfy+PPh/otr8TddutC8GPfHV
G/4SZ7e5jtWniENlIYImmlVV+1CF+pKKOa8U8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e
4hkaORNyEq2HVhlSQccEivsv9m34w3OjftGWsmh+Mfhpp37O/wAPfiBf6xo6+LJdLu7/AEjS
kulu3XTYL5JdXR5oIohGLaMM1wwORJ5jj5++IFnY/tbftGfGLxfpmveH/Cumzz6140tIvE16
tjPqEJummSxgC71kvXWUBYg2GKvhuMlCD4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqN
xN5scKNLHZ21wLVHklCobkxF9rMAUAc63g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oal
cXuom9huEtWEq2Fvcpbo07FEad49+xnXMeHPa/8ABOzQr/4e/Ej4eeM7Txb8FBoN14rsm8R2
viC80aLWPD1vaXUTNKo1RElj3xSM6S6e7klMFlkjVV7b4Am10b9tSP4jeEfHPwf1DwBrnxOk
u78+KtR03+39I0231TzUuS2uKt2HltpfNWa0keViv7xhNGFDGfP3iL9ijxV8Ofh5rniPxvf+
H/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bT9jP8AYc8e/t1/EO58
P+CLO1VNOg+0ajquoO8OnaYpDeWJZFR23yMpVEVWZsMcbEdl+4dR+OXgv4uWPwTsPCviL4Va
54H8P/EfxM3i5PHk+jyX0Oj3XiBLqGVTrn+mP5tlIzNLATIzAh2MqYXn/wDgn5+2n4S0H9sr
RvAmnWfw08NfBXwL4k8Sa5oviHUNUutHuRBcrdQ208puLyOK7uPJmgtk8+CWdIC2Nu13AB8q
fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGO
T4i/Yo8VfDn4ea54j8b3/h/wHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5
ud232v8AYy+LPjDw38VPDg1rxL8H/Dvwq+HHjK41Wez1jUdF1c+H4Y5o7q8h0YXDXWpMkgiV
YXsi4kmKsJC++SvdtR/ac8LftLWPwTvNF1H4VSeFoPiP4mv/ABvpnjweHo77S9Mv/ECXqBU1
MmT57OZyzWLMCyldxeMAAHwp+yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvB
bTIvlxwsT5rJncoXccgeU19rf8E37nwP4O/4KuzeMtG1/wAK+HPhP4X13WfsN5rWv2+mbNPu
La/hsfLS9lS4mypiBwrum5TJtzk/FNIR7t8K/wBhS6+J3wE0n4jT/Ej4aeFfD2reJB4SB1yf
UoXtNSKGVYpmjs5Io0MOJfNMnlKrDe6sGUdA/wDwS/8AFmjeIdD0rxB4y+H/AIYv/FvivUfB
/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGPoHwj+NT/Bf/gjxq9npWr/AA/uPEWt
+OdRkuNGvtT0+41SPR73RJNIluYrYy/aYpRLIQpjVZNmWYNbu+8/YIun8DeGPhpNafEv4f6r
4M8ReK3n8d+EvE2pafo8ng14XW3j1OymurmK5F2bSdpo7rT/AC3R4EQtIY9gYz5U1H4G+MNN
8ceK/Df/AAjmq3eteBvtba/b2MBvf7KS0k8u5llaHcqxRuMNLnYMj5sEVyleweGk8JeGvip8
W7Tw58VvEHg7wnHpWsWvh69jtLqWfxnbCYC20y5EPlGNLqIKztKgjUp80Y4A8fpCCvdvhV+w
Pr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHwmv0L
/Zs+KOg+IfgP+yXbWWtfDQv8OvEmoR+Lv+Ej12w0y98NQt4j03VEurVLueJ3doLdl8yBZQY5
J4vvkgAHz/ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5
QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFu
n8oNLImSrMMphz9g6/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZax
b3trFfSwSl5LWHG+JJP3cs8RAckKa/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9
IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKxnx9ov8AwTQ8ZLL4WtfFHiPwV4D1bxv4
kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5q+H/+CfWoah4h8PeHtV+Jfwq8
OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgysQGx9l6/wDtD+Dfjb4v+C+t
eF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckL8/eGP
ip4C034m/tSfHXS9W8P3vjbQ/Eg1H4ZQ6sU2XLahq04fUYbKYK89xbQFJow6ssLMHeMlV2gH
n/hT/gmR498WXnimBdY8Fae/h3xlc/D6yN9qbwJ4k12CKaU2Nmxj2q7rCAjXJgR2miQNvJVe
f+En7GKfFzSPArw/FX4VaVrXxDvm03S9Bu7zUJ9UhuBci2RLmO1s5ltvMdkZDK6hkcMDgNt9
g174s3Pin/gjNeaZqfiXwVqPirUfic+vXdpdajpc3iC6sHhKveSIzG8a4a9JBlINwYSQT9mr
z/8AYm17wr8IfhJ8WPijPeeH/wDhaHgODSh4CsNWmhkQ3V1dNFcX0No5BuLi0iUSR5DxxMRI
6MVQqAeKfFn4cX3wc+KniXwhqctrPqXhXVbrR7uW1Zmgkmt5nhdoyyqxQshIJUHGMgdK9A+C
f7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHP2Z/
wTm+OfgTwL4H+HOoeIvG2lara+Otd8RD4txeK/GVwv2a4uo0trE/2W91HDexXIkjM9xLbXQX
dI0ksQhJi8p/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJb
pPMkN7AJHS3G2BMyiVwDwnxF+xR4q+HPw81zxH43v/D/AIDg0vVb3QrG01ieZ7vxHf2ZkW6h
sUtophKkUkYia4YpbiSRF83O7b4/X6Q/DfxXokHwe/Zu8BQ+KPgV4j0n4a+Mtc0n4hHXrnQJ
4Fsm1qKXz7JtVCyyW81q0jrLZj5xgE70Cr0Hg/8AaV0DwF8Kfh5pf7O+sfCrTbDS/HPiWTU/
+Em8bXnheOyt21VH0u5uoPt9nNqMRsfJDGaK7OyDysbg8ZAPy+r3b4VfsD698SvBnw91q98W
+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx+lf+F1az4j+Bvwij+CPx
Q+Gnw013T/GXiW98Zrp3iSDwloyzT6nDLZXD2V40Ut5ZLahRGpgmKwx+SyblMVef/sZfGrxh
ZfFTw5c6149+D4+FXgHxlca3Pc6xBoqmxhWaO7vG0axuIBqVulwIl8mOytoh5zrhY3DlQD5K
8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivYPhV+wPr3xK8Gf
D3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDH7g/Zc/ad+FH/C
caX4rj8SaUnhD4peOfF978RdK8V+KpLX+yEv5DDpcX9ji7S2uYpopoRPM1vdpHmQtNGsGYvP
/gV440cfCL9ljRYtd+FUt18LPFep2fjJ9Z8T6XbTeHE/4SXTtSW7sXmuEE++G2ZRPa+cjxST
oCS3AB86aL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UG
lkTJVmGUw54rwH+xz4v8YeOPHGj6gdK8J2vwx+0L4u1nWbkrpegPFI8IiklhWUySyzIYoooF
keV/uKwDMPvbX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tY
r6WCUvJaw43xJJ+7lniIDkhfCfD/AMcvCHxhuv2x/BWn+ItK026+Neux674R1LWZxpml36WO
rXN+YJJ5tot5Z4XHleeEQuNrvGSoIB4/8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J
9amhnitnYfZLWdIU8+VYwZ2jJIY42YY6ui/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeX
E+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfoD9mz4o6D4h+A/wCyXbWWtfDQv8OvEmoR+Lv+
Ej12w0y98NQt4j03VEurVLueJ3doLdl8yBZQY5J4vvkgeg6/+0P4N+Nvi/4L614X8R/DS+0n
w38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wTQ8ZLL4W
tfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/wCCaHjJ
ZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/7Q
/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/d
yzxEByQpr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LB
KXktYcb4kk/dyzxEByQoB8E+A/2OfF/xD8ceOPCFkdKXx94F+0LL4Ve5LaprL20jpdxWOxWh
uJYAju0QlDuisYll2kDymvtb9l745eEPB/8AwVG+Inx9vfEWlReAfC2u+INdiV5xFqmvpfm8
htILGzfbNNK5uEZsqiRIGMrx8Z+KaQj2D4V/sa6t8QPh5pPizXPFfgr4ceHvEmqjRtBvfFd3
cWya5OCRM8AhgmYW8DbVluZAkEbSKpk3BgvV6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabr
E95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw59W+EHxE+G3xj/ZV+AuieJh4K1HTfhDqurW
PjbSfEuszaVdx6bqV/FeHU9MWG5ge8eKCCaMxI0kvmOoFtJvjauq+DlhoPwj+Mlna+Bfid8N
NZ+EGp/E7Ux4g8K65rlhYp4Ptba8a1s9U067u7tLp7g2E3nRXunssgaCNWeUptpjPh/Ufgb4
w03xx4r8N/8ACOard614G+1tr9vYwG9/spLSTy7mWVodyrFG4w0udgyPmwRXoH7Pf7B/i/8A
aP8AhvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSBU8NJ4S8
NfFT4t2nhz4reIPB3hOPStYtfD17HaXUs/jO2EwFtplyIfKMaXUQVnaVBGpT5oxwB7D/AMJ3
H/w5K/4Rn+2fh/8A2x/wsf8AtT+yftek/wBs/wBl/Z/K+0eTn7V5v2v5N+PP8jjP2akI8p/Z
7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkA/
Z7/YP8X/ALR/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJ
A9W/4TuP/hyV/wAIz/bPw/8A7Y/4WP8A2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zU
f8J3H/w5K/4Rn+2fh/8A2x/wsf8AtT+yftek/wBs/wBl/Z/K+0eTn7V5v2v5N+PP8jjP2amM
8p/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5
YkC38Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY4
2YY+l/8ACdx/8OSv+EZ/tn4f/wBsf8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9m
r0r9k34s6ba/A39le30HxL8NIx4O8ZajdePYvFuo6OLvQ4X1OzmjkshqzebAjWql92nbQZFZ
j++BNAHz/ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+
UGlkTJVmGUw5yvhv/wAE5fij8R9D+K+pjSrXR9N+DUGoHxFd6hcYg+1WSs09jA8QdZrgKjH5
T5YG0s6+ZHv+y9B+O/hPX9a+C138PfGXw/vfDvh/4reItU8UzeN9W06TUtN0+bXYLi1uLVtd
c3kfmWQEpezw7Sh2cm43GvKf2UL7w3qXxw/au8Saf4u8K2nhbxz4U8X+HPC9x4i8XWtlfarc
XdxFJaBlv7hbpvMjIJnnGCwbe+8NQB8//CT9jFPi5pHgV4fir8KtK1r4h3zabpeg3d5qE+qQ
3AuRbIlzHa2cy23mOyMhldQyOGBwG262mf8ABPrUJviRF4P1H4l/CrRfFN74ru/B9hpc2o3t
3dXl3b3S2hdktbSY20UkzbYzd+SzhS4XZ81av7Ikvhv9n7wB8ZfiDf3/AIVk+LHw0/s218DW
d5f2t9ayXs948Fzf2sSuyXstrGglidTJChZZSr4Rh0H7BR13RvjJ4J+I1x45+D+oWeueMra7
8XHxVqOkf2/pC295HLNcltWVZg8qSySrNYSOzFfmYSxhQAeU/AX9ijxV8ef2o7v4PR3/AIf8
M+NrSe9smg1ieYwPdWe/z7cS20Uy71WKZgxxGRE2HJKBvH6/WH9mD9pT4XfDP4qeAfEPgnx/
4f0Twfq3xA8XXXxGudS1zydV1cTTTweHpLoXri/ubcRXcblgHhjdpZp9skcsi+f/ALO3x7vv
g/8Asq/Bfw18Ntb+D9l4s8OeJNbj8bvrnxAbQrS1uvt8JtrqZbXUbZNVtzbhf3ipeRmOARpn
5kYA/N6vdvhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO
0ZJDHGzDH7A+GP7VzaL8DfhhH8JtV+BWk67F4y8RXvi5bjxZd+DdGsJpdTjls7gWX26xlvbL
7KUCrLBclYYFh2KyvEavgX41eG/iJ4a/Zxm0rV/gokfgnxzrUnilZdTtdCt/DFvL4p0/Vorn
SrfUJbecRNbwFIykbkQSSwsBISqgHy9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxP
qd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie
8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8S
X2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH
8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BND
xksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDnivAf
7HPi/wAYeOPHGj6gdK8J2vwx+0L4u1nWbkrpegPFI8IiklhWUySyzIYoooFkeV/uKwDMPvbX
/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJ
J+7lniIDkhfCfD/xy8IfGG6/bH8Faf4i0rTbr4167HrvhHUtZnGmaXfpY6tc35gknm2i3lnh
ceV54RC42u8ZKggHj/wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0
hTz5VjBnaMkhjjZhjq6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2
NtcxRot0/lBpZEyVZhlMOfoD9mz4o6D4h+A/7JdtZa18NC/w68SahH4u/wCEj12w0y98NQt4
j03VEurVLueJ3doLdl8yBZQY5J4vvkgeg6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+
I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9W
8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvA
ereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/wC0P4N+Nvi/4L61
4X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/wC0
P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dy
zxEByQoB8E+A/wBjnxf8Q/HHjjwhZHSl8feBftCy+FXuS2qay9tI6XcVjsVobiWAI7tEJQ7o
rGJZdpA8pr7W/Ze+OXhDwf8A8FRviJ8fb3xFpUXgHwtrviDXYlecRapr6X5vIbSCxs32zTSu
bhGbKokSBjK8fGfimkI9L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUM
KK+6OV/9agOGDhfNK/Rb/gnr8avh/wDB/wDYP8N6T4t1fwrb+JPEfxWmufD11JqdjeXXge4m
0lrO18Q3FhJKAYra5jbK3OwKGWYZIi3H7Ov7RH/CqvhB4I0/Wvi7pV74v8P/ALR62Oo6tD4p
8yS68OzCKXUJlmd1kbSrm6j8+RmAhlcB3BamM+NP2e/2X9W/aR0Px7d6LrPh+xn+Hvhu58VX
llqD3CT31lbKTMYDHC8ZdSY12yPHkyrjIDldX4V/sa6t8QPh5pPizXPFfgr4ceHvEmqjRtBv
fFd3cWya5OCRM8AhgmYW8DbVluZAkEbSKpk3Bgv0X+zRc+E9P/ao/bD/ALK1/wCH+heG9d8K
eLfDXhjztf07S7G9lvLv/QYbTzZY0aJo4Th4/wB0i7NzKGTNX4QfET4bfGP9lX4C6J4mHgrU
dN+EOq6tY+NtJ8S6zNpV3HpupX8V4dT0xYbmB7x4oIJozEjSS+Y6gW0m+NqAPKdF/wCCaHjJ
ZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPj+o/A3
xhpvjjxX4b/4RzVbvWvA32ttft7GA3v9lJaSeXcyytDuVYo3GGlzsGR82CK+4Pg5YaD8I/jJ
Z2vgX4nfDTWfhBqfxO1MeIPCuua5YWKeD7W2vGtbPVNOu7u7S6e4NhN50V7p7LIGgjVnlKba
+VPDSeEvDXxU+Ldp4c+K3iDwd4Tj0rWLXw9ex2l1LP4zthMBbaZciHyjGl1EFZ2lQRqU+aMc
AIRrfCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGO
NmGOrov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlk
TJVmGUw5+gP2bPijoPiH4D/sl21lrXw0L/DrxJqEfi7/AISPXbDTL3w1C3iPTdUS6tUu54nd
2gt2XzIFlBjkni++SB6Dr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF
9lrFve2sV9LBKXktYcb4kk/dyzxEByQrGfH2i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6Bpu
sT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA
03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N
/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+t
eF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i
/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZT
Dk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrM
Mphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXk
tYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW
97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxP
qd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie
8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8S
X2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH
8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BND
xksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/g
mh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g
6/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+
JJP3cs8RAckKa/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3tr
FfSwSl5LWHG+JJP3cs8RAckKAfH2i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd
7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n
1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9
c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+teF/Efw0v
tJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/wDBNDxk
svha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh
4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/
+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/
dyzxEByQpr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sE
peS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWV
wA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO
7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPW
LPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDf
xO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgH5k+O/BGqfDPxxrPh
vW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRWTXoH7WXjfTPiZ+1R8S/EmiXP23Rf
EHivVNSsLjy3j+0W813LJG+1wGXKMpwwBGeQDXn9IQUUUUAFFFFABRRRQAUUUUAFFFFABRRR
QAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAF
FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAV/UV/waO/8o1tC/wCwzqH/AKVS1/Lr
X9RX/Bo7/wAo1tC/7DOof+lUtejgPgrf4f8A26JyYren/i/Rn378Wf8Aj+uPqa/EP/gufrUm
lX9ts72el/8Ap/8AHf8AhX7efFn/AI/rj6mvxS/4LZaB/beq2oxnFnpf/qQeOv8AGuurf2EL
d/0OGnpWlft/kfmT/wAFBtUe78LfAGRurfDqfOf+xo8QCitL/go9oP8AZmj/AAFhx9z4dTfr
4n18/wBaK8qSfMz04tcqP6NP2Ofm/Zw+BP8A2I3hT/01WVfzvftT/snah8dvjt8YfGX/AAlH
hTwp4a+HNl4Y/ti81o3rY+22FvDB5aWttO7fvEwflGNynpkj+h/9jdv+McfgT/2I3hP/ANNV
lX4a/E250/xD4X/a+8G/2/4V0rxJ4rsvAH9j2eta/ZaR/aH2eOKafy3upY0OyMZPzcZUdWAP
v51/u1H0X5HjZT/vFX1f5nyD+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2t
ICqMZLjywW24CqCu903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ2
1wLVHklCobkxF9rMAUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0
MQhuJ0kCMAcSlRFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnx
XBtGazU3BaSW6TzJDewCR0txtgTMolf5o+gPNNF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN
1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD//AAT61DUPEPh7w9qvxL+FXhzxn4j1
258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz
/Cr4SeMri6nvtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/aA
s59Kg8fabrser/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/AMEv/FmjeIdD
0rxB4y+H/hi/8W+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJjgttC4Y5XjL/gntrH
wk8AaBr3j/x58P8AwF/wkd9q2m2thqR1S8ukuNMvGs7xHNjZXEQ2yrwRIQwYEE849A/YU+P/
AI+1Dxx4Q1fxb4++H/8Awr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1jHdLLqnm3BjG02afPcPuDB
97j0DwZ8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzWBeXYlXUohKfPinL7r
KacZQruVowoAPn/wP/wT21jxnpHgm/l8efD/AEW1+Juu3WheDHvjqjf8JM9vcx2rTxCGykME
TTSqq/ahC/UlFHNeKeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOO
CRX2X+zb8YbnRv2jLWTQ/GPw0079nf4e/EC/1jR18WS6Xd3+kaUl0t266bBfJLq6PNBFEIxb
RhmuGByJPMcfP3xAs7H9rb9oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygL
EGwxV8NxkoQfBP8AZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ
3JiL7WYAoA51vB37Bera94s03w1rPj34aeEfGGr+JLjwra+HtQ1K4vdRN7DcJasJVsLe5S3R
p2KI07x79jOuY8Oe1/4J2aFf/D34kfDzxnaeLfgoNBuvFdk3iO18QXmjRax4et7S6iZpVGqI
kse+KRnSXT3ckpgsskaqvbfAE2ujftqR/Ebwj45+D+oeANc+J0l3fnxVqOm/2/pGm2+qealy
W1xVuw8ttL5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/w/4Dg0vVb3QrG01ieZ7vxH
f2ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/iHc+H/BFnaqmnQfaNR1XUHeHTtM
UhvLEsio7b5GUqiKrM2GONiOy/cOo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR
7rxAl1DKp1z/AEx/NspGZpYCZGYEOxlTC8//AME/P20/CWg/tlaN4E06z+Gnhr4K+BfEniTX
NF8Q6hql1o9yILlbqG2nlNxeRxXdx5M0FsnnwSzpAWxt2u4APlT4VfsD698SvBnw91q98W+C
vCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswxyfEX7FHir4c/DzXPEfje/8
P+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa
14l+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWEhffJXu2o/tOeF
v2lrH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsWYFlK7i8YAAPhT
9kT9k7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W
/wCCb9z4H8Hf8FXZvGWja/4V8OfCfwvrus/YbzWtft9M2afcW1/DY+Wl7KlxNlTEDhXdNymT
bnJ+KaQj2D4V/sa6t8QPh5pPizXPFfgr4ceHvEmqjRtBvfFd3cWya5OCRM8AhgmYW8DbVluZ
AkEbSKpk3BgvV6L/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxR
ot0/lBpZEyVZhlMOermt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYNUvFuYb+B5
mCz28axssqxlp0bbiFlZWPa/sh2Wn/CbUPBtv4Z+MXw/8a/DLVfHNz/wlejeJp7Lw5J4fitp
vsttrdl9tuo7yG7kspjcxz2OyWJ4kjZpGi2hjPj/AFH4G+MNN8ceK/Df/COard614G+1tr9v
YwG9/spLSTy7mWVodyrFG4w0udgyPmwRXKV7B4aTwl4a+KnxbtPDnxW8QeDvCcelaxa+Hr2O
0upZ/GdsJgLbTLkQ+UY0uogrO0qCNSnzRjgDx+kIKKK/Rb9ln9pW48BfsdfArS/hdrHwq03x
Jpeu6vJ4y/4SbxtP4Xjsrhr6B7S5uoIr+zOoxG22Bi0V2NkHlAZDRkA+X/hV+wPr3xK8GfD3
Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDE8HfsF6tr3izTfDW
s+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M65jw5+tvAvxq8N/ET
w1+zjNpWr/BRI/BPjnWpPFKy6na6Fb+GLeXxTp+rRXOlW+oS284ia3gKRlI3IgklhYCQlV8+
+D2tRXX7eFx8V/DPjf4KXngfxX8Vp9U1RfEE+k2usaNp8WreetwqavHHPD5lvMZEexZnyuGK
yxhFYzwrwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiN
O8e/YzrmPDnx/wAd+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv
sv4Am10b9tSP4jeEfHPwf1DwBrnxOku78+KtR03+39I0231TzUuS2uKt2HltpfNWa0keViv7
xhNGFHyp+0pr2jeKf2jPH+p+HLy61Hw9qPiTUbrS7u6mnmnurV7qRoZJHuCZmdkKktKTISSW
+bNIR6X8Ef8Agnd4k+PvwP0zxvonjD4fwR61rsvhaw0nUb26tL671hbd7iOwV3t/swlmiRTE
WnEbNLGhdZDsB4H/AOCdPizxDpHgmbxB4g8K/D+/+I2u3Xh3w5pfiJNRW+1C7trmO0mVktrS
cW+25kERFw0TBlY7doDH6A/Yb1bw3on7EHw6/tvx18P/AA7J4Z+OVv8AEW/ttR8R2q30ej2N
htkkW0R2uXleWBoooViMkjPGQvlt5lWvg58UrXxV8ZLPxroPxG8FX3w5+JnxO1PX/GHg7xhq
Gm6Ld+CVa8ZYNStZLi7S4W9+xXRlS604q0cluiF5THtDGfD+o/A3xhpvjjxX4b/4RzVbvWvA
32ttft7GA3v9lJaSeXcyytDuVYo3GGlzsGR82CK5SvYPDSeEvDXxU+Ldp4c+K3iDwd4Tj0rW
LXw9ex2l1LP4zthMBbaZciHyjGl1EFZ2lQRqU+aMcAeP0hHu3wq/YH174leDPh7rV74t8FeE
U+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjq6L/AME0PGSy+FrXxR4j8FeA
9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfoD9mz4o6D4h+A/7JdtZ
a18NC/w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SB6Dr/wC0P4N+
Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEB
yQrGfFPg79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad
49+xnXMeHOT8TP2KPFXwm+Al14+1e/8AD4g0vxlceA9V0iGeZ9R0rVYElkkik/deQ6BIt2+G
aRT5iDOdwX6r1NNBj1XxN8Sfg/49+Gg+IPxo8ZeIfM8S6/4ssNEufhvosmpSxRyW9rcSrcrc
XcLtI1ysZmigykcSvJ5ja37DfjS1+D37Mdn4Eg+I/wANFfTvjy0Pi2C48UabDp2v+Fzp6WV/
KI7yRFvLKVSdoCMWwrIu9AVAPzer3b4V/sKXXxO+Amk/Eaf4kfDTwr4e1bxIPCQOuT6lC9pq
RQyrFM0dnJFGhhxL5pk8pVYb3VgyjzT47/8ACMf8Lw8Zf8IR/wAiZ/bt7/YH+t/5B/2h/s3+
u/e/6rZ/rPn/AL3Oa+y/2TvE9zp//BLs+END8V/B/TPE3jD4gagLm38VeJdLtn07Rb3Q5NKn
visswnt3R3Yr5a+cyjhJIpGWRCPH3/4Jf+LNG8Q6HpXiDxl8P/DF/wCLfFeo+D/DkV9LqM39
vXdjerYzPEbazmEUX2lxGpuDExwW2hcMfFNR+BvjDTfHHivw3/wjmq3eteBvtba/b2MBvf7K
S0k8u5llaHcqxRuMNLnYMj5sEV9l/sw+GbX4CeLPDWjaD8Zfhp45+HM3xAvbbxhpOuajpujp
oK2lx9jg1zTpbi8W4FxLZSm5iutOYMjRRoZJTHtr5p8NJ4S8NfFT4t2nhz4reIPB3hOPStYt
fD17HaXUs/jO2EwFtplyIfKMaXUQVnaVBGpT5oxwAAeP17t8Kv2B9e+JXgz4e61e+LfBXhFP
ivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+E1+hf7NnxR0HxD8B/2S7ay1r4
aF/h14k1CPxd/wAJHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJAAPn/Rf+CaHjJZ
fC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHOV4O/YL1
bXvFmm+GtZ8e/DTwj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz9ra
/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+J
JP3cs8RAckL59qaaDHqvib4k/B/x78NB8QfjR4y8Q+Z4l1/xZYaJc/DfRZNSlijkt7W4lW5W
4u4XaRrlYzNFBlI4leTzGYz5U+Jn7FHir4TfAS68favf+HxBpfjK48B6rpEM8z6jpWqwJLJJ
FJ+68h0CRbt8M0inzEGc7gvj9fpD+w340tfg9+zHZ+BIPiP8NFfTvjy0Pi2C48UabDp2v+Fz
p6WV/KI7yRFvLKVSdoCMWwrIu9AV+Cfjv/wjH/C8PGX/AAhH/Imf27e/2B/rf+Qf9of7N/rv
3v8Aqtn+s+f+9zmkI7X4V/sa6t8QPh5pPizXPFfgr4ceHvEmqjRtBvfFd3cWya5OCRM8Ahgm
YW8DbVluZAkEbSKpk3BgvbeCP+CX/izxX4Ytr+/8ZfD/AMM3U/jl/hs+m6lLqMl1beIA7KLJ
2trOaE7gAwlWRosMMuDlR0E1vpP7X37D3wM8EaH4u8FeGvEPwr1XWNO16DxXrtvoiLBql4tz
DfwPMwWe3jWNllWMtOjbcQsrKx9r/Y4+Nng39l79nPwNpFv4r+Gniixuv2h4riJ9bFm0/wDY
X2U2Y1prSVzPpro8PnJI/lvGQmS0bkSMZ806L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE
95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw58f1H4G+MNN8ceK/Df8Awjmq3eteBvtba/b2
MBvf7KS0k8u5llaHcqxRuMNLnYMj5sEV+ix+InhPUNT+BWm+H/HHw/8AFdh8M/iP4ht/Eet+
LfFWnRapZWh8W2Opw6pDLczQm5lntrYlri3WRXSa4T7zED4/8afEXQPH37VH7QHiTT/ijqvg
bRfE3/CRX2k3FjY3jf8ACXpPdmSHSpVjKNFFcowLNONi7BvXPRCPn6vdvhV+wPr3xK8GfD3W
r3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHwmv0L/Zs+KOg+Ifg
P+yXbWWtfDQv8OvEmoR+Lv8AhI9dsNMvfDULeI9N1RLq1S7nid3aC3ZfMgWUGOSeL75IAB8/
6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGU
w5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZK
swymHP2Dr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LB
KXktYcb4kk/dyzxEByQpr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF
9lrFve2sV9LBKXktYcb4kk/dyzxEByQrGfH2i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6Bpu
sT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA
03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N
/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+t
eF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i
/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZT
Dk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrM
Mphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXk
tYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW
97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxP
qd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie
8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8S
X2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCn/AA0h4Z+IXjb4Ra34F8X/
AA0udE074u+Jdc8WyeLbzSYbvSrK51+G6t57KPWCtxbI9l+8/wCJeqfvAxb9+GoA+PtF/wCC
aHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov
/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn
62Pxu8D/ABL1P4FXPg/xN8P9S0HwZ8R/EM2tXfi3xFb2WqaJp83i2x1a2vof7TniuZZZLSDD
TKJXZJbiNv3jMBb1/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3
vbWK+lglLyWsON8SSfu5Z4iA5IUA+NPGX/BPbWPhJ4A0DXvH/jz4f+Av+EjvtW021sNSOqXl
0lxpl41neI5sbK4iG2VeCJCGDAgnnB4H/wCCe2seM9I8E38vjz4f6La/E3XbrQvBj3x1Rv8A
hJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5r6K0n41ar8bf2hdG1FvHvwK8UfBA/E7V7ptH8VQaD
BqOhaVdaz9quZCmrwR3Oy5hmMitbPI3y7DskjEa9XofxR+HPiG2/Z+tvh5rXw0Pg/wCHXxA1
6O//AOEj1210y98NaU3iuw1SzurVNRniuHdrK3C+YiysY5J4m/eFgAD5U0X/AIJoeMll8LWv
ijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc5Xg79gvVte8W
ab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXMeHP2tr/wC0
P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dy
zxEByQvj3we1qK6/bwuPiv4Z8b/BS88D+K/itPqmqL4gn0m11jRtPi1bz1uFTV4454fMt5jI
j2LM+VwxWWMIoB86fEz9ijxV8JvgJdePtXv/AA+INL8ZXHgPVdIhnmfUdK1WBJZJIpP3XkOg
SLdvhmkU+YgzncF8fr9Qf2dPj34bf4b61Z+FfiD4VttB1j9o+81LWbPxN4mtbeTW/BlzaiC5
e6i1SVZbuKWKQBhIrys43YMibl/On47/APCMf8Lw8Zf8IR/yJn9u3v8AYH+t/wCQf9of7N/r
v3v+q2f6z5/73OaQjlKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAr+or/g0d/5RraF/2GdQ/wDSqWv5da/qK/4NHf8AlGtoX/YZ1D/0
qlr0cB8Fb/D/AO3ROTFb0/8AF+jPv34s/wDH9cfU1+Ln/BaLXo9E1S3LnrZ6Z/6kHjn/AAr9
o/iz/wAf1x9TX4ef8F2bCS91G0Eef+PPS+n/AGH/AB3XXVdqELd/0OGnrWlft/kfn1/wUl11
NU0z4DTr91/h1Lj8PE2vj+lFYP8AwUC0+S18J/ABG3ZHw6nz/wCFR4gorypt8zPTh8KP6Q/2
N5Mfs7fAkf8AUj+FP/TVZV/PJ+1P+yfqHx4+O3xh8Z/8JR4V8KeG/hzZeGP7YvNaN62Ptthb
wweWlrbTu/7xMH5Rjcp6ZI/cD9lz9prSPD/w8+BehSXCC5HgzwdFsLc5fSLAj/0IV+R3jvVN
N8YeBv2tvCi6/wCFNL8ReMtN+H0mjWms6/ZaR/aIghhnm8t7qWNDsj5Pzd1HVgD72czjLDUU
uy/I8fKotYirfu/zPkf9nv8AYc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVG
Mlx5YLbcBVBXe6b49x8E/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uB
ao8koVDcmIvtZgCgDn1b/gl5p9rpGh/HDUNT8ReCtCg8T/DHXvCWlprHinTdNnvNSuFtmhiE
V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/AO0P4N+Nvi/4L614X8R/DS+0
nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/AO0P4N+Nvi/4
L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8
faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZ
hlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZ
KswymHP2Dr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sE
peS1hxviST93LPEQHJCmv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZ
axb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPe
XE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf8Agmh4yWXwta+KPEfgrwHq3jfxJfeFdA03
WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E
7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/+0P4N+Nvi/wCC+teF
/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8
E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTR
f+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymH
P2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktY
cb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFv
e2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE
+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOcrwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7U
NSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDn7W/wCGkPDPxC8bfCLW/Avi/wCGlzomnfF3
xLrni2TxbeaTDd6VZXOvw3VvPZR6wVuLZHsv3n/EvVP3gYt+/DV5pa6H4b8Mwaz45+C/xA+H
7+Oviz4r162HijxT42tdO1D4daGdQlgilijvZ/trXd5AzSveFXuUh3KiCSQyOAfCnjvwRqnw
z8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV7B8Kv2B9e+JXgz4e61e+Lf
BXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+P+O/Cv8AwgvjjWdE/tHS
tY/se+nsft+l3H2ixvvKkZPOgkwN8T7dyNgblIOBmvvX9mz4o6D4h+A/7JdtZa18NC/w68Sa
hH4u/wCEj12w0y98NQt4j03VEurVLueJ3doLdl8yBZQY5J4vvkgIR8/6L/wTQ8ZLL4WtfFHi
PwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvi
jxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/ALQ/g342
+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJ
Cmv/ALQ/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hx
viST93LPEQHJCsZ8faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2N
tcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB
7G2uYo0W6fyg0siZKswymHP2Dr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3Qr
K58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c
+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/wCC
+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/+
0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP
3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot
0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo
0W6fyg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9
lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1
iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wAE0PGSy+FrXxR4j8FeA9W8
b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAe
reN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/AO0P4N+Nvi/4L614
X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/AO0P
4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyz
xEByQoB8faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/
lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W
6fyg0siZKswymHP2Dr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW
97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9Iv
dCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfFP7N3/BOX4o/tP8Ax78RfDvQ9KtbDVvB
089r4hvNQuMados0TvGY5ZohIGd5I2RFjDl9rMPkR3XJ/Z7/AGHPHv7Sfwr8e+N9Cs7W08J/
DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+Pd9rfsb/wDBQrRvF3/BQL+yLpfhpo/w
v0Xxl4r8XWvinUNQn0K5nN+bxYruVZruGC5uGS4ht0Etu80cDMFCBXceKf8ABPfR7Xw5rn7R
N3qerfDTwlB4i+H/AIl8H6XZSeNdNhgbUp2gMNtbG4vGkkt8AqlyXeJgnMzHJoA8q+FX7A+v
fErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/HfgjV
Phn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr9APgV440cfCL9ljRYtd+
FUt18LPFep2fjJ9Z8T6XbTeHE/4SXTtSW7sXmuEE++G2ZRPa+cjxSToCS3HxT+1l430z4mft
UfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBlyjKcMARnkA0hHbfCr9gfXviV4M+HutXv
i3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGOrov/BNDxksvha18UeI
/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn6A/Zs+KOg+IfgP+
yXbWWtfDQv8ADrxJqEfi7/hI9dsNMvfDULeI9N1RLq1S7nid3aC3ZfMgWUGOSeL75IHoOv8A
7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST9
3LPEQHJCsZ8faL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3
T+UGlkTJVmGUw5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uY
o0W6fyg0siZKswymHP2Dr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF
9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j
1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wTQ8ZLL4WtfFHiPwV4D1bx
v4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6
t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/ALQ/g342+L/gvrXh
fxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/ALQ/
g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LP
EQHJCgHx9ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+
UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRb
p/KDSyJkqzDKYc/YOv8A7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2Ws
W97axX0sEpeS1hxviST93LPEQHJCmv8A7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLP
SL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov8AwTQ8ZLL4WtfFHiPwV4D1bxv4
kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t4
38SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v+C+teF/Ef
w0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/+0P4N+Nv
i/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQ
oB8faL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJ
VmGUw5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0
siZKswymHP2Dr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2s
V9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90K
yufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCug
abrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t438SX3h
XQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/ALQ/g342+L/gvrXhfxH8NL7S
fDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/ALQ/g342+L/g
vrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx
p4f/AOCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBH
RgysQGxa+KH/AATl1P4G+HNN1Pxv8S/hp4Wg1nVdZ0exW4GsXL3M2lXz2V0wFtp8oVPMUFSx
BZXU4B3KvoHhj4qeAtN+Jv7Unx10vVvD97420PxINR+GUOrFNly2oatOH1GGymCvPcW0BSaM
OrLCzB3jJVdtv9lX4o+NviVB8J/+Ew+IHwU8WfD2x8VzNrmk+NH0B9Y0S3n1CO41GWV9WiW4
l+1CV5fNtZZixUgsroEAB8aa9p0Wka5eWlvf2uqQWs7wxXtqsqwXiqxAljEqJIEYDcA6K2CM
qpyBUrq/jv8A8Ix/wvDxl/whH/Imf27e/wBgf63/AJB/2h/s3+u/e/6rZ/rPn/vc5rlKQgoo
ooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKA
CiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoo
ooAKKKKACiiigAooooAK/qK/4NHf+Ua2hf8AYZ1D/wBKpa/l1r+or/g0d/5RraF/2GdQ/wDS
qWvRwHwVv8P/ALdE5MVvT/xfoz79+LP/AB/XH1Nfh3/wXe159GvLTZ/FaaX/AOn/AMd/4V+4
nxZ/4/rj6mvxE/4Ln+HP7evLX/ZtNL/TX/HX+Ndda/sI27/ocVL+NK/b/I/NH9vrWWv/AAd+
z/KTy3w6n/8AUp8QCirX/BQTQv7O8K/AGH+58Op/18UeID/WivJl8TPTjsj0G+/5P/8Agd/2
B/hn/wCmHRK87+In7J2ofHfWPG/jL/hKPCvhTw38OfDXg3+2LzWjetj7bpNpDB5aWttO7fvE
wflGNynpkj0S/wCP2/vgd/2B/hp/6YdErR0e60/xD+zx+0n4N/t/wppXiTxZ4Z+Gn9j2eta/
ZaR/aH2e0t5p/Le6ljQ7Ixk/NxlR1YA61f4aIp/xPvPmX9nv9hzx7+0n8K/HvjfQrO1tPCfw
60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3HwT/ZCb41WfhQL8Rvhp4e1bxvqp0fR
dG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59W/4Jeafa6Rofxw1DU/EXgrQoPE/wAM
de8JaWmseKdN02e81K4W2aGIQ3E6SBGAOJSoiyrAuCCB0H7P3wsj+BvwJ0fVfAfjb4VWvxj8
c32oaJrHiDVvHek2TfDLT4rg2jNZqbgtJLdJ5khvYBI6W42wJmUSvynSeaaL/wAE0PGSy+Fr
XxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavh/wD4J9ah
qHiHw94e1X4l/Crw54z8R67c+GovDF3qN7eapZahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7
GXjfxh8LPip4c0jWviH8H5/hV8JPGVxdT32sXui6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6s
Nr72Wr8OvjL4Hl8Y/tL/ALQFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUT
neyvtQgA5R/+CX/izRvEOh6V4g8ZfD/wxf8Ai3xXqPg/w5FfS6jN/b13Y3q2MzxG2s5hFF9p
cRqbgxMcFtoXDHK8Zf8ABPbWPhJ4A0DXvH/jz4f+Av8AhI77VtNtbDUjql5dJcaZeNZ3iObG
yuIhtlXgiQhgwIJ5x6B+wp8f/H2oeOPCGr+LfH3w/wD+Fe+DvFcviLUrnxtd6PqGrWX7yK9v
2sY7pZdU824MY2mzT57h9wYPvcegeDPj3rX7Qvxp8M65deLvgpqfwYuviPql/P4a8aReHYtY
8LaZeawLy7Eq6lEJT58U5fdZTTjKFdytGFAB8/8Agf8A4J7ax4z0jwTfy+PPh/otr8TddutC
8GPfHVG/4SZ7e5jtWniENlIYImmlVV+1CF+pKKOa8U8d+CNU+GfjjWfDet232LWvD99Ppt/b
+Ykn2e4hkaORNyEq2HVhlSQccEivsv8AZt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu
7/SNKS6W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8K6bPPrXjS0
i8TXq2M+oQm6aZLGALvWS9dZQFiDYYq+G4yUIPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9
d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBzreDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18
PahqVxe6ib2G4S1YSrYW9ylujTsURp3j37Gdcx4c9r/wTs0K/wDh78SPh54ztPFvwUGg3Xiu
ybxHa+ILzRotY8PW9pdRM0qjVESWPfFIzpLp7uSUwWWSNVXtvgCbXRv21I/iN4R8c/B/UPAG
ufE6S7vz4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0YUMZ8/eIv2KPFXw5+HmueI
/G9/4f8AAcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bT9jP9hzx7+3X
8Q7nw/4Is7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR2X7h1H45eC/i5Y/BOw8K+
IvhVrngfw/8AEfxM3i5PHk+jyX0Oj3XiBLqGVTrn+mP5tlIzNLATIzAh2MqYXn/+Cfn7afhL
Qf2ytG8CadZ/DTw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27X
cAHyp8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY
42YY5PiL9ijxV8Ofh5rniPxvf+H/AAHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS
3EkiL5ud232v9jL4s+MPDfxU8ODWvEvwf8O/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdak
ySCJVheyLiSYqwkL75K921H9pzwt+0tY/BO80XUfhVJ4Wg+I/ia/8b6Z48Hh6O+0vTL/AMQJ
eoFTUyZPns5nLNYswLKV3F4wAAfCn7In7J2oftk/E+LwboXijwroXiS+3f2dZ60b1P7S2RSz
S+W8FtMi+XHCxPmsmdyhdxyB5TX2t/wTfufA/g7/AIKuzeMtG1/wr4c+E/hfXdZ+w3mta/b6
Zs0+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0hBXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4
RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4TX6F/s2fFHQfEPwH/ZLtrLWvhoX+HXiT
UI/F3/CR67YaZe+GoW8R6bqiXVql3PE7u0Fuy+ZAsoMck8X3yQAD5/0X/gmh4yWXwta+KPEf
grwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFH
iPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf8A2h/Bvxt8
X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSF
Nf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjf
Ekn7uWeIgOSFYz4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Aextrm
KNFun8oNLImSrMMphyaL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2
NtcxRot0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oV
lc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7X
rnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrw
Hq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/AME0PGSy+FrXxR4j
8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/wBofwb8bfF/
wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX
/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJ
J+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKN
Fun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21z
FGi3T+UGlkTJVmGUw5+wdf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4
vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufE
esWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf8Agmh4yWXwta+KPEfgrwHq
3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHiPwV4
D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8F9a8
L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofw
b8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lni
IDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oN
LImSrMMphyaL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0
/lBpZEyVZhlMOfsHX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWL
e9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnp
F7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJf
eFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/AME0PGSy+FrXxR4j8FeA9W8b
+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/wBofwb8bfF/wX1rwv4j
+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/wBofwb8
bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniID
khQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLI
mSrMMphz4V478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfptr/
AO0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk
/dyzxEByQv55/tZeN9M+Jn7VHxL8SaJc/bdF8QeK9U1KwuPLeP7RbzXcskb7XAZcoynDAEZ5
ANIR23wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkh
jjZhjq6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBp
ZEyVZhlMOfoD9mz4o6D4h+A/7JdtZa18NC/w68SahH4u/wCEj12w0y98NQt4j03VEurVLueJ
3doLdl8yBZQY5J4vvkgeg6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrn
xfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKxnx9ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCuga
brE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t438SX3hX
QNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv8A7Q/g342+L/gvrXhfxH8NL7Sf
DfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv8A7Q/g342+L/gv
rXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9
ov8AwTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmG
Uw5NF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkq
zDKYc/YOv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl
5LWHG+JJP3cs8RAckKa/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lr
Fve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95c
T6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdY
nvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/7Q/g342+L/AIL614X8R/DS+0nw38Tv
El9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/AIL614X8
R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wT
Q8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/
4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/
YOv/ALQ/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hx
viST93LPEQHJCmv/ALQ/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97
axX0sEpeS1hxviST93LPEQHJCgHx9ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6
ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8u
J9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv8A7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2
vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv8A7Q/g342+L/gvrXhfxH8N
L7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov8AwTQ8
ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4J
oeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YO
v/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJ
P3cs8RAckKa/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9L
BKXktYcb4kk/dyzxEByQoB8aeH/+CfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWo
QXpsjDciytJ4oN8uCjySBHRgysQGxa+KH/BOXU/gb4c03U/G/wAS/hp4Wg1nVdZ0exW4GsXL
3M2lXz2V0wFtp8oVPMUFSxBZXU4B3KvoHhj4qeAtN+Jv7Unx10vVvD97420PxINR+GUOrFNl
y2oatOH1GGymCvPcW0BSaMOrLCzB3jJVdtv9lX4o+NviVB8J/wDhMPiB8FPFnw9sfFcza5pP
jR9AfWNEt59QjuNRllfVoluJftQleXzbWWYsVILK6BAAfGmvadFpGuXlpb39rqkFrO8MV7ar
KsF4qsQJYxKiSBGA3AOitgjKqcgewfs9/sH+L/2j/hvJ4o0rUvCukWFxro8LaOmsaibWTxBr
BtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB5/8d/+EY/4Xh4y/wCEI/5Ez+3b3+wP9b/yD/tD/Zv9
d+9/1Wz/AFnz/wB7nNfS3/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+0eTn
7V5v2v5N+PP8jjP2akI8p/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2Fu
QjIkrRooU3Dwxu80aK5YkA/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2F
uQjIkrRooU3Dwxu80aK5YkD1b/hO4/8AhyV/wjP9s/D/APtj/hY/9qf2T9r0n+2f7L+z+V9o
8nP2rzftfyb8ef5HGfs1H/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+0eTn
7V5v2v5N+PP8jjP2amM8p/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2Fu
QjIkrRooU3Dwxu80aK5YkA/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2F
uQjIkrRooU3Dwxu80aK5YkD1b/hO4/8AhyV/wjP9s/D/APtj/hY/9qf2T9r0n+2f7L+z+V9o
8nP2rzftfyb8ef5HGfs1H/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+0eTn
7V5v2v5N+PP8jjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5
CMiStGihTcPDG7zRorliQD9nv9g/xf8AtH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2
FuQjIkrRooU3Dwxu80aK5YkD1b/hO4/+HJX/AAjP9s/D/wDtj/hY/wDan9k/a9J/tn+y/s/l
faPJz9q837X8m/Hn+Rxn7NR/wncf/Dkr/hGf7Z+H/wDbH/Cx/wC1P7J+16T/AGz/AGX9n8r7
R5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/wC0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJ
dLYW5CMiStGihTcPDG7zRorliQPFK+wf+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/s
v7P5X2jyc/avN+1/Jvx5/kcZ+zV7B/wTI+Kvw2+Dnwr+FiX/AIp8PyaF4o1XX7T4oaf4h8WT
W8GmNPClppyR6QbmKC6t50eLzZpLW6VAZGeWJYf3QB8qfCr9gfXviV4M+HutXvi3wV4RT4r6
rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGOrov/AATQ8ZLL4WtfFHiPwV4D1bxv
4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+i/gV440cfCL9ljRYtd+FUt1
8LPFep2fjJ9Z8T6XbTeHE/4SXTtSW7sXmuEE++G2ZRPa+cjxSToCS3Hba/8AtD+Dfjb4v+C+
teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfmT
478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFelfBP8AZCb41Wfh
QL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA5+9rv9su0v9B8K
3/wT8RfCqO6vPiP4s1fxHc+JvGNz4Tji+06ys9heXVqt9ZSX8TWbR7vMhusJD5QUEPGfnT9k
WO50D9pzQviHpnir9nSLwzrPxAS61+zdtLsH8OWtvqCyGSyh1mKK5t7d4ZWeBrImQKiq+yWI
IoB5r4y/4J7ax8JPAGga94/8efD/AMBf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq8ESEM
GBBPOOU8Xfsnah4a/ZoX4r2fijwr4g8LN4rl8HlbA3qXS3aRSTq5S4toh5TwosikNuxMgZVc
OifVfgz4o3/xK+NPhnyfiB8FPFnwJsfiPqjJpPjR9GfWNE0efWBcXEsr65Et7L9qgl83zYZZ
pGKkOyypsHa/sr/F7wb4Z+Dd74f8BeNfBWieD5/2h7m4vtI1zxBZ2Kaj4Hms1tphNbalIr3N
u8Dqux1eQsoIHmR5UA+P/hV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7J
azpCnnyrGDO0ZJDHGzDHx/x34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWG
VJBxwSK/SDwJ8WfAtqvwJt/hV4l+Gkfgrwd8Tteutei8W6jpgu9D0p9bt5rKSyGtt9qgRrBV
fdY7WMiszf6QGNfn/wDtKa9o3in9ozx/qfhy8utR8Paj4k1G60u7upp5p7q1e6kaGSR7gmZn
ZCpLSkyEklvmzSEdr+z3+wf4v/aP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJ
WjRQpuHhjd5o0VyxIB+z3+wf4v8A2j/hvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3IR
kSVo0UKbh4Y3eaNFcsSB6t/wncf/AA5K/wCEZ/tn4f8A9sf8LH/tT+yftek/2z/Zf2fyvtHk
5+1eb9r+Tfjz/I4z9mo/4TuP/hyV/wAIz/bPw/8A7Y/4WP8A2p/ZP2vSf7Z/sv7P5X2jyc/a
vN+1/Jvx5/kcZ+zUxnlP7Pf7B/i/9o/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyE
ZElaNFCm4eGN3mjRXLEgH7Pf7B/i/wDaP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsL
chGRJWjRQpuHhjd5o0VyxIHq3/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+
0eTn7V5v2v5N+PP8jjP2aj/hO4/+HJX/AAjP9s/D/wDtj/hY/wDan9k/a9J/tn+y/s/lfaPJ
z9q837X8m/Hn+Rxn7NQB5p8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYf
ZLWdIU8+VYwZ2jJIY42YY6ui/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aW
VwA9jbXMUaLdP5QaWRMlWYZTDn6A/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2LxbqOji70OF
9Ts5o5LIas3mwI1qpfdp20GRWY/vgTXbaD8d/Cev618Frv4e+Mvh/e+HfD/xW8Rap4pm8b6t
p0mpabp82uwXFrcWra65vI/MsgJS9nh2lDs5NxuNAHxp8N/+CcvxR+I+h/FfUxpVro+m/BqD
UD4iu9QuMQfarJWaexgeIOs1wFRj8p8sDaWdfMj31PhJ+xinxc0jwK8PxV+FWla18Q75tN0v
Qbu81CfVIbgXItkS5jtbOZbbzHZGQyuoZHDA4DbfoD9lC+8N6l8cP2rvEmn+LvCtp4W8c+FP
F/hzwvceIvF1rZX2q3F3cRSWgZb+4W6bzIyCZ5xgsG3vvDV5V+yJL4b/AGfvAHxl+IN/f+FZ
Pix8NP7NtfA1neX9rfWsl7PePBc39rErsl7LaxoJYnUyQoWWUq+EYAA//BL/AMWaN4h0PSvE
HjL4f+GL/wAW+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJjgttC4Y1NF/4JoeMll8
LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/RfwK+N2n
/Ev4RfssXM3ib4f6lqvgzxXqc3jm78W+IrKy1TRPO8S6dq326H7bPFLLLJFA4aaIS7kluIz8
7EDtv+GkPDPxC8bfCLW/Avi/4aXOiad8XfEuueLZPFt5pMN3pVlc6/DdW89lHrBW4tkey/ef
8S9U/eBi378NQB8aeMv+Ce2sfCTwBoGveP8Ax58P/AX/AAkd9q2m2thqR1S8ukuNMvGs7xHN
jZXEQ2yrwRIQwYEE84yfgz+wX44+Pngf4leLPDT6Vc+DPhhY3t9feIJjcQWOp/Zo2lMNpviE
ryvEvmBXjTYrJ5piLoG+lvBnxRv/AIlfGnwz5PxA+Cniz4E2PxH1Rk0nxo+jPrGiaPPrAuLi
WV9ciW9l+1QS+b5sMs0jFSHZZU2Dn/2MtP8ABukfFT9qLUPDniLwVoXgTxP4N8V+EvBiax4p
s9NnvGuJoWsIhDezpchGhC4llULlWDOGDAAHhPwk/YxT4uaR4FeH4q/CrSta+Id82m6XoN3e
ahPqkNwLkWyJcx2tnMtt5jsjIZXUMjhgcBtvQfFD/gnLqfwN8Oabqfjf4l/DTwtBrOq6zo9i
twNYuXuZtKvnsrpgLbT5QqeYoKliCyupwDuVbf7Ikvhv9n7wB8ZfiDf3/hWT4sfDT+zbXwNZ
3l/a31rJez3jwXN/axK7Jey2saCWJ1MkKFllKvhGHoH7KvxR8bfEqD4T/wDCYfED4KeLPh7Y
+K5m1zSfGj6A+saJbz6hHcajLK+rRLcS/ahK8vm2ssxYqQWV0CAA8U/ZR/YL8cftp/F/VvCn
gB9K1K10TzHvfEUxuLfR4IgXEUjO0XnDzin7uMxeaRklFCSFLfwq/YH174leDPh7rV74t8Fe
EU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj9V/8E/P20/CWg/tlaN4E06z
+Gnhr4K+BfEniTXNF8Q6hql1o9yILlbqG2nlNxeRxXdx5M0FsnnwSzpAWxt2u4PhP8SNB1b4
ffsy2FlffB+wf4beMtXg8XWt/wCKLCAeEYT4n0/U0m06S7u99whgtmjW4ge5DwvMm9nYmgD8
9PHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr0D9rLxvpnxM
/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBrz+kIKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACv6iv+DR3/AJRraF/2GdQ/9Kpa/l1r+or/AINHf+Ua2hf9hnUP/SqWvRwHwVv8P/t0
TkxW9P8Axfoz79+LP/H9cfU1+Kf/AAW212PR7u33/wAVnpn/AKf/ABz/AIV+1nxZ/wCP64+p
r8NP+C9NjLe3tkI8/wDHppfT/sP+O666v8CPr+hxU/40r9v8j89f+CiGrpe+HvgHKp4b4dTY
/wDCn8QCisn9vbT5LbwX+z+jZ3D4dT5/8KnxBRXky+JnpR+FHpt/z+358D/+wP8ADT/0w6JX
nXxC/ZO1D476v428Zf8ACUeFfCnhv4c+GfBv9sXmtG9bH23SbSGDy0tbad2/eJg/KMblPTJH
o1/Gf+G/PgccHH9j/DT/ANMGiVoaPc6f4g/Z4/aT8Gf2/wCFdK8SeK/DPw0/sez1rX7LSP7Q
+z2lvNP5b3UsaHZGMn5u6jqwB1q/w18iKX8R/M+Zf2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrn
VNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL
67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn1b/AIJeafa6Rofxw1DU/EXgrQoPE/wx17wl
paax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfah
omseINW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK/KdR5pov/BNDxksvha18UeI/
BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDmr4f/AOCfWoah4h8P
eHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgysQGx6V+xl438Y
fCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7oupm2ht5o57yTRkuI5NSKXIgUwtZRASTOrDa+9lq
/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQg
A5R/+CX/AIs0bxDoeleIPGXw/wDDF/4t8V6j4P8ADkV9LqM39vXdjerYzPEbazmEUX2lxGpu
DExwW2hcMcrxl/wT21j4SeANA17x/wCPPh/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZ
V4IkIYMCCecegfsKfH/x9qHjjwhq/i3x98P/APhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6W
XVPNuDGNps0+e4fcGD73HoHgz4961+0L8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXm
sC8uxKupRCU+fFOX3WU04yhXcrRhQAfP/gf/AIJ7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG
/wCEme3uY7Vp4hDZSGCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9n
uIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6
W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16t
jPqEJummSxgC71kvXWUBYg2GKvhuMlCD4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNx
N5scKNLHZ21wLVHklCobkxF9rMAUAc63g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalc
Xuom9huEtWEq2Fvcpbo07FEad49+xnXMeHPa/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEd
r4gvNGi1jw9b2l1EzSqNURJY98UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58Tp
Lu/PirUdN/t/SNNt9U81Lktrirdh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/
AIf8BwaXqt7oVjaaxPM934jv7MyLdQ2KW0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv7dfxDuf
D/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FW
ueB/D/xH8TN4uTx5Po8l9Do914gS6hlU65/pj+bZSMzSwEyMwIdjKmF5/wD4J+ftp+EtB/bK
0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwAfK
nwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj
k+Iv2KPFXw5+HmueI/G9/wCH/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSI
vm53bfa/2Mviz4w8N/FTw4Na8S/B/wAO/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySC
JVheyLiSYqwkL75K921H9pzwt+0tY/BO80XUfhVJ4Wg+I/ia/wDG+mePB4ejvtL0y/8AECXq
BU1MmT57OZyzWLMCyldxeMAAHwp+yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0v
lvBbTIvlxwsT5rJncoXccgeU19rf8E37nwP4O/4KuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0
+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0hHsHwr/Y11b4gfDzSfFmueK/BXw48PeJNVGjaDe+
K7u4tk1ycEiZ4BDBMwt4G2rLcyBII2kVTJuDBfNfHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJ
J9nuIZGjkTchKth1YZUkHHBIr6gmt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYN
UvFuYb+B5mCz28axssqxlp0bbiFlZWP0B+wH4/8AhF+z7ofgXTbP4geH/EngnW/EniXTPiDJ
4h8RXOmQCGRVstKlj0KS6jimt7uFoWlea2uhEHk8yWIQExAHwp+z3+y/q37SOh+PbvRdZ8P2
M/w98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mVcZAcqfs9/sv6t+0jofj270XWfD9jP8A
D3w3c+Kryy1B7hJ76ytlJmMBjheMupMa7ZHjyZVxkByvtf8AwTU0+18Ga5+0Lp+teIvBWjT3
/wAMdc8JWb6h4p021gv9SuWjWGKCaScRzIxgk/exs0QG0lwHQtU/4JbXOn6f/wAL4/tLX/Cu
hf278KtZ8Nad/bWv2Wl/bdQvPK+zwx/aZY927yXy4+RPl3su5cgHypX0X+yZ/wAE1PFX7X3w
r1TxfpHjP4aeGdN0ie9juIvEmrzWU6w2cNtNc3WFgdfs8S3kAeQsAhcbsZUn50r6r/4JbXOn
6f8A8L4/tLX/AAroX9u/CrWfDWnf21r9lpf23ULzyvs8Mf2mWPdu8l8uPkT5d7LuXIB86fFn
4cX3wc+KniXwhqctrPqXhXVbrR7uW1Zmgkmt5nhdoyyqxQshIJUHGMgdK5+vuv4BftS6/wDs
2/8ABLnQ9a8N+MPCq+OPD/xH+2WulXOvWcmqJ4cYW8k9otv5wu4rSfUrWJpoIfLaVAzsDEzO
fS/2A/j/APDbwtofgXWL/V/BWlaF8QfEniV/ih4fvfEk2n6V4ea7VbfTra00Q3UcE9lIksSv
JJbXaxJuLyxLBmIA/MmivvbwJ448S+HP2VfgT4a+EPxW8FfDzxZ4R8Sa9H48dvHGnaPaNdNf
25tLq6V5gmqW4t0GJIkuo2jQxjdjZXbfBT9p248Hfs4/COw+F3iT4KDxJYeK9en8ZXepeKp/
BempcPqMUlpevYxXenG8tHtimFa1nCRQiERIVaEgH50+FfAmueOv7S/sTRtV1j+x7GXVL/7D
aSXH2G0ix5lxLsB2RJuXc7YVcjJGaya9g8NeJ9O1X4qfFvUH+Ilr8NYNU0rWJLRPCemXyaV4
laSYMmjQwr5ckFlOD8v2hdqJGgkTPA8foAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/4
NHf+Ua2hf9hnUP8A0qlr+XWv6iv+DR3/AJRraF/2GdQ/9Kpa9HAfBW/w/wDt0TkxW9P/ABfo
z79+LP8Ax/XH1Nfi5/wWjtI7rU7XzMf8emmdf+w/45r9o/iz/wAf1x9TX4b/APBebxBJod3Z
FDjNppf/AKf/AB3/AIV2VHahD1/Q4aavWkvL/I+Av+CkttHFpvwHVcbR8Opcf+FNr9Fc5+3p
rz6l4L/Z/mc/M/w6nzn28U+IB/SivKnL3melBe6j7X/4Y6v9R+PfwF8WRwsYZNA+HkpIHaPR
dIQ/+gV8ZfFD9kbUfjZ4k8eeLT4n8K+E/Dnw08O+D01m71o3rbftulWsEHlpa207t+8jwflG
Nynpkj94v2dfBVl4k+EnwKkliQyp4O8GuDjnI0mwxX5EeLxp2p/Dv9rHwMNe8KaV4j8S6X8P
E0az1rX7LSP7Q+zQwyz+W91LGh2RjJ+bjKjqwB9DMKCp0acl1S/I4cBiPaVpx7NnyX+z3+w5
49/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqCu903x7j4J/shN8a
rPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc+rf8EvNPtd
I0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSlRFlWBcEEDoP2fvhZH
8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6TzJDewCR0txtgTMol
fyD1jzTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0s
iZKswymHNXw//wAE+tQ1DxD4e8Par8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQ
b5cFHkkCOjBlYgNj0r9jLxv4w+FnxU8OaRrXxD+D8/wq+EnjK4up77WL3RdTNtDbzRz3kmjJ
cRyakUuRAphayiAkmdWG197LV+HXxl8Dy+Mf2l/2gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvM
vFsyxE93ZwlZEwZIonO9lfahAByj/wDBL/xZo3iHQ9K8QeMvh/4Yv/FvivUfB/hyK+l1Gb+3
ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/4J7ax8JPAGga94/8efD/AMBf8JHfatptrYak
dUvLpLjTLxrO8RzY2VxENsq8ESEMGBBPOPQP2FPj/wCPtQ8ceENX8W+Pvh//AMK98HeK5fEW
pXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gx
dfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laMKAD5/8D/8E9tY8Z6R4Jv5
fHnw/wBFtfibrt1oXgx746o3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4
b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+
HvxAv9Y0dfFkul3d/pGlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/T
Ne8P+FdNnn1rxpaReJr1bGfUITdNMljAF3rJeusoCxBsMVfDcZKEHwT/AGQm+NVn4UC/Eb4a
eHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49
+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw
88Z2ni34KDQbrxXZN4jtfEF5o0WseHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37a
kfxG8I+Ofg/qHgDXPidJd358Vajpv9v6RptvqnmpcltcVbsPLbS+as1pI8rFf3jCaMKGM+fv
EX7FHir4c/DzXPEfje/8P+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNz
u2n7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPx
y8F/Fyx+Cdh4V8RfCrXPA/h/4j+Jm8XJ48n0eS+h0e68QJdQyqdc/wBMfzbKRmaWAmRmBDsZ
UwvP/wDBPz9tPwloP7ZWjeBNOs/hp4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceT
NBbJ58Es6QFsbdruAD5U+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPsl
rOkKefKsYM7RkkMcbMMcnxF+xR4q+HPw81zxH43v/D/gODS9VvdCsbTWJ5nu/Ed/ZmRbqGxS
2imEqRSRiJrhiluJJEXzc7tvtf7GXxZ8YeG/ip4cGteJfg/4d+FXw48ZXGqz2esajournw/D
HNHdXkOjC4a61JkkESrC9kXEkxVhIX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/430zx4
PD0d9pemX/iBL1AqamTJ89nM5ZrFmBZSu4vGAAD4U/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+z
rPWjep/aWyKWaXy3gtpkXy44WJ81kzuULuOQPKa+1v8Agm/c+B/B3/BV2bxlo2v+FfDnwn8L
T+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqCu903x7j4J/shN8arPwo
F+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc+1/wDBPe0udO1z
9onU/F/jHwVBqXir4f8AiXwpFd6x440tZ9Y1q4aBhh5bndMkrbyLoEwudx8081b/AGfvhZH8
DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6TzJDewCR0txtgTMolc
A800X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSr
MMphz4V478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfoX8J/E+
g6N8Pv2ZfDVl4r+D+sv8IvGWr6b4u1C/8S2FoNIhXxPp+opqGnNdzQvOksFq22aBJA0Ms0eA
7ED4e/ay8b6Z8TP2qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAaQjz+i
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAr+or
/g0d/wCUa2hf9hnUP/SqWv5da/qK/wCDR3/lGtoX/YZ1D/0qlr0cB8Fb/D/7dE5MVvT/AMX6
M+/fiz/x/XH1Nfg3/wAHDVjLePpvlKWxbaXnA/6j/jyv3k+LP/H9cfU1+aX7b37KEH7TGqSR
zIG+yWViwyP7viDxl/8AHK7/AGUqlKEI73/Q872qpVJTltb9Ufhd+2lZyQ/DP9nlXUhh8O7n
IP8A2NXiGivpj/gsf+zbD8JPHvwc8PIuF0/4d8D03+INbk/9noryqtGUZuL6M9KnWjKCkup9
AeAL+zg+OHwMVyvmnw38Pxz/ANgTScV8IfGj9lC/+PPjr4jeMx4o8KeFPDXw60Lwj/a95rRv
Wx9u0y1hg8tLW2ndv3iYPyjG5T0yR9OaPr0y/tX/AALhDHaNB+HQx9dD0f8AxrzKO70/xH8C
/wBpzwd/b/hXS/Evizw98NjpFnrWv2Wkf2j9ntbeafy3upY0OyMZPzd1HVgD0Yt3pR+X5GWF
VqsvmfL37Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK7
3TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2sw
BQBz6t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWV
YFwQQOg/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJbpPMk
N7AJHS3G2BMyiV/NPQPNNF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAH
sba5ijRbp/KDSyJkqzDKYc1fD/8AwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvUb281Sy1
CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/AAq+EnjK4up77WL3
RdTNtDbzRz3kmjJcRyakUuRAphayiAkmdWG197LV+HXxl8Dy+Mf2l/2gLOfSoPH2m67Hq/w3
0zxA9uZEl1LVJvMvFsyxE93ZwlZEwZIonO9lfahAByj/APBL/wAWaN4h0PSvEHjL4f8Ahi/8
W+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJjgttC4Y5XjL/gntrHwk8AaBr3j/wAe
fD/wF/wkd9q2m2thqR1S8ukuNMvGs7xHNjZXEQ2yrwRIQwYEE849A/YU+P8A4+1Dxx4Q1fxb
4++H/wDwr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1jHdLLqnm3BjG02afPcPuDB97j0DwZ8e9a/a
F+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzWBeXYlXUohKfPinL7rKacZQruVowoAP
n/wP/wAE9tY8Z6R4Jv5fHnw/0W1+Juu3WheDHvjqjf8ACTPb3Mdq08QhspDBE00qqv2oQv1J
RRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG5
0b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/pGlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHH
z98QLOx/a2/aM+MXi/TNe8P+FdNnn1rxpaReJr1bGfUITdNMljAF3rJeusoCxBsMVfDcZKEH
wT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA51v
B37Bera94s03w1rPj34aeEfGGr+JLjwra+HtQ1K4vdRN7DcJasJVsLe5S3Rp2KI07x79jOuY
8Oe1/wCCdmhX/wAPfiR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJdP
dySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/b+kabb6p5qXJbXFW7Dy20v
mrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH43v/AA/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsU
tophKkUkYia4YpbiSRF83O7afsZ/sOePf26/iHc+H/BFnaqmnQfaNR1XUHeHTtMUhvLEsio7
b5GUqiKrM2GONiOy/cOo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DKp
1z/TH82ykZmlgJkZgQ7GVMLz/wDwT8/bT8JaD+2Vo3gTTrP4aeGvgr4F8SeJNc0XxDqGqXWj
3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0f
wjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvhz8PNc8R+N7/wAP+A4NL1W9
0KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa14l+D/h34
VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWEhffJXu2o/tOeFv2lrH4J3m
i6j8KpPC0HxH8TX/AI30zx4PD0d9pemX/iBL1AqamTJ89nM5ZrFmBZSu4vGAAD4U/ZE/ZO1D
9sn4nxeDdC8UeFdC8SX27+zrPWjep/aWyKWaXy3gtpkXy44WJ81kzuULuOQPKa+1v+Cb9z4H
8Hf8FXZvGWja/wCFfDnwn8L67rP2G81rX7fTNmn3Ftfw2PlpeypcTZUxA4V3Tcpk25yfimkI
K9W+An7J2ofHf4YePPGX/CUeFfCnhv4c/wBn/wBsXmtG9bH22V4YPLS1tp3b94mD8oxuU9Mk
eU19V/sU3On+If2Ev2mfBv8Ab/hXSvEniv8A4Rb+x7PWtfstI/tD7PqE00/lvdSxodkYyfm4
yo6sAQD50+Ffwr8RfG/4h6T4T8J6Tda54h1ycW9lZW4G+VsEkkkhVRVDMzsQqKrMxCqSPVvg
x/wT28a/GzwnqGr2Wq+CtMgh8SN4O0wX2tps8R60LeW4FjZzwiSBndIwEklljhlaaJUkYtxq
/sU/FTw8vw88efC2/wBWtfhxrfxMgjsrLx7khI1BydI1ByGaDTLlgvmTW+xlYKZvPgBRPQNe
8RHw5/wRmvPBFxrnw0k1+w+Jz3cumWuq6Jdak2mpCYDcxiJ2mkf7YNomQtK1vjDG0IoA8f8A
2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA
P2e/2D/F/wC0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorl
iQPVv+E7j/4clf8ACM/2z8P/AO2P+Fj/ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs
1H/Cdx/8OSv+EZ/tn4f/ANsf8LH/ALU/sn7XpP8AbP8AZf2fyvtHk5+1eb9r+Tfjz/I4z9mp
jPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGi
uWJAP2e/2D/F/wC0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7z
RorliQPVv+E7j/4clf8ACM/2z8P/AO2P+Fj/ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5
HGfs1H/Cdx/8OSv+EZ/tn4f/ANsf8LH/ALU/sn7XpP8AbP8AZf2fyvtHk5+1eb9r+Tfjz/I4
z9moA8p/Z7/YP8X/ALR/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8M
bvNGiuWJAP2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aK
FNw8MbvNGiuWJA9W/wCE7j/4clf8Iz/bPw//ALY/4WP/AGp/ZP2vSf7Z/sv7P5X2jyc/avN+
1/Jvx5/kcZ+zUf8ACdx/8OSv+EZ/tn4f/wBsf8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tf
jz/I4z9moA8p/Z7/AGD/ABf+0f8ADeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkr
RooU3Dwxu80aK5YkC38Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWd
IU8+VYwZ2jJIY42YY+l/8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R5Oft
Xm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5o5LIas
3mwI1qpfdp20GRWY/vgTQB8/6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2l
lcAPY21zFGi3T+UGlkTJVmGUw5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72
zu0srgB7G2uYo0W6fyg0siZKswymHP1sfjd4H+Jep/Aq58H+Jvh/qWg+DPiP4hm1q78W+Ire
y1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt+8ZgLev/tD+Dfjb4v+C+teF/Efw0vtJ8N/
E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXx
R4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf8Agmh4yWXw
ta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N
+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8
RAckKf8ADSHhn4heNvhFrfgXxf8ADS50TTvi74l1zxbJ4tvNJhu9KsrnX4bq3nso9YK3Fsj2
X7z/AIl6p+8DFv34agD408P/APBPrUNQ8Q+HvD2q/Ev4VeHPGfiPXbnw1F4Yu9RvbzVLLUIL
02RhuRZWk8UG+XBR5JAjowZWIDYyfhH+wx4k+MP7X+r/AATs9e8K6b4v0q+1HTUmvpboWN/c
WLSCZIpI7d2GUilkUyIgKxkEhiqn1b4deO/h54K8Y/tL/GTQ9Z0rUfF/hTXY7n4XR+ILtrqS
7+3apMjaktvdnz7u7trfZMjTb/LdvMlRmClfdf8AgnV+0p4V+Geh/A/xD/wn/h/RH1bxJ4hu
vjLc6lrkMOq6vfzLJBo8l0J3F1c24+1hy0Qe3jdpZpdrxySKAfH/AMKv2B9e+JXgz4e61e+L
fBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY8T4l/Zb8d+EvDHjDWLzQ
v9A+H+unw54ka3vbe5k0W93tGBPHHIzpE0iNGs5XyXdSiuW4r7W/Zr+IFl4S+FP7Mej6T4o+
FXnfD/xzqjfEFfEmt6FcyaKn9q2kiy6dJqMjFImtkaQS6U2x3UuGMuTXlXwd+JHhD4S/tL/F
r41/8Jp9q8Bxa7q2naZ4YF+LzWPiTb3csrJZXcN4JJRp7xFHubq7jY5ACb7nBjAPFP2RP2Tt
Q/bJ+J8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilml8t4LaZF8uOFifNZM7lC7jkDymvrb/gkNrW
k6L/AMFCdA8e6rqHgrwP4T0Ce/mvBqHiG30+CwW6sbyKGKBby48+dFdlTKmVlBUyNzuPyTSE
el6j+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwvmlfo
D+wt8d9P+Bn7GHwqtH8ZeFdE1PVf2gNN1S+hbVrI6hZaIbZLe5uJF3maziYwyRSO3lloZHVi
YZyJPYP2evir8Gvg58VIE0jxT4Kk+H3ij4geM7Tx1p974sa30rTLaeZrTSEtNIS5itLqynge
DfMbW6jRCzNLEkP7pjPyer0v9kr9l/Vv2xfjXY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvEGhhl
Kv5SyOC4VcRsN24qrfW37HXx7t/gx+zR8LtHv/iDpWj+JPC/7QFtpt2sPiaEyWfhqWKCS/RZ
I5SDpUtzEJJCrG2kdA5LEA17X+z98d/hZ8Cvjh4OvfA3jL4f+E/BkvxH8Zf8LEWz1a0s/tqt
cXFt4dxEXEs+npFdRlPsytaRZeV9hjeRQD83v2e/2X9W/aR0Px7d6LrPh+xn+Hvhu58VXllq
D3CT31lbKTMYDHC8ZdSY12yPHkyrjIDlfNK+tv8Agmpp9r4M1z9oXT9a8ReCtGnv/hjrnhKz
fUPFOm2sF/qVy0awxQTSTiOZGMEn72NmiA2kuA6FvYP2Wf2lbjwF+x18CtL+F2sfCrTfEml6
7q8njL/hJvG0/heOyuGvoHtLm6giv7M6jEbbYGLRXY2QeUBkNGUI+KfBH7S/iL4f+GLbSLDT
vh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgYUYAABoXh27/aT8T+LdXv8AXfh/4VutI0Of
XHjuo7bQLXUvsyRoLOyt7aFYWu5ARsiVF8wq7E5JJ5/4s6zF4j+KniXULdPD8cF/qt1cRJoN
rLa6UqvM7AWkMqrJHb4P7tHVWVNoIBBFfS3/AASw8Tro2h/tAaZfeK/D/h7TfFPwx1TRLW01
jxLaaTBqmqzqFs1CXE0ayOF+0gSYKxCVgzJ5o3AHyTRX3X8Av2pdf/Zt/wCCXOh614b8YeFV
8ceH/iP9stdKudes5NUTw4wt5J7RbfzhdxWk+pWsTTQQ+W0qBnYGJmc+7f8ABO34maT8dNc/
ZytPDl14K0aBPEnjLWfiD4O0y9t9Lga9kY3mlONMkkWS8S3CW7QSRpN9nFsnzoYTtYz8nq92
+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMe
g/Yt/bf0D4Cf8LZuPG/hX/hMNa+IfhTWNK/tiae8nvtQur3yT5F432yJfsjujvJLGv2rc/yy
YOB9AfAr4w+G/HPwi/ZYOlXfwq0yPwJ4r1NvFNlrOv2unTeDreTxLp2qxS2K6hcrPJ/o9uYx
KhnJieeNmMhOAD8//HfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHB
Ir0r9nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj
3fa3wm/aY8Bahrnj/VPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn9
9H9o8hQhnHin/BPe0udO1z9onU/F/jHwVBqXir4f+JfCkV3rHjjS1n1jWrhoGGHlud0yStvI
ugTC53HzTzQBlfs5/B74g3Xwk+GbLrfwK8MWPxM1W60fwdD4q8D2Wr6jrk0d0kMjG4XSbt1Q
XE4iU3MqEbeAI1U186fHKPX7P4v+I7PxVYaVpXiTSr6TTdTs9N0+zsbW2uLc+Q6JFZoluMNG
QTGuGOWyxYsfuv8AZr+IFl4S+FP7Mej6T4o+FXnfD/xzqjfEFfEmt6FcyaKn9q2kiy6dJqMj
FImtkaQS6U2x3UuGMuTXimrfG74YfAb4keOvif4O1P8A4T34ia34r1WXwVHqNtd3Fr4PtPtU
jQ6xdveruvdQdGVrdGMixHM0zNKFiUA4r4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22sPfy
T61NDPFbOw+yWs6Qp58qxgztGSQxxswx5T4efFD4g/sJ/HvVLjQ5rXw5478MT3eiXMs1jZak
+nzK5hnWMypLGr5Vk8yPkqzqGKuwP2X8Cvjdp/xL+EX7LFzN4m+H+par4M8V6nN45u/FviKy
stU0TzvEunat9uh+2zxSyyyRQOGmiEu5JbiM/OxA6D4TftMeAtQ1zx/qninx34Ku5/FPxA17
VPgZdaqUuz4FvZ2vm/tW9Ro3ewspbiWxZY7lT++j+0eQoQzgA+NPhh+y3cftGahoN/N8Q/hV
4a174ja7Jp+laJNJOt1NcPNHGC1rp1pNFYRPLNtjWYQAhSUXywGroNF/4JoeMll8LWvijxH4
K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc9t+xRo/iLwH+0B4V8
bah48+CmqR3vjmF/GcniDW9Du9Y0r7JfI890txqI3P5qSSSpc6bNL5hXJfzEUD6APxu8D/Ev
U/gVc+D/ABN8P9S0HwZ8R/EM2tXfi3xFb2WqaJp83i2x1a2vof7TniuZZZLSDDTKJXZJbiNv
3jMAAfmp478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFZNegftZ
eN9M+Jn7VHxL8SaJc/bdF8QeK9U1KwuPLeP7RbzXcskb7XAZcoynDAEZ5ANef0hBRRRQAUUU
UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUU
UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFf1Ff8Gjv/ACjW0L/sM6h/6VS1/LrX9RX/AAaO/wDKNbQv+wzqH/pVLXo4D4K3
+H/26JyYren/AIv0Z9+/Fn/j+uPqa+N/F+qS6brepGNc5sLX/wBSDxbX2R8Wf+P64+pr5TXR
01XXtV3AHGn23X/sYfFn+Nexhb/u7d/0PFxdrTv2/VH5Nf8ABffUJLv9on4XyNnc3w7iz/4O
9YorY/4OFNKWy/af+GkYAwvw7h/9PWsGivMxN/bTv3f5no4VL2MPRfkeY6P/AMnefA3/ALAX
w5/9MWjV81fFT9k7UPjv4o8feMv+Eo8K+FPDfw58PeD/AO2LzWjetj7bpVrDB5aWttO7fvEw
flGNynpkj6W0b/k7z4G/9gH4c/8Api0avPobnT/EPwJ/ab8G/wBv+FdK8SeK/D3w2/sez1rX
7LSP7Q+z2tvNP5b3UsaHZGMn5uMqOrAHLE/wo/L8jfDfxJfM+X/2e/2HPHv7Sfwr8e+N9Cs7
W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG
+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn1b/gl5p9rpGh/HDUNT8ReCtCg
8T/DHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLgggdB+z98LI/gb8CdH1XwH42+FV
r8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZIb2ASOluNsCZlEr+edx5pov/BNDxks
vha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDmr4f/wCC
fWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgysQGx
6V+xl438YfCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7oupm2ht5o57yTRkuI5NSKXIgUwtZRAS
TOrDa+9lq/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMk
UTneyvtQgA5R/wDgl/4s0bxDoeleIPGXw/8ADF/4t8V6j4P8ORX0uozf29d2N6tjM8RtrOYR
RfaXEam4MTHBbaFwxyvGX/BPbWPhJ4A0DXvH/jz4f+Av+EjvtW021sNSOqXl0lxpl41neI5s
bK4iG2VeCJCGDAgnnHoH7Cnx/wDH2oeOPCGr+LfH3w//AOFe+DvFcviLUrnxtd6PqGrWX7yK
9v2sY7pZdU824MY2mzT57h9wYPvcegeDPj3rX7Qvxp8M65deLvgpqfwYuviPql/P4a8aReHY
tY8LaZeawLy7Eq6lEJT58U5fdZTTjKFdytGFAB8/+B/+Ce2seM9I8E38vjz4f6La/E3XbrQv
Bj3x1Rv+Eme3uY7Vp4hDZSGCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/
mJJ9nuIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo37RlrJofjH4aad+zv8AD34gX+saOviyXS7u
/wBI0pLpbt102C+SXV0eaCKIRi2jDNcMDkSeY4+fviBZ2P7W37Rnxi8X6Zr3h/wrps8+teNL
SLxNerYz6hCbppksYAu9ZL11lAWINhir4bjJQg+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX
13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHOt4O/YL1bXvFmm+GtZ8e/DTwj4w1fxJceFbX
w9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz2v/AATs0K/+HvxI+HnjO08W/BQaDdeK
7JvEdr4gvNGi1jw9b2l1EzSqNURJY98UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8A
a58TpLu/PirUdN/t/SNNt9U81Lktrirdh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54
j8b3/h/wHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5ud20/Yz/AGHPHv7d
fxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr
4i+FWueB/D/xH8TN4uTx5Po8l9Do914gS6hlU65/pj+bZSMzSwEyMwIdjKmF5/8A4J+ftp+E
tB/bK0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbt
dwAfKnwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkh
jjZhjk+Iv2KPFXw5+HmueI/G9/4f8BwaXqt7oVjaaxPM934jv7MyLdQ2KW0UwlSKSMRNcMUt
xJIi+bndt9r/AGMviz4w8N/FTw4Na8S/B/w78Kvhx4yuNVns9Y1HRdXPh+GOaO6vIdGFw11q
TJIIlWF7IuJJirCQvvkr3bUf2nPC37S1j8E7zRdR+FUnhaD4j+Jr/wAb6Z48Hh6O+0vTL/xA
l6gVNTJk+ezmcs1izAspXcXjAAB8Kfsifsnah+2T8T4vBuheKPCuheJL7d/Z1nrRvU/tLZFL
NL5bwW0yL5ccLE+ayZ3KF3HIHlNfa3/BN+58D+Dv+Crs3jLRtf8ACvhz4T+F9d1n7Dea1r9v
pmzT7i2v4bHy0vZUuJsqYgcK7puUybc5PxTSEFerfAT9k7UPjv8ADDx54y/4Sjwr4U8N/Dn+
z/7YvNaN62PtsrwweWlrbTu37xMH5Rjcp6ZI8pr6r/YpudP8Q/sJftM+Df7f8K6V4k8V/wDC
Lf2PZ61r9lpH9ofZ9Qmmn8t7qWNDsjGT83GVHVgCAfOnwr+FfiL43/EPSfCfhPSbrXPEOuTi
3srK3A3ytgkkkkKqKoZmdiFRVZmIVSR6t8GP+Ce3jX42eE9Q1ey1XwVpkEPiRvB2mC+1tNni
PWhby3AsbOeESQM7pGAkksscMrTRKkjFuNX9in4qeHl+Hnjz4W3+rWvw41v4mQR2Vl49yQka
g5Okag5DNBplywXzJrfYysFM3nwAonoGveIj4c/4IzXngi41z4aSa/YfE57uXTLXVdEutSbT
UhMBuYxE7TSP9sG0TIWla3xhjaEUAeP/ALPf7B/i/wDaP+G8nijStS8K6RYXGujwto6axqJt
ZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIB+z3+wf4v/AGj/AIbyeKNK1LwrpFhca6PC2jpr
Gom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgerf8J3H/w5K/4Rn+2fh/8A2x/wsf8AtT+y
ftek/wBs/wBl/Z/K+0eTn7V5v2v5N+PP8jjP2aj/AITuP/hyV/wjP9s/D/8Atj/hY/8Aan9k
/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7NTGeU/s9/sH+L/ANo/4byeKNK1LwrpFhca6PC2
jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgH7Pf7B/i/8AaP8AhvJ4o0rUvCukWFxr
o8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB6t/wncf/Dkr/hGf7Z+H/wDbH/Cx
/wC1P7J+16T/AGz/AGX9n8r7R5OftXm/a/k348/yOM/ZqP8AhO4/+HJX/CM/2z8P/wC2P+Fj
/wBqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1AHlP7Pf7B/i/8AaP8AhvJ4o0rUvCuk
WFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSAfs9/sH+L/2j/hvJ4o0rUvCu
kWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB6t/wAJ3H/w5K/4Rn+2fh//
AGx/wsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+PP8jjP2aj/hO4/wDhyV/wjP8AbPw//tj/
AIWP/an9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7NQB5T+z3+wf4v/aP+G8nijStS8K6R
YXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIFv4VfsD698SvBnw91q98W+Cv
CKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx9L/AOE7j/4clf8ACM/2z8P/
AO2P+Fj/ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1elfsm/FnTbX4G/sr2+g+Jfh
pGPB3jLUbrx7F4t1HRxd6HC+p2c0clkNWbzYEa1Uvu07aDIrMf3wJoA+f9F/4JoeMll8LWvi
jxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc5Xg79gvVte8Wa
b4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXMeHP2Wfjd4H+
Jep/Aq58H+Jvh/qWg+DPiP4hm1q78W+Irey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt
+8ZgPKfg9rUV1+3hcfFfwz43+Cl54H8V/FafVNUXxBPpNrrGjafFq3nrcKmrxxzw+ZbzGRHs
WZ8rhissYRQDx/Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0
W6fyg0siZKswymHOV8N/+CcvxR+I+h/FfUxpVro+m/BqDUD4iu9QuMQfarJWaexgeIOs1wFR
j8p8sDaWdfMj3/Zeg/Hfwnr+tfBa7+HvjL4f3vh3w/8AFbxFqnimbxvq2nSalpunza7BcWtx
atrrm8j8yyAlL2eHaUOzk3G415T+yhfeG9S+OH7V3iTT/F3hW08LeOfCni/w54XuPEXi61sr
7Vbi7uIpLQMt/cLdN5kZBM84wWDb33hqAPhSvS9G/Zf1ab4CP8R9c1nw/wCEfD13PNaaCmsP
cC78VTwozTJYwwwys6RsqxtNJ5cCySohlDbgv2t+zt8e774P/sq/Bfw18Ntb+D9l4s8OeJNb
j8bvrnxAbQrS1uvt8JtrqZbXUbZNVtzbhf3ipeRmOARpn5ka34U+N3h/4rfDr9nrTbHU/wBn
+fQfDnjnxB/wn2napbaLZ2Om6fda5FdD+z7fWlS6jtHtJJCn2ZQ4VVRsSR7VAPzTrq9I+CXi
fX/hBq/jyw0z7b4W8P30OnapdW9zFJJpsswJhaeAMZo4pCCizMgiZwUDl/lr7r174o2//Cqf
hfpf7MfxT8K/Diw0Dxz4pk1j7Z4uh8Ox/Z5NVhfSrm+gvZEm1CJbFYhlorg7I2iYFg0deFfs
i+M9J+A/xD8afF3xH408P3OiadPeaOfC+gpbwv8AEhrkMWsv7PkhVbbR3XDyyzWqKihEhj88
KIkI80/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+zrPWjep/aWyKWaXy3gtpkXy44WJ81kzuULuO
QPKa+tv+CQ2taTov/BQnQPHuq6h4K8D+E9Anv5rwah4ht9PgsFurG8ihigW8uPPnRXZUyplZ
QVMjc7j8k0AFFfot+yz+0rceAv2OvgVpfwu1j4Vab4k0vXdXk8Zf8JN42n8Lx2Vw19A9pc3U
EV/ZnUYjbbAxaK7GyDygMhozlfsmfHrwJ4u8DzaDqvi74f8Age68CfHKH4w3DeVcadoepaPB
GI5bfSIzEZmlBUeTaSRpI0boFBKuEYz408A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6
W2t4o0GWZ2Yu2cBQsL5YMUV+Jr9C/gN+2hffGjwt+1FoGg/Fi6+HmpePPEljrfgEeIfE7aHB
olrNr01xftHP5vlW7iG5V5Y4XMko8zYsu01leBPiBrOkfsq/AnQPgj8YvBXgzXfC3iTXh4zv
W8WQeG7S8me/t2sr66trwwy39ubVVIzbzHy0MTJuBiAB8qaj+y/q1l+yZYfGKPWfD934eu/E
h8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwvmlfot+yl8f8AT/gv+zR4Dhfx98P7TU9a
/aPtNXvpNNu7K0aPRGiWC5vEt9sc2nWkhhkQ7orc/Z5TGyrDMUfoPB37SnhX4d2em2/h/wAf
+H9Dg0b9qS4tdPi0/XIbZLHwbcSpNPHEEcBNHkkjR3VcWzMisckA0AfBPgj9pfxF8P8Awxba
RYad8P57W037JNS8CaHqd0252c77i5tJJn5Y43OcDCjAAA6D4A/BHxF/wUF/aSj8M6fqfgrw
/wCJ/EEEk1qk2nDSdOuWt4dzRRw2FqYon8mN3z5aK3luSxdgH5/9rL+w/wDhqj4l/wDCM/2V
/wAI3/wleqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjivuz/AIJmfHfwP8CvA/wCvdN8ZeFf
CdhLruvf8LaW41a3s769uWje20TzopXFxPaIt0pHkK1tExkll2NHJIqEfmnXbfAH9ofxb+y/
8Q4/FngjULXSfEMEElvBezaba3z26yDDmMXEcixuVyu9QG2s652uwP3B+zX8SX8C/Cn9mPRf
CHxH8K+EP+EI8c6o3xWtbfx1p+ix3yf2raMks4a5jXVIjZIVWWHz0ZFKKxxtr4f/AGlNZ8Pe
I/2jPH+oeEEtY/Cd/wCJNRuNFS1tTawLZPdSNbiOEqpjTyimEKrtGBgYxQBymvazN4j1y81C
4S1jnv53uJUtbWK1gVnYsRHDEqxxpk8IiqqjAAAAFVK/Sz9hD47+B9D8MfseavqPjLwrpVr8
KL7xhp3imO/1a3tLrTJdUcpZMLeR1mmic3Ee6aFHiiG9pHQRyFT9hC+1/wAFfsYeAL5/F2la
D/wgf7QEelajqlx4us7G1ttBNtb3GoWUF29wsU9pNKgma3gd1nKCQI+3cHYdj806K/SHx38a
l8R+Bfh3H+zX8UPBXw0g0/4geLL3XVbxJaeErRYZ9Xil0u4urK4aJry3WxEYCiCYLHGYdmVM
Vav7Afxe+G3wo0PwK1/418FapoXi3xJ4lt/ihDe+IJtL0q1adVtNONpoJktYJbK5R4i7yWEy
wozbzbrBiFCPhT9nv9l/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNds
jx5Mq4yA5U/Z7/Zf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjXbI8eT
KuMgOV9r/wCCamn2vgzXP2hdP1rxF4K0ae/+GOueErN9Q8U6bawX+pXLRrDFBNJOI5kYwSfv
Y2aIDaS4DoWqf8EtrnT9P/4Xx/aWv+FdC/t34Vaz4a07+2tfstL+26heeV9nhj+0yx7t3kvl
x8ifLvZdy5APlSvdvhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpC
nnyrGDO0ZJDHGzDHV/YQ/bC8Jfst6H8TLTxH4Itden8Y+DdU0C0vYzdGeaS5WAJaXIW8hjWy
JiZneJPtILfLJjgfRfwK+MPhvxz8Iv2WDpV38KtMj8CeK9TbxTZazr9rp03g63k8S6dqsUti
uoXKzyf6PbmMSoZyYnnjZjIThjPz/wDHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjk
TchKth1YZUkHHBIr0r9nv9hzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyX
HlgttwFUFd7pvj3fa3wm/aY8Bahrnj/VPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu
9hZS3Etiyx3Kn99H9o8hQhnHin/BPe0udO1z9onU/F/jHwVBqXir4f8AiXwpFd6x440tZ9Y1
q4aBhh5bndMkrbyLoEwudx8080AfGlFfe3gTxx4l8Ofsq/Anw18Ifit4K+Hnizwj4k16Px47
eONO0e0a6a/tzaXV0rzBNUtxboMSRJdRtGhjG7Gyreg/FrX7z9nH4LWHwZ+L/wAP/BHiTRfF
fiKfxvd2/iKz8Iabc3EuowSWd7PYz/Zjd2n2YLtVbWQLEhh8oFTCEI/P6vdvhV+wPr3xK8Gf
D3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHV+HnxY8Ffs0We
qfEKx1Lw/wCPPjNf6rdr4eistFe10DwcFlONY8iW3hjluHJ3WlukQhtxiSRRIqQJ9F/Ar43a
f8S/hF+yxczeJvh/qWq+DPFepzeObvxb4isrLVNE87xLp2rfbofts8UssskUDhpohLuSW4jP
zsQGM/P/AMd+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivSvgn+
yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz9rfCb
9pjwFqGueP8AVPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o8
hQhnHlXwc8FXPw2+Eln4o8LfEP4aT/HX4larqeneI/Fev/EDS4Ln4eWoumtpZrcvctJNcXgM
srX8PmSCDIhXdKJXAPjTx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJ
BxwSK9g+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7Rk
kMcbMMfrb4KfGu4+BH7OPwj8H/C7xV8FJ/EnhLxXr1r4y1DUvH0+gaabhdRi+yX7pFfWR1S0
ktghDtDdAxRCMKDujJ4F+NXhv4ieGv2cZtK1f4KJH4J8c61J4pWXU7XQrfwxby+KdP1aK50q
31CW3nETW8BSMpG5EEksLASEqoB+dPjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJ
uQlWw6sMqSDjgkVk16B+1l430z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBlyj
KcMARnkA15/SEFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFF
FABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABX9RX/AAaO/wDKNbQv+wzqH/pVLX8utf1Ff8Gjv/KNbQv+wzqH/pVLXo4D4K3+
H/26JyYren/i/Rn378Wf+P64+pr5j0EZ8Qav/wBg62/9SHxXX058Wf8Aj+uPqa+ZPD//ACMO
r/8AYOt//Uh8V17GF/5d+v6HiYzafp+qPyt/4OJf+Tq/hv8A9k7g/wDTzq9FL/wcTjH7Vnw3
/wCydwf+nnV6K83E/wAafq/zPQw38GHovyPJNH/5O8+Bv/YC+HP/AKYtGr5q+Kn7J2ofHfxR
4+8Zf8JR4V8KeG/hz4e8H/2xea0b1sfbdKtYYPLS1tp3b94mD8oxuU9MkfSuk/8AJ3fwN/7A
Pw5/9MWjV5/Dc6f4h+BP7Tfg3+3/AArpXiTxX4e+G39j2eta/ZaR/aH2e1t5p/Le6ljQ7Ixk
/NxlR1YA44n+FH5fkdOG/iy+Z8v/ALPf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0
EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo
0sdnbXAtUeSUKhuTEX2swBQBz6t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mp
XC2zQxCG4nSQIwBxKVEWVYFwQQOg/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb
4ZafFcG0ZrNTcFpJbpPMkN7AJHS3G2BMyiV/PO4800X/AIJoeMll8LWvijxH4K8B6t438SX3
hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/APwT61DUPEPh7w9qvxL+FXhz
xn4j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtf
EP4Pz/Cr4SeMri6nvtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/
aX/aAs59Kg8fabrser/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/wAEv/Fm
jeIdD0rxB4y+H/hi/wDFvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y
/wCCe2sfCTwBoGveP/Hnw/8AAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj
0D9hT4/+PtQ8ceENX8W+Pvh//wAK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmn
z3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2JV1KISn
z4py+6ymnGUK7laMKAD5/wDA/wDwT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wkz29zHat
PEIbKQwRNNKqr9qEL9SUUc14p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVs
OrDKkg44JFfZf7NvxhudG/aMtZND8Y/DTTv2d/h78QL/AFjR18WS6Xd3+kaUl0t266bBfJLq
6PNBFEIxbRhmuGByJPMcfP3xAs7H9rb9oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMA
Xesl66ygLEGwxV8NxkoQfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2u
Bao8koVDcmIvtZgCgDnW8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwl
Wwt7lLdGnYojTvHv2M65jw57X/gnZoV/8PfiR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tL
qJmlUaoiSx74pGdJdPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/AG/p
Gm2+qealyW1xVuw8ttL5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/w/wCA4NL1W90K
xtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wAEWdqqadB9
o1HVdQd4dO0xSG8sSyKjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f8AiP4m
bxcnjyfR5L6HR7rxAl1DKp1z/TH82ykZmlgJkZgQ7GVMLz//AAT8/bT8JaD+2Vo3gTTrP4ae
Gvgr4F8SeJNc0XxDqGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK
8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvh
z8PNc8R+N7/w/wCA4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZ
fFnxh4b+Knhwa14l+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWE
hffJXu2o/tOeFv2lrH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsW
YFlK7i8YAAPhT9kT9k7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzW
TO5Qu45A8pr7W/4Jv3Pgfwd/wVdm8ZaNr/hXw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE
2VMQOFd03KZNucn4ppCCvVvgJ+ydqHx3+GHjzxl/wlHhXwp4b+HP9n/2xea0b1sfbZXhg8tL
W2ndv3iYPyjG5T0yR5TX1X+xTc6f4h/YS/aZ8G/2/wCFdK8SeK/+EW/sez1rX7LSP7Q+z6hN
NP5b3UsaHZGMn5uMqOrAEA+dPhX8K/EXxv8AiHpPhPwnpN1rniHXJxb2Vlbgb5WwSSSSFVFU
MzOxCoqszEKpI9W+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5bgWNnPCJIGd0j
ASSWWOGVpolSRi3Gr+xT8VPDy/Dzx58Lb/VrX4ca38TII7Ky8e5ISNQcnSNQchmg0y5YL5k1
vsZWCmbz4AUT0DXvER8Of8EZrzwRca58NJNfsPic93LplrquiXWpNpqQmA3MYidppH+2DaJk
LStb4wxtCKAPH/2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJ
K0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIy
JK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc
/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb
9r+Tfjz/ACOM/ZqYzyn9nv8AYP8AF/7R/wAN5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLY
W5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQPVv+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T9r0n+2f7L+z+
V9o8nP2rzftfyb8ef5HGfs1H/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP9s/2X9n8r7R5
OftXm/a/k348/wAjjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJd
LYW5CMiStGihTcPDG7zRorliQPVv+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5
X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R
5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0t
hbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4r
Z2H2S1nSFPPlWMGdoySGONmGPpf/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7XpP9s/2X9n8r
7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5
o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE
+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavjL/gntrHwk8AaBr3j/wAefD/wF/wkd9q2m2th
qR1S8ukuNMvGs7xHNjZXEQ2yrwRIQwYEE84+wD8bvA/xL1P4FXPg/wATfD/UtB8GfEfxDNrV
34t8RW9lqmiafN4tsdWtr6H+054rmWWS0gw0yiV2SW4jb94zAcTpPxq1X42/tC6NqLePfgV4
o+CB+J2r3TaP4qg0GDUdC0q61n7VcyFNXgjudlzDMZFa2eRvl2HZJGI1APl/4YfsYp8UtQ0G
ztfir8KrO/8AFuuyaFoNnNeahNdao6zRwRztFb2cr2kU0kgEf21bd2ALbAo3VreMv+Ce2sfC
TwBoGveP/Hnw/wDAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj3Xwv4F8G/
DDw5e+JfgJ4z+Glj4n+I/iTWtNs9d1/xjZ6Rc/DDw6t9JbW5t4LuYXf2i6tjva62NcRwApHG
Hk8xsr9nO58UaNqvwz8I678Rf2dPGfwq8D+MrrSdS07VpfD8r6RZDUke+nhk1SCOaa3u0ZpY
5bR5dygDKOgQAHj/AIH/AOCe2seM9I8E38vjz4f6La/E3XbrQvBj3x1Rv+Eme3uY7Vp4hDZS
GCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUk
HHBIr7L/AGbfjDc6N+0ZayaH4x+Gmnfs7/D34gX+saOviyXS7u/0jSkulu3XTYL5JdXR5oIo
hGLaMM1wwORJ5jj0Dx3+0zqPxj8C/DvWvgN8TvD/AMMNSu/iB4s1zxZHq3iux8NPG17q8VzY
T6jbSzD7ei2hQHYlyu2Nohu2lKAPzerq9I+CXifX/hBq/jyw0z7b4W8P30OnapdW9zFJJpss
wJhaeAMZo4pCCizMgiZwUDl/lr9Afgp+07ceDv2cfhHYfC7xJ8FB4ksPFevT+MrvUvFU/gvT
UuH1GKS0vXsYrvTjeWj2xTCtazhIoRCIkKtCfn79mL4iaH8Hvi/8QPjRrfiXwraeHYL6/wBM
TwV4XEccfj37WXY6bHYXKM1vohQqXluoPkRY0jU3AHloR5V+yJ+ydqH7ZPxPi8G6F4o8K6F4
kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19bf8Ehta0nRf+ChOgePdV1DwV4H
8J6BPfzXg1DxDb6fBYLdWN5FDFAt5cefOiuyplTKygqZG53H5JoA9L/ZK/Zf1b9sX412PgLQ
NZ8P6NruqQTzWJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq3mlfpZ/wTM+O/gf4FeB/gFe6b
4y8K+E7CXXde/wCFtLcatb2d9e3LRvbaJ50Uri4ntEW6UjyFa2iYySy7GjkkXJ/Zr+JL+Bfh
T+zHovhD4j+FfCH/AAhHjnVG+K1rb+OtP0WO+T+1bRklnDXMa6pEbJCqyw+ejIpRWONtMZ+d
Nel6j+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/wDrUBwwcLlf
tKaz4e8R/tGeP9Q8IJax+E7/AMSajcaKlram1gWye6ka3EcJVTGnlFMIVXaMDAxivsv9hb47
6f8AAz9jD4VWj+MvCuianqv7QGm6pfQtq1kdQstENslvc3Ei7zNZxMYZIpHbyy0MjqxMM5Ei
Efn9RX6g+I7nwn8a/jF+zbpvw41/4fyWHwv+MniP7bp0Ov6dpv2S0n8TwXVl9jt5ZYzdRPbb
TH9kWRTjYvzDbXa+H/E+s6bZ6zr0fiu18OweEf2stTsr3UdR8SwaOlp4eklW6v7FJZ5ow9vL
IqzSWkZPmsm8xsVJDsOx+ZPgj9pfxF8P/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfl
jjc5wMKMAADlfG/jK7+IHie51e/h0qC6u9m+PTdLttMtV2oqDZb20ccKcKM7UGTljkkk/ptp
v7V2jaL4F8Ex/ADVfg/pMEXxA8UXusLrniyfwbaWEMurpLptxNZR31i17b/YTCNrQXIWOAQh
FKvEfPvDf7X2s/BD9g258VeE/Fvw0h8Yad8XbrVbDR9K1SCEReGpZYppbOysZJEv7bTJ9Rto
Wa1VYpHhG518tmYgH56V7X+zp8Q/EWpeGNa0iw1L4KaJa+EdDvNcSTxd4U0Oe61Xy3DmzguL
mylmuLuQyHyomfkKVBUKBX0X+xf+0X4K+Ivwrh0zXfEHgr4Zal4O+Ndt8Zry0ukew0qbSooV
Way0tI1lZ7iNgBHa4DMjIEL7X21P2RPjdp/xL+OH7X+txeJtK8L+G/in4U8SppVh4i8RWWj/
ANoahf3DvYRtHNOqPKsb3CmQFki8xgXUSDcAfFXjfxld/EDxPc6vfw6VBdXezfHpul22mWq7
UVBst7aOOFOFGdqDJyxySSeq/wCGoPGf/DPH/Cqvtulf8IL9u/tT7B/Ydh532vdn7R9p8n7R
5u393v8AM3eV+6z5fyV9bf8ABPv4r6jp/wABPCfhs/Evw/4F01PEl3fW+taD42sfDur+F7nY
o3azpl8YIdcspHFrINjSSLFDNEJeluOr+BHxqWz8C/s4R+CPih4K8OweG/iBrF78Ul07xJae
DrTVYZNXtZYrh7KdrNrq3axUiNVgISNfJ2IVMQAPzeor9IdX/a+X4Ifsi+IvFXws8W+CobzT
vjXqGq+GtHGqWkN7F4PluY5hZw2PmJeW1lPe21u0trEsTvGCzL5TMx/PTx34q/4TrxxrOt/2
bpWj/wBsX0999g0u3+z2Nj5sjP5MEeTsiTdtRcnaoAycUhGTRX3X8Jvijr//AAx1+z5pfwZ+
KfhX4ceJNA13W5PG/wBs8XWfh2P7RJfWz2dzfQTyIdQiW2VRlYrgbI2iwSDHXpf7Afxe+G3w
o0PwK1/418FapoXi3xJ4lt/ihDe+IJtL0q1adVtNONpoJktYJbK5R4i7yWEywozbzbrBiEA+
FP2e/wBl/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5U/
Z7/Zf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjXbI8eTKuMgOV9r/4J
qafa+DNc/aF0/WvEXgrRp7/4Y654Ss31DxTptrBf6lctGsMUE0k4jmRjBJ+9jZogNpLgOhap
/wAEtrnT9P8A+F8f2lr/AIV0L+3fhVrPhrTv7a1+y0v7bqF55X2eGP7TLHu3eS+XHyJ8u9l3
LkA8f1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfN
K/Qv/gnB8XrXwz+xX4a8PweNfBWiPP8AGuK48W6RrniDTbFNR8LzaXFbX4mtryRVubd1crsC
uSygoN8YK+g/sf8AxV+B/wAHNc8Pp4Y8U+H5Phf4o8ZeLLTxjp/iHxZeW8GmWU7fZNFSPSJ7
mJLq3nt3t/NmuLW62AyNLLEIT5TGflnXpeo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb
0QSXGJFeFYihhRX3Ryv/AK1AcMHC/VfgTxx4l8Ofsq/Anw18Ifit4K+Hnizwj4k16Px47eON
O0e0a6a/tzaXV0rzBNUtxboMSRJdRtGhjG7Gyu2/Yx+OXw/+FX7I9pp3i3xF8P7jxJ4o+Ml5
eeHtVsJ7ER+EbifTXs7XxKdJk8kR2kFyjMsVzFAI0dJQilYgQD80692+FX7A+vfErwZ8Pdav
fFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMe1/Zp/a10D9krxx8ct
M8b6PpXxJ1rxVoeveHP+Ensb+8v/AO3rq4kjXEs32uFZNPmeJ5WnjQXTeYCr4OB7X8CvjD4b
8c/CL9lg6Vd/CrTI/AnivU28U2Ws6/a6dN4Ot5PEunarFLYrqFys8n+j25jEqGcmJ542YyE4
APz/APHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr0r9nv9hzx
7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3fa3wm/aY8B
ahrnj/VPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o8hQhnHi
n/BPe0udO1z9onU/F/jHwVBqXir4f+JfCkV3rHjjS1n1jWrhoGGHlud0yStvIugTC53HzTzQ
/sfTpJrfU4ZWSSX7YkRmaUFWi81ZiTEPLDGL5a9V/Zt8Ea/rngD4eaXpXxD/AGarG/8AHF9L
puj6JrvhCz1rXY7hrwwql5INIunh8ySRTGbiYAxsu0hUIXxT9rLxvpnxM/ao+JfiTRLn7bov
iDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBr0v9ibXvCvwh+EnxY+KM954f/4Wh4Dg0oeA
rDVpoZEN1dXTRXF9DaOQbi4tIlEkeQ8cTESOjFUKoR5V+0hpWs+HPj34v0jxF/wj/wDbugar
caPfHQ9Og0/TmmtXNuzQwQRQxqhMWciJC2SzDcxria/Sz/gnN8c/AngXwP8ADnUPEXjbStVt
fHWu+Ih8W4vFfjK4X7NcXUaW1if7Le6jhvYrkSRme4ltroLukaSWIQkxfm9r2jTeHNcvNPuH
tZJ7Cd7eV7W6iuoGZGKkxzRM0ciZHDozKwwQSCDQB7X8Kv2B9e+JXgz4e61e+LfBXhFPivqs
+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY2/D/8AwT61DUPEPh7w9qvxL+FXhzxn
4j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2O1/4J7/ABh8YaNrngOTWvGP
w0074VfD3xIdYnXxZLot3f6RCjQ3d4umwXCS6kjzCJRGLKMBrhgQQ+9xb+HXxl8Dy+Mf2l/2
gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvMvFsyxE93ZwlZEwZIonO9lfahDGeU3Pxx+I/7GXjj
X/AmkX/hXSta8F32p+H5Na0vw3p39qRuJJre4eDU2tRfLndIEk8xXVGAXYAALfwq/YH174le
DPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4pr2vX3inXL
zU9TvLrUdS1Gd7q7u7qZpp7qZ2LPJI7EszsxJLEkkkk1+gH7JvxZ021+Bv7K9voPiX4aRjwd
4y1G68exeLdR0cXehwvqdnNHJZDVm82BGtVL7tO2gyKzH98CaAPn/Rf+CaHjJZfC1r4o8R+C
vAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPhXjvwRqnwz8caz4b1
u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV+lZ+N3gf4l6n8Crnwf4m+H+paD4M+I
/iGbWrvxb4it7LVNE0+bxbY6tbX0P9pzxXMsslpBhplErsktxG37xmA/P79rLxvpnxM/ao+J
fiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBpCOg+Cf7ITfGqz8KBfiN8NPD2
reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHOt4O/YL1bXvFmm+GtZ8e/DT
wj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz2v/BOzQr/4e/Ej4eeM
7Txb8FBoN14rsm8R2viC80aLWPD1vaXUTNKo1RElj3xSM6S6e7klMFlkjVV7b4Am10b9tSP4
jeEfHPwf1DwBrnxOku78+KtR03+39I0231TzUuS2uKt2HltpfNWa0keViv7xhNGFDGfGnjvw
Rqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV6V8E/2Qm+NVn4UC/Eb
4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOeV/aU17RvFP7Rnj/U/
Dl5daj4e1HxJqN1pd3dTTzT3Vq91I0Mkj3BMzOyFSWlJkJJLfNmvdf8AgnZoV/8AD34kfDzx
naeLfgoNBuvFdk3iO18QXmjRax4et7S6iZpVGqIkse+KRnSXT3ckpgsskaqqEcV+zd/wTl+K
P7T/AMe/EXw70PSrWw1bwdPPa+IbzULjGnaLNE7xmOWaISBneSNkRYw5fazD5Ed18Jr9LP2G
v28dAvP25rLww3/Cv9L+E/hfxX4o8TWHizWtavNM1C5F59rjhvLmS6vUS+u2jnhgVriGW4WF
n+7iRx+dPju3+x+ONZi+x6Vp/lX06fZdLvPtljbYkYeXBP5svmxL0STzZNygHe+dxAPYPhV+
wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDE8Hfs
F6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M65jw5+
lf2bPijoPiH4D/sl21lrXw0L/DrxJqEfi7/hI9dsNMvfDULeI9N1RLq1S7nid3aC3ZfMgWUG
OSeL75IBqaaDHqvib4k/B/x78NB8QfjR4y8Q+Z4l1/xZYaJc/DfRZNSlijkt7W4lW5W4u4Xa
RrlYzNFBlI4leTzGYz4K8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlS
QccEivYPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0Z
JDHGzDHx/wAd+Ff+EF8cazon9o6VrH9j309j9v0u4+0WN95UjJ50EmBvifbuRsDcpBwM196/
sm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1HRxd6HC+p2c0clkNWbzYEa1Uvu07aDIrMf3w
JpCPmrwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8
e/YzrmPDnx/x34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+y/g
CbXRv21I/iN4R8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0
YUfKn7SmvaN4p/aM8f6n4cvLrUfD2o+JNRutLu7qaeae6tXupGhkke4JmZ2QqS0pMhJJb5s0
Adr+z3+wf4v/AGj/AIbyeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3m
jRXLEgeKV9g/8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k34
8/yOM/Zq9V/ZZ/aVuPAX7HXwK0v4Xax8KtN8SaXruryeMv8AhJvG0/heOyuGvoHtLm6giv7M
6jEbbYGLRXY2QeUBkNGWM/OmvdvhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8
Vs7D7JazpCnnyrGDO0ZJDHGzDHyn4s6zF4j+KniXULdPD8cF/qt1cRJoNrLa6UqvM7AWkMqr
JHb4P7tHVWVNoIBBFfdf7NnxR0HxD8B/2S7ay1r4aF/h14k1CPxd/wAJHrthpl74ahbxHpuq
JdWqXc8Tu7QW7L5kCygxyTxffJAQj5q+G/8AwTl+KPxH0P4r6mNKtdH034NQagfEV3qFxiD7
VZKzT2MDxB1muAqMflPlgbSzr5ke/wAJr72/Zu8T6D41/aM/a78aW/ivwVYeHviP4b8ZaJ4b
m1jxLYaRPqd1fXUc1qot7qaKZEkQ5EjosYIZSwZSB8Ka9o03hzXLzT7h7WSewne3le1uorqB
mRipMc0TNHImRw6MysMEEgg0AewfCT9jFPi5pHgV4fir8KtK1r4h3zabpeg3d5qE+qQ3AuRb
IlzHa2cy23mOyMhldQyOGBwG29Av/BOq6g8e6X4Tvfi18H7HxZrPiS88K22im/1K5vVvba/a
xxMlvZSC3SWUK0TTmPfG4bjDBT9ibXvCvwh+EnxY+KM954f/AOFoeA4NKHgKw1aaGRDdXV00
VxfQ2jkG4uLSJRJHkPHExEjoxVCtv9kTx3ofgrwB8ZfjJrWs6VqPxj8Kf2bc+DY/EF3HdSXe
oX146XepLbyndd3dsn75Gbekbt5joxClWM8J+LPw4vvg58VPEvhDU5bWfUvCuq3Wj3ctqzNB
JNbzPC7RllVihZCQSoOMZA6Vz9W9e16+8U65eanqd5dajqWozvdXd3dTNNPdTOxZ5JHYlmdm
JJYkkkkmv0W/4JkfFX4bfBz4V/CxL/xT4fk0LxRquv2nxQ0/xD4smt4NMaeFLTTkj0g3MUF1
bzo8XmzSWt0qAyM8sSw/ukI/N6iv0h/Z2+Pd98H/ANlX4L+GvhtrfwfsvFnhzxJrcfjd9c+I
DaFaWt19vhNtdTLa6jbJqtubcL+8VLyMxwCNM/Mjfn98WdZi8R/FTxLqFunh+OC/1W6uIk0G
1ltdKVXmdgLSGVVkjt8H92jqrKm0EAgigDn69L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXu
BqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfsv8AZZ/aVuPAX7HXwK0v4Xax8KtN8SaXruryeMv+
Em8bT+F47K4a+ge0ubqCK/szqMRttgYtFdjZB5QGQ0Z1v2Ofjvp8f7P5sLDxl8KtAj1b9oCX
UfFGirq1lo+j33hS5sUgvljsdReNpNPZJCscLRlwETCCSL5WM/NOvYP2e/2HPHv7Sfwr8e+N
9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PdxXx3/wCEY/4Xh4y/4Qj/
AJEz+3b3+wP9b/yD/tD/AGb/AF373/VbP9Z8/wDe5zX0X/wS80+10jQ/jhqGp+IvBWhQeJ/h
jr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQEI8/+FX7A+vfErwZ8PdavfFvgrwi
nxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/HfgjVPhn441nw3rdt9i1rw
/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr9APgV440cfCL9ljRYtd+FUt18LPFep2fjJ9Z8
T6XbTeHE/wCEl07Ulu7F5rhBPvhtmUT2vnI8Uk6Aktx8U/tZeN9M+Jn7VHxL8SaJc/bdF8Qe
K9U1KwuPLeP7RbzXcskb7XAZcoynDAEZ5ANAHQfsZ/sOePf26/iHc+H/AARZ2qpp0H2jUdV1
B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsp+xn+w549/br+Idz4f8EWdqqadB9o1HVdQd4dO0xS
G8sSyKjtvkZSqIqszYY42I7L9Lf8Ehf204tB+Lfw08CeJbP4aeGvBPgWfV9cl8Q6hqkuj3Iu
bm1nh8+UveR2t3cfvo7ZN8EsiQFtm0K7jV/4Jg/tlxeFv2kvBvw/8S6P8H/BPgnwBquvazLq
yeIJbOK1ubiG4g3rO+om11Bx5sdrDI63MgtslHIDy0xn56V7B8E/2Qm+NVn4UC/Eb4aeHtW8
b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOfNfHdv9j8cazF9j0rT/Kvp0+y
6XefbLG2xIw8uCfzZfNiXoknmyblAO987j9K/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEd
r4gvNGi1jw9b2l1EzSqNURJY98UjOkunu5JTBZZI1VUI5/Rf+CaHjJZfC1r4o8R+CvAereN/
El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPhXjvwRqnwz8caz4b1u2+xa14
fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9q/Av4/6vqH7VDavbePvh/wD8KJ8HfEfUvEVl
c+NrvTNQ1ay0/wC1i9maxj1BZdY824ijj2m3Te9w+4sJfMcfJX7SnxIsfjH+0Z4/8X6ZFdQa
b4q8SajrFpFdKqzxw3F1JMiyBWZQ4VwCAxGc4J60AdrbfsWy2HwJ8D/ELxJ8Rfh/4P0X4h/b
/wCxodSTVp7qX7FcfZ5962ljOqYfaRluQ47ggHww/YxT4pahoNna/FX4VWd/4t12TQtBs5rz
UJrrVHWaOCOdorezle0imkkAj+2rbuwBbYFG6vYP2N/GfijxD4c+DWm6740+BWt/CrQPEj2+
peHvFSeH01Hw1ZSX0U18SNUhSZkuEkZ1e0klzs25R0CC3+zHoWj/AA9/a407xn8PfFvwUHwt
uviOWktfEF5pcWseHtHtNSDQyqNaRLmPfaSB0ls3eQlMOyzRhVYz4/8AHfgjVPhn441nw3rd
t9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr2D4VfsD698SvBnw91q98W+CvCKfFfVZ
9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8//aU17RvFP7Rnj/U/Dl5daj4e1HxJ
qN1pd3dTTzT3Vq91I0Mkj3BMzOyFSWlJkJJLfNmvtX9mz4o6D4h+A/7JdtZa18NC/wAOvEmo
R+Lv+Ej12w0y98NQt4j03VEurVLueJ3doLdl8yBZQY5J4vvkgIR8/wCi/wDBNDxksvha18Ue
I/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDnwrx34I1T4Z+ONZ
8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK/Tb/hpDwz8QvG3wi1vwL4v+Glzo
mnfF3xLrni2TxbeaTDd6VZXOvw3VvPZR6wVuLZHsv3n/ABL1T94GLfvw1fnT+0pr2jeKf2jP
H+p+HLy61Hw9qPiTUbrS7u6mnmnurV7qRoZJHuCZmdkKktKTISSW+bNAHa/CT9jFPi5pHgV4
fir8KtK1r4h3zabpeg3d5qE+qQ3AuRbIlzHa2cy23mOyMhldQyOGBwG263wM/wCCanxL/aH/
AGl/F/wx8Nw6VLf+BL66sNc1maaVNHsHglkhBaXyy582SJhGojLsAW2hUkZLX7E2veFfhD8J
Pix8UZ7zw/8A8LQ8BwaUPAVhq00MiG6urpori+htHINxcWkSiSPIeOJiJHRiqFfdv+CUf7eN
xefHbwB4Y8c/8K/0vw34Xvta8Tah4s1rWp9M1C5vry3uI3vLmSW9S3vrtmnSBWlhllWFn27Q
HcMZ+f1ewfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqanqeoO8UE7QQmdrSAqjGS48sFtuAqg
rvdN8e7zXx3b/Y/HGsxfY9K0/wAq+nT7Lpd59ssbbEjDy4J/Nl82JeiSebJuUA73zuP1B/wS
80+10jQ/jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQEI8U
0b9l/VpvgI/xH1zWfD/hHw9dzzWmgprD3Au/FU8KM0yWMMMMrOkbKsbTSeXAskqIZQ24L5pX
6LfBnx3pWofAL9mHwadZ+Cl/YfD/AMV6zp3xHs/FF34bufsVpJrEMxa2fUCTLFJbNKfO09mV
9oAcsigfCnx3/wCEY/4Xh4y/4Qj/AJEz+3b3+wP9b/yD/tD/AGb/AF373/VbP9Z8/wDe5zQB
0Gjfsv6tN8BH+I+uaz4f8I+Hruea00FNYe4F34qnhRmmSxhhhlZ0jZVjaaTy4FklRDKG3BTR
v2X9Wm+Aj/EfXNZ8P+EfD13PNaaCmsPcC78VTwozTJYwwwys6RsqxtNJ5cCySohlDbgv2B+z
58VdE8Z/szfsqeHYdW+D93pPg7xJq1v8QrHxkdAE9hZT6tBcZiXVQJSj2ryEvZ5yV2k70AXq
7L4q/D3xn4W+A/h3wFq3wfu/hr4O+IHiO38R2PjI6MJ7DQp9ejuLYxLrQF0Uewckvb5cldrn
zEwrGfmTXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjB
naMkhjjZhj5p8d/+EY/4Xh4y/wCEI/5Ez+3b3+wP9b/yD/tD/Zv9d+9/1Wz/AFnz/wB7nNfc
H7NnxR0HxD8B/wBku2sta+Ghf4deJNQj8Xf8JHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCyg
xyTxffJAQj51+G3/AATY+IfxD0/XZJrnwr4butH8VzeBLez1jVRFJrPiCKGWZtMt3jV4RKRF
tV5pIoneWNEkYtgfP1foD+1l8ctH+Jn/AATx+JaaJ4i+H96viD45ap4msLLz9Lj1m40OZ5fL
vPsz4vFlNyyruZBciA7Ti24r8/qAPYP2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKC
doITO1pAVRjJceWC23AVQV3um+Pdb/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDW
DayXS2FuQjIkrRooU3Dwxu80aK5YkD0v/gl5p9rpGh/HDUNT8ReCtCg8T/DHXvCWlprHinTd
NnvNSuFtmhiENxOkgRgDiUqIsqwLgggauveIj4c/4IzXngi41z4aSa/YfE57uXTLXVdEutSb
TUhMBuYxE7TSP9sG0TIWla3xhjaEUxnxpXsH7Gf7Dnj39uv4h3Ph/wAEWdqqadB9o1HVdQd4
dO0xSG8sSyKjtvkZSqIqszYY42I7L4/X3t/wSF/bTi0H4t/DTwJ4ls/hp4a8E+BZ9X1yXxDq
GqS6Pci5ubWeHz5S95Ha3dx++jtk3wSyJAW2bQruEI+af2M/2HPHv7dfxDufD/giztVTToPt
Go6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfH6/Qv8A4Jg/tlxeFv2kvBvw/wDEuj/B/wAE
+CfAGq69rMurJ4gls4rW5uIbiDes76ibXUHHmx2sMjrcyC2yUcgPLXwV47t/sfjjWYvselaf
5V9On2XS7z7ZY22JGHlwT+bL5sS9Ek82TcoB3vncQD2D4VfsD698SvBnw91q98W+CvCKfFfV
Z9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx1dF/4JoeMll8LWvijxH4K8B6t438S
X3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/QH7NnxR0HxD8B/wBku2sta+Gh
f4deJNQj8Xf8JHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJA9B1/9ofwb8bfF/wX
1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhWM/M
nx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK9g+FX7A+vfErwZ
8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMeJ/ay8b6Z8TP2
qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAa+y/2bPijoPiH4D/sl21lr
Xw0L/DrxJqEfi7/hI9dsNMvfDULeI9N1RLq1S7nid3aC3ZfMgWUGOSeL75ICEfP+i/8ABNDx
ksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDnwrx34
I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK/TbX/2h/Bvxt8X/BfW
vC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSF/PP8A
ay8b6Z8TP2qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAaAOg/Yz/AGHP
Hv7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZdb4VfsD698SvBnw
91q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx+gP+CQv7acWg/F
v4aeBPEtn8NPDXgnwLPq+uS+IdQ1SXR7kXNzazw+fKXvI7W7uP30dsm+CWRIC2zaFdx1fwI+
KOm6V4F/ZwstB1r4P6WPBPxA1ifx7Ya1ruj3aeGoW1e1lj/suXVp5ZVt/sqlkl06Vg7IXMjz
ZYsZ+enjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV7B8Kv2B9
e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+f/ALSm
vaN4p/aM8f6n4cvLrUfD2o+JNRutLu7qaeae6tXupGhkke4JmZ2QqS0pMhJJb5s19q/sm/Fn
TbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1HRxd6HC+p2c0clkNWbzYEa1Uvu07aDIrMf3wJpCPl
T4mfsUeKvhN8BLrx9q9/4fEGl+MrjwHqukQzzPqOlarAkskkUn7ryHQJFu3wzSKfMQZzuC+P
1+oP7Onx78Nv8N9as/CvxB8K22g6x+0fealrNn4m8TWtvJrfgy5tRBcvdRapKst3FLFIAwkV
5WcbsGRNy/nT8d/+EY/4Xh4y/wCEI/5Ez+3b3+wP9b/yD/tD/Zv9d+9/1Wz/AFnz/wB7nNAH
oFt+xbLYfAnwP8QvEnxF+H/g/RfiH9v/ALGh1JNWnupfsVx9nn3raWM6ph9pGW5DjuCBleLv
2TtQ8Nfs0L8V7PxR4V8QeFm8Vy+Dytgb1LpbtIpJ1cpcW0Q8p4UWRSG3YmQMquHRPdf2N/Gf
ijxD4c+DWm6740+BWt/CrQPEj2+peHvFSeH01Hw1ZSX0U18SNUhSZkuEkZ1e0klzs25R0CD2
v9lf4veDfDPwbvfD/gLxr4K0TwfP+0Pc3F9pGueILOxTUfA81mttMJrbUpFe5t3gdV2OryFl
BA8yPKsZ+ZNewfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqanqeoO8UE7QQmdrSAqjGS48sFt
uAqgrvdN8e7ivjv/AMIx/wALw8Zf8IR/yJn9u3v9gf63/kH/AGh/s3+u/e/6rZ/rPn/vc5r6
L/4Jeafa6Rofxw1DU/EXgrQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4II
CEef/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGO
NmGPj/jvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV+gHwK8caO
PhF+yxosWu/CqW6+FnivU7Pxk+s+J9LtpvDif8JLp2pLd2LzXCCffDbMontfOR4pJ0BJbj4p
/ay8b6Z8TP2qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAaAOg/Yz/Yc8
e/t1/EO58P8AgiztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZdb4VfsD698SvBn
w91q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx+gP8AgkL+2nFo
Pxb+GngTxLZ/DTw14J8Cz6vrkviHUNUl0e5Fzc2s8Pnyl7yO1u7j99HbJvglkSAts2hXcdX8
CPijpuleBf2cLLQda+D+ljwT8QNYn8e2Gta7o92nhqFtXtZY/wCy5dWnllW3+yqWSXTpWDsh
cyPNlixn56eO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRXsHwq
/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj5/+
0pr2jeKf2jPH+p+HLy61Hw9qPiTUbrS7u6mnmnurV7qRoZJHuCZmdkKktKTISSW+bNfav7Jv
xZ021+Bv7K9voPiX4aRjwd4y1G68exeLdR0cXehwvqdnNHJZDVm82BGtVL7tO2gyKzH98CaQ
j518Zf8ABPbWPhJ4A0DXvH/jz4f+Av8AhI77VtNtbDUjql5dJcaZeNZ3iObGyuIhtlXgiQhg
wIJ5x4Tr2nRaRrl5aW9/a6pBazvDFe2qyrBeKrECWMSokgRgNwDorYIyqnIH3t4M+KN/8Svj
T4Z8n4gfBTxZ8CbH4j6oyaT40fRn1jRNHn1gXFxLK+uRLey/aoJfN82GWaRipDssqbB8U/Hf
/hGP+F4eMv8AhCP+RM/t29/sD/W/8g/7Q/2b/Xfvf9Vs/wBZ8/8Ae5zQB0H7JX7L+rfti/Gu
x8BaBrPh/Rtd1SCeaxOsPcJBdtEhleINDDKVfylkcFwq4jYbtxVWNR/Zf1ay/ZMsPjFHrPh+
78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf8A1qA4YOF+4f8AgmZ8d/A/wK8D/AK903xl
4V8J2Euu69/wtpbjVrezvr25aN7bRPOilcXE9oi3SkeQrW0TGSWXY0cki5P/AAT++Ikfw2/Z
H0fwefHHw/0i6g+OQ/4THS9S8VaTHa6n4ZfTY7TUN6TzeTe2jhmUeX5gcqHjyUDBjPzpr2v4
SfsYp8XNI8CvD8VfhVpWtfEO+bTdL0G7vNQn1SG4FyLZEuY7WzmW28x2RkMrqGRwwOA23z/4
7/8ACMf8Lw8Zf8IR/wAiZ/bt7/YH+t/5B/2h/s3+u/e/6rZ/rPn/AL3Oa9g/Ym17wr8IfhJ8
WPijPeeH/wDhaHgODSh4CsNWmhkQ3V1dNFcX0No5BuLi0iUSR5DxxMRI6MVQqhFX4Gf8E1Pi
X+0P+0v4v+GPhuHSpb/wJfXVhrmszTSpo9g8EskILS+WXPmyRMI1EZdgC20KkjJ8/V+gP/BK
P9vG4vPjt4A8MeOf+Ff6X4b8L32teJtQ8Wa1rU+mahc315b3Eb3lzJLepb312zTpArSwyyrC
z7doDuPhTx3b/Y/HGsxfY9K0/wAq+nT7Lpd59ssbbEjDy4J/Nl82JeiSebJuUA73zuIB6V8E
/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn+k
D/g1Y8Eap8M/2Do/Det232LWvD/ifV9Nv7fzEk+z3EN9NHIm5CVbDqwypIOOCRX4Vfsw+BbX
4Yfs2+GvEvw78Z/DSx+LfxHnvdN1PXdf8Y6bpFz8MNNWb7MTbwTTCb7RdIZHa6jRpY4AUijD
SeY37q/8GrHhX/hBf2Do9E/tHStY/sfxPq9j9v0u4+0WN95V9MnnQSYG+J9u5GwNykHAzXo4
D4K3+H/26JyYven/AIv0Z9z/ABZ/4/rj6mvmXw//AMjBq/8A2Drb/wBSHxXX018Wf+P64+pr
5l8P/wDIwav/ANg62/8AUh8V17GF/wCXfr+h4mM2n6fqj8rf+Dij/k634b/9k7g/9POr0Uf8
HFH/ACdb8N/+ydwf+nnV6K83E/xp+r/M78N/Bh6L8h3gH9rPU/DXxz+Bnh1JnEH/AAjfw/h2
g9pNE0kn/wBCNfD3x2/ZPf49/H3w9N/wsf4a+H9Y8fwaFpejaNqN9d3Go3EzaXpsStLHZ21w
LVHllCobkxF9rMAUAc/SfhvwPNqH7WHwPuwp2jQfh02fpoWj/wCFcl418A2vwq0fw94i+Hfj
P4a2Pxc+IujaZpeqa74g8Y6bpFz8MNNXS7K2Jt4JphN9oukMjtdRo0scAKRRhpPMaMXKUqUU
/L8jfCRiqsmvM+PfiZ+xR4q+E3wEuvH2r3/h8QaX4yuPAeq6RDPM+o6VqsCSySRSfuvIdAkW
7fDNIp8xBnO4L4/X6Q/sN+NLX4Pfsx2fgSD4j/DRX0748tD4tguPFGmw6dr/AIXOnpZX8ojv
JEW8spVJ2gIxbCsi70BX4J+O/wDwjH/C8PGX/CEf8iZ/bt7/AGB/rf8AkH/aH+zf6797/qtn
+s+f+9zmvLPQPQPhJ+xinxc0jwK8PxV+FWla18Q75tN0vQbu81CfVIbgXItkS5jtbOZbbzHZ
GQyuoZHDA4Dbeg+KH/BOXU/gb4c03U/G/wAS/hp4Wg1nVdZ0exW4GsXL3M2lXz2V0wFtp8oV
PMUFSxBZXU4B3Kp+xNr3hX4Q/CT4sfFGe88P/wDC0PAcGlDwFYatNDIhurq6aK4vobRyDcXF
pEokjyHjiYiR0YqhX0v9lX4o+NviVB8J/wDhMPiB8FPFnw9sfFcza5pPjR9AfWNEt59QjuNR
llfVoluJftQleXzbWWYsVILK6BAxnxpr2nRaRrl5aW9/a6pBazvDFe2qyrBeKrECWMSokgRg
NwDorYIyqnIHsH7Pf7B/i/8AaP8AhvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSV
o0UKbh4Y3eaNFcsSB5/8d/8AhGP+F4eMv+EI/wCRM/t29/sD/W/8g/7Q/wBm/wBd+9/1Wz/W
fP8A3uc19Lf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/
ACOM/ZqQjx74J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobk
xF9rMAUAc+a+O/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX1r+
zD4Ftfhh+zb4a8S/Dvxn8NLH4t/Eee903U9d1/xjpukXPww01ZvsxNvBNMJvtF0hkdrqNGlj
gBSKMNJ5jfJXjvwr/wAIL441nRP7R0rWP7Hvp7H7fpdx9osb7ypGTzoJMDfE+3cjYG5SDgZo
Aya9L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfs
v9ln9pW48BfsdfArS/hdrHwq03xJpeu6vJ4y/wCEm8bT+F47K4a+ge0ubqCK/szqMRttgYtF
djZB5QGQ0Z1v2Ofjvp8f7P5sLDxl8KtAj1b9oCXUfFGirq1lo+j33hS5sUgvljsdReNpNPZJ
CscLRlwETCCSL5WM/NOiur+O/wDwjH/C8PGX/CEf8iZ/bt7/AGB/rf8AkH/aH+zf6797/qtn
+s+f+9zmvuv9ln9pW48BfsdfArS/hdrHwq03xJpeu6vJ4y/4SbxtP4Xjsrhr6B7S5uoIr+zO
oxG22Bi0V2NkHlAZDRlCPkn9jP8AYc8e/t1/EO58P+CLO1VNOg+0ajquoO8OnaYpDeWJZFR2
3yMpVEVWZsMcbEdl8fr9Fv8Agm9+3ZaS/tXeGfCuu2HwU8LeAfCmu+IfEh16yurnw5ZpPeR3
MazQxXF1DDPxPFbQJNbPPFanaFTY7D8//Hdv9j8cazF9j0rT/Kvp0+y6XefbLG2xIw8uCfzZ
fNiXoknmyblAO987iAelfsZ/sOePf26/iHc+H/BFnaqmnQfaNR1XUHeHTtMUhvLEsio7b5GU
qiKrM2GONiOyn7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhj
jYjsv0t/wSF/bTi0H4t/DTwJ4ls/hp4a8E+BZ9X1yXxDqGqS6Pci5ubWeHz5S95Ha3dx++jt
k3wSyJAW2bQruNX/AIJg/tlxeFv2kvBvw/8AEuj/AAf8E+CfAGq69rMurJ4gls4rW5uIbiDe
s76ibXUHHmx2sMjrcyC2yUcgPLTGfnpXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/
kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4/47t/sfjjWYvselaf5V9On2XS7z7ZY22JGHlwT
+bL5sS9Ek82TcoB3vncfvX9k34s6ba/A39le30HxL8NIx4O8ZajdePYvFuo6OLvQ4X1Ozmjk
shqzebAjWql92nbQZFZj++BNIR8/6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne
2d2llcAPY21zFGi3T+UGlkTJVmGUw58K8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hka
ORNyEq2HVhlSQccEivtX4F/H/V9Q/aobV7bx98P/APhRPg74j6l4isrnxtd6ZqGrWWn/AGsX
szWMeoLLrHm3EUce026b3uH3FhL5jj5K/aU+JFj8Y/2jPH/i/TIrqDTfFXiTUdYtIrpVWeOG
4upJkWQKzKHCuAQGIznBPWgDia9g+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13cajcTebH
CjSx2dtcC1R5JQqG5MRfazAFAHP1t+yz+0rceAv2OvgVpfwu1j4Vab4k0vXdXk8Zf8JN42n8
Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhozynw58Paf4S8Af8ACbfDLxj8FNN+KfxW13V7
W/1pvE9l4es/hbpZvHgH9mWV1Il1F9pjZ3WcRefDagRxxK772Yz4q8d+CNU+GfjjWfDet232
LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivSv2M/2HPHv7dfxDufD/giztVTToPtGo6r
qDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfNfHfhX/AIQXxxrOif2jpWsf2PfT2P2/S7j7RY33
lSMnnQSYG+J9u5GwNykHAzX3B/wSF/bTi0H4t/DTwJ4ls/hp4a8E+BZ9X1yXxDqGqS6Pci5u
bWeHz5S95Ha3dx++jtk3wSyJAW2bQruEI+f/AIVfsD698SvBnw91q98W+CvCKfFfVZ9H8I22
sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8f8AHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJ
J9nuIZGjkTchKth1YZUkHHBIr9C/gR8UdN0rwL+zhZaDrXwf0seCfiBrE/j2w1rXdHu08NQt
q9rLH/ZcurTyyrb/AGVSyS6dKwdkLmR5ssfhT9pTXtG8U/tGeP8AU/Dl5daj4e1HxJqN1pd3
dTTzT3Vq91I0Mkj3BMzOyFSWlJkJJLfNmgDia9r+En7GKfFzSPArw/FX4VaVrXxDvm03S9Bu
7zUJ9UhuBci2RLmO1s5ltvMdkZDK6hkcMDgNt+gPhN8Udf8A+GOv2fNL+DPxT8K/DjxJoGu6
3J43+2eLrPw7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJBjrzX9mHxN4V+Fvhz44/FefWPBWo
/FDwRPYnwFG8MNvp13dXd9JHcalZaa8cJd7aICaBDCI4Nys8AKoEYz5++LPw4vvg58VPEvhD
U5bWfUvCuq3Wj3ctqzNBJNbzPC7RllVihZCQSoOMZA6Vz9W9e16+8U65eanqd5dajqWozvdX
d3dTNNPdTOxZ5JHYlmdmJJYkkkkmvuD4TfFHX/8Ahjr9nzS/gz8U/Cvw48SaBrutyeN/tni6
z8Ox/aJL62ezub6CeRDqES2yqMrFcDZG0WCQY6Qjwr4VfsD698SvBnw91q98W+CvCKfFfVZ9
H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8f8d+CNU+GfjjWfDet232LWvD99Ppt/
b+Ykn2e4hkaORNyEq2HVhlSQccEiv0L+E/xZ0Hxb8Pv2ZY7LxL8H9Rf4f+MtXHi67v8AUbDQ
BocL+J9P1VL7Tra7a0dUkgt2C+RAQsMs0OxHyi/D37WXjfTPiZ+1R8S/EmiXP23RfEHivVNS
sLjy3j+0W813LJG+1wGXKMpwwBGeQDQB0H7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07T
FIbyxLIqO2+RlKoiqzNhjjYjsut8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhn
itnYfZLWdIU8+VYwZ2jJIY42YY/QH/BIX9tOLQfi38NPAniWz+GnhrwT4Fn1fXJfEOoapLo9
yLm5tZ4fPlL3kdrd3H76O2TfBLIkBbZtCu46v4EfFHTdK8C/s4WWg618H9LHgn4gaxP49sNa
13R7tPDULavayx/2XLq08sq2/wBlUskunSsHZC5kebLFjPz08d+CNU+GfjjWfDet232LWvD9
9Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivYPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw
9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHz/wDaU17RvFP7Rnj/AFPw5eXWo+HtR8SajdaX
d3U08091avdSNDJI9wTMzshUlpSZCSS3zZr6A/4J7/GHxho2ueA5Na8Y/DTTvhV8PfEh1idf
Fkui3d/pEKNDd3i6bBcJLqSPMIlEYsowGuGBBD73CEfL/jvwRqnwz8caz4b1u2+xa14fvp9N
v7fzEk+z3EMjRyJuQlWw6sMqSDjgkVk1+pmm/tuReJvAvgnWvg3qfw00bUtV+IHijXPE8fjD
xnL4VfTmutXS5sZ762g1G1/tBPsjxh/kvFCwGJejI35p/FnWYvEfxU8S6hbp4fjgv9VuriJN
BtZbXSlV5nYC0hlVZI7fB/do6qyptBAIIoA9W+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNt
rD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMdXRf+CaHjJZfC1r4o8R+CvAereN/El94V0DT
dYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP0B+zZ8UdB8Q/Af8AZLtrLWvhoX+HXiTU
I/F3/CR67YaZe+GoW8R6bqiXVql3PE7u0Fuy+ZAsoMck8X3yQPQdf/aH8G/G3xf8F9a8L+I/
hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IVjPzJ8d+CNU
+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEivYPhV+wPr3xK8GfD3Wr3x
b4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHif2svG+mfEz9qj4l+JN
Euftui+IPFeqalYXHlvH9ot5ruWSN9rgMuUZThgCM8gGvsv9mz4o6D4h+A/7JdtZa18NC/w6
8SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SAhHz/ov/AATQ8ZLL4Wtf
FHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw58K8d+CNU+Gfj
jWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv021/9ofwb8bfF/wX1rwv4j+G
l9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhfzz/AGsvG+mf
Ez9qj4l+JNEuftui+IPFeqalYXHlvH9ot5ruWSN9rgMuUZThgCM8gGgDoP2M/wBhzx7+3X8Q
7nw/4Is7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR2XW+FX7A+vfErwZ8PdavfFv
grwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfoD/gkL+2nFoPxb+GngTx
LZ/DTw14J8Cz6vrkviHUNUl0e5Fzc2s8Pnyl7yO1u7j99HbJvglkSAts2hXcdX8CPijpuleB
f2cLLQda+D+ljwT8QNYn8e2Gta7o92nhqFtXtZY/7Ll1aeWVbf7KpZJdOlYOyFzI82WLGfnp
478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFewfCr9gfXviV4M+
HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPn/wC0pr2jeKf2
jPH+p+HLy61Hw9qPiTUbrS7u6mnmnurV7qRoZJHuCZmdkKktKTISSW+bNfav7JvxZ021+Bv7
K9voPiX4aRjwd4y1G68exeLdR0cXehwvqdnNHJZDVm82BGtVL7tO2gyKzH98CaQj5U+Jn7FH
ir4TfAS68favf+HxBpfjK48B6rpEM8z6jpWqwJLJJFJ+68h0CRbt8M0inzEGc7gvj9fqD+zp
8e/Db/DfWrPwr8QfCttoOsftH3mpazZ+JvE1rbya34MubUQXL3UWqSrLdxSxSAMJFeVnG7Bk
Tcv50/Hf/hGP+F4eMv8AhCP+RM/t29/sD/W/8g/7Q/2b/Xfvf9Vs/wBZ8/8Ae5zQB2v7Pf7D
nj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuP2M/2HPH
v7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfVv+CXmn2ukaH8cN
Q1PxF4K0KDxP8Mde8JaWmseKdN02e81K4W2aGIQ3E6SBGAOJSoiyrAuCCB6X/wAEo/2uv+Ff
/HbwB8NPFtt8KvD3hD4bX2tapdeI5te+ws13Nb3FubhplvlsdQlPmx28b+VMwtyTGQoaQMZ+
f1el/slfsv6t+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXCriNhu3FVbiv
Hdv9j8cazF9j0rT/ACr6dPsul3n2yxtsSMPLgn82XzYl6JJ5sm5QDvfO4/ot/wAEzPjv4H+B
Xgf4BXum+MvCvhOwl13Xv+FtLcatb2d9e3LRvbaJ50Uri4ntEW6UjyFa2iYySy7GjkkVCPj6
2/YtlsPgT4H+IXiT4i/D/wAH6L8Q/t/9jQ6kmrT3Uv2K4+zz71tLGdUw+0jLchx3BA8f17To
tI1y8tLe/tdUgtZ3hivbVZVgvFViBLGJUSQIwG4B0VsEZVTkD7L/AGVda8XCD4T+HPEvjb9n
/X/hl4R8Vzadq2g+IJ/DU914ctDqEcl+yy38YaaK4R3kSawmnVwuA6siqPlT47/8Ix/wvDxl
/wAIR/yJn9u3v9gf63/kH/aH+zf6797/AKrZ/rPn/vc5oA5SiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAr+or/g0d/5RraF/2GdQ/wDSqWv5da/qK/4NHf8AlGtoX/YZ1D/0qlr0cB8Fb/D/AO3R
OTFb0/8AF+jPv34s/wDH9cfU18y+H/8AkYNX/wCwdbf+pD4rr6a+LP8Ax/XH1NfMvh//AJGD
V/8AsHW3/qQ+K69jC/8ALv1/Q8TGbT9P1R+Vv/BxR/ydb8N/+ydwf+nnV6KP+Dij/k634b/9
k7g/9POr0V5uJ/jT9X+Z34b+DD0X5Efw81G0h+PfwNRiu/8A4Rz4f/8Apj0nFfDXxo/ZQv8A
48+OviN4zHijwp4U8M/DrQvCP9r3mtG9bH27TLWGDy0tbad3/eJg/KMblPTJH0houp3cf7Xf
wNjUt5f9g/Dkfh/YWj5rz6O6sPEXwL/ac8GnX/CuleJvFnh74bf2RZ61r9lpH9o/Z7W3mn8t
7qWNDsjGT83GVHVgDlinelFen5HRhY2qyfqfL37Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp
6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3
Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wAEvNPtdI0P44ahqfiLwVoUHif4Y694S0tN
Y8U6bps95qVwts0MQhuJ0kCMAcSlRFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TW
PEGreO9Jsm+GWnxXBtGazU3BaSW6TzJDewCR0txtgTMolfzj0DzTRf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphzV8P8A/BPrUNQ8Q+Hv
D2q/Ev4VeHPGfiPXbnw1F4Yu9RvbzVLLUIL02RhuRZWk8UG+XBR5JAjowZWIDY9K/Yy8b+MP
hZ8VPDmka18Q/g/P8KvhJ4yuLqe+1i90XUzbQ280c95JoyXEcmpFLkQKYWsogJJnVhtfey1f
h18ZfA8vjH9pf9oCzn0qDx9puux6v8N9M8QPbmRJdS1SbzLxbMsRPd2cJWRMGSKJzvZX2oQA
co//AAS/8WaN4h0PSvEHjL4f+GL/AMW+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJ
jgttC4Y5XjL/AIJ7ax8JPAGga94/8efD/wABf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq
8ESEMGBBPOPQP2FPj/4+1Dxx4Q1fxb4++H//AAr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1jHdLL
qnm3BjG02afPcPuDB97j0DwZ8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzW
BeXYlXUohKfPinL7rKacZQruVowoAPn/AMD/APBPbWPGekeCb+Xx58P9Ftfibrt1oXgx746o
3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3
EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv8AWNHXxZLpd3f6RpSX
S3brpsF8kuro80EUQjFtGGa4YHIk8xx8/fECzsf2tv2jPjF4v0zXvD/hXTZ59a8aWkXia9Wx
n1CE3TTJYwBd6yXrrKAsQbDFXw3GShB8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m
82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL
3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw88Z2ni34KDQbrxXZN4jtfEF
5o0WseHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37akfxG8I+Ofg/qHgDXPidJd35
8Vajpv8Ab+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH43v/D/
AIDg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/iHc+H/
AARZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPxy8F/Fyx+Cdh4V8RfCrX
PA/h/wCI/iZvFyePJ9HkvodHuvECXUMqnXP9MfzbKRmaWAmRmBDsZUwvP/8ABPz9tPwloP7Z
WjeBNOs/hp4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5
U+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMM
cnxF+xR4q+HPw81zxH43v/D/AIDg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSR
F83O7b7X+xl8WfGHhv4qeHBrXiX4P+HfhV8OPGVxqs9nrGo6Lq58PwxzR3V5DowuGutSZJBE
qwvZFxJMVYSF98le7aj+054W/aWsfgneaLqPwqk8LQfEfxNf+N9M8eDw9HfaXpl/4gS9QKmp
kyfPZzOWaxZgWUruLxgAA+FP2RP2TtQ/bJ+J8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilml8t4L
aZF8uOFifNZM7lC7jkDymvtb/gm/c+B/B3/BV2bxlo2v+FfDnwn8L67rP2G81rX7fTNmn3Ft
fw2PlpeypcTZUxA4V3Tcpk25yfimkI+gfgb/AME9tY+O/wAIPDnjOz8efD/R7DxR4rj8EWdr
qR1QXQ1iUborVxDZSIN8ZVxIHMYDgM6sGUZPwZ/YY8SfFzxx8SvDdzr3hXwfrXwnsb3UvEFv
rUt0/k29lI0d48bWlvcLJ5LhQQDlt6+WHAbb9Qf8E/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnq
F3p2t3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJOV/ZKTQfD37Rn7W9uPHvh+803VvBv
irwtoeteJPFlhDP4lurq6AtJPtE8sa3DzLCzvOv7sFgzFd65Yzx/4G/8E9tY+O/wg8OeM7Px
58P9HsPFHiuPwRZ2upHVBdDWJRuitXENlIg3xlXEgcxgOAzqwZR4p478Eap8M/HGs+G9btvs
WteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfYHwa+OV/8Av8AgkzBdeGfEXw/g8cWnxWX
xNbaffT6NqeqWlktilql3FZXXmSRyrdRrtZIxMqZkGIiXPxpr2vX3inXLzU9TvLrUdS1Gd7q
7u7qZpp7qZ2LPJI7EszsxJLEkkkk0hFSvVvgJ+ydqHx3+GHjzxl/wlHhXwp4b+HP9n/2xea0
b1sfbZXhg8tLW2ndv3iYPyjG5T0yR5TX1X+xTc6f4h/YS/aZ8G/2/wCFdK8SeK/+EW/sez1r
X7LSP7Q+z6hNNP5b3UsaHZGMn5uMqOrAEA+dPhX8K/EXxv8AiHpPhPwnpN1rniHXJxb2Vlbg
b5WwSSSSFVFUMzOxCoqszEKpI9W+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5b
gWNnPCJIGd0jASSWWOGVpolSRi3Gr+xT8VPDy/Dzx58Lb/VrX4ca38TII7Ky8e5ISNQcnSNQ
chmg0y5YL5k1vsZWCmbz4AUT0DXvER8Of8EZrzwRca58NJNfsPic93LplrquiXWpNpqQmA3M
YidppH+2DaJkLStb4wxtCKAPH/2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1
g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g
1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7
Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf
2fyvtHk5+1eb9r+Tfjz/ACOM/ZqYzyn9nv8AYP8AF/7R/wAN5PFGlal4V0iwuNdHhbR01jUT
ayeINYNrJdLYW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jU
TayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T
9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1H/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP
9s/2X9n8r7R5OftXm/a/k348/wAjjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jU
TayeINYNrJdLYW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01j
UTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP
2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7Xp
P9s/2X9n8r7R5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNR
NrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG2
1h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPpf/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7
XpP9s/2X9n8r7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2Lxb
qOji70OF9Ts5o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL
7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOcrwd+wXq2veLNN8Naz49+GnhHxh
q/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDn7LPxu8D/EvU/gVc+D/ABN8
P9S0HwZ8R/EM2tXfi3xFb2WqaJp83i2x1a2vof7TniuZZZLSDDTKJXZJbiNv3jMB5T8Htaiu
v28Lj4r+GfG/wUvPA/iv4rT6pqi+IJ9JtdY0bT4tW89bhU1eOOeHzLeYyI9izPlcMVljCKAe
P6L/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZ
hlMOeU8RfsUeKvhz8PNc8R+N7/w/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4Ypb
iSRF83O7b9w6D8d/Cev618Frv4e+Mvh/e+HfD/xW8Rap4pm8b6tp0mpabp82uwXFrcWra65v
I/MsgJS9nh2lDs5NxuNGo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DK
p1z/AEx/NspGZpYCZGYEOxlTCgHwT+yV+y/q37YvxrsfAWgaz4f0bXdUgnmsTrD3CQXbRIZX
iDQwylX8pZHBcKuI2G7cVVvNK/Vb9kP47/CD4FeOPhpe/Dbxl4V8J/D2Xxz4r/4TtbjVls76
9gaSa28N+dFduL2e0SC6iI2K0ETGSWbY8cki8V+zt8e774P/ALKvwX8NfDbW/g/ZeLPDniTW
4/G7658QG0K0tbr7fCba6mW11G2TVbc24X94qXkZjgEaZ+ZGAPj79nv9hzx7+0n8K/HvjfQr
O1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3cVpHwS8T6/8INX8eWGmfbfC
3h++h07VLq3uYpJNNlmBMLTwBjNHFIQUWZkETOCgcv8ALX1X+wrc6feeOP2lfEN3r/wq8PWH
jbwN4o8NaNFDr9loVjc6hdSQSQQ2dpeSxXEVoy5Ebyxqiqu1mVlYDz/9ifUdP/ZZ8T+J/iP4
r8V6UmmeGvtHhybwfpep2Wo3XxBllQrJp8kY86E6UwAaa7dHiIVRBvmKtGhHn/7In7J2oftk
/E+LwboXijwroXiS+3f2dZ60b1P7S2RSzS+W8FtMi+XHCxPmsmdyhdxyB5TX1t/wSG1rSdF/
4KE6B491XUPBXgfwnoE9/NeDUPENvp8Fgt1Y3kUMUC3lx586K7KmVMrKCpkbncfkmgAor72/
4J6fGHwr4D/Zzs7D4i+MfBS+IbnVbo/Bw6rLDqB+Huqm1uUk1K9AST7DZSXb2ZVJwymaL7SI
AqG4HbfBT9pnXPBH7OPwj03wV4o+FT+PtO8V69N8Qr7xN8RZNHj/ALQfUYnhvro2+pWw1iJ4
cFptt6jpDsXPzIzGfmnRX6Lfs6/tXaZ4e+EHgi4Hjzwr4e1W1/aPVWg0XUn0u10/wtdCKe8i
toJSk9vokk6K7RSKsZKJ5i714+Kf2sv7D/4ao+Jf/CM/2V/wjf8Awleqf2T/AGX5f2H7J9rl
8nyPL+TyvL27Nny7cY4pCPP6K+9v+CffxX1HT/gJ4T8Nn4l+H/Aump4ku7631rQfG1j4d1fw
vc7FG7WdMvjBDrllI4tZBsaSRYoZohL0tx6B+xd4v1mD9kzwb4kj8b+H9Ng8LftDi1vdcXWY
PDenL4ekggu7+1tVnNqEsp5FWc2Mcab9gJgyhCuw7H56fCv4V/8AC0v+Ej/4qPwr4c/4RzQ7
rXP+J5qH2P8AtTyNv+h2vynzbuTd+7i43bW5GK5Svuv9kT4geG7r44ftfp4e8UeFfDXgHxx4
U8S6b4bsr7W7Xw/Y6jcXVw/9lpFbXMkIG2EzKpKAQLIVYx+YA1v9hz9pSx+Hf7Jnwit5/H9r
oeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynEDSIGfLDNIR8E16XqP7L+rWX7Jlh8Yo
9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC1P2sv7D/4ao+Jf/CM/2V/w
jf8Awleqf2T/AGX5f2H7J9rl8nyPL+TyvL27Nny7cY4r7W/4J6/Gr4f/AAf/AGD/AA3pPi3V
/Ctv4k8R/Faa58PXUmp2N5deB7ibSWs7XxDcWEkoBitrmNsrc7AoZZhkiLcAfCnwr+Ff/C0v
+Ej/AOKj8K+HP+Ec0O61z/ieah9j/tTyNv8Aodr8p827k3fu4uN21uRiuUr7W/YJ1u78K+OP
2ptE8Q/Erwrc/wDCReBte0A3994ytrex8Wa3LIyW1xFJdSx/at/+lMtwRhVnJZk84btX4TfF
HX/+GOv2fNL+DPxT8K/DjxJoGu63J43+2eLrPw7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJB
joA+X9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/AFqA4YOF
80r9LP2Mfjl8P/hV+yPaad4t8RfD+48SeKPjJeXnh7VbCexEfhG4n017O18SnSZPJEdpBcoz
LFcxQCNHSUIpWIH5+/Zp/a10D9krxx8ctM8b6PpXxJ1rxVoeveHP+Ensb+8v/wC3rq4kjXEs
32uFZNPmeJ5WnjQXTeYCr4OAxnFfCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ
4rZ2H2S1nSFPPlWMGdoySGONmGPj/jvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJu
QlWw6sMqSDjgkV+gHwK+MPhvxz8Iv2WDpV38KtMj8CeK9TbxTZazr9rp03g63k8S6dqsUtiu
oXKzyf6PbmMSoZyYnnjZjITjoPhN+0x4C1DXPH+qeKfHfgq7n8U/EDXtU+Bl1qpS7PgW9na+
b+1b1Gjd7CyluJbFljuVP76P7R5ChDOAD4p/Z7/Yc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9
Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G
4m82OFGljs7a4FqjyShUNyYi+1mAKAOfa/8AgnvaXOna5+0Tqfi/xj4Kg1LxV8P/ABL4Uiu9
Y8caWs+sa1cNAww8tzumSVt5F0CYXO4+aeat/s/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseI
NW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK4B4/8N/8AgnL8UfiPofxX1MaVa6Pp
vwag1A+IrvULjEH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/hNfZf8AwT38C2vwu1z9om01Pxn8
NI4L/wCH/iXwJpd7J4x021g1rUnaAQ/ZhcTRyPbyhSyXBRYiOrAggea/B7xv4Q/Y68MXHiyG
50rxh8bYr6ex0OySMXmj+BzC+w6rJLg29/dsRm1WFpbeMATOzuI4whFT4VfsD698SvBnw91q
98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8f8d+CNU+GfjjWfDe
t232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv0A+BXxu0/4l/CL9li5m8TfD/UtV8G
eK9Tm8c3fi3xFZWWqaJ53iXTtW+3Q/bZ4pZZZIoHDTRCXcktxGfnYgdB8Jv2mPAWoa54/wBU
8U+O/BV3P4p+IGvap8DLrVSl2fAt7O1839q3qNG72FlLcS2LLHcqf30f2jyFCGcMZ8U/BP8A
ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA56Hxl/
wT21j4SeANA17x/48+H/AIC/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegf
sUaP4i8B/tAeFfG2oePPgpqkd745hfxnJ4g1vQ7vWNK+yXyPPdLcaiNz+akkkqXOmzS+YVyX
8xFA9A8GfFG/+JXxp8M+T8QPgp4s+BNj8R9UZNJ8aPoz6xomjz6wLi4llfXIlvZftUEvm+bD
LNIxUh2WVNgAPn/wP/wT21jxnpHgm/l8efD/AEW1+Juu3WheDHvjqjf8JM9vcx2rTxCGykME
TTSqq/ahC/UlFHNeKeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOO
CRX6QaH8Ufhz4htv2frb4ea18ND4P+HXxA16O/8A+Ej1210y98NaU3iuw1SzurVNRniuHdrK
3C+YiysY5J4m/eFgPgn9rLxvpnxM/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlG
U4YAjPIBpCPP6KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACv6iv8Ag0d/5RraF/2GdQ/9Kpa/l1r+or/g0d/5RraF/wBhnUP/AEqlr0cB8Fb/
AA/+3ROTFb0/8X6M+/fiz/x/XH1NfMvh/wD5GDV/+wdbf+pD4rr6a+LP/H9cfU18y+Hz/wAV
Bq//AGDrb/1IfFdexhf+Xfr+h4mM2n6fqj8rf+Dij/k634b/APZO4P8A086vRR/wcUf8nW/D
f/sncH/p51eivNxP8afq/wAzvw38GHovyOC8PWqf8NUfApsc/wDCP/Dvt/1A9Hr5P+L37J2o
fHfxj8QvGX/CUeFPCnhr4c6B4Q/te81o3rY+26XawweWlrbTu37xMH5Rjcp6ZI+ntK8UpB+1
h8C4P4hoPw6H56Ho/wDjXmkV7p/ib4FftN+D/wC3/CmleJPFvh74bHR7PWtfstI/tD7Pa280
/lvdSxodkYyfm7qOrAHHE/wo/L8jpwy/eS+Z8vfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqa
nqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e4+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13
cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHPq3/BLzT7XSND+OGoan4i8FaFB4n+GOveEtLTW
PFOm6bPealcLbNDEIbidJAjAHEpURZVgXBBA6D9n74WR/A34E6PqvgPxt8KrX4x+Ob7UNE1j
xBq3jvSbJvhlp8VwbRms1NwWkluk8yQ3sAkdLcbYEzKJX887zzTRf+CaHjJZfC1r4o8R+CvA
ereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHNXw//wAE+tQ1DxD4e8Pa
r8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQb5cFHkkCOjBlYgNj0r9jLxv4w+Fn
xU8OaRrXxD+D8/wq+EnjK4up77WL3RdTNtDbzRz3kmjJcRyakUuRAphayiAkmdWG197LV+HX
xl8Dy+Mf2l/2gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvMvFsyxE93ZwlZEwZIonO9lfahAByj
/wDBL/xZo3iHQ9K8QeMvh/4Yv/FvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4Lb
QuGOV4y/4J7ax8JPAGga94/8efD/AMBf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq8ESEM
GBBPOPQP2FPj/wCPtQ8ceENX8W+Pvh//AMK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5t
wYxtNmnz3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2
JV1KISnz4py+6ymnGUK7laMKAD5/8D/8E9tY8Z6R4Jv5fHnw/wBFtfibrt1oXgx746o3/CTP
b3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRy
JuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/pGlJdLduumw
XyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/TNe8P+FdNnn1rxpaReJr1bGfUITdN
MljAF3rJeusoCxBsMVfDcZKEHwT/AGQm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFG
ljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew
3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw88Z2ni34KDQbrxXZN4jtfEF5o0Ws
eHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37akfxG8I+Ofg/qHgDXPidJd358Vajp
v9v6RptvqnmpcltcVbsPLbS+as1pI8rFf3jCaMKGM+fvEX7FHir4c/DzXPEfje/8P+A4NL1W
90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wRZ2qpp0
H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPxy8F/Fyx+Cdh4V8RfCrXPA/h/4j+J
m8XJ48n0eS+h0e68QJdQyqdc/wBMfzbKRmaWAmRmBDsZUwvP/wDBPz9tPwloP7ZWjeBNOs/h
p4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5U+FX7A+vf
ErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMcnxF+xR4q
+HPw81zxH43v/D/gODS9VvdCsbTWJ5nu/Ed/ZmRbqGxS2imEqRSRiJrhiluJJEXzc7tvtf7G
XxZ8YeG/ip4cGteJfg/4d+FXw48ZXGqz2esajournw/DHNHdXkOjC4a61JkkESrC9kXEkxVh
IX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/430zx4PD0d9pemX/iBL1AqamTJ89nM5ZrF
mBZSu4vGAAD4U/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+zrPWjep/aWyKWaXy3gtpkXy44WJ81
kzuULuOQPKa+1v8Agm/c+B/B3/BV2bxlo2v+FfDnwn8L67rP2G81rX7fTNmn3Ftfw2Plpeyp
cTZUxA4V3Tcpk25yfimkIK+gfgb/AME9tY+O/wAIPDnjOz8efD/R7DxR4rj8EWdrqR1QXQ1i
UborVxDZSIN8ZVxIHMYDgM6sGUfP1foX/wAE/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnqF3p2t
3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJGho+f9F/4JoeMll8LWvijxH4K8B6t438SX
3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc2/BH/BL/wAWeK/DFtf3/jL4f+Gb
qfxy/wANn03UpdRkurbxAHZRZO1tZzQncAGEqyNFhhlwcqPdfg4tr8NfjJZroPxp8FeP/hzr
fxO1N/GGm+MNf021u9FWC8a2g161vbi5iuJb2aynNyl9pwjYSRICzlNgyrP4w6b+zV/wTt17
/hVXjHwVe3mlfGu48QeG01iXR9R1s6LHb/ZLXUBY3SGWK486OIgrBHMqlnCrESaAPh/x34I1
T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSKtfCv4V+Ivjf8Q9J8J+E9
Jutc8Q65OLeysrcDfK2CSSSQqoqhmZ2IVFVmYhVJGVr2vX3inXLzU9TvLrUdS1Gd7q7u7qZp
p7qZ2LPJI7EszsxJLEkkkk19AfsU/FTw8vw88efC2/1a1+HGt/EyCOysvHuSEjUHJ0jUHIZo
NMuWC+ZNb7GVgpm8+AFEQjK+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5bgWNn
PCJIGd0jASSWWOGVpolSRi3FT9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJd
LYW5CMiStGihTcPDG7zRorliQPYNe8RHw5/wRmvPBFxrnw0k1+w+Jz3cumWuq6Jdak2mpCYD
cxiJ2mkf7YNomQtK1vjDG0Iqr/wncf8Aw5K/4Rn+2fh//bH/AAsf+1P7J+16T/bP9l/Z/K+0
eTn7V5v2v5N+PP8AI4z9mpjPKf2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1
g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g
1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7
Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf
2fyvtHk5+1eb9r+Tfjz/ACOM/ZqAPNPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPr
U0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHV0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n
1O9s7tLK4AextrmKNFun8oNLImSrMMphz9Afsm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1
HRxd6HC+p2c0clkNWbzYEa1Uvu07aDIrMf3wJrttB+O/hPX9a+C138PfGXw/vfDvh/4reItU
8UzeN9W06TUtN0+bXYLi1uLVtdc3kfmWQEpezw7Sh2cm43GgD40+G/8AwTl+KPxH0P4r6mNK
tdH034NQagfEV3qFxiD7VZKzT2MDxB1muAqMflPlgbSzr5ke+p8JP2MU+LmkeBXh+Kvwq0rW
viHfNpul6Dd3moT6pDcC5FsiXMdrZzLbeY7IyGV1DI4YHAbb9AfsoX3hvUvjh+1d4k0/xd4V
tPC3jnwp4v8ADnhe48ReLrWyvtVuLu4iktAy39wt03mRkEzzjBYNvfeGryr9kSXw3+z94A+M
vxBv7/wrJ8WPhp/Ztr4Gs7y/tb61kvZ7x4Lm/tYldkvZbWNBLE6mSFCyylXwjAAytM/4J9ah
N8SIvB+o/Ev4VaL4pvfFd34PsNLm1G9u7q8u7e6W0LslraTG2ikmbbGbvyWcKXC7PmrnvgL+
xR4q+PP7Ud38Ho7/AMP+GfG1pPe2TQaxPMYHurPf59uJbaKZd6rFMwY4jIibDklA3q37BR13
RvjJ4J+I1x45+D+oWeueMra78XHxVqOkf2/pC295HLNcltWVZg8qSySrNYSOzFfmYSxhR9V/
swftKfC74Z/FTwD4h8E+P/D+ieD9W+IHi66+I1zqWueTquriaaeDw9JdC9cX9zbiK7jcsA8M
btLNPtkjlkUA/J6iv0h/Z2+Pd98H/wBlX4L+GvhtrfwfsvFnhzxJrcfjd9c+IDaFaWt19vhN
tdTLa6jbJqtubcL+8VLyMxwCNM/Mjavwx/aubRfgb8MI/hNqvwK0nXYvGXiK98XLceLLvwbo
1hNLqcctncCy+3WMt7ZfZSgVZYLkrDAsOxWV4iAfFPwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRd
G1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA5818d+CNU+GfjjWfDet232LWvD99Ppt/b
+Ykn2e4hkaORNyEq2HVhlSQccEivrX9kWO50D9pzQviHpnir9nSLwzrPxAS61+zdtLsH8OWt
vqCyGSyh1mKK5t7d4ZWeBrImQKiq+yWIIv0Dpv7WWnaX4F8Ej4F+JfhpvT4geKL7Xrvxh47v
vDros2rpLp97fRtqFpcamj2Ri3vPHdsVhMZG8PGwB8J/sZ/sOePf26/iHc+H/BFnaqmnQfaN
R1XUHeHTtMUhvLEsio7b5GUqiKrM2GONiOy63wq/YH174leDPh7rV74t8FeEU+K+qz6P4Rtt
Ye/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj9Qf8E3v27LSX9q7wz4V12w+CnhbwD4U13xD4
kOvWV1c+HLNJ7yO5jWaGK4uoYZ+J4raBJrZ54rU7QqbHYW/hP8SNB1b4ffsy2FlffB+wf4be
MtXg8XWt/wCKLCAeEYT4n0/U0m06S7u99whgtmjW4ge5DwvMm9nYmgD4e8S/st+O/CXhjxhr
F5oX+gfD/XT4c8SNb3tvcyaLe72jAnjjkZ0iaRGjWcr5LupRXLcV5/X2X8O/jD4V8D/tZ/Gb
4+T+MbWbwVdeJNZtrHwraSwtqPxHhvp5pFsZ7OdGMOmPEyPPPcQ4GFSJWuAPL+P9e1GLV9cv
Lu3sLXS4Lqd5orK1aVoLNWYkRRmV3kKKDtBd2bAGWY5JQipRRRQAUUUUAFegeCP2l/EXw/8A
DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAADz+igDW8b+Mrv4geJ7nV7+H
SoLq72b49N0u20y1XaioNlvbRxwpwoztQZOWOSST1X/DUHjP/hnj/hVX23Sv+EF+3f2p9g/s
Ow877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+igAooooA9A8EftL+Ivh/4YttIsNO+H89rab9
kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAAcr438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQ
bLe2jjhThRnagycsckknJooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/wCD
R3/lGtoX/YZ1D/0qlr+XWv6iv+DR3/lGtoX/AGGdQ/8ASqWvRwHwVv8AD/7dE5MVvT/xfoz7
9+LP/H9cfU18yeH/APkYdX/7B1v/AOpD4rr6b+LP/H9cfU18yeH/APkYdX/7B1v/AOpD4rr2
ML/y79f0PExm0/T9Ufld/wAHFH/J1vw3/wCydwf+nnV6KP8Ag4o/5Ot+G/8A2TuD/wBPOr0V
5uJ/jT9X+Z34b+DD0X5HgMEpH7a/wOHb+xPhv/6YdFrwr4nfsn6h8d/EPjvxl/wlHhXwp4b+
HPhzwd/bF5rRvWx9t0q1hg8tLW2nd/3iYPyjG5T0yR7hA2P22vgd/wBgT4b/APph0WsWxutP
8Q/AL9pfwb/b/hXSvEnizw58Nf7Hs9a1+y0j+0Ps9rbzT+W91LGh2RjJ+bjKjqwB5q/8NfI7
KH8R/M+Yf2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV
3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZ
gCgDn1b/AIJeafa6Rofxw1DU/EXgrQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKi
LKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseINW8d6TZN8MtPiuDaM1mpuC0kt0n
mSG9gEjpbjbAmZRK/Edh5pov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVw
A9jbXMUaLdP5QaWRMlWYZTDmr4f/AOCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqW
WoQXpsjDciytJ4oN8uCjySBHRgysQGx6V+xl438YfCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7
oupm2ht5o57yTRkuI5NSKXIgUwtZRASTOrDa+9lq/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b
6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQgA5R/+CX/AIs0bxDoeleIPGXw/wDDF/4t
8V6j4P8ADkV9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j4SeANA17x/wCP
Ph/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegfsKfH/x9qHjjwhq/i3x
98P/APhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6WXVPNuDGNps0+e4fcGD73HoHgz4961+0L
8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXmsC8uxKupRCU+fFOX3WU04yhXcrRhQAfP
/gf/AIJ7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/wCEme3uY7Vp4hDZSGCJppVVftQhfqSi
jmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo
37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn
74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16tjPqEJummSxgC71kvXWUBYg2GKvhuMlCD
4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc63
g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXM
eHPa/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEdr4gvNGi1jw9b2l1EzSqNURJY98UjOkun
u5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58TpLu/PirUdN/t/SNNt9U81Lktrirdh5baX
zVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/AIf8BwaXqt7oVjaaxPM934jv7MyLdQ2K
W0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUd
t8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FWueB/D/xH8TN4uTx5Po8l9Do914gS6hlU
65/pj+bZSMzSwEyMwIdjKmF5/wD4J+ftp+EtB/bK0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR
7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwAfKnwq/YH174leDPh7rV74t8FeEU+K+qz6P
4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjk+Iv2KPFXw5+HmueI/G9/wCH/AcGl6re
6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/2Mviz4w8N/FTw4Na8S/B/wAO
/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pzwt+0tY/BO
80XUfhVJ4Wg+I/ia/wDG+mePB4ejvtL0y/8AECXqBU1MmT57OZyzWLMCyldxeMAAHwp+yJ+y
dqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19rf8E37
nwP4O/4KuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0
hHsHwr/Y11b4gfDzSfFmueK/BXw48PeJNVGjaDe+K7u4tk1ycEiZ4BDBMwt4G2rLcyBII2kV
TJuDBe28Ef8ABL/xZ4r8MW1/f+Mvh/4Zup/HL/DZ9N1KXUZLq28QB2UWTtbWc0J3ABhKsjRY
YZcHKjoJrfSf2vv2HvgZ4I0Pxd4K8NeIfhXqusadr0HivXbfREWDVLxbmG/geZgs9vGsbLKs
ZadG24hZWVj7X+xx8bPBv7L37OfgbSLfxX8NPFFjdftDxXET62LNp/7C+ymzGtNaSuZ9NdHh
85JH8t4yEyWjciRjPlTwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVb
C3uUt0adiiNO8e/YzrmPDnzTUfgb4w03xx4r8N/8I5qt3rXgb7W2v29jAb3+yktJPLuZZWh3
KsUbjDS52DI+bBFfa1r8OdH+D0Gs678J/il8P9c+JPxA8V69pFx438U+PtLttQ8D6OmoS2q3
cXmTCWW7v4t0z30SNIsLMIYw0vmN8qfD3Q7T4b+OPidol38Wf+ES/szQ9V0yG/8ADq3N/Y+N
pY5FRdNWSAp/ol3tLCWUGLaill5FIR5TXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe
/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4TX6F/s2fFHQfEPwH/AGS7ay1r4aF/h14k1CPx
d/wkeu2GmXvhqFvEem6ol1apdzxO7tBbsvmQLKDHJPF98kAA+f8ARf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHi
PwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8
F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/
9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+
7lniIDkhWM+PtF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRb
p/KDSyJkqzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXM
UaLdP5QaWRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi
+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R
6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t4
38SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/ANofwb8bfF/wX1rw
v4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/ANof
wb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7ln
iIDkhQD4+0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/
KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot
0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt
721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re
6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t438SX3h
XQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8ABNDxksvha18UeI/BXgPVvG/i
S+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wAF9a8L+I/h
pfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF
/wAF9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5I
UA+PtF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJk
qzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5Qa
WRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbW
K+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6F
ZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+KfB37Bera94s03w1rPj34aeEfGGr+JLjwr
a+HtQ1K4vdRN7DcJasJVsLe5S3Rp2KI07x79jOuY8OT9m7/gnL8Uf2n/AI9+Ivh3oelWthq3
g6ee18Q3moXGNO0WaJ3jMcs0QkDO8kbIixhy+1mHyI7r9K6mmgx6r4m+JPwf8e/DQfEH40eM
vEPmeJdf8WWGiXPw30WTUpYo5Le1uJVuVuLuF2ka5WMzRQZSOJXk8xj/AIJg/tUxfCf9pLwb
8KfEs3wfsvBPwt1XXr6XxgmuS6fFe3MkNxbfbVle7itb938yOCF3t5JFtmOwIA7gA+P9G/Zf
1ab4CP8AEfXNZ8P+EfD13PNaaCmsPcC78VTwozTJYwwwys6RsqxtNJ5cCySohlDbgvmlfpD8
N/iZonij4Pfs3eGIbr4FRaT4J8Za5afELRtevdAu4NGsptain2WUmqySyzW7WrybZrOWXzAg
zK7oMfBPx3/4Rj/heHjL/hCP+RM/t29/sD/W/wDIP+0P9m/1373/AFWz/WfP/e5zSEcpRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AV/UV/waO/8AKNbQv+wzqH/pVLX8utf1Ff8ABo7/AMo1tC/7DOof+lUtejgPgrf4f/bonJit
6f8Ai/Rn378Wf+P64+pr5j0E48Q6v/2Drf8A9SHxXX058Wf+P64+pr5j0L/kYNX/AOwdbf8A
qQ+K69jC/wDLv1/Q8TGbT9P1R+V//BxOf+Mrfhv/ANk7g/8ATzq9FJ/wcS/8nV/Df/sncH/p
51eivNxP8afq/wAzvw38GHovyPnqM4/bc+B3/YE+G3/ph0WvEviZ+ydqHx38QeOvGX/CUeFf
Cnhv4c+HPB39sXmtG9bH23SrSGDy0tbad2/eJg/KMblPTJHt8Qz+218Dv+wJ8Nv/AEw6LWLY
XOn+IfgD+0v4N/t/wrpXiTxX4c+Gv9j2eta/ZaR/aH2e1t5p/Le6ljQ7Ixk/NxlR1YA8tf8A
hr5HZQ/iP5nzF+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW2
4CqCu903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobk
xF9rMAUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAc
SlRFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaS
W6TzJDewCR0txtgTMolfjOw800X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO
7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/APwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvUb
281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6n
vtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/aAs59Kg8fabrs
er/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/wAEv/FmjeIdD0rxB4y+H/hi
/wDFvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/wCCe2sfCTwBoGve
P/Hnw/8AAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/+PtQ8ceEN
X8W+Pvh//wAK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHv
Wv2hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laM
KAD5/wDA/wDwT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wkz29zHatPEIbKQwRNNKqr9qE
L9SUUc14p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfZf7Nv
xhudG/aMtZND8Y/DTTv2d/h78QL/AFjR18WS6Xd3+kaUl0t266bBfJLq6PNBFEIxbRhmuGBy
JPMcfP3xAs7H9rb9oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8N
xkoQfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgC
gDnW8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv
2M65jw57X/gnZoV/8PfiR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJ
dPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/AG/pGm2+qealyW1xVuw8
ttL5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/w/wCA4NL1W90KxtNYnme78R39mZFu
obFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wAEWdqqadB9o1HVdQd4dO0xSG8s
SyKjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f8AiP4mbxcnjyfR5L6HR7rx
Al1DKp1z/TH82ykZmlgJkZgQ7GVMLz//AAT8/bT8JaD+2Vo3gTTrP4aeGvgr4F8SeJNc0XxD
qGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8
V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvhz8PNc8R+N7/w/wCA
4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa14l
+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWEhffJXu2o/tOeFv2l
rH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT
9k7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4J
v3Pgfwd/wVdm8ZaNr/hXw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE2VMQOFd03KZNucn4
ppCCivQPBHxl8O+FPDFtYX/wm+H/AImuoN+/UtSvNcjurnLsw3rbajDCNoIUbY14UZycsbej
fHbwvpdm8c/wY+Gmou080wluL7xCrorys6xDy9URdkasI1JBYqil2d9zsAeaV6t8BP2TtQ+O
/wAMPHnjL/hKPCvhTw38Of7P/ti81o3rY+2yvDB5aWttO7fvEwflGNynpkipo3x28L6XZvHP
8GPhpqLtPNMJbi+8Qq6K8rOsQ8vVEXZGrCNSQWKopdnfc7e1/sj61pPjj9i79qXwxb6h4K8L
a741n8MzaHo2oeIbfSoJVh1KeeWKCW/uAWSGPu8rNjblmZhkA+SaKK/Rb9ln9pW48BfsdfAr
S/hdrHwq03xJpeu6vJ4y/wCEm8bT+F47K4a+ge0ubqCK/szqMRttgYtFdjZB5QGQ0ZAPzpr0
v9kr9l/Vv2xfjXY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfZf7Ov
7V2meHvhB4IuB488K+HtVtf2j1VoNF1J9LtdP8LXQinvIraCUpPb6JJOiu0UirGSieYu9ePV
v2fvjv8ACz4FfHDwde+BvGXw/wDCfgyX4j+Mv+FiLZ6taWf21WuLi28O4iLiWfT0iuoyn2ZW
tIsvK+wxvIrGfknXbeAfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAo
WF8sGKK/Ka9o03hzXLzT7h7WSewne3le1uorqBmRipMc0TNHImRw6MysMEEgg19l/wDBPL9o
jXJf2SvjH8MbT4u/8IJ4k1H+wX8GSav4pk0Wx0mJdUZ9SkguGdUg/dzB5I4j5sy79qSEEUhH
xTRX3t8IP2lNZ/ZW/wCCaun3/hPx/wCCtT8YeFfic89hbjXIGvbnw1mB5YYbaR47+GyutRtY
ZJbdUhkkjy7oI2Zq9A/YD+P/AMNvC2h+BdYv9X8FaVoXxB8SeJX+KHh+98STafpXh5rtVt9O
trTRDdRwT2UiSxK8kltdrEm4vLEsGYgD8yaK+9vAnjjxL4c/ZV+BPhr4Q/FbwV8PPFnhHxJr
0fjx28cado9o101/bm0urpXmCapbi3QYkiS6jaNDGN2Nldt8FP2nbjwd+zj8I7D4XeJPgoPE
lh4r16fxld6l4qn8F6alw+oxSWl69jFd6cby0e2KYVrWcJFCIREhVoSAfmnXu3wq/YH174le
DPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj5T8WdZi8R/F
TxLqFunh+OC/1W6uIk0G1ltdKVXmdgLSGVVkjt8H92jqrKm0EAgivpb/AIJ7/GHxho2ueA5N
a8Y/DTTvhV8PfEh1idfFkui3d/pEKNDd3i6bBcJLqSPMIlEYsowGuGBBD73AB5T8TP2KPFXw
m+Al14+1e/8AD4g0vxlceA9V0iGeZ9R0rVYElkkik/deQ6BIt2+GaRT5iDOdwXK1H9l/VrL9
kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/wDWoDhg4X72/Z0/al0fx78N
9av9F8YeFfDth4x/aPvPEHiLRPE2vaXp8l74RvrUR3iXVrdTbLiJo5drRqJPnTKZaMMKn7K/
xe8G+Gfg3e+H/AXjXwVong+f9oe5uL7SNc8QWdimo+B5rNbaYTW2pSK9zbvA6rsdXkLKCB5k
eVYz40/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu8
0aK5YkA/Z7/YP8X/ALR/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8M
bvNGiuWJA+gPjv478J/8OufGXhnwRrPw/wD7E/4XJe6poGk/a9O/tn/hG8PFbXHkzH7f5vnb
E3yD7V5HDH7PmuV/4TuP/hyV/wAIz/bPw/8A7Y/4WP8A2p/ZP2vSf7Z/sv7P5X2jyc/avN+1
/Jvx5/kcZ+zUAeE/slfsv6t+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXC
riNhu3FVboLb9i2Ww+BPgf4heJPiL8P/AAfovxD+3/2NDqSatPdS/Yrj7PPvW0sZ1TD7SMty
HHcED7B/4JmfHfwP8CvA/wAAr3TfGXhXwnYS67r3/C2luNWt7O+vblo3ttE86KVxcT2iLdKR
5CtbRMZJZdjRySL5T+yrrXi4QfCfw54l8bfs/wCv/DLwj4rm07VtB8QT+Gp7rw5aHUI5L9ll
v4w00VwjvIk1hNOrhcB1ZFUAHj/gH9iC2+IsvhK3sfjJ8HxqXjnVZdH0bTjdapLezTLd/ZY2
miisHa1SZijxG58otHIGIXDhfKfiz8OL74OfFTxL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZ
VYoWQkEqDjGQOlfS3wc174XfCHVf2ivij4PvPD/9u+A9VgHwjsNWm8xDDdalLEL6G0uSJbi4
tLVY5I/NDiJiJJEZlUr6B4E+N/i3xT+yr8CR8LfjB4f8E+NrLxJr198QrvVvGdrob3V7cX9v
LbXuox3MqvqaeRjL+Xc5VGjIYgx0AfBNewfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uN
RuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn6h0H4ta/efs4/Baw+DPxf8Ah/4I8SaL4r8RT+N7
u38RWfhDTbm4l1GCSzvZ7Gf7Mbu0+zBdqrayBYkMPlAqYRz/AMHPBtr8O/hJZ+K/h34/+D7f
Fv4marqen6n4nvPEum+HIvhzpoumtzLYWMxhmhe8QySCeO3WWC2/dxQRs+4gHing79gvVte8
Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXMeHJ8N/wDg
nL8UfiPofxX1MaVa6Ppvwag1A+IrvULjEH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/tfwc+Ddr
+zr8JLO6+HfxF+D918W/F+q6n4f1PxbeeOdNsIvh9psF01mZ7BJpVmd71BJKL2OMypbHbFEr
S+Y2T/wT38C2vwu1z9om01Pxn8NI4L/4f+JfAml3snjHTbWDWtSdoBD9mFxNHI9vKFLJcFFi
I6sCCAAeKfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqanqeoO8UE7QQmdrSAqjGS48sFtuAqg
rvdN8e4+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfa
zAFAHPsP/BNnwtH4F/4X5/bfiX4f6P8A2x8OPEPgqw+3eM9Jt/t2qS/Z/Lji33I3xPtbbcLm
BsHEhxWr+z98LI/gb8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZ
Ib2ASOluNsCZlErgHmmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9
jbXMUaLdP5QaWRMlWYZTDnwrx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYd
WGVJBxwSK/Qv4T+J9B0b4ffsy+GrLxX8H9Zf4ReMtX03xdqF/wCJbC0GkQr4n0/UU1DTmu5o
XnSWC1bbNAkgaGWaPAdiB8PftZeN9M+Jn7VHxL8SaJc/bdF8QeK9U1KwuPLeP7RbzXcskb7X
AZcoynDAEZ5ANIRrfsifsnah+2T8T4vBuheKPCuheJL7d/Z1nrRvU/tLZFLNL5bwW0yL5ccL
E+ayZ3KF3HIHlNfVf/BGm50/wd+3b4T8Za7r/hXw54b8L/bP7RvNa1+y0zZ9o0+8hi8tJ5Ue
bMhUHylfZuUttBBPingj4y+HfCnhi2sL/wCE3w/8TXUG/fqWpXmuR3Vzl2Yb1ttRhhG0EKNs
a8KM5OWIB5/RXpejfHbwvpdm8c/wY+Gmou080wluL7xCrorys6xDy9URdkasI1JBYqil2d9z
saN8dvC+l2bxz/Bj4aai7TzTCW4vvEKuivKzrEPL1RF2RqwjUkFiqKXZ33OwB5pRVvXtRi1f
XLy7t7C10uC6neaKytWlaCzVmJEUZld5Cig7QXdmwBlmOSalABRRRQAUUUUAFFFFABRRRQAU
UUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFF
ABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAU
UUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFF
ABRRRQAUUUUAFf1Ff8Gjv/KNbQv+wzqH/pVLX8utf1Ff8Gjv/KNbQv8AsM6h/wClUtejgPgr
f4f/AG6JyYren/i/Rn378Wf+P64+pr5j0L/kYNX/AOwdbf8AqQ+K6+nPiz/x/XH1NfMehf8A
Iwav/wBg62/9SHxXXsYX/l36/oeJjNp+n6o/K7/g4l/5Or+G/wD2TuD/ANPOr0Uf8HEv/J1f
w3/7J3B/6edXorzcT/Gn6v8AM9DDfwYei/Is/Dv9ibVfGHxv+BvimOB2tz4b+H8+4L2i0TSV
P/oBr4O+NP7H+pfGHx38RvFDeJ/CvhPw78MND8Ixazd60b1tpvNMtYIfLS1tp3b95Hg/KMbl
PTJH9Ev7FvhbT734D/Ayd4ozMPA3hNs45yNJssV+I/xPl07W/DX7X/goa94U0rxH4psvAA0a
z1rX7LR/7Q+zRxTT+W91LEh2RjJ+bjKjqwB0zGgoUKbXVL8hZfWc69SL6N/mfIH7Pf7Dnj39
pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/C
gX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wS80+10jQ/
jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQOg/Z++FkfwN+
BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJbpPMkN7AJHS3G2BMyiV/FP
YPNNF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkq
zDKYc1fD/wDwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlw
UeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6nvtYvdF1M20NvNHPeSaMlxHJ
qRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/AGgLOfSoPH2m67Hq/wAN9M8QPbmRJdS1SbzL
xbMsRPd2cJWRMGSKJzvZX2oQAco//BL/AMWaN4h0PSvEHjL4f+GL/wAW+K9R8H+HIr6XUZv7
eu7G9WxmeI21nMIovtLiNTcGJjgttC4Y5XjL/gntrHwk8AaBr3j/AMefD/wF/wAJHfatptrY
akdUvLpLjTLxrO8RzY2VxENsq8ESEMGBBPOPQP2FPj/4+1Dxx4Q1fxb4++H/APwr3wd4rl8R
alc+NrvR9Q1ay/eRXt+1jHdLLqnm3BjG02afPcPuDB97j0DwZ8e9a/aF+NPhnXLrxd8FNT+D
F18R9Uv5/DXjSLw7FrHhbTLzWBeXYlXUohKfPinL7rKacZQruVowoAPn/wAD/wDBPbWPGeke
Cb+Xx58P9Ftfibrt1oXgx746o3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8ca
z4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z
3+HvxAv9Y0dfFkul3d/pGlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi
/TNe8P8AhXTZ59a8aWkXia9Wxn1CE3TTJYwBd6yXrrKAsQbDFXw3GShB8E/2Qm+NVn4UC/Eb
4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz
49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/AMPf
iR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJdPdySmCyyRqq9t8ATa6
N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/b+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjChj
Pn7xF+xR4q+HPw81zxH43v8Aw/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiS
RF83O7afsZ/sOePf26/iHc+H/BFnaqmnQfaNR1XUHeHTtMUhvLEsio7b5GUqiKrM2GONiOy/
cOo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DKp1z/TH82ykZmlgJkZg
Q7GVMLz/APwT8/bT8JaD+2Vo3gTTrP4aeGvgr4F8SeJNc0XxDqGqXWj3IguVuobaeU3F5HFd
3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7
D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvhz8PNc8R+N7/AMP+A4NL1W90KxtNYnme78R39mZF
uobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa14l+D/AId+FXw48ZXGqz2esajo
urnw/DHNHdXkOjC4a61JkkESrC9kXEkxVhIX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/
430zx4PD0d9pemX/AIgS9QKmpkyfPZzOWaxZgWUruLxgAA+FP2RP2TtQ/bJ+J8Xg3QvFHhXQ
vEl9u/s6z1o3qf2lsilml8t4LaZF8uOFifNZM7lC7jkDymvtb/gm/c+B/B3/AAVdm8ZaNr/h
Xw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE2VMQOFd03KZNucn4ppCPa/hv+wx4k+K3wg8
F+M9H17wrJYeNvHNt8PYbWSW6S603U5wWj+0A2+zyvL2OXieQgSqMbgyr5p8WfhxffBz4qeJ
fCGpy2s+peFdVutHu5bVmaCSa3meF2jLKrFCyEglQcYyB0r72/4J6/tG+G/gB+wf4b0HWNb8
K2t/8QPitNp80sfiG1ttd8JaZfaS2nSa5bsGd7CWCRXxLLGAY2YcLMsldB8MfjLF8APgb8MP
Bfwb8ZfB/UtS8H+MvEVn4n1fXPGsvhe0uGTU4xY6lNbwalbf2jby2ixtnber5cQiXOGRmM/M
miv0L8N/tfaz8EP2DbnxV4T8W/DSHxhp3xdutVsNH0rVIIRF4allimls7KxkkS/ttMn1G2hZ
rVVikeEbnXy2ZifssftfWNr8BPh3qc/i3w/4K13/AIaHhkm0vT9UXTU0PwxeJDc3trFCZN8G
jtcKC8ZPkFolL5Zc0AfnpXpf7Pf7L+rftI6H49u9F1nw/Yz/AA98N3Piq8stQe4Se+srZSZj
AY4XjLqTGu2R48mVcZAcr+hd3+0rpngLQfCul/AjWPgppv8AZfxH8WSa5/aXjZ/C+m2SNrKv
p1y8Ftf2a6jaGy8oAiK7TyoBEg4aM/On7B/iLSdS+Kn7T2oXOufDTw9B4s+H/iPQ9JSPVbfQ
NKu72/mVraCwhvXhkW3IifbuUeUgjEmwsoKEfGlFFfpD/wAE8vE+s6b+wp8NNej8V2vh2Dwj
8ebeyvdR1HxLBo6Wnh6S0trq/sUlnmjD28sirNJaRk+aybzGxUkAH5vUV+q37Lnxu+Cnw/8A
HGl3ugeJvCv/AArbxv458Xp420zV/EU+m2OlWlzIbbR44NCaeCCa0mgkt/Mkks7hYlLl3hWD
915V+x18e7f4Mfs0fC7R7/4g6Vo/iTwv+0BbabdrD4mhMln4aligkv0WSOUg6VLcxCSQqxtp
HQOSxANAHxT8K/hX/wALS/4SP/io/Cvhz/hHNDutc/4nmofY/wC1PI2/6Ha/KfNu5N37uLjd
tbkYrlK+9v2YvE+g6N+0Z+2ZpmjeK/BXh7wT4p8N+KNE8P2kniWw0nStUup7qRdMW2SSaOKR
BD54SRAY4klwWQSjdU+E3xR1/wD4Y6/Z80v4M/FPwr8OPEmga7rcnjf7Z4us/Dsf2iS+tns7
m+gnkQ6hEtsqjKxXA2RtFgkGOgD4UorW8d3H2zxxrMv2zStQ82+nf7Vpdn9jsbnMjHzIIPKi
8qJuqR+VHtUgbExtHVeCPjL4d8KeGLawv/hN8P8AxNdQb9+palea5HdXOXZhvW21GGEbQQo2
xrwozk5YgHn9FW9e1GLV9cvLu3sLXS4Lqd5orK1aVoLNWYkRRmV3kKKDtBd2bAGWY5J+6/2H
P2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u7HwxcQW817GwLh00yS4jDzKcQNIgZ8sM0AfB
Ndt4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor/ot/wAL
i0HSPG3witvhD8RPBXgzwn4W+LviW68eWeneMLDw3aXmnPr8MlpI8Lzwi/tzpyhY2iWZPLTy
h02Dz74DftQxeKPC37UXgLwL8WrXwBB4g8SWN38Mje+IJfDelaNpp16aW7e0ZzGlmn2edHeG
MLLIm4LE5VlDGfnpRVvXtOi0jXLy0t7+11SC1neGK9tVlWC8VWIEsYlRJAjAbgHRWwRlVOQP
uv8A4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUji1kGxpJ
FihmiEvS3CEfBNFfot8K9R0/472P7HVn4f8AFfw/1XVfhp8R9WfxHBDqdloOPtHiC1u4ZLOx
ufs0skUsWWjjt4OP9XsV1MY7a6/aUsfh34snt5/H9roeu6N+1lfWs0T64ttd2Phi4uBNexsC
4dNMkuIw8ynEDSIGfLDNMZ+WddX8K/hX/wALS/4SP/io/Cvhz/hHNDutc/4nmofY/wC1PI2/
6Ha/KfNu5N37uLjdtbkYr9Afiz8UX/sHwRpf7OfxT+H/AMOP7A+I/i+TxD9n8Xaf4d03ZJrM
b6bczwPIi6haLZLGFMUVwnlRmIA48uvHv+Cdfi9bLXP2mrG+8b+CtP03xd8P9b0e1Ems2nh3
Std1W4YrZtbWtwbZQm03Ow+SiwJLtYReYFKEfGlFfe37Dn7Slj8O/wBkz4RW8/j+10PXdG+P
NlazRPri213Y+GLiC3mvY2BcOmmSXEYeZTiBpEDPlhmvpb9mTxF4N8dftGfCTwz8Ntc8Ff8A
CJ6H8QPH934r8O6ZqtnZwagwupbjRLkWO9DfpFFFavBNDHKsAt0Ksnk/Kxn5E+FfAmueOv7S
/sTRtV1j+x7GXVL/AOw2klx9htIseZcS7AdkSbl3O2FXIyRmsmvS/g/4sbXNc+IWp698VPEH
grUtX8N6lLLdxx3d7P4yupGRjpdy8Thtl0xYvJMWjymXByK80pCPdvhX+wpdfE74CaT8Rp/i
R8NPCvh7VvEg8JA65PqUL2mpFDKsUzR2ckUaGHEvmmTylVhvdWDKPKfiz8OL74OfFTxL4Q1O
W1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlfW3wj+NT/Bf/gjxq9npWr/AA/uPEWt
+OdRkuNGvtT0+41SPR73RJNIluYrYy/aYpRLIQpjVZNmWYNbu+/0v/gmR8Vfht8HPhX8LEv/
ABT4fk0LxRquv2nxQ0/xD4smt4NMaeFLTTkj0g3MUF1bzo8XmzSWt0qAyM8sSw/umM+FPAPw
B8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAULC+WDFFfia+4f2D/jf4i0
H9mb43/CFPjBa+CPFjz6JD4RmvfGYsNK0tYtWc6pLaXqy+QiFJfMcW7lrhNxjWXBr0v/AIJz
fET4YfAXwP8ADmxn8ceFdY8N69rviLTviWmr+Kru1sYPNjSy01oNHmmt0ubS5jaIyTT2U/lq
zmR4BAREAfD37Pf7L+rftI6H49u9F1nw/Yz/AA98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R
48mVcZAcr5pX1t/wTU0+18Ga5+0Lp+teIvBWjT3/AMMdc8JWb6h4p021gv8AUrlo1higmknE
cyMYJP3sbNEBtJcB0LfJNIQUV+i37LP7Stx4C/Y6+BWl/C7WPhVpviTS9d1eTxl/wk3jafwv
HZXDX0D2lzdQRX9mdRiNtsDForsbIPKAyGjJ+yb8evCGi+B9ZPj3xd8Koda1jxXql18FxbRC
XS/h3qckd4H1FoZImk03SnupLIww3SZDxLcG3URtOGM/Omiv0B0H4reM5v2cfgtpvw1+MfhX
wb4+0PxX4im+I99eePbDTY7zUJdRge3vr4yz41eIwjPnRrdI6KyDf9yur+Cn7Ttx4O/Zx+Ed
h8LvEnwUHiSw8V69P4yu9S8VT+C9NS4fUYpLS9exiu9ON5aPbFMK1rOEihEIiQq0JQj8067b
wD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX/Qv9kv4gap
qH7NHhnxfZ+KPCvhmw0D9o9ori/s9bTw7o9l4cmihvL6wsReSQuunyyBZvsIG5xGrNEWQ44n
4DftQxeKPC37UXgLwL8WrXwBB4g8SWN38Mje+IJfDelaNpp16aW7e0ZzGlmn2edHeGMLLIm4
LE5VlDGfD/wr+Ff/AAtL/hI/+Kj8K+HP+Ec0O61z/ieah9j/ALU8jb/odr8p827k3fu4uN21
uRiuUr7B/wCCZ2rW/g//AIaN0Gbx14Vs9F1/4cav4fsWvvEcOj2Ov6nL+7sXijvXgZ8p9o2y
PGpiWYh/LMmD6V+yb8Yl0j4G/sr23gj4ieH/AAZB4W8ZajdfFKzbxhaeG3vIX1OzkikuoZ54
Wv0NipUMqzDahi6goAD89KK/Sz4YftXeEPD3hjQbjwv480rw9YWv7Tsi6XBDqQ0uTT/BV08c
80SwEo9vpUkiRvJEVWEug3ruXj4T/ay/sP8A4ao+Jf8AwjP9lf8ACN/8JXqn9k/2X5f2H7J9
rl8nyPL+TyvL27Nny7cY4pCMr4V/Cv8A4Wl/wkf/ABUfhXw5/wAI5od1rn/E81D7H/ankbf9
DtflPm3cm793Fxu2tyMVylfW3/BLDxOujaH+0Bpl94r8P+HtN8U/DHVNEtbTWPEtppMGqarO
oWzUJcTRrI4X7SBJgrEJWDMnmjd6B+w5+0pY/Dv9kz4RW8/j+10PXdG+PNlazRPri213Y+GL
iC3mvY2BcOmmSXEYeZTiBpEDPlhmgD4Jrq/hX8K/+Fpf8JH/AMVH4V8Of8I5od1rn/E81D7H
/ankbf8AQ7X5T5t3Ju/dxcbtrcjFav7WX9h/8NUfEv8A4Rn+yv8AhG/+Er1T+yf7L8v7D9k+
1y+T5Hl/J5Xl7dmz5duMcV7t/wAEsPE66Nof7QGmX3ivw/4e03xT8MdU0S1tNY8S2mkwapqs
6hbNQlxNGsjhftIEmCsQlYMyeaNwB8k123gH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeX
S21vFGgyzOzF2zgKFhfLBiiv9gfsOftKWPw7/ZM+EVvP4/tdD13RvjzZWs0T64ttd2Phi4gt
5r2NgXDppklxGHmU4gaRAz5YZroPhH+0R5tj+1d8MfBvxd0rwJ/aPiu1f4bSP4p/sXQ9J09f
EE73kljcB1ggi8iZHaOA75o92xJMEUxn500V+kP7O37QjfCv9lX4L6B8JvE/wfbXdB8Sa2PF
17rnjS78K2gmN/C1nfTW322wl1C3e12H97b3BEcIi2KweI5Phv8Aa+1n4IfsG3Pirwn4t+Gk
PjDTvi7darYaPpWqQQiLw1LLFNLZ2VjJIl/baZPqNtCzWqrFI8I3OvlszEA/PSiv0B+Hv7Z+
v/Cf/gninjXw3r3w/wBF8cQfFabWrXw1aatZxzad4cneGefTraz8/wC122ny6lbxb7aEozxK
WbMTM59A/ZL/AGofA+v6h+yt4y1HxB8P/CVr4B13xuninTI7630yPw9LrMz/AGKO3s5JBM1p
m5jAkhWSKFFYyOgikKgH5fV23gH4A+IviP8ACTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoM
szsxds4ChYXywYor/e37CF9r/gr9jDwBfP4u0rQf+ED/AGgI9K1HVLjxdZ2NrbaCba3uNQso
Lt7hYp7SaVBM1vA7rOUEgR9u4c98Bv2oYvFHhb9qLwF4F+LVr4Ag8QeJLG7+GRvfEEvhvStG
0069NLdvaM5jSzT7POjvDGFlkTcFicqygA/PSirevadFpGuXlpb39rqkFrO8MV7arKsF4qsQ
JYxKiSBGA3AOitgjKqcgVKQgooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKK
KACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAK/qK/4NHf8AlGtoX/YZ1D/0qlr+XWv6iv8Ag0d/5RraF/2GdQ/9Kpa9HAfBW/w/+3RO
TFb0/wDF+jPv34s/8f1x9TXzHoX/ACMGr/8AYOtv/Uh8V19OfFn/AI/rj6mvmLQzjxBq3/YP
tv8A1IfFdexhf+Xfr+h4mM2n6fqj8r/+DiX/AJOr+G//AGTuD/086vRR/wAHEv8AydX8N/8A
sncH/p51eivNxP8AGn6v8z0MN/Bh6L8j9Iv2LfErQfCP4G2+eP8AhCPCI/PSLGvwM/ap/ZQ1
D49fHj4xeNP+Eo8K+FPDfw6svDJ1i81o3rY+3WFvDB5aWttO7/vEwflGNynpkj91P2O4JD8P
fgYecf8ACE+EP/TPYV+P/wASbnT/ABB4U/a88Gf2/wCFdK8SeK7H4f8A9j2eta/ZaR/aH2eK
Kafy3upY0OyMZPzd1HVgDhXryqUYqXT/ACIy5JYip6v8z5C/Z7/Yc8e/tJ/Cvx7430KztbTw
n8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8E/2Qm+NVn4UC/Eb4aeHtW8b6qd
H0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOfVv8Agl5p9rpGh/HDUNT8ReCtCg8T
/DHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLgggdB+z98LI/gb8CdH1XwH42+FVr8
Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZIb2ASOluNsCZlEr8B7R5pov/AATQ8ZLL
4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5q+H/APgn
1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeKDfLgo8kgR0YMrEBse
lfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaOe8k0ZLiOTUilyIFMLWUQEk
zqw2vvZavw6+MvgeXxj+0v8AtAWc+lQePtN12PV/hvpniB7cyJLqWqTeZeLZliJ7uzhKyJgy
RROd7K+1CADlH/4Jf+LNG8Q6HpXiDxl8P/DF/wCLfFeo+D/DkV9LqM39vXdjerYzPEbazmEU
X2lxGpuDExwW2hcMcrxl/wAE9tY+EngDQNe8f+PPh/4C/wCEjvtW021sNSOqXl0lxpl41neI
5sbK4iG2VeCJCGDAgnnHoH7Cnx/8fah448Iav4t8ffD/AP4V74O8Vy+ItSufG13o+oatZfvI
r2/axjull1TzbgxjabNPnuH3Bg+9x6B4M+PetftC/Gnwzrl14u+Cmp/Bi6+I+qX8/hrxpF4d
i1jwtpl5rAvLsSrqUQlPnxTl91lNOMoV3K0YUAHz/wCB/wDgntrHjPSPBN/L48+H+i2vxN12
60LwY98dUb/hJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5rxTx34I1T4Z+ONZ8N63bfYta8P30+m
39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+y/wBm34w3OjftGWsmh+Mfhpp37O/w9+IF/rGjr4sl
0u7v9I0pLpbt102C+SXV0eaCKIRi2jDNcMDkSeY4+fviBZ2P7W37Rnxi8X6Zr3h/wrps8+te
NLSLxNerYz6hCbppksYAu9ZL11lAWINhir4bjJQg+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2
oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHOt4O/YL1bXvFmm+GtZ8e/DTwj4w1fxJceF
bXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz2v/BOzQr/AOHvxI+HnjO08W/BQaDd
eK7JvEdr4gvNGi1jw9b2l1EzSqNURJY98UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q
8Aa58TpLu/PirUdN/t/SNNt9U81Lktrirdh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea
54j8b3/h/wABwaXqt7oVjaaxPM934jv7MyLdQ2KW0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv
7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7D
wr4i+FWueB/D/wAR/EzeLk8eT6PJfQ6PdeIEuoZVOuf6Y/m2UjM0sBMjMCHYyphef/4J+ftp
+EtB/bK0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtj
btdwAfKnwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaM
khjjZhjk+Iv2KPFXw5+HmueI/G9/4f8AAcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETX
DFLcSSIvm53bfa/2Mviz4w8N/FTw4Na8S/B/w78Kvhx4yuNVns9Y1HRdXPh+GOaO6vIdGFw1
1qTJIIlWF7IuJJirCQvvkr3bUf2nPC37S1j8E7zRdR+FUnhaD4j+Jr/xvpnjweHo77S9Mv8A
xAl6gVNTJk+ezmcs1izAspXcXjAAB8Kfsifsnah+2T8T4vBuheKPCuheJL7d/Z1nrRvU/tLZ
FLNL5bwW0yL5ccLE+ayZ3KF3HIHlNfa3/BN+58D+Dv8Agq7N4y0bX/Cvhz4T+F9d1n7Dea1r
9vpmzT7i2v4bHy0vZUuJsqYgcK7puUybc5PxTSEe7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo
/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGOT8TP2KPFXwm+Al14+1e/wDD4g0vxlce
A9V0iGeZ9R0rVYElkkik/deQ6BIt2+GaRT5iDOdwX3b9gi6fwN4Y+Gk1p8S/h/qvgzxF4ref
x34S8Talp+jyeDXhdbePU7Ka6uYrkXZtJ2mjutP8t0eBELSGPYPS/wBm/wCM3g39nT4CeENJ
8J/EHw/Bo93+0tb31lPd6pZjVx4XCC2N7cIds1kkiQMkrskBaKV1YCGcq7Gfm9XbeAfgD4i+
I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/6beAvjv8ACzQ/2gPg
Rq+leMvh/pXhD4UeOfiJp2pxx6taWkemQapfTJpbW9vvVprR0uIcTW6PBEm5ndFjcr86fsH/
ABv8RaD+zN8b/hCnxgtfBHix59Eh8IzXvjMWGlaWsWrOdUltL1ZfIRCkvmOLdy1wm4xrLg0A
fKngH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiivU+Ffwr/
AOFpf8JH/wAVH4V8Of8ACOaHda5/xPNQ+x/2p5G3/Q7X5T5t3Ju/dxcbtrcjFfW37B/x31bT
f2Zvjf8ACbTPjZa+GNduJ9ETwRqN74ouND0q0gj1ZzqNzaTz+UbdHjlErxhUnlQtiJ2VlHP/
APBM7Vrfwf8A8NG6DN468K2ei6/8ONX8P2LX3iOHR7HX9Tl/d2LxR3rwM+U+0bZHjUxLMQ/l
mTBAPj6vQP8AhqDxn/wzx/wqr7bpX/CC/bv7U+wf2HYed9r3Z+0fafJ+0ebt/d7/ADN3lfus
+X8lfW37Dn7Slj8O/wBkz4RW8/j+10PXdG+PNlazRPri213Y+GLiC3mvY2BcOmmSXEYeZTiB
pEDPlhmvpb9mTxF4N8dftGfCTwz8Ntc8Ff8ACJ6H8QPH934r8O6ZqtnZwagwupbjRLkWO9Df
pFFFavBNDHKsAt0Ksnk/KAfjnXpf7JX7L+rfti/Gux8BaBrPh/Rtd1SCeaxOsPcJBdtEhleI
NDDKVfylkcFwq4jYbtxVW8/17Xr7xTrl5qep3l1qOpajO91d3d1M0091M7FnkkdiWZ2YkliS
SSSa/SH/AIJmfHfwP8CvA/wCvdN8ZeFfCdhLruvf8LaW41a3s769uWje20TzopXFxPaIt0pH
kK1tExkll2NHJIqEfmnRX6Lfs1/El/Avwp/Zj0Xwh8R/CvhD/hCPHOqN8VrW38dafosd8n9q
2jJLOGuY11SI2SFVlh89GRSiscba9L8I+N/gN8arb4Sajp/xD+Gnhuz+GvxA8Q3Wi2Gp3sui
mwEviu01e3kjgMaqlu2j2t3GjShYRNcQRcS8Ruw7H5PUV+m3g79svw9cWem63pfxItbB5/2p
Li4tpH1g2N3B4QvpUurkNG7JLFpk0ypJMjqsTSKC43rx8E/tZf2H/wANUfEv/hGf7K/4Rv8A
4SvVP7J/svy/sP2T7XL5PkeX8nleXt2bPl24xxSEef0V91/Cb4o6/wD8Mdfs+aX8Gfin4V+H
HiTQNd1uTxv9s8XWfh2P7RJfWz2dzfQTyIdQiW2VRlYrgbI2iwSDHXpf7F3i/WYP2TPBviSP
xv4f02Dwt+0OLW91xdZg8N6cvh6SCC7v7W1Wc2oSynkVZzYxxpv2AmDKEKxn5k0V+sPwk/aQ
+E0/xk/Z78T6F4v8FaN4J+G/jL4g29/DNeQaU+k22r3ko0ox2UpjmNu6XMHzxRGOBQ/mGMRS
bPH/ANhnx9q/gz4QeH/BVz8TvCvgm107xXfSprnhj4gaZo+oeGbtQqF9Wsbl47XxDp8ki20q
PDJM3kwyxrMQRb0Afn9RWt47uPtnjjWZftmlah5t9O/2rS7P7HY3OZGPmQQeVF5UTdUj8qPa
pA2JjaP0W/Y8174UfEf9mv8AZmOv/Eb4f+H/ABB8Jtd1BvsWu6vJp9xZXr+IbDVfN27dhibT
LS9jEsh8kz3EMYbzT8iEfJP7Lfxl8c6Roeu2fhXxd8NPAE/hPw3qWqQatqWk6ZY63eKVZZbS
x1L7K1417Ks7rEqzK20EK6hQB5/4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRo
Mszsxds4ChYXywYor/YH7MXxe0nxn+0Z+2ZqFl418P6Z4T+JPhvxRb6MmseILfRYNbvb26kb
TiIbuSIs/lNPh2X9yJWDlPMw3P8A/BPL9ojXJf2SvjH8MbT4u/8ACCeJNR/sF/Bkmr+KZNFs
dJiXVGfUpILhnVIP3cweSOI+bMu/akhBFMZ86aj+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A
1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwvmlfoX+xp8ZtJ/Z0/ZM+G+kxfEHwVBeXf7Q9jfXM9v
qluLkaEIEtp70pLturK3k8iRGeVIHaCVlcCKdlf40/ay/sP/AIao+Jf/AAjP9lf8I3/wleqf
2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjikI9B8Hf8E1PiX4++DPw08f6VDpV34W+J2up4ft7
tJpW/sO4kv8A7BE98ojJjikmBCyR+YPuhtruiN9FfCr9jH9rz4eeDPCGgeBtb8P6hoj+JPEP
hnT9RtYoLmfwRMk89jqFxHeXFt9q0+3m+yzuHtHBYgfKs8yI+t+y3+3rN+zd8Hv2NNI0Dxt4
ftrG61XXtJ8d6Vc6hE0FjZXWtQ7J7yPeDbukTyTRSvtwA/LRtIrdr8Xv26bfwR8S/gNYeDfi
rpVn4f1X4reKm8YJpuuwm3/syXxjFdQS3RVyI4pIQ7LKSoeCSYBjFJIGYz4f+G/7A+vfEX4q
fFfwgfFvgrRNS+DsGoXuuS6g9+YJrWwmaG7uIDBays6RsqnaypIwlXajYYL4TX3t8BfE+g6z
+2H+2hqY8V+CrPTfF/hvxhomh3eoeJbCwg1e6v70taLA88yLIkixsfMUmNQVLMu9c/BNIQUU
UUAFFFFABRRRQAUUUUAFFFFAHoH/AA1B4z/4Z4/4VV9t0r/hBft39qfYP7DsPO+17s/aPtPk
/aPN2/u9/mbvK/dZ8v5K8/oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAPQP8AhqDx
n/wzx/wqr7bpX/CC/bv7U+wf2HYed9r3Z+0fafJ+0ebt/d7/ADN3lfus+X8lef0UUAFFFFAB
RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUU
UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAV/UV/waO/8AKNbQv+wz
qH/pVLX8utf1Ff8ABo7/AMo1tC/7DOof+lUtejgPgrf4f/bonJit6f8Ai/Rn378Wf+P64+pr
5h0b/kPav/2D7b/1IfFdfT3xZ/4/rj6mvmHRzjX9W/7B9v8A+pD4rr18L/y79f0PExe0/T9U
flf/AMHEZz+1T8Nv+ydwf+nnV6KP+DiP/k6n4bf9k7g/9POr0V52J/jT9X+Z6GG/gw9F+R+l
H7F+mBvg78DZMc/8IR4SOf8AuEWNfzZf8FIxj9sbxL/15aT/AOmq0r+lb9jjxJp9n8EfgXC8
sYm/4QjwkME85Ok2OP6V/NR/wUikEv7YviVh0ay0gj/wVWlc1ejKnRi5dTPLnevU9X+Z4ZRR
RXCe0FFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABXq3w1/bb+J/wh
8D2Hh7w/4m+xWGj/AG7+yZX060uL7Qvtsfl3X2G7lia4svMXJb7PJH8xZuGYk+U0UAFFFFAB
RRRQAUUUUAFegf8ADUHjP/hnj/hVX23Sv+EF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r
91ny/krz+igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACi
iigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoooo
AKKKKACiiigAooooAKKKKACiiigAooooAK/p3/4NYvDHiaD/AIJT+Gda0PxDoWmRSa7qtu1v
f6BLqDFkuS24Ol5BgESAbdp+6TnnA/mIr+nz/g2t8Ka544/4IP6bpXhrxTd+CdfvPEGuLp+u
W1lb3z6dMJ42VzBcI0cqZGGQgEozBWRtrr0YevOlzOHVW2T6ruZVacZ2Uuj/AEZ99+Ifhn41
8Syu8/jXwupc5Pl+D5x/PUzXznaadLo/inWrSeeO6mtbGCGSaOEwrMy+IfFYLBCzFQSM7dzY
zjJ61N/wTy+F/wC15bfErXdV/aH+JVpPoGi3E1jpWh6ZpuklPEJwVF888NsskVtg5jjzHMzj
MixqmyabVf8Ako/ij/rjH/6kfiyvVy/F1atWMZ2sn2S79kjysyoQhRlKO7Xd+R+UH/BxB/yd
P8Nv+ydwf+nnV6KP+DiD/k6f4bf9k7g/9POr0Vjif40/V/maYb+DD0X5DPAX7bmq+D/j38DP
Csc7i3Hhz4fQbQ3aXRNJY/8AoZr7o/YW/wCCSvw4/aw/Y1+GfxG8R6L8N73WPFWgW9xPNqng
C11S7fywYF8y4eZWf5YhjI+Vdq9q/JuHwzcXH7bPwNnCts/sT4bnP00HRf8ACv30/wCCWWh6
tr3/AARw+EWm6FrP/COa7e+BpLbT9W+yJef2Xcs9wsVx5LkJL5blX2McNtweDWOIqydON/I3
oU4qpK3mfHPxb/4NNfgb+0L8XPEevWfxT1Xw7cJcQ2l9onhTQNPtNP0iZbWAiMW4djC7xNFM
VY5bzw/RxXk/gP8A4NB/h78RbTVLiw+JfjwQabrF9pOZIrXdIba5kh34ERxuCBsc43YycZP6
Bf8ABFj9nX4h/sv/AAR+J3hr4nxXcniuT4jX9/NqU073Ka8ktlYN9uinf5pklYOd7fPvDq4W
RXVfbv2ePiDN4U8P+JreOTYH8Xa5Jj3Oozj+laYBc1OclFNq26XmGLfLKKu0tT8q/wDiDH8F
/wDRTPG//fq2/wDjVH/EGP4L/wCimeN/+/Vt/wDGq/ZL/hc91/z3o/4XPdf8966v3v8Az7j9
y/yOfnh/PI/G3/iDH8F/9FM8b/8Afq2/+NUf8QY/gv8A6KZ43/79W3/xqv2S/wCFz3X/AD3o
/wCFz3X/AD3o/e/8+4/cv8g54fzyPxt/4gx/Bf8A0Uzxv/36tv8A41R/xBj+C/8Aopnjf/v1
bf8Axqv2S/4XPdf896P+Fz3X/Pej97/z7j9y/wAg54fzyPxt/wCIMfwX/wBFM8b/APfq2/8A
jVH/ABBj+C/+imeN/wDv1bf/ABqv2S/4XPdf896P+Fz3X/Pej97/AM+4/cv8g54fzyPxt/4g
x/Bf/RTPG/8A36tv/jVH/EGP4L/6KZ43/wC/Vt/8ar9kv+Fz3X/Pej/hc91/z3o/e/8APuP3
L/IOeH88j8bf+IMfwX/0Uzxv/wB+rb/41R/xBj+C/wDopnjf/v1bf/Gq/ZL/AIXPdf8APej/
AIXPdf8APej97/z7j9y/yDnh/PI/G3/iDH8F/wDRTPG//fq2/wDjVH/EGP4L/wCimeN/+/Vt
/wDGq/ZL/hc91/z3o/4XPdf896P3v/PuP3L/ACDnh/PI/G3/AIgx/Bf/AEUzxv8A9+rb/wCN
Uf8AEGP4L/6KZ43/AO/Vt/8AGq/ZL/hc91/z3o/4XPdf896P3v8Az7j9y/yDnh/PI/G3/iDH
8F/9FM8b/wDfq2/+NUf8QY/gv/opnjf/AL9W3/xqv2S/4XPdf896P+Fz3X/Pej97/wA+4/cv
8g54fzyPxt/4gx/Bf/RTPG//AH6tv/jVH/EGP4L/AOimeN/+/Vt/8ar9kv8Ahc91/wA96P8A
hc91/wA96P3v/PuP3L/IOeH88j8bf+IMfwX/ANFM8b/9+rb/AONUf8QY/gv/AKKZ43/79W3/
AMar9kv+Fz3X/Pej/hc91/z3o/e/8+4/cv8AIOeH88j8bf8AiDH8F/8ARTPG/wD36tv/AI1R
/wAQY/gv/opnjf8A79W3/wAar9kv+Fz3X/Pej/hc91/z3o/e/wDPuP3L/IOeH88j8bf+IMfw
X/0Uzxv/AN+rb/41R/xBj+C/+imeN/8Av1bf/Gq/ZL/hc91/z3o/4XPdf896P3v/AD7j9y/y
Dnh/PI/G3/iDH8F/9FM8b/8Afq2/+NUf8QY/gv8A6KZ43/79W3/xqv2S/wCFz3X/AD3o/wCF
z3X/AD3o/e/8+4/cv8g54fzyPxt/4gx/Bf8A0Uzxv/36tv8A41R/xBj+C/8Aopnjf/v1bf8A
xqv2S/4XPdf896P+Fz3X/Pej97/z7j9y/wAg54fzyPxt/wCIMfwX/wBFM8b/APfq2/8AjVH/
ABBj+C/+imeN/wDv1bf/ABqv2S/4XPdf896P+Fz3X/Pej97/AM+4/cv8g54fzyPxt/4gx/Bf
/RTPG/8A36tv/jVH/EGP4L/6KZ43/wC/Vt/8ar9kv+Fz3X/Pej/hc91/z3o/e/8APuP3L/IO
eH88j8bf+IMfwX/0Uzxv/wB+rb/41R/xBj+C/wDopnjf/v1bf/Gq/ZL/AIXPdf8APej/AIXP
df8APej97/z7j9y/yDnh/PI/G3/iDH8F/wDRTPG//fq2/wDjVH/EGP4L/wCimeN/+/Vt/wDG
q/ZL/hc91/z3o/4XPdf896P3v/PuP3L/ACDnh/PI/G3/AIgx/Bf/AEUzxv8A9+rb/wCNUf8A
EGP4L/6KZ43/AO/Vt/8AGq/ZL/hc91/z3o/4XPdf896P3v8Az7j9y/yDnh/PI/G3/iDH8F/9
FM8b/wDfq2/+NUf8QY/gv/opnjf/AL9W3/xqv2S/4XPdf896P+Fz3X/Pej97/wA+4/cv8g54
fzyPxt/4gx/Bf/RTPG//AH6tv/jVH/EGP4L/AOimeN/+/Vt/8ar9kv8Ahc91/wA96P8Ahc91
/wA96P3v/PuP3L/IOeH88j8bf+IMfwX/ANFM8b/9+rb/AONUf8QY/gv/AKKZ43/79W3/AMar
9kv+Fz3X/Pej/hc91/z3o/e/8+4/cv8AIOeH88j8bf8AiDH8F/8ARTPG/wD36tv/AI1R/wAQ
Y/gv/opnjf8A79W3/wAar9kv+Fz3X/Pej/hc91/z3o/e/wDPuP3L/IOeH88j8bf+IMfwX/0U
zxv/AN+rb/41R/xBj+C/+imeN/8Av1bf/Gq/ZL/hc91/z3o/4XPdf896P3v/AD7j9y/yDnh/
PI/G3/iDH8F/9FM8b/8Afq2/+NUf8QY/gv8A6KZ43/79W3/xqv2S/wCFz3X/AD3o/wCFz3X/
AD3o/e/8+4/cv8g54fzyPxt/4gx/Bf8A0Uzxv/36tv8A41R/xBj+C/8Aopnjf/v1bf8Axqv2
S/4XPdf896P+Fz3X/Pej97/z7j9y/wAg54fzyPxt/wCIMfwX/wBFM8b/APfq2/8AjVH/ABBj
+C/+imeN/wDv1bf/ABqv2S/4XPdf896P+Fz3X/Pej97/AM+4/cv8g54fzyPxt/4gx/Bf/RTP
G/8A36tv/jVH/EGP4L/6KZ43/wC/Vt/8ar9kv+Fz3X/Pej/hc91/z3o/e/8APuP3L/IOeH88
j8bf+IMfwX/0Uzxv/wB+rb/41R/xBj+C/wDopnjf/v1bf/Gq/ZL/AIXPdf8APej/AIXPdf8A
Pej97/z7j9y/yDnh/PI/G3/iDH8F/wDRTPG//fq2/wDjVH/EGP4L/wCimeN/+/Vt/wDGq/ZL
/hc91/z3o/4XPdf896P3v/PuP3L/ACDnh/PI/G3/AIgx/Bf/AEUzxv8A9+rb/wCNUf8AEGP4
L/6KZ43/AO/Vt/8AGq/ZL/hc91/z3o/4XPdf896P3v8Az7j9y/yDnh/PI/G3/iDH8F/9FM8b
/wDfq2/+NUf8QY/gv/opnjf/AL9W3/xqv2S/4XPdf896P+Fz3X/Pej97/wA+4/cv8g54fzyP
xt/4gx/Bf/RTPG//AH6tv/jVZOuf8Ggvwt8MXEkWpfGfxBp8sMXnyJczWUTJHsmk3kNGMLst
7hs9NsEp6I2P2o/4XPdf896+fP2ovE8nivxN4quJG3lPDojz7DQfGJ/rXVhKEqsmpwilb+Ve
XkZVayilyzd/U/PD/iDH8F/9FM8b/wDfq2/+NUf8QY/gv/opnjf/AL9W3/xqv2e8Q/Fq503X
763WbAguJIwPQBiP6VT/AOFz3X/PeuRe0av7OP3L/I2c4p255H42/wDEGP4L/wCimeN/+/Vt
/wDGqP8AiDH8F/8ARTPG/wD36tv/AI1X7Jf8Lnuv+e9H/C57r/nvT/e/8+4/cv8AIXPD+eR+
Nv8AxBj+C/8Aopnjf/v1bf8Axqj/AIgx/Bf/AEUzxv8A9+rb/wCNV+yX/C57r/nvR/wue6/5
70fvf+fcfuX+Qc8P55H42/8AEGP4L/6KZ43/AO/Vt/8AGqP+IMfwX/0Uzxv/AN+rb/41X7Jf
8Lnuv+e9H/C57r/nvR+9/wCfcfuX+Qc8P55H42/8QY/gv/opnjf/AL9W3/xqj/iDH8F/9FM8
b/8Afq2/+NV+yX/C57r/AJ70f8Lnuv8AnvR+9/59x+5f5Bzw/nkfjb/xBj+C/wDopnjf/v1b
f/GqP+IMfwX/ANFM8b/9+rb/AONV+yX/AAue6/570f8AC57r/nvR+9/59x+5f5Bzw/nkfjb/
AMQY/gv/AKKZ43/79W3/AMao/wCIMfwX/wBFM8b/APfq2/8AjVfsl/wue6/570f8Lnuv+e9H
73/n3H7l/kHPD+eR+Nv/ABBj+C/+imeN/wDv1bf/ABqj/iDH8F/9FM8b/wDfq2/+NV+yX/C5
7r/nvR/wue6/570fvf8An3H7l/kHPD+eR+Nv/EGP4L/6KZ43/wC/Vt/8ao/4gx/Bf/RTPG//
AH6tv/jVfsl/wue6/wCe9H/C57r/AJ70fvf+fcfuX+Qc8P55H42/8QY/gv8A6KZ43/79W3/x
qj/iDH8F/wDRTPG//fq2/wDjVfsl/wALnuv+e9H/AAue6/570fvf+fcfuX+Qc8P55H42/wDE
GP4L/wCimeN/+/Vt/wDGqP8AiDH8F/8ARTPG/wD36tv/AI1X7Jf8Lnuv+e9H/C57r/nvR+9/
59x+5f5Bzw/nkfjb/wAQY/gv/opnjf8A79W3/wAao/4gx/Bf/RTPG/8A36tv/jVfsl/wue6/
570f8Lnuv+e9H73/AJ9x+5f5Bzw/nkfjb/xBj+C/+imeN/8Av1bf/GqP+IMfwX/0Uzxv/wB+
rb/41X7Jf8Lnuv8AnvR/wue6/wCe9H73/n3H7l/kHPD+eR+Nv/EGP4L/AOimeN/+/Vt/8ao/
4gx/Bf8A0Uzxv/36tv8A41X7Jf8AC57r/nvR/wALnuv+e9H73/n3H7l/kHPD+eR+Nv8AxBj+
C/8Aopnjf/v1bf8Axqj/AIgx/Bf/AEUzxv8A9+rb/wCNV+yX/C57r/nvR/wue6/570fvf+fc
fuX+Qc8P55H42/8AEGP4L/6KZ43/AO/Vt/8AGqP+IMfwX/0Uzxv/AN+rb/41X7Jf8Lnuv+e9
H/C57r/nvR+9/wCfcfuX+Qc8P55H42/8QY/gv/opnjf/AL9W3/xqj/iDH8F/9FM8b/8Afq2/
+NV+yX/C57r/AJ70f8Lnuv8AnvR+9/59x+5f5Bzw/nkfjb/xBj+C/wDopnjf/v1bf/GqP+IM
fwX/ANFM8b/9+rb/AONV+yX/AAue6/570f8AC57r/nvR+9/59x+5f5Bzw/nkfjb/AMQY/gv/
AKKZ43/79W3/AMao/wCIMfwX/wBFM8b/APfq2/8AjVfsl/wue6/570f8Lnuv+e9H73/n3H7l
/kHPD+eR+Nv/ABBj+C/+imeN/wDv1bf/ABqj/iDH8F/9FM8b/wDfq2/+NV+yX/C57r/nvR/w
ue6/570fvf8An3H7l/kHPD+eR+Nv/EGP4L/6KZ43/wC/Vt/8ao/4gx/Bf/RTPG//AH6tv/jV
fsl/wue6/wCe9H/C57r/AJ70fvf+fcfuX+Qc8P55H42/8QY/gv8A6KZ43/79W3/xqj/iDH8F
/wDRTPG//fq2/wDjVfsl/wALnuv+e9H/AAue6/570fvf+fcfuX+Qc8P55H42/wDEGP4L/wCi
meN/+/Vt/wDGqP8AiDH8F/8ARTPG/wD36tv/AI1X7Jf8Lnuv+e9H/C57r/nvR+9/59x+5f5B
zw/nkfjb/wAQY/gv/opnjf8A79W3/wAao/4gx/Bf/RTPG/8A36tv/jVfsl/wue6/570f8Lnu
v+e9H73/AJ9x+5f5Bzw/nkfjb/xBj+C/+imeN/8Av1bf/GqP+IMfwX/0Uzxv/wB+rb/41X7J
f8Lnuv8AnvR/wue6/wCe9H73/n3H7l/kHPD+eR+Nv/EGP4L/AOimeN/+/Vt/8ao/4gx/Bf8A
0Uzxv/36tv8A41X7Jf8AC57r/nvR/wALnuv+e9H73/n3H7l/kHPD+eR+Nv8AxBj+C/8Aopnj
f/v1bf8Axqj/AIgx/Bf/AEUzxv8A9+rb/wCNV+yX/C57r/nvR/wue6/570fvf+fcfuX+Qc8P
55H42/8AEGP4L/6KZ43/AO/Vt/8AGqP+IMfwX/0Uzxv/AN+rb/41X7Jf8Lnuv+e9H/C57r/n
vR+9/wCfcfuX+Qc8P55H42/8QY/gv/opnjf/AL9W3/xqj/iDH8F/9FM8b/8Afq2/+NV+yX/C
57r/AJ70f8Lnuv8AnvR+9/59x+5f5Bzw/nkfjb/xBj+C/wDopnjf/v1bf/GqP+IMfwX/ANFM
8b/9+rb/AONV+yX/AAue6/570f8AC57r/nvR+9/59x+5f5Bzw/nkfjb/AMQY/gv/AKKZ43/7
9W3/AMao/wCIMfwX/wBFM8b/APfq2/8AjVfsl/wue6/570f8Lnuv+e9H73/n3H7l/kHPD+eR
+Nv/ABBj+C/+imeN/wDv1bf/ABqj/iDH8F/9FM8b/wDfq2/+NV+yX/C57r/nvR/wue6/570f
vf8An3H7l/kHPD+eR+Nv/EGP4L/6KZ43/wC/Vt/8ao/4gx/Bf/RTPG//AH6tv/jVfsl/wue6
/wCe9H/C57r/AJ70fvf+fcfuX+Qc8P55H42/8QY/gv8A6KZ43/79W3/xqj/iDH8F/wDRTPG/
/fq2/wDjVfsl/wALnuv+e9H/AAue6/570fvf+fcfuX+Qc8P55H4ofCj/AINDvhz8VPhrofiO
H4m+NrWPWrKO7ELm1LRb1BK5EHOD3roP+IM/4ff9FU8Zflbf/GK/WH9j3/k1zwH/ANga3/8A
Qa9Jrz8bKNPEVIRirKTX4nZh4udKM5Sd2l+R+Kv/ABBn/D7/AKKp4y/K2/8AjFH/ABBn/D7/
AKKp4y/K2/8AjFftVRXN7b+6jb2X95n4q/8AEGf8Pv8AoqnjL8rb/wCMUf8AEGf8Pv8Aoqnj
L8rb/wCMV+1VFHtv7qD2X95n4q/8QZ/w+/6Kp4y/K2/+MUf8QZ/w+/6Kp4y/K2/+MV+1VFHt
v7qD2X95n4q/8QZ/w+/6Kp4y/K2/+MU7wh/wZ0/B/wAStq8Uvxg+JVtdaLqH9nXCpp9lJGz/
AGa3uQUYhSV2XMY5UHcGGMAE/tRXKfDP/kYviD/2NC/+mfSqPaXT0QclmtWfkdon/BnD8DfE
1k9zpvx3+IGoW8dxNaPLbWGnyok0MrwzRkqxAeOWN42XqroynBBFffH/AATg/YN0f9i39kDS
fhn4F8feOP7H0DXNYaS9uLXTluL2c3rxSFleCVVRWg+TaQSCS3UKvC/8ExP2XPiz8JP2q/2i
vGviXVLvRfhx428b63Ponhe5jy9/MdSkxq4Dc26NEvlrjm4Rldhsit2f6q+A3/Ij33/Yxa7/
AOna8qeZ8tx8q5rB/wAKu1z/AKKP4z/8BtJ/+Qq+a5LZ7Dxt4hhluZrySG1hR7iYIJJyPEXi
wF2CKq7j1O1QMngAcV9kV8aeJpzD8RvE/wD1xj/9SPxbXo5U2669f8zz81j+4f8AXY/Kr/g4
gP8AxlP8Nv8AsncH/p51eioP+DhaYyftQfDQ+vw7h/8ATzq9FViv40/V/mLDL9zD0X5FXwV4
cs5/2hvgXM4XzP8AhHfh8fy0PSMfyr9gf+CbXgfxHff8E8Pgl/YWsXejaYnhG1EcS6vbAyuS
7ySFZNJnK5kZwAJCAip3yT+Huj+P7mz/AGuvgXZrnYNB+HK/99aFo5/rX74/8EoJTN/wTH+A
7nq/g60Y/iXrnxM/cil2OjDw96Vzuv8AhXPjv/ocL/8A8HFj/wDKKubsP2XNa0wXHkeINQj+
1XU95L/xUFsd0s0rSyN/yBOMu7HA4GcAAcV7XRXNDEVIfA7HROhCfxK543/wzd4g/wChk1H/
AMH9t/8AKOj/AIZu8Qf9DJqP/g/tv/lHXslFX9cr/wAzI+qUf5UeN/8ADN3iD/oZNR/8H9t/
8o6P+GbvEH/Qyaj/AOD+2/8AlHXslFH1yv8AzMPqlH+VHjf/AAzd4g/6GTUf/B/bf/KOj/hm
7xB/0Mmo/wDg/tv/AJR17JRR9cr/AMzD6pR/lR43/wAM3eIP+hk1H/wf23/yjo/4Zu8Qf9DJ
qP8A4P7b/wCUdeyUUfXK/wDMw+qUf5UeN/8ADN3iD/oZNR/8H9t/8o6P+GbvEH/Qyaj/AOD+
2/8AlHXslFH1yv8AzMPqlH+VHjf/AAzd4g/6GTUf/B/bf/KOj/hm7xB/0Mmo/wDg/tv/AJR1
7JRR9cr/AMzD6pR/lR43/wAM3eIP+hk1H/wf23/yjo/4Zu8Qf9DJqP8A4P7b/wCUdeyUUfXK
/wDMw+qUf5UeN/8ADN3iD/oZNR/8H9t/8o6P+GbvEH/Qyaj/AOD+2/8AlHXslFH1yv8AzMPq
lH+VHjf/AAzd4g/6GTUf/B/bf/KOj/hm7xB/0Mmo/wDg/tv/AJR17JRR9cr/AMzD6pR/lR43
/wAM3eIP+hk1H/wf23/yjo/4Zu8Qf9DJqP8A4P7b/wCUdeyUUfXK/wDMw+qUf5UeN/8ADN3i
D/oZNR/8H9t/8o6P+GbvEH/Qyaj/AOD+2/8AlHXslFH1yv8AzMPqlH+VHjf/AAzd4g/6GTUf
/B/bf/KOj/hm7xB/0Mmo/wDg/tv/AJR17JRR9cr/AMzD6pR/lR43/wAM3eIP+hk1H/wf23/y
jo/4Zu8Qf9DJqP8A4P7b/wCUdeyUUfXK/wDMw+qUf5UeN/8ADN3iD/oZNR/8H9t/8o6P+Gbv
EH/Qyaj/AOD+2/8AlHXslFH1yv8AzMPqlH+VHjf/AAzd4g/6GTUf/B/bf/KOj/hm7xB/0Mmo
/wDg/tv/AJR16jf+NdH0rxRp+iXWraZba1q0cs1jp8t0iXV6kW0ytFGTudU3LuKghdwzjNYn
gr4/+BPiVrv9l+HfGvhLX9TEUs5tNN1i3up/LikEUr7I3LbUkIRjjCsQDg8ULGV3opP/AIbc
PqlFa8qOJ/4Zu8Qf9DJqP/g/tv8A5R0f8M3eIP8AoZNR/wDB/bf/ACjrr2/aT+HSeI7XRz4+
8FDV77UptGtrE65bfabi+hKCW1SPfuadDIgaMDcu9cgZFanhj4t+FfG3irWdC0bxN4f1fW/D
rrHq2nWWow3F1pbNnas8SMXiJwcBwM4PpQsZXeqk/wCv6QPCUVvFHnn/AAzd4g/6GTUf/B/b
f/KOj/hm7xB/0Mmo/wDg/tv/AJR10Wn/ALW3wq1bx/8A8Ina/E34fXPik3TWI0aLxFZvqBuF
JDQ+QJPM8wEEFduQQeK9Co+uV2uZSdg+qUU7cqueN/8ADN3iD/oZNR/8H9t/8o6P+GbvEH/Q
yaj/AOD+2/8AlHXslFH1yv8AzMPqlH+VHjf/AAzd4g/6GTUf/B/bf/KOj/hm7xB/0Mmo/wDg
/tv/AJR163BrVnc6tcWEd3bSX1pHHLPbrKplhSQsEZlzlQxR8EjnY2OhqzR9cr/zMPqlH+VH
jf8Awzd4g/6GTUf/AAf23/yjo/4Zu8Qf9DJqP/g/tv8A5R122iftAeA/E3xHvPB2m+NvCOoe
LtO3m70O21i3l1K22Y377dXMi7cjOVGMjNddR9cr2vzPUPqlG9uVHjf/AAzd4g/6GTUf/B/b
f/KOj/hm7xB/0Mmo/wDg/tv/AJR17JRR9cr/AMzD6pR/lR43/wAM3eIP+hk1H/wf23/yjo/4
Zu8Qf9DJqP8A4P7b/wCUdeo3njXR9P8AFdloU+raZBrmpQS3Vpp0l0i3d1FEVEkkcRO9kQum
5gCBuXPUVp0fXK+/Mw+qUduVHjf/AAzd4g/6GTUf/B/bf/KOj/hm7xB/0Mmo/wDg/tv/AJR1
7JVbSdas9fsvtFjd217b+ZJF5sEqyJvjdo3XIJGVdWUjsVIPIo+uV/5mH1Sj/KjyT/hm7xB/
0Mmo/wDg/tv/AJR0f8M3eIP+hk1H/wAH9t/8o69kqtpOtWev2X2ixu7a9t/Mki82CVZE3xu0
brkEjKurKR2KkHkUfXK/8zD6pR/lR5J/wzd4g/6GTUf/AAf23/yjo/4Zu8Qf9DJqP/g/tv8A
5R16l4v8ZaR8PvDV3rOvarpuiaPYJ5l1fahcpbW1suQNzyOQqjJAyT3rRRxIgZSCpGQQeCKP
rlf+Zh9Uo/yo8c/4Zu8Qf9DJqP8A4P7b/wCUdH/DN3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9U
o/yo8b/4Zu8Qf9DJqP8A4P7b/wCUdH/DN3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4
Zu8Qf9DJqP8A4P7b/wCUdH/DN3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJ
qP8A4P7b/wCUdH/DN3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJqP8A4P7b
/wCUdH/DN3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJqP8A4P7b/wCUdH/D
N3iD/oZNR/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJqP8A4P7b/wCUdH/DN3iD/oZN
R/8AB/bf/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJqP8A4P7b/wCUdH/DN3iD/oZNR/8AB/bf
/KOvZKKPrlf+Zh9Uo/yo8b/4Zu8Qf9DJqP8A4P7b/wCUdS2/7Pvii0jdIvFerRJJ99U8RW4D
fKy8/wDEk5+VmH0Yjua9foo+uVv5mH1Sivso8euP2dvEl3O8svifVJJZGLu7+ILYs5PJJP8A
YnJpn/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUd
H/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN
3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/
AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGT
Uf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8A
wf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23
/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo
69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69ko
o+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV
/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZ
h9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo
/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAq
PG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+
GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvE
H/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qy
aj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4
P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/
AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUd
H/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN
3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/
AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGT
Uf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8A
wf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23
/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo
69koo+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69ko
o+uV/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV
/wCZh9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZ
h9Uo/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo
/wAqPG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAq
PG/+GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+
GbvEH/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69koo+uV/wCZh9Uo/wAqPG/+GbvE
H/Qyaj/4P7b/AOUdH/DN3iD/AKGTUf8Awf23/wAo69kryP41/tL6R8Bre41Txf4x0PwdoUni
CHw5YS3ehXN/5tzJZxXI8ySKdRGCHk+ZlCKE5bms6mY1KceepUsu7dvI6sDks8bXjhcHRdSp
LRRjFyk35JXb+Q/wf8C/FXgHwtYaLpPii/tNM0yFbe2h/tu0k8tF4A3NoZY/UkmtL/hXPjv/
AKHC/wD/AAcWP/yipF+LMT/Glfh4PG+hnxg+jDxAtgPCl0QbEzGETeb9r8r/AFgI2793fGOa
6z+x/En/AEMeif8AhOS//J1Z/WPaNzunq7vR631+d9/Muvl9bDKCrU5Q5kpRumrxezV907aN
aHKf8K58d/8AQ4X/AP4OLH/5RUf8K58d/wDQ4X//AIOLH/5RV1f9j+JP+hj0T/wnJf8A5Oqn
eeNbq3+B48SiO3+3Hw6mreWVPleabUTFcZzt3HGM5x370+d/0jDlRgf8K58d/wDQ4X//AIOL
H/5RUf8ACufHf/Q4X/8A4OLH/wCUVUPih+0BpHwa8Z2/h7xD8RvDVnrN04jjtY/C91cSAkgL
uEd22zJIxvxnqOhruv7H8Sf9DHon/hOS/wDydRztq629A5EnZnKf8K58d/8AQ4X/AP4OLH/5
RUf8K58d/wDQ4X//AIOLH/5RV1f9j+JP+hj0T/wnJf8A5Oo8M32pf8JJq+najd2V99htrK5i
ltrFrT/XNdKysrTS5x9nUggj7x4o53/SDlRyn/CufHf/AEOF/wD+Dix/+UVVdM+D3jHR7jUJ
bbxXfxyapdfbbo/21ZnzZvJig3c6Fx+7giXAwPlzjJJNz4h/Fxfh3pniDWtb8T6H4a0HR9Sj
01GuNGmvpZXa2t5v+WdwhYkzkbVQkBCTwCRf+Hviq9+Kng+z17QvFujXuk34ZredvCtzD5gV
ipO2S8VsZU9Rz24pqbd7fkDgtLmd/wAK58d/9Dhf/wDg4sf/AJRVS0H4MeL/AAxYvbWPiq/g
hkuZ7tl/tqzbMs8zzStk6ET80kjnHQZwAAAK7j+x/En/AEMeif8AhOS//J1cd8WfitffBy60
OLU9btJ21+6e0tza+GGYI6xtIS+7UFwMKemeal1bLUap3ehN/wAK58d/9Dhf/wDg4sf/AJRV
83fFWQeFPHurW6m6lkXStP8AtEk9wk7yznW/FRmfekMKkNJvYYiTAIGOK+oP2efiz/wuz4dP
rytvgkvDDbsbP7I7RfZreUb4/OmCtumYcSEEAdOa+d/inpceo/FXxF5nbT7L/wBPvimvVyrm
+txizzcy5fqzkj8h/wDgv/qRu/2kPhjIR974dxf+nrWBRV7/AIODtPjtP2m/hpGvQfDyHH/g
61iirxSft5+r/MWGa9jD0X5HH+FfCEN7+1T8Dbkj5v8AhH/h22cemhaP/hX7uf8ABKaPyv8A
gmb8Cl/u+ELUf+PPX4O+GvHUGn/tVfA21LDf/YHw7X89C0f/ABr94f8AglJL5/8AwTL+BD/3
/B9q35l648VbkjY6cPfmlc9/oooriOsKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigD4i/wCClXgHxf8AFP8Aa9+E/hvwPrcPhvxBrfg/
xdaRamylntVaC0z5YBGHbhA24bN5cElADwOk/GbwX8PP2f8A4BfHjw74ft/CWl/COa88DeMt
AtxtbQlniaCe3kyWb93fx27hmJd1m3HLOa/RuipinGHLF/P/ALek/wAYzlF/erap1JqUryXd
fekvvTjFp9LejX5S/FH4NXvwnh/Z3v8AVrXd4rtNA8RfEvWPlLSf2iLvT9WnGTzldpj/AN1c
HivZ/wDgl3brcftjfFnW9m1/GXhvSfFDNtwZFv8AUNVuoyfX91LGB7KOwFfedFaqSi1yqyXN
ZeUubT5JxX/bvpbNpyg4yd2+Vt+aad/naX/gX3/lJHd+I9L8MRnxHqWj23wPu/2iNS/t+5tN
NkXWdEuotTMlnK1y05i+yyXSxpIwhR41Iwz7+Or+EPxw+J3iP9ph01HxvZad4vg8T+IrTX/D
lx4u1W5vJNJiilFuq6HHZtaaekca20sV75yedn5pGabbX6YUVhyWpKlfZW/8lpxv/wCSbdnb
o76ud6jqd3f/AMmqSt/5P98ebfb8qP2e5vFHjjwb8FZNT+JnxcuZPHfwp8Ra/rbf8JxqaNdX
llJD9llUrMDCU38+Vs8zaBJvDOGt6h+094v8T/CbSdV8ZfEDxf4c17/hTek6x8ORp2qXFj/w
l3iNwfPzFGwTUrnzltENrIJF2TsfJO8tX6mUVtN8zdtLtvTprU281zpJ/wBxeVs6a5Ur62t8
7ez38nyO6/vv5/lN8QvGfij4R/Eb9o7U7LVdQ0L4h6r/AMIZea3anxLd272ulz2tsNXu4xuk
aGGF3Mf2uOF2tEYiPYBivqr9g7xB4u8dfBH4qR2fjDRvE9gNUu4PCF5YeI9R8Sx6cWtlzbjV
r23ga+SO4JKyq0gXeYzJmPA+r6KmaUoSpvRSVtOi91aenLp2cpNbtNwvFwe7i09etr6vzd9f
JJdEz4o/Ym+Mfwy0b9lz4U+A1i0K/wDjR4U0i4jg8N3Fgb7WtD1uKGVbyS4jVDJZb5TIGnlM
SuJgPMO8Z8Bsf2hvGK/A+4vvDPxC8b614pufhZ4l1P4pRXOtXUsng/W4kJttsTN/xKp1uDcR
JBCsIaOLdtbyw1fqrRRWvUc5PRyvt0upLT/wJO392Pa46NqfKlqotb9bNPX7nrp8UtHex+Z9
v4x8Z6J8Ovjzodr8WvF2iR2fhDwXrltrOuahqeqrplzeQSSXjNND5lzZ285jxJJDtS3VmdQg
Xh/hv9ozVNT8C6Vp994n8V6L8J4fil/ZfibxbZ/EO517To7BrETwpaeIfKt7qOya8aOGR5JC
8Z/dmdQ20fpbRVynerKpbRvb0lGVvNWXLbbW9t7wotQUb7Lf/tzlb9b+9fe/Xqfk7qnxb+KL
6/o3iHw5da1r+vaR4N+IKeAdUuojeahq2lRXdkLC6+cFrhzECUkbeZgqOS5fJ+lv+CXPxA1v
xr438X+R440fxd4MGkaVNHb2vjTVfGUmm6iyN5xOoXlpCimVNjPaJI5hdc7EEnP2ZRU03yJR
etlb11m7vu3za+av2s6i5m2tLu/4QVl5LlsvJtbXv+dngT4maxF+3nN4Zv8Ax74i8bv4m8Xa
zpk1no3jPU9N1DQtOeyLpFfeHri32wW8LKBFf2csBbzI5A7bsN4V8Nvi9aeCf2HPhX4Q0fxx
rXhS8fR/E15fajL411K2tLLV7aVdmmrBazxXEmoMjxeVZC5hjXzzK0Fw7KrfsRRWXI/Zezv0
Sv6Jpfc3zetm7631U0qjqNbu/wB7u/vXu+myPz6/ZQ8Qa5+1V8dvhhB4r8a+OLywm+A+k+Ir
m10nxRqGlW97qbXzxPczC1liMkpAKtk7W7g4XHz58F/ijfeAv2Wfgz4fs/Gkfh/wY+l+Ihr9
1ffEnUfDMOm+IYpv3NrLe26XE0EqW7GaPT9scczEuVduv7EUVrVfO5OOl7v75Ta7bc+nW8Yt
WtYzpXgrPW3L8rRin/4Fy6+Tad9z8jf20Pif4o8QfATxRpnxc8eanFqf/CpdGvPDEGn3N7pt
j4svpLlxf3BspUgNzJsWEuk8ANujtIETAcfowv7aPw00TxGPDV14l8rWrXVo/Dktt/Z90dl+
bH7cIdwi2n/RgX3A7P4d275a9aoqpzbvbu398m/ybXrZ6bOVH4b9Fb/yWK/ON/m15nP/AAq+
KWhfG34c6N4t8MX39p+H/EFql7YXfkyQ/aIm5Vtkiq659GUH2roKKKl2vpsNXtruFFFFIYUU
UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFA
BRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUU
UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFfLP7
cf7NOp/tR2PhfRbXQ49f0bTfjBpup+I7d7mOFU0pdKt47lzvdSw2yAbUy53cDivqasOXwP8A
8TG+uLfXfEOnf2jOLqaG1NmYvMEUcW4ebbOwykSZG4jIJGM1zYzCU8TS9jWV43Tt3s07Pydt
fI9vhzP8VkmY0s0wVvaU7uN77tNX0ad1e6s1qfn7L/wTU+LulfFT4kaL9rj8ReDT8LrzwZ4L
1m8vIlmSJrpbi1sLhd3mb490kfm7NmxExt+4LWvfsjfGv47eAviJqPiPwDH4S1fxCnhHSLLS
bbxHa3Nw8GmXKvc3YuEdUj+Usyru3gqQMkAt98f8IXcf9DX4s/8AKd/8h0f8IXcf9DX4s/8A
Kd/8h14i4Xwqa96TS6Np3+Pe6u7KpNavVPW7SZ+qf8R84gcoValGg6kfZtS5Jp3pzpzbtGoo
3qSpQdT3ej5OS7v8J6d/wTc8U+B/HMVz4f0LXLXTfCPxZ0/VvCsB8UNLBYaC2w38qRyXJ5dh
8wcec2OAQTn7I1T/AJNMX/sSYv8A03LXSf8ACF3H/Q1+LP8Aynf/ACHU914MsbnwGPDebsaY
NMXSd3mL5/krCIc7tu3ftGc7cZ7dq9PLcro4Gm6dC9m09bdIqPRLpFHwnGHHmacS+xeaNN0k
0mua7uo3u5Sldtxc3tecpyesjx/WPBPjz4LftQ+M/EOgeEbLxtpHjq7hkluDqsVjd6R5fyMr
eYPnj5JVU/u844z5Xp/7L/jvRfjKNbv9B86G18Q32p3uuf2/5v8AaNtJBKIT9lZsJ5YOzON/
zYA2jNfV83hG6uJmdvFniws5LMf+JdyT/wBudRy+Bpp4mR/FPip0cFWVhpxDA9QR9jrucfdS
XRWX3pr7rfde58cpe9d9Xd/c1+p8Vfs7/ADxpr3w2TWdG0m90+01LwZdWBvLTX/Nm12aViIE
EUzKsAi6EZC4BKkk19H/ALG3hvXvCPhi/wBP8SeHND8LapbadpqPY6UEEW0TakFkYIWXzHxu
bDNknOQSVHfaH8LovDGkwWGm694i0+xtV2Q21tDpkUUK+iqtkAB9BWn4f8LR+H7y/ujqGran
d6gkETy3rQfIkJmKKohhjHWdyScnpW0patrr/m3+pio6JPp/kl+h4z+0x8PNQ+JPgjWrTSfD
Wr+ItRtPGUV5btpmuQaPc6c8emWRE6TSqy5BO3G0n5sjBUGvNvEP7N/xO8QeCvh23i3R5/iL
Ho8t22oaDceIxbTwmQ5gkkusgTPEMgkE9cDKkmvqaXwP/wATG+uLfXfEOnf2jOLqaG1NmYvM
EUcW4ebbOwykSZG4jIJGM0f8IXcf9DX4s/8AKd/8h1EPdd13v/X+e66NFy1VvK39f5bd0fMZ
/Yx17/hUXj+5t9Ha28Z65rd0Y4zqxQappLXKTfZgyyGOPzQG5IU84YgE1l+HP2UvHlpo+iiH
wz/YmmQeJp9Rt9A/tqO9/sO2a0aM/vnf598hzhSSM5OMmvrD/hC7j/oa/Fn/AJTv/kOj/hC7
j/oa/Fn/AJTv/kOp5Vy8vSyX3W/y/F2tdjvrfzb++/8An+XZHn/7EfgjVPhv8CzoetW32PVN
N1HyrmDzEk8tvsFicbkJU8EdCa8f+Jik/FXxFj/oH2f/AKffFNfVvhzw5F4ZtbtEu9Qvpr67
N5PPeNEZHcxRRAARRxqAEhT+H15r5U+JJx8VvEX/AGD7P/0+eKa9zJZc+PjJ9bnj5uuTBOK6
WPyS/wCDhNT/AMNP/DT/ALJ5D/6etXop3/Bwr/ydB8Nf+yeQ/wDp51eirxv+8VP8T/Mzwn8C
HovyPkn4gfFWLwb+0B8JfG1uG1DSdN8K+Cb2ExHC3Z0/SNPtblFJ4ylzZ3MJ9HiYdq+y/gx/
wcA3PwI+EPhfwToKRDRfCWl2+k2P2nwxqpnaKGMIGkMPiiKIyNjc5jijUszEIoOB+Xvww/am
1T4c+HrbQNV0Hwz488MWMks1ppHiGG4MdlJIcsYbi1mguolJG4xpMI2YlmRic11Y/bZ8OD/m
3v4M/wDgf4q/+XVebzwaSktj0eWSd4n6ef8AESvr/wDdsf8AwmNb/wDmspR/wcra+e1h/wCE
xrf/AM1lfmF/w214c/6N7+DP/gd4q/8Al1R/w214c/6N7+DP/gd4q/8Al1S/ddiv3nc/Tw/8
HK+vj+Gx/wDCY1v/AOaygf8ABytr57WP/hMa3/8ANZX5h/8ADbXhw/8ANvfwZ/8AA/xV/wDL
qhf22vDmf+Te/gz/AOB3ir/5dUXpdhfve5+n3/ESnr57WH/hMa3/APNZSn/g5R8QAdLD/wAJ
jW//AJrK/MP/AIbZ8Of9G+fBn/wO8Vf/AC5pf+G2/Do/5t8+DP8A4HeKf/lzRel2D973P06H
/Bylr57WH/hMa3/81lOX/g5Q18sBiw5/6ljW/wD5rK/MM/ts+HT/AM2+fBn/AMDvFP8A8uaF
/bb8Oq2f+GfPgz/4HeKf/lzTvR7B+97n6teHP+DhzxBr0qrusF3H/oWtbH/u1mvpD9n3/gpJ
4k+OFoki6vpNtvxw2ha4n/uytX4WaP8At+aTpjAwfAL4Mpjp/pnig/z1mvXPhX/wWP174eRK
mj/Bz4N2ajoA/iJ//QtWNdeHeET/AHkWzlrrFNfu5JH7v3vxn8YQQb08R6G3Gcf2Prp/92Ks
yx+P3je6uAja9oQBPX+xtd/+aOvyBX/g4G+JIjA/4Vl8HcY/5567/wDLSmr/AMHAHxHRsj4Y
fB3/AL9a7/8ALSvR58r/AOfcvw/zOHkzL+dfj/kftPYfEzxVdJl/FGhqf+wTrv8A80NWm8fe
Jwv/ACNWh/8Agq13/wCaCvxUX/g4R+Jq9Phn8Hv+/Wu//LSnf8RCvxPH/NNPg7/3613/AOWl
V7TKv+fcvw/zJ9nmf/Pxf18j9l9R+Kni+0PyeJdDb/uE67/80NUD8ZvGv/Qw6F/4J9d/+aKv
x2b/AIOEfia3X4Z/B3/v1rv/AMtKb/xEG/Ez/omXwd/79a7/APLSjnyr/n3L8P8AMfJmf88f
x/yP2MHxk8an/mYdC/8ABPrv/wA0VSQ/F/xnJ18RaEP+4Prv/wA0VfjeP+Dg74mAf8ky+Dv/
AH613/5aUo/4OD/iYP8Ammfwd/79a7/8tKXPlX/PuX4f5hyZn/PH8f8AI/ZhPip4wKZPiXQv
/BRrv/zQ1WuPjD4zifA8RaEf+4Prv/zRV+OP/EQl8Tf+iZ/B3/v1rv8A8tKQ/wDBwf8AEwn/
AJJn8Hf+/Wu//LSnz5V/z7l+H+YuTM/54/j/AJH7Fr8ZPGpb/kYtC/8ABPrv/wA0VWY/iz4y
dM/8JJoQ/wC4Rrv/AM0VfjX/AMRB/wATP+iZ/B3/AL9a7/8ALSlH/Bwj8TR/zTP4O/8AfrXf
/lpRz5V/z7l+H+Y+TM/54/j/AJH7ISfF7xkjY/4SPQv/AAUa7/8ANFSr8XfGTD/kY9C/8FGu
/wDzRV+Np/4OEPiaf+aZ/B3/AL9a7/8ALSgf8HCHxNH/ADTP4O/9+td/+WlHPlX/AD7l+H+Y
cmZfzr+vkfsbL8YvGiH/AJGLQv8AwT67/wDNFRD8YvGkh58RaF/4J9d/+aKvxzH/AAcH/Exj
/wAkz+Dv/frXf/lpSj/g4M+Jg/5pn8Hf+/Wu/wDy0o5sq/59y/D/ADDkzL+eP9fI/ZWL4q+M
X6+JNC/8FGu//NDTbr4seMYDx4k0M/8AcI13/wCaKvxuH/Bwh8TR/wA00+Dv/fnXf/lpSH/g
4P8Aiaf+aZ/B3/vzrv8A8tKfNlX/AD7l+H+YuXM/54/18j9j4vi34ykH/Ix6EP8AuEa7/wDN
FUqfFbxiW58SaF/4KNd/+aGvxrH/AAcHfEwf80z+Dv8A3513/wCWlL/xEH/E3/omfwd/7865
/wDLSkpZV/z7l+H+YcmZ/wA6/r5H7K3HxV8XRJkeJdDP/cJ13/5oapL8ZvGZkwfEOh49f7H1
3/5oq/HY/wDBwd8TD/zTP4O/9+td/wDlpSf8RBfxL/6Jl8Hf+/Wu/wDy0o58q/59y/D/ADBQ
zP8Anj/XyP2Ub4teLxHn/hJNDz6f2Trv/wA0NUX+NfjRW/5GDQ//AAT67/8ANFX49f8AEQd8
TMf8kz+Dv/frXf8A5aU0/wDBwT8Sz/zTL4O/9+td/wDlpS58q/59y/D/ADHyZl/PH8f8j9iV
+NXjNh/yMOh/+CfXf/mipG+NPjQH/kYdC/8ABPrv/wA0Vfjv/wARBHxK/wCiZfB3/v1rv/y0
oP8AwcE/Es/80y+Dv/frXf8A5aUc+Vf8+5fh/mHJmX88fx/yP2MT4y+MiP8AkYtD/wDBRrv/
AM0VNb40eM1/5mHQ/wDwUa7/APNFX46/8RBHxK/6Jl8Hf+/Wu/8Ay0o/4iB/iUf+aZfB3/v1
rv8A8tKOfKv+fcvw/wAw5My/nj/XyP2IPxr8Z/8AQw6H/wCCfXf/AJoqaPjT42Y/8jBoX/gn
13/5oq/Hj/iIF+JP/RMvg7/3613/AOWlOX/g4K+Ja/8ANMvg7/3613/5aUc2Vf8APuX4f5j5
My/nj+P+R+xI+MfjY/8AMw6F/wCCfXf/AJoqkj+LnjZz/wAjFoP/AIJ9d/8Amir8dB/wcIfE
xf8Ammfwc/7867/8tKUf8HCfxOH/ADTP4Of9+dc/+WlHNlX/AD7l+H+YuTMv54/j/kfspD8T
/GTnnxLoX/go13/5oq0bXx34rmX5vFWhD/uFa7/80Ffi4P8Ag4Y+J4/5pp8Hf+/Ouf8Ay0pw
/wCDhz4oj/mmvwd/7865/wDLSmp5V/z7l+H+YuTM/wDn4v6+R+058beKB/zNeh/+CvXf/mgp
P+E28Uf9DXof/gq13/5oK/Fk/wDBw78USP8Akmvwd/7865/8tKT/AIiG/ij/ANE1+Dv/AH51
z/5aU+fKv+fcvw/zF7PM/wDn4vx/yP2mPjnxOD/yNeh/+CrXf/mgo/4TrxP/ANDXof8A4Ktd
/wDmgr8Wf+Ihv4o/9E1+Dv8A351z/wCWlH/EQ38Uf+ia/B3/AL865/8ALSjnyr/n3L8P8w9n
mf8Az8X9fI/aX/hO/FH/AENWh/8Agr13/wCaCkPjzxQB/wAjVof/AIK9d/8Amgr8XF/4OGPi
g3/NNfg7/wB+dc/+WlL/AMRC3xP/AOia/B3/AL865/8ALOjnyr/n3L8P8w5Mz/5+L8f8j9oP
+FgeJv8AoatE/wDBXrv/AM0FDfELxMP+Zp0T/wAFeu//ADQV+Ln/ABELfE9h/wAk0+Dv/fnX
f/lpSj/g4S+JxH/JNPg9/wB+td/+WlL2mVf8+5fh/mP2eZ/8/F/XyP2g/wCFi+Jv+hp0T/wV
67/80FIfiR4m/wChp0T/AMFeu/8AzQV+MH/EQj8Tv+iafB3/AL867/8ALSk/4iD/AIm/9Ez+
Dv8A3613/wCWlHtMq/59y/D/ADD2eZ/8/F+P+R+zx+JfiUf8zRov/gr1z/5oKQ/E3xMP+Zo0
X/wV65/80FfjD/xEGfEz/omfwd/79a7/APLSk/4iCfiX/wBEz+D3/frXf/lpT58q/wCfcvw/
zFyZl/z8X4/5H7O/8LS8Sf8AQ0aL/wCCvXP/AJoKQ/FPxKP+Zn0X/wAFeuf/ADQV+MR/4OB/
iUf+aZ/B7/v1rv8A8tKRv+Dgf4lAf8kz+Dv/AH613/5aUvaZV/z7l+H+Y/Z5l/z8X9fI/Z3/
AIWt4k/6GfRf/BXrn/zQUh+K/iQf8zPo3/gr1z/5oK/GD/iIG+JX/RMvg7/3613/AOWlH/EQ
L8ST/wA0y+D3/fvXf/lpR7TKv+fcvw/zD2eZf8/F+P8Akfs8fi14kH/MzaN/4K9c/wDmgpF+
LfiNj/yM+jf+CvXP/mgr8Yf+IgP4k/8ARMvg9/3713/5aUn/ABEA/Egf80y+D3/fvXf/AJaU
e0yr/n3L8P8AMPZ5l/z8X9fI/aWL4n+JZf8AmaNF/wDBXrn/AM0FWovHniaUf8jXof8A4Ktd
/wDmgr8U0/4OCPiUnT4Z/B7/AL967/8ALSpY/wDg4W+J8XT4afB3/v1rv/y0o9plX/PuX4f5
h7PMv+fi/H/I/a0eMPFB/wCZs0L/AMFWu/8AzQUf8Jd4p/6GvQ//AAVa7/8ANBX4rj/g4i+K
a/8ANNfg7/351z/5aUv/ABESfFP/AKJt8HP+/Guf/LOj2mVf8+5fh/mHs8y/5+L8f8j9qB4u
8Un/AJmvQ/8AwVa7/wDNBR/wlnin/obNC/8ABVrv/wA0Ffiv/wAREnxT/wCibfBz/vxrn/yz
pR/wcTfFTP8AyTb4O/8AfjXP/lnR7TKv+fcvw/zH7PMv+fi/H/I/af8A4SzxT/0Nmhf+CrXf
/mgpf+Er8U/9DZoX/gq13/5oK/Fr/iIl+Kg/5pv8Hf8Avxrn/wAs6Uf8HE/xUH/NNvg5/wB+
Nc/+WdLnyv8A59y/D/MPZ5l/z8X4/wCR+0n/AAlXir/obNC/8FWu/wDzQUf8JV4q/wChs0L/
AMFeu/8AzQV+Ln/ERT8VR/zTb4Of9+Nc/wDlnR/xEVfFX/om3wc/78a5/wDLOjnyv/n3L8P8
w9nmX86/H/I/aP8A4SrxV/0Nmhf+CvXf/mgpf+Ep8Vf9DXoX/gq13/5oK/Fv/iIq+Kv/AETb
4Of9+Nc/+WdH/ERV8Vf+ibfBz/vxrn/yzo58r/59y/D/ADD2eZfzr8f8j9pP+Eo8Vf8AQ16F
/wCCrXf/AJoKP+En8V/9DVof/gq13/5oK/Fv/iIq+Kv/AETb4Of9+Nc/+WdH/ERV8Vf+ibfB
z/vxrn/yzo58r/59y/D/ADD2eZf8/F+P+R+0n/CT+K/+hq0P/wAFWu//ADQUf8JP4r/6GrQ/
/BVrv/zQV+Lf/ERV8Vf+ibfBz/vxrn/yzpf+Iiv4rf8ARNvg5/4D65/8s6OfK/8An3L8P8w9
nmX86/r5H7R/8JP4r/6GrQ//AAVa7/8ANBR/wk/iv/oatD/8FWu//NBX4uf8RFfxW/6Jt8HP
/AfXP/lnR/xEV/Fb/om3wc/8B9c/+WdHPlf/AD7l+H+YezzL+dfj/kftH/wk/iv/AKGrQ/8A
[base64-encoded JPEG attachment data omitted]
--------------060906080403000401080808
Content-Type: text/plain; charset=windows-1252;
 name="xen info.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xen info.txt"

host                   : zevenprovincien
release                : 3.11.6-4-xen
version                : #1 SMP Wed Oct 30 18:04:56 UTC 2013 (e6d4a27)
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 2400
hw_caps                : bfebfbff:2c100800:00000000:00003f00:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 98295
free_memory            : 205
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .1_02-4.4
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 27302
xen_commandline        : vga=mode-0x31a
cc_compiler            : gcc (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388]
cc_compile_by          : abuild
cc_compile_domain      :
cc_compile_date        : Wed Dec  4 15:16:21 UTC 2013
xend_config_format     : 4

--------------060906080403000401080808
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060906080403000401080808--


From xen-devel-bounces@lists.xen.org Fri Jan 03 09:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 09:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz1L4-0002vy-F3; Fri, 03 Jan 2014 09:46:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <groen692@grosc.com>) id 1Vyyaz-0005nX-Pj
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 06:51:03 +0000
Received: from [85.158.137.68:59820] by server-6.bemta-3.messagelabs.com id
	37/8D-04868-4DD56C25; Fri, 03 Jan 2014 06:51:00 +0000
X-Env-Sender: groen692@grosc.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388731855!3349556!1
X-Originating-IP: [74.50.18.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19275 invoked from network); 3 Jan 2014 06:50:57 -0000
Received: from carp.lunarservers.com (HELO carp.lunarservers.com) (74.50.18.10)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 06:50:57 -0000
Received: from [194.219.126.84] (port=50932 helo=[172.16.0.10])
	by carp.lunarservers.com with esmtpsa (TLSv1:DHE-RSA-AES256-SHA:256)
	(Exim 4.82) (envelope-from <groen692@grosc.com>) id 1Vyyak-0005yp-PY
	for xen-devel@lists.xen.org; Thu, 02 Jan 2014 22:50:54 -0800
Message-ID: <52C65D06.8080404@grosc.com>
Date: Fri, 03 Jan 2014 07:47:34 +0100
From: Jeroen Groenewegen van der Weyden <groen692@grosc.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary="------------060906080403000401080808"
X-AntiAbuse: This header was added to track abuse,
	please include it with any abuse report
X-AntiAbuse: Primary Hostname - carp.lunarservers.com
X-AntiAbuse: Original Domain - lists.xen.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - grosc.com
X-Get-Message-Sender-Via: carp.lunarservers.com: authenticated_id:
	servicedesk@grosc.com
X-Mailman-Approved-At: Fri, 03 Jan 2014 09:46:43 +0000
Subject: [Xen-devel] BUG: unable to handle kernel NULL pointer dereference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------060906080403000401080808
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi All,

Yesterday my Xen machine stopped working after a kernel panic.
It seems to me the problem started at something called xen_spin_kick.

I attached a screenshot of the Xen machine's console.

Is this mailing list the right one to report this bug?

distro: openSUSE 13.1
kernel 3.11.6-4-xen
xen : 4.3.1_02-4.4

Kind regards,
Jeroen

--------------060906080403000401080808
Content-Type: image/jpeg;
 name="Screenshot_console.jpg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="Screenshot_console.jpg"

[base64-encoded JPEG data for "Screenshot_console.jpg" omitted]
D7/wi9Q/+XFey/297/rR/b3v+tH9q4z/AJ+MP7Nwv8iPGv8AhkPxx/0PHw+/8IvUP/lxR/wy
H44/6Hj4ff8AhF6h/wDLivZf7e9/1o/t73/Wj+1cX/z8Yf2bhf5EeNf8Mh+OB/zPHw+/8IvU
P/lxS/8ADInjj/od/h7/AOEVqH/y4r2T+3vf9aP7e9/1o/tXF/8APxh/ZuF/kR43/wAMieOP
+h3+Hv8A4RWof/Lij/hkXxz/ANDx8Pv/AAi9Q/8AlxXsn9ve/wCtH9ve/wCtH9q4v/n4w/s3
C/yI8b/4ZF8c/wDQ8fD7/wAIvUP/AJcUf8Mi+Of+h4+H3/hF6h/8uK9k/t73/Wj+3vf9aP7V
xf8Az8Yf2Zhf5EeN/wDDIvjn/oePh9/4Reof/Lil/wCGRvHP/Q8fD3/wir//AOXFex/297/r
R/b3v+tH9q4v/n4w/s3C/wAiPHP+GRvHP/Q8fD3/AMIq/wD/AJcUf8MjeOf+h4+Hv/hFX/8A
8uK9j/t73/Wj+3vf9aP7Vxf/AD8Yf2bhf5EeOf8ADI3jn/oePh7/AOEVf/8Ay4o/4ZG8c/8A
Q8fD3/wir/8A+XFex/297/rR/b3v+tH9q4v/AJ+MP7Nwv8iPHP8Ahkbxz/0PHw9/8Iq//wDl
xR/wyN45/wCh4+Hv/hFX/wD8uK9j/t73/Wj+3vf9aP7Vxf8Az8Yf2bhf5EeOf8MjeOf+h4+H
v/hFX/8A8uKT/hkXxz/0PHw+/wDCL1D/AOXFeyf297/rR/b3v+tH9q4v/n4w/s3C/wAiPGj+
yH44P/M8fD7/AMIvUP8A5cUh/Y/8bn/mePh9/wCEXqH/AMuK9m/t73/Wj+3vf9aX9q4v/n4w
/s3C/wAiPFz+x342P/M7/D7/AMIvUP8A5cUx/wBjXxo/Xxv8Pv8Awi9Q/wDlxXtf9ve/60f2
97/rT/tXF/8APxh/ZuF/kR4kf2LvGR/5nb4f/wDhGah/8uKY37E/jB+vjb4f/wDhGah/8uK9
w/t73/Wj+3vf9aP7Vxf/AD8Yf2bhf5EeGN+xD4ub/mdfAH/hGah/8uKY37DHixj/AMjr4A/8
IzUP/lxXu39ve/60f297/rS/tTF/8/GH9m4X+RHgx/YT8VN/zOvgD/wjdQ/+XFNP7Bvik/8A
M6eAf/CN1D/5cV73/b3v+tH9ve/60f2pi/8An4w/s3C/yI8CP7BPidv+Z08A/wDhG6h/8uKQ
fsD+Jx08aeAf/CN1D/5cV79/b3v+tH9ve/60f2pi/wDn4x/2bhf5EeDRfsKeK4fu+NfAH/hG
ah/8uKsxfsVeMYRx42+H3/hF6h/8uK9w/t73/Wj+3vf9aP7Uxf8Az8Yf2bhf5EeKD9jPxoP+
Z2+Hv/hF6h/8uKUfsb+NR/zO/wAPv/CL1D/5cV7V/b3v+tH9ve/60f2pi/8An4w/s7DfyI8X
/wCGO/G3/Q7/AA+/8Iu//wDlxSf8Mc+NT/zO/wAPv/CL1D/5cV7T/b3v+tH9ve/60f2pi/8A
n4w/s3C/yI8VP7G3jQ/8zv8AD7/wi9Q/+XFJ/wAMaeNMf8jv8Pv/AAi9Q/8AlxXtf9ve/wCt
H9ve/wCtH9qYv+dh/Z2G/kR4p/wxn40/6Hf4ff8AhF6h/wDLij/hjTxp/wBDv8Pv/CL1D/5c
V7X/AG97/rR/b3v+tL+08V/Ox/2fhv5EeK/8MbeNB/zO/wAPv/CL1D/5cU4fsdeNV/5nf4ff
+EXqH/y4r2j+3vf9aP7e9/1pf2jif52P6hh/5UeMr+x/43X/AJnf4e/+EVqH/wAuKkj/AGS/
HcXTxx8PP/CKv/8A5cV7F/b3v+tH9ve/60v7QxH87H9Sofynksf7L3xAi6eOfh3/AOERf/8A
y4qZf2aviGn/ADPXw6/8Ii//APlxXqn9ve/60f297/rR9fxH8wfUqH8p5av7OHxFH/M9fDn/
AMIe/wD/AJcU8fs7fEYf8z18Of8Awh7/AP8AlxXp/wDb3v8ArR/b3v8ArS+vYj+YPqVD+U8x
/wCGefiP/wBD18Of/CHv/wD5cUv/AAz58SP+h6+HP/hD3/8A8uK9N/t73/Wj+3vf9aPr2I/m
D6lQ/lPMv+GfPiQP+Z7+HP8A4Q9//wDLiuesV17w38TNd8K6/f6Fq1xpemabqsN7pWmTacjL
dy6hEYmilubgkobHcHDjPm42/Lk+3f297/rXkGoP/af7UnjJ+uPCvh0df+nvXf8AGuzA4qrO
sozloc2Mw1KFJuMdT8Uf+DnXj9t/wD/2Tq1/9O2rUVa/4OfbbZ+3L4CH/VOrT/07arRTq/xJ
erFS+CPojpfgC3/E5+AH/YoeAv8A0yaXX6cf8E/fEF9Y/wDBNL4JHS4bS71OH4ZaEbS3urlr
aCeYaVBsSSVY5GjQtgFxG5UEkK2MH8wvgJJ/xPfgAP8AqUPAP/pk0uoP2Qf+C8Pir4bfD6H4
XTWHww0S0+Ffwy0Z9Hvtf8QWWmv4kvDpNpJa2CpcvHtMgZw04Z0j2AuF8xRXmtXWp3xetj9C
fhL+zf438CftDeLPGsvg3wZa/wDCwGMGqLF8RLm6j0eGdozdy2UB0OLMspjSRlkmw7xqNyCp
dc/4JU/D7VPhrD4UtfFHxA0bSX8MWvhLVBYX1msmu2VrKZbY3Je2b95G7MQ0Qj3AlWDIdtfC
vwq/4OUfEnxc8GfD3VrKz+EGnP4g1WfT/F1prHiax0mfwZCk8SpchLt4nv0eCRph5CgAxtHu
L5x6B+z5/wAF6NZ/aj+IfhLRfBVz8NdTg1zVdc0/UmmnistR0qGyDvZXMen3DR3N2l5Cqyfu
lxBlw7ExvjOMFFJRe1kvle3/AKU/vNeeXM521evTf3Xf/wAlj9x93n9kvR7H4sah4o0jxh48
8OW+uaja6vrOh6RqUVrp+s3lvGsaTTMIftK7kSMSJFPHHKI13o2W3UfCf7D/AIN8H/Emy1uD
VfE8+kaRrt14o0nwxPdQto+kapcKwlu4QIhPkl5WEbzPEjTOVRTjH5vfCr/g5R8SfFzwZ8Pd
WsrP4Qac/iDVZ9P8XWmseJrHSZ/BkKTxKlyEu3ie/R4JGmHkKADG0e4vnHQeDP8Ag49tvHOo
eGbi11/4V23hvU9d1TR9a1G/uI7C60JIphFpt0LGdkuLiK9V4XZ412Wod/NfEMhVxXLZp7f8
D8uVW7WVrWI+zy20/wCH/wA3fvd33Z+uf9vf7X6143+1j8OfEPxW1z4d3+gaPousTeB9fHiJ
BqHiiXRVW4jieKJTs068M0bLNLuGYipRMM2SB+XfwK/4OcPFnxhl8Z6RqOkfCDwZ4s8Nz2kO
m2mseJbePStZV7sW93KNS2+QEt0ImAj85p03eUr4JryrXv8Ag8C8e6Rrl5aW/wALvBeqQWs7
wxXtrqUqwXiqxAljEtokgRgNwDorYIyqnIC5VdO+zT67p3X4ju7NW3TXyej/AAP1N+Gf7Ctz
d614k1HXBJ8Pb258Vp410GTwp4rTWBompyRSRXklul1pMCRpOhAeOUXCPvYgRFQTofH39iO8
+MC/B7w9LqkuveHvAXilPFGr674l1ye81vUCglLWywrCImSZnUNiSKOJF2pEVAWvyX/4jE/i
F/0R/wAK/wDg0/8Auej/AIjE/iF/0R/wr/4NP/ueiMVHls9mmv8At2yX3csfWyvcJNy5rrdN
f+BXv/6U/vdtz9hPE3/BP3wh4s+KniXxLd+JfGrWnjLXtP8AEWuaALq1/srUrmxWMWyuptzM
I1MSsVSVd5+8WUKosad+wN4FsfGAu5NR8T3nhy31e91+w8KT3kX9j6XqN3GyT3MO2IXALeZM
wRp2iRp3ZEU7dv45/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A3PSVOKjyXVtv
yX/tsbdrK2w+eV3K2r/4P/yT+9rqfrV4V/4Jk+DvDFq1u/jb4j6nap4Jvfh9aQXd7Y7LHR7o
DMMYjtEy0ZGUkk3tk/OXCqF634jfsTeFviFoHhGzj8QeLvD9z4P8OzeE4dQ0q6t0u7/S5oI4
ZrWcyQOpDiJG3Rqjqw3IyGvxo/4jE/iF/wBEf8K/+DT/AO56P+IxP4hf9Ef8K/8Ag0/+56co
qStJ7+v978+aV+/M77hGTi00tvT+7+XLG3aytsfvJ8HfBmn/AAR+E3hrwbpNxdz6Z4V0u20m
0ku3Vp5IoIljQyFVVSxCjJCgZ6AVv32prf2UsDuQkyNGxU8gEY4r+fz/AIjE/iF/0R/wr/4N
P/uej/iMT+IX/RH/AAr/AODT/wC56dVKpfnd7779SYPkSUVax+s6f8Es/hqnw/8A+Ec/tfxl
9h/4Q9/Bm/7bb+YbZpml+0f6jb9oG9kDbduw42Z5rp9e/YN8FeIPGt5qL6t4ph0XWb+w1fWv
DUV3D/ZGuX1kFFvdTqYTMrjy4SywyxxyGBN6Nzn8cf8AiMT+IX/RH/Cv/g0/+56P+IxP4hf9
Ef8ACv8A4NP/ALnp/a576/8ABv8Anr627IHrHla0/wA0k/wVv+HZ+wnhP/gn54O8G/GXTfGF
p4j8aAaNr2q+I9P0X7ZbJptnd6nE8V4V2QLOQwfcu6ZihUbCoLBsrx9+wVpep6Tqt5dal4j+
K2qNoV34a0rTfGviMW1tZWF5MjXEX263spLt2CqNklx9okBjUBlLF6/JD/iMT+IX/RH/AAr/
AODT/wC56P8AiMT+IX/RH/Cv/g0/+56j2cbKN9Erddrcv5Nr0b7l88ubmtrv+PN/6Vr66n7H
/shfD74mfCrXdRHjm/k1i0uLCG2t7u48dyazJbeRtSOGO0TSNPtowVLs8/zTOyqGLjBT3n+3
v9r9a/n3/wCIxP4hf9Ef8K/+DT/7no/4jE/iF/0R/wAK/wDg0/8Auerl7zu2Zx91WSP10/aO
+DXxJ8a/Hj/hMPAltpHhrU7bTf7GXX7LxwdPvtVsm2ymC5tJ9Cv7cCOYyGNlYuASQy7yg8/0
H/gn94rvfHvw0u4rrT/hdpPgTS9c0yeXwz4ol1rVrttQ8l2ujPe6akckssn2jzfMjO3crId2
PL/Mv/iMT+IX/RH/AAr/AODT/wC56P8AiMT+IX/RH/Cv/g0/+56z9nHv379U0/k+Zu3d3Lc5
Pp27dGmvyS9NNj9dvEP/AATW+H+q6JBpen6x4t8P6WvgR/h3c2mn3FqUv9MYyODI01vI4mWW
RpRJGyZYDcGXKnT1H9gLwRqPiie6/tfxXFoWp3mnanrPhuO7g/snXb2xVFt7qdTCZlceXCXW
GWOOQwJvRuc/jt/xGJ/EL/oj/hX/AMGn/wBz0f8AEYn8Qv8Aoj/hX/waf/c9afa5+bX5/wAz
l/6U3L113SJeq5WtNunZR/JJemh+y3hb9h3wb4S8Y6Drdvqnid7rw94n1vxZbK93EEe61WJ4
rhGKxK3lqsjeXtZWUgZZqvfDD9kzSPhz8W9P8Z3ni/x14x1bQtOutJ0X/hIb+C6OkWtxKskk
azJClxcfcRQ11LM4C/eyST+Lf/EYn8Qv+iP+Ff8Awaf/AHPR/wARifxC/wCiP+Ff/Bp/9z1M
YqNmnsrddrcv5ael+45NyvzLf/Pm/wDStfU/oI/t7/a/WvnT9rn4J+P/AIo/G7wd4v8AA01t
peoeDtPu7a11JfFcWmzg3ZUTxtbz6JqMTrshiKvuVss428Bj+QX/ABGJ/EL/AKI/4V/8Gn/3
PR/xGJ/EL/oj/hX/AMGn/wBz0cqunfb17W/Jj5ntb8j9afhn+w8YvB+gzXt7qvw08beHH1GC
PXfCfiZNZvtXgvyst5Jdz3umxxmSa43SELbjy2RDE6A7F9G1v9lTwp4i+IPw18S3t94lu9T+
F8EkGnvc6o9ydSDIqq168oaS4ZGUSKxYESZbnpX4of8AEYn8Qv8Aoj/hX/waf/c9H/EYn8Qv
+iP+Ff8Awaf/AHPTSSs09tvLfbTTd7d2Jtu91ve/nfe/fZb9j9hrD9gbwt4afTbzw74r8beG
PEOkatrOq2uuWE9k96g1WUzXlqVntZIGgL7CoaIuhjUh85Jt6n+xHosur2uo6X4++KPh7U30
W30DWdQsddV7/wAS2sEhkj+1XM8Uk6ygvL++t3hkAlZQwAUL+N3/ABGJ/EL/AKI/4V/8Gn/3
PR/xGJ/EL/oj/hX/AMGn/wBz0oxUUknotvuat6Wb063d9wcm221vv96d/W6Tv5H9AsGri2gS
NXYqihQWcsxA9Sckn3PNZfjrxTrth4VupfDVhpOra0uz7NaanqUmnWsuXUPvnjgnZMJuIxE2
SAOAdw/A7/iMT+IX/RH/AAr/AODT/wC56P8AiMT+IX/RH/Cv/g0/+56Gk93+YJ20SP1D+Dv7
Dms+H/BnxC8C6z4b0PSfB/xVjubbWL7T/H0upajo9q6TGO1sYX0W3j8lJJn2iRyV812JcgKf
ZNJ/ZD8I6VbfE6D7ZrtxB8WdKtdH1pJbiPEUFvYGwTyCsYKsYjkli3zcgAcV+LH/ABGJ/EL/
AKI/4V/8Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0nTi1KLfxJJ76pFKclJTS1TbXq7f5L7j9Y/G
v/BLzwd4+8LjSr/x18SPLn8LW/g7UZ4LjTYZtW062nM1rHMVsgAYWKgNEIy6oBJ5mW3bXjf/
AIJ4eEvGvj6+1seLfH2lW+o+JtP8Yz6TY3toti2rWaxLFc5e2eblYgGj83yzkkKrBWX8gv8A
iMT+IX/RH/Cv/g0/+56P+IxP4hf9Ef8ACv8A4NP/ALnq/tc/Nre/z5ub/wBK19bPoR9nltpt
+HL/AOk6emmx+y3w3/Yc8FfDHx/oviKz1DxDd3WhXev3kMF3PA9vM2szLNdLIohBKqygIARg
E7i/WsDwt/wTo8K/DrQtDtfCnjb4ieFrvww2pQ6PqNhd2TXWm6ffuJJtNj861kQ2yyAPHvRp
Y2A2yjpX5Ef8RifxC/6I/wCFf/Bp/wDc9H/EYn8Qv+iP+Ff/AAaf/c9QoRUVFPRKy327fi16
NrZsrnd27bu/Tfv/AF1SfRH68eLv2QLjxl+1z8IfF1y9pJ4a+D+k3cFteX+s3Wo61rdzMkSR
C4Eke3ETI0vmtNI7uR8i9RL4I/4J2eCfBvi2C+l8QeMtc0m18Raj4rh8P6nc2smmR6nfCVZb
jalukp2pK6ohlKL97aXJY/kF/wARifxC/wCiP+Ff/Bp/9z0f8RifxC/6I/4V/wDBp/8Ac9Pl
VrNrr/5M+Z9O9n8l2Fd9u34Ky/Bteja2P10g/wCCaHw7n8JXOgarq/jLX9Di0G88M6FZahqE
RTwrp90waWGzeOFJCfljVXuWncJCi7tu4My7/wCCbXhTXNK8bQa144+I2v3Hj+00iz1W7vLu
xWUppc4mtDGsVokaEbQjYTDLk43szn8jv+IxP4hf9Ef8K/8Ag0/+56P+IxP4hf8ARH/Cv/g0
/wDueqWkuZS1+fdv77tu+922J6q1tPl5f/IrTbRH7HftM/sJeEv2oNd16+1HxD4x8OnxZo0O
ha9Dod1bQx6zbQTGa383zYJWDxOzYaMpkEq+9Ttr3e31cW1ukaudsahRk9hxX8/X/EYn8Qv+
iP8AhX/waf8A3PR/xGJ/EL/oj/hX/wAGn/3PUqKS5U9Pn6/qNybfM1r/AMMvyS+4/a/9tH4Z
6n+0F8FP+EY0ywsNTa41SyvJo7rX30YIttOtyrLMtleZPmxRKUMQyrOd6kDPmt7+yTqvx4+K
HjXW/iLoOkeHT4u03TIkvfDnjWXUrjTbvTbgzWU9vHNpNuI3VpHcu8kqkoimIqzV+TX/ABGJ
/EL/AKI/4V/8Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0cqWz63672tf1skPmbVmt9Om172+8/ZX
xD+xxp/iew0WbWPG3jzxRrnh3XLjxHaX+s6jC/m3UlnLaCJoo4Ehht1SU4jt4ohuG45Jbdyn
7CP7CB/Zl8JeAbzxZ4k1DxJ4s8E6BNodjbrdJLpGjLPKJLg2n+jxTMZNiZaZnKjKqFXAr8lf
+IxP4hf9Ef8ACv8A4NP/ALno/wCIxP4hf9Ef8K/+DT/7npwSg24ve34Jx7fyu3oTJ81uZbX/
ABal+auf0Ef29/tfrR/b3+1+tfz7/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A
3PS5V3/MfM+x/QR/b3+1+tH9vf7X61/Pv/xGJ/EL/oj/AIV/8Gn/ANz0f8RifxC/6I/4V/8A
Bp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/wARifxC/wCiP+Ff/Bp/9z0f8RifxC/6I/4V
/wDBp/8Ac9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+
iP8AhX/waf8A3PRyrv8AmHM+x/QR/b3+1+tH9vf7X61/Pv8A8RifxC/6I/4V/wDBp/8Ac9H/
ABGJ/EL/AKI/4V/8Gn/3PRyrv+Ycz7H9BH9vf7X60f29/tfrX8+//EYn8Qv+iP8AhX/waf8A
3PR/xGJ/EL/oj/hX/wAGn/3PRyrv+Ycz7H9BH9vf7X60f29/tfrX8+//ABGJ/EL/AKI/4V/8
Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/AMRifxC/6I/4
V/8ABp/9z0f8RifxC/6I/wCFf/Bp/wDc9HKu/wCYcz7H9BH9vf7X60f29/tfrX8+/wDxGJ/E
L/oj/hX/AMGn/wBz0f8AEYn8Qv8Aoj/hX/waf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/8
RifxC/6I/wCFf/Bp/wDc9H/EYn8Qv+iP+Ff/AAaf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz
7/8AEYn8Qv8Aoj/hX/waf/c9H/EYn8Qv+iP+Ff8Awaf/AHPRyrv+Ycz7H9BH9vf7X60f29/t
frX8+/8AxGJ/EL/oj/hX/wAGn/3PR/xGJ/EL/oj/AIV/8Gn/ANz0cq7/AJhzPsf0Ef29/tfr
R/b3+1+tfz7/APEYn8Qv+iP+Ff8Awaf/AHPR/wARifxC/wCiP+Ff/Bp/9z0cq7/mHM+x/QR/
b3+1+tH9vf7X61/Pv/xGJ/EL/oj/AIV/8Gn/ANz0f8RifxC/6I/4V/8ABp/9z0cq7/mHM+x/
QR/b3+1+tH9vf7X61/Pv/wARifxC/wCiP+Ff/Bp/9z0f8RifxC/6I/4V/wDBp/8Ac9HKu/5h
zPsf0Ef29/tfrR/b3+1+tfz7/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A3PRy
rv8AmHM+x/QR/b3+1+tH9vf7X61/Pv8A8RifxC/6I/4V/wDBp/8Ac9H/ABGJ/EL/AKI/4V/8
Gn/3PRyrv+Ycz7H9BH9vf7X60f29/tfrX8+//EYn8Qv+iP8AhX/waf8A3PR/xGJ/EL/oj/hX
/wAGn/3PRyrv+Ycz7H9BH9vf7X60f29/tfrX8+//ABGJ/EL/AKI/4V/8Gn/3PR/xGJ/EL/oj
/hX/AMGn/wBz0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/AMRifxC/6I/4V/8ABp/9z0f8Rifx
C/6I/wCFf/Bp/wDc9HKu/wCYcz7H9BH9vf7X60f29/tfrX8+/wDxGJ/EL/oj/hX/AMGn/wBz
0f8AEYn8Qv8Aoj/hX/waf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/8RifxC/6I/wCFf/Bp
/wDc9H/EYn8Qv+iP+Ff/AAaf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/8AEYn8Qv8Aoj/h
X/waf/c9H/EYn8Qv+iP+Ff8Awaf/AHPRyrv+Ycz7H9BH9vf7X60f29/tfrX8+/8AxGJ/EL/o
j/hX/wAGn/3PR/xGJ/EL/oj/AIV/8Gn/ANz0cq7/AJhzPsf0Ef29/tfrR/b3+1+tfz7/APEY
n8Qv+iP+Ff8Awaf/AHPR/wARifxC/wCiP+Ff/Bp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X61/P
v/xGJ/EL/oj/AIV/8Gn/ANz0f8RifxC/6I/4V/8ABp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X6
1/Pv/wARifxC/wCiP+Ff/Bp/9z0f8RifxC/6I/4V/wDBp/8Ac9HKu/5hzPsf0Ef29/tfrR/b
3+1+tfz7/wDEYn8Qv+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A3PRyrv8AmHM+x/QR/b3+
1+tH9vf7X61/Pv8A8RifxC/6I/4V/wDBp/8Ac9H/ABGJ/EL/AKI/4V/8Gn/3PRyrv+Ycz7H9
BH9vf7X60f29/tfrX8+//EYn8Qv+iP8AhX/waf8A3PR/xGJ/EL/oj/hX/wAGn/3PRyrv+Ycz
7H9BH9vf7X60f29/tfrX8+//ABGJ/EL/AKI/4V/8Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0cq7
/mHM+x/QR/b3+1+tH9vf7X61/Pv/AMRifxC/6I/4V/8ABp/9z0f8RifxC/6I/wCFf/Bp/wDc
9HKu/wCYcz7H9BH9vf7X60f29/tfrX8+/wDxGJ/EL/oj/hX/AMGn/wBz0f8AEYn8Qv8Aoj/h
X/waf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/8RifxC/6I/wCFf/Bp/wDc9H/EYn8Qv+iP
+Ff/AAaf/c9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/8AEYn8Qv8Aoj/hX/waf/c9H/EYn8Qv
+iP+Ff8Awaf/AHPRyrv+Ycz7H9BH9vf7X60f29/tfrX8+/8AxGJ/EL/oj/hX/wAGn/3PR/xG
J/EL/oj/AIV/8Gn/ANz0cq7/AJhzPsf0Ef29/tfrR/b3+1+tfz7/APEYn8Qv+iP+Ff8Awaf/
AHPR/wARifxC/wCiP+Ff/Bp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/xGJ/EL/oj/AIV/
8Gn/ANz0f8RifxC/6I/4V/8ABp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/wARifxC/wCi
P+Ff/Bp/9z0f8RifxC/6I/4V/wDBp/8Ac9HKu/5hzPsf0Ef29/tfrR/b3+1+tfz7/wDEYn8Q
v+iP+Ff/AAaf/c9H/EYn8Qv+iP8AhX/waf8A3PRyrv8AmHM+x/QR/b3+1+tH9vf7X61/Pv8A
8RifxC/6I/4V/wDBp/8Ac9H/ABGJ/EL/AKI/4V/8Gn/3PRyrv+Ycz7H9BH9vf7X60f29/tfr
X8+//EYn8Qv+iP8AhX/waf8A3PR/xGJ/EL/oj/hX/wAGn/3PRyrv+Ycz7H9BH9vf7X60f29/
tfrX8+//ABGJ/EL/AKI/4V/8Gn/3PR/xGJ/EL/oj/hX/AMGn/wBz0cq7/mHM+x/QR/b3+1+t
H9vf7X61/Pv/AMRifxC/6I/4V/8ABp/9z0f8RifxC/6I/wCFf/Bp/wDc9HKu/wCYcz7H9BH9
vf7X60f29/tfrX8+/wDxGJ/EL/oj/hX/AMGn/wBz0f8AEYn8Qv8Aoj/hX/waf/c9HKu/5hzP
sf0Ef29/tfrR/b3+1+tfz7/8RifxC/6I/wCFf/Bp/wDc9H/EYn8Qv+iP+Ff/AAaf/c9HKu/5
hzPsf0Ef29/tfrR/b3+1+tfz7/8AEYn8Qv8Aoj/hX/waf/c9H/EYn8Qv+iP+Ff8Awaf/AHPR
yrv+Ycz7H9BH9vf7X60f29/tfrX8+/8AxGJ/EL/oj/hX/wAGn/3PR/xGJ/EL/oj/AIV/8Gn/
ANz0cq7/AJhzPsf0Ef29/tfrR/b3+1+tfz7/APEYn8Qv+iP+Ff8Awaf/AHPR/wARifxC/wCi
P+Ff/Bp/9z0cq7/mHM+x/QR/b3+1+tH9vf7X61/Pv/xGJ/EL/oj/AIV/8Gn/ANz1+qn/AASg
/aw8e/8ABTn9jTS/i5LfeGvA39p6pe6aNJTQP7U8v7M4Xf53nwZ3ZzjZx6mnyLuHM+x9af29
/tfrXA6BqP2r9pfxqc5/4pnw8P8Aya1ytr/hV3jf/ofvD3/hEj/5OrjPhGLu5+NnjCW/1Kw1
a7/sPS7eW5s7RrSFjBq3iO3wI25VgIhu5Ybt21mXax7cvj++TTOPHS/ctH5Gf8HQP/J9PgP/
ALJ1af8Ap11Wij/g5/8A+T5/Af8A2Tq0/wDTrqtFdFX+JL1ZjS/hx9EanwHkx4i+AA/6lHwD
/wCmTSq/Gz9r3/ks0X/Ys+Hf/TJY1+xnwNkx4o+AP/Yo+Af/AEyaVX45/te/8lmi/wCxZ8O/
+mSxrzp/B936ndH4/v8A0PMK9W+Gv7bfxP8AhD4HsPD3h/xN9isNH+3f2TK+nWlxfaF9tj8u
6+w3csTXFl5i5LfZ5I/mLNwzEnymisDYKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACv6fP8Ag2tn
8XW//BB/TT4DtvDl34wPiDXF0pNfuZrfTVmM8YDztDG8hRQS2xFBfaE3R7t6/wAwdf03f8Gx
F94uv/8Agkd4VsPC914csltdf1e4uX1WzmuTKZLnaoQRyx7ceU2c5zuHTHOtKDldLt+qM6kl
G1z6D/4J5fAX9rT4A/ErXR8ZfH3gX4keD/E9xNqEzpq17LqWi3bAkG0V7NIxbMQFNsGSOPh4
tmHSXvvhJf8A2P4ueNf+vSP/ANSXxZ/hXon9mfFr/oNfDr/wSXn/AMlV5T8HRdx/EbxUNQe3
lvhp1uLl7dCkTy/8JH4s3lFYkhc5wCSQMcnrXo5dTtVWqODHzvTeh+UP/Bznfed+294Bb1+H
Vr/6dtWoqt/wc1nP7bHw/wD+ydWv/p31airr/wAWXq/zIov93H0RufBKXb4v+AI/6lLwB/6Z
NKr83P2qP2KPFWr/ABD+B2px3/h8QfHjT9G0fQFaebfZzQWWmWTtdDysIhklUgxmQ7QSQD8p
/S74M+HbmPxL8AJ/Lbyz4S8A849NE0sf0rzLw547+DHizwF+zZbeK/iL4V0DxL8P7bTrI2mp
LbLJpFwf7I19bt3uJojDFJbaWbMTxhwX1MJnKvGfOkvc+49CGs/68j8j/iz8OL74OfFTxL4Q
1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOldX+z3+y/q37SOh+PbvRdZ8P2M/w9
8N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mVcZAcr+hd3+2XaX+g+Fb/wCCfiL4VR3V58R/
Fmr+I7nxN4xufCccX2nWVnsLy6tVvrKS/iazaPd5kN1hIfKCgh4z86fsH+ItJ1L4qftPahc6
58NPD0Hiz4f+I9D0lI9Vt9A0q7vb+ZWtoLCG9eGRbciJ9u5R5SCMSbCyg85sfGlFfdfwm+KO
v/8ADHX7Pml/Bn4p+Ffhx4k0DXdbk8b/AGzxdZ+HY/tEl9bPZ3N9BPIh1CJbZVGViuBsjaLB
IMddt8CPjUtn4F/Zwj8EfFDwV4dg8N/EDWL34pLp3iS08HWmqwyavayxXD2U7WbXVu1ipEar
AQka+TsQqYgxn5vUV+u3hH9vn4c/DXw/8JNM8J+M/BWl+Hrvxl4hurO0WztQNFhm8aWiwyMk
ke7TUbQrzVwrOIQIZXAwwTHP3f7SumeAtB8K6X8CNY+Cmm/2X8R/Fkmuf2l42fwvptkjayr6
dcvBbX9muo2hsvKAIiu08qARIOGjIB+VNdt4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3X
l0ttbxRoMszsxds4ChYXywYor5PxZ1mLxH8VPEuoW6eH44L/AFW6uIk0G1ltdKVXmdgLSGVV
kjt8H92jqrKm0EAgivrb/gnl+0Rrkv7JXxj+GNp8Xf8AhBPEmo/2C/gyTV/FMmi2OkxLqjPq
UkFwzqkH7uYPJHEfNmXftSQgikI+KaK/Rb9mv4o2/hD4U/sx6X4A+KfhXQbDwZ451ST4m/Z/
F0PhmPV7c6raPDczwXclrNfRPYphSYnIQGIhWUxj4f8A2lNZ8PeI/wBozx/qHhBLWPwnf+JN
RuNFS1tTawLZPdSNbiOEqpjTyimEKrtGBgYxQBxNFfoX+yb8Yl0j4G/sr23gj4ieH/BkHhbx
lqN18UrNvGFp4be8hfU7OSKS6hnnha/Q2KlQyrMNqGLqCg9L+A/xAt9Q+EFl4v8ABnijSvDP
gzQP2nbmKxv7jW4fDtrZeEZhFeXFhALiSHbaS4imaxQfOYwxiJQ4dh2Pypor9TNN/au0bRfA
vgmP4Aar8H9Jgi+IHii91hdc8WT+DbSwhl1dJdNuJrKO+sWvbf7CYRtaC5CxwCEIpV4jxP7O
v7V2meHvhB4IuB488K+HtVtf2j1VoNF1J9LtdP8AC10Ip7yK2glKT2+iSTortFIqxkonmLvX
gA+X/hV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDH
GzDE8Tf8E8vHvgrVfhbp+rzeH7DUviv4kv8Awrp9q128j6Ze2WpJps4umSNkCCd+GhaXKgn0
B+oPhzdaP4G/aA87wb8S/hVqvwn8RfFbV5/EvhLU9S0vR7Xwbbw3z29pqelTTXMUwlNjP50F
1pnllDBEm6TywgNH1b4f+K7H9lO28CeOvCs/hb4KfEfXZNUn8ReI7HRb620xvEFvd2ly0N49
vJN5lmokJhjI3BkwrgoAD5/f/gl/4s0bxDoeleIPGXw/8MX/AIt8V6j4P8ORX0uozf29d2N6
tjM8RtrOYRRfaXEam4MTHBbaFwxqaN/wTd15vh4/iPxF4/8Ahp4IgtfGU3gC+g1y7v1fTNai
LbraaWC0lgVNi+Z54lMIUjdIpBA+i/hz8QdP1P8AaA/4Srwz8Vfh/wCIvhl8Qvitq+teK/DX
ia+stDk8KRfbnjttXsnvbiG8S7ayujPHc2KxSxvCiMXaPYD4T6to/hj9g/xP4F+Gvjr4VGHx
N8Vtag06Xxp4j0u0uovDF3pM+k/2nKkzx3FtKFbd+7iScjOInikaNwD4J+LPw4vvg58VPEvh
DU5bWfUvCuq3Wj3ctqzNBJNbzPC7RllVihZCQSoOMZA6V6B+z3+w549/aT+Ffj3xvoVna2nh
P4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqCu903x7vP8A4s+D7H4e/FTxLoGma1a+JNN0
PVbrT7TV7UL5GqwxTPGlzHtZ12SKocYdhhhhj1r6W/4Jeafa6Rofxw1DU/EXgrQoPE/wx17w
lpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IICEeU/BP9kJvjVZ+FAvxG+Gnh7VvG+qnR
9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDnzXx34I1T4Z+ONZ8N63bfYta8P30+m
39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+oP2CvBOq/Br4yeCfEaeMPgU+kx+MraDxRBq2q6DLq
OgQ2N5H5k0Mt+ASjxO8kdxpksgfYPnDooHlPxA8E6N8e/wBoz4xan4c8YeH9O8PadPrXibS7
vxNqs8M+vWqXTNDbwPcBpp72ZJFKpKRI5Dlm3ZoAP2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrn
VNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+Pd4/X1t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr
3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQPlTXtGm8Oa5eafcPayT2E728r2t1Fd
QMyMVJjmiZo5EyOHRmVhggkEGgD0v4QfAnwh8SfA+hX+pfEvSvDWtal45sPDF9pd5aAf2Zpl
1GzHWTM8qK8UTo6ugACYQvIgkTPoHgj/AIJvy/FjwxbeJPC/xT+H8/hbWvHL+AdDutSttWtL
rUdQZ2NqrwJZSiHz4fLmBLlEEoV3Vwyj5pr9Fv8AgnZ8QLLwF+xh8PLM+KPhVYX/APwvKy8T
alZ+INb0L7VZaHHbRQT3axXsm+3lWSJgjRqtzj5o/lcMWho/P/x34I1T4Z+ONZ8N63bfYta8
P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK6DwD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCv
uvLpba3ijQZZnZi7ZwFCwvlgxRX/AEL0H9oDQ9S1r4LXPwl+KOleHvC2kfFbxFqnj5bzxrHo
F1qunza7BPa3F8l9cQ3Opb9OAG9xM5CtGx3hkrifgN+1DF4o8LftReAvAvxatfAEHiDxJY3f
wyN74gl8N6Vo2mnXppbt7RnMaWafZ50d4YwssibgsTlWUAHxV4B+APiL4j/CTx7430+O1Hh7
4cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor6v7Pf7L+rftI6H49u9F1nw/Yz/D3w3c+K
ryy1B7hJ76ytlJmMBjheMupMa7ZHjyZVxkByv0t+wf8AHfVtN/Zm+N/wm0z42WvhjXbifRE8
Eaje+KLjQ9KtII9Wc6jc2k8/lG3R45RK8YVJ5ULYidlZRyn/AATUt9J8L65+0LaXPi7wVbQa
n8Mdc8K6Te6hrtvpMGs3t00YthAL1oJCkghdtzIvlgr5nlllBAPkmvVvhB8CfCHxJ8D6Ff6l
8S9K8Na1qXjmw8MX2l3loB/ZmmXUbMdZMzyorxROjq6AAJhC8iCRM/YH7LP7Stx4C/Y6+BWl
/C7WPhVpviTS9d1eTxl/wk3jafwvHZXDX0D2lzdQRX9mdRiNtsDForsbIPKAyGjPwT8WdZi8
R/FTxLqFunh+OC/1W6uIk0G1ltdKVXmdgLSGVVkjt8H92jqrKm0EAgikIq+O9I0zw/441mw0
TV/+Eg0Wyvp7ew1T7K9p/aVukjLHceS5LR+YgVtjHK7sHkVk0V6B4I+Mvh3wp4YtrC/+E3w/
8TXUG/fqWpXmuR3Vzl2Yb1ttRhhG0EKNsa8KM5OWIB5/Xtfw3/YY8SfFb4QeC/Gej694VksP
G3jm2+HsNrJLdJdabqc4LR/aAbfZ5Xl7HLxPIQJVGNwZV8f17UYtX1y8u7ewtdLgup3misrV
pWgs1ZiRFGZXeQooO0F3ZsAZZjkn9C/+Cev7Rvhv4AfsH+G9B1jW/Ctrf/ED4rTafNLH4htb
bXfCWmX2ktp0muW7BnewlgkV8SyxgGNmHCzLJTQ0fBPxZ+HF98HPip4l8IanLaz6l4V1W60e
7ltWZoJJreZ4XaMsqsULISCVBxjIHSufr9Nvhj8ZYvgB8Dfhh4L+DfjL4P6lqXg/xl4is/E+
r6541l8L2lwyanGLHUpreDUrb+0beW0WNs7b1fLiES5wyNynhv8Aa+1n4IfsG3Pirwn4t+Gk
PjDTvi7darYaPpWqQQiLw1LLFNLZ2VjJIl/baZPqNtCzWqrFI8I3OvlszEA/PSiv0L/ZY/a+
sbX4CfDvU5/Fvh/wVrv/AA0PDJNpen6oumpofhi8SG5vbWKEyb4NHa4UF4yfILRKXyy5r0u7
/aV0zwFoPhXS/gRrHwU03+y/iP4sk1z+0vGz+F9NskbWVfTrl4La/s11G0Nl5QBEV2nlQCJB
w0ZAPz0/Z7/Zf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjXbI8eTKuM
gOV80r7L/YP8RaTqXxU/ae1C51z4aeHoPFnw/wDEeh6Skeq2+gaVd3t/MrW0FhDevDItuRE+
3co8pBGJNhZQfjSkI9g+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC
1R5JQqG5MRfazAFAHPmvjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqS
DjgkV9K/8E7NCv8A4e/Ej4eeM7Txb8FBoN14rsm8R2viC80aLWPD1vaXUTNKo1RElj3xSM6S
6e7klMFlkjVV80+IHgnRvj3+0Z8YtT8OeMPD+neHtOn1rxNpd34m1WeGfXrVLpmht4HuA009
7MkilUlIkchyzbs0AH7Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxku
PLBbbgKoK73TfHuP2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceW
C23AVQV3um+Pd6t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQ
IwBxKVEWVYFwQQLf/BNnwtH4F/4X5/bfiX4f6P8A2x8OPEPgqw+3eM9Jt/t2qS/Z/Lji33I3
xPtbbcLmBsHEhxTGfH1el6N+y/q03wEf4j65rPh/wj4eu55rTQU1h7gXfiqeFGaZLGGGGVnS
NlWNppPLgWSVEMobcFyvBPwSm8X654w0+48TeCtAn8HaVe6pK+pa1EINWa2ZVNpYzReZHc3E
hP7pUbbIFJD4wT9q/s+fFXRPGf7M37Knh2HVvg/d6T4O8Satb/EKx8ZHQBPYWU+rQXGYl1UC
Uo9q8hL2ecldpO9AFQj89K9g/Yz/AGHPHv7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRU
dt8jKVRFVmbDHGxHZbfxA+Mvw18KfEjxRYeF/hN8P/E3haDXdQ/sTUtSvPEMd1c6ebqVrXeq
ajCBthMajMavhRvy+5j9Lf8ABKP9uS0h+O3gDwfruifCrwL4B8H32teIjqk2rXOktZz3dvcR
KzNcX4hvJVE0VrGZo550t84b5XkoA/P6ivVpfi54d8Eahf6Vf/Br4VardWd9co9wusa5cR8z
ORHHLbaqIZIowRHHIpbeiIxeQku31X+wl8cvCHhr4H+X498R/D/SNa1PXb2f4L2d3ONSh+GO
pvb3Qe9ufM8+Sy083T2WxbppCZYRcmIhGuaAPz+or9LPgp+0zrngj9nH4R6b4K8UfCp/H2ne
K9em+IV94m+Ismjx/wBoPqMTw310bfUrYaxE8OC0229R0h2Ln5kbJ/Z1/au0zw98IPBFwPHn
hXw9qtr+0eqtBoupPpdrp/ha6EU95FbQSlJ7fRJJ0V2ikVYyUTzF3rwxn5016B8CP2cNc+Pn
9v3lpdaVoPhvwlYnUde8RazLJDpejxHIiWR40d2lmkAjihiR5ZHOFQhWKn7WX9h/8NUfEv8A
4Rn+yv8AhG/+Er1T+yf7L8v7D9k+1y+T5Hl/J5Xl7dmz5duMcV6t+xb470PVf2Yfj58JrvWd
K0DxJ8S7HR7vQbrWbuOx0ueXS7uS7ltZLqQhIZZYyREZdsTOu1pEJXchHj/gH4A+IviP8JPH
vjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiiueAfgD4i+I/wk8e+N9PjtR4e
+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/1B/wTy/aI1yX9kr4x/DG0+Lv/CCeJNR/
sF/Bkmr+KZNFsdJiXVGfUpILhnVIP3cweSOI+bMu/akhBFW/2D/jvq2m/szfG/4TaZ8bLXwx
rtxPoieCNRvfFFxoelWkEerOdRubSefyjbo8coleMKk8qFsROysoYz4er2D9nv8AYc8e/tJ/
Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b4931X8DrfSfip4W/Y
u0Pwv4u8FazffCX4galDr0cmu2+lzlZdetLm3ltra+aC5uUmhG5BFEzE/IVEgKA+FOn2ukft
h/tqahqfiLwVoUHifSvHPhLS01jxTpumz3mpXF6rQxCG4nSQIwBxKVEWVYFwQQAD40+BH7OG
uftH/wBv2fha60q68SaLYnUbXw68si6p4giTJnWxTYUmlijBkaHesroGMaSFWA8/r6h/4Jle
ItJ/Zg/a6l+JPjHXPD9hoXwngv3v4INVt7q91qea2uLKK202ON2F47SyBvMjbyFjUu8qKVLe
E/Cv4V/8LS/4SP8A4qPwr4c/4RzQ7rXP+J5qH2P+1PI2/wCh2vynzbuTd+7i43bW5GKQjlKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACv6hf+DUL/lFbpP8A2F7/AP8ASuev5eq/qF/4NQv+UVuk/wDYXv8A/wBK562p
fDP0/VGVT4o+v6M/TOvmf4fS+X8XvGf/AF6Rf+pL4tr6Yr5b8IXHlfGLxgPW0h/9SXxbXRgJ
8lRPzRzY9XptH5Pf8HNEmf21fh9/2Tq1/wDTvq1FQ/8ABzF+8/bR+Hp/6pza/wDp31aiuqt/
El6v8znpL93H0R90fs7/ALPltr/wP+AGrrEPNTwX4NcnH9zSrAf+y1/OP+3NYf2V+0Zd23/P
toehRf8AfOjWQ/pX9T/7FEKyfss/AcH/AKEPwr/6arOv5c/+CjMflfteeIVHRdP0cD/wVWdT
jqajRg11t+ReDm3Vkn5nh9FFFeSemFFFFABRRRQAUUUUAFFFFABXoH/DUHjP/hnj/hVX23Sv
+EF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+igAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKAPS/hf8Atc+OfgtocNn4VuvD+hT2sFzbwata+GdMXW7d
bhZFlMepfZ/tiOVldQ6zBkUgKVAAHmlFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFF
FABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFf1C/8ABqF/yit0n/sL
3/8A6Vz1/L1X9Qv/AAahf8ordJ/7C9//AOlc9bUvhn6fqjKp8UfX9GfpnXyt4YiL/GPxeen+
iQ/+pL4ur6pr5d8Inb8WvF5/6dYv/Uk8W115ZTU6yizkzKXLSbPyW/4OXU/4zN+Hn/ZObX/0
76tRTv8Ag5abd+2Z8PP+ydW3/p31aiujEK1WS83+Zz0W/Zx9F+R+on7FN1s/Zj+Ay/8AUieF
P/TVZV/L7/wUcOf2v/EX/XhpH/pqs6/p2/Yvz/wzZ8Bv+xF8Kf8Apqsq/mI/4KM/8neeIf8A
sH6P/wCmqzp5j/Ap/L8isD/Gn8/zPD6KKK8U9YKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACv6hf
+DUL/lFbpP8A2F7/AP8ASuev5eq/qF/4NQv+UVuk/wDYXv8A/wBK562pfDP0/VGVT4o+v6M/
TOvlTRZvJ+Kviw/9O8f/AKkniyvquvkqOXyvif4qP/TCP/1JPFtehk3+8I4M2/gM/Kb/AIOT
ZPN/bD+HZ/6p1bf+nfVqKg/4OO5PN/a4+HB/6p1b/wDp41eitcV/Gn6v8zKh/Cj6L8j9S/2L
UH/DNHwH5Gf+EF8Kf+mqyr+YL/go5x+1/wCIv+vDSP8A01WdfvZ+zR+2Tb+F/CfwG8NNIA58
G+CocZ/566Ppzf8As9fCPiH9oHxx4F0HQ77w/wDGDwUNc1LTrOw8M6HqfjPTND0r4fW11YWn
2nVNVtpJY59RuGd5HtoHiuI4ULS4dmjhqMfUjKhBLpb8jXBQarTb8/zPyQor9Nvhj+0Rq3wx
+Bvww0LwF42+D+seNtE8ZeIv+E91vXPiHcaLaXd6dTja31ObZf2b6vbzQbW814rrdHGECg74
zxX7Jnx68CeLvA82g6r4u+H/AIHuvAnxyh+MNw3lXGnaHqWjwRiOW30iMxGZpQVHk2kkaSNG
6BQSrhPIPVPz+or9TNN/bci8TeBfBOtfBvU/hpo2par8QPFGueJ4/GHjOXwq+nNdaulzYz31
tBqNr/aCfZHjD/JeKFgMS9GRuJ/Z1/au0zw98IPBFwPHnhXw9qtr+0eqtBoupPpdrp/ha6EU
95FbQSlJ7fRJJ0V2ikVYyUTzF3rwAfnTRX6w/CTxOum/Dz+3vCnivw/4d8J+Ef2pLyyttRXx
LaaPp1p4TkMd1c2NrK80cb2UpWOY2kBKy7A4jbbkef6v+18vwQ/ZF8ReKvhZ4t8FQ3mnfGvU
NV8NaONUtIb2LwfLcxzCzhsfMS8trKe9trdpbWJYneMFmXymZiWCx+b1FfpD8CP2jF8R+Bf2
cNR8EeM/BXw0g0/4gaxrHxS0PTvFFp4StFhn1e1uIle0nuImvLdbEGOMKJgscflZypWqnwI+
PXwo8Xf2/oOm+LvCvgfRfAn7QB+MNi2qRSadY6l4cgzGLfT40iLNdhFTZaGNGZXUICVcIAfC
nwr+Ff8AwtL/AISP/io/Cvhz/hHNDutc/wCJ5qH2P+1PI2/6Ha/KfNu5N37uLjdtbkYrlK+6
/wBkT43af8S/jh+1/rcXibSvC/hv4p+FPEqaVYeIvEVlo/8AaGoX9w72EbRzTqjyrG9wpkBZ
IvMYF1Eg3W/2HP2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u7HwxcQW817GwLh00yS4jDzK
cQNIgZ8sM0hHwTXV/Cv4V/8AC0v+Ej/4qPwr4c/4RzQ7rXP+J5qH2P8AtTyNv+h2vynzbuTd
+7i43bW5GK/QH4s/FF/7B8EaX+zn8U/h/wDDj+wPiP4vk8Q/Z/F2n+HdN2SazG+m3M8DyIuo
Wi2SxhTFFcJ5UZiAOPLrx7/gnX4vWy1z9pqxvvG/grT9N8XfD/W9HtRJrNp4d0rXdVuGK2bW
1rcG2UJtNzsPkosCS7WEXmBSAfGlFfot+yz+0rceAv2OvgVpfwu1j4Vab4k0vXdXk8Zf8JN4
2n8Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhoyfs6/tXaZ4e+EHgi4Hjzwr4e1W1/aPVWg
0XUn0u10/wALXQinvIraCUpPb6JJOiu0UirGSieYu9eGM/OmvS/2e/2X9W/aR0Px7d6LrPh+
xn+Hvhu58VXllqD3CT31lbKTMYDHC8ZdSY12yPHkyrjIDlan7WX9h/8ADVHxL/4Rn+yv+Eb/
AOEr1T+yf7L8v7D9k+1y+T5Hl/J5Xl7dmz5duMcV7X/wS2udP0//AIXx/aWv+FdC/t34Vaz4
a07+2tfstL+26heeV9nhj+0yx7t3kvlx8ifLvZdy5Qj5Uor9C/2TfjEukfA39le28EfETw/4
Mg8LeMtRuvilZt4wtPDb3kL6nZyRSXUM88LX6GxUqGVZhtQxdQUHu37I3jvwP8Xfjh8GNJ+F
2s+FY/Bll458eap4m8N213b6T9u3XEl1oVw2lymKW78mKO1eF0hk+z/Z15jMJCMZ+VPwr+Ff
/C0v+Ej/AOKj8K+HP+Ec0O61z/ieah9j/tTyNv8Aodr8p827k3fu4uN21uRiuUr7B/4JnfES
71H/AIaN/wCEh8caVYf8J18ONX04/wDCReKrbT/7e1u6/wCPZm+1TJ50pzdZmORH5zbnXzRu
7X9hz9pSx+Hf7Jnwit5/H9roeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynEDSIGfLD
NIR8P+FfAmueOv7S/sTRtV1j+x7GXVL/AOw2klx9htIseZcS7AdkSbl3O2FXIyRmsmvoHxpf
eGLH9qj9oD+xPiP/AMK58Nzf8JFFof8AwjtrLPY+KIDdn7Po6/ZGVEtLiPbh2zAFjXKkEV8/
UAerfAT9k7UPjv8ADDx54y/4Sjwr4U8N/Dn+z/7YvNaN62PtsrwweWlrbTu37xMH5Rjcp6ZI
tfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqanqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e70
v9im50/xD+wl+0z4N/t/wrpXiTxX/wAIt/Y9nrWv2Wkf2h9n1Caafy3upY0OyMZPzcZUdWAN
v/gl5p9rpGh/HDUNT8ReCtCg8T/DHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLggg
MZ5T8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAK
AOer0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImS
rMMphz6X+z98LI/gb8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZ
Ib2ASOluNsCZlEr1P2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6nvtYvdF1M20NvNHPeSaMlx
HJqRS5ECmFrKICSZ1YbX3soB5r4f/wCCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mq
WWoQXpsjDciytJ4oN8uCjySBHRgysQGxqv8A8Ev/ABZo3iHQ9K8QeMvh/wCGL/xb4r1Hwf4c
ivpdRm/t67sb1bGZ4jbWcwii+0uI1NwYmOC20Lhj1fw6+MvgeXxj+0v+0BZz6VB4+03XY9X+
G+meIHtzIkupapN5l4tmWInu7OErImDJFE53sr7UIP2FPj/4+1Dxx4Q1fxb4++H/APwr3wd4
rl8Ralc+NrvR9Q1ay/eRXt+1jHdLLqnm3BjG02afPcPuDB97gA8/8Zf8E9tY+EngDQNe8f8A
jz4f+Av+EjvtW021sNSOqXl0lxpl41neI5sbK4iG2VeCJCGDAgnnB4H/AOCe2seM9I8E38vj
z4f6La/E3XbrQvBj3x1Rv+Eme3uY7Vp4hDZSGCJppVVftQhfqSijmvoDwZ8e9a/aF+NPhnXL
rxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzWBeXYlXUohKfPinL7rKacZQruVowo5/9m34w3Oj
ftGWsmh+Mfhpp37O/wAPfiBf6xo6+LJdLu7/AEjSkulu3XTYL5JdXR5oIohGLaMM1wwORJ5j
gA+NPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr0r4J/shN8a
rPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAcnxAs7H9rb9
oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8Nxk+1/sw+BbX4Yfs2
+GvEvw78Z/DSx+LfxHnvdN1PXdf8Y6bpFz8MNNWb7MTbwTTCb7RdIZHa6jRpY4AUijDSeYyE
ea+H/wDgn1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeKDfLgo8kg
R0YMrEBsGmf8E+tQm+JEXg/UfiX8KtF8U3viu78H2Glzaje3d1eXdvdLaF2S1tJjbRSTNtjN
35LOFLhdnzV0H7MNn4V/Zn8OfHHxtPr3grWfih8Kp7Gw8BF72G806/upb6S3uNUsoH2m8e3i
QTQOVaNN6yPGxCbe1/Z+0N/DPwJ0fxz4O+IHw/f42/Fm+1C217xR4p8bafp2ofDqyNwYJJYo
7if7Q13eK0sr3iK8yQ7liQPJ5jsZ4/4O/YL1bXvFmm+GtZ8e/DTwj4w1fxJceFbXw9qGpXF7
qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhyfs3f8E5fij+0/8AHvxF8O9D0q1sNW8HTz2viG81
C4xp2izRO8ZjlmiEgZ3kjZEWMOX2sw+RHdfa/g58G7X9nX4SWd18O/iL8H7r4t+L9V1Pw/qf
i288c6bYRfD7TYLprMz2CTSrM73qCSUXscZlS2O2KJWl8xuq/wCCYP7VMXwn/aS8G/CnxLN8
H7LwT8LdV16+l8YJrkunxXtzJDcW321ZXu4rW/d/Mjghd7eSRbZjsCAO4APlT9nv9hzx7+0n
8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3Vf2RP2TtQ/bJ+J
8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilml8t4LaZF8uOFifNZM7lC7jkD3X/gnvo9r4c1z9om7
1PVvhp4Sg8RfD/xL4P0uyk8a6bDA2pTtAYba2NxeNJJb4BVLku8TBOZmOTWV/wAEhtPtfhX/
AMFCdA8ReJfEXgrQNC8ET39vqt9qHinTbeANLY3luhgZ5wLpDJgb7fzFAZWJCspIB8k19A/A
3/gntrHx3+EHhzxnZ+PPh/o9h4o8Vx+CLO11I6oLoaxKN0Vq4hspEG+Mq4kDmMBwGdWDKPn6
v0W/4J2fECy8BfsYfDyzPij4VWF//wALysvE2pWfiDW9C+1WWhx20UE92sV7Jvt5VkiYI0ar
c4+aP5XDEQI+X/E3/BPLx74K1X4W6fq83h+w1L4r+JL/AMK6fatdvI+mXtlqSabOLpkjZAgn
fhoWlyoJ9Abcv/BO7xJ4c1C/Txf4w+H/AMP7CHxXc+DtM1HxDe3UFr4hvbaZ4LmW08u3kf7J
DIgWS6mSKFDIqs4YMq/VfiP4ieB/jX4q/Zt1Xwf448KyaD8L/it4j1TWpfEXiq302+tNPn8S
QX1tcMNTmjubrfafvC6iRywZWPmBlqp8U/j38Iv2r18IXF3ceCte8J+AfiB4qk8T2PiXVbnR
bt9F1jW/7RTVtMjS4t5bp0tYpUMC75hJIq/ZX3I1AHwTqPwN8Yab448V+G/+Ec1W71rwN9rb
DOqQfu5g8kcR82Zd+1JCCKQj4por728CfEDWdI/ZV+BOgfBH4xeCvBmu+FvEmvDxnet4sg8N
2l5M9/btZX11bXhhlv7c2qqRm3mPloYmTcDEPQP2LvF+swfsmeDfEkfjfw/psHhb9ocWt7ri
6zB4b05fD0kEF3f2tqs5tQllPIqzmxjjTfsBMGUIVjPzJr0v9kr9l/Vv2xfjXY+AtA1nw/o2
u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfot8JP2kPhNP8ZP2e/E+heL/BWjeCfh
v4y+INvfwzXkGlPpNtq95KNKMdlKY5jbulzB88URjgUP5hjEUmzlP+CdXxe8G/sz6H8D0tfG
vgrwvBa+JPENv8X9viCzWe/vNslpopY+YZLuyQXKsr2vmWaEvO5Uo8igH5k0V+ln7CF9r/gr
9jDwBfP4u0rQf+ED/aAj0rUdUuPF1nY2ttoJtre41Cygu3uFintJpUEzW8Dus5QSBH27h8E/
tKaz4e8R/tGeP9Q8IJax+E7/AMSajcaKlram1gWye6ka3EcJVTGnlFMIVXaMDAxikI4mu28A
/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV/sD/AIJ6fGHw
r4D/AGc7Ow+IvjHwUviG51W6PwcOqyw6gfh7qptblJNSvQEk+w2Ul29mVScMpmi+0iAKhuBU
/Yt/aN8X3fwC+Pnw01D42f8ACM+PtUvtHk8Oajq/jk21jauNYkfV7mDUfOMPziYyyGCRnuVL
sgm5pjPhSirevadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBGA3AOitgjKqcgfot/wTy8
T6zpv7Cnw016PxXa+HYPCPx5t7K91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFS
QhHxTqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC5X
gH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiiv8Aot8Cv2g/
g94B+DetQrqngqzg8Y/HnV9R8B3ENzZF/AsNxZzWum+IJNKldAlvayIP3VysQjVklAysW7yn
9kr4+eKrP4PftE/Cm4+Pdrp3ja61XSx4X1698bzWmlEprUjarfWl/I6qElWXz32ETXCMzBJD
uFMZ8f8AgH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiivxN
fcP7B/x31bTf2Zvjf8JtM+Nlr4Y124n0RPBGo3vii40PSrSCPVnOo3NpPP5Rt0eOUSvGFSeV
C2InZWUdV8DrfSfip4W/Yu0Pwv4u8FazffCX4galDr0cmu2+lzlZdetLm3ltra+aC5uUmhG5
BFEzE/IVEgKAA/PSiv128P8AifWdNs9Z16PxXa+HYPCP7WWp2V7qOo+JYNHS08PSSrdX9iks
80Ye3lkVZpLSMnzWTeY2KkjldN/au0bRfAvgmP4Aar8H9Jgi+IHii91hdc8WT+DbSwhl1dJd
NuJrKO+sWvbf7CYRtaC5CxwCEIpV4iWCx+WdFdB8WdZi8R/FTxLqFunh+OC/1W6uIk0G1ltd
KVXmdgLSGVVkjt8H92jqrKm0EAgivtb/AIJ6fGHwr4D/AGc7Ow+IvjHwUviG51W6PwcOqyw6
gfh7qptblJNSvQEk+w2Ul29mVScMpmi+0iAKhuAhHx/4B+APiL4j/CTx7430+O1Hh74cQWM2
rzTThX3Xl0ttbxRoMszsxds4ChYXywYor8TX3X+xb+0b4vu/gF8fPhpqHxs/4Rnx9ql9o8nh
zUdX8cm2sbVxrEj6vcwaj5xh+cTGWQwSM9ypdkE3NfD+vadFpGuXlpb39rqkFrO8MV7arKsF
4qsQJYxKiSBGA3AOitgjKqcgAFSvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogku
MSK8KxFDCivujlf/AFqA4YOF+1v+CeXifWdN/YU+GmvR+K7Xw7B4R+PNvZXuo6j4lg0dLTw9
JaW11f2KSzzRh7eWRVmktIyfNZN5jYqSNX4V/tIeDfhd8PNJl8J+L/BWgaP4k/akGuWVtb3l
nbXNh4XcmEzmAkTafblImibcsJ8iRkb9zMVd2HY/Mmiv1B8R3PhP41/GL9m3Tfhxr/w/ksPh
f8ZPEf23Todf07TfslpP4ngurL7Hbyyxm6ie22mP7IsinGxfmG2rek/FfUdP+Mnjjw2fiX4f
8C6anxd8TX1vrWg+NrHw7q/he5+2SDdrOmXxgh1yykcWsg2NJIsUM0Ql6W4APz+/ZK/Zf1b9
sX412PgLQNZ8P6NruqQTzWJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq3mlfqD+wP8f/AAP8
Ff8AhTep6b4++H+h2F14r8SP8Wrizu7fRv7WuX8220SQWkqwXDaeFuldI4IFtrfc8kqRNFIy
c/8As7fHu++D/wCyr8F/DXw21v4P2Xizw54k1uPxu+ufEBtCtLW6+3wm2upltdRtk1W3NuF/
eKl5GY4BGmfmRgD83qK/QH9kz49eBPF3gebQdV8XfD/wPdeBPjlD8YbhvKuNO0PUtHgjEctv
pEZiMzSgqPJtJI0kaN0CglXCfGn7SnxIsfjH+0Z4/wDF+mRXUGm+KvEmo6xaRXSqs8cNxdST
IsgVmUOFcAgMRnOCetIRxNdt4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMsz
sxds4ChYXywYor/YH/BPT4w+FfAf7OdnYfEXxj4KXxDc6rdH4OHVZYdQPw91U2tykmpXoCSf
YbKS7ezKpOGUzRfaRAFQ3AqfsW/tG+L7v4BfHz4aah8bP+EZ8fapfaPJ4c1HV/HJtrG1caxI
+r3MGo+cYfnExlkMEjPcqXZBNzTGfClerf8ADJ2of8Mgf8Lm/wCEo8K/2D/bv/CNf2Xm9/tT
+0Nvm+Tt+zeR/wAe/wC/3+ds2fLu8z93XmmvadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKi
SBGA3AOitgjKqcgfYHwr8C2vxY/4JSaT4Eg8Z/DTQ/EOq/F0a+YNc8Y6bpz2Wm/2abJruaOS
YSqizA/IEMrKAyRsrKShHlXwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh
9ktZ0hTz5VjBnaMkhjjZhiaN/wAE/vFVnZvceN/EPgr4VwSeJJvCljL4rv5ok1W/glaG68g2
0M4NvbyKElum226M6r5pO4L9QaPbeE/sP7KeheH/AIjfD/V7D4A/EfXYfEeo33iTTtH/ANEH
iC3uIb2KK5uFM8U1shlU25mXqu4sCKt/FP49/CL9q9fCFxd3HgrXvCfgH4geKpPE9j4l1W50
W7fRdY1v+0U1bTI0uLeW6dLWKVDAu+YSSKv2V9yNTGfnp478Eap8M/HGs+G9btvsWteH76fT
b+38xJPs9xDI0cibkJVsOrDKkg44JFZNdt+0po3h7w5+0Z4/0/wg9rJ4TsPEmo2+iva3RuoG
skupFtzHMWYyJ5QTDlm3DByc5r7A/wCCenxh8K+A/wBnOzsPiL4x8FL4hudVuj8HDqssOoH4
e6qbW5STUr0BJPsNlJdvZlUnDKZovtIgCobgIR8f+AfgD4i+I/wk8e+N9PjtR4e+HEFjNq80
04V915dLbW8UaDLM7MXbOAoWF8sGKK/E191/sW/tG+L7v4BfHz4aah8bP+EZ8fapfaPJ4c1H
V/HJtrG1caxI+r3MGo+cYfnExlkMEjPcqXZBNzXw/r2nRaRrl5aW9/a6pBazvDFe2qyrBeKr
ECWMSokgRgNwDorYIyqnIAB6B+z3+y/q37SOh+PbvRdZ8P2M/wAPfDdz4qvLLUHuEnvrK2Um
YwGOF4y6kxrtkePJlXGQHK8/8K/hX/wtL/hI/wDio/Cvhz/hHNDutc/4nmofY/7U8jb/AKHa
/KfNu5N37uLjdtbkYr6A/wCCW1zp+n/8L4/tLX/Cuhf278KtZ8Nad/bWv2Wl/bdQvPK+zwx/
aZY927yXy4+RPl3su5c2/wDglh4nXRtD/aA0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZ
qEuJo1kcL9pAkwViErBmTzRuAPkmur+Ffwr/AOFpf8JH/wAVH4V8Of8ACOaHda5/xPNQ+x/2
p5G3/Q7X5T5t3Ju/dxcbtrcjFfot+x5+3B4b+CX7Kf7M3htvG3hXT7pL7UGvre4W1uZtHuH8
V2EZlmZ1ZrHdo13rQErmIGKWTDZ2Y80/Zi8T6Do37Rn7ZmmaN4r8FeHvBPinw34o0Tw/aSeJ
bDSdK1S6nupF0xbZJJo4pEEPnhJEBjiSXBZBKNzGfH/gH4A+IviP8JPHvjfT47UeHvhxBYza
vNNOFfdeXS21vFGgyzOzF2zgKFhfLBiivxNfa3/BPL9ojXJf2SvjH8MbT4u/8IJ4k1H+wX8G
Sav4pk0Wx0mJdUZ9SkguGdUg/dzB5I4j5sy79qSEEV8aa9p0Wka5eWlvf2uqQWs7wxXtqsqw
XiqxAljEqJIEYDcA6K2CMqpyAhHoGo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXG
JFeFYihhRX3Ryv8A61AcMHC+aV+hf/BOD4vWvhn9ivw14fg8a+CtEef41xXHi3SNc8QabYpq
PhebS4ra/E1teSKtzburldgVyWUFBvjBW3D+1Lb/ALNv7FOra18JPGHhVZPD/wAZL288LaVc
69DJqieDGnikW0W3lmGoRWk95a2zTQL5csqBnkBjZ3LGfCnwr+Ff/C0v+Ej/AOKj8K+HP+Ec
0O61z/ieah9j/tTyNv8Aodr8p827k3fu4uN21uRiug/ZK/Zf1b9sX412PgLQNZ8P6NruqQTz
WJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq30B/wTr+KK+Itc/aauL7WvBXgbTfiF8P9bsrX
RZNdtNB0qbVbtibO3tre4nRdkatcojcrCj7WdfMG72H/AIJmfHfwP8CvA/wCvdN8ZeFfCdhL
ruvf8LaW41a3s769uWje20TzopXFxPaIt0pHkK1tExkll2NHJIoB+adFfqt/wTPudP1f/hmX
wRomv+FX/wCEU13xe/xF0C21+yX+09Qj/faXdNbCX/iaeV5MEkNzAs6RfZ1ZXXysrxX7O37W
WraX+yr8Fx4C8S/DT/hNk8Sa3fePbvxh47uPDrpezX8Mtve30aahaPqaPAV3u8d3lYTGBkPG
wB+b1FdB8WdZi8R/FTxLqFunh+OC/wBVuriJNBtZbXSlV5nYC0hlVZI7fB/do6qyptBAIIr7
W/4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUji1kGxpJFi
hmiEvS3CEfH/AIB+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXyw
Yor8TX3X+xb+0hrmofAL4+fDe0+NOleFPEmrX2jzeDL59ek8M6HZRDWJH1K4sSywJZxNHMJW
gijjleMsFhYqUHoP7CGrap4V/Yw8AanZ+OtKsbDwP+0BHptxr1x4jTRbU+HGtre5vrWCS8eB
2tJ5EW4azChpSodoSyHaxn5p16X+z3+y/q37SOh+PbvRdZ8P2M/w98N3Piq8stQe4Se+srZS
ZjAY4XjLqTGu2R48mVcZAcrlftKaz4e8R/tGeP8AUPCCWsfhO/8AEmo3Gipa2ptYFsnupGtx
HCVUxp5RTCFV2jAwMYr3X/gltc6fp/8Awvj+0tf8K6F/bvwq1nw1p39ta/ZaX9t1C88r7PDH
9plj3bvJfLj5E+Xey7lyhHj/AOz3+y/q37SOh+PbvRdZ8P2M/wAPfDdz4qvLLUHuEnvrK2Um
YwGOF4y6kxrtkePJlXGQHK+aV9V/8EtrnT9P/wCF8f2lr/hXQv7d+FWs+GtO/trX7LS/tuoX
nlfZ4Y/tMse7d5L5cfIny72XcufQPhN8Udf/AOGOv2fNL+DPxT8K/DjxJoGu63J43+2eLrPw
7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJBjoA+FKK1vHdx9s8cazL9s0rUPNvp3+1aXZ/Y7G
5zIx8yCDyovKibqkflR7VIGxMbR+hf8AwTI+Kvw2+Dnwr+FiX/inw/JoXijVdftPihp/iHxZ
NbwaY08KWmnJHpBuYoLq3nR4vNmktbpUBkZ5Ylh/dAHyT+z3+wf4v/aP+G8nijStS8K6RYXG
ujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIHilfZeveIj4c/4IzXngi41z4aSa
/YfE57uXTLXVdEutSbTUhMBuYxE7TSP9sG0TIWla3xhjaEV8aUAFFfpD/wAE8vE+s6b+wp8N
Nej8V2vh2Dwj8ebeyvdR1HxLBo6Wnh6S0trq/sUlnmjD28sirNJaRk+aybzGxUkdr+y58bvg
p8P/ABxpd7oHibwr/wAK28b+OfF6eNtM1fxFPptjpVpcyG20eODQmnggmtJoJLfzJJLO4WJS
5d4Vg/dOw7H5U16XqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90c
r/61AcMHC/VfgTxx4l8Ofsq/Anw18Ifit4K+Hnizwj4k16Px47eONO0e0a6a/tzaXV0rzBNU
txboMSRJdRtGhjG7Gyu2/Yx+OXw/+FX7I9pp3i3xF8P7jxJ4o+Ml5eeHtVsJ7ER+EbifTXs7
XxKdJk8kR2kFyjMsVzFAI0dJQilYgQD8067bwD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvu
vLpba3ijQZZnZi7ZwFCwvlgxRX/QD4Y/tEat8Mfgb8MNC8BeNvg/rHjbRPGXiL/hPdb1z4h3
Gi2l3enU42t9Tm2X9m+r280G1vNeK63RxhAoO+M+f/slftQ6t4o+D37RPgLTPi14f8Aa74g1
XS7vwQYPEFx4b8OaNAdakl1F9Nacxmzt/LnD+SAs8kWQInZWUAHwTXV/Cv4V/wDC0v8AhI/+
Kj8K+HP+Ec0O61z/AInmofY/7U8jb/odr8p827k3fu4uN21uRiuf17TotI1y8tLe/tdUgtZ3
hivbVZVgvFViBLGJUSQIwG4B0VsEZVTkD6r/AOCWHiddG0P9oDTL7xX4f8Pab4p+GOqaJa2m
seJbTSYNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG5CPkmvS9G/Zf1ab4CP8R9c1nw/4R8PXc81
poKaw9wLvxVPCjNMljDDDKzpGyrG00nlwLJKiGUNuC/a3/BPLxPrOm/sKfDTXo/Fdr4dg8I/
Hm3sr3UdR8SwaOlp4ektLa6v7FJZ5ow9vLIqzSWkZPmsm8xsVJHQaj8bvAfxWsfgnpvgrU/g
pP8AD3w58R/E3/CUad4ottBs/wCzdEuvECXVv9mt9WVJo4nsZGP+hqCNoQ4eMKrsOx+engH4
A+IviP8ACTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor8TX6A/s0ftEW
kvw6/aV+GPw9+Lv/AAglhqOu6c/wtk1fxTc6LY6Tpa65K9zJBcXDqYP9GmjeSMHz5l3/ACSM
GFfBOvadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBGA3AOitgjKqcgIR7t8Df+Ce2sfHf
4QeHPGdn48+H+j2HijxXH4Is7XUjqguhrEo3RWriGykQb4yriQOYwHAZ1YMotaN/wTd15vh4
/iPxF4/+GngiC18ZTeAL6DXLu/V9M1qItutppYLSWBU2L5nniUwhSN0ikED6A/4J+/tB6D+z
7+x58LbfU9U+Glxeav8AHmz1C707W7mwvZ9M0p7IWr6k0TOZbF4Joi6Tny2UojEmKTEnQ6Hr
Ufh79kf4geCPCHxK+FWran4p+MniGJdS8aeMtJlup/D95ptzpMmsSyvKJllJYyebColkBLLH
JDKUkYz4++Ef7DHiT4w/tf6v8E7PXvCum+L9KvtR01Jr6W6Fjf3Fi0gmSKSO3dhlIpZFMiIC
sZBIYqp1vBH7Acvjf4QW3jsfFf4VaV4WvPFb+DIb7UpdWt4/7QAZ495NgRHFJCFmEshVERx5
hjcMi/RX7FPw3+D37KH7SXw3+Ien/Ffw/qsHgjVfGOn+OLu81uyt0SG3hubfTLmwst32m5S6
hkRx9nN0CzbVYFSKyv2TvE9zp/8AwS7PhDQ/Ffwf0zxN4w+IGoC5t/FXiXS7Z9O0W90OTSp7
4rLMJ7d0d2K+WvnMo4SSKRlkAPCpf+Cd3iTw5qF+ni/xh8P/AIf2EPiu58HaZqPiG9uoLXxD
e20zwXMtp5dvI/2SGRAsl1MkUKGRVZwwZV6D4D/8EnvHvxy0PxbdyeKfhp4Mn8Earqek6xZe
JNbe3ntm01bc30+YopYzbwm5hV5Q+1SwyQGUt2vxf8D+G/j5+zj8MvhT4Q+JHw/+3/A/xX4i
8PanqHiHXbXRLXU7K+1Ey22r2jySsk9p5cJ8xYXkmQsu2ORWV2yv+CeNj4b8H+OP2kbC28be
FW0XUPhx4g8J+H9U1rVbXQP7fuLmRFszHDdzIy+akDMckiLKiRlLLkA+Xviz8OL74OfFTxL4
Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlc/RX3t+w5+0pY/Dv9kz4RW8/j+1
0PXdG+PNlazRPri213Y+GLiC3mvY2BcOmmSXEYeZTiBpEDPlhmkI+dP2e/2D/F/7R/w3k8Ua
VqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA8Ur9Af2svHfh//h3j
8S/DPhnWfhV9j/4XlqmqaTpOl3ei/a/+Ef3yxQ3EEMZ8/wD1+1EdB5v2bAB+y1+f1ABX9Qv/
AAahf8ordJ/7C9//AOlc9fjL8Jvijr//AAx1+z5pfwZ+KfhX4ceJNA13W5PG/wBs8XWfh2P7
RJfWz2dzfQTyIdQiW2VRlYrgbI2iwSDHX7X/APBr9cfbP+Ccry/bNK1DzfEuqv8AatLs/sdj
c5v7k+ZBB5UXlRN1SPyo9qkDYmNo3pfDP0/VGVT4oev6M/R6vyS/4KH2X2343eJh6afph/8A
Lm8eV+ttflT+3DYfb/jx4sX00zTD/wCXP48qIbDnv/XkfkJ/wXDt/svxO+Cqenw1H/qR69RV
/wD4L0Wv2P4y/BmP0+Gi/wDqRa7RQB+qP7KGm+Z4K/Z8fHXwR4HP/lF02vwI/bA/5LRH/wBi
14d/9MljX9EX7HXhp7j4T/s9z7eP+EE8En8tF07/AAr8/wD4ceI9a0j4A/DHXI/Fdr4cg8H/
ABV8LWN7qOpeJYNHS08Pv4Z0i6v7BJbiaMPbyyqs0lpGT5rR7zGxUkaS+D7hR+P7/wBD8ja9
L0b9l/VpvgI/xH1zWfD/AIR8PXc81poKaw9wLvxVPCjNMljDDDKzpGyrG00nlwLJKiGUNuC/
ZfwI+PXwo8Xf2/oOm+LvCvgfRfAn7QB+MNi2qRSadY6l4cgzGLfT40iLNdhFTZaGNGZXUICV
cJ1eo/tOeFv2lrH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf8AiBL1AqamTJ89nM5ZrFmB
ZSu4vGAMDY/L6iur+O//AAjH/C8PGX/CEf8AImf27e/2B/rf+Qf9of7N/rv3v+q2f6z5/wC9
zmvsv9hz9pSx+Hf7Jnwit5/H9roeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynEDSIG
fLDNIR86fs9/sH+L/wBo/wCG8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpu
Hhjd5o0VyxIHilfoD+1l478P/wDDvH4l+GfDOs/Cr7H/AMLy1TVNJ0nS7vRftf8Awj++WKG4
ghjPn/6/aiOg837NgA/Za/P6gD0v9kr9l/Vv2xfjXY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvE
GhhlKv5SyOC4VcRsN24qreaV+ln/AATM+O/gf4FeB/gFe6b4y8K+E7CXXde/4W0txq1vZ317
ctG9tonnRSuLie0RbpSPIVraJjJLLsaOSRan7O3x7vvg/wDsq/Bfw18Ntb+D9l4s8OeJNbj8
bvrnxAbQrS1uvt8JtrqZbXUbZNVtzbhf3ipeRmOARpn5kZjPkn9nv9g/xf8AtH/DeTxRpWpe
FdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkDxSvuH4s/FW18R/8ABJ7x
LpNvq3wfj1C/+Lt1rkWj6CdNtZF0h0dBPaWcoW+jT7Udke9FultdqnFuCKtfAL9qXX/2bf8A
glzoeteG/GHhVfHHh/4j/bLXSrnXrOTVE8OMLeSe0W384XcVpPqVrE00EPltKgZ2BiZnIB8K
UV+m37Afx/8Aht4W0PwLrF/q/grStC+IPiTxK/xQ8P3viSbT9K8PNdqtvp1taaIbqOCeykSW
JXkktrtYk3F5YlgzF0H/AATPudP1f/hmXwRomv8AhV/+EU13xe/xF0C21+yX+09Qj/faXdNb
CX/iaeV5MEkNzAs6RfZ1ZXXysqAflTXV/Cv4V/8AC0v+Ej/4qPwr4c/4RzQ7rXP+J5qH2P8A
tTyNv+h2vynzbuTd+7i43bW5GK5/XtevvFOuXmp6neXWo6lqM73V3d3UzTT3UzsWeSR2JZnZ
iSWJJJJJr6r/AOCWHiddG0P9oDTL7xX4f8Pab4p+GOqaJa2mseJbTSYNU1WdQtmoS4mjWRwv
2kCTBWISsGZPNG5CPkmiv0h/4J5eJ9Z039hT4aa9H4rtfDsHhH4829le6jqPiWDR0tPD0lpb
XV/YpLPNGHt5ZFWaS0jJ81k3mNipIqfAj49fCjxd/b+g6b4u8K+B9F8CftAH4w2LapFJp1jq
XhyDMYt9PjSIs12EVNloY0ZldQgJVwjsOx8aaN+y/q03wEf4j65rPh/wj4eu55rTQU1h7gXf
iqeFGaZLGGGGVnSNlWNppPLgWSVEMobcF80r9QdR/ac8LftLWPwTvNF1H4VSeFoPiP4mv/G+
mePB4ejvtL0y/wDECXqBU1MmT57OZyzWLMCyldxeMAfnT8d/+EY/4Xh4y/4Qj/kTP7dvf7A/
1v8AyD/tD/Zv9d+9/wBVs/1nz/3uc0hB8K/hX/wtL/hI/wDio/Cvhz/hHNDutc/4nmofY/7U
8jb/AKHa/KfNu5N37uLjdtbkYrlK+tv+CWHiddG0P9oDTL7xX4f8Pab4p+GOqaJa2mseJbTS
YNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG72v/AIJ5eJ9Z039hT4aa9H4rtfDsHhH4829le6jq
PiWDR0tPD0lpbXV/YpLPNGHt5ZFWaS0jJ81k3mNipIYz4p0b9l/VpvgI/wAR9c1nw/4R8PXc
81poKaw9wLvxVPCjNMljDDDKzpGyrG00nlwLJKiGUNuC5XgH4A+IviP8JPHvjfT47UeHvhxB
YzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiiv+heo/G7wH8VrH4J6b4K1P4KT/D3w58R/E3/C
Uad4ottBs/7N0S68QJdW/wBmt9WVJo4nsZGP+hqCNoQ4eMKvn/7NH7RFpL8Ov2lfhj8Pfi7/
AMIJYajrunP8LZNX8U3Oi2Ok6WuuSvcyQXFw6mD/AEaaN5IwfPmXf8kjBhQB+f1el6j+y/q1
l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwv3D/wTm+Inww+A
vgf4c2M/jjwrrHhvXtd8Rad8S01fxVd2tjB5saWWmtBo801ulzaXMbRGSaeyn8tWcyPAICIs
n/gn98RI/ht+yPo/g8+OPh/pF1B8ch/wmOl6l4q0mO11Pwy+mx2mob0nm8m9tHDMo8vzA5UP
HkoGAB8afs9/sv6t+0jofj270XWfD9jP8PfDdz4qvLLUHuEnvrK2UmYwGOF4y6kxrtkePJlX
GQHK+aV9rfsW3PgfT/jh+1V/wjWv+FdC8Ga74G8TeGvB/wDbWv2+l/bftlwv9nQx/bpY5W3R
Q8u/3Pl81lLDPoH7LP7Stx4C/Y6+BWl/C7WPhVpviTS9d1eTxl/wk3jafwvHZXDX0D2lzdQR
X9mdRiNtsDForsbIPKAyGjKEfnTRXQfFnWYvEfxU8S6hbp4fjgv9VuriJNBtZbXSlV5nYC0h
lVZI7fB/do6qyptBAIIr7L+AX7Uuv/s2/wDBLnQ9a8N+MPCq+OPD/wAR/tlrpVzr1nJqieHG
FvJPaLb+cLuK0n1K1iaaCHy2lQM7AxMzkA+FKK/Tb9gP4/8Aw28LaH4F1i/1fwVpWhfEHxJ4
lf4oeH73xJNp+leHmu1W3062tNEN1HBPZSJLErySW12sSbi8sSwZiP2Wpfhd4k+A/wCzlYeJ
/iT8NND8T/BfxJqcEtrqfiHyja3/APwkdjqZmjki3QTW76ZZ30aXBc27zXMCK5dgUYz8ya6v
4V/Cv/haX/CR/wDFR+FfDn/COaHda5/xPNQ+x/2p5G3/AEO1+U+bdybv3cXG7a3IxWr+1l43
0z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBlyjKcMARnkA17t/wSw8Tro2h/tA
aZfeK/D/AIe03xT8MdU0S1tNY8S2mkwapqs6hbNQlxNGsjhftIEmCsQlYMyeaNyEfJNdt4B+
APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor/AHX/AME8vE+s
6b+wp8NNej8V2vh2Dwj8ebeyvdR1HxLBo6Wnh6S0trq/sUlnmjD28sirNJaRk+aybzGxUkZP
wG/ahi8UeFv2ovAXgX4tWvgCDxB4ksbv4ZG98QS+G9K0bTTr00t29ozmNLNPs86O8MYWWRNw
WJyrKGM/PSvoH4G/8E9tY+O/wg8OeM7Px58P9HsPFHiuPwRZ2upHVBdDWJRuitXENlIg3xlX
EgcxgOAzqwZR4Tr2nRaRrl5aW9/a6pBazvDFe2qyrBeKrECWMSokgRgNwDorYIyqnIH6Af8A
BP39oPQf2ff2PPhbb6nqnw0uLzV/jzZ6hd6drdzYXs+maU9kLV9SaJnMti8E0RdJz5bKURiT
FJiQQI+f9G/4Ju683w8fxH4i8f8Aw08EQWvjKbwBfQa5d36vpmtRFt1tNLBaSwKmxfM88SmE
KRukUggc/wDCP9hjxJ8Yf2v9X+Cdnr3hXTfF+lX2o6ak19LdCxv7ixaQTJFJHbuwykUsimRE
BWMgkMVU/YOh61H4e/ZH+IHgjwh8SvhVq2p+KfjJ4hiXUvGnjLSZbqfw/eabc6TJrEsryiZZ
SWMnmwqJZASyxyQylJOe/Yp+G/we/ZQ/aS+G/wAQ9P8Aiv4f1WDwRqvjHT/HF3ea3ZW6JDbw
3NvplzYWW77TcpdQyI4+zm6BZtqsCpFAHzr4I/YDl8b/AAgtvHY+K/wq0rwteeK38GQ32pS6
tbx/2gAzx7ybAiOKSELMJZCqIjjzDG4ZFJf+Cd3iTw5qF+ni/wAYfD/4f2EPiu58HaZqPiG9
uoLXxDe20zwXMtp5dvI/2SGRAsl1MkUKGRVZwwZV91/ZO8T3On/8Euz4Q0PxX8H9M8TeMPiB
qAubfxV4l0u2fTtFvdDk0qe+KyzCe3dHdivlr5zKOEkikZZOf+L/AIH8N/Hz9nH4ZfCnwh8S
Ph/9v+B/ivxF4e1PUPEOu2uiWup2V9qJlttXtHklZJ7Ty4T5iwvJMhZdscisrsAcV8B/+CT3
j345aH4tu5PFPw08GT+CNV1PSdYsvEmtvbz2zaatub6fMUUsZt4Tcwq8ofapYZIDKW+fviz8
OL74OfFTxL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlfUP/AATxsfDfg/xx
+0jYW3jbwq2i6h8OPEHhPw/qmtara6B/b9xcyItmY4buZGXzUgZjkkRZUSMpZc/H1IR1fwr+
Ff8AwtL/AISP/io/Cvhz/hHNDutc/wCJ5qH2P+1PI2/6Ha/KfNu5N37uLjdtbkYrlK+tv+CW
HiddG0P9oDTL7xX4f8Pab4p+GOqaJa2mseJbTSYNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG72
v/gnl4n1nTf2FPhpr0fiu18OweEfjzb2V7qOo+JYNHS08PSWltdX9iks80Ye3lkVZpLSMnzW
TeY2KkhjPhTwD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxR
X4mv0L+A37UMXijwt+1F4C8C/Fq18AQeIPEljd/DI3viCXw3pWjaademlu3tGcxpZp9nnR3h
jCyyJuCxOVZR0P8AwTm+Inww+Avgf4c2M/jjwrrHhvXtd8Rad8S01fxVd2tjB5saWWmtBo80
1ulzaXMbRGSaeyn8tWcyPAICIgD4e1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4
xIrwrEUMKK+6OV/9agOGDhT9nv8AZf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxg
McLxl1JjXbI8eTKuMgOV+y/+Cf3xEj+G37I+j+Dz44+H+kXUHxyH/CY6XqXirSY7XU/DL6bH
aahvSebyb20cMyjy/MDlQ8eSgYef/sW3PgfT/jh+1V/wjWv+FdC8Ga74G8TeGvB/9ta/b6X9
t+2XC/2dDH9uljlbdFDy7/c+XzWUsMgHxTRX6Lfss/tK3HgL9jr4FaX8LtY+FWm+JNL13V5P
GX/CTeNp/C8dlcNfQPaXN1BFf2Z1GI22wMWiuxsg8oDIaM/BPxZ1mLxH8VPEuoW6eH44L/Vb
q4iTQbWW10pVeZ2AtIZVWSO3wf3aOqsqbQQCCKQjn6K+6/gF+1Lr/wCzb/wS50PWvDfjDwqv
jjw/8R/tlrpVzr1nJqieHGFvJPaLb+cLuK0n1K1iaaCHy2lQM7AxMzn0v9gP4/8Aw28LaH4F
1i/1fwVpWhfEHxJ4lf4oeH73xJNp+leHmu1W3062tNEN1HBPZSJLErySW12sSbi8sSwZiAPz
Jor9Nv2Wpfhd4k+A/wCzlYeJ/iT8NND8T/BfxJqcEtrqfiHyja3/APwkdjqZmjki3QTW76ZZ
30aXBc27zXMCK5dgU+Cf2svG+mfEz9qj4l+JNEuftui+IPFeqalYXHlvH9ot5ruWSN9rgMuU
ZThgCM8gGgDz+u28A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL
5YMUV/sD/gnp8YfCvgP9nOzsPiL4x8FL4hudVuj8HDqssOoH4e6qbW5STUr0BJPsNlJdvZlU
nDKZovtIgCobgVP2Lf2jfF938Avj58NNQ+Nn/CM+PtUvtHk8Oajq/jk21jauNYkfV7mDUfOM
PziYyyGCRnuVLsgm5pjPhSvYPhX+xrq3xA+Hmk+LNc8V+Cvhx4e8SaqNG0G98V3dxbJrk4JE
zwCGCZhbwNtWW5kCQRtIqmTcGC+Va9p0Wka5eWlvf2uqQWs7wxXtqsqwXiqxAljEqJIEYDcA
6K2CMqpyB9VzW+k/tffsPfAzwRofi7wV4a8Q/CvVdY07XoPFeu2+iIsGqXi3MN/A8zBZ7eNY
2WVYy06NtxCysrFCMr4D/wDBJ7x78ctD8W3cnin4aeDJ/BGq6npOsWXiTW3t57ZtNW3N9PmK
KWM28JuYVeUPtUsMkBlLfP3xZ+HF98HPip4l8IanLaz6l4V1W60e7ltWZoJJreZ4XaMsqsUL
ISCVBxjIHSvqH/gnjY+G/B/jj9pGwtvG3hVtF1D4ceIPCfh/VNa1W10D+37i5kRbMxw3cyMv
mpAzHJIiyokZSy5+PqAOr+Ffwr/4Wl/wkf8AxUfhXw5/wjmh3Wuf8TzUPsf9qeRt/wBDtflP
m3cm793Fxu2tyMVylfW3/BLDxOujaH+0Bpl94r8P+HtN8U/DHVNEtbTWPEtppMGqarOoWzUJ
cTRrI4X7SBJgrEJWDMnmjd7X/wAE8vE+s6b+wp8NNej8V2vh2Dwj8ebeyvdR1HxLBo6Wnh6S
0trq/sUlnmjD28sirNJaRk+aybzGxUkMZ8qfArWPiJP+z74z8QaBpPw0m8J/C2C0m1S61jwV
oWo3rNfXohgiEtxZyzzOzvIwLttSOFhuX92jeP8Ajfxld/EDxPc6vfw6VBdXezfHpul22mWq
7UVBst7aOOFOFGdqDJyxySSfvX4DftQxeKPC37UXgLwL8WrXwBB4g8SWN38Mje+IJfDelaNp
p16aW7e0ZzGlmn2edHeGMLLIm4LE5VlHQ/8ABOb4ifDD4C+B/hzYz+OPCuseG9e13xFp3xLT
V/FV3a2MHmxpZaa0GjzTW6XNpcxtEZJp7Kfy1ZzI8AgIiAPh7Uf2X9Wsv2TLD4xR6z4fu/D1
34kPhWWxhe4Go2N6IJLjEivCsRQwor7o5X/1qA4YOF9A/Z7/AOFk/tI/suePfhrovibwVY+D
fh7pVz4+vND1DRYUvb5bbLTXUF1HZvIbhQY4syTxsY5VjBMQcL9F/wDBP74iR/Db9kfR/B58
cfD/AEi6g+OQ/wCEx0vUvFWkx2up+GX02O01Dek83k3to4ZlHl+YHKh48lAw8/8A2LbnwPp/
xw/aq/4RrX/CuheDNd8DeJvDXg/+2tft9L+2/bLhf7Ohj+3Sxytuih5d/ufL5rKWGQD4por9
Fv2Wf2lbjwF+x18CtL+F2sfCrTfEml67q8njL/hJvG0/heOyuGvoHtLm6giv7M6jEbbYGLRX
Y2QeUBkNGfgn4s6zF4j+KniXULdPD8cF/qt1cRJoNrLa6UqvM7AWkMqrJHb4P7tHVWVNoIBB
FIR9F/s9/C/4x/tH/sYSeEtK8QfD/SPhtceKxpWj6brFnY2t14g8Rm2ku1gt7oWzSpdtEixL
LcTwq4mjt1kKkxj5/wDhX8K/+Fpf8JH/AMVH4V8Of8I5od1rn/E81D7H/ankbf8AQ7X5T5t3
Ju/dxcbtrcjFfS3/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k
348/yOM/Zqqf8EsPE66Nof7QGmX3ivw/4e03xT8MdU0S1tNY8S2mkwapqs6hbNQlxNGsjhft
IEmCsQlYMyeaNzGeKfs9/sv6t+0jofj270XWfD9jP8PfDdz4qvLLUHuEnvrK2UmYwGOF4y6k
xrtkePJlXGQHK5XgH4A+IviP8JPHvjfT47UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhf
LBiiv7r/AMEtrnT9P/4Xx/aWv+FdC/t34Vaz4a07+2tfstL+26heeV9nhj+0yx7t3kvlx8if
LvZdy56v/gnl+0Rrkv7JXxj+GNp8Xf8AhBPEmo/2C/gyTV/FMmi2OkxLqjPqUkFwzqkH7uYP
JHEfNmXftSQgigD5p8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7Od9xc2kkz8scbnOB
hRgAAa3jf9tLx98QPhBc+Ab+fwrB4Qu75NUfTdN8H6Ppka3ahVFwjW1rGyS7FCF1IYplCSpK
n7L/AGdv2hG+Ff7KvwX0D4TeJ/g+2u6D4k1seLr3XPGl34VtBMb+FrO+mtvtthLqFu9rsP72
3uCI4RFsVg8RyfDf7X2s/BD9g258VeE/Fvw0h8Yad8XbrVbDR9K1SCEReGpZYppbOysZJEv7
bTJ9RtoWa1VYpHhG518tmYgH56V1fwr+Ff8AwtL/AISP/io/Cvhz/hHNDutc/wCJ5qH2P+1P
I2/6Ha/KfNu5N37uLjdtbkYrK8d+Kv8AhOvHGs63/ZulaP8A2xfT332DS7f7PY2PmyM/kwR5
OyJN21FydqgDJxX1B/wSw8Tro2h/tAaZfeK/D/h7TfFPwx1TRLW01jxLaaTBqmqzqFs1CXE0
ayOF+0gSYKxCVgzJ5o3IR8k19LfBbWfif8HP2aE8d2HiP4f/AA00Vvt2l+GtTvvDdonibxFK
0U32tdNvIbGW++TeYTctNFFE00cYmUqQn0X/AME8vE+s6b+wp8NNej8V2vh2Dwj8ebeyvdR1
HxLBo6Wnh6S0trq/sUlnmjD28sirNJaRk+aybzGxUkdBqPxu8B/Fax+Cem+CtT+Ck/w98OfE
fxN/wlGneKLbQbP+zdEuvECXVv8AZrfVlSaOJ7GRj/oagjaEOHjCqxnxp4B8RfFX4j/sPePd
B0+58Pj4SfDiex1nV7ebSbBbn7beXi29u8cwgN01w2XHmeYMQQvGXClIn8Jr9Af2aP2iLSX4
dftK/DH4e/F3/hBLDUdd05/hbJq/im50Wx0nS11yV7mSC4uHUwf6NNG8kYPnzLv+SRgwrq/+
Cc3xE+GHwF8D/Dmxn8ceFdY8N69rviLTviWmr+Kru1sYPNjSy01oNHmmt0ubS5jaIyTT2U/l
qzmR4BAREAfD2o/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv8A
61AcMHCn7Pf7L+rftI6H49u9F1nw/Yz/AA98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mV
cZAcr9l/8E/viJH8Nv2R9H8Hnxx8P9IuoPjkP+Ex0vUvFWkx2up+GX02O01Dek83k3to4ZlH
l+YHKh48lAw8/wD2LbnwPp/xw/aq/wCEa1/wroXgzXfA3ibw14P/ALa1+30v7b9suF/s6GP7
dLHK26KHl3+58vmspYZAPimvQPBH7S/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJ
M/LHG5zgYUYAAH2t+yz+0rceAv2OvgVpfwu1j4Vab4k0vXdXk8Zf8JN42n8Lx2Vw19A9pc3U
EV/ZnUYjbbAxaK7GyDygMhoz8E/FnWYvEfxU8S6hbp4fjgv9VuriJNBtZbXSlV5nYC0hlVZI
7fB/do6qyptBAIIpCO28b/tpePviB8ILnwDfz+FYPCF3fJqj6bpvg/R9MjW7UKouEa2tY2SX
YoQupDFMoSVJU+U1+oP7Hn7cHhv4Jfsp/szeG28beFdPukvtQa+t7hbW5m0e4fxXYRmWZnVm
sd2jXetASuYgYpZMNnZjoP2evir8Gvg58VIE0jxT4Kk+H3ij4geM7Tx1p974sa30rTLaeZrT
SEtNIS5itLqyngeDfMbW6jRCzNLEkP7pjPzp+F/7XPjn4LaHDZ+Fbrw/oU9rBc28GrWvhnTF
1u3W4WRZTHqX2f7YjlZXUOswZFIClQAB5pX6A/sdfHu3+DH7NHwu0e/+IOlaP4k8L/tAW2m3
aw+JoTJZ+GpYoJL9FkjlIOlS3MQkkKsbaR0DksQDX0X+zJ4i8G+Ov2jPhJ4Z+G2ueCv+ET0P
4geP7vxX4d0zVbOzg1BhdS3GiXIsd6G/SKKK1eCaGOVYBboVZPJ+UA/L79nv9l/Vv2kdD8e3
ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5XzSvrb/gmp4nXWdc/aF1
PxL4r8P2epeL/hjrmiQ3fiTxLaWE+r6rftG0S77uZGkeRo5C8mSqkguy71z7B+yz+0rceAv2
OvgVpfwu1j4Vab4k0vXdXk8Zf8JN42n8Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhoyhH5
00V0HxZ1mLxH8VPEuoW6eH44L/Vbq4iTQbWW10pVeZ2AtIZVWSO3wf3aOqsqbQQCCK+1v2HP
2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u7HwxcQW817GwLh00yS4jDzKcQNIgZ8sM0AfOn
7Pf7B/i/9o/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEg
eKV+gP7WXjvw/wD8O8fiX4Z8M6z8Kvsf/C8tU1TSdJ0u70X7X/wj++WKG4ghjPn/AOv2ojoP
N+zYAP2Wvz+oA9A8EftL+Ivh/wCGLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM/LHG5zgY
UYAAHK+N/GV38QPE9zq9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJ+1fhN8Udf/
AOGOv2fNL+DPxT8K/DjxJoGu63J43+2eLrPw7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJBjr
tvgR8als/Av7OEfgj4oeCvDsHhv4gaxe/FJdO8SWng601WGTV7WWK4eynaza6t2sVIjVYCEj
XydiFTEGM/N6iv0s+GH7V3hDw94Y0G48L+PNK8PWFr+07IulwQ6kNLk0/wAFXTxzzRLASj2+
lSSJG8kRVYS6Deu5eLXg79pTwr8O7PTbfw/4/wDD+hwaN+1JcWunxafrkNslj4NuJUmnjiCO
AmjySRo7quLZmRWOSAaAPz/8A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2
Yu2cBQsL5YMUV+Jr9FvhH+0R5tj+1d8MfBvxd0rwJ/aPiu1f4bSP4p/sXQ9J09fEE73kljcB
1ggi8iZHaOA75o92xJMEVz/wg/aU1n9lb/gmrp9/4T8f+CtT8YeFfic89hbjXIGvbnw1mB5Y
YbaR47+GyutRtYZJbdUhkkjy7oI2ZqAPgmvQPBH7S/iL4f8Ahi20iw074fz2tpv2Sal4E0PU
7ptzs533FzaSTPyxxuc4GFGAAB97fsB/H/4beFtD8C6xf6v4K0rQviD4k8Sv8UPD974km0/S
vDzXarb6dbWmiG6jgnspEliV5JLa7WJNxeWJYMxef+BPHHiXw5+yr8CfDXwh+K3gr4eeLPCP
iTXo/Hjt4407R7Rrpr+3NpdXSvME1S3FugxJEl1G0aGMbsbKAPmn4kfBHxF4z/Zzi+PN1qfg
qTSdX8SDwrc6bo+nDS57C9S1aUA2sNrDaKhgiRy0LNkyru+cybfH6/Sz9jH45fD/AOFX7I9p
p3i3xF8P7jxJ4o+Ml5eeHtVsJ7ER+EbifTXs7XxKdJk8kR2kFyjMsVzFAI0dJQilYgfzz+LN
nfad8VPEtvqevWvirUoNVuo7vWrW9a+g1iYTOHuo7hvmmSVsuJDy4YMetIRz9Ffe37Dn7Slj
8O/2TPhFbz+P7XQ9d0b482VrNE+uLbXdj4YuILea9jYFw6aZJcRh5lOIGkQM+WGa9gu/2ldM
8BaD4V0v4Eax8FNN/sv4j+LJNc/tLxs/hfTbJG1lX065eC2v7NdRtDZeUARFdp5UAiQcNGWM
/N/wD8AfEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX1dR/Zf1
ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/WoDhg4X6r/ZK/ah1bxR
8Hv2ifAWmfFrw/4A13xBqul3fggweILjw34c0aA61JLqL6a05jNnb+XOH8kBZ5IsgROyso6D
/gn98TdP8A/sj6P4UsPiJ8P4Y1+OQfxRa6lr1lp9rrnhSTTY7S+ke11FomuLSVGOI2i35AIQ
SR/KAfCnwr+Ff/C0v+Ej/wCKj8K+HP8AhHNDutc/4nmofY/7U8jb/odr8p827k3fu4uN21uR
iuUr7h/Yn8T+FdG+Kn7WemeFvFfh/wAPfDnxT4N8Q6J4WtNY8Sw6TBqk08zrpShL6aN5HEHn
ASSAmISsJGQy/N1X7JvxiXSPgb+yvbeCPiJ4f8GQeFvGWo3XxSs28YWnht7yF9Ts5IpLqGee
Fr9DYqVDKsw2oYuoKAA/PSiv0s+CHxw0/U9Q1LS/CPxI8K+AfAN58R9c1bw7q/hjxZZeE9Q8
KRPMxtzq2jX4todY09x9jkRF3yxwwywhxj7MPzp8d3H2zxxrMv2zStQ82+nf7Vpdn9jsbnMj
HzIIPKi8qJuqR+VHtUgbExtCEegfDX9tv4n/AAh8D2Hh7w/4m+xWGj/bv7JlfTrS4vtC+2x+
XdfYbuWJriy8xclvs8kfzFm4ZiT5TX3t+w5+0pY/Dv8AZM+EVvP4/tdD13RvjzZWs0T64ttd
2Phi4gt5r2NgXDppklxGHmU4gaRAz5YZr0D4W/Fe10/xZr/hvwz8S/BXgXwSnxO1++0HWvB3
jbTfDtz4XX7Q4gbUtMujBa65pkmLOSPymkZYIZYllxi3DGfmTRWt47uPtnjjWZftmlah5t9O
/wBq0uz+x2NzmRj5kEHlReVE3VI/Kj2qQNiY2j7g/Yc/aUsfh3+yZ8Irefx/a6HrujfHmytZ
on1xba7sfDFxBbzXsbAuHTTJLiMPMpxA0iBnywzSEfOn7Pf7B/i/9o/4byeKNK1LwrpFhca6
PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgcp/w1B4z/wCGeP8AhVX23Sv+EF+3
f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r91ny/kr7B/ay8d+H/+HePxL8M+GdZ+FX2P/heW
qappOk6Xd6L9r/4R/fLFDcQQxnz/APX7UR0Hm/ZsAH7LX5/UAFFfdfwm+KOv/wDDHX7Pml/B
n4p+Ffhx4k0DXdbk8b/bPF1n4dj+0SX1s9nc30E8iHUIltlUZWK4GyNosEgx123wI+NS2fgX
9nCPwR8UPBXh2Dw38QNYvfikuneJLTwdaarDJq9rLFcPZTtZtdW7WKkRqsBCRr5OxCpiDGfm
9RX6WfDD9q7wh4e8MaDceF/HmleHrC1/adkXS4IdSGlyaf4KunjnmiWAlHt9KkkSN5Iiqwl0
G9dy8WvB37SnhX4d2em2/h/x/wCH9Dg0b9qS4tdPi0/XIbZLHwbcSpNPHEEcBNHkkjR3VcWz
MisckA0Afn/4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYo
r8TX6LfCP9ojzbH9q74Y+Dfi7pXgT+0fFdq/w2kfxT/Yuh6Tp6+IJ3vJLG4DrBBF5EyO0cB3
zR7tiSYIrn/hB+0prP7K3/BNXT7/AMJ+P/BWp+MPCvxOeewtxrkDXtz4azA8sMNtI8d/DZXW
o2sMktuqQySR5d0EbM1AHwTXoHgj9pfxF8P/AAxbaRYad8P57W037JNS8CaHqd0252c77i5t
JJn5Y43OcDCjAAA+9v2A/j/8NvC2h+BdYv8AV/BWlaF8QfEniV/ih4fvfEk2n6V4ea7VbfTr
a00Q3UcE9lIksSvJJbXaxJuLyxLBmLz/AMCeOPEvhz9lX4E+GvhD8VvBXw88WeEfEmvR+PHb
xxp2j2jXTX9ubS6uleYJqluLdBiSJLqNo0MY3Y2UAeVfEjxl8VfGf/BO2LxJdeIPhpJ8L9X8
ZDSbnw9o/hKw0u9sNZS3aYTkQ6fCiubeJMywzMTHKsbHBkRfl6v0s/Yx+OXw/wDhV+yPaad4
t8RfD+48SeKPjJeXnh7VbCexEfhG4n017O18SnSZPJEdpBcozLFcxQCNHSUIpWIH88/izZ32
nfFTxLb6nr1r4q1KDVbqO71q1vWvoNYmEzh7qO4b5pklbLiQ8uGDHrQB1eo/sv6tZfsmWHxi
j1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/AK1AcMHC+aV+hf8AwTg+L1r4Z/Yr
8NeH4PGvgrRHn+NcVx4t0jXPEGm2Kaj4Xm0uK2vxNbXkirc27q5XYFcllBQb4wVtw/tS2/7N
v7FOra18JPGHhVZPD/xkvbzwtpVzr0MmqJ4MaeKRbRbeWYahFaT3lrbNNAvlyyoGeQGNncgH
500V+kPwI/aMXxH4F/Zw1HwR4z8FfDSDT/iBrGsfFLQ9O8UWnhK0WGfV7W4iV7Se4ia8t1sQ
Y4womCxx+VnKla7X9lz43fBT4f8AjjS73QPE3hX/AIVt438c+L08baZq/iKfTbHSrS5kNto8
cGhNPBBNaTQSW/mSSWdwsSly7wrB+6APz0/ZK/Zf1b9sX412PgLQNZ8P6NruqQTzWJ1h7hIL
tokMrxBoYZSr+UsjguFXEbDduKq3mlfpt/wTq+L3g39mfQ/gelr418FeF4LXxJ4ht/i/t8QW
az395tktNFLHzDJd2SC5Vle18yzQl53KlHkXz/wJ448S+HP2VfgT4a+EPxW8FfDzxZ4R8Sa9
H48dvHGnaPaNdNf25tLq6V5gmqW4t0GJIkuo2jQxjdjZQB8k/wDDUHjP/hnj/hVX23Sv+EF+
3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+v02/YD+L3w2+FGh+BWv/ABr4K1TQ
vFviTxLb/FCG98QTaXpVq06raacbTQTJawS2VyjxF3ksJlhRm3m3WDEPFfsM+PtX8GfCDw/4
Kufid4V8E2uneK76VNc8MfEDTNH1DwzdqFQvq1jcvHa+IdPkkW2lR4ZJm8mGWNZiCLegD408
A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV7Xgj9pfxF8P
/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAAD6r/Yt/aQ1zUPgF8fPhvaf
GnSvCniTVr7R5vBl8+vSeGdDsohrEj6lcWJZYEs4mjmErQRRxyvGWCwsVKD4f17TotI1y8tL
e/tdUgtZ3hivbVZVgvFViBLGJUSQIwG4B0VsEZVTkBCLfjfxld/EDxPc6vfw6VBdXezfHpul
22mWq7UVBst7aOOFOFGdqDJyxySSfVf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g
1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA+i/2HP2lLH4d/smfCK3n8f2uh67o3x5srWaJ9cW2u
7HwxcQW817GwLh00yS4jDzKcQNIgZ8sM1b/ay8d+H/8Ah3j8S/DPhnWfhV9j/wCF5apqmk6T
pd3ov2v/AIR/fLFDcQQxnz/9ftRHQeb9mwAfstMZ+f1FFfdfwm+KOv8A/DHX7Pml/Bn4p+Ff
hx4k0DXdbk8b/bPF1n4dj+0SX1s9nc30E8iHUIltlUZWK4GyNosEgx0hHwpRX6Q/Aj41LZ+B
f2cI/BHxQ8FeHYPDfxA1i9+KS6d4ktPB1pqsMmr2ssVw9lO1m11btYqRGqwEJGvk7EKmIW/h
h+1d4Q8PeGNBuPC/jzSvD1ha/tOyLpcEOpDS5NP8FXTxzzRLASj2+lSSJG8kRVYS6Deu5eGM
/NOu28A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV/0A8H
ftKeFfh3Z6bb+H/H/h/Q4NG/akuLXT4tP1yG2Sx8G3EqTTxxBHATR5JI0d1XFszIrHJANc/8
I/2iPNsf2rvhj4N+LuleBP7R8V2r/DaR/FP9i6HpOnr4gne8ksbgOsEEXkTI7RwHfNHu2JJg
igD86a9A8EftL+Ivh/4YttIsNO+H89rab9kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAAfW3w
g/aU1n9lb/gmrp9/4T8f+CtT8YeFfic89hbjXIGvbnw1mB5YYbaR47+GyutRtYZJbdUhkkjy
7oI2Zq9A/YD+P/w28LaH4F1i/wBX8FaVoXxB8SeJX+KHh+98STafpXh5rtVt9OtrTRDdRwT2
UiSxK8kltdrEm4vLEsGYgD83/G/jK7+IHie51e/h0qC6u9m+PTdLttMtV2oqDZb20ccKcKM7
UGTljkkk/wBI3/BrvrvjGy/4JiaDb+GPDGka+kuq6h5jXWuNYPGwu5uAv2eQEYI53Dvx6/kz
4E8ceJfDn7KvwJ8NfCH4reCvh54s8I+JNej8eO3jjTtHtGumv7c2l1dK8wTVLcW6DEkSXUbR
oYxuxsr9qv8Ag1YuPtn7B0cv2zStQ83xPq7/AGrS7P7HY3Ob6Y+ZBB5UXlRN1SPyo9qkDYmN
o9DL1C1VzjzWj1v/ADR7NHHi+b3FF21/R9z7D1b4s/EbQpGW68A+GoinUf8ACWSNj8rI18Ff
tFyzeJPjD4jurmCO2uLjQdHmlhjlMqRO3iXx2WVXKqWAJIBKrnGcDpX6M/Fn/j+uPqa/Pn4s
Wf234yeJRjOPDujH/wAuXx3XRioUlQjOEFF36X7ebZzUJ1HWcZSvp5eXZI/IT/g4Og+zfH74
Op6fDOP/ANSHXaKvf8HGVv8AZf2lfhEnp8Mov/T/AK5RXlHoH7l/sP8Aw5Sf9l79n672cjwB
4ROf93SLIf0r8CfGdt8Sf2ufjkP2Z/D/AIm8FaNoV/pekavY2OsaLCgur2HRbOV3W8hs5blb
jyUkAkd1/cxtFv2lY2/os/YKnWX9jb9n/j/mn3hX/wBNVnX48+BPjx4H+Bnxl+Ct9pnjLwr4
TsJpWPxaW41a3s769uX0S1ttE86KVxcT2iLdKR5CtbRMZJpdjRySK73j9xq4pNM/HCva/wBn
v9g/xf8AtH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkD
62/Z2+Pd98H/ANlX4L+GvhtrfwfsvFnhzxJrcfjd9c+IDaFaWt19vhNtdTLa6jbJqtubcL+8
VLyMxwCNM/Mjef8AxZ+Ktr4j/wCCT3iXSbfVvg/HqF/8XbrXItH0E6bayLpDo6Ce0s5Qt9Gn
2o7I96LdLa7VOLcEVmB8PUV91/AL9qXX/wBm3/glzoeteG/GHhVfHHh/4j/bLXSrnXrOTVE8
OMLeSe0W384XcVpPqVrE00EPltKgZ2BiZnPpf7Afx/8Aht4W0PwLrF/q/grStC+IPiTxK/xQ
8P3viSbT9K8PNdqtvp1taaIbqOCeykSWJXkktrtYk3F5YlgzEhH5k16B4I/aX8RfD/wxbaRY
ad8P57W037JNS8CaHqd0252c77i5tJJn5Y43OcDCjAAA/Rb/AIJn3On6v/wzL4I0TX/Cr/8A
CKa74vf4i6Bba/ZL/aeoR/vtLumthL/xNPK8mCSG5gWdIvs6srr5WV/LTXtevvFOuXmp6neX
Wo6lqM73V3d3UzTT3UzsWeSR2JZnZiSWJJJJJoA7XQvDt3+0n4n8W6vf678P/Ct1pGhz648d
1HbaBa6l9mSNBZ2VvbQrC13ICNkSovmFXYnJJPn9fW3/AASw8Tro2h/tAaZfeK/D/h7TfFPw
x1TRLW01jxLaaTBqmqzqFs1CXE0ayOF+0gSYKxCVgzJ5o3e1/wDBPLxPrOm/sKfDTXo/Fdr4
dg8I/Hm3sr3UdR8SwaOlp4ektLa6v7FJZ5ow9vLIqzSWkZPmsm8xsVJDGfm9XsH7Gf7Dnj39
uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv1t8CPj18KPF39v6Dp
vi7wr4H0XwJ+0AfjDYtqkUmnWOpeHIMxi30+NIizXYRU2WhjRmV1CAlXCa37DX/BRvTPiN+3
E3ISrYdWGVJBxwSKya/QHQfE+v8AgX9nH4LeD/gz8Zfh/wCEPEngjxX4itfG+oW/jaz0XTb6
4/tGD7HfzpO8Z1S0NsilXWGcNEpj2kgx18KeO7j7Z441mX7ZpWoebfTv9q0uz+x2NzmRj5kE
HlReVE3VI/Kj2qQNiY2hCNX4V/Cv/haX/CR/8VH4V8Of8I5od1rn/E81D7H/AGp5G3/Q7X5T
5t3Ju/dxcbtrcjFel/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nS
FPPlWMGdoySGONmGPoH/AASw8Tro2h/tAaZfeK/D/h7TfFPwx1TRLW01jxLaaTBqmqzqFs1C
XE0ayOF+0gSYKxCVgzJ5o3eq/sm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1HRxd6HC+p2c
0clkNWbzYEa1Uvu07aDIrMf3wJpjPn/Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLif
U72zu0srgB7G2uYo0W6fyg0siZKswymHOV8N/wDgnL8UfiPofxX1MaVa6Ppvwag1A+IrvULj
EH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/2Wfjd4H+Jep/Aq58H+Jvh/qWg+DPiP4hm1q78W+I
rey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt+8ZgPNP2bvE+g+Nf2jP2u/Glv4r8FWHh
74j+G/GWieG5tY8S2GkT6ndX11HNaqLe6mimRJEORI6LGCGUsGUgAHyT4I/aX8RfD/wxbaRY
ad8P57W037JNS8CaHqd0252c77i5tJJn5Y43OcDCjAAA5Xxv4yu/iB4nudXv4dKgurvZvj03
S7bTLVdqKg2W9tHHCnCjO1Bk5Y5JJNTXtGm8Oa5eafcPayT2E728r2t1FdQMyMVJjmiZo5Ey
OHRmVhggkEGvuv8A4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+ME
OuWUji1kGxpJFihmiEvS3CEfBNFfpZ8FP2nbjwd+zj8I7D4XeJPgoPElh4r16fxld6l4qn8F
6alw+oxSWl69jFd6cby0e2KYVrWcJFCIREhVoTz3hv8Aa+1n4IfsG3Pirwn4t+GkPjDTvi7d
arYaPpWqQQiLw1LLFNLZ2VjJIl/baZPqNtCzWqrFI8I3OvlszFjPh/4V/Cv/AIWl/wAJH/xU
fhXw5/wjmh3Wuf8AE81D7H/ankbf9DtflPm3cm793Fxu2tyMVylfZf8AwTr+KK+Itc/aauL7
WvBXgbTfiF8P9bsrXRZNdtNB0qbVbtibO3tre4nRdkatcojcrCj7WdfMG70D9k34xLpHwN/Z
XtvBHxE8P+DIPC3jLUbr4pWbeMLTw295C+p2ckUl1DPPC1+hsVKhlWYbUMXUFAAfnpRXbftK
az4e8R/tGeP9Q8IJax+E7/xJqNxoqWtqbWBbJ7qRrcRwlVMaeUUwhVdowMDGK+tfgF+1Lr/7
Nv8AwS50PWvDfjDwqvjjw/8AEf7Za6Vc69ZyaonhxhbyT2i2/nC7itJ9StYmmgh8tpUDOwMT
M5Qj4Uor9Nv2A/j/APDbwtofgXWL/V/BWlaF8QfEniV/ih4fvfEk2n6V4ea7VbfTra00Q3Uc
E9lIksSvJJbXaxJuLyxLBmI/Zal+F3iT4D/s5WHif4k/DTQ/E/wX8SanBLa6n4h8o2t//wAJ
HY6mZo5It0E1u+mWd9GlwXNu81zAiuXYFGM/MmivQP2svG+mfEz9qj4l+JNEuftui+IPFeqa
lYXHlvH9ot5ruWSN9rgMuUZThgCM8gGvtb9ln9pW48BfsdfArS/hdrHwq03xJpeu6vJ4y/4S
bxtP4Xjsrhr6B7S5uoIr+zOoxG22Bi0V2NkHlAZDRlCPnT9hO28c/tCfEPSPhV4Rn+D+nalc
wXM2nS+KvA2mag94yB55IjdNp1zMX2CV1MrBQsWwMPkQ+FeN/GV38QPE9zq9/DpUF1d7N8em
6XbaZartRUGy3to44U4UZ2oMnLHJJJ/Tb9iX9pDwF8Ldc+FGvaL4v+Gng/SdT8ZeKLj4rRaT
eJpMF3PK00GhCK1uil42mRrdoYkSPyIAWkmEbxSOnP8A/BObx58PP2efA/w50zV/FnhW40rW
td8RaX8VbHV/GzNY6dLJGllYLBp0d2lpf2k6tH5lz9nu4gpd2mjSHMTGfmnRX3t4E8ceJfDn
7KvwJ8NfCH4reCvh54s8I+JNej8eO3jjTtHtGumv7c2l1dK8wTVLcW6DEkSXUbRoYxuxsr4f
8d3H2zxxrMv2zStQ82+nf7Vpdn9jsbnMjHzIIPKi8qJuqR+VHtUgbExtCEZNFfdfwm+KOv8A
/DHX7Pml/Bn4p+Ffhx4k0DXdbk8b/bPF1n4dj+0SX1s9nc30E8iHUIltlUZWK4GyNosEgx12
3wI+NS2fgX9nCPwR8UPBXh2Dw38QNYvfikuneJLTwdaarDJq9rLFcPZTtZtdW7WKkRqsBCRr
5OxCpiDGfm9RX6Q6v+18vwQ/ZF8ReKvhZ4t8FQ3mnfGvUNV8NaONUtIb2LwfLcxzCzhsfMS8
trKe9trdpbWJYneMFmXymZjb+Cn7X1wn7OPwjm+F03wU8K+JJPFeval4ysNS8XT+D9N0S4n1
GKa0d7OLULU3totsyIA0d4FitxCBlWjYA/NOvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7
gajY3ogkuMSK8KxFDCivujlf/WoDhg4XlPizrMXiP4qeJdQt08PxwX+q3VxEmg2strpSq8zs
BaQyqskdvg/u0dVZU2ggEEV91/8ABOD4vWvhn9ivw14fg8a+CtEef41xXHi3SNc8QabYpqPh
ebS4ra/E1teSKtzburldgVyWUFBvjBVCPz0r3b4Fax8RJ/2ffGfiDQNJ+Gk3hP4WwWk2qXWs
eCtC1G9Zr69EMEQluLOWeZ2d5GBdtqRwsNy/u0b7g+A/x3+EGh/Ej9m/V/DXjLwrpXgH4UeK
/H2nXUd/qy2l1plpql1ImksLe6dbqaJ47iDdMqOsQ3tM6eXIV8K/YP8Ajf4i0H9mb43/AAhT
4wWvgjxY8+iQ+EZr3xmLDStLWLVnOqS2l6svkIhSXzHFu5a4TcY1lwaYz5pg+Gfi39ojwZ8R
/icLXw/baT4Dg0+bW3srK10qBWup47O2igtLaNIw7EM5Koq4ikZm3sofzSvuH9g/476tpv7M
3xv+E2mfGy18Ma7cT6IngjUb3xRcaHpVpBHqznUbm0nn8o26PHKJXjCpPKhbETsrKO1/Zr+K
Nv4Q+FP7Mel+APin4V0Gw8GeOdUk+Jv2fxdD4Zj1e3Oq2jw3M8F3JazX0T2KYUmJyEBiIVlM
YAPzpor9LPhh+1d4Q8PeGNBuPC/jzSvD1ha/tOyLpcEOpDS5NP8ABV08c80SwEo9vpUkiRvJ
EVWEug3ruXj4T/ay/sP/AIao+Jf/AAjP9lf8I3/wleqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2f
LtxjikI8/r0v9nv9l/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx
5Mq4yA5X9Af+CWXiLSfHWh/ss+GfDmueH/I0PVfFd38QfDp1W3s59QvQv2jSrmaxkdJL94hF
bvFNHHL5BtwdyGE7fmn/AIJqeJ11nXP2hdT8S+K/D9nqXi/4Y65okN34k8S2lhPq+q37RtEu
+7mRpHkaOQvJkqpILsu9csZ8k0V+ln7CHx38D6H4Y/Y81fUfGXhXSrX4UX3jDTvFMd/q1vaX
WmS6o5SyYW8jrNNE5uI900KPFEN7SOgjkK1P2dvj3ffB/wDZV+C/hr4ba38H7LxZ4c8Sa3H4
3fXPiA2hWlrdfb4TbXUy2uo2yarbm3C/vFS8jMcAjTPzIwB8PeCP2l/EXw/8MW2kWGnfD+e1
tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOg+JHwR8ReM/wBnOL483Wp+CpNJ1fxIPCtz
puj6cNLnsL1LVpQDaw2sNoqGCJHLQs2TKu75zJt8/wDizrMXiP4qeJdQt08PxwX+q3VxEmg2
strpSq8zsBaQyqskdvg/u0dVZU2ggEEV9w/sLfHfT/gZ+xh8KrR/GXhXRNT1X9oDTdUvoW1a
yOoWWiG2S3ubiRd5ms4mMMkUjt5ZaGR1YmGciQA/P6vS/wBnv9l/Vv2kdD8e3ei6z4fsZ/h7
4bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5X9C7v8AaV0zwFoPhXS/gRrHwU03+y/i
P4sk1z+0vGz+F9NskbWVfTrl4La/s11G0Nl5QBEV2nlQCJBw0Z+dP2D/ABFpOpfFT9p7ULnX
Php4eg8WfD/xHoekpHqtvoGlXd7fzK1tBYQ3rwyLbkRPt3KPKQRiTYWUFCPnT/hqDxn/AMM8
f8Kq+26V/wAIL9u/tT7B/Ydh532vdn7R9p8n7R5u393v8zd5X7rPl/JXn9fpD/wTy8T6zpv7
Cnw016PxXa+HYPCPx5t7K91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFSR1em/t
XaNovgXwTH8ANV+D+kwRfEDxRe6wuueLJ/BtpYQy6ukum3E1lHfWLXtv9hMI2tBchY4BCEUq
8RYz8s6K/Qvw3+19rPwQ/YNufFXhPxb8NIfGGnfF261Ww0fStUghEXhqWWKaWzsrGSRL+20y
fUbaFmtVWKR4RudfLZmPwV478Vf8J1441nW/7N0rR/7Yvp777Bpdv9nsbHzZGfyYI8nZEm7a
i5O1QBk4pCMmvS9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf
/WoDhg4X6r/4J9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUj
i1kGxpJFihmiEvS3HQfspfH/AE/4L/s0eA4X8ffD+01PWv2j7TV76TTbuytGj0RolgubxLfb
HNp1pIYZEO6K3P2eUxsqwzFHYz40+F/7XPjn4LaHDZ+Fbrw/oU9rBc28GrWvhnTF1u3W4WRZ
THqX2f7YjlZXUOswZFIClQABU/4ag8Z/8M8f8Kq+26V/wgv27+1PsH9h2Hnfa92ftH2nyftH
m7f3e/zN3lfus+X8lfpD8JPE66b8PP7e8KeK/D/h3wn4R/akvLK21FfEtpo+nWnhOQx3VzY2
srzRxvZSlY5jaQErLsDiNtuR5/47+NS+I/Avw7j/AGa/ih4K+GkGn/EDxZe66reJLTwlaLDP
q8Uul3F1ZXDRNeW62IjAUQTBY4zDsypioA+KdR/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7ga
jY3ogkuMSK8KxFDCivujlf8A1qA4YOF80r9Fv2Uvj/p/wX/Zo8Bwv4++H9pqetftH2mr30mm
3dlaNHojRLBc3iW+2ObTrSQwyId0Vufs8pjZVhmKP6X8JPE66b8PP7e8KeK/D/h3wn4R/akv
LK21FfEtpo+nWnhOQx3VzY2srzRxvZSlY5jaQErLsDiNtuQWCx+T1Fdt+0prPh7xH+0Z4/1D
wglrH4Tv/Emo3Gipa2ptYFsnupGtxHCVUxp5RTCFV2jAwMYr7r/4J5eJ9Z039hT4aa9H4rtf
DsHhH4829le6jqPiWDR0tPD0lpbXV/YpLPNGHt5ZFWaS0jJ81k3mNipIQj83qK/SHx38al8R
+Bfh3H+zX8UPBXw0g0/4geLL3XVbxJaeErRYZ9Xil0u4urK4aJry3WxEYCiCYLHGYdmVMVZP
7G/xg1GP4eaZoB+K/grwrpreMtR1C38SeDvFVj4RudAuSQPO1LRb6Ozh1fTJn+yyxxonmRwR
SwApgWqgHxp4I/aX8RfD/wAMW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAA
Og+JHwR8ReM/2c4vjzdan4Kk0nV/Eg8K3Om6Ppw0uewvUtWlANrDaw2ioYIkctCzZMq7vnMm
37W/Yu8X6zB+yZ4N8SR+N/D+mweFv2hxa3uuLrMHhvTl8PSQQXd/a2qzm1CWU8irObGONN+w
EwZQhdb4FftB/B7wD8G9ahXVPBVnB4x+POr6j4DuIbmyL+BYbizmtdN8QSaVK6BLe1kQfurl
YhGrJKBlYtzGflnRX6F+G/2mfFv7Ln7BtzqWnfE7wV4o+JXhj4u3VwZR4rtdSvdV0LzYnuhD
mYXpsrzVLWOWVI/LeeNjKwMbs5+CvHfir/hOvHGs63/ZulaP/bF9PffYNLt/s9jY+bIz+TBH
k7Ik3bUXJ2qAMnFIR1Xgj9pfxF8P/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5
wMKMAADq9M/4KG/FrQtPittO8QaVpcdl9r/sw2HhrS7STQPtcKwXP9nPHbK2n+ai5f7IYtzs
8h/eOzn62/4JkfFX4bfBz4V/CxL/AMU+H5NC8Uarr9p8UNP8Q+LJreDTGnhS005I9INzFBdW
86PF5s0lrdKgMjPLEsP7o/Z2+Pd98H/2Vfgv4a+G2t/B+y8WeHPEmtx+N31z4gNoVpa3X2+E
211MtrqNsmq25twv7xUvIzHAI0z8yMxnw9/w1B4z/wCGeP8AhVX23Sv+EF+3f2p9g/sOw877
Xuz9o+0+T9o83b+73+Zu8r91ny/krz+v0W/ZU+OF3qfhiDS4PiR8P/AOi3njnVdWstX8CeLL
bwm3hSd3ypvtG1MWy6xpTv8AZZIkO+eO3hkhLgj7MPz/APHdx9s8cazL9s0rUPNvp3+1aXZ/
Y7G5zIx8yCDyovKibqkflR7VIGxMbQhGTXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttY
e/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj9Af8ABPT4w+FfAf7OdnYfEXxj4KXxDc6rdH4O
HVZYdQPw91U2tykmpXoCSfYbKS7ezKpOGUzRfaRAFQ3A6v4EfF4p4F/ZwiTxr8NNR1jwh8QN
Yu/iZc+LfEGiX93p7Pq9rMbmyn1OR32SQK8vnaYxWSQM+5pstTGfH3iL9ijxV8Ofh5rniPxv
f+H/AAHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5ud23x+v1B1H45eC/i5
Y/BOw8K+IvhVrngfw/8AEfxM3i5PHk+jyX0Oj3XiBLqGVTrn+mP5tlIzNLATIzAh2MqYX86f
jv8A8Ix/wvDxl/whH/Imf27e/wBgf63/AJB/2h/s3+u/e/6rZ/rPn/vc5pCOg0b9l/VpvgI/
xH1zWfD/AIR8PXc81poKaw9wLvxVPCjNMljDDDKzpGyrG00nlwLJKiGUNuC6v7Gf7Dnj39uv
4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv1X+z58VdE8Z/szfsqeH
YdW+D93pPg7xJq1v8QrHxkdAE9hZT6tBcZiXVQJSj2ryEvZ5yV2k70AXW/4J+ftp+EtB/bK0
bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwxn5v
V6X+z3+y/q37SOh+PbvRdZ8P2M/w98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mVcZAcrx
Xju3+x+ONZi+x6Vp/lX06fZdLvPtljbYkYeXBP5svmxL0STzZNygHe+dx+lf+CW1zp+n/wDC
+P7S1/wroX9u/CrWfDWnf21r9lpf23ULzyvs8Mf2mWPdu8l8uPkT5d7LuXKEeKeCP2l/EXw/
8MW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAOV8b+Mrv4geJ7nV7+HSoLq
72b49N0u20y1XaioNlvbRxwpwoztQZOWOSST+hf/AATy8T6zpv7Cnw016PxXa+HYPCPx5t7K
91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFSQeO/jUviPwL8O4/2a/ih4K+GkGn
/EDxZe66reJLTwlaLDPq8Uul3F1ZXDRNeW62IjAUQTBY4zDsypipjPzerq/hX8K/+Fpf8JH/
AMVH4V8Of8I5od1rn/E81D7H/ankbf8AQ7X5T5t3Ju/dxcbtrcjFZXju4+2eONZl+2aVqHm3
07/atLs/sdjc5kY+ZBB5UXlRN1SPyo9qkDYmNo+oP+CWHiddG0P9oDTL7xX4f8Pab4p+GOqa
Ja2mseJbTSYNU1WdQtmoS4mjWRwv2kCTBWISsGZPNG5CPkmvS9R/Zf1ay/ZMsPjFHrPh+78P
XfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/AFqA4YOF+oPhN8Udf/4Y6/Z80v4M/FPwr8OP
Emga7rcnjf7Z4us/Dsf2iS+tns7m+gnkQ6hEtsqjKxXA2RtFgkGOur/ZS+P+n/Bf9mjwHC/j
74f2mp61+0faavfSabd2Vo0eiNEsFzeJb7Y5tOtJDDIh3RW5+zymNlWGYo7GfH/w1/bb+J/w
h8D2Hh7w/wCJvsVho/27+yZX060uL7Qvtsfl3X2G7lia4svMXJb7PJH8xZuGYk5P/DUHjP8A
4Z4/4VV9t0r/AIQX7d/an2D+w7Dzvte7P2j7T5P2jzdv7vf5m7yv3WfL+Sj9rL+w/wDhqj4l
/wDCM/2V/wAI3/wleqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjiv0B/Y8/bg8N/BL9lP9mb
w23jbwrp90l9qDX1vcLa3M2j3D+K7CMyzM6s1ju0a71oCVzEDFLJhs7MAHxT+z3+wf4v/aP+
G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIHP8A7JX7L+rf
ti/Gux8BaBrPh/Rtd1SCeaxOsPcJBdtEhleINDDKVfylkcFwq4jYbtxVW+tv2svHfh//AId4
/Evwz4Z1n4VfY/8AheWqappOk6Xd6L9r/wCEf3yxQ3EEMZ8//X7UR0Hm/ZsAH7LXV/8ABMz4
7+B/gV4H+AV7pvjLwr4TsJdd17/hbS3GrW9nfXty0b22iedFK4uJ7RFulI8hWtomMksuxo5J
FAPhPwR+0v4i+H/hi20iw074fz2tpv2Sal4E0PU7ptzs533FzaSTPyxxuc4GFGAAByvjfxld
/EDxPc6vfw6VBdXezfHpul22mWq7UVBst7aOOFOFGdqDJyxySSamvaNN4c1y80+4e1knsJ3t
5XtbqK6gZkYqTHNEzRyJkcOjMrDBBIINfcHwm+KOv/8ADHX7Pml/Bn4p+Ffhx4k0DXdbk8b/
AGzxdZ+HY/tEl9bPZ3N9BPIh1CJbZVGViuBsjaLBIMdIR8KUV+kPwI+NS2fgX9nCPwR8UPBX
h2Dw38QNYvfikuneJLTwdaarDJq9rLFcPZTtZtdW7WKkRqsBCRr5OxCpiBq/7Xy/BD9kXxF4
q+Fni3wVDead8a9Q1Xw1o41S0hvYvB8tzHMLOGx8xLy2sp722t2ltYlid4wWZfKZmLGfm9RX
6WfBT9r64T9nH4RzfC6b4KeFfEknivXtS8ZWGpeLp/B+m6JcT6jFNaO9nFqFqb20W2ZEAaO8
CxW4hAyrRt+efxZ1mLxH8VPEuoW6eH44L/Vbq4iTQbWW10pVeZ2AtIZVWSO3wf3aOqsqbQQC
CKQi38K/hX/wtL/hI/8Aio/Cvhz/AIRzQ7rXP+J5qH2P+1PI2/6Ha/KfNu5N37uLjdtbkYrl
K+tv+CWHiddG0P8AaA0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZqEuJo1kcL9pAkwViE
rBmTzRu9V/ZN+MS6R8Df2V7bwR8RPD/gyDwt4y1G6+KVm3jC08NveQvqdnJFJdQzzwtfobFS
oZVmG1DF1BQMZ+ele7fArWPiJP8As++M/EGgaT8NJvCfwtgtJtUutY8FaFqN6zX16IYIhLcW
cs8zs7yMC7bUjhYbl/do33B+y58bvgp8P/HGl3ugeJvCv/CtvG/jnxenjbTNX8RT6bY6VaXM
httHjg0Jp4IJrSaCS38ySSzuFiUuXeFYP3XhX7B/xv8AEWg/szfG/wCEKfGC18EeLHn0SHwj
Ne+MxYaVpaxas51SW0vVl8hEKS+Y4t3LXCbjGsuDQB8/fAH4I+Iv+Cgv7SUfhnT9T8FeH/E/
iCCSa1SbThpOnXLW8O5oo4bC1MUT+TG758tFby3JYuwD+P1+m3/BOr4zeAv2ctD+B7aL8QfB
Wj6TF4k8Qw/Fa7XVEsZ9Yn2yWmhStDdeVeTWQW5R1CReRCWeWZY3jkZNb9lr9sfQf2b/ANm/
9nLwXdeOfBVpqWm6rqcGpQrcWGpjSbo+LbGFpmuF81bZH0e61oLcb0jeGV2V2zGaAPyzor9Y
f2evir8Gvg58VIE0jxT4Kk+H3ij4geM7Tx1p974sa30rTLaeZrTSEtNIS5itLqyngeDfMbW6
jRCzNLEkP7r8qde0abw5rl5p9w9rJPYTvbyva3UV1AzIxUmOaJmjkTI4dGZWGCCQQaQipRX3
t/wT7+K+o6f8BPCfhs/Evw/4F01PEl3fW+taD42sfDur+F7nYo3azpl8YIdcspHFrINjSSLF
DNEJeluLfw9/au1/9nn/AIJ4pr/hvx58P77xxonxWmv7W0tNSs7eabw47wyzwW1hmG7tdPud
Stone0hit2aLLNGsTMaYz4+8EftL+Ivh/wCGLbSLDTvh/Pa2m/ZJqXgTQ9Tum3OznfcXNpJM
/LHG5zgYUYAAHK+N/GV38QPE9zq9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJPH
fir/AITrxxrOt/2bpWj/ANsX0999g0u3+z2Nj5sjP5MEeTsiTdtRcnaoAycV+lf7Hn7cHhv4
Jfsp/szeG28beFdPukvtQa+t7hbW5m0e4fxXYRmWZnVmsd2jXetASuYgYpZMNnZhCPy+r6L/
AGSv+Fk/ti6HY/s26B4m8FaNoWqTz6zY2OsaLCgu72JTM7i8hs5blbjyVkxI7r+5jaLftKxt
9g/8Li0HSPG3witvhD8RPBXgzwn4W+LviW68eWeneMLDw3aXmnPr8MlpI8Lzwi/tzpyhY2iW
ZPLTyh02DV/Zg/aQ+E3wt+KngHXvh94v8FeD/Amp/EDxdcfECJbyDSZ7uOWaeDw4GtZily1l
HDdxFUhj+zQEvJKI2ikdGM/N7/hqDxn/AMM8f8Kq+26V/wAIL9u/tT7B/Ydh532vdn7R9p8n
7R5u393v8zd5X7rPl/JXn9W9e0abw5rl5p9w9rJPYTvbyva3UV1AzIxUmOaJmjkTI4dGZWGC
CQQa+6/+CffxX1HT/gJ4T8Nn4l+H/Aump4ku7631rQfG1j4d1fwvc7FG7WdMvjBDrllI4tZB
saSRYoZohL0twhHwTRX6w/8ABO3xXpPxG1z9nLw94c8UeCtQg0jxJ4yvviDpGmXNvo8GtXu4
3OlXo0qQW8l0iCK3lgeO2P2YQIMQmEqn5U69r194p1y81PU7y61HUtRne6u7u6maae6mdizy
SOxLM7MSSxJJJJNAHoGo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX
3Ryv/rUBwwcL6B+z3/wsn9pH9lzx78NdF8TeCrHwb8PdKufH15oeoaLCl7fLbZaa6guo7N5D
cKDHFmSeNjHKsYJiDhfoD/gnB8XrXwz+xX4a8PweNfBWiPP8a4rjxbpGueINNsU1HwvNpcVt
fia2vJFW5t3VyuwK5LKCg3xgrxX7Ftz4H0/44ftVf8I1r/hXQvBmu+BvE3hrwf8A21r9vpf2
37ZcL/Z0Mf26WOVt0UPLv9z5fNZSwyxnzpqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG
9EElxiRXhWIoYUV90cr/AOtQHDBwvoGo/wDCybL/AIJi2F7H4m8FXfwku/GR0mXQodFhGr2O
siKS58+Sd7NXLmBFHmx3LnypUhJCh41+i/8Agnr8avh/8H/2D/Dek+LdX8K2/iTxH8Vprnw9
dSanY3l14HuJtJaztfENxYSSgGK2uY2ytzsChlmGSIt2t+xz8UX8Efs/nwvefFP4f3+q/wDD
QEsnjh9S8Xaf9l8UeHJLFLXUrlxfSJ9vtJ9zEEozOcOo3plQD4e1H9l/VrL9kyw+MUes+H7v
w9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/wDWoDhg4XzSv1B/Zs+NXwY+D/7P99pNrq/h
W3sPEfxy1O5+H91JqdteXXge3msZbPTPENxYXUocxW0kYyt7sKhlm5YR7vze+LNnfad8VPEt
vqevWvirUoNVuo7vWrW9a+g1iYTOHuo7hvmmSVsuJDy4YMetIRz9Ffe37Dn7Slj8O/2TPhFb
z+P7XQ9d0b482VrNE+uLbXdj4YuILea9jYFw6aZJcRh5lOIGkQM+WGa9gu/2ldM8BaD4V0v4
Eax8FNN/sv4j+LJNc/tLxs/hfTbJG1lX065eC2v7NdRtDZeUARFdp5UAiQcNGWM/N/wD8AfE
XxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX1dR/Zf1ay/ZMsPjF
HrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/AFqA4YOF+q/2Sv2odW8UfB79onwF
pnxa8P8AgDXfEGq6Xd+CDB4guPDfhzRoDrUkuovprTmM2dv5c4fyQFnkiyBE7KyjoP8Agn98
TdP8A/sj6P4UsPiJ8P4Y1+OQfxRa6lr1lp9rrnhSTTY7S+ke11FomuLSVGOI2i35AIQSR/KA
fGmo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/rUBwwcL5pX6
g/s2fGr4MfB/9n++0m11fwrb2HiP45anc/D+6k1O2vLrwPbzWMtnpniG4sLqUOYraSMZW92F
QyzcsI935vfFmzvtO+KniW31PXrXxVqUGq3Ud3rVretfQaxMJnD3Udw3zTJK2XEh5cMGPWkI
6vUf2X9Wsv2TLD4xR6z4fu/D134kPhWWxhe4Go2N6IJLjEivCsRQwor7o5X/ANagOGDhT9nv
9l/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5X7A/4Jwf
F618M/sV+GvD8HjXwVojz/GuK48W6RrniDTbFNR8LzaXFbX4mtryRVubd1crsCuSygoN8YK8
V+xbc+B9P+OH7VX/AAjWv+FdC8Ga74G8TeGvB/8AbWv2+l/bftlwv9nQx/bpY5W3RQ8u/wBz
5fNZSwyxnzpqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/wCt
QHDBwuV4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor/cH
/BPX41fD/wCD/wCwf4b0nxbq/hW38SeI/itNc+HrqTU7G8uvA9xNpLWdr4huLCSUAxW1zG2V
udgUMswyRFu5/wDZK+Pniqz+D37RPwpuPj3a6d42utV0seF9evfG81ppRKa1I2q31pfyOqhJ
Vl899hE1wjMwSQ7hQB8E16B4I/aX8RfD/wAMW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ
+WONznAwowAAPuz/AIJzfET4YfAXwP8ADmxn8ceFdY8N69rviLTviWmr+Kru1sYPNjSy01oN
Hmmt0ubS5jaIyTT2U/lqzmR4BAREfsIX2v8Agr9jDwBfP4u0rQf+ED/aAj0rUdUuPF1nY2tt
oJtre41Cygu3uFintJpUEzW8Dus5QSBH27gAfnT438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu
1FQbLe2jjhThRnagycsckkntf2Sv2X9W/bF+Ndj4C0DWfD+ja7qkE81idYe4SC7aJDK8QaGG
Uq/lLI4LhVxGw3biqtlftKaz4e8R/tGeP9Q8IJax+E7/AMSajcaKlram1gWye6ka3EcJVTGn
lFMIVXaMDAxivvb/AIJmfHfwP8CvA/wCvdN8ZeFfCdhLruvf8LaW41a3s769uWje20TzopXF
xPaIt0pHkK1tExkll2NHJIqEfmnRX6Lfs1/El/Avwp/Zj0Xwh8R/CvhD/hCPHOqN8VrW38da
fosd8n9q2jJLOGuY11SI2SFVlh89GRSiscba9g8I/t8/Dn4a+H/hJpnhPxn4K0vw9d+MvEN1
Z2i2dqBosM3jS0WGRkkj3aajaFeauFZxCBDK4GGCYYz83/hr+238T/hD4HsPD3h/xN9isNH+
3f2TK+nWlxfaF9tj8u6+w3csTXFl5i5LfZ5I/mLNwzEnn/APwB8RfEf4SePfG+nx2o8PfDiC
xm1eaacK+68ultreKNBlmdmLtnAULC+WDFFf72+LPxRf+wfBGl/s5/FP4f8Aw4/sD4j+L5PE
P2fxdp/h3TdkmsxvptzPA8iLqFotksYUxRXCeVGYgDjy68+/Yt/aQ1zUPgF8fPhvafGnSvCn
iTVr7R5vBl8+vSeGdDsohrEj6lcWJZYEs4mjmErQRRxyvGWCwsVKAA+X/wBnv9l/Vv2kdD8e
3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5XzSvrb/AIJqW+k+F9c/
aFtLnxd4KtoNT+GOueFdJvdQ1230mDWb26aMWwgF60EhSQQu25kXywV8zyyyg/JNIR6XqP7L
+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC/0f/wDBo7/y
jW0L/sM6h/6VS1+QH/BOD4vWvhn9ivw14fg8a+CtEef41xXHi3SNc8QabYpqPhebS4ra/E1t
eSKtzburldgVyWUFBvjBX9qv+DZT/hGP+GRtV/4Qj/kTP+E213+wP9b/AMg/+0bj7N/rv3v+
q2f6z5/73Oa9HA/BW/w/+3ROTF70/wDF+jPtL4s/8f1x9TXyT8PpVHx98aqSB/xS2jdf+xn8
c/4V9bfFn/j+uPqa/G7/AIK7/tgeOf2OfELax4Gu7S0u9W03SrS5a4tlmDRr4h8duAAenNdO
JV8NFef6HJQdsRL0/wAj5N/4OqSD+3D8L8HP/Fr7X/0+a1RXzl/wVM/aC8R/tQt8B/GfiueC
51zUvhxJHPJDEIkIi8T6/GuFHA+VRRXknpHqENzd/wDDdPwQCM/lDRfhr3/6gOi5r5J/bEz/
AMLsTPX/AIRvw9n/AMEljX314M8K295+1p8Fp2xv/sH4ckfhoOjf4V0Xi79pCf4efDX4HaR8
LdY+FWm+I9Ijt38Zf8JN42n8Lx2Vw1npb2lzdQRX9mdRiNtsDForsbIPKAyGjPTXi1TXyMaL
vUfz/Q/JKiv1h+Cv7fPhX4a/Cf4F6ZF4z+Gmlvd+JNaurm00mzhFhos03jCyXzIkuI/O023b
RrzWRE0whIt5WBw4XB+z18Vfg18HPipAmkeKfBUnw+8UfEDxnaeOtPvfFjW+laZbTzNaaQlp
pCXMVpdWU8Dwb5ja3UaIWZpYkh/dcZ1n5PUV+ln7CF9r/gr9jDwBfP4u0rQf+ED/AGgI9K1H
VLjxdZ2NrbaCba3uNQsoLt7hYp7SaVBM1vA7rOUEgR9u4fBP7Sms+HvEf7Rnj/UPCCWsfhO/
8SajcaKlram1gWye6ka3EcJVTGnlFMIVXaMDAxikI4mivvb/AIJ6fGHwr4D/AGc7Ow+IvjHw
UviG51W6PwcOqyw6gfh7qptblJNSvQEk+w2Ul29mVScMpmi+0iAKhuB23wU/aZ1zwR+zj8I9
N8FeKPhU/j7TvFevTfEK+8TfEWTR4/7QfUYnhvro2+pWw1iJ4cFptt6jpDsXPzIzGfmnXpeo
/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/rUBwwcL9l/s6/tX
aZ4e+EHgi4Hjzwr4e1W1/aPVWg0XUn0u10/wtdCKe8itoJSk9vokk6K7RSKsZKJ5i7147b4J
fF7wz4Z8J+JvD/hLxr8NNE8PT/tLX9xq+kXHiDSbHTtR8FzW/wBmuQLa4kWK5snhcKqIrg7V
MY3RgqAfnn4I/aX8RfD/AMMW2kWGnfD+e1tN+yTUvAmh6ndNudnO+4ubSSZ+WONznAwowAAO
V8b+Mrv4geJ7nV7+HSoLq72b49N0u20y1XaioNlvbRxwpwoztQZOWOSST+oHwH+O/wAIND+J
H7N+r+GvGXhXSvAPwo8V+PtOuo7/AFZbS60y01S6kTSWFvdOt1NE8dxBumVHWIb2mdPLkK+V
fsM+PtX8GfCDw/4Kufid4V8E2uneK76VNc8MfEDTNH1DwzdqFQvq1jcvHa+IdPkkW2lR4ZJm
8mGWNZiCLegD8/q9L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6
OV/9agOGDheK8d3H2zxxrMv2zStQ82+nf7Vpdn9jsbnMjHzIIPKi8qJuqR+VHtUgbExtH3r/
AME4Pi9a+Gf2K/DXh+Dxr4K0R5/jXFceLdI1zxBptimo+F5tLitr8TW15Iq3Nu6uV2BXJZQU
G+MFUI/PSiv1B8H/ALSugeAvhT8PNL/Z31j4VabYaX458Syan/wk3ja88Lx2Vu2qo+l3N1B9
vs5tRiNj5IYzRXZ2QeVjcHjP5vfFnWYvEfxU8S6hbp4fjgv9VuriJNBtZbXSlV5nYC0hlVZI
7fB/do6qyptBAIIoA5+iv1M/4JZeItJ8daH+yz4Z8Oa54f8AI0PVfFd38QfDp1W3s59QvQv2
jSrmaxkdJL94hFbvFNHHL5BtwdyGE7fNP2Ov2sv7E/Zo+F13rXxL+yeL4P2gLb+0Zr3xF5ep
R+HbuKC41DzmeTzBp810nmz7v3Mkq7nywzTGfGngH4A+IviP8JPHvjfT47UeHvhxBYzavNNO
FfdeXS21vFGgyzOzF2zgKFhfLBiiv6r+wnbeOf2hPiHpHwq8Iz/B/TtSuYLmbTpfFXgbTNQe
8ZA88kRum065mL7BK6mVgoWLYGHyIfov4R/tEebY/tXfDHwb8XdK8Cf2j4rtX+G0j+Kf7F0P
SdPXxBO95JY3AdYIIvImR2jgO+aPdsSTBFdB/wAE6vjN4C/Zy0P4HtovxB8FaPpMXiTxDD8V
rtdUSxn1ifbJaaFK0N15V5NZBblHUJF5EJZ5ZljeORkAPzf8b+Mrv4geJ7nV7+HSoLq72b49
N0u20y1XaioNlvbRxwpwoztQZOWOSSTk1+kP7O3x7vvg/wDsq/Bfw18Ntb+D9l4s8OeJNbj8
bvrnxAbQrS1uvt8JtrqZbXUbZNVtzbhf3ipeRmOARpn5kbn/ANkz49eBPF3gebQdV8XfD/wP
deBPjlD8YbhvKuNO0PUtHgjEctvpEZiMzSgqPJtJI0kaN0CglXCAH5/UV237SnxIsfjH+0Z4
/wDF+mRXUGm+KvEmo6xaRXSqs8cNxdSTIsgVmUOFcAgMRnOCetfcH7LP7Stx4C/Y6+BWl/C7
WPhVpviTS9d1eTxl/wAJN42n8Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhoyhH50123gH4
A+IviP8ACTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXywYor/Zf7Jnx68CeL
vA82g6r4u+H/AIHuvAnxyh+MNw3lXGnaHqWjwRiOW30iMxGZpQVHk2kkaSNG6BQSrhLXwG/b
QvvjR4W/ai0DQfixdfDzUvHniSx1vwCPEPidtDg0S1m16a4v2jn83yrdxDcq8scLmSUeZsWX
aaYz89K9L1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOG
DhfqvwJ8QNZ0j9lX4E6B8EfjF4K8Ga74W8Sa8PGd63iyDw3aXkz39u1lfXVteGGW/tzaqpGb
eY+WhiZNwMQ6D9lL4/6f8F/2aPAcL+Pvh/aanrX7R9pq99Jpt3ZWjR6I0SwXN4lvtjm060kM
MiHdFbn7PKY2VYZijgH500V+m3g79pTwr8O7PTbfw/4/8P6HBo37Ulxa6fFp+uQ2yWPg24lS
aeOII4CaPJJGjuq4tmZFY5IBr4J/ay/sP/hqj4l/8Iz/AGV/wjf/AAleqf2T/Zfl/Yfsn2uX
yfI8v5PK8vbs2fLtxjikI9L/AGE7bxz+0J8Q9I+FXhGf4P6dqVzBczadL4q8DaZqD3jIHnki
N02nXMxfYJXUysFCxbAw+RD4V438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQbLe2jjhThRn
agycsckkn9Fv+CZnx38D/ArwP8Ar3TfGXhXwnYS67r3/AAtpbjVrezvr25aN7bRPOilcXE9o
i3SkeQrW0TGSWXY0cki2v2Lvi94N8A2f7JM+s+NfBWnJ8GtV8Z6N4uE3iCzD6dPqMrRWbxr5
m66t5GnjP2m2EsCLvd5FSN2VjPzJor9TP2Wv2x9B/Zv/AGb/ANnLwXdeOfBVpqWm6rqcGpQr
cWGpjSbo+LbGFpmuF81bZH0e61oLcb0jeGV2V2zGa7XTv24Ph/8ABKP4Y+G/Dfjb4f6foqeO
fEjW9vZrY3Nvo6P42tY4pQwVlsYm0O71cJLmJDBK+1s7MAH4/UV+oPiO58J/Gv4xfs26b8ON
f+H8lh8L/jJ4j+26dDr+nab9ktJ/E8F1ZfY7eWWM3UT220x/ZFkU42L8w218Kft2/wDJ7/xk
/wCx51v/ANL56Qjn/APwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAUL
C+WDFFfia+1v+CeX7RGuS/slfGP4Y2nxd/4QTxJqP9gv4Mk1fxTJotjpMS6oz6lJBcM6pB+7
mDyRxHzZl37UkIIr2D9i74zeAvBln+yTeXPxB8Ff2b8H9V8Z6T4luZtUSye3bUpWjsJ47a58
q6lt5vPibzVhKxKXMxi8uTYxn5k16B4I/aX8RfD/AMMW2kWGnfD+e1tN+yTUvAmh6ndNudnO
+4ubSSZ+WONznAwowAAPsH9jr492/wAGP2aPhdo9/wDEHStH8SeF/wBoC2027WHxNCZLPw1L
FBJfoskcpB0qW5iEkhVjbSOgcliAa+Pv2sv7D/4ao+Jf/CM/2V/wjf8Awleqf2T/AGX5f2H7
J9rl8nyPL+TyvL27Nny7cY4pCOg+APwR8Rf8FBf2ko/DOn6n4K8P+J/EEEk1qk2nDSdOuWt4
dzRRw2FqYon8mN3z5aK3luSxdgH8fr9LP+CZnx38D/ArwP8AAK903xl4V8J2Euu69/wtpbjV
rezvr25aN7bRPOilcXE9oi3SkeQrW0TGSWXY0cki9B+y1+2PoP7N/wCzf+zl4LuvHPgq01LT
dV1ODUoVuLDUxpN0fFtjC0zXC+atsj6Pda0FuN6RvDK7K7ZjNMZ+WdFegftZf2H/AMNUfEv/
AIRn+yv+Eb/4SvVP7J/svy/sP2T7XL5PkeX8nleXt2bPl24xxX6A/seftweG/gl+yn+zN4bb
xt4V0+6S+1Br63uFtbmbR7h/FdhGZZmdWax3aNd60BK5iBilkw2dmEI/L6vS/wBkr9l/Vv2x
fjXY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfoD/wuLQdI8bfCK2+
EPxE8FeDPCfhb4u+Jbrx5Z6d4wsPDdpeac+vwyWkjwvPCL+3OnKFjaJZk8tPKHTYNX9mD9pD
4TfC34qeAde+H3i/wV4P8Can8QPF1x8QIlvINJnu45Zp4PDga1mKXLWUcN3EVSGP7NAS8koj
aKR0Yz8nqKt69o03hzXLzT7h7WSewne3le1uorqBmRipMc0TNHImRw6MysMEEgg1+i3/AATI
+Kvw2+Dnwr+FiX/inw/JoXijVdftPihp/iHxZNbwaY08KWmnJHpBuYoLq3nR4vNmktbpUBkZ
5Ylh/dIR+b1dt4B+APiL4j/CTx7430+O1Hh74cQWM2rzTThX3Xl0ttbxRoMszsxds4ChYXyw
Yor/AKAfsXfF7wb4Bs/2SZ9Z8a+CtOT4Nar4z0bxcJvEFmH06fUZWis3jXzN11byNPGftNsJ
YEXe7yKkbsvlX7B/xv8AEWg/szfG/wCEKfGC18EeLHn0SHwjNe+MxYaVpaxas51SW0vVl8hE
KS+Y4t3LXCbjGsuDTGfD1e1/s9/sH+L/ANo/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6
WwtyEZElaNFCm4eGN3mjRXLEgfVf7NfxRt/CHwp/Zj0vwB8U/Cug2HgzxzqknxN+z+LofDMe
r251W0eG5ngu5LWa+iexTCkxOQgMRCspjHP/ALSnxV8M+I/+Cavj/SfCGrfDSPS7/wCNeo65
ouj2p0m11JfDrmRLeeOzIW7jfzyiDKLcLb4Xi2GKAPl/4a/tt/E/4Q+B7Dw94f8AE32Kw0f7
d/ZMr6daXF9oX22Py7r7DdyxNcWXmLkt9nkj+Ys3DMSbX7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2
jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv0t/wT7+K+o6f8BPCfhs/Evw/wCBdNTxJd31
vrWg+NrHw7q/he52KN2s6ZfGCHXLKRxayDY0kixQzRCXpbjV/wCCYP7bcWm/tJeDfBXiX/hT
+leCfBWq69r8vitLiXw1Fc3N1DcRfa1hee2tZnbzo7eFHs/MhtmKpHEEcqAfnpXpeo/sv6tZ
fsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv8A61AcMHC8V47t/sfjjWYv
selaf5V9On2XS7z7ZY22JGHlwT+bL5sS9Ek82TcoB3vncfuv9hb476f8DP2MPhVaP4y8K6Jq
eq/tAabql9C2rWR1Cy0Q2yW9zcSLvM1nExhkikdvLLQyOrEwzkSIR80/s6fEPxFqXhjWtIsN
S+CmiWvhHQ7zXEk8XeFNDnutV8tw5s4Li5spZri7kMh8qJn5ClQVCgVU+APwR8Rf8FBf2ko/
DOn6n4K8P+J/EEEk1qk2nDSdOuWt4dzRRw2FqYon8mN3z5aK3luSxdgH+lv2YvE+g6N+0Z+2
ZpmjeK/BXh7wT4p8N+KNE8P2kniWw0nStUup7qRdMW2SSaOKRBD54SRAY4klwWQSjd23/BMz
47+B/gV4H+AV7pvjLwr4TsJdd17/AIW0txq1vZ317ctG9tonnRSuLie0RbpSPIVraJjJLLsa
OSRWM/NOirevaNN4c1y80+4e1knsJ3t5XtbqK6gZkYqTHNEzRyJkcOjMrDBBIINfqD/wSy8R
aT460P8AZZ8M+HNc8P8AkaHqviu7+IPh06rb2c+oXoX7RpVzNYyOkl+8Qit3imjjl8g24O5D
CdqEfnn/AMNQeM/+GeP+FVfbdK/4QX7d/an2D+w7Dzvte7P2j7T5P2jzdv7vf5m7yv3WfL+S
vP6/SH9nb9rLVtL/AGVfguPAXiX4af8ACbJ4k1u+8e3fjDx3ceHXS9mv4Zbe9vo01C0fU0eA
rvd47vKwmMDIeNqn7Ov7V2meHvhB4IuB488K+HtVtf2j1VoNF1J9LtdP8LXQinvIraCUpPb6
JJOiu0UirGSieYu9eGM/Omiv1Wu/2ldM8BaD4V0v4Eax8FNN/sv4j+LJNc/tLxs/hfTbJG1l
X065eC2v7NdRtDZeUARFdp5UAiQcNGfNPDf7X2s/BD9g258VeE/Fvw0h8Yad8XbrVbDR9K1S
CEReGpZYppbOysZJEv7bTJ9RtoWa1VYpHhG518tmYgHw/wDCv4V/8LS/4SP/AIqPwr4c/wCE
c0O61z/ieah9j/tTyNv+h2vynzbuTd+7i43bW5GK5Svsv/gnX8UV8Ra5+01cX2teCvA2m/EL
4f63ZWuiya7aaDpU2q3bE2dvbW9xOi7I1a5RG5WFH2s6+YN3QfCb4o6//wAMdfs+aX8Gfin4
V+HHiTQNd1uTxv8AbPF1n4dj+0SX1s9nc30E8iHUIltlUZWK4GyNosEgx0hHyp/w1B4z/wCG
eP8AhVX23Sv+EF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+tbx3cfbPHGsy/
bNK1Dzb6d/tWl2f2OxucyMfMgg8qLyom6pH5Ue1SBsTG0fot+wh8d/A+h+GP2PNX1Hxl4V0q
1+FF94w07xTHf6tb2l1pkuqOUsmFvI6zTRObiPdNCjxRDe0joI5CoB+adfReo/8ACybL/gmL
YXsfibwVd/CS78ZHSZdCh0WEavY6yIpLnz5J3s1cuYEUebHcufKlSEkKHjX3b9hnx9q/gz4Q
eH/BVz8TvCvgm107xXfSprnhj4gaZo+oeGbtQqF9Wsbl47XxDp8ki20qPDJM3kwyxrMQRb10
H7DfxmtdE/Zjs9Fg+IPw0tXvPjy134tsrjVNN0XTta8Lz6eltfuLC8+zq9lIrkLCIARhdkSP
EAjGfFOo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFYihhRX3Ryv/rUBwwcL
0Hwr+JnjP4pfs8eI/hV/wmPw/wDDngXw5Y3XjD7BrllYWc2qXcDKdlrc/ZzcS6hIreXGPMDN
ErRbhGNlfYPwN+O/gf4GfCDw5aeC/GXhXRPD+q/tOx6pp0LatbnULLwoR9n+0SLM5urOJo4T
FI8vlu0Mjq5MU7CTif2YvE+g6N+0Z+2ZpmjeK/BXh7wT4p8N+KNE8P2kniWw0nStUup7qRdM
W2SSaOKRBD54SRAY4klwWQSjcAfKn7JX7L+rfti/Gux8BaBrPh/Rtd1SCeaxOsPcJBdtEhle
INDDKVfylkcFwq4jYbtxVW80r9LP+CZnx38D/ArwP8Ar3TfGXhXwnYS67r3/AAtpbjVrezvr
25aN7bRPOilcXE9oi3SkeQrW0TGSWXY0cki/m9r2jTeHNcvNPuHtZJ7Cd7eV7W6iuoGZGKkx
zRM0ciZHDozKwwQSCDSEdr/w1B4z/wCGeP8AhVX23Sv+EF+3f2p9g/sOw877Xuz9o+0+T9o8
3b+73+Zu8r91ny/kqr4B+APiL4j/AAk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbO
AoWF8sGKK/3X/wAEyPir8Nvg58K/hYl/4p8PyaF4o1XX7T4oaf4h8WTW8GmNPClppyR6QbmK
C6t50eLzZpLW6VAZGeWJYf3Xn/7B/wAb/EWg/szfG/4Qp8YLXwR4sefRIfCM174zFhpWlrFq
znVJbS9WXyEQpL5ji3ctcJuMay4NMZ8PUV+ln/BOb4ifDD4C+B/hzYz+OPCuseG9e13xFp3x
LTV/FV3a2MHmxpZaa0GjzTW6XNpcxtEZJp7Kfy1ZzI8AgIi/N7XtGm8Oa5eafcPayT2E728r
2t1FdQMyMVJjmiZo5EyOHRmVhggkEGkIqV1fwr+Ff/C0v+Ej/wCKj8K+HP8AhHNDutc/4nmo
fY/7U8jb/odr8p827k3fu4uN21uRivvb/gnl4n1nTf2FPhpr0fiu18OweEfjzb2V7qOo+JYN
HS08PSWltdX9iks80Ye3lkVZpLSMnzWTeY2Kkjif2RPiB4buvjh+1+nh7xR4V8NeAfHHhTxL
pvhuyvtbtfD9jqNxdXD/ANlpFbXMkIG2EzKpKAQLIVYx+YAzGfH/AMK/hX/wtL/hI/8Aio/C
vhz/AIRzQ7rXP+J5qH2P+1PI2/6Ha/KfNu5N37uLjdtbkYrV8EftL+Ivh/4YttIsNO+H89ra
b9kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAAe7f8EsPE66Nof7QGmX3ivw/4e03xT8MdU0S1
tNY8S2mkwapqs6hbNQlxNGsjhftIEmCsQlYMyeaN3oH/AAT0+MPhXwH+znZ2HxF8Y+Cl8Q3O
q3R+Dh1WWHUD8PdVNrcpJqV6Akn2Gyku3syqThlM0X2kQBUNwAD4f8b+Mrv4geJ7nV7+HSoL
q72b49N0u20y1XaioNlvbRxwpwoztQZOWOSSTk1reO5dUm8cay+t3/8AautNfTtf3329NQ+2
XBkbzJftKO6z7n3N5quwfO4MQc1+i37CHx38D6H4Y/Y81fUfGXhXSrX4UX3jDTvFMd/q1vaX
WmS6o5SyYW8jrNNE5uI900KPFEN7SOgjkKoR8Paj+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3
A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwtTwR+0v4i+H/hi20iw074fz2tpv2Sal4E0PU7ptzs
533FzaSTPyxxuc4GFGAAB9l/safF61/Zp/ZM+G+h3fjXwVpGtzftD2N3qUdv4g027ubTRlgS
1urkyRyO1vbs0EsbTKyCSCRhuaC4/edD8Wfii/8AYPgjS/2c/in8P/hx/YHxH8XyeIfs/i7T
/Dum7JNZjfTbmeB5EXULRbJYwpiiuE8qMxAHHl0xnwnoXh27/aT8T+LdXv8AXfh/4VutI0Of
XHjuo7bQLXUvsyRoLOyt7aFYWu5ARsiVF8wq7E5JJ8/r7L/4J1+L1stc/aasb7xv4K0/TfF3
w/1vR7USazaeHdK13Vbhitm1ta3BtlCbTc7D5KLAku1hF5gU/GlIR6XqP7L+rWX7Jlh8Yo9Z
8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/wCtQHDBwvmlfoD+wt8d9P8AgZ+xh8Kr
R/GXhXRNT1X9oDTdUvoW1ayOoWWiG2S3ubiRd5ms4mMMkUjt5ZaGR1YmGciTq/iz8UX/ALB8
EaX+zn8U/h/8OP7A+I/i+TxD9n8Xaf4d03ZJrMb6bczwPIi6haLZLGFMUVwnlRmIA48umM+X
v2Sv+Fk/ti6HY/s26B4m8FaNoWqTz6zY2OsaLCgu72JTM7i8hs5blbjyVkxI7r+5jaLftKxt
86V+oP7A/wAf/A/wV/4U3qem+Pvh/odhdeK/Ej/Fq4s7u30b+1rl/NttEkFpKsFw2nhbpXSO
CBba33PJKkTRSMn5k69o03hzXLzT7h7WSewne3le1uorqBmRipMc0TNHImRw6MysMEEgg0hH
tfwK1j4iT/s++M/EGgaT8NJvCfwtgtJtUutY8FaFqN6zX16IYIhLcWcs8zs7yMC7bUjhYbl/
do3KQfDPxb+0R4M+I/xOFr4fttJ8BwafNrb2Vla6VArXU8dnbRQWltGkYdiGclUVcRSMzb2U
P9F/8E8v2iNcl/ZK+MfwxtPi7/wgniTUf7BfwZJq/imTRbHSYl1Rn1KSC4Z1SD93MHkjiPmz
Lv2pIQRVv9g/476tpv7M3xv+E2mfGy18Ma7cT6IngjUb3xRcaHpVpBHqznUbm0nn8o26PHKJ
XjCpPKhbETsrKGM+HqK/Uz9lr9s/wr+z3+zf+zl4Qi+IvgqV7LVdTjuZVjhnFnM3i2xh+1Mb
iITWCS6NdayVkmWAmGZ84faB+ef7WX9h/wDDVHxL/wCEZ/sr/hG/+Er1T+yf7L8v7D9k+1y+
T5Hl/J5Xl7dmz5duMcUhFv8AZK/Zf1b9sX412PgLQNZ8P6NruqQTzWJ1h7hILtokMrxBoYZS
r+UsjguFXEbDduKq3mlfpZ/wTM+O/gf4FeB/gFe6b4y8K+E7CXXde/4W0txq1vZ317ctG9to
nnRSuLie0RbpSPIVraJjJLLsaOSReJ+EH7RniX9kr/gmrp9zofjPwU3j/wAIfE55IdMHijTr
+9Hh4GBri1hWK4M4srjUrSJpY7Zl8+MGX5oXMhYz4Jor9Vv2eP8AgoV4b8GfAv8AZ/t28SfD
/wAL3Wq67q9/faPZ2tqbfwzcXHi6yBAVxI2nRf2Nea0iM7RjyJHwxIQ1+dP7WX9h/wDDVHxL
/wCEZ/sr/hG/+Er1T+yf7L8v7D9k+1y+T5Hl/J5Xl7dmz5duMcUhHpfhm28c/DH9iTR/iXpk
/wAH9T8J/wDCSSeFWtL3wNpmpa3Z3pjluyJ5rzTmMieWAwYTybVljQY2sqeFeN/GV38QPE9z
q9/DpUF1d7N8em6XbaZartRUGy3to44U4UZ2oMnLHJJJ/QD/AIJ6/Gr4f/B/9g/w3pPi3V/C
tv4k8R/Faa58PXUmp2N5deB7ibSWs7XxDcWEkoBitrmNsrc7AoZZhkiLd6X8Ff209J+APwn+
BfhPVvip4K1rXbXxJrSazqS3lvq4+2t4wsopL9rqVGe3SfSLrW2W5kMXmQ3Dtu3FMMZ+T1Ff
pt8Lfiva6f4s1/w34Z+JfgrwL4JT4na/faDrXg7xtpvh258Lr9ocQNqWmXRgtdc0yTFnJH5T
SMsEMsSy4xbj83/Hdx9s8cazL9s0rUPNvp3+1aXZ/Y7G5zIx8yCDyovKibqkflR7VIGxMbQh
Ha6j+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwvmlfo
D+wt8d9P+Bn7GHwqtH8ZeFdE1PVf2gNN1S+hbVrI6hZaIbZLe5uJF3maziYwyRSO3lloZHVi
YZyJPVbv9pXTPAWg+FdL+BGsfBTTf7L+I/iyTXP7S8bP4X02yRtZV9OuXgtr+zXUbQ2XlAER
XaeVAIkHDRljPz0/Z7/Zf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjX
bI8eTKuMgOV80r7L/YP8RaTqXxU/ae1C51z4aeHoPFnw/wDEeh6Skeq2+gaVd3t/MrW0FhDe
vDItuRE+3co8pBGJNhZQfVv+CeXifWdN/YU+GmvR+K7Xw7B4R+PNvZXuo6j4lg0dLTw9JaW1
1f2KSzzRh7eWRVmktIyfNZN5jYqSAD83qK/UzTf2rtG0XwL4Jj+AGq/B/SYIviB4ovdYXXPF
k/g20sIZdXSXTbiayjvrFr23+wmEbWguQscAhCKVeI+feG/2vtZ+CH7Btz4q8J+LfhpD4w07
4u3Wq2Gj6VqkEIi8NSyxTS2dlYySJf22mT6jbQs1qqxSPCNzr5bMxAPz0r0v9kr9l/Vv2xfj
XY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrcV478Vf8J1441nW/wCz
dK0f+2L6e++waXb/AGexsfNkZ/JgjydkSbtqLk7VAGTiv0W/4JmfHfwP8CvA/wAAr3TfGXhX
wnYS67r3/C2luNWt7O+vblo3ttE86KVxcT2iLdKR5CtbRMZJZdjRySKhH5p0V97fCD9ozxL+
yV/wTV0+50Pxn4Kbx/4Q+JzyQ6YPFGnX96PDwMDXFrCsVwZxZXGpWkTSx2zL58YMvzQuZD7B
+zx/wUK8N+DPgX+z/bt4k+H/AIXutV13V7++0eztbU2/hm4uPF1kCAriRtOi/sa81pEZ2jHk
SPhiQhpjPyprq/hX8K/+Fpf8JH/xUfhXw5/wjmh3Wuf8TzUPsf8Aankbf9DtflPm3cm793Fx
KjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f+I/iZvFyePJ9HkvodHuvECXU
MqnXP9MfzbKRmaWAmRmBDsZUwvP/APBPz9tPwloP7ZWjeBNOs/hp4a+CvgXxJ4k1zRfEOoap
daPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5U+FX7A+vfErwZ8PdavfFvgrwinxX1W
fR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMcnxF+xR4q+HPw81zxH43v/AA/4Dg0v
Vb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7b7X+xl8WfGHhv4qeHBrXiX4P+
HfhV8OPGVxqs9nrGo6Lq58PwxzR3V5DowuGutSZJBEqwvZFxJMVYSF98le7aj+054W/aWsfg
neaLqPwqk8LQfEfxNf8AjfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT9k
7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4Jv3
Pgfwd/wVdm8ZaNr/AIV8OfCfwvrus/YbzWtft9M2afcW1/DY+Wl7KlxNlTEDhXdNymTbnJ+K
aQj2DSv2KPFWr+HPgpqcd/4fEHx41W50fQFaebfZzQX0dk7XQ8rCIZJVIMZkO0EkA/KfP/iz
8OL74OfFTxL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlfcPwe/ak0f4e/BL
9h/R7fXfh+39meK9UbxSupWWl3914et31+CRJXkuI3l0/dE0kglRoiQgfd8ilfa/gx8c/hR4
F+OEuoR+NvCuq+EPHXxH8bD4ixav4ykWxtkurh7bSzBpYuo7a9tLmKSEyXDW10iqzs0sSw5i
Yz8k69L/AGSv2X9W/bF+Ndj4C0DWfD+ja7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3bi
qt9V/CD9ozxL+yV/wTV0+50Pxn4Kbx/4Q+JzyQ6YPFGnX96PDwMDXFrCsVwZxZXGpWkTSx2z
L58YMvzQuZD6X+wP+1D4H8Cf8Kb8U6b4g+H/AMOrDXfFfiS++LWm2d9b6Xi5n82HRIRbSyfa
H0+FbxdiwB7aD55ZSrRSSKAfl9RX6bfstS/C7xJ8B/2crDxP8Sfhpofif4L+JNTgltdT8Q+U
bW//AOEjsdTM0ckW6Ca3fTLO+jS4Lm3ea5gRXLsCnV/s9ftGfCLSPipB4gs/Gfh/UfBPxJ+I
HjOf4g2/iHxRc2cFnDezNBpXl6NJcQxXFvcQywmWWa0uRGGkMkkIgPlAH5PUV97fCD9ozxL+
yV/wTV0+50Pxn4Kbx/4Q+JzyQ6YPFGnX96PDwMDXFrCsVwZxZXGpWkTSx2zL58YMvzQuZD23
wU/a+uE/Zx+Ec3wum+CnhXxJJ4r17UvGVhqXi6fwfpuiXE+oxTWjvZxaham9tFtmRAGjvAsV
uIQMq0bAH5p0V0HxZ1mLxH8VPEuoW6eH44L/AFW6uIk0G1ltdKVXmdgLSGVVkjt8H92jqrKm
0EAgivtb9hz9pSx+Hf7Jnwit5/H9roeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynED
SIGfLDNIR8E0V+sP7PXxV+DXwc+KkCaR4p8FSfD7xR8QPGdp460+98WNb6VpltPM1ppCWmkJ
cxWl1ZTwPBvmNrdRohZmliSH91lfstftj6D+zf8As3/s5eC7rxz4KtNS03VdTg1KFbiw1MaT
dHxbYwtM1wvmrbI+j3WtBbjekbwyuyu2YzTGflnWt4V8Ca546/tL+xNG1XWP7HsZdUv/ALDa
SXH2G0ix5lxLsB2RJuXc7YVcjJGa/UC7/aV0zwFoPhXS/gRrHwU03+y/iP4sk1z+0vGz+F9N
skbWVfTrl4La/s11G0Nl5QBEV2nlQCJBw0Z+CfDXifTtV+Knxb1B/iJa/DWDVNK1iS0Twnpl
8mleJWkmDJo0MK+XJBZTg/L9oXaiRoJEzwEI8fr3b4VfsD698SvBnw91q98W+CvCKfFfVZ9H
8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8Jr7W/YIun8DeGPhpNafEv4f6r4M8ReK
3n8d+EvE2pafo8ng14XW3j1OymurmK5F2bSdpo7rT/LdHgRC0hj2AA+P/HfgjVPhn441nw3r
dt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr9IfBNv4N+Kl9+yPofww8XeH9Z0n4
S/E7WYbmPWNds9L1U2UviK2ubGUW100Ety81qFYC3ibL7k2hwUHV6T8V9R0/4yeOPDZ+Jfh/
wLpqfF3xNfW+taD42sfDur+F7n7ZIN2s6ZfGCHXLKRxayDY0kixQzRCXpbhjPzp8A/AHxF8R
/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV+Jr7r/Yt/aQ1zUPgF8f
PhvafGnSvCniTVr7R5vBl8+vSeGdDsohrEj6lcWJZYEs4mjmErQRRxyvGWCwsVKDtv2dv2hG
+Ff7KvwX0D4TeJ/g+2u6D4k1seLr3XPGl34VtBMb+FrO+mtvtthLqFu9rsP723uCI4RFsVg8
RAPzeor9Yfgr+3z4V+Gvwn+BemReM/hppb3fiTWrq5tNJs4RYaLNN4wsl8yJLiPztNt20a81
kRNMISLeVgcOFx+b37WX9h/8NUfEv/hGf7K/4Rv/AISvVP7J/svy/sP2T7XL5PkeX8nleXt2
bPl24xxSEef16B/w1B4z/wCGeP8AhVX23Sv+EF+3f2p9g/sOw877Xuz9o+0+T9o83b+73+Zu
8r91ny/kr9Af2PP24PDfwS/ZT/Zm8Nt428K6fdJfag19b3C2tzNo9w/iuwjMszOrNY7tGu9a
AlcxAxSyYbOzHQfs9fFX4NfBz4qQJpHinwVJ8PvFHxA8Z2njrT73xY1vpWmW08zWmkJaaQlz
FaXVlPA8G+Y2t1GiFmaWJIf3TGfm9/w1B4z/AOGeP+FVfbdK/wCEF+3f2p9g/sOw877Xuz9o
+0+T9o83b+73+Zu8r91ny/krz+v0B/Y6+Pdv8GP2aPhdo9/8QdK0fxJ4X/aAttNu1h8TQmSz
8NSxQSX6LJHKQdKluYhJIVY20joHJYgGva/AXx3+Fmh/tAfAjV9K8ZfD/SvCHwo8c/ETTtTj
j1a0tI9Mg1S+mTS2t7ferTWjpcQ4mt0eCJNzO6LG5UA/P79nv9g/xf8AtH/DeTxRpWpeFdIs
LjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkDxSvsvXvER8Of8EZrzwRca58N
JNfsPic93LplrquiXWpNpqQmA3MYidppH+2DaJkLStb4wxtCK9W/4JkfFX4bfBz4V/CxL/xT
4fk0LxRquv2nxQ0/xD4smt4NMaeFLTTkj0g3MUF1bzo8XmzSWt0qAyM8sSw/ugD83q7bwD8A
fEXxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX+wPAnjjxL4c/ZV
+BPhr4Q/FbwV8PPFnhHxJr0fjx28cado9o101/bm0urpXmCapbi3QYkiS6jaNDGN2NlVP2Lf
2kNc1D4BfHz4b2nxp0rwp4k1a+0ebwZfPr0nhnQ7KIaxI+pXFiWWBLOJo5hK0EUccrxlgsLF
SgAPhSiv02/Yu+M3gLwZZ/sk3lz8QfBX9m/B/VfGek+JbmbVEsnt21KVo7CeO2ufKupbebz4
m81YSsSlzMYvLk2ef/CD9ozxL+yV/wAE1dPudD8Z+Cm8f+EPic8kOmDxRp1/ejw8DA1xawrF
cGcWVxqVpE0sdsy+fGDL80LmQgHwTRX6WfBT9r64T9nH4RzfC6b4KeFfEknivXtS8ZWGpeLp
/B+m6JcT6jFNaO9nFqFqb20W2ZEAaO8CxW4hAyrRtz3hv9r7Wfgh+wbc+KvCfi34aQ+MNO+L
t1qtho+lapBCIvDUssU0tnZWMkiX9tpk+o20LNaqsUjwjc6+WzMQD89K92+FX7A+vfErwZ8P
davfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/AB34q/4Trxxr
Ot/2bpWj/wBsX0999g0u3+z2Nj5sjP5MEeTsiTdtRcnaoAycV9gfsEXT+BvDHw0mtPiX8P8A
VfBniLxW8/jvwl4m1LT9Hk8GvC628ep2U11cxXIuzaTtNHdaf5bo8CIWkMewIR8f+O/BGqfD
PxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRWTX6Q+Cbfwb8VL79kfQ/hh4
u8P6zpPwl+J2sw3Mesa7Z6XqpspfEVtc2MotrpoJbl5rUKwFvE2X3JtDgoOr0n4r6jp/xk8c
eGz8S/D/AIF01Pi74mvrfWtB8bWPh3V/C9z9skG7WdMvjBDrllI4tZBsaSRYoZohL0twxn50
+AfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/E191/sW/t
Ia5qHwC+Pnw3tPjTpXhTxJq19o83gy+fXpPDOh2UQ1iR9SuLEssCWcTRzCVoIo45XjLBYWKl
B237O37QjfCv9lX4L6B8JvE/wfbXdB8Sa2PF17rnjS78K2gmN/C1nfTW322wl1C3e12H97b3
BEcIi2KweIgH5vUV+sPwV/b58K/DX4T/AAL0yLxn8NNLe78Sa1dXNppNnCLDRZpvGFkvmRJc
R+dptu2jXmsiJphCRbysDhwuPze/ay/sP/hqj4l/8Iz/AGV/wjf/AAleqf2T/Zfl/Yfsn2uX
yfI8v5PK8vbs2fLtxjikI9B+Bv8AwT21j47/AAg8OeM7Px58P9HsPFHiuPwRZ2upHVBdDWJR
uitXENlIg3xlXEgcxgOAzqwZR4p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJ
VsOrDKkg44JFfYHwa+OV/wDAL/gkzBdeGfEXw/g8cWnxWXxNbaffT6NqeqWlktilql3FZXXm
SRyrdRrtZIxMqZkGIiXPQfCD9sPxb8Fv+Caun+LdO8eeH7/4lWHxOfVja6n4ktbrW59ClME1
1AYjP9tS3udUt43nijKPMu+RgY3ZyxnwTXpf7JX7L+rfti/Gux8BaBrPh/Rtd1SCeaxOsPcJ
BdtEhleINDDKVfylkcFwq4jYbtxVW+1vgR+0YviPwL+zhqPgjxn4K+GkGn/EDWNY+KWh6d4o
tPCVosM+r2txEr2k9xE15brYgxxhRMFjj8rOVK16B+zB+0h8Jvhb8VPAOvfD7xf4K8H+BNT+
IHi64+IES3kGkz3ccs08HhwNazFLlrKOG7iKpDH9mgJeSURtFI6AH5PUV+m37F3xe8G+AbP9
kmfWfGvgrTk+DWq+M9G8XCbxBZh9On1GVorN418zddW8jTxn7TbCWBF3u8ipG7L5/wCBPHHi
Xw5+yr8CfDXwh+K3gr4eeLPCPiTXo/Hjt4407R7Rrpr+3NpdXSvME1S3FugxJEl1G0aGMbsb
KQj4/wDAPwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAULC+WDFFfia+
6/2Lf2kNc1D4BfHz4b2nxp0rwp4k1a+0ebwZfPr0nhnQ7KIaxI+pXFiWWBLOJo5hK0EUccrx
lgsLFSg+H9e06LSNcvLS3v7XVILWd4Yr21WVYLxVYgSxiVEkCMBuAdFbBGVU5AAPdvB3/BNT
4l+Pvgz8NPH+lQ6Vd+FvidrqeH7e7SaVv7DuJL/7BE98ojJjikmBCyR+YPuhtruiN498Wfhx
ffBz4qeJfCGpy2s+peFdVutHu5bVmaCSa3meF2jLKrFCyEglQcYyB0r9Af2W/wBvWb9m74Pf
saaRoHjbw/bWN1quvaT470q51CJoLGyutah2T3ke8G3dInkmilfbgB+WjaRW9L+DHxz+FHgX
44S6hH428K6r4Q8dfEfxsPiLFq/jKRbG2S6uHttLMGli6jtr20uYpITJcNbXSKrOzSxLDmJj
PyTor9TP2Wv2x9B/Zv8A2b/2cvBd1458FWmpabqupwalCtxYamNJuj4tsYWma4XzVtkfR7rW
gtxvSN4ZXZXbMZq1d/tK6Z4C0HwrpfwI1j4Kab/ZfxH8WSa5/aXjZ/C+m2SNrKvp1y8Ftf2a
6jaGy8oAiK7TyoBEg4aMgH5U17t8NP2FLr4i+HPhtdyfEj4aaBqXxXne38PaLqE+pSajMy3z
2KmVLazmSFHnRlR5HCthsH5HC9B4a+NPgT9n/UPEPxN0l/Cvij4vaxrt8/hrTdJ0e4t/DPgN
POYrqaQ3USedKcg2UG1kt1CyTZlVYV9A/Z+0N/DPwJ0fxz4O+IHw/f42/Fm+1C217xR4p8ba
fp2ofDqyNwYJJYo7if7Q13eK0sr3iK8yQ7liQPJ5jgHn7/8ABL/xZo3iHQ9K8QeMvh/4Yv8A
xb4r1Hwf4civpdRm/t67sb1bGZ4jbWcwii+0uI1NwYmOC20LhiXX/BJ34v2PhHwbrc2n6Uth
4t8VyeDLh0umm/4RvUE1NtMH24xowSJrlHVZYjKnCgkM8av9LfCbxT4f+Geg/s+eGPBvxA+F
XiDTPhj8R9bh8Zal4k1DRVkt7RNZtmgvdOXVJHkgims4/OB0xtpfJ3NMC1dVYf8ABRPTPgv4
q/ZyT4e/EPSrjwhrfjnxfZ+J7XU9Ve48vR73xIrW13fi4k8+KUW8jTxz3BWT75Ysryq4B8la
N/wSu8TapZvJP8Qfhppzr8QJvhgIrh9WZ311JWRbceXYOuyRVEiyEhArqHKPuRfnTx34I1T4
Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK/UDwb+1D4T+E3nXf8AwkHw
q1mHxR+1Xea15Oo32nan9m0Sbzbf+11XzGe08qSPzYrn92RtjbLRS4k+H/i78WfB/gP4yeNd
Ij8C+CvihBYeJNTjg8W65rOtXWo+IYftkxjupprTUIYJXdCp8yOJQ/DHJJJALfwR/wCCd3iT
4+/A/TPG+ieMPh/BHrWuy+FrDSdRvbq0vrvWFt3uI7BXe3+zCWaJFMRacRs0saF1kOwZPwZ/
4J/fEv41eB/iV4kttH/sPRfhRY3t1rtxrSy2my4tY2kmsI12FmuwisShACfL5jIXTd9Qfsce
J/CqfseeBrrU/Ffw08EJofx5i+JV3pM/iWHz9P0WysiHEVq00t7I5khMMMJV5pC0bHKN5tc/
+yh440f4mfHD9q7x/wD254V8N6L8UfCni/StAg8ReJ9L0q+uL3ULiKe2gaCa4DLlGwZeYQys
PMypoA+dPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX
2swBQBz5r478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfYH7P3
wsj+BvwJ0fVfAfjb4VWvxj8c32oaJrHiDVvHek2TfDLT4rg2jNZqbgtJLdJ5khvYBI6W42wJ
mUSv4p8Jv2Ov+Fh6h8Zftni3SrLRfg/od/qUuv2Uf23R9Zu4ZhDa2kNyWjUfbH3eQ5yzhflj
Y5AQjxSvoH4G/wDBPbWPjv8ACDw54zs/Hnw/0ew8UeK4/BFna6kdUF0NYlG6K1cQ2UiDfGVc
SBzGA4DOrBlHz9X6Lf8ABOz4gWXgL9jD4eWZ8UfCqwv/APheVl4m1Kz8Qa3oX2qy0OO2ignu
1ivZN9vKskTBGjVbnHzR/K4YtDR8qfBn9hjxJ8XPHHxK8N3OveFfB+tfCexvdS8QW+tS3T+T
b2UjR3jxtaW9wsnkuFBAOW3r5YcBttv4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22sPfyT6
1NDPFbOw+yWs6Qp58qxgztGSQxxswx9r/ZK8T+FdZ/aM/a31PTvFfh+z8PeL/BvirRPDV34k
8Sw2E+rzX90Gsl36hMk0jyJGS8khJUkGVlZxnV+B3ifQfiF8Hv2LjZeK/BVk/wAHfGWpXXi6
HWPEtho8+lQya1aXiSCK7miedDAGYNAJBlWX74K0AeKaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kv
vCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw58K8d+CNU+GfjjWfDet232LWvD99
Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv0A+HPxB0/U/2gP+Eq8M/FX4f+Ivhl8Qvitq+teK
/DXia+stDk8KRfbnjttXsnvbiG8S7ayujPHc2KxSxvCiMXaPYPgn4s6d4e0j4qeJbTwhf3Wq
eE7XVbqHRb26UrPeWSzOLeWQFEIdogjEFF5J+VegQj0D9nv9hzx7+0n8K/HvjfQrO1tPCfw6
0q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3HwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRd
G1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59W/4Jeafa6Rofxw1DU/EXgrQoPE/wx17
wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IIB+wV4J1X4NfGTwT4jTxh8Cn0mPxlbQe
KINW1XQZdR0CGxvI/MmhlvwCUeJ3kjuNMlkD7B84dFAYzlNF/wCCaHjJZfC1r4o8R+CvAere
N/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHOV4O/YL1bXvFmm+GtZ8e/DT
wj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz9ln43eB/iXqfwKufB/
ib4f6loPgz4j+IZtau/FviK3stU0TT5vFtjq1tfQ/wBpzxXMsslpBhplErsktxG37xmA4nU0
0GPVfE3xJ+D/AI9+Gg+IPxo8ZeIfM8S6/wCLLDRLn4b6LJqUsUclva3Eq3K3F3C7SNcrGZoo
MpHEryeYwB8FeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRWTW
t478K/8ACC+ONZ0T+0dK1j+x76ex+36XcfaLG+8qRk86CTA3xPt3I2BuUg4GayaQgooooA9g
/Z7/AGHPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+Pcf
BP8AZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59
W/4Jeafa6Rofxw1DU/EXgrQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4II
HQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseINW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEj
pbjbAmZRK7GfH/jvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV7
B8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY
6vw88TeCv2IrPVPENjrHh/4ifGaw1W70zw99ihe60DwmLeUxjWvMljWO+uJCN9oqBoYxiaQm
QJEv0X8Cvjdp/wAS/hF+yxczeJvh/qWq+DPFepzeObvxb4isrLVNE87xLp2rfbofts8Usssk
UDhpohLuSW4jPzsQAD5f8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwl
Wwt7lLdGnYojTvHv2M65jw58f8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2
HVhlSQccEivvXU00GPVfE3xJ+D/j34aD4g/Gjxl4h8zxLr/iyw0S5+G+iyalLFHJb2txKtyt
xdwu0jXKxmaKDKRxK8nmN8//AA88TeCv2IrPVPENjrHh/wCInxmsNVu9M8PfYoXutA8Ji3lM
Y1rzJY1jvriQjfaKgaGMYmkJkCRKhHKfBP8AZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUb
ibzY4UaWOztrgWqPJKFQ3JiL7WYAoA56vRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvL
ifU72zu0srgB7G2uYo0W6fyg0siZKswymHPpf7P2hv4Z+BOj+OfB3xA+H7/G34s32oW2veKP
FPjbT9O1D4dWRuDBJLFHcT/aGu7xWlle8RXmSHcsSB5PMf0v4T+J9B0b4ffsy+GrLxX8H9Zf
4ReMtX03xdqF/wCJbC0GkQr4n0/UU1DTmu5oXnSWC1bbNAkgaGWaPAdiAxnyp8N/+CcvxR+I
+h/FfUxpVro+m/BqDUD4iu9QuMQfarJWaexgeIOs1wFRj8p8sDaWdfMj3nwq/YH174leDPh7
rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj9Afs3eJ9B8a/tGf
td+NLfxX4KsPD3xH8N+MtE8Nzax4lsNIn1O6vrqOa1UW91NFMiSIciR0WMEMpYMpA6D4FeON
HHwi/ZY0WLXfhVLdfCzxXqdn4yfWfE+l203hxP8AhJdO1Jbuxea4QT74bZlE9r5yPFJOgJLc
AH5/+O/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRWTXoH7WXjfT
PiZ+1R8S/EmiXP23RfEHivVNSsLjy3j+0W813LJG+1wGXKMpwwBGeQDXn9IQUUUUAFFFFABR
RRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUU
AFFFFABRRRQAUUUUAFFFFABRRRQAV/UV/wAGjv8AyjW0L/sM6h/6VS1/LrX9RX/Bo7/yjW0L
/sM6h/6VS16OA+Ct/h/9uicmK3p/4v0Z9+/Fn/j+uPqa/DD/AIL+3gt4LRfW10z/ANP/AI6r
9z/iz/x/XH1NfhT/AMHAtobgWR9LXS//AE/+O66q3+7x9f0OOj/Hl6f5H5rftrnPw2/Z6/7J
3c/+pV4hopf22F2fDf8AZ6H/AFTu5/8AUq8Q0V5L3PSWx+6H7Of7Feh+LtD+BfieWKI3Z8Ie
C58kc5i0fTlH/oAr8Zv2j/2Pr74wfF/4reKF8T+FPCfhj4X6Z4Wi1a81o3rbReadbQQeWlrb
Tu3zx4PyjG5T0yR+0v7JvxluLXTvgfpG/wDdjwb4MTGf72jacf61+WvxDvtP8T+Dv2uvB/8A
b/hXS/Eni7T/AIfnR7PWtfstI/tH7PFDNP5b3UsaHZGMn5u6jqwB9nM1H6vScey/I8vLnJ16
il3f5nyJ+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqCu
903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rM
AUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSlRFl
WBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6TzJ
DewCR0txtgTMolfwj2jzTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB
7G2uYo0W6fyg0siZKswymHNXw/8A8E+tQ1DxD4e8Par8S/hV4c8Z+I9dufDUXhi71G9vNUst
QgvTZGG5FlaTxQb5cFHkkCOjBlYgNj0r9jLxv4w+FnxU8OaRrXxD+D8/wq+EnjK4up77WL3R
dTNtDbzRz3kmjJcRyakUuRAphayiAkmdWG197LV+HXxl8Dy+Mf2l/wBoCzn0qDx9puux6v8A
DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/wS/wDFmjeIdD0rxB4y+H/hi/8A
FvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/4J7ax8JPAGga94/wDH
nw/8Bf8ACR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/+PtQ8ceENX8W
+Pvh/wD8K98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHvWv2
hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laMKAD
5/8AA/8AwT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wkz29zHatPEIbKQwRNNKqr9qEL9S
UUc14p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfZf7Nvxhu
dG/aMtZND8Y/DTTv2d/h78QL/WNHXxZLpd3f6RpSXS3brpsF8kuro80EUQjFtGGa4YHIk8xx
8/fECzsf2tv2jPjF4v0zXvD/AIV02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8Nxko
QfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn
W8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M6
5jw57X/gnZoV/wDD34kfDzxnaeLfgoNBuvFdk3iO18QXmjRax4et7S6iZpVGqIkse+KRnSXT
3ckpgsskaqvbfAE2ujftqR/Ebwj45+D+oeANc+J0l3fnxVqOm/2/pGm2+qealyW1xVuw8ttL
5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/AMP+A4NL1W90KxtNYnme78R39mZFuobF
LaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO
2+RlKoiqzNhjjYjsv3DqPxy8F/Fyx+Cdh4V8RfCrXPA/h/4j+Jm8XJ48n0eS+h0e68QJdQyq
dc/0x/NspGZpYCZGYEOxlTC8/wD8E/P20/CWg/tlaN4E06z+Gnhr4K+BfEniTXNF8Q6hql1o
9yILlbqG2nlNxeRxXdx5M0FsnnwSzpAWxt2u4APlT4VfsD698SvBnw91q98W+CvCKfFfVZ9H
8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswxyfEX7FHir4c/DzXPEfje/wDD/gODS9Vv
dCsbTWJ5nu/Ed/ZmRbqGxS2imEqRSRiJrhiluJJEXzc7tvtf7GXxZ8YeG/ip4cGteJfg/wCH
fhV8OPGVxqs9nrGo6Lq58PwxzR3V5DowuGutSZJBEqwvZFxJMVYSF98le7aj+054W/aWsfgn
eaLqPwqk8LQfEfxNf+N9M8eDw9HfaXpl/wCIEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT9k7
UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4Jv3P
gfwd/wAFXZvGWja/4V8OfCfwvrus/YbzWtft9M2afcW1/DY+Wl7KlxNlTEDhXdNymTbnJ+Ka
QjtvAPwB8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAULC+WDFFfia+1v+
CeX7RGuS/slfGP4Y2nxd/wCEE8Saj/YL+DJNX8UyaLY6TEuqM+pSQXDOqQfu5g8kcR82Zd+1
JCCK9W/4JzfET4YfAXwP8ObGfxx4V1jw3r2u+ItO+Jaav4qu7Wxg82NLLTWg0eaa3S5tLmNo
jJNPZT+WrOZHgEBETGfmnXpeo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb0QSXGJFeFY
ihhRX3Ryv/rUBwwcL9bfsM+PtX8GfCDw/wCCrn4neFfBNrp3iu+lTXPDHxA0zR9Q8M3ahUL6
tY3Lx2viHT5JFtpUeGSZvJhljWYgi3rq/wBjH45fD/4Vfsj2mneLfEXw/uPEnij4yXl54e1W
wnsRH4RuJ9NeztfEp0mTyRHaQXKMyxXMUAjR0lCKViBAPzp8K+BNc8df2l/YmjarrH9j2Muq
X/2G0kuPsNpFjzLiXYDsiTcu52wq5GSM1k17B4aZtA+Knxbt9e+M11oepDStYtpda0eS71WD
x/decAbE3ETKz296wZzPNmMhVZ1O4V4/SEe1/s9/sH+L/wBo/wCG8nijStS8K6RYXGujwto6
axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIB+z3+wf4v/aP+G8nijStS8K6RYXGujwto
6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIHq3/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U
/sn7XpP9s/2X9n8r7R5OftXm/a/k348/yOM/ZqP+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T
9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1MZ5T+z3+wf4v/AGj/AIbyeKNK1LwrpFhca6PC
2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgH7Pf7B/i/9o/4byeKNK1LwrpFhca6P
C2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgerf8ACdx/8OSv+EZ/tn4f/wBsf8LH
/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9mo/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p
/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUAeafCr9gfXviV4M+HutXvi3wV4RT4r6rPo
/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGOrov/BNDxksvha18UeI/BXgPVvG/iS+8
K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn6A/ZN+LOm2vwN/ZXt9B8S/DSMeDv
GWo3Xj2LxbqOji70OF9Ts5o5LIas3mwI1qpfdp20GRWY/vgTXbaD8d/Cev618Frv4e+Mvh/e
+HfD/wAVvEWqeKZvG+radJqWm6fNrsFxa3Fq2uubyPzLICUvZ4dpQ7OTcbjQB8afDf8A4Jy/
FH4j6H8V9TGlWuj6b8GoNQPiK71C4xB9qslZp7GB4g6zXAVGPynywNpZ18yPfU+En7GKfFzS
PArw/FX4VaVrXxDvm03S9Bu7zUJ9UhuBci2RLmO1s5ltvMdkZDK6hkcMDgNt+gP2UL7w3qXx
w/au8Saf4u8K2nhbxz4U8X+HPC9x4i8XWtlfarcXdxFJaBlv7hbpvMjIJnnGCwbe+8NXlX7I
kvhv9n7wB8ZfiDf3/hWT4sfDT+zbXwNZ3l/a31rJez3jwXN/axK7Jey2saCWJ1MkKFllKvhG
ABleFP8Agmp8S/GH/C47i2h0qPRfgh/aceu6xNNKljd3FhvM1taN5e6WUpGzgFVCqU8wxmRA
2TbfsWy2HwJ8D/ELxJ8Rfh/4P0X4h/b/AOxodSTVp7qX7FcfZ5962ljOqYfaRluQ47ggerf8
E2b59S/4X54k8TeLvCtpdeOfhx4h8OW1x4i8XafZX2q6xd/Z5EDLdXCzN5hLEzuPLLBsvkGt
X9lXWvFwg+E/hzxL42/Z/wBf+GXhHxXNp2raD4gn8NT3Xhy0OoRyX7LLfxhporhHeRJrCadX
C4DqyKoAPn/wb+1R4z/Z+87SPBus+Fbf+y/tljZ+I9L8L2EeqSRTebG80Goy2iahHvSR9jF0
lRGCgJgKOg+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM
7RkkMcbMMfsvwf8AtK6B4C+FPw80v9nfWPhVpthpfjnxLJqf/CTeNrzwvHZW7aqj6Xc3UH2+
zm1GI2PkhjNFdnZB5WNweM5XgX41eG/iJ4a/Zxm0rV/gokfgnxzrUnilZdTtdCt/DFvL4p0/
VornSrfUJbecRNbwFIykbkQSSwsBISqgHy9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT
3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDmr4f/wCCfWoah4h8PeHtV+Jfwq8OeM/Eeu3P
hqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgysQGx9l/8NIeGfiF42+EWt+BfF/w0udE0
74u+Jdc8WyeLbzSYbvSrK51+G6t57KPWCtxbI9l+8/4l6p+8DFv34avnT4deO/h54K8Y/tL/
ABk0PWdK1Hxf4U12O5+F0fiC7a6ku/t2qTI2pLb3Z8+7u7a32TI02/y3bzJUZgpUA+Xviz8O
L74OfFTxL4Q1OW1n1Lwrqt1o93LaszQSTW8zwu0ZZVYoWQkEqDjGQOlerfCr9gfXviV4M+Hu
tXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGP0B4E+N/i3xT+yr8
CR8LfjB4f8E+NrLxJr198QrvVvGdrob3V7cX9vLbXuox3MqvqaeRjL+Xc5VGjIYgx11fwn+L
Og+Lfh9+zLHZeJfg/qL/AA/8ZauPF13f6jYaANDhfxPp+qpfadbXbWjqkkFuwXyICFhlmh2I
+UUA/PTx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK6DwD8AfE
XxH+Enj3xvp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRXtftZeN9M+Jn7VHxL
8SaJc/bdF8QeK9U1KwuPLeP7RbzXcskb7XAZcoynDAEZ5ANfS3/BPL9ojXJf2SvjH8MbT4u/
8IJ4k1H+wX8GSav4pk0Wx0mJdUZ9SkguGdUg/dzB5I4j5sy79qSEEUhHyp8K/hX/AMLS/wCE
j/4qPwr4c/4RzQ7rXP8Aieah9j/tTyNv+h2vynzbuTd+7i43bW5GKyvCvgTXPHX9pf2Jo2q6
x/Y9jLql/wDYbSS4+w2kWPMuJdgOyJNy7nbCrkZIzX1X/wAEztWt/B//AA0boM3jrwrZ6Lr/
AMONX8P2LX3iOHR7HX9Tl/d2LxR3rwM+U+0bZHjUxLMQ/lmTB+fvgDfWlj/wmv2v4j6r8OfO
8KX0UP2G1uZ/+EolPl7dHl8hl2RXHO55cxDyxuU5FAHn9e1/s9/sH+L/ANo/4byeKNK1Lwrp
Fhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgeKV9g/8J3H/AMOSv+EZ/tn4
f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/ACOM/ZqAPKf2e/2D/F/7R/w3k8Ua
VqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8U
aVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP/AIclf8Iz
/bPw/wD7Y/4WP/an9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7NR/wncf/AA5K/wCEZ/tn
4f8A9sf8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9mpjPKf2e/2D/F/7R/w3k8Ua
VqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+Hut
Xvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPpf8Awncf/Dkr/hGf
7Z+H/wDbH/Cx/wC1P7J+16T/AGz/AGX9n8r7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/Z
Xt9B8S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5o5LIas3mwI1qpfdp20GRWY/vgTQB8/wCi/wDB
NDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDnK+
G/8AwTl+KPxH0P4r6mNKtdH034NQagfEV3qFxiD7VZKzT2MDxB1muAqMflPlgbSzr5ke/wCy
9B+O/hPX9a+C138PfGXw/vfDvh/4reItU8UzeN9W06TUtN0+bXYLi1uLVtdc3kfmWQEpezw7
Sh2cm43GvKf2UL7w3qXxw/au8Saf4u8K2nhbxz4U8X+HPC9x4i8XWtlfarcXdxFJaBlv7hbp
vMjIJnnGCwbe+8NQB8//AAk/YxT4uaR4FeH4q/CrSta+Id82m6XoN3eahPqkNwLkWyJcx2tn
Mtt5jsjIZXUMjhgcBtut4U/4JqfEvxh/wuO4todKj0X4If2nHrusTTSpY3dxYbzNbWjeXull
KRs4BVQqlPMMZkQNq/siS+G/2fvAHxl+IN/f+FZPix8NP7NtfA1neX9rfWsl7PePBc39rErs
l7LaxoJYnUyQoWWUq+EYdX/wTZvn1L/hfniTxN4u8K2l145+HHiHw5bXHiLxdp9lfarrF39n
kQMt1cLM3mEsTO48ssGy+QaAKnwm0bx78Bv2bfAPjaD4ofB/4Zab45g1e38P39x4WeXxIixT
SWt4RqFppM91C/70hXFwGCSKEK7dq/KmvadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBG
A3AOitgjKqcgfZf7KuteLhB8J/DniXxt+z/r/wAMvCPiubTtW0HxBP4anuvDlodQjkv2WW/j
DTRXCO8iTWE06uFwHVkVR7X4P/aV0DwF8Kfh5pf7O+sfCrTbDS/HPiWTU/8AhJvG154Xjsrd
tVR9LubqD7fZzajEbHyQxmiuzsg8rG4PGQD40+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNt
rD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/HfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ
9nuIZGjkTchKth1YZUkHHBIr9FvAvxq8N/ETw1+zjNpWr/BRI/BPjnWpPFKy6na6Fb+GLeXx
Tp+rRXOlW+oS284ia3gKRlI3IgklhYCQlV7X9nr9oz4RaR8VIPEFn4z8P6j4J+JPxA8Zz/EG
38Q+KLmzgs4b2ZoNK8vRpLiGK4t7iGWEyyzWlyIw0hkkhEB8oA+FPhV+wPr3xK8GfD3Wr3xb
4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHV0X/AIJoeMll8LWvijxH
4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/RfwK8caOPhF+yxo
sWu/CqW6+FnivU7Pxk+s+J9LtpvDif8ACS6dqS3di81wgn3w2zKJ7XzkeKSdASW47bX/ANof
wb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7ln
iIDkhQD4+0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/
KDSyJkqzDKYc+FeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX
6ba/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb
4kk/dyzxEByQv55/tZeN9M+Jn7VHxL8SaJc/bdF8QeK9U1KwuPLeP7RbzXcskb7XAZcoynDA
EZ5ANIRV8A/AHxF8R/hJ498b6fHajw98OILGbV5ppwr7ry6W2t4o0GWZ2Yu2cBQsL5YMUV+J
r7W/4J5ftEa5L+yV8Y/hjafF3/hBPEmo/wBgv4Mk1fxTJotjpMS6oz6lJBcM6pB+7mDyRxHz
Zl37UkIIr1b/AIJzfET4YfAXwP8ADmxn8ceFdY8N69rviLTviWmr+Kru1sYPNjSy01oNHmmt
0ubS5jaIyTT2U/lqzmR4BARExn5p16XqP7L+rWX7Jlh8Yo9Z8P3fh678SHwrLYwvcDUbG9EE
lxiRXhWIoYUV90cr/wCtQHDBwv1t+wz4+1fwZ8IPD/gq5+J3hXwTa6d4rvpU1zwx8QNM0fUP
DN2oVC+rWNy8dr4h0+SRbaVHhkmbyYZY1mIIt66v9jH45fD/AOFX7I9pp3i3xF8P7jxJ4o+M
l5eeHtVsJ7ER+EbifTXs7XxKdJk8kR2kFyjMsVzFAI0dJQilYgQD8067bwD8AfEXxH+Enj3x
vp8dqPD3w4gsZtXmmnCvuvLpba3ijQZZnZi7ZwFCwvlgxRX1Zfg/ffEH4qfEm31P4heCp9S8
KwaprF3rWpa6zQeLJrebDrY3DKWu7i5Zi8QODKMsSK+i/wDgnl+0Rrkv7JXxj+GNp8Xf+EE8
Saj/AGC/gyTV/FMmi2OkxLqjPqUkFwzqkH7uYPJHEfNmXftSQgikI+VPhX8K/wDhaX/CR/8A
FR+FfDn/AAjmh3Wuf8TzUPsf9qeRt/0O1+U+bdybv3cXG7a3IxWV4V8Ca546/tL+xNG1XWP7
HsZdUv8A7DaSXH2G0ix5lxLsB2RJuXc7YVcjJGa+q/8Agmdq1v4P/wCGjdBm8deFbPRdf+HG
r+H7Fr7xHDo9jr+py/u7F4o714GfKfaNsjxqYlmIfyzJg/P3wBvrSx/4TX7X8R9V+HPneFL6
KH7Da3M//CUSny9ujy+Qy7Irjnc8uYh5Y3KcigDz+va/2e/2D/F/7R/w3k8UaVqXhXSLC410
eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA8Ur7B/4TuP8A4clf8Iz/AGz8P/7Y
/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUAeU/s9/sH+L/2j/hvJ4o0rUvCu
kWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSAfs9/sH+L/2j/hvJ4o0rUvC
ukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB6t/wncf8Aw5K/4Rn+2fh/
/bH/AAsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+PP8AI4z9mo/4TuP/AIclf8Iz/bPw/wD7
Y/4WP/an9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7NTGeU/s9/sH+L/2j/hvJ4o0rUvCu
kWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSBb+FX7A+vfErwZ8PdavfFvg
rwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfS/wDhO4/+HJX/AAjP9s/D
/wDtj/hY/wDan9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn+Rxn7NXpX7JvxZ021+Bv7K9voPiX
4aRjwd4y1G68exeLdR0cXehwvqdnNHJZDVm82BGtVL7tO2gyKzH98CaAPn/Rf+CaHjJZfC1r
4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHOV8N/wDgnL8U
fiPofxX1MaVa6Ppvwag1A+IrvULjEH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/2XoPx38J6/rX
wWu/h74y+H974d8P/FbxFqnimbxvq2nSalpunza7BcWtxatrrm8j8yyAlL2eHaUOzk3G415T
+yhfeG9S+OH7V3iTT/F3hW08LeOfCni/w54XuPEXi61sr7Vbi7uIpLQMt/cLdN5kZBM84wWD
b33hqAPn/wCEn7GKfFzSPArw/FX4VaVrXxDvm03S9Bu7zUJ9UhuBci2RLmO1s5ltvMdkZDK6
hkcMDgNt1vCn/BNT4l+MP+Fx3FtDpUei/BD+049d1iaaVLG7uLDeZra0by90spSNnAKqFUp5
hjMiBtX9kSXw3+z94A+MvxBv7/wrJ8WPhp/Ztr4Gs7y/tb61kvZ7x4Lm/tYldkvZbWNBLE6m
SFCyylXwjDq/+CbN8+pf8L88SeJvF3hW0uvHPw48Q+HLa48ReLtPsr7VdYu/s8iBlurhZm8w
liZ3Hllg2XyDQB5TbfsWy2HwJ8D/ABC8SfEX4f8Ag/RfiH9v/saHUk1ae6l+xXH2efetpYzq
mH2kZbkOO4IHj+vadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBGA3AOitgjKqcgfZf7Ku
teLhB8J/DniXxt+z/r/wy8I+K5tO1bQfEE/hqe68OWh1COS/ZZb+MNNFcI7yJNYTTq4XAdWR
VHtfg/8AaV0DwF8Kfh5pf7O+sfCrTbDS/HPiWTU/+Em8bXnheOyt21VH0u5uoPt9nNqMRsfJ
DGaK7OyDysbg8ZAPjT4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6
Qp58qxgztGSQxxswx8f8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQ
ccEiv0W8C/Grw38RPDX7OM2lav8ABRI/BPjnWpPFKy6na6Fb+GLeXxTp+rRXOlW+oS284ia3
gKRlI3IgklhYCQlV7X9nr9oz4RaR8VIPEFn4z8P6j4J+JPxA8Zz/ABBt/EPii5s4LOG9maDS
vL0aS4hiuLe4hlhMss1pciMNIZJIRAfKAPhT4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22s
PfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx1dF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1
ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/RfwK8caOPhF+yxosWu/CqW6+FnivU7Pxk
+s+J9LtpvDif8JLp2pLd2LzXCCffDbMontfOR4pJ0BJbjttf/aH8G/G3xf8ABfWvC/iP4aX2
k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZf
C1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPhXjvwRqn
wz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV+m2v/ALQ/g342+L/gvrXh
fxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJC/nn+1l4
30z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtcBlyjKcMARnkA0hHn9FFFABRRRQA
UUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFF
FABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQA
UUUUAFFFFABX9RX/AAaO/wDKNbQv+wzqH/pVLX8utf1Ff8Gjv/KNbQv+wzqH/pVLXo4D4K3+
H/26JyYren/i/Rn378Wf+P64+pr8PP8AgvUiva2+ev2TTP8A0/8Ajqv3D+LP/H9cfU1+F3/B
fqd4zaBe9ppf/p/8d11Vv93j6/ocVL+PL0/yPzV/bhG34ffs+f8AZO7n/wBSrxDRSftwHPw9
/Z8/7J3c/wDqVeIaK8l7nprY/az9mvwvO1z8D7uNCV/4RHwSScemjab/AIV+RH7R37J+o/Hj
4vfFbxmfFHhXwn4c+HOmeFv7ZvNaN6237bp1tDB5aWttO7fvEwflGNynpkj9yf2MtQ04eB/g
tHPs87/hDPB2M+v9jafivyS+I11pviLwh+134NXX/CmleIvFlh8P/wCxrPWdfstI/tAW8UM0
3lvdSxodkYyfm4yo6sAfYzJf7PS9F+R5WXf7xU9WfIf7Pf7Dnj39pP4V+PfG+hWdraeE/h1p
Vzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6N
qF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr3h
LS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQOg/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1
DRNY8Qat470myb4ZafFcG0ZrNTcFpJbpPMkN7AJHS3G2BMyiV/DPbPNNF/4JoeMll8LWvijx
H4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/wDwT61DUPEP
h7w9qvxL+FXhzxn4j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/
jD4WfFTw5pGtfEP4Pz/Cr4SeMri6nvtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3s
tX4dfGXwPL4x/aX/AGgLOfSoPH2m67Hq/wAN9M8QPbmRJdS1SbzLxbMsRPd2cJWRMGSKJzvZ
X2oQAco//BL/AMWaN4h0PSvEHjL4f+GL/wAW+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLi
NTcGJjgttC4Y5XjL/gntrHwk8AaBr3j/AMefD/wF/wAJHfatptrYakdUvLpLjTLxrO8RzY2V
xENsq8ESEMGBBPOPQP2FPj/4+1Dxx4Q1fxb4++H/APwr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1
jHdLLqnm3BjG02afPcPuDB97j0DwZ8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHh
bTLzWBeXYlXUohKfPinL7rKacZQruVowoAPn/wAD/wDBPbWPGekeCb+Xx58P9Ftfibrt1oXg
x746o3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fz
Ek+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/p
GlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/TNe8P8AhXTZ59a8aWkX
ia9Wxn1CE3TTJYwBd6yXrrKAsQbDFXw3GShB8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru
41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7
UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/AMPfiR8PPGdp4t+Cg0G68V2T
eI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJdPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z
4nSXd+fFWo6b/b+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH4
3v8Aw/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/i
Hc+H/BFnaqmnQfaNR1XUHeHTtMUhvLEsio7b5GUqiKrM2GONiOy/cOo/HLwX8XLH4J2HhXxF
8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DKp1z/TH82ykZmlgJkZgQ7GVMLz/APwT8/bT8JaD
+2Vo3gTTrP4aeGvgr4F8SeJNc0XxDqGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7g
A+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHG
zDHJ8RfsUeKvhz8PNc8R+N7/AMP+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4
kkRfNzu2+1/sZfFnxh4b+Knhwa14l+D/AId+FXw48ZXGqz2esajournw/DHNHdXkOjC4a61J
kkESrC9kXEkxVhIX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/430zx4PD0d9pemX/AIgS
9QKmpkyfPZzOWaxZgWUruLxgAA+FP2RP2TtQ/bJ+J8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilm
l8t4LaZF8uOFifNZM7lC7jkDymvtb/gm/c+B/B3/AAVdm8ZaNr/hXw58J/C+u6z9hvNa1+30
zZp9xbX8Nj5aXsqXE2VMQOFd03KZNucn4ppCPoH4G/8ABPbWPjv8IPDnjOz8efD/AEew8UeK
4/BFna6kdUF0NYlG6K1cQ2UiDfGVcSBzGA4DOrBlGT8Gf2GPEnxc8cfErw3c694V8H618J7G
91LxBb61LdP5NvZSNHePG1pb3CyeS4UEA5bevlhwG2/UH/BP39oPQf2ff2PPhbb6nqnw0uLz
V/jzZ6hd6drdzYXs+maU9kLV9SaJnMti8E0RdJz5bKURiTFJiTlf2Sk0Hw9+0Z+1vbjx74fv
NN1bwb4q8LaHrXiTxZYQz+Jbq6ugLST7RPLGtw8yws7zr+7BYMxXeuWM8f8Agb/wT21j47/C
Dw54zs/Hnw/0ew8UeK4/BFna6kdUF0NYlG6K1cQ2UiDfGVcSBzGA4DOrBlHlOo/A3xhpvjjx
X4b/AOEc1W71rwN9rbX7exgN7/ZSWknl3MsrQ7lWKNxhpc7BkfNgivqv4NfHK/8AgF/wSZgu
vDPiL4fweOLT4rL4mttPvp9G1PVLSyWxS1S7isrrzJI5Vuo12skYmVMyDERLn50+D/ixtc1z
4hanr3xU8QeCtS1fw3qUst3HHd3s/jK6kZGOl3LxOG2XTFi8kxaPKZcHIpCPNKKK/Rb9ln9p
W48BfsdfArS/hdrHwq03xJpeu6vJ4y/4SbxtP4Xjsrhr6B7S5uoIr+zOoxG22Bi0V2NkHlAZ
DRkA+X/hV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJ
DHGzDE8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTv
Hv2M65jw5+tvAvxq8N/ETw1+zjNpWr/BRI/BPjnWpPFKy6na6Fb+GLeXxTp+rRXOlW+oS284
ia3gKRlI3IgklhYCQlV8++D2tRXX7eFx8V/DPjf4KXngfxX8Vp9U1RfEE+k2usaNp8Wreetw
qavHHPD5lvMZEexZnyuGKyxhFYzx/Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU7
2zu0srgB7G2uYo0W6fyg0siZKswymHPj+o/A3xhpvjjxX4b/AOEc1W71rwN9rbX7exgN7/ZS
Wknl3MsrQ7lWKNxhpc7BkfNgiv0WPxu8D/EvU/gVc+D/ABN8P9S0HwZ8R/EM2tXfi3xFb2Wq
aJp83i2x1a2vof7TniuZZZLSDDTKJXZJbiNv3jMB8f8AjT4i6B4+/ao/aA8Saf8AFHVfA2i+
JCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1h
xviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jb
XMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Ae
xtrmKNFun8oNLImSrMMphz9g6/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdC
srnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/8AtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa
9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8ABNDxksvha18UeI/B
XgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPE
fgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v
+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa
/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+J
JP3cs8RAckKAfH2i/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXM
UaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Aext
rmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90Kyuf
F9lrFve2sV9LBKXktYcb4kk/dyzxEByQvlPwp/aY13xn+2Hf+Jx47+GkfwQ8O/E7VvFVvP4s
OkTX9hZPei/nOm2t3G+qRPcRonlraRLm4cfdkDsoB8P+O/BGqfDPxxrPhvW7b7FrXh++n02/
t/MST7PcQyNHIm5CVbDqwypIOOCRXpX7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIb
yxLIqO2+RlKoiqzNhjjYjsv3Zpv7bkXibwL4J1r4N6n8NNG1LVfiB4o1zxPH4w8Zy+FX05rr
V0ubGe+toNRtf7QT7I8Yf5LxQsBiXoyNxP8AwTe/bstJf2rvDPhXXbD4KeFvAPhTXfEPiQ69
ZXVz4cs0nvI7mNZoYri6hhn4nitoEmtnnitTtCpsdgAfnTRX6A6D8Sdfs/2cfgtovwZ+I/w/
+E/iTw34r8RN43tbfx1Z6TpttcSajA1nLOJ7ljqtpHbKqrKv2wNFEU3SHKn4U8d3H2zxxrMv
2zStQ82+nf7Vpdn9jsbnMjHzIIPKi8qJuqR+VHtUgbExtCEe1/A3/gntrHx3+EHhzxnZ+PPh
/o9h4o8Vx+CLO11I6oLoaxKN0Vq4hspEG+Mq4kDmMBwGdWDKLWi/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfoD/gn7+0HoP7Pv7Hnwtt
9T1T4aXF5q/x5s9Qu9O1u5sL2fTNKeyFq+pNEzmWxeCaIuk58tlKIxJikxIfBxbX4a/GSzXQ
fjT4K8f/AA51v4nam/jDTfGGv6ba3eirBeNbQa9a3txcxXEt7NZTm5S+04RsJIkBZymwMZ8v
/Bn9hjxJ8XPHHxK8N3OveFfB+tfCexvdS8QW+tS3T+Tb2UjR3jxtaW9wsnkuFBAOW3r5YcBt
vlXhXwJrnjr+0v7E0bVdY/sexl1S/wDsNpJcfYbSLHmXEuwHZEm5dzthVyMkZr61/YPt/Bvh
f4qftPWmheLvD9t4T1P4f+I/CvhS98Sa7Z6TPrLXUyiwGLloCXkihLO2xVjJG/y9yg/OnwBv
rSx/4TX7X8R9V+HPneFL6KH7Da3M/wDwlEp8vbo8vkMuyK453PLmIeWNynIpCPP692+FX7A+
vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfCa/Qv9
mz4o6D4h+A/7JdtZa18NC/w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkn
i++SAAfP+i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lB
pZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6f
yg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFv
e2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i
90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQrGfH2i/wDBNDxksvha18UeI/BXgPVvG/iS
+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jf
xJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/D
S+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L
/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCg
Hx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlW
YZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSy
JkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX
0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK
58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6Bp
usT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFd
A03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/8AtD+Dfjb4v+C+teF/Efw0vtJ8
N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/8AtD+Dfjb4v+C+
teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2
i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZ
TDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSr
MMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwS
l5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxf
Zaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/wDBNDxksvha18UeI/BXgPVvG/iS+8K6Bpus
T3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA0
3WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38T
vEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfx
H8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BN
Dxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/
AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc
/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1h
xviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW9
7axX0sEpeS1hxviST93LPEQHJCgHxT8N/wDgnL8UfiPofxX1MaVa6Ppvwag1A+IrvULjEH2q
yVmnsYHiDrNcBUY/KfLA2lnXzI9/hNfe37N3ifQfGv7Rn7XfjS38V+CrDw98R/DfjLRPDc2s
eJbDSJ9Tur66jmtVFvdTRTIkiHIkdFjBDKWDKQPhTXtGm8Oa5eafcPayT2E728r2t1FdQMyM
VJjmiZo5EyOHRmVhggkEGkI92+Bv/BPbWPjv8IPDnjOz8efD/R7DxR4rj8EWdrqR1QXQ1iUb
orVxDZSIN8ZVxIHMYDgM6sGUavg3/glv448Y+B5tT/4Sb4f6VqsPiu88Cf2DqOo3EF8/iC3j
lk/sxZfINoZZVi/dP9o8p2kjTzA7bR7r/wAE/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnqF3p2t
3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJOr+F914N8J/CuG2uPiz4KuLHwl+0tc/EGX
Udb8YWd3quqaFp8MiG+aOJ2nubi4eDCKkReZ5kcL5bh6Yz4p/Z7/AGMde+P+uePdMOt+H/Be
pfDbSrnW9ctPEkV/BPb2tqxW7bZBbTMHgbaHjYLJlwFVsNt818K+BNc8df2l/YmjarrH9j2M
uqX/ANhtJLj7DaRY8y4l2A7Ik3LudsKuRkjNfZf7JXxR0H4p/tGftb+LxrXh/wANab8R/Bvi
qy0OLxJrthpE9zdapdCa0t8TzqpcqjB2VmjjIG5xuUt8vfAG+tLH/hNftfxH1X4c+d4Uvoof
sNrcz/8ACUSny9ujy+Qy7Irjnc8uYh5Y3KcikI8/r3b4VfsD698SvBnw91q98W+CvCKfFfVZ
9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8Jr9C/2bPijoPiH4D/ALJdtZa18NC/
w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SAAfP8Aov8AwTQ8ZLL4
WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/4JoeMl
l8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD
+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs
8RAckKa/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXk
tYcb4kk/dyzxEByQrGfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD
2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0sr
gB7G2uYo0W6fyg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i
90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvE
l9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wAE0PGSy+FrXxR4
j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4
o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/AO0P4N+N
vi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEBy
Qpr/AO0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYc
b4kk/dyzxEByQoB8faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2N
tcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB
7G2uYo0W6fyg0siZKswymHP2Dr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3Qr
K58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c
+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/+0P4N+Nvi/wCC
+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/+
0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP
3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot
0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo
0W6fyg0siZKswymHP2Dr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9
lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1
iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQoB8faL/wAE0PGSy+FrXxR4j8FeA9W8
b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAe
reN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/AO0P4N+Nvi/4L614
X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQpr/AO0P
4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyz
xEByQoB8faL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/
lBpZEyVZhlMOfCvHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr
9Ntf/aH8G/G3xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrD
jfEkn7uWeIgOSF/PP9rLxvpnxM/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4
YAjPIBpCPP6KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/4NHf+Ua2hf9hnUP8A0qlr+XWv
6iv+DR3/AJRraF/2GdQ/9Kpa9HAfBW/w/wDt0TkxW9P/ABfoz79+LP8Ax/XH1Nfhf/wXy/4/
LP8A689K/wDT/wCPK/dD4s/8f1x9TX4a/wDBeexe8vbTYM/6HpX/AKf/AB3XTX/3ePr+hx0f
48vT/I/NT9vD/kRv2ff+ydXH/qU+IaKn/b3sXt/Bf7PyMOR8Orj/ANSnxDRXlPc9JbH0VZrn
9s/4G/8AYD+G/wD6YdGrwT4mfsnah8d/EHjrxl/wlHhXwp4b+HPhzwd/bF5rRvWx9t0q0hg8
tLW2ndv3iYPyjG5T0yR79aJ/xmd8Df8AsB/Df/0xaNXLWNzp/iD4A/tL+DP7f8K6V4k8V+HP
hr/Y9nrWv2Wkf2h9ntbeafy3upY0OyMZPzd1HVgD1Yj+HH5GNH+I/mfMX7Pf7Dnj39pP4V+P
fG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfD
Tw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wS80+10jQ/jhqGp+
IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIwBxKVEWVYFwQQOg/Z++FkfwN+BOj6r4
D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcFpJbpPMkN7AJHS3G2BMyiV+E7DzTRf+
CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHNX
w/8A8E+tQ1DxD4e8Par8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQb5cFHkkCOj
BlYgNj0r9jLxv4w+FnxU8OaRrXxD+D8/wq+EnjK4up77WL3RdTNtDbzRz3kmjJcRyakUuRAp
hayiAkmdWG197LV+HXxl8Dy+Mf2l/wBoCzn0qDx9puux6v8ADfTPED25kSXUtUm8y8WzLET3
dnCVkTBkiic72V9qEAHKP/wS/wDFmjeIdD0rxB4y+H/hi/8AFvivUfB/hyK+l1Gb+3ruxvVs
ZniNtZzCKL7S4jU3BiY4LbQuGOV4y/4J7ax8JPAGga94/wDHnw/8Bf8ACR32raba2GpHVLy6
S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/+PtQ8ceENX8W+Pvh/wD8K98HeK5fEWpXPja7
0fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gxdfEfVL
+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laMKAD5/8AA/8AwT21jxnpHgm/l8ef
D/RbX4m67daF4Me+OqN/wkz29zHatPEIbKQwRNNKqr9qEL9SUUc14p478Eap8M/HGs+G9btv
sWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfZf7NvxhudG/aMtZND8Y/DTTv2d/h78QL
/WNHXxZLpd3f6RpSXS3brpsF8kuro80EUQjFtGGa4YHIk8xx8/fECzsf2tv2jPjF4v0zXvD/
AIV02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8NxkoQfBP9kJvjVZ+FAvxG+Gnh7Vv
G+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDnW8HfsF6tr3izTfDWs+Pfhp4R
8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M65jw57X/gnZoV/wDD34kfDzxn
aeLfgoNBuvFdk3iO18QXmjRax4et7S6iZpVGqIkse+KRnSXT3ckpgsskaqvbfAE2ujftqR/E
bwj45+D+oeANc+J0l3fnxVqOm/2/pGm2+qealyW1xVuw8ttL5qzWkjysV/eMJowoYz5+8Rfs
UeKvhz8PNc8R+N7/AMP+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2
n7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPxy8
F/Fyx+Cdh4V8RfCrXPA/h/4j+Jm8XJ48n0eS+h0e68QJdQyqdc/0x/NspGZpYCZGYEOxlTC8
/wD8E/P20/CWg/tlaN4E06z+Gnhr4K+BfEniTXNF8Q6hql1o9yILlbqG2nlNxeRxXdx5M0Fs
nnwSzpAWxt2u4APlT4VfsD698SvBnw91q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Q
p58qxgztGSQxxswxyfEX7FHir4c/DzXPEfje/wDD/gODS9VvdCsbTWJ5nu/Ed/ZmRbqGxS2i
mEqRSRiJrhiluJJEXzc7tvtf7GXxZ8YeG/ip4cGteJfg/wCHfhV8OPGVxqs9nrGo6Lq58Pwx
zR3V5DowuGutSZJBEqwvZFxJMVYSF98le7aj+054W/aWsfgneaLqPwqk8LQfEfxNf+N9M8eD
w9HfaXpl/wCIEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT9k7UP2yfifF4N0LxR4V0LxJfbv7
Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4Jv3Pgfwd/wAFXZvGWja/4V8OfCfw
vrus/YbzWtft9M2afcW1/DY+Wl7KlxNlTEDhXdNymTbnJ+KaQj6B+Bv/AAT21j47/CDw54zs
/Hnw/wBHsPFHiuPwRZ2upHVBdDWJRuitXENlIg3xlXEgcxgOAzqwZRq+CP8Agl/4s8V+GLa/
v/GXw/8ADN1P45f4bPpupS6jJdW3iAOyiydrazmhO4AMJVkaLDDLg5Ue6/8ABP39oPQf2ff2
PPhbb6nqnw0uLzV/jzZ6hd6drdzYXs+maU9kLV9SaJnMti8E0RdJz5bKURiTFJiT0H4A+NNE
+GPw8j0vTfiP8NNbSD9paTxFPfeKvFGgX2o3fh2E/Z5NWLXcm5bh2hLrNEqXDbvMi+SQEsZ8
k+CP+CX/AIs8V+GLa/v/ABl8P/DN1P45f4bPpupS6jJdW3iAOyiydrazmhO4AMJVkaLDDLg5
UeKaj8DfGGm+OPFfhv8A4RzVbvWvA32ttft7GA3v9lJaSeXcyytDuVYo3GGlzsGR82CK+4PF
/wC0ha/Cv9iTxvr3w18X+CrrVpvjzf8AjDQotYvNN1nWxpBjMFvqAtdRMt2tx56xkSNGLoKW
kJ2Mzn4/+D/ixtc1z4hanr3xU8QeCtS1fw3qUst3HHd3s/jK6kZGOl3LxOG2XTFi8kxaPKZc
HIpCPNK9r/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dw
xu80aK5YkDxSvsH/AITuP/hyV/wjP9s/D/8Atj/hY/8Aan9k/a9J/tn+y/s/lfaPJz9q837X
8m/Hn+Rxn7NQB5T+z3+wf4v/AGj/AIbyeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZE
laNFCm4eGN3mjRXLEgH7Pf7B/i/9o/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZ
ElaNFCm4eGN3mjRXLEgerf8ACdx/8OSv+EZ/tn4f/wBsf8LH/tT+yftek/2z/Zf2fyvtHk5+
1eb9r+Tfjz/I4z9mo/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc/avN
+1/Jvx5/kcZ+zUxnlP7Pf7B/i/8AaP8AhvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3I
RkSVo0UKbh4Y3eaNFcsSBb+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsP
slrOkKefKsYM7RkkMcbMMfS/+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5X2jy
c/avN+1/Jvx5/kcZ+zV6V+yb8WdNtfgb+yvb6D4l+GkY8HeMtRuvHsXi3UdHF3ocL6nZzRyW
Q1ZvNgRrVS+7TtoMisx/fAmgD5/0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s
7tLK4AextrmKNFun8oNLImSrMMphzlfDf/gnL8UfiPofxX1MaVa6Ppvwag1A+IrvULjEH2qy
VmnsYHiDrNcBUY/KfLA2lnXzI9/2XoPx38J6/rXwWu/h74y+H974d8P/ABW8Rap4pm8b6tp0
mpabp82uwXFrcWra65vI/MsgJS9nh2lDs5NxuNeU/soX3hvUvjh+1d4k0/xd4VtPC3jnwp4v
8OeF7jxF4utbK+1W4u7iKS0DLf3C3TeZGQTPOMFg2994agD5/wDhJ+xinxc0jwK8PxV+FWla
18Q75tN0vQbu81CfVIbgXItkS5jtbOZbbzHZGQyuoZHDA4DbdbTP+CfWoTfEiLwfqPxL+FWi
+Kb3xXd+D7DS5tRvbu6vLu3ultC7Ja2kxtopJm2xm78lnClwuz5q1f2RJfDf7P3gD4y/EG/v
/CsnxY+Gn9m2vgazvL+1vrWS9nvHgub+1iV2S9ltY0EsTqZIULLKVfCMOg/YKOu6N8ZPBPxG
uPHPwf1Cz1zxlbXfi4+KtR0j+39IW3vI5ZrktqyrMHlSWSVZrCR2Yr8zCWMKADyn4C/sUeKv
jz+1Hd/B6O/8P+GfG1pPe2TQaxPMYHurPf59uJbaKZd6rFMwY4jIibDklA1v9nv9g/xf+0f8
N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPvb9mD9pT4Xf
DP4qeAfEPgnx/wCH9E8H6t8QPF118RrnUtc8nVdXE008Hh6S6F64v7m3EV3G5YB4Y3aWafbJ
HLIvzTr3iI+HP+CM154IuNc+Gkmv2HxOe7l0y11XRLrUm01ITAbmMRO00j/bBtEyFpWt8YY2
hFAHinwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WY
AoA56vRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0si
ZKswymHPoH7MPgW1+GH7NvhrxL8O/Gfw0sfi38R573TdT13X/GOm6Rc/DDTVm+zE28E0wm+0
XSGR2uo0aWOAFIow0nmMfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaOe8k
0ZLiOTUilyIFMLWUQEkzqw2vvZQD5K8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaOR
NyEq2HVhlSQccEivVbb9i2Ww+BPgf4heJPiL8P8AwfovxD+3/wBjQ6kmrT3Uv2K4+zz71tLG
dUw+0jLchx3BA+4dN/bci8TeBfBOtfBvU/hpo2par8QPFGueJ4/GHjOXwq+nNdaulzYz31tB
qNr/AGgn2R4w/wAl4oWAxL0ZG8e+CHxM8RfETxj4AXVfGP7NWrfCPT/HN79t8OXllodlH4Y0
+bVEuLz7LDrNvFdLaTxys8X2dpHCIqN5ckflqAeK+B/+Ce2seM9I8E38vjz4f6La/E3XbrQv
Bj3x1Rv+Eme3uY7Vp4hDZSGCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/
mJJ9nuIZGjkTchKth1YZUkHHBIr9IND+KPw58Q237P1t8PNa+Gh8H/Dr4ga9Hf8A/CR67a6Z
e+GtKbxXYapZ3VqmozxXDu1lbhfMRZWMck8TfvCwHwT+1l430z4mftUfEvxJolz9t0XxB4r1
TUrC48t4/tFvNdyyRvtcBlyjKcMARnkA0hHoPwN/4J7ax8d/hB4c8Z2fjz4f6PYeKPFcfgiz
tdSOqC6GsSjdFauIbKRBvjKuJA5jAcBnVgyi1ov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCuga
brE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+gP+Cfv7Qeg/s+/sefC231PVPhpcXmr/
AB5s9Qu9O1u5sL2fTNKeyFq+pNEzmWxeCaIuk58tlKIxJikxJ2x+InhPUNT+BWm+H/HHw/8A
Fdh8M/iP4ht/Eet+LfFWnRapZWh8W2Opw6pDLczQm5lntrYlri3WRXSa4T7zEBjPknRf+CaH
jJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHPj+o/
A3xhpvjjxX4b/wCEc1W71rwN9rbX7exgN7/ZSWknl3MsrQ7lWKNxhpc7BkfNgiv0h/4aQ8M/
ELxt8Itb8C+L/hpc6Jp3xd8S654tk8W3mkw3elWVzr8N1bz2UesFbi2R7L95/wAS9U/eBi37
8NXxTq3izw1rn7Rnx01PR/ip4g8FeHtXg1+XRbuOPUb2fxlDJdFrfS7ly4m2XSEF5LosMpmQ
FjSEeE17X+z3+wf4v/aP+G8nijStS8K6RYXGujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHh
jd5o0VyxIHilfYP/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k
348/yOM/ZqAPKf2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJ
K0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIy
JK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc
/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb
9r+Tfjz/ACOM/ZqYzyn9nv8AYP8AF/7R/wAN5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLY
W5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQPVv+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T9r0n+2f7L+z+
V9o8nP2rzftfyb8ef5HGfs1H/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP9s/2X9n8r7R5
OftXm/a/k348/wAjjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJd
LYW5CMiStGihTcPDG7zRorliQPVv+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5
X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R
5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0t
hbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4r
Z2H2S1nSFPPlWMGdoySGONmGPpf/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7XpP9s/2X9n8r
7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5
o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE
+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnv
LifU72zu0srgB7G2uYo0W6fyg0siZKswymHP1sfjd4H+Jep/Aq58H+Jvh/qWg+DPiP4hm1q7
8W+Irey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt+8ZgPNPhT+0xrvjP9sO/8Tjx38NI
/gh4d+J2reKrefxYdImv7Cye9F/OdNtbuN9Uie4jRPLW0iXNw4+7IHZQDwrw/wD8E+tQ1DxD
4e8Par8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQb5cFHkkCOjBlYgNjVf/AIJf
+LNG8Q6HpXiDxl8P/DF/4t8V6j4P8ORX0uozf29d2N6tjM8RtrOYRRfaXEam4MTHBbaFwx6v
4dfGXwPL4x/aX/aAs59Kg8fabrser/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEe
q/s1/HeLX/hT+zHd2njL4f3uo+H/ABzqmqfEibxvq2kyalpvnaraXAuLVtYczx+ZbhpS+n43
Sh2JM+40Afn/AOO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRXs
Hwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj
9V+O/jfN4p8C/Dsfs7fGDw/4Jey+IHiy+8R3ereM4tDe6W41eKXTb3UY7+VbjU0+xeXl3juW
Ko0bBnDR0fCf4s6D4t+H37Msdl4l+D+ov8P/ABlq48XXd/qNhoA0OF/E+n6ql9p1tdtaOqSQ
W7BfIgIWGWaHYj5RQD5U+G//AATl+KPxH0P4r6mNKtdH034NQagfEV3qFxiD7VZKzT2MDxB1
muAqMflPlgbSzr5ke/wmvvb9m7xPoPjX9oz9rvxpb+K/BVh4e+I/hvxlonhubWPEthpE+p3V
9dRzWqi3upopkSRDkSOixghlLBlIHwpr2jTeHNcvNPuHtZJ7Cd7eV7W6iuoGZGKkxzRM0ciZ
HDozKwwQSCDSEe7fA3/gntrHx3+EHhzxnZ+PPh/o9h4o8Vx+CLO11I6oLoaxKN0Vq4hspEG+
Mq4kDmMBwGdWDKLWi/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbX
MUaLdP5QaWRMlWYZTDn6A/4J+/tB6D+z7+x58LbfU9U+Glxeav8AHmz1C707W7mwvZ9M0p7I
Wr6k0TOZbF4Joi6Tny2UojEmKTEnbH4ieE9Q1P4Fab4f8cfD/wAV2Hwz+I/iG38R634t8Vad
FqllaHxbY6nDqkMtzNCbmWe2tiWuLdZFdJrhPvMQGM+SdF/4JoeMll8LWvijxH4K8B6t438S
X3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc+P6j8DfGGm+OPFfhv/AIRzVbvW
vA32ttft7GA3v9lJaSeXcyytDuVYo3GGlzsGR82CK+y/hT+0xrvjP9sO/wDE48d/DSP4IeHf
idq3iq3n8WHSJr+wsnvRfznTbW7jfVInuI0Ty1tIlzcOPuyB2X5/1b4raN8Vv2jPjp4vt/iF
4g+GWm+MINf1PToo7OeSfxILm6M0Wi3ItpNsaTq2HZ2eFTHyG4NIR4TXu3wq/YH174leDPh7
rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4TX6F/s2fFHQfEP
wH/ZLtrLWvhoX+HXiTUI/F3/AAkeu2GmXvhqFvEem6ol1apdzxO7tBbsvmQLKDHJPF98kAA+
f9F/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzD
KYcmi/8ABNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRM
lWYZTDn7B1/9ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+l
glLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXP
i+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IVjPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTd
YnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6B
pusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/wDaH8G/G3xf8F9a8L+I/hpfaT4b
+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/wDaH8G/G3xf8F9a
8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF
/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswym
HJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWY
ZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJ
aw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt
721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLif
U72zu0srgB7G2uYo0W6fyg0siZKswymHJov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE9
5cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8ABfWvC/iP4aX2k+G/id4k
vteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8ABfWvC/iP
4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaH
jJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov8A
wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+
wdf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDj
fEkn7uWeIgOSFNf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt72
1ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU7
2zu0srgB7G2uYo0W6fyg0siZKswymHNXxl/wT21j4SeANA17x/48+H/gL/hI77VtNtbDUjql
5dJcaZeNZ3iObGyuIhtlXgiQhgwIJ5x9l6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJfa9c+
I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckL59pPxq1X42/tC6NqLePfgV4o+CB
+J2r3TaP4qg0GDUdC0q61n7VcyFNXgjudlzDMZFa2eRvl2HZJGI1APl/4YfsYp8UtQ0Gztfi
r8KrO/8AFuuyaFoNnNeahNdao6zRwRztFb2cr2kU0kgEf21bd2ALbAo3Vb8HfsF6tr3izTfD
Ws+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv2M65jw5+gPC/gXwb8M
PDl74l+AnjP4aWPif4j+JNa02z13X/GNnpFz8MPDq30ltbm3gu5hd/aLq2O9rrY1xHACkcYe
TzGyvg58G7X9nX4SWd18O/iL8H7r4t+L9V1Pw/qfi288c6bYRfD7TYLprMz2CTSrM73qCSUX
scZlS2O2KJWl8xgDynRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2
uYo0W6fyg0siZKswymHPhXjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sM
qSDjgkV+hfwn8T6Do3w+/Zl8NWXiv4P6y/wi8Zavpvi7UL/xLYWg0iFfE+n6imoac13NC86S
wWrbZoEkDQyzR4DsQPh79rLxvpnxM/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLl
GU4YAjPIBpCPP6KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKK
KACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/4NHf8AlGtoX/YZ1D/0qlr+
XWv6iv8Ag0d/5RraF/2GdQ/9Kpa9HAfBW/w/+3ROTFb0/wDF+jPv34s/8f1x9TX4w/8ABZ2y
iu76HzMcWemYz/2MHjmv2e+LP/H9cfU1+IP/AAXS12TR7+12d7PS8/8Ag/8AHf8AhXXV/gR9
f0OKnrWl6f5H55/8FIbWOHSvgMq4wPh1Lj/wp9forE/4KB6w194U+AEp6t8Op/8A1KPEAory
Z/Ez047I+h7L/k834G/9gP4b/wDpi0avn74nfsnah8d/EPjvxl/wlHhXwp4b+HPhzwd/bF5r
RvWx9t0q1hg8tLW2ndv3iYPyjG5T0yR7/p5/4zN+Bv8A2A/hv/6YtGrlrG60/wARfAL9pfwb
/b/hXSvEnizw58Nf7Hs9a1+y0j+0Ps9rbzT+W91LGh2RjJ+buo6sAeiv/DXyMqP8R/M+Yf2e
/wBhzx7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3HwT/
AGQm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOfVv+
CXmn2ukaH8cNQ1PxF4K0KDxP8Mde8JaWmseKdN02e81K4W2aGIQ3E6SBGAOJSoiyrAuCCB0H
7P3wsj+BvwJ0fVfAfjb4VWvxj8c32oaJrHiDVvHek2TfDLT4rg2jNZqbgtJLdJ5khvYBI6W4
2wJmUSvxHWeaaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3
T+UGlkTJVmGUw5q+H/8Agn1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3I
srSeKDfLgo8kgR0YMrEBselfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaO
e8k0ZLiOTUilyIFMLWUQEkzqw2vvZavw6+MvgeXxj+0v+0BZz6VB4+03XY9X+G+meIHtzIku
papN5l4tmWInu7OErImDJFE53sr7UIAOUf8A4Jf+LNG8Q6HpXiDxl8P/AAxf+LfFeo+D/DkV
9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j4SeANA17x/48+H/gL/hI77Vt
NtbDUjql5dJcaZeNZ3iObGyuIhtlXgiQhgwIJ5x6B+wp8f8Ax9qHjjwhq/i3x98P/wDhXvg7
xXL4i1K58bXej6hq1l+8ivb9rGO6WXVPNuDGNps0+e4fcGD73HoHgz4961+0L8afDOuXXi74
Kan8GLr4j6pfz+GvGkXh2LWPC2mXmsC8uxKupRCU+fFOX3WU04yhXcrRhQAfP/gf/gntrHjP
SPBN/L48+H+i2vxN1260LwY98dUb/hJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5rxTx34I1T4Z+
ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+y/2bfjDc6N+0ZayaH4x+Gmn
fs7/AA9+IF/rGjr4sl0u7v8ASNKS6W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn74gWdj+1t+0
Z8YvF+ma94f8K6bPPrXjS0i8TXq2M+oQm6aZLGALvWS9dZQFiDYYq+G4yUIPgn+yE3xqs/Cg
X4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBzreDv2C9W17xZpv
hrWfHvw08I+MNX8SXHhW18PahqVxe6ib2G4S1YSrYW9ylujTsURp3j37Gdcx4c9r/wAE7NCv
/h78SPh54ztPFvwUGg3XiuybxHa+ILzRotY8PW9pdRM0qjVESWPfFIzpLp7uSUwWWSNVXtvg
CbXRv21I/iN4R8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0
YUMZ8/eIv2KPFXw5+HmueI/G9/4f8BwaXqt7oVjaaxPM934jv7MyLdQ2KW0UwlSKSMRNcMUt
xJIi+bndtP2M/wBhzx7+3X8Q7nw/4Is7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxs
R2X7h1H45eC/i5Y/BOw8K+IvhVrngfw/8R/EzeLk8eT6PJfQ6PdeIEuoZVOuf6Y/m2UjM0sB
MjMCHYyphef/AOCfn7afhLQf2ytG8CadZ/DTw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4v
I4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhn
itnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf+H/AcGl6re6FY2msTzPd+I7+z
Mi3UNiltFMJUikjETXDFLcSSIvm53bfa/wBjL4s+MPDfxU8ODWvEvwf8O/Cr4ceMrjVZ7PWN
R0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pzwt+0tY/BO80XUfhVJ4Wg+I/i
a/8AG+mePB4ejvtL0y/8QJeoFTUyZPns5nLNYswLKV3F4wAAfCn7In7J2oftk/E+LwboXijw
roXiS+3f2dZ60b1P7S2RSzS+W8FtMi+XHCxPmsmdyhdxyB5TX2t/wTfufA/g7/gq7N4y0bX/
AAr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0hH0D8Df+Ce2sfHf
4QeHPGdn48+H+j2HijxXH4Is7XUjqguhrEo3RWriGykQb4yriQOYwHAZ1YMoqfDf/gnL8Ufi
PofxX1MaVa6Ppvwag1A+IrvULjEH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/0r/wAE/f2g9B/Z
9/Y8+FtvqeqfDS4vNX+PNnqF3p2t3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJOf/ZQ0
PR/C/wAcP2rrj/hYHhW90XxB4U8X+DtA1jxF420uO+8R3s1xEbaVmmnRpfPQbzdbRCzFvnBy
AxnhXwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhj
jZhj5pqPwN8Yab448V+G/wDhHNVu9a8Dfa21+3sYDe/2UlpJ5dzLK0O5VijcYaXOwZHzYIr7
r+BXjjRx8Iv2WNFi134VS3Xws8V6nZ+Mn1nxPpdtN4cT/hJdO1Jbuxea4QT74bZlE9r5yPFJ
OgJLcfNXjT4i6B4+/ao/aA8Saf8AFHVfA2i+Jv8AhIr7Sbixsbxv+EvSe7MkOlSrGUaKK5Rg
WacbF2DeueiEfP1e7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nS
FPPlWMGdoySGONmGPhNfoX+zZ8UdB8Q/Af8AZLtrLWvhoX+HXiTUI/F3/CR67YaZe+GoW8R6
bqiXVql3PE7u0Fuy+ZAsoMck8X3yQAD5/wBF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdY
nvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6Bp
usT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4n
eJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/i
P4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFYz4+0X/gm
h4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/
AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMO
fsHX/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw
43xJJ+7lniIDkhTX/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe
9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1
O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95c
T6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvt
eufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf8A2h/Bvxt8X/BfWvC/iP4a
X2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf8Agmh4
yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wT
Q8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wd
f/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SS
fu5Z4iA5IU1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6W
CUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tL
K4AextrmKNFun8oNLImSrMMphyaL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3t
ndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxH
rFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/AAX1rwv4j+Gl9pPh
v4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta
+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz4V478Eap8M/
HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfptr/wC0P4N+Nvi/4L614X8R
/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQvP/Fn9oi7+
Jmg+CL/9n74u+FfAt0fiP4v1fxRc3nim28MR3H2vWY59OvL61unikvovsZT/AJYz4RGiKkqY
6APjP4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMA
UAc63g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49
+xnXMeHPtfwc8G2vw7+Eln4r+Hfj/wCD7fFv4marqen6n4nvPEum+HIvhzpoumtzLYWMxhmh
e8QySCeO3WWC2/dxQRs+48/+xR8OL/4B/tAeFdYtPH/7P+saDY+OYbPxHNeanoz3WmW+n3yB
ru1k1SOOXyponaWKfT2YuFXJWRFVQD5U8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hka
ORNyEq2HVhlSQccEismv1M039rLTtL8C+CR8C/Evw03p8QPFF9r134w8d33h10WbV0l0+9vo
21C0uNTR7Ixb3nju2KwmMjeHjb80/izrMXiP4qeJdQt08PxwX+q3VxEmg2strpSq8zsBaQyq
skdvg/u0dVZU2ggEEUhHsPwN/wCCe2sfHf4QeHPGdn48+H+j2HijxXH4Is7XUjqguhrEo3RW
riGykQb4yriQOYwHAZ1YMoqfDf8A4Jy/FH4j6H8V9TGlWuj6b8GoNQPiK71C4xB9qslZp7GB
4g6zXAVGPynywNpZ18yPf9K/8E/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnqF3p2t3Nhez6ZpT2
QtX1Jomcy2LwTRF0nPlspRGJMUmJOf8A2UND0fwv8cP2rrj/AIWB4VvdF8QeFPF/g7QNY8Re
NtLjvvEd7NcRG2lZpp0aXz0G83W0Qsxb5wcgMZ86fs9/sOePf2k/hX498b6FZ2tp4T+HWlXO
qanqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e7zXwr4E1zx1/aX9iaNqusf2PYy6pf8A2G0k
uPsNpFjzLiXYDsiTcu52wq5GSM19V/8ABNnwtH4F/wCF+f234l+H+j/2x8OPEPgqw+3eM9Jt
/t2qS/Z/Lji33I3xPtbbcLmBsHEhxXhPwftIvB+ufELT9T+JV18PZ7bw3qVkr6P5uoweKp1Z
FGkGazk8s29wQczFngIjUncCppCPNK92+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k
+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfCa/Qv9mz4o6D4h+A/wCyXbWWtfDQv8OvEmoR+Lv+
Ej12w0y98NQt4j03VEurVLueJ3doLdl8yBZQY5J4vvkgAHz/AKL/AME0PGSy+FrXxR4j8FeA
9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOTRf+CaHjJZfC1r4o8R+C
vAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP2Dr/7Q/g342+L/gvr
XhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCmv/tD
+D2tRXX7eFx8V/DPjf4KXngfxX8Vp9U1RfEE+k2usaNp8WreetwqavHHPD5lvMZEexZnyuGK
yxhFYzwrwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiN
O8e/YzrmPDnx/wAd+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv
sv4Am10b9tSP4jeEfHPwf1DwBrnxOku78+KtR03+39I0231TzUuS2uKt2HltpfNWa0keViv7
xhNGFHyp+0pr2jeKf2jPH+p+HLy61Hw9qPiTUbrS7u6mnmnurV7qRoZJHuCZmdkKktKTISSW
+bNIR6X8Ef8Agnd4k+PvwP0zxvonjD4fwR61rsvhaw0nUb26tL671hbd7iOwV3t/swlmiRTE
WnEbNLGhdZDsB4H/AOCdPizxDpHgmbxB4g8K/D+/+I2u3Xh3w5pfiJNRW+1C7trmO0mVktrS
cW+25kERFw0TBlY7doDH6A/Yb1bw3on7EHw6/tvx18P/AA7J4Z+OVv8AEW/ttR8R2q30ej2N
htkkW0R2uXleWBoooViMkjPGQvlt5lWvg58UrXxV8ZLPxroPxG8FX3w5+JnxO1PX/GHg7xhq
Gm6Ld+CVa8ZYNStZLi7S4W9+xXRlS604q0cluiF5THtDGfD+o/A3xhpvjjxX4b/4RzVbvWvA
32ttft7GA3v9lJaSeXcyytDuVYo3GGlzsGR82CK5SvYPDSeEvDXxU+Ldp4c+K3iDwd4Tj0rW
LXw9ex2l1LP4zthMBbaZciHyjGl1EFZ2lQRqU+aMcAeP0hHu3wq/YH174leDPh7rV74t8FeE
U+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjq6L/AME0PGSy+FrXxR4j8FeA
9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfoD9mz4o6D4h+A/7JdtZ
a18NC/w68SahH4u/4SPXbDTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SB6Dr/wC0P4N+
Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEB
yQrGfFPg79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad
49+xnXMeHOT8TP2KPFXwm+Al14+1e/8AD4g0vxlceA9V0iGeZ9R0rVYElkkik/deQ6BIt2+G
aRT5iDOdwX6r1NNBj1XxN8Sfg/49+Gg+IPxo8ZeIfM8S6/4ssNEufhvosmpSxRyW9rcSrcrc
XcLtI1ysZmigykcSvJ5ja37DfjS1+D37Mdn4Eg+I/wANFfTvjy0Pi2C48UabDp2v+Fzp6WV/
KI7yRFvLKVSdoCMWwrIu9AVAPzer3b4V/sKXXxO+Amk/Eaf4kfDTwr4e1bxIPCQOuT6lC9pq
RQyrFM0dnJFGhhxL5pk8pVYb3VgyjzT47/8ACMf8Lw8Zf8IR/wAiZ/bt7/YH+t/5B/2h/s3+
u/e/6rZ/rPn/AL3Oa+y/2TvE9zp//BLs+END8V/B/TPE3jD4gagLm38VeJdLtn07Rb3Q5NKn
visswnt3R3Yr5a+cyjhJIpGWRCPH3/4Jf+LNG8Q6HpXiDxl8P/DF/wCLfFeo+D/DkV9LqM39
vXdjerYzPEbazmEUX2lxGpuDExwW2hcMfFNR+BvjDTfHHivw3/wjmq3eteBvtba/b2MBvf7K
S0k8u5llaHcqxRuMNLnYMj5sEV9l/sw+GbX4CeLPDWjaD8Zfhp45+HM3xAvbbxhpOuajpujp
oK2lx9jg1zTpbi8W4FxLZSm5iutOYMjRRoZJTHtr5p8NJ4S8NfFT4t2nhz4reIPB3hOPStYt
fD17HaXUs/jO2EwFtplyIfKMaXUQVnaVBGpT5oxwAAeP17t8Kv2B9e+JXgz4e61e+LfBXhFP
ivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+E1+hf7NnxR0HxD8B/2S7ay1r4
aF/h14k1CPxd/wAJHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJAAPn/Rf+CaHjJZ
fC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHOV4O/YL1
bXvFmm+GtZ8e/DTwj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz9ra
/wDtD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+J
JP3cs8RAckL59qaaDHqvib4k/B/x78NB8QfjR4y8Q+Z4l1/xZYaJc/DfRZNSlijkt7W4lW5W
4u4XaRrlYzNFBlI4leTzGYz5U+Jn7FHir4TfAS68favf+HxBpfjK48B6rpEM8z6jpWqwJLJJ
FJ+68h0CRbt8M0inzEGc7gvj9fpD+w340tfg9+zHZ+BIPiP8NFfTvjy0Pi2C48UabDp2v+Fz
p6WV/KI7yRFvLKVSdoCMWwrIu9AV+Cfjv/wjH/C8PGX/AAhH/Imf27e/2B/rf+Qf9of7N/rv
3v8Aqtn+s+f+9zmkI7X4V/sa6t8QPh5pPizXPFfgr4ceHvEmqjRtBvfFd3cWya5OCRM8Ahgm
YW8DbVluZAkEbSKpk3BgvbeCP+CX/izxX4Ytr+/8ZfD/AMM3U/jl/hs+m6lLqMl1beIA7KLJ
2trOaE7gAwlWRosMMuDlR0E1vpP7X37D3wM8EaH4u8FeGvEPwr1XWNO16DxXrtvoiLBql4tz
DfwPMwWe3jWNllWMtOjbcQsrKx9r/Y4+Nng39l79nPwNpFv4r+Gniixuv2h4riJ9bFm0/wDY
X2U2Y1prSVzPpro8PnJI/lvGQmS0bkSMZ806L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE
95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw58f1H4G+MNN8ceK/Df8Awjmq3eteBvtba/b2
MBvf7KS0k8u5llaHcqxRuMNLnYMj5sEV+ix+InhPUNT+BWm+H/HHw/8AFdh8M/iP4ht/Eet+
LfFWnRapZWh8W2Opw6pDLczQm5lntrYlri3WRXSa4T7zED4/8afEXQPH37VH7QHiTT/ijqvg
bRfE3/CRX2k3FjY3jf8ACXpPdmSHSpVjKNFFcowLNONi7BvXPRCPn6vdvhV+wPr3xK8GfD3W
r3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHwmv0L/Zs+KOg+Ifg
P+yXbWWtfDQv8OvEmoR+Lv8AhI9dsNMvfDULeI9N1RLq1S7nid3aC3ZfMgWUGOSeL75IAB8/
6L/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGU
w5NF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZK
swymHP2Dr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LB
KXktYcb4kk/dyzxEByQpr/7Q/g342+L/AIL614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF
9lrFve2sV9LBKXktYcb4kk/dyzxEByQrGfH2i/8ABNDxksvha18UeI/BXgPVvG/iS+8K6Bpu
sT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA
03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v+C+teF/Efw0vtJ8N
/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/wDtD+Dfjb4v+C+t
eF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i
/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZT
Dk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrM
Mphz9g6/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXk
tYcb4kk/dyzxEByQpr/7Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW
97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxP
qd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie
8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8S
X2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCn/AA0h4Z+IXjb4Ra34F8X/
AA0udE074u+Jdc8WyeLbzSYbvSrK51+G6t57KPWCtxbI9l+8/wCJeqfvAxb9+GoA+PtF/wCC
aHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov
/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn
62Pxu8D/ABL1P4FXPg/xN8P9S0HwZ8R/EM2tXfi3xFb2WqaJp83i2x1a2vof7TniuZZZLSDD
TKJXZJbiNv3jMBb1/wDaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3
vbWK+lglLyWsON8SSfu5Z4iA5IUA+NPGX/BPbWPhJ4A0DXvH/jz4f+Av+EjvtW021sNSOqXl
0lxpl41neI5sbK4iG2VeCJCGDAgnnB4H/wCCe2seM9I8E38vjz4f6La/E3XbrQvBj3x1Rv8A
hJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5r6K0n41ar8bf2hdG1FvHvwK8UfBA/E7V7ptH8VQaD
BqOhaVdaz9quZCmrwR3Oy5hmMitbPI3y7DskjEa9XofxR+HPiG2/Z+tvh5rXw0Pg/wCHXxA1
6O//AOEj1210y98NaU3iuw1SzurVNRniuHdrK3C+YiysY5J4m/eFgAD5U0X/AIJoeMll8LWv
ijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc5Xg79gvVte8W
ab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXMeHP2tr/wC0
P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dy
zxEByQvj3we1qK6/bwuPiv4Z8b/BS88D+K/itPqmqL4gn0m11jRtPi1bz1uFTV4454fMt5jI
j2LM+VwxWWMIoB86fEz9ijxV8JvgJdePtXv/AA+INL8ZXHgPVdIhnmfUdK1WBJZJIpP3XkOg
SLdvhmkU+YgzncF8fr9Qf2dPj34bf4b61Z+FfiD4VttB1j9o+81LWbPxN4mtbeTW/BlzaiC5
e6i1SVZbuKWKQBhIrys43YMibl/On47/APCMf8Lw8Zf8IR/yJn9u3v8AYH+t/wCQf9of7N/r
v3v+q2f6z5/73OaQjlKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAr+or/g0d/5RraF/2GdQ/wDSqWv5da/qK/4NHf8AlGtoX/YZ1D/0
qlr0cB8Fb/D/AO3ROTFb0/8AF+jPv34s/wDH9cfU1+Ln/BaLXo9E1S3LnrZ6Z/6kHjn/AAr9
o/iz/wAf1x9TX4ef8F2bCS91G0Eef+PPS+n/AGH/AB3XXVdqELd/0OGnrWlft/kfn1/wUl11
NU0z4DTr91/h1Lj8PE2vj+lFYP8AwUC0+S18J/ABG3ZHw6nz/wCFR4gorypt8zPTh8KP6Q/2
N5Mfs7fAkf8AUj+FP/TVZV/PJ+1P+yfqHx4+O3xh8Z/8JR4V8KeG/hzZeGP7YvNaN62Ptthb
wweWlrbTu/7xMH5Rjcp6ZI/cD9lz9prSPD/w8+BehSXCC5HgzwdFsLc5fSLAj/0IV+R3jvVN
N8YeBv2tvCi6/wCFNL8ReMtN+H0mjWms6/ZaR/aIghhnm8t7qWNDsj5Pzd1HVgD72czjLDUU
uy/I8fKotYirfu/zPkf9nv8AYc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVG
Mlx5YLbcBVBXe6b49x8E/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uB
ao8koVDcmIvtZgCgDn1b/gl5p9rpGh/HDUNT8ReCtCg8T/DHXvCWlprHinTdNnvNSuFtmhiE
NxOkgRgDiUqIsqwLgggdB+z98LI/gb8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4N
ozWam4LSS3SeZIb2ASOluNsCZlEr/Nnvnmmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6Bpus
T3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDmr4f/4J9ahqHiHw94e1X4l/Crw54z8R67c+
GovDF3qN7eapZahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7GXjfxh8LPip4c0jWviH8H5/hV
8JPGVxdT32sXui6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6sNr72Wr8OvjL4Hl8Y/tL/tAWc+
lQePtN12PV/hvpniB7cyJLqWqTeZeLZliJ7uzhKyJgyRROd7K+1CADlH/wCCX/izRvEOh6V4
g8ZfD/wxf+LfFeo+D/DkV9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j4Se
ANA17x/48+H/AIC/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegfsKfH/wAf
ah448Iav4t8ffD//AIV74O8Vy+ItSufG13o+oatZfvIr2/axjull1TzbgxjabNPnuH3Bg+9x
6B4M+PetftC/Gnwzrl14u+Cmp/Bi6+I+qX8/hrxpF4di1jwtpl5rAvLsSrqUQlPnxTl91lNO
MoV3K0YUAHz/AOB/+Ce2seM9I8E38vjz4f6La/E3XbrQvBj3x1Rv+Eme3uY7Vp4hDZSGCJpp
VVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr
7L/Zt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYtowz
XDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8K6bPPrXjS0i8TXq2M+oQm6aZLGALvWS9dZQFiDY
Yq+G4yUIPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTEX
2swBQBzreDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18PahqVxe6ib2G4S1YSrYW9ylujTsUR
p3j37Gdcx4c9r/wTs0K/+HvxI+HnjO08W/BQaDdeK7JvEdr4gvNGi1jw9b2l1EzSqNURJY98
UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58TpLu/PirUdN/t/SNNt9U81Lktrir
dh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/h/wHBpeq3uhWNprE8z3fiO/szI
t1DYpbRTCVIpIxE1wxS3EkiL5ud20/Yz/Yc8e/t1/EO58P8AgiztVTToPtGo6rqDvDp2mKQ3
liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FWueB/D/AMR/EzeLk8eT6PJfQ6Pd
eIEuoZVOuf6Y/m2UjM0sBMjMCHYyphef/wCCfn7afhLQf2ytG8CadZ/DTw18FfAviTxJrmi+
IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfBXhF
Pivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf+H/A
cGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/2Mviz4w8N/FTw4Na8S
/B/w78Kvhx4yuNVns9Y1HRdXPh+GOaO6vIdGFw11qTJIIlWF7IuJJirCQvvkr3bUf2nPC37S
1j8E7zRdR+FUnhaD4j+Jr/xvpnjweHo77S9Mv/ECXqBU1MmT57OZyzWLMCyldxeMAAHwp+yJ
+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19rf8A
BN+58D+Dv+Crs3jLRtf8K+HPhP4X13WfsN5rWv2+mbNPuLa/hsfLS9lS4mypiBwrum5TJtzk
/FNIR7B8K/2NdW+IHw80nxZrnivwV8OPD3iTVRo2g3viu7uLZNcnBImeAQwTMLeBtqy3MgSC
NpFUybgwXtvBH/BL/wAWeK/DFtf3/jL4f+Gbqfxy/wANn03UpdRkurbxAHZRZO1tZzQncAGE
qyNFhhlwcqOgmt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYNUvFuYb+B5mCz28a
xssqxlp0bbiFlZWPtf7HHxs8G/svfs5+BtIt/Ffw08UWN1+0PFcRPrYs2n/sL7KbMa01pK5n
010eHzkkfy3jITJaNyJGM+VPB37Bera94s03w1rPj34aeEfGGr+JLjwra+HtQ1K4vdRN7DcJ
asJVsLe5S3Rp2KI07x79jOuY8OfNNR+BvjDTfHHivw3/AMI5qt3rXgb7W2v29jAb3+yktJPL
uZZWh3KsUbjDS52DI+bBFfa1r8OdH+D0Gs678J/il8P9c+JPxA8V69pFx438U+PtLttQ8D6O
moS2q3cXmTCWW7v4t0z30SNIsLMIYw0vmN8qfD3Q7T4b+OPidol38Wf+ES/szQ9V0yG/8Orc
39j42ljkVF01ZICn+iXe0sJZQYtqKWXkUhHlNe7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/h
G21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPhNfoX+zZ8UdB8Q/Af9ku2sta+Ghf4deJN
Qj8Xf8JHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJAAPn/Rf+CaHjJZfC1r4o8R+
CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov8AwTQ8ZLL4WtfF
HiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/AGh/Bvxt
8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOS
FNf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDj
fEkn7uWeIgOSFYz4+0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsb
a5ijRbp/KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD
2NtcxRot0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhW
Vz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+165
8R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6
t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8ABNDxksvha18UeI/B
XgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wAF
9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9
ofwb8bfF/wAF9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSf
u5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRb
p/KDSyJkqzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXM
UaLdP5QaWRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi
+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R
6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t4
38SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/ANofwb8bfF/wX1rw
v4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/ANof
wb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7ln
iIDkhQD4+0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/
KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot
0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt
721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re
6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+KfB37Bera94s03w1rPj34aeEfGGr+JLj
wra+HtQ1K4vdRN7DcJasJVsLe5S3Rp2KI07x79jOuY8OT9m7/gnL8Uf2n/j34i+Heh6Va2Gr
eDp57XxDeahcY07RZoneMxyzRCQM7yRsiLGHL7WYfIjuv0rqaaDHqvib4k/B/wAe/DQfEH40
eMvEPmeJdf8AFlholz8N9Fk1KWKOS3tbiVblbi7hdpGuVjM0UGUjiV5PMY/4Jg/tUxfCf9pL
wb8KfEs3wfsvBPwt1XXr6XxgmuS6fFe3MkNxbfbVle7itb938yOCF3t5JFtmOwIA7gA+P9G/
Zf1ab4CP8R9c1nw/4R8PXc81poKaw9wLvxVPCjNMljDDDKzpGyrG00nlwLJKiGUNuC+aV+kP
w3+JmieKPg9+zd4YhuvgVFpPgnxlrlp8QtG1690C7g0aym1qKfZZSarJLLNbtavJtms5ZfMC
DMrugx8E/Hf/AIRj/heHjL/hCP8AkTP7dvf7A/1v/IP+0P8AZv8AXfvf9Vs/1nz/AN7nNIR2
vwr/AGNdW+IHw80nxZrnivwV8OPD3iTVRo2g3viu7uLZNcnBImeAQwTMLeBtqy3MgSCNpFUy
bgwXtvBH/BL/AMWeK/DFtf3/AIy+H/hm6n8cv8Nn03UpdRkurbxAHZRZO1tZzQncAGEqyNFh
hlwcqOgmt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYNUvFuYb+B5mCz28axssqx
lp0bbiFlZWPtf7HHxs8G/svfs5+BtIt/Ffw08UWN1+0PFcRPrYs2n/sL7KbMa01pK5n010eH
zkkfy3jITJaNyJGM+VPB37Bera94s03w1rPj34aeEfGGr+JLjwra+HtQ1K4vdRN7DcJasJVs
Le5S3Rp2KI07x79jOuY8OfNNR+BvjDTfHHivw3/wjmq3eteBvtba/b2MBvf7KS0k8u5llaHc
qxRuMNLnYMj5sEV9bfAHwsfhZ+2pH4r0H4l/B/xT4Pf4nSJrGo+KtZ0S51+2srPVMi/M+pIj
s9xBIZ1udNd/MIyWWRFVfFNW8WeGtc/aM+Omp6P8VPEHgrw9q8Gvy6Ldxx6jez+MoZLotb6X
cuXE2y6QgvJdFhlMyAsaQjwmvdvhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8
Vs7D7JazpCnnyrGDO0ZJDHGzDHwmv0L/AGbPijoPiH4D/sl21lrXw0L/AA68SahH4u/4SPXb
DTL3w1C3iPTdUS6tUu54nd2gt2XzIFlBjkni++SAAfP+i/8ABNDxksvha18UeI/BXgPVvG/i
S+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3j
fxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz9g6/wDtD+Dfjb4v+C+teF/E
fw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKa/wDtD+Df
jb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RA
ckKxnx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaW
RMlWYZTDk0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/
KDSyJkqzDKYc/YOv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW9
7axX0sEpeS1hxviST93LPEQHJCmv/tD+Dfjb4v8AgvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL
3QrK58X2WsW97axX0sEpeS1hxviST93LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8
K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDnK+G//BOX4o/EfQ/ivqY0q10fTfg1
BqB8RXeoXGIPtVkrNPYwPEHWa4Cox+U+WBtLOvmR7/tbX/2h/Bvxt8X/AAX1rwv4j+Gl9pPh
v4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhfKf2bvE+g+Nf2jP2u
/Glv4r8FWHh74j+G/GWieG5tY8S2GkT6ndX11HNaqLe6mimRJEORI6LGCGUsGUgAHypo37L+
rTfAR/iPrms+H/CPh67nmtNBTWHuBd+Kp4UZpksYYYZWdI2VY2mk8uBZJUQyhtwXV/Yz/Yc8
e/t1/EO58P8AgiztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfrb4M+O9K1D4Bf
sw+DTrPwUv7D4f8AivWdO+I9n4ou/Ddz9itJNYhmLWz6gSZYpLZpT52nsyvtADlkUC3/AME/
P20/CWg/tlaN4E06z+Gnhr4K+BfEniTXNF8Q6hql1o9yILlbqG2nlNxeRxXdx5M0FsnnwSzp
AWxt2u4APj74J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobk
xF9rMAUAc9Xov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP
5QaWRMlWYZTDn1b4OfD21+Enwks/EHw78UfB/Qfi38Q9V1PStT1O88e6baRfCbTUumtilgJr
uSaV7hPMYX0bTSrbLiIFpfOf0D4T+J9B0b4ffsy+GrLxX8H9Zf4ReMtX03xdqF/4lsLQaRCv
ifT9RTUNOa7mhedJYLVts0CSBoZZo8B2IAB8v+Mv+Ce2sfCTwBoGveP/AB58P/AX/CR32rab
a2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzjJ/ZR/YL8cftp/F/VvCngB9K1K10TzHvfEU
xuLfR4IgXEUjO0XnDzin7uMxeaRklFCSFPqDSfjVqvxt/aF0bUW8e/ArxR8ED8TtXum0fxVB
oMGo6FpV1rP2q5kKavBHc7LmGYyK1s8jfLsOySMRrrf8E/P20/CWg/tlaN4E06z+Gnhr4K+B
fEniTXNF8Q6hql1o9yILlbqG2nlNxeRxXdx5M0FsnnwSzpAWxt2u4APlT4VfsD698SvBnw91
q98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx1dF/4JoeMll8LWvi
jxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc/S3wn+JGg6t8P
v2ZbCyvvg/YP8NvGWrweLrW/8UWEA8IwnxPp+ppNp0l3d77hDBbNGtxA9yHheZN7OxNdXr/7
Q/g342+L/gvrXhfxH8NL7SfDfxO8SX2vXPiPWLPSL3QrK58X2WsW97axX0sEpeS1hxviST93
LPEQHJCgHx9ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP
5QaWRMlWYZTDnwrx34I1T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK
/TbX/wBofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw
43xJJ+7lniIDkhfzz/ay8b6Z8TP2qPiX4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlO
GAIzyAaQjoPhX+xrq3xA+Hmk+LNc8V+Cvhx4e8SaqNG0G98V3dxbJrk4JEzwCGCZhbwNtWW5
kCQRtIqmTcGC9t4I/wCCX/izxX4Ytr+/8ZfD/wAM3U/jl/hs+m6lLqMl1beIA7KLJ2trOaE7
gAwlWRosMMuDlR0E1vpP7X37D3wM8EaH4u8FeGvEPwr1XWNO16DxXrtvoiLBql4tzDfwPMwW
e3jWNllWMtOjbcQsrKx9r/Y4+Nng39l79nPwNpFv4r+Gniixuv2h4riJ9bFm0/8AYX2U2Y1p
rSVzPpro8PnJI/lvGQmS0bkSMZ8qfs3f8E5fij+0/wDHvxF8O9D0q1sNW8HTz2viG81C4xp2
izRO8ZjlmiEgZ3kjZEWMOX2sw+RHdfH/AAr4E1zx1/aX9iaNqusf2PYy6pf/AGG0kuPsNpFj
zLiXYDsiTcu52wq5GSM1+i37DX7Y+meEP25rL4ealqXw/k+G3w+8V+KNYh8d614ie11DV3n+
1wR6jc3BvIrTUbuVZYoVkaCRxC7lAqhnHxT8PZrTRvHHxOiu/Hf/AAqnOh6rBDa+HWudTsde
l8xQuhrPBO+60mwQJpZZoisSlmkyGKEeU17t8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbW
Hv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+E1+hf7NnxR0HxD8B/wBku2sta+Ghf4deJNQj
8Xf8JHrthpl74ahbxHpuqJdWqXc8Tu7QW7L5kCygxyTxffJAAPn/AEX/AIJoeMll8LWvijxH
4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8E0PGSy+FrXxR
4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/2h/Bvxt8X
/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFN
f/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SS
fu5Z4iA5IVjPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0
W6fyg0siZKswymHJov8AwTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21
zFGi3T+UGlkTJVmGUw5+wdf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz
4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/AGh/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteuf
EesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAer
eN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/BNDxksvha18UeI/BXg
PVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/wDaH8G/G3xf8F9a
8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/wDa
H8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5
Z4iA5IUA+PtF/wCCaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6
fyg0siZKswymHJov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUa
LdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LW
Le9tYr6WCUvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWek
XuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El9
4V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHJov/AATQ8ZLL4WtfFHiPwV4D1bxv
4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8ABfWvC/iP
4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3
xf8ABfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgO
SFAPj7Rf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0si
ZKswymHOV4O/YL1bXvFmm+GtZ8e/DTwj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxR
GnePfsZ1zHhz9ra/+0P4N+Nvi/4L614X8R/DS+0nw38TvEl9r1z4j1iz0i90KyufF9lrFve2
sV9LBKXktYcb4kk/dyzxEByQvn2ppoMeq+JviT8H/Hvw0HxB+NHjLxD5niXX/Flholz8N9Fk
1KWKOS3tbiVblbi7hdpGuVjM0UGUjiV5PMYA+VPEX7FHir4c/DzXPEfje/8AD/gODS9VvdCs
bTWJ5nu/Ed/ZmRbqGxS2imEqRSRiJrhiluJJEXzc7tuVo37L+rTfAR/iPrms+H/CPh67nmtN
BTWHuBd+Kp4UZpksYYYZWdI2VY2mk8uBZJUQyhtwX7W+G/ivRIPg9+zd4Ch8UfArxHpPw18Z
a5pPxCOvXOgTwLZNrUUvn2TaqFlkt5rVpHWWzHzjAJ3oFXVsvir8PfGfhb4D+HfAWrfB+7+G
vg74geI7fxHY+MjownsNCn16O4tjEutAXRR7ByS9vlyV2ufMTCgHx/8ACr9gfXviV4M+HutX
vi3wV4RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPj/jvwRqnwz8caz4b1
u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV+kGh/FH4c+Ibb9n62+HmtfDQ+D/h18
QNejv/8AhI9dtdMvfDWlN4rsNUs7q1TUZ4rh3aytwvmIsrGOSeJv3hYD4J/ay8b6Z8TP2qPi
X4k0S5+26L4g8V6pqVhceW8f2i3mu5ZI32uAy5RlOGAIzyAaQjz+iiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACv6iv+DR3/lGtoX/YZ1D/ANKpa/l1r+or/g0d/wCUa2hf9hnUP/SqWvRwHwVv
8P8A7dE5MVvT/wAX6M+/fiz/AMf1x9TX4x/8FlbWO51a38zHFnpnX/sYPHNfs58Wf+P64+pr
8P8A/gurr8miX9qUOM2el/8Ap/8AHf8AhXZVdqEPX9Dhpq9aS8v1R+f3/BTC3jjtPgQExtHw
6lx/4U2v0Vzn/BQHXX1Hwl8AJnPzP8Op88+nijxAP6UV5U5e8z0oR91H0jpnx01XRv2xvgdp
MdxItv8A2F8OI9oPGH0LRif/AEI186/FL9lfUv2g/Evj3xw3ijwr4V8O/D7w74PfWbzWjetg
32lWsUPlpa207v8AvEwflGN6npkj6BtvhpLeftj/AAO1ED5f7D+HDf8AfOhaMP6VyVlLp+t/
AL9pjwV/b3hXSvEvinw58NRpFnrWv2Wkf2h9mtbeafy3upY0OyMZPzd1HVgDtX5vZxv5fkZ0
Le0dvM+Yf2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV
3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZ
gCgDn1b/AIJeafa6Rofxw1DU/EXgrQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKi
LKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseINW8d6TZN8MtPiuDaM1mpuC0kt0n
mSG9gEjpbjbAmZRK/Edh5pov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVw
A9jbXMUaLdP5QaWRMlWYZTDmr4f/AOCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqW
WoQXpsjDciytJ4oN8uCjySBHRgysQGx6V+xl438YfCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7
oupm2ht5o57yTRkuI5NSKXIgUwtZRASTOrDa+9lq/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b
6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQgA5R/+CX/AIs0bxDoeleIPGXw/wDDF/4t
8V6j4P8ADkV9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j4SeANA17x/wCP
Ph/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegfsKfH/x9qHjjwhq/i3x
98P/APhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6WXVPNuDGNps0+e4fcGD73HoHgz4961+0L
8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXmsC8uxKupRCU+fFOX3WU04yhXcrRhQAfP
/gf/AIJ7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/wCEme3uY7Vp4hDZSGCJppVVftQhfqSi
jmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo
37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn
74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16tjPqEJummSxgC71kvXWUBYg2GKvhuMlCD
4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc63
g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXM
eHPa/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEdr4gvNGi1jw9b2l1EzSqNURJY98UjOkun
u5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58TpLu/PirUdN/t/SNNt9U81Lktrirdh5baX
zVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/AIf8BwaXqt7oVjaaxPM934jv7MyLdQ2K
W0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUd
t8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FWueB/D/xH8TN4uTx5Po8l9Do914gS6hlU
65/pj+bZSMzSwEyMwIdjKmF5/wD4J+ftp+EtB/bK0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR
7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwAfKnwq/YH174leDPh7rV74t8FeEU+K+qz6P
4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjk+Iv2KPFXw5+HmueI/G9/wCH/AcGl6re
6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/2Mviz4w8N/FTw4Na8S/B/wAO
/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pzwt+0tY/BO
80XUfhVJ4Wg+I/ia/wDG+mePB4ejvtL0y/8AECXqBU1MmT57OZyzWLMCyldxeMAAHwp+yJ+y
dqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19rf8E37
nwP4O/4KuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0
hHu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjj
ZhjyifsleNYvDnxW1O7sbWwg+DE8Fl4nWa7Rnt7qa+FklvGELCR/MEp3KfLCwud+Siv9F/sE
XT+BvDHw0mtPiX8P9V8GeIvFbz+O/CXibUtP0eTwa8Lrbx6nZTXVzFci7NpO00d1p/lujwIh
aQx7B0H7JXx3i034PftE/CbwL8bLrwxBcarpafDLUfEPiiXQ4LTTY9akN3cxzv5SW7vbypLL
HGqSygybYnKsoYz4Jor9LP8AgnN8RPhh8BfA/wAObGfxx4V1jw3r2u+ItO+Jaav4qu7Wxg82
NLLTWg0eaa3S5tLmNojJNPZT+WrOZHgEBEXE+BPHHiXw5+yr8CfDXwh+K3gr4eeLPCPiTXo/
Hjt4407R7Rrpr+3NpdXSvME1S3FugxJEl1G0aGMbsbKQj5U1H9l/VrL9kyw+MUes+H7vw9d+
JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/wDWoDhg4XivCvgTXPHX9pf2Jo2q6x/Y9jLql/8A
YbSS4+w2kWPMuJdgOyJNy7nbCrkZIzX6LfsY/HL4f/Cr9ke007xb4i+H9x4k8UfGS8vPD2q2
E9iI/CNxPpr2dr4lOkyeSI7SC5RmWK5igEaOkoRSsQPxn4aZtA+Knxbt9e+M11oepDStYtpd
a0eS71WDx/decAbE3ETKz296wZzPNmMhVZ1O4UAeP0UUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAV/UV/waO/8AKNbQv+wzqH/pVLX8utf1Ff8ABo7/AMo1tC/7DOof+lUt
ejgPgrf4f/bonJit6f8Ai/Rn378Wf+P64+pr8Sf+C43hw69e23+zZ6X+mv8Ajr/Gv22+LP8A
x/XH1Nfiv/wWu12PR72Df3s9M/8AT/45/wAK66v8CHr+hw0/40rdv1R+ZX/BQnQv7N8M/AGH
+58Op/18UeID/Wirf/BRbWEvdC+Akinhvh1Nj/wp/EAoryZ/Ez04/Cj640LxJZw/tGfA2Biv
mf8ACP8Aw7H56JpGK+Pvix+yjf8Ax68V+P8AxoPFHhTwp4a+HXh7wedXvNaN62Pt2l2sMHlp
a207t+8TB+UY3KemSPU7rxPPF+3X8DrcMdn9jfDUYz66Dov+NR6Xd6f4j/Z8/aV8Hf2/4V0v
xL4s8NfDU6RZ61r9lpH9o/Z7S3mn8t7qWNDsjGT83dR1YA9FeXNTXy/IyoRtUfzPmT9nv9hz
x7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3HwT/ZCb41
WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59W/wCCXmn2
ukaH8cNQ1PxF4K0KDxP8Mde8JaWmseKdN02e81K4W2aGIQ3E6SBGAOJSoiyrAuCCB0H7P3ws
j+BvwJ0fVfAfjb4VWvxj8c32oaJrHiDVvHek2TfDLT4rg2jNZqbgtJLdJ5khvYBI6W42wJmU
SvxHYeaaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGl
kTJVmGUw5q+H/wDgn1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeK
DfLgo8kgR0YMrEBselfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaOe8k0Z
LiOTUilyIFMLWUQEkzqw2vvZavw6+MvgeXxj+0v+0BZz6VB4+03XY9X+G+meIHtzIkupapN5
l4tmWInu7OErImDJFE53sr7UIAOUf/gl/wCLNG8Q6HpXiDxl8P8Awxf+LfFeo+D/AA5FfS6j
N/b13Y3q2MzxG2s5hFF9pcRqbgxMcFtoXDHK8Zf8E9tY+EngDQNe8f8Ajz4f+Av+EjvtW021
sNSOqXl0lxpl41neI5sbK4iG2VeCJCGDAgnnHoH7Cnx/8fah448Iav4t8ffD/wD4V74O8Vy+
ItSufG13o+oatZfvIr2/axjull1TzbgxjabNPnuH3Bg+9x6B4M+PetftC/Gnwzrl14u+Cmp/
Bi6+I+qX8/hrxpF4di1jwtpl5rAvLsSrqUQlPnxTl91lNOMoV3K0YUAHz/4H/wCCe2seM9I8
E38vjz4f6La/E3XbrQvBj3x1Rv8AhJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5rxTx34I1T4Z+O
NZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+y/2bfjDc6N+0ZayaH4x+Gmnf
s7/D34gX+saOviyXS7u/0jSkulu3XTYL5JdXR5oIohGLaMM1wwORJ5jj5++IFnY/tbftGfGL
xfpmveH/AArps8+teNLSLxNerYz6hCbppksYAu9ZL11lAWINhir4bjJQg+Cf7ITfGqz8KBfi
N8NPD2reN9VOj6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHOt4O/YL1bXvFmm+Gt
Z8e/DTwj4w1fxJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz2v8AwTs0K/8A
h78SPh54ztPFvwUGg3XiuybxHa+ILzRotY8PW9pdRM0qjVESWPfFIzpLp7uSUwWWSNVXtvgC
bXRv21I/iN4R8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0Y
UMZ8/eIv2KPFXw5+HmueI/G9/wCH/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFL
cSSIvm53bT9jP9hzx7+3X8Q7nw/4Is7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR
2X7h1H45eC/i5Y/BOw8K+IvhVrngfw/8R/EzeLk8eT6PJfQ6PdeIEuoZVOuf6Y/m2UjM0sBM
jMCHYyphef8A+Cfn7afhLQf2ytG8CadZ/DTw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4vI
4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhni
tnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf8Ah/wHBpeq3uhWNprE8z3fiO/s
zIt1DYpbRTCVIpIxE1wxS3EkiL5ud232v9jL4s+MPDfxU8ODWvEvwf8ADvwq+HHjK41Wez1j
UdF1c+H4Y5o7q8h0YXDXWpMkgiVYXsi4kmKsJC++SvdtR/ac8LftLWPwTvNF1H4VSeFoPiP4
mv8AxvpnjweHo77S9Mv/ABAl6gVNTJk+ezmcs1izAspXcXjAAB8Kfsifsnah+2T8T4vBuheK
PCuheJL7d/Z1nrRvU/tLZFLNL5bwW0yL5ccLE+ayZ3KF3HIHlNfa3/BN+58D+Dv+Crs3jLRt
f8K+HPhP4X13WfsN5rWv2+mbNPuLa/hsfLS9lS4mypiBwrum5TJtzk/FNIQV7t8Kv2B9e+JX
gz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+E1+hf7NnxR
0HxD8B/2S7ay1r4aF/h14k1CPxd/wkeu2GmXvhqFvEem6ol1apdzxO7tBbsvmQLKDHJPF98k
AA+f9F/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJk
qzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5Qa
WRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbW
K+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6F
ZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IVjPj7Rf8Agmh4yWXwta+KPEfgrwHq3jfxJfeF
dA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHiPwV4D1bxv4kv
vCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8F9a8L+I/hpfa
T4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF/wX
1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+
0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMp
hyaL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyV
ZhlMOfsHX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WC
UvJaw43xJJ+7lniIDkhTX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L
7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ
7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroG
m6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/wBofwb8bfF/wX1rwv4j+Gl9pPhv
4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/wBofwb8bfF/wX1r
wv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhQD4+0X
/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphy
aL/wTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGU
w5+wdf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8l
rDjfEkn7uWeIgOSFNf8A2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstY
t721ivpYJS8lrDjfEkn7uWeIgOSFAPj7Rf8Agmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y
4n1O9s7tLK4AextrmKNFun8oNLImSrMMphzV8Zf8E9tY+EngDQNe8f8Ajz4f+Av+EjvtW021
sNSOqXl0lxpl41neI5sbK4iG2VeCJCGDAgnnH2Xr/wC0P4N+Nvi/4L614X8R/DS+0nw38TvE
l9r1z4j1iz0i90KyufF9lrFve2sV9LBKXktYcb4kk/dyzxEByQvn2k/GrVfjb+0Lo2ot49+B
nXxC/ZO1D476v428Zf8ACUeFfCnhv4c+GfBv9sXmtG9bH23SbSGDy0tbad2/eJg/KMblPTJH
o1/Gf+G/PgccHH9j/DT/ANMGiVoaPc6f4g/Z4/aT8Gf2/wCFdK8SeK/DPw0/sez1rX7LSP7Q
+z2lvNP5b3UsaHZGMn5u6jqwB1q/w18iKX8R/M+Zf2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrn
VNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL
67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgCgDn1b/AIJeafa6Rofxw1DU/EXgrQoPE/wx17wl
paax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfah
omseINW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK/KdR5pov/BNDxksvha18UeI/
BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDmr4f/AOCfWoah4h8P
eHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqWWoQXpsjDciytJ4oN8uCjySBHRgysQGx6V+xl438Y
fCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7oupm2ht5o57yTRkuI5NSKXIgUwtZRASTOrDa+9lq
/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQg
A5R/+CX/AIs0bxDoeleIPGXw/wDDF/4t8V6j4P8ADkV9LqM39vXdjerYzPEbazmEUX2lxGpu
DExwW2hcMcrxl/wT21j4SeANA17x/wCPPh/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZ
V4IkIYMCCecegfsKfH/x9qHjjwhq/i3x98P/APhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6W
XVPNuDGNps0+e4fcGD73HoHgz4961+0L8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXm
sC8uxKupRCU+fFOX3WU04yhXcrRhQAfP/gf/AIJ7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG
/wCEme3uY7Vp4hDZSGCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9n
uIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6
W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16t
jPqEJummSxgC71kvXWUBYg2GKvhuMlCD4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNx
N5scKNLHZ21wLVHklCobkxF9rMAUAc63g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalc
Xuom9huEtWEq2Fvcpbo07FEad49+xnXMeHPa/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEd
r4gvNGi1jw9b2l1EzSqNURJY98UjOkunu5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58Tp
Lu/PirUdN/t/SNNt9U81Lktrirdh5baXzVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/
AIf8BwaXqt7oVjaaxPM934jv7MyLdQ2KW0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv7dfxDuf
D/giztVTToPtGo6rqDvDp2mKQ3liWRUdt8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FW
ueB/D/xH8TN4uTx5Po8l9Do914gS6hlU65/pj+bZSMzSwEyMwIdjKmF5/wD4J+ftp+EtB/bK
0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwAfK
nwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj
k+Iv2KPFXw5+HmueI/G9/wCH/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSI
vm53bfa/2Mviz4w8N/FTw4Na8S/B/wAO/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySC
JVheyLiSYqwkL75K921H9pzwt+0tY/BO80XUfhVJ4Wg+I/ia/wDG+mePB4ejvtL0y/8AECXq
BU1MmT57OZyzWLMCyldxeMAAHwp+yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0v
lvBbTIvlxwsT5rJncoXccgeU19rf8E37nwP4O/4KuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0
+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0hHsHwr/Y11b4gfDzSfFmueK/BXw48PeJNVGjaDe+
K7u4tk1ycEiZ4BDBMwt4G2rLcyBII2kVTJuDBfNfHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJ
J9nuIZGjkTchKth1YZUkHHBIr6gmt9J/a+/Ye+BngjQ/F3grw14h+Feq6xp2vQeK9dt9ERYN
UvFuYb+B5mCz28axssqxlp0bbiFlZWP0B+wH4/8AhF+z7ofgXTbP4geH/EngnW/EniXTPiDJ
4h8RXOmQCGRVstKlj0KS6jimt7uFoWlea2uhEHk8yWIQExAHwp+z3+y/q37SOh+PbvRdZ8P2
M/w98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mVcZAcqfs9/sv6t+0jofj270XWfD9jP8A
D3w3c+Kryy1B7hJ76ytlJmMBjheMupMa7ZHjyZVxkByvtf8AwTU0+18Ga5+0Lp+teIvBWjT3
/wAMdc8JWb6h4p021gv9SuWjWGKCaScRzIxgk/exs0QG0lwHQtU/4JbXOn6f/wAL4/tLX/Cu
hf278KtZ8Nad/bWv2Wl/bdQvPK+zwx/aZY927yXy4+RPl3su5cgHypX0X+yZ/wAE1PFX7X3w
r1TxfpHjP4aeGdN0ie9juIvEmrzWU6w2cNtNc3WFgdfs8S3kAeQsAhcbsZUn50r6r/4JbXOn
6f8A8L4/tLX/AAroX9u/CrWfDWnf21r9lpf23ULzyvs8Mf2mWPdu8l8uPkT5d7LuXIB86fFn
4cX3wc+KniXwhqctrPqXhXVbrR7uW1Zmgkmt5nhdoyyqxQshIJUHGMgdK5+vuv4BftS6/wDs
2/8ABLnQ9a8N+MPCq+OPD/xH+2WulXOvWcmqJ4cYW8k9otv5wu4rSfUrWJpoIfLaVAzsDEzO
fS/2A/j/APDbwtofgXWL/V/BWlaF8QfEniV/ih4fvfEk2n6V4ea7VbfTra00Q3UcE9lIksSv
JJbXaxJuLyxLBmIA/MmivvbwJ448S+HP2VfgT4a+EPxW8FfDzxZ4R8Sa9H48dvHGnaPaNdNf
25tLq6V5gmqW4t0GJIkuo2jQxjdjZXbfBT9p248Hfs4/COw+F3iT4KDxJYeK9en8ZXepeKp/
BempcPqMUlpevYxXenG8tHtimFa1nCRQiERIVaEgH50+FfAmueOv7S/sTRtV1j+x7GXVL/7D
aSXH2G0ix5lxLsB2RJuXc7YVcjJGaya9g8NeJ9O1X4qfFvUH+Ilr8NYNU0rWJLRPCemXyaV4
laSYMmjQwr5ckFlOD8v2hdqJGgkTPA8foAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/4
NHf+Ua2hf9hnUP8A0qlr+XWv6iv+DR3/AJRraF/2GdQ/9Kpa9HAfBW/w/wDt0TkxW9P/ABfo
z79+LP8Ax/XH1Nfi5/wWjtI7rU7XzMf8emmdf+w/45r9o/iz/wAf1x9TX4b/APBebxBJod3Z
FDjNppf/AKf/AB3/AIV2VHahD1/Q4aavWkvL/I+Av+CkttHFpvwHVcbR8Opcf+FNr9Fc5+3p
rz6l4L/Z/mc/M/w6nzn28U+IB/SivKnL3melBe6j7X/4Y6v9R+PfwF8WRwsYZNA+HkpIHaPR
dIQ/+gV8ZfFD9kbUfjZ4k8eeLT4n8K+E/Dnw08O+D01m71o3rbftulWsEHlpa207t+8jwflG
Nynpkj94v2dfBVl4k+EnwKkliQyp4O8GuDjnI0mwxX5EeLxp2p/Dv9rHwMNe8KaV4j8S6X8P
E0az1rX7LSP7Q+zQwyz+W91LGh2RjJ+bjKjqwB9DMKCp0acl1S/I4cBiPaVpx7NnyX+z3+w5
49/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW24CqCu903x7j4J/shN8a
rPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc+rf8EvNPtd
I0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAcSlRFlWBcEEDoP2fvhZH
8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaSW6TzJDewCR0txtgTMol
fyD1jzTRf+CaHjJZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0s
iZKswymHNXw//wAE+tQ1DxD4e8Par8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQ
b5cFHkkCOjBlYgNj0r9jLxv4w+FnxU8OaRrXxD+D8/wq+EnjK4up77WL3RdTNtDbzRz3kmjJ
cRyakUuRAphayiAkmdWG197LV+HXxl8Dy+Mf2l/2gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvM
vFsyxE93ZwlZEwZIonO9lfahAByj/wDBL/xZo3iHQ9K8QeMvh/4Yv/FvivUfB/hyK+l1Gb+3
ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/4J7ax8JPAGga94/8efD/AMBf8JHfatptrYak
dUvLpLjTLxrO8RzY2VxENsq8ESEMGBBPOPQP2FPj/wCPtQ8ceENX8W+Pvh//AMK98HeK5fEW
pXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gx
dfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laMKAD5/8D/8E9tY8Z6R4Jv5
fHnw/wBFtfibrt1oXgx746o3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4
b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+
HvxAv9Y0dfFkul3d/pGlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/T
Ne8P+FdNnn1rxpaReJr1bGfUITdNMljAF3rJeusoCxBsMVfDcZKEHwT/AGQm+NVn4UC/Eb4a
eHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49
+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw
88Z2ni34KDQbrxXZN4jtfEF5o0WseHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37a
kfxG8I+Ofg/qHgDXPidJd358Vajpv9v6RptvqnmpcltcVbsPLbS+as1pI8rFf3jCaMKGM+fv
EX7FHir4c/DzXPEfje/8P+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNz
u2n7Gf7Dnj39uv4h3Ph/wRZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPx
y8F/Fyx+Cdh4V8RfCrXPA/h/4j+Jm8XJ48n0eS+h0e68QJdQyqdc/wBMfzbKRmaWAmRmBDsZ
UwvP/wDBPz9tPwloP7ZWjeBNOs/hp4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceT
NBbJ58Es6QFsbdruAD5U+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPsl
rOkKefKsYM7RkkMcbMMcnxF+xR4q+HPw81zxH43v/D/gODS9VvdCsbTWJ5nu/Ed/ZmRbqGxS
2imEqRSRiJrhiluJJEXzc7tvtf7GXxZ8YeG/ip4cGteJfg/4d+FXw48ZXGqz2esajournw/D
HNHdXkOjC4a61JkkESrC9kXEkxVhIX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/430zx4
PD0d9pemX/iBL1AqamTJ89nM5ZrFmBZSu4vGAAD4U/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+z
rPWjep/aWyKWaXy3gtpkXy44WJ81kzuULuOQPKa+1v8Agm/c+B/B3/BV2bxlo2v+FfDnwn8L
67rP2G81rX7fTNmn3Ftfw2PlpeypcTZUxA4V3Tcpk25yfimkIK9W+An7J2ofHf4YePPGX/CU
eFfCnhv4c/2f/bF5rRvWx9tleGDy0tbad2/eJg/KMblPTJHlNfVf7FNzp/iH9hL9pnwb/b/h
XSvEniv/AIRb+x7PWtfstI/tD7PqE00/lvdSxodkYyfm4yo6sAQD50+Ffwr8RfG/4h6T4T8J
6Tda54h1ycW9lZW4G+VsEkkkhVRVDMzsQqKrMxCqSPVvgx/wT28a/GzwnqGr2Wq+CtMgh8SN
4O0wX2tps8R60LeW4FjZzwiSBndIwEklljhlaaJUkYtxq/sU/FTw8vw88efC2/1a1+HGt/Ey
COysvHuSEjUHJ0jUHIZoNMuWC+ZNb7GVgpm8+AFE9A17xEfDn/BGa88EXGufDSTX7D4nPdy6
Za6rol1qTaakJgNzGInaaR/tg2iZC0rW+MMbQigDx/8AZ7/YP8X/ALR/w3k8UaVqXhXSLC41
0eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/wBg/wAX/tH/AA3k8UaVqXhX
SLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/wCE7j/4clf8Iz/bPw//
ALY/4WP/AGp/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8ACdx/8OSv+EZ/tn4f/wBs
f8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9mpjPKf2e/2D/F/wC0f8N5PFGlal4V
0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQD9nv8AYP8AF/7R/wAN5PFG
lal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv8AhO4/+HJX/CM/
2z8P/wC2P+Fj/wBqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1H/AAncf/Dkr/hGf7Z+
H/8AbH/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k348/yOM/ZqAPKf2e/wBg/wAX/tH/AA3k
8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3
k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf
8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ
/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/ACOM/ZqAPKf2e/2D/F/7R/w3
k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA8Ur7B/wCE7j/4
clf8Iz/bPw//ALY/4WP/AGp/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zV7B/wTI+Kvw2
+Dnwr+FiX/inw/JoXijVdftPihp/iHxZNbwaY08KWmnJHpBuYoLq3nR4vNmktbpUBkZ5Ylh/
dAHyp8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY
42YY6ui/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZE
yVZhlMOfov4FeONHHwi/ZY0WLXfhVLdfCzxXqdn4yfWfE+l203hxP+El07Ulu7F5rhBPvhtm
UT2vnI8Uk6Aktx22v/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3
trFfSwSl5LWHG+JJP3cs8RAckKAfmT478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0ci
bkJVsOrDKkg44JFelfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao
8koVDcmIvtZgCgDn72u/2y7S/wBB8K3/AME/EXwqjurz4j+LNX8R3Pibxjc+E44vtOsrPYXl
1arfWUl/E1m0e7zIbrCQ+UFBDxn50/ZFjudA/ac0L4h6Z4q/Z0i8M6z8QEutfs3bS7B/Dlrb
6gshksodZiiube3eGVngayJkCoqvsliCKAea+Mv+Ce2sfCTwBoGveP8Ax58P/AX/AAkd9q2m
2thqR1S8ukuNMvGs7xHNjZXEQ2yrwRIQwYEE845Txd+ydqHhr9mhfivZ+KPCviDws3iuXweV
sDepdLdpFJOrlLi2iHlPCiyKQ27EyBlVw6J9V+DPijf/ABK+NPhnyfiB8FPFnwJsfiPqjJpP
jR9GfWNE0efWBcXEsr65Et7L9qgl83zYZZpGKkOyypsHa/sr/F7wb4Z+Dd74f8BeNfBWieD5
/wBoe5uL7SNc8QWdimo+B5rNbaYTW2pSK9zbvA6rsdXkLKCB5keVAPj/AOFX7A+vfErwZ8Pd
avfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMeJ8S/st+O/CXhjxh
rF5oX+gfD/XT4c8SNb3tvcyaLe72jAnjjkZ0iaRGjWcr5LupRXLcV97eBPiz4FtV+BNv8KvE
vw0j8FeDvidr11r0Xi3UdMF3oelPrdvNZSWQ1tvtUCNYKr7rHaxkVmb/AEgMa8K+DvxI8IfC
X9pf4tfGv/hNPtXgOLXdW07TPDAvxeax8Sbe7llZLK7hvBJKNPeIo9zdXcbHIATfc4MYB4p+
yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19bf
8Ehta0nRf+ChOgePdV1DwV4H8J6BPfzXg1DxDb6fBYLdWN5FDFAt5cefOiuyplTKygqZG53H
5JpCCiv0W/ZZ/aVuPAX7HXwK0v4Xax8KtN8SaXruryeMv+Em8bT+F47K4a+ge0ubqCK/szqM
RttgYtFdjZB5QGQ0Zyv2TPj14E8XeB5tB1Xxd8P/AAPdeBPjlD8YbhvKuNO0PUtHgjEctvpE
ZiMzSgqPJtJI0kaN0CglXCMZ8aeAfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaD
LM7MXbOAoWF8sGKK/E1+hfwG/bQvvjR4W/ai0DQfixdfDzUvHniSx1vwCPEPidtDg0S1m16a
4v2jn83yrdxDcq8scLmSUeZsWXaayvAnxA1nSP2VfgToHwR+MXgrwZrvhbxJrw8Z3reLIPDd
peTPf27WV9dW14YZb+3NqqkZt5j5aGJk3AxAA+VNR/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF
7gajY3ogkuMSK8KxFDCivujlf/WoDhg4XzSv0W/ZS+P+n/Bf9mjwHC/j74f2mp61+0faavfS
abd2Vo0eiNEsFzeJb7Y5tOtJDDIh3RW5+zymNlWGYo/QeDv2lPCvw7s9Nt/D/j/w/ocGjftS
XFrp8Wn65DbJY+DbiVJp44gjgJo8kkaO6ri2ZkVjkgGgD8ya6v4V/Cv/AIWl/wAJH/xUfhXw
5/wjmh3Wuf8AE81D7H/ankbf9DtflPm3cm793Fxu2tyMVq/tZf2H/wANUfEv/hGf7K/4Rv8A
4SvVP7J/svy/sP2T7XL5PkeX8nleXt2bPl24xxXu3/BLDxOujaH+0Bpl94r8P+HtN8U/DHVN
EtbTWPEtppMGqarOoWzUJcTRrI4X7SBJgrEJWDMnmjchHyTXbeAfgD4i+I/wk8e+N9PjtR4e
+HEFjNq8004V915dLbW8UaDLM7MXbOAoWF8sGKK/2B+w5+0pY/Dv9kz4RW8/j+10PXdG+PNl
azRPri213Y+GLiC3mvY2BcOmmSXEYeZTiBpEDPlhmug+Ef7RHm2P7V3wx8G/F3SvAn9o+K7V
/htI/in+xdD0nT18QTveSWNwHWCCLyJkdo4Dvmj3bEkwRTGfnTRX6Q/s7ftCN8K/2VfgvoHw
m8T/AAfbXdB8Sa2PF17rnjS78K2gmN/C1nfTW322wl1C3e12H97b3BEcIi2KweI5Phv9r7Wf
gh+wbc+KvCfi34aQ+MNO+Lt1qtho+lapBCIvDUssU0tnZWMkiX9tpk+o20LNaqsUjwjc6+Wz
MQD89KK/QH4e/tn6/wDCf/gninjXw3r3w/0XxxB8VptatfDVpq1nHNp3hyd4Z59OtrPz/tdt
p8upW8W+2hKM8SlmzEzOfQP2S/2ofA+v6h+yt4y1HxB8P/CVr4B13xuninTI7630yPw9LrMz
/Yo7ezkkEzWmbmMCSFZIoUVjI6CKQqAfl9XbeAfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V9
15dLbW8UaDLM7MXbOAoWF8sGKK/3t+whfa/4K/Yw8AXz+LtK0H/hA/2gI9K1HVLjxdZ2Nrba
Cba3uNQsoLt7hYp7SaVBM1vA7rOUEgR9u4c98Bv2oYvFHhb9qLwF4F+LVr4Ag8QeJLG7+GRv
fEEvhvStG0069NLdvaM5jSzT7POjvDGFlkTcFicqygA/PSvdvhV+wPr3xK8GfD3Wr3xb4K8I
p8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHoP2Lf2sfBn7In/C2dJ1vwvpX
jP8A4SjwprHhqw1qxF+P7Ra48lI4ZVe5ttmnv5TOzrCl2N4wy/dX6A+BXxh8N+OfhF+ywdKu
/hVpkfgTxXqbeKbLWdftdOm8HW8niXTtVilsV1C5WeT/AEe3MYlQzkxPPGzGQnAB+f8A478E
ap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFZNfpt8Jv2mPAWoa54/1
TxT478FXc/in4ga9qnwMutVKXZ8C3s7Xzf2reo0bvYWUtxLYssdyp/fR/aPIUIZwfsB/FrwV
8ItD8C/8JV458P6ynivxJ4ltfi+niHx49xBBdTqtpZtHYreLa6jb3O9DLdtBeR4aR2mjSLMY
B8f/AAq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhj
jZhj4/478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfoB+zX8QL
Lwl8Kf2Y9H0nxR8KvO+H/jnVG+IK+JNb0K5k0VP7VtJFl06TUZGKRNbI0gl0ptjupcMZcmvF
NW+N3ww+A3xI8dfE/wAHan/wnvxE1vxXqsvgqPUba7uLXwfafapGh1i7e9Xde6g6MrW6MZFi
OZpmaULEoBxXwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5Vj
BnaMkhjjZhj4/wCO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX
6AfAr43af8S/hF+yxczeJvh/qWq+DPFepzeObvxb4isrLVNE87xLp2rfbofts8UssskUDhpo
hLuSW4jPzsQOg+E37THgLUNc8f6p4p8d+CrufxT8QNe1T4GXWqlLs+Bb2dr5v7VvUaN3sLKW
4lsWWO5U/vo/tHkKEM4APin9nv8AYc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7W
kBVGMlx5YLbcBVBXe6b49x8E/wBkJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7
O2uBao8koVDcmIvtZgCgDn2v/gnvaXOna5+0Tqfi/wAY+CoNS8VfD/xL4Uiu9Y8caWs+sa1c
NAww8tzumSVt5F0CYXO4+aeat/s/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseINW8d6TZN8Mt
PiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK4B5pov8AwTQ8ZLL4WtfFHiPwV4D1bxv4kvvCugab
rE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw58K8d+CNU+GfjjWfDet232LWvD99Ppt/b+
Ykn2e4hkaORNyEq2HVhlSQccEiv0L+E/ifQdG+H37Mvhqy8V/B/WX+EXjLV9N8Xahf8AiWwt
BpEK+J9P1FNQ05ruaF50lgtW2zQJIGhlmjwHYgfD37WXjfTPiZ+1R8S/EmiXP23RfEHivVNS
sLjy3j+0W813LJG+1wGXKMpwwBGeQDSEa3wE/ZO1D47/AAw8eeMv+Eo8K+FPDfw5/s/+2LzW
jetj7bK8MHlpa207t+8TB+UY3KemSPKa+q/2KbnT/EP7CX7TPg3+3/CuleJPFf8Awi39j2et
a/ZaR/aH2fUJpp/Le6ljQ7Ixk/NxlR1YA+1/ss/tK3HgL9jr4FaX8LtY+FWm+JNL13V5PGX/
AAk3jafwvHZXDX0D2lzdQRX9mdRiNtsDForsbIPKAyGjLGfnTXbeAfgD4i+I/wAJPHvjfT47
UeHvhxBYzavNNOFfdeXS21vFGgyzOzF2zgKFhfLBiiv9l/smfHrwJ4u8DzaDqvi74f8Age68
CfHKH4w3DeVcadoepaPBGI5bfSIzEZmlBUeTaSRpI0boFBKuEtfAb9tC++NHhb9qLQNB+LF1
8PNS8eeJLHW/AI8Q+J20ODRLWbXpri/aOfzfKt3ENyryxwuZJR5mxZdpoA/PSvS9R/Zf1ay/
ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/WoDhg4X6r8CfEDWdI/ZV+B
OgfBH4xeCvBmu+FvEmvDxnet4sg8N2l5M9/btZX11bXhhlv7c2qqRm3mPloYmTcDEOg/ZS+P
+n/Bf9mjwHC/j74f2mp61+0faavfSabd2Vo0eiNEsFzeJb7Y5tOtJDDIh3RW5+zymNlWGYo4
B+dNerfsifsnah+2T8T4vBuheKPCuheJL7d/Z1nrRvU/tLZFLNL5bwW0yL5ccLE+ayZ3KF3H
IH3B4O/aU8K/Duz0238P+P8Aw/ocGjftSXFrp8Wn65DbJY+DbiVJp44gjgJo8kkaO6ri2ZkV
jkgGvNP2G7nwn4O/4LK6/wCMrbX/AIf+HPht4X8V+IPKvJtf07TLFLS4i1GG0+yI8qedEcxg
fZ1dUVkJ2qQSAfClFFFIQUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUU
UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFAB
RRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAV/U
V/waO/8AKNbQv+wzqH/pVLX8utf1Ff8ABo7/AMo1tC/7DOof+lUtejgPgrf4f/bonJit6f8A
i/Rn378Wf+P64+pr8P8A/gu14bOvXNnj+G00z/0/+Ov8a/cD4s/8f1x9TX4n/wDBb7XY9Hur
bf8AxWmmY/8AB/46/wAK7Kv8CHr+hw0/40vT9UfmZ+35oX9neEPgBD/c+HU/6+KPEB/rRVv/
AIKEaut74a+AUi9G+HU//qUeIBRXkT+Jnpx+FH76/spw/wDFpvgXz/zJfhD/ANNFhX4P/tN/
snah8d/jV8XPGX/CUeFfCnhv4c6f4X/ti81o3rY+26fbwweWlrbTu/7xMH5Rjcp6ZI/eH9lF
/wDi0vwK/wCxK8If+miwr8dPiDc6f4h8Gftc+Df7f8KaV4k8V6f8Pv7Hs9a1+y0j+0Ps8UM0
/lvdSxodkYyfm7qOrAH2c0/3el6L8j57I9cVX9X+Z8i/s9/sOePf2k/hX498b6FZ2tp4T+HW
lXOqanqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e4+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo
2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHPq3/BLzT7XSND+OGoan4i8FaFB4n+GOve
EtLTWPFOm6bPealcLbNDEIbidJAjAHEpURZVgXBBA6D9n74WR/A34E6PqvgPxt8KrX4x+Ob7
UNE1jxBq3jvSbJvhlp8VwbRms1NwWkluk8yQ3sAkdLcbYEzKJX8E+oPNNF/4JoeMll8LWvij
xH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/8AwT61DUPE
Ph7w9qvxL+FXhzxn4j1258NReGLvUb281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG
/jD4WfFTw5pGtfEP4Pz/AAq+EnjK4up77WL3RdTNtDbzRz3kmjJcRyakUuRAphayiAkmdWG1
97LV+HXxl8Dy+Mf2l/2gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvMvFsyxE93ZwlZEwZIonO9l
fahAByj/APBL/wAWaN4h0PSvEHjL4f8Ahi/8W+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtL
iNTcGJjgttC4Y5XjL/gntrHwk8AaBr3j/wAefD/wF/wkd9q2m2thqR1S8ukuNMvGs7xHNjZX
EQ2yrwRIQwYEE849A/YU+P8A4+1Dxx4Q1fxb4++H/wDwr3wd4rl8Ralc+NrvR9Q1ay/eRXt+
1jHdLLqnm3BjG02afPcPuDB97j0DwZ8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrH
hbTLzWBeXYlXUohKfPinL7rKacZQruVowoAPn/wP/wAE9tY8Z6R4Jv5fHnw/0W1+Juu3WheD
Hvjqjf8ACTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7f
zEk+z3EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/
pGlJdLduumwXyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/TNe8P+FdNnn1rxpaRe
Jr1bGfUITdNMljAF3rJeusoCxBsMVfDcZKEHwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7
jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA51vB37Bera94s03w1rPj34aeEfGGr+JLjwra+Ht
Q1K4vdRN7DcJasJVsLe5S3Rp2KI07x79jOuY8Oe1/wCCdmhX/wAPfiR8PPGdp4t+Cg0G68V2
TeI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJdPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1
z4nSXd+fFWo6b/b+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH
43v/AA/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/
iHc+H/BFnaqmnQfaNR1XUHeHTtMUhvLEsio7b5GUqiKrM2GONiOy/cOo/HLwX8XLH4J2HhXx
F8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DKp1z/TH82ykZmlgJkZgQ7GVMLz/wDwT8/bT8Ja
D+2Vo3gTTrP4aeGvgr4F8SeJNc0XxDqGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7
gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDH
GzDHJ8RfsUeKvhz8PNc8R+N7/wAP+A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW
4kkRfNzu2+1/sZfFnxh4b+Knhwa14l+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUm
SQRKsL2RcSTFWEhffJXu2o/tOeFv2lrH4J3mi6j8KpPC0HxH8TX/AI30zx4PD0d9pemX/iBL
1AqamTJ89nM5ZrFmBZSu4vGAAD4U/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+zrPWjep/aWyKWa
Xy3gtpkXy44WJ81kzuULuOQPKa+1v+Cb9z4H8Hf8FXZvGWja/wCFfDnwn8L67rP2G81rX7fT
Nmn3Ftfw2PlpeypcTZUxA4V3Tcpk25yfimkIK9W+An7J2ofHf4YePPGX/CUeFfCnhv4c/wBn
/wBsXmtG9bH22V4YPLS1tp3b94mD8oxuU9MkeU19V/sU3On+If2Ev2mfBv8Ab/hXSvEniv8A
4Rb+x7PWtfstI/tD7PqE00/lvdSxodkYyfm4yo6sAQD50+Ffwr8RfG/4h6T4T8J6Tda54h1y
cW9lZW4G+VsEkkkhVRVDMzsQqKrMxCqSPVvgx/wT28a/GzwnqGr2Wq+CtMgh8SN4O0wX2tps
8R60LeW4FjZzwiSBndIwEklljhlaaJUkYtxq/sU/FTw8vw88efC2/wBWtfhxrfxMgjsrLx7k
hI1BydI1ByGaDTLlgvmTW+xlYKZvPgBRPQNe8RHw5/wRmvPBFxrnw0k1+w+Jz3cumWuq6Jda
k2mpCYDcxiJ2mkf7YNomQtK1vjDG0IoA8f8A2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNR
NrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/wC0f8N5PFGlal4V0iwuNdHhbR01
jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/4clf8ACM/2z8P/AO2P+Fj/ANqf
2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1H/Cdx/8OSv+EZ/tn4f/ANsf8LH/ALU/sn7X
pP8AbP8AZf2fyvtHk5+1eb9r+Tfjz/I4z9mpjPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtH
TWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/wC0f8N5PFGlal4V0iwuNdHh
bR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/4clf8ACM/2z8P/AO2P+Fj/
ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1H/Cdx/8OSv+EZ/tn4f/ANsf8LH/ALU/
sn7XpP8AbP8AZf2fyvtHk5+1eb9r+Tfjz/I4z9moA8p/Z7/YP8X/ALR/w3k8UaVqXhXSLC41
0eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/wBg/wAX/tH/AA3k8UaVqXhX
SLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/wCE7j/4clf8Iz/bPw//
ALY/4WP/AGp/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8ACdx/8OSv+EZ/tn4f/wBs
f8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9moA8p/Z7/AGD/ABf+0f8ADeTxRpWp
eFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80aK5YkC38Kv2B9e+JXgz4e61e+
LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY+l/8J3H/wAOSv8AhGf7
Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8
S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wTQ8ZLL4W
tfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5NF/wCCaHjJ
ZfC1r4o8R+CvAereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHP1sfjd4
H+Jep/Aq58H+Jvh/qWg+DPiP4hm1q78W+Irey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLc
Rt+8ZgLev/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl
5LWHG+JJP3cs8RAckKAfmT478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrD
Kkg44JFdr+yV+y/q37YvxrsfAWgaz4f0bXdUgnmsTrD3CQXbRIZXiDQwylX8pZHBcKuI2G7c
VVv0W/Z6/aM+EWkfFSDxBZ+M/D+o+CfiT8QPGc/xBt/EPii5s4LOG9maDSvL0aS4hiuLe4hl
hMss1pciMNIZJIRAfK5T/gnV8XvBv7M+h/A9LXxr4K8LwWviTxDb/F/b4gs1nv7zbJaaKWPm
GS7skFyrK9r5lmhLzuVKPIoB8afs9/sH+L/2j/hvJ4o0rUvCukWFxro8LaOmsaibWTxBrBtZ
LpbC3IRkSVo0UKbh4Y3eaNFcsSAfCT9jFPi5pHgV4fir8KtK1r4h3zabpeg3d5qE+qQ3AuRb
IlzHa2cy23mOyMhldQyOGBwG2+wa94iPhz/gjNeeCLjXPhpJr9h8Tnu5dMtdV0S61JtNSEwG
5jETtNI/2wbRMhaVrfGGNoRXn/7E2veFfhD8JPix8UZ7zw//AMLQ8BwaUPAVhq00MiG6urpo
ri+htHINxcWkSiSPIeOJiJHRiqFQDn/hH+wx4k+MP7X+r/BOz17wrpvi/Sr7UdNSa+luhY39
xYtIJkikjt3YZSKWRTIiArGQSGKqfP8ASPgl4n1/4Qav48sNM+2+FvD99Dp2qXVvcxSSabLM
CYWngDGaOKQgoszIImcFA5f5a/Qv/gnV+0p4V+Geh/A/xD/wn/h/RH1bxJ4huvjLc6lrkMOq
6vfzLJBo8l0J3F1c24+1hy0Qe3jdpZpdrxySL8v/ALE+o6f+yz4n8T/EfxX4r0pNM8NfaPDk
3g/S9TstRuviDLKhWTT5Ix50J0pgA0126PEQqiDfMVaMA8//AGRP2TtQ/bJ+J8Xg3QvFHhXQ
vEl9u/s6z1o3qf2lsilml8t4LaZF8uOFifNZM7lC7jkDymvrb/gkNrWk6L/wUJ0Dx7quoeCv
A/hPQJ7+a8GoeIbfT4LBbqxvIoYoFvLjz50V2VMqZWUFTI3O4/JNIR6XqP7L+rWX7Jlh8Yo9
Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC+aV+gP7C3x30/4GfsYfCq0f
xl4V0TU9V/aA03VL6FtWsjqFlohtkt7m4kXeZrOJjDJFI7eWWhkdWJhnIk9g/Z6+Kvwa+Dnx
UgTSPFPgqT4feKPiB4ztPHWn3vixrfStMtp5mtNIS00hLmK0urKeB4N8xtbqNELM0sSQ/umM
/J6vS/2Sv2X9W/bF+Ndj4C0DWfD+ja7qkE81idYe4SC7aJDK8QaGGUq/lLI4LhVxGw3biqt9
bfsdfHu3+DH7NHwu0e/+IOlaP4k8L/tAW2m3aw+JoTJZ+GpYoJL9FkjlIOlS3MQkkKsbaR0D
ksQDXtf7P3x3+FnwK+OHg698DeMvh/4T8GS/Efxl/wALEWz1a0s/tqtcXFt4dxEXEs+npFdR
lPsytaRZeV9hjeRQD83v2e/2X9W/aR0Px7d6LrPh+xn+Hvhu58VXllqD3CT31lbKTMYDHC8Z
dSY12yPHkyrjIDlfNK+tv+Camn2vgzXP2hdP1rxF4K0ae/8AhjrnhKzfUPFOm2sF/qVy0awx
QTSTiOZGMEn72NmiA2kuA6FvYP2Wf2lbjwF+x18CtL+F2sfCrTfEml67q8njL/hJvG0/heOy
uGvoHtLm6giv7M6jEbbYGLRXY2QeUBkNGUI+KfBH7S/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9T
um3OznfcXNpJM/LHG5zgYUYAABoXh27/AGk/E/i3V7/Xfh/4VutI0OfXHjuo7bQLXUvsyRoL
Oyt7aFYWu5ARsiVF8wq7E5JJ5/4s6zF4j+KniXULdPD8cF/qt1cRJoNrLa6UqvM7AWkMqrJH
b4P7tHVWVNoIBBFfS3/BLDxOujaH+0Bpl94r8P8Ah7TfFPwx1TRLW01jxLaaTBqmqzqFs1CX
E0ayOF+0gSYKxCVgzJ5o3AHyTRX3X8Av2pdf/Zt/4Jc6HrXhvxh4VXxx4f8AiP8AbLXSrnXr
OTVE8OMLeSe0W384XcVpPqVrE00EPltKgZ2BiZnPu3/BO34maT8dNc/ZytPDl14K0aBPEnjL
WfiD4O0y9t9Lga9kY3mlONMkkWS8S3CW7QSRpN9nFsnzoYTtYz8nqKt69r194p1y81PU7y61
HUtRne6u7u6maae6mdizySOxLM7MSSxJJJJNfoX+yz+0rceAv2OvgVpfwu1j4Vab4k0vXdXk
8Zf8JN42n8Lx2Vw19A9pc3UEV/ZnUYjbbAxaK7GyDygMhoyhH500V+gP7Jnx68CeLvA82g6r
4u+H/ge68CfHKH4w3DeVcadoepaPBGI5bfSIzEZmlBUeTaSRpI0boFBKuE9L039tyLxN4F8E
618G9T+GmjalqvxA8Ua54nj8YeM5fCr6c11q6XNjPfW0Go2v9oJ9keMP8l4oWAxL0ZGYz88/
+GoPGf8Awzx/wqr7bpX/AAgv27+1PsH9h2Hnfa92ftH2nyftHm7f3e/zN3lfus+X8lVfAPwB
8RfEf4SePfG+nx2o8PfDiCxm1eaacK+68ultreKNBlmdmLtnAULC+WDFFf7g/Z1/au0zw98I
PBFwPHnhXw9qtr+0eqtBoupPpdrp/ha6EU95FbQSlJ7fRJJ0V2ikVYyUTzF3rwfCP9ojzbH9
q74Y+Dfi7pXgT+0fFdq/w2kfxT/Yuh6Tp6+IJ3vJLG4DrBBF5EyO0cB3zR7tiSYIoA/Omire
vadFpGuXlpb39rqkFrO8MV7arKsF4qsQJYxKiSBGA3AOitgjKqcgfot/wTy8T6zpv7Cnw016
PxXa+HYPCPx5t7K91HUfEsGjpaeHpLS2ur+xSWeaMPbyyKs0lpGT5rJvMbFSQhH5vUV+pmm/
tXaNovgXwTH8ANV+D+kwRfEDxRe6wuueLJ/BtpYQy6ukum3E1lHfWLXtv9hMI2tBchY4BCEU
q8R8+8N/tfaz8EP2DbnxV4T8W/DSHxhp3xdutVsNH0rVIIRF4allimls7KxkkS/ttMn1G2hZ
rVVikeEbnXy2Zixn56UV+gOg/HvX/HX7OPwWm+DPxC+H/wAH/EkPivxFqXjewt/E1n4U02xu
LnUYJrN57OeVTe2kdttRQsdyFiiMOGKmOu1/Yu8X6zB+yZ4N8SR+N/D+mweFv2hxa3uuLrMH
hvTl8PSQQXd/a2qzm1CWU8irObGONN+wEwZQhQD8yaK/WH4SftIfCaf4yfs9+J9C8X+CtG8E
/Dfxl8Qbe/hmvINKfSbbV7yUaUY7KUxzG3dLmD54ojHAofzDGIpNnxp+xb+1R4Y/Y4/4Wzon
ivwXpXiO/wDEfhTWPDEN/Y3ct59onn8lFtZZIL2O3bT2aFmaa3BnO4GOUqRhCOf+FX7A+vfE
rwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfH/HfgjVPh
n441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr9APgV8YfDfjn4RfssHSrv4
VaZH4E8V6m3imy1nX7XTpvB1vJ4l07VYpbFdQuVnk/0e3MYlQzkxPPGzGQnHQfCb9pjwFqGu
eP8AVPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o8hQhnDGfF
PwT/AGQm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAO
T4C/sUeKvjz+1Hd/B6O/8P8AhnxtaT3tk0GsTzGB7qz3+fbiW2imXeqxTMGOIyImw5JQN9A/
BzwVc/Db4SWfijwt8Q/hpP8AHX4larqeneI/Fev/ABA0uC5+HlqLpraWa3L3LSTXF4DLK1/D
5kggyIV3SiV/YP2B/ib4H/ZQ/wCFN2Om/ET4f6XYaX4r8SWPxavbPXreP+27lfNstEmHmstx
daftuFdGgQ20e55pQjRyOgB+X1e7fCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpo
Z4rZ2H2S1nSFPPlWMGdoySGONmGNv4PeN/CH7HXhi48WQ3OleMPjbFfT2Oh2SRi80fwOYX2H
VZJcG3v7tiM2qwtLbxgCZ2dxHGPpX4FfG7T/AIl/CL9li5m8TfD/AFLVfBnivU5vHN34t8RW
Vlqmied4l07Vvt0P22eKWWWSKBw00Ql3JLcRn52IAB+f/jvwRqnwz8caz4b1u2+xa14fvp9N
v7fzEk+z3EMjRyJuQlWw6sMqSDjgkV6V+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7x
QTtBCZ2tICqMZLjywW24CqCu903x7vtb4TftMeAtQ1zx/qninx34Ku5/FPxA17VPgZdaqUuz
4FvZ2vm/tW9Ro3ewspbiWxZY7lT++j+0eQoQzjxT/gnvaXOna5+0Tqfi/wAY+CoNS8VfD/xL
4Uiu9Y8caWs+sa1cNAww8tzumSVt5F0CYXO4+aeaAPn/AFH9l/VrL9kyw+MUes+H7vw9d+JD
4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDher+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/
CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfqD/gn98RI/ht+yPo/g8+OPh/pF1B8ch/
wmOl6l4q0mO11Pwy+mx2mob0nm8m9tHDMo8vzA5UPHkoGHQaH8Ufhz4htv2frb4ea18ND4P+
HXxA16O//wCEj1210y98NaU3iuw1SzurVNRniuHdrK3C+YiysY5J4m/eFgAD83/HfgjVPhn4
41nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr0D9rLxvpnxM/ao+JfiTRLn
7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBrz+kIKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKK
KKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACv6iv+DR3/lGtoX/YZ1D/ANKpa/l1r+or/g0d/wCUa2hf
9hnUP/SqWvRwHwVv8P8A7dE5MVvT/wAX6M+/fiz/AMf1x9TX4Yf8F8bCW9vLER5/49NL/wDT
/wCO6/c/4s/8f1x9TX4sf8FrLSK51K08zHFppnX/ALD/AI5rsqK9GC8/0OGDtWk/L9Uflz+3
hp8lr4H/AGfkbOR8Orjt/wBTT4gorpf+Cj1tHFpPwGVMbR8Opcf+FNr9FeVONpNHpQd4o/eP
9lEN/wAKn+BX/Yl+D/8A00WFfhD+03+ydqHx3+NXxc8Zf8JR4V8KeG/hzp/hf+2LzWjetj7b
p9vDB5aWttO7fvEwflGNynpkj92f2U/G2k2nws+BVrJPGLj/AIQvweu0tzk6RYY/mK/Hvx/e
6b4m8FftceD11/wppfiLxbp3w+OjWms6/ZaR/aIt4oZpvLe6ljQ7Ixk/N3UdWAPrZov3FL0X
5Hg5JTccTWb7v8z5G/Z7/Yc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx
5YLbcBVBXe6b49x8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyS
hUNyYi+1mAKAOfVv+CXmn2ukaH8cNQ1PxF4K0KDxP8Mde8JaWmseKdN02e81K4W2aGIQ3E6S
BGAOJSoiyrAuCCB0H7P3wsj+BvwJ0fVfAfjb4VWvxj8c32oaJrHiDVvHek2TfDLT4rg2jNZq
bgtJLdJ5khvYBI6W42wJmUSv4R9MeaaL/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE
+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavh/wD4J9ahqHiHw94e1X4l/Crw54z8R67c+Gov
DF3qN7eapZahBemyMNyLK0nig3y4KPJIEdGDKxAbHpX7GXjfxh8LPip4c0jWviH8H5/hV8JP
GVxdT32sXui6mbaG3mjnvJNGS4jk1IpciBTC1lEBJM6sNr72Wr8OvjL4Hl8Y/tL/ALQFnPpU
Hj7Tddj1f4b6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQgA5R/+CX/izRvEOh6V4g8Z
fD/wxf8Ai3xXqPg/w5FfS6jN/b13Y3q2MzxG2s5hFF9pcRqbgxMcFtoXDHK8Zf8ABPbWPhJ4
A0DXvH/jz4f+Av8AhI77VtNtbDUjql5dJcaZeNZ3iObGyuIhtlXgiQhgwIJ5x6B+wp8f/H2o
eOPCGr+LfH3w/wD+Fe+DvFcviLUrnxtd6PqGrWX7yK9v2sY7pZdU824MY2mzT57h9wYPvceg
eDPj3rX7Qvxp8M65deLvgpqfwYuviPql/P4a8aReHYtY8LaZeawLy7Eq6lEJT58U5fdZTTjK
FdytGFAB8/8Agf8A4J7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/4SZ7e5jtWniENlIYImml
VV+1CF+pKKOa8U8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv
sv8AZt+MNzo37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYtow
zXDA5EnmOPn74gWdj+1t+0Z8YvF+ma94f8K6bPPrXjS0i8TXq2M+oQm6aZLGALvWS9dZQFiD
YYq+G4yUIPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKhuTE
X2swBQBzreDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18PahqVxe6ib2G4S1YSrYW9ylujTsU
Rp3j37Gdcx4c9r/wTs0K/wDh78SPh54ztPFvwUGg3XiuybxHa+ILzRotY8PW9pdRM0qjVESW
PfFIzpLp7uSUwWWSNVXtvgCbXRv21I/iN4R8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5La
4q3YeW2l81ZrSR5WK/vGE0YUMZ8/eIv2KPFXw5+HmueI/G9/4f8AAcGl6re6FY2msTzPd+I7
+zMi3UNiltFMJUikjETXDFLcSSIvm53bT9jP9hzx7+3X8Q7nw/4Is7VU06D7RqOq6g7w6dpi
kN5YlkVHbfIylURVZmwxxsR2X7h1H45eC/i5Y/BOw8K+IvhVrngfw/8AEfxM3i5PHk+jyX0O
j3XiBLqGVTrn+mP5tlIzNLATIzAh2MqYXn/+Cfn7afhLQf2ytG8CadZ/DTw18FfAviTxJrmi
+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfBXh
FPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf+H/
AAHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCVIpIxE1wxS3EkiL5ud232v9jL4s+MPDfxU8ODW
vEvwf8O/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pzwt
+0tY/BO80XUfhVJ4Wg+I/ia/8b6Z48Hh6O+0vTL/AMQJeoFTUyZPns5nLNYswLKV3F4wAAfC
n7In7J2oftk/E+LwboXijwroXiS+3f2dZ60b1P7S2RSzS+W8FtMi+XHCxPmsmdyhdxyB5TX2
t/wTfufA/g7/AIKuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6blM
m3OT8U0hBXq3wE/ZO1D47/DDx54y/wCEo8K+FPDfw5/s/wDti81o3rY+2yvDB5aWttO7fvEw
flGNynpkjymvqv8AYpudP8Q/sJftM+Df7f8ACuleJPFf/CLf2PZ61r9lpH9ofZ9Qmmn8t7qW
NDsjGT83GVHVgCAfOnwr+FfiL43/ABD0nwn4T0m61zxDrk4t7KytwN8rYJJJJCqiqGZnYhUV
WZiFUkerfBj/AIJ7eNfjZ4T1DV7LVfBWmQQ+JG8HaYL7W02eI9aFvLcCxs54RJAzukYCSSyx
wytNEqSMW41f2Kfip4eX4eePPhbf6ta/DjW/iZBHZWXj3JCRqDk6RqDkM0GmXLBfMmt9jKwU
zefACiega94iPhz/AIIzXngi41z4aSa/YfE57uXTLXVdEutSbTUhMBuYxE7TSP8AbBtEyFpW
t8YY2hFAHj/7Pf7B/i/9o/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4
eGN3mjRXLEgH7Pf7B/i/9o/4byeKNK1LwrpFhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm
4eGN3mjRXLEgerf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+T
fjz/ACOM/ZqP+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5
qJmlUaoiSx74pGdJdPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/AG/p
Gm2+qealyW1xVuw8ttL5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/w/wCA4NL1W90K
xtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wAEWdqqadB9
o1HVdQd4dO0xSG8sSyKjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f8AiP4m
bxcnjyfR5L6HR7rxAl1DKp1z/TH82ykZmlgJkZgQ7GVMLz//AAT8/bT8JaD+2Vo3gTTrP4ae
Gvgr4F8SeJNc0XxDqGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK
8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvh
z8PNc8R+N7/w/wCA4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZ
fFnxh4b+Knhwa14l+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWE
hffJXu2o/tOeFv2lrH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsW
YFlK7i8YAAPhT9kT9k7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzW
TO5Qu45A8pr7W/4Jv3Pgfwd/wVdm8ZaNr/hXw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE
2VMQOFd03KZNucn4ppCCvVvgJ+ydqHx3+GHjzxl/wlHhXwp4b+HP9n/2xea0b1sfbZXhg8tL
W2ndv3iYPyjG5T0yR5TX1X+xTc6f4h/YS/aZ8G/2/wCFdK8SeK/+EW/sez1rX7LSP7Q+z6hN
NP5b3UsaHZGMn5uMqOrAEA+dPhX8K/EXxv8AiHpPhPwnpN1rniHXJxb2Vlbgb5WwSSSSFVFU
MzOxCoqszEKpI9W+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5bgWNnPCJIGd0j
ASSWWOGVpolSRi3Gr+xT8VPDy/Dzx58Lb/VrX4ca38TII7Ky8e5ISNQcnSNQchmg0y5YL5k1
vsZWCmbz4AUT0DXvER8Of8EZrzwRca58NJNfsPic93LplrquiXWpNpqQmA3MYidppH+2DaJk
LStb4wxtCKAPH/2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJ
K0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIy
JK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7Z/sv7P5X2jyc
/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf2fyvtHk5+1eb
9r+Tfjz/ACOM/ZqYzyn9nv8AYP8AF/7R/wAN5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLY
W5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQPVv+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T9r0n+2f7L+z+
V9o8nP2rzftfyb8ef5HGfs1H/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP9s/2X9n8r7R5
OftXm/a/k348/wAjjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdL
YW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJd
LYW5CMiStGihTcPDG7zRorliQPVv+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5
X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7XpP9s/2X9n8r7R
5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0t
hbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ4r
Z2H2S1nSFPPlWMGdoySGONmGPpf/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7XpP9s/2X9n8r
7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2LxbqOji70OF9Ts5
o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE
+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOavjL/gntrHwk8AaBr3j/wAefD/wF/wkd9q2m2th
qR1S8ukuNMvGs7xHNjZXEQ2yrwRIQwYEE84+wD8bvA/xL1P4FXPg/wATfD/UtB8GfEfxDNrV
34t8RW9lqmiafN4tsdWtr6H+054rmWWS0gw0yiV2SW4jb94zAcTpPxq1X42/tC6NqLePfgV4
o+CB+J2r3TaP4qg0GDUdC0q61n7VcyFNXgjudlzDMZFa2eRvl2HZJGI1APl/4YfsYp8UtQ0G
ztfir8KrO/8AFuuyaFoNnNeahNdao6zRwRztFb2cr2kU0kgEf21bd2ALbAo3VreMv+Ce2sfC
TwBoGveP/Hnw/wDAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj3Xwv4F8G/
DDw5e+JfgJ4z+Glj4n+I/iTWtNs9d1/xjZ6Rc/DDw6t9JbW5t4LuYXf2i6tjva62NcRwApHG
Hk8xsr9nO58UaNqvwz8I678Rf2dPGfwq8D+MrrSdS07VpfD8r6RZDUke+nhk1SCOaa3u0ZpY
5bR5dygDKOgQAHj/AIH/AOCe2seM9I8E38vjz4f6La/E3XbrQvBj3x1Rv+Eme3uY7Vp4hDZS
GCJppVVftQhfqSijmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUk
HHBIr7L/AGbfjDc6N+0ZayaH4x+Gmnfs7/D34gX+saOviyXS7u/0jSkulu3XTYL5JdXR5oIo
hGLaMM1wwORJ5jj0Dx3+0zqPxj8C/DvWvgN8TvD/AMMNSu/iB4s1zxZHq3iux8NPG17q8VzY
T6jbSzD7ei2hQHYlyu2Nohu2lKAPzerq9I+CXifX/hBq/jyw0z7b4W8P30OnapdW9zFJJpss
wJhaeAMZo4pCCizMgiZwUDl/lr9Afgp+07ceDv2cfhHYfC7xJ8FB4ksPFevT+MrvUvFU/gvT
UuH1GKS0vXsYrvTjeWj2xTCtazhIoRCIkKtCfn79mL4iaH8Hvi/8QPjRrfiXwraeHYL6/wBM
TwV4XEccfj37WXY6bHYXKM1vohQqXluoPkRY0jU3AHloR5V+yJ+ydqH7ZPxPi8G6F4o8K6F4
kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19bf8Ehta0nRf+ChOgePdV1DwV4H
8J6BPfzXg1DxDb6fBYLdWN5FDFAt5cefOiuyplTKygqZG53H5JoA9L/ZK/Zf1b9sX412PgLQ
NZ8P6NruqQTzWJ1h7hILtokMrxBoYZSr+UsjguFXEbDduKq3mlfpZ/wTM+O/gf4FeB/gFe6b
4y8K+E7CXXde/wCFtLcatb2d9e3LRvbaJ50Uri4ntEW6UjyFa2iYySy7GjkkXJ/Zr+JL+Bfh
T+zHovhD4j+FfCH/AAhHjnVG+K1rb+OtP0WO+T+1bRklnDXMa6pEbJCqyw+ejIpRWONtMZ+d
Nel6j+y/q1l+yZYfGKPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/wDrUBwwcLlf
tKaz4e8R/tGeP9Q8IJax+E7/AMSajcaKlram1gWye6ka3EcJVTGnlFMIVXaMDAxivsv9hb47
6f8AAz9jD4VWj+MvCuianqv7QGm6pfQtq1kdQstENslvc3Ei7zNZxMYZIpHbyy0MjqxMM5Ei
Efn9RX6g+I7nwn8a/jF+zbpvw41/4fyWHwv+MniP7bp0Ov6dpv2S0n8TwXVl9jt5ZYzdRPbb
TH9kWRTjYvzDbXa+H/E+s6bZ6zr0fiu18OweEf2stTsr3UdR8SwaOlp4eklW6v7FJZ5ow9vL
IqzSWkZPmsm8xsVJDsOx+ZPgj9pfxF8P/DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfl
jjc5wMKMAADlfG/jK7+IHie51e/h0qC6u9m+PTdLttMtV2oqDZb20ccKcKM7UGTljkkk/ptp
v7V2jaL4F8Ex/ADVfg/pMEXxA8UXusLrniyfwbaWEMurpLptxNZR31i17b/YTCNrQXIWOAQh
FKvEfPvDf7X2s/BD9g258VeE/Fvw0h8Yad8XbrVbDR9K1SCEReGpZYppbOysZJEv7bTJ9Rto
Wa1VYpHhG518tmYgH56V7X+zp8Q/EWpeGNa0iw1L4KaJa+EdDvNcSTxd4U0Oe61Xy3DmzguL
mylmuLuQyHyomfkKVBUKBX0X+xf+0X4K+Ivwrh0zXfEHgr4Zal4O+Ndt8Zry0ukew0qbSooV
Way0tI1lZ7iNgBHa4DMjIEL7X21P2RPjdp/xL+OH7X+txeJtK8L+G/in4U8SppVh4i8RWWj/
ANoahf3DvYRtHNOqPKsb3CmQFki8xgXUSDcAfFXjfxld/EDxPc6vfw6VBdXezfHpul22mWq7
UVBst7aOOFOFGdqDJyxySSeq/wCGoPGf/DPH/Cqvtulf8IL9u/tT7B/Ydh532vdn7R9p8n7R
5u393v8AM3eV+6z5fyV9bf8ABPv4r6jp/wABPCfhs/Evw/4F01PEl3fW+taD42sfDur+F7nY
o3azpl8YIdcspHFrINjSSLFDNEJeluOr+BHxqWz8C/s4R+CPih4K8OweG/iBrF78Ul07xJae
DrTVYZNXtZYrh7KdrNrq3axUiNVgISNfJ2IVMQAPzeor9IdX/a+X4Ifsi+IvFXws8W+CobzT
vjXqGq+GtHGqWkN7F4PluY5hZw2PmJeW1lPe21u0trEsTvGCzL5TMx/PTx34q/4TrxxrOt/2
bpWj/wBsX0999g0u3+z2Nj5sjP5MEeTsiTdtRcnaoAycUhGTRX3X8Jvijr//AAx1+z5pfwZ+
KfhX4ceJNA13W5PG/wBs8XWfh2P7RJfWz2dzfQTyIdQiW2VRlYrgbI2iwSDHXpf7Afxe+G3w
o0PwK1/418FapoXi3xJ4lt/ihDe+IJtL0q1adVtNONpoJktYJbK5R4i7yWEywozbzbrBiEA+
FP2e/wBl/Vv2kdD8e3ei6z4fsZ/h74bufFV5Zag9wk99ZWykzGAxwvGXUmNdsjx5Mq4yA5U/
Z7/Zf1b9pHQ/Ht3ous+H7Gf4e+G7nxVeWWoPcJPfWVspMxgMcLxl1JjXbI8eTKuMgOV9r/4J
qafa+DNc/aF0/WvEXgrRp7/4Y654Ss31DxTptrBf6lctGsMUE0k4jmRjBJ+9jZogNpLgOhap
/wAEtrnT9P8A+F8f2lr/AIV0L+3fhVrPhrTv7a1+y0v7bqF55X2eGP7TLHu3eS+XHyJ8u9l3
LkA8f1H9l/VrL9kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/9agOGDhfN
K/Qv/gnB8XrXwz+xX4a8PweNfBWiPP8AGuK48W6RrniDTbFNR8LzaXFbX4mtryRVubd1crsC
uSygoN8YK+g/sf8AxV+B/wAHNc8Pp4Y8U+H5Phf4o8ZeLLTxjp/iHxZeW8GmWU7fZNFSPSJ7
mJLq3nt3t/NmuLW62AyNLLEIT5TGflnXpeo/sv6tZfsmWHxij1nw/d+HrvxIfCstjC9wNRsb
0QSXGJFeFYihhRX3Ryv/AK1AcMHC/VfgTxx4l8Ofsq/Anw18Ifit4K+Hnizwj4k16Px47eON
O0e0a6a/tzaXV0rzBNUtxboMSRJdRtGhjG7Gyu2/Yx+OXw/+FX7I9pp3i3xF8P7jxJ4o+Ml5
eeHtVsJ7ER+EbifTXs7XxKdJk8kR2kFyjMsVzFAI0dJQilYgQD80692+FX7A+vfErwZ8Pdav
fFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMe1/Zp/a10D9krxx8ct
M8b6PpXxJ1rxVoeveHP+Ensb+8v/AO3rq4kjXEs32uFZNPmeJ5WnjQXTeYCr4OB7X8CvjD4b
8c/CL9lg6Vd/CrTI/AnivU28U2Ws6/a6dN4Ot5PEunarFLYrqFys8n+j25jEqGcmJ542YyE4
APz/APHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr0r9nv9hzx
7+0n8K/HvjfQrO1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3fa3wm/aY8B
ahrnj/VPFPjvwVdz+KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o8hQhnHi
n/BPe0udO1z9onU/F/jHwVBqXir4f+JfCkV3rHjjS1n1jWrhoGGHlud0yStvIugTC53HzTzQ
B8aUV+kP7O3x7vvg/wDsq/Bfw18Ntb+D9l4s8OeJNbj8bvrnxAbQrS1uvt8JtrqZbXUbZNVt
zbhf3ipeRmOARpn5kb508NfGnwJ+z/qHiH4m6S/hXxR8XtY12+fw1puk6PcW/hnwGnnMV1NI
bqJPOlOQbKDayW6hZJsyqsKoRz/wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGe
K2dh9ktZ0hTz5VjBnaMkhjjZhj4/478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibk
JVsOrDKkg44JFfoB8Cvjdp/xL+EX7LFzN4m+H+par4M8V6nN45u/FviKystU0TzvEunat9uh
+2zxSyyyRQOGmiEu5JbiM/OxA6D4TftMeAtQ1zx/qninx34Ku5/FPxA17VPgZdaqUuz4FvZ2
vm/tW9Ro3ewspbiWxZY7lT++j+0eQoQzhjPin4J/shN8arPwoF+I3w08Pat431U6PoujahfX
dxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc9Xov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCuga
brE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw57b9ijR/EXgP9oDwr421Dx58FNUjvfHML
+M5PEGt6Hd6xpX2S+R57pbjURufzUkklS502aXzCuS/mIoH0Afjd4H+Jep/Aq58H+Jvh/qWg
+DPiP4hm1q78W+Irey1TRNPm8W2OrW19D/ac8VzLLJaQYaZRK7JLcRt+8ZgAD81PHfgjVPhn
441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIrJr0D9rLxvpnxM/ao+JfiTRL
n7boviDxXqmpWFx5bx/aLea7lkjfa4DLlGU4YAjPIBrz+kIKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigAr+or/g0d/5RraF/wBhnUP/AEqlr+XWv6iv+DR3
/lGtoX/YZ1D/ANKpa9HAfBW/w/8At0TkxW9P/F+jPv34s/8AH9cfU18y+H/+Rg1f/sHW3/qQ
+K6+mviz/wAf1x9TXzL4f/5GDV/+wdbf+pD4rr2ML/y79f0PExm0/T9Uflb/AMHFH/J1vw3/
AOydwf8Ap51eij/g4o/5Ot+G/wD2TuD/ANPOr0V5uJ/jT9X+Z34b+DD0X5HkejnP7XXwN/7A
Pw5/9MWjV81/FT9k7UPjv4o8feMv+Eo8K+FPDfw58PeD/wC2LzWjetj7bpVrDB5aWttO7fvE
wflGNynpkj6S0dv+Mvfgb/2Afhz/AOmLRq4KC50/xD8Cf2m/Bv8Ab/hXSvEnivw98Nv7Hs9a
1+y0j+0Ps9rbzT+W91LGh2RjJ+bjKjqwBxxP8KPy/I6cN/El8z5f/Z7/AGHPHv7Sfwr8e+N9
Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV3um+PcfBP8AZCb41WfhQL8Rvhp4
e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA59W/4Jeafa6Rofxw1DU/EX
grQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKiLKsC4IIHQfs/fCyP4G/AnR9V8B+
NvhVa/GPxzfahomseINW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK/nneeaaL/wT
Q8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5q+H
/wDgn1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeKDfLgo8kgR0YM
rEBselfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaOe8k0ZLiOTUilyIFML
WUQEkzqw2vvZavw6+MvgeXxj+0v+0BZz6VB4+03XY9X+G+meIHtzIkupapN5l4tmWInu7OEr
ImDJFE53sr7UIAOUf/gl/wCLNG8Q6HpXiDxl8P8Awxf+LfFeo+D/AA5FfS6jN/b13Y3q2Mzx
G2s5hFF9pcRqbgxMcFtoXDHK8Zf8E9tY+EngDQNe8f8Ajz4f+Av+EjvtW021sNSOqXl0lxpl
41neI5sbK4iG2VeCJCGDAgnnHoH7Cnx/8fah448Iav4t8ffD/wD4V74O8Vy+ItSufG13o+oa
tZfvIr2/axjull1TzbgxjabNPnuH3Bg+9x6B4M+PetftC/Gnwzrl14u+Cmp/Bi6+I+qX8/hr
xpF4di1jwtpl5rAvLsSrqUQlPnxTl91lNOMoV3K0YUAHz/4H/wCCe2seM9I8E38vjz4f6La/
E3XbrQvBj3x1Rv8AhJnt7mO1aeIQ2UhgiaaVVX7UIX6koo5rxTx34I1T4Z+ONZ8N63bfYta8
P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSK+y/2bfjDc6N+0ZayaH4x+Gmnfs7/D34gX+saO
viyXS7u/0jSkulu3XTYL5JdXR5oIohGLaMM1wwORJ5jj5++IFnY/tbftGfGLxfpmveH/AArp
s8+teNLSLxNerYz6hCbppksYAu9ZL11lAWINhir4bjJQg+Cf7ITfGqz8KBfiN8NPD2reN9VO
j6Lo2oX13cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHOt4O/YL1bXvFmm+GtZ8e/DTwj4w1f
xJceFbXw9qGpXF7qJvYbhLVhKthb3KW6NOxRGnePfsZ1zHhz2v8AwTs0K/8Ah78SPh54ztPF
vwUGg3XiuybxHa+ILzRotY8PW9pdRM0qjVESWPfFIzpLp7uSUwWWSNVXtvgCbXRv21I/iN4R
8c/B/UPAGufE6S7vz4q1HTf7f0jTbfVPNS5La4q3YeW2l81ZrSR5WK/vGE0YUMZ8/eIv2KPF
Xw5+HmueI/G9/wCH/AcGl6re6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bT9j
P9hzx7+3X8Q7nw/4Is7VU06D7RqOq6g7w6dpikN5YlkVHbfIylURVZmwxxsR2X7h1H45eC/i
5Y/BOw8K+IvhVrngfw/8R/EzeLk8eT6PJfQ6PdeIEuoZVOuf6Y/m2UjM0sBMjMCHYyphef8A
+Cfn7afhLQf2ytG8CadZ/DTw18FfAviTxJrmi+IdQ1S60e5EFyt1DbTym4vI4ru48maC2Tz4
JZ0gLY27XcAHyp8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+
VYwZ2jJIY42YY5PiL9ijxV8Ofh5rniPxvf8Ah/wHBpeq3uhWNprE8z3fiO/szIt1DYpbRTCV
IpIxE1wxS3EkiL5ud232v9jL4s+MPDfxU8ODWvEvwf8ADvwq+HHjK41Wez1jUdF1c+H4Y5o7
q8h0YXDXWpMkgiVYXsi4kmKsJC++SvdtR/ac8LftLWPwTvNF1H4VSeFoPiP4mv8AxvpnjweH
o77S9Mv/ABAl6gVNTJk+ezmcs1izAspXcXjAAB8Kfsifsnah+2T8T4vBuheKPCuheJL7d/Z1
nrRvU/tLZFLNL5bwW0yL5ccLE+ayZ3KF3HIHlNfa3/BN+58D+Dv+Crs3jLRtf8K+HPhP4X13
WfsN5rWv2+mbNPuLa/hsfLS9lS4mypiBwrum5TJtzk/FNIQV6t8BP2TtQ+O/ww8eeMv+Eo8K
+FPDfw5/s/8Ati81o3rY+2yvDB5aWttO7fvEwflGNynpkjymvqv9im50/wAQ/sJftM+Df7f8
K6V4k8V/8It/Y9nrWv2Wkf2h9n1Caafy3upY0OyMZPzcZUdWAIB86fCv4V+Ivjf8Q9J8J+E9
Jutc8Q65OLeysrcDfK2CSSSQqoqhmZ2IVFVmYhVJHq3wY/4J7eNfjZ4T1DV7LVfBWmQQ+JG8
HaYL7W02eI9aFvLcCxs54RJAzukYCSSyxwytNEqSMW41f2Kfip4eX4eePPhbf6ta/DjW/iZB
HZWXj3JCRqDk6RqDkM0GmXLBfMmt9jKwUzefACiega94iPhz/gjNeeCLjXPhpJr9h8Tnu5dM
tdV0S61JtNSEwG5jETtNI/2wbRMhaVrfGGNoRQB4/wDs9/sH+L/2j/hvJ4o0rUvCukWFxro8
LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSAfs9/sH+L/ANo/4byeKNK1LwrpFhca
6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgerf8J3H/wAOSv8AhGf7Z+H/APbH
/Cx/7U/sn7XpP9s/2X9n8r7R5OftXm/a/k348/yOM/ZqP+E7j/4clf8ACM/2z8P/AO2P+Fj/
ANqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1MZ5T+z3+wf4v/aP+G8nijStS8K6RYXG
ujwto6axqJtZPEGsG1kulsLchGRJWjRQpuHhjd5o0VyxIB+z3+wf4v8A2j/hvJ4o0rUvCukW
Fxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB6t/wncf/AA5K/wCEZ/tn4f8A
9sf8LH/tT+yftek/2z/Zf2fyvtHk5+1eb9r+Tfjz/I4z9mo/4TuP/hyV/wAIz/bPw/8A7Y/4
WP8A2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUAeU/s9/sH+L/ANo/4byeKNK1Lwrp
Fhca6PC2jprGom1k8QawbWS6WwtyEZElaNFCm4eGN3mjRXLEgH7Pf7B/i/8AaP8AhvJ4o0rU
vCukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSB6t/wncf/Dkr/hGf7Z+H
/wDbH/Cx/wC1P7J+16T/AGz/AGX9n8r7R5OftXm/a/k348/yOM/ZqP8AhO4/+HJX/CM/2z8P
/wC2P+Fj/wBqf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1AHlP7Pf7B/i/8AaP8AhvJ4
o0rUvCukWFxro8LaOmsaibWTxBrBtZLpbC3IRkSVo0UKbh4Y3eaNFcsSBb+FX7A+vfErwZ8P
davfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMfS/+E7j/wCHJX/C
M/2z8P8A+2P+Fj/2p/ZP2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zV6V+yb8WdNtfgb+yvb
6D4l+GkY8HeMtRuvHsXi3UdHF3ocL6nZzRyWQ1ZvNgRrVS+7TtoMisx/fAmgD5/0X/gmh4yW
Xwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/AME0
PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfrY/
G7wP8S9T+BVz4P8AE3w/1LQfBnxH8Qza1d+LfEVvZapomnzeLbHVra+h/tOeK5llktIMNMol
dkluI2/eMwFvX/2h/Bvxt8X/AAX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tY
r6WCUvJaw43xJJ+7lniIDkhQD4+0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s
7tLK4AextrmKNFun8oNLImSrMMphzleDv2C9W17xZpvhrWfHvw08I+MNX8SXHhW18PahqVxe
6ib2G4S1YSrYW9ylujTsURp3j37Gdcx4c/a2v/tD+Dfjb4v+C+teF/Efw0vtJ8N/E7xJfa9c
+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckL498Htaiuv28Lj4r+GfG/wUvPA/
iv4rT6pqi+IJ9JtdY0bT4tW89bhU1eOOeHzLeYyI9izPlcMVljCKAeFeDv2C9W17xZpvhrWf
Hvw08I+MNX8SXHhW18PahqVxe6ib2G4S1YSrYW9ylujTsURp3j37Gdcx4c+P+O/BGqfDPxxr
PhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX2X8ATa6N+2pH8RvCPjn4P6h4A1
z4nSXd+fFWo6b/b+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjCj3X9lz45/B3wL440vUNL8
baVqvgHx1458Xjx5F4r8ZXy/Zre6kNtpJ/su5uoxexXMMkBnuLm2uiu6RpZYvJJiAPyprq9I
+CXifX/hBq/jyw0z7b4W8P30OnapdW9zFJJpsswJhaeAMZo4pCCizMgiZwUDl/lr9Af+Cc3j
z4efs8+B/hzpmr+LPCtxpWta74i0v4q2Or+Nmax06WSNLKwWDTo7tLS/tJ1aPzLn7PdxBS7t
NGkOYvmn9ifUdP8A2WfE/if4j+K/FelJpnhr7R4cm8H6XqdlqN18QZZUKyafJGPOhOlMAGmu
3R4iFUQb5irRoR5/+yJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxws
T5rJncoXccgeU19bf8Ehta0nRf8AgoToHj3VdQ8FeB/CegT3814NQ8Q2+nwWC3VjeRQxQLeX
HnzorsqZUysoKmRudx+SaAPYNK/Yo8Vav4c+Cmpx3/h8QfHjVbnR9AVp5t9nNBfR2TtdDysI
hklUgxmQ7QSQD8p6H/h3D44/6CvhX/kq3/CoP+Pm4/5C3/Pf/U/8en+3/rP+mdfQHwe/ak0f
4e/BL9h/R7fXfh+39meK9UbxSupWWl3914et31+CRJXkuI3l0/dE0kglRoiQgfd8ildX9m74
7x/8NUftW7PGXw//AOEb+3eIPFHgf/hINW0n7D/wlf2uX+ydRsPtr7PN8vzf38fybfK8xseV
TGcV8L/2RfiT4S8GQ6bb+L/2dLWCy+IFz8MIl1vwPDqt7JronkxbtcS6LPI6SD50leQosbop
MZUxr5/r37B/ir4haHeeOvGPjn4P+AHPjJ/AGpWl1bTaVBpWtQKVNtImn2Bsok8uMSGeNvJw
xZ5A5evQPhv+0hrvwr/4Jay69pni/wAFXXxKm+Lp8YNFrF5pGs62IDZrAdQFreGWZbj7YoIk
EYnClpAfLZnPQfs2fETV9Z/4Jr32iW/jj4VL44+IXxH1O51R/GnirTEuoNL1DRZdMu9SlM83
2mKUSySNuQee4ydksUrLIAZPwW/Zn+L/AMKfDCQeG/iX8FPCN1Z+Ob74UW+oL4cb+3Ytbkea
FoI9Sj0h7pPNjdmjnE4CRyIu+IrsXivE3wR+OPgr4G/C34Navqfgqw8MfFfxlf6Tp+jNp1nJ
c6ZrNlqaadPPdXaWrShxM+wSwzSkwApnZiM+7fsrfFPwn+x38D/Cfhj/AIS74VeLIYv2j1/e
6jcadebtEW3Nj/baxea72O2SDzop9ylMRnc8Un7znj8efhdr3ibwdbaz8Rrq1T9lP4ga74gg
vb1v7avfiZp0+sC8hltLhCiSXsk8cEbhyEaO5a6EmyKRFAPmr4b/ALA+vfEX4qfFfwgfFvgr
RNS+DsGoXuuS6g9+YJrWwmaG7uIDBays6RsqnaypIwlXajYYKfCr9gfXviV4M+HutXvi3wV4
RT4r6rPo/hG21h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPsH7FvxX0/4t/HD9qrxlqWo+
FfB//Cy/A3ia006z1rxLZWO/UNUuFmt7SN7l4fN+44MgUIu1S+zeoJ+wRdP4G8MfDSa0+Jfw
/wBV8GeIvFbz+O/CXibUtP0eTwa8Lrbx6nZTXVzFci7NpO00d1p/lujwIhaQx7AAeaaL/wAE
0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMObfg
j/gl/wCLPFfhi2v7/wAZfD/wzdT+OX+Gz6bqUuoyXVt4gDsosna2s5oTuADCVZGiwwy4OVH0
Bo+rfD/xXY/sp23gTx14Vn8LfBT4j67Jqk/iLxHY6LfW2mN4gt7u0uWhvHt5JvMs1EhMMZG4
MmFcFB23wv8A2uvBvgnQ4dbt9d+Gmq2Pjf8AakufEUVtrctncT2mhXKyQjVmglYT2DxOnmJM
6xOhVCcxyFXAPzJ8d+CNU+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEi
smu2/aUs7HTv2jPH9vpmvXXirTYPEmox2mtXV6t9PrEIupAl1JcL8szyrhzIOHLFh1r7A/4J
9/FfUdP+AnhPw2fiX4f8C6aniS7vrfWtB8bWPh3V/C9zsUbtZ0y+MEOuWUji1kGxpJFihmiE
vS3CEfBNfRfhm28c/DH9iTR/iXpk/wAH9T8J/wDCSSeFWtL3wNpmpa3Z3pjluyJ5rzTmMieW
AwYTybVljQY2sqe7aD8WtfvP2cfgtYfBn4v/AA/8EeJNF8V+Ip/G93b+IrPwhptzcS6jBJZ3
s9jP9mN3afZgu1VtZAsSGHygVMI1f2Uvj/p/wX/Zo8Bwv4++H9pqetftH2mr30mm3dlaNHoj
RLBc3iW+2ObTrSQwyId0Vufs8pjZVhmKOxn5/wDjfxld/EDxPc6vfw6VBdXezfHpul22mWq7
UVBst7aOOFOFGdqDJyxySSe1/ZK/Zf1b9sX412PgLQNZ8P6NruqQTzWJ1h7hILtokMrxBoYZ
Sr+UsjguFXEbDduKq36F3f7SumeAtB8K6X8CNY+Cmm/2X8R/Fkmuf2l42fwvptkjayr6dcvB
bX9muo2hsvKAIiu08qARIOGjNX9iX9pDwF8Ldc+FGvaL4v8Ahp4P0nU/GXii4+K0Wk3iaTBd
zytNBoQitbopeNpka3aGJEj8iAFpJhG8UjoAflnRVvXtGm8Oa5eafcPayT2E728r2t1FdQMy
MVJjmiZo5EyOHRmVhggkEGv0A/ZN+MS6R8Df2V7bwR8RPD/gyDwt4y1G6+KVm3jC08NveQvq
dnJFJdQzzwtfobFSoZVmG1DF1BQIR+elFfqt8B/iBb6h8ILLxf4M8UaV4Z8GaB+07cxWN/ca
3D4dtbLwjMIry4sIBcSQ7bSXEUzWKD5zGGMRKHH5v/tKaz4e8R/tGeP9Q8IJax+E7/xJqNxo
qWtqbWBbJ7qRrcRwlVMaeUUwhVdowMDGKAOJor72/Yc/aUsfh3+yZ8Irefx/a6HrujfHmytZ
on1xba7sfDFxBbzXsbAuHTTJLiMPMpxA0iBnywzXsF3+0rpngLQfCul/AjWPgppv9l/EfxZJ
rn9peNn8L6bZI2sq+nXLwW1/ZrqNobLygCIrtPKgESDhoyxn5U0V+pnwV8b/AAi+NXw++Beo
6v8AEP4P+G9Y+GvjLWrqKwF7c6LZWF1L4nstXElpBLGpSybS7W9jia4CxiS4t4jibIj+fvAX
7fHw++E/7Rn7S+s3HhK18Wab8UoPFNlouqxpepPex6hdK9vb3MZuoFjsnVd7ska3Sl8Bx90I
R5V8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnitnYfZLWdIU8+VYwZ2jJIY42
YY+P+O/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOOCRX6AfAr4w+G/
HPwi/ZYOlXfwq0yPwJ4r1NvFNlrOv2unTeDreTxLp2qxS2K6hcrPJ/o9uYxKhnJieeNmMhOO
g+E37THgLUNc8f6p4p8d+CrufxT8QNe1T4GXWqlLs+Bb2dr5v7VvUaN3sLKW4lsWWO5U/vo/
tHkKEM4Yz8yaK/Tb9gP4teCvhFofgX/hKvHPh/WU8V+JPEtr8X08Q+PHuIILqdVtLNo7Fbxb
XUbe53oZbtoLyPDSO00aRZj+Svg9438IfsdeGLjxZDc6V4w+NsV9PY6HZJGLzR/A5hfYdVkl
wbe/u2IzarC0tvGAJnZ3EcYQip8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhni
tnYfZLWdIU8+VYwZ2jJIY42YY+P+O/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5C
VbDqwypIOOCRX6AfAr43af8AEv4RfssXM3ib4f6lqvgzxXqc3jm78W+IrKy1TRPO8S6dq326
H7bPFLLLJFA4aaIS7kluIz87EDoPhN+0x4C1DXPH+qeKfHfgq7n8U/EDXtU+Bl1qpS7PgW9n
a+b+1b1Gjd7CyluJbFljuVP76P7R5ChDOGM+Kfgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9
d3Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz1ei/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm
6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfVvg54Kufht8JLPxR4W+Ifw0n+OvxK1XU
9O8R+K9f+IGlwXPw8tRdNbSzW5e5aSa4vAZZWv4fMkEGRCu6USv6B8J/E+g6N8Pv2ZfDVl4r
+D+sv8IvGWr6b4u1C/8AEthaDSIV8T6fqKahpzXc0LzpLBattmgSQNDLNHgOxAAPz08d+CNU
+GfjjWfDet232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEismvQP2svG+mfEz9qj4l+J
NEuftui+IPFeqalYXHlvH9ot5ruWSN9rgMuUZThgCM8gGvP6QgooooAKKKKACiiigAooooAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiii
gAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK
KKKACiiigAooooAKKKKACiiigAooooAKKKKACv6iv+DR3/lGtoX/AGGdQ/8ASqWv5da/qK/4
NHf+Ua2hf9hnUP8A0qlr0cB8Fb/D/wC3ROTFb0/8X6M+/fiz/wAf1x9TXzL4f/5GDV/+wdbf
+pD4rr6a+LP/AB/XH1NfMvh//kYNX/7B1t/6kPiuvYwv/Lv1/Q8TGbT9P1R+Vv8AwcUf8nW/
Df8A7J3B/wCnnV6KP+Dij/k634b/APZO4P8A086vRXm4n+NP1f5nfhv4MPRfkY+hfATUJvj5
8CNcWFzBJ4e+Hr7scYTRNJU/+gmvin4t/slaj8cfF3xA8XnxP4V8J+HPht4f8IDWLzWjett+
26XawweWlrbTu37yPB+UY3KemSP3n+BHwPsvE3wb+AmpGFTMvg7wdJux/c0qwx/Kvx+8Wrp+
o/Dn9rDwL/b3hXSvEniXSvh5Ho9nrWv2Wkf2h9lhhln8t7qWNDsjGT83GVHVgDpmVD2dGm+6
X5Dy+tz1prs2fJn7Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp6nqDvFBO0EJna0gKoxkuPLB
bbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3Go3E3mxwo0sdnbXAtUeSUKh
uTEX2swBQBz6t/wS80+10jQ/jhqGp+IvBWhQeJ/hjr3hLS01jxTpumz3mpXC2zQxCG4nSQIw
BxKVEWVYFwQQOg/Z++FkfwN+BOj6r4D8bfCq1+Mfjm+1DRNY8Qat470myb4ZafFcG0ZrNTcF
pJbpPMkN7AJHS3G2BMyiV/FPXPNNF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9Tvb
O7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/wDwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvU
b281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6
nvtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/AGgLOfSoPH2m
67Hq/wAN9M8QPbmRJdS1SbzLxbMsRPd2cJWRMGSKJzvZX2oQAco//BL/AMWaN4h0PSvEHjL4
f+GL/wAW+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJjgttC4Y5XjL/gntrHwk8AaB
r3j/AMefD/wF/wAJHfatptrYakdUvLpLjTLxrO8RzY2VxENsq8ESEMGBBPOPQP2FPj/4+1Dx
x4Q1fxb4++H/APwr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1jHdLLqnm3BjG02afPcPuDB97j0Dw
Z8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzWBeXYlXUohKfPinL7rKacZQr
uVowoAPn/wAD/wDBPbWPGekeCb+Xx58P9Ftfibrt1oXgx746o3/CTPb3Mdq08QhspDBE00qq
v2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJuQlWw6sMqSDjgkV9l
/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/pGlJdLduumwXyS6ujzQRRCMW0YZrh
gciTzHHz98QLOx/a2/aM+MXi/TNe8P8AhXTZ59a8aWkXia9Wxn1CE3TTJYwBd6yXrrKAsQbD
FXw3GShB8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+
1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiN
O8e/YzrmPDntf+CdmhX/AMPfiR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tLqJmlUaoiSx7
4pGdJdPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/b+kabb6p5qXJbXF
W7Dy20vmrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH43v8Aw/4Dg0vVb3QrG01ieZ7vxHf2
ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/iHc+H/BFnaqmnQfaNR1XUHeHTtMUh
vLEsio7b5GUqiKrM2GONiOy/cOo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR7r
xAl1DKp1z/TH82ykZmlgJkZgQ7GVMLz/APwT8/bT8JaD+2Vo3gTTrP4aeGvgr4F8SeJNc0Xx
DqGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip
8V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvhz8PNc8R+N7/AMP+
A4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa14
l+D/AId+FXw48ZXGqz2esajournw/DHNHdXkOjC4a61JkkESrC9kXEkxVhIX3yV7tqP7Tnhb
9pax+Cd5ouo/CqTwtB8R/E1/430zx4PD0d9pemX/AIgS9QKmpkyfPZzOWaxZgWUruLxgAA+F
P2RP2TtQ/bJ+J8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilml8t4LaZF8uOFifNZM7lC7jkDymvt
b/gm/c+B/B3/AAVdm8ZaNr/hXw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE2VMQOFd03KZ
Nucn4ppCCvVvgJ+ydqHx3+GHjzxl/wAJR4V8KeG/hz/Z/wDbF5rRvWx9tleGDy0tbad2/eJg
/KMblPTJHlNfVf7FNzp/iH9hL9pnwb/b/hXSvEniv/hFv7Hs9a1+y0j+0Ps+oTTT+W91LGh2
RjJ+bjKjqwBAPnT4V/CvxF8b/iHpPhPwnpN1rniHXJxb2Vlbgb5WwSSSSFVFUMzOxCoqszEK
pI9W+DH/AAT28a/GzwnqGr2Wq+CtMgh8SN4O0wX2tps8R60LeW4FjZzwiSBndIwEklljhlaa
JUkYtxq/sU/FTw8vw88efC2/1a1+HGt/EyCOysvHuSEjUHJ0jUHIZoNMuWC+ZNb7GVgpm8+A
FE9A17xEfDn/AARmvPBFxrnw0k1+w+Jz3cumWuq6Jdak2mpCYDcxiJ2mkf7YNomQtK1vjDG0
IoA8f/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80
aK5YkA/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu8
0aK5YkD1b/hO4/8AhyV/wjP9s/D/APtj/hY/9qf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5H
Gfs1H/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+PP8jjP2
amM8p/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu80
aK5YkA/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu8
0aK5YkD1b/hO4/8AhyV/wjP9s/D/APtj/hY/9qf2T9r0n+2f7L+z+V9o8nP2rzftfyb8ef5H
Gfs1H/Cdx/8ADkr/AIRn+2fh/wD2x/wsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+PP8jjP2
agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcPDG7zR
orliQD9nv9g/xf8AtH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu
80aK5YkD1b/hO4/+HJX/AAjP9s/D/wDtj/hY/wDan9k/a9J/tn+y/s/lfaPJz9q837X8m/Hn
+Rxn7NR/wncf/Dkr/hGf7Z+H/wDbH/Cx/wC1P7J+16T/AGz/AGX9n8r7R5OftXm/a/k348/y
OM/ZqAPKf2e/2D/F/wC0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJdLYW5CMiStGihTcP
DG7zRorliQLfwq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5Vj
BnaMkhjjZhj6X/wncf8Aw5K/4Rn+2fh//bH/AAsf+1P7J+16T/bP9l/Z/K+0eTn7V5v2v5N+
PP8AI4z9mr0r9k34s6ba/A39le30HxL8NIx4O8ZajdePYvFuo6OLvQ4X1OzmjkshqzebAjWq
l92nbQZFZj++BNAHz/ov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jb
XMUaLdP5QaWRMlWYZTDk0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4Ae
xtrmKNFun8oNLImSrMMphz9bH43eB/iXqfwKufB/ib4f6loPgz4j+IZtau/FviK3stU0TT5v
Ftjq1tfQ/wBpzxXMsslpBhplErsktxG37xmAt6/+0P4N+Nvi/wCC+teF/Efw0vtJ8N/E7xJf
a9c+I9Ys9IvdCsrnxfZaxb3trFfSwSl5LWHG+JJP3cs8RAckKAfH2i/8E0PGSy+FrXxR4j8F
eA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfCvHfgjVPhn441nw3r
dt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr9Ntf8A2h/Bvxt8X/BfWvC/iP4aX2k+
G/id4kvteufEesWekXuhWVz4vstYt721ivpYJS8lrDjfEkn7uWeIgOSFtXf7Zdpf6D4Vv/gn
4i+FUd1efEfxZq/iO58TeMbnwnHF9p1lZ7C8urVb6ykv4ms2j3eZDdYSHygoIeMgHwT8E/2Q
m+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOer0X/gm
h4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphz6B+
yLHc6B+05oXxD0zxV+zpF4Z1n4gJda/Zu2l2D+HLW31BZDJZQ6zFFc29u8MrPA1kTIFRVfZL
EEX3bQfjv4T1/Wvgtd/D3xl8P73w74f+K3iLVPFM3jfVtOk1LTdPm12C4tbi1bXXN5H5lkBK
Xs8O0odnJuNxoA+HviZ+xR4q+E3wEuvH2r3/AIfEGl+MrjwHqukQzzPqOlarAkskkUn7ryHQ
JFu3wzSKfMQZzuC8VpHwS8T6/wDCDV/Hlhpn23wt4fvodO1S6t7mKSTTZZgTC08AYzRxSEFF
mZBEzgoHL/LX6Lfs6fHvw2/w31qz8K/EHwrbaDrH7R95qWs2fibxNa28mt+DLm1EFy91Fqkq
y3cUsUgDCRXlZxuwZE3L86fADxX4C/Z8+PfxN+LGmeKLWD4Y6Tqup6JoPg20uUm1Hx/ZXDye
Rp09rdCR49MMHltPcXURxtVYw1xgxgHj/wCyJ+ydqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+
0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19bf8Ehta0nRf+ChOgePdV1DwV4H8J6BPfzXg1DxD
b6fBYLdWN5FDFAt5cefOiuyplTKygqZG53H5JpCCivvb/gnp8YfCvgP9nOzsPiL4x8FL4hud
Vuj8HDqssOoH4e6qbW5STUr0BJPsNlJdvZlUnDKZovtIgCobgdt8FP2mdc8Efs4/CPTfBXij
4VP4+07xXr03xCvvE3xFk0eP+0H1GJ4b66NvqVsNYieHBabbeo6Q7Fz8yMxn5p0V+i37Ov7V
2meHvhB4IuB488K+HtVtf2j1VoNF1J9LtdP8LXQinvIraCUpPb6JJOiu0UirGSieYu9ePin9
rL+w/wDhqj4l/wDCM/2V/wAI3/wleqf2T/Zfl/Yfsn2uXyfI8v5PK8vbs2fLtxjikIyvhX8K
/wDhaX/CR/8AFR+FfDn/AAjmh3Wuf8TzUPsf9qeRt/0O1+U+bdybv3cXG7a3IxXQfslfsv6t
+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXCriNhu3FVb2v/glh4nXRtD/a
A0y+8V+H/D2m+KfhjqmiWtprHiW00mDVNVnULZqEuJo1kcL9pAkwViErBmTzRu92/wCCZnx3
8D/ArwP8Ar3TfGXhXwnYS67r3/C2luNWt7O+vblo3ttE86KVxcT2iLdKR5CtbRMZJZdjRySK
xnw9+z3+y/q37SOh+PbvRdZ8P2M/w98N3Piq8stQe4Se+srZSZjAY4XjLqTGu2R48mVcZAcr
5pX1t/wTU0+18Ga5+0Lp+teIvBWjT3/wx1zwlZvqHinTbWC/1K5aNYYoJpJxHMjGCT97GzRA
bSXAdC3sH7LP7Stx4C/Y6+BWl/C7WPhVpviTS9d1eTxl/wAJN42n8Lx2Vw19A9pc3UEV/ZnU
YjbbAxaK7GyDygMhoyhH5017t8Kv2B9e+JXgz4e61e+LfBXhFPivqs+j+EbbWHv5J9amhnit
nYfZLWdIU8+VYwZ2jJIY42YY9r+zT+234M/Z+8cfHKXW/AvhXWv+FgaHr2kWF14dtr+2sZGv
JIzHZrA91b+RpR8tiNsS3aKygMMbR7X8CvjD4b8c/CL9lg6Vd/CrTI/AnivU28U2Ws6/a6dN
4Ot5PEunarFLYrqFys8n+j25jEqGcmJ542YyE4Yz4/l+MHjb9nPUL/wJeaJ8P4b/AMKX1zpt
5FqXgjQNXuobiOZxKj3U1rK8u2TcATIwAACnaFFdB8Mv2TviT+2p4T+I/wATrHTvD+m+HvAu
lT6pq2oLp8Oj6dM1tbiQ2lrBaQrEbgwpvKoiqMhpHVpVL/YHwm/aY8Bahrnj/VPFPjvwVdz+
KfiBr2qfAy61UpdnwLeztfN/at6jRu9hZS3Etiyx3Kn99H9o8hQhnHin/BPe0udO1z9onU/F
/jHwVBqXir4f+JfCkV3rHjjS1n1jWrhoGGHlud0yStvIugTC53HzTzQB8/6j+y/q1l+yZYfG
KPWfD934eu/Eh8Ky2ML3A1GxvRBJcYkV4ViKGFFfdHK/+tQHDBwvmlfot/wT++Ikfw2/ZH0f
wefHHw/0i6g+OQ/4THS9S8VaTHa6n4ZfTY7TUN6TzeTe2jhmUeX5gcqHjyUDDwq3+KHwi/Zc
8WeLvHnw6mtfFnjK68SajF8P7CexuW07wLpyXDi21Sf7WgN1emLYbaM70hx5sxaULEoBynwq
/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4/4
78Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfoB8Cvjdp/xL+EX7
LFzN4m+H+par4M8V6nN45u/FviKystU0TzvEunat9uh+2zxSyyyRQOGmiEu5JbiM/OxA6D4T
ftMeAtQ1zx/qninx34Ku5/FPxA17VPgZdaqUuz4FvZ2vm/tW9Ro3ewspbiWxZY7lT++j+0eQ
oQzgA+VPDNt45+GP7Emj/EvTJ/g/qfhP/hJJPCrWl74G0zUtbs70xy3ZE815pzGRPLAYMJ5N
qyxoMbWVDQP2OfFX7RVn4G8Y654n+Gngmf4x6rLo/hSym06awTWZreWCzZo7fS7F7a2TzpFj
zJ5RZg7kEHe30t+w38Yrnw/+zHZ6FqnxE8FRa7dfHlrrx5ba54w0tk13w9Np6W2pyTfaZzHf
28pd/mTzfMYB03MgYauh/FH4c+Ibb9n62+HmtfDQ+D/h18QNejv/APhI9dtdMvfDWlN4rsNU
s7q1TUZ4rh3aytwvmIsrGOSeJv3hYAA+H/Hfxp+I/wAM/hhrPwG1t9KsvD/h/XZ2v9I/sfTp
JrfU4ZWSSX7YkRmaUFWi81ZiTEPLDGL5ayfBH7S/iL4f+GLbSLDTvh/Pa2m/ZJqXgTQ9Tum3
OznfcXNpJM/LHG5zgYUYAAB+1l430z4mftUfEvxJolz9t0XxB4r1TUrC48t4/tFvNdyyRvtc
BlyjKcMARnkA15/SEa3jfxld/EDxPc6vfw6VBdXezfHpul22mWq7UVBst7aOOFOFGdqDJyxy
SScmiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKK
KACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigA
ooooAKKKKACiiigAooooAKKKKACv6iv+DR3/AJRraF/2GdQ/9Kpa/l1r+or/AINHf+Ua2hf9
hnUP/SqWvRwHwVv8P/t0TkxW9P8Axfoz79+LP/H9cfU18y+H/wDkYNY/7B1t/wCpD4rr6a+L
P/H9cfU18yeH/wDkYtY/7B1t/wCpD4rr2ML/AMu/X9DxMZtU9P1R+V3/AAcUf8nW/Df/ALJ3
B/6edXoo/wCDij/k634b/wDZO4P/AE86vRXm4n+NP1f5nfhv4MPRfkfqX+x9Csv7OXwJz/0I
/hT/ANNVlX88P7U37JuofHX46/GDxj/wlHhXwp4a+HFj4YGr3mtG9bH22wt4YPLS1tp3b94m
D8oxuU9Mkf0PfsbnH7OfwJ/7Ebwp/wCmqyr8N/ibc6f4h8L/ALX3g3+3/CuleJPFdl4A/sez
1rX7LSP7Q+zxxTT+W91LGh2RjJ+bjKjqwB9DOv8AdqPovyObKP8AeKvq/wAz5B/Z7/Yc8e/t
J/Cvx7430KztbTwn8OtKudU1PU9Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8E/2Qm+NVn4U
C/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFGljs7a4FqjyShUNyYi+1mAKAOfVv+CXmn2ukaH8
cNQ1PxF4K0KDxP8ADHXvCWlprHinTdNnvNSuFtmhiENxOkgRgDiUqIsqwLgggdB+z98LI/gb
8CdH1XwH42+FVr8Y/HN9qGiax4g1bx3pNk3wy0+K4NozWam4LSS3SeZIb2ASOluNsCZlEr/M
n0J5pov/AATQ8ZLL4WtfFHiPwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlk
TJVmGUw5q+H/APgn1qGoeIfD3h7VfiX8KvDnjPxHrtz4ai8MXeo3t5qllqEF6bIw3IsrSeKD
fLgo8kgR0YMrEBselfsZeN/GHws+KnhzSNa+Ifwfn+FXwk8ZXF1Pfaxe6LqZtobeaOe8k0ZL
iOTUilyIFMLWUQEkzqw2vvZavw6+MvgeXxj+0v8AtAWc+lQePtN12PV/hvpniB7cyJLqWqTe
ZeLZliJ7uzhKyJgyRROd7K+1CADlH/4Jf+LNG8Q6HpXiDxl8P/DF/wCLfFeo+D/DkV9LqM39
vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wAE9tY+EngDQNe8f+PPh/4C/wCEjvtW021s
[base64-encoded MIME attachment elided]
/X/hl4R8Vzadq2g+IJ/DU914ctDqEcl+yy38YaaK4R3kSawmnVwuA6siqPlT47/8Ix/wvDxl
/wAIR/yJn9u3v9gf63/kH/aH+zf6797/AKrZ/rPn/vc5oA5SiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooo
oAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAC
iiigAr+or/g0d/5RraF/2GdQ/wDSqWv5da/qK/4NHf8AlGtoX/YZ1D/0qlr0cB8Fb/D/AO3R
OTFb0/8AF+jPv34s/wDH9cfU18y+H/8AkYNX/wCwdbf+pD4rr6a+LP8Ax/XH1NfMvh//AJGD
V/8AsHW3/qQ+K69jC/8ALv1/Q8TGbT9P1R+Vv/BxR/ydb8N/+ydwf+nnV6KP+Dij/k634b/9
k7g/9POr0V5uJ/jT9X+Z34b+DD0X5Efw81G0h+PfwNRiu/8A4Rz4f/8Apj0nFfDXxo/ZQv8A
48+OviN4zHijwp4U8M/DrQvCP9r3mtG9bH27TLWGDy0tbad3/eJg/KMblPTJH0houp3cf7Xf
wNjUt5f9g/Dkfh/YWj5rz6O6sPEXwL/ac8GnX/CuleJvFnh74bf2RZ61r9lpH9o/Z7W3mn8t
7qWNDsjGT83GVHVgDlinelFen5HRhY2qyfqfL37Pf7Dnj39pP4V+PfG+hWdraeE/h1pVzqmp
6nqDvFBO0EJna0gKoxkuPLBbbgKoK73TfHuPgn+yE3xqs/CgX4jfDTw9q3jfVTo+i6NqF9d3
Go3E3mxwo0sdnbXAtUeSUKhuTEX2swBQBz6t/wAEvNPtdI0P44ahqfiLwVoUHif4Y694S0tN
Y8U6bps95qVwts0MQhuJ0kCMAcSlRFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TW
PEGreO9Jsm+GWnxXBtGazU3BaSW6TzJDewCR0txtgTMolfzj0DzTRf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphzV8P8A/BPrUNQ8Q+Hv
D2q/Ev4VeHPGfiPXbnw1F4Yu9RvbzVLLUIL02RhuRZWk8UG+XBR5JAjowZWIDY9K/Yy8b+MP
hZ8VPDmka18Q/g/P8KvhJ4yuLqe+1i90XUzbQ280c95JoyXEcmpFLkQKYWsogJJnVhtfey1f
h18ZfA8vjH9pf9oCzn0qDx9puux6v8N9M8QPbmRJdS1SbzLxbMsRPd2cJWRMGSKJzvZX2oQA
co//AAS/8WaN4h0PSvEHjL4f+GL/AMW+K9R8H+HIr6XUZv7eu7G9WxmeI21nMIovtLiNTcGJ
jgttC4Y5XjL/AIJ7ax8JPAGga94/8efD/wABf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq
8ESEMGBBPOPQP2FPj/4+1Dxx4Q1fxb4++H//AAr3wd4rl8Ralc+NrvR9Q1ay/eRXt+1jHdLL
qnm3BjG02afPcPuDB97j0DwZ8e9a/aF+NPhnXLrxd8FNT+DF18R9Uv5/DXjSLw7FrHhbTLzW
BeXYlXUohKfPinL7rKacZQruVowoAPn/AMD/APBPbWPGekeCb+Xx58P9Ftfibrt1oXgx746o
3/CTPb3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3
EMjRyJuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv8AWNHXxZLpd3f6RpSX
S3brpsF8kuro80EUQjFtGGa4YHIk8xx8/fECzsf2tv2jPjF4v0zXvD/hXTZ59a8aWkXia9Wx
n1CE3TTJYwBd6yXrrKAsQbDFXw3GShB8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m
82OFGljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL
3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw88Z2ni34KDQbrxXZN4jtfEF
5o0WseHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37akfxG8I+Ofg/qHgDXPidJd35
8Vajpv8Ab+kabb6p5qXJbXFW7Dy20vmrNaSPKxX94wmjChjPn7xF+xR4q+HPw81zxH43v/D/
AIDg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSRF83O7afsZ/sOePf26/iHc+H/
AARZ2qpp0H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPxy8F/Fyx+Cdh4V8RfCrX
PA/h/wCI/iZvFyePJ9HkvodHuvECXUMqnXP9MfzbKRmaWAmRmBDsZUwvP/8ABPz9tPwloP7Z
WjeBNOs/hp4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5
U+FX7A+vfErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMM
cnxF+xR4q+HPw81zxH43v/D/AIDg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4YpbiSR
F83O7b7X+xl8WfGHhv4qeHBrXiX4P+HfhV8OPGVxqs9nrGo6Lq58PwxzR3V5DowuGutSZJBE
qwvZFxJMVYSF98le7aj+054W/aWsfgneaLqPwqk8LQfEfxNf+N9M8eDw9HfaXpl/4gS9QKmp
kyfPZzOWaxZgWUruLxgAA+FP2RP2TtQ/bJ+J8Xg3QvFHhXQvEl9u/s6z1o3qf2lsilml8t4L
aZF8uOFifNZM7lC7jkDymvtb/gm/c+B/B3/BV2bxlo2v+FfDnwn8L67rP2G81rX7fTNmn3Ft
fw2PlpeypcTZUxA4V3Tcpk25yfimkI+gfgb/AME9tY+O/wAIPDnjOz8efD/R7DxR4rj8EWdr
qR1QXQ1iUborVxDZSIN8ZVxIHMYDgM6sGUZPwZ/YY8SfFzxx8SvDdzr3hXwfrXwnsb3UvEFv
rUt0/k29lI0d48bWlvcLJ5LhQQDlt6+WHAbb9Qf8E/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnq
F3p2t3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJOV/ZKTQfD37Rn7W9uPHvh+803VvBv
irwtoeteJPFlhDP4lurq6AtJPtE8sa3DzLCzvOv7sFgzFd65Yzx/4G/8E9tY+O/wg8OeM7Px
58P9HsPFHiuPwRZ2upHVBdDWJRuitXENlIg3xlXEgcxgOAzqwZR4p478Eap8M/HGs+G9btvs
WteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfYHwa+OV/8Av8AgkzBdeGfEXw/g8cWnxWX
xNbaffT6NqeqWlktilql3FZXXmSRyrdRrtZIxMqZkGIiXPxpr2vX3inXLzU9TvLrUdS1Gd7q
7u7qZpp7qZ2LPJI7EszsxJLEkkkk0hFSvVvgJ+ydqHx3+GHjzxl/wlHhXwp4b+HP9n/2xea0
b1sfbZXhg8tLW2ndv3iYPyjG5T0yR5TX1X+xTc6f4h/YS/aZ8G/2/wCFdK8SeK/+EW/sez1r
X7LSP7Q+z6hNNP5b3UsaHZGMn5uMqOrAEA+dPhX8K/EXxv8AiHpPhPwnpN1rniHXJxb2Vlbg
b5WwSSSSFVFUMzOxCoqszEKpI9W+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5b
gWNnPCJIGd0jASSWWOGVpolSRi3Gr+xT8VPDy/Dzx58Lb/VrX4ca38TII7Ky8e5ISNQcnSNQ
chmg0y5YL5k1vsZWCmbz4AUT0DXvER8Of8EZrzwRca58NJNfsPic93LplrquiXWpNpqQmA3M
YidppH+2DaJkLStb4wxtCKAPH/2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1
g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g
1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7
Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf
2fyvtHk5+1eb9r+Tfjz/ACOM/ZqYzyn9nv8AYP8AF/7R/wAN5PFGlal4V0iwuNdHhbR01jUT
ayeINYNrJdLYW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jU
TayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/AOHJX/CM/wBs/D/+2P8AhY/9qf2T
9r0n+2f7L+z+V9o8nP2rzftfyb8ef5HGfs1H/Cdx/wDDkr/hGf7Z+H/9sf8ACx/7U/sn7XpP
9s/2X9n8r7R5OftXm/a/k348/wAjjP2agDyn9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jU
TayeINYNrJdLYW5CMiStGihTcPDG7zRorliQD9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01j
UTayeINYNrJdLYW5CMiStGihTcPDG7zRorliQPVv+E7j/wCHJX/CM/2z8P8A+2P+Fj/2p/ZP
2vSf7Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/wAOSv8AhGf7Z+H/APbH/Cx/7U/sn7Xp
P9s/2X9n8r7R5OftXm/a/k348/yOM/ZqAPKf2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNR
NrJ4g1g2sl0thbkIyJK0aKFNw8MbvNGiuWJAt/Cr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG2
1h7+SfWpoZ4rZ2H2S1nSFPPlWMGdoySGONmGPpf/AAncf/Dkr/hGf7Z+H/8AbH/Cx/7U/sn7
XpP9s/2X9n8r7R5OftXm/a/k348/yOM/Zq9K/ZN+LOm2vwN/ZXt9B8S/DSMeDvGWo3Xj2Lxb
qOji70OF9Ts5o5LIas3mwI1qpfdp20GRWY/vgTQB8/6L/wAE0PGSy+FrXxR4j8FeA9W8b+JL
7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOcrwd+wXq2veLNN8Naz49+GnhHxh
q/iS48K2vh7UNSuL3UTew3CWrCVbC3uUt0adiiNO8e/YzrmPDn7LPxu8D/EvU/gVc+D/ABN8
P9S0HwZ8R/EM2tXfi3xFb2WqaJp83i2x1a2vof7TniuZZZLSDDTKJXZJbiNv3jMB5T8Htaiu
v28Lj4r+GfG/wUvPA/iv4rT6pqi+IJ9JtdY0bT4tW89bhU1eOOeHzLeYyI9izPlcMVljCKAe
P6L/AME0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZ
hlMOeU8RfsUeKvhz8PNc8R+N7/w/4Dg0vVb3QrG01ieZ7vxHf2ZkW6hsUtophKkUkYia4Ypb
iSRF83O7b9w6D8d/Cev618Frv4e+Mvh/e+HfD/xW8Rap4pm8b6tp0mpabp82uwXFrcWra65v
I/MsgJS9nh2lDs5NxuNGo/HLwX8XLH4J2HhXxF8Ktc8D+H/iP4mbxcnjyfR5L6HR7rxAl1DK
p1z/AEx/NspGZpYCZGYEOxlTCgHwT+yV+y/q37YvxrsfAWgaz4f0bXdUgnmsTrD3CQXbRIZX
iDQwylX8pZHBcKuI2G7cVVvNK/Vb9kP47/CD4FeOPhpe/Dbxl4V8J/D2Xxz4r/4TtbjVls76
9gaSa28N+dFduL2e0SC6iI2K0ETGSWbY8cki8V+zt8e774P/ALKvwX8NfDbW/g/ZeLPDniTW
4/G7658QG0K0tbr7fCba6mW11G2TVbc24X94qXkZjgEaZ+ZGAPj79nv9hzx7+0n8K/HvjfQr
O1tPCfw60q51TU9T1B3ignaCEztaQFUYyXHlgttwFUFd7pvj3cVpHwS8T6/8INX8eWGmfbfC
3h++h07VLq3uYpJNNlmBMLTwBjNHFIQUWZkETOCgcv8ALX1X+wrc6feeOP2lfEN3r/wq8PWH
jbwN4o8NaNFDr9loVjc6hdSQSQQ2dpeSxXEVoy5Ebyxqiqu1mVlYDz/9ifUdP/ZZ8T+J/iP4
r8V6UmmeGvtHhybwfpep2Wo3XxBllQrJp8kY86E6UwAaa7dHiIVRBvmKtGhHn/7In7J2oftk
/E+LwboXijwroXiS+3f2dZ60b1P7S2RSzS+W8FtMi+XHCxPmsmdyhdxyB5TX1t/wSG1rSdF/
4KE6B491XUPBXgfwnoE9/NeDUPENvp8Fgt1Y3kUMUC3lx586K7KmVMrKCpkbncfkmgAor72/
4J6fGHwr4D/Zzs7D4i+MfBS+IbnVbo/Bw6rLDqB+Huqm1uUk1K9AST7DZSXb2ZVJwymaL7SI
AqG4HbfBT9pnXPBH7OPwj03wV4o+FT+PtO8V69N8Qr7xN8RZNHj/ALQfUYnhvro2+pWw1iJ4
cFptt6jpDsXPzIzGfmnRX6Lfs6/tXaZ4e+EHgi4Hjzwr4e1W1/aPVWg0XUn0u10/wtdCKe8i
toJSk9vokk6K7RSKsZKJ5i714+Kf2sv7D/4ao+Jf/CM/2V/wjf8Awleqf2T/AGX5f2H7J9rl
8nyPL+TyvL27Nny7cY4pCPP6K+9v+CffxX1HT/gJ4T8Nn4l+H/Aump4ku7631rQfG1j4d1fw
vc7FG7WdMvjBDrllI4tZBsaSRYoZohL0tx6B+xd4v1mD9kzwb4kj8b+H9Ng8LftDi1vdcXWY
PDenL4ekggu7+1tVnNqEsp5FWc2Mcab9gJgyhCuw7H56fCv4V/8AC0v+Ej/4qPwr4c/4RzQ7
rXP+J5qH2P8AtTyNv+h2vynzbuTd+7i43bW5GK5Svuv9kT4geG7r44ftfp4e8UeFfDXgHxx4
U8S6b4bsr7W7Xw/Y6jcXVw/9lpFbXMkIG2EzKpKAQLIVYx+YA1v9hz9pSx+Hf7Jnwit5/H9r
oeu6N8ebK1mifXFtrux8MXEFvNexsC4dNMkuIw8ynEDSIGfLDNIR8E16XqP7L+rWX7Jlh8Yo
9Z8P3fh678SHwrLYwvcDUbG9EElxiRXhWIoYUV90cr/61AcMHC1P2sv7D/4ao+Jf/CM/2V/w
jf8Awleqf2T/AGX5f2H7J9rl8nyPL+TyvL27Nny7cY4r7W/4J6/Gr4f/AAf/AGD/AA3pPi3V
/Ctv4k8R/Faa58PXUmp2N5deB7ibSWs7XxDcWEkoBitrmNsrc7AoZZhkiLcAfCnwr+Ff/C0v
+Ej/AOKj8K+HP+Ec0O61z/ieah9j/tTyNv8Aodr8p827k3fu4uN21uRiuUr7W/YJ1u78K+OP
2ptE8Q/Erwrc/wDCReBte0A3994ytrex8Wa3LIyW1xFJdSx/at/+lMtwRhVnJZk84btX4TfF
HX/+GOv2fNL+DPxT8K/DjxJoGu63J43+2eLrPw7H9okvrZ7O5voJ5EOoRLbKoysVwNkbRYJB
joA+X9R/Zf1ay/ZMsPjFHrPh+78PXfiQ+FZbGF7gajY3ogkuMSK8KxFDCivujlf/AFqA4YOF
80r9LP2Mfjl8P/hV+yPaad4t8RfD+48SeKPjJeXnh7VbCexEfhG4n017O18SnSZPJEdpBcoz
LFcxQCNHSUIpWIH5+/Zp/a10D9krxx8ctM8b6PpXxJ1rxVoeveHP+Ensb+8v/wC3rq4kjXEs
32uFZNPmeJ5WnjQXTeYCr4OAxnFfCr9gfXviV4M+HutXvi3wV4RT4r6rPo/hG21h7+SfWpoZ
4rZ2H2S1nSFPPlWMGdoySGONmGPj/jvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRyJu
QlWw6sMqSDjgkV+gHwK+MPhvxz8Iv2WDpV38KtMj8CeK9TbxTZazr9rp03g63k8S6dqsUtiu
oXKzyf6PbmMSoZyYnnjZjITjoPhN+0x4C1DXPH+qeKfHfgq7n8U/EDXtU+Bl1qpS7PgW9na+
b+1b1Gjd7CyluJbFljuVP76P7R5ChDOAD4p/Z7/Yc8e/tJ/Cvx7430KztbTwn8OtKudU1PU9
Qd4oJ2ghM7WkBVGMlx5YLbcBVBXe6b49x8E/2Qm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G
4m82OFGljs7a4FqjyShUNyYi+1mAKAOfa/8AgnvaXOna5+0Tqfi/xj4Kg1LxV8P/ABL4Uiu9
Y8caWs+sa1cNAww8tzumSVt5F0CYXO4+aeat/s/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseI
NW8d6TZN8MtPiuDaM1mpuC0kt0nmSG9gEjpbjbAmZRK4B4/8N/8AgnL8UfiPofxX1MaVa6Pp
vwag1A+IrvULjEH2qyVmnsYHiDrNcBUY/KfLA2lnXzI9/hNfZf8AwT38C2vwu1z9om01Pxn8
NI4L/wCH/iXwJpd7J4x021g1rUnaAQ/ZhcTRyPbyhSyXBRYiOrAggea/B7xv4Q/Y68MXHiyG
50rxh8bYr6ex0OySMXmj+BzC+w6rJLg29/dsRm1WFpbeMATOzuI4whFT4VfsD698SvBnw91q
98W+CvCKfFfVZ9H8I22sPfyT61NDPFbOw+yWs6Qp58qxgztGSQxxswx8f8d+CNU+GfjjWfDe
t232LWvD99Ppt/b+Ykn2e4hkaORNyEq2HVhlSQccEiv0A+BXxu0/4l/CL9li5m8TfD/UtV8G
eK9Tm8c3fi3xFZWWqaJ53iXTtW+3Q/bZ4pZZZIoHDTRCXcktxGfnYgdB8Jv2mPAWoa54/wBU
8U+O/BV3P4p+IGvap8DLrVSl2fAt7O1839q3qNG72FlLcS2LLHcqf30f2jyFCGcMZ8U/BP8A
ZCb41WfhQL8Rvhp4e1bxvqp0fRdG1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA56Hxl/
wT21j4SeANA17x/48+H/AIC/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegf
sUaP4i8B/tAeFfG2oePPgpqkd745hfxnJ4g1vQ7vWNK+yXyPPdLcaiNz+akkkqXOmzS+YVyX
8xFA9A8GfFG/+JXxp8M+T8QPgp4s+BNj8R9UZNJ8aPoz6xomjz6wLi4llfXIlvZftUEvm+bD
LNIxUh2WVNgAPn/wP/wT21jxnpHgm/l8efD/AEW1+Juu3WheDHvjqjf8JM9vcx2rTxCGykME
TTSqq/ahC/UlFHNeKeO/BGqfDPxxrPhvW7b7FrXh++n02/t/MST7PcQyNHIm5CVbDqwypIOO
CRX6QaH8Ufhz4htv2frb4ea18ND4P+HXxA16O/8A+Ej1210y98NaU3iuw1SzurVNRniuHdrK
3C+YiysY5J4m/eFgPgn9rLxvpnxM/ao+JfiTRLn7boviDxXqmpWFx5bx/aLea7lkjfa4DLlG
U4YAjPIBpCPP6KKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKK
ACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAo
oooAKKKKACv6iv8Ag0d/5RraF/2GdQ/9Kpa/l1r+or/g0d/5RraF/wBhnUP/AEqlr0cB8Fb/
AA/+3ROTFb0/8X6M+/fiz/x/XH1NfMvh/wD5GDV/+wdbf+pD4rr6a+LP/H9cfU18y+Hz/wAV
Bq//AGDrb/1IfFdexhf+Xfr+h4mM2n6fqj8rf+Dij/k634b/APZO4P8A086vRR/wcUf8nW/D
f/sncH/p51eivNxP8afq/wAzvw38GHovyOC8PWqf8NUfApsc/wDCP/Dvt/1A9Hr5P+L37J2o
fHfxj8QvGX/CUeFPCnhr4c6B4Q/te81o3rY+26XawweWlrbTu37xMH5Rjcp6ZI+ntK8UpB+1
h8C4P4hoPw6H56Ho/wDjXmkV7p/ib4FftN+D/wC3/CmleJPFvh74bHR7PWtfstI/tD7Pa280
/lvdSxodkYyfm7qOrAHHE/wo/L8jpwy/eS+Z8vfs9/sOePf2k/hX498b6FZ2tp4T+HWlXOqa
nqeoO8UE7QQmdrSAqjGS48sFtuAqgrvdN8e4+Cf7ITfGqz8KBfiN8NPD2reN9VOj6Lo2oX13
cajcTebHCjSx2dtcC1R5JQqG5MRfazAFAHPq3/BLzT7XSND+OGoan4i8FaFB4n+GOveEtLTW
PFOm6bPealcLbNDEIbidJAjAHEpURZVgXBBA6D9n74WR/A34E6PqvgPxt8KrX4x+Ob7UNE1j
xBq3jvSbJvhlp8VwbRms1NwWkluk8yQ3sAkdLcbYEzKJX887zzTRf+CaHjJZfC1r4o8R+CvA
ereN/El94V0DTdYnvLifU72zu0srgB7G2uYo0W6fyg0siZKswymHNXw//wAE+tQ1DxD4e8Pa
r8S/hV4c8Z+I9dufDUXhi71G9vNUstQgvTZGG5FlaTxQb5cFHkkCOjBlYgNj0r9jLxv4w+Fn
xU8OaRrXxD+D8/wq+EnjK4up77WL3RdTNtDbzRz3kmjJcRyakUuRAphayiAkmdWG197LV+HX
xl8Dy+Mf2l/2gLOfSoPH2m67Hq/w30zxA9uZEl1LVJvMvFsyxE93ZwlZEwZIonO9lfahAByj
/wDBL/xZo3iHQ9K8QeMvh/4Yv/FvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4Lb
QuGOV4y/4J7ax8JPAGga94/8efD/AMBf8JHfatptrYakdUvLpLjTLxrO8RzY2VxENsq8ESEM
GBBPOPQP2FPj/wCPtQ8ceENX8W+Pvh//AMK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5t
wYxtNmnz3D7gwfe49A8GfHvWv2hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2
JV1KISnz4py+6ymnGUK7laMKAD5/8D/8E9tY8Z6R4Jv5fHnw/wBFtfibrt1oXgx746o3/CTP
b3Mdq08QhspDBE00qqv2oQv1JRRzXinjvwRqnwz8caz4b1u2+xa14fvp9Nv7fzEk+z3EMjRy
JuQlWw6sMqSDjgkV9l/s2/GG50b9oy1k0Pxj8NNO/Z3+HvxAv9Y0dfFkul3d/pGlJdLduumw
XyS6ujzQRRCMW0YZrhgciTzHHz98QLOx/a2/aM+MXi/TNe8P+FdNnn1rxpaReJr1bGfUITdN
MljAF3rJeusoCxBsMVfDcZKEHwT/AGQm+NVn4UC/Eb4aeHtW8b6qdH0XRtQvru41G4m82OFG
ljs7a4FqjyShUNyYi+1mAKAOdbwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew
3CWrCVbC3uUt0adiiNO8e/YzrmPDntf+CdmhX/w9+JHw88Z2ni34KDQbrxXZN4jtfEF5o0Ws
eHre0uomaVRqiJLHvikZ0l093JKYLLJGqr23wBNro37akfxG8I+Ofg/qHgDXPidJd358Vajp
v9v6RptvqnmpcltcVbsPLbS+as1pI8rFf3jCaMKGM+fvEX7FHir4c/DzXPEfje/8P+A4NL1W
90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wRZ2qpp0
H2jUdV1B3h07TFIbyxLIqO2+RlKoiqzNhjjYjsv3DqPxy8F/Fyx+Cdh4V8RfCrXPA/h/4j+J
m8XJ48n0eS+h0e68QJdQyqdc/wBMfzbKRmaWAmRmBDsZUwvP/wDBPz9tPwloP7ZWjeBNOs/h
p4a+CvgXxJ4k1zRfEOoapdaPciC5W6htp5TcXkcV3ceTNBbJ58Es6QFsbdruAD5U+FX7A+vf
ErwZ8PdavfFvgrwinxX1WfR/CNtrD38k+tTQzxWzsPslrOkKefKsYM7RkkMcbMMcnxF+xR4q
+HPw81zxH43v/D/gODS9VvdCsbTWJ5nu/Ed/ZmRbqGxS2imEqRSRiJrhiluJJEXzc7tvtf7G
XxZ8YeG/ip4cGteJfg/4d+FXw48ZXGqz2esajournw/DHNHdXkOjC4a61JkkESrC9kXEkxVh
IX3yV7tqP7Tnhb9pax+Cd5ouo/CqTwtB8R/E1/430zx4PD0d9pemX/iBL1AqamTJ89nM5ZrF
mBZSu4vGAAD4U/ZE/ZO1D9sn4nxeDdC8UeFdC8SX27+zrPWjep/aWyKWaXy3gtpkXy44WJ81
kzuULuOQPKa+1v8Agm/c+B/B3/BV2bxlo2v+FfDnwn8L67rP2G81rX7fTNmn3Ftfw2Plpeyp
cTZUxA4V3Tcpk25yfimkIK+gfgb/AME9tY+O/wAIPDnjOz8efD/R7DxR4rj8EWdrqR1QXQ1i
UborVxDZSIN8ZVxIHMYDgM6sGUfP1foX/wAE/f2g9B/Z9/Y8+FtvqeqfDS4vNX+PNnqF3p2t
3Nhez6ZpT2QtX1Jomcy2LwTRF0nPlspRGJMUmJGho+f9F/4JoeMll8LWvijxH4K8B6t438SX
3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYc2/BH/BL/wAWeK/DFtf3/jL4f+Gb
qfxy/wANn03UpdRkurbxAHZRZO1tZzQncAGEqyNFhhlwcqPdfg4tr8NfjJZroPxp8FeP/hzr
fxO1N/GGm+MNf021u9FWC8a2g161vbi5iuJb2aynNyl9pwjYSRICzlNgyrP4w6b+zV/wTt17
/hVXjHwVe3mlfGu48QeG01iXR9R1s6LHb/ZLXUBY3SGWK486OIgrBHMqlnCrESaAPh/x34I1
T4Z+ONZ8N63bfYta8P30+m39v5iSfZ7iGRo5E3ISrYdWGVJBxwSKtfCv4V+Ivjf8Q9J8J+E9
Jutc8Q65OLeysrcDfK2CSSSQqoqhmZ2IVFVmYhVJGVr2vX3inXLzU9TvLrUdS1Gd7q7u7qZp
p7qZ2LPJI7EszsxJLEkkkk19AfsU/FTw8vw88efC2/1a1+HGt/EyCOysvHuSEjUHJ0jUHIZo
NMuWC+ZNb7GVgpm8+AFEQjK+DH/BPbxr8bPCeoavZar4K0yCHxI3g7TBfa2mzxHrQt5bgWNn
PCJIGd0jASSWWOGVpolSRi3FT9nv9g/xf+0f8N5PFGlal4V0iwuNdHhbR01jUTayeINYNrJd
LYW5CMiStGihTcPDG7zRorliQPYNe8RHw5/wRmvPBFxrnw0k1+w+Jz3cumWuq6Jdak2mpCYD
cxiJ2mkf7YNomQtK1vjDG0Iqr/wncf8Aw5K/4Rn+2fh//bH/AAsf+1P7J+16T/bP9l/Z/K+0
eTn7V5v2v5N+PP8AI4z9mpjPKf2e/wBg/wAX/tH/AA3k8UaVqXhXSLC410eFtHTWNRNrJ4g1
g2sl0thbkIyJK0aKFNw8MbvNGiuWJAP2e/2D/F/7R/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g
1g2sl0thbkIyJK0aKFNw8MbvNGiuWJA9W/4TuP8A4clf8Iz/AGz8P/7Y/wCFj/2p/ZP2vSf7
Z/sv7P5X2jyc/avN+1/Jvx5/kcZ+zUf8J3H/AMOSv+EZ/tn4f/2x/wALH/tT+yftek/2z/Zf
2fyvtHk5+1eb9r+Tfjz/ACOM/ZqAPNPhV+wPr3xK8GfD3Wr3xb4K8Ip8V9Vn0fwjbaw9/JPr
U0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHV0X/gmh4yWXwta+KPEfgrwHq3jfxJfeFdA03WJ7y4n
1O9s7tLK4AextrmKNFun8oNLImSrMMphz9Afsm/FnTbX4G/sr2+g+JfhpGPB3jLUbrx7F4t1
HRxd6HC+p2c0clkNWbzYEa1Uvu07aDIrMf3wJrttB+O/hPX9a+C138PfGXw/vfDvh/4reItU
8UzeN9W06TUtN0+bXYLi1uLVtdc3kfmWQEpezw7Sh2cm43GgD40+G/8AwTl+KPxH0P4r6mNK
tdH034NQagfEV3qFxiD7VZKzT2MDxB1muAqMflPlgbSzr5ke+p8JP2MU+LmkeBXh+Kvwq0rW
viHfNpul6Dd3moT6pDcC5FsiXMdrZzLbeY7IyGV1DI4YHAbb9AfsoX3hvUvjh+1d4k0/xd4V
tPC3jnwp4v8ADnhe48ReLrWyvtVuLu4iktAy39wt03mRkEzzjBYNvfeGryr9kSXw3+z94A+M
vxBv7/wrJ8WPhp/Ztr4Gs7y/tb61kvZ7x4Lm/tYldkvZbWNBLE6mSFCyylXwjAAytM/4J9ah
N8SIvB+o/Ev4VaL4pvfFd34PsNLm1G9u7q8u7e6W0LslraTG2ikmbbGbvyWcKXC7PmrnvgL+
xR4q+PP7Ud38Ho7/AMP+GfG1pPe2TQaxPMYHurPf59uJbaKZd6rFMwY4jIibDklA3q37BR13
RvjJ4J+I1x45+D+oWeueMra78XHxVqOkf2/pC295HLNcltWVZg8qSySrNYSOzFfmYSxhR9V/
swftKfC74Z/FTwD4h8E+P/D+ieD9W+IHi66+I1zqWueTquriaaeDw9JdC9cX9zbiK7jcsA8M
btLNPtkjlkUA/J6iv0h/Z2+Pd98H/wBlX4L+GvhtrfwfsvFnhzxJrcfjd9c+IDaFaWt19vhN
tdTLa6jbJqtubcL+8VLyMxwCNM/Mjavwx/aubRfgb8MI/hNqvwK0nXYvGXiK98XLceLLvwbo
1hNLqcctncCy+3WMt7ZfZSgVZYLkrDAsOxWV4iAfFPwT/ZCb41WfhQL8Rvhp4e1bxvqp0fRd
G1C+u7jUbibzY4UaWOztrgWqPJKFQ3JiL7WYAoA5818d+CNU+GfjjWfDet232LWvD99Ppt/b
+Ykn2e4hkaORNyEq2HVhlSQccEivrX9kWO50D9pzQviHpnir9nSLwzrPxAS61+zdtLsH8OWt
vqCyGSyh1mKK5t7d4ZWeBrImQKiq+yWIIv0Dpv7WWnaX4F8Ej4F+JfhpvT4geKL7Xrvxh47v
vDros2rpLp97fRtqFpcamj2Ri3vPHdsVhMZG8PGwB8J/sZ/sOePf26/iHc+H/BFnaqmnQfaN
R1XUHeHTtMUhvLEsio7b5GUqiKrM2GONiOy63wq/YH174leDPh7rV74t8FeEU+K+qz6P4Rtt
Ye/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj9Qf8E3v27LSX9q7wz4V12w+CnhbwD4U13xD4
kOvWV1c+HLNJ7yO5jWaGK4uoYZ+J4raBJrZ54rU7QqbHYW/hP8SNB1b4ffsy2FlffB+wf4be
MtXg8XWt/wCKLCAeEYT4n0/U0m06S7u99whgtmjW4ge5DwvMm9nYmgD4e8S/st+O/CXhjxhr
F5oX+gfD/XT4c8SNb3tvcyaLe72jAnjjkZ0iaRGjWcr5LupRXLcV5/X2X8O/jD4V8D/tZ/Gb
4+T+MbWbwVdeJNZtrHwraSwtqPxHhvp5pFsZ7OdGMOmPEyPPPcQ4GFSJWuAPL+P9e1GLV9cv
Lu3sLXS4Lqd5orK1aVoLNWYkRRmV3kKKDtBd2bAGWY5JQipRRRQAUUUUAFegeCP2l/EXw/8A
DFtpFhp3w/ntbTfsk1LwJoep3TbnZzvuLm0kmfljjc5wMKMAADz+igDW8b+Mrv4geJ7nV7+H
SoLq72b49N0u20y1XaioNlvbRxwpwoztQZOWOSST1X/DUHjP/hnj/hVX23Sv+EF+3f2p9g/s
Ow877Xuz9o+0+T9o83b+73+Zu8r91ny/krz+igAooooA9A8EftL+Ivh/4YttIsNO+H89rab9
kmpeBND1O6bc7Od9xc2kkz8scbnOBhRgAAcr438ZXfxA8T3Or38OlQXV3s3x6bpdtplqu1FQ
bLe2jjhThRnagycsckknJooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooA
KKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK/qK/wCD
R3/lGtoX/YZ1D/0qlr+XWv6iv+DR3/lGtoX/AGGdQ/8ASqWvRwHwVv8AD/7dE5MVvT/xfoz7
9+LP/H9cfU18yeH/APkYdX/7B1v/AOpD4rr6b+LP/H9cfU18yeH/APkYdX/7B1v/AOpD4rr2
ML/y79f0PExm0/T9Ufld/wAHFH/J1vw3/wCydwf+nnV6KP8Ag4o/5Ot+G/8A2TuD/wBPOr0V
5uJ/jT9X+Z34b+DD0X5HgMEpH7a/wOHb+xPhv/6YdFrwr4nfsn6h8d/EPjvxl/wlHhXwp4b+
HPhzwd/bF5rRvWx9t0q1hg8tLW2nd/3iYPyjG5T0yR7hA2P22vgd/wBgT4b/APph0WsWxutP
8Q/AL9pfwb/b/hXSvEnizw58Nf7Hs9a1+y0j+0Ps9rbzT+W91LGh2RjJ+bjKjqwB5q/8NfI7
KH8R/M+Yf2e/2HPHv7Sfwr8e+N9Cs7W08J/DrSrnVNT1PUHeKCdoITO1pAVRjJceWC23AVQV
3um+PcfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZ
gCgDn1b/AIJeafa6Rofxw1DU/EXgrQoPE/wx17wlpaax4p03TZ7zUrhbZoYhDcTpIEYA4lKi
LKsC4IIHQfs/fCyP4G/AnR9V8B+NvhVa/GPxzfahomseINW8d6TZN8MtPiuDaM1mpuC0kt0n
mSG9gEjpbjbAmZRK/Edh5pov/BNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVw
A9jbXMUaLdP5QaWRMlWYZTDmr4f/AOCfWoah4h8PeHtV+Jfwq8OeM/Eeu3PhqLwxd6je3mqW
WoQXpsjDciytJ4oN8uCjySBHRgysQGx6V+xl438YfCz4qeHNI1r4h/B+f4VfCTxlcXU99rF7
oupm2ht5o57yTRkuI5NSKXIgUwtZRASTOrDa+9lq/Dr4y+B5fGP7S/7QFnPpUHj7Tddj1f4b
6Z4ge3MiS6lqk3mXi2ZYie7s4SsiYMkUTneyvtQgA5R/+CX/AIs0bxDoeleIPGXw/wDDF/4t
8V6j4P8ADkV9LqM39vXdjerYzPEbazmEUX2lxGpuDExwW2hcMcrxl/wT21j4SeANA17x/wCP
Ph/4C/4SO+1bTbWw1I6peXSXGmXjWd4jmxsriIbZV4IkIYMCCecegfsKfH/x9qHjjwhq/i3x
98P/APhXvg7xXL4i1K58bXej6hq1l+8ivb9rGO6WXVPNuDGNps0+e4fcGD73HoHgz4961+0L
8afDOuXXi74Kan8GLr4j6pfz+GvGkXh2LWPC2mXmsC8uxKupRCU+fFOX3WU04yhXcrRhQAfP
/gf/AIJ7ax4z0jwTfy+PPh/otr8TddutC8GPfHVG/wCEme3uY7Vp4hDZSGCJppVVftQhfqSi
jmvFPHfgjVPhn441nw3rdt9i1rw/fT6bf2/mJJ9nuIZGjkTchKth1YZUkHHBIr7L/Zt+MNzo
37RlrJofjH4aad+zv8PfiBf6xo6+LJdLu7/SNKS6W7ddNgvkl1dHmgiiEYtowzXDA5EnmOPn
74gWdj+1t+0Z8YvF+ma94f8ACumzz6140tIvE16tjPqEJummSxgC71kvXWUBYg2GKvhuMlCD
4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobkxF9rMAUAc63
g79gvVte8Wab4a1nx78NPCPjDV/Elx4VtfD2oalcXuom9huEtWEq2Fvcpbo07FEad49+xnXM
eHPa/wDBOzQr/wCHvxI+HnjO08W/BQaDdeK7JvEdr4gvNGi1jw9b2l1EzSqNURJY98UjOkun
u5JTBZZI1Ve2+AJtdG/bUj+I3hHxz8H9Q8Aa58TpLu/PirUdN/t/SNNt9U81Lktrirdh5baX
zVmtJHlYr+8YTRhQxnz94i/Yo8VfDn4ea54j8b3/AIf8BwaXqt7oVjaaxPM934jv7MyLdQ2K
W0UwlSKSMRNcMUtxJIi+bndtP2M/2HPHv7dfxDufD/giztVTToPtGo6rqDvDp2mKQ3liWRUd
t8jKVRFVmbDHGxHZfuHUfjl4L+Llj8E7Dwr4i+FWueB/D/xH8TN4uTx5Po8l9Do914gS6hlU
65/pj+bZSMzSwEyMwIdjKmF5/wD4J+ftp+EtB/bK0bwJp1n8NPDXwV8C+JPEmuaL4h1DVLrR
7kQXK3UNtPKbi8jiu7jyZoLZPPglnSAtjbtdwAfKnwq/YH174leDPh7rV74t8FeEU+K+qz6P
4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhjk+Iv2KPFXw5+HmueI/G9/wCH/AcGl6re
6FY2msTzPd+I7+zMi3UNiltFMJUikjETXDFLcSSIvm53bfa/2Mviz4w8N/FTw4Na8S/B/wAO
/Cr4ceMrjVZ7PWNR0XVz4fhjmjuryHRhcNdakySCJVheyLiSYqwkL75K921H9pzwt+0tY/BO
80XUfhVJ4Wg+I/ia/wDG+mePB4ejvtL0y/8AECXqBU1MmT57OZyzWLMCyldxeMAAHwp+yJ+y
dqH7ZPxPi8G6F4o8K6F4kvt39nWetG9T+0tkUs0vlvBbTIvlxwsT5rJncoXccgeU19rf8E37
nwP4O/4KuzeMtG1/wr4c+E/hfXdZ+w3mta/b6Zs0+4tr+Gx8tL2VLibKmIHCu6blMm3OT8U0
hHsHwr/Y11b4gfDzSfFmueK/BXw48PeJNVGjaDe+K7u4tk1ycEiZ4BDBMwt4G2rLcyBII2kV
TJuDBe28Ef8ABL/xZ4r8MW1/f+Mvh/4Zup/HL/DZ9N1KXUZLq28QB2UWTtbWc0J3ABhKsjRY
YZcHKjoJrfSf2vv2HvgZ4I0Pxd4K8NeIfhXqusadr0HivXbfREWDVLxbmG/geZgs9vGsbLKs
ZadG24hZWVj7X+xx8bPBv7L37OfgbSLfxX8NPFFjdftDxXET62LNp/7C+ymzGtNaSuZ9NdHh
85JH8t4yEyWjciRjPlTwd+wXq2veLNN8Naz49+GnhHxhq/iS48K2vh7UNSuL3UTew3CWrCVb
C3uUt0adiiNO8e/YzrmPDnzTUfgb4w03xx4r8N/8I5qt3rXgb7W2v29jAb3+yktJPLuZZWh3
KsUbjDS52DI+bBFfa1r8OdH+D0Gs678J/il8P9c+JPxA8V69pFx438U+PtLttQ8D6OmoS2q3
cXmTCWW7v4t0z30SNIsLMIYw0vmN8qfD3Q7T4b+OPidol38Wf+ES/szQ9V0yG/8ADq3N/Y+N
pY5FRdNWSAp/ol3tLCWUGLaill5FIR5TXu3wq/YH174leDPh7rV74t8FeEU+K+qz6P4RttYe
/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj4TX6F/s2fFHQfEPwH/AGS7ay1r4aF/h14k1CPx
d/wkeu2GmXvhqFvEem6ol1apdzxO7tBbsvmQLKDHJPF98kAA+f8ARf8Agmh4yWXwta+KPEfg
rwHq3jfxJfeFdA03WJ7y4n1O9s7tLK4AextrmKNFun8oNLImSrMMphyaL/wTQ8ZLL4WtfFHi
PwV4D1bxv4kvvCugabrE95cT6ne2d2llcAPY21zFGi3T+UGlkTJVmGUw5+wdf/aH8G/G3xf8
F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/
9ofwb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+
7lniIDkhWM+PtF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRb
p/KDSyJkqzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXM
UaLdP5QaWRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi
+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R
6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t4
38SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9
W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot0/lBpZEyVZhlMOfsHX/ANofwb8bfF/wX1rw
v4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7lniIDkhTX/ANof
wb8bfF/wX1rwv4j+Gl9pPhv4neJL7XrnxHrFnpF7oVlc+L7LWLe9tYr6WCUvJaw43xJJ+7ln
iIDkhQD4+0X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/
KDSyJkqzDKYcmi/8E0PGSy+FrXxR4j8FeA9W8b+JL7wroGm6xPeXE+p3tndpZXAD2NtcxRot
0/lBpZEyVZhlMOfsHX/2h/Bvxt8X/BfWvC/iP4aX2k+G/id4kvteufEesWekXuhWVz4vstYt
721ivpYJS8lrDjfEkn7uWeIgOSFNf/aH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re
6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+PtF/4JoeMll8LWvijxH4K8B6t438SX3h
XQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJkqzDKYcmi/8ABNDxksvha18UeI/BXgPVvG/i
S+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5QaWRMlWYZTDn7B1/9ofwb8bfF/wAF9a8L+I/h
pfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IU1/9ofwb8bfF
/wAF9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5I
UA+PtF/4JoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO7SyuAHsba5ijRbp/KDSyJk
qzDKYcmi/wDBNDxksvha18UeI/BXgPVvG/iS+8K6BpusT3lxPqd7Z3aWVwA9jbXMUaLdP5Qa
WRMlWYZTDn7B1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6FZXPi+y1i3vbW
K+lglLyWsON8SSfu5Z4iA5IU1/8AaH8G/G3xf8F9a8L+I/hpfaT4b+J3iS+1658R6xZ6Re6F
ZXPi+y1i3vbWK+lglLyWsON8SSfu5Z4iA5IUA+KfB37Bera94s03w1rPj34aeEfGGr+JLjwr
a+HtQ1K4vdRN7DcJasJVsLe5S3Rp2KI07x79jOuY8OT9m7/gnL8Uf2n/AI9+Ivh3oelWthq3
g6ee18Q3moXGNO0WaJ3jMcs0QkDO8kbIixhy+1mHyI7r9K6mmgx6r4m+JPwf8e/DQfEH40eM
vEPmeJdf8WWGiXPw30WTUpYo5Le1uJVuVuLuF2ka5WMzRQZSOJXk8xj/AIJg/tUxfCf9pLwb
8KfEs3wfsvBPwt1XXr6XxgmuS6fFe3MkNxbfbVle7itb938yOCF3t5JFtmOwIA7gA+P9G/Zf
1ab4CP8AEfXNZ8P+EfD13PNaaCmsPcC78VTwozTJYwwwys6RsqxtNJ5cCySohlDbgvmlfpD8
N/iZonij4Pfs3eGIbr4FRaT4J8Za5afELRtevdAu4NGsptain2WUmqySyzW7WrybZrOWXzAg
zK7oMfBPx3/4Rj/heHjL/hCP+RM/t29/sD/W/wDIP+0P9m/1373/AFWz/WfP/e5zSEcpRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFF
FFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQ
AV/UV/waO/8AKNbQv+wzqH/pVLX8utf1Ff8ABo7/AMo1tC/7DOof+lUtejgPgrf4f/bonJit
6f8Ai/Rn378Wf+P64+pr5j0E48Q6v/2Drf8A9SHxXX058Wf+P64+pr5j0L/kYNX/AOwdbf8A
qQ+K69jC/wDLv1/Q8TGbT9P1R+V//BxOf+Mrfhv/ANk7g/8ATzq9FJ/wcS/8nV/Df/sncH/p
51eivNxP8afq/wAzvw38GHovyPnqM4/bc+B3/YE+G3/ph0WvEviZ+ydqHx38QeOvGX/CUeFf
Cnhv4c+HPB39sXmtG9bH23SrSGDy0tbad2/eJg/KMblPTJHt8Qz+218Dv+wJ8Nv/AEw6LWLY
XOn+IfgD+0v4N/t/wrpXiTxX4c+Gv9j2eta/ZaR/aH2e1t5p/Le6ljQ7Ixk/NxlR1YA8tf8A
hr5HZQ/iP5nzF+z3+w549/aT+Ffj3xvoVna2nhP4daVc6pqep6g7xQTtBCZ2tICqMZLjywW2
4CqCu903x7j4J/shN8arPwoF+I3w08Pat431U6PoujahfXdxqNxN5scKNLHZ21wLVHklCobk
xF9rMAUAc+rf8EvNPtdI0P44ahqfiLwVoUHif4Y694S0tNY8U6bps95qVwts0MQhuJ0kCMAc
SlRFlWBcEEDoP2fvhZH8DfgTo+q+A/G3wqtfjH45vtQ0TWPEGreO9Jsm+GWnxXBtGazU3BaS
W6TzJDewCR0txtgTMolfjOw800X/AIJoeMll8LWvijxH4K8B6t438SX3hXQNN1ie8uJ9TvbO
7SyuAHsba5ijRbp/KDSyJkqzDKYc1fD/APwT61DUPEPh7w9qvxL+FXhzxn4j1258NReGLvUb
281Sy1CC9NkYbkWVpPFBvlwUeSQI6MGViA2PSv2MvG/jD4WfFTw5pGtfEP4Pz/Cr4SeMri6n
vtYvdF1M20NvNHPeSaMlxHJqRS5ECmFrKICSZ1YbX3stX4dfGXwPL4x/aX/aAs59Kg8fabrs
er/DfTPED25kSXUtUm8y8WzLET3dnCVkTBkiic72V9qEAHKP/wAEv/FmjeIdD0rxB4y+H/hi
/wDFvivUfB/hyK+l1Gb+3ruxvVsZniNtZzCKL7S4jU3BiY4LbQuGOV4y/wCCe2sfCTwBoGve
P/Hnw/8AAX/CR32raba2GpHVLy6S40y8azvEc2NlcRDbKvBEhDBgQTzj0D9hT4/+PtQ8ceEN
X8W+Pvh//wAK98HeK5fEWpXPja70fUNWsv3kV7ftYx3Sy6p5twYxtNmnz3D7gwfe49A8GfHv
Wv2hfjT4Z1y68XfBTU/gxdfEfVL+fw140i8Oxax4W0y81gXl2JV1KISnz4py+6ymnGUK7laM
KAD5/wDA/wDwT21jxnpHgm/l8efD/RbX4m67daF4Me+OqN/wkz29zHatPEIbKQwRNNKqr9qE
L9SUUc14p478Eap8M/HGs+G9btvsWteH76fTb+38xJPs9xDI0cibkJVsOrDKkg44JFfZf7Nv
xhudG/aMtZND8Y/DTTv2d/h78QL/AFjR18WS6Xd3+kaUl0t266bBfJLq6PNBFEIxbRhmuGBy
JPMcfP3xAs7H9rb9oz4xeL9M17w/4V02efWvGlpF4mvVsZ9QhN00yWMAXesl66ygLEGwxV8N
xkoQfBP9kJvjVZ+FAvxG+Gnh7VvG+qnR9F0bUL67uNRuJvNjhRpY7O2uBao8koVDcmIvtZgC
gDnW8HfsF6tr3izTfDWs+Pfhp4R8Yav4kuPCtr4e1DUri91E3sNwlqwlWwt7lLdGnYojTvHv
2M65jw57X/gnZoV/8PfiR8PPGdp4t+Cg0G68V2TeI7XxBeaNFrHh63tLqJmlUaoiSx74pGdJ
dPdySmCyyRqq9t8ATa6N+2pH8RvCPjn4P6h4A1z4nSXd+fFWo6b/AG/pGm2+qealyW1xVuw8
ttL5qzWkjysV/eMJowoYz5+8RfsUeKvhz8PNc8R+N7/w/wCA4NL1W90KxtNYnme78R39mZFu
obFLaKYSpFJGImuGKW4kkRfNzu2n7Gf7Dnj39uv4h3Ph/wAEWdqqadB9o1HVdQd4dO0xSG8s
SyKjtvkZSqIqszYY42I7L9w6j8cvBfxcsfgnYeFfEXwq1zwP4f8AiP4mbxcnjyfR5L6HR7rx
Al1DKp1z/TH82ykZmlgJkZgQ7GVMLz//AAT8/bT8JaD+2Vo3gTTrP4aeGvgr4F8SeJNc0XxD
qGqXWj3IguVuobaeU3F5HFd3HkzQWyefBLOkBbG3a7gA+VPhV+wPr3xK8GfD3Wr3xb4K8Ip8
V9Vn0fwjbaw9/JPrU0M8Vs7D7JazpCnnyrGDO0ZJDHGzDHJ8RfsUeKvhz8PNc8R+N7/w/wCA
4NL1W90KxtNYnme78R39mZFuobFLaKYSpFJGImuGKW4kkRfNzu2+1/sZfFnxh4b+Knhwa14l
+D/h34VfDjxlcarPZ6xqOi6ufD8Mc0d1eQ6MLhrrUmSQRKsL2RcSTFWEhffJXu2o/tOeFv2l
rH4J3mi6j8KpPC0HxH8TX/jfTPHg8PR32l6Zf+IEvUCpqZMnz2czlmsWYFlK7i8YAAPhT9kT
9k7UP2yfifF4N0LxR4V0LxJfbv7Os9aN6n9pbIpZpfLeC2mRfLjhYnzWTO5Qu45A8pr7W/4J
v3Pgfwd/wVdm8ZaNr/hXw58J/C+u6z9hvNa1+30zZp9xbX8Nj5aXsqXE2VMQOFd03KZNucn4
ppCCivQPBHxl8O+FPDFtYX/wm+H/AImuoN+/UtSvNcjurnLsw3rbajDCNoIUbY14UZycsbej
fHbwvpdm8c/wY+Gmou080wluL7xCrorys6xDy9URdkasI1JBYqil2d9zsAeaV6t8BP2TtQ+O
/wAMPHnjL/hKPCvhTw38Of7P/ti81o3rY+2yvDB5aWttO7fvEwflGNynpkipo3x28L6XZvHP
8GPhpqLtPNMJbi+8Qq6K8rOsQ8vVEXZGrCNSQWKopdnfc7e1/sj61pPjj9i79qXwxb6h4K8L
a741n8MzaHo2oeIbfSoJVh1KeeWKCW/uAWSGPu8rNjblmZhkA+SaKK/Rb9ln9pW48BfsdfAr
S/hdrHwq03xJpeu6vJ4y/wCEm8bT+F47K4a+ge0ubqCK/szqMRttgYtFdjZB5QGQ0ZAPzpr0
v9kr9l/Vv2xfjXY+AtA1nw/o2u6pBPNYnWHuEgu2iQyvEGhhlKv5SyOC4VcRsN24qrfZf7Ov
7V2meHvhB4IuB488K+HtVtf2j1VoNF1J9LtdP8LXQinvIraCUpPb6JJOiu0UirGSieYu9ePV
v2fvjv8ACz4FfHDwde+BvGXw/wDCfgyX4j+Mv+FiLZ6taWf21WuLi28O4iLiWfT0iuoyn2ZW
tIsvK+wxvIrGfknXbeAfgD4i+I/wk8e+N9PjtR4e+HEFjNq8004V915dLbW8UaDLM7MXbOAo
WF8sGKK/Ka9o03hzXLzT7h7WSewne3le1uorqBmRipMc0TNHImRw6MysMEEgg19l/wDBPL9o
jXJf2SvjH8MbT4u/8IJ4k1H+wX8GSav4pk0Wx0mJdUZ9SkguGdUg/dzB5I4j5sy79qSEEUhH
xTRX3t8IP2lNZ/ZW/wCCaun3/hPx/wCCtT8YeFfic89hbjXIGvbnw1mB5YYbaR47+GyutRtY
ZJbdUhkkjy7oI2Zq9A/YD+P/AMNvC2h+BdYv9X8FaVoXxB8SeJX+KHh+98STafpXh5rtVt9O
trTRDdRwT2UiSxK8kltdrEm4vLEsGYgD8yaK+9vAnjjxL4c/ZV+BPhr4Q/FbwV8PPFnhHxJr
0fjx28cado9o101/bm0urpXmCapbi3QYkiS6jaNDGN2Nldt8FP2nbjwd+zj8I7D4XeJPgoPE
lh4r16fxld6l4qn8F6alw+oxSWl69jFd6cby0e2KYVrWcJFCIREhVoSAfmnXu3wq/YH174le
DPh7rV74t8FeEU+K+qz6P4RttYe/kn1qaGeK2dh9ktZ0hTz5VjBnaMkhjjZhj5T8WdZi8R/F
TxLqFunh+OC/1W6uIk0G1ltdKVXmdgLSGVVkjt8H92jqrKm0EAgivpb/AIJ7/GHxho2ueA5N
a8Y/DTTvhV8PfEh1idfFkui3d/pEKNDd3i6bBcJLqSPMIlEYsowGuGBBD73AB5T8TP2KPFXw
m+Al14+1e/8AD4g0vxlceA9V0iGeZ9R0rVYElkkik/deQ6BIt2+GaRT5iDOdwXK1H9l/VrL9
kyw+MUes+H7vw9d+JD4VlsYXuBqNjeiCS4xIrwrEUMKK+6OV/wDWoDhg4X72/Z0/al0fx78N
9av9F8YeFfDth4x/aPvPEHiLRPE2vaXp8l74RvrUR3iXVrdTbLiJo5drRqJPnTKZaMMKn7K/
xe8G+Gfg3e+H/AXjXwVong+f9oe5uL7SNc8QWdimo+B5rNbaYTW2pSK9zbvA6rsdXkLKCB5k
eVYz40/Z7/YP8X/tH/DeTxRpWpeFdIsLjXR4W0dNY1E2sniDWDayXS2FuQjIkrRooU3Dwxu8
0aK5YkA/Z7/YP8X/ALR/w3k8UaVqXhXSLC410eFtHTWNRNrJ4g1g2sl0thbkIyJK0aKFNw8M
bvNGiuWJA+gPjv478J/8OufGXhnwRrPw/wD7E/4XJe6poGk/a9O/tn/hG8PFbXHkzH7f5vnb
E3yD7V5HDH7PmuV/4TuP/hyV/wAIz/bPw/8A7Y/4WP8A2p/ZP2vSf7Z/sv7P5X2jyc/avN+1
/Jvx5/kcZ+zUAeE/slfsv6t+2L8a7HwFoGs+H9G13VIJ5rE6w9wkF20SGV4g0MMpV/KWRwXC
1m8vIlmSJrpbi1sLhd3mb490kfm7NmxExt+4LWvfsjfGv47eAviJqPiPwDH4S1fxCnhHSLLS
bbxHa3Nw8GmXKvc3YuEdUj+Usyru3gqQMkAt98f8IXcf9DX4s/8AKd/8h0f8IXcf9DX4s/8A
Kd/8h14i4Xwqa96TS6Np3+Pe6u7KpNavVPW7SZ+qf8R84gcoValGg6kfZtS5Jp3pzpzbtGoo
3qSpQdT3ej5OS7v8J6d/wTc8U+B/HMVz4f0LXLXTfCPxZ0/VvCsB8UNLBYaC2w38qRyXJ5dh
8wcec2OAQTn7I1T/AJNMX/sSYv8A03LXSf8ACF3H/Q1+LP8Aynf/ACHU914MsbnwGPDebsaY
NMXSd3mL5/krCIc7tu3ftGc7cZ7dq9PLcro4Gm6dC9m09bdIqPRLpFHwnGHHmacS+xeaNN0k
0mua7uo3u5Sldtxc3tecpyesjx/WPBPjz4LftQ+M/EOgeEbLxtpHjq7hkluDqsVjd6R5fyMr
eYPnj5JVU/u844z5Xp/7L/jvRfjKNbv9B86G18Q32p3uuf2/5v8AaNtJBKIT9lZsJ5YOzON/
zYA2jNfV83hG6uJmdvFniws5LMf+JdyT/wBudRy+Bpp4mR/FPip0cFWVhpxDA9QR9jrucfdS
XRWX3pr7rfde58cpe9d9Xd/c1+p8Vfs7/ADxpr3w2TWdG0m90+01LwZdWBvLTX/Nm12aViIE
EUzKsAi6EZC4BKkk19H/ALG3hvXvCPhi/wBP8SeHND8LapbadpqPY6UEEW0TakFkYIWXzHxu
bDNknOQSVHfaH8LovDGkwWGm694i0+xtV2Q21tDpkUUK+iqtkAB9BWn4f8LR+H7y/ujqGran
d6gkETy3rQfIkJmKKohhjHWdyScnpW0patrr/m3+pio6JPp/kl+h4z+0x8PNQ+JPgjWrTSfD
Wr+ItRtPGUV5btpmuQaPc6c8emWRE6TSqy5BO3G0n5sjBUGvNvEP7N/xO8QeCvh23i3R5/iL
Ho8t22oaDceIxbTwmQ5gkkusgTPEMgkE9cDKkmvqaXwP/wATG+uLfXfEOnf2jOLqaG1NmYvM
EUcW4ebbOwykSZG4jIJGM0f8IXcf9DX4s/8AKd/8h1EPdd13v/X+e66NFy1VvK39f5bd0fMZ
/Yx17/hUXj+5t9Ha28Z65rd0Y4zqxQappLXKTfZgyyGOPzQG5IU84YgE1l+HP2UvHlpo+iiH
wz/YmmQeJp9Rt9A/tqO9/sO2a0aM/vnf598hzhSSM5OMmvrD/hC7j/oa/Fn/AJTv/kOj/hC7
j/oa/Fn/AJTv/kOp5Vy8vSyX3W/y/F2tdjvrfzb++/8An+XZHn/7EfgjVPhv8CzoetW32PVN
N1HyrmDzEk8tvsFicbkJU8EdCa8f+Jik/FXxFj/oH2f/AKffFNfVvhzw5F4ZtbtEu9Qvpr67
N5PPeNEZHcxRRAARRxqAEhT+H15r5U+JJx8VvEX/AGD7P/0+eKa9zJZc+PjJ9bnj5uuTBOK6
WPyS/wCDhNT/AMNP/DT/ALJ5D/6etXop3/Bwr/ydB8Nf+yeQ/wDp51eirxv+8VP8T/Mzwn8C
HovyPkn4gfFWLwb+0B8JfG1uG1DSdN8K+Cb2ExHC3Z0/SNPtblFJ4ylzZ3MJ9HiYdq+y/gx/
wcA3PwI+EPhfwToKRDRfCWl2+k2P2nwxqpnaKGMIGkMPiiKIyNjc5jijUszEIoOB+Xvww/am
1T4c+HrbQNV0Hwz488MWMks1ppHiGG4MdlJIcsYbi1mguolJG4xpMI2YlmRic11Y/bZ8OD/m
3v4M/wDgf4q/+XVebzwaSktj0eWSd4n6ef8AESvr/wDdsf8AwmNb/wDmspR/wcra+e1h/wCE
xrf/AM1lfmF/w214c/6N7+DP/gd4q/8Al1R/w214c/6N7+DP/gd4q/8Al1S/ddiv3nc/Tw/8
HK+vj+Gx/wDCY1v/AOaygf8ABytr57WP/hMa3/8ANZX5h/8ADbXhw/8ANvfwZ/8AA/xV/wDL
qhf22vDmf+Te/gz/AOB3ir/5dUXpdhfve5+n3/ESnr57WH/hMa3/APNZSn/g5R8QAdLD/wAJ
jW//AJrK/MP/AIbZ8Of9G+fBn/wO8Vf/AC5pf+G2/Do/5t8+DP8A4HeKf/lzRel2D973P06H
/Bylr57WH/hMa3/81lOX/g5Q18sBiw5/6ljW/wD5rK/MM/ts+HT/AM2+fBn/AMDvFP8A8uaF
/bb8Oq2f+GfPgz/4HeKf/lzTvR7B+97n6teHP+DhzxBr0qrusF3H/oWtbH/u1mvpD9n3/gpJ
4k+OFoki6vpNtvxw2ha4n/uytX4WaP8At+aTpjAwfAL4Mpjp/pnig/z1mvXPhX/wWP174eRK
mj/Bz4N2ajoA/iJ//QtWNdeHeET/AHkWzlrrFNfu5JH7v3vxn8YQQb08R6G3Gcf2Prp/92Ks
yx+P3je6uAja9oQBPX+xtd/+aOvyBX/g4G+JIjA/4Vl8HcY/5567/wDLSmr/AMHAHxHRsj4Y
fB3/AL9a7/8ALSvR58r/AOfcvw/zOHkzL+dfj/kftPYfEzxVdJl/FGhqf+wTrv8A80NWm8fe
Jwv/ACNWh/8Agq13/wCaCvxUX/g4R+Jq9Phn8Hv+/Wu//LSnf8RCvxPH/NNPg7/3613/AOWl
V7TKv+fcvw/zJ9nmf/Pxf18j9l9R+Kni+0PyeJdDb/uE67/80NUD8ZvGv/Qw6F/4J9d/+aKv
x2b/AIOEfia3X4Z/B3/v1rv/AMtKb/xEG/Ez/omXwd/79a7/APLSjnyr/n3L8P8AMfJmf88f
x/yP2MHxk8an/mYdC/8ABPrv/wA0VSQ/F/xnJ18RaEP+4Prv/wA0VfjeP+Dg74mAf8ky+Dv/
AH613/5aUo/4OD/iYP8Ammfwd/79a7/8tKXPlX/PuX4f5hyZn/PH8f8AI/ZhPip4wKZPiXQv
/BRrv/zQ1WuPjD4zifA8RaEf+4Prv/zRV+OP/EQl8Tf+iZ/B3/v1rv8A8tKQ/wDBwf8AEwn/
AJJn8Hf+/Wu//LSnz5V/z7l+H+YuTM/54/j/AJH7Fr8ZPGpb/kYtC/8ABPrv/wA0VWY/iz4y
dM/8JJoQ/wC4Rrv/AM0VfjX/AMRB/wATP+iZ/B3/AL9a7/8ALSlH/Bwj8TR/zTP4O/8AfrXf
/lpRz5V/z7l+H+Y+TM/54/j/AJH7ISfF7xkjY/4SPQv/AAUa7/8ANFSr8XfGTD/kY9C/8FGu
/wDzRV+Np/4OEPiaf+aZ/B3/AL9a7/8ALSgf8HCHxNH/ADTP4O/9+td/+WlHPlX/AD7l+H+Y
cmZfzr+vkfsbL8YvGiH/AJGLQv8AwT67/wDNFRD8YvGkh58RaF/4J9d/+aKvxzH/AAcH/Exj
/wAkz+Dv/frXf/lpSj/g4M+Jg/5pn8Hf+/Wu/wDy0o5sq/59y/D/ADDkzL+eP9fI/ZWL4q+M
X6+JNC/8FGu//NDTbr4seMYDx4k0M/8AcI13/wCaKvxuH/Bwh8TR/wA00+Dv/fnXf/lpSH/g
4P8Aiaf+aZ/B3/vzrv8A8tKfNlX/AD7l+H+YuXM/54/18j9j4vi34ykH/Ix6EP8AuEa7/wDN
FUqfFbxiW58SaF/4KNd/+aGvxrH/AAcHfEwf80z+Dv8A3513/wCWlL/xEH/E3/omfwd/7865
/wDLSkpZV/z7l+H+YcmZ/wA6/r5H7K3HxV8XRJkeJdDP/cJ13/5oapL8ZvGZkwfEOh49f7H1
3/5oq/HY/wDBwd8TD/zTP4O/9+td/wDlpSf8RBfxL/6Jl8Hf+/Wu/wDy0o58q/59y/D/ADBQ
zP8Anj/XyP2Ub4teLxHn/hJNDz6f2Trv/wA0NUX+NfjRW/5GDQ//AAT67/8ANFX49f8AEQd8
TMf8kz+Dv/frXf8A5aU0/wDBwT8Sz/zTL4O/9+td/wDlpS58q/59y/D/ADHyZl/PH8f8j9iV
+NXjNh/yMOh/+CfXf/mipG+NPjQH/kYdC/8ABPrv/wA0Vfjv/wARBHxK/wCiZfB3/v1rv/y0
oP8AwcE/Es/80y+Dv/frXf8A5aUc+Vf8+5fh/mHJmX88fx/yP2MT4y+MiP8AkYtD/wDBRrv/
AM0VNb40eM1/5mHQ/wDwUa7/APNFX46/8RBHxK/6Jl8Hf+/Wu/8Ay0o/4iB/iUf+aZfB3/v1
rv8A8tKOfKv+fcvw/wAw5My/nj/XyP2IPxr8Z/8AQw6H/wCCfXf/AJoqaPjT42Y/8jBoX/gn
13/5oq/Hj/iIF+JP/RMvg7/3613/AOWlOX/g4K+Ja/8ANMvg7/3613/5aUc2Vf8APuX4f5j5
My/nj+P+R+xI+MfjY/8AMw6F/wCCfXf/AJoqkj+LnjZz/wAjFoP/AIJ9d/8Amir8dB/wcIfE
xf8Ammfwc/7867/8tKUf8HCfxOH/ADTP4Of9+dc/+WlHNlX/AD7l+H+YuTMv54/j/kfspD8T
/GTnnxLoX/go13/5oq0bXx34rmX5vFWhD/uFa7/80Ffi4P8Ag4Y+J4/5pp8Hf+/Ouf8Ay0pw
/wCDhz4oj/mmvwd/7865/wDLSmp5V/z7l+H+YuTM/wDn4v6+R+058beKB/zNeh/+CvXf/mgp
P+E28Uf9DXof/gq13/5oK/Fk/wDBw78USP8Akmvwd/7865/8tKT/AIiG/ij/ANE1+Dv/AH51
z/5aU+fKv+fcvw/zF7PM/wDn4vx/yP2mPjnxOD/yNeh/+CrXf/mgo/4TrxP/ANDXof8A4Ktd
/wDmgr8Wf+Ihv4o/9E1+Dv8A351z/wCWlH/EQ38Uf+ia/B3/AL865/8ALSjnyr/n3L8P8w9n
mf8Az8X9fI/aX/hO/FH/AENWh/8Agr13/wCaCkPjzxQB/wAjVof/AIK9d/8Amgr8XF/4OGPi
g3/NNfg7/wB+dc/+WlL/AMRC3xP/AOia/B3/AL865/8ALOjnyr/n3L8P8w5Mz/5+L8f8j9oP
+FgeJv8AoatE/wDBXrv/AM0FDfELxMP+Zp0T/wAFeu//ADQV+Ln/ABELfE9h/wAk0+Dv/fnX
f/lpSj/g4S+JxH/JNPg9/wB+td/+WlL2mVf8+5fh/mP2eZ/8/F/XyP2g/wCFi+Jv+hp0T/wV
67/80FIfiR4m/wChp0T/AMFeu/8AzQV+MH/EQj8Tv+iafB3/AL867/8ALSk/4iD/AIm/9Ez+
Dv8A3613/wCWlHtMq/59y/D/ADD2eZ/8/F+P+R+zx+JfiUf8zRov/gr1z/5oKQ/E3xMP+Zo0
X/wV65/80FfjD/xEGfEz/omfwd/79a7/APLSk/4iCfiX/wBEz+D3/frXf/lpT58q/wCfcvw/
zFyZl/z8X4/5H7O/8LS8Sf8AQ0aL/wCCvXP/AJoKQ/FPxKP+Zn0X/wAFeuf/ADQV+MR/4OB/
iUf+aZ/B7/v1rv8A8tKRv+Dgf4lAf8kz+Dv/AH613/5aUvaZV/z7l+H+Y/Z5l/z8X9fI/Z3/
AIWt4k/6GfRf/BXrn/zQUh+K/iQf8zPo3/gr1z/5oK/GD/iIG+JX/RMvg7/3613/AOWlH/EQ
L8ST/wA0y+D3/fvXf/lpR7TKv+fcvw/zD2eZf8/F+P8Akfs8fi14kH/MzaN/4K9c/wDmgpF+
LfiNj/yM+jf+CvXP/mgr8Yf+IgP4k/8ARMvg9/3713/5aUn/ABEA/Egf80y+D3/fvXf/AJaU
e0yr/n3L8P8AMPZ5l/z8X9fI/aWL4n+JZf8AmaNF/wDBXrn/AM0FWovHniaUf8jXof8A4Ktd
/wDmgr8U0/4OCPiUnT4Z/B7/AL967/8ALSpY/wDg4W+J8XT4afB3/v1rv/y0o9plX/PuX4f5
h7PMv+fi/H/I/a0eMPFB/wCZs0L/AMFWu/8AzQUf8Jd4p/6GvQ//AAVa7/8ANBX4rj/g4i+K
a/8ANNfg7/351z/5aUv/ABESfFP/AKJt8HP+/Guf/LOj2mVf8+5fh/mHs8y/5+L8f8j9qB4u
8Un/AJmvQ/8AwVa7/wDNBR/wlnin/obNC/8ABVrv/wA0Ffiv/wAREnxT/wCibfBz/vxrn/yz
pR/wcTfFTP8AyTb4O/8AfjXP/lnR7TKv+fcvw/zH7PMv+fi/H/I/af8A4SzxT/0Nmhf+CrXf
/mgpf+Er8U/9DZoX/gq13/5oK/Fr/iIl+Kg/5pv8Hf8Avxrn/wAs6Uf8HE/xUH/NNvg5/wB+
Nc/+WdLnyv8A59y/D/MPZ5l/z8X4/wCR+0n/AAlXir/obNC/8FWu/wDzQUf8JV4q/wChs0L/
AMFeu/8AzQV+Ln/ERT8VR/zTb4Of9+Nc/wDlnR/xEVfFX/om3wc/78a5/wDLOjnyv/n3L8P8
w9nmX86/H/I/aP8A4SrxV/0Nmhf+CvXf/mgpf+Ep8Vf9DXoX/gq13/5oK/Fv/iIq+Kv/AETb
4Of9+Nc/+WdH/ERV8Vf+ibfBz/vxrn/yzo58r/59y/D/ADD2eZfzr8f8j9pP+Eo8Vf8AQ16F
/wCCrXf/AJoKP+En8V/9DVof/gq13/5oK/Fv/iIq+Kv/AETb4Of9+Nc/+WdH/ERV8Vf+ibfB
z/vxrn/yzo58r/59y/D/ADD2eZf8/F+P+R+0n/CT+K/+hq0P/wAFWu//ADQUf8JP4r/6GrQ/
/BVrv/zQV+Lf/ERV8Vf+ibfBz/vxrn/yzpf+Iiv4rf8ARNvg5/4D65/8s6OfK/8An3L8P8w9
nmX86/r5H7R/8JP4r/6GrQ//AAVa7/8ANBR/wk/iv/oatD/8FWu//NBX4uf8RFfxW/6Jt8HP
/AfXP/lnR/xEV/Fb/om3wc/8B9c/+WdHPlf/AD7l+H+YezzL+dfj/kftH/wk/iv/AKGrQ/8A
wVa7/wDNBR/wlHir/oa9C/8ABVrv/wA0Ffi5/wARFfxW/wCibfBz/wAB9c/+WdH/ABEV/Fb/
AKJt8HP/AAH1z/5Z0c+V/wDPuX4f5h7PMv51+P8AkftH/wAJR4q/6GvQv/BVrv8A80FIfFXi
of8AM16H/wCCrXf/AJoK/F3/AIiK/it/0Tb4Of8AgPrn/wAs6Q/8HFXxVP8AzTb4Of8AgPrn
/wAs6OfKv+fcvw/zD2eZfzr8f8j9oj4t8VD/AJmvQ/8AwVa7/wDNBTW8YeKR/wAzVof/AIKt
d/8Amgr8Xv8AiIn+Kn/RNfg3/wCA+uf/ACzoH/BxH8U2P/JNfg5/341z/wCWdP2mVf8APuX4
f5h7PMv+fi/H/I/Z8+NfFK/8zVon/gq13/5oKafHPigf8zTon/gq13/5oK/GF/8Ag4f+KX/R
Nfg5/wB+Nc/+WdNP/Bw98Uj/AM01+Dn/AH41z/5Z0e0yr/n3L8P8xezzL/n4vx/yP2ePj3xQ
v/M06J/4Ktd/+aCm/wDCwPE//Q06J/4K9d/+aCvxiP8AwcOfFE/801+Dn/fnXP8A5aU0/wDB
wx8Tz/zTT4O/9+dc/wDlpR7TKv8An3L8P8w9nmX/AD8X9fI/Z4/EPxOP+Zo0X/wV67/80FJ/
wsbxMP8AmaNE/wDBXrv/AM0FfjCf+DhT4nH/AJpp8Hf+/Ouf/LSm/wDEQj8Tv+iafB3/AL86
7/8ALSnz5V/z7l+H+YcmZf8APxfj/kfs+fiP4mH/ADNGi/8Agr13/wCaCk/4WT4mH/M0aL/4
K9c/+aCvxhP/AAcIfE0/800+Dv8A3513/wCWlJ/xEHfE3/omfwd/79a7/wDLSl7TKv8An3L8
P8w9nmf/AD8X9fI/Z4/EvxL/ANDTov8A4K9d/wDmgoHxM8Sn/maNF/8ABXrn/wA0FfjB/wAR
BnxM/wCiZ/B3/v1rv/y0o/4iC/iZn/kmfwe/79a7/wDLSn7TKv8An3L8P8xcmZf8/F/XyP2f
PxM8Sj/maNF/8Feuf/NBTf8AhZ/iX/oaNF/8Feu//NBX4xf8RBfxL/6Jn8Hv+/Wu/wDy0pP+
Igj4l/8ARM/g9/3613/5aUe0yr/n3L8P8w5My/5+L8f8j9nT8UfEo/5mjRf/AAV65/8ANBSr
8UPErf8AM0aL/wCCvXP/AJoK/GD/AIiCPiX/ANEz+D3/AH613/5aUf8AEQN8Ss/8kz+D3/fr
Xf8A5aUe0yr/AJ9y/D/Mfs8z/wCfi/r5H7Rp8SPErn/kadE/8Feu/wDzQVPH468TyD/ka9D/
APBXrv8A80Ffiuv/AAcGfExP+aafB7/v1rv/AMtKkT/g4W+J8fT4a/B7/vzrn/y0pc+Vf8+5
fh/mHs8y/wCfi/r5H7UDxl4oP/M2aH/4Ktd/+aClHi7xSf8Ama9C/wDBVrv/AM0Ffi0P+Dh7
4pL/AM01+Dv/AH51z/5Z0v8AxEQ/FP8A6Jt8Hf8Avxrn/wAs6PaZV/z7l+H+YezzL/n4vx/y
P2l/4SzxT/0Nmhf+CrXf/mgpR4r8Un/mbNC/8FWu/wDzQV+LC/8ABxN8VM/8k2+Dn/fjXP8A
5Z04f8HEvxUB/wCSbfB3/vxrn/yzpe0yv/n3L8P8x+zzL/n4vx/yP2l/4SrxV/0Nmhf+CvXf
/mgpf+Eo8Vf9DXoX/gq13/5oK/Fr/iIn+Kn/AETb4Of9+Nc/+WdKP+Dij4qj/mm3wc/78a5/
8s6OfK/+fcvw/wAw9nmX86/r5H7Sf8JR4q/6GvQv/BVrv/zQUf8ACUeKv+hr0L/wVa7/APNB
X4t/8RFXxV/6Jt8HP+/Guf8Ayzo/4iKvir/0Tb4Of9+Nc/8AlnRz5X/z7l+H+YezzL+dfj/k
ftEfFvilf+Zr0P8A8FWu/wDzQVGfGnilf+Zq0T/wVa7/APNBX4wH/g4n+Kjf802+Dn/fjXP/
AJZ0xv8Ag4f+KTf801+Dv/fjXP8A5Z01PKv+fcvw/wAyXTzP/n4v6+R+0H/CceKP+hq0T/wV
a7/80FH/AAnHij/oatE/8FWu/wDzQV+Lp/4OGvigf+aa/B3/AL865/8ALSj/AIiGvih/0TT4
O/8AfnXP/lpT9plP/PuX4f5i9nmf/Pxf18j9ov8AhOPFH/Q1aJ/4Ktd/+aCj/hOPFH/Q1aJ/
4Ktd/wDmgr8Xf+Ihr4of9E0+Dv8A351z/wCWlH/EQ18UP+iafB3/AL865/8ALSn7TKf+fcvw
/wAw5Mz/AOfi/r5H7Rf8Jz4n/wChq0T/AMFWu/8AzQUo8ceJz/zNeh/+CrXf/mgr8XP+Ihn4
of8ARNfg7/351z/5aUf8RDXxQ/6Jp8Hf+/Ouf/LSl7TKf+fcvw/zDkzP/n4v6+R+0f8Awm3i
j/oa9D/8FWu//NBR/wAJt4o/6GvQ/wDwVa7/APNBX4t/8RDPxQ/6Jr8Hf+/Ouf8Ayzo/4iGf
igP+aa/B3/vzrn/yzp8+U/8APuX4f5hyZn/z8X9fI/aQ+OPE4/5mvQ//AAVa7/8ANBTH8eeJ
0H/I1aH/AOCvXf8A5oK/F/8A4iG/igf+aa/B3/vzrn/y0pD/AMHDPxQP/NNPg7/351z/AOWl
L2mU/wDPuX4f5j5Mz/5+L8f8j9m3+JPiZP8AmaNF/wDBXrv/AM0FNPxO8Sj/AJmjRf8AwV65
/wDNBX4xt/wcH/E1uvw0+Dv/AH613/5aUw/8HBXxLP8AzTP4Pf8AfrXf/lpRz5T/AM+5fh/m
HJmf/Pxf18j9nW+KXiUf8zPov/gr1z/5oKT/AIWn4l/6GfRf/BXrn/zQV+MX/EQP8Sz/AM0z
+D3/AH613/5aUf8AEQL8Sv8Aomfwe/79a7/8tKOfKv8An3L8P8x+zzL/AJ+L+vkfs2fiv4kH
/Mz6N/4K9c/+aCkPxZ8SD/mZtG/8Feuf/NBX4yf8RAfxJ/6Jl8Hv+/Wu/wDy0pP+IgL4k/8A
RMvg9/3713/5aUvaZV/z7l+H+YezzL/n4v6+R+zR+LviQf8AMzaN/wCCvXP/AJoKD8XvEg/5
mbRv/BXrn/zQV+Mh/wCC/wD8SD/zTL4Pf9+9d/8AlpR/w/8AviP/ANEx+D3/AH713/5aUe0y
r/n3L8P8w9nmX/Pxfj/kfsyfjD4jH/MzaP8A+CvXP/mgpD8ZPEg/5mXR/wDwV65/80FfjMf+
C/nxHP8AzTL4Pf8AfvXf/lpSH/gv18Rj/wA0x+D3/fvXf/lpRz5V/wA+5fh/mP2eZf8APxfj
/kfs0PjJ4jJ/5GXR/wDwV65/80FPT4veI3P/ACM2jf8Agr1z/wCaCvxh/wCH/HxG/wCiY/B7
/v3rv/y0pR/wX7+I6/8ANMvg/wD9+9d/+WlLnyr/AJ9y/D/MPZ5l/wA/F+P+R+1Fv8SfEtx0
8U6IP+4Xrn/zQVaj8ZeKJB/yNeh/+CrXf/mgr8VIv+Dgv4mQ/d+Gnwe/79a7/wDLSp4/+Dh3
4oxjj4bfB3/vzrn/AMtKPaZX/wA+5fh/mHs8y/5+L8f8j9qP+Eq8Vf8AQ2aF/wCCvXf/AJoK
P+Eq8Vf9DZoX/gr13/5oK/Fsf8HE3xUH/NNvg7/341z/AOWdH/ERP8VP+ibfBz/vxrn/AMs6
OfK/+fcvw/zD2eY/zr+vkftJ/wAJV4q/6GzQv/BXrv8A80FH/CVeKv8AobNC/wDBXrv/AM0F
fi3/AMRE/wAVP+ibfBz/AL8a5/8ALOlH/BxR8VB/zTb4Of8AfjXP/lnRz5X/AM+5fh/mHs8x
/nX9fI/aP/hKfFX/AENehf8Agq13/wCaCj/hKfFX/Q16H/4Ktd/+aCvxc/4iKfir/wBE2+Dn
/fjXP/lnSp/wcT/FQ/8ANNvg5/341z/5Z0c+Vf8APuX4f5h7PMv51+P+R+0X/CU+Kv8Aoa9D
/wDBVrv/AM0FH/CVeKh/zNehf+CrXf8A5oK/F/8A4iJvir/0Tf4O/wDfjXP/AJZ0f8RE3xU/
6Jv8Hf8Avxrn/wAs6fPlX/PuX4f5h7PMv+fi/H/I/aD/AISvxT/0Nmhf+CrXf/mgo/4SvxT/
ANDZoX/gq13/AOaCvxe/4iJfip/0Tb4Of9+Nc/8AlnR/xES/FT/om3wc/wC/Guf/ACzo58q/
59y/D/MPZ5l/z8X4/wCR+0H/AAlnir/oa9D/APBVrv8A80FMk8ZeKY/+Zq0T/wAFWu//ADQV
+MX/ABES/FT/AKJt8HP+/Guf/LOkb/g4i+Kb9fht8HP+/Guf/LOj2mVf8+5fh/mHs8y/5+L8
f8j9lJviN4nh/wCZn0U/9wrXf/mgqu/xX8Sp/wAzNo3/AIK9c/8Amgr8bpP+DhT4ny9fhp8H
f+/Ouf8AyzqCT/g4G+JRH/JMvg7/AN+td/8AlpR7TKv+fcvw/wAxezzL/n4vx/yP2V/4W54l
/wChl0f/AMFeuf8AzQVlXOt/bry7vb2+srq8ure1s1W00+azhjigmv7jc3n3d1LJI8uoTFmM
gGAoC9Sfx6/4iAviT/0TL4O/9+td/wDlpSH/AILw/FHxZJFplp4K+Fvh+5v5Ugj1Kxs9TnuL
MswG9Eur6aBmHpLE6+qmtaGLy2jNVKdOV16f5mVbCY+tB06k1Z/12Oy/4L2XkfjL9q7wba6W
TfXHh/wPa2OoxwqWNnO+oahdpG/oxt7q3kH+zKp70V0/w/8AAdp440WTXtemutd13XJjfahq
F+4luLuZwuWY4wAAAqqoCqqqqgAAArx61WVSpKp3bf3np0qcacFT7K33H//Z
--------------060906080403000401080808
Content-Type: text/plain; charset=windows-1252;
 name="xen info.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xen info.txt"

host                   : zevenprovincien
release                : 3.11.6-4-xen
version                : #1 SMP Wed Oct 30 18:04:56 UTC 2013 (e6d4a27)
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 2400
hw_caps                : bfebfbff:2c100800:00000000:00003f00:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 98295
free_memory            : 205
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .1_02-4.4
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 27302
xen_commandline        : vga=mode-0x31a
cc_compiler            : gcc (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388]
cc_compile_by          : abuild
cc_compile_domain      :
cc_compile_date        : Wed Dec  4 15:16:21 UTC 2013
xend_config_format     : 4

--------------060906080403000401080808
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------060906080403000401080808--


From xen-devel-bounces@lists.xen.org Fri Jan 03 09:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 09:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz1L4-0002vr-0g; Fri, 03 Jan 2014 09:46:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julian@freebsd.org>) id 1VysWv-00052u-EP
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 00:22:25 +0000
Received: from [85.158.143.35:10612] by server-2.bemta-4.messagelabs.com id
	72/63-11386-0C206C25; Fri, 03 Jan 2014 00:22:24 +0000
X-Env-Sender: julian@freebsd.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388708542!9185591!1
X-Originating-IP: [204.109.63.16]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5812 invoked from network); 3 Jan 2014 00:22:24 -0000
Received: from vps1.elischer.org (HELO vps1.elischer.org) (204.109.63.16)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 00:22:24 -0000
Received: from [192.168.1.73] (254C510A.nat.pool.telekom.hu [37.76.81.10])
	(authenticated bits=0)
	by vps1.elischer.org (8.14.7/8.14.7) with ESMTP id s030MELl019270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-CAMELLIA256-SHA bits=256 verify=NO);
	Thu, 2 Jan 2014 16:22:16 -0800 (PST)
	(envelope-from julian@freebsd.org)
Message-ID: <52C602B0.7060904@freebsd.org>
Date: Fri, 03 Jan 2014 01:22:08 +0100
From: Julian Elischer <julian@freebsd.org>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, xen-devel@lists.xen.org,
	gibbs@freebsd.org, jhb@freebsd.org, kib@freebsd.org,
	julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-20-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1388677433-49525-20-git-send-email-roger.pau@citrix.com>
X-Mailman-Approved-At: Fri, 03 Jan 2014 09:46:43 +0000
Subject: Re: [Xen-devel] [PATCH v9 19/19] isa: allow ISA bus to attach to
	xenpv device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/2/14, 4:43 PM, Roger Pau Monne wrote:
> ---
>   sys/x86/isa/isa.c |    3 +++
>   1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
> index 1a57137..9287ff2 100644
> --- a/sys/x86/isa/isa.c
> +++ b/sys/x86/isa/isa.c
> @@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child, int type, int rid,
>    * On this platform, isa can also attach to the legacy bus.
>    */
>   DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
> +#ifdef XENHVM
> +DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
> +#endif
Read all 19 patches. I'm glad you split them up.. makes it 
understandable.. even by me :-)
No real negative comments, except a question: is there any 
noticeable performance impact on real hardware?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 10:14:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 10:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz1li-0004Bb-U6; Fri, 03 Jan 2014 10:14:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz1lh-0004BW-3K
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 10:14:17 +0000
Received: from [85.158.143.35:26599] by server-1.bemta-4.messagelabs.com id
	7B/2D-02132-87D86C25; Fri, 03 Jan 2014 10:14:16 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388744054!9425338!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12765 invoked from network); 3 Jan 2014 10:14:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 10:14:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89487629"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 10:14:14 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	05:14:13 -0500
Message-ID: <52C68D73.70005@citrix.com>
Date: Fri, 3 Jan 2014 10:14:11 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>	<1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>	<52C54C82.5010802@citrix.com>
	<20140102173639.3869d841@mantra.us.oracle.com>
In-Reply-To: <20140102173639.3869d841@mantra.us.oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 01:36, Mukesh Rathor wrote:
> On Thu, 2 Jan 2014 11:24:50 +0000
> David Vrabel <david.vrabel@citrix.com> wrote:
> 
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>>>
>>> .. which are surprisingly small compared to the amount for PV code.
>>>
>>> PVH uses mostly native mmu ops, we leave the generic (native_*) for
>>> the majority and just overwrite the baremetal with the ones we need.
>>>
>>> We also optimize one - the TLB flush. The native operation would
>>> needlessly IPI offline VCPUs causing extra wakeups. Using the
>>> Xen one avoids that and lets the hypervisor determine which
>>> VCPU needs the TLB flush.
>>
>> This TLB flush optimization should be a separate patch.
> 
> It's not really an "optimization"; we are using the PV mechanism instead
> of the native one because it performs better.

Um.  Isn't that the very definition of an optimization?

I do think it is better for the essential MMU changes to be clearly
separate from the optional ones.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:25:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz2sY-0006O9-1U; Fri, 03 Jan 2014 11:25:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz2sV-0006O4-Tn
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 11:25:24 +0000
Received: from [85.158.137.68:64011] by server-16.bemta-3.messagelabs.com id
	39/A1-26128-22E96C25; Fri, 03 Jan 2014 11:25:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388748320!7046026!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20214 invoked from network); 3 Jan 2014 11:25:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:25:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87314045"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 11:25:19 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	06:25:19 -0500
Message-ID: <52C69E1E.7090103@citrix.com>
Date: Fri, 3 Jan 2014 11:25:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
	<20140102183221.GD3021@pegasus.dumpdata.com>
In-Reply-To: <20140102183221.GD3021@pegasus.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
	PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 18:32, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>>>
>>> In the bootup code for PVH we can trap cpuid via vmexit, so we don't
>>> need to use the emulated prefix call. We also check for the vector
>>> callback early on, as it is a required feature. PVH also runs at the
>>> default kernel IOPL.
>>>
>>> Finally, pure PV settings are moved to a separate function that is
>>> only called for pure PV, i.e., PV with pvmmu. They are also #ifdef'd
>>> with CONFIG_XEN_PVMMU.
>> [...]
>>> @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
>>>  		break;
>>>  	}
>>>  
>>> -	asm(XEN_EMULATE_PREFIX "cpuid"
>>> -		: "=a" (*ax),
>>> -		  "=b" (*bx),
>>> -		  "=c" (*cx),
>>> -		  "=d" (*dx)
>>> -		: "0" (*ax), "2" (*cx));
>>> +	if (xen_pvh_domain())
>>> +		native_cpuid(ax, bx, cx, dx);
>>> +	else
>>> +		asm(XEN_EMULATE_PREFIX "cpuid"
>>> +			: "=a" (*ax),
>>> +			"=b" (*bx),
>>> +			"=c" (*cx),
>>> +			"=d" (*dx)
>>> +			: "0" (*ax), "2" (*cx));
>>
>> For this one-off cpuid call it seems preferable to me to use the
>> emulate prefix rather than diverge from PV.
> 
> This was before the PV cpuid was deemed OK to be used on PVH.
> Will rip this out to use the same version.
> 
>>
>>> @@ -1431,13 +1449,18 @@ asmlinkage void __init xen_start_kernel(void)
>>>  
>>>  	xen_domain_type = XEN_PV_DOMAIN;
>>>  
>>> +	xen_setup_features();
>>> +	xen_pvh_early_guest_init();
>>>  	xen_setup_machphys_mapping();
>>>  
>>>  	/* Install Xen paravirt ops */
>>>  	pv_info = xen_info;
>>>  	pv_init_ops = xen_init_ops;
>>> -	pv_cpu_ops = xen_cpu_ops;
>>>  	pv_apic_ops = xen_apic_ops;
>>> +	if (xen_pvh_domain())
>>> +		pv_cpu_ops.cpuid = xen_cpuid;
>>> +	else
>>> +		pv_cpu_ops = xen_cpu_ops;
>>
>> If cpuid is trapped for PVH guests why does PVH need non-native cpuid op?
> 
> There are a couple of filters applied to the cpuid results. But with HVM
> I am not entirely sure if it is worth preserving those or not.

I think we should behave like HVM for cpuid and any cpuid
policy/filtering should be set up by the toolstack.

> My fear is that if we switch over to the native one without the
> filtering that the kernel does we open up a can of worms that had been
> closed in the past. The reason is that for dom0 - there is no cpuid
> filtering being done. So it gets everything that the hypervisor sees.

I think we should switch to using the native cpuid pv-op and fix up any
problems as we encounter them (by fixing the toolstack to set up the
cpuid stuff properly).

dom0 isn't supported yet so that's not an issue.  In the future dom0
could be handled by either: a) setting the cpuid policy in the
hypervisor during dom0 create; or b) the kernel can set this up during
early boot.  In both cases using native cpuid should do the right thing.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:27:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz2uv-0006Tq-J9; Fri, 03 Jan 2014 11:27:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz2uu-0006Tk-AR
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 11:27:52 +0000
Received: from [85.158.143.35:36403] by server-3.bemta-4.messagelabs.com id
	05/B6-32360-7BE96C25; Fri, 03 Jan 2014 11:27:51 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388748469!9423991!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21461 invoked from network); 3 Jan 2014 11:27:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:27:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89502286"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 11:27:49 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	06:27:48 -0500
Message-ID: <52C69EB3.3030300@citrix.com>
Date: Fri, 3 Jan 2014 11:27:47 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-9-git-send-email-konrad.wilk@oracle.com>
	<52C54E00.7010508@citrix.com>
	<20140102182413.GB3021@pegasus.dumpdata.com>
In-Reply-To: <20140102182413.GB3021@pegasus.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 08/18] xen/pvh: Load GDT/GS in early PV
 bootup code for BSP.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 18:24, Konrad Rzeszutek Wilk wrote:
>>> +		loadsegment(es, 0);
>>> +		loadsegment(ds, 0);
>>> +		loadsegment(fs, 0);
>>> +#else
>>> +		/* PVH: TODO Implement. */
>>> +		BUG();
>>> +#endif
>>> +		return;    <==============
>>> +	}
>>>  	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
>>>  	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>>
>> If PVH uses native GDT why are these (and possibly other?) GDT ops needed?
> 
> They aren't. There is a 'return' there. I marked it for you with
> '<======'.

I missed that, in which case:

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:29:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz2wL-0006o7-2A; Fri, 03 Jan 2014 11:29:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz2wJ-0006nZ-Cv
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 11:29:19 +0000
Received: from [193.109.254.147:52392] by server-12.bemta-14.messagelabs.com
	id A7/1C-13681-E0F96C25; Fri, 03 Jan 2014 11:29:18 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388748556!8619207!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22666 invoked from network); 3 Jan 2014 11:29:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:29:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87314429"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 11:29:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	06:29:14 -0500
Message-ID: <52C69F09.5010407@citrix.com>
Date: Fri, 3 Jan 2014 11:29:13 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>	<52C58691.4040502@citrix.com>	<20140102183221.GD3021@pegasus.dumpdata.com>
	<20140102173438.40612127@mantra.us.oracle.com>
In-Reply-To: <20140102173438.40612127@mantra.us.oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
	PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 01:34, Mukesh Rathor wrote:
> On Thu, 2 Jan 2014 13:32:21 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
>> On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>>>>
>>>> In the bootup code for PVH we can trap cpuid via vmexit, so don't
>>>> need to use emulated prefix call. We also check for vector
>>>> callback early on, as it is a required feature. PVH also runs at
>>>> default kernel IOPL.
>>>>
>>>> Finally, pure PV settings are moved to a separate function that
>>>> are only called for pure PV, ie, pv with pvmmu. They are also
>>>> #ifdef with CONFIG_XEN_PVMMU.
>>> [...]
>>>> @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax,
>>>> unsigned int *bx, break;
>>>>  	}
>>>>  
>>>> -	asm(XEN_EMULATE_PREFIX "cpuid"
>>>> -		: "=a" (*ax),
>>>> -		  "=b" (*bx),
>>>> -		  "=c" (*cx),
>>>> -		  "=d" (*dx)
>>>> -		: "0" (*ax), "2" (*cx));
>>>> +	if (xen_pvh_domain())
>>>> +		native_cpuid(ax, bx, cx, dx);
>>>> +	else
>>>> +		asm(XEN_EMULATE_PREFIX "cpuid"
>>>> +			: "=a" (*ax),
>>>> +			"=b" (*bx),
>>>> +			"=c" (*cx),
>>>> +			"=d" (*dx)
>>>> +			: "0" (*ax), "2" (*cx));
>>>
>>> For this one off cpuid call it seems preferrable to me to use the
>>> emulate prefix rather than diverge from PV.
>>
>> This was before the PV cpuid was deemed OK to be used on PVH.
>> Will rip this out to use the same version.
> 
> What's wrong with using native cpuid? One of the benefits is that
> cpuid can be trapped via vmexit, and there is also talk of making the PV
> cpuid trap obsolete in the future. I suggest leaving it native.

It should either use the PV interface or the HVM one, not a hybrid of
the two.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:33:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz30b-00074d-OS; Fri, 03 Jan 2014 11:33:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Vz30Z-00074V-H2
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 11:33:43 +0000
Received: from [85.158.137.68:47646] by server-14.bemta-3.messagelabs.com id
	79/82-06105-610A6C25; Fri, 03 Jan 2014 11:33:42 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388748820!7060823!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17233 invoked from network); 3 Jan 2014 11:33:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:33:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87315541"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 11:33:40 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 06:33:39 -0500
Message-ID: <52C6A013.1070906@citrix.com>
Date: Fri, 3 Jan 2014 12:33:39 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julian Elischer <julian@freebsd.org>, <freebsd-xen@freebsd.org>,
	<freebsd-current@freebsd.org>, <xen-devel@lists.xen.org>,
	<gibbs@freebsd.org>, <jhb@freebsd.org>, <kib@freebsd.org>,
	<julien.grall@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-20-git-send-email-roger.pau@citrix.com>
	<52C602B0.7060904@freebsd.org>
In-Reply-To: <52C602B0.7060904@freebsd.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v9 19/19] isa: allow ISA bus to attach to
	xenpv device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 01:22, Julian Elischer wrote:
> On 1/2/14, 4:43 PM, Roger Pau Monne wrote:
>> ---
>>   sys/x86/isa/isa.c |    3 +++
>>   1 files changed, 3 insertions(+), 0 deletions(-)
>>
>> diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
>> index 1a57137..9287ff2 100644
>> --- a/sys/x86/isa/isa.c
>> +++ b/sys/x86/isa/isa.c
>> @@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child,
>> int type, int rid,
>>    * On this platform, isa can also attach to the legacy bus.
>>    */
>>   DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
>> +#ifdef XENHVM
>> +DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
>> +#endif
> read all 19 patches. I'm glad you split them up.. makes it
> understandable.. even by me :-)
> no real negative comments except a question as to whether there is any
> noticeable performance impact on real hardware?

Thanks for taking a look. I haven't seen any performance impact when
running a PVH-capable kernel (a kernel with this patch series applied)
on real hardware. I'm not adding hooks to any hot paths; most of the
code added in this series is only used at boot time.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:45:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3C3-0007VQ-1m; Fri, 03 Jan 2014 11:45:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1Vz3C2-0007VL-5q
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 11:45:34 +0000
Received: from [85.158.137.68:57447] by server-15.bemta-3.messagelabs.com id
	A6/78-11556-DD2A6C25; Fri, 03 Jan 2014 11:45:33 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388749532!3403169!1
X-Originating-IP: [80.160.77.98]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3Ljk4ID0+IDE2NDE4OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24215 invoked from network); 3 Jan 2014 11:45:32 -0000
Received: from pasmtpb.tele.dk (HELO pasmtpB.tele.dk) (80.160.77.98)
	by server-15.tower-31.messagelabs.com with SMTP;
	3 Jan 2014 11:45:32 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpB.tele.dk (Postfix) with ESMTP id 94DBD2D804A;
	Fri,  3 Jan 2014 12:45:31 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Fri, 3 Jan 2014 12:44:12 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Fri, 3 Jan 2014 12:44:11 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: "james.harper@bendigoit.com.au" <james.harper@bendigoit.com.au>
Thread-Topic: [GPLPV] exclude xenscsi from installer, since it is not compiled
Thread-Index: Ac8Id/1R4U4vKAZhQj6eIy6rFl2IMA==
Date: Fri, 3 Jan 2014 11:44:11 +0000
Message-ID: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] [GPLPV] exclude xenscsi from installer,
	since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8320555004146392371=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8320555004146392371==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_975f6d4ac79f4467875a54f1d1e421f5hagstedcserverhagsteddk_"

--_000_975f6d4ac79f4467875a54f1d1e421f5hagstedcserverhagsteddk_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi James

I fiddled around with compiling your GPL PV driver, but I got an error
when generating the MSI packages. The error was related to xenscsi, which
is not compiled, yet WiX tries to include it. By applying the following
patch I was able to compile the driver successfully. Simply removing the
lines from installer.wxs might not be the right approach, though, since I
got an error relating to xenvbd when installing the driver on a Windows
8.1 domU.

Best regards,
Kristian Hagsted Rasmussen

diff -r 5fa56ef930bf installer.wxs
--- a/installer.wxs            Tue Dec 24 15:44:06 2013 +1100
+++ b/installer.wxs         Fri Jan 03 12:34:25 2014 +0100
@@ -99,16 +99,6 @@
               </Component>
             </Directory>
             <?endif ?>
-            <?if $(env.DDK_TARGET_OS) != Win2K ?>
-            <Directory Id='XenScsiDir' Name='xenscsi'>
-              <Component Id='XenScsi' Guid='47C9AB48-3A7D-42b2-AE2C-7F9235C8B7B4'>
-                <File Id='xenscsi.cat' Name='xenscsi.cat' DiskId='1' Source='xenscsi\obj$(env.BUILD_ALT_DIR)\$(var.ARCHDIR)\xenscsi.cat' />
-                <File Id='xenscsi.inf' Name='xenscsi.inf' DiskId='1' Source='xenscsi\obj$(env.BUILD_ALT_DIR)\$(var.ARCHDIR)\xenscsi.inf' />
-                <File Id='xenscsi.sys' Name='xenscsi.sys' DiskId='1' Source='xenscsi\obj$(env.BUILD_ALT_DIR)\$(var.ARCHDIR)\xenscsi.sys' />
-                <difx:Driver Sequence='2' Legacy='yes' PlugAndPlayPrompt='no' ForceInstall='yes' />
-              </Component>
-            </Directory>
-            <?endif ?>
             <Directory Id='XenNetDir' Name='xennet'>
               <Component Id='XenNet' Guid='F16B1EC7-35B1-42c2-9017-22DC23D80BE7'>
                 <File Id='xennet.cat' Name='xennet.cat' DiskId='1' Source='xennet\obj$(env.BUILD_ALT_DIR)\$(var.ARCHDIR)\xennet.cat' />
@@ -160,11 +150,6 @@
       <Feature Id='XenVbd' Title='XenVbd Driver' Level='1' AllowAdvertise='no' InstallDefault='local' Absent='allow'>
         <ComponentRef Id='XenVbd' />
       </Feature>
-      <?if $(env.DDK_TARGET_OS) != Win2K ?>
-      <Feature Id='XenScsi' Title='XenScsi Driver' Level='1' AllowAdvertise='no' InstallDefault='local' Absent='allow'>
-        <ComponentRef Id='XenScsi' />
-      </Feature>
-      <?endif ?>
       <Feature Id='XenNet' Title='XenNet Driver' Level='1' AllowAdvertise='no' InstallDefault='local' Absent='allow'>
         <ComponentRef Id='XenNet' />
       </Feature>
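[Editorial note: the `<?if $(env.DDK_TARGET_OS) != Win2K ?>` ... `<?endif ?>` pairs in the patch above are WiX preprocessor conditionals that include or drop a whole block of the installer source depending on a build environment variable. The toy Python filter below is purely illustrative, not the real WiX toolchain, and the fragment it parses is a hypothetical snippet; it only models the include/drop behaviour of such guards.]

```python
import re

def strip_guarded_blocks(wxs_text, env):
    """Toy model of WiX <?if ?>/<?endif ?> preprocessing: keep a guarded
    block only when its condition ($(env.VAR) != VALUE) holds in `env`."""
    out, keep = [], [True]
    for line in wxs_text.splitlines():
        m = re.search(r"<\?if \$\(env\.(\w+)\) != (\w+) \?>", line)
        if m:
            var, val = m.groups()
            # Nested guards: a block survives only if all enclosing guards hold.
            keep.append(keep[-1] and env.get(var) != val)
            continue
        if "<?endif ?>" in line:
            keep.pop()
            continue
        if keep[-1]:
            out.append(line)
    return "\n".join(out)

snippet = """\
<?if $(env.DDK_TARGET_OS) != Win2K ?>
<Directory Id='XenScsiDir' Name='xenscsi' />
<?endif ?>
<Directory Id='XenNetDir' Name='xennet' />"""

# On a Win2K build the guarded xenscsi directory is dropped;
# only the xennet line remains.
print(strip_guarded_blocks(snippet, {"DDK_TARGET_OS": "Win2K"}))
```

The patch sidesteps the preprocessor entirely by deleting the guarded XenScsi blocks, since the files they reference are never built.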

--_000_975f6d4ac79f4467875a54f1d1e421f5hagstedcserverhagsteddk_--


--===============8320555004146392371==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8320555004146392371==--


From xen-devel-bounces@lists.xen.org Fri Jan 03 11:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3GU-0007wN-Vz; Fri, 03 Jan 2014 11:50:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz3GM-0007wD-V5
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 11:50:09 +0000
Received: from [85.158.143.35:17453] by server-3.bemta-4.messagelabs.com id
	10/F4-32360-AE3A6C25; Fri, 03 Jan 2014 11:50:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388749800!9367318!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22514 invoked from network); 3 Jan 2014 11:50:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:50:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89506944"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 11:50:00 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	06:49:59 -0500
Message-ID: <52C6A3E6.3060801@citrix.com>
Date: Fri, 3 Jan 2014 11:49:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jeroen Groenewegen van der Weyden <groen692@grosc.com>
References: <52C65D06.8080404@grosc.com>
In-Reply-To: <52C65D06.8080404@grosc.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] BUG: unable to handle kernel NULL pointer
	dereference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 06:47, Jeroen Groenewegen van der Weyden wrote:
> Hi All,
> 
> Yesterday my xen machine stopped working, after a kernel panic.
> To me it seems the problem started at something called xen_spin_kick.
> 
> I attached a screenshot of the console of the xen machine.
> 
> Is this email-list the right one to address this bug?

You should report this to a SUSE-specific list, as the SUSE kernel with
Xen support is very different from the Xen support in upstream kernels.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:54:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3KW-00085F-MT; Fri, 03 Jan 2014 11:54:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz3KU-000858-Uo
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 11:54:19 +0000
Received: from [85.158.137.68:39278] by server-6.bemta-3.messagelabs.com id
	AC/85-04868-AE4A6C25; Fri, 03 Jan 2014 11:54:18 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388750055!7052582!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28858 invoked from network); 3 Jan 2014 11:54:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:54:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87319040"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 11:54:15 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	06:54:14 -0500
Message-ID: <52C6A4E5.3080904@citrix.com>
Date: Fri, 3 Jan 2014 11:54:13 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
In-Reply-To: <20140102185023.GG3021@pegasus.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
>>>  	return gnttab_init();
>>>  }
>>>  
>>> -core_initcall(__gnttab_init);
>>> +core_initcall_sync(__gnttab_init);
>>
>> Why has this become _sync?
> 
> It needs to run _after_ xen_pvh_gnttab_setup has run (which is
> called from gnttab_init):


The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
you call xen_pvh_gnttab_setup() from within __gnttab_init()?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 11:57:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 11:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3Nu-0008D9-AV; Fri, 03 Jan 2014 11:57:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1Vz3Ns-0008D4-RZ
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 11:57:49 +0000
Received: from [193.109.254.147:8446] by server-9.bemta-14.messagelabs.com id
	ED/B0-13957-CB5A6C25; Fri, 03 Jan 2014 11:57:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1388750266!6332800!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25793 invoked from network); 3 Jan 2014 11:57:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 11:57:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87319378"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 11:57:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 3 Jan 2014 06:57:45 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1Vz3No-00039p-F0;
	Fri, 03 Jan 2014 11:57:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1Vz3No-0005F5-6b;
	Fri, 03 Jan 2014 11:57:44 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-23938-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 3 Jan 2014 11:57:44 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 23938: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 23938 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/23938/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 23827
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 22832
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 23724

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:03:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3Tj-0000LJ-1X; Fri, 03 Jan 2014 12:03:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz3Th-0000LD-EI
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 12:03:49 +0000
Received: from [85.158.139.211:62938] by server-16.bemta-5.messagelabs.com id
	07/26-11843-427A6C25; Fri, 03 Jan 2014 12:03:48 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1388750626!7703435!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30468 invoked from network); 3 Jan 2014 12:03:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:03:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87320802"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 12:03:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 07:03:44 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz3Tc-0004IS-Sw;
	Fri, 03 Jan 2014 12:03:44 +0000
Date: Fri, 3 Jan 2014 12:02:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
In-Reply-To: <CAOTdubtd6rxX2K8Ze66gwr6Eegj-Rcd-TE8eO5bSPAQz2FmsZQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401031157010.8667@kaball.uk.xensource.com>
References: <215681387800071@web30j.yandex.ru>
	<CAOTdubtd6rxX2K8Ze66gwr6Eegj-Rcd-TE8eO5bSPAQz2FmsZQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrei Zakharov <z-andrew@yandex.ru>, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Does Xen ARM utilize Trustzone somehow?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes, that's correct. Xen needs to run in hypervisor mode, which is
unavailable in secure state.
However, if the TrustZone firmware exports some functionality via SMC
calls, it is possible to use it from Xen or Dom0.

One way to do that would be to allow only the hypervisor to make SMC
calls, and have Xen "proxy" any calls that Dom0 (or other guests) need
to make.
This way you can check in Xen which calls you want to allow a guest to
make. It also lets multiple guests, not just Dom0, access those services.

Sorry for the late reply, but I was on holiday.

On Mon, 23 Dec 2013, karim.allah.ahmed@gmail.com wrote:
> Hi Andrei,
> 
> I think Xen ARM doesn't utilize TrustZone, and you can't run Dom0 in secure mode. In a TrustZone environment that supports
> virtualization, the hypervisor as well as all its guests run in Non-secure state.
> 
> "The virtualization features accessible only at PL2 are implemented only in Non-secure state. Secure state has only two privilege
> levels, PL0 and PL1." ~ ARMv7 Architecture Reference Manual.
> Registers like VTTBR aren't banked and are available only in Non-secure state; VTTBR is currently used for MMU virtualization for
> guests.
> 
> 
> 
> 
> On Mon, Dec 23, 2013 at 12:01 PM, Andrei Zakharov <z-andrew@yandex.ru> wrote:
>       Hi,
> 
>       Does Xen ARM utilize Trustzone somehow? Can Dom0 be in Trustzone?
>       On http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Arndale I read 'The bootloader provided
>       with the Arndale does not let Xen boot in hypervisor mode, so we will use the u-boot provided by Linaro.'
>       Confusing moment...
> 
>       Thanks.
>       Andrei.
> 
>       _______________________________________________
>       Xen-devel mailing list
>       Xen-devel@lists.xen.org
>       http://lists.xen.org/xen-devel
> 
> 
> 
> 
> --
> Karim Allah Ahmed.
> LinkedIn
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:11:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3bE-0000nO-WA; Fri, 03 Jan 2014 12:11:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz3bD-0000nJ-Qx
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 12:11:36 +0000
Received: from [85.158.139.211:34457] by server-4.bemta-5.messagelabs.com id
	DC/9A-26791-7F8A6C25; Fri, 03 Jan 2014 12:11:35 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1388751092!7504574!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22493 invoked from network); 3 Jan 2014 12:11:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:11:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89513310"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 12:11:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 07:11:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz3b9-0004OR-J7;
	Fri, 03 Jan 2014 12:11:31 +0000
Date: Fri, 3 Jan 2014 12:10:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: James Harper <james.harper@bendigoit.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B66CD9CFA@BITCOM1.int.sbss.com.au>
Message-ID: <alpine.DEB.2.02.1401031207580.8667@kaball.uk.xensource.com>
References: <6035A0D088A63A46850C3988ED045A4B66BF85B4@BITCOM1.int.sbss.com.au>
	<CABMPFzjbhsNeAEKo06aNp=AVtToRHgGGzhmgxB1MWGeQDcja4A@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8E58@BITCOM1.int.sbss.com.au>
	<CABMPFzjM+nJ6M9roU2BT9H4NXSLFqyq9cs8smFrsHae5WdhiCA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8F13@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1312081501080.7093@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B66CD9CFA@BITCOM1.int.sbss.com.au>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] tapdisk in debian 3.11 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 20 Dec 2013, James Harper wrote:
> > 
> > On Sat, 7 Dec 2013, James Harper wrote:
> > > >
> > > > > I'm actually using ceph as the backend, and also using it on PV
> > > > > DomU's. Is qdisk only for HVM?
> > > >
> > > > It's also for PV domUs; I have used qdisk for both PV and HVM for months with no
> > > > problems found, only increased performance.
> > 
> > I would definitely recommend qdisk over tapdisk: you can simply use
> > upstream QEMU for development, and it works. Enabling Ceph should just be a
> > matter of passing the right command-line arguments to QEMU.
> > 
> 
> I finally got rbd working under Debian with qdisk.
> 
> I had to enable rbd in qemu (and not build the static user package as the build seems to be broken for rbd/rados static), then patch libxl so it would accept a format of rbd. Repeating what I said in xen-users, having an artificial restriction on formats for qdisk is a bit of a pain for something that should "just work". Is this resolved in newer versions? Or maybe there is some secret sauce I'm missing?
> 
> Anyway, now I can use a disk config of:
> 
> rbd:prod/virt-zoneminder-root,rbd,xvda1,rw,backendtype=qdisk
> 
> and get rbd with ceph under hvm. Probably pv too but I can't seem to get an 'xl console' working to check right now.

Nice! Please send the libxl patches to the list!

The reason why libxl has a specific list of supported formats is that we
thought that a simple wildcard and external bash scripts without a clear
calling convention were too fragile. Of course, if RBD "just works", we
should enable it in libxl.
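For reference, a disk line like the one quoted earlier in this thread goes into the guest's xl config; a minimal fragment could look like the sketch below (the domain name, memory, and vcpu values are invented for illustration, and the rbd target assumes a QEMU built with rbd support plus a libxl patched to accept the rbd format):

```
# Hypothetical domU config sketch -- name and sizes are invented.
name   = "virt-zoneminder"
memory = 1024
vcpus  = 2
disk   = [ 'rbd:prod/virt-zoneminder-root,rbd,xvda1,rw,backendtype=qdisk' ]
```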

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz3bQ-0000oG-Cx; Fri, 03 Jan 2014 12:11:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz3bP-0000nq-Jm
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 12:11:47 +0000
Received: from [85.158.137.68:59177] by server-3.bemta-3.messagelabs.com id
	71/04-10658-209A6C25; Fri, 03 Jan 2014 12:11:46 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388751104!7069253!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29141 invoked from network); 3 Jan 2014 12:11:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:11:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89513555"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 12:11:43 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	07:11:43 -0500
Message-ID: <52C6A8FE.5000500@citrix.com>
Date: Fri, 3 Jan 2014 12:11:42 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>	<52C59367.70707@citrix.com>
	<20140102184754.GF3021@pegasus.dumpdata.com>
In-Reply-To: <20140102184754.GF3021@pegasus.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant
 frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 18:47, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 04:27:19PM +0000, David Vrabel wrote:
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> The 'xen_hvm_resume_frames' used to be an 'unsigned long'
>>> and contain the virtual address of the grants. That was OK
>>> for most architectures (PVHVM, ARM) where the grants are contiguous
>>> in memory. That, however, is not the case for PVH, in which case
>>> we will have to do a lookup of the PFN for each virtual address.
>>>
>>> Instead of doing that, let's make it a structure which will contain
>>> the array of PFNs, the virtual address and the count of said PFNs.
>>>
>>> Also provide generic functions, gnttab_setup_auto_xlat_frames and
>>> gnttab_free_auto_xlat_frames, to populate said structure with
>>> appropriate values for PVHVM and ARM.
>>>
>>> To round it off, change the name from 'xen_hvm_resume_frames' to
>>> a more descriptive one - 'xen_auto_xlat_grant_frames'.
>>>
>>> For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
>>> we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
>> [...]
>>> --- a/drivers/xen/grant-table.c
>>> +++ b/drivers/xen/grant-table.c
>> [...]
>>> @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
>>>  }
>>>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>>>  
>>> +int gnttab_setup_auto_xlat_frames(unsigned long addr)
>>> +{
>>> +	xen_pfn_t *pfn;
>>> +	unsigned int max_nr_gframes = __max_nr_grant_frames();
>>> +	int i;
>>> +
>>> +	if (xen_auto_xlat_grant_frames.count)
>>> +		return -EINVAL;
>>> +
>>> +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
>>> +	if (!pfn)
>>> +		return -ENOMEM;
>>> +	for (i = 0; i < max_nr_gframes; i++)
>>> +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
>>
>> PFN_DOWN(addr) + i looks better to me.
>>
>>> +
>>> +	xen_auto_xlat_grant_frames.vaddr = addr;

I think you should move the xen_remap() call here.

>> Huh? addr is a physical address but you're assigning it to a field
>> called vaddr?  I think you mean to set this field to the result of the
>> xen_remap() call, yes?
> 
> It ends up doing that in gnttab_init. Not to
> xen_auto_xlat_grant_frames.vaddr but to gnttab_shared.addr.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:40:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz437-0001wH-VK; Fri, 03 Jan 2014 12:40:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1Vz436-0001wA-L6
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 12:40:24 +0000
Received: from [193.109.254.147:14644] by server-5.bemta-14.messagelabs.com id
	1B/B5-03510-7BFA6C25; Fri, 03 Jan 2014 12:40:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388752821!8656102!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8001 invoked from network); 3 Jan 2014 12:40:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:40:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="87329944"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 12:40:21 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 07:40:20 -0500
Message-ID: <52C6AFB4.9000007@citrix.com>
Date: Fri, 3 Jan 2014 13:40:20 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Konrad Rzeszutek Wilk
	<konrad@darnok.org>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102193051.GA17665@andromeda.dapyr.net>
	<20140102212359.GA11592@pegasus.dumpdata.com>
In-Reply-To: <20140102212359.GA11592@pegasus.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 22:23, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 03:30:52PM -0400, Konrad Rzeszutek Wilk wrote:
>> On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
>>>> On 24/12/2013 12:55, Andrew Cooper wrote:
>>>>> On 24/12/2013 12:31, David Vrabel wrote:
>>>>>> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
>>>>>>> Hey,
>>>>>>>
>>>>>>> This is with Linux and
>>>>>>>
>>>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
>>>>>>>
>>>>>>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
>>>>>>> compiled with PVH.
>>>>>>>
>>>>>>> I think the same problem would show up if I tried to launch a PV guest 
>>>>>>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
>>>>>>> with the toolstack.
>>>>>> If a kernel with both PVH and PV support enabled cannot boot in PV mode
>>>>>> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
>>>>>>
>>>>>> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
>>>>>> even older are still widely deployed.
>>>>>>
>>>>>> David
>>>>>
>>>>> I believe that the problem is because the elf parsing code is not
>>>>> sufficiently forward-compatible aware, and rejects the PVH kernel
>>>>> because it has an unrecognised Xen elf note field.  This is not a kernel
>>>>> bug.
>>>
>>> It (Xen 4.1) has the logic to ignore unrecognized Xen elf note fields. But
>>> no Xen version has the logic to ignore an unrecognized string in the
>>> "SUPPORTED_FEATURES" note.
>>>
>>>>>
>>>>> The elf parsing should accept unrecognised fields for forward
>>>>> compatibility, which would then allow a PV & PVH compiled kernel to run
>>>>> in PV mode.
>>>>
>>>> It should but it doesn't, so a different way needs to be found for the
>>>> kernel to report (optional) PVH support.  A method that is compatible
>>>> with older toolstacks.
>>>
>>> Also known as changes to the PVH ABI.
>>>
>>> Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager-pro-temp), Jan,
>>>
>>> a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
>>> just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
>>> for PVH guests.
>>>
>>> b) Or dropping that altogether and introducing a new Xen elf note field, say:
>>>
>>> XEN_ELFNOTE_PVH_VERSION
>>>
>>
>> c).
>>
>> Use the 'XEN_ELFNOTE_SUPPORTED_FEATURES' which says:
>>  *
>>  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
>>  * kernel to specify support for features that older hypervisors don't
>>  * know about. The set of features 4.2 and newer hypervisors will
>>  * consider supported by the kernel is the combination of the sets
>>  * specified through this and the string note.
>>
>> for hvm_callback_vector parameter.
>>
>>>
>>> Which way should we do this?
>>
>> The c) way looks the best. Ian, would you be OK with that idea for 4.4?
> 
> Seems that not only does it work without any changes in Xen 4.4, but it
> is all in the Linux kernel, and it allows us to boot a Linux kernel
> with PV and PVH support:
> 
> 
> 
> diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> index 56f42c0..2ce56bf 100644
> --- a/arch/x86/xen/xen-head.S
> +++ b/arch/x86/xen/xen-head.S
> @@ -11,12 +11,22 @@
>  #include <asm/page_types.h>
>  
>  #include <xen/interface/elfnote.h>
> +#include <xen/interface/features.h>
>  #include <asm/xen/interface.h>
>  
>  #ifdef CONFIG_XEN_PVH
> -#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
> +#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> +/* Note the lack of 'hvm_callback_vector'. Older hypervisor will
> + * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
> + * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
> + */
> +#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
> +		      (1 << XENFEAT_auto_translated_physmap) | \
> +		      (1 << XENFEAT_supervisor_mode_kernel) | \
> +		      (1 << XENFEAT_hvm_callback_vector))
>  #else
>  #define PVH_FEATURES_STR  ""
> +#define PVH_FEATURES (0)
>  #endif
>  
>  	__INIT
> @@ -102,6 +112,9 @@ NEXT_HYPERCALL(arch_6)
>  	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
>  	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
>  	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
> +	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
> +						(1 << XENFEAT_writable_page_tables) |

PVH_FEATURES already contains XENFEAT_writable_page_tables, shouldn't
you remove it from PVH_FEATURES if you are adding it unconditionally
here? (or is this just to make clear that you need
XENFEAT_writable_page_tables for both PVH and PV?)

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:40:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz432-0001w3-3g; Fri, 03 Jan 2014 12:40:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1Vz430-0001vy-RK
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 12:40:19 +0000
Received: from [85.158.143.35:6012] by server-3.bemta-4.messagelabs.com id
	55/E0-32360-2BFA6C25; Fri, 03 Jan 2014 12:40:18 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388752814!9363302!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1219 invoked from network); 3 Jan 2014 12:40:17 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 3 Jan 2014 12:40:17 -0000
Received: from bitcom1.int.sbss.com.au
	([2002:cb10:e0fe:201:a5ca:4fd3:14f:ad5d])
	by smtp2.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1Vz42p-0007ko-Ms; Fri, 03 Jan 2014 23:40:07 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Fri, 3 Jan 2014 23:40:05 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] tapdisk in debian 3.11 kernel
Thread-Index: Ac7y62k3QBWA4/hKQi+7HJ6JaU0YK///y/2A//9HFeCAAL3EAP//RRUwgAKpaAD/69p60AeaFCsA//9HF1A=
Date: Fri, 3 Jan 2014 12:40:03 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F354DDE@BITCOM1.int.sbss.com.au>
References: <6035A0D088A63A46850C3988ED045A4B66BF85B4@BITCOM1.int.sbss.com.au>
	<CABMPFzjbhsNeAEKo06aNp=AVtToRHgGGzhmgxB1MWGeQDcja4A@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8E58@BITCOM1.int.sbss.com.au>
	<CABMPFzjM+nJ6M9roU2BT9H4NXSLFqyq9cs8smFrsHae5WdhiCA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8F13@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1312081501080.7093@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B66CD9CFA@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1401031207580.8667@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401031207580.8667@kaball.uk.xensource.com>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.241]
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] tapdisk in debian 3.11 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > and get rbd with ceph under hvm. Probably pv too but I can't seem to get
> > an 'xl console' working to check right now.
> 
> Nice! Please send the libxl patches to the list!
> 
> The reason why libxl has a specific list of supported formats is that we
> thought that a simple wildcard and external bash scripts without a clear
> calling convention were too fragile. Of course if RBD "just works", we
> should enable it in libxl.

It turns out that simply omitting the format altogether is sufficient, and probably preferable. Something like:

disk = [ 'rbd:prod/virt-zoneminder-root,,xvda1,rw,backendtype=qdisk' ]

seems to work fine on a standard Debian system with only qemu recompiled with rbd enabled. (My test system has probably suffered a bit of bitrot and so couldn't really be called standard, but I did just reinstall the xen-utils-4.3 package that contains libxl, so it should be back to defaults.)

I can probably still put a patch together, but I think a better-thought-out "if backendtype=qdisk then don't check the format" rule would be preferable, unless it opens up a security hole or something.

James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:52:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:52:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4Eh-0002UW-89; Fri, 03 Jan 2014 12:52:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz4Eg-0002UR-58
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 12:52:22 +0000
Received: from [193.109.254.147:39883] by server-11.bemta-14.messagelabs.com
	id 87/8D-20576-582B6C25; Fri, 03 Jan 2014 12:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1388753539!8667819!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6554 invoked from network); 3 Jan 2014 12:52:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89521119"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 12:52:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 07:52:18 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz4Ec-0004wU-FT;
	Fri, 03 Jan 2014 12:52:18 +0000
Date: Fri, 3 Jan 2014 12:51:30 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: James Harper <james.harper@bendigoit.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F354DDE@BITCOM1.int.sbss.com.au>
Message-ID: <alpine.DEB.2.02.1401031248270.8667@kaball.uk.xensource.com>
References: <6035A0D088A63A46850C3988ED045A4B66BF85B4@BITCOM1.int.sbss.com.au>
	<CABMPFzjbhsNeAEKo06aNp=AVtToRHgGGzhmgxB1MWGeQDcja4A@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8E58@BITCOM1.int.sbss.com.au>
	<CABMPFzjM+nJ6M9roU2BT9H4NXSLFqyq9cs8smFrsHae5WdhiCA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8F13@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1312081501080.7093@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B66CD9CFA@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1401031207580.8667@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B6F354DDE@BITCOM1.int.sbss.com.au>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] tapdisk in debian 3.11 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, James Harper wrote:
> > > and get rbd with ceph under hvm. Probably pv too but I can't seem to get
> > > an 'xl console' working to check right now.
> > 
> > Nice! Please send the libxl patches to the list!
> > 
> > The reason why libxl has a specific list of supported formats is that we
> > thought that a simple wildcard and external bash scripts without a clear
> > calling convention were too fragile. Of course if RBD "just works", we
> > should enable it in libxl.

Sorry, I realize now that the word "format" in my reply was wrong.  I
should have written "targets", like iscsi and nfs; each target can
support multiple image formats, such as raw and qcow2.


> It turns out that simply omitting the format altogether is sufficient, and probably preferable. Something like:
> 
> disk = [ 'rbd:prod/virt-zoneminder-root,,xvda1,rw,backendtype=qdisk' ]
> 
> seems to work fine on a standard Debian system with only qemu recompiled with rbd enabled (my test system has probably suffered a bit of bitrot and so couldn't really be called standard, but I did just reinstall the xen-utils-4.3 package that contains libxl so it should be back to defaults)
> 
> I can probably still put a patch together but I think a better thought out "if backendtype=qdisk then don't check the format" would be better, unless it opens up a security hole or something.

Unfortunately it does; in fact, we turn off automatic format recognition
in QEMU on purpose. But in this case, shouldn't:

disk = [ 'rbd:prod/virt-zoneminder-root,raw,xvda1,rw,backendtype=qdisk' ]

just work?
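
[Archive note: the reason format auto-detection is a security hole is
that a guest with write access to its raw image can place a qcow2 header
at offset 0; a host that probes the format would then honour
guest-controlled qcow2 metadata such as backing-file paths. A toy
illustration of the misdetection below -- this is not QEMU's actual
probing code.]

```python
# A raw disk image that a malicious guest has written to: it now starts
# with the qcow2 magic bytes, so naive format probing misdetects it.
QCOW2_MAGIC = b"QFI\xfb"  # real qcow2 magic (0x514649fb)

def probe_format(image_bytes):
    """Toy format probe: anything starting with the qcow2 magic is
    treated as qcow2; everything else is treated as raw."""
    return "qcow2" if image_bytes.startswith(QCOW2_MAGIC) else "raw"

guest_controlled = QCOW2_MAGIC + b"\x00" * 60  # guest wrote this "raw" data
print(probe_format(guest_controlled))  # prints: qcow2
```

Pinning the format to raw in the disk spec, as above, sidesteps the
probe entirely.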

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 12:56:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 12:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4Id-0002do-3M; Fri, 03 Jan 2014 12:56:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz4Ic-0002di-7m
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 12:56:26 +0000
Received: from [85.158.137.68:45842] by server-6.bemta-3.messagelabs.com id
	5B/DB-04868-973B6C25; Fri, 03 Jan 2014 12:56:25 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388753783!7065406!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30154 invoked from network); 3 Jan 2014 12:56:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 12:56:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,597,1384300800"; d="scan'208";a="89521707"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 12:56:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 07:56:21 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz4IY-0004zb-0K;
	Fri, 03 Jan 2014 12:56:22 +0000
Date: Fri, 3 Jan 2014 12:55:34 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20131218212249.GF11717@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031252380.8667@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1312181913120.8667@kaball.uk.xensource.com>
	<20131218212249.GF11717@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xensource.com,
	qemu-devel@nongnu.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/2] build QEMU with Xen support on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 18 Dec 2013, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 18, 2013 at 07:15:43PM +0000, Stefano Stabellini wrote:
> > Hi all,
> > the xenpv machine provides Xen paravirtualized backends for console,
> > disk and framebuffer. xenfb in particular is the only open source
> > framebuffer backend available.
> > On ARM we don't need QEMU to emulate any hardware but xenpv would still
> > be useful at least to provide xenfb.
> > This patch series allows QEMU to build and run (xenpv) with Xen support
> > on ARM.
> > 
> 
> This should work out of the box (with changes to the toolstack) under
> x86, right? Would I have to do some 'vga=none' to disable the VGA
> framebuffer?

Yes, it should already be working on x86 without changes to the toolstack.
You don't even need to pass 'vga=none': that is for HVM guests, while
for PVH guests we simply want the xenpv QEMU, the one exclusively providing
userspace pv backends.



> > Changes in v2:
> > - use SCNu64 instead of PRIu64 with sscanf;
> > - assert mfn == (xen_pfn_t)mfn;
> > - use HOST_LONG_BITS to check for QEMU's address space size.
> > 
> > 
> > Stefano Stabellini (2):
> >       xen_backend: introduce xenstore_read_uint64 and xenstore_read_fe_uint64
> >       xen: build on ARM
> > 
> >  hw/display/xenfb.c           |   18 ++++++++++--------
> >  hw/xen/xen_backend.c         |   18 ++++++++++++++++++
> >  include/hw/xen/xen_backend.h |    2 ++
> >  xen-all.c                    |    2 +-
> >  xen-mapcache.c               |    4 ++--
> >  5 files changed, 33 insertions(+), 11 deletions(-)
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:15:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4aP-0003Oa-Tu; Fri, 03 Jan 2014 13:14:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz4aO-0003OV-I3
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 13:14:48 +0000
Received: from [85.158.139.211:31782] by server-9.bemta-5.messagelabs.com id
	16/AD-15098-7C7B6C25; Fri, 03 Jan 2014 13:14:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1388754885!7705781!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5530 invoked from network); 3 Jan 2014 13:14:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:14:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89526783"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 13:14:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 08:14:44 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz4aK-0005F3-KN;
	Fri, 03 Jan 2014 13:14:44 +0000
Date: Fri, 3 Jan 2014 13:13:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mark Salter <msalter@redhat.com>
In-Reply-To: <1388431259.2564.15.camel@deneb.redhat.com>
Message-ID: <alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, catalin.marinas@arm.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	Chen Baozi <baozich@gmail.com>, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 30 Dec 2013, Mark Salter wrote:
> On Mon, 2013-12-30 at 14:55 +0800, Chen Baozi wrote:
> > xen_remap used to be defined as ioremap_cached on arm64. In commit
> > c04e8e2fe, a new ioremap_cache was implemented and ioremap_cached
> > was deleted, while xen_remap stayed the same. This leads to a build
> > failure when CONFIG_HVC_XEN is enabled. Redefine xen_remap as
> > ioremap_cache on arm64 to fix it.
> > 
> 
> I missed that include of arm header by arm64 when looking for users
> of arm64's ioremap_cached() when working on commit c04e8e2fe. Anyway,
> grepping the kernel tree, I see:
> 
>   ioremap_cached()
>     defined by: arm, metag, unicore32
>     used by: arch/arm/include/asm/xen/page.h
>              drivers/mtd/maps/pxa2xx-flash.c
> 
>   ioremap_cache()
>     defined by: arm64, sh, xtensa, ia64, x86
>     used by: drivers/video/vesafb.c
>              drivers/char/toshiba.c
>              drivers/acpi/apei
>              drivers/lguest/lguest_device.c
>              drivers/sfi/sfi_core.c
>              include/linux/acpi_io.h
> 
> I think it would be better to just avoid the confusion and the ifdef in
> asm/xen/page.h by globally changing ioremap_cached to ioremap_cache.

While I welcome the suggestion, this is a critical fix for a regression
and I think it should go in as soon as possible, maybe 3.13-rc7. I don't
think a global s/ioremap_cached/ioremap_cache/ would be acceptable at
this stage.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:15:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4bV-0003SL-Jj; Fri, 03 Jan 2014 13:15:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1Vz4bU-0003S7-39
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 13:15:56 +0000
Received: from [85.158.137.68:15112] by server-7.bemta-3.messagelabs.com id
	52/15-27599-B08B6C25; Fri, 03 Jan 2014 13:15:55 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388754953!7090218!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30624 invoked from network); 3 Jan 2014 13:15:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:15:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87338084"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 13:15:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 08:15:51 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1Vz4bQ-0005G2-31;
	Fri, 03 Jan 2014 13:15:52 +0000
Message-ID: <1388754947.28243.4.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>
Date: Fri, 3 Jan 2014 13:15:47 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH] Avoid race conditions in HPET initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Avoid turning on legacy interrupts before the HPET event channel has
been set up. In particular, the spinlock can still be uninitialised at
the point at which the first interrupt arrives.

Also, fix a memory leak of a cpumask in the unlikely event that
hpet_assign_irq() fails.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/hpet.c |   59 +++++++++++++++++++++++++++------------------------
 1 file changed, 31 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 3a4f7e8..0dedfb7 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -387,6 +387,26 @@ static int __init hpet_assign_irq(struct hpet_event_channel *ch)
     return 0;
 }
 
+static void __init hpet_init_channel(struct hpet_event_channel *ch)
+{
+    u64 hpet_rate = hpet_setup();
+
+    /*
+     * The period is a femto seconds value. We need to calculate the scaled
+     * math multiplication factor for nanosecond to hpet tick conversion.
+     */
+    ch->mult = div_sc((unsigned long)hpet_rate,
+                                     1000000000ul, 32);
+    ch->shift = 32;
+    ch->next_event = STIME_MAX;
+    spin_lock_init(&ch->lock);
+    ch->event_handler = handle_hpet_broadcast;
+
+    ch->msi.irq = -1;
+    ch->msi.msi_attrib.maskbit = 1;
+    ch->msi.msi_attrib.pos = MSI_TYPE_HPET;
+}
+
 static void __init hpet_fsb_cap_lookup(void)
 {
     u32 id;
@@ -423,11 +443,15 @@ static void __init hpet_fsb_cap_lookup(void)
             break;
         }
 
+        hpet_init_channel(ch);
+
         ch->flags = 0;
         ch->idx = i;
 
         if ( hpet_assign_irq(ch) == 0 )
             num_hpets_used++;
+        else
+            free_cpumask_var(ch->cpumask);
     }
 
     printk(XENLOG_INFO "HPET: %u timers usable for broadcast (%u total)\n",
@@ -553,7 +577,6 @@ void __init hpet_broadcast_init(void)
 {
     u64 hpet_rate = hpet_setup();
     u32 hpet_id, cfg;
-    unsigned int i, n;
 
     if ( hpet_rate == 0 || hpet_broadcast_is_available() )
         return;
@@ -565,7 +588,6 @@ void __init hpet_broadcast_init(void)
     {
         /* Stop HPET legacy interrupts */
         cfg &= ~HPET_CFG_LEGACY;
-        n = num_hpets_used;
     }
     else
     {
@@ -577,11 +599,10 @@ void __init hpet_broadcast_init(void)
             hpet_events = xzalloc(struct hpet_event_channel);
         if ( !hpet_events || !zalloc_cpumask_var(&hpet_events->cpumask) )
             return;
-        hpet_events->msi.irq = -1;
+        hpet_init_channel(hpet_events);
 
         /* Start HPET legacy interrupts */
         cfg |= HPET_CFG_LEGACY;
-        n = 1;
 
         if ( !force_hpet_broadcast )
             pv_rtc_handler = handle_rtc_once;
@@ -589,31 +610,13 @@ void __init hpet_broadcast_init(void)
 
     hpet_write32(cfg, HPET_CFG);
 
-    for ( i = 0; i < n; i++ )
+    if ( cfg & HPET_CFG_LEGACY )
     {
-        if ( i == 0 && (cfg & HPET_CFG_LEGACY) )
-        {
-            /* set HPET T0 as oneshot */
-            cfg = hpet_read32(HPET_Tn_CFG(0));
-            cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
-            cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
-            hpet_write32(cfg, HPET_Tn_CFG(0));
-        }
-
-        /*
-         * The period is a femto seconds value. We need to calculate the scaled
-         * math multiplication factor for nanosecond to hpet tick conversion.
-         */
-        hpet_events[i].mult = div_sc((unsigned long)hpet_rate,
-                                     1000000000ul, 32);
-        hpet_events[i].shift = 32;
-        hpet_events[i].next_event = STIME_MAX;
-        spin_lock_init(&hpet_events[i].lock);
-        wmb();
-        hpet_events[i].event_handler = handle_hpet_broadcast;
-
-        hpet_events[i].msi.msi_attrib.maskbit = 1;
-        hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
+        /* set HPET T0 as oneshot */
+        cfg = hpet_read32(HPET_Tn_CFG(0));
+        cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
+        cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
+        hpet_write32(cfg, HPET_Tn_CFG(0));
     }
 
     if ( !num_hpets_used )
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:20:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4fO-0003wD-9P; Fri, 03 Jan 2014 13:19:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz4fN-0003w8-5A
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 13:19:57 +0000
Received: from [85.158.143.35:15745] by server-1.bemta-4.messagelabs.com id
	C9/C1-02132-CF8B6C25; Fri, 03 Jan 2014 13:19:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388755194!9371479!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24756 invoked from network); 3 Jan 2014 13:19:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:19:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89528013"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 13:19:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 08:19:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz4fJ-0005JG-Ho;
	Fri, 03 Jan 2014 13:19:53 +0000
Date: Fri, 3 Jan 2014 13:19:06 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Wei Liu <liuw@liuw.name>
In-Reply-To: <CAOsiSVWe7zRL2qmExHL3D4ayYyHmzHU=PonQ+-Q625Vkji+J4A@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401031318220.8667@kaball.uk.xensource.com>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<CAOsiSVWe7zRL2qmExHL3D4ayYyHmzHU=PonQ+-Q625Vkji+J4A@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	catalin.marinas@arm.com, linux-kernel@vger.kernel.org,
	msalter@redhat.com, xen-devel <xen-devel@lists.xenproject.org>,
	Chen Baozi <baozich@gmail.com>, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 30 Dec 2013, Wei Liu wrote:
> On Mon, Dec 30, 2013 at 6:55 AM, Chen Baozi <baozich@gmail.com> wrote:
> > xen_remap used to be defined as ioremap_cached on arm64. In commit
> > c04e8e2fe, a new ioremap_cache was implemented, and ioremap_cached
> > was deleted, while xen_remap stayed the same. This leads to a build
> > failure when compiling with CONFIG_HVC_XEN enabled. Redefine xen_remap
> > as ioremap_cache on arm64 to fix it.
> >
> > Signed-off-by: Chen Baozi <baozich@gmail.com>
> > ---
> >  arch/arm/include/asm/xen/page.h | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> > index 75579a9..b3368df 100644
> > --- a/arch/arm/include/asm/xen/page.h
> > +++ b/arch/arm/include/asm/xen/page.h
> > @@ -117,6 +117,10 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
> >         return __set_phys_to_machine(pfn, mfn);
> >  }
> >
> > +#ifdef CONFIG_ARM64
> > +#define xen_remap(cookie, size) ioremap_cache((cookie), (size))
> > +#else
> >  #define xen_remap(cookie, size) ioremap_cached((cookie), (size));
> 
> Looks like there's a redundant semicolon in the original implementation.

Well spotted. We can fix this separately.
Do you want to send a patch for it?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:26:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4la-00045z-6F; Fri, 03 Jan 2014 13:26:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vz4lY-00045u-9y
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 13:26:20 +0000
Received: from [193.109.254.147:2945] by server-15.bemta-14.messagelabs.com id
	7D/2B-22186-B7AB6C25; Fri, 03 Jan 2014 13:26:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1388755576!8646839!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6531 invoked from network); 3 Jan 2014 13:26:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:26:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89529092"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 13:26:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 08:26:15 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vz4lT-0005OY-Uw;
	Fri, 03 Jan 2014 13:26:15 +0000
Message-ID: <52C6BA77.8030809@citrix.com>
Date: Fri, 3 Jan 2014 13:26:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Frediano Ziglio <frediano.ziglio@citrix.com>
References: <1388754947.28243.4.camel@hamster.uk.xensource.com>
In-Reply-To: <1388754947.28243.4.camel@hamster.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Keir Fraser <keir@xen.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] Avoid race conditions in HPET initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 13:15, Frediano Ziglio wrote:
> Avoid turning on legacy interrupts before hpet_event has been set up.
> Particularly, the spinlock can be uninitialised at the point at which
> the interrupt first arrives.
>
> Also, fix a memory leak of a cpumask in the unlikely event that
> hpet_assign_irq() fails.
>
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

For reference, this was the underlying issue behind my patch "[Patch]
xen/spinlock: Improvements to check_lock()", Message ID
"1387816747-21470-1-git-send-email-andrew.cooper3@citrix.com"

The hpet code was receiving a legacy interrupt between enabling
interrupts and initialising the spinlock, and check_lock() was mistaking
the 0 in lock->debug.irq_safe for "not IRQ-safe".

~Andrew

> ---
>  xen/arch/x86/hpet.c |   59 +++++++++++++++++++++++++++------------------------
>  1 file changed, 31 insertions(+), 28 deletions(-)
>
> diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
> index 3a4f7e8..0dedfb7 100644
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -387,6 +387,26 @@ static int __init hpet_assign_irq(struct hpet_event_channel *ch)
>      return 0;
>  }
>  
> +static void __init hpet_init_channel(struct hpet_event_channel *ch)
> +{
> +    u64 hpet_rate = hpet_setup();
> +
> +    /*
> +     * The period is a femto seconds value. We need to calculate the scaled
> +     * math multiplication factor for nanosecond to hpet tick conversion.
> +     */
> +    ch->mult = div_sc((unsigned long)hpet_rate,
> +                                     1000000000ul, 32);
> +    ch->shift = 32;
> +    ch->next_event = STIME_MAX;
> +    spin_lock_init(&ch->lock);
> +    ch->event_handler = handle_hpet_broadcast;
> +
> +    ch->msi.irq = -1;
> +    ch->msi.msi_attrib.maskbit = 1;
> +    ch->msi.msi_attrib.pos = MSI_TYPE_HPET;
> +}
> +
>  static void __init hpet_fsb_cap_lookup(void)
>  {
>      u32 id;
> @@ -423,11 +443,15 @@ static void __init hpet_fsb_cap_lookup(void)
>              break;
>          }
>  
> +        hpet_init_channel(ch);
> +
>          ch->flags = 0;
>          ch->idx = i;
>  
>          if ( hpet_assign_irq(ch) == 0 )
>              num_hpets_used++;
> +        else
> +            free_cpumask_var(ch->cpumask);
>      }
>  
>      printk(XENLOG_INFO "HPET: %u timers usable for broadcast (%u total)\n",
> @@ -553,7 +577,6 @@ void __init hpet_broadcast_init(void)
>  {
>      u64 hpet_rate = hpet_setup();
>      u32 hpet_id, cfg;
> -    unsigned int i, n;
>  
>      if ( hpet_rate == 0 || hpet_broadcast_is_available() )
>          return;
> @@ -565,7 +588,6 @@ void __init hpet_broadcast_init(void)
>      {
>          /* Stop HPET legacy interrupts */
>          cfg &= ~HPET_CFG_LEGACY;
> -        n = num_hpets_used;
>      }
>      else
>      {
> @@ -577,11 +599,10 @@ void __init hpet_broadcast_init(void)
>              hpet_events = xzalloc(struct hpet_event_channel);
>          if ( !hpet_events || !zalloc_cpumask_var(&hpet_events->cpumask) )
>              return;
> -        hpet_events->msi.irq = -1;
> +        hpet_init_channel(hpet_events);
>  
>          /* Start HPET legacy interrupts */
>          cfg |= HPET_CFG_LEGACY;
> -        n = 1;
>  
>          if ( !force_hpet_broadcast )
>              pv_rtc_handler = handle_rtc_once;
> @@ -589,31 +610,13 @@ void __init hpet_broadcast_init(void)
>  
>      hpet_write32(cfg, HPET_CFG);
>  
> -    for ( i = 0; i < n; i++ )
> +    if ( cfg & HPET_CFG_LEGACY )
>      {
> -        if ( i == 0 && (cfg & HPET_CFG_LEGACY) )
> -        {
> -            /* set HPET T0 as oneshot */
> -            cfg = hpet_read32(HPET_Tn_CFG(0));
> -            cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
> -            cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
> -            hpet_write32(cfg, HPET_Tn_CFG(0));
> -        }
> -
> -        /*
> -         * The period is a femto seconds value. We need to calculate the scaled
> -         * math multiplication factor for nanosecond to hpet tick conversion.
> -         */
> -        hpet_events[i].mult = div_sc((unsigned long)hpet_rate,
> -                                     1000000000ul, 32);
> -        hpet_events[i].shift = 32;
> -        hpet_events[i].next_event = STIME_MAX;
> -        spin_lock_init(&hpet_events[i].lock);
> -        wmb();
> -        hpet_events[i].event_handler = handle_hpet_broadcast;
> -
> -        hpet_events[i].msi.msi_attrib.maskbit = 1;
> -        hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
> +        /* set HPET T0 as oneshot */
> +        cfg = hpet_read32(HPET_Tn_CFG(0));
> +        cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
> +        cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
> +        hpet_write32(cfg, HPET_Tn_CFG(0));
>      }
>  
>      if ( !num_hpets_used )


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> -        n = num_hpets_used;
>      }
>      else
>      {
> @@ -577,11 +599,10 @@ void __init hpet_broadcast_init(void)
>              hpet_events = xzalloc(struct hpet_event_channel);
>          if ( !hpet_events || !zalloc_cpumask_var(&hpet_events->cpumask) )
>              return;
> -        hpet_events->msi.irq = -1;
> +        hpet_init_channel(hpet_events);
>  
>          /* Start HPET legacy interrupts */
>          cfg |= HPET_CFG_LEGACY;
> -        n = 1;
>  
>          if ( !force_hpet_broadcast )
>              pv_rtc_handler = handle_rtc_once;
> @@ -589,31 +610,13 @@ void __init hpet_broadcast_init(void)
>  
>      hpet_write32(cfg, HPET_CFG);
>  
> -    for ( i = 0; i < n; i++ )
> +    if ( cfg & HPET_CFG_LEGACY )
>      {
> -        if ( i == 0 && (cfg & HPET_CFG_LEGACY) )
> -        {
> -            /* set HPET T0 as oneshot */
> -            cfg = hpet_read32(HPET_Tn_CFG(0));
> -            cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
> -            cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
> -            hpet_write32(cfg, HPET_Tn_CFG(0));
> -        }
> -
> -        /*
> -         * The period is a femto seconds value. We need to calculate the scaled
> -         * math multiplication factor for nanosecond to hpet tick conversion.
> -         */
> -        hpet_events[i].mult = div_sc((unsigned long)hpet_rate,
> -                                     1000000000ul, 32);
> -        hpet_events[i].shift = 32;
> -        hpet_events[i].next_event = STIME_MAX;
> -        spin_lock_init(&hpet_events[i].lock);
> -        wmb();
> -        hpet_events[i].event_handler = handle_hpet_broadcast;
> -
> -        hpet_events[i].msi.msi_attrib.maskbit = 1;
> -        hpet_events[i].msi.msi_attrib.pos = MSI_TYPE_HPET;
> +        /* set HPET T0 as oneshot */
> +        cfg = hpet_read32(HPET_Tn_CFG(0));
> +        cfg &= ~(HPET_TN_LEVEL | HPET_TN_PERIODIC);
> +        cfg |= HPET_TN_ENABLE | HPET_TN_32BIT;
> +        hpet_write32(cfg, HPET_Tn_CFG(0));
>      }
>  
>      if ( !num_hpets_used )


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:33:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4s6-0004Uv-2O; Fri, 03 Jan 2014 13:33:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz4s4-0004Uo-OM
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 13:33:04 +0000
Received: from [85.158.139.211:30264] by server-4.bemta-5.messagelabs.com id
	E4/1E-26791-F0CB6C25; Fri, 03 Jan 2014 13:33:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388755981!7672230!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21365 invoked from network); 3 Jan 2014 13:33:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:33:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89530347"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 13:32:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 08:32:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz4rx-0005TL-Cp;
	Fri, 03 Jan 2014 13:32:57 +0000
Date: Fri, 3 Jan 2014 13:32:09 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401031329160.8667@kaball.uk.xensource.com>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
	<alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, catalin.marinas@arm.com, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, robherring2@gmail.com,
	Mark Salter <msalter@redhat.com>, xen-devel@lists.xenproject.org,
	Chen Baozi <baozich@gmail.com>, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Stefano Stabellini wrote:
> On Mon, 30 Dec 2013, Mark Salter wrote:
> > On Mon, 2013-12-30 at 14:55 +0800, Chen Baozi wrote:
> > > xen_remap used to be defined as ioremap_cached on arm64. In commit
> > > c04e8e2fe, a new ioremap_cache was implemented, and ioremap_cached
> > > was deleted, while xen_remap stayed the same. This leads to a
> > > build failure with CONFIG_HVC_XEN. Redefine xen_remap as
> > > ioremap_cache on arm64 to fix it.
> > > 
> > 
> > I missed that include of arm header by arm64 when looking for users
> > of arm64's ioremap_cached() when working on commit c04e8e2fe. Anyway,
> > grepping the kernel tree, I see:
> > 
> >   ioremap_cached()
> >     defined by: arm, metag, unicore32
> >     used by: arch/arm/include/asm/xen/page.h
> >              drivers/mtd/maps/pxa2xx-flash.c
> > 
> >   ioremap_cache()
> >     defined by: arm64, sh, xtensa, ia64, x86
> >     used by: drivers/video/vesafb.c
> >              drivers/char/toshiba.c
> >              drivers/acpi/apei
> >              drivers/lguest/lguest_device.c
> >              drivers/sfi/sfi_core.c
> >              include/linux/acpi_io.h
> > 
> > I think it would be better to just avoid the confusion and the ifdef in
> > asm/xen/page.h by globally changing ioremap_cached to ioremap_cache.
> 
> While I welcome the suggestion, this is a critical fix for a regression
> that I think should go in as soon as possible, maybe 3.13-rc7, while I
> don't think that a global s/ioremap_cached/ioremap_cache would be
> acceptable at this stage.

BTW Rob Herring sent a patch in November to solve the problem by
renaming ioremap_cached to ioremap_cache under arch/arm:

http://marc.info/?l=linux-arm-kernel&m=138394601804783

I wrongly assumed that was going to go in.
I think it is too late for that now. We'll have to go with this patch
for 3.13.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 13:38:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 13:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz4wj-0004fq-Vu; Fri, 03 Jan 2014 13:37:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz4wi-0004fh-L8
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 13:37:52 +0000
Received: from [85.158.143.35:26155] by server-3.bemta-4.messagelabs.com id
	E9/11-32360-F2DB6C25; Fri, 03 Jan 2014 13:37:51 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1388756269!2321971!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12407 invoked from network); 3 Jan 2014 13:37:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 13:37:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87342822"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 13:37:49 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	08:37:48 -0500
Message-ID: <52C6BD2B.6090108@citrix.com>
Date: Fri, 3 Jan 2014 13:37:47 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<52C598C6.7040902@citrix.com>
	<20140102190256.GH3021@pegasus.dumpdata.com>
In-Reply-To: <20140102190256.GH3021@pegasus.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12] Linux Xen PVH support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/01/14 19:02, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 04:50:14PM +0000, David Vrabel wrote:
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> The patches, also available at
>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v12
>>>
>>> implement the necessary functionality to boot a PV guest in PVH mode.
>>
>> In general this looks in much better shape now.  Some of the refactoring
>> patches should be queued for 3.14.
> 
> <nods> Thank you for your review!
>>
>> I'm not sure when the rest wants to go in given that the PVH
>> hypervisor ABI is not yet finalized and is missing support for a number
>> of things with no visible plan for how/when/if this missing
>> functionality will be implemented.
> 
> We could follow the same path that Xen ARM in Linux did.

ARM was a whole new architecture with limited hardware availability
initially so I think what the ARM port did was the right approach.  It's
less clear to me if this is sensible for an existing, widely used
architecture.

If the PVH ABI were fixed and documented then it would be a no-brainer
to merge kernel support even if it was not fully feature complete.

What I don't want is guests or dom0s that used to boot in PVH mode
ending up not booting at all if Xen is upgraded.  It's probably ok if
PV can be used as a fallback.  It's also probably ok if this fallback is a
manual process (e.g., the user has to set pvh=0 to get a working guest again).

It would also be preferable for PVH guests to fail hard if run on newer,
incompatible hypervisors.  Whether this is feasible would depend on what
the ABI changes are.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:04:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5Lz-0005kp-A2; Fri, 03 Jan 2014 14:03:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1Vz5Lx-0005kk-LP
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 14:03:57 +0000
Received: from [85.158.139.211:44696] by server-13.bemta-5.messagelabs.com id
	DE/38-11357-C43C6C25; Fri, 03 Jan 2014 14:03:56 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388757834!5035704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12452 invoked from network); 3 Jan 2014 14:03:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 14:03:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87349493"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 14:03:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 09:03:35 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1Vz5Lb-0005yP-Fl;
	Fri, 03 Jan 2014 14:03:35 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <linux-arm-kernel@lists.infradead.org>
Date: Fri, 3 Jan 2014 14:03:35 +0000
Message-ID: <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <liuw@liuw.name>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] asm/xen/page.h: remove redundant semicolon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <liuw@liuw.name>

Signed-off-by: Wei Liu <liuw@liuw.name>
---
 arch/arm/include/asm/xen/page.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 75579a9..ac6789a 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -117,6 +117,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 	return __set_phys_to_machine(pfn, mfn);
 }
 
-#define xen_remap(cookie, size) ioremap_cached((cookie), (size));
+#define xen_remap(cookie, size) ioremap_cached((cookie), (size))
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:18:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5ZS-0006JD-C4; Fri, 03 Jan 2014 14:17:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1Vz5ZR-0006J3-C0
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 14:17:53 +0000
Received: from [85.158.137.68:50089] by server-4.bemta-3.messagelabs.com id
	30/E4-10414-096C6C25; Fri, 03 Jan 2014 14:17:52 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1388758669!7050053!1
X-Originating-IP: [209.85.192.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2681 invoked from network); 3 Jan 2014 14:17:51 -0000
Received: from mail-pd0-f173.google.com (HELO mail-pd0-f173.google.com)
	(209.85.192.173)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 14:17:51 -0000
Received: by mail-pd0-f173.google.com with SMTP id p10so15434854pdj.32
	for <xen-devel@lists.xenproject.org>;
	Fri, 03 Jan 2014 06:17:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=KVTCfm1ZrJE9mfdlQiZtsN3eZy1b5yNFt0oaGiEdPrY=;
	b=MunfkchQPdqdSKdyLqN42vZplY7/Pcz1XHYXdlGLfuFV7xdgpFX/tTh4OhSVufUtpH
	AN0dcgZnKfHJTi+O1iKsBIEl2CQ+eAy0nsxrWPYuUED/94j18EiaTXfzIkz3YrjdI+Xp
	1Ud/afxMs/C9n2SHIFn9Ep4p5hZMmKc7D42EgLlgcW+3soHM9kN6J4Vzx5CKYUZFfe9s
	I+hfQW/NMtABOpKwa31fihUw86x9SkUSU6920dKX4NzmkVPhYjxhYjA3rfqauyklG3VF
	1Tq7iqxbSuP67MFgnRarzlSaAMqSyJ6yeOhesm3XZwSQaTBw8lZqcbCmZDqY3Ig7Ydak
	Vc/A==
X-Received: by 10.68.201.226 with SMTP id kd2mr9774049pbc.157.1388758667381;
	Fri, 03 Jan 2014 06:17:47 -0800 (PST)
Received: from localhost ([113.247.7.238]) by mx.google.com with ESMTPSA id
	z10sm143021025pas.6.2014.01.03.06.17.40 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 06:17:46 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org
Date: Fri,  3 Jan 2014 22:17:23 +0800
Message-Id: <1388758643-3897-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: catalin.marinas@arm.com, Chen Baozi <baozich@gmail.com>, liuw@liuw.name,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] arm/xen: remove redundant semicolon in
	definition of ioremap_cached
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Reported-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 arch/arm/include/asm/xen/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index b3368df..8a6155f 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -120,7 +120,7 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 #ifdef CONFIG_ARM64
 #define xen_remap(cookie, size) ioremap_cache((cookie), (size))
 #else
-#define xen_remap(cookie, size) ioremap_cached((cookie), (size));
+#define xen_remap(cookie, size) ioremap_cached((cookie), (size))
 #endif
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 03 14:33:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5og-0007Be-U9; Fri, 03 Jan 2014 14:33:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux+xen-devel=lists.xenproject.org@arm.linux.org.uk>)
	id 1Vz5of-0007BZ-SP
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 14:33:38 +0000
Received: from [85.158.143.35:15103] by server-2.bemta-4.messagelabs.com id
	0E/C2-11386-14AC6C25; Fri, 03 Jan 2014 14:33:37 +0000
X-Env-Sender: linux+xen-devel=lists.xenproject.org@arm.linux.org.uk
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388759615!9463337!1
X-Originating-IP: [78.32.30.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17950 invoked from network); 3 Jan 2014 14:33:36 -0000
Received: from gw-1.arm.linux.org.uk (HELO pandora.arm.linux.org.uk)
	(78.32.30.217)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 14:33:36 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=arm.linux.org.uk; s=pandora; 
	h=Sender:In-Reply-To:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date;
	bh=4CxnPST3ajwMItJYLHw4gK/PhYmFLEIOOM+YMHt25bc=; 
	b=bfwvt7SWBKtwircO717WUfI3YE5F6OPI/+mO22QvyktZ76jr6ObFY7ipMtwDTUdQ9g/h0NkMnMlpySt/xLrNu4yeyOoj/ehlTCYcd8ZGrjN8652TorqbIEQSvwOcxi05i6d+sfZrLUuD9wGQxJmMVSvuWzxEZIBOJMUz2wGIECw=;
Received: from n2100.arm.linux.org.uk
	([2002:4e20:1eda:1:214:fdff:fe10:4f86]:36628)
	by pandora.arm.linux.org.uk with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.76) (envelope-from <linux@arm.linux.org.uk>)
	id 1Vz5mE-0005Lt-4E; Fri, 03 Jan 2014 14:31:06 +0000
Received: from linux by n2100.arm.linux.org.uk with local (Exim 4.76)
	(envelope-from <linux@n2100.arm.linux.org.uk>)
	id 1Vz5mD-0003gH-1s; Fri, 03 Jan 2014 14:31:05 +0000
Date: Fri, 3 Jan 2014 14:31:04 +0000
From: Russell King - ARM Linux <linux@arm.linux.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103143104.GN7383@n2100.arm.linux.org.uk>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
	<alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.19 (2009-01-05)
Cc: catalin.marinas@arm.com, linux-kernel@vger.kernel.org,
	Mark Salter <msalter@redhat.com>, xen-devel@lists.xenproject.org,
	Chen Baozi <baozich@gmail.com>, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 01:13:57PM +0000, Stefano Stabellini wrote:
> On Mon, 30 Dec 2013, Mark Salter wrote:
> > On Mon, 2013-12-30 at 14:55 +0800, Chen Baozi wrote:
> > > xen_remap used to be defined as ioremap_cached on arm64. In commit
> > > c04e8e2fe, a new ioremap_cache was implemented and ioremap_cached
> > > was deleted, while xen_remap stayed the same. This leads to a
> > > build failure when building with CONFIG_HVC_XEN. Redefine xen_remap
> > > as ioremap_cache on arm64 to fix it.
> > > 
> > 
> > I missed that include of arm header by arm64 when looking for users
> > of arm64's ioremap_cached() when working on commit c04e8e2fe. Anyway,
> > grepping the kernel tree, I see:
> > 
> >   ioremap_cached()
> >     defined by: arm, metag, unicore32
> >     used by: arch/arm/include/asm/xen/page.h
> >              drivers/mtd/maps/pxa2xx-flash.c
> > 
> >   ioremap_cache()
> >     defined by: arm64, sh, xtensa, ia64, x86
> >     used by: drivers/video/vesafb.c
> >              drivers/char/toshiba.c
> >              drivers/acpi/apei
> >              drivers/lguest/lguest_device.c
> >              drivers/sfi/sfi_core.c
> >              include/linux/acpi_io.h
> > 
> > I think it would be better to just avoid the confusion and the ifdef in
> > asm/xen/page.h by globally changing ioremap_cached to ioremap_cache.
> 
> While I welcome the suggestion, this is a critical fix for a regression
> that I think should go in as soon as possible, maybe 3.13-rc7, while I
> don't think that a global s/ioremap_cached/ioremap_cache would be
> acceptable at this stage.

Since it's just one driver, just make the change for ARM (provided the
grep is accurate.)  pxa2xx-flash is only used on ARM and not the other
two listed there, so looks like metag and unicore just decided to copy
ARM.

My grep concurs with yours.

So... just change ioremap_cached -> ioremap_cache in
arch/arm/include/asm/io.h
arch/arm/include/asm/xen/page.h
drivers/mtd/maps/pxa2xx-flash.c

to fix the problem.

-- 
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up.  Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:37:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5sA-0007J5-Ie; Fri, 03 Jan 2014 14:37:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz5s8-0007Iy-Nf
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 14:37:13 +0000
Received: from [85.158.143.35:11091] by server-2.bemta-4.messagelabs.com id
	4F/07-11386-81BC6C25; Fri, 03 Jan 2014 14:37:12 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388759829!6782506!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3579 invoked from network); 3 Jan 2014 14:37:11 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 14:37:11 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03EZpfW018088
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 14:35:53 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EZoQJ022304
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 14:35:50 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EZoAL021998; Fri, 3 Jan 2014 14:35:50 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 06:35:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 941DD1BF850; Fri,  3 Jan 2014 09:35:48 -0500 (EST)
Date: Fri, 3 Jan 2014 09:35:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20140103143548.GA27019@phenom.dumpdata.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102193051.GA17665@andromeda.dapyr.net>
	<20140102212359.GA11592@pegasus.dumpdata.com>
	<52C6AFB4.9000007@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6AFB4.9000007@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com,
	Konrad Rzeszutek Wilk <konrad@darnok.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 01:40:20PM +0100, Roger Pau Monné wrote:
> On 02/01/14 22:23, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 02, 2014 at 03:30:52PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
> >>> On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> >>>> On 24/12/2013 12:55, Andrew Cooper wrote:
> >>>>> On 24/12/2013 12:31, David Vrabel wrote:
> >>>>>> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
> >>>>>>> Hey,
> >>>>>>>
> >>>>>>> This is with Linux and
> >>>>>>>
> >>>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
> >>>>>>>
> >>>>>>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
> >>>>>>> compiled with PVH.
> >>>>>>>
> >>>>>>> I think the same problem would show up if I tried to launch a PV guest
> >>>>>>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
> >>>>>>> with the toolstack.
> >>>>>> If a kernel with both PVH and PV support enabled cannot boot in PV mode
> >>>>>> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
> >>>>>>
> >>>>>> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
> >>>>>> even older are still widely deployed.
> >>>>>>
> >>>>>> David
> >>>>>
> >>>>> I believe that the problem is because the elf parsing code is not
> >>>>> sufficiently forward-compatible aware, and rejects the PVH kernel
> >>>>> because it has an unrecognised Xen elf note field.  This is not a kernel
> >>>>> bug.
> >>>
> >>> It (Xen 4.1) has the logic to ignore unrecognized Xen elf note fields. But
> >>> it (all Xen versions) do not have the logic to ignore in the "SUPPORTED_FEATURES"
> >>> an unrecognized string.
> >>>
> >>>>>
> >>>>> The elf parsing should accept unrecognised fields for forward
> >>>>> compatibility, which would then allow a PV & PVH compiled kernel to run
> >>>>> in PV mode.
> >>>>
> >>>> It should but it doesn't, so a different way needs to be found for the
> >>>> kernel to report (optional) PVH support.  A method that is compatible
> >>>> with older toolstacks.
> >>>
> >>> Also known as changes to the PVH ABI.
> >>>
> >>> Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager-pro-temp), Jan,
> >>>
> >>> a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
> >>> just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> >>> for PVH guests.
> >>>
> >>> b) Or dropping that altogether and introducing a new Xen elf note field, say:
> >>>
> >>> XEN_ELFNOTE_PVH_VERSION
> >>>
> >>
> >> c).
> >>
> >> Use the 'XEN_ELFNOTE_SUPPORTED_FEATURES' which says:
> >>  *
> >>  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
> >>  * kernel to specify support for features that older hypervisors don't
> >>  * know about. The set of features 4.2 and newer hypervisors will
> >>  * consider supported by the kernel is the combination of the sets
> >>  * specified through this and the string note.
> >>
> >> for hvm_callback_vector parameter.
> >>
> >>>
> >>> Which way should we do this?
> >>
> >> The c) way looks the best. Ian, would you be OK with that idea for 4.4?
> >
> > Seems that not only does it work without any changes in Xen 4.4 but it
> > is all in the Linux kernel, and it allows us to boot a Linux kernel
> > with PV and PVH support
> >
> > Seems that not only does it work without any changes in Xen 4.4 but it
> > is all in the Linux kernel:
> >
> >
> > diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> > index 56f42c0..2ce56bf 100644
> > --- a/arch/x86/xen/xen-head.S
> > +++ b/arch/x86/xen/xen-head.S
> > @@ -11,12 +11,22 @@
> >  #include <asm/page_types.h>
> >
> >  #include <xen/interface/elfnote.h>
> > +#include <xen/interface/features.h>
> >  #include <asm/xen/interface.h>
> >
> >  #ifdef CONFIG_XEN_PVH
> > -#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
> > +#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> > +/* Note the lack of 'hvm_callback_vector'. Older hypervisor will
> > + * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
> > + * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
> > + */
> > +#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
> > +		      (1 << XENFEAT_auto_translated_physmap) | \
> > +		      (1 << XENFEAT_supervisor_mode_kernel) | \
> > +		      (1 << XENFEAT_hvm_callback_vector))
> >  #else
> >  #define PVH_FEATURES_STR  ""
> > +#define PVH_FEATURES (0)
> >  #endif
> >
> >  	__INIT
> > @@ -102,6 +112,9 @@ NEXT_HYPERCALL(arch_6)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
> > +	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
> > +						(1 << XENFEAT_writable_page_tables) |
>
> PVH_FEATURES already contains XENFEAT_writable_page_tables, shouldn't
> you remove it from PVH_FEATURES if you are adding it unconditionally
> here? (or is this just to make clear that you need
> XENFEAT_writable_page_tables for both PVH and PV?)

Yup, that was the only reason for it. I will flesh out the comment
to make that clear. Thanks!
>
> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:37:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5sA-0007J5-Ie; Fri, 03 Jan 2014 14:37:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz5s8-0007Iy-Nf
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 14:37:13 +0000
Received: from [85.158.143.35:11091] by server-2.bemta-4.messagelabs.com id
	4F/07-11386-81BC6C25; Fri, 03 Jan 2014 14:37:12 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388759829!6782506!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3579 invoked from network); 3 Jan 2014 14:37:11 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 14:37:11 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03EZpfW018088
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 14:35:53 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EZoQJ022304
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 14:35:50 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EZoAL021998; Fri, 3 Jan 2014 14:35:50 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 06:35:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 941DD1BF850; Fri,  3 Jan 2014 09:35:48 -0500 (EST)
Date: Fri, 3 Jan 2014 09:35:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20140103143548.GA27019@phenom.dumpdata.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102193051.GA17665@andromeda.dapyr.net>
	<20140102212359.GA11592@pegasus.dumpdata.com>
	<52C6AFB4.9000007@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6AFB4.9000007@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, ian.campbell@citrix.com,
	george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com,
	Konrad Rzeszutek Wilk <konrad@darnok.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 01:40:20PM +0100, Roger Pau Monné wrote:
> On 02/01/14 22:23, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 02, 2014 at 03:30:52PM -0400, Konrad Rzeszutek Wilk wrote:
> >> On Mon, Dec 30, 2013 at 02:56:48PM -0500, Konrad Rzeszutek Wilk wrote:
> >>> On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> >>>> On 24/12/2013 12:55, Andrew Cooper wrote:
> >>>>> On 24/12/2013 12:31, David Vrabel wrote:
> >>>>>> On 20/12/2013 17:57, Konrad Rzeszutek Wilk wrote:
> >>>>>>> Hey,
> >>>>>>>
> >>>>>>> This is with Linux and
> >>>>>>>
> >>>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvh.v11
> >>>>>>>
> >>>>>>> I get Xen 4.1 (only) hypervisor to blow up with a Linux kernel that has been
> >>>>>>> compiled with PVH.
> >>>>>>>
> >>>>>>> I think the same problem would show up if I tried to launch a PV guest
> >>>>>>> compiled as PVH under Xen 4.1 as well - as the ELF parsing code is shared
> >>>>>>> with the toolstack.
> >>>>>> If a kernel with both PVH and PV support enabled cannot boot in PV mode
> >>>>>> with a non-PVH aware hypervisor/toolstack then the kernel is broken.
> >>>>>>
> >>>>>> Hypervisor/tool-side fixes aren't the correct fix here.  Xen 4.1 and
> >>>>>> even older are still widely deployed.
> >>>>>>
> >>>>>> David
> >>>>>
> >>>>> I believe that the problem is because the elf parsing code is not
> >>>>> sufficiently forward-compatible aware, and rejects the PVH kernel
> >>>>> because it has an unrecognised Xen elf note field.  This is not a kernel
> >>>>> bug.
> >>>
> >>> It (Xen 4.1) has the logic to ignore unrecognized Xen elf note fields. But
> >>> it (all Xen versions) does not have the logic to ignore an unrecognized
> >>> string in the "SUPPORTED_FEATURES" note.
> >>>
> >>>>>
> >>>>> The elf parsing should accept unrecognised fields for forward
> >>>>> compatibility, which would then allow a PV & PVH compiled kernel to run
> >>>>> in PV mode.
> >>>>
> >>>> It should but it doesn't, so a different way needs to be found for the
> >>>> kernel to report (optional) PVH support.  A method that is compatible
> >>>> with older toolstacks.
> >>>
> >>> Also known as changes to the PVH ABI.
> >>>
> >>> Mukesh, Roger, George (emailing Ian instead since he is now the Release Manager-pro-temp), Jan,
> >>>
> >>> a).  That means dropping the 'hvm_callback_vector' check from xc_dom_core.c and
> >>> just depending on: "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> >>> for PVH guests.
> >>>
> >>> b) Or dropping that altogether and introducing a new Xen elf note field, say:
> >>>
> >>> XEN_ELFNOTE_PVH_VERSION
> >>>
> >>
> >> c).
> >>
> >> Use the 'XEN_ELFNOTE_SUPPORTED_FEATURES' which says:
> >>  *
> >>  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
> >>  * kernel to specify support for features that older hypervisors don't
> >>  * know about. The set of features 4.2 and newer hypervisors will
> >>  * consider supported by the kernel is the combination of the sets
> >>  * specified through this and the string note.
> >>
> >> for hvm_callback_vector parameter.
> >>
> >>>
> >>> Which way should we do this?
> >>
> >> The c) way looks the best. Ian, would you be OK with that idea for 4.4?
> >
> > Seems that not only does it work without any changes in Xen 4.4 but it
> > is all in the Linux kernel, and it allows us to boot a Linux kernel
> > with PV and PVH support
> >
> > Seems that not only does it work without any changes in Xen 4.4 but it
> > is all in the Linux kernel:
> >
> >
> >
> > diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> > index 56f42c0..2ce56bf 100644
> > --- a/arch/x86/xen/xen-head.S
> > +++ b/arch/x86/xen/xen-head.S
> > @@ -11,12 +11,22 @@
> >  #include <asm/page_types.h>
> >
> >  #include <xen/interface/elfnote.h>
> > +#include <xen/interface/features.h>
> >  #include <asm/xen/interface.h>
> >
> >  #ifdef CONFIG_XEN_PVH
> > -#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector"
> > +#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> > +/* Note the lack of 'hvm_callback_vector'. Older hypervisor will
> > + * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
> > + * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
> > + */
> > +#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
> > +		      (1 << XENFEAT_auto_translated_physmap) | \
> > +		      (1 << XENFEAT_supervisor_mode_kernel) | \
> > +		      (1 << XENFEAT_hvm_callback_vector))
> >  #else
> >  #define PVH_FEATURES_STR  ""
> > +#define PVH_FEATURES (0)
> >  #endif
> >
> >  	__INIT
> > @@ -102,6 +112,9 @@ NEXT_HYPERCALL(arch_6)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
> >  	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
> > +	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
> > +						(1 << XENFEAT_writable_page_tables) |
>
> PVH_FEATURES already contains XENFEAT_writable_page_tables, shouldn't
> you remove it from PVH_FEATURES if you are adding it unconditionally
> here? (or is this just to make clear that you need
> XENFEAT_writable_page_tables for both PVH and PV?)

Yup, that was the only reason for it. I will flesh out the comment
to make that clear. Thanks!
>
> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:40:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5v1-0007if-5d; Fri, 03 Jan 2014 14:40:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz5v0-0007iZ-GN
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 14:40:10 +0000
Received: from [85.158.137.68:14084] by server-14.bemta-3.messagelabs.com id
	8C/2D-06105-9CBC6C25; Fri, 03 Jan 2014 14:40:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388760007!7087059!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5475 invoked from network); 3 Jan 2014 14:40:08 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 14:40:08 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Ed4m6025130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 14:39:05 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ed4HS029504
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 14:39:04 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ed3Il029014; Fri, 3 Jan 2014 14:39:03 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 06:39:03 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6D3CE1BF850; Fri,  3 Jan 2014 09:39:02 -0500 (EST)
Date: Fri, 3 Jan 2014 09:39:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140103143902.GB27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
	<52C54D3C.4050101@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C54D3C.4050101@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 11:27:56AM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > For PVHVM the shared_info structure is provided via the same way
> > as for normal PV guests (see include/xen/interface/xen.h).
> > 
> > That is during bootup we get 'xen_start_info' via the %esi register
> > in startup_xen. Then later we extract the 'shared_info' from said
> > structure (in xen_setup_shared_info) and start using it.
> > 
> > The 'xen_setup_shared_info' is all setup to work with auto-xlat
> > guests, but there are two functions which it calls that are not:
> > xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
> > This patch modifies those to work in auto-xlat mode.
> [...]
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
> >  		xen_vcpu_setup(cpu);
> >  
> >  	/* xen_vcpu_setup managed to place the vcpu_info within the
> > -	   percpu area for all cpus, so make use of it */
> > -	if (have_vcpu_info_placement) {
> > +	 * percpu area for all cpus, so make use of it. Note that for
> > +	 * PVH we want to use native IRQ mechanism. */
> > +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
> >  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
> >  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
> >  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
> 
> Should this be in a separate patch: "xen/pvh: use native irq ops"?

On second thought, I think not. The reason is explained in the commit
description:

 The 'xen_setup_shared_info' is all setup to work with auto-xlat
 guests, but there are two functions which it calls that are not:
 xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
 This patch modifies those to work in auto-xlat mode.

If we move this to another patch, it is going to be mostly the same
comment and this patch will feel unfinished.


> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:45:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz5zx-0007uu-VD; Fri, 03 Jan 2014 14:45:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz5zw-0007up-Pm
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 14:45:16 +0000
Received: from [193.109.254.147:38627] by server-11.bemta-14.messagelabs.com
	id 02/BE-20576-CFCC6C25; Fri, 03 Jan 2014 14:45:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1388760314!7182521!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21205 invoked from network); 3 Jan 2014 14:45:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 14:45:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03EiBJG029866
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 14:44:12 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EiBCo010615
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 14:44:11 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03EiAPe009597; Fri, 3 Jan 2014 14:44:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 06:44:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A07501BF850; Fri,  3 Jan 2014 09:44:09 -0500 (EST)
Date: Fri, 3 Jan 2014 09:44:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140103144409.GC27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6A4E5.3080904@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> >> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> >>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> >>>  	return gnttab_init();
> >>>  }
> >>>  
> >>> -core_initcall(__gnttab_init);
> >>> +core_initcall_sync(__gnttab_init);
> >>
> >> Why has this become _sync?
> > 
> > It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > at gnttab_init):
> 
> 
> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't

It has a clear ordering property.

> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?

No. That is because __gnttab_init() is in drivers/xen and is also used by
the ARM code.

Stefano in his previous review mentioned he would like PVH specific
code in arch/x86:

https://lkml.org/lkml/2013/12/18/507

> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 14:49:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz64P-0008Kt-PS; Fri, 03 Jan 2014 14:49:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1Vz64N-0008Kj-Up
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 14:49:52 +0000
Received: from [85.158.139.211:39634] by server-8.bemta-5.messagelabs.com id
	92/5F-29838-F0EC6C25; Fri, 03 Jan 2014 14:49:51 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388760576!7686114!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTM1MDggKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28085 invoked from network); 3 Jan 2014 14:49:37 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 14:49:37 -0000
Received: by mail-ie0-f173.google.com with SMTP id to1so15998561ieb.18
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 06:49:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=ZaTbL8Yb3uRu/sXkCLF7DdXp5ncHcaRhyoyAuOCiMXk=;
	b=X4QAiNAXp5UjWbLCB5w99iSObvycLHspfQHL5QNc8sf6/JXGielbczJOJjY246dP0R
	r+v1dbm0BTP/Yb51g7nCcghCUrD1vyjAwaa9ba/XG3ykDY7TImcvckUVDZ8tNNb2JlGq
	cDO3KFFHJ7fr/fkiWOuXr2w2d6XuJovQrhUO/JgznW7tHlH4wblX7yCsrZWyc52zQEit
	cuMXKzMKO2f4PJo93loaxbEVpkJxpZtQp7rzVFmHFDp6Car+y6Wq5YapERoGIgsiFcZL
	u8NofmV9SSpjxzqGzbfH2r398tXHEEHzT1OOSiRZo8AaafCE1rQ/pKcL1IMt+zgh3oNm
	xhAA==
X-Gm-Message-State: ALoCoQnWYkVYH7dxJIJUof2fOSp6A6YVP8A4ZhyCPiVh4jeUI69AVirAC6oZroVwBkeVn5YoLpmQ
X-Received: by 10.50.102.99 with SMTP id fn3mr3232450igb.5.1388760576138;
	Fri, 03 Jan 2014 06:49:36 -0800 (PST)
Received: from [192.168.1.112] (192-0-158-112.cpe.teksavvy.com.
	[192.0.158.112])
	by mx.google.com with ESMTPSA id s4sm2303288ige.0.2014.01.03.06.49.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 06:49:35 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <20131231163110.GA34150@deinos.phlegethon.org>
Date: Fri, 3 Jan 2014 09:49:36 -0500
Message-Id: <556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
To: Tim Deegan <tim@xen.org>
X-Mailer: Apple Mail (2.1510)
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:

> At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
>> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
>>> On Twitter, Florian Heigl sent out a few messages about issues with xenpaging:
>>>
>>> ---
>>> 19-Dec: Anyone successfully use #xen<https://twitter.com/search?q=%23xen&src=hash> #xenpaging<https://twitter.com/search?q=%23xenpaging&src=hash>? docs are at SLES manual, rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798<http://t.co/P36VdL84Et> dead feature or usable?
>>>
>>> 22-Dec: @lars_kurth<https://twitter.com/lars_kurth> @RCPavlicek<https://twitter.com/RCPavlicek> Hey guys, I wrote down as much as I could https://piratenpad.de/p/Ik3lOBLniq1L5TEM <https://t.co/e5LQCUD9d0> (since I'm on holiday and not constantly online)
>>>
>>> 22-Dec: Yay, tested #xen<https://twitter.com/search?q=%23xen&src=hash> Xenpaging (memory overcommit)
>>> [x] largely untested
>>> [x] docs outdated
>>> [x] syntax+logic changed
>>> [x] broken
>>> ---
>>>
>>> [I've taken the liberty of removing the colorful expletive from the final post]
>>>
>>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
>>
>> The Maintainers file implies otherwise. Let me CC the maintainers.
>
> Andres really owns this code, so I'll punt to him for an official
> answer, but:

The part actively maintained is the hypervisor support for paging, and
the interface.

tools/xenpaging is one way to consume that interface. It seems to have
suffered from bitrot.

So other than echoing Tim's points below, I'll add:

- Some interesting ideas were thrown around by Florian in his notes. They
could lead to a robust discussion on xen-devel … if Florian is still
interested.

- Perhaps the developers who are interested (myself included) should make
a decent effort at improving the in-tree tools. There is the argument
that, for example, KSM gives KVM users a sharing solution that just
works, whether you like the results or not. In that vein, xenpaging
apparently doesn't cut it, and neither does the absence of a basic
sharing tool.

One simple paging tool could be lazy restore. There is some interest out
there, and it would be relatively straightforward to codify.

Andres
>
> - It's been listed as a 'tech preview' on the feature list since it went
>  in.  http://wiki.xenproject.org/wiki/Xen_Release_Features says:
>  "Preview, due to limited tools support. Hypervisor side in good shape."
>
> - I can't say anything about SuSE's apparent support for it, except
>  that ISTR Olaf worked at/for/with SuSE at the time.
>
> - Patches would, of course, be welcome.
>
> Tim.



From xen-devel-bounces@lists.xen.org Fri Jan 03 14:50:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 14:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz65I-0008PX-9O; Fri, 03 Jan 2014 14:50:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz65G-0008PM-BA
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 14:50:46 +0000
Received: from [85.158.137.68:24974] by server-14.bemta-3.messagelabs.com id
	1B/29-06105-54EC6C25; Fri, 03 Jan 2014 14:50:45 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388760643!5937054!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29088 invoked from network); 3 Jan 2014 14:50:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 14:50:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89550944"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 14:50:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 09:50:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz65C-0006fO-0s;
	Fri, 03 Jan 2014 14:50:42 +0000
Date: Fri, 3 Jan 2014 14:49:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Russell King - ARM Linux <linux@arm.linux.org.uk>
In-Reply-To: <20140103143104.GN7383@n2100.arm.linux.org.uk>
Message-ID: <alpine.DEB.2.02.1401031444550.8667@kaball.uk.xensource.com>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
	<alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
	<20140103143104.GN7383@n2100.arm.linux.org.uk>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, Mark Salter <msalter@redhat.com>,
	xen-devel@lists.xenproject.org, Chen Baozi <baozich@gmail.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Russell King - ARM Linux wrote:
> On Fri, Jan 03, 2014 at 01:13:57PM +0000, Stefano Stabellini wrote:
> > On Mon, 30 Dec 2013, Mark Salter wrote:
> > > On Mon, 2013-12-30 at 14:55 +0800, Chen Baozi wrote:
> > > > xen_remap used to be defined as ioremap_cached on arm64. In commit
> > > > c04e8e2fe, a new ioremap_cache was implemented and ioremap_cached
> > > > was deleted, while xen_remap stayed the same. This leads to a
> > > > build failure when building with CONFIG_HVC_XEN. Redefine xen_remap
> > > > as ioremap_cache on arm64 to fix it.
> > > > 
> > > 
> > > I missed that include of arm header by arm64 when looking for users
> > > of arm64's ioremap_cached() when working on commit c04e8e2fe. Anyway,
> > > grepping the kernel tree, I see:
> > > 
> > >   ioremap_cached()
> > >     defined by: arm, metag, unicore32
> > >     used by: arch/arm/include/asm/xen/page.h
> > >              drivers/mtd/maps/pxa2xx-flash.c
> > > 
> > >   ioremap_cache()
> > >     defined by: arm64, sh, xtensa, ia64, x86
> > >     used by: drivers/video/vesafb.c
> > >              drivers/char/toshiba.c
> > >              drivers/acpi/apei
> > >              drivers/lguest/lguest_device.c
> > >              drivers/sfi/sfi_core.c
> > >              include/linux/acpi_io.h
> > > 
> > > I think it would be better to just avoid the confusion and the ifdef in
> > > asm/xen/page.h by globally changing ioremap_cached to ioremap_cache.
> > 
> > While I welcome the suggestion, this is a critical fix for a regression
> > that I think should go in as soon as possible, maybe 3.13-rc7, while I
> > don't think that a global s/ioremap_cached/ioremap_cache would be
> > acceptable at this stage.
> 
> Since it's just one driver, just make the change for ARM (provided the
> grep is accurate).  pxa2xx-flash is only used on ARM and not by the other
> two listed there, so it looks like metag and unicore32 just decided to
> copy ARM.
> 
> My grep concurs with yours.
> 
> So... just change ioremap_cached -> ioremap_cache in
> arch/arm/include/asm/io.h
> arch/arm/include/asm/xen/page.h
> drivers/mtd/maps/pxa2xx-flash.c
> 
> to fix the problem.

OK. That would be Rob's patch below.
Are you going to take care of sending it to Linus?

Thanks for the quick reply.


>From robherring2@gmail.com Fri Nov  8 21:26:03 2013
Date: Fri, 8 Nov 2013 15:25:56 -0600
From: Rob Herring <robherring2@gmail.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Rob Herring <rob.herring@calxeda.com>, Russell King <linux@arm.linux.org.uk>, Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will.deacon@arm.com>
Subject: [PATCH] ARM: rename ioremap_cached to ioremap_cache

From: Rob Herring <rob.herring@calxeda.com>

ioremap_cache is better aligned with other architectures. There are only
2 users of ioremap_cached in the kernel: pxa2xx-flash and Xen.

This fixes Xen build failures on arm64:

drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/io.h       | 2 +-
 arch/arm/include/asm/xen/page.h | 2 +-
 drivers/mtd/maps/pxa2xx-flash.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 3c597c2..fbeb39c 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -329,7 +329,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
  */
 #define ioremap(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
 #define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_cached(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
+#define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
 #define iounmap				__arm_iounmap
 
diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 5d0e4c5..a8591fb 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -123,6 +123,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 	return __set_phys_to_machine(pfn, mfn);
 }
 
-#define xen_remap(cookie, size) ioremap_cached((cookie), (size));
+#define xen_remap(cookie, size) ioremap_cache((cookie), (size));
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c
index d210d13..0f55589 100644
--- a/drivers/mtd/maps/pxa2xx-flash.c
+++ b/drivers/mtd/maps/pxa2xx-flash.c
@@ -73,7 +73,7 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
 		return -ENOMEM;
 	}
 	info->map.cached =
-		ioremap_cached(info->map.phys, info->map.size);
+		ioremap_cache(info->map.phys, info->map.size);
 	if (!info->map.cached)
 		printk(KERN_WARNING "Failed to ioremap cached %s\n",
 		       info->map.name);
-- 
1.8.1.2


From xen-devel-bounces@lists.xen.org Fri Jan 03 15:02:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6Fu-0000Ur-Fw; Fri, 03 Jan 2014 15:01:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux+xen-devel=lists.xenproject.org@arm.linux.org.uk>)
	id 1Vz6Ft-0000Um-Qm
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:01:46 +0000
Received: from [85.158.143.35:53108] by server-1.bemta-4.messagelabs.com id
	A6/75-02132-9D0D6C25; Fri, 03 Jan 2014 15:01:45 +0000
X-Env-Sender: linux+xen-devel=lists.xenproject.org@arm.linux.org.uk
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388761304!6787479!1
X-Originating-IP: [78.32.30.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28440 invoked from network); 3 Jan 2014 15:01:44 -0000
Received: from gw-1.arm.linux.org.uk (HELO pandora.arm.linux.org.uk)
	(78.32.30.217)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 15:01:44 -0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=arm.linux.org.uk; s=pandora; 
	h=Sender:In-Reply-To:Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date;
	bh=FBbMuLpnO7anTHvBvB2gUvCn1AfmHKRDN037AjxD4Is=; 
	b=EeoDc7878sYo3e0qLrMcVL33H9h1Qet7ZaAo7iqPyCmT9p0FIWYTH9OE18e5EAl9ra9PD1ws++KDNS+8X2hhLHIh4OcnqiQIJQAxEoR+mKFAAqy74tNAKIRZBBKN9MDQ8g+8ChEs5fFYNeio9rO/z7kwWV7Eo2EiuXmAYBHvG6w=;
Received: from n2100.arm.linux.org.uk
	([2001:4d48:ad52:3201:214:fdff:fe10:4f86]:36974)
	by pandora.arm.linux.org.uk with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.76) (envelope-from <linux@arm.linux.org.uk>)
	id 1Vz6DX-0005Nk-GG; Fri, 03 Jan 2014 14:59:19 +0000
Received: from linux by n2100.arm.linux.org.uk with local (Exim 4.76)
	(envelope-from <linux@n2100.arm.linux.org.uk>)
	id 1Vz6DV-00046h-Q1; Fri, 03 Jan 2014 14:59:17 +0000
Date: Fri, 3 Jan 2014 14:59:17 +0000
From: Russell King - ARM Linux <linux@arm.linux.org.uk>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103145917.GP7383@n2100.arm.linux.org.uk>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
	<alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
	<20140103143104.GN7383@n2100.arm.linux.org.uk>
	<alpine.DEB.2.02.1401031444550.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031444550.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.19 (2009-01-05)
Cc: catalin.marinas@arm.com, linux-kernel@vger.kernel.org,
	Mark Salter <msalter@redhat.com>, xen-devel@lists.xenproject.org,
	Chen Baozi <baozich@gmail.com>, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 02:49:54PM +0000, Stefano Stabellini wrote:
> OK. That would be Rob's patch below.

Looks fine to me.

> Are you going to take care of sending it to Linus?

If you want to stick it in the patch system, I'll throw it in my tree.
I need to push out the next set of fixes soon anyway.

Thanks.

-- 
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up.  Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:05:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6JM-0000cV-4d; Fri, 03 Jan 2014 15:05:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6JK-0000cP-QP
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:05:19 +0000
Received: from [85.158.137.68:22971] by server-5.bemta-3.messagelabs.com id
	59/FA-25188-EA1D6C25; Fri, 03 Jan 2014 15:05:18 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388761515!5940001!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7434 invoked from network); 3 Jan 2014 15:05:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:05:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89554863"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:05:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:05:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6JG-0006wQ-LZ;
	Fri, 03 Jan 2014 15:05:14 +0000
Date: Fri, 3 Jan 2014 15:04:27 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20131231185656.GB3129@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031501180.8667@kaball.uk.xensource.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-10-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181827280.8667@kaball.uk.xensource.com>
	<20131231185656.GB3129@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	jbeulich@suse.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > --- a/drivers/xen/xenbus/xenbus_client.c
> > > +++ b/drivers/xen/xenbus/xenbus_client.c
> > > @@ -45,6 +45,7 @@
> > >  #include <xen/grant_table.h>
> > >  #include <xen/xenbus.h>
> > >  #include <xen/xen.h>
> > > +#include <xen/features.h>
> > >  
> > >  #include "xenbus_probe.h"
> > >  
> > > @@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
> > >  
> > >  void __init xenbus_ring_ops_init(void)
> > >  {
> > > -	if (xen_pv_domain())
> > > +	if (xen_pv_domain() && !xen_feature(XENFEAT_auto_translated_physmap))
> > 
> > Can we just change this test to
> > 
> > if (!xen_feature(XENFEAT_auto_translated_physmap))
> > 
> > ?
> 
> No. If we do then the HVM domains (which are also !auto-xlat)
> will end up using the PV version of ring_ops.

Actually HVM guests have XENFEAT_auto_translated_physmap, so in this
case they would get &ring_ops_hvm.


> > >  		ring_ops = &ring_ops_pv;
> > >  	else
> > >  		ring_ops = &ring_ops_hvm;
> > > -- 
> > > 1.8.3.1
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:11:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6Ov-00014U-WE; Fri, 03 Jan 2014 15:11:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz6Ou-00014J-47
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:11:04 +0000
Received: from [85.158.143.35:59813] by server-2.bemta-4.messagelabs.com id
	D5/5D-11386-703D6C25; Fri, 03 Jan 2014 15:11:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388761861!9471766!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14639 invoked from network); 3 Jan 2014 15:11:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 15:11:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03F9vm6018913
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 15:09:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03F9tXF014025
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 15:09:55 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03F9smP014257; Fri, 3 Jan 2014 15:09:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 07:09:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 647951BF850; Fri,  3 Jan 2014 10:09:53 -0500 (EST)
Date: Fri, 3 Jan 2014 10:09:53 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140103150953.GD27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
	<52C59367.70707@citrix.com>
	<20140102184754.GF3021@pegasus.dumpdata.com>
	<52C6A8FE.5000500@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6A8FE.5000500@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant
 frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 12:11:42PM +0000, David Vrabel wrote:
> On 02/01/14 18:47, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 02, 2014 at 04:27:19PM +0000, David Vrabel wrote:
> >> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> >>> The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> >>> and contain the virtual address of the grants. That was OK
> >>> for most architectures (PVHVM, ARM) where the grants are contiguous
> >>> in memory. That however is not the case for PVH - in which case
> >>> we will have to do a lookup of the PFN for each virtual address.
> >>>
> >>> Instead of doing that, let's make it a structure which will contain
> >>> the array of PFNs, the virtual address and the count of said PFNs.
> >>>
> >>> Also provide generic functions, gnttab_setup_auto_xlat_frames and
> >>> gnttab_free_auto_xlat_frames, to populate said structure with
> >>> appropriate values for PVHVM and ARM.
> >>>
> >>> To round it off, change the name from 'xen_hvm_resume_frames' to
> >>> a more descriptive one - 'xen_auto_xlat_grant_frames'.
> >>>
> >>> For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> >>> we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
> >> [...]
> >>> --- a/drivers/xen/grant-table.c
> >>> +++ b/drivers/xen/grant-table.c
> >> [...]
> >>> @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
> >>>  }
> >>>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >>>  
> >>> +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> >>> +{
> >>> +	xen_pfn_t *pfn;
> >>> +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> >>> +	int i;
> >>> +
> >>> +	if (xen_auto_xlat_grant_frames.count)
> >>> +		return -EINVAL;
> >>> +
> >>> +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> >>> +	if (!pfn)
> >>> +		return -ENOMEM;
> >>> +	for (i = 0; i < max_nr_gframes; i++)
> >>> +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
> >>
> >> PFN_DOWN(addr) + i looks better to me.
> >>
> >>> +
> >>> +	xen_auto_xlat_grant_frames.vaddr = addr;
> 
> I think you should move the xen_remap() call here.

Excellent suggestion!
> 
> >> Huh? addr is a physical address but you're assigning it to a field
> >> called vaddr?  I think you mean to set this field to the result of the
> >> xen_remap() call, yes?
> > 
> > It ends up doing that in gnttab_init. Not to
> > xen_auto_xlat_grant_frames.vaddr but to gnttab_shared.addr.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:11:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6P5-00015z-JE; Fri, 03 Jan 2014 15:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6P4-00015k-AI
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:11:14 +0000
Received: from [85.158.137.68:54759] by server-1.bemta-3.messagelabs.com id
	71/1D-29598-113D6C25; Fri, 03 Jan 2014 15:11:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388761871!7102377!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32657 invoked from network); 3 Jan 2014 15:11:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:11:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89558466"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:11:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:11:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6Oz-00070T-UR;
	Fri, 03 Jan 2014 15:11:09 +0000
Date: Fri, 3 Jan 2014 15:10:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20131218212150.GE11717@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031504500.8667@kaball.uk.xensource.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-11-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181831560.8667@kaball.uk.xensource.com>
	<20131218212150.GE11717@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	jbeulich@suse.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 10/12] xen/pvh: Piggyback on PVHVM for
 grant driver.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 18 Dec 2013, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 18, 2013 at 06:46:56PM +0000, Stefano Stabellini wrote:
> > On Tue, 17 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > In PVH the shared grant frame is a PFN and not an MFN,
> > > hence it's mapped via the same code path as HVM.
> > > 
> > > The allocation of the grant frame is done differently - we
> > > do not use the early platform-pci driver with its
> > > ioremap area - instead we use balloon memory and stitch
> > > all of the non-contiguous pages into a virtual area.
> > > 
> > > That means when we call the hypervisor to replace the GMFN
> > > with a XENMAPSPACE_grant_table type, we need to look up the
> > > old PFN on every iteration instead of assuming a flat,
> > > contiguous PFN allocation.
> > > 
> > > Lastly, we only use v1 for grants. This is because PVHVM
> > > is not able to use v2, since there is no XENMEM_add_to_physmap
> > > support for the error status page (see commit
> > > 69e8f430e243d657c2053f097efebc2e2cd559f0
> > >  xen/granttable: Disable grant v2 for HVM domains.)
> > > 
> > > Until that is implemented, this workaround has to
> > > be in place.
> > > 
> > > Also, per Stefano's suggestion, utilize the PVHVM paths
> > > as they share common functionality.
> > > 
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > ---
> > >  drivers/xen/gntdev.c       |  2 +-
> > >  drivers/xen/grant-table.c  | 80 ++++++++++++++++++++++++++++++++++++++++++----
> > >  drivers/xen/platform-pci.c |  2 +-
> > >  include/xen/grant_table.h  |  2 +-
> > >  4 files changed, 76 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> > > index e41c79c..073b4a1 100644
> > > --- a/drivers/xen/gntdev.c
> > > +++ b/drivers/xen/gntdev.c
> > > @@ -846,7 +846,7 @@ static int __init gntdev_init(void)
> > >  	if (!xen_domain())
> > >  		return -ENODEV;
> > >  
> > > -	use_ptemod = xen_pv_domain();
> > > +	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
> > >  
> > >  	err = misc_register(&gntdev_miscdev);
> > >  	if (err != 0) {
> > > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > > index aa846a4..c0ded9f 100644
> > > --- a/drivers/xen/grant-table.c
> > > +++ b/drivers/xen/grant-table.c
> > > @@ -47,6 +47,7 @@
> > >  #include <xen/interface/xen.h>
> > >  #include <xen/page.h>
> > >  #include <xen/grant_table.h>
> > > +#include <xen/balloon.h>
> > >  #include <xen/interface/memory.h>
> > >  #include <xen/hvc-console.h>
> > >  #include <xen/swiotlb-xen.h>
> > > @@ -66,8 +67,8 @@ static unsigned int boot_max_nr_grant_frames;
> > >  static int gnttab_free_count;
> > >  static grant_ref_t gnttab_free_head;
> > >  static DEFINE_SPINLOCK(gnttab_list_lock);
> > > -unsigned long xen_hvm_resume_frames;
> > > -EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
> > > +unsigned long xen_auto_xlat_grant_frames;
> > > +EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);
> > >  
> > >  static union {
> > >  	struct grant_entry_v1 *v1;
> > > @@ -1060,7 +1061,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
> > >  	unsigned int nr_gframes = end_idx + 1;
> > >  	int rc;
> > >  
> > > -	if (xen_hvm_domain()) {
> > > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > >  		struct xen_add_to_physmap xatp;
> > >  		unsigned int i = end_idx;
> > >  		rc = 0;
> > > @@ -1069,10 +1070,24 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
> > >  		 * index, ensuring that the table will grow only once.
> > >  		 */
> > >  		do {
> > > +			unsigned long vaddr;
> > > +			unsigned int level;
> > > +			pte_t *pte;
> > > +
> > >  			xatp.domid = DOMID_SELF;
> > >  			xatp.idx = i;
> > >  			xatp.space = XENMAPSPACE_grant_table;
> > > -			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
> > > +
> > > +			/*
> > > +			 * Don't assume the memory is contiguous. Look up each.
> > > +			 */
> > > +			vaddr = xen_auto_xlat_grant_frames + (i * PAGE_SIZE);
> > > +			if (xen_hvm_domain())
> > > +				xatp.gpfn = vaddr >> PAGE_SHIFT;
> > > +			else {
> > > +				pte = lookup_address(vaddr, &level);
> > 
> > lookup_address is x86 only. I added an empty implementation under
> > arch/arm to be able to compile stuff under drivers/xen but I would like
> > to get rid of it.
> > If you can't find a way to replace lookup_address with something that
> > works on arm and arm64 too, you could always turn
> > xen_auto_xlat_grant_frames into an array and store the pfns there.
> 
> Yeah, that would work nicely too. Though we would have to store the
> size somewhere. Maybe a struct then.
> 
> struct xlat_grant_frames {
> 	int	nr;
> 	xen_pfn_t	*array;
> 	struct page	*page;
> };

sounds good


> > > +			}
> > >  			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
> > >  			if (rc != 0) {
> > >  				pr_warn("grant table add_to_physmap failed, err=%d\n",
> > > @@ -1135,7 +1150,7 @@ static void gnttab_request_version(void)
> > >  	int rc;
> > >  	struct gnttab_set_version gsv;
> > >  
> > > -	if (xen_hvm_domain())
> > > +	if (xen_hvm_domain() || xen_feature(XENFEAT_auto_translated_physmap))
> > 
> > just turn this to
> > 
> > if (xen_feature(XENFEAT_auto_translated_physmap))
> 
> Hmm, I would rather actually keep that until I fix the underlying issue
> of why v2 can't be used. And then we can get rid of this workaround.

Given that HVM guests have XENFEAT_auto_translated_physmap, the two are
equivalent.


> > >  		gsv.version = 1;
> > >  	else
> > >  		gsv.version = 2;
> > > @@ -1161,6 +1176,46 @@ static void gnttab_request_version(void)
> > >  	pr_info("Grant tables using version %d layout\n", grant_table_version);
> > >  }
> > >  
> > > +static int xlated_setup_gnttab_pages(unsigned long nr_grant_frames,
> > > +				     unsigned long max, void *addr)
> > > +{
> > > +	struct page **pages;
> > > +	unsigned long *pfns;
> > > +	int rc, i;
> > > +
> > > +	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
> > > +	if (!pages)
> > > +		return -ENOMEM;
> > > +
> > > +	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
> > > +	if (!pfns) {
> > > +		kfree(pages);
> > > +		return -ENOMEM;
> > > +	}
> > > +	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
> > > +	if (rc) {
> > > +		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
> > > +			nr_grant_frames, rc);
> > > +		kfree(pages);
> > > +		kfree(pfns);
> > > +		return rc;
> > > +	}
> > > +	for (i = 0; i < nr_grant_frames; i++)
> > > +		pfns[i] = page_to_pfn(pages[i]);
> > > +
> > > +	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, max, addr);
> > > +	if (rc) {
> > > +		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
> > > +			nr_grant_frames, rc);
> > > +		/* free balloon pages before dropping the page array */
> > > +		free_xenballooned_pages(nr_grant_frames, pages);
> > > +	}
> > > +
> > > +	kfree(pages);
> > > +	kfree(pfns);
> > > +	return rc;
> > > +}
> > 
> > Please move this function somewhere under arch/x86/xen and call it from
> > xen_start_kernel, it doesn't belong to common code.
> 
> <nods>
> 
> Actually, looking at how the xen-platform-pci does it. I am wondering
> if that can be just made in one function that will do the right
> thing.
> 
> Can the xen-platform-pci use balloon pages instead of the MMIO BARs?
> Or would that be a waste of memory?

I think they could be used but it would be a waste and they would need
to be contiguous in gpfn space.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6SI-0001LD-Ax; Fri, 03 Jan 2014 15:14:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1Vz6SH-0001L4-71
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:14:33 +0000
Received: from [85.158.139.211:28464] by server-12.bemta-5.messagelabs.com id
	C4/50-30017-8D3D6C25; Fri, 03 Jan 2014 15:14:32 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388762070!7690671!1
X-Originating-IP: [209.85.192.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3171 invoked from network); 3 Jan 2014 15:14:31 -0000
Received: from mail-pd0-f176.google.com (HELO mail-pd0-f176.google.com)
	(209.85.192.176)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:14:31 -0000
Received: by mail-pd0-f176.google.com with SMTP id w10so15502231pde.35
	for <xen-devel@lists.xenproject.org>;
	Fri, 03 Jan 2014 07:14:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=8tO0k3XkBJG/aLwmglwCWuCKJqjp/Y3/Qxrj1cQFdZs=;
	b=F2pT9162bVvJ8oYZju/iBh8vl3reGxTdkeBfsja8BsYB2BzQ5bFQhPdBy0oQNslL+S
	jSJEvW2x/ThvJA3VnjkGkaCSJJq9+kwSyTRjpE5+MoZqDgSCVpPsdX19058IrNmp7kf1
	3Ii3/C7N5Qw2NMXjfXj3mExWKw+uMs3JD/WM7yaiYWoe50rc9cVCJWvD2f8W1r3UDgE8
	Um/jKF+dMv5XAEcWq7mHv7XOUM4TZRtO7lqHw3NRKLeyfL8ZV1zpnti803p6ja60DxEw
	4I1yc2B3tEIy37WrCDlWua/dXuEpYAnEP331+iP/ihAYEmHDzIcf7tp0kBfe/CiPaSp3
	aEIw==
X-Received: by 10.68.197.129 with SMTP id iu1mr95862863pbc.139.1388762069714; 
	Fri, 03 Jan 2014 07:14:29 -0800 (PST)
Received: from localhost ([113.247.7.238]) by mx.google.com with ESMTPSA id
	m2sm109529105pbn.19.2014.01.03.07.14.24 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 07:14:29 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org
Date: Fri,  3 Jan 2014 23:10:18 +0800
Message-Id: <1388761818-4646-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: catalin.marinas@arm.com, Chen Baozi <baozich@gmail.com>, liuw@liuw.name,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH V2] arm/xen: remove redundant semicolon in
	definition of ioremap_cache
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v2 <- v1: Edit based on Rob's patch

Reported-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 arch/arm/include/asm/xen/page.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 3759cac..fba7995 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -117,6 +117,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 	return __set_phys_to_machine(pfn, mfn);
 }
 
-#define xen_remap(cookie, size) ioremap_cache((cookie), (size));
+#define xen_remap(cookie, size) ioremap_cache((cookie), (size))
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6SJ-0001LU-NC; Fri, 03 Jan 2014 15:14:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1Vz6SI-0001LB-67
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 15:14:34 +0000
Received: from [85.158.139.211:47902] by server-14.bemta-5.messagelabs.com id
	F1/F4-24200-9D3D6C25; Fri, 03 Jan 2014 15:14:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388762071!7726673!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22138 invoked from network); 3 Jan 2014 15:14:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:14:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87370806"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 15:14:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 3 Jan 2014 10:14:28 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Vz6SB-00047n-Uw;
	Fri, 03 Jan 2014 15:14:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Vz6SB-0006Ft-NN;
	Fri, 03 Jan 2014 15:14:27 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21190.54227.587855.230296@mariner.uk.xensource.com>
Date: Fri, 3 Jan 2014 15:14:27 +0000
To: xen.org <ian.jackson@eu.citrix.com>
In-Reply-To: <osstest-23933-mainreport@xen.org>
References: <osstest-23933-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 23933: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[linux-arm-xen test] 23933: regressions - FAIL"):
> flight 23933 linux-arm-xen real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/23933/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-armhf-pvops             4 kernel-build          fail REGR. vs. 21427

These failures are due to a bug in the way the test job is being
constructed: they specify a particular git commit id, but a git tree
url which doesn't actually contain that commit.

I think I fixed the root cause yesterday.  I'm waiting for the push
gate to push it through.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6SJ-0001LU-NC; Fri, 03 Jan 2014 15:14:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1Vz6SI-0001LB-67
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 15:14:34 +0000
Received: from [85.158.139.211:47902] by server-14.bemta-5.messagelabs.com id
	F1/F4-24200-9D3D6C25; Fri, 03 Jan 2014 15:14:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1388762071!7726673!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22138 invoked from network); 3 Jan 2014 15:14:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:14:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87370806"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 15:14:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 3 Jan 2014 10:14:28 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Vz6SB-00047n-Uw;
	Fri, 03 Jan 2014 15:14:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1Vz6SB-0006Ft-NN;
	Fri, 03 Jan 2014 15:14:27 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21190.54227.587855.230296@mariner.uk.xensource.com>
Date: Fri, 3 Jan 2014 15:14:27 +0000
To: xen.org <ian.jackson@eu.citrix.com>
In-Reply-To: <osstest-23933-mainreport@xen.org>
References: <osstest-23933-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 23933: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[linux-arm-xen test] 23933: regressions - FAIL"):
> flight 23933 linux-arm-xen real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/23933/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-armhf-pvops             4 kernel-build          fail REGR. vs. 21427

These failures are due to a bug in the way the test job is being
constructed: they specify a particular git commit id, but a git tree
url which doesn't actually contain that commit.

I think I fixed the root cause yesterday.  I'm waiting for the push
gate to push it through.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:18:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6WO-0001l6-EU; Fri, 03 Jan 2014 15:18:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6WM-0001jx-V3
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:18:47 +0000
Received: from [85.158.137.68:36575] by server-5.bemta-3.messagelabs.com id
	AF/2B-25188-6D4D6C25; Fri, 03 Jan 2014 15:18:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1388762324!3442079!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7649 invoked from network); 3 Jan 2014 15:18:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:18:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89560727"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:18:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:18:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6WJ-00076t-9B;
	Fri, 03 Jan 2014 15:18:43 +0000
Date: Fri, 3 Jan 2014 15:17:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Russell King - ARM Linux <linux@arm.linux.org.uk>
In-Reply-To: <20140103145917.GP7383@n2100.arm.linux.org.uk>
Message-ID: <alpine.DEB.2.02.1401031517070.8667@kaball.uk.xensource.com>
References: <1388386511-32276-1-git-send-email-baozich@gmail.com>
	<1388431259.2564.15.camel@deneb.redhat.com>
	<alpine.DEB.2.02.1401031310100.8667@kaball.uk.xensource.com>
	<20140103143104.GN7383@n2100.arm.linux.org.uk>
	<alpine.DEB.2.02.1401031444550.8667@kaball.uk.xensource.com>
	<20140103145917.GP7383@n2100.arm.linux.org.uk>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: catalin.marinas@arm.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, Mark Salter <msalter@redhat.com>,
	xen-devel@lists.xenproject.org, Chen Baozi <baozich@gmail.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm64/xen: redefine xen_remap on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Russell King - ARM Linux wrote:
> On Fri, Jan 03, 2014 at 02:49:54PM +0000, Stefano Stabellini wrote:
> > OK. That would be Rob's patch below.
> 
> Looks fine to me.
> 
> > Are you going to take care of sending it to Linus?
> 
> If you want to stick it in the patch system, I'll throw it in my tree.
> I need to push out the next set of fixes soon anyway.
> 
> Thanks.

Done. My first time with the patch system, please be lenient :-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:19:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6Wf-0001t8-Rg; Fri, 03 Jan 2014 15:19:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz6We-0001rN-Ab
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:19:04 +0000
Received: from [85.158.139.211:25973] by server-1.bemta-5.messagelabs.com id
	47/45-21065-7E4D6C25; Fri, 03 Jan 2014 15:19:03 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388762341!5049735!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5619 invoked from network); 3 Jan 2014 15:19:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:19:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89560902"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:19:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	10:19:00 -0500
Message-ID: <52C6D4E3.9000701@citrix.com>
Date: Fri, 3 Jan 2014 15:18:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-8-git-send-email-konrad.wilk@oracle.com>
	<52C54D3C.4050101@citrix.com>
	<20140103143902.GB27019@phenom.dumpdata.com>
In-Reply-To: <20140103143902.GB27019@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 14:39, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 11:27:56AM +0000, David Vrabel wrote:
>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
>>>
>>> For PVHVM the shared_info structure is provided via the same way
>>> as for normal PV guests (see include/xen/interface/xen.h).
>>>
>>> That is during bootup we get 'xen_start_info' via the %esi register
>>> in startup_xen. Then later we extract the 'shared_info' from said
>>> structure (in xen_setup_shared_info) and start using it.
>>>
>>> The 'xen_setup_shared_info' is all setup to work with auto-xlat
>>> guests, but there are two functions which it calls that are not:
>>> xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
>>> This patch modifies those to work in auto-xlat mode.
>> [...]
>>> --- a/arch/x86/xen/enlighten.c
>>> +++ b/arch/x86/xen/enlighten.c
>>> @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
>>>  		xen_vcpu_setup(cpu);
>>>  
>>>  	/* xen_vcpu_setup managed to place the vcpu_info within the
>>> -	   percpu area for all cpus, so make use of it */
>>> -	if (have_vcpu_info_placement) {
>>> +	 * percpu area for all cpus, so make use of it. Note that for
>>> +	 * PVH we want to use native IRQ mechanism. */
>>> +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
>>>  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
>>>  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
>>>  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
>>
>> Should this be in a separate patch: "xen/pvh: use native irq ops"?
> 
> On a second thought I think not. The reason is explained in the commit
> description:
> 
>  The 'xen_setup_shared_info' is all setup to work with auto-xlat
>  guests, but there are two functions which it calls that are not:
>  xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
>  This patch modifies those to work in auto-xlat mode.
> 
> If we move this to another patch, it is going to be mostly the same
> comment and this patch will feel unfinished.

Looking at it again, this hunk should be in "xen/pvh: Piggyback on PVHVM
for event channel" where we have:

+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		pv_irq_ops = xen_irq_ops;

The tests in both places need to be the same as well.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:35:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6lt-0002VT-Dt; Fri, 03 Jan 2014 15:34:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6lr-0002VO-VN
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:34:48 +0000
Received: from [193.109.254.147:2309] by server-4.bemta-14.messagelabs.com id
	68/ED-03916-798D6C25; Fri, 03 Jan 2014 15:34:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1388763285!8705376!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31448 invoked from network); 3 Jan 2014 15:34:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:34:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89566838"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:34:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:34:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6ln-0007K8-Ey;
	Fri, 03 Jan 2014 15:34:43 +0000
Date: Fri, 3 Jan 2014 15:33:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52C549F1.8070005@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031533350.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-3-git-send-email-konrad.wilk@oracle.com>
	<52C549F1.8070005@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 02/18] xen/pvh/x86: Define what an PVH
 guest is (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > Which is a PV guest with auto page translation enabled
> > and with vector callback. It is a cross between PVHVM and PV.
> > 
> > The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
> > with modifications):
> > 
> > "* the guest uses auto translate:
> >  - p2m is managed by Xen
> >  - pagetables are owned by the guest
> >  - mmu_update hypercall not available
> > * it uses event callback and not vlapic emulation,
> > * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
> > 
> > For a full list of hcalls supported for PVH, see pvh_hypercall64_table
> > in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
> > PV guest with auto translate, although it does use hvm_op for setting
> > callback vector."
> > 
> > We don't have yet a Kconfig entry setup as we do not
> > have all the parts ready for it - so we piggyback
> > on the PVHVM config option. This scaffolding will
> > be removed later.
> > 
> > Note that on ARM the concept of PVH is non-existent. As Ian
> > put it: "an ARM guest is neither PV nor HVM nor PVHVM.
> > It's a bit like PVH but is different also (it's further towards
> > the H end of the spectrum than even PVH).". As such these
> > options (PVHVM, PVH) are never enabled nor seen on ARM
> > compilations.
> [...]
> > --- a/include/xen/xen.h
> > +++ b/include/xen/xen.h
> > @@ -29,4 +29,20 @@ extern enum xen_domain_type xen_domain_type;
> >  #define xen_initial_domain()	(0)
> >  #endif	/* CONFIG_XEN_DOM0 */
> >  
> > +#ifdef CONFIG_XEN_PVHVM
> > +/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
> 
> This is a bit confusing.  I think it would be better to add the
> CONFIG_XEN_PVH option with this patch but make it default n and not
> possible to enable.
 
I am OK with the patch as is, but your suggestion would probably make
things better.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > Which is a PV guest with auto page translation enabled
> > and with vector callback. It is a cross between PVHVM and PV.
> > 
> > The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
> > with modifications):
> > 
> > "* the guest uses auto translate:
> >  - p2m is managed by Xen
> >  - pagetables are owned by the guest
> >  - mmu_update hypercall not available
> > * it uses event callback and not vlapic emulation,
> > * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
> > 
> > For a full list of hcalls supported for PVH, see pvh_hypercall64_table
> > in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
> > PV guest with auto translate, although it does use hvm_op for setting
> > callback vector."
> > 
> > We don't yet have a Kconfig entry set up, as we do not
> > have all the parts ready for it - so we piggyback
> > on the PVHVM config option. This scaffolding will
> > be removed later.
> > 
> > Note that on ARM the concept of PVH is non-existent. As Ian
> > put it: "an ARM guest is neither PV nor HVM nor PVHVM.
> > It's a bit like PVH but is different also (it's further towards
> > the H end of the spectrum than even PVH).". As such these
> > options (PVHVM, PVH) are never enabled nor seen on ARM
> > compilations.
> [...]
> > --- a/include/xen/xen.h
> > +++ b/include/xen/xen.h
> > @@ -29,4 +29,20 @@ extern enum xen_domain_type xen_domain_type;
> >  #define xen_initial_domain()	(0)
> >  #endif	/* CONFIG_XEN_DOM0 */
> >  
> > +#ifdef CONFIG_XEN_PVHVM
> > +/* Temporarily under XEN_PVHVM, but will be under CONFIG_XEN_PVH */
> 
> This is a bit confusing.  I think it would be better to add the
> CONFIG_XEN_PVH option with this patch but make it default n and not
> possible to enable.
 
I am OK with the patch as is, but your suggestion would probably make
things better.
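David's suggestion can be sketched as a Kconfig fragment. This is a hypothetical illustration, not text from any posted patch: a bool symbol with no prompt string cannot be turned on from a config front end, which matches "default n and not possible to enable".

```kconfig
# Hypothetical sketch of the suggestion: introduce the symbol now,
# default n, with no prompt so users cannot enable it yet.
config XEN_PVH
	bool
	depends on XEN_PVHVM
	default n
```

Once the remaining PVH parts land, the prompt string would be added (and the temporary `#ifdef CONFIG_XEN_PVHVM` in include/xen/xen.h switched to `CONFIG_XEN_PVH`).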

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:38:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:38:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6pM-0002ly-0T; Fri, 03 Jan 2014 15:38:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6pK-0002lg-U2
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:38:23 +0000
Received: from [85.158.143.35:55011] by server-3.bemta-4.messagelabs.com id
	0C/FC-32360-E69D6C25; Fri, 03 Jan 2014 15:38:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388763500!9479517!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17150 invoked from network); 3 Jan 2014 15:38:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89567669"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:38:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:38:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6p6-0007Ml-PG;
	Fri, 03 Jan 2014 15:38:08 +0000
Date: Fri, 3 Jan 2014 15:37:21 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52C69F09.5010407@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031537110.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
	<20140102183221.GD3021@pegasus.dumpdata.com>
	<20140102173438.40612127@mantra.us.oracle.com>
	<52C69F09.5010407@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
 PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, David Vrabel wrote:
> On 03/01/14 01:34, Mukesh Rathor wrote:
> > On Thu, 2 Jan 2014 13:32:21 -0500
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 
> >> On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
> >>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> >>>> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> >>>>
> >>>> In the bootup code for PVH we can trap cpuid via vmexit, so we
> >>>> don't need to use the emulated prefix call. We also check for the
> >>>> vector callback early on, as it is a required feature. PVH also
> >>>> runs at the default kernel IOPL.
> >>>>
> >>>> Finally, pure PV settings are moved to a separate function that
> >>>> is only called for pure PV, i.e., PV with pvmmu. They are also
> >>>> #ifdef'd with CONFIG_XEN_PVMMU.
> >>> [...]
> >>>> @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax,
> >>>> unsigned int *bx, break;
> >>>>  	}
> >>>>  
> >>>> -	asm(XEN_EMULATE_PREFIX "cpuid"
> >>>> -		: "=a" (*ax),
> >>>> -		  "=b" (*bx),
> >>>> -		  "=c" (*cx),
> >>>> -		  "=d" (*dx)
> >>>> -		: "0" (*ax), "2" (*cx));
> >>>> +	if (xen_pvh_domain())
> >>>> +		native_cpuid(ax, bx, cx, dx);
> >>>> +	else
> >>>> +		asm(XEN_EMULATE_PREFIX "cpuid"
> >>>> +			: "=a" (*ax),
> >>>> +			"=b" (*bx),
> >>>> +			"=c" (*cx),
> >>>> +			"=d" (*dx)
> >>>> +			: "0" (*ax), "2" (*cx));
> >>>
> >>> For this one-off cpuid call it seems preferable to me to use the
> >>> emulate prefix rather than diverge from PV.
> >>
> >> This was before the PV cpuid was deemed OK to be used on PVH.
> >> Will rip this out to use the same version.
> > 
> > What's wrong with using native cpuid? That is one of the benefits of
> > cpuid being trappable via vmexit, and there is also talk of making the
> > PV cpuid trap obsolete in the future. I suggest leaving it native.
> 
> It should either use the PV interface or the HVM one, not a hybrid of
> the two.

I agree
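The shape being argued about (a per-call branch on domain type versus committing to one interface) can be sketched in plain C. Everything here is a hypothetical stand-in invented for illustration - `pvh_domain`, the two path stubs, and the counters are not kernel code:

```c
#include <stdbool.h>

/* Hypothetical stand-ins, invented for illustration; not kernel code. */
static bool pvh_domain;                  /* stands in for xen_pvh_domain() */
static int emulated_calls, native_calls;

static void emulated_cpuid(void)  { emulated_calls++; } /* XEN_EMULATE_PREFIX path */
static void native_cpuid_path(void) { native_calls++; } /* trapped via vmexit on PVH */

/* Shape of the posted patch: a per-call branch on the domain type.
 * David's objection is that a guest should commit to one interface
 * (all-PV or all-HVM) rather than this hybrid. */
static void xen_cpuid_hybrid(void)
{
    if (pvh_domain)
        native_cpuid_path();
    else
        emulated_cpuid();
}
```

Removing the branch and always taking one of the two stubs is the "pick one interface" alternative the reviewers prefer.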

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:42:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6sn-0003Fa-LV; Fri, 03 Jan 2014 15:41:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz6sm-0003FV-M4
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:41:56 +0000
Received: from [85.158.143.35:18772] by server-1.bemta-4.messagelabs.com id
	61/13-02132-44AD6C25; Fri, 03 Jan 2014 15:41:56 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388763714!6796191!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24711 invoked from network); 3 Jan 2014 15:41:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:41:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87379905"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 15:41:53 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 3 Jan 2014
	10:41:53 -0500
Message-ID: <52C6DA3F.7070509@citrix.com>
Date: Fri, 3 Jan 2014 15:41:51 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
In-Reply-To: <20140103144409.GC27019@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
>> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
>>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
>>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
>>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
>>>>>  	return gnttab_init();
>>>>>  }
>>>>>  
>>>>> -core_initcall(__gnttab_init);
>>>>> +core_initcall_sync(__gnttab_init);
>>>>
>>>> Why has this become _sync?
>>>
>>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
>>> at gnttab_init):
>>
>>
>> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> 
> It has a clear ordering property.

This really isn't obvious to me.  Can you point to the docs/code that
guarantee this?  I couldn't find it.

>> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> 
> No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> also used by the ARM code.
> 
> Stefano in his previous review mentioned he would like PVH specific
> code in arch/x86:
> 
> https://lkml.org/lkml/2013/12/18/507

Call it xen_arch_gnttab_setup() and add a weak stub for the other architectures?
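The weak-stub idea can be sketched in plain C. The function name comes from the thread; `__attribute__((weak))` is the plain-GCC spelling of the kernel's `__weak`. This is an illustration of the mechanism, not the eventual patch:

```c
/* Generic no-op default in common code. An architecture (arch/x86 for
 * PVH) would provide a strong definition of the same symbol, which
 * overrides this weak one at link time, so __gnttab_init() in
 * drivers/xen could call it unconditionally with no #ifdefs. */
int __attribute__((weak)) xen_arch_gnttab_setup(void)
{
    return 0; /* nothing to do on architectures without PVH setup */
}
```

This would also remove the need to rely on initcall-level ordering between `core_initcall` and `core_initcall_sync`.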

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:42:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6tV-0003K7-3d; Fri, 03 Jan 2014 15:42:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6tT-0003Jr-1j
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:42:39 +0000
Received: from [85.158.139.211:17826] by server-11.bemta-5.messagelabs.com id
	5C/CA-23268-E6AD6C25; Fri, 03 Jan 2014 15:42:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388763756!7730428!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20108 invoked from network); 3 Jan 2014 15:42:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:42:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89568797"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:42:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:42:35 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6tP-0007QT-8G;
	Fri, 03 Jan 2014 15:42:35 +0000
Date: Fri, 3 Jan 2014 15:41:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031541390.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 04/18] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> P2M is not available for PVH. Fortunately for us the
> P2M code already has most of the support for auto-xlat guests thanks to
> commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
> "grant-table: call set_phys_to_machine after mapping grant refs"
> which "introduces set_phys_to_machine calls for auto_translated guests
> (even on x86) in gnttab_map_refs and gnttab_unmap_refs.
> translated by swiotlb-xen... ", so we don't need to muck much.
> 
> With the above mentioned commit "you'll get set_phys_to_machine calls
> from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
> anything with them" (Stefano Stabellini), which is OK - we want
> them to be NOPs.
> 
> This is because we assume that an "IOMMU is always present on the
> platform and Xen is going to make the appropriate IOMMU pagetable
> changes in the hypercall implementation of GNTTABOP_map_grant_ref
> and GNTTABOP_unmap_grant_ref, then everything should be transparent
> from the PVH privileged point of view and DMA transfers involving
> foreign pages keep working with no issues.
> 
> Otherwise we would need a P2M (and an M2P) for PVH privileged guests to
> track these foreign pages .. (see arch/arm/xen/p2m.c)."
> (Stefano Stabellini).
> 
> We still have to inhibit the building of the P2M tree.
> That had been done in the past by not calling
> xen_build_dynamic_phys_to_machine (which sets up the P2M tree
> and gives us virtual addresses to access it). But we are missing
> a check for xen_build_mfn_list_list - which was continuing to set up
> the P2M tree and would blow up trying to get the virtual
> address of p2m_missing (which would have been set up by
> xen_build_dynamic_phys_to_machine).
> 
> Hence a check is needed to not call xen_build_mfn_list_list when
> running in auto-xlat mode.
> 
> Instead of replicating the check for auto-xlat in enlighten.c,
> do it in the p2m.c code. The reason is that xen_build_mfn_list_list
> is also called in xen_arch_post_suspend without any check for
> auto-xlat. So for PVH, or PV with auto-xlat, we would needlessly
> allocate space for a P2M tree.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/enlighten.c |  3 +--
>  arch/x86/xen/p2m.c       | 12 ++++++++++--
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 755e5bb..ab4dd70 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1493,8 +1493,7 @@ asmlinkage void __init xen_start_kernel(void)
>  	x86_configure_nx();
>  
>  	/* Get mfn list */
> -	if (!xen_feature(XENFEAT_auto_translated_physmap))
> -		xen_build_dynamic_phys_to_machine();
> +	xen_build_dynamic_phys_to_machine();
>  
>  	/*
>  	 * Set up kernel GDT and segment registers, mainly so that
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..fb7ee0a 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
>  {
>  	unsigned long pfn;
>  
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
>  	/* Pre-initialize p2m_top_mfn to be completely missing */
>  	if (p2m_top_mfn == NULL) {
>  		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
> @@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
>  /* Set up p2m_top to point to the domain-builder provided p2m pages */
>  void __init xen_build_dynamic_phys_to_machine(void)
>  {
> -	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
> -	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
> +	unsigned long *mfn_list;
> +	unsigned long max_pfn;
>  	unsigned long pfn;
>  
> +	 if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	mfn_list = (unsigned long *)xen_start_info->mfn_list;
> +	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
>  	xen_max_p2m_pfn = max_pfn;
>  
>  	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:42:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6tV-0003K7-3d; Fri, 03 Jan 2014 15:42:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6tT-0003Jr-1j
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:42:39 +0000
Received: from [85.158.139.211:17826] by server-11.bemta-5.messagelabs.com id
	5C/CA-23268-E6AD6C25; Fri, 03 Jan 2014 15:42:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388763756!7730428!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20108 invoked from network); 3 Jan 2014 15:42:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:42:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89568797"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:42:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:42:35 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6tP-0007QT-8G;
	Fri, 03 Jan 2014 15:42:35 +0000
Date: Fri, 3 Jan 2014 15:41:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031541390.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-5-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 04/18] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> P2M is not available for PVH. Fortunately for us the
> P2M code already has most of the support for auto-xlat guests thanks to
> commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
> "grant-table: call set_phys_to_machine after mapping grant refs"
> which: "
> introduces set_phys_to_machine calls for auto_translated guests
> (even on x86) in gnttab_map_refs and gnttab_unmap_refs.
> translated by swiotlb-xen... " so we don't need to muck much.
> 
> With the above-mentioned commit, "you'll get set_phys_to_machine calls
> from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
> anything with them" (Stefano Stabellini), which is OK - we want
> them to be NOPs.
> 
> This is because we assume that an "IOMMU is always present on the
> platform and Xen is going to make the appropriate IOMMU pagetable
> changes in the hypercall implementation of GNTTABOP_map_grant_ref
> and GNTTABOP_unmap_grant_ref, then everything should be transparent
> from the PVH privileged point of view and DMA transfers involving
> foreign pages keep working with no issues.
> 
> Otherwise we would need a P2M (and an M2P) for PVH privileged to
> track these foreign pages .. (see arch/arm/xen/p2m.c)."
> (Stefano Stabellini).
> 
> We still have to inhibit the building of the P2M tree.
> That had been done in the past by not calling
> xen_build_dynamic_phys_to_machine (which sets up the P2M tree
> and gives us virtual addresses to access them). But we are missing
> a check for xen_build_mfn_list_list - which was continuing to set up
> the P2M tree and would blow up when trying to get the virtual
> address of p2m_missing (which would have been set up by
> xen_build_dynamic_phys_to_machine).
> 
> Hence a check is needed to avoid calling xen_build_mfn_list_list when
> running in auto-xlat mode.
> 
> Instead of replicating the check for auto-xlat in enlighten.c,
> do it in the p2m.c code. The reason is that xen_build_mfn_list_list
> is also called in xen_arch_post_suspend without any check for
> auto-xlat. So for PVH, or PV with auto-xlat, we would needlessly
> allocate space for a P2M tree.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/enlighten.c |  3 +--
>  arch/x86/xen/p2m.c       | 12 ++++++++++--
>  2 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 755e5bb..ab4dd70 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1493,8 +1493,7 @@ asmlinkage void __init xen_start_kernel(void)
>  	x86_configure_nx();
>  
>  	/* Get mfn list */
> -	if (!xen_feature(XENFEAT_auto_translated_physmap))
> -		xen_build_dynamic_phys_to_machine();
> +	xen_build_dynamic_phys_to_machine();
>  
>  	/*
>  	 * Set up kernel GDT and segment registers, mainly so that
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..fb7ee0a 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
>  {
>  	unsigned long pfn;
>  
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
>  	/* Pre-initialize p2m_top_mfn to be completely missing */
>  	if (p2m_top_mfn == NULL) {
>  		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
> @@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
>  /* Set up p2m_top to point to the domain-builder provided p2m pages */
>  void __init xen_build_dynamic_phys_to_machine(void)
>  {
> -	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
> -	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
> +	unsigned long *mfn_list;
> +	unsigned long max_pfn;
>  	unsigned long pfn;
>  
> +	 if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	mfn_list = (unsigned long *)xen_start_info->mfn_list;
> +	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
>  	xen_max_p2m_pfn = max_pfn;
>  
>  	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:48:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6yz-0003cP-VZ; Fri, 03 Jan 2014 15:48:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz6yy-0003cB-9p
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:48:20 +0000
Received: from [193.109.254.147:25997] by server-4.bemta-14.messagelabs.com id
	A7/0B-03916-3CBD6C25; Fri, 03 Jan 2014 15:48:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388764096!8646473!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14579 invoked from network); 3 Jan 2014 15:48:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:48:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89570458"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:48:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:48:02 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz6yg-0007UV-K7;
	Fri, 03 Jan 2014 15:48:02 +0000
Date: Fri, 3 Jan 2014 15:47:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031544180.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
 xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> The revector and copying of the P2M only happen when
> !auto-xlat and on 64-bit builds. It is not obvious from
> the code, so let's have separate 32-bit and 64-bit functions.
> 
> We also invert the check for auto-xlat to make the code
> flow simpler.
> 
> Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c | 73 ++++++++++++++++++++++++++++++------------------------
>  1 file changed, 40 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..d792a69 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
>  	 * instead of somewhere later and be confusing. */
>  	xen_mc_flush();
>  }
> -#endif
> -static void __init xen_pagetable_init(void)
> +static void __init xen_pagetable_p2m_copy(void)
>  {
> -#ifdef CONFIG_X86_64
>  	unsigned long size;
>  	unsigned long addr;
> -#endif
> -	paging_init();
> -	xen_setup_shared_info();
> -#ifdef CONFIG_X86_64
> -	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		unsigned long new_mfn_list;
> +	unsigned long new_mfn_list;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +
> +	/* On 32-bit, we get zero so this never gets executed. */

Given that this code is already ifdef'ed CONFIG_X86_64, this comment
should be removed.


> +	new_mfn_list = xen_revector_p2m_tree();

I take from the comment that new_mfn_list must not be zero. Maybe we
want a BUG_ON or a WARN_ON?


> +	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> +		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> +		memset((void *)xen_start_info->mfn_list, 0xff, size);
> +
> +		/* We should be in __ka space. */
> +		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> +		addr = xen_start_info->mfn_list;
> +		/* We roundup to the PMD, which means that if anybody at this stage is
> +		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> +		 * they are in going to crash. Fortunatly we have already revectored
> +		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> +		size = roundup(size, PMD_SIZE);
> +		xen_cleanhighmap(addr, addr + size);
>  
>  		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +		memblock_free(__pa(xen_start_info->mfn_list), size);
> +		/* And revector! Bye bye old array */
> +		xen_start_info->mfn_list = new_mfn_list;
> +	} else
> +		return;

This was a normal condition when the function was executed on both
x86_64 and x86_32. Now that it is only executed on x86_64, is it still
the case?


> -		/* On 32-bit, we get zero so this never gets executed. */
> -		new_mfn_list = xen_revector_p2m_tree();
> -		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> -			/* using __ka address and sticking INVALID_P2M_ENTRY! */
> -			memset((void *)xen_start_info->mfn_list, 0xff, size);
> -
> -			/* We should be in __ka space. */
> -			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> -			addr = xen_start_info->mfn_list;
> -			/* We roundup to the PMD, which means that if anybody at this stage is
> -			 * using the __ka address of xen_start_info or xen_start_info->shared_info
> -			 * they are in going to crash. Fortunatly we have already revectored
> -			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> -			size = roundup(size, PMD_SIZE);
> -			xen_cleanhighmap(addr, addr + size);
> -
> -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> -			memblock_free(__pa(xen_start_info->mfn_list), size);
> -			/* And revector! Bye bye old array */
> -			xen_start_info->mfn_list = new_mfn_list;
> -		} else
> -			goto skip;
> -	}
>  	/* At this stage, cleanup_highmap has already cleaned __ka space
>  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
>  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> @@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
>  	 * anything at this stage. */
>  	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
>  #endif
> -skip:
> +}
> +#else
> +static void __init xen_pagetable_p2m_copy(void)
> +{
> +	/* Nada! */
> +}
>  #endif
> +
> +static void __init xen_pagetable_init(void)
> +{
> +	paging_init();
> +	xen_setup_shared_info();
> +	xen_pagetable_p2m_copy();
>  	xen_post_allocator_init();
>  }
>  static void xen_write_cr2(unsigned long cr2)
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:48:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6zE-0003hj-Es; Fri, 03 Jan 2014 15:48:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1Vz6zC-0003gb-JM
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:48:34 +0000
Received: from [85.158.137.68:26957] by server-9.bemta-3.messagelabs.com id
	07/54-13104-1DBD6C25; Fri, 03 Jan 2014 15:48:33 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388764111!7069538!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22643 invoked from network); 3 Jan 2014 15:48:32 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 15:48:32 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s03FmKTa023889
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Jan 2014 10:48:20 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s03FmK04023887;
	Fri, 3 Jan 2014 10:48:20 -0500
Date: Fri, 3 Jan 2014 11:48:20 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: David Vrabel <david.vrabel@citrix.com>, stefano.stabellini@eu.citrix.com
Message-ID: <20140103154820.GA20976@andromeda.dapyr.net>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6DA3F.7070509@citrix.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
	grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> >>>>>  	return gnttab_init();
> >>>>>  }
> >>>>>  
> >>>>> -core_initcall(__gnttab_init);
> >>>>> +core_initcall_sync(__gnttab_init);
> >>>>
> >>>> Why has this become _sync?
> >>>
> >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> >>> at gnttab_init):
> >>
> >>
> >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > 
> > It has a clear ordering property.
> 
> This really isn't obvious to me.  Can you point to the docs/code that
> guarantee this?  I couldn't find it.

include/linux/init.h
> 
> >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > 
> > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > also used by the ARM code.
> > 
> > Stefano in his previous review mentioned he would like PVH specific
> > code in arch/x86:
> > 
> > https://lkml.org/lkml/2013/12/18/507
> 
> Call it xen_arch_gnttab_setup() and add weak stub for other architectures?

Stefano, thoughts?

> 
> David
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:48:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz6zE-0003hj-Es; Fri, 03 Jan 2014 15:48:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1Vz6zC-0003gb-JM
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:48:34 +0000
Received: from [85.158.137.68:26957] by server-9.bemta-3.messagelabs.com id
	07/54-13104-1DBD6C25; Fri, 03 Jan 2014 15:48:33 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388764111!7069538!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22643 invoked from network); 3 Jan 2014 15:48:32 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 15:48:32 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s03FmKTa023889
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Jan 2014 10:48:20 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s03FmK04023887;
	Fri, 3 Jan 2014 10:48:20 -0500
Date: Fri, 3 Jan 2014 11:48:20 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: David Vrabel <david.vrabel@citrix.com>, stefano.stabellini@eu.citrix.com
Message-ID: <20140103154820.GA20976@andromeda.dapyr.net>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52C6DA3F.7070509@citrix.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
	grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> >>>>>  	return gnttab_init();
> >>>>>  }
> >>>>>  
> >>>>> -core_initcall(__gnttab_init);
> >>>>> +core_initcall_sync(__gnttab_init);
> >>>>
> >>>> Why has this become _sync?
> >>>
> >>> It needs to run _after_ xen_pvh_gnttab_setup has run (which happens
> >>> at gnttab_init):
> >>
> >>
> >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > 
> > It has a clear ordering property.
> 
> This really isn't obvious to me.  Can you point to the docs/code that
> guarantee this?  I couldn't find it.

include/linux/init.h
> 
> >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > 
> > No. That is because __gnttab_init() lives in drivers/xen and is
> > also used by the ARM code.
> > 
> > Stefano in his previous review mentioned he would like PVH specific
> > code in arch/x86:
> > 
> > https://lkml.org/lkml/2013/12/18/507
> 
> Call it xen_arch_gnttab_setup() and add weak stub for other architectures?

Stefano, thoughts?

> 
> David
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 15:50:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 15:50:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz71S-0004E9-Mz; Fri, 03 Jan 2014 15:50:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz71R-0004Dw-Rk
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 15:50:54 +0000
Received: from [85.158.143.35:50943] by server-1.bemta-4.messagelabs.com id
	2F/6D-02132-D5CD6C25; Fri, 03 Jan 2014 15:50:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388764251!9401557!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20693 invoked from network); 3 Jan 2014 15:50:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 15:50:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89571484"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 15:50:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 10:50:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz71O-0007WX-HL;
	Fri, 03 Jan 2014 15:50:50 +0000
Date: Fri, 3 Jan 2014 15:50:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52C54C82.5010802@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031549520.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-7-git-send-email-konrad.wilk@oracle.com>
	<52C54C82.5010802@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 06/18] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > .. which are surprisingly small compared to the amount of code needed for PV.
> > 
> > PVH uses mostly native mmu ops; we leave the generic (native_*) ones in
> > place for the majority and only override the bare-metal ones we need.
> > 
> > We also optimize one - the TLB flush. The native operation would
> > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > Xen one avoids that and lets the hypervisor determine which
> > VCPU needs the TLB flush.
> 
> This TLB flush optimization should be a separate patch.

Right.
Aside from this:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:03:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7DM-0005N3-DZ; Fri, 03 Jan 2014 16:03:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz7DK-0005Mt-HV
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:03:10 +0000
Received: from [85.158.139.211:12680] by server-5.bemta-5.messagelabs.com id
	84/60-14928-D3FD6C25; Fri, 03 Jan 2014 16:03:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1388764987!7746653!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31904 invoked from network); 3 Jan 2014 16:03:08 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 16:03:08 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03G23Sn013138
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 16:02:04 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03G22j7007760
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 16:02:02 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03G221i007725; Fri, 3 Jan 2014 16:02:02 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 08:02:01 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 867BB1BFB02; Fri,  3 Jan 2014 11:02:00 -0500 (EST)
Date: Fri, 3 Jan 2014 11:02:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103160200.GE27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031544180.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031544180.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
 xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 03:47:15PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > The revector and copying of the P2M only happens when
> > !auto-xlat and on 64-bit builds. It is not obvious from
> > the code, so let's have separate 32 and 64-bit functions.
> > 
> > We also invert the check for auto-xlat to make the code
> > flow simpler.
> > 
> > Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c | 73 ++++++++++++++++++++++++++++++------------------------
> >  1 file changed, 40 insertions(+), 33 deletions(-)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index ce563be..d792a69 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
> >  	 * instead of somewhere later and be confusing. */
> >  	xen_mc_flush();
> >  }
> > -#endif
> > -static void __init xen_pagetable_init(void)
> > +static void __init xen_pagetable_p2m_copy(void)
> >  {
> > -#ifdef CONFIG_X86_64
> >  	unsigned long size;
> >  	unsigned long addr;
> > -#endif
> > -	paging_init();
> > -	xen_setup_shared_info();
> > -#ifdef CONFIG_X86_64
> > -	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > -		unsigned long new_mfn_list;
> > +	unsigned long new_mfn_list;
> > +
> > +	if (xen_feature(XENFEAT_auto_translated_physmap))
> > +		return;
> > +
> > +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > +
> > +	/* On 32-bit, we get zero so this never gets executed. */
> 
> Given that this code is already ifdef'ed CONFIG_X86_64, this comment
> should be removed.

Sure.
> 
> 
> > +	new_mfn_list = xen_revector_p2m_tree();
> 
> I take from the comment that new_mfn_list must not be zero. Maybe we
> want a BUG_ON or a WARN_ON?

It can be zero, in which case we don't want to revector.
> 
> 
> > +	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> > +		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> > +		memset((void *)xen_start_info->mfn_list, 0xff, size);
> > +
> > +		/* We should be in __ka space. */
> > +		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> > +		addr = xen_start_info->mfn_list;
> > +		/* We roundup to the PMD, which means that if anybody at this stage is
> > +		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> > +		 * they are in going to crash. Fortunatly we have already revectored
> > +		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> > +		size = roundup(size, PMD_SIZE);
> > +		xen_cleanhighmap(addr, addr + size);
> >  
> >  		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > +		memblock_free(__pa(xen_start_info->mfn_list), size);
> > +		/* And revector! Bye bye old array */
> > +		xen_start_info->mfn_list = new_mfn_list;
> > +	} else
> > +		return;
> 
> This was a normal condition when the function was executed on both
> x86_64 and x86_32. Now that it is only executed on x86_64, is it still
> the case?

It could be. Since this particular patch just moves code I would hesitate
to make changes here. Perhaps a separate patch, once the conditions
under which xen_revector_p2m_tree() can fail are understood?

> 
> 
> > -		/* On 32-bit, we get zero so this never gets executed. */
> > -		new_mfn_list = xen_revector_p2m_tree();
> > -		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> > -			/* using __ka address and sticking INVALID_P2M_ENTRY! */
> > -			memset((void *)xen_start_info->mfn_list, 0xff, size);
> > -
> > -			/* We should be in __ka space. */
> > -			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> > -			addr = xen_start_info->mfn_list;
> > -			/* We roundup to the PMD, which means that if anybody at this stage is
> > -			 * using the __ka address of xen_start_info or xen_start_info->shared_info
> > -			 * they are in going to crash. Fortunatly we have already revectored
> > -			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> > -			size = roundup(size, PMD_SIZE);
> > -			xen_cleanhighmap(addr, addr + size);
> > -
> > -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > -			memblock_free(__pa(xen_start_info->mfn_list), size);
> > -			/* And revector! Bye bye old array */
> > -			xen_start_info->mfn_list = new_mfn_list;
> > -		} else
> > -			goto skip;
> > -	}
> >  	/* At this stage, cleanup_highmap has already cleaned __ka space
> >  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
> >  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> > @@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
> >  	 * anything at this stage. */
> >  	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
> >  #endif
> > -skip:
> > +}
> > +#else
> > +static void __init xen_pagetable_p2m_copy(void)
> > +{
> > +	/* Nada! */
> > +}
> >  #endif
> > +
> > +static void __init xen_pagetable_init(void)
> > +{
> > +	paging_init();
> > +	xen_setup_shared_info();
> > +	xen_pagetable_p2m_copy();
> >  	xen_post_allocator_init();
> >  }
> >  static void xen_write_cr2(unsigned long cr2)
> > -- 
> > 1.8.3.1
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:14:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7OE-0005p0-Qg; Fri, 03 Jan 2014 16:14:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wdauchy@gmail.com>) id 1Vz7OC-0005ov-O4
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:14:24 +0000
Received: from [85.158.143.35:38114] by server-3.bemta-4.messagelabs.com id
	87/54-32360-FD1E6C25; Fri, 03 Jan 2014 16:14:23 +0000
X-Env-Sender: wdauchy@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388765662!9445696!1
X-Originating-IP: [209.85.220.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9797 invoked from network); 3 Jan 2014 16:14:23 -0000
Received: from mail-vc0-f177.google.com (HELO mail-vc0-f177.google.com)
	(209.85.220.177)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:14:23 -0000
Received: by mail-vc0-f177.google.com with SMTP id le5so423808vcb.8
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 08:14:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=/qHKqM1ogGRWHmnGMptmmppyQato0j9WTsD+PjLYaig=;
	b=r+HeN72lvrKmIPuzCoR3NJUENh2bRMA2OcO5S9oWaoWGcL++GAKApmaCqMxP41Em4j
	FJ2JkYi+bnuHOeZvRh4AZlDgFoBITKtlVP/X24z1vOgRG9HV4F5HyK6pnQUJ9IIRHK+M
	7Uu0G9DW5CQNq0gbyOgTUXBE4WgDwnLGgdQ9Qk2V4c9YUjMsAjnNhl9vkbQhjkQMz5y1
	6cORJAvOIQ/YdTk7YQK8u2gK6hHQcJsdOp64/Aj8o6TdvPPqaOe/3MhOq6jRUcJ7+ZJM
	lB4Hj5U+iz1ps98ucRzeD9j5wgqK9MM2oD4SEXAarzKIN1MJ8NF6y5ao3S1GK+Qiphls
	wfvw==
X-Received: by 10.52.244.49 with SMTP id xd17mr3194140vdc.26.1388765661856;
	Fri, 03 Jan 2014 08:14:21 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.244.193 with HTTP; Fri, 3 Jan 2014 08:14:01 -0800 (PST)
From: William Dauchy <wdauchy@gmail.com>
Date: Fri, 3 Jan 2014 17:14:01 +0100
Message-ID: <CAJ75kXawXsqDDtEx+YJgsUm8PF0PN+-rsG7yWQTfqaiAbkR1Rw@mail.gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] domU v3.10.x kmemleak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I have activated kmemleak on a v3.10.x domU (x86_64) running on a dom0
v3.4.x (x86_32) and Xen hypervisor v4.1.x.
This domU is leaking pages; among other things, I found this output in kmemleak:

unreferenced object 0xffff88005c9b3498 (size 560):
  comm "blkid", pid 572, jiffies 4294893852 (age 1912.348s)
  hex dump (first 32 bytes):
    02 00 00 00 00 00 00 00 c0 2d 76 59 00 88 ff ff  .........-vY....
    00 2f 24 81 ff ff ff ff 00 00 00 00 00 00 00 00  ./$.............
  backtrace:
    [<ffffffff81243aa0>] radix_tree_preload+0x30/0x90
    [<ffffffff81243aa0>] radix_tree_preload+0x30/0x90
    [<ffffffff810f6518>] kmem_cache_alloc+0x138/0x230
    [<ffffffff81243aa0>] radix_tree_preload+0x30/0x90
    [<ffffffff810bbb9b>] add_to_page_cache_locked+0x7b/0x140
    [<ffffffff810bbc71>] add_to_page_cache_lru+0x11/0x40
    [<ffffffff810c5946>] __do_page_cache_readahead+0x236/0x250
    [<ffffffff8100680b>] xen_make_pte+0x1b/0x50
    [<ffffffff810c5baf>] force_page_cache_readahead+0x8f/0xc0
    [<ffffffff810bd076>] generic_file_aio_read+0x4c6/0x6c0
    [<ffffffff81004739>] __raw_callee_save_xen_pmd_val+0x11/0x1e
    [<ffffffff8110b78a>] do_sync_read+0x6a/0xa0
    [<ffffffff8110c670>] vfs_read+0xa0/0x180
    [<ffffffff8110c901>] SyS_read+0x51/0xb0
    [<ffffffff81395369>] system_call_fastpath+0x16/0x1b
    [<ffffffffffffffff>] 0xffffffffffffffff

unreferenced object 0xffff88005af24bc0 (size 192):
  comm "softirq", pid 0, jiffies 4294894408 (age 2246.916s)
  hex dump (first 32 bytes):
    b0 d0 e6 50 00 88 ff ff 70 72 2f 81 ff ff ff ff  ...P....pr/.....
    00 00 00 00 00 00 00 00 00 50 f9 5b 00 88 ff ff  .........P.[....
  backtrace:
    [<ffffffff812d1ea6>] dst_alloc+0x56/0x180
    [<ffffffff812d1ea6>] dst_alloc+0x56/0x180
    [<ffffffff810f6518>] kmem_cache_alloc+0x138/0x230
    [<ffffffff812d1ea6>] dst_alloc+0x56/0x180
    [<ffffffff812f9b9e>] ip_route_input_noref+0x11e/0xbf0
    [<ffffffff81186ad1>] ext4_end_bio+0x251/0x2d0
    [<ffffffff810f5729>] poison_obj+0x29/0x40
    [<ffffffff812fc274>] ip_rcv_finish+0x184/0x360
    [<ffffffff812ca033>] __netif_receive_skb_core+0x5e3/0x7d0
    [<ffffffff812ca434>] netif_receive_skb+0x24/0x80
    [<ffffffff812cb7e8>] napi_gro_receive+0x98/0xd0
    [<ffffffff8129de37>] xennet_poll+0x797/0xce0
    [<ffffffff8107e91f>] clockevents_program_event+0x6f/0x110
    [<ffffffff810a7043>] handle_irq_event_percpu+0x83/0x140
    [<ffffffff812ca734>] net_rx_action+0x104/0x1c0
    [<ffffffff81042056>] __do_softirq+0xd6/0x1b0

Do you have any idea where this could come from?

Thanks,

-- 
William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:20:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:20:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7UA-0006Fs-VF; Fri, 03 Jan 2014 16:20:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wdauchy@gmail.com>) id 1Vz7U9-0006Fn-5h
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:20:33 +0000
Received: from [85.158.137.68:47619] by server-2.bemta-3.messagelabs.com id
	AE/45-17329-053E6C25; Fri, 03 Jan 2014 16:20:32 +0000
X-Env-Sender: wdauchy@gmail.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388766030!5953363!1
X-Originating-IP: [209.85.220.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21462 invoked from network); 3 Jan 2014 16:20:31 -0000
Received: from mail-vc0-f182.google.com (HELO mail-vc0-f182.google.com)
	(209.85.220.182)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:20:31 -0000
Received: by mail-vc0-f182.google.com with SMTP id lc6so7949526vcb.13
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 08:20:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:content-type; bh=eaL/vvLjGuv42jTOWicxI4bkuaUZZ7pnUHgrL2Zj5DE=;
	b=gNUxzha/hj8LmkL2vedcyAa8aU0rwtjDvyaKfh8IsWu/80wZMT8PG0hABjEUhwbMwN
	VNeUvRCOwEbkPnt2/G/MCd4qu11RDqHWgwQBR0iBCrT8WkvZMgZKuo5aiyAiXyAhCTC0
	w4+h6G9Q969e6w6QaBPS3Gfup8VBaeTO5WjBFUtdGulwTX776uEOql+3apXyh8RgXADY
	L/Iw7zIRbOrHgwUQsx7HscF0K9/6weXHW2RpRZK9IXKrJ2aEFLJy5l5bnZrND5W8m4Nb
	IK+kDtK/ku+MEOeMbtgT7sKJQ09fyUON2kVNPKteFzG8UfFlmK1R5Ov53vQiC5PvuBhi
	z2QA==
X-Received: by 10.220.139.136 with SMTP id e8mr1760342vcu.34.1388766029112;
	Fri, 03 Jan 2014 08:20:29 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.244.193 with HTTP; Fri, 3 Jan 2014 08:20:08 -0800 (PST)
In-Reply-To: <CAJ75kXawXsqDDtEx+YJgsUm8PF0PN+-rsG7yWQTfqaiAbkR1Rw@mail.gmail.com>
References: <CAJ75kXawXsqDDtEx+YJgsUm8PF0PN+-rsG7yWQTfqaiAbkR1Rw@mail.gmail.com>
From: William Dauchy <wdauchy@gmail.com>
Date: Fri, 3 Jan 2014 17:20:08 +0100
Message-ID: <CAJ75kXaazwNBKGniXFAoKLLkKDxMr=FO6i5Z8kO-mtGT-kywpw@mail.gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] domU v3.10.x kmemleak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 3, 2014 at 5:14 PM, William Dauchy <wdauchy@gmail.com> wrote:
> I have activated kmemleak on a v3.10.x domU (x86_64) running on a dom0
> v3.4.x (x86_32) and xen hypervisor v4.1.x

Sorry, I forgot to mention that the domU is running as a paravirtualized (PV) kernel.

-- 
William

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:23:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7Wd-0006My-J7; Fri, 03 Jan 2014 16:23:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7Wa-0006Mr-Cf
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:23:05 +0000
Received: from [85.158.143.35:62485] by server-1.bemta-4.messagelabs.com id
	BF/CE-02132-7E3E6C25; Fri, 03 Jan 2014 16:23:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388766181!9446908!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10222 invoked from network); 3 Jan 2014 16:23:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:23:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87392084"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:22:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:22:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7WL-0000QD-IR;
	Fri, 03 Jan 2014 16:22:49 +0000
Date: Fri, 3 Jan 2014 16:22:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031610470.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 17/18] xen/pvh/arm/arm64: Disable PV
 code that does not work with PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> As we do not have yet a mechanism for that.
> 
> This also impacts the ARM/ARM64 code (which does not have
> hotplug support yet).
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  drivers/xen/cpu_hotplug.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
> index cc6513a..5f80802 100644
> --- a/drivers/xen/cpu_hotplug.c
> +++ b/drivers/xen/cpu_hotplug.c
> @@ -4,6 +4,7 @@
>  
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> +#include <xen/features.h>
>  
>  #include <asm/xen/hypervisor.h>
>  #include <asm/cpu.h>
> @@ -102,7 +103,8 @@ static int __init setup_vcpu_hotplug_event(void)
>  	static struct notifier_block xsn_cpu = {
>  		.notifier_call = setup_cpu_watcher };
>  
> -	if (!xen_pv_domain())
> +	/* PVH/ARM/ARM64 TBD/FIXME: future work */
> +	if (!xen_pv_domain() || xen_feature(XENFEAT_auto_translated_physmap))
>  		return -ENODEV;
>  
>  	register_xenstore_notifier(&xsn_cpu);

Sorry for being a bit obnoxious, but I think that using a
xen_feature(XENFEAT_auto_translated_physmap) check is conceptually
wrong, because CPU hotplug and nested paging are orthogonal.

Given that we most probably want to follow the PV path for cpu_hotplug
(that is, using drivers/xen/cpu_hotplug.c), is there actually a problem
with building and initializing it on PVH guests?
If it works as it is, I would be tempted to leave it for now.

Otherwise the patch is OK and you can add my Acked-by.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:23:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7XP-0006RM-18; Fri, 03 Jan 2014 16:23:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7XO-0006RA-Di
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:23:54 +0000
Received: from [85.158.137.68:28799] by server-10.bemta-3.messagelabs.com id
	4A/FC-23989-914E6C25; Fri, 03 Jan 2014 16:23:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388766231!7115341!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13208 invoked from network); 3 Jan 2014 16:23:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:23:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87392298"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:23:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:23:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7XJ-0000RE-OG;
	Fri, 03 Jan 2014 16:23:49 +0000
Date: Fri, 3 Jan 2014 16:23:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140103160200.GE27019@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031622310.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-6-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031544180.8667@kaball.uk.xensource.com>
	<20140103160200.GE27019@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 05/18] xen/mmu/p2m: Refactor the
 xen_pagetable_init code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 03:47:15PM +0000, Stefano Stabellini wrote:
> > On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > The revector and copying of the P2M only happens when
> > > !auto-xlat and on 64-bit builds. It is not obvious from
> > > the code, so let's have separate 32 and 64-bit functions.
> > > 
> > > We also invert the check for auto-xlat to make the code
> > > flow simpler.
> > > 
> > > Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > ---
> > >  arch/x86/xen/mmu.c | 73 ++++++++++++++++++++++++++++++------------------------
> > >  1 file changed, 40 insertions(+), 33 deletions(-)
> > > 
> > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > index ce563be..d792a69 100644
> > > --- a/arch/x86/xen/mmu.c
> > > +++ b/arch/x86/xen/mmu.c
> > > @@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
> > >  	 * instead of somewhere later and be confusing. */
> > >  	xen_mc_flush();
> > >  }
> > > -#endif
> > > -static void __init xen_pagetable_init(void)
> > > +static void __init xen_pagetable_p2m_copy(void)
> > >  {
> > > -#ifdef CONFIG_X86_64
> > >  	unsigned long size;
> > >  	unsigned long addr;
> > > -#endif
> > > -	paging_init();
> > > -	xen_setup_shared_info();
> > > -#ifdef CONFIG_X86_64
> > > -	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> > > -		unsigned long new_mfn_list;
> > > +	unsigned long new_mfn_list;
> > > +
> > > +	if (xen_feature(XENFEAT_auto_translated_physmap))
> > > +		return;
> > > +
> > > +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > > +
> > > +	/* On 32-bit, we get zero so this never gets executed. */
> > 
> > Given that this code is already ifdef'ed CONFIG_X86_64, this comment
> > should be removed.
> 
> Sure.
> > 
> > 
> > > +	new_mfn_list = xen_revector_p2m_tree();
> > 
> > I take from the comment that new_mfn_list must not be zero. Maybe we
> > want a BUG_ON or a WARN_ON?
> 
> It can be zero, in which case we don't want to revector.
> > 
> > 
> > > +	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> > > +		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> > > +		memset((void *)xen_start_info->mfn_list, 0xff, size);
> > > +
> > > +		/* We should be in __ka space. */
> > > +		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> > > +		addr = xen_start_info->mfn_list;
> > > +		/* We roundup to the PMD, which means that if anybody at this stage is
> > > +		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> > > +		 * they are in going to crash. Fortunatly we have already revectored
> > > +		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> > > +		size = roundup(size, PMD_SIZE);
> > > +		xen_cleanhighmap(addr, addr + size);
> > >  
> > >  		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > > +		memblock_free(__pa(xen_start_info->mfn_list), size);
> > > +		/* And revector! Bye bye old array */
> > > +		xen_start_info->mfn_list = new_mfn_list;
> > > +	} else
> > > +		return;
> > 
> > This was a normal condition when the function was executed on both
> > x86_64 and x86_32. Now that it is only executed on x86_64, is it still
> > the case?
> 
> It could be. Since this particular patch just moves code I would hesitate
> to make changes here. Perhaps a separate patch, once the conditions
> under which xen_revector_p2m_tree() can fail are understood?

No, I think that's OK as it is.


> > 
> > 
> > > -		/* On 32-bit, we get zero so this never gets executed. */
> > > -		new_mfn_list = xen_revector_p2m_tree();
> > > -		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> > > -			/* using __ka address and sticking INVALID_P2M_ENTRY! */
> > > -			memset((void *)xen_start_info->mfn_list, 0xff, size);
> > > -
> > > -			/* We should be in __ka space. */
> > > -			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> > > -			addr = xen_start_info->mfn_list;
> > > -			/* We roundup to the PMD, which means that if anybody at this stage is
> > > -			 * using the __ka address of xen_start_info or xen_start_info->shared_info
> > > -			 * they are in going to crash. Fortunatly we have already revectored
> > > -			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> > > -			size = roundup(size, PMD_SIZE);
> > > -			xen_cleanhighmap(addr, addr + size);
> > > -
> > > -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> > > -			memblock_free(__pa(xen_start_info->mfn_list), size);
> > > -			/* And revector! Bye bye old array */
> > > -			xen_start_info->mfn_list = new_mfn_list;
> > > -		} else
> > > -			goto skip;
> > > -	}
> > >  	/* At this stage, cleanup_highmap has already cleaned __ka space
> > >  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
> > >  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> > > @@ -1255,8 +1251,19 @@ static void __init xen_pagetable_init(void)
> > >  	 * anything at this stage. */
> > >  	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
> > >  #endif
> > > -skip:
> > > +}
> > > +#else
> > > +static void __init xen_pagetable_p2m_copy(void)
> > > +{
> > > +	/* Nada! */
> > > +}
> > >  #endif
> > > +
> > > +static void __init xen_pagetable_init(void)
> > > +{
> > > +	paging_init();
> > > +	xen_setup_shared_info();
> > > +	xen_pagetable_p2m_copy();
> > >  	xen_post_allocator_init();
> > >  }
> > >  static void xen_write_cr2(unsigned long cr2)
> > > -- 
> > > 1.8.3.1
> > > 
> 
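
The control-flow change under review is the guard-clause inversion: hoisting the auto-translation test into an early return instead of nesting the whole copy under `!auto-xlat`. A minimal standalone sketch of the equivalence (plain C; `auto_translated`, `copies_done`, and `run_both` are illustrative stand-ins, not kernel symbols):

```c
#include <assert.h>

static int auto_translated; /* stand-in for xen_feature(XENFEAT_auto_translated_physmap) */
static int copies_done;     /* counts how often the copy work ran */

/* Before: the whole body is nested under a negated condition. */
static void p2m_copy_nested(void)
{
	if (!auto_translated)
		copies_done++;
}

/* After: invert the check and bail out early; same behaviour, flatter flow. */
static void p2m_copy_guard(void)
{
	if (auto_translated)
		return;
	copies_done++;
}

/* Run both variants under the same feature setting; they must agree. */
int run_both(int xlat)
{
	auto_translated = xlat;
	copies_done = 0;
	p2m_copy_nested();
	p2m_copy_guard();
	return copies_done;
}
```

With auto-translation off, both variants do the copy (`run_both(0)` returns 2); with it on, both skip it (`run_both(1)` returns 0).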

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:31:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7eC-0006yd-14; Fri, 03 Jan 2014 16:30:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7eA-0006yY-9W
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:30:54 +0000
Received: from [193.109.254.147:6936] by server-6.bemta-14.messagelabs.com id
	2D/29-14958-DB5E6C25; Fri, 03 Jan 2014 16:30:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1388766651!8709699!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20043 invoked from network); 3 Jan 2014 16:30:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89583612"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 16:30:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:30:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7e6-0000XW-Fa;
	Fri, 03 Jan 2014 16:30:50 +0000
Date: Fri, 3 Jan 2014 16:30:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031629110.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> In xen_add_extra_mem() we can skip updating P2M as it's managed
> by Xen. PVH maps the entire IO space, but only RAM pages need
> to be repopulated.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/setup.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 2137c51..dd5f905 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -27,6 +27,7 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/physdev.h>
>  #include <xen/features.h>
> +#include "mmu.h"
>  #include "xen-ops.h"
>  #include "vdso.h"
>  
> @@ -81,6 +82,9 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
>  
>  	memblock_reserve(start, size);
>  
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
>  	xen_max_p2m_pfn = PFN_DOWN(start + size);
>  	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
>  		unsigned long mfn = pfn_to_mfn(pfn);
> @@ -103,6 +107,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
>  		.domid        = DOMID_SELF
>  	};
>  	unsigned long len = 0;
> +	int xlated_phys = xen_feature(XENFEAT_auto_translated_physmap);
>  	unsigned long pfn;
>  	int ret;
>  
> @@ -116,7 +121,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
>  				continue;
>  			frame = mfn;
>  		} else {
> -			if (mfn != INVALID_P2M_ENTRY)
> +			if (!xlated_phys && mfn != INVALID_P2M_ENTRY)
>  				continue;
>  			frame = pfn;
>  		}
> @@ -154,6 +159,13 @@ static unsigned long __init xen_do_chunk(unsigned long start,
>  static unsigned long __init xen_release_chunk(unsigned long start,
>  					      unsigned long end)
>  {
> +	/*
> +	 * Xen already ballooned out the E820 non RAM regions for us
> +	 * and set them up properly in EPT.
> +	 */
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return end - start;
> +
>  	return xen_do_chunk(start, end, true);
>  }
>  
> @@ -222,7 +234,13 @@ static void __init xen_set_identity_and_release_chunk(
>  	 * (except for the ISA region which must be 1:1 mapped) to
>  	 * release the refcounts (in Xen) on the original frames.
>  	 */
> -	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
> +
> +	/*
> +	 * PVH E820 matches the hypervisor's P2M which means we need to
> +	 * account for the proper values of *release and *identity.
> +	 */
> +	for (pfn = start_pfn; !xen_feature(XENFEAT_auto_translated_physmap) &&
> +	     pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
>  		pte_t pte = __pte_ma(0);
>  
>  		if (pfn < PFN_UP(ISA_END_ADDRESS))
> 
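
The early return added to xen_release_chunk above reports the whole range as released when the guest is auto-translated, since Xen has already ballooned out the non-RAM regions. A toy sketch of that shape (`release_chunk` and `do_chunk` below are illustrative stand-ins; the "half the pfns release" rule is invented purely to make the two branches distinguishable):

```c
#include <assert.h>

/* Illustrative stand-in for xen_do_chunk(): pretend only half the
 * pfns in the range were populated and actually get released. */
static unsigned long do_chunk(unsigned long start, unsigned long end)
{
	return (end - start) / 2;
}

/* Mirrors the hunk above: for auto-translated (PVH) guests Xen has
 * already ballooned out non-RAM, so report the full range as handled. */
unsigned long release_chunk(int auto_translated,
			    unsigned long start, unsigned long end)
{
	if (auto_translated)
		return end - start;
	return do_chunk(start, end);
}
```

So `release_chunk(1, 0, 100)` reports 100 pfns without doing any per-pfn work, while `release_chunk(0, 0, 100)` defers to the real release loop.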

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:35:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7iI-00078C-Tg; Fri, 03 Jan 2014 16:35:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7iH-000787-Le
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:35:09 +0000
Received: from [193.109.254.147:3073] by server-12.bemta-14.messagelabs.com id
	41/26-13681-CB6E6C25; Fri, 03 Jan 2014 16:35:08 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1388766907!6384335!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5005 invoked from network); 3 Jan 2014 16:35:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:35:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87396121"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:35:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:35:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7iD-0000an-Rr;
	Fri, 03 Jan 2014 16:35:05 +0000
Date: Fri, 3 Jan 2014 16:34:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031633070.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for
 event channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - there are certain things
> that work in it like HVM and some like PV. There is
> a similar mode - PVHVM where we run in HVM mode with
> PV code enabled - and this patch explores that.
> 
> The most notable PV interfaces are the XenBus and event channels.
> 
> We will piggyback on how the event channel mechanism is
> used in PVHVM - that is we want the normal native IRQ mechanism
> and we will install a vector (hvm callback) for which we
> will call the event channel mechanism.
> 
> This means that from a pvops perspective, we can use
> native_irq_ops instead of the Xen PV-specific ones. Albeit in the
> future we could support pirq_eoi_map. But that is
> a feature request that can be shared with PVHVM.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/irq.c   |  5 ++++-
>  drivers/xen/events.c | 16 ++++++++++------
>  2 files changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 0da7f86..76ca326 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/features.h>
>  #include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
> @@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	/* For PVH we use default pv_irq_ops settings. */
> +	if (!xen_feature(XENFEAT_hvm_callback_vector))
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 4035e83..bf8fb29 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1908,20 +1908,24 @@ void __init xen_init_IRQ(void)
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
>  #ifdef CONFIG_X86
> -	if (xen_hvm_domain()) {
> +	if (xen_pv_domain()) {
> +		irq_ctx_init(smp_processor_id());
> +		if (xen_initial_domain())
> +			pci_xen_initial_domain();
> +	}
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
>  		xen_callback_vector();
> +
> +	if (xen_hvm_domain()) {
>  		native_init_IRQ();
>  		/* pci_xen_hvm_init must be called after native_init_IRQ so that
>  		 * __acpi_register_gsi can point at the right function */
>  		pci_xen_hvm_init();
> -	} else {
> +	} else if (!xen_pvh_domain()) {
> +		/* TODO: No PVH support for PIRQ EOI */
>  		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
> -		irq_ctx_init(smp_processor_id());
> -		if (xen_initial_domain())
> -			pci_xen_initial_domain();

We already have a mechanism to identify whether
PHYSDEVOP_pirq_eoi_gmfn_v2 is available or not. Can't we just rely on
that?


>  		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
>  		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
> -- 
> 1.8.3.1
> 
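The ops-selection rule in the arch/x86/xen/irq.c hunk above - keep the native IRQ ops when the HVM callback vector feature is present (PVH/PVHVM), install the Xen PV ops otherwise - can be sketched as a user-space toy. All identifiers here (`pick_irq_ops`, the `*_sketch` tables) are hypothetical stand-ins, not kernel symbols:

```c
/* Toy stand-in for the pv_irq_ops tables selected in xen_init_irq_ops(). */
struct pv_irq_ops_sketch { const char *name; };

const struct pv_irq_ops_sketch native_irq_ops_sketch = { "native" };
const struct pv_irq_ops_sketch xen_pv_irq_ops_sketch = { "xen-pv" };

const struct pv_irq_ops_sketch *
pick_irq_ops(int has_hvm_callback_vector)
{
    /* PVH/PVHVM advertise XENFEAT_hvm_callback_vector, so the default
     * (native) ops stay in place; classic PV gets the Xen PV ops. */
    return has_hvm_callback_vector ? &native_irq_ops_sketch
                                   : &xen_pv_irq_ops_sketch;
}
```

This mirrors the patch's inversion of the old unconditional `pv_irq_ops = xen_irq_ops` assignment.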

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:35:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7iI-00078C-Tg; Fri, 03 Jan 2014 16:35:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7iH-000787-Le
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:35:09 +0000
Received: from [193.109.254.147:3073] by server-12.bemta-14.messagelabs.com id
	41/26-13681-CB6E6C25; Fri, 03 Jan 2014 16:35:08 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1388766907!6384335!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5005 invoked from network); 3 Jan 2014 16:35:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:35:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87396121"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:35:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:35:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7iD-0000an-Rr;
	Fri, 03 Jan 2014 16:35:05 +0000
Date: Fri, 3 Jan 2014 16:34:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031633070.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for
 event channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - certain things in it work
> as in HVM and others as in PV. There is a similar mode -
> PVHVM, where we run in HVM mode with PV code enabled - and
> this patch builds on that.
> 
> The most notable PV interfaces are the XenBus and event channels.
> 
> We will piggyback on how the event channel mechanism is
> used in PVHVM - that is, we keep the normal native IRQ
> mechanism and install a vector (the HVM callback) whose
> handler invokes the event channel machinery.
> 
> This means that from a pvops perspective, we can use
> native_irq_ops instead of the Xen PV specific ones. In the
> future we could support pirq_eoi_map, but that is a
> feature request that can be shared with PVHVM.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/irq.c   |  5 ++++-
>  drivers/xen/events.c | 16 ++++++++++------
>  2 files changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 0da7f86..76ca326 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/features.h>
>  #include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
> @@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	/* For PVH we use default pv_irq_ops settings. */
> +	if (!xen_feature(XENFEAT_hvm_callback_vector))
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 4035e83..bf8fb29 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1908,20 +1908,24 @@ void __init xen_init_IRQ(void)
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
>  #ifdef CONFIG_X86
> -	if (xen_hvm_domain()) {
> +	if (xen_pv_domain()) {
> +		irq_ctx_init(smp_processor_id());
> +		if (xen_initial_domain())
> +			pci_xen_initial_domain();
> +	}
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
>  		xen_callback_vector();
> +
> +	if (xen_hvm_domain()) {
>  		native_init_IRQ();
>  		/* pci_xen_hvm_init must be called after native_init_IRQ so that
>  		 * __acpi_register_gsi can point at the right function */
>  		pci_xen_hvm_init();
> -	} else {
> +	} else if (!xen_pvh_domain()) {
> +		/* TODO: No PVH support for PIRQ EOI */
>  		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
> -		irq_ctx_init(smp_processor_id());
> -		if (xen_initial_domain())
> -			pci_xen_initial_domain();

We already have a mechanism to identify whether
PHYSDEVOP_pirq_eoi_gmfn_v2 is available or not. Can't we just rely on
that?


>  		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
>  		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:41:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7oS-0007a2-Oi; Fri, 03 Jan 2014 16:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7oS-0007Zx-5a
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:41:32 +0000
Received: from [85.158.143.35:16704] by server-2.bemta-4.messagelabs.com id
	94/8C-11386-B38E6C25; Fri, 03 Jan 2014 16:41:31 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388767289!9411118!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29934 invoked from network); 3 Jan 2014 16:41:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:41:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87397720"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:41:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:41:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7oA-0000g7-8v;
	Fri, 03 Jan 2014 16:41:14 +0000
Date: Fri, 3 Jan 2014 16:40:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031640200.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-13-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 12/18] xen/grants: Remove
 gnttab_max_grant_frames dependency on gnttab_init.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> The function gnttab_max_grant_frames() returns the maximum number
> of grant frames (pages) we can have. Unfortunately it depended on
> gnttab_init() having been run first to initialize the boot max
> value (boot_max_nr_grant_frames).
> 
> This meant that callers of gnttab_max_grant_frames would always
> get a zero value if they ran before gnttab_init() - such as
> 'platform_pci_init' (drivers/xen/platform-pci.c).
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  drivers/xen/grant-table.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..99399cb 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -62,7 +62,6 @@
>  
>  static grant_ref_t **gnttab_list;
>  static unsigned int nr_grant_frames;
> -static unsigned int boot_max_nr_grant_frames;
>  static int gnttab_free_count;
>  static grant_ref_t gnttab_free_head;
>  static DEFINE_SPINLOCK(gnttab_list_lock);
> @@ -827,6 +826,11 @@ static unsigned int __max_nr_grant_frames(void)
>  unsigned int gnttab_max_grant_frames(void)
>  {
>  	unsigned int xen_max = __max_nr_grant_frames();
> +	static unsigned int boot_max_nr_grant_frames;
> +
> +	/* First time, initialize it properly. */
> +	if (!boot_max_nr_grant_frames)
> +		boot_max_nr_grant_frames = __max_nr_grant_frames();
>  
>  	if (xen_max > boot_max_nr_grant_frames)
>  		return boot_max_nr_grant_frames;
> @@ -1227,13 +1231,12 @@ int gnttab_init(void)
>  
>  	gnttab_request_version();
>  	nr_grant_frames = 1;
> -	boot_max_nr_grant_frames = __max_nr_grant_frames();
>  
>  	/* Determine the maximum number of frames required for the
>  	 * grant reference free list on the current hypervisor.
>  	 */
>  	BUG_ON(grefs_per_grant_frame == 0);
> -	max_nr_glist_frames = (boot_max_nr_grant_frames *
> +	max_nr_glist_frames = (gnttab_max_grant_frames() *
>  			       grefs_per_grant_frame / RPP);
>  
>  	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
> -- 
> 1.8.3.1
> 
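The lazy-initialization move in the patch above - latching the boot-time maximum inside the accessor on first call instead of in gnttab_init() - can be sketched in isolation. `query_hypervisor_max()` and `fake_hypervisor_max` are hypothetical stand-ins for `__max_nr_grant_frames()` and the hypervisor's answer:

```c
unsigned int fake_hypervisor_max = 32;

unsigned int query_hypervisor_max(void)
{
    /* Hypothetical stand-in for __max_nr_grant_frames(). */
    return fake_hypervisor_max;
}

unsigned int max_grant_frames_sketch(void)
{
    unsigned int xen_max = query_hypervisor_max();
    static unsigned int boot_max;   /* zero until the first call */

    /* First time through, latch the boot-time maximum - the line
     * gnttab_init() no longer has to run on our behalf. */
    if (!boot_max)
        boot_max = query_hypervisor_max();

    /* Never report more than the value latched at first use. */
    return xen_max > boot_max ? boot_max : xen_max;
}
```

An early caller such as platform_pci_init now gets a real value rather than zero, and later increases of the hypervisor-side limit are still clamped to the boot-time maximum.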

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:44:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7rG-0007go-BL; Fri, 03 Jan 2014 16:44:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz7rE-0007gi-Pc
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:44:24 +0000
Received: from [85.158.143.35:31396] by server-1.bemta-4.messagelabs.com id
	13/13-02132-8E8E6C25; Fri, 03 Jan 2014 16:44:24 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388767461!9338211!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30275 invoked from network); 3 Jan 2014 16:44:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:44:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87398412"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:44:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:44:21 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz7rA-0000ii-WE;
	Fri, 03 Jan 2014 16:44:20 +0000
Date: Fri, 3 Jan 2014 16:43:33 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031642050.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-14-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 13/18] xen/grant-table: Refactor
	gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> We have this odd scenario where for the PV paths we take a shortcut,
> but for the HVM paths we first ioremap xen_hvm_resume_frames, then
> assign it to gnttab_shared.addr. This is needed because gnttab_map
> uses gnttab_shared.addr.
> 
> Instead of having:
> 	if (pv)
> 		return gnttab_map
> 	if (hvm)
> 		...
> 
> 	gnttab_map
> 
> Let's move the HVM part before the gnttab_map call and remove the
> first call to gnttab_map.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  drivers/xen/grant-table.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 99399cb..cc1b4fa 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
>  	if (max_nr_gframes < nr_grant_frames)
>  		return -ENOSYS;
>  
> -	if (xen_pv_domain())
> -		return gnttab_map(0, nr_grant_frames - 1);
> -
> -	if (gnttab_shared.addr == NULL) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
> +	{
>  		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> -						PAGE_SIZE * max_nr_gframes);
> +					       PAGE_SIZE * max_nr_gframes);

                          ^ spurious change
Aside from that:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
>  					xen_hvm_resume_frames);
>  			return -ENOMEM;
>  		}
>  	}
> -
> -	gnttab_map(0, nr_grant_frames - 1);
> -
> -	return 0;
> +	return gnttab_map(0, nr_grant_frames - 1);
>  }
>  
>  int gnttab_resume(void)
> -- 
> 1.8.3.1
> 
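The refactored control flow above - the auto-translated path only fills in the shared address, and every path ends in one gnttab_map() call - can be sketched with hypothetical stand-ins (`remap_frames`, `map_frames`, `gnttab_setup_sketch`; none are kernel symbols):

```c
#include <stddef.h>

static char fake_frame_area[16];
void *shared_addr_sketch = NULL;

void *remap_frames(void)
{
    /* Stand-in for xen_remap(); always succeeds in this toy. */
    return fake_frame_area;
}

int map_frames(void)
{
    /* Stand-in for gnttab_map(); 0 on success. */
    return 0;
}

int gnttab_setup_sketch(int auto_translated)
{
    /* Auto-translated (HVM/PVH) guests must have the shared area
     * remapped first; PV guests fall straight through. */
    if (auto_translated && shared_addr_sketch == NULL) {
        shared_addr_sketch = remap_frames();
        if (shared_addr_sketch == NULL)
            return -1;   /* -ENOMEM in the real code */
    }
    /* Single exit: both paths end in one map call. */
    return map_frames();
}
```

Collapsing the early `return gnttab_map(...)` for PV into the shared tail call is what lets the patch delete the duplicated mapping code.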

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7sL-0007nD-R7; Fri, 03 Jan 2014 16:45:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz7sK-0007mt-ML
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:45:32 +0000
Received: from [85.158.137.68:28692] by server-15.bemta-3.messagelabs.com id
	F4/02-11556-B29E6C25; Fri, 03 Jan 2014 16:45:31 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388767529!7157616!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2844 invoked from network); 3 Jan 2014 16:45:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:45:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89591329"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 16:45:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:45:28 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1Vz7sG-0000kB-L0;
	Fri, 03 Jan 2014 16:45:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Jan 2014 16:45:20 +0000
Message-ID: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH RFC 0/2] Linux: possible fixes for mapping high
	MMIO regions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a possible fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were
specifying the _PAGE_IOMAP flag, which meant no valid MFN could be found
and the resulting PTE would be marked as not present, causing subsequent
faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Ballooned frames
are still marked as missing in the p2m as before.
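The stated policy can be modelled with a small sketch (this models only the classification described above; the enum and helper names are invented for illustration, not kernel code):

```c
#include <assert.h>

/* Illustrative model of the proposed p2m policy -- not kernel code. */
enum p2m_class { P2M_RAM, P2M_MISSING, P2M_IDENTITY };

static enum p2m_class classify(int backed_by_ram, int ballooned_out)
{
	if (backed_by_ram)
		/* ballooned-out RAM frames stay "missing", as before */
		return ballooned_out ? P2M_MISSING : P2M_RAM;
	/* anything that isn't RAM is assumed to be I/O and mapped 1:1 */
	return P2M_IDENTITY;
}
```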

As a follow-on, mfn_to_pfn() is (hopefully) extended to do the right
thing with such an MFN.  This means the Xen-specific _PAGE_IOMAP PTE
flag can be removed.

This series is posted as an early RFC in the hope that this is an
acceptable approach.  It has only seen the bare minimum of smoke
testing (my test dom0 didn't explode). In particular, I've not
actually tested it with a device with a high MMIO region.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7sN-0007nW-83; Fri, 03 Jan 2014 16:45:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz7sL-0007n3-Ef
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:45:33 +0000
Received: from [85.158.137.68:28731] by server-11.bemta-3.messagelabs.com id
	31/48-19379-C29E6C25; Fri, 03 Jan 2014 16:45:32 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388767529!7157616!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2892 invoked from network); 3 Jan 2014 16:45:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:45:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89591331"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 16:45:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:45:29 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1Vz7sG-0000kB-Mp;
	Fri, 03 Jan 2014 16:45:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Jan 2014 16:45:21 +0000
Message-ID: <1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.
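The change to get_phys_to_machine() can be sketched as follows. The IDENTITY_FRAME encoding (a tag bit OR'd into the PFN) follows the kernel's existing convention, but the bit position and MAX_P2M_PFN value used here are assumptions for illustration:

```c
#include <assert.h>

/* Sketch of the identity-frame tagging; bit position is assumed. */
#define BITS_PER_LONG		(8 * (int)sizeof(unsigned long))
#define IDENTITY_FRAME_BIT	(1UL << (BITS_PER_LONG - 2))
#define IDENTITY_FRAME(pfn)	((pfn) | IDENTITY_FRAME_BIT)

#define MAX_P2M_PFN		(1UL << 20)	/* illustrative value */
#define INVALID_P2M_ENTRY	(~0UL)

/* With this patch, a PFN beyond the p2m yields an identity-tagged
 * copy of itself instead of INVALID_P2M_ENTRY, so callers can
 * recover MFN == PFN by masking the tag off. */
static unsigned long get_phys_to_machine_sketch(unsigned long pfn)
{
	if (pfn >= MAX_P2M_PFN)
		return IDENTITY_FRAME(pfn);	/* was: INVALID_P2M_ENTRY */
	return INVALID_P2M_ENTRY;		/* real p2m lookup elided */
}
```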

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |   10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..a905355 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -481,7 +481,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..6d7798f 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -412,6 +412,16 @@ char * __init xen_memory_setup(void)
 		max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);
 		mem_end = PFN_PHYS(max_pfn);
 	}
+
+	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(max_pfn + 1, ~0ul);
+
 	/*
 	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
 	 * factor the base size.  On non-highmem systems, the base
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7sN-0007nf-KV; Fri, 03 Jan 2014 16:45:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz7sM-0007nJ-Mi
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:45:35 +0000
Received: from [85.158.137.68:17738] by server-5.bemta-3.messagelabs.com id
	91/77-25188-D29E6C25; Fri, 03 Jan 2014 16:45:33 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388767529!7157616!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2957 invoked from network); 3 Jan 2014 16:45:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:45:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89591332"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 16:45:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:45:29 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1Vz7sG-0000kB-NK;
	Fri, 03 Jan 2014 16:45:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Jan 2014 16:45:22 +0000
Message-ID: <1388767522-11768-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 2/2] x86/xen: make mfn_to_pfn() work with MFNs
	that are 1:1 in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag is used to indicate that the PTE contains an
MFN that is an identity frame (1:1) in the p2m.  This is so the
correct conversion of MFN to PFN can be done when reading a PTE.

If mfn_to_pfn() instead returns the correct PFN, the _PAGE_IOMAP flag
is not required and can be removed.

In mfn_to_pfn() the PFN found in the M2P is checked in the P2M.  If
the two MFNs differ then the MFN is one of three possibilities:

  a) it's a foreign MFN with an m2p override.

  b) it's a foreign MFN with /no/ m2p override.

  c) it's an identity MFN.

It is not permitted to call mfn_to_pfn() on a foreign MFN without an
override, as the resulting PFN would be incorrect.  We can therefore
assume case (c) and return PFN == MFN.
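The resulting fallback logic can be sketched as a simplified model of the decision above (the helper name and its flag argument are invented for illustration; in the kernel the inputs come from mfn_to_pfn_no_overrides(), get_phys_to_machine() and m2p_find_override_pfn()):

```c
#include <assert.h>

#define INVALID_PFN	(~0UL)

/* Model of the mfn_to_pfn() fallback: the caller supplies the m2p
 * result, whether the p2m round-trip matched, and the m2p override
 * result (INVALID_PFN if there is none). */
static unsigned long resolve_pfn(unsigned long mfn, unsigned long m2p_pfn,
				 int p2m_matches, unsigned long override_pfn)
{
	unsigned long pfn = m2p_pfn;

	if (!p2m_matches)
		pfn = override_pfn;	/* case (a); INVALID_PFN for (b)/(c) */
	if (pfn == INVALID_PFN)
		pfn = mfn;		/* assume identity: case (c) */
	return pfn;
}
```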

[ This patch should probably be split into the Xen-specific parts and
an x86 patch to remove _PAGE_IOMAP. ]

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/pgtable_types.h |   12 +++---
 arch/x86/include/asm/xen/page.h      |   18 +++++++---
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 -
 arch/x86/xen/enlighten.c             |    2 -
 arch/x86/xen/mmu.c                   |   58 +++++-----------------------------
 7 files changed, 28 insertions(+), 68 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0ecac25..0b12657 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -163,10 +163,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..eb11963 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -112,11 +112,18 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 	pfn = mfn_to_pfn_no_overrides(mfn);
 	if (get_phys_to_machine(pfn) != mfn) {
 		/*
-		 * If this appears to be a foreign mfn (because the pfn
-		 * doesn't map back to the mfn), then check the local override
-		 * table to see if there's a better pfn to use.
+		 * This is either:
 		 *
-		 * m2p_find_override_pfn returns ~0 if it doesn't find anything.
+		 * a) a foreign MFN with an override.
+		 *
+		 * b) a foreign MFN without an override.
+		 *
+		 * c) an identity MFN that is not in the p2m.
+		 *
+		 * For (a), look in the m2p overrides.  For (b) and
+		 * (c) assume identity MFN since mfn_to_pfn() will
+		 * only be called on foreign MFNs iff they have
+		 * overrides.
 		 */
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
@@ -126,8 +133,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
 	 */
-	if (pfn == ~0 &&
-			get_phys_to_machine(mfn) == IDENTITY_FRAME(mfn))
+	if (pfn == ~0)
 		pfn = mfn;
 
 	return pfn;
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 4287f1f..9031593 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 104d56a..68bf948 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..f9c2d71 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1458,8 +1458,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..5fa77a1 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -368,11 +368,9 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 	if (val & _PAGE_PRESENT) {
 		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		unsigned long pfn = mfn_to_pfn(mfn);
-
 		pteval_t flags = val & PTE_FLAGS_MASK;
-		if (unlikely(pfn == ~0))
-			val = flags & ~_PAGE_PRESENT;
-		else
+
+		if (likely(pfn != ~0))
 			val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
 	}
 
@@ -390,6 +388,7 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 			mfn = get_phys_to_machine(pfn);
 		else
 			mfn = pfn;
+
 		/*
 		 * If there's no mfn for the pfn, then just create an
 		 * empty non-present pte.  Unfortunately this loses
@@ -399,38 +398,15 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+			mfn = pfn;
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 static pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ static pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 static pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ static pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2084,7 +2044,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
@@ -2524,8 +2484,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

index ce563be..5fa77a1 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -368,11 +368,9 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 	if (val & _PAGE_PRESENT) {
 		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		unsigned long pfn = mfn_to_pfn(mfn);
-
 		pteval_t flags = val & PTE_FLAGS_MASK;
-		if (unlikely(pfn == ~0))
-			val = flags & ~_PAGE_PRESENT;
-		else
+
+		if (likely(pfn != ~0))
 			val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
 	}
 
@@ -390,6 +388,7 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 			mfn = get_phys_to_machine(pfn);
 		else
 			mfn = pfn;
+
 		/*
 		 * If there's no mfn for the pfn, then just create an
 		 * empty non-present pte.  Unfortunately this loses
@@ -399,38 +398,15 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+			mfn = pfn;
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 static pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ static pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 static pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ static pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2084,7 +2044,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
@@ -2524,8 +2484,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz7sN-0007nW-83; Fri, 03 Jan 2014 16:45:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1Vz7sL-0007n3-Ef
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 16:45:33 +0000
Received: from [85.158.137.68:28731] by server-11.bemta-3.messagelabs.com id
	31/48-19379-C29E6C25; Fri, 03 Jan 2014 16:45:32 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1388767529!7157616!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2892 invoked from network); 3 Jan 2014 16:45:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:45:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89591331"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 16:45:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:45:29 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1Vz7sG-0000kB-Mp;
	Fri, 03 Jan 2014 16:45:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 3 Jan 2014 16:45:21 +0000
Message-ID: <1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |   10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..a905355 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -481,7 +481,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..6d7798f 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -412,6 +412,16 @@ char * __init xen_memory_setup(void)
 		max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);
 		mem_end = PFN_PHYS(max_pfn);
 	}
+
+	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(max_pfn + 1, ~0ul);
+
 	/*
 	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
 	 * factor the base size.  On non-highmem systems, the base
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 16:55:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 16:55:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz81M-0000Ae-PH; Fri, 03 Jan 2014 16:54:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz81L-0000AZ-HM
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 16:54:51 +0000
Received: from [85.158.137.68:38019] by server-8.bemta-3.messagelabs.com id
	11/97-31081-A5BE6C25; Fri, 03 Jan 2014 16:54:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1388768088!7110386!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2287 invoked from network); 3 Jan 2014 16:54:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 16:54:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87403236"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 16:54:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 11:54:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz81G-0000s0-Pc;
	Fri, 03 Jan 2014 16:54:46 +0000
Date: Fri, 3 Jan 2014 16:53:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031644120.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant
 frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> and contain the virtual address of the grants. That was OK
> for most architectures (PVHVM, ARM) were the grants are contingous
> in memory. That however is not the case for PVH - in which case
> we will have to do a lookup for each virtual address for the PFN.
> 
> Instead of doing that, lets make it a structure which will contain
> the array of PFNs, the virtual address and the count of said PFNs.
> 
> Also provide a generic functions: gnttab_setup_auto_xlat_frames and
> gnttab_free_auto_xlat_frames to populate said structure with
> appropiate values for PVHVM and ARM.
     ^appropriate


> To round it off, change the name from 'xen_hvm_resume_frames' to
> a more descriptive one - 'xen_auto_xlat_grant_frames'.
> 
> For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
> 
> Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/arm/xen/enlighten.c   |  9 +++++++--
>  drivers/xen/grant-table.c  | 45 ++++++++++++++++++++++++++++++++++++++++-----
>  drivers/xen/platform-pci.c | 10 +++++++---
>  include/xen/grant_table.h  |  9 ++++++++-
>  4 files changed, 62 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 8550123..2162172 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
>  	const char *version = NULL;
>  	const char *xen_prefix = "xen,xen-";
>  	struct resource res;
> +	unsigned long grant_frames;
>  
>  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
>  	if (!node) {
> @@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
>  	}
>  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
>  		return 0;
> -	xen_hvm_resume_frames = res.start;
> +	grant_frames = res.start;
>  	xen_events_irq = irq_of_parse_and_map(node, 0);
>  	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> -			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
> +			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
>  	if (xen_vcpu_info == NULL)
>  		return -ENOMEM;
>  
> +	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
> +		free_percpu(xen_vcpu_info);
> +		return -ENOMEM;
> +	}
>  	gnttab_init();
>  	if (!xen_initial_domain())
>  		xenbus_probe(NULL);
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index cc1b4fa..b117fd6 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -65,8 +65,8 @@ static unsigned int nr_grant_frames;
>  static int gnttab_free_count;
>  static grant_ref_t gnttab_free_head;
>  static DEFINE_SPINLOCK(gnttab_list_lock);
> -unsigned long xen_hvm_resume_frames;
> -EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
> +struct grant_frames xen_auto_xlat_grant_frames;
> +EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);

it should be static now


>  static union {
>  	struct grant_entry_v1 *v1;
> @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>  
> +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> +{
> +	xen_pfn_t *pfn;
> +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> +	int i;
> +
> +	if (xen_auto_xlat_grant_frames.count)
> +		return -EINVAL;
> +
> +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> +	if (!pfn)
> +		return -ENOMEM;
> +	for (i = 0; i < max_nr_gframes; i++)
> +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
> +
> +	xen_auto_xlat_grant_frames.vaddr = addr;
> +	xen_auto_xlat_grant_frames.pfn = pfn;
> +	xen_auto_xlat_grant_frames.count = max_nr_gframes;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
> +
> +void gnttab_free_auto_xlat_frames(void)
> +{
> +	if (!xen_auto_xlat_grant_frames.count)
> +		return;
> +	kfree(xen_auto_xlat_grant_frames.pfn);
> +	xen_auto_xlat_grant_frames.pfn = NULL;
> +	xen_auto_xlat_grant_frames.count = 0;
> +	xen_auto_xlat_grant_frames.vaddr = 0;
> +}
> +EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);

I would leave vaddr alone in gnttab_setup_auto_xlat_frames and
gnttab_free_auto_xlat_frames


>  /* Handling of paged out grant targets (GNTST_eagain) */
>  #define MAX_DELAY 256
>  static inline void
> @@ -1068,6 +1102,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> +		BUG_ON(xen_auto_xlat_grant_frames.count < nr_gframes);
>  		/*
>  		 * Loop backwards, so that the first hypercall has the largest
>  		 * index, ensuring that the table will grow only once.
> @@ -1076,7 +1111,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  			xatp.domid = DOMID_SELF;
>  			xatp.idx = i;
>  			xatp.space = XENMAPSPACE_grant_table;
> -			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
> +			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
>  			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
>  			if (rc != 0) {
>  				pr_warn("grant table add_to_physmap failed, err=%d\n",
> @@ -1175,11 +1210,11 @@ static int gnttab_setup(void)
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
>  	{
> -		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> +		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
>  					       PAGE_SIZE * max_nr_gframes);

here you can xen_remap xen_auto_xlat_grant_frames.pfn[0] instead


>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
> -					xen_hvm_resume_frames);
> +					xen_auto_xlat_grant_frames.vaddr);
>  			return -ENOMEM;
>  		}
>  	}
> diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
> index 2f3528e..f1947ac 100644
> --- a/drivers/xen/platform-pci.c
> +++ b/drivers/xen/platform-pci.c
> @@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
>  	long ioaddr;
>  	long mmio_addr, mmio_len;
>  	unsigned int max_nr_gframes;
> +	unsigned long grant_frames;
>  
>  	if (!xen_domain())
>  		return -ENODEV;
> @@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
>  	}
>  
>  	max_nr_gframes = gnttab_max_grant_frames();
> -	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
> +	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
> +	if (gnttab_setup_auto_xlat_frames(grant_frames))
> +		goto out;
>  	ret = gnttab_init();
>  	if (ret)
> -		goto out;
> +		goto grant_out;
>  	xenbus_probe(NULL);
>  	return 0;
> -
> +grant_out:
> +	gnttab_free_auto_xlat_frames();
>  out:
>  	pci_release_region(pdev, 0);
>  mem_out:
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..a997406 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
>  			   grant_status_t **__shared);
>  void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
>  
> -extern unsigned long xen_hvm_resume_frames;
> +struct grant_frames {
> +	xen_pfn_t *pfn;
> +	int count;
> +	unsigned long vaddr;
> +};
> +extern struct grant_frames xen_auto_xlat_grant_frames;
>  unsigned int gnttab_max_grant_frames(void);
> +int gnttab_setup_auto_xlat_frames(unsigned long addr);
> +void gnttab_free_auto_xlat_frames(void);
>  
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 17:22:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 17:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz8RP-0001b6-Ut; Fri, 03 Jan 2014 17:21:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz8RO-0001b1-Md
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 17:21:46 +0000
Received: from [85.158.143.35:40489] by server-1.bemta-4.messagelabs.com id
	6D/64-02132-9A1F6C25; Fri, 03 Jan 2014 17:21:45 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1388769703!8227639!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24689 invoked from network); 3 Jan 2014 17:21:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 17:21:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89604022"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 17:21:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 12:21:42 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz8RK-0001Ek-H2;
	Fri, 03 Jan 2014 17:21:42 +0000
Date: Fri, 3 Jan 2014 17:20:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
In-Reply-To: <20140103154820.GA20976@andromeda.dapyr.net>
Message-ID: <alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > >>>>>  	return gnttab_init();
> > >>>>>  }
> > >>>>>  
> > >>>>> -core_initcall(__gnttab_init);
> > >>>>> +core_initcall_sync(__gnttab_init);
> > >>>>
> > >>>> Why has this become _sync?
> > >>>
> > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > >>> at gnttab_init):
> > >>
> > >>
> > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > 
> > > It has a clear ordering property.
> > 
> > This really isn't obvious to me.  Can you point to the docs/code that
> > guarantee this?  I couldn't find it.
> 
> include/linux/init.h
> > 
> > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > 
> > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > also used by the ARM code.
> > > 
> > > Stefano in his previous review mentioned he would like PVH specific
> > > code in arch/x86:
> > > 
> > > https://lkml.org/lkml/2013/12/18/507
> > 
> > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> 
> Stefano, thoughts?

I think that you can safely move __gnttab_init to postcore_initcall if
it works correctly for the PV and PVH cases, because HVM and ARM are
unaffected by it.  In fact they don't initialize the grant table via
__gnttab_init at all. See:

/* Delay grant-table initialization in the PV on HVM case */
if (xen_hvm_domain())
	return 0;

at the beginning of __gnttab_init.


From xen-devel-bounces@lists.xen.org Fri Jan 03 17:22:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 17:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz8SX-0001em-DN; Fri, 03 Jan 2014 17:22:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz8SW-0001eU-9X
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 17:22:56 +0000
Received: from [85.158.139.211:53240] by server-14.bemta-5.messagelabs.com id
	06/8F-24200-FE1F6C25; Fri, 03 Jan 2014 17:22:55 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388769773!7745495!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13986 invoked from network); 3 Jan 2014 17:22:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 17:22:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="89604619"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 17:22:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 12:22:52 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz8SS-0001Fq-K9;
	Fri, 03 Jan 2014 17:22:52 +0000
Date: Fri, 3 Jan 2014 17:22:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-17-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031721380.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-17-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 16/18] xen/pvh: Piggyback on PVHVM
	XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - there are certain things
> that work in it like HVM and some like PV. For the XenBus
> mechanism we want to use the PVHVM mechanism.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  drivers/xen/xenbus/xenbus_client.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index ec097d6..7f7c454 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -45,6 +45,7 @@
>  #include <xen/grant_table.h>
>  #include <xen/xenbus.h>
>  #include <xen/xen.h>
> +#include <xen/features.h>
>  
>  #include "xenbus_probe.h"
>  
> @@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
>  
>  void __init xenbus_ring_ops_init(void)
>  {
> -	if (xen_pv_domain())
> +	if (xen_pv_domain() && !xen_feature(XENFEAT_auto_translated_physmap))

As I wrote in the other email, this should be

if (!xen_feature(XENFEAT_auto_translated_physmap))


>  		ring_ops = &ring_ops_pv;
>  	else
>  		ring_ops = &ring_ops_hvm;
> -- 
> 1.8.3.1
> 


From xen-devel-bounces@lists.xen.org Fri Jan 03 17:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 17:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz8Wz-0001rB-4L; Fri, 03 Jan 2014 17:27:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz8Wx-0001r5-MQ
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 17:27:31 +0000
Received: from [85.158.139.211:21320] by server-12.bemta-5.messagelabs.com id
	88/51-30017-203F6C25; Fri, 03 Jan 2014 17:27:30 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388770048!7710617!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28705 invoked from network); 3 Jan 2014 17:27:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 17:27:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,598,1384300800"; d="scan'208";a="87412828"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 17:27:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 12:27:27 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz8Wt-0001Jj-3E;
	Fri, 03 Jan 2014 17:27:27 +0000
Date: Fri, 3 Jan 2014 17:26:39 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401031722170.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> In PVH the shared grant frame is the PFN and not MFN,
> hence its mapped via the same code path as HVM.
> 
> The allocation of the grant frame is done differently - we
> do not use the early platform-pci driver and have an
> ioremap area - instead we use balloon memory and stitch
> all of the non-contiguous pages in a virtualized area.
> 
> That means when we call the hypervisor to replace the GMFN
> with a XENMAPSPACE_grant_table type, we need to lookup the
> old PFN for every iteration instead of assuming a flat
> contiguous PFN allocation.
> 
> Lastly, we only use v1 for grants. This is because PVHVM
> is not able to use v2 due to no XENMEM_add_to_physmap
> calls on the error status page (see commit
> 69e8f430e243d657c2053f097efebc2e2cd559f0
>  xen/granttable: Disable grant v2 for HVM domains.)
> 
> Until that is implemented this workaround has to
> be in place.
> 
> Also per suggestions by Stefano utilize the PVHVM paths
> as they share common functionality.
> 
> v2 of this patch moves most of the PVH code out in the
> arch/x86/xen/grant-table driver and touches only minimally
> the generic driver.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/grant-table.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/xen/gntdev.c       |  2 +-
>  drivers/xen/grant-table.c  | 13 ++++++----
>  3 files changed, 73 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> index 3a5f55d..040e064 100644
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
> @@ -125,3 +125,67 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
>  	apply_to_page_range(&init_mm, (unsigned long)shared,
>  			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
>  }
> +#ifdef CONFIG_XEN_PVHVM
> +#include <xen/balloon.h>
> +#include <linux/slab.h>
> +static int __init xlated_setup_gnttab_pages(void)
> +{
> +	struct page **pages;
> +	xen_pfn_t *pfns;
> +	int rc, i;
> +	unsigned long nr_grant_frames = gnttab_max_grant_frames();
> +
> +	BUG_ON(nr_grant_frames == 0);
> +	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
> +	if (!pages)
> +		return -ENOMEM;
> +
> +	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
> +	if (!pfns) {
> +		kfree(pages);
> +		return -ENOMEM;
> +	}
> +	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
> +	if (rc) {
> +		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		kfree(pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +	for (i = 0; i < nr_grant_frames; i++)
> +		pfns[i] = page_to_pfn(pages[i]);
> +
> +	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
> +				    (void *)&xen_auto_xlat_grant_frames.vaddr);
> +
> +	kfree(pages);
> +	if (rc) {
> +		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		free_xenballooned_pages(nr_grant_frames, pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +
> +	xen_auto_xlat_grant_frames.pfn = pfns;
> +	xen_auto_xlat_grant_frames.count = nr_grant_frames;
> +
> +	return 0;
> +}

Unfortunately this way pfns is leaked. Can we safely free it or is it
reused at resume time?


> +static int __init xen_pvh_gnttab_setup(void)
> +{
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	if (!xen_pv_domain())
> +		return -ENODEV;
> +
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		return -ENODEV;
> +
> +	return xlated_setup_gnttab_pages();
> +}
> +core_initcall(xen_pvh_gnttab_setup); /* Call it _before_ __gnttab_init */
> +#endif
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..073b4a1 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -846,7 +846,7 @@ static int __init gntdev_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
>  
> -	use_ptemod = xen_pv_domain();
> +	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>  
>  	err = misc_register(&gntdev_miscdev);
>  	if (err != 0) {
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index b117fd6..2fa3a4c 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1098,7 +1098,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  	unsigned int nr_gframes = end_idx + 1;
>  	int rc;
>  
> -	if (xen_hvm_domain()) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> @@ -1174,7 +1174,7 @@ static void gnttab_request_version(void)
>  	int rc;
>  	struct gnttab_set_version gsv;
>  
> -	if (xen_hvm_domain())
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		gsv.version = 1;
>  	else
>  		gsv.version = 2;
> @@ -1210,8 +1210,11 @@ static int gnttab_setup(void)
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
>  	{
> -		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
> -					       PAGE_SIZE * max_nr_gframes);
> +		if (xen_hvm_domain()) {
> +			gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
> +						       PAGE_SIZE * max_nr_gframes);
> +		} else
> +			gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
>  					xen_auto_xlat_grant_frames.vaddr);
> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
>  	return gnttab_init();
>  }
>  
> -core_initcall(__gnttab_init);
> +core_initcall_sync(__gnttab_init);


 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> In PVH the shared grant frame is the PFN and not MFN,
> hence it's mapped via the same code path as HVM.
> 
> The allocation of the grant frame is done differently - we
> do not use the early platform-pci driver and have an
> ioremap area - instead we use balloon memory and stitch
> all of the non-contiguous pages in a virtualized area.
> 
> That means when we call the hypervisor to replace the GMFN
> with a XENMAPSPACE_grant_table type, we need to lookup the
> old PFN for every iteration instead of assuming a flat
> contiguous PFN allocation.
> 
> Lastly, we only use v1 for grants. This is because PVHVM
> is not able to use v2 due to no XENMEM_add_to_physmap
> calls on the error status page (see commit
> 69e8f430e243d657c2053f097efebc2e2cd559f0
>  xen/granttable: Disable grant v2 for HVM domains.)
> 
> Until that is implemented this workaround has to
> be in place.
> 
> Also, per suggestions by Stefano, utilize the PVHVM paths
> as they share common functionality.
> 
> v2 of this patch moves most of the PVH code out in the
> arch/x86/xen/grant-table driver and touches only minimally
> the generic driver.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/grant-table.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/xen/gntdev.c       |  2 +-
>  drivers/xen/grant-table.c  | 13 ++++++----
>  3 files changed, 73 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> index 3a5f55d..040e064 100644
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
> @@ -125,3 +125,67 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
>  	apply_to_page_range(&init_mm, (unsigned long)shared,
>  			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
>  }
> +#ifdef CONFIG_XEN_PVHVM
> +#include <xen/balloon.h>
> +#include <linux/slab.h>
> +static int __init xlated_setup_gnttab_pages(void)
> +{
> +	struct page **pages;
> +	xen_pfn_t *pfns;
> +	int rc, i;
> +	unsigned long nr_grant_frames = gnttab_max_grant_frames();
> +
> +	BUG_ON(nr_grant_frames == 0);
> +	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
> +	if (!pages)
> +		return -ENOMEM;
> +
> +	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
> +	if (!pfns) {
> +		kfree(pages);
> +		return -ENOMEM;
> +	}
> +	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
> +	if (rc) {
> +		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		kfree(pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +	for (i = 0; i < nr_grant_frames; i++)
> +		pfns[i] = page_to_pfn(pages[i]);
> +
> +	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
> +				    (void *)&xen_auto_xlat_grant_frames.vaddr);
> +
> +	kfree(pages);
> +	if (rc) {
> +		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		free_xenballooned_pages(nr_grant_frames, pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +
> +	xen_auto_xlat_grant_frames.pfn = pfns;
> +	xen_auto_xlat_grant_frames.count = nr_grant_frames;
> +
> +	return 0;
> +}

Unfortunately this way pfns is leaked. Can we safely free it or is it
reused at resume time?


> +static int __init xen_pvh_gnttab_setup(void)
> +{
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	if (!xen_pv_domain())
> +		return -ENODEV;
> +
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		return -ENODEV;
> +
> +	return xlated_setup_gnttab_pages();
> +}
> +core_initcall(xen_pvh_gnttab_setup); /* Call it _before_ __gnttab_init */
> +#endif
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..073b4a1 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -846,7 +846,7 @@ static int __init gntdev_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
>  
> -	use_ptemod = xen_pv_domain();
> +	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>  
>  	err = misc_register(&gntdev_miscdev);
>  	if (err != 0) {
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index b117fd6..2fa3a4c 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1098,7 +1098,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  	unsigned int nr_gframes = end_idx + 1;
>  	int rc;
>  
> -	if (xen_hvm_domain()) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> @@ -1174,7 +1174,7 @@ static void gnttab_request_version(void)
>  	int rc;
>  	struct gnttab_set_version gsv;
>  
> -	if (xen_hvm_domain())
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		gsv.version = 1;
>  	else
>  		gsv.version = 2;
> @@ -1210,8 +1210,11 @@ static int gnttab_setup(void)
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
>  	{
> -		gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
> -					       PAGE_SIZE * max_nr_gframes);
> +		if (xen_hvm_domain()) {
> +			gnttab_shared.addr = xen_remap(xen_auto_xlat_grant_frames.vaddr,
> +						       PAGE_SIZE * max_nr_gframes);
> +		} else
> +			gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
>  					xen_auto_xlat_grant_frames.vaddr);
> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
>  	return gnttab_init();
>  }
>  
> -core_initcall(__gnttab_init);
> +core_initcall_sync(__gnttab_init);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 17:37:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 17:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz8gL-0002Jk-7j; Fri, 03 Jan 2014 17:37:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz8gJ-0002Jf-4c
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 17:37:11 +0000
Received: from [193.109.254.147:14160] by server-10.bemta-14.messagelabs.com
	id C3/0E-20752-645F6C25; Fri, 03 Jan 2014 17:37:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388770624!8689457!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20661 invoked from network); 3 Jan 2014 17:37:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 17:37:09 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03HZx94003918
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 17:36:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03HZwXX013690
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 17:35:58 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03HZv86020418; Fri, 3 Jan 2014 17:35:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 09:35:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7D3831BFB02; Fri,  3 Jan 2014 12:35:55 -0500 (EST)
Date: Fri, 3 Jan 2014 12:35:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140103173555.GF27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
	<20140102183221.GD3021@pegasus.dumpdata.com>
	<20140102173438.40612127@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140102173438.40612127@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
	PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 05:34:38PM -0800, Mukesh Rathor wrote:
> On Thu, 2 Jan 2014 13:32:21 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
> > > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > 
> > > > In the bootup code for PVH we can trap cpuid via vmexit, so don't
> > > > need to use emulated prefix call. We also check for vector
> > > > callback early on, as it is a required feature. PVH also runs at
> > > > default kernel IOPL.
> > > > 
> > > > Finally, pure PV settings are moved to a separate function that
> > > > are only called for pure PV, ie, pv with pvmmu. They are also
> > > > #ifdef with CONFIG_XEN_PVMMU.
> > > [...]
> > > > @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax,
> > > > unsigned int *bx, break;
> > > >  	}
> > > >  
> > > > -	asm(XEN_EMULATE_PREFIX "cpuid"
> > > > -		: "=a" (*ax),
> > > > -		  "=b" (*bx),
> > > > -		  "=c" (*cx),
> > > > -		  "=d" (*dx)
> > > > -		: "0" (*ax), "2" (*cx));
> > > > +	if (xen_pvh_domain())
> > > > +		native_cpuid(ax, bx, cx, dx);
> > > > +	else
> > > > +		asm(XEN_EMULATE_PREFIX "cpuid"
> > > > +			: "=a" (*ax),
> > > > +			"=b" (*bx),
> > > > +			"=c" (*cx),
> > > > +			"=d" (*dx)
> > > > +			: "0" (*ax), "2" (*cx));
> > > 
> > > For this one-off cpuid call it seems preferable to me to use the
> > > emulate prefix rather than diverge from PV.
> > 
> > This was before the PV cpuid was deemed OK to be used on PVH.
> > Will rip this out to use the same version.
> 
> What's wrong with using native cpuid? That is one of the benefits that
> cpuid can be trapped via vmexit, and also there is talk of making PV
> cpuid trap obsolete in the future. I suggest leaving it native.

I chatted with David, Andrew and Roger on IRC about this. I like the
idea of using xen_cpuid because:
 1) It filters some of the CPUID flags that guests should not use, such
    as 'aperfmperf', 'x2apic' and 'xsave', and decides whether the
    MWAIT_LEAF should be exposed (so that the ACPI AML code can call the
    right initialization code to use the extended C3 states instead of
    the legacy IOPORT ones). All of that is in xen_cpuid.
   
 2) It works, while we can concentrate on making 1) work in the
    hypervisor/toolstack.

Meaning that the future way would be to use the native cpuid and have
the hypervisor/toolstack set up the proper cpuid. In other words - use
xen_cpuid as is until that code for filtering is in the hypervisor.


Except that PVH does not work with the PV cpuid at all. I get a triple fault.
The instruction it fails at is the 'XEN_EMULATE_PREFIX'.

Mukesh, can you point me to the patch where the PV cpuid functionality
is enabled?

Anyhow, as it stands, I will just use the native cpuid.

> 
> Mukesh
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 03 17:51:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 17:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz8tj-000367-AM; Fri, 03 Jan 2014 17:51:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz8tg-000362-Jr
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 17:51:00 +0000
Received: from [193.109.254.147:32154] by server-5.bemta-14.messagelabs.com id
	21/62-03510-388F6C25; Fri, 03 Jan 2014 17:50:59 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388771457!8698440!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8158 invoked from network); 3 Jan 2014 17:50:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 17:50:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="87420133"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 17:50:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 12:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz8tc-0001cc-Ih;
	Fri, 03 Jan 2014 17:50:56 +0000
Date: Fri, 3 Jan 2014 17:50:08 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Yinghai Lu <yinghai@kernel.org>
In-Reply-To: <1388707565-16535-17-git-send-email-yinghai@kernel.org>
Message-ID: <alpine.DEB.2.02.1401031749340.8667@kaball.uk.xensource.com>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
	<1388707565-16535-17-git-send-email-yinghai@kernel.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Tony Luck <tony.luck@intel.com>, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, "Rafael J. Wysocki" <rjw@sisk.pl>,
	linux-acpi@vger.kernel.org, xen-devel@lists.xensource.com, "H.
	Peter Anvin" <hpa@zytor.com>, Bjorn Helgaas <bhelgaas@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH v5 16/33] xen,
 irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014, Yinghai Lu wrote:
> To make x86 irq allocation the same for the boot path and the ioapic
> hot-add path, we pre-reserve irqs for all GSIs first.
> We have to use alloc_reserved here, otherwise irq_alloc_desc_at will fail
> because the bit is already marked as pre-reserved in the irq bitmaps.
> 
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: xen-devel@lists.xensource.com
> ---
>  drivers/xen/events.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 4035e83..020cd77 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
>  	/* Legacy IRQ descriptors are already allocated by the arch. */
>  	if (gsi < NR_IRQS_LEGACY)
>  		irq = gsi;
> -	else
> -		irq = irq_alloc_desc_at(gsi, -1);
> +	else {
> +		/* for x86, irq already get reserved for gsi */
> +		irq = irq_alloc_reserved_desc_at(gsi, -1);
> +		if (irq < 0)
> +			irq = irq_alloc_desc_at(gsi, -1);
> +	}

This is common code. On ARM I get:

drivers/xen/events.c: In function 'xen_allocate_irq_gsi':
drivers/xen/events.c:513:3: error: implicit declaration of function 'irq_alloc_reserved_desc_at' [-Werror=implicit-function-declaration]
   irq = irq_alloc_reserved_desc_at(gsi, -1);
   ^
cc1: some warnings being treated as errors


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 03 18:01:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:01:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz93U-0003bl-GJ; Fri, 03 Jan 2014 18:01:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz93T-0003bd-Ok
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:01:07 +0000
Received: from [193.109.254.147:40058] by server-8.bemta-14.messagelabs.com id
	F8/D1-30921-2EAF6C25; Fri, 03 Jan 2014 18:01:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388772064!6452889!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10416 invoked from network); 3 Jan 2014 18:01:06 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:01:06 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03HxpRR026631
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 17:59:52 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03HxoiW009734
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 17:59:51 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Hxot4009727; Fri, 3 Jan 2014 17:59:50 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 09:59:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 35D631BFB02; Fri,  3 Jan 2014 12:59:49 -0500 (EST)
Date: Fri, 3 Jan 2014 12:59:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103175949.GA28551@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-18-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031610470.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031610470.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v12 17/18] xen/pvh/arm/arm64: Disable PV
 code that does not work with PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:22:01PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > As we do not yet have a mechanism for that.
> > 
> > This also impacts the ARM/ARM64 code (which does not have
> > hotplug support yet).
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  drivers/xen/cpu_hotplug.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
> > index cc6513a..5f80802 100644
> > --- a/drivers/xen/cpu_hotplug.c
> > +++ b/drivers/xen/cpu_hotplug.c
> > @@ -4,6 +4,7 @@
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > +#include <xen/features.h>
> >  
> >  #include <asm/xen/hypervisor.h>
> >  #include <asm/cpu.h>
> > @@ -102,7 +103,8 @@ static int __init setup_vcpu_hotplug_event(void)
> >  	static struct notifier_block xsn_cpu = {
> >  		.notifier_call = setup_cpu_watcher };
> >  
> > -	if (!xen_pv_domain())
> > +	/* PVH/ARM/ARM64 TBD/FIXME: future work */
> > +	if (!xen_pv_domain() || xen_feature(XENFEAT_auto_translated_physmap))
> >  		return -ENODEV;
> >  
> >  	register_xenstore_notifier(&xsn_cpu);
> 
> Sorry for being a bit obnoxious but I was thinking that using a
> xen_feature(XENFEAT_auto_translated_physmap) check is conceptually
> wrong, because cpu hotplug and nested paging are orthogonal.

Yeah, you should be sorry :-) (Just joking - appreciate your
input and review)
> 
> Given that we most probably want to follow the PV path for cpu_hotplug
> (that is using drivers/xen/cpu_hotplug.c), is there actually a problem
> with building and initializing it on PVH guests?

It hasn't been tested..

> If it works as it is, I would be tempted to leave it for now.

..until now and it actually looks to work.

> 
> Otherwise the patch is OK and you can add my Acked-by.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:11:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9Dk-0003vt-NY; Fri, 03 Jan 2014 18:11:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9Dj-0003vm-Jv
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:11:43 +0000
Received: from [85.158.137.68:8875] by server-2.bemta-3.messagelabs.com id
	BF/8F-17329-E5DF6C25; Fri, 03 Jan 2014 18:11:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388772700!3468004!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29236 invoked from network); 3 Jan 2014 18:11:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:11:42 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IAajj004344
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:10:36 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IAYWC003865
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:10:35 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IAYlh013811; Fri, 3 Jan 2014 18:10:34 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:10:33 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BD1661BFB02; Fri,  3 Jan 2014 13:10:32 -0500 (EST)
Date: Fri, 3 Jan 2014 13:10:32 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103181032.GB28551@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-12-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031633070.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031633070.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v12 11/18] xen/pvh: Piggyback on PVHVM for
 event channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:34:18PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > PVH is a PV guest with a twist - there are certain things
> > that work in it like HVM and some like PV. There is
> > a similar mode - PVHVM where we run in HVM mode with
> > PV code enabled - and this patch explores that.
> > 
> > The most notable PV interfaces are the XenBus and event channels.
> > 
> > We will piggyback on how the event channel mechanism is
> > used in PVHVM - that is we want the normal native IRQ mechanism
> > and we will install a vector (hvm callback) for which we
> > will call the event channel mechanism.
> > 
> > This means that from a pvops perspective, we can use
> > native_irq_ops instead of the Xen PV-specific ones. In the
> > future we could also support pirq_eoi_map, but that is
> > a feature request that can be shared with PVHVM.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/irq.c   |  5 ++++-
> >  drivers/xen/events.c | 16 ++++++++++------
> >  2 files changed, 14 insertions(+), 7 deletions(-)
> > 
> > diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> > index 0da7f86..76ca326 100644
> > --- a/arch/x86/xen/irq.c
> > +++ b/arch/x86/xen/irq.c
> > @@ -5,6 +5,7 @@
> >  #include <xen/interface/xen.h>
> >  #include <xen/interface/sched.h>
> >  #include <xen/interface/vcpu.h>
> > +#include <xen/features.h>
> >  #include <xen/events.h>
> >  
> >  #include <asm/xen/hypercall.h>
> > @@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
> >  
> >  void __init xen_init_irq_ops(void)
> >  {
> > -	pv_irq_ops = xen_irq_ops;
> > +	/* For PVH we use default pv_irq_ops settings. */
> > +	if (!xen_feature(XENFEAT_hvm_callback_vector))
> > +		pv_irq_ops = xen_irq_ops;
> >  	x86_init.irqs.intr_init = xen_init_IRQ;
> >  }
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 4035e83..bf8fb29 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -1908,20 +1908,24 @@ void __init xen_init_IRQ(void)
> >  	pirq_needs_eoi = pirq_needs_eoi_flag;
> >  
> >  #ifdef CONFIG_X86
> > -	if (xen_hvm_domain()) {
> > +	if (xen_pv_domain()) {
> > +		irq_ctx_init(smp_processor_id());
> > +		if (xen_initial_domain())
> > +			pci_xen_initial_domain();
> > +	}
> > +	if (xen_feature(XENFEAT_hvm_callback_vector))
> >  		xen_callback_vector();
> > +
> > +	if (xen_hvm_domain()) {
> >  		native_init_IRQ();
> >  		/* pci_xen_hvm_init must be called after native_init_IRQ so that
> >  		 * __acpi_register_gsi can point at the right function */
> >  		pci_xen_hvm_init();
> > -	} else {
> > +	} else if (!xen_pvh_domain()) {
> > +		/* TODO: No PVH support for PIRQ EOI */
> >  		int rc;
> >  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
> >  
> > -		irq_ctx_init(smp_processor_id());
> > -		if (xen_initial_domain())
> > -			pci_xen_initial_domain();
> 
> We already have a mechanism to identify whether
> PHYSDEVOP_pirq_eoi_gmfn_v2 is available or not. Can't we just rely on
> that?

Yes, and this code has the right recovery mechanism to deal with that.

Thank you!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:13:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9Fh-00041t-8B; Fri, 03 Jan 2014 18:13:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz9Fe-00041e-JT
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:13:43 +0000
Received: from [85.158.137.68:25518] by server-15.bemta-3.messagelabs.com id
	BC/45-11556-5DDF6C25; Fri, 03 Jan 2014 18:13:41 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388772818!7090353!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7563 invoked from network); 3 Jan 2014 18:13:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="89616390"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 18:13:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:13:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz9FC-0001us-K8;
	Fri, 03 Jan 2014 18:13:14 +0000
Date: Fri, 3 Jan 2014 18:12:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of
 RAM as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:13:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9Fh-00041t-8B; Fri, 03 Jan 2014 18:13:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz9Fe-00041e-JT
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:13:43 +0000
Received: from [85.158.137.68:25518] by server-15.bemta-3.messagelabs.com id
	BC/45-11556-5DDF6C25; Fri, 03 Jan 2014 18:13:41 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388772818!7090353!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7563 invoked from network); 3 Jan 2014 18:13:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="89616390"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 18:13:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:13:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz9FC-0001us-K8;
	Fri, 03 Jan 2014 18:13:14 +0000
Date: Fri, 3 Jan 2014 18:12:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of
 RAM as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> PCI devices may have BARs located above the end of RAM so mark such
> frames as identity frames in the p2m (instead of the default of
> missing).
> 
> PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
> identity frames for the same reason.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>

Shouldn't this be the case only for dom0?


>  arch/x86/xen/p2m.c   |    2 +-
>  arch/x86/xen/setup.c |   10 ++++++++++
>  2 files changed, 11 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..a905355 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -481,7 +481,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
>  	unsigned topidx, mididx, idx;
>  
>  	if (unlikely(pfn >= MAX_P2M_PFN))
> -		return INVALID_P2M_ENTRY;
> +		return IDENTITY_FRAME(pfn);
>  
>  	topidx = p2m_top_index(pfn);
>  	mididx = p2m_mid_index(pfn);
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 68c054f..6d7798f 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -412,6 +412,16 @@ char * __init xen_memory_setup(void)
>  		max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);
>  		mem_end = PFN_PHYS(max_pfn);
>  	}
> +
> +	/*
> +	 * Set the rest as identity mapped, in case PCI BARs are
> +	 * located here.
> +	 *
> +	 * PFNs above MAX_P2M_PFN are considered identity mapped as
> +	 * well.
> +	 */
> +	set_phys_range_identity(max_pfn + 1, ~0ul);
> +

Wouldn't this increase the size of the P2M considerably?
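For reference, the IDENTITY_FRAME(pfn) value returned by the modified
get_phys_to_machine() is just a tag bit OR'd into the frame number, so PFNs
beyond MAX_P2M_PFN need no table storage at all. A minimal sketch of the
encoding (the constants mirror the 64-bit definitions in
arch/x86/include/asm/xen/page.h; the frame_number() helper is illustrative,
not a kernel function):

```c
#include <assert.h>

/* Tag bits OR'd into a frame number, mirroring the 64-bit definitions
 * in arch/x86/include/asm/xen/page.h. */
#define BITS_PER_LONG      64
#define FOREIGN_FRAME_BIT  (1UL << (BITS_PER_LONG - 1))
#define IDENTITY_FRAME_BIT (1UL << (BITS_PER_LONG - 2))
#define IDENTITY_FRAME(m)  ((m) | IDENTITY_FRAME_BIT)

/* Recover the raw frame number by stripping the tag bits. */
static inline unsigned long frame_number(unsigned long entry)
{
	return entry & ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
}
```

The tag survives round trips, so callers can both detect an identity entry
and recover the PFN from it without any additional lookup.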


>  	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
>  	 * factor the base size.  On non-highmem systems, the base
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:14:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9G3-00044g-LQ; Fri, 03 Jan 2014 18:14:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz9G3-00044Y-51
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:14:07 +0000
Received: from [85.158.139.211:55358] by server-10.bemta-5.messagelabs.com id
	A5/48-01405-EEDF6C25; Fri, 03 Jan 2014 18:14:06 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388772844!7717006!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4922 invoked from network); 3 Jan 2014 18:14:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:14:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="87425284"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 18:14:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:14:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz9Fz-0001vZ-Em;
	Fri, 03 Jan 2014 18:14:03 +0000
Date: Fri, 3 Jan 2014 18:13:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <1388767522-11768-3-git-send-email-david.vrabel@citrix.com>
Message-ID: <alpine.DEB.2.02.1401031806220.8667@kaball.uk.xensource.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-3-git-send-email-david.vrabel@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/xen: make mfn_to_pfn() work with
 MFNs that are 1:1 in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The _PAGE_IOMAP PTE flag is used to indicate that the PTE contains an
> MFN that is an identity frame (1:1) in the p2m.  This is so the
> correct conversion of MFN to PFN can be done when reading a PTE.
> 
> If mfn_to_pfn() returned the correct PFN instead, the _PAGE_IOMAP flag
> would not be required and could be removed.
> 
> In mfn_to_pfn() the PFN found in the M2P is checked in P2M.  If the
> two MFNs differ then the MFN is one of three possibilities:
> 
>   a) it's a foreign MFN with an m2p override.
> 
>   b) it's a foreign MFN with /no/ m2p override.
> 
>   c) it's an identity MFN.
> 
> It is not permitted to call mfn_to_pfn() on a foreign MFN without an
> override, as the resulting PFN will be incorrect.  We can therefore
> assume case (c) and return PFN == MFN.
> 
> [ This patch should probably be split into the Xen-specific parts and
> an x86 patch to remove _PAGE_IOMAP. ]
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/include/asm/pgtable_types.h |   12 +++---
>  arch/x86/include/asm/xen/page.h      |   18 +++++++---
>  arch/x86/mm/init_32.c                |    2 +-
>  arch/x86/mm/init_64.c                |    2 +-
>  arch/x86/pci/i386.c                  |    2 -
>  arch/x86/xen/enlighten.c             |    2 -
>  arch/x86/xen/mmu.c                   |   58 +++++-----------------------------
>  7 files changed, 28 insertions(+), 68 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0ecac25..0b12657 100644
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..eb11963 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -112,11 +112,18 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  	pfn = mfn_to_pfn_no_overrides(mfn);
>  	if (get_phys_to_machine(pfn) != mfn) {
>  		/*
> -		 * If this appears to be a foreign mfn (because the pfn
> -		 * doesn't map back to the mfn), then check the local override
> -		 * table to see if there's a better pfn to use.
> +		 * This is either:
>  		 *
> -		 * m2p_find_override_pfn returns ~0 if it doesn't find anything.
> +		 * a) a foreign MFN with an override.
> +		 *
> +		 * b) a foreign MFN without an override.
> +		 *
> +		 * c) an identity MFN that is not in the p2m.

You have missed a case here:

           d) a local MFN with an override

think of blkfront and blkback in the same domain.
However I don't think this case would be affected by the patch.


> +		 * For (a), look in the m2p overrides.  For (b) and
> +		 * (c) assume an identity MFN since mfn_to_pfn() will
> +		 * only be called on foreign MFNs iff they have
> +		 * overrides.
>  		 */
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
> @@ -126,8 +133,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
>  	 */
> -	if (pfn == ~0 &&
> -			get_phys_to_machine(mfn) == IDENTITY_FRAME(mfn))
> +	if (pfn == ~0)
>  		pfn = mfn;
>  
>  	return pfn;
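
A standalone model of the fall-through logic in the hunk above may help: if
the m2p answer does not round-trip through the p2m, consult the overrides,
and if those also miss, assume identity. This is a sketch only, with toy
arrays standing in for the real p2m/m2p tables, not the kernel code:

```c
#include <assert.h>

#define TOY_MAX 16
#define INVALID (~0UL)

/* Toy stand-ins for the real p2m / m2p / m2p-override tables. */
static unsigned long toy_p2m[TOY_MAX];      /* pfn -> mfn */
static unsigned long toy_m2p[TOY_MAX];      /* mfn -> pfn, ignoring overrides */
static unsigned long toy_override[TOY_MAX]; /* mfn -> pfn, INVALID if none */

static void toy_init(void)
{
	for (int i = 0; i < TOY_MAX; i++)
		toy_p2m[i] = toy_m2p[i] = toy_override[i] = INVALID;
}

/* Model of the simplified lookup: if the m2p answer does not round-trip
 * through the p2m, consult the override table; if that also misses,
 * fall back to identity (pfn == mfn), as the patch does. */
static unsigned long model_mfn_to_pfn(unsigned long mfn)
{
	unsigned long pfn = (mfn < TOY_MAX) ? toy_m2p[mfn] : INVALID;

	if (pfn >= TOY_MAX || toy_p2m[pfn] != mfn)	/* no round-trip */
		pfn = (mfn < TOY_MAX) ? toy_override[mfn] : INVALID;

	if (pfn == INVALID)				/* cases (b)/(c) */
		pfn = mfn;

	return pfn;
}
```

In this model an MFN with no entry anywhere comes back as itself, which is
exactly the identity fallback the patch introduces.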


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:16:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9IG-0004Q4-Qn; Fri, 03 Jan 2014 18:16:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1Vz9IF-0004Pk-93
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 18:16:23 +0000
Received: from [85.158.143.35:29468] by server-2.bemta-4.messagelabs.com id
	1B/20-11386-67EF6C25; Fri, 03 Jan 2014 18:16:22 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388772981!9518487!1
X-Originating-IP: [62.142.5.117]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTE3ID0+IDk1NDU1\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2337 invoked from network); 3 Jan 2014 18:16:21 -0000
Received: from emh07.mail.saunalahti.fi (HELO emh07.mail.saunalahti.fi)
	(62.142.5.117)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:16:21 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh07.mail.saunalahti.fi (Postfix) with ESMTP id 2BCB94093;
	Fri,  3 Jan 2014 20:16:19 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 7E14536C01F; Fri,  3 Jan 2014 20:16:19 +0200 (EET)
Date: Fri, 3 Jan 2014 20:16:19 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140103181619.GU2924@reaktio.net>
References: <511BA0D7.8070809@tiscali.it>
	<6035A0D088A63A46850C3988ED045A4B35671CE8@BITCOM1.int.sbss.com.au>
	<52BD862B.4080704@tiscali.it>
	<6035A0D088A63A46850C3988ED045A4B68566D03@BITCOM1.int.sbss.com.au>
	<52BE8CCD.1040701@tiscali.it>
	<6035A0D088A63A46850C3988ED045A4B685671E0@BITCOM1.int.sbss.com.au>
	<52BEBC6E.3000304@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52BEBC6E.3000304@citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: James Harper <james.harper@bendigoit.com.au>,
	xen-devel <xen-devel@lists.xensource.com>,
	"fantonifabio@tiscali.it" <fantonifabio@tiscali.it>
Subject: Re: [Xen-devel] GPLPV questions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Dec 28, 2013 at 11:56:30AM +0000, Andrew Cooper wrote:
> On 28/12/2013 09:49, James Harper wrote:
> >> Thanks for the reply.
> >> I don't know if it is the virtual or the hardware clock that needs to resync.
> >> When I do save/restore of Windows domUs (with GPLPV), on restore the
> >> domain users are unable to log in until the Windows time is updated.
> >> I also enabled NTP and tried to set a very short interval between NTP
> >> checks, but it still takes a long time to synchronize.
> >> I did a fast search on citrix pv and probably the time update is here:
> >> https://github.com/xenserver/win-
> >> xeniface/blob/master/src/win32stubagent/XService.cpp
> >> on finishSuspend function, there is this comment: /* We need to resync
> >> the clock when we recover from suspend/resume. */
> > Looks like citrix pv usermode code makes a WMI call into the driver to get the time from xen, and then sets the time back in the usermode code.
> >
> > Not as straightforward as I might have thought. I don't have a WMI interface but any mechanism of talking to the driver is fine. From a quick look I can't see how the driver gets the clock from Xen though.
> >
> > James
> 
> HVM guests get wallclock time from the shared info page, which awkwardly
> changes its exact location between 32-bit and 64-bit domains.
> 
> Up until recently, the Citrix Windows PV drivers still contained a
> hacked HVMPARAM, for which the set_hvmparam manually forced the shinfo
> size, and updated the wallclock.  The latching of the shinfo size was
> fixed a long time ago, but there was still an outstanding bug that when
> Qemu stepped the domain wallclock on resume, the domain didn't see the
> updated time for a minute or so.
> 
> At a first guess, I would say that
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=915a59f25c5eddd86bc2cae6389d0ed2ab87e69e
> should fix the problem.  To the best of my knowledge, this was the very
> last of the outstanding issues preventing our PV drivers running correctly on
> xen-unstable.
> 

master commit 915a59f25c5eddd86bc2cae6389d0ed2ab87e69e (x86/time: Update wallclock in shared info when altering domain time offset)
seems to be already backported to xen-4.3 branch, and it's included in Xen 4.3.1 release (but it's not yet in Xen 4.3.0).

Just for the sake of archives, if someone else is wondering about this..

-- Pasi
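
For the archives, the wallclock read Andrew describes is a seqlock-style
retry on the shared info page's version field. A minimal model of that
protocol follows; the wc_* field names follow xen/interface/xen.h, but this
is a sketch of the retry loop only (real guest code also needs memory
barriers between the reads, omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Field names follow the wc_* wallclock fields of struct shared_info
 * in xen/interface/xen.h; toy struct, not the real layout. */
struct toy_wallclock {
	volatile uint32_t wc_version;	/* odd while an update is in flight */
	volatile uint32_t wc_sec;
	volatile uint32_t wc_nsec;
};

/* Seqlock-style read: retry while an update is in progress (odd
 * version) or the version changed underneath us. */
static void read_wallclock(const struct toy_wallclock *wc,
			   uint32_t *sec, uint32_t *nsec)
{
	uint32_t v;

	do {
		v = wc->wc_version;
		*sec  = wc->wc_sec;
		*nsec = wc->wc_nsec;
	} while ((v & 1) || v != wc->wc_version);
}
```

This is why a hypervisor-side fix matters: if the hypervisor never bumps
the wallclock fields on resume, no amount of retrying in the guest sees
the stepped time.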


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:16:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9I9-0004Od-7w; Fri, 03 Jan 2014 18:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9I7-0004OS-Qe
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:16:16 +0000
Received: from [85.158.143.35:5538] by server-2.bemta-4.messagelabs.com id
	7B/10-11386-F6EF6C25; Fri, 03 Jan 2014 18:16:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388772973!9509769!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5887 invoked from network); 3 Jan 2014 18:16:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:16:14 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IF2qB009825
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:15:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IF078023035
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:15:01 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IF0ES020274; Fri, 3 Jan 2014 18:15:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:15:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 61D4A1BFB02; Fri,  3 Jan 2014 13:14:59 -0500 (EST)
Date: Fri, 3 Jan 2014 13:14:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103181459.GG27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
	<alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, boris.ostrovsky@oracle.com,
	David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > >>>>>  	return gnttab_init();
> > > >>>>>  }
> > > >>>>>  
> > > >>>>> -core_initcall(__gnttab_init);
> > > >>>>> +core_initcall_sync(__gnttab_init);
> > > >>>>
> > > >>>> Why has this become _sync?
> > > >>>
> > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > >>> at gnttab_init):
> > > >>
> > > >>
> > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > 
> > > > It has a clear ordering property.
> > > 
> > > This really isn't obvious to me.  Can you point to the docs/code that
> > > guarantee this?  I couldn't find it.
> > 
> > include/linux/init.h
> > > 
> > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > 
> > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > also used by the ARM code.
> > > > 
> > > > Stefano in his previous review mentioned he would like PVH specific
> > > > code in arch/x86:
> > > > 
> > > > https://lkml.org/lkml/2013/12/18/507
> > > 
> > > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> > 
> > Stefano, thoughts?
> 
> I think that you can safely move __gnttab_init to postcore_initcall if
> it works correctly for the PV and PVH cases, because HVM and ARM are
> unaffected by it.  In fact they don't initialize the grant table via
> __gnttab_init at all. See:

The 'xenbus_init' is called in postcore_initcall. I don't actually
know if it is OK to call that _before_ gnttab_init is called.

> 
> /* Delay grant-table initialization in the PV on HVM case */
> if (xen_hvm_domain())
> 	return 0;
> 
> at the beginning of __gnttab_init.
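
The ordering property referred to above comes from the initcall levels in
include/linux/init.h: core_initcall is level 1, core_initcall_sync is level
1s, postcore_initcall is level 2, and the kernel runs the levels strictly
in that order. A toy replay of that guarantee (the toy_* function names are
illustrative, not the real initcalls):

```c
#include <assert.h>
#include <string.h>

/* Toy model of the initcall levels from include/linux/init.h:
 * core_initcall -> "1", core_initcall_sync -> "1s",
 * postcore_initcall -> "2"; levels run strictly in that order. */
typedef void (*toy_initcall_t)(char *log);

struct toy_initcall {
	const char *level;
	toy_initcall_t fn;
};

static void toy_pvh_setup(char *log)   { strcat(log, "pvh_setup;"); }   /* "1"  */
static void toy_gnttab_init(char *log) { strcat(log, "gnttab_init;"); } /* "1s" */
static void toy_xenbus_init(char *log) { strcat(log, "xenbus_init;"); } /* "2"  */

static void toy_run_initcalls(const struct toy_initcall *calls, int n,
			      char *log)
{
	static const char *order[] = { "1", "1s", "2" };

	/* Run every registered call at level "1", then "1s", then "2",
	 * regardless of registration order. */
	for (unsigned int l = 0; l < sizeof(order) / sizeof(order[0]); l++)
		for (int i = 0; i < n; i++)
			if (strcmp(calls[i].level, order[l]) == 0)
				calls[i].fn(log);
}
```

Registration order is irrelevant within the model, which is the point of
the _sync suffix: everything at level 1 has completed before level 1s runs.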

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> On 28/12/2013 09:49, James Harper wrote:
> >> Thanks for the reply.
> >> I don't know if it is the virtual or the hardware clock that needs to
> >> be resynced.
> >> When I do a save/restore of Windows domUs (with GPLPV), on restore the
> >> users inside the domain are unable to log in until the Windows time has
> >> been updated.
> >> I also enabled NTP and tried to set a very short interval between NTP
> >> checks, but it still takes a long time to synchronize.
> >> I did a quick search of the Citrix PV drivers and the time update is
> >> probably here:
> >> https://github.com/xenserver/win-xeniface/blob/master/src/win32stubagent/XService.cpp
> >> in the finishSuspend function, there is this comment: /* We need to resync
> >> the clock when we recover from suspend/resume. */
> > Looks like the Citrix PV usermode code makes a WMI call into the driver to get the time from Xen, and then sets the time back in the usermode code.
> >
> > Not as straightforward as I might have thought. I don't have a WMI interface but any mechanism of talking to the driver is fine. From a quick look I can't see how the driver gets the clock from Xen though.
> >
> > James
> 
> HVM guests get wallclock time from the shared info page, which awkwardly
> changes its exact location between 32- and 64-bit domains.
> 
> Up until recently, the Citrix Windows PV drivers still contained a
> hacked HVMPARAM, whose set_hvmparam handler manually forced the shinfo
> size and updated the wallclock.  The latching of the shinfo size was
> fixed a long time ago, but there was still an outstanding bug: when
> Qemu stepped the domain wallclock on resume, the domain didn't see the
> updated time for a minute or so.
> 
> At a first guess, I would say that
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=915a59f25c5eddd86bc2cae6389d0ed2ab87e69e
> should fix the problem.  To the best of my knowledge, this was the very
> last of the outstanding issues preventing our PV drivers from running
> correctly on xen-unstable.
> 

master commit 915a59f25c5eddd86bc2cae6389d0ed2ab87e69e (x86/time: Update wallclock in shared info when altering domain time offset)
seems to have already been backported to the xen-4.3 branch, and it's included in the Xen 4.3.1 release (but not in Xen 4.3.0).

Just for the sake of the archives, in case someone else is wondering about this..

-- Pasi



From xen-devel-bounces@lists.xen.org Fri Jan 03 18:16:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9I9-0004Od-7w; Fri, 03 Jan 2014 18:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9I7-0004OS-Qe
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:16:16 +0000
Received: from [85.158.143.35:5538] by server-2.bemta-4.messagelabs.com id
	7B/10-11386-F6EF6C25; Fri, 03 Jan 2014 18:16:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388772973!9509769!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5887 invoked from network); 3 Jan 2014 18:16:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:16:14 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IF2qB009825
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:15:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IF078023035
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:15:01 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IF0ES020274; Fri, 3 Jan 2014 18:15:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:15:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 61D4A1BFB02; Fri,  3 Jan 2014 13:14:59 -0500 (EST)
Date: Fri, 3 Jan 2014 13:14:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103181459.GG27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
	<alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, boris.ostrovsky@oracle.com,
	David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > >>>>>  	return gnttab_init();
> > > >>>>>  }
> > > >>>>>  
> > > >>>>> -core_initcall(__gnttab_init);
> > > >>>>> +core_initcall_sync(__gnttab_init);
> > > >>>>
> > > >>>> Why has this become _sync?
> > > >>>
> > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > >>> at gnttab_init):
> > > >>
> > > >>
> > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > 
> > > > It has a clear ordering property.
> > > 
> > > This really isn't obvious to me.  Can you point to the docs/code that
> > > guarantee this?  I couldn't find it.
> > 
> > include/linux/init.h
> > > 
> > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > 
> > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > also used by the ARM code.
> > > > 
> > > > Stefano in his previous review mentioned he would like PVH specific
> > > > code in arch/x86:
> > > > 
> > > > https://lkml.org/lkml/2013/12/18/507
> > > 
> > > Call it xen_arch_gnttab_setup() and add a weak stub for other architectures?
> > 
> > Stefano, thoughts?
> 
> I think that you can safely move __gnttab_init to postcore_initcall if
> it works correctly for the PV and PVH cases, because HVM and ARM are
> unaffected by it.  In fact they don't initialize the grant table via
> __gnttab_init at all. See:

'xenbus_init' is called at postcore_initcall. I don't actually
know if it is OK to call that _before_ gnttab_init is called.

> 
> /* Delay grant-table initialization in the PV on HVM case */
> if (xen_hvm_domain())
> 	return 0;
> 
> at the beginning of __gnttab_init.


From xen-devel-bounces@lists.xen.org Fri Jan 03 18:21:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9ND-00054l-NE; Fri, 03 Jan 2014 18:21:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9NC-00054g-OL
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:21:31 +0000
Received: from [85.158.139.211:30135] by server-8.bemta-5.messagelabs.com id
	D5/0F-29838-9AFF6C25; Fri, 03 Jan 2014 18:21:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1388773287!7755918!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30916 invoked from network); 3 Jan 2014 18:21:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 18:21:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IKOAc017660
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:20:25 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IKOxK008983
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:20:24 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IKNUX025959; Fri, 3 Jan 2014 18:20:23 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:20:23 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8B51B1BFB02; Fri,  3 Jan 2014 13:20:22 -0500 (EST)
Date: Fri, 3 Jan 2014 13:20:22 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103182022.GH27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031722170.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031722170.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 05:26:39PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > In PVH the shared grant frame is a PFN and not an MFN,
> > hence it's mapped via the same code path as HVM.
> > 
> > The allocation of the grant frame is done differently - we
> > do not use the early platform-pci driver and have an
> > ioremap area - instead we use balloon memory and stitch
> > all of the non-contiguous pages into a virtual area.
> > 
> > That means when we call the hypervisor to replace the GMFN
> > with a XENMAPSPACE_grant_table type, we need to look up the
> > old PFN for every iteration instead of assuming a flat
> > contiguous PFN allocation.
> > 
> > Lastly, we only use v1 for grants. This is because PVHVM
> > is not able to use v2 due to no XENMEM_add_to_physmap
> > calls on the error status page (see commit
> > 69e8f430e243d657c2053f097efebc2e2cd559f0
> >  xen/granttable: Disable grant v2 for HVM domains.)
> > 
> > Until that is implemented this workaround has to
> > be in place.
> > 
> > Also per suggestions by Stefano utilize the PVHVM paths
> > as they share common functionality.
> > 
> > v2 of this patch moves most of the PVH code out into the
> > arch/x86/xen/grant-table driver and touches the generic
> > driver only minimally.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/grant-table.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++
> >  drivers/xen/gntdev.c       |  2 +-
> >  drivers/xen/grant-table.c  | 13 ++++++----
> >  3 files changed, 73 insertions(+), 6 deletions(-)
> > 
> > diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> > index 3a5f55d..040e064 100644
> > --- a/arch/x86/xen/grant-table.c
> > +++ b/arch/x86/xen/grant-table.c
> > @@ -125,3 +125,67 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
> >  	apply_to_page_range(&init_mm, (unsigned long)shared,
> >  			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
> >  }
> > +#ifdef CONFIG_XEN_PVHVM
> > +#include <xen/balloon.h>
> > +#include <linux/slab.h>
> > +static int __init xlated_setup_gnttab_pages(void)
> > +{
> > +	struct page **pages;
> > +	xen_pfn_t *pfns;
> > +	int rc, i;
> > +	unsigned long nr_grant_frames = gnttab_max_grant_frames();
> > +
> > +	BUG_ON(nr_grant_frames == 0);
> > +	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
> > +	if (!pages)
> > +		return -ENOMEM;
> > +
> > +	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
> > +	if (!pfns) {
> > +		kfree(pages);
> > +		return -ENOMEM;
> > +	}
> > +	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
> > +	if (rc) {
> > +		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
> > +			nr_grant_frames, rc);
> > +		kfree(pages);
> > +		kfree(pfns);
> > +		return rc;
> > +	}
> > +	for (i = 0; i < nr_grant_frames; i++)
> > +		pfns[i] = page_to_pfn(pages[i]);
> > +
> > +	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
> > +				    (void *)&xen_auto_xlat_grant_frames.vaddr);
> > +
> > +	if (rc) {
> > +		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
> > +			nr_grant_frames, rc);
> > +		free_xenballooned_pages(nr_grant_frames, pages);
> > +		kfree(pages);
> > +		kfree(pfns);
> > +		return rc;
> > +	}
> > +	kfree(pages);
> > +
> > +	xen_auto_xlat_grant_frames.pfn = pfns;
> > +	xen_auto_xlat_grant_frames.count = nr_grant_frames;
> > +
> > +	return 0;
> > +}
> 
> Unfortunately this way pfns is leaked. Can we safely free it or is it
> reused at resume time?

You mean you want PVH suspend and resume to work out of the box?!

HA! I hadn't even tested that yet.

How about we figure out the right thing to do when we get
to that point.

What actually happens during suspend/resume in an HVM guest? We just
need to call 'gnttab_setup', which calls 'gnttab_map' to do the
XENMAPSPACE_grant_table mapping on the PFNs, right? That should be OK,
and xen_auto_xlat_grant_frames.pfn is used during that.

The suspend path would use unmap_frames -> arch_gnttab_unmap, which
just clears the PTEs. There is no freeing of the memory that is
used as the backing store.


From xen-devel-bounces@lists.xen.org Fri Jan 03 18:28:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9Tb-0005Da-LH; Fri, 03 Jan 2014 18:28:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9Ta-0005DS-2V
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:28:06 +0000
Received: from [85.158.137.68:36424] by server-13.bemta-3.messagelabs.com id
	FC/44-28603-53107C25; Fri, 03 Jan 2014 18:28:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388773682!7138552!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24685 invoked from network); 3 Jan 2014 18:28:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:28:04 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IS041007119
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:28:01 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IS0bk010571
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:28:00 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IRxrj021522; Fri, 3 Jan 2014 18:27:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:27:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 155E71BFB02; Fri,  3 Jan 2014 13:27:55 -0500 (EST)
Date: Fri, 3 Jan 2014 13:27:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140103182755.GA28915@phenom.dumpdata.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-3-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388767522-11768-3-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/xen: make mfn_to_pfn() work with
 MFNs that are 1:1 in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:45:22PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The _PAGE_IOMAP PTE flag is used to indicate that the PTE contains an
> MFN that is an identity frame (1:1) in the p2m.  This is so the
> correct conversion of MFN to PFN can be done when reading a PTE.
> 
> If mfn_to_pfn() returned the correct PFN instead, the _PAGE_IOMAP flag
> would not be required and could be removed.
> 
> In mfn_to_pfn() the PFN found in the M2P is checked in P2M.  If the
> two MFNs differ then the MFN is one of three possibilities:
> 
>   a) it's a foreign MFN with an m2p override.
> 
>   b) it's a foreign MFN with /no/ m2p override.
> 
>   c) it's an identity MFN.
> 
> It is not permitted to call mfn_to_pfn() on a foreign MFN without an
> override, as the resulting PFN would be incorrect.  We can therefore
> assume case (c) and return PFN == MFN.
> 
> [ This patch should probably be split into the Xen-specific parts and
> an x86 patch to remove _PAGE_IOMAP. ]

Yeah :-)

> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/include/asm/pgtable_types.h |   12 +++---
>  arch/x86/include/asm/xen/page.h      |   18 +++++++---
>  arch/x86/mm/init_32.c                |    2 +-
>  arch/x86/mm/init_64.c                |    2 +-
>  arch/x86/pci/i386.c                  |    2 -
>  arch/x86/xen/enlighten.c             |    2 -
>  arch/x86/xen/mmu.c                   |   58 +++++-----------------------------
>  7 files changed, 28 insertions(+), 68 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0ecac25..0b12657 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -17,7 +17,7 @@
>  #define _PAGE_BIT_PAT		7	/* on 4KB pages */
>  #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
>  #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
> -#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
> +#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
>  #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
>  #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
>  #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
> @@ -41,7 +41,7 @@
>  #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
>  #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
>  #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
> -#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
> +#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
>  #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
>  #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
>  #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
> @@ -163,10 +163,10 @@
>  #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
>  #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
>  
> -#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
> -#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
> -#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
> -#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
> +#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
> +#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
> +#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
> +#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
>  
>  #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
>  #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..eb11963 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -112,11 +112,18 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  	pfn = mfn_to_pfn_no_overrides(mfn);
>  	if (get_phys_to_machine(pfn) != mfn) {
>  		/*
> -		 * If this appears to be a foreign mfn (because the pfn
> -		 * doesn't map back to the mfn), then check the local override
> -		 * table to see if there's a better pfn to use.
> +		 * This is either:
>  		 *
> -		 * m2p_find_override_pfn returns ~0 if it doesn't find anything.
> +		 * a) a foreign MFN with an override.
> +		 *
> +		 * b) a foreign MFN without an override.
> +		 *
> +		 * c) an identity MFN that is not in the p2m.
> +		 *
> +		 * For (a), look in the m2p overrides.  For (b) and
> +		 * (c) assume identity MFN since mfn_to_pfn() will
> +		 * only be called on foreign MFNs iff they have
> +		 * overrides.
>  		 */
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
> @@ -126,8 +133,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
>  	 */
> -	if (pfn == ~0 &&
> -			get_phys_to_machine(mfn) == IDENTITY_FRAME(mfn))
> +	if (pfn == ~0)
>  		pfn = mfn;
>  
>  	return pfn;
> diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
> index 4287f1f..9031593 100644
> --- a/arch/x86/mm/init_32.c
> +++ b/arch/x86/mm/init_32.c
> @@ -537,7 +537,7 @@ static void __init pagetable_init(void)
>  	permanent_kmaps_init(pgd_base);
>  }
>  
> -pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
> +pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
>  EXPORT_SYMBOL_GPL(__supported_pte_mask);
>  
>  /* user-defined highmem size */
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 104d56a..68bf948 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
>   * around without checking the pgd every time.
>   */
>  
> -pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
> +pteval_t __supported_pte_mask __read_mostly = ~0;
>  EXPORT_SYMBOL_GPL(__supported_pte_mask);
>  
>  int force_personality32;
> diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
> index db6b1ab..1f642d6 100644
> --- a/arch/x86/pci/i386.c
> +++ b/arch/x86/pci/i386.c
> @@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
>  		 */
>  		prot |= _PAGE_CACHE_UC_MINUS;
>  
> -	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
> -
>  	vma->vm_page_prot = __pgprot(prot);
>  
>  	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index fa6ade7..f9c2d71 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1458,8 +1458,6 @@ asmlinkage void __init xen_start_kernel(void)
>  #endif
>  		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
>  
> -	__supported_pte_mask |= _PAGE_IOMAP;
> -
>  	/*
>  	 * Prevent page tables from being allocated in highmem, even
>  	 * if CONFIG_HIGHPTE is enabled.
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..5fa77a1 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -368,11 +368,9 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>  	if (val & _PAGE_PRESENT) {
>  		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>  		unsigned long pfn = mfn_to_pfn(mfn);
> -
>  		pteval_t flags = val & PTE_FLAGS_MASK;
> -		if (unlikely(pfn == ~0))
> -			val = flags & ~_PAGE_PRESENT;
> -		else
> +
> +		if (likely(pfn != ~0))
>  			val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
>  	}
>  
> @@ -390,6 +388,7 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
>  			mfn = get_phys_to_machine(pfn);
>  		else
>  			mfn = pfn;
> +
>  		/*
>  		 * If there's no mfn for the pfn, then just create an
>  		 * empty non-present pte.  Unfortunately this loses
> @@ -399,38 +398,15 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
>  		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
>  			mfn = 0;
>  			flags = 0;
> -		} else {
> -			/*
> -			 * Paramount to do this test _after_ the
> -			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
> -			 * IDENTITY_FRAME_BIT resolves to true.
> -			 */
> -			mfn &= ~FOREIGN_FRAME_BIT;
> -			if (mfn & IDENTITY_FRAME_BIT) {
> -				mfn &= ~IDENTITY_FRAME_BIT;
> -				flags |= _PAGE_IOMAP;
> -			}
> -		}
> +			mfn = pfn;
> +		} else
> +			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
>  		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
>  	}
>  
>  	return val;
>  }
>  
> -static pteval_t iomap_pte(pteval_t val)
> -{
> -	if (val & _PAGE_PRESENT) {
> -		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
> -		pteval_t flags = val & PTE_FLAGS_MASK;
> -
> -		/* We assume the pte frame number is a MFN, so
> -		   just use it as-is. */
> -		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
> -	}
> -
> -	return val;
> -}
> -
>  static pteval_t xen_pte_val(pte_t pte)
>  {
>  	pteval_t pteval = pte.pte;
> @@ -441,9 +417,6 @@ static pteval_t xen_pte_val(pte_t pte)
>  		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
>  	}
>  #endif
> -	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
> -		return pteval;
> -
>  	return pte_mfn_to_pfn(pteval);
>  }
>  PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
> @@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
>  
>  static pte_t xen_make_pte(pteval_t pte)
>  {
> -	phys_addr_t addr = (pte & PTE_PFN_MASK);
>  #if 0
>  	/* If Linux is trying to set a WC pte, then map to the Xen WC.
>  	 * If _PAGE_PAT is set, then it probably means it is really
> @@ -496,19 +468,7 @@ static pte_t xen_make_pte(pteval_t pte)
>  			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
>  	}
>  #endif
> -	/*
> -	 * Unprivileged domains are allowed to do IOMAPpings for
> -	 * PCI passthrough, but not map ISA space.  The ISA
> -	 * mappings are just dummy local mappings to keep other
> -	 * parts of the kernel happy.
> -	 */
> -	if (unlikely(pte & _PAGE_IOMAP) &&
> -	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
> -		pte = iomap_pte(pte);
> -	} else {
> -		pte &= ~_PAGE_IOMAP;
> -		pte = pte_pfn_to_mfn(pte);
> -	}
> +	pte = pte_pfn_to_mfn(pte);
>  
>  	return native_make_pte(pte);
>  }
> @@ -2084,7 +2044,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>  
>  	default:
>  		/* By default, set_fixmap is used for hardware mappings */
> -		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
> +		pte = mfn_pte(phys, prot);
>  		break;
>  	}
>  
> @@ -2524,8 +2484,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return -EINVAL;
>  
> -	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
> -
>  	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
>  
>  	rmd.mfn = mfn;
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:30:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9Vj-0005cA-6y; Fri, 03 Jan 2014 18:30:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz9Vh-0005c2-FK
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:30:17 +0000
Received: from [193.109.254.147:37463] by server-2.bemta-14.messagelabs.com id
	64/0D-00361-8B107C25; Fri, 03 Jan 2014 18:30:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1388773814!8730233!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8953 invoked from network); 3 Jan 2014 18:30:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:30:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="87429104"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 18:30:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:30:13 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz9Vd-0002AC-1y;
	Fri, 03 Jan 2014 18:30:13 +0000
Date: Fri, 3 Jan 2014 18:29:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140103181459.GG27019@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031827570.8667@kaball.uk.xensource.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
	<alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
	<20140103181459.GG27019@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > > >>>>>  	return gnttab_init();
> > > > >>>>>  }
> > > > >>>>>  
> > > > >>>>> -core_initcall(__gnttab_init);
> > > > >>>>> +core_initcall_sync(__gnttab_init);
> > > > >>>>
> > > > >>>> Why has this become _sync?
> > > > >>>
> > > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > > >>> at gnttab_init):
> > > > >>
> > > > >>
> > > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > > 
> > > > > It has a clear ordering property.
> > > > 
> > > > This really isn't obvious to me.  Can you point to the docs/code that
> > > > guarantee this?  I couldn't find it.
> > > 
> > > include/linux/init.h
> > > > 
> > > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > > 
> > > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > > also used by the ARM code.
> > > > > 
> > > > > Stefano in his previous review mentioned he would like PVH specific
> > > > > code in arch/x86:
> > > > > 
> > > > > https://lkml.org/lkml/2013/12/18/507
> > > > 
> > > > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> > > 
> > > Stefano, thoughts?
> > 
> > I think that you can safely move __gnttab_init to postcore_initcall if
> > it works correctly for the PV and PVH cases, because HVM and ARM are
> > unaffected by it.  In fact they don't initialize the grant table via
> > __gnttab_init at all. See:
> 
> The 'xenbus_init' is called in postcore_initcall. I don't actually
> know if it is OK to call that _before_ gnttab_init is called.

No, xenbus_init needs to be called after gnttab_init; however, the
alphabetical order would enforce it.
Not that I would want to rely on it :-)
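
[ Editor's note: for reference, the ordering being discussed comes from
the initcall level definitions in include/linux/init.h.  The excerpt
below is abridged from memory of the 3.x tree and the exact form varies
by kernel version; the point is that levels run in order, a "_sync"
level runs after its plain counterpart, and postcore runs after both. ]

```c
/* include/linux/init.h (abridged): initcalls execute level by level. */
#define core_initcall(fn)		__define_initcall(fn, 1)
#define core_initcall_sync(fn)		__define_initcall(fn, 1s)
#define postcore_initcall(fn)		__define_initcall(fn, 2)
#define postcore_initcall_sync(fn)	__define_initcall(fn, 2s)
```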

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > > >>>>>  	return gnttab_init();
> > > > >>>>>  }
> > > > >>>>>  
> > > > >>>>> -core_initcall(__gnttab_init);
> > > > >>>>> +core_initcall_sync(__gnttab_init);
> > > > >>>>
> > > > >>>> Why has this become _sync?
> > > > >>>
> > > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > > >>> at gnttab_init):
> > > > >>
> > > > >>
> > > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > > 
> > > > > It has a clear ordering property.
> > > > 
> > > > This really isn't obvious to me.  Can you point to the docs/code that
> > > > guarantee this?  I couldn't find it.
> > > 
> > > include/linux/init.h
> > > > 
> > > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > > 
> > > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > > also used by the ARM code.
> > > > > 
> > > > > Stefano in his previous review mentioned he would like PVH specific
> > > > > code in arch/x86:
> > > > > 
> > > > > https://lkml.org/lkml/2013/12/18/507
> > > > 
> > > > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> > > 
> > > Stefano, thoughts?
> > 
> > I think that you can safely move __gnttab_init to postcore_initcall if
> > it works correctly for the PV and PVH cases, because HVM and ARM are
> > unaffected by it.  In fact they don't initialize the grant table via
> > __gnttab_init at all. See:
> 
> The 'xenbus_init' is called in postcore_initcall. I don't actually
> know if it is OK to call that _before_ gnttab_init is called.

No, xenbus_init needs to be called after gnttab_init; the alphabetical
link order does, however, enforce that.
Not that I would want to rely on it :-)


From xen-devel-bounces@lists.xen.org Fri Jan 03 18:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9al-0005pr-4b; Fri, 03 Jan 2014 18:35:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9aj-0005pe-Cu
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:35:29 +0000
Received: from [85.158.137.68:39414] by server-2.bemta-3.messagelabs.com id
	D0/1C-17329-0F207C25; Fri, 03 Jan 2014 18:35:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388774126!3470384!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19106 invoked from network); 3 Jan 2014 18:35:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:35:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IYN2O017911
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:34:24 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IYMi7009293
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:34:23 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IYMh9019240; Fri, 3 Jan 2014 18:34:22 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:34:21 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6FB411BFB02; Fri,  3 Jan 2014 13:34:20 -0500 (EST)
Date: Fri, 3 Jan 2014 13:34:20 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103183420.GB28915@phenom.dumpdata.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
	<alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of
 RAM as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 06:12:26PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, David Vrabel wrote:
> > From: David Vrabel <david.vrabel@citrix.com>
> > 
> > PCI devices may have BARs located above the end of RAM so mark such
> > frames as identity frames in the p2m (instead of the default of
> > missing).
> > 
> > PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
> > identity frames for the same reason.
> > 
> > Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> 
> Shouldn't this be the case only for dom0?

You can have PV guests with PCI passthrough that depend on identity
MFNs.

Which is interesting - because if we do not have the E820 set up
to reflect the memory correctly (and especially the MMIO regions), we
are screwed - unless the user boots this PV guest with memory right
up to the MMIO BAR.

The 'e820_hole=1' parameter helps in fabricating an E820 for PV
guests that looks like the host's - with the nice mix of E820_RSV
entries and MMIO BARs.

> 
> 
> >  arch/x86/xen/p2m.c   |    2 +-
> >  arch/x86/xen/setup.c |   10 ++++++++++
> >  2 files changed, 11 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > index 2ae8699..a905355 100644
> > --- a/arch/x86/xen/p2m.c
> > +++ b/arch/x86/xen/p2m.c
> > @@ -481,7 +481,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
> >  	unsigned topidx, mididx, idx;
> >  
> >  	if (unlikely(pfn >= MAX_P2M_PFN))
> > -		return INVALID_P2M_ENTRY;
> > +		return IDENTITY_FRAME(pfn);
> >  
> >  	topidx = p2m_top_index(pfn);
> >  	mididx = p2m_mid_index(pfn);
> > diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> > index 68c054f..6d7798f 100644
> > --- a/arch/x86/xen/setup.c
> > +++ b/arch/x86/xen/setup.c
> > @@ -412,6 +412,16 @@ char * __init xen_memory_setup(void)
> >  		max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);
> >  		mem_end = PFN_PHYS(max_pfn);
> >  	}
> > +
> > +	/*
> > +	 * Set the rest as identity mapped, in case PCI BARs are
> > +	 * located here.
> > +	 *
> > +	 * PFNs above MAX_P2M_PFN are considered identity mapped as
> > +	 * well.
> > +	 */
> > +	set_phys_range_identity(max_pfn + 1, ~0ul);
> > +
> 
> Wouldn't this increase the size of the P2M considerably?

Yes. The P2M[4->500] parts will end up being allocated (if you
boot with a 4GB guest/dom0). One thought I had was to have
p2m_top_missing be a 'P2M identity' type so you don't waste
that much memory - similarly to how the checks are done against
'p2m_missing' right now.


The neat thing with this patch is that regions _after_ the P2M
(so past the 500GB mark) are also covered - which is what the
original bug was about.
> 
> 
> >  	 * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
> >  	 * factor the base size.  On non-highmem systems, the base
> > -- 
> > 1.7.2.5
> > 
> > 


From xen-devel-bounces@lists.xen.org Fri Jan 03 18:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9ac-0005pK-NM; Fri, 03 Jan 2014 18:35:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9aa-0005pD-Ve
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:35:21 +0000
Received: from [85.158.143.35:41449] by server-3.bemta-4.messagelabs.com id
	B2/DB-32360-8E207C25; Fri, 03 Jan 2014 18:35:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388774118!9504488!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25191 invoked from network); 3 Jan 2014 18:35:19 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:35:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IZFtw009039
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:35:16 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IZFRw028538
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:35:15 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IZEa0011456; Fri, 3 Jan 2014 18:35:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:35:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5F0471BFB02; Fri,  3 Jan 2014 13:35:13 -0500 (EST)
Date: Fri, 3 Jan 2014 13:35:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140103183513.GC28915@phenom.dumpdata.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH RFC 0/2] Linux: possible fixes for mapping
 high MMIO regions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:45:20PM +0000, David Vrabel wrote:
> This is a possible fix for the problems with mapping high MMIO regions in
> certain cases (e.g., the RDMA drivers), as not all mappers were
> specifying _PAGE_IOMAP, which meant no valid MFN could be found and
> the resulting PTE would be marked as not present, causing subsequent
> faults.
> 
> It assumes that anything that isn't RAM (whether ballooned out or not)
> is an I/O region and thus should be 1:1 in the p2m.  Ballooned frames
> are still marked as missing in the p2m as before.
> 
> As a follow on, mfn_to_pfn() is (hopefully) extended to do the right
> thing with such an MFN.  This means the Xen-specific _PAGE_IOMAP PTE
> flag can be removed.

Woot!
> 
> This series is posted as an early RFC in the hope that it is an
> acceptable approach.  It has only seen the bare minimum of smoke
> testing (my test dom0 didn't explode). In particular, I've not
> actually tested it with a device with a high MMIO region.

I can do that in a couple of weeks.
> 
> David
> 



From xen-devel-bounces@lists.xen.org Fri Jan 03 18:38:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9e3-0006DE-QA; Fri, 03 Jan 2014 18:38:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1Vz9e2-0006Cz-T9
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:38:55 +0000
Received: from [85.158.139.211:27385] by server-10.bemta-5.messagelabs.com id
	E2/37-01405-EB307C25; Fri, 03 Jan 2014 18:38:54 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388774333!7718649!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25276 invoked from network); 3 Jan 2014 18:38:53 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:38:53 -0000
Received: by mail-lb0-f173.google.com with SMTP id z5so8451834lbh.32
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 10:38:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=a5GptYVIcd81pX41lDlejL/5J+dTuRHsVv8QWlGmMxA=;
	b=h7v5A8MxfWxrH+W9szqFnOmVJCVUJl8wNXg+zQ6e30OKjOlnmPk1Mhi4GwSUuq0rSy
	GSk4tLbnQ7gtQGR1Qmww0E07S24xsQZqF8sswxzEaYJITFsGvvETijujmK2hDacY6FCo
	Bm7WD3jNDNeTBCIc0XsWA9bjUvEyzJjhX2Uf3KEYsLHOWxXZzCcEESK7zTtxtj2kMWRu
	A5Dm3Tn25+CFehqY/vufJf8mZcQlt+KBbcLty+sOWmPLr/6v/YhuK8lcxkZcDXWfgt8a
	c7C9nQ09q7TQVzNCWNawwVNDHNCqYz6bWL6wZgJZWsL4A2WTxHD3QI8IIB7K1mp7t/YZ
	GvlQ==
X-Received: by 10.152.183.194 with SMTP id eo2mr191962lac.81.1388774332729;
	Fri, 03 Jan 2014 10:38:52 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id e10sm47614503laa.6.2014.01.03.10.38.50
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 10:38:50 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Fri, 3 Jan 2014 22:38:49 +0400
Message-Id: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I'm working on porting xen-4.2 to an illumos-based platform - DilOS - and have run into naming conflicts with:

xen/include/public/trace.h :

struct t_buf {}
struct t_info {}

I have a question - can we rename the structures to:
struct t_buf_xen {}
struct t_info_xen {}

Would that be acceptable for the Xen sources?
If yes, I can update the Xen sources with the new names and submit the changes for review.

It would help future Xen ports avoid changes to the public headers.

--
Best regards,
Igor Kozhukhov






From xen-devel-bounces@lists.xen.org Fri Jan 03 18:38:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9e3-0006DE-QA; Fri, 03 Jan 2014 18:38:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1Vz9e2-0006Cz-T9
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:38:55 +0000
Received: from [85.158.139.211:27385] by server-10.bemta-5.messagelabs.com id
	E2/37-01405-EB307C25; Fri, 03 Jan 2014 18:38:54 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388774333!7718649!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25276 invoked from network); 3 Jan 2014 18:38:53 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:38:53 -0000
Received: by mail-lb0-f173.google.com with SMTP id z5so8451834lbh.32
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 10:38:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=a5GptYVIcd81pX41lDlejL/5J+dTuRHsVv8QWlGmMxA=;
	b=h7v5A8MxfWxrH+W9szqFnOmVJCVUJl8wNXg+zQ6e30OKjOlnmPk1Mhi4GwSUuq0rSy
	GSk4tLbnQ7gtQGR1Qmww0E07S24xsQZqF8sswxzEaYJITFsGvvETijujmK2hDacY6FCo
	Bm7WD3jNDNeTBCIc0XsWA9bjUvEyzJjhX2Uf3KEYsLHOWxXZzCcEESK7zTtxtj2kMWRu
	A5Dm3Tn25+CFehqY/vufJf8mZcQlt+KBbcLty+sOWmPLr/6v/YhuK8lcxkZcDXWfgt8a
	c7C9nQ09q7TQVzNCWNawwVNDHNCqYz6bWL6wZgJZWsL4A2WTxHD3QI8IIB7K1mp7t/YZ
	GvlQ==
X-Received: by 10.152.183.194 with SMTP id eo2mr191962lac.81.1388774332729;
	Fri, 03 Jan 2014 10:38:52 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id e10sm47614503laa.6.2014.01.03.10.38.50
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 10:38:50 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Fri, 3 Jan 2014 22:38:49 +0400
Message-Id: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I'm working on porting xen-4.2 to an illumos-based platform, DilOS, and I am running into naming conflicts in:

xen/include/public/trace.h :

struct t_buf {}
struct t_info {}

I have a question: can we rename the structures to:
struct t_buf_xen {}
struct t_info_xen {}

Would that be acceptable for the Xen sources?
If yes, I can update the Xen sources with the new names and submit the changes for review.

It would also help future Xen ports avoid changing the public headers.

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9eU-0006Kd-7H; Fri, 03 Jan 2014 18:39:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vz9eS-0006KS-G8
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 18:39:20 +0000
Received: from [85.158.137.68:37308] by server-16.bemta-3.messagelabs.com id
	D2/82-26128-7D307C25; Fri, 03 Jan 2014 18:39:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388774357!7119611!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30898 invoked from network); 3 Jan 2014 18:39:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:39:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="87431420"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 18:39:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:39:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vz9eO-0002Gn-K2;
	Fri, 03 Jan 2014 18:39:16 +0000
Date: Fri, 3 Jan 2014 18:38:28 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20131231145001.GA3349@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031837100.8667@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1312171738550.8667@kaball.uk.xensource.com>
	<20131231145001.GA3349@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] allow xenfb initialization for hvm guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 17, 2013 at 05:53:13PM +0000, Stefano Stabellini wrote:
> > There is no reason why an HVM guest shouldn't be allowed to use xenfb.
> > As a matter of fact ARM guests, which are HVM from Linux's POV, can use
> > xenfb. Given that no Xen toolstack configures a xenfb backend for x86
> > HVM guests, they are not affected.
> 
> Could you reference the upstream git commit that enables this.

The toolstack can already set up a xenfb frontend/backend pair for ARM
guests. However, the QEMU xenpv machine still needs a few unapplied fixes
to build on ARM:

http://marc.info/?l=qemu-devel&m=138739419700837&w=2


> And also CC the maintainers of drivers/video/*
>
> And lastly lets CC also David and Boris on it.

OK

> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > diff --git a/drivers/video/xen-fbfront.c b/drivers/video/xen-fbfront.c
> > index cd005c2..02e1c01 100644
> > --- a/drivers/video/xen-fbfront.c
> > +++ b/drivers/video/xen-fbfront.c
> > @@ -692,7 +692,7 @@ static DEFINE_XENBUS_DRIVER(xenfb, ,
> >  
> >  static int __init xenfb_init(void)
> >  {
> > -	if (!xen_pv_domain())
> > +	if (!xen_domain())
> >  		return -ENODEV;
> >  
> >  	/* Nothing to do if running in dom0. */
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:41:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9gJ-0006ax-RE; Fri, 03 Jan 2014 18:41:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9gI-0006an-2L
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 18:41:14 +0000
Received: from [85.158.139.211:49949] by server-1.bemta-5.messagelabs.com id
	B9/69-21065-94407C25; Fri, 03 Jan 2014 18:41:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1388774470!7761636!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23470 invoked from network); 3 Jan 2014 18:41:12 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 18:41:12 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03IdwOP027890
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:39:59 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IdvYc007962
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:39:58 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03IdvAX007957; Fri, 3 Jan 2014 18:39:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:39:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 241811BFB02; Fri,  3 Jan 2014 13:39:56 -0500 (EST)
Date: Fri, 3 Jan 2014 13:39:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103183956.GA29058@phenom.dumpdata.com>
References: <1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
	<alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
	<20140103181459.GG27019@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401031827570.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031827570.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, boris.ostrovsky@oracle.com,
	David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 06:29:25PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> > > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > > > >>>>>  	return gnttab_init();
> > > > > >>>>>  }
> > > > > >>>>>  
> > > > > >>>>> -core_initcall(__gnttab_init);
> > > > > >>>>> +core_initcall_sync(__gnttab_init);
> > > > > >>>>
> > > > > >>>> Why has this become _sync?
> > > > > >>>
> > > > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > > > >>> at gnttab_init):
> > > > > >>
> > > > > >>
> > > > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > > > 
> > > > > > It has a clear ordering property.
> > > > > 
> > > > > This really isn't obvious to me.  Can you point to the docs/code the
> > > > > guarantee this?  I couldn't find it.
> > > > 
> > > > include/linux/init.h
> > > > > 
> > > > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > > > 
> > > > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > > > also used by the ARM code.
> > > > > > 
> > > > > > Stefano in his previous review mentioned he would like PVH specific
> > > > > > code in arch/x86:
> > > > > > 
> > > > > > https://lkml.org/lkml/2013/12/18/507
> > > > > 
> > > > > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> > > > 
> > > > Stefano, thoughts?
> > > 
> > > I think that you can safely move __gnttab_init to postcore_initcall if
> > > it works correctly for the PV and PVH cases, because HVM and ARM are
> > > unaffected by it.  In fact they don't initialize the grant table via
> > > __gnttab_init at all. See:
> > 
> > The 'xenbus_init' is called in postcore_initcall. I don't actually
> > know if it is OK to call that _before_ gnttab_init is called.
> 
> No, xenbus_init needs to be called after gnttab_init, however the
> alphabetical order would enforce it.
> Not that I would want to rely on it :-)

Exactly. Which is why I came back to the idea of just moving __gnttab_init
one level down within the '1' runlevel. That way I can guarantee this order
of operations:

xen_pvh_gnttab_setup
__gnttab_init
xenbus_init

That ordering holds even if somebody later comes up with a patch that
shuffles the order of functions called within a runlevel.

I gather you prefer this approach, then?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:42:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9hQ-0006iO-AQ; Fri, 03 Jan 2014 18:42:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1Vz9hP-0006iD-7v; Fri, 03 Jan 2014 18:42:23 +0000
Received: from [85.158.143.35:6990] by server-1.bemta-4.messagelabs.com id
	14/01-02132-E8407C25; Fri, 03 Jan 2014 18:42:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388774528!9427192!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDU1NTAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26826 invoked from network); 3 Jan 2014 18:42:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:42:09 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Ig1iT031748
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:42:02 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ig08e013586
From xen-devel-bounces@lists.xen.org Fri Jan 03 18:42:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9hQ-0006iO-AQ; Fri, 03 Jan 2014 18:42:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1Vz9hP-0006iD-7v; Fri, 03 Jan 2014 18:42:23 +0000
Received: from [85.158.143.35:6990] by server-1.bemta-4.messagelabs.com id
	14/01-02132-E8407C25; Fri, 03 Jan 2014 18:42:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1388774528!9427192!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDU1NTAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26826 invoked from network); 3 Jan 2014 18:42:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:42:09 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Ig1iT031748
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:42:02 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ig08e013586
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:42:01 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ig0co025563; Fri, 3 Jan 2014 18:42:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:42:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6E8191BFB02; Fri,  3 Jan 2014 13:41:54 -0500 (EST)
Date: Fri, 3 Jan 2014 13:41:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20140103184154.GA29283@phenom.dumpdata.com>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
	<556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Olaf Hering <olaf@aepfle.de>, Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 09:49:36AM -0500, Andres Lagar-Cavilla wrote:
> 
> On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:
> 
> > At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
> >> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
> >>> On Twitter, Florian Heigl sent out a few messages about issues with xenpaging:
> >>> 
> >>> ---
> >>> 19-Dec: Anyone successfully use #xen<https://twitter.com/search?q=%23xen&src=hash> #xenpaging<https://twitter.com/search?q=%23xenpaging&src=hash>? docs are at SLES manual, rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798<http://t.co/P36VdL84Et> dead feature or usable?
> >>> 
> >>> 22-Dec: @lars_kurth<https://twitter.com/lars_kurth> @RCPavlicek<https://twitter.com/RCPavlicek> Hey guys, I wrote down as much as I could https://piratenpad.de/p/Ik3lOBLniq1L5TEM <https://t.co/e5LQCUD9d0> (since I'm on holiday and not constantly online)
> >>> 
> >>> 22-Dec: Yay, tested #xen<https://twitter.com/search?q=%23xen&src=hash> Xenpaging (memory overcommit)
> >>> [x] largely untested
> >>> [x] docs outdated
> >>> [x] syntax+logic changed
> >>> [x] broken
> >>> ---
> >>> 
> >>> [I've taken the liberty of removing the colorful expletive from the final post]
> >>> 
> >>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
> >> 
> >> The Maintainers file implies otherwise. Let me CC the maintainers.
> > 
> > Andres really owns this code, so I'll punt to him for an official
> > answer, but:
> The part actively maintained is the hypervisor support for paging, and the interface.
> 
> tools/xenpaging is one way to consume that interface. It seems to have suffered from bitrot.

What is the other interface? Thanks!
> 
> So other than echoing Tim's points below, I'll add
> 
> - Some interesting ideas thrown around by Florian in his notes. Could lead to a robust discussion in xen-devel … if Florian is still interested.
> 
> - Perhaps the developers who are interested (myself included) should make a decent effort at improving the in-tree tools. There is the argument that for example KSM gives KVM users a sharing solution that just works, whether you like the results or not. In that vein xenpaging apparently doesn't cut it, nor the absence of a basic sharing tool.
> 
> One simple paging tool could be lazy restore. There is some interest out there, it would be relatively straightforward to codify.
> 
> Andres
> > 
> > - It's been listed as a 'tech preview' on the feature list since it went
> >  in.  http://wiki.xenproject.org/wiki/Xen_Release_Features says:
> >  "Preview, due to limited tools support. Hypervisor side in good shape."
> > 
> > - I can't say anything about SuSE's apparent support for it, except
> >  that ISTR Olaf worked at/for/with SuSE at the time.
> > 
> > - Patches would, of course, be welcome.
> > 
> > Tim.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:43:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9im-0006wq-GX; Fri, 03 Jan 2014 18:43:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vz9il-0006wZ-5d
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:43:47 +0000
Received: from [85.158.137.68:7711] by server-1.bemta-3.messagelabs.com id
	84/85-29598-2E407C25; Fri, 03 Jan 2014 18:43:46 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388774623!3471182!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2389 invoked from network); 3 Jan 2014 18:43:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 18:43:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="89623584"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 18:43:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 13:43:27 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1Vz9iR-0002K8-0Z;
	Fri, 03 Jan 2014 18:43:27 +0000
Message-ID: <52C704CE.5080003@citrix.com>
Date: Fri, 3 Jan 2014 18:43:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
In-Reply-To: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 18:38, Igor Kozhukhov wrote:
> Hello All,
>
> I'm working on porting xen-4.2 to an illumos-based platform - DilOS - and have problems with conflicts for:
>
> xen/include/public/trace.h :
>
> struct t_buf {}
> struct t_info {}
>
> I have a question - can we rename the structures to:
> struct t_buf_xen {}
> struct t_info_xen {}
>
> Would that be acceptable for the Xen sources?

You would break the compilation for every other user of trace.h, so no.

If it conflicts with something locally, you could do something like:
(completely untested)

#define t_buf t_buf_xen
#include <public/trace.h>
#undef t_buf

And use t_buf_xen in your own code.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 18:50:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 18:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vz9oh-0007dQ-Hf; Fri, 03 Jan 2014 18:49:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1Vz9of-0007dJ-9O
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 18:49:53 +0000
Received: from [85.158.139.211:7160] by server-12.bemta-5.messagelabs.com id
	AB/E8-30017-05607C25; Fri, 03 Jan 2014 18:49:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388774990!7720162!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24571 invoked from network); 3 Jan 2014 18:49:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 18:49:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03InmEo003132
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 18:49:49 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Injg0005341
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 18:49:46 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Injlk006841; Fri, 3 Jan 2014 18:49:45 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 10:49:45 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3DEB31BFB02; Fri,  3 Jan 2014 13:49:44 -0500 (EST)
Date: Fri, 3 Jan 2014 13:49:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Meng Xu <xumengpanda@gmail.com>
Message-ID: <20140103184944.GB29402@phenom.dumpdata.com>
References: <CAENZ-+=v4TyP4_VxoU6B_w90gVZPhMkk7Yv0mKPYBF-po0y6xA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAENZ-+=v4TyP4_VxoU6B_w90gVZPhMkk7Yv0mKPYBF-po0y6xA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about four kinds of pages in struct
 xc_dominfo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 31, 2013 at 04:43:55PM -0500, Meng Xu wrote:
> Hi,
> 
> I'm trying to print out the "current used" pages of each domU.
> 
> I'm reading the xen code and found the data structure xc_dominfo at file
> tools/libxc/xenctrl.h.
> 
> *I have a simple, maybe very naive question: *
> 1) What is the difference among *nr_outstanding_pages*, * nr_shared_pages*,
> and *nr_paged_pages*?

nr_outstanding_pages is usually zero. It is the number of pages
that still need to be allocated for the guest.

nr_shared_pages is the number of pages that are shared with other
guests or with the tools.

nr_paged_pages is the number of a VM's pages that have been paged
out to swap. You need to use xenpaging for that.

> 2) Could anyone point me to a place that I can find the document of the
> definition of the structures in xen code, so that I can find those
> definition by myself?

Um, I usually use 'git annotate' on the file; the commit description
gives me a good idea.

> 
> I'm new to the xen source and hope you can give me some guide to hack the
> xen code.
> 
> ========The structure is as below======================
> "tools/libxc/xenctrl.h"
> /*
>  * DOMAIN MANAGEMENT FUNCTIONS
>  */
> 
> typedef struct xc_dominfo {
>     uint32_t      domid;
>     uint32_t      ssidref;
>     unsigned int  dying:1, crashed:1, shutdown:1,
>                   paused:1, blocked:1, running:1,
>                   hvm:1, debugged:1;
>     unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
>     unsigned long nr_pages; /* current number, not maximum */
>     unsigned long nr_outstanding_pages;
>     unsigned long nr_shared_pages;
>     unsigned long nr_paged_pages;
>     unsigned long shared_info_frame;
>     uint64_t      cpu_time;
>     unsigned long max_memkb;
>     unsigned int  nr_online_vcpus;
>     unsigned int  max_vcpu_id;
>     xen_domain_handle_t handle;
>     unsigned int  cpupool;
> } xc_dominfo_t;
> 
> 
> Thank you very much for your time and help in these questions!
> Happy New Year!

You too!
> 
> Best,
> 
> Meng

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:03:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzA1R-0008CZ-0o; Fri, 03 Jan 2014 19:03:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzA1P-0008CU-OA
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 19:03:03 +0000
Received: from [193.109.254.147:53564] by server-6.bemta-14.messagelabs.com id
	F7/57-14958-76907C25; Fri, 03 Jan 2014 19:03:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388775780!8706127!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11313 invoked from network); 3 Jan 2014 19:03:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 19:03:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="89628673"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 19:03:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 14:02:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzA1L-0002cr-Be;
	Fri, 03 Jan 2014 19:02:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 3 Jan 2014 19:02:09 +0000
Message-ID: <1388775730-2984-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux-fbdev@vger.kernel.org, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, tomi.valkeinen@ti.com,
	boris.ostrovsky@oracle.com, plagnioj@jcrosoft.com
Subject: [Xen-devel] [PATCH v2] allow xenfb initialization for hvm guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is no reason why an HVM guest shouldn't be allowed to use xenfb.
As a matter of fact ARM guests, which are HVM from the Linux POV, can
use xenfb. Given that no Xen toolstack configures a xenfb backend for
x86 HVM guests, they are not affected.

Please note that at this time QEMU needs a few outstanding fixes to
provide xenfb on ARM:

http://marc.info/?l=qemu-devel&m=138739419700837&w=2

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: David Vrabel <david.vrabel@citrix.com>
CC: boris.ostrovsky@oracle.com
CC: plagnioj@jcrosoft.com
CC: tomi.valkeinen@ti.com
CC: linux-fbdev@vger.kernel.org
CC: konrad.wilk@oracle.com
---
 drivers/video/xen-fbfront.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/video/xen-fbfront.c b/drivers/video/xen-fbfront.c
index cd005c2..02e1c01 100644
--- a/drivers/video/xen-fbfront.c
+++ b/drivers/video/xen-fbfront.c
@@ -692,7 +692,7 @@ static DEFINE_XENBUS_DRIVER(xenfb, ,
 
 static int __init xenfb_init(void)
 {
-	if (!xen_pv_domain())
+	if (!xen_domain())
 		return -ENODEV;
 
 	/* Nothing to do if running in dom0. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:03:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzA2C-0008Fb-Eq; Fri, 03 Jan 2014 19:03:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzA2A-0008FL-Nz
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 19:03:50 +0000
Received: from [193.109.254.147:61396] by server-15.bemta-14.messagelabs.com
	id 00/D7-22186-69907C25; Fri, 03 Jan 2014 19:03:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1388775828!8717470!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3222 invoked from network); 3 Jan 2014 19:03:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 19:03:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,599,1384300800"; d="scan'208";a="87437441"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jan 2014 19:03:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 14:03:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzA27-0002dm-3W;
	Fri, 03 Jan 2014 19:03:47 +0000
Date: Fri, 3 Jan 2014 19:02:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140103183956.GA29058@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401031902460.8667@kaball.uk.xensource.com>
References: <1388550945-25499-16-git-send-email-konrad.wilk@oracle.com>
	<52C59483.5030607@citrix.com>
	<20140102185023.GG3021@pegasus.dumpdata.com>
	<52C6A4E5.3080904@citrix.com>
	<20140103144409.GC27019@phenom.dumpdata.com>
	<52C6DA3F.7070509@citrix.com>
	<20140103154820.GA20976@andromeda.dapyr.net>
	<alpine.DEB.2.02.1401031659230.8667@kaball.uk.xensource.com>
	<20140103181459.GG27019@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401031827570.8667@kaball.uk.xensource.com>
	<20140103183956.GA29058@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v12 15/18] xen/pvh: Piggyback on PVHVM for
 grant driver (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 06:29:25PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 03, 2014 at 05:20:54PM +0000, Stefano Stabellini wrote:
> > > > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > > > On Fri, Jan 03, 2014 at 03:41:51PM +0000, David Vrabel wrote:
> > > > > > On 03/01/14 14:44, Konrad Rzeszutek Wilk wrote:
> > > > > > > On Fri, Jan 03, 2014 at 11:54:13AM +0000, David Vrabel wrote:
> > > > > > >> On 02/01/14 18:50, Konrad Rzeszutek Wilk wrote:
> > > > > > >>> On Thu, Jan 02, 2014 at 04:32:03PM +0000, David Vrabel wrote:
> > > > > > >>>> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > > > >>>>> @@ -1320,4 +1323,4 @@ static int __gnttab_init(void)
> > > > > > >>>>>  	return gnttab_init();
> > > > > > >>>>>  }
> > > > > > >>>>>  
> > > > > > >>>>> -core_initcall(__gnttab_init);
> > > > > > >>>>> +core_initcall_sync(__gnttab_init);
> > > > > > >>>>
> > > > > > >>>> Why has this become _sync?
> > > > > > >>>
> > > > > > >>> It needs to run _after_ the xen_pvh_gnttab_setup has run (which is
> > > > > > >>> at gnttab_init):
> > > > > > >>
> > > > > > >>
> > > > > > >> The use of core_initcall_sync() doesn't imply any ordering to me.  Can't
> > > > > > > 
> > > > > > > It has a clear ordering property.
> > > > > > 
> > > > > > This really isn't obvious to me.  Can you point to the docs/code that
> > > > > > guarantee this?  I couldn't find it.
> > > > > 
> > > > > include/linux/init.h
> > > > > > 
> > > > > > >> you call xen_pvh_gnttab_setup() from within __gnttab_init() ?
> > > > > > > 
> > > > > > > No. That is due to the fact that __gnttab_init() is in drivers/xen and is
> > > > > > > also used by the ARM code.
> > > > > > > 
> > > > > > > Stefano in his previous review mentioned he would like PVH specific
> > > > > > > code in arch/x86:
> > > > > > > 
> > > > > > > https://lkml.org/lkml/2013/12/18/507
> > > > > > 
> > > > > > Call it xen_arch_gnttab_setup() and add weak stub for other architectures?
> > > > > 
> > > > > Stefano, thoughts?
> > > > 
> > > > I think that you can safely move __gnttab_init to postcore_initcall if
> > > > it works correctly for the PV and PVH cases, because HVM and ARM are
> > > > unaffected by it.  In fact they don't initialize the grant table via
> > > > __gnttab_init at all. See:
> > > 
> > > The 'xenbus_init' is called in postcore_initcall. I don't actually
> > > know if it is OK to call that _before_ gnttab_init is called.
> > 
> > No, xenbus_init needs to be called after gnttab_init, however the
> > alphabetical order would enforce it.
> > Not that I would want to rely on it :-)
> 
> Exactly. Which is why I came back to the idea of just moving __gnttab_init 
> one level down in the '1' runlevel. This way I can guarantee that this
> order of operations will be followed:
> 
> xen_pvh_gnttab_setup
> __gnttab_init
> xenbus_init
> 
> Without anybody coming up with a patch that would randomize the order
> of functions called within the runlevels.
> 
> I gather you prefer this approach then?

Yeah, seems sensible.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:18:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:18:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzAGM-0000Nr-Vd; Fri, 03 Jan 2014 19:18:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1VzAGL-0000Nm-7d
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 19:18:29 +0000
Received: from [85.158.139.211:22868] by server-15.bemta-5.messagelabs.com id
	E7/7B-08490-40D07C25; Fri, 03 Jan 2014 19:18:28 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388776705!7722070!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24249 invoked from network); 3 Jan 2014 19:18:27 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 19:18:27 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s03JIElV032276
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Fri, 3 Jan 2014 14:18:15 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s03JICUm032272;
	Fri, 3 Jan 2014 14:18:12 -0500
Date: Fri, 3 Jan 2014 15:18:12 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103191812.GA31994@andromeda.dapyr.net>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-15-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401031644120.8667@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401031644120.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 14/18] xen/grant: Implement an grant
	frame array struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:53:59PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> > and contained the virtual address of the grants. That was OK
> > for most architectures (PVHVM, ARM) where the grants are contiguous
> > in memory. That however is not the case for PVH - in which case
> > we would have to do a lookup of the PFN for each virtual address.
> > 
> > Instead of doing that, lets make it a structure which will contain
> > the array of PFNs, the virtual address and the count of said PFNs.
> > 
> > Also provide two generic functions, gnttab_setup_auto_xlat_frames and
> > gnttab_free_auto_xlat_frames, to populate said structure with
> > appropiate values for PVHVM and ARM.
>      ^appropriate
> 
> 
> > To round it off, change the name from 'xen_hvm_resume_frames' to
> > a more descriptive one - 'xen_auto_xlat_grant_frames'.
> > 
> > For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> > we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
> > 
> > Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/arm/xen/enlighten.c   |  9 +++++++--
> >  drivers/xen/grant-table.c  | 45 ++++++++++++++++++++++++++++++++++++++++-----
> >  drivers/xen/platform-pci.c | 10 +++++++---
> >  include/xen/grant_table.h  |  9 ++++++++-
> >  4 files changed, 62 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index 8550123..2162172 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
> >  	const char *version = NULL;
> >  	const char *xen_prefix = "xen,xen-";
> >  	struct resource res;
> > +	unsigned long grant_frames;
> >  
> >  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
> >  	if (!node) {
> > @@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
> >  	}
> >  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
> >  		return 0;
> > -	xen_hvm_resume_frames = res.start;
> > +	grant_frames = res.start;
> >  	xen_events_irq = irq_of_parse_and_map(node, 0);
> >  	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> > -			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
> > +			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
> >  	if (xen_vcpu_info == NULL)
> >  		return -ENOMEM;
> >  
> > +	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
> > +		free_percpu(xen_vcpu_info);
> > +		return -ENOMEM;
> > +	}
> >  	gnttab_init();
> >  	if (!xen_initial_domain())
> >  		xenbus_probe(NULL);
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index cc1b4fa..b117fd6 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -65,8 +65,8 @@ static unsigned int nr_grant_frames;
> >  static int gnttab_free_count;
> >  static grant_ref_t gnttab_free_head;
> >  static DEFINE_SPINLOCK(gnttab_list_lock);
> > -unsigned long xen_hvm_resume_frames;
> > -EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
> > +struct grant_frames xen_auto_xlat_grant_frames;
> > +EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);
> 
> it should be static now

Can't be. The arch/x86/xen/grant-table.c has to use it.

I can drop the 'EXPORT_SYMBOL_GPL' though.

> 
> 
> >  static union {
> >  	struct grant_entry_v1 *v1;
> > @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >  
> > +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> > +{
> > +	xen_pfn_t *pfn;
> > +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> > +	int i;
> > +
> > +	if (xen_auto_xlat_grant_frames.count)
> > +		return -EINVAL;
> > +
> > +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> > +	if (!pfn)
> > +		return -ENOMEM;
> > +	for (i = 0; i < max_nr_gframes; i++)
> > +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
> > +
> > +	xen_auto_xlat_grant_frames.vaddr = addr;
> > +	xen_auto_xlat_grant_frames.pfn = pfn;
> > +	xen_auto_xlat_grant_frames.count = max_nr_gframes;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
> > +
> > +void gnttab_free_auto_xlat_frames(void)
> > +{
> > +	if (!xen_auto_xlat_grant_frames.count)
> > +		return;
> > +	kfree(xen_auto_xlat_grant_frames.pfn);
> > +	xen_auto_xlat_grant_frames.pfn = NULL;
> > +	xen_auto_xlat_grant_frames.count = 0;
> > +	xen_auto_xlat_grant_frames.vaddr = 0;
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
> 
> I would leave vaddr alone in gnttab_setup_auto_xlat_frames and
> gnttab_free_auto_xlat_frames

Actually, I like David's suggestion. Patch coming out soon.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 04:53:59PM +0000, Stefano Stabellini wrote:
> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > The 'xen_hvm_resume_frames' used to be an 'unsigned long'
> > and contain the virtual address of the grants. That was OK
> > for most architectures (PVHVM, ARM) where the grants are contiguous
> > in memory. That, however, is not the case for PVH - in which case
> > we will have to do a lookup of the PFN for each virtual address.
> > 
> > Instead of doing that, let's make it a structure which will contain
> > the array of PFNs, the virtual address and the count of said PFNs.
> > 
> > Also provide generic functions: gnttab_setup_auto_xlat_frames and
> > gnttab_free_auto_xlat_frames to populate said structure with
> > appropiate values for PVHVM and ARM.
>      ^appropriate
> 
> 
> > To round it off, change the name from 'xen_hvm_resume_frames' to
> > a more descriptive one - 'xen_auto_xlat_grant_frames'.
> > 
> > For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
> > we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
> > 
> > Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/arm/xen/enlighten.c   |  9 +++++++--
> >  drivers/xen/grant-table.c  | 45 ++++++++++++++++++++++++++++++++++++++++-----
> >  drivers/xen/platform-pci.c | 10 +++++++---
> >  include/xen/grant_table.h  |  9 ++++++++-
> >  4 files changed, 62 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index 8550123..2162172 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
> >  	const char *version = NULL;
> >  	const char *xen_prefix = "xen,xen-";
> >  	struct resource res;
> > +	unsigned long grant_frames;
> >  
> >  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
> >  	if (!node) {
> > @@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
> >  	}
> >  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
> >  		return 0;
> > -	xen_hvm_resume_frames = res.start;
> > +	grant_frames = res.start;
> >  	xen_events_irq = irq_of_parse_and_map(node, 0);
> >  	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> > -			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
> > +			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
> >  	xen_domain_type = XEN_HVM_DOMAIN;
> >  
> >  	xen_setup_features();
> > @@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
> >  	if (xen_vcpu_info == NULL)
> >  		return -ENOMEM;
> >  
> > +	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
> > +		free_percpu(xen_vcpu_info);
> > +		return -ENOMEM;
> > +	}
> >  	gnttab_init();
> >  	if (!xen_initial_domain())
> >  		xenbus_probe(NULL);
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index cc1b4fa..b117fd6 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -65,8 +65,8 @@ static unsigned int nr_grant_frames;
> >  static int gnttab_free_count;
> >  static grant_ref_t gnttab_free_head;
> >  static DEFINE_SPINLOCK(gnttab_list_lock);
> > -unsigned long xen_hvm_resume_frames;
> > -EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
> > +struct grant_frames xen_auto_xlat_grant_frames;
> > +EXPORT_SYMBOL_GPL(xen_auto_xlat_grant_frames);
> 
> it should be static now

Can't be. The arch/x86/xen/grant-table.c has to use it.

I can drop the 'EXPORT_SYMBOL_GPL' though.

> 
> 
> >  static union {
> >  	struct grant_entry_v1 *v1;
> > @@ -838,6 +838,40 @@ unsigned int gnttab_max_grant_frames(void)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
> >  
> > +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> > +{
> > +	xen_pfn_t *pfn;
> > +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> > +	int i;
> > +
> > +	if (xen_auto_xlat_grant_frames.count)
> > +		return -EINVAL;
> > +
> > +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> > +	if (!pfn)
> > +		return -ENOMEM;
> > +	for (i = 0; i < max_nr_gframes; i++)
> > +		pfn[i] = PFN_DOWN(addr + (i * PAGE_SIZE));
> > +
> > +	xen_auto_xlat_grant_frames.vaddr = addr;
> > +	xen_auto_xlat_grant_frames.pfn = pfn;
> > +	xen_auto_xlat_grant_frames.count = max_nr_gframes;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
> > +
> > +void gnttab_free_auto_xlat_frames(void)
> > +{
> > +	if (!xen_auto_xlat_grant_frames.count)
> > +		return;
> > +	kfree(xen_auto_xlat_grant_frames.pfn);
> > +	xen_auto_xlat_grant_frames.pfn = NULL;
> > +	xen_auto_xlat_grant_frames.count = 0;
> > +	xen_auto_xlat_grant_frames.vaddr = 0;
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
> 
> I would leave vaddr alone in gnttab_setup_auto_xlat_frames and
> gnttab_free_auto_xlat_frames

Actually, I like David's suggestion. Patch coming out soon.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:50:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzAkU-0001o2-Mq; Fri, 03 Jan 2014 19:49:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1VzAkT-0001nN-D5
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 19:49:37 +0000
Received: from [193.109.254.147:42089] by server-12.bemta-14.messagelabs.com
	id 08/C9-13681-05417C25; Fri, 03 Jan 2014 19:49:36 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388778575!8677828!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18642 invoked from network); 3 Jan 2014 19:49:35 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 19:49:35 -0000
Received: by mail-la0-f43.google.com with SMTP id n7so8350382lam.16
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 11:49:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=1FWNU7NoYheGOUeXFor35A7hjumtMuLAInNgQcGXw+Y=;
	b=NWe+eyAtx1cYpfsPTQgLZFGH/bdkEJ5ISGrmOCamOcVBZY7UbeQdB77izg9a9I1C9H
	OgyGsI2Pe0tpMgkVmLieDo+zwMiqBd2lN0rr1DAmcoo4lamkoOQyNHVjIf7NTKTX5WA2
	IIGZQNLsCdx79qGIHMyT4WIayxT+XJNqfP0thwKEjmITGaBz8XliKCG67yst+Irq8yAB
	WOqHh9VGEgSCDCZ3pEqX5P2VyW7n+9Ovf7j/kQv133qAZCgwn6HthFQGme8LmvyUZTko
	RLtpgSY4erjyA/AatcKPTjY375+Z0xv5Smi34g5C0ZslrdS+qpJ5BfDsByGSrJeAgvyw
	Stvw==
X-Received: by 10.112.14.34 with SMTP id m2mr35201614lbc.13.1388778574873;
	Fri, 03 Jan 2014 11:49:34 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id h11sm37114645lbg.8.2014.01.03.11.49.33
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 11:49:34 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52C704CE.5080003@citrix.com>
Date: Fri, 3 Jan 2014 23:49:32 +0400
Message-Id: <BB7137D2-CDC4-4C42-B227-30BD2E0604BA@gmail.com>
References: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
	<52C704CE.5080003@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Andrew,

It is impossible to do this outside the public headers, because trace.h is included by other public headers - for example xen/public/hvm/hvm_op.h.

It would be better to have the changes committed in the public headers, so that future ports need no additional changes in the sources.

Is it possible to add '#ifdef __sun' to the public headers?
I can try to do it and send it for review.
Or do I need to update the public headers in the illumos tree every time?

--
Best regards,
Igor Kozhukhov




On Jan 3, 2014, at 10:43 PM, Andrew Cooper wrote:

> On 03/01/14 18:38, Igor Kozhukhov wrote:
>> Hello All,
>> 
>> I'm working on a port of xen-4.2 to an illumos-based platform - DilOS - and have problems with conflicts in:
>> 
>> xen/include/public/trace.h :
>> 
>> struct t_buf {}
>> struct t_info {}
>> 
>> I have a question - can we rename the structures to:
>> struct t_buf_xen {}
>> struct t_info_xen {}
>> 
>> Will it be applicable for the Xen sources?
> 
> You would break the compilation for every other user of trace.h, so no.
> 
> If it conflicts with something locally, you could do something like:
> (completely untested)
> 
> #define t_buf t_buf_xen
> #include <public/trace.h>
> #undef t_buf
> 
> And use t_buf_xen in your own code.
> 
> ~Andrew
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:51:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzAm8-0001uP-Ft; Fri, 03 Jan 2014 19:51:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1VzAm7-0001u8-7X
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 19:51:19 +0000
Received: from [193.109.254.147:10593] by server-9.bemta-14.messagelabs.com id
	1D/13-13957-6B417C25; Fri, 03 Jan 2014 19:51:18 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388778675!8703071!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiB0LmNvL2U1TFFDVUQ5ZDAp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6128 invoked from network); 3 Jan 2014 19:51:16 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 19:51:16 -0000
Received: by mail-ie0-f173.google.com with SMTP id to1so16400206ieb.18
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 11:51:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=K6ENe9wSEaNiNz6XOqglwjELXivJ+3mpYUOEkpqtPhA=;
	b=GI136T+TWWb7g0eowuXR1QO536UMPHsDd+lQAoVrGT7gN8Nq3BYzfpv4Wz5908IqI4
	5liT/A0sj//L/dWLElzdbxomr5kgPGphmp1ppy96gaW5SDOjOa9sdqaYhgwBF//NIcoF
	ndOFpqLuVhkCRCXXVsLWU15v5NHHwluEhvTHJxkEBnn1MGGthxKFN4sHtTRoHNpUNIpq
	1AiL3G78M/AyScJ5Ko2yZly8/OkSxUq5q0pfR2gZD9VdIGj9gBTelghRl753bid3JpIs
	VtMwxm50CkaQTw6g1TQpnywOGCEQt03ro8u1hZfBbu1mfGzXrxiip9CVOxd1J9gEbpZD
	XXDg==
X-Gm-Message-State: ALoCoQl8K9Xlr0d0Eiq7DqcH8AEVstwHV29qcBT9A2HyAKItu223miqiabEl9YKu06pNV3o0Gczu
X-Received: by 10.42.133.199 with SMTP id i7mr3550320ict.64.1388778675267;
	Fri, 03 Jan 2014 11:51:15 -0800 (PST)
Received: from [192.168.1.112] (192-0-158-112.cpe.teksavvy.com.
	[192.0.158.112])
	by mx.google.com with ESMTPSA id fk5sm2157969igb.9.2014.01.03.11.51.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 11:51:13 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <20140103184154.GA29283@phenom.dumpdata.com>
Date: Fri, 3 Jan 2014 14:51:14 -0500
Message-Id: <C862CE7C-4B8E-4A69-AD79-BE3E9417C9CD@gridcentric.ca>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
	<556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
	<20140103184154.GA29283@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailer: Apple Mail (2.1510)
Cc: Olaf Hering <olaf@aepfle.de>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>, Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 3, 2014, at 1:41 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Fri, Jan 03, 2014 at 09:49:36AM -0500, Andres Lagar-Cavilla wrote:
>> 
>> On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:
>> 
>>> At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
>>>> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
>>>>> On Twitter, Florian Heigl sent out a few messages about issues with xenpaging:
>>>>> 
>>>>> ---
>>>>> 19-Dec: Anyone successfully use #xen<https://twitter.com/search?q=%23xen&src=hash> #xenpaging<https://twitter.com/search?q=%23xenpaging&src=hash>? docs are at SLES manual, rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798<http://t.co/P36VdL84Et> dead feature or usable?
>>>>> 
>>>>> 22-Dec: @lars_kurth<https://twitter.com/lars_kurth> @RCPavlicek<https://twitter.com/RCPavlicek> Hey guys, I wrote down as much as I could https://piratenpad.de/p/Ik3lOBLniq1L5TEM <https://t.co/e5LQCUD9d0> (since I'm on holiday and not constant online)
>>>>> 
>>>>> 22-Dec: Yay, tested #xen<https://twitter.com/search?q=%23xen&src=hash> Xenpaging (memory overcommit)
>>>>> [x] largely untested
>>>>> [x] docs outdated
>>>>> [x] syntax+logic changed
>>>>> [x] broken
>>>>> ---
>>>>> 
>>>>> [I've taken the liberty of removing the colorful expletive from the final post]
>>>>> 
>>>>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
>>>> 
>>>> The Maintainers file implies otherwise. Let me CC the maintainers.
>>> 
>>> Andres really owns this code, so I'll punt to him for an official
>>> answer, but:
>> The part actively maintained is the hypervisor support for paging, and the interface.
>> 
>> tools/xenpaging is one way to consume that interface. It seems to have suffered from bitrot.
> 
> What is the other interface? Thanks!

Not sure what the question is. There is one interface. What I was referring to is that tools/xenpaging implements one specific paging policy: victim selection, rate limiting, paging target - all of these are algorithms that entirely define what bang for your money you will get.

Andres
>> 
>> So other than echoing Tim's points below, I'll add
>> 
>> - Some interesting ideas thrown around by Florian in his notes. Could lead to a robust discussion in xen-devel … if Florian is still interested.
>> 
>> - Perhaps the developers who are interested (myself included) should make a decent effort at improving the in-tree tools. There is the argument that, for example, KSM gives KVM users a sharing solution that just works, whether you like the results or not. In that vein xenpaging apparently doesn't cut it, nor the absence of a basic sharing tool.
>> 
>> One simple paging tool could be lazy restore. There is some interest out there; it would be relatively straightforward to codify.
>> 
>> Andres
>>> 
>>> - It's been listed as a 'tech preview' on the feature list since it went
>>> in.  http://wiki.xenproject.org/wiki/Xen_Release_Features says:
>>> "Preview, due to limited tools support. Hypervisor side in good shape."
>>> 
>>> - I can't say anything about SuSE's apparent support for it, except
>>> that ISTR Olaf worked at/for/with SuSE at the time.
>>> 
>>> - Patches would, of course, be welcome.
>>> 
>>> Tim.
>> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 19:51:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 19:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzAm8-0001uP-Ft; Fri, 03 Jan 2014 19:51:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1VzAm7-0001u8-7X
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 19:51:19 +0000
Received: from [193.109.254.147:10593] by server-9.bemta-14.messagelabs.com id
	1D/13-13957-6B417C25; Fri, 03 Jan 2014 19:51:18 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388778675!8703071!1
X-Originating-IP: [209.85.223.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiB0LmNvL2U1TFFDVUQ5ZDAp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6128 invoked from network); 3 Jan 2014 19:51:16 -0000
Received: from mail-ie0-f173.google.com (HELO mail-ie0-f173.google.com)
	(209.85.223.173)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 19:51:16 -0000
Received: by mail-ie0-f173.google.com with SMTP id to1so16400206ieb.18
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 11:51:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=K6ENe9wSEaNiNz6XOqglwjELXivJ+3mpYUOEkpqtPhA=;
	b=GI136T+TWWb7g0eowuXR1QO536UMPHsDd+lQAoVrGT7gN8Nq3BYzfpv4Wz5908IqI4
	5liT/A0sj//L/dWLElzdbxomr5kgPGphmp1ppy96gaW5SDOjOa9sdqaYhgwBF//NIcoF
	ndOFpqLuVhkCRCXXVsLWU15v5NHHwluEhvTHJxkEBnn1MGGthxKFN4sHtTRoHNpUNIpq
	1AiL3G78M/AyScJ5Ko2yZly8/OkSxUq5q0pfR2gZD9VdIGj9gBTelghRl753bid3JpIs
	VtMwxm50CkaQTw6g1TQpnywOGCEQt03ro8u1hZfBbu1mfGzXrxiip9CVOxd1J9gEbpZD
	XXDg==
X-Gm-Message-State: ALoCoQl8K9Xlr0d0Eiq7DqcH8AEVstwHV29qcBT9A2HyAKItu223miqiabEl9YKu06pNV3o0Gczu
X-Received: by 10.42.133.199 with SMTP id i7mr3550320ict.64.1388778675267;
	Fri, 03 Jan 2014 11:51:15 -0800 (PST)
Received: from [192.168.1.112] (192-0-158-112.cpe.teksavvy.com.
	[192.0.158.112])
	by mx.google.com with ESMTPSA id fk5sm2157969igb.9.2014.01.03.11.51.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 11:51:13 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <20140103184154.GA29283@phenom.dumpdata.com>
Date: Fri, 3 Jan 2014 14:51:14 -0500
Message-Id: <C862CE7C-4B8E-4A69-AD79-BE3E9417C9CD@gridcentric.ca>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
	<556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
	<20140103184154.GA29283@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailer: Apple Mail (2.1510)
Cc: Olaf Hering <olaf@aepfle.de>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>, Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 3, 2014, at 1:41 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> =
wrote:

> On Fri, Jan 03, 2014 at 09:49:36AM -0500, Andres Lagar-Cavilla wrote:
>> 
>> On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:
>> 
>>> At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
>>>> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
>>>>> On Twitter, Florian Heigl sent out a few messages about issues with xenpaging:
>>>>> 
>>>>> ---
>>>>> 19-Dec: Anyone successfully use #xen #xenpaging? Docs are in the SLES manual; the rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798 -- dead feature or usable?
>>>>> 
>>>>> 22-Dec: @lars_kurth @RCPavlicek Hey guys, I wrote down as much as I could: https://piratenpad.de/p/Ik3lOBLniq1L5TEM (since I'm on holiday and not constantly online)
>>>>> 
>>>>> 22-Dec: Yay, tested #xen Xenpaging (memory overcommit)
>>>>> [x] largely untested
>>>>> [x] docs outdated
>>>>> [x] syntax+logic changed
>>>>> [x] broken
>>>>> ---
>>>>> 
>>>>> [I've taken the liberty of removing the colorful expletive from the final post]
>>>>> 
>>>>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
>>>> 
>>>> The Maintainers file implies otherwise. Let me CC the maintainers.
>>> 
>>> Andres really owns this code, so I'll punt to him for an official
>>> answer, but:
>> The part actively maintained is the hypervisor support for paging, and the interface.
>> 
>> tools/xenpaging is one way to consume that interface. It seems to have suffered from bitrot.
> 
> What is the other interface? Thanks!

Not sure what the question is. There is only one interface. What I was referring to is that tools/xenpaging implements one specific paging policy: victim selection, rate limiting, paging target. All of these are algorithms that entirely define what bang for your buck you will get.

Andres
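As an editorial aside, the policy knobs Andres names (victim selection, rate limiting, paging target) can be sketched in plain C. Everything below -- the `policy` struct, `pick_victim`, the token-bucket limiter -- is hypothetical scaffolding for illustration, not the xenpaging code or its API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical policy state; these names do not come from tools/xenpaging. */
struct policy {
    uint64_t next_gfn; /* round-robin cursor for victim selection */
    uint64_t max_gfn;  /* highest guest frame number to consider  */
    uint64_t tokens;   /* rate limiter: pages we may page out now */
    uint64_t refill;   /* tokens added back on each timer tick    */
};

/* Victim selection: the simplest possible policy, a round-robin sweep.
 * A real policy would consult access bits, LRU data, and so on. */
static uint64_t pick_victim(struct policy *p)
{
    uint64_t gfn = p->next_gfn;
    p->next_gfn = (p->next_gfn + 1) % p->max_gfn;
    return gfn;
}

/* Rate limiting as a token bucket: allow a page-out only while tokens remain. */
static int may_page_out(struct policy *p)
{
    if (p->tokens == 0)
        return 0;
    p->tokens--;
    return 1;
}

/* Called periodically to refill the bucket. */
static void tick(struct policy *p)
{
    p->tokens += p->refill;
}
```

The point of the sketch is only that these three decisions, not the hypervisor interface, determine how well paging performs.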
>> 
>> So other than echoing Tim's points below, I'll add:
>> 
>> - Some interesting ideas thrown around by Florian in his notes. Could lead to a robust discussion on xen-devel ... if Florian is still interested.
>> 
>> - Perhaps the developers who are interested (myself included) should make a decent effort at improving the in-tree tools. There is the argument that, for example, KSM gives KVM users a sharing solution that just works, whether you like the results or not. In that vein xenpaging apparently doesn't cut it, and the absence of a basic in-tree sharing tool doesn't help either.
>> 
>> One simple paging tool could be lazy restore. There is some interest out there; it would be relatively straightforward to codify.
>> 
>> Andres
>>> 
>>> - It's been listed as a 'tech preview' on the feature list since it went
>>> in.  http://wiki.xenproject.org/wiki/Xen_Release_Features says:
>>> "Preview, due to limited tools support. Hypervisor side in good shape."
>>> 
>>> - I can't say anything about SuSE's apparent support for it, except
>>> that ISTR Olaf worked at/for/with SuSE at the time.
>>> 
>>> - Patches would, of course, be welcome.
>>> 
>>> Tim.
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:18:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBBi-0002qD-4S; Fri, 03 Jan 2014 20:17:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xumengpanda@gmail.com>) id 1VzBBg-0002q6-Cn
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 20:17:44 +0000
Received: from [85.158.137.68:61136] by server-1.bemta-3.messagelabs.com id
	00/F5-29598-7EA17C25; Fri, 03 Jan 2014 20:17:43 +0000
X-Env-Sender: xumengpanda@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1388780261!7097747!1
X-Originating-IP: [209.85.214.172]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12137 invoked from network); 3 Jan 2014 20:17:42 -0000
Received: from mail-ob0-f172.google.com (HELO mail-ob0-f172.google.com)
	(209.85.214.172)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 20:17:42 -0000
Received: by mail-ob0-f172.google.com with SMTP id gq1so16177674obb.31
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 12:17:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=UffsvK7c2m7ww4IxWQgFJYnI9dfej+FbAgW10GsYs5A=;
	b=YdrXxg5XL8ryEh/1l0fWztmdlJ9n1v4DIPnXZeuQQ1udlSKp5HlRq7nJTuKhW072EK
	UNDQE2QLP4OnaWKpPJDyeSXpJdfoFceQbp1WQaYzDRFzYf7IhwbMCkEr+5+dDip8AIIg
	66zAlstBMgtblBy8oTgnCIHFoWckLGtk9J21PnFax2OQHF8ODd1TyICv+G2Wglq2OpHi
	mk58Bx8BYAOkEuiK8uZcAKEKBZKoIknCZqUfX1n0Pf1w7FTF/MIbUEs2lgcQ9lpHuj39
	MEgoBi3Q4jiPZzDbOCCvswRVc6kvHhsOjy6oKXfU25dRE8NSmPyv/YCIoBAkvhY2W1p+
	RlQA==
MIME-Version: 1.0
X-Received: by 10.60.232.11 with SMTP id tk11mr1090768oec.77.1388780260615;
	Fri, 03 Jan 2014 12:17:40 -0800 (PST)
Received: by 10.76.7.235 with HTTP; Fri, 3 Jan 2014 12:17:40 -0800 (PST)
In-Reply-To: <20140103184944.GB29402@phenom.dumpdata.com>
References: <CAENZ-+=v4TyP4_VxoU6B_w90gVZPhMkk7Yv0mKPYBF-po0y6xA@mail.gmail.com>
	<20140103184944.GB29402@phenom.dumpdata.com>
Date: Fri, 3 Jan 2014 15:17:40 -0500
Message-ID: <CAENZ-+nPTOxaQg4JO9EuZ1Y_W2_EwO0Umr+4O_fR60P3egk=Kg@mail.gmail.com>
From: Meng Xu <xumengpanda@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about four kinds of pages in struct
	xc_dominfo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5530130950656265060=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5530130950656265060==
Content-Type: multipart/alternative; boundary=001a11369abe1aea6104ef169ae0

--001a11369abe1aea6104ef169ae0
Content-Type: text/plain; charset=ISO-8859-1

Hi Konrad,

Thank you so much for your advice! Now I understand the difference between
those three kinds of pages.

Your advice to use git annotate is really helpful! :-)

Best,

Meng


2014/1/3 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> On Tue, Dec 31, 2013 at 04:43:55PM -0500, Meng Xu wrote:
> > Hi,
> >
> > I'm trying to print out the "current used" pages of each domU.
> >
> > I'm reading the xen code and found the data structure xc_dominfo at file
> > tools/libxc/xenctrl.h.
> >
> > *I have a simple, maybe very naive question: *
> > 1) What is the difference among *nr_outstanding_pages*, *
> nr_shared_pages*,
> > and *nr_paged_pages*?
>
> The nr_outstanding_pages field is usually zero. It is the number of
> pages that still need to be allocated for the guest.
>
> The nr_shared_pages field is the number of pages that are shared with
> other guests or tools.
>
> The nr_paged_pages field is the number of pages that have been paged
> out to swap for a VM. You need to use xenpaging for that.
>
> > 2) Could anyone point me to a place that I can find the document of the
> > definition of the structures in xen code, so that I can find those
> > definition by myself?
>
> Um, I usually use 'git annotate' on the file; the commit description
> gives me a good idea.
>
> >
> > I'm new to the xen source and hope you can give me some guide to hack the
> > xen code.
> >
> > ========The structure is as below======================
> > "tools/libxc/xenctrl.h"
> > /*
> >  * DOMAIN MANAGEMENT FUNCTIONS
> >  */
> >
> > typedef struct xc_dominfo {
> >     uint32_t      domid;
> >     uint32_t      ssidref;
> >     unsigned int  dying:1, crashed:1, shutdown:1,
> >                   paused:1, blocked:1, running:1,
> >                   hvm:1, debugged:1;
> >     unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
> >     unsigned long nr_pages; /* current number, not maximum */
> >     unsigned long nr_outstanding_pages;
> >     unsigned long nr_shared_pages;
> >     unsigned long nr_paged_pages;
> >     unsigned long shared_info_frame;
> >     uint64_t      cpu_time;
> >     unsigned long max_memkb;
> >     unsigned int  nr_online_vcpus;
> >     unsigned int  max_vcpu_id;
> >     xen_domain_handle_t handle;
> >     unsigned int  cpupool;
> > } xc_dominfo_t;
> >
> >
> > Thank you very much for your time and help in these questions!
> > Happy New Year!
>
> You too!
> >
> > Best,
> >
> > Meng
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
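[Editorial note: the way the counters discussed in this thread relate to one another can be illustrated with a local struct mirroring the fields quoted from tools/libxc/xenctrl.h. The `projected_pages` helper below is a made-up example of combining the counters, not a libxc function.]

```c
#include <assert.h>

/* A local mirror of the page counters from struct xc_dominfo
 * (tools/libxc/xenctrl.h); only the fields discussed in this thread. */
struct page_counts {
    unsigned long nr_pages;             /* currently allocated to the domain */
    unsigned long nr_outstanding_pages; /* claimed but not yet allocated     */
    unsigned long nr_shared_pages;      /* shared with other guests or tools */
    unsigned long nr_paged_pages;       /* paged out to swap via xenpaging   */
};

/* Hypothetical helper: an upper bound on the pages the domain may hold
 * once its outstanding claim is satisfied. This formula is purely an
 * illustration of how the counters combine. */
static unsigned long projected_pages(const struct page_counts *c)
{
    return c->nr_pages + c->nr_outstanding_pages;
}
```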

--001a11369abe1aea6104ef169ae0--


--===============5530130950656265060==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5530130950656265060==--


From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNf-0003ec-9J; Fri, 03 Jan 2014 20:30:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bC-Kc
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.143.35:26293] by server-1.bemta-4.messagelabs.com id
	E9/0D-02132-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388780997!6833719!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28094 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStd1022511
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSs6r019495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsrm019490; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 38FF31C016F; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:29 -0500
Message-Id: <1388777916-1328-13-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 12/19] xen/pvh: Update E820 to work with PVH
	(v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

In xen_add_extra_mem() we can skip updating P2M as it's managed
by Xen. PVH maps the entire IO space, but only RAM pages need
to be repopulated.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/setup.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2137c51..dd5f905 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -27,6 +27,7 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/physdev.h>
 #include <xen/features.h>
+#include "mmu.h"
 #include "xen-ops.h"
 #include "vdso.h"
 
@@ -81,6 +82,9 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 
 	memblock_reserve(start, size);
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	xen_max_p2m_pfn = PFN_DOWN(start + size);
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
@@ -103,6 +107,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 		.domid        = DOMID_SELF
 	};
 	unsigned long len = 0;
+	int xlated_phys = xen_feature(XENFEAT_auto_translated_physmap);
 	unsigned long pfn;
 	int ret;
 
@@ -116,7 +121,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 				continue;
 			frame = mfn;
 		} else {
-			if (mfn != INVALID_P2M_ENTRY)
+			if (!xlated_phys && mfn != INVALID_P2M_ENTRY)
 				continue;
 			frame = pfn;
 		}
@@ -154,6 +159,13 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 static unsigned long __init xen_release_chunk(unsigned long start,
 					      unsigned long end)
 {
+	/*
+	 * Xen already ballooned out the E820 non RAM regions for us
+	 * and set them up properly in EPT.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return end - start;
+
 	return xen_do_chunk(start, end, true);
 }
 
@@ -222,7 +234,13 @@ static void __init xen_set_identity_and_release_chunk(
 	 * (except for the ISA region which must be 1:1 mapped) to
 	 * release the refcounts (in Xen) on the original frames.
 	 */
-	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
+
+	/*
+	 * PVH E820 matches the hypervisor's P2M which means we need to
+	 * account for the proper values of *release and *identity.
+	 */
+	for (pfn = start_pfn; !xen_feature(XENFEAT_auto_translated_physmap) &&
+	     pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
 		pte_t pte = __pte_ma(0);
 
 		if (pfn < PFN_UP(ISA_END_ADDRESS))
-- 
1.8.3.1
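[Editorial note: the guard pattern this patch adds to xen_release_chunk() -- an early return when the domain is auto-translated -- can be seen in isolation below. xen_feature() and do_chunk() are stand-in stubs so the sketch compiles outside the kernel; they are not the real Xen interfaces.]

```c
#include <assert.h>

/* Stand-ins for the real Xen definitions, so this compiles standalone. */
#define XENFEAT_auto_translated_physmap 0
static int feature_flags[1];
static int xen_feature(int f) { return feature_flags[f]; }

static unsigned long pages_released; /* observable effect of the slow path */

/* Models xen_do_chunk(): the slow path that actually releases pages. */
static unsigned long do_chunk(unsigned long start, unsigned long end)
{
    pages_released += end - start;
    return end - start;
}

/* Mirrors the patched xen_release_chunk(): on a PVH (auto-translated)
 * guest, Xen has already ballooned out the non-RAM E820 regions, so we
 * report the range as released without touching the P2M. */
static unsigned long release_chunk(unsigned long start, unsigned long end)
{
    if (xen_feature(XENFEAT_auto_translated_physmap))
        return end - start;
    return do_chunk(start, end);
}
```

With the feature flag set, the function returns the same count but performs no work, which matches the commit message: Xen already set the regions up properly in EPT.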


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNY-0003bJ-G8; Fri, 03 Jan 2014 20:30:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNX-0003ax-Hu
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:29:59 +0000
Received: from [193.109.254.147:9626] by server-6.bemta-14.messagelabs.com id
	70/51-14958-6CD17C25; Fri, 03 Jan 2014 20:29:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1388780996!7229834!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5789 invoked from network); 3 Jan 2014 20:29:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSrSH022475
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSq5W019403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:53 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqHd019394; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D74061BFB0D; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:19 -0500
Message-Id: <1388777916-1328-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 02/19] xen/pvh/x86: Define what a PVH guest
	is (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

A PVH guest is a PV guest with auto page translation enabled
and with vector callback. It is a cross between PVHVM and PV.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

We also use the PV cpuid, although we could use the HVM (native)
cpuid. However, we already have a fair bit of filtering in xen_cpuid,
and we can piggyback on that until the hypervisor/toolstack filters
the appropriate cpuids. Once that is done we can switch over to
the native one.

We set up a Kconfig entry that is disabled by default and
cannot be enabled.

Note that on ARM the concept of PVH is non-existent. As Ian
put it: "an ARM guest is neither PV nor HVM nor PVHVM.
It's a bit like PVH but is different also (it's further towards
the H end of the spectrum than even PVH).". As such these
options (PVHVM, PVH) are never enabled nor seen on ARM
compilations.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/Kconfig |  5 +++++
 include/xen/xen.h    | 14 ++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 1a3c765..e7d0590 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -51,3 +51,8 @@ config XEN_DEBUG_FS
 	  Enable statistics output and various tuning options in debugfs.
 	  Enabling this option may incur a significant performance overhead.
 
+config XEN_PVH
+	bool "Support for running as a PVH guest"
+	depends on X86_64 && XEN && BROKEN
+	select XEN_PVHVM
+	def_bool n
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a74d436..0c0e3ef 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,4 +29,18 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
+#ifdef CONFIG_XEN_PVH
+/* This functionality exists only for x86. The XEN_PVHVM support exists
+ * only in x86 world - hence on ARM it will be always disabled.
+ * N.B. ARM guests are neither PV nor HVM nor PVHVM.
+ * It's a bit like PVH but is different also (it's further towards the H
+ * end of the spectrum than even PVH).
+ */
+#include <xen/features.h>
+#define xen_pvh_domain() (xen_pv_domain() && \
+			  xen_feature(XENFEAT_auto_translated_physmap) && \
+			  xen_have_vector_callback)
+#else
+#define xen_pvh_domain()	(0)
+#endif
 #endif	/* _XEN_XEN_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNb-0003dA-Tn; Fri, 03 Jan 2014 20:30:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003b9-Eq
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [193.109.254.147:21119] by server-14.bemta-14.messagelabs.com
	id 8C/04-12628-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388780997!8715172!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2543 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSssw022506
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrQk025216
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSr3N019900; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CE8281BFB0C; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:18 -0500
Message-Id: <1388777916-1328-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 01/19] xen/p2m: Check for auto-xlat when
	doing mfn_to_local_pfn.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

Most of the functions in page.h are prefaced with
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

Except mfn_to_local_pfn. At first sight, the function
should work without this patch - as 'mfn_to_pfn' has
a similar check. But there is no such check in the
'get_phys_to_machine' function - so we would crash there.

This fixes it by following the convention of having the
auto-xlat check in these static functions.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/include/asm/xen/page.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..4a092cc 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -167,7 +167,12 @@ static inline xpaddr_t machine_to_phys(xmaddr_t machine)
  */
 static inline unsigned long mfn_to_local_pfn(unsigned long mfn)
 {
-	unsigned long pfn = mfn_to_pfn(mfn);
+	unsigned long pfn;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return mfn;
+
+	pfn = mfn_to_pfn(mfn);
 	if (get_phys_to_machine(pfn) != mfn)
 		return -1; /* force !pfn_valid() */
 	return pfn;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNd-0003dz-E2; Fri, 03 Jan 2014 20:30:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bA-J4
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.143.35:17854] by server-3.bemta-4.messagelabs.com id
	C8/4B-32360-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388780997!9454691!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30137 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsa8007032
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrLU025195
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrc9019886; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E93B71BFB12; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:21 -0500
Message-Id: <1388777916-1328-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 04/19] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The P2M is not available for PVH. Fortunately for us, the
P2M code already has most of the support for auto-xlat guests, thanks to
commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
"grant-table: call set_phys_to_machine after mapping grant refs",
which "introduces set_phys_to_machine calls for auto_translated guests
(even on x86) in gnttab_map_refs and gnttab_unmap_refs.
translated by swiotlb-xen... ", so we don't need to change much.

With the above mentioned commit "you'll get set_phys_to_machine calls
from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
anything with them" (Stefano Stabellini), which is OK - we want
them to be NOPs.

This is because we assume that an "IOMMU is always present on the
platform and Xen is going to make the appropriate IOMMU pagetable
changes in the hypercall implementation of GNTTABOP_map_grant_ref
and GNTTABOP_unmap_grant_ref, then everything should be transparent
from the PVH privileged guest's point of view and DMA transfers involving
foreign pages keep working with no issues.

Otherwise we would need a P2M (and an M2P) for the PVH privileged guest to
track these foreign pages .. (see arch/arm/xen/p2m.c)."
(Stefano Stabellini).

We still have to inhibit the building of the P2M tree.
That had been done in the past by not calling
xen_build_dynamic_phys_to_machine (which sets up the P2M tree
and gives us virtual addresses to access it). But we were missing
a check in xen_build_mfn_list_list - which would continue to set up
the P2M tree and blow up when trying to get the virtual
address of p2m_missing (which would have been set up by
xen_build_dynamic_phys_to_machine).

Hence a check is needed to not call xen_build_mfn_list_list when
running in auto-xlat mode.

Instead of replicating the auto-xlat check in enlighten.c,
do it in the p2m.c code. The reason is that xen_build_mfn_list_list
is also called in xen_arch_post_suspend without any check for
auto-xlat. So for PVH, or PV with auto-xlat, we would needlessly
allocate space for a P2M tree.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/enlighten.c |  3 +--
 arch/x86/xen/p2m.c       | 12 ++++++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index eb0efc2..23ead29 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1488,8 +1488,7 @@ asmlinkage void __init xen_start_kernel(void)
 	x86_configure_nx();
 
 	/* Get mfn list */
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		xen_build_dynamic_phys_to_machine();
+	xen_build_dynamic_phys_to_machine();
 
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..fb7ee0a 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
 {
 	unsigned long pfn;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
@@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
 /* Set up p2m_top to point to the domain-builder provided p2m pages */
 void __init xen_build_dynamic_phys_to_machine(void)
 {
-	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
-	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
+	unsigned long *mfn_list;
+	unsigned long max_pfn;
 	unsigned long pfn;
 
+	 if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	mfn_list = (unsigned long *)xen_start_info->mfn_list;
+	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
 	xen_max_p2m_pfn = max_pfn;
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNg-0003f7-3e; Fri, 03 Jan 2014 20:30:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bB-MH
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.137.68:50117] by server-8.bemta-3.messagelabs.com id
	37/15-31081-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388780997!7150017!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21741 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStIx022508
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsQf025223
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSssV019910; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F1FB41BFB13; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:22 -0500
Message-Id: <1388777916-1328-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 05/19] xen/mmu/p2m: Refactor the
	xen_pagetable_init code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The revectoring and copying of the P2M only happen when
!auto-xlat and on 64-bit builds. That is not obvious from
the code, so let's have separate 32- and 64-bit functions.

We also invert the check for auto-xlat to make the code
flow simpler.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 70 +++++++++++++++++++++++++++++-------------------------
 1 file changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..c140eff 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
 	 * instead of somewhere later and be confusing. */
 	xen_mc_flush();
 }
-#endif
-static void __init xen_pagetable_init(void)
+static void __init xen_pagetable_p2m_copy(void)
 {
-#ifdef CONFIG_X86_64
 	unsigned long size;
 	unsigned long addr;
-#endif
-	paging_init();
-	xen_setup_shared_info();
-#ifdef CONFIG_X86_64
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		unsigned long new_mfn_list;
+	unsigned long new_mfn_list;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+	/* On 32-bit, we get zero so this never gets executed. */
+	new_mfn_list = xen_revector_p2m_tree();
+	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+		/* using __ka address and sticking INVALID_P2M_ENTRY! */
+		memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+		/* We should be in __ka space. */
+		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+		addr = xen_start_info->mfn_list;
+		/* We roundup to the PMD, which means that if anybody at this stage is
+		 * using the __ka address of xen_start_info or xen_start_info->shared_info
+		 * they are in going to crash. Fortunatly we have already revectored
+		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+		size = roundup(size, PMD_SIZE);
+		xen_cleanhighmap(addr, addr + size);
 
 		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+		memblock_free(__pa(xen_start_info->mfn_list), size);
+		/* And revector! Bye bye old array */
+		xen_start_info->mfn_list = new_mfn_list;
+	} else
+		return;
 
-		/* On 32-bit, we get zero so this never gets executed. */
-		new_mfn_list = xen_revector_p2m_tree();
-		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-			/* using __ka address and sticking INVALID_P2M_ENTRY! */
-			memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-			/* We should be in __ka space. */
-			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-			addr = xen_start_info->mfn_list;
-			/* We roundup to the PMD, which means that if anybody at this stage is
-			 * using the __ka address of xen_start_info or xen_start_info->shared_info
-			 * they are in going to crash. Fortunatly we have already revectored
-			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-			size = roundup(size, PMD_SIZE);
-			xen_cleanhighmap(addr, addr + size);
-
-			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-			memblock_free(__pa(xen_start_info->mfn_list), size);
-			/* And revector! Bye bye old array */
-			xen_start_info->mfn_list = new_mfn_list;
-		} else
-			goto skip;
-	}
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
@@ -1255,7 +1251,15 @@ static void __init xen_pagetable_init(void)
 	 * anything at this stage. */
 	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
 #endif
-skip:
+}
+#endif
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+#ifdef CONFIG_X86_64
+	xen_pagetable_p2m_copy();
 #endif
 	xen_post_allocator_init();
 }
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNh-0003fV-Ge; Fri, 03 Jan 2014 20:30:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bD-U6
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.137.68:50121] by server-15.bemta-3.messagelabs.com id
	B1/CA-11556-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388780997!7130652!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15689 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSs8R007027
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrTZ019467
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSraI019460; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 06B651BFB49; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:23 -0500
Message-Id: <1388777916-1328-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 06/19] xen/mmu: Cleanup
	xen_pagetable_p2m_copy a bit.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano noticed that this code runs only under 64-bit, so the
comments about 32-bit are pointless.

We also restructure the check on the value returned by
xen_revector_p2m_tree: if it returns the same value (because it
could not allocate a swath of space to put the new P2M in), or if
it has already been called once, we now return early from the
function.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index c140eff..9d74249 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1209,29 +1209,29 @@ static void __init xen_pagetable_p2m_copy(void)
 
 	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
 
-	/* On 32-bit, we get zero so this never gets executed. */
 	new_mfn_list = xen_revector_p2m_tree();
-	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-		/* using __ka address and sticking INVALID_P2M_ENTRY! */
-		memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-		/* We should be in __ka space. */
-		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-		addr = xen_start_info->mfn_list;
-		/* We roundup to the PMD, which means that if anybody at this stage is
-		 * using the __ka address of xen_start_info or xen_start_info->shared_info
-		 * they are in going to crash. Fortunatly we have already revectored
-		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-		size = roundup(size, PMD_SIZE);
-		xen_cleanhighmap(addr, addr + size);
-
-		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-		memblock_free(__pa(xen_start_info->mfn_list), size);
-		/* And revector! Bye bye old array */
-		xen_start_info->mfn_list = new_mfn_list;
-	} else
+	/* No memory or already called. */
+	if (!new_mfn_list || new_mfn_list == xen_start_info->mfn_list)
 		return;
 
+	/* using __ka address and sticking INVALID_P2M_ENTRY! */
+	memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+	/* We should be in __ka space. */
+	BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+	addr = xen_start_info->mfn_list;
+	/* We roundup to the PMD, which means that if anybody at this stage is
+	 * using the __ka address of xen_start_info or xen_start_info->shared_info
+	 * they are in going to crash. Fortunatly we have already revectored
+	 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+	size = roundup(size, PMD_SIZE);
+	xen_cleanhighmap(addr, addr + size);
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+	memblock_free(__pa(xen_start_info->mfn_list), size);
+	/* And revector! Bye bye old array */
+	xen_start_info->mfn_list = new_mfn_list;
+
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNY-0003bf-UF; Fri, 03 Jan 2014 20:30:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNX-0003ay-JT
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:29:59 +0000
Received: from [85.158.143.35:17834] by server-3.bemta-4.messagelabs.com id
	08/4B-32360-6CD17C25; Fri, 03 Jan 2014 20:29:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388780996!9454690!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30111 invoked from network); 3 Jan 2014 20:29:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:57 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsLi022501
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSswJ019919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsje019912; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0EDF41BFB4A; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:24 -0500
Message-Id: <1388777916-1328-8-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 07/19] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

.. which are surprisingly small compared to the amount needed for the PV code.

PVH mostly uses native mmu ops: we keep the generic (native_*)
implementations for the majority of operations and override the
baremetal ones only where needed.

At startup we are running with pre-allocated page tables, courtesy
of the toolstack, but we still need to graft them into the initial
Linux page tables. There is, however, no need to unpin/pin them or
change them to R/O or R/W.

Note that xen_pagetable_init, thanks to commit
7836fec9d0994cc9c9150c5a33f0eb0eb08a335a ("xen/mmu/p2m: Refactor the
xen_pagetable_init code."), does not need any changes - we just need
to make sure that xen_post_allocator_init does not alter the pvops
from the default native ones.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/mmu.c | 81 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 46 insertions(+), 35 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 9d74249..490ddb3 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1757,6 +1757,10 @@ static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* For PVH no need to set R/O or R/W to pin them or unpin them. */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags))
 		BUG();
 }
@@ -1867,6 +1871,7 @@ static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native.
  */
 void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
@@ -1888,17 +1893,18 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	/* L4[272] -> level3_ident_pgt
-	 * L4[511] -> level3_kernel_pgt */
-	convert_pfn_mfn(init_level4_pgt);
-
-	/* L3_i[0] -> level2_ident_pgt */
-	convert_pfn_mfn(level3_ident_pgt);
-	/* L3_k[510] -> level2_kernel_pgt
-	 * L3_i[511] -> level2_fixmap_pgt */
-	convert_pfn_mfn(level3_kernel_pgt);
-
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		/* L4[272] -> level3_ident_pgt
+		 * L4[511] -> level3_kernel_pgt */
+		convert_pfn_mfn(init_level4_pgt);
+
+		/* L3_i[0] -> level2_ident_pgt */
+		convert_pfn_mfn(level3_ident_pgt);
+		/* L3_k[510] -> level2_kernel_pgt
+		 * L3_i[511] -> level2_fixmap_pgt */
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1922,31 +1928,33 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	copy_page(level2_fixmap_pgt, l2);
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Make pagetable pieces RO */
+		set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+		set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Make pagetable pieces RO */
-	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
-	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
-
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
-
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
-
-	/*
-	 * At this stage there can be no user pgd, and no page
-	 * structure to attach it to, so make sure we just set kernel
-	 * pgd.
-	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(init_level4_pgt));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+		/*
+		 * At this stage there can be no user pgd, and no page
+		 * structure to attach it to, so make sure we just set kernel
+		 * pgd.
+		 */
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(init_level4_pgt));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	} else
+		native_write_cr3(__pa(init_level4_pgt));
 
 	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
 	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
@@ -2107,6 +2115,9 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 static void __init xen_post_allocator_init(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	pv_mmu_ops.set_pte = xen_set_pte;
 	pv_mmu_ops.set_pmd = xen_set_pmd;
 	pv_mmu_ops.set_pud = xen_set_pud;
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNY-0003bJ-G8; Fri, 03 Jan 2014 20:30:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNX-0003ax-Hu
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:29:59 +0000
Received: from [193.109.254.147:9626] by server-6.bemta-14.messagelabs.com id
	70/51-14958-6CD17C25; Fri, 03 Jan 2014 20:29:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1388780996!7229834!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5789 invoked from network); 3 Jan 2014 20:29:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:57 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSrSH022475
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSq5W019403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:53 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqHd019394; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D74061BFB0D; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:19 -0500
Message-Id: <1388777916-1328-3-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 02/19] xen/pvh/x86: Define what a PVH guest
	is (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

A PVH guest is a PV guest with auto page translation enabled and
with a vector callback. It is a cross between PVHVM and PV.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

We also use the PV cpuid, although we could use the HVM (native)
cpuid. However, since we already have a fair bit of filtering in
xen_cpuid, we can piggyback on that until the hypervisor/toolstack
filters the appropriate cpuid leaves. Once that is done we can
switch over to the native one.

We set up a Kconfig entry that is disabled by default and cannot
be enabled.

Note that on ARM the concept of PVH is non-existent. As Ian
put it: "an ARM guest is neither PV nor HVM nor PVHVM.
It's a bit like PVH but is different also (it's further towards
the H end of the spectrum than even PVH).". As such these
options (PVHVM, PVH) are never enabled nor seen on ARM
compilations.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/Kconfig |  5 +++++
 include/xen/xen.h    | 14 ++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 1a3c765..e7d0590 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -51,3 +51,8 @@ config XEN_DEBUG_FS
 	  Enable statistics output and various tuning options in debugfs.
 	  Enabling this option may incur a significant performance overhead.
 
+config XEN_PVH
+	bool "Support for running as a PVH guest"
+	depends on X86_64 && XEN && BROKEN
+	select XEN_PVHVM
+	def_bool n
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a74d436..0c0e3ef 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -29,4 +29,18 @@ extern enum xen_domain_type xen_domain_type;
 #define xen_initial_domain()	(0)
 #endif	/* CONFIG_XEN_DOM0 */
 
+#ifdef CONFIG_XEN_PVH
+/* This functionality exists only for x86. The XEN_PVHVM support exists
+ * only in x86 world - hence on ARM it will be always disabled.
+ * N.B. ARM guests are neither PV nor HVM nor PVHVM.
+ * It's a bit like PVH but is different also (it's further towards the H
+ * end of the spectrum than even PVH).
+ */
+#include <xen/features.h>
+#define xen_pvh_domain() (xen_pv_domain() && \
+			  xen_feature(XENFEAT_auto_translated_physmap) && \
+			  xen_have_vector_callback)
+#else
+#define xen_pvh_domain()	(0)
+#endif
 #endif	/* _XEN_XEN_H */
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNZ-0003bz-BM; Fri, 03 Jan 2014 20:30:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003b0-41
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [193.109.254.147:21115] by server-16.bemta-14.messagelabs.com
	id BF/6E-20600-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388780997!5226846!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10667 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSrRu007014
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqwV019414
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:53 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqP8019409; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E01C21BFB0E; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:20 -0500
Message-Id: <1388777916-1328-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 03/19] xen/pvh: Early bootup changes in PV
	code (v4).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

We don't use the filtering that 'xen_cpuid' is doing
because the hypervisor treats 'XEN_EMULATE_PREFIX' as
an invalid instruction. This means that all of the filtering
will have to be done in the hypervisor/toolstack.

Without the filtering we expose to the guest:

 - the cpu topology (sockets, cores, etc.);
 - the APERF feature (which the generic scheduler likes to use), see
   5e626254206a709c6e937f3dda69bf26c7344f6f
   "xen/setup: filter APERFMPERF cpuid feature out";
 - the inability to figure out whether MWAIT_LEAF should be exposed
   or not, see df88b2d96e36d9a9e325bfcd12eb45671cbbc937
   "xen/enlighten: Disable MWAIT_LEAF so that acpi-pad won't be
   loaded.";
 - x2apic, see 4ea9b9aca90cfc71e6872ed3522356755162932c
   "xen: mask x2APIC feature in PV".

We also check for vector callback early on, as it is a required
feature. PVH also runs at default kernel IOPL.

Finally, the pure PV settings are moved to a separate function
that is only called for pure PV, i.e. PV with pvmmu. They are also
guarded by #ifdef CONFIG_XEN_PVMMU.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 48 ++++++++++++++++++++++++++++++++++--------------
 arch/x86/xen/setup.c     | 18 ++++++++++++------
 2 files changed, 46 insertions(+), 20 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..eb0efc2 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -46,6 +46,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/features.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -262,8 +263,9 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	pr_info("Booting paravirtualized kernel %son %s\n",
+		xen_feature(XENFEAT_auto_translated_physmap) ?
+			"with PVH extensions " : "", pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -433,7 +435,7 @@ static void __init xen_init_cpuid_mask(void)
 
 	ax = 1;
 	cx = 0;
-	xen_cpuid(&ax, &bx, &cx, &dx);
+	cpuid(1, &ax, &bx, &cx, &dx);
 
 	xsave_mask =
 		(1 << (X86_FEATURE_XSAVE % 32)) |
@@ -1420,6 +1422,19 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_early_guest_init(void)
+{
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+#ifdef CONFIG_X86_32
+	BUG(); /* PVH: Implement proper support. */
+#endif
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1431,13 +1446,16 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
+	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (!xen_pvh_domain())
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1469,8 +1487,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1548,14 +1564,18 @@ asmlinkage void __init xen_start_kernel(void)
 	/* set the limit of our address space */
 	xen_reserve_top();
 
-	/* We used to do this in xen_arch_setup, but that is too late on AMD
-	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
-	 * which pokes 0xcf8 port.
-	 */
-	set_iopl.iopl = 1;
-	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
-	if (rc != 0)
-		xen_raw_printk("physdev_op failed %d\n", rc);
+	/* PVH: runs at default kernel iopl of 0 */
+	if (!xen_pvh_domain()) {
+		/*
+		 * We used to do this in xen_arch_setup, but that is too late
+		 * on AMD were early_cpu_init (run before ->arch_setup()) calls
+		 * early_amd_init which pokes 0xcf8 port.
+		 */
+		set_iopl.iopl = 1;
+		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
+		if (rc != 0)
+			xen_raw_printk("physdev_op failed %d\n", rc);
+	}
 
 #ifdef CONFIG_X86_32
 	/* set up basic CPUID stuff */
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..2137c51 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -563,16 +563,13 @@ void xen_enable_nmi(void)
 		BUG();
 #endif
 }
-void __init xen_arch_setup(void)
+void __init xen_pvmmu_arch_setup(void)
 {
-	xen_panic_handler_init();
-
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
 
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		HYPERVISOR_vm_assist(VMASST_CMD_enable,
-				     VMASST_TYPE_pae_extended_cr3);
+	HYPERVISOR_vm_assist(VMASST_CMD_enable,
+			     VMASST_TYPE_pae_extended_cr3);
 
 	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
 	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
@@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
 	xen_enable_sysenter();
 	xen_enable_syscall();
 	xen_enable_nmi();
+}
+
+/* This function is not called for HVM domains */
+void __init xen_arch_setup(void)
+{
+	xen_panic_handler_init();
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		xen_pvmmu_arch_setup();
+
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
 		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNg-0003f7-3e; Fri, 03 Jan 2014 20:30:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bB-MH
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.137.68:50117] by server-8.bemta-3.messagelabs.com id
	37/15-31081-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1388780997!7150017!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21741 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStIx022508
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsQf025223
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSssV019910; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F1FB41BFB13; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:22 -0500
Message-Id: <1388777916-1328-6-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 05/19] xen/mmu/p2m: Refactor the
	xen_pagetable_init code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The revectoring and copying of the P2M only happen when
!auto-xlat and on 64-bit builds. As that is not obvious from
the code, let's have separate 32-bit and 64-bit functions.

We also invert the check for auto-xlat to make the code
flow simpler.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 70 +++++++++++++++++++++++++++++-------------------------
 1 file changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..c140eff 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
 	 * instead of somewhere later and be confusing. */
 	xen_mc_flush();
 }
-#endif
-static void __init xen_pagetable_init(void)
+static void __init xen_pagetable_p2m_copy(void)
 {
-#ifdef CONFIG_X86_64
 	unsigned long size;
 	unsigned long addr;
-#endif
-	paging_init();
-	xen_setup_shared_info();
-#ifdef CONFIG_X86_64
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		unsigned long new_mfn_list;
+	unsigned long new_mfn_list;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+	/* On 32-bit, we get zero so this never gets executed. */
+	new_mfn_list = xen_revector_p2m_tree();
+	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+		/* using __ka address and sticking INVALID_P2M_ENTRY! */
+		memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+		/* We should be in __ka space. */
+		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+		addr = xen_start_info->mfn_list;
+	/* We round up to the PMD, which means that if anybody at this stage is
+	 * using the __ka address of xen_start_info or xen_start_info->shared_info
+	 * they are going to crash. Fortunately we have already revectored
+	 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+		size = roundup(size, PMD_SIZE);
+		xen_cleanhighmap(addr, addr + size);
 
 		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+		memblock_free(__pa(xen_start_info->mfn_list), size);
+		/* And revector! Bye bye old array */
+		xen_start_info->mfn_list = new_mfn_list;
+	} else
+		return;
 
-		/* On 32-bit, we get zero so this never gets executed. */
-		new_mfn_list = xen_revector_p2m_tree();
-		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-			/* using __ka address and sticking INVALID_P2M_ENTRY! */
-			memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-			/* We should be in __ka space. */
-			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-			addr = xen_start_info->mfn_list;
-			/* We roundup to the PMD, which means that if anybody at this stage is
-			 * using the __ka address of xen_start_info or xen_start_info->shared_info
-			 * they are in going to crash. Fortunatly we have already revectored
-			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-			size = roundup(size, PMD_SIZE);
-			xen_cleanhighmap(addr, addr + size);
-
-			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-			memblock_free(__pa(xen_start_info->mfn_list), size);
-			/* And revector! Bye bye old array */
-			xen_start_info->mfn_list = new_mfn_list;
-		} else
-			goto skip;
-	}
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
@@ -1255,7 +1251,15 @@ static void __init xen_pagetable_init(void)
 	 * anything at this stage. */
 	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
 #endif
-skip:
+}
+#endif
+
+static void __init xen_pagetable_init(void)
+{
+	paging_init();
+	xen_setup_shared_info();
+#ifdef CONFIG_X86_64
+	xen_pagetable_p2m_copy();
 #endif
 	xen_post_allocator_init();
 }
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNZ-0003bz-BM; Fri, 03 Jan 2014 20:30:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003b0-41
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [193.109.254.147:21115] by server-16.bemta-14.messagelabs.com
	id BF/6E-20600-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388780997!5226846!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10667 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSrRu007014
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqwV019414
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:53 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqP8019409; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E01C21BFB0E; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:20 -0500
Message-Id: <1388777916-1328-4-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 03/19] xen/pvh: Early bootup changes in PV
	code (v4).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

We don't use the filtering that 'xen_cpuid' is doing
because the hypervisor treats 'XEN_EMULATE_PREFIX' as
an invalid instruction. This means that all of the filtering
will have to be done in the hypervisor/toolstack.

Without the filtering we expose to the guest:

 - the CPU topology (sockets, cores, etc.);
 - APERF (which the generic scheduler likes to use), see
   5e626254206a709c6e937f3dda69bf26c7344f6f
   "xen/setup: filter APERFMPERF cpuid feature out";
 - MWAIT_LEAF, with no way to figure out whether it should
   be exposed or not, see
   df88b2d96e36d9a9e325bfcd12eb45671cbbc937
   "xen/enlighten: Disable MWAIT_LEAF so that acpi-pad won't be loaded.";
 - x2apic, see 4ea9b9aca90cfc71e6872ed3522356755162932c
   "xen: mask x2APIC feature in PV".

We also check for vector callback early on, as it is a required
feature. PVH also runs at default kernel IOPL.

Finally, pure PV settings are moved to a separate function that is
only called for pure PV, i.e., PV with pvmmu. They are also guarded
by #ifdef CONFIG_XEN_PVMMU.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 48 ++++++++++++++++++++++++++++++++++--------------
 arch/x86/xen/setup.c     | 18 ++++++++++++------
 2 files changed, 46 insertions(+), 20 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..eb0efc2 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -46,6 +46,7 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
+#include <xen/features.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
@@ -262,8 +263,9 @@ static void __init xen_banner(void)
 	struct xen_extraversion extra;
 	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
 
-	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
-	       pv_info.name);
+	pr_info("Booting paravirtualized kernel %son %s\n",
+		xen_feature(XENFEAT_auto_translated_physmap) ?
+			"with PVH extensions " : "", pv_info.name);
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
@@ -433,7 +435,7 @@ static void __init xen_init_cpuid_mask(void)
 
 	ax = 1;
 	cx = 0;
-	xen_cpuid(&ax, &bx, &cx, &dx);
+	cpuid(1, &ax, &bx, &cx, &dx);
 
 	xsave_mask =
 		(1 << (X86_FEATURE_XSAVE % 32)) |
@@ -1420,6 +1422,19 @@ static void __init xen_setup_stackprotector(void)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+static void __init xen_pvh_early_guest_init(void)
+{
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	if (xen_feature(XENFEAT_hvm_callback_vector))
+		xen_have_vector_callback = 1;
+
+#ifdef CONFIG_X86_32
+	BUG(); /* PVH: Implement proper support. */
+#endif
+}
+
 /* First C function to be called on Xen boot */
 asmlinkage void __init xen_start_kernel(void)
 {
@@ -1431,13 +1446,16 @@ asmlinkage void __init xen_start_kernel(void)
 
 	xen_domain_type = XEN_PV_DOMAIN;
 
+	xen_setup_features();
+	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
-	pv_cpu_ops = xen_cpu_ops;
 	pv_apic_ops = xen_apic_ops;
+	if (!xen_pvh_domain())
+		pv_cpu_ops = xen_cpu_ops;
 
 	x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
@@ -1469,8 +1487,6 @@ asmlinkage void __init xen_start_kernel(void)
 	/* Work out if we support NX */
 	x86_configure_nx();
 
-	xen_setup_features();
-
 	/* Get mfn list */
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		xen_build_dynamic_phys_to_machine();
@@ -1548,14 +1564,18 @@ asmlinkage void __init xen_start_kernel(void)
 	/* set the limit of our address space */
 	xen_reserve_top();
 
-	/* We used to do this in xen_arch_setup, but that is too late on AMD
-	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
-	 * which pokes 0xcf8 port.
-	 */
-	set_iopl.iopl = 1;
-	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
-	if (rc != 0)
-		xen_raw_printk("physdev_op failed %d\n", rc);
+	/* PVH: runs at default kernel iopl of 0 */
+	if (!xen_pvh_domain()) {
+		/*
+		 * We used to do this in xen_arch_setup, but that is too late
+		 * on AMD where early_cpu_init (run before ->arch_setup()) calls
+		 * early_amd_init which pokes the 0xcf8 port.
+		 */
+		set_iopl.iopl = 1;
+		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
+		if (rc != 0)
+			xen_raw_printk("physdev_op failed %d\n", rc);
+	}
 
 #ifdef CONFIG_X86_32
 	/* set up basic CPUID stuff */
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..2137c51 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -563,16 +563,13 @@ void xen_enable_nmi(void)
 		BUG();
 #endif
 }
-void __init xen_arch_setup(void)
+void __init xen_pvmmu_arch_setup(void)
 {
-	xen_panic_handler_init();
-
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
 	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
 
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		HYPERVISOR_vm_assist(VMASST_CMD_enable,
-				     VMASST_TYPE_pae_extended_cr3);
+	HYPERVISOR_vm_assist(VMASST_CMD_enable,
+			     VMASST_TYPE_pae_extended_cr3);
 
 	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
 	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
@@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
 	xen_enable_sysenter();
 	xen_enable_syscall();
 	xen_enable_nmi();
+}
+
+/* This function is not called for HVM domains */
+void __init xen_arch_setup(void)
+{
+	xen_panic_handler_init();
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
+		xen_pvmmu_arch_setup();
+
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
 		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNf-0003ec-9J; Fri, 03 Jan 2014 20:30:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bC-Kc
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.143.35:26293] by server-1.bemta-4.messagelabs.com id
	E9/0D-02132-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388780997!6833719!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28094 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStd1022511
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSs6r019495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsrm019490; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 38FF31C016F; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:29 -0500
Message-Id: <1388777916-1328-13-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 12/19] xen/pvh: Update E820 to work with PVH
	(v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

In xen_add_extra_mem() we can skip updating P2M as it's managed
by Xen. PVH maps the entire IO space, but only RAM pages need
to be repopulated.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/setup.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2137c51..dd5f905 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -27,6 +27,7 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/physdev.h>
 #include <xen/features.h>
+#include "mmu.h"
 #include "xen-ops.h"
 #include "vdso.h"
 
@@ -81,6 +82,9 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 
 	memblock_reserve(start, size);
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	xen_max_p2m_pfn = PFN_DOWN(start + size);
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
@@ -103,6 +107,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 		.domid        = DOMID_SELF
 	};
 	unsigned long len = 0;
+	int xlated_phys = xen_feature(XENFEAT_auto_translated_physmap);
 	unsigned long pfn;
 	int ret;
 
@@ -116,7 +121,7 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 				continue;
 			frame = mfn;
 		} else {
-			if (mfn != INVALID_P2M_ENTRY)
+			if (!xlated_phys && mfn != INVALID_P2M_ENTRY)
 				continue;
 			frame = pfn;
 		}
@@ -154,6 +159,13 @@ static unsigned long __init xen_do_chunk(unsigned long start,
 static unsigned long __init xen_release_chunk(unsigned long start,
 					      unsigned long end)
 {
+	/*
+	 * Xen already ballooned out the E820 non-RAM regions for us
+	 * and set them up properly in EPT.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return end - start;
+
 	return xen_do_chunk(start, end, true);
 }
 
@@ -222,7 +234,13 @@ static void __init xen_set_identity_and_release_chunk(
 	 * (except for the ISA region which must be 1:1 mapped) to
 	 * release the refcounts (in Xen) on the original frames.
 	 */
-	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
+
+	/*
+	 * PVH E820 matches the hypervisor's P2M which means we need to
+	 * account for the proper values of *release and *identity.
+	 */
+	for (pfn = start_pfn; !xen_feature(XENFEAT_auto_translated_physmap) &&
+	     pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
 		pte_t pte = __pte_ma(0);
 
 		if (pfn < PFN_UP(ISA_END_ADDRESS))
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNk-0003hY-0B; Fri, 03 Jan 2014 20:30:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNZ-0003az-95
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.139.211:50325] by server-3.bemta-5.messagelabs.com id
	EC/FF-04773-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388780997!7728351!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5816 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsm7022504
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSswZ019475
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsZt019915; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 287971C016D; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:27 -0500
Message-Id: <1388777916-1328-11-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 10/19] xen/pvh: Load GDT/GS in early PV
	bootup code for BSP.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

During early bootup we start life using the Xen provided
GDT, which means that we are running with %cs segment set
to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).

But for PVH we want to use the HVM-type mechanism for
segment operations. As such we need to switch to the HVM
GDT and also reload ourselves with __KERNEL_CS:eip
to run with the proper GDT and segment.

For HVM this is usually done in 'secondary_startup_64'
(head_64.S), but since we are not taking that bootup
path (we start in PV - xen_start_kernel) we need to do
it in the early PV bootup paths.

For good measure we also zero out the %fs, %ds, and %es
(not strictly needed as Xen has already cleared them
for us). The %gs is loaded by 'switch_to_new_gdt'.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/enlighten.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 23ead29..1170d00 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1410,8 +1410,43 @@ static void __init xen_boot_params_init_edd(void)
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
  */
-static void __init xen_setup_stackprotector(void)
+static void __init xen_setup_gdt(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+#ifdef CONFIG_X86_64
+		unsigned long dummy;
+
+		switch_to_new_gdt(0); /* GDT and GS set */
+
+		/* We are switching from the Xen-provided GDT to our HVM-mode
+		 * GDT. The new GDT has __KERNEL_CS with CS.L = 1
+		 * and we are jumping to reload it.
+		 */
+		asm volatile ("pushq %0\n"
+			      "leaq 1f(%%rip),%0\n"
+			      "pushq %0\n"
+			      "lretq\n"
+			      "1:\n"
+			      : "=&r" (dummy) : "0" (__KERNEL_CS));
+
+		/*
+		 * While not needed, we also set the %es, %ds, and %fs
+		 * to zero. We don't care about %ss as it is NULL.
+		 * Strictly speaking this is not needed as Xen zeros those
+		 * out (and also MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE)
+		 *
+		 * Linux zeros them in cpu_init() and in secondary_startup_64
+		 * (for BSP).
+		 */
+		loadsegment(es, 0);
+		loadsegment(ds, 0);
+		loadsegment(fs, 0);
+#else
+		/* PVH: TODO Implement. */
+		BUG();
+#endif
+		return; /* PVH does not need any PV GDT ops. */
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1494,7 +1529,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_stackprotector();
+	xen_setup_gdt();
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNh-0003fV-Ge; Fri, 03 Jan 2014 20:30:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bD-U6
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.137.68:50121] by server-15.bemta-3.messagelabs.com id
	B1/CA-11556-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388780997!7130652!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15689 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSs8R007027
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrTZ019467
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSraI019460; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 06B651BFB49; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:23 -0500
Message-Id: <1388777916-1328-7-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 06/19] xen/mmu: Cleanup
	xen_pagetable_p2m_copy a bit.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano noticed that the code runs only under 64-bit, so
the comments about 32-bit are pointless.

Also, rework the handling of the case in which
xen_revector_p2m_tree returns the same value - either because
it could not allocate a swath of space to put the new P2M in,
or because it had already been called once. In that case we
now return early from the function.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index c140eff..9d74249 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1209,29 +1209,29 @@ static void __init xen_pagetable_p2m_copy(void)
 
 	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
 
-	/* On 32-bit, we get zero so this never gets executed. */
 	new_mfn_list = xen_revector_p2m_tree();
-	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
-		/* using __ka address and sticking INVALID_P2M_ENTRY! */
-		memset((void *)xen_start_info->mfn_list, 0xff, size);
-
-		/* We should be in __ka space. */
-		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
-		addr = xen_start_info->mfn_list;
-		/* We roundup to the PMD, which means that if anybody at this stage is
-		 * using the __ka address of xen_start_info or xen_start_info->shared_info
-		 * they are in going to crash. Fortunatly we have already revectored
-		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
-		size = roundup(size, PMD_SIZE);
-		xen_cleanhighmap(addr, addr + size);
-
-		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
-		memblock_free(__pa(xen_start_info->mfn_list), size);
-		/* And revector! Bye bye old array */
-		xen_start_info->mfn_list = new_mfn_list;
-	} else
+	/* No memory or already called. */
+	if (!new_mfn_list || new_mfn_list == xen_start_info->mfn_list)
 		return;
 
+	/* using __ka address and sticking INVALID_P2M_ENTRY! */
+	memset((void *)xen_start_info->mfn_list, 0xff, size);
+
+	/* We should be in __ka space. */
+	BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+	addr = xen_start_info->mfn_list;
+	/* We roundup to the PMD, which means that if anybody at this stage is
+	 * using the __ka address of xen_start_info or xen_start_info->shared_info
+	 * they are in going to crash. Fortunatly we have already revectored
+	 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+	size = roundup(size, PMD_SIZE);
+	xen_cleanhighmap(addr, addr + size);
+
+	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+	memblock_free(__pa(xen_start_info->mfn_list), size);
+	/* And revector! Bye bye old array */
+	xen_start_info->mfn_list = new_mfn_list;
+
 	/* At this stage, cleanup_highmap has already cleaned __ka space
 	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
 	 * the ramdisk). We continue on, erasing PMD entries that point to page
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNi-0003fv-Ir; Fri, 03 Jan 2014 20:30:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNZ-0003bM-1Y
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.137.68:51743] by server-1.bemta-3.messagelabs.com id
	53/EA-29598-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1388780998!7098719!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12595 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStmG022509
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsqc025236
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsi2025222; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1FF861BFB4C; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:26 -0500
Message-Id: <1388777916-1328-10-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 09/19] xen/pvh: Setup up shared_info.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

For PVH the shared_info structure is provided in the same way
as for normal PV guests (see include/xen/interface/xen.h).

That is, during bootup we get 'xen_start_info' via the %esi register
in startup_xen. Later we extract the 'shared_info' from said
structure (in xen_setup_shared_info) and start using it.

'xen_setup_shared_info' is already set up to work with auto-xlat
guests, but it calls two functions which are not:
xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
This patch modifies the P2M code (xen_setup_mfn_list_list),
while the "Piggyback on PVHVM for event channels" patch modifies
xen_setup_vcpu_info_placement.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/p2m.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index fb7ee0a..696c694 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -339,6 +339,9 @@ void __ref xen_build_mfn_list_list(void)
 
 void xen_setup_mfn_list_list(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	BUG_ON(HYPERVISOR_shared_info == &xen_dummy_shared_info);
 
 	HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNd-0003dz-E2; Fri, 03 Jan 2014 20:30:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bA-J4
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:00 +0000
Received: from [85.158.143.35:17854] by server-3.bemta-4.messagelabs.com id
	C8/4B-32360-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388780997!9454691!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30137 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsa8007032
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrLU025195
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrc9019886; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E93B71BFB12; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:21 -0500
Message-Id: <1388777916-1328-5-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 04/19] xen/pvh: Don't setup P2M tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

P2M is not available for PVH. Fortunately for us the
P2M code already has most of the support for auto-xlat guests thanks to
commit 3d24bbd7dddbea54358a9795abaf051b0f18973c
"grant-table: call set_phys_to_machine after mapping grant refs",
which "introduces set_phys_to_machine calls for auto_translated guests
(even on x86) in gnttab_map_refs and gnttab_unmap_refs.
translated by swiotlb-xen...", so we don't need to muck much.

With the above mentioned commit "you'll get set_phys_to_machine calls
from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
anything with them" (Stefano Stabellini), which is OK - we want
them to be NOPs.

This is because we assume that an "IOMMU is always present on the
platform and Xen is going to make the appropriate IOMMU pagetable
changes in the hypercall implementation of GNTTABOP_map_grant_ref
and GNTTABOP_unmap_grant_ref, then everything should be transparent
from the PVH privileged point of view and DMA transfers involving
foreign pages keep working with no issues.

Otherwise we would need a P2M (and an M2P) for the PVH privileged
domain to track these foreign pages .. (see arch/arm/xen/p2m.c)"
(Stefano Stabellini).

We still have to inhibit the building of the P2M tree.
That had been done in the past by not calling
xen_build_dynamic_phys_to_machine (which sets up the P2M tree
and gives us the virtual addresses to access it). But we are missing
a check in xen_build_mfn_list_list - which was continuing to set up
the P2M tree and would blow up when trying to get the virtual
address of p2m_missing (which would have been set up by
xen_build_dynamic_phys_to_machine).

Hence a check is needed to not call xen_build_mfn_list_list when
running in auto-xlat mode.

Instead of replicating the check for auto-xlat in enlighten.c,
do it in the p2m.c code. The reason is that xen_build_mfn_list_list
is also called in xen_arch_post_suspend without any check for
auto-xlat. So for PVH, or PV with auto-xlat, we would needlessly
allocate space for a P2M tree.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/enlighten.c |  3 +--
 arch/x86/xen/p2m.c       | 12 ++++++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index eb0efc2..23ead29 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1488,8 +1488,7 @@ asmlinkage void __init xen_start_kernel(void)
 	x86_configure_nx();
 
 	/* Get mfn list */
-	if (!xen_feature(XENFEAT_auto_translated_physmap))
-		xen_build_dynamic_phys_to_machine();
+	xen_build_dynamic_phys_to_machine();
 
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..fb7ee0a 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -280,6 +280,9 @@ void __ref xen_build_mfn_list_list(void)
 {
 	unsigned long pfn;
 
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
@@ -346,10 +349,15 @@ void xen_setup_mfn_list_list(void)
 /* Set up p2m_top to point to the domain-builder provided p2m pages */
 void __init xen_build_dynamic_phys_to_machine(void)
 {
-	unsigned long *mfn_list = (unsigned long *)xen_start_info->mfn_list;
-	unsigned long max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
+	unsigned long *mfn_list;
+	unsigned long max_pfn;
 	unsigned long pfn;
 
+	 if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
+	mfn_list = (unsigned long *)xen_start_info->mfn_list;
+	max_pfn = min(MAX_DOMAIN_PAGES, xen_start_info->nr_pages);
 	xen_max_p2m_pfn = max_pfn;
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNj-0003gZ-45; Fri, 03 Jan 2014 20:30:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bE-V0
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.139.211:50349] by server-2.bemta-5.messagelabs.com id
	CC/F4-29392-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1388780997!7758095!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24364 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSr67007013
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqcf019392
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:52 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqsv019389; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C90E91BFB02; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:17 -0500
Message-Id: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: hpa@zytor.com
Subject: [Xen-devel] [PATCH v13] Linux Xen PVH support (v13)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The patches, also available at

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v13

implement the necessary functionality to boot a PV guest in PVH mode.

This blog has a great description of what PVH is:
http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

These patches are based on v3.13-rc6. If I have failed to
address your review I am terribly sorry - it was an oversight.
Please poke at the patches again.

Changes since v13: [http://mid.gmane.org/1388550945-25499-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per David and Stefano review.
 - Fix regression with Xen 4.1.
 - Use native_cpuid instead of xen_cpuid.

v12: [http://mid.gmane.org/1387313503-31362-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per Stefano's review.
 - Split some patches up for easier review.
 - Bugs fixed.

v11 as compared to v10: [https://lkml.org/lkml/2013/12/12/625]:
 - Split patches in a more logical sense, squash some
 - Dropped Acked-by's from folks
 - Fleshed out descriptions


Regression-wise there are no bugs with Xen 4.[1,2,3,4].

That is, if you compile/boot it with
CONFIG_XEN_PVH=y or "# CONFIG_XEN_PVH is not set" - in both cases, as
either dom0 or domU, there are no bugs. Also launched it as a 32/64-bit
dom0 with a 32/64-bit domU as PV or PVHVM, along with SLES11, SLES12,
F15->F19 (32/64), OL5, OL6, RHEL5 (32/64), FreeBSD HVM and NetBSD PV without issues.

With Xen 4.1 there was a regression (see
http://mid.gmane.org/20131220175735.GA619@phenom.dumpdata.com)
and this patchset has the fix for it.


-------------------------
PARAVIRT OPS / x86_init /apic /smp ops
------------------------

The paravirt ops that are in use are:

	pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;

These are still used:

        pv_info = xen_info;
        pv_init_ops = xen_init_ops;
        pv_apic_ops = xen_apic_ops;
        pv_time_ops = xen_time_ops;

And the x86_init,apic, and smp_ops ops are still in force.

This is just the first step, so there might be some others
that are needed which I failed to enumerate.

pv_cpu_ops is not used. From pv_mmu_ops only one op is used.

-----------------------------
HOW TO USE IT
-----------------------------

The only things needed to make this work as PVH are:

 0) Get the latest version of Xen and compile/install it.
    See http://wiki.xen.org/wiki/Compiling_Xen_From_Source for details

 1) Clone the above-mentioned tree

    See http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs#Configuring_the_Kernel
    for details. The steps are:

	cd $HOME
	git clone  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git linux
	cd linux
	git checkout origin/stable/pvh.v11

 2) Compile with CONFIG_XEN_PVH=y

    a) From scratch:

	make defconfig
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Paravirtualization layer for spinlocks
		 Xen guest support	(which will now show you:)
		 Support for running as a PVH guest (NEW)

	in case you like to edit .config, it is:

	CONFIG_HYPERVISOR_GUEST=y
	CONFIG_PARAVIRT=y
	CONFIG_PARAVIRT_GUEST=y
	CONFIG_PARAVIRT_SPINLOCKS=y
	CONFIG_XEN=y
	CONFIG_XEN_PVH=y

	You will also have to enable the block, network drivers, console, etc
	which are in different submenus.

    b). Based on your current distro.

	cp /boot/config-`uname -r` $HOME/linux/.config
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Support for running as a PVH guest (NEW)
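
Either way, you can sanity-check the resulting .config before building. A small sketch (the function name and the exact option list to check are my own choice, not part of the posting):

```shell
# Check that a kernel .config has the PVH-relevant options from step 2
# enabled. Pass the path to your .config; prints "ok" or the first
# missing option.
check_pvh_config() {
    cfg=$1
    for opt in CONFIG_HYPERVISOR_GUEST CONFIG_PARAVIRT \
               CONFIG_PARAVIRT_GUEST CONFIG_PARAVIRT_SPINLOCKS \
               CONFIG_XEN CONFIG_XEN_PVH; do
        grep -q "^${opt}=y" "$cfg" || { echo "missing: $opt"; return 1; }
    done
    echo "ok"
}
```

Usage: `check_pvh_config $HOME/linux/.config` after running `make menuconfig`.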

 3) Launch it with 'pvh=1' in your guest config (for example):

	extra="console=hvc0 debug  kgdboc=hvc0 nokgdbroundup  initcall_debug debug"
	kernel="/mnt/lab/latest/vmlinuz"
	ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
	memory=1024
	vcpus=4
	name="pvh"
	vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
	vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
	disk=['phy:/dev/sdb1,xvda,w']
	pvh=1
	on_reboot="preserve"
	on_crash="preserve"
	on_poweroff="preserve"

    using 'xl'. Xend 'xm' does not have PVH support.

It will boot up as a normal PV guest, but 'xen-detect' will report it as an HVM
guest.

Items that have not been tested extensively or at all:
  - Migration (xl save && xl restore for example).

  - 32-bit guests (won't even present you with a CONFIG_XEN_PVH option)

  - PCI passthrough

  - Running it in dom0 mode (as the patches for that are not yet in Xen upstream).
    If you want to try that, you can merge/pull Mukesh's branch:

	cd $HOME/xen
	git pull git://oss.oracle.com/git/mrathor/xen.git dom0pvh-v6

    .. and use this bootup parameter ("dom0pvh=1"). Remember to recompile
    and install the new version of Xen. This patchset
    does not contain the patches necessary to set up guests - but I can
    create one easily enough.

  - Memory ballooning
  - Multiple VBDs, NICs, etc.

Things that are broken:
 - CPUID filtering. There is no filtering done at all, which means that
   certain cpuid flags are exposed to the guest. The x2apic flag will cause
   a crash if the NMI handler is invoked. The APERF flag will cause inferior
   scheduling decisions.
 
If you encounter errors, please email with the following (please note the
guest config above has on_reboot="preserve" and on_crash="preserve" - which you
should have in your guest config to preserve the memory of the guest):

 a) xl dmesg
 b) xl list
 c) xenctx -s $HOME/linux/System.map -f -a -C <domain id>
    [xenctx is sometimes found in  /usr/lib/xen/bin/xenctx ]
 d) the console output from the guest
 e) Anything else you can think of.

Stash away your vmlinux file (it is too big to send via email) - as I might
need it later on.


That is it!

Thank you!

 arch/arm/include/asm/xen/page.h    |   1 +
 arch/arm/xen/enlighten.c           |   9 +-
 arch/x86/include/asm/xen/page.h    |   8 +-
 arch/x86/xen/Kconfig               |   5 ++
 arch/x86/xen/enlighten.c           | 100 +++++++++++++++++-----
 arch/x86/xen/grant-table.c         |  62 ++++++++++++++
 arch/x86/xen/irq.c                 |   5 +-
 arch/x86/xen/mmu.c                 | 166 +++++++++++++++++++++----------------
 arch/x86/xen/p2m.c                 |  15 +++-
 arch/x86/xen/setup.c               |  40 +++++++--
 arch/x86/xen/smp.c                 |  49 +++++++----
 arch/x86/xen/xen-head.S            |  25 +++++-
 arch/x86/xen/xen-ops.h             |   1 +
 drivers/xen/events.c               |  14 ++--
 drivers/xen/gntdev.c               |   2 +-
 drivers/xen/grant-table.c          |  87 ++++++++++++++-----
 drivers/xen/platform-pci.c         |  10 ++-
 drivers/xen/xenbus/xenbus_client.c |   3 +-
 include/xen/grant_table.h          |   9 +-
 include/xen/interface/elfnote.h    |  13 +++
 include/xen/xen.h                  |  14 ++++
 21 files changed, 483 insertions(+), 155 deletions(-)

Konrad Rzeszutek Wilk (7):
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code (v2).
      xen/mmu: Cleanup xen_pagetable_p2m_copy a bit.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct (v2).
      xen/pvh: Piggyback on PVHVM for grant driver (v4)

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v3).
      xen/pvh: Early bootup changes in PV code (v4).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh/mmu: Use PV TLB instead of native.
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh: Support ParaVirtualized Hardware extensions (v3).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNk-0003hY-0B; Fri, 03 Jan 2014 20:30:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNZ-0003az-95
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.139.211:50325] by server-3.bemta-5.messagelabs.com id
	EC/FF-04773-7CD17C25; Fri, 03 Jan 2014 20:29:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388780997!7728351!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5816 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsm7022504
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSswZ019475
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsZt019915; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 287971C016D; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:27 -0500
Message-Id: <1388777916-1328-11-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 10/19] xen/pvh: Load GDT/GS in early PV
	bootup code for BSP.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

During early bootup we start life using the Xen-provided
GDT, which means that we are running with the %cs segment set
to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).

But for PVH we want to use the HVM-type mechanism for
segment operations. As such, we need to switch to the HVM
GDT and also reload __KERNEL_CS:eip so that we run with
the proper GDT and segment.

For HVM this is usually done in 'secondary_startup_64'
(in head_64.S), but since we are not taking that bootup
path (we start in PV via xen_start_kernel) we need to do
it in the early PV bootup paths.

For good measure we also zero out the %fs, %ds, and %es
(not strictly needed as Xen has already cleared them
for us). The %gs is loaded by 'switch_to_new_gdt'.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/enlighten.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 23ead29..1170d00 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1410,8 +1410,43 @@ static void __init xen_boot_params_init_edd(void)
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
  */
-static void __init xen_setup_stackprotector(void)
+static void __init xen_setup_gdt(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+#ifdef CONFIG_X86_64
+		unsigned long dummy;
+
+		switch_to_new_gdt(0); /* GDT and GS set */
+
+		/* We are switching from the Xen-provided GDT to our HVM-mode
+		 * GDT. The new GDT has __KERNEL_CS with CS.L = 1
+		 * and we are jumping to reload it.
+		 */
+		asm volatile ("pushq %0\n"
+			      "leaq 1f(%%rip),%0\n"
+			      "pushq %0\n"
+			      "lretq\n"
+			      "1:\n"
+			      : "=&r" (dummy) : "0" (__KERNEL_CS));
+
+		/*
+		 * While not needed, we also set the %es, %ds, and %fs
+		 * to zero. We don't care about %ss as it is NULL.
+		 * Strictly speaking this is not needed as Xen zeros those
+		 * out (and also MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE)
+		 *
+		 * Linux zeros them in cpu_init() and in secondary_startup_64
+		 * (for BSP).
+		 */
+		loadsegment(es, 0);
+		loadsegment(ds, 0);
+		loadsegment(fs, 0);
+#else
+		/* PVH: TODO Implement. */
+		BUG();
+#endif
+		return; /* PVH does not need any PV GDT ops. */
+	}
 	pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
 	pv_cpu_ops.load_gdt = xen_load_gdt_boot;
 
@@ -1494,7 +1529,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_stackprotector();
+	xen_setup_gdt();
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNj-0003gZ-45; Fri, 03 Jan 2014 20:30:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNY-0003bE-V0
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.139.211:50349] by server-2.bemta-5.messagelabs.com id
	CC/F4-29392-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1388780997!7758095!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24364 invoked from network); 3 Jan 2014 20:29:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSr67007013
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqcf019392
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:52 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSqsv019389; Fri, 3 Jan 2014 20:28:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C90E91BFB02; Fri,  3 Jan 2014 15:28:50 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:17 -0500
Message-Id: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: hpa@zytor.com
Subject: [Xen-devel] [PATCH v13] Linux Xen PVH support (v13)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The patches, also available at

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v13

implement the necessary functionality to boot a PV guest in PVH mode.

This blog has a great description of what PVH is:
http://blog.xen.org/index.php/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

These patches are based on v3.13-rc6. If I have failed to
address your review comments, I am terribly sorry - it was an oversight.
Please poke at the patches again.

Changes since the previous posting: [http://mid.gmane.org/1388550945-25499-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per David and Stefano review.
 - Fix regression with Xen 4.1.
 - Use native_cpuid instead of xen_cpuid.

v12: [http://mid.gmane.org/1387313503-31362-1-git-send-email-konrad.wilk@oracle.com]
 - Rework per Stefano's review.
 - Split some patches up for easier review.
 - Bugs fixed.

v11 as compared to v10: [https://lkml.org/lkml/2013/12/12/625]:
 - Split patches in a more logical sense, squash some
 - Dropped Acked-by's from folks
 - Fleshed out descriptions


Regression-wise, there are no bugs with Xen 4.1, 4.2, 4.3, or 4.4.

That is, whether you compile/boot it with
CONFIG_XEN_PVH=y or "# CONFIG_XEN_PVH is not set" - in both cases, as
either dom0 or domU, there are no bugs. I also launched it as a 32/64-bit
dom0 with 32/64-bit domUs as PV or PVHVM, along with SLES11, SLES12,
F15->F19 (32/64), OL5, OL6, RHEL5 (32/64), FreeBSD HVM, and NetBSD PV, without issues.

With Xen 4.1 there was a regression (see
http://mid.gmane.org/20131220175735.GA619@phenom.dumpdata.com)
and this patchset has the fix for it.


-------------------------
PARAVIRT OPS / x86_init /apic /smp ops
------------------------

The paravirt ops that are in use are:

	pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;

These are still used:

        pv_info = xen_info;
        pv_init_ops = xen_init_ops;
        pv_apic_ops = xen_apic_ops;
        pv_time_ops = xen_time_ops;

And the x86_init, apic, and smp_ops ops are still in force.

This is just the first step, so there might be some other ones
that are needed that I failed to enumerate.

pv_cpu_ops is not used. Of pv_mmu_ops, only flush_tlb_others is used.

-----------------------------
HOW TO USE IT
-----------------------------

The only things needed to make this work as PVH are:

 0) Get the latest version of Xen and compile/install it.
    See http://wiki.xen.org/wiki/Compiling_Xen_From_Source for details

 1) Clone the above-mentioned tree.

    See http://wiki.xenproject.org/wiki/Mainline_Linux_Kernel_Configs#Configuring_the_Kernel
    for details. The steps are:

	cd $HOME
	git clone  git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git linux
	cd linux
	git checkout origin/stable/pvh.v11

 2) Compile with CONFIG_XEN_PVH=y

    a) From scratch:

	make defconfig
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Paravirtualization layer for spinlocks
		 Xen guest support	(which will now show you:)
		 Support for running as a PVH guest (NEW)

	In case you prefer to edit .config directly, the options are:

	CONFIG_HYPERVISOR_GUEST=y
	CONFIG_PARAVIRT=y
	CONFIG_PARAVIRT_GUEST=y
	CONFIG_PARAVIRT_SPINLOCKS=y
	CONFIG_XEN=y
	CONFIG_XEN_PVH=y

	You will also have to enable the block, network drivers, console, etc
	which are in different submenus.

    b) Based on your current distro's config:

	cp /boot/config-`uname -r` $HOME/linux/.config
	make menuconfig
	Processor type and features  --->  Linux guest support  --->
		 Support for running as a PVH guest (NEW)

 3) Launch it with 'pvh=1' in your guest config (for example):

	extra="console=hvc0 debug  kgdboc=hvc0 nokgdbroundup  initcall_debug debug"
	kernel="/mnt/lab/latest/vmlinuz"
	ramdisk="/mnt/lab/latest/initramfs.cpio.gz"
	memory=1024
	vcpus=4
	name="pvh"
	vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
	vfb = [ 'vnc=1, vnclisten=0.0.0.0,vncunused=1']
	disk=['phy:/dev/sdb1,xvda,w']
	pvh=1
	on_reboot="preserve"
	on_crash="preserve"
	on_poweroff="preserve"

    using 'xl'. The xend 'xm' toolstack does not have PVH support.

It will boot up as a normal PV guest, but 'xen-detect' will report it as an HVM
guest.

Items that have not been tested extensively or at all:
  - Migration (xl save && xl restore for example).

  - 32-bit guests (won't even present you with a CONFIG_XEN_PVH option)

  - PCI passthrough

  - Running it in dom0 mode (as the patches for that are not yet in Xen upstream).
    If you want to try that, you can merge/pull Mukesh's branch:

	cd $HOME/xen
	git pull git://oss.oracle.com/git/mrathor/xen.git dom0pvh-v6

    .. and use the "dom0pvh=1" bootup parameter. Remember to recompile
    and install the new version of Xen. This patchset
    does not contain the patches necessary to set up guests, but I can
    create one easily enough.

  - Memory ballooning
  - Multiple VBDs, NICs, etc.

Things that are broken:
 - CPUID filtering. No filtering is done at all, which means that
   certain cpuid flags are exposed to the guest. The x2apic flag will cause
   a crash if the NMI handler is invoked. The APERF flag will cause inferior
   scheduling decisions.
 
If you encounter errors, please email with the following (please note that the
example guest config has 'on_reboot="preserve"' and 'on_crash="preserve"',
which you should have in your guest config to preserve the memory of the guest):

 a) xl dmesg
 b) xl list
 c) xenctx -s $HOME/linux/System.map -f -a -C <domain id>
    [xenctx is sometimes found in  /usr/lib/xen/bin/xenctx ]
 d) the console output from the guest
 e) Anything else you can think of.

Stash away your vmlinux file (it is too big to send via email) - as I might
need it later on.


That is it!

Thank you!

 arch/arm/include/asm/xen/page.h    |   1 +
 arch/arm/xen/enlighten.c           |   9 +-
 arch/x86/include/asm/xen/page.h    |   8 +-
 arch/x86/xen/Kconfig               |   5 ++
 arch/x86/xen/enlighten.c           | 100 +++++++++++++++++-----
 arch/x86/xen/grant-table.c         |  62 ++++++++++++++
 arch/x86/xen/irq.c                 |   5 +-
 arch/x86/xen/mmu.c                 | 166 +++++++++++++++++++++----------------
 arch/x86/xen/p2m.c                 |  15 +++-
 arch/x86/xen/setup.c               |  40 +++++++--
 arch/x86/xen/smp.c                 |  49 +++++++----
 arch/x86/xen/xen-head.S            |  25 +++++-
 arch/x86/xen/xen-ops.h             |   1 +
 drivers/xen/events.c               |  14 ++--
 drivers/xen/gntdev.c               |   2 +-
 drivers/xen/grant-table.c          |  87 ++++++++++++++-----
 drivers/xen/platform-pci.c         |  10 ++-
 drivers/xen/xenbus/xenbus_client.c |   3 +-
 include/xen/grant_table.h          |   9 +-
 include/xen/interface/elfnote.h    |  13 +++
 include/xen/xen.h                  |  14 ++++
 21 files changed, 483 insertions(+), 155 deletions(-)

Konrad Rzeszutek Wilk (7):
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code (v2).
      xen/mmu: Cleanup xen_pagetable_p2m_copy a bit.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct (v2).
      xen/pvh: Piggyback on PVHVM for grant driver (v4)

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v3).
      xen/pvh: Early bootup changes in PV code (v4).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh/mmu: Use PV TLB instead of native.
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh: Support ParaVirtualized Hardware extensions (v3).


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNY-0003bf-UF; Fri, 03 Jan 2014 20:30:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNX-0003ay-JT
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:29:59 +0000
Received: from [85.158.143.35:17834] by server-3.bemta-4.messagelabs.com id
	08/4B-32360-6CD17C25; Fri, 03 Jan 2014 20:29:58 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388780996!9454690!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30111 invoked from network); 3 Jan 2014 20:29:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:57 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSsLi022501
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSswJ019919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsje019912; Fri, 3 Jan 2014 20:28:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0EDF41BFB4A; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:24 -0500
Message-Id: <1388777916-1328-8-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 07/19] xen/pvh: MMU changes for PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

.. which are surprisingly small compared to the amount for the PV code.

PVH mostly uses native MMU ops; we leave the generic (native_*) ones
in place for the majority and overwrite only the baremetal ones we need.

At startup, we are running with pre-allocated page tables,
courtesy of the toolstack. But we still need to graft them
into the Linux initial pagetables. However, there is no need to
unpin/pin them or change them to R/O or R/W.

Note that xen_pagetable_init, due to commit 7836fec9d0994cc9c9150c5a33f0eb0eb08a335a
("xen/mmu/p2m: Refactor the xen_pagetable_init code."), does not
need any changes - we just need to make sure that xen_post_allocator_init
does not alter the pvops from the default native ones.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/mmu.c | 81 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 46 insertions(+), 35 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 9d74249..490ddb3 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1757,6 +1757,10 @@ static void set_page_prot_flags(void *addr, pgprot_t prot, unsigned long flags)
 	unsigned long pfn = __pa(addr) >> PAGE_SHIFT;
 	pte_t pte = pfn_pte(pfn, prot);
 
+	/* For PVH no need to set R/O or R/W to pin them or unpin them. */
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)addr, pte, flags))
 		BUG();
 }
@@ -1867,6 +1871,7 @@ static void __init check_pt_base(unsigned long *pt_base, unsigned long *pt_end,
  * but that's enough to get __va working.  We need to fill in the rest
  * of the physical mapping once some sort of allocator has been set
  * up.
+ * NOTE: for PVH, the page tables are native.
  */
 void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 {
@@ -1888,17 +1893,18 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Zap identity mapping */
 	init_level4_pgt[0] = __pgd(0);
 
-	/* Pre-constructed entries are in pfn, so convert to mfn */
-	/* L4[272] -> level3_ident_pgt
-	 * L4[511] -> level3_kernel_pgt */
-	convert_pfn_mfn(init_level4_pgt);
-
-	/* L3_i[0] -> level2_ident_pgt */
-	convert_pfn_mfn(level3_ident_pgt);
-	/* L3_k[510] -> level2_kernel_pgt
-	 * L3_i[511] -> level2_fixmap_pgt */
-	convert_pfn_mfn(level3_kernel_pgt);
-
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Pre-constructed entries are in pfn, so convert to mfn */
+		/* L4[272] -> level3_ident_pgt
+		 * L4[511] -> level3_kernel_pgt */
+		convert_pfn_mfn(init_level4_pgt);
+
+		/* L3_i[0] -> level2_ident_pgt */
+		convert_pfn_mfn(level3_ident_pgt);
+		/* L3_k[510] -> level2_kernel_pgt
+		 * L3_i[511] -> level2_fixmap_pgt */
+		convert_pfn_mfn(level3_kernel_pgt);
+	}
 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
 	l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd);
 	l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud);
@@ -1922,31 +1928,33 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	copy_page(level2_fixmap_pgt, l2);
 	/* Note that we don't do anything with level1_fixmap_pgt which
 	 * we don't need. */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		/* Make pagetable pieces RO */
+		set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
+		set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+		set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+
+		/* Pin down new L4 */
+		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+				  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+
+		/* Unpin Xen-provided one */
+		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-	/* Make pagetable pieces RO */
-	set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
-	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
-	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
-
-	/* Pin down new L4 */
-	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-			  PFN_DOWN(__pa_symbol(init_level4_pgt)));
-
-	/* Unpin Xen-provided one */
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
-
-	/*
-	 * At this stage there can be no user pgd, and no page
-	 * structure to attach it to, so make sure we just set kernel
-	 * pgd.
-	 */
-	xen_mc_batch();
-	__xen_write_cr3(true, __pa(init_level4_pgt));
-	xen_mc_issue(PARAVIRT_LAZY_CPU);
+		/*
+		 * At this stage there can be no user pgd, and no page
+		 * structure to attach it to, so make sure we just set kernel
+		 * pgd.
+		 */
+		xen_mc_batch();
+		__xen_write_cr3(true, __pa(init_level4_pgt));
+		xen_mc_issue(PARAVIRT_LAZY_CPU);
+	} else
+		native_write_cr3(__pa(init_level4_pgt));
 
 	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
 	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
@@ -2107,6 +2115,9 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 static void __init xen_post_allocator_init(void)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return;
+
 	pv_mmu_ops.set_pte = xen_set_pte;
 	pv_mmu_ops.set_pmd = xen_set_pmd;
 	pv_mmu_ops.set_pud = xen_set_pud;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBNn-0003lY-8N; Fri, 03 Jan 2014 20:30:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNZ-0003bZ-An
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [85.158.139.211:21604] by server-10.bemta-5.messagelabs.com id
	0C/4B-01405-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388780997!7728353!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5924 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KStxF022507
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:55 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSsrA025219
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:54 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSrmS019470; Fri, 3 Jan 2014 20:28:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 178621BFB4B; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:25 -0500
Message-Id: <1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead of
	native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

We also optimize one operation - the TLB flush. The native
operation would needlessly IPI offline VCPUs, causing extra
wakeups. Using the Xen one avoids that and lets the hypervisor
determine which VCPUs actually need the TLB flush.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 490ddb3..c1d406f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;
+
+	/* Optimization - we can use the HVM one but it has no idea which
+	 * VCPUs are descheduled - which means that it will needlessly IPI
+	 * them. Xen knows so let it do the job.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+		return;
+	}
 	pv_mmu_ops = xen_mmu_ops;
 
 	memset(dummy_mapping, 0xff, PAGE_SIZE);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBO1-0003oZ-VR; Fri, 03 Jan 2014 20:30:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNZ-0003bY-D0
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:01 +0000
Received: from [193.109.254.147:21156] by server-7.bemta-14.messagelabs.com id
	F2/4F-15500-8CD17C25; Fri, 03 Jan 2014 20:30:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1388780998!5226849!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10688 invoked from network); 3 Jan 2014 20:29:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:29:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSuST022534
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KStjU019973
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KStiG025272; Fri, 3 Jan 2014 20:28:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 414441C0175; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:30 -0500
Message-Id: <1388777916-1328-14-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 13/19] xen/pvh: Piggyback on PVHVM for event
	channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH is a PV guest with a twist - certain things work in it
like HVM and others like PV. There is a similar mode - PVHVM,
where we run in HVM mode with PV code enabled - and this patch
builds on that.

The most notable PV interfaces are the XenBus and event channels.

We will piggyback on how the event channel mechanism is
used in PVHVM - that is we want the normal native IRQ mechanism
and we will install a vector (hvm callback) for which we
will call the event channel mechanism.

This means that from a pvops perspective, we can use
native_irq_ops instead of the Xen PV-specific ones. In the
future we could also support pirq_eoi_map, but that is a
feature request that can be shared with PVHVM.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/enlighten.c |  5 +++--
 arch/x86/xen/irq.c       |  5 ++++-
 drivers/xen/events.c     | 14 +++++++++-----
 3 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fde62c4..628099a 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1144,8 +1144,9 @@ void xen_setup_vcpu_info_placement(void)
 		xen_vcpu_setup(cpu);
 
 	/* xen_vcpu_setup managed to place the vcpu_info within the
-	   percpu area for all cpus, so make use of it */
-	if (have_vcpu_info_placement) {
+	 * percpu area for all cpus, so make use of it. Note that for
+	 * PVH we want to use native IRQ mechanism. */
+	if (have_vcpu_info_placement && !xen_pvh_domain()) {
 		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
 		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
 		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 0da7f86..76ca326 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -5,6 +5,7 @@
 #include <xen/interface/xen.h>
 #include <xen/interface/sched.h>
 #include <xen/interface/vcpu.h>
+#include <xen/features.h>
 #include <xen/events.h>
 
 #include <asm/xen/hypercall.h>
@@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
 
 void __init xen_init_irq_ops(void)
 {
-	pv_irq_ops = xen_irq_ops;
+	/* For PVH we use default pv_irq_ops settings. */
+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4035e83..783b972 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -1908,8 +1908,15 @@ void __init xen_init_IRQ(void)
 	pirq_needs_eoi = pirq_needs_eoi_flag;
 
 #ifdef CONFIG_X86
-	if (xen_hvm_domain()) {
+	if (xen_pv_domain()) {
+		irq_ctx_init(smp_processor_id());
+		if (xen_initial_domain())
+			pci_xen_initial_domain();
+	}
+	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_callback_vector();
+
+	if (xen_hvm_domain()) {
 		native_init_IRQ();
 		/* pci_xen_hvm_init must be called after native_init_IRQ so that
 		 * __acpi_register_gsi can point at the right function */
@@ -1918,13 +1925,10 @@ void __init xen_init_IRQ(void)
 		int rc;
 		struct physdev_pirq_eoi_gmfn eoi_gmfn;
 
-		irq_ctx_init(smp_processor_id());
-		if (xen_initial_domain())
-			pci_xen_initial_domain();
-
 		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
 		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
+		/* TODO: No PVH support for PIRQ EOI */
 		if (rc != 0) {
 			free_page((unsigned long) pirq_eoi_map);
 			pirq_eoi_map = NULL;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBO5-0003sH-P8; Fri, 03 Jan 2014 20:30:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNb-0003cU-RA
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:04 +0000
Received: from [193.109.254.147:2573] by server-13.bemta-14.messagelabs.com id
	DA/26-19374-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1388781000!8737205!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15047 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSuMx007102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSu5C019522
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KStQo019517; Fri, 3 Jan 2014 20:28:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 30C751C016E; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:28 -0500
Message-Id: <1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 11/19] xen/pvh: Secondary VCPU bringup
	(non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

The VCPU bringup protocol follows PV with certain twists.
From xen/include/public/arch-x86/xen.h:

Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
for HVM and PVH guests, not all information in this structure is updated:

 - For HVM guests, the structures read include: fpu_ctxt (if
 VGCF_I387_VALID is set), flags, user_regs, debugreg[*]

 - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
 set cr3. All other fields not used should be set to 0.

This is what we do. We piggyback on 'xen_setup_gdt' - but modify
it a bit - we need to call 'load_percpu_segment' so that
'switch_to_new_gdt' can load the per-cpu data structures. It has
no effect on VCPU0.

We also piggyback on the %rdi register to pass in the CPU
number - so that when we boot up a new CPU, cpu_bringup_and_idle
will receive the CPU number as its first parameter (via %rdi on
64-bit).

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 11 ++++++++---
 arch/x86/xen/smp.c       | 49 ++++++++++++++++++++++++++++++++----------------
 arch/x86/xen/xen-ops.h   |  1 +
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 1170d00..fde62c4 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1409,14 +1409,19 @@ static void __init xen_boot_params_init_edd(void)
  * Set up the GDT and segment registers for -fstack-protector.  Until
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
+ *
+ * Note, that it is refok - because the only caller of this after init
+ * is PVH which is not going to use xen_load_gdt_boot or other
+ * __init functions.
  */
-static void __init xen_setup_gdt(void)
+void __ref xen_setup_gdt(int cpu)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 #ifdef CONFIG_X86_64
 		unsigned long dummy;
 
-		switch_to_new_gdt(0); /* GDT and GS set */
+		load_percpu_segment(cpu); /* We need to access per-cpu area */
+		switch_to_new_gdt(cpu); /* GDT and GS set */
 
 		/* We are switching of the Xen provided GDT to our HVM mode
 		 * GDT. The new GDT has  __KERNEL_CS with CS.L = 1
@@ -1529,7 +1534,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_gdt();
+	xen_setup_gdt(0);
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index c36b325..5e46190 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -73,9 +73,11 @@ static void cpu_bringup(void)
 	touch_softlockup_watchdog();
 	preempt_disable();
 
-	xen_enable_sysenter();
-	xen_enable_syscall();
-
+	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
+	if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
+		xen_enable_sysenter();
+		xen_enable_syscall();
+	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
 	cpu_data(cpu).x86_max_cores = 1;
@@ -97,8 +99,14 @@ static void cpu_bringup(void)
 	wmb();			/* make sure everything is out */
 }
 
-static void cpu_bringup_and_idle(void)
+/* Note: cpu parameter is only relevant for PVH */
+static void cpu_bringup_and_idle(int cpu)
 {
+#ifdef CONFIG_X86_64
+	if (xen_feature(XENFEAT_auto_translated_physmap) &&
+	    xen_feature(XENFEAT_supervisor_mode_kernel))
+		xen_setup_gdt(cpu);
+#endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
 }
@@ -274,9 +282,10 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	native_smp_prepare_boot_cpu();
 
 	if (xen_pv_domain()) {
-		/* We've switched to the "real" per-cpu gdt, so make sure the
-		   old memory can be recycled */
-		make_lowmem_page_readwrite(xen_initial_gdt);
+		if (!xen_feature(XENFEAT_writable_page_tables))
+			/* We've switched to the "real" per-cpu gdt, so make
+			 * sure the old memory can be recycled. */
+			make_lowmem_page_readwrite(xen_initial_gdt);
 
 #ifdef CONFIG_X86_32
 		/*
@@ -360,22 +369,21 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	gdt = get_cpu_gdt_table(cpu);
 
-	ctxt->flags = VGCF_IN_KERNEL;
-	ctxt->user_regs.ss = __KERNEL_DS;
 #ifdef CONFIG_X86_32
+	/* Note: PVH is not yet supported on x86_32. */
 	ctxt->user_regs.fs = __KERNEL_PERCPU;
 	ctxt->user_regs.gs = __KERNEL_STACK_CANARY;
-#else
-	ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 	ctxt->user_regs.eip = (unsigned long)cpu_bringup_and_idle;
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
-	{
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		ctxt->flags = VGCF_IN_KERNEL;
 		ctxt->user_regs.eflags = 0x1000; /* IOPL_RING1 */
 		ctxt->user_regs.ds = __USER_DS;
 		ctxt->user_regs.es = __USER_DS;
+		ctxt->user_regs.ss = __KERNEL_DS;
 
 		xen_copy_trap_info(ctxt->trap_ctxt);
 
@@ -396,18 +404,27 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 #ifdef CONFIG_X86_32
 		ctxt->event_callback_cs     = __KERNEL_CS;
 		ctxt->failsafe_callback_cs  = __KERNEL_CS;
+#else
+		ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 		ctxt->event_callback_eip    =
 					(unsigned long)xen_hypervisor_callback;
 		ctxt->failsafe_callback_eip =
 					(unsigned long)xen_failsafe_callback;
+		ctxt->user_regs.cs = __KERNEL_CS;
+		per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
+#ifdef CONFIG_X86_32
 	}
-	ctxt->user_regs.cs = __KERNEL_CS;
+#else
+	} else
+		/* N.B. The user_regs.eip (cpu_bringup_and_idle) is called with
+		 * %rdi having the cpu number - which means we are passing in
+		 * the cpu as the first parameter. Subtle!
+		 */
+		ctxt->user_regs.rdi = cpu;
+#endif
 	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
-
-	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
-
 	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
 		BUG();
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 95f8c61..9059c24 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,4 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
+void xen_setup_gdt(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBO5-0003sH-P8; Fri, 03 Jan 2014 20:30:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNb-0003cU-RA
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:04 +0000
Received: from [193.109.254.147:2573] by server-13.bemta-14.messagelabs.com id
	DA/26-19374-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1388781000!8737205!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15047 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSuMx007102
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSu5C019522
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:56 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KStQo019517; Fri, 3 Jan 2014 20:28:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 30C751C016E; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:28 -0500
Message-Id: <1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 11/19] xen/pvh: Secondary VCPU bringup
	(non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

The VCPU bringup protocol follows the PV protocol with certain twists.
From xen/include/public/arch-x86/xen.h:

Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
for HVM and PVH guests, not all information in this structure is updated:

 - For HVM guests, the structures read include: fpu_ctxt (if
 VGCT_I387_VALID is set), flags, user_regs, debugreg[*]

 - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
 set cr3. All other fields not used should be set to 0.
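A minimal userspace sketch of that contract (the struct here is a cut-down, illustrative stand-in for Xen's vcpu_guest_context, not the real layout):

```c
/* Sketch of the PVH-side rule quoted above: the hypervisor consumes
 * user_regs and ctrlreg[3], and every field not used must be left 0.
 * All types and names below are illustrative stand-ins. */
#include <assert.h>
#include <string.h>

struct user_regs { unsigned long rip, rsp, rdi; };
struct vcpu_ctxt {
	unsigned long flags;
	struct user_regs user_regs;
	unsigned long ctrlreg[8];
	unsigned long gs_base_kernel;	/* unused for PVH: must stay 0 */
};

static void pvh_fill_ctxt(struct vcpu_ctxt *ctxt, unsigned long entry,
			  unsigned long stack, unsigned long cr3, int cpu)
{
	memset(ctxt, 0, sizeof(*ctxt));	/* "fields not used ... set to 0" */
	ctxt->user_regs.rip = entry;
	ctxt->user_regs.rsp = stack;
	ctxt->user_regs.rdi = cpu;	/* CPU number for the entry point */
	ctxt->ctrlreg[3] = cr3;
}
```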

This is what we do. We piggyback on 'xen_setup_gdt', but modify it a
bit: we need to call 'load_percpu_segment' first so that
'switch_to_new_gdt' can load the per-cpu data structures. This has no
effect on VCPU0.
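The ordering constraint can be sketched in plain C (userspace stand-ins, not the real kernel symbols; the assert models the fault you would get if the per-cpu segment were not loaded first):

```c
/* switch_to_new_gdt() finds the new GDT through the per-cpu area, so on
 * a secondary PVH VCPU the per-cpu segment must be loaded first. */
#include <assert.h>

#define NR_CPUS 4

struct gdt_page { unsigned long gdt[16]; };

static struct gdt_page percpu_gdt[NR_CPUS];	/* per-cpu GDT copies */
static int current_percpu_base = -1;		/* -1: %gs base not set up */

/* Stand-in for load_percpu_segment(): point "this CPU" at its area. */
static void load_percpu_segment(int cpu) { current_percpu_base = cpu; }

/* Stand-in for get_cpu_gdt_table(): only valid once the per-cpu
 * segment for this CPU has been loaded. */
static struct gdt_page *get_cpu_gdt_table(int cpu)
{
	assert(current_percpu_base == cpu);	/* would fault in the kernel */
	return &percpu_gdt[cpu];
}

static struct gdt_page *switch_to_new_gdt(int cpu)
{
	return get_cpu_gdt_table(cpu);		/* load_gdt(...) elided */
}

static struct gdt_page *xen_setup_gdt(int cpu)
{
	load_percpu_segment(cpu);		/* must happen first */
	return switch_to_new_gdt(cpu);
}
```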

We also piggyback on the %rdi register to pass in the CPU number, so
that when we boot up a new CPU, cpu_bringup_and_idle receives the CPU
number as its first parameter (via %rdi on 64-bit).
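This works because %rdi carries the first integer argument in the x86-64 SysV ABI. A hedged userspace sketch of the hand-off (cut-down stand-in types, not the real vcpu_guest_context, and the hypervisor's register load is only modeled):

```c
/* The context's %rdi slot is filled with the CPU number; when the
 * hypervisor starts the VCPU at user_regs.rip, that value lands in the
 * first-argument register, i.e. the parameter of the entry point. */
#include <assert.h>

struct user_regs { unsigned long rdi, rip; };
struct vcpu_ctxt { struct user_regs user_regs; };

static int last_cpu_brought_up = -1;

static void cpu_bringup_and_idle(int cpu) { last_cpu_brought_up = cpu; }

/* Stand-in for VCPUOP_initialise + VCPU start: the hypervisor would load
 * user_regs into the new VCPU's registers and jump to rip; here we just
 * model the %rdi -> first-argument hand-off. */
static void start_vcpu(struct vcpu_ctxt *ctxt)
{
	cpu_bringup_and_idle((int)ctxt->user_regs.rdi);
}
```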

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 11 ++++++++---
 arch/x86/xen/smp.c       | 49 ++++++++++++++++++++++++++++++++----------------
 arch/x86/xen/xen-ops.h   |  1 +
 3 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 1170d00..fde62c4 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1409,14 +1409,19 @@ static void __init xen_boot_params_init_edd(void)
  * Set up the GDT and segment registers for -fstack-protector.  Until
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
+ *
+ * Note that it is marked __ref because the only caller of this after
+ * init is PVH, which is not going to use xen_load_gdt_boot or other
+ * __init functions.
  */
-static void __init xen_setup_gdt(void)
+void __ref xen_setup_gdt(int cpu)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 #ifdef CONFIG_X86_64
 		unsigned long dummy;
 
-		switch_to_new_gdt(0); /* GDT and GS set */
+		load_percpu_segment(cpu); /* We need to access per-cpu area */
+		switch_to_new_gdt(cpu); /* GDT and GS set */
 
 		/* We are switching from the Xen provided GDT to our HVM mode
 		 * GDT. The new GDT has __KERNEL_CS with CS.L = 1
@@ -1529,7 +1534,7 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
-	xen_setup_gdt();
+	xen_setup_gdt(0);
 
 	xen_init_irq_ops();
 	xen_init_cpuid_mask();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index c36b325..5e46190 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -73,9 +73,11 @@ static void cpu_bringup(void)
 	touch_softlockup_watchdog();
 	preempt_disable();
 
-	xen_enable_sysenter();
-	xen_enable_syscall();
-
+	/* PVH runs in ring 0 and allows us to do native syscalls. Yay! */
+	if (!xen_feature(XENFEAT_supervisor_mode_kernel)) {
+		xen_enable_sysenter();
+		xen_enable_syscall();
+	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
 	cpu_data(cpu).x86_max_cores = 1;
@@ -97,8 +99,14 @@ static void cpu_bringup(void)
 	wmb();			/* make sure everything is out */
 }
 
-static void cpu_bringup_and_idle(void)
+/* Note: cpu parameter is only relevant for PVH */
+static void cpu_bringup_and_idle(int cpu)
 {
+#ifdef CONFIG_X86_64
+	if (xen_feature(XENFEAT_auto_translated_physmap) &&
+	    xen_feature(XENFEAT_supervisor_mode_kernel))
+		xen_setup_gdt(cpu);
+#endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
 }
@@ -274,9 +282,10 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	native_smp_prepare_boot_cpu();
 
 	if (xen_pv_domain()) {
-		/* We've switched to the "real" per-cpu gdt, so make sure the
-		   old memory can be recycled */
-		make_lowmem_page_readwrite(xen_initial_gdt);
+		if (!xen_feature(XENFEAT_writable_page_tables))
+			/* We've switched to the "real" per-cpu gdt, so make
+			 * sure the old memory can be recycled. */
+			make_lowmem_page_readwrite(xen_initial_gdt);
 
 #ifdef CONFIG_X86_32
 		/*
@@ -360,22 +369,21 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	gdt = get_cpu_gdt_table(cpu);
 
-	ctxt->flags = VGCF_IN_KERNEL;
-	ctxt->user_regs.ss = __KERNEL_DS;
 #ifdef CONFIG_X86_32
+	/* Note: PVH is not yet supported on x86_32. */
 	ctxt->user_regs.fs = __KERNEL_PERCPU;
 	ctxt->user_regs.gs = __KERNEL_STACK_CANARY;
-#else
-	ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 	ctxt->user_regs.eip = (unsigned long)cpu_bringup_and_idle;
 
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
-	{
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		ctxt->flags = VGCF_IN_KERNEL;
 		ctxt->user_regs.eflags = 0x1000; /* IOPL_RING1 */
 		ctxt->user_regs.ds = __USER_DS;
 		ctxt->user_regs.es = __USER_DS;
+		ctxt->user_regs.ss = __KERNEL_DS;
 
 		xen_copy_trap_info(ctxt->trap_ctxt);
 
@@ -396,18 +404,27 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 #ifdef CONFIG_X86_32
 		ctxt->event_callback_cs     = __KERNEL_CS;
 		ctxt->failsafe_callback_cs  = __KERNEL_CS;
+#else
+		ctxt->gs_base_kernel = per_cpu_offset(cpu);
 #endif
 		ctxt->event_callback_eip    =
 					(unsigned long)xen_hypervisor_callback;
 		ctxt->failsafe_callback_eip =
 					(unsigned long)xen_failsafe_callback;
+		ctxt->user_regs.cs = __KERNEL_CS;
+		per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
+#ifdef CONFIG_X86_32
 	}
-	ctxt->user_regs.cs = __KERNEL_CS;
+#else
+	} else
+		/* N.B. The user_regs.eip (cpu_bringup_and_idle) is called with
+		 * %rdi having the cpu number - which means we are passing in
+		 * the cpu as the first parameter. Subtle!
+		 */
+		ctxt->user_regs.rdi = cpu;
+#endif
 	ctxt->user_regs.esp = idle->thread.sp0 - sizeof(struct pt_regs);
-
-	per_cpu(xen_cr3, cpu) = __pa(swapper_pg_dir);
 	ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
-
 	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
 		BUG();
 
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 95f8c61..9059c24 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,4 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
+void xen_setup_gdt(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBO8-0003v6-UL; Fri, 03 Jan 2014 20:30:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNc-0003d4-9a
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:04 +0000
Received: from [85.158.143.35:18034] by server-3.bemta-4.messagelabs.com id
	80/5B-32360-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388781001!9518190!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27257 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSxF6022547
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSwBO019595
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSwYu019585; Fri, 3 Jan 2014 20:28:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5E2CF1C02CC; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:33 -0500
Message-Id: <1388777916-1328-17-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 16/19] xen/grant: Implement a grant frame
	array struct (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The 'xen_hvm_resume_frames' variable used to be an 'unsigned long'
containing the virtual address of the grants. That was OK for most
architectures (PVHVM, ARM) where the grants are contiguous in memory.
That, however, is not the case for PVH, where we would have to look up
the PFN for each virtual address.

Instead of doing that, let's make it a structure which contains the
array of PFNs, the virtual address and the count of said PFNs.

Also provide two generic functions, gnttab_setup_auto_xlat_frames and
gnttab_free_auto_xlat_frames, to populate said structure with
appropriate values for PVHVM and ARM.

To round it off, change the name from 'xen_hvm_resume_frames' to
a more descriptive one - 'xen_auto_xlat_grant_frames'.

For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
we will populate the 'xen_auto_xlat_grant_frames' by ourselves.

v2 moves the xen_remap into gnttab_setup_auto_xlat_frames and also
introduces xen_unmap for gnttab_free_auto_xlat_frames.
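For the contiguous (PVHVM/ARM) case, the PFN array is simply one entry per grant frame starting at PFN_DOWN(addr). A minimal userspace sketch of that computation (PAGE_SHIFT and the setup function are simplified stand-ins; xen_remap/xen_unmap are elided):

```c
/* Sketch of what gnttab_setup_auto_xlat_frames() computes when the
 * grant frames are contiguous: pfn[i] = PFN_DOWN(addr) + i. */
#include <assert.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

typedef unsigned long xen_pfn_t;

struct grant_frames {
	xen_pfn_t *pfn;
	unsigned int count;
	void *vaddr;
};

static int setup_auto_xlat_frames(struct grant_frames *gf,
				  unsigned long addr, unsigned int nr)
{
	unsigned int i;
	xen_pfn_t *pfn = calloc(nr, sizeof(pfn[0]));

	if (!pfn)
		return -1;
	for (i = 0; i < nr; i++)
		pfn[i] = PFN_DOWN(addr) + i;	/* contiguous frames */
	gf->pfn = pfn;
	gf->count = nr;
	gf->vaddr = NULL;			/* xen_remap() elided here */
	return 0;
}
```

PVH, by contrast, fills the array itself (see "xen/pvh: Piggyback on PVHVM for grant driver"), which is exactly why the flat base address had to become an array.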

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/arm/include/asm/xen/page.h |  1 +
 arch/arm/xen/enlighten.c        |  9 +++++--
 arch/x86/include/asm/xen/page.h |  1 +
 drivers/xen/grant-table.c       | 58 ++++++++++++++++++++++++++++++++++++-----
 drivers/xen/platform-pci.c      | 10 ++++---
 include/xen/grant_table.h       |  9 ++++++-
 6 files changed, 75 insertions(+), 13 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 75579a9d..5af8fb3 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -118,5 +118,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 }
 
 #define xen_remap(cookie, size) ioremap_cached((cookie), (size));
+#define xen_unmap(cookie) iounmap((cookie))
 
 #endif /* _ASM_ARM_XEN_PAGE_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..2162172 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
 	const char *version = NULL;
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
+	unsigned long grant_frames;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
 	}
 	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
 		return 0;
-	xen_hvm_resume_frames = res.start;
+	grant_frames = res.start;
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
-			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
+			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
 	if (xen_vcpu_info == NULL)
 		return -ENOMEM;
 
+	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
+		free_percpu(xen_vcpu_info);
+		return -ENOMEM;
+	}
 	gnttab_init();
 	if (!xen_initial_domain())
 		xenbus_probe(NULL);
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 4a092cc..3e276eb 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -227,5 +227,6 @@ void make_lowmem_page_readonly(void *vaddr);
 void make_lowmem_page_readwrite(void *vaddr);
 
 #define xen_remap(cookie, size) ioremap((cookie), (size));
+#define xen_unmap(cookie) iounmap((cookie))
 
 #endif /* _ASM_X86_XEN_PAGE_H */
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index cc1b4fa..6c78fd21 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -65,8 +65,7 @@ static unsigned int nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
-unsigned long xen_hvm_resume_frames;
-EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
+struct grant_frames xen_auto_xlat_grant_frames;
 
 static union {
 	struct grant_entry_v1 *v1;
@@ -838,6 +837,51 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
+int gnttab_setup_auto_xlat_frames(unsigned long addr)
+{
+	xen_pfn_t *pfn;
+	unsigned int max_nr_gframes = __max_nr_grant_frames();
+	unsigned int i;
+	void *vaddr;
+
+	if (xen_auto_xlat_grant_frames.count)
+		return -EINVAL;
+
+	vaddr = xen_remap(addr, PAGE_SIZE * max_nr_gframes);
+	if (vaddr == NULL) {
+		pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
+			addr);
+		return -ENOMEM;
+	}
+	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
+	if (!pfn) {
+		xen_unmap(vaddr);
+		return -ENOMEM;
+	}
+	for (i = 0; i < max_nr_gframes; i++)
+		pfn[i] = PFN_DOWN(addr) + i;
+
+	xen_auto_xlat_grant_frames.vaddr = vaddr;
+	xen_auto_xlat_grant_frames.pfn = pfn;
+	xen_auto_xlat_grant_frames.count = max_nr_gframes;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
+
+void gnttab_free_auto_xlat_frames(void)
+{
+	if (!xen_auto_xlat_grant_frames.count)
+		return;
+	kfree(xen_auto_xlat_grant_frames.pfn);
+	xen_unmap(xen_auto_xlat_grant_frames.vaddr);
+
+	xen_auto_xlat_grant_frames.pfn = NULL;
+	xen_auto_xlat_grant_frames.count = 0;
+	xen_auto_xlat_grant_frames.vaddr = NULL;
+}
+EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
+
 /* Handling of paged out grant targets (GNTST_eagain) */
 #define MAX_DELAY 256
 static inline void
@@ -1068,6 +1112,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
+		BUG_ON(xen_auto_xlat_grant_frames.count < nr_gframes);
 		/*
 		 * Loop backwards, so that the first hypercall has the largest
 		 * index, ensuring that the table will grow only once.
@@ -1076,7 +1121,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 			xatp.domid = DOMID_SELF;
 			xatp.idx = i;
 			xatp.space = XENMAPSPACE_grant_table;
-			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
+			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
 			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
 			if (rc != 0) {
 				pr_warn("grant table add_to_physmap failed, err=%d\n",
@@ -1175,11 +1220,10 @@ static int gnttab_setup(void)
 
 	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
 	{
-		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
-					       PAGE_SIZE * max_nr_gframes);
+		gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
 		if (gnttab_shared.addr == NULL) {
-			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
-					xen_hvm_resume_frames);
+			pr_warn("gnttab share frames (addr=0x%08lx) is not mapped!\n",
+				(unsigned long)xen_auto_xlat_grant_frames.vaddr);
 			return -ENOMEM;
 		}
 	}
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 2f3528e..f1947ac 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
 	long ioaddr;
 	long mmio_addr, mmio_len;
 	unsigned int max_nr_gframes;
+	unsigned long grant_frames;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
 	}
 
 	max_nr_gframes = gnttab_max_grant_frames();
-	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	if (gnttab_setup_auto_xlat_frames(grant_frames))
+		goto out;
 	ret = gnttab_init();
 	if (ret)
-		goto out;
+		goto grant_out;
 	xenbus_probe(NULL);
 	return 0;
-
+grant_out:
+	gnttab_free_auto_xlat_frames();
 out:
 	pci_release_region(pdev, 0);
 mem_out:
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..5acb1e4 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
 			   grant_status_t **__shared);
 void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
 
-extern unsigned long xen_hvm_resume_frames;
+struct grant_frames {
+	xen_pfn_t *pfn;
+	unsigned int count;
+	void *vaddr;
+};
+extern struct grant_frames xen_auto_xlat_grant_frames;
 unsigned int gnttab_max_grant_frames(void);
+int gnttab_setup_auto_xlat_frames(unsigned long addr);
+void gnttab_free_auto_xlat_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 			xatp.idx = i;
 			xatp.space = XENMAPSPACE_grant_table;
-			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
+			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
 			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
 			if (rc != 0) {
 				pr_warn("grant table add_to_physmap failed, err=%d\n",
@@ -1175,11 +1220,10 @@ static int gnttab_setup(void)
 
 	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
 	{
-		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
-					       PAGE_SIZE * max_nr_gframes);
+		gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
 		if (gnttab_shared.addr == NULL) {
-			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
-					xen_hvm_resume_frames);
+			pr_warn("gnttab share frames (addr=0x%08lx) is not mapped!\n",
+				(unsigned long)xen_auto_xlat_grant_frames.vaddr);
 			return -ENOMEM;
 		}
 	}
diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 2f3528e..f1947ac 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
 	long ioaddr;
 	long mmio_addr, mmio_len;
 	unsigned int max_nr_gframes;
+	unsigned long grant_frames;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
 	}
 
 	max_nr_gframes = gnttab_max_grant_frames();
-	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
+	if (gnttab_setup_auto_xlat_frames(grant_frames))
+		goto out;
 	ret = gnttab_init();
 	if (ret)
-		goto out;
+		goto grant_out;
 	xenbus_probe(NULL);
 	return 0;
-
+grant_out:
+	gnttab_free_auto_xlat_frames();
 out:
 	pci_release_region(pdev, 0);
 mem_out:
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..5acb1e4 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
 			   grant_status_t **__shared);
 void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
 
-extern unsigned long xen_hvm_resume_frames;
+struct grant_frames {
+	xen_pfn_t *pfn;
+	unsigned int count;
+	void *vaddr;
+};
+extern struct grant_frames xen_auto_xlat_grant_frames;
 unsigned int gnttab_max_grant_frames(void);
+int gnttab_setup_auto_xlat_frames(unsigned long addr);
+void gnttab_free_auto_xlat_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

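[Editor's sketch] The patch above introduces a setup/teardown pair around a global `struct grant_frames`: map the shared region once, record one guest pfn per grant frame, refuse a second setup while configured, and unwind the mapping on allocation failure. The following is a self-contained userspace sketch of that pattern, not the kernel code: `malloc`/`calloc`/`free` stand in for `xen_remap()`, `kcalloc()` and `xen_unmap()`, and the error values and helper names are illustrative.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(addr) ((addr) >> PAGE_SHIFT)

typedef uint64_t xen_pfn_t;

/* Mirrors the struct grant_frames added to include/xen/grant_table.h. */
struct grant_frames {
	xen_pfn_t *pfn;
	unsigned int count;
	void *vaddr;
};

static struct grant_frames auto_xlat; /* zero-initialized, like the kernel global */

/* Sketch of gnttab_setup_auto_xlat_frames(): map the region once, record
 * one guest pfn per grant frame, and refuse a second call while set up. */
static int setup_auto_xlat_frames(unsigned long addr, unsigned int max_nr_gframes)
{
	if (auto_xlat.count)
		return -1; /* -EINVAL in the kernel: already configured */

	void *vaddr = malloc((size_t)max_nr_gframes << PAGE_SHIFT); /* xen_remap() stand-in */
	if (!vaddr)
		return -2; /* -ENOMEM */

	xen_pfn_t *pfn = calloc(max_nr_gframes, sizeof(pfn[0]));
	if (!pfn) {
		free(vaddr); /* unwind the mapping, as the patch does with xen_unmap() */
		return -2;
	}
	for (unsigned int i = 0; i < max_nr_gframes; i++)
		pfn[i] = PFN_DOWN(addr) + i;

	auto_xlat.vaddr = vaddr;
	auto_xlat.pfn = pfn;
	auto_xlat.count = max_nr_gframes;
	return 0;
}

/* Sketch of gnttab_free_auto_xlat_frames(): safe to call when not set up. */
static void free_auto_xlat_frames(void)
{
	if (!auto_xlat.count)
		return;
	free(auto_xlat.pfn);
	free(auto_xlat.vaddr);
	memset(&auto_xlat, 0, sizeof(auto_xlat));
}
```

The `count` field doubles as the "is configured" flag, which is why both the double-setup check in `gnttab_setup_auto_xlat_frames()` and the `BUG_ON` in `gnttab_map()` test it.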
From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOD-0003xm-Hc; Fri, 03 Jan 2014 20:30:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNc-0003d5-As
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:04 +0000
Received: from [85.158.143.35:26412] by server-1.bemta-4.messagelabs.com id
	50/1D-02132-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388781001!9531740!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24299 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSw2Y007123
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSwwG019567
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:58 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSvDL020036; Fri, 3 Jan 2014 20:28:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4984D1C0176; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:31 -0500
Message-Id: <1388777916-1328-15-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 14/19] xen/grants: Remove
	gnttab_max_grant_frames dependency on gnttab_init.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function gnttab_max_grant_frames() returns the maximum number
of frames (pages) of grants we can have. Unfortunately, it depended
on gnttab_init() having run first to initialize the boot-time
maximum (boot_max_nr_grant_frames).

This meant that callers of gnttab_max_grant_frames() would always
get a zero value if they ran before gnttab_init() - such as
'platform_pci_init' (drivers/xen/platform-pci.c).

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/grant-table.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..99399cb 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -62,7 +62,6 @@
 
 static grant_ref_t **gnttab_list;
 static unsigned int nr_grant_frames;
-static unsigned int boot_max_nr_grant_frames;
 static int gnttab_free_count;
 static grant_ref_t gnttab_free_head;
 static DEFINE_SPINLOCK(gnttab_list_lock);
@@ -827,6 +826,11 @@ static unsigned int __max_nr_grant_frames(void)
 unsigned int gnttab_max_grant_frames(void)
 {
 	unsigned int xen_max = __max_nr_grant_frames();
+	static unsigned int boot_max_nr_grant_frames;
+
+	/* First time, initialize it properly. */
+	if (!boot_max_nr_grant_frames)
+		boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	if (xen_max > boot_max_nr_grant_frames)
 		return boot_max_nr_grant_frames;
@@ -1227,13 +1231,12 @@ int gnttab_init(void)
 
 	gnttab_request_version();
 	nr_grant_frames = 1;
-	boot_max_nr_grant_frames = __max_nr_grant_frames();
 
 	/* Determine the maximum number of frames required for the
 	 * grant reference free list on the current hypervisor.
 	 */
 	BUG_ON(grefs_per_grant_frame == 0);
-	max_nr_glist_frames = (boot_max_nr_grant_frames *
+	max_nr_glist_frames = (gnttab_max_grant_frames() *
 			       grefs_per_grant_frame / RPP);
 
 	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

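[Editor's sketch] The patch above replaces the file-scope `boot_max_nr_grant_frames` (set only by `gnttab_init()`) with a function-local static that is captured lazily on the first call, so early callers no longer read zero. A self-contained sketch of that lazy-capture pattern, with `query_max_nr_grant_frames()` standing in for `__max_nr_grant_frames()` (in the kernel that value comes from the hypervisor; the mutable stand-in here only exists so the behavior can be exercised):

```c
/* Stand-in for __max_nr_grant_frames(); mutable so the value can be
 * changed between calls to simulate the hypervisor limit moving. */
static unsigned int fake_xen_max = 32;

static unsigned int query_max_nr_grant_frames(void)
{
	return fake_xen_max;
}

/* Sketch of the patched gnttab_max_grant_frames(): the boot-time cap is
 * recorded on the first call, whenever that happens, so callers that run
 * before gnttab_init() (e.g. platform_pci_init) get a real value. */
static unsigned int max_grant_frames(void)
{
	unsigned int xen_max = query_max_nr_grant_frames();
	static unsigned int boot_max; /* 0 until the first call */

	if (!boot_max)
		boot_max = xen_max;

	/* Never report more than the boot-time maximum. */
	return xen_max > boot_max ? boot_max : xen_max;
}
```

The first caller, whoever it is, pins `boot_max`; later calls clamp to it even if the hypervisor would now report more.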
From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOG-00041H-Sp; Fri, 03 Jan 2014 20:30:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNc-0003d9-Ha
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:05 +0000
Received: from [85.158.139.211:54255] by server-6.bemta-5.messagelabs.com id
	5B/D4-16310-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388781001!7550968!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31956 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSxLh022548
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw2j020070
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw9S020065; Fri, 3 Jan 2014 20:28:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 742441C0752; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:36 -0500
Message-Id: <1388777916-1328-20-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 19/19] xen/pvh: Support ParaVirtualized
	Hardware extensions (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH allows a PV Linux guest to utilize hardware extended capabilities,
such as running MMU updates in an HVM container.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

Use .ascii and .asciz to define the Xen feature string. Note that the
PVH string must be on a single line (not multiple lines with \) to keep
the assembler from emitting a NUL character after each string fragment.
This patch allows PVH to be configured and enabled.

We also introduce the 'XEN_ELFNOTE_SUPPORTED_FEATURES' ELF note to
tell the hypervisor that 'hvm_callback_vector' is what the kernel
needs. We cannot put it in 'XEN_ELFNOTE_FEATURES', as older hypervisors
treat fields they don't understand as errors and refuse to load
the kernel. This work-around fixes the problem.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/Kconfig            |  2 +-
 arch/x86/xen/xen-head.S         | 25 ++++++++++++++++++++++++-
 include/xen/interface/elfnote.h | 13 +++++++++++++
 3 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index e7d0590..d88bfd6 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -53,6 +53,6 @@ config XEN_DEBUG_FS
 
 config XEN_PVH
 	bool "Support for running as a PVH guest"
-	depends on X86_64 && XEN && BROKEN
+	depends on X86_64 && XEN
 	select XEN_PVHVM
 	def_bool n
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 7faed58..485b695 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -11,8 +11,28 @@
 #include <asm/page_types.h>
 
 #include <xen/interface/elfnote.h>
+#include <xen/interface/features.h>
 #include <asm/xen/interface.h>
 
+#ifdef CONFIG_XEN_PVH
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
+/* Note the lack of 'hvm_callback_vector'. Older hypervisors will
+ * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
+ * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
+ */
+#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
+		      (1 << XENFEAT_auto_translated_physmap) | \
+		      (1 << XENFEAT_supervisor_mode_kernel) | \
+		      (1 << XENFEAT_hvm_callback_vector))
+/* XENFEAT_writable_page_tables is not strictly necessary, as we set that
+ * up regardless of whether this CONFIG option is enabled, but it
+ * clarifies what the right flags need to be.
+ */
+#else
+#define PVH_FEATURES_STR  ""
+#define PVH_FEATURES (0)
+#endif
+
 	__INIT
 ENTRY(startup_xen)
 	cld
@@ -95,7 +115,10 @@ NEXT_HYPERCALL(arch_6)
 #endif
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
-	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .asciz "!writable_page_tables|pae_pgdir_above_4gb")
+	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
+	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
+						(1 << XENFEAT_writable_page_tables) |
+						(1 << XENFEAT_dom0))
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/interface/elfnote.h b/include/xen/interface/elfnote.h
index 0360b15..6f4eae3 100644
--- a/include/xen/interface/elfnote.h
+++ b/include/xen/interface/elfnote.h
@@ -140,6 +140,19 @@
  */
 #define XEN_ELFNOTE_SUSPEND_CANCEL 14
 
+/*
+ * The features supported by this kernel (numeric).
+ *
+ * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
+ * kernel to specify support for features that older hypervisors don't
+ * know about. The set of features 4.2 and newer hypervisors will
+ * consider supported by the kernel is the combination of the sets
+ * specified through this and the string note.
+ *
+ * LEGACY: FEATURES
+ */
+#define XEN_ELFNOTE_SUPPORTED_FEATURES 17
+
 #endif /* __XEN_PUBLIC_ELFNOTE_H__ */
 
 /*
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

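[Editor's sketch] The PVH_FEATURES macro in the patch above builds a plain bitmask, one bit per XENFEAT_* flag, which is what gets emitted as the .long payload of XEN_ELFNOTE_SUPPORTED_FEATURES. A minimal sketch of that construction; the bit positions below are illustrative stand-ins (the authoritative values live in xen/interface/features.h):

```c
#include <stdint.h>

/* Illustrative bit positions only; the real XENFEAT_* values come from
 * xen/interface/features.h and may differ. */
enum {
	XENFEAT_writable_page_tables    = 0,
	XENFEAT_auto_translated_physmap = 2,
	XENFEAT_supervisor_mode_kernel  = 3,
	XENFEAT_hvm_callback_vector     = 8,
};

/* Mirrors the shape of the PVH_FEATURES macro from the patch: one bit
 * per feature, OR-ed together into a single word. */
#define PVH_FEATURES ((1u << XENFEAT_writable_page_tables) |	\
		      (1u << XENFEAT_auto_translated_physmap) |	\
		      (1u << XENFEAT_supervisor_mode_kernel) |	\
		      (1u << XENFEAT_hvm_callback_vector))

/* The hypervisor side tests individual bits of the note's payload. */
static int has_feature(uint32_t mask, unsigned int bit)
{
	return (mask >> bit) & 1u;
}
```

Because the payload is numeric rather than a string, an older hypervisor that does not know the note number simply skips it, which is exactly why 'hvm_callback_vector' is advertised here instead of in the string-valued XEN_ELFNOTE_FEATURES.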
From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOG-00041H-Sp; Fri, 03 Jan 2014 20:30:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNc-0003d9-Ha
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:05 +0000
Received: from [85.158.139.211:54255] by server-6.bemta-5.messagelabs.com id
	5B/D4-16310-BCD17C25; Fri, 03 Jan 2014 20:30:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388781001!7550968!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31956 invoked from network); 3 Jan 2014 20:30:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSxLh022548
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw2j020070
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw9S020065; Fri, 3 Jan 2014 20:28:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 742441C0752; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:36 -0500
Message-Id: <1388777916-1328-20-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 19/19] xen/pvh: Support ParaVirtualized
	Hardware extensions (v3).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH allows PV linux guest to utilize hardware extended capabilities,
such as running MMU updates in a HVM container.

The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
with modifications):

"* the guest uses auto translate:
 - p2m is managed by Xen
 - pagetables are owned by the guest
 - mmu_update hypercall not available
* it uses event callback and not vlapic emulation,
* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.

For a full list of hcalls supported for PVH, see pvh_hypercall64_table
in arch/x86/hvm/hvm.c in xen.  From the ABI prespective, it's mostly a
PV guest with auto translate, although it does use hvm_op for setting
callback vector."

Use .ascii and .asciz to define xen feature string. Note, the PVH
string must be in a single line (not multiple lines with \) to keep the
assembler from putting null char after each string before \.
This patch allows it to be configured and enabled.

We also use introduce the 'XEN_ELFNOTE_SUPPORTED_FEATURES' ELF note to
tell the hypervisor that 'hvm_callback_vector' is what the kernel
needs. We can not put it in 'XEN_ELFNOTE_FEATURES' as older hypervisor
parse fields they don't understand as errors and refuse to load
the kernel. This work-around fixes the problem.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/x86/xen/Kconfig            |  2 +-
 arch/x86/xen/xen-head.S         | 25 ++++++++++++++++++++++++-
 include/xen/interface/elfnote.h | 13 +++++++++++++
 3 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index e7d0590..d88bfd6 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -53,6 +53,6 @@ config XEN_DEBUG_FS
 
 config XEN_PVH
 	bool "Support for running as a PVH guest"
-	depends on X86_64 && XEN && BROKEN
+	depends on X86_64 && XEN
 	select XEN_PVHVM
 	def_bool n
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 7faed58..485b695 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -11,8 +11,28 @@
 #include <asm/page_types.h>
 
 #include <xen/interface/elfnote.h>
+#include <xen/interface/features.h>
 #include <asm/xen/interface.h>
 
+#ifdef CONFIG_XEN_PVH
+#define PVH_FEATURES_STR  "|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
+/* Note the lack of 'hvm_callback_vector'. Older hypervisor will
+ * balk at this being part of XEN_ELFNOTE_FEATURES, so we put it in
+ * XEN_ELFNOTE_SUPPORTED_FEATURES which older hypervisors will ignore.
+ */
+#define PVH_FEATURES ((1 << XENFEAT_writable_page_tables) | \
+		      (1 << XENFEAT_auto_translated_physmap) | \
+		      (1 << XENFEAT_supervisor_mode_kernel) | \
+		      (1 << XENFEAT_hvm_callback_vector))
+/* The XENFEAT_writable_page_tables is not stricly neccessary as we set that
+ * up regardless whether this CONFIG option is enabled or not, but it
+ * clarifies what the right flags need to be.
+ */
+#else
+#define PVH_FEATURES_STR  ""
+#define PVH_FEATURES (0)
+#endif
+
 	__INIT
 ENTRY(startup_xen)
 	cld
@@ -95,7 +115,10 @@ NEXT_HYPERCALL(arch_6)
 #endif
 	ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
 	ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
-	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .asciz "!writable_page_tables|pae_pgdir_above_4gb")
+	ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .ascii "!writable_page_tables|pae_pgdir_above_4gb"; .asciz PVH_FEATURES_STR)
+	ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, .long (PVH_FEATURES) |
+						(1 << XENFEAT_writable_page_tables) |
+						(1 << XENFEAT_dom0))
 	ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz "yes")
 	ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz "generic")
 	ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,
diff --git a/include/xen/interface/elfnote.h b/include/xen/interface/elfnote.h
index 0360b15..6f4eae3 100644
--- a/include/xen/interface/elfnote.h
+++ b/include/xen/interface/elfnote.h
@@ -140,6 +140,19 @@
  */
 #define XEN_ELFNOTE_SUSPEND_CANCEL 14
 
+/*
+ * The features supported by this kernel (numeric).
+ *
+ * Unlike XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
+ * kernel to specify support for features that older hypervisors don't
+ * know about. The set of features 4.2 and newer hypervisors will
+ * consider supported by the kernel is the combination of the sets
+ * specified through this and the string note.
+ *
+ * LEGACY: FEATURES
+ */
+#define XEN_ELFNOTE_SUPPORTED_FEATURES 17
+
 #endif /* __XEN_PUBLIC_ELFNOTE_H__ */
 
 /*
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOI-000449-EJ; Fri, 03 Jan 2014 20:30:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNd-0003de-CF
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:05 +0000
Received: from [85.158.143.35:18075] by server-3.bemta-4.messagelabs.com id
	C1/5B-32360-CCD17C25; Fri, 03 Jan 2014 20:30:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1388781002!8248627!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15386 invoked from network); 3 Jan 2014 20:30:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSxxA007135
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:29:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSxno025366
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSwuD025348; Fri, 3 Jan 2014 20:28:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6B4E71C02CF; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:35 -0500
Message-Id: <1388777916-1328-19-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 18/19] xen/pvh: Piggyback on PVHVM XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mukesh Rathor <mukesh.rathor@oracle.com>

PVH is a PV guest with a twist - certain things in it
work as in HVM and others as in PV. For XenBus we
want to use the PVHVM mechanism.
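As a rough illustration of why the diff below switches from xen_pv_domain() to testing XENFEAT_auto_translated_physmap: PV guests manage their own machine addresses, while PVHVM and PVH guests are auto-translated, so the feature test naturally groups PVH with the HVM ring ops. The names below (guest_type, pick_ring_ops) are illustrative stand-ins, not the kernel's actual symbols:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's domain types and ring ops. */
enum guest_type { GUEST_PV, GUEST_PVHVM, GUEST_PVH };
enum ring_ops  { RING_OPS_PV, RING_OPS_HVM };

/* PV guests see machine frames directly (feature clear); PVHVM and PVH
 * guests run auto-translated (feature set). */
static int auto_translated_physmap(enum guest_type g)
{
	return g != GUEST_PV;
}

/* Mirrors the patched xenbus_ring_ops_init(): instead of asking "is this
 * a PV domain?", ask "is this domain auto-translated?", which routes PVH
 * through the same path as PVHVM. */
static enum ring_ops pick_ring_ops(enum guest_type g)
{
	if (!auto_translated_physmap(g))
		return RING_OPS_PV;
	return RING_OPS_HVM;
}
```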

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/xenbus/xenbus_client.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index ec097d6..01d59e6 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -45,6 +45,7 @@
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>
 #include <xen/xen.h>
+#include <xen/features.h>
 
 #include "xenbus_probe.h"
 
@@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
 
 void __init xenbus_ring_ops_init(void)
 {
-	if (xen_pv_domain())
+	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		ring_ops = &ring_ops_pv;
 	else
 		ring_ops = &ring_ops_hvm;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOK-000475-Bz; Fri, 03 Jan 2014 20:30:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNd-0003dy-RW
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:06 +0000
Received: from [85.158.143.35:26453] by server-2.bemta-4.messagelabs.com id
	6D/28-11386-DCD17C25; Fri, 03 Jan 2014 20:30:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388781003!9364142!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21356 invoked from network); 3 Jan 2014 20:30:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:04 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSwiI007124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw85020043
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:58 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSvMY020039; Fri, 3 Jan 2014 20:28:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 51D991C01B2; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:32 -0500
Message-Id: <1388777916-1328-16-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 15/19] xen/grant-table: Refactor gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We have this odd scenario where for the PV path we take a
shortcut, but for the HVM path we first ioremap
xen_hvm_resume_frames and then assign it to
gnttab_shared.addr. This is needed because gnttab_map
uses gnttab_shared.addr.

Instead of having:
	if (pv)
		return gnttab_map
	if (hvm)
		...

	gnttab_map

Let's move the HVM part before gnttab_map and remove the
first call to gnttab_map.
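The restructured control flow can be sketched with stubs (a toy model, not the real kernel functions; auto_translated, shared_addr, and the stub names stand in for xen_feature(XENFEAT_auto_translated_physmap), gnttab_shared.addr, and gnttab_map respectively):

```c
#include <assert.h>
#include <stddef.h>

static int   auto_translated;  /* models xen_feature(XENFEAT_auto_translated_physmap) */
static void *shared_addr;      /* models gnttab_shared.addr */
static int   map_calls;        /* counts calls into the gnttab_map stand-in */

static int gnttab_map_stub(void)
{
	map_calls++;
	return 0;
}

/* After the refactor there is a single tail call: the auto-translated
 * (HVM/PVH) branch only fills in the shared address first, and every
 * path then falls through to one gnttab_map(). */
static int gnttab_setup_sketch(void)
{
	static char frames[4096];              /* stands in for xen_remap()'d frames */

	if (auto_translated && shared_addr == NULL)
		shared_addr = frames;          /* HVM/PVH: map shared frames first */
	return gnttab_map_stub();              /* single exit for PV and HVM alike */
}
```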

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/grant-table.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 99399cb..cc1b4fa 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
 	if (max_nr_gframes < nr_grant_frames)
 		return -ENOSYS;
 
-	if (xen_pv_domain())
-		return gnttab_map(0, nr_grant_frames - 1);
-
-	if (gnttab_shared.addr == NULL) {
+	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
+	{
 		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
-						PAGE_SIZE * max_nr_gframes);
+					       PAGE_SIZE * max_nr_gframes);
 		if (gnttab_shared.addr == NULL) {
 			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
 					xen_hvm_resume_frames);
 			return -ENOMEM;
 		}
 	}
-
-	gnttab_map(0, nr_grant_frames - 1);
-
-	return 0;
+	return gnttab_map(0, nr_grant_frames - 1);
 }
 
 int gnttab_resume(void)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:30:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBOL-00049j-5g; Fri, 03 Jan 2014 20:30:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBNe-0003e9-7U
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:30:06 +0000
Received: from [85.158.137.68:24655] by server-16.bemta-3.messagelabs.com id
	7F/EA-26128-DCD17C25; Fri, 03 Jan 2014 20:30:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388781003!7168329!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10756 invoked from network); 3 Jan 2014 20:30:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:30:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KSxod022551
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:29:00 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSw6d025356
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:28:59 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KSww8019596; Fri, 3 Jan 2014 20:28:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:28:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 627D91C02CD; Fri,  3 Jan 2014 15:28:51 -0500 (EST)
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, stefano.stabellini@eu.citrix.com,
	david.vrabel@citrix.com
Date: Fri,  3 Jan 2014 14:38:34 -0500
Message-Id: <1388777916-1328-18-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, hpa@zytor.com
Subject: [Xen-devel] [PATCH v13 17/19] xen/pvh: Piggyback on PVHVM for grant
	driver (v4)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In PVH the shared grant frame is a PFN and not an MFN,
hence it is mapped via the same code path as HVM.

The allocation of the grant frame is done differently - we
do not use the early platform-pci driver with its
ioremap area - instead we use balloon memory and stitch
all of the non-contiguous pages into a virtual address area.

That means when we call the hypervisor to replace the GMFN
with a XENMAPSPACE_grant_table type, we need to look up the
old PFN for every iteration instead of assuming a flat
contiguous PFN allocation.
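The per-iteration lookup can be sketched as follows (a toy model: pfn_table stands in for the recorded xen_auto_xlat_grant_frames.pfn array, and the scattered values model balloon-backed, non-contiguous frames):

```c
#include <assert.h>

#define NR_GRANT_FRAMES 4

/* Balloon-allocated frames are not contiguous, so the PFN for each grant
 * frame index must be recorded and looked up, not computed as base + idx. */
static unsigned long pfn_table[NR_GRANT_FRAMES] = { 7, 42, 3, 19 };

/* Lookup used per iteration of the XENMAPSPACE_grant_table exchange loop
 * (illustrative name; not a kernel symbol). */
static unsigned long gfn_for_index(unsigned int idx)
{
	return pfn_table[idx];
}
```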

Lastly, we only use v1 for grants. This is because PVHVM
is not able to use v2 due to the lack of XENMEM_add_to_physmap
calls on the error status page (see commit
69e8f430e243d657c2053f097efebc2e2cd559f0
 "xen/granttable: Disable grant v2 for HVM domains.")

Until that is implemented this workaround has to
stay in place.
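The version policy in the gnttab_request_version() hunk below reduces to a one-liner; this sketch models it with an illustrative function name, not a kernel symbol:

```c
#include <assert.h>

/* Auto-translated guests (HVM and PVH) are pinned to grant table v1 until
 * XENMEM_add_to_physmap of the status frames is supported; PV guests may
 * request v2. */
static int grant_version_for(int auto_translated_physmap)
{
	return auto_translated_physmap ? 1 : 2;
}
```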

Also, per suggestions by Stefano, utilize the PVHVM
paths as they share common functionality.

v2 of this patch moves most of the PVH code out into the
arch/x86/xen/grant-table driver and touches the generic
driver only minimally.

v3, v4: fix some of the code due to earlier patches.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/grant-table.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/gntdev.c       |  2 +-
 drivers/xen/grant-table.c  |  9 ++++---
 3 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 3a5f55d..2d71979 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -125,3 +125,65 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	apply_to_page_range(&init_mm, (unsigned long)shared,
 			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
 }
+#ifdef CONFIG_XEN_PVH
+#include <xen/balloon.h>
+#include <xen/events.h>
+#include <linux/slab.h>
+static int __init xlated_setup_gnttab_pages(void)
+{
+	struct page **pages;
+	xen_pfn_t *pfns;
+	int rc;
+	unsigned int i;
+	unsigned long nr_grant_frames = gnttab_max_grant_frames();
+
+	BUG_ON(nr_grant_frames == 0);
+	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
+	if (!pfns) {
+		kfree(pages);
+		return -ENOMEM;
+	}
+	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
+	if (rc) {
+		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		kfree(pages);
+		kfree(pfns);
+		return rc;
+	}
+	for (i = 0; i < nr_grant_frames; i++)
+		pfns[i] = page_to_pfn(pages[i]);
+
+	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
+				    &xen_auto_xlat_grant_frames.vaddr);
+
+	kfree(pages);
+	if (rc) {
+		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		free_xenballooned_pages(nr_grant_frames, pages);
+		kfree(pfns);
+		return rc;
+	}
+
+	xen_auto_xlat_grant_frames.pfn = pfns;
+	xen_auto_xlat_grant_frames.count = nr_grant_frames;
+
+	return 0;
+}
+
+static int __init xen_pvh_gnttab_setup(void)
+{
+	if (!xen_pvh_domain())
+		return -ENODEV;
+
+	return xlated_setup_gnttab_pages();
+}
+/* Call it _before_ __gnttab_init as we need to initialize the
+ * xen_auto_xlat_grant_frames first. */
+core_initcall(xen_pvh_gnttab_setup);
+#endif
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..073b4a1 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -846,7 +846,7 @@ static int __init gntdev_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	use_ptemod = xen_pv_domain();
+	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
 
 	err = misc_register(&gntdev_miscdev);
 	if (err != 0) {
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 6c78fd21..3d04c1c 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1108,7 +1108,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 	unsigned int nr_gframes = end_idx + 1;
 	int rc;
 
-	if (xen_hvm_domain()) {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
@@ -1184,7 +1184,7 @@ static void gnttab_request_version(void)
 	int rc;
 	struct gnttab_set_version gsv;
 
-	if (xen_hvm_domain())
+	if (xen_feature(XENFEAT_auto_translated_physmap))
 		gsv.version = 1;
 	else
 		gsv.version = 2;
@@ -1328,5 +1328,6 @@ static int __gnttab_init(void)
 
 	return gnttab_init();
 }
-
-core_initcall(__gnttab_init);
+/* Starts after core_initcall so that xen_pvh_gnttab_setup can be called
+ * beforehand to initialize xen_auto_xlat_grant_frames. */
+core_initcall_sync(__gnttab_init);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In PVH the shared grant frame is a PFN and not an MFN,
hence it is mapped via the same code path as HVM.

The allocation of the grant frame is done differently - we
do not use the early platform-pci driver with its
ioremap area - instead we use balloon memory and stitch
all of the non-contiguous pages into a virtually contiguous area.

That means when we call the hypervisor to replace the GMFN
with a XENMAPSPACE_grant_table type, we need to look up the
old PFN for every iteration instead of assuming a flat
contiguous PFN allocation.

Lastly, we only use v1 for grants. This is because PVHVM
is not able to use v2 due to no XENMEM_add_to_physmap
calls on the error status page (see commit
69e8f430e243d657c2053f097efebc2e2cd559f0,
"xen/granttable: Disable grant v2 for HVM domains").

Until that is implemented, this workaround has to
be in place.

Also, per suggestions by Stefano, utilize the PVHVM paths
as they share common functionality.

v2 of this patch moves most of the PVH code out into the
arch/x86/xen/grant-table driver and touches the generic
driver only minimally.

v3, v4: fix some of the code due to earlier patches.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/grant-table.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/gntdev.c       |  2 +-
 drivers/xen/grant-table.c  |  9 ++++---
 3 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 3a5f55d..2d71979 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -125,3 +125,65 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	apply_to_page_range(&init_mm, (unsigned long)shared,
 			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
 }
+#ifdef CONFIG_XEN_PVH
+#include <xen/balloon.h>
+#include <xen/events.h>
+#include <linux/slab.h>
+static int __init xlated_setup_gnttab_pages(void)
+{
+	struct page **pages;
+	xen_pfn_t *pfns;
+	int rc;
+	unsigned int i;
+	unsigned long nr_grant_frames = gnttab_max_grant_frames();
+
+	BUG_ON(nr_grant_frames == 0);
+	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
+	if (!pfns) {
+		kfree(pages);
+		return -ENOMEM;
+	}
+	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
+	if (rc) {
+		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		kfree(pages);
+		kfree(pfns);
+		return rc;
+	}
+	for (i = 0; i < nr_grant_frames; i++)
+		pfns[i] = page_to_pfn(pages[i]);
+
+	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
+				    &xen_auto_xlat_grant_frames.vaddr);
+	if (rc) {
+		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
+			nr_grant_frames, rc);
+		free_xenballooned_pages(nr_grant_frames, pages);
+		kfree(pages);
+		kfree(pfns);
+		return rc;
+	}
+	kfree(pages);
+
+	xen_auto_xlat_grant_frames.pfn = pfns;
+	xen_auto_xlat_grant_frames.count = nr_grant_frames;
+
+	return 0;
+}
+
+static int __init xen_pvh_gnttab_setup(void)
+{
+	if (!xen_pvh_domain())
+		return -ENODEV;
+
+	return xlated_setup_gnttab_pages();
+}
+/* Call it _before_ __gnttab_init as we need to initialize the
+ * xen_auto_xlat_grant_frames first. */
+core_initcall(xen_pvh_gnttab_setup);
+#endif
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..073b4a1 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -846,7 +846,7 @@ static int __init gntdev_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
-	use_ptemod = xen_pv_domain();
+	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
 
 	err = misc_register(&gntdev_miscdev);
 	if (err != 0) {
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 6c78fd21..3d04c1c 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1108,7 +1108,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
 	unsigned int nr_gframes = end_idx + 1;
 	int rc;
 
-	if (xen_hvm_domain()) {
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 		struct xen_add_to_physmap xatp;
 		unsigned int i = end_idx;
 		rc = 0;
@@ -1184,7 +1184,7 @@ static void gnttab_request_version(void)
 	int rc;
 	struct gnttab_set_version gsv;
 
-	if (xen_hvm_domain())
+	if (xen_feature(XENFEAT_auto_translated_physmap))
 		gsv.version = 1;
 	else
 		gsv.version = 2;
@@ -1328,5 +1328,6 @@ static int __gnttab_init(void)
 
 	return gnttab_init();
 }
-
-core_initcall(__gnttab_init);
+/* Starts after core_initcall so that xen_pvh_gnttab_setup can be called
+ * beforehand to initialize xen_auto_xlat_grant_frames. */
+core_initcall_sync(__gnttab_init);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:31:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBPP-0005Ns-92; Fri, 03 Jan 2014 20:31:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1VzBPN-0005Mi-Q8; Fri, 03 Jan 2014 20:31:54 +0000
Received: from [193.109.254.147:28792] by server-12.bemta-14.messagelabs.com
	id FD/BD-13681-93E17C25; Fri, 03 Jan 2014 20:31:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1388781110!8741390!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiB0LmNvL2U1TFFDVUQ5ZDAp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30193 invoked from network); 3 Jan 2014 20:31:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:31:51 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KVh6v009769
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:31:44 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KVgvh026356
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:31:43 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KVgJt027091; Fri, 3 Jan 2014 20:31:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:31:42 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F1AA41BFB02; Fri,  3 Jan 2014 15:31:39 -0500 (EST)
Date: Fri, 3 Jan 2014 15:31:39 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
Message-ID: <20140103203139.GA2570@phenom.dumpdata.com>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
	<556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
	<20140103184154.GA29283@phenom.dumpdata.com>
	<C862CE7C-4B8E-4A69-AD79-BE3E9417C9CD@gridcentric.ca>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <C862CE7C-4B8E-4A69-AD79-BE3E9417C9CD@gridcentric.ca>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Olaf Hering <olaf@aepfle.de>, Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 02:51:14PM -0500, Andres Lagar-Cavilla wrote:
> On Jan 3, 2014, at 1:41 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Fri, Jan 03, 2014 at 09:49:36AM -0500, Andres Lagar-Cavilla wrote:
> >> 
> >> On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:
> >> 
> >>> At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
> >>>> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
> >>>>> On Twitter, Florian Heigl sent out a few messages about issues with xenpaging:
> >>>>> 
> >>>>> ---
> >>>>> 19-Dec: Anyone successfully use #xen<https://twitter.com/search?q=%23xen&src=hash> #xenpaging<https://twitter.com/search?q=%23xenpaging&src=hash>? docs are at SLES manual, rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798<http://t.co/P36VdL84Et> dead feature or usable?
> >>>>> 
> >>>>> 22-Dec: @lars_kurth<https://twitter.com/lars_kurth> @RCPavlicek<https://twitter.com/RCPavlicek> Hey guys, I wrote down as much as I could https://piratenpad.de/p/Ik3lOBLniq1L5TEM   <https://t.co/e5LQCUD9d0> (since I'm on holiday and not constant online)
> >>>>> 
> >>>>> 22-Dec: Yay, tested #xen<https://twitter.com/search?q=%23xen&src=hash> Xenpaging (memory overcommit)
> >>>>> [x] largely untested
> >>>>> [x] docs outdated
> >>>>> [x] syntax+logic changed
> >>>>> [x] broken
> >>>>> ---
> >>>>> 
> >>>>> [I've taken the liberty of removing the colorful expletive from the final post]
> >>>>> 
> >>>>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
> >>>> 
> >>>> The Maintainers file implies otherwise. Let me CC the maintainers.
> >>> 
> >>> Andres really owns this code, so I'll punt to him for an official
> >>> answer, but:
> >> The part actively maintained is the hypervisor support for paging, and the interface.
> >> 
> >> tools/xenpaging is one way to consume that interface. It seems to have suffered from bitrot.
> > 
> > What is the other interface? Thanks!
> 
> Not sure what the question is. There is one interface. What I was referring to, is that tools/xenpaging implements one specific paging policy: victim selection, rate limiting, paging target, all of these are algorithms that entirely define what bang for your money you will get.
> 

Right, but there is other code that uses this interface as well, correct?
Is it available for users?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:35:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBSx-0006Xk-3H; Fri, 03 Jan 2014 20:35:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBSv-0006XM-Vk
	for xen-devel@lists.xensource.com; Fri, 03 Jan 2014 20:35:34 +0000
Received: from [85.158.139.211:9795] by server-17.bemta-5.messagelabs.com id
	52/4C-19152-51F17C25; Fri, 03 Jan 2014 20:35:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1388781330!7758570!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3372 invoked from network); 3 Jan 2014 20:35:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:35:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03KYR93027998
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:34:27 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KYOmb002458
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:34:26 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KYOnA002082; Fri, 3 Jan 2014 20:34:24 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:34:24 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C22931BFB02; Fri,  3 Jan 2014 15:34:22 -0500 (EST)
Date: Fri, 3 Jan 2014 15:34:22 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103203422.GA2668@phenom.dumpdata.com>
References: <1388775730-2984-1-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388775730-2984-1-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: linux-fbdev@vger.kernel.org, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, tomi.valkeinen@ti.com,
	boris.ostrovsky@oracle.com, plagnioj@jcrosoft.com
Subject: Re: [Xen-devel] [PATCH v2] allow xenfb initialization for hvm guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 07:02:09PM +0000, Stefano Stabellini wrote:

The title needs an 'xen/fb' prefix, but that is easy enough.

> There is no reason why an HVM guest shouldn't be allowed to use xenfb.
> As a matter of fact ARM guests, HVM from Linux POV, can use xenfb.
> Given that no Xen toolstacks configure a xenfb backend for x86 HVM
> guests, they are not affected.
> 
> Please note that at this time QEMU needs a few outstanding fixes to
> provide xenfb on ARM:
> 
> http://marc.info/?l=qemu-devel&m=138739419700837&w=2

Cool. Is the video maintainer OK with the Xen maintainers stashing it
in the Xen tree for Linus?

Thanks!
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: David Vrabel <david.vrabel@citrix.com>
> CC: boris.ostrovsky@oracle.com
> CC: plagnioj@jcrosoft.com
> CC: tomi.valkeinen@ti.com
> CC: linux-fbdev@vger.kernel.org
> CC: konrad.wilk@oracle.com
> ---
>  drivers/video/xen-fbfront.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/video/xen-fbfront.c b/drivers/video/xen-fbfront.c
> index cd005c2..02e1c01 100644
> --- a/drivers/video/xen-fbfront.c
> +++ b/drivers/video/xen-fbfront.c
> @@ -692,7 +692,7 @@ static DEFINE_XENBUS_DRIVER(xenfb, ,
>  
>  static int __init xenfb_init(void)
>  {
> -	if (!xen_pv_domain())
> +	if (!xen_domain())
>  		return -ENODEV;
>  
>  	/* Nothing to do if running in dom0. */
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:46:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBcu-0007RY-8X; Fri, 03 Jan 2014 20:45:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBcs-0007RR-K9
	for xen-devel@lists.xenproject.org; Fri, 03 Jan 2014 20:45:50 +0000
Received: from [85.158.137.68:32546] by server-17.bemta-3.messagelabs.com id
	73/1C-15965-D7127C25; Fri, 03 Jan 2014 20:45:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388781947!7169505!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32185 invoked from network); 3 Jan 2014 20:45:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 20:45:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Kjb6N005543
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:45:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Kjac9023439
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:45:37 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Kja4t023432; Fri, 3 Jan 2014 20:45:36 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:45:36 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0A7541BFB02; Fri,  3 Jan 2014 15:45:35 -0500 (EST)
Date: Fri, 3 Jan 2014 15:45:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140103204534.GA2732@phenom.dumpdata.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-4-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1387884062-41154-4-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: jhb@freebsd.org, julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 03/13] xen: mask event channels while
 changing affinity
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 24, 2013 at 12:20:52PM +0100, Roger Pau Monne wrote:
> Event channels should be masked while chaning affinity, or else we
changing

> might get spurious/lost interrupts.
> ---
>  sys/x86/xen/xen_intr.c |   15 ++++++++++++---
>  1 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/sys/x86/xen/xen_intr.c b/sys/x86/xen/xen_intr.c
> index fd36e68..bc0781e 100644
> --- a/sys/x86/xen/xen_intr.c
> +++ b/sys/x86/xen/xen_intr.c
> @@ -797,7 +797,7 @@ xen_intr_assign_cpu(struct intsrc *base_isrc, u_int apic_id)
>  	struct evtchn_bind_vcpu bind_vcpu;
>  	struct xenisrc *isrc;
>  	u_int to_cpu, vcpu_id;
> -	int error;
> +	int error, masked;
>  
>  #ifdef XENHVM
>  	if (xen_vector_callback_enabled == 0)
> @@ -815,6 +815,12 @@ xen_intr_assign_cpu(struct intsrc *base_isrc, u_int apic_id)
>  		return (EINVAL);
>  	}
>  
> +	/*
> +	 * Mask the event channel port so we don't receive spurious events
> +	 * while changing affinity.
> +	 */
> +	masked = evtchn_test_and_set_mask(isrc->xi_port);
> +
>  	if ((isrc->xi_type == EVTCHN_TYPE_VIRQ) ||
>  		(isrc->xi_type == EVTCHN_TYPE_IPI)) {
>  		/*
> @@ -825,8 +831,7 @@ xen_intr_assign_cpu(struct intsrc *base_isrc, u_int apic_id)
>  		evtchn_cpu_mask_port(isrc->xi_cpu, isrc->xi_port);
>  		isrc->xi_cpu = to_cpu;
>  		evtchn_cpu_unmask_port(isrc->xi_cpu, isrc->xi_port);
> -		mtx_unlock(&xen_intr_isrc_lock);
> -		return (0);
> +		goto out;
>  	}
>  
>  	bind_vcpu.port = isrc->xi_port;
> @@ -848,6 +853,10 @@ xen_intr_assign_cpu(struct intsrc *base_isrc, u_int apic_id)
>  			evtchn_cpu_mask_port(to_cpu, isrc->xi_port);
>  		}
>  	}
> +
> +out:
> +	if (!masked)
> +		evtchn_unmask_port(isrc->xi_port);
>  	mtx_unlock(&xen_intr_isrc_lock);
>  	return (0);
>  #else
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
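The pattern in the patch above — test-and-set the mask bit, do the migration, then unmask only if the port was not already masked — can be sketched in isolation. The bitmap and port helpers below are simplified single-threaded stand-ins for the real event-channel code, not the FreeBSD implementation:

```c
#include <assert.h>

#define NR_PORTS 64

static unsigned long mask_bits;     /* one mask bit per event channel port */
static int port_cpu[NR_PORTS];      /* current vCPU binding per port */

/* Set the mask bit and return its previous value (atomic in the real code). */
static int evtchn_test_and_set_mask(int port)
{
    int was_masked = (mask_bits >> port) & 1;

    mask_bits |= 1UL << port;
    return was_masked;
}

static void evtchn_unmask_port(int port)
{
    mask_bits &= ~(1UL << port);
}

/* Rebind a port while masked, then restore the prior mask state. */
static void assign_cpu(int port, int to_cpu)
{
    int masked = evtchn_test_and_set_mask(port);

    port_cpu[port] = to_cpu;        /* no events can fire mid-rebind */

    if (!masked)
        evtchn_unmask_port(port);
}
```

Restoring rather than unconditionally unmasking matters: if the caller had deliberately masked the port beforehand, the affinity change must not silently re-enable delivery.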

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:54:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBkv-0007oA-BC; Fri, 03 Jan 2014 20:54:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBku-0007o2-3h
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 20:54:08 +0000
Received: from [85.158.139.211:61339] by server-12.bemta-5.messagelabs.com id
	C0/B1-30017-F6327C25; Fri, 03 Jan 2014 20:54:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388782445!7730077!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10309 invoked from network); 3 Jan 2014 20:54:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 20:54:06 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Ks1CX012946
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:54:01 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ks07A008745
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:54:00 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ks0M0008785; Fri, 3 Jan 2014 20:54:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:53:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 515271BFB02; Fri,  3 Jan 2014 15:53:52 -0500 (EST)
Date: Fri, 3 Jan 2014 15:53:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140103205352.GB2732@phenom.dumpdata.com>
References: <1387479296-33389-1-git-send-email-roger.pau@citrix.com>
	<1387479296-33389-13-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1387479296-33389-13-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v7 12/19] xen: add a hook to perform AP
 startup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 19, 2013 at 07:54:49PM +0100, Roger Pau Monne wrote:
> AP startup on PVH follows the PV method, so we need to add a hook in
> order to diverge from bare metal.
> ---
>  sys/amd64/amd64/mp_machdep.c |   16 ++++---
>  sys/amd64/include/cpu.h      |    1 +
>  sys/x86/xen/hvm.c            |   17 +++++++-
>  sys/x86/xen/pv.c             |   90 ++++++++++++++++++++++++++++++++++++++++++
>  sys/xen/pv.h                 |    1 +
>  5 files changed, 117 insertions(+), 8 deletions(-)
> 
> diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
> index 4ef4b3d..e302886 100644
> --- a/sys/amd64/amd64/mp_machdep.c
> +++ b/sys/amd64/amd64/mp_machdep.c
> @@ -90,7 +90,7 @@ extern  struct pcpu __pcpu[];
>  
>  /* AP uses this during bootstrap.  Do not staticize.  */
>  char *bootSTK;
> -static int bootAP;
> +int bootAP;
>  
>  /* Free these after use */
>  void *bootstacks[MAXCPU];
> @@ -122,9 +122,12 @@ u_long *ipi_rendezvous_counts[MAXCPU];
>  static u_long *ipi_hardclock_counts[MAXCPU];
>  #endif
>  
> +int native_start_all_aps(void);
> +
>  /* Default cpu_ops implementation. */
>  struct cpu_ops cpu_ops = {
> -	.ipi_vectored = lapic_ipi_vectored
> +	.ipi_vectored = lapic_ipi_vectored,
> +	.start_all_aps = native_start_all_aps,
>  };
>  
>  extern inthand_t IDTVEC(fast_syscall), IDTVEC(fast_syscall32);
> @@ -138,7 +141,7 @@ extern int pmap_pcid_enabled;
>  static volatile cpuset_t ipi_nmi_pending;
>  
>  /* used to hold the AP's until we are ready to release them */
> -static struct mtx ap_boot_mtx;
> +struct mtx ap_boot_mtx;
>  
>  /* Set to 1 once we're ready to let the APs out of the pen. */
>  static volatile int aps_ready = 0;
> @@ -165,7 +168,6 @@ static int cpu_cores;			/* cores per package */
>  
>  static void	assign_cpu_ids(void);
>  static void	set_interrupt_apic_ids(void);
> -static int	start_all_aps(void);
>  static int	start_ap(int apic_id);
>  static void	release_aps(void *dummy);
>  
> @@ -569,7 +571,7 @@ cpu_mp_start(void)
>  	assign_cpu_ids();
>  
>  	/* Start each Application Processor */
> -	start_all_aps();
> +	cpu_ops.start_all_aps();
>  
>  	set_interrupt_apic_ids();
>  }
> @@ -908,8 +910,8 @@ assign_cpu_ids(void)
>  /*
>   * start each AP in our list
>   */
> -static int
> -start_all_aps(void)
> +int
> +native_start_all_aps(void)
>  {
>  	vm_offset_t va = boot_address + KERNBASE;
>  	u_int64_t *pt4, *pt3, *pt2;
> diff --git a/sys/amd64/include/cpu.h b/sys/amd64/include/cpu.h
> index 3d9ff531..ed9f1db 100644
> --- a/sys/amd64/include/cpu.h
> +++ b/sys/amd64/include/cpu.h
> @@ -64,6 +64,7 @@ struct cpu_ops {
>  	void (*cpu_init)(void);
>  	void (*cpu_resume)(void);
>  	void (*ipi_vectored)(u_int, int);
> +	int  (*start_all_aps)(void);
>  };
>  
>  extern struct	cpu_ops cpu_ops;
> diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
> index fb1ed79..5ec9f3a 100644
> --- a/sys/x86/xen/hvm.c
> +++ b/sys/x86/xen/hvm.c
> @@ -53,6 +53,9 @@ __FBSDID("$FreeBSD$");
>  #include <xen/hypervisor.h>
>  #include <xen/hvm.h>
>  #include <xen/xen_intr.h>
> +#ifdef __amd64__
> +#include <xen/pv.h>
> +#endif
>  
>  #include <xen/interface/hvm/params.h>
>  #include <xen/interface/vcpu.h>
> @@ -97,6 +100,11 @@ extern void pmap_lazyfix_action(void);
>  /* Variables used by mp_machdep to perform the bitmap IPI */
>  extern volatile u_int cpu_ipi_pending[MAXCPU];
>  
> +#ifdef __amd64__
> +/* Native AP start used on PVHVM */
> +extern int native_start_all_aps(void);
> +#endif
> +
>  /*---------------------------------- Macros ----------------------------------*/
>  #define	IPI_TO_IDX(ipi) ((ipi) - APIC_IPI_INTS)
>  
> @@ -119,7 +127,10 @@ enum xen_domain_type xen_domain_type = XEN_NATIVE;
>  struct cpu_ops xen_hvm_cpu_ops = {
>  	.ipi_vectored	= lapic_ipi_vectored,
>  	.cpu_init	= xen_hvm_cpu_init,
> -	.cpu_resume	= xen_hvm_cpu_resume
> +	.cpu_resume	= xen_hvm_cpu_resume,
> +#ifdef __amd64__
> +	.start_all_aps = native_start_all_aps,
> +#endif
>  };
>  
>  static MALLOC_DEFINE(M_XENHVM, "xen_hvm", "Xen HVM PV Support");
> @@ -698,6 +709,10 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
>  		setup_xen_features();
>  		cpu_ops = xen_hvm_cpu_ops;
>   		vm_guest = VM_GUEST_XEN;
> +#ifdef __amd64__
> +		if (xen_pv_domain())
> +			cpu_ops.start_all_aps = xen_pv_start_all_aps;
> +#endif
>  		break;
>  	case XEN_HVM_INIT_RESUME:
>  		if (error != 0)
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index db576e0..7e45a83 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -34,21 +34,43 @@ __FBSDID("$FreeBSD$");
>  #include <sys/kernel.h>
>  #include <sys/reboot.h>
>  #include <sys/systm.h>
> +#include <sys/malloc.h>
> +#include <sys/lock.h>
> +#include <sys/mutex.h>
> +#include <sys/smp.h>
> +
> +#include <vm/vm.h>
> +#include <vm/pmap.h>
> +#include <vm/vm_extern.h>
> +#include <vm/vm_kern.h>
>  
>  #include <machine/sysarch.h>
>  #include <machine/clock.h>
>  #include <machine/pc/bios.h>
> +#include <machine/smp.h>
>  
>  #include <xen/xen-os.h>
>  #include <xen/pv.h>
>  #include <xen/hypervisor.h>
>  
> +#include <xen/interface/vcpu.h>
> +
>  #define MAX_E820_ENTRIES	128
>  
>  /*--------------------------- Forward Declarations ---------------------------*/
>  static caddr_t xen_pv_parse_preload_data(u_int64_t);
>  static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
>  
> +/*---------------------------- Extern Declarations ---------------------------*/
> +/* Variables used by amd64 mp_machdep to start APs */
> +extern struct mtx ap_boot_mtx;
> +extern void *bootstacks[];
> +extern char *doublefault_stack;
> +extern char *nmi_stack;
> +extern void *dpcpu;
> +extern int bootAP;
> +extern char *bootSTK;
> +
>  /*-------------------------------- Global Data -------------------------------*/
>  /* Xen init_ops implementation. */
>  struct init_ops xen_init_ops = {
> @@ -78,6 +100,74 @@ static struct
>  
>  static struct bios_smap xen_smap[MAX_E820_ENTRIES];
>  
> +static int
> +start_xen_ap(int cpu)
> +{
> +	struct vcpu_guest_context *ctxt;
> +	int ms, cpus = mp_naps;
> +
> +	ctxt = malloc(sizeof(*ctxt), M_TEMP, M_NOWAIT | M_ZERO);
> +	if (ctxt == NULL)
> +		panic("unable to allocate memory");
> +
> +	ctxt->flags = VGCF_IN_KERNEL;
> +	ctxt->user_regs.rip = (unsigned long) init_secondary;
> +	ctxt->user_regs.rsp = (unsigned long) bootSTK;
> +
> +	/* Set the AP to use the same page tables */
> +	ctxt->ctrlreg[3] = KPML4phys;
> +
> +	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
> +		panic("unable to initialize AP#%d\n", cpu);
> +
> +	free(ctxt, M_TEMP);
> +
> +	/* Launch the vCPU */
> +	if (HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
> +		panic("unable to start AP#%d\n", cpu);

Why the panic? Why not just return 0 (which I presume
would mean we failed to init the CPU?)
> +
> +	/* Wait up to 5 seconds for it to start. */
> +	for (ms = 0; ms < 5000; ms++) {
> +		if (mp_naps > cpus)
> +			return (1);	/* return SUCCESS */
> +		DELAY(1000);
> +	}
> +

Wow. So much simpler than the PV path.

> +	return (0);

There isn't a doorbell mechanism for the booted processor to tell
the BSP it is up?

> +}
> +
> +int
> +xen_pv_start_all_aps(void)
> +{
> +	int cpu;
> +
> +	mtx_init(&ap_boot_mtx, "ap boot", NULL, MTX_SPIN);
> +
> +	for (cpu = 1; cpu < mp_ncpus; cpu++) {
> +
> +		/* allocate and set up an idle stack data page */
> +		bootstacks[cpu] = (void *)kmem_malloc(kernel_arena,
> +		    KSTACK_PAGES * PAGE_SIZE, M_WAITOK | M_ZERO);
> +		doublefault_stack = (char *)kmem_malloc(kernel_arena,
> +		    PAGE_SIZE, M_WAITOK | M_ZERO);
> +		nmi_stack = (char *)kmem_malloc(kernel_arena, PAGE_SIZE,
> +		    M_WAITOK | M_ZERO);
> +		dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
> +		    M_WAITOK | M_ZERO);
> +
> +		bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 8;
> +		bootAP = cpu;
> +
> +		/* attempt to start the Application Processor */
> +		if (!start_xen_ap(cpu))
> +			panic("AP #%d failed to start!", cpu);
> +
> +		CPU_SET(cpu, &all_cpus);	/* record AP in CPU map */
> +	}
> +
> +	return (mp_naps);
> +}
> +
>  /*
>   * Functions to convert the "extra" parameters passed by Xen
>   * into FreeBSD boot options (from the i386 Xen port).
> diff --git a/sys/xen/pv.h b/sys/xen/pv.h
> index 71b8776..60d9def 100644
> --- a/sys/xen/pv.h
> +++ b/sys/xen/pv.h
> @@ -24,5 +24,6 @@
>  #define	__XEN_PV_H__
>  
>  void	xen_pv_set_init_ops(void);
> +int	xen_pv_start_all_aps(void);
>  
>  #endif	/* __XEN_PV_H__ */
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
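Konrad's doorbell question aside, the bring-up wait in `start_xen_ap()` is a bounded poll on a shared counter. A simplified, single-threaded sketch of that timeout loop, where the counter and the delay helper are stand-ins for `mp_naps` and `DELAY()` (here the delay also simulates the AP checking in, so the loop can be exercised without a second CPU):

```c
#include <assert.h>

static volatile int aps_online;     /* stand-in for mp_naps */
static int fake_ms_elapsed;

/* Stand-in for DELAY(1000); simulates an AP coming up after 3 "ms". */
static void delay_1ms(void)
{
    if (++fake_ms_elapsed == 3)
        aps_online++;
}

/* Wait up to timeout_ms for the counter to move past prev_count.
 * Returns 1 on success, 0 on timeout, mirroring start_xen_ap(). */
static int wait_for_ap(int prev_count, int timeout_ms)
{
    int ms;

    for (ms = 0; ms < timeout_ms; ms++) {
        if (aps_online > prev_count)
            return 1;
        delay_1ms();
    }
    return 0;
}
```

In the real patch the AP itself increments `mp_naps` from `init_secondary()`, so the BSP learns of the AP purely by polling; a doorbell (event channel or IPI from the AP) would let the BSP sleep instead of busy-waiting, which is what the review comment is probing at.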

From xen-devel-bounces@lists.xen.org Fri Jan 03 20:54:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 20:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBkv-0007oA-BC; Fri, 03 Jan 2014 20:54:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBku-0007o2-3h
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 20:54:08 +0000
Received: from [85.158.139.211:61339] by server-12.bemta-5.messagelabs.com id
	C0/B1-30017-F6327C25; Fri, 03 Jan 2014 20:54:07 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388782445!7730077!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10309 invoked from network); 3 Jan 2014 20:54:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Jan 2014 20:54:06 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Ks1CX012946
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:54:01 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ks07A008745
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:54:00 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03Ks0M0008785; Fri, 3 Jan 2014 20:54:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:53:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 515271BFB02; Fri,  3 Jan 2014 15:53:52 -0500 (EST)
Date: Fri, 3 Jan 2014 15:53:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140103205352.GB2732@phenom.dumpdata.com>
References: <1387479296-33389-1-git-send-email-roger.pau@citrix.com>
	<1387479296-33389-13-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1387479296-33389-13-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v7 12/19] xen: add a hook to perform AP
 startup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Dec 19, 2013 at 07:54:49PM +0100, Roger Pau Monne wrote:
> AP startup on PVH follows the PV method, so we need to add a hook in
> order to diverge from bare metal.
> ---
>  sys/amd64/amd64/mp_machdep.c |   16 ++++---
>  sys/amd64/include/cpu.h      |    1 +
>  sys/x86/xen/hvm.c            |   17 +++++++-
>  sys/x86/xen/pv.c             |   90 ++++++++++++++++++++++++++++++++++++++++++
>  sys/xen/pv.h                 |    1 +
>  5 files changed, 117 insertions(+), 8 deletions(-)
> 
> diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
> index 4ef4b3d..e302886 100644
> --- a/sys/amd64/amd64/mp_machdep.c
> +++ b/sys/amd64/amd64/mp_machdep.c
> @@ -90,7 +90,7 @@ extern  struct pcpu __pcpu[];
>  
>  /* AP uses this during bootstrap.  Do not staticize.  */
>  char *bootSTK;
> -static int bootAP;
> +int bootAP;
>  
>  /* Free these after use */
>  void *bootstacks[MAXCPU];
> @@ -122,9 +122,12 @@ u_long *ipi_rendezvous_counts[MAXCPU];
>  static u_long *ipi_hardclock_counts[MAXCPU];
>  #endif
>  
> +int native_start_all_aps(void);
> +
>  /* Default cpu_ops implementation. */
>  struct cpu_ops cpu_ops = {
> -	.ipi_vectored = lapic_ipi_vectored
> +	.ipi_vectored = lapic_ipi_vectored,
> +	.start_all_aps = native_start_all_aps,
>  };
>  
>  extern inthand_t IDTVEC(fast_syscall), IDTVEC(fast_syscall32);
> @@ -138,7 +141,7 @@ extern int pmap_pcid_enabled;
>  static volatile cpuset_t ipi_nmi_pending;
>  
>  /* used to hold the AP's until we are ready to release them */
> -static struct mtx ap_boot_mtx;
> +struct mtx ap_boot_mtx;
>  
>  /* Set to 1 once we're ready to let the APs out of the pen. */
>  static volatile int aps_ready = 0;
> @@ -165,7 +168,6 @@ static int cpu_cores;			/* cores per package */
>  
>  static void	assign_cpu_ids(void);
>  static void	set_interrupt_apic_ids(void);
> -static int	start_all_aps(void);
>  static int	start_ap(int apic_id);
>  static void	release_aps(void *dummy);
>  
> @@ -569,7 +571,7 @@ cpu_mp_start(void)
>  	assign_cpu_ids();
>  
>  	/* Start each Application Processor */
> -	start_all_aps();
> +	cpu_ops.start_all_aps();
>  
>  	set_interrupt_apic_ids();
>  }
> @@ -908,8 +910,8 @@ assign_cpu_ids(void)
>  /*
>   * start each AP in our list
>   */
> -static int
> -start_all_aps(void)
> +int
> +native_start_all_aps(void)
>  {
>  	vm_offset_t va = boot_address + KERNBASE;
>  	u_int64_t *pt4, *pt3, *pt2;
> diff --git a/sys/amd64/include/cpu.h b/sys/amd64/include/cpu.h
> index 3d9ff531..ed9f1db 100644
> --- a/sys/amd64/include/cpu.h
> +++ b/sys/amd64/include/cpu.h
> @@ -64,6 +64,7 @@ struct cpu_ops {
>  	void (*cpu_init)(void);
>  	void (*cpu_resume)(void);
>  	void (*ipi_vectored)(u_int, int);
> +	int  (*start_all_aps)(void);
>  };
>  
>  extern struct	cpu_ops cpu_ops;
> diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
> index fb1ed79..5ec9f3a 100644
> --- a/sys/x86/xen/hvm.c
> +++ b/sys/x86/xen/hvm.c
> @@ -53,6 +53,9 @@ __FBSDID("$FreeBSD$");
>  #include <xen/hypervisor.h>
>  #include <xen/hvm.h>
>  #include <xen/xen_intr.h>
> +#ifdef __amd64__
> +#include <xen/pv.h>
> +#endif
>  
>  #include <xen/interface/hvm/params.h>
>  #include <xen/interface/vcpu.h>
> @@ -97,6 +100,11 @@ extern void pmap_lazyfix_action(void);
>  /* Variables used by mp_machdep to perform the bitmap IPI */
>  extern volatile u_int cpu_ipi_pending[MAXCPU];
>  
> +#ifdef __amd64__
> +/* Native AP start used on PVHVM */
> +extern int native_start_all_aps(void);
> +#endif
> +
>  /*---------------------------------- Macros ----------------------------------*/
>  #define	IPI_TO_IDX(ipi) ((ipi) - APIC_IPI_INTS)
>  
> @@ -119,7 +127,10 @@ enum xen_domain_type xen_domain_type = XEN_NATIVE;
>  struct cpu_ops xen_hvm_cpu_ops = {
>  	.ipi_vectored	= lapic_ipi_vectored,
>  	.cpu_init	= xen_hvm_cpu_init,
> -	.cpu_resume	= xen_hvm_cpu_resume
> +	.cpu_resume	= xen_hvm_cpu_resume,
> +#ifdef __amd64__
> +	.start_all_aps = native_start_all_aps,
> +#endif
>  };
>  
>  static MALLOC_DEFINE(M_XENHVM, "xen_hvm", "Xen HVM PV Support");
> @@ -698,6 +709,10 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
>  		setup_xen_features();
>  		cpu_ops = xen_hvm_cpu_ops;
>   		vm_guest = VM_GUEST_XEN;
> +#ifdef __amd64__
> +		if (xen_pv_domain())
> +			cpu_ops.start_all_aps = xen_pv_start_all_aps;
> +#endif
>  		break;
>  	case XEN_HVM_INIT_RESUME:
>  		if (error != 0)
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index db576e0..7e45a83 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -34,21 +34,43 @@ __FBSDID("$FreeBSD$");
>  #include <sys/kernel.h>
>  #include <sys/reboot.h>
>  #include <sys/systm.h>
> +#include <sys/malloc.h>
> +#include <sys/lock.h>
> +#include <sys/mutex.h>
> +#include <sys/smp.h>
> +
> +#include <vm/vm.h>
> +#include <vm/pmap.h>
> +#include <vm/vm_extern.h>
> +#include <vm/vm_kern.h>
>  
>  #include <machine/sysarch.h>
>  #include <machine/clock.h>
>  #include <machine/pc/bios.h>
> +#include <machine/smp.h>
>  
>  #include <xen/xen-os.h>
>  #include <xen/pv.h>
>  #include <xen/hypervisor.h>
>  
> +#include <xen/interface/vcpu.h>
> +
>  #define MAX_E820_ENTRIES	128
>  
>  /*--------------------------- Forward Declarations ---------------------------*/
>  static caddr_t xen_pv_parse_preload_data(u_int64_t);
>  static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
>  
> +/*---------------------------- Extern Declarations ---------------------------*/
> +/* Variables used by amd64 mp_machdep to start APs */
> +extern struct mtx ap_boot_mtx;
> +extern void *bootstacks[];
> +extern char *doublefault_stack;
> +extern char *nmi_stack;
> +extern void *dpcpu;
> +extern int bootAP;
> +extern char *bootSTK;
> +
>  /*-------------------------------- Global Data -------------------------------*/
>  /* Xen init_ops implementation. */
>  struct init_ops xen_init_ops = {
> @@ -78,6 +100,74 @@ static struct
>  
>  static struct bios_smap xen_smap[MAX_E820_ENTRIES];
>  
> +static int
> +start_xen_ap(int cpu)
> +{
> +	struct vcpu_guest_context *ctxt;
> +	int ms, cpus = mp_naps;
> +
> +	ctxt = malloc(sizeof(*ctxt), M_TEMP, M_NOWAIT | M_ZERO);
> +	if (ctxt == NULL)
> +		panic("unable to allocate memory");
> +
> +	ctxt->flags = VGCF_IN_KERNEL;
> +	ctxt->user_regs.rip = (unsigned long) init_secondary;
> +	ctxt->user_regs.rsp = (unsigned long) bootSTK;
> +
> +	/* Set the AP to use the same page tables */
> +	ctxt->ctrlreg[3] = KPML4phys;
> +
> +	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
> +		panic("unable to initialize AP#%d\n", cpu);
> +
> +	free(ctxt, M_TEMP);
> +
> +	/* Launch the vCPU */
> +	if (HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
> +		panic("unable to start AP#%d\n", cpu);

Why the panic? Why not just return 0 (which I presume
would mean we failed to init the CPU?)
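
For comparison, the error-return shape being suggested might look like the sketch below; `vcpu_op` is a made-up stub standing in for HYPERVISOR_vcpu_op, purely to illustrate reporting failure to the caller instead of panicking:

```c
#include <stdio.h>

/* Hypothetical stub for HYPERVISOR_vcpu_op(); nonzero means the call failed. */
static int vcpu_op_result;
static int
vcpu_op(int cpu)
{
	(void)cpu;
	return (vcpu_op_result);
}

/*
 * Sketch only: return 0 on failure instead of panicking, so the caller
 * (e.g. xen_pv_start_all_aps) can decide whether a missing AP is fatal.
 */
static int
start_ap_sketch(int cpu)
{
	if (vcpu_op(cpu) != 0) {
		printf("unable to start AP#%d\n", cpu);
		return (0);	/* failed to bring up this CPU */
	}
	return (1);		/* success */
}
```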
> +
> +	/* Wait up to 5 seconds for it to start. */
> +	for (ms = 0; ms < 5000; ms++) {
> +		if (mp_naps > cpus)
> +			return (1);	/* return SUCCESS */
> +		DELAY(1000);
> +	}
> +

Wow. So much simpler than the PV path.

> +	return (0);

There isn't a doorbell mechanism for the booted processor to tell
the BSP it is up?
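
Not that I can see in this patch: it just polls mp_naps, which init_secondary bumps once the AP is running. The general shape (a shared counter bumped by the woken CPU, a bounded poll by the waiter) can be sketched in portable C; ap_count, ap_entry and wait_for_ap are invented names for illustration only:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

static atomic_int ap_count;		/* plays the role of mp_naps */

/* The "AP": announces itself by bumping the counter, as init_secondary does. */
static void *
ap_entry(void *arg)
{
	(void)arg;
	atomic_fetch_add(&ap_count, 1);
	return (NULL);
}

/* The "BSP": polls up to ~5 seconds for the counter to advance. */
static int
wait_for_ap(int prev)
{
	int ms;

	for (ms = 0; ms < 5000; ms++) {
		if (atomic_load(&ap_count) > prev)
			return (1);	/* AP checked in */
		usleep(1000);		/* stands in for DELAY(1000) */
	}
	return (0);			/* timed out */
}

static int
start_and_wait(void)
{
	pthread_t t;
	int prev = atomic_load(&ap_count);

	if (pthread_create(&t, NULL, ap_entry, NULL) != 0)
		return (0);
	if (wait_for_ap(prev) == 0)
		return (0);
	pthread_join(t, NULL);
	return (1);
}
```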

> +}
> +
> +int
> +xen_pv_start_all_aps(void)
> +{
> +	int cpu;
> +
> +	mtx_init(&ap_boot_mtx, "ap boot", NULL, MTX_SPIN);
> +
> +	for (cpu = 1; cpu < mp_ncpus; cpu++) {
> +
> +		/* allocate and set up an idle stack data page */
> +		bootstacks[cpu] = (void *)kmem_malloc(kernel_arena,
> +		    KSTACK_PAGES * PAGE_SIZE, M_WAITOK | M_ZERO);
> +		doublefault_stack = (char *)kmem_malloc(kernel_arena,
> +		    PAGE_SIZE, M_WAITOK | M_ZERO);
> +		nmi_stack = (char *)kmem_malloc(kernel_arena, PAGE_SIZE,
> +		    M_WAITOK | M_ZERO);
> +		dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
> +		    M_WAITOK | M_ZERO);
> +
> +		bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 8;
> +		bootAP = cpu;
> +
> +		/* attempt to start the Application Processor */
> +		if (!start_xen_ap(cpu))
> +			panic("AP #%d failed to start!", cpu);
> +
> +		CPU_SET(cpu, &all_cpus);	/* record AP in CPU map */
> +	}
> +
> +	return (mp_naps);
> +}
> +
>  /*
>   * Functions to convert the "extra" parameters passed by Xen
>   * into FreeBSD boot options (from the i386 Xen port).
> diff --git a/sys/xen/pv.h b/sys/xen/pv.h
> index 71b8776..60d9def 100644
> --- a/sys/xen/pv.h
> +++ b/sys/xen/pv.h
> @@ -24,5 +24,6 @@
>  #define	__XEN_PV_H__
>  
>  void	xen_pv_set_init_ops(void);
> +int	xen_pv_start_all_aps(void);
>  
>  #endif	/* __XEN_PV_H__ */
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 21:00:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 21:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzBqg-00086w-8X; Fri, 03 Jan 2014 21:00:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzBqe-000855-D4
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 21:00:04 +0000
Received: from [85.158.139.211:21472] by server-17.bemta-5.messagelabs.com id
	4F/36-19152-3D427C25; Fri, 03 Jan 2014 21:00:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388782801!7730773!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26532 invoked from network); 3 Jan 2014 21:00:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Jan 2014 21:00:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s03Kxv22018306
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 3 Jan 2014 20:59:58 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KxucX029193
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 3 Jan 2014 20:59:57 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s03KxuiY019232; Fri, 3 Jan 2014 20:59:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 12:59:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 679C21BFB02; Fri,  3 Jan 2014 15:59:49 -0500 (EST)
Date: Fri, 3 Jan 2014 15:59:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140103205949.GC2732@phenom.dumpdata.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-5-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388677433-49525-5-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 04/19] amd64: introduce hook for custom
 preload metadata parsers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 02, 2014 at 04:43:38PM +0100, Roger Pau Monne wrote:
> ---
>  sys/amd64/amd64/machdep.c   |   41 ++++++++++++++++------
>  sys/amd64/include/sysarch.h |   12 ++++++
>  sys/x86/xen/pv.c            |   82 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 124 insertions(+), 11 deletions(-)
> 
> diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
> index eae657b..e073eea 100644
> --- a/sys/amd64/amd64/machdep.c
> +++ b/sys/amd64/amd64/machdep.c
> @@ -126,6 +126,7 @@ __FBSDID("$FreeBSD$");
>  #include <machine/reg.h>
>  #include <machine/sigframe.h>
>  #include <machine/specialreg.h>
> +#include <machine/sysarch.h>
>  #ifdef PERFMON
>  #include <machine/perfmon.h>
>  #endif
> @@ -165,6 +166,14 @@ static int  set_fpcontext(struct thread *td, const mcontext_t *mcp,
>      char *xfpustate, size_t xfpustate_len);
>  SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
>  
> +/* Preload data parse function */
> +static caddr_t native_parse_preload_data(u_int64_t);
> +
> +/* Default init_ops implementation. */
> +struct init_ops init_ops = {
> +	.parse_preload_data =	native_parse_preload_data,

Extra space there.

> +};
> +
>  /*
>   * The file "conf/ldscript.amd64" defines the symbol "kernphys".  Its value is
>   * the physical address at which the kernel is loaded.
> @@ -1683,6 +1692,26 @@ do_next:
>  	msgbufp = (struct msgbuf *)PHYS_TO_DMAP(phys_avail[pa_indx]);
>  }
>  
> +static caddr_t
> +native_parse_preload_data(u_int64_t modulep)
> +{
> +	caddr_t kmdp;
> +
> +	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);

Two casts? Could it be done via one?
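
On LP64 amd64 the intermediate cast is redundant in the sense that uintptr_t and u_int64_t have the same width, so a single conversion yields the same pointer; the two-step form mainly keeps -Wint-to-pointer-cast quiet on targets where the widths differ. A quick illustration (KERNBASE_DEMO is a made-up constant, not the real KERNBASE):

```c
#include <stdint.h>

typedef char *caddr_t;			/* as in <sys/types.h> */

#define	KERNBASE_DEMO	0x100000UL	/* made-up stand-in for KERNBASE */

/* Two-step cast, as in the patch: narrow to uintptr_t, then to pointer. */
static caddr_t
two_casts(uint64_t modulep)
{
	return ((caddr_t)(uintptr_t)(modulep + KERNBASE_DEMO));
}

/* Single cast: fine where sizeof(void *) == sizeof(uint64_t), e.g. amd64. */
static caddr_t
one_cast(uint64_t modulep)
{
	return ((caddr_t)(modulep + KERNBASE_DEMO));
}
```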

> +	preload_bootstrap_relocate(KERNBASE);
> +	kmdp = preload_search_by_type("elf kernel");
> +	if (kmdp == NULL)
> +		kmdp = preload_search_by_type("elf64 kernel");
> +	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
> +	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
> +#ifdef DDB
> +	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
> +	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
> +#endif
> +
> +	return (kmdp);
> +}
> +
>  u_int64_t
>  hammer_time(u_int64_t modulep, u_int64_t physfree)
>  {
> @@ -1707,17 +1736,7 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
>  	 */
>  	proc_linkup0(&proc0, &thread0);
>  
> -	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);

Oh, you just moved the code - right, let's not modify it in this patch.

> -	preload_bootstrap_relocate(KERNBASE);
> -	kmdp = preload_search_by_type("elf kernel");
> -	if (kmdp == NULL)
> -		kmdp = preload_search_by_type("elf64 kernel");
> -	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
> -	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
> -#ifdef DDB
> -	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
> -	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
> -#endif
> +	kmdp = init_ops.parse_preload_data(modulep);
>  
>  	/* Init basic tunables, hz etc */
>  	init_param1();
> diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
> index cd380d4..58ac8cd 100644
> --- a/sys/amd64/include/sysarch.h
> +++ b/sys/amd64/include/sysarch.h
> @@ -4,3 +4,15 @@
>  /* $FreeBSD$ */
>  
>  #include <x86/sysarch.h>
> +
> +/*
> + * Struct containing pointers to init functions whose
> + * implementation is run time selectable.  Selection can be made,
> + * for example, based on detection of a BIOS variant or
> + * hypervisor environment.
> + */
> +struct init_ops {
> +	caddr_t	(*parse_preload_data)(u_int64_t);
> +};
> +
> +extern struct init_ops init_ops;
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index db3b7a3..908b50b 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -46,6 +46,8 @@ __FBSDID("$FreeBSD$");
>  #include <vm/vm_pager.h>
>  #include <vm/vm_param.h>
>  
> +#include <machine/sysarch.h>
> +
>  #include <xen/xen-os.h>
>  #include <xen/hypervisor.h>
>  
> @@ -54,6 +56,36 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
>  /* Xen initial function */
>  extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
>  
> +/*--------------------------- Forward Declarations ---------------------------*/
> +static caddr_t xen_pv_parse_preload_data(u_int64_t);
> +
> +static void xen_pv_set_init_ops(void);
> +
> +/*-------------------------------- Global Data -------------------------------*/
> +/* Xen init_ops implementation. */
> +struct init_ops xen_init_ops = {
> +	.parse_preload_data =	xen_pv_parse_preload_data,
> +};
> +
> +static struct
> +{
> +	const char	*ev;
> +	int		mask;
> +} howto_names[] = {
> +	{"boot_askname",	RB_ASKNAME},
> +	{"boot_single",		RB_SINGLE},
> +	{"boot_nosync",		RB_NOSYNC},
> +	{"boot_halt",		RB_ASKNAME},
> +	{"boot_serial",		RB_SERIAL},
> +	{"boot_cdrom",		RB_CDROM},
> +	{"boot_gdb",		RB_GDB},
> +	{"boot_gdb_pause",	RB_RESERVED1},
> +	{"boot_verbose",	RB_VERBOSE},
> +	{"boot_multicons",	RB_MULTIPLE},
> +	{NULL,	0}
> +};
> +
> +/*-------------------------------- Xen PV init -------------------------------*/
>  /*
>   * First function called by the Xen PVH boot sequence.
>   *
> @@ -118,6 +150,56 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
>  	}
>  	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
>  
> +	/* Set the hooks for early functions that diverge from bare metal */
> +	xen_pv_set_init_ops();
> +
>  	/* Now we can jump into the native init function */
>  	return (hammer_time(0, physfree));
>  }
> +
> +/*-------------------------------- PV specific -------------------------------*/
> +/*
> + * Functions to convert the "extra" parameters passed by Xen
> + * into FreeBSD boot options (from the i386 Xen port).
> + */
> +static char *
> +xen_setbootenv(char *cmd_line)
> +{
> +	char *cmd_line_next;
> +
> +        /* Skip leading spaces */
> +        for (; *cmd_line == ' '; cmd_line++);

Spaces?

> +
> +	for (cmd_line_next = cmd_line; strsep(&cmd_line_next, ",") != NULL;);
> +	return (cmd_line);
> +}
> +
> +static int
> +xen_boothowto(char *envp)
> +{
> +	int i, howto = 0;
> +
> +	/* get equivalents from the environment */
> +	for (i = 0; howto_names[i].ev != NULL; i++)
> +		if (getenv(howto_names[i].ev) != NULL)
> +			howto |= howto_names[i].mask;

You don't believe in '{}', do you? :-)

> +	return (howto);
> +}
> +
> +static caddr_t
> +xen_pv_parse_preload_data(u_int64_t modulep)
> +{
> +	/* Parse the extra boot information given by Xen */
> +	if (HYPERVISOR_start_info->cmd_line)
> +		kern_envp = xen_setbootenv(HYPERVISOR_start_info->cmd_line);
> +	boothowto |= xen_boothowto(kern_envp);
> +
> +	return (NULL);
> +}
> +
> +static void
> +xen_pv_set_init_ops(void)
> +{
> +	/* Init ops for Xen PV */
> +	init_ops = xen_init_ops;
> +}
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 21:18:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 21:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzC7p-0000F0-7W; Fri, 03 Jan 2014 21:17:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1VzC7n-0000Eq-4v
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 21:17:47 +0000
Received: from [193.109.254.147:8778] by server-2.bemta-14.messagelabs.com id
	70/06-00361-AF827C25; Fri, 03 Jan 2014 21:17:46 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-3.tower-27.messagelabs.com!1388783863!8710706!1
X-Originating-IP: [209.85.213.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiB0LmNvL2U1TFFDVUQ5ZDAp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28104 invoked from network); 3 Jan 2014 21:17:45 -0000
Received: from mail-ig0-f169.google.com (HELO mail-ig0-f169.google.com)
	(209.85.213.169)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 21:17:45 -0000
Received: by mail-ig0-f169.google.com with SMTP id hk11so2128815igb.0
	for <xen-devel@lists.xen.org>; Fri, 03 Jan 2014 13:17:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=jXseu7KteV4ERhroHY4Z8zXGxe0JukU4lrCHJt/RGk8=;
	b=e3mPA3tP4PFsdcm9MWq1c4TsZ2XGVRzo5oL+QAWoZ6Mkee59gyXh9YHCHPEIBag2Jb
	TAnSqF7SZSSFWJm91eYY19I/TiPws7X/7ID30ra6wHtkQmgZ/9T+Fe40OxCcUVmT9ik7
	3KI0WhbB6vPParnGQsIT33yJLhW1IVcSmGrSdsahW7DmNEOsJ4gNrde+F9qOBuS+vQM0
	KXRbsophfawUYke0xnbqKGmfT+6LqTXnTi4QZ2xAEFIanJhQKyyv1wpR16xy2xSoSt+I
	EnLAhEG3aWSl73uQtPJuE8PWuVc2EIPEmoG20DKZeDY2xnrsR4E73nhbQKaiWrpSXqx9
	N98g==
X-Gm-Message-State: ALoCoQkze8v/E5aOpRkR5qKIUjdC6yEC3sJ9Obx84P7gMbYuP1SMOIqhKVBRbYjp5K1KAwGZBDxv
X-Received: by 10.50.100.170 with SMTP id ez10mr5057289igb.15.1388783863254;
	Fri, 03 Jan 2014 13:17:43 -0800 (PST)
Received: from [192.168.1.112] (192-0-158-112.cpe.teksavvy.com.
	[192.0.158.112])
	by mx.google.com with ESMTPSA id da14sm3553784igc.1.2014.01.03.13.17.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 03 Jan 2014 13:17:42 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <20140103203139.GA2570@phenom.dumpdata.com>
Date: Fri, 3 Jan 2014 16:17:43 -0500
Message-Id: <E287D3C0-378C-42F5-AF4E-46AD67B73BEB@gridcentric.ca>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
	<20131231153330.GC20357@phenom.dumpdata.com>
	<20131231163110.GA34150@deinos.phlegethon.org>
	<556C4AC0-F10F-4977-8FCD-3129E416B062@gridcentric.ca>
	<20140103184154.GA29283@phenom.dumpdata.com>
	<C862CE7C-4B8E-4A69-AD79-BE3E9417C9CD@gridcentric.ca>
	<20140103203139.GA2570@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailer: Apple Mail (2.1510)
Cc: Olaf Hering <olaf@aepfle.de>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>, Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	andres@lagarcavilla.org,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 3, 2014, at 3:31 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Fri, Jan 03, 2014 at 02:51:14PM -0500, Andres Lagar-Cavilla wrote:
>> On Jan 3, 2014, at 1:41 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> 
>>> On Fri, Jan 03, 2014 at 09:49:36AM -0500, Andres Lagar-Cavilla wrote:
>>>> 
>>>> On Dec 31, 2013, at 11:31 AM, Tim Deegan <tim@xen.org> wrote:
>>>> 
>>>>> At 10:33 -0500 on 31 Dec (1388482410), Konrad Rzeszutek Wilk wrote:
>>>>>> On Mon, Dec 23, 2013 at 06:34:55PM +0000, Russell Pavlicek wrote:
>>>>>>> On Twitter, Florian Heigl sent a out a few messages about issues with xenpaging:
>>>>>>> 
>>>>>>> ---
>>>>>>> 19-Dec: Anyone successfully use #xen #xenpaging? docs are at SLES manual, rest is mostly this: http://www.gossamer-threads.com/lists/xen/devel/255798 dead feature or usable?
>>>>>>> 
>>>>>>> 22-Dec: @lars_kurth @RCPavlicek Hey guys, I wrote down as much as I could https://piratenpad.de/p/Ik3lOBLniq1L5TEM (since I'm on holiday and not constant online)
>>>>>>> 
>>>>>>> 22-Dec: Yay, tested #xen Xenpaging (memory overcommit)
>>>>>>> [x] largely untested
>>>>>>> [x] docs outdated
>>>>>>> [x] syntax+logic changed
>>>>>>> [x] broken
>>>>>>> ---
>>>>>>> 
>>>>>>> [I've taken the liberty of removing the colorful expletive from the final post]
>>>>>>> 
>>>>>>> Is Florian's assessment correct, or is there somewhere we can point him for help?  I'm on vacation this week, but if someone replies to me, I will try to forward the information appropriately.
>>>>>> 
>>>>>> The Maintainers file implies otherwise. Let me CC the maintainers.
>>>>> 
>>>>> Andres really owns this code, so I'll punt to him for an official
>>>>> answer, but:
>>>> The part actively maintained is the hypervisor support for paging, and the interface.
>>>> 
>>>> tools/xenpaging is one way to consume that interface. It seems to have suffered from bitrot.
>>> 
>>> What is the other interface? Thanks!
>> 
>> Not sure what the question is. There is one interface. What I was referring to, is that tools/xenpaging implements one specific paging policy: victim selection, rate limiting, paging target, all of these are algorithms that entirely define what bang for your money you will get.
>> 
> 
> Right, but there is other code that uses this interface as well correct?
> Is it available for users ?

The only consumer I know of is Gridcentric's product, available as proprietary software for a fee. I am not aware of a sharing user other than Gridcentric. Virtuata was a mem-event user, Gridcentric is one, and others have surfaced on the list (Razvan Cojocaru, for instance).

Andres


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 03 22:26:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jan 2014 22:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzDC3-0004D7-QV; Fri, 03 Jan 2014 22:26:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VzDC2-0004D2-Jv
	for xen-devel@lists.xen.org; Fri, 03 Jan 2014 22:26:14 +0000
Received: from [85.158.143.35:45989] by server-3.bemta-4.messagelabs.com id
	D3/DD-32360-50937C25; Fri, 03 Jan 2014 22:26:13 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388787971!9486848!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11772 invoked from network); 3 Jan 2014 22:26:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Jan 2014 22:26:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,600,1384300800"; d="scan'208";a="89680106"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Jan 2014 22:26:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 3 Jan 2014 17:26:10 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1VzDBy-0005HY-L0; Fri, 03 Jan 2014 22:26:10 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 3 Jan 2014 22:26:09 +0000
Message-ID: <1388787969-24576-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel]  [Patch] docs/Makefile: Split the install target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Split the current install target into two subtargets, install-man-pages and
install-html, with the main install target depending on both.

This helps packagers who want to put the man pages in appropriate rpms/debs,
but don't want to build the html developer docs.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

I am slightly dubious about the behaviour of rm -rf'ing the DOCDIR, but the
behaviour is left exactly as before, for peace of mind when reviewing.

I would like this to be taken for 4.4, as I have fallen over it yet again when
packaging 4.4-rc1 for testing in XenServer.  Unlike some of my other fixes to
the Xen build system, this is trivial to fix correctly upstream, and will
benefit everyone trying to package 4.4 for distros.
---
 docs/Makefile |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/docs/Makefile b/docs/Makefile
index 8d5d48e..e4bf28c 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -78,16 +78,21 @@ distclean: clean
 	rm -rf $(XEN_ROOT)/config/Docs.mk config.log config.status config.cache \
 		autom4te.cache
 
-.PHONY: install
-install: all
-	rm -rf $(DESTDIR)$(DOCDIR)
-	$(INSTALL_DIR) $(DESTDIR)$(DOCDIR)
-
+.PHONY: install-man-pages
+install-man-pages: man-pages
 	$(INSTALL_DIR) $(DESTDIR)$(MANDIR)
 	cp -R man1 $(DESTDIR)$(MANDIR)
 	cp -R man5 $(DESTDIR)$(MANDIR)
+
+.PHONY: install-html
+install-html: html txt figs
+	rm -rf $(DESTDIR)$(DOCDIR)
+	$(INSTALL_DIR) $(DESTDIR)$(DOCDIR)
 	[ ! -d html ] || cp -R html $(DESTDIR)$(DOCDIR)
 
+.PHONY: install
+install: install-man-pages install-html
+
 html/index.html: $(DOC_HTML) $(CURDIR)/gen-html-index INDEX
 	$(PERL) -w -- $(CURDIR)/gen-html-index -i INDEX html $(DOC_HTML)
 
-- 
1.7.10.4
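[Editorial sketch: the effect of the split can be emulated in plain shell, with made-up paths and stand-in files rather than the real docs/Makefile. The man-page and html installs become independent steps, and the top-level install simply runs both, so a distro package build can run just one.]

```shell
#!/bin/sh
# Plain-shell emulation of the split install targets (illustrative
# paths; a sketch of the Makefile logic, not the real docs/Makefile).
set -e
src=$(mktemp -d)
DESTDIR=$(mktemp -d)
MANDIR=/usr/share/man
DOCDIR=/usr/share/doc/xen

# Stand-ins for the built documentation:
mkdir -p "$src/man1" "$src/html"
echo "xl manpage" > "$src/man1/xl.1"

install_man_pages() {                 # mirrors install-man-pages
    mkdir -p "$DESTDIR$MANDIR"
    cp -R "$src/man1" "$DESTDIR$MANDIR"
}

install_html() {                      # mirrors install-html
    rm -rf "$DESTDIR$DOCDIR"
    mkdir -p "$DESTDIR$DOCDIR"
    [ ! -d "$src/html" ] || cp -R "$src/html" "$DESTDIR$DOCDIR"
}

install_all() {                       # mirrors install: both subtargets
    install_man_pages
    install_html
}

# A man-pages-only package build runs just the first subtarget:
install_man_pages
test -f "$DESTDIR$MANDIR/man1/xl.1" && echo "man pages installed"
test ! -d "$DESTDIR$DOCDIR" && echo "html docs skipped"
```

Before the patch, a man-pages-only install was impossible without also triggering the rm -rf/reinstall of DOCDIR.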


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 00:30:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 00:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzF85-0001AO-UM; Sat, 04 Jan 2014 00:30:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1VzF84-0001AJ-1G
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 00:30:16 +0000
Received: from [85.158.143.35:57798] by server-2.bemta-4.messagelabs.com id
	D3/E8-11386-71657C25; Sat, 04 Jan 2014 00:30:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1388795413!6853015!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25747 invoked from network); 4 Jan 2014 00:30:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 00:30:14 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s040T8ZN020536
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 00:29:09 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s040T6Ap006744
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 00:29:06 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s040T5wN014352; Sat, 4 Jan 2014 00:29:05 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 16:29:05 -0800
Date: Fri, 3 Jan 2014 16:29:04 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140103162904.5b51041f@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1401031501180.8667@kaball.uk.xensource.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-10-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181827280.8667@kaball.uk.xensource.com>
	<20131231185656.GB3129@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401031501180.8667@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, david.vrabel@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014 15:04:27 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 31 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > > --- a/drivers/xen/xenbus/xenbus_client.c
> > > > +++ b/drivers/xen/xenbus/xenbus_client.c
> > > > @@ -45,6 +45,7 @@
> > > >  #include <xen/grant_table.h>
> > > >  #include <xen/xenbus.h>
> > > >  #include <xen/xen.h>
> > > > +#include <xen/features.h>
> > > >  
> > > >  #include "xenbus_probe.h"
> > > >  
> > > > @@ -743,7 +744,7 @@ static const struct xenbus_ring_ops
> > > > ring_ops_hvm = { 
> > > >  void __init xenbus_ring_ops_init(void)
> > > >  {
> > > > -	if (xen_pv_domain())
> > > > +	if (xen_pv_domain()
> > > > && !xen_feature(XENFEAT_auto_translated_physmap))
> > > 
> > > Can we just change this test to
> > > 
> > > if (!xen_feature(XENFEAT_auto_translated_physmap))
> > > 
> > > ?
> > 
> > No. If we do then the HVM domains (which are also !auto-xlat)
> > will end up using the PV version of ring_ops.
> 
> Actually HVM guests have XENFEAT_auto_translated_physmap, so in this
> case they would get &ring_ops_hvm.

Right. Back then I was confused about all the other PV modes, like
shadow, supervisor, ... but it looks like they are all obsolete. It could
just be:

        if (!xen_feature(XENFEAT_auto_translated_physmap))
                ring_ops = &ring_ops_pv;
        else
                ring_ops = &ring_ops_hvm;

thanks,
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 00:49:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 00:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzFQN-000228-To; Sat, 04 Jan 2014 00:49:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1VzFQM-000223-H6
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 00:49:10 +0000
Received: from [85.158.143.35:43523] by server-1.bemta-4.messagelabs.com id
	CD/92-02132-58A57C25; Sat, 04 Jan 2014 00:49:09 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388796547!9553112!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21046 invoked from network); 4 Jan 2014 00:49:08 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 00:49:08 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s040m3Ca008544
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 00:48:04 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s040m2DD011013
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 00:48:02 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s040m1oW011794; Sat, 4 Jan 2014 00:48:01 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 16:48:01 -0800
Date: Fri, 3 Jan 2014 16:48:00 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140103164800.00ef581c@mantra.us.oracle.com>
In-Reply-To: <20131218211739.GD11717@phenom.dumpdata.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-10-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181827280.8667@kaball.uk.xensource.com>
	<20131218211739.GD11717@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	jbeulich@suse.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 18 Dec 2013 16:17:39 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Wed, Dec 18, 2013 at 06:31:43PM +0000, Stefano Stabellini wrote:
> > On Tue, 17 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > PVH is a PV guest with a twist - there are certain things
> > > that work in it like HVM and some like PV. There is
> > > a similar mode - PVHVM where we run in HVM mode with
> > > PV code enabled - and this patch explores that.
> > > 
> > > The most notable PV interfaces are the XenBus and event channels.
> > > For PVH, we will use XenBus and event channels.
> > > 
> > > For the XenBus mechanism we piggyback on how it is done for
> > > PVHVM guests.
> > > 
> > > Ditto for the event channel mechanism - we piggyback on PVHVM -
> > > by setting up a specific vector callback and that
> > > vector ends up calling the event channel mechanism to
> > > dispatch the events as needed.
> > > 
> > > This means that from a pvops perspective, we can use
> > > native_irq_ops instead of the Xen PV specific. Albeit in the
> > > future we could support pirq_eoi_map. But that is
> > > a feature request that can be shared with PVHVM.
> > > 
> > > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > ---
> > >  arch/x86/xen/enlighten.c           | 6 ++++++
> > >  arch/x86/xen/irq.c                 | 5 ++++-
> > >  drivers/xen/events.c               | 5 +++++
> > >  drivers/xen/xenbus/xenbus_client.c | 3 ++-
> > >  4 files changed, 17 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > > index e420613..7fceb51 100644
> > > --- a/arch/x86/xen/enlighten.c
> > > +++ b/arch/x86/xen/enlighten.c
> > > @@ -1134,6 +1134,8 @@ void xen_setup_shared_info(void)
> > >  	/* In UP this is as good a place as any to set up shared
> > > info */ xen_setup_vcpu_info_placement();
> > >  #endif
> > > +	if (xen_pvh_domain())
> > > +		return;
> > >  
> > >  	xen_setup_mfn_list_list();
> > >  }
> > 
> > This is another one of those cases where I think we would benefit
> > from introducing xen_setup_shared_info_pvh instead of adding more
> > ifs here.
> 
> Actually this one can be removed.
> 
> > 
> > 
> > > @@ -1146,6 +1148,10 @@ void xen_setup_vcpu_info_placement(void)
> > >  	for_each_possible_cpu(cpu)
> > >  		xen_vcpu_setup(cpu);
> > >  
> > > +	/* PVH always uses native IRQ ops */
> > > +	if (xen_pvh_domain())
> > > +		return;
> > > +
> > >  	/* xen_vcpu_setup managed to place the vcpu_info within
> > > the percpu area for all cpus, so make use of it */
> > >  	if (have_vcpu_info_placement) {
> > 
> > Same here?
> 
> Hmmm, I wonder if the vcpu info placement could work with PVH.

It should now (after a patch I sent a while ago)... the comment implies
that PVH uses native IRQs even in the case of vcpu info placement...

perhaps it would be clearer to do:

        for_each_possible_cpu(cpu)
                xen_vcpu_setup(cpu);
        /* PVH always uses native IRQ ops */
        if (have_vcpu_info_placement && !xen_pvh_domain()) {
            pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
            .........


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 01:15:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 01:15:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzFpR-0007la-QH; Sat, 04 Jan 2014 01:15:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1VzFpP-0007lV-OH
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 01:15:03 +0000
Received: from [85.158.137.68:10081] by server-4.bemta-3.messagelabs.com id
	7C/2A-10414-69067C25; Sat, 04 Jan 2014 01:15:02 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388798100!6003198!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16920 invoked from network); 4 Jan 2014 01:15:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 01:15:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s041Dtf9024216
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 01:13:56 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s041DsHb016395
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 01:13:55 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s041DroS009406; Sat, 4 Jan 2014 01:13:54 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 17:13:53 -0800
Date: Fri, 3 Jan 2014 17:13:52 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140103171352.7db54a79@mantra.us.oracle.com>
In-Reply-To: <20140103173555.GF27019@phenom.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-4-git-send-email-konrad.wilk@oracle.com>
	<52C58691.4040502@citrix.com>
	<20140102183221.GD3021@pegasus.dumpdata.com>
	<20140102173438.40612127@mantra.us.oracle.com>
	<20140103173555.GF27019@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 03/18] xen/pvh: Early bootup changes in
 PV code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014 12:35:55 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Jan 02, 2014 at 05:34:38PM -0800, Mukesh Rathor wrote:
> > On Thu, 2 Jan 2014 13:32:21 -0500
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 
> > > On Thu, Jan 02, 2014 at 03:32:33PM +0000, David Vrabel wrote:
> > > > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > > 
> > > > > In the bootup code for PVH we can trap cpuid via vmexit, so
> > > > > don't need to use emulated prefix call. We also check for
> > > > > vector callback early on, as it is a required feature. PVH
> > > > > also runs at default kernel IOPL.
> > > > > 
> > > > > Finally, pure PV settings are moved to a separate function
> > > > > that are only called for pure PV, ie, pv with pvmmu. They are
> > > > > also #ifdef with CONFIG_XEN_PVMMU.
> > > > [...]
> > > > > @@ -331,12 +333,15 @@ static void xen_cpuid(unsigned int *ax,
> > > > > unsigned int *bx, break;
> > > > >  	}
> > > > >  
> > > > > -	asm(XEN_EMULATE_PREFIX "cpuid"
> > > > > -		: "=a" (*ax),
> > > > > -		  "=b" (*bx),
> > > > > -		  "=c" (*cx),
> > > > > -		  "=d" (*dx)
> > > > > -		: "0" (*ax), "2" (*cx));
> > > > > +	if (xen_pvh_domain())
> > > > > +		native_cpuid(ax, bx, cx, dx);
> > > > > +	else
> > > > > +		asm(XEN_EMULATE_PREFIX "cpuid"
> > > > > +			: "=a" (*ax),
> > > > > +			"=b" (*bx),
> > > > > +			"=c" (*cx),
> > > > > +			"=d" (*dx)
> > > > > +			: "0" (*ax), "2" (*cx));
> > > > 
> > > > For this one off cpuid call it seems preferrable to me to use
> > > > the emulate prefix rather than diverge from PV.
> > > 
> > > This was before the PV cpuid was deemed OK to be used on PVH.
> > > Will rip this out to use the same version.
> > 
> > Whats wrong with using native cpuid? That is one of the benefits
> > that cpuid can be trapped via vmexit, and also there is talk of
> > making PV cpuid trap obsolete in the future. I suggest leaving it
> > native.
> 
> I chatted with David, Andrew and Roger on IRC about this. I like the
> idea of using xen_cpuid because:
>  1) It filters some of the CPUID flags that guests should not use.
> There are the 'aperfmperf', 'x2apic', and 'xsave' flags, and whether the MWAIT_LEAF
>     should be exposed (so that the ACPI AML code can call the right
>     initialization code to use the extended C3 states instead of the
>     legacy IOPORT ones). All of that is in xen_cpuid.
>    
>  2) It works, while we can concentrate on making 1) work in the
>     hypervisor/toolstack.
> 
> Meaning that the future way would be to use the native cpuid and have
> the hypervisor/toolstack setup the proper cpuid. In other words - use
> the xen_cpuid as is until that code for filtering is in the
> hypervisor.
> 
> 
> Except that PVH does not work with the PV cpuid at all. I get a triple
> fault. The instruction it fails at is at the 'XEN_EMULATE_PREFIX'.
> 
> Mukesh, can you point me to the patch where the PV cpuid functionality
> is enabled?
> 
> Anyhow, as it stands, I will just use the native cpuid.

I am referring to using the "cpuid" instruction instead of XEN_EMULATE_PREFIX.
cpuid is faster and better long term... there is no benefit to using
XEN_EMULATE_PREFIX IMO. We can look at removing xen_cpuid() altogether for
PVH when/after the pvh 32-bit work gets done.

The triple fault seems to be a new bug... I can file a bug, but for
now, using the cpuid instruction, that won't be an issue.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 01:24:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 01:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzFyq-000898-Uz; Sat, 04 Jan 2014 01:24:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1VzFyp-000893-Na
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 01:24:47 +0000
Received: from [85.158.137.68:30256] by server-3.bemta-3.messagelabs.com id
	00/43-10658-FD267C25; Sat, 04 Jan 2014 01:24:47 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388798684!3503156!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26957 invoked from network); 4 Jan 2014 01:24:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 01:24:46 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s041Nfrs023028
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 01:23:42 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s041Ndaw029108
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 01:23:40 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s041NdfP026677; Sat, 4 Jan 2014 01:23:39 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 17:23:38 -0800
Date: Fri, 3 Jan 2014 17:23:37 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140103172337.1e077e56@mantra.us.oracle.com>
In-Reply-To: <20140102184133.GE3021@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
	<52C59068.1040603@citrix.com>
	<20140102184133.GE3021@pegasus.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014 13:41:34 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Jan 02, 2014 at 04:14:32PM +0000, David Vrabel wrote:
> > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > In xen_add_extra_mem() we can skip updating P2M as it's managed
> > > by Xen. PVH maps the entire IO space, but only RAM pages need
> > > to be repopulated.
> > 
> > So this looks minimal but I can't work out what PVH actually needs
> > to do here.  This code really doesn't need to be made any more
> > confusing.
> 
> I gather you prefer Mukesh's original version?

I think, Konrad, that's easier to follow, as one can quickly spot
the PVH difference... but your call.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2 Jan 2014 13:41:34 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Jan 02, 2014 at 04:14:32PM +0000, David Vrabel wrote:
> > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > In xen_add_extra_mem() we can skip updating P2M as it's managed
> > > by Xen. PVH maps the entire IO space, but only RAM pages need
> > > to be repopulated.
> > 
> > So this looks minimal but I can't work out what PVH actually needs
> > to do here.  This code really doesn't need to be made any more
> > confusing.
> 
> I gather you prefer Mukesh's original version?

I think, Konrad, that's easier to follow, as one can quickly spot
the PVH difference... but your call.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 02:27:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 02:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzGwx-0002zI-8p; Sat, 04 Jan 2014 02:26:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzGwv-0002zD-Bz
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 02:26:53 +0000
Received: from [85.158.143.35:3340] by server-1.bemta-4.messagelabs.com id
	E2/16-02132-C6177C25; Sat, 04 Jan 2014 02:26:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388802410!9551703!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21699 invoked from network); 4 Jan 2014 02:26:51 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 02:26:51 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s042Pi8o028813
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 02:25:45 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s042PhSb015269
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 02:25:44 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s042PhUl015257; Sat, 4 Jan 2014 02:25:43 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 03 Jan 2014 18:25:42 -0800
Date: Fri, 3 Jan 2014 21:25:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140104022526.GA14630@pegasus.dumpdata.com>
References: <1388550945-25499-1-git-send-email-konrad.wilk@oracle.com>
	<1388550945-25499-11-git-send-email-konrad.wilk@oracle.com>
	<52C59068.1040603@citrix.com>
	<20140102184133.GE3021@pegasus.dumpdata.com>
	<20140103172337.1e077e56@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140103172337.1e077e56@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com, David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v12 10/18] xen/pvh: Update E820 to work with
	PVH (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 03, 2014 at 05:23:37PM -0800, Mukesh Rathor wrote:
> On Thu, 2 Jan 2014 13:41:34 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Thu, Jan 02, 2014 at 04:14:32PM +0000, David Vrabel wrote:
> > > On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > 
> > > > In xen_add_extra_mem() we can skip updating P2M as it's managed
> > > > by Xen. PVH maps the entire IO space, but only RAM pages need
> > > > to be repopulated.
> > > 
> > > So this looks minimal but I can't work out what PVH actually needs
> > > to do here.  This code really doesn't need to be made any more
> > > confusing.
> > 
> > I gather you prefer Mukesh's original version?
> 
> I think, Konrad, that's easier to follow, as one can quickly spot
> the PVH difference... but your call.

I prefer the one that re-uses the existing logic. That has been the
path for PVH - both in the hypervisor and in the Linux kernel - rather
than adding nice little one-offs that do something simpler and easier
than the old PV path.

That way one can easily spot how PV vs. PVH works for certain operations.

From a testing coverage perspective it also means we end up using the
same codepath for both PV and PVH - so we get more testing exposure
for the different modes.

> 
> thanks
> mukesh
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 07:22:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 07:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzLYA-0005cg-QM; Sat, 04 Jan 2014 07:21:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1VzLYA-0005cb-Aj
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 07:21:38 +0000
Received: from [85.158.139.211:52936] by server-6.bemta-5.messagelabs.com id
	60/E5-16310-186B7C25; Sat, 04 Jan 2014 07:21:37 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-7.tower-206.messagelabs.com!1388820094!5131284!1
X-Originating-IP: [203.16.224.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29824 invoked from network); 4 Jan 2014 07:21:36 -0000
Received: from smtp1.bendigoit.com.au (HELO smtp1.bendigoit.com.au)
	(203.16.224.4)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Jan 2014 07:21:36 -0000
Received: from smtp2.bendigoit.com.au ([203.16.207.99]
	helo=BITCOM1.int.sbss.com.au)
	by smtp1.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1VzLYh-0001A6-DK; Sat, 04 Jan 2014 18:22:11 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Sat, 4 Jan 2014 18:21:26 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
Thread-Topic: [GPLPV] exclude xenscsi from installer, since it is not compiled
Thread-Index: Ac8Id/1R4U4vKAZhQj6eIy6rFl2IMAApTYzg
Date: Sat, 4 Jan 2014 07:21:25 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
In-Reply-To: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.241]
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
	since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> I fiddled around with compiling your GPL PV drivers, but I got an error
> when generating the MSI packages. The error was related to xenscsi, which is
> not compiled but which WiX tries to include. By applying the following patch, I
> successfully compiled the drivers. Just removing the lines in installer.wxs might
> not be the right approach, since I got an error relating to xenvbd when
> installing the drivers on a Windows 8.1 domU.
> 

Yep. I had ancient .sys etc. files lying around in my build tree, which was hiding the problem.

I have removed all the xenscsi-related stuff (it needs to be rewritten anyway if anyone ever wants it) and I can build MSI files now. The latest changes are pushed.

I'm using the latest GPLPV on 2012R2, which is the same kernel as 8.1, so the drivers themselves should be okay. Was the error you got a crash, or an error message during install?

Thanks

James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 10:18:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 10:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzOIs-0003P6-LE; Sat, 04 Jan 2014 10:18:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1VzOIq-0003P1-Gh
	for xen-devel@lists.xensource.com; Sat, 04 Jan 2014 10:18:00 +0000
Received: from [85.158.143.35:35364] by server-2.bemta-4.messagelabs.com id
	92/9D-11386-7DFD7C25; Sat, 04 Jan 2014 10:17:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1388830677!9542005!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14092 invoked from network); 4 Jan 2014 10:17:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Jan 2014 10:17:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,603,1384300800"; d="scan'208";a="89748169"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Jan 2014 10:17:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 4 Jan 2014 05:17:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1VzOIk-0001ZS-Sn;
	Sat, 04 Jan 2014 10:17:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1VzOIk-0002jx-SV;
	Sat, 04 Jan 2014 10:17:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24041-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 4 Jan 2014 10:17:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24041: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24041 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24041/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 23938
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 23827
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 23938

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-armhf-armhf-xl           7 debian-install               fail   like 23938
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 23827
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 23938

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 12:12:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 12:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzQ5c-0007Yj-7R; Sat, 04 Jan 2014 12:12:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sonjs91@gmail.com>) id 1VzQ5a-0007Ye-Qy
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 12:12:27 +0000
Received: from [85.158.137.68:39499] by server-9.bemta-3.messagelabs.com id
	02/58-13104-AAAF7C25; Sat, 04 Jan 2014 12:12:26 +0000
X-Env-Sender: sonjs91@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388837544!7170311!1
X-Originating-IP: [209.85.128.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22444 invoked from network); 4 Jan 2014 12:12:25 -0000
Received: from mail-qe0-f42.google.com (HELO mail-qe0-f42.google.com)
	(209.85.128.42)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Jan 2014 12:12:25 -0000
Received: by mail-qe0-f42.google.com with SMTP id b4so16786659qen.29
	for <xen-devel@lists.xenproject.org>;
	Sat, 04 Jan 2014 04:12:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=2srARr9w1J0D8lXSIzmlln1umgMFkDbLitKt1NIQFck=;
	b=aez1c3zWNIKiDmfMAjQdf5IMrFDAunlXNI++9ujGc20cBOLVRqwyh4HWlwLZ81nBnu
	MD/7SGamg0YCZATcBlk3EQNIc8+j2a0xgEHyfYsvIf3GrHt9Cn+kDXO+cIHyDFbuCKsr
	nr/v3U+jTW8ZrpXjepwSRwyiYg0Si0epbFyUV6tzzSRTv/kk/+OOJfIiyyPP59SyQCx4
	bvdQ+Q+01RSPcIHkoAiaggJj0drB5hUuZvdDbUAGYHFhzvevgkPs8YQtmGpc96XdJe68
	fFCWEod7mbHiFF3UaXGE+xoY91qdaNG+nWmY+BLJsFC/DiVKf7goBX38LayfAJmdfwZz
	6bFg==
MIME-Version: 1.0
X-Received: by 10.224.119.200 with SMTP id a8mr34132374qar.7.1388837543972;
	Sat, 04 Jan 2014 04:12:23 -0800 (PST)
Received: by 10.224.79.143 with HTTP; Sat, 4 Jan 2014 04:12:23 -0800 (PST)
Date: Sat, 4 Jan 2014 21:12:23 +0900
X-Google-Sender-Auth: prxuu2mCbqerju5htNK03cRJskA
Message-ID: <CA+rgWU0YX37vegAV90bOB3CwrQCXg5tJKGty=+SX5MFoA9=MAQ@mail.gmail.com>
From: Jeongseok Son <invictusjs@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [Xen-devel] Very slow disk I/O performance in PV & PVHVM domU (Xen
	4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, I have been using Xen 4.1.2 for two years to experiment with some ideas.

HVM domU works well. However, with a PV domU, or with a PVHVM domU,
disk I/O performance is severely degraded. Below are the results of
dd runs in each domU.

- HVM
$ dd if=/dev/zero of=disk.img count=512k bs=1k
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 5.42523 s, 99.0 MB/s

- PV
$ dd if=/dev/zero of=disk.img count=512k bs=1k
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 45.898 s, 11.7 MB/s

After the copy, the PV domU becomes too unresponsive to use, and so
does the PVHVM domU. On Xen 4.3, however, the PV domU shows the
expected performance.

I configured my PV domU to use file-backed disk images like this:

root = '/dev/xvda2 ro'
disk = [
    'file:/path/to/pv/images/disk.img,xvda2,w',
    'file:/path/to/pv/images/swap.img,xvda1,w',
]
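
For reference, a commonly suggested workaround for slow `file:`-backed
disks on older Xen releases was to hand blkback a real block device
instead, e.g. by attaching the image to a dom0 loop device and using
the `phy:` prefix. This is only a sketch; the loop device numbers
below are illustrative and must be chosen/verified on your dom0:

```
# in dom0, as root, before starting the domU:
losetup /dev/loop1 /path/to/pv/images/disk.img
losetup /dev/loop2 /path/to/pv/images/swap.img
```

and then in the domU config:

```
disk = [
    'phy:/dev/loop1,xvda2,w',
    'phy:/dev/loop2,xvda1,w',
]
```

This keeps the file-backed images (so the experimental setup is
unchanged) while bypassing the `file:` loopback path inside blkback.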

My dom0 runs Ubuntu 12.04.2 LTS with Linux kernel 3.9.4, and the PV
domU runs Ubuntu 10.04 LTS / Ubuntu 12.04 LTS with the same kernel.
(I also tried an older Linux kernel, 2.6.39, but the result was the same.)

My guess is that there is some issue with using a disk image through
the PV block device driver in Xen 4.1.2. But I have to use file-backed
disk images because of other experimental constraints.

Why is my PV domU so slow on Xen 4.1.2 but not on Xen 4.3? Which part
of the code should I modify to fix this problem in Xen 4.1.2?

Thank you for any help you can provide.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035E-0d; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOd-00034s-43
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:27 +0000
Received: from [193.109.254.147:4027] by server-7.bemta-14.messagelabs.com id
	4C/51-15500-A5A48C25; Sat, 04 Jan 2014 17:52:26 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388857943!8823873!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11964 invoked from network); 4 Jan 2014 17:52:25 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:25 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:23 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907562"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:22 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:13 -0500
Message-Id: <1388857936-664-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build, a dom0 VCPU will (at some point) hang in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The first access worked because the p2m lock is recursive and the
PCPU had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}
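
The underlying contract the patch restores — every get_gfn() must be
balanced by a put_gfn(), including on the INVALID_MFN error path — can
be sketched as a minimal model. This is not the real Xen API; the
`*_model`/`*_demo` names and the reference counter are invented here
purely to illustrate the loop shape of the fixed dbg_rw_guest_mem():

```c
#include <assert.h>

#define INVALID_MFN (~0UL)
#define INVALID_GFN (~0UL)

static int p2m_refs;  /* stand-in for the recursive p2m lock count */

/* get_gfn() takes a reference; translation fails for "high" gfns. */
static unsigned long get_gfn_model(unsigned long gfn)
{
    p2m_refs++;
    return (gfn < 0x100) ? gfn + 0x1000 : INVALID_MFN;
}

static void put_gfn_model(unsigned long gfn)
{
    (void)gfn;
    p2m_refs--;
}

/* Mirrors the fixed loop: drop the reference before every break. */
int gfn_walk_balanced(const unsigned long *gfns, int n)
{
    for ( int i = 0; i < n; i++ )
    {
        unsigned long gfn = gfns[i];
        unsigned long mfn = get_gfn_model(gfn);

        if ( mfn == INVALID_MFN )
        {
            if ( gfn != INVALID_GFN )
                put_gfn_model(gfn);  /* the fix: release on error too */
            break;
        }
        /* ... access the page ... */
        put_gfn_model(gfn);          /* normal path */
    }
    return p2m_refs;                 /* 0 iff references balanced */
}

int gfn_walk_demo(void)
{
    unsigned long gfns[] = { 1UL, 2UL, 0x200UL, 3UL }; /* 0x200 fails */
    return gfn_walk_balanced(gfns, 4);
}
```

Without the error-path release, the failing entry would leave the
counter at 1 — the analogue of the held p2m lock seen in the crash dump.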

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3e21ca8..eceb805 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
         mfn = (has_hvm_container_domain(dp)
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
-        if ( mfn == INVALID_MFN ) 
+        if ( mfn == INVALID_MFN ) {
+            if ( gfn != INVALID_GFN )
+                put_gfn(dp, gfn);
             break;
+        }
 
         va = map_domain_page(mfn);
         va = va + (addr & (PAGE_SIZE-1));
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOi-00035e-4s; Sat, 04 Jan 2014 17:52:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOg-00035Z-Rq
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:30 +0000
Received: from [85.158.143.35:44597] by server-3.bemta-4.messagelabs.com id
	21/DA-32360-E5A48C25; Sat, 04 Jan 2014 17:52:30 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388857947!9560419!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23758 invoked from network); 4 Jan 2014 17:52:28 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:28 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 17:52:26 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907586"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:25 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:16 -0500
Message-Id: <1388857936-664-5-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio: always
	do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
returned.

Without this, gdb does not report an error.

With this patch and using a 1G hvm domU:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Cannot access memory at address 0x6ae9168b
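
The shape of the bug can be shown with a minimal model of the domctl
copyback convention: the handler fills an output field (`remain`) even
on error, but the caller only sees it if `copyback` is set. The struct
and function names below are invented for illustration, not the real
Xen definitions:

```c
#include <assert.h>

struct memio_model { int len; int remain; };

/* The worker reports how much could not be read, and an error. */
static int mem_io_model(struct memio_model *m)
{
    m->remain = m->len;  /* nothing was readable */
    return 1;            /* nonzero: error */
}

int domctl_model(struct memio_model *caller)
{
    struct memio_model local = *caller;  /* copy_from_guest analogue */
    int ret = mem_io_model(&local);
    int copyback = 1;                    /* the fix: unconditional */

    if ( copyback )
        *caller = local;                 /* copy_to_guest analogue */
    return ret;
}

/* Returns the remain value the caller observes after an error. */
int copyback_demo(void)
{
    struct memio_model m = { 16, 0 };
    (void)domctl_model(&m);
    return m.remain;
}
```

With copyback gated on success (the pre-patch behaviour), the caller's
`remain` would stay 0 and the tool stack could not tell that the read
failed.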

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/domctl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..4aa751f 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -997,8 +997,7 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-        if ( !ret )
-           copyback = 1;
+        copyback = 1;
     }
     break;
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035L-Bp; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOd-00034x-SS
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:28 +0000
Received: from [193.109.254.147:4059] by server-14.bemta-14.messagelabs.com id
	00/89-12628-B5A48C25; Sat, 04 Jan 2014 17:52:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388857943!8823873!3
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12209 invoked from network); 4 Jan 2014 17:52:26 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:24 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907568"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:23 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:14 -0500
Message-Id: <1388857936-664-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This also fixes the old debug output so that it compiles and works
when DBGP1 and DBGP2 are defined the same way as DBGP3.
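
The macro scheme the patch extends — each debug level compiles to a
real printf-style call only when its guard is defined, and to
`((void)0)` otherwise, so call sites need no #ifdefs — can be sketched
as follows. This is a standalone model: `dbg_log`, the capture buffer,
and `dbgp3_demo` are invented here, standing in for Xen's gdprintk():

```c
#include <stdio.h>
#include <string.h>

/* Capture the formatted output so the behaviour is observable. */
static char logbuf[256];
#define dbg_log(...) snprintf(logbuf, sizeof logbuf, __VA_ARGS__)

#define XEN_GDBSX_DEBUG3 1  /* enable level 3 for this demo */

/* Same pattern as the patch: active macro or no-op, chosen at
 * compile time, with identical call syntax either way. */
#ifdef XEN_GDBSX_DEBUG3
#define DBGP3(...) dbg_log(__VA_ARGS__)
#else
#define DBGP3(...) ((void)0)
#endif

/* Exercise the macro once and hand back what was logged. */
const char *dbgp3_demo(void)
{
    DBGP3("L: vaddr:%lx mfn:%lx\n", 0x1000UL, 0x2aUL);
    return logbuf;
}
```

With the guard undefined, the same call compiles to `((void)0)` and
costs nothing, which is why callers can stay unconditional.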

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index eceb805..2e0a05a 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -41,6 +41,12 @@
 #define DBGP2(...) ((void)0)
 #endif
 
+#ifdef XEN_GDBSX_DEBUG3
+#define DBGP3(...) gdprintk(XENLOG_DEBUG, __VA_ARGS__)
+#else
+#define DBGP3(...) ((void)0)
+#endif
+
 /* Returns: mfn for the given (hvm guest) vaddr */
 static unsigned long 
 dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
@@ -60,6 +66,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     }
 
     mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
+    DBGP3("L: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
@@ -102,8 +109,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
         mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        DBGP2("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
+              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
             DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
@@ -114,8 +121,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP2("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -128,8 +135,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP2("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
@@ -140,8 +147,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP2("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -162,8 +169,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
         if ( mfn == INVALID_MFN ) {
-            if ( gfn != INVALID_GFN )
+            if ( gfn != INVALID_GFN ) {
                 put_gfn(dp, gfn);
+                DBGP3("R: addr:%lx pagecnt=%ld domid:%d mfn:%lx\n",
+                      addr, pagecnt, dp->domain_id, mfn);
+            }
             break;
         }
 
@@ -181,8 +191,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
         }
 
         unmap_domain_page(va);
-        if ( gfn != INVALID_GFN )
+        if ( gfn != INVALID_GFN ) {
             put_gfn(dp, gfn);
+            DBGP3("R: addr:%lx pagecnt=%ld domid:%d mfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, mfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     return len;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOi-00035e-4s; Sat, 04 Jan 2014 17:52:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOg-00035Z-Rq
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:30 +0000
Received: from [85.158.143.35:44597] by server-3.bemta-4.messagelabs.com id
	21/DA-32360-E5A48C25; Sat, 04 Jan 2014 17:52:30 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1388857947!9560419!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23758 invoked from network); 4 Jan 2014 17:52:28 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:28 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 17:52:26 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907586"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:25 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:16 -0500
Message-Id: <1388857936-664-5-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio: always
	do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
copied back to it.

Without this, gdb does not report an error when guest memory cannot
be accessed.

With this patch, and using a 1G HVM domU:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Cannot access memory at address 0x6ae9168b

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/domctl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..4aa751f 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -997,8 +997,7 @@ long arch_do_domctl(
             domctl->u.gdbsx_guest_memio.len;
 
         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
-        if ( !ret )
-           copyback = 1;
+        copyback = 1;
     }
     break;
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035L-Bp; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOd-00034x-SS
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:28 +0000
Received: from [193.109.254.147:4059] by server-14.bemta-14.messagelabs.com id
	00/89-12628-B5A48C25; Sat, 04 Jan 2014 17:52:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388857943!8823873!3
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12209 invoked from network); 4 Jan 2014 17:52:26 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:24 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907568"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:23 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:14 -0500
Message-Id: <1388857936-664-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This also fixes the old debug output so that it compiles and works
when DBGP1 and DBGP2 are enabled in the same way as DBGP3.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 41 +++++++++++++++++++++++++++++++----------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index eceb805..2e0a05a 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -41,6 +41,12 @@
 #define DBGP2(...) ((void)0)
 #endif
 
+#ifdef XEN_GDBSX_DEBUG3
+#define DBGP3(...) gdprintk(XENLOG_DEBUG, __VA_ARGS__)
+#else
+#define DBGP3(...) ((void)0)
+#endif
+
 /* Returns: mfn for the given (hvm guest) vaddr */
 static unsigned long 
 dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
@@ -60,6 +66,7 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     }
 
     mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
+    DBGP3("L: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
@@ -102,8 +109,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
         mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        DBGP2("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
+              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
             DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
@@ -114,8 +121,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP2("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -128,8 +135,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP2("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
@@ -140,8 +147,8 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP2("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -162,8 +169,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
         if ( mfn == INVALID_MFN ) {
-            if ( gfn != INVALID_GFN )
+            if ( gfn != INVALID_GFN ) {
                 put_gfn(dp, gfn);
+                DBGP3("R: addr:%lx pagecnt=%ld domid:%d mfn:%lx\n",
+                      addr, pagecnt, dp->domain_id, mfn);
+            }
             break;
         }
 
@@ -181,8 +191,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
         }
 
         unmap_domain_page(va);
-        if ( gfn != INVALID_GFN )
+        if ( gfn != INVALID_GFN ) {
             put_gfn(dp, gfn);
+            DBGP3("R: addr:%lx pagecnt=%ld domid:%d mfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, mfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     return len;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOd-00034z-Ki; Sat, 04 Jan 2014 17:52:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOc-00034n-Ex
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:26 +0000
Received: from [193.109.254.147:4004] by server-2.bemta-14.messagelabs.com id
	72/B8-00361-95A48C25; Sat, 04 Jan 2014 17:52:25 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388857943!8823873!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11934 invoked from network); 4 Jan 2014 17:52:24 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:22 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907556"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:20 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:12 -0500
Message-Id: <1388857936-664-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 0/4] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Release manager requests:
  patch 1 should be in 4.4.0;
  patches 3 and 4 would be good to have in 4.4.0;
  patch 2 is optional.

While tracking down a bug in SeaBIOS/GRUB I found the bug fixed by
patch 1.

There are two ways that gfn can end up not being INVALID_GFN while
mfn is INVALID_MFN:

  1) p2m_is_readonly(gfntype) is true and memory is being written.
  2) the requested vaddr does not exist.

This may only be an issue for an HVM guest that is in real mode
(i.e. no page tables).

Patch 2 adds the debug logging that was used to find the second case.

Patches 3 and 4 are more cleanup-style bug fixes.

Don Slutz (4):
  dbg_rw_guest_mem: need to call put_gfn in error path.
  dbg_rw_guest_mem: Enable debug log output
  xg_read_mem: Report on error.
  XEN_DOMCTL_gdbsx_guestmemio: always do the copyback.

 tools/debugger/gdbsx/xg/xg_main.c |  6 ++++--
 xen/arch/x86/debug.c              | 44 ++++++++++++++++++++++++++++++---------
 xen/arch/x86/domctl.c             |  3 +--
 3 files changed, 39 insertions(+), 14 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035S-O9; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOe-00034y-6z
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:28 +0000
Received: from [85.158.137.68:34265] by server-16.bemta-3.messagelabs.com id
	D2/E0-26128-B5A48C25; Sat, 04 Jan 2014 17:52:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388857945!7196716!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27028 invoked from network); 4 Jan 2014 17:52:26 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:25 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907577"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:24 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:15 -0500
Message-Id: <1388857936-664-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 3/4] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had coded this with XGERR, but gdb will try to read memory without
a direct request from the user, so an error-level message can be
confusing; the failure is therefore reported via XGTRC instead.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 5736b86..a192478 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035E-0d; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOd-00034s-43
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:27 +0000
Received: from [193.109.254.147:4027] by server-7.bemta-14.messagelabs.com id
	4C/51-15500-A5A48C25; Sat, 04 Jan 2014 17:52:26 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1388857943!8823873!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11964 invoked from network); 4 Jan 2014 17:52:25 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:25 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:23 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907562"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:22 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:13 -0500
Message-Id: <1388857936-664-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using a 1G HVM domU (in GRUB) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The first read worked because the p2m lock is recursive and the pCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3e21ca8..eceb805 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
         mfn = (has_hvm_container_domain(dp)
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
-        if ( mfn == INVALID_MFN ) 
+        if ( mfn == INVALID_MFN ) {
+            if ( gfn != INVALID_GFN )
+                put_gfn(dp, gfn);
             break;
+        }
 
         va = map_domain_page(mfn);
         va = va + (addr & (PAGE_SIZE-1));
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 17:53:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 17:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzVOf-00035S-O9; Sat, 04 Jan 2014 17:52:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzVOe-00034y-6z
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 17:52:28 +0000
Received: from [85.158.137.68:34265] by server-16.bemta-3.messagelabs.com id
	D2/E0-26128-B5A48C25; Sat, 04 Jan 2014 17:52:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388857945!7196716!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27028 invoked from network); 4 Jan 2014 17:52:26 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 17:52:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 17:52:25 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623907577"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 17:52:24 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Sat,  4 Jan 2014 12:52:15 -0500
Message-Id: <1388857936-664-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH 3/4] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had originally coded this with XGERR, but gdb will try to read memory
without a direct request from the user, so an error-level message can be
confusing.  Report the failure via XGTRC instead.
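
The same idea in a hedged Python sketch (logger name and wording are
illustrative; the real code uses the gdbsx XGTRC macro): log the failure at
debug level and still propagate the error, instead of printing a user-facing
error for reads that gdb issued on its own:

```python
import logging

log = logging.getLogger("xg")

def read_mem(do_hcall, nbytes):
    rc = do_hcall()
    if rc:
        # debug, not error: gdb probes guest memory without the user asking,
        # so a failure here is routine rather than a user-facing problem
        log.debug("failed to read %d bytes rc:%d", nbytes, rc)
    return rc

print(read_mem(lambda: -1, 8))   # failure is still returned, just quietly
```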

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 5736b86..a192478 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdF-0006Za-Qn; Sat, 04 Jan 2014 19:11:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdE-0006Yz-77
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:36 +0000
Received: from [85.158.137.68:44695] by server-6.bemta-3.messagelabs.com id
	5A/62-04868-7EC58C25; Sat, 04 Jan 2014 19:11:35 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388862691!7201917!3
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10320 invoked from network); 4 Jan 2014 19:11:33 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:33 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:32 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925882"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:31 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:24 -0500
Message-Id: <1388862686-1832-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/4] xen: Fix offset output to be decimal.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 xen_hyper_dump_tables.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen_hyper_dump_tables.c b/xen_hyper_dump_tables.c
index 0582159..38558d5 100644
--- a/xen_hyper_dump_tables.c
+++ b/xen_hyper_dump_tables.c
@@ -767,7 +767,7 @@ xen_hyper_dump_xen_hyper_offset_table(char *spec, ulong makestruct)
 	XEN_HYPER_PRI(fp, len, "domain_next_in_list: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_next_in_list));
 	XEN_HYPER_PRI(fp, len, "domain_domain_flags: ", buf, flag,
-		(buf, "%lx\n", xen_hyper_offset_table.domain_domain_flags));
+		(buf, "%ld\n", xen_hyper_offset_table.domain_domain_flags));
 	XEN_HYPER_PRI(fp, len, "domain_evtchn: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_evtchn));
 	XEN_HYPER_PRI(fp, len, "domain_is_hvm: ", buf, flag,
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdG-0006Zj-7A; Sat, 04 Jan 2014 19:11:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdE-0006Z7-Mo
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:36 +0000
Received: from [85.158.143.35:42405] by server-3.bemta-4.messagelabs.com id
	C0/CC-32360-7EC58C25; Sat, 04 Jan 2014 19:11:35 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1388862693!9473921!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5176 invoked from network); 4 Jan 2014 19:11:34 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:34 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:33 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925895"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:32 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:26 -0500
Message-Id: <1388862686-1832-5-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 4/4] Add basic domain.guest_type support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen_hyper.c             | 8 ++++++++
 xen_hyper_defs.h        | 1 +
 xen_hyper_dump_tables.c | 2 ++
 3 files changed, 11 insertions(+)

diff --git a/xen_hyper.c b/xen_hyper.c
index 00a0e2c..c18e815 100644
--- a/xen_hyper.c
+++ b/xen_hyper.c
@@ -207,6 +207,7 @@ xen_hyper_domain_init(void)
 	XEN_HYPER_MEMBER_OFFSET_INIT(domain_domain_flags, "domain", "domain_flags");
 	XEN_HYPER_MEMBER_OFFSET_INIT(domain_evtchn, "domain", "evtchn");
 	XEN_HYPER_MEMBER_OFFSET_INIT(domain_is_hvm, "domain", "is_hvm");
+	XEN_HYPER_MEMBER_OFFSET_INIT(domain_guest_type, "domain", "guest_type");
 	XEN_HYPER_MEMBER_OFFSET_INIT(domain_is_privileged, "domain", "is_privileged");
 	XEN_HYPER_MEMBER_OFFSET_INIT(domain_debugger_attached, "domain", "debugger_attached");
 
@@ -1251,6 +1252,13 @@ xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
                     *(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
 		}
+                if (XEN_HYPER_VALID_MEMBER(domain_guest_type) &&
+                    *(dp + XEN_HYPER_OFFSET(domain_guest_type))) {
+			/* For now PVH and HVM are the same for crash.
+			 * and 0 is PV.
+			 */
+			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
+		}
 		if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_privileged;
 		}
diff --git a/xen_hyper_defs.h b/xen_hyper_defs.h
index 1f5b0ba..abc9a07 100644
--- a/xen_hyper_defs.h
+++ b/xen_hyper_defs.h
@@ -673,6 +673,7 @@ struct xen_hyper_offset_table {
 	long domain_domain_flags;
 	long domain_evtchn;
 	long domain_is_hvm;
+	long domain_guest_type;
 	long domain_is_privileged;
 	long domain_debugger_attached;
 	long domain_is_polling;
diff --git a/xen_hyper_dump_tables.c b/xen_hyper_dump_tables.c
index 38558d5..337eb25 100644
--- a/xen_hyper_dump_tables.c
+++ b/xen_hyper_dump_tables.c
@@ -772,6 +772,8 @@ xen_hyper_dump_xen_hyper_offset_table(char *spec, ulong makestruct)
 		(buf, "%ld\n", xen_hyper_offset_table.domain_evtchn));
 	XEN_HYPER_PRI(fp, len, "domain_is_hvm: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_is_hvm));
+	XEN_HYPER_PRI(fp, len, "domain_guest_type: ", buf, flag,
+		(buf, "%ld\n", xen_hyper_offset_table.domain_guest_type));
 	XEN_HYPER_PRI(fp, len, "domain_is_privileged: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_is_privileged));
 	XEN_HYPER_PRI(fp, len, "domain_debugger_attached: ", buf, flag,
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdD-0006Yv-I7; Sat, 04 Jan 2014 19:11:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdC-0006Yg-5f
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:34 +0000
Received: from [85.158.137.68:44639] by server-16.bemta-3.messagelabs.com id
	D0/63-26128-5EC58C25; Sat, 04 Jan 2014 19:11:33 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388862691!7201917!2
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10296 invoked from network); 4 Jan 2014 19:11:32 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:32 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:31 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925868"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:30 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:23 -0500
Message-Id: <1388862686-1832-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1/4] Make domain.is_hvm optional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 xen_hyper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen_hyper.c b/xen_hyper.c
index 7c7d3e1..3d56516 100644
--- a/xen_hyper.c
+++ b/xen_hyper.c
@@ -1247,7 +1247,8 @@ xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
 		dc->domain_flags = ULONG(dp + XEN_HYPER_OFFSET(domain_domain_flags));
 	else if (XEN_HYPER_VALID_MEMBER(domain_is_shut_down)) {
 		dc->domain_flags = 0;
-		if (*(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
+                if (XEN_HYPER_VALID_MEMBER(domain_is_hvm) &&
+                    *(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
 		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_privileged;
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdE-0006ZB-A0; Sat, 04 Jan 2014 19:11:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdD-0006Yl-4b
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:35 +0000
Received: from [85.158.139.211:63239] by server-2.bemta-5.messagelabs.com id
	E5/69-29392-6EC58C25; Sat, 04 Jan 2014 19:11:34 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1388862692!7854259!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 857 invoked from network); 4 Jan 2014 19:11:33 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:33 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:32 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925890"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:32 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:25 -0500
Message-Id: <1388862686-1832-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3/4] xen: set all domain_flags, not just the first.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 xen_hyper.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/xen_hyper.c b/xen_hyper.c
index 3d56516..00a0e2c 100644
--- a/xen_hyper.c
+++ b/xen_hyper.c
@@ -1250,20 +1250,27 @@ xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
                 if (XEN_HYPER_VALID_MEMBER(domain_is_hvm) &&
                     *(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_privileged;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_debugger_attached))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_debugger_attached))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_debugging;
-		} else if (XEN_HYPER_VALID_MEMBER(domain_is_polling) &&
+		}
+		if (XEN_HYPER_VALID_MEMBER(domain_is_polling) &&
 				*(dp + XEN_HYPER_OFFSET(domain_is_polling))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_polling;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_paused_by_controller))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_is_paused_by_controller))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_ctrl_pause;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_dying))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_is_dying))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_dying;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_shutting_down))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_is_shutting_down))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_shuttingdown;
-		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_shut_down))) {
+		}
+		if (*(dp + XEN_HYPER_OFFSET(domain_is_shut_down))) {
 			dc->domain_flags |= XEN_HYPER_DOMS_shutdown;
 		}
 	} else {
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdD-0006Z3-Tz; Sat, 04 Jan 2014 19:11:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdD-0006Ym-4N
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:35 +0000
Received: from [85.158.137.68:26853] by server-14.bemta-3.messagelabs.com id
	77/E0-06105-6EC58C25; Sat, 04 Jan 2014 19:11:34 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388862691!7201917!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10284 invoked from network); 4 Jan 2014 19:11:32 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:32 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:30 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925862"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:29 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:22 -0500
Message-Id: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0/4] Enable use of crash on xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With the addition of PVH code to xen 4.4, domain.is_hvm no longer
exists.  This prevents crash from using a xen 4.4.0 vmcore.

Patch 1 "fixes" this.

Patch 2 is a minor fix: outputting the offset for domain_domain_flags
in hex was inconsistent with the other offsets, which are printed in decimal.

Patch 3 is a bug fix to get all "domain_flags" set, not just the first
one found.

Patch 4 is a quick way to add domain.guest_type support.

Don Slutz (4):
  Make domain.is_hvm optional
  xen: Fix offset output to be decimal.
  xen: set all domain_flags, not just the 1st.
  Add basic domain.guest_type support.

 xen_hyper.c             | 32 ++++++++++++++++++++++++--------
 xen_hyper_defs.h        |  1 +
 xen_hyper_dump_tables.c |  4 +++-
 3 files changed, 28 insertions(+), 9 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 19:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzWdF-0006Za-Qn; Sat, 04 Jan 2014 19:11:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzWdE-0006Yz-77
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 19:11:36 +0000
Received: from [85.158.137.68:44695] by server-6.bemta-3.messagelabs.com id
	5A/62-04868-7EC58C25; Sat, 04 Jan 2014 19:11:35 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388862691!7201917!3
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10320 invoked from network); 4 Jan 2014 19:11:33 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 19:11:33 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 04 Jan 2014 19:11:32 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="623925882"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.250])
	by fldsmtpi02.verizon.com with ESMTP; 04 Jan 2014 19:11:31 +0000
From: Don Slutz <dslutz@verizon.com>
To: <crash-utility@redhat.com>
Date: Sat,  4 Jan 2014 14:11:24 -0500
Message-Id: <1388862686-1832-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/4] xen: Fix offset output to be decimal.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 xen_hyper_dump_tables.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen_hyper_dump_tables.c b/xen_hyper_dump_tables.c
index 0582159..38558d5 100644
--- a/xen_hyper_dump_tables.c
+++ b/xen_hyper_dump_tables.c
@@ -767,7 +767,7 @@ xen_hyper_dump_xen_hyper_offset_table(char *spec, ulong makestruct)
 	XEN_HYPER_PRI(fp, len, "domain_next_in_list: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_next_in_list));
 	XEN_HYPER_PRI(fp, len, "domain_domain_flags: ", buf, flag,
-		(buf, "%lx\n", xen_hyper_offset_table.domain_domain_flags));
+		(buf, "%ld\n", xen_hyper_offset_table.domain_domain_flags));
 	XEN_HYPER_PRI(fp, len, "domain_evtchn: ", buf, flag,
 		(buf, "%ld\n", xen_hyper_offset_table.domain_evtchn));
 	XEN_HYPER_PRI(fp, len, "domain_is_hvm: ", buf, flag,
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 19:49:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 19:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzXDD-00007v-HR; Sat, 04 Jan 2014 19:48:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1VzXDC-00007q-Gv
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 19:48:46 +0000
Received: from [85.158.137.68:27948] by server-6.bemta-3.messagelabs.com id
	3D/B9-04868-D9568C25; Sat, 04 Jan 2014 19:48:45 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388864924!3582951!1
X-Originating-IP: [80.160.77.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3LjExNCA9PiAxNjYyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7181 invoked from network); 4 Jan 2014 19:48:44 -0000
Received: from pasmtpa.tele.dk (HELO pasmtpA.tele.dk) (80.160.77.114)
	by server-15.tower-31.messagelabs.com with SMTP;
	4 Jan 2014 19:48:44 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpA.tele.dk (Postfix) with ESMTP id 25E1A800008;
	Sat,  4 Jan 2014 20:48:43 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Sat, 4 Jan 2014 20:47:29 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Sat, 4 Jan 2014 20:47:29 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: James Harper <james.harper@bendigoit.com.au>
Thread-Topic: [GPLPV] exclude xenscsi from installer, since it is not compiled
Thread-Index: Ac8Id/1R4U4vKAZhQj6eIy6rFl2IMAApTYzgABmbrAA=
Date: Sat, 4 Jan 2014 19:47:29 +0000
Message-ID: <2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
	since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: James Harper [mailto:james.harper@bendigoit.com.au]
> Sent: 4. januar 2014 08:21
> To: Kristian Hagsted Rasmussen
> Cc: xen-devel@lists.xenproject.org
> Subject: RE: [GPLPV] exclude xenscsi from installer, since it is not compiled
> 
> >
> > I fiddled around with compiling your GPL PV drivers, but I got an
> > error when generating the MSI packages. The error was related to
> > xenscsi, which is not compiled, yet WiX tries to include it. By
> > applying the following patch, I successfully compiled the driver. It
> > might not be the right approach just to remove the lines in
> > installer.wxs, since I got an error relating to xenvbd when installing
> > the driver on a Windows 8.1 domU.
> >
> 
> Yep. I had ancient .sys etc. files lying around in my build tree which were
> hiding the problem.
> 
> I have removed all xenscsi related stuff (needs to be rewritten anyway if
> anyone ever wanted it) and I can build MSI files now. Latest changes are
> pushed.
> 
> I'm using latest GPLPV on 2012R2 which is the same kernel as 8.1, so the
> drivers themselves should be okay. Was the error you got a crash, or an error
> message during install?

The error resulted in a crash of the installer. Trying to reinstall the package crashed the entire DomU (due to the xenpci.sys unload); however, the remaining drivers (including xenvbd) could be installed through Device Manager without problems.

On a side note, I have spent some time moving the relevant drivers (pci, net, usb, vbd_storport) and all programs to VS2013. I have succeeded in compiling in win7 mode, and they install correctly on my Windows 8.1 domU. Compiling for win8 and win8.1, however, fails in the storport build with the following error:

xenvbd.c(198): error C2039: 'Reserved' : is not a member of '_PORT_CONFIGURATION_INFORMATION'

This is due to a preprocessor conditional in storport.h that removes 'Reserved' for newer versions of Windows. However, I do not know how to fix this.
If you are interested in my project files for VS2013, I would be willing to clean them up and integrate them into your directory structure.

Best regards, Kristian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 21:15:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 21:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzYYA-0003Rl-SJ; Sat, 04 Jan 2014 21:14:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzYYA-0003RV-0Q
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 21:14:30 +0000
Received: from [85.158.137.68:46126] by server-10.bemta-3.messagelabs.com id
	A9/0D-23989-4B978C25; Sat, 04 Jan 2014 21:14:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1388870066!6082645!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24301 invoked from network); 4 Jan 2014 21:14:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 21:14:28 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s04LDLkf018649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 21:13:22 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s04LDKvF010738
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 21:13:21 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s04LDKDU029371; Sat, 4 Jan 2014 21:13:20 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sat, 04 Jan 2014 13:13:20 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 314931BFB02; Sat,  4 Jan 2014 16:13:19 -0500 (EST)
Date: Sat, 4 Jan 2014 16:13:19 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140104211319.GC9395@phenom.dumpdata.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388857936-664-3-git-send-email-dslutz@verizon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
> This also fixes the old debug output to compile and work if DBGP1
> and DBGP2 are defined like DBGP3.
> 
> @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>      return len;
>  }
>  
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */

??

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 21:15:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 21:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzYYA-0003Re-Fq; Sat, 04 Jan 2014 21:14:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VzYY9-0003RU-6i
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 21:14:29 +0000
Received: from [193.109.254.147:45527] by server-3.bemta-14.messagelabs.com id
	44/13-11000-CA978C25; Sat, 04 Jan 2014 21:14:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388870056!8828235!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15454 invoked from network); 4 Jan 2014 21:14:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 21:14:19 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s04LDovD011473
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 4 Jan 2014 21:13:50 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s04LDmt9005203
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 4 Jan 2014 21:13:49 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s04LDmPS010876; Sat, 4 Jan 2014 21:13:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sat, 04 Jan 2014 13:13:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 70EAD1BFB02; Sat,  4 Jan 2014 16:13:47 -0500 (EST)
Date: Sat, 4 Jan 2014 16:13:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140104211347.GD9395@phenom.dumpdata.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1388862686-1832-2-git-send-email-dslutz@verizon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388862686-1832-2-git-send-email-dslutz@verizon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, xen-devel@lists.xen.org,
	kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH 1/4] Make domian.is_hvm optional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 04, 2014 at 02:11:23PM -0500, Don Slutz wrote:

Your title should say 'domain'.

> ---
>  xen_hyper.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen_hyper.c b/xen_hyper.c
> index 7c7d3e1..3d56516 100644
> --- a/xen_hyper.c
> +++ b/xen_hyper.c
> @@ -1247,7 +1247,8 @@ xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
>  		dc->domain_flags = ULONG(dp + XEN_HYPER_OFFSET(domain_domain_flags));
>  	else if (XEN_HYPER_VALID_MEMBER(domain_is_shut_down)) {
>  		dc->domain_flags = 0;
> -		if (*(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
> +                if (XEN_HYPER_VALID_MEMBER(domain_is_hvm) &&
> +                    *(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
>  			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
>  		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
>  			dc->domain_flags |= XEN_HYPER_DOMS_privileged;
> -- 
> 1.8.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 21:25:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 21:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzYiD-00043b-0o; Sat, 04 Jan 2014 21:24:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzYiB-00043W-76
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 21:24:51 +0000
Received: from [85.158.143.35:56889] by server-1.bemta-4.messagelabs.com id
	EF/C6-02132-22C78C25; Sat, 04 Jan 2014 21:24:50 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388870687!9597446!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 758 invoked from network); 4 Jan 2014 21:24:49 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 21:24:49 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 21:24:46 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; 
	d="scan'208,217";a="640766427"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.90])
	by fldsmtpi01.verizon.com with ESMTP; 04 Jan 2014 21:24:45 +0000
Message-ID: <52C87C1C.5080908@terremark.com>
Date: Sat, 04 Jan 2014 16:24:44 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
	<20140104211319.GC9395@phenom.dumpdata.com>
In-Reply-To: <20140104211319.GC9395@phenom.dumpdata.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4070028943122963288=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4070028943122963288==
Content-Type: multipart/alternative;
 boundary="------------040904070703080307050903"

This is a multi-part message in MIME format.
--------------040904070703080307050903
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
>> This also fixes the old debug output to compile and work if DBGP1
>> and DBGP2 are defined like DBGP3.
>>
>> @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>>       return len;
>>   }
>>   
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
> ??

I take that as "Why add this"?

Dealing with many different styles of code, I find it helps to have this 
emacs addition.  From CODING_STYLE:

    Emacs local variables
    ---------------------

    A comment block containing local variables for emacs is permitted at
    the end of files.  It should be:

    /*
     * Local variables:
     * mode: C
     * c-file-style: "BSD"
     * c-basic-offset: 4
     * indent-tabs-mode: nil
     * End:
     */

    -Don Slutz

--------------040904070703080307050903--


--===============4070028943122963288==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4070028943122963288==--


From xen-devel-bounces@lists.xen.org Sat Jan 04 21:25:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 21:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzYiD-00043b-0o; Sat, 04 Jan 2014 21:24:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzYiB-00043W-76
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 21:24:51 +0000
Received: from [85.158.143.35:56889] by server-1.bemta-4.messagelabs.com id
	EF/C6-02132-22C78C25; Sat, 04 Jan 2014 21:24:50 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1388870687!9597446!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 758 invoked from network); 4 Jan 2014 21:24:49 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 21:24:49 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 21:24:46 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; 
	d="scan'208,217";a="640766427"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.90])
	by fldsmtpi01.verizon.com with ESMTP; 04 Jan 2014 21:24:45 +0000
Message-ID: <52C87C1C.5080908@terremark.com>
Date: Sat, 04 Jan 2014 16:24:44 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
	<20140104211319.GC9395@phenom.dumpdata.com>
In-Reply-To: <20140104211319.GC9395@phenom.dumpdata.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4070028943122963288=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4070028943122963288==
Content-Type: multipart/alternative;
 boundary="------------040904070703080307050903"

This is a multi-part message in MIME format.
--------------040904070703080307050903
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
>> This also fixes the old debug output to compile and work if DBGP1
>> and DBGP2 are defined like DBGP3.
>>
>> @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>>       return len;
>>   }
>>   
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
> ??

I take that as "Why add this"?

Dealing with many different styles of code, I find it helps to have this 
emacs add.  From CODING_STYLE:

    Emacs local variables
    ---------------------

    A comment block containing local variables for emacs is permitted at
    the end of files.  It should be:

    /*
      * Local variables:
      * mode: C
      * c-file-style: "BSD"
      * c-basic-offset: 4
      * indent-tabs-mode: nil
      * End:
      */

    -Don Slutz

--------------040904070703080307050903--


--===============4070028943122963288==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4070028943122963288==--


From xen-devel-bounces@lists.xen.org Sat Jan 04 21:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 21:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzYjt-0004Av-Ms; Sat, 04 Jan 2014 21:26:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1VzYjr-0004Ao-MK
	for xen-devel@lists.xen.org; Sat, 04 Jan 2014 21:26:35 +0000
Received: from [85.158.137.68:33765] by server-13.bemta-3.messagelabs.com id
	42/44-28603-A8C78C25; Sat, 04 Jan 2014 21:26:34 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388870792!7248756!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11943 invoked from network); 4 Jan 2014 21:26:34 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Jan 2014 21:26:34 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 04 Jan 2014 21:26:31 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,604,1384300800"; d="scan'208";a="640766838"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.90])
	by fldsmtpi01.verizon.com with ESMTP; 04 Jan 2014 21:26:30 +0000
Message-ID: <52C87C86.1070901@terremark.com>
Date: Sat, 04 Jan 2014 16:26:30 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1388862686-1832-2-git-send-email-dslutz@verizon.com>
	<20140104211347.GD9395@phenom.dumpdata.com>
In-Reply-To: <20140104211347.GD9395@phenom.dumpdata.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	kexec@lists.infradead.org, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH 1/4] Make domian.is_hvm optional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Sat, Jan 04, 2014 at 02:11:23PM -0500, Don Slutz wrote:
>
> Your title should say 'domain'.
>
Will fix.
     -Don Slutz
>> ---
>>   xen_hyper.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen_hyper.c b/xen_hyper.c
>> index 7c7d3e1..3d56516 100644
>> --- a/xen_hyper.c
>> +++ b/xen_hyper.c
>> @@ -1247,7 +1247,8 @@ xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
>>   		dc->domain_flags = ULONG(dp + XEN_HYPER_OFFSET(domain_domain_flags));
>>   	else if (XEN_HYPER_VALID_MEMBER(domain_is_shut_down)) {
>>   		dc->domain_flags = 0;
>> -		if (*(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
>> +                if (XEN_HYPER_VALID_MEMBER(domain_is_hvm) &&
>> +                    *(dp + XEN_HYPER_OFFSET(domain_is_hvm))) {
>>   			dc->domain_flags |= XEN_HYPER_DOMS_HVM;
>>   		} else if (*(dp + XEN_HYPER_OFFSET(domain_is_privileged))) {
>>   			dc->domain_flags |= XEN_HYPER_DOMS_privileged;
>> -- 
>> 1.8.4
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 04 23:53:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jan 2014 23:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzb1e-0001tP-Cf; Sat, 04 Jan 2014 23:53:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1Vzb1c-0001tK-PG
	for xen-devel@lists.xenproject.org; Sat, 04 Jan 2014 23:53:04 +0000
Received: from [85.158.143.35:49508] by server-1.bemta-4.messagelabs.com id
	AF/80-02132-FDE98C25; Sat, 04 Jan 2014 23:53:03 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-14.tower-21.messagelabs.com!1388879579!9646624!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7657 invoked from network); 4 Jan 2014 23:53:02 -0000
Received: from smtp2.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Jan 2014 23:53:02 -0000
Received: from bitcom1.int.sbss.com.au
	([2002:cb10:e0fe:201:a5ca:4fd3:14f:ad5d])
	by smtp2.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1Vzb1T-0000nV-GE; Sun, 05 Jan 2014 10:52:55 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Sun, 5 Jan 2014 10:52:54 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
Thread-Topic: [GPLPV] exclude xenscsi from installer, since it is not compiled
Thread-Index: Ac8Id/1R4U4vKAZhQj6eIy6rFl2IMAApTYzgABmbrAAACM/7EA==
Date: Sat, 4 Jan 2014 23:52:53 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
In-Reply-To: <2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.241]
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
	since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > I'm using latest GPLPV on 2012R2 which is the same kernel as 8.1, so the
> > drivers themselves should be okay. Was the error you got a crash, or an
> > error message during install?
> 
> The error resulted in a crash of the installer. Trying to reinstall the package
> crashed the entire DomU (due to xenpci.sys unload), however the remaining
> drivers (including xenvbd) could be installed through device manager without
> problems.
> 
> On a side note, I have spent some time moving the relevant drivers (pci, net,
> usb, vbd_storport) and all programs to VS2013. I have succeeded in compiling
> in win7 mode and they install correctly on my windows 8.1 domU. Compiling
> for win8 and win8.1 results in an error for the storport build, with the
> following error:
> 
> xenvbd.c(198): error C2039: 'Reserved' : is not a member of
> '_PORT_CONFIGURATION_INFORMATION'
> 
> This is due to an if statement in storport.h, removing 'Reserved' for newer
> versions of windows. However I do not know how to fix this.
> If you are interested in my project files for VS2013, I would be willing to clean
> them up and integrate them in your directory structure.
> 

The header defines it as:

#if (NTDDI_VERSION >= NTDDI_WIN8)
  PVOID                                  MiniportDumpData;
#else 
  PVOID                                  Reserved;
#endif

So I guess gplpv needs to incorporate that. Untested patch follows this email. I've previously read through the virtual storport changes and there are a heap of them, so there may be other issues to resolve too.

Unfortunately the latest DDK drops support for XP (I think), and doesn't add any new features that I could really use at this time, so I haven't bothered with it so far.

James

diff -r 85b99b9795a6 xenvbd_storport/xenvbd.c
--- a/xenvbd_storport/xenvbd.c  Sat Jan 04 18:17:51 2014 +1100
+++ b/xenvbd_storport/xenvbd.c  Sun Jan 05 10:48:43 2014 +1100
@@ -185,6 +185,12 @@
 XenVbd_HwStorFindAdapter(PVOID DeviceExtension, PVOID HwContext, PVOID BusInformation, PCHAR ArgumentString, PPORT_CONFIGURATION_INFORMATION ConfigInfo, PBOOLEAN Again)
 {
   PXENVBD_DEVICE_DATA xvdd = (PXENVBD_DEVICE_DATA)DeviceExtension;
+#if (NTDDI_VERSION >= NTDDI_WIN8)
+  PVOID dump_data = ConfigInfo->MiniportDumpData;
+#else
+  PVOID dump_data = ConfigInfo->Reserved;
+#endif
+

   UNREFERENCED_PARAMETER(HwContext);
   UNREFERENCED_PARAMETER(BusInformation);
@@ -195,14 +201,14 @@
   FUNCTION_MSG("xvdd = %p\n", xvdd);
   FUNCTION_MSG("ArgumentString = %s\n", ArgumentString);

-  memcpy(xvdd, ConfigInfo->Reserved, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
+  memcpy(xvdd, dump_data, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
   if (xvdd->device_state != DEVICE_STATE_ACTIVE) {
     return SP_RETURN_ERROR;
   }
   /* restore hypercall_stubs into dump_xenpci */
   XnSetHypercallStubs(xvdd->hypercall_stubs);
   /* make sure original xvdd is set to DISCONNECTED or resume will not work */
-  ((PXENVBD_DEVICE_DATA)ConfigInfo->Reserved)->device_state = DEVICE_STATE_DISCONNECTED;
+  ((PXENVBD_DEVICE_DATA)dump_data)->device_state = DEVICE_STATE_DISCONNECTED;
   InitializeListHead(&xvdd->srb_list);
   xvdd->aligned_buffer_in_use = FALSE;
   /* align the buffer to PAGE_SIZE */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 00:53:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 00:53:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzbxp-0004ta-VE; Sun, 05 Jan 2014 00:53:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1Vzbxn-0004tV-R1
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 00:53:12 +0000
Received: from [193.109.254.147:53111] by server-9.bemta-14.messagelabs.com id
	07/77-13957-7FCA8C25; Sun, 05 Jan 2014 00:53:11 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1388883188!8842222!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23063 invoked from network); 5 Jan 2014 00:53:09 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 00:53:09 -0000
Received: by mail-qc0-f172.google.com with SMTP id e16so16400460qcx.31
	for <xen-devel@lists.xen.org>; Sat, 04 Jan 2014 16:53:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=XZMC1T5bpqggi+cQf7EvlAIfs94n7GpKl3OveoMEcGQ=;
	b=cJEs/n8B0y0yqaTeLYp2Zl3jiA7+jnrMAG52MLVjAkEQX/Eu6XLA/XqH491zB4umXJ
	ewiwGSjjJspFIm2Qgc7eDMeyjBv/hC+WmcRMECb1H/Nsy/b5IPfPhz3Bhg+MXwPgPtL9
	8e33ZutKC5rojb8pFMOD6k2C1OpEfXaRIr+XYZG23CQCxq1JDtyqcKcovGNl5ta7sgBH
	GflMZNMhjhcB7DBLu04cOiVi0/GJnoQbwDnLk3AiIJosF0OQcvQVJ8T6ZrNrHa+wAtg6
	PNsnfOMr82hYPR880ztx69LxWPOP/oRco8/k/+X6G4w1jJ8Pi7Mu2Z1j4J4n5qfd6izi
	aSoA==
MIME-Version: 1.0
X-Received: by 10.224.119.147 with SMTP id z19mr162566009qaq.20.1388883188502; 
	Sat, 04 Jan 2014 16:53:08 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sat, 4 Jan 2014 16:53:08 -0800 (PST)
Date: Sun, 5 Jan 2014 00:53:08 +0000
Message-ID: <CAOTdubu-u89F0emaRTmW85=S+Nqx8YeUukjxnivgfAqxxjwynQ@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Dario Faggioli <dario.faggioli@citrix.com>, baozich@gmail.com
Subject: [Xen-devel] Mini-os port to ARM public repo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7257586477154350349=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7257586477154350349==
Content-Type: multipart/alternative; boundary=047d7bacb71015d5ad04ef2e9150

--047d7bacb71015d5ad04ef2e9150
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I've mentioned earlier [0] that I was working on porting mini-os to ARM, and
Dario suggested making the sources public even though it is still in its
early stages (and extremely experimental).

So, here we go. I've just published the sources on GitHub:
https://github.com/KarimAllah/xen/tree/minios-arm-port

As already mentioned, it's still at a very early stage, so don't expect
too much. :)

Here's a list of things that have been done so far:
* added a linker script for ARM
* building a zImage for ARM
* added hypercall support for ARM
* map the shared_info page
** currently only events_irq and vtimer_irq are enabled
* binding to VIRQ_DEBUG (very helpful for testing events while porting
the OS; press 'q' over the serial port to trigger it)
* included libfdt from FreeBSD to allow device tree parsing (only built
at the moment, still not used by the code)
* 1:1 virtual memory mapping for the whole virtual address space (mapped
as 1 MB section descriptors)
* exception handling (currently IRQ only)
* thread create/switch support for ARM (__arch_switch_threads,
create_thread, and run_idle_thread)
* div and mod support for ARM (copied from FreeBSD)
* wrote a trivial ARM GIC driver
* wrote a trivial ARM generic virtual timer driver (using CompareValue
to trigger interrupts and update the wallclock)
* retrieving the store PFN and event channel from the hypervisor instead
of the start_info page
* trivial spinlock implementation for ARM
* ported the os.h file to ARM (barrier, __cli, __sti, irqs_disabled,
local_irq_save, local_irq_restore, local_save_flags, and bit operations)

I've also created another list of *some* of the things that need to be
done, in ${xen}/extras/mini-os/ARM-TODO.txt, in case anybody is interested
in picking one of them up.


[0]
http://www.gossamer-threads.com/lists/xen/devel/312174?do=post_view_flat#312174


Regards

-- 
Karim Allah Ahmed.
LinkedIn <http://eg.linkedin.com/pub/karim-allah-ahmed/13/829/550/>

--047d7bacb71015d5ad04ef2e9150--


--===============7257586477154350349==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7257586477154350349==--


From xen-devel-bounces@lists.xen.org Sun Jan 05 02:37:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 02:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzdaP-0004m9-2N; Sun, 05 Jan 2014 02:37:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1VzdaO-0004m1-0w
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 02:37:08 +0000
Received: from [193.109.254.147:49223] by server-16.bemta-14.messagelabs.com
	id 1A/10-20600-355C8C25; Sun, 05 Jan 2014 02:37:07 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388889424!8814249!1
X-Originating-IP: [209.85.220.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20897 invoked from network); 5 Jan 2014 02:37:06 -0000
Received: from mail-pa0-f47.google.com (HELO mail-pa0-f47.google.com)
	(209.85.220.47)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 02:37:06 -0000
Received: by mail-pa0-f47.google.com with SMTP id kq14so17082177pab.6
	for <xen-devel@lists.xen.org>; Sat, 04 Jan 2014 18:37:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=9SUg+fh1XRMPY3B5IiVaUhFiMF8k1fcMUIxnbJwi4ZY=;
	b=gZdFI22m7OaHeOMAcKDug3Z7o+g3IMr3aX52pofcyceiBbfcCNyncJDt5EHai9meDh
	O4n5qfwiIYhGgH+k2N/cv7n4Yg/5yOUmXUN7Qgz3ktFhMqua3xhpPP1OKjwQTJPCJgiL
	Zo5NrGV38vVKrsJnT/GWt8EqC3OALHABfAyNJYP/ulwsOaA6kzlh+opywohnSURWF7/X
	bTDhfEMSwDmcup6qUDI0fX/K5hFlV6IEsP8+hSn9Bsdpt72/tMtCWBtEBps8euS1t3Lx
	+LKlWBM50xXaFHoJUijD5Av+0LcIda8H1r/3KQkfHYRq0Aa3EvKDA8eHqlOO3jfUWFMO
	nUtA==
X-Received: by 10.68.138.165 with SMTP id qr5mr92533958pbb.123.1388889424324; 
	Sat, 04 Jan 2014 18:37:04 -0800 (PST)
Received: from [192.168.1.101] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	zc6sm106439213pab.0.2014.01.04.18.37.00 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 04 Jan 2014 18:37:03 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <CAOTdubu-u89F0emaRTmW85=S+Nqx8YeUukjxnivgfAqxxjwynQ@mail.gmail.com>
Date: Sun, 5 Jan 2014 10:36:44 +0800
Message-Id: <CCF1BBD3-ED6C-4D4D-B141-FC8627271086@gmail.com>
References: <CAOTdubu-u89F0emaRTmW85=S+Nqx8YeUukjxnivgfAqxxjwynQ@mail.gmail.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
X-Mailer: Apple Mail (2.1827)
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Mini-os port to ARM public repo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 5, 2014, at 8:53, karim.allah.ahmed@gmail.com wrote:

> Hi,
> 
> I've mentioned earlier [0] that I was working on porting mini-os to arm and Dario suggested making the sources public even if it was still in early stages (and extremely experimental ).
> 
> So, here we go. I just published the sources on github
> https://github.com/KarimAllah/xen/tree/minios-arm-port

Bravo!

Baozi.

> 
> Currently as already mentioned, it's still in very early stages. So don't expect too much. :)
> 
> Here's a list of things that has been done so far:
> * added a linker script for arm
> * building zImage for arm
> * adding hypercalls support for arm
> * map shared_info page
> ** currently only events_irq and vtimer_irq are enabled
> * binding to VIRQ_DEBUG ( very helpful to test events during porting the OS - press 'q' through serial port to trigger it - )
> * included libfdt from freebsd to allow device tree parsing ( only built at the moment, still not used by the code )
> * 1-1 virtual memory for whole virtual address space ( mapped as 1 Mb descriptors )
> * exception handling ( support for only IRQ atm )
> * thread create/switch support for arm ( __arch_switch_threads, create_thread, and run_idle_thread )
> * div and mod support for arm ( copied from freebsd )
> * wrote a trivial arm gic driver
> * wrote a trivial arm generic virtual timer driver ( using CompareValue to trigger interrupts and update wallclock )
> * retrieving store pfn and event channel from hypervisor instead of start_info page
> * trivial spinlock implementation for arm
> * porting os.h file to arm ( barrier, __cli, __sti, irqs_disabled, local_irq_save, local_irq_restore, local_save_flags, and bit operations )
> 
> I've also created another list for *some* of the things that needs to be done in ${xen}/extras/mini-os/ARM-TODO.txt in case any body is interested to pick one of them.
> 
> 
> [0] http://www.gossamer-threads.com/lists/xen/devel/312174?do=post_view_flat#312174
> 
> 
> Regards
> 
> -- 
> Karim Allah Ahmed.
> LinkedIn


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 02:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 02:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzdcG-0004x6-VB; Sun, 05 Jan 2014 02:39:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1VzdcC-0004x0-OF
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 02:39:00 +0000
Received: from [193.109.254.147:50837] by server-7.bemta-14.messagelabs.com id
	24/CF-15500-4C5C8C25; Sun, 05 Jan 2014 02:39:00 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388889538!6602478!1
X-Originating-IP: [209.85.214.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31243 invoked from network); 5 Jan 2014 02:38:59 -0000
Received: from mail-ob0-f178.google.com (HELO mail-ob0-f178.google.com)
	(209.85.214.178)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 02:38:59 -0000
Received: by mail-ob0-f178.google.com with SMTP id uz6so17245179obc.9
	for <xen-devel@lists.xenproject.org>;
	Sat, 04 Jan 2014 18:38:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=6nfrpf3lutogeH/T2Vk2AVJnB2oeaTObj5ehjZbeZug=;
	b=P4Yn9jd0cS59qKnlBn9x8yBUjiH27bCcu4XMoPuRCoHPMy8+hhnIFM2StwR9cadVIW
	9lGnUlU3cKtFDUE/w6/yoso8r+Z7/zuPbzDVpVgqAU/gOBI1TxZaLbl8OWoF+9nnlCfQ
	ijvnZqGrregD+yI05y8mipgL2Hg8RmwtJiP0ZWwczopXWW8Tqr3NmZzz17NJIdKEolOr
	RpA9jsAhPGQ/+0iitM6p8l8gwd/TXGacMxI89lHFY4Nh5RMoL44sYcaUO2biOT/tRVWS
	IKQ2eZIVh6iiOHfmBRsxbZeqD9fkBOrFC3Qob/kHul3tS3qlHuGJlRUcdMDYFns4eA5I
	WLAg==
X-Gm-Message-State: ALoCoQmk4mDIfMYGPvU2pKh5iJbTrsRk64w4YWi9Xk9yy1dEUE1GzBlruMET/qHuYXex6NbcJOTe
MIME-Version: 1.0
X-Received: by 10.182.16.33 with SMTP id c1mr66302713obd.4.1388889537658; Sat,
	04 Jan 2014 18:38:57 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Sat, 4 Jan 2014 18:38:57 -0800 (PST)
In-Reply-To: <CA+rgWU0YX37vegAV90bOB3CwrQCXg5tJKGty=+SX5MFoA9=MAQ@mail.gmail.com>
References: <CA+rgWU0YX37vegAV90bOB3CwrQCXg5tJKGty=+SX5MFoA9=MAQ@mail.gmail.com>
Date: Sat, 4 Jan 2014 18:38:57 -0800
Message-ID: <CA+aC4kuQu+r+XJvKv19PBDy7OoB4tNUnh2hAOA+4k2m8x+NKfw@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Jeongseok Son <invictusjs@gmail.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Very slow disk I/O performance in PV & PVHVM domU
 (Xen 4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 4, 2014 at 4:12 AM, Jeongseok Son <invictusjs@gmail.com> wrote:
> Hello, I've used Xen 4.1.2 for 2 years to experiment some ideas.
>
> HVM DomU works well. However, with PV domU, or with PVHVM domU, the
> disk I/O performance is severely slow. The following are the results
> of dd executions in each domUs.
>
> - HVM
> $ dd if=/dev/zero of=disk.img count=512k bs=1k
> 524288+0 records in
> 524288+0 records out
> 536870912 bytes (537 MB) copied, 5.42523 s, 99.0 MB/s

This is not doing what you think it is.  You are writing to the page
cache, so chances are very little has hit disk by the time your benchmark
finishes.  Instead, you want:

$ dd if=/dev/zero of=disk.img oflag=direct,sync count=128k bs=4k

Expect the results to be far worse than your above test.

> - PV
> $ dd if=/dev/zero of=disk.img count=512k bs=1k
> 524288+0 records in
> 524288+0 records out
> 536870912 bytes (537 MB) copied, 45.898 s, 11.7 MB/s

Since you are writing to the page cache, you are primarily bound by
memcpy speed and syscall overhead.  I assume you are testing a 64-bit
guest.  If so, syscalls are much more expensive on PV than HVM.

Regards,

Anthony Liguori

> And after copying, the responsiveness of PV domU becomes too slow to
> use and so does PVHVM domU. But in Xen 4.3, PV domU shows right
> performance.
>
> I configured my PV domU to use disk file image like this.
>
> root = '/dev/xvda2 ro'
> disk = [
>     'file:/path/to/pv/images/disk.img,xvda2,w',
>     'file:/path/to/pv/images/swap.img,xvda1,w',
> ]
>
> My dom0 runs Ubuntu 12.04.2 LTS with linux kernel 3.9.4, and PV domU
> runs Ubuntu 10.04 LTS / Ubuntu 12.04 LTS with the same kernel. (I
> tried with an older linux kernel (2.6.39) but the result was same.)
>
> In my own guess, there's some issue when using disk image with PV
> block device driver in Xen 4.1.2. But I have to use disk file image
> because of some other experimental issues.
>
> Why my PV domU is so slow in Xen 4.1.2 but not in Xen 4.3? Which part
> of code should I modify to fix this problem in Xen 4.1.2?
>
> Thank you for any help you can provide.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 05:18:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 05:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzg6J-0002oV-OT; Sun, 05 Jan 2014 05:18:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1Vzg6I-0002oQ-Ex
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 05:18:14 +0000
Received: from [85.158.143.35:30280] by server-2.bemta-4.messagelabs.com id
	05/23-11386-51BE8C25; Sun, 05 Jan 2014 05:18:13 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-5.tower-21.messagelabs.com!1388899089!9681796!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11432 invoked from network); 5 Jan 2014 05:18:12 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 5 Jan 2014 05:18:12 -0000
Received: from bitcom1.int.sbss.com.au
	([2002:cb10:e0fe:201:a5ca:4fd3:14f:ad5d])
	by smtp2.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1Vzg68-0003Ss-1H; Sun, 05 Jan 2014 16:18:04 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Sun, 5 Jan 2014 16:18:02 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: James Harper <james.harper@bendigoit.com.au>, Kristian Hagsted Rasmussen
	<kristian@hagsted.dk>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer,	since it
	is not compiled
Thread-Index: AQHPCajPa184K0T6dkKwhvrU7nePBpp1lyrQ
Date: Sun, 5 Jan 2014 05:18:01 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.241]
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> The header defines it as:
> 
> #if (NTDDI_VERSION >= NTDDI_WIN8)
>   PVOID                                  MiniportDumpData;
> #else
>   PVOID                                  Reserved;
> #endif
> 
> So I guess gplpv needs to incorporate that. Untested patch follows this email.
> I've previously read through the virtual storport changes and there are a
> heap of them, so there may be other issues to resolve too.
> 

... but of course that patch fails on earlier builds because NTDDI_WIN8 isn't defined. Try the patch following this email instead and let me know if it works and I'll commit it.

James

diff -r 85b99b9795a6 xenvbd_storport/xenvbd.c
--- a/xenvbd_storport/xenvbd.c  Sat Jan 04 18:17:51 2014 +1100
+++ b/xenvbd_storport/xenvbd.c  Sun Jan 05 16:15:48 2014 +1100
@@ -185,6 +185,12 @@
 XenVbd_HwStorFindAdapter(PVOID DeviceExtension, PVOID HwContext, PVOID BusInformation, PCHAR ArgumentString, PPORT_CONFIGURATION_INFORMATION ConfigInfo, PBOOLEAN Again)
 {
   PXENVBD_DEVICE_DATA xvdd = (PXENVBD_DEVICE_DATA)DeviceExtension;
+#if defined(NTDDI_WIN8) && (NTDDI_VERSION >= NTDDI_WIN8)
+  PVOID dump_data = ConfigInfo->MiniportDumpData;
+#else
+  PVOID dump_data = ConfigInfo->Reserved;
+#endif
+

   UNREFERENCED_PARAMETER(HwContext);
   UNREFERENCED_PARAMETER(BusInformation);
@@ -195,14 +201,14 @@
   FUNCTION_MSG("xvdd = %p\n", xvdd);
   FUNCTION_MSG("ArgumentString = %s\n", ArgumentString);

-  memcpy(xvdd, ConfigInfo->Reserved, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
+  memcpy(xvdd, dump_data, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
   if (xvdd->device_state != DEVICE_STATE_ACTIVE) {
     return SP_RETURN_ERROR;
   }
   /* restore hypercall_stubs into dump_xenpci */
   XnSetHypercallStubs(xvdd->hypercall_stubs);
   /* make sure original xvdd is set to DISCONNECTED or resume will not work */
-  ((PXENVBD_DEVICE_DATA)ConfigInfo->Reserved)->device_state = DEVICE_STATE_DISCONNECTED;
+  ((PXENVBD_DEVICE_DATA)dump_data)->device_state = DEVICE_STATE_DISCONNECTED;
   InitializeListHead(&xvdd->srb_list);
   xvdd->aligned_buffer_in_use = FALSE;
   /* align the buffer to PAGE_SIZE */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 07:23:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 07:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzi2Z-0007wb-7r; Sun, 05 Jan 2014 07:22:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <invictusjs@gmail.com>) id 1Vzi2X-0007wW-KE
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 07:22:29 +0000
Received: from [193.109.254.147:35964] by server-7.bemta-14.messagelabs.com id
	05/48-15500-43809C25; Sun, 05 Jan 2014 07:22:28 +0000
X-Env-Sender: invictusjs@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1388906546!8894350!1
X-Originating-IP: [209.85.212.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15036 invoked from network); 5 Jan 2014 07:22:27 -0000
Received: from mail-vb0-f54.google.com (HELO mail-vb0-f54.google.com)
	(209.85.212.54)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 07:22:27 -0000
Received: by mail-vb0-f54.google.com with SMTP id g10so8282741vbg.41
	for <xen-devel@lists.xenproject.org>;
	Sat, 04 Jan 2014 23:22:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=FOz57JQsYibDLJzZfyLX/mUGgWSEqghIXy8WxtJ0/K8=;
	b=McrJgo58hRqOoNWGXmESF/Y1jAPRkadRTw2rD/VmQwSaoO64G6V+AzhgnRMfQXhnMG
	kriZxx351sn/q69r7eNs8v4T5YwfL2IA4zkhFXjnVhHjMQoybqR5hodfpIUNwRdlCuw/
	L6F08eXQtajdXYnx079JCyiRba7js0PWSvm84+0Ft3me9NHcl4FaBLRPxDAysOYw1Edu
	/RJ0HTiefB15A8tpb1EMAGgjmkpLvBvfnFODz5A5ivTgFsQ2ZHJ4VpM2MgoQVu45naYx
	hOQJVaT/Vn7U2mldchSk8nmhrkinFfPG8A2NB5X9Ak9dXdPufWYOU0sxtmvC9Q55Vn++
	w3Ng==
MIME-Version: 1.0
X-Received: by 10.220.252.137 with SMTP id mw9mr849381vcb.23.1388906546454;
	Sat, 04 Jan 2014 23:22:26 -0800 (PST)
Received: by 10.52.163.19 with HTTP; Sat, 4 Jan 2014 23:22:26 -0800 (PST)
In-Reply-To: <CA+aC4kuQu+r+XJvKv19PBDy7OoB4tNUnh2hAOA+4k2m8x+NKfw@mail.gmail.com>
References: <CA+rgWU0YX37vegAV90bOB3CwrQCXg5tJKGty=+SX5MFoA9=MAQ@mail.gmail.com>
	<CA+aC4kuQu+r+XJvKv19PBDy7OoB4tNUnh2hAOA+4k2m8x+NKfw@mail.gmail.com>
Date: Sun, 5 Jan 2014 16:22:26 +0900
Message-ID: <CABH_4FpEeXZt9MFqUV7nZtn6q+jbxv=hrzbOJ1xiwOr2yvrPhg@mail.gmail.com>
From: Oscar <invictusjs@gmail.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Very slow disk I/O performance in PV & PVHVM domU
 (Xen 4.1.2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thank you for your reply.

On Sun, Jan 5, 2014 at 11:38 AM, Anthony Liguori <anthony@codemonkey.ws> wrote:
> On Sat, Jan 4, 2014 at 4:12 AM, Jeongseok Son <invictusjs@gmail.com> wrote:
>> Hello, I've used Xen 4.1.2 for 2 years to experiment with some ideas.
>>
>> HVM DomU works well. However, with PV domU, or with PVHVM domU, the
>> disk I/O performance is severely slow. The following are the results
>> of dd executions in each domU.
>>
>> - HVM
>> $ dd if=/dev/zero of=disk.img count=512k bs=1k
>> 524288+0 records in
>> 524288+0 records out
>> 536870912 bytes (537 MB) copied, 5.42523 s, 99.0 MB/s
>
> This is not doing what you think it is.  You are writing to the page
> cache, so chances are very little has hit the disk by the time your
> benchmark finishes.  Instead, you want:
>
> $ dd if=/dev/zero of=disk.img oflag=direct,sync count=128k bs=4k
>
> Expect the results to be far worse than your above test.

Yes, you are right. The write speed is less than 1 MB/s in the HVM
guest, and the same is true in the PV guest.

So disk I/O performance is probably not the real cause of the PV
performance degradation.

>
>> - PV
>> $ dd if=/dev/zero of=disk.img count=512k bs=1k
>> 524288+0 records in
>> 524288+0 records out
>> 536870912 bytes (537 MB) copied, 45.898 s, 11.7 MB/s
>
> Since you are writing to the page cache, you are primarily bound by
> memcpy speed and syscall overhead.  I assume you are testing a 64-bit
> guest.  If so, syscalls are much more expensive on PV than HVM.

Then, is this normal performance for a 64-bit PV guest? I also copied
a large file, compiled the Linux kernel source, and ran some other
tests on the PV guest, and usually all of them took 10 times longer
than they did on the HVM guest.

And as I said, in Xen 4.3 the PV domU shows good performance, much
like the HVM domU. Have there been any patches related to this issue
since Xen 4.1.2?

Best Regards,

Jeongseok

>
> Regards,
>
> Anthony Liguori
>
>> And after copying, the responsiveness of the PV domU becomes too slow
>> to use, and so does that of the PVHVM domU. But in Xen 4.3, the PV
>> domU performs correctly.
>>
>> I configured my PV domU to use a disk file image like this.
>>
>> root = '/dev/xvda2 ro'
>> disk = [
>>     'file:/path/to/pv/images/disk.img,xvda2,w',
>>     'file:/path/to/pv/images/swap.img,xvda1,w',
>> ]
>>
>> My dom0 runs Ubuntu 12.04.2 LTS with Linux kernel 3.9.4, and the PV domU
>> runs Ubuntu 10.04 LTS / Ubuntu 12.04 LTS with the same kernel. (I also
>> tried an older Linux kernel (2.6.39), but the result was the same.)
>>
>> My guess is that there is some issue when using a disk image with the
>> PV block device driver in Xen 4.1.2. But I have to use a disk file
>> image because of some other experimental constraints.
>>
>> Why is my PV domU so slow in Xen 4.1.2 but not in Xen 4.3? Which part
>> of the code should I modify to fix this problem in Xen 4.1.2?
>>
>> Thank you for any help you can provide.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 08:23:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 08:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vziz2-0002GS-PK; Sun, 05 Jan 2014 08:22:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1Vziz1-0002GN-3B
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 08:22:55 +0000
Received: from [85.158.137.68:16058] by server-11.bemta-3.messagelabs.com id
	E3/9D-19379-E5619C25; Sun, 05 Jan 2014 08:22:54 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388910171!7277092!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32512 invoked from network); 5 Jan 2014 08:22:52 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 08:22:52 -0000
Received: by mail-pd0-f178.google.com with SMTP id y10so16884017pdj.23
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 00:22:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:message-id:date:to:mime-version;
	bh=xZ3/V1DF8KG/CsWH26rEqsOQDYLytCiMJJgwwS3HzwE=;
	b=ta6MEwVkSmKoh5pCjOVp4EP+ZexVeLlfCPni526MrnRUs00Pz4ppZxFIeKecBqyH+p
	YDiXKHoUd1SRiY15nZvHEbDoHCuL8E0XGO9gSQvVZZeyG4T5kL0u8AsdznHLIvjcbKlw
	kwG2YGmz/9pASzUGkQ2jGn1VW9RBf97skytqi2XAizQcRYcAdppG8NQC4FVn11k+kr4K
	hJUjfwsoafoqiqWAlevQCWesW6wfwwSgMxWQHsGUPOu7YVx8j9G6iKjEVL+Fx3hRTUBj
	50kYBsGyVu14zNhtIVB1BRY7/6SA2vAUnPB28Lk6+sF7hQp34tZVttZ7h3MB8NmZxYc5
	G5GQ==
X-Received: by 10.68.189.5 with SMTP id ge5mr111895093pbc.42.1388910170233;
	Sun, 05 Jan 2014 00:22:50 -0800 (PST)
Received: from [192.168.1.101] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	ka3sm120403727pbc.32.2014.01.05.00.22.09
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 00:22:49 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
Content-Type: multipart/mixed;
	boundary="Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274"
Message-Id: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
Date: Sun, 5 Jan 2014 16:21:54 +0800
To: List Developer Xen <xen-devel@lists.xen.org>
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Subject: [Xen-devel] Trigger kernel bug when creating domU on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hi all,

Recently, I've been trying to continue my work on mini-os for arm64. After
some early hacks over the summer, there is a basic framework that passes the
build. If everything goes well, it should at least be able to print the
"Bootstrapping..." message in start_kernel. However, there still seems to be
something wrong when creating the domain with xl: when alloc_magic_pages is
called, the dom0 kernel generates a bug oops. I also tried replacing the domU
kernel with Linux, which triggers the same kernel bug oops.
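
(The "xen_add_mach_to_phys_entry ... already exists" and
"xen_add_phys_to_mach_entry ... already exists" lines in the attached dmesg
logs suggest dom0's pfn<->mfn tracking is being handed conflicting entries
for frames it already tracks. A toy sketch of that bookkeeping, purely
illustrative and not the real kernel code — the actual implementation uses
red-black trees in the kernel's Xen ARM p2m code — shows why a second insert
for an already-tracked pfn or mfn is rejected rather than overwritten:

```python
# Illustrative model (NOT the kernel implementation) of dom0's two-way
# pfn<->mfn lookup tables. A duplicate insert is rejected with a message
# shaped like the "cannot add ... already exists" warnings in the logs.

class P2MError(Exception):
    pass

class P2M:
    def __init__(self):
        self.phys_to_mach = {}  # pfn -> mfn
        self.mach_to_phys = {}  # mfn -> pfn

    def add(self, pfn, mfn):
        # Refuse a conflicting entry instead of silently overwriting it;
        # a stale entry left behind by an earlier mapping makes every
        # later mapping of that frame fail this check.
        if pfn in self.phys_to_mach:
            raise P2MError(
                "cannot add pfn=%#x -> mfn=%#x: pfn=%#x -> mfn=%#x "
                "already exists" % (pfn, mfn, pfn, self.phys_to_mach[pfn]))
        if mfn in self.mach_to_phys:
            raise P2MError(
                "cannot add pfn=%#x -> mfn=%#x: mfn already mapped from "
                "pfn=%#x" % (pfn, mfn, self.mach_to_phys[mfn]))
        self.phys_to_mach[pfn] = mfn
        self.mach_to_phys[mfn] = pfn

    def remove(self, pfn):
        # Both directions must be torn down together, or a stale
        # reverse entry blocks future adds.
        mfn = self.phys_to_mach.pop(pfn)
        del self.mach_to_phys[mfn]

p2m = P2M()
p2m.add(0xF205F, 0xF205F)        # identity entry left by an earlier mapping
try:
    p2m.add(0xF2085, 0xF205F)    # second pfn for the same mfn -> rejected
except P2MError as e:
    print(e)
```

If the mapping set up for the foreign pages is not fully torn down on unmap,
this would also fit the "Bad page state" reports, since the pages handed back
to the allocator still look in use.)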

The logs from mini-os and Linux are both attached below:


--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.dmesg-linux
Content-Type: application/octet-stream;
	name="log.dmesg-linux"
Content-Transfer-Encoding: 7bit

BUG: Bad page state in process xl  pfn:f213b
page:ffffffbc034f44e8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213c
page:ffffffbc034f4520 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213d
page:ffffffbc034f4558 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213e
page:ffffffbc034f4590 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213f
page:ffffffbc034f45c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2140
page:ffffffbc034f4600 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2141
page:ffffffbc034f4638 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2142
page:ffffffbc034f4670 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2143
page:ffffffbc034f46a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2144
page:ffffffbc034f46e0 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2145
page:ffffffbc034f4718 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2146
page:ffffffbc034f4750 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2147
page:ffffffbc034f4788 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
xen_add_mach_to_phys_entry: cannot add pfn=0x00000000000f2042 -> mfn=0x00000000000f1a7f: pfn=0x00000000000f1a7f -> mfn=0x00000000000f1a7f already exists
BUG: Bad rss-counter state mm:ffffffc002fd2700 idx:0 val:-1393
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f2310 -> mfn=0x0000000000083cc2: pfn=0x00000000000f2310 -> mfn=0x00000000000f2310 already exists

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.dmesg-minios
Content-Type: application/octet-stream;
	name="log.dmesg-minios"
Content-Transfer-Encoding: 7bit

[<ffffffc000120af4>] print_bad_pte+0x124/0x1b8
[<ffffffc000121e84>] unmap_single_vma+0x4bc/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20ce
page:ffffffbc034f2d10 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f205f
page:ffffffbc034f14c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20d5
page:ffffffbc034f2e98 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20dc
page:ffffffbc034f3020 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20dd
page:ffffffbc034f3058 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20de
page:ffffffbc034f3090 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20df
page:ffffffbc034f30c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e0
page:ffffffbc034f3100 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e1
page:ffffffbc034f3138 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e2
page:ffffffbc034f3170 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e3
page:ffffffbc034f31a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e4
page:ffffffbc034f31e0 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page map in process xl  pte:a00000f20e3f53 pmd:f207f003
page:ffffffbc034f31a8 count:1 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
addr:0000007fb01dc000 vm_flags:040644ff anon_vma:          (null) mapping:ffffffc0028077a8 index:0
vma->vm_ops->fault: privcmd_fault+0x0/0x38
vma->vm_file->f_op->mmap: privcmd_mmap+0x0/0x2c
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc000120af4>] print_bad_pte+0x124/0x1b8
[<ffffffc000121e84>] unmap_single_vma+0x4bc/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e3
page:ffffffbc034f31a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f20e3 -> mfn=0x00000000000f20e3: pfn=0x00000000000f20e3 -> mfn=0x00000000000f20e3 already exists
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f2085 -> mfn=0x00000000000f205f: pfn=0x00000000000f205f -> mfn=0x00000000000f205f already exists
BUG: Bad rss-counter state mm:ffffffc002fd1b00 idx:0 val:-18

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.xl-linux
Content-Type: application/octet-stream;
	name="log.xl-linux"
Content-Transfer-Encoding: 7bit

libxl: debug: libxl_create.c:1309:do_domain_create: ao 0xa169160: create: how=(nil) callback=(nil) poller=0xa15ec60
libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is unavailable, use qemu-xen-traditional instead: No such file or directory
libxl: debug: libxl_create.c:753:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0xa15f038: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=5, free_memkb=1885
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 1885 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path /root/vmlinuz
domainbuilder: detail: xc_dom_kernel_file: filename="/root/vmlinuz"
domainbuilder: detail: xc_dom_malloc_filemap    : 5544 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps xen-3.0-aarch64 xen-3.0-armv7l 
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 80000
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) loader ... 
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64: 0x80080000 -> 0x805ea338
libxl: debug: libxl_arm.c:433:libxl__arch_domain_configure: constructing DTB for Xen version 4.4 guest
libxl: debug: libxl_arm.c:497:libxl__arch_domain_configure: fdt total size 1138
domainbuilder: detail: xc_dom_devicetree_mem: called
domainbuilder: detail: xc_dom_mem_init: mem 128 MB, pages 0x8000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x8000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 64
domainbuilder: detail: xc_dom_malloc            : 256 kB
domainbuilder: detail: arch_setup_meminit: devicetree: 0x87fff000 -> 0x87fff472
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: seg->vstart = 0x80080000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x80080000 -> 0x805eb000  (pfn 0x80080 + 0x56b pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x80080+0x56b at 0x7fae5b7000
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x80080000-0x805eb000
domainbuilder: detail: xc_dom_load_zimage_kernel: copy 5677880 bytes from blob 0x7faeb63000 to dst 0x7fae5b7000
domainbuilder: detail: seg->vstart = 0x87fff000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   devicetree   : 0x87fff000 -> 0x88000000  (pfn 0x87fff + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x87fff+0x1 at 0x7faf400000
domainbuilder: detail: alloc_magic_pages: called
domainbuilder: detail: count_pgtables_arm: called
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x88000000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-aarch64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-armv7l
domainbuilder: detail: setup_pgtables_arm: called
domainbuilder: detail: clear_page: pfn 0x88000, mfn 0x88000
domainbuilder: detail: clear_page: pfn 0x88001, mfn 0x88001
domainbuilder: detail: start_info_arm: called
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 290 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 5544 kB
domainbuilder: detail:       domU mmap          : 5552 kB
domainbuilder: detail: vcpu_arm64: called
domainbuilder: detail: DTB 87fff000
domainbuilder: detail: Initial state CPSR 0x1c5 PC 0x80080000
domainbuilder: detail: launch_vm: called, ctxt=0x7faf404004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0xa169160: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0xa169160: complete, rc=0
libxl: debug: libxl_create.c:1323:do_domain_create: ao 0xa169160: inprogress: poller=0xa15ec60, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0xa169160: destroy
xc: debug: hypercall buffer: total allocations:106 total releases:106
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:98 misses:4 toobig:4
Parsing config from config-linux

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.xl-minios
Content-Type: application/octet-stream;
	name="log.xl-minios"
Content-Transfer-Encoding: 7bit

libxl: debug: libxl_create.c:1309:do_domain_create: ao 0x2615d160: create: how=(nil) callback=(nil) poller=0x26152c60
libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is unavailable, use qemu-xen-traditional instead: No such file or directory
libxl: debug: libxl_create.c:753:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x26153038: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=5, free_memkb=1885
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 1885 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path /root/mini-os
domainbuilder: detail: xc_dom_kernel_file: filename="/root/mini-os"
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps xen-3.0-aarch64 xen-3.0-armv7l 
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 80000
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) loader ... 
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64: 0x80000000 -> 0x8000b1a0
libxl: debug: libxl_arm.c:433:libxl__arch_domain_configure: constructing DTB for Xen version 4.4 guest
libxl: debug: libxl_arm.c:497:libxl__arch_domain_configure: fdt total size 1138
domainbuilder: detail: xc_dom_devicetree_mem: called
domainbuilder: detail: xc_dom_mem_init: mem 128 MB, pages 0x8000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x8000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 64
domainbuilder: detail: xc_dom_malloc            : 256 kB
domainbuilder: detail: arch_setup_meminit: devicetree: 0x87fff000 -> 0x87fff472
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: seg->vstart = 0x80000000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x80000000 -> 0x8000c000  (pfn 0x80000 + 0xc pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x80000+0xc at 0x7fafe51000
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x80000000-0x8000c000
domainbuilder: detail: xc_dom_load_zimage_kernel: copy 45472 bytes from blob 0x7fafe9e000 to dst 0x7fafe51000
domainbuilder: detail: seg->vstart = 0x87fff000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   devicetree   : 0x87fff000 -> 0x88000000  (pfn 0x87fff + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x87fff+0x1 at 0x7fb01dc000
domainbuilder: detail: alloc_magic_pages: called
domainbuilder: detail: count_pgtables_arm: called
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x88000000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-aarch64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-armv7l
domainbuilder: detail: setup_pgtables_arm: called
domainbuilder: detail: clear_page: pfn 0x88000, mfn 0x88000
domainbuilder: detail: clear_page: pfn 0x88001, mfn 0x88001
domainbuilder: detail: start_info_arm: called
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 258 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 44 kB
domainbuilder: detail:       domU mmap          : 52 kB
domainbuilder: detail: vcpu_arm64: called
domainbuilder: detail: DTB 87fff000
domainbuilder: detail: Initial state CPSR 0x1c5 PC 0x80000000
domainbuilder: detail: launch_vm: called, ctxt=0x7fb01e0004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0x2615d160: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x2615d160: complete, rc=0
libxl: debug: libxl_create.c:1323:do_domain_create: ao 0x2615d160: inprogress: poller=0x26152c60, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x2615d160: destroy
xc: debug: hypercall buffer: total allocations:106 total releases:106
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:98 misses:4 toobig:4
Parsing config from config

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
	charset=us-ascii



Any ideas?

Cheers,

Baozi
--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274--


From xen-devel-bounces@lists.xen.org Sun Jan 05 08:23:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 08:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vziz2-0002GS-PK; Sun, 05 Jan 2014 08:22:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1Vziz1-0002GN-3B
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 08:22:55 +0000
Received: from [85.158.137.68:16058] by server-11.bemta-3.messagelabs.com id
	E3/9D-19379-E5619C25; Sun, 05 Jan 2014 08:22:54 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388910171!7277092!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32512 invoked from network); 5 Jan 2014 08:22:52 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 08:22:52 -0000
Received: by mail-pd0-f178.google.com with SMTP id y10so16884017pdj.23
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 00:22:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:message-id:date:to:mime-version;
	bh=xZ3/V1DF8KG/CsWH26rEqsOQDYLytCiMJJgwwS3HzwE=;
	b=ta6MEwVkSmKoh5pCjOVp4EP+ZexVeLlfCPni526MrnRUs00Pz4ppZxFIeKecBqyH+p
	YDiXKHoUd1SRiY15nZvHEbDoHCuL8E0XGO9gSQvVZZeyG4T5kL0u8AsdznHLIvjcbKlw
	kwG2YGmz/9pASzUGkQ2jGn1VW9RBf97skytqi2XAizQcRYcAdppG8NQC4FVn11k+kr4K
	hJUjfwsoafoqiqWAlevQCWesW6wfwwSgMxWQHsGUPOu7YVx8j9G6iKjEVL+Fx3hRTUBj
	50kYBsGyVu14zNhtIVB1BRY7/6SA2vAUnPB28Lk6+sF7hQp34tZVttZ7h3MB8NmZxYc5
	G5GQ==
X-Received: by 10.68.189.5 with SMTP id ge5mr111895093pbc.42.1388910170233;
	Sun, 05 Jan 2014 00:22:50 -0800 (PST)
Received: from [192.168.1.101] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	ka3sm120403727pbc.32.2014.01.05.00.22.09
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 00:22:49 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
Content-Type: multipart/mixed;
	boundary="Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274"
Message-Id: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
Date: Sun, 5 Jan 2014 16:21:54 +0800
To: List Developer Xen <xen-devel@lists.xen.org>
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Subject: [Xen-devel] Trigger kernel bug when creating domU on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
	charset=us-ascii

Hi all,

Recently, I've been trying to continue my work on mini-os for arm64. After
some early hacks over the summer, there is a basic framework that passes the
build. If everything goes well, it should at least be able to print the
"Bootstrapping..." message in start_kernel. However, something still seems to
go wrong when creating the domain with xl: when alloc_magic_pages is called,
the dom0 kernel generates a BUG oops. I tried replacing the domU kernel with
Linux, and it triggers the same kernel bug oops.

The logs from mini-os and Linux are both attached below:


--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.dmesg-linux
Content-Type: application/octet-stream;
	name="log.dmesg-linux"
Content-Transfer-Encoding: 7bit

BUG: Bad page state in process xl  pfn:f213b
page:ffffffbc034f44e8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213c
page:ffffffbc034f4520 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213d
page:ffffffbc034f4558 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213e
page:ffffffbc034f4590 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f213f
page:ffffffbc034f45c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2140
page:ffffffbc034f4600 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2141
page:ffffffbc034f4638 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2142
page:ffffffbc034f4670 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2143
page:ffffffbc034f46a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2144
page:ffffffbc034f46e0 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2145
page:ffffffbc034f4718 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2146
page:ffffffbc034f4750 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f2147
page:ffffffbc034f4788 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 504 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc000121c54>] unmap_single_vma+0x28c/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
xen_add_mach_to_phys_entry: cannot add pfn=0x00000000000f2042 -> mfn=0x00000000000f1a7f: pfn=0x00000000000f1a7f -> mfn=0x00000000000f1a7f already exists
BUG: Bad rss-counter state mm:ffffffc002fd2700 idx:0 val:-1393
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f2310 -> mfn=0x0000000000083cc2: pfn=0x00000000000f2310 -> mfn=0x00000000000f2310 already exists

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.dmesg-minios
Content-Type: application/octet-stream;
	name="log.dmesg-minios"
Content-Transfer-Encoding: 7bit

[<ffffffc000120af4>] print_bad_pte+0x124/0x1b8
[<ffffffc000121e84>] unmap_single_vma+0x4bc/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20ce
page:ffffffbc034f2d10 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f205f
page:ffffffbc034f14c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20d5
page:ffffffbc034f2e98 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20dc
page:ffffffbc034f3020 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20dd
page:ffffffbc034f3058 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20de
page:ffffffbc034f3090 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20df
page:ffffffbc034f30c8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e0
page:ffffffbc034f3100 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e1
page:ffffffbc034f3138 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e2
page:ffffffbc034f3170 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e3
page:ffffffbc034f31a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e4
page:ffffffbc034f31e0 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page map in process xl  pte:a00000f20e3f53 pmd:f207f003
page:ffffffbc034f31a8 count:1 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
addr:0000007fb01dc000 vm_flags:040644ff anon_vma:          (null) mapping:ffffffc0028077a8 index:0
vma->vm_ops->fault: privcmd_fault+0x0/0x38
vma->vm_file->f_op->mmap: privcmd_mmap+0x0/0x2c
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc000120af4>] print_bad_pte+0x124/0x1b8
[<ffffffc000121e84>] unmap_single_vma+0x4bc/0x6e8
[<ffffffc000122990>] unmap_vmas+0x68/0xb4
[<ffffffc00012761c>] unmap_region+0xcc/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
BUG: Bad page state in process xl  pfn:f20e3
page:ffffffbc034f31a8 count:0 mapcount:-1 mapping:          (null) index:0x0
page flags: 0x14(referenced|dirty)
Modules linked in:
CPU: 1 PID: 505 Comm: xl Tainted: G    B        3.13.0-rc4+ #4
Call trace:
[<ffffffc000087be4>] dump_backtrace+0x0/0x12c
[<ffffffc000087d24>] show_stack+0x14/0x1c
[<ffffffc00042967c>] dump_stack+0x70/0xac
[<ffffffc00010ac3c>] bad_page+0xc4/0x110
[<ffffffc00010ad58>] free_pages_prepare+0xd0/0xd8
[<ffffffc00010c338>] free_hot_cold_page+0x2c/0x178
[<ffffffc00010c900>] free_hot_cold_page_list+0x38/0x60
[<ffffffc000110200>] release_pages+0x190/0x1dc
[<ffffffc0001276ac>] unmap_region+0x15c/0x1d4
[<ffffffc000129498>] do_munmap+0x218/0x314
[<ffffffc0001295d8>] vm_munmap+0x44/0x64
[<ffffffc00012a2dc>] SyS_munmap+0x24/0x34
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f20e3 -> mfn=0x00000000000f20e3: pfn=0x00000000000f20e3 -> mfn=0x00000000000f20e3 already exists
xen_add_phys_to_mach_entry: cannot add pfn=0x00000000000f2085 -> mfn=0x00000000000f205f: pfn=0x00000000000f205f -> mfn=0x00000000000f205f already exists
BUG: Bad rss-counter state mm:ffffffc002fd1b00 idx:0 val:-18

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.xl-linux
Content-Type: application/octet-stream;
	name="log.xl-linux"
Content-Transfer-Encoding: 7bit

libxl: debug: libxl_create.c:1309:do_domain_create: ao 0xa169160: create: how=(nil) callback=(nil) poller=0xa15ec60
libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is unavailable, use qemu-xen-traditional instead: No such file or directory
libxl: debug: libxl_create.c:753:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0xa15f038: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=5, free_memkb=1885
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 1885 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path /root/vmlinuz
domainbuilder: detail: xc_dom_kernel_file: filename="/root/vmlinuz"
domainbuilder: detail: xc_dom_malloc_filemap    : 5544 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps xen-3.0-aarch64 xen-3.0-armv7l 
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 80000
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) loader ... 
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64: 0x80080000 -> 0x805ea338
libxl: debug: libxl_arm.c:433:libxl__arch_domain_configure: constructing DTB for Xen version 4.4 guest
libxl: debug: libxl_arm.c:497:libxl__arch_domain_configure: fdt total size 1138
domainbuilder: detail: xc_dom_devicetree_mem: called
domainbuilder: detail: xc_dom_mem_init: mem 128 MB, pages 0x8000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x8000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 64
domainbuilder: detail: xc_dom_malloc            : 256 kB
domainbuilder: detail: arch_setup_meminit: devicetree: 0x87fff000 -> 0x87fff472
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: seg->vstart = 0x80080000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x80080000 -> 0x805eb000  (pfn 0x80080 + 0x56b pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x80080+0x56b at 0x7fae5b7000
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x80080000-0x805eb000
domainbuilder: detail: xc_dom_load_zimage_kernel: copy 5677880 bytes from blob 0x7faeb63000 to dst 0x7fae5b7000
domainbuilder: detail: seg->vstart = 0x87fff000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   devicetree   : 0x87fff000 -> 0x88000000  (pfn 0x87fff + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x87fff+0x1 at 0x7faf400000
domainbuilder: detail: alloc_magic_pages: called
domainbuilder: detail: count_pgtables_arm: called
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x88000000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-aarch64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-armv7l
domainbuilder: detail: setup_pgtables_arm: called
domainbuilder: detail: clear_page: pfn 0x88000, mfn 0x88000
domainbuilder: detail: clear_page: pfn 0x88001, mfn 0x88001
domainbuilder: detail: start_info_arm: called
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 290 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 5544 kB
domainbuilder: detail:       domU mmap          : 5552 kB
domainbuilder: detail: vcpu_arm64: called
domainbuilder: detail: DTB 87fff000
domainbuilder: detail: Initial state CPSR 0x1c5 PC 0x80080000
domainbuilder: detail: launch_vm: called, ctxt=0x7faf404004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0xa169160: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0xa169160: complete, rc=0
libxl: debug: libxl_create.c:1323:do_domain_create: ao 0xa169160: inprogress: poller=0xa15ec60, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0xa169160: destroy
xc: debug: hypercall buffer: total allocations:106 total releases:106
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:98 misses:4 toobig:4
Parsing config from config-linux

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Disposition: attachment;
	filename=log.xl-minios
Content-Type: application/octet-stream;
	name="log.xl-minios"
Content-Transfer-Encoding: 7bit

libxl: debug: libxl_create.c:1309:do_domain_create: ao 0x2615d160: create: how=(nil) callback=(nil) poller=0x26152c60
libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is unavailable, use qemu-xen-traditional instead: No such file or directory
libxl: debug: libxl_create.c:753:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x26153038: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=5, free_memkb=1885
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 1885 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path /root/mini-os
domainbuilder: detail: xc_dom_kernel_file: filename="/root/mini-os"
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps xen-3.0-aarch64 xen-3.0-armv7l 
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 80000
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ... 
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) loader ... 
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64: 0x80000000 -> 0x8000b1a0
libxl: debug: libxl_arm.c:433:libxl__arch_domain_configure: constructing DTB for Xen version 4.4 guest
libxl: debug: libxl_arm.c:497:libxl__arch_domain_configure: fdt total size 1138
domainbuilder: detail: xc_dom_devicetree_mem: called
domainbuilder: detail: xc_dom_mem_init: mem 128 MB, pages 0x8000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x8000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 64
domainbuilder: detail: xc_dom_malloc            : 256 kB
domainbuilder: detail: arch_setup_meminit: devicetree: 0x87fff000 -> 0x87fff472
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: seg->vstart = 0x80000000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x80000000 -> 0x8000c000  (pfn 0x80000 + 0xc pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x80000+0xc at 0x7fafe51000
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x80000000-0x8000c000
domainbuilder: detail: xc_dom_load_zimage_kernel: copy 45472 bytes from blob 0x7fafe9e000 to dst 0x7fafe51000
domainbuilder: detail: seg->vstart = 0x87fff000
domainbuilder: detail: dom->parms.virt_base = 0x80000000
domainbuilder: detail: dom->rambase_pfn = 0x80000
domainbuilder: detail: xc_dom_alloc_segment:   devicetree   : 0x87fff000 -> 0x88000000  (pfn 0x87fff + 0x1 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x87fff+0x1 at 0x7fb01dc000
domainbuilder: detail: alloc_magic_pages: called
domainbuilder: detail: count_pgtables_arm: called
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x88000000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-aarch64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-armv7l
domainbuilder: detail: setup_pgtables_arm: called
domainbuilder: detail: clear_page: pfn 0x88000, mfn 0x88000
domainbuilder: detail: clear_page: pfn 0x88001, mfn 0x88001
domainbuilder: detail: start_info_arm: called
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 258 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 44 kB
domainbuilder: detail:       domU mmap          : 52 kB
domainbuilder: detail: vcpu_arm64: called
domainbuilder: detail: DTB 87fff000
domainbuilder: detail: Initial state CPSR 0x1c5 PC 0x80000000
domainbuilder: detail: launch_vm: called, ctxt=0x7fb01e0004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0x2615d160: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x2615d160: complete, rc=0
libxl: debug: libxl_create.c:1323:do_domain_create: ao 0x2615d160: inprogress: poller=0x26152c60, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x2615d160: destroy
xc: debug: hypercall buffer: total allocations:106 total releases:106
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:98 misses:4 toobig:4
Parsing config from config

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
	charset=us-ascii



Any ideas?

Cheers,

Baozi
--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Apple-Mail=_C0CDC659-0355-46D8-B846-1C567D325274--


From xen-devel-bounces@lists.xen.org Sun Jan 05 11:09:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 11:09:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzlZj-0000Sw-29; Sun, 05 Jan 2014 11:08:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1VzlZh-0000S0-QA
	for xen-devel@lists.xensource.com; Sun, 05 Jan 2014 11:08:58 +0000
Received: from [85.158.139.211:12396] by server-5.bemta-5.messagelabs.com id
	49/33-14928-94D39C25; Sun, 05 Jan 2014 11:08:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1388920134!7919386!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28186 invoked from network); 5 Jan 2014 11:08:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 11:08:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,607,1384300800"; d="scan'208";a="89862941"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 11:08:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 5 Jan 2014 06:08:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1VzlZc-0000dC-Tb;
	Sun, 05 Jan 2014 11:08:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1VzlZc-0005vk-Pl;
	Sun, 05 Jan 2014 11:08:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24146-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 5 Jan 2014 11:08:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24146: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24146 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24146/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24041
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24041

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sun Jan 05 11:50:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 11:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzmDo-00027m-Hq; Sun, 05 Jan 2014 11:50:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1VzmDn-00027h-8A
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 11:50:23 +0000
Received: from [85.158.137.68:61966] by server-9.bemta-3.messagelabs.com id
	BA/A1-13104-EF649C25; Sun, 05 Jan 2014 11:50:22 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388922619!7293738!1
X-Originating-IP: [209.85.192.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29473 invoked from network); 5 Jan 2014 11:50:21 -0000
Received: from mail-pd0-f179.google.com (HELO mail-pd0-f179.google.com)
	(209.85.192.179)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 11:50:21 -0000
Received: by mail-pd0-f179.google.com with SMTP id r10so16969788pdi.24
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 03:50:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date
	:content-transfer-encoding:message-id:references:to;
	bh=Yp8sr24HqC7ysfBeTOiXsjlQvHHiWkiOvL0Xpv8zRBc=;
	b=TEvBiXnl/ipZISO1HhJpYxcAk87+QZMS5LpolT6B3l3jMI9W8S5VdGiB4uBs/e80Pj
	AX/JEDmzVxXqPZw22BCYw92Kdi83q4LiT8q05uOTGFwcBgPuN8qd+cLVxAYBlg78pDgY
	+aRGiLbGrQmA2JzPVu952qRRgj/Y8nVC6nMppfpEqN46gNXw+E3v/pFXIJ6lg8Ab8hpV
	Dr180gf4/z0UY3YfUob9ySr2YIcyLUxWFhwNrECjUYcNRs4VyzWYQebV6YALLf1GhhDl
	25Eq2T7nUThG/faFrOpXBJwfwORY2Urdq8QkVnAAG0fUFc7/n9c45TRxS/qY1P6dLN/b
	SDqg==
X-Received: by 10.68.190.103 with SMTP id gp7mr114543409pbc.74.1388922619335; 
	Sun, 05 Jan 2014 03:50:19 -0800 (PST)
Received: from [192.168.1.101] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	pa1sm159158176pac.17.2014.01.05.03.50.15
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 03:50:18 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
Date: Sun, 5 Jan 2014 19:50:04 +0800
Message-Id: <ED51122C-F7F8-4688-8038-DE288C04A098@gmail.com>
References: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
To: List Developer Xen <xen-devel@lists.xen.org>
X-Mailer: Apple Mail (2.1827)
Subject: Re: [Xen-devel] Trigger kernel bug when creating domU on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 5, 2014, at 16:21, Chen Baozi <baozich@gmail.com> wrote:

> Hi all,
> 
> Recently, I've been trying to continue my work of mini-os on arm64. After some
> early-day hacks in summer, there is a basic framework that could pass the build.
> If everything goes well, it is supposed to be at least able to print
> "Bootstrapping..." info in start_kernel. However, it seems there is still
> something wrong when creating the domain with xl.

It seems to be something wrong with my dom0 privcmd driver. After I added some
printk calls to debug and rebuilt the dom0 kernel, it seems to be all right
(even after I removed those debugging printks)...

> When alloc_magic_pages is called, the dom0 kernel
> would generate a bug oops. I tried replacing the domU kernel with Linux. It
> would also trigger the kernel bug oops.
> 
> The logs from mini-os and linux are both attached below:
> 
> <log.dmesg-linux><log.dmesg-minios><log.xl-linux><log.xl-minios>
> 
> Any ideas?
> 
> Cheers,
> 
> Baozi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 16:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 16:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzqsj-0004NQ-SY; Sun, 05 Jan 2014 16:48:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1Vzqsi-0004NL-2v
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 16:48:56 +0000
Received: from [85.158.137.68:40198] by server-14.bemta-3.messagelabs.com id
	B3/9D-06105-7FC89C25; Sun, 05 Jan 2014 16:48:55 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1388940532!7313969!1
X-Originating-IP: [209.85.128.44]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15886 invoked from network); 5 Jan 2014 16:48:53 -0000
Received: from mail-qe0-f44.google.com (HELO mail-qe0-f44.google.com)
	(209.85.128.44)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 16:48:53 -0000
Received: by mail-qe0-f44.google.com with SMTP id nd7so17397634qeb.17
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 08:48:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=IaGtYxoFgXs9SpesQykFKlt8yxAbQgLoA9IJJepqABI=;
	b=miYYCn9Wqxzc6SyvGVpbZ503sp9ZeHlDLVVo9yuTS7ak4S+sC3iQJJWY+I5kkuI8c1
	RSCgnEf/J2Hi8xBkPvqETkXc0wG2IiJfJ5rhkWZ2TytaZGQF8QMB9/K5y4H+/5US65//
	RZcVEhmiFr7JvzA3S0ItJ/pi6NKAmFsOYz3xYLw7+nlwAav2/I2Br+DprZaWWoNLt8Tj
	nUBQCq7Tqw0NyBcfViTnHm1TGFZRb9dWPMYFViSbPe4rL+RoZzrM9PyA9VeM00mklYHc
	MKHXr2BzHuvaOGtEmLcReEFk8FmbJBI0xifSQjPOFqQD64sTSPlTFccArBV1nLU4bYxx
	EjBg==
MIME-Version: 1.0
X-Received: by 10.224.167.15 with SMTP id o15mr169351516qay.96.1388940532273; 
	Sun, 05 Jan 2014 08:48:52 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sun, 5 Jan 2014 08:48:52 -0800 (PST)
In-Reply-To: <52B05913.2080300@linaro.org>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
Date: Sun, 5 Jan 2014 16:48:52 +0000
Message-ID: <CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8068755970627559267=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8068755970627559267==
Content-Type: multipart/alternative; boundary=089e01294f640a57c304ef3beba6

--089e01294f640a57c304ef3beba6
Content-Type: text/plain; charset=ISO-8859-1

Hi Peter,

If you still can't boot with more than 128M of memory, you can apply this
patch as a quick workaround.

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index faff88e..849df3f 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -22,7 +22,7 @@
 static unsigned int __initdata opt_dom0_max_vcpus;
 integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);

-static int dom0_11_mapping = 1;
+static int dom0_11_mapping = 0;

 #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
 static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;


It's failing because none of the zones has a contiguous memory block with
an order bigger than 15 (128M). I think this is due to the alignment of
phys_start with the buddy system on the Cubieboard; I'll look further and let
you know if there's a cleaner approach to fix that.

It used to work before because the 1:1 mapping (dom0_11_mapping) wasn't forced
to "true" for all platforms; there was a quirk exposed by the platform that
used to express that. I think Julien removed that quirk and defaulted to the
1:1 mapping in commit "71952bfcbe9187765cf4010b1479af86def4fb1f".



Regards.




On Tue, Dec 17, 2013 at 2:00 PM, Julien Grall <julien.grall@linaro.org>wrote:

> On 12/17/2013 09:25 AM, peter wrote:
> > Well I managed to boot the current master
> > Version: d96392361cd05a66b385f0153e398128b196e480 (xen: arm: correct
> > return value of raw_copy_{to/from}_guest_*, raw_clear_guest).
>
> This commit introduced a bug which prevents booting a guest (the -14
> in your log). You should at least have commit 3a5767a "xen/arm: Fix
> regression after commit d963923".
>
> --
> Julien Grall
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>



-- 
Karim Allah Ahmed.
LinkedIn <http://eg.linkedin.com/pub/karim-allah-ahmed/13/829/550/>

--089e01294f640a57c304ef3beba6--


--===============8068755970627559267==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8068755970627559267==--



From xen-devel-bounces@lists.xen.org Sun Jan 05 17:19:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzrM3-0005NR-3W; Sun, 05 Jan 2014 17:19:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzrM1-0005NM-0Y
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:19:13 +0000
Received: from [85.158.139.211:16163] by server-11.bemta-5.messagelabs.com id
	60/18-23268-01499C25; Sun, 05 Jan 2014 17:19:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388942349!7732462!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21844 invoked from network); 5 Jan 2014 17:19:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:19:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,607,1384300800"; d="scan'208";a="87710243"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:19:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:19:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzrLw-00074c-Hv;
	Sun, 05 Jan 2014 17:19:08 +0000
Date: Sun, 5 Jan 2014 17:18:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20140103164800.00ef581c@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1401051718020.8667@kaball.uk.xensource.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-10-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181827280.8667@kaball.uk.xensource.com>
	<20131218211739.GD11717@phenom.dumpdata.com>
	<20140103164800.00ef581c@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	jbeulich@suse.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:19:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzrM3-0005NR-3W; Sun, 05 Jan 2014 17:19:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzrM1-0005NM-0Y
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:19:13 +0000
Received: from [85.158.139.211:16163] by server-11.bemta-5.messagelabs.com id
	60/18-23268-01499C25; Sun, 05 Jan 2014 17:19:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388942349!7732462!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21844 invoked from network); 5 Jan 2014 17:19:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:19:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,607,1384300800"; d="scan'208";a="87710243"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:19:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:19:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzrLw-00074c-Hv;
	Sun, 05 Jan 2014 17:19:08 +0000
Date: Sun, 5 Jan 2014 17:18:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mukesh Rathor <mukesh.rathor@oracle.com>
In-Reply-To: <20140103164800.00ef581c@mantra.us.oracle.com>
Message-ID: <alpine.DEB.2.02.1401051718020.8667@kaball.uk.xensource.com>
References: <1387313503-31362-1-git-send-email-konrad.wilk@oracle.com>
	<1387313503-31362-10-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1312181827280.8667@kaball.uk.xensource.com>
	<20131218211739.GD11717@phenom.dumpdata.com>
	<20140103164800.00ef581c@mantra.us.oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	jbeulich@suse.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v11 09/12] xen/pvh: Piggyback on PVHVM
 XenBus and event channels for PVH.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Mukesh Rathor wrote:
> On Wed, 18 Dec 2013 16:17:39 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Wed, Dec 18, 2013 at 06:31:43PM +0000, Stefano Stabellini wrote:
> > > On Tue, 17 Dec 2013, Konrad Rzeszutek Wilk wrote:
> > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > 
> > > > PVH is a PV guest with a twist - there are certain things
> > > > that work in it like HVM and some like PV. There is
> > > > a similar mode - PVHVM where we run in HVM mode with
> > > > PV code enabled - and this patch explores that.
> > > > 
> > > > The most notable PV interfaces are the XenBus and event channels.
> > > > For PVH, we will use XenBus and event channels.
> > > > 
> > > > For the XenBus mechanism we piggyback on how it is done for
> > > > PVHVM guests.
> > > > 
> > > > Ditto for the event channel mechanism - we piggyback on PVHVM -
> > > > by setting up a specific vector callback and that
> > > > vector ends up calling the event channel mechanism to
> > > > dispatch the events as needed.
> > > > 
> > > > This means that from a pvops perspective, we can use
> > > > native_irq_ops instead of the Xen PV-specific ones. In the
> > > > future we could also support pirq_eoi_map, but that is
> > > > a feature request that can be shared with PVHVM.
> > > > 
> > > > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > ---
> > > >  arch/x86/xen/enlighten.c           | 6 ++++++
> > > >  arch/x86/xen/irq.c                 | 5 ++++-
> > > >  drivers/xen/events.c               | 5 +++++
> > > >  drivers/xen/xenbus/xenbus_client.c | 3 ++-
> > > >  4 files changed, 17 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > > > index e420613..7fceb51 100644
> > > > --- a/arch/x86/xen/enlighten.c
> > > > +++ b/arch/x86/xen/enlighten.c
> > > > @@ -1134,6 +1134,8 @@ void xen_setup_shared_info(void)
> > > >  	/* In UP this is as good a place as any to set up shared
> > > > info */ xen_setup_vcpu_info_placement();
> > > >  #endif
> > > > +	if (xen_pvh_domain())
> > > > +		return;
> > > >  
> > > >  	xen_setup_mfn_list_list();
> > > >  }
> > > 
> > > This is another one of those cases where I think we would benefit
> > > from introducing xen_setup_shared_info_pvh instead of adding more
> > > ifs here.
> > 
> > Actually this one can be removed.
> > 
> > > 
> > > 
> > > > @@ -1146,6 +1148,10 @@ void xen_setup_vcpu_info_placement(void)
> > > >  	for_each_possible_cpu(cpu)
> > > >  		xen_vcpu_setup(cpu);
> > > >  
> > > > +	/* PVH always uses native IRQ ops */
> > > > +	if (xen_pvh_domain())
> > > > +		return;
> > > > +
> > > >  	/* xen_vcpu_setup managed to place the vcpu_info within
> > > > the percpu area for all cpus, so make use of it */
> > > >  	if (have_vcpu_info_placement) {
> > > 
> > > Same here?
> > 
> > Hmmm, I wonder if the vcpu info placement could work with PVH.
> 
> It should now (after a patch I sent a while ago)... the comment implies
> that PVH uses native IRQs even in the case of vcpu info placement...
> 
> perhaps it would be more clear to do:
> 
>         for_each_possible_cpu(cpu)
>                 xen_vcpu_setup(cpu);
>         /* PVH always uses native IRQ ops */
>         if (have_vcpu_info_placement && !xen_pvh_domain()) {
>             pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
>             .........

Yeah, this looks better
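A self-contained sketch of the restructuring suggested above, with the missing parentheses added on xen_pvh_domain(). The kernel symbols (have_vcpu_info_placement, xen_vcpu_setup, the pv_irq_ops assignments) are replaced by stubs here, so this only illustrates the intended control flow, not the real pvops plumbing:

```c
#include <assert.h>

/* Stubs standing in for the kernel symbols; illustration only. */
static int have_vcpu_info_placement = 1;
static int pvh_mode;                       /* toggled by the caller */
static int xen_pvh_domain(void) { return pvh_mode; }

static int direct_irq_ops_installed;       /* records which path ran */

static void xen_setup_vcpu_info_placement(void)
{
	/* for_each_possible_cpu(cpu) xen_vcpu_setup(cpu); -- elided */

	/* PVH always uses native IRQ ops, even with vcpu info placement */
	if (have_vcpu_info_placement && !xen_pvh_domain()) {
		/* pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(...); etc. */
		direct_irq_ops_installed = 1;
	}
}
```

With this shape there is a single early-return-free path: a PV guest with vcpu info placement installs the direct IRQ ops, while a PVH guest falls through and keeps the native ones.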

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:41:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzrh0-0006HV-FV; Sun, 05 Jan 2014 17:40:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vzrgy-0006HQ-JA
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 17:40:52 +0000
Received: from [85.158.139.211:3237] by server-14.bemta-5.messagelabs.com id
	5E/06-24200-32999C25; Sun, 05 Jan 2014 17:40:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1388943649!7912066!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9926 invoked from network); 5 Jan 2014 17:40:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:40:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,607,1384300800"; d="scan'208";a="89897632"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 17:40:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:40:42 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vzrgn-0007KP-Rt;
	Sun, 05 Jan 2014 17:40:41 +0000
Date: Sun, 5 Jan 2014 17:39:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
In-Reply-To: <CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-907729068-1388943592=:8667"
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-907729068-1388943592=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> Hi Peter,
> 
> If you still can't boot with any memory bigger than 128M, you can apply
> this patch as a quick workaround.
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index faff88e..849df3f 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -22,7 +22,7 @@
>  static unsigned int __initdata opt_dom0_max_vcpus;
>  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>  
> -static int dom0_11_mapping = 1;
> +static int dom0_11_mapping = 0;
>  
>  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
>  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
> 
> 
> It's failing because none of the zones has a contiguous memory block with
> an order bigger than 15 (128M). I think this is due to the alignment of
> phys_start with the buddy system on the cubieboard; I'll look further and
> let you know if there's a cleaner approach to fix that.
> 
> It used to work before because the 1:1 mapping wasn't forced to "true" for
> all platforms, and there was a quirk exposed by the platform to express
> that. I think Julien removed that quirk and defaulted to the 1:1 mapping
> in commit 71952bfcbe9187765cf4010b1479af86def4fb1f.

Unfortunately dom0_11_mapping is needed if at least one device driver
for the Allwinner uses DMA. For example, if you disable dom0_11_mapping,
can you still access the network? On the other hand, if none of the
device drivers use DMA, we can set dom0_11_mapping to false for this
platform.

In any case I think that we would benefit from making dom0_11_mapping
user configurable from the command line.
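Xen already has a boolean_param() helper for registering exactly this kind of boot-time switch, so a hypothetical patch would amount to one extra line next to the variable; the option name "dom0_11_mapping" is an assumption, not an existing option. Since that macro cannot run outside the hypervisor, the sketch below models the effect with a tiny stand-in parser:

```c
#include <assert.h>
#include <string.h>
#include <stdbool.h>

/* In Xen this would simply be:
 *     boolean_param("dom0_11_mapping", dom0_11_mapping);
 * The stand-in below models that behaviour so it can be exercised
 * outside the hypervisor. */
static bool dom0_11_mapping = true;   /* current hard-coded default */

static void parse_xen_cmdline(const char *cmdline)
{
	/* Accept "dom0_11_mapping=0" or "dom0_11_mapping=1" anywhere on
	 * the line; any other content leaves the default untouched. */
	const char *p = strstr(cmdline, "dom0_11_mapping=");
	if (p)
		dom0_11_mapping = (p[strlen("dom0_11_mapping=")] != '0');
}
```

A user hitting the cubieboard allocation failure could then boot with dom0_11_mapping=0 instead of recompiling the hypervisor.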
--1342847746-907729068-1388943592=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-907729068-1388943592=:8667--


From xen-devel-bounces@lists.xen.org Sun Jan 05 17:51:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzrqg-0006Wa-Rw; Sun, 05 Jan 2014 17:50:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vzrqf-0006WV-NV
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:50:54 +0000
Received: from [193.109.254.147:55314] by server-9.bemta-14.messagelabs.com id
	CD/BF-13957-D7B99C25; Sun, 05 Jan 2014 17:50:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1388944249!7434509!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5032 invoked from network); 5 Jan 2014 17:50:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:50:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87712753"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:50:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:50:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzrqZ-0007Ri-P3;
	Sun, 05 Jan 2014 17:50:47 +0000
Date: Sun, 5 Jan 2014 17:49:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-4-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051749510.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-4-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 03/19] xen/pvh: Early bootup changes in
 PV code (v4).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> We don't use the filtering that 'xen_cpuid' is doing
> because the hypervisor treats 'XEN_EMULATE_PREFIX' as
> an invalid instruction. This means that all of the filtering
> will have to be done in the hypervisor/toolstack.
> 
> Without the filtering we expose to the guest:
> 
>  - the cpu topology (sockets, cores, etc.);
>  - APERF (which the generic scheduler likes to use); see
>    5e626254206a709c6e937f3dda69bf26c7344f6f
>    "xen/setup: filter APERFMPERF cpuid feature out";
>  - the inability to figure out whether MWAIT_LEAF should be exposed
>    or not; see df88b2d96e36d9a9e325bfcd12eb45671cbbc937
>    "xen/enlighten: Disable MWAIT_LEAF so that acpi-pad won't be loaded.";
>  - x2apic; see 4ea9b9aca90cfc71e6872ed3522356755162932c
>    "xen: mask x2APIC feature in PV".
> 
> We also check for vector callback early on, as it is a required
> feature. PVH also runs at default kernel IOPL.
> 
> Finally, pure PV settings are moved to a separate function that is
> only called for pure PV, i.e., PV with pvmmu. They are also #ifdef'd
> with CONFIG_XEN_PVMMU.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/enlighten.c | 48 ++++++++++++++++++++++++++++++++++--------------
>  arch/x86/xen/setup.c     | 18 ++++++++++++------
>  2 files changed, 46 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index fa6ade7..eb0efc2 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -46,6 +46,7 @@
>  #include <xen/hvm.h>
>  #include <xen/hvc-console.h>
>  #include <xen/acpi.h>
> +#include <xen/features.h>
>  
>  #include <asm/paravirt.h>
>  #include <asm/apic.h>
> @@ -262,8 +263,9 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	pr_info("Booting paravirtualized kernel %son %s\n",
> +		xen_feature(XENFEAT_auto_translated_physmap) ?
> +			"with PVH extensions " : "", pv_info.name);
>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> @@ -433,7 +435,7 @@ static void __init xen_init_cpuid_mask(void)
>  
>  	ax = 1;
>  	cx = 0;
> -	xen_cpuid(&ax, &bx, &cx, &dx);
> +	cpuid(1, &ax, &bx, &cx, &dx);
>  
>  	xsave_mask =
>  		(1 << (X86_FEATURE_XSAVE % 32)) |
> @@ -1420,6 +1422,19 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_early_guest_init(void)
> +{
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +#ifdef CONFIG_X86_32
> +	BUG(); /* PVH: Implement proper support. */
> +#endif
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1431,13 +1446,16 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
> +	xen_pvh_early_guest_init();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (!xen_pvh_domain())
> +		pv_cpu_ops = xen_cpu_ops;
>  
>  	x86_init.resources.memory_setup = xen_memory_setup;
>  	x86_init.oem.arch_setup = xen_arch_setup;
> @@ -1469,8 +1487,6 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* Work out if we support NX */
>  	x86_configure_nx();
>  
> -	xen_setup_features();
> -
>  	/* Get mfn list */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		xen_build_dynamic_phys_to_machine();
> @@ -1548,14 +1564,18 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* set the limit of our address space */
>  	xen_reserve_top();
>  
> -	/* We used to do this in xen_arch_setup, but that is too late on AMD
> -	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
> -	 * which pokes 0xcf8 port.
> -	 */
> -	set_iopl.iopl = 1;
> -	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
> -	if (rc != 0)
> -		xen_raw_printk("physdev_op failed %d\n", rc);
> +	/* PVH: runs at default kernel iopl of 0 */
> +	if (!xen_pvh_domain()) {
> +		/*
> +		 * We used to do this in xen_arch_setup, but that is too late
> +		 * on AMD were early_cpu_init (run before ->arch_setup()) calls
> +		 * early_amd_init which pokes 0xcf8 port.
> +		 */
> +		set_iopl.iopl = 1;
> +		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
> +		if (rc != 0)
> +			xen_raw_printk("physdev_op failed %d\n", rc);
> +	}
>  
>  #ifdef CONFIG_X86_32
>  	/* set up basic CPUID stuff */
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 68c054f..2137c51 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -563,16 +563,13 @@ void xen_enable_nmi(void)
>  		BUG();
>  #endif
>  }
> -void __init xen_arch_setup(void)
> +void __init xen_pvmmu_arch_setup(void)
>  {
> -	xen_panic_handler_init();
> -
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
>  
> -	if (!xen_feature(XENFEAT_auto_translated_physmap))
> -		HYPERVISOR_vm_assist(VMASST_CMD_enable,
> -				     VMASST_TYPE_pae_extended_cr3);
> +	HYPERVISOR_vm_assist(VMASST_CMD_enable,
> +			     VMASST_TYPE_pae_extended_cr3);
>  
>  	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
>  	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
> @@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
>  	xen_enable_sysenter();
>  	xen_enable_syscall();
>  	xen_enable_nmi();
> +}
> +
> +/* This function is not called for HVM domains */
> +void __init xen_arch_setup(void)
> +{
> +	xen_panic_handler_init();
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		xen_pvmmu_arch_setup();
> +
>  #ifdef CONFIG_ACPI
>  	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
>  		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
> -- 
> 1.8.3.1
> 

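The setup.c half of the patch above boils down to one gate: the common setup always runs, while the vm_assist and callback registration is skipped for auto-translated (PVH) guests. A stub sketch of that split, where the feature flag and both hooks are stand-ins rather than the real hypercall interfaces:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the Xen feature flag and the two setup stages. */
static bool auto_translated_physmap;      /* true for a PVH guest */
static int panic_handler_ready;
static int pvmmu_setup_done;

static void xen_pvmmu_arch_setup(void)
{
	/* vm_assist calls plus event/failsafe callback registration,
	 * needed only when running with the PV MMU */
	pvmmu_setup_done = 1;
}

/* Mirrors the patch: xen_arch_setup() keeps the common pieces and
 * invokes the pv-MMU-only part conditionally. */
static void xen_arch_setup(void)
{
	panic_handler_ready = 1;          /* xen_panic_handler_init() */
	if (!auto_translated_physmap)
		xen_pvmmu_arch_setup();
}
```

Splitting the function this way keeps the PVH path free of PV-MMU hypercalls without scattering feature checks through the body.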
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> We don't use the filtering that 'xen_cpuid' is doing
> because the hypervisor treats 'XEN_EMULATE_PREFIX' as
> an invalid instruction. This means that all of the filtering
> will have to be done in the hypervisor/toolstack.
> 
> Without the filtering we expose to the guest:
> 
>  - the CPU topology (sockets, cores, etc.);
>  - APERF (which the generic scheduler likes to
>    use), see 5e626254206a709c6e937f3dda69bf26c7344f6f
>    "xen/setup: filter APERFMPERF cpuid feature out"
>  - MWAIT_LEAF, since we can no longer decide in the
>    guest whether it should be exposed or not. See
>    df88b2d96e36d9a9e325bfcd12eb45671cbbc937
>    "xen/enlighten: Disable MWAIT_LEAF so that acpi-pad won't be loaded."
>  - x2apic, see 4ea9b9aca90cfc71e6872ed3522356755162932c
>    "xen: mask x2APIC feature in PV"
> 
> We also check for vector callback early on, as it is a required
> feature. PVH also runs at default kernel IOPL.
> 
> Finally, pure PV settings are moved to a separate function that is
> only called for pure PV, i.e. PV with pvmmu. They are also #ifdef'd
> with CONFIG_XEN_PVMMU.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
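[Editor's note: a minimal, self-contained C sketch of the boot-path split this patch introduces. The stubbed `xen_feature()` and the flag array are illustrative stand-ins, not the kernel's real implementation; only the control flow mirrors `xen_pvh_early_guest_init()` from the diff below.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the Xen feature flags; in the kernel these
 * are populated by xen_setup_features() and queried via xen_feature(). */
enum { XENFEAT_auto_translated_physmap, XENFEAT_hvm_callback_vector, NR_FEATS };
static bool feats[NR_FEATS];
static bool xen_feature(int f) { return feats[f]; }

static int xen_have_vector_callback;

/* Mirrors xen_pvh_early_guest_init(): a pure PV guest (no auto-translated
 * physmap) returns early; a PVH guest additionally records whether the
 * required HVM callback vector is available. */
static void pvh_early_guest_init(void)
{
	if (!xen_feature(XENFEAT_auto_translated_physmap))
		return;

	if (xen_feature(XENFEAT_hvm_callback_vector))
		xen_have_vector_callback = 1;
}
```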


>  arch/x86/xen/enlighten.c | 48 ++++++++++++++++++++++++++++++++++--------------
>  arch/x86/xen/setup.c     | 18 ++++++++++++------
>  2 files changed, 46 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index fa6ade7..eb0efc2 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -46,6 +46,7 @@
>  #include <xen/hvm.h>
>  #include <xen/hvc-console.h>
>  #include <xen/acpi.h>
> +#include <xen/features.h>
>  
>  #include <asm/paravirt.h>
>  #include <asm/apic.h>
> @@ -262,8 +263,9 @@ static void __init xen_banner(void)
>  	struct xen_extraversion extra;
>  	HYPERVISOR_xen_version(XENVER_extraversion, &extra);
>  
> -	printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
> -	       pv_info.name);
> +	pr_info("Booting paravirtualized kernel %son %s\n",
> +		xen_feature(XENFEAT_auto_translated_physmap) ?
> +			"with PVH extensions " : "", pv_info.name);
>  	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
>  	       version >> 16, version & 0xffff, extra.extraversion,
>  	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> @@ -433,7 +435,7 @@ static void __init xen_init_cpuid_mask(void)
>  
>  	ax = 1;
>  	cx = 0;
> -	xen_cpuid(&ax, &bx, &cx, &dx);
> +	cpuid(1, &ax, &bx, &cx, &dx);
>  
>  	xsave_mask =
>  		(1 << (X86_FEATURE_XSAVE % 32)) |
> @@ -1420,6 +1422,19 @@ static void __init xen_setup_stackprotector(void)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +static void __init xen_pvh_early_guest_init(void)
> +{
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
> +		xen_have_vector_callback = 1;
> +
> +#ifdef CONFIG_X86_32
> +	BUG(); /* PVH: Implement proper support. */
> +#endif
> +}
> +
>  /* First C function to be called on Xen boot */
>  asmlinkage void __init xen_start_kernel(void)
>  {
> @@ -1431,13 +1446,16 @@ asmlinkage void __init xen_start_kernel(void)
>  
>  	xen_domain_type = XEN_PV_DOMAIN;
>  
> +	xen_setup_features();
> +	xen_pvh_early_guest_init();
>  	xen_setup_machphys_mapping();
>  
>  	/* Install Xen paravirt ops */
>  	pv_info = xen_info;
>  	pv_init_ops = xen_init_ops;
> -	pv_cpu_ops = xen_cpu_ops;
>  	pv_apic_ops = xen_apic_ops;
> +	if (!xen_pvh_domain())
> +		pv_cpu_ops = xen_cpu_ops;
>  
>  	x86_init.resources.memory_setup = xen_memory_setup;
>  	x86_init.oem.arch_setup = xen_arch_setup;
> @@ -1469,8 +1487,6 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* Work out if we support NX */
>  	x86_configure_nx();
>  
> -	xen_setup_features();
> -
>  	/* Get mfn list */
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		xen_build_dynamic_phys_to_machine();
> @@ -1548,14 +1564,18 @@ asmlinkage void __init xen_start_kernel(void)
>  	/* set the limit of our address space */
>  	xen_reserve_top();
>  
> -	/* We used to do this in xen_arch_setup, but that is too late on AMD
> -	 * were early_cpu_init (run before ->arch_setup()) calls early_amd_init
> -	 * which pokes 0xcf8 port.
> -	 */
> -	set_iopl.iopl = 1;
> -	rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
> -	if (rc != 0)
> -		xen_raw_printk("physdev_op failed %d\n", rc);
> +	/* PVH: runs at default kernel iopl of 0 */
> +	if (!xen_pvh_domain()) {
> +		/*
> +		 * We used to do this in xen_arch_setup, but that is too late
> +		 * on AMD where early_cpu_init (run before ->arch_setup()) calls
> +		 * early_amd_init which pokes 0xcf8 port.
> +		 */
> +		set_iopl.iopl = 1;
> +		rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
> +		if (rc != 0)
> +			xen_raw_printk("physdev_op failed %d\n", rc);
> +	}
>  
>  #ifdef CONFIG_X86_32
>  	/* set up basic CPUID stuff */
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 68c054f..2137c51 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -563,16 +563,13 @@ void xen_enable_nmi(void)
>  		BUG();
>  #endif
>  }
> -void __init xen_arch_setup(void)
> +void __init xen_pvmmu_arch_setup(void)
>  {
> -	xen_panic_handler_init();
> -
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
>  	HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_writable_pagetables);
>  
> -	if (!xen_feature(XENFEAT_auto_translated_physmap))
> -		HYPERVISOR_vm_assist(VMASST_CMD_enable,
> -				     VMASST_TYPE_pae_extended_cr3);
> +	HYPERVISOR_vm_assist(VMASST_CMD_enable,
> +			     VMASST_TYPE_pae_extended_cr3);
>  
>  	if (register_callback(CALLBACKTYPE_event, xen_hypervisor_callback) ||
>  	    register_callback(CALLBACKTYPE_failsafe, xen_failsafe_callback))
> @@ -581,6 +578,15 @@ void __init xen_arch_setup(void)
>  	xen_enable_sysenter();
>  	xen_enable_syscall();
>  	xen_enable_nmi();
> +}
> +
> +/* This function is not called for HVM domains */
> +void __init xen_arch_setup(void)
> +{
> +	xen_panic_handler_init();
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
> +		xen_pvmmu_arch_setup();
> +
>  #ifdef CONFIG_ACPI
>  	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
>  		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:52:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzrsZ-0006cH-7U; Sun, 05 Jan 2014 17:52:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzrsY-0006cA-Do
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:52:50 +0000
Received: from [85.158.137.68:32939] by server-3.bemta-3.messagelabs.com id
	48/FA-10658-1FB99C25; Sun, 05 Jan 2014 17:52:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1388944367!7320626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11219 invoked from network); 5 Jan 2014 17:52:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:52:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="89898632"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 17:52:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:52:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzrsT-0007TM-SQ;
	Sun, 05 Jan 2014 17:52:45 +0000
Date: Sun, 5 Jan 2014 17:51:56 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-6-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051751160.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-6-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 05/19] xen/mmu/p2m: Refactor the
 xen_pagetable_init code (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> The revectoring and copying of the P2M only happens when
> !auto-xlat and on 64-bit builds. It is not obvious from
> the code, so let's have separate 32- and 64-bit functions.
> 
> We also invert the check for auto-xlat to make the code
> flow simpler.
> 
> Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
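[Editor's note: the inverted auto-xlat check the commit message describes is the classic guard-clause refactor. The sketch below is illustrative only; the variable names and the work counter are stand-ins, and the real `xen_pagetable_p2m_copy()` does the revector/copy work shown in the diff.]

```c
#include <assert.h>
#include <stdbool.h>

static bool auto_translated_physmap;  /* stand-in for xen_feature(...) */
static int p2m_copies_done;           /* counts the copy work, for testing */

/* Guard-clause form used by the patch: invert the auto-xlat check and
 * return early, instead of nesting the whole body inside an if and
 * jumping over it with a 'goto skip' label. */
static void pagetable_p2m_copy(void)
{
	if (auto_translated_physmap)
		return;               /* auto-xlat guests skip the copy */

	p2m_copies_done++;            /* ...revector and copy the P2M here... */
}
```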


>  arch/x86/xen/mmu.c | 70 +++++++++++++++++++++++++++++-------------------------
>  1 file changed, 37 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..c140eff 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1198,44 +1198,40 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
>  	 * instead of somewhere later and be confusing. */
>  	xen_mc_flush();
>  }
> -#endif
> -static void __init xen_pagetable_init(void)
> +static void __init xen_pagetable_p2m_copy(void)
>  {
> -#ifdef CONFIG_X86_64
>  	unsigned long size;
>  	unsigned long addr;
> -#endif
> -	paging_init();
> -	xen_setup_shared_info();
> -#ifdef CONFIG_X86_64
> -	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		unsigned long new_mfn_list;
> +	unsigned long new_mfn_list;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return;
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +
> +	/* On 32-bit, we get zero so this never gets executed. */
> +	new_mfn_list = xen_revector_p2m_tree();
> +	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> +		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> +		memset((void *)xen_start_info->mfn_list, 0xff, size);
> +
> +		/* We should be in __ka space. */
> +		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> +		addr = xen_start_info->mfn_list;
> +		/* We roundup to the PMD, which means that if anybody at this stage is
> +		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> +		 * they are going to crash. Fortunately we have already revectored
> +		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> +		size = roundup(size, PMD_SIZE);
> +		xen_cleanhighmap(addr, addr + size);
>  
>  		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +		memblock_free(__pa(xen_start_info->mfn_list), size);
> +		/* And revector! Bye bye old array */
> +		xen_start_info->mfn_list = new_mfn_list;
> +	} else
> +		return;
>  
> -		/* On 32-bit, we get zero so this never gets executed. */
> -		new_mfn_list = xen_revector_p2m_tree();
> -		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> -			/* using __ka address and sticking INVALID_P2M_ENTRY! */
> -			memset((void *)xen_start_info->mfn_list, 0xff, size);
> -
> -			/* We should be in __ka space. */
> -			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> -			addr = xen_start_info->mfn_list;
> -			/* We roundup to the PMD, which means that if anybody at this stage is
> -			 * using the __ka address of xen_start_info or xen_start_info->shared_info
> -			 * they are in going to crash. Fortunatly we have already revectored
> -			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> -			size = roundup(size, PMD_SIZE);
> -			xen_cleanhighmap(addr, addr + size);
> -
> -			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> -			memblock_free(__pa(xen_start_info->mfn_list), size);
> -			/* And revector! Bye bye old array */
> -			xen_start_info->mfn_list = new_mfn_list;
> -		} else
> -			goto skip;
> -	}
>  	/* At this stage, cleanup_highmap has already cleaned __ka space
>  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
>  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> @@ -1255,7 +1251,15 @@ static void __init xen_pagetable_init(void)
>  	 * anything at this stage. */
>  	xen_cleanhighmap(MODULES_VADDR, roundup(MODULES_VADDR, PUD_SIZE) - 1);
>  #endif
> -skip:
> +}
> +#endif
> +
> +static void __init xen_pagetable_init(void)
> +{
> +	paging_init();
> +	xen_setup_shared_info();
> +#ifdef CONFIG_X86_64
> +	xen_pagetable_p2m_copy();
>  #endif
>  	xen_post_allocator_init();
>  }
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:55:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzrvC-0006zh-0k; Sun, 05 Jan 2014 17:55:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzrvA-0006zZ-08
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:55:32 +0000
Received: from [85.158.139.211:38435] by server-12.bemta-5.messagelabs.com id
	B2/FC-30017-39C99C25; Sun, 05 Jan 2014 17:55:31 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388944529!7735306!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17179 invoked from network); 5 Jan 2014 17:55:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:55:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87713138"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:55:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:55:28 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vzrv6-0007Vj-51;
	Sun, 05 Jan 2014 17:55:28 +0000
Date: Sun, 5 Jan 2014 17:54:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-19-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051754280.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-19-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 18/19] xen/pvh: Piggyback on PVHVM
	XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - certain things in it
> work like HVM and others like PV. For the XenBus
> mechanism we want to use the PVHVM mechanism.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
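[Editor's note: a small illustrative C sketch of the one-line decision this patch changes. The string return values and the flag variable are stand-ins for the kernel's `ring_ops` structures and `xen_feature()`; the point is that the PV/HVM split is keyed off the auto-translated physmap, so PVH and PVHVM both take the HVM path.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for xen_feature(XENFEAT_auto_translated_physmap). */
static bool auto_translated_physmap;

/* Mirrors xenbus_ring_ops_init() after the patch: only guests without
 * an auto-translated physmap (classic PV) need the PV ring mapping;
 * everyone else (PVH, PVHVM) uses the HVM ring ops. */
static const char *ring_ops_select(void)
{
	return auto_translated_physmap ? "hvm" : "pv";
}
```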

>  drivers/xen/xenbus/xenbus_client.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index ec097d6..01d59e6 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -45,6 +45,7 @@
>  #include <xen/grant_table.h>
>  #include <xen/xenbus.h>
>  #include <xen/xen.h>
> +#include <xen/features.h>
>  
>  #include "xenbus_probe.h"
>  
> @@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
>  
>  void __init xenbus_ring_ops_init(void)
>  {
> -	if (xen_pv_domain())
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		ring_ops = &ring_ops_pv;
>  	else
>  		ring_ops = &ring_ops_hvm;
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:55:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzrvC-0006zh-0k; Sun, 05 Jan 2014 17:55:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzrvA-0006zZ-08
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:55:32 +0000
Received: from [85.158.139.211:38435] by server-12.bemta-5.messagelabs.com id
	B2/FC-30017-39C99C25; Sun, 05 Jan 2014 17:55:31 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1388944529!7735306!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17179 invoked from network); 5 Jan 2014 17:55:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:55:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87713138"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:55:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:55:28 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vzrv6-0007Vj-51;
	Sun, 05 Jan 2014 17:55:28 +0000
Date: Sun, 5 Jan 2014 17:54:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-19-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051754280.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-19-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 18/19] xen/pvh: Piggyback on PVHVM
	XenBus.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - certain things in it
> work like HVM and others like PV. For the XenBus
> mechanism we want to use the PVHVM mechanism.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

>  drivers/xen/xenbus/xenbus_client.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index ec097d6..01d59e6 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -45,6 +45,7 @@
>  #include <xen/grant_table.h>
>  #include <xen/xenbus.h>
>  #include <xen/xen.h>
> +#include <xen/features.h>
>  
>  #include "xenbus_probe.h"
>  
> @@ -743,7 +744,7 @@ static const struct xenbus_ring_ops ring_ops_hvm = {
>  
>  void __init xenbus_ring_ops_init(void)
>  {
> -	if (xen_pv_domain())
> +	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		ring_ops = &ring_ops_pv;
>  	else
>  		ring_ops = &ring_ops_hvm;
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 17:57:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 17:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzrwl-000784-HS; Sun, 05 Jan 2014 17:57:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1Vzrwk-00077s-H7
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 17:57:10 +0000
Received: from [85.158.137.68:46735] by server-5.bemta-3.messagelabs.com id
	A3/0C-25188-5FC99C25; Sun, 05 Jan 2014 17:57:09 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388944627!7290568!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12949 invoked from network); 5 Jan 2014 17:57:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 17:57:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87713268"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 17:57:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 12:57:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1Vzrwg-0007XO-G5;
	Sun, 05 Jan 2014 17:57:06 +0000
Date: Sun, 5 Jan 2014 17:56:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-7-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051756090.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-7-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 06/19] xen/mmu: Cleanup
 xen_pagetable_p2m_copy a bit.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> Stefano noticed that the code runs only under 64-bit so
> the comments about 32-bit are pointless.
> 
> Also we change the condition for xen_revector_p2m_tree
> returning the same value (because it could not allocate
> a swath of space to put the new P2M in) or it had been
> called once already. In such a case we return early
> from the function.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/mmu.c | 40 ++++++++++++++++++++--------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index c140eff..9d74249 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1209,29 +1209,29 @@ static void __init xen_pagetable_p2m_copy(void)
>  
>  	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
>  
> -	/* On 32-bit, we get zero so this never gets executed. */
>  	new_mfn_list = xen_revector_p2m_tree();
> -	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> -		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> -		memset((void *)xen_start_info->mfn_list, 0xff, size);
> -
> -		/* We should be in __ka space. */
> -		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> -		addr = xen_start_info->mfn_list;
> -		/* We roundup to the PMD, which means that if anybody at this stage is
> -		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> -		 * they are in going to crash. Fortunatly we have already revectored
> -		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> -		size = roundup(size, PMD_SIZE);
> -		xen_cleanhighmap(addr, addr + size);
> -
> -		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> -		memblock_free(__pa(xen_start_info->mfn_list), size);
> -		/* And revector! Bye bye old array */
> -		xen_start_info->mfn_list = new_mfn_list;
> -	} else
> +	/* No memory or already called. */
> +	if (!new_mfn_list || new_mfn_list == xen_start_info->mfn_list)
>  		return;
>  
> +	/* using __ka address and sticking INVALID_P2M_ENTRY! */
> +	memset((void *)xen_start_info->mfn_list, 0xff, size);
> +
> +	/* We should be in __ka space. */
> +	BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> +	addr = xen_start_info->mfn_list;
> +	/* We roundup to the PMD, which means that if anybody at this stage is
> +	 * using the __ka address of xen_start_info or xen_start_info->shared_info
> +	 * they are in going to crash. Fortunatly we have already revectored
> +	 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> +	size = roundup(size, PMD_SIZE);
> +	xen_cleanhighmap(addr, addr + size);
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +	memblock_free(__pa(xen_start_info->mfn_list), size);
> +	/* And revector! Bye bye old array */
> +	xen_start_info->mfn_list = new_mfn_list;
> +
>  	/* At this stage, cleanup_highmap has already cleaned __ka space
>  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
>  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> -- 
> 1.8.3.1
> 


From xen-devel-bounces@lists.xen.org Sun Jan 05 18:12:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsBe-0007z1-D5; Sun, 05 Jan 2014 18:12:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsBc-0007yw-NV
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:12:32 +0000
Received: from [193.109.254.147:39442] by server-10.bemta-14.messagelabs.com
	id 65/86-20752-090A9C25; Sun, 05 Jan 2014 18:12:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1388945550!8886467!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 801 invoked from network); 5 Jan 2014 18:12:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:12:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87714873"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 18:12:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:12:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsBZ-0007iF-5B;
	Sun, 05 Jan 2014 18:12:29 +0000
Date: Sun, 5 Jan 2014 18:11:39 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead
	of native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> We also optimize one operation - the TLB flush. The native
> operation would needlessly IPI offline VCPUs, causing extra
> wakeups. Using the Xen one avoids that and lets the hypervisor
> determine which VCPUs need the TLB flush.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/mmu.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 490ddb3..c1d406f 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
>  void __init xen_init_mmu_ops(void)
>  {
>  	x86_init.paging.pagetable_init = xen_pagetable_init;
> +
> +	/* Optimization - we can use the HVM one but it has no idea which
> +	 * VCPUs are descheduled - which means that it will needlessly IPI
> +	 * them. Xen knows so let it do the job.
> +	 */
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +		return;
> +	}
>  	pv_mmu_ops = xen_mmu_ops;
>  
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);

Regarding this patch, the next one and the other changes to
xen_setup_shared_info, xen_setup_mfn_list_list,
xen_setup_vcpu_info_placement, etc: considering that the mmu related
stuff is very different between PV and PVH guests, I wonder if it makes
any sense to keep calling xen_init_mmu_ops on PVH.

I would introduce a new function, xen_init_pvh_mmu_ops, that sets
pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
under a new xen_pvh_pagetable_init.
Just to give you an idea, not even compile tested:



diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 23ead29..4e53fa3 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1117,15 +1117,12 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 
 void xen_setup_shared_info(void)
 {
-	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		set_fixmap(FIX_PARAVIRT_BOOTMAP,
-			   xen_start_info->shared_info);
+	BUG_ON(xen_feature(XENFEAT_auto_translated_physmap));
+	set_fixmap(FIX_PARAVIRT_BOOTMAP,
+			xen_start_info->shared_info);
 
-		HYPERVISOR_shared_info =
-			(struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
-	} else
-		HYPERVISOR_shared_info =
-			(struct shared_info *)__va(xen_start_info->shared_info);
+	HYPERVISOR_shared_info =
+		(struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
 
 #ifndef CONFIG_SMP
 	/* In UP this is as good a place as any to set up shared info */
@@ -1467,7 +1464,10 @@ asmlinkage void __init xen_start_kernel(void)
 	 * Set up some pagetable state before starting to set any ptes.
 	 */
 
-	xen_init_mmu_ops();
+	if (xen_pvh_domain())
+		xen_init_pvh_mmu_ops();
+	else
+		xen_init_mmu_ops();
 
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 490ddb3..04405bc 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1254,6 +1254,15 @@ static void __init xen_pagetable_p2m_copy(void)
 }
 #endif
 
+static void __init xen_pvh_pagetable_init(void)
+{
+	paging_init();
+	HYPERVISOR_shared_info =
+		(struct shared_info *)__va(xen_start_info->shared_info);
+	for_each_possible_cpu(cpu)
+		xen_vcpu_setup(cpu);
+}
+
 static void __init xen_pagetable_init(void)
 {
 	paging_init();
@@ -2219,6 +2228,20 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.set_fixmap = xen_set_fixmap,
 };
 
+void __init xen_init_pvh_mmu_ops(void)
+{
+	x86_init.paging.pagetable_init = xen_pvh_pagetable_init;
+
+	/* Optimization - we can use the HVM one but it has no idea which
+	 * VCPUs are descheduled - which means that it will needlessly IPI
+	 * them. Xen knows so let it do the job.
+	 */
+	if (xen_feature(XENFEAT_auto_translated_physmap)) {
+		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+		return;
+	}
+}
+
 void __init xen_init_mmu_ops(void)
 {
 	x86_init.paging.pagetable_init = xen_pagetable_init;


From xen-devel-bounces@lists.xen.org Sun Jan 05 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsFD-00086J-3E; Sun, 05 Jan 2014 18:16:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsFB-00086D-Ia
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:16:13 +0000
Received: from [85.158.143.35:23803] by server-2.bemta-4.messagelabs.com id
	26/0B-11386-C61A9C25; Sun, 05 Jan 2014 18:16:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1388945770!8465935!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9872 invoked from network); 5 Jan 2014 18:16:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:16:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87715236"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 18:16:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:16:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsF8-0007lJ-00;
	Sun, 05 Jan 2014 18:16:10 +0000
Date: Sun, 5 Jan 2014 18:15:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-14-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051815110.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-14-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 13/19] xen/pvh: Piggyback on PVHVM for
 event channels (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> PVH is a PV guest with a twist - certain things in it work
> like HVM and others like PV. There is a similar mode - PVHVM,
> where we run in HVM mode with PV code enabled - and this
> patch builds on that.
> 
> The most notable PV interfaces are the XenBus and event channels.
> 
> We piggyback on how the event channel mechanism is used in
> PVHVM - that is, we want the normal native IRQ mechanism, and
> we install a vector (the HVM callback) through which we
> invoke the event channel mechanism.
> 
> This means that from a pvops perspective, we can use
> native_irq_ops instead of the Xen PV-specific ones. In the
> future we could also support pirq_eoi_map, but that is a
> feature request that can be shared with PVHVM.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/enlighten.c |  5 +++--
>  arch/x86/xen/irq.c       |  5 ++++-
>  drivers/xen/events.c     | 14 +++++++++-----
>  3 files changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index fde62c4..628099a 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1144,8 +1144,9 @@ void xen_setup_vcpu_info_placement(void)
>  		xen_vcpu_setup(cpu);
>  
>  	/* xen_vcpu_setup managed to place the vcpu_info within the
> -	   percpu area for all cpus, so make use of it */
> -	if (have_vcpu_info_placement) {
> +	 * percpu area for all cpus, so make use of it. Note that for
> +	 * PVH we want to use the native IRQ mechanism. */
> +	if (have_vcpu_info_placement && !xen_pvh_domain()) {
>  		pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
>  		pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
>  		pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 0da7f86..76ca326 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -5,6 +5,7 @@
>  #include <xen/interface/xen.h>
>  #include <xen/interface/sched.h>
>  #include <xen/interface/vcpu.h>
> +#include <xen/features.h>
>  #include <xen/events.h>
>  
>  #include <asm/xen/hypercall.h>
> @@ -128,6 +129,8 @@ static const struct pv_irq_ops xen_irq_ops __initconst = {
>  
>  void __init xen_init_irq_ops(void)
>  {
> -	pv_irq_ops = xen_irq_ops;
> +	/* For PVH we use default pv_irq_ops settings. */
> +	if (!xen_feature(XENFEAT_hvm_callback_vector))
> +		pv_irq_ops = xen_irq_ops;
>  	x86_init.irqs.intr_init = xen_init_IRQ;
>  }
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 4035e83..783b972 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -1908,8 +1908,15 @@ void __init xen_init_IRQ(void)
>  	pirq_needs_eoi = pirq_needs_eoi_flag;
>  
>  #ifdef CONFIG_X86
> -	if (xen_hvm_domain()) {
> +	if (xen_pv_domain()) {
> +		irq_ctx_init(smp_processor_id());
> +		if (xen_initial_domain())
> +			pci_xen_initial_domain();
> +	}
> +	if (xen_feature(XENFEAT_hvm_callback_vector))
>  		xen_callback_vector();
> +
> +	if (xen_hvm_domain()) {
>  		native_init_IRQ();
>  		/* pci_xen_hvm_init must be called after native_init_IRQ so that
>  		 * __acpi_register_gsi can point at the right function */
> @@ -1918,13 +1925,10 @@ void __init xen_init_IRQ(void)
>  		int rc;
>  		struct physdev_pirq_eoi_gmfn eoi_gmfn;
>  
> -		irq_ctx_init(smp_processor_id());
> -		if (xen_initial_domain())
> -			pci_xen_initial_domain();
> -
>  		pirq_eoi_map = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
>  		eoi_gmfn.gmfn = virt_to_mfn(pirq_eoi_map);
>  		rc = HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &eoi_gmfn);
> +		/* TODO: No PVH support for PIRQ EOI */
>  		if (rc != 0) {
>  			free_page((unsigned long) pirq_eoi_map);
>  			pirq_eoi_map = NULL;
> -- 
> 1.8.3.1
> 
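
The restructured xen_init_IRQ() in the patch tests each capability
independently instead of folding everything into one if/else chain, so a PVH
guest (a PV domain that also has the HVM callback vector) takes both the PV
context setup and the HVM-style callback path. A toy model of that dispatch,
with illustrative names rather than the real kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Capability flags a domain may report. */
struct domain {
	bool pv;                   /* xen_pv_domain() */
	bool hvm;                  /* xen_hvm_domain() */
	bool hvm_callback_vector;  /* XENFEAT_hvm_callback_vector */
};

/* Record which init steps run, in order, as a semicolon-separated log. */
static void init_irq(const struct domain *d, char *log, size_t len)
{
	log[0] = '\0';
	if (d->pv)
		strncat(log, "irq_ctx;", len - strlen(log) - 1);  /* irq_ctx_init() etc. */
	if (d->hvm_callback_vector)
		strncat(log, "callback;", len - strlen(log) - 1); /* xen_callback_vector() */
	if (d->hvm)
		strncat(log, "native;", len - strlen(log) - 1);   /* native_init_IRQ() */
}
```

With independent checks, a PVH guest gets "irq_ctx;callback;" while PVHVM
gets "callback;native;" - neither combination was reachable with the old
single if/else.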

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:17:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsGC-0008Bw-Ie; Sun, 05 Jan 2014 18:17:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsGB-0008Bf-5n
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:17:15 +0000
Received: from [85.158.143.35:62901] by server-2.bemta-4.messagelabs.com id
	B4/4B-11386-AA1A9C25; Sun, 05 Jan 2014 18:17:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388945832!9653266!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5743 invoked from network); 5 Jan 2014 18:17:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:17:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87715281"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 18:17:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:17:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsG7-0007mH-Gb;
	Sun, 05 Jan 2014 18:17:11 +0000
Date: Sun, 5 Jan 2014 18:16:21 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-15-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051816120.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-15-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 14/19] xen/grants: Remove
 gnttab_max_grant_frames dependency on gnttab_init.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> The function gnttab_max_grant_frames() returns the maximum number
> of frames (pages) of grants we can have. Unfortunately it was
> dependent on gnttab_init() having been run first to initialize
> the boot max value (boot_max_nr_grant_frames).
> 
> This meant that users of gnttab_max_grant_frames() would always
> get a zero value if they called it before gnttab_init() - such as
> 'platform_pci_init' (drivers/xen/platform-pci.c).
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  drivers/xen/grant-table.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..99399cb 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -62,7 +62,6 @@
>  
>  static grant_ref_t **gnttab_list;
>  static unsigned int nr_grant_frames;
> -static unsigned int boot_max_nr_grant_frames;
>  static int gnttab_free_count;
>  static grant_ref_t gnttab_free_head;
>  static DEFINE_SPINLOCK(gnttab_list_lock);
> @@ -827,6 +826,11 @@ static unsigned int __max_nr_grant_frames(void)
>  unsigned int gnttab_max_grant_frames(void)
>  {
>  	unsigned int xen_max = __max_nr_grant_frames();
> +	static unsigned int boot_max_nr_grant_frames;
> +
> +	/* First time, initialize it properly. */
> +	if (!boot_max_nr_grant_frames)
> +		boot_max_nr_grant_frames = __max_nr_grant_frames();
>  
>  	if (xen_max > boot_max_nr_grant_frames)
>  		return boot_max_nr_grant_frames;
> @@ -1227,13 +1231,12 @@ int gnttab_init(void)
>  
>  	gnttab_request_version();
>  	nr_grant_frames = 1;
> -	boot_max_nr_grant_frames = __max_nr_grant_frames();
>  
>  	/* Determine the maximum number of frames required for the
>  	 * grant reference free list on the current hypervisor.
>  	 */
>  	BUG_ON(grefs_per_grant_frame == 0);
> -	max_nr_glist_frames = (boot_max_nr_grant_frames *
> +	max_nr_glist_frames = (gnttab_max_grant_frames() *
>  			       grefs_per_grant_frame / RPP);
>  
>  	gnttab_list = kmalloc(max_nr_glist_frames * sizeof(grant_ref_t *),
> -- 
> 1.8.3.1
> 
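
The fix above is a lazy first-use initialization: the boot-time cap is
captured inside the accessor itself instead of in gnttab_init(), so callers
that run before init still see a sane value. A minimal sketch, where
query_max_frames() is an illustrative stand-in for __max_nr_grant_frames():

```c
#include <assert.h>

/* Pretend hypervisor limit; mutable so the test can vary it. */
static unsigned int fake_hypervisor_max = 32;

static unsigned int query_max_frames(void)
{
	return fake_hypervisor_max;
}

/* Mirrors the patched gnttab_max_grant_frames(): the static is zero
 * until the first call, when it latches the current limit. */
static unsigned int max_grant_frames(void)
{
	unsigned int hv_max = query_max_frames();
	static unsigned int boot_max; /* zero until first call */

	if (!boot_max)
		boot_max = query_max_frames();

	/* Never exceed the limit observed at first use. */
	return hv_max > boot_max ? boot_max : hv_max;
}
```

The accessor now has no ordering dependency on any init function, at the
cost of latching whatever limit happens to be in effect at first call.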

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:19:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsHq-00009y-4E; Sun, 05 Jan 2014 18:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsHo-00009s-OA
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:18:56 +0000
Received: from [85.158.137.68:57390] by server-13.bemta-3.messagelabs.com id
	77/2F-28603-012A9C25; Sun, 05 Jan 2014 18:18:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1388945933!7357063!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31358 invoked from network); 5 Jan 2014 18:18:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:18:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87715388"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 18:18:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:18:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsHk-0007mw-OL;
	Sun, 05 Jan 2014 18:18:52 +0000
Date: Sun, 5 Jan 2014 18:18:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-16-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051817150.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-16-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 15/19] xen/grant-table: Refactor
	gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> We have this odd scenario where for the PV path we take a shortcut,
> but for the HVM path we first ioremap xen_hvm_resume_frames and then
> assign it to gnttab_shared.addr. This is needed because gnttab_map
> uses gnttab_shared.addr.
> 
> Instead of having:
> 	if (pv)
> 		return gnttab_map
> 	if (hvm)
> 		...
> 
> 	gnttab_map
> 
> Let's move the HVM part before the gnttab_map call and remove the
> first call to gnttab_map.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

As I wrote in my reply to the previous version of the patch, you can
have my Acked-by, except for the spurious code-style fix mixed in with
the other changes.


>  drivers/xen/grant-table.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 99399cb..cc1b4fa 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
>  	if (max_nr_gframes < nr_grant_frames)
>  		return -ENOSYS;
>  
> -	if (xen_pv_domain())
> -		return gnttab_map(0, nr_grant_frames - 1);
> -
> -	if (gnttab_shared.addr == NULL) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
> +	{
>  		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> -						PAGE_SIZE * max_nr_gframes);
> +					       PAGE_SIZE * max_nr_gframes);
>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
>  					xen_hvm_resume_frames);
>  			return -ENOMEM;
>  		}
>  	}
> -
> -	gnttab_map(0, nr_grant_frames - 1);
> -
> -	return 0;
> +	return gnttab_map(0, nr_grant_frames - 1);
>  }
>  
>  int gnttab_resume(void)
> -- 
> 1.8.3.1
> 
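
The control-flow change in gnttab_setup() above - do the HVM-only mapping
first when needed, then let both PV and HVM fall through to one final
gnttab_map() call - can be sketched like this. All names here are
illustrative stand-ins, not the real kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Count map calls so we can see there is exactly one per setup(). */
static int map_calls;

static int do_map(void)             /* stand-in for gnttab_map() */
{
	map_calls++;
	return 0;
}

static void *remap_frames(void)     /* stand-in for xen_remap() */
{
	static char frames[16];     /* pretend remapped grant frames */
	return frames;
}

static int setup(bool auto_translated, void **shared_addr)
{
	/* HVM/auto-translated path: establish the mapping first. */
	if (auto_translated && *shared_addr == NULL) {
		*shared_addr = remap_frames();
		if (*shared_addr == NULL)
			return -1;  /* -ENOMEM in the real code */
	}
	return do_map();            /* single exit for both paths */
}
```

With one tail call to do_map(), the PV shortcut and the duplicated trailing
map call both disappear, which is exactly what the diff does.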

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 15/19] xen/grant-table: Refactor
	gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> We have this odd scenario where for the PV path we take a shortcut,
> but for the HVM path we first ioremap xen_hvm_resume_frames and then
> assign it to gnttab_shared.addr. This is needed because gnttab_map
> uses gnttab_shared.addr.
> 
> Instead of having:
> 	if (pv)
> 		return gnttab_map
> 	if (hvm)
> 		...
> 
> 	gnttab_map
> 
> Let's move the HVM part before the gnttab_map and remove the
> first call to gnttab_map.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

As I wrote in my reply to the previous version of the patch, you can
have my acked-by, except for the spurious code-style fix mixed up with
the other changes.


>  drivers/xen/grant-table.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 99399cb..cc1b4fa 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
>  	if (max_nr_gframes < nr_grant_frames)
>  		return -ENOSYS;
>  
> -	if (xen_pv_domain())
> -		return gnttab_map(0, nr_grant_frames - 1);
> -
> -	if (gnttab_shared.addr == NULL) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
> +	{
>  		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> -						PAGE_SIZE * max_nr_gframes);
> +					       PAGE_SIZE * max_nr_gframes);
>  		if (gnttab_shared.addr == NULL) {
>  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
>  					xen_hvm_resume_frames);
>  			return -ENOMEM;
>  		}
>  	}
> -
> -	gnttab_map(0, nr_grant_frames - 1);
> -
> -	return 0;
> +	return gnttab_map(0, nr_grant_frames - 1);
>  }
>  
>  int gnttab_resume(void)
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:21:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsJy-0000Lu-Ms; Sun, 05 Jan 2014 18:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsJw-0000Lp-Ra
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:21:09 +0000
Received: from [85.158.143.35:9816] by server-3.bemta-4.messagelabs.com id
	4B/25-32360-492A9C25; Sun, 05 Jan 2014 18:21:08 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388946065!9733498!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28667 invoked from network); 5 Jan 2014 18:21:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:21:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="89900913"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 18:21:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:21:05 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsJt-0007oZ-09;
	Sun, 05 Jan 2014 18:21:05 +0000
Date: Sun, 5 Jan 2014 18:20:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-18-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051819590.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-18-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 17/19] xen/pvh: Piggyback on PVHVM for
 grant driver (v4)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> In PVH the shared grant frame is a PFN and not an MFN,
> hence it is mapped via the same code path as HVM.
> 
> The allocation of the grant frame is done differently - we
> do not use the early platform-pci driver and have an
> ioremap area - instead we use balloon memory and stitch
> all of the non-contiguous pages into a virtualized area.
> 
> That means when we call the hypervisor to replace the GMFN
> with a XENMAPSPACE_grant_table type, we need to look up the
> old PFN for every iteration instead of assuming a flat,
> contiguous PFN allocation.
> 
> Lastly, we only use v1 for grants. This is because PVHVM
> is not able to use v2 due to no XENMEM_add_to_physmap
> calls on the error status page (see commit
> 69e8f430e243d657c2053f097efebc2e2cd559f0
>  xen/granttable: Disable grant v2 for HVM domains.)
> 
> Until that is implemented this workaround has to
> be in place.
> 
> Also per suggestions by Stefano utilize the PVHVM paths
> as they share common functionality.
> 
> v2 of this patch moves most of the PVH code out in the
> arch/x86/xen/grant-table driver and touches only minimally
> the generic driver.
> 
> v3, v4: fixes some of the code due to earlier patches.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/grant-table.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/xen/gntdev.c       |  2 +-
>  drivers/xen/grant-table.c  |  9 ++++---
>  3 files changed, 68 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> index 3a5f55d..2d71979 100644
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
> @@ -125,3 +125,65 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
>  	apply_to_page_range(&init_mm, (unsigned long)shared,
>  			    PAGE_SIZE * nr_gframes, unmap_pte_fn, NULL);
>  }
> +#ifdef CONFIG_XEN_PVH
> +#include <xen/balloon.h>
> +#include <xen/events.h>
> +#include <linux/slab.h>
> +static int __init xlated_setup_gnttab_pages(void)
> +{
> +	struct page **pages;
> +	xen_pfn_t *pfns;
> +	int rc;
> +	unsigned int i;
> +	unsigned long nr_grant_frames = gnttab_max_grant_frames();
> +
> +	BUG_ON(nr_grant_frames == 0);
> +	pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
> +	if (!pages)
> +		return -ENOMEM;
> +
> +	pfns = kcalloc(nr_grant_frames, sizeof(pfns[0]), GFP_KERNEL);
> +	if (!pfns) {
> +		kfree(pages);
> +		return -ENOMEM;
> +	}
> +	rc = alloc_xenballooned_pages(nr_grant_frames, pages, 0 /* lowmem */);
> +	if (rc) {
> +		pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		kfree(pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +	for (i = 0; i < nr_grant_frames; i++)
> +		pfns[i] = page_to_pfn(pages[i]);
> +
> +	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
> +				    &xen_auto_xlat_grant_frames.vaddr);
> +
> +	kfree(pages);
> +	if (rc) {
> +		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
> +			nr_grant_frames, rc);
> +		free_xenballooned_pages(nr_grant_frames, pages);
> +		kfree(pfns);
> +		return rc;
> +	}
> +
> +	xen_auto_xlat_grant_frames.pfn = pfns;
> +	xen_auto_xlat_grant_frames.count = nr_grant_frames;
> +
> +	return 0;
> +}
> +
> +static int __init xen_pvh_gnttab_setup(void)
> +{
> +	if (!xen_pvh_domain())
> +		return -ENODEV;
> +
> +	return xlated_setup_gnttab_pages();
> +}
> +/* Call it _before_ __gnttab_init as we need to initialize the
> + * xen_auto_xlat_grant_frames first. */
> +core_initcall(xen_pvh_gnttab_setup);
> +#endif
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..073b4a1 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -846,7 +846,7 @@ static int __init gntdev_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
>  
> -	use_ptemod = xen_pv_domain();
> +	use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>  
>  	err = misc_register(&gntdev_miscdev);
>  	if (err != 0) {
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 6c78fd21..3d04c1c 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -1108,7 +1108,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  	unsigned int nr_gframes = end_idx + 1;
>  	int rc;
>  
> -	if (xen_hvm_domain()) {
> +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> @@ -1184,7 +1184,7 @@ static void gnttab_request_version(void)
>  	int rc;
>  	struct gnttab_set_version gsv;
>  
> -	if (xen_hvm_domain())
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		gsv.version = 1;
>  	else
>  		gsv.version = 2;
> @@ -1328,5 +1328,6 @@ static int __gnttab_init(void)
>  
>  	return gnttab_init();
>  }
> -
> -core_initcall(__gnttab_init);
> +/* Starts after core_initcall so that xen_pvh_gnttab_setup can be called
> + * beforehand to initialize xen_auto_xlat_grant_frames. */
> +core_initcall_sync(__gnttab_init);
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsZs-0000qo-Jo; Sun, 05 Jan 2014 18:37:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1VzsZr-0000qj-HU
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:37:35 +0000
Received: from [85.158.139.211:61551] by server-2.bemta-5.messagelabs.com id
	0C/CC-29392-E66A9C25; Sun, 05 Jan 2014 18:37:34 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388947054!7917215!1
X-Originating-IP: [80.160.77.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3LjExNCA9PiAxNjYyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25946 invoked from network); 5 Jan 2014 18:37:34 -0000
Received: from pasmtpa.tele.dk (HELO pasmtpA.tele.dk) (80.160.77.114)
	by server-8.tower-206.messagelabs.com with SMTP;
	5 Jan 2014 18:37:34 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpA.tele.dk (Postfix) with ESMTP id 6CF4880000B;
	Sun,  5 Jan 2014 19:37:33 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Sun, 5 Jan 2014 19:36:23 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Sun, 5 Jan 2014 19:36:23 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: James Harper <james.harper@bendigoit.com.au>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer,	since it
	is not compiled
Thread-Index: AQHPCdViuyuSJetUF0eAsYrZ1uXS0pp2cbwA
Date: Sun, 5 Jan 2014 18:36:22 +0000
Message-ID: <d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > So I guess gplpv needs to incorporate that. Untested patch follows this
> > email.
> > I've previously read through the virtual storport changes and there
> > are a heap of them, so there may be other issues to resolve too.
> >
> 
> ... but of course that patch fails on earlier builds because NTDDI_WIN8 isn't
> defined. Try the patch following this email instead and let me know if it works
> and I'll commit it.

After applying the patch, compiling the win8 driver gives the following error and warning:
  1>xenvbd.c(463): error C2220: warning treated as error - no 'object' file generated
  1>xenvbd.c(463): warning C4152: nonstandard extension, function/data pointer conversion in expression

The line in question is the following:
  HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;

XenVbd_HwStorFindAdapter is the function whose declaration you corrected a few lines earlier in the patch. As this is a level 4 warning, it can be suppressed by setting /W3 in MSC_WARNING_LEVEL; however, I suspect it would be preferable to find the cause of the warning.

On a side note, lines 219 and 230 are duplicates:
  ConfigInfo->VirtualDevice = FALSE;

> 
> James
> 
> diff -r 85b99b9795a6 xenvbd_storport/xenvbd.c
> --- a/xenvbd_storport/xenvbd.c  Sat Jan 04 18:17:51 2014 +1100
> +++ b/xenvbd_storport/xenvbd.c  Sun Jan 05 16:15:48 2014 +1100
> @@ -185,6 +185,12 @@
>  XenVbd_HwStorFindAdapter(PVOID DeviceExtension, PVOID HwContext, PVOID BusInformation, PCHAR ArgumentString, PPORT_CONFIGURATION_INFORMATION ConfigInfo, PBOOLEAN Again) {
>    PXENVBD_DEVICE_DATA xvdd = (PXENVBD_DEVICE_DATA)DeviceExtension;
> +#if defined(NTDDI_WIN8) && (NTDDI_VERSION >= NTDDI_WIN8)
> +  PVOID dump_data = ConfigInfo->MiniportDumpData;
> +#else
> +  PVOID dump_data = ConfigInfo->Reserved;
> +#endif
> +
> 
>    UNREFERENCED_PARAMETER(HwContext);
>    UNREFERENCED_PARAMETER(BusInformation);
> @@ -195,14 +201,14 @@
>    FUNCTION_MSG("xvdd = %p\n", xvdd);
>    FUNCTION_MSG("ArgumentString = %s\n", ArgumentString);
> 
> -  memcpy(xvdd, ConfigInfo->Reserved, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
> +  memcpy(xvdd, dump_data, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
>    if (xvdd->device_state != DEVICE_STATE_ACTIVE) {
>      return SP_RETURN_ERROR;
>    }
>    /* restore hypercall_stubs into dump_xenpci */
>    XnSetHypercallStubs(xvdd->hypercall_stubs);
>    /* make sure original xvdd is set to DISCONNECTED or resume will not work */
> -  ((PXENVBD_DEVICE_DATA)ConfigInfo->Reserved)->device_state = DEVICE_STATE_DISCONNECTED;
> +  ((PXENVBD_DEVICE_DATA)dump_data)->device_state = DEVICE_STATE_DISCONNECTED;
>    InitializeListHead(&xvdd->srb_list);
>    xvdd->aligned_buffer_in_use = FALSE;
>    /* align the buffer to PAGE_SIZE */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsZs-0000qo-Jo; Sun, 05 Jan 2014 18:37:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1VzsZr-0000qj-HU
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:37:35 +0000
Received: from [85.158.139.211:61551] by server-2.bemta-5.messagelabs.com id
	0C/CC-29392-E66A9C25; Sun, 05 Jan 2014 18:37:34 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388947054!7917215!1
X-Originating-IP: [80.160.77.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3LjExNCA9PiAxNjYyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25946 invoked from network); 5 Jan 2014 18:37:34 -0000
Received: from pasmtpa.tele.dk (HELO pasmtpA.tele.dk) (80.160.77.114)
	by server-8.tower-206.messagelabs.com with SMTP;
	5 Jan 2014 18:37:34 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpA.tele.dk (Postfix) with ESMTP id 6CF4880000B;
	Sun,  5 Jan 2014 19:37:33 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Sun, 5 Jan 2014 19:36:23 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Sun, 5 Jan 2014 19:36:23 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: James Harper <james.harper@bendigoit.com.au>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer,	since it
	is not compiled
Thread-Index: AQHPCdViuyuSJetUF0eAsYrZ1uXS0pp2cbwA
Date: Sun, 5 Jan 2014 18:36:22 +0000
Message-ID: <d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > So I guess gplpv needs to incorporate that. Untested patch follows this
> email.
> > I've previously read through the virtual storport changes and there
> > are a heap of them, so there may be other issues to resolve too.
> >
> 
> ... but of course that patch fails on earlier builds because NTDDI_WIN8 isn't
> defined. Try the patch following this email instead and let me know if it works
> and I'll commit it.

After applying the patch, the Win8 driver compilation gives the following error and warning:
  1>xenvbd.c(463): error C2220: warning treated as error - no 'object' file generated
  1>xenvbd.c(463): warning C4152: nonstandard extension, function/data pointer conversion in expression

The line in question is the following:
  HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;

XenVbd_HwStorFindAdapter is the function assigned into the structure you corrected a few lines earlier in the patch. As this is a level 4 warning, it can be suppressed by setting /W3 in MSC_WARNING_LEVEL; however, I suspect it would be preferable to find the cause of the warning.

On a side note, lines 219 and 230 are duplicates:
  ConfigInfo->VirtualDevice = FALSE;

> 
> James
> 
> diff -r 85b99b9795a6 xenvbd_storport/xenvbd.c
> --- a/xenvbd_storport/xenvbd.c  Sat Jan 04 18:17:51 2014 +1100
> +++ b/xenvbd_storport/xenvbd.c  Sun Jan 05 16:15:48 2014 +1100
> @@ -185,6 +185,12 @@
>  XenVbd_HwStorFindAdapter(PVOID DeviceExtension, PVOID HwContext, PVOID BusInformation, PCHAR ArgumentString, PPORT_CONFIGURATION_INFORMATION ConfigInfo, PBOOLEAN Again)
>  {
>    PXENVBD_DEVICE_DATA xvdd = (PXENVBD_DEVICE_DATA)DeviceExtension;
> +#if defined(NTDDI_WIN8) && (NTDDI_VERSION >= NTDDI_WIN8)
> +  PVOID dump_data = ConfigInfo->MiniportDumpData;
> +#else
> +  PVOID dump_data = ConfigInfo->Reserved;
> +#endif
> +
> 
>    UNREFERENCED_PARAMETER(HwContext);
>    UNREFERENCED_PARAMETER(BusInformation);
> @@ -195,14 +201,14 @@
>    FUNCTION_MSG("xvdd = %p\n", xvdd);
>    FUNCTION_MSG("ArgumentString = %s\n", ArgumentString);
> 
> -  memcpy(xvdd, ConfigInfo->Reserved, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
> +  memcpy(xvdd, dump_data, FIELD_OFFSET(XENVBD_DEVICE_DATA, aligned_buffer_data));
>    if (xvdd->device_state != DEVICE_STATE_ACTIVE) {
>      return SP_RETURN_ERROR;
>    }
>    /* restore hypercall_stubs into dump_xenpci */
>    XnSetHypercallStubs(xvdd->hypercall_stubs);
>    /* make sure original xvdd is set to DISCONNECTED or resume will not work */
>   ((PXENVBD_DEVICE_DATA)dump_data)->device_state = DEVICE_STATE_DISCONNECTED;
>    InitializeListHead(&xvdd->srb_list);
>    xvdd->aligned_buffer_in_use = FALSE;
>    /* align the buffer to PAGE_SIZE */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:39:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:39:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzsbO-0001Du-3d; Sun, 05 Jan 2014 18:39:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VzsbM-0001Dm-IQ
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 18:39:09 +0000
Received: from [85.158.137.68:34101] by server-2.bemta-3.messagelabs.com id
	B0/3F-17329-BC6A9C25; Sun, 05 Jan 2014 18:39:07 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1388947145!6168534!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24755 invoked from network); 5 Jan 2014 18:39:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:39:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87717505"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 18:39:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 13:39:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VzsbH-00081Z-Fz;
	Sun, 05 Jan 2014 18:39:03 +0000
Date: Sun, 5 Jan 2014 18:38:13 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-17-git-send-email-konrad.wilk@oracle.com>
Message-ID: <alpine.DEB.2.02.1401051838010.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-17-git-send-email-konrad.wilk@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 16/19] xen/grant: Implement an grant
 frame array struct (v2).
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> The 'xen_hvm_resume_frames' variable used to be an 'unsigned long'
> containing the virtual address of the grants. That was OK for most
> architectures (PVHVM, ARM) where the grants are contiguous in memory.
> That, however, is not the case for PVH, where we would have to look up
> the PFN for each virtual address.
> 
> Instead of doing that, let's make it a structure which contains the
> array of PFNs, the virtual address, and the count of said PFNs.
> 
> Also provide generic functions, gnttab_setup_auto_xlat_frames and
> gnttab_free_auto_xlat_frames, to populate said structure with
> appropriate values for PVHVM and ARM.
> 
> To round it off, change the name from 'xen_hvm_resume_frames' to a
> more descriptive one: 'xen_auto_xlat_grant_frames'.
> 
> For PVH, the patch "xen/pvh: Piggyback on PVHVM for grant driver" will
> populate 'xen_auto_xlat_grant_frames' by itself.
> 
> v2 moves the xen_remap into gnttab_setup_auto_xlat_frames and also
> introduces xen_unmap for gnttab_free_auto_xlat_frames.
> 
> Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/arm/include/asm/xen/page.h |  1 +
>  arch/arm/xen/enlighten.c        |  9 +++++--
>  arch/x86/include/asm/xen/page.h |  1 +
>  drivers/xen/grant-table.c       | 58 ++++++++++++++++++++++++++++++++++++-----
>  drivers/xen/platform-pci.c      | 10 ++++---
>  include/xen/grant_table.h       |  9 ++++++-
>  6 files changed, 75 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index 75579a9d..5af8fb3 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -118,5 +118,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
>  }
>  
>  #define xen_remap(cookie, size) ioremap_cached((cookie), (size));
> +#define xen_unmap(cookie) iounmap((cookie))
>  
>  #endif /* _ASM_ARM_XEN_PAGE_H */
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 8550123..2162172 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -208,6 +208,7 @@ static int __init xen_guest_init(void)
>  	const char *version = NULL;
>  	const char *xen_prefix = "xen,xen-";
>  	struct resource res;
> +	unsigned long grant_frames;
>  
>  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
>  	if (!node) {
> @@ -224,10 +225,10 @@ static int __init xen_guest_init(void)
>  	}
>  	if (of_address_to_resource(node, GRANT_TABLE_PHYSADDR, &res))
>  		return 0;
> -	xen_hvm_resume_frames = res.start;
> +	grant_frames = res.start;
>  	xen_events_irq = irq_of_parse_and_map(node, 0);
>  	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
> -			version, xen_events_irq, (xen_hvm_resume_frames >> PAGE_SHIFT));
> +			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -265,6 +266,10 @@ static int __init xen_guest_init(void)
>  	if (xen_vcpu_info == NULL)
>  		return -ENOMEM;
>  
> +	if (gnttab_setup_auto_xlat_frames(grant_frames)) {
> +		free_percpu(xen_vcpu_info);
> +		return -ENOMEM;
> +	}
>  	gnttab_init();
>  	if (!xen_initial_domain())
>  		xenbus_probe(NULL);
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 4a092cc..3e276eb 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -227,5 +227,6 @@ void make_lowmem_page_readonly(void *vaddr);
>  void make_lowmem_page_readwrite(void *vaddr);
>  
>  #define xen_remap(cookie, size) ioremap((cookie), (size));
> +#define xen_unmap(cookie) iounmap((cookie))
>  
>  #endif /* _ASM_X86_XEN_PAGE_H */
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index cc1b4fa..6c78fd21 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -65,8 +65,7 @@ static unsigned int nr_grant_frames;
>  static int gnttab_free_count;
>  static grant_ref_t gnttab_free_head;
>  static DEFINE_SPINLOCK(gnttab_list_lock);
> -unsigned long xen_hvm_resume_frames;
> -EXPORT_SYMBOL_GPL(xen_hvm_resume_frames);
> +struct grant_frames xen_auto_xlat_grant_frames;
>  
>  static union {
>  	struct grant_entry_v1 *v1;
> @@ -838,6 +837,51 @@ unsigned int gnttab_max_grant_frames(void)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
>  
> +int gnttab_setup_auto_xlat_frames(unsigned long addr)
> +{
> +	xen_pfn_t *pfn;
> +	unsigned int max_nr_gframes = __max_nr_grant_frames();
> +	unsigned int i;
> +	void *vaddr;
> +
> +	if (xen_auto_xlat_grant_frames.count)
> +		return -EINVAL;
> +
> +	vaddr = xen_remap(addr, PAGE_SIZE * max_nr_gframes);
> +	if (vaddr == NULL) {
> +		pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
> +			addr);
> +		return -ENOMEM;
> +	}
> +	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
> +	if (!pfn) {
> +		xen_unmap(vaddr);
> +		return -ENOMEM;
> +	}
> +	for (i = 0; i < max_nr_gframes; i++)
> +		pfn[i] = PFN_DOWN(addr) + i;
> +
> +	xen_auto_xlat_grant_frames.vaddr = vaddr;
> +	xen_auto_xlat_grant_frames.pfn = pfn;
> +	xen_auto_xlat_grant_frames.count = max_nr_gframes;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(gnttab_setup_auto_xlat_frames);
> +
> +void gnttab_free_auto_xlat_frames(void)
> +{
> +	if (!xen_auto_xlat_grant_frames.count)
> +		return;
> +	kfree(xen_auto_xlat_grant_frames.pfn);
> +	xen_unmap(xen_auto_xlat_grant_frames.vaddr);
> +
> +	xen_auto_xlat_grant_frames.pfn = NULL;
> +	xen_auto_xlat_grant_frames.count = 0;
> +	xen_auto_xlat_grant_frames.vaddr = NULL;
> +}
> +EXPORT_SYMBOL_GPL(gnttab_free_auto_xlat_frames);
> +
>  /* Handling of paged out grant targets (GNTST_eagain) */
>  #define MAX_DELAY 256
>  static inline void
> @@ -1068,6 +1112,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  		struct xen_add_to_physmap xatp;
>  		unsigned int i = end_idx;
>  		rc = 0;
> +		BUG_ON(xen_auto_xlat_grant_frames.count < nr_gframes);
>  		/*
>  		 * Loop backwards, so that the first hypercall has the largest
>  		 * index, ensuring that the table will grow only once.
> @@ -1076,7 +1121,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  			xatp.domid = DOMID_SELF;
>  			xatp.idx = i;
>  			xatp.space = XENMAPSPACE_grant_table;
> -			xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
> +			xatp.gpfn = xen_auto_xlat_grant_frames.pfn[i];
>  			rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
>  			if (rc != 0) {
>  				pr_warn("grant table add_to_physmap failed, err=%d\n",
> @@ -1175,11 +1220,10 @@ static int gnttab_setup(void)
>  
>  	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
>  	{
> -		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> -					       PAGE_SIZE * max_nr_gframes);
> +		gnttab_shared.addr = xen_auto_xlat_grant_frames.vaddr;
>  		if (gnttab_shared.addr == NULL) {
> -			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
> -					xen_hvm_resume_frames);
> +			pr_warn("gnttab share frames (addr=0x%08lx) is not mapped!\n",
> +				(unsigned long)xen_auto_xlat_grant_frames.vaddr);
>  			return -ENOMEM;
>  		}
>  	}
> diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
> index 2f3528e..f1947ac 100644
> --- a/drivers/xen/platform-pci.c
> +++ b/drivers/xen/platform-pci.c
> @@ -108,6 +108,7 @@ static int platform_pci_init(struct pci_dev *pdev,
>  	long ioaddr;
>  	long mmio_addr, mmio_len;
>  	unsigned int max_nr_gframes;
> +	unsigned long grant_frames;
>  
>  	if (!xen_domain())
>  		return -ENODEV;
> @@ -154,13 +155,16 @@ static int platform_pci_init(struct pci_dev *pdev,
>  	}
>  
>  	max_nr_gframes = gnttab_max_grant_frames();
> -	xen_hvm_resume_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
> +	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
> +	if (gnttab_setup_auto_xlat_frames(grant_frames))
> +		goto out;
>  	ret = gnttab_init();
>  	if (ret)
> -		goto out;
> +		goto grant_out;
>  	xenbus_probe(NULL);
>  	return 0;
> -
> +grant_out:
> +	gnttab_free_auto_xlat_frames();
>  out:
>  	pci_release_region(pdev, 0);
>  mem_out:
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..5acb1e4 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -178,8 +178,15 @@ int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes,
>  			   grant_status_t **__shared);
>  void arch_gnttab_unmap(void *shared, unsigned long nr_gframes);
>  
> -extern unsigned long xen_hvm_resume_frames;
> +struct grant_frames {
> +	xen_pfn_t *pfn;
> +	unsigned int count;
> +	void *vaddr;
> +};
> +extern struct grant_frames xen_auto_xlat_grant_frames;
>  unsigned int gnttab_max_grant_frames(void);
> +int gnttab_setup_auto_xlat_frames(unsigned long addr);
> +void gnttab_free_auto_xlat_frames(void);
>  
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
> -- 
> 1.8.3.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 18:45:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 18:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzshk-0001QP-3L; Sun, 05 Jan 2014 18:45:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1Vzshh-0001QK-PL
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 18:45:42 +0000
Received: from [85.158.143.35:36209] by server-2.bemta-4.messagelabs.com id
	D6/72-11386-458A9C25; Sun, 05 Jan 2014 18:45:40 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1388947539!8468452!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2386 invoked from network); 5 Jan 2014 18:45:40 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 18:45:40 -0000
Received: by mail-wi0-f180.google.com with SMTP id hm19so2086190wib.7
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 10:45:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=lisUsTYh3eJh7FF4W2B1SK0V08Y3ell6SQmiU8xmEDg=;
	b=Bc5GCC1tF800ytRAfj5LoNwWbLNuo35RGbeR1jOXDYD3hCdR1DKU6j3l8cI5dmENQd
	OwYHrk9mVR+vtoCDWx0CtRPDpnpDfvvohof3ANX0aU/aLNX6Cm+UL6v6sq7jYnJpt21Z
	vMVSTVG1gZwxhCa9Ygj8mPKBW0TAW5sxr3rI3IdFzb/R6vv/aSvBP4cAOZdKrxAUBMbi
	WgfQI0kvI4XmtNwSbIodGiC5+Ww1Ojg+triqe1C6N3/unaIYAAv1eJLBXaPbdiqRTQKY
	6+d23+/fNBkknhJ4K39tX1dlQlzLAD+qB27/VU/F7d2+qJHahuSfGeI3TlINbVSp0Hfh
	Ft3Q==
X-Received: by 10.194.220.98 with SMTP id pv2mr128600wjc.74.1388947539804;
	Sun, 05 Jan 2014 10:45:39 -0800 (PST)
Received: from localhost.localdomain
	(cpc24-watf10-2-0-cust41.15-2.cable.virginm.net. [86.18.37.42])
	by mx.google.com with ESMTPSA id o4sm13901708wiy.2.2014.01.05.10.45.38
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 10:45:39 -0800 (PST)
From: Karim Raslan <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Date: Sun,  5 Jan 2014 18:45:31 +0000
Message-Id: <1388947531-26583-1-git-send-email-karim.allah.ahmed@gmail.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH] dump available order allocations in each zone
	while dumping heap information
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
---
 xen/common/page_alloc.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..5419b3f 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1673,7 +1673,7 @@ void scrub_one_page(struct page_info *pg)
 static void dump_heap(unsigned char key)
 {
     s_time_t      now = NOW();
-    int           i, j;
+    int           i, j, k;
 
     printk("'%c' pressed -> dumping heap info (now-0x%X:%08X)\n", key,
            (u32)(now>>32), (u32)now);
@@ -1683,8 +1683,17 @@ static void dump_heap(unsigned char key)
         if ( !avail[i] )
             continue;
         for ( j = 0; j < NR_ZONES; j++ )
+        {
             printk("heap[node=%d][zone=%d] -> %lu pages\n",
                    i, j, avail[i][j]);
+            if ( avail[i][j] ) {
+                printk("\t(In:\n");
+                for ( k = 0; k < MAX_ORDER; k++ )
+                    if ( !page_list_empty(&heap(i, j, k)) )
+                        printk("  \t[order=%d]\n", k);
+                printk(")\n");
+            }
+        }
     }
 }
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 19:22:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztGm-0002oB-T8; Sun, 05 Jan 2014 19:21:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1VztGl-0002o6-Bo
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:21:55 +0000
Received: from [85.158.139.211:52895] by server-8.bemta-5.messagelabs.com id
	98/70-29838-2D0B9C25; Sun, 05 Jan 2014 19:21:54 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1388949713!7955544!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10871 invoked from network); 5 Jan 2014 19:21:53 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:21:53 -0000
Received: by mail-wi0-f178.google.com with SMTP id bz8so2100551wib.5
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 11:21:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=lZlhx0qcdJKy6Bd6r05ufYNrZ//5DCLb/tt+TQSYTJw=;
	b=Wmkq9+3g2JDF5v7Q3qKVsLKZzw743wien3kL0uSbOLP0CKG2dv+Cc6yNPtstOGn7m4
	39TFPUb0O7Fqsndi2CVShXJ7tLAuWqsBe8hnCjaNreihGVP+Ej/mEnNzolkINGatse6T
	RR8Kz37kNdxK3ADmGvciRQr9b4lPvA4W9x5vWtWTUyILuO8iYxvGs3nBxEycyIzUDeTn
	30DIItPDLrwkfki9wqRK9CF45aGLVPWpwlmpdOpa8SM5tZJ5SRSMztEOM6h9CobRvCvH
	wTx4rIV2+ffSjj38CAc5RCrCgtYJsaEsxmexmBOJ5cOuA1j3qVlUDXZ7T/cmxbzMSgMN
	X/kg==
X-Received: by 10.180.108.42 with SMTP id hh10mr9673173wib.15.1388949713719;
	Sun, 05 Jan 2014 11:21:53 -0800 (PST)
Received: from localhost.localdomain
	(cpc24-watf10-2-0-cust41.15-2.cable.virginm.net. [86.18.37.42])
	by mx.google.com with ESMTPSA id gd5sm14053662wic.0.2014.01.05.11.21.50
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 11:21:51 -0800 (PST)
From: Karim Raslan <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Date: Sun,  5 Jan 2014 19:21:43 +0000
Message-Id: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
	command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
---
 xen/arch/arm/domain_build.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index faff88e..2831ad0 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -23,6 +23,7 @@ static unsigned int __initdata opt_dom0_max_vcpus;
 integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
 
 static int dom0_11_mapping = 1;
+integer_param("dom0_11_mapping", dom0_11_mapping);
 
 #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
 static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 19:23:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:23:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztID-0002uS-8B; Sun, 05 Jan 2014 19:23:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1VztIB-0002uM-Qa
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:23:24 +0000
Received: from [85.158.137.68:40583] by server-2.bemta-3.messagelabs.com id
	1C/7B-17329-B21B9C25; Sun, 05 Jan 2014 19:23:23 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1388949800!6171452!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15649 invoked from network); 5 Jan 2014 19:23:21 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:23:21 -0000
Received: by mail-qc0-f178.google.com with SMTP id i17so16705583qcy.23
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 11:23:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=dHmLqUPvbz0IZF+gDitgks9wX86ZZDuzrEU/wLt3Gf4=;
	b=pREv8IgpMIidxI1mhhPNrevVV1kZh4Bfo5ISNBxo+ToFsIwlLg5dYvXhL5Vp0mQvQd
	dJ2CcKw8qH0EHB53kxywM5F1lBSczHd6ck93fP7bft4uB0W37UAuW5pczairSPjyoiru
	x5VXVgN+OqDWyUCaDCPV75kMSVaubd0dihxGfeonbL0I7Nj8VO1xOFNAoPszUZ9rEb4R
	CEux7gVDJo1FEgtTMmhwLtuhYUHd9tGNtsApZkMKPEV9L1JowehiNkeMI/rfJuFh4VSP
	1sMC5X77nYnbM4hKEAytRw5I/m6Tm7+PLbbQamD1GJmMhsN6TZtiEYpsp+oFhtWoamrB
	k+HA==
MIME-Version: 1.0
X-Received: by 10.49.72.66 with SMTP id b2mr177473213qev.11.1388949800563;
	Sun, 05 Jan 2014 11:23:20 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sun, 5 Jan 2014 11:23:20 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
Date: Sun, 5 Jan 2014 19:23:20 +0000
Message-ID: <CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6718681932023252254=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6718681932023252254==
Content-Type: multipart/alternative; boundary=047d7b5db1247938b904ef3e13db

--047d7b5db1247938b904ef3e13db
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Jan 5, 2014 at 5:39 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> > Hi Peter,
> >
> > If you still can't boot with any memory bigger than 128M, as a fast
> workaround you can apply this patch.
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index faff88e..849df3f 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -22,7 +22,7 @@
> >  static unsigned int __initdata opt_dom0_max_vcpus;
> >  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> >
> > -static int dom0_11_mapping = 1;
> > +static int dom0_11_mapping = 0;
> >
> >  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
> >  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
> >
> >
> > It's failing because none of the zones has a contiguous memory block
> with an order bigger than 15 ( 128M ). I think this is due
> > to the alignment of the phys_start with buddy system in cubieboard, I'll
> look further and let you know if there's a cleaner
> > approach to fix that.
> >
> > It used to work before because the 11_mapping wasn't forced to "true"
> for all platforms and there was a quirk exposed by the
> > platform that used to express that. I think Julien removed that quirk
> and defaulted to 11_mapping in commit
> > "71952bfcbe9187765cf4010b1479af86def4fb1f"
>
> Unfortunately dom0_11_mapping is needed if at least one device driver
> for the Allwinner uses DMA.
> For example, if you disable dom0_11_mapping, can you still access the
> network? On the other hand if all device drivers do not use DMA we can
> set dom0_11_mapping to false for this platform.
>

I'm not quite sure about all the devices on the Cubieboard, but at least
for the network case I think it'll still work (it's working for me).
Besides, the Cubieboard didn't have this quirk to begin with, before the
default was switched to the 11_mapping.


>
> In any case I think that we could benefit from having dom0_11_mapping
> user configurable from the command line.

I agree (I just sent a patch for that).



-- 
Karim Allah Ahmed.
LinkedIn <http://eg.linkedin.com/pub/karim-allah-ahmed/13/829/550/>

--047d7b5db1247938b904ef3e13db--


--===============6718681932023252254==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6718681932023252254==--


From xen-devel-bounces@lists.xen.org Sun Jan 05 19:23:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:23:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztID-0002uS-8B; Sun, 05 Jan 2014 19:23:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1VztIB-0002uM-Qa
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:23:24 +0000
Received: from [85.158.137.68:40583] by server-2.bemta-3.messagelabs.com id
	1C/7B-17329-B21B9C25; Sun, 05 Jan 2014 19:23:23 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1388949800!6171452!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15649 invoked from network); 5 Jan 2014 19:23:21 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:23:21 -0000
Received: by mail-qc0-f178.google.com with SMTP id i17so16705583qcy.23
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 11:23:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=dHmLqUPvbz0IZF+gDitgks9wX86ZZDuzrEU/wLt3Gf4=;
	b=pREv8IgpMIidxI1mhhPNrevVV1kZh4Bfo5ISNBxo+ToFsIwlLg5dYvXhL5Vp0mQvQd
	dJ2CcKw8qH0EHB53kxywM5F1lBSczHd6ck93fP7bft4uB0W37UAuW5pczairSPjyoiru
	x5VXVgN+OqDWyUCaDCPV75kMSVaubd0dihxGfeonbL0I7Nj8VO1xOFNAoPszUZ9rEb4R
	CEux7gVDJo1FEgtTMmhwLtuhYUHd9tGNtsApZkMKPEV9L1JowehiNkeMI/rfJuFh4VSP
	1sMC5X77nYnbM4hKEAytRw5I/m6Tm7+PLbbQamD1GJmMhsN6TZtiEYpsp+oFhtWoamrB
	k+HA==
MIME-Version: 1.0
X-Received: by 10.49.72.66 with SMTP id b2mr177473213qev.11.1388949800563;
	Sun, 05 Jan 2014 11:23:20 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sun, 5 Jan 2014 11:23:20 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
Date: Sun, 5 Jan 2014 19:23:20 +0000
Message-ID: <CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6718681932023252254=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6718681932023252254==
Content-Type: multipart/alternative; boundary=047d7b5db1247938b904ef3e13db

--047d7b5db1247938b904ef3e13db
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Jan 5, 2014 at 5:39 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> > Hi Peter,
> >
> > If you still can't boot with any memory bigger than 128M, as a fast
> workaround you can apply this patch.
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index faff88e..849df3f 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -22,7 +22,7 @@
> >  static unsigned int __initdata opt_dom0_max_vcpus;
> >  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> >
> > -static int dom0_11_mapping = 1;
> > +static int dom0_11_mapping = 0;
> >
> >  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
> >  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
> >
> >
> > It's failing because none of the zones has a contiguous memory block
> with an order bigger than 15 ( 128M ). I think this is due
> > to the alignment of the phys_start with buddy system in cubieboard, I'll
> look further and let you know if there's a cleaner
> > approach to fix that.
> >
> > It used to work before because the 11_mapping wasn't forced to "true"
> for all platforms and there was a quirk exposed by the
> > platform that used to express that. I think Julien removed that quirk
> and defaulted to 11_mapping in commit
> > "71952bfcbe9187765cf4010b1479af86def4fb1f"
>
> Unfortunately dom0_11_mapping is needed if at least one device driver
> for the Allwinner uses DMA.
> For example, if you disable dom0_11_mapping, can you still access the
> network? On the other hand if all device drivers do not use DMA we can
> set dom0_11_mapping to false for this platform.
>

I'm not quite sure about all devices in cubieboard, but at least for the
network case I think it'll still work ( well, it's working for me ) .
Besides, Cubieboard didn't have this quirk to begin with before defaulting
to the 11_mapping


>
> In any case I think that we could benefit from having dom0_11_mapping
> user configurable from the command line.

I agree ( just sent a patch for that )



-- 
Karim Allah Ahmed.
LinkedIn <http://eg.linkedin.com/pub/karim-allah-ahmed/13/829/550/>



--===============6718681932023252254==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6718681932023252254==--


From xen-devel-bounces@lists.xen.org Sun Jan 05 19:25:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztKa-00033E-SI; Sun, 05 Jan 2014 19:25:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VztKZ-000333-Fp
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:25:51 +0000
Received: from [85.158.137.68:47719] by server-10.bemta-3.messagelabs.com id
	0D/AB-23989-EB1B9C25; Sun, 05 Jan 2014 19:25:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1388949948!6175754!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6059 invoked from network); 5 Jan 2014 19:25:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:25:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="87721630"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 19:25:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 14:25:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VztKV-00008m-5j;
	Sun, 05 Jan 2014 19:25:47 +0000
Date: Sun, 5 Jan 2014 19:24:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Karim Raslan <karim.allah.ahmed@gmail.com>
In-Reply-To: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
Message-ID: <alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
References: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
 command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 5 Jan 2014, Karim Raslan wrote:
> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>

Thanks!
Given that dom0_11_mapping is an int, the patch is OK.
However it might be better to turn dom0_11_mapping into a bool_t and use
boolean_param instead.


>  xen/arch/arm/domain_build.c |    1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index faff88e..2831ad0 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -23,6 +23,7 @@ static unsigned int __initdata opt_dom0_max_vcpus;
>  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>  
>  static int dom0_11_mapping = 1;
> +integer_param("dom0_11_mapping", dom0_11_mapping);
>  
>  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
>  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 19:30:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztOT-0003Wa-IP; Sun, 05 Jan 2014 19:29:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1VztOR-0003WT-Sv
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:29:52 +0000
Received: from [85.158.137.68:22015] by server-16.bemta-3.messagelabs.com id
	99/3A-26128-FA2B9C25; Sun, 05 Jan 2014 19:29:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1388950188!6554371!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17976 invoked from network); 5 Jan 2014 19:29:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:29:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,608,1384300800"; d="scan'208";a="89906483"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 19:29:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 5 Jan 2014 14:29:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1VztON-0000Bo-Sp;
	Sun, 05 Jan 2014 19:29:47 +0000
Date: Sun, 5 Jan 2014 19:28:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
In-Reply-To: <CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401051927030.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
	<CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1823363193-1388950138=:8667"
X-DLP: MIA2
Cc: peter <peter@perkbv.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Julien Grall <julien.grall@citrix.com>
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1823363193-1388950138=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

Could you use plain text for emails, please?

On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> On Sun, Jan 5, 2014 at 5:39 PM, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>       On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
>       > Hi Peter,
>       >
>       > If you still can't boot with any memory bigger than 128M, as a fast workaround you can apply this patch.
>       >
>       > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>       > index faff88e..849df3f 100644
>       > --- a/xen/arch/arm/domain_build.c
>       > +++ b/xen/arch/arm/domain_build.c
>       > @@ -22,7 +22,7 @@
>       >  static unsigned int __initdata opt_dom0_max_vcpus;
>       >  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>       >  
>       > -static int dom0_11_mapping = 1;
>       > +static int dom0_11_mapping = 0;
>       >  
>       >  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
>       >  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
>       >
>       >
>       > It's failing because none of the zones has a contiguous memory block with an order bigger than 15 (128M). I think this is due
>       > to the alignment of the phys_start with the buddy system in cubieboard, I'll look further and let you know if there's a cleaner
>       > approach to fix that.
>       >
>       > It used to work before because the 11_mapping wasn't forced to "true" for all platforms and there was a quirk exposed by the
>       > platform that used to express that. I think Julien removed that quirk and defaulted to 11_mapping in commit
>       > "71952bfcbe9187765cf4010b1479af86def4fb1f"
> 
> Unfortunately dom0_11_mapping is needed if at least one device driver
> for the Allwinner uses DMA.
> For example, if you disable dom0_11_mapping, can you still access the
> network? On the other hand if all device drivers do not use DMA we can
> set dom0_11_mapping to false for this platform.
> 
> 
> I'm not quite sure about all devices in cubieboard, but at least for the network case I think it'll still work (well, it's
> working for me). Besides, Cubieboard didn't have this quirk to begin with before defaulting to the 11_mapping

What is the linux device driver that you are using for the network? And
the one for the disk/sdcard?
--1342847746-1823363193-1388950138=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1823363193-1388950138=:8667--



From xen-devel-bounces@lists.xen.org Sun Jan 05 19:34:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztSo-0003f0-Bt; Sun, 05 Jan 2014 19:34:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VztSm-0003er-Ss
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 19:34:21 +0000
Received: from [85.158.143.35:2165] by server-2.bemta-4.messagelabs.com id
	3A/BF-11386-CB3B9C25; Sun, 05 Jan 2014 19:34:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1388950458!9746604!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20998 invoked from network); 5 Jan 2014 19:34:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Jan 2014 19:34:19 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s05JXFmr000405
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 5 Jan 2014 19:33:15 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s05JXExs003585
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 5 Jan 2014 19:33:14 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s05JXEjU003580; Sun, 5 Jan 2014 19:33:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 05 Jan 2014 11:33:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DE9271C18DD; Sun,  5 Jan 2014 14:33:12 -0500 (EST)
Date: Sun, 5 Jan 2014 14:33:12 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140105193312.GB12263@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-16-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401051817150.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401051817150.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com, hpa@zytor.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v13 15/19] xen/grant-table: Refactor
	gnttab_init
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 05, 2014 at 06:18:03PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > We have this odd scenario of where for PV paths we take a shortcut
> > but for the HVM paths we first ioremap xen_hvm_resume_frames, then
> > assign it to gnttab_shared.addr. This is needed because gnttab_map
> > uses gnttab_shared.addr.
> > 
> > Instead of having:
> > 	if (pv)
> > 		return gnttab_map
> > 	if (hvm)
> > 		...
> > 
> > 	gnttab_map
> > 
> > Lets move the HVM part before the gnttab_map and remove the
> > first call to gnttab_map.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> 
> As I wrote in my reply to the previous version of the patch, you can
> have my acked-by, except for the spurious code style fix mixed-up with
> the other changes.

Thanks. I fixed it up (removed the code style fix).

> 
> 
> >  drivers/xen/grant-table.c | 13 ++++---------
> >  1 file changed, 4 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index 99399cb..cc1b4fa 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
> >  	if (max_nr_gframes < nr_grant_frames)
> >  		return -ENOSYS;
> >  
> > -	if (xen_pv_domain())
> > -		return gnttab_map(0, nr_grant_frames - 1);
> > -
> > -	if (gnttab_shared.addr == NULL) {
> > +	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
> > +	{
> >  		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> > -						PAGE_SIZE * max_nr_gframes);
> > +					       PAGE_SIZE * max_nr_gframes);
> >  		if (gnttab_shared.addr == NULL) {
> >  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
> >  					xen_hvm_resume_frames);
> >  			return -ENOMEM;
> >  		}
> >  	}
> > -
> > -	gnttab_map(0, nr_grant_frames - 1);
> > -
> > -	return 0;
> > +	return gnttab_map(0, nr_grant_frames - 1);
> >  }
> >  
> >  int gnttab_resume(void)
> > -- 
> > 1.8.3.1
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 05, 2014 at 06:18:03PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > We have this odd scenario of where for PV paths we take a shortcut
> > but for the HVM paths we first ioremap xen_hvm_resume_frames, then
> > assign it to gnttab_shared.addr. This is needed because gnttab_map
> > uses gnttab_shared.addr.
> > 
> > Instead of having:
> > 	if (pv)
> > 		return gnttab_map
> > 	if (hvm)
> > 		...
> > 
> > 	gnttab_map
> > 
> > Lets move the HVM part before the gnttab_map and remove the
> > first call to gnttab_map.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> 
> As I wrote in my reply to the previous version of the patch, you can
> have my acked-by, except for the spurious code style fix mixed-up with
> the other changes.

Thanks. I fixed it up (removed the code style fix).

> 
> 
> >  drivers/xen/grant-table.c | 13 ++++---------
> >  1 file changed, 4 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index 99399cb..cc1b4fa 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -1173,22 +1173,17 @@ static int gnttab_setup(void)
> >  	if (max_nr_gframes < nr_grant_frames)
> >  		return -ENOSYS;
> >  
> > -	if (xen_pv_domain())
> > -		return gnttab_map(0, nr_grant_frames - 1);
> > -
> > -	if (gnttab_shared.addr == NULL) {
> > +	if (xen_feature(XENFEAT_auto_translated_physmap) && gnttab_shared.addr == NULL)
> > +	{
> >  		gnttab_shared.addr = xen_remap(xen_hvm_resume_frames,
> > -						PAGE_SIZE * max_nr_gframes);
> > +					       PAGE_SIZE * max_nr_gframes);
> >  		if (gnttab_shared.addr == NULL) {
> >  			pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
> >  					xen_hvm_resume_frames);
> >  			return -ENOMEM;
> >  		}
> >  	}
> > -
> > -	gnttab_map(0, nr_grant_frames - 1);
> > -
> > -	return 0;
> > +	return gnttab_map(0, nr_grant_frames - 1);
> >  }
> >  
> >  int gnttab_resume(void)
> > -- 
> > 1.8.3.1
> > 
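
[Editorial note: stripped to its shape, the control-flow change in the patch above can be sketched with hypothetical stand-ins for the Xen helpers; these are not the real kernel symbols.]

```c
#include <stddef.h>

/* Hypothetical stand-ins for the Xen helpers (not the real symbols). */
static int   auto_translated = 1;    /* XENFEAT_auto_translated_physmap */
static void *shared_addr     = NULL; /* plays the role of gnttab_shared.addr */
static int   map_calls       = 0;

static void *fake_xen_remap(void)  { return (void *)0x1000; }
static int   fake_gnttab_map(void) { map_calls++; return 0; }

/* After the patch: remap the HVM frames first if needed, then a single
 * exit point that propagates gnttab_map()'s return value for both the
 * PV and HVM paths (before, the HVM path silently dropped it). */
static int gnttab_setup_sketch(void)
{
    if (auto_translated && shared_addr == NULL) {
        shared_addr = fake_xen_remap();
        if (shared_addr == NULL)
            return -12; /* -ENOMEM */
    }
    return fake_gnttab_map();
}
```

The point of the refactor is visible here: there is exactly one call site for the map step, so its error code can no longer be ignored on one path.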

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 19:42:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzta1-000478-DC; Sun, 05 Jan 2014 19:41:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1Vzta0-000473-1R
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 19:41:48 +0000
Received: from [85.158.137.68:46223] by server-17.bemta-3.messagelabs.com id
	7E/E6-15965-B75B9C25; Sun, 05 Jan 2014 19:41:47 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388950905!7300698!1
X-Originating-IP: [209.85.216.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18246 invoked from network); 5 Jan 2014 19:41:46 -0000
Received: from mail-qc0-f175.google.com (HELO mail-qc0-f175.google.com)
	(209.85.216.175)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 19:41:46 -0000
Received: by mail-qc0-f175.google.com with SMTP id e9so16547423qcy.20
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 11:41:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=+/4FKykXsms8z+Q/BMMF6JV94USjC79DAmQvW063zJg=;
	b=IzugtU0jDW48kRuQ08BMRgNnr1z0dlpT2z46CS7ZDRiZidWG4ei5GpeHhpcekph5mS
	bWEULEaNr65TkcansGygnm8waSUWMft6jIWxdue0PrtOn33ETGRRha9LVAivc2Px5WfZ
	NngQbw6SuOv23tFjLRFnLYdFx2LlOy+WlXXFspKFVLSMHwteh9ZfvRMDy488JHsIYelI
	F2Zr1xnIxv4Oh1fv2tZz6GaF0b9Xt3zjUv9TOOCo4q0byMqOuChSSY9hnBTM/AN8t3ZI
	x441eJ/ht1N4EiZ9k2nc9hiu8qL6QIVBgDMGm3Qb7qaMmdRKNIjQ6iyN59LGNTmhVUsc
	pVLQ==
MIME-Version: 1.0
X-Received: by 10.49.72.66 with SMTP id b2mr177611909qev.11.1388950905058;
	Sun, 05 Jan 2014 11:41:45 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sun, 5 Jan 2014 11:41:44 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401051927030.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
	<CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
	<alpine.DEB.2.02.1401051927030.8667@kaball.uk.xensource.com>
Date: Sun, 5 Jan 2014 19:41:44 +0000
Message-ID: <CAOTdubvXTjSJc3D0Sxx5Y3BMFSMhVk19xS0s+LLwvdfvJ1Hh0w@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 5, 2014 at 7:28 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> Could you use plain text for emails, please?
Sorry about that :)

>
> On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
>> On Sun, Jan 5, 2014 at 5:39 PM, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>>       On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
>>       > Hi Peter,
>>       >
>>       > If you still can't boot with any memory bigger than 128M, as a fast workaround you can apply this patch.
>>       >
>>       > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>       > index faff88e..849df3f 100644
>>       > --- a/xen/arch/arm/domain_build.c
>>       > +++ b/xen/arch/arm/domain_build.c
>>       > @@ -22,7 +22,7 @@
>>       >  static unsigned int __initdata opt_dom0_max_vcpus;
>>       >  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>>       >
>>       > -static int dom0_11_mapping = 1;
>>       > +static int dom0_11_mapping = 0;
>>       >
>>       >  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
>>       >  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
>>       >
>>       >
>>       > It's failing because none of the zones has a contiguous memory block with an order bigger than 15 ( 128M ). I think
>>       this is due
>>       > to the alignment of the phys_start with buddy system in cubieboard, I'll look further and let you know if there's a
>>       cleaner
>>       > approach to fix that.
>>       >
>>       > It used to work before because the 11_mapping wasn't forced to "true" for all platforms and there was a quirk
>>       exposed by the
>>       > platform that used to express that. I think Julien removed that quirk and defaulted to 11_mapping in commit
>>       > "71952bfcbe9187765cf4010b1479af86def4fb1f"
>>
>> Unfortunately dom0_11_mapping is needed if at least one device driver
>> for the Allwinner uses DMA.
>> For example, if you disable dom0_11_mapping, can you still access the
>> network? On the other hand if all device drivers do not use DMA we can
>> set dom0_11_mapping to false for this platform.
>>
>>
>> I'm not quite sure about all devices in cubieboard, but at least for the network case I think it'll still work ( well, it's
>> working for me ) . Besides, Cubieboard didn't have this quirk to begin with before defaulting to the 11_mapping
>
> What is the linux device driver that you are using for the network? And
> the one for the disk/sdcard?

For the network, "sun4i-emac"; and currently I'm mounting my rootfs
over NFS, so no disk/sdcard.
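
[Editorial note: the "order bigger than 15 (128M)" figure mentioned earlier follows from the buddy allocator's power-of-two block sizing with 4 KiB pages; a quick sketch of that arithmetic, with a hypothetical helper name:]

```c
#define PAGE_SHIFT 12 /* 4 KiB pages */

/* Smallest buddy-allocator order whose block covers `bytes`.
 * A block of order N spans 2^N pages, i.e. 2^(N + PAGE_SHIFT) bytes. */
static unsigned int order_for_size(unsigned long bytes)
{
    unsigned long pages = (bytes + (1UL << PAGE_SHIFT) - 1) >> PAGE_SHIFT;
    unsigned int order = 0;

    while ((1UL << order) < pages)
        order++;
    return order;
}
```

With 4 KiB pages, 128 MiB is 32768 = 2^15 pages, i.e. order 15, which is why dom0 allocations above 128 MiB fail once no zone has a free block of a larger order.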




-- 
Karim Allah Ahmed.
LinkedIn

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 19:43:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 19:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VztbH-0004CM-UZ; Sun, 05 Jan 2014 19:43:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1VztbE-0004CC-Nm
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 19:43:06 +0000
Received: from [85.158.137.68:35709] by server-1.bemta-3.messagelabs.com id
	6F/F7-29598-7C5B9C25; Sun, 05 Jan 2014 19:43:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1388950981!7300792!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20007 invoked from network); 5 Jan 2014 19:43:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Jan 2014 19:43:02 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s05JfwG6005471
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 5 Jan 2014 19:41:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s05Jfv0V005355
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 5 Jan 2014 19:41:57 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s05JfuWa003267; Sun, 5 Jan 2014 19:41:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 05 Jan 2014 11:41:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 453191C18DD; Sun,  5 Jan 2014 14:41:55 -0500 (EST)
Date: Sun, 5 Jan 2014 14:41:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140105194155.GC12263@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org, david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead
	of native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > We also optimize one - the TLB flush. The native operation would
> > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > Xen one avoids that and lets the hypervisor determine which
> > VCPU needs the TLB flush.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/mmu.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index 490ddb3..c1d406f 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> >  void __init xen_init_mmu_ops(void)
> >  {
> >  	x86_init.paging.pagetable_init = xen_pagetable_init;
> > +
> > +	/* Optimization - we can use the HVM one but it has no idea which
> > +	 * VCPUs are descheduled - which means that it will needlessly IPI
> > +	 * them. Xen knows so let it do the job.
> > +	 */
> > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > +		return;
> > +	}
> >  	pv_mmu_ops = xen_mmu_ops;
> >  
> >  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> 
> Regarding this patch, the next one and the other changes to
> xen_setup_shared_info, xen_setup_mfn_list_list,
> xen_setup_vcpu_info_placement, etc: considering that the mmu related
> stuff is very different between PV and PVH guests, I wonder if it makes
> any sense to keep calling xen_init_mmu_ops on PVH.
> 
> I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
> under a new xen_pvh_pagetable_init.
> Just to give you an idea, not even compiled tested:

There is something to be said for sharing the same code path
between "old-style" PV and the new style: code coverage.

That is, the code gets tested on both platforms, and if I (or
anybody else) introduce a bug in the common PV paths it will
be immediately obvious, as hopefully the regression tests
will pick it up.

It is not nice, as the low-level code is sprinkled with one-offs
for PVH, which mostly does _less_.

What I was thinking is to flip this around. Make the PVH paths
the default and then have something like 'if (!xen_pvh_domain())'
... the big code.

Would you be OK with this line of thinking going forward, say,
after this patchset?

Thanks!
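
[Editorial note: the pattern the patch relies on, overriding a single member of an ops structure rather than replacing the whole structure, can be sketched with hypothetical miniature types; these are not the real pv_mmu_ops definitions.]

```c
/* Hypothetical miniature of the pv_mmu_ops pattern. */
struct mmu_ops {
    int (*flush_tlb_others)(void);
};

static int native_flush(void) { return 1; } /* IPIs every vCPU */
static int xen_flush(void)    { return 2; } /* hypervisor picks the vCPUs */

static struct mmu_ops pv_mmu_ops = { .flush_tlb_others = native_flush };

/* PVH: keep the native ops, swap in only the Xen TLB flush. */
static void init_mmu_ops_sketch(int auto_translated)
{
    if (auto_translated) {
        pv_mmu_ops.flush_tlb_others = xen_flush;
        return; /* everything else stays native */
    }
    /* classic PV would assign the full Xen ops structure here */
}
```

This is why the PVH branch in the patch returns early: only the one hook changes, and the rest of the native MMU handling is left in place.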

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 20:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 20:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzu72-0005P8-Fj; Sun, 05 Jan 2014 20:15:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1Vzu71-0005P3-2c
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 20:15:55 +0000
Received: from [85.158.137.68:39934] by server-10.bemta-3.messagelabs.com id
	D6/19-23989-A7DB9C25; Sun, 05 Jan 2014 20:15:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388952952!3680494!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8570 invoked from network); 5 Jan 2014 20:15:53 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 20:15:53 -0000
Received: by mail-la0-f47.google.com with SMTP id e16so1397819lan.20
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 12:15:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=VwG3e/XjzbIWxJZ95cVBS/J+5NXXf4zNQFeOusiJBnI=;
	b=ZsVnufbyuPqD+3gMhoMR/cLjYDFpXouqSi5Kxeq3ztElkOwDoVuvFPKilkJIp83CMO
	+I8J3Rea/plvKpgqIDj1nt4LyCpSPQrnT7GbHlbIXRaSKIa/OzqOgwjLJJ7Uv8U+wQaG
	BeeRL+6CB9BEe+WTwOzW5UFqUM0i0PC3oijHKyqzKL8JemODjE6/mijKxCONFLlUP/wf
	P5CnMw6bACayPcdFm2fbIyzn2mvCwrERpJr5Q1WOKXNx+WOeWL1RQd9KuzVD+GQnBbb/
	khCpQay7/vPaxVVOOTfXY1fqi0bolYbfNK4fteJlu/uAUiuCq9BUkEZ8YKE+W2lmH9NO
	yj8A==
X-Gm-Message-State: ALoCoQlkbT66qP5JVSL4NIKGJmtAqBCKku92WBy1BUcpkCbvxGeNkP+pqbi2HR/oL8mgnpJ8evHl
X-Received: by 10.152.9.193 with SMTP id c1mr258644lab.53.1388952952329;
	Sun, 05 Jan 2014 12:15:52 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id tc8sm41265361lbb.9.2014.01.05.12.15.50
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 12:15:51 -0800 (PST)
Message-ID: <52C9BD75.6070207@linaro.org>
Date: Sun, 05 Jan 2014 20:15:49 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Karim Raslan <karim.allah.ahmed@gmail.com>
References: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
	<alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
 command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/05/2014 07:24 PM, Stefano Stabellini wrote:
> On Sun, 5 Jan 2014, Karim Raslan wrote:
>> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>
> Thanks!
> Given that dom0_11_mapping is an int, the patch is OK.
> However it might be better to turn dom0_11_mapping into a bool_t and use
> boolean_param instead.

After reading the thread "Master not working on AllWinner A20", I still 
don't see why this patch is useful.

Unless the platform has an IOMMU, the 1:1 memory mapping SHOULD
be enabled; otherwise DMA-capable devices won't work.

See my answer to your mail (thread "Master not working on AllWinner
A20") for a proper fix.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 20:18:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 20:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzu9n-0005mb-4K; Sun, 05 Jan 2014 20:18:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1Vzu9l-0005mV-7H
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 20:18:45 +0000
Received: from [85.158.139.211:6376] by server-2.bemta-5.messagelabs.com id
	BB/CA-29392-42EB9C25; Sun, 05 Jan 2014 20:18:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-206.messagelabs.com!1388953123!7924159!1
X-Originating-IP: [209.85.217.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12207 invoked from network); 5 Jan 2014 20:18:44 -0000
Received: from mail-lb0-f182.google.com (HELO mail-lb0-f182.google.com)
	(209.85.217.182)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 20:18:44 -0000
Received: by mail-lb0-f182.google.com with SMTP id l4so9512852lbv.27
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 12:18:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ez1cNzg1kazmqhaEpzAMxujY3RB0YObcSbbG7TcqflI=;
	b=gXqLc08/ZV5VItDU9ENDe2PYwMPlvfXm1JURy1cB2gazWxG0gQs8qKvyhAOU2w9rj8
	jtUsvrETWH2TfgOggLHBrEH7BlXxvXLorYyzGN68xXW71dMo8ZdxWeuJk9vOq//7DEpx
	yFHJmb+Uc927zZPr7FJdxsKj4XIF4ytob6Rmna0exFzefFHG0EA2Q/e7EWj037ZQ6gL1
	pOQH4ShjroOX6PrP7iUNkATeEre+a7/kf96xA7SRe1r8W6PZz7EWpjNujTfbBeERgGQX
	xvh2VV8nae7d7LWc/79kXS42iZ1Hnm3TA44j+ObBDxfcNuGeJgpYJA7tvzZ09yGtVnHJ
	RH/A==
X-Gm-Message-State: ALoCoQlHjpajWfrTBT5l2CIMFsH9NkY6k5ICFhaNP52fec2Bvx9FCXjUs7DfzPrBBLrDqIuGOeU6
X-Received: by 10.152.115.230 with SMTP id jr6mr2125328lab.45.1388953123331;
	Sun, 05 Jan 2014 12:18:43 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id xl4sm53102533lac.9.2014.01.05.12.18.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 12:18:42 -0800 (PST)
Message-ID: <52C9BE20.90706@linaro.org>
Date: Sun, 05 Jan 2014 20:18:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <WC20131216075445.88000B@perkbv.com>	<1387188577.20076.44.camel@kazak.uk.xensource.com>	<WC20131216110403.65000D@perkbv.com>	<1387199654.10247.30.camel@kazak.uk.xensource.com>	<WC20131217092526.720011@perkbv.com>	<52B05913.2080300@linaro.org>	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
	<CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
In-Reply-To: <CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/05/2014 07:23 PM, karim.allah.ahmed@gmail.com wrote:
>
> I'm not quite sure about all devices in cubieboard, but at least for the
> network case I think it'll still work ( well, it's working for me ) .
> Besides, Cubieboard didn't have this quirk to begin with before
> defaulting to the 11_mapping

Ian introduced defaulting to the 1:1 mapping for the Cubieboard
to avoid duplicated code (as all platforms have the 1:1 mapping enabled).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 20:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 20:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzuEY-0005xb-U5; Sun, 05 Jan 2014 20:23:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzuEY-0005xV-0q
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 20:23:42 +0000
Received: from [193.109.254.147:59836] by server-3.bemta-14.messagelabs.com id
	D3/A7-11000-D4FB9C25; Sun, 05 Jan 2014 20:23:41 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1388953420!6684716!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32725 invoked from network); 5 Jan 2014 20:23:40 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 20:23:40 -0000
Received: by mail-lb0-f174.google.com with SMTP id y6so9396609lbh.33
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 12:23:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=OD/OHgyPP4RRZEkkGGiH6UmvnCbHIiygOMAs5sK59aY=;
	b=Y8ZWjbu1P/69ZUkcn8sS/iWy9lc5i2UQMqrl6M6CWV+e6j9jVEEolTwWQjBQ/DL/av
	T1XDGFV2SIGC1f1KBYPXALHB22PAfjkso+tSiPBCYl1FO6DWytHC91qpgswaVkV6jIF0
	r+LC/3eThjktogvm42pibFb8ry9B3il4/em2ABHFmwJEZX0Mu5NdxyHAPNSnn18F3/VI
	QBP3ADosxwglqVZs0s6KZ54CSpeQR3Ap1ozBOKDRhTXTGre09T+65hKccQ6v3PkNjf1+
	lNg/R/m/hmCNhb+3SqWUttBsQiovAtuApv+w88qdx63x+xEYROm5pHe+un5eUNUxwc8A
	8mMQ==
X-Gm-Message-State: ALoCoQlChkqbHLYo6TWv8/CXmYqn0LLHPmC+X+qrd5LkijQqCc+V9etHPb4iFhaYbcTLSzL96Dsk
X-Received: by 10.112.173.137 with SMTP id bk9mr2097129lbc.48.1388953419896;
	Sun, 05 Jan 2014 12:23:39 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id rb4sm41298251lbb.1.2014.01.05.12.23.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 12:23:39 -0800 (PST)
Message-ID: <52C9BF49.9010303@linaro.org>
Date: Sun, 05 Jan 2014 20:23:37 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
References: <WC20131216075445.88000B@perkbv.com>	<1387188577.20076.44.camel@kazak.uk.xensource.com>	<WC20131216110403.65000D@perkbv.com>	<1387199654.10247.30.camel@kazak.uk.xensource.com>	<WC20131217092526.720011@perkbv.com>	<52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
In-Reply-To: <CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Karim,

On 01/05/2014 04:48 PM, karim.allah.ahmed@gmail.com wrote:
> It's failing because none of the zones has a contiguous memory block
> with an order bigger than 15 ( 128M ). I think this is due to the
> alignment of the phys_start with buddy system in cubieboard, I'll look
> further and let you know if there's a cleaner approach to fix that.
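(The 128M figure above follows from the buddy-allocator notion of order: an order-n block is 2^n contiguous pages, so with 4 KiB pages order 15 is 128 MiB. A quick arithmetic check, as a sketch:)

```c
#include <assert.h>

/* A buddy-allocator order-n block is 2^n contiguous pages; with 4 KiB
 * pages, order 15 is 2^15 * 4 KiB = 128 MiB, matching the quoted figure. */
#define PAGE_SIZE 4096UL

static unsigned long order_to_bytes(unsigned int order)
{
    return (1UL << order) * PAGE_SIZE;
}
```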

Currently we support only one memory bank for dom0 when the 1:1 mapping 
is enabled. This solution is not reliable because it depends on where 
U-Boot has loaded the modules (kernel, initramfs, dtb, ...) in RAM.

The proper solution would be to support multiple banks for dom0. When 
Xen allocates dom0 memory, it would split the total amount of RAM into 
small chunks, so it would likely be able to allocate the memory 
correctly, even with the 1:1 mapping.
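The chunking idea can be sketched as follows (a hypothetical helper, not the actual Xen dom0 allocator): split dom0's RAM request into power-of-two chunks no larger than the biggest contiguous block the heap can provide, and give dom0 one bank per chunk.

```c
#include <assert.h>

#define MAX_BANKS 8

/* Sketch: split 'pages' into at most MAX_BANKS power-of-two chunks,
 * each of order <= max_order (the largest contiguous allocation the
 * heap can satisfy).  Returns the number of banks used, or -1 if the
 * request would need more than MAX_BANKS chunks. */
static int split_into_banks(unsigned long pages, unsigned int max_order,
                            unsigned long banks[MAX_BANKS])
{
    int nr = 0;

    while (pages) {
        unsigned int order = max_order;

        /* Largest power-of-two chunk that still fits the remainder. */
        while (order && (1UL << order) > pages)
            order--;

        if (nr == MAX_BANKS)
            return -1;
        banks[nr++] = 1UL << order;
        pages -= 1UL << order;
    }
    return nr;
}
```

With this shape, a 512 MiB dom0 on a heap that can only hand out order-15 (128 MiB) blocks simply becomes four 128 MiB banks instead of one failing order-17 request.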

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 20:44:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 20:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzuYn-0006yl-Nu; Sun, 05 Jan 2014 20:44:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzuYm-0006yg-Pb
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 20:44:36 +0000
Received: from [85.158.139.211:29147] by server-1.bemta-5.messagelabs.com id
	21/8B-21065-434C9C25; Sun, 05 Jan 2014 20:44:36 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1388954674!7923618!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22278 invoked from network); 5 Jan 2014 20:44:35 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 20:44:35 -0000
Received: by mail-lb0-f173.google.com with SMTP id z5so9469430lbh.32
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 12:44:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=CmZRo92bao5fBL7F/aAQ1hJIO7LkDFz9IC3PNvgBWpc=;
	b=TJ3uemDsn8Mda2lVBlUpPpLLi/8PHXI3GWD2P2ePzhBCcsz5dVDPqANwawOUKwW+My
	17EBlIZyI481VVotgl562sHbbqIznH8aaIDIOScOv3JgUcu/8t9czXahRQSGZGKflA3Y
	q0TlTMvMv1c0h/iCWlHuvOiee2hLSTYieuEvAcydiFYHiAmOvJioOa1QlzlBeWjDHFWr
	DttCPu1qRuJXRycnnU47lARZB5Xlt+TpyUiU968tjr57MH5hmXSazDDU0kB6/KqMdQwe
	pOsBo6GmX3KRKJCLH9GOxu3FrbDYfRXeiWiLhxoYlQjfRWNwgyQkVkT+fWRNCpk8cxpt
	mlJA==
X-Gm-Message-State: ALoCoQk2sq3J1X/xd+3JemqyFXHGkKsN7CODeEL6/NplW2Ev56bqzPPSgnogH/HDYupqcjf1gYcK
X-Received: by 10.152.234.231 with SMTP id uh7mr42796450lac.10.1388954674503; 
	Sun, 05 Jan 2014 12:44:34 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id
	ld10sm53147018lab.8.2014.01.05.12.44.32 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 12:44:33 -0800 (PST)
Message-ID: <52C9C42F.1060705@linaro.org>
Date: Sun, 05 Jan 2014 20:44:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Karim Raslan <karim.allah.ahmed@gmail.com>, xen-devel@lists.xen.org, 
	Keir Fraser <keir@xen.org>
References: <1388947531-26583-1-git-send-email-karim.allah.ahmed@gmail.com>
In-Reply-To: <1388947531-26583-1-git-send-email-karim.allah.ahmed@gmail.com>
Subject: Re: [Xen-devel] [PATCH] dump available order allocations in each
 zone while dumping heap information
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Karim,

(+ adding maintainers) Don't forget to CC the maintainers; you can use 
scripts/get_maintainers.pl to find them.

On 01/05/2014 06:45 PM, Karim Raslan wrote:
> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> ---
>   xen/common/page_alloc.c |   11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..5419b3f 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1673,7 +1673,7 @@ void scrub_one_page(struct page_info *pg)
>   static void dump_heap(unsigned char key)
>   {
>       s_time_t      now = NOW();
> -    int           i, j;
> +    int           i, j, k;
>
>       printk("'%c' pressed -> dumping heap info (now-0x%X:%08X)\n", key,
>              (u32)(now>>32), (u32)now);
> @@ -1683,8 +1683,17 @@ static void dump_heap(unsigned char key)
>           if ( !avail[i] )
>               continue;
>           for ( j = 0; j < NR_ZONES; j++ )
> +        {
>               printk("heap[node=%d][zone=%d] -> %lu pages\n",
>                      i, j, avail[i][j]);
> +            if(avail[i][j]) {

if ( .. )
{

You are using Linux coding style, not Xen. See CODING_STYLE in the root 
of the repository.

> +            	printk("\t(In:\n");

We don't use tabs in Xen; please use spaces (same for the next 4 
lines).

> +				for ( k = 0; k < MAX_ORDER; k++)
> +					if(!page_list_empty(&heap(i, j, k)))

if ( .. )

> +						printk("  \t[order=%d]\n",k);
> +				printk(")\n");
> +            }
> +        }
>       }
>   }
>
>

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 20:56:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 20:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzujh-0007U6-6l; Sun, 05 Jan 2014 20:55:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1Vzujf-0007Ty-Vh
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 20:55:52 +0000
Received: from [85.158.137.68:2969] by server-17.bemta-3.messagelabs.com id
	CD/3A-15965-7D6C9C25; Sun, 05 Jan 2014 20:55:51 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-3.tower-31.messagelabs.com!1388955347!7342981!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6386 invoked from network); 5 Jan 2014 20:55:50 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 5 Jan 2014 20:55:50 -0000
Received: from bitcom1.int.sbss.com.au
	([2002:cb10:e0fe:201:a5ca:4fd3:14f:ad5d])
	by smtp2.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1VzujX-00032L-GA; Mon, 06 Jan 2014 07:55:43 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Mon, 6 Jan 2014 07:55:42 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer,	since it
	is not compiled
Thread-Index: AQHPCajPa184K0T6dkKwhvrU7nePBpp1lyrQgAAnLQCAAN7cAA==
Date: Sun, 5 Jan 2014 20:55:41 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F359BBB@BITCOM1.int.sbss.com.au>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
	<d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
In-Reply-To: <d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.3.241]
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> After applying the patch, the win8 driver compilation gives the following
> error and warning:
>   1>xenvbd.c(463): error C2220: warning treated as error - no 'object' file
> generated
>   1>xenvbd.c(463): warning C4152: nonstandard extension, function/data
> pointer conversion in expression
> 
> The line in question is the following:
>   HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;
> 
> XenVbd_HwStorFindAdapter is the routine whose declaration you corrected
> a few lines earlier in the patch. As it is a level 4 warning, it can be ignored by
> setting /W3 in the MSC_WARNING_LEVEL. However I suspect that it would
> be preferred to find the cause of the warning.

That would imply that my function definition doesn't match the one expected by the HW_INITIALIZATION_DATA structure, but according to the docs I have everything right. Can you check the storport headers and compare the declaration there against my function?

> 
> On a side note, lines 219 and 230 are duplicates:
>   ConfigInfo->VirtualDevice = FALSE;
> 

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:20:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzv6o-00007P-No; Sun, 05 Jan 2014 21:19:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1Vzv6n-00007K-3n
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:19:45 +0000
Received: from [193.109.254.147:21180] by server-7.bemta-14.messagelabs.com id
	7E/88-15500-07CC9C25; Sun, 05 Jan 2014 21:19:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1388956782!8963351!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14507 invoked from network); 5 Jan 2014 21:19:43 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:19:43 -0000
Received: by mail-lb0-f175.google.com with SMTP id w6so9229000lbh.6
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:19:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=l7J8YUXhOyKW5+2D3KPYJM36UxtHQm3avB+rz3gymjA=;
	b=YUR+Z90YoKymzE77o/XtelutGE2HeTMZfj9bmkpth1IC5jPg+keQyFrdmyc/Yz45p6
	PhruDLvVWJAWQwdy05hWaBT/3aErdFWhvcAX25H/JyVtx9oNkg4JE6ixieSBIwRR6A56
	WNao/XRBuy5nrglTifvP9zh6t2BoQgYVcaQcVSfGfF/CuZaMK4+0FERAdJOxNAUZQg2w
	YcjMorpUzoSs4ufjBH+1b298mju2wPafKbp7vrzzpgUfrUKEbkWmnOA2cMzntWcpSyyt
	BN2HDL40bLO7blA6BiqShJCkW/aJUjCQNR9ngxHuLUNG0cxqLD1E7uMDsfJY3RC9RxlL
	0KlQ==
X-Gm-Message-State: ALoCoQmTIEsXDDL9SHJHCCov9WcLwTBfEre/Y1SlIfz131vDsps2YWJM5cxYs4KpZDoI6Ir0dSXh
X-Received: by 10.112.133.106 with SMTP id pb10mr39312lbb.94.1388956782336;
	Sun, 05 Jan 2014 13:19:42 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id bl6sm41373769lbb.5.2014.01.05.13.19.40
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 13:19:41 -0800 (PST)
Message-ID: <52C9CC6B.7030905@linaro.org>
Date: Sun, 05 Jan 2014 21:19:39 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel@lists.xenproject.org
References: <1386901910-1016-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1386901910-1016-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, tsahee@gmx.com
Subject: Re: [Xen-devel] [PATCH] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

Any comments on this patch?

Sincerely yours,

On 12/13/2013 02:31 AM, Julien Grall wrote:
> ePAR specifies that if the property "ranges" doesn't exist in a bus node:
>
> "it is assumed that no mapping exists between children of node and the parent
> address space".
>
> Modify dt_number_of_address to check whether the list of ranges is valid. Return
> 0 (i.e. there are zero ranges) if it is not.
>
> This patch has been tested on the Arndale where the bug can occur with the
> '/hdmi' node.
>
> Reported-by: <tsahee@gmx.com>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> ---
>
> This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the Arndale
> because it fails while trying to translate an invalid address.
> ---
>   xen/common/device_tree.c |   31 +++++++++++++++++++++++++++----
>   1 file changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 84e709d..9b9a348 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -93,7 +93,7 @@ struct dt_bus
>   {
>       const char *name;
>       const char *addresses;
> -    int (*match)(const struct dt_device_node *parent);
> +    bool_t (*match)(const struct dt_device_node *parent);
>       void (*count_cells)(const struct dt_device_node *child,
>                           int *addrc, int *sizec);
>       u64 (*map)(__be32 *addr, const __be32 *range, int na, int ns, int pna);
> @@ -793,6 +793,18 @@ int dt_n_size_cells(const struct dt_device_node *np)
>   /*
>    * Default translator (generic bus)
>    */
> +static bool_t dt_bus_default_match(const struct dt_device_node *parent)
> +{
> +    /* Root node doesn't have "ranges" property */
> +    if ( parent->parent == NULL )
> +        return 1;
> +
> +    /* The default bus is only used when the "ranges" property exists.
> +     * Otherwise we can't translate the address
> +     */
> +    return (dt_get_property(parent, "ranges", NULL) != NULL);
> +}
> +
>   static void dt_bus_default_count_cells(const struct dt_device_node *dev,
>                                   int *addrc, int *sizec)
>   {
> @@ -819,7 +831,7 @@ static u64 dt_bus_default_map(__be32 *addr, const __be32 *range,
>        * If the number of address cells is larger than 2 we assume the
>        * mapping doesn't specify a physical address. Rather, the address
>        * specifies an identifier that must match exactly.
> -     */
> +      */
>       if ( na > 2 && memcmp(range, addr, na * 4) != 0 )
>           return DT_BAD_ADDR;
>
> @@ -856,7 +868,7 @@ static const struct dt_bus dt_busses[] =
>       {
>           .name = "default",
>           .addresses = "reg",
> -        .match = NULL,
> +        .match = dt_bus_default_match,
>           .count_cells = dt_bus_default_count_cells,
>           .map = dt_bus_default_map,
>           .translate = dt_bus_default_translate,
> @@ -871,7 +883,6 @@ static const struct dt_bus *dt_match_bus(const struct dt_device_node *np)
>       for ( i = 0; i < ARRAY_SIZE(dt_busses); i++ )
>           if ( !dt_busses[i].match || dt_busses[i].match(np) )
>               return &dt_busses[i];
> -    BUG();
>
>       return NULL;
>   }
> @@ -890,7 +901,10 @@ static const __be32 *dt_get_address(const struct dt_device_node *dev,
>       parent = dt_get_parent(dev);
>       if ( parent == NULL )
>           return NULL;
> +
>       bus = dt_match_bus(parent);
> +    if ( !bus )
> +        return NULL;
>       bus->count_cells(dev, &na, &ns);
>
>       if ( !DT_CHECK_ADDR_COUNT(na) )
> @@ -994,6 +1008,8 @@ static u64 __dt_translate_address(const struct dt_device_node *dev,
>       if ( parent == NULL )
>           goto bail;
>       bus = dt_match_bus(parent);
> +    if ( !bus )
> +        goto bail;
>
>       /* Count address cells & copy address locally */
>       bus->count_cells(dev, &na, &ns);
> @@ -1026,6 +1042,11 @@ static u64 __dt_translate_address(const struct dt_device_node *dev,
>
>           /* Get new parent bus and counts */
>           pbus = dt_match_bus(parent);
> +        if ( pbus == NULL )
> +        {
> +            dt_printk("DT: %s is not a valid bus\n", parent->full_name);
> +            break;
> +        }
>           pbus->count_cells(dev, &pna, &pns);
>           if ( !DT_CHECK_COUNTS(pna, pns) )
>           {
> @@ -1164,6 +1185,8 @@ unsigned int dt_number_of_address(const struct dt_device_node *dev)
>           return 0;
>
>       bus = dt_match_bus(parent);
> +    if ( !bus )
> +        return 0;
>       bus->count_cells(dev, &na, &ns);
>
>       if ( !DT_CHECK_COUNTS(na, ns) )
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +    if ( !bus )
> +        goto bail;
>
>       /* Count address cells & copy address locally */
>       bus->count_cells(dev, &na, &ns);
> @@ -1026,6 +1042,11 @@ static u64 __dt_translate_address(const struct dt_device_node *dev,
>
>           /* Get new parent bus and counts */
>           pbus = dt_match_bus(parent);
> +        if ( pbus == NULL )
> +        {
> +            dt_printk("DT: %s is not a valid bus\n", parent->full_name);
> +            break;
> +        }
>           pbus->count_cells(dev, &pna, &pns);
>           if ( !DT_CHECK_COUNTS(pna, pns) )
>           {
> @@ -1164,6 +1185,8 @@ unsigned int dt_number_of_address(const struct dt_device_node *dev)
>           return 0;
>
>       bus = dt_match_bus(parent);
> +    if ( !bus )
> +        return 0;
>       bus->count_cells(dev, &na, &ns);
>
>       if ( !DT_CHECK_COUNTS(na, ns) )
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDX-0000GL-Ki; Sun, 05 Jan 2014 21:26:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDW-0000GE-1s
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:42 +0000
Received: from [85.158.137.68:29472] by server-5.bemta-3.messagelabs.com id
	77/63-25188-11EC9C25; Sun, 05 Jan 2014 21:26:41 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388957200!7305014!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4050 invoked from network); 5 Jan 2014 21:26:40 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:40 -0000
Received: by mail-la0-f54.google.com with SMTP id b8so9205302lan.41
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=6YH5O53UZGMRI97E+VM63wsttKbO9ZIJVxZJQ2ERY+I=;
	b=P5gWfbzoAuZi64zwUKaMHq649nFsAc3IiPfW0k79i5tJmpGbAE9nWFNhlWL2ljeBUd
	XXhY/FZyKfgCh9t7t5C9X8zu4WUtgAHtZZxKbw/5o6UBL/iLb5SEJ2sW+6AppprSC4Rs
	GxveUC9Yd2jsBSsu0nPa+ZfyvFk5+MYQXFiCP2VJFENB4rPsjyciae/v+bMupiRS4JY5
	64k5kioaN33P/aHtScREfnXdFwm8geRmVZfgVsCOzT2VxIE3/kjSzq51Rvpa07SXRG73
	3RsiQRucjeK243aDqeE+69iMSrNSHGn64veE8Ei7am+Nvx6gdeXZTd0+ZUHnOR0gxg1s
	rsLg==
X-Gm-Message-State: ALoCoQm3bw4ygDvLzNRvsNAjD7uQrbFjY1gXijYw4wwkrTb2rv1jYjFFzC97CGaQwZG6wRQtYNqA
X-Received: by 10.112.138.70 with SMTP id qo6mr2197683lbb.34.1388957199588;
	Sun, 05 Jan 2014 13:26:39 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.36
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:38 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:25 +0000
Message-Id: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 0/6] xen/arm: Merge early_printk function in
	console code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

This patch series aims to merge early printk into the console code. This
spares the developer from having to care whether a message is printed before
or after the console is initialized.

Sincerely yours,

Julien Grall (6):
  xen/arm: earlyprintk: move early_flush in early_puts
  xen/arm: earlyprintk: export early_puts
  xen/arm: Rename EARLY_PRINTK compile option to CONFIG_EARLY_PRINTK
  xen/console: Add support for early printk
  xen/console: Add noreturn attribute to panic function
  xen/arm: Replace early_printk call to printk call

 xen/arch/arm/Rules.mk              |  2 +-
 xen/arch/arm/arm32/head.S          | 18 +++++++++---------
 xen/arch/arm/arm64/head.S          | 18 +++++++++---------
 xen/arch/arm/early_printk.c        | 34 +---------------------------------
 xen/arch/arm/setup.c               | 28 +++++++++++++---------------
 xen/common/device_tree.c           | 36 +++++++++++++-----------------------
 xen/drivers/char/console.c         | 17 +++++++++++++++--
 xen/drivers/char/dt-uart.c         |  9 ++++-----
 xen/drivers/char/exynos4210-uart.c | 13 +++++--------
 xen/drivers/char/omap-uart.c       | 13 ++++++-------
 xen/drivers/char/pl011.c           | 13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      | 29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h | 25 ++++---------------------
 xen/include/xen/lib.h              |  2 +-
 14 files changed, 101 insertions(+), 156 deletions(-)

-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDh-0000I6-38; Sun, 05 Jan 2014 21:26:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDf-0000Hc-T1
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:52 +0000
Received: from [85.158.143.35:65126] by server-1.bemta-4.messagelabs.com id
	04/D2-02132-B1EC9C25; Sun, 05 Jan 2014 21:26:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388957209!9751720!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12611 invoked from network); 5 Jan 2014 21:26:50 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:50 -0000
Received: by mail-la0-f46.google.com with SMTP id eh20so9427248lab.33
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=by0kCvB/BGotm9gaw25AVUKif8cDV7RRZ8iEcmUm+DQ=;
	b=fyhs3wg5CVE7+sBWCdHRdAfewxmTHrVbtYRnnNNEWF/FyF8FiSHKdI48btX3UsD12J
	DZFSMKnC+e7zfSI2MxYnj9T8ynCrvkHqyQt9XjIztCNa8mNq4s78UdTOOtRytfWHpi6m
	qKfzLzmTaYmeMKNnODoJIpjmyIzOHO2Y8IukGiOW0WepxmAj+XkJKEbhW6CDRYl7g/G2
	ksDqlhhHBPdWFEI68A2ixlSVViGLEGZ+nBzAQbRUJVkfA9lromtlorN8gYhMc5Kk30KE
	EdtN1uamCimPkzyTclU1dwn7dJ6Y7VsjAdfO3fbmMvi8JPraikWNSrw0z7PIG/sbDSq1
	vcbw==
X-Gm-Message-State: ALoCoQlWi5sA7mUt10I83U8/lIBaaYeXS/teaGmLPNI2ZGqkXbLRzVapkis4VN/XnJUcars8NtMO
X-Received: by 10.152.1.234 with SMTP id 10mr43042778lap.19.1388957209401;
	Sun, 05 Jan 2014 13:26:49 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.47
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:48 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:28 +0000
Message-Id: <1388957191-10337-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 3/6] xen/arm: Rename EARLY_PRINTK compile option
	to CONFIG_EARLY_PRINTK
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Most common compile options start with CONFIG_. Rename the EARLY_PRINTK
option to CONFIG_EARLY_PRINTK to be consistent.

This option will be used in common code (e.g. the console) later.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/Rules.mk              |  2 +-
 xen/arch/arm/arm32/head.S          | 18 +++++++++---------
 xen/arch/arm/arm64/head.S          | 18 +++++++++---------
 xen/include/asm-arm/early_printk.h |  6 +++---
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index aaa203e..57f2eb1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -93,7 +93,7 @@ ifneq ($(EARLY_PRINTK_INC),)
 EARLY_PRINTK := y
 endif
 
-CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK
+CFLAGS-$(EARLY_PRINTK) += -DCONFIG_EARLY_PRINTK
 CFLAGS-$(EARLY_PRINTK_INIT_UART) += -DEARLY_PRINTK_INIT_UART
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_INC=\"debug-$(EARLY_PRINTK_INC).inc\"
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_BAUD=$(EARLY_PRINTK_BAUD)
diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..1b1801b 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -34,7 +34,7 @@
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -59,7 +59,7 @@
  */
 /* Macro to print a string to the UART, if there is one.
  * Clobbers r0-r3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   r0, 98f ; \
         bl    puts    ; \
@@ -67,9 +67,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         .arm
 
@@ -149,7 +149,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   r11, =EARLY_UART_BASE_ADDRESS  /* r11 := UART base address */
         teq   r12, #0                /* Boot CPU sets up the UART too */
         bleq  init_uart
@@ -330,7 +330,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         teq   r12, #0
@@ -492,7 +492,7 @@ ENTRY(relocate_xen)
 
         mov pc, lr
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * r11: Early UART base address
  * Clobbers r0-r2 */
@@ -537,7 +537,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -545,7 +545,7 @@ early_puts:
 puts:
 putn:   mov   pc, lr
 
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..c97c194 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -30,7 +30,7 @@
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -71,7 +71,7 @@
 
 /* Macro to print a string to the UART, if there is one.
  * Clobbers x0-x3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   x0, 98f ; \
         bl    puts    ; \
@@ -79,9 +79,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         /*.aarch64*/
 
@@ -174,7 +174,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   x23, =EARLY_UART_BASE_ADDRESS /* x23 := UART base address */
         cbnz  x22, 1f
         bl    init_uart                 /* Boot CPU sets up the UART too */
@@ -343,7 +343,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb   sy
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         cbnz  x22, 1f
@@ -489,7 +489,7 @@ ENTRY(relocate_xen)
 
         ret
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * x23: Early UART base address
  * Clobbers x0-x1 */
@@ -536,7 +536,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -544,7 +544,7 @@ early_puts:
 puts:
 putn:   ret
 
-#endif /* EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 31024b5..a58e3e7 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -12,7 +12,7 @@
 
 #include <xen/config.h>
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 /* need to add the uart address offset in page to the fixmap address */
 #define EARLY_UART_VIRTUAL_ADDRESS \
@@ -22,7 +22,7 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
@@ -43,7 +43,7 @@ static inline void  __attribute__((noreturn))
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
-#endif
+#endif /* !CONFIG_EARLY_PRINTK */
 
 #endif	/* __ASSEMBLY__ */
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDj-0000Iy-G4; Sun, 05 Jan 2014 21:26:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDh-0000IM-TV
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:54 +0000
Received: from [85.158.143.35:3905] by server-3.bemta-4.messagelabs.com id
	6C/E6-32360-D1EC9C25; Sun, 05 Jan 2014 21:26:53 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388957211!9748645!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6855 invoked from network); 5 Jan 2014 21:26:52 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:52 -0000
Received: by mail-lb0-f174.google.com with SMTP id y6so9317807lbh.19
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=OhrapjbjngucU5rUq+SMZMlP/HfANtdhrleTIO/jMFI=;
	b=OdwDLZwt7l5SnOlgGdD1/KiqXY52Z0ArBaQMo7+tJjzuOAYPXbyI+HNL+3/8HSqh/R
	INRS8gdB0JJhSViNifT08pL14NI7BJAyQfb9XgLKFRuedmeoBV1x49anRRihBWQs3/S7
	JZIBCR/CqhK8CvhytVJEDTW8Re1M31ANI6yDqZ/SzkCsHH+nZzWSVtcPx9SrXFw2e3kf
	LjzoYB/TnoqHxUbRo99JRBVcx8hglMFvkPHmCUSf2Gof1rTDWVKTQSs+1HdVNHP0MAwt
	jeYl7JimpZcMY07/omvMtUILRbClvB5+dmjPKVNF27P2WXob8s9xdfBC1y2nZxy/MvHP
	3NSQ==
X-Gm-Message-State: ALoCoQm+dtuXb7sic6pcRzCEXdEHavq4gcZkIenlycXvrue101v17iYX2qsnDDgGD2rhBPKlgC6f
X-Received: by 10.153.6.34 with SMTP id cr2mr2276330lad.44.1388957211657;
	Sun, 05 Jan 2014 13:26:51 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.49
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:50 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:29 +0000
Message-Id: <1388957191-10337-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org, patches@linaro.org
Subject: [Xen-devel] [RFC 4/6] xen/console: Add support for early printk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, a function (early_printk) was introduced to output messages when the
serial port is not yet initialized.

This solution is fragile because the developer needs to know when the serial
port is initialized in order to choose between early_printk and printk.
Moreover, some functions (mainly in common code) only use printk, which can
sometimes result in lost messages.

Directly call early_printk from the console code when the serial port is not
yet initialized. Use serial_steal_fn for this purpose.

Cc: Keir Fraser <keir@xen.org>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/drivers/char/console.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..f83c92e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -28,6 +28,9 @@
 #include <asm/debugger.h>
 #include <asm/div64.h>
 #include <xen/hypercall.h> /* for do_console_io */
+#ifdef CONFIG_EARLY_PRINTK
+#include <asm/early_printk.h>
+#endif
 
 /* console: comma-separated list of console outputs. */
 static char __initdata opt_console[30] = OPT_CONSOLE_STR;
@@ -245,7 +248,12 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 static char serial_rx_ring[SERIAL_RX_SIZE];
 static unsigned int serial_rx_cons, serial_rx_prod;
 
-static void (*serial_steal_fn)(const char *);
+#ifndef CONFIG_EARLY_PRINTK
+static inline void early_puts(const char *str)
+{}
+#endif
+
+static void (*serial_steal_fn)(const char *) = early_puts;
 
 int console_steal(int handle, void (*fn)(const char *))
 {
@@ -652,7 +660,10 @@ void __init console_init_preirq(void)
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
+        {
             sercon_handle = sh;
+            serial_steal_fn = NULL;
+        }
         else
         {
             char *q = strchr(p, ',');
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDl-0000K6-VJ; Sun, 05 Jan 2014 21:26:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDk-0000JR-GZ
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:56 +0000
Received: from [85.158.143.35:3975] by server-2.bemta-4.messagelabs.com id
	B1/AD-11386-F1EC9C25; Sun, 05 Jan 2014 21:26:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388957214!9751729!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12993 invoked from network); 5 Jan 2014 21:26:55 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:55 -0000
Received: by mail-lb0-f169.google.com with SMTP id u14so9556880lbd.28
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=8LBw925mLg3Vi5SnK1oGa5VtkyXRw4IJAYJWo/6ZaJ8=;
	b=PwC7Rf3FUxy17LUtxK7MMDM6gCJDI1/KQATXIx5NrSAxOqcjToy7nyER3Y181HQ5Zm
	iWKL5A/+i8Sbmm4g35piKtMb63fLoi9/sHu6kXGqTirdnSWGyUoalrCT7KAx540Br6++
	pOu+ArrP0ammTZFg01IZTCKtpbrLZAo87WN64/IgT3qetACfQLPm+ujYbZkK5Q0H33Vl
	3aNeKJRjCbp4iSU0XRPUBFQBdL7PqD5rXO68gcQxQg4vEoYUYD6Pvsbt1iOI4q3IryCd
	dFtvi5hb8uM2GFBeg0DsZOWgy2N1HjLShbQkZDF8z2bjon4AOfLjwhX+Ho7tWf3ccOxk
	05Hg==
X-Gm-Message-State: ALoCoQnI1sGBinfvbtMEpzBDvFkG69Y3fKT+t2D3ifz5DAUQ/zZ51LicV43VzBHEHL4U+XJA9gv8
X-Received: by 10.112.167.228 with SMTP id zr4mr87043lbb.56.1388957214302;
	Sun, 05 Jan 2014 13:26:54 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.51
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:52 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:30 +0000
Message-Id: <1388957191-10337-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org, patches@linaro.org
Subject: [Xen-devel] [RFC 5/6] xen/console: Add noreturn attribute to panic
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The panic function never returns. Without this attribute, gcc may emit
spurious warnings in calling functions.

Cc: Keir Fraser <keir@xen.org>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/drivers/char/console.c | 4 +++-
 xen/include/xen/lib.h      | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index f83c92e..229d48a 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1049,7 +1049,7 @@ __initcall(debugtrace_init);
  * **************************************************************
  */
 
-void panic(const char *fmt, ...)
+void __attribute__((noreturn)) panic(const char *fmt, ...)
 {
     va_list args;
     unsigned long flags;
@@ -1092,6 +1092,8 @@ void panic(const char *fmt, ...)
         watchdog_disable();
         machine_restart(5000);
     }
+
+    while ( 1 );
 }
 
 void __bug(char *file, int line)
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..9c3a242 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -88,7 +88,7 @@ extern void printk(const char *format, ...)
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
 extern void panic(const char *format, ...)
-    __attribute__ ((format (printf, 1, 2)));
+    __attribute__ ((format (printf, 1, 2))) __attribute__ ((noreturn));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
 extern int printk_ratelimit(void);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDc-0000H3-97; Sun, 05 Jan 2014 21:26:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDb-0000Gn-6i
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:47 +0000
Received: from [193.109.254.147:65150] by server-8.bemta-14.messagelabs.com id
	93/36-30921-61EC9C25; Sun, 05 Jan 2014 21:26:46 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1388957205!8958116!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14394 invoked from network); 5 Jan 2014 21:26:45 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:45 -0000
Received: by mail-lb0-f175.google.com with SMTP id w6so9231409lbh.6
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=v8dZ535X5zAZv9HQUCbECNA2MOIjulQxZxvLOIpkgSg=;
	b=MaKeQ8uMM4GwukcEw01vLtnfdHD9/sCJ+YmcRzPwXUK7iS0ftvYQ/P7kWcpcl76D3R
	ZwD9ATcAvKteIrDoOa7Ux58I/XOC4dMkP2qrqawn2V9u663FfV7Pc39+j+O4LJuVR7k/
	+NAtFWKGg/yj25dpzBNZ1g/mimnZYc7dvs6+UIXJTSI3YIjujTPPwLM1WYskL4Ltlx/I
	G2sQp9dyn8XNMfavjgWeFh8of0Qer/HpQrXdRs8aGxiQL4N2d2GMsOOB+JsHlArmAuxc
	t5SvahohkBSYLCLZ+GLmilYqT1wI/SEIt69pc3gQnWrf3R2rjZUYykaWsQZJs5lxqUpv
	9f6g==
X-Gm-Message-State: ALoCoQk408tTaWJx7jNfGq8HLsYbAzMjj/VP/QXa7ha73vSzxrHnnZwWRZuKS52cVSJWCE5MDN0I
X-Received: by 10.152.170.199 with SMTP id ao7mr2271173lac.40.1388957204826;
	Sun, 05 Jan 2014 13:26:44 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:43 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:26 +0000
Message-Id: <1388957191-10337-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 1/6] xen/arm: earlyprintk: move early_flush in
	early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The early_puts function will be exported for use in the console code. To
avoid losing characters (see why in commit cafdceb "xen/arm: avoid lost
characters with early_printk"), early_flush needs to be called in this
function.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..7143f9e 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -29,12 +29,6 @@ static void __init early_puts(const char *s)
         early_putch(*s);
         s++;
     }
-}
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
 
     /*
      * Wait the UART has finished to transfer all characters before
@@ -43,6 +37,12 @@ static void __init early_vprintk(const char *fmt, va_list args)
     early_flush();
 }
 
+static void __init early_vprintk(const char *fmt, va_list args)
+{
+    vsnprintf(buf, sizeof(buf), fmt, args);
+    early_puts(buf);
+}
+
 void __init early_printk(const char *fmt, ...)
 {
     va_list args;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDf-0000Hf-N0; Sun, 05 Jan 2014 21:26:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDd-0000HN-Mz
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:49 +0000
Received: from [85.158.143.35:65086] by server-2.bemta-4.messagelabs.com id
	EB/9D-11386-91EC9C25; Sun, 05 Jan 2014 21:26:49 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388957207!9668371!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19451 invoked from network); 5 Jan 2014 21:26:48 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:48 -0000
Received: by mail-la0-f54.google.com with SMTP id b8so9205347lan.41
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=R5NEG1pDIN9pogpDItGN165t4fR3S+3ec/5ngGkSs9A=;
	b=nKXWRhOqIJmMfUd89CYi+zrT6Up2kcCFdQUDS65BMdq5oaASaQjIDom+KbiTv/3gLz
	uUEff8s4zXzVvV3nIt+MeEil++XdlCbijJZtpaM+RbZdSWT7WOMXV1RSXUaQctmDqLmO
	P6h2q7pJag3ekJc2MDSg0hjjoqndaG522ERKaQOz36o459b8Kc1v8r0WKRie+NSZLEfi
	SplslNbbD20Hi2lw6QmiTR0lECDLFVMJu61mx70tWHvIlMmd3fM7dGe7bNsT0w4wzf5t
	ZTL61ogZM5abHEVZno/vsCxn7EVdGngtBmivbtBQxIkLgNjcEI0UYfWOKYEYDiTFty/M
	xvBA==
X-Gm-Message-State: ALoCoQlnTer+3SxlKMixR7JWAziBYo8T+7cbJ7JY94CfRyKP+LBRlZO1PLNuACBXISdaHGH54zcK
X-Received: by 10.152.234.75 with SMTP id uc11mr2304119lac.30.1388957207342;
	Sun, 05 Jan 2014 13:26:47 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.44
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:46 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:27 +0000
Message-Id: <1388957191-10337-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 2/6] xen/arm: earlyprintk: export early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c        | 2 +-
 xen/include/asm-arm/early_printk.h | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 7143f9e..affe424 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -21,7 +21,7 @@ void early_flush(void);
 /* Early printk buffer */
 static char __initdata buf[512];
 
-static void __init early_puts(const char *s)
+void early_puts(const char *s)
 {
     while (*s != '\0') {
         if (*s == '\n')
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..31024b5 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -24,6 +24,7 @@
 
 #ifdef EARLY_PRINTK
 
+void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
 void early_panic(const char *fmt, ...) __attribute__((noreturn))
@@ -31,6 +32,9 @@ void early_panic(const char *fmt, ...) __attribute__((noreturn))
 
 #else
 
+static inline void early_puts(const char *)
+{}
+
 static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDh-0000I6-38; Sun, 05 Jan 2014 21:26:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDf-0000Hc-T1
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:52 +0000
Received: from [85.158.143.35:65126] by server-1.bemta-4.messagelabs.com id
	04/D2-02132-B1EC9C25; Sun, 05 Jan 2014 21:26:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1388957209!9751720!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12611 invoked from network); 5 Jan 2014 21:26:50 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:50 -0000
Received: by mail-la0-f46.google.com with SMTP id eh20so9427248lab.33
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=by0kCvB/BGotm9gaw25AVUKif8cDV7RRZ8iEcmUm+DQ=;
	b=fyhs3wg5CVE7+sBWCdHRdAfewxmTHrVbtYRnnNNEWF/FyF8FiSHKdI48btX3UsD12J
	DZFSMKnC+e7zfSI2MxYnj9T8ynCrvkHqyQt9XjIztCNa8mNq4s78UdTOOtRytfWHpi6m
	qKfzLzmTaYmeMKNnODoJIpjmyIzOHO2Y8IukGiOW0WepxmAj+XkJKEbhW6CDRYl7g/G2
	ksDqlhhHBPdWFEI68A2ixlSVViGLEGZ+nBzAQbRUJVkfA9lromtlorN8gYhMc5Kk30KE
	EdtN1uamCimPkzyTclU1dwn7dJ6Y7VsjAdfO3fbmMvi8JPraikWNSrw0z7PIG/sbDSq1
	vcbw==
X-Gm-Message-State: ALoCoQlWi5sA7mUt10I83U8/lIBaaYeXS/teaGmLPNI2ZGqkXbLRzVapkis4VN/XnJUcars8NtMO
X-Received: by 10.152.1.234 with SMTP id 10mr43042778lap.19.1388957209401;
	Sun, 05 Jan 2014 13:26:49 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.47
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:48 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:28 +0000
Message-Id: <1388957191-10337-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 3/6] xen/arm: Rename EARLY_PRINTK compile option
	to CONFIG_EARLY_PRINTK
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Most common compile options start with CONFIG_. Rename the EARLY_PRINTK
option to CONFIG_EARLY_PRINTK to be consistent.

This option will be used in common code (e.g. console) later.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/Rules.mk              |  2 +-
 xen/arch/arm/arm32/head.S          | 18 +++++++++---------
 xen/arch/arm/arm64/head.S          | 18 +++++++++---------
 xen/include/asm-arm/early_printk.h |  6 +++---
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index aaa203e..57f2eb1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -93,7 +93,7 @@ ifneq ($(EARLY_PRINTK_INC),)
 EARLY_PRINTK := y
 endif
 
-CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK
+CFLAGS-$(EARLY_PRINTK) += -DCONFIG_EARLY_PRINTK
 CFLAGS-$(EARLY_PRINTK_INIT_UART) += -DEARLY_PRINTK_INIT_UART
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_INC=\"debug-$(EARLY_PRINTK_INC).inc\"
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_BAUD=$(EARLY_PRINTK_BAUD)
diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..1b1801b 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -34,7 +34,7 @@
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -59,7 +59,7 @@
  */
 /* Macro to print a string to the UART, if there is one.
  * Clobbers r0-r3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   r0, 98f ; \
         bl    puts    ; \
@@ -67,9 +67,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         .arm
 
@@ -149,7 +149,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   r11, =EARLY_UART_BASE_ADDRESS  /* r11 := UART base address */
         teq   r12, #0                /* Boot CPU sets up the UART too */
         bleq  init_uart
@@ -330,7 +330,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         teq   r12, #0
@@ -492,7 +492,7 @@ ENTRY(relocate_xen)
 
         mov pc, lr
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * r11: Early UART base address
  * Clobbers r0-r2 */
@@ -537,7 +537,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -545,7 +545,7 @@ early_puts:
 puts:
 putn:   mov   pc, lr
 
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..c97c194 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -30,7 +30,7 @@
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -71,7 +71,7 @@
 
 /* Macro to print a string to the UART, if there is one.
  * Clobbers x0-x3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   x0, 98f ; \
         bl    puts    ; \
@@ -79,9 +79,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         /*.aarch64*/
 
@@ -174,7 +174,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   x23, =EARLY_UART_BASE_ADDRESS /* x23 := UART base address */
         cbnz  x22, 1f
         bl    init_uart                 /* Boot CPU sets up the UART too */
@@ -343,7 +343,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb   sy
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         cbnz  x22, 1f
@@ -489,7 +489,7 @@ ENTRY(relocate_xen)
 
         ret
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * x23: Early UART base address
  * Clobbers x0-x1 */
@@ -536,7 +536,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -544,7 +544,7 @@ early_puts:
 puts:
 putn:   ret
 
-#endif /* EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 31024b5..a58e3e7 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -12,7 +12,7 @@
 
 #include <xen/config.h>
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 /* need to add the uart address offset in page to the fixmap address */
 #define EARLY_UART_VIRTUAL_ADDRESS \
@@ -22,7 +22,7 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
@@ -43,7 +43,7 @@ static inline void  __attribute__((noreturn))
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
-#endif
+#endif /* !CONFIG_EARLY_PRINTK */
 
 #endif	/* __ASSEMBLY__ */
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDc-0000H3-97; Sun, 05 Jan 2014 21:26:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDb-0000Gn-6i
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:47 +0000
Received: from [193.109.254.147:65150] by server-8.bemta-14.messagelabs.com id
	93/36-30921-61EC9C25; Sun, 05 Jan 2014 21:26:46 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1388957205!8958116!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14394 invoked from network); 5 Jan 2014 21:26:45 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:45 -0000
Received: by mail-lb0-f175.google.com with SMTP id w6so9231409lbh.6
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=v8dZ535X5zAZv9HQUCbECNA2MOIjulQxZxvLOIpkgSg=;
	b=MaKeQ8uMM4GwukcEw01vLtnfdHD9/sCJ+YmcRzPwXUK7iS0ftvYQ/P7kWcpcl76D3R
	ZwD9ATcAvKteIrDoOa7Ux58I/XOC4dMkP2qrqawn2V9u663FfV7Pc39+j+O4LJuVR7k/
	+NAtFWKGg/yj25dpzBNZ1g/mimnZYc7dvs6+UIXJTSI3YIjujTPPwLM1WYskL4Ltlx/I
	G2sQp9dyn8XNMfavjgWeFh8of0Qer/HpQrXdRs8aGxiQL4N2d2GMsOOB+JsHlArmAuxc
	t5SvahohkBSYLCLZ+GLmilYqT1wI/SEIt69pc3gQnWrf3R2rjZUYykaWsQZJs5lxqUpv
	9f6g==
X-Gm-Message-State: ALoCoQk408tTaWJx7jNfGq8HLsYbAzMjj/VP/QXa7ha73vSzxrHnnZwWRZuKS52cVSJWCE5MDN0I
X-Received: by 10.152.170.199 with SMTP id ao7mr2271173lac.40.1388957204826;
	Sun, 05 Jan 2014 13:26:44 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:43 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:26 +0000
Message-Id: <1388957191-10337-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 1/6] xen/arm: earlyprintk: move early_flush in
	early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The early_puts function will be exported for use by the console code. To
avoid losing characters (see why in commit cafdceb "xen/arm: avoid lost
characters with early_printk"), early_flush needs to be called in this
function.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..7143f9e 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -29,12 +29,6 @@ static void __init early_puts(const char *s)
         early_putch(*s);
         s++;
     }
-}
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
 
     /*
      * Wait the UART has finished to transfer all characters before
@@ -43,6 +37,12 @@ static void __init early_vprintk(const char *fmt, va_list args)
     early_flush();
 }
 
+static void __init early_vprintk(const char *fmt, va_list args)
+{
+    vsnprintf(buf, sizeof(buf), fmt, args);
+    early_puts(buf);
+}
+
 void __init early_printk(const char *fmt, ...)
 {
     va_list args;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDj-0000Iy-G4; Sun, 05 Jan 2014 21:26:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDh-0000IM-TV
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:54 +0000
Received: from [85.158.143.35:3905] by server-3.bemta-4.messagelabs.com id
	6C/E6-32360-D1EC9C25; Sun, 05 Jan 2014 21:26:53 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1388957211!9748645!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6855 invoked from network); 5 Jan 2014 21:26:52 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:52 -0000
Received: by mail-lb0-f174.google.com with SMTP id y6so9317807lbh.19
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=OhrapjbjngucU5rUq+SMZMlP/HfANtdhrleTIO/jMFI=;
	b=OdwDLZwt7l5SnOlgGdD1/KiqXY52Z0ArBaQMo7+tJjzuOAYPXbyI+HNL+3/8HSqh/R
	INRS8gdB0JJhSViNifT08pL14NI7BJAyQfb9XgLKFRuedmeoBV1x49anRRihBWQs3/S7
	JZIBCR/CqhK8CvhytVJEDTW8Re1M31ANI6yDqZ/SzkCsHH+nZzWSVtcPx9SrXFw2e3kf
	LjzoYB/TnoqHxUbRo99JRBVcx8hglMFvkPHmCUSf2Gof1rTDWVKTQSs+1HdVNHP0MAwt
	jeYl7JimpZcMY07/omvMtUILRbClvB5+dmjPKVNF27P2WXob8s9xdfBC1y2nZxy/MvHP
	3NSQ==
X-Gm-Message-State: ALoCoQm+dtuXb7sic6pcRzCEXdEHavq4gcZkIenlycXvrue101v17iYX2qsnDDgGD2rhBPKlgC6f
X-Received: by 10.153.6.34 with SMTP id cr2mr2276330lad.44.1388957211657;
	Sun, 05 Jan 2014 13:26:51 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.49
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:50 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:29 +0000
Message-Id: <1388957191-10337-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org, patches@linaro.org
Subject: [Xen-devel] [RFC 4/6] xen/console: Add support for early printk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, a function (early_printk) was introduced to output messages when the
serial port is not initialized.

This solution is fragile because the developer needs to know whether the
serial port is initialized in order to choose between early_printk and
printk. Moreover, some functions (mainly in common code) only use printk,
which sometimes results in lost messages.

Directly call the early printk code (early_puts) from the console code when
the serial port is not yet initialized. For this purpose, use serial_steal_fn.

Cc: Keir Fraser <keir@xen.org>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/drivers/char/console.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..f83c92e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -28,6 +28,9 @@
 #include <asm/debugger.h>
 #include <asm/div64.h>
 #include <xen/hypercall.h> /* for do_console_io */
+#ifdef CONFIG_EARLY_PRINTK
+#include <asm/early_printk.h>
+#endif
 
 /* console: comma-separated list of console outputs. */
 static char __initdata opt_console[30] = OPT_CONSOLE_STR;
@@ -245,7 +248,12 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 static char serial_rx_ring[SERIAL_RX_SIZE];
 static unsigned int serial_rx_cons, serial_rx_prod;
 
-static void (*serial_steal_fn)(const char *);
+#ifndef CONFIG_EARLY_PRINTK
+static inline void early_puts(const char *str)
+{}
+#endif
+
+static void (*serial_steal_fn)(const char *) = early_puts;
 
 int console_steal(int handle, void (*fn)(const char *))
 {
@@ -652,7 +660,10 @@ void __init console_init_preirq(void)
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
+        {
             sercon_handle = sh;
+            serial_steal_fn = NULL;
+        }
         else
         {
             char *q = strchr(p, ',');
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Sun Jan 05 21:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDf-0000Hf-N0; Sun, 05 Jan 2014 21:26:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDd-0000HN-Mz
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:26:49 +0000
Received: from [85.158.143.35:65086] by server-2.bemta-4.messagelabs.com id
	EB/9D-11386-91EC9C25; Sun, 05 Jan 2014 21:26:49 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388957207!9668371!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19451 invoked from network); 5 Jan 2014 21:26:48 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:48 -0000
Received: by mail-la0-f54.google.com with SMTP id b8so9205347lan.41
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=R5NEG1pDIN9pogpDItGN165t4fR3S+3ec/5ngGkSs9A=;
	b=nKXWRhOqIJmMfUd89CYi+zrT6Up2kcCFdQUDS65BMdq5oaASaQjIDom+KbiTv/3gLz
	uUEff8s4zXzVvV3nIt+MeEil++XdlCbijJZtpaM+RbZdSWT7WOMXV1RSXUaQctmDqLmO
	P6h2q7pJag3ekJc2MDSg0hjjoqndaG522ERKaQOz36o459b8Kc1v8r0WKRie+NSZLEfi
	SplslNbbD20Hi2lw6QmiTR0lECDLFVMJu61mx70tWHvIlMmd3fM7dGe7bNsT0w4wzf5t
	ZTL61ogZM5abHEVZno/vsCxn7EVdGngtBmivbtBQxIkLgNjcEI0UYfWOKYEYDiTFty/M
	xvBA==
X-Gm-Message-State: ALoCoQlnTer+3SxlKMixR7JWAziBYo8T+7cbJ7JY94CfRyKP+LBRlZO1PLNuACBXISdaHGH54zcK
X-Received: by 10.152.234.75 with SMTP id uc11mr2304119lac.30.1388957207342;
	Sun, 05 Jan 2014 13:26:47 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.44
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:46 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:27 +0000
Message-Id: <1388957191-10337-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 2/6] xen/arm: earlyprintk: export early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c        | 2 +-
 xen/include/asm-arm/early_printk.h | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 7143f9e..affe424 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -21,7 +21,7 @@ void early_flush(void);
 /* Early printk buffer */
 static char __initdata buf[512];
 
-static void __init early_puts(const char *s)
+void early_puts(const char *s)
 {
     while (*s != '\0') {
         if (*s == '\n')
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..31024b5 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -24,6 +24,7 @@
 
 #ifdef EARLY_PRINTK
 
+void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
 void early_panic(const char *fmt, ...) __attribute__((noreturn))
@@ -31,6 +32,9 @@ void early_panic(const char *fmt, ...) __attribute__((noreturn))
 
 #else
 
+static inline void early_puts(const char *s)
+{}
+
 static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Sun Jan 05 21:27:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDs-0000Ob-Nf; Sun, 05 Jan 2014 21:27:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDp-0000N1-6b
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:27:02 +0000
Received: from [85.158.139.211:37704] by server-2.bemta-5.messagelabs.com id
	67/5F-29392-42EC9C25; Sun, 05 Jan 2014 21:27:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388957218!7963430!1
X-Originating-IP: [209.85.217.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21841 invoked from network); 5 Jan 2014 21:26:59 -0000
Received: from mail-lb0-f176.google.com (HELO mail-lb0-f176.google.com)
	(209.85.217.176)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:59 -0000
Received: by mail-lb0-f176.google.com with SMTP id l4so9235443lbv.21
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=U3mVv6CyLeN7LZgD/yU4NPJu2dmb4Z/mPPpTrijUZsM=;
	b=g2+1DEkYG8SgUVAGDcDbBY+OJydCG4HJxIwdk83rg/Ha5uj2U996Od4vHtrunZNNpq
	enxi3OhhMkoGGL+Fm4JJNgO9T/q01LzSrdzr1m/7UF49VjW3rK/eAYCBpUq3mRzceNlE
	nTSXzkPldoEsCrEqKOc9Mc6ah7p7FdBFVFMEq8CDCYRABWcutBvF1LkLJmO4b8jidnAN
	F4THmmqXMsj8Md2PK6FwcHOkz83JdIvVsS+/SDPCiABRZDXhj+BetaIkMBKd5y4hVB8X
	7eqs6j0+uOdnBeHyagveodaT9BIsCFlJ47oY2kqnAzNdLhXo04uov1ufja4Sb9jtGQbR
	SQeg==
X-Gm-Message-State: ALoCoQn3rx0s0nNUW/Zp6fKF6vEnfbs92TyvPlrqbtXg5snQ9Mrg6FhalenXd8F5RRAK77uWeASz
X-Received: by 10.152.20.6 with SMTP id j6mr41951481lae.8.1388957217357;
	Sun, 05 Jan 2014 13:26:57 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.54
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:56 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:31 +0000
Message-Id: <1388957191-10337-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to printk
	call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the console supports early printk, we can get rid of the
early_printk calls.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c        | 32 --------------------------------
 xen/arch/arm/setup.c               | 28 +++++++++++++---------------
 xen/common/device_tree.c           | 36 +++++++++++++-----------------------
 xen/drivers/char/dt-uart.c         |  9 ++++-----
 xen/drivers/char/exynos4210-uart.c | 13 +++++--------
 xen/drivers/char/omap-uart.c       | 13 ++++++-------
 xen/drivers/char/pl011.c           | 13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      | 29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h | 23 +----------------------
 9 files changed, 62 insertions(+), 134 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index affe424..9119c8c 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -18,9 +18,6 @@
 void early_putch(char c);
 void early_flush(void);
 
-/* Early printk buffer */
-static char __initdata buf[512];
-
 void early_puts(const char *s)
 {
     while (*s != '\0') {
@@ -36,32 +33,3 @@ void early_puts(const char *s)
      */
     early_flush();
 }
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
-}
-
-void __init early_printk(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-}
-
-void __attribute__((noreturn)) __init
-early_panic(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-
-    early_printk("\n\nEarly Panic: Stopping\n");
-
-    while(1);
-}
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 840b04b..76b4273 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,6 @@
 #include <asm/page.h>
 #include <asm/current.h>
 #include <asm/setup.h>
-#include <asm/early_printk.h>
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
@@ -346,10 +345,10 @@ static paddr_t __init get_xen_paddr(void)
     }
 
     if ( !paddr )
-        early_panic("Not enough memory to relocate Xen");
+        panic("Not enough memory to relocate Xen");
 
-    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
-                 paddr, paddr + min_size);
+    printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+           paddr, paddr + min_size);
 
     early_info.modules.module[MOD_XEN].start = paddr;
     early_info.modules.module[MOD_XEN].size = min_size;
@@ -371,7 +370,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     void *fdt;
 
     if ( !early_info.mem.nr_banks )
-        early_panic("No memory bank");
+        panic("No memory bank");
 
     /*
      * We are going to accumulate two regions here.
@@ -430,8 +429,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( i != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     i, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               i, early_info.mem.nr_banks);
         early_info.mem.nr_banks = i;
     }
 
@@ -465,14 +464,13 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
 
     if ( ! e )
-        early_panic("Not not enough space for xenheap");
+        panic("Not not enough space for xenheap");
 
     domheap_pages = heap_pages - xenheap_pages;
 
-    early_printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
-                 e - (pfn_to_paddr(xenheap_pages)), e,
-                 xenheap_pages);
-    early_printk("Dom heap: %lu pages\n", domheap_pages);
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
+            e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages);
+    printk("Dom heap: %lu pages\n", domheap_pages);
 
     setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
@@ -606,8 +604,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( bank != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     bank, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               bank, early_info.mem.nr_banks);
         early_info.mem.nr_banks = bank;
     }
 
@@ -672,7 +670,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     fdt_size = device_tree_early_init(device_tree_flattened, fdt_paddr);
 
     cmdline = device_tree_bootargs(device_tree_flattened);
-    early_printk("Command line: %s\n", cmdline);
+    printk("Command line: %s\n", cmdline);
     cmdline_parse(cmdline);
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 84e709d..c35aee1 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -23,7 +23,6 @@
 #include <xen/cpumask.h>
 #include <xen/ctype.h>
 #include <xen/lib.h>
-#include <asm/early_printk.h>
 
 struct dt_early_info __initdata early_info;
 const void *device_tree_flattened;
@@ -54,16 +53,7 @@ struct dt_alias_prop {
 
 static LIST_HEAD(aliases_lookup);
 
-/* Some device tree functions may be called both before and after the
-   console is initialized. */
-#define dt_printk(fmt, ...)                         \
-    do                                              \
-    {                                               \
-        if ( system_state == SYS_STATE_early_boot ) \
-            early_printk(fmt, ## __VA_ARGS__);      \
-        else                                        \
-            printk(fmt, ## __VA_ARGS__);            \
-    } while (0)
+#define dt_printk(fmt, ...) printk(fmt, ## __VA_ARGS__)
 
 // #define DEBUG_DT
 
@@ -316,7 +306,7 @@ static void __init process_memory_node(const void *fdt, int node,
 
     if ( address_cells < 1 || size_cells < 1 )
     {
-        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+        dt_printk("fdt: node `%s': invalid #address-cells or #size-cells",
                      name);
         return;
     }
@@ -324,7 +314,7 @@ static void __init process_memory_node(const void *fdt, int node,
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
     {
-        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        dt_printk("fdt: node `%s': missing `reg' property\n", name);
         return;
     }
 
@@ -355,16 +345,16 @@ static void __init process_multiboot_node(const void *fdt, int node,
     else if ( fdt_node_check_compatible(fdt, node, "xen,linux-initrd") == 0)
         nr = MOD_INITRD;
     else
-        early_panic("%s not a known xen multiboot type\n", name);
+        panic("%s not a known xen multiboot type\n", name);
 
     mod = &early_info.modules.module[nr];
 
     prop = fdt_get_property(fdt, node, "reg", &len);
     if ( !prop )
-        early_panic("node %s missing `reg' property\n", name);
+        panic("node %s missing `reg' property\n", name);
 
     if ( len < dt_cells_to_size(address_cells + size_cells) )
-        early_panic("fdt: node `%s': `reg` property length is too short\n",
+        panic("fdt: node `%s': `reg` property length is too short\n",
                     name);
 
     cell = (const __be32 *)prop->data;
@@ -375,7 +365,7 @@ static void __init process_multiboot_node(const void *fdt, int node,
     if ( prop )
     {
         if ( len > sizeof(mod->cmdline) )
-            early_panic("module %d command line too long\n", nr);
+            panic("module %d command line too long\n", nr);
 
         safe_strcpy(mod->cmdline, prop->data);
     }
@@ -458,12 +448,12 @@ static void __init early_print_info(void)
     int i, nr_rsvd;
 
     for ( i = 0; i < mi->nr_banks; i++ )
-        early_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
                      mi->bank[i].start,
                      mi->bank[i].start + mi->bank[i].size - 1);
-    early_printk("\n");
+    dt_printk("\n");
     for ( i = 1 ; i < mods->nr_mods + 1; i++ )
-        early_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
+        dt_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
                      i,
                      mods->module[i].start,
                      mods->module[i].start + mods->module[i].size,
@@ -476,10 +466,10 @@ static void __init early_print_info(void)
             continue;
         /* fdt_get_mem_rsv returns length */
         e += s;
-        early_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
                      i, s, e);
     }
-    early_printk("\n");
+    dt_printk("\n");
 }
 
 /**
@@ -495,7 +485,7 @@ size_t __init device_tree_early_init(const void *fdt, paddr_t paddr)
 
     ret = fdt_check_header(fdt);
     if ( ret < 0 )
-        early_panic("No valid device tree\n");
+        panic("No valid device tree\n");
 
     mod = &early_info.modules.module[MOD_FDT];
     mod->start = paddr;
diff --git a/xen/drivers/char/dt-uart.c b/xen/drivers/char/dt-uart.c
index d7204fb..fa92b5c 100644
--- a/xen/drivers/char/dt-uart.c
+++ b/xen/drivers/char/dt-uart.c
@@ -18,7 +18,6 @@
  */
 
 #include <asm/device.h>
-#include <asm/early_printk.h>
 #include <asm/types.h>
 #include <xen/console.h>
 #include <xen/device_tree.h>
@@ -44,7 +43,7 @@ void __init dt_uart_init(void)
 
     if ( !console_has("dtuart") || !strcmp(opt_dtuart, "") )
     {
-        early_printk("No console\n");
+        printk("No console\n");
         return;
     }
 
@@ -54,7 +53,7 @@ void __init dt_uart_init(void)
     else
         options = "";
 
-    early_printk("Looking for UART console %s\n", devpath);
+    printk("Looking for UART console %s\n", devpath);
     if ( *devpath == '/' )
         dev = dt_find_node_by_path(devpath);
     else
@@ -62,12 +61,12 @@ void __init dt_uart_init(void)
 
     if ( !dev )
     {
-        early_printk("Unable to find device \"%s\"\n", devpath);
+        printk("Unable to find device \"%s\"\n", devpath);
         return;
     }
 
     ret = device_init(dev, DEVICE_SERIAL, options);
 
     if ( ret )
-        early_printk("Unable to initialize serial: %d\n", ret);
+        printk("Unable to initialize serial: %d\n", ret);
 }
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 0a2ac17..17ba010 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -24,7 +24,6 @@
 #include <xen/init.h>
 #include <xen/irq.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include <asm/device.h>
 #include <asm/exynos4210-uart.h>
 #include <asm/io.h>
@@ -314,9 +313,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-    {
-        early_printk("WARNING: UART configuration is not supported\n");
-    }
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &exynos4210_com;
 
@@ -329,21 +326,21 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("exynos4210: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("exynos4210: Unable to map the UART memory\n");
+        printk("exynos4210: Unable to map the UART memory\n");
         return -ENOMEM;
     }
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the IRQ\n");
+        printk("exynos4210: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index 321e636..ad5aabb 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -15,7 +15,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <asm/device.h>
 #include <xen/errno.h>
@@ -301,14 +300,14 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &omap_com;
 
     res = dt_property_read_u32(dev, "clock-frequency", &clkspec);
     if ( !res )
     {
-        early_printk("omap-uart: Unable to retrieve the clock frequency\n");
+        printk("omap-uart: Unable to retrieve the clock frequency\n");
         return -EINVAL;
     }
 
@@ -321,22 +320,22 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("omap-uart: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
     if ( !uart->regs )
     {
-        early_printk("omap-uart: Unable to map the UART memory\n");
+        printk("omap-uart: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the IRQ\n");
+        printk("omap-uart: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index 613b9eb..378d37e 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -22,7 +22,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <asm/device.h>
@@ -107,7 +106,7 @@ static void __init pl011_init_preirq(struct serial_port *port)
         /* Baud rate already set: read it out from the divisor latch. */
         divisor = (pl011_read(uart, IBRD) << 6) | (pl011_read(uart, FBRD));
         if (!divisor)
-            early_panic("pl011: No Baud rate configured\n");
+            panic("pl011: No Baud rate configured\n");
         uart->baud = (uart->clock_hz << 2) / divisor;
     }
     /* This write must follow FBRD and IBRD writes. */
@@ -229,7 +228,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
 
     if ( strcmp(config, "") )
     {
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
     }
 
     uart = &pl011_com;
@@ -243,15 +242,15 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("pl011: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
     if ( !uart->regs )
     {
-        early_printk("pl011: Unable to map the UART memory\n");
+        printk("pl011: Unable to map the UART memory\n");
 
         return -ENOMEM;
     }
@@ -259,7 +258,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the IRQ\n");
+        printk("pl011: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..2a5f72e 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -25,7 +25,6 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include "font.h"
 #include "lfb.h"
 #include "modelines.h"
@@ -123,21 +122,21 @@ void __init video_init(void)
 
     if ( !dev )
     {
-        early_printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
+        printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
         return;
     }
 
     res = dt_device_get_address(dev, 0, &hdlcd_start, &hdlcd_size);
     if ( !res )
     {
-        early_printk("HDLCD: Unable to retrieve MMIO base address\n");
+        printk("HDLCD: Unable to retrieve MMIO base address\n");
         return;
     }
 
     cells = dt_get_property(dev, "framebuffer", &lenp);
     if ( !cells )
     {
-        early_printk("HDLCD: Unable to retrieve framebuffer property\n");
+        printk("HDLCD: Unable to retrieve framebuffer property\n");
         return;
     }
 
@@ -146,13 +145,13 @@ void __init video_init(void)
 
     if ( !hdlcd_start )
     {
-        early_printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
         return;
     }
 
     if ( !framebuffer_start )
     {
-        early_printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
         return;
     }
 
@@ -166,13 +165,13 @@ void __init video_init(void)
     else if ( strlen(mode_string) < strlen("800x600@60") ||
             strlen(mode_string) > sizeof(_mode_string) - 1 )
     {
-        early_printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
+        printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
         return;
     } else {
         char *s = strchr(mode_string, '-');
         if ( !s )
         {
-            early_printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+            printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
                          mode_string);
             get_color_masks("32", &c);
             memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
@@ -180,13 +179,13 @@ void __init video_init(void)
         } else {
             if ( strlen(s) < 6 )
             {
-                early_printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
+                printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
                 return;
             }
             s++;
             if ( get_color_masks(s, &c) < 0 )
             {
-                early_printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
+                printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
                 return;
             }
             bytes_per_pixel = simple_strtoll(s, NULL, 10) / 8;
@@ -205,23 +204,23 @@ void __init video_init(void)
     }
     if ( !videomode )
     {
-        early_printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
-                     _mode_string);
+        printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
+               _mode_string);
         return;
     }
 
     if ( framebuffer_size < bytes_per_pixel * videomode->xres * videomode->yres )
     {
-        early_printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
         return;
     }
 
-    early_printk(KERN_INFO "Initializing HDLCD driver\n");
+    printk(KERN_INFO "Initializing HDLCD driver\n");
 
     lfb = ioremap_wc(framebuffer_start, framebuffer_size);
     if ( !lfb )
     {
-        early_printk(KERN_ERR "Couldn't map the framebuffer\n");
+        printk(KERN_ERR "Couldn't map the framebuffer\n");
         return;
     }
     memset(lfb, 0x00, bytes_per_pixel * videomode->xres * videomode->yres);
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index a58e3e7..6569397 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -18,33 +18,12 @@
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) +(EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
-#endif
-
 #ifndef __ASSEMBLY__
 
-#ifdef CONFIG_EARLY_PRINTK
-
 void early_puts(const char *s);
-void early_printk(const char *fmt, ...)
-    __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
-    __attribute__((format (printf, 1, 2)));
 
-#else
-
-static inline void early_puts(const char *)
-{}
-
-static inline  __attribute__((format (printf, 1, 2))) void
-early_printk(const char *fmt, ...)
-{}
-
-static inline void  __attribute__((noreturn))
-__attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
-{while(1);}
+#endif /* !__ASSEMBLY__ */
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
-#endif	/* __ASSEMBLY__ */
-
 #endif
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:27:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvDs-0000Ob-Nf; Sun, 05 Jan 2014 21:27:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvDp-0000N1-6b
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 21:27:02 +0000
Received: from [85.158.139.211:37704] by server-2.bemta-5.messagelabs.com id
	67/5F-29392-42EC9C25; Sun, 05 Jan 2014 21:27:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388957218!7963430!1
X-Originating-IP: [209.85.217.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21841 invoked from network); 5 Jan 2014 21:26:59 -0000
Received: from mail-lb0-f176.google.com (HELO mail-lb0-f176.google.com)
	(209.85.217.176)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:26:59 -0000
Received: by mail-lb0-f176.google.com with SMTP id l4so9235443lbv.21
	for <xen-devel@lists.xenproject.org>;
	Sun, 05 Jan 2014 13:26:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=U3mVv6CyLeN7LZgD/yU4NPJu2dmb4Z/mPPpTrijUZsM=;
	b=g2+1DEkYG8SgUVAGDcDbBY+OJydCG4HJxIwdk83rg/Ha5uj2U996Od4vHtrunZNNpq
	enxi3OhhMkoGGL+Fm4JJNgO9T/q01LzSrdzr1m/7UF49VjW3rK/eAYCBpUq3mRzceNlE
	nTSXzkPldoEsCrEqKOc9Mc6ah7p7FdBFVFMEq8CDCYRABWcutBvF1LkLJmO4b8jidnAN
	F4THmmqXMsj8Md2PK6FwcHOkz83JdIvVsS+/SDPCiABRZDXhj+BetaIkMBKd5y4hVB8X
	7eqs6j0+uOdnBeHyagveodaT9BIsCFlJ47oY2kqnAzNdLhXo04uov1ufja4Sb9jtGQbR
	SQeg==
X-Gm-Message-State: ALoCoQn3rx0s0nNUW/Zp6fKF6vEnfbs92TyvPlrqbtXg5snQ9Mrg6FhalenXd8F5RRAK77uWeASz
X-Received: by 10.152.20.6 with SMTP id j6mr41951481lae.8.1388957217357;
	Sun, 05 Jan 2014 13:26:57 -0800 (PST)
Received: from localhost.localdomain (ti0273a400-0979.bb.online.no.
	[85.165.250.215])
	by mx.google.com with ESMTPSA id e10sm53207917laa.6.2014.01.05.13.26.54
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 05 Jan 2014 13:26:56 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun,  5 Jan 2014 21:26:31 +0000
Message-Id: <1388957191-10337-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk calls with printk
	calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the console supports earlyprintk, we can get rid of the
early_printk calls.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/early_printk.c        | 32 --------------------------------
 xen/arch/arm/setup.c               | 28 +++++++++++++---------------
 xen/common/device_tree.c           | 36 +++++++++++++-----------------------
 xen/drivers/char/dt-uart.c         |  9 ++++-----
 xen/drivers/char/exynos4210-uart.c | 13 +++++--------
 xen/drivers/char/omap-uart.c       | 13 ++++++-------
 xen/drivers/char/pl011.c           | 13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      | 29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h | 23 +----------------------
 9 files changed, 62 insertions(+), 134 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index affe424..9119c8c 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -18,9 +18,6 @@
 void early_putch(char c);
 void early_flush(void);
 
-/* Early printk buffer */
-static char __initdata buf[512];
-
 void early_puts(const char *s)
 {
     while (*s != '\0') {
@@ -36,32 +33,3 @@ void early_puts(const char *s)
      */
     early_flush();
 }
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
-}
-
-void __init early_printk(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-}
-
-void __attribute__((noreturn)) __init
-early_panic(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-
-    early_printk("\n\nEarly Panic: Stopping\n");
-
-    while(1);
-}
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 840b04b..76b4273 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,6 @@
 #include <asm/page.h>
 #include <asm/current.h>
 #include <asm/setup.h>
-#include <asm/early_printk.h>
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
@@ -346,10 +345,10 @@ static paddr_t __init get_xen_paddr(void)
     }
 
     if ( !paddr )
-        early_panic("Not enough memory to relocate Xen");
+        panic("Not enough memory to relocate Xen");
 
-    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
-                 paddr, paddr + min_size);
+    printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+           paddr, paddr + min_size);
 
     early_info.modules.module[MOD_XEN].start = paddr;
     early_info.modules.module[MOD_XEN].size = min_size;
@@ -371,7 +370,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     void *fdt;
 
     if ( !early_info.mem.nr_banks )
-        early_panic("No memory bank");
+        panic("No memory bank");
 
     /*
      * We are going to accumulate two regions here.
@@ -430,8 +429,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( i != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     i, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               i, early_info.mem.nr_banks);
         early_info.mem.nr_banks = i;
     }
 
@@ -465,14 +464,13 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
 
     if ( ! e )
-        early_panic("Not not enough space for xenheap");
+        panic("Not enough space for xenheap");
 
     domheap_pages = heap_pages - xenheap_pages;
 
-    early_printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
-                 e - (pfn_to_paddr(xenheap_pages)), e,
-                 xenheap_pages);
-    early_printk("Dom heap: %lu pages\n", domheap_pages);
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
+            e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages);
+    printk("Dom heap: %lu pages\n", domheap_pages);
 
     setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
@@ -606,8 +604,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( bank != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     bank, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               bank, early_info.mem.nr_banks);
         early_info.mem.nr_banks = bank;
     }
 
@@ -672,7 +670,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     fdt_size = device_tree_early_init(device_tree_flattened, fdt_paddr);
 
     cmdline = device_tree_bootargs(device_tree_flattened);
-    early_printk("Command line: %s\n", cmdline);
+    printk("Command line: %s\n", cmdline);
     cmdline_parse(cmdline);
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 84e709d..c35aee1 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -23,7 +23,6 @@
 #include <xen/cpumask.h>
 #include <xen/ctype.h>
 #include <xen/lib.h>
-#include <asm/early_printk.h>
 
 struct dt_early_info __initdata early_info;
 const void *device_tree_flattened;
@@ -54,16 +53,7 @@ struct dt_alias_prop {
 
 static LIST_HEAD(aliases_lookup);
 
-/* Some device tree functions may be called both before and after the
-   console is initialized. */
-#define dt_printk(fmt, ...)                         \
-    do                                              \
-    {                                               \
-        if ( system_state == SYS_STATE_early_boot ) \
-            early_printk(fmt, ## __VA_ARGS__);      \
-        else                                        \
-            printk(fmt, ## __VA_ARGS__);            \
-    } while (0)
+#define dt_printk(fmt, ...) printk(fmt, ## __VA_ARGS__)
 
 // #define DEBUG_DT
 
@@ -316,7 +306,7 @@ static void __init process_memory_node(const void *fdt, int node,
 
     if ( address_cells < 1 || size_cells < 1 )
     {
-        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+        dt_printk("fdt: node `%s': invalid #address-cells or #size-cells",
                      name);
         return;
     }
@@ -324,7 +314,7 @@ static void __init process_memory_node(const void *fdt, int node,
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
     {
-        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        dt_printk("fdt: node `%s': missing `reg' property\n", name);
         return;
     }
 
@@ -355,16 +345,16 @@ static void __init process_multiboot_node(const void *fdt, int node,
     else if ( fdt_node_check_compatible(fdt, node, "xen,linux-initrd") == 0)
         nr = MOD_INITRD;
     else
-        early_panic("%s not a known xen multiboot type\n", name);
+        panic("%s not a known xen multiboot type\n", name);
 
     mod = &early_info.modules.module[nr];
 
     prop = fdt_get_property(fdt, node, "reg", &len);
     if ( !prop )
-        early_panic("node %s missing `reg' property\n", name);
+        panic("node %s missing `reg' property\n", name);
 
     if ( len < dt_cells_to_size(address_cells + size_cells) )
-        early_panic("fdt: node `%s': `reg` property length is too short\n",
+        panic("fdt: node `%s': `reg` property length is too short\n",
                     name);
 
     cell = (const __be32 *)prop->data;
@@ -375,7 +365,7 @@ static void __init process_multiboot_node(const void *fdt, int node,
     if ( prop )
     {
         if ( len > sizeof(mod->cmdline) )
-            early_panic("module %d command line too long\n", nr);
+            panic("module %d command line too long\n", nr);
 
         safe_strcpy(mod->cmdline, prop->data);
     }
@@ -458,12 +448,12 @@ static void __init early_print_info(void)
     int i, nr_rsvd;
 
     for ( i = 0; i < mi->nr_banks; i++ )
-        early_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
                      mi->bank[i].start,
                      mi->bank[i].start + mi->bank[i].size - 1);
-    early_printk("\n");
+    dt_printk("\n");
     for ( i = 1 ; i < mods->nr_mods + 1; i++ )
-        early_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
+        dt_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
                      i,
                      mods->module[i].start,
                      mods->module[i].start + mods->module[i].size,
@@ -476,10 +466,10 @@ static void __init early_print_info(void)
             continue;
         /* fdt_get_mem_rsv returns length */
         e += s;
-        early_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
                      i, s, e);
     }
-    early_printk("\n");
+    dt_printk("\n");
 }
 
 /**
@@ -495,7 +485,7 @@ size_t __init device_tree_early_init(const void *fdt, paddr_t paddr)
 
     ret = fdt_check_header(fdt);
     if ( ret < 0 )
-        early_panic("No valid device tree\n");
+        panic("No valid device tree\n");
 
     mod = &early_info.modules.module[MOD_FDT];
     mod->start = paddr;
diff --git a/xen/drivers/char/dt-uart.c b/xen/drivers/char/dt-uart.c
index d7204fb..fa92b5c 100644
--- a/xen/drivers/char/dt-uart.c
+++ b/xen/drivers/char/dt-uart.c
@@ -18,7 +18,6 @@
  */
 
 #include <asm/device.h>
-#include <asm/early_printk.h>
 #include <asm/types.h>
 #include <xen/console.h>
 #include <xen/device_tree.h>
@@ -44,7 +43,7 @@ void __init dt_uart_init(void)
 
     if ( !console_has("dtuart") || !strcmp(opt_dtuart, "") )
     {
-        early_printk("No console\n");
+        printk("No console\n");
         return;
     }
 
@@ -54,7 +53,7 @@ void __init dt_uart_init(void)
     else
         options = "";
 
-    early_printk("Looking for UART console %s\n", devpath);
+    printk("Looking for UART console %s\n", devpath);
     if ( *devpath == '/' )
         dev = dt_find_node_by_path(devpath);
     else
@@ -62,12 +61,12 @@ void __init dt_uart_init(void)
 
     if ( !dev )
     {
-        early_printk("Unable to find device \"%s\"\n", devpath);
+        printk("Unable to find device \"%s\"\n", devpath);
         return;
     }
 
     ret = device_init(dev, DEVICE_SERIAL, options);
 
     if ( ret )
-        early_printk("Unable to initialize serial: %d\n", ret);
+        printk("Unable to initialize serial: %d\n", ret);
 }
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 0a2ac17..17ba010 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -24,7 +24,6 @@
 #include <xen/init.h>
 #include <xen/irq.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include <asm/device.h>
 #include <asm/exynos4210-uart.h>
 #include <asm/io.h>
@@ -314,9 +313,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-    {
-        early_printk("WARNING: UART configuration is not supported\n");
-    }
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &exynos4210_com;
 
@@ -329,21 +326,21 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("exynos4210: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("exynos4210: Unable to map the UART memory\n");
+        printk("exynos4210: Unable to map the UART memory\n");
         return -ENOMEM;
     }
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the IRQ\n");
+        printk("exynos4210: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index 321e636..ad5aabb 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -15,7 +15,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <asm/device.h>
 #include <xen/errno.h>
@@ -301,14 +300,14 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &omap_com;
 
     res = dt_property_read_u32(dev, "clock-frequency", &clkspec);
     if ( !res )
     {
-        early_printk("omap-uart: Unable to retrieve the clock frequency\n");
+        printk("omap-uart: Unable to retrieve the clock frequency\n");
         return -EINVAL;
     }
 
@@ -321,22 +320,22 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("omap-uart: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
     if ( !uart->regs )
     {
-        early_printk("omap-uart: Unable to map the UART memory\n");
+        printk("omap-uart: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the IRQ\n");
+        printk("omap-uart: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index 613b9eb..378d37e 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -22,7 +22,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <asm/device.h>
@@ -107,7 +106,7 @@ static void __init pl011_init_preirq(struct serial_port *port)
         /* Baud rate already set: read it out from the divisor latch. */
         divisor = (pl011_read(uart, IBRD) << 6) | (pl011_read(uart, FBRD));
         if (!divisor)
-            early_panic("pl011: No Baud rate configured\n");
+            panic("pl011: No Baud rate configured\n");
         uart->baud = (uart->clock_hz << 2) / divisor;
     }
     /* This write must follow FBRD and IBRD writes. */
@@ -229,7 +228,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
 
     if ( strcmp(config, "") )
     {
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
     }
 
     uart = &pl011_com;
@@ -243,15 +242,15 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("pl011: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
     if ( !uart->regs )
     {
-        early_printk("pl011: Unable to map the UART memory\n");
+        printk("pl011: Unable to map the UART memory\n");
 
         return -ENOMEM;
     }
@@ -259,7 +258,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the IRQ\n");
+        printk("pl011: Unable to retrieve the IRQ\n");
         return res;
     }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..2a5f72e 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -25,7 +25,6 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include "font.h"
 #include "lfb.h"
 #include "modelines.h"
@@ -123,21 +122,21 @@ void __init video_init(void)
 
     if ( !dev )
     {
-        early_printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
+        printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
         return;
     }
 
     res = dt_device_get_address(dev, 0, &hdlcd_start, &hdlcd_size);
     if ( !res )
     {
-        early_printk("HDLCD: Unable to retrieve MMIO base address\n");
+        printk("HDLCD: Unable to retrieve MMIO base address\n");
         return;
     }
 
     cells = dt_get_property(dev, "framebuffer", &lenp);
     if ( !cells )
     {
-        early_printk("HDLCD: Unable to retrieve framebuffer property\n");
+        printk("HDLCD: Unable to retrieve framebuffer property\n");
         return;
     }
 
@@ -146,13 +145,13 @@ void __init video_init(void)
 
     if ( !hdlcd_start )
     {
-        early_printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
         return;
     }
 
     if ( !framebuffer_start )
     {
-        early_printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
         return;
     }
 
@@ -166,13 +165,13 @@ void __init video_init(void)
     else if ( strlen(mode_string) < strlen("800x600@60") ||
             strlen(mode_string) > sizeof(_mode_string) - 1 )
     {
-        early_printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
+        printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
         return;
     } else {
         char *s = strchr(mode_string, '-');
         if ( !s )
         {
-            early_printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+            printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
                          mode_string);
             get_color_masks("32", &c);
             memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
@@ -180,13 +179,13 @@ void __init video_init(void)
         } else {
             if ( strlen(s) < 6 )
             {
-                early_printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
+                printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
                 return;
             }
             s++;
             if ( get_color_masks(s, &c) < 0 )
             {
-                early_printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
+                printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
                 return;
             }
             bytes_per_pixel = simple_strtoll(s, NULL, 10) / 8;
@@ -205,23 +204,23 @@ void __init video_init(void)
     }
     if ( !videomode )
     {
-        early_printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
-                     _mode_string);
+        printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
+               _mode_string);
         return;
     }
 
     if ( framebuffer_size < bytes_per_pixel * videomode->xres * videomode->yres )
     {
-        early_printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
         return;
     }
 
-    early_printk(KERN_INFO "Initializing HDLCD driver\n");
+    printk(KERN_INFO "Initializing HDLCD driver\n");
 
     lfb = ioremap_wc(framebuffer_start, framebuffer_size);
     if ( !lfb )
     {
-        early_printk(KERN_ERR "Couldn't map the framebuffer\n");
+        printk(KERN_ERR "Couldn't map the framebuffer\n");
         return;
     }
     memset(lfb, 0x00, bytes_per_pixel * videomode->xres * videomode->yres);
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index a58e3e7..6569397 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -18,33 +18,12 @@
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) +(EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
-#endif
-
 #ifndef __ASSEMBLY__
 
-#ifdef CONFIG_EARLY_PRINTK
-
 void early_puts(const char *s);
-void early_printk(const char *fmt, ...)
-    __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
-    __attribute__((format (printf, 1, 2)));
 
-#else
-
-static inline void early_puts(const char *)
-{}
-
-static inline  __attribute__((format (printf, 1, 2))) void
-early_printk(const char *fmt, ...)
-{}
-
-static inline void  __attribute__((noreturn))
-__attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
-{while(1);}
+#endif /* !__ASSEMBLY__ */
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
-#endif	/* __ASSEMBLY__ */
-
 #endif
-- 
1.8.3.1



From xen-devel-bounces@lists.xen.org Sun Jan 05 21:53:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzvcw-0002B6-9M; Sun, 05 Jan 2014 21:52:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1Vzvcu-0002B1-D1
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 21:52:56 +0000
Received: from [85.158.143.35:60979] by server-2.bemta-4.messagelabs.com id
	69/A3-11386-734D9C25; Sun, 05 Jan 2014 21:52:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1388958773!9752175!1
X-Originating-IP: [209.85.217.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25910 invoked from network); 5 Jan 2014 21:52:54 -0000
Received: from mail-lb0-f170.google.com (HELO mail-lb0-f170.google.com)
	(209.85.217.170)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:52:54 -0000
Received: by mail-lb0-f170.google.com with SMTP id c11so9416806lbj.15
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 13:52:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Hd7a+G9AczFGRnmFEywXhhbfOAzBN+U2XkAWW/PPKlE=;
	b=RE6m6qfKNDlmktN0BAPenn/d7YFRhke/pVlCu8HrFediD47QiW4nUxi7diohWHvy6e
	AkS+by8+dt47JR6QjGAWbiumPpTf08MtUvGeo1BHrINo2e48FZZKBiVnWCICiMjo2Vcf
	YFa5Aqo7AoR4NQsOUOGHviBXDCo0OL+sRoKMzSNgHBk7dVB6BZgeiQrMGaAapE8/wuE/
	7fW4UNrKk4wgAdrf89hnOmuCVog3ZBHHy2OWORewQBFw1J+3EZijIUSFjxGUsSXjzlo4
	vtxvxhlbuePZeDd76nETl/mOviFMXcsfxg219+Yjk1xVK4w1VIAYUbq/TSRRAmFZcOfL
	DDZQ==
X-Gm-Message-State: ALoCoQlvQvfphxXkcx+QNeIXpfBNODkyG4L+XtMANcmSYbrm3QWHaVT+ydPCWu6Tx54qWDE/T8kF
X-Received: by 10.152.19.65 with SMTP id c1mr452980lae.49.1388958773550;
	Sun, 05 Jan 2014 13:52:53 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id
	mq10sm41404148lbb.12.2014.01.05.13.52.51 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 13:52:52 -0800 (PST)
Message-ID: <52C9D432.3040409@linaro.org>
Date: Sun, 05 Jan 2014 21:52:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, freebsd-xen@freebsd.org, 
	freebsd-current@freebsd.org, xen-devel@lists.xen.org, gibbs@freebsd.org,
	jhb@freebsd.org, kib@freebsd.org, julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-15-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1388677433-49525-15-git-send-email-roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
> Since Xen PVH guests doesn't have ACPI, we need to create a dummy
> bus so top level Xen devices can attach to it (instead of
> attaching directly to the nexus) and a pvcpu device that will be used
> to fill the pcpu->pc_device field.
> ---
>   sys/conf/files.amd64 |    1 +
>   sys/conf/files.i386  |    1 +
>   sys/x86/xen/xenpv.c  |  155 ++++++++++++++++++++++++++++++++++++++++++++++++++

I think it makes more sense to have two files: one for the xenpv bus and one 
for the dummy pvcpu device. That would allow us to move the xenpv bus to 
common code (sys/xen or sys/dev/xen).

[..]

> +
> +static int
> +xenpv_probe(device_t dev)
> +{
> +
> +	device_set_desc(dev, "Xen PV bus");
> +	device_quiet(dev);
> +	return (0);

As I understand it, returning 0 means "I can handle this device". So any 
device that is still probing, because it doesn't have a driver yet, will 
match xenpv, and we will end up with 2 (or even more) xenpv buses.

As we only want to probe the xenpv bus once, when the bus was added 
manually, returning BUS_PROBE_NOWILDCARD would suit better.

[..]

> +static int
> +xenpvcpu_probe(device_t dev)
> +{
> +
> +	device_set_desc(dev, "Xen PV CPU");
> +	return (0);

Same here: BUS_PROBE_NOWILDCARD.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 21:55:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 21:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvfN-0002HW-Qx; Sun, 05 Jan 2014 21:55:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1VzvfM-0002HN-Cm
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 21:55:28 +0000
Received: from [85.158.143.35:44459] by server-2.bemta-4.messagelabs.com id
	03/34-11386-FC4D9C25; Sun, 05 Jan 2014 21:55:27 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1388958926!9670753!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6722 invoked from network); 5 Jan 2014 21:55:27 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 21:55:27 -0000
Received: by mail-la0-f54.google.com with SMTP id b8so9485986lan.13
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 13:55:26 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=aFZAtkayilhQJ7rhwgkobHzrUqIslOB12Lt2ONgVhM0=;
	b=VXAr0oT8HTq4AePcsQ5Cgn+m3+2O9RC7HyFtxG3JC2IUnbI2VA94GZhkzIKuapnpVc
	jY28JQPQnRrWvSyItC63jB7C1iJrxuhhbed9WZusxO2jm3jOyGMCF3G+Ft9d8ZfwNecL
	0en/MHoQBJWRnqAfOy1S9lU40zztO7O24W7dvc3+9MtDAzSfeEVqchPVqP/HHQForMyu
	8EtC68kX7E4W9t/AjJmVFI4YXh0GpvlAcSCxuRTtxce9E3+IB+bn0w+ccz24KSTlFZVE
	BeBtxlAGsnLDuyoIjocLoC3NmkkVBxqLqBgTO/y3qlRjbCDWM8WQBkkY2iP/8fBox2A7
	sEvA==
X-Gm-Message-State: ALoCoQnQFpi4aElBiZVBFwZbtFlBjLKsvWHlC7e6z0yLhhfg62jpR4Ni/+wYkAIOydMFrV35E2vm
X-Received: by 10.152.26.72 with SMTP id j8mr33789lag.85.1388958926488;
	Sun, 05 Jan 2014 13:55:26 -0800 (PST)
Received: from [10.0.0.94] (ti0273a400-0979.bb.online.no. [85.165.250.215])
	by mx.google.com with ESMTPSA id
	mq10sm41398588lbb.12.2014.01.05.13.55.23 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 13:55:25 -0800 (PST)
Message-ID: <52C9D4CA.6070403@linaro.org>
Date: Sun, 05 Jan 2014 21:55:22 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, freebsd-xen@freebsd.org, 
	freebsd-current@freebsd.org, xen-devel@lists.xen.org, gibbs@freebsd.org,
	jhb@freebsd.org, kib@freebsd.org, julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1388677433-49525-16-git-send-email-roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
> Introduce a Xen specific nexus that is going to be in charge for
> attaching Xen specific devices.

Now that we have a xenpv bus, do we really need a Xen-specific nexus?
We should be able to use the identify callback of xenpv to create the bus.

The rest of this patch could be merged into patch #14, "xen: introduce 
xenpv bus and a dummy pvcpu device".

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 22:08:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 22:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvrZ-0002oB-91; Sun, 05 Jan 2014 22:08:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1VzvrY-0002o6-8j
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 22:08:04 +0000
Received: from [85.158.137.68:30771] by server-12.bemta-3.messagelabs.com id
	6E/FB-20055-3C7D9C25; Sun, 05 Jan 2014 22:08:03 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1388959682!3688133!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21102 invoked from network); 5 Jan 2014 22:08:03 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 22:08:03 -0000
Received: by mail-we0-f169.google.com with SMTP id w61so15389363wes.0
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 14:08:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=k/7RQFnzvR34IByOvoonFYhk8TovfaUpvAfNHkLYDyo=;
	b=UXq27k9pJjVVUPgL/dwJ3GwY6NPe1n0IsPDE8YJZsEWKvwdXT8yTpD2Vm/lkFu9EOi
	Dm471YPLOburzdUZ8OyWba9kTgu6bSvA85es07VNtieRY4g/vIHeNr98uQu2TsmXO0SI
	PBIJaxQA9akvqIlY1h/U0QlymoPIT46/xIJyJiBzTEOzZwrv0QvTcFPXvGQALzRXaLYN
	nkSiFtCsWUDPdxk6u7OrleYRZ09KYWSSoSu9NKNys9RN3Up3fpaTm9tH+kEpmWUchBwK
	CYFx/9/z7aQbUNRtHJOptuO5YtFMDV6N0Hpy1g3fVPfWTIQ0e379PmrId+CWiIngwh8a
	47nA==
X-Received: by 10.194.77.106 with SMTP id r10mr9401wjw.91.1388959682775;
	Sun, 05 Jan 2014 14:08:02 -0800 (PST)
Received: from localhost.localdomain
	(cpc24-watf10-2-0-cust41.15-2.cable.virginm.net. [86.18.37.42])
	by mx.google.com with ESMTPSA id v7sm14664251wix.5.2014.01.05.14.08.01
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 05 Jan 2014 14:08:02 -0800 (PST)
From: Karim Raslan <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Date: Sun,  5 Jan 2014 22:07:54 +0000
Message-Id: <1388959674-15162-1-git-send-email-karim.allah.ahmed@gmail.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org
Subject: [Xen-devel] [PATCH v2] dump available order allocations in each
	zone while dumping heap information
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
---
 xen/common/page_alloc.c |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..9a27bc5 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1673,7 +1673,7 @@ void scrub_one_page(struct page_info *pg)
 static void dump_heap(unsigned char key)
 {
     s_time_t      now = NOW();
-    int           i, j;
+    int           i, j, k;
 
     printk("'%c' pressed -> dumping heap info (now-0x%X:%08X)\n", key,
            (u32)(now>>32), (u32)now);
@@ -1683,8 +1683,17 @@ static void dump_heap(unsigned char key)
         if ( !avail[i] )
             continue;
         for ( j = 0; j < NR_ZONES; j++ )
+        {
             printk("heap[node=%d][zone=%d] -> %lu pages\n",
                    i, j, avail[i][j]);
+            if( avail[i][j] ) {
+                printk("  (In:\n");
+                for ( k = 0; k < MAX_ORDER; k++ )
+                    if( !page_list_empty(&heap(i, j, k)) )
+                        printk("   [order=%d]\n",k);
+                printk("  )\n");
+            }
+        }
     }
 }
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 22:10:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 22:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzvuC-0003DT-Rz; Sun, 05 Jan 2014 22:10:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1VzvuB-0003DK-3M
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 22:10:47 +0000
Received: from [85.158.137.68:41170] by server-1.bemta-3.messagelabs.com id
	FB/2E-29598-668D9C25; Sun, 05 Jan 2014 22:10:46 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1388959844!7339278!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32349 invoked from network); 5 Jan 2014 22:10:45 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 22:10:45 -0000
Received: by mail-qc0-f171.google.com with SMTP id c9so16988241qcz.30
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 14:10:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=mkdU2RWGnN+TldeP3x+sx9rqxLhhZ6Nwj3sI40a+MSQ=;
	b=w6BrxpdnHHw/uScb9MLhLgXz5o+4KUOPZL5oy36gdUVuvqHTYSlkvQyD8eruRXYwWo
	tHsqlHHuTklrjt0wedQ49kzjKZzD6LciXzerrxvMbp0UaVMcksCyrauQ5LMe/MYQfGY6
	Naz8cluAVl0B6CPL0sHrsMVCQ0GDplEV8Ve/TpDMlEkkowm7Dd2mbkbZMxa+Ebc2Fujo
	zdYs2WB7Qj4GRRawnmfwMR9difN2mk6Bwv11alnPoeHHbpJCdDcQIYsJx0b7m8BC4uFw
	e5+pJfxiVNOnCku4JscjTBhky6ve/RD/ByRzPTl0vgjpmKZ6SI2LTU1t+4xfhb49W5zA
	JymA==
MIME-Version: 1.0
X-Received: by 10.224.21.203 with SMTP id k11mr2686075qab.2.1388959844118;
	Sun, 05 Jan 2014 14:10:44 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Sun, 5 Jan 2014 14:10:44 -0800 (PST)
In-Reply-To: <52C9C42F.1060705@linaro.org>
References: <1388947531-26583-1-git-send-email-karim.allah.ahmed@gmail.com>
	<52C9C42F.1060705@linaro.org>
Date: Sun, 5 Jan 2014 22:10:44 +0000
Message-ID: <CAOTdubuXT-4ogQSiVDKoH7rAt24jmb=MYxQQVb0OYV5t7OEaKA@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] dump available order allocations in each
 zone while dumping heap information
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks Julien for your comments, reposting.

On Sun, Jan 5, 2014 at 8:44 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Hi Karim,
>
> (+ adding maintainers) Don't forget the maintainers; you can use
> scripts/get_maintainers.pl.
>
>
> On 01/05/2014 06:45 PM, Karim Raslan wrote:
>>
>> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>> ---
>>   xen/common/page_alloc.c |   11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index 5f484a2..5419b3f 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -1673,7 +1673,7 @@ void scrub_one_page(struct page_info *pg)
>>   static void dump_heap(unsigned char key)
>>   {
>>       s_time_t      now = NOW();
>> -    int           i, j;
>> +    int           i, j, k;
>>
>>       printk("'%c' pressed -> dumping heap info (now-0x%X:%08X)\n", key,
>>              (u32)(now>>32), (u32)now);
>> @@ -1683,8 +1683,17 @@ static void dump_heap(unsigned char key)
>>           if ( !avail[i] )
>>               continue;
>>           for ( j = 0; j < NR_ZONES; j++ )
>> +        {
>>               printk("heap[node=%d][zone=%d] -> %lu pages\n",
>>                      i, j, avail[i][j]);
>> +            if(avail[i][j]) {
>
>
> if ( .. )
> {
>
> You are using Linux coding style, not Xen. See CODING_STYLE in the root of
> the repository.
>
>
>> +               printk("\t(In:\n");
>
>
> We don't use tabs in Xen; please use spaces (same for the next 4
> lines).
>
>
>> +                               for ( k = 0; k < MAX_ORDER; k++)
>> +                                       if(!page_list_empty(&heap(i, j,
>> k)))
>
>
> if ( .. )
>
>
>> +                                               printk("
>> \t[order=%d]\n",k);
>> +                               printk(")\n");
>> +            }
>> +        }
>>       }
>>   }
>>
>>
>
> Sincerely yours,
>
> --
> Julien Grall



-- 
Karim Allah Ahmed.
LinkedIn

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 22:44:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 22:44:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzwQX-0004L4-Pv; Sun, 05 Jan 2014 22:44:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VzwQV-0004Kz-OC
	for xen-devel@lists.xenproject.org; Sun, 05 Jan 2014 22:44:11 +0000
Received: from [193.109.254.147:21883] by server-14.bemta-14.messagelabs.com
	id 3D/C5-12628-A30E9C25; Sun, 05 Jan 2014 22:44:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1388961848!8969377!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28354 invoked from network); 5 Jan 2014 22:44:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 22:44:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,609,1384300800"; d="scan'208";a="89922762"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Jan 2014 22:44:08 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 5 Jan 2014 17:44:07 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Sun, 5 Jan 2014
	23:44:06 +0100
Message-ID: <52C9E034.1020707@citrix.com>
Date: Sun, 5 Jan 2014 22:44:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xenproject.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-6-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1388957191-10337-6-git-send-email-julien.grall@linaro.org>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: tim@xen.org, patches@linaro.org, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [RFC 5/6] xen/console: Add noreturn attribute to
 panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/01/2014 21:26, Julien Grall wrote:
> The panic function never returns. Without this attribute, gcc may emit
> warnings in the calling function.
>
> Cc: Keir Fraser <keir@xen.org>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

I have a longer series doing rather more noreturn'ing than just this, if
you can wait until the 4.5 dev window opens up again.

~Andrew

> ---
>  xen/drivers/char/console.c | 4 +++-
>  xen/include/xen/lib.h      | 2 +-
>  2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index f83c92e..229d48a 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -1049,7 +1049,7 @@ __initcall(debugtrace_init);
>   * **************************************************************
>   */
>  
> -void panic(const char *fmt, ...)
> +void __attribute__((noreturn)) panic(const char *fmt, ...)
>  {
>      va_list args;
>      unsigned long flags;
> @@ -1092,6 +1092,8 @@ void panic(const char *fmt, ...)
>          watchdog_disable();
>          machine_restart(5000);
>      }
> +
> +    while ( 1 );
>  }
>  
>  void __bug(char *file, int line)
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index 5b258fd..9c3a242 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -88,7 +88,7 @@ extern void printk(const char *format, ...)
>  extern void guest_printk(const struct domain *d, const char *format, ...)
>      __attribute__ ((format (printf, 2, 3)));
>  extern void panic(const char *format, ...)
> -    __attribute__ ((format (printf, 1, 2)));
> +    __attribute__ ((format (printf, 1, 2))) __attribute__ ((noreturn));
>  extern long vm_assist(struct domain *, unsigned int, unsigned int);
>  extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
>  extern int printk_ratelimit(void);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 23:09:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 23:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1Vzwp1-0005RC-Pn; Sun, 05 Jan 2014 23:09:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1Vzwp0-0005R7-IC
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 23:09:30 +0000
Received: from [85.158.137.68:43353] by server-4.bemta-3.messagelabs.com id
	A3/20-10414-926E9C25; Sun, 05 Jan 2014 23:09:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1388963367!7311809!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3946 invoked from network); 5 Jan 2014 23:09:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 23:09:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,609,1384300800"; d="scan'208";a="87740632"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 23:09:26 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 5 Jan 2014 18:09:26 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	00:09:24 +0100
Message-ID: <52C9E621.4020608@citrix.com>
Date: Sun, 5 Jan 2014 23:09:21 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388857936-664-2-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/01/2014 17:52, Don Slutz wrote:
> Using a 1G hvm domU (in grub) and gdbsx:
>
> (gdb) set arch i8086
> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
> of GDB.  Attempting to continue with the default i8086 settings.
>
> The target architecture is assumed to be i8086
> (gdb) target remote localhost:9999
> Remote debugging using localhost:9999
> Remote debugging from host 127.0.0.1
> 0x0000d475 in ?? ()
> (gdb) x/1xh 0x6ae9168b
>
> Will reproduce this bug.
>
> With a debug=y build you will get:
>
> Assertion '!preempt_count()' failed at preempt.c:37
>
> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>
>          [ffff82c4c0126eec] _write_lock+0x3c/0x50
>           ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>           ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>           ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>           ffff82c4c01709ed  get_page+0x2d/0x100
>           ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>           ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>           ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>           ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>           ffff82c4c012938b  add_entry+0x4b/0xb0
>           ffff82c4c02223f9  syscall_enter+0xa9/0xae
>
> And gdb output:
>
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     0x3024
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Ignoring packet error, continuing...
> Reply contains invalid hex digit 116
>
> The 1st one worked because the p2m.lock is recursive and the PCPU
> had not yet changed.
>
> crash reports (for example):
>
> crash> mm_rwlock_t 0xffff83083f913010
> struct mm_rwlock_t {
>   lock = {
>     raw = {
>       lock = 2147483647
>     },
>     debug = {<No data fields>}
>   },
>   unlock_level = 0,
>   recurse_count = 1,
>   locker = 1,
>   locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
> }
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/debug.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3e21ca8..eceb805 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
>          mfn = (has_hvm_container_domain(dp)
>                 ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>                 : dbg_pv_va2mfn(addr, dp, pgd3));
> -        if ( mfn == INVALID_MFN ) 
> +        if ( mfn == INVALID_MFN ) {

Xen coding style would have this brace on the next line.

Content-wise, I think it would be better to fix up the error path in
dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having taken a
reference on the gfn.

~Andrew

> +            if ( gfn != INVALID_GFN )
> +                put_gfn(dp, gfn);
>              break;
> +        }
>  
>          va = map_domain_page(mfn);
>          va = va + (addr & (PAGE_SIZE-1));


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 05 23:36:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jan 2014 23:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzxEW-0006TR-PU; Sun, 05 Jan 2014 23:35:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1VzxEV-0006TL-CB
	for xen-devel@lists.xen.org; Sun, 05 Jan 2014 23:35:51 +0000
Received: from [85.158.139.211:21568] by server-17.bemta-5.messagelabs.com id
	23/0E-19152-65CE9C25; Sun, 05 Jan 2014 23:35:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1388964948!7972533!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8197 invoked from network); 5 Jan 2014 23:35:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Jan 2014 23:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,609,1384300800"; d="scan'208";a="87742658"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Jan 2014 23:35:30 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 5 Jan 2014 18:35:29 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	00:35:28 +0100
Message-ID: <52C9EC3C.2060008@citrix.com>
Date: Sun, 5 Jan 2014 23:35:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388857936-664-5-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/01/2014 17:52, Don Slutz wrote:
> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> returned.
>
> Without this gdb does not report an error.
>
> With this patch and using a 1G hvm domU:
>
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/domctl.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index ef6c140..4aa751f 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -997,8 +997,7 @@ long arch_do_domctl(
>              domctl->u.gdbsx_guest_memio.len;
>  
>          ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
> -        if ( !ret )
> -           copyback = 1;
> +        copyback = 1;

This whole hypercall implementation looks suspect.

gdbsx_guest_mem_io() itself writes to the 'remain' field, making the
assignment spanning the top of this diff context redundant.

Furthermore, ret is strictly 0 in the case that 'remain' is 0, or
-EFAULT if 'remain' is non-0.

While this does change the ABI of a hypercall, I think it is reasonable
to do, as domctl.h does imply that 'remain' is an output parameter,
without specifying anything about its behaviour in the case of an error.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 00:54:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 00:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1VzySK-00014K-C0; Mon, 06 Jan 2014 00:54:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dev.kai.huang@gmail.com>) id 1VzySJ-00014F-AM
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 00:54:11 +0000
Received: from [85.158.137.68:24898] by server-10.bemta-3.messagelabs.com id
	56/25-23989-2BEF9C25; Mon, 06 Jan 2014 00:54:10 +0000
X-Env-Sender: dev.kai.huang@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1388969648!6577579!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12943 invoked from network); 6 Jan 2014 00:54:09 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 00:54:09 -0000
Received: by mail-qc0-f178.google.com with SMTP id i17so17064908qcy.9
	for <xen-devel@lists.xen.org>; Sun, 05 Jan 2014 16:54:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=WrK2abF2iv1ozHHzdgj+4qaKq392SYBk4ybIQP1OrjM=;
	b=TDQVk7h+INL1LyMnH9MWmhY8VL+c0RqmG6zVLJ56rZb/jkqlsWVALefpbC4k3lxTMz
	uzGZoUbjT0fjJ62Prx5rKrhw4ZGW/96xdBMKC19ImOgNeR1Cjr7q0KMLUyfAZqt1JMNy
	LFNBPHZaRNyB5te9uRR8m+TbgBg84o+SdrBXSJEd9/fLzdTepJnJ51gLC/OVIya/C1Uo
	MjF6x+hXLm3nXO7O/v9vZrtcTgNHRX1MTssjHfR+U1MZtE4/tMxPsbYs3tu5OaRYYXmW
	1p0CaXmQCJu0wRF/i5FCpsVqci51aw5il2OikLw0tRnG/Ugw9FVfv3+MFFh9fUfWHeJ4
	vpdw==
MIME-Version: 1.0
X-Received: by 10.224.12.5 with SMTP id v5mr174001644qav.4.1388969647921; Sun,
	05 Jan 2014 16:54:07 -0800 (PST)
Received: by 10.229.168.137 with HTTP; Sun, 5 Jan 2014 16:54:07 -0800 (PST)
In-Reply-To: <CAEPeN01g2RAZA7-=UG6oeb+33VHt9Vr8LGL=wbNvx_zdr2fbVw@mail.gmail.com>
References: <CAEPeN00N65Nx4Ak5RpChtfHtThaFUJ-mUqis4KrHP1xeH=arSA@mail.gmail.com>
	<52B6F7A2.8000707@citrix.com>
	<CAEPeN01g2RAZA7-=UG6oeb+33VHt9Vr8LGL=wbNvx_zdr2fbVw@mail.gmail.com>
Date: Mon, 6 Jan 2014 08:54:07 +0800
Message-ID: <CAOtp4Kqtavppv5eR7y9N6zGmhMxZRXzyF7Xpkf_dnk+JVm66PQ@mail.gmail.com>
From: Kai Huang <dev.kai.huang@gmail.com>
To: Yunbin Wang <wangyunbin1989@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] How to calculate the crashing time cost in live
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3761712835489047176=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3761712835489047176==
Content-Type: multipart/alternative; boundary=089e01493f1678387104ef42b267

--089e01493f1678387104ef42b267
Content-Type: text/plain; charset=ISO-8859-1

If I understand correctly, from the hypervisor's view there are two
situations in which the VM is out of service. The first is when userland
makes a hypercall to fetch the dirty pages, during which the domain is
paused. The second is the last stage of migration, when the domain is
suspended and transmitted to the remote host. For the first case, we can
simply add __trace_var to the beginning and end of domain_log_dirty_op
and use xentrace to get the time. I am not familiar with the second
case, but if it is also done in one hypercall, it can be handled the
same way.

-Kai


On Mon, Dec 23, 2013 at 2:20 PM, Yunbin Wang <wangyunbin1989@gmail.com>wrote:

> Hi Cooper,
> Thanks for your reply.
>
> Firstly, how I measure migration time: to start a live migration we run
> xm migrate --live <domain> <target>, so I simply take the time this
> command takes to complete as the migration time.
>
> Secondly, what I meant by "crash time": down time. Even with live
> migration the VM stays in service for most of the migration, but there
> is still a short period during which it is out of service, and that is
> the period I want to measure.
>
> Thanks,
> Yunbin Wang
>
>
> 2013/12/22 Andrew Cooper <andrew.cooper3@citrix.com>
>
>>  On 22/12/2013 13:23, Yunbin Wang wrote:
>>
>> Hi all,
>> I'm trying to improve the performance of live migration in Xen. I have
>> implemented a new migration method, and I want to test whether it
>> brings better performance.
>>
>>  I evaluate the method on two metrics: migration time and crash time.
>> Migration time is easy to measure, but how can I calculate the crash time?
>>
>>  Thanks,
>> Yunbin Wang
>>
>>
>> Can you explain what you mean by the "crash time", and perhaps also
>> explain how you are measuring migration time.
>>
>> ~Andrew
>>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--089e01493f1678387104ef42b267--


--===============3761712835489047176==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3761712835489047176==--



From xen-devel-bounces@lists.xen.org Mon Jan 06 09:35:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06ap-0004av-NJ; Mon, 06 Jan 2014 09:35:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W06ao-0004ap-SW
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 09:35:30 +0000
Received: from [85.158.139.211:35285] by server-7.bemta-5.messagelabs.com id
	D2/83-04824-2E87AC25; Mon, 06 Jan 2014 09:35:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389000928!7848521!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6420 invoked from network); 6 Jan 2014 09:35:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:35:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="87820970"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 09:35:23 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 04:35:27 -0500
Message-ID: <52CA78DE.9060502@citrix.com>
Date: Mon, 6 Jan 2014 10:35:26 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <freebsd-xen@freebsd.org>,
	<freebsd-current@freebsd.org>, <xen-devel@lists.xen.org>,
	<gibbs@freebsd.org>, <jhb@freebsd.org>, <kib@freebsd.org>,
	<julien.grall@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org>
In-Reply-To: <52C9D4CA.6070403@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/01/14 22:55, Julien Grall wrote:
> 
> 
> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>> Introduce a Xen-specific nexus that is going to be in charge of
>> attaching Xen-specific devices.
> 
> Now that we have a xenpv bus, do we really need a specific nexus for Xen?
> We should be able to use the identify callback of xenpv to create the bus.
> 
> The other part of this patch can be merged into patch #14 "Introduce
> xenpv bus and a dummy pvcpu device".

On x86 at least we need the Xen-specific nexus, or we will fall back to
using the legacy nexus, which is not what we want.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:36:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06ba-0004gp-PA; Mon, 06 Jan 2014 09:36:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W06bZ-0004gb-Jn
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 09:36:17 +0000
Received: from [85.158.139.211:64622] by server-7.bemta-5.messagelabs.com id
	4C/C4-04824-0197AC25; Mon, 06 Jan 2014 09:36:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389000973!7821711!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28202 invoked from network); 6 Jan 2014 09:36:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="89999635"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 09:36:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 6 Jan 2014 04:36:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W06bT-0007Lk-E2;
	Mon, 06 Jan 2014 09:36:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W06bT-0000EF-6e;
	Mon, 06 Jan 2014 09:36:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24250-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 6 Jan 2014 09:36:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24250 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24146
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 23938
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24146

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:41:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06g7-0005Jr-N5; Mon, 06 Jan 2014 09:40:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W06g6-0005Jj-1z
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 09:40:58 +0000
Received: from [193.109.254.147:58773] by server-14.bemta-14.messagelabs.com
	id E5/F2-12628-92A7AC25; Mon, 06 Jan 2014 09:40:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389001254!9035991!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25477 invoked from network); 6 Jan 2014 09:40:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:40:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="87821880"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 09:40:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	04:40:54 -0500
Message-ID: <1389001252.13274.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 6 Jan 2014 09:40:52 +0000
In-Reply-To: <19FD20D9-E6AD-42BC-B1F6-498F8E3A7292@gmail.com>
References: <19FD20D9-E6AD-42BC-B1F6-498F8E3A7292@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] gnttab.c moved to grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2013-12-26 at 17:40 +0400, Igor Kozhukhov wrote:
> I have found that in the xen-4.x implementation on Linux the file
> gnttab.c has been moved to grant-table.c

There is not really any such thing as the "xen-4.2 implementation on
Linux". Xen and the dom0 kernel (be it Linux or otherwise) are for the
most part completely independent with independent release time lines.

Xen used to ship some kernels in its tree, but we stopped doing so some
time ago when Xen support was added to mainline (in fact, IIRC, it was
actually before then that we stopped).

> also - about grant-table.c - is it available only in the Linux kernel
> tree or do you have an additional repo for other platforms?

Are you talking about drivers/xen/grant-table.c in the Linux source? It
is only in the Linux tree AFAIK, and is specific to Linux, although it
is licensed to be able to serve as a reference if you need one. There is
also a simple PV guest kernel in mini-os which has BSD licensed versions
of various bits of useful infrastructure which can likewise serve as a
reference implementation.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:47:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06mG-0005Zg-P3; Mon, 06 Jan 2014 09:47:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W06mF-0005Za-Vc
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 09:47:20 +0000
Received: from [85.158.143.35:40232] by server-2.bemta-4.messagelabs.com id
	4C/AC-11386-7AB7AC25; Mon, 06 Jan 2014 09:47:19 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389001637!9795549!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29427 invoked from network); 6 Jan 2014 09:47:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:47:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="87823046"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 09:46:52 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 04:46:56 -0500
Message-ID: <52CA7B8F.9060402@citrix.com>
Date: Mon, 6 Jan 2014 10:46:55 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <freebsd-xen@freebsd.org>,
	<freebsd-current@freebsd.org>, <xen-devel@lists.xen.org>,
	<gibbs@freebsd.org>, <jhb@freebsd.org>, <kib@freebsd.org>,
	<julien.grall@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-15-git-send-email-roger.pau@citrix.com>
	<52C9D432.3040409@linaro.org>
In-Reply-To: <52C9D432.3040409@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/01/14 22:52, Julien Grall wrote:
> 
> 
> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>> Since Xen PVH guests don't have ACPI, we need to create a dummy
>> bus so top level Xen devices can attach to it (instead of
>> attaching directly to the nexus) and a pvcpu device that will be used
>> to fill the pcpu->pc_device field.
>> ---
>>   sys/conf/files.amd64 |    1 +
>>   sys/conf/files.i386  |    1 +
>>   sys/x86/xen/xenpv.c  |  155
>> ++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> I think it makes more sense to have 2 files: one for xenpv bus and one
> for a dummy pvcpu device. It would allow us to move xenpv bus to common
> code (sys/xen or sys/dev/xen).

Ack. I hadn't considered that other arches will probably use the xenpv
bus but not the dummy CPU device. Would you agree to leave the xenpv bus
inside x86/xen for now and move the dummy PV CPU device to dev/xen/pvcpu/?

> 
> [..]
> 
>> +
>> +static int
>> +xenpv_probe(device_t dev)
>> +{
>> +
>> +    device_set_desc(dev, "Xen PV bus");
>> +    device_quiet(dev);
>> +    return (0);
> 
> As I understand it, returning 0 means "I can handle the current
> device", so when a device that doesn't yet have a driver is probed,
> xenpv will claim it and we will end up with 2 (or even more) xenpv buses.
>
> As we only want to probe the xenpv bus once, when the bus was added
> manually, returning BUS_PROBE_NOWILDCARD would suit better.
> 
> [..]
> 
>> +static int
>> +xenpvcpu_probe(device_t dev)
>> +{
>> +
>> +    device_set_desc(dev, "Xen PV CPU");
>> +    return (0);
> 
> Same here: BUS_PROBE_NOWILDCARD.

Ack for both, I will change them to BUS_PROBE_NOWILDCARD. While at it, we
should also change the xenstore probe function to return BUS_PROBE_NOWILDCARD.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:48:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06nX-0005ol-MK; Mon, 06 Jan 2014 09:48:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W06nV-0005oW-Vi
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 09:48:38 +0000
Received: from [85.158.139.211:5243] by server-3.bemta-5.messagelabs.com id
	6C/17-04773-4FB7AC25; Mon, 06 Jan 2014 09:48:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389001714!8039787!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10844 invoked from network); 6 Jan 2014 09:48:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:48:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="90001889"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 09:48:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	04:48:33 -0500
Message-ID: <1389001712.13274.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: =?UTF-8?Q?=E4=BD=99=E5=8A=B2?= <vigorousfish@gmail.com>
Date: Mon, 6 Jan 2014 09:48:32 +0000
In-Reply-To: <CA+RiofjFjkieeD0ZbPudwpK5omQY8=qAc2UCbnP9MbMZcNEMRQ@mail.gmail.com>
References: <CA+RiofjFjkieeD0ZbPudwpK5omQY8=qAc2UCbnP9MbMZcNEMRQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] git server maybe faild
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2013-12-31 at 11:00 +0800, 余劲 wrote:
> Hi all, I can't build Xen these days. After I run "make tools" it
> always stops on the "Cloning into 'seabios-dir-remote.tmp'..." step,
> and then prints "fatal: read error: Connection reset by peer".
> 
> The Xen git server may have been down for several days. Who can bring
> it back up?

There was an outage for a few hours but I believe it is fixed now.

Thanks.
Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:54:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06tC-0006KD-HF; Mon, 06 Jan 2014 09:54:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W06tA-0006K4-Jl
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 09:54:28 +0000
Received: from [85.158.143.35:34862] by server-3.bemta-4.messagelabs.com id
	51/E4-32360-35D7AC25; Mon, 06 Jan 2014 09:54:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389002065!9840233!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26776 invoked from network); 6 Jan 2014 09:54:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:54:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="90002673"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 09:54:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	04:54:24 -0500
Message-ID: <1389002063.13274.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Wu, Feng" <feng.wu@intel.com>
Date: Mon, 6 Jan 2014 09:54:23 +0000
In-Reply-To: <E959C4978C3B6342920538CF579893F001D35359@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<1387378646.28680.48.camel@kazak.uk.xensource.com>
	<20131218152235.GG4934@phenom.dumpdata.com>
	<1387383185.28680.60.camel@kazak.uk.xensource.com>
	<E959C4978C3B6342920538CF579893F001D35359@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2013-12-22 at 11:25 +0000, Wu, Feng wrote:
> 
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> > Sent: Thursday, December 19, 2013 12:13 AM
> > To: Konrad Rzeszutek Wilk
> > Cc: Anthony PERARD; xen-devel@lists.xenproject.org;
> > stefano.stabellini@citrix.com
> > Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
> > 
> > On Wed, 2013-12-18 at 10:22 -0500, Konrad Rzeszutek Wilk wrote:
> > > load_roms and bios_load are not set - so it wouldn't even do it.
> > > It only does it for Bochs BIOS.
> > 
> > Right, this is deliberate.
> > 
> > For ROMBIOS (AKA Bochs BIOS) hvmloader loads the option ROMs, and I
> > think ROMBIOS subsequently executes them.
> > 
> > For SeaBIOS it is the BIOS itself which both loads and executes the
> > ROMs, which is why it is NULL in hvmloader.
> > 
> > The SeaBIOS way is far more like how systems normally work and because
> > the BIOS is in charge it can do a better job than splitting it between
> > two entities.
> > 
> 
> I am also interested in this thread. Do you know why ROMBIOS doesn't handle
> option ROMs the same way as SeaBIOS?

ROMBIOS was always this way and nobody has updated it to do things
properly. It's unclear whether it is worth the effort.

IMHO any effort expended would be better spent bringing qemu-upstream
+seabios up to scratch rather than spending time fixing qemu-traditional
+ROMBIOS for new use cases. Although qemu-traditional+ROMBIOS is not
going to be removed (in order to retain compatibility with existing
configurations) it is also not taking new features etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 09:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 09:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06xA-0006fu-8u; Mon, 06 Jan 2014 09:58:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W06x9-0006fm-2d
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 09:58:35 +0000
Received: from [193.109.254.147:17346] by server-7.bemta-14.messagelabs.com id
	69/E8-15500-A4E7AC25; Mon, 06 Jan 2014 09:58:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389002312!9009662!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1094 invoked from network); 6 Jan 2014 09:58:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 09:58:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="90003435"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 09:58:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	04:58:31 -0500
Message-ID: <1389002310.13274.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ben Hutchings <ben@decadent.org.uk>
Date: Mon, 6 Jan 2014 09:58:30 +0000
In-Reply-To: <1388519187.2900.99.camel@deadeye.wl.decadent.org.uk>
References: <1388519187.2900.99.camel@deadeye.wl.decadent.org.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/pci: Fix build on non-x86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2013-12-31 at 20:46 +0100, Ben Hutchings wrote:
> We can't include <asm/pci_x86.h> if this isn't x86, and we only need
> it if CONFIG_PCI_MMCONFIG is enabled.
> 
> Fixes: 8deb3eb1461e ('xen/mcfg: Call PHYSDEVOP_pci_mmcfg_reserved for MCFG areas.')
> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:00:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W06zL-0006zI-SY; Mon, 06 Jan 2014 10:00:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W06zK-0006zB-Nl
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 10:00:50 +0000
Received: from [85.158.143.35:30016] by server-1.bemta-4.messagelabs.com id
	F6/D5-02132-2DE7AC25; Mon, 06 Jan 2014 10:00:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389002448!9839656!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13391 invoked from network); 6 Jan 2014 10:00:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:00:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,611,1384300800"; d="scan'208";a="87825660"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:00:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:00:47 -0500
Message-ID: <1389002445.13274.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 6 Jan 2014 10:00:45 +0000
In-Reply-To: <52C56F43.6030804@citrix.com>
References: <1387465429-3568-2-git-send-email-levex@linux.com>
	<1387465429-3568-27-git-send-email-levex@linux.com>
	<52C56F43.6030804@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Levente Kurusa <levex@linux.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>, xen-devel@lists.xenproject.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 26/38] xen: xenbus: add missing put_device
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-02 at 13:53 +0000, David Vrabel wrote:
> On 19/12/13 15:03, Levente Kurusa wrote:
> > This is required so that we give up the last reference to the device.
> > 
> > Signed-off-by: Levente Kurusa <levex@linux.com>
> > ---
> >  drivers/xen/xenbus/xenbus_probe.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> > index 3c0a74b..4abb9ee 100644
> > --- a/drivers/xen/xenbus/xenbus_probe.c
> > +++ b/drivers/xen/xenbus/xenbus_probe.c
> > @@ -465,8 +465,10 @@ int xenbus_probe_node(struct xen_bus_type *bus,
> >  
> >  	/* Register with generic device framework. */
> >  	err = device_register(&xendev->dev);
> > -	if (err)
> > +	if (err) {
> > +		put_device(&xendev->dev);
> >  		goto fail;
> > +	}
> >  
> >  	return 0;
> >  fail:
> 
> There is a kfree(xendev) here so this introduces a double-free.

How? put_device doesn't touch xendev itself, does it? It just drops the
ref on the dev member which is not separately dynamically allocated and
so not freed either.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:05:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W073s-0007DB-NA; Mon, 06 Jan 2014 10:05:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W073r-0007D5-Hh
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:05:31 +0000
Received: from [85.158.137.68:39663] by server-10.bemta-3.messagelabs.com id
	34/B3-23989-AEF7AC25; Mon, 06 Jan 2014 10:05:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389002728!6640539!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15790 invoked from network); 6 Jan 2014 10:05:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:05:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90004608"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 10:05:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:05:11 -0500
Message-ID: <1389002710.13274.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 6 Jan 2014 10:05:10 +0000
In-Reply-To: <52C5815C.5020908@citrix.com>
References: <1387387710.28680.88.camel@kazak.uk.xensource.com>
	<52B1EC43.7030408@cantab.net>
	<1387447808.9925.24.camel@kazak.uk.xensource.com>
	<52C5815C.5020908@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien
	Grall <julien.grall@citrix.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, xen-devel <xen-devel@lists.xen.org>,
	David Vrabel <dvrabel@cantab.net>, Stefano.Stabellini@citrix.com, Boris
	Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC] xen: arm: use uncached foreign mappings
 when building guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-02 at 15:10 +0000, David Vrabel wrote:
> > We basically only support guests running without caches at start of day.
> > We expect and require them to come up and enable caches and then keep
> > them enabled. Fortunately this is normal practice...
> 
> There is still a (small) window where a guest is running with caches
> disabled and it may be saved/migrated at this point.  Is this a concern?

I don't think so; if nothing else, while the guest has caches disabled it
is so early in boot that it is almost certainly not watching the xenbus
control node.

I suppose we might theoretically start a live migration right after boot
and iterate until caches were enabled, but with stale pages in the save
image (those which didn't change since boot).

There's a large dollop of "don't do that then" but probably the
migration code should check that caches are enabled as one of the first
things it does.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:10:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W078P-0007cO-Gd; Mon, 06 Jan 2014 10:10:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W078N-0007cH-OZ
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 10:10:11 +0000
Received: from [193.109.254.147:9630] by server-11.bemta-14.messagelabs.com id
	5E/CF-20576-3018AC25; Mon, 06 Jan 2014 10:10:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389003009!9048764!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29059 invoked from network); 6 Jan 2014 10:10:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:10:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87827521"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:10:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:10:08 -0500
Message-ID: <1389003007.13274.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 6 Jan 2014 10:10:07 +0000
In-Reply-To: <alpine.DEB.2.02.1401031248270.8667@kaball.uk.xensource.com>
References: <6035A0D088A63A46850C3988ED045A4B66BF85B4@BITCOM1.int.sbss.com.au>
	<CABMPFzjbhsNeAEKo06aNp=AVtToRHgGGzhmgxB1MWGeQDcja4A@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8E58@BITCOM1.int.sbss.com.au>
	<CABMPFzjM+nJ6M9roU2BT9H4NXSLFqyq9cs8smFrsHae5WdhiCA@mail.gmail.com>
	<6035A0D088A63A46850C3988ED045A4B66BF8F13@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1312081501080.7093@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B66CD9CFA@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1401031207580.8667@kaball.uk.xensource.com>
	<6035A0D088A63A46850C3988ED045A4B6F354DDE@BITCOM1.int.sbss.com.au>
	<alpine.DEB.2.02.1401031248270.8667@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Fabio Fantoni <fabio.fantoni@m2r.biz>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	James Harper <james.harper@bendigoit.com.au>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] tapdisk in debian 3.11 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-03 at 12:51 +0000, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, James Harper wrote:
> > > > and get rbd with ceph under hvm. Probably pv too but I can't seem to get
> > > > an 'xl console' working to check right now.
> > > 
> > > Nice! Please send the libxl patches to the list!
> > > 
> > > The reason why libxl has a specific list of supported formats is that we
> > > thought that a simple wildcard and external bash scripts without a clear
> > > calling convention were too fragile. Of course if RBD "just works", we
> > > should enable it in libxl.
> 
> Sorry, I realize now that the word "format" in my reply is wrong.  I
> should have written "targets", like iscsi and nfs, each of them can
> support multiple image formats like raw and qcow2.
> 
> 
> > It turns out that simply omitting the format altogether is sufficient, and probably preferable. Something like:
> > 
> > disk = [ 'rbd:prod/virt-zoneminder-root,,xvda1,rw,backendtype=qdisk' ]
[...]
> disk = [ 'rbd:prod/virt-zoneminder-root,raw,xvda1,rw,backendtype=qdisk' ]

raw is the default so these two are equivalent.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:10:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0794-0007fJ-U8; Mon, 06 Jan 2014 10:10:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0793-0007fC-Pg
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:10:53 +0000
Received: from [85.158.139.211:43149] by server-11.bemta-5.messagelabs.com id
	6E/04-23268-C218AC25; Mon, 06 Jan 2014 10:10:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389003050!8005215!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8279 invoked from network); 6 Jan 2014 10:10:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:10:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87827639"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:10:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:10:49 -0500
Message-ID: <1389003048.13274.28.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 6 Jan 2014 10:10:48 +0000
In-Reply-To: <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>
References: <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, Wei Liu <liuw@liuw.name>,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] asm/xen/page.h: remove redundant semicolon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-03 at 14:03 +0000, Wei Liu wrote:
> From: Wei Liu <liuw@liuw.name>
> 
> Signed-off-by: Wei Liu <liuw@liuw.name>

Acked-by: Ian Campbell <ian.campbell@citrix.com>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:11:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W079r-0007lu-Cq; Mon, 06 Jan 2014 10:11:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W079q-0007li-0V
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 10:11:42 +0000
Received: from [85.158.137.68:51353] by server-8.bemta-3.messagelabs.com id
	6A/22-31081-D518AC25; Mon, 06 Jan 2014 10:11:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389003099!6259804!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24248 invoked from network); 6 Jan 2014 10:11:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:11:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87827940"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:11:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:11:38 -0500
Message-ID: <1389003096.13274.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Mon, 6 Jan 2014 10:11:36 +0000
In-Reply-To: <1388758643-3897-1-git-send-email-baozich@gmail.com>
References: <1388758643-3897-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, liuw@liuw.name,
	stefano.stabellini@eu.citrix.com,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com
Subject: Re: [Xen-devel] [PATCH] arm/xen: remove redundant semicolon in
 definition of ioremap_cached
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-03 at 22:17 +0800, Chen Baozi wrote:
> Reported-by: Wei Liu <liuw@liuw.name>
> Signed-off-by: Chen Baozi <baozich@gmail.com>

Heh, Wei beat you to the patch by 14 minutes ;-)

See <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:18:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07GW-00086M-Cp; Mon, 06 Jan 2014 10:18:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07GV-00086H-EB
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:18:35 +0000
Received: from [85.158.139.211:24195] by server-12.bemta-5.messagelabs.com id
	62/5C-30017-AF28AC25; Mon, 06 Jan 2014 10:18:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389003510!8047171!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20787 invoked from network); 6 Jan 2014 10:18:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:18:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87829485"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:18:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:18:29 -0500
Message-ID: <1389003509.13274.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 6 Jan 2014 10:18:29 +0000
In-Reply-To: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
References: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-03 at 22:38 +0400, Igor Kozhukhov wrote:
> Hello All,
> 
> I'm working on port xen-4.2 to illumos based platform - DilOS - and have problems with conflicts for:
> 
> xen/include/public/trace.h :
> 
> struct t_buf {}
> struct t_info {}

What do these conflict with? Is it user or kernel space? Please can you
provide the full build error for context.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:39:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:39:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07aG-0001Hh-J0; Mon, 06 Jan 2014 10:39:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07aE-0001Ha-Ln
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:38:58 +0000
Received: from [85.158.137.68:54759] by server-14.bemta-3.messagelabs.com id
	81/8A-06105-1C78AC25; Mon, 06 Jan 2014 10:38:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389004735!7394808!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11781 invoked from network); 6 Jan 2014 10:38:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:38:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87833902"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:38:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:38:54 -0500
Message-ID: <1389004733.13274.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 6 Jan 2014 10:38:53 +0000
In-Reply-To: <52B70563.6090907@citrix.com>
References: <alpine.LRH.2.03.1312211127170.8855@loki.cs.umass.edu>
	<52B70563.6090907@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Prateek Sharma <prateeks@cs.umass.edu>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM checkpoint image format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2013-12-22 at 15:29 +0000, Andrew Cooper wrote:
> On 21/12/2013 16:31, Prateek Sharma wrote:
> > Hello all,
> >     Is the VM-save image format specified anywhere? (the file written
> > to by xc_domain_save). I am looking for what fields are saved in what
> > order.
> >     Or is the xc_domain_save function itself the specification? Can I
> > change the order of fields saved (thus changing the on-disk image
> > format) and expect no breakages? Of course, I will need to change the
> > xc_domain_restore function as well.
> >
> > Thanks!
> > --Prateek
> 
> There is no documentation that I am aware of, other than the code
> itself,

There is a big comment in tools/libxc/xg_save_restore.h which describes
the migration protocol. The save format is basically just a single pass
of that.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:40:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07bS-0001T2-6s; Mon, 06 Jan 2014 10:40:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07bR-0001Sw-H6
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:40:13 +0000
Received: from [85.158.137.68:20123] by server-15.bemta-3.messagelabs.com id
	A4/EB-11556-C088AC25; Mon, 06 Jan 2014 10:40:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389004809!6270668!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11160 invoked from network); 6 Jan 2014 10:40:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:40:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87834226"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:40:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:40:08 -0500
Message-ID: <1389004807.13274.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Prateek Sharma <prateeks@cs.umass.edu>
Date: Mon, 6 Jan 2014 10:40:07 +0000
In-Reply-To: <alpine.LRH.2.03.1312221621020.7811@loki.cs.umass.edu>
References: <alpine.LRH.2.03.1312211127170.8855@loki.cs.umass.edu>
	<52B70563.6090907@citrix.com>
	<alpine.LRH.2.03.1312221621020.7811@loki.cs.umass.edu>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] VM checkpoint image format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2013-12-22 at 16:45 -0500, Prateek Sharma wrote:
> Is there a way to access (and hence save) all the page-table pages of a 
> guest?
> 

The code in xc_domain_save.c:canonicalize_pagetable is doing something
along these lines.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:40:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07bi-0001Vb-Jv; Mon, 06 Jan 2014 10:40:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1W07bh-0001VD-2G
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:40:29 +0000
Received: from [193.109.254.147:21236] by server-8.bemta-14.messagelabs.com id
	38/29-30921-C188AC25; Mon, 06 Jan 2014 10:40:28 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389004827!9023351!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20872 invoked from network); 6 Jan 2014 10:40:27 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:40:27 -0000
Received: by mail-we0-f181.google.com with SMTP id x55so15429548wes.40
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 02:40:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=l1rH5tJlTp8YzVvId+jaCOyXQpL4kwNyHOzxjNef1pA=;
	b=cyc1etqpilQyS1CXfcRFRCAivGNtnI7+Rw+Qbpo98R3viPJ5q2K2xiIshqhmHvm6B2
	cDaUo+DlDSkuuek2uXXeJTCRzJqM/gNBSnP/oh94mrbyqHqd6kLL6aPOSjUHd47HUz0J
	ZFFdKvhsOmqZkdjJSoT868/qhYR/LdtLpkjsbZfsyuMCYqJx4RwyfKWGkSKsFd4CLWqU
	46tUZeXNyT4zWn9lIsP8j52nNqv/pOJyaw9NBagHQ2t9phwgT06LNZ6Ekbjv8348zQNV
	0AEsLQiaB0Z4DTYIbH6r4rctlWrOAkjTy4vFSIfsV++p2LomDM+qmrbi4/1FblC+2FAn
	bhvA==
X-Received: by 10.194.118.225 with SMTP id kp1mr768313wjb.86.1389004827324;
	Mon, 06 Jan 2014 02:40:27 -0800 (PST)
Received: from [172.16.26.11] ([2.122.219.75])
	by mx.google.com with ESMTPSA id gd5sm17207123wic.0.2014.01.06.02.40.26
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 02:40:26 -0800 (PST)
Message-ID: <52CA8819.20205@xen.org>
Date: Mon, 06 Jan 2014 10:40:25 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [Urgent] Need Volunteers for Virt & Iaas DevRoom as
 well as manning the Xen Project booth @ FOSDEM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
I need volunteers to help out with FOSDEM. I was just informed that 
FOSDEM has managed to secure recording equipment for all devrooms 
including the Virt & IaaS Devroom (in which we have 4 Xen related 
talks). However the actual recording relies on volunteers (see 
https://lists.fosdem.org/pipermail/video/2014-January/000078.html). As I 
also have a Xen Project booth, I cannot do the recordings and the booth. 
If you are willing to help at the booth, or to be in the DevRoom for 
half a day, please get in touch with me ASAP.
Best Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:52:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07na-0002NB-4A; Mon, 06 Jan 2014 10:52:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W07nZ-0002N6-2y
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 10:52:45 +0000
Received: from [193.109.254.147:6164] by server-3.bemta-14.messagelabs.com id
	B8/20-11000-CFA8AC25; Mon, 06 Jan 2014 10:52:44 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389005562!9059936!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31342 invoked from network); 6 Jan 2014 10:52:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:52:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87836532"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:52:35 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:52:41 -0500
Message-ID: <52CA8AF7.9040305@citrix.com>
Date: Mon, 6 Jan 2014 10:52:39 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	hpa@zytor.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 11/19] xen/pvh: Secondary VCPU bringup
 (non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 19:38, Konrad Rzeszutek Wilk wrote:
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> The VCPU bringup protocol follows the PV with certain twists.
> From xen/include/public/arch-x86/xen.h:
> 
> Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
> for HVM and PVH guests, not all information in this structure is updated:
> 
>  - For HVM guests, the structures read include: fpu_ctxt (if
>  VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
> 
>  - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
>  set cr3. All other fields not used should be set to 0.
> 
> This is what we do. We piggyback on the 'xen_setup_gdt' - but modify
> a bit - we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
> can load per-cpu data-structures. It has no effect on the VCPU0.
> 
> We also piggyback on the %rdi register to pass in the CPU number - so
> that when we bootup a new CPU, the cpu_bringup_and_idle will have
> passed as the first parameter the CPU number (via %rdi for 64-bit).
[...]
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1409,14 +1409,19 @@ static void __init xen_boot_params_init_edd(void)
>   * Set up the GDT and segment registers for -fstack-protector.  Until
>   * we do this, we have to be careful not to call any stack-protected
>   * function, which is most of the kernel.
> + *
> + * Note, that it is refok - because the only caller of this after init

"Note, this is __ref because..."

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:53:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07nt-0002Oh-IA; Mon, 06 Jan 2014 10:53:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07nr-0002OS-CV
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 10:53:03 +0000
Received: from [85.158.137.68:43005] by server-11.bemta-3.messagelabs.com id
	2D/1A-19379-E0B8AC25; Mon, 06 Jan 2014 10:53:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389005580!7424991!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11548 invoked from network); 6 Jan 2014 10:53:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:53:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90014154"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 10:53:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:52:59 -0500
Message-ID: <1389005578.13274.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Mon, 6 Jan 2014 10:52:58 +0000
In-Reply-To: <CAOTdubs18-bVYU=EB408JnpJD-Y=DDFQ_-6g7u1vb_dkoX=3ZA@mail.gmail.com>
References: <CAOTdubuJcxdiR-B6X5sgdAfaDEMa7VFKG3Kc8oMhbfEE_zUU+Q@mail.gmail.com>
	<20131223103852.GC5594@type.youpi.perso.aquilenet.fr>
	<CAOTdubs18-bVYU=EB408JnpJD-Y=DDFQ_-6g7u1vb_dkoX=3ZA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Enable virtual memory for Mini-os on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2013-12-23 at 12:52 +0000, karim.allah.ahmed@gmail.com wrote:
> So, Is there a reason for doing map/unmap on virtual address space not
> on physical one ? 

The only thing which springs to mind is if you needed to control the
cacheability attributes of the mappings (see [0] for the requirements).
However I think these are met by just enabling the SCTLR.C bit, SCTLR.M
isn't needed.

Things like foreign page mappings (e.g. via grant tables) and ballooning
happen via stage 2 paging, so I don't think the stage 1 MMU needs to be
enabled for those.

I think it would be reasonable to omit page table setup from arm mini-os
for now and reconsider it if/when we discover a need for it.

Ian.

[0]
http://xenbits.xen.org/docs/unstable/hypercall/arm/include,public,arch-arm.h.html#incontents_arm_abi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 10:55:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 10:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07qO-0002cn-PK; Mon, 06 Jan 2014 10:55:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W07qN-0002cT-DK
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 10:55:39 +0000
Received: from [85.158.137.68:35693] by server-2.bemta-3.messagelabs.com id
	38/C0-17329-AAB8AC25; Mon, 06 Jan 2014 10:55:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389005736!6652170!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6189 invoked from network); 6 Jan 2014 10:55:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 10:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87836971"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 10:55:30 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	05:55:35 -0500
Message-ID: <52CA8BA6.5040305@citrix.com>
Date: Mon, 6 Jan 2014 10:55:34 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com, hpa@zytor.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v13] Linux Xen PVH support (v13)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 19:38, Konrad Rzeszutek Wilk wrote:
> The patches, also available at
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v13

A minor nit with a comment but consider the complete series:

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

I'm happy for this to go into 3.14.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:00:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07vD-0003RA-9J; Mon, 06 Jan 2014 11:00:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W07vB-0003Qv-Md
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:00:37 +0000
Received: from [85.158.143.35:8281] by server-1.bemta-4.messagelabs.com id
	14/3C-02132-5DC8AC25; Mon, 06 Jan 2014 11:00:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389006035!9854368!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30525 invoked from network); 6 Jan 2014 11:00:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:00:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87838192"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:00:28 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:00:34 -0500
Message-ID: <52CA8CD1.6080003@citrix.com>
Date: Mon, 6 Jan 2014 11:00:33 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1387465429-3568-2-git-send-email-levex@linux.com>	
	<1387465429-3568-27-git-send-email-levex@linux.com>	
	<52C56F43.6030804@citrix.com>
	<1389002445.13274.23.camel@kazak.uk.xensource.com>
In-Reply-To: <1389002445.13274.23.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Levente Kurusa <levex@linux.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>, xen-devel@lists.xenproject.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 26/38] xen: xenbus: add missing put_device
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 10:00, Ian Campbell wrote:
> On Thu, 2014-01-02 at 13:53 +0000, David Vrabel wrote:
>> On 19/12/13 15:03, Levente Kurusa wrote:
>>> This is required so that we give up the last reference to the device.
>>>
>>> Signed-off-by: Levente Kurusa <levex@linux.com>
>>> ---
>>>  drivers/xen/xenbus/xenbus_probe.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
>>> index 3c0a74b..4abb9ee 100644
>>> --- a/drivers/xen/xenbus/xenbus_probe.c
>>> +++ b/drivers/xen/xenbus/xenbus_probe.c
>>> @@ -465,8 +465,10 @@ int xenbus_probe_node(struct xen_bus_type *bus,
>>>  
>>>  	/* Register with generic device framework. */
>>>  	err = device_register(&xendev->dev);
>>> -	if (err)
>>> +	if (err) {
>>> +		put_device(&xendev->dev);
>>>  		goto fail;
>>> +	}
>>>  
>>>  	return 0;
>>>  fail:
>>
>> There is a kfree(xendev) here so this introduces a double-free.
> 
> How? put_device doesn't touch xendev itself, does it? It just drops the
> ref on the dev member which is not separately dynamically allocated and
> so not freed either.

Releasing the last reference to the struct device frees the containing
structure via xendev->dev.release, i.e. xenbus_dev_release(), which does
kfree(xendev).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:01:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07wI-0003Y2-Sm; Mon, 06 Jan 2014 11:01:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07wG-0003Xm-VD
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 11:01:45 +0000
Received: from [193.109.254.147:24782] by server-3.bemta-14.messagelabs.com id
	CD/DC-11000-81D8AC25; Mon, 06 Jan 2014 11:01:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389006102!5545287!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9436 invoked from network); 6 Jan 2014 11:01:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:01:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87838451"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:01:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:01:41 -0500
Message-ID: <1389006100.13274.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 6 Jan 2014 11:01:40 +0000
In-Reply-To: <20140102162735.5a3038c0@mantra.us.oracle.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<52B97E95.9060900@cantab.net> <52B98447.9080404@citrix.com>
	<52B98686.9060009@cantab.net>
	<20131230195648.GA2937@phenom.dumpdata.com>
	<20140102162735.5a3038c0@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <dvrabel@cantab.net>, jbeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-02 at 16:27 -0800, Mukesh Rathor wrote:
> On Mon, 30 Dec 2013 14:56:48 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Tue, Dec 24, 2013 at 01:05:10PM +0000, David Vrabel wrote:
> ......
> > > > 
> > > > The elf parsing should accept unrecognised fields for forward
> > > > compatibility, which would then allow a PV & PVH compiled kernel
> > > > to run in PV mode.
> > > 
> > > It should but it doesn't, so a different way needs to be found for
> > > the kernel to report (optional) PVH support.  A method that is
> > > compatible with older toolstacks.
> > 
> > Also known as changes to the PVH ABI.
> > 
> > Mukesh, Roger, George (emailing Ian instead since he is now the
> > Release Manager-pro-temp), Jan,
> > 
> > a).  That means dropping the 'hvm_callback_vector' check from
> > xc_dom_core.c and just depending on:
> > "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
> > for PVH guests.
> 
> I'm not sure what the state of auto xlated with shadow and Ian's work
> for supervisor mode kernel is. If they are obsolete, then we can drop
> the hvm_callback_vector check.

supervisor mode kernel is a decade old stunt, I don't think it has any
relevance any more -- or at least I've no intention to work on it any
further and no objections to it being removed (I must confess I thought
it already had been...).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:03:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07xa-0003hN-DX; Mon, 06 Jan 2014 11:03:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07xY-0003h8-V9
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:03:05 +0000
Received: from [193.109.254.147:29204] by server-16.bemta-14.messagelabs.com
	id 47/45-20600-86D8AC25; Mon, 06 Jan 2014 11:03:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389006182!5545706!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23879 invoked from network); 6 Jan 2014 11:03:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:03:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90016120"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 11:03:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:03:01 -0500
Message-ID: <1389006180.13274.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Mon, 6 Jan 2014 11:03:00 +0000
In-Reply-To: <CAOTdubu-u89F0emaRTmW85=S+Nqx8YeUukjxnivgfAqxxjwynQ@mail.gmail.com>
References: <CAOTdubu-u89F0emaRTmW85=S+Nqx8YeUukjxnivgfAqxxjwynQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Dario Faggioli <dario.faggioli@citrix.com>, baozich@gmail.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Mini-os port to ARM public repo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 00:53 +0000, karim.allah.ahmed@gmail.com wrote:
> Hi,
> 
> 
> I've mentioned earlier [0] that I was working on porting mini-os to
> arm and Dario suggested making the sources public even if it was still
> in early stages (and extremely experimental).
> 
> 
> So, here we go. I just published the sources on github
> https://github.com/KarimAllah/xen/tree/minios-arm-port

Nice one!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:05:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W07ze-0003tZ-07; Mon, 06 Jan 2014 11:05:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W07zV-0003tD-L2
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:05:12 +0000
Received: from [193.109.254.147:5851] by server-3.bemta-14.messagelabs.com id
	39/F1-11000-0ED8AC25; Mon, 06 Jan 2014 11:05:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389006300!5546357!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19041 invoked from network); 6 Jan 2014 11:05:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:05:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90016482"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 11:05:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:04:59 -0500
Message-ID: <1389006298.13274.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Mon, 6 Jan 2014 11:04:58 +0000
In-Reply-To: <ED51122C-F7F8-4688-8038-DE288C04A098@gmail.com>
References: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
	<ED51122C-F7F8-4688-8038-DE288C04A098@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Trigger kernel bug when creating domU on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 19:50 +0800, Chen Baozi wrote:
> On Jan 5, 2014, at 16:21, Chen Baozi <baozich@gmail.com> wrote:
> 
> > Hi all,
> > 
> > Recently, I’ve been trying to continue my work of mini-os on arm64.  After some
> > early-day hacks in summer, there is a basic framework that could pass the build.
> > If everything goes well, it is supposed to be at least able to print
> > “Bootstrapping...” info in start_kernel. However, it seems there still something
> > wrong when create domain by xl.
> 
> It seems to be something wrong with my dom0 privcmd driver. After I add some printk
> to debug and rebuild the dom0 kernel, it seems to be all right (even I remove
> those printk for debugging)...

This should be fixed by:
        commit a7892f32cc3534d4cc0e64b245fbf47a8e364652
        Author: Ian Campbell <ian.campbell@citrix.com>
        Date:   Wed Dec 11 17:02:27 2013 +0000

            arm: xen: foreign mapping PTEs are special.

which is now in Linus' tree.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:10:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W084L-0004NI-5f; Mon, 06 Jan 2014 11:10:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W084J-0004NC-Rt
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:10:04 +0000
Received: from [193.109.254.147:33326] by server-14.bemta-14.messagelabs.com
	id 63/36-12628-B0F8AC25; Mon, 06 Jan 2014 11:10:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389006601!9047764!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15195 invoked from network); 6 Jan 2014 11:10:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:10:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90017803"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 11:10:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:09:59 -0500
Message-ID: <1389006598.13274.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 6 Jan 2014 11:09:58 +0000
In-Reply-To: <52CA8CD1.6080003@citrix.com>
References: <1387465429-3568-2-git-send-email-levex@linux.com>
	<1387465429-3568-27-git-send-email-levex@linux.com>
	<52C56F43.6030804@citrix.com>
	<1389002445.13274.23.camel@kazak.uk.xensource.com>
	<52CA8CD1.6080003@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Levente Kurusa <levex@linux.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>, xen-devel@lists.xenproject.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 26/38] xen: xenbus: add missing put_device
 call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:00 +0000, David Vrabel wrote:
> On 06/01/14 10:00, Ian Campbell wrote:
> > On Thu, 2014-01-02 at 13:53 +0000, David Vrabel wrote:
> >> On 19/12/13 15:03, Levente Kurusa wrote:
> >>> This is required so that we give up the last reference to the device.
> >>>
> >>> Signed-off-by: Levente Kurusa <levex@linux.com>
> >>> ---
> >>>  drivers/xen/xenbus/xenbus_probe.c | 4 +++-
> >>>  1 file changed, 3 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
> >>> index 3c0a74b..4abb9ee 100644
> >>> --- a/drivers/xen/xenbus/xenbus_probe.c
> >>> +++ b/drivers/xen/xenbus/xenbus_probe.c
> >>> @@ -465,8 +465,10 @@ int xenbus_probe_node(struct xen_bus_type *bus,
> >>>  
> >>>  	/* Register with generic device framework. */
> >>>  	err = device_register(&xendev->dev);
> >>> -	if (err)
> >>> +	if (err) {
> >>> +		put_device(&xendev->dev);
> >>>  		goto fail;
> >>> +	}
> >>>  
> >>>  	return 0;
> >>>  fail:
> >>
> >> There is a kfree(xendev) here so this introduces a double-free.
> > 
> > How? put_device doesn't touch xendev itself, does it? It just drops the
> > ref on the dev member, which is not separately dynamically allocated and
> > so is not freed either.
> 
> Releasing all references to the struct device frees the containing
> structure via xendev->dev.release, i.e. xenbus_dev_release(), which does
> kfree(xendev).

Ooh, twisty, well spotted ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:13:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W087q-0004cb-VH; Mon, 06 Jan 2014 11:13:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W087p-0004cU-Lt
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:13:41 +0000
Received: from [85.158.137.68:50837] by server-9.bemta-3.messagelabs.com id
	6E/91-13104-4EF8AC25; Mon, 06 Jan 2014 11:13:40 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389006818!6656735!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30206 invoked from network); 6 Jan 2014 11:13:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87841093"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:13:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:13:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W087l-0003tc-Ab;
	Mon, 06 Jan 2014 11:13:37 +0000
Date: Mon, 6 Jan 2014 11:12:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52C9BD75.6070207@linaro.org>
Message-ID: <alpine.DEB.2.02.1401061112120.8667@kaball.uk.xensource.com>
References: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
	<alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
	<52C9BD75.6070207@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
 command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 5 Jan 2014, Julien Grall wrote:
> On 01/05/2014 07:24 PM, Stefano Stabellini wrote:
> > On Sun, 5 Jan 2014, Karim Raslan wrote:
> > > Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> > 
> > Thanks!
> > Given that dom0_11_mapping is an int, the patch is OK.
> > However it might be better to turn dom0_11_mapping into a bool_t and use
> > boolean_param instead.
> 
> After reading the thread "Master not working on AllWinner A20", I still don't
> see why this patch is useful.
> 
> Unless the platform has an IOMMU, the 1:1 mapping of the memory SHOULD be
> enabled. Otherwise DMA-capable devices won't work.
> 
> See my answer to your mail (thread "Master not working on AllWinner A20")
> for a proper fix.

It is only useful for debugging and development. It should not be used
in production.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:15:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W089H-0004kL-FS; Mon, 06 Jan 2014 11:15:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W089G-0004kD-0u
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:15:10 +0000
Received: from [193.109.254.147:58053] by server-3.bemta-14.messagelabs.com id
	96/C0-11000-D309AC25; Mon, 06 Jan 2014 11:15:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389006907!5549019!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1471 invoked from network); 6 Jan 2014 11:15:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:15:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87841379"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:14:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:15:06 -0500
Message-ID: <1389006905.13274.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 6 Jan 2014 11:15:05 +0000
In-Reply-To: <alpine.DEB.2.02.1401061112120.8667@kaball.uk.xensource.com>
References: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
	<alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
	<52C9BD75.6070207@linaro.org>
	<alpine.DEB.2.02.1401061112120.8667@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
 command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:12 +0000, Stefano Stabellini wrote:
> On Sun, 5 Jan 2014, Julien Grall wrote:
> > On 01/05/2014 07:24 PM, Stefano Stabellini wrote:
> > > On Sun, 5 Jan 2014, Karim Raslan wrote:
> > > > Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> > > 
> > > Thanks!
> > > Given that dom0_11_mapping is an int, the patch is OK.
> > > However it might be better to turn dom0_11_mapping into a bool_t and use
> > > boolean_param instead.
> > 
> > After reading the thread "Master not working on AllWinner A20", I still don't
> > see why this patch is useful.
> > 
> > Unless the platform has an IOMMU, the 1:1 mapping of the memory SHOULD be
> > enabled. Otherwise DMA-capable devices won't work.
> > 
> > See my answer to your mail (thread "Master not working on AllWinner A20")
> > for a proper fix.
> 
> It is only useful for debugging and development. It should not be used
> in production.

Assuming we are talking about the dom0_11_mapping command-line parameter,
then for debugging and development the constant in the code can simply be
changed; that way there is no danger of an end user thinking the option is
useful.

That's essentially why I dropped the option from my initial series which
added the dom0_11_mapping variable.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:15:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W089H-0004kL-FS; Mon, 06 Jan 2014 11:15:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W089G-0004kD-0u
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:15:10 +0000
Received: from [193.109.254.147:58053] by server-3.bemta-14.messagelabs.com id
	96/C0-11000-D309AC25; Mon, 06 Jan 2014 11:15:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389006907!5549019!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1471 invoked from network); 6 Jan 2014 11:15:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:15:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87841379"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:14:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:15:06 -0500
Message-ID: <1389006905.13274.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 6 Jan 2014 11:15:05 +0000
In-Reply-To: <alpine.DEB.2.02.1401061112120.8667@kaball.uk.xensource.com>
References: <1388949703-1091-1-git-send-email-karim.allah.ahmed@gmail.com>
	<alpine.DEB.2.02.1401051923590.8667@kaball.uk.xensource.com>
	<52C9BD75.6070207@linaro.org>
	<alpine.DEB.2.02.1401061112120.8667@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] enable/disable dom0_11_mapping from
 command-line and default to enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:12 +0000, Stefano Stabellini wrote:
> On Sun, 5 Jan 2014, Julien Grall wrote:
> > On 01/05/2014 07:24 PM, Stefano Stabellini wrote:
> > > On Sun, 5 Jan 2014, Karim Raslan wrote:
> > > > Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> > > 
> > > Thanks!
> > > Given that dom0_11_mapping is an int, the patch is OK.
> > > However it might be better to turn dom0_11_mapping into a bool_t and use
> > > boolean_param instead.
> > 
> > After reading the thread "Master not working on AllWinner A20", I still don't
> > see why this patch is useful.
> > 
> > Unless the platform has an IOMMU, the 1:1 mapping for the memory SHOULD be
> > enabled. Otherwise DMA-capable devices won't work.
> > 
> > See my answer on your mail (thread "Master not working on Allwinner A20")
> > for a proper fix.
> 
> It is only useful for debugging and development. It should not be used
> in production.

Assuming we are talking about the dom0_11_mapping command line parameter:
for debugging and development the constant in the code can simply be
changed, and then there is no danger of an end user thinking the option is
useful.

That's essentially why I dropped the option from my initial series which
added the dom0_11_mapping variable.
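For reference, Stefano's bool_t/boolean_param suggestion quoted above would look roughly like the sketch below. bool_t and boolean_param() are Xen-internal, so minimal illustrative stand-ins are defined here to keep the fragment self-contained; this is not the real Xen implementation.

```c
/* Sketch of the bool_t/boolean_param() variant suggested above.
 * bool_t and boolean_param() are Xen-internal; the stand-ins below
 * are illustrative only, so the fragment compiles on its own. */
typedef int bool_t;
#define boolean_param(name, var) \
    static const char *param_name_for_##var = name  /* Xen registers a real parser */

static bool_t dom0_11_mapping = 1;
boolean_param("dom0_11_mapping", dom0_11_mapping);
```

In real Xen code, boolean_param() registers the variable so that `dom0_11_mapping=0` on the hypervisor command line flips it at boot.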

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:15:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W089e-0004ni-Sj; Mon, 06 Jan 2014 11:15:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W089d-0004nQ-EP
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:15:33 +0000
Received: from [193.109.254.147:49223] by server-8.bemta-14.messagelabs.com id
	EC/A6-30921-4509AC25; Mon, 06 Jan 2014 11:15:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389006930!9065131!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18837 invoked from network); 6 Jan 2014 11:15:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:15:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87841457"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:15:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:15:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W089Z-0003v5-L0;
	Mon, 06 Jan 2014 11:15:29 +0000
Date: Mon, 6 Jan 2014 11:14:39 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
In-Reply-To: <CAOTdubvXTjSJc3D0Sxx5Y3BMFSMhVk19xS0s+LLwvdfvJ1Hh0w@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401061113500.8667@kaball.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<alpine.DEB.2.02.1401051736170.8667@kaball.uk.xensource.com>
	<CAOTdubvOcE4OhotoCzjXKLiTt8Xz0VEyvYzNqMmfWMfNgC9smA@mail.gmail.com>
	<alpine.DEB.2.02.1401051927030.8667@kaball.uk.xensource.com>
	<CAOTdubvXTjSJc3D0Sxx5Y3BMFSMhVk19xS0s+LLwvdfvJ1Hh0w@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: peter <peter@perkbv.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Julien Grall <julien.grall@citrix.com>
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> On Sun, Jan 5, 2014 at 7:28 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > Could you use plain text for emails, please?
> Sorry about that :)
> 
> >
> > On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> >> On Sun, Jan 5, 2014 at 5:39 PM, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >>       On Sun, 5 Jan 2014, karim.allah.ahmed@gmail.com wrote:
> >>       > Hi Peter,
> >>       >
> >>       > If you still can't boot with any memory bigger than 128M, as a fast workaround you can apply this patch.
> >>       >
> >>       > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >>       > index faff88e..849df3f 100644
> >>       > --- a/xen/arch/arm/domain_build.c
> >>       > +++ b/xen/arch/arm/domain_build.c
> >>       > @@ -22,7 +22,7 @@
> >>       >  static unsigned int __initdata opt_dom0_max_vcpus;
> >>       >  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> >>       >
> >>       > -static int dom0_11_mapping = 1;
> >>       > +static int dom0_11_mapping = 0;
> >>       >
> >>       >  #define DOM0_MEM_DEFAULT 0x8000000 /* 128 MiB */
> >>       >  static u64 __initdata dom0_mem = DOM0_MEM_DEFAULT;
> >>       >
> >>       >
> >>       > It's failing because none of the zones has a contiguous memory
> >>       > block with an order bigger than 15 (128 MiB). I think this is
> >>       > due to the alignment of phys_start with the buddy system on the
> >>       > Cubieboard; I'll look further and let you know if there's a
> >>       > cleaner approach to fix that.
> >>       >
> >>       > It used to work before because the 11_mapping wasn't forced to
> >>       > "true" for all platforms and there was a quirk exposed by the
> >>       > platform that used to express that. I think Julien removed that
> >>       > quirk and defaulted to 11_mapping in commit
> >>       > "71952bfcbe9187765cf4010b1479af86def4fb1f"
> >>
> >> Unfortunately dom0_11_mapping is needed if at least one device driver
> >> for the Allwinner uses DMA.
> >> For example, if you disable dom0_11_mapping, can you still access the
> >> network? On the other hand, if none of the device drivers uses DMA, we
> >> can set dom0_11_mapping to false for this platform.
> >>
> >>
> >> I'm not quite sure about all devices on the Cubieboard, but at least for the network case I think it'll still work (well,
> >> it's working for me). Besides, the Cubieboard didn't have this quirk to begin with before the default changed to 11_mapping.
> >
> > What is the linux device driver that you are using for the network? And
> > the one for the disk/sdcard?
> 
> For network "sun4i-emac", and currently I'm mounting my rootfs through
> nfs, so no disk/sdcard

Ah, that explains it! AFAICT sun4i-emac does not use DMA.
So yes, if this is your use case, you can safely disable the 1:1
workaround.
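As an aside, the "order bigger than 15 (128M)" limit discussed in the quoted text is the buddy allocator's block size: an order-n block is 2^n contiguous pages. A minimal sketch of the arithmetic, assuming 4 KiB pages:

```c
/* The "order bigger than 15 (128M)" above is buddy-allocator block
 * size: an order-n block is 2^n contiguous pages.  With 4 KiB pages,
 * order 15 = 32768 pages = 128 MiB, which matches the largest dom0
 * allocation that still succeeds in the report. */
static unsigned long buddy_order_to_bytes(unsigned int order)
{
    return 4096UL << order;  /* assumes 4 KiB pages */
}
```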

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:21:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08Et-0005O7-RR; Mon, 06 Jan 2014 11:20:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W08Er-0005O2-QI
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:20:58 +0000
Received: from [85.158.137.68:24178] by server-15.bemta-3.messagelabs.com id
	8D/D4-11556-9919AC25; Mon, 06 Jan 2014 11:20:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389007255!6277287!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9258 invoked from network); 6 Jan 2014 11:20:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:20:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87842585"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:20:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:20:53 -0500
Message-ID: <52CA9194.8040600@citrix.com>
Date: Mon, 6 Jan 2014 11:20:52 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1388767522-11768-1-git-send-email-david.vrabel@citrix.com>
	<1388767522-11768-2-git-send-email-david.vrabel@citrix.com>
	<alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401031805160.8667@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] x86/xen: set regions above the end of
 RAM as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 18:12, Stefano Stabellini wrote:
> On Fri, 3 Jan 2014, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> PCI devices may have BARs located above the end of RAM so mark such
>> frames as identity frames in the p2m (instead of the default of
>> missing).
>>
>> PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
>> identity frames for the same reason.
>>
>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> 
> Shouldn't this be the case only for dom0?

I don't see why.  DomUs can have PCI devices with high MMIO regions
passed-through.

>> --- a/arch/x86/xen/p2m.c
>> +++ b/arch/x86/xen/p2m.c
>> @@ -481,7 +481,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
>>  	unsigned topidx, mididx, idx;
>>  
>>  	if (unlikely(pfn >= MAX_P2M_PFN))
>> -		return INVALID_P2M_ENTRY;
>> +		return IDENTITY_FRAME(pfn);
>>  
>>  	topidx = p2m_top_index(pfn);
>>  	mididx = p2m_mid_index(pfn);
>> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
>> index 68c054f..6d7798f 100644
>> --- a/arch/x86/xen/setup.c
>> +++ b/arch/x86/xen/setup.c
>> @@ -412,6 +412,16 @@ char * __init xen_memory_setup(void)
>>  		max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);
>>  		mem_end = PFN_PHYS(max_pfn);
>>  	}
>> +
>> +	/*
>> +	 * Set the rest as identity mapped, in case PCI BARs are
>> +	 * located here.
>> +	 *
>> +	 * PFNs above MAX_P2M_PFN are considered identity mapped as
>> +	 * well.
>> +	 */
>> +	set_phys_range_identity(max_pfn + 1, ~0ul);
>> +
> 
> Wouldn't this increase the size of the P2M considerably?

I thought the p2m code would be smart but I see I need to add a
p2m_mid_identity page (similar to the existing p2m_mid_missing page) and
make sure we use it.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08MX-0005b8-Uf; Mon, 06 Jan 2014 11:28:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08MW-0005ae-G2
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:28:52 +0000
Received: from [193.109.254.147:18781] by server-8.bemta-14.messagelabs.com id
	B3/77-30921-3739AC25; Mon, 06 Jan 2014 11:28:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389007730!9054639!1
X-Originating-IP: [209.85.217.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22920 invoked from network); 6 Jan 2014 11:28:51 -0000
Received: from mail-lb0-f180.google.com (HELO mail-lb0-f180.google.com)
	(209.85.217.180)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:28:51 -0000
Received: by mail-lb0-f180.google.com with SMTP id x18so9513465lbi.25
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 03:28:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=wAmz0/AnkdSWtFxyqTbBWmZeac9Tq38a7Gv+UnPnKo0=;
	b=Hf7Pew98oPF/fDK07av6RFah4Zw7bls6N2kSF/uDrvfPBcTcw/2XFmONImK1l2Ul7x
	RtP7ZU8k+4KGqRbRyHjepFJ5WbnFJbDjy1KuaeU5/pVbFJtqt2muyBV2IV3HJ8hk+7nr
	8svEX4D//bfBnC0tRbj2/EcTpJcd+ajW0GWAnzbULfeLCU8RYFq3t+/S7WbEa01X2E1Q
	/ELkuqiktPlh/enwdzMgZX8uhAbwFXr+Rf4gMDRSTPjnzceronYTy8ShwY1eFxtlAtDL
	lAGrUYCboHucAvyEch+sDb4SQccz7Qnur62V/6MD/1ycbB2S0EfTuZCR3YqVX5QiuE7b
	uwtg==
X-Gm-Message-State: ALoCoQlsIriE9hffEOfyAdOLUw2VXsQctAQXFsFBDDxEKD7eyyOzeM2eG87cXCvCfBUWC4QrV4VL
X-Received: by 10.152.44.225 with SMTP id h1mr44576319lam.22.1389007730321;
	Mon, 06 Jan 2014 03:28:50 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id mv9sm42580544lbc.0.2014.01.06.03.28.24
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:28:49 -0800 (PST)
Message-ID: <52CA9347.8040901@linaro.org>
Date: Mon, 06 Jan 2014 11:28:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org, 
	xen-devel@lists.xen.org, gibbs@freebsd.org, jhb@freebsd.org, 
	kib@freebsd.org, julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-15-git-send-email-roger.pau@citrix.com>
	<52C9D432.3040409@linaro.org> <52CA7B8F.9060402@citrix.com>
In-Reply-To: <52CA7B8F.9060402@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/2014 09:46 AM, Roger Pau Monné wrote:
> On 05/01/14 22:52, Julien Grall wrote:
>>
>>
>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>> Since Xen PVH guests don't have ACPI, we need to create a dummy
>>> bus so top level Xen devices can attach to it (instead of
>>> attaching directly to the nexus) and a pvcpu device that will be used
>>> to fill the pcpu->pc_device field.
>>> ---
>>>    sys/conf/files.amd64 |    1 +
>>>    sys/conf/files.i386  |    1 +
>>>    sys/x86/xen/xenpv.c  |  155
>>> ++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> I think it makes more sense to have 2 files: one for the xenpv bus and
>> one for a dummy pvcpu device. It would allow us to move the xenpv bus
>> to common code (sys/xen or sys/dev/xen).
>
> Ack. I wasn't thinking other arches will probably use the xenpv bus but
> not the dummy cpu device. Would you agree to leave the xenpv bus inside
> x86/xen for now and move the dummy PV cpu device to dev/xen/pvcpu/?

As we will attach every Xen device to xenpv, it makes more sense to have
the xenpv bus used on ARM as well. It will avoid code duplication and
keep things nicer.

I'm fine with this solution for now. I will update/move the code when I
send the patch series to support FreeBSD on Xen on ARM.

>>
>> [..]
>>
>>> +
>>> +static int
>>> +xenpv_probe(device_t dev)
>>> +{
>>> +
>>> +    device_set_desc(dev, "Xen PV bus");
>>> +    device_quiet(dev);
>>> +    return (0);
>>
>> As I understand it, 0 means "I can handle the current device"; in this
>> case, if a device is probing because it doesn't yet have a driver, we
>> will use xenpv and end up with 2 (or even more) xenpv buses.
>>
>> As we only want to probe the xenpv bus once, when the bus was added
>> manually, returning BUS_PROBE_NOWILDCARD would suit better.
>>
>> [..]
>>
>>> +static int
>>> +xenpvcpu_probe(device_t dev)
>>> +{
>>> +
>>> +    device_set_desc(dev, "Xen PV CPU");
>>> +    return (0);
>>
>> Same here: BUS_PROBE_NOWILDCARD.
>
> Ack for both, will change it to BUS_PROBE_NOWILDCARD. While at it, we
> should also change the xenstore probe function to return
> BUS_PROBE_NOWILDCARD.
>

Right, I have a patch for xenstore. Do you want me to send it?

-- 
Julien Grall

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08MX-0005b8-Uf; Mon, 06 Jan 2014 11:28:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08MW-0005ae-G2
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:28:52 +0000
Received: from [193.109.254.147:18781] by server-8.bemta-14.messagelabs.com id
	B3/77-30921-3739AC25; Mon, 06 Jan 2014 11:28:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389007730!9054639!1
X-Originating-IP: [209.85.217.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22920 invoked from network); 6 Jan 2014 11:28:51 -0000
Received: from mail-lb0-f180.google.com (HELO mail-lb0-f180.google.com)
	(209.85.217.180)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:28:51 -0000
Received: by mail-lb0-f180.google.com with SMTP id x18so9513465lbi.25
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 03:28:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=wAmz0/AnkdSWtFxyqTbBWmZeac9Tq38a7Gv+UnPnKo0=;
	b=Hf7Pew98oPF/fDK07av6RFah4Zw7bls6N2kSF/uDrvfPBcTcw/2XFmONImK1l2Ul7x
	RtP7ZU8k+4KGqRbRyHjepFJ5WbnFJbDjy1KuaeU5/pVbFJtqt2muyBV2IV3HJ8hk+7nr
	8svEX4D//bfBnC0tRbj2/EcTpJcd+ajW0GWAnzbULfeLCU8RYFq3t+/S7WbEa01X2E1Q
	/ELkuqiktPlh/enwdzMgZX8uhAbwFXr+Rf4gMDRSTPjnzceronYTy8ShwY1eFxtlAtDL
	lAGrUYCboHucAvyEch+sDb4SQccz7Qnur62V/6MD/1ycbB2S0EfTuZCR3YqVX5QiuE7b
	uwtg==
X-Gm-Message-State: ALoCoQlsIriE9hffEOfyAdOLUw2VXsQctAQXFsFBDDxEKD7eyyOzeM2eG87cXCvCfBUWC4QrV4VL
X-Received: by 10.152.44.225 with SMTP id h1mr44576319lam.22.1389007730321;
	Mon, 06 Jan 2014 03:28:50 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id mv9sm42580544lbc.0.2014.01.06.03.28.24
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:28:49 -0800 (PST)
Message-ID: <52CA9347.8040901@linaro.org>
Date: Mon, 06 Jan 2014 11:28:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org, 
	xen-devel@lists.xen.org, gibbs@freebsd.org, jhb@freebsd.org, 
	kib@freebsd.org, julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-15-git-send-email-roger.pau@citrix.com>
	<52C9D432.3040409@linaro.org> <52CA7B8F.9060402@citrix.com>
In-Reply-To: <52CA7B8F.9060402@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/06/2014 09:46 AM, Roger Pau Monné wrote:
> On 05/01/14 22:52, Julien Grall wrote:
>>
>>
>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>> Since Xen PVH guests doesn't have ACPI, we need to create a dummy
>>> bus so top level Xen devices can attach to it (instead of
>>> attaching directly to the nexus) and a pvcpu device that will be used
>>> to fill the pcpu->pc_device field.
>>> ---
>>>    sys/conf/files.amd64 |    1 +
>>>    sys/conf/files.i386  |    1 +
>>>    sys/x86/xen/xenpv.c  |  155
>>> ++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> I think it makes more sense to have 2 files: one for xenpv bus and one
>> for a dummy pvcpu device. It would allow us to move xenpv bus to common
>> code (sys/xen or sys/dev/xen).
>
> Ack. I wasn't thinking other arches will probably use the xenpv bus but
> not the dummy cpu device. Would you agree to leave xenpv bus inside
> x86/xen for now and move the dummy PV cpu device to dev/xen/pvcpu/?

As we will attach every xen device to xenpv, it makes more sense to have 
xenpv bus used on ARM. It will avoid duplication code and keep it nicer.

I'm fine with this solution for now. I will update/move the code when I 
will send the patch series to support FreeBSD on Xen on ARM.

>>
>> [..]
>>
>>> +
>>> +static int
>>> +xenpv_probe(device_t dev)
>>> +{
>>> +
>>> +    device_set_desc(dev, "Xen PV bus");
>>> +    device_quiet(dev);
>>> +    return (0);
>>
>> As I understand, 0 means I can "handle" the current device, in this case
>> if a device is probing, because it doesn't have yet a driver, we will
>> use xenpv and end up with 2 (or even more) xenpv buses.
>>
>> As we only want to probe xenpv bus once, when the bus was added
>> manually, returning BUS_PROBE_NO_WILDCARD would suit better.
>>
>> [..]
>>
>>> +static int
>>> +xenpvcpu_probe(device_t dev)
>>> +{
>>> +
>>> +    device_set_desc(dev, "Xen PV CPU");
>>> +    return (0);
>>
>> Same here: BUS_PROBE_NOWILDCARD.
>
> Ack for both, will change it to BUS_PROBE_NOWILDCARD. While at it, we
> should also change xenstore probe function to return BUS_PROBE_NOWILDCARD.
>

Right, I have a patch for xenstore. Do you want me to send it?

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:33:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:33:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08R3-000600-AH; Mon, 06 Jan 2014 11:33:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08R1-0005zo-4m
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:33:31 +0000
Received: from [193.109.254.147:55650] by server-13.bemta-14.messagelabs.com
	id 23/57-19374-A849AC25; Mon, 06 Jan 2014 11:33:30 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389008009!9034949!1
X-Originating-IP: [209.85.215.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1334 invoked from network); 6 Jan 2014 11:33:29 -0000
Received: from mail-la0-f50.google.com (HELO mail-la0-f50.google.com)
	(209.85.215.50)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:33:29 -0000
Received: by mail-la0-f50.google.com with SMTP id el20so9602092lab.23
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 03:33:29 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=BHCU8QxWQB+7yKHp3kswSx7pa3p/IlbprJea1h0acGc=;
	b=En7L77KuqH9Vf2WYR8rZS9bDjmMEOxZn7DKvN0kcZEgtCeuZWO+gnfng9vGKP2CUTg
	rMW4vKqhlpg/ZlWrqbVBSsRIRLfuMoR0oYhST7HuZL22BxyD0/cim4DbgkoVPlnyLQFj
	gNgTnEbFd19WsSVZ8BF4QW6vYY9+HC/0BRA8yETwTrQHqsvCwSISFRRBJ1bNRrttfYS4
	UvBiP/DWCY/01WjzusekteQ3fe8Os4qsySU3I/vTKFaQF6Q5Vn1yJAJ2VSFL36meaR4R
	giuLlFLzK/j8RaPGHk9TFpAuM9sKYmiqS7iOmaHOkXmEMn7yJevE26zktBnIOzV5/XH7
	RdnA==
X-Gm-Message-State: ALoCoQnM4qJTIoPm11eMZY2q7Iw/6MKvLYm3vRJvI9HHFpXKKOm5ZQgByPg/FXa6kaBYtGBtw+M7
X-Received: by 10.112.13.169 with SMTP id i9mr197869lbc.73.1389008009013;
	Mon, 06 Jan 2014 03:33:29 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id rb4sm42585872lbb.1.2014.01.06.03.33.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:33:27 -0800 (PST)
Message-ID: <52CA9481.4090703@linaro.org>
Date: Mon, 06 Jan 2014 11:33:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org, 
	xen-devel@lists.xen.org, gibbs@freebsd.org, jhb@freebsd.org, 
	kib@freebsd.org, julien.grall@citrix.com
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org> <52CA78DE.9060502@citrix.com>
In-Reply-To: <52CA78DE.9060502@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/06/2014 09:35 AM, Roger Pau Monné wrote:
> On 05/01/14 22:55, Julien Grall wrote:
>>
>>
>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>> Introduce a Xen specific nexus that is going to be in charge for
>>> attaching Xen specific devices.
>>
>> Now that we have a xenpv bus, do we really need a specific nexus for Xen?
>> We should be able to use the identify callback of xenpv to create the bus.
>>
>> The other part of this patch can be merged in the patch #14 "Introduce
>> xenpv bus and a dummy pvcpu device".
>
> On x86 at least we need the Xen specific nexus, or we will fall back to
> use the legacy nexus which is not what we really want.
>

Oh right, in any case can we use the identify callback of xenpv to add 
the bus?

With this solution xenpv can add itself no matter FreeBSD use the 
generic nexus or the nexus Xen, of course with a check if we are running 
on Xen :).

For instance, on ARM side I don't plan to have a specific Xen nexus.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:33:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08RP-00065J-SI; Mon, 06 Jan 2014 11:33:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W08RO-00064n-6M
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:33:54 +0000
Received: from [193.109.254.147:62330] by server-7.bemta-14.messagelabs.com id
	DB/61-15500-1A49AC25; Mon, 06 Jan 2014 11:33:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389008031!9065832!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28455 invoked from network); 6 Jan 2014 11:33:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:33:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87845836"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:33:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:33:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W08RK-0004DL-In;
	Mon, 06 Jan 2014 11:33:50 +0000
Date: Mon, 6 Jan 2014 11:33:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140105194155.GC12263@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401061128130.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
	<20140105194155.GC12263@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	hpa@zytor.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead
	of native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 5 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > We also optimize one - the TLB flush. The native operation would
> > > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > > Xen one avoids that and lets the hypervisor determine which
> > > VCPU needs the TLB flush.
> > > 
> > > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > ---
> > >  arch/x86/xen/mmu.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > > 
> > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > index 490ddb3..c1d406f 100644
> > > --- a/arch/x86/xen/mmu.c
> > > +++ b/arch/x86/xen/mmu.c
> > > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> > >  void __init xen_init_mmu_ops(void)
> > >  {
> > >  	x86_init.paging.pagetable_init = xen_pagetable_init;
> > > +
> > > +	/* Optimization - we can use the HVM one but it has no idea which
> > > +	 * VCPUs are descheduled - which means that it will needlessly IPI
> > > +	 * them. Xen knows so let it do the job.
> > > +	 */
> > > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > > +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > > +		return;
> > > +	}
> > >  	pv_mmu_ops = xen_mmu_ops;
> > >  
> > >  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> > 
> > Regarding this patch, the next one and the other changes to
> > xen_setup_shared_info, xen_setup_mfn_list_list,
> > xen_setup_vcpu_info_placement, etc: considering that the mmu related
> > stuff is very different between PV and PVH guests, I wonder if it makes
> > any sense to keep calling xen_init_mmu_ops on PVH.
> > 
> > I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> > pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
> > under a new xen_pvh_pagetable_init.
> > Just to give you an idea, not even compiled tested:
> 
> There is something to be said about sharing the same code path
> that "old-style" PV is using with the new-style - code coverage.
> 
> That is the code gets tested under both platforms and if I (or
> anybody else) introduce a bug in the "common-PV-paths" it will
> be immediately obvious as hopefully the regression tests
> will pick it up.
> 
> It is not nice - as low-level code is sprinkled with the one-offs
> for the PVH - which mostly is doing _less_.

I thought you would say that. However in this specific case the costs
exceed the benefits. Think of all the times we'll have to debug
something, we'll be staring at the code, and several dozens of minutes
later we'll realize that the code we have been looking at all along is
not actually executed in PVH mode.


> What I was thinking is to flip this around. Make the PVH paths
> the default and then have something like 'if (!xen_pvh_domain())'
> ... the big code.
> 
> Would you be OK with this line of thinking going forward say
> after this patchset?
 
I am not opposed to it in principle but I don't expect that you'll be
able to improve things significantly.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:33:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08RP-00065J-SI; Mon, 06 Jan 2014 11:33:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W08RO-00064n-6M
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:33:54 +0000
Received: from [193.109.254.147:62330] by server-7.bemta-14.messagelabs.com id
	DB/61-15500-1A49AC25; Mon, 06 Jan 2014 11:33:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389008031!9065832!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28455 invoked from network); 6 Jan 2014 11:33:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:33:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87845836"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:33:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:33:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W08RK-0004DL-In;
	Mon, 06 Jan 2014 11:33:50 +0000
Date: Mon, 6 Jan 2014 11:33:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140105194155.GC12263@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401061128130.8667@kaball.uk.xensource.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
	<20140105194155.GC12263@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	hpa@zytor.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead
	of native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 5 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > 
> > > We also optimize one - the TLB flush. The native operation would
> > > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > > Xen one avoids that and lets the hypervisor determine which
> > > VCPU needs the TLB flush.
> > > 
> > > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > ---
> > >  arch/x86/xen/mmu.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > > 
> > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > index 490ddb3..c1d406f 100644
> > > --- a/arch/x86/xen/mmu.c
> > > +++ b/arch/x86/xen/mmu.c
> > > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> > >  void __init xen_init_mmu_ops(void)
> > >  {
> > >  	x86_init.paging.pagetable_init = xen_pagetable_init;
> > > +
> > > +	/* Optimization - we can use the HVM one but it has no idea which
> > > +	 * VCPUs are descheduled - which means that it will needlessly IPI
> > > +	 * them. Xen knows so let it do the job.
> > > +	 */
> > > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > > +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > > +		return;
> > > +	}
> > >  	pv_mmu_ops = xen_mmu_ops;
> > >  
> > >  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> > 
> > Regarding this patch, the next one and the other changes to
> > xen_setup_shared_info, xen_setup_mfn_list_list,
> > xen_setup_vcpu_info_placement, etc: considering that the mmu related
> > stuff is very different between PV and PVH guests, I wonder if it makes
> > any sense to keep calling xen_init_mmu_ops on PVH.
> > 
> > I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> > pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
> > under a new xen_pvh_pagetable_init.
> > Just to give you an idea, not even compiled tested:
> 
> There is something to be said about sharing the same code path
> that "old-style" PV is using with the new-style - code coverage.
> 
> That is the code gets tested under both platforms and if I (or
> anybody else) introduce a bug in the "common-PV-paths" it will
> be immediately obvious as hopefully the regression tests
> will pick it up.
> 
> It is not nice - as low-level code is sprinkled with the one-offs
> for the PVH - which mostly is doing _less_.

I thought you would say that. However in this specific case the costs
exceed the benefits. Think of all the times we'll have to debug
something, we'll be staring at the code, and several dozens of minutes
later we'll realize that the code we have been looking at all along is
not actually executed in PVH mode.


> What I was thinking is to flip this around. Make the PVH paths
> the default and then have something like 'if (!xen_pvh_domain())'
> ... the big code.
> 
> Would you be OK with this line of thinking going forward say
> after this patchset?
 
I am not opposed to it in principle, but I don't expect that you'll be
able to improve things significantly.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08X3-0006X5-Rt; Mon, 06 Jan 2014 11:39:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08X2-0006Wv-3s
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:39:44 +0000
Received: from [85.158.143.35:22501] by server-3.bemta-4.messagelabs.com id
	03/5D-32360-FF59AC25; Mon, 06 Jan 2014 11:39:43 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389008382!9865295!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17672 invoked from network); 6 Jan 2014 11:39:42 -0000
Received: from mail-la0-f45.google.com (HELO mail-la0-f45.google.com)
	(209.85.215.45)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:39:42 -0000
Received: by mail-la0-f45.google.com with SMTP id eh20so9775559lab.18
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 03:39:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=WcWnwShwFF5AxHFNjzTHDuJ7udVt5PLU7K8YtZw111Y=;
	b=YXK5Oc4ZMzyNGu67Ro95M4+IOt+G/mpG79cY8yoBaozBP155hxVq8TE1QZa1m9ISux
	+e21WoyWSsWB+k6K05VeOuu5bCSSPihN8Yx8B7Ma1dHNKo4ZFt3uedU/O3NlafKQFT/c
	cXG1MA/fDmxCBvV4GFD53JPrA8M6CVvzzlSNvWGUmg3GWr/8yV6hoQClJ32rknSUxiCo
	UUc7aOrTRo5v8cYOw+x32l5l3Ze7L74foyMwa9GYCvlPG/QvkFNN6y0Nc8nV0LuO+sVa
	4Y03iXbL838lqDslY8G9kX2Usbrvp4Mcu3FKPu8GCwAtm6phyip+hPAazZULZlhVgJ+G
	k7/A==
X-Gm-Message-State: ALoCoQnQCfkOzwzlQJ7lAM+CT6a9uYwtC5z4JhP0qEP1clT7BtNMPbgRJ0WorSImDqc8cAnbK2gK
X-Received: by 10.152.18.168 with SMTP id x8mr223262lad.63.1389008381883;
	Mon, 06 Jan 2014 03:39:41 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id e6sm42583604lbs.3.2014.01.06.03.39.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:39:40 -0800 (PST)
Message-ID: <52CA95F8.1080907@linaro.org>
Date: Mon, 06 Jan 2014 11:39:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-6-git-send-email-julien.grall@linaro.org>
	<52C9E034.1020707@citrix.com>
In-Reply-To: <52C9E034.1020707@citrix.com>
Cc: tim@xen.org, patches@linaro.org, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [RFC 5/6] xen/console: Add noreturn attribute to
 panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/05/2014 10:44 PM, Andrew Cooper wrote:
> On 05/01/2014 21:26, Julien Grall wrote:
>> The panic function will never return. Without this attribute, gcc may
>> output warnings in calling functions.
>>
>> Cc: Keir Fraser <keir@xen.org>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> I have a longer series doing rather more noreturn'ing than just this, if
> you can wait until the 4.5 dev window opens up again.

I'm not sure I understand what you mean. When early_panic is converted
to the panic function, I get gcc warnings which result in errors
because of -Werror.

So without this small patch, this series can't be compiled.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:42:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08Zi-0006j6-Kz; Mon, 06 Jan 2014 11:42:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W08Zh-0006iw-De
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:42:29 +0000
Received: from [85.158.137.68:55170] by server-10.bemta-3.messagelabs.com id
	FA/AE-23989-4A69AC25; Mon, 06 Jan 2014 11:42:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389008546!7487450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11705 invoked from network); 6 Jan 2014 11:42:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:42:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90024541"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 11:42:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:42:25 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W08Zd-0004Kn-GH;
	Mon, 06 Jan 2014 11:42:25 +0000
Message-ID: <52CA96A1.2080406@citrix.com>
Date: Mon, 6 Jan 2014 11:42:25 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-6-git-send-email-julien.grall@linaro.org>
	<52C9E034.1020707@citrix.com> <52CA95F8.1080907@linaro.org>
In-Reply-To: <52CA95F8.1080907@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	stefano.stabellini@eu.citrix.com, tim@xen.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 5/6] xen/console: Add noreturn attribute to
 panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 11:39, Julien Grall wrote:
>
>
> On 01/05/2014 10:44 PM, Andrew Cooper wrote:
>> On 05/01/2014 21:26, Julien Grall wrote:
>>> The panic function will never return. Without this attribute, gcc
>>> may output warnings in calling functions.
>>>
>>> Cc: Keir Fraser <keir@xen.org>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> I have a longer series doing rather more noreturn'ing than just this, if
>> you can wait until the 4.5 dev window opens up again.
>
> I'm not sure I understand what you mean. When early_panic is converted
> to the panic function, I get gcc warnings which result in errors
> because of -Werror.
>
> So without this small patch, this series can't be compiled.
>

My series makes panic() (along with several other functions)
properly noreturn, so you don't need the while(1); at the end.

i.e., if your series can wait until after mine is committed, this
problem will disappear.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:44:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:44:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08bt-0006ti-72; Mon, 06 Jan 2014 11:44:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08br-0006tS-Cr
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:44:43 +0000
Received: from [85.158.139.211:8835] by server-11.bemta-5.messagelabs.com id
	B0/89-23268-A279AC25; Mon, 06 Jan 2014 11:44:42 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389008681!8073808!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4138 invoked from network); 6 Jan 2014 11:44:42 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:44:42 -0000
Received: by mail-lb0-f177.google.com with SMTP id q8so9567077lbi.22
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 03:44:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=5+wE9WdXfsm9WG9hOt4mtZNw7uipdhfhaKWI6MRlXo0=;
	b=P2RBDrRhfBQbccKJsT6p1v6hhdBRtUF2sBpo4M26yrFSQcJ393ngtA548uSpMfSS2y
	ncp/MaWSqP/Z44B8S7vAfoWgVoO485fzvIsPOyE86pfHmY//e0A9YxuuqMp+vfEwUkUg
	QedZwIdiKbzfpGD50+nWaHxzK39cMeFbS4WL1QHeOJamiW167R54cX+0T4fAp1PtBfeY
	p8Qd7wC1DEyNKHtNjwBJ4XFM8pWxf2HCAPQXE3n/xs7WI5XubssHlC20+hWZ9HLExKur
	9eZHjpPRjymFr4Y2RCENBa6VjjuMbCQWgYJmVNe2NeSscr318kv+dZTVS6S2sY861mWh
	mhTg==
X-Gm-Message-State: ALoCoQlRAdi/3DX7zk4NnCeJohWSzMrCX3sVc01AawNzgb0lDg3pBpTRMauAG2qMQoHpQPs7xIbG
X-Received: by 10.112.219.170 with SMTP id pp10mr804638lbc.29.1389008681172;
	Mon, 06 Jan 2014 03:44:41 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id e6sm42597682lbs.3.2014.01.06.03.44.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:44:39 -0800 (PST)
Message-ID: <52CA9717.1060700@linaro.org>
Date: Mon, 06 Jan 2014 11:44:23 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1387290499-3970-1-git-send-email-julien.grall@linaro.org>	
	<alpine.DEB.2.02.1312171455110.8667@kaball.uk.xensource.com>
	<1387461097.9925.78.camel@kazak.uk.xensource.com>
In-Reply-To: <1387461097.9925.78.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>,
	ian.jackson@eu.citrix.com, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v6] xen/arm: Allow balooning working with
	1:1 memory mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 12/19/2013 01:51 PM, Ian Campbell wrote:
> On Tue, 2013-12-17 at 14:55 +0000, Stefano Stabellini wrote:
>> On Tue, 17 Dec 2013, Julien Grall wrote:
>>> Without an IOMMU, dom0 must have a 1:1 memory mapping for all of
>>> these guest physical addresses. When the balloon decides to give back
>>> a page to the kernel, this page must have the same address as before.
>>> Otherwise, we will lose the 1:1 mapping and will break DMA-capable
>>> devices.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>> Cc: Keir Fraser <keir@xen.org>
>>
>> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> Keir, any objections to this patch?

Hi Ian,

Do we need to wait for Keir's ack? Without this patch, most of the
supported ARM platforms won't be able to create guests correctly. On
the Arndale the network is unstable, and on Midway the kernel gets
stuck. It would be nice to have this patch in the next Xen 4.4 RC.

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:46:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08do-00076a-Qx; Mon, 06 Jan 2014 11:46:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W08dm-00076O-In
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:46:42 +0000
Received: from [85.158.137.68:45032] by server-11.bemta-3.messagelabs.com id
	1D/D7-19379-1A79AC25; Mon, 06 Jan 2014 11:46:41 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389008800!7409749!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24382 invoked from network); 6 Jan 2014 11:46:41 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:46:41 -0000
Received: by mail-lb0-f171.google.com with SMTP id w7so9961145lbi.30
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 03:46:40 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=54dQpJPFJIYeeMJM0vRUTS/aOCMjIM8L59b2kY6uNb0=;
	b=TwRpzPjEQLko2g0qxyZYQlwhOz928q/TJi+YM9bRvlmE+LV/OARNrVJHZpWNcXmAsH
	xPH0mFYJviaZhQlMqoy4aRds3x6GGQcR09DFnc/nAQLEP737seZffTmn4nXKfRYgpuQF
	RPEkDSWi3/WYi25LKOyhsPSZjCbX+mQNQFITrDphLaIUdnZfQ+LVk5anbyvjkb7gxM6g
	jKq3qj7xdCf3BvyfPC2DpoL9hjZciZ/OXZSmBxAW5tvkdEPw/2ZDW5jNWcAmMoGTDty8
	jiAQ/IB0zmIsEPQsEFNhxmLHo3rFTLO4vqXn8gW+vX8UIck3X8RJDxDESe6Ggm96n77y
	X/KA==
X-Gm-Message-State: ALoCoQn5GWjRpx7LaWORinBQhFZ6dg8THV1io42cO1XvIRcJ3bMDszUl0TiFEoP6yAl6uzTB7sej
X-Received: by 10.153.6.34 with SMTP id cr2mr484980lad.44.1389008800488;
	Mon, 06 Jan 2014 03:46:40 -0800 (PST)
Received: from [192.168.42.157] ([195.69.14.50])
	by mx.google.com with ESMTPSA id c8sm54760263lag.3.2014.01.06.03.46.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 03:46:39 -0800 (PST)
Message-ID: <52CA9793.4030204@linaro.org>
Date: Mon, 06 Jan 2014 11:46:27 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-6-git-send-email-julien.grall@linaro.org>
	<52C9E034.1020707@citrix.com> <52CA95F8.1080907@linaro.org>
	<52CA96A1.2080406@citrix.com>
In-Reply-To: <52CA96A1.2080406@citrix.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	stefano.stabellini@eu.citrix.com, tim@xen.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 5/6] xen/console: Add noreturn attribute to
 panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/06/2014 11:42 AM, Andrew Cooper wrote:
> On 06/01/14 11:39, Julien Grall wrote:
>
> My series makes panic() (along with several other functions)
> properly noreturn, so you don't need the while(1); at the end.
>
> i.e., if your series can wait until after mine is committed, this
> problem will disappear.

This series is not for Xen 4.4, so I'm fine waiting for your series.
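
For readers following along, the noreturn idiom under discussion can be
sketched in plain C11 (`my_panic` and `parse_digit` are illustrative names,
not Xen's actual code; Xen spells the attribute via its own macro):

```c
#include <stdio.h>
#include <stdlib.h>

/* A noreturn panic() tells the compiler control never comes back,
 * so callers can drop the trailing while(1); */
static _Noreturn void my_panic(const char *msg)
{
    fprintf(stderr, "panic: %s\n", msg);
    exit(1);
}

static int parse_digit(int c)
{
    if (c >= '0' && c <= '9')
        return c - '0';
    my_panic("not a digit");
    /* no unreachable while(1); or return statement needed here */
}
```

With the attribute in place, the compiler accepts `parse_digit` without a
final return and without "control reaches end of non-void function" warnings.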

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:48:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08fR-0007G2-C1; Mon, 06 Jan 2014 11:48:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W08fQ-0007Fw-Df
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 11:48:24 +0000
Received: from [85.158.137.68:64839] by server-8.bemta-3.messagelabs.com id
	28/6A-31081-7089AC25; Mon, 06 Jan 2014 11:48:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389008901!3787449!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10279 invoked from network); 6 Jan 2014 11:48:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:48:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90026203"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 11:48:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	06:48:20 -0500
Message-ID: <1389008899.31766.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 6 Jan 2014 11:48:19 +0000
In-Reply-To: <52CA9717.1060700@linaro.org>
References: <1387290499-3970-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1312171455110.8667@kaball.uk.xensource.com>
	<1387461097.9925.78.camel@kazak.uk.xensource.com>
	<52CA9717.1060700@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian.jackson@eu.citrix.com, stefano.stabellini@citrix.com,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/arm: Allow balooning working with
 1:1 memory mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:44 +0000, Julien Grall wrote:
> 
> On 12/19/2013 01:51 PM, Ian Campbell wrote:
> > On Tue, 2013-12-17 at 14:55 +0000, Stefano Stabellini wrote:
> >> On Tue, 17 Dec 2013, Julien Grall wrote:
> >>> Without an IOMMU, dom0 must have a 1:1 memory mapping for all
> >>> its guest physical addresses. When the balloon decides to give back a
> >>> page to the kernel, that page must have the same address as before.
> >>> Otherwise, we will lose the 1:1 mapping and break DMA-capable
> >>> devices.
> >>>
> >>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> >>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>> Cc: Keir Fraser <keir@xen.org>
> >>
> >> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >
> > Keir, any objections to this patch?
> 
> Hi Ian,
> 
> Do we need to wait for Keir's Ack?

I think he's had ample opportunity to object (last call!) and since this
code is explicitly dead on everything apart from ARM I intend to commit
it next time I go through my queue.

>  Without this patch, most supported ARM 
> platforms won't be able to create guests correctly. On the Arndale, the 
> network is unstable, and on Midway, the kernel gets stuck.
> It would be nice to have this patch for the next Xen 4.4 RC.
> 
> Sincerely yours,
> 
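
The 1:1 constraint the quoted commit message describes can be modelled in a
few lines of C (a conceptual sketch only, with made-up names, not the actual
balloon driver):

```c
#include <stdbool.h>
#include <stddef.h>

/* Model: without an IOMMU, dom0's guest physical frame numbers (gpfns)
 * equal machine frame numbers, so a ballooned-out page must later be
 * repopulated at the very same frame or DMA-capable devices break. */
#define NR_FRAMES 16

static bool populated[NR_FRAMES];

/* Balloon out: release the page, remembering the frame it occupied. */
static size_t balloon_out(size_t gpfn)
{
    populated[gpfn] = false;
    return gpfn; /* the only frame it may legally come back to */
}

/* Balloon in: only legal if the page returns to the frame it left. */
static bool balloon_in(size_t gpfn, size_t saved)
{
    if (gpfn != saved)
        return false; /* would break the 1:1 mapping */
    populated[gpfn] = true;
    return true;
}
```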



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:56:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08nU-0007so-Ht; Mon, 06 Jan 2014 11:56:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W08nS-0007sh-NK
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 11:56:42 +0000
Received: from [85.158.137.68:3393] by server-17.bemta-3.messagelabs.com id
	BE/2F-15965-9F99AC25; Mon, 06 Jan 2014 11:56:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389009399!7441588!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23391 invoked from network); 6 Jan 2014 11:56:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:56:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87851028"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:56:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 6 Jan 2014 06:56:39 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W08nO-000862-Gt;
	Mon, 06 Jan 2014 11:56:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W08nO-0002GU-8e;
	Mon, 06 Jan 2014 11:56:38 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21194.39413.802675.333845@mariner.uk.xensource.com>
Date: Mon, 6 Jan 2014 11:56:37 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <osstest-24246-mainreport@xen.org>
References: <osstest-24246-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[linux-arm-xen test] 24246: regressions - FAIL"):
> flight 24246 linux-arm-xen real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24246/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-armhf-pvops             4 kernel-build         fail REGR. vs. 21427

Having fixed the problem with the test job setup, this now fails with:

  Building modules, stage 2.
  MODPOST 21 modules
  Kernel: arch/arm/boot/Image is ready
  AS      arch/arm/boot/compressed/head.o
  GZIP    arch/arm/boot/compressed/piggy.gzip
ERROR: "phys_to_mach" [drivers/xen/xen-gntalloc.ko] undefined!
  CC      arch/arm/boot/compressed/misc.o
  CC      arch/arm/boot/compressed/decompress.o
  CC      arch/arm/boot/compressed/string.o
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
make: *** Waiting for unfinished jobs....

> version targeted for testing:
>  linux                cd7bd0ad11c8979ce247215833eb23d661c1c8fe
> baseline version:
>  linux                5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52

So 5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52 worked.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:59:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08pt-00089a-Em; Mon, 06 Jan 2014 11:59:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W08ps-00089T-57
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:59:12 +0000
Received: from [193.109.254.147:43409] by server-1.bemta-14.messagelabs.com id
	E8/49-15600-F8A9AC25; Mon, 06 Jan 2014 11:59:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389009549!5560415!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16696 invoked from network); 6 Jan 2014 11:59:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:59:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87851910"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:59:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:59:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W08po-0004Xt-FJ; Mon, 06 Jan 2014 11:59:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 6 Jan 2014 11:59:07 +0000
Message-ID: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [Patch] common/kexec: Identify which cpu the kexec
	image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A patch to this effect has been in XenServer for a little while, and has
proved to be a useful debugging point for servers which behave differently
depending on whether the crash occurs on a non-bootstrap processor.

Moving the printk() from kexec_crash() to one_cpu_only() means that it will
not be printed multiple times when several cpus race down the kexec path.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: David Vrabel <david.vrabel@citrix.com>

---

I would like this to be considered for 4.4: it is quite a trivial change,
yet provides very useful information in the case of a crash.
---
 xen/common/kexec.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 481b0c2..5dcd487 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -265,6 +265,8 @@ static int noinline one_cpu_only(void)
     }
 
     set_bit(KEXEC_FLAG_IN_PROGRESS, &kexec_flags);
+    printk("Executing crash image on cpu%u\n", cpu);
+
     return 0;
 }
 
@@ -340,8 +342,6 @@ void kexec_crash(void)
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
 
-    printk("Executing crash image\n");
-
     kexecing = TRUE;
 
     if ( kexec_common_shutdown() != 0 )
-- 
1.7.10.4
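
The race the commit message describes can be sketched with a C11 atomic flag
(an illustrative model, not the hypervisor's actual `test_and_set_bit` on
`kexec_flags`):

```c
#include <stdatomic.h>
#include <stdio.h>

/* Sketch of the one_cpu_only() pattern: several CPUs may race into the
 * crash path, but only the first to set the flag proceeds, and (after
 * this patch) is also the one that prints which cpu it is. */
static atomic_flag in_progress = ATOMIC_FLAG_INIT;

static int one_cpu_only(unsigned int cpu)
{
    if (atomic_flag_test_and_set(&in_progress))
        return -1; /* another cpu already won the race */
    printf("Executing crash image on cpu%u\n", cpu);
    return 0;
}
```

Because the winner is chosen atomically, the message is emitted exactly once
no matter how many cpus enter the path concurrently.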


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 11:59:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 11:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W08pt-00089a-Em; Mon, 06 Jan 2014 11:59:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W08ps-00089T-57
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 11:59:12 +0000
Received: from [193.109.254.147:43409] by server-1.bemta-14.messagelabs.com id
	E8/49-15600-F8A9AC25; Mon, 06 Jan 2014 11:59:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389009549!5560415!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16696 invoked from network); 6 Jan 2014 11:59:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 11:59:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87851910"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 11:59:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 06:59:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W08po-0004Xt-FJ; Mon, 06 Jan 2014 11:59:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 6 Jan 2014 11:59:07 +0000
Message-ID: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [Patch] common/kexec: Identify which cpu the kexec
	image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A patch to this effect has been in XenServer for a little while, and has
proved to be a useful debugging aid for servers which behave differently
when crashing on a non-bootstrap processor.

Moving the printk() from kexec_crash() to one_cpu_only() means that it will
not be printed multiple times when multiple cpus race on the kexec path.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: David Vrabel <david.vrabel@citrix.com>

---

I would like this to be considered for 4.4: it is a trivial change, yet
provides very useful information in the case of a crash.
---
 xen/common/kexec.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 481b0c2..5dcd487 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -265,6 +265,8 @@ static int noinline one_cpu_only(void)
     }
 
     set_bit(KEXEC_FLAG_IN_PROGRESS, &kexec_flags);
+    printk("Executing crash image on cpu%u\n", cpu);
+
     return 0;
 }
 
@@ -340,8 +342,6 @@ void kexec_crash(void)
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
 
-    printk("Executing crash image\n");
-
     kexecing = TRUE;
 
     if ( kexec_common_shutdown() != 0 )
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 12:11:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 12:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W091D-0000U3-D9; Mon, 06 Jan 2014 12:10:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W091B-0000Ty-7I
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 12:10:53 +0000
Received: from [193.109.254.147:45104] by server-2.bemta-14.messagelabs.com id
	0B/D1-00361-C4D9AC25; Mon, 06 Jan 2014 12:10:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389010250!9053677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5356 invoked from network); 6 Jan 2014 12:10:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 12:10:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90030504"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 12:10:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	07:10:31 -0500
Message-ID: <1389010229.31766.14.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 6 Jan 2014 12:10:29 +0000
In-Reply-To: <1387463435.9925.86.camel@kazak.uk.xensource.com>
References: <1387387710.28680.88.camel@kazak.uk.xensource.com>
	<52B2758C.70208@linaro.org>
	<1387463435.9925.86.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>, Stefano.Stabellini@citrix.com,
	David Vrabel <david.vrabel@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH RFC] xen: arm: use uncached foreign mappings
 when building guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2013-12-19 at 14:30 +0000, Ian Campbell wrote:
> Looks like I might have missed a callpath.

Not just that, but a missing cache flush too, so even with the uncached
mapping the cache is *still* potentially dirty.

One place this could be fixed is in dom0 in remap_pte_fn, and I've
confirmed that this works:
        @@ -101,6 +102,8 @@ static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsig
                if (map_foreign_page(pfn, info->fgmfn, info->domid))
                        return -EFAULT;
                set_pte_at(info->vma->vm_mm, addr, ptep, pte);
        +       if (pgprot_val(info->prot) == pgprot_val(pgprot_noncached(info->prot)))
        +               __cpuc_flush_dcache_area((void *)addr, PAGE_SIZE);
         
                return 0;
         }
        
I'm wondering though if it might be more correct to do this on the
hypervisor side. In particular I'm thinking that caches should be
invalidated when a page is freed. scrub_one_page would have been
convenient but is not called for a ballooned down page, so we'd have to
add something which maps the page and flushes it on free.

An alternative would be to invalidate the cache when we allocate the
page to a guest, e.g. in populate physmap.

Sadly, neither of the last two ideas actually works in practice for
some reason...

I'll poke at it a bit more.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 12:23:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 12:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09Cy-0000nZ-RL; Mon, 06 Jan 2014 12:23:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W09Cx-0000nU-Nj
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 12:23:04 +0000
Received: from [85.158.137.68:21776] by server-2.bemta-3.messagelabs.com id
	F8/1A-17329-620AAC25; Mon, 06 Jan 2014 12:23:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389010980!7446809!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15942 invoked from network); 6 Jan 2014 12:23:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 12:23:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90036366"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 12:23:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 07:23:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W09Ct-0004ty-Os;
	Mon, 06 Jan 2014 12:22:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 6 Jan 2014 12:22:05 +0000
Message-ID: <1389010925-22832-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, phcoder@gmail.com, qemu-devel@nongnu.org,
	Gerd Hoffmann <kraxel@redhat.com>, Anthony Liguori <anthony@codemonkey.ws>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: [Xen-devel] [PATCH] xenfb: map framebuffer read-only and handle
	unmap errors
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The framebuffer is needlessly mapped PROT_READ | PROT_WRITE; map it
PROT_READ instead.

The framebuffer is unmapped by replacing its pages with anonymous shared
memory via a call to mmap. Check the return value for errors and print a
warning if the call fails.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Gerd Hoffmann <kraxel@redhat.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Anthony Liguori <anthony@codemonkey.ws>
CC: phcoder@gmail.com

diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index f0333a0..cb9d456 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -495,7 +495,7 @@ static int xenfb_map_fb(struct XenFB *xenfb)
     munmap(map, n_fbdirs * XC_PAGE_SIZE);
 
     xenfb->pixels = xc_map_foreign_pages(xen_xc, xenfb->c.xendev.dom,
-					 PROT_READ | PROT_WRITE, fbmfns, xenfb->fbpages);
+            PROT_READ, fbmfns, xenfb->fbpages);
     if (xenfb->pixels == NULL)
 	goto out;
 
@@ -903,6 +903,11 @@ static void fb_disconnect(struct XenDevice *xendev)
     fb->pixels = mmap(fb->pixels, fb->fbpages * XC_PAGE_SIZE,
                       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON,
                       -1, 0);
+    if (fb->pixels == MAP_FAILED) {
+        xen_be_printf(xendev, 0,
+                "Couldn't replace the framebuffer with anonymous memory errno=%d\n",
+                errno);
+    }
     common_unbind(&fb->c);
     fb->feature_update = 0;
     fb->bug_trigger    = 0;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 12:24:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 12:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09EJ-0000v5-AQ; Mon, 06 Jan 2014 12:24:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W09EH-0000un-VC
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 12:24:26 +0000
Received: from [85.158.139.211:29736] by server-16.bemta-5.messagelabs.com id
	9D/A6-11843-970AAC25; Mon, 06 Jan 2014 12:24:25 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389011062!8086532!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21265 invoked from network); 6 Jan 2014 12:24:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 12:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87860935"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 12:24:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 07:24:22 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W09ED-0004ui-U2;
	Mon, 06 Jan 2014 12:24:21 +0000
Date: Mon, 6 Jan 2014 12:23:31 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: =?UTF-8?Q?Vladimir_'=CF=86-coder=2Fphcoder'_Serbinenko?=
	<phcoder@gmail.com>
In-Reply-To: <52B434AC.7090905@gmail.com>
Message-ID: <alpine.DEB.2.02.1401061222360.8667@kaball.uk.xensource.com>
References: <527EA084.6000706@gmail.com> <52987A43.9070806@m2r.biz>
	<52987D7F.3050006@gmail.com> <52988F86.6050008@m2r.biz>
	<529DB2F1.4080509@m2r.biz> <529DB363.7080003@gmail.com>
	<529DBED9.80105@m2r.biz> <529DC07E.8000201@gmail.com>
	<529DE3FD.90002@m2r.biz>
	<529DF9D5.2060301@gmail.com> <529E03FB.90603@m2r.biz>
	<52A1B0CB.3000705@m2r.biz> <52A1B5E8.5090709@gmail.com>
	<52A1E2CD.9030002@m2r.biz> <52A1E56E.3070105@gmail.com>
	<52A1EBAB.5090006@m2r.biz> <52A2F341.9010606@gmail.com>
	<52A5961A.2010608@m2r.biz>
	<52B02B13.1000103@m2r.biz> <52B02F84.6070403@gmail.com>
	<52B04D6E.3070700@m2r.biz> <52B0527C.40104@gmail.com>
	<52B057BD.8070701@m2r.biz> <52B05AF1.1040508@gmail.com>
	<52B05B5C.1080901@m2r.biz> <52B06131.8040809@m2r.biz>
	<52B1B82D.9050501@gmail.com>
	<alpine.DEB.2.02.1312181936470.8667@kaball.uk.xensource.com>
	<52B20392.9090001@gmail.com>
	<alpine.DEB.2.02.1312191125450.8667@kaball.uk.xensource.com>
	<52B434AC.7090905@gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-890200315-1389010962=:8667"
Content-ID: <alpine.DEB.2.02.1401061223250.8667@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: The development of GRUB 2 <grub-devel@gnu.org>,
	Fabio Fantoni <fabio.fantoni@m2r.biz>,
	xen-devel <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] pvgrub2 is merged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-890200315-1389010962=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Content-ID: <alpine.DEB.2.02.1401061223251.8667@kaball.uk.xensource.com>

On Fri, 20 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> On 19.12.2013 12:54, Stefano Stabellini wrote:
> > On Wed, 18 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> >> On 18.12.2013 20:39, Stefano Stabellini wrote:
> >>> On Wed, 18 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> >>>> On 17.12.2013 15:35, Fabio Fantoni wrote:
> >>>>> Il 17/12/2013 15:10, Fabio Fantoni ha scritto:
> >>>>>> Il 17/12/2013 15:08, Vladimir 'φ-coder/phcoder' Serbinenko ha scritto:
> >>>>>>>> Thanks.
> >>>>>>>> Now there is another error, probably introduced by xenfb support:
> >>>>>>>>
> >>>>>>> doesn't look like related to xenfb. Is it 64-bit or PAE guest?
> >>>>>>
> >>>>>> 64 bit
> >>>>>
> >>>>> I did "git reset --hard" to commit "Remove grub_bios_interrupt on
> >>>>> coreboot." and then I applied only
> >>>>> "grub-core/lib/x86_64/xen/relocator.S: Fix hypercall ABI violation."
> >>>>> commit.
> >>>>> Now the Sid domU boot correctly, therefore the regression is caused by
> >>>>> "xenfb" or "xen grants to v1" commit, should I find the exact commit
> >>>>> that causes that problem or these informations are enough for you?
> >>>>
> >>>> It's because of vfb. Apparently vfb framebuffer stays mapped as rw even
> >>>> after vfb shutdown
> >>>> phcoder@debian:15:52:40:~/grub2$ sudo xenstore-ls
> >>>> /local/domain/54/device/vfb
> >>>> 0 = ""
> >>>>  backend = "/local/domain/0/backend/vfb/54/0"
> >>>>  backend-id = "0"
> >>>>  state = "1"
> >>>> phcoder@debian:15:52:51:~/grub2$ sudo xenstore-ls
> >>>> /local/domain/0/backend/vfb/54/0
> >>>> frontend = "/local/domain/54/device/vfb/0"
> >>>> frontend-id = "54"
> >>>> online = "1"
> >>>> state = "2"
> >>>> domain = "grub"
> >>>> vnc = "1"
> >>>> vnclisten = "127.0.0.1"
> >>>> vncdisplay = "0"
> >>>> vncunused = "1"
> >>>> sdl = "0"
> >>>> opengl = "0"
> >>>> feature-resize = "1"
> >>>> hotplug-status = "connected"
> >>>>
> >>>> When I do "dry vfb": do everything except writing vfb state problem
> >>>> disappears. So my question would be:
> >>>> - how can I inspect how backend maps framebuffer pages?
> >>>
> >>> There is only one xenfb backend: hw/display/xenfb.c in the QEMU source
> >>> tree.
> >>>
> >>>
> >>>> - Why does it map as rw and not ro? It doesn't need to write to framebuffer?
> >>>
> >>> Actually it is mapping it RO, see hw/display/xenfb.c:xenfb_map_fb
> >>>
> >> ./tools/qemu-xen-dir-remote/hw/xenfb.c:
> >>     xenfb->pixels = xc_map_foreign_pages(xen_xc, xenfb->c.xendev.dom,
> >> 					 PROT_READ | PROT_WRITE, fbmfns, xenfb->fbpages);
> >
> > You are right, my bad.
> > I did a quick test and it should be safe to modify it to PROT_READ only.
> >
> >
> >>>> - How do I force it to drop the mapping?
> >>>
> >>> Theoretically QEMU should drop the mapping at disconnect time:
> >>>
> >>> hw/display/xenfb.c:fb_disconnect
> >>>
> >>>     /*
> >>>      * FIXME: qemu can't un-init gfx display (yet?).
> >>>      *   Replacing the framebuffer with anonymous shared memory
> >>>      *   instead.  This releases the guest pages and keeps qemu happy.
> >>>      */
> >>>     fb->pixels = mmap(fb->pixels, fb->fbpages * XC_PAGE_SIZE,
> >>>                       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON,
> >>>                       -1, 0);
> >>>
> >> Could this fail?
> >
> > Yes and we don't check for the return value (-1 in case of error). Well spotted!
> > Do you want to submit a patch to fix both issues or should I do it?
> >
> I'm fine with you doing it.

Done, see:

http://marc.info/?l=qemu-devel&m=138901099512700&w=2
--1342847746-890200315-1389010962=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-890200315-1389010962=:8667--


	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-890200315-1389010962=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1401061223251.8667@kaball.uk.xensource.com>

On Fri, 20 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> On 19.12.2013 12:54, Stefano Stabellini wrote:
> > On Wed, 18 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> >> On 18.12.2013 20:39, Stefano Stabellini wrote:
> >>> On Wed, 18 Dec 2013, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> >>>> On 17.12.2013 15:35, Fabio Fantoni wrote:
> >>>>> On 17/12/2013 15:10, Fabio Fantoni wrote:
> >>>>>> On 17/12/2013 15:08, Vladimir 'φ-coder/phcoder' Serbinenko wrote:
> >>>>>>>> Thanks.
> >>>>>>>> Now there is another error, probably introduced by xenfb support:
> >>>>>>>>
> >>>>>>> doesn't look like related to xenfb. Is it 64-bit or PAE guest?
> >>>>>>
> >>>>>> 64 bit
> >>>>>
> >>>>> I did "git reset --hard" to commit "Remove grub_bios_interrupt on
> >>>>> coreboot." and then I applied only
> >>>>> "grub-core/lib/x86_64/xen/relocator.S: Fix hypercall ABI violation."
> >>>>> commit.
> >>>>> Now the Sid domU boots correctly, so the regression is caused by the
> >>>>> "xenfb" or "xen grants to v1" commit. Should I find the exact commit
> >>>>> that causes the problem, or is this information enough for you?
> >>>>
> >>>> It's because of vfb. Apparently the vfb framebuffer stays mapped as rw
> >>>> even after vfb shutdown:
> >>>> phcoder@debian:15:52:40:~/grub2$ sudo xenstore-ls
> >>>> /local/domain/54/device/vfb
> >>>> 0 = ""
> >>>>  backend = "/local/domain/0/backend/vfb/54/0"
> >>>>  backend-id = "0"
> >>>>  state = "1"
> >>>> phcoder@debian:15:52:51:~/grub2$ sudo xenstore-ls
> >>>> /local/domain/0/backend/vfb/54/0
> >>>> frontend = "/local/domain/54/device/vfb/0"
> >>>> frontend-id = "54"
> >>>> online = "1"
> >>>> state = "2"
> >>>> domain = "grub"
> >>>> vnc = "1"
> >>>> vnclisten = "127.0.0.1"
> >>>> vncdisplay = "0"
> >>>> vncunused = "1"
> >>>> sdl = "0"
> >>>> opengl = "0"
> >>>> feature-resize = "1"
> >>>> hotplug-status = "connected"
> >>>>
> >>>> When I do a "dry vfb" (do everything except writing the vfb state),
> >>>> the problem disappears. So my question would be:
> >>>> - how can I inspect how backend maps framebuffer pages?
> >>>
> >>> There is only one xenfb backend: hw/display/xenfb.c in the QEMU source
> >>> tree.
> >>>
> >>>
> >>>> - Why does it map it as rw and not ro? It doesn't need to write to the
> >>>> framebuffer, does it?
> >>>
> >>> Actually it is mapping it RO, see hw/display/xenfb.c:xenfb_map_fb
> >>>
> >> ./tools/qemu-xen-dir-remote/hw/xenfb.c:
> >>     xenfb->pixels = xc_map_foreign_pages(xen_xc, xenfb->c.xendev.dom,
> >>                                          PROT_READ | PROT_WRITE, fbmfns, xenfb->fbpages);
> >
> > You are right, my bad.
> > I did a quick test and it should be safe to modify it to PROT_READ only.
> >
> >
> >>>> - How do I force it to drop the mapping?
> >>>
> >>> Theoretically QEMU should drop the mapping at disconnect time:
> >>>
> >>> hw/display/xenfb.c:fb_disconnect
> >>>
> >>>     /*
> >>>      * FIXME: qemu can't un-init gfx display (yet?).
> >>>      *   Replacing the framebuffer with anonymous shared memory
> >>>      *   instead.  This releases the guest pages and keeps qemu happy.
> >>>      */
> >>>     fb->pixels = mmap(fb->pixels, fb->fbpages * XC_PAGE_SIZE,
> >>>                       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON,
> >>>                       -1, 0);
> >>>
> >> Could this fail?
> >
> > Yes, and we don't check the return value (-1 in case of error). Well spotted!
> > Do you want to submit a patch to fix both issues or should I do it?
> >
> I'm fine with you doing it.

Done, see:

http://marc.info/?l=qemu-devel&m=138901099512700&w=2
--1342847746-890200315-1389010962=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-890200315-1389010962=:8667--


From xen-devel-bounces@lists.xen.org Mon Jan 06 12:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 12:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09X8-0001oX-Cv; Mon, 06 Jan 2014 12:43:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W09X6-0001oO-Nt
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 12:43:52 +0000
Received: from [85.158.139.211:23272] by server-11.bemta-5.messagelabs.com id
	B6/97-23268-805AAC25; Mon, 06 Jan 2014 12:43:52 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389012229!8079623!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4899 invoked from network); 6 Jan 2014 12:43:51 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Jan 2014 12:43:51 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 12:43:48 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="641444412"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.133])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 12:43:47 +0000
Message-ID: <52CAA503.5080707@terremark.com>
Date: Mon, 06 Jan 2014 07:43:47 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
	<52C9E621.4020608@citrix.com>
In-Reply-To: <52C9E621.4020608@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/05/14 18:09, Andrew Cooper wrote:
> On 04/01/2014 17:52, Don Slutz wrote:
>> Using a 1G hvm domU (in grub) and gdbsx:
>>
>> (gdb) set arch i8086
>> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
>> of GDB.  Attempting to continue with the default i8086 settings.
>>
>> The target architecture is assumed to be i8086
>> (gdb) target remote localhost:9999
>> Remote debugging using localhost:9999
>> Remote debugging from host 127.0.0.1
>> 0x0000d475 in ?? ()
>> (gdb) x/1xh 0x6ae9168b
>>
>> Will reproduce this bug.
>>
>> With a debug=y build you will get:
>>
>> Assertion '!preempt_count()' failed at preempt.c:37
>>
>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>
>>           [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>            ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>            ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>            ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>            ffff82c4c01709ed  get_page+0x2d/0x100
>>            ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>            ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>            ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>            ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>            ffff82c4c012938b  add_entry+0x4b/0xb0
>>            ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>
>> And gdb output:
>>
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     0x3024
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     Ignoring packet error, continuing...
>> Reply contains invalid hex digit 116
>>
>> The 1st one worked because the p2m.lock is recursive and the PCPU
>> had not yet changed.
>>
>> crash reports (for example):
>>
>> crash> mm_rwlock_t 0xffff83083f913010
>> struct mm_rwlock_t {
>>    lock = {
>>      raw = {
>>        lock = 2147483647
>>      },
>>      debug = {<No data fields>}
>>    },
>>    unlock_level = 0,
>>    recurse_count = 1,
>>    locker = 1,
>>    locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
>> }
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>>   xen/arch/x86/debug.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>> index 3e21ca8..eceb805 100644
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
>>           mfn = (has_hvm_container_domain(dp)
>>                  ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>>                  : dbg_pv_va2mfn(addr, dp, pgd3));
>> -        if ( mfn == INVALID_MFN )
>> +        if ( mfn == INVALID_MFN ) {
> Xen coding style would have this brace on the next line.

Will fix.


> Content-wise, I think it would be better to fix up the error path in
> dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having taken a
> reference on the gfn.

That would only fix the "p2m_is_readonly(gfntype) and writing memory" case
(see cover letter). To fix the "requested vaddr does not exist" case, I
would also need to add a check for INVALID_MFN in dbg_hvm_va2mfn(). As
it currently stands, this is the smaller fix.

    -Don Slutz


> ~Andrew
>
>> +            if ( gfn != INVALID_GFN )
>> +                put_gfn(dp, gfn);
>>               break;
>> +        }
>>   
>>           va = map_domain_page(mfn);
>>           va = va + (addr & (PAGE_SIZE-1));


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:01:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09nd-0002aP-Lw; Mon, 06 Jan 2014 13:00:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W09nb-0002aK-RV
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 13:00:56 +0000
Received: from [85.158.137.68:45460] by server-7.bemta-3.messagelabs.com id
	C2/9C-27599-709AAC25; Mon, 06 Jan 2014 13:00:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389013252!3803298!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21761 invoked from network); 6 Jan 2014 13:00:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87869584"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 13:00:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	08:00:51 -0500
Message-ID: <1389013250.31766.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 6 Jan 2014 13:00:50 +0000
In-Reply-To: <21194.39413.802675.333845@mariner.uk.xensource.com>
References: <osstest-24246-mainreport@xen.org>
	<21194.39413.802675.333845@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:56 +0000, Ian Jackson wrote:
> xen.org writes ("[linux-arm-xen test] 24246: regressions - FAIL"):
> > flight 24246 linux-arm-xen real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24246/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-armhf-pvops             4 kernel-build         fail REGR. vs. 21427
> 
> Having fixed the problem with the test job setup, this now fails with:
> 
>   Building modules, stage 2.
>   MODPOST 21 modules
>   Kernel: arch/arm/boot/Image is ready
>   AS      arch/arm/boot/compressed/head.o
>   GZIP    arch/arm/boot/compressed/piggy.gzip
> ERROR: "phys_to_mach" [drivers/xen/xen-gntalloc.ko] undefined!
>   CC      arch/arm/boot/compressed/misc.o
>   CC      arch/arm/boot/compressed/decompress.o
>   CC      arch/arm/boot/compressed/string.o
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
> 
> > version targeted for testing:
> >  linux                cd7bd0ad11c8979ce247215833eb23d661c1c8fe

I can't find this in
git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
which I think is the source of the arm kernels.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:02:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09pO-0002gT-Fg; Mon, 06 Jan 2014 13:02:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W09pN-0002gJ-CV
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:02:45 +0000
Received: from [85.158.139.211:40295] by server-16.bemta-5.messagelabs.com id
	83/58-11843-479AAC25; Mon, 06 Jan 2014 13:02:44 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389013362!8087799!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11724 invoked from network); 6 Jan 2014 13:02:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:02:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90045465"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 13:02:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:02:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W09mm-0005Sb-Pr;
	Mon, 06 Jan 2014 13:00:04 +0000
Date: Mon, 6 Jan 2014 12:59:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paul Durrant <paul.durrant@citrix.com>
In-Reply-To: <1387790838-8852-1-git-send-email-paul.durrant@citrix.com>
Message-ID: <alpine.DEB.2.02.1401061257360.8667@kaball.uk.xensource.com>
References: <1387790838-8852-1-git-send-email-paul.durrant@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	netdev@vger.kernel.org, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net] xen-netback: fix guest-receive-side
 array sizes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 23 Dec 2013, Paul Durrant wrote:
> The sizes chosen for the metadata and grant_copy_op arrays on the guest
> receive side are wrong:
> 
> - The meta array is needlessly twice the ring size, when we only ever
>   consume a single array element per RX ring slot
> - The grant_copy_op array is way too small. It's sized based on a bogus
>   assumption: that at most two copy ops will be used per ring slot. This
>   may have been true at some point in the past but it's clear from looking
>   at start_new_rx_buffer() that a new ring slot is only consumed if a frag
>   would overflow the current slot (plus some other conditions) so the actual
>   limit is MAX_SKB_FRAGS grant_copy_ops per ring slot.
> 
> This patch fixes those two sizing issues and, because grant_copy_ops grows
> so much, it pulls it out into a separate chunk of vmalloc()ed memory.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>

Unfortunately this patch (now in 3.13-rc7) breaks the ARM build:

  CC      drivers/net/xen-netback/interface.o
drivers/net/xen-netback/interface.c: In function 'xenvif_alloc':
drivers/net/xen-netback/interface.c:311:2: error: implicit declaration of function 'vmalloc' [-Werror=implicit-function-declaration]
  vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
  ^
drivers/net/xen-netback/interface.c:311:21: warning: assignment makes pointer from integer without a cast [enabled by default]
  vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
                     ^
drivers/net/xen-netback/interface.c: In function 'xenvif_free':
drivers/net/xen-netback/interface.c:499:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
  vfree(vif->grant_copy_op);
  ^
cc1: some warnings being treated as errors
make[3]: *** [drivers/net/xen-netback/interface.o] Error 1
make[2]: *** [drivers/net/xen-netback] Error 2
make[1]: *** [drivers/net] Error 2
make: *** [drivers] Error 2

I suggest we fix it (probably by reverting it) ASAP, otherwise we risk
breaking the release.
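
For reference, an "implicit declaration" error pair like the one above is the classic symptom of a missing header: on x86 <linux/vmalloc.h> happens to be pulled in transitively, while on ARM it is not. A sketch of the minimal non-revert fix (an illustration only, not necessarily the hunk that eventually went upstream):

```
/* drivers/net/xen-netback/interface.c */
#include <linux/vmalloc.h>	/* declares vmalloc() and vfree() */
```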


> This was originally submitted for discussion on xen-devel. Wei acked it
> there, which is why this carbon-copy submission to netdev already carries
> his ack.
> 
>  drivers/net/xen-netback/common.h    |   19 +++++++++++++------
>  drivers/net/xen-netback/interface.c |   10 ++++++++++
>  drivers/net/xen-netback/netback.c   |    2 +-
>  3 files changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 08ae01b..c47794b 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -101,6 +101,13 @@ struct xenvif_rx_meta {
>  
>  #define MAX_PENDING_REQS 256
>  
> +/* It's possible for an skb to have a maximal number of frags
> + * but still be less than MAX_BUFFER_OFFSET in size. Thus the
> + * worst-case number of copy operations is MAX_SKB_FRAGS per
> + * ring slot.
> + */
> +#define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
> +
>  struct xenvif {
>  	/* Unique identifier for this interface. */
>  	domid_t          domid;
> @@ -143,13 +150,13 @@ struct xenvif {
>  	 */
>  	RING_IDX rx_req_cons_peek;
>  
> -	/* Given MAX_BUFFER_OFFSET of 4096 the worst case is that each
> -	 * head/fragment page uses 2 copy operations because it
> -	 * straddles two buffers in the frontend.
> -	 */
> -	struct gnttab_copy grant_copy_op[2*XEN_NETIF_RX_RING_SIZE];
> -	struct xenvif_rx_meta meta[2*XEN_NETIF_RX_RING_SIZE];
> +	/* This array is allocated separately as it is large */
> +	struct gnttab_copy *grant_copy_op;
>  
> +	/* We create one meta structure per ring request we consume, so
> +	 * the maximum number is the same as the ring size.
> +	 */
> +	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>  
>  	u8               fe_dev_addr[6];
>  
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 870f1fa..34ca4e5 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -307,6 +307,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	SET_NETDEV_DEV(dev, parent);
>  
>  	vif = netdev_priv(dev);
> +
> +	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
> +				     MAX_GRANT_COPY_OPS);
> +	if (vif->grant_copy_op == NULL) {
> +		pr_warn("Could not allocate grant copy space for %s\n", name);
> +		free_netdev(dev);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
>  	vif->domid  = domid;
>  	vif->handle = handle;
>  	vif->can_sg = 1;
> @@ -487,6 +496,7 @@ void xenvif_free(struct xenvif *vif)
>  
>  	unregister_netdev(vif->dev);
>  
> +	vfree(vif->grant_copy_op);
>  	free_netdev(vif->dev);
>  
>  	module_put(THIS_MODULE);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 7b4fd93..7842555 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -608,7 +608,7 @@ void xenvif_rx_action(struct xenvif *vif)
>  	if (!npo.copy_prod)
>  		return;
>  
> -	BUG_ON(npo.copy_prod > ARRAY_SIZE(vif->grant_copy_op));
> +	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
>  	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
>  
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:02:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09pQ-0002gz-3c; Mon, 06 Jan 2014 13:02:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W09pO-0002gQ-Op
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 13:02:46 +0000
Received: from [85.158.137.68:59345] by server-5.bemta-3.messagelabs.com id
	79/93-25188-679AAC25; Mon, 06 Jan 2014 13:02:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389013363!7456556!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2558 invoked from network); 6 Jan 2014 13:02:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:02:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90045474"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 13:02:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:02:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W09hA-0005Mn-OZ;
	Mon, 06 Jan 2014 12:54:16 +0000
Date: Mon, 6 Jan 2014 12:53:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
In-Reply-To: <21194.39413.802675.333845@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401061253050.8667@kaball.uk.xensource.com>
References: <osstest-24246-mainreport@xen.org>
	<21194.39413.802675.333845@mariner.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Ian Jackson wrote:
> xen.org writes ("[linux-arm-xen test] 24246: regressions - FAIL"):
> > flight 24246 linux-arm-xen real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24246/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-armhf-pvops             4 kernel-build         fail REGR. vs. 21427
> 
> Having fixed the problem with the test job setup, this now fails with:
> 
>   Building modules, stage 2.
>   MODPOST 21 modules
>   Kernel: arch/arm/boot/Image is ready
>   AS      arch/arm/boot/compressed/head.o
>   GZIP    arch/arm/boot/compressed/piggy.gzip
> ERROR: "phys_to_mach" [drivers/xen/xen-gntalloc.ko] undefined!
>   CC      arch/arm/boot/compressed/misc.o
>   CC      arch/arm/boot/compressed/decompress.o
>   CC      arch/arm/boot/compressed/string.o
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
> 
> > version targeted for testing:
> >  linux                cd7bd0ad11c8979ce247215833eb23d661c1c8fe
> > baseline version:
> >  linux                5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52
> 
> So 5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52 worked.

I pushed a new tree based on 3.13-rc6; it should be fixed now.
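
For the record, an undefined module symbol at MODPOST time ("phys_to_mach" undefined in xen-gntalloc.ko) usually means the symbol simply lacks an export for module use. A sketch of the kind of one-line fix involved (assuming the symbol lives in arch/arm/xen/p2m.c; the actual commit in the pushed tree may differ):

```
/* arch/arm/xen/p2m.c -- illustrative, not the verified upstream hunk */
EXPORT_SYMBOL_GPL(phys_to_mach);	/* make it visible to xen-gntalloc.ko */
```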

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:03:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09pc-0002l2-K8; Mon, 06 Jan 2014 13:03:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W09pa-0002kW-Qc
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:02:59 +0000
Received: from [85.158.137.68:39394] by server-14.bemta-3.messagelabs.com id
	1A/B9-06105-289AAC25; Mon, 06 Jan 2014 13:02:58 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389013376!7506683!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2509 invoked from network); 6 Jan 2014 13:02:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:02:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87870016"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 13:02:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:02:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W09h4-0005Mk-Tb;
	Mon, 06 Jan 2014 12:54:10 +0000
Date: Mon, 6 Jan 2014 12:54:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <qemu-devel@nongnu.org>, <xen-devel@lists.xen.org>
Message-ID: <20140106125410.GD3119@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com
Subject: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

The idea is to modify QEMU's Makefiles and implement some stubs, making
it possible to tailor QEMU into a smaller binary.

The current setup for Xen on x86 is to build i386-softmmu and use this
single binary for two purposes:
1. as the device emulator for HVM guests.
2. as the PV driver backend for PV guests.

In either case CPU emulation is never used, because Xen already handles
that. So we are in fact carrying a load of unused code in the QEMU build.

What I have in mind is to build a QEMU binary which:
1. does not include CPU emulation code at all.
2. only includes components that are useful (exactly which ones is TBD).

And the rationales behind this idea are:

1. Reduced memory footprint. One use case would be running Xen on an
   embedded platform (x86 or ARM), where we would expect the system to
   have very limited resources. The smaller the binary, the better.

2. It doesn't make sense to have i386 emulation on an ARM platform.
   Arguably nobody can prevent a user from running an i386 emulator on
   ARM, but it doesn't make sense in Xen's setup, where QEMU is only
   used as a PV device backend on ARM.

3. Security. It is much easier to audit a small code base.

Please note that I'm not proposing to invalidate all the other use cases.
I'm speaking only with my Xen developer's hat on, aiming to make QEMU
more flexible.

At the implementation level I (hopefully) only need to add a few stubs,
create some new CONFIG_* options and move a few things around. It might
not be as intrusive as one would think.

In fact I already hacked up a prototype during Christmas. What I've
done so far:

1. create target-null, which only has some stubs for the CPU emulation
   framework.

2. add a few lines to configure / Makefiles*, create
   default-configs/null-softmmu

Finally I got a qemu-system-null, and the effect is immediately visible
-- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't yet
looked at which device emulation code can be removed, so the size could
be made even smaller.

What do you think about this idea?

Thanks
Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:03:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09pc-0002l2-K8; Mon, 06 Jan 2014 13:03:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W09pa-0002kW-Qc
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:02:59 +0000
Received: from [85.158.137.68:39394] by server-14.bemta-3.messagelabs.com id
	1A/B9-06105-289AAC25; Mon, 06 Jan 2014 13:02:58 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389013376!7506683!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2509 invoked from network); 6 Jan 2014 13:02:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:02:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="87870016"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 13:02:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:02:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W09h4-0005Mk-Tb;
	Mon, 06 Jan 2014 12:54:10 +0000
Date: Mon, 6 Jan 2014 12:54:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <qemu-devel@nongnu.org>, <xen-devel@lists.xen.org>
Message-ID: <20140106125410.GD3119@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com
Subject: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

The idea is to modify QEMU's Makefiles and implement some stubs, making
it possible to tailor QEMU down to a smaller binary.

The current setup for Xen on x86 is to build i386-softmmu and use this
single binary for two purposes:
1. as the device emulator for HVM guests.
2. as the PV driver backend for PV guests.

In either case CPU emulation is never used, because Xen already handles
that. So we in fact carry a load of unused code in the QEMU build.

What I have in mind is to build a QEMU binary which:
1. does not include CPU emulation code at all.
2. only includes components that are useful (exactly which is TBD).

And the rationales behind this idea are:

1. Reduce memory footprint. One use case would be running Xen on an
   embedded platform (x86 or ARM), where the system is expected to have
   very limited resources. The smaller the binary, the better.

2. It doesn't make sense to have i386 emulation on an ARM platform.
   Arguably nobody can prevent a user from running an i386 emulator on
   ARM, but it doesn't make sense in Xen's setup, where QEMU is only
   used as a PV device backend on ARM.

3. Security concern. It's much easier to audit a small code base.

Please note that I'm not proposing to invalidate all the other use cases.
I'm only speaking with my Xen developer's hat on, aiming to make QEMU
more flexible.

At the implementation level I (hopefully) only need to add a few stubs,
create some new CONFIG_* options and move a few things around. It might
not be as intrusive as one would think.

In fact I've already hacked up a prototype over Christmas. What I've
done so far:

1. create target-null which only has some stubs to CPU emulation
   framework.

2. add a few lines to configure / Makefiles*, create
   default-configs/null-softmmu

Finally I got a qemu-system-null, and the effect is immediately visible
-- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't yet
looked at which device emulation code can be removed, so the size could
be made even smaller.

What do you think about this idea?

Thanks
Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:06:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:06:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09sV-00036f-93; Mon, 06 Jan 2014 13:05:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W09sU-00036X-8q
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:05:58 +0000
Received: from [193.109.254.147:32242] by server-4.bemta-14.messagelabs.com id
	7F/B1-03916-53AAAC25; Mon, 06 Jan 2014 13:05:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389013555!9078708!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1139 invoked from network); 6 Jan 2014 13:05:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:05:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,612,1384300800"; d="scan'208";a="90046222"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 13:05:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:05:30 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W09s1-0005aT-So;
	Mon, 06 Jan 2014 13:05:29 +0000
Date: Mon, 6 Jan 2014 13:04:39 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401061257360.8667@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401061303480.8667@kaball.uk.xensource.com>
References: <1387790838-8852-1-git-send-email-paul.durrant@citrix.com>
	<alpine.DEB.2.02.1401061257360.8667@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	netdev@vger.kernel.org, xen-devel@lists.xen.org,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net] xen-netback: fix guest-receive-side
 array sizes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Stefano Stabellini wrote:
> On Mon, 23 Dec 2013, Paul Durrant wrote:
> > The sizes chosen for the metadata and grant_copy_op arrays on the guest
> > receive side are wrong:
> > 
> > - The meta array is needlessly twice the ring size, when we only ever
> >   consume a single array element per RX ring slot
> > - The grant_copy_op array is way too small. It's sized based on a bogus
> >   assumption: that at most two copy ops will be used per ring slot. This
> >   may have been true at some point in the past but it's clear from looking
> >   at start_new_rx_buffer() that a new ring slot is only consumed if a frag
> >   would overflow the current slot (plus some other conditions) so the actual
> >   limit is MAX_SKB_FRAGS grant_copy_ops per ring slot.
> > 
> > This patch fixes those two sizing issues and, because grant_copy_ops grows
> > so much, it pulls it out into a separate chunk of vmalloc()ed memory.
> > 
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Acked-by: Wei Liu <wei.liu2@citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> 
> Unfortunately this patch (now in 3.13-rc7) breaks the ARM build:
> 
>   CC      drivers/net/xen-netback/interface.o
> drivers/net/xen-netback/interface.c: In function 'xenvif_alloc':
> drivers/net/xen-netback/interface.c:311:2: error: implicit declaration of function 'vmalloc' [-Werror=implicit-function-declaration]
>   vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
>   ^
> drivers/net/xen-netback/interface.c:311:21: warning: assignment makes pointer from integer without a cast [enabled by default]
>   vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
>                      ^
> drivers/net/xen-netback/interface.c: In function 'xenvif_free':
> drivers/net/xen-netback/interface.c:499:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
>   vfree(vif->grant_copy_op);
>   ^
> cc1: some warnings being treated as errors
> make[3]: *** [drivers/net/xen-netback/interface.o] Error 1
> make[2]: *** [drivers/net/xen-netback] Error 2
> make[1]: *** [drivers/net] Error 2
> make: *** [drivers] Error 2
> 
> I suggest we fix it (probably by reverting it) ASAP, otherwise we risk
> breaking the release.

Actually I realized that the issue has already been fixed, thanks!

http://marc.info/?l=linux-kernel&m=138897212904706&w=2
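(The implicit declarations above suggest interface.c simply lacked the
vmalloc header, which x86 builds presumably pulled in indirectly. The
fix would be along the lines of this sketch, reconstructed from the
error messages rather than copied from the actual commit:)

```diff
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@
+#include <linux/vmalloc.h>  /* vmalloc()/vfree() in xenvif_alloc()/xenvif_free() */
```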

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:13:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W09zq-0003cp-DP; Mon, 06 Jan 2014 13:13:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W09zo-0003ck-LD
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:13:32 +0000
Received: from [85.158.139.211:55531] by server-3.bemta-5.messagelabs.com id
	70/A9-04773-BFBAAC25; Mon, 06 Jan 2014 13:13:31 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389014009!8086868!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16326 invoked from network); 6 Jan 2014 13:13:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:13:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90049874"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 13:13:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:13:29 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W09zk-0005ha-OZ;
	Mon, 06 Jan 2014 13:13:28 +0000
Message-ID: <52CAABF8.30702@citrix.com>
Date: Mon, 6 Jan 2014 13:13:28 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
	<52C9E621.4020608@citrix.com> <52CAA503.5080707@terremark.com>
In-Reply-To: <52CAA503.5080707@terremark.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 12:43, Don Slutz wrote:
> On 01/05/14 18:09, Andrew Cooper wrote:
>> On 04/01/2014 17:52, Don Slutz wrote:
>>> Using a 1G hvm domU (in grub) and gdbsx:
>>>
>>> (gdb) set arch i8086
>>> warning: A handler for the OS ABI "GNU/Linux" is not built into this
>>> configuration
>>> of GDB.  Attempting to continue with the default i8086 settings.
>>>
>>> The target architecture is assumed to be i8086
>>> (gdb) target remote localhost:9999
>>> Remote debugging using localhost:9999
>>> Remote debugging from host 127.0.0.1
>>> 0x0000d475 in ?? ()
>>> (gdb) x/1xh 0x6ae9168b
>>>
>>> Will reproduce this bug.
>>>
>>> With a debug=y build you will get:
>>>
>>> Assertion '!preempt_count()' failed at preempt.c:37
>>>
>>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>>
>>>           [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>>            ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>>            ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>>            ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>>            ffff82c4c01709ed  get_page+0x2d/0x100
>>>            ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>>            ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>>            ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>>            ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>>            ffff82c4c012938b  add_entry+0x4b/0xb0
>>>            ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>>
>>> And gdb output:
>>>
>>> (gdb) x/1xh 0x6ae9168b
>>> 0x6ae9168b:     0x3024
>>> (gdb) x/1xh 0x6ae9168b
>>> 0x6ae9168b:     Ignoring packet error, continuing...
>>> Reply contains invalid hex digit 116
>>>
>>> The 1st one worked because the p2m.lock is recursive and the PCPU
>>> had not yet changed.
>>>
>>> crash reports (for example):
>>>
>>> crash> mm_rwlock_t 0xffff83083f913010
>>> struct mm_rwlock_t {
>>>    lock = {
>>>      raw = {
>>>        lock = 2147483647
>>>      },
>>>      debug = {<No data fields>}
>>>    },
>>>    unlock_level = 0,
>>>    recurse_count = 1,
>>>    locker = 1,
>>>    locker_function = 0xffff82c4c022c640 <__func__.13514>
>>> "__get_gfn_type_access"
>>> }
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>> ---
>>>   xen/arch/x86/debug.c | 5 ++++-
>>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>>> index 3e21ca8..eceb805 100644
>>> --- a/xen/arch/x86/debug.c
>>> +++ b/xen/arch/x86/debug.c
>>> @@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf,
>>> int len, struct domain *dp,
>>>           mfn = (has_hvm_container_domain(dp)
>>>                  ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>>>                  : dbg_pv_va2mfn(addr, dp, pgd3));
>>> -        if ( mfn == INVALID_MFN )
>>> +        if ( mfn == INVALID_MFN ) {
>> Xen coding style would have this brace on the next line.
>
> Will fix.
>
>
>> Content-wise, I think it would be better to fix up the error path in
>> dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having taken a
>> reference on the gfn.
>
> That would only fix "p2m_is_readonly(gfntype) and writing memory" case
> (see cover letter). To fix "the requested vaddr does not exist" case,
> I would need to add a check for INVALID_MFN in dbg_hvm_va2mfn() also.
> As it currently stands it is a smaller fix.
>
>    -Don Slutz

Size really doesn't matter.

What does matter is that the caller of dbg_hvm_va2mfn() should not have
to clean up a reference taken when it returns an error.

Anyway, "the requested vaddr does not exist" is covered by
paging_gva_to_gfn(), which will exit before taking the ref on the gfn.

The following (completely untested) patch ought to do:

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3e21ca8..3372adb 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int
toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
     }
 
     DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
+    }
+
     return mfn;
 }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:24:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0A9q-000450-SL; Mon, 06 Jan 2014 13:23:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.crosthwaite@petalogix.com>) id 1W0A9O-00044V-2W
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:23:26 +0000
Received: from [193.109.254.147:59505] by server-15.bemta-14.messagelabs.com
	id B0/60-22186-D4EAAC25; Mon, 06 Jan 2014 13:23:25 +0000
X-Env-Sender: peter.crosthwaite@petalogix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389014604!5580032!1
X-Originating-IP: [74.125.82.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2539 invoked from network); 6 Jan 2014 13:23:24 -0000
Received: from mail-wg0-f48.google.com (HELO mail-wg0-f48.google.com)
	(74.125.82.48)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:23:24 -0000
Received: by mail-wg0-f48.google.com with SMTP id z12so15820248wgg.27
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 05:23:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=yUB60vL8t+PhdZaAgrTL/YawGWCNX0h2ARvqcQOjjhM=;
	b=F+tCA+wC7cFz+CCsB6UnK+qweBNUMBY43/Msmo32eYuK+OWJA/ocA8GwWQvk3hZscd
	BUdrJAoWt0xRV3R2fjNPSKvkQcCFujvHg+bqiel4HBQ24+Ib3Y5DhMIPfhwKoYTFUAVq
	1y2GkLidf6m3d9IJWBhv03DkUMXIMDOtsrovI1xPmpRICTQHVMnv9AhExvpzu5FWL54p
	vRPB5OsOZwGDrR/lgrTPRmyLk0S4kH1Fa1JMIdqpAEeI8zyjVPLT0A0rgrYRivhQPXQa
	BbEZzdoqjp955KGb7ekjGjNsKahkXhsTkE0Zy+wLYn+C3+y1hhnvOV51dCM8MUu2P6g+
	o4kQ==
X-Gm-Message-State: ALoCoQkFJPWXOiHisChoKrd/HcE+J9JUbT/96ofAMr0R74/iGn0deZqEk2V3KbN2RpTY2DiWFvVl
MIME-Version: 1.0
X-Received: by 10.180.76.103 with SMTP id j7mr12179795wiw.58.1389014604248;
	Mon, 06 Jan 2014 05:23:24 -0800 (PST)
Received: by 10.227.13.9 with HTTP; Mon, 6 Jan 2014 05:23:24 -0800 (PST)
In-Reply-To: <20140106125410.GD3119@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
Date: Mon, 6 Jan 2014 23:23:24 +1000
X-Google-Sender-Auth: PfgVZOGhLlXbIU0_I9smwLEXUSo
Message-ID: <CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
From: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
To: Wei Liu <wei.liu2@citrix.com>
X-Mailman-Approved-At: Mon, 06 Jan 2014 13:23:53 +0000
Cc: "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 6, 2014 at 10:54 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> Hi all
>
> The idea is to modify QEMU's Makefiles and implement some stubs to
> make it possible to tailor QEMU into a smaller binary.
>
> The current setup for Xen on x86 is to build i386-softmmu and use this
> single binary for two purposes:
> 1. it serves as the device emulator for HVM guests.
> 2. it serves as the PV driver backend for PV guests.
>
> In either case CPU emulation is never used, because Xen handles that
> already. So we are in fact carrying a load of unused code in the QEMU build.
>
> What I have in mind is to build a QEMU binary which:
> 1. does not include CPU emulation code at all.
> 2. only includes components that are useful (what counts as useful is TBD).
>
> And the rationales behind this idea are:
>
> 1. Reduce memory footprint. One use case would be running Xen on an embedded
>    platform (x86 or ARM). We would expect such a system to have very limited
>    resources. The smaller the binary, the better.
>
> 2. It doesn't make sense to have i386 emulation on an ARM platform.
>    Arguably nobody can prevent users from running an i386 emulator on an ARM
>    platform, but it doesn't make sense in Xen's setup, where QEMU is
>    only used as a PV device backend on ARM.
>
> 3. Security concern. It's much easier to audit a small code base.
>
> Please note that I'm not proposing to invalidate all the other use cases.
> I'm only speaking with my Xen developer's hat on, aiming to make QEMU
> more flexible.
>
> Down at the implementation level I only need to (hopefully) add a few stubs,
> create some new CONFIG_* options and move a few things around. It
> might not be as intrusive as one thinks.
>
> In fact I've already hacked up a prototype during Christmas. What I've
> done so far:
>
> 1. create target-null, which only has some stubs for the CPU emulation
>    framework.
>
> 2. add a few lines to configure / Makefiles*, create
>    default-configs/null-softmmu
>

Your idea of aggressively reducing binary size may not really fit a
defconfig at all. If you are going lean-and-mean based on a specific
use-case I think you need to bring your own config.
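
A bring-your-own config in that spirit might look something like the fragment
below. The file name and the exact CONFIG_* symbols are hypothetical, chosen
to illustrate the idea rather than to reflect QEMU's actual option set:

```make
# default-configs/null-softmmu.mak (hypothetical)
# Backend-only build: no CPU emulation, only the devices this
# deployment actually needs.
CONFIG_XEN=y       # Xen backend glue
CONFIG_VIRTIO=y    # PV device transport
# Everything else (VGA, IDE, audio, ...) deliberately omitted.
```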

> Finally I got a qemu-system-null. And the effect is immediately visible

qemu-system-null has been on my wish-list in the past, although my
reasons were slightly different to yours. Specifically, the goal was
to test CPUs in an RTL simulator interacting with RAM and peripheral
devices hosted in QEMU.

> -- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't really
> looked at what device emulation code can be removed, so the size can even
> be made smaller.
>

So exactly what an appropriate device suite for qemu-system-null is
remains an open question. I would suggest that the "correct" default config
for such a QEMU would actually be the full suite of devices, no less
than what's already in i386. Free of CPU/arch restrictions, all devices
are fair game.

Regards,
Peter

> What do you think about this idea?
>
> Thanks
> Wei.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:25:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ABJ-0004AK-Cn; Mon, 06 Jan 2014 13:25:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W0ABI-0004AD-7M
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:25:24 +0000
Received: from [193.109.254.147:41066] by server-2.bemta-14.messagelabs.com id
	B4/0F-00361-3CEAAC25; Mon, 06 Jan 2014 13:25:23 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389014721!9064083!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29035 invoked from network); 6 Jan 2014 13:25:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:25:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90053863"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 13:25:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 08:25:20 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W0ABE-0005rR-Jh;
	Mon, 06 Jan 2014 13:25:20 +0000
Message-ID: <1389014715.19378.8.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 6 Jan 2014 13:25:15 +0000
In-Reply-To: <20140106125410.GD3119@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: qemu-devel@nongnu.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 12:54 +0000, Wei Liu wrote:
> Hi all
> 
> The idea is to modify QEMU's Makefiles and implement some stubs to
> make it possible to tailor QEMU into a smaller binary.
> 
> The current setup for Xen on x86 is to build i386-softmmu and use this
> single binary for two purposes:
> 1. it serves as the device emulator for HVM guests.
> 2. it serves as the PV driver backend for PV guests.
> 

Perhaps even KVM would benefit too.
Are you sure we don't emulate any instructions for other purposes?

> In either case CPU emulation is never used, because Xen handles that
> already. So we are in fact carrying a load of unused code in the QEMU build.
> 
> What I have in mind is to build a QEMU binary which:
> 1. does not include CPU emulation code at all.
> 2. only includes components that are useful (what counts as useful is TBD).
> 
> And the rationales behind this idea are:
> 
> 1. Reduce memory footprint. One use case would be running Xen on an embedded
>    platform (x86 or ARM). We would expect such a system to have very limited
>    resources. The smaller the binary, the better.
> 
> 2. It doesn't make sense to have i386 emulation on an ARM platform.
>    Arguably nobody can prevent users from running an i386 emulator on an ARM
>    platform, but it doesn't make sense in Xen's setup, where QEMU is
>    only used as a PV device backend on ARM.
> 
> 3. Security concern. It's much easier to audit a small code base.
> 
> Please note that I'm not proposing to invalidate all the other use cases.
> I'm only speaking with my Xen developer's hat on, aiming to make QEMU
> more flexible.
> 
> Down at the implementation level I only need to (hopefully) add a few stubs,
> create some new CONFIG_* options and move a few things around. It
> might not be as intrusive as one thinks.
> 
> In fact I've already hacked up a prototype during Christmas. What I've
> done so far:
> 
> 1. create target-null, which only has some stubs for the CPU emulation
>    framework.
> 
> 2. add a few lines to configure / Makefiles*, create
>    default-configs/null-softmmu
> 
> Finally I got a qemu-system-null. And the effect is immediately visible
> -- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't really
> looked at what device emulation code can be removed, so the size can even
> be made smaller.
> 

Well, I would prefer something like an i386-nocpu. The system QEMU
emulates is still specific to a given architecture. You can't emulate ARM
with the same executable. So we'd add default-configs/arm-nocpu.mak and
default-configs/x86_64-nocpu.mak (i386 doesn't make much sense anymore).

> What do you think about this idea?
> 
> Thanks
> Wei.
> 

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:30:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0AFz-0004pU-M8; Mon, 06 Jan 2014 13:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W0AFy-0004pP-CH
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 13:30:14 +0000
Received: from [85.158.137.68:37400] by server-16.bemta-3.messagelabs.com id
	C5/33-26128-5EFAAC25; Mon, 06 Jan 2014 13:30:13 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389015011!6690092!1
X-Originating-IP: [209.85.160.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7371 invoked from network); 6 Jan 2014 13:30:12 -0000
Received: from mail-pb0-f47.google.com (HELO mail-pb0-f47.google.com)
	(209.85.160.47)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:30:12 -0000
Received: by mail-pb0-f47.google.com with SMTP id um1so18444236pbc.34
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 05:30:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=bmsffutVFrY18x3n6ap/N5UPIeqgo758xCru9RKo91I=;
	b=RRqJVjO+bKHC8onchT5LhagTxny41BlRMQmRORa4E6kRBEi3Eb9pQ7ADZNhisZFHfJ
	YuRJZT29wBo1IrN2c0x03KuZbcX8f1poXIhOeTUG5GjjXWXBNQRC+gfUVo65aT8NUXqX
	dHQ1QnhIDO/sZU3y4kwENsT/wzDw18YarccOE4KQv8NSrELabYJdSu9hmJX+jZOtNPjj
	rvlrmEBwyii3nChUSKArKVPefFHheoVs8sQ3oFx8178sFDPlvCabtKsTEjf4joRY+SiT
	BH25yY7fo6le+WOFnGN2rN9wWzgK2hpx1Lh0k04q7pNGsyVT/+sBx4SkahI5D5f+u6Cj
	SJXg==
X-Received: by 10.68.34.37 with SMTP id w5mr6766182pbi.159.1389015008418;
	Mon, 06 Jan 2014 05:30:08 -0800 (PST)
Received: from [192.168.1.102] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	zc6sm119570599pab.0.2014.01.06.05.30.03 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 05:30:07 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1389003096.13274.29.camel@kazak.uk.xensource.com>
Date: Mon, 6 Jan 2014 21:29:57 +0800
Message-Id: <839E6A17-6F04-48D1-9905-DF3ACA599AB9@gmail.com>
References: <1388758643-3897-1-git-send-email-baozich@gmail.com>
	<1389003096.13274.29.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: xen-devel@lists.xenproject.org, liuw@liuw.name,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com
Subject: Re: [Xen-devel] [PATCH] arm/xen: remove redundant semicolon in
	definition of ioremap_cached
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 6, 2014, at 18:11, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Fri, 2014-01-03 at 22:17 +0800, Chen Baozi wrote:
>> Reported-by: Wei Liu <liuw@liuw.name>
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
>
> Heh, Wei beat you to the patch by 14 minutes ;-)

lol, there must be something wrong with my mail box. I didn't notice
it when sending this patch.

>
> See <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>
>
> Ian.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:30:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0AFz-0004pU-M8; Mon, 06 Jan 2014 13:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W0AFy-0004pP-CH
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 13:30:14 +0000
Received: from [85.158.137.68:37400] by server-16.bemta-3.messagelabs.com id
	C5/33-26128-5EFAAC25; Mon, 06 Jan 2014 13:30:13 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389015011!6690092!1
X-Originating-IP: [209.85.160.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7371 invoked from network); 6 Jan 2014 13:30:12 -0000
Received: from mail-pb0-f47.google.com (HELO mail-pb0-f47.google.com)
	(209.85.160.47)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:30:12 -0000
Received: by mail-pb0-f47.google.com with SMTP id um1so18444236pbc.34
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 05:30:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=bmsffutVFrY18x3n6ap/N5UPIeqgo758xCru9RKo91I=;
	b=RRqJVjO+bKHC8onchT5LhagTxny41BlRMQmRORa4E6kRBEi3Eb9pQ7ADZNhisZFHfJ
	YuRJZT29wBo1IrN2c0x03KuZbcX8f1poXIhOeTUG5GjjXWXBNQRC+gfUVo65aT8NUXqX
	dHQ1QnhIDO/sZU3y4kwENsT/wzDw18YarccOE4KQv8NSrELabYJdSu9hmJX+jZOtNPjj
	rvlrmEBwyii3nChUSKArKVPefFHheoVs8sQ3oFx8178sFDPlvCabtKsTEjf4joRY+SiT
	BH25yY7fo6le+WOFnGN2rN9wWzgK2hpx1Lh0k04q7pNGsyVT/+sBx4SkahI5D5f+u6Cj
	SJXg==
X-Received: by 10.68.34.37 with SMTP id w5mr6766182pbi.159.1389015008418;
	Mon, 06 Jan 2014 05:30:08 -0800 (PST)
Received: from [192.168.1.102] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	zc6sm119570599pab.0.2014.01.06.05.30.03 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 05:30:07 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1389003096.13274.29.camel@kazak.uk.xensource.com>
Date: Mon, 6 Jan 2014 21:29:57 +0800
Message-Id: <839E6A17-6F04-48D1-9905-DF3ACA599AB9@gmail.com>
References: <1388758643-3897-1-git-send-email-baozich@gmail.com>
	<1389003096.13274.29.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: xen-devel@lists.xenproject.org, liuw@liuw.name,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com
Subject: Re: [Xen-devel] [PATCH] arm/xen: remove redundant semicolon in
	definition of ioremap_cached
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 6, 2014, at 18:11, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Fri, 2014-01-03 at 22:17 +0800, Chen Baozi wrote:
>> Reported-by: Wei Liu <liuw@liuw.name>
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
> 
> Heh, Wei beat you to the patch by 14 minutes ;-)

lol, there must be something wrong with my mailbox. I didn't notice
it when sending this patch.

> 
> See <1388757815-11394-1-git-send-email-wei.liu2@citrix.com>
> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:30:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0AGS-0004sT-6I; Mon, 06 Jan 2014 13:30:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0AGR-0004s8-6W
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:30:43 +0000
Received: from [85.158.137.68:8150] by server-14.bemta-3.messagelabs.com id
	CA/E2-06105-200BAC25; Mon, 06 Jan 2014 13:30:42 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389015040!7480465!1
X-Originating-IP: [209.85.215.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13413 invoked from network); 6 Jan 2014 13:30:40 -0000
Received: from mail-la0-f50.google.com (HELO mail-la0-f50.google.com)
	(209.85.215.50)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:30:40 -0000
Received: by mail-la0-f50.google.com with SMTP id el20so9684365lab.23
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 05:30:40 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=1VgzVAb7d4M/HmPDl+++yRTGHN5AYMxxHNX4PPd5k3c=;
	b=mCxkDLGjzT/3ERetUzqziM+uY8J/FJmP7nRy6bR/94wgqM3yNd2ZQIgoNXs/+kW5AC
	FYXgYO7WPrVlMLtfjvQALAW6SbD7+yVhI8ZqIHYICNZJl+7DMSwiMjJ3HU2G8ukunXRj
	5EfKFxgsNZAWD495nbUrH5TnIM6V1zcmwAEH84IRZGNinxzhOGXDhxDnn+OoQ0efGMfo
	qwllfZo85Bwp6utsZdApqgKuHiCFKy5KyvTDVspwlwG+CiSyLLFUm+E+YImGlotKI0Nx
	pHCpeck+ni206UQB7hWD9aQsjgC2FxK347wGAQZuvzGO5ND3imLHHdAj/EJzC+m/ylbD
	1GSw==
X-Gm-Message-State: ALoCoQlbFMoINuQzhHlZH75drCwcCc+pXH7zVLalO61y/ygJfDjCM8X0ATr0YQScUEmkbmz93CZN
X-Received: by 10.152.23.39 with SMTP id j7mr1040955laf.28.1389015040237; Mon,
	06 Jan 2014 05:30:40 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Mon, 6 Jan 2014 05:30:20 -0800 (PST)
In-Reply-To: <20140106125410.GD3119@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Mon, 6 Jan 2014 13:30:20 +0000
Message-ID: <CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 January 2014 12:54, Wei Liu <wei.liu2@citrix.com> wrote:
> In fact I've already hacked a prototype during Christmas. What I've
> done so far:
>
> 1. create target-null which only has some stubs to CPU emulation
>    framework.
>
> 2. add a few lines to configure / Makefiles*, create
>    default-configs/null-softmmu

I think it would be better to add support to allow you to
configure with --disable-tcg. This would match the existing
--disable/--enable switches for KVM and Xen, and then you
could configure --disable-kvm --disable-tcg --enable-xen
and get a qemu-system-i386 or qemu-system-arm with only
the Xen support and none of the TCG emulation code.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 13:32:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 13:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0AHe-000520-Qy; Mon, 06 Jan 2014 13:31:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W0AHd-00051p-84
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 13:31:57 +0000
Received: from [85.158.137.68:27757] by server-6.bemta-3.messagelabs.com id
	17/FA-04868-C40BAC25; Mon, 06 Jan 2014 13:31:56 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389015113!7471488!1
X-Originating-IP: [209.85.160.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7320 invoked from network); 6 Jan 2014 13:31:55 -0000
Received: from mail-pb0-f47.google.com (HELO mail-pb0-f47.google.com)
	(209.85.160.47)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 13:31:55 -0000
Received: by mail-pb0-f47.google.com with SMTP id um1so18322319pbc.6
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 05:31:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=/Y2slECV2JOuJdaNUOeYEPwLA24V2VCr0gHCuBYRRrA=;
	b=JR0J5dUNmfmNm9+uvuEkmlD6/K7hEnHc9D6HQZD16LZSzzloSDAGfrwbiAF80IEjA2
	I9clncSYcsc6+jbtMbKZw8M2msXi3tmQZb2QJvAZ2ue2ITvWEJMOE/lc4xfwcAOgaKOQ
	wIu6HoR687Yfvt9xy8495CRo01uTfprBFCSe+ceBmtQ46+oRW29O+8QkEdZ9qzmeqCwf
	Mzf+ibEGeZXIjmUY2VYJgwVJ3BYDDmix06+YRURroGmd5y6XAX9AgYmFUCYZP8ZdfzyS
	BgUM1AgNo8WTrp8z+hJQAdHKUkrt6h8XS0dPM0RvvpDiuBKpaKj2c4gvCYfFTqfPDWzt
	VqkQ==
X-Received: by 10.68.166.69 with SMTP id ze5mr3311390pbb.82.1389015110450;
	Mon, 06 Jan 2014 05:31:50 -0800 (PST)
Received: from [192.168.1.102] ([113.247.7.238])
	by mx.google.com with ESMTPSA id
	oc9sm128977649pbb.10.2014.01.06.05.31.46 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 05:31:49 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1389006298.13274.48.camel@kazak.uk.xensource.com>
Date: Mon, 6 Jan 2014 21:31:39 +0800
Message-Id: <5D276967-4F69-4934-8281-7B10EC10B3B7@gmail.com>
References: <D22EB646-2C10-4E4B-8717-FC500FCB7380@gmail.com>
	<ED51122C-F7F8-4688-8038-DE288C04A098@gmail.com>
	<1389006298.13274.48.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Trigger kernel bug when creating domU on arm64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 6, 2014, at 19:04, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Sun, 2014-01-05 at 19:50 +0800, Chen Baozi wrote:
>> On Jan 5, 2014, at 16:21, Chen Baozi <baozich@gmail.com> wrote:
>> 
>>> Hi all,
>>> 
>>> Recently, I've been trying to continue my work of mini-os on arm64. After some
>>> early-day hacks in summer, there is a basic framework that could pass the build.
>>> If everything goes well, it is supposed to be at least able to print
>>> "Bootstrapping..." info in start_kernel. However, there still seems to be
>>> something wrong when creating a domain with xl.
>> 
>> There seems to be something wrong with my dom0 privcmd driver. After I added
>> some printk calls to debug and rebuilt the dom0 kernel, it seems to be all
>> right (even after I removed those debugging printks)...
> 
> This should be fixed by:
>        commit a7892f32cc3534d4cc0e64b245fbf47a8e364652
>        Author: Ian Campbell <ian.campbell@citrix.com>
>        Date:   Wed Dec 11 17:02:27 2013 +0000
> 
>            arm: xen: foreign mapping PTEs are special.
> 
> which is now in Linus' tree.

I see. I updated my tree and the bug disappeared. ;-)

Thanks all the same.

Baozi

> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 14:18:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0B0Q-0006mH-AY; Mon, 06 Jan 2014 14:18:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0B0P-0006mC-Eq
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:18:13 +0000
Received: from [85.158.137.68:30608] by server-6.bemta-3.messagelabs.com id
	D2/AF-04868-42BBAC25; Mon, 06 Jan 2014 14:18:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389017890!7472753!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18479 invoked from network); 6 Jan 2014 14:18:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 14:18:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87894413"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 14:18:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 09:18:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0B0L-00070S-9Q;
	Mon, 06 Jan 2014 14:18:09 +0000
Date: Mon, 6 Jan 2014 14:17:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <1389014715.19378.8.camel@hamster.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, qemu-devel@nongnu.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Frediano Ziglio wrote:
> On Mon, 2014-01-06 at 12:54 +0000, Wei Liu wrote:
> > Hi all
> > 
> > This idea is to modify QEMU's Makefiles, plus implementing some stubs to
> > make it possible to tailor QEMU to a smaller binary.
> > 
> > The current setup for Xen on x86 is to build i386-softmmu and use this
> > single binary for two purposes:
> > 1. it serves as the device emulator for HVM guests.
> > 2. it serves as the PV driver backend for PV guests.
> > 
> 
> Perhaps even KVM would benefit from this.
> Are you sure we don't emulate any instructions for other purposes?

The xenpv machine doesn't do any emulation whatsoever; in fact it is used
only for PV or ARM guests.


> > In either case CPU emulation is never used, because Xen handles that
> > already. So we are in fact carrying a load of unused code in the QEMU build.
> > 
> > What I have in mind is to build a QEMU binary which:
> > 1. does not include CPU emulation code at all.
> > 2. only includes components that are useful (what's useful is TBD).
> > 
> > And the rationales behind this idea are:
> > 
> > 1. Reduce memory footprint. One use case would be running Xen on an embedded
> >    platform (x86 or ARM), where we would expect the system to have very
> >    limited resources. The smaller the binary, the better.
> > 
> > 2. It doesn't make sense to have i386 emulation on an ARM platform.
> >    Arguably nobody can prevent a user from running an i386 emulator on an
> >    ARM platform, but it doesn't make sense in Xen's setup, where QEMU is
> >    only used as the PV device backend on ARM.
> > 
> > 3. Security concern. It's much easier to audit a small code base.
> > 
> > Please note that I'm not proposing to invalidate all the other use cases.
> > I'm only speaking with my Xen developer's hat on, aiming to make QEMU
> > more flexible.
> > 
> > At the implementation level I (hopefully) only need to add a few stubs,
> > create some new CONFIG_* options and move a few things around. It
> > might not be as intrusive as one thinks.
> > 
> > In fact I've already hacked a prototype during Christmas. What I've
> > done so far:
> > 
> > 1. create target-null which only has some stubs to CPU emulation
> >    framework.
> > 
> > 2. add a few lines to configure / Makefiles*, create
> >    default-configs/null-softmmu
> > 
> > Finally I got a qemu-system-null, and the effect is immediately visible
> > -- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't really
> > looked at what device emulation code can be removed, so the size could
> > be made even smaller.
> > 
> 
> Well, I would prefer something like an i386-nocpu. The system QEMU
> emulates is still specific to a given architecture; you can't emulate ARM
> with the same executable. So we'd add default-configs/arm-nocpu.mak and
> default-configs/x86_64-nocpu.mak (i386 doesn't make much sense anymore).

It doesn't do any emulation, so it is not specific to any architecture or
any CPU.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 14:18:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0B0Q-0006mH-AY; Mon, 06 Jan 2014 14:18:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0B0P-0006mC-Eq
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:18:13 +0000
Received: from [85.158.137.68:30608] by server-6.bemta-3.messagelabs.com id
	D2/AF-04868-42BBAC25; Mon, 06 Jan 2014 14:18:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389017890!7472753!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18479 invoked from network); 6 Jan 2014 14:18:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 14:18:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87894413"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 14:18:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 09:18:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0B0L-00070S-9Q;
	Mon, 06 Jan 2014 14:18:09 +0000
Date: Mon, 6 Jan 2014 14:17:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <1389014715.19378.8.camel@hamster.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, qemu-devel@nongnu.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Frediano Ziglio wrote:
> On Mon, 2014-01-06 at 12:54 +0000, Wei Liu wrote:
> > Hi all
> > 
> > This idea is to modify QEMU's Makefiles, plus implementing some stubs to
> > make it possible to tailor QEMU to a smaller binary.
> > 
> > The current setup for Xen on X86 is to build i386-softmmu, and uses this
> > single binary for two purposes:
> > 1. serves as device emulator for HVM guest.
> > 2. serves as PV driver backend for PV guest.
> > 
> 
> Perhaps even KVM would benefit too.
> Are you sure we don't emulate any instruction for other purposes?

The xenpv machine doesn't do any emulation whatsoever, in fact it is used
only for PV or ARM guests.


> > Either case CPU emulation is never used because Xen handles that
> > already. So we are in fact having a load of unused code in QEMU build.
> > 
> > What I have in mind is to build a QEMU binary which:
> > 1. does not include CPU emulation code at all.
> > 2. only includes components that's useful (what's useful is TBD).
> > 
> > And the rationales behind this idea are:
> > 
> > 1. Reduce memory footprint. One usecase would be running Xen on embedded
> >    platform (X86 or ARM). We would expect the system has very limited
> >    resources. The smaller the binary, the better.
> > 
> > 2. It doesn't make sense to have i386 emulation on ARM platform.
> >    Arguably nobody can prevent user from running i386 emulator on ARM
> >    platform, but it doesn't make sense in Xen's setup where QEMU is
> >    only used as PV device backend on ARM.
> > 
> > 3. Security concern. It's much easier to audit small code base.
> > 
> > Please note that I'm not proposing to invalidate all the other usecases.
> > I'm only speaking with my Xen developer's hat on, aiming to make QEMU
> > more flexible.
> > 
> > Down to implementation level I only need to (hopefully) add a few stubs
> > and create some new CONFIG_* options and move a few things around. It
> > might not be as intrusive as one thinks.
> > 
> > In fact I've already hacked a prototype during Christmas. What's I've
> > done so far:
> > 
> > 1. create target-null which only has some stubs to CPU emulation
> >    framework.
> > 
> > 2. add a few lines to configure / Makefiles*, create
> >    default-configs/null-softmmu
> > 
> > Finally I got a qemu-system-null. And the effect is immediately visible
> > -- the size of QEMU binary shrinked from 13MB to 7.6MB. I haven't really
> > looked at what device emulation code can be removed so the size can even
> > be made smaller.
> > 
> 
> Well, I would prefer something like a i386-nocpu. The system qemu
> emulate is still specific to a given architecture. You can't emulate ARM
> with the same executable. So we'd add default-configs/arm-nocpu.mak and
> default-config/x86_64-nocpu.mak (i386 don't make much sense anymore).

It doesn't do any emulation, so it is not specific to any architecture or
any CPU.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 14:22:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0B49-0007CZ-2D; Mon, 06 Jan 2014 14:22:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0B47-0007CQ-E0
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:22:03 +0000
Received: from [193.109.254.147:65262] by server-10.bemta-14.messagelabs.com
	id 64/48-20752-A0CBAC25; Mon, 06 Jan 2014 14:22:02 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389018121!9104472!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2029 invoked from network); 6 Jan 2014 14:22:02 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 14:22:02 -0000
Received: by mail-lb0-f173.google.com with SMTP id z5so9834053lbh.4
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 06:22:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=Q/U7iDAvaIujOgyBtPH8OtrKUPN30uZpY4oXFUVSdV0=;
	b=iTW5BEJ3E255m9xkWbzkgyigftp+3PmTzizscGmJRrYaLVuR2dThEptaTDdJAWMk7L
	+ICd0jMfv0vxmfJjNnIlT84I8jlhiOLcncmKTtLybGCNIpcfQK5DNiul26AL99432z7P
	OQrKDXmrN1b7CN7xdBkwL3YoWIVBJsZQ0VPOM2vmjH4T+0oOhYUDzFsrFpQxF2GRyp2j
	bAXgESwtnpQMRJ9wIABgashDvhZb153SLloLleO5+O3/SjAoBdlkX/rh/7+Yg7PHJj5/
	zOjVtfA+QMgORgPfMh8yzbNaIw4ycemcbJJN4OfVQHQzR7vnCBGo+Gadkyc8lPTQWmpY
	Hn4A==
X-Gm-Message-State: ALoCoQnLNIh6Tno1zQop8hNLzkyMqukkEFtnWp8QmEssUeiFgCOlyWmFnqDDLORt1Gsl6FlU/k5c
X-Received: by 10.112.167.42 with SMTP id zl10mr78130lbb.92.1389018121211;
	Mon, 06 Jan 2014 06:22:01 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Mon, 6 Jan 2014 06:21:41 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Mon, 6 Jan 2014 14:21:41 +0000
Message-ID: <CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 January 2014 14:17, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> It doesn't do any emulation so it is not specific to any architecture or
> any cpu.

You presumably still care about the compiled in values of
TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 14:27:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:27:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0B9l-0007ND-0H; Mon, 06 Jan 2014 14:27:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1W0B9j-0007N8-MS
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:27:51 +0000
Received: from [85.158.139.211:48111] by server-6.bemta-5.messagelabs.com id
	5A/71-16310-66DBAC25; Mon, 06 Jan 2014 14:27:50 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389018468!8066875!1
X-Originating-IP: [209.85.214.177]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28299 invoked from network); 6 Jan 2014 14:27:50 -0000
Received: from mail-ob0-f177.google.com (HELO mail-ob0-f177.google.com)
	(209.85.214.177)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 14:27:50 -0000
Received: by mail-ob0-f177.google.com with SMTP id vb8so18344102obc.36
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 06:27:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=W6DhIcq93dBJgqTjGL5q7eCKcSKl8BGHCKZaXv0ryYg=;
	b=mh4WTkMzWDHNy5kXEuIOS2o7epLt/IDsFzXjjH8TXgpN8NYnNT9Iw5qP4va3IW95SS
	XAbjiabGvQjdiLsB/ZAwvTp0RPkxGaj44K5c9DRCZ/vBHI/bvsiTHBPDaKBZ9FZS7NYz
	Emke+gJFLzer8aHrBskDYPHdvl8g1XTakbZ+EK5krnGL0wLtGiRHRqlbZT61HxV5Ljoc
	H5kkoXUEiz+15O97Y8ex7NRYIy7hUHwO3i3hdwuwDXJxtrObuMOhD9+fPc6m1OBHKgh/
	g1whd+xxLZbsjSn11qCqs7x4SymNzEaXVDu6FCVZ95VmAyzr3/FdpAPTaDMLvUctr6hC
	t2xA==
X-Gm-Message-State: ALoCoQkBDysy0RfGPUCKrV8QPH0KPtRlYfNndCOtX1HY6n05LinbU0dSq0T6JFdN+KFrizzFH02P
MIME-Version: 1.0
X-Received: by 10.182.232.4 with SMTP id tk4mr73407936obc.9.1389018468333;
	Mon, 06 Jan 2014 06:27:48 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Mon, 6 Jan 2014 06:27:48 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Mon, 6 Jan 2014 06:27:48 -0800 (PST)
In-Reply-To: <CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
Date: Mon, 6 Jan 2014 06:27:48 -0800
Message-ID: <CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3638435422822587778=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3638435422822587778==
Content-Type: multipart/alternative; boundary=f46d0445178f64503f04ef4e109b

--f46d0445178f64503f04ef4e109b
Content-Type: text/plain; charset=ISO-8859-1

On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
>
> On 6 January 2014 14:17, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > It doesn't do any emulation so it is not specific to any architecture or
> > any cpu.
>
> You presumably still care about the compiled in values of
> TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...

Yup.  It's still accel=xen just with no VCPUs.

Regards,

Anthony Liguori

>
> thanks
> -- PMM
>

--f46d0445178f64503f04ef4e109b--


--===============3638435422822587778==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3638435422822587778==--



From xen-devel-bounces@lists.xen.org Mon Jan 06 14:54:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BZL-0008Sc-5H; Mon, 06 Jan 2014 14:54:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0BZJ-0008SX-64
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 14:54:17 +0000
Received: from [85.158.139.211:32135] by server-1.bemta-5.messagelabs.com id
	69/DA-21065-893CAC25; Mon, 06 Jan 2014 14:54:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389020054!6880063!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25073 invoked from network); 6 Jan 2014 14:54:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Jan 2014 14:54:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06ErBvj003957
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 14:53:12 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06Er8aa012814
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 14:53:08 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06Er7eF013382; Mon, 6 Jan 2014 14:53:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 06:53:07 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 00E761C18DD; Mon,  6 Jan 2014 09:53:05 -0500 (EST)
Date: Mon, 6 Jan 2014 09:53:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140106145305.GA15337@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<52CA8BA6.5040305@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CA8BA6.5040305@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com, hpa@zytor.com,
	linux-kernel@vger.kernel.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v13] Linux Xen PVH support (v13)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 10:55:34AM +0000, David Vrabel wrote:
> On 03/01/14 19:38, Konrad Rzeszutek Wilk wrote:
> > The patches, also available at
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/pvh.v13
> 
> A minor nit with a comment but consider the complete series:
> 
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> 
> I'm happy for this to go into 3.14.

Woot! Thank you.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 14:55:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BaC-0008VZ-Sm; Mon, 06 Jan 2014 14:55:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0BaB-0008VJ-67
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:55:11 +0000
Received: from [85.158.139.211:18473] by server-16.bemta-5.messagelabs.com id
	8D/44-11843-EC3CAC25; Mon, 06 Jan 2014 14:55:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389020107!8115291!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Mon Jan 06 14:55:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 14:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BaC-0008VZ-Sm; Mon, 06 Jan 2014 14:55:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0BaB-0008VJ-67
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 14:55:11 +0000
Received: from [85.158.139.211:18473] by server-16.bemta-5.messagelabs.com id
	8D/44-11843-EC3CAC25; Mon, 06 Jan 2014 14:55:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389020107!8115291!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4135 invoked from network); 6 Jan 2014 14:55:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 14:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90081894"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 14:55:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 09:55:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0Ba6-0007fK-9Y;
	Mon, 06 Jan 2014 14:55:06 +0000
Date: Mon, 6 Jan 2014 14:54:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-583805329-1389018747=:8667"
Content-ID: <alpine.DEB.2.02.1401061434030.8667@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-583805329-1389018747=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1401061434031.8667@kaball.uk.xensource.com>

On Mon, 6 Jan 2014, Anthony Liguori wrote:
> On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
> >
> > On 6 January 2014 14:17, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> > > It doesn't do any emulation so it is not specific to any architecture or
> > > any cpu.
> >
> > You presumably still care about the compiled in values of
> > TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...

Actually it only uses XC_PAGE_SIZE and the endianness is the host
endianness.


> Yup.  It's still accel=xen just with no VCPUs.

Are you talking about introducing accel=xen to Wei's target-null?
I guess that would work OK.

On the other hand if you are thinking of avoiding the introduction of a
new target-null, how would you make xen_machine_pv.c available to
multiple architectures? How would you avoid the compilation of all the
unnecessary emulated devices?
--1342847746-583805329-1389018747=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-583805329-1389018747=:8667--


From xen-devel-bounces@lists.xen.org Mon Jan 06 15:00:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BfY-0000ni-9x; Mon, 06 Jan 2014 15:00:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0BfW-0000nW-V0
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 15:00:43 +0000
Received: from [85.158.143.35:42939] by server-3.bemta-4.messagelabs.com id
	4D/27-32360-A15CAC25; Mon, 06 Jan 2014 15:00:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389020440!2786149!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20493 invoked from network); 6 Jan 2014 15:00:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 15:00:41 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06ExaB4011779
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 14:59:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06ExZBJ026513
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 14:59:36 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06ExZdb027021; Mon, 6 Jan 2014 14:59:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 06:59:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1DE571C18DD; Mon,  6 Jan 2014 09:59:34 -0500 (EST)
Date: Mon, 6 Jan 2014 09:59:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140106145934.GB15337@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-9-git-send-email-konrad.wilk@oracle.com>
	<alpine.DEB.2.02.1401051757260.8667@kaball.uk.xensource.com>
	<20140105194155.GC12263@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401061128130.8667@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401061128130.8667@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 08/19] xen/pvh/mmu: Use PV TLB instead
	of native.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 11:33:00AM +0000, Stefano Stabellini wrote:
> On Sun, 5 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Sun, Jan 05, 2014 at 06:11:39PM +0000, Stefano Stabellini wrote:
> > > On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > 
> > > > We also optimize one - the TLB flush. The native operation would
> > > > needlessly IPI offline VCPUs causing extra wakeups. Using the
> > > > Xen one avoids that and lets the hypervisor determine which
> > > > VCPU needs the TLB flush.
> > > > 
> > > > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > ---
> > > >  arch/x86/xen/mmu.c | 9 +++++++++
> > > >  1 file changed, 9 insertions(+)
> > > > 
> > > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > > index 490ddb3..c1d406f 100644
> > > > --- a/arch/x86/xen/mmu.c
> > > > +++ b/arch/x86/xen/mmu.c
> > > > @@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> > > >  void __init xen_init_mmu_ops(void)
> > > >  {
> > > >  	x86_init.paging.pagetable_init = xen_pagetable_init;
> > > > +
> > > > +	/* Optimization - we can use the HVM one but it has no idea which
> > > > +	 * VCPUs are descheduled - which means that it will needlessly IPI
> > > > +	 * them. Xen knows so let it do the job.
> > > > +	 */
> > > > +	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> > > > +		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> > > > +		return;
> > > > +	}
> > > >  	pv_mmu_ops = xen_mmu_ops;
> > > >  
> > > >  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> > > 
> > > Regarding this patch, the next one and the other changes to
> > > xen_setup_shared_info, xen_setup_mfn_list_list,
> > > xen_setup_vcpu_info_placement, etc: considering that the mmu related
> > > stuff is very different between PV and PVH guests, I wonder if it makes
> > > any sense to keep calling xen_init_mmu_ops on PVH.
> > > 
> > > I would introduce a new function, xen_init_pvh_mmu_ops, that sets
> > > pv_mmu_ops.flush_tlb_others and only calls whatever is needed for PVH
> > > under a new xen_pvh_pagetable_init.
> > > Just to give you an idea, not even compiled tested:
> > 
> > There is something to be said about sharing the same code path
> > that "old-style" PV is using with the new-style - code coverage.
> > 
> > That is the code gets tested under both platforms and if I (or
> > anybody else) introduce a bug in the "common-PV-paths" it will
> > be immediately obvious as hopefully the regression tests
> > will pick it up.
> > 
> > It is not nice - as low-level code is sprinkled with the one-offs
> > for the PVH - which mostly is doing _less_.
> 
> I thought you would say that. However in this specific case the costs

You know me too well :-)

> exceed the benefits. Think of all the times we'll have to debug
> something, we'll be staring at the code, and several dozens of minutes
> later we'll realize that the code we have been looking at all along is
> not actually executed in PVH mode.
> 

For this specific code - that is, the shared grants and the hypercalls -
I think it needs a bit more testing to make sure suspend/resume works
well. After that this segregation can be done.

My reasoning is that there might be more code that could benefit
from this - so I could do it in one nice big patchset.

My other reason for delaying your suggestion is so that this patchset
can go into Linux without accumulating 20+ patches on top, which would
make the review more daunting.
> 
> > What I was thinking is to flip this around. Make the PVH paths
> > the default and then have something like 'if (!xen_pvh_domain())'
> > ... the big code.
> > 
> > Would you be OK with this line of thinking going forward say
> > after this patchset?
>  
> I am not opposed to it in principle but I don't expect that you'll be
> able to improve things significantly.

The end goal is to take a chainsaw to the code and cut out the
old-PV-specific parts. But that is not going to happen now - rather
in 5 years, when we are comfortable with it.

And perhaps even wrap some #ifdef CONFIG_XEN_PVMMU guards around it
to further identify the old code.
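[Editorial note: the patch under discussion relies on one paravirt idiom - an
ops structure of function pointers whose members are selectively overridden at
init time. A minimal, self-contained sketch of that pattern follows; the names
(`mmu_ops`, `native_flush`, `xen_flush`, `init_mmu_ops`) are illustrative
stand-ins, not the kernel's actual symbols.]

```c
#include <stdbool.h>

/* Illustrative stand-in for the kernel's pv_mmu_ops table. */
struct mmu_ops {
    int (*flush_tlb_others)(void);
};

static int native_flush(void) { return 0; }  /* would IPI every VCPU */
static int xen_flush(void)    { return 1; }  /* lets the hypervisor pick */

static struct mmu_ops mmu_ops = { .flush_tlb_others = native_flush };

/* Mirrors the shape of xen_init_mmu_ops in the patch: on an
 * auto-translated (PVH) guest, override only the TLB-flush member and
 * return early, leaving everything else native. */
static void init_mmu_ops(bool auto_translated_physmap)
{
    if (auto_translated_physmap) {
        mmu_ops.flush_tlb_others = xen_flush;
        return;
    }
    /* Classic-PV case: the kernel instead replaces the whole table here. */
}
```

Stefano's suggestion amounts to giving the PVH branch its own init function
rather than early-returning out of the shared one.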

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:04:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:04:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Bj6-0000z2-4M; Mon, 06 Jan 2014 15:04:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Bj4-0000yv-4W
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 15:04:22 +0000
Received: from [85.158.143.35:2800] by server-3.bemta-4.messagelabs.com id
	EB/FD-32360-5F5CAC25; Mon, 06 Jan 2014 15:04:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389020659!9842563!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14432 invoked from network); 6 Jan 2014 15:04:20 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 15:04:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06F3G5c023743
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 15:03:16 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06F3FeJ013927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 15:03:15 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06F3E4K001893; Mon, 6 Jan 2014 15:03:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 07:03:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 253A31C18DD; Mon,  6 Jan 2014 10:03:13 -0500 (EST)
Date: Mon, 6 Jan 2014 10:03:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140106150313.GA15684@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
	<52CA8AF7.9040305@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CA8AF7.9040305@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	hpa@zytor.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 11/19] xen/pvh: Secondary VCPU bringup
 (non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 10:52:39AM +0000, David Vrabel wrote:
> On 03/01/14 19:38, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > The VCPU bringup protocol follows the PV with certain twists.
> > From xen/include/public/arch-x86/xen.h:
> > 
> > Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
> > for HVM and PVH guests, not all information in this structure is updated:
> > 
> >  - For HVM guests, the structures read include: fpu_ctxt (if
> >  VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
> > 
> >  - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
> >  set cr3. All other fields not used should be set to 0.
> > 
> > This is what we do. We piggyback on the 'xen_setup_gdt' - but modify
> > a bit - we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
> > can load per-cpu data-structures. It has no effect on the VCPU0.
> > 
> > We also piggyback on the %rdi register to pass in the CPU number - so
> > that when we bootup a new CPU, the cpu_bringup_and_idle will have
> > passed as the first parameter the CPU number (via %rdi for 64-bit).
> [...]
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1409,14 +1409,19 @@ static void __init xen_boot_params_init_edd(void)
> >   * Set up the GDT and segment registers for -fstack-protector.  Until
> >   * we do this, we have to be careful not to call any stack-protected
> >   * function, which is most of the kernel.
> > + *
> > + * Note, that it is refok - because the only caller of this after init
> 
> "Note, this is __ref because..."

Fixed.

Thank you.
> 
> David
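[Editorial note: the "%rdi" trick quoted above works because of the x86-64
SysV calling convention - the first integer argument travels in %rdi, so
seeding the new VCPU's saved %rdi with the CPU number makes it arrive as the
entry point's first parameter. A minimal sketch of the idea; the names
(`fake_user_regs`, `seed_bringup_context`, `cpu_entry`) are illustrative, not
the kernel's.]

```c
/* Toy stand-in for the register context handed to a new VCPU. */
struct fake_user_regs {
    unsigned long rdi;   /* first-argument register on x86-64 SysV */
};

/* What the booting side does: stash the CPU number where the ABI will
 * deliver it as argument one. */
static void seed_bringup_context(struct fake_user_regs *regs, unsigned int cpu)
{
    regs->rdi = cpu;
}

/* What the new CPU's entry point sees: its first parameter is the CPU id
 * (a cpu_bringup_and_idle-style entry would set up per-cpu data, then idle). */
static unsigned int cpu_entry(unsigned int cpu)
{
    return cpu;
}
```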

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:04:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:04:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Bj6-0000z2-4M; Mon, 06 Jan 2014 15:04:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Bj4-0000yv-4W
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 15:04:22 +0000
Received: from [85.158.143.35:2800] by server-3.bemta-4.messagelabs.com id
	EB/FD-32360-5F5CAC25; Mon, 06 Jan 2014 15:04:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389020659!9842563!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14432 invoked from network); 6 Jan 2014 15:04:20 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 15:04:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06F3G5c023743
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 15:03:16 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06F3FeJ013927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 15:03:15 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06F3E4K001893; Mon, 6 Jan 2014 15:03:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 07:03:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 253A31C18DD; Mon,  6 Jan 2014 10:03:13 -0500 (EST)
Date: Mon, 6 Jan 2014 10:03:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140106150313.GA15684@phenom.dumpdata.com>
References: <1388777916-1328-1-git-send-email-konrad.wilk@oracle.com>
	<1388777916-1328-12-git-send-email-konrad.wilk@oracle.com>
	<52CA8AF7.9040305@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CA8AF7.9040305@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	hpa@zytor.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v13 11/19] xen/pvh: Secondary VCPU bringup
 (non-bootup CPUs)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 10:52:39AM +0000, David Vrabel wrote:
> On 03/01/14 19:38, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > 
> > The VCPU bringup protocol follows the PV one with certain twists.
> > From xen/include/public/arch-x86/xen.h:
> > 
> > Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
> > for HVM and PVH guests, not all information in this structure is updated:
> > 
> >  - For HVM guests, the structures read include: fpu_ctxt (if
> >  VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
> > 
> >  - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
> >  set cr3. All other fields not used should be set to 0.
> > 
> > This is what we do. We piggyback on 'xen_setup_gdt' - but modify it
> > a bit - we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
> > can load per-cpu data structures. It has no effect on VCPU0.
> > 
> > We also piggyback on the %rdi register to pass in the CPU number - so
> > that when we boot up a new CPU, cpu_bringup_and_idle receives the CPU
> > number as its first parameter (via %rdi for 64-bit).
> [...]
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1409,14 +1409,19 @@ static void __init xen_boot_params_init_edd(void)
> >   * Set up the GDT and segment registers for -fstack-protector.  Until
> >   * we do this, we have to be careful not to call any stack-protected
> >   * function, which is most of the kernel.
> > + *
> > + * Note, that it is refok - because the only caller of this after init
> 
> "Note, this is __ref because..."

Fixed.

Thank you.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BjA-0000zq-HM; Mon, 06 Jan 2014 15:04:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0Bj8-0000zH-8z
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:04:26 +0000
Received: from [85.158.143.35:3169] by server-2.bemta-4.messagelabs.com id
	A1/FB-11386-9F5CAC25; Mon, 06 Jan 2014 15:04:25 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389020664!9766738!1
X-Originating-IP: [209.85.217.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19711 invoked from network); 6 Jan 2014 15:04:25 -0000
Received: from mail-lb0-f180.google.com (HELO mail-lb0-f180.google.com)
	(209.85.217.180)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:04:25 -0000
Received: by mail-lb0-f180.google.com with SMTP id x18so9679076lbi.25
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 07:04:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=ayLygcy0eDYXjSkhZkob9HETHUJKBQ7X62pc9sHijwE=;
	b=LL4qxXqEYC/jQLiW1bl3rPXigCtavkLFXkOKGMeWwtvB2m0YHwDyWKT8moeRRuDOW9
	5XtlN3Pza77h2HkxpfiOQDvX03NOv6yK03exLRkJpXCtuuJI96GRweY62J8rXFjgRpZU
	3jgQzE6bV37lbFp2LQL4ysMnkspOuiSnuMPmSQXJB2gt5IVkaEhJbVDB7CXkd+lpCZz1
	dWFESB0kz3/LkdQh3bIOwok4cp8ofSRX3q8aVpYmN/3tTlMgRVX/DKZs/e0zlTH1eH7V
	dy99ClezmJ5au9tgWEniSmE8LQLIYnmVdozqj/0LRuGmKRPtxYkS/LjB87gHWlAMHDm7
	VT7g==
X-Gm-Message-State: ALoCoQkSx8EXNMRzexNdhWWngkyUM01tUvLzanigqjczjFvcCzsx50w6xvQ4Egna61Vf9HxyTImP
X-Received: by 10.152.121.105 with SMTP id lj9mr44591744lab.6.1389020664260;
	Mon, 06 Jan 2014 07:04:24 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Mon, 6 Jan 2014 07:04:04 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Mon, 6 Jan 2014 15:04:04 +0000
Message-ID: <CAFEAcA_Sh-fr3k+NcZrCCrrBW2oTdvDL+COpp19KRAW4RJ7C5g@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Anthony Liguori <anthony@codemonkey.ws>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 January 2014 14:54, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> How would you avoid the compilation of all the
> unnecessary emulated devices?

Didn't we have some patches for doing a Kconfig-style
"select the devices you need" build recently?

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Bjc-00015j-3x; Mon, 06 Jan 2014 15:04:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Bja-00015I-Md
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:04:54 +0000
Received: from [85.158.139.211:35025] by server-8.bemta-5.messagelabs.com id
	F0/8F-29838-616CAC25; Mon, 06 Jan 2014 15:04:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389020691!8118040!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6492 invoked from network); 6 Jan 2014 15:04:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:04:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87911249"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:04:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:04:50 -0500
Message-ID: <1389020689.31766.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 6 Jan 2014 15:04:49 +0000
In-Reply-To: <1387824442-368-1-git-send-email-andrew.cooper3@citrix.com>
References: <1387824442-368-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] docs: Honour --{en,
 dis}able-xend when building docs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2013-12-23 at 18:47 +0000, Andrew Cooper wrote:
> If a user has specified --disable-xend, they won't want the manpages either.
> 
> Propagating this parameter requires reorganising the way in which the
> makefile chooses which documents to build.
> 
> There is now a split of {MAN1,MAN5,MARKDOWN,TXT}SRC-y to select which
> documentation to build, which is separate from the patsubst section which
> generates appropriate paths to trigger the later rules.
> 
> The manpages are quite easy to split between xend, xl and xenstore, and have
> been.  Items from misc/ are much harder and have been left.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> --
> 
> The configure scripts should be regenerated as part of applying this patch.
> 
> George:
>    I request this gets a release ack for 4.4.  It could be argued as a bug in
>    the current implementation of --disable-xend, and the extent of potential
>    problems are that I have accidentally missed some of the manpages during
>    the reorg, but this can be easily confirmed by comparing the results of the
>    two builds (which I have done).

(as acting RM in George's absence):

I suppose I agree:
Release-Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:06:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Bku-0001IR-Lh; Mon, 06 Jan 2014 15:06:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Bkt-0001IF-1O
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 15:06:15 +0000
Received: from [85.158.143.35:26776] by server-2.bemta-4.messagelabs.com id
	62/EE-11386-666CAC25; Mon, 06 Jan 2014 15:06:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389020772!9926108!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12119 invoked from network); 6 Jan 2014 15:06:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:06:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90085978"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 15:05:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:05:53 -0500
Message-ID: <1389020752.31766.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 6 Jan 2014 15:05:52 +0000
In-Reply-To: <1387884527-6067-1-git-send-email-julien.grall@linaro.org>
References: <1387884527-6067-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, keir@xen.org,
	stefano.stabellini@eu.citrix.com, tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen: driver/char: fix const declaration of
 DT compatible list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2013-12-24 at 11:28 +0000, Julien Grall wrote:
> The data type for DT compatible list should be:
>     const char * const[]  __initconst
> 
> Fix every serial driver which supports device tree.
> 
> Spotted-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

WRT the release I think this is a bug fix.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:17:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BvO-0002Ca-31; Mon, 06 Jan 2014 15:17:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0BvM-0002CG-VQ
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:17:05 +0000
Received: from [85.158.137.68:49486] by server-12.bemta-3.messagelabs.com id
	C3/83-20055-0F8CAC25; Mon, 06 Jan 2014 15:17:04 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389021419!3832174!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6156 invoked from network); 6 Jan 2014 15:17:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:17:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90091818"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 15:16:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 10:16:20 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0BqM-0007vG-FH;
	Mon, 06 Jan 2014 15:11:54 +0000
Date: Mon, 6 Jan 2014 15:11:54 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Message-ID: <20140106151154.GA10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 11:23:24PM +1000, Peter Crosthwaite wrote:
[...]
> >
> > Down to implementation level I only need to (hopefully) add a few stubs
> > and create some new CONFIG_* options and move a few things around. It
> > might not be as intrusive as one thinks.
> >
> > In fact I've already hacked up a prototype during Christmas. What I've
> > done so far:
> >
> > 1. create target-null which only has some stubs to CPU emulation
> >    framework.
> >
> > 2. add a few lines to configure / Makefiles*, create
> >    default-configs/null-softmmu
> >
> 
> Your idea of aggressively reducing binary size may not really fit a
> defconfig at all. If you are going lean-and-mean based on a specific
> use-case I think you need to bring your own config.
> 

Fair enough.

> > Finally I got a qemu-system-null. And the effect is immediately visible
> 
> qemu-system-null has been on my wish-list in the past, although my
> reasons were slightly different to yours. Specifically, the goal was
> to test CPUs in an RTL simulator interacting with RAM and peripheral
> devices hosted in QEMU.
> 

Cool. However small, this is still a valid use case.

> > -- the size of the QEMU binary shrank from 13MB to 7.6MB. I haven't really
> > looked at what device emulation code can be removed so the size can even
> > be made smaller.
> >
> 
> So exactly what an appropriate device suite for qemu-system-null
> looks like is an open question. I would suggest that the "correct"
> default config for such a QEMU would actually be the full suite of
> devices, no less than what's already in i386. Free of CPU/arch
> restrictions, all devices are fair game.
> 

Good point. So we avoid only the CPU emulation code and leave the full
device emulation alone. Will that make a sensible defconfig? ;-)
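
As a rough illustration, such a defconfig could select device models
while pulling in no CPU/TCG frontend at all. The file name follows the
default-configs/null-softmmu naming mentioned above, but the particular
CONFIG_ symbols here are only indicative, not taken from the actual
prototype:

```make
# default-configs/null-softmmu.mak -- hypothetical sketch, not the
# real prototype. No CONFIG_* for any CPU emulation frontend; only
# generic device models are compiled in.
CONFIG_PCI=y
CONFIG_VIRTIO=y
CONFIG_SERIAL=y
CONFIG_IDE_CORE=y
```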

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:17:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0BvN-0002CQ-NL; Mon, 06 Jan 2014 15:17:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0BvL-0002C5-SL
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:17:04 +0000
Received: from [85.158.137.68:3761] by server-16.bemta-3.messagelabs.com id
	62/76-26128-FE8CAC25; Mon, 06 Jan 2014 15:17:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389021419!3832174!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5967 invoked from network); 6 Jan 2014 15:17:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:17:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90091793"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 15:16:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 10:16:18 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0BrH-0007w3-Gs;
	Mon, 06 Jan 2014 15:12:51 +0000
Date: Mon, 6 Jan 2014 15:12:51 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20140106151251.GB10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 01:30:20PM +0000, Peter Maydell wrote:
> On 6 January 2014 12:54, Wei Liu <wei.liu2@citrix.com> wrote:
> > In fact I've already hacked a prototype during Christmas. What I've
> > done so far:
> >
> > 1. create target-null which only has some stubs to CPU emulation
> >    framework.
> >
> > 2. add a few lines to configure / Makefiles*, create
> >    default-configs/null-softmmu
> 
> I think it would be better to add support to allow you to
> configure with --disable-tcg. This would match the existing
> --disable/--enable switches for KVM and Xen, and then you
> could configure --disable-kvm --disable-tcg --enable-xen
> and get a qemu-system-i386 or qemu-system-arm with only
> the Xen support and none of the TCG emulation code.
> 

In this case the architecture-specific code in target-* is still
included, which might not help reduce the size much.
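
For reference, the build Peter describes would be configured along
these lines. --enable-xen and --disable-kvm already exist;
--disable-tcg is the proposed switch and does not exist yet:

```shell
# Hypothetical once --disable-tcg is implemented: a Xen-only
# qemu-system-i386 with none of the TCG emulation code compiled in.
./configure --target-list=i386-softmmu \
            --enable-xen --disable-kvm --disable-tcg
make
```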

Wei.

> thanks
> -- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:24:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0C2a-000348-D3; Mon, 06 Jan 2014 15:24:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0C2Z-000342-Fy
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 15:24:31 +0000
Received: from [85.158.139.211:12016] by server-13.bemta-5.messagelabs.com id
	63/15-11357-EAACAC25; Mon, 06 Jan 2014 15:24:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389021868!8126687!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27781 invoked from network); 6 Jan 2014 15:24:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:24:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87920982"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:24:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:24:27 -0500
Message-ID: <1389021866.31766.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 6 Jan 2014 15:24:26 +0000
In-Reply-To: <1386901910-1016-1-git-send-email-julien.grall@linaro.org>
References: <1386901910-1016-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tsahee@gmx.com, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2013-12-13 at 02:31 +0000, Julien Grall wrote:
> ePAR specifies that if the property "ranges" doesn't exist in a bus node:
> 
> "it is assumed that no mapping exists between children of node and the parent
> address space".
> 
> Modify dt_number_of_address to check whether the list of ranges is valid.
> Return 0 (i.e. there are zero ranges) if the list is not valid.
> 
> This patch has been tested on the Arndale where the bug can occur with the
> '/hdmi' node.
> 
> Reported-by: <tsahee@gmx.com>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
> 
> This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the
> Arndale because it attempts to translate an invalid address.
> ---
>  xen/common/device_tree.c |   31 +++++++++++++++++++++++++++----
>  1 file changed, 27 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 84e709d..9b9a348 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -93,7 +93,7 @@ struct dt_bus
>  {
>      const char *name;
>      const char *addresses;
> -    int (*match)(const struct dt_device_node *parent);
> +    bool_t (*match)(const struct dt_device_node *parent);
>      void (*count_cells)(const struct dt_device_node *child,
>                          int *addrc, int *sizec);
>      u64 (*map)(__be32 *addr, const __be32 *range, int na, int ns, int pna);
> @@ -793,6 +793,18 @@ int dt_n_size_cells(const struct dt_device_node *np)
>  /*
>   * Default translator (generic bus)
>   */
> +static bool_t dt_bus_default_match(const struct dt_device_node *parent)
> +{
> +    /* Root node doesn't have "ranges" property */
> +    if ( parent->parent == NULL )

Calling the parameter "parent" left me confusedly wondering why it was
the grandparent which mattered. I suppose "parent" in this sense means
the "parent bus" as opposed to parent node? Or does it just fall out of
the fact that in the caller it is the parent which is passed through?

Can we call it something more appropriate, like "bus" or "node"?

> +        return 1;
> +
> +    /* The default bus is only used when the "ranges" property exists.
> +     * Otherwise we can't translate the address
> +     */
> +    return (dt_get_property(parent, "ranges", NULL) != NULL);
> +}
> +
>  static void dt_bus_default_count_cells(const struct dt_device_node *dev,
>                                  int *addrc, int *sizec)
>  {
> @@ -819,7 +831,7 @@ static u64 dt_bus_default_map(__be32 *addr, const __be32 *range,
>       * If the number of address cells is larger than 2 we assume the
>       * mapping doesn't specify a physical address. Rather, the address
>       * specifies an identifier that must match exactly.
> -     */
> +      */

Spurious w/s change.

Other than those two changes, the patch looks good.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:28:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0C6S-0003H7-B8; Mon, 06 Jan 2014 15:28:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0C6Q-0003Gz-Oi
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:28:30 +0000
Received: from [85.158.143.35:63062] by server-1.bemta-4.messagelabs.com id
	BA/58-02132-E9BCAC25; Mon, 06 Jan 2014 15:28:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389022108!9932359!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23438 invoked from network); 6 Jan 2014 15:28:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:28:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90096452"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 15:28:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:28:26 -0500
Message-ID: <1389022105.31766.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Mon, 6 Jan 2014 15:28:25 +0000
In-Reply-To: <52C87C1C.5080908@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
	<20140104211319.GC9395@phenom.dumpdata.com>
	<52C87C1C.5080908@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
 output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-04 at 16:24 -0500, Don Slutz wrote:
> On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
> 
> > On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
> > > This also fixes the old debug output to compile and work if DBGP1
> > > and DBGP2 are defined like DBGP3.
> > > 
> > > @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
> > >      return len;
> > >  }
> > >  
> > > +/*
> > > + * Local variables:
> > > + * mode: C
> > > + * c-file-style: "BSD"
> > > + * c-basic-offset: 4
> > > + * indent-tabs-mode: nil
> > > + * End:
> > > + */
> > ??
> 
> I take that as "Why add this"?
> 
> Dealing with many different styles of code, I find it helps to have
> this emacs block added.

Yes, but why is it in the optional debug patch?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
 output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-04 at 16:24 -0500, Don Slutz wrote:
> On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
> 
> > On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
> > > This also fixes the old debug output to compile and work if DBGP1
> > > and DBGP2 are defined like DBGP3.
> > > 
> > > @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
> > >      return len;
> > >  }
> > >  
> > > +/*
> > > + * Local variables:
> > > + * mode: C
> > > + * c-file-style: "BSD"
> > > + * c-basic-offset: 4
> > > + * indent-tabs-mode: nil
> > > + * End:
> > > + */
> > ??
> 
> I take that as "Why add this"?
> 
> Dealing with many different styles of code, I find it helps to have
> this emacs addition.

Yes, but why is it in the optional debug patch?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:35:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CD3-0003jp-CK; Mon, 06 Jan 2014 15:35:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1W0CD2-0003jk-Ka
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:35:20 +0000
Received: from [85.158.139.211:37827] by server-6.bemta-5.messagelabs.com id
	0D/17-16310-73DCAC25; Mon, 06 Jan 2014 15:35:19 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389022519!8112287!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4034 invoked from network); 6 Jan 2014 15:35:19 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:35:19 -0000
Received: by mail-wg0-f42.google.com with SMTP id a1so2657245wgh.1
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 07:35:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=T4rq/R/mf6WvcdaBTW64dzfB8i1zxKtuDb2ntZCUSuQ=;
	b=vpGeuDrWyZJ7+A+RnPW9MsQuH1GwkFphHF/+1s9ogFb7/gbqEa+xMX9L4O1JQkqyQr
	x36G+wiAowuo3c5kIdefv01UaUDN4cSAfWyNzgR8g8vPyclbu1VLnevN5XbhEt2EyCfZ
	k6m0BKWNq1Vxuwp6FK4oHSCBQa8zTM1vvD6iC9mWlPt47sxLpnt4ckTMtQMSCxZXopS9
	KDJVFWOmFJ/aNer6Ckdgo8VWz4vVpCN7exQ7UYnsjXMG5nC2+72ZSARDsEzAVzivgMuD
	3Jr+j6nIvMqLnY9uINdp/Wne/RwOlWPfPUfdjpKa2BDDxQRVE2V6/XwzXUjfsA0+l6p3
	SjQg==
X-Received: by 10.194.93.3 with SMTP id cq3mr73495791wjb.26.1389022519104;
	Mon, 06 Jan 2014 07:35:19 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id po3sm42885653wjc.3.2014.01.06.07.35.17
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 07:35:17 -0800 (PST)
Message-ID: <52CACD34.1050205@xen.org>
Date: Mon, 06 Jan 2014 15:35:16 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, =?UTF-8?B?VmxhZGltaXIgJ8+GLWNv?=
	=?UTF-8?B?ZGVyL3BoY29kZXInIFNlcmJpbmVua28=?= <phcoder@gmail.com>
References: <527EA084.6000706@gmail.com>	<1384360619.29080.0.camel@kazak.uk.xensource.com>	<5283C406.7010708@gmail.com>	<1384418221.7059.30.camel@dagon.hellion.org.uk>	<52A850D4.40900@gmail.com>
	<1386762709.30271.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1386762709.30271.60.camel@kazak.uk.xensource.com>
Cc: The development of GRUB 2 <grub-devel@gnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] pvgrub2 is merged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/12/2013 11:51, Ian Campbell wrote:
> On Wed, 2013-12-11 at 12:47 +0100, Vladimir 'φ-coder/phcoder' Serbinenko
> wrote:
>> I can't confirm for 100% now, but I'll be 90-95% at FOSDEM and if I'll
>> be ther I'm ok to give a talk. Is this offer still on the table?
> I'm afraid the deadline for submissions has passed (IIRC it was 1
> December, at least for the virt devroom).
There is a possibility to give an appropriate talk the day before FOSDEM
at a CentOS DoJo which will be hosted at IBM in Brusseles. I will talk
to the organizer later today as it is not yet clear what the format of
the DoJo will be (just talks or a mixture of Hackathon and talks).

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:40:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CHS-000476-B7; Mon, 06 Jan 2014 15:39:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1W0CHQ-00046l-Kj
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:39:52 +0000
Received: from [85.158.143.35:48297] by server-3.bemta-4.messagelabs.com id
	45/16-32360-84ECAC25; Mon, 06 Jan 2014 15:39:52 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389022790!9891507!1
X-Originating-IP: [209.85.219.50]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11185 invoked from network); 6 Jan 2014 15:39:51 -0000
Received: from mail-oa0-f50.google.com (HELO mail-oa0-f50.google.com)
	(209.85.219.50)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:39:51 -0000
Received: by mail-oa0-f50.google.com with SMTP id l6so219012oag.23
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 07:39:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=hGgsJlPmWykPwdqZRimgIYk52FdR8A3ZbfU3/fxWjBA=;
	b=CuMVydM6UTi98Mx1Qi8mOl9+v8AR6rfNgVzmjjiD/LtsjHG3O2WEUtlguhEa+2ttXG
	fd1/2gRewPNeleiz4nIxtvrcGMAoNWIKT0ujK2R0V84YnmxDuCGaV4wn2VDTu5/OrzQU
	OUA4I9NqWTuYEslcvP156oYjuBebHVHEXI4RSzsHSQ5GQuJsFUCU05EPF0Uq5m+P8vY9
	UpjFuUYQe0rCceIhtpg3Njhij/jfDgLovEWoGv3KydNNpXXC6QoKy3O4Z6hhGJB+UZGL
	WkvZyqZ4oYtFVIImx2ONbvQsvL6i+edf2m5vTREkRxZ/tsAyGf/L8Q0cvAO6djrnRe/K
	594w==
X-Gm-Message-State: ALoCoQnhg8TWxgL2mYFJbMJlfce8bpPDxDe6cXGY5CzUS9VZMKOhmXos1WpAI2H4wqfkFkWkixSo
MIME-Version: 1.0
X-Received: by 10.60.35.194 with SMTP id k2mr2214145oej.42.1389022789604; Mon,
	06 Jan 2014 07:39:49 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Mon, 6 Jan 2014 07:39:49 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Mon, 6 Jan 2014 07:39:49 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
Date: Mon, 6 Jan 2014 07:39:49 -0800
Message-ID: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2810903229328544931=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2810903229328544931==
Content-Type: multipart/alternative; boundary=089e0112cb56f5b6c404ef4f1109

--089e0112cb56f5b6c404ef4f1109
Content-Type: text/plain; charset=ISO-8859-1

On Jan 6, 2014 6:55 AM, "Stefano Stabellini" <
stefano.stabellini@eu.citrix.com> wrote:
>
> On Mon, 6 Jan 2014, Anthony Liguori wrote:
> > On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org>
wrote:
> > >
> > > On 6 January 2014 14:17, Stefano Stabellini
> > > <stefano.stabellini@eu.citrix.com> wrote:
> > > > It doesn't do any emulation so it is not specific to any
architecture or
> > > > any cpu.
> > >
> > > You presumably still care about the compiled in values of
> > > TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...
>
> Actually it only uses XC_PAGE_SIZE and the endianness is the host
> endianness.

If blkif in QEMU is relying on host endianness that's a bug.

>
>
> > Yup.  It's still accel=xen just with no VCPUs.
>
> Are you talking about introducing accel=xen to Wei's target-null?
> I guess that would work OK.

We already have accel=xen.  I'm echoing Peter's suggestion of having the
ability to compile out accel=tcg.

>
> On the other hand if you are thinking of avoiding the introduction of a
> new target-null, how would you make xen_machine_pv.c available to
> multiple architectures?

Why does qdisk need a full machine?

How would you avoid the compilation of all the
> unnecessary emulated devices?

Device config files.

Regards,

Anthony Liguori

--089e0112cb56f5b6c404ef4f1109--


--===============2810903229328544931==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2810903229328544931==--


From xen-devel-bounces@lists.xen.org Mon Jan 06 15:44:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
From xen-devel-bounces@lists.xen.org Mon Jan 06 15:44:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CLJ-0004Kh-R0; Mon, 06 Jan 2014 15:43:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0CLI-0004Kc-V0
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:43:53 +0000
Received: from [85.158.143.35:4204] by server-2.bemta-4.messagelabs.com id
	1D/87-11386-83FCAC25; Mon, 06 Jan 2014 15:43:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389023030!9887522!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23251 invoked from network); 6 Jan 2014 15:43:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:43:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87929847"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:43:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:43:49 -0500
Message-ID: <1389023027.31766.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 6 Jan 2014 15:43:47 +0000
In-Reply-To: <52C9EC3C.2060008@citrix.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<52C9EC3C.2060008@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 23:35 +0000, Andrew Cooper wrote:
> While this does change the ABI of a hypercall, I think it is
> reasonable to do, as domctl.h does imply that 'remain' is an output
> parameter, without specifying anything about its behaviour in the case
> of an error. 

The domctl ABI is subject to change anyway.

We don't seem to have had any cause to bump XEN_DOMCTL_INTERFACE_VERSION
during the 4.4 dev cycle yet. I don't think this change warrants it
either.
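[Editor's note: for readers following along, the behaviour under discussion can be sketched in plain C. This is an illustrative model only: the struct and function names below are invented stand-ins, not the real Xen domctl types. The point is the pattern the patch introduces, namely updating the caller-visible 'remain' field unconditionally, even when the operation fails.]

```c
#include <assert.h>

/* Illustrative model, not the real Xen domctl interface. */
struct memio_op {
    unsigned int len;      /* bytes requested */
    unsigned int remain;   /* output: bytes not transferred */
};

static int do_guestmemio(struct memio_op *op, int fail_after)
{
    int ret = 0;
    unsigned int done = 0;

    while (done < op->len) {
        if ((int)done == fail_after) {   /* simulated access fault */
            ret = -1;
            break;
        }
        done++;
    }
    /* Always performed, so 'remain' is meaningful to the caller even
     * on the error path -- the "always do the copyback" behaviour. */
    op->remain = op->len - done;
    return ret;
}
```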

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:45:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CMz-0004R7-Fh; Mon, 06 Jan 2014 15:45:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0CMx-0004Qy-Sf
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 15:45:36 +0000
Received: from [85.158.139.211:39206] by server-9.bemta-5.messagelabs.com id
	55/A6-15098-F9FCAC25; Mon, 06 Jan 2014 15:45:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389023133!6893431!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 765 invoked from network); 6 Jan 2014 15:45:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:45:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87930682"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:45:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 6 Jan 2014 10:45:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0CMt-0000md-HY;
	Mon, 06 Jan 2014 15:45:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0CMt-0003ZR-9A;
	Mon, 06 Jan 2014 15:45:31 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21194.53147.116254.601567@mariner.uk.xensource.com>
Date: Mon, 6 Jan 2014 15:45:31 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
In-Reply-To: <1389013250.31766.35.camel@kazak.uk.xensource.com>,
	<alpine.DEB.2.02.1401061253050.8667@kaball.uk.xensource.com>
References: <osstest-24246-mainreport@xen.org>
	<21194.39413.802675.333845@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1401061253050.8667@kaball.uk.xensource.com>
	<1389013250.31766.35.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL [and
	1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL"):
> On Mon, 2014-01-06 at 11:56 +0000, Ian Jackson wrote:
...
> > > version targeted for testing:
> > >  linux                cd7bd0ad11c8979ce247215833eb23d661c1c8fe
> 
> I can't find this in
> git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> which I think is the source of the arm kernels.

I had a conversation with Stefano where I gave him the go-ahead to
rewind the testing input branch, provided that it remained a
descendant of the test gate output branch (in linux-pvops.git on
xenbits).  I conjecture that he's done so and that explains why that
object isn't in your clone.
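[Editor's note: the safety condition described here -- the rewound branch must remain a descendant of the test gate output branch -- is exactly what `git merge-base --is-ancestor` tests. A minimal sketch, using hypothetical branch names in a throwaway repository:]

```shell
# Illustrative only: branch names are hypothetical; the repo is a
# temp-dir throwaway used to demonstrate the ancestry check.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m "gate output"
git branch gate-output      # last commit pushed by the test gate
git -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m "rewound tip"
git branch testing-input    # rewound input branch, still a descendant
if git merge-base --is-ancestor gate-output testing-input; then
    echo "safe rewind: gate output is still contained"
fi
```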

Stefano Stabellini writes ("Re: [linux-arm-xen test] 24246: regressions - FAIL"):
> On Mon, 6 Jan 2014, Ian Jackson wrote:
> > So 5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52 worked.
> 
> I pushed a new tree based on 3.13-rc6, it should be fixed now.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:45:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CNG-0004TS-Sq; Mon, 06 Jan 2014 15:45:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0CNF-0004TD-DQ
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 15:45:53 +0000
Received: from [85.158.143.35:26841] by server-1.bemta-4.messagelabs.com id
	A3/22-02132-0BFCAC25; Mon, 06 Jan 2014 15:45:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389023150!9888059!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2685 invoked from network); 6 Jan 2014 15:45:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:45:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87930801"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:45:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:45:50 -0500
Message-ID: <1389023148.31766.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Mon, 6 Jan 2014 15:45:48 +0000
In-Reply-To: <1388857936-664-1-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 0/4] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-04 at 12:52 -0500, Don Slutz wrote:
> Release manager requests:
>   patch 1 should be in 4.4.0
>   patch 3 and 4 would be good to be in 4.4.0

(as acting RM in George's absence):

In all three cases what is missing is the "why" and the appropriate
analysis from
http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze 
i.e. what is the impact of the bug (i.e. what are the advantages of the
fix) and what are the risks of this change causing further breakage?
I'm not really in a position to evaluate the risk of a change in gdbsx,
so someone needs to tell me.

I think given that gdbsx is a somewhat "peripheral" bit of code and that
it is targeted at developers (who might be better able to tolerate any
resulting issues and more able to apply subsequent fixups than regular
users) we can accept a larger risk than we would with a change to the
hypervisor itself etc (that's assuming that all of these changes cannot
potentially impact non-debugger usecases which I expect is the case from
the function names but I would like to see confirmed).

Apart from a release ack, all 4 of these would need an Ack from Mukesh IMHO.
TBH if he is happy with them then I see no reason not to give a release
ack as well.

>   patch 2 is optional.
> 
> While tracking down a bug in seabios/grub I found the bug in patch
> 1.
> 
> There are 2 ways that gfn will not be INVALID_GFN and yet mfn will
> be INVALID_MFN.
> 
>   1) p2m_is_readonly(gfntype) and writing memory.
>   2) the requested vaddr does not exist.
> 
> This may only be an issue for an HVM guest that is in real mode
> (i.e. no page tables).
> 
> Patch 2 is debug logging that was used to find the 2nd way.
> 
> Patches 3 and 4 are more of a cleanup bug fix.
> 
> Don Slutz (4):
>   dbg_rw_guest_mem: need to call put_gfn in error path.
>   dbg_rw_guest_mem: Enable debug log output
>   xg_read_mem: Report on error.
>   XEN_DOMCTL_gdbsx_guestmemio: always do the copyback.
> 
>  tools/debugger/gdbsx/xg/xg_main.c |  6 ++++--
>  xen/arch/x86/debug.c              | 44 ++++++++++++++++++++++++++++++---------
>  xen/arch/x86/domctl.c             |  3 +--
>  3 files changed, 39 insertions(+), 14 deletions(-)
> 
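[Editor's note: the cover letter is terse about what patch 1 fixes. Under the stated assumption that it is the usual get/put reference-leak pattern named in the patch title, a minimal C sketch looks like the following. The functions here are invented stand-ins, not the real Xen p2m API.]

```c
#include <assert.h>

/* Illustrative model of the patch-1 class of bug: a reference taken by
 * get_gfn() must be dropped by put_gfn() on every exit path, including
 * the error path where the mfn turns out to be invalid. */
#define INVALID_MFN (~0UL)

static int gfn_refcount;   /* models the p2m reference being held */

static unsigned long get_gfn(unsigned long gfn, int force_invalid)
{
    gfn_refcount++;        /* reference is taken even when lookup fails */
    return force_invalid ? INVALID_MFN : gfn + 0x1000;
}

static void put_gfn(unsigned long gfn)
{
    (void)gfn;
    gfn_refcount--;
}

/* Fixed version: the error path releases the reference before returning. */
static int dbg_rw_guest_mem(unsigned long gfn, int force_invalid)
{
    unsigned long mfn = get_gfn(gfn, force_invalid);

    if (mfn == INVALID_MFN) {
        put_gfn(gfn);      /* the fix: without this, the ref leaks */
        return -1;
    }
    /* ... read or write the page here ... */
    put_gfn(gfn);
    return 0;
}
```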



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 15:47:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 15:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0COn-0004gE-K6; Mon, 06 Jan 2014 15:47:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0COl-0004g1-VA
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 15:47:28 +0000
Received: from [85.158.137.68:51688] by server-5.bemta-3.messagelabs.com id
	1C/95-25188-F00DAC25; Mon, 06 Jan 2014 15:47:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389023245!7458865!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27937 invoked from network); 6 Jan 2014 15:47:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 15:47:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87931627"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 15:47:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	10:47:23 -0500
Message-ID: <1389023243.31766.65.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 6 Jan 2014 15:47:23 +0000
In-Reply-To: <21194.53147.116254.601567@mariner.uk.xensource.com>
References: <osstest-24246-mainreport@xen.org>
	<21194.39413.802675.333845@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1401061253050.8667@kaball.uk.xensource.com>
	<1389013250.31766.35.camel@kazak.uk.xensource.com>
	<21194.53147.116254.601567@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL [and
 1 more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 15:45 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [linux-arm-xen test] 24246: regressions - FAIL"):
> > On Mon, 2014-01-06 at 11:56 +0000, Ian Jackson wrote:
> ...
> > > > version targeted for testing:
> > > >  linux                cd7bd0ad11c8979ce247215833eb23d661c1c8fe
> > 
> > I can't find this in
> > git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > which I think is the source of the arm kernels.
> 
> I had a conversation with Stefano where I gave him the go-ahead to
> rewind the testing input branch, provided that it remained a
> descendant of the test gate output branch (in linux-pvops.git on
> xenbits).  I conjecture that he's done so and that explains why that
> object isn't in your clone.

That makes sense, thanks.

> Stefano Stabellini writes ("Re: [linux-arm-xen test] 24246: regressions - FAIL"):
> > On Mon, 6 Jan 2014, Ian Jackson wrote:
> > > So 5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52 worked.
> > 
> > I pushed a new tree based on 3.13-rc6, it should be fixed now.

Right, this is what I was actually seeing IIRC.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > which I think is the source of the arm kernels.
> 
> I had a conversation with Stefano where I gave him the go-ahead to
> rewind the testing input branch, provided that it remained a
> descendant of the test gate output branch (in linux-pvops.git on
> xenbits).  I conjecture that he's done so and that explains why that
> object isn't in your clone.

That makes sense, thanks.

> Stefano Stabellini writes ("Re: [linux-arm-xen test] 24246: regressions - FAIL"):
> > On Mon, 6 Jan 2014, Ian Jackson wrote:
> > > So 5e01dc7b26d9f24f39abace5da98ccbd6a5ceb52 worked.
> > 
> > I pushed a new tree based on 3.13-rc6, it should be fixed now.

Right, this is what I was actually seeing IIRC.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:03:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CeA-00060c-T2; Mon, 06 Jan 2014 16:03:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0CeA-00060U-8a
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:03:22 +0000
Received: from [85.158.137.68:49470] by server-1.bemta-3.messagelabs.com id
	68/0D-29598-9C3DAC25; Mon, 06 Jan 2014 16:03:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389024198!7462357!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25924 invoked from network); 6 Jan 2014 16:03:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:03:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="87939051"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 16:02:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 11:02:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0CZa-0000wp-S2;
	Mon, 06 Jan 2014 15:58:38 +0000
Date: Mon, 6 Jan 2014 15:57:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401061546480.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-89343518-1389023666=:8667"
Content-ID: <alpine.DEB.2.02.1401061555500.8667@kaball.uk.xensource.com>
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-89343518-1389023666=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1401061555501.8667@kaball.uk.xensource.com>

On Mon, 6 Jan 2014, Anthony Liguori wrote:
> On Jan 6, 2014 6:55 AM, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com> wrote:
> >
> > On Mon, 6 Jan 2014, Anthony Liguori wrote:
> > > On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
> > > >
> > > > On 6 January 2014 14:17, Stefano Stabellini
> > > > <stefano.stabellini@eu.citrix.com> wrote:
> > > > > It doesn't do any emulation so it is not specific to any architecture or
> > > > > any cpu.
> > > >
> > > > You presumably still care about the compiled in values of
> > > > TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...
> >
> > Actually it only uses XC_PAGE_SIZE and the endianness is the host
> > endianness.
>
> If blkif in QEMU is relying on host endianness that's a bug.

Why? Xen doesn't support a different guest/host endianness.


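[Editorial aside: the point at issue can be shown with a minimal C sketch. This is not QEMU's actual blkif code; the struct name and field are invented for illustration. A plain native-endian load from a guest-shared structure, as below, is only correct because Xen guarantees guest and host endianness match; a cross-endian guest would require explicit byte-order conversion.]

```c
#include <stdint.h>

/* Hypothetical, simplified stand-in for a blkif ring request field. */
struct blkif_request_sketch {
    uint64_t sector_number;
};

/* Reads the field with a native-endian load and no byte swap -- safe
 * only when the guest that wrote it shares the host's endianness. */
static uint64_t read_sector(const struct blkif_request_sketch *req)
{
    return req->sector_number;
}
```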
> > > Yup.  It's still accel=xen just with no VCPUs.
> >
> > Are you talking about introducing accel=xen to Wei's target-null?
> > I guess that would work OK.
>
> We already have accel=xen.  I'm echoing Peter's suggestion of having
> the ability to compile out accel=tcg.
>
> >
> > On the other hand if you are thinking of avoiding the introduction of a
> > new target-null, how would you make xen_machine_pv.c available to
> > multiple architectures?
>
> Why does qdisk need a full machine?

qdisk is just one device; xen_machine_pv is the machine that initializes
the backend infrastructure (one of the backends is qdisk).
It doesn't make sense to use a full-blown machine like pc.c just to
start a few backends, right?


> > How would you avoid the compilation of all the
> > unnecessary emulated devices?
>
> Device config files.
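[Editorial aside: "device config files" presumably refers to QEMU's per-target build configuration (the default-configs/*.mak style of that era). A hypothetical fragment in that style, with all names illustrative, might look like:]

```makefile
# Hypothetical per-target device config, default-configs/*.mak style.
# Keep only the Xen PV backend infrastructure; every emulated device
# model is simply left unset and therefore never compiled in.
CONFIG_XEN_BACKEND=y
# No CONFIG_IDE_CORE, CONFIG_VGA, CONFIG_SERIAL, etc.
```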
--1342847746-89343518-1389023666=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-89343518-1389023666=:8667--


From xen-devel-bounces@lists.xen.org Mon Jan 06 16:18:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0CsX-0006UR-O1; Mon, 06 Jan 2014 16:18:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0CsW-0006UM-GO
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 16:18:12 +0000
Received: from [85.158.139.211:34441] by server-17.bemta-5.messagelabs.com id
	EB/6A-19152-347DAC25; Mon, 06 Jan 2014 16:18:11 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389025090!8091971!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20614 invoked from network); 6 Jan 2014 16:18:10 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:18:10 -0000
Received: by mail-lb0-f175.google.com with SMTP id w6so9847157lbh.6
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 08:18:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=kAxD5aadMqJjsnYpMJZHdpQm0m+rAs805y+uPxlKRRA=;
	b=LwWVn1z4DERef3DyHCUK4uch2iIz2I3w0bbHOP/htfUuC66NtHV9wddMBStJg8O8cy
	MpbvfBpQMr8FPO+bFROLJR2PMsGe8a82tDoCg5vIWmJ3whQWytRO9pLKklBhsgSn3BXP
	saNjvQtkjzjY/Yo/ZKfgrZnTgVafwmfQw4VyGViEh8Bd7xGuLwo1T+MGLeQ+fXF43gE4
	/BDS6kYtfkfkLiuE9sGURO0bXv7gdinyWHZI/ztK7mXyVtLqymN/CD5Lgs4nkm5Xwfe1
	EwRiUul/0Gc3SUOxJJPxegXgNE9e1CKMIJY0t9RU38FMBuClveN3k1gEmJMVhcmqEsHl
	sUtQ==
X-Gm-Message-State: ALoCoQkjQ+EgosmSsdR6DFJwtXC3cvk6I6mN++9UKJ07ePHC4AKBx71Ay7W95rm1UEbc9frNrDLc
X-Received: by 10.152.143.101 with SMTP id sd5mr32280005lab.26.1389025090011; 
	Mon, 06 Jan 2014 08:18:10 -0800 (PST)
Received: from [195.69.50.12] ([195.69.50.12])
	by mx.google.com with ESMTPSA id e10sm55339430laa.6.2014.01.06.08.18.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 08:18:09 -0800 (PST)
Message-ID: <52CAD73F.6050006@linaro.org>
Date: Mon, 06 Jan 2014 16:18:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1386901910-1016-1-git-send-email-julien.grall@linaro.org>
	<1389021866.31766.51.camel@kazak.uk.xensource.com>
In-Reply-To: <1389021866.31766.51.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tsahee@gmx.com, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/06/2014 03:24 PM, Ian Campbell wrote:
> On Fri, 2013-12-13 at 02:31 +0000, Julien Grall wrote:
>> ePAR specifies that if the property "ranges" doesn't exist in a bus node:
>>
>> "it is assumed that no mapping exists between children of node and the parent
>> address space".
>>
>> Modify dt_number_of_address to check if the list of ranges are valid. Return
>> 0 (ie there is zero range) if the list is not valid.
>>
>> This patch has been tested on the Arndale where the bug can occur with the
>> '/hdmi' node.
>>
>> Reported-by: <tsahee@gmx.com>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> ---
>>
>> This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the Arndale
>> because it's unable to translate a wrong address.
>> ---
>>   xen/common/device_tree.c |   31 +++++++++++++++++++++++++++----
>>   1 file changed, 27 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>> index 84e709d..9b9a348 100644
>> --- a/xen/common/device_tree.c
>> +++ b/xen/common/device_tree.c
>> @@ -93,7 +93,7 @@ struct dt_bus
>>   {
>>       const char *name;
>>       const char *addresses;
>> -    int (*match)(const struct dt_device_node *parent);
>> +    bool_t (*match)(const struct dt_device_node *parent);
>>       void (*count_cells)(const struct dt_device_node *child,
>>                           int *addrc, int *sizec);
>>       u64 (*map)(__be32 *addr, const __be32 *range, int na, int ns, int pna);
>> @@ -793,6 +793,18 @@ int dt_n_size_cells(const struct dt_device_node *np)
>>   /*
>>    * Default translator (generic bus)
>>    */
>> +static bool_t dt_bus_default_match(const struct dt_device_node *parent)
>> +{
>> +    /* Root node doesn't have "ranges" property */
>> +    if ( parent->parent == NULL )
>
> Calling the parameter "parent" led to me confusedly wondering why it was
> the grandparent which mattered. I suppose "parent" in this sense means
> the "parent bus" as opposed to parent node? Or does it just fall out of
> the fact that in the caller it is the parent which is passed through?
>
> Can we call it something else, like "bus" or "node" or something else
> appropriate?


Right, the name is confusing. It took me a few minutes to understand it,
even though I wrote the code :). I blindly copied the code from Linux's
of_bus structure.

The best name would be "node", because the function is trying to
determine whether the parameter is a bus or not.
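[Editorial aside: the ePAR "ranges" rule that this patch enforces can be illustrated with a hypothetical device-tree fragment. Node names and addresses below are invented, loosely modelled on the Arndale '/hdmi' case from the commit message.]

```dts
/ {
    /* A bus node with "ranges": child addresses are translatable
     * into the parent (CPU) address space. */
    soc {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;                      /* 1:1 mapping to parent */
        uart@12C00000 {
            reg = <0x12C00000 0x100>;
        };
    };

    /* A node without "ranges": per ePAR, no mapping exists between
     * its children and the parent address space, so the child "reg"
     * must not be translated to a CPU address. */
    hdmi {
        #address-cells = <1>;
        #size-cells = <1>;
        phy@145D0000 {
            reg = <0x145D0000 0x20>;
        };
    };
};
```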

>> +        return 1;
>> +
>> +    /* The default bus is only used when the "ranges" property exists.
>> +     * Otherwise we can't translate the address
>> +     */
>> +    return (dt_get_property(parent, "ranges", NULL) != NULL);
>> +}
>> +
>>   static void dt_bus_default_count_cells(const struct dt_device_node *dev,
>>                                   int *addrc, int *sizec)
>>   {
>> @@ -819,7 +831,7 @@ static u64 dt_bus_default_map(__be32 *addr, const __be32 *range,
>>        * If the number of address cells is larger than 2 we assume the
>>        * mapping doesn't specify a physical address. Rather, the address
>>        * specifies an identifier that must match exactly.
>> -     */
>> +      */
>
> Spurious w/s change.
>
> Other than those two changes, the patch looks good.

I will fix it.



-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:33:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0D7F-0007Is-J0; Mon, 06 Jan 2014 16:33:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0D7E-0007In-PH
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:33:24 +0000
Received: from [85.158.143.35:17585] by server-1.bemta-4.messagelabs.com id
	BC/B8-02132-4DADAC25; Mon, 06 Jan 2014 16:33:24 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389026002!9945870!1
X-Originating-IP: [209.85.217.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5688 invoked from network); 6 Jan 2014 16:33:23 -0000
Received: from mail-lb0-f170.google.com (HELO mail-lb0-f170.google.com)
	(209.85.217.170)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:33:23 -0000
Received: by mail-lb0-f170.google.com with SMTP id c11so10112730lbj.1
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 08:33:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=gMqo0uczPAlMYHv8lxvJ+dtM5BDeh9CWaN40EI7iGd0=;
	b=B+KtgQZpOXy4LYe5cmGhfu983uEPGwh8OhSpdpSsYZugl6S8WdhhsV9JVmb9+sszqN
	N5bAhIzxGtXkR3M397yr95wG7qjC9PI0/ClwGq/vo8cnUMSujoayVlFhaQhtnvAYkhqX
	pDK1ArlVMQ0q9fZYlo5ROWVXeicLk6FuNvQjnEgmdwaB/HPWBYrglW0Dhh1SA1g2X+wl
	C12tRVx09y4wujz+M7zxwxSIr06JOARAUUKyLr6FYcDrRxQPwfFpDzb0CWjH7z8WX3Jm
	JtrEEbBqsVkz13FV+wwvEgz3JujwS80lwowC6sNJvmB1G+SJFHoJBKZqeC8MH1V5XRpn
	rDrw==
X-Gm-Message-State: ALoCoQmTcTBiYOKbqHuu9iGYZ7hroWpuwQ4X8UyhA4RzoLRFhJ0kgqhNS07nJKfcDL/GREO/ZsbF
X-Received: by 10.112.135.102 with SMTP id pr6mr1358130lbb.43.1389025999682;
	Mon, 06 Jan 2014 08:33:19 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Mon, 6 Jan 2014 08:32:59 -0800 (PST)
In-Reply-To: <20140106151154.GA10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Mon, 6 Jan 2014 16:32:59 +0000
Message-ID: <CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 January 2014 15:11, Wei Liu <wei.liu2@citrix.com> wrote:
> On Mon, Jan 06, 2014 at 11:23:24PM +1000, Peter Crosthwaite wrote:
> [...]
>> >
>> > Finally I got a qemu-system-null. And the effect is immediately visible
>>
>> qemu-system-null has been on my wish-list in the past, although my
>> reasons were slightly different to yours. Specifically, the goal was
>> to test CPUs in an RTL simulator interacting with RAM and peripheral
>> devices hosted in QEMU.

> Cool. However small, this is still a valid use case.

However I don't think we can have a qemu-system-null
(regardless of use cases) until/unless we get rid of
all the things which are compile-time decided by
the system config. In an ideal world we wouldn't have
any of those (and perhaps you could even build
support for more than one kind of CPU into one QEMU
binary), but as it is we do have them, and so a
qemu-system-null is not possible.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DAF-0007PJ-6Q; Mon, 06 Jan 2014 16:36:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0DAD-0007PE-B9
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 16:36:29 +0000
Received: from [85.158.143.35:8726] by server-1.bemta-4.messagelabs.com id
	2C/0D-02132-C8BDAC25; Mon, 06 Jan 2014 16:36:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389026187!7256052!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20731 invoked from network); 6 Jan 2014 16:36:27 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:36:27 -0000
Received: by mail-la0-f54.google.com with SMTP id b8so10188209lan.27
	for <xen-devel@lists.xenproject.org>;
	Mon, 06 Jan 2014 08:36:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=y3k4NKzAoqL5ReBn5KEJhItRYtTANNEQvlCm5MNqE4U=;
	b=DMO4bDcF0L8/nr69jaymfXR/u0Aiwo9mr1q/vevbsT8cIBWCPe75KIlCmPvNXHLRSP
	HQ8tBzAUmYH+t0/GI/fbrbLxHE3nb5neKJPR0daFe5jPqqWfEhLemVSXz1+qFh1pxgSy
	Ja5WirjCgT2WlzMh+hTiURPq3S27NW4j3uY3vI2P/73P9xytE6oxVWMjh41/Wb5Vvr/j
	EoiNOSC8cuXBXQpgn2o+9bQKvyiprxW4iDexDHO3coZfYcdYY2sIoLGWj7VfhB9Oi7AJ
	7N50fbRlEbdOe6wUeOSqL6uteha5uCVi1jkLUNHuEMoVnAIqwZmq+EJNlBA7hxXpfKXA
	Jhpg==
X-Gm-Message-State: ALoCoQlDVseJB/pH1WcnK7xQ5zJwy4K4Xm4NyVFgLK6+7APPN1hU7gO5uoe/aNVc9YXLLUhSRNgj
X-Received: by 10.112.64.10 with SMTP id k10mr562758lbs.86.1389026186811;
	Mon, 06 Jan 2014 08:36:26 -0800 (PST)
Received: from localhost.localdomain ([195.69.49.131])
	by mx.google.com with ESMTPSA id r10sm55382913lag.7.2014.01.06.08.36.25
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 06 Jan 2014 08:36:26 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon,  6 Jan 2014 16:36:18 +0000
Message-Id: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.3.1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, patches@linaro.org
Subject: [Xen-devel] [PATCH v2] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ePAPR specifies that if the property "ranges" doesn't exist in a bus node:

"it is assumed that no mapping exists between children of node and the parent
address space".

Modify dt_number_of_address to check whether the list of ranges is valid.
Return 0 (i.e. there are zero ranges) if the list is not valid.

This patch has been tested on the Arndale where the bug can occur with the
'/hdmi' node.

Reported-by: <tsahee@gmx.com>
Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the Arndale
because it fails while trying to translate an invalid address.

    Changes in v2:
        - the "parent" name was confusing; rename it to "node"
        - remove a spurious change
---
 xen/common/device_tree.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c35aee1..fa9d06c 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -83,7 +83,7 @@ struct dt_bus
 {
     const char *name;
     const char *addresses;
-    int (*match)(const struct dt_device_node *parent);
+    bool_t (*match)(const struct dt_device_node *node);
     void (*count_cells)(const struct dt_device_node *child,
                         int *addrc, int *sizec);
     u64 (*map)(__be32 *addr, const __be32 *range, int na, int ns, int pna);
@@ -783,6 +783,18 @@ int dt_n_size_cells(const struct dt_device_node *np)
 /*
  * Default translator (generic bus)
  */
+static bool_t dt_bus_default_match(const struct dt_device_node *node)
+{
+    /* Root node doesn't have "ranges" property */
+    if ( node->parent == NULL )
+        return 1;
+
+    /* The default bus is only used when the "ranges" property exists.
+     * Otherwise we can't translate the address
+     */
+    return (dt_get_property(node, "ranges", NULL) != NULL);
+}
+
 static void dt_bus_default_count_cells(const struct dt_device_node *dev,
                                 int *addrc, int *sizec)
 {
@@ -846,7 +858,7 @@ static const struct dt_bus dt_busses[] =
     {
         .name = "default",
         .addresses = "reg",
-        .match = NULL,
+        .match = dt_bus_default_match,
         .count_cells = dt_bus_default_count_cells,
         .map = dt_bus_default_map,
         .translate = dt_bus_default_translate,
@@ -861,7 +873,6 @@ static const struct dt_bus *dt_match_bus(const struct dt_device_node *np)
     for ( i = 0; i < ARRAY_SIZE(dt_busses); i++ )
         if ( !dt_busses[i].match || dt_busses[i].match(np) )
             return &dt_busses[i];
-    BUG();
 
     return NULL;
 }
@@ -880,7 +891,10 @@ static const __be32 *dt_get_address(const struct dt_device_node *dev,
     parent = dt_get_parent(dev);
     if ( parent == NULL )
         return NULL;
+
     bus = dt_match_bus(parent);
+    if ( !bus )
+        return NULL;
     bus->count_cells(dev, &na, &ns);
 
     if ( !DT_CHECK_ADDR_COUNT(na) )
@@ -984,6 +998,8 @@ static u64 __dt_translate_address(const struct dt_device_node *dev,
     if ( parent == NULL )
         goto bail;
     bus = dt_match_bus(parent);
+    if ( !bus )
+        goto bail;
 
     /* Count address cells & copy address locally */
     bus->count_cells(dev, &na, &ns);
@@ -1016,6 +1032,11 @@ static u64 __dt_translate_address(const struct dt_device_node *dev,
 
         /* Get new parent bus and counts */
         pbus = dt_match_bus(parent);
+        if ( pbus == NULL )
+        {
+            dt_printk("DT: %s is not a valid bus\n", parent->full_name);
+            break;
+        }
         pbus->count_cells(dev, &pna, &pns);
         if ( !DT_CHECK_COUNTS(pna, pns) )
         {
@@ -1154,6 +1175,8 @@ unsigned int dt_number_of_address(const struct dt_device_node *dev)
         return 0;
 
     bus = dt_match_bus(parent);
+    if ( !bus )
+        return 0;
     bus->count_cells(dev, &na, &ns);
 
     if ( !DT_CHECK_COUNTS(na, ns) )
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:39:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DDE-0007qU-S5; Mon, 06 Jan 2014 16:39:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0DDD-0007pY-CC
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:39:35 +0000
Received: from [85.158.143.35:45429] by server-3.bemta-4.messagelabs.com id
	CA/9F-32360-64CDAC25; Mon, 06 Jan 2014 16:39:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389026372!9944923!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24369 invoked from network); 6 Jan 2014 16:39:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:39:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90128418"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 16:39:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	11:39:31 -0500
Message-ID: <1389026370.31766.83.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 6 Jan 2014 16:39:30 +0000
In-Reply-To: <20131218134629.GG25969@zion.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2013-12-18 at 13:46 +0000, Wei Liu wrote:
> On Wed, Dec 18, 2013 at 01:12:13PM +0000, Ian Campbell wrote:
> > On Tue, 2013-12-17 at 22:53 +0000, Wei Liu wrote:
> > > This replicates a Xend behavior. When you specify 'vnc=1' and there's no
> > > 'vfb=[]' in a PV guest's config file, xl parses all top level VNC options and
> > > creates a VFB for you.
> > > 
> > > Fixes bug #25.
> > > http://bugs.xenproject.org/xen/bug/25
> > > 
> > > Reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> > > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > 
> > Looks good.
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Has this had a discussion about the 4.4 release? It's somewhere between
> > a bug fix and a new feature, I suppose more of a bug fix.
> > 
> 
> Yes it is a bug fix.
> 
> CC'ing George now.

And now George has gone away and left me holding the can, smart move on
his part ;-)

With RM hat on my main concern here is that this smells an awful lot
like a new feature and not strictly a bug fix (the presence of a bug is
a bit of a red-herring, it's notionally a wishlist bug even if it isn't
currently tagged as such).

On the flip side we have been giving somewhat special dispensation to
"bugs" of the form "xl does not do $something_xend_did" and treating
them as something stronger than wishlist (although I'm not sure how much
stronger). I notice that George has moved all of those under backlog in
the latest 4.4 dev update though (see [1]), so I'm not sure if he would
still apply this rule (were I to know what it actually was).

Konrad, to what extent is this a blocker for you (or the OVM tooling)
vs. it just being something you spotted by random chance?

So considering the guidelines George left[2]:

The patch contributes to an "awesome release" by allowing some set of
existing xm configuration files to work as is, which is something we are
keen on in order to continue to migrate users over. There is a
workaround though (rewrite the cfg file to use the other syntax, which
works), which although falling short of our desire for this to "just
work" is not immensely complex to apply.

The potential risk is that this breaks the existing vfb syntax which
works, or it breaks the hvm stuff. This is of particular concern because
I don't think any of that is covered by osstest (except perhaps hvm
console, but that might only be on some other error when osstest takes a
screenshot), so the probability of finding it before release is reliant
on manual testing/test days/user testing etc.

Are you happy that all the existing options keep working?

The patch is big enough that it isn't "obviously correct". The patch
is textually large because it contains two refactorings
(ARRAY_EXTEND_INIT and parse_top_level_vnc_options). ARRAY_EXTEND_INIT
had pretty comprehensive review from me and Ian J as it was constructed.
parse_top_... is really just code motion (although I'm a bit concerned
that the hvm callsite has moved too). With my RM hat off and my
maintainer hat on I've reviewed it and it looks good. (RM hat back on)

So, I think I'm a bit border line but slightly on the side of accepting
it for 4.4 unless anyone has a counter opinion.

Ian.

[1] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Meta-items_.28composed_of_other_items.29
[2] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


So considering the guidelines George left[2]:

The patch contributes to an "awesome release" by allowing some set of
existing xm configuration files to work as is, which is something we are
keen on in order to continue to migrate users over. There is a
workaround though (rewrite the cfg file to use the other syntax, which
works), which although falling short of our desire for this to "just
work" is not immensely complex to apply.
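
[For concreteness, the two configuration styles in question look roughly like
this; values are illustrative, option names as used in xl/xm guest config
files:

```
# PV guest config using top-level VNC options (xend-style); with the
# patch, xl synthesizes a VFB from these:
name      = "pvguest"
kernel    = "/boot/vmlinuz"
vnc       = 1
vnclisten = "0.0.0.0"

# Equivalent explicit form (the workaround), which already works today:
# vfb = [ 'vnc=1, vnclisten=0.0.0.0' ]
```
]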

The potential risk is that this breaks the existing vfb syntax which
works, or it breaks the hvm stuff. This is of particular concern because
I don't think any of that is covered by osstest (except perhaps hvm
console, but that might only be on some other error when osstest takes a
screenshot), so the probability of finding it before release is reliant
on manual testing/test days/user testing etc.
Are you happy that all the existing options keep working?

The patch is big enough that it isn't "obviously correct". The patch
is textually large because it contains two refactorings
(ARRAY_EXTEND_INIT and parse_top_level_vnc_options). ARRAY_EXTEND_INIT
had pretty comprehensive review from me and Ian J as it was constructed.
parse_top_... is really just code motion (although I'm a bit concerned
that the hvm callsite has moved too). With my RM hat off and my
maintainer hat on I've reviewed it and it looks good. (RM hat back on)
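
[The "extend array and initialise the new slot" pattern that a macro like
ARRAY_EXTEND_INIT captures can be sketched as below; this is a hedged,
generic reconstruction, not the actual definition from xl's source, and
the vfb_t/vfb_init names are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int vnc; } vfb_t;   /* hypothetical element type */

/* Grow the array by one zeroed element and run an init function on it.
 * Sketch only; xl's real ARRAY_EXTEND_INIT may differ in detail. */
#define ARRAY_EXTEND_INIT(array, count, elem_init)                  \
    do {                                                            \
        (count)++;                                                  \
        (array) = realloc((array), (count) * sizeof(*(array)));    \
        memset(&(array)[(count) - 1], 0, sizeof(*(array)));        \
        elem_init(&(array)[(count) - 1]);                           \
    } while (0)

static void vfb_init(vfb_t *v) { v->vnc = 1; }
```
]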

So, I think I'm a bit borderline but slightly on the side of accepting
it for 4.4 unless anyone has a counter opinion.

Ian.

[1] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Meta-items_.28composed_of_other_items.29
[2] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:42:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DFW-000827-PT; Mon, 06 Jan 2014 16:41:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0DFV-00081z-AT
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:41:57 +0000
Received: from [85.158.143.35:5959] by server-2.bemta-4.messagelabs.com id
	55/BB-11386-4DCDAC25; Mon, 06 Jan 2014 16:41:56 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389026515!9866650!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11977 invoked from network); 6 Jan 2014 16:41:56 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 16:41:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 06 Jan 2014 16:41:54 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="641681439"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.45])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 16:41:53 +0000
Message-ID: <52CADCD1.3060107@terremark.com>
Date: Mon, 06 Jan 2014 11:41:53 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
 Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>	
	<1388857936-664-3-git-send-email-dslutz@verizon.com>	
	<20140104211319.GC9395@phenom.dumpdata.com>	
	<52C87C1C.5080908@terremark.com>
	<1389022105.31766.54.camel@kazak.uk.xensource.com>
In-Reply-To: <1389022105.31766.54.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 10:28, Ian Campbell wrote:
> On Sat, 2014-01-04 at 16:24 -0500, Don Slutz wrote:
>> On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
>>
>>> On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
>>>> This also fixes the old debug output to compile and work if DBGP1
>>>> and DBGP2 are defined like DBGP3.
>>>>
>>>> @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>>>>       return len;
>>>>   }
>>>>   
>>>> +/*
>>>> + * Local variables:
>>>> + * mode: C
>>>> + * c-file-style: "BSD"
>>>> + * c-basic-offset: 4
>>>> + * indent-tabs-mode: nil
>>>> + * End:
>>>> + */
>>> ??
>> I take that as "Why add this"?
>>
>> Dealing with many different styles of code, I find it helps to have
>> this emacs add.
> Yes, but why is it in the optional debug patch?
>
>

Because I did not think that it needs to be in 4.4.0.  I have no issue with moving it into the 1st patch.  Will wait for guidance.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:44:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DHv-0008Ae-LH; Mon, 06 Jan 2014 16:44:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0DHu-0008AX-IW
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:44:26 +0000
Received: from [193.109.254.147:38742] by server-16.bemta-14.messagelabs.com
	id 4F/DC-20600-96DDAC25; Mon, 06 Jan 2014 16:44:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389026664!6807727!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19943 invoked from network); 6 Jan 2014 16:44:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 16:44:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="90130333"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 16:44:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	11:44:22 -0500
Message-ID: <1389026661.31766.85.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Mon, 6 Jan 2014 16:44:21 +0000
In-Reply-To: <52CADCD1.3060107@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
	<20140104211319.GC9395@phenom.dumpdata.com>
	<52C87C1C.5080908@terremark.com>
	<1389022105.31766.54.camel@kazak.uk.xensource.com>
	<52CADCD1.3060107@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
 output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:41 -0500, Don Slutz wrote:
> Because I did not think that it needs to be in 4.4.0.

Oh, did you mean it was optional in the sense of could be deferred from
4.4.0 to 4.5.0 versus being entirely discardable?

>   I have no issue with moving it into the 1st patch.

As usual a separate patch for a separate change would be best.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 16:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 16:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DNH-0000EM-Uk; Mon, 06 Jan 2014 16:49:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0DNG-0000EH-7J
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 16:49:58 +0000
Received: from [193.109.254.147:47934] by server-14.bemta-14.messagelabs.com
	id 99/AC-12628-5BEDAC25; Mon, 06 Jan 2014 16:49:57 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389026995!6874772!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20170 invoked from network); 6 Jan 2014 16:49:56 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 16:49:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 16:49:54 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,613,1384300800"; d="scan'208";a="641688847"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.45])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 16:49:52 +0000
Message-ID: <52CADEB0.3080802@terremark.com>
Date: Mon, 06 Jan 2014 11:49:52 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
 Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>		
	<1388857936-664-3-git-send-email-dslutz@verizon.com>		
	<20140104211319.GC9395@phenom.dumpdata.com>		
	<52C87C1C.5080908@terremark.com>	
	<1389022105.31766.54.camel@kazak.uk.xensource.com>	
	<52CADCD1.3060107@terremark.com>
	<1389026661.31766.85.camel@kazak.uk.xensource.com>
In-Reply-To: <1389026661.31766.85.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 11:44, Ian Campbell wrote:
> On Mon, 2014-01-06 at 11:41 -0500, Don Slutz wrote:
>> Because I did not think that it needs to be in 4.4.0.
> Oh, did you mean it was optional in the sense of could be deferred from
> 4.4.0 to 4.5.0 versus being entirely discardable?

Yes, that is what I was trying to say.

>
>>    I have no issue with moving it into the 1st patch.

Will make this change its own patch.

    -Don Slutz

> As usual a separate patch for a separate change would be best.
>
> Ian.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:05:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DcQ-0000eN-Ua; Mon, 06 Jan 2014 17:05:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1W0DcP-0000d7-Lr
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:05:37 +0000
Received: from [85.158.139.211:20243] by server-16.bemta-5.messagelabs.com id
	0E/6D-11843-062EAC25; Mon, 06 Jan 2014 17:05:36 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389027935!8131487!1
X-Originating-IP: [209.132.183.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjQgPT4gNzcxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10835 invoked from network); 6 Jan 2014 17:05:36 -0000
Received: from mx3-phx2.redhat.com (HELO mx3-phx2.redhat.com) (209.132.183.24)
	by server-2.tower-206.messagelabs.com with SMTP;
	6 Jan 2014 17:05:36 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx3-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id s06H5Adg005508;
	Mon, 6 Jan 2014 12:05:10 -0500
Date: Mon, 6 Jan 2014 12:05:10 -0500 (EST)
From: Dave Anderson <anderson@redhat.com>
To: "Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
Message-ID: <1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
MIME-Version: 1.0
X-Originating-IP: [10.5.82.12]
X-Mailer: Zimbra 8.0.3_GA_5664 (ZimbraWebClient - FF22 (Linux)/8.0.3_GA_5664)
Thread-Topic: Enable use of crash on xen 4.4.0 vmcore
Thread-Index: TqudEE70mIy+2BzJr6yoQVma8NzoqQ==
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, kexec@lists.infradead.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Crash-utility] [PATCH 0/4] Enable use of crash on
 xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
> exists.  This prevents crash from using a xen 4.4.0 vmcore.
> 
> Patch 1 "fixes" this.
> 
> Patch 2 is a minor fix: the offset for domain_domain_flags was being
> output in hex rather than decimal.
> 
> Patch 3 is a bug fix to get all "domain_flags" set, not just the 1st
> one found.
> 
> Patch 4 is a quick way to add domain.guest_type support.

Hi Don,

The patch looks good to me.  But for the crash.changelog, can you show
what happens when you attempt to look at one of these PVH dumps without
your patches?

Thanks,
  Dave

> 
> Don Slutz (4):
>   Make domain.is_hvm optional
>   xen: Fix offset output to be decimal.
>   xen: set all domain_flags, not just the 1st.
>   Add basic domain.guest_type support.
> 
>  xen_hyper.c             | 32 ++++++++++++++++++++++++--------
>  xen_hyper_defs.h        |  1 +
>  xen_hyper_dump_tables.c |  4 +++-
>  3 files changed, 28 insertions(+), 9 deletions(-)
> 
> --
> 1.8.4
> 
> --
> Crash-utility mailing list
> Crash-utility@redhat.com
> https://www.redhat.com/mailman/listinfo/crash-utility
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:27:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Dxg-0001aP-FF; Mon, 06 Jan 2014 17:27:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0Dxf-0001aK-0m
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:27:35 +0000
Received: from [193.109.254.147:64999] by server-15.bemta-14.messagelabs.com
	id AC/CF-22186-687EAC25; Mon, 06 Jan 2014 17:27:34 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389029252!6816485!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10360 invoked from network); 6 Jan 2014 17:27:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:27:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="90145672"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 17:27:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 12:27:31 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0Dxb-0002k6-6k;
	Mon, 06 Jan 2014 17:27:31 +0000
Date: Mon, 6 Jan 2014 17:27:31 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140106172731.GC10654@zion.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389026370.31766.83.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 04:39:30PM +0000, Ian Campbell wrote:
> On Wed, 2013-12-18 at 13:46 +0000, Wei Liu wrote:
> > On Wed, Dec 18, 2013 at 01:12:13PM +0000, Ian Campbell wrote:
> > > On Tue, 2013-12-17 at 22:53 +0000, Wei Liu wrote:
> > > > This replicates a Xend behavior. When you specify 'vnc=1' and there's no
> > > > 'vfb=[]' in a PV guest's config file, xl parses all top level VNC options and
> > > > creates a VFB for you.
> > > > 
> > > > Fixes bug #25.
> > > > http://bugs.xenproject.org/xen/bug/25
> > > > 
> > > > Reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> > > > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > > 
> > > Looks good.
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > > 
> > > Has this had a discussion about the 4.4 release? It's somewhere between
> > > a bug fix and a new feature, I suppose more of a bug fix.
> > > 
> > 
> > Yes it is a bug fix.
> > 
> > CC'ing George now.
> 
> And now George has gone away and left me holding the can, smart move on
> his part ;-)
> 
> With RM hat on, my main concern here is that this smells an awful lot
> like a new feature and not strictly a bug fix (the presence of a bug is
> a bit of a red-herring, it's notionally a wishlist bug even if it isn't
> currently tagged as such).
> 
> On the flip side we have been giving somewhat special dispensation to
> "bugs" of the form "xl does not do $something_xend_did" and treating
> them as something stronger than wishlist (although I'm not sure how much
> stronger). I notice that George has moved all of those under backlog in
> the latest 4.4 dev update though (see [1]), so I'm not sure if he would
> still apply this rule (were I to know what it actually was).
> 

FWIW, the wiki page you looked at was updated on Dec 5, while in
<CAFLBxZZYMU3wJey4tZN9pVrfeTeujLQLBRbiGe-WFevpO2UhvQ@mail.gmail.com>,
which was sent later, on Dec 13, this bug was the first one in the "open"
category. That's why I picked this one in the first place. :-)

That said, I'm not urging you to merge this. It was just side work done
when I was relatively free. I'm fine with any decision you make.

Wei.

> [1] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Meta-items_.28composed_of_other_items.29

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0DzA-0001t7-WF; Mon, 06 Jan 2014 17:29:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0Dz9-0001sh-Ds
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:29:07 +0000
Received: from [85.158.143.35:17212] by server-1.bemta-4.messagelabs.com id
	55/30-02132-2E7EAC25; Mon, 06 Jan 2014 17:29:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389029344!9799956!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11043 invoked from network); 6 Jan 2014 17:29:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:29:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="90146205"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 17:29:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 6 Jan 2014 12:29:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0Dz4-0001Jp-8d;
	Mon, 06 Jan 2014 17:29:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0Dz3-0003ii-WF;
	Mon, 06 Jan 2014 17:29:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21194.59357.835065.559486@mariner.uk.xensource.com>
Date: Mon, 6 Jan 2014 17:29:01 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389026370.31766.83.camel@kazak.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH V4] xl: create VFB for PV guest when VNC is specified"):
> And now George has gone away and left me holding the can, smart move on
> his part ;-)

I have also reviewed this patch again.

Firstly,

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Now, onto the RM questions.

> The potential risk is that this breaks the existing vfb syntax which
> works, or it breaks the hvm stuff. This is of particular concern because
> I don't think any of that is covered by osstest (except perhaps hvm
> console, but that might only be on some other error when osstest takes a
> screenshot), so the probability of finding it before release is reliant
> on manual testing/test days/user testing etc.
> Are you happy that all the existing options keep working?

Wei, could you tell us what configuration(s) you tested ?

> The patch is big enough that it isn't "obviously correct".  The patch
> is textually large because it contains two refactorings
> (ARRAY_EXTEND_INIT and parse_top_level_vnc_options). ARRAY_EXTEND_INIT
> had pretty comprehensive review from me and Ian J as it was constructed.

More to the point, if ARRAY_EXTEND_INIT and the use site pattern were
broken it would totally break vfb entirely.  I think we'd be likely to
detect this well before release if it hadn't been spotted already.

> parse_top_... is really just code motion (although I'm a bit concerned
> that the hvm callsite has moved too). With my RM hat off and my
> maintainer hat on I've reviewed it and it looks good. (RM hat back on)

The autotester does use vnc, so it should detect any systematic bugs
introduced here.  So, as long as the change is sufficiently systematic,
I think we are fine.
 
> So, I think I'm a bit borderline but slightly on the side of accepting
> it for 4.4 unless anyone has a counter opinion.

Right, I don't disagree, particularly if Wei's reply to my question
about testing is reassuring.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:35:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0E5O-0002Bq-UP; Mon, 06 Jan 2014 17:35:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0E5M-0002Bl-VY
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:35:33 +0000
Received: from [85.158.143.35:51741] by server-3.bemta-4.messagelabs.com id
	99/E5-32360-469EAC25; Mon, 06 Jan 2014 17:35:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389029730!9960018!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1918 invoked from network); 6 Jan 2014 17:35:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="90148242"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 17:35:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 12:35:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0E5J-0002qh-Er;
	Mon, 06 Jan 2014 17:35:29 +0000
Date: Mon, 6 Jan 2014 17:34:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Peter Maydell <peter.maydell@linaro.org>
In-Reply-To: <CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Peter Maydell wrote:
> On 6 January 2014 15:11, Wei Liu <wei.liu2@citrix.com> wrote:
> > On Mon, Jan 06, 2014 at 11:23:24PM +1000, Peter Crosthwaite wrote:
> > [...]
> >> >
> >> > Finally I got a qemu-system-null. And the effect is immediately visible
> >>
> >> qemu-system-null has been on my wish-list in the past, although my
> >> reasons were slightly different to yours. Specifically, the goal was
> >> to test CPUs in an RTL simulator interacting with RAM and peripheral
> >> devices hosted in QEMU.
> 
> > Cool. However small, this is still a valid use case.
> 
> However I don't think we can have a qemu-system-null
> (regardless of use cases) until/unless we get rid of
> all the things which are compile-time decided by
> the system config. In an ideal world we wouldn't have
> any of those (and perhaps you could even build
> support for more than one kind of CPU into one QEMU
> binary), but as it is we do have them, and so a
> qemu-system-null is not possible.

What are these compile-time things you are referring to?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:39:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0E9G-0002ad-MA; Mon, 06 Jan 2014 17:39:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0E9F-0002a5-Jt
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:39:33 +0000
Received: from [85.158.139.211:64035] by server-17.bemta-5.messagelabs.com id
	E3/E2-19152-45AEAC25; Mon, 06 Jan 2014 17:39:32 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389029970!8149889!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3008 invoked from network); 6 Jan 2014 17:39:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:39:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="87977535"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 17:39:30 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	12:39:30 -0500
Message-ID: <52CAEA50.8010702@citrix.com>
Date: Mon, 6 Jan 2014 17:39:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/kexec: Identify which cpu the kexec
 image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 11:59, Andrew Cooper wrote:
> A patch to this effect has been in XenServer for a little while, and has
> proved to be a useful debugging point for servers which have different
> behaviours depending on whether they crash on the non-bootstrap processor.
> 
> Moving the printk() from kexec_panic() to one_cpu_only() means that it will
> not be printed multiple times in the case of multiple cpus racing on the kexec
> path.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: David Vrabel <david.vrabel@citrix.com>

Acked-by: David Vrabel <david.vrabel@citrix.com>

One for 4.5 I think.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:47:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EGw-0002qC-DH; Mon, 06 Jan 2014 17:47:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0EGv-0002q7-L8
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:47:29 +0000
Received: from [85.158.143.35:48676] by server-2.bemta-4.messagelabs.com id
	88/A6-11386-13CEAC25; Mon, 06 Jan 2014 17:47:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389030447!7271357!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5790 invoked from network); 6 Jan 2014 17:47:28 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 17:47:28 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06HkNJM028605
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 17:46:24 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06HkLSW011812
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 17:46:23 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06HkLWx000543; Mon, 6 Jan 2014 17:46:21 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 09:46:21 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1A0E51C18DD; Mon,  6 Jan 2014 12:46:20 -0500 (EST)
Date: Mon, 6 Jan 2014 12:46:20 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140106174620.GA28636@phenom.dumpdata.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389026370.31766.83.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Mon, Jan 06, 2014 at 04:39:30PM +0000, Ian Campbell wrote:
> On Wed, 2013-12-18 at 13:46 +0000, Wei Liu wrote:
> > On Wed, Dec 18, 2013 at 01:12:13PM +0000, Ian Campbell wrote:
> > > On Tue, 2013-12-17 at 22:53 +0000, Wei Liu wrote:
> > > > This replicates a Xend behavior. When you specify 'vnc=1' and there's no
> > > > 'vfb=[]' in a PV guest's config file, xl parses all top level VNC options and
> > > > creates a VFB for you.
> > > > 
> > > > Fixes bug #25.
> > > > http://bugs.xenproject.org/xen/bug/25
> > > > 
> > > > Reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> > > > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > > 
> > > Looks good.
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > > 
> > > Has this had a discussion about the 4.4 release? It's somewhere between
> > > a bug fix and a new feature, I suppose more of a bug fix.
> > > 
> > 
> > Yes it is a bug fix.
> > 
> > CC'ing George now.
> 
> And now George has gone away and left me holding the can, smart move on
> his part ;-)
> 
> With RM hat on my main concern here is that this smells an awful lot
> like a new feature and not strictly a bug fix (the presence of a bug is
> a bit of a red-herring, it's notionally a wishlist bug even if it isn't
> currently tagged as such).
> 
> On the flip side we have been giving somewhat special dispensation to
> "bugs" of the form "xl does not do $something_xend_did" and treating
> them as something stronger than wishlist (although I'm not sure how much
> stronger). I notice that George has moved all of those under backlog in
> the latest 4.4 dev update though (see [1]), so I'm not sure if he would
> still apply this rule (were I to know what it actually was).
> 
> Konrad, to what extent is this a blocker for you (or the OVM tooling)
> vs. it just being something you spotted by random chance?

No blocker. Just me diligently filing 'xend vs xl' issues
as I spot them.
> 
> So considering the guidelines George left[2]:
> 
> The patch contributes to an "awesome release" by allowing some set of
> existing xm configuration files to work as is, which is something we are
> keen on in order to continue to migrate users over. There is a
> workaround though (rewrite the cfg file to use the other syntax, which
> works), which although falling short of our desire for this to "just
> work" is not immensely complex to apply.
> 
> The potential risk is that this breaks the existing vfb syntax which
> works, or it breaks the hvm stuff. This is of particular concern because
> I don't think any of that is covered by osstest (except perhaps hvm
> console, but that might only be on some other error when osstest takes a
> screenshot), so the probability of finding it before release is reliant
> on manual testing/test days/user testing etc.
> Are you happy that all the existing options keep working?
> 
> The patch is big enough that it isn't "obviously correct". The patch
> is textually large because it contains two refactorings
> (ARRAY_EXTEND_INIT and parse_top_level_vnc_options). ARRAY_EXTEND_INIT
> had pretty comprehensive review from me and Ian J as it was constructed.
> parse_top_... is really just code motion (although I'm a bit concerned
> that the hvm callsite has moved too). With my RM hat off and my
> maintainer hat on I've reviewed it and it looks good. (RM hat back on)
> 
> So, I think I'm a bit borderline but slightly on the side of accepting
> it for 4.4 unless anyone has a counter opinion.
> 
> Ian.
> 
> [1] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Meta-items_.28composed_of_other_items.29
> [2] http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> 
> 
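For reference, the two spellings under discussion can be sketched as xl config fragments (illustrative values; option names follow the usual xl.cfg conventions, not copied from the patch):

```
# Top-level syntax (previously xend-only): with this patch, xl
# synthesises a VFB for a PV guest from these options when no
# vfb=[...] stanza is present.
vnc = 1
vnclisten = "0.0.0.0"

# Equivalent explicit workaround that already worked:
vfb = [ 'vnc=1,vnclisten=0.0.0.0' ]
```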

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:49:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:49:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EIp-0003En-V3; Mon, 06 Jan 2014 17:49:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1W0EIo-0003EP-Nd
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:49:26 +0000
Received: from [85.158.143.35:59075] by server-3.bemta-4.messagelabs.com id
	C2/82-32360-6ACEAC25; Mon, 06 Jan 2014 17:49:26 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389030564!9880042!1
X-Originating-IP: [209.85.219.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17508 invoked from network); 6 Jan 2014 17:49:25 -0000
Received: from mail-oa0-f51.google.com (HELO mail-oa0-f51.google.com)
	(209.85.219.51)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:49:25 -0000
Received: by mail-oa0-f51.google.com with SMTP id m1so1432114oag.24
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 09:49:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=+D/HA49EQwovC2hLhhXIsWNOAasJph8QRBTkQDHyPvE=;
	b=V9QWZnW8Y5SwwATTw+aqGqufv+fcYvH91TX9do2+bRweMiCP0TceRo7S5gA843JPQ9
	wRpe6H8xDXyWigI1Orx/pXW/Iu/as1b6RwgKxXv481WCnhOYm/zi6MgnMxOEDxihEKD4
	99N5wGUp7nRfEhjN90y/e04nEyZbX4gqxfv5/XJAdpNhkQxF/UCdc8RSzLJpdaki/A0W
	fG8Yb2TG3wEFYA98Z2WPqSaVB5XFtYAZaSVb7ySSXKYXKMgRutKZJ2W82PCq5Bo3ZMS7
	UrE+t9wlg1qBIdLcGq/Hh0cKobz1ozkFbm6o7zoHHPWzNBQMLYCCz023zq/gCeyWbwyl
	002Q==
X-Gm-Message-State: ALoCoQnU2NE6hgsT3qLb89ERSj1oq5hHR77D+3OvfLXqeqK8LixrEBproaFIOI4YT9y9bdTAVfel
MIME-Version: 1.0
X-Received: by 10.60.97.36 with SMTP id dx4mr6703460oeb.40.1389030563646; Mon,
	06 Jan 2014 09:49:23 -0800 (PST)
Received: by 10.60.232.71 with HTTP; Mon, 6 Jan 2014 09:49:23 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401061546480.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<alpine.DEB.2.02.1401061546480.8667@kaball.uk.xensource.com>
Date: Mon, 6 Jan 2014 09:49:23 -0800
Message-ID: <CA+aC4kukrj8wNTXvEcwDNkCuVUhUVvuAg+VOHX+D3bsYnkq6Eg@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 6, 2014 at 7:57 AM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Jan 2014, Anthony Liguori wrote:
>> On Jan 6, 2014 6:55 AM, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com> wrote:
>> >
>> > On Mon, 6 Jan 2014, Anthony Liguori wrote:
>> > > On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
>> > > >
>> > > > On 6 January 2014 14:17, Stefano Stabellini
>> > > > <stefano.stabellini@eu.citrix.com> wrote:
>> > > > > It doesn't do any emulation so it is not specific to any architecture or
>> > > > > any cpu.
>> > > >
>> > > > You presumably still care about the compiled in values of
>> > > > TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...
>> >
>> > Actually it only uses XC_PAGE_SIZE and the endianness is the host
>> > endianness.
>>
>> If blkif in QEMU is relying on host endianness, that's a bug.
>
> Why? Xen doesn't support a different guest/host endianness.

For the same reason that the virtio devices do not rely on host endianness.

It should be possible to use the Xen devices with TCG.  It isn't
possible today simply because the code wasn't structured that way, but
it could be refactored to allow it.

>> > > Yup.  It's still accel=xen just with no VCPUs.
>> >
>> > Are you talking about introducing accel=xen to Wei's target-null?
>> > I guess that would work OK.
>>
>> We already have accel=xen.  I'm echoing Peter's suggestion of having the ability to compile out accel=tcg.
>>
>> >
>> > On the other hand if you are thinking of avoiding the introduction of a
>> > new target-null, how would you make xen_machine_pv.c available to
>> > multiple architectures?
>>
>> Why does qdisk need a full machine?
>
> qdisk is just one device, xen_machine_pv is the machine that initializes
> the backend infrastructure (one of the backends is qdisk).
> It doesn't make sense to use a full-blown machine like pc.c just to
> start a few backends, right?

What prevents xen_machine_pv from being compiled for multiple targets?
If there are i386-specific things in it, they could surely be refactored
a bit, no?

Regards,

Anthony Liguori

>
>
>> How would you avoid the compilation of all the
>> > unnecessary emulated devices?
>>
>> Device config files.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:56:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EPg-0003Rw-U9; Mon, 06 Jan 2014 17:56:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0EPf-0003Rr-KR
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 17:56:31 +0000
Received: from [85.158.143.35:31520] by server-2.bemta-4.messagelabs.com id
	BF/3E-11386-E4EEAC25; Mon, 06 Jan 2014 17:56:30 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389030989!9964710!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24372 invoked from network); 6 Jan 2014 17:56:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 17:56:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="87982736"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 17:56:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 12:56:28 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0EPc-00037L-8F;
	Mon, 06 Jan 2014 17:56:28 +0000
Date: Mon, 6 Jan 2014 17:56:28 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20140106175628.GD10654@zion.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
	<21194.59357.835065.559486@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <21194.59357.835065.559486@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 05:29:01PM +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH V4] xl: create VFB for PV guest when VNC is specified"):
> > And now George has gone away and left me holding the can, smart move on
> > his part ;-)
> 
> I have also reviewed this patch again.
> 
> Firstly,
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Now, onto the RM questions.
> 
> > The potential risk is that this breaks the existing vfb syntax which
> > works, or it breaks the hvm stuff. This is of particular concern because
> > I don't think any of that is covered by osstest (except perhaps hvm
> > console, but that might only be on some other error when osstest takes a
> > screenshot), so the probability of finding it before release is reliant
> > on manual testing/test days/user testing etc.
> > Are you happy that all the existing options keep working?
> 
> Wei, could you tell us what configuration(s) you tested ?
> 

#1 PV guest (to see if it causes regression):
vfb = ['vnc=1,vncunused=1,vnclisten=0.0.0.0']

#2 PV guest (to see if it fixes the bug):
vnc=1
vncunused=1
vnclisten='0.0.0.0'

#3 PV guest (to test if it will create more than 1 VFB):
vfb = ['vnc=1,vncunused=1,vnclisten=0.0.0.0']
vnc=1
vncunused=1
vnclisten='0.0.0.0'

#4 HVM guest (to see if it causes regression):
vnc=1
vncunused=1
vnclisten='0.0.0.0'

#5 HVM guest (to see if xl erroneously parses vfb= for HVM):
vfb = ['vnc=1,vncunused=1,vnclisten=0.0.0.0']

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 17:57:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 17:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EQq-0003Wb-Di; Mon, 06 Jan 2014 17:57:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0EQp-0003WS-QZ
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 17:57:44 +0000
Received: from [85.158.143.35:57537] by server-3.bemta-4.messagelabs.com id
	9B/49-32360-79EEAC25; Mon, 06 Jan 2014 17:57:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389031061!9963699!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21666 invoked from network); 6 Jan 2014 17:57:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 17:57:42 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06HvdTa010906
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Mon, 6 Jan 2014 17:57:40 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06HvcVQ024508
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Mon, 6 Jan 2014 17:57:39 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06HvcJj006974
	for <xen-devel@lists.xenproject.org>; Mon, 6 Jan 2014 17:57:38 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 09:57:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7FAF31C18DC; Mon,  6 Jan 2014 12:57:13 -0500 (EST)
Date: Mon, 6 Jan 2014 12:57:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org
Message-ID: <20140106175713.GB28636@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] xend vs xl with pci=['<bdf'] wherein the '<bdf>' are
 not owned by pciback or pcistub will still launch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In Xend, if I had a pci entry in the guest config and the 
PCI device was not assigned to xen-pciback or pci-stub it
would refuse to launch the guest.

Not so with 'xl'. It will complain but still launch:

-bash-4.1# cd drivers/pciback/
-bash-4.1# ls
0000:01:00.0  0000:03:08.1  0000:03:0a.0  0000:03:0b.1       irq_handlers  new_slot    remove_id    uevent
0000:01:00.1  0000:03:09.0  0000:03:0a.1  bind               module        permissive  remove_slot  unbind
0000:03:08.0  0000:03:09.1  0000:03:0b.0  irq_handler_state  new_id        quirks      slots
-bash-4.1# echo "0000:03:0b.0" > unbind
-bash-4.1# echo "0000:03:0b.1" > unbind
-bash-4.1# xl create /mnt/lab/security/security.cfg  
Parsing config from /mnt/lab/security/security.cfg
libxl: error: libxl_pci.c:1055:libxl__device_pci_add: PCI device 0:3:b.0 is not assignable
libxl: error: libxl_pci.c:1055:libxl__device_pci_add: PCI device 0:3:b.1 is not assignable
-bash-4.1# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     4     r-----      14.7
security                                     1  1023     1     -b----       8.0
-bash-4.1# 
-bash-4.1# cat /mnt/lab/security/security.cfg |grep -v \#
device_model_version="qemu-xen-traditional"
builder="hvm"
memory = 1024
name = "security"
vcpus=1
vif = [ 'mac=00:0F:4B:00:00:84,bridge=switch' ]
disk = [ 'phy:/dev/sda,xvda,w' ]
pci= ['0000:03:08.0', '000:03:08.1', '0000:03:09.0', '0000:03:09.1', '0000:03:0a.0', '0000:03:0a.1', '0000:03:0b.0', '0000:03:0b.1']
vnc=1
vnclisten='0.0.0.0'
vncunused=1
serial="pty"


And naturally when shutting/destroying the guest it will say:
-bash-4.1# xl destroy 1
libxl: error: libxl_pci.c:1265:do_pci_remove: xc_deassign_device failed: No such device
libxl: error: libxl_pci.c:1265:do_pci_remove: xc_deassign_device failed: No such device

(XEN) [2014-01-06 17:54:39] deassign 0000:03:0b.0 from dom1 failed (-19)
(XEN) [2014-01-06 17:54:39] deassign 0000:03:0b.1 from dom1 failed (-19)

because it tries to deassign them even though they were never
assigned to the guest in the first place.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:01:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EU0-0003zN-Bi; Mon, 06 Jan 2014 18:01:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <afaerber@suse.de>) id 1W0ETy-0003zI-Hp
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:00:58 +0000
Received: from [85.158.137.68:25222] by server-12.bemta-3.messagelabs.com id
	10/93-20055-95FEAC25; Mon, 06 Jan 2014 18:00:57 +0000
X-Env-Sender: afaerber@suse.de
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389031257!3864233!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14455 invoked from network); 6 Jan 2014 18:00:57 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 18:00:57 -0000
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id D7671AC48;
	Mon,  6 Jan 2014 18:00:56 +0000 (UTC)
Message-ID: <52CAEF54.7030901@suse.de>
Date: Mon, 06 Jan 2014 19:00:52 +0100
From: =?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Organization: SUSE LINUX Products GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Anthony Liguori <anthony@codemonkey.ws>, 
	Paolo Bonzini <pbonzini@redhat.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>	<1389014715.19378.8.camel@hamster.uk.xensource.com>	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
In-Reply-To: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Am 06.01.2014 16:39, schrieb Anthony Liguori:
> We already have accel=xen.  I'm echoing Peter's suggestion of having the
> ability to compile out accel=tcg.

Didn't you and Paolo even have patches for that a while ago?

Cheers,
Andreas

-- 

SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:01:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:01:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EUP-00047r-24; Mon, 06 Jan 2014 18:01:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W0EUN-00047b-OI
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:01:23 +0000
Received: from [193.109.254.147:29492] by server-11.bemta-14.messagelabs.com
	id 9F/21-20576-37FEAC25; Mon, 06 Jan 2014 18:01:23 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389031261!9157142!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29552 invoked from network); 6 Jan 2014 18:01:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 18:01:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="87984561"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 18:01:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 13:01:00 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W0EU0-0003CQ-5T;
	Mon, 06 Jan 2014 18:01:00 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>
Date: Mon, 6 Jan 2014 18:00:41 +0000
Message-ID: <1389031241-3429-1-git-send-email-anthony.perard@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [Xen-devel] [PATCH V3 RESEND] firmware: Change level-triggered GPE
	event to an edge one for qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This should help reduce a CPU hotplug race window in which a CPU hotplug
event will not be seen by the OS.

When hotplugging more than one vcpu, some of those vcpus might not be
seen as plugged by the guest.

This is what is currently happening:

1. hw adds cpu, sets GPE.2 bit and sends SCI
2. OSPM gets SCI, reads GPE00.sts and masks GPE.2 bit in GPE00.en
3. OSPM executes _L02 (level-triggered event associated with cpu hotplug)
4. hw adds second cpu and sets GPE.2 bit but SCI is not asserted
    since GPE00.en masks event
5. OSPM resets GPE.2 bit in GPE00.sts and unmasks it in GPE00.en

As a result, the event from step 4 is lost because step 5 clears it, and
the OS will not see the added second cpu.

The ACPI 5.0 spec (5.6.4 General-Purpose Event Handling)
defines GPE event handling as follows:

1. Disables the interrupt source (GPEx_BLK EN bit).
2. If an edge event, clears the status bit.
3. Performs one of the following:
* Dispatches to an ACPI-aware device driver.
* Queues the matching control method for execution.
* Manages a wake event using device _PRW objects.
4. If a level event, clears the status bit.
5. Enables the interrupt source.

So, by using an edge-triggered General-Purpose Event instead of a
level-triggered GPE, OSPM is less likely to clear the status bit set by
the addition of the second CPU. At step 5, QEMU will resend an interrupt
if the status bit is set.
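For contrast, a toy edge-triggered counterpart: the status bit is cleared before the handler runs (spec step 2), and the device model re-raises the interrupt if the bit is set again once the event is re-enabled, as the commit message says qemu-xen does at step 5. Again, all names here are invented for illustration:

```python
# Toy edge-triggered GPE handling: the second hotplug event survives.

state = {"sts": False, "en": True, "plugged": 0}

def hw_hotplug():
    state["sts"] = True               # hardware sets GPE.2

def handle_edge(hotplug_during=None):
    state["en"] = False               # step 1: mask the event
    state["sts"] = False              # step 2: edge event, clear status now
    state["plugged"] += 1            # step 3: run _E02, notice one CPU
    if hotplug_during:
        hotplug_during()              # second CPU arrives mid-handler
    state["en"] = True                # step 5: unmask
    if state["sts"]:                  # device model resends the interrupt
        handle_edge()                 # the second event is serviced

hw_hotplug()
handle_edge(hotplug_during=hw_hotplug)
assert state["plugged"] == 2          # no event lost
```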

This description also applies to PCI hotplug, since the same steps are
followed by QEMU, so we also change the GPE event type for PCI hotplug.

This does not apply to qemu-xen-traditional because it does not resend
an interrupt, when necessary, as a result of step 5.

Patch and description inspired by SeaBIOS's commit:
Replace level gpe event with edge gpe event for hot-plug handlers
9c6635bd48d39a1d17d0a73df6e577ef6bd0037c
from Igor Mammedov <imammedo@redhat.com>

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
Change in V3:
  - description for: does not apply to qemu-dm
Change in V2:
  - better patch comment:
    patch does not fix race, but reduce the window
    include patch description of the quoted commit
  - change also apply to pci hotplug.
---
 tools/firmware/hvmloader/acpi/mk_dsdt.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index 996f30b..a4b693b 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -220,9 +220,13 @@ int main(int argc, char **argv)
 
     pop_block();
 
-    /* Define GPE control method '_L02'. */
+    /* Define GPE control method. */
     push_block("Scope", "\\_GPE");
-    push_block("Method", "_L02");
+    if (dm_version == QEMU_XEN_TRADITIONAL) {
+        push_block("Method", "_L02");
+    } else {
+        push_block("Method", "_E02");
+    }
     stmt("Return", "\\_SB.PRSC()");
     pop_block();
     pop_block();
@@ -428,7 +432,7 @@ int main(int argc, char **argv)
         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
         pop_block();
     } else {
-        push_block("Method", "_L01");
+        push_block("Method", "_E01");
         for (slot = 1; slot <= 31; slot++) {
             push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
             stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
-- 
Anthony PERARD


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EXf-0004O1-Fa; Mon, 06 Jan 2014 18:04:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0EXe-0004Nt-8Y
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 18:04:46 +0000
Received: from [193.109.254.147:5556] by server-1.bemta-14.messagelabs.com id
	2D/82-15600-D30FAC25; Mon, 06 Jan 2014 18:04:45 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389031483!9144097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10955 invoked from network); 6 Jan 2014 18:04:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 18:04:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="87986334"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 18:04:43 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 6 Jan 2014
	13:04:43 -0500
Message-ID: <52CAF039.1050505@citrix.com>
Date: Mon, 6 Jan 2014 18:04:41 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1386683820-9834-1-git-send-email-david.vrabel@citrix.com>	<1386683820-9834-2-git-send-email-david.vrabel@citrix.com>
	<52A73830020000780010BE4E@nat28.tlf.novell.com>
	<52AF03EE.1010202@citrix.com>
In-Reply-To: <52AF03EE.1010202@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 1/2] evtchn/fifo: initialize priority when
 events are bound
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/12/13 13:45, David Vrabel wrote:
> On 10/12/13 14:50, Jan Beulich wrote:
>>>>> On 10.12.13 at 14:56, David Vrabel <david.vrabel@citrix.com> wrote:
>>> From: David Vrabel <david.vrabel@citrix.com>
>>>
>>> Event channel ports that are reused or that were not in the initial
>>> bucket would have a non-default priority.
>>>
>>> Add an init evtchn_port_op hook and use this to set the priority when
>>> an event channel is bound.
>>>
>>> Within this new evtchn_fifo_init() call, also check if the event is
>>> already on a queue and print a warning, as this event may have its
>>> first event delivered on a queue with the wrong VCPU or priority.
>>> This guest is expected to prevent this (if it cares) by not unbinding
>>> events that are still linked.
>>>
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>
>>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Keir,  if you hadn't noticed, this touches common code and thus needs
> your ack.

Ping?

> I would like to see these fixes go in prior to the first RC as without
> patch 2, the FIFO event channels don't really work if irqbalanced is
> used (which seems to be a common setup[1]).
> 
> David
> 
> [1] Except my main test box for some reason...

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:05:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EYg-0004TZ-Ud; Mon, 06 Jan 2014 18:05:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0EYf-0004TH-CT
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:05:49 +0000
Received: from [193.109.254.147:33282] by server-9.bemta-14.messagelabs.com id
	6E/17-13957-C70FAC25; Mon, 06 Jan 2014 18:05:48 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389031546!9144259!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15246 invoked from network); 6 Jan 2014 18:05:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 18:05:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="87986708"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Jan 2014 18:05:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 13:05:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0EYb-0003Fh-OI;
	Mon, 06 Jan 2014 18:05:45 +0000
Date: Mon, 6 Jan 2014 18:04:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4kukrj8wNTXvEcwDNkCuVUhUVvuAg+VOHX+D3bsYnkq6Eg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401061804100.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<alpine.DEB.2.02.1401061546480.8667@kaball.uk.xensource.com>
	<CA+aC4kukrj8wNTXvEcwDNkCuVUhUVvuAg+VOHX+D3bsYnkq6Eg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Anthony Liguori wrote:
> On Mon, Jan 6, 2014 at 7:57 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 6 Jan 2014, Anthony Liguori wrote:
> >> On Jan 6, 2014 6:55 AM, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com> wrote:
> >> >
> >> > On Mon, 6 Jan 2014, Anthony Liguori wrote:
> >> > > On Jan 6, 2014 6:23 AM, "Peter Maydell" <peter.maydell@linaro.org> wrote:
> >> > > >
> >> > > > On 6 January 2014 14:17, Stefano Stabellini
> >> > > > <stefano.stabellini@eu.citrix.com> wrote:
> >> > > > > It doesn't do any emulation so it is not specific to any architecture or
> >> > > > > any cpu.
> >> > > >
> >> > > > You presumably still care about the compiled in values of
> >> > > > TARGET_WORDS_BIGENDIAN, TARGET_LONG_SIZE, and so on...
> >> >
> >> > Actually it only uses XC_PAGE_SIZE and the endianness is the host
> >> > endianness.
> >>
> >> If blkif in QEMU is relying on host endianness, that's a bug.
> >
> > Why? Xen doesn't support a different guest/host endianness.
> 
> For the same reason that the virtio devices do not rely on host endianness.
> 
> It should be possible to use the Xen devices with TCG.  It isn't today
> simply because the code wasn't structured that way but it could be
> refactored to do this.
> 
> >> > > Yup.  It's still accel=xen just with no VCPUs.
> >> >
> >> > Are you talking about introducing accel=xen to Wei's target-null?
> >> > I guess that would work OK.
> >>
> >> We already have accel=xen.  I'm echoing Peter's suggestion of having the ability to compile out accel=tcg.
> >>
> >> >
> >> > On the other hand if you are thinking of avoiding the introduction of a
> >> > new target-null, how would you make xen_machine_pv.c available to
> >> > multiple architectures?
> >>
> >> Why does qdisk need a full machine?
> >
> > qdisk is just one device, xen_machine_pv is the machine that initializes
> > the backend infrastructure (one of the backends is qdisk).
> > It doesn't make sense to use a full-blown machine like pc.c just to
> > start a few backends, right?
> 
> What prevents xen_machine_pv from being compiled for multiple targets?
>  If there are i386-specific things in it, they surely could be refactored
> a bit, no?

Not at all, but I thought that at the moment one machine has to be tied
to one architecture. If I am wrong, then there is no issue.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:06:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0EZj-0004aJ-Eb; Mon, 06 Jan 2014 18:06:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0EZh-0004a9-BI
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:06:53 +0000
Received: from [85.158.137.68:64579] by server-12.bemta-3.messagelabs.com id
	DF/39-20055-CB0FAC25; Mon, 06 Jan 2014 18:06:52 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389031611!3869692!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13308 invoked from network); 6 Jan 2014 18:06:51 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 18:06:51 -0000
Received: by mail-lb0-f181.google.com with SMTP id q8so10004430lbi.40
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 10:06:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=VqTKbsxySlkuCNpgm7oVKgNo66rysaJSiV5ab/x9IAI=;
	b=Hs7So2tvHFl4fuCw3fIXNOEysyXrEmawOrYbqqNXtnFKGjJVZ8wZ2yo9EQjUiv7wDP
	tsQjyHusjESSzagQWO5TcSsH7Hr+RPAOHceXwlqgwQuj0tjatFsADHUXjYejwCOC0EHh
	2VDMzKtl429sFEW410NLwK4aYvkkoBHlvAc+VYhospxpOd/gDT4LNXS2j9FsKui364cv
	6929xBK2vqBYLPj4h9533Ssz/rtdRAAhQeUH2x+8s1mhuFVR+DpE+HfGmjCfTMJUxKSZ
	mhvn3bTAvj0X9ObYjZfbjeVt8KmKl+l6usU+0JXbt1tCWto2l+kkywgBiZKHaV3kzBF1
	XMDA==
X-Gm-Message-State: ALoCoQlAHY64q/U6qh2lbfS1MzqYD6xNEeXUkXnGQFtcZLUD9Axznj/UvPm/bbiBPFKPiLQ5TmSm
X-Received: by 10.152.23.39 with SMTP id j7mr1672201laf.28.1389031611195; Mon,
	06 Jan 2014 10:06:51 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Mon, 6 Jan 2014 10:06:31 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Mon, 6 Jan 2014 18:06:31 +0000
Message-ID: <CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 6 January 2014 17:34, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Jan 2014, Peter Maydell wrote:
>> However I don't think we can have a qemu-system-null
>> (regardless of use cases) until/unless we get rid of
>> all the things which are compile-time decided by
>> the system config. In an ideal world we wouldn't have
>> any of those (and perhaps you could even build
>> support for more than one kind of CPU into one QEMU
>> binary), but as it is we do have them, and so a
>> qemu-system-null is not possible.
>
> What are these compile-time things you are referring to?

The identifiers poisoned by include/qemu/poison.h are
an initial but not complete list. Host and target
endianness is a particularly obvious one, as is the
size of a target long. You may not use these things
in your Xen devices, but "qemu-system-null" implies
more than "weird special purpose thing which only
has Xen devices in it".

Speaking of odd Xen special cases, it's pretty ugly
that builds with Xen enabled override the size of
ram_addr_t... maybe we should get rid of that special
casing one way or the other.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:12:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Ees-00059x-LF; Mon, 06 Jan 2014 18:12:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <afaerber@suse.de>) id 1W0Eeq-00059s-IH
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:12:12 +0000
Received: from [85.158.137.68:47606] by server-2.bemta-3.messagelabs.com id
	C9/69-17329-BF1FAC25; Mon, 06 Jan 2014 18:12:11 +0000
X-Env-Sender: afaerber@suse.de
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389031930!3865865!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14509 invoked from network); 6 Jan 2014 18:12:11 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 18:12:11 -0000
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id D4D4FAC49;
	Mon,  6 Jan 2014 18:12:10 +0000 (UTC)
Message-ID: <52CAF1F7.80304@suse.de>
Date: Mon, 06 Jan 2014 19:12:07 +0100
From: =?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Organization: SUSE LINUX Products GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>	<CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
	<20140106151251.GB10654@zion.uk.xensource.com>
In-Reply-To: <20140106151251.GB10654@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06.01.2014 16:12, Wei Liu wrote:
> On Mon, Jan 06, 2014 at 01:30:20PM +0000, Peter Maydell wrote:
>> On 6 January 2014 12:54, Wei Liu <wei.liu2@citrix.com> wrote:
> >>> In fact I've already hacked a prototype during Christmas. What I've
>>> done so far:
>>>
>>> 1. create target-null which only has some stubs to CPU emulation
>>>    framework.
>>>
>>> 2. add a few lines to configure / Makefiles*, create
>>>    default-configs/null-softmmu
>>
>> I think it would be better to add support to allow you to
>> configure with --disable-tcg. This would match the existing
>> --disable/--enable switches for KVM and Xen, and then you
>> could configure --disable-kvm --disable-tcg --enable-xen
>> and get a qemu-system-i386 or qemu-system-arm with only
>> the Xen support and none of the TCG emulation code.
>>
> 

> In this case the architecture-specific code in target-* is still
> included which might not help reduce the size much.

Define target-specific code in target-*? Most of that is TCG-specific
and in that case wouldn't be compiled in. The KVM-specific bits already
don't get compiled in with --disable-kvm today, save for a few stubs.

Adding yet another separate binary with no added functional value
doesn't strike me as the most helpful idea for the community, compared
to configure-optimizing the binaries built today.
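[Editorial note: concretely, the configure-time trimming under discussion would look something like the following. This is a sketch: --disable-tcg is the switch Peter proposes, not one that exists at this point in the thread, while --enable-xen and --disable-kvm are existing switches.]

```shell
# Hypothetical build of a Xen-only qemu-system-i386, assuming a
# --disable-tcg switch were added alongside the existing KVM/Xen ones.
./configure --target-list=i386-softmmu \
            --enable-xen --disable-kvm --disable-tcg
make -j"$(nproc)"
```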

Who would use the stripped-down binaries anyway? Just Citrix? Because
SUSE is headed for sharing QEMU packages between Xen and KVM, so we
couldn't enable such Xen-only-optimized binaries.

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:26:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Es0-0005fH-Dm; Mon, 06 Jan 2014 18:25:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0Ery-0005fC-QT
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:25:47 +0000
Received: from [85.158.143.35:47909] by server-3.bemta-4.messagelabs.com id
	91/3F-32360-925FAC25; Mon, 06 Jan 2014 18:25:45 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389032742!9885976!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11360 invoked from network); 6 Jan 2014 18:25:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 18:25:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="90165326"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Jan 2014 18:25:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 6 Jan 2014 13:25:41 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0Ert-0003XT-43;
	Mon, 06 Jan 2014 18:25:41 +0000
Date: Mon, 6 Jan 2014 18:25:41 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Message-ID: <20140106182541.GE10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
	<20140106151251.GB10654@zion.uk.xensource.com>
	<52CAF1F7.80304@suse.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CAF1F7.80304@suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 07:12:07PM +0100, Andreas F=E4rber wrote:
> Am 06.01.2014 16:12, schrieb Wei Liu:
> > On Mon, Jan 06, 2014 at 01:30:20PM +0000, Peter Maydell wrote:
> >> On 6 January 2014 12:54, Wei Liu <wei.liu2@citrix.com> wrote:
> >>> In fact I've already hacked a prototype during Christmas. What I've
> >>> done so far:
> >>>
> >>> 1. create target-null which only has some stubs to CPU emulation
> >>>    framework.
> >>>
> >>> 2. add a few lines to configure / Makefiles*, create
> >>>    default-configs/null-softmmu
> >>
> >> I think it would be better to add support to allow you to
> >> configure with --disable-tcg. This would match the existing
> >> --disable/--enable switches for KVM and Xen, and then you
> >> could configure --disable-kvm --disable-tcg --enable-xen
> >> and get a qemu-system-i386 or qemu-system-arm with only
> >> the Xen support and none of the TCG emulation code.
> >>
> >

> > In this case the architecture-specific code in target-* is still
> > included which might not help reduce the size much.
>

> Define target-specific code in target-*? Most of that is TCG-specific
> and wouldn't be compiled in in that case. The KVM-specific bits don't
> get compiled in with --disable-kvm today already save for a few stubs.
>


Probably I used the wrong terminology. I meant, say,
target-i386/translate.c, exec.c, etc., which won't be necessary for
Xen. I guess that's what you mean by TCG-specific. I see the
possibility of creating some stubs for them, if that's what you mean.

> Adding yet another separate binary with no added functional value
> doesn't strike me as the most helpful idea for the community, compared
> to configure-optimizing the binaries built today.
>

> Who would use the stripped-down binaries anyway? Just Citrix? Because
> SUSE is headed for sharing QEMU packages between Xen and KVM, so we
> couldn't enable such Xen-only-optimized binaries.
>


No, I don't speak for Citrix. I work on the open-source Xen project; I
just happen to be employed by Citrix. The general idea is to give users
an option to create a smaller binary, without the unnecessary code
compiled / linked in. How vendors make their choices is out of my
reach. :-)

Wei.

> Regards,
> Andreas
>

> --

> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:30:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Ew5-00068H-71; Mon, 06 Jan 2014 18:30:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1W0Ew4-00068B-1j
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 18:30:00 +0000
Received: from [85.158.137.68:39915] by server-7.bemta-3.messagelabs.com id
	BE/B2-27599-726FAC25; Mon, 06 Jan 2014 18:29:59 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389032998!7520423!1
X-Originating-IP: [80.160.77.114]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3LjExNCA9PiAxNjYyOTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17020 invoked from network); 6 Jan 2014 18:29:58 -0000
Received: from pasmtpa.tele.dk (HELO pasmtpA.tele.dk) (80.160.77.114)
	by server-6.tower-31.messagelabs.com with SMTP;
	6 Jan 2014 18:29:58 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpA.tele.dk (Postfix) with ESMTP id 139C48000D6;
	Mon,  6 Jan 2014 19:29:57 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Mon, 6 Jan 2014 19:28:30 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Mon, 6 Jan 2014 19:28:29 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: James Harper <james.harper@bendigoit.com.au>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer,	since it
	is not compiled
Thread-Index: AQHPCdViuyuSJetUF0eAsYrZ1uXS0pp2cbwAgAAa0oCAAV5XAA==
Date: Mon, 6 Jan 2014 18:28:29 +0000
Message-ID: <f1e201f7cd014619b6a786ec4d53367a@hagsted-cserver.hagsted.dk>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
	<d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F359BBB@BITCOM1.int.sbss.com.au>
In-Reply-To: <6035A0D088A63A46850C3988ED045A4B6F359BBB@BITCOM1.int.sbss.com.au>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >
> > After applying the patch, the win8 driver compilation gives the
> > following error and warning:
> >   1>xenvbd.c(463): error C2220: warning treated as error - no 'object'
> > file generated
> >   1>xenvbd.c(463): warning C4152: nonstandard extension, function/data
> > pointer conversion in expression
> >
> > The line in question is the following:
> >   HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;
> >
> > XenVbd_HwStorFindAdapter is the data structure which you have
> > corrected a few lines in, in the patch. As it is a level 4 warning, it
> > can be ignored by setting /W3 in the MSC_WARNING_LEVEL. However I
> > suspect that it would be preferred to find the cause of the warning.
> 
> That would imply that my function definition doesn't match the expected
> function definition in the HW_INITIALIZATION_DATA structure, but according
> to the docs I have everything right. Can you check the storport headers and
> check the declaration there against my function?

For Windows 8 and newer, HwFindAdapter is declared as:
  PVOID                 HwFindAdapter;
while for earlier versions of Windows it is declared as:
  PHW_FIND_ADAPTER      HwFindAdapter;

Hope this helps you.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:54:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FJF-0006za-6u; Mon, 06 Jan 2014 18:53:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>)
	id 1W0FJD-0006zS-4c; Mon, 06 Jan 2014 18:53:55 +0000
Received: from [85.158.143.35:42853] by server-1.bemta-4.messagelabs.com id
	91/D0-02132-2CBFAC25; Mon, 06 Jan 2014 18:53:54 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389034433!9981448!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16097 invoked from network); 6 Jan 2014 18:53:53 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 18:53:53 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389034433; l=5349;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=NlZrL0pMuQalIo0Kfp4cTrVyMEQ=;
	b=Ac9rRXB4wJH9PWx3gkhR8s2VtrGte2DPxRRqfJIZvMgAQFkNk+WEhWwOGZLMpi3ZbGp
	RgE9mrS4YWy9dObd1X7ebuUKLQFPUJJaJhBvv6sXOHiWXBvmc1Rpf4eK2wRkY7uwek8lU
	t9nY8hx197PzMbkMrrMDI3r8JFHxI3sB7H0=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJwKkjb5rbU3nEQtz+i
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-101-211.pools.arcor-ip.net [88.65.101.211])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id k04067q06IrrHPX ; 
	Mon, 6 Jan 2014 19:53:53 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id A522050245; Mon,  6 Jan 2014 19:53:52 +0100 (CET)
Date: Mon, 6 Jan 2014 19:53:52 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Russell Pavlicek <russell.pavlicek@citrix.com>
Message-ID: <20140106185352.GB2443@aepfle.de>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 23, Russell Pavlicek wrote:

> 22-Dec: [S:@:S]lars_kurth [S:@:S]RCPavlicek Hey guys, I wrote down as much as I
> could https://piratenpad.de/p/Ik3lOBLniq1L5TEM    (since I'm on holiday and not
> constant online)


> [I've taken the liberty of removing the colorful expletive from the final post]

Thanks for that.

Quoting the above document:

> Docs issues:
> Most accessible documentation is from SUSE manual
> 
>      written for Xen 4.1/xm toolstack

The work was done for SLES11SP2.

>     xenpaging at least on alpine isn't in $PATH,

Yes, it's a helper for the toolstack, just like qemu-dm.
 
>     actual path /usr/lib/xen/bin/xenpaging is weird

It's not weird, because it's not supposed to be called manually.
 
> Google turns up the SLES docs, the commit and the "how does one test this" thread
> SLES docs don't mention experimental, but mailing list threads etc, do.

I did some testing during that time, and xenpaging works well enough
with SP2 and SP3 and their xend-based toolstack.

> two years after release it would be a good time to have docs + testing sorted

Yes, but that did not happen, due to lack of time and resources and the
lack of integration into libxl.

> Looking for sources (last time, at least) doesn't show activity post-2012
> issues:
>  incoherent:       
> 
>     specify a KB "size" of the paging region
> 
>     but specify a number of pages to hold back
> 
> Decide for one thing, not two. Actually, let us specify any of it?

> I bet the system page size is queryable, so why not KB/MB/GB

The current page size can be obtained via syscalls.
It's up to the toolstack to drive xenpaging, so the actual admin
interface should be at that level. The values passed to the helper are
an internal detail of the toolstack.

> /var/lib/xen/xenpaging Path is a bit weird, too. 
> Should it rather be /var/lib/xen/paging in line with /var/lib/xen/save ...

That's a minor detail. I decided on a default path, so let's stick with
it.

> Where's docs how to define it in domU.cfg for XL stack?

There are no docs because there is no code and not even a usable
proposal how to specify the memory related properties for a domU. What
exists today is either populate-on-demand or a fixed amount of memory.
There was some discussion about two years ago how paging, sharing, PoD
and maybe even tmem could be described in a domU.cfg. Nothing was
decided.

> Sles docs say domU.cfg for xm stack was:
> xenpaging = NUMPAGES
> Sles docs basically say for cli use post-domU-creation
> xenpaging domID NUM
> Actual usage is:
> /usr/lib/xen/bin/xenpaging -f /var/lib/xen/xenpaging/freebsd2.page -
> d 79  -m 524288 

I don't think so. The docs say 'memory=N ; actmem=M', and mem-swap-target
to adjust actmem at runtime.
The old upstream docs/misc/xenpaging.txt is certainly wrong, simply
because no code exists to integrate it into the toolstack. 4.2+ is
slightly better.
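For the record, the 'memory=N ; actmem=M' syntax mentioned here would
look something like the following in a SLES-style domU.cfg (a sketch
with made-up values; consult the SLES docs for the real syntax):

```
# SLES-style domU.cfg fragment (hypothetical values)
memory = 2048    # full guest memory, in MB
actmem = 512     # resident target; the rest may be paged out by xenpaging
```

mem-swap-target then adjusts actmem for a running domU at runtime.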

> Q's:
> - Obviously, the file name handling when not manually setting it up

That did not parse.

> - How does one pull an overview of current xenpaging consumers?

xenpaging is not called by anything.

> - Is the paging file autodiscarded at domU termination?

It is supposed to be removed; xenpaging will get an event when the domid
disappears. In my testing this part works well.

> Testing / doc:
> - what happens if someone messes up and overcommits the xenpaging area

What does that question mean? Does it mean lack of disk space in dom0?

> - is there any protection against it?
> - will it block or crash?

The overall OOM handling is not very good. Once a guest starts, the
full amount of memory for a given domU is required. So if the whole
system is in an overcommit state and all guests restart, an OOM
situation occurs and some guests will not start anymore due to lack of
memory.

> Nested paging:
> Would be nice to document it's needed. This should be the like line 2
> of the docs actually. 

What is "nested paging"?

> Same goes for a note that it's only for HVM domUs, btw.

That's written down in docs/misc/xenpaging.txt.

> Us users don't really use HVM whenever possible, so this is an
> relevant info. It's moot to see the cool feature, mess around long
> since NODOCS and then find out the one use case you got won't work.
> Would be nicer to have s/w workaround for nested page tables if it's
> not available.  Why? On more modern (basically DDR3-Era) hosts we tend
> to be CPU-bound these days and have TONs of Ram.  For that I'd rather
> use tmem to blink out dup's and have compression fancyness instead of
> paging. (if that were tested & documented and known stable ;) If you'd
> use xenpaging on a current vm host you'd probably topside it if the
> memory is ever really allocated.  (think, 128GB ram, 16GB paging,
> allocate 16GB ... ouch, and it just makes no sense since RAM and
> Raid-1 SSD prices are not much different) It's the 4-core 8GB boxes
> and embedded where the paging is a good last resort (methinks) - but
> there you don't have nested page tables.

xenpaging is not a solution but rather a workaround. It can be used to
quickly swap out parts of a running guest to free host memory. It's not
predictable which pages a guest will access next, so it comes with a
high performance cost (even if the pagefile is in dom0's /dev/shm).

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 18:54:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 18:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FJF-0006za-6u; Mon, 06 Jan 2014 18:53:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>)
	id 1W0FJD-0006zS-4c; Mon, 06 Jan 2014 18:53:55 +0000
Received: from [85.158.143.35:42853] by server-1.bemta-4.messagelabs.com id
	91/D0-02132-2CBFAC25; Mon, 06 Jan 2014 18:53:54 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389034433!9981448!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16097 invoked from network); 6 Jan 2014 18:53:53 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 18:53:53 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389034433; l=5349;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=NlZrL0pMuQalIo0Kfp4cTrVyMEQ=;
	b=Ac9rRXB4wJH9PWx3gkhR8s2VtrGte2DPxRRqfJIZvMgAQFkNk+WEhWwOGZLMpi3ZbGp
	RgE9mrS4YWy9dObd1X7ebuUKLQFPUJJaJhBvv6sXOHiWXBvmc1Rpf4eK2wRkY7uwek8lU
	t9nY8hx197PzMbkMrrMDI3r8JFHxI3sB7H0=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJwKkjb5rbU3nEQtz+i
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-101-211.pools.arcor-ip.net [88.65.101.211])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id k04067q06IrrHPX ; 
	Mon, 6 Jan 2014 19:53:53 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id A522050245; Mon,  6 Jan 2014 19:53:52 +0100 (CET)
Date: Mon, 6 Jan 2014 19:53:52 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Russell Pavlicek <russell.pavlicek@citrix.com>
Message-ID: <20140106185352.GB2443@aepfle.de>
References: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <55E78A57290FB64FA0D3CF672F9F3DA211C793@SJCPEX01CL03.citrite.net>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Serious issues with xenpaging
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Dec 23, Russell Pavlicek wrote:

> 22-Dec: @lars_kurth @RCPavlicek Hey guys, I wrote down as much as I
> could https://piratenpad.de/p/Ik3lOBLniq1L5TEM    (since I'm on holiday and not
> constant online)


> [I've taken the liberty of removing the colorful expletive from the final post]

Thanks for that.

Quoting the above document:

> Docs issues:
> Most accessible documentation is from SUSE manual
> 
>      written for Xen 4.1/xm toolstack

The work was done for SLES11SP2.

>     xenpaging at least on alpine isn't in $PATH,

Yes, it's a helper for the toolstack, just like qemu-dm.
 
>     actual path /usr/lib/xen/bin/xenpaging is weird

It's not weird, because it's not supposed to be called manually.
 
> Google turns up the SLES docs, the commit and the "how does one test this" thread
> SLES docs don't mention experimental, but mailing list threads etc, do.

I did some testing during that time and xenpaging works well enough with SP2
and SP3 and their xend-based toolstack.

> two years after release it would be a good time to have docs + testing sorted

Yes, but that did not happen due to lack of time and resources, and lack of
integration into libxl.

> Looking for sources (last time, at least) doesn't show activity post-2012
> issues:
>  incoherent:       
> 
>     specify a KB "size" of the paging region
> 
>     but specify a number of pages to hold back
> 
> Decide for one thing, not two. Actually, let us specify any of it?

> I bet the system page size is queryable, so why not KB/MB/GB

The current page size can be obtained via syscalls.
It's up to the toolstack to drive xenpaging, so the actual admin
interface should be at that level. The values passed to the helper are
an internal detail of the toolstack.

> /var/lib/xen/xenpaging Path is a bit weird, too. 
> Should it rather be /var/lib/xen/paging in line with /var/lib/xen/save ...

That's a minor detail. I decided on a default path, so let's stick with
it.

> Where's docs how to define it in domU.cfg for XL stack?

There are no docs because there is no code, and not even a usable
proposal for how to specify the memory-related properties of a domU. What
exists today is either populate-on-demand or a fixed amount of memory.
There was some discussion about two years ago on how paging, sharing, PoD
and maybe even tmem could be described in a domU.cfg. Nothing was
decided.

> Sles docs say domU.cfg for xm stack was:
> xenpaging = NUMPAGES
> Sles docs basically say for cli use post-domU-creation
> xenpaging domID NUM
> Actual usage is:
> /usr/lib/xen/bin/xenpaging -f /var/lib/xen/xenpaging/freebsd2.page -
> d 79  -m 524288 

I don't think so. The docs say 'memory=N ; actmem=M', and mem-swap-target
to adjust actmem at runtime.
The old upstream docs/misc/xenpaging.txt is certainly wrong, simply because
no code exists to integrate it into the toolstack. 4.2+ is slightly
better.
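For illustration only, the xend-era settings described above might look like
this in a domU config (names taken from the paragraph; the exact syntax may
vary by SLES release, so treat this as a sketch rather than a reference):

```
memory = 2048     # maximum guest memory, in MB
actmem = 1024     # resident memory; the rest is handled by xenpaging
# adjusted at runtime with the mem-swap-target command
```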

> Q's:
> - Obviously, the file name handling when not manually setting it up

That did not parse.

> - How does one pull an overview of current xenpaging consumers?

xenpaging is not called by anything.

> - Is the paging file autodiscarded at domU termination?

It is supposed to be removed; xenpaging will get an event when the domid
disappears. In my testing this part works well.

> Testing / doc:
> - what happens if someone messes up and overcommits the xenpaging area

What does that question mean? Does it mean lack of disk space in dom0?

> - is there any protection against it?
> - will it block or crash?

The overall OOM handling is not very good. Once a guest starts, the full
amount of memory for a given domU is required. So if the whole system is
in an overcommit state and all guests restart, an OOM situation occurs
and some guests will not start anymore due to lack of memory.

> Nested paging:
> Would be nice to document it's needed. This should be the like line 2
> of the docs actually. 

What is "nested paging"?

> Same goes for a note that it's only for HVM domUs, btw.

That's written down in docs/misc/xenpaging.txt.

> Us users don't really use HVM whenever possible, so this is an
> relevant info. It's moot to see the cool feature, mess around long
> since NODOCS and then find out the one use case you got won't work.
> Would be nicer to have s/w workaround for nested page tables if it's
> not available.  Why? On more modern (basically DDR3-Era) hosts we tend
> to be CPU-bound these days and have TONs of Ram.  For that I'd rather
> use tmem to blink out dup's and have compression fancyness instead of
> paging. (if that were tested & documented and known stable ;) If you'd
> use xenpaging on a current vm host you'd probably topside it if the
> memory is ever really allocated.  (think, 128GB ram, 16GB paging,
> allocate 16GB ... ouch, and it just makes no sense since RAM and
> Raid-1 SSD prices are not much different) It's the 4-core 8GB boxes
> and embedded where the paging is a good last resort (methinks) - but
> there you don't have nested page tables.

xenpaging is not a solution but rather a workaround. It can be used to
quickly swap out parts of a running guest to free host memory. It's not
predictable which pages a guest will access next, so it comes with a high
performance cost (even if the pagefile is in dom0 /dev/shm/).

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:00:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FP0-0007YD-FZ; Mon, 06 Jan 2014 18:59:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W0FOz-0007Y5-A0
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 18:59:53 +0000
Received: from [193.109.254.147:7908] by server-9.bemta-14.messagelabs.com id
	ED/EE-13957-82DFAC25; Mon, 06 Jan 2014 18:59:52 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389034791!9134078!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3485 invoked from network); 6 Jan 2014 18:59:51 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-12.tower-27.messagelabs.com with SMTP;
	6 Jan 2014 18:59:51 -0000
Received: from jfehlig2.provo.novell.com (jfehlig2.dnsdhcp.provo.novell.com
	[137.65.132.22])
	by mail.novell.com with ESMTP; Mon, 06 Jan 2014 11:59:37 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Mon,  6 Jan 2014 11:59:28 -0700
Message-Id: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
Cc: Jim Fehlig <jfehlig@suse.com>, stefan.bader@canonical.com,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] libxl: Fix initialization of nictype in
	libxl_device_nic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As pointed out by the Xen folks [1], HVM nics should always be set
to type LIBXL_NIC_TYPE_VIF_IOEMU unless the user explicitly requests
LIBXL_NIC_TYPE_VIF via model='netfront'.  The current logic in
libxlMakeNic() only sets the nictype to LIBXL_NIC_TYPE_VIF_IOEMU if
a model is specified that is not 'netfront', which breaks PXE booting
configurations where no model is specified (i.e. use the hypervisor
default).

  Reported-by: Stefan Bader <stefan.bader@canonical.com>

[1] https://www.redhat.com/archives/libvir-list/2013-December/msg01156.html

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---

I toyed with detecting whether to use an IOEMU nic in libxlMakeNicList()
and passing the bool to libxlMakeNic(), but in the end left the detection
to libxlMakeNic().

 src/libxl/libxl_conf.c | 20 ++++++++++++++------
 src/libxl/libxl_conf.h |  4 +++-
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index aaeb00e..04d01af 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -855,8 +855,12 @@ error:
 }
 
 int
-libxlMakeNic(virDomainNetDefPtr l_nic, libxl_device_nic *x_nic)
+libxlMakeNic(virDomainDefPtr def,
+             virDomainNetDefPtr l_nic,
+             libxl_device_nic *x_nic)
 {
+    bool ioemu_nic = STREQ(def->os.type, "hvm");
+
     /* TODO: Where is mtu stored?
      *
      * x_nics[i].mtu = 1492;
@@ -866,12 +870,16 @@ libxlMakeNic(virDomainNetDefPtr l_nic, libxl_device_nic *x_nic)
 
     virMacAddrGetRaw(&l_nic->mac, x_nic->mac);
 
-    if (l_nic->model && !STREQ(l_nic->model, "netfront")) {
-        if (VIR_STRDUP(x_nic->model, l_nic->model) < 0)
-            return -1;
+    if (ioemu_nic)
         x_nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU;
-    } else {
+    else
         x_nic->nictype = LIBXL_NIC_TYPE_VIF;
+
+    if (l_nic->model) {
+        if (VIR_STRDUP(x_nic->model, l_nic->model) < 0)
+            return -1;
+        if (STREQ(l_nic->model, "netfront"))
+            x_nic->nictype = LIBXL_NIC_TYPE_VIF;
     }
 
     if (VIR_STRDUP(x_nic->ifname, l_nic->ifname) < 0)
@@ -908,7 +916,7 @@ libxlMakeNicList(virDomainDefPtr def,  libxl_domain_config *d_config)
         return -1;
 
     for (i = 0; i < nnics; i++) {
-        if (libxlMakeNic(l_nics[i], &x_nics[i]))
+        if (libxlMakeNic(def, l_nics[i], &x_nics[i]))
             goto error;
     }
 
diff --git a/src/libxl/libxl_conf.h b/src/libxl/libxl_conf.h
index ffa93bd..90d590f 100644
--- a/src/libxl/libxl_conf.h
+++ b/src/libxl/libxl_conf.h
@@ -142,7 +142,9 @@ libxlMakeCapabilities(libxl_ctx *ctx);
 int
 libxlMakeDisk(virDomainDiskDefPtr l_dev, libxl_device_disk *x_dev);
 int
-libxlMakeNic(virDomainNetDefPtr l_nic, libxl_device_nic *x_nic);
+libxlMakeNic(virDomainDefPtr def,
+             virDomainNetDefPtr l_nic,
+             libxl_device_nic *x_nic);
 int
 libxlMakeVfb(libxlDriverPrivatePtr driver,
              virDomainGraphicsDefPtr l_vfb, libxl_device_vfb *x_vfb);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fml-0008TF-9p; Mon, 06 Jan 2014 19:24:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmi-0008Sv-S9
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:25 +0000
Received: from [193.109.254.147:46891] by server-7.bemta-14.messagelabs.com id
	8A/FD-15500-8E20BC25; Mon, 06 Jan 2014 19:24:24 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389036262!9137038!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1706 invoked from network); 6 Jan 2014 19:24:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:23 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOHbq009856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOHVt025992
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:17 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOGhY003895; Mon, 6 Jan 2014 19:24:16 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:16 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:40 -0500
Message-Id: <1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 01/16] common/symbols: Export hypervisor
	symbols to PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 19 ++++++++++++
 xen/include/xen/symbols.h                |  7 +++--
 6 files changed, 94 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..cdb6886 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE_64(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.u.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, XEN_KSYM_NAME_LEN + 1) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..98f9534 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++(*symnum);
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..ba9da49 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,24 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    union {
+        XEN_GUEST_HANDLE(char) name;
+        uint64_t pad;
+    } u;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +571,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..adbf91d 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,8 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/xen.h>
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +11,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+extern int xensyms_read(uint32_t *symnum, uint32_t *type,
+                        uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn5-0000AU-15; Mon, 06 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008Va-0L
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [85.158.143.35:48676] by server-1.bemta-4.messagelabs.com id
	BC/23-02132-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389036271!9985713!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23757 invoked from network); 6 Jan 2014 19:24:33 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOQOK009667
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQEL002646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOPwq002619; Mon, 6 Jan 2014 19:24:25 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:25 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:51 -0500
Message-Id: <1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 12/16] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

The VPMU for the interrupted VCPU is unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values that
are stored in the VPMU context, which is shared between the hypervisor and
the domain, thus avoiding traps to the hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 117 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/xenpmu.h |   7 +++
 2 files changed, 117 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 2cc37cc..666067d 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -74,7 +74,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( is_hvm_domain(current->domain) ||
+        !(current->arch.vpmu.xenpmu_data &&
+          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -82,7 +87,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
     return 0;
 }
 
@@ -91,16 +112,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
 
-    if ( vpmu->arch_vpmu_ops )
+    /* dom0 will handle interrupts that arrive in hypervisor (idle VCPU) context */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    {
+        if ( smp_processor_id() >= dom0->max_vcpus )
+            return 0;
+        v = dom0->vcpu[smp_processor_id()];
+    }
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        void *p;
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64-bit)
+             * and therefore we treat it the same way as a non-privileged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs cmp;
+
+            gregs = guest_cpu_user_regs();
+            XLAT_cpu_user_regs(&cmp, gregs);
+            memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+        }
+        else if ( (current->domain != dom0) && !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(p, gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
     {
         struct vlapic *vlapic = vcpu_vlapic(v);
         u32 vlapic_lvtpc;
@@ -212,8 +304,13 @@ void vpmu_load(struct vcpu *v)
 
     local_irq_enable();
 
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    /*
+     * Load the PMU context immediately only when the PMU is counting
+     * and, for PV guests, its state is not cached.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!is_hvm_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -404,6 +501,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
         vpmu_lvtpc_update((uint32_t)pmu_params.val);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index bad5211..f4913c6 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
@@ -75,6 +76,12 @@ DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
 #define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xenpmu_data {
     uint32_t domain_id;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmr-0008VP-Qr; Mon, 06 Jan 2014 19:24:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmo-0008Ts-Ec
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:31 +0000
Received: from [85.158.137.68:17506] by server-10.bemta-3.messagelabs.com id
	EF/8F-23989-DE20BC25; Mon, 06 Jan 2014 19:24:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389036266!6758648!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2763 invoked from network); 6 Jan 2014 19:24:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:28 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOK0e009603
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKdp015362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOJBk026085; Mon, 6 Jan 2014 19:24:19 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:19 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:43 -0500
Message-Id: <1389036295-3877-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 04/16] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails, and add routines
to remove MSRs from the VMCS.


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index a1f1561..89212ec 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 edx;
 
-    return arch_pmc_cnt;
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
+         (msr_index == MSR_IA32_DS_AREA) ||
+         (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -373,56 +361,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -433,10 +404,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -452,7 +421,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -497,6 +466,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -509,27 +479,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -538,27 +506,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -577,7 +545,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -593,7 +560,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -678,7 +645,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -696,27 +663,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -734,7 +699,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -797,18 +762,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmw-00005l-PC; Mon, 06 Jan 2014 19:24:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008UZ-Fe
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:32 +0000
Received: from [85.158.139.211:12508] by server-9.bemta-5.messagelabs.com id
	41/22-15098-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389036269!8158396!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5419 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JONi6009625
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMeW004097
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:22 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMDY004091; Mon, 6 Jan 2014 19:24:22 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:22 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:47 -0500
Message-Id: <1389036295-3877-9-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 08/16] x86/VPMU: Make vpmu not HVM-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vpmu structure will be used by both HVM and PV guests, so move it
from struct hvm_vcpu to struct arch_vcpu.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/include/asm-x86/domain.h   | 2 ++
 xen/include/asm-x86/hvm/vcpu.h | 3 ---
 xen/include/asm-x86/hvm/vpmu.h | 5 ++---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 9d39061..f352a84 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -396,6 +396,8 @@ struct arch_vcpu
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
 
+    struct vpmu_struct vpmu;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv_vcpu;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..9beeaa9 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -152,9 +152,6 @@ struct hvm_vcpu {
     u32                 msr_tsc_aux;
     u64                 msr_tsc_adjust;
 
-    /* VPMU */
-    struct vpmu_struct  vpmu;
-
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2698686..6d641ea 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -31,9 +31,8 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
-                                          arch.hvm_vcpu.vpmu))
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmz-00006I-0d; Mon, 06 Jan 2014 19:24:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008Ug-KG
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:33 +0000
Received: from [85.158.139.211:9826] by server-7.bemta-5.messagelabs.com id
	24/A4-04824-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389036268!8123966!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4729 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JONHC009943
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMQl004090
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOLu3002485; Mon, 6 Jan 2014 19:24:21 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:21 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:46 -0500
Message-Id: <1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 07/16] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a public xenpmu.h header file and move into it the various macros and
structures that will be shared between the hypervisor and PV guests.

Move the MSR banks out of the architectural PMU structures to allow for
larger sizes in the future. The banks are now allocated immediately after
the context structure, and the PMU structures store byte offsets to them.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 47 +++++++++++---------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 70 +++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 --------------
 xen/include/asm-x86/hvm/vpmu.h           | 10 +----
 xen/include/public/arch-x86/xenpmu.h     | 74 ++++++++++++++++++++++++++++++++
 xen/include/public/xenpmu.h              | 38 ++++++++++++++++
 8 files changed, 183 insertions(+), 95 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 1f7d6b7..78979aa 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/xenpmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -178,11 +172,13 @@ static inline void context_load(struct vcpu *v)
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
@@ -196,9 +192,10 @@ static void amd_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
     {
         unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -211,10 +208,11 @@ static inline void context_save(struct vcpu *v)
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
@@ -249,6 +247,8 @@ static void context_update(unsigned int msr, u64 msr_content)
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -260,12 +260,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -373,7 +373,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -382,6 +384,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct amd_vpmu_context);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -413,6 +418,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     const struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -442,8 +449,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 6b4cd61..e756d87 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/xenpmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -294,11 +289,14 @@ static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -321,9 +319,12 @@ static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -367,10 +368,15 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+                                   sizeof(uint64_t) * fixed_pmc_cnt +
+                                   sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -447,7 +453,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -510,11 +516,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct arch_cntr_pair *arch_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            arch_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (arch_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -576,7 +585,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -627,6 +636,8 @@ static void core2_vpmu_dump(const struct vcpu *v)
     int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -645,12 +656,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+            i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -659,7 +667,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -677,7 +685,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -740,12 +748,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 2c1201b..99eb654 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/xenpmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..2698686 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/xenpmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -76,11 +75,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-x86/xenpmu.h b/xen/include/public/arch-x86/xenpmu.h
new file mode 100644
index 0000000..2959d46
--- /dev/null
+++ b/xen/include/public/arch-x86/xenpmu.h
@@ -0,0 +1,74 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+#include "xen.h"
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* AMD PMU registers and structures */
+struct amd_vpmu_context {
+    uint64_t counters;      /* Offset to counter MSRs */
+    uint64_t ctrls;         /* Offset to control MSRs */
+    uint8_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct arch_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+struct core2_vpmu_context {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+/* ANSI-C does not support anonymous unions */
+#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
+#define __ANON_UNION_NAME(x) x
+#else
+#define __ANON_UNION_NAME(x)
+#endif
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct amd_vpmu_context) > \
+                                    sizeof(struct core2_vpmu_context) ? \
+                                     sizeof(struct amd_vpmu_context) : \
+                                     sizeof(struct core2_vpmu_context))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct arch_xenpmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } __ANON_UNION_NAME(r);
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } __ANON_UNION_NAME(l);
+    union {
+        struct amd_vpmu_context amd;
+        struct core2_vpmu_context intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } __ANON_UNION_NAME(c);
+};
+typedef struct arch_xenpmu arch_xenpmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
new file mode 100644
index 0000000..bcadd15
--- /dev/null
+++ b/xen/include/public/xenpmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_XENPMU_H__
+#define __XEN_PUBLIC_XENPMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/xenpmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xenpmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    arch_xenpmu_t pmu;
+};
+typedef struct xenpmu_data xenpmu_data_t;
+
+#endif /* __XEN_PUBLIC_XENPMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmv-000058-B6; Mon, 06 Jan 2014 19:24:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008UT-6v
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:32 +0000
Received: from [85.158.137.68:61212] by server-2.bemta-3.messagelabs.com id
	F0/0F-17329-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389036268!6378557!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1528 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOLEg009906
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:22 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKbP015406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKxY004034; Mon, 6 Jan 2014 19:24:20 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:20 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:45 -0500
Message-Id: <1389036295-3877-7-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 06/16] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL
	should be initialized to zero
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The MSR_CORE_PERF_GLOBAL_CTRL register should be set to zero initially. It is
up to the guest to set it so that counters are enabled.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 7fd2420..6b4cd61 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -164,13 +164,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -371,8 +364,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmp-0008U8-7J; Mon, 06 Jan 2014 19:24:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmo-0008Tf-15
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:30 +0000
Received: from [193.109.254.147:47139] by server-1.bemta-14.messagelabs.com id
	C4/69-15600-DE20BC25; Mon, 06 Jan 2014 19:24:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389036266!9142267!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19955 invoked from network); 6 Jan 2014 19:24:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOL8h009604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKFp002413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKM8002371; Mon, 6 Jan 2014 19:24:20 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:19 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:44 -0500
Message-Id: <1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 05/16] x86/VPMU: Handle APIC_LVTPC accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update the APIC_LVTPC vector when an HVM guest writes to it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           | 16 +++++++++++++---
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 842bce7..1f7d6b7 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -290,8 +290,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( is_hvm_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -302,8 +300,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index bc06010..d954f4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 89212ec..7fd2420 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -532,19 +532,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -710,10 +697,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d6a9ff6..2c1201b 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
     case X86_VENDOR_AMD:
         if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     case X86_VENDOR_INTEL:
         if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
         opt_vpmu_enabled = 0;
-        break;
+        return;
     }
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
 }
 
 void vpmu_destroy(struct vcpu *v)
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn5-0000AU-15; Mon, 06 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008Va-0L
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [85.158.143.35:48676] by server-1.bemta-4.messagelabs.com id
	BC/23-02132-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389036271!9985713!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23757 invoked from network); 6 Jan 2014 19:24:33 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOQOK009667
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQEL002646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOPwq002619; Mon, 6 Jan 2014 19:24:25 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:25 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:51 -0500
Message-Id: <1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 12/16] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

The VPMU for the interrupted VCPU is unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values that are
stored in the VPMU context, which is shared between the hypervisor and the
domain, thus avoiding traps to the hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 117 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/xenpmu.h |   7 +++
 2 files changed, 117 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 2cc37cc..666067d 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -74,7 +74,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if (is_hvm_domain(current->domain) ||
+        !(current->arch.vpmu.xenpmu_data &&
+          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -82,7 +87,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
     return 0;
 }
 
@@ -91,16 +112,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags)
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
 
-    if ( vpmu->arch_vpmu_ops )
+    /* dom0 will handle this interrupt */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    {
+        if ( smp_processor_id() >= dom0->max_vcpus )
+            return 0;
+        v = dom0->vcpu[smp_processor_id()];
+    }
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        void *p;
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if (v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED)
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+             * and therefore we treat it the same way as a non-privileged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs cmp;
+
+            gregs = guest_cpu_user_regs();
+            XLAT_cpu_user_regs(&cmp, gregs);
+            memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+        }
+        else if ( (current->domain != dom0) && !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(p, gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
     {
         struct vlapic *vlapic = vcpu_vlapic(v);
         u32 vlapic_lvtpc;
@@ -212,8 +304,13 @@ void vpmu_load(struct vcpu *v)
 
     local_irq_enable();
 
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    /* 
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!is_hvm_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -404,6 +501,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
         vpmu_lvtpc_update((uint32_t)pmu_params.val);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index bad5211..f4913c6 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
@@ -75,6 +76,12 @@ DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
 #define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xenpmu_data {
     uint32_t domain_id;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmz-00006I-0d; Mon, 06 Jan 2014 19:24:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008Ug-KG
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:33 +0000
Received: from [85.158.139.211:9826] by server-7.bemta-5.messagelabs.com id
	24/A4-04824-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389036268!8123966!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4729 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JONHC009943
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMQl004090
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOLu3002485; Mon, 6 Jan 2014 19:24:21 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:21 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:46 -0500
Message-Id: <1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 07/16] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a xenpmu.h header file and move into it the macros and structures that
will be shared between the hypervisor and PV guests.

Move the MSR banks out of the architectural PMU structures to allow for larger
sizes in the future. The banks are allocated immediately after the context
structure, and the PMU structures store offsets to them.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 47 +++++++++++---------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 70 +++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 --------------
 xen/include/asm-x86/hvm/vpmu.h           | 10 +----
 xen/include/public/arch-x86/xenpmu.h     | 74 ++++++++++++++++++++++++++++++++
 xen/include/public/xenpmu.h              | 38 ++++++++++++++++
 8 files changed, 183 insertions(+), 95 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 1f7d6b7..78979aa 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/xenpmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -178,11 +172,13 @@ static inline void context_load(struct vcpu *v)
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
@@ -196,9 +192,10 @@ static void amd_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
     {
         unsigned int i;
+	uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -211,10 +208,11 @@ static inline void context_save(struct vcpu *v)
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
@@ -249,6 +247,8 @@ static void context_update(unsigned int msr, u64 msr_content)
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -260,12 +260,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -373,7 +373,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -382,6 +384,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct amd_vpmu_context);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -413,6 +418,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     const struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -442,8 +449,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 6b4cd61..e756d87 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/xenpmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -294,11 +289,14 @@ static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -321,9 +319,12 @@ static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -367,10 +368,15 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+				   sizeof(uint64_t) * fixed_pmc_cnt +
+				   sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -447,7 +453,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -510,11 +516,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct arch_cntr_pair *arch_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            arch_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (arch_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -576,7 +585,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -627,6 +636,8 @@ static void core2_vpmu_dump(const struct vcpu *v)
     int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -645,12 +656,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+            i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -659,7 +667,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -677,7 +685,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -740,12 +748,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 2c1201b..99eb654 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/xenpmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..2698686 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/xenpmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -76,11 +75,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-x86/xenpmu.h b/xen/include/public/arch-x86/xenpmu.h
new file mode 100644
index 0000000..2959d46
--- /dev/null
+++ b/xen/include/public/arch-x86/xenpmu.h
@@ -0,0 +1,74 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+#include "xen.h"
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* AMD PMU registers and structures */
+struct amd_vpmu_context {
+    uint64_t counters;      /* Offset to counter MSRs */
+    uint64_t ctrls;         /* Offset to control MSRs */
+    uint8_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct arch_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+struct core2_vpmu_context {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+/* ANSI-C does not support anonymous unions */
+#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
+#define __ANON_UNION_NAME(x) x
+#else
+#define __ANON_UNION_NAME(x)
+#endif
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct amd_vpmu_context) > \
+                                    sizeof(struct core2_vpmu_context) ? \
+                                     sizeof(struct amd_vpmu_context) : \
+                                     sizeof(struct core2_vpmu_context))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct arch_xenpmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } __ANON_UNION_NAME(r);
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } __ANON_UNION_NAME(l);
+    union {
+        struct amd_vpmu_context amd;
+        struct core2_vpmu_context intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } __ANON_UNION_NAME(c);
+};
+typedef struct arch_xenpmu arch_xenpmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
new file mode 100644
index 0000000..bcadd15
--- /dev/null
+++ b/xen/include/public/xenpmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_XENPMU_H__
+#define __XEN_PUBLIC_XENPMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/xenpmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xenpmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    arch_xenpmu_t pmu;
+};
+typedef struct xenpmu_data xenpmu_data_t;
+
+#endif /* __XEN_PUBLIC_XENPMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmm-0008TT-9x; Mon, 06 Jan 2014 19:24:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmk-0008T1-Lz
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:26 +0000
Received: from [193.109.254.147:59844] by server-3.bemta-14.messagelabs.com id
	1B/8B-11000-AE20BC25; Mon, 06 Jan 2014 19:24:26 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389036263!9137043!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1788 invoked from network); 6 Jan 2014 19:24:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:25 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOJgg009869
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOJVf026079
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:19 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOINZ002296; Mon, 6 Jan 2014 19:24:18 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:18 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:42 -0500
Message-Id: <1389036295-3877-4-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 03/16] x86/VPMU: Minor VPMU cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update the macros that modify VPMU flags to allow changing multiple bits at
once.

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM).
This is needed by subsequent PMU patches.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c |  9 +++------
 xen/arch/x86/hvm/vpmu.c           | 11 +++--------
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bec40d8..842bce7 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -236,7 +236,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) && 
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -276,7 +277,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -292,7 +293,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -303,7 +305,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -395,7 +398,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..a1f1561 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,10 +326,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
@@ -446,7 +443,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -813,7 +810,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a4e3664..d6a9ff6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,10 +127,7 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
         return;
 
     vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -138,8 +135,7 @@ static void vpmu_save_force(void *arg)
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -149,8 +145,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmw-00005l-PC; Mon, 06 Jan 2014 19:24:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008UZ-Fe
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:32 +0000
Received: from [85.158.139.211:12508] by server-9.bemta-5.messagelabs.com id
	41/22-15098-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389036269!8158396!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5419 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JONi6009625
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMeW004097
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:22 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOMDY004091; Mon, 6 Jan 2014 19:24:22 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:22 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:47 -0500
Message-Id: <1389036295-3877-9-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 08/16] x86/VPMU: Make vpmu not HVM-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vpmu structure will be used for both HVM and PV guests. Move it from
hvm_vcpu to arch_vcpu.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/include/asm-x86/domain.h   | 2 ++
 xen/include/asm-x86/hvm/vcpu.h | 3 ---
 xen/include/asm-x86/hvm/vpmu.h | 5 ++---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 9d39061..f352a84 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -396,6 +396,8 @@ struct arch_vcpu
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
 
+    struct vpmu_struct vpmu;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv_vcpu;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..9beeaa9 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -152,9 +152,6 @@ struct hvm_vcpu {
     u32                 msr_tsc_aux;
     u64                 msr_tsc_adjust;
 
-    /* VPMU */
-    struct vpmu_struct  vpmu;
-
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2698686..6d641ea 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -31,9 +31,8 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
-                                          arch.hvm_vcpu.vpmu))
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fml-0008TM-TU; Mon, 06 Jan 2014 19:24:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmk-0008T0-BW
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:26 +0000
Received: from [85.158.137.68:17255] by server-13.bemta-3.messagelabs.com id
	CE/57-28603-9E20BC25; Mon, 06 Jan 2014 19:24:25 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389036263!7582740!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13106 invoked from network); 6 Jan 2014 19:24:24 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:24 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOH6W009854
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOGbi025987
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:16 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOGwa025963; Mon, 6 Jan 2014 19:24:16 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:15 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:39 -0500
Message-Id: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 00/16] x86/PMU: Xen PMU PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the much-delayed version 3 of the PV PMU patches.

This patch series adds PMU support in Xen for PV guests. There is a
companion patchset for the Linux kernel. In addition, another set of changes
will be provided (later) for the userland perf code.

This version has the following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector and
this needs to be fixed separately.


A few notes that may help reviewing: 

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing register values as well as PMU state at the time of
the PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or as regular vector
interrupts, for both HVM and PV. The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV guests.
* The guest's interrupt handler does not read/write PMU MSRs directly. Instead,
it accesses xenpmu_data_t and flushes it to HW before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0 only profiling. dom0 collects samples for everyone. Sampling
    in guests is suspended.
* The /proc/xen/xensyms file exports the hypervisor's symbols to dom0 (similar
to /proc/kallsyms).
* The VPMU infrastructure is now used for both HVM and PV and has therefore been
moved up from the hvm subtree.


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC APIC handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol per hypercall is returned, performance
appears to be acceptable: reading the whole file from dom0 userland takes on
average about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done in schedule_tail on ARM as well (making
the changes to common/schedule.c architecture-independent). Note that this is
not tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest




Boris Ostrovsky (16):
  common/symbols: Export hypervisor symbols to PV guest
  x86/VPMU: Stop AMD counters when called from vpmu_save_force()
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  17 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  34 +-
 xen/arch/x86/vpmu.c                      | 621 ++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 499 ++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 936 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  95 ++++
 xen/include/public/arch-x86/xenpmu.h     |  74 +++
 xen/include/public/platform.h            |  19 +
 xen/include/public/xen.h                 |   2 +
 xen/include/public/xenpmu.h              | 109 ++++
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   7 +-
 35 files changed, 2554 insertions(+), 1872 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmm-0008TT-9x; Mon, 06 Jan 2014 19:24:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmk-0008T1-Lz
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:26 +0000
Received: from [193.109.254.147:59844] by server-3.bemta-14.messagelabs.com id
	1B/8B-11000-AE20BC25; Mon, 06 Jan 2014 19:24:26 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389036263!9137043!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1788 invoked from network); 6 Jan 2014 19:24:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:25 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOJgg009869
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOJVf026079
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:19 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOINZ002296; Mon, 6 Jan 2014 19:24:18 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:18 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:42 -0500
Message-Id: <1389036295-3877-4-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 03/16] x86/VPMU: Minor VPMU cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update macros that modify VPMU flags to allow changing multiple bits at once.

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM).
This is needed by subsequent PMU patches.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c |  9 +++------
 xen/arch/x86/hvm/vpmu.c           | 11 +++--------
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bec40d8..842bce7 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -236,7 +236,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) && 
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -276,7 +277,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -292,7 +293,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -303,7 +305,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -395,7 +398,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..a1f1561 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,10 +326,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
@@ -446,7 +443,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -813,7 +810,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a4e3664..d6a9ff6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,10 +127,7 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
         return;
 
     vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -138,8 +135,7 @@ static void vpmu_save_force(void *arg)
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -149,8 +145,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fml-0008TF-9p; Mon, 06 Jan 2014 19:24:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmi-0008Sv-S9
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:25 +0000
Received: from [193.109.254.147:46891] by server-7.bemta-14.messagelabs.com id
	8A/FD-15500-8E20BC25; Mon, 06 Jan 2014 19:24:24 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389036262!9137038!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1706 invoked from network); 6 Jan 2014 19:24:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:23 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOHbq009856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOHVt025992
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:17 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOGhY003895; Mon, 6 Jan 2014 19:24:16 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:16 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:40 -0500
Message-Id: <1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 01/16] common/symbols: Export hypervisor
	symbols to PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via a new
XENPF_get_symbol hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 19 ++++++++++++
 xen/include/xen/symbols.h                |  7 +++--
 6 files changed, 94 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..cdb6886 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE_64(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.u.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, XEN_KSYM_NAME_LEN + 1) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..98f9534 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++(*symnum);
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..ba9da49 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,24 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    union {
+        XEN_GUEST_HANDLE(char) name;
+        uint64_t pad;
+    } u;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +571,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..adbf91d 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,8 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/xen.h>
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +11,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+extern int xensyms_read(uint32_t *symnum, uint32_t *type,
+                        uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fml-0008TM-TU; Mon, 06 Jan 2014 19:24:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmk-0008T0-BW
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:26 +0000
Received: from [85.158.137.68:17255] by server-13.bemta-3.messagelabs.com id
	CE/57-28603-9E20BC25; Mon, 06 Jan 2014 19:24:25 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389036263!7582740!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13106 invoked from network); 6 Jan 2014 19:24:24 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:24 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOH6W009854
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOGbi025987
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:16 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOGwa025963; Mon, 6 Jan 2014 19:24:16 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:15 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:39 -0500
Message-Id: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 00/16] x86/PMU: Xen PMU PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the much-delayed version 3 of the PV PMU patches.

The following patch series adds PMU support in Xen for PV guests. There is a
companion patchset for the Linux kernel. In addition, another set of changes
will be provided later for the userland perf code.

This version has the following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector;
this needs to be fixed separately.


A few notes that may help reviewing: 

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing register values as well as PMU state at the time of the
PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or as regular vector
interrupts for both HVM and PV. The interrupts are delivered as NMIs to HVM
guests and as virtual interrupts to PV guests.
* The guest's interrupt handler does not read/write PMU MSRs directly. Instead,
it accesses xenpmu_data_t and flushes it to hardware before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0 only profiling. dom0 collects samples for everyone. Sampling
    in guests is suspended.
* The /proc/xen/xensyms file exports the hypervisor's symbols to dom0 (similar
to /proc/kallsyms).
* The VPMU infrastructure is now used for both HVM and PV and has therefore been
moved up from the hvm subtree.
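
The runtime mode names above could be mapped to mode bits along these lines;
parse_pmu_mode is a hypothetical helper, and the bit values are illustrative
(the real definitions live in the public xenpmu.h added by this series):

```c
#include <assert.h>
#include <string.h>

/* Illustrative mode bits; the real values live in xen/include/public/xenpmu.h. */
#define XENPMU_MODE_OFF   0
#define XENPMU_MODE_ON    (1 << 0)
#define XENPMU_MODE_PRIV  (1 << 1)

/*
 * Map the sysfs mode names listed above to mode bits:
 *   "disable"     -> VPMU off
 *   "enable"      -> guests profile themselves, dom0 profiles itself and Xen
 *   "priv_enable" -> dom0-only profiling; sampling in guests is suspended
 * Returns -1 for an unknown name.
 */
static int parse_pmu_mode(const char *s)
{
    if (strcmp(s, "disable") == 0)
        return XENPMU_MODE_OFF;
    if (strcmp(s, "enable") == 0)
        return XENPMU_MODE_ON;
    if (strcmp(s, "priv_enable") == 0)
        return XENPMU_MODE_PRIV;
    return -1;
}
```

A dom0 tool would parse the string written to pmu_mode with a helper like this
before handing the resulting mode bits to the hypervisor.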


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol is returned per hypercall, performance
appears to be acceptable: reading the whole file from dom0 userland takes on
average about twice as long as reading /proc/kallsyms.
* More cleanup of Intel VPMU code to simplify publicly exported structures.
* There are architecture-independent and x86-specific public include files (ARM
has a stub).
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest




Boris Ostrovsky (16):
  common/symbols: Export hypervisor symbols to PV guest
  x86/VPMU: Stop AMD counters when called from vpmu_save_force()
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  17 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  34 +-
 xen/arch/x86/vpmu.c                      | 621 ++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 499 ++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 936 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  95 ++++
 xen/include/public/arch-x86/xenpmu.h     |  74 +++
 xen/include/public/platform.h            |  19 +
 xen/include/public/xen.h                 |   2 +
 xen/include/public/xenpmu.h              | 109 ++++
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   7 +-
 35 files changed, 2554 insertions(+), 1872 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn1-000082-Oq; Mon, 06 Jan 2014 19:24:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fms-0008VO-8z
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [193.109.254.147:60232] by server-3.bemta-14.messagelabs.com id
	AE/9B-11000-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389036271!9143131!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25438 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JORsc010015
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:28 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQEN002657
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQrX015578; Mon, 6 Jan 2014 19:24:26 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:26 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:52 -0500
Message-Id: <1389036295-3877-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 13/16] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for a privileged PMU mode, which allows the privileged domain (dom0)
to profile both itself (and the hypervisor) and the guests. While this mode is
on, profiling in the guests is disabled.
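
A minimal sketch of the two checks this patch adds — the mutual exclusion of
XENPMU_MODE_ON and XENPMU_MODE_PRIV enforced in do_xenpmu_op(), and the routing
of all PMU interrupts to dom0 while privileged mode is on (dom0_handles_interrupt
is an illustrative stand-in for the condition in vpmu_do_interrupt(), with
is_reserved standing for the domain_id >= DOMID_FIRST_RESERVED test):

```c
#include <assert.h>
#include <stdbool.h>

#define XENPMU_MODE_OFF   0
#define XENPMU_MODE_ON    (1 << 0)
#define XENPMU_MODE_PRIV  (1 << 1)

/*
 * Mirror of the mode check added to do_xenpmu_op(): XENPMU_MODE_ON and
 * XENPMU_MODE_PRIV are mutually exclusive, and no other mode bits may be set.
 */
static bool pmu_mode_valid(unsigned int mode)
{
    if (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV))
        return false;
    if ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV))
        return false;
    return true;
}

/*
 * In privileged mode every PMU interrupt is handled by dom0; otherwise only
 * interrupts taken in a reserved (non-guest) domain are routed there.
 */
static bool dom0_handles_interrupt(unsigned int mode, bool is_reserved)
{
    return (mode & XENPMU_MODE_PRIV) || is_reserved;
}
```

Either mode bit alone is accepted; setting both, or any other bit, is rejected
with -EINVAL by the real hypercall.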

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 65 ++++++++++++++++++++++++++++++++-------------
 xen/arch/x86/traps.c        |  6 ++++-
 xen/include/public/xenpmu.h |  3 +++
 3 files changed, 55 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 666067d..0cbede4 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -86,6 +86,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
     {
         int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
@@ -111,6 +114,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
     {
         int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
@@ -133,7 +139,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
     {
         if ( smp_processor_id() >= dom0->max_vcpus )
             return 0;
@@ -141,7 +148,10 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     }
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
         void *p;
@@ -158,27 +168,45 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
         /* Store appropriate registers in xenpmu_data */
         p = &v->arch.vpmu.xenpmu_data->pmu.regs;
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs cmp;
-
-            gregs = guest_cpu_user_regs();
-            XLAT_cpu_user_regs(&cmp, gregs);
-            memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                /*
+                 * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+                 * and therefore we treat it the same way as a non-priviledged
+                 * PV 32-bit domain.
+                 */
+                struct compat_cpu_user_regs cmp;
+
+                gregs = guest_cpu_user_regs();
+                XLAT_cpu_user_regs(&cmp, gregs);
+                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+            ((struct cpu_user_regs *)p)->cs =
+                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
         }
-        else if ( (current->domain != dom0) && !is_idle_vcpu(current) )
+        else 
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+
             memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(p, regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -441,7 +469,8 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
 
         mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
-        if ( mode & ~XENPMU_MODE_ON )
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode &= ~XENPMU_MODE_MASK;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 8a3353f..3ff24e5 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2507,7 +2507,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index f4913c6..c02eb66 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -63,11 +63,14 @@ DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_FEATURE_SHIFT      16
 #define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn4-00009Z-3X; Mon, 06 Jan 2014 19:24:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008VX-06
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [85.158.143.35:48657] by server-3.bemta-4.messagelabs.com id
	C2/83-32360-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389036270!9893997!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22350 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOPJI009651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOMF002572
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOO45015500; Mon, 6 Jan 2014 19:24:24 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:23 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:49 -0500
Message-Id: <1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 10/16] x86/VPMU: Initialize PMU for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add code for initializing and tearing down the PMU for PV guests.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 38 +++++++++++----------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 51 ++++++++++++++++++----------
 xen/arch/x86/hvm/vpmu.c           | 71 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/xen.h          |  1 +
 xen/include/public/xenpmu.h       |  2 ++
 xen/include/xen/softirq.h         |  1 +
 8 files changed, 131 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 1369d74..f35aee2 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( is_hvm_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->vcpu_id, v->domain->domain_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
 
     ctxt->counters = sizeof(struct amd_vpmu_context);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -399,18 +404,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( is_hvm_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 5670924..25b2a96 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -356,22 +356,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -750,6 +758,11 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( !is_hvm_domain(v->domain) )
+        if ( !core2_vpmu_alloc_resource(v) )
+            return 1;
+
     return 0;
 }
 
@@ -760,11 +773,15 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index c83256f..1a97b0e 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,9 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
@@ -259,7 +262,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -271,6 +280,54 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn = params->val;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( !mfn_valid(mfn) ||
+         !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
+    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
 {
     int ret = -EINVAL;
@@ -327,7 +384,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 34efd24..daf381c 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 022a171..4b0ae38 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -57,6 +57,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xenpmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index faf05fc..37592f8 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmn-0008Ta-0V; Mon, 06 Jan 2014 19:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fml-0008T2-0k
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:27 +0000
Received: from [193.109.254.147:59856] by server-4.bemta-14.messagelabs.com id
	76/28-03916-AE20BC25; Mon, 06 Jan 2014 19:24:26 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389036264!7658408!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22414 invoked from network); 6 Jan 2014 19:24:25 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOJu1009581
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOI8Q003942
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOHar026014; Mon, 6 Jan 2014 19:24:17 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:17 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:41 -0500
Message-Id: <1389036295-3877-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 02/16] x86/VPMU: Stop AMD counters when
	called from vpmu_save_force()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the amd_vpmu_save() algorithm to accommodate cases where we need
to stop counters from vpmu_save_force() (needed by subsequent PMU
patches).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c | 14 ++++----------
 xen/arch/x86/hvm/vpmu.c     | 12 ++++++------
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..bec40d8 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -223,22 +223,16 @@ static int amd_vpmu_save(struct vcpu *v)
     struct amd_vpmu_context *ctx = vpmu->context;
     unsigned int i;
 
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
     {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
         for ( i = 0; i < num_counters; i++ )
             wrmsrl(ctrls[i], 0);
 
-        return 0;
+        vpmu_set(vpmu, VPMU_FROZEN);
     }
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
 
     context_save(v);
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..a4e3664 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,13 +127,19 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
     vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -177,12 +183,8 @@ void vpmu_load(struct vcpu *v)
          * before saving the context.
          */
         if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
             on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                              vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
     } 
 
     /* Prevent forced context save from remote CPU */
@@ -195,9 +197,7 @@ void vpmu_load(struct vcpu *v)
         vpmu = vcpu_vpmu(prev);
 
         /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
         vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
         vpmu = vcpu_vpmu(v);
     }
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn4-00009Z-3X; Mon, 06 Jan 2014 19:24:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008VX-06
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [85.158.143.35:48657] by server-3.bemta-4.messagelabs.com id
	C2/83-32360-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389036270!9893997!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22350 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOPJI009651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOMF002572
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOO45015500; Mon, 6 Jan 2014 19:24:24 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:23 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:49 -0500
Message-Id: <1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 10/16] x86/VPMU: Initialize PMU for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add code for initializing and tearing down the PMU for PV guests.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 38 +++++++++++----------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 51 ++++++++++++++++++----------
 xen/arch/x86/hvm/vpmu.c           | 71 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/xen.h          |  1 +
 xen/include/public/xenpmu.h       |  2 ++
 xen/include/xen/softirq.h         |  1 +
 8 files changed, 131 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 1369d74..f35aee2 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( is_hvm_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
 
     ctxt->counters = sizeof(struct amd_vpmu_context);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -399,18 +404,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( is_hvm_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 5670924..25b2a96 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -356,22 +356,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -750,6 +758,11 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( !is_hvm_domain(v->domain) )
+        if ( !core2_vpmu_alloc_resource(v) )
+            return 1;
+
     return 0;
 }
 
@@ -760,11 +773,15 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index c83256f..1a97b0e 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,9 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
@@ -259,7 +262,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -271,6 +280,54 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn = params->val;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( !mfn_valid(mfn) ||
+         !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
+    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
 {
     int ret = -EINVAL;
@@ -327,7 +384,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 34efd24..daf381c 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 022a171..4b0ae38 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -57,6 +57,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xenpmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index faf05fc..37592f8 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmn-0008Ta-0V; Mon, 06 Jan 2014 19:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fml-0008T2-0k
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:27 +0000
Received: from [193.109.254.147:59856] by server-4.bemta-14.messagelabs.com id
	76/28-03916-AE20BC25; Mon, 06 Jan 2014 19:24:26 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389036264!7658408!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22414 invoked from network); 6 Jan 2014 19:24:25 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOJu1009581
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOI8Q003942
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:18 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOHar026014; Mon, 6 Jan 2014 19:24:17 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:17 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:41 -0500
Message-Id: <1389036295-3877-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 02/16] x86/VPMU: Stop AMD counters when
	called from vpmu_save_force()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the amd_vpmu_save() algorithm to accommodate cases where we need
to stop the counters from vpmu_save_force() (needed by subsequent PMU
patches).
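
The reordering above can be modelled with a minimal sketch; the flag values
and struct fields below are illustrative stand-ins, not the real Xen
definitions. Counters are stopped unless the vpmu is already frozen, and the
context is saved only when VPMU_CONTEXT_SAVE is set (as it is when called
via vpmu_save_force()):

```c
#include <assert.h>

/* Hypothetical flag values; the real ones live in xen/include/asm-x86. */
#define VPMU_FROZEN        (1u << 0)
#define VPMU_CONTEXT_SAVE  (1u << 1)

struct vpmu_model {
    unsigned int flags;
    int counters_stopped;   /* stands in for wrmsrl(ctrls[i], 0) */
    int context_saved;      /* stands in for context_save(v)    */
};

/* Mirrors the new amd_vpmu_save() ordering. */
static int amd_vpmu_save_model(struct vpmu_model *v)
{
    if ( !(v->flags & VPMU_FROZEN) )
    {
        v->counters_stopped = 1;
        v->flags |= VPMU_FROZEN;
    }

    if ( !(v->flags & VPMU_CONTEXT_SAVE) )
        return 0;

    v->context_saved = 1;
    return 1;
}
```

On the ordinary save path (VPMU_CONTEXT_SAVE clear) the counters are stopped
and the function returns early; on the forced path the stop is skipped if
already frozen and the context is saved.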

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c | 14 ++++----------
 xen/arch/x86/hvm/vpmu.c     | 12 ++++++------
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..bec40d8 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -223,22 +223,16 @@ static int amd_vpmu_save(struct vcpu *v)
     struct amd_vpmu_context *ctx = vpmu->context;
     unsigned int i;
 
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
     {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
         for ( i = 0; i < num_counters; i++ )
             wrmsrl(ctrls[i], 0);
 
-        return 0;
+        vpmu_set(vpmu, VPMU_FROZEN);
     }
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
 
     context_save(v);
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..a4e3664 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,13 +127,19 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
     vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -177,12 +183,8 @@ void vpmu_load(struct vcpu *v)
          * before saving the context.
          */
         if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
             on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                              vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
     } 
 
     /* Prevent forced context save from remote CPU */
@@ -195,9 +197,7 @@ void vpmu_load(struct vcpu *v)
         vpmu = vcpu_vpmu(prev);
 
         /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
         vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
         vpmu = vcpu_vpmu(v);
     }
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmv-000058-B6; Mon, 06 Jan 2014 19:24:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmq-0008UT-6v
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:32 +0000
Received: from [85.158.137.68:61212] by server-2.bemta-3.messagelabs.com id
	F0/0F-17329-FE20BC25; Mon, 06 Jan 2014 19:24:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389036268!6378557!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1528 invoked from network); 6 Jan 2014 19:24:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOLEg009906
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:22 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKbP015406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKxY004034; Mon, 6 Jan 2014 19:24:20 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:20 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:45 -0500
Message-Id: <1389036295-3877-7-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 06/16] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL
	should be initialized to zero
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The MSR_CORE_PERF_GLOBAL_CTRL register should initially be set to zero. It is
up to the guest to set it so that the counters are enabled.
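
For contrast, the removed core2_calc_intial_glb_ctrl_msr() helper enabled
every counter up front. A sketch of what it computed (the counts 4 general
and 3 fixed are illustrative values, not taken from this patch):

```c
#include <assert.h>
#include <stdint.h>

/* What the removed helper produced: enable bits for all general-purpose
 * counters in the low word and all fixed counters starting at bit 32. */
static uint64_t old_initial_global_ctrl(int arch_pmc_cnt, int fixed_pmc_cnt)
{
    uint64_t arch_bits = (1ULL << arch_pmc_cnt) - 1;
    uint64_t fix_bits  = (1ULL << fixed_pmc_cnt) - 1;

    return (fix_bits << 32) | arch_bits;
}
```

With 4 general and 3 fixed counters this yields 0x70000000F; the patch
replaces that with a plain 0, leaving it to the guest to enable counters.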

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 7fd2420..6b4cd61 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -164,13 +164,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -371,8 +364,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn0-00007C-2f; Mon, 06 Jan 2014 19:24:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmr-0008V4-9G
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:33 +0000
Received: from [85.158.143.35:48623] by server-3.bemta-4.messagelabs.com id
	01/83-32360-0F20BC25; Mon, 06 Jan 2014 19:24:32 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389036270!9816274!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30826 invoked from network); 6 Jan 2014 19:24:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:31 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOP0E009656
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOPbe004186
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOml004174; Mon, 6 Jan 2014 19:24:24 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:24 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:50 -0500
Message-Id: <1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU register
	handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.
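
The traps.c hunks below route several MSR ranges to the VPMU handlers. A
minimal range-classification sketch, using the architectural MSR numbers
from the Intel SDM for the Intel ranges (the real code uses the symbolic
names from Xen's msr-index.h, and also covers the AMD Fam15h range, which
is omitted here):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_P6_PERFCTR0               0x00c1
#define MSR_P6_PERFCTR1               0x00c2
#define MSR_P6_EVNTSEL0               0x0186
#define MSR_P6_EVNTSEL1               0x0187
#define MSR_CORE_PERF_FIXED_CTR0      0x0309
#define MSR_CORE_PERF_FIXED_CTR2      0x030b
#define MSR_CORE_PERF_FIXED_CTR_CTRL  0x038d
#define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x0390

/* Returns nonzero if a PV rd/wrmsr emulation should go to the VPMU. */
static int is_vpmu_msr(uint32_t msr)
{
    return (msr >= MSR_P6_PERFCTR0 && msr <= MSR_P6_PERFCTR1) ||
           (msr >= MSR_P6_EVNTSEL0 && msr <= MSR_P6_EVNTSEL1) ||
           (msr >= MSR_CORE_PERF_FIXED_CTR0 &&
            msr <= MSR_CORE_PERF_FIXED_CTR2) ||
           (msr >= MSR_CORE_PERF_FIXED_CTR_CTRL &&
            msr <= MSR_CORE_PERF_GLOBAL_OVF_CTRL);
}
```

Note the last range deliberately spans FIXED_CTR_CTRL (0x38D) through
GLOBAL_OVF_CTRL (0x390), so GLOBAL_STATUS and GLOBAL_CTRL are covered too.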

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  8 ++++++
 xen/arch/x86/traps.c              | 30 ++++++++++++++++++--
 xen/include/public/xenpmu.h       |  1 +
 5 files changed, 88 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index da8e522..25572d5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1972,8 +1972,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 25b2a96..b9b2ea9 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,6 +298,9 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
+
+    if ( !is_hvm_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -306,10 +310,14 @@ static int core2_vpmu_save(struct vcpu *v)
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
+    if ( !is_hvm_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !is_hvm_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -424,6 +439,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( is_hvm_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -450,7 +473,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -462,11 +485,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -482,7 +506,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -507,10 +531,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( is_hvm_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -527,7 +555,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct arch_cntr_pair *arch_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             arch_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -566,13 +597,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+       if ( is_hvm_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -596,7 +633,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 1a97b0e..2cc37cc 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -396,6 +396,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.val);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3f7a3c7..8a3353f 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -866,7 +867,6 @@ void pv_cpuid(struct cpu_user_regs *regs)
         break;
 
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
     unsupported:
         a = b = c = d = 0;
         break;
-
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
+        break;
     default:
         (void)cpuid_hypervisor_leaves(regs->eax, 0, &a, &b, &c, &d);
         break;
@@ -2499,6 +2501,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2587,6 +2597,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 37592f8..bad5211 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmr-0008VP-Qr; Mon, 06 Jan 2014 19:24:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmo-0008Ts-Ec
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:31 +0000
Received: from [85.158.137.68:17506] by server-10.bemta-3.messagelabs.com id
	EF/8F-23989-DE20BC25; Mon, 06 Jan 2014 19:24:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389036266!6758648!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2763 invoked from network); 6 Jan 2014 19:24:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:28 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOK0e009603
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKdp015362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:20 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOJBk026085; Mon, 6 Jan 2014 19:24:19 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:19 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:43 -0500
Message-Id: <1389036295-3877-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 04/16] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails and add routines
to remove MSRs from VMCS.
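
The new removal routines (vmx_rm_guest_msr()/vmx_rm_host_load_msr() in the
vmcs.c hunk below) share one pattern: find the entry, shift the remainder
down, clear the tail slot. A self-contained model of that pattern, with a
simplified entry type standing in for struct vmx_msr_entry:

```c
#include <assert.h>
#include <stdint.h>

struct msr_entry {
    uint32_t index;
    uint64_t data;
};

/* Remove `msr` from a packed MSR area; returns the new entry count
 * (unchanged if the MSR was not present). */
static unsigned int rm_msr(struct msr_entry *area, unsigned int count,
                           uint32_t msr)
{
    unsigned int i;

    for ( i = 0; i < count; i++ )
        if ( area[i].index == msr )
            break;

    if ( i == count )
        return count;               /* not present: nothing to do */

    for ( ; i < count - 1; i++ )
        area[i] = area[i + 1];      /* shift remaining entries down */

    area[count - 1].index = 0;      /* clear the now-unused tail slot */
    area[count - 1].data = 0;

    return count - 1;
}
```

In the real routines the returned count is additionally written back to the
VMCS exit/entry MSR count fields via __vmwrite().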


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index a1f1561..89212ec 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID leaf 0xa, EDX[4:0].
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 edx;
 
-    return arch_pmc_cnt;
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
+         (msr_index == MSR_IA32_DS_AREA) ||
+         (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -373,56 +361,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -433,10 +404,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -452,7 +421,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -497,6 +466,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -509,27 +479,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -538,27 +506,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -577,7 +545,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -593,7 +560,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -678,7 +645,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -696,27 +663,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -734,7 +699,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -797,18 +762,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4
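
As an aside, the CPUID decoding this patch introduces in
core2_get_arch_pmc_count()/core2_get_fixed_pmc_count() can be sketched
out of tree. This is an illustrative fragment, not the Xen source: the
PMU_* masks are redefined locally for self-containment and the sample
CPUID values below are hypothetical.

```c
#include <stdint.h>

/*
 * CPUID leaf 0xa reports the number of general-purpose performance
 * counters in EAX[15:8] and the number of fixed-function counters in
 * EDX[4:0].  The mask/shift names mirror Xen's PMU_* constants but are
 * local to this sketch.
 */
#define PMU_GENERAL_NR_SHIFT 8
#define PMU_GENERAL_NR_MASK  (0xffu << PMU_GENERAL_NR_SHIFT)
#define PMU_FIXED_NR_SHIFT   0
#define PMU_FIXED_NR_MASK    (0x1fu << PMU_FIXED_NR_SHIFT)

/* Extract the general-purpose counter count from CPUID.0xa:EAX. */
unsigned int arch_pmc_count(uint32_t cpuid_0xa_eax)
{
    return (cpuid_0xa_eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
}

/* Extract the fixed-function counter count from CPUID.0xa:EDX. */
unsigned int fixed_pmc_count(uint32_t cpuid_0xa_edx)
{
    return (cpuid_0xa_edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT;
}
```

With a hypothetical EAX of 0x07300403 this yields 4 general counters,
and an EDX of 0x00000603 yields 3 fixed counters, matching the common
Core-era configuration the old static tables hard-coded.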


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn1-000082-Oq; Mon, 06 Jan 2014 19:24:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fms-0008VO-8z
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [193.109.254.147:60232] by server-3.bemta-14.messagelabs.com id
	AE/9B-11000-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389036271!9143131!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25438 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JORsc010015
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:28 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQEN002657
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOQrX015578; Mon, 6 Jan 2014 19:24:26 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:26 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:52 -0500
Message-Id: <1389036295-3877-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 13/16] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for privileged PMU mode, which allows the privileged domain (dom0)
to profile both itself (and the hypervisor) and the guests. While this mode is
on, profiling in the guests is disabled.
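
The gating this adds to vpmu_do_wrmsr()/vpmu_do_rdmsr() can be sketched
as below. This is an illustrative out-of-tree fragment, not the Xen
source: the flag values mirror xenpmu.h, and treating XENPMU_MODE_ON as
"everyone may access" is an assumption for the sketch (the real code
folds the OFF case in elsewhere).

```c
#include <stdbool.h>

/* Flag values mirroring xen/include/public/xenpmu.h. */
#define XENPMU_MODE_OFF  0
#define XENPMU_MODE_ON   (1 << 0)
#define XENPMU_MODE_PRIV (1 << 1)

/*
 * In privileged mode only the control domain (dom0) may touch the PMU
 * MSRs; in XENPMU_MODE_ON every domain may profile itself.
 */
bool vpmu_msr_access_allowed(unsigned int vpmu_mode, bool is_control_domain)
{
    if ( vpmu_mode & XENPMU_MODE_PRIV )
        return is_control_domain;
    return (vpmu_mode & XENPMU_MODE_ON) != 0;
}
```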

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 65 ++++++++++++++++++++++++++++++++-------------
 xen/arch/x86/traps.c        |  6 ++++-
 xen/include/public/xenpmu.h |  3 +++
 3 files changed, 55 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 666067d..0cbede4 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -86,6 +86,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
     {
         int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
@@ -111,6 +114,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
     {
         int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
@@ -133,7 +139,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
     {
         if ( smp_processor_id() >= dom0->max_vcpus )
             return 0;
@@ -141,7 +148,10 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     }
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
         void *p;
@@ -158,27 +168,45 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
         /* Store appropriate registers in xenpmu_data */
         p = &v->arch.vpmu.xenpmu_data->pmu.regs;
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs cmp;
-
-            gregs = guest_cpu_user_regs();
-            XLAT_cpu_user_regs(&cmp, gregs);
-            memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                /*
+                 * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+                 * and therefore we treat it the same way as a non-privileged
+                 * PV 32-bit domain.
+                 */
+                struct compat_cpu_user_regs cmp;
+
+                gregs = guest_cpu_user_regs();
+                XLAT_cpu_user_regs(&cmp, gregs);
+                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+            ((struct cpu_user_regs *)p)->cs =
+                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
         }
-        else if ( (current->domain != dom0) && !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+
             memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(p, regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -441,7 +469,8 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
 
         mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
-        if ( mode & ~XENPMU_MODE_ON )
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode &= ~XENPMU_MODE_MASK;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 8a3353f..3ff24e5 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2507,7 +2507,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index f4913c6..c02eb66 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -63,11 +63,14 @@ DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_FEATURE_SHIFT      16
 #define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fmp-0008U8-7J; Mon, 06 Jan 2014 19:24:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmo-0008Tf-15
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:30 +0000
Received: from [193.109.254.147:47139] by server-1.bemta-14.messagelabs.com id
	C4/69-15600-DE20BC25; Mon, 06 Jan 2014 19:24:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389036266!9142267!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19955 invoked from network); 6 Jan 2014 19:24:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOL8h009604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKFp002413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:21 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOKM8002371; Mon, 6 Jan 2014 19:24:20 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:19 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:44 -0500
Message-Id: <1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 05/16] x86/VPMU: Handle APIC_LVTPC accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update the APIC_LVTPC vector when an HVM guest writes to it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           | 16 +++++++++++++---
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 842bce7..1f7d6b7 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -290,8 +290,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( is_hvm_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -302,8 +300,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index bc06010..d954f4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 89212ec..7fd2420 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -532,19 +532,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -710,10 +697,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d6a9ff6..2c1201b 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
     case X86_VENDOR_AMD:
         if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     case X86_VENDOR_INTEL:
         if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
         opt_vpmu_enabled = 0;
-        break;
+        return;
     }
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
 }
 
 void vpmu_destroy(struct vcpu *v)
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fn0-00007C-2f; Mon, 06 Jan 2014 19:24:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmr-0008V4-9G
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:33 +0000
Received: from [85.158.143.35:48623] by server-3.bemta-4.messagelabs.com id
	01/83-32360-0F20BC25; Mon, 06 Jan 2014 19:24:32 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389036270!9816274!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30826 invoked from network); 6 Jan 2014 19:24:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:31 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOP0E009656
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:26 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOPbe004186
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOml004174; Mon, 6 Jan 2014 19:24:24 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:24 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:50 -0500
Message-Id: <1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU register
	handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  8 ++++++
 xen/arch/x86/traps.c              | 30 ++++++++++++++++++--
 xen/include/public/xenpmu.h       |  1 +
 5 files changed, 88 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index da8e522..25572d5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1972,8 +1972,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 25b2a96..b9b2ea9 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,6 +298,9 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
+
+    if ( !is_hvm_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -306,10 +310,14 @@ static int core2_vpmu_save(struct vcpu *v)
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
+    if ( !is_hvm_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !is_hvm_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -424,6 +439,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( is_hvm_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -450,7 +473,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -462,11 +485,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -482,7 +506,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -507,10 +531,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( is_hvm_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -527,7 +555,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct arch_cntr_pair *arch_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             arch_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -566,13 +597,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+       if ( is_hvm_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -596,7 +633,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 1a97b0e..2cc37cc 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -396,6 +396,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.val);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3f7a3c7..8a3353f 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -866,7 +867,6 @@ void pv_cpuid(struct cpu_user_regs *regs)
         break;
 
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
     unsupported:
         a = b = c = d = 0;
         break;
-
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
+        break;
     default:
         (void)cpuid_hypervisor_leaves(regs->eax, 0, &a, &b, &c, &d);
         break;
@@ -2499,6 +2501,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2587,6 +2597,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 37592f8..bad5211 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* ANSI-C does not support anonymous unions */
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FnD-0000GG-FD; Mon, 06 Jan 2014 19:24:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008Vf-3E
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [193.109.254.147:35469] by server-15.bemta-14.messagelabs.com
	id 44/5A-22186-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389036271!9149872!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25968 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOOa4009649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOrI004144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:24 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JON4Y002546; Mon, 6 Jan 2014 19:24:23 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:22 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:48 -0500
Message-Id: <1389036295-3877-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 09/16] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a runtime interface for setting the PMU mode and flags. Three main modes
are provided:
* PMU off
* PMU on: guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and the guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c        |  2 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  4 +-
 xen/arch/x86/hvm/vpmu.c            | 77 ++++++++++++++++++++++++++++++++++----
 xen/arch/x86/x86_64/compat/entry.S |  4 ++
 xen/arch/x86/x86_64/entry.S        |  4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  9 +----
 xen/include/public/xen.h           |  1 +
 xen/include/public/xenpmu.h        | 58 ++++++++++++++++++++++++++++
 xen/include/xen/hypercall.h        |  4 ++
 9 files changed, 145 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 78979aa..1369d74 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -471,7 +471,7 @@ int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index e756d87..5670924 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -706,7 +706,7 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -825,7 +825,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 99eb654..c83256f 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,7 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +53,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +61,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode |= XENPMU_MODE_ON;
         break;
     }
 }
@@ -234,19 +235,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 
@@ -270,3 +271,63 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xenpmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
+        if ( mode & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.version.maj = XENPMU_VER_MAJ;
+        pmu_params.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.val = (vpmu_mode & XENPMU_FEATURE_MASK) >> XENPMU_FEATURE_SHIFT;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 6d641ea..022a171 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/xenpmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
@@ -95,5 +88,7 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint32_t vpmu_mode;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index bcadd15..faf05fc 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -13,6 +13,64 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* ANSI-C does not support anonymous unions */
+#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
+#define __ANON_UNION_NAME(x) x
+#else
+#define __ANON_UNION_NAME(x)
+#endif
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xenpmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } __ANON_UNION_NAME(v);
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } __ANON_UNION_NAME(d);
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xenpmu_params xenpmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_FEATURE_SHIFT      16
+#define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xenpmu_data {
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..4b49084 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/xenpmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FnD-0000GG-FD; Mon, 06 Jan 2014 19:24:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmt-0008Vf-3E
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:35 +0000
Received: from [193.109.254.147:35469] by server-15.bemta-14.messagelabs.com
	id 44/5A-22186-1F20BC25; Mon, 06 Jan 2014 19:24:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389036271!9149872!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25968 invoked from network); 6 Jan 2014 19:24:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOOa4009649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:25 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOOrI004144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:24 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JON4Y002546; Mon, 6 Jan 2014 19:24:23 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:22 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:48 -0500
Message-Id: <1389036295-3877-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 09/16] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add runtime interface for setting PMU mode and flags. Three main modes are
provided:
* PMU off
* PMU on: Guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c        |  2 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  4 +-
 xen/arch/x86/hvm/vpmu.c            | 77 ++++++++++++++++++++++++++++++++++----
 xen/arch/x86/x86_64/compat/entry.S |  4 ++
 xen/arch/x86/x86_64/entry.S        |  4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  9 +----
 xen/include/public/xen.h           |  1 +
 xen/include/public/xenpmu.h        | 58 ++++++++++++++++++++++++++++
 xen/include/xen/hypercall.h        |  4 ++
 9 files changed, 145 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 78979aa..1369d74 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -471,7 +471,7 @@ int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index e756d87..5670924 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -706,7 +706,7 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -825,7 +825,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 99eb654..c83256f 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,7 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +53,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +61,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode |= XENPMU_MODE_ON;
         break;
     }
 }
@@ -234,19 +235,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 
@@ -270,3 +271,63 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xenpmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
+        if ( mode & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.version.maj = XENPMU_VER_MAJ;
+        pmu_params.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.val = (vpmu_mode & XENPMU_FEATURE_MASK) >> XENPMU_FEATURE_SHIFT;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 6d641ea..022a171 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/xenpmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
@@ -95,5 +88,7 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint32_t vpmu_mode;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index bcadd15..faf05fc 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -13,6 +13,64 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* ANSI-C does not support anonymous unions */
+#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
+#define __ANON_UNION_NAME(x) x
+#else
+#define __ANON_UNION_NAME(x)
+#endif
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xenpmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } __ANON_UNION_NAME(v);
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } __ANON_UNION_NAME(d);
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xenpmu_params xenpmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xenpmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_FEATURE_SHIFT      16
+#define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xenpmu_data {
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..4b49084 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/xenpmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FnQ-0000Mj-8I; Mon, 06 Jan 2014 19:25:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmv-00004r-9G
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:37 +0000
Received: from [85.158.143.35:48756] by server-1.bemta-4.messagelabs.com id
	E1/33-02132-4F20BC25; Mon, 06 Jan 2014 19:24:36 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389036274!9971613!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29589 invoked from network); 6 Jan 2014 19:24:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOR9G009677
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:28 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JORI0004247
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JORpM015608; Mon, 6 Jan 2014 19:24:27 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:26 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:53 -0500
Message-Id: <1389036295-3877-15-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 14/16] x86/VPMU: Save VPMU state for PV
	guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Save VPMU state during context switch for both HVM and PV guests unless we
are in PMU privileged mode (i.e. dom0 is doing all profiling).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 25572d5..daf0313 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1444,17 +1444,15 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     }
 
     if (prev != next)
-        update_runstate_area(prev);
-
-    if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        update_runstate_area(prev);
+        if ( !(vpmu_mode & XENPMU_MODE_PRIV) || prev->domain != dom0 )
             vpmu_save(prev);
-
-        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
-            pt_save_timer(prev);
     }
 
+    if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+        pt_save_timer(prev);
+
     local_irq_disable();
 
     set_current(next);
@@ -1491,7 +1489,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( prev != next && !(vpmu_mode & XENPMU_MODE_PRIV) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FnQ-0000Mj-8I; Mon, 06 Jan 2014 19:25:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmv-00004r-9G
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:37 +0000
Received: from [85.158.143.35:48756] by server-1.bemta-4.messagelabs.com id
	E1/33-02132-4F20BC25; Mon, 06 Jan 2014 19:24:36 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389036274!9971613!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29589 invoked from network); 6 Jan 2014 19:24:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOR9G009677
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:28 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JORI0004247
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:27 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JORpM015608; Mon, 6 Jan 2014 19:24:27 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:26 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:53 -0500
Message-Id: <1389036295-3877-15-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 14/16] x86/VPMU: Save VPMU state for PV
	guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Save VPMU state during context switch for both HVM and PV guests unless we
are in PMU privileged mode (i.e. dom0 is doing all profiling).
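The save condition introduced by this patch can be modelled in isolation. The sketch below is not Xen code: `XENPMU_MODE_PRIV` is redefined locally and dom0 is represented by a plain domain id, purely to illustrate when `vpmu_save()` would fire on a context switch.

```c
#include <stdbool.h>

/* Stand-in for Xen's XENPMU_MODE_PRIV; the real value lives in
 * public/xenpmu.h and may differ. */
#define XENPMU_MODE_PRIV (1u << 1)

/* Model of the patched context_switch() logic: VPMU state is saved
 * whenever a real switch (prev != next) occurs, unless the hypervisor
 * is in privileged PMU mode AND the outgoing vcpu belongs to dom0
 * (domid 0 here), since dom0 then owns all the counters. */
static bool should_save_vpmu(bool real_switch, unsigned int vpmu_mode,
                             int prev_domid)
{
    if ( !real_switch )
        return false;
    return !(vpmu_mode & XENPMU_MODE_PRIV) || prev_domid != 0;
}
```

The same condition, inverted, gates `vpmu_load(next)` later in the function, so in privileged mode the counters are simply left alone across guest switches.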

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 25572d5..daf0313 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1444,17 +1444,15 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     }
 
     if (prev != next)
-        update_runstate_area(prev);
-
-    if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        update_runstate_area(prev);
+        if ( !(vpmu_mode & XENPMU_MODE_PRIV) || prev->domain != dom0 )
             vpmu_save(prev);
-
-        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
-            pt_save_timer(prev);
     }
 
+    if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+        pt_save_timer(prev);
+
     local_irq_disable();
 
     set_current(next);
@@ -1491,7 +1489,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( prev != next && !(vpmu_mode & XENPMU_MODE_PRIV) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0FnU-0000Ok-2T; Mon, 06 Jan 2014 19:25:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmv-000059-Vd
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:38 +0000
Received: from [85.158.143.35:43992] by server-2.bemta-4.messagelabs.com id
	3A/E8-11386-4F20BC25; Mon, 06 Jan 2014 19:24:36 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389036274!2837821!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11381 invoked from network); 6 Jan 2014 19:24:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:36 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOTL1009695
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:29 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOSsq002768
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:29 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s06JOSxZ026404; Mon, 6 Jan 2014 19:24:28 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:27 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:54 -0500
Message-Id: <1389036295-3877-16-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 15/16] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for using NMIs as PMU interrupts.
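Besides the NMI plumbing itself, this patch reworks `parse_vpmu_param()` into a comma-separated option loop. The following is a hedged, self-contained sketch of that loop, not the Xen function: the flag constants are stand-ins, and Xen's `parse_bool()` (which accepts "on"/"off"/"1"/"0" etc.) is simplified to a literal "off" comparison.

```c
#include <string.h>

/* Stand-in flag values; Xen's real XENPMU_* constants differ. */
enum { MODE_OFF = 0, MODE_ON = 1, MODE_PRIV = 2, FEAT_BTS = 4, USE_NMI = 8 };

/* Sketch of the patch's strchr()-based loop: split the "vpmu=" string
 * on commas and fold each token into a mode word.  An unrecognised
 * token disables the vPMU outright, mirroring the patch's behaviour.
 * The input buffer is modified in place, as in the original. */
static unsigned int parse_vpmu_opts(char *s)
{
    unsigned int mode = MODE_ON;
    char *ss;

    if ( *s == '\0' )
        return mode;

    do {
        ss = strchr(s, ',');
        if ( ss )
            *ss = '\0';

        if ( !strcmp(s, "off") )        /* Xen uses parse_bool() here */
            return MODE_OFF;
        else if ( !strcmp(s, "nmi") )
            mode |= USE_NMI;
        else if ( !strcmp(s, "bts") )
            mode |= FEAT_BTS;
        else if ( !strcmp(s, "priv") )
            mode = (mode & ~MODE_ON) | MODE_PRIV;
        else
            return MODE_OFF;            /* unknown flag: vpmu disabled */

        s = ss + 1;
    } while ( ss );

    return mode;
}
```

With "nmi" selected, the patch cannot call `hvm_get_segment_register()` or `send_guest_vcpu_virq()` from NMI context, so it records the sampled vcpu per-CPU and defers that work to a `PMU_SOFTIRQ` handler instead.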

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c | 119 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 99 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 0cbede4..ff05ff5 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -35,6 +35,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <asm/nmi.h>
 #include <public/xenpmu.h>
 
 /*
@@ -47,33 +48,57 @@ static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
 static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
 
 static void __init parse_vpmu_param(char *s)
 {
-    switch ( parse_bool(s) )
-    {
-    case 0:
-        break;
-    default:
-        if ( !strcmp(s, "bts") )
-            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-        else if ( *s )
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if (*s == '\0')
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch  (parse_bool(s) )
         {
-            printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
             break;
         }
-        /* fall through */
-    case 1:
-        vpmu_mode |= XENPMU_MODE_ON;
-        break;
-    }
+
+        s = ss + 1;
+    } while ( ss );
 }
 
 void vpmu_lvtpc_update(uint32_t val)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
     if (is_hvm_domain(current->domain) ||
@@ -202,10 +227,14 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             struct segment_register cs;
 
             gregs = guest_cpu_user_regs();
-            hvm_get_segment_register(current, x86_seg_cs, &cs);
 
             memcpy(p, gregs, sizeof(struct cpu_user_regs));
-            ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( vpmu_apic_vector != APIC_DM_NMI )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
+            }
         }
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
@@ -216,7 +245,13 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
-        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+        if ( vpmu_apic_vector == APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
 
         return 1;
     }
@@ -288,7 +323,7 @@ void vpmu_save(struct vcpu *v)
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
-    apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
 }
 
 void vpmu_load(struct vcpu *v)
@@ -379,7 +414,7 @@ void vpmu_initialise(struct vcpu *v)
         return;
     }
 
-    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | APIC_LVT_MASKED;
 }
 
 void vpmu_destroy(struct vcpu *v)
@@ -405,10 +440,40 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( vpmu_mode & XENPMU_MODE_PRIV ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id()];
+    else
+        v = sampled;
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
+    if ( is_hvm_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
 static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
 {
     struct vcpu *v;
     uint64_t mfn = params->val;
+    static int pvpmu_initted = 0;
 
     if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
         return -EINVAL;
@@ -417,6 +482,20 @@ static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
         !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
         return -EINVAL;
 
+    if ( !pvpmu_initted )
+    {
+        if (reserve_lapic_nmi() == 0)
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
     v = d->vcpu[params->vcpu];
     v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
     memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fnr-0000WT-Eb; Mon, 06 Jan 2014 19:25:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmx-00005j-74
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:42 +0000
Received: from [193.109.254.147:47550] by server-2.bemta-14.messagelabs.com id
	18/CF-00361-6F20BC25; Mon, 06 Jan 2014 19:24:38 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389036275!9105543!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22245 invoked from network); 6 Jan 2014 19:24:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:36 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOUax009737
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:30 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOTap026434
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:30 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOTQH015674; Mon, 6 Jan 2014 19:24:29 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:28 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:55 -0500
Message-Id: <1389036295-3877-17-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 16/16] x86/VPMU: Move VPMU files up from hvm/
	directory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the PMU is no longer HVM-specific, we can move the VPMU-related files up
out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 499 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 936 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 621 ----------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 621 ++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 499 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 936 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  95 ----
 xen/include/asm-x86/vpmu.h            |  95 ++++
 16 files changed, 2156 insertions(+), 2158 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index f35aee2..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,499 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/xenpmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-	uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
-    unsigned int i;
-
-    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        vpmu_set(vpmu, VPMU_FROZEN);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-            return 0;
-
-    context_save(v);
-
-    if ( is_hvm_domain(v->domain) && 
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( is_hvm_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct amd_vpmu_context *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-         switch ( family )
-	 {
-	 case 0x15:
-	     num_counters = F15H_NUM_COUNTERS;
-	     counters = AMD_F15H_COUNTERS;
-	     ctrls = AMD_F15H_CTRLS;
-	     k7_counters_mirrored = 1;
-	     break;
-	 case 0x10:
-	 case 0x12:
-	 case 0x14:
-	 case 0x16:
-	 default:
-	     num_counters = F10H_NUM_COUNTERS;
-	     counters = AMD_F10H_COUNTERS;
-	     ctrls = AMD_F10H_CTRLS;
-	     k7_counters_mirrored = 0;
-	     break;
-	 }
-    }
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->vcpu_id, v->domain->domain_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
-
-    ctxt->counters = sizeof(struct amd_vpmu_context);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d has not "
-           "been supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index b9b2ea9..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,936 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/xenpmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to workaround an issue on various family 6 cpus.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a pmc reaches the value 0, this
- * value remains forever and it triggers immediately a new interrupt after
- * finishing the handler.
- * A workaround is to read all flagged counters and if the value is 0 write
- * 1 (or another value != 0) into it.
- * There exist no errata and the real cause of this behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;    
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: Bit width of fixed-function performance counters  */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
-        (msr_index == MSR_IA32_DS_AREA) ||
-        (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct arch_cntr_pair *arch_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
-
-    if ( !is_hvm_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct arch_cntr_pair *arch_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( !is_hvm_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->vcpu_id, v->domain->domain_id);
-
-    return 0;
-}
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Do the lazy load staff. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( is_hvm_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
-                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( is_hvm_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct arch_cntr_pair *arch_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( is_hvm_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            arch_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (arch_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if  ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        }
-
-        if (inject_gp) 
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( is_hvm_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( is_hvm_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
-    u64 val;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct arch_cntr_pair *arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-         return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( !is_hvm_domain(v->domain) )
-        if ( !core2_vpmu_alloc_resource(v) )
-            return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * As the vpmu is not enabled in this case, reset some bits in the
-     * architectural performance monitoring related part.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it's a vpmu msr, set it to 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used in case vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d has not "
-           "been supported\n", family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index ff05ff5..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,621 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <public/xenpmu.h>
-
-/*
- * "vpmu" :     vpmu generally enabled
- * "vpmu=off" : vpmu generally disabled
- * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
- */
-uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if (*s == '\0')
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch  (parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_apic_vector = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if (is_hvm_domain(current->domain) ||
-        !(current->arch.vpmu.xenpmu_data &&
-          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !is_hvm_domain(current->domain) &&
-             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return val;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !is_hvm_domain(current->domain) &&
-             current->arch.vpmu.xenpmu_data->pmu_flags)
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return val;
-    }
-    return 0;
-}
-
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-    {
-        if ( smp_processor_id() >= dom0->max_vcpus )
-            return 0;
-        v = dom0->vcpu[smp_processor_id()];
-    }
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV guest or dom0 is doing system profiling */
-        void *p;
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if (v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED)
-            return 1;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        /* Store appropriate registers in xenpmu_data */
-        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
-        if ( !is_hvm_domain(current->domain) )
-        {
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                /*
-                 * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-                 * and therefore we treat it the same way as a non-privileged
-                 * PV 32-bit domain.
-                 */
-                struct compat_cpu_user_regs cmp;
-
-                gregs = guest_cpu_user_regs();
-                XLAT_cpu_user_regs(&cmp, gregs);
-                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(p, gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(p, regs, sizeof(struct cpu_user_regs));
-
-            ((struct cpu_user_regs *)p)->cs =
-                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-        }
-        else 
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-
-            memcpy(p, gregs, sizeof(struct cpu_user_regs));
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( vpmu_apic_vector != APIC_DM_NMI )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_apic_vector == APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-    else if ( vpmu->arch_vpmu_ops )
-    {
-        struct vlapic *vlapic = vcpu_vlapic(v);
-        u32 vlapic_lvtpc;
-        unsigned char int_vec;
-
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( !is_vlapic_lvtpc_enabled(vlapic) )
-            return 1;
-
-        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-        else
-            v->nmi_pending = 1;
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VCPU's context before
-         * starting to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* 
-     * Only when PMU is counting and is not cached (for PV guests) do
-     * we load PMU context immediately.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-         (!is_hvm_domain(v->domain) &&
-          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | APIC_LVT_MASKED;
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu information on the console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( vpmu_mode & XENPMU_MODE_PRIV ||
-         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
-        v = dom0->vcpu[smp_processor_id()];
-    else
-        v = sampled;
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
-    if ( is_hvm_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn = params->val;
-    static int pvpmu_initted = 0;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    if ( !mfn_valid(mfn) ||
-        !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
-        return -EINVAL;
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
-    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page_and_type(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xenpmu_params_t pmu_params;
-    uint32_t mode;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
-        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_MODE_MASK;
-        vpmu_mode |= mode;
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
-        pmu_params.version.maj = XENPMU_VER_MAJ;
-        pmu_params.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_FEATURE_MASK;
-        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.val = vpmu_mode & XENPMU_FEATURE_MASK;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        vpmu_lvtpc_update((uint32_t)pmu_params.val);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_load(current);
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3ff24e5..ba062f5 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..c206f7d
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,621 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/xenpmu.h>
+
+/*
+ * "vpmu" :     vpmu generally enabled
+ * "vpmu=off" : vpmu generally disabled
+ * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
+ */
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if (*s == '\0')
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch  (parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( is_hvm_domain(current->domain) ||
+        !(current->arch.vpmu.xenpmu_data &&
+          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
+    return 0;
+}
+
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+    {
+        if ( smp_processor_id() >= dom0->max_vcpus )
+            return 0;
+        v = dom0->vcpu[smp_processor_id()];
+    }
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        void *p;
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
+        if ( !is_hvm_domain(current->domain) )
+        {
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                /*
+                 * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+                 * and therefore we treat it the same way as a non-privileged
+                 * PV 32-bit domain.
+                 */
+                struct compat_cpu_user_regs cmp;
+
+                gregs = guest_cpu_user_regs();
+                XLAT_cpu_user_regs(&cmp, gregs);
+                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+            ((struct cpu_user_regs *)p)->cs =
+                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+
+            memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( vpmu_apic_vector != APIC_DM_NMI )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_apic_vector == APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
+    {
+        struct vlapic *vlapic = vcpu_vlapic(v);
+        u32 vlapic_lvtpc;
+        unsigned char int_vec;
+
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( !is_vlapic_lvtpc_enabled(vlapic) )
+            return 1;
+
+        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+        else
+            v->nmi_pending = 1;
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from last pcpu that we ran on. Note that if another
+         * VCPU is running there it must have saved this VCPU's context before
+         * starting to run (see below).
+         * There should be no race since remote pcpu will disable interrupts
+         * before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+    } 
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_save_force(prev);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* 
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!is_hvm_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | APIC_LVT_MASKED;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        break;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        break;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        break;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information on console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( vpmu_mode & XENPMU_MODE_PRIV ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id()];
+    else
+        v = sampled;
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
+    if ( is_hvm_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn = params->val;
+    static int pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( !mfn_valid(mfn) ||
+         !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
+        return -EINVAL;
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
+    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xenpmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.version.maj = XENPMU_VER_MAJ;
+        pmu_params.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.val);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
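A note on the `XENPMU_mode_set` handler above: it rejects any request where both `XENPMU_MODE_ON` and `XENPMU_MODE_PRIV` are set, or where bits outside those two are present in the mode field. A stand-alone sketch of that validation (the constants here are illustrative stand-ins, not the real values from `public/xenpmu.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins -- not the real values from public/xenpmu.h. */
#define XENPMU_MODE_ON   (1u << 0)
#define XENPMU_MODE_PRIV (1u << 1)
#define XENPMU_MODE_MASK 0xffu

/* Mirrors the XENPMU_mode_set checks: the requested mode may be ON,
 * PRIV, or neither -- never both, and no other mode bits may be set. */
static int mode_valid(uint32_t val)
{
    uint32_t mode = val & XENPMU_MODE_MASK;

    if ( mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV) )
        return 0;
    if ( (mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV) )
        return 0;
    return 1;
}
```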
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..bdc1d00
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,499 @@
+/*
+ * vpmu_amd.c: AMD PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/xenpmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctx = vpmu->context;
+    unsigned int i;
+
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        vpmu_set(vpmu, VPMU_FROZEN);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
+
+    context_save(v);
+
+    if ( is_hvm_domain(v->domain) &&
+         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest only mode for HVM guest */
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        !(is_guest_mode(msr_content)) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* stop saving & restore if guest stops first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct amd_vpmu_context *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
+
+    ctxt->counters = sizeof(struct amd_vpmu_context);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+         printk("\n");
+         return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d is not supported\n",
+           family);
+    return -EINVAL;
+}
+
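The `get_pmu_reg_type()` helper above relies on the family 15h PMU MSRs interleaving as EVNTSELn (even address) / PERFCTRn (odd address), which is why it can test `addr & 1`. A quick sketch of that layout, assuming the customary `0xc0010200` base for `MSR_AMD_FAM15H_EVNTSEL0`:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed base address of MSR_AMD_FAM15H_EVNTSEL0. */
#define FAM15H_EVNTSEL0 0xc0010200u

/* Family 15h PMU MSR pairs interleave: EVNTSELn at even addresses,
 * PERFCTRn at odd addresses immediately following. */
static uint32_t fam15h_evntsel(unsigned int n)
{
    return FAM15H_EVNTSEL0 + 2 * n;
}

static uint32_t fam15h_perfctr(unsigned int n)
{
    return FAM15H_EVNTSEL0 + 2 * n + 1;
}
```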
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..162742a
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,936 @@
+/*
+ * vpmu_intel.c: CORE 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/xenpmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs which can lead
+ * to endless PMC interrupt loops on the processor.
+ * If a PMC reaches the value 0 while the interrupt handler is running,
+ * it stays at 0 and immediately triggers a new interrupt once the
+ * handler finishes.
+ * The workaround is to read all flagged counters and, if a counter's
+ * value is 0, write 1 (or any other non-zero value) into it.
+ * No erratum exists for this and the real cause of the behaviour is
+ * unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    is_pmc_quirk = (current_cpu_data.x86 == 6);
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Number of general-purpose performance counters: CPUID.0xa:EAX[15:8]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Number of fixed performance counters: CPUID.0xa:EDX[4:0]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* Bit width of fixed-function performance counters: CPUID.0xa:EDX[12:5] */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow direct read/write access to the PMU counter MSRs. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow direct read access to the PMU non-global control MSRs. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
+
+    if ( !is_hvm_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && is_hvm_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !is_hvm_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownership(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Perform the lazy context load. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( is_hvm_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
+                 "MSR_CORE_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( is_hvm_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= 4;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct arch_cntr_pair *arch_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            arch_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (arch_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per counter; bits above fixed_pmc_cnt counters are reserved. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+       if ( is_hvm_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Advertise the 'Debug Store' feature: CPUID.1:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct core2_vpmu_context *core2_vpmu_cxt;
+    u64 val;
+    uint64_t *fixed_counters;
+    struct arch_cntr_pair *arch_cntr_pair;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+         return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    core2_vpmu_cxt = vpmu->context;
+    /* Compute the register pointers only once the context is known valid. */
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of each counter and its configuration MSR. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        /* Ack CondChgd/OvfBuffer (bits 63,62), fixed (32-34) and all general counters. */
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
+        goto func_out;
+    /* Check the 'Debug Store' feature: CPUID.1:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( !is_hvm_domain(v->domain) )
+        if ( !core2_vpmu_alloc_resource(v) )
+            return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownership(PMU_OWNER_HVM);
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * The vPMU is not enabled in this case, so clear the relevant bits
+     * of the architectural performance monitoring leaf.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it is a vPMU MSR, report its content as 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used in case vpmu is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v, vpmu_flags);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not "
+           "supported\n", family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 4b0ae38..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/xenpmu.h>
-
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xenpmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint32_t vpmu_mode;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..4b0ae38
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,95 @@
+/*
+ * vpmu.h: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_HVM_VPMU_H_
+#define __ASM_X86_HVM_VPMU_H_
+
+#include <public/xenpmu.h>
+
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
+int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xenpmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint32_t vpmu_mode;
+
+#endif /* __ASM_X86_HVM_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Fnr-0000WT-Eb; Mon, 06 Jan 2014 19:25:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Fmx-00005j-74
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:24:42 +0000
Received: from [193.109.254.147:47550] by server-2.bemta-14.messagelabs.com id
	18/CF-00361-6F20BC25; Mon, 06 Jan 2014 19:24:38 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389036275!9105543!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22245 invoked from network); 6 Jan 2014 19:24:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:24:36 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06JOUax009737
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 19:24:30 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06JOTap026434
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 19:24:30 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06JOTQH015674; Mon, 6 Jan 2014 19:24:29 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 11:24:28 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Mon,  6 Jan 2014 14:24:55 -0500
Message-Id: <1389036295-3877-17-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v3 16/16] x86/VPMU: Move VPMU files up from hvm/
	directory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the PMU is no longer HVM-specific, we can move the VPMU-related files up
out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 499 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 936 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 621 ----------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 621 ++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 499 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 936 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  95 ----
 xen/include/asm-x86/vpmu.h            |  95 ++++
 16 files changed, 2156 insertions(+), 2158 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index f35aee2..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,499 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/xenpmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-	uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
-    unsigned int i;
-
-    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        vpmu_set(vpmu, VPMU_FROZEN);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-            return 0;
-
-    context_save(v);
-
-    if ( is_hvm_domain(v->domain) && 
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( is_hvm_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct amd_vpmu_context *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-         switch ( family )
-	 {
-	 case 0x15:
-	     num_counters = F15H_NUM_COUNTERS;
-	     counters = AMD_F15H_COUNTERS;
-	     ctrls = AMD_F15H_CTRLS;
-	     k7_counters_mirrored = 1;
-	     break;
-	 case 0x10:
-	 case 0x12:
-	 case 0x14:
-	 case 0x16:
-	 default:
-	     num_counters = F10H_NUM_COUNTERS;
-	     counters = AMD_F10H_COUNTERS;
-	     ctrls = AMD_F10H_CTRLS;
-	     k7_counters_mirrored = 0;
-	     break;
-	 }
-    }
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->vcpu_id, v->domain->domain_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
-
-    ctxt->counters = sizeof(struct amd_vpmu_context);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d has not "
-           "been supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index b9b2ea9..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,936 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/xenpmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to workaround an issue on various family 6 cpus.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a pmc reaches the value 0, this
- * value remains forever and it triggers immediately a new interrupt after
- * finishing the handler.
- * A workaround is to read all flagged counters and if the value is 0 write
- * 1 (or another value != 0) into it.
- * There exist no errata and the real cause of this behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;    
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: Bit width of fixed-function performance counters  */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
-         (msr_index == MSR_IA32_DS_AREA) ||
-         (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                    msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct arch_cntr_pair *arch_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
-
-    if ( !is_hvm_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct arch_cntr_pair *arch_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( !is_hvm_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->domain->domain_id, v->vcpu_id);
-
-    return 0;
-}
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Do the lazy load stuff. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( is_hvm_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Cannot write read-only MSR "
-                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( is_hvm_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct arch_cntr_pair *arch_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( is_hvm_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            arch_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (arch_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits of control per fixed counter in MSR_CORE_PERF_FIXED_CTR_CTRL. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        }
-
-        if ( inject_gp )
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( is_hvm_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( is_hvm_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
-    u64 val;
-    uint64_t *fixed_counters;
-    struct arch_cntr_pair *arch_cntr_pair;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( !is_hvm_domain(v->domain) )
-        if ( !core2_vpmu_alloc_resource(v) )
-            return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( is_hvm_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * As the vpmu is not enabled in this case, reset some bits in the
-     * architectural performance monitoring related part.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it is a vpmu MSR, report its content as 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used when the vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d is not supported\n",
-           family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index ff05ff5..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,621 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <public/xenpmu.h>
-
-/*
- * "vpmu" :      vpmu generally enabled
- * "vpmu=off" :  vpmu generally disabled
- * "vpmu=bts" :  vpmu enabled and Intel BTS feature switched on
- * "vpmu=nmi" :  vpmu enabled, PMU interrupt delivered as NMI
- * "vpmu=priv" : vpmu enabled, with all PMU events handled by dom0
- */
-uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if (*s == '\0')
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch ( parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_apic_vector = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if (is_hvm_domain(current->domain) ||
-        !(current->arch.vpmu.xenpmu_data &&
-          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !is_hvm_domain(current->domain) &&
-             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return val;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !is_hvm_domain(current->domain) &&
-             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return val;
-    }
-    return 0;
-}
-
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-    {
-        if ( smp_processor_id() >= dom0->max_vcpus )
-            return 0;
-        v = dom0->vcpu[smp_processor_id()];
-    }
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV guest or dom0 is doing system profiling */
-        void *p;
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if (v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED)
-            return 1;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        /* Store appropriate registers in xenpmu_data */
-        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
-        if ( !is_hvm_domain(current->domain) )
-        {
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                /*
-                 * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-                 * and therefore we treat it the same way as a non-priviledged
-                 * PV 32-bit domain.
-                 */
-                struct compat_cpu_user_regs cmp;
-
-                gregs = guest_cpu_user_regs();
-                XLAT_cpu_user_regs(&cmp, gregs);
-                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(p, gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(p, regs, sizeof(struct cpu_user_regs));
-
-            ((struct cpu_user_regs *)p)->cs =
-                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-        }
-        else 
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-
-            memcpy(p, gregs, sizeof(struct cpu_user_regs));
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( vpmu_apic_vector != APIC_DM_NMI )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_apic_vector == APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-    else if ( vpmu->arch_vpmu_ops )
-    {
-        struct vlapic *vlapic = vcpu_vlapic(v);
-        u32 vlapic_lvtpc;
-        unsigned char int_vec;
-
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( !is_vlapic_lvtpc_enabled(vlapic) )
-            return 1;
-
-        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-        else
-            v->nmi_pending = 1;
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* 
-     * Only when PMU is counting and is not cached (for PV guests) do
-     * we load PMU context immediately.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-         (!is_hvm_domain(v->domain) &&
-          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | APIC_LVT_MASKED;
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( vpmu_mode & XENPMU_MODE_PRIV ||
-         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
-        v = dom0->vcpu[smp_processor_id()];
-    else
-        v = sampled;
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
-    if ( is_hvm_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn = params->val;
-    static int pvpmu_initted = 0;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    if ( !mfn_valid(mfn) ||
-        !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
-        return -EINVAL;
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
-    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page_and_type(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xenpmu_params_t pmu_params;
-    uint32_t mode;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
-        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_MODE_MASK;
-        vpmu_mode |= mode;
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
-        pmu_params.version.maj = XENPMU_VER_MAJ;
-        pmu_params.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_FEATURE_MASK;
-        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.val = vpmu_mode & XENPMU_FEATURE_MASK;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        vpmu_lvtpc_update((uint32_t)pmu_params.val);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_load(current);
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3ff24e5..ba062f5 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..c206f7d
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,621 @@
+/*
+ * vpmu.c: PMU virtualization for HVM and PV domains.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/xenpmu.h>
+
+/*
+ * "vpmu" :      vpmu generally enabled
+ * "vpmu=off" :  vpmu generally disabled
+ * "vpmu=bts" :  vpmu enabled and Intel BTS feature switched on.
+ * "vpmu=nmi" :  vpmu enabled, PMU interrupt delivered as an NMI.
+ * "vpmu=priv" : vpmu enabled in privileged mode (dom0 profiles the system).
+ */
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( is_hvm_domain(current->domain) ||
+        !(current->arch.vpmu.xenpmu_data &&
+          current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return val;
+    }
+    return 0;
+}
+
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+    {
+        if ( smp_processor_id() >= dom0->max_vcpus )
+            return 0;
+        v = dom0->vcpu[smp_processor_id()];
+    }
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        void *p;
+        struct cpu_user_regs *gregs;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        p = &v->arch.vpmu.xenpmu_data->pmu.regs;
+        if ( !is_hvm_domain(current->domain) )
+        {
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                /*
+                 * 32-bit dom0 cannot process Xen's addresses (which are 64-bit)
+                 * and therefore we treat it the same way as a non-privileged
+                 * PV 32-bit domain.
+                 */
+                struct compat_cpu_user_regs cmp;
+
+                gregs = guest_cpu_user_regs();
+                XLAT_cpu_user_regs(&cmp, gregs);
+                memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(p, regs, sizeof(struct cpu_user_regs));
+
+            ((struct cpu_user_regs *)p)->cs =
+                    (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+
+            memcpy(p, gregs, sizeof(struct cpu_user_regs));
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( vpmu_apic_vector != APIC_DM_NMI )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_apic_vector == APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
+    {
+        struct vlapic *vlapic = vcpu_vlapic(v);
+        u32 vlapic_lvtpc;
+        unsigned char int_vec;
+
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( !is_vlapic_lvtpc_enabled(vlapic) )
+            return 1;
+
+        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+        else
+            v->nmi_pending = 1;
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from the last pcpu that we ran on. Note that if
+         * another VCPU is running there it must have saved this VCPU's
+         * context before starting to run (see below).
+         * There should be no race since the remote pcpu will disable
+         * interrupts before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+    }
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_save_force(prev);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /*
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!is_hvm_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        return;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information to the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id()];
+    else
+        v = sampled;
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
+    if ( is_hvm_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+static int pvpmu_init(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn = params->val;
+    static int pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( !mfn_valid(mfn) ||
+         !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
+        return -EINVAL;
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = map_domain_page_global(mfn);
+    memset(v->arch.vpmu.xenpmu_data, 0, PAGE_SIZE);
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xenpmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xenpmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xenpmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.val & XENPMU_MODE_MASK;
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.version.maj = XENPMU_VER_MAJ;
+        pmu_params.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.val);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.lapic_lvtpc);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..bdc1d00
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,499 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/xenpmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) ((msr) |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctx = vpmu->context;
+    unsigned int i;
+
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        vpmu_set(vpmu, VPMU_FROZEN);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
+
+    context_save(v);
+
+    if ( is_hvm_domain(v->domain) &&
+         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+         (msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest-only mode for HVM guests. */
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        !(is_guest_mode(msr_content)) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* Check if a counter is being enabled while the vPMU is not yet running. */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* Stop saving & restoring if the guest disables the running counter. */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct amd_vpmu_context *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU; "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.amd;
+
+    ctxt->counters = sizeof(struct amd_vpmu_context);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct amd_vpmu_context *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+         printk("\n");
+         return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d is not supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..162742a
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,936 @@
+/*
+ * vpmu_intel.c: Core 2 and later Intel CPU PMU virtualization for HVM domains.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/xenpmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue seen on various family 6 CPUs which leads to
+ * endless PMC interrupt loops: if the interrupt handler is running and a PMC
+ * reaches the value 0, that value sticks and a new interrupt triggers
+ * immediately once the handler finishes.  The workaround is to read all
+ * flagged counters and, for any that read 0, write a non-zero value (e.g. 1)
+ * back.  No erratum has been published and the real cause of this behaviour
+ * is unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    is_pmc_quirk = (current_cpu_data.x86 == 6);
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Read the number of general counters from CPUID leaf 0xa, EAX bits 15:8.
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters from CPUID leaf 0xa, EDX bits 4:0.
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* EDX bits 12:5: bit width of fixed-function performance counters. */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                    msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, arch_cntr_pair[i].counter);
+
+    if ( !is_hvm_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
+         is_hvm_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct arch_cntr_pair *arch_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, arch_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, arch_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !is_hvm_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct arch_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct core2_vpmu_context);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Do the lazy load stuff. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( is_hvm_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
+                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( is_hvm_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= 4;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct arch_cntr_pair *arch_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            arch_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (arch_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per counter, currently 3 fixed counters implemented. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+       if ( is_hvm_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( is_hvm_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    struct core2_vpmu_context *core2_vpmu_cxt;
+    u64 val;
+    uint64_t *fixed_counters;
+    struct arch_cntr_pair *arch_cntr_pair;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    core2_vpmu_cxt = vpmu->context;
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    arch_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, arch_cntr_pair[i].counter, arch_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( !is_hvm_domain(v->domain) )
+        if ( !core2_vpmu_alloc_resource(v) )
+            return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownship(PMU_OWNER_HVM);
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * As the vPMU is not enabled in this case, clear the architectural
+     * performance monitoring related bits.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it is a vPMU MSR, set it to 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used in case vpmu is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v, vpmu_flags);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not supported\n",
+           family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 4b0ae38..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,95 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/xenpmu.h>
-
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xenpmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint32_t vpmu_mode;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..4b0ae38
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,95 @@
+/*
+ * vpmu.h: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_HVM_VPMU_H_
+#define __ASM_X86_HVM_VPMU_H_
+
+#include <public/xenpmu.h>
+
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
+int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xenpmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint32_t vpmu_mode;
+
+#endif /* __ASM_X86_HVM_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:46:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0G7V-0003kb-1J; Mon, 06 Jan 2014 19:45:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0G7S-0003kJ-RC
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:45:51 +0000
Received: from [85.158.139.211:13940] by server-2.bemta-5.messagelabs.com id
	D8/8F-29392-EE70BC25; Mon, 06 Jan 2014 19:45:50 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389037546!8122706!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24638 invoked from network); 6 Jan 2014 19:45:48 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:45:48 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 19:45:45 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="641867678"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.86])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 19:45:44 +0000
Message-ID: <52CB07E7.7000406@terremark.com>
Date: Mon, 06 Jan 2014 14:45:43 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Dave Anderson <anderson@redhat.com>, 
	"Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
In-Reply-To: <1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, kexec@lists.infradead.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Crash-utility] [PATCH 0/4] Enable use of crash on
 xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 12:05, Dave Anderson wrote:
>
> ----- Original Message -----
>> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
>> exists.  This prevents crash from using a xen 4.4.0 vmcore.
>>
>> Patch 1 "fixes" this.
>>
>> Patch 2 is a minor fix in that outputting the offset in hex for
>> domain_domain_flags is different.
>>
>> Patch 3 is a bug fix to get all "domain_flags" set, not just the 1st
>> one found.
>>
>> Patch 4 is a quick way to add domain.guest_type support.
> Hi Don,
>
> The patch looks good to me.  But for the crash.changelog, can you show
> what happens when you attempt to look at one of these PVH dumps without
> your patches?
>
> Thanks,
>    Dave

Before patch 1:



dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...


crash: invalid structure member offset: domain_is_hvm
        FILE: xen_hyper.c  LINE: 1250  FUNCTION: xen_hyper_store_domain_context()

[/home/don/crash-7.0/crash] error trace: 54c307 => 54ba5f => 51823b => 460895

   460895: OFFSET_verify.part.25+67
   51823b: OFFSET_verify+43
   54ba5f: xen_hyper_store_domain_context+639
   54c307: xen_hyper_refresh_domain_context_space+151


After patch 1:

dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: ffffffffffffffff
                               domain_evtchn: 144
                               domain_is_hvm: -1
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16



After patch 2:



dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: -1
                               domain_evtchn: 144
                               domain_is_hvm: -1
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16



After patch 3:


dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000         0


After patch 4:


dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: -1
                               domain_evtchn: 144
                               domain_is_hvm: -1
                           domain_guest_type: 272
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16


    -Don Slutz


>> Don Slutz (4):
>>    Make domain.is_hvm optional
>>    xen: Fix offset output to be decimal.
>>    xen: set all domain_flags, not just the 1st.
>>    Add basic domain.guest_type support.
>>
>>   xen_hyper.c             | 32 ++++++++++++++++++++++++--------
>>   xen_hyper_defs.h        |  1 +
>>   xen_hyper_dump_tables.c |  4 +++-
>>   3 files changed, 28 insertions(+), 9 deletions(-)
>>
>> --
>> 1.8.4
>>
>> --
>> Crash-utility mailing list
>> Crash-utility@redhat.com
>> https://www.redhat.com/mailman/listinfo/crash-utility
>>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:46:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0G7V-0003kb-1J; Mon, 06 Jan 2014 19:45:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0G7S-0003kJ-RC
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:45:51 +0000
Received: from [85.158.139.211:13940] by server-2.bemta-5.messagelabs.com id
	D8/8F-29392-EE70BC25; Mon, 06 Jan 2014 19:45:50 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389037546!8122706!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24638 invoked from network); 6 Jan 2014 19:45:48 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 19:45:48 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 19:45:45 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="641867678"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.86])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 19:45:44 +0000
Message-ID: <52CB07E7.7000406@terremark.com>
Date: Mon, 06 Jan 2014 14:45:43 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Dave Anderson <anderson@redhat.com>, 
	"Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
In-Reply-To: <1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, kexec@lists.infradead.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Crash-utility] [PATCH 0/4] Enable use of crash on
 xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 12:05, Dave Anderson wrote:
>
> ----- Original Message -----
>> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
>> exists.  This prevents crash from using a xen 4.4.0 vmcore.
>>
>> Patch 1 "fixes" this.
>>
>> Patch 2 is a minor fix: outputting the offset for domain_domain_flags
>> in hex differs from the decimal output used for the other offsets.
>>
>> Patch 3 is a bug fix to get all "domain_flags" set, not just the 1st
>> one found.
>>
>> Patch 4 is a quick way to add domain.guest_type support.
> Hi Don,
>
> The patch looks good to me.  But for the crash.changelog, can you show
> what happens when you attempt to look at one of these PVH dumps without
> your patches?
>
> Thanks,
>    Dave

Before patch 1:



dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...


crash: invalid structure member offset: domain_is_hvm
        FILE: xen_hyper.c  LINE: 1250  FUNCTION: xen_hyper_store_domain_context()

[/home/don/crash-7.0/crash] error trace: 54c307 => 54ba5f => 51823b => 460895

   460895: OFFSET_verify.part.25+67
   51823b: OFFSET_verify+43
   54ba5f: xen_hyper_store_domain_context+639
   54c307: xen_hyper_refresh_domain_context_space+151


After patch 1:

dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: ffffffffffffffff
                               domain_evtchn: 144
                               domain_is_hvm: -1
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16



After patch 2:



dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: -1
                               domain_evtchn: 144
                               domain_is_hvm: -1
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16



After patch 3:


dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000         0


After patch 4:


dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable

crash 7.0.4
Copyright (C) 2002-2013  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

    KERNEL: xen-syms-4.4-unstable
  DUMPFILE: vmcore
      CPUS: 8
   DOMAINS: 5
    UPTIME: --:--:--
   MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
    MEMORY: 32 GB
   PCPU-ID: 0
      PCPU: ffff82d0802bff18
   VCPU-ID: 0
      VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
DOMAIN-ID: 32767
    DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
     STATE: CRASH

crash> doms
    DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I          P2M_MFN
   32753 ffff83083a1a7000 RU O     0        0      0 0              ----
   32754 ffff83083a164000 RU X     0        0      0 0              ----
 >*32767 ffff83083a163000 RU I     0        0      8 0              ----
 >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000      81df80
       2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000         0
crash> help -X ofs
                        ELF_Prstatus_pr_info: 0
                      ELF_Prstatus_pr_cursig: 12
                     ELF_Prstatus_pr_sigpend: 16
                     ELF_Prstatus_pr_sighold: 24
                         ELF_Prstatus_pr_pid: 32
                        ELF_Prstatus_pr_ppid: 36
                        ELF_Prstatus_pr_pgrp: 40
                         ELF_Prstatus_pr_sid: 44
                       ELF_Prstatus_pr_stime: 64
                      ELF_Prstatus_pr_cutime: 80
                      ELF_Prstatus_pr_cstime: 96
                         ELF_Prstatus_pr_reg: 112
                     ELF_Prstatus_pr_fpvalid: 328
                          ELF_Timeval_tv_sec: 0
                         ELF_Timeval_tv_usec: 8
                    arch_shared_info_max_pfn: 0
arch_shared_info_pfn_to_mfn_frame_list_list: 8
                 arch_shared_info_nmi_reason: 16
                cpu_info_guest_cpu_user_regs: 0
                       cpu_info_processor_id: 200
                       cpu_info_current_vcpu: 208
                    cpu_time_local_tsc_stamp: 0
                  cpu_time_stime_local_stamp: 0
                 cpu_time_stime_master_stamp: 0
                          cpu_time_tsc_scale: 0
                  cpu_time_calibration_timer: 0
                           crash_note_t_core: -1
                            crash_note_t_xen: -1
                       crash_note_t_xen_regs: -1
                       crash_note_t_xen_info: -1
                      crash_note_core_t_note: -1
                      crash_note_core_t_desc: -1
                       crash_note_xen_t_note: -1
                       crash_note_xen_t_desc: -1
                  crash_note_xen_core_t_note: -1
                  crash_note_xen_core_t_desc: -1
                  crash_note_xen_info_t_note: -1
                  crash_note_xen_info_t_desc: -1
                            domain_page_list: 0
                         domain_xenpage_list: 0
                            domain_domain_id: 0
                            domain_tot_pages: 56
                            domain_max_pages: 64
                        domain_xenheap_pages: 76
                          domain_shared_info: 8
                           domain_sched_priv: 88
                         domain_next_in_list: 104
                         domain_domain_flags: -1
                               domain_evtchn: 144
                               domain_is_hvm: -1
                           domain_guest_type: 272
                        domain_is_privileged: 278
                    domain_debugger_attached: 288
                             domain_is_dying: 292
              domain_is_paused_by_controller: 296
                     domain_is_shutting_down: 316
                         domain_is_shut_down: 317
                                 domain_vcpu: 352
                                 domain_arch: 384
                 schedule_data_schedule_lock: 0
                          schedule_data_curr: 16
                          schedule_data_idle: 0
                    schedule_data_sched_priv: 24
                       schedule_data_s_timer: 32
                          schedule_data_tick: -1
                              scheduler_name: 0
                          scheduler_opt_name: 8
                          scheduler_sched_id: 16
                              scheduler_init: 40
                              scheduler_tick: -1
                         scheduler_init_vcpu: -1
                    scheduler_destroy_domain: 112
                             scheduler_sleep: 136
                              scheduler_wake: 144
                      scheduler_set_affinity: -1
                       scheduler_do_schedule: 168
                            scheduler_adjust: 192
                     scheduler_dump_settings: 216
                    scheduler_dump_cpu_state: 224
                       shared_info_vcpu_info: 0
                  shared_info_evtchn_pending: 2048
                     shared_info_evtchn_mask: 2560
                            shared_info_arch: 3088
                               timer_expires: 0
                                   timer_cpu: 40
                              timer_function: 24
                                  timer_data: 32
                           timer_heap_offset: -1
                                timer_killed: -1
                             tss_struct_rsp0: 4
                             tss_struct_esp0: 0
                                vcpu_vcpu_id: 0
                              vcpu_processor: 4
                              vcpu_vcpu_info: 8
                                 vcpu_domain: 16
                           vcpu_next_in_list: 24
                                  vcpu_timer: -1
                             vcpu_sleep_tick: -1
                             vcpu_poll_timer: 144
                             vcpu_sched_priv: 192
                               vcpu_runstate: 200
                         vcpu_runstate_guest: 248
                             vcpu_vcpu_flags: -1
                            vcpu_pause_count: 296
                         vcpu_virq_to_evtchn: 300
                           vcpu_cpu_affinity: 352
                               vcpu_nmi_addr: -1
                     vcpu_vcpu_dirty_cpumask: 376
                                   vcpu_arch: 640
                    vcpu_runstate_info_state: 0
         vcpu_runstate_info_state_entry_time: 8
                     vcpu_runstate_info_time: 16


    -Don Slutz


>> Don Slutz (4):
>>    Make domain.is_hvm optional
>>    xen: Fix offset output to be decimal.
>>    xen: set all domain_flags, not just the 1st.
>>    Add basic domain.guest_type support.
>>
>>   xen_hyper.c             | 32 ++++++++++++++++++++++++--------
>>   xen_hyper_defs.h        |  1 +
>>   xen_hyper_dump_tables.c |  4 +++-
>>   3 files changed, 28 insertions(+), 9 deletions(-)
>>
>> --
>> 1.8.4
>>
>> --
>> Crash-utility mailing list
>> Crash-utility@redhat.com
>> https://www.redhat.com/mailman/listinfo/crash-utility
>>
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:51:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0GCX-0004IB-4z; Mon, 06 Jan 2014 19:51:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0GCV-0004I6-WB
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:51:04 +0000
Received: from [85.158.139.211:53987] by server-3.bemta-5.messagelabs.com id
	65/D2-04773-7290BC25; Mon, 06 Jan 2014 19:51:03 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389037860!7976015!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9241 invoked from network); 6 Jan 2014 19:51:02 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Jan 2014 19:51:02 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 19:50:59 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="641872590"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.86])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 19:50:58 +0000
Message-ID: <52CB0922.1030209@terremark.com>
Date: Mon, 06 Jan 2014 14:50:58 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
	<52C9E621.4020608@citrix.com> <52CAA503.5080707@terremark.com>
	<52CAABF8.30702@citrix.com>
In-Reply-To: <52CAABF8.30702@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 08:13, Andrew Cooper wrote:
> On 06/01/14 12:43, Don Slutz wrote:
>> On 01/05/14 18:09, Andrew Cooper wrote:
>>> On 04/01/2014 17:52, Don Slutz wrote:
>>>> Using a 1G hvm domU (in grub) and gdbsx:
>>>>
>>>> (gdb) set arch i8086
>>>> warning: A handler for the OS ABI "GNU/Linux" is not built into this
>>>> configuration
>>>> of GDB.  Attempting to continue with the default i8086 settings.
>>>>
>>>> The target architecture is assumed to be i8086
>>>> (gdb) target remote localhost:9999
>>>> Remote debugging using localhost:9999
>>>> Remote debugging from host 127.0.0.1
>>>> 0x0000d475 in ?? ()
>>>> (gdb) x/1xh 0x6ae9168b
>>>>
>>>> Will reproduce this bug.
>>>>
>>>> With a debug=y build you will get:
>>>>
>>>> Assertion '!preempt_count()' failed at preempt.c:37
>>>>
>>>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>>>
>>>>            [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>>>             ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>>>             ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>>>             ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>>>             ffff82c4c01709ed  get_page+0x2d/0x100
>>>>             ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>>>             ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>>>             ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>>>             ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>>>             ffff82c4c012938b  add_entry+0x4b/0xb0
>>>>             ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>>>
>>>> And gdb output:
>>>>
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     0x3024
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     Ignoring packet error, continuing...
>>>> Reply contains invalid hex digit 116
>>>>
>>>> The 1st one worked because the p2m.lock is recursive and the PCPU
>>>> had not yet changed.
>>>>
>>>> crash reports (for example):
>>>>
>>>> crash> mm_rwlock_t 0xffff83083f913010
>>>> struct mm_rwlock_t {
>>>>     lock = {
>>>>       raw = {
>>>>         lock = 2147483647
>>>>       },
>>>>       debug = {<No data fields>}
>>>>     },
>>>>     unlock_level = 0,
>>>>     recurse_count = 1,
>>>>     locker = 1,
>>>>     locker_function = 0xffff82c4c022c640 <__func__.13514>
>>>> "__get_gfn_type_access"
>>>> }
>>>>
>>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>>> ---
>>>>    xen/arch/x86/debug.c | 5 ++++-
>>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>>>> index 3e21ca8..eceb805 100644
>>>> --- a/xen/arch/x86/debug.c
>>>> +++ b/xen/arch/x86/debug.c
>>>> @@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf,
>>>> int len, struct domain *dp,
>>>>            mfn = (has_hvm_container_domain(dp)
>>>>                   ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>>>>                   : dbg_pv_va2mfn(addr, dp, pgd3));
>>>> -        if ( mfn == INVALID_MFN )
>>>> +        if ( mfn == INVALID_MFN ) {
>>> Xen coding style would have this brace on the next line.
>> Will fix.
>>
>>
>>> Content-wise, I think it would be better to fix up the error path in
>>> dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having taken a
>>> reference on the gfn.
>> That would only fix "p2m_is_readonly(gfntype) and writing memory" case
>> (see cover letter). To fix "the requested vaddr does not exist" case,
>> I would need to add a check for INVALID_MFN in dbg_hvm_va2mfn() also.
>> As it currently stands, it is a smaller fix.
>>
>>     -Don Slutz
> Size really doesn't matter.
>
> What does matter is that the caller of dbg_hvm_va2mfn() should not have
> to cleanup a reference taken when it returns an error.
>
> Anyway, "the requested vaddr does not exist" is covered by
> paging_gva_to_gfn(), which will exit before taking the ref on the gfn.

This is a false statement. A valid GFN is returned in this case, and the lock is taken by get_gfn().

>
> The following (completely untested) patch ought to do:
>
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3e21ca8..3372adb 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int
> toaddr,
>       if ( p2m_is_readonly(gfntype) && toaddr )
>       {
>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> -        return INVALID_MFN;
> +        mfn = INVALID_MFN;
>       }
>   
>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +
> +    if ( mfn == INVALID_MFN )
> +    {
> +        put_gfn(dp, *gfn);
> +        *gfn = INVALID_GFN;
> +    }
> +
>       return mfn;
>   }
>

However, this patch (which I have now tested) works fine. I will post it soon as part of v2 of this patch set.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 19:51:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 19:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0GCX-0004IB-4z; Mon, 06 Jan 2014 19:51:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0GCV-0004I6-WB
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 19:51:04 +0000
Received: from [85.158.139.211:53987] by server-3.bemta-5.messagelabs.com id
	65/D2-04773-7290BC25; Mon, 06 Jan 2014 19:51:03 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389037860!7976015!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9241 invoked from network); 6 Jan 2014 19:51:02 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Jan 2014 19:51:02 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 06 Jan 2014 19:50:59 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,614,1384300800"; d="scan'208";a="641872590"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.86])
	by fldsmtpi01.verizon.com with ESMTP; 06 Jan 2014 19:50:58 +0000
Message-ID: <52CB0922.1030209@terremark.com>
Date: Mon, 06 Jan 2014 14:50:58 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
	<52C9E621.4020608@citrix.com> <52CAA503.5080707@terremark.com>
	<52CAABF8.30702@citrix.com>
In-Reply-To: <52CAABF8.30702@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/06/14 08:13, Andrew Cooper wrote:
> On 06/01/14 12:43, Don Slutz wrote:
>> On 01/05/14 18:09, Andrew Cooper wrote:
>>> On 04/01/2014 17:52, Don Slutz wrote:
>>>> Using a 1G hvm domU (in grub) and gdbsx:
>>>>
>>>> (gdb) set arch i8086
>>>> warning: A handler for the OS ABI "GNU/Linux" is not built into this
>>>> configuration
>>>> of GDB.  Attempting to continue with the default i8086 settings.
>>>>
>>>> The target architecture is assumed to be i8086
>>>> (gdb) target remote localhost:9999
>>>> Remote debugging using localhost:9999
>>>> Remote debugging from host 127.0.0.1
>>>> 0x0000d475 in ?? ()
>>>> (gdb) x/1xh 0x6ae9168b
>>>>
>>>> Will reproduce this bug.
>>>>
>>>> With a debug=y build you will get:
>>>>
>>>> Assertion '!preempt_count()' failed at preempt.c:37
>>>>
>>>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>>>
>>>>            [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>>>             ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>>>             ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>>>             ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>>>             ffff82c4c01709ed  get_page+0x2d/0x100
>>>>             ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>>>             ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>>>             ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>>>             ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>>>             ffff82c4c012938b  add_entry+0x4b/0xb0
>>>>             ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>>>
>>>> And gdb output:
>>>>
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     0x3024
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     Ignoring packet error, continuing...
>>>> Reply contains invalid hex digit 116
>>>>
>>>> The 1st one worked because the p2m.lock is recursive and the PCPU
>>>> had not yet changed.
>>>>
>>>> crash reports (for example):
>>>>
>>>> crash> mm_rwlock_t 0xffff83083f913010
>>>> struct mm_rwlock_t {
>>>>     lock = {
>>>>       raw = {
>>>>         lock = 2147483647
>>>>       },
>>>>       debug = {<No data fields>}
>>>>     },
>>>>     unlock_level = 0,
>>>>     recurse_count = 1,
>>>>     locker = 1,
>>>>     locker_function = 0xffff82c4c022c640 <__func__.13514>
>>>> "__get_gfn_type_access"
>>>> }
>>>>
>>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>>> ---
>>>>    xen/arch/x86/debug.c | 5 ++++-
>>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>>>> index 3e21ca8..eceb805 100644
>>>> --- a/xen/arch/x86/debug.c
>>>> +++ b/xen/arch/x86/debug.c
>>>> @@ -161,8 +161,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf,
>>>> int len, struct domain *dp,
>>>>            mfn = (has_hvm_container_domain(dp)
>>>>                   ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
>>>>                   : dbg_pv_va2mfn(addr, dp, pgd3));
>>>> -        if ( mfn == INVALID_MFN )
>>>> +        if ( mfn == INVALID_MFN ) {
>>> Xen coding style would have this brace on the next line.
>> Will fix.
>>
>>
>>> Content-wise, I think it would be better to fix up the error path in
>>> dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having taken a
>>> reference on the gfn.
>> That would only fix "p2m_is_readonly(gfntype) and writing memory" case
>> (see cover letter). To fix "the requested vaddr does not exist" case,
>> I would need to add a check for INVALID_MFN in dbg_hvm_va2mfn() also.
>> As it currently stands, it is a smaller fix.
>>
>>     -Don Slutz
> Size really doesn't matter.
>
> What does matter is that the caller of dbg_hvm_va2mfn() should not have
> to cleanup a reference taken when it returns an error.
>
> Anyway, "the requested vaddr does not exist" is covered by
> paging_gva_to_gfn(), which will exit before taking the ref on the gfn.

This is a false statement.  A valid GFN is returned in this case. The lock is taken by get_gfn().

>
> The following (completely untested) patch ought to do:
>
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3e21ca8..3372adb 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int
> toaddr,
>       if ( p2m_is_readonly(gfntype) && toaddr )
>       {
>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> -        return INVALID_MFN;
> +        mfn = INVALID_MFN;
>       }
>   
>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +
> +    if ( mfn == INVALID_MFN )
> +    {
> +        put_gfn(dp, *gfn);
> +        *gfn = INVALID_GFN;
> +    }
> +
>       return mfn;
>   }
>

However, this patch (which I have now tested) works fine.  I will post it soon as part of v2 of this patch set.
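
The reference-balancing pattern in the patch above can be sketched in isolation. The stubs below (get_gfn_stub(), put_gfn_stub(), va2mfn_sketch()) are hypothetical stand-ins for illustration only, not Xen's real implementations; the point is just that the callee drops its own gfn reference before returning INVALID_MFN, so the caller never cleans up after a failed lookup:

```c
#include <assert.h>

#define INVALID_MFN (~0UL)
#define INVALID_GFN (~0UL)

/* Hypothetical stand-ins for get_gfn()/put_gfn(): they only count the
 * reference so the balance can be checked. */
int gfn_refcount;
unsigned long get_gfn_stub(unsigned long gfn) { gfn_refcount++; return gfn; }
void put_gfn_stub(unsigned long gfn) { (void)gfn; gfn_refcount--; }

/* Sketch of the single-exit error path discussed above: the callee takes
 * the gfn reference itself and also drops it itself before returning
 * INVALID_MFN. */
unsigned long va2mfn_sketch(unsigned long vaddr, int readonly, int toaddr,
                            unsigned long *gfn)
{
    unsigned long mfn;

    *gfn = vaddr >> 12;               /* pretend va -> gfn translation */
    mfn = get_gfn_stub(*gfn);         /* takes the reference */

    if ( readonly && toaddr )         /* writing a read-only page fails */
        mfn = INVALID_MFN;

    if ( mfn == INVALID_MFN )
    {
        put_gfn_stub(*gfn);           /* drop the reference on failure */
        *gfn = INVALID_GFN;
    }
    return mfn;
}
```

On the failure path the reference count returns to zero and *gfn is invalidated; on the success path the caller still owns exactly one reference and must put_gfn() it when done.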

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 20:00:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0GLh-0004hH-4W; Mon, 06 Jan 2014 20:00:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yhlu.kernel@gmail.com>) id 1W0GLf-0004h8-C6
	for xen-devel@lists.xensource.com; Mon, 06 Jan 2014 20:00:31 +0000
Received: from [85.158.139.211:28804] by server-15.bemta-5.messagelabs.com id
	2A/14-08490-E5B0BC25; Mon, 06 Jan 2014 20:00:30 +0000
X-Env-Sender: yhlu.kernel@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389038428!7977049!1
X-Originating-IP: [209.85.223.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16159 invoked from network); 6 Jan 2014 20:00:29 -0000
Received: from mail-ie0-f172.google.com (HELO mail-ie0-f172.google.com)
	(209.85.223.172)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 20:00:29 -0000
Received: by mail-ie0-f172.google.com with SMTP id qd12so19387166ieb.31
	for <xen-devel@lists.xensource.com>;
	Mon, 06 Jan 2014 12:00:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=2L/YvqpJFWQ4vp1C9Ck5ajJMC1NnyvtB4syGttjgGbo=;
	b=L0aCd3fC7xZ8PEku+8u7nd4fDJH+IhFb5GULBX9HST0NVuNnXLKFnAPBsaBep58JAI
	U2hSC9VZd1Vqpx1KfurAHw7znN08aON6Z6A5BSaX0YnZQJoG1sCUtjy/reLZSbmMW8kb
	O42hjR7zJ/X1y7YwmlFrr6Cnpg/pbyHy71DNavUsRijk5cq/ERSm/4MFWbWMiMxmkyon
	DNeOwtaYd8sHpsa+xcdYHSNiwvwHb3pw9V0laH7vZDrw+b5wR9D6UyUvj2jRpEhLtWQK
	idol1XUoRSKUCWq0zM8H05jk86vf9KpLxSdgxD5ik/z3Sf+nWvDiPzafxuzl1SkEZJCX
	Gi2w==
MIME-Version: 1.0
X-Received: by 10.42.32.133 with SMTP id e5mr79688861icd.39.1389038428421;
	Mon, 06 Jan 2014 12:00:28 -0800 (PST)
Received: by 10.64.235.70 with HTTP; Mon, 6 Jan 2014 12:00:28 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401031749340.8667@kaball.uk.xensource.com>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
	<1388707565-16535-17-git-send-email-yinghai@kernel.org>
	<alpine.DEB.2.02.1401031749340.8667@kaball.uk.xensource.com>
Date: Mon, 6 Jan 2014 12:00:28 -0800
X-Google-Sender-Auth: TlH_aYZoagsUgjBq8Okqrxj0u7Y
Message-ID: <CAE9FiQVba9K_3LDkHTP4YZJ-FzwE_mF3h6iiXEFb1shUUmhGpg@mail.gmail.com>
From: Yinghai Lu <yinghai@kernel.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Tony Luck <tony.luck@intel.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@sisk.pl>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Xen Devel <xen-devel@lists.xensource.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Bjorn Helgaas <bhelgaas@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH v5 16/33] xen,
 irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 3, 2014 at 9:50 AM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:

>>  drivers/xen/events.c | 8 ++++++--
>>  1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
>> index 4035e83..020cd77 100644
>> --- a/drivers/xen/events.c
>> +++ b/drivers/xen/events.c
>> @@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
>>       /* Legacy IRQ descriptors are already allocated by the arch. */
>>       if (gsi < NR_IRQS_LEGACY)
>>               irq = gsi;
>> -     else
>> -             irq = irq_alloc_desc_at(gsi, -1);
>> +     else {
>> +             /* for x86, irq already get reserved for gsi */
>> +             irq = irq_alloc_reserved_desc_at(gsi, -1);
>> +             if (irq < 0)
>> +                     irq = irq_alloc_desc_at(gsi, -1);
>> +     }
>
> This is common code. On ARM I get:
>
> drivers/xen/events.c: In function 'xen_allocate_irq_gsi':
> drivers/xen/events.c:513:3: error: implicit declaration of function 'irq_alloc_reserved_desc_at' [-Werror=implicit-function-declaration]
>    irq = irq_alloc_reserved_desc_at(gsi, -1);
>    ^
> cc1: some warnings being treated as errors

That's strange.

irq_alloc_reserved_desc_at() is defined alongside irq_alloc_desc_at() in
include/linux/irq.h and kernel/irq/irqdesc.c.

Did you try the whole tree, or just this patch?

Thanks

Yinghai
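
The fallback logic in the quoted hunk can be sketched in isolation. The *_stub() functions below are hypothetical stand-ins for the real allocators, for illustration only: on an arch that pre-reserved descriptors for GSIs the first call succeeds, elsewhere it fails and a plain allocation is made instead:

```c
#include <assert.h>

#define NR_IRQS_LEGACY 16

/* Hypothetical stubs for the two allocators under discussion. */
int have_reserved_descs;
int irq_alloc_reserved_desc_at_stub(unsigned int gsi)
{
    return have_reserved_descs ? (int)gsi : -1;
}
int irq_alloc_desc_at_stub(unsigned int gsi) { return (int)gsi; }

/* Sketch of the fallback logic from the quoted patch. */
int allocate_irq_gsi_sketch(unsigned int gsi)
{
    int irq;

    /* Legacy IRQ descriptors are already allocated by the arch. */
    if (gsi < NR_IRQS_LEGACY)
        irq = gsi;
    else {
        /* Prefer the descriptor the arch already reserved for this GSI. */
        irq = irq_alloc_reserved_desc_at_stub(gsi);
        if (irq < 0)
            /* None reserved: fall back to a plain allocation. */
            irq = irq_alloc_desc_at_stub(gsi);
    }
    return irq;
}
```

The build failure on ARM follows from the same shape: the first allocator in this sketch must exist (at least as a declaration) on every arch that compiles the common code, which is exactly what the error message reports is missing.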

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 20:26:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Gl2-0005gQ-GU; Mon, 06 Jan 2014 20:26:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Gl0-0005gJ-Eu
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 20:26:42 +0000
Received: from [85.158.139.211:63166] by server-10.bemta-5.messagelabs.com id
	AE/1F-01405-1811BC25; Mon, 06 Jan 2014 20:26:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389039999!8169953!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 924 invoked from network); 6 Jan 2014 20:26:40 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Jan 2014 20:26:40 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06KQVj7013429
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 20:26:32 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06KQS72023718
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 20:26:28 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06KQRYC022530; Mon, 6 Jan 2014 20:26:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 12:26:27 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EAE3D1C18DD; Mon,  6 Jan 2014 15:26:21 -0500 (EST)
Date: Mon, 6 Jan 2014 15:26:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>
Message-ID: <20140106202621.GA30667@phenom.dumpdata.com>
References: <387b2f80a3866e53ec471421558cf4de@mail.shatteredsilicon.net>
	<52308E1402000078000F2748@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>
	<20131211183233.GA2760@phenom.dumpdata.com>
	<52A8D5E5.2030902@bobich.net>
	<20131211213025.GA8283@phenom.dumpdata.com>
	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52AB275D.2010401@bobich.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 13, 2013 at 03:27:25PM +0000, Gordan Bobic wrote:
> On 12/13/2013 02:56 PM, Jan Beulich wrote:
> >>>>On 13.12.13 at 15:43, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >>-[0000:00]-+-00.0  Intel Corporation 4th Gen Core Processor DRAM Controller
> >>            +-01.0-[01]--+-00.0  Intel Corporation 82571EB Gigabit Ethernet Controller
> >>            |            \-00.1  Intel Corporation 82571EB Gigabit Ethernet Controller
> >>            +-01.1-[02]----00.0  LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
> >>            +-02.0  Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller
> >>            +-03.0  Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller
> >>            +-14.0  Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI
> >>            +-16.0  Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1
> >>            +-16.3  Intel Corporation 8 Series/C220 Series Chipset Family KT Controller
> >>            +-19.0  Intel Corporation Ethernet Connection I217-LM
> >>            +-1a.0  Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2
> >>            +-1b.0  Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller
> >>            +-1c.0-[03]----00.0  Intel Corporation 82574L Gigabit Network Connection
> >>            +-1c.1-[04]----00.0  Intel Corporation 82574L Gigabit Network Connection
> >>            +-1c.3-[05]----00.0  Intel Corporation I210 Gigabit Network Connection
> >>            +-1c.4-[06]----00.0  Intel Corporation 82572EI Gigabit Ethernet Controller (Copper)
> >>            +-1c.5-[07-09]----00.0-[08-09]--+-01.0-[09]--+-08.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-09.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
> >
> >So 00:1c.5 is the PCIe->PCI bridge, 07:00.0 and 08:01.0 are
> >PCI-PCI bridges, the capture devices are ordinary ones. The only
> >possibly unexpected aspect I can see here is that there are two
> >intermediate PCI-PCI bridges, but iirc this should be handled by
> >the VT-d code.
> 
> Not sure about multiple intermediate PCI-PCI bridges, but I can
> confirm that a setup with multiple intermediate PCIe bridges works fine,
> e.g. Intel X58 -> Nvidia NF200 -> PLX -> GPU (GTX690)

To double-check whether this is a problem with the card itself,
I plugged the card into an AMD box (TA890FXE) and an Intel box (ThinkServer
TS130), and it works fine in both. Each of those boards has only one
PCIe->PCI bridge in front of the PCI cards.

TA890FXE:

00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40) (prog-if 01 [Subtractive decode])
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=02, subordinate=02, sec-latency=64
        I/O behind bridge: 0000c000-0000dfff
02:05.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11) (prog-if 00 [Normal decode])
        Flags: bus master, medium devsel, latency 64
        Bus: primary=02, secondary=03, subordinate=03, sec-latency=64
        Prefetchable memory behind bridge: fde00000-fdefffff
        Capabilities: [80] Power Management version 2
        Capabilities: [90] CompactPCI hot-swap <?> 

       +-14.4-[02-03]--+-05.0-[03]--+-08.0  Brooktree Corporation Bt878 Video Capture
       |               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
       |               |            +-09.0  Brooktree Corporation Bt878 Video Capture
       |               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
       |               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
       |               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
       |               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
       |               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
       |               \-06.0  NetMos Technology PCI 9835 Multi-I/O Controller

ThinkServer TS130:

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=02, subordinate=03, sec-latency=32
        Prefetchable memory behind bridge: 00000000d0000000-00000000d00fffff
        Capabilities: [50] Subsystem: Lenovo Device 1025
00:00.0 Host bridge: Intel Corporation Device 0108 (rev 09)
        Subsystem: Lenovo Device 1025
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=0c <?>

           +-1e.0-[02-03]----00.0-[03]--+-08.0  Brooktree Corporation Bt878 Video Capture
           |                            +-08.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-09.0  Brooktree Corporation Bt878 Video Capture
           |                            +-09.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-0a.0  Brooktree Corporation Bt878 Video Capture
           |                            +-0a.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-0b.0  Brooktree Corporation Bt878 Video Capture
           |                            \-0b.1  Brooktree Corporation Bt878 Audio Capture

SuperMicro (two PCI bridges):
         +-1c.5-[07-09]----00.0-[08-09]--+-01.0-[09]--+-08.0  Brooktree Corporation Bt878 Video Capture
         |                               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
         |                               |            +-09.0  Brooktree Corporation Bt878 Video Capture
         |                               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
         |                               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
         |                               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
         |                               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
         |                               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
         |                               \-03.0  Texas Instruments TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]

00:1c.5 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #6 (rev d4) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=07, subordinate=09, sec-latency=0
        Memory behind bridge: f0400000-f05fffff
        Capabilities: [40] Express Root Port (Slot+), MSI 00
        Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit-
        Capabilities: [90] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3
        Kernel driver in use: pcieport


00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 0000e000-0000efff
        Memory behind bridge: f0d00000-f0dfffff
        Capabilities: [88] Subsystem: Intel Corporation Device 2010
        Capabilities: [80] Power Management version 3
        Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [a0] Express Root Port (Slot+), MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [140] Root Complex Link
        Capabilities: [d94] #19
        Kernel driver in use: pcieport

07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=07, secondary=08, subordinate=09, sec-latency=32
        Memory behind bridge: f0400000-f05fffff
        Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3

Which would look like this:

C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs) on the card
	      \--------------> IEEE-1394a

I am actually wondering if this 07:00.0 device is the one that
reports itself as 08:00.0 (which I think is what you were alluding to, Jan).

Gordan (in earlier emails) reported that VT-d does work for him, and
he has a much more complex setup than I do on this motherboard:

This is on http://www.newegg.com/Product/Product.aspx?Item=N82E16813188070


 \-[0000:00]-+-00.0  Intel Corporation 5520 I/O Hub to ESI Port [8086:3406]
             +-07.0-[05-08]----00.0-[06-08]--+-00.0-[08]--+-00.0  NVIDIA Corporation GF100GL [Quadro 6000] [10de:06d8]
             |                               |            \-00.1  NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5]
             |                               \-02.0-[07]--+-00.0  NVIDIA Corporation GF100GL [Quadro 5000] [10de:06d9]
             |                                            \-00.1  NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5]

00:07.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 [8086:340e] (rev 22) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=00, secondary=05, subordinate=08, sec-latency=0
        I/O behind bridge: 0000c000-0000dfff
        Memory behind bridge: f4000000-fbdfffff

05:00.0 PCI bridge [0604]: NVIDIA Corporation NF200 PCIe 2.0 switch [10de:05b1] (rev a3) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=05, secondary=06, subordinate=08, sec-latency=0
        I/O behind bridge: 0000c000-0000dfff
        Memory behind bridge: f4000000-fbdfffff
        Prefetchable memory behind bridge: 00000000a8000000-00000000bfffffff

06:00.0 PCI bridge [0604]: NVIDIA Corporation NF200 PCIe 2.0 switch [10de:05b1] (rev a3) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=06, secondary=08, subordinate=08, sec-latency=0
        I/O behind bridge: 0000d000-0000dfff
        Memory behind bridge: f8000000-fbdfffff
        Prefetchable memory behind bridge: 00000000b4000000-00000000bfffffff

Though I am having a hard time parsing the 'lspci -vt' output, I think it is:

X58---> NF200 ---> Quadro 6000 (GPU and audio card)
  \---> NF200 ---> Quadro 5000 (GPU and audio card).


Which is a setup similar to mine - an intermediate bridge.

So I think the code is OK - this is likely a firmware/motherboard
bug.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 13, 2013 at 03:27:25PM +0000, Gordan Bobic wrote:
> On 12/13/2013 02:56 PM, Jan Beulich wrote:
> >>>>On 13.12.13 at 15:43, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >>-[0000:00]-+-00.0  Intel Corporation 4th Gen Core Processor DRAM Controller
> >>            +-01.0-[01]--+-00.0  Intel Corporation 82571EB Gigabit Ethernet Controller
> >>            |            \-00.1  Intel Corporation 82571EB Gigabit Ethernet Controller
> >>            +-01.1-[02]----00.0  LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
> >>            +-02.0  Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller
> >>            +-03.0  Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller
> >>            +-14.0  Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI
> >>            +-16.0  Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1
> >>            +-16.3  Intel Corporation 8 Series/C220 Series Chipset Family KT Controller
> >>            +-19.0  Intel Corporation Ethernet Connection I217-LM
> >>            +-1a.0  Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2
> >>            +-1b.0  Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller
> >>            +-1c.0-[03]----00.0  Intel Corporation 82574L Gigabit Network Connection
> >>            +-1c.1-[04]----00.0  Intel Corporation 82574L Gigabit Network Connection
> >>            +-1c.3-[05]----00.0  Intel Corporation I210 Gigabit Network Connection
> >>            +-1c.4-[06]----00.0  Intel Corporation 82572EI Gigabit Ethernet Controller (Copper)
> >>            +-1c.5-[07-09]----00.0-[08-09]--+-01.0-[09]--+-08.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-09.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
> >>            |                               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
> >>            |                               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
> >
> >So 00:1c.5 is the PCIe->PCI bridge, 07:00.0 and 08:01.0 are
> >PCI-PCI bridges, the capture devices are ordinary ones. The only
> >possibly unexpected aspect I can see here is that there are two
> >intermediate PCI-PCI bridges, but iirc this should be handled by
> >the VT-d code.
> 
> Not sure about multiple intermediate PCI-PCI bridges, but I can
> confirm that a multiple intermediate PCIe bridge setup works fine,
> e.g. Intel X58 -> Nvidia NF200 -> PLX -> GPU (GTX690)

To double-check whether this is a problem with the card itself,
I plugged the card into an AMD box (TA890FXE) and an Intel box (ThinkServer
TS130), and it works great in both. Each of them has only one PCIe->PCI
bridge in front of the PCI cards.

TA890FXE:

00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40) (prog-if 01 [Subtractive decode])
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=02, subordinate=02, sec-latency=64
        I/O behind bridge: 0000c000-0000dfff
02:05.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11) (prog-if 00 [Normal decode])
        Flags: bus master, medium devsel, latency 64
        Bus: primary=02, secondary=03, subordinate=03, sec-latency=64
        Prefetchable memory behind bridge: fde00000-fdefffff
        Capabilities: [80] Power Management version 2
        Capabilities: [90] CompactPCI hot-swap <?> 

           +-14.4-[02-03]--+-05.0-[03]--+-08.0  Brooktree Corporation Bt878 Video Capture
           |               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
           |               |            +-09.0  Brooktree Corporation Bt878 Video Capture
           |               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
           |               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
           |               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
           |               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
           |               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
           |               \-06.0  NetMos Technology PCI 9835 Multi-I/O Controller

ThinkServer TS130:

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=02, subordinate=03, sec-latency=32
        Prefetchable memory behind bridge: 00000000d0000000-00000000d00fffff
        Capabilities: [50] Subsystem: Lenovo Device 1025
00:00.0 Host bridge: Intel Corporation Device 0108 (rev 09)
        Subsystem: Lenovo Device 1025
        Flags: bus master, fast devsel, latency 0
        Capabilities: [e0] Vendor Specific Information: Len=0c <?>

           +-1e.0-[02-03]----00.0-[03]--+-08.0  Brooktree Corporation Bt878 Video Capture
           |                            +-08.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-09.0  Brooktree Corporation Bt878 Video Capture
           |                            +-09.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-0a.0  Brooktree Corporation Bt878 Video Capture
           |                            +-0a.1  Brooktree Corporation Bt878 Audio Capture
           |                            +-0b.0  Brooktree Corporation Bt878 Video Capture
           |                            \-0b.1  Brooktree Corporation Bt878 Audio Capture

SuperMicro (two PCI bridges):
           +-1c.5-[07-09]----00.0-[08-09]--+-01.0-[09]--+-08.0  Brooktree Corporation Bt878 Video Capture
           |                               |            +-08.1  Brooktree Corporation Bt878 Audio Capture
           |                               |            +-09.0  Brooktree Corporation Bt878 Video Capture
           |                               |            +-09.1  Brooktree Corporation Bt878 Audio Capture
           |                               |            +-0a.0  Brooktree Corporation Bt878 Video Capture
           |                               |            +-0a.1  Brooktree Corporation Bt878 Audio Capture
           |                               |            +-0b.0  Brooktree Corporation Bt878 Video Capture
           |                               |            \-0b.1  Brooktree Corporation Bt878 Audio Capture
           |                               \-03.0  Texas Instruments TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]

00:1c.5 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #6 (rev d4) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=07, subordinate=09, sec-latency=0
        Memory behind bridge: f0400000-f05fffff
        Capabilities: [40] Express Root Port (Slot+), MSI 00
        Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit-
        Capabilities: [90] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3
        Kernel driver in use: pcieport


00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 0000e000-0000efff
        Memory behind bridge: f0d00000-f0dfffff
        Capabilities: [88] Subsystem: Intel Corporation Device 2010
        Capabilities: [80] Power Management version 3
        Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [a0] Express Root Port (Slot+), MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [140] Root Complex Link
        Capabilities: [d94] #19
        Kernel driver in use: pcieport

07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=07, secondary=08, subordinate=09, sec-latency=32
        Memory behind bridge: f0400000-f05fffff
        Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3

Which would look like this:

C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs) on the card
                       \-----> IEEE-1394a
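For what it's worth, the nesting can be read straight off the primary/secondary/subordinate values quoted above: a bridge claims every bus number in [secondary, subordinate], so one bridge sits behind another exactly when its primary bus falls inside the other's range. A throwaway sketch (helper names are mine; the values for 08:01.0 are inferred from the tree, not quoted):

```python
# Bridge nesting from the primary/secondary/subordinate values above.
# A bridge claims every bus number in [secondary, subordinate], so
# bridge B sits behind bridge A iff B's primary bus is in A's range.
bridges = {
    # bdf: (primary, secondary, subordinate)
    "00:1c.5": (0x00, 0x07, 0x09),  # C220 root port (quoted above)
    "07:00.0": (0x07, 0x08, 0x09),  # Tundra PCIe-to-PCI bridge (quoted above)
    "08:01.0": (0x08, 0x09, 0x09),  # HB6 bridge on the card (inferred)
}

def behind(a, b):
    """True if bridge b hangs off bridge a."""
    _, a_sec, a_sub = bridges[a]
    b_pri, _, _ = bridges[b]
    return a_sec <= b_pri <= a_sub

print(behind("00:1c.5", "07:00.0"))  # True: Tundra hangs off the root port
print(behind("07:00.0", "08:01.0"))  # True: HB6 hangs off the Tundra
```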

I am actually wondering if this 07:00.0 device is the one that
reports itself as 08:00.0 (which I think is what you were alluding to, Jan).
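If I remember right, a conventional PCIe-to-PCI(-X) bridge takes ownership of transactions it forwards upstream and tags them with requester ID (secondary bus, devfn 00.0) - which I believe is also how Linux's pci_for_each_dma_alias() models such bridges - and that would make 07:00.0's downstream devices show up as 08:00.0. A toy illustration (helper name is mine):

```python
def pci_alias_bdf(secondary_bus):
    """Requester ID (as a BDF string) a conventional PCIe-to-PCI bridge
    is assumed to use for transactions forwarded from its secondary bus:
    bus = secondary bus, device/function = 00.0."""
    rid = (secondary_bus << 8) | 0  # layout: bus[15:8] dev[7:3] fn[2:0]
    return f"{rid >> 8:02x}:{(rid >> 3) & 0x1f:02x}.{rid & 0x7}"

# The Tundra bridge at 07:00.0 above has secondary=08, so the Bt878s
# behind it would appear to VT-d as:
print(pci_alias_bdf(0x08))  # 08:00.0
```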

Gordan (in earlier emails) reported that VT-d does work for him, and
he has a much more complex setup than I do on this motherboard:

This is on http://www.newegg.com/Product/Product.aspx?Item=N82E16813188070


 \-[0000:00]-+-00.0  Intel Corporation 5520 I/O Hub to ESI Port [8086:3406]
             +-07.0-[05-08]----00.0-[06-08]--+-00.0-[08]--+-00.0  NVIDIA Corporation GF100GL [Quadro 6000] [10de:06d8]
             |                               |            \-00.1  NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5]
             |                               \-02.0-[07]--+-00.0  NVIDIA Corporation GF100GL [Quadro 5000] [10de:06d9]
             |                                            \-00.1  NVIDIA Corporation GF100 High Definition Audio Controller [10de:0be5]

00:07.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 [8086:340e] (rev 22) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=00, secondary=05, subordinate=08, sec-latency=0
        I/O behind bridge: 0000c000-0000dfff
        Memory behind bridge: f4000000-fbdfffff

05:00.0 PCI bridge [0604]: NVIDIA Corporation NF200 PCIe 2.0 switch [10de:05b1] (rev a3) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=05, secondary=06, subordinate=08, sec-latency=0
        I/O behind bridge: 0000c000-0000dfff
        Memory behind bridge: f4000000-fbdfffff
        Prefetchable memory behind bridge: 00000000a8000000-00000000bfffffff

06:00.0 PCI bridge [0604]: NVIDIA Corporation NF200 PCIe 2.0 switch [10de:05b1] (rev a3) (prog-if 00 [Normal decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 256 bytes
        Bus: primary=06, secondary=08, subordinate=08, sec-latency=0
        I/O behind bridge: 0000d000-0000dfff
        Memory behind bridge: f8000000-fbdfffff
        Prefetchable memory behind bridge: 00000000b4000000-00000000bfffffff

Though I am having a hard time parsing the 'lspci -vt' output, I think it is:

X58---> NF200 ---> Quadro 6000 (GPU and audio card)
  \---> NF200 ---> Quadro 5000 (GPU and audio card).


Which is a similar setup to mine - an intermediate bridge.

So I think the code is OK - this is likely a firmware/motherboard
bug.


From xen-devel-bounces@lists.xen.org Mon Jan 06 20:29:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0GnU-00068n-6n; Mon, 06 Jan 2014 20:29:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1W0GnS-00068Z-D3
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 20:29:14 +0000
Received: from [85.158.139.211:16854] by server-12.bemta-5.messagelabs.com id
	9D/1C-30017-9121BC25; Mon, 06 Jan 2014 20:29:13 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389040151!8165849!1
X-Originating-IP: [209.132.183.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjQgPT4gNzcxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14377 invoked from network); 6 Jan 2014 20:29:12 -0000
Received: from mx3-phx2.redhat.com (HELO mx3-phx2.redhat.com) (209.132.183.24)
	by server-10.tower-206.messagelabs.com with SMTP;
	6 Jan 2014 20:29:12 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx3-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id s06KSjio020155;
	Mon, 6 Jan 2014 15:28:45 -0500
Date: Mon, 6 Jan 2014 15:28:45 -0500 (EST)
From: Dave Anderson <anderson@redhat.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <803685365.26201847.1389040125161.JavaMail.root@redhat.com>
In-Reply-To: <52CB07E7.7000406@terremark.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
	<52CB07E7.7000406@terremark.com>
MIME-Version: 1.0
X-Originating-IP: [10.5.82.12]
X-Mailer: Zimbra 8.0.3_GA_5664 (ZimbraWebClient - FF22 (Linux)/8.0.3_GA_5664)
Thread-Topic: Enable use of crash on xen 4.4.0 vmcore
Thread-Index: LHDkncQtmBAVMRvQBlUdu03uTeaa6A==
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org,
	kexec@lists.infradead.org, "Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
Subject: Re: [Xen-devel] [Crash-utility] [PATCH 0/4] Enable use of crash on
 xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> On 01/06/14 12:05, Dave Anderson wrote:
> >
> > ----- Original Message -----
> >> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
> >> exists.  This prevents crash from using a xen 4.4.0 vmcore.
> >>
> >> Patch 1 "fixes" this.
> >>
> >> Patch 2 is a minor fix in that outputting the offset in hex for
> >> domain_domain_flags is different.
> >>
> >> Patch 3 is a bug fix to get all "domain_flags" set, not just the 1st
> >> one found.
> >>
> >> Patch 4 is a quick way to add domain.guest_type support.
> > Hi Don,
> >
> > The patch looks good to me.  But for the crash.changelog, can you show
> > what happens when you attempt to look at one of these PVH dumps without
> > your patches?
> >
> > Thanks,
> >    Dave
> 
> Before patch 1:
> 
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
> 
> crash: invalid structure member offset: domain_is_hvm
>         FILE: xen_hyper.c  LINE: 1250  FUNCTION: xen_hyper_store_domain_context()
> 
> [/home/don/crash-7.0/crash] error trace: 54c307 => 54ba5f => 51823b => 460895
> 
>    460895: OFFSET_verify.part.25+67
>    51823b: OFFSET_verify+43
>    54ba5f: xen_hyper_store_domain_context+639
>    54c307: xen_hyper_refresh_domain_context_space+151

OK good, that's what I'm looking for -- queued for crash-7.0.5.

Thanks,
  Dave

> 
> 
> After patch 1:
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I         P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0                  ----
>    32754 ffff83083a164000 RU X     0        0      0 0                  ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0                  ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000  81df80
>        2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000       0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: ffffffffffffffff
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
> 
> After patch 2:
> 
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I         P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0                  ----
>    32754 ffff83083a164000 RU X     0        0      0 0                  ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0                  ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000  81df80
>        2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000       0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: -1
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
> 
> After patch 3:
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000 81df80
>        2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000 0
> 
> 
> After patch 4:
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000 81df80
>        2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000 0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: -1
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                            domain_guest_type: 272
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
>     -Don Slutz
> 
> 
> >> Don Slutz (4):
> >>    Make domain.is_hvm optional
> >>    xen: Fix offset output to be decimal.
> >>    xen: set all domain_flags, not just the 1st.
> >>    Add basic domain.guest_type support.
> >>
> >>   xen_hyper.c             | 32 ++++++++++++++++++++++++--------
> >>   xen_hyper_defs.h        |  1 +
> >>   xen_hyper_dump_tables.c |  4 +++-
> >>   3 files changed, 28 insertions(+), 9 deletions(-)
> >>
> >> --
> >> 1.8.4
> >>
> >> --
> >> Crash-utility mailing list
> >> Crash-utility@redhat.com
> >> https://www.redhat.com/mailman/listinfo/crash-utility
> >>
> > _______________________________________________
> > kexec mailing list
> > kexec@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/kexec
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 20:29:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0GnU-00068n-6n; Mon, 06 Jan 2014 20:29:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anderson@redhat.com>) id 1W0GnS-00068Z-D3
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 20:29:14 +0000
Received: from [85.158.139.211:16854] by server-12.bemta-5.messagelabs.com id
	9D/1C-30017-9121BC25; Mon, 06 Jan 2014 20:29:13 +0000
X-Env-Sender: anderson@redhat.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389040151!8165849!1
X-Originating-IP: [209.132.183.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjQgPT4gNzcxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14377 invoked from network); 6 Jan 2014 20:29:12 -0000
Received: from mx3-phx2.redhat.com (HELO mx3-phx2.redhat.com) (209.132.183.24)
	by server-10.tower-206.messagelabs.com with SMTP;
	6 Jan 2014 20:29:12 -0000
Received: from zmail15.collab.prod.int.phx2.redhat.com
	(zmail15.collab.prod.int.phx2.redhat.com [10.5.83.17])
	by mx3-phx2.redhat.com (8.13.8/8.13.8) with ESMTP id s06KSjio020155;
	Mon, 6 Jan 2014 15:28:45 -0500
Date: Mon, 6 Jan 2014 15:28:45 -0500 (EST)
From: Dave Anderson <anderson@redhat.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <803685365.26201847.1389040125161.JavaMail.root@redhat.com>
In-Reply-To: <52CB07E7.7000406@terremark.com>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
	<1761421528.25775303.1389027910680.JavaMail.root@redhat.com>
	<52CB07E7.7000406@terremark.com>
MIME-Version: 1.0
X-Originating-IP: [10.5.82.12]
X-Mailer: Zimbra 8.0.3_GA_5664 (ZimbraWebClient - FF22 (Linux)/8.0.3_GA_5664)
Thread-Topic: Enable use of crash on xen 4.4.0 vmcore
Thread-Index: LHDkncQtmBAVMRvQBlUdu03uTeaa6A==
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org,
	kexec@lists.infradead.org, "Discussion list for crash utility usage,
	maintenance and development" <crash-utility@redhat.com>
Subject: Re: [Xen-devel] [Crash-utility] [PATCH 0/4] Enable use of crash on
 xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



----- Original Message -----
> On 01/06/14 12:05, Dave Anderson wrote:
> >
> > ----- Original Message -----
> >> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
> >> exists.  This prevents crash from using a xen 4.4.0 vmcore.
> >>
> >> Patch 1 "fixes" this.
> >>
> >> Patch 2 is a minor fix: the offset for domain_domain_flags was
> >> printed in hex, while all other offsets are printed in decimal.
> >>
> >> Patch 3 is a bug fix to set all "domain_flags", not just the first
> >> one found.
> >>
> >> Patch 4 is a quick way to add domain.guest_type support.
> > Hi Don,
> >
> > The patch looks good to me.  But for the crash.changelog, can you show
> > what happens when you attempt to look at one of these PVH dumps without
> > your patches?
> >
> > Thanks,
> >    Dave
> 
> Before patch 1:
> 
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
> 
> crash: invalid structure member offset: domain_is_hvm
>         FILE: xen_hyper.c  LINE: 1250  FUNCTION: xen_hyper_store_domain_context()
> 
> [/home/don/crash-7.0/crash] error trace: 54c307 => 54ba5f => 51823b => 460895
> 
>    460895: OFFSET_verify.part.25+67
>    51823b: OFFSET_verify+43
>    54ba5f: xen_hyper_store_domain_context+639
>    54c307: xen_hyper_refresh_domain_context_space+151

OK good, that's what I'm looking for -- queued for crash-7.0.5.

Thanks,
  Dave

> 
> 
> After patch 1:
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000 81df80
>        2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000 0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: ffffffffffffffff
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
> 
> After patch 2:
> 
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000 81df80
>        2 ffff83083f12e000 RU U  100100    fd043    2 ffff8300bf4b6000 0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: -1
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
> 
> After patch 3:
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I
>     P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000
>  >     81df80
>        2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000
>        0
> 
> 
> After patch 4:
> 
> 
> dcs-xen-54:/big/crash/vmcore-hang-0>~/crash-7.0/crash vmcore
> xen-syms-4.4-unstable
> 
> crash 7.0.4
> Copyright (C) 2002-2013  Red Hat, Inc.
> Copyright (C) 2004, 2005, 2006, 2010  IBM Corporation
> Copyright (C) 1999-2006  Hewlett-Packard Co
> Copyright (C) 2005, 2006, 2011, 2012  Fujitsu Limited
> Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
> Copyright (C) 2005, 2011  NEC Corporation
> Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
> Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
> This program is free software, covered by the GNU General Public License,
> and you are welcome to change it and/or distribute copies of it under
> certain conditions.  Enter "help copying" to see the conditions.
> This program has absolutely no warranty.  Enter "help warranty" for details.
> 
> GNU gdb (GDB) 7.6
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-unknown-linux-gnu"...
> 
>     KERNEL: xen-syms-4.4-unstable
>   DUMPFILE: vmcore
>       CPUS: 8
>    DOMAINS: 5
>     UPTIME: --:--:--
>    MACHINE: Intel(R) Xeon(R) CPU E31265L @ 2.40GHz  (2400 Mhz)
>     MEMORY: 32 GB
>    PCPU-ID: 0
>       PCPU: ffff82d0802bff18
>    VCPU-ID: 0
>       VCPU: ffff8300bf2fe000  (VCPU_RUNNING)
> DOMAIN-ID: 32767
>     DOMAIN: ffff83083a163000  (DOMAIN_RUNNING)
>      STATE: CRASH
> 
> crash> doms
>     DID       DOMAIN      ST T  MAXPAGE  TOTPAGE VCPU SHARED_I
>     P2M_MFN
>    32753 ffff83083a1a7000 RU O     0        0      0 0              ----
>    32754 ffff83083a164000 RU X     0        0      0 0              ----
>  >*32767 ffff83083a163000 RU I     0        0      8 0              ----
>  >     0 ffff830823fb7000 RU 0 ffffffff   80000    8 ffff8300bf504000
>  >     81df80
>        2 ffff83083f12e000 CP U  100100    fd043    2 ffff8300bf4b6000
>        0
> crash> help -X ofs
>                         ELF_Prstatus_pr_info: 0
>                       ELF_Prstatus_pr_cursig: 12
>                      ELF_Prstatus_pr_sigpend: 16
>                      ELF_Prstatus_pr_sighold: 24
>                          ELF_Prstatus_pr_pid: 32
>                         ELF_Prstatus_pr_ppid: 36
>                         ELF_Prstatus_pr_pgrp: 40
>                          ELF_Prstatus_pr_sid: 44
>                        ELF_Prstatus_pr_stime: 64
>                       ELF_Prstatus_pr_cutime: 80
>                       ELF_Prstatus_pr_cstime: 96
>                          ELF_Prstatus_pr_reg: 112
>                      ELF_Prstatus_pr_fpvalid: 328
>                           ELF_Timeval_tv_sec: 0
>                          ELF_Timeval_tv_usec: 8
>                     arch_shared_info_max_pfn: 0
> arch_shared_info_pfn_to_mfn_frame_list_list: 8
>                  arch_shared_info_nmi_reason: 16
>                 cpu_info_guest_cpu_user_regs: 0
>                        cpu_info_processor_id: 200
>                        cpu_info_current_vcpu: 208
>                     cpu_time_local_tsc_stamp: 0
>                   cpu_time_stime_local_stamp: 0
>                  cpu_time_stime_master_stamp: 0
>                           cpu_time_tsc_scale: 0
>                   cpu_time_calibration_timer: 0
>                            crash_note_t_core: -1
>                             crash_note_t_xen: -1
>                        crash_note_t_xen_regs: -1
>                        crash_note_t_xen_info: -1
>                       crash_note_core_t_note: -1
>                       crash_note_core_t_desc: -1
>                        crash_note_xen_t_note: -1
>                        crash_note_xen_t_desc: -1
>                   crash_note_xen_core_t_note: -1
>                   crash_note_xen_core_t_desc: -1
>                   crash_note_xen_info_t_note: -1
>                   crash_note_xen_info_t_desc: -1
>                             domain_page_list: 0
>                          domain_xenpage_list: 0
>                             domain_domain_id: 0
>                             domain_tot_pages: 56
>                             domain_max_pages: 64
>                         domain_xenheap_pages: 76
>                           domain_shared_info: 8
>                            domain_sched_priv: 88
>                          domain_next_in_list: 104
>                          domain_domain_flags: -1
>                                domain_evtchn: 144
>                                domain_is_hvm: -1
>                            domain_guest_type: 272
>                         domain_is_privileged: 278
>                     domain_debugger_attached: 288
>                              domain_is_dying: 292
>               domain_is_paused_by_controller: 296
>                      domain_is_shutting_down: 316
>                          domain_is_shut_down: 317
>                                  domain_vcpu: 352
>                                  domain_arch: 384
>                  schedule_data_schedule_lock: 0
>                           schedule_data_curr: 16
>                           schedule_data_idle: 0
>                     schedule_data_sched_priv: 24
>                        schedule_data_s_timer: 32
>                           schedule_data_tick: -1
>                               scheduler_name: 0
>                           scheduler_opt_name: 8
>                           scheduler_sched_id: 16
>                               scheduler_init: 40
>                               scheduler_tick: -1
>                          scheduler_init_vcpu: -1
>                     scheduler_destroy_domain: 112
>                              scheduler_sleep: 136
>                               scheduler_wake: 144
>                       scheduler_set_affinity: -1
>                        scheduler_do_schedule: 168
>                             scheduler_adjust: 192
>                      scheduler_dump_settings: 216
>                     scheduler_dump_cpu_state: 224
>                        shared_info_vcpu_info: 0
>                   shared_info_evtchn_pending: 2048
>                      shared_info_evtchn_mask: 2560
>                             shared_info_arch: 3088
>                                timer_expires: 0
>                                    timer_cpu: 40
>                               timer_function: 24
>                                   timer_data: 32
>                            timer_heap_offset: -1
>                                 timer_killed: -1
>                              tss_struct_rsp0: 4
>                              tss_struct_esp0: 0
>                                 vcpu_vcpu_id: 0
>                               vcpu_processor: 4
>                               vcpu_vcpu_info: 8
>                                  vcpu_domain: 16
>                            vcpu_next_in_list: 24
>                                   vcpu_timer: -1
>                              vcpu_sleep_tick: -1
>                              vcpu_poll_timer: 144
>                              vcpu_sched_priv: 192
>                                vcpu_runstate: 200
>                          vcpu_runstate_guest: 248
>                              vcpu_vcpu_flags: -1
>                             vcpu_pause_count: 296
>                          vcpu_virq_to_evtchn: 300
>                            vcpu_cpu_affinity: 352
>                                vcpu_nmi_addr: -1
>                      vcpu_vcpu_dirty_cpumask: 376
>                                    vcpu_arch: 640
>                     vcpu_runstate_info_state: 0
>          vcpu_runstate_info_state_entry_time: 8
>                      vcpu_runstate_info_time: 16
> 
> 
>     -Don Slutz
> 
> 
> >> Don Slutz (4):
> >>    Make domian.is_hvm optional
> >>    xen: Fix offset output to be decimal.
> >>    xen: set all domain_flags, not just the 1st.
> >>    Add basic domain.guest_type support.
> >>
> >>   xen_hyper.c             | 32 ++++++++++++++++++++++++--------
> >>   xen_hyper_defs.h        |  1 +
> >>   xen_hyper_dump_tables.c |  4 +++-
> >>   3 files changed, 28 insertions(+), 9 deletions(-)
> >>
> >> --
> >> 1.8.4
> >>
> >> --
> >> Crash-utility mailing list
> >> Crash-utility@redhat.com
> >> https://www.redhat.com/mailman/listinfo/crash-utility
> >>
> > _______________________________________________
> > kexec mailing list
> > kexec@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/kexec
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 20:55:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0HCJ-0007Ix-NC; Mon, 06 Jan 2014 20:54:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W0HCI-0007Is-Po
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 20:54:54 +0000
Received: from [85.158.139.211:38954] by server-10.bemta-5.messagelabs.com id
	2D/41-01405-E181BC25; Mon, 06 Jan 2014 20:54:54 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389041692!8133686!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29058 invoked from network); 6 Jan 2014 20:54:53 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-206.messagelabs.com with SMTP;
	6 Jan 2014 20:54:53 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s06KrcxG015974
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 15:53:39 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-34.ams2.redhat.com
	[10.36.112.34])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s06KrYCV032356; Mon, 6 Jan 2014 15:53:35 -0500
Message-ID: <52CB17D1.2060400@redhat.com>
Date: Mon, 06 Jan 2014 21:53:37 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
References: <20140106125410.GD3119@zion.uk.xensource.com>	<1389014715.19378.8.camel@hamster.uk.xensource.com>	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de>
In-Reply-To: <52CAEF54.7030901@suse.de>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/2014 19:00, Andreas Färber wrote:
> On 06.01.2014 16:39, Anthony Liguori wrote:
>> We already have accel=xen.  I'm echoing Peter's suggestion of having the
>> ability to compile out accel=tcg.
>

> Didn't you and Paolo even have patches for that a while ago?

Yes, but some code shuffling is required in each target to make sure you
can compile out translate-all.c, cputlb.c, etc.  So my patches only
worked for x86 at the time.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 20:59:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 20:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0HH8-0007kG-LY; Mon, 06 Jan 2014 20:59:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W0HH6-0007kA-Qc
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 20:59:53 +0000
Received: from [85.158.137.68:7511] by server-17.bemta-3.messagelabs.com id
	CC/3E-15965-7491BC25; Mon, 06 Jan 2014 20:59:51 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389041990!7539190!1
X-Originating-IP: [209.85.128.46]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18215 invoked from network); 6 Jan 2014 20:59:51 -0000
Received: from mail-qe0-f46.google.com (HELO mail-qe0-f46.google.com)
	(209.85.128.46)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Jan 2014 20:59:51 -0000
Received: by mail-qe0-f46.google.com with SMTP id a11so18673765qen.19
	for <xen-devel@lists.xen.org>; Mon, 06 Jan 2014 12:59:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=OwIWnhJsN4P6x4KTTys/qrJgKT6uCd9B8taBxiLy5ig=;
	b=M2+aTd8zNOq+FpYSMr0R+qmqMlkMwwahCp9YfI4ExBH4U2YsVkcs0G39dM68j7rgvh
	2fyF4NUaVBZu5LfODUy2FBAw4umX960FTcGDWqKIkrGgNIhjAo5wdzv2CfpeUwEVvM65
	h2vvI0fy1DAoUKyTTKOQyQ8kFizg0/zvsewmZDxv8EvT8FBchJ3490Vhieip/T/eMHA9
	WWNDRm7PFtZaHdtZ6C/9nY6Dz3ilxws2+8soAT1JbIK8zfqDVTSe6zCasunHIzIFa0oU
	oIoYqnqZ8EhsiB+PIoQLw+VcK8AMt8dYCcvJ++oFWkP7XVXLtO0pJl2/W0KUc4Nc+A1X
	wUsg==
X-Received: by 10.224.42.197 with SMTP id t5mr10106737qae.57.1389041990032;
	Mon, 06 Jan 2014 12:59:50 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-192-186.cust.dsl.vodafone.it.
	[2.35.192.186])
	by mx.google.com with ESMTPSA id z4sm96649732qeq.11.2014.01.06.12.59.46
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 06 Jan 2014 12:59:48 -0800 (PST)
Message-ID: <52CB1943.8010006@redhat.com>
Date: Mon, 06 Jan 2014 21:59:47 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Peter Maydell <peter.maydell@linaro.org>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CAFEAcA_Sh-fr3k+NcZrCCrrBW2oTdvDL+COpp19KRAW4RJ7C5g@mail.gmail.com>
In-Reply-To: <CAFEAcA_Sh-fr3k+NcZrCCrrBW2oTdvDL+COpp19KRAW4RJ7C5g@mail.gmail.com>
X-Enigmail-Version: 1.6
Cc: =?UTF-8?B?w4Frb3MgSw==?= =?UTF-8?B?b3bDoWNz?= <akoskovacs@gmx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/2014 16:04, Peter Maydell wrote:
> On 6 January 2014 14:54, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> How would you avoid the compilation of all the
>> unnecessary emulated devices?
> 
> Didn't we have some patches for doing a Kconfig-style
> "select the devices you need" build recently?

It was really "select what boards you need" and "select any additional
-device-capable devices you need".  So the covered devices were
basically ISA, PCI, USB and I2C.  Other files (e.g. scsi-bus.c or sd.c)
were picked up via dependencies so they could still be left out unless
necessary.

The final plan was to keep the default-configs/ files, and add
dependency-handling via the Kconfig language but without Kconfig sources
itself.  Akos, did you go any further with your mini_kconfig.py parser?

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

were picked up via dependencies so they could still be left out unless
necessary.

The final plan was to keep the default-configs/ files, and add
dependency-handling via the Kconfig language but without Kconfig sources
itself.  Akos, did you go any further with your mini_kconfig.py parser?

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 21:28:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 21:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0HiX-0000I1-VY; Mon, 06 Jan 2014 21:28:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W0HiW-0000Hw-S4
	for Xen-devel@lists.xen.org; Mon, 06 Jan 2014 21:28:13 +0000
Received: from [85.158.143.35:65417] by server-1.bemta-4.messagelabs.com id
	13/7E-02132-BEF1BC25; Mon, 06 Jan 2014 21:28:11 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389043689!9908673!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24326 invoked from network); 6 Jan 2014 21:28:10 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-7.tower-21.messagelabs.com with SMTP;
	6 Jan 2014 21:28:10 -0000
Received: from [137.65.220.140] ([137.65.220.140])
	by mail.novell.com with ESMTP; Mon, 06 Jan 2014 14:28:01 -0700
Message-ID: <52CB1F8A.2020308@suse.com>
Date: Mon, 06 Jan 2014 14:26:34 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52B07D09.5060008@canonical.com>				<1387299534.1025.19.camel@dagon.hellion.org.uk>				<52B08AA9.8010809@canonical.com>				<1387369646.27441.129.camel@kazak.uk.xensource.com>				<52B19F4E.8010601@canonical.com>			<1387373284.28680.18.camel@kazak.uk.xensource.com>			<52B1B842.4090306@canonical.com>	<52B2415A.3030903@suse.com>		<1387448340.9925.30.camel@kazak.uk.xensource.com>		<52B3278D.3000607@canonical.com>
	<52B33D6C.6010608@suse.com>	<1387534262.17289.34.camel@kazak.uk.xensource.com>
	<52B92832.1030705@suse.com>
In-Reply-To: <52B92832.1030705@suse.com>
Cc: libvir-list@redhat.com, Stefan Bader <stefan.bader@canonical.com>,
	Xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] Setting devid for emulated NICs (Xen
 4.3.1 / libvirt 1.2.0) using libxl driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig wrote:
> Ian Campbell wrote:
>   
>> On Thu, 2013-12-19 at 11:39 -0700, Jim Fehlig wrote:
>>   
>>     
>>> Stefan Bader wrote:
>>>     
>>>       
>>>> Oh, just while talking about setdefault. Jim, this is one of the odd things when
>>>> moving from xm to xl stack from libvirt: libvirt defaults to the netfront NIC
>>>> when no model is specified and sets the type. The libxl setdefault function sets
>>>> the model to rtl8139 but leaves the type untouched.
>>>>       
>>>>         
>>> The xend toolstack always creates both emulated and vif devices unless
>>> 'type=netfront' is explicitly specified.  As you say, the guest gets to
>>> choose what to do with them.  E.g. PXE boot using the emulated device,
>>> or have the driver for the PV device unplug the emulated one.  I don't
>>> think libxl supports this, right?
>>>     
>>>       
>> On my 4.3.1 setup, I changed the above to
>>     

Updated the system to Xen 4.4 rc1 meanwhile...

>>
>>   if (hvm) {
>>       x_nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU;
>>       if (l_nic->model) {
>>           if (VIR_STRDUP(x_nic->model, l_nic->model) < 0)
>>               return -1;
>>           if (STREQ(l_nic->model, "netfront"))
>>               x_nic->nictype = LIBXL_NIC_TYPE_VIF;
>>       }
>>   } else {
>>       x_nic->nictype = LIBXL_NIC_TYPE_VIF;
>>   }
>>
>>
>>
>> which is better initialization logic IMO.  If the domain is hvm, set
>> nictype to LIBXL_NIC_TYPE_VIF_IOEMU, unless model 'netfront' is
>> specified.  This behavior is consistent with the legacy xen driver. 
>> The change seems to work fine and resolves the PXE issue Stefan noted -
>>     

I've submitted a patch for the PXE issue to the libvirt list:

https://www.redhat.com/archives/libvir-list/2014-January/msg00208.html

>> as long as I initialize devid in libvirt.  So we'll need the above fix
>> in libvirt, as well as a resolution to the nic devid initialization in
>> libxl that started this thread.
>>     

Stefan, any progress on the devid initialization?

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 21:33:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 21:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Hnd-0000m7-O0; Mon, 06 Jan 2014 21:33:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W0Hnb-0000m1-WA
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 21:33:28 +0000
Received: from [85.158.137.68:53476] by server-5.bemta-3.messagelabs.com id
	B3/EA-25188-7212BC25; Mon, 06 Jan 2014 21:33:27 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389044005!7511479!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4678 invoked from network); 6 Jan 2014 21:33:26 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-5.tower-31.messagelabs.com with SMTP;
	6 Jan 2014 21:33:26 -0000
Received: from [137.65.220.140] ([137.65.220.140])
	by mail.novell.com with ESMTP; Mon, 06 Jan 2014 14:33:17 -0700
Message-ID: <52CB20C5.6070906@suse.com>
Date: Mon, 06 Jan 2014 14:31:49 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52B07D09.5060008@canonical.com>	<1387299534.1025.19.camel@dagon.hellion.org.uk>	<52B08AA9.8010809@canonical.com>	<1387369646.27441.129.camel@kazak.uk.xensource.com>	<52B19F4E.8010601@canonical.com>	<1387373284.28680.18.camel@kazak.uk.xensource.com>	<52B1B842.4090306@canonical.com>
	<52B2415A.3030903@suse.com>	<1387448340.9925.30.camel@kazak.uk.xensource.com>	<52B3278D.3000607@canonical.com>
	<52B33D6C.6010608@suse.com>	<1387534262.17289.34.camel@kazak.uk.xensource.com>	<52B41BED.5000808@canonical.com>	<1387535806.17289.50.camel@kazak.uk.xensource.com>	<52B42459.3080609@canonical.com>
	<1387538524.17289.64.camel@kazak.uk.xensource.com>
In-Reply-To: <1387538524.17289.64.camel@kazak.uk.xensource.com>
Cc: Stefan Bader <stefan.bader@canonical.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] Setting devid for emulated NICs (Xen
 4.3.1 / libvirt 1.2.0) using libxl driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Fri, 2013-12-20 at 12:04 +0100, Stefan Bader wrote:
>   
>> config specifies no model or model == netfront:
>>  - nic->model unset, nic->type = NIC_TYPE_VIF
>> config specifies any other model:
>>  - nic->model = <name>, nic->type = NIC_TYPE_VIF_IOEMU
>>
>> In libxl__device_nic_setdefault:
>>  - nic->model unset -> nic->model = "rtl8139"
>>  - For HVM domains:
>>    - nic->type unset -> nic->type = NIC_TYPE_VIF_IOEMU
>>     
>
> OK, I think this is all working as intended.
>
>   
>> I am only "complaining" about the case of having no NIC model set in the libvirt
>> configuration. This sets NIC_TYPE_VIF but leaves nic->model unset.
>> libxl sets the nic->model later but that has no effect because the type is set
>> to VIF only.
>>     
>
> Correct, the setting of nic->model here is irrelevant since that field
> is ignored if the type is not VIF+IOEMU.
>
>   
>>  And the default used to be VIF+IOEMU with rtl8139 as model.
>>     
>
> Right, this sounds like a libvirt level issue then.
>   

The following patch works in my testing:

https://www.redhat.com/archives/libvir-list/2014-January/msg00208.html

Stefan, can you help test/review the patch?  Would be nice to get this
pushed for the upcoming libvirt 1.2.1 release.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 21:43:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 21:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0HxK-0001KB-1J; Mon, 06 Jan 2014 21:43:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eblake@redhat.com>) id 1W0HxH-0001K6-Ve
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 21:43:28 +0000
Received: from [85.158.137.68:36268] by server-9.bemta-3.messagelabs.com id
	A6/94-13104-F732BC25; Mon, 06 Jan 2014 21:43:27 +0000
X-Env-Sender: eblake@redhat.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389044605!3889126!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27923 invoked from network); 6 Jan 2014 21:43:26 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-31.messagelabs.com with SMTP;
	6 Jan 2014 21:43:26 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s06LhNn3002524
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 16:43:23 -0500
Received: from [10.3.113.2] ([10.3.113.2])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s06LhNkW019430; Mon, 6 Jan 2014 16:43:23 -0500
Message-ID: <52CB237A.10702@redhat.com>
Date: Mon, 06 Jan 2014 14:43:22 -0700
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>, libvir-list@redhat.com
References: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
In-Reply-To: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
X-Enigmail-Version: 1.6
OpenPGP: url=http://people.redhat.com/eblake/eblake.gpg
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: stefan.bader@canonical.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH] libxl: Fix initialization of
	nictype in libxl_device_nic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4145885176418891637=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============4145885176418891637==
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="aQLwduvexPf6Psev9IajvsC8EVfbTj5Li"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--aQLwduvexPf6Psev9IajvsC8EVfbTj5Li
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 01/06/2014 11:59 AM, Jim Fehlig wrote:
> As pointed out by the Xen folks [1], HVM nics should always be set
> to type LIBXL_NIC_TYPE_VIF_IOEMU unless the user explicitly requests
> LIBXL_NIC_TYPE_VIF via model='netfront'.  The current logic in
> libxlMakeNic() only sets the nictype to LIBXL_NIC_TYPE_VIF_IOEMU if
> a model is specified that is not 'netfront', which breaks PXE booting
> configurations where no model is specified (i.e. use the hypervisor
> default).
> 
>   Reported-by: Stefan Bader <stefan.bader@canonical.com>
> 
> [1] https://www.redhat.com/archives/libvir-list/2013-December/msg01156.html
> 
> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
> ---
> 
> I toyed with detecting whether to use an IOEMU nic in libxlMakeNicList()
> and passing the bool to libxlMakeNic(), but in the end left the detection
> to libxlMakeNic().

Seems okay to me with the approach you took.

> 
>  src/libxl/libxl_conf.c | 20 ++++++++++++++------
>  src/libxl/libxl_conf.h |  4 +++-
>  2 files changed, 17 insertions(+), 7 deletions(-)

ACK.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


--aQLwduvexPf6Psev9IajvsC8EVfbTj5Li
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Public key at http://people.redhat.com/eblake/eblake.gpg
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJSyyN6AAoJEKeha0olJ0Nq9xkH/1uiDlj3mql/l6r7IqxgEF3j
bpiqvuG3Nd+RxW/w8HbpH/nDrY5oXjg13106LJfRX4t3PuJSxAgr/WSEXDX6ypy+
s9tUpRX1RyjFP/qdKH6H9ibv2mjx05eXn4fGhqitLinahgf0HHhBT7oduEQt+Hax
w32RpuTEWbx9VyBzxVwBj5Xh0qNh+XWHqKCZZVDq19XnTXnLejmrPADLqNeD1JEU
D/4RJNrb1YZ2BaLa89Cb3Jv49i/D/5aq/I4Obx+Pd+H3NIeMd0g9NIgSlSYRAEa/
SxDDkr5zrwbFsPJKMuQvwBTPgiqj6nL/oVUoLOV1OsN3U9LpowuM+DMqanAIhVQ=
=MDph
-----END PGP SIGNATURE-----

--aQLwduvexPf6Psev9IajvsC8EVfbTj5Li--


--===============4145885176418891637==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4145885176418891637==--


From xen-devel-bounces@lists.xen.org Mon Jan 06 21:45:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 21:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0HzU-0001S4-Mi; Mon, 06 Jan 2014 21:45:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0HzT-0001Rm-5E
	for xen-devel@lists.xenproject.org; Mon, 06 Jan 2014 21:45:43 +0000
Received: from [193.109.254.147:6704] by server-1.bemta-14.messagelabs.com id
	35/00-15600-6042BC25; Mon, 06 Jan 2014 21:45:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389044740!9158985!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20846 invoked from network); 6 Jan 2014 21:45:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 21:45:41 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s06LjVEs000318
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 6 Jan 2014 21:45:32 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s06LjS4c006415
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 6 Jan 2014 21:45:29 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s06LjSu9014654; Mon, 6 Jan 2014 21:45:28 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 13:45:28 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2E0AE1C18DC; Mon,  6 Jan 2014 16:45:27 -0500 (EST)
Date: Mon, 6 Jan 2014 16:45:27 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>
Message-ID: <20140106214527.GA31147@phenom.dumpdata.com>
References: <52308E1402000078000F2748@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>
	<20131211183233.GA2760@phenom.dumpdata.com>
	<52A8D5E5.2030902@bobich.net>
	<20131211213025.GA8283@phenom.dumpdata.com>
	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140106202621.GA30667@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Which would look like this:
> 
> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs) on the card
> 	      \--------------> IEEE-1394a
> 
> I am actually wondering if this 07:00.0 device is the one that
> reports itself as 08:00.0 (which I think is what you were alluding to, Jan)
> 

And to double-check that theory, I decided to pass the IEEE-1394a controller through to a guest:

           +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]


(XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow
(XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0] fault addr 370f1000, iommu reg = ffff82c3ffd53000
(XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) print_vtd_entries: iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1
(XEN)     root_entry = ffff83083d47f000
(XEN)     root_entry[8] = 72569b001
(XEN)     context = ffff83072569b000
(XEN)     context[0] = 0_0
(XEN)     ctxt_entry[0] not present

So the capture card is OK; likely the Tundra bridge has an issue:

07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
        Latency: 0
        Bus: primary=07, secondary=08, subordinate=08, sec-latency=32
        Memory behind bridge: f0600000-f06fffff
        Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
        BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-


or there is some unknown bridge in the motherboard.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 06 23:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jan 2014 23:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0JSF-0005kI-Td; Mon, 06 Jan 2014 23:19:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W0JSE-0005kD-U7
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 23:19:31 +0000
Received: from [193.109.254.147:44183] by server-16.bemta-14.messagelabs.com
	id 9C/68-20600-20A3BC25; Mon, 06 Jan 2014 23:19:30 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389050368!5675211!1
X-Originating-IP: [137.65.248.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10349 invoked from network); 6 Jan 2014 23:19:29 -0000
Received: from inet-orm.provo.novell.com (HELO mail.novell.com)
	(137.65.248.124) by server-4.tower-27.messagelabs.com with SMTP;
	6 Jan 2014 23:19:29 -0000
Received: from [137.65.220.140] ([137.65.220.140])
	by mail.novell.com with ESMTP; Mon, 06 Jan 2014 16:19:20 -0700
Message-ID: <52CB399C.8080203@suse.com>
Date: Mon, 06 Jan 2014 16:17:48 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Eric Blake <eblake@redhat.com>
References: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
	<52CB237A.10702@redhat.com>
In-Reply-To: <52CB237A.10702@redhat.com>
Cc: libvir-list@redhat.com, stefan.bader@canonical.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH] libxl: Fix initialization of
	nictype in libxl_device_nic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Eric Blake wrote:
> On 01/06/2014 11:59 AM, Jim Fehlig wrote:
>   
>> As pointed out by the Xen folks [1], HVM nics should always be set
>> to type LIBXL_NIC_TYPE_VIF_IOEMU unless the user explicitly requests
>> LIBXL_NIC_TYPE_VIF via model='netfront'.  The current logic in
>> libxlMakeNic() only sets the nictype to LIBXL_NIC_TYPE_VIF_IOEMU if
>> a model is specified that is not 'netfront', which breaks PXE booting
>> configurations where no model is specified (i.e. use the hypervisor
>> default).
>>
>>   Reported-by: Stefan Bader <stefan.bader@canonical.com>
>>
>> [1] https://www.redhat.com/archives/libvir-list/2013-December/msg01156.html
>>
>> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
>> ---
>>
>> I toyed with detecting whether to use an IOEMU nic in libxlMakeNicList()
>> and passing the bool to libxlMakeNic(), but in the end left the detection
>> to libxlMakeNic().
>>     
>
> Seems okay to me with the approach you took.
>
>   
>>  src/libxl/libxl_conf.c | 20 ++++++++++++++------
>>  src/libxl/libxl_conf.h |  4 +++-
>>  2 files changed, 17 insertions(+), 7 deletions(-)
>>     
>
> ACK.
>   

Thanks, pushed.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:24:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0KSH-0008R3-Kg; Tue, 07 Jan 2014 00:23:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0KSG-0008Qy-Ci
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 00:23:36 +0000
Received: from [193.109.254.147:30500] by server-12.bemta-14.messagelabs.com
	id 8F/CC-13681-7094BC25; Tue, 07 Jan 2014 00:23:35 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389054212!9192525!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1184 invoked from network); 7 Jan 2014 00:23:34 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 00:23:34 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Jan 2014 00:23:30 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,615,1384300800"; d="scan'208";a="642091241"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.227])
	by fldsmtpi01.verizon.com with ESMTP; 07 Jan 2014 00:23:28 +0000
Message-ID: <52CB4900.8010404@terremark.com>
Date: Mon, 06 Jan 2014 19:23:28 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C54388.105@citrix.com> <52C543E9.1050701@citrix.com>
	<52C5525B.4040704@citrix.com>
In-Reply-To: <52C5525B.4040704@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/14 06:49, David Vrabel wrote:
> On 02/01/14 10:48, Andrew Cooper wrote:
>> On 02/01/14 10:46, David Vrabel wrote:
>>> On 01/01/14 16:51, Don Slutz wrote:
>>>> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
>>> Since this commit introduced a regression, a revert is the best thing to
>>> do here.
>>>
>>> Acked-by: David Vrabel <david.vrabel@citrix.com>
>>>
>>>> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>>>>
>>>> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
>>> I guess Daniel tested a debug build without this crashkernel option.
>>> This would place the crash region above the direct mapping region and
>>> map_domain_page() would do the right thing.
>>>
>>>
>>>> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>>>> +                     kexec_crash_area.start >> PAGE_SHIFT,
>>>> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>>>> +
>>> This should be made conditional on the location of the crash region --
>>> it is wrong to do this for portions of the crash region that are outside
>>> the crash region.
>> Presume you mean "outside the direct-map region"?
> Yes.
>
> David

I have no idea how to even check for "outside the direct-map region", 
or how to test any additional changes.  Testing with "crashkernel=256M" 
((XEN) Kdump: 256MB (262144kB) at 0x82a1be000) does not work before or 
after this patch.  In both cases, I get an error out of kexec:

~/kexec/build/sbin/kexec -p '--command-line=placeholder 
root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 
rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 
console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 
earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices 
cgroup_disable=memory mce=off' 
--initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img 
/boot/vmlinuz-3.8.11-100.fc17.x86_64
Could not find a free area of memory of 0xa000 bytes...
locate_hole failed

I am currently busy with other tasks, so I do not expect to get to 
learning about Xen's direct-map region, or to finding out why kexec does 
not work in this case, before the Xen 4.4.0 release date.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:24:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0KSH-0008R3-Kg; Tue, 07 Jan 2014 00:23:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0KSG-0008Qy-Ci
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 00:23:36 +0000
Received: from [193.109.254.147:30500] by server-12.bemta-14.messagelabs.com
	id 8F/CC-13681-7094BC25; Tue, 07 Jan 2014 00:23:35 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389054212!9192525!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1184 invoked from network); 7 Jan 2014 00:23:34 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 00:23:34 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Jan 2014 00:23:30 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,615,1384300800"; d="scan'208";a="642091241"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.227])
	by fldsmtpi01.verizon.com with ESMTP; 07 Jan 2014 00:23:28 +0000
Message-ID: <52CB4900.8010404@terremark.com>
Date: Mon, 06 Jan 2014 19:23:28 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C54388.105@citrix.com> <52C543E9.1050701@citrix.com>
	<52C5525B.4040704@citrix.com>
In-Reply-To: <52C5525B.4040704@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/14 06:49, David Vrabel wrote:
> On 02/01/14 10:48, Andrew Cooper wrote:
>> On 02/01/14 10:46, David Vrabel wrote:
>>> On 01/01/14 16:51, Don Slutz wrote:
>>>> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
>>> Since this commit introduced a regression, a revert is the best thing to
>>> do here.
>>>
>>> Acked-by: David Vrabel <david.vrabel@citrix.com>
>>>
>>>> Using kexec commit 027413d822fd57dd39d2d2afab1484bc6b6b84f9
>>>>
>>>> With "crashkernel=256M@256M" ((XEN) Kdump: 256MB (262144kB) at 0x10000000)
>>> I guess Daniel tested a debug build without this crashkernel option.
>>> This would place the crash region above the direct mapping region and
>>> map_domain_page() would do the right thing.
>>>
>>>
>>>> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>>>> +                     kexec_crash_area.start >> PAGE_SHIFT,
>>>> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>>>> +
>>> This should be made conditional on the location of the crash region --
>>> it is wrong to do this for portions of the crash region that are outside
>>> the crash region.
>> Presume you mean "outside the direct-map region"?
> Yes.
>
> David

I have no idea how to even check for "outside the direct-map region", 
nor how to test any additional changes.  Testing with "crashkernel=256M" 
((XEN) Kdump: 256MB (262144kB) at 0x82a1be000) does not work before or 
after this patch.  In both cases, I get an error out of kexec:

~/kexec/build/sbin/kexec -p '--command-line=placeholder 
root=/dev/mapper/vg_f17--xen-lv_root ro rd.md=0 rd.dm=0 
rd.lvm.lv=vg_f17-xen/lv_swap KEYTABLE=us SYSFONT=True rd.luks=0 
console=ttyS0,9600n8 rd.lvm.lv=vg_f17-xen/lv_root LANG=en_US.UTF-8 
earlyprintk=ttyS0 rd_NO_PLYMOUTH irqpoll nr_cpus=1 reset_devices 
cgroup_disable=memory mce=off' 
--initrd=/boot/initramfs-3.8.11-100.fc17.x86_64kdump.img 
/boot/vmlinuz-3.8.11-100.fc17.x86_64
Could not find a free area of memory of 0xa000 bytes...
locate_hole failed

I am currently busy with other tasks, so I do not expect to get to 
learning about Xen's direct-map region and/or finding out why kexec does 
not work in this case before the Xen 4.4.0 release date.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0KW8-00006f-EV; Tue, 07 Jan 2014 00:27:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W0KW7-00006a-3H
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 00:27:35 +0000
Received: from [85.158.143.35:54801] by server-1.bemta-4.messagelabs.com id
	8C/43-02132-6F94BC25; Tue, 07 Jan 2014 00:27:34 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389054453!9951013!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19114 invoked from network); 7 Jan 2014 00:27:33 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-21.messagelabs.com with SMTP;
	7 Jan 2014 00:27:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 06 Jan 2014 16:27:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,615,1384329600"; d="scan'208";a="454542432"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga001.fm.intel.com with ESMTP; 06 Jan 2014 16:27:19 -0800
Received: from fmsmsx115.amr.corp.intel.com (10.18.116.19) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 6 Jan 2014 16:27:19 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx115.amr.corp.intel.com (10.18.116.19) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 6 Jan 2014 16:27:19 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX151.ccr.corp.intel.com ([10.239.6.50]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 08:27:17 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHO/Axb/dIoK2SSkk6bSCfLWR1khZpgGD8wgBb0GYCAAMGtMA==
Date: Tue, 7 Jan 2014 00:27:16 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D4FD74@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<1387378646.28680.48.camel@kazak.uk.xensource.com>
	<20131218152235.GG4934@phenom.dumpdata.com>
	<1387383185.28680.60.camel@kazak.uk.xensource.com>
	<E959C4978C3B6342920538CF579893F001D35359@SHSMSX104.ccr.corp.intel.com>
	<1389002063.13274.20.camel@kazak.uk.xensource.com>
In-Reply-To: <1389002063.13274.20.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Monday, January 06, 2014 5:54 PM
> To: Wu, Feng
> Cc: Konrad Rzeszutek Wilk; Anthony PERARD; xen-devel@lists.xenproject.org;
> stefano.stabellini@citrix.com
> Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
> 
> On Sun, 2013-12-22 at 11:25 +0000, Wu, Feng wrote:
> >
> > > -----Original Message-----
> > > From: xen-devel-bounces@lists.xen.org
> > > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Ian Campbell
> > > Sent: Thursday, December 19, 2013 12:13 AM
> > > To: Konrad Rzeszutek Wilk
> > > Cc: Anthony PERARD; xen-devel@lists.xenproject.org;
> > > stefano.stabellini@citrix.com
> > > Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
> > >
> > > On Wed, 2013-12-18 at 10:22 -0500, Konrad Rzeszutek Wilk wrote:
> > > > load_roms and bios_load are not set - so it wouldn't even do it.
> > > > It only does it for Bochs BIOS.
> > >
> > > Right, this is deliberate.
> > >
> > > For ROMBIOS (AKA Bochs BIOS) hvmloader loads the option ROMs, and I
> > > think ROMBIOS subsequently executes them.
> > >
> > > For SeaBIOS it is the BIOS itself which both loads and executes the
> > > ROMs, which is why it is NULL in hvmloader.
> > >
> > > The SeaBIOS way is far closer to how real systems work, and because
> > > the BIOS is in charge it can do a better job than splitting the work
> > > between two entities.
> > >
> >
> > I am also interested in this thread. Do you know why ROMBIOS doesn't handle
> > option ROMs the same way as SeaBIOS?
> 
> ROMBIOS was always this way and nobody has updated it to do things
> properly. It's unclear whether it is worth the effort.
> 
> IMHO any effort expended would be better spent bringing qemu-upstream
> +seabios up to scratch rather than spending time fixing qemu-traditional
> +ROMBIOS for new use cases. Although qemu-traditional+ROMBIOS is not
> going to be removed (in order to retain compatibility with existing
> configurations) it is also not taking new features etc.
> 

Thanks a lot for your explanation!

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:42:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Kkj-0000vq-RH; Tue, 07 Jan 2014 00:42:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0Kki-0000vl-GT
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 00:42:40 +0000
Received: from [85.158.137.68:60096] by server-2.bemta-3.messagelabs.com id
	FA/EC-17329-F7D4BC25; Tue, 07 Jan 2014 00:42:39 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389055357!6408794!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10417 invoked from network); 7 Jan 2014 00:42:38 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 00:42:38 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s070fVUI018604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 00:41:33 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s070fUIT013470
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 00:41:30 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s070fTQt026595; Tue, 7 Jan 2014 00:41:29 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 16:41:28 -0800
Date: Mon, 6 Jan 2014 16:41:27 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140106164127.556d654e@mantra.us.oracle.com>
In-Reply-To: <52CB0922.1030209@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-2-git-send-email-dslutz@verizon.com>
	<52C9E621.4020608@citrix.com> <52CAA503.5080707@terremark.com>
	<52CAABF8.30702@citrix.com> <52CB0922.1030209@terremark.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/4] dbg_rw_guest_mem: need to call
 put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 06 Jan 2014 14:50:58 -0500
Don Slutz <dslutz@verizon.com> wrote:

> On 01/06/14 08:13, Andrew Cooper wrote:
> > On 06/01/14 12:43, Don Slutz wrote:
> >> On 01/05/14 18:09, Andrew Cooper wrote:
> >>> On 04/01/2014 17:52, Don Slutz wrote:
> >>>> Using a 1G hvm domU (in grub) and gdbsx:
> >>>>
......
> >>> Content-wise, I think it would be better to fix up the error path
> >>> in dbg_hvm_va2mfn() so it doesn't exit with INVALID_MFN having
> >>> taken a reference on the gfn.

Right, that is similar to what I had done when I noticed this early last
year. I had just changed to a nolock query since the domain is paused during
this call, but in hindsight it's better to do a locked query. So I prefer
doing it in dbg_hvm_va2mfn() too.

http://lists.xen.org/archives/html/xen-devel/2013-01/msg00781.html

Will wait for the v2 you said you will be submitting.

thanks,
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:51:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Kt0-0001Ow-3k; Tue, 07 Jan 2014 00:51:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0Ksy-0001Or-M9
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 00:51:12 +0000
Received: from [85.158.137.68:48715] by server-11.bemta-3.messagelabs.com id
	75/78-19379-F7F4BC25; Tue, 07 Jan 2014 00:51:11 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389055869!3909715!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9123 invoked from network); 7 Jan 2014 00:51:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 00:51:10 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s070o5qP025366
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 00:50:05 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s070o3wi012311
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 00:50:04 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s070o3ip029333; Tue, 7 Jan 2014 00:50:03 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 16:50:03 -0800
Date: Mon, 6 Jan 2014 16:50:01 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140106165001.2cd08ce9@mantra.us.oracle.com>
In-Reply-To: <1388857936-664-4-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-4-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 3/4] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat,  4 Jan 2014 12:52:15 -0500
Don Slutz <dslutz@verizon.com> wrote:

> I had coded this with XGERR, but gdb will try to read memory without
> a direct request from the user.  So the error message can be
> confusing.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 5736b86..a192478 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>  {
>      struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
>      union {uint64_t llbuf8; char buf8[8];} u = {0};
> -    int i;
> +    int i, rc;
>  
>      XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
>  
> @@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->len = tobuf_len;
>      iop->gwr = 0;       /* not writing to guest */
>  
> -    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
> +    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> +        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> +              iop->remain, errno, rc);
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> confusing.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c
> b/tools/debugger/gdbsx/xg/xg_main.c index 5736b86..a192478 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int
> tobuf_len, uint64_t pgd3val) {
>      struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
>      union {uint64_t llbuf8; char buf8[8];} u = {0};
> -    int i;
> +    int i, rc;
>  
>      XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf,
> tobuf_len); 
> @@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int
> tobuf_len, uint64_t pgd3val) iop->len = tobuf_len;
>      iop->gwr = 0;       /* not writing to guest */
>  
> -    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
> +    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf,
> tobuf_len)) )
> +        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> +              iop->remain, errno, rc);
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 00:54:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 00:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0KwU-0001Wa-Ps; Tue, 07 Jan 2014 00:54:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0KwT-0001WU-Ph
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 00:54:49 +0000
Received: from [193.109.254.147:46210] by server-4.bemta-14.messagelabs.com id
	CA/C8-03916-9505BC25; Tue, 07 Jan 2014 00:54:49 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389056086!9195508!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16568 invoked from network); 7 Jan 2014 00:54:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 00:54:47 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s070rhj1027800
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 00:53:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s070rgJx018137
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 00:53:42 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s070rgvq004937; Tue, 7 Jan 2014 00:53:42 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 16:53:41 -0800
Date: Mon, 6 Jan 2014 16:53:40 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140106165340.30c42de0@mantra.us.oracle.com>
In-Reply-To: <1389023148.31766.64.camel@kazak.uk.xensource.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1389023148.31766.64.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 0/4] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014 15:45:48 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Sat, 2014-01-04 at 12:52 -0500, Don Slutz wrote:
> > Release manager requests:
> >   patch 1 should be in 4.4.0
> >   patch 3 and 4 would be good to be in 4.4.0
> 
> (as acting RM in George's absence):
> 
> In all three cases what is missing is the "why" and the appropriate
> analysis from
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze 
> i.e. what is the impact of the bug (i.e. what are the advantages of
> the fix) and what are the risks of this change causing further
> breakage? I'm not really in a position to evaluate the risk of a
> change in gdbsx, so someone needs to tell me.
> 
> I think given that gdbsx is a somewhat "peripheral" bit of code and
> that it is targeted at developers (who might be better able to
> tolerate any resulting issues and more able to apply subsequent
> fixups than regular users) we can accept a larger risk than we would
> with a change to the hypervisor itself etc (that's assuming that all
> of these changes cannot potentially impact non-debugger usecases
> which I expect is the case from the function names but I would like
> to see confirmed).
> 
> Apart from a release ack 4 of these would need an Ack from Mukesh
> IMHO. TBH if he is happy with them then I see no reason not to give a
> release ack as well.

Right Ian, at present I believe the only user of this file is gdbsx
so 4.4 would be very low risk. The issue is in a rare use case, so it 
could wait till 4.5 too if you'd rather not. Ok either way... :)..

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 01:05:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 01:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0L6r-0005v8-No; Tue, 07 Jan 2014 01:05:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0L6p-0005v3-It
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 01:05:31 +0000
Received: from [85.158.137.68:28332] by server-15.bemta-3.messagelabs.com id
	E9/47-11556-AD25BC25; Tue, 07 Jan 2014 01:05:30 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389056728!7614482!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18945 invoked from network); 7 Jan 2014 01:05:29 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 01:05:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0714OM6012303
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 01:04:25 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0714KGg023808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 01:04:23 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0714Khv005125; Tue, 7 Jan 2014 01:04:20 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 17:04:19 -0800
Date: Mon, 6 Jan 2014 17:04:18 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140106170418.55208f82@mantra.us.oracle.com>
In-Reply-To: <1388857936-664-3-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat,  4 Jan 2014 12:52:14 -0500
Don Slutz <dslutz@verizon.com> wrote:

> This also fixes the old debug output to compile and work if DBGP1
> and DBGP2 are defined like DBGP3.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/debug.c | 41 +++++++++++++++++++++++++++++++----------
>  1 file changed, 31 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index eceb805..2e0a05a 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -41,6 +41,12 @@
>  #define DBGP2(...) ((void)0)
>  #endif
>  
> +#ifdef XEN_GDBSX_DEBUG3
> +#define DBGP3(...) gdprintk(XENLOG_DEBUG, __VA_ARGS__)
> +#else
> +#define DBGP3(...) ((void)0)
> +#endif
> +

Umm... some historical perspective... this file is shared by both
gdbsx and kdb, and possibly any future debug-type tools. While gdbsx
got merged, kdb did not. So how about just defining, say, dbg_debug to
replace kdbdbg (please don't call it "debug" - I hate using words that
grep turns up a million results for), and then just change DBGP*. Also,
it looks like DBGP is not used anywhere, so we only need two...

static volatile int dbg_debug;
#define DBGP(...) {(dbg_debug) ? printk(__VA_ARGS__) : 0;}
#define DBGP1(...) {(dbg_debug > 1) ? printk(__VA_ARGS__) : 0;}

This allows us to not need XEN_GDBSX_DEBUG3, and also the debug
can be enabled dynamically without recompile (in my case, via kdb).

thanks,
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 01:55:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 01:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Lsj-0007oL-Fg; Tue, 07 Jan 2014 01:55:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0Lsh-0007oG-I7
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 01:54:59 +0000
Received: from [85.158.143.35:40059] by server-3.bemta-4.messagelabs.com id
	1F/62-32360-27E5BC25; Tue, 07 Jan 2014 01:54:58 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389059696!10019331!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28679 invoked from network); 7 Jan 2014 01:54:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 01:54:57 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s071rr1D016653
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 01:53:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s071rpcB003452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 01:53:51 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s071ro3M006767; Tue, 7 Jan 2014 01:53:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 06 Jan 2014 17:53:50 -0800
Date: Mon, 6 Jan 2014 17:53:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140106175349.6cbd190b@mantra.us.oracle.com>
In-Reply-To: <1388857936-664-5-git-send-email-dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat,  4 Jan 2014 12:52:16 -0500
Don Slutz <dslutz@verizon.com> wrote:

> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> returned.
> 
> Without this gdb does not report an error.
> 
> With this patch and using a 1G hvm domU:
> 
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/domctl.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index ef6c140..4aa751f 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -997,8 +997,7 @@ long arch_do_domctl(
>              domctl->u.gdbsx_guest_memio.len;
>  
>          ret = gdbsx_guest_mem_io(domctl->domain,
> &domctl->u.gdbsx_guest_memio);
> -        if ( !ret )
> -           copyback = 1;
> +        copyback = 1;
>      }
>      break;
>  

Ooopsy... my thought was that an application should not even look at
remain if the hcall/syscall failed, but I forgot that when writing
gdbsx itself :). Think of it this way: if the call didn't even make it to
xen, and for some reason the ioctl returned a non-zero rc, then remain
would still be zero. So I think we should fix gdbsx instead of fixing it here:

xg_write_mem():
    if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
    {
        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
               iop->remain, errno, rc);
        return iop->len;
    }    

Similarly in xg_read_mem().

Hope that makes sense. Don't mean to create work for you for my mistake,
so if you don't have time, I can submit a patch for this too.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat,  4 Jan 2014 12:52:16 -0500
Don Slutz <dslutz@verizon.com> wrote:

> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> returned.
> 
> Without this gdb does not report an error.
> 
> With this patch and using a 1G hvm domU:
> 
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/domctl.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index ef6c140..4aa751f 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -997,8 +997,7 @@ long arch_do_domctl(
>              domctl->u.gdbsx_guest_memio.len;
>  
>          ret = gdbsx_guest_mem_io(domctl->domain,
> &domctl->u.gdbsx_guest_memio);
> -        if ( !ret )
> -           copyback = 1;
> +        copyback = 1;
>      }
>      break;
>  

Ooopsy... my thought was that an application should not even look at
remain if the hcall/syscall failed, but I forgot that when writing
gdbsx itself :). Think of it this way: if the call didn't even make it to
xen, and for some reason the ioctl returned a non-zero rc, then remain would
still be zero. So I think we should fix gdbsx itself rather than fixing it here:

xg_write_mem():
    if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
    {
        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
               iop->remain, errno, rc);
        return iop->len;
    }    

Similarly in xg_read_mem().

Hope that makes sense. I don't mean to create work for you for my own mistake,
so if you don't have time, I can submit a patch for this too.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 03:17:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 03:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0NAL-0002b0-PY; Tue, 07 Jan 2014 03:17:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0NAK-0002av-Sf
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 03:17:17 +0000
Received: from [85.158.139.211:42061] by server-8.bemta-5.messagelabs.com id
	ED/73-29838-CB17BC25; Tue, 07 Jan 2014 03:17:16 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389064634!8201772!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14255 invoked from network); 7 Jan 2014 03:17:15 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-206.messagelabs.com with SMTP;
	7 Jan 2014 03:17:15 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 06 Jan 2014 19:17:13 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,616,1384329600"; d="scan'208";a="434867620"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 06 Jan 2014 19:17:11 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.18.116.9) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 6 Jan 2014 19:17:10 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx109.amr.corp.intel.com (10.18.116.9) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 6 Jan 2014 19:17:10 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 11:17:08 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Gordan Bobic
	<gordan@bobich.net>
Thread-Topic: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
	issues with LSI MegaSAS (PERC5i))
Thread-Index: AQHPCx2fAP1cdYPAPEyz1duWHtr/uJp3tOGAgADglRA=
Date: Tue, 7 Jan 2014 03:17:08 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
References: <52308E1402000078000F2748@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>
	<20131211183233.GA2760@phenom.dumpdata.com>
	<52A8D5E5.2030902@bobich.net>
	<20131211213025.GA8283@phenom.dumpdata.com>
	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
In-Reply-To: <20140106214527.GA31147@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote on 2014-01-07:
>> Which would look like this:
>> 
>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs) on the card
>> 	      \--------------> IEEE-1394a
>> 
>> I am actually wondering if this 07:00.0 device is the one that
>> reports itself as 08:00.0 (which I think is what you alluding to
>> Jan)
>> 
> 
> And to double check that theory I decided to pass in the IEEE-1394a to a guest:
> 
>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> 
> 
> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow
> (XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
> (XEN) [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0] fault addr 370f1000, iommu reg = ffff82c3ffd53000
> (XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
> (XEN) print_vtd_entries: iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1
> (XEN)     root_entry = ffff83083d47f000
> (XEN)     root_entry[8] = 72569b001
> (XEN)     context = ffff83072569b000
> (XEN)     context[0] = 0_0
> (XEN)     ctxt_entry[0] not present
> 
> So, capture card OK - Likely the Tundra bridge has an issue:
> 
> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
>         Latency: 0
>         Bus: primary=07, secondary=08, subordinate=08, sec-latency=32
>         Memory behind bridge: f0600000-f06fffff
>         Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>         BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
>         Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
>         Capabilities: [a0] Power Management version 3
>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
>                 Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
> 
> or there is some unknown bridge in the motherboard.

According to your description above, upstream Linux should also have the same problem. Did you see it with upstream Linux?
There may be some buggy devices that generate DMA requests with an internal BDF but don't expose it (unlike phantom devices). For those devices, I think we need to set up the VT-d page tables manually.

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 04:43:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 04:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0OUt-0005oR-Bd; Tue, 07 Jan 2014 04:42:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sfr@canb.auug.org.au>) id 1W0OUs-0005oM-7k
	for Xen-devel@lists.xensource.com; Tue, 07 Jan 2014 04:42:34 +0000
Received: from [193.109.254.147:24866] by server-10.bemta-14.messagelabs.com
	id E3/BD-20752-9B58BC25; Tue, 07 Jan 2014 04:42:33 +0000
X-Env-Sender: sfr@canb.auug.org.au
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389069749!9206909!1
X-Originating-IP: [203.10.76.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAzLjEwLjc2LjQ1ID0+IDIyNjg2\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31787 invoked from network); 7 Jan 2014 04:42:32 -0000
Received: from ozlabs.org (HELO ozlabs.org) (203.10.76.45)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 04:42:32 -0000
Received: from canb.auug.org.au (ibmaus65.lnk.telstra.net [165.228.126.9])
	(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by ozlabs.org (Postfix) with ESMTPSA id 6555F2C00B6;
	Tue,  7 Jan 2014 15:42:28 +1100 (EST)
Date: Tue, 7 Jan 2014 15:42:22 +1100
From: Stephen Rothwell <sfr@canb.auug.org.au>
To: Jeremy Fitzhardinge <jeremy@goop.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>, Xen Devel
	<Xen-devel@lists.xensource.com>, Russell King <rmk@arm.linux.org.uk>
Message-Id: <20140107154222.b2c0c8081441671f32d78bc1@canb.auug.org.au>
X-Mailer: Sylpheed 3.4.0beta7 (GTK+ 2.24.22; i486-pc-linux-gnu)
Mime-Version: 1.0
Cc: linux-next@vger.kernel.org, linux-kernel@vger.kernel.org,
	Rob Herring <rob.herring@calxeda.com>, Wei Liu <liuw@liuw.name>
Subject: [Xen-devel] linux-next: manual merge of the xen-tip tree with the
 arm-current tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4807876475369508745=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4807876475369508745==
Content-Type: multipart/signed; protocol="application/pgp-signature";
 micalg="PGP-SHA256";
 boundary="Signature=_Tue__7_Jan_2014_15_42_22_+1100_o1XXANa77vLhDjqz"

--Signature=_Tue__7_Jan_2014_15_42_22_+1100_o1XXANa77vLhDjqz
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hi all,

Today's linux-next merge of the xen-tip tree got a conflict in
arch/arm/include/asm/xen/page.h between commit 0a5ccc86507f ("ARM:
7933/1: rename ioremap_cached to ioremap_cache") from the arm-current tree and
commit 02bcf053e9c5 ("asm/xen/page.h: remove redundant semicolon") from
the xen-tip tree.

I fixed it up (see below) and can carry the fix as necessary (no action
is required).

--=20
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au

diff --cc arch/arm/include/asm/xen/page.h
index 3759cacdd7f8,709c4b4d2f1d..000000000000
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@@ -117,6 -117,7 +117,7 @@@ static inline bool set_phys_to_machine(
  	return __set_phys_to_machine(pfn, mfn);
  }
 =20
- #define xen_remap(cookie, size) ioremap_cache((cookie), (size));
 -#define xen_remap(cookie, size) ioremap_cached((cookie), (size))
++#define xen_remap(cookie, size) ioremap_cache((cookie), (size))
+ #define xen_unmap(cookie) iounmap((cookie))
 =20
  #endif /* _ASM_ARM_XEN_PAGE_H */

--Signature=_Tue__7_Jan_2014_15_42_22_+1100_o1XXANa77vLhDjqz
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQIcBAEBCAAGBQJSy4WzAAoJEMDTa8Ir7ZwVUnUP/0EHDRFLLab9spxf8kCH6AIN
a5Bpk3xpkVU/8Yx2so9pVA9lmo9NfizhCvrJ+eX+P0rOSt7wGag3Bu/9U3R9znrX
AwFr9b1awuSFP9qXGqDqmGKY4Cf8fwZC3YPa5keQ4DJoyxCCZ8e8fjWTgDMaA5xD
AK3Dfy+eYwRDdhvAnGj9MUxzLMGeuqGF9YCdtSCtiWtz343kJDzRJmEGoimnAyYU
GSG9za3HYkl3ElBNF1Vjkklfph5vnn0zuMd3hLsRBufRlAsUuM95WguJwAvxMrbW
hP66ONsbLMihIGlRo0si5gmTsWiquk1Yhb3X9uHcDq662I8iwW4JW8+oXyNAYlKd
V/BmzAoK2vp/8vVyCOqp4gN6NHbhy9dL9K8WQZOOgUegPq3YxQOAjXCb7OCoYxFo
1E/XoeAqu+dXVTKbsnLdeVRCEmfh/yPj22mSpfrN16RvJFSUjQbXWzK1vhGOuQK6
4Jajbz5mBhrs+gDYCMRORvxaCE1/TK+k4nMt4ON8W9hVU6TeFPv9YKAfPqPLqlp2
GaSjwsXzSoBe3TIObtPG34uYf2ec2J7LRu0YLddZ8s3s+HyoClxygmL4ELtmZtUu
2bEA0t3jTOR98KrNd1u2KYFMT8pxs0KoheHq8X2Y8hK07Ety3J4OpIZ+p5x22xwi
5axr1xKdt7hLTkDB4Xg9
=udXo
-----END PGP SIGNATURE-----

--Signature=_Tue__7_Jan_2014_15_42_22_+1100_o1XXANa77vLhDjqz--


--===============4807876475369508745==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4807876475369508745==--


From xen-devel-bounces@lists.xen.org Tue Jan 07 08:24:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Rwl-0005gt-F5; Tue, 07 Jan 2014 08:23:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Rwk-0005go-CV
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 08:23:34 +0000
Received: from [193.109.254.147:12659] by server-15.bemta-14.messagelabs.com
	id 36/67-22186-589BBC25; Tue, 07 Jan 2014 08:23:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389083012!9188555!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24703 invoked from network); 7 Jan 2014 08:23:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 08:23:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 08:23:33 +0000
Message-Id: <52CBC7900200007800110EF7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 08:23:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<1387624194.1025.70.camel@dagon.hellion.org.uk>
	<52B8046302000078000A9C8C@nat28.tlf.novell.com>
	<52B8235D02000078000A9CB1@nat28.tlf.novell.com>
	<20131224015650.GA2191@pegasus.dumpdata.com>
In-Reply-To: <20131224015650.GA2191@pegasus.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.12.13 at 02:56, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Mon, Dec 23, 2013 at 11:49:49AM +0000, Jan Beulich wrote:
>> >>> "Jan Beulich" <jbeulich@suse.com> 12/23/13 10:39 AM >>>
>> >>>> Ian Campbell <Ian.Campbell@citrix.com> 12/21/13 12:10 PM >>>
>> >>On Fri, 2013-12-20 at 12:57 -0500, Konrad Rzeszutek Wilk wrote:
>> >>> But perhaps that is not the way to do it and we should just cherry-pick
>> >>> 30832c06a8d1f9caff0987654ef9e24d59469d9a in Xen 4.1?
>> >>
>> >>I think we should do both, i.e. backport 30832c06a8d1 now to solve the
>> >>immediate problem and then look at fixing unstable to be more accepting
>> >>of new features which it doesn't yet know about.
>> >
>> >Hmm, not sure - without a split between necessary to be understood
>> >and acceptable to be unknown ones, I'm not sure either model will be
>> >the right thing.
>> 
>> And actually, in the case at hand the "BOOM" is correct: If the kernel tells
>> the hypervisor that it needs a feature the hypervisor doesn't even
>> recognize, it's surely wrong to ignore this. The mistake here is for the
>> kernel to require
> 
> But it does not ignore it. It checks later on in construct_dom0 whether
> the 'required' parameters are present, like:

That's what I said: It is / would be wrong to ignore a feature the
kernel requires.

> if ( test_bit(XENFEAT_supervisor_mode_kernel, parms.f_required) )

This is even more confusing: 30832c06 is about hvm_callback_vector,
not supervisor_mode_kernel.

>> that feature statically in the first place - that should be done only if
>> the kernel could _only_ boot in PVH mode.
> 
> The feature is not marked as "required" but rather - it can utilize said
> extension (so supported). I am advocating that the caller checks that
> all of the required pieces are correct - and it can ignore the ones it
> has no idea of (which it does for some of the Xen ELF notes - ignores
> them if it has no idea of what they are).

What would be the point of telling the hypervisor that the kernel
can utilize a certain extension? The kernel could just utilize it, and
the hypervisor would know by that simple fact.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0RyB-0005kx-Us; Tue, 07 Jan 2014 08:25:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W0RyA-0005ko-07
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 08:25:02 +0000
Received: from [85.158.143.35:12102] by server-2.bemta-4.messagelabs.com id
	EB/30-11386-DD9BBC25; Tue, 07 Jan 2014 08:25:01 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389083099!8771322!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30530 invoked from network); 7 Jan 2014 08:25:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 08:25:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,617,1384300800"; d="scan'208";a="90356747"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 08:25:00 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 03:24:58 -0500
Message-ID: <52CBB9DB.6080407@citrix.com>
Date: Tue, 7 Jan 2014 09:24:59 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <freebsd-xen@freebsd.org>,
	<freebsd-current@freebsd.org>, <xen-devel@lists.xen.org>,
	<gibbs@freebsd.org>, <jhb@freebsd.org>, <kib@freebsd.org>,
	<julien.grall@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-15-git-send-email-roger.pau@citrix.com>
	<52C9D432.3040409@linaro.org> <52CA7B8F.9060402@citrix.com>
	<52CA9347.8040901@linaro.org>
In-Reply-To: <52CA9347.8040901@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v9 14/19] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 12:28, Julien Grall wrote:
> 
> 
> On 01/06/2014 09:46 AM, Roger Pau Monné wrote:
>> On 05/01/14 22:52, Julien Grall wrote:
>>>
>>>
>>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>>> Since Xen PVH guests doesn't have ACPI, we need to create a dummy
>>>> bus so top level Xen devices can attach to it (instead of
>>>> attaching directly to the nexus) and a pvcpu device that will be used
>>>> to fill the pcpu->pc_device field.
>>>> ---
>>>>    sys/conf/files.amd64 |    1 +
>>>>    sys/conf/files.i386  |    1 +
>>>>    sys/x86/xen/xenpv.c  |  155
>>>> ++++++++++++++++++++++++++++++++++++++++++++++++++
>>>
>>> I think it makes more sense to have 2 files: one for xenpv bus and one
>>> for a dummy pvcpu device. It would allow us to move xenpv bus to common
>>> code (sys/xen or sys/dev/xen).
>>
>> Ack. I wasn't thinking other arches will probably use the xenpv bus but
>> not the dummy cpu device. Would you agree to leave xenpv bus inside
>> x86/xen for now and move the dummy PV cpu device to dev/xen/pvcpu/?
> 
> As we will attach every xen device to xenpv, it makes more sense to have
> xenpv bus used on ARM. It will avoid duplication code and keep it nicer.
> 
> I'm fine with this solution for now. I will update/move the code when I
> will send the patch series to support FreeBSD on Xen on ARM.
> 
>>>
>>> [..]
>>>
>>>> +
>>>> +static int
>>>> +xenpv_probe(device_t dev)
>>>> +{
>>>> +
>>>> +    device_set_desc(dev, "Xen PV bus");
>>>> +    device_quiet(dev);
>>>> +    return (0);
>>>
>>> As I understand, 0 means I can "handle" the current device, in this case
>>> if a device is probing, because it doesn't have yet a driver, we will
>>> use xenpv and end up with 2 (or even more) xenpv buses.
>>>
>>> As we only want to probe xenpv bus once, when the bus was added
>>> manually, returning BUS_PROBE_NO_WILDCARD would suit better.
>>>
>>> [..]
>>>
>>>> +static int
>>>> +xenpvcpu_probe(device_t dev)
>>>> +{
>>>> +
>>>> +    device_set_desc(dev, "Xen PV CPU");
>>>> +    return (0);
>>>
>>> Same here: BUS_PROBE_NOWILDCARD.
>>
>> Ack for both, will change it to BUS_PROBE_NOWILDCARD. While at it, we
>> should also change xenstore probe function to return
>> BUS_PROBE_NOWILDCARD.
>>
> 
> Right, I have a patch for xenstore. Do you want me to send it?

Sure, send it!


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:28:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0S1j-00063D-M0; Tue, 07 Jan 2014 08:28:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0S1i-000638-6a
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 08:28:42 +0000
Received: from [85.158.137.68:17554] by server-3.bemta-3.messagelabs.com id
	78/F4-10658-9BABBC25; Tue, 07 Jan 2014 08:28:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389083320!3958073!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5392 invoked from network); 7 Jan 2014 08:28:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 08:28:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 08:28:39 +0000
Message-Id: <52CBC8C10200007800110EFA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 08:28:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2013-12-18:
>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Jan Beulich wrote on 2013-12-18:
>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>> processing, stash away and re-use what we already read. That
>>>>>> way we can be certain that the retry won't do something different
>>>>>> from what requested the retry, getting once again closer to real
>>>>>> hardware behavior (where what we use retries for is simply a bus
>>>>>> operation, not involving redundant decoding of instructions).
>>>>>> 
>>>>> 
>>>>> This patch doesn't consider the nested case.
>>>>> For example, if the buffer saved the L2's instruction, then vmexit
>>>>> to
>>>>> L1 and
>>>>> L1 may use the wrong instruction.
>>>> 
>>>> I'm having difficulty seeing how the two could get intermixed:
>>>> There should be, at any given point in time, at most one
>>>> instruction being emulated. Can you please give a more elaborate
>>>> explanation of the situation where you see a (theoretical? practical?)
>>>> problem?
>>> 
>>> I saw this issue when booting L1 hyper-v. I added some debug info
>>> and saw the strange phenomenon:
>>> 
>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>> content:f7420c1f608488b 44000000011442c7
>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>> content:f7420c1f608488b 44000000011442c7
>>> 
>>> From the log, we can see different eip but using the same buffer.
>>> Since I don't know how hyper-v works, I cannot give more
>>> information on why this happens. And I only saw it with L1 hyper-v (Xen
>>> on Xen and KVM on Xen don't have this issue).
>> 
>> But in order to validate the fix is (a) needed and (b) correct, a
>> proper understanding of the issue is necessary as the very first step.
>> Which doesn't require knowing internals of Hyper-V, all you need is
>> tracking of emulator state changes in Xen (which I realize is said
>> easier than done, since you want to make sure you don't generate huge
>> amounts of output before actually hitting the issue, making it close to
>> impossible to analyze).
> 
> Ok. It should be an old issue that is just exposed by your patch:
> Sometimes L0 needs to decode an L2 instruction to handle an IO access
> directly - for example, if L1 passes a device through (without VT-d) to L2.
> L0 may get X86EMUL_RETRY when handling this IO request. At the same time, if
> there is a virtual vmexit pending (for example, an interrupt pending to
> inject to L1), the hypervisor will switch the VCPU context from L2 to L1.
> Now we are already in L1's context, but since we got X86EMUL_RETRY just now,
> the hypervisor will retry the IO request later and, unfortunately, the
> retry will happen in L1's context.
> Without your patch, L0 just emulates an L1 instruction and everything goes
> on. With your patch, L0 will get the instruction from the buffer, which
> belongs to L2, and the problem arises.
> 
> So the fix is that if there is a pending IO request, no virtual
> vmexit/vmentry is allowed, which follows hardware's behavior.
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 41db52b..c5446a9 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>      struct vcpu *v = current;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    ioreq_t *p = get_ioreq(v);
>  
> +    if ( p->state != STATE_IOREQ_NONE )
> +        return;
>      /*
>       * a softirq may interrupt us between a virtual vmentry is
>       * just handled and the true vmentry. If during this window,

This change looks much more sensible; question is whether simply
ignoring the switch request is acceptable (and I don't know the
nested HVM code well enough to tell). Furthermore I wonder
whether that's really a VMX-only issue.

Another alternative might be to flush the cached instruction when
a guest mode switch is being done. Again, I don't know the
nested code well enough to tell what the most suitable behavior
here would be.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:28:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0S1j-00063D-M0; Tue, 07 Jan 2014 08:28:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0S1i-000638-6a
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 08:28:42 +0000
Received: from [85.158.137.68:17554] by server-3.bemta-3.messagelabs.com id
	78/F4-10658-9BABBC25; Tue, 07 Jan 2014 08:28:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389083320!3958073!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5392 invoked from network); 7 Jan 2014 08:28:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 08:28:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 08:28:39 +0000
Message-Id: <52CBC8C10200007800110EFA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 08:28:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2013-12-18:
>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Jan Beulich wrote on 2013-12-18:
>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>> processing, stash away and re-use what we already read. That
>>>>>> way we can be certain that the retry won't do something different
>>>>>> from what requested the retry, getting once again closer to real
>>>>>> hardware behavior (where what we use retries for is simply a bus
>>>>>> operation, not involving redundant decoding of instructions).
>>>>>> 
>>>>> 
>>>>> This patch doesn't consider the nested case.
>>>>> For example, if the buffer saved the L2's instruction, then a vmexit
>>>>> to L1 occurs and L1 may use the wrong instruction.
>>>> 
>>>> I'm having difficulty seeing how the two could get intermixed:
>>>> There should be, at any given point in time, at most one
>>>> instruction being emulated. Can you please give a more elaborate
>>>> explanation of the situation where you see a (theoretical? practical?) 
> problem?
>>> 
>>> I saw this issue when booting L1 hyper-v. I added some debug info
>>> and saw the strange phenomenon:
>>> 
>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>> content:f7420c1f608488b 44000000011442c7
>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>> content:f7420c1f608488b 44000000011442c7
>>> 
>>> From the log, we can see different eip but using the same buffer.
>>> Since I don't know how hyper-v works, I cannot give more
>>> information on why this happens. And I only saw it with L1 hyper-v (Xen
>>> on Xen and KVM on Xen don't have this issue).
>> 
>> But in order to validate the fix is (a) needed and (b) correct, a
>> proper understanding of the issue is necessary as the very first step.
>> Which doesn't require knowing internals of Hyper-V, all you need is
>> tracking of emulator state changes in Xen (which I realize is easier
>> said than done, since you want to make sure you don't generate huge
>> amounts of output before actually hitting the issue, making it close to 
> impossible to analyze).
> 
> Ok. It is an old issue that is merely exposed by your patch:
> Sometimes L0 needs to decode an L2 instruction to handle an IO access 
> directly, for example when L1 passes a device through (without VT-d) to L2. 
> L0 may get X86EMUL_RETRY when handling this IO request. If, at the same 
> time, there is a virtual vmexit pending (for example, an interrupt to 
> inject into L1), the hypervisor will switch the VCPU context from L2 to L1. 
> We are then already in L1's context, but since we just got X86EMUL_RETRY, 
> the hypervisor will retry handling the IO request later, and 
> unfortunately the retry will happen in L1's context.
> Without your patch, L0 simply emulates an instruction of L1 and everything 
> goes on. With your patch, L0 will get the instruction from the buffer, 
> which belongs to L2, and the problem arises.
> 
> So the fix is that while there is a pending IO request, no virtual 
> vmexit/vmentry is allowed, which follows hardware's behavior.
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 41db52b..c5446a9 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>      struct vcpu *v = current;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    ioreq_t *p = get_ioreq(v);
>  
> +    if ( p->state != STATE_IOREQ_NONE )
> +        return;
>      /*
>       * a softirq may interrupt us between a virtual vmentry is
>       * just handled and the true vmentry. If during this window,

This change looks much more sensible; question is whether simply
ignoring the switch request is acceptable (and I don't know the
nested HVM code well enough to tell). Furthermore I wonder
whether that's really a VMX-only issue.

Another alternative might be to flush the cached instruction when
a guest mode switch is being done. Again, I don't know the
nested code well enough to tell what the most suitable behavior
here would be.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:30:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0S2y-0006OF-B6; Tue, 07 Jan 2014 08:30:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W0S2x-0006O4-95
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 08:29:59 +0000
Received: from [85.158.139.211:65443] by server-15.bemta-5.messagelabs.com id
	84/5C-08490-60BBBC25; Tue, 07 Jan 2014 08:29:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389083396!8228474!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9082 invoked from network); 7 Jan 2014 08:29:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 08:29:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,617,1384300800"; d="scan'208";a="88192043"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 08:29:56 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 03:29:55 -0500
Message-ID: <52CBBB05.6020104@citrix.com>
Date: Tue, 7 Jan 2014 09:29:57 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <freebsd-xen@freebsd.org>,
	<freebsd-current@freebsd.org>, <xen-devel@lists.xen.org>,
	<gibbs@freebsd.org>, <jhb@freebsd.org>, <kib@freebsd.org>,
	<julien.grall@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org> <52CA78DE.9060502@citrix.com>
	<52CA9481.4090703@linaro.org>
In-Reply-To: <52CA9481.4090703@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/14 12:33, Julien Grall wrote:
> 
> 
> On 01/06/2014 09:35 AM, Roger Pau Monné wrote:
>> On 05/01/14 22:55, Julien Grall wrote:
>>>
>>>
>>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>>> Introduce a Xen specific nexus that is going to be in charge for
>>>> attaching Xen specific devices.
>>>
>>> Now that we have a xenpv bus, do we really need a specific nexus for
>>> Xen?
>>> We should be able to use the identify callback of xenpv to create the
>>> bus.
>>>
>>> The other part of this patch can be merged in the patch #14 "Introduce
>>> xenpv bus and a dummy pvcpu device".
>>
>> On x86 at least we need the Xen specific nexus, or we will fall back to
>> use the legacy nexus which is not what we really want.
>>
> 
> Oh right, in any case can we use the identify callback of xenpv to add
> the bus?

AFAICT this kind of bus devices don't have a identify routine, and they
are usually added manually from the specific nexus, see acpi or legacy.
Could you add the device on ARM when you detect that you are running as
a Xen guest, or in the generic ARM nexus if Xen is detected?

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:54:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0SQU-0007Hq-34; Tue, 07 Jan 2014 08:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0SQS-0007Hl-Fg
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 08:54:16 +0000
Received: from [85.158.137.68:57310] by server-6.bemta-3.messagelabs.com id
	9A/ED-04868-7B0CBC25; Tue, 07 Jan 2014 08:54:15 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389084853!3959332!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15887 invoked from network); 7 Jan 2014 08:54:14 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-31.messagelabs.com with SMTP;
	7 Jan 2014 08:54:14 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 07 Jan 2014 00:50:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,617,1384329600"; d="scan'208";a="462793582"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 07 Jan 2014 00:54:12 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 00:54:11 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 00:54:11 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 16:54:09 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w
Date: Tue, 7 Jan 2014 08:54:09 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
In-Reply-To: <52CBC8C10200007800110EFA@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"Egger, Christoph \(chegger@amazon.de\)" <chegger@amazon.de>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-07:
>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Jan Beulich wrote on 2013-12-18:
>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
> wrote:
>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>> wrote:
>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>> processing, stash away and re-use what we already read.
>>>>>>> That way we can be certain that the retry won't do something
>>>>>>> different from what requested the retry, getting once again
>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>> is simply a bus operation, not involving redundant decoding of instructions).
>>>>>>> 
>>>>>> 
>>>>>> This patch doesn't consider the nested case.
>>>>>> For example, if the buffer saved the L2's instruction, then a
>>>>>> vmexit to L1 occurs and L1 may use the wrong instruction.
>>>>> 
>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>> There should be, at any given point in time, at most one
>>>>> instruction being emulated. Can you please give a more elaborate
>>>>> explanation of the situation where you see a (theoretical?
>>>>> practical?)
>> problem?
>>>> 
>>>> I saw this issue when booting L1 hyper-v. I added some debug info
>>>> and saw the strange phenomenon:
>>>> 
>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>> content:f7420c1f608488b 44000000011442c7
>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>> content:f7420c1f608488b 44000000011442c7
>>>> 
>>>> From the log, we can see different eip but using the same buffer.
>>>> Since I don't know how hyper-v works, I cannot give more
>>>> information on why this happens. And I only saw it with L1 hyper-v
>>>> (Xen on Xen and KVM on Xen don't have this issue).
>>> 
>>> But in order to validate the fix is (a) needed and (b) correct, a
>>> proper understanding of the issue is necessary as the very first step.
>>> Which doesn't require knowing internals of Hyper-V, all you need is
>>> tracking of emulator state changes in Xen (which I realize is easier
>>> said than done, since you want to make sure you don't generate
>>> huge amounts of output before actually hitting the issue, making it
>>> close to
>> impossible to analyze).
>> 
>> Ok. It is an old issue that is merely exposed by your patch:
>> Sometimes L0 needs to decode an L2 instruction to handle an IO
>> access directly, for example when L1 passes a device through (without VT-d) to L2.
>> L0 may get X86EMUL_RETRY when handling this IO request. If, at the
>> same time, there is a virtual vmexit pending (for example, an
>> interrupt to inject into L1), the hypervisor will switch the
>> VCPU context from L2 to L1. We are then already in L1's context,
>> but since we just got X86EMUL_RETRY, the hypervisor will retry
>> handling the IO request later, and unfortunately the retry will happen in L1's context.
>> Without your patch, L0 simply emulates an instruction of L1 and
>> everything goes on. With your patch, L0 will get the instruction
>> from the buffer, which belongs to L2, and the problem arises.
>> 
>> So the fix is that while there is a pending IO request, no virtual
>> vmexit/vmentry is allowed, which follows hardware's behavior.
>> 
>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>      struct vcpu *v = current;
>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    ioreq_t *p = get_ioreq(v);
>> 
>> +    if ( p->state != STATE_IOREQ_NONE )
>> +        return;
>>      /*
>>       * a softirq may interrupt us between a virtual vmentry is
>>       * just handled and the true vmentry. If during this window,
> 
> This change looks much more sensible; question is whether simply
> ignoring the switch request is acceptable (and I don't know the nested
> HVM code well enough to tell). Furthermore I wonder whether that's really a VMX-only issue.

From hardware's point of view, an IO operation is handled atomically, so the problem can never happen there. But in Xen, an IO operation is divided into several steps. Without nested virtualization, the VCPU is paused or loops until the IO emulation is finished; in a nested environment, however, the VCPU continues running even while the IO emulation is not finished. So my patch checks for this and allows the VCPU to continue running only when there is no pending IO request, which matches hardware's behavior.

I guess SVM has this problem as well, but since I don't know nested SVM well, perhaps Christoph can help double-check.

> 
> Another alternative might be to flush the cached instruction when a
> guest mode switch is being done. Again, I don't know the nested code
> well enough to tell what the most suitable behavior here would be.
> 

Your patch to use the cached instruction merely exposed the issue; the essence of it is the wrong vmswitch during IO emulation. So even if we flushed the cached instruction during the vmswitch, the problem would still exist, just perhaps harder to trigger.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 08:54:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 08:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0SQU-0007Hq-34; Tue, 07 Jan 2014 08:54:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0SQS-0007Hl-Fg
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 08:54:16 +0000
Received: from [85.158.137.68:57310] by server-6.bemta-3.messagelabs.com id
	9A/ED-04868-7B0CBC25; Tue, 07 Jan 2014 08:54:15 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389084853!3959332!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15887 invoked from network); 7 Jan 2014 08:54:14 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-31.messagelabs.com with SMTP;
	7 Jan 2014 08:54:14 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 07 Jan 2014 00:50:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,617,1384329600"; d="scan'208";a="462793582"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 07 Jan 2014 00:54:12 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 00:54:11 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 00:54:11 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 16:54:09 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w
Date: Tue, 7 Jan 2014 08:54:09 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
In-Reply-To: <52CBC8C10200007800110EFA@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"Egger, Christoph \(chegger@amazon.de\)" <chegger@amazon.de>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-07:
>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Jan Beulich wrote on 2013-12-18:
>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
> wrote:
>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>> wrote:
>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>> processing, stash away and re-use tha what we already read.
>>>>>>> That way we can be certain that the retry won't do something
>>>>>>> different from what requested the retry, getting once again
>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>> is simply a bus operation, not involving redundant decoding of instructions).
>>>>>>> 
>>>>>> 
>>>>>> This patch doesn't consider the nested case.
>>>>>> For example, if the buffer saved the L2's instruction, then
>>>>>> vmexit to
>>>>>> L1 and
>>>>>> L1 may use the wrong instruction.
>>>>> 
>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>> There should be, at any given point in time, at most one
>>>>> instruction being emulated. Can you please give a more elaborate
>>>>> explanation of the situation where you see a (theoretical?
>>>>> practical?)
>> problem?
>>>> 
>>>> I saw this issue when booting L1 hyper-v. I added some debug info
>>>> and saw the strange phenomenon:
>>>> 
>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>> content:f7420c1f608488b 44000000011442c7
>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>> content:f7420c1f608488b 44000000011442c7
>>>> 
>>>> From the log, we can see different eip but using the same buffer.
>>>> Since I don't know how hyper-v working, so I cannot give more
>>>> information why this happens. And I only saw it with L1 hyper-v
>>>> (Xen on Xen and KVM on Xen don't have this issue) .
>>> 
>>> But in order to validate the fix is (a) needed and (b) correct, a
>>> proper understanding of the issue is necessary as the very first step.
>>> Which doesn't require knowing internals of Hyper-V, all you need is
>>> tracking of emulator state changes in Xen (which I realize is said
>>> easier than done, since you want to make sure you don't generate
>>> huge amounts of output before actually hitting the issue, making it
>>> close to
>> impossible to analyze).
>> 
>> Ok. It should be an old issue and just exposed by your patch only:
>> Sometimes, L0 need to decode the L2's instruction to handle IO
>> access directly. For example, if L1 pass through (not VT-d) a device to L2 directly.
>> And L0 may get X86EMUL_RETRY when handling this IO request. At same
>> time, if there is a virtual vmexit pending (for example, an
>> interrupt pending to inject to L1) and hypervisor will switch the
>> VCPU context from L2 to L1. Now we already in L1's context, but
>> since we got a X86EMUL_RETRY just now and this means hyprevisor will
>> retry to handle the IO request later and unfortunately, the retry will happen in L1's context.
>> Without your patch, L0 just emulate an L1's instruction and
>> everything goes on. With your patch, L0 will get the instruction
>> from the buffer which is belonging to L2, and problem arises.
>> 
>> So the fixing is that if there is a pending IO request, no virtual
>> vmexit/vmentry is allowed which following hardware's behavior.
>> 
>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>      struct vcpu *v = current;
>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    ioreq_t *p = get_ioreq(v);
>> 
>> +    if ( p->state != STATE_IOREQ_NONE )
>> +        return;
>>      /*
>>       * a softirq may interrupt us between a virtual vmentry is
>>       * just handled and the true vmentry. If during this window,
> 
> This change looks much more sensible; the question is whether simply
> ignoring the switch request is acceptable (and I don't know the nested
> HVM code well enough to tell). Furthermore, I wonder whether that's
> really a VMX-only issue.

From hardware's point of view, an IO operation is handled atomically,
so the problem can never happen there. But in Xen, an IO operation is
divided into several steps. Without nested virtualization, the VCPU is
paused or loops until the IO emulation is finished. In a nested
environment, however, the VCPU continues running even though the IO
emulation is not finished. My patch checks for this and allows the VCPU
to continue only when there is no pending IO request, which matches
hardware's behavior.

I guess SVM has this problem as well, but since I don't know nested SVM
well, perhaps Christoph can help double-check.

> 
> Another alternative might be to flush the cached instruction when a
> guest mode switch is being done. Again, I don't know the nested code
> well enough to tell what the most suitable behavior here would be.
> 

Your patch to use the cached instruction merely exposed the issue; the
essence of it is the incorrect vmswitch during IO emulation. So even if
we flushed the cached instruction during the vmswitch, the problem
would still exist, though it might be harder to trigger.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 09:21:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 09:21:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0SqZ-0008TG-PG; Tue, 07 Jan 2014 09:21:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W0SqY-0008TB-E2
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 09:21:14 +0000
Received: from [85.158.139.211:39463] by server-12.bemta-5.messagelabs.com id
	4A/C6-30017-907CBC25; Tue, 07 Jan 2014 09:21:13 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389086472!8255096!1
X-Originating-IP: [213.199.154.80]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8312 invoked from network); 7 Jan 2014 09:21:12 -0000
Received: from mail-db3lp0080.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.80)
	by server-12.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Jan 2014 09:21:12 -0000
Received: from AMSPRD0111HT002.eurprd01.prod.exchangelabs.com (157.56.250.229)
	by DB4PR03MB409.eurprd03.prod.outlook.com (10.141.235.142) with
	Microsoft
	SMTP Server (TLS) id 15.0.851.11; Tue, 7 Jan 2014 09:21:10 +0000
Message-ID: <52CBC700.1060602@zynstra.com>
Date: Tue, 7 Jan 2014 09:21:04 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com>
In-Reply-To: <52C50661.7060900@oracle.com>
X-Originating-IP: [157.56.250.229]
X-ClientProxiedBy: AMSPR03CA002.eurprd03.prod.outlook.com (10.242.77.140) To
	DB4PR03MB409.eurprd03.prod.outlook.com (10.141.235.142)
X-Forefront-PRVS: 008421A8FF
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(24454002)(52034003)(377454003)(479174003)(164054003)(51704005)(189002)(199002)(36756003)(85852003)(83072002)(56816005)(81542001)(90146001)(74662001)(31966008)(76786001)(76796001)(81342001)(69226001)(47446002)(74502001)(54316002)(63696002)(47776003)(66066001)(80022001)(53806001)(54356001)(42186004)(46102001)(51856001)(23756003)(76482001)(64126003)(59766001)(77982001)(59896001)(79102001)(47976001)(50986001)(56776001)(47736001)(4396001)(49866001)(50466002)(74706001)(83322001)(80316001)(19580395003)(74876001)(33656001)(85306002)(74366001)(77096001)(83506001)(81686001)(81816001)(80976001)(87976001)(7744002)(414714003)(473944003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:DB4PR03MB409;
	H:AMSPRD0111HT002.eurprd01.prod.exchangelabs.com;
	CLIP:157.56.250.229; FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en;
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 12/26/2013 04:42 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 12/20/2013 03:08 AM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> On 12/12/2013 12:30 AM, James Dingwall wrote:
>>>>>> Bob Liu wrote:
>>>>>>> On 12/10/2013 11:27 PM, Konrad Rzeszutek Wilk wrote:
>>>>>>>> On Tue, Dec 10, 2013 at 02:52:40PM +0000, James Dingwall wrote:
>>>>>>>>> Konrad Rzeszutek Wilk wrote:
>>>>>>>>>> On Mon, Dec 09, 2013 at 05:50:29PM +0000, James Dingwall wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> Since 3.11 I have noticed that the OOM killer quite frequently
>>>>>>>>>>> triggers in my Xen guest domains which use ballooning to
>>>>>>>>>>> increase/decrease their memory allocation according to their
>>>>>>>>>>> requirements.  One example domain I have has a maximum memory
>>>>>>>>>>> setting of ~1.5Gb but it usually idles at ~300Mb, it is also
>>>>>>>>>>> configured with 2Gb swap which is almost 100% free.
>>>>>>>>>>>
>>>>>>>>>>> # free
>>>>>>>>>>>              total       used       free     shared    buffers     cached
>>>>>>>>>>> Mem:        272080     248108      23972          0       1448      63064
>>>>>>>>>>> -/+ buffers/cache:     183596      88484
>>>>>>>>>>> Swap:      2097148          8    2097140
>>>>>>>>>>>
>>>>>>>>>>> There is plenty of available free memory in the hypervisor to
>>>>>>>>>>> balloon to the maximum size:
>>>>>>>>>>> # xl info | grep free_mem
>>>>>>>>>>> free_memory            : 14923
>>>>>>>>>>>
>>>>>>>>>>> An example trace (they are always the same) from the oom
>>>>>>>>>>> killer in
>>>>>>>>>>> 3.12 is added below.  So far I have not been able to reproduce
>>>>>>>>>>> this
>>>>>>>>>>> at will so it is difficult to start bisecting it to see if a
>>>>>>>>>>> particular change introduced this.  However it does seem that the
>>>>>>>>>>> behaviour is wrong because a) ballooning could give the guest
>>>>>>>>>>> more
>>>>>>>>>>> memory, b) there is lots of swap available which could be used
>>>>>>>>>>> as a
>>>>>>>>>>> fallback.
>>>>>>>> Keep in mind that swap with tmem is actually no longer swap.
>>>>>>>> Heh, that sounds odd - but basically pages that are destined for
>>>>>>>> swap end up going into the tmem code, which pipes them up to the
>>>>>>>> hypervisor.
>>>>>>>>
>>>>>>>>>>> If other information could help or there are more tests that I
>>>>>>>>>>> could
>>>>>>>>>>> run then please let me know.
>>>>>>>>>> I presume you have enabled 'tmem' both in the hypervisor and in
>>>>>>>>>> the guest right?
>>>>>>>>> Yes, domU and dom0 both have the tmem module loaded and  tmem
>>>>>>>>> tmem_dedup=on tmem_compress=on is given on the xen command line.
>>>>>>>> Excellent. The odd thing is that your swap is not used that much,
>>>>>>>> but it should be (as that is part of what the self-balloon is
>>>>>>>> supposed to do).
>>>>>>>>
>>>>>>>> Bob, you had a patch for the logic of how self-balloon is supposed
>>>>>>>> to account for the slab - would this be relevant to this problem?
>>>>>>>>
>>>>>>> Perhaps, I have attached the patch.
>>>>>>> James, could you please apply it and try your application again? You
>>>>>>> have to rebuild the guest kernel.
>>>>>>> Oh, and also take a look at whether frontswap is in use; you can
>>>>>>> check it by watching "cat /sys/kernel/debug/frontswap/*".
>>>>>> I have tested this patch with a workload where I have previously seen
>>>>>> failures and so far so good.  I'll try to keep a guest with it
>>>>>> stressed
>>>>>> to see if I do get any problems.  I don't know if it is expected but I
>>>>> By the way, besides the longer kswapd time, does this patch work
>>>>> well during your stress testing?
>>>>>
>>>>> Have you seen the OOM killer triggered quite frequently again?(with
>>>>> selfshrink=true)
>>>>>
>>>>> Thanks,
>>>>> -Bob
>>>> It was looking good until today (selfshrink=true).  The trace below
>>>> is from during a compile of subversion; it looks like the memory has
>>>> ballooned to almost the maximum permissible, but even under pressure
>>>> the swap disk has hardly come into use.
>>>>
>>> So without selfshrink the swap disk does get used a lot?
>>>
>>> If that's the case, I'm afraid the frontswap-selfshrink in
>>> xen-selfballoon is doing something incorrect.
>>>
>>> Could you please try this patch, which makes the frontswap-selfshrink
>>> slower and adds a printk for debugging.
>>> Please still keep selfshrink=true in your test, with or without my
>>> previous patch.
>>> Thanks a lot!
>>>
>> The oom trace below was triggered during a compile of gcc.  I have the
>> full dmesg from boot which shows all the printks, please let me know if
>> you would like to see that.
>>
> Sorry for the late response.
> Could you confirm that this problem doesn't exist when loading tmem
> with selfshrinking=0 while compiling gcc? It seems that you are
> compiling different packages during your testing.
> This will help to figure out whether selfshrinking is the root cause.
Got an oom with selfshrinking=0, again during a gcc compile. 
Unfortunately I don't have a single test case which demonstrates the 
problem but as I mentioned before it will generally show up under 
compiles of large packages such as glibc, kdelibs, gcc etc.

I don't know if this is a separate or related issue, but over the
holidays I also had a problem with six of the guests on my system where
kswapd was running at 100% and had clocked up >9000 minutes of CPU time
even though there was otherwise no load on them.  Of the guests I
restarted yesterday in this state, two have already gotten into the
same state again; they are running a kernel with the first patch that
you sent.

/sys/module/tmem/parameters/cleancache Y
/sys/module/tmem/parameters/frontswap Y
/sys/module/tmem/parameters/selfballooning Y
/sys/module/tmem/parameters/selfshrinking N

James

[ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0, 
oom_score_adj=0
[ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
[ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200 
ffff88001f90e8e8
[ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7 
ffff88000094f9b8
[ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8 
ffff88000094f9a8
[ 8212.940542] Call Trace:
[ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
[ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
[ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
[ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
[ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
[ 8212.940578]  [<ffffffff81494abe>] ? _raw_spin_unlock_irqrestore+0x47/0x62
[ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
[ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
[ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
[ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
[ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
[ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
[ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
[ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
[ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
[ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
[ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
[ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
[ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
[ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
[ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
[ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
[ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
[ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
[ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
[ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
[ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
[ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
[ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
[ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
[ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
[ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
[ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
[ 8212.940679] Mem-Info:
[ 8212.940681] Node 0 DMA per-cpu:
[ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
[ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
[ 8212.940686] Node 0 DMA32 per-cpu:
[ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
[ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
[ 8212.940691] Node 0 Normal per-cpu:
[ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
[ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
[ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
  active_file:8412 inactive_file:8612 isolated_file:0
  unevictable:0 dirty:0 writeback:0 unstable:0
  free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
  mapped:3792 shmem:6 pagetables:2534 bounce:0
  free_cma:0 totalram:246132 balloontarget:306242
[ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB 
active_anon:5092kB inactive_anon:5328kB active_file:416kB 
inactive_file:608kB unevictable:0kB isolated(anon):0kB 
isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB 
writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB 
slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB 
unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB 
pages_scanned:26951 all_unreclaimable? yes
[ 8212.940711] lowmem_reserve[]: 0 469 469 469
[ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB 
high:4092kB active_anon:181456kB inactive_anon:181528kB 
active_file:22296kB inactive_file:22644kB unevictable:0kB 
isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB 
mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB 
slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB 
pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB 
pages_scanned:612393 all_unreclaimable? yes
[ 8212.940722] lowmem_reserve[]: 0 0 0 0
[ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB 
active_anon:236512kB inactive_anon:236672kB active_file:10936kB 
inactive_file:11196kB unevictable:0kB isolated(anon):0kB 
isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB 
dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB 
slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB 
unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB 
pages_scanned:745963 all_unreclaimable? yes
[ 8212.940732] lowmem_reserve[]: 0 0 0 0
[ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB 
(R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
[ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB 
0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
[ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 
0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
[ 8212.940765] 16847 total pagecache pages
[ 8212.940766] 8381 pages in swap cache
[ 8212.940768] Swap cache stats: add 741397, delete 733016, find 
250268/342284
[ 8212.940769] Free swap  = 1925576kB
[ 8212.940770] Total swap = 2097148kB
[ 8212.951044] 262143 pages RAM
[ 8212.951046] 11939 pages reserved
[ 8212.951047] 540820 pages shared
[ 8212.951048] 240248 pages non-shared
[ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents 
oom_score_adj name
<snip process list>
[ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or 
sacrifice child
[ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB, 
anon-rss:350980kB, file-rss:9408kB
[54810.683658] kjournald starting.  Commit interval 5 seconds
[54810.684381] EXT3-fs (xvda1): using internal journal
[54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

[ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents 
oom_score_adj name
<snip process list>
[ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or 
sacrifice child
[ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB, 
anon-rss:350980kB, file-rss:9408kB
[54810.683658] kjournald starting.  Commit interval 5 seconds
[54810.684381] EXT3-fs (xvda1): using internal journal
[54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 09:44:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 09:44:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TCd-000155-UG; Tue, 07 Jan 2014 09:44:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0TCc-00014z-Kw
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 09:44:02 +0000
Received: from [85.158.143.35:24404] by server-3.bemta-4.messagelabs.com id
	6C/DC-32360-26CCBC25; Tue, 07 Jan 2014 09:44:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389087840!10045242!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6292 invoked from network); 7 Jan 2014 09:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 09:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90372783"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 09:44:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	04:43:59 -0500
Message-ID: <1389087838.31766.94.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jim Fehlig <jfehlig@suse.com>
Date: Tue, 7 Jan 2014 09:43:58 +0000
In-Reply-To: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
References: <1389034768-4547-1-git-send-email-jfehlig@suse.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: libvir-list@redhat.com, stefan.bader@canonical.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: Fix initialization of nictype in
 libxl_device_nic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:59 -0700, Jim Fehlig wrote:
> +    bool ioemu_nic = STREQ(def->os.type, "hvm");
> [...] 
> -    if (l_nic->model && !STREQ(l_nic->model, "netfront")) {
> -        if (VIR_STRDUP(x_nic->model, l_nic->model) < 0)
> -            return -1;
> +    if (ioemu_nic)
>          x_nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU;
> -    } else {
> +    else
>          x_nic->nictype = LIBXL_NIC_TYPE_VIF;

It's up to you, but you could just leave nictype set to the default (as
initialised by libxl_device_nic_init) and let the library do the right
thing based on the guest type.

> +
> +    if (l_nic->model) {
> +        if (VIR_STRDUP(x_nic->model, l_nic->model) < 0)
> +            return -1;
> +        if (STREQ(l_nic->model, "netfront"))
> +            x_nic->nictype = LIBXL_NIC_TYPE_VIF;

This bit would remain valid whether or not you did the above.

Ultimately up to you whether you want to precisely control things or
just follow the defaults.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 09:44:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 09:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TD7-00017x-IA; Tue, 07 Jan 2014 09:44:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=077367184=chegger@amazon.de>)
	id 1W0TD5-00017j-MU
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 09:44:32 +0000
Received: from [85.158.137.68:27085] by server-10.bemta-3.messagelabs.com id
	96/1D-23989-E7CCBC25; Tue, 07 Jan 2014 09:44:30 +0000
X-Env-Sender: prvs=077367184=chegger@amazon.de
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389087867!6477699!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 578 invoked from network); 7 Jan 2014 09:44:29 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 09:44:29 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1389087869; x=1420623869;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=Yjhn5BlUZE1kQHVxICdoYUUxHSGhcD6aTeO/8rJIPtY=;
	b=kIZIxTxXDbTZ+WfOgNIv7D/K7V8dlobV7QjgpaSwil0qGumilU0unfl6
	cnqO82BJywHoZqReZcQf3tTj2u3m0+WC94BGuKpgfFliKJ6zYYtDxs2yR
	f3VnxoYp4k7RK32+9LexAMR7VHGkWrASwsWmIR+vjRkWPtPfDFjmf1dBo I=;
X-IronPort-AV: E=Sophos;i="4.95,617,1384300800"; d="scan'208";a="56441897"
Received: from smtp-in-9004.sea19.amazon.com ([10.186.102.8])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 07 Jan 2014 09:44:24 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-9004.sea19.amazon.com (8.14.7/8.14.7) with ESMTP id
	s079iKhI023901
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 7 Jan 2014 09:44:23 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9003.ant.amazon.com (10.185.137.132) with Microsoft SMTP
	Server id 14.2.342.3; Tue, 7 Jan 2014 01:43:46 -0800
Message-ID: <52CBCC4F.8080500@amazon.de>
Date: Tue, 7 Jan 2014 10:43:43 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07.01.14 09:54, Zhang, Yang Z wrote:
> Jan Beulich wrote on 2014-01-07:
>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Jan Beulich wrote on 2013-12-18:
>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>>> wrote:
>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>> processing, stash away and re-use what we already read.
>>>>>>>> That way we can be certain that the retry won't do something
>>>>>>>> different from what requested the retry, getting once again
>>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>>> is simply a bus operation, not involving redundant decoding of instructions).
>>>>>>>>
>>>>>>>
>>>>>>> This patch doesn't consider the nested case.
>>>>>>> For example, if the buffer saved the L2's instruction, then
>>>>>>> vmexit to
>>>>>>> L1 and
>>>>>>> L1 may use the wrong instruction.
>>>>>>
>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>> There should be, at any given point in time, at most one
>>>>>> instruction being emulated. Can you please give a more elaborate
>>>>>> explanation of the situation where you see a (theoretical?
>>>>>> practical?)
>>> problem?
>>>>>
>>>>> I saw this issue when booting L1 hyper-v. I added some debug info
>>>>> and saw the strange phenomenon:
>>>>>
>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>> content:f7420c1f608488b 44000000011442c7
>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>
>>>>> From the log, we can see different eip values but the same buffer.
>>>>> Since I don't know how Hyper-V works, I cannot give more
>>>>> information on why this happens. And I only saw it with L1 Hyper-V
>>>>> (Xen on Xen and KVM on Xen don't have this issue).
>>>>
>>>> But in order to validate the fix is (a) needed and (b) correct, a
>>>> proper understanding of the issue is necessary as the very first step.
>>>> Which doesn't require knowing internals of Hyper-V, all you need is
>>>> tracking of emulator state changes in Xen (which I realize is said
>>>> easier than done, since you want to make sure you don't generate
>>>> huge amounts of output before actually hitting the issue, making it
>>>> close to
>>> impossible to analyze).
>>>
>>> Ok. It should be an old issue that is just exposed by your patch:
>>> Sometimes, L0 needs to decode an L2 instruction to handle IO
>>> access directly, for example when L1 passes through (not VT-d) a device to L2.
>>> L0 may get X86EMUL_RETRY when handling this IO request. At the same
>>> time, if there is a virtual vmexit pending (for example, an
>>> interrupt pending to inject into L1), the hypervisor will switch the
>>> VCPU context from L2 to L1. Now we are already in L1's context, but
>>> since we got X86EMUL_RETRY just now, the hypervisor will
>>> retry the IO request later and, unfortunately, the retry will happen in L1's context.
>>> Without your patch, L0 just emulates an L1 instruction and
>>> everything goes on. With your patch, L0 will get the instruction
>>> from the buffer belonging to L2, and the problem arises.
>>>
>>> So the fix is that if there is a pending IO request, no virtual
>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>
>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>      struct vcpu *v = current;
>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +    ioreq_t *p = get_ioreq(v);
>>>
>>> +    if ( p->state != STATE_IOREQ_NONE )
>>> +        return;
>>>      /*
>>>       * a softirq may interrupt us between a virtual vmentry is
>>>       * just handled and the true vmentry. If during this window,
>>
>> This change looks much more sensible; question is whether simply
>> ignoring the switch request is acceptable (and I don't know the nested
>> HVM code well enough to tell). Furthermore I wonder whether that's really a VMX-only issue.
> 
> From hardware's point of view, an IO operation is handled atomically. So
> the problem will never happen. But in Xen, an IO operation is divided into
> several steps. Without nested virtualization, the VCPU is paused or looped
> until the IO emulation is finished. But in a nested environment, the VCPU
> will continue running even if the IO emulation is not finished. So my patch
> checks this and allows the VCPU to continue running only if there is no
> pending IO request. This matches hardware's behavior.
> 
> I guess SVM also has this problem. But since I don't know nested SVM well,
> perhaps Christoph can help to double check.

For SVM this issue was fixed with commit
d740d811925385c09553cbe6dee8e77c1d43b198

And there is a followup cleanup commit
ac97fa6a21ccd395cca43890bbd0bf32e3255ebb

The change in nestedsvm.c in commit d740d811925385c09553cbe6dee8e77c1d43b198
is actually not SVM specific.

Move that into nhvm_interrupt_blocked() in hvm.c, right before

    return hvm_funcs.nhvm_intr_blocked(v);

and the fix applies for both SVM and VMX.

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:00:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TSY-0002Aj-NW; Tue, 07 Jan 2014 10:00:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0TSX-0002Ae-Ou
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:00:29 +0000
Received: from [85.158.137.68:64242] by server-8.bemta-3.messagelabs.com id
	7E/10-31081-C30DBC25; Tue, 07 Jan 2014 10:00:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389088826!7671703!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8487 invoked from network); 7 Jan 2014 10:00:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90376034"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 10:00:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	05:00:26 -0500
Message-ID: <1389088824.31766.105.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Tue, 7 Jan 2014 10:00:24 +0000
In-Reply-To: <20140106175349.6cbd190b@mantra.us.oracle.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> On Sat,  4 Jan 2014 12:52:16 -0500
> Don Slutz <dslutz@verizon.com> wrote:
> 
> > The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> > returned.
> > 
> > Without this gdb does not report an error.
> > 
> > With this patch and using a 1G hvm domU:
> > 
> > (gdb) x/1xh 0x6ae9168b
> > 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> > 
> > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > ---
> >  xen/arch/x86/domctl.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > index ef6c140..4aa751f 100644
> > --- a/xen/arch/x86/domctl.c
> > +++ b/xen/arch/x86/domctl.c
> > @@ -997,8 +997,7 @@ long arch_do_domctl(
> >              domctl->u.gdbsx_guest_memio.len;
> >  
> >          ret = gdbsx_guest_mem_io(domctl->domain,
> > &domctl->u.gdbsx_guest_memio);
> > -        if ( !ret )
> > -           copyback = 1;
> > +        copyback = 1;
> >      }
> >      break;
> >  
> 
> Ooopsy... my thought was that an application should not even look at
> remain if the hcall/syscall failed, but I forgot that when writing
> gdbsx itself :). Think of it this way: if the call didn't even make it
> to xen, and for some reason the ioctl returned a non-zero rc, then
> remain would still be zero. So I think we should fix gdbsx instead:
> 
> xg_write_mem():
>     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
>     {
>         XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
>                iop->remain, errno, rc);

Isn't this still using iop->remain on failure, which is what you say
shouldn't be done?

>         return iop->len;
>     }    
> 
> Similarly in xg_read_mem().
> 
> Hope that makes sense. I don't mean to create work for you over my
> mistake, so if you don't have time, I can submit a patch for this too.
> 
> thanks
> Mukesh
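The convention Mukesh describes above — treat ->remain as meaningful only when the hypercall itself returned success — might look like this in simplified form. The memio struct and fake_hcall are illustrative stand-ins, not the real libxc/gdbsx interfaces:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Simplified stand-in for the gdbsx memio argument block. */
struct memio {
    int len;     /* bytes requested */
    int remain;  /* bytes NOT transferred; written back by the hypervisor */
};

/* Stand-in hypercall: a non-zero rc may mean the call never reached
 * Xen at all, in which case remain was never written back. */
static int fake_hcall(struct memio *iop, int fail)
{
    if ( fail )
        return -1;      /* remain left untouched */
    iop->remain = 0;    /* full success */
    return 0;
}

/* Returns bytes actually written: 0 on hcall failure (don't trust
 * remain), otherwise len - remain. */
static int xg_write_mem_sketch(struct memio *iop, int fail)
{
    int rc = fake_hcall(iop, fail);
    if ( rc )
    {
        /* Report rc/errno only; iop->remain is unspecified here. */
        fprintf(stderr, "ERROR: write failed. rc:%d errno:%d\n", rc, errno);
        return 0;
    }
    return iop->len - iop->remain;
}
```

The point of the sketch is the error path: on a non-zero rc the caller reports rc and errno and never reads remain, since the kernel-side value may simply be whatever the caller initialized it to.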



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:02:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TUN-0002GU-91; Tue, 07 Jan 2014 10:02:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0TUM-0002GL-Er
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:02:22 +0000
Received: from [85.158.143.35:48046] by server-2.bemta-4.messagelabs.com id
	EB/ED-11386-DA0DBC25; Tue, 07 Jan 2014 10:02:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389088939!2956590!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7763 invoked from network); 7 Jan 2014 10:02:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:02:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88212577"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 10:02:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	05:02:19 -0500
Message-ID: <1389088937.31766.107.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Tue, 7 Jan 2014 10:02:17 +0000
In-Reply-To: <1389088824.31766.105.camel@kazak.uk.xensource.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 10:00 +0000, Ian Campbell wrote:
> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> > On Sat,  4 Jan 2014 12:52:16 -0500
> > Don Slutz <dslutz@verizon.com> wrote:
> > 
> > > The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> > > returned.
> > > 
> > > Without this gdb does not report an error.
> > > 
> > > With this patch and using a 1G hvm domU:
> > > 
> > > (gdb) x/1xh 0x6ae9168b
> > > 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> > > 
> > > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > > ---
> > >  xen/arch/x86/domctl.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > > index ef6c140..4aa751f 100644
> > > --- a/xen/arch/x86/domctl.c
> > > +++ b/xen/arch/x86/domctl.c
> > > @@ -997,8 +997,7 @@ long arch_do_domctl(
> > >              domctl->u.gdbsx_guest_memio.len;
> > >  
> > >          ret = gdbsx_guest_mem_io(domctl->domain,
> > > &domctl->u.gdbsx_guest_memio);
> > > -        if ( !ret )
> > > -           copyback = 1;
> > > +        copyback = 1;
> > >      }
> > >      break;
> > >  
> > 
> > Ooopsy... my thought was that an application should not even look at
> > remain if the hcall/syscall failed, but I forgot that when writing
> > gdbsx itself :). Think of it this way: if the call didn't even make it
> > to xen, and for some reason the ioctl returned a non-zero rc, then
> > remain would still be zero.

Thinking about this for a second longer -- how does this interface deal
with partial success?

It seems natural to me for it to return an error and also update
->remain, but perhaps you have a different scheme in mind?
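The scheme Ian suggests — an error return combined with an updated ->remain — could look like this in simplified form (the names and struct are illustrative, not the actual domctl interface):

```c
#include <assert.h>

/* Simplified stand-in for the gdbsx memio argument block. */
struct memio {
    int len;     /* bytes requested */
    int remain;  /* bytes left undone, written back even on error */
};

/* Copy up to 'accessible' bytes of the request.  On partial success,
 * return an error AND record the shortfall in ->remain, so the caller
 * can report both the failure and the exact progress made. */
static int guest_mem_io_sketch(struct memio *iop, int accessible)
{
    int done = iop->len < accessible ? iop->len : accessible;
    iop->remain = iop->len - done;
    return iop->remain ? -1 : 0;
}
```

Under this scheme the caller computes len - remain for the bytes actually transferred even when the return code is negative, which is exactly what a debugger reading guest memory near an unmapped boundary wants.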



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:04:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TWP-0002Oz-RS; Tue, 07 Jan 2014 10:04:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0TWN-0002Os-Tu
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:04:28 +0000
Received: from [85.158.143.35:12709] by server-3.bemta-4.messagelabs.com id
	02/7A-32360-B21DBC25; Tue, 07 Jan 2014 10:04:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389089065!10056361!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25686 invoked from network); 7 Jan 2014 10:04:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:04:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90376990"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 10:04:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 05:04:24 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W0TWK-0001nm-IZ; Tue, 07 Jan 2014 10:04:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 10:04:23 +0000
Message-ID: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Patch] tools/libxc: Correct read_exact() error messages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The errors have been incorrectly identifying their function since c/s
861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
cleanup.

Use __func__ to ensure the name remains correct in the future.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxc/xc_domain_restore.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 80769a7..ca2fb51 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -87,7 +87,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
             if ( len == -1 && errno == EINTR )
                 continue;
             if ( !FD_ISSET(fd, &rfds) ) {
-                ERROR("read_exact_timed failed (select returned %zd)", len);
+                ERROR("%s failed (select returned %zd)", __func__, len);
                 errno = ETIMEDOUT;
                 return -1;
             }
@@ -101,7 +101,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
             errno = 0;
         }
         if ( len <= 0 ) {
-            ERROR("read_exact_timed failed (read rc: %d, errno: %d)", len, errno);
+            ERROR("%s failed (read rc: %d, errno: %d)", __func__, len, errno);
             return -1;
         }
         offset += len;
-- 
1.7.10.4
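The property the patch relies on — that __func__ expands to the enclosing function's name at compile time, so a message can never drift out of sync after a rename — can be checked standalone. The report() helper below only mimics the shape of libxc's ERROR() macro; it is not the real one:

```c
#include <stdio.h>
#include <string.h>

/* Mimics the libxc ERROR() call pattern, but formats into a caller
 * buffer instead of the xc error handler so the text can be inspected.
 * __func__ here expands to "report" automatically. */
static void report(char *buf, size_t n)
{
    snprintf(buf, n, "%s failed (read rc: %d, errno: %d)", __func__, -1, 0);
}
```

If report() were later renamed, the message would follow along with no source change, which is exactly the maintenance hazard the hard-coded "read_exact_timed" strings ran into.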


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:06:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0TY4-0002X9-Cf; Tue, 07 Jan 2014 10:06:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1W0HJS-0007wY-3v
	for xen-devel@lists.xen.org; Mon, 06 Jan 2014 21:02:18 +0000
Received: from [85.158.137.68:50561] by server-13.bemta-3.messagelabs.com id
	31/A0-28603-9D91BC25; Mon, 06 Jan 2014 21:02:17 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389042134!7510143!1
X-Originating-IP: [15.201.24.17]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUuMjAxLjI0LjE3ID0+IDcyNjQ1Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19677 invoked from network); 6 Jan 2014 21:02:16 -0000
Received: from g4t0014.houston.hp.com (HELO g4t0014.houston.hp.com)
	(15.201.24.17)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Jan 2014 21:02:16 -0000
Received: from G4W6310.americas.hpqcorp.net (g4w6310.houston.hp.com
	[16.210.26.217]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t0014.houston.hp.com (Postfix) with ESMTPS id A29B924017
	for <xen-devel@lists.xen.org>; Mon,  6 Jan 2014 21:02:14 +0000 (UTC)
Received: from G4W6306.americas.hpqcorp.net (16.210.26.231) by
	G4W6310.americas.hpqcorp.net (16.210.26.217) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 6 Jan 2014 21:01:22 +0000
Received: from G4W3221.americas.hpqcorp.net ([169.254.3.234]) by
	G4W6306.americas.hpqcorp.net ([16.210.26.231]) with mapi id
	14.03.0123.003; Mon, 6 Jan 2014 21:01:22 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: passing smbios table from qemu
Thread-Index: Ac8LIYIALIWytgb1QG2L+qIOhaKtGQ==
Date: Mon, 6 Jan 2014 21:01:21 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
X-Mailman-Approved-At: Tue, 07 Jan 2014 10:06:11 +0000
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Subject: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2560261537247038096=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2560261537247038096==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57G4W3221americas_"

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57G4W3221americas_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi all,

QEMU 1.6.1 allows us to pass multiple SMBIOS tables to it using the
-smbios option(s).  However, I can't seem to see those tables from
inside the guest OS.  I used rwall under Windows and dmidecode under
Linux; both return a fresh SMBIOS table built by hvmloader instead of
the ones I passed to QEMU.

I tried to figure out what's missing by digging into the code.  To my
surprise, although QEMU can now read multiple SMBIOS tables and keep
them in a nice structure, that structure is never used by hvmloader or
SeaBIOS.  I have looked at Xen 4.2.0 through Xen 4.2.3, with SeaBIOS up
to 1.7.3.

Am I missing anything, or is this feature (passing SMBIOS tables
through) still a work in progress?
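For context, the kind of invocation involved looks roughly like this (a sketch: the option strings, and forwarding them via the xl config's device_model_args, are illustrative rather than my exact setup):

```shell
# Illustrative -smbios strings; these are meant to override the type 0/1
# tables QEMU would otherwise build itself.
SMBIOS_TYPE0="type=0,vendor=TestVendor,version=1.0"
SMBIOS_TYPE1="type=1,manufacturer=TestCorp,product=TestBox"

# Under xl, extra QEMU arguments can be forwarded with device_model_args,
# e.g.:  device_model_args = [ "-smbios", "type=1,manufacturer=TestCorp" ]
# Shown here without actually launching QEMU:
echo "qemu -smbios $SMBIOS_TYPE0 -smbios $SMBIOS_TYPE1"
```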

Regards/Eniac

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57G4W3221americas_--


--===============2560261537247038096==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2560261537247038096==--


From xen-devel-bounces@lists.xen.org Tue Jan 07 10:26:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Tqx-0003QQ-4L; Tue, 07 Jan 2014 10:25:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0Tqu-0003QL-J1
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:25:41 +0000
Received: from [85.158.137.68:60496] by server-1.bemta-3.messagelabs.com id
	78/0E-29598-326DBC25; Tue, 07 Jan 2014 10:25:39 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389090336!7609767!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27556 invoked from network); 7 Jan 2014 10:25:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:25:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90382000"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 10:25:36 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 05:25:35 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 11:25:34 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Thread-Topic: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
Thread-Index: AQHO6sf9BfCBQXbBGka9crYXUPAkJppu0YSAgAp9oNA=
Date: Tue, 7 Jan 2014 10:25:34 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
In-Reply-To: <52C31691.9040302@oracle.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
	IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: 31 December 2013 19:10
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
> 
> On 11/26/2013 11:41 AM, Paul Durrant wrote:
> > This patch adds support for IPv6 checksum offload and GSO when those
> > features are available in the backend.
> 
> Sorry for late review. Mostly style comments.
> 

Thanks for the review.

The checksum-related code essentially needs to duplicate what is in
netback, and it seems wasteful to have the code in both places. Could
this code perhaps be moved to net/core/dev.c? It's not specific to
netback/netfront usage.

Opinions?

  Paul

> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Annie Li <annie.li@oracle.com>
> > ---
> >
> > v3:
> >   - Addressed comments raised by Annie Li
> >
> > v2:
> >   - Addressed comments raised by Ian Campbell
> >
> >   drivers/net/xen-netfront.c |  239 ++++++++++++++++++++++++++++++++++++++++----
> >   include/linux/ipv6.h       |    2 +
> >   2 files changed, 224 insertions(+), 17 deletions(-)
> >
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index dd1011e..fe747e4 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -616,7 +616,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >   		tx->flags |= XEN_NETTXF_extra_info;
> >
> >   		gso->u.gso.size = skb_shinfo(skb)->gso_size;
> > -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
> > +		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
> > +			          XEN_NETIF_GSO_TYPE_TCPV6 :
> > +			          XEN_NETIF_GSO_TYPE_TCPV4;
> >   		gso->u.gso.pad = 0;
> >   		gso->u.gso.features = 0;
> >
> > @@ -808,15 +810,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
> >   		return -EINVAL;
> >   	}
> >
> > -	/* Currently only TCPv4 S.O. is supported. */
> > -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
> > +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
> > +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
> >   		if (net_ratelimit())
> >   			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
> >   		return -EINVAL;
> >   	}
> >
> >   	skb_shinfo(skb)->gso_size = gso->u.gso.size;
> > -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> > +	skb_shinfo(skb)->gso_type =
> > +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
> > +		SKB_GSO_TCPV4 :
> > +		SKB_GSO_TCPV6;
> >
> >   	/* Header must be checked, and gso_segs computed. */
> >   	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
> > @@ -856,11 +861,42 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
> >   	return cons;
> >   }
> >
> > -static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> > +static inline bool maybe_pull_tail(struct sk_buff *skb, unsigned int min,
> > +				   unsigned int max)
> 
> Should this routine return error code instead of a boolean? Otherwise
> it's not clear what "false" should mean --- whether it is that it failed
> to pull or that the pull wasn't needed.
> 
> >   {
> > -	struct iphdr *iph;
> > -	int err = -EPROTO;
> > +	int target;
> > +
> > +	BUG_ON(max < min);
> > +
> > +	if (!skb_is_nonlinear(skb) || skb_headlen(skb) >= min)
> > +		return true;
> > +
> > +	/* If we need to pullup then pullup to max, so we hopefully
> > +	 * won't need to do it again.
> > +	 */
> 
> Comment style.
> 
> > +	target = min_t(int, skb->len, max);
> > +	__pskb_pull_tail(skb, target - skb_headlen(skb));
> > +
> > +	if (skb_headlen(skb) < min) {
> 
> Why not explicitly check whether__pskb_pull_tail() returned NULL ?
> 
> > +		net_err_ratelimited("Failed to pullup packet header\n");
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +/* This value should be large enough to cover a tagged ethernet header plus
> > + * maximally sized IP and TCP or UDP headers.
> > + */
> 
> Comment style.
> 
> > +#define MAX_IP_HEADER 128
> > +
> > +static int checksum_setup_ip(struct net_device *dev, struct sk_buff *skb)
> > +{
> > +	struct iphdr *iph = (void *)skb->data;
> > +	unsigned int header_size;
> > +	unsigned int off;
> >   	int recalculate_partial_csum = 0;
> > +	int err = -EPROTO;
> >
> >   	/*
> >   	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
> > @@ -879,40 +915,158 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> >   	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >   		return 0;
> >
> > -	if (skb->protocol != htons(ETH_P_IP))
> > +	off = sizeof(struct iphdr);
> > +
> > +	header_size = skb->network_header + off;
> > +	if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> >   		goto out;
> >
> > -	iph = (void *)skb->data;
> > +	off = iph->ihl * 4;
> >
> >   	switch (iph->protocol) {
> >   	case IPPROTO_TCP:
> > -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> > +		if (!skb_partial_csum_set(skb, off,
> >   					  offsetof(struct tcphdr, check)))
> >   			goto out;
> >
> >   		if (recalculate_partial_csum) {
> >   			struct tcphdr *tcph = tcp_hdr(skb);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct tcphdr);
> 
> You can put these (off and sizeof) onto the same line.
> 
> > +			if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> > +				goto out;
> > +
> >   			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
> > -							 skb->len - iph->ihl*4,
> > +							 skb->len - off,
> >   							 IPPROTO_TCP, 0);
> >   		}
> >   		break;
> >   	case IPPROTO_UDP:
> > -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> > +		if (!skb_partial_csum_set(skb, off,
> >   					  offsetof(struct udphdr, check)))
> >   			goto out;
> >
> >   		if (recalculate_partial_csum) {
> >   			struct udphdr *udph = udp_hdr(skb);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct udphdr);
> > +			if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> > +				goto out;
> > +
> >   			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
> > -							 skb->len - iph->ihl*4,
> > +							 skb->len - off,
> >   							 IPPROTO_UDP, 0);
> >   		}
> >   		break;
> >   	default:
> > -		if (net_ratelimit())
> > -			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
> > -			       iph->protocol);
> > +		net_err_ratelimited("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
> > +				    iph->protocol);
> > +		goto out;
> > +	}
> > +
> > +	err = 0;
> > +
> > +out:
> > +	return err;
> > +}
> > +
> > +/* This value should be large enough to cover a tagged ethernet header plus
> > + * an IPv6 header, all options, and a maximal TCP or UDP header.
> > + */
> > +#define MAX_IPV6_HEADER 256
> > +
> > +static int checksum_setup_ipv6(struct net_device *dev, struct sk_buff *skb)
> > +{
> > +	struct ipv6hdr *ipv6h = (void *)skb->data;
> > +	u8 nexthdr;
> > +	unsigned int header_size;
> > +	unsigned int off;
> > +	bool fragment;
> > +	bool done;
> > +	int err = -EPROTO;
> > +
> > +	done = false;
> 
> This should probably be moved down to the beginning of the while loop.
> And you also need to initialize fragment to "false" (and possibly rename
> it to is_fragment?)
> 
> > +
> > +	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
> > +	if (skb->ip_summed != CHECKSUM_PARTIAL)
> > +		return 0;
> > +
> > +	off = sizeof(struct ipv6hdr);
> > +
> > +	header_size = skb->network_header + off;
> > +	if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> > +		goto out;
> > +
> > +	nexthdr = ipv6h->nexthdr;
> > +
> > +	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) &&
> > +	       !done) {
> > +		switch (nexthdr) {
> > +		case IPPROTO_DSTOPTS:
> > +		case IPPROTO_HOPOPTS:
> > +		case IPPROTO_ROUTING: {
> > +			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct ipv6_opt_hdr);
> 
> I'd merge the last two lines.
> 
> > +			if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> > +				goto out;
> > +
> > +			nexthdr = hp->nexthdr;
> > +			off += ipv6_optlen(hp);
> > +			break;
> > +		}
> > +		case IPPROTO_AH: {
> > +			struct ip_auth_hdr *hp = (void *)(skb->data + off);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct ip_auth_hdr);
> 
> Here as well.
> 
> > +			if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> > +				goto out;
> > +
> > +			nexthdr = hp->nexthdr;
> > +			off += ipv6_ahlen(hp);
> > +			break;
> > +		}
> > +		case IPPROTO_FRAGMENT:
> > +			fragment = true;
> > +			/* fall through */
> > +		default:
> > +			done = true;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!done) {
> > +		net_err_ratelimited("Failed to parse packet header\n");
> > +		goto out;
> > +	}
> > +
> > +	if (fragment) {
> > +		net_err_ratelimited("Packet is a fragment!\n");
> > +		goto out;
> > +	}
> > +
> > +	switch (nexthdr) {
> > +	case IPPROTO_TCP:
> > +		if (!skb_partial_csum_set(skb, off,
> > +					  offsetof(struct tcphdr, check)))
> > +			goto out;
> > +		break;
> > +	case IPPROTO_UDP:
> > +		if (!skb_partial_csum_set(skb, off,
> > +					  offsetof(struct udphdr, check)))
> > +			goto out;
> > +		break;
> > +	default:
> > +		net_err_ratelimited("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
> > +				    nexthdr);
> >   		goto out;
> >   	}
> >
> > @@ -922,6 +1076,25 @@ out:
> >   	return err;
> >   }
> >
> > +static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> > +{
> > +	int err;
> 
> Initialize to -EPROTO (just to keep consistent with the rest of the file)
> 
> > +
> > +	switch (skb->protocol) {
> > +	case htons(ETH_P_IP):
> > +		err = checksum_setup_ip(dev, skb);
> > +		break;
> > +	case htons(ETH_P_IPV6):
> > +		err = checksum_setup_ipv6(dev, skb);
> > +		break;
> > +	default:
> > +		err = -EPROTO;
> > +		break;
> > +	}
> > +
> > +	return err;
> > +}
> > +
> >   static int handle_incoming_queue(struct net_device *dev,
> >   				 struct sk_buff_head *rxq)
> >   {
> > @@ -1232,6 +1405,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
> >   			features &= ~NETIF_F_SG;
> >   	}
> >
> > +	if (features & NETIF_F_IPV6_CSUM) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-ipv6-csum-offload", "%d", &val) < 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_IPV6_CSUM;
> > +	}
> > +
> >   	if (features & NETIF_F_TSO) {
> >   		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >   				 "feature-gso-tcpv4", "%d", &val) < 0)
> > @@ -1241,6 +1423,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
> >   			features &= ~NETIF_F_TSO;
> >   	}
> >
> > +	if (features & NETIF_F_TSO6) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-gso-tcpv6", "%d", &val) < 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_TSO6;
> > +	}
> > +
> >   	return features;
> >   }
> >
> > @@ -1373,7 +1564,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
> >   	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >   	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >   				  NETIF_F_GSO_ROBUST;
> > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> > +	netdev->hw_features	= NETIF_F_SG |
> > +		                  NETIF_F_IPV6_CSUM |
> > +		                  NETIF_F_TSO | NETIF_F_TSO6;
> 
> Can you merge these three lines and stay under 80? If not, merge either
> of the two of them.
> 
> 
> -boris
> 
> >
> >   	/*
> >            * Assume that all hw features are available for now. This set
> > @@ -1751,6 +1944,18 @@ again:
> >   		goto abort_transaction;
> >   	}
> >
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6", "%d", 1);
> > +	if (err) {
> > +		message = "writing feature-gso-tcpv6";
> > +		goto abort_transaction;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-offload", "%d", 1);
> > +	if (err) {
> > +		message = "writing feature-ipv6-csum-offload";
> > +		goto abort_transaction;
> > +	}
> > +
> >   	err = xenbus_transaction_end(xbt, 0);
> >   	if (err) {
> >   		if (err == -EAGAIN)
> > diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
> > index 5d89d1b..10f1b03 100644
> > --- a/include/linux/ipv6.h
> > +++ b/include/linux/ipv6.h
> > @@ -4,6 +4,8 @@
> >   #include <uapi/linux/ipv6.h>
> >
> >   #define ipv6_optlen(p)  (((p)->hdrlen+1) << 3)
> > +#define ipv6_ahlen(p)   (((p)->hdrlen+2) << 2);
> > +
> >   /*
> >    * This structure contains configuration options per IPv6 link.
> >    */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:26:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Tqx-0003QQ-4L; Tue, 07 Jan 2014 10:25:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0Tqu-0003QL-J1
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:25:41 +0000
Received: from [85.158.137.68:60496] by server-1.bemta-3.messagelabs.com id
	78/0E-29598-326DBC25; Tue, 07 Jan 2014 10:25:39 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389090336!7609767!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27556 invoked from network); 7 Jan 2014 10:25:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:25:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90382000"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 10:25:36 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 05:25:35 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 11:25:34 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Thread-Topic: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
Thread-Index: AQHO6sf9BfCBQXbBGka9crYXUPAkJppu0YSAgAp9oNA=
Date: Tue, 7 Jan 2014 10:25:34 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
In-Reply-To: <52C31691.9040302@oracle.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
	IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: 31 December 2013 19:10
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
> 
> On 11/26/2013 11:41 AM, Paul Durrant wrote:
> > This patch adds support for IPv6 checksum offload and GSO when those
> > features are available in the backend.
> 
> Sorry for late review. Mostly style comments.
> 

Thanks for the review.

The checksum-related code essentially needs to duplicate what is in
netback, and it seems wasteful to carry the same code in both places.
Could it perhaps be moved to net/core/dev.c? It is not specific to
netback/netfront usage.

Opinions?

  Paul

> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Annie Li <annie.li@oracle.com>
> > ---
> >
> > v3:
> >   - Addressed comments raised by Annie Li
> >
> > v2:
> >   - Addressed comments raised by Ian Campbell
> >
> >   drivers/net/xen-netfront.c |  239
> ++++++++++++++++++++++++++++++++++++++++----
> >   include/linux/ipv6.h       |    2 +
> >   2 files changed, 224 insertions(+), 17 deletions(-)
> >
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index dd1011e..fe747e4 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -616,7 +616,9 @@ static int xennet_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
> >   		tx->flags |= XEN_NETTXF_extra_info;
> >
> >   		gso->u.gso.size = skb_shinfo(skb)->gso_size;
> > -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
> > +		gso->u.gso.type = (skb_shinfo(skb)->gso_type &
> SKB_GSO_TCPV6) ?
> > +			          XEN_NETIF_GSO_TYPE_TCPV6 :
> > +			          XEN_NETIF_GSO_TYPE_TCPV4;
> >   		gso->u.gso.pad = 0;
> >   		gso->u.gso.features = 0;
> >
> > @@ -808,15 +810,18 @@ static int xennet_set_skb_gso(struct sk_buff
> *skb,
> >   		return -EINVAL;
> >   	}
> >
> > -	/* Currently only TCPv4 S.O. is supported. */
> > -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
> > +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
> > +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
> >   		if (net_ratelimit())
> >   			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
> >   		return -EINVAL;
> >   	}
> >
> >   	skb_shinfo(skb)->gso_size = gso->u.gso.size;
> > -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> > +	skb_shinfo(skb)->gso_type =
> > +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
> > +		SKB_GSO_TCPV4 :
> > +		SKB_GSO_TCPV6;
> >
> >   	/* Header must be checked, and gso_segs computed. */
> >   	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
> > @@ -856,11 +861,42 @@ static RING_IDX xennet_fill_frags(struct
> netfront_info *np,
> >   	return cons;
> >   }
> >
> > -static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> > +static inline bool maybe_pull_tail(struct sk_buff *skb, unsigned int min,
> > +				   unsigned int max)
> 
> Should this routine return error code instead of a boolean? Otherwise
> it's not clear what "false" should mean --- whether it is that it failed
> to pull or that the pull wasn't needed.
> 
> >   {
> > -	struct iphdr *iph;
> > -	int err = -EPROTO;
> > +	int target;
> > +
> > +	BUG_ON(max < min);
> > +
> > +	if (!skb_is_nonlinear(skb) || skb_headlen(skb) >= min)
> > +		return true;
> > +
> > +	/* If we need to pullup then pullup to max, so we hopefully
> > +	 * won't need to do it again.
> > +	 */
> 
> Comment style.
> 
> > +	target = min_t(int, skb->len, max);
> > +	__pskb_pull_tail(skb, target - skb_headlen(skb));
> > +
> > +	if (skb_headlen(skb) < min) {
> 
> Why not explicitly check whether __pskb_pull_tail() returned NULL?
> 
> > +		net_err_ratelimited("Failed to pullup packet header\n");
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +/* This value should be large enough to cover a tagged ethernet header
> plus
> > + * maximally sized IP and TCP or UDP headers.
> > + */
> 
> Comment style.
> 
> > +#define MAX_IP_HEADER 128
> > +
> > +static int checksum_setup_ip(struct net_device *dev, struct sk_buff
> *skb)
> > +{
> > +	struct iphdr *iph = (void *)skb->data;
> > +	unsigned int header_size;
> > +	unsigned int off;
> >   	int recalculate_partial_csum = 0;
> > +	int err = -EPROTO;
> >
> >   	/*
> >   	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
> > @@ -879,40 +915,158 @@ static int checksum_setup(struct net_device
> *dev, struct sk_buff *skb)
> >   	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >   		return 0;
> >
> > -	if (skb->protocol != htons(ETH_P_IP))
> > +	off = sizeof(struct iphdr);
> > +
> > +	header_size = skb->network_header + off;
> > +	if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> >   		goto out;
> >
> > -	iph = (void *)skb->data;
> > +	off = iph->ihl * 4;
> >
> >   	switch (iph->protocol) {
> >   	case IPPROTO_TCP:
> > -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> > +		if (!skb_partial_csum_set(skb, off,
> >   					  offsetof(struct tcphdr, check)))
> >   			goto out;
> >
> >   		if (recalculate_partial_csum) {
> >   			struct tcphdr *tcph = tcp_hdr(skb);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct tcphdr);
> 
> You can put these (off and sizeof) onto the same line.
> 
> > +			if (!maybe_pull_tail(skb, header_size,
> MAX_IP_HEADER))
> > +				goto out;
> > +
> >   			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph-
> >daddr,
> > -							 skb->len - iph->ihl*4,
> > +							 skb->len - off,
> >   							 IPPROTO_TCP, 0);
> >   		}
> >   		break;
> >   	case IPPROTO_UDP:
> > -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> > +		if (!skb_partial_csum_set(skb, off,
> >   					  offsetof(struct udphdr, check)))
> >   			goto out;
> >
> >   		if (recalculate_partial_csum) {
> >   			struct udphdr *udph = udp_hdr(skb);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct udphdr);
> > +			if (!maybe_pull_tail(skb, header_size,
> MAX_IP_HEADER))
> > +				goto out;
> > +
> >   			udph->check = ~csum_tcpudp_magic(iph->saddr,
> iph->daddr,
> > -							 skb->len - iph->ihl*4,
> > +							 skb->len - off,
> >   							 IPPROTO_UDP, 0);
> >   		}
> >   		break;
> >   	default:
> > -		if (net_ratelimit())
> > -			pr_err("Attempting to checksum a non-TCP/UDP
> packet, dropping a protocol %d packet\n",
> > -			       iph->protocol);
> > +		net_err_ratelimited("Attempting to checksum a non-
> TCP/UDP packet, dropping a protocol %d packet\n",
> > +				    iph->protocol);
> > +		goto out;
> > +	}
> > +
> > +	err = 0;
> > +
> > +out:
> > +	return err;
> > +}
> > +
> > +/* This value should be large enough to cover a tagged ethernet header
> plus
> > + * an IPv6 header, all options, and a maximal TCP or UDP header.
> > + */
> > +#define MAX_IPV6_HEADER 256
> > +
> > +static int checksum_setup_ipv6(struct net_device *dev, struct sk_buff
> *skb)
> > +{
> > +	struct ipv6hdr *ipv6h = (void *)skb->data;
> > +	u8 nexthdr;
> > +	unsigned int header_size;
> > +	unsigned int off;
> > +	bool fragment;
> > +	bool done;
> > +	int err = -EPROTO;
> > +
> > +	done = false;
> 
> This should probably be moved down to the beginning of the while loop.
> And you also need to initialize fragment to "false" (and possibly rename
> it to is_fragment?)
> 
> > +
> > +	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
> > +	if (skb->ip_summed != CHECKSUM_PARTIAL)
> > +		return 0;
> > +
> > +	off = sizeof(struct ipv6hdr);
> > +
> > +	header_size = skb->network_header + off;
> > +	if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> > +		goto out;
> > +
> > +	nexthdr = ipv6h->nexthdr;
> > +
> > +	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len))
> &&
> > +	       !done) {
> > +		switch (nexthdr) {
> > +		case IPPROTO_DSTOPTS:
> > +		case IPPROTO_HOPOPTS:
> > +		case IPPROTO_ROUTING: {
> > +			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct ipv6_opt_hdr);
> 
> I'd merge the last two lines.
> 
> > +			if (!maybe_pull_tail(skb, header_size,
> MAX_IPV6_HEADER))
> > +				goto out;
> > +
> > +			nexthdr = hp->nexthdr;
> > +			off += ipv6_optlen(hp);
> > +			break;
> > +		}
> > +		case IPPROTO_AH: {
> > +			struct ip_auth_hdr *hp = (void *)(skb->data + off);
> > +
> > +			header_size = skb->network_header +
> > +				off +
> > +				sizeof(struct ip_auth_hdr);
> 
> Here as well.
> 
> > +			if (!maybe_pull_tail(skb, header_size,
> MAX_IPV6_HEADER))
> > +				goto out;
> > +
> > +			nexthdr = hp->nexthdr;
> > +			off += ipv6_ahlen(hp);
> > +			break;
> > +		}
> > +		case IPPROTO_FRAGMENT:
> > +			fragment = true;
> > +			/* fall through */
> > +		default:
> > +			done = true;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!done) {
> > +		net_err_ratelimited("Failed to parse packet header\n");
> > +		goto out;
> > +	}
> > +
> > +	if (fragment) {
> > +		net_err_ratelimited("Packet is a fragment!\n");
> > +		goto out;
> > +	}
> > +
> > +	switch (nexthdr) {
> > +	case IPPROTO_TCP:
> > +		if (!skb_partial_csum_set(skb, off,
> > +					  offsetof(struct tcphdr, check)))
> > +			goto out;
> > +		break;
> > +	case IPPROTO_UDP:
> > +		if (!skb_partial_csum_set(skb, off,
> > +					  offsetof(struct udphdr, check)))
> > +			goto out;
> > +		break;
> > +	default:
> > +		net_err_ratelimited("Attempting to checksum a non-
> TCP/UDP packet, dropping a protocol %d packet\n",
> > +				    nexthdr);
> >   		goto out;
> >   	}
> >
> > @@ -922,6 +1076,25 @@ out:
> >   	return err;
> >   }
> >
> > +static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> > +{
> > +	int err;
> 
> Initialize to -EPROTO (just to keep consistent with the rest of the file)
> 
> > +
> > +	switch (skb->protocol) {
> > +	case htons(ETH_P_IP):
> > +		err = checksum_setup_ip(dev, skb);
> > +		break;
> > +	case htons(ETH_P_IPV6):
> > +		err = checksum_setup_ipv6(dev, skb);
> > +		break;
> > +	default:
> > +		err = -EPROTO;
> > +		break;
> > +	}
> > +
> > +	return err;
> > +}
> > +
> >   static int handle_incoming_queue(struct net_device *dev,
> >   				 struct sk_buff_head *rxq)
> >   {
> > @@ -1232,6 +1405,15 @@ static netdev_features_t
> xennet_fix_features(struct net_device *dev,
> >   			features &= ~NETIF_F_SG;
> >   	}
> >
> > +	if (features & NETIF_F_IPV6_CSUM) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-ipv6-csum-offload", "%d", &val) <
> 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_IPV6_CSUM;
> > +	}
> > +
> >   	if (features & NETIF_F_TSO) {
> >   		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >   				 "feature-gso-tcpv4", "%d", &val) < 0)
> > @@ -1241,6 +1423,15 @@ static netdev_features_t
> xennet_fix_features(struct net_device *dev,
> >   			features &= ~NETIF_F_TSO;
> >   	}
> >
> > +	if (features & NETIF_F_TSO6) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-gso-tcpv6", "%d", &val) < 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_TSO6;
> > +	}
> > +
> >   	return features;
> >   }
> >
> > @@ -1373,7 +1564,9 @@ static struct net_device
> *xennet_create_dev(struct xenbus_device *dev)
> >   	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >   	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >   				  NETIF_F_GSO_ROBUST;
> > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG |
> NETIF_F_TSO;
> > +	netdev->hw_features	= NETIF_F_SG |
> > +		                  NETIF_F_IPV6_CSUM |
> > +		                  NETIF_F_TSO | NETIF_F_TSO6;
> 
> Can you merge these three lines and stay under 80? If not, merge either
> of the two of them.
> 
> 
> -boris
> 
> >
> >   	/*
> >            * Assume that all hw features are available for now. This set
> > @@ -1751,6 +1944,18 @@ again:
> >   		goto abort_transaction;
> >   	}
> >
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
> "%d", 1);
> > +	if (err) {
> > +		message = "writing feature-gso-tcpv6";
> > +		goto abort_transaction;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-
> offload", "%d", 1);
> > +	if (err) {
> > +		message = "writing feature-ipv6-csum-offload";
> > +		goto abort_transaction;
> > +	}
> > +
> >   	err = xenbus_transaction_end(xbt, 0);
> >   	if (err) {
> >   		if (err == -EAGAIN)
> > diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
> > index 5d89d1b..10f1b03 100644
> > --- a/include/linux/ipv6.h
> > +++ b/include/linux/ipv6.h
> > @@ -4,6 +4,8 @@
> >   #include <uapi/linux/ipv6.h>
> >
> >   #define ipv6_optlen(p)  (((p)->hdrlen+1) << 3)
> > +#define ipv6_ahlen(p)   (((p)->hdrlen+2) << 2);
> > +
> >   /*
> >    * This structure contains configuration options per IPv6 link.
> >    */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:36:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0U0n-0003wn-4m; Tue, 07 Jan 2014 10:35:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W0U0m-0003wi-Gl
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:35:52 +0000
Received: from [193.109.254.147:4516] by server-11.bemta-14.messagelabs.com id
	10/AB-20576-788DBC25; Tue, 07 Jan 2014 10:35:51 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389090950!6956555!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28595 invoked from network); 7 Jan 2014 10:35:50 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 10:35:50 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id 286DD2202F2
	for <xen-devel@lists.xen.org>; Tue,  7 Jan 2014 10:35:50 +0000 (GMT)
MIME-Version: 1.0
Date: Tue, 07 Jan 2014 10:35:49 +0000
From: Gordan Bobic <gordan@bobich.net>
To: xen-devel@lists.xen.org
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
References: <52308E1402000078000F2748@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>
	<20131211183233.GA2760@phenom.dumpdata.com>
	<52A8D5E5.2030902@bobich.net>
	<20131211213025.GA8283@phenom.dumpdata.com>
	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
Message-ID: <1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-07 03:17, Zhang, Yang Z wrote:
> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>> Which would look like this:
>>> 
>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs) on 
>>> the card
>>> 	      \--------------> IEEE-1394a
>>> 
>>> I am actually wondering if this 07:00.0 device is the one that
>>> reports itself as 08:00.0 (which I think is what you were alluding
>>> to, Jan)
>>> 
>> 
>> And to double check that theory I decided to pass in the IEEE-1394a to 
>> a guest:
>> 
>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>> 
>> 
>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0] fault
>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)     
>> root_entry
>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)     
>> context
>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)     ctxt_entry[0]
>> not present
>> 
>> So, capture card OK - Likely the Tundra bridge has an issue:
>> 
>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>> (prog-if 01 [Subtractive decode])
>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz-
>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+
>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07, secondary=08,
>>         subordinate=08, sec-latency=32 Memory behind bridge:
>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR- 
>> BridgeCtl:
>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
>>         Capabilities: [60] Subsystem: Super Micro Computer Inc Device 
>> 0805
>>         Capabilities: [a0] Power Management version 3
>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst+
>>                 PME-Enable- DSel=0 DScale=0 PME-
>> 
>> or there is some unknown bridge in the motherboard.
> 
> According to your description above, upstream Linux should also have
> the same problem. Did you see it with upstream Linux?

The problem I was seeing with LSI cards (phantom device doing DMA)
does, indeed, also occur in upstream Linux. If I enable intel-iommu on
bare metal Linux, the same problem occurs as with Xen.

> There may be some buggy devices that generate DMA requests with an
> internal BDF but do not expose it (unlike a phantom device). For those
> devices, I think we need to set up the VT-d page table manually.

I think what is needed is a pci-phantom style override that instructs
the hypervisor to program the IOMMU to allow DMA traffic from a
specific invisible device ID.
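
For comparison, Xen already has a pci-phantom boot option for phantom
functions of a *visible* device; what I have in mind is a similar knob
keyed on a BDF that never shows up in config space at all. Roughly (the
existing option's syntax is from memory of the command-line docs; the
second option's name and syntax are entirely made up):

```shell
# Existing Xen boot option: give the phantom functions (every
# <stride>'th function number) of a visible device the same IOMMU
# context as the device itself:
#   pci-phantom=[<seg>:]<bus>:<device>,<stride>
pci-phantom=07:00,1

# Hypothetical override discussed here: permit DMA from an
# otherwise-invisible requester ID by aliasing it to a visible one
# (made-up name and syntax):
pci-dma-alias=08:00.0,07:00.0
```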

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:38:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0U3Y-00048F-OW; Tue, 07 Jan 2014 10:38:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0U3X-000484-8x
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:38:43 +0000
Received: from [85.158.143.35:8598] by server-3.bemta-4.messagelabs.com id
	57/F3-32360-239DBC25; Tue, 07 Jan 2014 10:38:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389091120!8814488!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31729 invoked from network); 7 Jan 2014 10:38:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 10:38:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88220830"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 10:38:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 05:38:40 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W0U3T-0002K8-KT;
	Tue, 07 Jan 2014 10:38:39 +0000
Message-ID: <52CBD92F.3050301@citrix.com>
Date: Tue, 7 Jan 2014 10:38:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Gordan Bobic <gordan@bobich.net>
References: <52308E1402000078000F2748@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>
	<20131211183233.GA2760@phenom.dumpdata.com>
	<52A8D5E5.2030902@bobich.net>
	<20131211213025.GA8283@phenom.dumpdata.com>
	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
	<1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
In-Reply-To: <1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 10:35, Gordan Bobic wrote:
> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>>> Which would look like this:
>>>>
>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>>>> on the card
>>>>           \--------------> IEEE-1394a
>>>>
>>>> I am actually wondering if this 07:00.0 device is the one that
>>>> reports itself as 08:00.0 (which I think is what you were alluding
>>>> to, Jan)
>>>>
>>>
>>> And to double check that theory I decided to pass in the IEEE-1394a
>>> to a guest:
>>>
>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>>>
>>>
>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0] fault
>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)     root_entry
>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)    
>>> context
>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)     ctxt_entry[0]
>>> not present
>>>
>>> So, capture card OK - Likely the Tundra bridge has an issue:
>>>
>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>>> (prog-if 01 [Subtractive decode])
>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz-
>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+
>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07, secondary=08,
>>>         subordinate=08, sec-latency=32 Memory behind bridge:
>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>>> BridgeCtl:
>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>>> Device 0805
>>>         Capabilities: [a0] Power Management version 3
>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst+
>>>                 PME-Enable- DSel=0 DScale=0 PME-
>>>
>>> or there is some unknown bridge in the motherboard.
>>
>> According your description above, the upstream Linux should also have
>> the same problem. Did you see it with upstream Linux?
>
> The problem I was seeing with LSI cards (phantom device doing DMA)
> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> bare metal Linux, the same problem occurs as with Xen.
>
>> There may be some buggy device that generate DMA request with internal
>> BDF but it didn't expose it(not like Phantom device). For those
>> devices, I think we need to setup the VT-d page table manually.
>
> I think what is needed is a pci-phantom style override that tells the
> hypervisor to tell the IOMMU to allow DMA traffic from a specific
> invisible device ID.
>
> Gordan

There is.  See "pci-phantom" in
http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
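
For reference, per that documentation the option is a hypervisor boot parameter of the form `pci-phantom=[<seg>:]<bus>:<device>,<stride>`; the device and stride values below are illustrative only, not taken from this thread:

```shell
# Illustrative GRUB entry (a sketch, not a tested configuration).
# Documented syntax: pci-phantom=[<seg>:]<bus>:<device>,<stride>
# Example: declare phantom functions for device 07:00 with stride 1.
multiboot /boot/xen.gz dom0_mem=2048M pci-phantom=07:00,1
```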

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:44:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0U9B-0004YZ-Hi; Tue, 07 Jan 2014 10:44:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W0U99-0004YU-RY
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:44:32 +0000
Received: from [85.158.143.35:59868] by server-3.bemta-4.messagelabs.com id
	FF/7F-32360-F8ADBC25; Tue, 07 Jan 2014 10:44:31 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389091470!10114905!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4174 invoked from network); 7 Jan 2014 10:44:30 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 10:44:30 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id 81A442202F2;
	Tue,  7 Jan 2014 10:44:29 +0000 (GMT)
MIME-Version: 1.0
Date: Tue, 07 Jan 2014 10:44:29 +0000
From: Gordan Bobic <gordan@bobich.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <52CBD92F.3050301@citrix.com>
References: <52308E1402000078000F2748@nat28.tlf.novell.com>	<A9667DDFB95DB7438FA9D7D576C3D87E0A91943F@SHSMSX104.ccr.corp.intel.com>	<20131211183233.GA2760@phenom.dumpdata.com>	<52A8D5E5.2030902@bobich.net>	<20131211213025.GA8283@phenom.dumpdata.com>	<52AAF9D7020000780010CF2C@nat28.tlf.novell.com>	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
	<1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
	<52CBD92F.3050301@citrix.com>
Message-ID: <7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-07 10:38, Andrew Cooper wrote:
> On 07/01/14 10:35, Gordan Bobic wrote:
>> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>>>> Which would look like this:
>>>>> 
>>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>>>>> on the card
>>>>>           \--------------> IEEE-1394a
>>>>> 
>>>>> I am actually wondering if this 07:00.0 device is the one that
>>>>> reports itself as 08:00.0 (which I think is what you alluding to
>>>>> Jan)
>>>>> 
>>>> 
>>>> And to double check that theory I decided to pass in the IEEE-1394a
>>>> to a guest:
>>>> 
>>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>>>> 
>>>> 
>>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0] 
>>>> fault
>>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)     
>>>> root_entry
>>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
>>>> context
>>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)     
>>>> ctxt_entry[0]
>>>> not present
>>>> 
>>>> So, capture card OK - Likely the Tundra bridge has an issue:
>>>> 
>>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>>>> (prog-if 01 [Subtractive decode])
>>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
>>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 
>>>> 66MHz-
>>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+
>>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07, 
>>>> secondary=08,
>>>>         subordinate=08, sec-latency=32 Memory behind bridge:
>>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>>>> BridgeCtl:
>>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
>>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>>>> Device 0805
>>>>         Capabilities: [a0] Power Management version 3
>>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 
>>>> NoSoftRst+
>>>>                 PME-Enable- DSel=0 DScale=0 PME-
>>>> 
>>>> or there is some unknown bridge in the motherboard.
>>> 
>>> According your description above, the upstream Linux should also have
>>> the same problem. Did you see it with upstream Linux?
>> 
>> The problem I was seeing with LSI cards (phantom device doing DMA)
>> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
>> bare metal Linux, the same problem occurs as with Xen.
>> 
>>> There may be some buggy device that generate DMA request with 
>>> internal
>>> BDF but it didn't expose it(not like Phantom device). For those
>>> devices, I think we need to setup the VT-d page table manually.
>> 
>> I think what is needed is a pci-phantom style override that tells the
>> hypervisor to tell the IOMMU to allow DMA traffic from a specific
>> invisible device ID.
>> 
>> Gordan
> 
> There is.  See "pci-phantom" in
> http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html

I thought this was only applicable to phantom _functions_ (number after 
the
dot) rather than whole phantom _devices_. Is that not the case?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:45:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UA7-0004cD-WA; Tue, 07 Jan 2014 10:45:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W0UA6-0004bw-GV
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 10:45:30 +0000
Received: from [85.158.139.211:2589] by server-4.bemta-5.messagelabs.com id
	B0/23-26791-9CADBC25; Tue, 07 Jan 2014 10:45:29 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389091527!8234612!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13434 invoked from network); 7 Jan 2014 10:45:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 10:45:28 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07Aj5Q2024048
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 10:45:05 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s07Aj48s017266
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 10:45:04 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07Aj3hq017244; Tue, 7 Jan 2014 10:45:03 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 02:45:03 -0800
Message-ID: <52CBDAA3.2000403@oracle.com>
Date: Tue, 07 Jan 2014 18:44:51 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
In-Reply-To: <20131213164405.GA11305@phenom.dumpdata.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Bob Liu <lliubbo@gmail.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> They are several one line functions in tmem_xen.h which are useless, this patch
>> embeded them into tmem.c directly.
>> Also modify void *tmem in struct domain to struct client *tmem_client in order
>> to make things more straightforward.
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/domain.c        |    4 ++--
>>  xen/common/tmem.c          |   24 ++++++++++++------------
>>  xen/include/xen/sched.h    |    2 +-
>>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> 
> Keir, are you OK with this simple name change?
> 

Ping..

Thanks!
.. and Happy New Year
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:45:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UA7-0004cD-WA; Tue, 07 Jan 2014 10:45:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W0UA6-0004bw-GV
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 10:45:30 +0000
Received: from [85.158.139.211:2589] by server-4.bemta-5.messagelabs.com id
	B0/23-26791-9CADBC25; Tue, 07 Jan 2014 10:45:29 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389091527!8234612!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13434 invoked from network); 7 Jan 2014 10:45:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 10:45:28 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07Aj5Q2024048
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 10:45:05 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s07Aj48s017266
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 10:45:04 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07Aj3hq017244; Tue, 7 Jan 2014 10:45:03 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 02:45:03 -0800
Message-ID: <52CBDAA3.2000403@oracle.com>
Date: Tue, 07 Jan 2014 18:44:51 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
In-Reply-To: <20131213164405.GA11305@phenom.dumpdata.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Bob Liu <lliubbo@gmail.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> There are several one-line functions in tmem_xen.h which are useless; this
>> patch embeds them into tmem.c directly.
>> Also change void *tmem in struct domain to struct client *tmem_client in
>> order to make things more straightforward.
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/domain.c        |    4 ++--
>>  xen/common/tmem.c          |   24 ++++++++++++------------
>>  xen/include/xen/sched.h    |    2 +-
>>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> 
> Keir, are you OK with this simple name change?
> 

Ping..

Thanks!
.. and Happy New Year
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 10:57:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 10:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ULB-0005Ac-8H; Tue, 07 Jan 2014 10:56:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0ULA-0005AX-BS
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 10:56:56 +0000
Received: from [85.158.143.35:57441] by server-1.bemta-4.messagelabs.com id
	8C/C4-02132-77DDBC25; Tue, 07 Jan 2014 10:56:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389092214!8820690!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11051 invoked from network); 7 Jan 2014 10:56:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 10:56:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 10:56:54 +0000
Message-Id: <52CBEB820200007800111079@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 10:56:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1387816747-21470-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1387816747-21470-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] xen/spinlock: Improvements to check_lock()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.12.13 at 17:39, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/include/xen/spinlock.h
> +++ b/xen/include/xen/spinlock.h
> @@ -6,9 +6,14 @@
>  
>  #ifndef NDEBUG
>  struct lock_debug {
> -    int irq_safe; /* +1: IRQ-safe; 0: not IRQ-safe; -1: don't know yet */
> +    enum lock_state {
> +        /* leave 0 unused to help detect uninitialised spinlocks */

Why don't you give this an explicit name, and then check for that
one state instead of checking multiple other states? That'll also
simplify adding possible further states, and eliminate the need to
use initializers on the enumerator constants.

Jan

> +        lock_state_unknown = 1,
> +        lock_state_irqsafe = 2,
> +        lock_state_irqunsafe = 3
> +    } state;
>  };
> -#define _LOCK_DEBUG { -1 }
> +#define _LOCK_DEBUG { lock_state_unknown }
>  void spin_debug_enable(void);
>  void spin_debug_disable(void);
>  #else



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:07:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UUz-0005gi-ER; Tue, 07 Jan 2014 11:07:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W0UUx-0005gd-SX
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:07:04 +0000
Received: from [85.158.143.35:46385] by server-2.bemta-4.messagelabs.com id
	37/61-11386-7DFDBC25; Tue, 07 Jan 2014 11:07:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389092821!10036943!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12023 invoked from network); 7 Jan 2014 11:07:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 11:07:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88227067"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 11:07:01 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 06:07:00 -0500
Message-ID: <52CBDFD2.603@citrix.com>
Date: Tue, 7 Jan 2014 12:06:58 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-5-git-send-email-roger.pau@citrix.com>
	<20140103205949.GC2732@phenom.dumpdata.com>
In-Reply-To: <20140103205949.GC2732@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 04/19] amd64: introduce hook for custom
 preload metadata parsers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/01/14 21:59, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 02, 2014 at 04:43:38PM +0100, Roger Pau Monne wrote:
>> ---
>>  sys/amd64/amd64/machdep.c   |   41 ++++++++++++++++------
>>  sys/amd64/include/sysarch.h |   12 ++++++
>>  sys/x86/xen/pv.c            |   82 +++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 124 insertions(+), 11 deletions(-)
>>
>> diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
>> index eae657b..e073eea 100644
>> --- a/sys/amd64/amd64/machdep.c
>> +++ b/sys/amd64/amd64/machdep.c
>> @@ -126,6 +126,7 @@ __FBSDID("$FreeBSD$");
>>  #include <machine/reg.h>
>>  #include <machine/sigframe.h>
>>  #include <machine/specialreg.h>
>> +#include <machine/sysarch.h>
>>  #ifdef PERFMON
>>  #include <machine/perfmon.h>
>>  #endif
>> @@ -165,6 +166,14 @@ static int  set_fpcontext(struct thread *td, const mcontext_t *mcp,
>>      char *xfpustate, size_t xfpustate_len);
>>  SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
>>  
>> +/* Preload data parse function */
>> +static caddr_t native_parse_preload_data(u_int64_t);
>> +
>> +/* Default init_ops implementation. */
>> +struct init_ops init_ops = {
>> +	.parse_preload_data =	native_parse_preload_data,
> 
> Extra space there.

It's for alignment; it looks strange now because there's only one hook.

>> +};
>> +
>>  /*
>>   * The file "conf/ldscript.amd64" defines the symbol "kernphys".  Its value is
>>   * the physical address at which the kernel is loaded.
>> @@ -1683,6 +1692,26 @@ do_next:
>>  	msgbufp = (struct msgbuf *)PHYS_TO_DMAP(phys_avail[pa_indx]);
>>  }
>>  
>> +static caddr_t
>> +native_parse_preload_data(u_int64_t modulep)
>> +{
>> +	caddr_t kmdp;
>> +
>> +	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
> 
> Two casts? Could it be done via one?
> 
>> +	preload_bootstrap_relocate(KERNBASE);
>> +	kmdp = preload_search_by_type("elf kernel");
>> +	if (kmdp == NULL)
>> +		kmdp = preload_search_by_type("elf64 kernel");
>> +	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
>> +	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
>> +#ifdef DDB
>> +	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
>> +	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
>> +#endif
>> +
>> +	return (kmdp);
>> +}
>> +
>>  u_int64_t
>>  hammer_time(u_int64_t modulep, u_int64_t physfree)
>>  {
>> @@ -1707,17 +1736,7 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
>>  	 */
>>  	proc_linkup0(&proc0, &thread0);
>>  
>> -	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
> 
> Oh, you just moved the code - right, let's not modify it in this patch.

Yes, it's just code movement.

> 
>> -	preload_bootstrap_relocate(KERNBASE);
>> -	kmdp = preload_search_by_type("elf kernel");
>> -	if (kmdp == NULL)
>> -		kmdp = preload_search_by_type("elf64 kernel");
>> -	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
>> -	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
>> -#ifdef DDB
>> -	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
>> -	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
>> -#endif
>> +	kmdp = init_ops.parse_preload_data(modulep);
>>  
>>  	/* Init basic tunables, hz etc */
>>  	init_param1();
>> diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
>> index cd380d4..58ac8cd 100644
>> --- a/sys/amd64/include/sysarch.h
>> +++ b/sys/amd64/include/sysarch.h
>> @@ -4,3 +4,15 @@
>>  /* $FreeBSD$ */
>>  
>>  #include <x86/sysarch.h>
>> +
>> +/*
>> + * Struct containing pointers to init functions whose
>> + * implementation is run time selectable.  Selection can be made,
>> + * for example, based on detection of a BIOS variant or
>> + * hypervisor environment.
>> + */
>> +struct init_ops {
>> +	caddr_t	(*parse_preload_data)(u_int64_t);
>> +};
>> +
>> +extern struct init_ops init_ops;
>> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
>> index db3b7a3..908b50b 100644
>> --- a/sys/x86/xen/pv.c
>> +++ b/sys/x86/xen/pv.c
>> @@ -46,6 +46,8 @@ __FBSDID("$FreeBSD$");
>>  #include <vm/vm_pager.h>
>>  #include <vm/vm_param.h>
>>  
>> +#include <machine/sysarch.h>
>> +
>>  #include <xen/xen-os.h>
>>  #include <xen/hypervisor.h>
>>  
>> @@ -54,6 +56,36 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
>>  /* Xen initial function */
>>  extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
>>  
>> +/*--------------------------- Forward Declarations ---------------------------*/
>> +static caddr_t xen_pv_parse_preload_data(u_int64_t);
>> +
>> +static void xen_pv_set_init_ops(void);
>> +
>> +/*-------------------------------- Global Data -------------------------------*/
>> +/* Xen init_ops implementation. */
>> +struct init_ops xen_init_ops = {
>> +	.parse_preload_data =	xen_pv_parse_preload_data,
>> +};
>> +
>> +static struct
>> +{
>> +	const char	*ev;
>> +	int		mask;
>> +} howto_names[] = {
>> +	{"boot_askname",	RB_ASKNAME},
>> +	{"boot_single",		RB_SINGLE},
>> +	{"boot_nosync",		RB_NOSYNC},
>> +	{"boot_halt",		RB_HALT},
>> +	{"boot_serial",		RB_SERIAL},
>> +	{"boot_cdrom",		RB_CDROM},
>> +	{"boot_gdb",		RB_GDB},
>> +	{"boot_gdb_pause",	RB_RESERVED1},
>> +	{"boot_verbose",	RB_VERBOSE},
>> +	{"boot_multicons",	RB_MULTIPLE},
>> +	{NULL,	0}
>> +};
>> +
>> +/*-------------------------------- Xen PV init -------------------------------*/
>>  /*
>>   * First function called by the Xen PVH boot sequence.
>>   *
>> @@ -118,6 +150,56 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
>>  	}
>>  	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
>>  
>> +	/* Set the hooks for early functions that diverge from bare metal */
>> +	xen_pv_set_init_ops();
>> +
>>  	/* Now we can jump into the native init function */
>>  	return (hammer_time(0, physfree));
>>  }
>> +
>> +/*-------------------------------- PV specific -------------------------------*/
>> +/*
>> + * Functions to convert the "extra" parameters passed by Xen
>> + * into FreeBSD boot options (from the i386 Xen port).
>> + */
>> +static char *
>> +xen_setbootenv(char *cmd_line)
>> +{
>> +	char *cmd_line_next;
>> +
>> +        /* Skip leading spaces */
>> +        for (; *cmd_line == ' '; cmd_line++);
> 
> Spaces?
> 
>> +
>> +	for (cmd_line_next = cmd_line; strsep(&cmd_line_next, ",") != NULL;);
>> +	return (cmd_line);
>> +}
>> +
>> +static int
>> +xen_boothowto(char *envp)
>> +{
>> +	int i, howto = 0;
>> +
>> +	/* get equivalents from the environment */
>> +	for (i = 0; howto_names[i].ev != NULL; i++)
>> +		if (getenv(howto_names[i].ev) != NULL)
>> +			howto |= howto_names[i].mask;
> 
> You don't believe in '{}' do you ? :-)

All this code has also been taken from the FreeBSD Xen i386 PV port, but
maybe some refactoring wouldn't be bad :).

>> +	return (howto);
>> +}
>> +
>> +static caddr_t
>> +xen_pv_parse_preload_data(u_int64_t modulep)
>> +{
>> +	/* Parse the extra boot information given by Xen */
>> +	if (HYPERVISOR_start_info->cmd_line)
>> +		kern_envp = xen_setbootenv(HYPERVISOR_start_info->cmd_line);
>> +	boothowto |= xen_boothowto(kern_envp);
>> +
>> +	return (NULL);
>> +}
>> +
>> +static void
>> +xen_pv_set_init_ops(void)
>> +{
>> +	/* Init ops for Xen PV */
>> +	init_ops = xen_init_ops;
>> +}
>> -- 
>> 1.7.7.5 (Apple Git-26)
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:23:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UkW-0006Xz-4U; Tue, 07 Jan 2014 11:23:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0UkR-0006Xr-Lo
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 11:23:05 +0000
Received: from [85.158.139.211:22136] by server-9.bemta-5.messagelabs.com id
	DC/38-15098-693EBC25; Tue, 07 Jan 2014 11:23:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389093780!7057980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23194 invoked from network); 7 Jan 2014 11:23:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 11:23:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88230669"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 11:23:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	06:22:59 -0500
Message-ID: <1389093778.31766.131.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 7 Jan 2014 11:22:58 +0000
In-Reply-To: <20140106175713.GB28636@phenom.dumpdata.com>
References: <20140106175713.GB28636@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] xend vs xl with pci=['<bdf'] wherein the '<bdf>'
 are not owned by pciback or pcistub will still launch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
thanks

On Mon, 2014-01-06 at 12:57 -0500, Konrad Rzeszutek Wilk wrote:
> In Xend, if I had a pci entry in the guest config and the 
> PCI device was not assigned to xen-pciback or pci-stub it
> would refuse to launch the guest.
> 
> Not so with 'xl'. It will complain but still launch:

It looks like domcreate_attach_pci() is ignoring the result of
libxl__device_pci_add(). It appears to have always done so.

I suppose there is an argument that there are use cases where starting
the domain even without the full set of devices is better than not
starting it at all, but I think I agree that the default should be to
fail if some devices are not available.

Is this a blocker for you for 4.4 or can it wait for 4.5?

> 
> -bash-4.1# cd drivers/pciback/
> -bash-4.1# ls
> 0000:01:00.0  0000:03:08.1  0000:03:0a.0  0000:03:0b.1       irq_handlers  new_slot    remove_id    uevent
> 0000:01:00.1  0000:03:09.0  0000:03:0a.1  bind               module        permissive  remove_slot  unbind
> 0000:03:08.0  0000:03:09.1  0000:03:0b.0  irq_handler_state  new_id        quirks      slots
> -bash-4.1# echo "0000:03:0b.0" > unbind
> -bash-4.1# echo "0000:03:0b.1" > unbind
> -bash-4.1# xl create /mnt/lab/security/security.cfg  
> Parsing config from /mnt/lab/security/security.cfg
> libxl: error: libxl_pci.c:1055:libxl__device_pci_add: PCI device 0:3:b.0 is not assignable
> libxl: error: libxl_pci.c:1055:libxl__device_pci_add: PCI device 0:3:b.1 is not assignable
> -bash-4.1# xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  2047     4     r-----      14.7
> security                                     1  1023     1     -b----       8.0
> -bash-4.1# 
> -bash-4.1# cat /mnt/lab/security/security.cfg |grep -v \#
> device_model_version="qemu-xen-traditional"
> builder="hvm"
> memory = 1024
> name = "security"
> vcpus=1
> vif = [ 'mac=00:0F:4B:00:00:84,bridge=switch' ]
> disk = [ 'phy:/dev/sda,xvda,w' ]
> pci= ['0000:03:08.0', '000:03:08.1', '0000:03:09.0', '0000:03:09.1', '0000:03:0a.0', '0000:03:0a.1', '0000:03:0b.0', '0000:03:0b.1']
> vnc=1
> vnclisten='0.0.0.0'
> vncunused=1
> serial="pty"
> 
> 
> And naturally when shutting/destroying the guest it will say:
> -bash-4.1# xl destroy 1
> libxl: error: libxl_pci.c:1265:do_pci_remove: xc_deassign_device failed: No such device
> libxl: error: libxl_pci.c:1265:do_pci_remove: xc_deassign_device failed: No such device
> 
> (XEN) [2014-01-06 17:54:39] deassign 0000:03:0b.0 from dom1 failed (-19)
> (XEN) [2014-01-06 17:54:39] deassign 0000:03:0b.1 from dom1 failed (-19)
> 
> because it tries to de-allocate them even though they were not
> part of the guest.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:26:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UnX-0006eb-NU; Tue, 07 Jan 2014 11:26:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0UnW-0006eU-8r
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:26:14 +0000
Received: from [85.158.139.211:12680] by server-16.bemta-5.messagelabs.com id
	D8/10-11843-554EBC25; Tue, 07 Jan 2014 11:26:13 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389093971!8291399!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18837 invoked from network); 7 Jan 2014 11:26:12 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 7 Jan 2014 11:26:12 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0UrC-0008I4-8x; Tue, 07 Jan 2014 11:30:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389094202.31870@bugs.xenproject.org>
References: <20140106175713.GB28636@phenom.dumpdata.com>
	<1389093778.31766.131.camel@kazak.uk.xensource.com>
In-Reply-To: <1389093778.31766.131.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 07 Jan 2014 11:30:02 +0000
Subject: [Xen-devel] Processed: Re: xend vs xl with pci=['<bdf'] wherein the
 '<bdf>' are not owned by pciback or pcistub will still launch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #27 rooted at `<20140106175713.GB28636@phenom.dumpdata.com>'
Title: `Re: [Xen-devel] xend vs xl with pci=['<bdf'] wherein the '<bdf>' are not owned by pciback or pcistub will still launch.'
> thanks
Finished processing.

Modified/created Bugs:
 - 27: http://bugs.xenproject.org/xen/bug/27 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:26:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Unr-0006go-3r; Tue, 07 Jan 2014 11:26:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W0Unp-0006gY-Lc
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:26:33 +0000
Received: from [85.158.143.35:49727] by server-2.bemta-4.messagelabs.com id
	1B/B7-11386-864EBC25; Tue, 07 Jan 2014 11:26:32 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389093991!10135497!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9789 invoked from network); 7 Jan 2014 11:26:31 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-21.messagelabs.com with SMTP;
	7 Jan 2014 11:26:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 03:26:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,618,1384329600"; d="scan'208";a="454794521"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga001.fm.intel.com with ESMTP; 07 Jan 2014 03:26:29 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 03:26:29 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX152.ccr.corp.intel.com ([10.239.6.52]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 19:26:28 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Gordan Bobic <gordan@bobich.net>, Andrew Cooper <andrew.cooper3@citrix.com>
Thread-Topic: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
	issues with LSI MegaSAS (PERC5i))
Thread-Index: Ac8Lm0MTQlpT0EfkSMy7HDEYvjrxew==
Date: Tue, 7 Jan 2014 11:26:27 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> Sent: Tuesday, January 07, 2014 6:44 PM
> To: Andrew Cooper
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
> issues with LSI MegaSAS (PERC5i))
> 
> On 2014-01-07 10:38, Andrew Cooper wrote:
> > On 07/01/14 10:35, Gordan Bobic wrote:
> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
> >>>>> Which would look like this:
> >>>>>
> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
> >>>>> on the card
> >>>>>           \--------------> IEEE-1394a
> >>>>>
> >>>>> I am actually wondering if this 07:00.0 device is the one that
> >>>>> reports itself as 08:00.0 (which I think is what you alluding to
> >>>>> Jan)
> >>>>>
> >>>>
> >>>> And to double check that theory I decided to pass in the IEEE-1394a
> >>>> to a guest:
> >>>>
> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> >>>>
> >>>>
> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
> >>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
> >>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
> >>>> fault
> >>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
> >>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
> >>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
> >>>> root_entry
> >>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
> >>>> context
> >>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
> >>>> ctxt_entry[0]
> >>>> not present
> >>>>
> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
> >>>>
> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
> >>>> (prog-if 01 [Subtractive decode])
> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
> VGASnoop-
> >>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+
> >>>> 66MHz-
> >>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
> <MAbort+
> >>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
> >>>> secondary=08,
> >>>>         subordinate=08, sec-latency=32 Memory behind bridge:
> >>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
> >>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
> >>>> BridgeCtl:
> >>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat-
> DiscTmrSERREn-
> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
> >>>> Device 0805
> >>>>         Capabilities: [a0] Power Management version 3
> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
> >>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
> >>>> NoSoftRst+
> >>>>                 PME-Enable- DSel=0 DScale=0 PME-
> >>>>
> >>>> or there is some unknown bridge in the motherboard.
> >>>
> >>> According your description above, the upstream Linux should also have
> >>> the same problem. Did you see it with upstream Linux?
> >>
> >> The problem I was seeing with LSI cards (phantom device doing DMA)
> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> >> bare metal Linux, the same problem occurs as with Xen.
> >>
> >>> There may be some buggy device that generate DMA request with
> >>> internal
> >>> BDF but it didn't expose it(not like Phantom device). For those
> >>> devices, I think we need to setup the VT-d page table manually.
> >>
> >> I think what is needed is a pci-phantom style override that tells the
> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
> >> invisible device ID.
> >>
> >> Gordan
> >
> > There is.  See "pci-phantom" in
> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> 
> I thought this was only applicable to phantom _functions_ (number after
> the
> dot) rather than whole phantom _devices_. Is that not the case?

I think that's right. I just went through the related code for the PCI phantom device and found that
the information from the 'pci-phantom' command line option is stored in the array phantom_devs[8],
whose element type is struct phantom_dev {}. This array is used in alloc_pdev() as follows:


                for ( i = 0; i < nr_phantom_devs; ++i )
                    if ( phantom_devs[i].seg == pseg->nr &&
                         phantom_devs[i].bus == bus &&
                         phantom_devs[i].slot == PCI_SLOT(devfn) &&
                         phantom_devs[i].stride > PCI_FUNC(devfn) )
                    {
                        pdev->phantom_stride = phantom_devs[i].stride;
                        break;
                    }

So from the code we can see that this command line option only works for phantom _functions_, not for whole phantom _devices_.


> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Thanks,
Feng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:26:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Unr-0006go-3r; Tue, 07 Jan 2014 11:26:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W0Unp-0006gY-Lc
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:26:33 +0000
Received: from [85.158.143.35:49727] by server-2.bemta-4.messagelabs.com id
	1B/B7-11386-864EBC25; Tue, 07 Jan 2014 11:26:32 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389093991!10135497!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9789 invoked from network); 7 Jan 2014 11:26:31 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-5.tower-21.messagelabs.com with SMTP;
	7 Jan 2014 11:26:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 03:26:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,618,1384329600"; d="scan'208";a="454794521"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga001.fm.intel.com with ESMTP; 07 Jan 2014 03:26:29 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 03:26:29 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX152.ccr.corp.intel.com ([10.239.6.52]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 19:26:28 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Gordan Bobic <gordan@bobich.net>, Andrew Cooper <andrew.cooper3@citrix.com>
Thread-Topic: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
	issues with LSI MegaSAS (PERC5i))
Thread-Index: Ac8Lm0MTQlpT0EfkSMy7HDEYvjrxew==
Date: Tue, 7 Jan 2014 11:26:27 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> Sent: Tuesday, January 07, 2014 6:44 PM
> To: Andrew Cooper
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
> issues with LSI MegaSAS (PERC5i))
> 
> On 2014-01-07 10:38, Andrew Cooper wrote:
> > On 07/01/14 10:35, Gordan Bobic wrote:
> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
> >>>>> Which would look like this:
> >>>>>
> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
> >>>>> on the card
> >>>>>           \--------------> IEEE-1394a
> >>>>>
> >>>>> I am actually wondering if this 07:00.0 device is the one that
> >>>>> reports itself as 08:00.0 (which I think is what you alluding to
> >>>>> Jan)
> >>>>>
> >>>>
> >>>> And to double check that theory I decided to pass in the IEEE-1394a
> >>>> to a guest:
> >>>>
> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> >>>>
> >>>>
> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
> >>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
> >>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
> >>>> fault
> >>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
> >>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
> >>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
> >>>> root_entry
> >>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
> >>>> context
> >>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
> >>>> ctxt_entry[0]
> >>>> not present
> >>>>
> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
> >>>>
> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
> >>>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
> >>>>         Latency: 0
> >>>>         Bus: primary=07, secondary=08, subordinate=08, sec-latency=32
> >>>>         Memory behind bridge: f0600000-f06fffff
> >>>>         Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
> >>>>         BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
> >>>>         Capabilities: [a0] Power Management version 3
> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
> >>>>                 Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
> >>>>
> >>>> or there is some unknown bridge in the motherboard.
> >>>
> >>> According to your description above, upstream Linux should also have
> >>> the same problem. Did you see it with upstream Linux?
> >>
> >> The problem I was seeing with LSI cards (phantom device doing DMA)
> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> >> bare metal Linux, the same problem occurs as with Xen.
> >>
> >>> There may be some buggy devices that generate DMA requests with an
> >>> internal BDF but don't expose it (unlike phantom devices). For those
> >>> devices, I think we need to set up the VT-d page table manually.
> >>
> >> I think what is needed is a pci-phantom style override that tells the
> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
> >> invisible device ID.
> >>
> >> Gordan
> >
> > There is.  See "pci-phantom" in
> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> 
> I thought this was only applicable to phantom _functions_ (number after
> the
> dot) rather than whole phantom _devices_. Is that not the case?

I think that's right. I went through the related code for the pci phantom device handling just now, and found that
the information from the 'pci-phantom' command line option is stored in the variable 'phantom_devs[8]',
of type 'struct phantom_dev'. This variable is used in the function alloc_pdev() as follows:


                for ( i = 0; i < nr_phantom_devs; ++i )
                    if ( phantom_devs[i].seg == pseg->nr &&
                         phantom_devs[i].bus == bus &&
                         phantom_devs[i].slot == PCI_SLOT(devfn) &&
                         phantom_devs[i].stride > PCI_FUNC(devfn) )
                    {
                        pdev->phantom_stride = phantom_devs[i].stride;
                        break;
                    }

So from the code, we can see that this command line option only works for phantom _functions_, not for whole phantom _devices_.


> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Thanks,
Feng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:31:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UsS-0007JJ-U9; Tue, 07 Jan 2014 11:31:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0UsQ-0007JE-Pr
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 11:31:18 +0000
Received: from [85.158.139.211:20307] by server-4.bemta-5.messagelabs.com id
	CF/9A-26791-585EBC25; Tue, 07 Jan 2014 11:31:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389094275!8290582!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31861 invoked from network); 7 Jan 2014 11:31:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 11:31:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88232572"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 11:31:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	06:31:14 -0500
Message-ID: <1389094274.31766.133.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Jan 2014 11:31:14 +0000
In-Reply-To: <52CBC7900200007800110EF7@nat28.tlf.novell.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<1387624194.1025.70.camel@dagon.hellion.org.uk>
	<52B8046302000078000A9C8C@nat28.tlf.novell.com>
	<52B8235D02000078000A9CB1@nat28.tlf.novell.com>
	<20131224015650.GA2191@pegasus.dumpdata.com>
	<52CBC7900200007800110EF7@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 08:23 +0000, Jan Beulich wrote:
> >> that feature statically in the first place - that should be done only if the 
> > kernel
> >> could _only_ boot in PVH mode.
> > 
> > The feature is not marked as "required" but rather - it can utilize said
> > extension (so supported). I am advocating that the caller checks that
> > all of the required pieces are correct - and it can ignore the ones it
> > has no idea of (which it does for some of the Xen ELF notes - ignores
> > them if it has no idea of what they are).
> 
> What would be the point of telling the hypervisor that the kernel
> can utilize a certain extension? The kernel could just utilize it, and
> the hypervisor would know by that simple fact.

But for PVH doesn't the hypervisor need to know at dom0 build time
whether to build a PV or PVH domain? So it needs to know upfront
whether the kernel can do PVH or not, and then pick; but once it has
picked, the kernel had best follow that choice.

So in the PVH case it's not just a simple case of the kernel deciding to
utilize an optional feature, the optional feature has already been
enabled.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0UuR-0007Pc-Ef; Tue, 07 Jan 2014 11:33:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0UuQ-0007PV-9r
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:33:22 +0000
Received: from [85.158.139.211:59037] by server-9.bemta-5.messagelabs.com id
	2E/DF-15098-106EBC25; Tue, 07 Jan 2014 11:33:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389094400!8081864!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26950 invoked from network); 7 Jan 2014 11:33:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 11:33:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 11:33:20 +0000
Message-Id: <52CBF40D02000078001110CD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 11:33:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1388156253-14509-1-git-send-email-andrew.cooper3@citrix.com>
	<1388156253-14509-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1388156253-14509-2-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] common/sysctl: Don't leak status in
 SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.12.13 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Also fix the indentation of the arguments to copy_to_guest() to help clarify
> that the 'ret = -EFAULT' is not part of the condition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> ---
>  xen/common/sysctl.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 117e095..cd6184a 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -230,15 +230,13 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) 
> u_sysctl)
>          }
>  
>          if ( copy_to_guest(
> -            op->u.page_offline.status, status,
> -            op->u.page_offline.end - op->u.page_offline.start + 1) )
> -        {
> +                 op->u.page_offline.status, status,
> +                 op->u.page_offline.end - op->u.page_offline.start + 1) )
>              ret = -EFAULT;
> -            break;
> -        }
> +        else
> +            copyback = 0;
>  
>          xfree(status);
> -        copyback = 0;

This is wrong (and not covered by the title or description) - there's
nothing to copy back here (apart from "status"), so this should
remain unconditional.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:35:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Uvz-0007X8-UT; Tue, 07 Jan 2014 11:34:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0Uvy-0007X0-1z
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:34:58 +0000
Received: from [85.158.137.68:40506] by server-9.bemta-3.messagelabs.com id
	3B/AD-13104-166EBC25; Tue, 07 Jan 2014 11:34:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389094494!4011725!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5364 invoked from network); 7 Jan 2014 11:34:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 11:34:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88233271"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 11:34:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 06:34:54 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W0Uvt-0003Ro-Qf;
	Tue, 07 Jan 2014 11:34:53 +0000
Message-ID: <52CBE65D.90803@citrix.com>
Date: Tue, 7 Jan 2014 11:34:53 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1388156253-14509-1-git-send-email-andrew.cooper3@citrix.com>
	<1388156253-14509-2-git-send-email-andrew.cooper3@citrix.com>
	<52CBF40D02000078001110CD@nat28.tlf.novell.com>
In-Reply-To: <52CBF40D02000078001110CD@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] common/sysctl: Don't leak status in
	SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 11:33, Jan Beulich wrote:
>>>> On 27.12.13 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Also fix the indentation of the arguments to copy_to_guest() to help clarify
>> that the 'ret = -EFAULT' is not part of the condition.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> ---
>>  xen/common/sysctl.c |   10 ++++------
>>  1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>> index 117e095..cd6184a 100644
>> --- a/xen/common/sysctl.c
>> +++ b/xen/common/sysctl.c
>> @@ -230,15 +230,13 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) 
>> u_sysctl)
>>          }
>>  
>>          if ( copy_to_guest(
>> -            op->u.page_offline.status, status,
>> -            op->u.page_offline.end - op->u.page_offline.start + 1) )
>> -        {
>> +                 op->u.page_offline.status, status,
>> +                 op->u.page_offline.end - op->u.page_offline.start + 1) )
>>              ret = -EFAULT;
>> -            break;
>> -        }
>> +        else
>> +            copyback = 0;
>>  
>>          xfree(status);
>> -        copyback = 0;
> This is wrong (and not covered by the title or description) - there's
> nothing to copy back here (apart from "status"), so this should
> remain unconditional.
>
> Jan
>

There is a 'break' removed from the if statement, so there is no change
to the conditions during which copyback gets set.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:35:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Uwa-0007bm-Ba; Tue, 07 Jan 2014 11:35:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W0UwY-0007bS-KT
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:35:34 +0000
Received: from [85.158.137.68:47998] by server-9.bemta-3.messagelabs.com id
	51/CE-13104-586EBC25; Tue, 07 Jan 2014 11:35:33 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389094532!7629918!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31829 invoked from network); 7 Jan 2014 11:35:32 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 11:35:32 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id D8A602202F2;
	Tue,  7 Jan 2014 11:35:30 +0000 (GMT)
MIME-Version: 1.0
Date: Tue, 07 Jan 2014 11:35:30 +0000
From: Gordan Bobic <gordan@bobich.net>
To: "Wu, Feng" <feng.wu@intel.com>
In-Reply-To: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
Message-ID: <6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-07 11:26, Wu, Feng wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xen.org
>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>> Sent: Tuesday, January 07, 2014 6:44 PM
>> To: Andrew Cooper
>> Cc: xen-devel@lists.xen.org
>> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: 
>> iommuu/vt-d
>> issues with LSI MegaSAS (PERC5i))
>> 
>> On 2014-01-07 10:38, Andrew Cooper wrote:
>> > On 07/01/14 10:35, Gordan Bobic wrote:
>> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>> >>>>> Which would look like this:
>> >>>>>
>> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>> >>>>> on the card
>> >>>>>           \--------------> IEEE-1394a
>> >>>>>
>> >>>>> I am actually wondering if this 07:00.0 device is the one that
>> >>>>> reports itself as 08:00.0 (which I think is what you were alluding
>> >>>>> to, Jan)
>> >>>>>
>> >>>>
>> >>>> And to double check that theory I decided to pass in the IEEE-1394a
>> >>>> to a guest:
>> >>>>
>> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>> >>>>
>> >>>>
>> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>> >>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>> >>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
>> >>>> fault
>> >>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>> >>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>> >>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
>> >>>> root_entry
>> >>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
>> >>>> context
>> >>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
>> >>>> ctxt_entry[0]
>> >>>> not present
>> >>>>
>> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
>> >>>>
>> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>> >>>> (prog-if 01 [Subtractive decode])
>> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
>> VGASnoop-
>> >>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+
>> >>>> 66MHz-
>> >>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
>> <MAbort+
>> >>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
>> >>>> secondary=08,
>> >>>>         subordinate=08, sec-latency=32 Memory behind bridge:
>> >>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>> >>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>> >>>> BridgeCtl:
>> >>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat-
>> DiscTmrSERREn-
>> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>> >>>> Device 0805
>> >>>>         Capabilities: [a0] Power Management version 3
>> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>> >>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
>> >>>> NoSoftRst+
>> >>>>                 PME-Enable- DSel=0 DScale=0 PME-
>> >>>>
>> >>>> or there is some unknown bridge in the motherboard.
>> >>>
>> >>> According to your description above, upstream Linux should also have
>> >>> the same problem. Did you see it with upstream Linux?
>> >>
>> >> The problem I was seeing with LSI cards (phantom device doing DMA)
>> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
>> >> bare metal Linux, the same problem occurs as with Xen.
>> >>
>> >>> There may be some buggy device that generate DMA request with
>> >>> internal
>> >>> BDF but it didn't expose it(not like Phantom device). For those
>> >>> devices, I think we need to setup the VT-d page table manually.
>> >>
>> >> I think what is needed is a pci-phantom style override that tells the
>> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
>> >> invisible device ID.
>> >>
>> >> Gordan
>> >
>> > There is.  See "pci-phantom" in
>> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
>> 
>> I thought this was only applicable to phantom _functions_ (number 
>> after
>> the
>> dot) rather than whole phantom _devices_. Is that not the case?
> 
> I think that's right. I went through the related code for the PCI
> phantom device just now, and I found that the information from the
> 'pci-phantom' command line option is stored in the variable
> 'phantom_devs[8]' of type 'struct phantom_dev {}'. This variable is
> used in the function alloc_pdev() as follows:
> 
> 
>                 for ( i = 0; i < nr_phantom_devs; ++i )
>                     if ( phantom_devs[i].seg == pseg->nr &&
>                          phantom_devs[i].bus == bus &&
>                          phantom_devs[i].slot == PCI_SLOT(devfn) &&
>                          phantom_devs[i].stride > PCI_FUNC(devfn) )
>                     {
>                         pdev->phantom_stride = phantom_devs[i].stride;
>                         break;
>                     }
> 
> So from the code, we can see this command line option only works for
> phantom _functions_, not for whole phantom _devices_.

What would it take to make it work for a whole phantom device?

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:48:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0V8l-0008Fl-3I; Tue, 07 Jan 2014 11:48:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0V8j-0008Fg-OV
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:48:09 +0000
Received: from [85.158.143.35:16937] by server-2.bemta-4.messagelabs.com id
	F7/81-11386-879EBC25; Tue, 07 Jan 2014 11:48:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389095288!9975176!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27500 invoked from network); 7 Jan 2014 11:48:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 11:48:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 11:48:07 +0000
Message-Id: <52CBF78402000078001110E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 11:48:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1388156253-14509-1-git-send-email-andrew.cooper3@citrix.com>
	<1388156253-14509-2-git-send-email-andrew.cooper3@citrix.com>
	<52CBF40D02000078001110CD@nat28.tlf.novell.com>
	<52CBE65D.90803@citrix.com>
In-Reply-To: <52CBE65D.90803@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/4] common/sysctl: Don't leak status in
 SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 12:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 07/01/14 11:33, Jan Beulich wrote:
>>>>> On 27.12.13 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> Also fix the indentation of the arguments to copy_to_guest() to help clarify
>>> that the 'ret = -EFAULT' is not part of the condition.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Keir Fraser <keir@xen.org>
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> ---
>>>  xen/common/sysctl.c |   10 ++++------
>>>  1 file changed, 4 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>> index 117e095..cd6184a 100644
>>> --- a/xen/common/sysctl.c
>>> +++ b/xen/common/sysctl.c
>>> @@ -230,15 +230,13 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) 
>>> u_sysctl)
>>>          }
>>>  
>>>          if ( copy_to_guest(
>>> -            op->u.page_offline.status, status,
>>> -            op->u.page_offline.end - op->u.page_offline.start + 1) )
>>> -        {
>>> +                 op->u.page_offline.status, status,
>>> +                 op->u.page_offline.end - op->u.page_offline.start + 1) )
>>>              ret = -EFAULT;
>>> -            break;
>>> -        }
>>> +        else
>>> +            copyback = 0;
>>>  
>>>          xfree(status);
>>> -        copyback = 0;
>> This is wrong (and not covered by the title or description) - there's
>> nothing to copy back here (apart from "status"), so this should
>> remain unconditional.
> 
> There is a 'break' removed from the if statement, so there is no change
> to the conditions during which copyback gets set.

Ah, true. But nevertheless, the code would be more correct if
the clearing of copyback were left where it was.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VBE-0000Fb-Lb; Tue, 07 Jan 2014 11:50:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VBD-0000FS-CS
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 11:50:43 +0000
Received: from [85.158.137.68:31436] by server-12.bemta-3.messagelabs.com id
	0E/77-20055-21AEBC25; Tue, 07 Jan 2014 11:50:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389095441!7669220!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9564 invoked from network); 7 Jan 2014 11:50:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 11:50:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 11:50:40 +0000
Message-Id: <52CBF81D02000078001110EF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 11:50:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<1387624194.1025.70.camel@dagon.hellion.org.uk>
	<52B8046302000078000A9C8C@nat28.tlf.novell.com>
	<52B8235D02000078000A9CB1@nat28.tlf.novell.com>
	<20131224015650.GA2191@pegasus.dumpdata.com>
	<52CBC7900200007800110EF7@nat28.tlf.novell.com>
	<1389094274.31766.133.camel@kazak.uk.xensource.com>
In-Reply-To: <1389094274.31766.133.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 12:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-07 at 08:23 +0000, Jan Beulich wrote:
>> >> that feature statically in the first place - that should be done only if 
> the 
>> > kernel
>> >> could _only_ boot in PVH mode.
>> > 
>> > The feature is not marked as "required" but rather - it can utilize said
>> > extension (so supported). I am advocating that the caller checks that
>> > all of the required pieces are correct - and it can ignore the ones it
>> > has no idea of (which it does for some of the Xen ELF notes - ignores
>> > them if it has no idea of what they are).
>> 
>> What would be the point of telling the hypervisor that the kernel
>> can utilize a certain extension? The kernel could just utilize it, and
>> the hypervisor would know by that simple fact.
> 
> But for PVH doesn't the hypervisor need to know at dom0 build time
> whether to build a PV or PVH domain?

Which needs to be communicated via a hypervisor command line option
anyway. Specifying the option without having a suitable kernel is (of
course) a user error (generally expected to result in a kernel crash).

Jan

> So it needs to know upfront if the
> kernel could do PVH or not, and then pick, but once it has picked the
> kernel had best follow that choice.
> 
> So in the PVH case it's not just a simple case of the kernel deciding to
> utilize an optional feature, the optional feature has already been
> enabled.
> 
> Ian




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VBE-0000Fb-Lb; Tue, 07 Jan 2014 11:50:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VBD-0000FS-CS
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 11:50:43 +0000
Received: from [85.158.137.68:31436] by server-12.bemta-3.messagelabs.com id
	0E/77-20055-21AEBC25; Tue, 07 Jan 2014 11:50:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389095441!7669220!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9564 invoked from network); 7 Jan 2014 11:50:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 11:50:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 11:50:40 +0000
Message-Id: <52CBF81D02000078001110EF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 11:50:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<1387624194.1025.70.camel@dagon.hellion.org.uk>
	<52B8046302000078000A9C8C@nat28.tlf.novell.com>
	<52B8235D02000078000A9CB1@nat28.tlf.novell.com>
	<20131224015650.GA2191@pegasus.dumpdata.com>
	<52CBC7900200007800110EF7@nat28.tlf.novell.com>
	<1389094274.31766.133.camel@kazak.uk.xensource.com>
In-Reply-To: <1389094274.31766.133.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 12:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-07 at 08:23 +0000, Jan Beulich wrote:
>> >> that feature statically in the first place - that should be done only if 
> the 
>> > kernel
>> >> could _only_ boot in PVH mode.
>> > 
>> > The feature is not marked as "required" but rather - it can utilize said
>> > extension (so supported). I am advocating that the calleer checks that
>> > all of the required pieces are correct - and it can ignore the ones it
>> > has no idea off (which it does for some of the Xen ELF notes - ignores
>> > them if it has no idea of what they are).
>> 
>> What would be to point of telling the hypervisor that the kernel
>> can utilize a certain extension? The kernel could just utilize it, and
>> the hypervisor would know by that simple fact.
> 
> But for PVH doesn't the hypervisor need to know at dom0 build time
> whether to build a PV or PVH domain?

Which needs to be communicated via a hypervisor command line option
anyway. Specifying the option without having a suitable kernel is (of
course) a user error (generally expected to result in a kernel crash).
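(For illustration of the supported-vs-required distinction discussed above: a
kernel advertises its capabilities to the domain builder as a bitmask in its
XEN_ELFNOTE_SUPPORTED_FEATURES ELF note. The sketch below decodes such a mask
into feature names; the bit positions are assumptions taken from
xen/include/public/features.h, not something stated in this thread.)

```python
# Sketch: decode a XEN_ELFNOTE_SUPPORTED_FEATURES bitmask into feature names.
# Bit positions assumed from xen/include/public/features.h (XENFEAT_* values).
XENFEAT_BITS = {
    0: "writable_page_tables",
    1: "writable_descriptor_tables",
    2: "auto_translated_physmap",
    3: "supervisor_mode_kernel",
    4: "pae_pgdir_above_4gb",
    5: "mmu_pt_update_preserve_ad",
    8: "hvm_callback_vector",
    9: "hvm_safe_pvclock",
    10: "hvm_pirqs",
    11: "dom0",
}

def decode_supported_features(mask):
    """Return the names of all features whose bits are set in `mask`."""
    return [name for bit, name in sorted(XENFEAT_BITS.items())
            if mask & (1 << bit)]

# The dom0 kernel in the attached serial log reports
# "SUPPORTED_FEATURES = 0x801", i.e. bits 0 and 11:
print(decode_supported_features(0x801))
```

Note these are capabilities, not requirements: a builder that does not know a
bit can ignore it, which is exactly the behaviour being argued for with the
other Xen ELF notes.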

Jan

> So it needs to know upfront whether the
> kernel can do PVH or not, and then pick; but once it has picked, the
> kernel had best follow that choice.
> 
> So in the PVH case it's not just a simple case of the kernel deciding to
> utilize an optional feature; the optional feature has already been
> enabled.
> 
> Ian




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 11:54:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VEQ-0000PL-4D; Tue, 07 Jan 2014 11:54:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0VEM-0000PB-P0
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:53:59 +0000
Received: from [85.158.137.68:61143] by server-15.bemta-3.messagelabs.com id
	E2/02-11556-6DAEBC25; Tue, 07 Jan 2014 11:53:58 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389095636!7688483!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26751 invoked from network); 7 Jan 2014 11:53:56 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 11:53:56 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:52998 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0V3c-0005kn-7T; Tue, 07 Jan 2014 12:42:52 +0100
Date: Tue, 7 Jan 2014 12:53:52 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1536712177.20140107125352@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------12706E23E275304AE"
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup -
	CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------12706E23E275304AE
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi Konrad,

A new year and a new Linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
But dom0 seems to blow up for me (without this branch pulled it works OK).

Xen: latest xen-unstable
Linux: latest 3.13-rc7+ with devel/for-linus-3.14 branch pulled on top of it.

The complete serial log is attached.

--
Sander

[  196.354896] BUG: soft lockup - CPU#1 stuck for 22s! [xendomains:9679]
[  196.362706] Modules linked in:
[  196.370360] irq event stamp: 61738
[  196.377871] hardirqs last  enabled at (61737): [<ffffffff81aa0b33>] restore_args+0x0/0x30
[  196.385465] hardirqs last disabled at (61738): [<ffffffff81aa1016>] error_sti+0x5/0x6
[  196.392910] softirqs last  enabled at (61736): [<ffffffff810a9df1>] __do_softirq+0x191/0x220
[  196.400192] softirqs last disabled at (61731): [<ffffffff810aa2e2>] irq_exit+0xa2/0xd0
[  196.407263] CPU: 1 PID: 9679 Comm: xendomains Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  196.414303] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  196.421273] task: ffff880058dec240 ti: ffff88004cd32000 task.ti: ffff88004cd32000
[  196.428263] RIP: e030:[<ffffffff81109a58>]  [<ffffffff81109a58>] generic_exec_single+0x88/0xc0
[  196.435323] RSP: e02b:ffff88004cd33a88  EFLAGS: 00000202
[  196.442170] RAX: 0000000000000008 RBX: ffff88005f614040 RCX: 0000000000000038
[  196.448898] RDX: 00000000000000ff RSI: 0000000000000008 RDI: 0000000000000008
[  196.455578] RBP: ffff88004cd33ac8 R08: ffffffff81c0d468 R09: 0000000000000000
[  196.462283] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88004cd33af0
[  196.468994] R13: 0000000000000001 R14: 0000000000000000 R15: ffff88005f614050
[  196.475704] FS:  00007f152ec4d700(0000) GS:ffff88005f640000(0000) knlGS:0000000000000000
[  196.482492] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  196.489229] CR2: 00007f152e2a1e02 CR3: 000000004cd2a000 CR4: 0000000000000660
[  196.495906] Stack:
[  196.502595]  0000000000000200 ffff88005f614040 0000000000000006 0000000000000000
[  196.509412]  0000000000000001 ffffffff822e7300 ffffffff81007980 0000000000000001
[  196.516266]  ffff88004cd33b38 ffffffff81109cc5 ffffffff81aa0b33 ffff8800ced56000
[  196.523112] Call Trace:
[  196.529769]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.536519]  [<ffffffff81109cc5>] smp_call_function_single+0xe5/0x1e0
[  196.543374]  [<ffffffff81aa0b33>] ? retint_restore_args+0x13/0x13
[  196.550098]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.556833]  [<ffffffff8110a03a>] smp_call_function_many+0x27a/0x2a0
[  196.563591]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.570240]  [<ffffffff8100841e>] xen_exit_mmap+0xce/0x1a0
[  196.576791]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
[  196.583449]  [<ffffffff81169426>] exit_mmap+0x56/0x180
[  196.590020]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.596504]  [<ffffffff811ddde0>] ? exit_aio+0xb0/0xe0
[  196.602900]  [<ffffffff811ddd44>] ? exit_aio+0x14/0xe0
[  196.609179]  [<ffffffff810a2689>] mmput+0x59/0xe0
[  196.615475]  [<ffffffff8119a3a9>] flush_old_exec+0x439/0x830
[  196.621761]  [<ffffffff811e8cca>] load_elf_binary+0x32a/0x1a00
[  196.627908]  [<ffffffff81a9ffe6>] ? _raw_read_unlock+0x26/0x30
[  196.633975]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.640137]  [<ffffffff81199243>] ? search_binary_handler+0xc3/0x1b0
[  196.646375]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.652537]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.658587]  [<ffffffff81199204>] search_binary_handler+0x84/0x1b0
[  196.664735]  [<ffffffff8119b252>] do_execve_common.isra.31+0x592/0x710
[  196.670825]  [<ffffffff8119b1e7>] ? do_execve_common.isra.31+0x527/0x710
[  196.676902]  [<ffffffff81186345>] ? kmem_cache_alloc+0xb5/0x120
[  196.682972]  [<ffffffff8119b3e3>] do_execve+0x13/0x20
[  196.689016]  [<ffffffff8119b658>] SyS_execve+0x38/0x60
[  196.694944]  [<ffffffff81aa19e9>] stub_execve+0x69/0xa0
[  196.700913] Code: c8 48 89 da e8 ea c1 32 00 48 8b 45 c0 4c 89 ff 48 89 c6 e8 bb 67 99 00 48 3b 5d c8 74 2d 45 85 ed 75 0a eb 10 66 0f 1f 44 00 00 <f3> 90 41 f6 44 24 20 01 75 f6 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d
------------12706E23E275304AE
Content-Type: application/octet-stream;
 name="serial.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="serial.log"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICBfICBfICAgICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCB8IHx8IHwg
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxffCB8fCB8XyBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3xfXyAgIF98X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKSB8X3wgICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC40LXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MtNC43LnJlYWwgKERlYmlhbiA0LjcuMi01KSA0Ljcu
MikgZGVidWc9eSBUdWUgSmFuICA3IDExOjUzOjMwIENFVCAyMDE0DQooWEVOKSBMYXRlc3Qg
Q2hhbmdlU2V0OiBGcmkgRGVjIDIwIDEyOjAyOjA2IDIwMTMgKzAxMDAgZ2l0OjlhODBkNTAt
ZGlydHkNCihYRU4pIEJvb3Rsb2FkZXI6IEdSVUIgMS45OS0yNytkZWI3dTINCihYRU4pIENv
bW1hbmQgbGluZTogZG9tMF9tZW09MTUzNk0sbWF4OjE1MzZNIGxvZ2x2bD1hbGwgbG9nbHZs
X2d1ZXN0PWFsbCBjb25zb2xlX3RpbWVzdGFtcHMgdmdhPWdmeC0xMjgweDEwMjR4MzIgY3B1
aWRsZSBjcHVmcmVxPXhlbiBkZWJ1ZyBsYXBpYz1kZWJ1ZyBhcGljX3ZlcmJvc2l0eT1kZWJ1
ZyBhcGljPWRlYnVnIGlvbW11PW9uLHZlcmJvc2UsZGVidWcsYW1kLWlvbW11LWRlYnVnIGl2
cnNfaW9hcGljWzZdPTAwOjE0LjAgaXZyc19ocGV0WzBdPTAwOjE0LjAgY29tMT0zODQwMCw4
bjEgY29uc29sZT12Z2EsY29tMQ0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAg
VkdBIGlzIGdyYXBoaWNzIG1vZGUgMTI4MHgxMDI0LCAzMiBicHANCihYRU4pICBWQkUvRERD
IG1ldGhvZHM6IG5vbmU7IEVESUQgdHJhbnNmZXIgdGltZTogMCBzZWNvbmRzDQooWEVOKSAg
RURJRCBpbmZvIG5vdCByZXRyaWV2ZWQgYmVjYXVzZSBubyBEREMgcmV0cmlldmFsIG1ldGhv
ZCBkZXRlY3RlZA0KKFhFTikgRGlzYyBpbmZvcm1hdGlvbjoNCihYRU4pICBGb3VuZCAyIE1C
UiBzaWduYXR1cmVzDQooWEVOKSAgRm91bmQgMiBFREQgaW5mb3JtYXRpb24gc3RydWN0dXJl
cw0KKFhFTikgWGVuLWU4MjAgUkFNIG1hcDoNCihYRU4pICAwMDAwMDAwMDAwMDAwMDAwIC0g
MDAwMDAwMDAwMDA5OTgwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwMDAwOTk4MDAgLSAw
MDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMGU0MDAwIC0g
MDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAt
IDAwMDAwMDAwOWZmOTAwMDAgKHVzYWJsZSkNCihYRU4pICAwMDAwMDAwMDlmZjkwMDAwIC0g
MDAwMDAwMDA5ZmY5ZTAwMCAoQUNQSSBkYXRhKQ0KKFhFTikgIDAwMDAwMDAwOWZmOWUwMDAg
LSAwMDAwMDAwMDlmZmUwMDAwIChBQ1BJIE5WUykNCihYRU4pICAwMDAwMDAwMDlmZmUwMDAw
IC0gMDAwMDAwMDBhMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZmUwMDAw
MCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNTYwMDAwMDAwICh1c2FibGUpDQooWEVOKSBBQ1BJOiBSU0RQIDAwMEZC
MTAwLCAwMDE0IChyMCBBQ1BJQU0pDQooWEVOKSBBQ1BJOiBSU0RUIDlGRjkwMDAwLCAwMDQ4
IChyMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFD
UEk6IEZBQ1AgOUZGOTAyMDAsIDAwODQgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBN
U0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRFNEVCA5RkY5MDVFMCwgOTQyNyAocjEgIEE3
NjQwIEE3NjQwMTAwICAgICAgMTAwIElOVEwgMjAwNTExMTcpDQooWEVOKSBBQ1BJOiBGQUNT
IDlGRjlFMDAwLCAwMDQwDQooWEVOKSBBQ1BJOiBBUElDIDlGRjkwMzkwLCAwMDg4IChyMSA3
NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE1D
RkcgOUZGOTA0MjAsIDAwM0MgKHIxIDc2NDBNUyBPRU1NQ0ZHICAyMDEwMDkxMyBNU0ZUICAg
ICAgIDk3KQ0KKFhFTikgQUNQSTogU0xJQyA5RkY5MDQ2MCwgMDE3NiAocjEgTVNJICAgIE9F
TVNMSUMgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBPRU1CIDlGRjlF
MDQwLCAwMDcyIChyMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykN
CihYRU4pIEFDUEk6IFNSQVQgOUZGOUE1RTAsIDAxMDggKHIzIEFNRCAgICBGQU1fRl8xMCAg
ICAgICAgMiBBTUQgICAgICAgICAxKQ0KKFhFTikgQUNQSTogSFBFVCA5RkY5QTZGMCwgMDAz
OCAocjEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcpDQooWEVOKSBB
Q1BJOiBJVlJTIDlGRjlBNzMwLCAwMTA4IChyMSAgQU1EICAgICBSRDg5MFMgICAyMDIwMzEg
QU1EICAgICAgICAgMCkNCihYRU4pIEFDUEk6IFNTRFQgOUZGOUE4NDAsIDBEQTQgKHIxIEEg
TSBJICBQT1dFUk5PVyAgICAgICAgMSBBTUQgICAgICAgICAxKQ0KKFhFTikgU3lzdGVtIFJB
TTogMjA0NzlNQiAoMjA5NzA2NjBrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+
IEFQSUMgMyAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2Rl
IDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6
IE5vZGUgMCBQWE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAw
LWEwMDAwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTU2MDAwMDAw
MA0KKFhFTikgTlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSA1NWQyNmEwMDAgLSA1
NWQyNzAwMDANCihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhF
TikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIg
YXQgMHhmYjAwMDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDIwMTAwMCwgdXNpbmcgNjE0
NGssIHRvdGFsIDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwg
bGluZWxlbmd0aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBz
aXplPTg6ODo4OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxl
IGF0IDAwMGZmNzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0
ZSBpcyAneGFwaWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBB
Q1BJOiBQTS1UaW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogU0xFRVAgSU5GTzog
cG0xeF9jbnRbODA0LDBdLCBwbTF4X2V2dFs4MDAsMF0NCihYRU4pIEFDUEk6ICAgICAgICAg
ICAgIHdha2V1cF92ZWNbOWZmOWUwMGNdLCB2ZWNfc2l6ZVsyMF0NCihYRU4pIEFDUEk6IExv
Y2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwDQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAxXSBsYXBpY19pZFsweDAwXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMwIDA6
MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAyXSBs
YXBpY19pZFsweDAxXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMxIDA6MTAgQVBJQyB2
ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsw
eDAyXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMyIDA6MTAgQVBJQyB2ZXJzaW9uIDE2
DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBpY19pZFsweDAzXSBlbmFi
bGVkKQ0KKFhFTikgUHJvY2Vzc29yICMzIDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDA0XSBlbmFibGVkKQ0KKFhF
TikgUHJvY2Vzc29yICM0IDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vz
c29yICM1IDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlkWzB4
MDZdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQooWEVOKSBJT0FQSUNbMF06
IGFwaWNfaWQgNiwgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kgMC0yMw0K
KFhFTikgQUNQSTogSU9BUElDIChpZFsweDA3XSBhZGRyZXNzWzB4ZmVjMjAwMDBdIGdzaV9i
YXNlWzI0XSkNCihYRU4pIElPQVBJQ1sxXTogYXBpY19pZCA3LCB2ZXJzaW9uIDMzLCBhZGRy
ZXNzIDB4ZmVjMjAwMDAsIEdTSSAyNC01NQ0KKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1
cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkNCihYRU4pIEFDUEk6IElOVF9T
UkNfT1ZSIChidXMgMCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkNCihYRU4p
IEFDUEk6IElSUTAgdXNlZCBieSBvdmVycmlkZS4NCihYRU4pIEFDUEk6IElSUTIgdXNlZCBi
eSBvdmVycmlkZS4NCihYRU4pIEFDUEk6IElSUTkgdXNlZCBieSBvdmVycmlkZS4NCihYRU4p
IEVuYWJsaW5nIEFQSUMgbW9kZTogIEZsYXQuICBVc2luZyAyIEkvTyBBUElDcw0KKFhFTikg
QUNQSTogSFBFVCBpZDogMHg4MzAwIGJhc2U6IDB4ZmVkMDAwMDANCihYRU4pIEVSU1QgdGFi
bGUgd2FzIG5vdCBmb3VuZA0KKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uDQooWEVOKSBTTVA6IEFsbG93aW5nIDYgQ1BVcyAoMCBo
b3RwbHVnIENQVXMpDQooWEVOKSBtYXBwZWQgQVBJQyB0byBmZmZmODJjZmZmZGZiMDAwIChm
ZWUwMDAwMCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyY2ZmZmRmYTAwMCAoZmVj
MDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZkZjkwMDAgKGZlYzIw
MDAwKQ0KKFhFTikgSVJRIGxpbWl0czogNTYgR1NJLCAxMTEyIE1TSS9NU0ktWA0KKFhFTikg
VXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQ0KKFhFTikg
RGV0ZWN0ZWQgMzIwMC4xNjEgTUh6IHByb2Nlc3Nvci4NCihYRU4pIEluaXRpbmcgbWVtb3J5
IHNoYXJpbmcuDQooWEVOKSBBTUQgRmFtMTBoIG1hY2hpbmUgY2hlY2sgcmVwb3J0aW5nIGVu
YWJsZWQNCihYRU4pIFBDSTogTUNGRyBjb25maWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAwMDAg
c2VnbWVudCAwMDAwIGJ1c2VzIDAwIC0gZmYNCihYRU4pIFBDSTogTm90IHVzaW5nIE1DRkcg
Zm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYNCihYRU4pIEFNRC1WaTogRm91bmQgTVNJIGNh
cGFiaWxpdHkgYmxvY2sgYXQgMHg1NA0KKFhFTikgQU1ELVZpOiBBQ1BJIFRhYmxlOg0KKFhF
TikgQU1ELVZpOiAgU2lnbmF0dXJlIElWUlMNCihYRU4pIEFNRC1WaTogIExlbmd0aCAweDEw
OA0KKFhFTikgQU1ELVZpOiAgUmV2aXNpb24gMHgxDQooWEVOKSBBTUQtVmk6ICBDaGVja1N1
bSAweGZkDQooWEVOKSBBTUQtVmk6ICBPRU1fSWQgQU1EICANCihYRU4pIEFNRC1WaTogIE9F
TV9UYWJsZV9JZCBSRDg5MFMNCihYRU4pIEFNRC1WaTogIE9FTV9SZXZpc2lvbiAweDIwMjAz
MQ0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9JZCBBTUQgDQooWEVOKSBBTUQtVmk6ICBDcmVh
dG9yX1JldmlzaW9uIDANCihYRU4pIEFNRC1WaTogSVZSUyBCbG9jazogdHlwZSAweDEwIGZs
YWdzIDB4M2UgbGVuIDB4ZDggaWQgMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MyBpZCAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5n
ZTogMCAtPiAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgy
IGlkIDB4MTAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlw
ZSAweDIgaWQgMHhkMDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRy
eTogdHlwZSAweDIgaWQgMHgxOCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4MyBpZCAweGMwMCBmbGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgUmFuZ2U6IDB4YzAwIC0+IDB4YzAxDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MiBpZCAweDI4IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZp
Y2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YjAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4MzAgZmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhhMDAgZmxhZ3MgMA0KKFhFTikg
QU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg0OCBmbGFncyAwDQoo
WEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDkwMCBmbGFn
cyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDUw
IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlk
IDB4ODAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUg
MHgyIGlkIDB4NTggZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDMgaWQgMHg3MDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdl
OiAweDcwMCAtPiAweDcwMQ0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlw
ZSAweDIgaWQgMHg2MCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
OiB0eXBlIDB4MiBpZCAweDUwMCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4NDMgaWQgMHg2MDggZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIFJhbmdlOiAweDYwOCAtPiAweDZmZiBhbGlhcyAweDYwMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg2OCBmbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDQwMCBmbGFncyAwDQooWEVO
KSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDg4IGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OTAgZmxh
Z3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDkwIC0+IDB4OTINCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OTggZmxhZ3MgMA0K
KFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YTAgZmxhZ3MgMHhkNw0KKFhF
TikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhhMiBmbGFncyAw
DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweGEzIGZs
YWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4
YTQgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDQz
IGlkIDB4MzAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHgzMDAg
LT4gMHgzZmYgYWxpYXMgMHhhNA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDIgaWQgMHhhNSBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MiBpZCAweGE4IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZp
Y2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YTkgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHgxMDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDMgaWQgMHhiMCBmbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4YjAgLT4gMHhiMg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAweDQ4IGlkIDAgZmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZp
OiBJVkhEIFNwZWNpYWw6IDAwMDA6MDA6MTQuMCB2YXJpZXR5IDB4MiBoYW5kbGUgMA0KKFhF
TikgQU1ELVZpOiBJVkhEOiBDb21tYW5kIGxpbmUgb3ZlcnJpZGUgcHJlc2VudCBmb3IgSFBF
VCAwIChJVlJTOiAwIGRldklEIDAwMDA6MDA6MTQuMCkNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBTcGVjaWFsOiAwMDAwOjAwOjAwLjEgdmFyaWV0eSAweDEgaGFuZGxlIDB4Nw0KKFhFTikg
QU1ELVZpOiBJT01NVSAwIEVuYWJsZWQuDQooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5h
YmxlZA0KKFhFTikgIC0gRG9tMCBtb2RlOiBSZWxheGVkDQooWEVOKSBJbnRlcnJ1cHQgcmVt
YXBwaW5nIGVuYWJsZWQNCihYRU4pIEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTANCihYRU4p
IEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTANCihYRU4pIEdldHRpbmcgSUQ6IDANCihYRU4p
IEdldHRpbmcgTFZUMDogNzAwDQooWEVOKSBHZXR0aW5nIExWVDE6IDQwMA0KKFhFTikgZW5h
YmxlZCBFeHRJTlQgb24gQ1BVIzANCihYRU4pIEVTUiB2YWx1ZSBiZWZvcmUgZW5hYmxpbmcg
dmVjdG9yOiAweDQgIGFmdGVyOiAwDQooWEVOKSBFTkFCTElORyBJTy1BUElDIElSUXMNCihY
RU4pICAtPiBVc2luZyBuZXcgQUNLIG1ldGhvZA0KKFhFTikgaW5pdCBJT19BUElDIElSUXMN
CihYRU4pICBJTy1BUElDIChhcGljaWQtcGluKSA2LTAsIDYtMTYsIDYtMTcsIDYtMTgsIDYt
MTksIDYtMjAsIDYtMjEsIDYtMjIsIDYtMjMsIDctMCwgNy0xLCA3LTIsIDctMywgNy00LCA3
LTUsIDctNiwgNy03LCA3LTgsIDctOSwgNy0xMCwgNy0xMSwgNy0xMiwgNy0xMywgNy0xNCwg
Ny0xNSwgNy0xNiwgNy0xNywgNy0xOCwgNy0xOSwgNy0yMCwgNy0yMSwgNy0yMiwgNy0yMywg
Ny0yNCwgNy0yNSwgNy0yNiwgNy0yNywgNy0yOCwgNy0yOSwgNy0zMCwgNy0zMSBub3QgY29u
bmVjdGVkLg0KKFhFTikgLi5USU1FUjogdmVjdG9yPTB4RjAgYXBpYzE9MCBwaW4xPTIgYXBp
YzI9LTEgcGluMj0tMQ0KKFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4NCihY
RU4pIG51bWJlciBvZiBJTy1BUElDICM2IHJlZ2lzdGVyczogMjQuDQooWEVOKSBudW1iZXIg
b2YgSU8tQVBJQyAjNyByZWdpc3RlcnM6IDMyLg0KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJ
Qy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uDQooWEVOKSBJTyBBUElDICM2Li4uLi4uDQooWEVO
KSAuLi4uIHJlZ2lzdGVyICMwMDogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlz
aWNhbCBBUElDIGlkOiAwNg0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAN
CihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lz
dGVyICMwMTogMDAxNzgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9u
IGVudHJpZXM6IDAwMTcNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAx
DQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4u
LiByZWdpc3RlciAjMDI6IDA2MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0
aW9uOiAwNg0KKFhFTikgLi4uLiByZWdpc3RlciAjMDM6IDA3MDAwMDAwDQooWEVOKSAuLi4u
Li4uICAgICA6IEJvb3QgRFQgICAgOiAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0
YWJsZToNCihYRU4pICBOUiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBE
ZWxpIFZlY3Q6ICAgDQooWEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMSAgICAzMA0KKFhFTikgIDAxIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgMzANCihYRU4pICAwMiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIEYwDQooWEVOKSAgMDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICAzOA0KKFhFTikgIDA0IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgRjENCihYRU4pICAwNSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA3IDAwMSAwMSAgMCAgICAwICAgIDAgICAw
ICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwOCAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDkgMDAxIDAxICAxICAgIDEgICAgMCAg
IDEgICAwICAgIDEgICAgMCAgICAwMA0KKFhFTikgIDBhIDAwMSAwMSAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNjgNCihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAgICAw
ICAgMCAgIDAgICAgMSAgICAxICAgIDcwDQooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA3OA0KKFhFTikgIDBkIDAwMSAwMSAgMCAgICAwICAg
IDAgICAwICAgMCAgICAxICAgIDEgICAgODgNCihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDkwDQooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5OA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxMSAwMDAgMDAgIDEgICAg
MCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAgMTIgMDAwIDAwICAxICAg
IDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAg
ICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxNCAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAgMTUgMDAwIDAwICAx
ICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgIDE2IDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxNyAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSBJTyBBUElDICM3
Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDcwMDAwMDANCihYRU4pIC4uLi4u
Li4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNw0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2
ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVO
KSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxRjgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4
IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMUYNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGlt
cGxlbWVudGVkOiAxDQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAy
MQ0KKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDAwMDAwMDAwDQooWEVOKSAuLi4uLi4uICAg
ICA6IGFyYml0cmF0aW9uOiAwMA0KKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6
DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBW
ZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMDIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDAzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMDUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDA2IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAwNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDggMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDA5IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAwYSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMGIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDBjIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMGUgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDBmIDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNiAwMDAgMDAgIDEgICAg
MCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTcgMDAwIDAwICAxICAg
IDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE4IDAwMCAwMCAgMSAg
ICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxOSAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWEgMDAwIDAwICAx
ICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFiIDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYyAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWQgMDAwIDAw
ICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFlIDAwMCAw
MCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZiAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSBVc2luZyB2
ZWN0b3ItYmFzZWQgaW5kZXhpbmcNCihYRU4pIElSUSB0byBwaW4gbWFwcGluZ3M6DQooWEVO
KSBJUlEyNDAgLT4gMDoyDQooWEVOKSBJUlE0OCAtPiAwOjENCihYRU4pIElSUTU2IC0+IDA6
Mw0KKFhFTikgSVJRMjQxIC0+IDA6NA0KKFhFTikgSVJRNjQgLT4gMDo1DQooWEVOKSBJUlE3
MiAtPiAwOjYNCihYRU4pIElSUTgwIC0+IDA6Nw0KKFhFTikgSVJRODggLT4gMDo4DQooWEVO
KSBJUlE5NiAtPiAwOjkNCihYRU4pIElSUTEwNCAtPiAwOjEwDQooWEVOKSBJUlExMTIgLT4g
MDoxMQ0KKFhFTikgSVJRMTIwIC0+IDA6MTINCihYRU4pIElSUTEzNiAtPiAwOjEzDQooWEVO
KSBJUlExNDQgLT4gMDoxNA0KKFhFTikgSVJRMTUyIC0+IDA6MTUNCihYRU4pIC4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25lLg0KKFhFTikgVXNpbmcgbG9jYWwg
QVBJQyB0aW1lciBpbnRlcnJ1cHRzLg0KKFhFTikgY2FsaWJyYXRpbmcgQVBJQyB0aW1lciAu
Li4NCihYRU4pIC4uLi4uIENQVSBjbG9jayBzcGVlZCBpcyAzMjAwLjEzNjcgTUh6Lg0KKFhF
TikgLi4uLi4gaG9zdCBidXMgY2xvY2sgc3BlZWQgaXMgMjAwLjAwODQgTUh6Lg0KKFhFTikg
Li4uLi4gYnVzX3NjYWxlID0gMHhjY2Q3DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0g
UGxhdGZvcm0gdGltZXIgaXMgMTQuMzE4TUh6IEhQRVQNCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjA5XSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDY0IEtpQi4NCihYRU4pIFsyMDE0
LTAxLTA3IDExOjA5OjA5XSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOTowOV0gU1ZNOiBTdXBwb3J0ZWQgYWR2YW5jZWQgZmVhdHVyZXM6DQooWEVOKSBb
MjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTmVzdGVkIFBhZ2UgVGFibGVzIChOUFQpDQooWEVO
KSBbMjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZp
cnR1YWxpc2F0aW9uDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTmV4dC1SSVAg
U2F2ZWQgb24gI1ZNRVhJVA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICAtIFBhdXNl
LUludGVyY2VwdCBGaWx0ZXINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBIVk06IFNW
TSBlbmFibGVkDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gSFZNOiBIYXJkd2FyZSBB
c3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA5XSBIVk06IEhBUCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOTowOV0gSFZNOiBQVkggbW9kZSBub3Qgc3VwcG9ydGVkIG9uIHRoaXMgcGxh
dGZvcm0NCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA4XSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSMxDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gbWljcm9jb2RlOiBDUFUxIGNvbGxl
Y3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MDhdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA5XSBtaWNyb2NvZGU6IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAw
MGJmDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOF0gbWFza2VkIEV4dElOVCBvbiBDUFUj
Mw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIG1pY3JvY29kZTogQ1BVMyBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA4XSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTow
OV0gbWljcm9jb2RlOiBDUFU0IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBi
Zg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDhdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzUN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBCcm91Z2h0IHVwIDYgQ1BVcw0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MDldIG1pY3JvY29kZTogQ1BVNSBjb2xsZWN0X2NwdV9pbmZv
OiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBBTUQt
Vmk6IEZhaWxlZCB0byBzZXR1cCBIUEVUIE1TSSByZW1hcHBpbmcuIFdyb25nIEhQRVQuDQoo
WEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gQU1ELVZpOiBGYWlsZWQgdG8gc2V0dXAgSFBF
VCBNU0kgcmVtYXBwaW5nLiBXcm9uZyBIUEVULg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MDldIEFNRC1WaTogRmFpbGVkIHRvIHNldHVwIEhQRVQgTVNJIHJlbWFwcGluZy4gV3Jvbmcg
SFBFVC4NCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBIUEVUOiAzIHRpbWVycyB1c2Fi
bGUgZm9yIGJyb2FkY2FzdCAoMyB0b3RhbCkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5
XSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIE1D
QTogVXNlIGh3IHRocmVzaG9sZGluZyB0byBhZGp1c3QgcG9sbGluZyBmcmVxdWVuY3kNCihY
RU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBw
b2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gWGVu
b3Byb2ZpbGU6IEZhaWxlZCB0byBzZXR1cCBJQlMgTFZUIG9mZnNldCwgSUJTQ1RMID0gMHhm
ZmZmZmZmZg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICoqKiBMT0FESU5HIERPTUFJ
TiAwICoqKg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDEwMjgwMDANCihYRU4pIFsyMDE0LTAx
LTA3IDExOjA5OjA5XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDIyMDAwMDAg
bWVtc3o9MHhmNzExMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl9wYXJzZV9i
aW5hcnk6IHBoZHI6IHBhZGRyPTB4MjJmODAwMCBtZW1zej0weDE0MzQwDQooWEVOKSBbMjAx
NC0wMS0wNyAxMTowOTowOV0gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgyMzBk
MDAwIG1lbXN6PTB4ZTgyMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3Bh
cnNlX2JpbmFyeTogbWVtb3J5OiAweDEwMDAwMDAgLT4gMHgzMThmMDAwDQooWEVOKSBbMjAx
NC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBHVUVTVF9PUyA9ICJsaW51
eCINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IEdV
RVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl94
ZW5fcGFyc2Vfbm90ZTogWEVOX1ZFUlNJT04gPSAieGVuLTMuMCINCihYRU4pIFsyMDE0LTAx
LTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IFZJUlRfQkFTRSA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl94ZW5fcGFyc2Vf
bm90ZTogRU5UUlkgPSAweGZmZmZmZmZmODIzMGQxZTANCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IEhZUEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZm
ZjgxMDAxMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9u
b3RlOiBGRUFUVVJFUyA9ICIhd3JpdGFibGVfcGFnZV90YWJsZXN8cGFlX3BnZGlyX2Fib3Zl
XzRnYiINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6
IFNVUFBPUlRFRF9GRUFUVVJFUyA9IDB4ODAxDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTow
OV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyIN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25v
d24geGVuIGVsZiBub3RlICgweGQpDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxm
X3hlbl9wYXJzZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MDldIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZm
ODAwMDAwMDAwMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBQQUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5
XSBlbGZfeGVuX2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MDldICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAw
DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAw
eDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSAgICAgdmlydF9vZmZzZXQgICAgICA9
IDB4ZmZmZmZmZmY4MDAwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICAgICB2
aXJ0X2tzdGFydCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOTowOV0gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODMxOGYwMDAN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4
ZmZmZmZmZmY4MjMwZDFlMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBdICAgICBwMm1f
YmFzZSAgICAgICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMF0gIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsy
MDE0LTAxLTA3IDExOjA5OjEwXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBh
ZGRyIDB4MTAwMDAwMCAtPiAweDMxOGYwMDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEw
XSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMF0gIERvbTAgYWxsb2MuOiAgIDAwMDAwMDA1NDgwMDAwMDAtPjAwMDAwMDA1NGMwMDAw
MDAgKDM3MzIyMyBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMF0gIEluaXQuIHJhbWRpc2s6IDAwMDAwMDA1NWYxZTcwMDAtPjAwMDAwMDA1NWZm
ZmY0MDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSBWSVJUVUFMIE1FTU9SWSBBUlJB
TkdFTUVOVDoNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgTG9hZGVkIGtlcm5lbDog
ZmZmZmZmZmY4MTAwMDAwMC0+ZmZmZmZmZmY4MzE4ZjAwMA0KKFhFTikgWzIwMTQtMDEtMDcg
MTE6MDk6MTBdICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgzMThmMDAwLT5mZmZmZmZmZjgz
ZmE3NDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMF0gIFBoeXMtTWFjaCBtYXA6IGZm
ZmZmZmZmODNmYTgwMDAtPmZmZmZmZmZmODQyYTgwMDANCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjEwXSAgU3RhcnQgaW5mbzogICAgZmZmZmZmZmY4NDJhODAwMC0+ZmZmZmZmZmY4NDJh
ODRiNA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBdICBQYWdlIHRhYmxlczogICBmZmZm
ZmZmZjg0MmE5MDAwLT5mZmZmZmZmZjg0MmNlMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMF0gIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODQyY2UwMDAtPmZmZmZmZmZmODQyY2Yw
MDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgVE9UQUw6ICAgICAgICAgZmZmZmZm
ZmY4MDAwMDAwMC0+ZmZmZmZmZmY4NDQwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MTBdICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgyMzBkMWUwDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMF0gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMF0gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAw
MDAwIC0+IDB4ZmZmZmZmZmY4MjAyODAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBd
IGVsZl9sb2FkX2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MjIwMDAwMCAtPiAweGZm
ZmZmZmZmODIyZjcxMTANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSBlbGZfbG9hZF9i
aW5hcnk6IHBoZHIgMiBhdCAweGZmZmZmZmZmODIyZjgwMDAgLT4gMHhmZmZmZmZmZjgyMzBj
MzQwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMF0gZWxmX2xvYWRfYmluYXJ5OiBwaGRy
IDMgYXQgMHhmZmZmZmZmZjgyMzBkMDAwIC0+IDB4ZmZmZmZmZmY4MjQwZjAwMA0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDAsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCB0eXBlID0g
MHg3LCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBh
Z2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAw
eDU0ZjBhYzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHgxOCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCB0eXBlID0gMHgy
LCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4MzAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDU0
ZjBhYzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg0OCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwLCB0eXBlID0gMHgyLCBy
b290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFi
bGU6IGRldmljZSBpZCA9IDB4NTgsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDU0ZjBh
YzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2
MCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDY4LCB0eXBlID0gMHgyLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MCwg
dHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkyLCB0eXBlID0gMHg3LCByb290IHRh
YmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTox
MV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5YSwgdHlw
ZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEwLCB0eXBlID0gMHg3LCByb290IHRhYmxl
ID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIw
MTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmlj
ZSBpZCA9IDB4YTIsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMywgdHlwZSA9
IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE0LCB0eXBlID0gMHg1LCByb290IHRhYmxlID0g
MHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQt
MDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9tYWlu
ID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhOCwgdHlwZSA9IDB4
Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9
IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweGIwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHg1
NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4YjIsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1ELVZp
OiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDE0LTAxLTA3
IDExOjA5OjExXSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MTguMQ0K
KFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2tpcHBpbmcgaG9zdCBicmlk
Z2UgMDAwMDowMDoxOC4yDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1ELVZpOiBT
a2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjExXSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MTguNA0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4NDAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NTAw
LCB0eXBlID0gMHgzLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NjAwLCB0eXBlID0gMHg3LCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4NzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NzAx
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4ODAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4OTAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAw
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YjAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4YzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzAx
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4ZDAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIFNjcnViYmluZyBGcmVlIFJBTTogLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLmRvbmUuDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxN10gSW5p
dGlhbCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQoo
WEVOKSBbMjAxNC0wMS0wNyAxMTowOToxN10gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBb
MjAxNC0wMS0wNyAxMTowOToxN10gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTQt
MDEtMDcgMTE6MDk6MTddIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTddICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlw
ZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTddIEZyZWVkIDI3MmtCIGluaXQgbWVtb3J5Lg0KbWFwcGlu
ZyBrZXJuZWwgaW50byBwaHlzaWNhbCBtZW1vcnkNCmFib3V0IHRvIGdldCBzdGFydGVkLi4u
DQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsg
ICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAw
MDAwMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1YWNjdA0KWyAgICAwLjAwMDAw
MF0gTGludXggdmVyc2lvbiAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAocm9vdEBz
ZXJ2ZWVyc3RlcnRqZSkgKGdjYyB2ZXJzaW9uIDQuNy4yIChEZWJpYW4gNC43LjItNSkgKSAj
MSBTTVAgVHVlIEphbiA3IDEwOjAyOjU1IENFVCAyMDE0DQpbICAgIDAuMDAwMDAwXSBDb21t
YW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBwZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJi
b3NlIGVhcmx5cHJpbnRrPXhlbiBtZW09MTUzNk0gY29uc29sZT1odmMwIGNvbnNvbGU9dHR5
MCB2Z2E9Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2VuZm9yY2VfcmVzb3VyY2VzPWxheCBtYXhf
bG9vcD0zMCBsb29wX21heF9wYXJ0PTEwIHhlbi1wY2liYWNrLmhpZGU9KDAzOjA2LjApKDA0
OjAwLiopKDA2OjAxLiopKDA3OjAwLiopKDA4OjAwLiopKDBjOjAwLiopIGRlYnVnIGxvZ2xl
dmVsPTEwIG5vbW9kZXNldA0KWyAgICAwLjAwMDAwMF0gRnJlZWluZyA5OS0xMDAgcGZuIHJh
bmdlOiAxMDMgcGFnZXMgZnJlZWQNClsgICAgMC4wMDAwMDBdIFJlbGVhc2VkIDEwMyBwYWdl
cyBvZiB1bnVzZWQgbWVtb3J5DQpbICAgIDAuMDAwMDAwXSBTZXQgMzkzNDMxIHBhZ2Uocykg
dG8gMS0xIG1hcHBpbmcNClsgICAgMC4wMDAwMDBdIFBvcHVsYXRpbmcgNjAwMDAtNjAwNjcg
cGZuIHJhbmdlOiAxMDMgcGFnZXMgYWRkZWQNClsgICAgMC4wMDAwMDBdIGU4MjA6IEJJT1Mt
cHJvdmlkZWQgcGh5c2ljYWwgUkFNIG1hcDoNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAw
eDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDAwMDk4ZmZmXSB1c2FibGUNClsgICAgMC4w
MDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAwOTk4MDAtMHgwMDAwMDAwMDAwMGZmZmZm
XSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDAwMDEwMDAw
MC0weDAwMDAwMDAwNjAwNjZmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVt
IDB4MDAwMDAwMDA2MDA2NzAwMC0weDAwMDAwMDAwOWZmOGZmZmZdIHVudXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDlmZjkwMDAwLTB4MDAwMDAwMDA5ZmY5
ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDlm
ZjllMDAwLTB4MDAwMDAwMDA5ZmZkZmZmZl0gQUNQSSBOVlMNClsgICAgMC4wMDAwMDBdIFhl
bjogW21lbSAweDAwMDAwMDAwOWZmZTAwMDAtMHgwMDAwMDAwMDlmZmZmZmZmXSByZXNlcnZl
ZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAwMDAw
MDAwZmVlZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAw
MDAwMGZmZTAwMDAwLTB4MDAwMDAwMDBmZmZmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAw
MDBdIFhlbjogW21lbSAweDAwMDAwMDAxMDAwMDAwMDAtMHgwMDAwMDAwNTVmZmZmZmZmXSB1
bnVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwZmQwMDAwMDAwMC0w
eDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBib290Y29uc29s
ZSBbeGVuYm9vdDBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2Fi
bGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXNlci1kZWZp
bmVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAw
MDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwMDAwOThmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAw
MF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwOTk4MDAtMHgwMDAwMDAwMDAwMGZmZmZmXSBy
ZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAxMDAwMDAt
MHgwMDAwMDAwMDVmZmZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0g
MHgwMDAwMDAwMDYwMDY3MDAwLTB4MDAwMDAwMDA5ZmY4ZmZmZl0gdW51c2FibGUNClsgICAg
MC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDlmZjkwMDAwLTB4MDAwMDAwMDA5ZmY5
ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA5
ZmY5ZTAwMC0weDAwMDAwMDAwOWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1
c2VyOiBbbWVtIDB4MDAwMDAwMDA5ZmZlMDAwMC0weDAwMDAwMDAwOWZmZmZmZmZdIHJlc2Vy
dmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAw
MDAwMDAwZmVlZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4
MDAwMDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDA1NWZmZmZm
ZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwZmQwMDAw
MDAwMC0weDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBTTUJJ
T1MgMi41IHByZXNlbnQuDQpbICAgIDAuMDAwMDAwXSBETUk6IE1TSSBNUy03NjQwLzg5MEZY
QS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAgIDAuMDAw
MDAwXSBlODIwOiB1cGRhdGUgW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmZdIHVzYWJsZSA9
PT4gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIGU4MjA6IHJlbW92ZSBbbWVtIDB4MDAwYTAw
MDAtMHgwMDBmZmZmZl0gdXNhYmxlDQpbICAgIDAuMDAwMDAwXSBObyBBR1AgYnJpZGdlIGZv
dW5kDQpbICAgIDAuMDAwMDAwXSBlODIwOiBsYXN0X3BmbiA9IDB4NjAwMDAgbWF4X2FyY2hf
cGZuID0gMHg0MDAwMDAwMDANClsgICAgMC4wMDAwMDBdIFNjYW5uaW5nIDEgYXJlYXMgZm9y
IGxvdyBtZW1vcnkgY29ycnVwdGlvbg0KWyAgICAwLjAwMDAwMF0gQmFzZSBtZW1vcnkgdHJh
bXBvbGluZSBhdCBbZmZmZjg4MDAwMDA5MzAwMF0gOTMwMDAgc2l6ZSAyNDU3Ng0KWyAgICAw
LjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDAwMDAwMDAwLTB4MDAwZmZm
ZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAwMDAwMDAwLTB4MDAwZmZmZmZdIHBhZ2Ug
NGsNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHg1ZmUwMDAw
MC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHg1ZmUwMDAwMC0weDVmZmZm
ZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDJkODIwMDAsIDB4MDJkODJm
ZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4MzAwMCwgMHgwMmQ4M2Zm
Zl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAw
eDVjMDAwMDAwLTB4NWZkZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDVjMDAwMDAw
LTB4NWZkZmZmZmZdIHBhZ2UgNGsNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4NDAwMCwg
MHgwMmQ4NGZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gQlJLIFsweDAyZDg1MDAwLCAw
eDAyZDg1ZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDJkODYwMDAsIDB4
MDJkODZmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4NzAwMCwgMHgw
MmQ4N2ZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzog
W21lbSAweDAwMTAwMDAwLTB4NWJmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAw
MTAwMDAwLTB4NWJmZmZmZmZdIHBhZ2UgNGsNClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFtt
ZW0gMHgwMzE4ZjAwMC0weDAzZmE3ZmZmXQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEUCAw
MDAwMDAwMDAwMGZiMTAwIDAwMDAxNCAodjAwIEFDUElBTSkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IFJTRFQgMDAwMDAwMDA5ZmY5MDAwMCAwMDAwNDggKHYwMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IEZBQ1AgMDAw
MDAwMDA5ZmY5MDIwMCAwMDAwODQgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNG
VCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IERTRFQgMDAwMDAwMDA5ZmY5MDVl
MCAwMDk0MjcgKHYwMSAgQTc2NDAgQTc2NDAxMDAgMDAwMDAxMDAgSU5UTCAyMDA1MTExNykN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IEZBQ1MgMDAwMDAwMDA5ZmY5ZTAwMCAwMDAwNDANClsg
ICAgMC4wMDAwMDBdIEFDUEk6IEFQSUMgMDAwMDAwMDA5ZmY5MDM5MCAwMDAwODggKHYwMSA3
NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBd
IEFDUEk6IE1DRkcgMDAwMDAwMDA5ZmY5MDQyMCAwMDAwM0MgKHYwMSA3NjQwTVMgT0VNTUNG
RyAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNMSUMg
MDAwMDAwMDA5ZmY5MDQ2MCAwMDAxNzYgKHYwMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA5MTMg
TVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IE9FTUIgMDAwMDAwMDA5ZmY5
ZTA0MCAwMDAwNzIgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5
NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNSQVQgMDAwMDAwMDA5ZmY5YTVlMCAwMDAxMDgg
KHYwMyBBTUQgICAgRkFNX0ZfMTAgMDAwMDAwMDIgQU1EICAwMDAwMDAwMSkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IEhQRVQgMDAwMDAwMDA5ZmY5YTZmMCAwMDAwMzggKHYwMSA3NjQwTVMg
T0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6
IElWUlMgMDAwMDAwMDA5ZmY5YTczMCAwMDAxMDggKHYwMSAgQU1EICAgICBSRDg5MFMgMDAy
MDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAw
MDA5ZmY5YTg0MCAwMDBEQTQgKHYwMSBBIE0gSSAgUE9XRVJOT1cgMDAwMDAwMDEgQU1EICAw
MDAwMDAwMSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZl
ZTAwMDAwDQpbICAgIDAuMDAwMDAwXSBOVU1BIHR1cm5lZCBvZmYNClsgICAgMC4wMDAwMDBd
IEZha2luZyBhIG5vZGUgYXQgW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDVm
ZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAweDAw
MDAwMDAwLTB4NWZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgIE5PREVfREFUQSBbbWVtIDB4
NWZkMWMwMDAtMHg1ZmQyNmZmZl0NClsgICAgMC4wMDAwMDBdIFpvbmUgcmFuZ2VzOg0KWyAg
ICAwLjAwMDAwMF0gICBETUEgICAgICBbbWVtIDB4MDAwMDEwMDAtMHgwMGZmZmZmZl0NClsg
ICAgMC4wMDAwMDBdICAgRE1BMzIgICAgW21lbSAweDAxMDAwMDAwLTB4ZmZmZmZmZmZdDQpb
ICAgIDAuMDAwMDAwXSAgIE5vcm1hbCAgIGVtcHR5DQpbICAgIDAuMDAwMDAwXSBNb3ZhYmxl
IHpvbmUgc3RhcnQgZm9yIGVhY2ggbm9kZQ0KWyAgICAwLjAwMDAwMF0gRWFybHkgbWVtb3J5
IG5vZGUgcmFuZ2VzDQpbICAgIDAuMDAwMDAwXSAgIG5vZGUgICAwOiBbbWVtIDB4MDAwMDEw
MDAtMHgwMDA5OGZmZl0NClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDEw
MDAwMC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gT24gbm9kZSAwIHRvdGFscGFnZXM6
IDM5MzExMg0KWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogNjQgcGFnZXMgdXNlZCBmb3Ig
bWVtbWFwDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6b25lOiAyMSBwYWdlcyByZXNlcnZlZA0K
WyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogMzk5MiBwYWdlcywgTElGTyBiYXRjaDowDQpb
ICAgIDAuMDAwMDAwXSAgIERNQTMyIHpvbmU6IDYwODAgcGFnZXMgdXNlZCBmb3IgbWVtbWFw
DQpbICAgIDAuMDAwMDAwXSAgIERNQTMyIHpvbmU6IDM4OTEyMCBwYWdlcywgTElGTyBiYXRj
aDozMQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAxXSBsYXBpY19pZFsweDAwXSBl
bmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFw
aWNfaWRbMHgwMV0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDNdIGxhcGljX2lkWzB4MDJdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBpY19pZFsweDAzXSBlbmFibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5h
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDZdIGxhcGlj
X2lkWzB4MDVdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJT0FQSUMgKGlkWzB4
MDZdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQpbICAgIDAuMDAwMDAwXSBJ
T0FQSUNbMF06IGFwaWNfaWQgNiwgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzAwMDAwLCBH
U0kgMC0yMw0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA3XSBhZGRyZXNz
WzB4ZmVjMjAwMDBdIGdzaV9iYXNlWzI0XSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1sxXTog
YXBpY19pZCA3LCB2ZXJzaW9uIDMzLCBhZGRyZXNzIDB4ZmVjMjAwMDAsIEdTSSAyNC01NQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9i
YWxfaXJxIDIgZGZsIGRmbCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IElSUTAgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIEFDUEk6IElS
UTIgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIEFDUEk6IElSUTkgdXNlZCBi
eSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIFVzaW5nIEFDUEkgKE1BRFQpIGZvciBTTVAg
Y29uZmlndXJhdGlvbiBpbmZvcm1hdGlvbg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCBp
ZDogMHg4MzAwIGJhc2U6IDB4ZmVkMDAwMDANClsgICAgMC4wMDAwMDBdIHNtcGJvb3Q6IEFs
bG93aW5nIDYgQ1BVcywgMCBob3RwbHVnIENQVXMNClsgICAgMC4wMDAwMDBdIG5yX2lycXNf
Z3NpOiA3Mg0KWyAgICAwLjAwMDAwMF0gZTgyMDogW21lbSAweGEwMDAwMDAwLTB4ZmVkZmZm
ZmZdIGF2YWlsYWJsZSBmb3IgUENJIGRldmljZXMNClsgICAgMC4wMDAwMDBdIEJvb3Rpbmcg
cGFyYXZpcnR1YWxpemVkIGtlcm5lbCBvbiBYZW4NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJz
aW9uOiA0LjQtdW5zdGFibGUgKHByZXNlcnZlLUFEKQ0KWyAgICAwLjAwMDAwMF0gc2V0dXBf
cGVyY3B1OiBOUl9DUFVTOjggbnJfY3B1bWFza19iaXRzOjggbnJfY3B1X2lkczo2IG5yX25v
ZGVfaWRzOjENClsgICAgMC4wMDAwMDBdIFBFUkNQVTogRW1iZWRkZWQgMjggcGFnZXMvY3B1
IEBmZmZmODgwMDVmNjAwMDAwIHM4Mjc1MiByODE5MiBkMjM3NDQgdTI2MjE0NA0KWyAgICAw
LjAwMDAwMF0gcGNwdS1hbGxvYzogczgyNzUyIHI4MTkyIGQyMzc0NCB1MjYyMTQ0IGFsbG9j
PTEqMjA5NzE1Mg0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogWzBdIDAgMSAyIDMgNCA1
IC0gLSANClsgICAgOC4zMDc0NDddIEJ1aWx0IDEgem9uZWxpc3RzIGluIE5vZGUgb3JkZXIs
IG1vYmlsaXR5IGdyb3VwaW5nIG9uLiAgVG90YWwgcGFnZXM6IDM4Njk0Nw0KWyAgICA4LjMw
NzQ1MF0gUG9saWN5IHpvbmU6IERNQTMyDQpbICAgIDguMzA3NDU4XSBLZXJuZWwgY29tbWFu
ZCBsaW5lOiByb290PS9kZXYvbWFwcGVyL3NlcnZlZXJzdGVydGplLXJvb3Qgcm8gdmVyYm9z
ZSBlYXJseXByaW50az14ZW4gbWVtPTE1MzZNIGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAg
dmdhPTc5NCB2aWRlbz12ZXNhZmIgYWNwaV9lbmZvcmNlX3Jlc291cmNlcz1sYXggbWF4X2xv
b3A9MzAgbG9vcF9tYXhfcGFydD0xMCB4ZW4tcGNpYmFjay5oaWRlPSgwMzowNi4wKSgwNDow
MC4qKSgwNjowMS4qKSgwNzowMC4qKSgwODowMC4qKSgwYzowMC4qKSBkZWJ1ZyBsb2dsZXZl
bD0xMCBub21vZGVzZXQNClsgICAgOC4zMDc2ODNdIFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6
IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgOC4zNTQwNDBdIHNvZnR3YXJl
IElPIFRMQiBbbWVtIDB4NTljMDAwMDAtMHg1ZGMwMDAwMF0gKDY0TUIpIG1hcHBlZCBhdCBb
ZmZmZjg4MDA1OWMwMDAwMC1mZmZmODgwMDVkYmZmZmZmXQ0KWyAgICA4LjM2MTA1NV0gTWVt
b3J5OiAxNDMwMDQwSy8xNTcyNDQ4SyBhdmFpbGFibGUgKDEwOTAySyBrZXJuZWwgY29kZSwg
OTg2SyByd2RhdGEsIDQyNTZLIHJvZGF0YSwgMTA4MEsgaW5pdCwgOTU4OEsgYnNzLCAxNDI0
MDhLIHJlc2VydmVkKQ0KWyAgICA4LjM2MTMxOV0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9
MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9NiwgTm9kZXM9MQ0KWyAgICA4LjM2MTM2OV0gSGll
cmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAgOC4zNjEzNzFdIAlSQ1UgZHlu
dGljay1pZGxlIGdyYWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMgZW5hYmxlZC4NClsgICAg
OC4zNjEzNzNdIAlBZGRpdGlvbmFsIHBlci1DUFUgaW5mbyBwcmludGVkIHdpdGggc3RhbGxz
Lg0KWyAgICA4LjM2MTM3NV0gCVJDVSByZXN0cmljdGluZyBDUFVzIGZyb20gTlJfQ1BVUz04
IHRvIG5yX2NwdV9pZHM9Ni4NClsgICAgOC4zNjE0MDVdIE5SX0lSUVM6NDM1MiBucl9pcnFz
OjEyNzIgMTYNClsgICAgOC4zNjE1MDRdIHhlbjpldmVudHM6IFVzaW5nIEZJRk8tYmFzZWQg
QUJJDQpbICAgIDguMzYxNTExXSB4ZW46IHNjaSBvdmVycmlkZTogZ2xvYmFsX2lycT05IHRy
aWdnZXI9MCBwb2xhcml0eT0xDQpbICAgIDguMzYxNTE0XSB4ZW46IHJlZ2lzdGVyaW5nIGdz
aSA5IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgIDguMzYxNTQ5XSB4ZW46IC0tPiBw
aXJxPTkgLT4gaXJxPTkgKGdzaT05KQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTddIElP
QVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTkgLT4gMHg2MCAtPiBJUlEgOSBN
b2RlOjEgQWN0aXZlOjEpDQpbICAgIDguMzYxNjAwXSB4ZW46IGFjcGkgc2NpIDkNClsgICAg
OC4zNjE2MDZdIHhlbjogLS0+IHBpcnE9MSAtPiBpcnE9MSAoZ3NpPTEpDQpbICAgIDguMzYx
NjEyXSB4ZW46IC0tPiBwaXJxPTIgLT4gaXJxPTIgKGdzaT0yKQ0KWyAgICA4LjM2MTYxOF0g
eGVuOiAtLT4gcGlycT0zIC0+IGlycT0zIChnc2k9MykNClsgICAgOC4zNjE2MjRdIHhlbjog
LS0+IHBpcnE9NCAtPiBpcnE9NCAoZ3NpPTQpDQpbICAgIDguMzYxNjMwXSB4ZW46IC0tPiBw
aXJxPTUgLT4gaXJxPTUgKGdzaT01KQ0KWyAgICA4LjM2MTYzNV0geGVuOiAtLT4gcGlycT02
IC0+IGlycT02IChnc2k9NikNClsgICAgOC4zNjE2NDFdIHhlbjogLS0+IHBpcnE9NyAtPiBp
cnE9NyAoZ3NpPTcpDQpbICAgIDguMzYxNjUwXSB4ZW46IC0tPiBwaXJxPTggLT4gaXJxPTgg
KGdzaT04KQ0KWyAgICA4LjM2MTY1N10geGVuOiAtLT4gcGlycT0xMCAtPiBpcnE9MTAgKGdz
aT0xMCkNClsgICAgOC4zNjE2NjNdIHhlbjogLS0+IHBpcnE9MTEgLT4gaXJxPTExIChnc2k9
MTEpDQpbICAgIDguMzYxNjY5XSB4ZW46IC0tPiBwaXJxPTEyIC0+IGlycT0xMiAoZ3NpPTEy
KQ0KWyAgICA4LjM2MTY3NV0geGVuOiAtLT4gcGlycT0xMyAtPiBpcnE9MTMgKGdzaT0xMykN
ClsgICAgOC4zNjE2ODFdIHhlbjogLS0+IHBpcnE9MTQgLT4gaXJxPTE0IChnc2k9MTQpDQpb
ICAgIDguMzYxNjg3XSB4ZW46IC0tPiBwaXJxPTE1IC0+IGlycT0xNSAoZ3NpPTE1KQ0KWyAg
ICA4LjM2MTgwOV0gQ29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgyNQ0KWyAgICA4
LjM2MTgxNF0gY29uc29sZSBbdHR5MF0gZW5hYmxlZA0KWyAgICA4LjM2MTgxOV0gYm9vdGNv
bnNvbGUgW3hlbmJvb3QwXSBkaXNhYmxlZA0KWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgY3B1c2V0DQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBjcHUNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vi
c3lzIGNwdWFjY3QNClsgICAgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgKHJvb3RAc2VydmVlcnN0ZXJ0amUpIChnY2MgdmVyc2lvbiA0
LjcuMiAoRGViaWFuIDQuNy4yLTUpICkgIzEgU01QIFR1ZSBKYW4gNyAxMDowMjo1NSBDRVQg
MjAxNA0KWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiByb290PS9kZXYvbWFwcGVyL3Nl
cnZlZXJzdGVydGplLXJvb3Qgcm8gdmVyYm9zZSBlYXJseXByaW50az14ZW4gbWVtPTE1MzZN
IGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAgdmdhPTc5NCB2aWRlbz12ZXNhZmIgYWNwaV9l
bmZvcmNlX3Jlc291cmNlcz1sYXggbWF4X2xvb3A9MzAgbG9vcF9tYXhfcGFydD0xMCB4ZW4t
cGNpYmFjay5oaWRlPSgwMzowNi4wKSgwNDowMC4qKSgwNjowMS4qKSgwNzowMC4qKSgwODow
MC4qKSgwYzowMC4qKSBkZWJ1ZyBsb2dsZXZlbD0xMCBub21vZGVzZXQNClsgICAgMC4wMDAw
MDBdIEZyZWVpbmcgOTktMTAwIHBmbiByYW5nZTogMTAzIHBhZ2VzIGZyZWVkDQpbICAgIDAu
MDAwMDAwXSAxLTEgbWFwcGluZyBvbiA5OS0+MTAwDQpbICAgIDAuMDAwMDAwXSAxLTEgbWFw
cGluZyBvbiA5ZmY5MC0+MTAwMDAwDQpbICAgIDAuMDAwMDAwXSBSZWxlYXNlZCAxMDMgcGFn
ZXMgb2YgdW51c2VkIG1lbW9yeQ0KWyAgICAwLjAwMDAwMF0gU2V0IDM5MzQzMSBwYWdlKHMp
IHRvIDEtMSBtYXBwaW5nDQpbICAgIDAuMDAwMDAwXSBQb3B1bGF0aW5nIDYwMDAwLTYwMDY3
IHBmbiByYW5nZTogMTAzIHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBCSU9T
LXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0g
MHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5OGZmZl0gdXNhYmxlDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDk5ODAwLTB4MDAwMDAwMDAwMDBmZmZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAxMDAw
MDAtMHgwMDAwMDAwMDYwMDY2ZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjogW21l
bSAweDAwMDAwMDAwNjAwNjcwMDAtMHgwMDAwMDAwMDlmZjhmZmZmXSB1bnVzYWJsZQ0KWyAg
ICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDA5ZmY5MDAwMC0weDAwMDAwMDAwOWZm
OWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDA5
ZmY5ZTAwMC0weDAwMDAwMDAwOWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSBY
ZW46IFttZW0gMHgwMDAwMDAwMDlmZmUwMDAwLTB4MDAwMDAwMDA5ZmZmZmZmZl0gcmVzZXJ2
ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVlMDAwMDAtMHgwMDAw
MDAwMGZlZWZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAw
MDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAw
MDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMTAwMDAwMDAwLTB4MDAwMDAwMDU1ZmZmZmZmZl0g
dW51c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMGZkMDAwMDAwMDAt
MHgwMDAwMDBmZmZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gYm9vdGNvbnNv
bGUgW3hlbmJvb3QwXSBlbmFibGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiByZW1vdmUgW21l
bSAweDYwMDAwMDAwLTB4ZmZmZmZmZmZmZmZmZmZmZV0gdXNhYmxlDQpbICAgIDAuMDAwMDAw
XSBOWCAoRXhlY3V0ZSBEaXNhYmxlKSBwcm90ZWN0aW9uOiBhY3RpdmUNClsgICAgMC4wMDAw
MDBdIGU4MjA6IHVzZXItZGVmaW5lZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAw
MF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDAwMDk4ZmZmXSB1
c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDAwMDk5ODAwLTB4
MDAwMDAwMDAwMDBmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0g
MHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA1ZmZmZmZmZl0gdXNhYmxlDQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA2MDA2NzAwMC0weDAwMDAwMDAwOWZmOGZm
ZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA5ZmY5
MDAwMC0weDAwMDAwMDAwOWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gdXNl
cjogW21lbSAweDAwMDAwMDAwOWZmOWUwMDAtMHgwMDAwMDAwMDlmZmRmZmZmXSBBQ1BJIE5W
Uw0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwOWZmZTAwMDAtMHgwMDAw
MDAwMDlmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAw
MDAwMDAwZmVlMDAwMDAtMHgwMDAwMDAwMGZlZWZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAw
MDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwZmZlMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZm
XSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAxMDAwMDAw
MDAtMHgwMDAwMDAwNTVmZmZmZmZmXSB1bnVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNlcjog
W21lbSAweDAwMDAwMGZkMDAwMDAwMDAtMHgwMDAwMDBmZmZmZmZmZmZmXSByZXNlcnZlZA0K
WyAgICAwLjAwMDAwMF0gU01CSU9TIDIuNSBwcmVzZW50Lg0KWyAgICAwLjAwMDAwMF0gRE1J
OiBNU0kgTVMtNzY0MC84OTBGWEEtR0Q3MCAoTVMtNzY0MCkgICwgQklPUyBWMS44QjEgMDkv
MTMvMjAxMA0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXBkYXRlIFttZW0gMHgwMDAwMDAwMC0w
eDAwMDAwZmZmXSB1c2FibGUgPT0+IHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBy
ZW1vdmUgW21lbSAweDAwMGEwMDAwLTB4MDAwZmZmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAw
MF0gTm8gQUdQIGJyaWRnZSBmb3VuZA0KWyAgICAwLjAwMDAwMF0gZTgyMDogbGFzdF9wZm4g
PSAweDYwMDAwIG1heF9hcmNoX3BmbiA9IDB4NDAwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBT
Y2FubmluZyAxIGFyZWFzIGZvciBsb3cgbWVtb3J5IGNvcnJ1cHRpb24NClsgICAgMC4wMDAw
MDBdIEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUgYXQgW2ZmZmY4ODAwMDAwOTMwMDBdIDkzMDAw
IHNpemUgMjQ1NzYNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0g
MHgwMDAwMDAwMC0weDAwMGZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAw
MC0weDAwMGZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBw
aW5nOiBbbWVtIDB4NWZlMDAwMDAtMHg1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICBbbWVt
IDB4NWZlMDAwMDAtMHg1ZmZmZmZmZl0gcGFnZSA0aw0KWyAgICAwLjAwMDAwMF0gQlJLIFsw
eDAyZDgyMDAwLCAweDAyZDgyZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4
MDJkODMwMDAsIDB4MDJkODNmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIGluaXRfbWVt
b3J5X21hcHBpbmc6IFttZW0gMHg1YzAwMDAwMC0weDVmZGZmZmZmXQ0KWyAgICAwLjAwMDAw
MF0gIFttZW0gMHg1YzAwMDAwMC0weDVmZGZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAw
XSBCUksgWzB4MDJkODQwMDAsIDB4MDJkODRmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBd
IEJSSyBbMHgwMmQ4NTAwMCwgMHgwMmQ4NWZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0g
QlJLIFsweDAyZDg2MDAwLCAweDAyZDg2ZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBC
UksgWzB4MDJkODcwMDAsIDB4MDJkODdmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIGlu
aXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgwMDEwMDAwMC0weDViZmZmZmZmXQ0KWyAgICAw
LjAwMDAwMF0gIFttZW0gMHgwMDEwMDAwMC0weDViZmZmZmZmXSBwYWdlIDRrDQpbICAgIDAu
MDAwMDAwXSBSQU1ESVNLOiBbbWVtIDB4MDMxOGYwMDAtMHgwM2ZhN2ZmZl0NClsgICAgMC4w
MDAwMDBdIEFDUEk6IFJTRFAgMDAwMDAwMDAwMDBmYjEwMCAwMDAwMTQgKHYwMCBBQ1BJQU0p
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RUIDAwMDAwMDAwOWZmOTAwMDAgMDAwMDQ4ICh2
MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwOWZmOTAyMDAgMDAwMDg0ICh2MDEgNzY0ME1TIEE3
NjQwMTAwIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBE
U0RUIDAwMDAwMDAwOWZmOTA1ZTAgMDA5NDI3ICh2MDEgIEE3NjQwIEE3NjQwMTAwIDAwMDAw
MTAwIElOVEwgMjAwNTExMTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNTIDAwMDAwMDAw
OWZmOWUwMDAgMDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIDAwMDAwMDAwOWZm
OTAzOTAgMDAwMDg4ICh2MDEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBNQ0ZHIDAwMDAwMDAwOWZmOTA0MjAgMDAwMDND
ICh2MDEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBTTElDIDAwMDAwMDAwOWZmOTA0NjAgMDAwMTc2ICh2MDEgTVNJICAg
IE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBPRU1CIDAwMDAwMDAwOWZmOWUwNDAgMDAwMDcyICh2MDEgNzY0ME1TIEE3NjQwMTAwIDIw
MTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTUkFUIDAwMDAw
MDAwOWZmOWE1ZTAgMDAwMTA4ICh2MDMgQU1EICAgIEZBTV9GXzEwIDAwMDAwMDAyIEFNRCAg
MDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIDAwMDAwMDAwOWZmOWE2ZjAg
MDAwMDM4ICh2MDEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBJVlJTIDAwMDAwMDAwOWZmOWE3MzAgMDAwMTA4ICh2MDEg
IEFNRCAgICAgUkQ4OTBTIDAwMjAyMDMxIEFNRCAgMDAwMDAwMDApDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBTU0RUIDAwMDAwMDAwOWZmOWE4NDAgMDAwREE0ICh2MDEgQSBNIEkgIFBPV0VS
Tk9XIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2Nh
bCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gTlVNQSB0dXJuZWQg
b2ZmDQpbICAgIDAuMDAwMDAwXSBGYWtpbmcgYSBub2RlIGF0IFttZW0gMHgwMDAwMDAwMDAw
MDAwMDAwLTB4MDAwMDAwMDA1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdIEluaXRtZW0gc2V0
dXAgbm9kZSAwIFttZW0gMHgwMDAwMDAwMC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0g
ICBOT0RFX0RBVEEgW21lbSAweDVmZDFjMDAwLTB4NWZkMjZmZmZdDQpbICAgIDAuMDAwMDAw
XSBab25lIHJhbmdlczoNClsgICAgMC4wMDAwMDBdICAgRE1BICAgICAgW21lbSAweDAwMDAx
MDAwLTB4MDBmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgIERNQTMyICAgIFttZW0gMHgwMTAw
MDAwMC0weGZmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBOb3JtYWwgICBlbXB0eQ0KWyAg
ICAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5vZGUNClsgICAgMC4w
MDAwMDBdIEVhcmx5IG1lbW9yeSBub2RlIHJhbmdlcw0KWyAgICAwLjAwMDAwMF0gICBub2Rl
ICAgMDogW21lbSAweDAwMDAxMDAwLTB4MDAwOThmZmZdDQpbICAgIDAuMDAwMDAwXSAgIG5v
ZGUgICAwOiBbbWVtIDB4MDAxMDAwMDAtMHg1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdIE9u
IG5vZGUgMCB0b3RhbHBhZ2VzOiAzOTMxMTINClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6
IDY0IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTog
MjEgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5OTIgcGFn
ZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA2MDgwIHBh
Z2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAzODkx
MjAgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVy
IElPIFBvcnQ6IDB4ODA4WyAgIDEwLjY5NTg2N10gcGNpYmFjayAwMDAwOjA0OjAwLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4M2MgKHdhcyAweDEwMCwgd3JpdGlu
ZyAweDEwYSkNClsgICAxMC42OTYxMDhdIHBjaWJhY2sgMDAwMDowNDowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEwICh3YXMgMHg0LCB3cml0aW5nIDB4Zjk3
ZmUwMDQpDQpbICAgMTAuNjk2MzM2XSBwY2liYWNrIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5n
IGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhjICh3YXMgMHgwLCB3cml0aW5nIDB4MTApDQpb
ICAgMTAuNjk2NTUwXSBwY2liYWNrIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBz
cGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAxMDIpDQpb
ICAgMTAuNjk2OTAyXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzOSB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjY5NzAzOF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozOQ0KWyAg
IDEwLjcyMjU3Nl0gcGNpYmFjayAwMDAwOjA2OjAxLjI6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzIyOTYxXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzOCB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjcyMzEwMl0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozOA0KWyAg
IDEwLjc0OTI0MV0gcGNpYmFjayAwMDAwOjA2OjAxLjE6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzQ5NjM1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzNyB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjc0OTc3Ml0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozNw0KWyAg
IDEwLjc3NTkxN10gcGNpYmFjayAwMDAwOjA2OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzc2MzA5XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzMyB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjc3NjQ2MF0geGVuOiAtLT4gcGlycT0zMyAtPiBpcnE9MzMgKGdz
aT0zMykNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjE5XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy05IC0+IDB4YTkgLT4gSVJRIDMzIE1vZGU6MSBBY3RpdmU6MSkN
ClsgICAxMC44MDI2MzFdIHBjaWJhY2sgMDAwMDowNzowMC4wOiBlbmFibGluZyBkZXZpY2Ug
KDAwMDAgLT4gMDAwMykNClsgICAxMC44MDI4MTBdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDMy
IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTAuODAyOTU3XSB4ZW46IC0tPiBwaXJx
PTMyIC0+IGlycT0zMiAoZ3NpPTMyKQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTldIElP
QVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTggLT4gMHhiMSAtPiBJUlEgMzIg
TW9kZToxIEFjdGl2ZToxKQ0KWyAgIDEwLjgyOTIzOV0geGVuOiByZWdpc3RlcmluZyBnc2kg
NDcgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMC44MjkzOTVdIHhlbjogLS0+IHBp
cnE9NDcgLT4gaXJxPTQ3IChnc2k9NDcpDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxOV0g
SU9BUElDWzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctMjMgLT4gMHhiOSAtPiBJUlEg
NDcgTW9kZToxIEFjdGl2ZToxKQ0KWyAgIDExLjgzOTE5MF0gcGNpYmFjayAwMDAwOjA4OjAw
LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4M2MgKHdhcyAweDEwMCwg
d3JpdGluZyAweDEwYSkNClsgICAxMS44Mzk0MzddIHBjaWJhY2sgMDAwMDowODowMC4wOiBy
ZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEwICh3YXMgMHg0LCB3cml0aW5n
IDB4ZjlhMDAwMDQpDQpbICAgMTEuODM5NjY3XSBwY2liYWNrIDAwMDA6MDg6MDAuMDogcmVz
dG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhjICh3YXMgMHgwLCB3cml0aW5nIDB4
MTApDQpbICAgMTEuODQ2Njc3XSBwY2liYWNrIDAwMDA6MDg6MDAuMDogcmVzdG9yaW5nIGNv
bmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAx
MDYpDQpbICAgMTEuODUzNzg1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAyOSB0cmlnZ2VyaW5n
IDAgcG9sYXJpdHkgMQ0KWyAgIDExLjg2MDc4Ml0geGVuOiAtLT4gcGlycT0yOSAtPiBpcnE9
MjkgKGdzaT0yOSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjIwXSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy01IC0+IDB4YzEgLT4gSVJRIDI5IE1vZGU6MSBBY3Rp
dmU6MSkNClsgICAxMS44OTI2MjldIHBjaWJhY2sgMDAwMDowYzowMC4wOiBlbmFibGluZyBk
ZXZpY2UgKDAwMDAgLT4gMDAwMykNClsgICAxMS44OTk2NzBdIHhlbjogcmVnaXN0ZXJpbmcg
Z3NpIDI4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTEuOTA2NjkzXSB4ZW46IC0t
PiBwaXJxPTI4IC0+IGlycT0yOCAoZ3NpPTI4KQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MjBdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTQgLT4gMHhjOSAtPiBJ
UlEgMjggTW9kZToxIEFjdGl2ZToxKQ0KWyAgIDExLjkzOTQwOF0geGVuX3BjaWJhY2s6IGJh
Y2tlbmQgaXMgdnBjaQ0KWyAgIDExLjk0NzM2OV0geGVuX2FjcGlfcHJvY2Vzc29yOiBVcGxv
YWRpbmcgWGVuIHByb2Nlc3NvciBQTSBpbmZvDQpbICAgMTEuOTU3MTM3XSBTZXJpYWw6IDgy
NTAvMTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkDQpbICAgMTEu
OTY3NDEwXSBocGV0X2FjcGlfYWRkOiBubyBhZGRyZXNzIG9yIGlycXMgaW4gX0NSUw0KWyAg
IDExLjk3NTA5Nl0gTGludXggYWdwZ2FydCBpbnRlcmZhY2UgdjAuMTAzDQpbICAgMTEuOTgz
MTM0XSBIYW5nY2hlY2s6IHN0YXJ0aW5nIGhhbmdjaGVjayB0aW1lciAwLjkuMSAodGljayBp
cyAxODAgc2Vjb25kcywgbWFyZ2luIGlzIDYwIHNlY29uZHMpLg0KWyAgIDExLjk5MDM2OV0g
SGFuZ2NoZWNrOiBVc2luZyBnZXRyYXdtb25vdG9uaWMoKS4NClsgICAxMS45OTc2MzNdIFtk
cm1dIEluaXRpYWxpemVkIGRybSAxLjEuMCAyMDA2MDgxMA0KWyAgIDEyLjAwNTAwN10gW2Ry
bV0gVkdBQ09OIGRpc2FibGUgcmFkZW9uIGtlcm5lbCBtb2Rlc2V0dGluZy4NClsgICAxMi4w
MTIxMDddIFtkcm06cmFkZW9uX2luaXRdICpFUlJPUiogTm8gVU1TIHN1cHBvcnQgaW4gcmFk
ZW9uIG1vZHVsZSENClsgICAxMi4wMjcyODNdIGJyZDogbW9kdWxlIGxvYWRlZA0KWyAgIDEy
LjA1MDM4NF0gbG9vcDogbW9kdWxlIGxvYWRlZA0KWyAgIDEyLjA1ODU5NV0gYWhjaSAwMDAw
OjAwOjExLjA6IHZlcnNpb24gMy4wDQpbICAgMTIuMDY1NzcwXSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSAxOSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjA3MjY5OF0geGVuOiAt
LT4gcGlycT0xOSAtPiBpcnE9MTkgKGdzaT0xOSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjIxXSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi0xOSAtPiAweGQxIC0+
IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTIuMDc5ODUxXSBhaGNpIDAwMDA6MDA6
MTEuMDogQUhDSSAwMDAxLjAyMDAgMzIgc2xvdHMgNiBwb3J0cyA2IEdicHMgMHgzZiBpbXBs
IFNBVEEgbW9kZQ0KWyAgIDEyLjA4NzAxNF0gYWhjaSAwMDAwOjAwOjExLjA6IGZsYWdzOiA2
NGJpdCBuY3Egc250ZiBpbGNrIHBtIGxlZCBjbG8gcG1wIHBpbyBzbHVtIHBhcnQgDQpbICAg
MTIuMDk3MzYzXSBzY3NpMCA6IGFoY2kNClsgICAxMi4xMDU0MTldIHNjc2kxIDogYWhjaQ0K
WyAgIDEyLjExMjY3MF0gc2NzaTIgOiBhaGNpDQpbICAgMTIuMTE5ODM5XSBzY3NpMyA6IGFo
Y2kNClsgICAxMi4xMjY5NjhdIHNjc2k0IDogYWhjaQ0KWyAgIDEyLjEzMzczOF0gc2NzaTUg
OiBhaGNpDQpbICAgMTIuMTQwMzA4XSBhdGExOiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0x
MDI0QDB4Zjk1ZmYwMDAgcG9ydCAweGY5NWZmMTAwIGlycSAxMjcNClsgICAxMi4xNDY3MDhd
IGF0YTI6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTEwMjRAMHhmOTVmZjAwMCBwb3J0IDB4
Zjk1ZmYxODAgaXJxIDEyNw0KWyAgIDEyLjE1MzAxOF0gYXRhMzogU0FUQSBtYXggVURNQS8x
MzMgYWJhciBtMTAyNEAweGY5NWZmMDAwIHBvcnQgMHhmOTVmZjIwMCBpcnEgMTI3DQpbICAg
MTIuMTU5MTkyXSBhdGE0OiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0xMDI0QDB4Zjk1ZmYw
MDAgcG9ydCAweGY5NWZmMjgwIGlycSAxMjcNClsgICAxMi4xNjU0MzNdIGF0YTU6IFNBVEEg
bWF4IFVETUEvMTMzIGFiYXIgbTEwMjRAMHhmOTVmZjAwMCBwb3J0IDB4Zjk1ZmYzMDAgaXJx
IDEyNw0KWyAgIDEyLjE3MTUwMF0gYXRhNjogU0FUQSBtYXggVURNQS8xMzMgYWJhciBtMTAy
NEAweGY5NWZmMDAwIHBvcnQgMHhmOTVmZjM4MCBpcnEgMTI3DQpbICAgMTIuMTc4NzA2XSB0
dW46IFVuaXZlcnNhbCBUVU4vVEFQIGRldmljZSBkcml2ZXIsIDEuNg0KWyAgIDEyLjE4NDY3
MF0gdHVuOiAoQykgMTk5OS0yMDA0IE1heCBLcmFzbnlhbnNreSA8bWF4a0BxdWFsY29tbS5j
b20+DQpbICAgMTIuMTkwODU4XSBlMTAwMDogSW50ZWwoUikgUFJPLzEwMDAgTmV0d29yayBE
cml2ZXIgLSB2ZXJzaW9uIDcuMy4yMS1rOC1OQVBJDQpbICAgMTIuMTk2ODc5XSBlMTAwMDog
Q29weXJpZ2h0IChjKSAxOTk5LTIwMDYgSW50ZWwgQ29ycG9yYXRpb24uDQpbICAgMTIuMjAz
MzAyXSBlMTAwMGU6IEludGVsKFIpIFBSTy8xMDAwIE5ldHdvcmsgRHJpdmVyIC0gMi4zLjIt
aw0KWyAgIDEyLjIwOTI5NF0gZTEwMDBlOiBDb3B5cmlnaHQoYykgMTk5OSAtIDIwMTMgSW50
ZWwgQ29ycG9yYXRpb24uDQpbICAgMTIuMjE1NTM1XSBpZ2I6IEludGVsKFIpIEdpZ2FiaXQg
RXRoZXJuZXQgTmV0d29yayBEcml2ZXIgLSB2ZXJzaW9uIDUuMC41LWsNClsgICAxMi4yMjE2
MzFdIGlnYjogQ29weXJpZ2h0IChjKSAyMDA3LTIwMTMgSW50ZWwgQ29ycG9yYXRpb24uDQpb
ICAgMTIuMjI3NzMwXSBpZ2J2ZjogSW50ZWwoUikgR2lnYWJpdCBWaXJ0dWFsIEZ1bmN0aW9u
IE5ldHdvcmsgRHJpdmVyIC0gdmVyc2lvbiAyLjAuMi1rDQpbICAgMTIuMjMzNzAwXSBpZ2J2
ZjogQ29weXJpZ2h0IChjKSAyMDA5IC0gMjAxMiBJbnRlbCBDb3Jwb3JhdGlvbi4NClsgICAx
Mi4yMzk4MjldIHI4MTY5IEdpZ2FiaXQgRXRoZXJuZXQgZHJpdmVyIDIuM0xLLU5BUEkgbG9h
ZGVkDQpbICAgMTIuMjQ1Nzk3XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSA0NiB0cmlnZ2VyaW5n
IDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjI1MTYwNl0geGVuOiAtLT4gcGlycT00NiAtPiBpcnE9
NDYgKGdzaT00NikNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjIxXSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMiAtPiAweDIyIC0+IElSUSA0NiBNb2RlOjEgQWN0
aXZlOjEpDQpbICAgMTIuMjU3MzIzXSByODE2OSAwMDAwOjBiOjAwLjA6IGVuYWJsaW5nIE1l
bS1Xci1JbnZhbA0KWyAgIDEyLjI2MzcxOV0gcjgxNjkgMDAwMDowYjowMC4wIGV0aDA6IFJU
TDgxNjhkLzgxMTFkIGF0IDB4ZmZmZmM5MDAwMDMzYzAwMCwgNDA6NjE6ODY6ZjQ6Njc6ZDks
IFhJRCAwODEwMDBjMCBJUlEgMTI4DQpbICAgMTIuMjY5NTc3XSByODE2OSAwMDAwOjBiOjAw
LjAgZXRoMDoganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIwMCBieXRlcywgdHggY2hlY2tz
dW1taW5nOiBrb10NClsgICAxMi4yNzU0NTRdIHI4MTY5IEdpZ2FiaXQgRXRoZXJuZXQgZHJp
dmVyIDIuM0xLLU5BUEkgbG9hZGVkDQpbICAgMTIuMjgxNDA0XSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSA1MSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjI4NzIxN10geGVuOiAt
LT4gcGlycT01MSAtPiBpcnE9NTEgKGdzaT01MSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjIxXSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweDMyIC0+
IElSUSA1MSBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTIuMjkyOTk1XSByODE2OSAwMDAwOjBh
OjAwLjA6IGVuYWJsaW5nIE1lbS1Xci1JbnZhbA0KWyAgIDEyLjI5OTgwMl0gcjgxNjkgMDAw
MDowYTowMC4wIGV0aDE6IFJUTDgxNjhkLzgxMTFkIGF0IDB4ZmZmZmM5MDAwMDMzZTAwMCwg
NDA6NjE6ODY6ZjQ6Njc6ZDgsIFhJRCAwODEwMDBjMCBJUlEgMTI5DQpbICAgMTIuMzA1ODIw
XSByODE2OSAwMDAwOjBhOjAwLjAgZXRoMToganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIw
MCBieXRlcywgdHggY2hlY2tzdW1taW5nOiBrb10NClsgICAxMi4zMTE5NzRdIHhlbl9uZXRm
cm9udDogSW5pdGlhbGlzaW5nIFhlbiB2aXJ0dWFsIGV0aGVybmV0IGRyaXZlcg0KWyAgIDEy
LjMxODY0NF0gZWhjaV9oY2Q6IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIg
KEVIQ0kpIERyaXZlcg0KWyAgIDEyLjMyNDgwNl0gZWhjaS1wY2k6IEVIQ0kgUENJIHBsYXRm
b3JtIGRyaXZlcg0KWyAgIDEyLjMzMDkzN10geGVuOiByZWdpc3RlcmluZyBnc2kgMTcgdHJp
Z2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMi4zMzY5MTldIEFscmVhZHkgc2V0dXAgdGhl
IEdTSSA6MTcNClsgICAxMi4zNDI5NzVdIFFVSVJLOiBFbmFibGUgQU1EIFBMTCBmaXgNClsg
ICAxMi4zNDg5MDhdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZW5hYmxpbmcgYnVzIG1hc3Rl
cmluZw0KWyAgIDEyLjM1NDg5OF0gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0KWyAgIDEyLjM2MTM2MF0gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBuZXcg
VVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDENClsgICAxMi4zNjcz
MDddIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogYXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1
ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kDQpbICAgMTIuMzczMzgwXSBlaGNp
LXBjaSAwMDAwOjAwOjEyLjI6IGRlYnVnIHBvcnQgMQ0KWyAgIDEyLjM3OTQyMl0gZWhjaS1w
Y2kgMDAwMDowMDoxMi4yOiBlbmFibGluZyBNZW0tV3ItSW52YWwNClsgICAxMi4zODU0Nzdd
IGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBpbyBtZW0gMHhmOTVmZjQwMA0KWyAg
IDEyLjM5NjAwNV0gcmFuZG9tOiBub25ibG9ja2luZyBwb29sIGlzIGluaXRpYWxpemVkDQpb
ICAgMTIuMzk5MTE1XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IFVTQiAyLjAgc3RhcnRlZCwg
RUhDSSAxLjAwDQpbICAgMTIuMzk5MzU0XSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyDQpbICAgMTIuMzk5MzU2XSB1c2Ig
dXNiMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTENClsgICAxMi4zOTkzNTddIHVzYiB1c2IxOiBQcm9kdWN0OiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0KWyAgIDEyLjM5OTM1OF0gdXNiIHVzYjE6IE1hbnVmYWN0dXJlcjogTGlu
dXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgZWhjaV9oY2QNClsgICAxMi4zOTkz
NTldIHVzYiB1c2IxOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTIuMg0KWyAgIDEyLjQwMDgy
NV0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91bmQNClsgICAxMi40MDA4NzhdIGh1YiAxLTA6
MS4wOiA1IHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNDAxMzg5XSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjQwMTM5MV0gQWxyZWFk
eSBzZXR1cCB0aGUgR1NJIDoxNw0KWyAgIDEyLjQwMTQwOF0gZWhjaS1wY2kgMDAwMDowMDox
My4yOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAgMTIuNDAxNDMyXSBlaGNpLXBjaSAw
MDAwOjAwOjEzLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNDAxOTAwXSBlaGNp
LXBjaSAwMDAwOjAwOjEzLjI6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1
cyBudW1iZXIgMg0KWyAgIDEyLjQwMTkyMF0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBhcHBs
eWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBFSENJIGR1bW15IHFoIHdvcmthcm91
bmQNClsgICAxMi40MDE5NzFdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogZGVidWcgcG9ydCAx
DQpbICAgMTIuNDAyMTM1XSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IGVuYWJsaW5nIE1lbS1X
ci1JbnZhbA0KWyAgIDEyLjQwMjE0Nl0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBpcnEgMTcs
IGlvIG1lbSAweGY5NWZmODAwDQpbICAgMTIuNDA5MTIwXSBlaGNpLXBjaSAwMDAwOjAwOjEz
LjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwDQpbICAgMTIuNDA5MTkzXSB1c2IgdXNi
MjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAy
DQpbICAgMTIuNDA5MTk0XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi40MDkxOTVdIHVzYiB1c2Iy
OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcg0KWyAgIDEyLjQwOTE5Nl0gdXNiIHVz
YjI6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsg
ZWhjaV9oY2QNClsgICAxMi40MDkxOTddIHVzYiB1c2IyOiBTZXJpYWxOdW1iZXI6IDAwMDA6
MDA6MTMuMg0KWyAgIDEyLjQwOTkxN10gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQNClsg
ICAxMi40MDk5MzBdIGh1YiAyLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNDEw
Mjk1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0K
WyAgIDEyLjQxMDI5N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNw0KWyAgIDEyLjQxMDMx
NV0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAg
MTIuNDEwMzM5XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IEVIQ0kgSG9zdCBDb250cm9sbGVy
DQpbICAgMTIuNDEwODIxXSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IG5ldyBVU0IgYnVzIHJl
Z2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgMw0KWyAgIDEyLjQxMDg0MF0gZWhjaS1w
Y2kgMDAwMDowMDoxNi4yOiBhcHBseWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBF
SENJIGR1bW15IHFoIHdvcmthcm91bmQNClsgICAxMi40MTA4OTBdIGVoY2ktcGNpIDAwMDA6
MDA6MTYuMjogZGVidWcgcG9ydCAxDQpbICAgMTIuNDExMDM0XSBlaGNpLXBjaSAwMDAwOjAw
OjE2LjI6IGVuYWJsaW5nIE1lbS1Xci1JbnZhbA0KWyAgIDEyLjQxMTA0NV0gZWhjaS1wY2kg
MDAwMDowMDoxNi4yOiBpcnEgMTcsIGlvIG1lbSAweGY5NWZmYzAwDQpbICAgMTIuNDE5MjE0
XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwDQpb
ICAgMTIuNDE5MzUzXSB1c2IgdXNiMzogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9y
PTFkNmIsIGlkUHJvZHVjdD0wMDAyDQpbICAgMTIuNDE5MzU0XSB1c2IgdXNiMzogTmV3IFVT
QiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsg
ICAxMi40MTkzNTVdIHVzYiB1c2IzOiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcg0K
WyAgIDEyLjQxOTM1Nl0gdXNiIHVzYjM6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJj
Ny0yMDE0MDEwNy14ZW5kZXZlbCsgZWhjaV9oY2QNClsgICAxMi40MTkzNTddIHVzYiB1c2Iz
OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTYuMg0KWyAgIDEyLjQyMDIzMl0gaHViIDMtMDox
LjA6IFVTQiBodWIgZm91bmQNClsgICAxMi40MjAyNDRdIGh1YiAzLTA6MS4wOiA0IHBvcnRz
IGRldGVjdGVkDQpbICAgMTIuNDIwNzE3XSBvaGNpX2hjZDogVVNCIDEuMSAnT3BlbicgSG9z
dCBDb250cm9sbGVyIChPSENJKSBEcml2ZXINClsgICAxMi40MjA3MThdIG9oY2ktcGNpOiBP
SENJIFBDSSBwbGF0Zm9ybSBkcml2ZXINClsgICAxMi40MjA5MDhdIHhlbjogcmVnaXN0ZXJp
bmcgZ3NpIDE4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTIuNDIwOTEwXSBBbHJl
YWR5IHNldHVwIHRoZSBHU0kgOjE4DQpbICAgMTIuNDIwOTMxXSBvaGNpLXBjaSAwMDAwOjAw
OjEyLjA6IGVuYWJsaW5nIGJ1cyBtYXN0ZXJpbmcNClsgICAxMi40MjA5NDVdIG9oY2ktcGNp
IDAwMDA6MDA6MTIuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyDQpbICAgMTIuNDIxNDU2
XSBvaGNpLXBjaSAwMDAwOjAwOjEyLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2ln
bmVkIGJ1cyBudW1iZXIgNA0KWyAgIDEyLjQyMTY0N10gb2hjaS1wY2kgMDAwMDowMDoxMi4w
OiBpcnEgMTgsIGlvIG1lbSAweGY5NWY3MDAwDQpbICAgMTIuNDc2NzI0XSB1c2IgdXNiNDog
TmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQpb
ICAgMTIuNDc2NzI1XSB1c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMs
IFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi40NzY3MjZdIHVzYiB1c2I0OiBQ
cm9kdWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXINClsgICAxMi40NzY3MjddIHVzYiB1
c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwr
IG9oY2lfaGNkDQpbICAgMTIuNDc2NzI4XSB1c2IgdXNiNDogU2VyaWFsTnVtYmVyOiAwMDAw
OjAwOjEyLjANClsgICAxMi40Nzc2NjldIGh1YiA0LTA6MS4wOiBVU0IgaHViIGZvdW5kDQpb
ICAgMTIuNDc3Njg3XSBodWIgNC0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZA0KWyAgIDEyLjQ3
ODAzMV0geGVuOiByZWdpc3RlcmluZyBnc2kgMTggdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEN
ClsgICAxMi40NzgwMzNdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgNClsgICAxMi40Nzgw
NTFdIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogZW5hYmxpbmcgYnVzIG1hc3RlcmluZw0KWyAg
IDEyLjQ3ODA1N10gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBPSENJIFBDSSBob3N0IGNvbnRy
b2xsZXINClsgICAxMi40Nzg1MjldIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogbmV3IFVTQiBi
dXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA1DQpbICAgMTIuNDc4NjY3XSBv
aGNpLXBjaSAwMDAwOjAwOjEzLjA6IGlycSAxOCwgaW8gbWVtIDB4Zjk1ZmMwMDANClsgICAx
Mi40OTU3OTddIGF0YTI6IFNBVEEgbGluayBkb3duIChTU3RhdHVzIDAgU0NvbnRyb2wgMzAw
KQ0KWyAgIDEyLjQ5NTg2M10gYXRhNDogU0FUQSBsaW5rIGRvd24gKFNTdGF0dXMgMCBTQ29u
dHJvbCAzMDApDQpbICAgMTIuNTMzMzgyXSB1c2IgdXNiNTogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQpbICAgMTIuNTMzMzgzXSB1c2Ig
dXNiNTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTENClsgICAxMi41MzMzODRdIHVzYiB1c2I1OiBQcm9kdWN0OiBPSENJIFBDSSBo
b3N0IGNvbnRyb2xsZXINClsgICAxMi41MzMzODVdIHVzYiB1c2I1OiBNYW51ZmFjdHVyZXI6
IExpbnV4IDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwrIG9oY2lfaGNkDQpbICAgMTIu
NTMzMzg2XSB1c2IgdXNiNTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEzLjANClsgICAxMi41
MzQzMjFdIGh1YiA1LTA6MS4wOiBVU0IgaHViIGZvdW5kDQpbICAgMTIuNTM0MzM4XSBodWIg
NS0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZA0KWyAgIDEyLjUzNDcyMF0geGVuOiByZWdpc3Rl
cmluZyBnc2kgMTggdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMi41MzQ3MjJdIEFs
cmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgNClsgICAxMi41MzQ3NDNdIG9oY2ktcGNpIDAwMDA6
MDA6MTQuNTogZW5hYmxpbmcgYnVzIG1hc3RlcmluZw0KWyAgIDEyLjUzNDc0OV0gb2hjaS1w
Y2kgMDAwMDowMDoxNC41OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXINClsgICAxMi41MzUy
MzFdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciA2DQpbICAgMTIuNTM1MzY5XSBvaGNpLXBjaSAwMDAwOjAwOjE0
LjU6IGlycSAxOCwgaW8gbWVtIDB4Zjk1ZmQwMDANClsgICAxMi41OTAwMDZdIHVzYiB1c2I2
OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEN
ClsgICAxMi41OTAwMDddIHVzYiB1c2I2OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9
MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQ0KWyAgIDEyLjU5MDAwOF0gdXNiIHVzYjY6
IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcg0KWyAgIDEyLjU5MDAwOV0gdXNi
IHVzYjY6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZl
bCsgb2hjaV9oY2QNClsgICAxMi41OTAwMTBdIHVzYiB1c2I2OiBTZXJpYWxOdW1iZXI6IDAw
MDA6MDA6MTQuNQ0KWyAgIDEyLjU5MDk4Ml0gaHViIDYtMDoxLjA6IFVTQiBodWIgZm91bmQN
ClsgICAxMi41OTEwMDBdIGh1YiA2LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkDQpbICAgMTIu
NTkxMjk3XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxOCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkg
MQ0KWyAgIDEyLjU5MTI5OV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxOA0KWyAgIDEyLjU5
MTMxNF0gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpb
ICAgMTIuNTkxMzIwXSBvaGNpLXBjaSAwMDAwOjAwOjE2LjA6IE9IQ0kgUENJIGhvc3QgY29u
dHJvbGxlcg0KWyAgIDEyLjU5MTY4M10gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBuZXcgVVNC
IGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDcNClsgICAxMi41OTE3MjVd
IG9oY2ktcGNpIDAwMDA6MDA6MTYuMDogaXJxIDE4LCBpbyBtZW0gMHhmOTVmZTAwMA0KWyAg
IDEyLjY0NjU2NF0gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0x
ZDZiLCBpZFByb2R1Y3Q9MDAwMQ0KWyAgIDEyLjY0NjU2Nl0gdXNiIHVzYjc6IE5ldyBVU0Ig
ZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xDQpbICAg
MTIuNjQ2NTY3XSB1c2IgdXNiNzogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVy
DQpbICAgMTIuNjQ2NTY4XSB1c2IgdXNiNzogTWFudWZhY3R1cmVyOiBMaW51eCAzLjEzLjAt
cmM3LTIwMTQwMTA3LXhlbmRldmVsKyBvaGNpX2hjZA0KWyAgIDEyLjY0NjU2OV0gdXNiIHVz
Yjc6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxNi4wDQpbICAgMTIuNjQ3Mzk5XSBodWIgNy0w
OjEuMDogVVNCIGh1YiBmb3VuZA0KWyAgIDEyLjY0NzQxNV0gaHViIDctMDoxLjA6IDQgcG9y
dHMgZGV0ZWN0ZWQNClsgICAxMi42NDc4ODFdIHVoY2lfaGNkOiBVU0IgVW5pdmVyc2FsIEhv
c3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJpdmVyDQpbICAgMTIuNjQ4MTE2XSB4ZW46IHJl
Z2lzdGVyaW5nIGdzaSA0OCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjY0ODEx
OF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDo0OA0KWyAgIDEyLjY0ODEzNl0geGhjaV9oY2Qg
MDAwMDowOTowMC4wOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAgMTIuNjQ4MTQxXSB4
aGNpX2hjZCAwMDAwOjA5OjAwLjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNjQ4
NTM1XSB4aGNpX2hjZCAwMDAwOjA5OjAwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFz
c2lnbmVkIGJ1cyBudW1iZXIgOA0KWyAgIDEyLjY0ODcxNl0geGhjaV9oY2QgMDAwMDowOTow
MC4wOiBlbmFibGluZyBNZW0tV3ItSW52YWwNClsgICAxMi42NDk0OTldIHVzYiB1c2I4OiBO
ZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDINClsg
ICAxMi42NDk1MDBdIHVzYiB1c2I4OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9Mywg
UHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQ0KWyAgIDEyLjY0OTUwMV0gdXNiIHVzYjg6IFBy
b2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNjQ5NTAzXSB1c2IgdXNiODog
TWFudWZhY3R1cmVyOiBMaW51eCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyB4aGNp
X2hjZA0KWyAgIDEyLjY0OTUwM10gdXNiIHVzYjg6IFNlcmlhbE51bWJlcjogMDAwMDowOTow
MC4wDQpbICAgMTIuNjUwNDc0XSBodWIgOC0wOjEuMDogVVNCIGh1YiBmb3VuZA0KWyAgIDEy
LjY1MDUyM10gaHViIDgtMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQNClsgICAxMi42NTA2NjBd
IHhoY2lfaGNkIDAwMDA6MDk6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXINClsgICAxMi42
NTEwODVdIHhoY2lfaGNkIDAwMDA6MDk6MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwg
YXNzaWduZWQgYnVzIG51bWJlciA5DQpbICAgMTIuNjUyMTI2XSB1c2IgdXNiOTogTmV3IFVT
QiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAzDQpbICAgMTIu
NjUyMTI3XSB1c2IgdXNiOTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1
Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi42NTIxMjhdIHVzYiB1c2I5OiBQcm9kdWN0
OiB4SENJIEhvc3QgQ29udHJvbGxlcg0KWyAgIDEyLjY1MjEyOV0gdXNiIHVzYjk6IE1hbnVm
YWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgeGhjaV9oY2QN
ClsgICAxMi42NTIxMzBdIHVzYiB1c2I5OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDk6MDAuMA0K
WyAgIDEyLjY1MzA2Ml0gaHViIDktMDoxLjA6IFVTQiBodWIgZm91bmQNClsgICAxMi42NTMw
OTldIGh1YiA5LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNjYyNTU2XSBhdGEz
OiBTQVRBIGxpbmsgdXAgNi4wIEdicHMgKFNTdGF0dXMgMTMzIFNDb250cm9sIDMwMCkNClsg
ICAxMi42NjI2MzVdIGF0YTE6IFNBVEEgbGluayB1cCAzLjAgR2JwcyAoU1N0YXR1cyAxMjMg
U0NvbnRyb2wgMzAwKQ0KWyAgIDEyLjY2MzMzOV0gYXRhMy4wMDogQVRBLTg6IFNUMjAwMERM
MDAzLTlWVDE2NiwgQ0M0NSwgbWF4IFVETUEvMTMzDQpbICAgMTIuNjYzMzQxXSBhdGEzLjAw
OiAzOTA3MDI5MTY4IHNlY3RvcnMsIG11bHRpIDA6IExCQTQ4IE5DUSAoZGVwdGggMzEvMzIp
DQpbICAgMTIuNjY0MDY1XSBhdGEzLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMw0KWyAg
IDEyLjY2NDExOV0gYXRhMS4wMDogQVRBLTg6IEhpdGFjaGkgSERTNzIyMDIwQUxBMzMwLCBK
S0FPQTIwTiwgbWF4IFVETUEvMTMzDQpbICAgMTIuNjY0MTIxXSBhdGExLjAwOiAzOTA3MDI5
MTY4IHNlY3RvcnMsIG11bHRpIDA6IExCQTQ4IE5DUSAoZGVwdGggMzEvMzIpLCBBQQ0KWyAg
IDEyLjY2NTY2N10gYXRhMS4wMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMNClsgICAxMi42
NjYxMThdIHNjc2kgMDowOjA6MDogRGlyZWN0LUFjY2VzcyAgICAgQVRBICAgICAgSGl0YWNo
aSBIRFM3MjIwMiBKS0FPIFBROiAwIEFOU0k6IDUNClsgICAxMi42Njc0MzVdIHNkIDA6MDow
OjA6IFtzZGFdIDM5MDcwMjkxNjggNTEyLWJ5dGUgbG9naWNhbCBibG9ja3M6ICgyLjAwIFRC
LzEuODEgVGlCKQ0KWyAgIDEyLjY2NzUxMF0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUgUHJv
dGVjdCBpcyBvZmYNClsgICAxMi42Njc1MTJdIHNkIDA6MDowOjA6IFtzZGFdIE1vZGUgU2Vu
c2U6IDAwIDNhIDAwIDAwDQpbICAgMTIuNjY3NTQxXSBzZCAwOjA6MDowOiBbc2RhXSBXcml0
ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBzdXBwb3J0
IERQTyBvciBGVUENClsgICAxMi42NjgxOTZdIHNkIDA6MDowOjA6IEF0dGFjaGVkIHNjc2kg
Z2VuZXJpYyBzZzAgdHlwZSAwDQpbICAgMTIuNjY5MzkwXSBzY3NpIDI6MDowOjA6IERpcmVj
dC1BY2Nlc3MgICAgIEFUQSAgICAgIFNUMjAwMERMMDAzLTlWVDEgQ0M0NSBQUTogMCBBTlNJ
OiA1DQpbICAgMTIuNjcwMTQ0XSBzZCAyOjA6MDowOiBbc2RiXSAzOTA3MDI5MTY4IDUxMi1i
eXRlIGxvZ2ljYWwgYmxvY2tzOiAoMi4wMCBUQi8xLjgxIFRpQikNClsgICAxMi42NzAxNDZd
IHNkIDI6MDowOjA6IFtzZGJdIDQwOTYtYnl0ZSBwaHlzaWNhbCBibG9ja3MNClsgICAxMi42
NzAzNTVdIHNkIDI6MDowOjA6IEF0dGFjaGVkIHNjc2kgZ2VuZXJpYyBzZzEgdHlwZSAwDQpb
ICAgMTIuNjcwNDA2XSBzZCAyOjA6MDowOiBbc2RiXSBXcml0ZSBQcm90ZWN0IGlzIG9mZg0K
WyAgIDEyLjY3MDQwOF0gc2QgMjowOjA6MDogW3NkYl0gTW9kZSBTZW5zZTogMDAgM2EgMDAg
MDANClsgICAxMi42NzA0OTVdIHNkIDI6MDowOjA6IFtzZGJdIFdyaXRlIGNhY2hlOiBlbmFi
bGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQ0K
WyAgIDEyLjY3NzA4MV0gIHNkYTogc2RhMSBzZGEyIHNkYTMNClsgICAxMi42NzkwMTJdIHNk
IDA6MDowOjA6IFtzZGFdIEF0dGFjaGVkIFNDU0kgZGlzaw0KWyAgIDEyLjY4ODkwMl0gIHNk
Yjogc2RiMQ0KWyAgIDEyLjY5MDE1MV0gc2QgMjowOjA6MDogW3NkYl0gQXR0YWNoZWQgU0NT
SSBkaXNrDQpbICAgMTMuMjI3MTcyXSBhdGE2OiBTQVRBIGxpbmsgZG93biAoU1N0YXR1cyAw
IFNDb250cm9sIDMwMCkNClsgICAxMy4yMzMyODhdIGF0YTU6IFNBVEEgbGluayBkb3duIChT
U3RhdHVzIDAgU0NvbnRyb2wgMzAwKQ0KWyAgIDEzLjI1Mjk1MF0gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JscA0KWyAgIDEzLjI1OTE1MV0gdXNiY29y
ZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Itc3RvcmFnZQ0KWyAgIDEz
LjI2NTY0OF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Jz
ZXJpYWwNClsgICAxMy4yNzE4MTBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgY3AyMTB4DQpbICAgMTMuMjc3OTQ0XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwg
c3VwcG9ydCByZWdpc3RlcmVkIGZvciBjcDIxMHgNClsgICAxMy4yNzkxODhdIHVzYiA0LTU6
IG5ldyBmdWxsLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgb2hjaS1wY2kNClsg
ICAxMy4yODk1MzBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIg
Y3lwcmVzc19tOA0KWyAgIDEzLjI5NTUyMV0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBv
cnQgcmVnaXN0ZXJlZCBmb3IgRGVMb3JtZSBFYXJ0aG1hdGUgVVNCDQpbICAgMTMuMzAxNTI5
XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwgc3VwcG9ydCByZWdpc3RlcmVkIGZvciBISUQtPkNP
TSBSUzIzMiBBZGFwdGVyDQpbICAgMTMuMzA3NDc3XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwg
c3VwcG9ydCByZWdpc3RlcmVkIGZvciBOb2tpYSBDQS00MiBWMiBBZGFwdGVyDQpbICAgMTMu
MzEzNDQzXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIG1vczc3
MjANClsgICAxMy4zMTkyOThdIHVzYnNlcmlhbDogVVNCIFNlcmlhbCBzdXBwb3J0IHJlZ2lz
dGVyZWQgZm9yIE1vc2NoaXAgMiBwb3J0IGFkYXB0ZXINClsgICAxMy4zMjUyMzJdIHVzYmNv
cmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgbW9zNzg0MA0KWyAgIDEzLjMz
MTIyNV0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBvcnQgcmVnaXN0ZXJlZCBmb3IgTW9z
Y2hpcCA3ODQwLzc4MjAgVVNCIFNlcmlhbCBEcml2ZXINClsgICAxMy4zMzc3ODVdIGk4MDQy
OiBQTlA6IE5vIFBTLzIgY29udHJvbGxlciBmb3VuZC4gUHJvYmluZyBwb3J0cyBkaXJlY3Rs
eS4NClsgICAxMy4zNDQ0OTFdIHNlcmlvOiBpODA0MiBLQkQgcG9ydCBhdCAweDYwLDB4NjQg
aXJxIDENClsgICAxMy4zNTA1MDBdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAweDYwLDB4
NjQgaXJxIDEyDQpbICAgMTMuMzU2OTczXSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2Ug
Y29tbW9uIGZvciBhbGwgbWljZQ0KWyAgIDEzLjM2NDIzMF0gcnRjX2Ntb3MgMDA6MDM6IFJU
QyBjYW4gd2FrZSBmcm9tIFM0DQpbICAgMTMuMzcwNDQzXSBydGNfY21vcyAwMDowMzogcnRj
IGNvcmU6IHJlZ2lzdGVyZWQgcnRjX2Ntb3MgYXMgcnRjMA0KWyAgIDEzLjM3NjI5NV0gcnRj
X2Ntb3MgMDA6MDM6IGFsYXJtcyB1cCB0byBvbmUgbW9udGgsIHkzaywgMTE0IGJ5dGVzIG52
cmFtDQpbICAgMTMuMzgzMDU3XSBBQ1BJIFdhcm5pbmc6IDB4MDAwMDAwMDAwMDAwMGIwMC0w
eDAwMDAwMDAwMDAwMDBiMDcgU3lzdGVtSU8gY29uZmxpY3RzIHdpdGggUmVnaW9uIFxTT1Ix
IDEgKDIwMTMxMTE1L3V0YWRkcmVzcy0yNTEpDQpbICAgMTMuMzg5MjI2XSBBQ1BJOiBUaGlz
IGNvbmZsaWN0IG1heSBjYXVzZSByYW5kb20gcHJvYmxlbXMgYW5kIHN5c3RlbSBpbnN0YWJp
bGl0eQ0KWyAgIDEzLjM5NTQwMV0gQUNQSTogSWYgYW4gQUNQSSBkcml2ZXIgaXMgYXZhaWxh
YmxlIGZvciB0aGlzIGRldmljZSwgeW91IHNob3VsZCB1c2UgaXQgaW5zdGVhZCBvZiB0aGUg
bmF0aXZlIGRyaXZlcg0KWyAgIDEzLjQwMTgwOV0gcGlpeDRfc21idXMgMDAwMDowMDoxNC4w
OiBTTUJ1cyBIb3N0IENvbnRyb2xsZXIgYXQgMHhiMDAsIHJldmlzaW9uIDANClsgICAxMy40
MDgzNzZdIEFDUEkgV2FybmluZzogMHgwMDAwMDAwMDAwMDAwYjIwLTB4MDAwMDAwMDAwMDAw
MGIyNyBTeXN0ZW1JTyBjb25mbGljdHMgd2l0aCBSZWdpb24gXFNPUjIgMSAoMjAxMzExMTUv
dXRhZGRyZXNzLTI1MSkNClsgICAxMy40MTQ5NjVdIEFDUEk6IFRoaXMgY29uZmxpY3QgbWF5
IGNhdXNlIHJhbmRvbSBwcm9ibGVtcyBhbmQgc3lzdGVtIGluc3RhYmlsaXR5DQpbICAgMTMu
NDIxNTc0XSBBQ1BJOiBJZiBhbiBBQ1BJIGRyaXZlciBpcyBhdmFpbGFibGUgZm9yIHRoaXMg
ZGV2aWNlLCB5b3Ugc2hvdWxkIHVzZSBpdCBpblsgICAxOC42NzgyNTldIHVkZXZkWzE5MDdd
OiBzdGFydGluZyB2ZXJzaW9uIDE3NQ0KWyAgIDE5Ljc1Mzg3MV0gYXRhMS4wMDogY29uZmln
dXJlZCBmb3IgVURNQS8xMzMNClsgICAxOS43NjI3NzZdIGF0YTE6IEVIIGNvbXBsZXRlDQpb
ICAgMjAuOTk3MTIyXSBhdGExLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMw0KWyAgIDIx
LjAwNDM1M10gYXRhMTogRUggY29tcGxldGUNClsgICAyMi4yODg1ODJdIEVYVDQtZnMgKGRt
LTApOiByZS1tb3VudGVkLiBPcHRzOiAobnVsbCkNClsgICAyMi44NjAzMDldIEVYVDQtZnMg
KGRtLTApOiByZS1tb3VudGVkLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8N
ClsgICAzMC43MTMzODNdIEFkZGluZyAyMDk3MTQ4ayBzd2FwIG9uIC9kZXYvbWFwcGVyL3Nl
cnZlZXJzdGVydGplLXN3YXAuICBQcmlvcml0eTotMSBleHRlbnRzOjEgYWNyb3NzOjIwOTcx
NDhrIA0KWyAgIDQzLjQ2MjI0Nl0gRVhUNC1mcyAoc2RhMSk6IG1vdW50ZWQgZmlsZXN5c3Rl
bSB3aXRoIG9yZGVyZWQgZGF0YSBtb2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91
bnQtcm8NClsgICA0Ni42ODM4NTFdIHI4MTY5IDAwMDA6MGE6MDAuMCBldGgxOiBsaW5rIGRv
d24NClsgICA0Ni42OTA2MTVdIHI4MTY5IDAwMDA6MGE6MDAuMCBldGgxOiBsaW5rIGRvd24N
ClsgICA0Ni45NTcxOTldIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIGRvd24NClsg
ICA0Ni45NTcyODJdIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIGRvd24NClsgICA0
OC42MDg0NzFdIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIHVwDQpbICAgNDkuMzEz
NDgyXSByODE2OSAwMDAwOjBhOjAwLjAgZXRoMTogbGluayB1cA0KWyAgIDg5Ljk5NjcyMl0g
RVhUNC1mcyAoZG0tMik6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQgZGF0YSBt
b2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8NClsgICA5Ni4zNTg0MjNd
IEVYVDQtZnMgKGRtLTM5KTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3JkZXJlZCBkYXRh
IG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAgMTAyLjQwNDEx
OV0gRVhUNC1mcyAoZG0tNDApOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRlcmVkIGRh
dGEgbW9kZS4gT3B0czogYmFycmllcj0xLGVycm9ycz1yZW1vdW50LXJvDQpbICAxMDguNTEz
NTg4XSBFWFQ0LWZzIChkbS00MSk6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQg
ZGF0YSBtb2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8NClsgIDExNi43
MjQ1MjNdIEVYVDQtZnMgKGRtLTQyKTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3JkZXJl
ZCBkYXRhIG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAgMTIz
LjI4MzcwMl0gRVhUNC1mcyAoZG0tNDMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRl
cmVkIGRhdGEgbW9kZS4gT3B0czogYmFycmllcj0xLGVycm9ycz1yZW1vdW50LXJvDQpbICAx
MjkuNjE3MDgzXSBFWFQ0LWZzIChzZGIxKTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3Jk
ZXJlZCBkYXRhIG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAg
MTM2LjI5NTgzNF0gZGV2aWNlIHZpZjEuMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUNCihk
MSkgWzIwMTQtMDEtMDcgMTE6MTE6MjRdIG1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwg
bWVtb3J5DQooZDEpIFsyMDE0LTAxLTA3IDExOjExOjI0XSBhYm91dCB0byBnZXQgc3RhcnRl
ZC4uLg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MTE6MjVdIHRyYXBzLmM6MjUxNjpkMSBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAw
MDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KWyAgMTM3LjUyODczN10geGVuLWJsa2Jh
Y2s6cmluZy1yZWYgOCwgZXZlbnQtY2hhbm5lbCAxNywgcHJvdG9jb2wgMSAoeDg2XzY0LWFi
aSkgcGVyc2lzdGVudCBncmFudHMNClsgIDEzNy41NDM2MTVdIHhlbi1ibGtiYWNrOnJpbmct
cmVmIDksIGV2ZW50LWNoYW5uZWwgMTgsIHByb3RvY29sIDEgKHg4Nl82NC1hYmkpIHBlcnNp
c3RlbnQgZ3JhbnRzDQpbICAxMzcuNTU5MTYyXSB4ZW5fYnJpZGdlOiBwb3J0IDEodmlmMS4w
KSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDEzNy41NzAwODVdIHhlbl9icmlkZ2U6
IHBvcnQgMSh2aWYxLjApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KWyAgMTQyLjIyODE0
MF0gZGV2aWNlIHZpZjIuMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUNCihkMikgWzIwMTQt
MDEtMDcgMTE6MTE6MzBdIG1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQoo
ZDIpIFsyMDE0LTAxLTA3IDExOjExOjMwXSBhYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MTE6MzBdIHRyYXBzLmM6MjUxNjpkMiBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAwMDAwMDAgdG8g
MHgwMDAwMDAwMDAwMDBmZmZmLg0KWyAgMTQzLjEwNDU4N10geGVuLWJsa2JhY2s6cmluZy1y
ZWYgOCwgZXZlbnQtY2hhbm5lbCAxMCwgcHJvdG9jb2wgMSAoeDg2XzY0LWFiaSkgcGVyc2lz
dGVudCBncmFudHMNClsgIDE0NC4xMTc3NzVdIHhlbl9icmlkZ2U6IHBvcnQgMih2aWYyLjAp
IGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KWyAgMTQ0LjEyODkwOV0geGVuX2JyaWRnZTog
cG9ydCAyKHZpZjIuMCkgZW50ZXJlZCBmb3J3YXJkaW5nIHN0YXRlDQpbICAxNDguMTI0MTQy
XSBkZXZpY2UgdmlmMy4wIGVudGVyZWQgcHJvbWlzY3VvdXMgbW9kZQ0KKGQzKSBbMjAxNC0w
MS0wNyAxMToxMTozNl0gbWFwcGluZyBrZXJuZWwgaW50byBwaHlzaWNhbCBtZW1vcnkNCihk
MykgWzIwMTQtMDEtMDcgMTE6MTE6MzZdIGFib3V0IHRvIGdldCBzdGFydGVkLi4uDQooWEVO
KSBbMjAxNC0wMS0wNyAxMToxMTozNl0gdHJhcHMuYzoyNTE2OmQzIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAw
eDAwMDAwMDAwMDAwMGZmZmYuDQpbICAxNTIuNTc2ODc3XSB4ZW5fYnJpZGdlOiBwb3J0IDEo
dmlmMS4wKSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDE1My45NzcxOTldIGRldmlj
ZSB2aWY0LjAgZW50ZXJlZCBwcm9taXNjdW91cyBtb2RlDQooZDQpIFsyMDE0LTAxLTA3IDEx
OjExOjQyXSBtYXBwaW5nIGtlcm5lbCBpbnRvIHBoeXNpY2FsIG1lbW9yeQ0KKGQ0KSBbMjAx
NC0wMS0wNyAxMToxMTo0Ml0gYWJvdXQgdG8gZ2V0IHN0YXJ0ZWQuLi4NCihYRU4pIFsyMDE0
LTAxLTA3IDExOjExOjQyXSB0cmFwcy5jOjI1MTY6ZDQgRG9tYWluIGF0dGVtcHRlZCBXUk1T
UiAwMDAwMDAwMGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAw
MDAwMDAwZmZmZi4NClsgIDE1NC44OTY3MDhdIHhlbi1ibGtiYWNrOnJpbmctcmVmIDgsIGV2
ZW50LWNoYW5uZWwgMTAsIHByb3RvY29sIDEgKHg4Nl82NC1hYmkpIHBlcnNpc3RlbnQgZ3Jh
bnRzDQpbICAxNTUuOTExNTg5XSB4ZW5fYnJpZGdlOiBwb3J0IDQodmlmNC4wKSBlbnRlcmVk
IGZvcndhcmRpbmcgc3RhdGUNClsgIDE1NS45MjI2NjNdIHhlbl9icmlkZ2U6IHBvcnQgNCh2
aWY0LjApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MTE6NDVdIGdyYW50X3RhYmxlLmM6Mjg5OmQwIEluY3JlYXNlZCBtYXB0cmFjayBzaXplIHRv
IDIgZnJhbWVzDQpbICAxNTkuMTMzNDQxXSB4ZW5fYnJpZGdlOiBwb3J0IDIodmlmMi4wKSBl
bnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDE1OS44OTI5NjJdIGRldmljZSB2aWY1LjAg
ZW50ZXJlZCBwcm9taXNjdW91cyBtb2RlDQooZDUpIFsyMDE0LTAxLTA3IDExOjExOjQ4XSBt
YXBwaW5nIGtlcm5lbCBpbnRvIHBoeXNpY2FsIG1lbW9yeQ0KKGQ1KSBbMjAxNC0wMS0wNyAx
MToxMTo0OF0gYWJvdXQgdG8gZ2V0IHN0YXJ0ZWQuLi4NCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjExOjQ4XSB0cmFwcy5jOjI1MTY6ZDUgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MGMwMDEwMDA0IGZyb20gMHgwMDAwMDAwMDAwMDAwMDAwIHRvIDB4MDAwMDAwMDAwMDAwZmZm
Zi4NClsgIDE2MC44NTQyMDJdIHhlbi1ibGtiYWNrOnJpbmctcmVmIDgsIGV2ZW50LWNoYW5u
ZWwgMTAsIHByb3RvY29sIDEgKHg4Nl82NC1hYmkpIHBlcnNpc3RlbnQgZ3JhbnRzDQpbICAx
NjEuODY4NTA3XSB4ZW5fYnJpZGdlOiBwb3J0IDUodmlmNS4wKSBlbnRlcmVkIGZvcndhcmRp
bmcgc3RhdGUNClsgIDE2MS44NzkwMzddIHhlbl9icmlkZ2U6IHBvcnQgNSh2aWY1LjApIGVu
dGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KWyAgMTY2LjAwMjM1OV0gZGV2aWNlIHZpZjYuMCBl
bnRlcmVkIHByb21pc2N1b3VzIG1vZGUNCihkNikgWzIwMTQtMDEtMDcgMTE6MTE6NTRdIG1h
cHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQooZDYpIFsyMDE0LTAxLTA3IDEx
OjExOjU0XSBhYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MTE6NTRdIHRyYXBzLmM6MjUxNjpkNiBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
YzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAwMDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZm
Lg0KWyAgMTY2Ljg3Nzg2NV0geGVuLWJsa2JhY2s6cmluZy1yZWYgOCwgZXZlbnQtY2hhbm5l
bCAxMCwgcHJvdG9jb2wgMSAoeDg2XzY0LWFiaSkgcGVyc2lzdGVudCBncmFudHMNClsgIDE2
Ni44OTI0MzVdIHhlbl9icmlkZ2U6IHBvcnQgNih2aWY2LjApIGVudGVyZWQgZm9yd2FyZGlu
ZyBzdGF0ZQ0KWyAgMTY2LjkwMjQ4Nl0geGVuX2JyaWRnZTogcG9ydCA2KHZpZjYuMCkgZW50
ZXJlZCBmb3J3YXJkaW5nIHN0YXRlDQpbICAxNzAuOTY3NjAwXSB4ZW5fYnJpZGdlOiBwb3J0
IDQodmlmNC4wKSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDE3NC4yNzkzMzBdIEJs
dWV0b290aDogaGNpMCBjb21tYW5kIDB4MDgwNCB0eCB0aW1lb3V0DQpbICAxNzYuODg0NjAy
XSB4ZW5fYnJpZGdlOiBwb3J0IDUodmlmNS4wKSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUN
ClsgIDE4MS45NDg4MTVdIHhlbl9icmlkZ2U6IHBvcnQgNih2aWY2LjApIGVudGVyZWQgZm9y
d2FyZGluZyBzdGF0ZQ0KWyAgMTgzLjY4MTQwNl0gYXRhMy4wMDogZXhjZXB0aW9uIEVtYXNr
IDB4MCBTQWN0IDB4MCBTRXJyIDB4MCBhY3Rpb24gMHg2IGZyb3plbg0KWyAgMTgzLjY5MDY3
Nl0gYXRhMy4wMDogZmFpbGVkIGNvbW1hbmQ6IENIRUNLIFBPV0VSIE1PREUNClsgIDE4My42
OTk3OTRdIGF0YTMuMDA6IGNtZCBlNS8wMDowMDowMDowMDowMC8wMDowMDowMDowMDowMC8w
MCB0YWcgMA0KWyAgMTgzLjY5OTc5NF0gICAgICAgICAgcmVzIDQwLzAwOmZmOjAwOjAwOjAw
LzAwOjAwOjAwOjAwOjAwLzQwIEVtYXNrIDB4NCAodGltZW91dCkNClsgIDE4My43MTc4MzNd
IGF0YTMuMDA6IHN0YXR1czogeyBEUkRZIH0NClsgIDE4My43MjY2NzVdIGF0YTM6IGhhcmQg
cmVzZXR0aW5nIGxpbmsNClsgIDE4NC4yMjEwMTldIGF0YTM6IFNBVEEgbGluayB1cCA2LjAg
R2JwcyAoU1N0YXR1cyAxMzMgU0NvbnRyb2wgMzAwKQ0KWyAgMTg5LjIyNTIzMV0gYXRhMy4w
MDogcWMgdGltZW91dCAoY21kIDB4ZWMpDQpbICAxODkuMjMzNjYwXSBhdGEzLjAwOiBmYWls
ZWQgdG8gSURFTlRJRlkgKEkvTyBlcnJvciwgZXJyX21hc2s9MHg0KQ0KWyAgMTg5LjI0MjAy
Ml0gYXRhMy4wMDogcmV2YWxpZGF0aW9uIGZhaWxlZCAoZXJybm89LTUpDQpbICAxODkuMjUw
Mjc5XSBhdGEzOiBoYXJkIHJlc2V0dGluZyBsaW5rDQpbICAxODkuNzQxNTQ4XSBhdGEzOiBT
QVRBIGxpbmsgdXAgNi4wIEdicHMgKFNTdGF0dXMgMTMzIFNDb250cm9sIDMwMCkNClsgIDE5
Ni4zNTQ4OTZdIEJVRzogc29mdCBsb2NrdXAgLSBDUFUjMSBzdHVjayBmb3IgMjJzISBbeGVu
ZG9tYWluczo5Njc5XQ0KWyAgMTk2LjM2MjcwNl0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAx
OTYuMzcwMzYwXSBpcnEgZXZlbnQgc3RhbXA6IDYxNzM4DQpbICAxOTYuMzc3ODcxXSBoYXJk
aXJxcyBsYXN0ICBlbmFibGVkIGF0ICg2MTczNyk6IFs8ZmZmZmZmZmY4MWFhMGIzMz5dIHJl
c3RvcmVfYXJncysweDAvMHgzMA0KWyAgMTk2LjM4NTQ2NV0gaGFyZGlycXMgbGFzdCBkaXNh
YmxlZCBhdCAoNjE3MzgpOiBbPGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4
Ng0KWyAgMTk2LjM5MjkxMF0gc29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoNjE3MzYpOiBb
PGZmZmZmZmZmODEwYTlkZjE+XSBfX2RvX3NvZnRpcnErMHgxOTEvMHgyMjANClsgIDE5Ni40
MDAxOTJdIHNvZnRpcnFzIGxhc3QgZGlzYWJsZWQgYXQgKDYxNzMxKTogWzxmZmZmZmZmZjgx
MGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAxOTYuNDA3MjYzXSBDUFU6IDEgUElE
OiA5Njc5IENvbW06IHhlbmRvbWFpbnMgTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEw
Ny14ZW5kZXZlbCsgIzENClsgIDE5Ni40MTQzMDNdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03
NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpb
ICAxOTYuNDIxMjczXSB0YXNrOiBmZmZmODgwMDU4ZGVjMjQwIHRpOiBmZmZmODgwMDRjZDMy
MDAwIHRhc2sudGk6IGZmZmY4ODAwNGNkMzIwMDANClsgIDE5Ni40MjgyNjNdIFJJUDogZTAz
MDpbPGZmZmZmZmZmODExMDlhNTg+XSAgWzxmZmZmZmZmZjgxMTA5YTU4Pl0gZ2VuZXJpY19l
eGVjX3NpbmdsZSsweDg4LzB4YzANClsgIDE5Ni40MzUzMjNdIFJTUDogZTAyYjpmZmZmODgw
MDRjZDMzYTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpbICAxOTYuNDQyMTcwXSBSQVg6IDAwMDAw
MDAwMDAwMDAwMDggUkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAz
OA0KWyAgMTk2LjQ0ODg5OF0gUkRYOiAwMDAwMDAwMDAwMDAwMGZmIFJTSTogMDAwMDAwMDAw
MDAwMDAwOCBSREk6IDAwMDAwMDAwMDAwMDAwMDgNClsgIDE5Ni40NTU1NzhdIFJCUDogZmZm
Zjg4MDA0Y2QzM2FjOCBSMDg6IGZmZmZmZmZmODFjMGQ0NjggUjA5OiAwMDAwMDAwMDAwMDAw
MDAwDQpbICAxOTYuNDYyMjgzXSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAwMDAwMDAw
MDAwMDAwMDAwIFIxMjogZmZmZjg4MDA0Y2QzM2FmMA0KWyAgMTk2LjQ2ODk5NF0gUjEzOiAw
MDAwMDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2
MTQwNTANClsgIDE5Ni40NzU3MDRdIEZTOiAgMDAwMDdmMTUyZWM0ZDcwMCgwMDAwKSBHUzpm
ZmZmODgwMDVmNjQwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsgIDE5Ni40
ODI0OTJdIENTOiAgZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAw
M2INClsgIDE5Ni40ODkyMjldIENSMjogMDAwMDdmMTUyZTJhMWUwMiBDUjM6IDAwMDAwMDAw
NGNkMmEwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAxOTYuNDk1OTA2XSBTdGFjazoN
ClsgIDE5Ni41MDI1OTVdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNWY2MTQwNDAgMDAw
MDAwMDAwMDAwMDAwNiAwMDAwMDAwMDAwMDAwMDAwDQpbICAxOTYuNTA5NDEyXSAgMDAwMDAw
MDAwMDAwMDAwMSBmZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAw
MDAwMDAwMQ0KWyAgMTk2LjUxNjI2Nl0gIGZmZmY4ODAwNGNkMzNiMzggZmZmZmZmZmY4MTEw
OWNjNSBmZmZmZmZmZjgxYWEwYjMzIGZmZmY4ODAwY2VkNTYwMDANClsgIDE5Ni41MjMxMTJd
IENhbGwgVHJhY2U6DQpbICAxOTYuNTI5NzY5XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4
ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMTk2LjUzNjUx
OV0gIFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1
LzB4MWUwDQpbICAxOTYuNTQzMzc0XSAgWzxmZmZmZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRf
cmVzdG9yZV9hcmdzKzB4MTMvMHgxMw0KWyAgMTk2LjU1MDA5OF0gIFs8ZmZmZmZmZmY4MTAw
Nzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsg
IDE5Ni41NTY4MzNdICBbPGZmZmZmZmZmODExMGEwM2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9t
YW55KzB4MjdhLzB4MmEwDQpbICAxOTYuNTYzNTkxXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0g
PyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMTk2LjU3
MDI0MF0gIFs8ZmZmZmZmZmY4MTAwODQxZT5dIHhlbl9leGl0X21tYXArMHhjZS8weDFhMA0K
WyAgMTk2LjU3Njc5MV0gIFs8ZmZmZmZmZmY4MTAwMTIyYT5dID8geGVuX2h5cGVyY2FsbF94
ZW5fdmVyc2lvbisweGEvMHgyMA0KWyAgMTk2LjU4MzQ0OV0gIFs8ZmZmZmZmZmY4MTE2OTQy
Nj5dIGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAxOTYuNTkwMDIwXSAgWzxmZmZmZmZmZjgx
MGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDE5Ni41OTY1MDRdICBb
PGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMTk2LjYwMjkw
MF0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAxOTYu
NjA5MTc5XSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpbICAxOTYu
NjE1NDc1XSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4
MzANClsgIDE5Ni42MjE3NjFdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2VsZl9iaW5h
cnkrMHgzMmEvMHgxYTAwDQpbICAxOTYuNjI3OTA4XSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0g
PyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMTk2LjYzMzk3NV0gIFs8ZmZmZmZm
ZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDE5Ni42NDAxMzdd
ICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisweGMzLzB4
MWIwDQpbICAxOTYuNjQ2Mzc1XSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVp
cmUrMHhlYy8weDExMA0KWyAgMTk2LjY1MjUzN10gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8g
bG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAxOTYuNjU4NTg3XSAgWzxmZmZmZmZmZjgx
MTk5MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDE5Ni42NjQ3
MzVdICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1
OTIvMHg3MTANClsgIDE5Ni42NzA4MjVdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4
ZWN2ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMTk2LjY3NjkwMl0gIFs8ZmZm
ZmZmZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpbICAxOTYu
NjgyOTcyXSAgWzxmZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAg
MTk2LjY4OTAxNl0gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgzOC8weDYw
DQpbICAxOTYuNjk0OTQ0XSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2
OS8weGEwDQpbICAxOTYuNzAwOTEzXSBDb2RlOiBjOCA0OCA4OSBkYSBlOCBlYSBjMSAzMiAw
MCA0OCA4YiA0NSBjMCA0YyA4OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1
ZCBjOCA3NCAyZCA0NSA4NSBlZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCA8ZjM+
IDkwIDQxIGY2IDQ0IDI0IDIwIDAxIDc1IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRj
IDhiIDZkIA0KWyAgMTk5Ljc0MzM1OV0gYXRhMy4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMp
DQpbICAxOTkuNzQ5ODg0XSBhdGEzLjAwOiBmYWlsZWQgdG8gSURFTlRJRlkgKEkvTyBlcnJv
ciwgZXJyX21hc2s9MHg0KQ0KWyAgMTk5Ljc1NjM0NV0gYXRhMy4wMDogcmV2YWxpZGF0aW9u
IGZhaWxlZCAoZXJybm89LTUpDQpbICAxOTkuNzYyOTQzXSBhdGEzOiBsaW1pdGluZyBTQVRB
IGxpbmsgc3BlZWQgdG8gMy4wIEdicHMNClsgIDE5OS43Njk0MDFdIGF0YTM6IGhhcmQgcmVz
ZXR0aW5nIGxpbmsNClsgIDIwMC4yNTYzNzZdIGF0YTM6IFNBVEEgbGluayB1cCAzLjAgR2Jw
cyAoU1N0YXR1cyAxMjMgU0NvbnRyb2wgMzIwKQ0KWyAgMjA0LjY5NzYxOV0gYXRhMS4wMDog
ZXhjZXB0aW9uIEVtYXNrIDB4MCBTQWN0IDB4MWYgU0VyciAweDAgYWN0aW9uIDB4NiBmcm96
ZW4NClsgIDIwNC43MDM5OTJdIGF0YTEuMDA6IGZhaWxlZCBjb21tYW5kOiBSRUFEIEZQRE1B
IFFVRVVFRA0KWyAgMjA0LjcxMDI0MF0gYXRhMS4wMDogY21kIDYwLzA4OjAwOjAxOjFlOjJk
LzAwOjAwOmQxOjAwOjAwLzQwIHRhZyAwIG5jcSA0MDk2IGluDQpbICAyMDQuNzEwMjQwXSAg
ICAgICAgICByZXMgNDAvMDA6ZmY6MDA6MDA6MDAvMDA6MDA6MDA6MDA6MDAvMDAgRW1hc2sg
MHg0ICh0aW1lb3V0KQ0KWyAgMjA0LjcyMjkyNl0gYXRhMS4wMDogc3RhdHVzOiB7IERSRFkg
fQ0KWyAgMjA0LjcyOTI5OV0gYXRhMS4wMDogZmFpbGVkIGNvbW1hbmQ6IFdSSVRFIEZQRE1B
IFFVRVVFRA0KWyAgMjA0LjczNTU5OV0gYXRhMS4wMDogY21kIDYxL2UwOjA4OmU5OjY3OjJl
LzAwOjAwOjA0OjAwOjAwLzQwIHRhZyAxIG5jcSAxMTQ2ODggb3V0DQpbICAyMDQuNzM1NTk5
XSAgICAgICAgICByZXMgNDAvMDA6MDA6MDA6MDA6MDAvMDA6MDA6MDA6MDA6MDAvMDAgRW1h
c2sgMHg0ICh0aW1lb3V0KQ0KWyAgMjA0Ljc0ODExMV0gYXRhMS4wMDogc3RhdHVzOiB7IERS
RFkgfQ0KWyAgMjA0Ljc1NDIyMV0gYXRhMS4wMDogZmFpbGVkIGNvbW1hbmQ6IFJFQUQgRlBE
TUEgUVVFVUVEDQpbICAyMDQuNzYwMjU4XSBhdGExLjAwOiBjbWQgNjAvMjA6MTA6ODE6ZWQ6
YzIvMDA6MDA6ZDA6MDA6MDAvNDAgdGFnIDIgbmNxIDE2Mzg0IGluDQpbICAyMDQuNzYwMjU4
XSAgICAgICAgICByZXMgNDAvMDA6MDA6MDA6MDA6MDAvMDA6MDA6MDA6MDA6MDAvMDAgRW1h
c2sgMHg0ICh0aW1lb3V0KQ0KWyAgMjA0Ljc3MjQ0MV0gYXRhMS4wMDogc3RhdHVzOiB7IERS
RFkgfQ0KWyAgMjA0Ljc3ODU0M10gYXRhMS4wMDogZmFpbGVkIGNvbW1hbmQ6IFJFQUQgRlBE
TUEgUVVFVUVEDQpbICAyMDQuNzg0NjE1XSBhdGExLjAwOiBjbWQgNjAvMDg6MTg6ZTE6Mjg6
OGUvMDA6MDA6ZDA6MDA6MDAvNDAgdGFnIDMgbmNxIDQwOTYgaW4NClsgIDIwNC43ODQ2MTVd
ICAgICAgICAgIHJlcyA0MC8wMDowMDowMDowMDowMC8wMDowMDowMDowMDowMC8wMCBFbWFz
ayAweDQgKHRpbWVvdXQpDQpbICAyMDQuNzk2OTgzXSBhdGExLjAwOiBzdGF0dXM6IHsgRFJE
WSB9DQpbICAyMDQuODAzMDcyXSBhdGExLjAwOiBmYWlsZWQgY29tbWFuZDogUkVBRCBGUERN
QSBRVUVVRUQNClsgIDIwNC44MDkxNDFdIGF0YTEuMDA6IGNtZCA2MC8wODoyMDpiOTozYTo2
ZS8wMDowMDpjYzowMDowMC80MCB0YWcgNCBuY3EgNDA5NiBpbg0KWyAgMjA0LjgwOTE0MV0g
ICAgICAgICAgcmVzIDQwLzAwOjAwOjAwOjAwOjAwLzAwOjAwOjAwOjAwOjAwLzAwIEVtYXNr
IDB4NCAodGltZW91dCkNClsgIDIwNC44MjEyNzRdIGF0YTEuMDA6IHN0YXR1czogeyBEUkRZ
IH0NClsgIDIwNC44MjcyOTRdIGF0YTE6IGhhcmQgcmVzZXR0aW5nIGxpbmsNClsgIDIwNS4z
MTcyMjddIGF0YTE6IFNBVEEgbGluayB1cCAzLjAgR2JwcyAoU1N0YXR1cyAxMjMgU0NvbnRy
b2wgMzAwKQ0KWyAgMjEwLjMxODAwNF0gYXRhMS4wMDogcWMgdGltZW91dCAoY21kIDB4ZWMp
DQpbICAyMTAuMzI0MTgyXSBhdGExLjAwOiBmYWlsZWQgdG8gSURFTlRJRlkgKEkvTyBlcnJv
ciwgZXJyX21hc2s9MHg0KQ0KWyAgMjEwLjMzMDMwMF0gYXRhMS4wMDogcmV2YWxpZGF0aW9u
IGZhaWxlZCAoZXJybm89LTUpDQpbICAyMTAuMzM2NDExXSBhdGExOiBoYXJkIHJlc2V0dGlu
ZyBsaW5rDQpbICAyMTAuODI3OTA4XSBhdGExOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMgKFNT
dGF0dXMgMTIzIFNDb250cm9sIDMwMCkNClsgIDIyMC44MjYyMDldIGF0YTEuMDA6IHFjIHRp
bWVvdXQgKGNtZCAweGVjKQ0KWyAgMjIwLjgzMjIwN10gYXRhMS4wMDogZmFpbGVkIHRvIElE
RU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkNClsgIDIyMC44MzgxNDJdIGF0YTEu
MDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQ0KWyAgMjIwLjg0NDEwOF0gYXRh
MTogbGltaXRpbmcgU0FUQSBsaW5rIHNwZWVkIHRvIDEuNSBHYnBzDQpbICAyMjAuODUwMDA4
XSBhdGExOiBoYXJkIHJlc2V0dGluZyBsaW5rDQpbICAyMjEuMzM5MjI4XSBhdGExOiBTQVRB
IGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9sIDMxMCkNClsgIDIyNC4z
NDA5ODRdIEJVRzogc29mdCBsb2NrdXAgLSBDUFUjMSBzdHVjayBmb3IgMjJzISBbeGVuZG9t
YWluczo5Njc5XQ0KWyAgMjI0LjM0NzEwMl0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAyMjQu
MzUzMTEzXSBpcnEgZXZlbnQgc3RhbXA6IDEyODY1Ng0KWyAgMjI0LjM1OTE2MF0gaGFyZGly
cXMgbGFzdCAgZW5hYmxlZCBhdCAoMTI4NjU1KTogWzxmZmZmZmZmZjgxYWEwYjMzPl0gcmVz
dG9yZV9hcmdzKzB4MC8weDMwDQpbICAyMjQuMzY1NDI0XSBoYXJkaXJxcyBsYXN0IGRpc2Fi
bGVkIGF0ICgxMjg2NTYpOiBbPGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4
Ng0KWyAgMjI0LjM3MTY3N10gc29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoMTI4NjU0KTog
WzxmZmZmZmZmZjgxMGE5ZGYxPl0gX19kb19zb2Z0aXJxKzB4MTkxLzB4MjIwDQpbICAyMjQu
Mzc3OTM2XSBzb2Z0aXJxcyBsYXN0IGRpc2FibGVkIGF0ICgxMjg2NDkpOiBbPGZmZmZmZmZm
ODEwYWEyZTI+XSBpcnFfZXhpdCsweGEyLzB4ZDANClsgIDIyNC4zODQxNjhdIENQVTogMSBQ
SUQ6IDk2NzkgQ29tbTogeGVuZG9tYWlucyBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQw
MTA3LXhlbmRldmVsKyAjMQ0KWyAgMjI0LjM5MDQ3MV0gSGFyZHdhcmUgbmFtZTogTVNJIE1T
LTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTAN
ClsgIDIyNC4zOTY4ODldIHRhc2s6IGZmZmY4ODAwNThkZWMyNDAgdGk6IGZmZmY4ODAwNGNk
MzIwMDAgdGFzay50aTogZmZmZjg4MDA0Y2QzMjAwMA0KWyAgMjI0LjQwMzM4N10gUklQOiBl
MDMwOls8ZmZmZmZmZmY4MTEwOWE1YT5dICBbPGZmZmZmZmZmODExMDlhNWE+XSBnZW5lcmlj
X2V4ZWNfc2luZ2xlKzB4OGEvMHhjMA0KWyAgMjI0LjQxMDA0MF0gUlNQOiBlMDJiOmZmZmY4
ODAwNGNkMzNhODggIEVGTEFHUzogMDAwMDAyMDINClsgIDIyNC40MTY2MjVdIFJBWDogMDAw
MDAwMDAwMDAwMDAwOCBSQlg6IGZmZmY4ODAwNWY2MTQwNDAgUkNYOiAwMDAwMDAwMDAwMDAw
MDM4DQpbICAyMjQuNDIzMzMzXSBSRFg6IDAwMDAwMDAwMDAwMDAwZmYgUlNJOiAwMDAwMDAw
MDAwMDAwMDA4IFJESTogMDAwMDAwMDAwMDAwMDAwOA0KWyAgMjI0LjQzMDAxMF0gUkJQOiBm
ZmZmODgwMDRjZDMzYWM4IFIwODogZmZmZmZmZmY4MWMwZDQ2OCBSMDk6IDAwMDAwMDAwMDAw
MDAwMDANClsgIDIyNC40MzY2NzRdIFIxMDogMDAwMDAwMDAwMDAwMDAwMSBSMTE6IDAwMDAw
MDAwMDAwMDAwMDAgUjEyOiBmZmZmODgwMDRjZDMzYWYwDQpbICAyMjQuNDQzMzk1XSBSMTM6
IDAwMDAwMDAwMDAwMDAwMDEgUjE0OiAwMDAwMDAwMDAwMDAwMDAwIFIxNTogZmZmZjg4MDA1
ZjYxNDA1MA0KWyAgMjI0LjQ0OTkzNl0gRlM6ICAwMDAwN2YxNTJlYzRkNzAwKDAwMDApIEdT
OmZmZmY4ODAwNWY2NDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjI0
LjQ1NjMzOF0gQ1M6ICBlMDMzIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1
MDAzYg0KWyAgMjI0LjQ2MjczOV0gQ1IyOiAwMDAwN2YxNTJlMmExZTAyIENSMzogMDAwMDAw
MDA0Y2QyYTAwMCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDIyNC40NjkyOTBdIFN0YWNr
Og0KWyAgMjI0LjQ3NTY1N10gIDAwMDAwMDAwMDAwMDAyMDAgZmZmZjg4MDA1ZjYxNDA0MCAw
MDAwMDAwMDAwMDAwMDA2IDAwMDAwMDAwMDAwMDAwMDANClsgIDIyNC40ODIyNzddICAwMDAw
MDAwMDAwMDAwMDAxIGZmZmZmZmZmODIyZTczMDAgZmZmZmZmZmY4MTAwNzk4MCAwMDAwMDAw
MDAwMDAwMDAxDQpbICAyMjQuNDg4ODMzXSAgZmZmZjg4MDA0Y2QzM2IzOCBmZmZmZmZmZjgx
MTA5Y2M1IGZmZmZmZmZmODFhYTBiMzMgZmZmZjg4MDBjZWQ1NjAwMA0KWyAgMjI0LjQ5NTM5
Ml0gQ2FsbCBUcmFjZToNClsgIDIyNC41MDE4MThdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/
IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMjQuNTA4
MzkwXSAgWzxmZmZmZmZmZjgxMTA5Y2M1Pl0gc21wX2NhbGxfZnVuY3Rpb25fc2luZ2xlKzB4
ZTUvMHgxZTANClsgIDIyNC41MTQ5MzBdICBbPGZmZmZmZmZmODFhYTBiMzM+XSA/IHJldGlu
dF9yZXN0b3JlX2FyZ3MrMHgxMy8weDEzDQpbICAyMjQuNTIxNDg3XSAgWzxmZmZmZmZmZjgx
MDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0K
WyAgMjI0LjUyODAxNF0gIFs8ZmZmZmZmZmY4MTEwYTAzYT5dIHNtcF9jYWxsX2Z1bmN0aW9u
X21hbnkrMHgyN2EvMHgyYTANClsgIDIyNC41MzQ1MTldICBbPGZmZmZmZmZmODEwMDc5ODA+
XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMjQu
NTQxMDc4XSAgWzxmZmZmZmZmZjgxMDA4NDFlPl0geGVuX2V4aXRfbW1hcCsweGNlLzB4MWEw
DQpbICAyMjQuNTQ3NjE3XSAgWzxmZmZmZmZmZjgxMDAxMjJhPl0gPyB4ZW5faHlwZXJjYWxs
X3hlbl92ZXJzaW9uKzB4YS8weDIwDQpbICAyMjQuNTU0MTI3XSAgWzxmZmZmZmZmZjgxMTY5
NDI2Pl0gZXhpdF9tbWFwKzB4NTYvMHgxODANClsgIDIyNC41NjA2MDhdICBbPGZmZmZmZmZm
ODEwZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMjI0LjU2NzA4N10g
IFs8ZmZmZmZmZmY4MTFkZGRlMD5dID8gZXhpdF9haW8rMHhiMC8weGUwDQpbICAyMjQuNTcz
NTI5XSAgWzxmZmZmZmZmZjgxMWRkZDQ0Pl0gPyBleGl0X2FpbysweDE0LzB4ZTANClsgIDIy
NC41Nzk3OTBdICBbPGZmZmZmZmZmODEwYTI2ODk+XSBtbXB1dCsweDU5LzB4ZTANClsgIDIy
NC41ODYwOTBdICBbPGZmZmZmZmZmODExOWEzYTk+XSBmbHVzaF9vbGRfZXhlYysweDQzOS8w
eDgzMA0KWyAgMjI0LjU5MjM0Nl0gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dIGxvYWRfZWxmX2Jp
bmFyeSsweDMyYS8weDFhMDANClsgIDIyNC41OTg1NzBdICBbPGZmZmZmZmZmODFhOWZmZTY+
XSA/IF9yYXdfcmVhZF91bmxvY2srMHgyNi8weDMwDQpbICAyMjQuNjA0NzQ2XSAgWzxmZmZm
ZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjI0LjYxMDg5
NF0gIFs8ZmZmZmZmZmY4MTE5OTI0Mz5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4YzMv
MHgxYjANClsgIDIyNC42MTcwMjZdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNx
dWlyZSsweGVjLzB4MTEwDQpbICAyMjQuNjIzMTg2XSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0g
PyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDIyNC42MjkyNTBdICBbPGZmZmZmZmZm
ODExOTkyMDQ+XSBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAgMjI0LjYz
NTMzOF0gIFs8ZmZmZmZmZmY4MTE5YjI1Mj5dIGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsw
eDU5Mi8weDcxMA0KWyAgMjI0LjY0MTQzMV0gIFs8ZmZmZmZmZmY4MTE5YjFlNz5dID8gZG9f
ZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAyMjQuNjQ3NDk5XSAgWzxm
ZmZmZmZmZjgxMTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjANClsgIDIy
NC42NTM1NDZdICBbPGZmZmZmZmZmODExOWIzZTM+XSBkb19leGVjdmUrMHgxMy8weDIwDQpb
ICAyMjQuNjU5NTQwXSAgWzxmZmZmZmZmZjgxMTliNjU4Pl0gU3lTX2V4ZWN2ZSsweDM4LzB4
NjANClsgIDIyNC42NjU1MzldICBbPGZmZmZmZmZmODFhYTE5ZTk+XSBzdHViX2V4ZWN2ZSsw
eDY5LzB4YTANClsgIDIyNC42NzE0ODBdIENvZGU6IDg5IGRhIGU4IGVhIGMxIDMyIDAwIDQ4
IDhiIDQ1IGMwIDRjIDg5IGZmIDQ4IDg5IGM2IGU4IGJiIDY3IDk5IDAwIDQ4IDNiIDVkIGM4
IDc0IDJkIDQ1IDg1IGVkIDc1IDBhIGViIDEwIDY2IDBmIDFmIDQ0IDAwIDAwIGYzIDkwIDw0
MT4gZjYgNDQgMjQgMjAgMDEgNzUgZjYgNDggOGIgNWQgZDggNGMgOGIgNjUgZTAgNGMgOGIg
NmQgZTggNGMgDQpbICAyMzAuMjQ0ODM1XSBhdGEzLjAwOiBxYyB0aW1lb3V0IChjbWQgMHhl
YykNClsgIDIzMC4yNTEzNjJdIGF0YTMuMDA6IGZhaWxlZCB0byBJREVOVElGWSAoSS9PIGVy
cm9yLCBlcnJfbWFzaz0weDQpDQpbICAyMzAuMjU3OTE5XSBhdGEzLjAwOiByZXZhbGlkYXRp
b24gZmFpbGVkIChlcnJubz0tNSkNClsgIDIzMC4yNjQ0ODddIGF0YTMuMDA6IGRpc2FibGVk
DQpbICAyMzAuMjcwODY5XSBhdGEzOiBoYXJkIHJlc2V0dGluZyBsaW5rDQpbICAyMzAuNzYx
MjAwXSBhdGEzOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMgKFNTdGF0dXMgMTIzIFNDb250cm9s
IDMyMCkNClsgIDIzMC43Njc0ODFdIGF0YTM6IEVIIGNvbXBsZXRlDQpbICAyMzEuMTY0MjY1
XSBJTkZPOiByY3Vfc2NoZWQgc2VsZi1kZXRlY3RlZCBzdGFsbCBvbiBDUFUNClsgIDIzMS4x
NzA0MjhdIAkxOiAoMTc3OTEgdGlja3MgdGhpcyBHUCkgaWRsZT1kZTcvMTQwMDAwMDAwMDAw
MDAxLzAgc29mdGlycT04MjIwLzgyMjAgbGFzdF9hY2NlbGVyYXRlOiA1ZjQ0L2E1OTUsIG5v
bmxhenlfcG9zdGVkOiAxLCAuLg0KWyAgMjMxLjE3MDk2NF0gSU5GTzogcmN1X3NjaGVkIGRl
dGVjdGVkIHN0YWxscyBvbiBDUFVzL3Rhc2tzOg0KWyAgMjMxLjE3MDk2OF0gCTE6ICgxNzc5
MSB0aWNrcyB0aGlzIEdQKSBpZGxlPWRlNy8xNDAwMDAwMDAwMDAwMDEvMCBzb2Z0aXJxPTgy
MjAvODIyMCBsYXN0X2FjY2VsZXJhdGU6IDVmNDQvYTU5Niwgbm9ubGF6eV9wb3N0ZWQ6IDEs
IC4uDQpbICAyMzEuMTcwOTY5XSAJKGRldGVjdGVkIGJ5IDUsIHQ9MTgwMDIgamlmZmllcywg
Zz01ODkyLCBjPTU4OTEsIHE9NDAxMykNClsgIDIzMS4xNzA5NzVdIHNlbmRpbmcgTk1JIHRv
IGFsbCBDUFVzOg0KWyAgMjMxLjE3MTAxMV0gTk1JIGJhY2t0cmFjZSBmb3IgY3B1IDENClsg
IDIzMS4xNzEwMTNdIENQVTogMSBQSUQ6IDk2NzkgQ29tbTogeGVuZG9tYWlucyBOb3QgdGFp
bnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMjMxLjE3MTAxM10g
SGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJ
T1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDIzMS4xNzEwMTRdIHRhc2s6IGZmZmY4ODAwNThk
ZWMyNDAgdGk6IGZmZmY4ODAwNGNkMzIwMDAgdGFzay50aTogZmZmZjg4MDA0Y2QzMjAwMA0K
WyAgMjMxLjE3MTAxOF0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTQ4YTE5Yz5dICBbPGZmZmZm
ZmZmODE0OGExOWM+XSBjZmJfaW1hZ2VibGl0KzB4MmFjLzB4NGUwDQpbICAyMzEuMTcxMDE5
XSBSU1A6IGUwMmI6ZmZmZjg4MDA1ZjY0MzY3OCAgRUZMQUdTOiAwMDAwMDAwMg0KWyAgMjMx
LjE3MTAyMF0gUkFYOiAwMDAwMDAwMGZmZmZmZmZmIFJCWDogMDAwMDAwMDAwMDAwMDAwYSBS
Q1g6IDAwMDAwMDAwMDAwMDAwMDMNClsgIDIzMS4xNzEwMjBdIFJEWDogMDAwMDAwMDAwMDAw
MDAzYiBSU0k6IDAwMDAwMDAwMDAwMDAwMDAgUkRJOiAwMDAwMDAwMDAwMDAwMDAxDQpbICAy
MzEuMTcxMDIxXSBSQlA6IGZmZmY4ODAwNWY2NDM2ZjggUjA4OiBmZmZmYzkwMDEwNTkyZTUw
IFIwOTogZmZmZmZmZmY4MWM2NzZlOA0KWyAgMjMxLjE3MTAyMl0gUjEwOiAwMDAwMDAwMDAw
MDAwMDAxIFIxMTogMDAwMDAwMDAwMGFhYWFhYSBSMTI6IGZmZmY4ODAwNTc5MzEwNDANClsg
IDIzMS4xNzEwMjJdIFIxMzogZmZmZmM5MDAxMDU5MmU1NCBSMTQ6IGZmZmY4ODAwNTc5MzEw
MTQgUjE1OiBmZmZmYzkwMDEwNTkyOGMwDQpbICAyMzEuMTcxMDI1XSBGUzogIDAwMDA3ZjE1
MmVjNGQ3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1ZjY0MDAwMCgwMDAwKSBrbmxHUzowMDAwMDAw
MDAwMDAwMDAwDQpbICAyMzEuMTcxMDI2XSBDUzogIGUwMzMgRFM6IDAwMDAgRVM6IDAwMDAg
Q1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAyMzEuMTcxMDI2XSBDUjI6IDAwMDA3ZjE1MmUy
YTFlMDIgQ1IzOiAwMDAwMDAwMDRjZDJhMDAwIENSNDogMDAwMDAwMDAwMDAwMDY2MA0KWyAg
MjMxLjE3MTAyN10gU3RhY2s6DQpbICAyMzEuMTcxMDMwXSAgZmZmZjg4MDA1OGRlY2E1OCBm
ZmZmODgwMDU4ZGVjMjQwIGZmZmY4ODAwNWY2NDM3NTggZmZmZmZmZmY4MTBlNGYzOA0KWyAg
MjMxLjE3MTAzMl0gIGZmZmZmZmZmODJjMTc3NTAgZmZmZjg4MDA1ZjY0MzgzYSAwMDAwMDAw
MDAwMDAwMWEwIGZmZmY4ODAwNTc5ZDgyZTANClsgIDIzMS4xNzEwMzRdICAwMDAwMDAwMDAw
MDAxNDAwIDAwMDAwMDAwMDAwMDAwMzQgMDAwMDAwMDAwMDAwMDAwMCBmZmZmODgwMDU3OTMw
ZjQ0DQpbICAyMzEuMTcxMDM0XSBDYWxsIFRyYWNlOg0KWyAgMjMxLjE3MTAzNV0gIDxJUlE+
IA0KWyAgMjMxLjE3MTAzN10gIFs8ZmZmZmZmZmY4MTBlNGYzOD5dID8gX19sb2NrX2FjcXVp
cmUrMHg0MTgvMHgyMjIwDQpbICAyMzEuMTcxMDM5XSAgWzxmZmZmZmZmZjgxNDg3NDM1Pl0g
Yml0X3B1dGNzKzB4MmU1LzB4NWEwDQpbICAyMzEuMTcxMDQxXSAgWzxmZmZmZmZmZjgxNDg3
OGE3Pl0gPyBzb2Z0X2N1cnNvcisweDFiNy8weDI1MA0KWyAgMjMxLjE3MTA0M10gIFs8ZmZm
ZmZmZmY4MTQ4MjZjZT5dID8gZ2V0X2NvbG9yLmlzcmEuMTcrMHgzZS8weDE1MA0KWyAgMjMx
LjE3MTA0NF0gIFs8ZmZmZmZmZmY4MTQ4MmE2ND5dIGZiY29uX3B1dGNzKzB4MTM0LzB4MTgw
DQpbICAyMzEuMTcxMDQ2XSAgWzxmZmZmZmZmZjgxNDg3MTUwPl0gPyBiaXRfY3Vyc29yKzB4
NjMwLzB4NjMwDQpbICAyMzEuMTcxMDQ3XSAgWzxmZmZmZmZmZjgxNDg0OThmPl0gZmJjb25f
cmVkcmF3LmlzcmEuMjQrMHgxN2YvMHgxZjANClsgIDIzMS4xNzEwNDldICBbPGZmZmZmZmZm
ODE0ODRiZTg+XSBmYmNvbl9zY3JvbGwrMHgxZTgvMHhkMDANClsgIDIzMS4xNzEwNTFdICBb
PGZmZmZmZmZmODE0ZjcyYmI+XSBzY3J1cCsweDEwYi8weDEyMA0KWyAgMjMxLjE3MTA1Ml0g
IFs8ZmZmZmZmZmY4MTRmNzQzMD5dIGxmKzB4NzAvMHg4MA0KWyAgMjMxLjE3MTA1NF0gIFs8
ZmZmZmZmZmY4MTRmODJhNj5dIHZ0X2NvbnNvbGVfcHJpbnQrMHgyODYvMHgzYjANClsgIDIz
MS4xNzEwNTZdICBbPGZmZmZmZmZmODEwZWQyYzM+XSBjYWxsX2NvbnNvbGVfZHJpdmVycy5j
b25zdHByb3AuMjErMHg5My8weGIwDQpbICAyMzEuMTcxMDU3XSAgWzxmZmZmZmZmZjgxMGVl
MThjPl0gY29uc29sZV91bmxvY2srMHgzZmMvMHg0ODANClsgIDIzMS4xNzEwNThdICBbPGZm
ZmZmZmZmODEwZWU1NjY+XSB2cHJpbnRrX2VtaXQrMHgxOTYvMHg1ODANClsgIDIzMS4xNzEw
NjFdICBbPGZmZmZmZmZmODFhOTFhMmI+XSBwcmludGsrMHg0OC8weDRhDQpbICAyMzEuMTcx
MDYzXSAgWzxmZmZmZmZmZjgxMGY4NjIxPl0gcHJpbnRfY3B1X3N0YWxsX2luZm8uaXNyYS41
NCsweDExMS8weDE1MA0KWyAgMjMxLjE3MTA2NF0gIFs8ZmZmZmZmZmY4MTBmOWNkOT5dIHJj
dV9jaGVja19jYWxsYmFja3MrMHgzMjkvMHg2MjANClsgIDIzMS4xNzEwNjZdICBbPGZmZmZm
ZmZmODEwZTBhZTk+XSA/IHRyYWNlX2hhcmRpcnFzX29mZl9jYWxsZXIrMHhiOS8weDE2MA0K
WyAgMjMxLjE3MTA2N10gIFs8ZmZmZmZmZmY4MTBlMGI5ZD5dID8gdHJhY2VfaGFyZGlycXNf
b2ZmKzB4ZC8weDEwDQpbICAyMzEuMTcxMDY5XSAgWzxmZmZmZmZmZjgxMTA0ODcwPl0gPyB0
aWNrX25vaHpfaGFuZGxlcisweGEwLzB4YTANClsgIDIzMS4xNzEwNzFdICBbPGZmZmZmZmZm
ODEwYjEyMzM+XSB1cGRhdGVfcHJvY2Vzc190aW1lcysweDQzLzB4ODANClsgIDIzMS4xNzEw
NzJdICBbPGZmZmZmZmZmODExMDQ3OTk+XSB0aWNrX3NjaGVkX2hhbmRsZS5pc3JhLjE0KzB4
MjkvMHg2MA0KWyAgMjMxLjE3MTA3M10gIFs8ZmZmZmZmZmY4MTEwNDhiNz5dIHRpY2tfc2No
ZWRfdGltZXIrMHg0Ny8weDcwDQpbICAyMzEuMTcxMDc1XSAgWzxmZmZmZmZmZjgxMGM4NDRm
Pl0gX19ydW5faHJ0aW1lci5pc3JhLjI4KzB4NmYvMHgxMjANClsgIDIzMS4xNzEwNzddICBb
PGZmZmZmZmZmODEwYzhkMDU+XSBocnRpbWVyX2ludGVycnVwdCsweGY1LzB4MjQwDQpbICAy
MzEuMTcxMDc5XSAgWzxmZmZmZmZmZjgxMDA4ZGFmPl0geGVuX3RpbWVyX2ludGVycnVwdCsw
eDJmLzB4MTUwDQpbICAyMzEuMTcxMDgxXSAgWzxmZmZmZmZmZjgxYWEwNDBiPl0gPyBfcmF3
X3NwaW5fdW5sb2NrX2lycSsweDJiLzB4NTANClsgIDIzMS4xNzEwODJdICBbPGZmZmZmZmZm
ODEwZTMxY2I+XSA/IHRyYWNlX2hhcmRpcnFzX29uX2NhbGxlcisweGZiLzB4MjQwDQpbICAy
MzEuMTcxMDg0XSAgWzxmZmZmZmZmZjgxMGYwNjc3Pl0gaGFuZGxlX2lycV9ldmVudF9wZXJj
cHUrMHg0Ny8weDE5MA0KWyAgMjMxLjE3MTA4N10gIFs8ZmZmZmZmZmY4MTRjYjExOT5dID8g
aW5mb19mb3JfaXJxKzB4OS8weDIwDQpbICAyMzEuMTcxMDg5XSAgWzxmZmZmZmZmZjgxMGYz
OTYyPl0gaGFuZGxlX3BlcmNwdV9pcnErMHg0Mi8weDYwDQpbICAyMzEuMTcxMDkxXSAgWzxm
ZmZmZmZmZjgxNGNkYWNkPl0gZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50cysweDEyZC8weDE0
MA0KWyAgMjMxLjE3MTA5Ml0gIFs8ZmZmZmZmZmY4MTRjYWM3OD5dIF9feGVuX2V2dGNobl9k
b191cGNhbGwrMHg0OC8weDkwDQpbICAyMzEuMTcxMDk0XSAgWzxmZmZmZmZmZjgxNGNjODFh
Pl0geGVuX2V2dGNobl9kb191cGNhbGwrMHgyYS8weDQwDQpbICAyMzEuMTcxMDk2XSAgWzxm
ZmZmZmZmZjgxYWEyODVlPl0geGVuX2RvX2h5cGVydmlzb3JfY2FsbGJhY2srMHgxZS8weDMw
DQpbICAyMzEuMTcxMDk3XSAgPEVPST4gDQpbICAyMzEuMTcxMDk5XSAgWzxmZmZmZmZmZjgx
MTA5YTVhPl0gPyBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4OGEvMHhjMA0KWyAgMjMxLjE3MTEw
MF0gIFs8ZmZmZmZmZmY4MTEwOWE4MT5dID8gZ2VuZXJpY19leGVjX3NpbmdsZSsweGIxLzB4
YzANClsgIDIzMS4xNzExMDJdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95
X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMzEuMTcxMTA0XSAgWzxmZmZm
ZmZmZjgxMTA5Y2M1Pl0gPyBzbXBfY2FsbF9mdW5jdGlvbl9zaW5nbGUrMHhlNS8weDFlMA0K
WyAgMjMxLjE3MTEwNV0gIFs8ZmZmZmZmZmY4MWFhMGIzMz5dID8gcmV0aW50X3Jlc3RvcmVf
YXJncysweDEzLzB4MTMNClsgIDIzMS4xNzExMDddICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/
IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMzEuMTcx
MTA4XSAgWzxmZmZmZmZmZjgxMTBhMDNhPl0gPyBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4
MjdhLzB4MmEwDQpbICAyMzEuMTcxMTEwXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5f
ZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjMxLjE3MTExMl0g
IFs8ZmZmZmZmZmY4MTAwODQxZT5dID8geGVuX2V4aXRfbW1hcCsweGNlLzB4MWEwDQpbICAy
MzEuMTcxMTEzXSAgWzxmZmZmZmZmZjgxMDAxMjJhPl0gPyB4ZW5faHlwZXJjYWxsX3hlbl92
ZXJzaW9uKzB4YS8weDIwDQpbICAyMzEuMTcxMTE1XSAgWzxmZmZmZmZmZjgxMTY5NDI2Pl0g
PyBleGl0X21tYXArMHg1Ni8weDE4MA0KWyAgMjMxLjE3MTExN10gIFs8ZmZmZmZmZmY4MTBl
NzE3YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAyMzEuMTcxMTE4XSAgWzxm
ZmZmZmZmZjgxMWRkZGUwPl0gPyBleGl0X2FpbysweGIwLzB4ZTANClsgIDIzMS4xNzExMjBd
ICBbPGZmZmZmZmZmODExZGRkNDQ+XSA/IGV4aXRfYWlvKzB4MTQvMHhlMA0KWyAgMjMxLjE3
MTEyMV0gIFs8ZmZmZmZmZmY4MTBhMjY4OT5dID8gbW1wdXQrMHg1OS8weGUwDQpbICAyMzEu
MTcxMTIzXSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gPyBmbHVzaF9vbGRfZXhlYysweDQzOS8w
eDgzMA0KWyAgMjMxLjE3MTEyNV0gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dID8gbG9hZF9lbGZf
YmluYXJ5KzB4MzJhLzB4MWEwMA0KWyAgMjMxLjE3MTEyNl0gIFs8ZmZmZmZmZmY4MWE5ZmZl
Nj5dID8gX3Jhd19yZWFkX3VubG9jaysweDI2LzB4MzANClsgIDIzMS4xNzExMjhdICBbPGZm
ZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWlyZSsweGVjLzB4MTEwDQpbICAyMzEuMTcx
MTI5XSAgWzxmZmZmZmZmZjgxMTk5MjQzPl0gPyBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHhj
My8weDFiMA0KWyAgMjMxLjE3MTEzMV0gIFs8ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19h
Y3F1aXJlKzB4ZWMvMHgxMTANClsgIDIzMS4xNzExMzJdICBbPGZmZmZmZmZmODEwZTcxN2E+
XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMjMxLjE3MTEzNF0gIFs8ZmZmZmZm
ZmY4MTE5OTIwND5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDIz
MS4xNzExMzZdICBbPGZmZmZmZmZmODExOWIyNTI+XSA/IGRvX2V4ZWN2ZV9jb21tb24uaXNy
YS4zMSsweDU5Mi8weDcxMA0KWyAgMjMxLjE3MTEzN10gIFs8ZmZmZmZmZmY4MTE5YjFlNz5d
ID8gZG9fZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAyMzEuMTcxMTM5
XSAgWzxmZmZmZmZmZjgxMTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjAN
ClsgIDIzMS4xNzExNDFdICBbPGZmZmZmZmZmODExOWIzZTM+XSA/IGRvX2V4ZWN2ZSsweDEz
LzB4MjANClsgIDIzMS4xNzExNDJdICBbPGZmZmZmZmZmODExOWI2NTg+XSA/IFN5U19leGVj
dmUrMHgzOC8weDYwDQpbICAyMzEuMTcxMTQzXSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gPyBz
dHViX2V4ZWN2ZSsweDY5LzB4YTANClsgIDIzMS4xNzExNThdIENvZGU6IGMwIDhiIDU1IGIw
IDRkIDg5IGY4IDRkIDg5IGY0IGI5IDA4IDAwIDAwIDAwIGViIDJmIDY2IDBmIDFmIDQ0IDAw
IDAwIDQxIDBmIGJlIDA0IDI0IDI5IGY5IDRkIDhkIDY4IDA0IGQzIGY4IDQ0IDIxIGQwIDQx
IDhiIDA0IDgxIDw0ND4gMjEgZDggMzEgZjAgNDEgODkgMDAgODUgYzkgNzUgMDYgNDkgODMg
YzQgMDEgYjEgMDggNGQgODkgZTggDQpbICAyMzEuMTcxMTc3XSBOTUkgYmFja3RyYWNlIGZv
ciBjcHUgNQ0KWyAgMjMxLjE3MTE3OV0gQ1BVOiA1IFBJRDogMCBDb21tOiBzd2FwcGVyLzUg
Tm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDIzMS4x
NzExODBdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQw
KSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAyMzEuMTcxMTgxXSB0YXNrOiBmZmZm
ODgwMDU5YmM4MDAwIHRpOiBmZmZmODgwMDU5YmM2MDAwIHRhc2sudGk6IGZmZmY4ODAwNTli
YzYwMDANClsgIDIzMS4xNzExODZdIFJJUDogZTAzMDpbPGZmZmZmZmZmODEwMDEzMGE+XSAg
WzxmZmZmZmZmZjgxMDAxMzBhPl0geGVuX2h5cGVyY2FsbF92Y3B1X29wKzB4YS8weDIwDQpb
ICAyMzEuMTcxMTg3XSBSU1A6IGUwMmI6ZmZmZjg4MDA1Zjc0M2MxMCAgRUZMQUdTOiAwMDAw
MDA0Ng0KWyAgMjMxLjE3MTE4OF0gUkFYOiAwMDAwMDAwMDAwMDAwMDAwIFJCWDogMDAwMDAw
MDAwMDAwMDAwNSBSQ1g6IGZmZmZmZmZmODEwMDEzMGENClsgIDIzMS4xNzExODldIFJEWDog
MDAwMDAwMDBkZWFkYmVlZiBSU0k6IDAwMDAwMDAwZGVhZGJlZWYgUkRJOiAwMDAwMDAwMGRl
YWRiZWVmDQpbICAyMzEuMTcxMTkwXSBSQlA6IGZmZmY4ODAwNWY3NDNjMjggUjA4OiBmZmZm
ZmZmZjgyMmU3MzAwIFIwOTogMDAwMDAwMDAwMDAwMDAwMA0KWyAgMjMxLjE3MTE5MF0gUjEw
OiAwMDAwMDAwMDAwMDAwMDAwIFIxMTogMDAwMDAwMDAwMDAwMDI0NiBSMTI6IGZmZmZmZmZm
ODIyZTczMDANClsgIDIzMS4xNzExOTFdIFIxMzogZmZmZmZmZmY4MjJlNzMwMCBSMTQ6IDAw
MDAwMDAwMDAwMDAwMDUgUjE1OiAwMDAwMDAwMDAwMDAwMDA1DQpbICAyMzEuMTcxMTk0XSBG
UzogIDAwMDA3ZmQwOTExNmE3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1Zjc0MDAwMCgwMDAwKSBr
bmxHUzowMDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxMTk0XSBDUzogIGUwMzMgRFM6IDAw
MmIgRVM6IDAwMmIgQ1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAyMzEuMTcxMTk1XSBDUjI6
IDAwMDA3ZjdiZWIxZTJkZTIgQ1IzOiAwMDAwMDAwMDU3ZDkwMDAwIENSNDogMDAwMDAwMDAw
MDAwMDY2MA0KWyAgMjMxLjE3MTE5N10gU3RhY2s6DQpbICAyMzEuMTcxMTk5XSAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDA1IGZmZmZmZmZmODE0Y2M3Y2MgZmZmZjg4MDA1
Zjc0M2M1OA0KWyAgMjMxLjE3MTIwMV0gIGZmZmZmZmZmODEwMGFkNGEgMDAwMDAwMDAwMDAw
MjcxMCBmZmZmZmZmZjgyMjQzMWMwIGZmZmZmZmZmODIyZTczMDgNClsgIDIzMS4xNzEyMDJd
ICBmZmZmZmZmZjgyMjQzMWMwIGZmZmY4ODAwNWY3NDNjNjggZmZmZmZmZmY4MTAwYmI2OSBm
ZmZmODgwMDVmNzQzYzg4DQpbICAyMzEuMTcxMjAzXSBDYWxsIFRyYWNlOg0KWyAgMjMxLjE3
MTIwNV0gIDxJUlE+IA0KWyAgMjMxLjE3MTIwOV0gIFs8ZmZmZmZmZmY4MTRjYzdjYz5dID8g
eGVuX3NlbmRfSVBJX29uZSsweDNjLzB4NjANClsgIDIzMS4xNzEyMTJdICBbPGZmZmZmZmZm
ODEwMGFkNGE+XSBfX3hlbl9zZW5kX0lQSV9tYXNrKzB4MmEvMHg1MA0KWyAgMjMxLjE3MTIx
NF0gIFs8ZmZmZmZmZmY4MTAwYmI2OT5dIHhlbl9zZW5kX0lQSV9hbGwrMHgyOS8weDgwDQpb
ICAyMzEuMTcxMjE2XSAgWzxmZmZmZmZmZjgxMDNmZmZhPl0gYXJjaF90cmlnZ2VyX2FsbF9j
cHVfYmFja3RyYWNlKzB4NGEvMHg4MA0KWyAgMjMxLjE3MTIxOV0gIFs8ZmZmZmZmZmY4MTBm
OWZiMz5dIHJjdV9jaGVja19jYWxsYmFja3MrMHg2MDMvMHg2MjANClsgIDIzMS4xNzEyMjFd
ICBbPGZmZmZmZmZmODExMDQ4NzA+XSA/IHRpY2tfbm9oel9oYW5kbGVyKzB4YTAvMHhhMA0K
WyAgMjMxLjE3MTIyM10gIFs8ZmZmZmZmZmY4MTBiMTIzMz5dIHVwZGF0ZV9wcm9jZXNzX3Rp
bWVzKzB4NDMvMHg4MA0KWyAgMjMxLjE3MTIyNF0gIFs8ZmZmZmZmZmY4MTEwNDc5OT5dIHRp
Y2tfc2NoZWRfaGFuZGxlLmlzcmEuMTQrMHgyOS8weDYwDQpbICAyMzEuMTcxMjI1XSAgWzxm
ZmZmZmZmZjgxMTA0OGI3Pl0gdGlja19zY2hlZF90aW1lcisweDQ3LzB4NzANClsgIDIzMS4x
NzEyMjhdICBbPGZmZmZmZmZmODEwYzg0NGY+XSBfX3J1bl9ocnRpbWVyLmlzcmEuMjgrMHg2
Zi8weDEyMA0KWyAgMjMxLjE3MTIyOV0gIFs8ZmZmZmZmZmY4MTBjOGQwNT5dIGhydGltZXJf
aW50ZXJydXB0KzB4ZjUvMHgyNDANClsgIDIzMS4xNzEyMzJdICBbPGZmZmZmZmZmODEwMDhk
YWY+XSB4ZW5fdGltZXJfaW50ZXJydXB0KzB4MmYvMHgxNTANClsgIDIzMS4xNzEyMzVdICBb
PGZmZmZmZmZmODEwZTBhZTk+XSA/IHRyYWNlX2hhcmRpcnFzX29mZl9jYWxsZXIrMHhiOS8w
eDE2MA0KWyAgMjMxLjE3MTIzN10gIFs8ZmZmZmZmZmY4MTBmMDY3Nz5dIGhhbmRsZV9pcnFf
ZXZlbnRfcGVyY3B1KzB4NDcvMHgxOTANClsgIDIzMS4xNzEyMzldICBbPGZmZmZmZmZmODE0
Y2IxMTk+XSA/IGluZm9fZm9yX2lycSsweDkvMHgyMA0KWyAgMjMxLjE3MTI0MV0gIFs8ZmZm
ZmZmZmY4MTBmMzk2Mj5dIGhhbmRsZV9wZXJjcHVfaXJxKzB4NDIvMHg2MA0KWyAgMjMxLjE3
MTI0M10gIFs8ZmZmZmZmZmY4MTRjZGFjZD5dIGV2dGNobl9maWZvX2hhbmRsZV9ldmVudHMr
MHgxMmQvMHgxNDANClsgIDIzMS4xNzEyNDVdICBbPGZmZmZmZmZmODEwYzljOTA+XSA/IHJh
d19ub3RpZmllcl9jYWxsX2NoYWluKzB4MjAvMHgyMA0KWyAgMjMxLjE3MTI0N10gIFs8ZmZm
ZmZmZmY4MTRjYWM3OD5dIF9feGVuX2V2dGNobl9kb191cGNhbGwrMHg0OC8weDkwDQpbICAy
MzEuMTcxMjQ4XSAgWzxmZmZmZmZmZjgxNGNjODFhPl0geGVuX2V2dGNobl9kb191cGNhbGwr
MHgyYS8weDQwDQpbICAyMzEuMTcxMjUyXSAgWzxmZmZmZmZmZjgxYWEyODVlPl0geGVuX2Rv
X2h5cGVydmlzb3JfY2FsbGJhY2srMHgxZS8weDMwDQpbICAyMzEuMTcxMjUzXSAgPEVPST4g
DQpbICAyMzEuMTcxMjU0XSAgWzxmZmZmZmZmZjgxMDAxM2FhPl0gPyB4ZW5faHlwZXJjYWxs
X3NjaGVkX29wKzB4YS8weDIwDQpbICAyMzEuMTcxMjU1XSAgWzxmZmZmZmZmZjgxMDAxM2Fh
Pl0gPyB4ZW5faHlwZXJjYWxsX3NjaGVkX29wKzB4YS8weDIwDQpbICAyMzEuMTcxMjU4XSAg
WzxmZmZmZmZmZjgxMDA4YzEwPl0gPyB4ZW5fc2FmZV9oYWx0KzB4MTAvMHgyMA0KWyAgMjMx
LjE3MTI2MF0gIFs8ZmZmZmZmZmY4MTAxODA1OD5dID8gZGVmYXVsdF9pZGxlKzB4MTgvMHgy
MA0KWyAgMjMxLjE3MTI2MV0gIFs8ZmZmZmZmZmY4MTAxODg2ZT5dID8gYXJjaF9jcHVfaWRs
ZSsweDJlLzB4NDANClsgIDIzMS4xNzEyNjNdICBbPGZmZmZmZmZmODEwZWZiYjE+XSA/IGNw
dV9zdGFydHVwX2VudHJ5KzB4NzEvMHgxZjANClsgIDIzMS4xNzEyNjRdICBbPGZmZmZmZmZm
ODEwMGI5YzU+XSA/IGNwdV9icmluZ3VwX2FuZF9pZGxlKzB4MjUvMHg0MA0KWyAgMjMxLjE3
MTI3OF0gQ29kZTogY2MgNTEgNDEgNTMgNTAgYjggMTcgMDAgMDAgMDAgMGYgMDUgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgNTEgNDEgNTMgYjggMTggMDAgMDAgMDAgMGYgMDUgPDQxPiA1YiA1OSBjMyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyANClsgIDIzMS4x
NzEyODddIE5NSSBiYWNrdHJhY2UgZm9yIGNwdSAzDQpbICAyMzEuMTcxMjkwXSBDUFU6IDMg
UElEOiAwIENvbW06IHN3YXBwZXIvMyBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3
LXhlbmRldmVsKyAjMQ0KWyAgMjMxLjE3MTI5MV0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2
NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsg
IDIzMS4xNzEyOTJdIHRhc2s6IGZmZmY4ODAwNTliYjUyZDAgdGk6IGZmZmY4ODAwNTliYzIw
MDAgdGFzay50aTogZmZmZjg4MDA1OWJjMjAwMA0KWyAgMjMxLjE3MTI5OF0gUklQOiBlMDMw
Ols8ZmZmZmZmZmY4MTAwMTNhYT5dICBbPGZmZmZmZmZmODEwMDEzYWE+XSB4ZW5faHlwZXJj
YWxsX3NjaGVkX29wKzB4YS8weDIwDQpbICAyMzEuMTcxMjk5XSBSU1A6IGUwMmI6ZmZmZjg4
MDA1OWJjM2VjOCAgRUZMQUdTOiAwMDAwMDI0Ng0KWyAgMjMxLjE3MTMwMF0gUkFYOiAwMDAw
MDAwMDAwMDAwMDAwIFJCWDogZmZmZjg4MDA1OWJjM2ZkOCBSQ1g6IGZmZmZmZmZmODEwMDEz
YWENClsgIDIzMS4xNzEzMDFdIFJEWDogZmZmZjg4MDA1OWJiNTJkMCBSU0k6IDAwMDAwMDAw
ZGVhZGJlZWYgUkRJOiAwMDAwMDAwMGRlYWRiZWVmDQpbICAyMzEuMTcxMzAxXSBSQlA6IGZm
ZmY4ODAwNTliYzNlZTAgUjA4OiAwMDAwMDAwMDAwMDAwMDAwIFIwOTogMDAwMDAwMDAwMDAw
MDAwMQ0KWyAgMjMxLjE3MTMwMl0gUjEwOiAwMDAwMDAwMDAwMDAwMDAwIFIxMTogMDAwMDAw
MDAwMDAwMDI0NiBSMTI6IGZmZmZmZmZmODIyZTczMDANClsgIDIzMS4xNzEzMDJdIFIxMzog
ZmZmZjg4MDA1OWJjM2ZkOCBSMTQ6IGZmZmY4ODAwNTliYzNmZDggUjE1OiBmZmZmODgwMDU5
YmMzZmQ4DQpbICAyMzEuMTcxMzA2XSBGUzogIDAwMDA3ZmQwOTExNmE3MDAoMDAwMCkgR1M6
ZmZmZjg4MDA1ZjZjMDAwMCgwMDAwKSBrbmxHUzowMDAwMDAwMDAwMDAwMDAwDQpbICAyMzEu
MTcxMzA2XSBDUzogIGUwMzMgRFM6IDAwMmIgRVM6IDAwMmIgQ1IwOiAwMDAwMDAwMDgwMDUw
MDNiDQpbICAyMzEuMTcxMzA3XSBDUjI6IGZmZmZmZmZmZmY2MDA0MDAgQ1IzOiAwMDAwMDAw
MDU3ZDkwMDAwIENSNDogMDAwMDAwMDAwMDAwMDY2MA0KWyAgMjMxLjE3MTMwOF0gU3RhY2s6
DQpbICAyMzEuMTcxMzExXSAgMDAwMDAwMDAwMDAwMDAwMCAwMTAwMDAwMDAwMDAwMDAwIGZm
ZmZmZmZmODEwMDhjMTAgZmZmZjg4MDA1OWJjM2VmMA0KWyAgMjMxLjE3MTMxM10gIGZmZmZm
ZmZmODEwMTgwNTggZmZmZjg4MDA1OWJjM2YwMCBmZmZmZmZmZjgxMDE4ODZlIGZmZmY4ODAw
NTliYzNmNDANClsgIDIzMS4xNzEzMTRdICBmZmZmZmZmZjgxMGVmYmIxIDAwMDAwMDAwMDAw
MDAwMGEgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxMzE1
XSBDYWxsIFRyYWNlOg0KWyAgMjMxLjE3MTMxOV0gIFs8ZmZmZmZmZmY4MTAwOGMxMD5dID8g
eGVuX3NhZmVfaGFsdCsweDEwLzB4MjANClsgIDIzMS4xNzEzMjFdICBbPGZmZmZmZmZmODEw
MTgwNTg+XSBkZWZhdWx0X2lkbGUrMHgxOC8weDIwDQpbICAyMzEuMTcxMzIyXSAgWzxmZmZm
ZmZmZjgxMDE4ODZlPl0gYXJjaF9jcHVfaWRsZSsweDJlLzB4NDANClsgIDIzMS4xNzEzMjVd
ICBbPGZmZmZmZmZmODEwZWZiYjE+XSBjcHVfc3RhcnR1cF9lbnRyeSsweDcxLzB4MWYwDQpb
ICAyMzEuMTcxMzI2XSAgWzxmZmZmZmZmZjgxMDBiOWM1Pl0gY3B1X2JyaW5ndXBfYW5kX2lk
bGUrMHgyNS8weDQwDQpbICAyMzEuMTcxMzQxXSBDb2RlOiBjYyA1MSA0MSA1MyBiOCAxYyAw
MCAwMCAwMCAwZiAwNSA0MSA1YiA1OSBjMyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyA1MSA0MSA1MyBiOCAxZCAwMCAwMCAwMCAwZiAw
NSA8NDE+IDViIDU5IGMzIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNj
IGNjIGNjIGNjIGNjIA0KWyAgMjMxLjE3MTM2Nl0gTk1JIGJhY2t0cmFjZSBmb3IgY3B1IDAN
ClsgIDIzMS4xNzEzNzBdIENQVTogMCBQSUQ6IDAgQ29tbTogc3dhcHBlci8wIE5vdCB0YWlu
dGVkIDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwrICMxDQpbICAyMzEuMTcxMzcwXSBI
YXJkd2FyZSBuYW1lOiBNU0kgTVMtNzY0MC84OTBGWEEtR0Q3MCAoTVMtNzY0MCkgICwgQklP
UyBWMS44QjEgMDkvMTMvMjAxMA0KWyAgMjMxLjE3MTM3Ml0gdGFzazogZmZmZmZmZmY4MjIx
MzRlMCB0aTogZmZmZmZmZmY4MjIwMDAwMCB0YXNrLnRpOiBmZmZmZmZmZjgyMjAwMDAwDQpb
ICAyMzEuMTcxMzc3XSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMDAxM2FhPl0gIFs8ZmZmZmZm
ZmY4MTAwMTNhYT5dIHhlbl9oeXBlcmNhbGxfc2NoZWRfb3ArMHhhLzB4MjANClsgIDIzMS4x
NzEzNzldIFJTUDogZTAyYjpmZmZmZmZmZjgyMjAxZTQwICBFRkxBR1M6IDAwMDAwMjQ2DQpb
ICAyMzEuMTcxMzc5XSBSQVg6IDAwMDAwMDAwMDAwMDAwMDAgUkJYOiBmZmZmZmZmZjgyMjAx
ZmQ4IFJDWDogZmZmZmZmZmY4MTAwMTNhYQ0KWyAgMjMxLjE3MTM4MF0gUkRYOiBmZmZmZmZm
ZjgyMjEzNGUwIFJTSTogMDAwMDAwMDBkZWFkYmVlZiBSREk6IDAwMDAwMDAwZGVhZGJlZWYN
ClsgIDIzMS4xNzEzODFdIFJCUDogZmZmZmZmZmY4MjIwMWU1OCBSMDg6IDAwMDAwMDAwMDAw
MDAwMDAgUjA5OiAwMDAwMDAwMDAwMDAwMDAxDQpbICAyMzEuMTcxMzgxXSBSMTA6IDAwMDAw
MDAwMDAwMDAwMDAgUjExOiAwMDAwMDAwMDAwMDAwMjQ2IFIxMjogZmZmZmZmZmY4MjJlNzMw
MA0KWyAgMjMxLjE3MTM4Ml0gUjEzOiBmZmZmZmZmZjgyMjAxZmQ4IFIxNDogZmZmZmZmZmY4
MjIwMWZkOCBSMTU6IGZmZmZmZmZmODIyMDFmZDgNClsgIDIzMS4xNzEzODZdIEZTOiAgMDAw
MDdmZDA5MTE2YTcwMCgwMDAwKSBHUzpmZmZmODgwMDVmNjAwMDAwKDAwMDApIGtubEdTOjAw
MDAwMDAwMDAwMDAwMDANClsgIDIzMS4xNzEzODZdIENTOiAgZTAzMyBEUzogMDAwMCBFUzog
MDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDIzMS4xNzEzODddIENSMjogZmZmZmZm
ZmZmZjYwMDQwMCBDUjM6IDAwMDAwMDAwNTdkOTAwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYw
DQpbICAyMzEuMTcxMzg4XSBTdGFjazoNClsgIDIzMS4xNzEzOTFdICAwMDAwMDAwMDAwMDAw
MDAwIDAxMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwOGMxMCBmZmZmZmZmZjgyMjAxZTY4
DQpbICAyMzEuMTcxMzkzXSAgZmZmZmZmZmY4MTAxODA1OCBmZmZmZmZmZjgyMjAxZTc4IGZm
ZmZmZmZmODEwMTg4NmUgZmZmZmZmZmY4MjIwMWViOA0KWyAgMjMxLjE3MTM5NV0gIGZmZmZm
ZmZmODEwZWZiYjEgZmZmZmZmZmY4MjNhNjBjMCAwMDAwMDAwMDAwMDAwMDAyIGZmZmZmZmZm
ODIzOWQwMjANClsgIDIzMS4xNzEzOTVdIENhbGwgVHJhY2U6DQpbICAyMzEuMTcxNDAwXSAg
WzxmZmZmZmZmZjgxMDA4YzEwPl0gPyB4ZW5fc2FmZV9oYWx0KzB4MTAvMHgyMA0KWyAgMjMx
LjE3MTQwMl0gIFs8ZmZmZmZmZmY4MTAxODA1OD5dIGRlZmF1bHRfaWRsZSsweDE4LzB4MjAN
ClsgIDIzMS4xNzE0MDNdICBbPGZmZmZmZmZmODEwMTg4NmU+XSBhcmNoX2NwdV9pZGxlKzB4
MmUvMHg0MA0KWyAgMjMxLjE3MTQwNV0gIFs8ZmZmZmZmZmY4MTBlZmJiMT5dIGNwdV9zdGFy
dHVwX2VudHJ5KzB4NzEvMHgxZjANClsgIDIzMS4xNzE0MTBdICBbPGZmZmZmZmZmODFhOGE4
MTc+XSByZXN0X2luaXQrMHhiNy8weGMwDQpbICAyMzEuMTcxNDExXSAgWzxmZmZmZmZmZjgx
YThhNzYwPl0gPyBjc3VtX3BhcnRpYWxfY29weV9nZW5lcmljKzB4MTcwLzB4MTcwDQpbICAy
MzEuMTcxNDE0XSAgWzxmZmZmZmZmZjgyMzBkZjAxPl0gc3RhcnRfa2VybmVsKzB4M2VlLzB4
M2ZiDQpbICAyMzEuMTcxNDE1XSAgWzxmZmZmZmZmZjgyMzBkOTEyPl0gPyByZXBhaXJfZW52
X3N0cmluZysweDVlLzB4NWUNClsgIDIzMS4xNzE0MTddICBbPGZmZmZmZmZmODIzMGQ1Zjg+
XSB4ODZfNjRfc3RhcnRfcmVzZXJ2YXRpb25zKzB4MmEvMHgyYw0KWyAgMjMxLjE3MTQxOV0g
IFs8ZmZmZmZmZmY4MjMxMGRjNj5dIHhlbl9zdGFydF9rZXJuZWwrMHg1NDYvMHg1NDgNClsg
IDIzMS4xNzE0MzJdIENvZGU6IGNjIDUxIDQxIDUzIGI4IDFjIDAwIDAwIDAwIDBmIDA1IDQx
IDViIDU5IGMzIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNj
IGNjIGNjIGNjIDUxIDQxIDUzIGI4IDFkIDAwIDAwIDAwIDBmIDA1IDw0MT4gNWIgNTkgYzMg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgDQpb
ICAyMzEuMTcxNDQ1XSBOTUkgYmFja3RyYWNlIGZvciBjcHUgMg0KWyAgMjMxLjE3MTQ0OV0g
Q1BVOiAyIFBJRDogMCBDb21tOiBzd2FwcGVyLzIgTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDIzMS4xNzE0NTBdIEhhcmR3YXJlIG5hbWU6IE1T
SSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8y
MDEwDQpbICAyMzEuMTcxNDUxXSB0YXNrOiBmZmZmODgwMDU5YmI0MjQwIHRpOiBmZmZmODgw
MDU5YmMwMDAwIHRhc2sudGk6IGZmZmY4ODAwNTliYzAwMDANClsgIDIzMS4xNzE0NTddIFJJ
UDogZTAzMDpbPGZmZmZmZmZmODEwMDEzYWE+XSAgWzxmZmZmZmZmZjgxMDAxM2FhPl0geGVu
X2h5cGVyY2FsbF9zY2hlZF9vcCsweGEvMHgyMA0KWyAgMjMxLjE3MTQ1OF0gUlNQOiBlMDJi
OmZmZmY4ODAwNTliYzFlYzggIEVGTEFHUzogMDAwMDAyNDYNClsgIDIzMS4xNzE0NThdIFJB
WDogMDAwMDAwMDAwMDAwMDAwMCBSQlg6IGZmZmY4ODAwNTliYzFmZDggUkNYOiBmZmZmZmZm
ZjgxMDAxM2FhDQpbICAyMzEuMTcxNDU5XSBSRFg6IGZmZmY4ODAwNTliYjQyNDAgUlNJOiAw
MDAwMDAwMGRlYWRiZWVmIFJESTogMDAwMDAwMDBkZWFkYmVlZg0KWyAgMjMxLjE3MTQ2MF0g
UkJQOiBmZmZmODgwMDU5YmMxZWUwIFIwODogMDAwMDAwMDAwMDAwMDAwMCBSMDk6IDAwMDAw
MDAwMDAwMDAwMDENClsgIDIzMS4xNzE0NjFdIFIxMDogMDAwMDAwMDAwMDAwMDAwMCBSMTE6
IDAwMDAwMDAwMDAwMDAyNDYgUjEyOiBmZmZmZmZmZjgyMmU3MzAwDQpbICAyMzEuMTcxNDYx
XSBSMTM6IGZmZmY4ODAwNTliYzFmZDggUjE0OiBmZmZmODgwMDU5YmMxZmQ4IFIxNTogZmZm
Zjg4MDA1OWJjMWZkOA0KWyAgMjMxLjE3MTQ2NV0gRlM6ICAwMDAwN2YxZTI2NWZiNzAwKDAw
MDApIEdTOmZmZmY4ODAwNWY2ODAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0K
WyAgMjMxLjE3MTQ2NV0gQ1M6ICBlMDMzIERTOiAwMDJiIEVTOiAwMDJiIENSMDogMDAwMDAw
MDA4MDA1MDAzYg0KWyAgMjMxLjE3MTQ2Nl0gQ1IyOiBmZmZmZmZmZmZmNjAwNDAwIENSMzog
MDAwMDAwMDA1OTM5YTAwMCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDIzMS4xNzE0Njdd
IFN0YWNrOg0KWyAgMjMxLjE3MTQ3MV0gIDAwMDAwMDAwMDAwMDAwMDAgMDEwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxMDA4YzEwIGZmZmY4ODAwNTliYzFlZjANClsgIDIzMS4xNzE0NzNd
ICBmZmZmZmZmZjgxMDE4MDU4IGZmZmY4ODAwNTliYzFmMDAgZmZmZmZmZmY4MTAxODg2ZSBm
ZmZmODgwMDU5YmMxZjQwDQpbICAyMzEuMTcxNDc0XSAgZmZmZmZmZmY4MTBlZmJiMSAwMDAw
MDAwMDAwMDAwMDBhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KWyAgMjMx
LjE3MTQ3NV0gQ2FsbCBUcmFjZToNClsgIDIzMS4xNzE0NzldICBbPGZmZmZmZmZmODEwMDhj
MTA+XSA/IHhlbl9zYWZlX2hhbHQrMHgxMC8weDIwDQpbICAyMzEuMTcxNDgxXSAgWzxmZmZm
ZmZmZjgxMDE4MDU4Pl0gZGVmYXVsdF9pZGxlKzB4MTgvMHgyMA0KWyAgMjMxLjE3MTQ4M10g
IFs8ZmZmZmZmZmY4MTAxODg2ZT5dIGFyY2hfY3B1X2lkbGUrMHgyZS8weDQwDQpbICAyMzEu
MTcxNDg2XSAgWzxmZmZmZmZmZjgxMGVmYmIxPl0gY3B1X3N0YXJ0dXBfZW50cnkrMHg3MS8w
eDFmMA0KWyAgMjMxLjE3MTQ4OF0gIFs8ZmZmZmZmZmY4MTAwYjljNT5dIGNwdV9icmluZ3Vw
X2FuZF9pZGxlKzB4MjUvMHg0MA0KWyAgMjMxLjE3MTUwM10gQ29kZTogY2MgNTEgNDEgNTMg
YjggMWMgMDAgMDAgMDAgMGYgMDUgNDEgNWIgNTkgYzMgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgNTEgNDEgNTMgYjggMWQgMDAgMDAg
MDAgMGYgMDUgPDQxPiA1YiA1OSBjMyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyANClsgIDIzMS4xNzE1MjddIE5NSSBiYWNrdHJhY2UgZm9y
IGNwdSA0DQpbICAyMzEuMTcxNTMwXSBDUFU6IDQgUElEOiAwIENvbW06IHN3YXBwZXIvNCBO
b3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMjMxLjE3
MTUzMF0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDAp
ICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDIzMS4xNzE1MzJdIHRhc2s6IGZmZmY4
ODAwNTliYjYzNjAgdGk6IGZmZmY4ODAwNTliYzQwMDAgdGFzay50aTogZmZmZjg4MDA1OWJj
NDAwMA0KWyAgMjMxLjE3MTUzNV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTAwMTNhYT5dICBb
PGZmZmZmZmZmODEwMDEzYWE+XSB4ZW5faHlwZXJjYWxsX3NjaGVkX29wKzB4YS8weDIwDQpb
ICAyMzEuMTcxNTM2XSBSU1A6IGUwMmI6ZmZmZjg4MDA1OWJjNWVjOCAgRUZMQUdTOiAwMDAw
MDI0Ng0KWyAgMjMxLjE3MTUzN10gUkFYOiAwMDAwMDAwMDAwMDAwMDAwIFJCWDogZmZmZjg4
MDA1OWJjNWZkOCBSQ1g6IGZmZmZmZmZmODEwMDEzYWENClsgIDIzMS4xNzE1MzhdIFJEWDog
ZmZmZjg4MDA1OWJiNjM2MCBSU0k6IDAwMDAwMDAwZGVhZGJlZWYgUkRJOiAwMDAwMDAwMGRl
YWRiZWVmDQpbICAyMzEuMTcxNTM4XSBSQlA6IGZmZmY4ODAwNTliYzVlZTAgUjA4OiAwMDAw
MDAwMDAwMDAwMDAwIFIwOTogMDAwMDAwMDAwMDAwMDAwMQ0KWyAgMjMxLjE3MTUzOV0gUjEw
OiAwMDAwMDAwMDAwMDAwMDAwIFIxMTogMDAwMDAwMDAwMDAwMDI0NiBSMTI6IGZmZmZmZmZm
ODIyZTczMDANClsgIDIzMS4xNzE1NDBdIFIxMzogZmZmZjg4MDA1OWJjNWZkOCBSMTQ6IGZm
ZmY4ODAwNTliYzVmZDggUjE1OiBmZmZmODgwMDU5YmM1ZmQ4DQpbICAyMzEuMTcxNTQzXSBG
UzogIDAwMDA3ZmQwOTExNmE3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1ZjcwMDAwMCgwMDAwKSBr
bmxHUzowMDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxNTQ0XSBDUzogIGUwMzMgRFM6IDAw
MmIgRVM6IDAwMmIgQ1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAyMzEuMTcxNTQ1XSBDUjI6
IDAwMDA3ZjI4ZWY1YWEwMDAgQ1IzOiAwMDAwMDAwMDU3ZDkwMDAwIENSNDogMDAwMDAwMDAw
MDAwMDY2MA0KWyAgMjMxLjE3MTU0Nl0gU3RhY2s6DQpbICAyMzEuMTcxNTQ5XSAgMDAwMDAw
MDAwMDAwMDAwMCAwMTAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDhjMTAgZmZmZjg4MDA1
OWJjNWVmMA0KWyAgMjMxLjE3MTU1MF0gIGZmZmZmZmZmODEwMTgwNTggZmZmZjg4MDA1OWJj
NWYwMCBmZmZmZmZmZjgxMDE4ODZlIGZmZmY4ODAwNTliYzVmNDANClsgIDIzMS4xNzE1NTJd
ICBmZmZmZmZmZjgxMGVmYmIxIDAwMDAwMDAwMDAwMDAwMGEgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxNTUyXSBDYWxsIFRyYWNlOg0KWyAgMjMxLjE3
MTU1NV0gIFs8ZmZmZmZmZmY4MTAwOGMxMD5dID8geGVuX3NhZmVfaGFsdCsweDEwLzB4MjAN
ClsgIDIzMS4xNzE1NTddICBbPGZmZmZmZmZmODEwMTgwNTg+XSBkZWZhdWx0X2lkbGUrMHgx
OC8weDIwDQpbICAyMzEuMTcxNTU4XSAgWzxmZmZmZmZmZjgxMDE4ODZlPl0gYXJjaF9jcHVf
aWRsZSsweDJlLzB4NDANClsgIDIzMS4xNzE1NjBdICBbPGZmZmZmZmZmODEwZWZiYjE+XSBj
cHVfc3RhcnR1cF9lbnRyeSsweDcxLzB4MWYwDQpbICAyMzEuMTcxNTYxXSAgWzxmZmZmZmZm
ZjgxMDBiOWM1Pl0gY3B1X2JyaW5ndXBfYW5kX2lkbGUrMHgyNS8weDQwDQpbICAyMzEuMTcx
NTc1XSBDb2RlOiBjYyA1MSA0MSA1MyBiOCAxYyAwMCAwMCAwMCAwZiAwNSA0MSA1YiA1OSBj
MyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyA1MSA0MSA1MyBiOCAxZCAwMCAwMCAwMCAwZiAwNSA8NDE+IDViIDU5IGMzIGNjIGNjIGNj
IGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIA0KWyAgMjMyLjcy
Nzk0NV0gCSAodD0xODQ2OCBqaWZmaWVzIGc9NTg5MiBjPTU4OTEgcT00MTI3KQ0KWyAgMjMy
LjczNTI4NV0gc2VuZGluZyBOTUkgdG8gYWxsIENQVXM6DQpbICAyMzIuNzQyNjE2XSBOTUkg
YmFja3RyYWNlIGZvciBjcHUgMQ0KWyAgMjMyLjc0OTkwMl0gQ1BVOiAxIFBJRDogOTY3OSBD
b21tOiB4ZW5kb21haW5zIE5vdCB0YWludGVkIDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2
ZWwrICMxDQpbICAyMzIuNzU3Mzc2XSBIYXJkd2FyZSBuYW1lOiBNU0kgTVMtNzY0MC84OTBG
WEEtR0Q3MCAoTVMtNzY0MCkgICwgQklPUyBWMS44QjEgMDkvMTMvMjAxMA0KWyAgMjMyLjc2
NDc5OV0gdGFzazogZmZmZjg4MDA1OGRlYzI0MCB0aTogZmZmZjg4MDA0Y2QzMjAwMCB0YXNr
LnRpOiBmZmZmODgwMDRjZDMyMDAwDQpbICAyMzIuNzcyMDgxXSBSSVA6IGUwMzA6WzxmZmZm
ZmZmZjgxMDAxMzBhPl0gIFs8ZmZmZmZmZmY4MTAwMTMwYT5dIHhlbl9oeXBlcmNhbGxfdmNw
dV9vcCsweGEvMHgyMA0KWyAgMjMyLjc3OTUyOF0gUlNQOiBlMDJiOmZmZmY4ODAwNWY2NDNj
MTAgIEVGTEFHUzogMDAwMDAwNDYNClsgIDIzMi43ODY5MDhdIFJBWDogMDAwMDAwMDAwMDAw
MDAwMCBSQlg6IDAwMDAwMDAwMDAwMDAwMDEgUkNYOiBmZmZmZmZmZjgxMDAxMzBhDQpbICAy
MzIuNzk0Mjc3XSBSRFg6IDAwMDAwMDAwZGVhZGJlZWYgUlNJOiAwMDAwMDAwMGRlYWRiZWVm
IFJESTogMDAwMDAwMDBkZWFkYmVlZg0KWyAgMjMyLjgwMTU0OF0gUkJQOiBmZmZmODgwMDVm
NjQzYzI4IFIwODogZmZmZmZmZmY4MjJlNzMwMCBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsg
IDIzMi44MDg3NzBdIFIxMDogMDAwMDAwMDAwMDAwMDAwMCBSMTE6IDAwMDAwMDAwMDAwMDAy
NDYgUjEyOiBmZmZmZmZmZjgyMmU3MzAwDQpbICAyMzIuODE1OTE4XSBSMTM6IGZmZmZmZmZm
ODIyZTczMDAgUjE0OiAwMDAwMDAwMDAwMDAwMDA1IFIxNTogMDAwMDAwMDAwMDAwMDAwMQ0K
WyAgMjMyLjgyMzA4OV0gRlM6ICAwMDAwN2YxNTJlYzRkNzAwKDAwMDApIEdTOmZmZmY4ODAw
NWY2NDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjMyLjgzMDMxMF0g
Q1M6ICBlMDMzIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAg
MjMyLjgzNzUzM10gQ1IyOiAwMDAwN2YxNTJlMmExZTAyIENSMzogMDAwMDAwMDA0Y2QyYTAw
MCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDIzMi44NDQ4NjhdIFN0YWNrOg0KWyAgMjMy
Ljg1MjA4Ml0gIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmZmZmZjgx
NGNjN2NjIGZmZmY4ODAwNWY2NDNjNTgNClsgIDIzMi44NTk0NTFdICBmZmZmZmZmZjgxMDBh
ZDRhIDAwMDAwMDAwMDAwMDI3MTAgZmZmZmZmZmY4MjI0MzFjMCBmZmZmZmZmZjgyMmU3MzA4
DQpbICAyMzIuODY2NzY0XSAgZmZmZmZmZmY4MjI0MzFjMCBmZmZmODgwMDVmNjQzYzY4IGZm
ZmZmZmZmODEwMGJiNjkgZmZmZjg4MDA1ZjY0M2M4OA0KWyAgMjMyLjg3NDA0MV0gQ2FsbCBU
cmFjZToNClsgIDIzMi44ODEyMDddICA8SVJRPiANClsgIDIzMi44ODEyNTddICBbPGZmZmZm
ZmZmODE0Y2M3Y2M+XSA/IHhlbl9zZW5kX0lQSV9vbmUrMHgzYy8weDYwDQpbICAyMzIuODk1
NDEyXSAgWzxmZmZmZmZmZjgxMDBhZDRhPl0gX194ZW5fc2VuZF9JUElfbWFzaysweDJhLzB4
NTANClsgIDIzMi45MDI2MjBdICBbPGZmZmZmZmZmODEwMGJiNjk+XSB4ZW5fc2VuZF9JUElf
YWxsKzB4MjkvMHg4MA0KWyAgMjMyLjkwOTcwMF0gIFs8ZmZmZmZmZmY4MTAzZmZmYT5dIGFy
Y2hfdHJpZ2dlcl9hbGxfY3B1X2JhY2t0cmFjZSsweDRhLzB4ODANClsgIDIzMi45MTY3OTRd
ICBbPGZmZmZmZmZmODEwZjlkNjg+XSByY3VfY2hlY2tfY2FsbGJhY2tzKzB4M2I4LzB4NjIw
DQpbICAyMzIuOTIzOTQyXSAgWzxmZmZmZmZmZjgxMGUwYjlkPl0gPyB0cmFjZV9oYXJkaXJx
c19vZmYrMHhkLzB4MTANClsgIDIzMi45MzEwNjRdICBbPGZmZmZmZmZmODExMDQ4NzA+XSA/
IHRpY2tfbm9oel9oYW5kbGVyKzB4YTAvMHhhMA0KWyAgMjMyLjkzODIxOV0gIFs8ZmZmZmZm
ZmY4MTBiMTIzMz5dIHVwZGF0ZV9wcm9jZXNzX3RpbWVzKzB4NDMvMHg4MA0KWyAgMjMyLjk0
NTQxMF0gIFs8ZmZmZmZmZmY4MTEwNDc5OT5dIHRpY2tfc2NoZWRfaGFuZGxlLmlzcmEuMTQr
MHgyOS8weDYwDQpbICAyMzIuOTUyNDI0XSAgWzxmZmZmZmZmZjgxMTA0OGI3Pl0gdGlja19z
Y2hlZF90aW1lcisweDQ3LzB4NzANClsgIDIzMi45NTkyMDNdICBbPGZmZmZmZmZmODEwYzg0
NGY+XSBfX3J1bl9ocnRpbWVyLmlzcmEuMjgrMHg2Zi8weDEyMA0KWyAgMjMyLjk2NjA1NV0g
IFs8ZmZmZmZmZmY4MTBjOGQwNT5dIGhydGltZXJfaW50ZXJydXB0KzB4ZjUvMHgyNDANClsg
IDIzMi45NzI3ODBdICBbPGZmZmZmZmZmODEwMDhkYWY+XSB4ZW5fdGltZXJfaW50ZXJydXB0
KzB4MmYvMHgxNTANClsgIDIzMi45Nzk0NThdICBbPGZmZmZmZmZmODFhYTA0MGI+XSA/IF9y
YXdfc3Bpbl91bmxvY2tfaXJxKzB4MmIvMHg1MA0KWyAgMjMyLjk4NjA4MV0gIFs8ZmZmZmZm
ZmY4MTBlMzFjYj5dID8gdHJhY2VfaGFyZGlycXNfb25fY2FsbGVyKzB4ZmIvMHgyNDANClsg
IDIzMi45OTI2ODNdICBbPGZmZmZmZmZmODEwZjA2Nzc+XSBoYW5kbGVfaXJxX2V2ZW50X3Bl
cmNwdSsweDQ3LzB4MTkwDQpbICAyMzIuOTk5MTQwXSAgWzxmZmZmZmZmZjgxNGNiMTE5Pl0g
PyBpbmZvX2Zvcl9pcnErMHg5LzB4MjANClsgIDIzMy4wMDU1NTRdICBbPGZmZmZmZmZmODEw
ZjM5NjI+XSBoYW5kbGVfcGVyY3B1X2lycSsweDQyLzB4NjANClsgIDIzMy4wMTE5MjldICBb
PGZmZmZmZmZmODE0Y2RhY2Q+XSBldnRjaG5fZmlmb19oYW5kbGVfZXZlbnRzKzB4MTJkLzB4
MTQwDQpbICAyMzMuMDE4NDA1XSAgWzxmZmZmZmZmZjgxNGNhYzc4Pl0gX194ZW5fZXZ0Y2hu
X2RvX3VwY2FsbCsweDQ4LzB4OTANClsgIDIzMy4wMjQ4ODddICBbPGZmZmZmZmZmODE0Y2M4
MWE+XSB4ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDJhLzB4NDANClsgIDIzMy4wMzEzMzFdICBb
PGZmZmZmZmZmODFhYTI4NWU+XSB4ZW5fZG9faHlwZXJ2aXNvcl9jYWxsYmFjaysweDFlLzB4
MzANClsgIDIzMy4wMzc2OTldICA8RU9JPiANClsgIDIzMy4wMzc3NDldICBbPGZmZmZmZmZm
ODExMDlhNWE+XSA/IGdlbmVyaWNfZXhlY19zaW5nbGUrMHg4YS8weGMwDQpbICAyMzMuMDUw
MzUzXSAgWzxmZmZmZmZmZjgxMTA5YTgxPl0gPyBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4YjEv
MHhjMA0KWyAgMjMzLjA1NjU5N10gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ry
b3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDIzMy4wNjI4NzBdICBbPGZm
ZmZmZmZmODExMDljYzU+XSA/IHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUw
DQpbICAyMzMuMDY5MDk3XSAgWzxmZmZmZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9y
ZV9hcmdzKzB4MTMvMHgxMw0KWyAgMjMzLjA3NTMzMV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5d
ID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDIzMy4w
ODE1NzFdICBbPGZmZmZmZmZmODExMGEwM2E+XSA/IHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkr
MHgyN2EvMHgyYTANClsgIDIzMy4wODc3ODVdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhl
bl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMzMuMDk0MDI5
XSAgWzxmZmZmZmZmZjgxMDA4NDFlPl0gPyB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTANClsg
IDIzMy4xMDAzMDddICBbPGZmZmZmZmZmODEwMDEyMmE+XSA/IHhlbl9oeXBlcmNhbGxfeGVu
X3ZlcnNpb24rMHhhLzB4MjANClsgIDIzMy4xMDY2MDJdICBbPGZmZmZmZmZmODExNjk0MjY+
XSA/IGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAyMzMuMTEyOTE5XSAgWzxmZmZmZmZmZjgx
MGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDIzMy4xMTkwODhdICBb
PGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjMzLjEyNTAz
Nl0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAyMzMu
MTMwODkzXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gPyBtbXB1dCsweDU5LzB4ZTANClsgIDIz
My4xMzY2ODldICBbPGZmZmZmZmZmODExOWEzYTk+XSA/IGZsdXNoX29sZF9leGVjKzB4NDM5
LzB4ODMwDQpbICAyMzMuMTQyNTQ1XSAgWzxmZmZmZmZmZjgxMWU4Y2NhPl0gPyBsb2FkX2Vs
Zl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAyMzMuMTQ4MzI2XSAgWzxmZmZmZmZmZjgxYTlm
ZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMjMzLjE1NDAxMV0gIFs8
ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDIzMy4x
NTk1NjldICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisw
eGMzLzB4MWIwDQpbICAyMzMuMTY1MDcyXSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2Nr
X2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjMzLjE3MDQ2MF0gIFs8ZmZmZmZmZmY4MTBlNzE3
YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAyMzMuMTc1Nzg0XSAgWzxmZmZm
ZmZmZjgxMTk5MjA0Pl0gPyBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAg
MjMzLjE4MTE1Nl0gIFs8ZmZmZmZmZmY4MTE5YjI1Mj5dID8gZG9fZXhlY3ZlX2NvbW1vbi5p
c3JhLjMxKzB4NTkyLzB4NzEwDQpbICAyMzMuMTg2NDk0XSAgWzxmZmZmZmZmZjgxMTliMWU3
Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1MjcvMHg3MTANClsgIDIzMy4xOTE3
NTZdICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGttZW1fY2FjaGVfYWxsb2MrMHhiNS8weDEy
MA0KWyAgMjMzLjE5NzAwMl0gIFs8ZmZmZmZmZmY4MTE5YjNlMz5dID8gZG9fZXhlY3ZlKzB4
MTMvMHgyMA0KWyAgMjMzLjIwMjIzOF0gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dID8gU3lTX2V4
ZWN2ZSsweDM4LzB4NjANClsgIDIzMy4yMDczMTJdICBbPGZmZmZmZmZmODFhYTE5ZTk+XSA/
IHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMjMzLjIxMjMzOV0gQ29kZTogY2MgNTEgNDEg
NTMgNTAgYjggMTcgMDAgMDAgMDAgMGYgMDUgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgNTEgNDEgNTMgYjggMTggMDAg
MDAgMDAgMGYgMDUgPDQxPiA1YiA1OSBjMyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyBjYyANClsgIDIzMy4yMjMxNjZdIElORk86IE5NSSBoYW5k
bGVyIChhcmNoX3RyaWdnZXJfYWxsX2NwdV9iYWNrdHJhY2VfaGFuZGxlcikgdG9vayB0b28g
bG9uZyB0byBydW46IDQ4MC41NDcgbXNlY3MNClsgIDIzMy4yMjMxNzVdIE5NSSBiYWNrdHJh
Y2UgZm9yIGNwdSAwDQpbICAyMzMuMjIzMTc4XSBDUFU6IDAgUElEOiAwIENvbW06IHN3YXBw
ZXIvMCBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAg
MjMzLjIyMzE3OF0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1T
LTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzL1sgIDIzOC45MTUzNzRdIEJsdWV0b290aDog
aGNpMCBsaW5rIHR4IHRpbWVvdXQNClsgIDIzOC45MjM2MzhdIEJsdWV0b290aDogaGNpMCBr
aWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjQwLjkx
NTIxNl0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQwLjkyMzQzMV0g
Qmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1
OjEzOjcyDQpbICAyNDAuOTMxMzM4XSBCbHVldG9vdGg6IGhjaTAgbGluayB0eCB0aW1lb3V0
DQpbICAyNDAuOTMyNzg1XSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGlt
ZW91dA0KWyAgMjQwLjk0NzA1OV0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBj
b25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAyNDIuOTU4NDU4XSBCbHVldG9vdGg6
IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGltZW91dA0KWyAgMjQ0LjkxMzk3Nl0gQmx1ZXRv
b3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQ0LjkyMTMzMF0gQmx1ZXRvb3RoOiBo
Y2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAy
NDQuOTcwODMwXSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGltZW91dA0K
WyAgMjQ2LjkxNjU3Nl0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQ2
LjkyMzgxNV0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAz
OjEyOjA5OjI1OjEzOjcyDQpbICAyNDYuOTgzMTc4XSBCbHVldG9vdGg6IGhjaTAgY29tbWFu
ZCAweDA0MDYgdHggdGltZW91dA0KWyAgMjQ4Ljk5NTQ4M10gQmx1ZXRvb3RoOiBoY2kwIGNv
bW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI1MS4zMjc2NTVdIGF0YTEuMDA6IHFjIHRp
bWVvdXQgKGNtZCAweGVjKQ0KWyAgMjUxLjMzNDY4M10gYXRhMS4wMDogZmFpbGVkIHRvIElE
RU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkNClsgIDI1MS4zNDE3NDZdIGF0YTEu
MDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQ0KWyAgMjUxLjM0ODczMV0gYXRh
MS4wMDogZGlzYWJsZWQNClsgIDI1MS4zNTU2ODFdIGF0YTEuMDA6IGRldmljZSByZXBvcnRl
ZCBpbnZhbGlkIENIUyBzZWN0b3IgMA0KWyAgMjUxLjM2MjU3M10gYXRhMS4wMDogZGV2aWNl
IHJlcG9ydGVkIGludmFsaWQgQ0hTIHNlY3RvciAwDQpbICAyNTEuMzY5MzcyXSBhdGExLjAw
OiBkZXZpY2UgcmVwb3J0ZWQgaW52YWxpZCBDSFMgc2VjdG9yIDANClsgIDI1MS4zNzYwNjVd
IGF0YTEuMDA6IGRldmljZSByZXBvcnRlZCBpbnZhbGlkIENIUyBzZWN0b3IgMA0KWyAgMjUx
LjM4MjYxNF0gYXRhMS4wMDogZGV2aWNlIHJlcG9ydGVkIGludmFsaWQgQ0hTIHNlY3RvciAw
DQpbICAyNTEuMzg5MDc1XSBhdGExOiBoYXJkIHJlc2V0dGluZyBsaW5rDQpbICAyNTEuODgw
ODI4XSBhdGExOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9s
IDMxMCkNClsgIDI1MS44ODc0MDNdIGF0YTE6IEVIIGNvbXBsZXRlDQpbICAyNTEuODkzOTEz
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUxLjkwMDQ0
M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUxLjkwNjc4Ml0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUxLjkxMjk1N10g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1MS45MTkwMDldIFJlYWQoMTApOiAyOCAw
MCBjYyA2ZSAzYSBiOSAwMCAwMCAwOCAwMA0KWyAgMjUxLjkyNTExNl0gZW5kX3JlcXVlc3Q6
IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDM0Mjk3NzYwNTcNClsgIDI1MS45MzExNjNd
IHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTEuOTM3MDU1
XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTEuOTQyODIxXSBSZXN1bHQ6IGhvc3RieXRl
PURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTEuOTQ4NTYwXSBz
ZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUxLjk1NDIzMF0gUmVhZCgxMCk6IDI4IDAw
IGQwIDhlIDI4IGUxIDAwIDAwIDA4IDAwDQpbICAyNTEuOTU5ODk2XSBlbmRfcmVxdWVzdDog
SS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQ5ODk3NzUwNQ0KWyAgMjUxLjk2NTU4MV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1MS45NzEzMDVd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1MS45NzY5MThdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1MS45ODI2NzddIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTEuOTg4MzA3XSBSZWFkKDEwKTogMjggMDAg
ZDAgYzIgZWQgODEgMDAgMDAgMjAgMDANClsgIDI1MS45OTM5NTldIGVuZF9yZXF1ZXN0OiBJ
L08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAzNTAyNDM1NzEzDQpbICAyNTEuOTk5NjExXSBz
ZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjAwNTE5MF0g
c2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjAxMDY3M10gUmVzdWx0OiBob3N0Ynl0ZT1E
SURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjAxNjI0N10gc2Qg
MDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4wMjE3NzFdIFdyaXRlKDEwKTogMmEgMDAg
MDQgMmUgNjcgZTkgMDAgMDAgZTAgMDANClsgIDI1Mi4wMjczMThdIGVuZF9yZXF1ZXN0OiBJ
L08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciA3MDE1MDEyMQ0KWyAgMjUyLjAzMjk1N10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4wMzMxOTVdIEFi
b3J0aW5nIGpvdXJuYWwgb24gZGV2aWNlIGRtLTAtOC4NClsgIDI1Mi4wMzMyNTldIHNkIDA6
MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMDMzMjYxXSBzZCAw
OjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMDMzMjYyXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMDMzMjYzXSBzZCAwOjA6
MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjAzMzI2OF0gV3JpdGUoMTApOiAyYSAwMCAwNCAy
ZSA1NSAwMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjAzMzI3MF0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDcwMTQ1MjgxDQpbICAyNTIuMDMzMjc1XSBCdWZmZXIg
SS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDYzMjQyMjQNClsgIDI1
Mi4wMzMyNzZdIGxvc3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTANClsg
IDI1Mi4wMzMzMThdIEpCRDI6IEVycm9yIC01IGRldGVjdGVkIHdoZW4gdXBkYXRpbmcgam91
cm5hbCBzdXBlcmJsb2NrIGZvciBkbS0wLTguDQpbICAyNTIuMDMzNDc3XSBzZCAwOjA6MDow
OiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjAzMzQ3OV0gc2QgMDowOjA6
MDogW3NkYV0gIA0KWyAgMjUyLjAzMzQ4MV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RB
UkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjAzMzQ4Ml0gc2QgMDowOjA6MDog
W3NkYV0gQ0RCOiANClsgIDI1Mi4wMzM0ODZdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUg
MDEgMDAgMDAgMDggMDANClsgIDI1Mi4wMzM0ODhdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3Is
IGRldiBzZGEsIHNlY3RvciAxOTU1MTQ4OQ0KWyAgMjUyLjAzMzQ5M10gQnVmZmVyIEkvTyBl
cnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayAwDQpbICAyNTIuMDMzNDk0XSBs
b3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDMzNTQ1
XSBFWFQ0LWZzIGVycm9yIChkZXZpY2UgZG0tMCk6IGV4dDRfam91cm5hbF9jaGVja19zdGFy
dDo1NjogRGV0ZWN0ZWQgYWJvcnRlZCBqb3VybmFsDQpbICAyNTIuMDMzNTQ3XSBFWFQ0LWZz
IChkbS0wKTogUmVtb3VudGluZyBmaWxlc3lzdGVtIHJlYWQtb25seQ0KWyAgMjUyLjAzMzU0
OV0gRVhUNC1mcyAoZG0tMCk6IHByZXZpb3VzIEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRl
dGVjdGVkDQpbICAyNTIuMDMzNTg1XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQ0KWyAgMjUyLjAzMzU4Nl0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjAz
MzU4N10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZF
Ul9PSw0KWyAgMjUyLjAzMzU4OF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4w
MzM1OTFdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1
Mi4wMzM1OTNdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxOTU1
MTQ4OQ0KWyAgMjUyLjAzMzU5NV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwg
bG9naWNhbCBibG9jayAwDQpbICAyNTIuMDMzNTk2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRv
IEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3NDU5XSBFWFQ0LWZzIGVycm9yIChkZXZp
Y2UgZG0tMCkgaW4gZXh0NF9vcnBoYW5fYWRkOjI2MTQ6IEpvdXJuYWwgaGFzIGFib3J0ZWQN
ClsgIDI1Mi4wNDc0NjJdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91cyBJL08gZXJyb3IgdG8g
c3VwZXJibG9jayBkZXRlY3RlZA0KWyAgMjUyLjA0NzQ4M10gRVhUNC1mcyBlcnJvciAoZGV2
aWNlIGRtLTApIGluIGV4dDRfb3JwaGFuX2FkZDoyNjE0OiBKb3VybmFsIGhhcyBhYm9ydGVk
DQpbICAyNTIuMDQ3NjM0XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29k
ZQ0KWyAgMjUyLjA0NzYzNl0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjA0NzYzNl0g
UmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0K
WyAgMjUyLjA0NzYzN10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4wNDc2NDFd
IFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1Mi4wNDc2
NDNdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxOTU1MTQ4OQ0K
WyAgMjUyLjA0NzY0Nl0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNh
bCBibG9jayAwDQpbICAyNTIuMDQ3NjQ2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBl
cnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3OTEzXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRs
ZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjA0NzkxNF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAg
MjUyLjA0NzkxNV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRl
PURSSVZFUl9PSw0KWyAgMjUyLjA0NzkxNl0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsg
IDI1Mi4wNDc5MjBdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDAN
ClsgIDI1Mi4wNDc5MjFdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3Rv
ciAxOTU1MTQ4OQ0KWyAgMjUyLjA0NzkyNF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2Ug
ZG0tMCwgbG9naWNhbCBibG9jayAwDQpbICAyNTIuMDQ3OTI1XSBsb3N0IHBhZ2Ugd3JpdGUg
ZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3OTgxXSBFWFQ0LWZzIGVycm9y
IChkZXZpY2UgZG0tMCkgaW4gZXh0NF9yZXNlcnZlX2lub2RlX3dyaXRlOjQ4NDI6IEpvdXJu
YWwgaGFzIGFib3J0ZWQNClsgIDI1Mi4wNDc5ODNdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91
cyBJL08gZXJyb3IgdG8gc3VwZXJibG9jayBkZXRlY3RlZA0KWyAgMjUyLjA0ODE2N10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4wNDgxNjhdIHNk
IDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4wNDgxNjldIFJlc3VsdDogaG9zdGJ5dGU9RElE
X0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4wNDgxNzBdIHNkIDA6
MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMDQ4MTc0XSBXcml0ZSgxMCk6IDJhIDAwIDAx
IDJhIDU1IDAxIDAwIDAwIDA4IDAwDQpbICAyNTIuMDQ4MTc1XSBlbmRfcmVxdWVzdDogSS9P
IGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMTk1NTE0ODkNClsgIDI1Mi4wNDgxNzhdIEJ1ZmZl
ciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgMA0KWyAgMjUyLjA0
ODE3OF0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tMA0KWyAgMjUy
LjA2MDcxNl0gRVhUNC1mcyBlcnJvciAoZGV2aWNlIGRtLTApIGluIGV4dDRfcmVzZXJ2ZV9p
bm9kZV93cml0ZTo0ODQyOiBKb3VybmFsIGhhcyBhYm9ydGVkDQpbICAyNTIuMDYwNzE4XSBF
WFQ0LWZzIChkbS0wKTogcHJldmlvdXMgSS9PIGVycm9yIHRvIHN1cGVyYmxvY2sgZGV0ZWN0
ZWQNClsgIDI1Mi4wNjA3NTZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlDQpbICAyNTIuMDYwNzU3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMDYwNzU3
XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
DQpbICAyNTIuMDYwNzU4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjA2MDc2
Ml0gV3JpdGUoMTApOiAyYSAwMCAwMSAyYSA1NSAwMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjA2
MDc2NV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayAw
DQpbICAyNTIuMDYwNzY2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBk
bS0wDQpbICAyNTIuMTgzNjM5XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3Ig
Y29kZQ0KWyAgMjUyLjE4MzY0MF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4MzY0
MV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9P
Sw0KWyAgMjUyLjE4MzY0Ml0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM2
NDZdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMjEgMDAgMDAgMDggMDANClsgIDI1Mi4x
ODM2NTJdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sg
NA0KWyAgMjUyLjE4MzY1Ml0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24g
ZG0tMA0KWyAgMjUyLjE4MzY5Ml0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4xODM2OTNdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM2
OTNdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4xODM2OTRdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgz
Njk3XSBXcml0ZSgxMCk6IDJhIDAwIDAxIGFhIDU1IDA5IDAwIDAwIDA4IDAwDQpbICAyNTIu
MTgzNzAwXSBCdWZmZXIgSS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2Nr
IDEwNDg1NzcNClsgIDI1Mi4xODM3MDBdIGxvc3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVy
cm9yIG9uIGRtLTANClsgIDI1Mi4xODM3MDhdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzNzA5XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTgzNzA5XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTgzNzEwXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4MzcxM10gV3JpdGUoMTApOiAyYSAwMCAwMSBlYSA1NSAwOSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4MzcxNl0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNh
bCBibG9jayAxNTcyODY1DQpbICAyNTIuMTgzNzE3XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRv
IEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMTgzNzI0XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4MzcyNV0gc2QgMDowOjA6MDogW3NkYV0g
IA0KWyAgMjUyLjE4MzcyNl0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4MzcyNl0gc2QgMDowOjA6MDogW3NkYV0gQ0RC
OiANClsgIDI1Mi4xODM3MjldIFdyaXRlKDEwKTogMmEgMDAgMDEgZWEgNTUgMzEgMDAgMDAg
MDggMDANClsgIDI1Mi4xODM3MzldIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJv
ciBjb2RlDQpbICAyNTIuMTgzNzQwXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgz
NzQxXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVS
X09LDQpbICAyNTIuMTgzNzQxXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4
Mzc0NF0gV3JpdGUoMTApOiAyYSAwMCAwMyA2YSA1NSAzOSAwMCAwMCAwOCAwMA0KWyAgMjUy
LjE4Mzc1NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1
Mi4xODM3NTVdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM3NTZdIFJlc3VsdDog
aG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4x
ODM3NTZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzNzU5XSBXcml0ZSgx
MCk6IDJhIDAwIDAzIDZhIDU1IDc5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzNzY4XSBzZCAw
OjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzc2OV0gc2Qg
MDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4Mzc2OV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURf
QkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4Mzc3MF0gc2QgMDow
OjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM3NzNdIFdyaXRlKDEwKTogMmEgMDAgMDUg
NmEgNTUgMjkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM3ODJdIHNkIDA6MDowOjA6IFtzZGFd
IFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzNzgzXSBzZCAwOjA6MDowOiBbc2Rh
XSAgDQpbICAyNTIuMTgzNzg0XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRy
aXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzNzg0XSBzZCAwOjA6MDowOiBbc2RhXSBD
REI6IA0KWyAgMjUyLjE4Mzc4N10gV3JpdGUoMTApOiAyYSAwMCAwNSA2YSA1NSA0MSAwMCAw
MCAxMCAwMA0KWyAgMjUyLjE4Mzc5OF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVy
cm9yIGNvZGUNClsgIDI1Mi4xODM3OThdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4x
ODM3OTldIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklW
RVJfT0sNClsgIDI1Mi4xODM4MDBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIu
MTgzODAzXSBXcml0ZSgxMCk6IDJhIDAwIDA1IDZhIDU1IDg5IDAwIDAwIDA4IDAwDQpbICAy
NTIuMTgzODEyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAg
MjUyLjE4MzgxM10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4MzgxNF0gUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUy
LjE4MzgxNF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM4MTddIFdyaXRl
KDEwKTogMmEgMDAgMDUgNmEgNTggZDkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM4NDRdIHNk
IDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzODQ0XSBz
ZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgzODQ1XSBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzODQ2XSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4Mzg0OV0gV3JpdGUoMTApOiAyYSAwMCAw
NSA2YSA1OCBlOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzg1N10gc2QgMDowOjA6MDogW3Nk
YV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODM4NThdIHNkIDA6MDowOjA6IFtz
ZGFdICANClsgIDI1Mi4xODM4NTldIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQg
ZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODM4NTldIHNkIDA6MDowOjA6IFtzZGFd
IENEQjogDQpbICAyNTIuMTgzODYyXSBXcml0ZSgxMCk6IDJhIDAwIDA1IDZhIDVhIDY5IDAw
IDAwIDA4IDAwDQpbICAyNTIuMTgzODcyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQg
ZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzg3Ml0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUy
LjE4Mzg3M10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURS
SVZFUl9PSw0KWyAgMjUyLjE4Mzg3NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1
Mi4xODM4NzddIFdyaXRlKDEwKTogMmEgMDAgMDUgNmIgNWYgNjkgMDAgMDAgMDggMDANClsg
IDI1Mi4xODM4ODVdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpb
ICAyNTIuMTgzODg2XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgzODg2XSBSZXN1
bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAy
NTIuMTgzODg3XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4Mzg5MF0gV3Jp
dGUoMTApOiAyYSAwMCAwNSA2YiA2MiBiMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzg5OV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODM5MDBd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM5MDFdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODM5MDJdIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzOTA0XSBXcml0ZSgxMCk6IDJhIDAw
IDA1IDZiIDY1IGU5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzOTEzXSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4MzkxNF0gc2QgMDowOjA6MDog
W3NkYV0gIA0KWyAgMjUyLjE4MzkxNF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdF
VCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4MzkxNV0gc2QgMDowOjA6MDogW3Nk
YV0gQ0RCOiANClsgIDI1Mi4xODM5MThdIFdyaXRlKDEwKTogMmEgMDAgMDUgNmIgODcgMjkg
MDAgMDAgMDggMDANClsgIDI1Mi4xODM5MjddIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzOTI3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTgzOTI4XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTgzOTI5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4MzkzMl0gV3JpdGUoMTApOiAyYSAwMCAwNSA2YiBhNyAyMSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4Mzk0MF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODM5NDFdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM5NDFdIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODM5NDJdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzOTQ1XSBX
cml0ZSgxMCk6IDJhIDAwIDA1IDZlIDk2IDE5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzOTUy
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzk1
M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4Mzk1M10gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4Mzk1NF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM5NTddIFdyaXRlKDEwKTogMmEg
MDAgMDUgNmYgMTYgNzkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM5NjRdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzOTY0XSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTgzOTY1XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzOTY2XSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4Mzk2OF0gV3JpdGUoMTApOiAyYSAwMCAwNSA2ZiAxNyBi
MSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzk3M10gRVhUNC1mcyB3YXJuaW5nIChkZXZpY2Ug
ZG0tMCk6IGV4dDRfZW5kX2JpbzozMTc6IEkvTyBlcnJvciB3cml0aW5nIHRvIGlub2RlIDIy
MzgyNjMgKG9mZnNldCAwIHNpemUgNDA5NiBzdGFydGluZyBibG9jayA4OTUxODk0KQ0KWyAg
MjUyLjE4NDA1NF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBi
bG9jayA4OTUxODk0DQpbICAyNTIuMTg0MDYxXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRs
ZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDA2MV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAg
MjUyLjE4NDA2Ml0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRl
PURSSVZFUl9PSw0KWyAgMjUyLjE4NDA2M10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsg
IDI1Mi4xODQwNjZdIFdyaXRlKDEwKTogMmEgMDAgMDUgNmYgMmEgMjkgMDAgMDAgMDggMDAN
ClsgIDI1Mi4xODQwNzBdIEVYVDQtZnMgd2FybmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2Vu
ZF9iaW86MzE3OiBJL08gZXJyb3Igd3JpdGluZyB0byBpbm9kZSAyMjM4MjYyIChvZmZzZXQg
MCBzaXplIDQwOTYgc3RhcnRpbmcgYmxvY2sgODk1MjQ4NSkNClsgIDI1Mi4xODQwNzJdIEJ1
ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgODk1MjQ4NQ0K
WyAgMjUyLjE4NDA3OF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODQwNzldIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQwNzldIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODQwODBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MDgzXSBX
cml0ZSgxMCk6IDJhIDAwIDA1IDczIDE2IGI5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MDkw
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDA5
MF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDA5MV0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDA5Ml0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQwOTVdIFdyaXRlKDEwKTogMmEg
MDAgMDUgNzMgMjMgMTEgMDAgMDAgMDggMDANClsgIDI1Mi4xODQxMDJdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MTAzXSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTg0MTA0XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MTA0XSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4NDEwN10gV3JpdGUoMTApOiAyYSAwMCAwNSA3MyBjNiAy
OSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDExNF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5k
bGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQxMTVdIHNkIDA6MDowOjA6IFtzZGFdICANClsg
IDI1Mi4xODQxMTVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0
ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQxMTZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpb
ICAyNTIuMTg0MTE5XSBXcml0ZSgxMCk6IDJhIDAwIDA1IDc1IDg2IDgxIDAwIDAwIDA4IDAw
DQpbICAyNTIuMTg0MTIzXSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTogZXh0NF9l
bmRfYmlvOjMxNzogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjYyMTQ3MyAob2Zmc2V0
IDAgc2l6ZSA0MDk2IHN0YXJ0aW5nIGJsb2NrIDkwMDQ1OTIpDQpbICAyNTIuMTg0MTI1XSBC
dWZmZXIgSS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDkwMDQ1OTIN
ClsgIDI1Mi4xODQxMzBdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2Rl
DQpbICAyNTIuMTg0MTMxXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MTMxXSBS
ZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpb
ICAyNTIuMTg0MTMyXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDEzNV0g
V3JpdGUoMTApOiAyYSAwMCAwNSA3NSBjNSAwOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDEz
OF0gRVhUNC1mcyB3YXJuaW5nIChkZXZpY2UgZG0tMCk6IGV4dDRfZW5kX2JpbzozMTc6IEkv
TyBlcnJvciB3cml0aW5nIHRvIGlub2RlIDI2MjE1OTcgKG9mZnNldCAwIHNpemUgNDA5NiBz
dGFydGluZyBibG9jayA5MDA2NTkzKQ0KWyAgMjUyLjE4NDE0MF0gQnVmZmVyIEkvTyBlcnJv
ciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayA5MDA2NTkzDQpbICAyNTIuMTg0MTQ5
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDE0
OV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDE1OV0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDE2MF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQxNjNdIFdyaXRlKDEwKTogMmEg
MDAgMDUgNzcgMDUgMDkgMDAgMDAgMDggMDANClsgIDI1Mi4xODQxNjVdIEVYVDQtZnMgd2Fy
bmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MzE3OiBJL08gZXJyb3Igd3JpdGlu
ZyB0byBpbm9kZSAyNzU4NDA4IChvZmZzZXQgMCBzaXplIDAgc3RhcnRpbmcgYmxvY2sgOTAx
NjgzMykNClsgIDI1Mi4xODQxNjddIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAs
IGxvZ2ljYWwgYmxvY2sgOTAxNjgzMw0KWyAgMjUyLjE4NDE3MV0gc2QgMDowOjA6MDogW3Nk
YV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQxNzJdIHNkIDA6MDowOjA6IFtz
ZGFdICANClsgIDI1Mi4xODQxNzJdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQg
ZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQxNzNdIHNkIDA6MDowOjA6IFtzZGFd
IENEQjogDQpbICAyNTIuMTg0MTc2XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDJhIDU1IDgxIDAw
IDAwIDA4IDAwDQpbICAyNTIuMTg0MTgyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQg
ZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDE4M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUy
LjE4NDE4M10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURS
SVZFUl9PSw0KWyAgMjUyLjE4NDE4NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1
Mi4xODQxODZdIFdyaXRlKDEwKTogMmEgMDAgMDYgMmEgNTYgMDkgMDAgMDAgMDggMDANClsg
IDI1Mi4xODQxOTJdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpb
ICAyNTIuMTg0MTkzXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MTkzXSBSZXN1
bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAy
NTIuMTg0MTk0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDE5N10gV3Jp
dGUoMTApOiAyYSAwMCAwNiAyYSA1NiAxOSAwMCAwMCAzMCAwMA0KWyAgMjUyLjE4NDIxMV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQyMTFd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQyMTJdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQyMTNdIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MjE1XSBXcml0ZSgxMCk6IDJhIDAw
IDA2IDJhIDU2IDUxIDAwIDAwIDIwIDAwDQpbICAyNTIuMTg0MjI3XSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDIyN10gc2QgMDowOjA6MDog
W3NkYV0gIA0KWyAgMjUyLjE4NDIyOF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdF
VCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDIyOV0gc2QgMDowOjA6MDogW3Nk
YV0gQ0RCOiANClsgIDI1Mi4xODQyMzFdIFdyaXRlKDEwKTogMmEgMDAgMDYgMmEgODIgYTEg
MDAgMDAgMDggMDANClsgIDI1Mi4xODQyMzddIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MjM4XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTg0MjM4XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTg0MjM5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4NDI0MV0gV3JpdGUoMTApOiAyYSAwMCAwNiAyYSA4MiBjMSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4NDI0Nl0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODQyNDddIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQyNDhdIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODQyNDhdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MjUxXSBX
cml0ZSgxMCk6IDJhIDAwIDA2IDJiIDdkIGE5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MjU2
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDI1
N10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDI1OF0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDI1OF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQyNjFdIFdyaXRlKDEwKTogMmEg
MDAgMDYgMmIgYTcgYzEgMDAgMDAgMTggMDANClsgIDI1Mi4xODQyNjldIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MjcwXSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTg0MjcxXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MjcxXSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4NDI3NF0gV3JpdGUoMTApOiAyYSAwMCAwNiA2YSA1NiAw
MSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDI4MF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5k
bGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQyODBdIHNkIDA6MDowOjA6IFtzZGFdICANClsg
IDI1Mi4xODQyODFdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0
ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQyODFdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpb
ICAyNTIuMTg0Mjg0XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDZhIDU2IGYxIDAwIDAwIDA4IDAw
DQpbICAyNTIuMTg0MjkwXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29k
ZQ0KWyAgMjUyLjE4NDI5MV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDI5MV0g
UmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0K
WyAgMjUyLjE4NDI5Ml0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQyOTRd
IFdyaXRlKDEwKTogMmEgMDAgMDYgNmEgNTggMjEgMDAgMDAgMDggMDANClsgIDI1Mi4xODQz
MDBdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0
MzAxXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MzAxXSBSZXN1bHQ6IGhvc3Ri
eXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MzAy
XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDMwNV0gV3JpdGUoMTApOiAy
YSAwMCAwNiA2YSA2ZSBkOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDMxMV0gc2QgMDowOjA6
MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQzMTFdIHNkIDA6MDow
OjA6IFtzZGFdICANClsgIDI1Mi4xODQzMTJdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9U
QVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQzMTJdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MzE1XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDZhIDhi
IGYxIDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MzM1XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhh
bmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDMzNl0gc2QgMDowOjA6MDogW3NkYV0gIA0K
WyAgMjUyLjE4NDMzNl0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJi
eXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDMzN10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiAN
ClsgIDI1Mi4xODQzNDBdIFdyaXRlKDEwKTogMmEgMDAgMDYgNmIgNWIgNTEgMDAgMDAgMDgg
MDANClsgIDI1Mi4xODQzNDZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlDQpbICAyNTIuMTg0MzQ3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MzQ3
XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
DQpbICAyNTIuMTg0MzQ4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDM1
MV0gV3JpdGUoMTApOiAyYSAwMCAwNiA2YiA4OCBhMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4
NDM1N10gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4x
ODQzNThdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQzNThdIFJlc3VsdDogaG9z
dGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQz
NTldIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MzYyXSBXcml0ZSgxMCk6
IDJhIDAwIDA3IDJhIDU1IDIxIDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MzY5XSBzZCAwOjA6
MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDM3MF0gc2QgMDow
OjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDM3MF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFE
X1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDM3MV0gc2QgMDowOjA6
MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQzNzRdIFdyaXRlKDEwKTogMmEgMDAgMDcgMmEg
NTYgMzEgMDAgMDAgMjAgMDANClsgIDI1Mi4xODQzODZdIHNkIDA6MDowOjA6IFtzZGFdIFVu
aGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0Mzg2XSBzZCAwOjA6MDowOiBbc2RhXSAg
DQpbICAyNTIuMTg0Mzg3XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZl
cmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0Mzg4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6
IA0KWyAgMjUyLjE4NDM5MV0gV3JpdGUoMTApOiAyYSAwMCA4NyAxMCAzNSA4MSAwMCAwMCAw
OCAwMA0KWyAgMjUyLjE4NDQwMl0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4xODQ0MDJdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQ0
MDNdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4xODQ0MDRdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0
NDA3XSBXcml0ZSgxMCk6IDJhIDAwIDhjIDUwIDM2IDgxIDAwIDAwIDE4IDAwDQpbICAyNTIu
MTg0NDE4XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUy
LjE4NDQxOF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDQxOV0gUmVzdWx0OiBo
b3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4
NDQyMF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQ0MjNdIFdyaXRlKDEw
KTogMmEgMDAgMDAgMDAgMDAgM2YgMDAgMDAgMDggMDANClsgIDI1Mi4yNzk2NTddIHNkIDA6
MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMjc5NjU5XSBzZCAw
OjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMjc5NjYwXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMjc5NjYxXSBzZCAwOjA6
MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjI3OTY2NV0gUmVhZCgxMCk6IDI4IDAwIDAyIGJm
IGFlIDAxIDAwIDAwIDIwIDAwDQpbICAyNTIuMjc5Njk4XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjI3OTY5OV0gc2QgMDowOjA6MDogW3NkYV0g
IA0KWyAgMjUyLjI3OTcwMF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjI3OTcwMV0gc2QgMDowOjA6MDogW3NkYV0gQ0RC
OiANClsgIDI1Mi4yNzk3MDRdIFJlYWQoMTApOiAyOCAwMCAwMiBiZiBhZSAwMSAwMCAwMCAw
OCAwMA0KWyAgMjUyLjI3OTg0NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4yNzk4NDRdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4yNzk4
NDVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4yNzk4NDZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMjc5
ODQ5XSBSZWFkKDEwKTogMjggMDAgMDIgYmYgYWUgMDEgMDAgMDAgMDggMDANClsgIDI1My44
ODAwMzNdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1My44ODUwMTddIFJlc3VsdDogaG9z
dGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1My44ODk5
ODBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTMuODk0OTc4XSBSZWFkKDEwKTog
MjggMDAgZDEgMmQgMWUgMDEgMDAgMDAgMDggMDANClsgIDI1My45MDAzMzFdIEVYVDQtZnMg
ZXJyb3IgKGRldmljZSBkbS0wKSBpbiBleHQ0X3Jlc2VydmVfaW5vZGVfd3JpdGU6NDg0Mjog
Sm91cm5hbCBoYXMgYWJvcnRlZA0KWyAgMjUzLjkwNTUwNl0gW3NjaGVkX2RlbGF5ZWRdIHNj
aGVkOiBSVCB0aHJvdHRsaW5nIGFjdGl2YXRlZA0KWyAgMjUzLjkxMDgyNl0gRVhUNC1mcyAo
ZG0tMCk6IHByZXZpb3VzIEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRldGVjdGVkDQpbICAy
NTMuOTE2MTc3XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAg
MjUzLjkyMTQ0OF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUzLjkyNjY4OF0gUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUz
LjkzMjA3N10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1My45MzczNzZdIFdyaXRl
KDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1Ni4zMjUxMjldIEJV
Rzogc29mdCBsb2NrdXAgLSBDUFUjMSBzdHVjayBmb3IgMjFzISBbeGVuZG9tYWluczo5Njc5
XQ0KWyAgMjU2LjMzMDU4OV0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAyNTYuMzM1OTY0XSBp
cnEgZXZlbnQgc3RhbXA6IDE5ODE4OA0KWyAgMjU2LjM0MTM3Nl0gaGFyZGlycXMgbGFzdCAg
ZW5hYmxlZCBhdCAoMTk4MTg3KTogWzxmZmZmZmZmZjgxYWEwYjMzPl0gcmVzdG9yZV9hcmdz
KzB4MC8weDMwDQpbICAyNTYuMzQ2OTU2XSBoYXJkaXJxcyBsYXN0IGRpc2FibGVkIGF0ICgx
OTgxODgpOiBbPGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMjU2
LjM1MjQ5NV0gc29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoMTk4MTg2KTogWzxmZmZmZmZm
ZjgxMGE5ZGYxPl0gX19kb19zb2Z0aXJxKzB4MTkxLzB4MjIwDQpbICAyNTYuMzU4MDY4XSBz
b2Z0aXJxcyBsYXN0IGRpc2FibGVkIGF0ICgxOTgxODEpOiBbPGZmZmZmZmZmODEwYWEyZTI+
XSBpcnFfZXhpdCsweGEyLzB4ZDANClsgIDI1Ni4zNjM2MDRdIENQVTogMSBQSUQ6IDk2Nzkg
Q29tbTogeGVuZG9tYWlucyBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRl
dmVsKyAjMQ0KWyAgMjU2LjM2OTI5M10gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkw
RlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDI1Ni4z
NzUwMzBdIHRhc2s6IGZmZmY4ODAwNThkZWMyNDAgdGk6IGZmZmY4ODAwNGNkMzIwMDAgdGFz
ay50aTogZmZmZjg4MDA0Y2QzMjAwMA0KWyAgMjU2LjM4MDc1NF0gUklQOiBlMDMwOls8ZmZm
ZmZmZmY4MTEwOWE2MD5dICBbPGZmZmZmZmZmODExMDlhNjA+XSBnZW5lcmljX2V4ZWNfc2lu
Z2xlKzB4OTAvMHhjMA0KWyAgMjU2LjM4NjY1NF0gUlNQOiBlMDJiOmZmZmY4ODAwNGNkMzNh
ODggIEVGTEFHUzogMDAwMDAyMDINClsgIDI1Ni4zOTI1ODJdIFJBWDogMDAwMDAwMDAwMDAw
MDAwOCBSQlg6IGZmZmY4ODAwNWY2MTQwNDAgUkNYOiAwMDAwMDAwMDAwMDAwMDM4DQpbICAy
NTYuMzk4NjQzXSBSRFg6IDAwMDAwMDAwMDAwMDAwZmYgUlNJOiAwMDAwMDAwMDAwMDAwMDA4
IFJESTogMDAwMDAwMDAwMDAwMDAwOA0KWyAgMjU2LjQwNDY2N10gUkJQOiBmZmZmODgwMDRj
ZDMzYWM4IFIwODogZmZmZmZmZmY4MWMwZDQ2OCBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsg
IDI1Ni40MTA2ODhdIFIxMDogMDAwMDAwMDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAw
MDAgUjEyOiBmZmZmODgwMDRjZDMzYWYwDQpbICAyNTYuNDE2Njk4XSBSMTM6IDAwMDAwMDAw
MDAwMDAwMDEgUjE0OiAwMDAwMDAwMDAwMDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0K
WyAgMjU2LjQyMjYwMF0gRlM6ICAwMDAwN2YxNTJlYzRkNzAwKDAwMDApIEdTOmZmZmY4ODAw
NWY2NDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjU2LjQyODY5MF0g
Q1M6ICBlMDMzIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAg
MjU2LjQzNDc3NV0gQ1IyOiAwMDAwN2YxNTJlMmExZTAyIENSMzogMDAwMDAwMDA0Y2QyYTAw
MCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDI1Ni40NDA4NzJdIFN0YWNrOg0KWyAgMjU2
LjQ0NjkyOF0gIDAwMDAwMDAwMDAwMDAyMDAgZmZmZjg4MDA1ZjYxNDA0MCAwMDAwMDAwMDAw
MDAwMDA2IDAwMDAwMDAwMDAwMDAwMDANClsgIDI1Ni40NTMxNzJdICAwMDAwMDAwMDAwMDAw
MDAxIGZmZmZmZmZmODIyZTczMDAgZmZmZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDAx
DQpbICAyNTYuNDU5NDU3XSAgZmZmZjg4MDA0Y2QzM2IzOCBmZmZmZmZmZjgxMTA5Y2M1IGZm
ZmZmZmZmODFhYTBiMzMgZmZmZjg4MDBjZWQ1NjAwMA0KWyAgMjU2LjQ2NTcxOF0gQ2FsbCBU
cmFjZToNClsgIDI1Ni40NzE4ODhdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0
cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNTYuNDc4MjUwXSAgWzxm
ZmZmZmZmZjgxMTA5Y2M1Pl0gc21wX2NhbGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTAN
ClsgIDI1Ni40ODQ2MTVdICBbPGZmZmZmZmZmODFhYTBiMzM+XSA/IHJldGludF9yZXN0b3Jl
X2FyZ3MrMHgxMy8weDEzDQpbICAyNTYuNDkxMDI1XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0g
PyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjU2LjQ5
NzQ2N10gIFs8ZmZmZmZmZmY4MTEwYTAzYT5dIHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkrMHgy
N2EvMHgyYTANClsgIDI1Ni41MDM4ODRdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9k
ZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNTYuNTEwMzY2XSAg
WzxmZmZmZmZmZjgxMDA4NDFlPl0geGVuX2V4aXRfbW1hcCsweGNlLzB4MWEwDQpbICAyNTYu
NTE2ODAzXSAgWzxmZmZmZmZmZjgxMDAxMjJhPl0gPyB4ZW5faHlwZXJjYWxsX3hlbl92ZXJz
aW9uKzB4YS8weDIwDQpbICAyNTYuNTIzMzE3XSAgWzxmZmZmZmZmZjgxMTY5NDI2Pl0gZXhp
dF9tbWFwKzB4NTYvMHgxODANClsgIDI1Ni41Mjk4MThdICBbPGZmZmZmZmZmODEwZTcxN2E+
XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMjU2LjUzNjIyOF0gIFs8ZmZmZmZm
ZmY4MTFkZGRlMD5dID8gZXhpdF9haW8rMHhiMC8weGUwDQpbICAyNTYuNTQyNjMwXSAgWzxm
ZmZmZmZmZjgxMWRkZDQ0Pl0gPyBleGl0X2FpbysweDE0LzB4ZTANClsgIDI1Ni41NDg5OTld
ICBbPGZmZmZmZmZmODEwYTI2ODk+XSBtbXB1dCsweDU5LzB4ZTANClsgIDI1Ni41NTUzODld
ICBbPGZmZmZmZmZmODExOWEzYTk+XSBmbHVzaF9vbGRfZXhlYysweDQzOS8weDgzMA0KWyAg
MjU2LjU2MTgwN10gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dIGxvYWRfZWxmX2JpbmFyeSsweDMy
YS8weDFhMDANClsgIDI1Ni41NjgxOTVdICBbPGZmZmZmZmZmODFhOWZmZTY+XSA/IF9yYXdf
cmVhZF91bmxvY2srMHgyNi8weDMwDQpbICAyNTYuNTc0NTQwXSAgWzxmZmZmZmZmZjgxMGU3
MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjU2LjU4MDg2MF0gIFs8ZmZm
ZmZmZmY4MTE5OTI0Mz5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4YzMvMHgxYjANClsg
IDI1Ni41ODcyNTFdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWlyZSsweGVj
LzB4MTEwDQpbICAyNTYuNTkzNjE1XSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3Jl
bGVhc2UrMHgxMmEvMHgyNTANClsgIDI1Ni41OTk5NDFdICBbPGZmZmZmZmZmODExOTkyMDQ+
XSBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAgMjU2LjYwNjI4MF0gIFs8
ZmZmZmZmZmY4MTE5YjI1Mj5dIGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDU5Mi8weDcx
MA0KWyAgMjU2LjYxMjYxOF0gIFs8ZmZmZmZmZmY4MTE5YjFlNz5dID8gZG9fZXhlY3ZlX2Nv
bW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAyNTYuNjE4OTY5XSAgWzxmZmZmZmZmZjgx
MTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjANClsgIDI1Ni42MjUyNjdd
ICBbPGZmZmZmZmZmODExOWIzZTM+XSBkb19leGVjdmUrMHgxMy8weDIwDQpbICAyNTYuNjMx
NTQ3XSAgWzxmZmZmZmZmZjgxMTliNjU4Pl0gU3lTX2V4ZWN2ZSsweDM4LzB4NjANClsgIDI1
Ni42Mzc3MjldICBbPGZmZmZmZmZmODFhYTE5ZTk+XSBzdHViX2V4ZWN2ZSsweDY5LzB4YTAN
ClsgIDI1Ni42NDM4NDJdIENvZGU6IDAwIDQ4IDhiIDQ1IGMwIDRjIDg5IGZmIDQ4IDg5IGM2
IGU4IGJiIDY3IDk5IDAwIDQ4IDNiIDVkIGM4IDc0IDJkIDQ1IDg1IGVkIDc1IDBhIGViIDEw
IDY2IDBmIDFmIDQ0IDAwIDAwIGYzIDkwIDQxIGY2IDQ0IDI0IDIwIDAxIDw3NT4gZjYgNDgg
OGIgNWQgZDggNGMgOGIgNjUgZTAgNGMgOGIgNmQgZTggNGMgOGIgNzUgZjAgNGMgOGIgN2Qg
DQpbICAyNjEuMDQ0MDY5XSBCbHVldG9vdGg6IGhjaTAgbGluayB0eCB0aW1lb3V0DQpbICAy
NjEuMDUwMzUyXSBCbHVldG9vdGg6IGhjaTAga2lsbGluZyBzdGFsbGVkIGNvbm5lY3Rpb24g
MDM6MTI6MDk6MjU6MTM6NzINClsgIDI2My4wNDQwNTFdIEJsdWV0b290aDogaGNpMCBsaW5r
IHR4IHRpbWVvdXQNClsgIDI2My4wNTA0NzldIEJsdWV0b290aDogaGNpMCBraWxsaW5nIHN0
YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjYzLjA1NjkwNV0gQmx1
ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjYzLjA2MzM1N10gQmx1ZXRvb3Ro
OiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI2My4wNjk2MzZdIEJsdWV0
b290aDogaGNpMCBraWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3
Mg0KWyAgMjY1LjA4MDgxMF0gQmx1ZXRvb3RoOiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRp
bWVvdXQNClsgIDI2Ny4wNDY2MzddIEJsdWV0b290aDogaGNpMCBsaW5rIHR4IHRpbWVvdXQN
ClsgIDI2Ny4wNTI5MThdIEJsdWV0b290aDogaGNpMCBraWxsaW5nIHN0YWxsZWQgY29ubmVj
dGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjY3LjA5OTg0Ml0gQmx1ZXRvb3RoOiBoY2kw
IGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI2OS4wNDMyNzddIEJsdWV0b290aDog
aGNpMCBsaW5rIHR4IHRpbWVvdXQNClsgIDI2OS4wNDkzMzJdIEJsdWV0b290aDogaGNpMCBr
aWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjY5LjEw
NTQ4NV0gQmx1ZXRvb3RoOiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI3
MS4xMTc4NDBdIEJsdWV0b290aDogaGNpMCBjb21tYW5kIDB4MDQwNiB0eCB0aW1lb3V0DQpb
ICAyNzYuMzE1MjE0XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzIgc3R1Y2sgZm9yIDIzcyEg
W2Jhc2g6OTcxNF0NClsgIDI3Ni4zMjA5NTJdIE1vZHVsZXMgbGlua2VkIGluOg0KWyAgMjc2
LjMyNjYwM10gaXJxIGV2ZW50IHN0YW1wOiAxMjk4NzANClsgIDI3Ni4zMzIyNzhdIGhhcmRp
cnFzIGxhc3QgIGVuYWJsZWQgYXQgKDEyOTg2OSk6IFs8ZmZmZmZmZmY4MWFhMGIzMz5dIHJl
c3RvcmVfYXJncysweDAvMHgzMA0KWyAgMjc2LjMzODE3NF0gaGFyZGlycXMgbGFzdCBkaXNh
YmxlZCBhdCAoMTI5ODcwKTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jfc3RpKzB4NS8w
eDYNClsgIDI3Ni4zNDM5ODhdIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDEyOTg2OCk6
IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIyMA0KWyAgMjc2
LjM0OTg5MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMTI5ODUzKTogWzxmZmZmZmZm
ZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAyNzYuMzU1NzMxXSBDUFU6IDIg
UElEOiA5NzE0IENvbW06IGJhc2ggTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14
ZW5kZXZlbCsgIzENClsgIDI3Ni4zNjE1ODFdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQw
Lzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAy
NzYuMzY3NDY3XSB0YXNrOiBmZmZmODgwMDUyODY2MzYwIHRpOiBmZmZmODgwMDRjZjg4MDAw
IHRhc2sudGk6IGZmZmY4ODAwNGNmODgwMDANClsgIDI3Ni4zNzMzNjRdIFJJUDogZTAzMDpb
PGZmZmZmZmZmODExMDlhNjA+XSAgWzxmZmZmZmZmZjgxMTA5YTYwPl0gZ2VuZXJpY19leGVj
X3NpbmdsZSsweDkwLzB4YzANClsgIDI3Ni4zNzkzMjVdIFJTUDogZTAyYjpmZmZmODgwMDRj
Zjg5YTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpbICAyNzYuMzg1Mjc1XSBSQVg6IGZmZmY4ODAw
NTI4NjYzNjAgUkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAwNg0K
WyAgMjc2LjM5MTMxMF0gUkRYOiAwMDAwMDAwMDAwMDAwMDA2IFJTSTogZmZmZjg4MDA1Mjg2
NmEzOCBSREk6IDAwMDAwMDAwMDAwMDAyMDANClsgIDI3Ni4zOTczMDBdIFJCUDogZmZmZjg4
MDA0Y2Y4OWFjOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDYgUjA5OiAwMDAwMDAwMDAwMDAwMDAw
DQpbICAyNzYuNDAzMjA1XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAw
MDAwMDAwIFIxMjogZmZmZjg4MDA0Y2Y4OWFmMA0KWyAgMjc2LjQwOTA4Nl0gUjEzOiAwMDAw
MDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQw
NTANClsgIDI3Ni40MTQ5MDZdIEZTOiAgMDAwMDdmNDA3YjQ5ZjcwMCgwMDAwKSBHUzpmZmZm
ODgwMDVmNjgwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsgIDI3Ni40MjA3
NzldIENTOiAgZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2IN
ClsgIDI3Ni40MjY2NDNdIENSMjogMDAwMDdmNDA3YjA2YjA3MCBDUjM6IDAwMDAwMDAwNGNk
ZDUwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAyNzYuNDMyNTU5XSBTdGFjazoNClsg
IDI3Ni40MzgzNjJdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNGNkMzNhZjAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQpbICAyNzYuNDQ0MzY1XSAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAw
MDAwMg0KWyAgMjc2LjQ1MDM0NV0gIGZmZmY4ODAwNGNmODliMzggZmZmZmZmZmY4MTEwOWNj
NSBmZmZmODgwMDU3N2I0ODc4IGZmZmZmZmZmODEwMDdkNjANClsgIDI3Ni40NTYzNTddIENh
bGwgVHJhY2U6DQpbICAyNzYuNDYyMzI4XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5f
ZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjc2LjQ2ODUzMF0g
IFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4
MWUwDQpbICAyNzYuNDc0NzU1XSAgWzxmZmZmZmZmZjgxMDA3ZDYwPl0gPyB4ZW5fcGluX3Bh
Z2UrMHgxMTAvMHgxMjANClsgIDI3Ni40ODA5NThdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/
IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNzYuNDg3
MjI2XSAgWzxmZmZmZmZmZjgxMTBhMDNhPl0gc21wX2NhbGxfZnVuY3Rpb25fbWFueSsweDI3
YS8weDJhMA0KWyAgMjc2LjQ5MzQ2M10gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rl
c3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI3Ni40OTk3ODZdICBb
PGZmZmZmZmZmODEwMDg0MWU+XSB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTANClsgIDI3Ni41
MDYwNzZdICBbPGZmZmZmZmZmODExNjk0MjY+XSBleGl0X21tYXArMHg1Ni8weDE4MA0KWyAg
Mjc2LjUxMjM0MV0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNlKzB4MTJh
LzB4MjUwDQpbICAyNzYuNTE4Njc0XSAgWzxmZmZmZmZmZjgxMWRkZGUwPl0gPyBleGl0X2Fp
bysweGIwLzB4ZTANClsgIDI3Ni41MjQ5NzFdICBbPGZmZmZmZmZmODExZGRkNDQ+XSA/IGV4
aXRfYWlvKzB4MTQvMHhlMA0KWyAgMjc2LjUzMTE4NF0gIFs8ZmZmZmZmZmY4MTBhMjY4OT5d
IG1tcHV0KzB4NTkvMHhlMA0KWyAgMjc2LjUzNzM1MF0gIFs8ZmZmZmZmZmY4MTE5YTNhOT5d
IGZsdXNoX29sZF9leGVjKzB4NDM5LzB4ODMwDQpbICAyNzYuNTQzNTIzXSAgWzxmZmZmZmZm
ZjgxMWU4Y2NhPl0gbG9hZF9lbGZfYmluYXJ5KzB4MzJhLzB4MWEwMA0KWyAgMjc2LjU0OTcz
N10gIFs8ZmZmZmZmZmY4MWE5ZmZlNj5dID8gX3Jhd19yZWFkX3VubG9jaysweDI2LzB4MzAN
ClsgIDI3Ni41NTU5MzZdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWlyZSsw
eGVjLzB4MTEwDQpbICAyNzYuNTYyMTYxXSAgWzxmZmZmZmZmZjgxMTk5MjQzPl0gPyBzZWFy
Y2hfYmluYXJ5X2hhbmRsZXIrMHhjMy8weDFiMA0KWyAgMjc2LjU2ODI1MF0gIFs8ZmZmZmZm
ZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI3Ni41NzQxMzdd
ICBbPGZmZmZmZmZmODEwZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAg
Mjc2LjU3OTk0N10gIFs8ZmZmZmZmZmY4MTE5OTIwND5dIHNlYXJjaF9iaW5hcnlfaGFuZGxl
cisweDg0LzB4MWIwDQpbICAyNzYuNTg1ODg1XSAgWzxmZmZmZmZmZjgxMTliMjUyPl0gZG9f
ZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTkyLzB4NzEwDQpbICAyNzYuNTkxNzQ5XSAgWzxm
ZmZmZmZmZjgxMTliMWU3Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1MjcvMHg3
MTANClsgIDI3Ni41OTc2MDldICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGttZW1fY2FjaGVf
YWxsb2MrMHhiNS8weDEyMA0KWyAgMjc2LjYwMzQ0Nl0gIFs8ZmZmZmZmZmY4MTE5YjNlMz5d
IGRvX2V4ZWN2ZSsweDEzLzB4MjANClsgIDI3Ni42MDkyNzddICBbPGZmZmZmZmZmODExOWI2
NTg+XSBTeVNfZXhlY3ZlKzB4MzgvMHg2MA0KWyAgMjc2LjYxNTA4Nl0gIFs8ZmZmZmZmZmY4
MWFhMTllOT5dIHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMjc2LjYyMDg0NV0gQ29kZTog
MDAgNDggOGIgNDUgYzAgNGMgODkgZmYgNDggODkgYzYgZTggYmIgNjcgOTkgMDAgNDggM2Ig
NWQgYzggNzQgMmQgNDUgODUgZWQgNzUgMGEgZWIgMTAgNjYgMGYgMWYgNDQgMDAgMDAgZjMg
OTAgNDEgZjYgNDQgMjQgMjAgMDEgPDc1PiBmNiA0OCA4YiA1ZCBkOCA0YyA4YiA2NSBlMCA0
YyA4YiA2ZCBlOCA0YyA4YiA3NSBmMCA0YyA4YiA3ZCANClsgIDI4MC4zMTMyMjhdIEJVRzog
c29mdCBsb2NrdXAgLSBDUFUjMyBzdHVjayBmb3IgMjNzISBbYmFzaDo5NzE4XQ0KWyAgMjgw
LjMxOTQyNl0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAyODAuMzI1NTM4XSBpcnEgZXZlbnQg
c3RhbXA6IDE0MjIzMA0KWyAgMjgwLjMzMTY1OV0gaGFyZGlycXMgbGFzdCAgZW5hYmxlZCBh
dCAoMTQyMjI5KTogWzxmZmZmZmZmZjgxYWEwYjMzPl0gcmVzdG9yZV9hcmdzKzB4MC8weDMw
DQpbICAyODAuMzM4MDA2XSBoYXJkaXJxcyBsYXN0IGRpc2FibGVkIGF0ICgxNDIyMzApOiBb
PGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMjgwLjM0NDM0NV0g
c29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoMTQyMjI4KTogWzxmZmZmZmZmZjgxMGE5ZGYx
Pl0gX19kb19zb2Z0aXJxKzB4MTkxLzB4MjIwDQpbICAyODAuMzUwNjkwXSBzb2Z0aXJxcyBs
YXN0IGRpc2FibGVkIGF0ICgxNDIyMTMpOiBbPGZmZmZmZmZmODEwYWEyZTI+XSBpcnFfZXhp
dCsweGEyLzB4ZDANClsgIDI4MC4zNTY5OThdIENQVTogMyBQSUQ6IDk3MTggQ29tbTogYmFz
aCBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMjgw
LjM2MzQwMV0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2
NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDI4MC4zNjk4MDNdIHRhc2s6IGZm
ZmY4ODAwNThkZWQyZDAgdGk6IGZmZmY4ODAwNGNmNTYwMDAgdGFzay50aTogZmZmZjg4MDA0
Y2Y1NjAwMA0KWyAgMjgwLjM3NjMwMV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTEwOWE2MD5d
ICBbPGZmZmZmZmZmODExMDlhNjA+XSBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4OTAvMHhjMA0K
WyAgMjgwLjM4MjkzNV0gUlNQOiBlMDJiOmZmZmY4ODAwNGNmNTdhODggIEVGTEFHUzogMDAw
MDAyMDINClsgIDI4MC4zODk1MzBdIFJBWDogZmZmZjg4MDA1OGRlZDJkMCBSQlg6IGZmZmY4
ODAwNWY2MTQwNDAgUkNYOiAwMDAwMDAwMDAwMDAwMDA2DQpbICAyODAuMzk2MTI1XSBSRFg6
IDAwMDAwMDAwMDAwMDAwMDYgUlNJOiBmZmZmODgwMDU4ZGVkOWE4IFJESTogMDAwMDAwMDAw
MDAwMDIwMA0KWyAgMjgwLjQwMjY3N10gUkJQOiBmZmZmODgwMDRjZjU3YWM4IFIwODogMDAw
MDAwMDAwMDAwMDAwNiBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsgIDI4MC40MDkxMTVdIFIx
MDogMDAwMDAwMDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAwMDAgUjEyOiBmZmZmODgw
MDRjZjU3YWYwDQpbICAyODAuNDE1NDQ4XSBSMTM6IDAwMDAwMDAwMDAwMDAwMDEgUjE0OiAw
MDAwMDAwMDAwMDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0KWyAgMjgwLjQyMTY3Nl0g
RlM6ICAwMDAwN2ZlM2NhOTg3NzAwKDAwMDApIEdTOmZmZmY4ODAwNWY2YzAwMDAoMDAwMCkg
a25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjgwLjQyNzk3MV0gQ1M6ICBlMDMzIERTOiAw
MDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAgMjgwLjQzNDI2MV0gQ1Iy
OiAwMDAwN2Y5NTc3NTcwZjMwIENSMzogMDAwMDAwMDA0Y2RjMjAwMCBDUjQ6IDAwMDAwMDAw
MDAwMDA2NjANClsgIDI4MC40NDA2MDldIFN0YWNrOg0KWyAgMjgwLjQ0NjkyN10gIDAwMDAw
MDAwMDAwMDAyMDAgZmZmZjg4MDA0Y2QzM2FmMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDANClsgIDI4MC40NTM0ODddICAwMDAwMDAwMDAwMDAwMDAzIGZmZmZmZmZmODIy
ZTczMDAgZmZmZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDAzDQpbICAyODAuNDU5OTUz
XSAgZmZmZjg4MDA0Y2Y1N2IzOCBmZmZmZmZmZjgxMTA5Y2M1IGZmZmY4ODAwNTc1ODUxNzgg
ZmZmZmZmZmY4MTAwN2Q2MA0KWyAgMjgwLjQ2NjM5Ml0gQ2FsbCBUcmFjZToNClsgIDI4MC40
NzI3NTldICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNf
cmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyODAuNDc5MjYwXSAgWzxmZmZmZmZmZjgxMTA5Y2M1
Pl0gc21wX2NhbGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTANClsgIDI4MC40ODU2MzBd
ICBbPGZmZmZmZmZmODEwMDdkNjA+XSA/IHhlbl9waW5fcGFnZSsweDExMC8weDEyMA0KWyAg
MjgwLjQ5MjAyNV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGln
dW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI4MC40OTg0NDNdICBbPGZmZmZmZmZmODEx
MGEwM2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAyODAuNTA0
ODA5XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3Jl
Z2lvbisweDE2MC8weDE2MA0KWyAgMjgwLjUxMTE0M10gIFs8ZmZmZmZmZmY4MTAwODQxZT5d
IHhlbl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMjgwLjUxNzM4NF0gIFs8ZmZmZmZmZmY4
MTE2OTQyNj5dIGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAyODAuNTIzNjU5XSAgWzxmZmZm
ZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDI4MC41Mjk5
NDldICBbPGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjgw
LjUzNjE1NV0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpb
ICAyODAuNTQyMjkzXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpb
ICAyODAuNTQ4MzYwXSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0
MzkvMHg4MzANClsgIDI4MC41NTQ0NzFdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2Vs
Zl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAyODAuNTYwNjAyXSAgWzxmZmZmZmZmZjgxYTlm
ZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMjgwLjU2NjcxNF0gIFs8
ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI4MC41
NzI4ODldICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisw
eGMzLzB4MWIwDQpbICAyODAuNTc5MTA3XSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2Nr
X2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjgwLjU4NTMyMl0gIFs8ZmZmZmZmZmY4MTBlNzE3
YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAyODAuNTkxNDg3XSAgWzxmZmZm
ZmZmZjgxMTk5MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDI4
MC41OTc3MjhdICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEu
MzErMHg1OTIvMHg3MTANClsgIDI4MC42MDM5NzRdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/
IGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMjgwLjYxMDIxNF0g
IFs8ZmZmZmZmZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpb
ICAyODAuNjE2NDUyXSAgWzxmZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgy
MA0KWyAgMjgwLjYyMjY5Nl0gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgz
OC8weDYwDQpbICAyODAuNjI4OTEwXSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVj
dmUrMHg2OS8weGEwDQpbICAyODAuNjM1MTM0XSBDb2RlOiAwMCA0OCA4YiA0NSBjMCA0YyA4
OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBl
ZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCBmMyA5MCA0MSBmNiA0NCAyNCAyMCAw
MSA8NzU+IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIDhiIDc1
IGYwIDRjIDhiIDdkIA0KWyAgMjgzLjE3MDg1OF0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHgg
dGltZW91dA0KWyAgMjgzLjE3NzMxN10gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxl
ZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAyODQuMzExMjMyXSBCVUc6IHNv
ZnQgbG9ja3VwIC0gQ1BVIzEgc3R1Y2sgZm9yIDIycyEgW3hlbmRvbWFpbnM6OTY3OV0NClsg
IDI4NC4zMTc5NTddIE1vZHVsZXMgbGlua2VkIGluOg0KWyAgMjg0LjMyNDQ5Ml0gaXJxIGV2
ZW50IHN0YW1wOiAyNjQ5MzINClsgIDI4NC4zMzA4MTRdIGhhcmRpcnFzIGxhc3QgIGVuYWJs
ZWQgYXQgKDI2NDkzMSk6IFs8ZmZmZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAv
MHgzMA0KWyAgMjg0LjMzNzI5NF0gaGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMjY0OTMy
KTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jfc3RpKzB4NS8weDYNClsgIDI4NC4zNDM3
MThdIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDI2NDkzMCk6IFs8ZmZmZmZmZmY4MTBh
OWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIyMA0KWyAgMjg0LjM1MDE4NV0gc29mdGly
cXMgbGFzdCBkaXNhYmxlZCBhdCAoMjY0OTI1KTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJx
X2V4aXQrMHhhMi8weGQwDQpbICAyODQuMzU2NTYzXSBDUFU6IDEgUElEOiA5Njc5IENvbW06
IHhlbmRvbWFpbnMgTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsg
IzENClsgIDI4NC4zNjMwMTVdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQwLzg5MEZYQS1H
RDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAyODQuMzY5NTU2
XSB0YXNrOiBmZmZmODgwMDU4ZGVjMjQwIHRpOiBmZmZmODgwMDRjZDMyMDAwIHRhc2sudGk6
IGZmZmY4ODAwNGNkMzIwMDANClsgIDI4NC4zNzYyMDldIFJJUDogZTAzMDpbPGZmZmZmZmZm
ODExMDlhNWE+XSAgWzxmZmZmZmZmZjgxMTA5YTVhPl0gZ2VuZXJpY19leGVjX3NpbmdsZSsw
eDhhLzB4YzANClsgIDI4NC4zODI4NTJdIFJTUDogZTAyYjpmZmZmODgwMDRjZDMzYTg4ICBF
RkxBR1M6IDAwMDAwMjAyDQpbICAyODQuMzg5NDQwXSBSQVg6IDAwMDAwMDAwMDAwMDAwMDgg
UkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAzOA0KWyAgMjg0LjM5
NjAzM10gUkRYOiAwMDAwMDAwMDAwMDAwMGZmIFJTSTogMDAwMDAwMDAwMDAwMDAwOCBSREk6
IDAwMDAwMDAwMDAwMDAwMDgNClsgIDI4NC40MDI0ODddIFJCUDogZmZmZjg4MDA0Y2QzM2Fj
OCBSMDg6IGZmZmZmZmZmODFjMGQ0NjggUjA5OiAwMDAwMDAwMDAwMDAwMDAwDQpbICAyODQu
NDA4ODYxXSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAwMDAwMDAwIFIx
MjogZmZmZjg4MDA0Y2QzM2FmMA0KWyAgMjg0LjQxNTI0Nl0gUjEzOiAwMDAwMDAwMDAwMDAw
MDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQwNTANClsgIDI4
NC40MjE2MDddIEZTOiAgMDAwMDdmMTUyZWM0ZDcwMCgwMDAwKSBHUzpmZmZmODgwMDVmNjQw
MDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsgIDI4NC40MjgwNDVdIENTOiAg
ZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDI4NC40
MzQ0NzRdIENSMjogMDAwMDdmMTUyZTJhMWUwMiBDUjM6IDAwMDAwMDAwNGNkMmEwMDAgQ1I0
OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAyODQuNDQxMDQ1XSBTdGFjazoNClsgIDI4NC40NDc0
NzVdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNWY2MTQwNDAgMDAwMDAwMDAwMDAwMDAw
NiAwMDAwMDAwMDAwMDAwMDAwDQpbICAyODQuNDU0MDMyXSAgMDAwMDAwMDAwMDAwMDAwMSBm
ZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAwMDAwMQ0KWyAg
Mjg0LjQ2MDU2OF0gIGZmZmY4ODAwNGNkMzNiMzggZmZmZmZmZmY4MTEwOWNjNSBmZmZmZmZm
ZjgxYWEwYjMzIGZmZmY4ODAwY2VkNTYwMDANClsgIDI4NC40NjcwNDNdIENhbGwgVHJhY2U6
DQpbICAyODQuNDczNDUwXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9j
b250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjg0LjQ3OTk2N10gIFs8ZmZmZmZm
ZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUwDQpbICAy
ODQuNDg2NDY1XSAgWzxmZmZmZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9yZV9hcmdz
KzB4MTMvMHgxMw0KWyAgMjg0LjQ5MjkyMl0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVu
X2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI4NC40OTkzNzRd
ICBbPGZmZmZmZmZmODExMGEwM2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4
MmEwDQpbICAyODQuNTA1Nzg5XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJv
eV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjg0LjUxMjI1NV0gIFs8ZmZm
ZmZmZmY4MTAwODQxZT5dIHhlbl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMjg0LjUxODcw
MF0gIFs8ZmZmZmZmZmY4MTAwMTIyYT5dID8geGVuX2h5cGVyY2FsbF94ZW5fdmVyc2lvbisw
eGEvMHgyMA0KWyAgMjg0LjUyNTEzOF0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5dIGV4aXRfbW1h
cCsweDU2LzB4MTgwDQpbICAyODQuNTMxNTUzXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBs
b2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDI4NC41Mzc3OTldICBbPGZmZmZmZmZmODEx
ZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjg0LjU0NDEzOV0gIFs8ZmZmZmZm
ZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAyODQuNTUwNDYwXSAgWzxm
ZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpbICAyODQuNTU2NzMzXSAgWzxm
ZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzANClsgIDI4NC41
NjMwNjZdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkrMHgzMmEvMHgx
YTAwDQpbICAyODQuNTY5NDUwXSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBfcmF3X3JlYWRf
dW5sb2NrKzB4MjYvMHgzMA0KWyAgMjg0LjU3NTg1NF0gIFs8ZmZmZmZmZmY4MTBlNzM4Yz5d
ID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI4NC41ODIyMTVdICBbPGZmZmZmZmZm
ODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIwDQpbICAyODQu
NTg4NjAwXSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDEx
MA0KWyAgMjg0LjU5NDk5N10gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNl
KzB4MTJhLzB4MjUwDQpbICAyODQuNjAxMzU0XSAgWzxmZmZmZmZmZjgxMTk5MjA0Pl0gc2Vh
cmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDI4NC42MDc3MDhdICBbPGZmZmZm
ZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIvMHg3MTANClsg
IDI4NC42MTQxMjVdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2ZV9jb21tb24u
aXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMjg0LjYyMDU0OF0gIFs8ZmZmZmZmZmY4MTE4NjM0
NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpbICAyODQuNjI2OTUxXSAgWzxm
ZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMjg0LjYzMzM2NF0g
IFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpbICAyODQuNjM5
Njk4XSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8weGEwDQpbICAy
ODQuNjQ2MDQxXSBDb2RlOiA4OSBkYSBlOCBlYSBjMSAzMiAwMCA0OCA4YiA0NSBjMCA0YyA4
OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBl
ZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCBmMyA5MCA8NDE+IGY2IDQ0IDI0IDIw
IDAxIDc1IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIA0KWyAg
Mjg1LjE3MDkwOF0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjg1LjE3
NzM1NF0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEy
OjA5OjI1OjEzOjcyDQpbICAyODUuMTg0MTkyXSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAw
eDA0MDYgdHggdGltZW91dA0KWyAgMjg1LjY1NTY4MF0gc2QgMDowOjA6MDogW3NkYV0gVW5o
YW5kbGVkIGVycm9yIGNvZGUNClsgIDI4NS42NjIwMDRdIHNkIDA6MDowOjA6IFtzZGFdICAN
ClsgIDI4NS42NjgxNDhdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVy
Ynl0ZT1EUklWRVJfT0sNClsgIDI4NS42NzQzNTddIHNkIDA6MDowOjA6IFtzZGFdIENEQjog
DQpbICAyODUuNjgwNTUxXSBSZWFkKDEwKTogMjggMDAgMDYgNmIgODggYTEgMDAgMDAgMDgg
MDANClsgIDI4NS42ODY3NDZdIGJsa191cGRhdGVfcmVxdWVzdDogNTMgY2FsbGJhY2tzIHN1
cHByZXNzZWQNClsgIDI4NS42OTI5MjZdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBz
ZGEsIHNlY3RvciAxMDc3MTA2MjUNClsgIDI4NS42OTkxOTJdIEVYVDQtZnMgZXJyb3IgKGRl
dmljZSBkbS0wKTogZXh0NF9maW5kX2VudHJ5OjEzMDk6IGlub2RlICMyNzY1MjQxOiBjb21t
IGRoY2xpZW50LXNjcmlwdDogcmVhZGluZyBkaXJlY3RvcnkgbGJsb2NrIDANClsgIDI4NS43
MDU3MDNdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91cyBJL08gZXJyb3IgdG8gc3VwZXJibG9j
ayBkZXRlY3RlZA0KWyAgMjg1LjcxMjE4M10gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVk
IGVycm9yIGNvZGUNClsgIDI4NS43MTg1MjFdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI4
NS43MjQ3MTVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sNClsgIDI4NS43MzA5MDNdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAy
ODUuNzM2OTgyXSBXcml0ZSgxMCk6IDJhIDAwIDAxIDJhIDU1IDAxIDAwIDAwIDA4IDAwDQpb
ICAyODUuNzQzMTAxXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3Ig
MTk1NTE0ODkNClsgIDI4NS43NDkyNDJdIHF1aWV0X2Vycm9yOiA1NiBjYWxsYmFja3Mgc3Vw
cHJlc3NlZA0KWyAgMjg1Ljc1NTM3M10gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0t
MCwgbG9naWNhbCBibG9jayAwDQpbICAyODUuNzYxNTE4XSBsb3N0IHBhZ2Ugd3JpdGUgZHVl
IHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyODcuMTg5Nzk3XSBCbHVldG9vdGg6IGhjaTAg
Y29tbWFuZCAweDA0MDYgdHggdGltZW91dA0KWyAgMzA0LjMwMTMwM10gQlVHOiBzb2Z0IGxv
Y2t1cCAtIENQVSMyIHN0dWNrIGZvciAyMnMhIFtiYXNoOjk3MTRdDQpbICAzMDQuMzA3NDcz
XSBNb2R1bGVzIGxpbmtlZCBpbjoNClsgIDMwNC4zMTM1MjddIGlycSBldmVudCBzdGFtcDog
Mjc4NzQyDQpbICAzMDQuMzE5NTExXSBoYXJkaXJxcyBsYXN0ICBlbmFibGVkIGF0ICgyNzg3
NDEpOiBbPGZmZmZmZmZmODFhYTBiMzM+XSByZXN0b3JlX2FyZ3MrMHgwLzB4MzANClsgIDMw
NC4zMjU2MzldIGhhcmRpcnFzIGxhc3QgZGlzYWJsZWQgYXQgKDI3ODc0Mik6IFs8ZmZmZmZm
ZmY4MWFhMTAxNj5dIGVycm9yX3N0aSsweDUvMHg2DQpbICAzMDQuMzMxNzU1XSBzb2Z0aXJx
cyBsYXN0ICBlbmFibGVkIGF0ICgyNzg3NDApOiBbPGZmZmZmZmZmODEwYTlkZjE+XSBfX2Rv
X3NvZnRpcnErMHgxOTEvMHgyMjANClsgIDMwNC4zMzc4ODVdIHNvZnRpcnFzIGxhc3QgZGlz
YWJsZWQgYXQgKDI3ODcyNSk6IFs8ZmZmZmZmZmY4MTBhYTJlMj5dIGlycV9leGl0KzB4YTIv
MHhkMA0KWyAgMzA0LjM0MzkzNl0gQ1BVOiAyIFBJRDogOTcxNCBDb21tOiBiYXNoIE5vdCB0
YWludGVkIDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwrICMxDQpbICAzMDQuMzQ5OTk0
XSBIYXJkd2FyZSBuYW1lOiBNU0kgTVMtNzY0MC84OTBGWEEtR0Q3MCAoTVMtNzY0MCkgICwg
QklPUyBWMS44QjEgMDkvMTMvMjAxMA0KWyAgMzA0LjM1NjEwOF0gdGFzazogZmZmZjg4MDA1
Mjg2NjM2MCB0aTogZmZmZjg4MDA0Y2Y4ODAwMCB0YXNrLnRpOiBmZmZmODgwMDRjZjg4MDAw
DQpbICAzMDQuMzYyMjg2XSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMTA5YTVhPl0gIFs8ZmZm
ZmZmZmY4MTEwOWE1YT5dIGdlbmVyaWNfZXhlY19zaW5nbGUrMHg4YS8weGMwDQpbICAzMDQu
MzY4NTY1XSBSU1A6IGUwMmI6ZmZmZjg4MDA0Y2Y4OWE4OCAgRUZMQUdTOiAwMDAwMDIwMg0K
WyAgMzA0LjM3NDgxMF0gUkFYOiBmZmZmODgwMDUyODY2MzYwIFJCWDogZmZmZjg4MDA1ZjYx
NDA0MCBSQ1g6IDAwMDAwMDAwMDAwMDAwMDYNClsgIDMwNC4zODExMjFdIFJEWDogMDAwMDAw
MDAwMDAwMDAwNiBSU0k6IGZmZmY4ODAwNTI4NjZhMzggUkRJOiAwMDAwMDAwMDAwMDAwMjAw
DQpbICAzMDQuMzg3MzkzXSBSQlA6IGZmZmY4ODAwNGNmODlhYzggUjA4OiAwMDAwMDAwMDAw
MDAwMDA2IFIwOTogMDAwMDAwMDAwMDAwMDAwMA0KWyAgMzA0LjM5MzY2MF0gUjEwOiAwMDAw
MDAwMDAwMDAwMDAxIFIxMTogMDAwMDAwMDAwMDAwMDAwMCBSMTI6IGZmZmY4ODAwNGNmODlh
ZjANClsgIDMwNC4zOTk5MTVdIFIxMzogMDAwMDAwMDAwMDAwMDAwMSBSMTQ6IDAwMDAwMDAw
MDAwMDAwMDAgUjE1OiBmZmZmODgwMDVmNjE0MDUwDQpbICAzMDQuNDA2MTI5XSBGUzogIDAw
MDA3ZjQwN2I0OWY3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1ZjY4MDAwMCgwMDAwKSBrbmxHUzow
MDAwMDAwMDAwMDAwMDAwDQpbICAzMDQuNDEyNDQ0XSBDUzogIGUwMzMgRFM6IDAwMDAgRVM6
IDAwMDAgQ1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAzMDQuNDE4ODM2XSBDUjI6IDAwMDA3
ZjQwN2IwNmIwNzAgQ1IzOiAwMDAwMDAwMDRjZGQ1MDAwIENSNDogMDAwMDAwMDAwMDAwMDY2
MA0KWyAgMzA0LjQyNTMyMV0gU3RhY2s6DQpbICAzMDQuNDMxNzMzXSAgMDAwMDAwMDAwMDAw
MDIwMCBmZmZmODgwMDRjZDMzYWYwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KWyAgMzA0LjQzODM1N10gIDAwMDAwMDAwMDAwMDAwMDIgZmZmZmZmZmY4MjJlNzMwMCBm
ZmZmZmZmZjgxMDA3OTgwIDAwMDAwMDAwMDAwMDAwMDINClsgIDMwNC40NDQ5NzRdICBmZmZm
ODgwMDRjZjg5YjM4IGZmZmZmZmZmODExMDljYzUgZmZmZjg4MDA1NzdiNDg3OCBmZmZmZmZm
ZjgxMDA3ZDYwDQpbICAzMDQuNDUxNjIyXSBDYWxsIFRyYWNlOg0KWyAgMzA0LjQ1ODE5OF0g
IFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24r
MHgxNjAvMHgxNjANClsgIDMwNC40NjQ5MzhdICBbPGZmZmZmZmZmODExMDljYzU+XSBzbXBf
Y2FsbF9mdW5jdGlvbl9zaW5nbGUrMHhlNS8weDFlMA0KWyAgMzA0LjQ3MTcyNl0gIFs8ZmZm
ZmZmZmY4MTAwN2Q2MD5dID8geGVuX3Bpbl9wYWdlKzB4MTEwLzB4MTIwDQpbICAzMDQuNDc4
NDkwXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3Jl
Z2lvbisweDE2MC8weDE2MA0KWyAgMzA0LjQ4NTI5M10gIFs8ZmZmZmZmZmY4MTEwYTAzYT5d
IHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkrMHgyN2EvMHgyYTANClsgIDMwNC40OTIwODVdICBb
PGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4
MTYwLzB4MTYwDQpbICAzMDQuNDk4OTA5XSAgWzxmZmZmZmZmZjgxMDA4NDFlPl0geGVuX2V4
aXRfbW1hcCsweGNlLzB4MWEwDQpbICAzMDQuNTA1NzI2XSAgWzxmZmZmZmZmZjgxMTY5NDI2
Pl0gZXhpdF9tbWFwKzB4NTYvMHgxODANClsgIDMwNC41MTI1NDVdICBbPGZmZmZmZmZmODEw
ZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMzA0LjUxOTM4NF0gIFs8
ZmZmZmZmZmY4MTFkZGRlMD5dID8gZXhpdF9haW8rMHhiMC8weGUwDQpbICAzMDQuNTI2MDM3
XSAgWzxmZmZmZmZmZjgxMWRkZDQ0Pl0gPyBleGl0X2FpbysweDE0LzB4ZTANClsgIDMwNC41
MzI0NjVdICBbPGZmZmZmZmZmODEwYTI2ODk+XSBtbXB1dCsweDU5LzB4ZTANClsgIDMwNC41
Mzg4MjldICBbPGZmZmZmZmZmODExOWEzYTk+XSBmbHVzaF9vbGRfZXhlYysweDQzOS8weDgz
MA0KWyAgMzA0LjU0NTE5M10gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dIGxvYWRfZWxmX2JpbmFy
eSsweDMyYS8weDFhMDANClsgIDMwNC41NTE1NThdICBbPGZmZmZmZmZmODFhOWZmZTY+XSA/
IF9yYXdfcmVhZF91bmxvY2srMHgyNi8weDMwDQpbICAzMDQuNTU3ODk0XSAgWzxmZmZmZmZm
ZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMzA0LjU2NDI1MV0g
IFs8ZmZmZmZmZmY4MTE5OTI0Mz5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4YzMvMHgx
YjANClsgIDMwNC41NzA1OTldICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWly
ZSsweGVjLzB4MTEwDQpbICAzMDQuNTc2OTAyXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBs
b2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDMwNC41ODMxNzddICBbPGZmZmZmZmZmODEx
OTkyMDQ+XSBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAgMzA0LjU4OTQ1
Ml0gIFs8ZmZmZmZmZmY4MTE5YjI1Mj5dIGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDU5
Mi8weDcxMA0KWyAgMzA0LjU5NTcyM10gIFs8ZmZmZmZmZmY4MTE5YjFlNz5dID8gZG9fZXhl
Y3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAzMDQuNjAxODYzXSAgWzxmZmZm
ZmZmZjgxMTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjANClsgIDMwNC42
MDc4NzNdICBbPGZmZmZmZmZmODExOWIzZTM+XSBkb19leGVjdmUrMHgxMy8weDIwDQpbICAz
MDQuNjEzODAyXSAgWzxmZmZmZmZmZjgxMTliNjU4Pl0gU3lTX2V4ZWN2ZSsweDM4LzB4NjAN
ClsgIDMwNC42MTk3NDNdICBbPGZmZmZmZmZmODFhYTE5ZTk+XSBzdHViX2V4ZWN2ZSsweDY5
LzB4YTANClsgIDMwNC42MjU2NDNdIENvZGU6IDg5IGRhIGU4IGVhIGMxIDMyIDAwIDQ4IDhi
IDQ1IGMwIDRjIDg5IGZmIDQ4IDg5IGM2IGU4IGJiIDY3IDk5IDAwIDQ4IDNiIDVkIGM4IDc0
IDJkIDQ1IDg1IGVkIDc1IDBhIGViIDEwIDY2IDBmIDFmIDQ0IDAwIDAwIGYzIDkwIDw0MT4g
ZjYgNDQgMjQgMjAgMDEgNzUgZjYgNDggOGIgNWQgZDggNGMgOGIgNjUgZTAgNGMgOGIgNmQg
ZTggNGMgDQpbICAzMDguMjk5MzE4XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzMgc3R1Y2sg
Zm9yIDIzcyEgW2Jhc2g6OTcxOF0NClsgIDMwOC4zMDU1NjZdIE1vZHVsZXMgbGlua2VkIGlu
Og0KWyAgMzA4LjMxMTc0NF0gaXJxIGV2ZW50IHN0YW1wOiAyOTA5NjQNClsgIDMwOC4zMTc4
ODZdIGhhcmRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDI5MDk2Myk6IFs8ZmZmZmZmZmY4MWFh
MGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzA4LjMyNDIxMF0gaGFyZGlycXMg
bGFzdCBkaXNhYmxlZCBhdCAoMjkwOTY0KTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jf
c3RpKzB4NS8weDYNClsgIDMwOC4zMzA0OTddIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQg
KDI5MDk2Mik6IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIy
MA0KWyAgMzA4LjMzNjc3MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMjkwOTQ3KTog
WzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAzMDguMzQzMDA3
XSBDUFU6IDMgUElEOiA5NzE4IENvbW06IGJhc2ggTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDMwOC4zNDkzMzJdIEhhcmR3YXJlIG5hbWU6IE1T
SSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8y
MDEwDQpbICAzMDguMzU1Njc1XSB0YXNrOiBmZmZmODgwMDU4ZGVkMmQwIHRpOiBmZmZmODgw
MDRjZjU2MDAwIHRhc2sudGk6IGZmZmY4ODAwNGNmNTYwMDANClsgIDMwOC4zNjIwODVdIFJJ
UDogZTAzMDpbPGZmZmZmZmZmODExMDlhNTg+XSAgWzxmZmZmZmZmZjgxMTA5YTU4Pl0gZ2Vu
ZXJpY19leGVjX3NpbmdsZSsweDg4LzB4YzANClsgIDMwOC4zNjg2MzBdIFJTUDogZTAyYjpm
ZmZmODgwMDRjZjU3YTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpbICAzMDguMzc1MTczXSBSQVg6
IGZmZmY4ODAwNThkZWQyZDAgUkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAw
MDAwMDAwNg0KWyAgMzA4LjM4MTcxNV0gUkRYOiAwMDAwMDAwMDAwMDAwMDA2IFJTSTogZmZm
Zjg4MDA1OGRlZDlhOCBSREk6IDAwMDAwMDAwMDAwMDAyMDANClsgIDMwOC4zODgxNjddIFJC
UDogZmZmZjg4MDA0Y2Y1N2FjOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDYgUjA5OiAwMDAwMDAw
MDAwMDAwMDAwDQpbICAzMDguMzk0NTM2XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAw
MDAwMDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDA0Y2Y1N2FmMA0KWyAgMzA4LjQwMDgxMl0g
UjEzOiAwMDAwMDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4
ODAwNWY2MTQwNTANClsgIDMwOC40MDY5NjZdIEZTOiAgMDAwMDdmZTNjYTk4NzcwMCgwMDAw
KSBHUzpmZmZmODgwMDVmNmMwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsg
IDMwOC40MTMxNTldIENTOiAgZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAw
ODAwNTAwM2INClsgIDMwOC40MTkzNjJdIENSMjogMDAwMDdmOTU3NzU3MGYzMCBDUjM6IDAw
MDAwMDAwNGNkYzIwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAzMDguNDI1NjQzXSBT
dGFjazoNClsgIDMwOC40MzE4NTNdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNGNkMzNh
ZjAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQpbICAzMDguNDM4MzIzXSAg
MDAwMDAwMDAwMDAwMDAwMyBmZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAw
MDAwMDAwMDAwMDAwMw0KWyAgMzA4LjQ0NDcwNl0gIGZmZmY4ODAwNGNmNTdiMzggZmZmZmZm
ZmY4MTEwOWNjNSBmZmZmODgwMDU3NTg1MTc4IGZmZmZmZmZmODEwMDdkNjANClsgIDMwOC40
NTEwNjJdIENhbGwgVHJhY2U6DQpbICAzMDguNDU3MzA1XSAgWzxmZmZmZmZmZjgxMDA3OTgw
Pl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMzA4
LjQ2MzY2NF0gIFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3Npbmds
ZSsweGU1LzB4MWUwDQpbICAzMDguNDY5OTY4XSAgWzxmZmZmZmZmZjgxMDA3ZDYwPl0gPyB4
ZW5fcGluX3BhZ2UrMHgxMTAvMHgxMjANClsgIDMwOC40NzYyNzVdICBbPGZmZmZmZmZmODEw
MDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpb
ICAzMDguNDgyNjAxXSAgWzxmZmZmZmZmZjgxMTBhMDNhPl0gc21wX2NhbGxfZnVuY3Rpb25f
bWFueSsweDI3YS8weDJhMA0KWyAgMzA4LjQ4ODg3MV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5d
ID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDMwOC40
OTUxMjFdICBbPGZmZmZmZmZmODEwMDg0MWU+XSB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTAN
ClsgIDMwOC41MDEzMzVdICBbPGZmZmZmZmZmODExNjk0MjY+XSBleGl0X21tYXArMHg1Ni8w
eDE4MA0KWyAgMzA4LjUwNzUzMF0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxl
YXNlKzB4MTJhLzB4MjUwDQpbICAzMDguNTEzNzA0XSAgWzxmZmZmZmZmZjgxMWRkZGUwPl0g
PyBleGl0X2FpbysweGIwLzB4ZTANClsgIDMwOC41MTk4MzNdICBbPGZmZmZmZmZmODExZGRk
NDQ+XSA/IGV4aXRfYWlvKzB4MTQvMHhlMA0KWyAgMzA4LjUyNTg3N10gIFs8ZmZmZmZmZmY4
MTBhMjY4OT5dIG1tcHV0KzB4NTkvMHhlMA0KWyAgMzA4LjUzMTg2MF0gIFs8ZmZmZmZmZmY4
MTE5YTNhOT5dIGZsdXNoX29sZF9leGVjKzB4NDM5LzB4ODMwDQpbICAzMDguNTM3ODY5XSAg
WzxmZmZmZmZmZjgxMWU4Y2NhPl0gbG9hZF9lbGZfYmluYXJ5KzB4MzJhLzB4MWEwMA0KWyAg
MzA4LjU0Mzg4MV0gIFs8ZmZmZmZmZmY4MWE5ZmZlNj5dID8gX3Jhd19yZWFkX3VubG9jaysw
eDI2LzB4MzANClsgIDMwOC41NDk5MTNdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tf
YWNxdWlyZSsweGVjLzB4MTEwDQpbICAzMDguNTU1OTg5XSAgWzxmZmZmZmZmZjgxMTk5MjQz
Pl0gPyBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHhjMy8weDFiMA0KWyAgMzA4LjU2MjA5N10g
IFs8ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDMw
OC41NjgyMDNdICBbPGZmZmZmZmZmODEwZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8w
eDI1MA0KWyAgMzA4LjU3NDI3OF0gIFs8ZmZmZmZmZmY4MTE5OTIwND5dIHNlYXJjaF9iaW5h
cnlfaGFuZGxlcisweDg0LzB4MWIwDQpbICAzMDguNTgwMzgxXSAgWzxmZmZmZmZmZjgxMTli
MjUyPl0gZG9fZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTkyLzB4NzEwDQpbICAzMDguNTg2
NTAyXSAgWzxmZmZmZmZmZjgxMTliMWU3Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzEr
MHg1MjcvMHg3MTANClsgIDMwOC41OTI2MjldICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGtt
ZW1fY2FjaGVfYWxsb2MrMHhiNS8weDEyMA0KWyAgMzA4LjU5ODc0NF0gIFs8ZmZmZmZmZmY4
MTE5YjNlMz5dIGRvX2V4ZWN2ZSsweDEzLzB4MjANClsgIDMwOC42MDQ4ODBdICBbPGZmZmZm
ZmZmODExOWI2NTg+XSBTeVNfZXhlY3ZlKzB4MzgvMHg2MA0KWyAgMzA4LjYxMDk4MF0gIFs8
ZmZmZmZmZmY4MWFhMTllOT5dIHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMzA4LjYxNzA1
Nl0gQ29kZTogYzggNDggODkgZGEgZTggZWEgYzEgMzIgMDAgNDggOGIgNDUgYzAgNGMgODkg
ZmYgNDggODkgYzYgZTggYmIgNjcgOTkgMDAgNDggM2IgNWQgYzggNzQgMmQgNDUgODUgZWQg
NzUgMGEgZWIgMTAgNjYgMGYgMWYgNDQgMDAgMDAgPGYzPiA5MCA0MSBmNiA0NCAyNCAyMCAw
MSA3NSBmNiA0OCA4YiA1ZCBkOCA0YyA4YiA2NSBlMCA0YyA4YiA2ZCANClsgIDMxMi4yOTcz
MjFdIEJVRzogc29mdCBsb2NrdXAgLSBDUFUjNCBzdHVjayBmb3IgMjJzISBbZGhjbGllbnQt
c2NyaXB0Ojk3MjNdDQpbICAzMTIuMjk3MzI0XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzEg
c3R1Y2sgZm9yIDIycyEgW3hlbmRvbWFpbnM6OTY3OV0NClsgIDMxMi4yOTczMjZdIE1vZHVs
ZXMgbGlua2VkIGluOg0KWyAgMzEyLjI5NzMzMV0gaXJxIGV2ZW50IHN0YW1wOiAzMzE2NDgN
ClsgIDMxMi4yOTczMzRdIGhhcmRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDMzMTY0Nyk6IFs8
ZmZmZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzEyLjI5NzMz
N10gaGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMzMxNjQ4KTogWzxmZmZmZmZmZjgxYWEx
MDE2Pl0gZXJyb3Jfc3RpKzB4NS8weDYNClsgIDMxMi4yOTczMzldIHNvZnRpcnFzIGxhc3Qg
IGVuYWJsZWQgYXQgKDMzMTY0Nik6IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGly
cSsweDE5MS8weDIyMA0KWyAgMzEyLjI5NzM0MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBh
dCAoMzMxNjQxKTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpb
ICAzMTIuMjk3MzQyXSBDUFU6IDEgUElEOiA5Njc5IENvbW06IHhlbmRvbWFpbnMgTm90IHRh
aW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDMxMi4yOTczNDNd
IEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBC
SU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAzMTIuMjk3MzQzXSB0YXNrOiBmZmZmODgwMDU4
ZGVjMjQwIHRpOiBmZmZmODgwMDRjZDMyMDAwIHRhc2sudGk6IGZmZmY4ODAwNGNkMzIwMDAN
ClsgIDMxMi4yOTczNDVdIFJJUDogZTAzMDpbPGZmZmZmZmZmODExMDlhNWE+XSAgWzxmZmZm
ZmZmZjgxMTA5YTVhPl0gZ2VuZXJpY19leGVjX3NpbmdsZSsweDhhLzB4YzANClsgIDMxMi4y
OTczNDZdIFJTUDogZTAyYjpmZmZmODgwMDRjZDMzYTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpb
ICAzMTIuMjk3MzQ3XSBSQVg6IDAwMDAwMDAwMDAwMDAwMDggUkJYOiBmZmZmODgwMDVmNjE0
MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAzOA0KWyAgMzEyLjI5NzM0N10gUkRYOiAwMDAwMDAw
MDAwMDAwMGZmIFJTSTogMDAwMDAwMDAwMDAwMDAwOCBSREk6IDAwMDAwMDAwMDAwMDAwMDgN
ClsgIDMxMi4yOTczNDhdIFJCUDogZmZmZjg4MDA0Y2QzM2FjOCBSMDg6IGZmZmZmZmZmODFj
MGQ0NjggUjA5OiAwMDAwMDAwMDAwMDAwMDAwDQpbICAzMTIuMjk3MzQ4XSBSMTA6IDAwMDAw
MDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDA0Y2QzM2Fm
MA0KWyAgMzEyLjI5NzM0OV0gUjEzOiAwMDAwMDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAw
MDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQwNTANClsgIDMxMi4yOTczNTFdIEZTOiAgMDAw
MDdmMTUyZWM0ZDcwMCgwMDAwKSBHUzpmZmZmODgwMDVmNjQwMDAwKDAwMDApIGtubEdTOjAw
MDAwMDAwMDAwMDAwMDANClsgIDMxMi4yOTczNTJdIENTOiAgZTAzMyBEUzogMDAwMCBFUzog
MDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDMxMi4yOTczNThdIENSMjogMDAwMDdm
MTUyZTJhMWUwMiBDUjM6IDAwMDAwMDAwNGNkMmEwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYw
DQpbICAzMTIuMjk3MzYyXSBTdGFjazoNClsgIDMxMi4yOTczNjhdICAwMDAwMDAwMDAwMDAw
MjAwIGZmZmY4ODAwNWY2MTQwNDAgMDAwMDAwMDAwMDAwMDAwNiAwMDAwMDAwMDAwMDAwMDAw
DQpbICAzMTIuMjk3MzcwXSAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmZmZmZjgyMmU3MzAwIGZm
ZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAwMDAwMQ0KWyAgMzEyLjI5NzM3MV0gIGZmZmY4
ODAwNGNkMzNiMzggZmZmZmZmZmY4MTEwOWNjNSBmZmZmZmZmZjgxYWEwYjMzIGZmZmY4ODAw
Y2VkNTYwMDANClsgIDMxMi4yOTczNzJdIENhbGwgVHJhY2U6DQpbICAzMTIuMjk3Mzc0XSAg
WzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisw
eDE2MC8weDE2MA0KWyAgMzEyLjI5NzM3Nl0gIFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9j
YWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUwDQpbICAzMTIuMjk3Mzc4XSAgWzxmZmZm
ZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9yZV9hcmdzKzB4MTMvMHgxMw0KWyAgMzEy
LjI5NzM4MF0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91
c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDMxMi4yOTczODFdICBbPGZmZmZmZmZmODExMGEw
M2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAzMTIuMjk3Mzgz
XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lv
bisweDE2MC8weDE2MA0KWyAgMzEyLjI5NzM4NV0gIFs8ZmZmZmZmZmY4MTAwODQxZT5dIHhl
bl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMzEyLjI5NzM4N10gIFs8ZmZmZmZmZmY4MTAw
MTIyYT5dID8geGVuX2h5cGVyY2FsbF94ZW5fdmVyc2lvbisweGEvMHgyMA0KWyAgMzEyLjI5
NzM4OV0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5dIGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAz
MTIuMjk3MzkxXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEv
MHgyNTANClsgIDMxMi4yOTczOTNdICBbPGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlv
KzB4YjAvMHhlMA0KWyAgMzEyLjI5NzM5NV0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhp
dF9haW8rMHgxNC8weGUwDQpbICAzMTIuMjk3Mzk3XSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0g
bW1wdXQrMHg1OS8weGUwDQpbICAzMTIuMjk3Mzk5XSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0g
Zmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzANClsgIDMxMi4yOTc0MDBdICBbPGZmZmZmZmZm
ODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAzMTIuMjk3NDAz
XSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0K
WyAgMzEyLjI5NzQwNV0gIFs8ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4
ZWMvMHgxMTANClsgIDMxMi4yOTc0MDZdICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJj
aF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIwDQpbICAzMTIuMjk3NDA4XSAgWzxmZmZmZmZm
ZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMzEyLjI5NzQxMF0g
IFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAz
MTIuMjk3NDEyXSAgWzxmZmZmZmZmZjgxMTk5MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVy
KzB4ODQvMHgxYjANClsgIDMxMi4yOTc0MTNdICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19l
eGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIvMHg3MTANClsgIDMxMi4yOTc0MTVdICBbPGZm
ZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcx
MA0KWyAgMzEyLjI5NzQxN10gIFs8ZmZmZmZmZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9h
bGxvYysweGI1LzB4MTIwDQpbICAzMTIuMjk3NDE4XSAgWzxmZmZmZmZmZjgxMTliM2UzPl0g
ZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMzEyLjI5NzQyMF0gIFs8ZmZmZmZmZmY4MTE5YjY1
OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpbICAzMTIuMjk3NDIyXSAgWzxmZmZmZmZmZjgx
YWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8weGEwDQpbICAzMTIuMjk3NDM3XSBDb2RlOiA4
OSBkYSBlOCBlYSBjMSAzMiAwMCA0OCA4YiA0NSBjMCA0YyA4OSBmZiA0OCA4OSBjNiBlOCBi
YiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBlZCA3NSAwYSBlYiAxMCA2NiAw
ZiAxZiA0NCAwMCAwMCBmMyA5MCA8NDE+IGY2IDQ0IDI0IDIwIDAxIDc1IGY2IDQ4IDhiIDVk
IGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIA0KWyAgMzEyLjYzMTM1OV0gTW9kdWxl
cyBsaW5rZWQgaW46DQpbICAzMTIuNjM3NDc0XSBpcnEgZXZlbnQgc3RhbXA6IDY1NjEwDQpb
ICAzMTIuNjQzNDA2XSBoYXJkaXJxcyBsYXN0ICBlbmFibGVkIGF0ICg2NTYwOSk6IFs8ZmZm
ZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzEyLjY0OTM2OV0g
aGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoNjU2MTApOiBbPGZmZmZmZmZmODFhYTEwMTY+
XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMzEyLjY1NTI0MV0gc29mdGlycXMgbGFzdCAgZW5h
YmxlZCBhdCAoNjU2MDgpOiBbPGZmZmZmZmZmODEwYTlkZjE+XSBfX2RvX3NvZnRpcnErMHgx
OTEvMHgyMjANClsgIDMxMi42NjExNzJdIHNvZnRpcnFzIGxhc3QgZGlzYWJsZWQgYXQgKDY1
NjAzKTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAzMTIu
NjY3MDU3XSBDUFU6IDQgUElEOiA5NzIzIENvbW06IGRoY2xpZW50LXNjcmlwdCBOb3QgdGFp
bnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMzEyLjY3MzA3Nl0g
SGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJ
T1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDMxMi42NzkwOTNdIHRhc2s6IGZmZmY4ODAwNTI4
MDUyZDAgdGk6IGZmZmY4ODAwNGM0ODgwMDAgdGFzay50aTogZmZmZjg4MDA0YzQ4ODAwMA0K
WyAgMzEyLjY4NTE5NV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTEwOWE1OD5dICBbPGZmZmZm
ZmZmODExMDlhNTg+XSBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4ODgvMHhjMA0KWyAgMzEyLjY5
MTQyOF0gUlNQOiBlMDJiOmZmZmY4ODAwNGM0ODlhODggIEVGTEFHUzogMDAwMDAyMDINClsg
IDMxMi42OTc2NjRdIFJBWDogZmZmZjg4MDA1MjgwNTJkMCBSQlg6IGZmZmY4ODAwNWY2MTQw
NDAgUkNYOiAwMDAwMDAwMDAwMDAwMDA2DQpbICAzMTIuNzAzOTA4XSBSRFg6IDAwMDAwMDAw
MDAwMDAwMDYgUlNJOiBmZmZmODgwMDUyODA1OWE4IFJESTogMDAwMDAwMDAwMDAwMDIwMA0K
WyAgMzEyLjcxMDA2MV0gUkJQOiBmZmZmODgwMDRjNDg5YWM4IFIwODogMDAwMDAwMDAwMDAw
MDAwNiBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsgIDMxMi43MTYxNDRdIFIxMDogMDAwMDAw
MDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAwMDAgUjEyOiBmZmZmODgwMDRjNDg5YWYw
DQpbICAzMTIuNzIyMTM3XSBSMTM6IDAwMDAwMDAwMDAwMDAwMDEgUjE0OiAwMDAwMDAwMDAw
MDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0KWyAgMzEyLjcyODAxMV0gRlM6ICAwMDAw
N2Y1NzIyYzU2NzAwKDAwMDApIEdTOmZmZmY4ODAwNWY3MDAwMDAoMDAwMCkga25sR1M6MDAw
MDAwMDAwMDAwMDAwMA0KWyAgMzEyLjczMzkzM10gQ1M6ICBlMDMzIERTOiAwMDAwIEVTOiAw
MDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAgMzEyLjczOTg4OV0gQ1IyOiAwMDAwN2Y2
NjUwNDI4ZjMwIENSMzogMDAwMDAwMDA0Y2Y5MzAwMCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjAN
ClsgIDMxMi43NDU5MzBdIFN0YWNrOg0KWyAgMzEyLjc1MTg3OV0gIDAwMDAwMDAwMDAwMDAy
MDAgZmZmZjg4MDA0Y2QzM2FmMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAN
ClsgIDMxMi43NTgwOTVdICAwMDAwMDAwMDAwMDAwMDA0IGZmZmZmZmZmODIyZTczMDAgZmZm
ZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDA0DQpbICAzMTIuNzY0MjM0XSAgZmZmZjg4
MDA0YzQ4OWIzOCBmZmZmZmZmZjgxMTA5Y2M1IGZmZmY4ODAwNTc0NTk2ZjggZmZmZmZmZmY4
MTAwN2Q2MA0KWyAgMzEyLjc3MDM1NF0gQ2FsbCBUcmFjZToNClsgIDMxMi43NzYzNzRdICBb
PGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4
MTYwLzB4MTYwDQpbICAzMTIuNzgyNTAyXSAgWzxmZmZmZmZmZjgxMTA5Y2M1Pl0gc21wX2Nh
bGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTANClsgIDMxMi43ODg1OTVdICBbPGZmZmZm
ZmZmODEwMDdkNjA+XSA/IHhlbl9waW5fcGFnZSsweDExMC8weDEyMA0KWyAgMzEyLjc5NDY4
MF0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdp
b24rMHgxNjAvMHgxNjANClsgIDMxMi44MDA4MTBdICBbPGZmZmZmZmZmODExMGEwM2E+XSBz
bXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAzMTIuODA2ODkyXSAgWzxm
ZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2
MC8weDE2MA0KWyAgMzEyLjgxMjk2MF0gIFs8ZmZmZmZmZmY4MTAwODQxZT5dIHhlbl9leGl0
X21tYXArMHhjZS8weDFhMA0KWyAgMzEyLjgxODk5OF0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5d
IGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAzMTIuODI1MDA5XSAgWzxmZmZmZmZmZjgxMGU3
MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDMxMi44MzEwMDZdICBbPGZm
ZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMzEyLjgzNjk1OF0g
IFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAzMTIuODQy
ODMwXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpbICAzMTIuODQ4
NjUzXSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzAN
ClsgIDMxMi44NTQ0OTFdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkr
MHgzMmEvMHgxYTAwDQpbICAzMTIuODYwMzI2XSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBf
cmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMzEyLjg2NjE4NF0gIFs8ZmZmZmZmZmY4
MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDMxMi44NzIwMzVdICBb
PGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIw
DQpbICAzMTIuODc3OTc5XSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUr
MHhlYy8weDExMA0KWyAgMzEyLjg4Mzk0MF0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9j
a19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAzMTIuODg5ODgwXSAgWzxmZmZmZmZmZjgxMTk5
MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDMxMi44OTU4NDFd
ICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIv
MHg3MTANClsgIDMxMi45MDE4MjZdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2
ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMzEyLjkwNzg0N10gIFs8ZmZmZmZm
ZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpbICAzMTIuOTEz
ODU3XSAgWzxmZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMzEy
LjkxOTg0M10gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpb
ICAzMTIuOTI1ODQwXSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8w
eGEwDQpbICAzMTIuOTMxODIyXSBDb2RlOiBjOCA0OCA4OSBkYSBlOCBlYSBjMSAzMiAwMCA0
OCA4YiA0NSBjMCA0YyA4OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBj
OCA3NCAyZCA0NSA4NSBlZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCA8ZjM+IDkw
IDQxIGY2IDQ0IDI0IDIwIDAxIDc1IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhi
IDZkIA0K
------------12706E23E275304AE
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------12706E23E275304AE--



From xen-devel-bounces@lists.xen.org Tue Jan 07 11:54:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VEQ-0000PL-4D; Tue, 07 Jan 2014 11:54:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0VEM-0000PB-P0
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:53:59 +0000
Received: from [85.158.137.68:61143] by server-15.bemta-3.messagelabs.com id
	E2/02-11556-6DAEBC25; Tue, 07 Jan 2014 11:53:58 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389095636!7688483!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26751 invoked from network); 7 Jan 2014 11:53:56 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 11:53:56 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:52998 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0V3c-0005kn-7T; Tue, 07 Jan 2014 12:42:52 +0100
Date: Tue, 7 Jan 2014 12:53:52 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1536712177.20140107125352@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------12706E23E275304AE"
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup -
	CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------12706E23E275304AE
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi Konrad,

A new year and a new Linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
But dom0 seems to blow up for me (without this branch pulled it works OK).

Xen: latest xen-unstable
Linux: latest 3.13-rc7+ with devel/for-linus-3.14 branch pulled on top of it.

The complete serial log is attached.

--
Sander

[  196.354896] BUG: soft lockup - CPU#1 stuck for 22s! [xendomains:9679]
[  196.362706] Modules linked in:
[  196.370360] irq event stamp: 61738
[  196.377871] hardirqs last  enabled at (61737): [<ffffffff81aa0b33>] restore_args+0x0/0x30
[  196.385465] hardirqs last disabled at (61738): [<ffffffff81aa1016>] error_sti+0x5/0x6
[  196.392910] softirqs last  enabled at (61736): [<ffffffff810a9df1>] __do_softirq+0x191/0x220
[  196.400192] softirqs last disabled at (61731): [<ffffffff810aa2e2>] irq_exit+0xa2/0xd0
[  196.407263] CPU: 1 PID: 9679 Comm: xendomains Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  196.414303] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  196.421273] task: ffff880058dec240 ti: ffff88004cd32000 task.ti: ffff88004cd32000
[  196.428263] RIP: e030:[<ffffffff81109a58>]  [<ffffffff81109a58>] generic_exec_single+0x88/0xc0
[  196.435323] RSP: e02b:ffff88004cd33a88  EFLAGS: 00000202
[  196.442170] RAX: 0000000000000008 RBX: ffff88005f614040 RCX: 0000000000000038
[  196.448898] RDX: 00000000000000ff RSI: 0000000000000008 RDI: 0000000000000008
[  196.455578] RBP: ffff88004cd33ac8 R08: ffffffff81c0d468 R09: 0000000000000000
[  196.462283] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88004cd33af0
[  196.468994] R13: 0000000000000001 R14: 0000000000000000 R15: ffff88005f614050
[  196.475704] FS:  00007f152ec4d700(0000) GS:ffff88005f640000(0000) knlGS:0000000000000000
[  196.482492] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  196.489229] CR2: 00007f152e2a1e02 CR3: 000000004cd2a000 CR4: 0000000000000660
[  196.495906] Stack:
[  196.502595]  0000000000000200 ffff88005f614040 0000000000000006 0000000000000000
[  196.509412]  0000000000000001 ffffffff822e7300 ffffffff81007980 0000000000000001
[  196.516266]  ffff88004cd33b38 ffffffff81109cc5 ffffffff81aa0b33 ffff8800ced56000
[  196.523112] Call Trace:
[  196.529769]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.536519]  [<ffffffff81109cc5>] smp_call_function_single+0xe5/0x1e0
[  196.543374]  [<ffffffff81aa0b33>] ? retint_restore_args+0x13/0x13
[  196.550098]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.556833]  [<ffffffff8110a03a>] smp_call_function_many+0x27a/0x2a0
[  196.563591]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.570240]  [<ffffffff8100841e>] xen_exit_mmap+0xce/0x1a0
[  196.576791]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
[  196.583449]  [<ffffffff81169426>] exit_mmap+0x56/0x180
[  196.590020]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.596504]  [<ffffffff811ddde0>] ? exit_aio+0xb0/0xe0
[  196.602900]  [<ffffffff811ddd44>] ? exit_aio+0x14/0xe0
[  196.609179]  [<ffffffff810a2689>] mmput+0x59/0xe0
[  196.615475]  [<ffffffff8119a3a9>] flush_old_exec+0x439/0x830
[  196.621761]  [<ffffffff811e8cca>] load_elf_binary+0x32a/0x1a00
[  196.627908]  [<ffffffff81a9ffe6>] ? _raw_read_unlock+0x26/0x30
[  196.633975]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.640137]  [<ffffffff81199243>] ? search_binary_handler+0xc3/0x1b0
[  196.646375]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.652537]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.658587]  [<ffffffff81199204>] search_binary_handler+0x84/0x1b0
[  196.664735]  [<ffffffff8119b252>] do_execve_common.isra.31+0x592/0x710
[  196.670825]  [<ffffffff8119b1e7>] ? do_execve_common.isra.31+0x527/0x710
[  196.676902]  [<ffffffff81186345>] ? kmem_cache_alloc+0xb5/0x120
[  196.682972]  [<ffffffff8119b3e3>] do_execve+0x13/0x20
[  196.689016]  [<ffffffff8119b658>] SyS_execve+0x38/0x60
[  196.694944]  [<ffffffff81aa19e9>] stub_execve+0x69/0xa0
[  196.700913] Code: c8 48 89 da e8 ea c1 32 00 48 8b 45 c0 4c 89 ff 48 89 c6 e8 bb 67 99 00 48 3b 5d c8 74 2d 45 85 ed 75 0a eb 10 66 0f 1f 44 00 00 <f3> 90 41 f6 44 24 20 01 75 f6 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d
------------12706E23E275304AE
Content-Type: application/octet-stream;
 name="serial.log"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="serial.log"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICBfICBfICAgICAgICAgICAgICAgICAgICAgIF8g
ICAgICAgIF8gICAgIF8gICAgICANCiBcIFwvIC9fX18gXyBfXyAgIHwgfHwgfCB8IHx8IHwg
ICAgIF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18gDQogIFwgIC8vIF8g
XCAnXyBcICB8IHx8IHxffCB8fCB8XyBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdf
IFx8IHwvIF8gXA0KICAvICBcICBfXy8gfCB8IHwgfF9fICAgX3xfXyAgIF98X198IHxffCB8
IHwgfCBcX18gXCB8fCAoX3wgfCB8XykgfCB8ICBfXy8NCiAvXy9cX1xfX198X3wgfF98ICAg
IHxffChfKSB8X3wgICAgIFxfXyxffF98IHxffF9fXy9cX19cX18sX3xfLl9fL3xffFxfX198
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIA0KKFhFTikgWGVuIHZlcnNpb24gNC40LXVuc3RhYmxl
IChyb290QGR5bmRucy5vcmcpIChnY2MtNC43LnJlYWwgKERlYmlhbiA0LjcuMi01KSA0Ljcu
MikgZGVidWc9eSBUdWUgSmFuICA3IDExOjUzOjMwIENFVCAyMDE0DQooWEVOKSBMYXRlc3Qg
Q2hhbmdlU2V0OiBGcmkgRGVjIDIwIDEyOjAyOjA2IDIwMTMgKzAxMDAgZ2l0OjlhODBkNTAt
ZGlydHkNCihYRU4pIEJvb3Rsb2FkZXI6IEdSVUIgMS45OS0yNytkZWI3dTINCihYRU4pIENv
bW1hbmQgbGluZTogZG9tMF9tZW09MTUzNk0sbWF4OjE1MzZNIGxvZ2x2bD1hbGwgbG9nbHZs
X2d1ZXN0PWFsbCBjb25zb2xlX3RpbWVzdGFtcHMgdmdhPWdmeC0xMjgweDEwMjR4MzIgY3B1
aWRsZSBjcHVmcmVxPXhlbiBkZWJ1ZyBsYXBpYz1kZWJ1ZyBhcGljX3ZlcmJvc2l0eT1kZWJ1
ZyBhcGljPWRlYnVnIGlvbW11PW9uLHZlcmJvc2UsZGVidWcsYW1kLWlvbW11LWRlYnVnIGl2
cnNfaW9hcGljWzZdPTAwOjE0LjAgaXZyc19ocGV0WzBdPTAwOjE0LjAgY29tMT0zODQwMCw4
bjEgY29uc29sZT12Z2EsY29tMQ0KKFhFTikgVmlkZW8gaW5mb3JtYXRpb246DQooWEVOKSAg
VkdBIGlzIGdyYXBoaWNzIG1vZGUgMTI4MHgxMDI0LCAzMiBicHANCihYRU4pICBWQkUvRERD
IG1ldGhvZHM6IG5vbmU7IEVESUQgdHJhbnNmZXIgdGltZTogMCBzZWNvbmRzDQooWEVOKSAg
RURJRCBpbmZvIG5vdCByZXRyaWV2ZWQgYmVjYXVzZSBubyBEREMgcmV0cmlldmFsIG1ldGhv
ZCBkZXRlY3RlZA0KKFhFTikgRGlzYyBpbmZvcm1hdGlvbjoNCihYRU4pICBGb3VuZCAyIE1C
UiBzaWduYXR1cmVzDQooWEVOKSAgRm91bmQgMiBFREQgaW5mb3JtYXRpb24gc3RydWN0dXJl
cw0KKFhFTikgWGVuLWU4MjAgUkFNIG1hcDoNCihYRU4pICAwMDAwMDAwMDAwMDAwMDAwIC0g
MDAwMDAwMDAwMDA5OTgwMCAodXNhYmxlKQ0KKFhFTikgIDAwMDAwMDAwMDAwOTk4MDAgLSAw
MDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMDAwMGU0MDAwIC0g
MDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDAwMDEwMDAwMCAt
IDAwMDAwMDAwOWZmOTAwMDAgKHVzYWJsZSkNCihYRU4pICAwMDAwMDAwMDlmZjkwMDAwIC0g
MDAwMDAwMDA5ZmY5ZTAwMCAoQUNQSSBkYXRhKQ0KKFhFTikgIDAwMDAwMDAwOWZmOWUwMDAg
LSAwMDAwMDAwMDlmZmUwMDAwIChBQ1BJIE5WUykNCihYRU4pICAwMDAwMDAwMDlmZmUwMDAw
IC0gMDAwMDAwMDBhMDAwMDAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZmUwMDAw
MCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAxMDAwMDAw
MDAgLSAwMDAwMDAwNTYwMDAwMDAwICh1c2FibGUpDQooWEVOKSBBQ1BJOiBSU0RQIDAwMEZC
MTAwLCAwMDE0IChyMCBBQ1BJQU0pDQooWEVOKSBBQ1BJOiBSU0RUIDlGRjkwMDAwLCAwMDQ4
IChyMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFD
UEk6IEZBQ1AgOUZGOTAyMDAsIDAwODQgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBN
U0ZUICAgICAgIDk3KQ0KKFhFTikgQUNQSTogRFNEVCA5RkY5MDVFMCwgOTQyNyAocjEgIEE3
NjQwIEE3NjQwMTAwICAgICAgMTAwIElOVEwgMjAwNTExMTcpDQooWEVOKSBBQ1BJOiBGQUNT
IDlGRjlFMDAwLCAwMDQwDQooWEVOKSBBQ1BJOiBBUElDIDlGRjkwMzkwLCAwMDg4IChyMSA3
NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykNCihYRU4pIEFDUEk6IE1D
RkcgOUZGOTA0MjAsIDAwM0MgKHIxIDc2NDBNUyBPRU1NQ0ZHICAyMDEwMDkxMyBNU0ZUICAg
ICAgIDk3KQ0KKFhFTikgQUNQSTogU0xJQyA5RkY5MDQ2MCwgMDE3NiAocjEgTVNJICAgIE9F
TVNMSUMgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBPRU1CIDlGRjlF
MDQwLCAwMDcyIChyMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAgICAgICA5NykN
CihYRU4pIEFDUEk6IFNSQVQgOUZGOUE1RTAsIDAxMDggKHIzIEFNRCAgICBGQU1fRl8xMCAg
ICAgICAgMiBBTUQgICAgICAgICAxKQ0KKFhFTikgQUNQSTogSFBFVCA5RkY5QTZGMCwgMDAz
OCAocjEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgICAgICAgOTcpDQooWEVOKSBB
Q1BJOiBJVlJTIDlGRjlBNzMwLCAwMTA4IChyMSAgQU1EICAgICBSRDg5MFMgICAyMDIwMzEg
QU1EICAgICAgICAgMCkNCihYRU4pIEFDUEk6IFNTRFQgOUZGOUE4NDAsIDBEQTQgKHIxIEEg
TSBJICBQT1dFUk5PVyAgICAgICAgMSBBTUQgICAgICAgICAxKQ0KKFhFTikgU3lzdGVtIFJB
TTogMjA0NzlNQiAoMjA5NzA2NjBrQikNCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMCAt
PiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMSAtPiBOb2RlIDANCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgMiAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+
IEFQSUMgMyAtPiBOb2RlIDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNCAtPiBOb2Rl
IDANCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgNSAtPiBOb2RlIDANCihYRU4pIFNSQVQ6
IE5vZGUgMCBQWE0gMCAwLWEwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAw
LWEwMDAwMDAwDQooWEVOKSBTUkFUOiBOb2RlIDAgUFhNIDAgMTAwMDAwMDAwLTU2MDAwMDAw
MA0KKFhFTikgTlVNQTogQWxsb2NhdGVkIG1lbW5vZGVtYXAgZnJvbSA1NWQyNmEwMDAgLSA1
NWQyNzAwMDANCihYRU4pIE5VTUE6IFVzaW5nIDggZm9yIHRoZSBoYXNoIHNoaWZ0Lg0KKFhF
TikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQNCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIg
YXQgMHhmYjAwMDAwMCwgbWFwcGVkIHRvIDB4ZmZmZjgyYzAwMDIwMTAwMCwgdXNpbmcgNjE0
NGssIHRvdGFsIDE0MzM2aw0KKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwg
bGluZWxlbmd0aD01MTIwLCBmb250IDh4MTYNCihYRU4pIHZlc2FmYjogVHJ1ZWNvbG9yOiBz
aXplPTg6ODo4OjgsIHNoaWZ0PTI0OjE2Ojg6MA0KKFhFTikgZm91bmQgU01QIE1QLXRhYmxl
IGF0IDAwMGZmNzgwDQooWEVOKSBETUkgcHJlc2VudC4NCihYRU4pIEFQSUMgYm9vdCBzdGF0
ZSBpcyAneGFwaWMnDQooWEVOKSBVc2luZyBBUElDIGRyaXZlciBkZWZhdWx0DQooWEVOKSBB
Q1BJOiBQTS1UaW1lciBJTyBQb3J0OiAweDgwOA0KKFhFTikgQUNQSTogU0xFRVAgSU5GTzog
cG0xeF9jbnRbODA0LDBdLCBwbTF4X2V2dFs4MDAsMF0NCihYRU4pIEFDUEk6ICAgICAgICAg
ICAgIHdha2V1cF92ZWNbOWZmOWUwMGNdLCB2ZWNfc2l6ZVsyMF0NCihYRU4pIEFDUEk6IExv
Y2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwDQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9p
ZFsweDAxXSBsYXBpY19pZFsweDAwXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMwIDA6
MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAyXSBs
YXBpY19pZFsweDAxXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMxIDA6MTAgQVBJQyB2
ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAzXSBsYXBpY19pZFsw
eDAyXSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vzc29yICMyIDA6MTAgQVBJQyB2ZXJzaW9uIDE2
DQooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBpY19pZFsweDAzXSBlbmFi
bGVkKQ0KKFhFTikgUHJvY2Vzc29yICMzIDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDA1XSBsYXBpY19pZFsweDA0XSBlbmFibGVkKQ0KKFhF
TikgUHJvY2Vzc29yICM0IDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBMQVBJ
QyAoYWNwaV9pZFsweDA2XSBsYXBpY19pZFsweDA1XSBlbmFibGVkKQ0KKFhFTikgUHJvY2Vz
c29yICM1IDA6MTAgQVBJQyB2ZXJzaW9uIDE2DQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlkWzB4
MDZdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQooWEVOKSBJT0FQSUNbMF06
IGFwaWNfaWQgNiwgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kgMC0yMw0K
KFhFTikgQUNQSTogSU9BUElDIChpZFsweDA3XSBhZGRyZXNzWzB4ZmVjMjAwMDBdIGdzaV9i
YXNlWzI0XSkNCihYRU4pIElPQVBJQ1sxXTogYXBpY19pZCA3LCB2ZXJzaW9uIDMzLCBhZGRy
ZXNzIDB4ZmVjMjAwMDAsIEdTSSAyNC01NQ0KKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1
cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkNCihYRU4pIEFDUEk6IElOVF9T
UkNfT1ZSIChidXMgMCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkNCihYRU4p
IEFDUEk6IElSUTAgdXNlZCBieSBvdmVycmlkZS4NCihYRU4pIEFDUEk6IElSUTIgdXNlZCBi
eSBvdmVycmlkZS4NCihYRU4pIEFDUEk6IElSUTkgdXNlZCBieSBvdmVycmlkZS4NCihYRU4p
IEVuYWJsaW5nIEFQSUMgbW9kZTogIEZsYXQuICBVc2luZyAyIEkvTyBBUElDcw0KKFhFTikg
QUNQSTogSFBFVCBpZDogMHg4MzAwIGJhc2U6IDB4ZmVkMDAwMDANCihYRU4pIEVSU1QgdGFi
bGUgd2FzIG5vdCBmb3VuZA0KKFhFTikgVXNpbmcgQUNQSSAoTUFEVCkgZm9yIFNNUCBjb25m
aWd1cmF0aW9uIGluZm9ybWF0aW9uDQooWEVOKSBTTVA6IEFsbG93aW5nIDYgQ1BVcyAoMCBo
b3RwbHVnIENQVXMpDQooWEVOKSBtYXBwZWQgQVBJQyB0byBmZmZmODJjZmZmZGZiMDAwIChm
ZWUwMDAwMCkNCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyY2ZmZmRmYTAwMCAoZmVj
MDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZkZjkwMDAgKGZlYzIw
MDAwKQ0KKFhFTikgSVJRIGxpbWl0czogNTYgR1NJLCAxMTEyIE1TSS9NU0ktWA0KKFhFTikg
VXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQ0KKFhFTikg
RGV0ZWN0ZWQgMzIwMC4xNjEgTUh6IHByb2Nlc3Nvci4NCihYRU4pIEluaXRpbmcgbWVtb3J5
IHNoYXJpbmcuDQooWEVOKSBBTUQgRmFtMTBoIG1hY2hpbmUgY2hlY2sgcmVwb3J0aW5nIGVu
YWJsZWQNCihYRU4pIFBDSTogTUNGRyBjb25maWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAwMDAg
c2VnbWVudCAwMDAwIGJ1c2VzIDAwIC0gZmYNCihYRU4pIFBDSTogTm90IHVzaW5nIE1DRkcg
Zm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYNCihYRU4pIEFNRC1WaTogRm91bmQgTVNJIGNh
cGFiaWxpdHkgYmxvY2sgYXQgMHg1NA0KKFhFTikgQU1ELVZpOiBBQ1BJIFRhYmxlOg0KKFhF
TikgQU1ELVZpOiAgU2lnbmF0dXJlIElWUlMNCihYRU4pIEFNRC1WaTogIExlbmd0aCAweDEw
OA0KKFhFTikgQU1ELVZpOiAgUmV2aXNpb24gMHgxDQooWEVOKSBBTUQtVmk6ICBDaGVja1N1
bSAweGZkDQooWEVOKSBBTUQtVmk6ICBPRU1fSWQgQU1EICANCihYRU4pIEFNRC1WaTogIE9F
TV9UYWJsZV9JZCBSRDg5MFMNCihYRU4pIEFNRC1WaTogIE9FTV9SZXZpc2lvbiAweDIwMjAz
MQ0KKFhFTikgQU1ELVZpOiAgQ3JlYXRvcl9JZCBBTUQgDQooWEVOKSBBTUQtVmk6ICBDcmVh
dG9yX1JldmlzaW9uIDANCihYRU4pIEFNRC1WaTogSVZSUyBCbG9jazogdHlwZSAweDEwIGZs
YWdzIDB4M2UgbGVuIDB4ZDggaWQgMHgyDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MyBpZCAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5n
ZTogMCAtPiAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgy
IGlkIDB4MTAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlw
ZSAweDIgaWQgMHhkMDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRy
eTogdHlwZSAweDIgaWQgMHgxOCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4MyBpZCAweGMwMCBmbGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZf
SWQgUmFuZ2U6IDB4YzAwIC0+IDB4YzAxDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MiBpZCAweDI4IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZp
Y2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YjAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4MzAgZmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhhMDAgZmxhZ3MgMA0KKFhFTikg
QU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg0OCBmbGFncyAwDQoo
WEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDkwMCBmbGFn
cyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDUw
IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlk
IDB4ODAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUg
MHgyIGlkIDB4NTggZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDMgaWQgMHg3MDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdl
OiAweDcwMCAtPiAweDcwMQ0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlw
ZSAweDIgaWQgMHg2MCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
OiB0eXBlIDB4MiBpZCAweDUwMCBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4NDMgaWQgMHg2MDggZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2
X0lkIFJhbmdlOiAweDYwOCAtPiAweDZmZiBhbGlhcyAweDYwMA0KKFhFTikgQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg2OCBmbGFncyAwDQooWEVOKSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDQwMCBmbGFncyAwDQooWEVO
KSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDg4IGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OTAgZmxh
Z3MgMA0KKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDkwIC0+IDB4OTINCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OTggZmxhZ3MgMA0K
KFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDk4IC0+IDB4OWENCihYRU4pIEFNRC1W
aTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YTAgZmxhZ3MgMHhkNw0KKFhF
TikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhhMiBmbGFncyAw
DQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweGEzIGZs
YWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4
YTQgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDQz
IGlkIDB4MzAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHgzMDAg
LT4gMHgzZmYgYWxpYXMgMHhhNA0KKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDIgaWQgMHhhNSBmbGFncyAwDQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVu
dHJ5OiB0eXBlIDB4MiBpZCAweGE4IGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZp
Y2UgRW50cnk6IHR5cGUgMHgyIGlkIDB4YTkgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHgxMDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZp
OiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDMgaWQgMHhiMCBmbGFncyAwDQooWEVOKSBB
TUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4YjAgLT4gMHhiMg0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAgZmxhZ3MgMA0KKFhFTikgQU1ELVZpOiBJVkhE
IERldmljZSBFbnRyeTogdHlwZSAweDQ4IGlkIDAgZmxhZ3MgMHhkNw0KKFhFTikgQU1ELVZp
OiBJVkhEIFNwZWNpYWw6IDAwMDA6MDA6MTQuMCB2YXJpZXR5IDB4MiBoYW5kbGUgMA0KKFhF
TikgQU1ELVZpOiBJVkhEOiBDb21tYW5kIGxpbmUgb3ZlcnJpZGUgcHJlc2VudCBmb3IgSFBF
VCAwIChJVlJTOiAwIGRldklEIDAwMDA6MDA6MTQuMCkNCihYRU4pIEFNRC1WaTogSVZIRCBE
ZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDANCihYRU4pIEFNRC1WaTogSVZI
RCBTcGVjaWFsOiAwMDAwOjAwOjAwLjEgdmFyaWV0eSAweDEgaGFuZGxlIDB4Nw0KKFhFTikg
QU1ELVZpOiBJT01NVSAwIEVuYWJsZWQuDQooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5h
YmxlZA0KKFhFTikgIC0gRG9tMCBtb2RlOiBSZWxheGVkDQooWEVOKSBJbnRlcnJ1cHQgcmVt
YXBwaW5nIGVuYWJsZWQNCihYRU4pIEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTANCihYRU4p
IEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTANCihYRU4pIEdldHRpbmcgSUQ6IDANCihYRU4p
IEdldHRpbmcgTFZUMDogNzAwDQooWEVOKSBHZXR0aW5nIExWVDE6IDQwMA0KKFhFTikgZW5h
YmxlZCBFeHRJTlQgb24gQ1BVIzANCihYRU4pIEVTUiB2YWx1ZSBiZWZvcmUgZW5hYmxpbmcg
dmVjdG9yOiAweDQgIGFmdGVyOiAwDQooWEVOKSBFTkFCTElORyBJTy1BUElDIElSUXMNCihY
RU4pICAtPiBVc2luZyBuZXcgQUNLIG1ldGhvZA0KKFhFTikgaW5pdCBJT19BUElDIElSUXMN
CihYRU4pICBJTy1BUElDIChhcGljaWQtcGluKSA2LTAsIDYtMTYsIDYtMTcsIDYtMTgsIDYt
MTksIDYtMjAsIDYtMjEsIDYtMjIsIDYtMjMsIDctMCwgNy0xLCA3LTIsIDctMywgNy00LCA3
LTUsIDctNiwgNy03LCA3LTgsIDctOSwgNy0xMCwgNy0xMSwgNy0xMiwgNy0xMywgNy0xNCwg
Ny0xNSwgNy0xNiwgNy0xNywgNy0xOCwgNy0xOSwgNy0yMCwgNy0yMSwgNy0yMiwgNy0yMywg
Ny0yNCwgNy0yNSwgNy0yNiwgNy0yNywgNy0yOCwgNy0yOSwgNy0zMCwgNy0zMSBub3QgY29u
bmVjdGVkLg0KKFhFTikgLi5USU1FUjogdmVjdG9yPTB4RjAgYXBpYzE9MCBwaW4xPTIgYXBp
YzI9LTEgcGluMj0tMQ0KKFhFTikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4NCihY
RU4pIG51bWJlciBvZiBJTy1BUElDICM2IHJlZ2lzdGVyczogMjQuDQooWEVOKSBudW1iZXIg
b2YgSU8tQVBJQyAjNyByZWdpc3RlcnM6IDMyLg0KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJ
Qy4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uDQooWEVOKSBJTyBBUElDICM2Li4uLi4uDQooWEVO
KSAuLi4uIHJlZ2lzdGVyICMwMDogMDYwMDAwMDANCihYRU4pIC4uLi4uLi4gICAgOiBwaHlz
aWNhbCBBUElDIGlkOiAwNg0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAN
CihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVOKSAuLi4uIHJlZ2lz
dGVyICMwMTogMDAxNzgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4IHJlZGlyZWN0aW9u
IGVudHJpZXM6IDAwMTcNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGltcGxlbWVudGVkOiAx
DQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAyMQ0KKFhFTikgLi4u
LiByZWdpc3RlciAjMDI6IDA2MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgICA6IGFyYml0cmF0
aW9uOiAwNg0KKFhFTikgLi4uLiByZWdpc3RlciAjMDM6IDA3MDAwMDAwDQooWEVOKSAuLi4u
Li4uICAgICA6IEJvb3QgRFQgICAgOiAwDQooWEVOKSAuLi4uIElSUSByZWRpcmVjdGlvbiB0
YWJsZToNCihYRU4pICBOUiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBE
ZWxpIFZlY3Q6ICAgDQooWEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMSAgICAzMA0KKFhFTikgIDAxIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAg
ICAxICAgIDEgICAgMzANCihYRU4pICAwMiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIEYwDQooWEVOKSAgMDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICAzOA0KKFhFTikgIDA0IDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgRjENCihYRU4pICAwNSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA3IDAwMSAwMSAgMCAgICAwICAgIDAgICAw
ICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwOCAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDkgMDAxIDAxICAxICAgIDEgICAgMCAg
IDEgICAwICAgIDEgICAgMCAgICAwMA0KKFhFTikgIDBhIDAwMSAwMSAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNjgNCihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAgICAw
ICAgMCAgIDAgICAgMSAgICAxICAgIDcwDQooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA3OA0KKFhFTikgIDBkIDAwMSAwMSAgMCAgICAwICAg
IDAgICAwICAgMCAgICAxICAgIDEgICAgODgNCihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDkwDQooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA5OA0KKFhFTikgIDEwIDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxMSAwMDAgMDAgIDEgICAg
MCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAgMTIgMDAwIDAwICAxICAg
IDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgIDEzIDAwMCAwMCAgMSAg
ICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxNCAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAgMTUgMDAwIDAwICAx
ICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgIDE2IDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAxNyAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSBJTyBBUElDICM3
Li4uLi4uDQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMDogMDcwMDAwMDANCihYRU4pIC4uLi4u
Li4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNw0KKFhFTikgLi4uLi4uLiAgICA6IERlbGl2
ZXJ5IFR5cGU6IDANCihYRU4pIC4uLi4uLi4gICAgOiBMVFMgICAgICAgICAgOiAwDQooWEVO
KSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxRjgwMjENCihYRU4pIC4uLi4uLi4gICAgIDogbWF4
IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMUYNCihYRU4pIC4uLi4uLi4gICAgIDogUFJRIGlt
cGxlbWVudGVkOiAxDQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAy
MQ0KKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDAwMDAwMDAwDQooWEVOKSAuLi4uLi4uICAg
ICA6IGFyYml0cmF0aW9uOiAwMA0KKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6
DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBW
ZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAg
IDAgICAgMDANCihYRU4pICAwMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAwICAgIDAwDQooWEVOKSAgMDIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMCAgICAwMA0KKFhFTikgIDAzIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAw
ICAgIDAgICAgMDANCihYRU4pICAwNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAwICAgIDAwDQooWEVOKSAgMDUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMCAgICAwMA0KKFhFTikgIDA2IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAg
ICAwICAgIDAgICAgMDANCihYRU4pICAwNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAg
ICAgMCAgICAwICAgIDAwDQooWEVOKSAgMDggMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAw
ICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDA5IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAg
MCAgICAwICAgIDAgICAgMDANCihYRU4pICAwYSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAg
IDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMGIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAg
ICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDBjIDAwMCAwMCAgMSAgICAwICAgIDAgICAw
ICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAwZCAwMDAgMDAgIDEgICAgMCAgICAwICAg
MCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMGUgMDAwIDAwICAxICAgIDAgICAgMCAg
IDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDBmIDAwMCAwMCAgMSAgICAwICAgIDAg
ICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAw
ICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAg
MCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAg
IDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAg
ICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAg
ICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAw
ICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxNiAwMDAgMDAgIDEgICAg
MCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMTcgMDAwIDAwICAxICAg
IDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDE4IDAwMCAwMCAgMSAg
ICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxOSAwMDAgMDAgIDEg
ICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWEgMDAwIDAwICAx
ICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFiIDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxYyAwMDAgMDAg
IDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSAgMWQgMDAwIDAw
ICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMCAgICAwMA0KKFhFTikgIDFlIDAwMCAw
MCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDAgICAgMDANCihYRU4pICAxZiAwMDAg
MDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAwICAgIDAwDQooWEVOKSBVc2luZyB2
ZWN0b3ItYmFzZWQgaW5kZXhpbmcNCihYRU4pIElSUSB0byBwaW4gbWFwcGluZ3M6DQooWEVO
KSBJUlEyNDAgLT4gMDoyDQooWEVOKSBJUlE0OCAtPiAwOjENCihYRU4pIElSUTU2IC0+IDA6
Mw0KKFhFTikgSVJRMjQxIC0+IDA6NA0KKFhFTikgSVJRNjQgLT4gMDo1DQooWEVOKSBJUlE3
MiAtPiAwOjYNCihYRU4pIElSUTgwIC0+IDA6Nw0KKFhFTikgSVJRODggLT4gMDo4DQooWEVO
KSBJUlE5NiAtPiAwOjkNCihYRU4pIElSUTEwNCAtPiAwOjEwDQooWEVOKSBJUlExMTIgLT4g
MDoxMQ0KKFhFTikgSVJRMTIwIC0+IDA6MTINCihYRU4pIElSUTEzNiAtPiAwOjEzDQooWEVO
KSBJUlExNDQgLT4gMDoxNA0KKFhFTikgSVJRMTUyIC0+IDA6MTUNCihYRU4pIC4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25lLg0KKFhFTikgVXNpbmcgbG9jYWwg
QVBJQyB0aW1lciBpbnRlcnJ1cHRzLg0KKFhFTikgY2FsaWJyYXRpbmcgQVBJQyB0aW1lciAu
Li4NCihYRU4pIC4uLi4uIENQVSBjbG9jayBzcGVlZCBpcyAzMjAwLjEzNjcgTUh6Lg0KKFhF
TikgLi4uLi4gaG9zdCBidXMgY2xvY2sgc3BlZWQgaXMgMjAwLjAwODQgTUh6Lg0KKFhFTikg
Li4uLi4gYnVzX3NjYWxlID0gMHhjY2Q3DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0g
UGxhdGZvcm0gdGltZXIgaXMgMTQuMzE4TUh6IEhQRVQNCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjA5XSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDY0IEtpQi4NCihYRU4pIFsyMDE0
LTAxLTA3IDExOjA5OjA5XSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOTowOV0gU1ZNOiBTdXBwb3J0ZWQgYWR2YW5jZWQgZmVhdHVyZXM6DQooWEVOKSBb
MjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTmVzdGVkIFBhZ2UgVGFibGVzIChOUFQpDQooWEVO
KSBbMjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZp
cnR1YWxpc2F0aW9uDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gIC0gTmV4dC1SSVAg
U2F2ZWQgb24gI1ZNRVhJVA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICAtIFBhdXNl
LUludGVyY2VwdCBGaWx0ZXINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBIVk06IFNW
TSBlbmFibGVkDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gSFZNOiBIYXJkd2FyZSBB
c3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA5XSBIVk06IEhBUCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOTowOV0gSFZNOiBQVkggbW9kZSBub3Qgc3VwcG9ydGVkIG9uIHRoaXMgcGxh
dGZvcm0NCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA4XSBtYXNrZWQgRXh0SU5UIG9uIENQ
VSMxDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gbWljcm9jb2RlOiBDUFUxIGNvbGxl
Y3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBiZg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MDhdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA5XSBtaWNyb2NvZGU6IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHgxMDAw
MGJmDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOF0gbWFza2VkIEV4dElOVCBvbiBDUFUj
Mw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIG1pY3JvY29kZTogQ1BVMyBjb2xsZWN0
X2NwdV9pbmZvOiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjA4XSBtYXNrZWQgRXh0SU5UIG9uIENQVSM0DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTow
OV0gbWljcm9jb2RlOiBDUFU0IGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTB4MTAwMDBi
Zg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDhdIG1hc2tlZCBFeHRJTlQgb24gQ1BVIzUN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBCcm91Z2h0IHVwIDYgQ1BVcw0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MDldIG1pY3JvY29kZTogQ1BVNSBjb2xsZWN0X2NwdV9pbmZv
OiBwYXRjaF9pZD0weDEwMDAwYmYNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBBTUQt
Vmk6IEZhaWxlZCB0byBzZXR1cCBIUEVUIE1TSSByZW1hcHBpbmcuIFdyb25nIEhQRVQuDQoo
WEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gQU1ELVZpOiBGYWlsZWQgdG8gc2V0dXAgSFBF
VCBNU0kgcmVtYXBwaW5nLiBXcm9uZyBIUEVULg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MDldIEFNRC1WaTogRmFpbGVkIHRvIHNldHVwIEhQRVQgTVNJIHJlbWFwcGluZy4gV3Jvbmcg
SFBFVC4NCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBIUEVUOiAzIHRpbWVycyB1c2Fi
bGUgZm9yIGJyb2FkY2FzdCAoMyB0b3RhbCkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5
XSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIE1D
QTogVXNlIGh3IHRocmVzaG9sZGluZyB0byBhZGp1c3QgcG9sbGluZyBmcmVxdWVuY3kNCihY
RU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBw
b2xsaW5nIHRpbWVyIHN0YXJ0ZWQuDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gWGVu
b3Byb2ZpbGU6IEZhaWxlZCB0byBzZXR1cCBJQlMgTFZUIG9mZnNldCwgSUJTQ1RMID0gMHhm
ZmZmZmZmZg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICoqKiBMT0FESU5HIERPTUFJ
TiAwICoqKg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl9wYXJzZV9iaW5hcnk6
IHBoZHI6IHBhZGRyPTB4MTAwMDAwMCBtZW1zej0weDEwMjgwMDANCihYRU4pIFsyMDE0LTAx
LTA3IDExOjA5OjA5XSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDIyMDAwMDAg
bWVtc3o9MHhmNzExMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl9wYXJzZV9i
aW5hcnk6IHBoZHI6IHBhZGRyPTB4MjJmODAwMCBtZW1zej0weDE0MzQwDQooWEVOKSBbMjAx
NC0wMS0wNyAxMTowOTowOV0gZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgyMzBk
MDAwIG1lbXN6PTB4ZTgyMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3Bh
cnNlX2JpbmFyeTogbWVtb3J5OiAweDEwMDAwMDAgLT4gMHgzMThmMDAwDQooWEVOKSBbMjAx
NC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBHVUVTVF9PUyA9ICJsaW51
eCINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IEdV
RVNUX1ZFUlNJT04gPSAiMi42Ig0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl94
ZW5fcGFyc2Vfbm90ZTogWEVOX1ZFUlNJT04gPSAieGVuLTMuMCINCihYRU4pIFsyMDE0LTAx
LTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IFZJUlRfQkFTRSA9IDB4ZmZmZmZm
ZmY4MDAwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldIGVsZl94ZW5fcGFyc2Vf
bm90ZTogRU5UUlkgPSAweGZmZmZmZmZmODIzMGQxZTANCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IEhZUEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZm
ZjgxMDAxMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9u
b3RlOiBGRUFUVVJFUyA9ICIhd3JpdGFibGVfcGFnZV90YWJsZXN8cGFlX3BnZGlyX2Fib3Zl
XzRnYiINCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6
IFNVUFBPUlRFRF9GRUFUVVJFUyA9IDB4ODAxDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTow
OV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBQQUVfTU9ERSA9ICJ5ZXMiDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyIN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25v
d24geGVuIGVsZiBub3RlICgweGQpDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxm
X3hlbl9wYXJzZV9ub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQ0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MDldIGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZm
ODAwMDAwMDAwMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gZWxmX3hlbl9wYXJz
ZV9ub3RlOiBQQUREUl9PRkZTRVQgPSAweDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5
XSBlbGZfeGVuX2FkZHJfY2FsY19jaGVjazogYWRkcmVzc2VzOg0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MDldICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAw
DQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTowOV0gICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAw
eDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjA5XSAgICAgdmlydF9vZmZzZXQgICAgICA9
IDB4ZmZmZmZmZmY4MDAwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MDldICAgICB2
aXJ0X2tzdGFydCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOTowOV0gICAgIHZpcnRfa2VuZCAgICAgICAgPSAweGZmZmZmZmZmODMxOGYwMDAN
CihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4
ZmZmZmZmZmY4MjMwZDFlMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBdICAgICBwMm1f
YmFzZSAgICAgICAgID0gMHhmZmZmZmZmZmZmZmZmZmZmDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMF0gIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzINCihYRU4pIFsy
MDE0LTAxLTA3IDExOjA5OjEwXSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBh
ZGRyIDB4MTAwMDAwMCAtPiAweDMxOGYwMDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEw
XSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6DQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMF0gIERvbTAgYWxsb2MuOiAgIDAwMDAwMDA1NDgwMDAwMDAtPjAwMDAwMDA1NGMwMDAw
MDAgKDM3MzIyMyBwYWdlcyB0byBiZSBhbGxvY2F0ZWQpDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMF0gIEluaXQuIHJhbWRpc2s6IDAwMDAwMDA1NWYxZTcwMDAtPjAwMDAwMDA1NWZm
ZmY0MDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSBWSVJUVUFMIE1FTU9SWSBBUlJB
TkdFTUVOVDoNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgTG9hZGVkIGtlcm5lbDog
ZmZmZmZmZmY4MTAwMDAwMC0+ZmZmZmZmZmY4MzE4ZjAwMA0KKFhFTikgWzIwMTQtMDEtMDcg
MTE6MDk6MTBdICBJbml0LiByYW1kaXNrOiBmZmZmZmZmZjgzMThmMDAwLT5mZmZmZmZmZjgz
ZmE3NDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMF0gIFBoeXMtTWFjaCBtYXA6IGZm
ZmZmZmZmODNmYTgwMDAtPmZmZmZmZmZmODQyYTgwMDANCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjEwXSAgU3RhcnQgaW5mbzogICAgZmZmZmZmZmY4NDJhODAwMC0+ZmZmZmZmZmY4NDJh
ODRiNA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBdICBQYWdlIHRhYmxlczogICBmZmZm
ZmZmZjg0MmE5MDAwLT5mZmZmZmZmZjg0MmNlMDAwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMF0gIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODQyY2UwMDAtPmZmZmZmZmZmODQyY2Yw
MDANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSAgVE9UQUw6ICAgICAgICAgZmZmZmZm
ZmY4MDAwMDAwMC0+ZmZmZmZmZmY4NDQwMDAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MTBdICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgyMzBkMWUwDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMF0gRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMF0gZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHhmZmZmZmZmZjgxMDAw
MDAwIC0+IDB4ZmZmZmZmZmY4MjAyODAwMA0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTBd
IGVsZl9sb2FkX2JpbmFyeTogcGhkciAxIGF0IDB4ZmZmZmZmZmY4MjIwMDAwMCAtPiAweGZm
ZmZmZmZmODIyZjcxMTANCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjEwXSBlbGZfbG9hZF9i
aW5hcnk6IHBoZHIgMiBhdCAweGZmZmZmZmZmODIyZjgwMDAgLT4gMHhmZmZmZmZmZjgyMzBj
MzQwDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMF0gZWxmX2xvYWRfYmluYXJ5OiBwaGRy
IDMgYXQgMHhmZmZmZmZmZjgyMzBkMDAwIC0+IDB4ZmZmZmZmZmY4MjQwZjAwMA0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDAsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyLCB0eXBlID0g
MHg3LCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBh
Z2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAw
eDU0ZjBhYzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0w
MS0wNyAxMTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHgxOCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4LCB0eXBlID0gMHgy
LCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
Mw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4MzAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDU0
ZjBhYzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0w
NyAxMTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg0OCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwLCB0eXBlID0gMHgyLCBy
b290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFi
bGU6IGRldmljZSBpZCA9IDB4NTgsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDU0ZjBh
YzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAx
MTowOToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2
MCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDY4LCB0eXBlID0gMHgyLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTow
OToxMV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MCwg
dHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkyLCB0eXBlID0gMHg3LCByb290IHRh
YmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOTox
MV0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5YSwgdHlw
ZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEwLCB0eXBlID0gMHg3LCByb290IHRhYmxl
ID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIw
MTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmlj
ZSBpZCA9IDB4YTIsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMywgdHlwZSA9
IDB4Nywgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE0LCB0eXBlID0gMHg1LCByb290IHRhYmxlID0g
MHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQt
MDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9tYWlu
ID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhOCwgdHlwZSA9IDB4
Miwgcm9vdCB0YWJsZSA9IDB4NTRmMGFjMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9
IDMNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjExXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweGIwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHg1
NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEt
MDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4YjIsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDU0ZjBhYzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1ELVZp
OiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjANCihYRU4pIFsyMDE0LTAxLTA3
IDExOjA5OjExXSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MTguMQ0K
KFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2tpcHBpbmcgaG9zdCBicmlk
Z2UgMDAwMDowMDoxOC4yDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxMV0gQU1ELVZpOiBT
a2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjMNCihYRU4pIFsyMDE0LTAxLTA3IDEx
OjA5OjExXSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MTguNA0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4NDAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NTAw
LCB0eXBlID0gMHgzLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NjAwLCB0eXBlID0gMHg3LCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4NzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NzAx
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4ODAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4OTAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTAw
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YjAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4YzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6
MDk6MTFdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzAx
LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4ZDAwLCB0eXBlID0gMHgxLCByb290
IHRhYmxlID0gMHg1NGYwYWMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTFdIFNjcnViYmluZyBGcmVlIFJBTTogLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLmRvbmUuDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxN10gSW5p
dGlhbCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuDQoo
WEVOKSBbMjAxNC0wMS0wNyAxMTowOToxN10gU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBb
MjAxNC0wMS0wNyAxMTowOToxN10gR3Vlc3QgTG9nbGV2ZWw6IEFsbA0KKFhFTikgWzIwMTQt
MDEtMDcgMTE6MDk6MTddIFhlbiBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLg0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MDk6MTddICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlw
ZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikg
WzIwMTQtMDEtMDcgMTE6MDk6MTddIEZyZWVkIDI3MmtCIGluaXQgbWVtb3J5Lg0KbWFwcGlu
ZyBrZXJuZWwgaW50byBwaHlzaWNhbCBtZW1vcnkNCmFib3V0IHRvIGdldCBzdGFydGVkLi4u
DQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVzZXQNClsg
ICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIGNwdQ0KWyAgICAwLjAw
MDAwMF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgY3B1YWNjdA0KWyAgICAwLjAwMDAw
MF0gTGludXggdmVyc2lvbiAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAocm9vdEBz
ZXJ2ZWVyc3RlcnRqZSkgKGdjYyB2ZXJzaW9uIDQuNy4yIChEZWJpYW4gNC43LjItNSkgKSAj
MSBTTVAgVHVlIEphbiA3IDEwOjAyOjU1IENFVCAyMDE0DQpbICAgIDAuMDAwMDAwXSBDb21t
YW5kIGxpbmU6IHJvb3Q9L2Rldi9tYXBwZXIvc2VydmVlcnN0ZXJ0amUtcm9vdCBybyB2ZXJi
b3NlIGVhcmx5cHJpbnRrPXhlbiBtZW09MTUzNk0gY29uc29sZT1odmMwIGNvbnNvbGU9dHR5
MCB2Z2E9Nzk0IHZpZGVvPXZlc2FmYiBhY3BpX2VuZm9yY2VfcmVzb3VyY2VzPWxheCBtYXhf
bG9vcD0zMCBsb29wX21heF9wYXJ0PTEwIHhlbi1wY2liYWNrLmhpZGU9KDAzOjA2LjApKDA0
OjAwLiopKDA2OjAxLiopKDA3OjAwLiopKDA4OjAwLiopKDBjOjAwLiopIGRlYnVnIGxvZ2xl
dmVsPTEwIG5vbW9kZXNldA0KWyAgICAwLjAwMDAwMF0gRnJlZWluZyA5OS0xMDAgcGZuIHJh
bmdlOiAxMDMgcGFnZXMgZnJlZWQNClsgICAgMC4wMDAwMDBdIFJlbGVhc2VkIDEwMyBwYWdl
cyBvZiB1bnVzZWQgbWVtb3J5DQpbICAgIDAuMDAwMDAwXSBTZXQgMzkzNDMxIHBhZ2Uocykg
dG8gMS0xIG1hcHBpbmcNClsgICAgMC4wMDAwMDBdIFBvcHVsYXRpbmcgNjAwMDAtNjAwNjcg
cGZuIHJhbmdlOiAxMDMgcGFnZXMgYWRkZWQNClsgICAgMC4wMDAwMDBdIGU4MjA6IEJJT1Mt
cHJvdmlkZWQgcGh5c2ljYWwgUkFNIG1hcDoNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAw
eDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDAwMDk4ZmZmXSB1c2FibGUNClsgICAgMC4w
MDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAwOTk4MDAtMHgwMDAwMDAwMDAwMGZmZmZm
XSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDAwMDEwMDAw
MC0weDAwMDAwMDAwNjAwNjZmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVt
IDB4MDAwMDAwMDA2MDA2NzAwMC0weDAwMDAwMDAwOWZmOGZmZmZdIHVudXNhYmxlDQpbICAg
IDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDlmZjkwMDAwLTB4MDAwMDAwMDA5ZmY5
ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDlm
ZjllMDAwLTB4MDAwMDAwMDA5ZmZkZmZmZl0gQUNQSSBOVlMNClsgICAgMC4wMDAwMDBdIFhl
bjogW21lbSAweDAwMDAwMDAwOWZmZTAwMDAtMHgwMDAwMDAwMDlmZmZmZmZmXSByZXNlcnZl
ZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAwMDAw
MDAwZmVlZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0gMHgwMDAw
MDAwMGZmZTAwMDAwLTB4MDAwMDAwMDBmZmZmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAw
MDBdIFhlbjogW21lbSAweDAwMDAwMDAxMDAwMDAwMDAtMHgwMDAwMDAwNTVmZmZmZmZmXSB1
bnVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwZmQwMDAwMDAwMC0w
eDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBib290Y29uc29s
ZSBbeGVuYm9vdDBdIGVuYWJsZWQNClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2Fi
bGUpIHByb3RlY3Rpb246IGFjdGl2ZQ0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXNlci1kZWZp
bmVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAw
MDAwMDAwMDAwMDAwMC0weDAwMDAwMDAwMDAwOThmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAw
MF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwOTk4MDAtMHgwMDAwMDAwMDAwMGZmZmZmXSBy
ZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAxMDAwMDAt
MHgwMDAwMDAwMDVmZmZmZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0g
MHgwMDAwMDAwMDYwMDY3MDAwLTB4MDAwMDAwMDA5ZmY4ZmZmZl0gdW51c2FibGUNClsgICAg
MC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDlmZjkwMDAwLTB4MDAwMDAwMDA5ZmY5
ZGZmZl0gQUNQSSBkYXRhDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA5
ZmY5ZTAwMC0weDAwMDAwMDAwOWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSB1
c2VyOiBbbWVtIDB4MDAwMDAwMDA5ZmZlMDAwMC0weDAwMDAwMDAwOWZmZmZmZmZdIHJlc2Vy
dmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDBmZWUwMDAwMC0weDAw
MDAwMDAwZmVlZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4
MDAwMDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDEwMDAwMDAwMC0weDAwMDAwMDA1NWZmZmZm
ZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwZmQwMDAw
MDAwMC0weDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBTTUJJ
T1MgMi41IHByZXNlbnQuDQpbICAgIDAuMDAwMDAwXSBETUk6IE1TSSBNUy03NjQwLzg5MEZY
QS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAgIDAuMDAw
MDAwXSBlODIwOiB1cGRhdGUgW21lbSAweDAwMDAwMDAwLTB4MDAwMDBmZmZdIHVzYWJsZSA9
PT4gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIGU4MjA6IHJlbW92ZSBbbWVtIDB4MDAwYTAw
MDAtMHgwMDBmZmZmZl0gdXNhYmxlDQpbICAgIDAuMDAwMDAwXSBObyBBR1AgYnJpZGdlIGZv
dW5kDQpbICAgIDAuMDAwMDAwXSBlODIwOiBsYXN0X3BmbiA9IDB4NjAwMDAgbWF4X2FyY2hf
cGZuID0gMHg0MDAwMDAwMDANClsgICAgMC4wMDAwMDBdIFNjYW5uaW5nIDEgYXJlYXMgZm9y
IGxvdyBtZW1vcnkgY29ycnVwdGlvbg0KWyAgICAwLjAwMDAwMF0gQmFzZSBtZW1vcnkgdHJh
bXBvbGluZSBhdCBbZmZmZjg4MDAwMDA5MzAwMF0gOTMwMDAgc2l6ZSAyNDU3Ng0KWyAgICAw
LjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDAwMDAwMDAwLTB4MDAwZmZm
ZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAwMDAwMDAwLTB4MDAwZmZmZmZdIHBhZ2Ug
NGsNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHg1ZmUwMDAw
MC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHg1ZmUwMDAwMC0weDVmZmZm
ZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDJkODIwMDAsIDB4MDJkODJm
ZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4MzAwMCwgMHgwMmQ4M2Zm
Zl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAw
eDVjMDAwMDAwLTB4NWZkZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDVjMDAwMDAw
LTB4NWZkZmZmZmZdIHBhZ2UgNGsNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4NDAwMCwg
MHgwMmQ4NGZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gQlJLIFsweDAyZDg1MDAwLCAw
eDAyZDg1ZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDJkODYwMDAsIDB4
MDJkODZmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMmQ4NzAwMCwgMHgw
MmQ4N2ZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzog
W21lbSAweDAwMTAwMDAwLTB4NWJmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAw
MTAwMDAwLTB4NWJmZmZmZmZdIHBhZ2UgNGsNClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFtt
ZW0gMHgwMzE4ZjAwMC0weDAzZmE3ZmZmXQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUlNEUCAw
MDAwMDAwMDAwMGZiMTAwIDAwMDAxNCAodjAwIEFDUElBTSkNClsgICAgMC4wMDAwMDBdIEFD
UEk6IFJTRFQgMDAwMDAwMDA5ZmY5MDAwMCAwMDAwNDggKHYwMSBNU0kgICAgT0VNU0xJQyAg
MjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IEZBQ1AgMDAw
MDAwMDA5ZmY5MDIwMCAwMDAwODQgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNG
VCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IERTRFQgMDAwMDAwMDA5ZmY5MDVl
MCAwMDk0MjcgKHYwMSAgQTc2NDAgQTc2NDAxMDAgMDAwMDAxMDAgSU5UTCAyMDA1MTExNykN
ClsgICAgMC4wMDAwMDBdIEFDUEk6IEZBQ1MgMDAwMDAwMDA5ZmY5ZTAwMCAwMDAwNDANClsg
ICAgMC4wMDAwMDBdIEFDUEk6IEFQSUMgMDAwMDAwMDA5ZmY5MDM5MCAwMDAwODggKHYwMSA3
NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBd
IEFDUEk6IE1DRkcgMDAwMDAwMDA5ZmY5MDQyMCAwMDAwM0MgKHYwMSA3NjQwTVMgT0VNTUNG
RyAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNMSUMg
MDAwMDAwMDA5ZmY5MDQ2MCAwMDAxNzYgKHYwMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA5MTMg
TVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IE9FTUIgMDAwMDAwMDA5ZmY5
ZTA0MCAwMDAwNzIgKHYwMSA3NjQwTVMgQTc2NDAxMDAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5
NykNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNSQVQgMDAwMDAwMDA5ZmY5YTVlMCAwMDAxMDgg
KHYwMyBBTUQgICAgRkFNX0ZfMTAgMDAwMDAwMDIgQU1EICAwMDAwMDAwMSkNClsgICAgMC4w
MDAwMDBdIEFDUEk6IEhQRVQgMDAwMDAwMDA5ZmY5YTZmMCAwMDAwMzggKHYwMSA3NjQwTVMg
T0VNSFBFVCAgMjAxMDA5MTMgTVNGVCAwMDAwMDA5NykNClsgICAgMC4wMDAwMDBdIEFDUEk6
IElWUlMgMDAwMDAwMDA5ZmY5YTczMCAwMDAxMDggKHYwMSAgQU1EICAgICBSRDg5MFMgMDAy
MDIwMzEgQU1EICAwMDAwMDAwMCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IFNTRFQgMDAwMDAw
MDA5ZmY5YTg0MCAwMDBEQTQgKHYwMSBBIE0gSSAgUE9XRVJOT1cgMDAwMDAwMDEgQU1EICAw
MDAwMDAwMSkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZl
ZTAwMDAwDQpbICAgIDAuMDAwMDAwXSBOVU1BIHR1cm5lZCBvZmYNClsgICAgMC4wMDAwMDBd
IEZha2luZyBhIG5vZGUgYXQgW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDVm
ZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAweDAw
MDAwMDAwLTB4NWZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgIE5PREVfREFUQSBbbWVtIDB4
NWZkMWMwMDAtMHg1ZmQyNmZmZl0NClsgICAgMC4wMDAwMDBdIFpvbmUgcmFuZ2VzOg0KWyAg
ICAwLjAwMDAwMF0gICBETUEgICAgICBbbWVtIDB4MDAwMDEwMDAtMHgwMGZmZmZmZl0NClsg
ICAgMC4wMDAwMDBdICAgRE1BMzIgICAgW21lbSAweDAxMDAwMDAwLTB4ZmZmZmZmZmZdDQpb
ICAgIDAuMDAwMDAwXSAgIE5vcm1hbCAgIGVtcHR5DQpbICAgIDAuMDAwMDAwXSBNb3ZhYmxl
IHpvbmUgc3RhcnQgZm9yIGVhY2ggbm9kZQ0KWyAgICAwLjAwMDAwMF0gRWFybHkgbWVtb3J5
IG5vZGUgcmFuZ2VzDQpbICAgIDAuMDAwMDAwXSAgIG5vZGUgICAwOiBbbWVtIDB4MDAwMDEw
MDAtMHgwMDA5OGZmZl0NClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDEw
MDAwMC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gT24gbm9kZSAwIHRvdGFscGFnZXM6
IDM5MzExMg0KWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogNjQgcGFnZXMgdXNlZCBmb3Ig
bWVtbWFwDQpbICAgIDAuMDAwMDAwXSAgIERNQSB6b25lOiAyMSBwYWdlcyByZXNlcnZlZA0K
WyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogMzk5MiBwYWdlcywgTElGTyBiYXRjaDowDQpb
ICAgIDAuMDAwMDAwXSAgIERNQTMyIHpvbmU6IDYwODAgcGFnZXMgdXNlZCBmb3IgbWVtbWFw
DQpbICAgIDAuMDAwMDAwXSAgIERNQTMyIHpvbmU6IDM4OTEyMCBwYWdlcywgTElGTyBiYXRj
aDozMQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNClsg
ICAgMC4wMDAwMDBdIEFDUEk6IExvY2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwDQpbICAg
IDAuMDAwMDAwXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAxXSBsYXBpY19pZFsweDAwXSBl
bmFibGVkKQ0KWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFw
aWNfaWRbMHgwMV0gZW5hYmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3Bp
X2lkWzB4MDNdIGxhcGljX2lkWzB4MDJdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBpY19pZFsweDAzXSBlbmFibGVkKQ0KWyAgICAw
LjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5h
YmxlZCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDZdIGxhcGlj
X2lkWzB4MDVdIGVuYWJsZWQpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBJT0FQSUMgKGlkWzB4
MDZdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pDQpbICAgIDAuMDAwMDAwXSBJ
T0FQSUNbMF06IGFwaWNfaWQgNiwgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzAwMDAwLCBH
U0kgMC0yMw0KWyAgICAwLjAwMDAwMF0gQUNQSTogSU9BUElDIChpZFsweDA3XSBhZGRyZXNz
WzB4ZmVjMjAwMDBdIGdzaV9iYXNlWzI0XSkNClsgICAgMC4wMDAwMDBdIElPQVBJQ1sxXTog
YXBpY19pZCA3LCB2ZXJzaW9uIDMzLCBhZGRyZXNzIDB4ZmVjMjAwMDAsIEdTSSAyNC01NQ0K
WyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9i
YWxfaXJxIDIgZGZsIGRmbCkNClsgICAgMC4wMDAwMDBdIEFDUEk6IElOVF9TUkNfT1ZSIChi
dXMgMCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkNClsgICAgMC4wMDAwMDBd
IEFDUEk6IElSUTAgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIEFDUEk6IElS
UTIgdXNlZCBieSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIEFDUEk6IElSUTkgdXNlZCBi
eSBvdmVycmlkZS4NClsgICAgMC4wMDAwMDBdIFVzaW5nIEFDUEkgKE1BRFQpIGZvciBTTVAg
Y29uZmlndXJhdGlvbiBpbmZvcm1hdGlvbg0KWyAgICAwLjAwMDAwMF0gQUNQSTogSFBFVCBp
ZDogMHg4MzAwIGJhc2U6IDB4ZmVkMDAwMDANClsgICAgMC4wMDAwMDBdIHNtcGJvb3Q6IEFs
bG93aW5nIDYgQ1BVcywgMCBob3RwbHVnIENQVXMNClsgICAgMC4wMDAwMDBdIG5yX2lycXNf
Z3NpOiA3Mg0KWyAgICAwLjAwMDAwMF0gZTgyMDogW21lbSAweGEwMDAwMDAwLTB4ZmVkZmZm
ZmZdIGF2YWlsYWJsZSBmb3IgUENJIGRldmljZXMNClsgICAgMC4wMDAwMDBdIEJvb3Rpbmcg
cGFyYXZpcnR1YWxpemVkIGtlcm5lbCBvbiBYZW4NClsgICAgMC4wMDAwMDBdIFhlbiB2ZXJz
aW9uOiA0LjQtdW5zdGFibGUgKHByZXNlcnZlLUFEKQ0KWyAgICAwLjAwMDAwMF0gc2V0dXBf
cGVyY3B1OiBOUl9DUFVTOjggbnJfY3B1bWFza19iaXRzOjggbnJfY3B1X2lkczo2IG5yX25v
ZGVfaWRzOjENClsgICAgMC4wMDAwMDBdIFBFUkNQVTogRW1iZWRkZWQgMjggcGFnZXMvY3B1
IEBmZmZmODgwMDVmNjAwMDAwIHM4Mjc1MiByODE5MiBkMjM3NDQgdTI2MjE0NA0KWyAgICAw
LjAwMDAwMF0gcGNwdS1hbGxvYzogczgyNzUyIHI4MTkyIGQyMzc0NCB1MjYyMTQ0IGFsbG9j
PTEqMjA5NzE1Mg0KWyAgICAwLjAwMDAwMF0gcGNwdS1hbGxvYzogWzBdIDAgMSAyIDMgNCA1
IC0gLSANClsgICAgOC4zMDc0NDddIEJ1aWx0IDEgem9uZWxpc3RzIGluIE5vZGUgb3JkZXIs
IG1vYmlsaXR5IGdyb3VwaW5nIG9uLiAgVG90YWwgcGFnZXM6IDM4Njk0Nw0KWyAgICA4LjMw
NzQ1MF0gUG9saWN5IHpvbmU6IERNQTMyDQpbICAgIDguMzA3NDU4XSBLZXJuZWwgY29tbWFu
ZCBsaW5lOiByb290PS9kZXYvbWFwcGVyL3NlcnZlZXJzdGVydGplLXJvb3Qgcm8gdmVyYm9z
ZSBlYXJseXByaW50az14ZW4gbWVtPTE1MzZNIGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAg
dmdhPTc5NCB2aWRlbz12ZXNhZmIgYWNwaV9lbmZvcmNlX3Jlc291cmNlcz1sYXggbWF4X2xv
b3A9MzAgbG9vcF9tYXhfcGFydD0xMCB4ZW4tcGNpYmFjay5oaWRlPSgwMzowNi4wKSgwNDow
MC4qKSgwNjowMS4qKSgwNzowMC4qKSgwODowMC4qKSgwYzowMC4qKSBkZWJ1ZyBsb2dsZXZl
bD0xMCBub21vZGVzZXQNClsgICAgOC4zMDc2ODNdIFBJRCBoYXNoIHRhYmxlIGVudHJpZXM6
IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcykNClsgICAgOC4zNTQwNDBdIHNvZnR3YXJl
IElPIFRMQiBbbWVtIDB4NTljMDAwMDAtMHg1ZGMwMDAwMF0gKDY0TUIpIG1hcHBlZCBhdCBb
ZmZmZjg4MDA1OWMwMDAwMC1mZmZmODgwMDVkYmZmZmZmXQ0KWyAgICA4LjM2MTA1NV0gTWVt
b3J5OiAxNDMwMDQwSy8xNTcyNDQ4SyBhdmFpbGFibGUgKDEwOTAySyBrZXJuZWwgY29kZSwg
OTg2SyByd2RhdGEsIDQyNTZLIHJvZGF0YSwgMTA4MEsgaW5pdCwgOTU4OEsgYnNzLCAxNDI0
MDhLIHJlc2VydmVkKQ0KWyAgICA4LjM2MTMxOV0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9
MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9NiwgTm9kZXM9MQ0KWyAgICA4LjM2MTM2OV0gSGll
cmFyY2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi4NClsgICAgOC4zNjEzNzFdIAlSQ1UgZHlu
dGljay1pZGxlIGdyYWNlLXBlcmlvZCBhY2NlbGVyYXRpb24gaXMgZW5hYmxlZC4NClsgICAg
OC4zNjEzNzNdIAlBZGRpdGlvbmFsIHBlci1DUFUgaW5mbyBwcmludGVkIHdpdGggc3RhbGxz
Lg0KWyAgICA4LjM2MTM3NV0gCVJDVSByZXN0cmljdGluZyBDUFVzIGZyb20gTlJfQ1BVUz04
IHRvIG5yX2NwdV9pZHM9Ni4NClsgICAgOC4zNjE0MDVdIE5SX0lSUVM6NDM1MiBucl9pcnFz
OjEyNzIgMTYNClsgICAgOC4zNjE1MDRdIHhlbjpldmVudHM6IFVzaW5nIEZJRk8tYmFzZWQg
QUJJDQpbICAgIDguMzYxNTExXSB4ZW46IHNjaSBvdmVycmlkZTogZ2xvYmFsX2lycT05IHRy
aWdnZXI9MCBwb2xhcml0eT0xDQpbICAgIDguMzYxNTE0XSB4ZW46IHJlZ2lzdGVyaW5nIGdz
aSA5IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgIDguMzYxNTQ5XSB4ZW46IC0tPiBw
aXJxPTkgLT4gaXJxPTkgKGdzaT05KQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTddIElP
QVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg2LTkgLT4gMHg2MCAtPiBJUlEgOSBN
b2RlOjEgQWN0aXZlOjEpDQpbICAgIDguMzYxNjAwXSB4ZW46IGFjcGkgc2NpIDkNClsgICAg
OC4zNjE2MDZdIHhlbjogLS0+IHBpcnE9MSAtPiBpcnE9MSAoZ3NpPTEpDQpbICAgIDguMzYx
NjEyXSB4ZW46IC0tPiBwaXJxPTIgLT4gaXJxPTIgKGdzaT0yKQ0KWyAgICA4LjM2MTYxOF0g
eGVuOiAtLT4gcGlycT0zIC0+IGlycT0zIChnc2k9MykNClsgICAgOC4zNjE2MjRdIHhlbjog
LS0+IHBpcnE9NCAtPiBpcnE9NCAoZ3NpPTQpDQpbICAgIDguMzYxNjMwXSB4ZW46IC0tPiBw
aXJxPTUgLT4gaXJxPTUgKGdzaT01KQ0KWyAgICA4LjM2MTYzNV0geGVuOiAtLT4gcGlycT02
IC0+IGlycT02IChnc2k9NikNClsgICAgOC4zNjE2NDFdIHhlbjogLS0+IHBpcnE9NyAtPiBp
cnE9NyAoZ3NpPTcpDQpbICAgIDguMzYxNjUwXSB4ZW46IC0tPiBwaXJxPTggLT4gaXJxPTgg
KGdzaT04KQ0KWyAgICA4LjM2MTY1N10geGVuOiAtLT4gcGlycT0xMCAtPiBpcnE9MTAgKGdz
aT0xMCkNClsgICAgOC4zNjE2NjNdIHhlbjogLS0+IHBpcnE9MTEgLT4gaXJxPTExIChnc2k9
MTEpDQpbICAgIDguMzYxNjY5XSB4ZW46IC0tPiBwaXJxPTEyIC0+IGlycT0xMiAoZ3NpPTEy
KQ0KWyAgICA4LjM2MTY3NV0geGVuOiAtLT4gcGlycT0xMyAtPiBpcnE9MTMgKGdzaT0xMykN
ClsgICAgOC4zNjE2ODFdIHhlbjogLS0+IHBpcnE9MTQgLT4gaXJxPTE0IChnc2k9MTQpDQpb
ICAgIDguMzYxNjg3XSB4ZW46IC0tPiBwaXJxPTE1IC0+IGlycT0xNSAoZ3NpPTE1KQ0KWyAg
ICA4LjM2MTgwOV0gQ29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgyNQ0KWyAgICA4
LjM2MTgxNF0gY29uc29sZSBbdHR5MF0gZW5hYmxlZA0KWyAgICA4LjM2MTgxOV0gYm9vdGNv
bnNvbGUgW3hlbmJvb3QwXSBkaXNhYmxlZA0KWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgY3B1c2V0DQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dy
b3VwIHN1YnN5cyBjcHUNClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vi
c3lzIGNwdWFjY3QNClsgICAgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgKHJvb3RAc2VydmVlcnN0ZXJ0amUpIChnY2MgdmVyc2lvbiA0
LjcuMiAoRGViaWFuIDQuNy4yLTUpICkgIzEgU01QIFR1ZSBKYW4gNyAxMDowMjo1NSBDRVQg
MjAxNA0KWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiByb290PS9kZXYvbWFwcGVyL3Nl
cnZlZXJzdGVydGplLXJvb3Qgcm8gdmVyYm9zZSBlYXJseXByaW50az14ZW4gbWVtPTE1MzZN
IGNvbnNvbGU9aHZjMCBjb25zb2xlPXR0eTAgdmdhPTc5NCB2aWRlbz12ZXNhZmIgYWNwaV9l
bmZvcmNlX3Jlc291cmNlcz1sYXggbWF4X2xvb3A9MzAgbG9vcF9tYXhfcGFydD0xMCB4ZW4t
cGNpYmFjay5oaWRlPSgwMzowNi4wKSgwNDowMC4qKSgwNjowMS4qKSgwNzowMC4qKSgwODow
MC4qKSgwYzowMC4qKSBkZWJ1ZyBsb2dsZXZlbD0xMCBub21vZGVzZXQNClsgICAgMC4wMDAw
MDBdIEZyZWVpbmcgOTktMTAwIHBmbiByYW5nZTogMTAzIHBhZ2VzIGZyZWVkDQpbICAgIDAu
MDAwMDAwXSAxLTEgbWFwcGluZyBvbiA5OS0+MTAwDQpbICAgIDAuMDAwMDAwXSAxLTEgbWFw
cGluZyBvbiA5ZmY5MC0+MTAwMDAwDQpbICAgIDAuMDAwMDAwXSBSZWxlYXNlZCAxMDMgcGFn
ZXMgb2YgdW51c2VkIG1lbW9yeQ0KWyAgICAwLjAwMDAwMF0gU2V0IDM5MzQzMSBwYWdlKHMp
IHRvIDEtMSBtYXBwaW5nDQpbICAgIDAuMDAwMDAwXSBQb3B1bGF0aW5nIDYwMDAwLTYwMDY3
IHBmbiByYW5nZTogMTAzIHBhZ2VzIGFkZGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBCSU9T
LXByb3ZpZGVkIHBoeXNpY2FsIFJBTSBtYXA6DQpbICAgIDAuMDAwMDAwXSBYZW46IFttZW0g
MHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDAwMDA5OGZmZl0gdXNhYmxlDQpbICAgIDAu
MDAwMDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDk5ODAwLTB4MDAwMDAwMDAwMDBmZmZm
Zl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwMDAxMDAw
MDAtMHgwMDAwMDAwMDYwMDY2ZmZmXSB1c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjogW21l
bSAweDAwMDAwMDAwNjAwNjcwMDAtMHgwMDAwMDAwMDlmZjhmZmZmXSB1bnVzYWJsZQ0KWyAg
ICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDA5ZmY5MDAwMC0weDAwMDAwMDAwOWZm
OWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAwMDAwMDA5
ZmY5ZTAwMC0weDAwMDAwMDAwOWZmZGZmZmZdIEFDUEkgTlZTDQpbICAgIDAuMDAwMDAwXSBY
ZW46IFttZW0gMHgwMDAwMDAwMDlmZmUwMDAwLTB4MDAwMDAwMDA5ZmZmZmZmZl0gcmVzZXJ2
ZWQNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMDAwZmVlMDAwMDAtMHgwMDAw
MDAwMGZlZWZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gWGVuOiBbbWVtIDB4MDAw
MDAwMDBmZmUwMDAwMC0weDAwMDAwMDAwZmZmZmZmZmZdIHJlc2VydmVkDQpbICAgIDAuMDAw
MDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMTAwMDAwMDAwLTB4MDAwMDAwMDU1ZmZmZmZmZl0g
dW51c2FibGUNClsgICAgMC4wMDAwMDBdIFhlbjogW21lbSAweDAwMDAwMGZkMDAwMDAwMDAt
MHgwMDAwMDBmZmZmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gYm9vdGNvbnNv
bGUgW3hlbmJvb3QwXSBlbmFibGVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiByZW1vdmUgW21l
bSAweDYwMDAwMDAwLTB4ZmZmZmZmZmZmZmZmZmZmZV0gdXNhYmxlDQpbICAgIDAuMDAwMDAw
XSBOWCAoRXhlY3V0ZSBEaXNhYmxlKSBwcm90ZWN0aW9uOiBhY3RpdmUNClsgICAgMC4wMDAw
MDBdIGU4MjA6IHVzZXItZGVmaW5lZCBwaHlzaWNhbCBSQU0gbWFwOg0KWyAgICAwLjAwMDAw
MF0gdXNlcjogW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgwMDAwMDAwMDAwMDk4ZmZmXSB1
c2FibGUNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0gMHgwMDAwMDAwMDAwMDk5ODAwLTB4
MDAwMDAwMDAwMDBmZmZmZl0gcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdIHVzZXI6IFttZW0g
MHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA1ZmZmZmZmZl0gdXNhYmxlDQpbICAgIDAu
MDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA2MDA2NzAwMC0weDAwMDAwMDAwOWZmOGZm
ZmZdIHVudXNhYmxlDQpbICAgIDAuMDAwMDAwXSB1c2VyOiBbbWVtIDB4MDAwMDAwMDA5ZmY5
MDAwMC0weDAwMDAwMDAwOWZmOWRmZmZdIEFDUEkgZGF0YQ0KWyAgICAwLjAwMDAwMF0gdXNl
cjogW21lbSAweDAwMDAwMDAwOWZmOWUwMDAtMHgwMDAwMDAwMDlmZmRmZmZmXSBBQ1BJIE5W
Uw0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwOWZmZTAwMDAtMHgwMDAw
MDAwMDlmZmZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAw
MDAwMDAwZmVlMDAwMDAtMHgwMDAwMDAwMGZlZWZmZmZmXSByZXNlcnZlZA0KWyAgICAwLjAw
MDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAwZmZlMDAwMDAtMHgwMDAwMDAwMGZmZmZmZmZm
XSByZXNlcnZlZA0KWyAgICAwLjAwMDAwMF0gdXNlcjogW21lbSAweDAwMDAwMDAxMDAwMDAw
MDAtMHgwMDAwMDAwNTVmZmZmZmZmXSB1bnVzYWJsZQ0KWyAgICAwLjAwMDAwMF0gdXNlcjog
W21lbSAweDAwMDAwMGZkMDAwMDAwMDAtMHgwMDAwMDBmZmZmZmZmZmZmXSByZXNlcnZlZA0K
WyAgICAwLjAwMDAwMF0gU01CSU9TIDIuNSBwcmVzZW50Lg0KWyAgICAwLjAwMDAwMF0gRE1J
OiBNU0kgTVMtNzY0MC84OTBGWEEtR0Q3MCAoTVMtNzY0MCkgICwgQklPUyBWMS44QjEgMDkv
MTMvMjAxMA0KWyAgICAwLjAwMDAwMF0gZTgyMDogdXBkYXRlIFttZW0gMHgwMDAwMDAwMC0w
eDAwMDAwZmZmXSB1c2FibGUgPT0+IHJlc2VydmVkDQpbICAgIDAuMDAwMDAwXSBlODIwOiBy
ZW1vdmUgW21lbSAweDAwMGEwMDAwLTB4MDAwZmZmZmZdIHVzYWJsZQ0KWyAgICAwLjAwMDAw
MF0gTm8gQUdQIGJyaWRnZSBmb3VuZA0KWyAgICAwLjAwMDAwMF0gZTgyMDogbGFzdF9wZm4g
PSAweDYwMDAwIG1heF9hcmNoX3BmbiA9IDB4NDAwMDAwMDAwDQpbICAgIDAuMDAwMDAwXSBT
Y2FubmluZyAxIGFyZWFzIGZvciBsb3cgbWVtb3J5IGNvcnJ1cHRpb24NClsgICAgMC4wMDAw
MDBdIEJhc2UgbWVtb3J5IHRyYW1wb2xpbmUgYXQgW2ZmZmY4ODAwMDAwOTMwMDBdIDkzMDAw
IHNpemUgMjQ1NzYNClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0g
MHgwMDAwMDAwMC0weDAwMGZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgwMDAwMDAw
MC0weDAwMGZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBw
aW5nOiBbbWVtIDB4NWZlMDAwMDAtMHg1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdICBbbWVt
IDB4NWZlMDAwMDAtMHg1ZmZmZmZmZl0gcGFnZSA0aw0KWyAgICAwLjAwMDAwMF0gQlJLIFsw
eDAyZDgyMDAwLCAweDAyZDgyZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBCUksgWzB4
MDJkODMwMDAsIDB4MDJkODNmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIGluaXRfbWVt
b3J5X21hcHBpbmc6IFttZW0gMHg1YzAwMDAwMC0weDVmZGZmZmZmXQ0KWyAgICAwLjAwMDAw
MF0gIFttZW0gMHg1YzAwMDAwMC0weDVmZGZmZmZmXSBwYWdlIDRrDQpbICAgIDAuMDAwMDAw
XSBCUksgWzB4MDJkODQwMDAsIDB4MDJkODRmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBd
IEJSSyBbMHgwMmQ4NTAwMCwgMHgwMmQ4NWZmZl0gUEdUQUJMRQ0KWyAgICAwLjAwMDAwMF0g
QlJLIFsweDAyZDg2MDAwLCAweDAyZDg2ZmZmXSBQR1RBQkxFDQpbICAgIDAuMDAwMDAwXSBC
UksgWzB4MDJkODcwMDAsIDB4MDJkODdmZmZdIFBHVEFCTEUNClsgICAgMC4wMDAwMDBdIGlu
aXRfbWVtb3J5X21hcHBpbmc6IFttZW0gMHgwMDEwMDAwMC0weDViZmZmZmZmXQ0KWyAgICAw
LjAwMDAwMF0gIFttZW0gMHgwMDEwMDAwMC0weDViZmZmZmZmXSBwYWdlIDRrDQpbICAgIDAu
MDAwMDAwXSBSQU1ESVNLOiBbbWVtIDB4MDMxOGYwMDAtMHgwM2ZhN2ZmZl0NClsgICAgMC4w
MDAwMDBdIEFDUEk6IFJTRFAgMDAwMDAwMDAwMDBmYjEwMCAwMDAwMTQgKHYwMCBBQ1BJQU0p
DQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RUIDAwMDAwMDAwOWZmOTAwMDAgMDAwMDQ4ICh2
MDEgTVNJICAgIE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAw
MDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwOWZmOTAyMDAgMDAwMDg0ICh2MDEgNzY0ME1TIEE3
NjQwMTAwIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBE
U0RUIDAwMDAwMDAwOWZmOTA1ZTAgMDA5NDI3ICh2MDEgIEE3NjQwIEE3NjQwMTAwIDAwMDAw
MTAwIElOVEwgMjAwNTExMTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNTIDAwMDAwMDAw
OWZmOWUwMDAgMDAwMDQwDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBBUElDIDAwMDAwMDAwOWZm
OTAzOTAgMDAwMDg4ICh2MDEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQgMDAwMDAw
OTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBNQ0ZHIDAwMDAwMDAwOWZmOTA0MjAgMDAwMDND
ICh2MDEgNzY0ME1TIE9FTU1DRkcgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAu
MDAwMDAwXSBBQ1BJOiBTTElDIDAwMDAwMDAwOWZmOTA0NjAgMDAwMTc2ICh2MDEgTVNJICAg
IE9FTVNMSUMgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBPRU1CIDAwMDAwMDAwOWZmOWUwNDAgMDAwMDcyICh2MDEgNzY0ME1TIEE3NjQwMTAwIDIw
MTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTUkFUIDAwMDAw
MDAwOWZmOWE1ZTAgMDAwMTA4ICh2MDMgQU1EICAgIEZBTV9GXzEwIDAwMDAwMDAyIEFNRCAg
MDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIDAwMDAwMDAwOWZmOWE2ZjAg
MDAwMDM4ICh2MDEgNzY0ME1TIE9FTUhQRVQgIDIwMTAwOTEzIE1TRlQgMDAwMDAwOTcpDQpb
ICAgIDAuMDAwMDAwXSBBQ1BJOiBJVlJTIDAwMDAwMDAwOWZmOWE3MzAgMDAwMTA4ICh2MDEg
IEFNRCAgICAgUkQ4OTBTIDAwMjAyMDMxIEFNRCAgMDAwMDAwMDApDQpbICAgIDAuMDAwMDAw
XSBBQ1BJOiBTU0RUIDAwMDAwMDAwOWZmOWE4NDAgMDAwREE0ICh2MDEgQSBNIEkgIFBPV0VS
Tk9XIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDEpDQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBMb2Nh
bCBBUElDIGFkZHJlc3MgMHhmZWUwMDAwMA0KWyAgICAwLjAwMDAwMF0gTlVNQSB0dXJuZWQg
b2ZmDQpbICAgIDAuMDAwMDAwXSBGYWtpbmcgYSBub2RlIGF0IFttZW0gMHgwMDAwMDAwMDAw
MDAwMDAwLTB4MDAwMDAwMDA1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdIEluaXRtZW0gc2V0
dXAgbm9kZSAwIFttZW0gMHgwMDAwMDAwMC0weDVmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0g
ICBOT0RFX0RBVEEgW21lbSAweDVmZDFjMDAwLTB4NWZkMjZmZmZdDQpbICAgIDAuMDAwMDAw
XSBab25lIHJhbmdlczoNClsgICAgMC4wMDAwMDBdICAgRE1BICAgICAgW21lbSAweDAwMDAx
MDAwLTB4MDBmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgIERNQTMyICAgIFttZW0gMHgwMTAw
MDAwMC0weGZmZmZmZmZmXQ0KWyAgICAwLjAwMDAwMF0gICBOb3JtYWwgICBlbXB0eQ0KWyAg
ICAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5vZGUNClsgICAgMC4w
MDAwMDBdIEVhcmx5IG1lbW9yeSBub2RlIHJhbmdlcw0KWyAgICAwLjAwMDAwMF0gICBub2Rl
ICAgMDogW21lbSAweDAwMDAxMDAwLTB4MDAwOThmZmZdDQpbICAgIDAuMDAwMDAwXSAgIG5v
ZGUgICAwOiBbbWVtIDB4MDAxMDAwMDAtMHg1ZmZmZmZmZl0NClsgICAgMC4wMDAwMDBdIE9u
IG5vZGUgMCB0b3RhbHBhZ2VzOiAzOTMxMTINClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6
IDY0IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTog
MjEgcGFnZXMgcmVzZXJ2ZWQNClsgICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDM5OTIgcGFn
ZXMsIExJRk8gYmF0Y2g6MA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiA2MDgwIHBh
Z2VzIHVzZWQgZm9yIG1lbW1hcA0KWyAgICAwLjAwMDAwMF0gICBETUEzMiB6b25lOiAzODkx
MjAgcGFnZXMsIExJRk8gYmF0Y2g6MzENClsgICAgMC4wMDAwMDBdIEFDUEk6IFBNLVRpbWVy
IElPIFBvcnQ6IDB4ODA4WyAgIDEwLjY5NTg2N10gcGNpYmFjayAwMDAwOjA0OjAwLjA6IHJl
c3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4M2MgKHdhcyAweDEwMCwgd3JpdGlu
ZyAweDEwYSkNClsgICAxMC42OTYxMDhdIHBjaWJhY2sgMDAwMDowNDowMC4wOiByZXN0b3Jp
bmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEwICh3YXMgMHg0LCB3cml0aW5nIDB4Zjk3
ZmUwMDQpDQpbICAgMTAuNjk2MzM2XSBwY2liYWNrIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5n
IGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhjICh3YXMgMHgwLCB3cml0aW5nIDB4MTApDQpb
ICAgMTAuNjk2NTUwXSBwY2liYWNrIDAwMDA6MDQ6MDAuMDogcmVzdG9yaW5nIGNvbmZpZyBz
cGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAxMDIpDQpb
ICAgMTAuNjk2OTAyXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzOSB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjY5NzAzOF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozOQ0KWyAg
IDEwLjcyMjU3Nl0gcGNpYmFjayAwMDAwOjA2OjAxLjI6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzIyOTYxXSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzOCB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjcyMzEwMl0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozOA0KWyAg
IDEwLjc0OTI0MV0gcGNpYmFjayAwMDAwOjA2OjAxLjE6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzQ5NjM1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzNyB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjc0OTc3Ml0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDozNw0KWyAg
IDEwLjc3NTkxN10gcGNpYmFjayAwMDAwOjA2OjAxLjA6IHJlc3RvcmluZyBjb25maWcgc3Bh
Y2UgYXQgb2Zmc2V0IDB4NCAod2FzIDB4MjEwMDAwMCwgd3JpdGluZyAweDIxMDAxMTIpDQpb
ICAgMTAuNzc2MzA5XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAzMyB0cmlnZ2VyaW5nIDAgcG9s
YXJpdHkgMQ0KWyAgIDEwLjc3NjQ2MF0geGVuOiAtLT4gcGlycT0zMyAtPiBpcnE9MzMgKGdz
aT0zMykNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjE5XSBJT0FQSUNbMV06IFNldCBQQ0kg
cm91dGluZyBlbnRyeSAoNy05IC0+IDB4YTkgLT4gSVJRIDMzIE1vZGU6MSBBY3RpdmU6MSkN
ClsgICAxMC44MDI2MzFdIHBjaWJhY2sgMDAwMDowNzowMC4wOiBlbmFibGluZyBkZXZpY2Ug
KDAwMDAgLT4gMDAwMykNClsgICAxMC44MDI4MTBdIHhlbjogcmVnaXN0ZXJpbmcgZ3NpIDMy
IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTAuODAyOTU3XSB4ZW46IC0tPiBwaXJx
PTMyIC0+IGlycT0zMiAoZ3NpPTMyKQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6MTldIElP
QVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTggLT4gMHhiMSAtPiBJUlEgMzIg
TW9kZToxIEFjdGl2ZToxKQ0KWyAgIDEwLjgyOTIzOV0geGVuOiByZWdpc3RlcmluZyBnc2kg
NDcgdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMC44MjkzOTVdIHhlbjogLS0+IHBp
cnE9NDcgLT4gaXJxPTQ3IChnc2k9NDcpDQooWEVOKSBbMjAxNC0wMS0wNyAxMTowOToxOV0g
SU9BUElDWzFdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDctMjMgLT4gMHhiOSAtPiBJUlEg
NDcgTW9kZToxIEFjdGl2ZToxKQ0KWyAgIDExLjgzOTE5MF0gcGNpYmFjayAwMDAwOjA4OjAw
LjA6IHJlc3RvcmluZyBjb25maWcgc3BhY2UgYXQgb2Zmc2V0IDB4M2MgKHdhcyAweDEwMCwg
d3JpdGluZyAweDEwYSkNClsgICAxMS44Mzk0MzddIHBjaWJhY2sgMDAwMDowODowMC4wOiBy
ZXN0b3JpbmcgY29uZmlnIHNwYWNlIGF0IG9mZnNldCAweDEwICh3YXMgMHg0LCB3cml0aW5n
IDB4ZjlhMDAwMDQpDQpbICAgMTEuODM5NjY3XSBwY2liYWNrIDAwMDA6MDg6MDAuMDogcmVz
dG9yaW5nIGNvbmZpZyBzcGFjZSBhdCBvZmZzZXQgMHhjICh3YXMgMHgwLCB3cml0aW5nIDB4
MTApDQpbICAgMTEuODQ2Njc3XSBwY2liYWNrIDAwMDA6MDg6MDAuMDogcmVzdG9yaW5nIGNv
bmZpZyBzcGFjZSBhdCBvZmZzZXQgMHg0ICh3YXMgMHgxMDAwMDAsIHdyaXRpbmcgMHgxMDAx
MDYpDQpbICAgMTEuODUzNzg1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAyOSB0cmlnZ2VyaW5n
IDAgcG9sYXJpdHkgMQ0KWyAgIDExLjg2MDc4Ml0geGVuOiAtLT4gcGlycT0yOSAtPiBpcnE9
MjkgKGdzaT0yOSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjIwXSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy01IC0+IDB4YzEgLT4gSVJRIDI5IE1vZGU6MSBBY3Rp
dmU6MSkNClsgICAxMS44OTI2MjldIHBjaWJhY2sgMDAwMDowYzowMC4wOiBlbmFibGluZyBk
ZXZpY2UgKDAwMDAgLT4gMDAwMykNClsgICAxMS44OTk2NzBdIHhlbjogcmVnaXN0ZXJpbmcg
Z3NpIDI4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTEuOTA2NjkzXSB4ZW46IC0t
PiBwaXJxPTI4IC0+IGlycT0yOCAoZ3NpPTI4KQ0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MDk6
MjBdIElPQVBJQ1sxXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg3LTQgLT4gMHhjOSAtPiBJ
UlEgMjggTW9kZToxIEFjdGl2ZToxKQ0KWyAgIDExLjkzOTQwOF0geGVuX3BjaWJhY2s6IGJh
Y2tlbmQgaXMgdnBjaQ0KWyAgIDExLjk0NzM2OV0geGVuX2FjcGlfcHJvY2Vzc29yOiBVcGxv
YWRpbmcgWGVuIHByb2Nlc3NvciBQTSBpbmZvDQpbICAgMTEuOTU3MTM3XSBTZXJpYWw6IDgy
NTAvMTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkDQpbICAgMTEu
OTY3NDEwXSBocGV0X2FjcGlfYWRkOiBubyBhZGRyZXNzIG9yIGlycXMgaW4gX0NSUw0KWyAg
IDExLjk3NTA5Nl0gTGludXggYWdwZ2FydCBpbnRlcmZhY2UgdjAuMTAzDQpbICAgMTEuOTgz
MTM0XSBIYW5nY2hlY2s6IHN0YXJ0aW5nIGhhbmdjaGVjayB0aW1lciAwLjkuMSAodGljayBp
cyAxODAgc2Vjb25kcywgbWFyZ2luIGlzIDYwIHNlY29uZHMpLg0KWyAgIDExLjk5MDM2OV0g
SGFuZ2NoZWNrOiBVc2luZyBnZXRyYXdtb25vdG9uaWMoKS4NClsgICAxMS45OTc2MzNdIFtk
cm1dIEluaXRpYWxpemVkIGRybSAxLjEuMCAyMDA2MDgxMA0KWyAgIDEyLjAwNTAwN10gW2Ry
bV0gVkdBQ09OIGRpc2FibGUgcmFkZW9uIGtlcm5lbCBtb2Rlc2V0dGluZy4NClsgICAxMi4w
MTIxMDddIFtkcm06cmFkZW9uX2luaXRdICpFUlJPUiogTm8gVU1TIHN1cHBvcnQgaW4gcmFk
ZW9uIG1vZHVsZSENClsgICAxMi4wMjcyODNdIGJyZDogbW9kdWxlIGxvYWRlZA0KWyAgIDEy
LjA1MDM4NF0gbG9vcDogbW9kdWxlIGxvYWRlZA0KWyAgIDEyLjA1ODU5NV0gYWhjaSAwMDAw
OjAwOjExLjA6IHZlcnNpb24gMy4wDQpbICAgMTIuMDY1NzcwXSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSAxOSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjA3MjY5OF0geGVuOiAt
LT4gcGlycT0xOSAtPiBpcnE9MTkgKGdzaT0xOSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjIxXSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNi0xOSAtPiAweGQxIC0+
IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTIuMDc5ODUxXSBhaGNpIDAwMDA6MDA6
MTEuMDogQUhDSSAwMDAxLjAyMDAgMzIgc2xvdHMgNiBwb3J0cyA2IEdicHMgMHgzZiBpbXBs
IFNBVEEgbW9kZQ0KWyAgIDEyLjA4NzAxNF0gYWhjaSAwMDAwOjAwOjExLjA6IGZsYWdzOiA2
NGJpdCBuY3Egc250ZiBpbGNrIHBtIGxlZCBjbG8gcG1wIHBpbyBzbHVtIHBhcnQgDQpbICAg
MTIuMDk3MzYzXSBzY3NpMCA6IGFoY2kNClsgICAxMi4xMDU0MTldIHNjc2kxIDogYWhjaQ0K
WyAgIDEyLjExMjY3MF0gc2NzaTIgOiBhaGNpDQpbICAgMTIuMTE5ODM5XSBzY3NpMyA6IGFo
Y2kNClsgICAxMi4xMjY5NjhdIHNjc2k0IDogYWhjaQ0KWyAgIDEyLjEzMzczOF0gc2NzaTUg
OiBhaGNpDQpbICAgMTIuMTQwMzA4XSBhdGExOiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0x
MDI0QDB4Zjk1ZmYwMDAgcG9ydCAweGY5NWZmMTAwIGlycSAxMjcNClsgICAxMi4xNDY3MDhd
IGF0YTI6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTEwMjRAMHhmOTVmZjAwMCBwb3J0IDB4
Zjk1ZmYxODAgaXJxIDEyNw0KWyAgIDEyLjE1MzAxOF0gYXRhMzogU0FUQSBtYXggVURNQS8x
MzMgYWJhciBtMTAyNEAweGY5NWZmMDAwIHBvcnQgMHhmOTVmZjIwMCBpcnEgMTI3DQpbICAg
MTIuMTU5MTkyXSBhdGE0OiBTQVRBIG1heCBVRE1BLzEzMyBhYmFyIG0xMDI0QDB4Zjk1ZmYw
MDAgcG9ydCAweGY5NWZmMjgwIGlycSAxMjcNClsgICAxMi4xNjU0MzNdIGF0YTU6IFNBVEEg
bWF4IFVETUEvMTMzIGFiYXIgbTEwMjRAMHhmOTVmZjAwMCBwb3J0IDB4Zjk1ZmYzMDAgaXJx
IDEyNw0KWyAgIDEyLjE3MTUwMF0gYXRhNjogU0FUQSBtYXggVURNQS8xMzMgYWJhciBtMTAy
NEAweGY5NWZmMDAwIHBvcnQgMHhmOTVmZjM4MCBpcnEgMTI3DQpbICAgMTIuMTc4NzA2XSB0
dW46IFVuaXZlcnNhbCBUVU4vVEFQIGRldmljZSBkcml2ZXIsIDEuNg0KWyAgIDEyLjE4NDY3
MF0gdHVuOiAoQykgMTk5OS0yMDA0IE1heCBLcmFzbnlhbnNreSA8bWF4a0BxdWFsY29tbS5j
b20+DQpbICAgMTIuMTkwODU4XSBlMTAwMDogSW50ZWwoUikgUFJPLzEwMDAgTmV0d29yayBE
cml2ZXIgLSB2ZXJzaW9uIDcuMy4yMS1rOC1OQVBJDQpbICAgMTIuMTk2ODc5XSBlMTAwMDog
Q29weXJpZ2h0IChjKSAxOTk5LTIwMDYgSW50ZWwgQ29ycG9yYXRpb24uDQpbICAgMTIuMjAz
MzAyXSBlMTAwMGU6IEludGVsKFIpIFBSTy8xMDAwIE5ldHdvcmsgRHJpdmVyIC0gMi4zLjIt
aw0KWyAgIDEyLjIwOTI5NF0gZTEwMDBlOiBDb3B5cmlnaHQoYykgMTk5OSAtIDIwMTMgSW50
ZWwgQ29ycG9yYXRpb24uDQpbICAgMTIuMjE1NTM1XSBpZ2I6IEludGVsKFIpIEdpZ2FiaXQg
RXRoZXJuZXQgTmV0d29yayBEcml2ZXIgLSB2ZXJzaW9uIDUuMC41LWsNClsgICAxMi4yMjE2
MzFdIGlnYjogQ29weXJpZ2h0IChjKSAyMDA3LTIwMTMgSW50ZWwgQ29ycG9yYXRpb24uDQpb
ICAgMTIuMjI3NzMwXSBpZ2J2ZjogSW50ZWwoUikgR2lnYWJpdCBWaXJ0dWFsIEZ1bmN0aW9u
IE5ldHdvcmsgRHJpdmVyIC0gdmVyc2lvbiAyLjAuMi1rDQpbICAgMTIuMjMzNzAwXSBpZ2J2
ZjogQ29weXJpZ2h0IChjKSAyMDA5IC0gMjAxMiBJbnRlbCBDb3Jwb3JhdGlvbi4NClsgICAx
Mi4yMzk4MjldIHI4MTY5IEdpZ2FiaXQgRXRoZXJuZXQgZHJpdmVyIDIuM0xLLU5BUEkgbG9h
ZGVkDQpbICAgMTIuMjQ1Nzk3XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSA0NiB0cmlnZ2VyaW5n
IDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjI1MTYwNl0geGVuOiAtLT4gcGlycT00NiAtPiBpcnE9
NDYgKGdzaT00NikNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5OjIxXSBJT0FQSUNbMV06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yMiAtPiAweDIyIC0+IElSUSA0NiBNb2RlOjEgQWN0
aXZlOjEpDQpbICAgMTIuMjU3MzIzXSByODE2OSAwMDAwOjBiOjAwLjA6IGVuYWJsaW5nIE1l
bS1Xci1JbnZhbA0KWyAgIDEyLjI2MzcxOV0gcjgxNjkgMDAwMDowYjowMC4wIGV0aDA6IFJU
TDgxNjhkLzgxMTFkIGF0IDB4ZmZmZmM5MDAwMDMzYzAwMCwgNDA6NjE6ODY6ZjQ6Njc6ZDks
IFhJRCAwODEwMDBjMCBJUlEgMTI4DQpbICAgMTIuMjY5NTc3XSByODE2OSAwMDAwOjBiOjAw
LjAgZXRoMDoganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIwMCBieXRlcywgdHggY2hlY2tz
dW1taW5nOiBrb10NClsgICAxMi4yNzU0NTRdIHI4MTY5IEdpZ2FiaXQgRXRoZXJuZXQgZHJp
dmVyIDIuM0xLLU5BUEkgbG9hZGVkDQpbICAgMTIuMjgxNDA0XSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSA1MSB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjI4NzIxN10geGVuOiAt
LT4gcGlycT01MSAtPiBpcnE9NTEgKGdzaT01MSkNCihYRU4pIFsyMDE0LTAxLTA3IDExOjA5
OjIxXSBJT0FQSUNbMV06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNy0yNyAtPiAweDMyIC0+
IElSUSA1MSBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTIuMjkyOTk1XSByODE2OSAwMDAwOjBh
OjAwLjA6IGVuYWJsaW5nIE1lbS1Xci1JbnZhbA0KWyAgIDEyLjI5OTgwMl0gcjgxNjkgMDAw
MDowYTowMC4wIGV0aDE6IFJUTDgxNjhkLzgxMTFkIGF0IDB4ZmZmZmM5MDAwMDMzZTAwMCwg
NDA6NjE6ODY6ZjQ6Njc6ZDgsIFhJRCAwODEwMDBjMCBJUlEgMTI5DQpbICAgMTIuMzA1ODIw
XSByODE2OSAwMDAwOjBhOjAwLjAgZXRoMToganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIw
MCBieXRlcywgdHggY2hlY2tzdW1taW5nOiBrb10NClsgICAxMi4zMTE5NzRdIHhlbl9uZXRm
cm9udDogSW5pdGlhbGlzaW5nIFhlbiB2aXJ0dWFsIGV0aGVybmV0IGRyaXZlcg0KWyAgIDEy
LjMxODY0NF0gZWhjaV9oY2Q6IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIg
KEVIQ0kpIERyaXZlcg0KWyAgIDEyLjMyNDgwNl0gZWhjaS1wY2k6IEVIQ0kgUENJIHBsYXRm
b3JtIGRyaXZlcg0KWyAgIDEyLjMzMDkzN10geGVuOiByZWdpc3RlcmluZyBnc2kgMTcgdHJp
Z2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMi4zMzY5MTldIEFscmVhZHkgc2V0dXAgdGhl
IEdTSSA6MTcNClsgICAxMi4zNDI5NzVdIFFVSVJLOiBFbmFibGUgQU1EIFBMTCBmaXgNClsg
ICAxMi4zNDg5MDhdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZW5hYmxpbmcgYnVzIG1hc3Rl
cmluZw0KWyAgIDEyLjM1NDg5OF0gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0KWyAgIDEyLjM2MTM2MF0gZWhjaS1wY2kgMDAwMDowMDoxMi4yOiBuZXcg
VVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDENClsgICAxMi4zNjcz
MDddIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogYXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1
ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kDQpbICAgMTIuMzczMzgwXSBlaGNp
LXBjaSAwMDAwOjAwOjEyLjI6IGRlYnVnIHBvcnQgMQ0KWyAgIDEyLjM3OTQyMl0gZWhjaS1w
Y2kgMDAwMDowMDoxMi4yOiBlbmFibGluZyBNZW0tV3ItSW52YWwNClsgICAxMi4zODU0Nzdd
IGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBpbyBtZW0gMHhmOTVmZjQwMA0KWyAg
IDEyLjM5NjAwNV0gcmFuZG9tOiBub25ibG9ja2luZyBwb29sIGlzIGluaXRpYWxpemVkDQpb
ICAgMTIuMzk5MTE1XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IFVTQiAyLjAgc3RhcnRlZCwg
RUhDSSAxLjAwDQpbICAgMTIuMzk5MzU0XSB1c2IgdXNiMTogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyDQpbICAgMTIuMzk5MzU2XSB1c2Ig
dXNiMTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTENClsgICAxMi4zOTkzNTddIHVzYiB1c2IxOiBQcm9kdWN0OiBFSENJIEhvc3Qg
Q29udHJvbGxlcg0KWyAgIDEyLjM5OTM1OF0gdXNiIHVzYjE6IE1hbnVmYWN0dXJlcjogTGlu
dXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgZWhjaV9oY2QNClsgICAxMi4zOTkz
NTldIHVzYiB1c2IxOiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTIuMg0KWyAgIDEyLjQwMDgy
NV0gaHViIDEtMDoxLjA6IFVTQiBodWIgZm91bmQNClsgICAxMi40MDA4NzhdIGh1YiAxLTA6
MS4wOiA1IHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNDAxMzg5XSB4ZW46IHJlZ2lzdGVyaW5n
IGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjQwMTM5MV0gQWxyZWFk
eSBzZXR1cCB0aGUgR1NJIDoxNw0KWyAgIDEyLjQwMTQwOF0gZWhjaS1wY2kgMDAwMDowMDox
My4yOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAgMTIuNDAxNDMyXSBlaGNpLXBjaSAw
MDAwOjAwOjEzLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNDAxOTAwXSBlaGNp
LXBjaSAwMDAwOjAwOjEzLjI6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1
cyBudW1iZXIgMg0KWyAgIDEyLjQwMTkyMF0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBhcHBs
eWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBFSENJIGR1bW15IHFoIHdvcmthcm91
bmQNClsgICAxMi40MDE5NzFdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogZGVidWcgcG9ydCAx
DQpbICAgMTIuNDAyMTM1XSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IGVuYWJsaW5nIE1lbS1X
ci1JbnZhbA0KWyAgIDEyLjQwMjE0Nl0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBpcnEgMTcs
IGlvIG1lbSAweGY5NWZmODAwDQpbICAgMTIuNDA5MTIwXSBlaGNpLXBjaSAwMDAwOjAwOjEz
LjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwDQpbICAgMTIuNDA5MTkzXSB1c2IgdXNi
MjogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAy
DQpbICAgMTIuNDA5MTk0XSB1c2IgdXNiMjogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZy
PTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi40MDkxOTVdIHVzYiB1c2Iy
OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcg0KWyAgIDEyLjQwOTE5Nl0gdXNiIHVz
YjI6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsg
ZWhjaV9oY2QNClsgICAxMi40MDkxOTddIHVzYiB1c2IyOiBTZXJpYWxOdW1iZXI6IDAwMDA6
MDA6MTMuMg0KWyAgIDEyLjQwOTkxN10gaHViIDItMDoxLjA6IFVTQiBodWIgZm91bmQNClsg
ICAxMi40MDk5MzBdIGh1YiAyLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNDEw
Mjk1XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxNyB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0K
WyAgIDEyLjQxMDI5N10gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxNw0KWyAgIDEyLjQxMDMx
NV0gZWhjaS1wY2kgMDAwMDowMDoxNi4yOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAg
MTIuNDEwMzM5XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IEVIQ0kgSG9zdCBDb250cm9sbGVy
DQpbICAgMTIuNDEwODIxXSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IG5ldyBVU0IgYnVzIHJl
Z2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1iZXIgMw0KWyAgIDEyLjQxMDg0MF0gZWhjaS1w
Y2kgMDAwMDowMDoxNi4yOiBhcHBseWluZyBBTUQgU0I3MDAvU0I4MDAvSHVkc29uLTIvMyBF
SENJIGR1bW15IHFoIHdvcmthcm91bmQNClsgICAxMi40MTA4OTBdIGVoY2ktcGNpIDAwMDA6
MDA6MTYuMjogZGVidWcgcG9ydCAxDQpbICAgMTIuNDExMDM0XSBlaGNpLXBjaSAwMDAwOjAw
OjE2LjI6IGVuYWJsaW5nIE1lbS1Xci1JbnZhbA0KWyAgIDEyLjQxMTA0NV0gZWhjaS1wY2kg
MDAwMDowMDoxNi4yOiBpcnEgMTcsIGlvIG1lbSAweGY5NWZmYzAwDQpbICAgMTIuNDE5MjE0
XSBlaGNpLXBjaSAwMDAwOjAwOjE2LjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAxLjAwDQpb
ICAgMTIuNDE5MzUzXSB1c2IgdXNiMzogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9y
PTFkNmIsIGlkUHJvZHVjdD0wMDAyDQpbICAgMTIuNDE5MzU0XSB1c2IgdXNiMzogTmV3IFVT
QiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsg
ICAxMi40MTkzNTVdIHVzYiB1c2IzOiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcg0K
WyAgIDEyLjQxOTM1Nl0gdXNiIHVzYjM6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJj
Ny0yMDE0MDEwNy14ZW5kZXZlbCsgZWhjaV9oY2QNClsgICAxMi40MTkzNTddIHVzYiB1c2Iz
OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDA6MTYuMg0KWyAgIDEyLjQyMDIzMl0gaHViIDMtMDox
LjA6IFVTQiBodWIgZm91bmQNClsgICAxMi40MjAyNDRdIGh1YiAzLTA6MS4wOiA0IHBvcnRz
IGRldGVjdGVkDQpbICAgMTIuNDIwNzE3XSBvaGNpX2hjZDogVVNCIDEuMSAnT3BlbicgSG9z
dCBDb250cm9sbGVyIChPSENJKSBEcml2ZXINClsgICAxMi40MjA3MThdIG9oY2ktcGNpOiBP
SENJIFBDSSBwbGF0Zm9ybSBkcml2ZXINClsgICAxMi40MjA5MDhdIHhlbjogcmVnaXN0ZXJp
bmcgZ3NpIDE4IHRyaWdnZXJpbmcgMCBwb2xhcml0eSAxDQpbICAgMTIuNDIwOTEwXSBBbHJl
YWR5IHNldHVwIHRoZSBHU0kgOjE4DQpbICAgMTIuNDIwOTMxXSBvaGNpLXBjaSAwMDAwOjAw
OjEyLjA6IGVuYWJsaW5nIGJ1cyBtYXN0ZXJpbmcNClsgICAxMi40MjA5NDVdIG9oY2ktcGNp
IDAwMDA6MDA6MTIuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyDQpbICAgMTIuNDIxNDU2
XSBvaGNpLXBjaSAwMDAwOjAwOjEyLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2ln
bmVkIGJ1cyBudW1iZXIgNA0KWyAgIDEyLjQyMTY0N10gb2hjaS1wY2kgMDAwMDowMDoxMi4w
OiBpcnEgMTgsIGlvIG1lbSAweGY5NWY3MDAwDQpbICAgMTIuNDc2NzI0XSB1c2IgdXNiNDog
TmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQpb
ICAgMTIuNDc2NzI1XSB1c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMs
IFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi40NzY3MjZdIHVzYiB1c2I0OiBQ
cm9kdWN0OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXINClsgICAxMi40NzY3MjddIHVzYiB1
c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwr
IG9oY2lfaGNkDQpbICAgMTIuNDc2NzI4XSB1c2IgdXNiNDogU2VyaWFsTnVtYmVyOiAwMDAw
OjAwOjEyLjANClsgICAxMi40Nzc2NjldIGh1YiA0LTA6MS4wOiBVU0IgaHViIGZvdW5kDQpb
ICAgMTIuNDc3Njg3XSBodWIgNC0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZA0KWyAgIDEyLjQ3
ODAzMV0geGVuOiByZWdpc3RlcmluZyBnc2kgMTggdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDEN
ClsgICAxMi40NzgwMzNdIEFscmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgNClsgICAxMi40Nzgw
NTFdIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogZW5hYmxpbmcgYnVzIG1hc3RlcmluZw0KWyAg
IDEyLjQ3ODA1N10gb2hjaS1wY2kgMDAwMDowMDoxMy4wOiBPSENJIFBDSSBob3N0IGNvbnRy
b2xsZXINClsgICAxMi40Nzg1MjldIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogbmV3IFVTQiBi
dXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA1DQpbICAgMTIuNDc4NjY3XSBv
aGNpLXBjaSAwMDAwOjAwOjEzLjA6IGlycSAxOCwgaW8gbWVtIDB4Zjk1ZmMwMDANClsgICAx
Mi40OTU3OTddIGF0YTI6IFNBVEEgbGluayBkb3duIChTU3RhdHVzIDAgU0NvbnRyb2wgMzAw
KQ0KWyAgIDEyLjQ5NTg2M10gYXRhNDogU0FUQSBsaW5rIGRvd24gKFNTdGF0dXMgMCBTQ29u
dHJvbCAzMDApDQpbICAgMTIuNTMzMzgyXSB1c2IgdXNiNTogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxDQpbICAgMTIuNTMzMzgzXSB1c2Ig
dXNiNTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTENClsgICAxMi41MzMzODRdIHVzYiB1c2I1OiBQcm9kdWN0OiBPSENJIFBDSSBo
b3N0IGNvbnRyb2xsZXINClsgICAxMi41MzMzODVdIHVzYiB1c2I1OiBNYW51ZmFjdHVyZXI6
IExpbnV4IDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwrIG9oY2lfaGNkDQpbICAgMTIu
NTMzMzg2XSB1c2IgdXNiNTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEzLjANClsgICAxMi41
MzQzMjFdIGh1YiA1LTA6MS4wOiBVU0IgaHViIGZvdW5kDQpbICAgMTIuNTM0MzM4XSBodWIg
NS0wOjEuMDogNSBwb3J0cyBkZXRlY3RlZA0KWyAgIDEyLjUzNDcyMF0geGVuOiByZWdpc3Rl
cmluZyBnc2kgMTggdHJpZ2dlcmluZyAwIHBvbGFyaXR5IDENClsgICAxMi41MzQ3MjJdIEFs
cmVhZHkgc2V0dXAgdGhlIEdTSSA6MTgNClsgICAxMi41MzQ3NDNdIG9oY2ktcGNpIDAwMDA6
MDA6MTQuNTogZW5hYmxpbmcgYnVzIG1hc3RlcmluZw0KWyAgIDEyLjUzNDc0OV0gb2hjaS1w
Y2kgMDAwMDowMDoxNC41OiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXINClsgICAxMi41MzUy
MzFdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciA2DQpbICAgMTIuNTM1MzY5XSBvaGNpLXBjaSAwMDAwOjAwOjE0
LjU6IGlycSAxOCwgaW8gbWVtIDB4Zjk1ZmQwMDANClsgICAxMi41OTAwMDZdIHVzYiB1c2I2
OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEN
ClsgICAxMi41OTAwMDddIHVzYiB1c2I2OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9
MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQ0KWyAgIDEyLjU5MDAwOF0gdXNiIHVzYjY6
IFByb2R1Y3Q6IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcg0KWyAgIDEyLjU5MDAwOV0gdXNi
IHVzYjY6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZl
bCsgb2hjaV9oY2QNClsgICAxMi41OTAwMTBdIHVzYiB1c2I2OiBTZXJpYWxOdW1iZXI6IDAw
MDA6MDA6MTQuNQ0KWyAgIDEyLjU5MDk4Ml0gaHViIDYtMDoxLjA6IFVTQiBodWIgZm91bmQN
ClsgICAxMi41OTEwMDBdIGh1YiA2LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkDQpbICAgMTIu
NTkxMjk3XSB4ZW46IHJlZ2lzdGVyaW5nIGdzaSAxOCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkg
MQ0KWyAgIDEyLjU5MTI5OV0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDoxOA0KWyAgIDEyLjU5
MTMxNF0gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpb
ICAgMTIuNTkxMzIwXSBvaGNpLXBjaSAwMDAwOjAwOjE2LjA6IE9IQ0kgUENJIGhvc3QgY29u
dHJvbGxlcg0KWyAgIDEyLjU5MTY4M10gb2hjaS1wY2kgMDAwMDowMDoxNi4wOiBuZXcgVVNC
IGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDcNClsgICAxMi41OTE3MjVd
IG9oY2ktcGNpIDAwMDA6MDA6MTYuMDogaXJxIDE4LCBpbyBtZW0gMHhmOTVmZTAwMA0KWyAg
IDEyLjY0NjU2NF0gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0x
ZDZiLCBpZFByb2R1Y3Q9MDAwMQ0KWyAgIDEyLjY0NjU2Nl0gdXNiIHVzYjc6IE5ldyBVU0Ig
ZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xDQpbICAg
MTIuNjQ2NTY3XSB1c2IgdXNiNzogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVy
DQpbICAgMTIuNjQ2NTY4XSB1c2IgdXNiNzogTWFudWZhY3R1cmVyOiBMaW51eCAzLjEzLjAt
cmM3LTIwMTQwMTA3LXhlbmRldmVsKyBvaGNpX2hjZA0KWyAgIDEyLjY0NjU2OV0gdXNiIHVz
Yjc6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxNi4wDQpbICAgMTIuNjQ3Mzk5XSBodWIgNy0w
OjEuMDogVVNCIGh1YiBmb3VuZA0KWyAgIDEyLjY0NzQxNV0gaHViIDctMDoxLjA6IDQgcG9y
dHMgZGV0ZWN0ZWQNClsgICAxMi42NDc4ODFdIHVoY2lfaGNkOiBVU0IgVW5pdmVyc2FsIEhv
c3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJpdmVyDQpbICAgMTIuNjQ4MTE2XSB4ZW46IHJl
Z2lzdGVyaW5nIGdzaSA0OCB0cmlnZ2VyaW5nIDAgcG9sYXJpdHkgMQ0KWyAgIDEyLjY0ODEx
OF0gQWxyZWFkeSBzZXR1cCB0aGUgR1NJIDo0OA0KWyAgIDEyLjY0ODEzNl0geGhjaV9oY2Qg
MDAwMDowOTowMC4wOiBlbmFibGluZyBidXMgbWFzdGVyaW5nDQpbICAgMTIuNjQ4MTQxXSB4
aGNpX2hjZCAwMDAwOjA5OjAwLjA6IHhIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNjQ4
NTM1XSB4aGNpX2hjZCAwMDAwOjA5OjAwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFz
c2lnbmVkIGJ1cyBudW1iZXIgOA0KWyAgIDEyLjY0ODcxNl0geGhjaV9oY2QgMDAwMDowOTow
MC4wOiBlbmFibGluZyBNZW0tV3ItSW52YWwNClsgICAxMi42NDk0OTldIHVzYiB1c2I4OiBO
ZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDINClsg
ICAxMi42NDk1MDBdIHVzYiB1c2I4OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9Mywg
UHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQ0KWyAgIDEyLjY0OTUwMV0gdXNiIHVzYjg6IFBy
b2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgMTIuNjQ5NTAzXSB1c2IgdXNiODog
TWFudWZhY3R1cmVyOiBMaW51eCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyB4aGNp
X2hjZA0KWyAgIDEyLjY0OTUwM10gdXNiIHVzYjg6IFNlcmlhbE51bWJlcjogMDAwMDowOTow
MC4wDQpbICAgMTIuNjUwNDc0XSBodWIgOC0wOjEuMDogVVNCIGh1YiBmb3VuZA0KWyAgIDEy
LjY1MDUyM10gaHViIDgtMDoxLjA6IDIgcG9ydHMgZGV0ZWN0ZWQNClsgICAxMi42NTA2NjBd
IHhoY2lfaGNkIDAwMDA6MDk6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXINClsgICAxMi42
NTEwODVdIHhoY2lfaGNkIDAwMDA6MDk6MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwg
YXNzaWduZWQgYnVzIG51bWJlciA5DQpbICAgMTIuNjUyMTI2XSB1c2IgdXNiOTogTmV3IFVT
QiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAzDQpbICAgMTIu
NjUyMTI3XSB1c2IgdXNiOTogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1
Y3Q9MiwgU2VyaWFsTnVtYmVyPTENClsgICAxMi42NTIxMjhdIHVzYiB1c2I5OiBQcm9kdWN0
OiB4SENJIEhvc3QgQ29udHJvbGxlcg0KWyAgIDEyLjY1MjEyOV0gdXNiIHVzYjk6IE1hbnVm
YWN0dXJlcjogTGludXggMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgeGhjaV9oY2QN
ClsgICAxMi42NTIxMzBdIHVzYiB1c2I5OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDk6MDAuMA0K
WyAgIDEyLjY1MzA2Ml0gaHViIDktMDoxLjA6IFVTQiBodWIgZm91bmQNClsgICAxMi42NTMw
OTldIGh1YiA5LTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkDQpbICAgMTIuNjYyNTU2XSBhdGEz
OiBTQVRBIGxpbmsgdXAgNi4wIEdicHMgKFNTdGF0dXMgMTMzIFNDb250cm9sIDMwMCkNClsg
ICAxMi42NjI2MzVdIGF0YTE6IFNBVEEgbGluayB1cCAzLjAgR2JwcyAoU1N0YXR1cyAxMjMg
U0NvbnRyb2wgMzAwKQ0KWyAgIDEyLjY2MzMzOV0gYXRhMy4wMDogQVRBLTg6IFNUMjAwMERM
MDAzLTlWVDE2NiwgQ0M0NSwgbWF4IFVETUEvMTMzDQpbICAgMTIuNjYzMzQxXSBhdGEzLjAw
OiAzOTA3MDI5MTY4IHNlY3RvcnMsIG11bHRpIDA6IExCQTQ4IE5DUSAoZGVwdGggMzEvMzIp
DQpbICAgMTIuNjY0MDY1XSBhdGEzLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMw0KWyAg
IDEyLjY2NDExOV0gYXRhMS4wMDogQVRBLTg6IEhpdGFjaGkgSERTNzIyMDIwQUxBMzMwLCBK
S0FPQTIwTiwgbWF4IFVETUEvMTMzDQpbICAgMTIuNjY0MTIxXSBhdGExLjAwOiAzOTA3MDI5
MTY4IHNlY3RvcnMsIG11bHRpIDA6IExCQTQ4IE5DUSAoZGVwdGggMzEvMzIpLCBBQQ0KWyAg
IDEyLjY2NTY2N10gYXRhMS4wMDogY29uZmlndXJlZCBmb3IgVURNQS8xMzMNClsgICAxMi42
NjYxMThdIHNjc2kgMDowOjA6MDogRGlyZWN0LUFjY2VzcyAgICAgQVRBICAgICAgSGl0YWNo
aSBIRFM3MjIwMiBKS0FPIFBROiAwIEFOU0k6IDUNClsgICAxMi42Njc0MzVdIHNkIDA6MDow
OjA6IFtzZGFdIDM5MDcwMjkxNjggNTEyLWJ5dGUgbG9naWNhbCBibG9ja3M6ICgyLjAwIFRC
LzEuODEgVGlCKQ0KWyAgIDEyLjY2NzUxMF0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUgUHJv
dGVjdCBpcyBvZmYNClsgICAxMi42Njc1MTJdIHNkIDA6MDowOjA6IFtzZGFdIE1vZGUgU2Vu
c2U6IDAwIDNhIDAwIDAwDQpbICAgMTIuNjY3NTQxXSBzZCAwOjA6MDowOiBbc2RhXSBXcml0
ZSBjYWNoZTogZW5hYmxlZCwgcmVhZCBjYWNoZTogZW5hYmxlZCwgZG9lc24ndCBzdXBwb3J0
IERQTyBvciBGVUENClsgICAxMi42NjgxOTZdIHNkIDA6MDowOjA6IEF0dGFjaGVkIHNjc2kg
Z2VuZXJpYyBzZzAgdHlwZSAwDQpbICAgMTIuNjY5MzkwXSBzY3NpIDI6MDowOjA6IERpcmVj
dC1BY2Nlc3MgICAgIEFUQSAgICAgIFNUMjAwMERMMDAzLTlWVDEgQ0M0NSBQUTogMCBBTlNJ
OiA1DQpbICAgMTIuNjcwMTQ0XSBzZCAyOjA6MDowOiBbc2RiXSAzOTA3MDI5MTY4IDUxMi1i
eXRlIGxvZ2ljYWwgYmxvY2tzOiAoMi4wMCBUQi8xLjgxIFRpQikNClsgICAxMi42NzAxNDZd
IHNkIDI6MDowOjA6IFtzZGJdIDQwOTYtYnl0ZSBwaHlzaWNhbCBibG9ja3MNClsgICAxMi42
NzAzNTVdIHNkIDI6MDowOjA6IEF0dGFjaGVkIHNjc2kgZ2VuZXJpYyBzZzEgdHlwZSAwDQpb
ICAgMTIuNjcwNDA2XSBzZCAyOjA6MDowOiBbc2RiXSBXcml0ZSBQcm90ZWN0IGlzIG9mZg0K
WyAgIDEyLjY3MDQwOF0gc2QgMjowOjA6MDogW3NkYl0gTW9kZSBTZW5zZTogMDAgM2EgMDAg
MDANClsgICAxMi42NzA0OTVdIHNkIDI6MDowOjA6IFtzZGJdIFdyaXRlIGNhY2hlOiBlbmFi
bGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQ0K
WyAgIDEyLjY3NzA4MV0gIHNkYTogc2RhMSBzZGEyIHNkYTMNClsgICAxMi42NzkwMTJdIHNk
IDA6MDowOjA6IFtzZGFdIEF0dGFjaGVkIFNDU0kgZGlzaw0KWyAgIDEyLjY4ODkwMl0gIHNk
Yjogc2RiMQ0KWyAgIDEyLjY5MDE1MV0gc2QgMjowOjA6MDogW3NkYl0gQXR0YWNoZWQgU0NT
SSBkaXNrDQpbICAgMTMuMjI3MTcyXSBhdGE2OiBTQVRBIGxpbmsgZG93biAoU1N0YXR1cyAw
IFNDb250cm9sIDMwMCkNClsgICAxMy4yMzMyODhdIGF0YTU6IFNBVEEgbGluayBkb3duIChT
U3RhdHVzIDAgU0NvbnRyb2wgMzAwKQ0KWyAgIDEzLjI1Mjk1MF0gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2JscA0KWyAgIDEzLjI1OTE1MV0gdXNiY29y
ZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Itc3RvcmFnZQ0KWyAgIDEz
LjI2NTY0OF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Jz
ZXJpYWwNClsgICAxMy4yNzE4MTBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgY3AyMTB4DQpbICAgMTMuMjc3OTQ0XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwg
c3VwcG9ydCByZWdpc3RlcmVkIGZvciBjcDIxMHgNClsgICAxMy4yNzkxODhdIHVzYiA0LTU6
IG5ldyBmdWxsLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgb2hjaS1wY2kNClsg
ICAxMy4yODk1MzBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIg
Y3lwcmVzc19tOA0KWyAgIDEzLjI5NTUyMV0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBv
cnQgcmVnaXN0ZXJlZCBmb3IgRGVMb3JtZSBFYXJ0aG1hdGUgVVNCDQpbICAgMTMuMzAxNTI5
XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwgc3VwcG9ydCByZWdpc3RlcmVkIGZvciBISUQtPkNP
TSBSUzIzMiBBZGFwdGVyDQpbICAgMTMuMzA3NDc3XSB1c2JzZXJpYWw6IFVTQiBTZXJpYWwg
c3VwcG9ydCByZWdpc3RlcmVkIGZvciBOb2tpYSBDQS00MiBWMiBBZGFwdGVyDQpbICAgMTMu
MzEzNDQzXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIG1vczc3
MjANClsgICAxMy4zMTkyOThdIHVzYnNlcmlhbDogVVNCIFNlcmlhbCBzdXBwb3J0IHJlZ2lz
dGVyZWQgZm9yIE1vc2NoaXAgMiBwb3J0IGFkYXB0ZXINClsgICAxMy4zMjUyMzJdIHVzYmNv
cmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgbW9zNzg0MA0KWyAgIDEzLjMz
MTIyNV0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBvcnQgcmVnaXN0ZXJlZCBmb3IgTW9z
Y2hpcCA3ODQwLzc4MjAgVVNCIFNlcmlhbCBEcml2ZXINClsgICAxMy4zMzc3ODVdIGk4MDQy
OiBQTlA6IE5vIFBTLzIgY29udHJvbGxlciBmb3VuZC4gUHJvYmluZyBwb3J0cyBkaXJlY3Rs
eS4NClsgICAxMy4zNDQ0OTFdIHNlcmlvOiBpODA0MiBLQkQgcG9ydCBhdCAweDYwLDB4NjQg
aXJxIDENClsgICAxMy4zNTA1MDBdIHNlcmlvOiBpODA0MiBBVVggcG9ydCBhdCAweDYwLDB4
NjQgaXJxIDEyDQpbICAgMTMuMzU2OTczXSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2Ug
Y29tbW9uIGZvciBhbGwgbWljZQ0KWyAgIDEzLjM2NDIzMF0gcnRjX2Ntb3MgMDA6MDM6IFJU
QyBjYW4gd2FrZSBmcm9tIFM0DQpbICAgMTMuMzcwNDQzXSBydGNfY21vcyAwMDowMzogcnRj
IGNvcmU6IHJlZ2lzdGVyZWQgcnRjX2Ntb3MgYXMgcnRjMA0KWyAgIDEzLjM3NjI5NV0gcnRj
X2Ntb3MgMDA6MDM6IGFsYXJtcyB1cCB0byBvbmUgbW9udGgsIHkzaywgMTE0IGJ5dGVzIG52
cmFtDQpbICAgMTMuMzgzMDU3XSBBQ1BJIFdhcm5pbmc6IDB4MDAwMDAwMDAwMDAwMGIwMC0w
eDAwMDAwMDAwMDAwMDBiMDcgU3lzdGVtSU8gY29uZmxpY3RzIHdpdGggUmVnaW9uIFxTT1Ix
IDEgKDIwMTMxMTE1L3V0YWRkcmVzcy0yNTEpDQpbICAgMTMuMzg5MjI2XSBBQ1BJOiBUaGlz
IGNvbmZsaWN0IG1heSBjYXVzZSByYW5kb20gcHJvYmxlbXMgYW5kIHN5c3RlbSBpbnN0YWJp
bGl0eQ0KWyAgIDEzLjM5NTQwMV0gQUNQSTogSWYgYW4gQUNQSSBkcml2ZXIgaXMgYXZhaWxh
YmxlIGZvciB0aGlzIGRldmljZSwgeW91IHNob3VsZCB1c2UgaXQgaW5zdGVhZCBvZiB0aGUg
bmF0aXZlIGRyaXZlcg0KWyAgIDEzLjQwMTgwOV0gcGlpeDRfc21idXMgMDAwMDowMDoxNC4w
OiBTTUJ1cyBIb3N0IENvbnRyb2xsZXIgYXQgMHhiMDAsIHJldmlzaW9uIDANClsgICAxMy40
MDgzNzZdIEFDUEkgV2FybmluZzogMHgwMDAwMDAwMDAwMDAwYjIwLTB4MDAwMDAwMDAwMDAw
MGIyNyBTeXN0ZW1JTyBjb25mbGljdHMgd2l0aCBSZWdpb24gXFNPUjIgMSAoMjAxMzExMTUv
dXRhZGRyZXNzLTI1MSkNClsgICAxMy40MTQ5NjVdIEFDUEk6IFRoaXMgY29uZmxpY3QgbWF5
IGNhdXNlIHJhbmRvbSBwcm9ibGVtcyBhbmQgc3lzdGVtIGluc3RhYmlsaXR5DQpbICAgMTMu
NDIxNTc0XSBBQ1BJOiBJZiBhbiBBQ1BJIGRyaXZlciBpcyBhdmFpbGFibGUgZm9yIHRoaXMg
ZGV2aWNlLCB5b3Ugc2hvdWxkIHVzZSBpdCBpblsgICAxOC42NzgyNTldIHVkZXZkWzE5MDdd
OiBzdGFydGluZyB2ZXJzaW9uIDE3NQ0KWyAgIDE5Ljc1Mzg3MV0gYXRhMS4wMDogY29uZmln
dXJlZCBmb3IgVURNQS8xMzMNClsgICAxOS43NjI3NzZdIGF0YTE6IEVIIGNvbXBsZXRlDQpb
ICAgMjAuOTk3MTIyXSBhdGExLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMw0KWyAgIDIx
LjAwNDM1M10gYXRhMTogRUggY29tcGxldGUNClsgICAyMi4yODg1ODJdIEVYVDQtZnMgKGRt
LTApOiByZS1tb3VudGVkLiBPcHRzOiAobnVsbCkNClsgICAyMi44NjAzMDldIEVYVDQtZnMg
KGRtLTApOiByZS1tb3VudGVkLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8N
ClsgICAzMC43MTMzODNdIEFkZGluZyAyMDk3MTQ4ayBzd2FwIG9uIC9kZXYvbWFwcGVyL3Nl
cnZlZXJzdGVydGplLXN3YXAuICBQcmlvcml0eTotMSBleHRlbnRzOjEgYWNyb3NzOjIwOTcx
NDhrIA0KWyAgIDQzLjQ2MjI0Nl0gRVhUNC1mcyAoc2RhMSk6IG1vdW50ZWQgZmlsZXN5c3Rl
bSB3aXRoIG9yZGVyZWQgZGF0YSBtb2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91
bnQtcm8NClsgICA0Ni42ODM4NTFdIHI4MTY5IDAwMDA6MGE6MDAuMCBldGgxOiBsaW5rIGRv
d24NClsgICA0Ni42OTA2MTVdIHI4MTY5IDAwMDA6MGE6MDAuMCBldGgxOiBsaW5rIGRvd24N
ClsgICA0Ni45NTcxOTldIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIGRvd24NClsg
ICA0Ni45NTcyODJdIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIGRvd24NClsgICA0
OC42MDg0NzFdIHI4MTY5IDAwMDA6MGI6MDAuMCBldGgwOiBsaW5rIHVwDQpbICAgNDkuMzEz
NDgyXSByODE2OSAwMDAwOjBhOjAwLjAgZXRoMTogbGluayB1cA0KWyAgIDg5Ljk5NjcyMl0g
RVhUNC1mcyAoZG0tMik6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQgZGF0YSBt
b2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8NClsgICA5Ni4zNTg0MjNd
IEVYVDQtZnMgKGRtLTM5KTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3JkZXJlZCBkYXRh
IG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAgMTAyLjQwNDEx
OV0gRVhUNC1mcyAoZG0tNDApOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRlcmVkIGRh
dGEgbW9kZS4gT3B0czogYmFycmllcj0xLGVycm9ycz1yZW1vdW50LXJvDQpbICAxMDguNTEz
NTg4XSBFWFQ0LWZzIChkbS00MSk6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQg
ZGF0YSBtb2RlLiBPcHRzOiBiYXJyaWVyPTEsZXJyb3JzPXJlbW91bnQtcm8NClsgIDExNi43
MjQ1MjNdIEVYVDQtZnMgKGRtLTQyKTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3JkZXJl
ZCBkYXRhIG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAgMTIz
LjI4MzcwMl0gRVhUNC1mcyAoZG0tNDMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRl
cmVkIGRhdGEgbW9kZS4gT3B0czogYmFycmllcj0xLGVycm9ycz1yZW1vdW50LXJvDQpbICAx
MjkuNjE3MDgzXSBFWFQ0LWZzIChzZGIxKTogbW91bnRlZCBmaWxlc3lzdGVtIHdpdGggb3Jk
ZXJlZCBkYXRhIG1vZGUuIE9wdHM6IGJhcnJpZXI9MSxlcnJvcnM9cmVtb3VudC1ybw0KWyAg
MTM2LjI5NTgzNF0gZGV2aWNlIHZpZjEuMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUNCihk
MSkgWzIwMTQtMDEtMDcgMTE6MTE6MjRdIG1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwg
bWVtb3J5DQooZDEpIFsyMDE0LTAxLTA3IDExOjExOjI0XSBhYm91dCB0byBnZXQgc3RhcnRl
ZC4uLg0KKFhFTikgWzIwMTQtMDEtMDcgMTE6MTE6MjVdIHRyYXBzLmM6MjUxNjpkMSBEb21h
aW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAw
MDAwMDAgdG8gMHgwMDAwMDAwMDAwMDBmZmZmLg0KWyAgMTM3LjUyODczN10geGVuLWJsa2Jh
Y2s6cmluZy1yZWYgOCwgZXZlbnQtY2hhbm5lbCAxNywgcHJvdG9jb2wgMSAoeDg2XzY0LWFi
aSkgcGVyc2lzdGVudCBncmFudHMNClsgIDEzNy41NDM2MTVdIHhlbi1ibGtiYWNrOnJpbmct
cmVmIDksIGV2ZW50LWNoYW5uZWwgMTgsIHByb3RvY29sIDEgKHg4Nl82NC1hYmkpIHBlcnNp
c3RlbnQgZ3JhbnRzDQpbICAxMzcuNTU5MTYyXSB4ZW5fYnJpZGdlOiBwb3J0IDEodmlmMS4w
KSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDEzNy41NzAwODVdIHhlbl9icmlkZ2U6
IHBvcnQgMSh2aWYxLjApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KWyAgMTQyLjIyODE0
MF0gZGV2aWNlIHZpZjIuMCBlbnRlcmVkIHByb21pc2N1b3VzIG1vZGUNCihkMikgWzIwMTQt
MDEtMDcgMTE6MTE6MzBdIG1hcHBpbmcga2VybmVsIGludG8gcGh5c2ljYWwgbWVtb3J5DQoo
ZDIpIFsyMDE0LTAxLTA3IDExOjExOjMwXSBhYm91dCB0byBnZXQgc3RhcnRlZC4uLg0KKFhF
TikgWzIwMTQtMDEtMDcgMTE6MTE6MzBdIHRyYXBzLmM6MjUxNjpkMiBEb21haW4gYXR0ZW1w
dGVkIFdSTVNSIDAwMDAwMDAwYzAwMTAwMDQgZnJvbSAweDAwMDAwMDAwMDAwMDAwMDAgdG8g
MHgwMDAwMDAwMDAwMDBmZmZmLg0KWyAgMTQzLjEwNDU4N10geGVuLWJsa2JhY2s6cmluZy1y
ZWYgOCwgZXZlbnQtY2hhbm5lbCAxMCwgcHJvdG9jb2wgMSAoeDg2XzY0LWFiaSkgcGVyc2lz
dGVudCBncmFudHMNClsgIDE0NC4xMTc3NzVdIHhlbl9icmlkZ2U6IHBvcnQgMih2aWYyLjAp
IGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQ0KWyAgMTQ0LjEyODkwOV0geGVuX2JyaWRnZTog
cG9ydCAyKHZpZjIuMCkgZW50ZXJlZCBmb3J3YXJkaW5nIHN0YXRlDQpbICAxNDguMTI0MTQy
XSBkZXZpY2UgdmlmMy4wIGVudGVyZWQgcHJvbWlzY3VvdXMgbW9kZQ0KKGQzKSBbMjAxNC0w
MS0wNyAxMToxMTozNl0gbWFwcGluZyBrZXJuZWwgaW50byBwaHlzaWNhbCBtZW1vcnkNCihk
MykgWzIwMTQtMDEtMDcgMTE6MTE6MzZdIGFib3V0IHRvIGdldCBzdGFydGVkLi4uDQooWEVO
KSBbMjAxNC0wMS0wNyAxMToxMTozNl0gdHJhcHMuYzoyNTE2OmQzIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDBjMDAxMDAwNCBmcm9tIDB4MDAwMDAwMDAwMDAwMDAwMCB0byAw
eDAwMDAwMDAwMDAwMGZmZmYuDQpbICAxNTIuNTc2ODc3XSB4ZW5fYnJpZGdlOiBwb3J0IDEo
dmlmMS4wKSBlbnRlcmVkIGZvcndhcmRpbmcgc3RhdGUNClsgIDE1My45NzcxOTldIGRldmlj
e vif4.0 entered promiscuous mode
(d4) [2014-01-07 11:11:42] mapping kernel into physical memory
(d4) [2014-01-07 11:11:42] about to get started...
(XEN) [2014-01-07 11:11:42] traps.c:2516:d4 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
[  154.896708] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[  155.911589] xen_bridge: port 4(vif4.0) entered forwarding state
[  155.922663] xen_bridge: port 4(vif4.0) entered forwarding state
(XEN) [2014-01-07 11:11:45] grant_table.c:289:d0 Increased maptrack size to 2 frames
[  159.133441] xen_bridge: port 2(vif2.0) entered forwarding state
[  159.892962] device vif5.0 entered promiscuous mode
(d5) [2014-01-07 11:11:48] mapping kernel into physical memory
(d5) [2014-01-07 11:11:48] about to get started...
(XEN) [2014-01-07 11:11:48] traps.c:2516:d5 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
[  160.854202] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[  161.868507] xen_bridge: port 5(vif5.0) entered forwarding state
[  161.879037] xen_bridge: port 5(vif5.0) entered forwarding state
[  166.002359] device vif6.0 entered promiscuous mode
(d6) [2014-01-07 11:11:54] mapping kernel into physical memory
(d6) [2014-01-07 11:11:54] about to get started...
(XEN) [2014-01-07 11:11:54] traps.c:2516:d6 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
[  166.877865] xen-blkback:ring-ref 8, event-channel 10, protocol 1 (x86_64-abi) persistent grants
[  166.892435] xen_bridge: port 6(vif6.0) entered forwarding state
[  166.902486] xen_bridge: port 6(vif6.0) entered forwarding state
[  170.967600] xen_bridge: port 4(vif4.0) entered forwarding state
[  174.279330] Bluetooth: hci0 command 0x0804 tx timeout
[  176.884602] xen_bridge: port 5(vif5.0) entered forwarding state
[  181.948815] xen_bridge: port 6(vif6.0) entered forwarding state
[  183.681406] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[  183.690676] ata3.00: failed command: CHECK POWER MODE
[  183.699794] ata3.00: cmd e5/00:00:00:00:00/00:00:00:00:00/00 tag 0
[  183.699794]          res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
[  183.717833] ata3.00: status: { DRDY }
[  183.726675] ata3: hard resetting link
[  184.221019] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  189.225231] ata3.00: qc timeout (cmd 0xec)
[  189.233660] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  189.242022] ata3.00: revalidation failed (errno=-5)
[  189.250279] ata3: hard resetting link
[  189.741548] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[  196.354896] BUG: soft lockup - CPU#1 stuck for 22s! [xendomains:9679]
[  196.362706] Modules linked in:
[  196.370360] irq event stamp: 61738
[  196.377871] hardirqs last  enabled at (61737): [<ffffffff81aa0b33>] restore_args+0x0/0x30
[  196.385465] hardirqs last disabled at (61738): [<ffffffff81aa1016>] error_sti+0x5/0x6
[  196.392910] softirqs last  enabled at (61736): [<ffffffff810a9df1>] __do_softirq+0x191/0x220
[  196.400192] softirqs last disabled at (61731): [<ffffffff810aa2e2>] irq_exit+0xa2/0xd0
[  196.407263] CPU: 1 PID: 9679 Comm: xendomains Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  196.414303] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  196.421273] task: ffff880058dec240 ti: ffff88004cd32000 task.ti: ffff88004cd32000
[  196.428263] RIP: e030:[<ffffffff81109a58>]  [<ffffffff81109a58>] generic_exec_single+0x88/0xc0
[  196.435323] RSP: e02b:ffff88004cd33a88  EFLAGS: 00000202
[  196.442170] RAX: 0000000000000008 RBX: ffff88005f614040 RCX: 0000000000000038
[  196.448898] RDX: 00000000000000ff RSI: 0000000000000008 RDI: 0000000000000008
[  196.455578] RBP: ffff88004cd33ac8 R08: ffffffff81c0d468 R09: 0000000000000000
[  196.462283] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88004cd33af0
[  196.468994] R13: 0000000000000001 R14: 0000000000000000 R15: ffff88005f614050
[  196.475704] FS:  00007f152ec4d700(0000) GS:ffff88005f640000(0000) knlGS:0000000000000000
[  196.482492] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  196.489229] CR2: 00007f152e2a1e02 CR3: 000000004cd2a000 CR4: 0000000000000660
[  196.495906] Stack:
[  196.502595]  0000000000000200 ffff88005f614040 0000000000000006 0000000000000000
[  196.509412]  0000000000000001 ffffffff822e7300 ffffffff81007980 0000000000000001
[  196.516266]  ffff88004cd33b38 ffffffff81109cc5 ffffffff81aa0b33 ffff8800ced56000
[  196.523112] Call Trace:
[  196.529769]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.536519]  [<ffffffff81109cc5>] smp_call_function_single+0xe5/0x1e0
[  196.543374]  [<ffffffff81aa0b33>] ? retint_restore_args+0x13/0x13
[  196.550098]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.556833]  [<ffffffff8110a03a>] smp_call_function_many+0x27a/0x2a0
[  196.563591]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  196.570240]  [<ffffffff8100841e>] xen_exit_mmap+0xce/0x1a0
[  196.576791]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
[  196.583449]  [<ffffffff81169426>] exit_mmap+0x56/0x180
[  196.590020]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.596504]  [<ffffffff811ddde0>] ? exit_aio+0xb0/0xe0
[  196.602900]  [<ffffffff811ddd44>] ? exit_aio+0x14/0xe0
[  196.609179]  [<ffffffff810a2689>] mmput+0x59/0xe0
[  196.615475]  [<ffffffff8119a3a9>] flush_old_exec+0x439/0x830
[  196.621761]  [<ffffffff811e8cca>] load_elf_binary+0x32a/0x1a00
[  196.627908]  [<ffffffff81a9ffe6>] ? _raw_read_unlock+0x26/0x30
[  196.633975]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.640137]  [<ffffffff81199243>] ? search_binary_handler+0xc3/0x1b0
[  196.646375]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  196.652537]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  196.658587]  [<ffffffff81199204>] search_binary_handler+0x84/0x1b0
[  196.664735]  [<ffffffff8119b252>] do_execve_common.isra.31+0x592/0x710
[  196.670825]  [<ffffffff8119b1e7>] ? do_execve_common.isra.31+0x527/0x710
[  196.676902]  [<ffffffff81186345>] ? kmem_cache_alloc+0xb5/0x120
[  196.682972]  [<ffffffff8119b3e3>] do_execve+0x13/0x20
[  196.689016]  [<ffffffff8119b658>] SyS_execve+0x38/0x60
[  196.694944]  [<ffffffff81aa19e9>] stub_execve+0x69/0xa0
[  196.700913] Code: c8 48 89 da e8 ea c1 32 00 48 8b 45 c0 4c 89 ff 48 89 c6 e8 bb 67 99 00 48 3b 5d c8 74 2d 45 85 ed 75 0a eb 10 66 0f 1f 44 00 00 <f3> 90 41 f6 44 24 20 01 75 f6 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d
[  199.743359] ata3.00: qc timeout (cmd 0xec)
[  199.749884] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  199.756345] ata3.00: revalidation failed (errno=-5)
[  199.762943] ata3: limiting SATA link speed to 3.0 Gbps
[  199.769401] ata3: hard resetting link
[  200.256376] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  204.697619] ata1.00: exception Emask 0x0 SAct 0x1f SErr 0x0 action 0x6 frozen
[  204.703992] ata1.00: failed command: READ FPDMA QUEUED
[  204.710240] ata1.00: cmd 60/08:00:01:1e:2d/00:00:d1:00:00/40 tag 0 ncq 4096 in
[  204.710240]          res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  204.722926] ata1.00: status: { DRDY }
[  204.729299] ata1.00: failed command: WRITE FPDMA QUEUED
[  204.735599] ata1.00: cmd 61/e0:08:e9:67:2e/00:00:04:00:00/40 tag 1 ncq 114688 out
[  204.735599]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  204.748111] ata1.00: status: { DRDY }
[  204.754221] ata1.00: failed command: READ FPDMA QUEUED
[  204.760258] ata1.00: cmd 60/20:10:81:ed:c2/00:00:d0:00:00/40 tag 2 ncq 16384 in
[  204.760258]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  204.772441] ata1.00: status: { DRDY }
[  204.778543] ata1.00: failed command: READ FPDMA QUEUED
[  204.784615] ata1.00: cmd 60/08:18:e1:28:8e/00:00:d0:00:00/40 tag 3 ncq 4096 in
[  204.784615]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  204.796983] ata1.00: status: { DRDY }
[  204.803072] ata1.00: failed command: READ FPDMA QUEUED
[  204.809141] ata1.00: cmd 60/08:20:b9:3a:6e/00:00:cc:00:00/40 tag 4 ncq 4096 in
[  204.809141]          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[  204.821274] ata1.00: status: { DRDY }
[  204.827294] ata1: hard resetting link
[  205.317227] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[  210.318004] ata1.00: qc timeout (cmd 0xec)
[  210.324182] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  210.330300] ata1.00: revalidation failed (errno=-5)
[  210.336411] ata1: hard resetting link
[  210.827908] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[  220.826209] ata1.00: qc timeout (cmd 0xec)
[  220.832207] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  220.838142] ata1.00: revalidation failed (errno=-5)
[  220.844108] ata1: limiting SATA link speed to 1.5 Gbps
[  220.850008] ata1: hard resetting link
[  221.339228] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[  224.340984] BUG: soft lockup - CPU#1 stuck for 22s! [xendomains:9679]
[  224.347102] Modules linked in:
[  224.353113] irq event stamp: 128656
[  224.359160] hardirqs last  enabled at (128655): [<ffffffff81aa0b33>] restore_args+0x0/0x30
[  224.365424] hardirqs last disabled at (128656): [<ffffffff81aa1016>] error_sti+0x5/0x6
[  224.371677] softirqs last  enabled at (128654): [<ffffffff810a9df1>] __do_softirq+0x191/0x220
[  224.377936] softirqs last disabled at (128649): [<ffffffff810aa2e2>] irq_exit+0xa2/0xd0
[  224.384168] CPU: 1 PID: 9679 Comm: xendomains Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  224.390471] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  224.396889] task: ffff880058dec240 ti: ffff88004cd32000 task.ti: ffff88004cd32000
[  224.403387] RIP: e030:[<ffffffff81109a5a>]  [<ffffffff81109a5a>] generic_exec_single+0x8a/0xc0
[  224.410040] RSP: e02b:ffff88004cd33a88  EFLAGS: 00000202
[  224.416625] RAX: 0000000000000008 RBX: ffff88005f614040 RCX: 0000000000000038
[  224.423333] RDX: 00000000000000ff RSI: 0000000000000008 RDI: 0000000000000008
[  224.430010] RBP: ffff88004cd33ac8 R08: ffffffff81c0d468 R09: 0000000000000000
[  224.436674] R10: 0000000000000001 R11: 0000000000000000 R12: ffff88004cd33af0
[  224.443395] R13: 0000000000000001 R14: 0000000000000000 R15: ffff88005f614050
[  224.449936] FS:  00007f152ec4d700(0000) GS:ffff88005f640000(0000) knlGS:0000000000000000
[  224.456338] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  224.462739] CR2: 00007f152e2a1e02 CR3: 000000004cd2a000 CR4: 0000000000000660
[  224.469290] Stack:
[  224.475657]  0000000000000200 ffff88005f614040 0000000000000006 0000000000000000
[  224.482277]  0000000000000001 ffffffff822e7300 ffffffff81007980 0000000000000001
[  224.488833]  ffff88004cd33b38 ffffffff81109cc5 ffffffff81aa0b33 ffff8800ced56000
[  224.495392] Call Trace:
[  224.501818]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  224.508390]  [<ffffffff81109cc5>] smp_call_function_single+0xe5/0x1e0
[  224.514930]  [<ffffffff81aa0b33>] ? retint_restore_args+0x13/0x13
[  224.521487]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  224.528014]  [<ffffffff8110a03a>] smp_call_function_many+0x27a/0x2a0
[  224.534519]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  224.541078]  [<ffffffff8100841e>] xen_exit_mmap+0xce/0x1a0
[  224.547617]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
[  224.554127]  [<ffffffff81169426>] exit_mmap+0x56/0x180
[  224.560608]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  224.567087]  [<ffffffff811ddde0>] ? exit_aio+0xb0/0xe0
[  224.573529]  [<ffffffff811ddd44>] ? exit_aio+0x14/0xe0
[  224.579790]  [<ffffffff810a2689>] mmput+0x59/0xe0
[  224.586090]  [<ffffffff8119a3a9>] flush_old_exec+0x439/0x830
[  224.592346]  [<ffffffff811e8cca>] load_elf_binary+0x32a/0x1a00
[  224.598570]  [<ffffffff81a9ffe6>] ? _raw_read_unlock+0x26/0x30
[  224.604746]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  224.610894]  [<ffffffff81199243>] ? search_binary_handler+0xc3/0x1b0
[  224.617026]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  224.623186]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  224.629250]  [<ffffffff81199204>] search_binary_handler+0x84/0x1b0
[  224.635338]  [<ffffffff8119b252>] do_execve_common.isra.31+0x592/0x710
[  224.641431]  [<ffffffff8119b1e7>] ? do_execve_common.isra.31+0x527/0x710
[  224.647499]  [<ffffffff81186345>] ? kmem_cache_alloc+0xb5/0x120
[  224.653546]  [<ffffffff8119b3e3>] do_execve+0x13/0x20
[  224.659540]  [<ffffffff8119b658>] SyS_execve+0x38/0x60
[  224.665539]  [<ffffffff81aa19e9>] stub_execve+0x69/0xa0
[  224.671480] Code: 89 da e8 ea c1 32 00 48 8b 45 c0 4c 89 ff 48 89 c6 e8 bb 67 99 00 48 3b 5d c8 74 2d 45 85 ed 75 0a eb 10 66 0f 1f 44 00 00 f3 90 <41> f6 44 24 20 01 75 f6 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c
[  230.244835] ata3.00: qc timeout (cmd 0xec)
[  230.251362] ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
[  230.257919] ata3.00: revalidation failed (errno=-5)
[  230.264487] ata3.00: disabled
[  230.270869] ata3: hard resetting link
[  230.761200] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
[  230.767481] ata3: EH complete
[  231.164265] INFO: rcu_sched self-detected stall on CPU
[  231.170428] 	1: (17791 ticks this GP) idle=de7/140000000000001/0 softirq=8220/8220 last_accelerate: 5f44/a595, nonlazy_posted: 1, ..
[  231.170964] INFO: rcu_sched detected stalls on CPUs/tasks:
[  231.170968] 	1: (17791 ticks this GP) idle=de7/140000000000001/0 softirq=8220/8220 last_accelerate: 5f44/a596, nonlazy_posted: 1, ..
[  231.170969] 	(detected by 5, t=18002 jiffies, g=5892, c=5891, q=4013)
[  231.170975] sending NMI to all CPUs:
[  231.171011] NMI backtrace for cpu 1
[  231.171013] CPU: 1 PID: 9679 Comm: xendomains Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  231.171013] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  231.171014] task: ffff880058dec240 ti: ffff88004cd32000 task.ti: ffff88004cd32000
[  231.171018] RIP: e030:[<ffffffff8148a19c>]  [<ffffffff8148a19c>] cfb_imageblit+0x2ac/0x4e0
[  231.171019] RSP: e02b:ffff88005f643678  EFLAGS: 00000002
[  231.171020] RAX: 00000000ffffffff RBX: 000000000000000a RCX: 0000000000000003
[  231.171020] RDX: 000000000000003b RSI: 0000000000000000 RDI: 0000000000000001
[  231.171021] RBP: ffff88005f6436f8 R08: ffffc90010592e50 R09: ffffffff81c676e8
[  231.171022] R10: 0000000000000001 R11: 0000000000aaaaaa R12: ffff880057931040
[  231.171022] R13: ffffc90010592e54 R14: ffff880057931014 R15: ffffc900105928c0
[  231.171025] FS:  00007f152ec4d700(0000) GS:ffff88005f640000(0000) knlGS:0000000000000000
[  231.171026] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  231.171026] CR2: 00007f152e2a1e02 CR3: 000000004cd2a000 CR4: 0000000000000660
[  231.171027] Stack:
[  231.171030]  ffff880058deca58 ffff880058dec240 ffff88005f643758 ffffffff810e4f38
[  231.171032]  ffffffff82c17750 ffff88005f64383a 00000000000001a0 ffff8800579d82e0
[  231.171034]  0000000000001400 0000000000000034 0000000000000000 ffff880057930f44
[  231.171034] Call Trace:
[  231.171035]  <IRQ>
[  231.171037]  [<ffffffff810e4f38>] ? __lock_acquire+0x418/0x2220
[  231.171039]  [<ffffffff81487435>] bit_putcs+0x2e5/0x5a0
[  231.171041]  [<ffffffff814878a7>] ? soft_cursor+0x1b7/0x250
[  231.171043]  [<ffffffff814826ce>] ? get_color.isra.17+0x3e/0x150
[  231.171044]  [<ffffffff81482a64>] fbcon_putcs+0x134/0x180
[  231.171046]  [<ffffffff81487150>] ? bit_cursor+0x630/0x630
[  231.171047]  [<ffffffff8148498f>] fbcon_redraw.isra.24+0x17f/0x1f0
[  231.171049]  [<ffffffff81484be8>] fbcon_scroll+0x1e8/0xd00
[  231.171051]  [<ffffffff814f72bb>] scrup+0x10b/0x120
[  231.171052]  [<ffffffff814f7430>] lf+0x70/0x80
[  231.171054]  [<ffffffff814f82a6>] vt_console_print+0x286/0x3b0
[  231.171056]  [<ffffffff810ed2c3>] call_console_drivers.constprop.21+0x93/0xb0
[  231.171057]  [<ffffffff810ee18c>] console_unlock+0x3fc/0x480
[  231.171058]  [<ffffffff810ee566>] vprintk_emit+0x196/0x580
[  231.171061]  [<ffffffff81a91a2b>] printk+0x48/0x4a
[  231.171063]  [<ffffffff810f8621>] print_cpu_stall_info.isra.54+0x111/0x150
[  231.171064]  [<ffffffff810f9cd9>] rcu_check_callbacks+0x329/0x620
[  231.171066]  [<ffffffff810e0ae9>] ? trace_hardirqs_off_caller+0xb9/0x160
[  231.171067]  [<ffffffff810e0b9d>] ? trace_hardirqs_off+0xd/0x10
[  231.171069]  [<ffffffff81104870>] ? tick_nohz_handler+0xa0/0xa0
[  231.171071]  [<ffffffff810b1233>] update_process_times+0x43/0x80
[  231.171072]  [<ffffffff81104799>] tick_sched_handle.isra.14+0x29/0x60
[  231.171073]  [<ffffffff811048b7>] tick_sched_timer+0x47/0x70
[  231.171075]  [<ffffffff810c844f>] __run_hrtimer.isra.28+0x6f/0x120
[  231.171077]  [<ffffffff810c8d05>] hrtimer_interrupt+0xf5/0x240
[  231.171079]  [<ffffffff81008daf>] xen_timer_interrupt+0x2f/0x150
[  231.171081]  [<ffffffff81aa040b>] ? _raw_spin_unlock_irq+0x2b/0x50
[  231.171082]  [<ffffffff810e31cb>] ? trace_hardirqs_on_caller+0xfb/0x240
[  231.171084]  [<ffffffff810f0677>] handle_irq_event_percpu+0x47/0x190
[  231.171087]  [<ffffffff814cb119>] ? info_for_irq+0x9/0x20
[  231.171089]  [<ffffffff810f3962>] handle_percpu_irq+0x42/0x60
[  231.171091]  [<ffffffff814cdacd>] evtchn_fifo_handle_events+0x12d/0x140
[  231.171092]  [<ffffffff814cac78>] __xen_evtchn_do_upcall+0x48/0x90
[  231.171094]  [<ffffffff814cc81a>] xen_evtchn_do_upcall+0x2a/0x40
[  231.171096]  [<ffffffff81aa285e>] xen_do_hypervisor_callback+0x1e/0x30
[  231.171097]  <EOI>
[  231.171099]  [<ffffffff81109a5a>] ? generic_exec_single+0x8a/0xc0
[  231.171100]  [<ffffffff81109a81>] ? generic_exec_single+0xb1/0xc0
[  231.171102]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  231.171104]  [<ffffffff81109cc5>] ? smp_call_function_single+0xe5/0x1e0
[  231.171105]  [<ffffffff81aa0b33>] ? retint_restore_args+0x13/0x13
[  231.171107]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  231.171108]  [<ffffffff8110a03a>] ? smp_call_function_many+0x27a/0x2a0
[  231.171110]  [<ffffffff81007980>] ? xen_destroy_contiguous_region+0x160/0x160
[  231.171112]  [<ffffffff8100841e>] ? xen_exit_mmap+0xce/0x1a0
[  231.171113]  [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
[  231.171115]  [<ffffffff81169426>] ? exit_mmap+0x56/0x180
[  231.171117]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  231.171118]  [<ffffffff811ddde0>] ? exit_aio+0xb0/0xe0
[  231.171120]  [<ffffffff811ddd44>] ? exit_aio+0x14/0xe0
[  231.171121]  [<ffffffff810a2689>] ? mmput+0x59/0xe0
[  231.171123]  [<ffffffff8119a3a9>] ? flush_old_exec+0x439/0x830
[  231.171125]  [<ffffffff811e8cca>] ? load_elf_binary+0x32a/0x1a00
[  231.171126]  [<ffffffff81a9ffe6>] ? _raw_read_unlock+0x26/0x30
[  231.171128]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  231.171129]  [<ffffffff81199243>] ? search_binary_handler+0xc3/0x1b0
[  231.171131]  [<ffffffff810e738c>] ? lock_acquire+0xec/0x110
[  231.171132]  [<ffffffff810e717a>] ? lock_release+0x12a/0x250
[  231.171134]  [<ffffffff81199204>] ? search_binary_handler+0x84/0x1b0
[  231.171136]  [<ffffffff8119b252>] ? do_execve_common.isra.31+0x592/0x710
[  231.171137]  [<ffffffff8119b1e7>] ? do_execve_common.isra.31+0x527/0x710
[  231.171139]  [<ffffffff81186345>] ? kmem_cache_alloc+0xb5/0x120
[  231.171141]  [<ffffffff8119b3e3>] ? do_execve+0x13/0x20
[  231.171142]  [<ffffffff8119b658>] ? SyS_execve+0x38/0x60
[  231.171143]  [<ffffffff81aa19e9>] ? stub_execve+0x69/0xa0
[  231.171158] Code: c0 8b 55 b0 4d 89 f8 4d 89 f4 b9 08 00 00 00 eb 2f 66 0f 1f 44 00 00 41 0f be 04 24 29 f9 4d 8d 68 04 d3 f8 44 21 d0 41 8b 04 81 <44> 21 d8 31 f0 41 89 00 85 c9 75 06 49 83 c4 01 b1 08 4d 89 e8
[  231.171177] NMI backtrace for cpu 5
[  231.171179] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  231.171180] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  231.171181] task: ffff880059bc8000 ti: ffff880059bc6000 task.ti: ffff880059bc6000
[  231.171186] RIP: e030:[<ffffffff8100130a>]  [<ffffffff8100130a>] xen_hypercall_vcpu_op+0xa/0x20
[  231.171187] RSP: e02b:ffff88005f743c10  EFLAGS: 00000046
[  231.171188] RAX: 0000000000000000 RBX: 0000000000000005 RCX: ffffffff8100130a
[  231.171189] RDX: 00000000deadbeef RSI: 00000000deadbeef RDI: 00000000deadbeef
[  231.171190] RBP: ffff88005f743c28 R08: ffffffff822e7300 R09: 0000000000000000
[  231.171190] R10: 0000000000000000 R11: 0000000000000246 R12: ffffffff822e7300
[  231.171191] R13: ffffffff822e7300 R14: 0000000000000005 R15: 0000000000000005
[  231.171194] FS:  00007fd09116a700(0000) GS:ffff88005f740000(0000) knlGS:0000000000000000
[  231.171194] CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
[  231.171195] CR2: 00007f7beb1e2de2 CR3: 0000000057d90000 CR4: 0000000000000660
[  231.171197] Stack:
[  231.171199]  0000000000000000 0000000000000005 ffffffff814cc7cc ffff88005f743c58
[  231.171201]  ffffffff8100ad4a 0000000000002710 ffffffff822431c0 ffffffff822e7308
[  231.171202]  ffffffff822431c0 ffff88005f743c68 ffffffff8100bb69 ffff88005f743c88
[  231.171203] Call Trace:
[  231.171205]  <IRQ>
[  231.171209]  [<ffffffff814cc7cc>] ? xen_send_IPI_one+0x3c/0x60
[  231.171212]  [<ffffffff8100ad4a>] __xen_send_IPI_mask+0x2a/0x50
[  231.171214]  [<ffffffff8100bb69>] xen_send_IPI_all+0x29/0x80
[  231.171216]  [<ffffffff8103fffa>] arch_trigger_all_cpu_backtrace+0x4a/0x80
[  231.171219]  [<ffffffff810f9fb3>] rcu_check_callbacks+0x603/0x620
[  231.171221]  [<ffffffff81104870>] ? tick_nohz_handler+0xa0/0xa0
[  231.171223]  [<ffffffff810b1233>] update_process_times+0x43/0x80
[  231.171224]  [<ffffffff81104799>] tick_sched_handle.isra.14+0x29/0x60
[  231.171225]  [<ffffffff811048b7>] tick_sched_timer+0x47/0x70
[  231.171228]  [<ffffffff810c844f>] __run_hrtimer.isra.28+0x6f/0x120
[  231.171229]  [<ffffffff810c8d05>] hrtimer_interrupt+0xf5/0x240
[  231.171232]  [<ffffffff81008daf>] xen_timer_interrupt+0x2f/0x150
[  231.171235]  [<ffffffff810e0ae9>] ? trace_hardirqs_off_caller+0xb9/0x160
[  231.171237]  [<ffffffff810f0677>] handle_irq_event_percpu+0x47/0x190
[  231.171239]  [<ffffffff814cb119>] ? info_for_irq+0x9/0x20
[  231.171241]  [<ffffffff810f3962>] handle_percpu_irq+0x42/0x60
[  231.171243]  [<ffffffff814cdacd>] evtchn_fifo_handle_events+0x12d/0x140
[  231.171245]  [<ffffffff810c9c90>] ? raw_notifier_call_chain+0x20/0x20
[  231.171247]  [<ffffffff814cac78>] __xen_evtchn_do_upcall+0x48/0x90
[  231.171248]  [<ffffffff814cc81a>] xen_evtchn_do_upcall+0x2a/0x40
[  231.171252]  [<ffffffff81aa285e>] xen_do_hypervisor_callback+0x1e/0x30
[  231.171253]  <EOI>
[  231.171254]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  231.171255]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
[  231.171258]  [<ffffffff81008c10>] ? xen_safe_halt+0x10/0x20
[  231.171260]  [<ffffffff81018058>] ? default_idle+0x18/0x20
[  231.171261]  [<ffffffff8101886e>] ? arch_cpu_idle+0x2e/0x40
[  231.171263]  [<ffffffff810efbb1>] ? cpu_startup_entry+0x71/0x1f0
[  231.171264]  [<ffffffff8100b9c5>] ? cpu_bringup_and_idle+0x25/0x40
[  231.171278] Code: cc 51 41 53 50 b8 17 00 00 00 0f 05 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 18 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  231.171287] NMI backtrace for cpu 3
[  231.171290] CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  231.171291] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  231.171292] task: ffff880059bb52d0 ti: ffff880059bc2000 task.ti: ffff880059bc2000
[  231.171298] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
[  231.171299] RSP: e02b:ffff880059bc3ec8  EFLAGS: 00000246
[  231.171300] RAX: 0000000000000000 RBX: ffff880059bc3fd8 RCX: ffffffff810013aa
[  231.171301] RDX: ffff880059bb52d0 RSI: 00000000deadbeef RDI: 00000000deadbeef
[  231.171301] RBP: ffff880059bc3ee0 R08: 0000000000000000 R09: 0000000000000001
[  231.171302] R10: 0000000000000000 R11: 0000000000000246 R12: ffffffff822e7300
[  231.171302] R13: ffff880059bc3fd8 R14: ffff880059bc3fd8 R15: ffff880059bc3fd8
[  231.171306] FS:  00007fd09116a700(0000) GS:ffff88005f6c0000(0000) knlGS:0000000000000000
[  231.171306] CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
[  231.171307] CR2: ffffffffff600400 CR3: 0000000057d90000 CR4: 0000000000000660
[  231.171308] Stack:
[  231.171311]  0000000000000000 0100000000000000 ffffffff81008c10 ffff880059bc3ef0
[  231.171313]  ffffffff81018058 ffff880059bc3f00 ffffffff8101886e ffff880059bc3f40
[  231.171314]  ffffffff810efbb1 000000000000000a 0000000000000000 0000000000000000
[  231.171315] Call Trace:
[  231.171319]  [<ffffffff81008c10>] ? xen_safe_halt+0x10/0x20
[  231.171321]  [<ffffffff81018058>] default_idle+0x18/0x20
[  231.171322]  [<ffffffff8101886e>] arch_cpu_idle+0x2e/0x40
[  231.171325]  [<ffffffff810efbb1>] cpu_startup_entry+0x71/0x1f0
[  231.171326]  [<ffffffff8100b9c5>] cpu_bringup_and_idle+0x25/0x40
[  231.171341] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  231.171366] NMI backtrace for cpu 0
[  231.171370] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0-rc7-20140107-xendevel+ #1
[  231.171370] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  231.171372] task: ffffffff822134e0 ti: ffffffff82200000 task.ti: ffffffff82200000
[  231.171377] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>] xen_hypercall_sched_op+0xa/0x20
[  231.171379] RSP: e02b:ffffffff82201e40  EFLAGS: 00000246
[  231.171379] RAX: 0000000000000000 RBX: ffffffff82201fd8 RCX: ffffffff810013aa
[  231.171380] RDX: ffffffff822134e0 RSI: 00000000deadbeef RDI: 00000000deadbeef
[  231.171381] RBP: ffffffff82201e58 R08: 0000000000000000 R09: 0000000000000001
[  231.171381] R10: 0000000000000000 R11: 0000000000000246 R12: ffffffff822e7300
[  231.171382] R13: ffffffff82201fd8 R14: ffffffff82201fd8 R15: ffffffff82201fd8
[  231.171386] FS:  00007fd09116a700(0000) GS:ffff88005f600000(0000) knlGS:0000000000000000
[  231.171386] CS:  e033 DS: 0000 ES:
MDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDIzMS4xNzEzODddIENSMjogZmZmZmZm
ZmZmZjYwMDQwMCBDUjM6IDAwMDAwMDAwNTdkOTAwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYw
DQpbICAyMzEuMTcxMzg4XSBTdGFjazoNClsgIDIzMS4xNzEzOTFdICAwMDAwMDAwMDAwMDAw
MDAwIDAxMDAwMDAwMDAwMDAwMDAgZmZmZmZmZmY4MTAwOGMxMCBmZmZmZmZmZjgyMjAxZTY4
DQpbICAyMzEuMTcxMzkzXSAgZmZmZmZmZmY4MTAxODA1OCBmZmZmZmZmZjgyMjAxZTc4IGZm
ZmZmZmZmODEwMTg4NmUgZmZmZmZmZmY4MjIwMWViOA0KWyAgMjMxLjE3MTM5NV0gIGZmZmZm
ZmZmODEwZWZiYjEgZmZmZmZmZmY4MjNhNjBjMCAwMDAwMDAwMDAwMDAwMDAyIGZmZmZmZmZm
ODIzOWQwMjANClsgIDIzMS4xNzEzOTVdIENhbGwgVHJhY2U6DQpbICAyMzEuMTcxNDAwXSAg
WzxmZmZmZmZmZjgxMDA4YzEwPl0gPyB4ZW5fc2FmZV9oYWx0KzB4MTAvMHgyMA0KWyAgMjMx
LjE3MTQwMl0gIFs8ZmZmZmZmZmY4MTAxODA1OD5dIGRlZmF1bHRfaWRsZSsweDE4LzB4MjAN
ClsgIDIzMS4xNzE0MDNdICBbPGZmZmZmZmZmODEwMTg4NmU+XSBhcmNoX2NwdV9pZGxlKzB4
MmUvMHg0MA0KWyAgMjMxLjE3MTQwNV0gIFs8ZmZmZmZmZmY4MTBlZmJiMT5dIGNwdV9zdGFy
dHVwX2VudHJ5KzB4NzEvMHgxZjANClsgIDIzMS4xNzE0MTBdICBbPGZmZmZmZmZmODFhOGE4
MTc+XSByZXN0X2luaXQrMHhiNy8weGMwDQpbICAyMzEuMTcxNDExXSAgWzxmZmZmZmZmZjgx
YThhNzYwPl0gPyBjc3VtX3BhcnRpYWxfY29weV9nZW5lcmljKzB4MTcwLzB4MTcwDQpbICAy
MzEuMTcxNDE0XSAgWzxmZmZmZmZmZjgyMzBkZjAxPl0gc3RhcnRfa2VybmVsKzB4M2VlLzB4
M2ZiDQpbICAyMzEuMTcxNDE1XSAgWzxmZmZmZmZmZjgyMzBkOTEyPl0gPyByZXBhaXJfZW52
X3N0cmluZysweDVlLzB4NWUNClsgIDIzMS4xNzE0MTddICBbPGZmZmZmZmZmODIzMGQ1Zjg+
XSB4ODZfNjRfc3RhcnRfcmVzZXJ2YXRpb25zKzB4MmEvMHgyYw0KWyAgMjMxLjE3MTQxOV0g
IFs8ZmZmZmZmZmY4MjMxMGRjNj5dIHhlbl9zdGFydF9rZXJuZWwrMHg1NDYvMHg1NDgNClsg
IDIzMS4xNzE0MzJdIENvZGU6IGNjIDUxIDQxIDUzIGI4IDFjIDAwIDAwIDAwIDBmIDA1IDQx
IDViIDU5IGMzIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNj
IGNjIGNjIGNjIDUxIDQxIDUzIGI4IDFkIDAwIDAwIDAwIDBmIDA1IDw0MT4gNWIgNTkgYzMg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgDQpb
ICAyMzEuMTcxNDQ1XSBOTUkgYmFja3RyYWNlIGZvciBjcHUgMg0KWyAgMjMxLjE3MTQ0OV0g
Q1BVOiAyIFBJRDogMCBDb21tOiBzd2FwcGVyLzIgTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDIzMS4xNzE0NTBdIEhhcmR3YXJlIG5hbWU6IE1T
SSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8y
MDEwDQpbICAyMzEuMTcxNDUxXSB0YXNrOiBmZmZmODgwMDU5YmI0MjQwIHRpOiBmZmZmODgw
MDU5YmMwMDAwIHRhc2sudGk6IGZmZmY4ODAwNTliYzAwMDANClsgIDIzMS4xNzE0NTddIFJJ
UDogZTAzMDpbPGZmZmZmZmZmODEwMDEzYWE+XSAgWzxmZmZmZmZmZjgxMDAxM2FhPl0geGVu
X2h5cGVyY2FsbF9zY2hlZF9vcCsweGEvMHgyMA0KWyAgMjMxLjE3MTQ1OF0gUlNQOiBlMDJi
OmZmZmY4ODAwNTliYzFlYzggIEVGTEFHUzogMDAwMDAyNDYNClsgIDIzMS4xNzE0NThdIFJB
WDogMDAwMDAwMDAwMDAwMDAwMCBSQlg6IGZmZmY4ODAwNTliYzFmZDggUkNYOiBmZmZmZmZm
ZjgxMDAxM2FhDQpbICAyMzEuMTcxNDU5XSBSRFg6IGZmZmY4ODAwNTliYjQyNDAgUlNJOiAw
MDAwMDAwMGRlYWRiZWVmIFJESTogMDAwMDAwMDBkZWFkYmVlZg0KWyAgMjMxLjE3MTQ2MF0g
UkJQOiBmZmZmODgwMDU5YmMxZWUwIFIwODogMDAwMDAwMDAwMDAwMDAwMCBSMDk6IDAwMDAw
MDAwMDAwMDAwMDENClsgIDIzMS4xNzE0NjFdIFIxMDogMDAwMDAwMDAwMDAwMDAwMCBSMTE6
IDAwMDAwMDAwMDAwMDAyNDYgUjEyOiBmZmZmZmZmZjgyMmU3MzAwDQpbICAyMzEuMTcxNDYx
XSBSMTM6IGZmZmY4ODAwNTliYzFmZDggUjE0OiBmZmZmODgwMDU5YmMxZmQ4IFIxNTogZmZm
Zjg4MDA1OWJjMWZkOA0KWyAgMjMxLjE3MTQ2NV0gRlM6ICAwMDAwN2YxZTI2NWZiNzAwKDAw
MDApIEdTOmZmZmY4ODAwNWY2ODAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0K
WyAgMjMxLjE3MTQ2NV0gQ1M6ICBlMDMzIERTOiAwMDJiIEVTOiAwMDJiIENSMDogMDAwMDAw
MDA4MDA1MDAzYg0KWyAgMjMxLjE3MTQ2Nl0gQ1IyOiBmZmZmZmZmZmZmNjAwNDAwIENSMzog
MDAwMDAwMDA1OTM5YTAwMCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDIzMS4xNzE0Njdd
IFN0YWNrOg0KWyAgMjMxLjE3MTQ3MV0gIDAwMDAwMDAwMDAwMDAwMDAgMDEwMDAwMDAwMDAw
MDAwMCBmZmZmZmZmZjgxMDA4YzEwIGZmZmY4ODAwNTliYzFlZjANClsgIDIzMS4xNzE0NzNd
ICBmZmZmZmZmZjgxMDE4MDU4IGZmZmY4ODAwNTliYzFmMDAgZmZmZmZmZmY4MTAxODg2ZSBm
ZmZmODgwMDU5YmMxZjQwDQpbICAyMzEuMTcxNDc0XSAgZmZmZmZmZmY4MTBlZmJiMSAwMDAw
MDAwMDAwMDAwMDBhIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KWyAgMjMx
LjE3MTQ3NV0gQ2FsbCBUcmFjZToNClsgIDIzMS4xNzE0NzldICBbPGZmZmZmZmZmODEwMDhj
MTA+XSA/IHhlbl9zYWZlX2hhbHQrMHgxMC8weDIwDQpbICAyMzEuMTcxNDgxXSAgWzxmZmZm
ZmZmZjgxMDE4MDU4Pl0gZGVmYXVsdF9pZGxlKzB4MTgvMHgyMA0KWyAgMjMxLjE3MTQ4M10g
IFs8ZmZmZmZmZmY4MTAxODg2ZT5dIGFyY2hfY3B1X2lkbGUrMHgyZS8weDQwDQpbICAyMzEu
MTcxNDg2XSAgWzxmZmZmZmZmZjgxMGVmYmIxPl0gY3B1X3N0YXJ0dXBfZW50cnkrMHg3MS8w
eDFmMA0KWyAgMjMxLjE3MTQ4OF0gIFs8ZmZmZmZmZmY4MTAwYjljNT5dIGNwdV9icmluZ3Vw
X2FuZF9pZGxlKzB4MjUvMHg0MA0KWyAgMjMxLjE3MTUwM10gQ29kZTogY2MgNTEgNDEgNTMg
YjggMWMgMDAgMDAgMDAgMGYgMDUgNDEgNWIgNTkgYzMgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgNTEgNDEgNTMgYjggMWQgMDAgMDAg
MDAgMGYgMDUgPDQxPiA1YiA1OSBjMyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyANClsgIDIzMS4xNzE1MjddIE5NSSBiYWNrdHJhY2UgZm9y
IGNwdSA0DQpbICAyMzEuMTcxNTMwXSBDUFU6IDQgUElEOiAwIENvbW06IHN3YXBwZXIvNCBO
b3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMjMxLjE3
MTUzMF0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDAp
ICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDIzMS4xNzE1MzJdIHRhc2s6IGZmZmY4
ODAwNTliYjYzNjAgdGk6IGZmZmY4ODAwNTliYzQwMDAgdGFzay50aTogZmZmZjg4MDA1OWJj
NDAwMA0KWyAgMjMxLjE3MTUzNV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTAwMTNhYT5dICBb
PGZmZmZmZmZmODEwMDEzYWE+XSB4ZW5faHlwZXJjYWxsX3NjaGVkX29wKzB4YS8weDIwDQpb
ICAyMzEuMTcxNTM2XSBSU1A6IGUwMmI6ZmZmZjg4MDA1OWJjNWVjOCAgRUZMQUdTOiAwMDAw
MDI0Ng0KWyAgMjMxLjE3MTUzN10gUkFYOiAwMDAwMDAwMDAwMDAwMDAwIFJCWDogZmZmZjg4
MDA1OWJjNWZkOCBSQ1g6IGZmZmZmZmZmODEwMDEzYWENClsgIDIzMS4xNzE1MzhdIFJEWDog
ZmZmZjg4MDA1OWJiNjM2MCBSU0k6IDAwMDAwMDAwZGVhZGJlZWYgUkRJOiAwMDAwMDAwMGRl
YWRiZWVmDQpbICAyMzEuMTcxNTM4XSBSQlA6IGZmZmY4ODAwNTliYzVlZTAgUjA4OiAwMDAw
MDAwMDAwMDAwMDAwIFIwOTogMDAwMDAwMDAwMDAwMDAwMQ0KWyAgMjMxLjE3MTUzOV0gUjEw
OiAwMDAwMDAwMDAwMDAwMDAwIFIxMTogMDAwMDAwMDAwMDAwMDI0NiBSMTI6IGZmZmZmZmZm
ODIyZTczMDANClsgIDIzMS4xNzE1NDBdIFIxMzogZmZmZjg4MDA1OWJjNWZkOCBSMTQ6IGZm
ZmY4ODAwNTliYzVmZDggUjE1OiBmZmZmODgwMDU5YmM1ZmQ4DQpbICAyMzEuMTcxNTQzXSBG
UzogIDAwMDA3ZmQwOTExNmE3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1ZjcwMDAwMCgwMDAwKSBr
bmxHUzowMDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxNTQ0XSBDUzogIGUwMzMgRFM6IDAw
MmIgRVM6IDAwMmIgQ1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAyMzEuMTcxNTQ1XSBDUjI6
IDAwMDA3ZjI4ZWY1YWEwMDAgQ1IzOiAwMDAwMDAwMDU3ZDkwMDAwIENSNDogMDAwMDAwMDAw
MDAwMDY2MA0KWyAgMjMxLjE3MTU0Nl0gU3RhY2s6DQpbICAyMzEuMTcxNTQ5XSAgMDAwMDAw
MDAwMDAwMDAwMCAwMTAwMDAwMDAwMDAwMDAwIGZmZmZmZmZmODEwMDhjMTAgZmZmZjg4MDA1
OWJjNWVmMA0KWyAgMjMxLjE3MTU1MF0gIGZmZmZmZmZmODEwMTgwNTggZmZmZjg4MDA1OWJj
NWYwMCBmZmZmZmZmZjgxMDE4ODZlIGZmZmY4ODAwNTliYzVmNDANClsgIDIzMS4xNzE1NTJd
ICBmZmZmZmZmZjgxMGVmYmIxIDAwMDAwMDAwMDAwMDAwMGEgMDAwMDAwMDAwMDAwMDAwMCAw
MDAwMDAwMDAwMDAwMDAwDQpbICAyMzEuMTcxNTUyXSBDYWxsIFRyYWNlOg0KWyAgMjMxLjE3
MTU1NV0gIFs8ZmZmZmZmZmY4MTAwOGMxMD5dID8geGVuX3NhZmVfaGFsdCsweDEwLzB4MjAN
ClsgIDIzMS4xNzE1NTddICBbPGZmZmZmZmZmODEwMTgwNTg+XSBkZWZhdWx0X2lkbGUrMHgx
OC8weDIwDQpbICAyMzEuMTcxNTU4XSAgWzxmZmZmZmZmZjgxMDE4ODZlPl0gYXJjaF9jcHVf
aWRsZSsweDJlLzB4NDANClsgIDIzMS4xNzE1NjBdICBbPGZmZmZmZmZmODEwZWZiYjE+XSBj
cHVfc3RhcnR1cF9lbnRyeSsweDcxLzB4MWYwDQpbICAyMzEuMTcxNTYxXSAgWzxmZmZmZmZm
ZjgxMDBiOWM1Pl0gY3B1X2JyaW5ndXBfYW5kX2lkbGUrMHgyNS8weDQwDQpbICAyMzEuMTcx
NTc1XSBDb2RlOiBjYyA1MSA0MSA1MyBiOCAxYyAwMCAwMCAwMCAwZiAwNSA0MSA1YiA1OSBj
MyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyA1MSA0MSA1MyBiOCAxZCAwMCAwMCAwMCAwZiAwNSA8NDE+IDViIDU5IGMzIGNjIGNjIGNj
IGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIGNjIA0KWyAgMjMyLjcy
Nzk0NV0gCSAodD0xODQ2OCBqaWZmaWVzIGc9NTg5MiBjPTU4OTEgcT00MTI3KQ0KWyAgMjMy
LjczNTI4NV0gc2VuZGluZyBOTUkgdG8gYWxsIENQVXM6DQpbICAyMzIuNzQyNjE2XSBOTUkg
YmFja3RyYWNlIGZvciBjcHUgMQ0KWyAgMjMyLjc0OTkwMl0gQ1BVOiAxIFBJRDogOTY3OSBD
b21tOiB4ZW5kb21haW5zIE5vdCB0YWludGVkIDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2
ZWwrICMxDQpbICAyMzIuNzU3Mzc2XSBIYXJkd2FyZSBuYW1lOiBNU0kgTVMtNzY0MC84OTBG
WEEtR0Q3MCAoTVMtNzY0MCkgICwgQklPUyBWMS44QjEgMDkvMTMvMjAxMA0KWyAgMjMyLjc2
NDc5OV0gdGFzazogZmZmZjg4MDA1OGRlYzI0MCB0aTogZmZmZjg4MDA0Y2QzMjAwMCB0YXNr
LnRpOiBmZmZmODgwMDRjZDMyMDAwDQpbICAyMzIuNzcyMDgxXSBSSVA6IGUwMzA6WzxmZmZm
ZmZmZjgxMDAxMzBhPl0gIFs8ZmZmZmZmZmY4MTAwMTMwYT5dIHhlbl9oeXBlcmNhbGxfdmNw
dV9vcCsweGEvMHgyMA0KWyAgMjMyLjc3OTUyOF0gUlNQOiBlMDJiOmZmZmY4ODAwNWY2NDNj
MTAgIEVGTEFHUzogMDAwMDAwNDYNClsgIDIzMi43ODY5MDhdIFJBWDogMDAwMDAwMDAwMDAw
MDAwMCBSQlg6IDAwMDAwMDAwMDAwMDAwMDEgUkNYOiBmZmZmZmZmZjgxMDAxMzBhDQpbICAy
MzIuNzk0Mjc3XSBSRFg6IDAwMDAwMDAwZGVhZGJlZWYgUlNJOiAwMDAwMDAwMGRlYWRiZWVm
IFJESTogMDAwMDAwMDBkZWFkYmVlZg0KWyAgMjMyLjgwMTU0OF0gUkJQOiBmZmZmODgwMDVm
NjQzYzI4IFIwODogZmZmZmZmZmY4MjJlNzMwMCBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsg
IDIzMi44MDg3NzBdIFIxMDogMDAwMDAwMDAwMDAwMDAwMCBSMTE6IDAwMDAwMDAwMDAwMDAy
NDYgUjEyOiBmZmZmZmZmZjgyMmU3MzAwDQpbICAyMzIuODE1OTE4XSBSMTM6IGZmZmZmZmZm
ODIyZTczMDAgUjE0OiAwMDAwMDAwMDAwMDAwMDA1IFIxNTogMDAwMDAwMDAwMDAwMDAwMQ0K
WyAgMjMyLjgyMzA4OV0gRlM6ICAwMDAwN2YxNTJlYzRkNzAwKDAwMDApIEdTOmZmZmY4ODAw
NWY2NDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjMyLjgzMDMxMF0g
Q1M6ICBlMDMzIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAg
MjMyLjgzNzUzM10gQ1IyOiAwMDAwN2YxNTJlMmExZTAyIENSMzogMDAwMDAwMDA0Y2QyYTAw
MCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDIzMi44NDQ4NjhdIFN0YWNrOg0KWyAgMjMy
Ljg1MjA4Ml0gIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmZmZmZjgx
NGNjN2NjIGZmZmY4ODAwNWY2NDNjNTgNClsgIDIzMi44NTk0NTFdICBmZmZmZmZmZjgxMDBh
ZDRhIDAwMDAwMDAwMDAwMDI3MTAgZmZmZmZmZmY4MjI0MzFjMCBmZmZmZmZmZjgyMmU3MzA4
DQpbICAyMzIuODY2NzY0XSAgZmZmZmZmZmY4MjI0MzFjMCBmZmZmODgwMDVmNjQzYzY4IGZm
ZmZmZmZmODEwMGJiNjkgZmZmZjg4MDA1ZjY0M2M4OA0KWyAgMjMyLjg3NDA0MV0gQ2FsbCBU
cmFjZToNClsgIDIzMi44ODEyMDddICA8SVJRPiANClsgIDIzMi44ODEyNTddICBbPGZmZmZm
ZmZmODE0Y2M3Y2M+XSA/IHhlbl9zZW5kX0lQSV9vbmUrMHgzYy8weDYwDQpbICAyMzIuODk1
NDEyXSAgWzxmZmZmZmZmZjgxMDBhZDRhPl0gX194ZW5fc2VuZF9JUElfbWFzaysweDJhLzB4
NTANClsgIDIzMi45MDI2MjBdICBbPGZmZmZmZmZmODEwMGJiNjk+XSB4ZW5fc2VuZF9JUElf
YWxsKzB4MjkvMHg4MA0KWyAgMjMyLjkwOTcwMF0gIFs8ZmZmZmZmZmY4MTAzZmZmYT5dIGFy
Y2hfdHJpZ2dlcl9hbGxfY3B1X2JhY2t0cmFjZSsweDRhLzB4ODANClsgIDIzMi45MTY3OTRd
ICBbPGZmZmZmZmZmODEwZjlkNjg+XSByY3VfY2hlY2tfY2FsbGJhY2tzKzB4M2I4LzB4NjIw
DQpbICAyMzIuOTIzOTQyXSAgWzxmZmZmZmZmZjgxMGUwYjlkPl0gPyB0cmFjZV9oYXJkaXJx
c19vZmYrMHhkLzB4MTANClsgIDIzMi45MzEwNjRdICBbPGZmZmZmZmZmODExMDQ4NzA+XSA/
IHRpY2tfbm9oel9oYW5kbGVyKzB4YTAvMHhhMA0KWyAgMjMyLjkzODIxOV0gIFs8ZmZmZmZm
ZmY4MTBiMTIzMz5dIHVwZGF0ZV9wcm9jZXNzX3RpbWVzKzB4NDMvMHg4MA0KWyAgMjMyLjk0
NTQxMF0gIFs8ZmZmZmZmZmY4MTEwNDc5OT5dIHRpY2tfc2NoZWRfaGFuZGxlLmlzcmEuMTQr
MHgyOS8weDYwDQpbICAyMzIuOTUyNDI0XSAgWzxmZmZmZmZmZjgxMTA0OGI3Pl0gdGlja19z
Y2hlZF90aW1lcisweDQ3LzB4NzANClsgIDIzMi45NTkyMDNdICBbPGZmZmZmZmZmODEwYzg0
NGY+XSBfX3J1bl9ocnRpbWVyLmlzcmEuMjgrMHg2Zi8weDEyMA0KWyAgMjMyLjk2NjA1NV0g
IFs8ZmZmZmZmZmY4MTBjOGQwNT5dIGhydGltZXJfaW50ZXJydXB0KzB4ZjUvMHgyNDANClsg
IDIzMi45NzI3ODBdICBbPGZmZmZmZmZmODEwMDhkYWY+XSB4ZW5fdGltZXJfaW50ZXJydXB0
KzB4MmYvMHgxNTANClsgIDIzMi45Nzk0NThdICBbPGZmZmZmZmZmODFhYTA0MGI+XSA/IF9y
YXdfc3Bpbl91bmxvY2tfaXJxKzB4MmIvMHg1MA0KWyAgMjMyLjk4NjA4MV0gIFs8ZmZmZmZm
ZmY4MTBlMzFjYj5dID8gdHJhY2VfaGFyZGlycXNfb25fY2FsbGVyKzB4ZmIvMHgyNDANClsg
IDIzMi45OTI2ODNdICBbPGZmZmZmZmZmODEwZjA2Nzc+XSBoYW5kbGVfaXJxX2V2ZW50X3Bl
cmNwdSsweDQ3LzB4MTkwDQpbICAyMzIuOTk5MTQwXSAgWzxmZmZmZmZmZjgxNGNiMTE5Pl0g
PyBpbmZvX2Zvcl9pcnErMHg5LzB4MjANClsgIDIzMy4wMDU1NTRdICBbPGZmZmZmZmZmODEw
ZjM5NjI+XSBoYW5kbGVfcGVyY3B1X2lycSsweDQyLzB4NjANClsgIDIzMy4wMTE5MjldICBb
PGZmZmZmZmZmODE0Y2RhY2Q+XSBldnRjaG5fZmlmb19oYW5kbGVfZXZlbnRzKzB4MTJkLzB4
MTQwDQpbICAyMzMuMDE4NDA1XSAgWzxmZmZmZmZmZjgxNGNhYzc4Pl0gX194ZW5fZXZ0Y2hu
X2RvX3VwY2FsbCsweDQ4LzB4OTANClsgIDIzMy4wMjQ4ODddICBbPGZmZmZmZmZmODE0Y2M4
MWE+XSB4ZW5fZXZ0Y2huX2RvX3VwY2FsbCsweDJhLzB4NDANClsgIDIzMy4wMzEzMzFdICBb
PGZmZmZmZmZmODFhYTI4NWU+XSB4ZW5fZG9faHlwZXJ2aXNvcl9jYWxsYmFjaysweDFlLzB4
MzANClsgIDIzMy4wMzc2OTldICA8RU9JPiANClsgIDIzMy4wMzc3NDldICBbPGZmZmZmZmZm
ODExMDlhNWE+XSA/IGdlbmVyaWNfZXhlY19zaW5nbGUrMHg4YS8weGMwDQpbICAyMzMuMDUw
MzUzXSAgWzxmZmZmZmZmZjgxMTA5YTgxPl0gPyBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4YjEv
MHhjMA0KWyAgMjMzLjA1NjU5N10gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ry
b3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDIzMy4wNjI4NzBdICBbPGZm
ZmZmZmZmODExMDljYzU+XSA/IHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUw
DQpbICAyMzMuMDY5MDk3XSAgWzxmZmZmZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9y
ZV9hcmdzKzB4MTMvMHgxMw0KWyAgMjMzLjA3NTMzMV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5d
ID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDIzMy4w
ODE1NzFdICBbPGZmZmZmZmZmODExMGEwM2E+XSA/IHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkr
MHgyN2EvMHgyYTANClsgIDIzMy4wODc3ODVdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhl
bl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyMzMuMDk0MDI5
XSAgWzxmZmZmZmZmZjgxMDA4NDFlPl0gPyB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTANClsg
IDIzMy4xMDAzMDddICBbPGZmZmZmZmZmODEwMDEyMmE+XSA/IHhlbl9oeXBlcmNhbGxfeGVu
X3ZlcnNpb24rMHhhLzB4MjANClsgIDIzMy4xMDY2MDJdICBbPGZmZmZmZmZmODExNjk0MjY+
XSA/IGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAyMzMuMTEyOTE5XSAgWzxmZmZmZmZmZjgx
MGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDIzMy4xMTkwODhdICBb
PGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjMzLjEyNTAz
Nl0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAyMzMu
MTMwODkzXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gPyBtbXB1dCsweDU5LzB4ZTANClsgIDIz
My4xMzY2ODldICBbPGZmZmZmZmZmODExOWEzYTk+XSA/IGZsdXNoX29sZF9leGVjKzB4NDM5
LzB4ODMwDQpbICAyMzMuMTQyNTQ1XSAgWzxmZmZmZmZmZjgxMWU4Y2NhPl0gPyBsb2FkX2Vs
Zl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAyMzMuMTQ4MzI2XSAgWzxmZmZmZmZmZjgxYTlm
ZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMjMzLjE1NDAxMV0gIFs8
ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDIzMy4x
NTk1NjldICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisw
eGMzLzB4MWIwDQpbICAyMzMuMTY1MDcyXSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2Nr
X2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjMzLjE3MDQ2MF0gIFs8ZmZmZmZmZmY4MTBlNzE3
YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAyMzMuMTc1Nzg0XSAgWzxmZmZm
ZmZmZjgxMTk5MjA0Pl0gPyBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAg
MjMzLjE4MTE1Nl0gIFs8ZmZmZmZmZmY4MTE5YjI1Mj5dID8gZG9fZXhlY3ZlX2NvbW1vbi5p
c3JhLjMxKzB4NTkyLzB4NzEwDQpbICAyMzMuMTg2NDk0XSAgWzxmZmZmZmZmZjgxMTliMWU3
Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1MjcvMHg3MTANClsgIDIzMy4xOTE3
NTZdICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGttZW1fY2FjaGVfYWxsb2MrMHhiNS8weDEy
MA0KWyAgMjMzLjE5NzAwMl0gIFs8ZmZmZmZmZmY4MTE5YjNlMz5dID8gZG9fZXhlY3ZlKzB4
MTMvMHgyMA0KWyAgMjMzLjIwMjIzOF0gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dID8gU3lTX2V4
ZWN2ZSsweDM4LzB4NjANClsgIDIzMy4yMDczMTJdICBbPGZmZmZmZmZmODFhYTE5ZTk+XSA/
IHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMjMzLjIxMjMzOV0gQ29kZTogY2MgNTEgNDEg
NTMgNTAgYjggMTcgMDAgMDAgMDAgMGYgMDUgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2Mg
Y2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgY2MgNTEgNDEgNTMgYjggMTggMDAg
MDAgMDAgMGYgMDUgPDQxPiA1YiA1OSBjMyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBjYyBj
YyBjYyBjYyBjYyBjYyBjYyBjYyBjYyANClsgIDIzMy4yMjMxNjZdIElORk86IE5NSSBoYW5k
bGVyIChhcmNoX3RyaWdnZXJfYWxsX2NwdV9iYWNrdHJhY2VfaGFuZGxlcikgdG9vayB0b28g
bG9uZyB0byBydW46IDQ4MC41NDcgbXNlY3MNClsgIDIzMy4yMjMxNzVdIE5NSSBiYWNrdHJh
Y2UgZm9yIGNwdSAwDQpbICAyMzMuMjIzMTc4XSBDUFU6IDAgUElEOiAwIENvbW06IHN3YXBw
ZXIvMCBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAg
MjMzLjIyMzE3OF0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1T
LTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzL1sgIDIzOC45MTUzNzRdIEJsdWV0b290aDog
aGNpMCBsaW5rIHR4IHRpbWVvdXQNClsgIDIzOC45MjM2MzhdIEJsdWV0b290aDogaGNpMCBr
aWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjQwLjkx
NTIxNl0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQwLjkyMzQzMV0g
Qmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1
OjEzOjcyDQpbICAyNDAuOTMxMzM4XSBCbHVldG9vdGg6IGhjaTAgbGluayB0eCB0aW1lb3V0
DQpbICAyNDAuOTMyNzg1XSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGlt
ZW91dA0KWyAgMjQwLjk0NzA1OV0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBj
b25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAyNDIuOTU4NDU4XSBCbHVldG9vdGg6
IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGltZW91dA0KWyAgMjQ0LjkxMzk3Nl0gQmx1ZXRv
b3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQ0LjkyMTMzMF0gQmx1ZXRvb3RoOiBo
Y2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAy
NDQuOTcwODMwXSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAweDA0MDYgdHggdGltZW91dA0K
WyAgMjQ2LjkxNjU3Nl0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjQ2
LjkyMzgxNV0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAz
OjEyOjA5OjI1OjEzOjcyDQpbICAyNDYuOTgzMTc4XSBCbHVldG9vdGg6IGhjaTAgY29tbWFu
ZCAweDA0MDYgdHggdGltZW91dA0KWyAgMjQ4Ljk5NTQ4M10gQmx1ZXRvb3RoOiBoY2kwIGNv
bW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI1MS4zMjc2NTVdIGF0YTEuMDA6IHFjIHRp
bWVvdXQgKGNtZCAweGVjKQ0KWyAgMjUxLjMzNDY4M10gYXRhMS4wMDogZmFpbGVkIHRvIElE
RU5USUZZIChJL08gZXJyb3IsIGVycl9tYXNrPTB4NCkNClsgIDI1MS4zNDE3NDZdIGF0YTEu
MDA6IHJldmFsaWRhdGlvbiBmYWlsZWQgKGVycm5vPS01KQ0KWyAgMjUxLjM0ODczMV0gYXRh
MS4wMDogZGlzYWJsZWQNClsgIDI1MS4zNTU2ODFdIGF0YTEuMDA6IGRldmljZSByZXBvcnRl
ZCBpbnZhbGlkIENIUyBzZWN0b3IgMA0KWyAgMjUxLjM2MjU3M10gYXRhMS4wMDogZGV2aWNl
IHJlcG9ydGVkIGludmFsaWQgQ0hTIHNlY3RvciAwDQpbICAyNTEuMzY5MzcyXSBhdGExLjAw
OiBkZXZpY2UgcmVwb3J0ZWQgaW52YWxpZCBDSFMgc2VjdG9yIDANClsgIDI1MS4zNzYwNjVd
IGF0YTEuMDA6IGRldmljZSByZXBvcnRlZCBpbnZhbGlkIENIUyBzZWN0b3IgMA0KWyAgMjUx
LjM4MjYxNF0gYXRhMS4wMDogZGV2aWNlIHJlcG9ydGVkIGludmFsaWQgQ0hTIHNlY3RvciAw
DQpbICAyNTEuMzg5MDc1XSBhdGExOiBoYXJkIHJlc2V0dGluZyBsaW5rDQpbICAyNTEuODgw
ODI4XSBhdGExOiBTQVRBIGxpbmsgdXAgMS41IEdicHMgKFNTdGF0dXMgMTEzIFNDb250cm9s
IDMxMCkNClsgIDI1MS44ODc0MDNdIGF0YTE6IEVIIGNvbXBsZXRlDQpbICAyNTEuODkzOTEz
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUxLjkwMDQ0
M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUxLjkwNjc4Ml0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUxLjkxMjk1N10g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1MS45MTkwMDldIFJlYWQoMTApOiAyOCAw
MCBjYyA2ZSAzYSBiOSAwMCAwMCAwOCAwMA0KWyAgMjUxLjkyNTExNl0gZW5kX3JlcXVlc3Q6
IEkvTyBlcnJvciwgZGV2IHNkYSwgc2VjdG9yIDM0Mjk3NzYwNTcNClsgIDI1MS45MzExNjNd
IHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTEuOTM3MDU1
XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTEuOTQyODIxXSBSZXN1bHQ6IGhvc3RieXRl
PURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTEuOTQ4NTYwXSBz
ZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUxLjk1NDIzMF0gUmVhZCgxMCk6IDI4IDAw
IGQwIDhlIDI4IGUxIDAwIDAwIDA4IDAwDQpbICAyNTEuOTU5ODk2XSBlbmRfcmVxdWVzdDog
SS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMzQ5ODk3NzUwNQ0KWyAgMjUxLjk2NTU4MV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1MS45NzEzMDVd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1MS45NzY5MThdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1MS45ODI2NzddIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTEuOTg4MzA3XSBSZWFkKDEwKTogMjggMDAg
ZDAgYzIgZWQgODEgMDAgMDAgMjAgMDANClsgIDI1MS45OTM5NTldIGVuZF9yZXF1ZXN0OiBJ
L08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAzNTAyNDM1NzEzDQpbICAyNTEuOTk5NjExXSBz
ZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjAwNTE5MF0g
c2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjAxMDY3M10gUmVzdWx0OiBob3N0Ynl0ZT1E
SURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjAxNjI0N10gc2Qg
MDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4wMjE3NzFdIFdyaXRlKDEwKTogMmEgMDAg
MDQgMmUgNjcgZTkgMDAgMDAgZTAgMDANClsgIDI1Mi4wMjczMThdIGVuZF9yZXF1ZXN0OiBJ
L08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciA3MDE1MDEyMQ0KWyAgMjUyLjAzMjk1N10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4wMzMxOTVdIEFi
b3J0aW5nIGpvdXJuYWwgb24gZGV2aWNlIGRtLTAtOC4NClsgIDI1Mi4wMzMyNTldIHNkIDA6
MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMDMzMjYxXSBzZCAw
OjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMDMzMjYyXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMDMzMjYzXSBzZCAwOjA6
MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjAzMzI2OF0gV3JpdGUoMTApOiAyYSAwMCAwNCAy
ZSA1NSAwMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjAzMzI3MF0gZW5kX3JlcXVlc3Q6IEkvTyBl
cnJvciwgZGV2IHNkYSwgc2VjdG9yIDcwMTQ1MjgxDQpbICAyNTIuMDMzMjc1XSBCdWZmZXIg
SS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDYzMjQyMjQNClsgIDI1
Mi4wMzMyNzZdIGxvc3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVycm9yIG9uIGRtLTANClsg
IDI1Mi4wMzMzMThdIEpCRDI6IEVycm9yIC01IGRldGVjdGVkIHdoZW4gdXBkYXRpbmcgam91
cm5hbCBzdXBlcmJsb2NrIGZvciBkbS0wLTguDQpbICAyNTIuMDMzNDc3XSBzZCAwOjA6MDow
OiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjAzMzQ3OV0gc2QgMDowOjA6
MDogW3NkYV0gIA0KWyAgMjUyLjAzMzQ4MV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RB
UkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjAzMzQ4Ml0gc2QgMDowOjA6MDog
W3NkYV0gQ0RCOiANClsgIDI1Mi4wMzM0ODZdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUg
MDEgMDAgMDAgMDggMDANClsgIDI1Mi4wMzM0ODhdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3Is
IGRldiBzZGEsIHNlY3RvciAxOTU1MTQ4OQ0KWyAgMjUyLjAzMzQ5M10gQnVmZmVyIEkvTyBl
cnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayAwDQpbICAyNTIuMDMzNDk0XSBs
b3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDMzNTQ1
XSBFWFQ0LWZzIGVycm9yIChkZXZpY2UgZG0tMCk6IGV4dDRfam91cm5hbF9jaGVja19zdGFy
dDo1NjogRGV0ZWN0ZWQgYWJvcnRlZCBqb3VybmFsDQpbICAyNTIuMDMzNTQ3XSBFWFQ0LWZz
IChkbS0wKTogUmVtb3VudGluZyBmaWxlc3lzdGVtIHJlYWQtb25seQ0KWyAgMjUyLjAzMzU0
OV0gRVhUNC1mcyAoZG0tMCk6IHByZXZpb3VzIEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRl
dGVjdGVkDQpbICAyNTIuMDMzNTg1XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJy
b3IgY29kZQ0KWyAgMjUyLjAzMzU4Nl0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjAz
MzU4N10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZF
Ul9PSw0KWyAgMjUyLjAzMzU4OF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4w
MzM1OTFdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1
Mi4wMzM1OTNdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxOTU1
MTQ4OQ0KWyAgMjUyLjAzMzU5NV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwg
bG9naWNhbCBibG9jayAwDQpbICAyNTIuMDMzNTk2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRv
IEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3NDU5XSBFWFQ0LWZzIGVycm9yIChkZXZp
Y2UgZG0tMCkgaW4gZXh0NF9vcnBoYW5fYWRkOjI2MTQ6IEpvdXJuYWwgaGFzIGFib3J0ZWQN
ClsgIDI1Mi4wNDc0NjJdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91cyBJL08gZXJyb3IgdG8g
c3VwZXJibG9jayBkZXRlY3RlZA0KWyAgMjUyLjA0NzQ4M10gRVhUNC1mcyBlcnJvciAoZGV2
aWNlIGRtLTApIGluIGV4dDRfb3JwaGFuX2FkZDoyNjE0OiBKb3VybmFsIGhhcyBhYm9ydGVk
DQpbICAyNTIuMDQ3NjM0XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29k
ZQ0KWyAgMjUyLjA0NzYzNl0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjA0NzYzNl0g
UmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0K
WyAgMjUyLjA0NzYzN10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4wNDc2NDFd
IFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1Mi4wNDc2
NDNdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3RvciAxOTU1MTQ4OQ0K
WyAgMjUyLjA0NzY0Nl0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNh
bCBibG9jayAwDQpbICAyNTIuMDQ3NjQ2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBl
cnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3OTEzXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRs
ZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjA0NzkxNF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAg
MjUyLjA0NzkxNV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRl
PURSSVZFUl9PSw0KWyAgMjUyLjA0NzkxNl0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsg
IDI1Mi4wNDc5MjBdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDAN
ClsgIDI1Mi4wNDc5MjFdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBzZGEsIHNlY3Rv
ciAxOTU1MTQ4OQ0KWyAgMjUyLjA0NzkyNF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2Ug
ZG0tMCwgbG9naWNhbCBibG9jayAwDQpbICAyNTIuMDQ3OTI1XSBsb3N0IHBhZ2Ugd3JpdGUg
ZHVlIHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMDQ3OTgxXSBFWFQ0LWZzIGVycm9y
IChkZXZpY2UgZG0tMCkgaW4gZXh0NF9yZXNlcnZlX2lub2RlX3dyaXRlOjQ4NDI6IEpvdXJu
YWwgaGFzIGFib3J0ZWQNClsgIDI1Mi4wNDc5ODNdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91
cyBJL08gZXJyb3IgdG8gc3VwZXJibG9jayBkZXRlY3RlZA0KWyAgMjUyLjA0ODE2N10gc2Qg
MDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4wNDgxNjhdIHNk
IDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4wNDgxNjldIFJlc3VsdDogaG9zdGJ5dGU9RElE
X0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4wNDgxNzBdIHNkIDA6
MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMDQ4MTc0XSBXcml0ZSgxMCk6IDJhIDAwIDAx
IDJhIDU1IDAxIDAwIDAwIDA4IDAwDQpbICAyNTIuMDQ4MTc1XSBlbmRfcmVxdWVzdDogSS9P
IGVycm9yLCBkZXYgc2RhLCBzZWN0b3IgMTk1NTE0ODkNClsgIDI1Mi4wNDgxNzhdIEJ1ZmZl
ciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgMA0KWyAgMjUyLjA0
ODE3OF0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24gZG0tMA0KWyAgMjUy
LjA2MDcxNl0gRVhUNC1mcyBlcnJvciAoZGV2aWNlIGRtLTApIGluIGV4dDRfcmVzZXJ2ZV9p
bm9kZV93cml0ZTo0ODQyOiBKb3VybmFsIGhhcyBhYm9ydGVkDQpbICAyNTIuMDYwNzE4XSBF
WFQ0LWZzIChkbS0wKTogcHJldmlvdXMgSS9PIGVycm9yIHRvIHN1cGVyYmxvY2sgZGV0ZWN0
ZWQNClsgIDI1Mi4wNjA3NTZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlDQpbICAyNTIuMDYwNzU3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMDYwNzU3
XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
DQpbICAyNTIuMDYwNzU4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjA2MDc2
Ml0gV3JpdGUoMTApOiAyYSAwMCAwMSAyYSA1NSAwMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjA2
MDc2NV0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayAw
DQpbICAyNTIuMDYwNzY2XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRvIEkvTyBlcnJvciBvbiBk
bS0wDQpbICAyNTIuMTgzNjM5XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3Ig
Y29kZQ0KWyAgMjUyLjE4MzY0MF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4MzY0
MV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9P
Sw0KWyAgMjUyLjE4MzY0Ml0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM2
NDZdIFdyaXRlKDEwKTogMmEgMDAgMDEgMmEgNTUgMjEgMDAgMDAgMDggMDANClsgIDI1Mi4x
ODM2NTJdIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sg
NA0KWyAgMjUyLjE4MzY1Ml0gbG9zdCBwYWdlIHdyaXRlIGR1ZSB0byBJL08gZXJyb3Igb24g
ZG0tMA0KWyAgMjUyLjE4MzY5Ml0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4xODM2OTNdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM2
OTNdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4xODM2OTRdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgz
Njk3XSBXcml0ZSgxMCk6IDJhIDAwIDAxIGFhIDU1IDA5IDAwIDAwIDA4IDAwDQpbICAyNTIu
MTgzNzAwXSBCdWZmZXIgSS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2Nr
IDEwNDg1NzcNClsgIDI1Mi4xODM3MDBdIGxvc3QgcGFnZSB3cml0ZSBkdWUgdG8gSS9PIGVy
cm9yIG9uIGRtLTANClsgIDI1Mi4xODM3MDhdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzNzA5XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTgzNzA5XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTgzNzEwXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4MzcxM10gV3JpdGUoMTApOiAyYSAwMCAwMSBlYSA1NSAwOSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4MzcxNl0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNh
bCBibG9jayAxNTcyODY1DQpbICAyNTIuMTgzNzE3XSBsb3N0IHBhZ2Ugd3JpdGUgZHVlIHRv
IEkvTyBlcnJvciBvbiBkbS0wDQpbICAyNTIuMTgzNzI0XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4MzcyNV0gc2QgMDowOjA6MDogW3NkYV0g
IA0KWyAgMjUyLjE4MzcyNl0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4MzcyNl0gc2QgMDowOjA6MDogW3NkYV0gQ0RC
OiANClsgIDI1Mi4xODM3MjldIFdyaXRlKDEwKTogMmEgMDAgMDEgZWEgNTUgMzEgMDAgMDAg
MDggMDANClsgIDI1Mi4xODM3MzldIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJv
ciBjb2RlDQpbICAyNTIuMTgzNzQwXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgz
NzQxXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVS
X09LDQpbICAyNTIuMTgzNzQxXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4
Mzc0NF0gV3JpdGUoMTApOiAyYSAwMCAwMyA2YSA1NSAzOSAwMCAwMCAwOCAwMA0KWyAgMjUy
LjE4Mzc1NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1
Mi4xODM3NTVdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM3NTZdIFJlc3VsdDog
aG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4x
ODM3NTZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzNzU5XSBXcml0ZSgx
MCk6IDJhIDAwIDAzIDZhIDU1IDc5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzNzY4XSBzZCAw
OjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzc2OV0gc2Qg
MDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4Mzc2OV0gUmVzdWx0OiBob3N0Ynl0ZT1ESURf
QkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4Mzc3MF0gc2QgMDow
OjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM3NzNdIFdyaXRlKDEwKTogMmEgMDAgMDUg
NmEgNTUgMjkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM3ODJdIHNkIDA6MDowOjA6IFtzZGFd
IFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzNzgzXSBzZCAwOjA6MDowOiBbc2Rh
XSAgDQpbICAyNTIuMTgzNzg0XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRy
aXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzNzg0XSBzZCAwOjA6MDowOiBbc2RhXSBD
REI6IA0KWyAgMjUyLjE4Mzc4N10gV3JpdGUoMTApOiAyYSAwMCAwNSA2YSA1NSA0MSAwMCAw
MCAxMCAwMA0KWyAgMjUyLjE4Mzc5OF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVy
cm9yIGNvZGUNClsgIDI1Mi4xODM3OThdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4x
ODM3OTldIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklW
RVJfT0sNClsgIDI1Mi4xODM4MDBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIu
MTgzODAzXSBXcml0ZSgxMCk6IDJhIDAwIDA1IDZhIDU1IDg5IDAwIDAwIDA4IDAwDQpbICAy
NTIuMTgzODEyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAg
MjUyLjE4MzgxM10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4MzgxNF0gUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUy
LjE4MzgxNF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM4MTddIFdyaXRl
KDEwKTogMmEgMDAgMDUgNmEgNTggZDkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM4NDRdIHNk
IDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzODQ0XSBz
ZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgzODQ1XSBSZXN1bHQ6IGhvc3RieXRlPURJ
RF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzODQ2XSBzZCAw
OjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4Mzg0OV0gV3JpdGUoMTApOiAyYSAwMCAw
NSA2YSA1OCBlOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzg1N10gc2QgMDowOjA6MDogW3Nk
YV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODM4NThdIHNkIDA6MDowOjA6IFtz
ZGFdICANClsgIDI1Mi4xODM4NTldIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQg
ZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODM4NTldIHNkIDA6MDowOjA6IFtzZGFd
IENEQjogDQpbICAyNTIuMTgzODYyXSBXcml0ZSgxMCk6IDJhIDAwIDA1IDZhIDVhIDY5IDAw
IDAwIDA4IDAwDQpbICAyNTIuMTgzODcyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQg
ZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzg3Ml0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUy
LjE4Mzg3M10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURS
SVZFUl9PSw0KWyAgMjUyLjE4Mzg3NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1
Mi4xODM4NzddIFdyaXRlKDEwKTogMmEgMDAgMDUgNmIgNWYgNjkgMDAgMDAgMDggMDANClsg
IDI1Mi4xODM4ODVdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpb
ICAyNTIuMTgzODg2XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTgzODg2XSBSZXN1
bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAy
NTIuMTgzODg3XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4Mzg5MF0gV3Jp
dGUoMTApOiAyYSAwMCAwNSA2YiA2MiBiMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzg5OV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODM5MDBd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM5MDFdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODM5MDJdIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzOTA0XSBXcml0ZSgxMCk6IDJhIDAw
IDA1IDZiIDY1IGU5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzOTEzXSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4MzkxNF0gc2QgMDowOjA6MDog
W3NkYV0gIA0KWyAgMjUyLjE4MzkxNF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdF
VCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4MzkxNV0gc2QgMDowOjA6MDogW3Nk
YV0gQ0RCOiANClsgIDI1Mi4xODM5MThdIFdyaXRlKDEwKTogMmEgMDAgMDUgNmIgODcgMjkg
MDAgMDAgMDggMDANClsgIDI1Mi4xODM5MjddIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzOTI3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTgzOTI4XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTgzOTI5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4MzkzMl0gV3JpdGUoMTApOiAyYSAwMCAwNSA2YiBhNyAyMSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4Mzk0MF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODM5NDFdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODM5NDFdIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODM5NDJdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTgzOTQ1XSBX
cml0ZSgxMCk6IDJhIDAwIDA1IDZlIDk2IDE5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTgzOTUy
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4Mzk1
M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4Mzk1M10gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4Mzk1NF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODM5NTddIFdyaXRlKDEwKTogMmEg
MDAgMDUgNmYgMTYgNzkgMDAgMDAgMDggMDANClsgIDI1Mi4xODM5NjRdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTgzOTY0XSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTgzOTY1XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTgzOTY2XSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4Mzk2OF0gV3JpdGUoMTApOiAyYSAwMCAwNSA2ZiAxNyBi
MSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4Mzk3M10gRVhUNC1mcyB3YXJuaW5nIChkZXZpY2Ug
ZG0tMCk6IGV4dDRfZW5kX2JpbzozMTc6IEkvTyBlcnJvciB3cml0aW5nIHRvIGlub2RlIDIy
MzgyNjMgKG9mZnNldCAwIHNpemUgNDA5NiBzdGFydGluZyBibG9jayA4OTUxODk0KQ0KWyAg
MjUyLjE4NDA1NF0gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBi
bG9jayA4OTUxODk0DQpbICAyNTIuMTg0MDYxXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRs
ZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDA2MV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAg
MjUyLjE4NDA2Ml0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRl
PURSSVZFUl9PSw0KWyAgMjUyLjE4NDA2M10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsg
IDI1Mi4xODQwNjZdIFdyaXRlKDEwKTogMmEgMDAgMDUgNmYgMmEgMjkgMDAgMDAgMDggMDAN
ClsgIDI1Mi4xODQwNzBdIEVYVDQtZnMgd2FybmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2Vu
ZF9iaW86MzE3OiBJL08gZXJyb3Igd3JpdGluZyB0byBpbm9kZSAyMjM4MjYyIChvZmZzZXQg
MCBzaXplIDQwOTYgc3RhcnRpbmcgYmxvY2sgODk1MjQ4NSkNClsgIDI1Mi4xODQwNzJdIEJ1
ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAsIGxvZ2ljYWwgYmxvY2sgODk1MjQ4NQ0K
WyAgMjUyLjE4NDA3OF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODQwNzldIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQwNzldIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODQwODBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MDgzXSBX
cml0ZSgxMCk6IDJhIDAwIDA1IDczIDE2IGI5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MDkw
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDA5
MF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDA5MV0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDA5Ml0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQwOTVdIFdyaXRlKDEwKTogMmEg
MDAgMDUgNzMgMjMgMTEgMDAgMDAgMDggMDANClsgIDI1Mi4xODQxMDJdIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MTAzXSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTg0MTA0XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MTA0XSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4NDEwN10gV3JpdGUoMTApOiAyYSAwMCAwNSA3MyBjNiAy
OSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDExNF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5k
bGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQxMTVdIHNkIDA6MDowOjA6IFtzZGFdICANClsg
IDI1Mi4xODQxMTVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0
ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQxMTZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpb
ICAyNTIuMTg0MTE5XSBXcml0ZSgxMCk6IDJhIDAwIDA1IDc1IDg2IDgxIDAwIDAwIDA4IDAw
DQpbICAyNTIuMTg0MTIzXSBFWFQ0LWZzIHdhcm5pbmcgKGRldmljZSBkbS0wKTogZXh0NF9l
bmRfYmlvOjMxNzogSS9PIGVycm9yIHdyaXRpbmcgdG8gaW5vZGUgMjYyMTQ3MyAob2Zmc2V0
IDAgc2l6ZSA0MDk2IHN0YXJ0aW5nIGJsb2NrIDkwMDQ1OTIpDQpbICAyNTIuMTg0MTI1XSBC
dWZmZXIgSS9PIGVycm9yIG9uIGRldmljZSBkbS0wLCBsb2dpY2FsIGJsb2NrIDkwMDQ1OTIN
ClsgIDI1Mi4xODQxMzBdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2Rl
DQpbICAyNTIuMTg0MTMxXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MTMxXSBS
ZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpb
ICAyNTIuMTg0MTMyXSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDEzNV0g
V3JpdGUoMTApOiAyYSAwMCAwNSA3NSBjNSAwOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDEz
OF0gRVhUNC1mcyB3YXJuaW5nIChkZXZpY2UgZG0tMCk6IGV4dDRfZW5kX2JpbzozMTc6IEkv
TyBlcnJvciB3cml0aW5nIHRvIGlub2RlIDI2MjE1OTcgKG9mZnNldCAwIHNpemUgNDA5NiBz
dGFydGluZyBibG9jayA5MDA2NTkzKQ0KWyAgMjUyLjE4NDE0MF0gQnVmZmVyIEkvTyBlcnJv
ciBvbiBkZXZpY2UgZG0tMCwgbG9naWNhbCBibG9jayA5MDA2NTkzDQpbICAyNTIuMTg0MTQ5
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDE0
OV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDE1OV0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDE2MF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQxNjNdIFdyaXRlKDEwKTogMmEg
MDAgMDUgNzcgMDUgMDkgMDAgMDAgMDggMDANClsgIDI1Mi4xODQxNjVdIEVYVDQtZnMgd2Fy
bmluZyAoZGV2aWNlIGRtLTApOiBleHQ0X2VuZF9iaW86MzE3OiBJL08gZXJyb3Igd3JpdGlu
ZyB0byBpbm9kZSAyNzU4NDA4IChvZmZzZXQgMCBzaXplIDAgc3RhcnRpbmcgYmxvY2sgOTAx
NjgzMykNClsgIDI1Mi4xODQxNjddIEJ1ZmZlciBJL08gZXJyb3Igb24gZGV2aWNlIGRtLTAs
IGxvZ2ljYWwgYmxvY2sgOTAxNjgzMw0KWyAgMjUyLjE4NDE3MV0gc2QgMDowOjA6MDogW3Nk
YV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQxNzJdIHNkIDA6MDowOjA6IFtz
ZGFdICANClsgIDI1Mi4xODQxNzJdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQg
ZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQxNzNdIHNkIDA6MDowOjA6IFtzZGFd
IENEQjogDQpbICAyNTIuMTg0MTc2XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDJhIDU1IDgxIDAw
IDAwIDA4IDAwDQpbICAyNTIuMTg0MTgyXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQg
ZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDE4M10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUy
LjE4NDE4M10gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURS
SVZFUl9PSw0KWyAgMjUyLjE4NDE4NF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1
Mi4xODQxODZdIFdyaXRlKDEwKTogMmEgMDAgMDYgMmEgNTYgMDkgMDAgMDAgMDggMDANClsg
IDI1Mi4xODQxOTJdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpb
ICAyNTIuMTg0MTkzXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MTkzXSBSZXN1
bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAy
NTIuMTg0MTk0XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDE5N10gV3Jp
dGUoMTApOiAyYSAwMCAwNiAyYSA1NiAxOSAwMCAwMCAzMCAwMA0KWyAgMjUyLjE4NDIxMV0g
c2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQyMTFd
IHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQyMTJdIFJlc3VsdDogaG9zdGJ5dGU9
RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQyMTNdIHNk
IDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MjE1XSBXcml0ZSgxMCk6IDJhIDAw
IDA2IDJhIDU2IDUxIDAwIDAwIDIwIDAwDQpbICAyNTIuMTg0MjI3XSBzZCAwOjA6MDowOiBb
c2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDIyN10gc2QgMDowOjA6MDog
W3NkYV0gIA0KWyAgMjUyLjE4NDIyOF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdF
VCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDIyOV0gc2QgMDowOjA6MDogW3Nk
YV0gQ0RCOiANClsgIDI1Mi4xODQyMzFdIFdyaXRlKDEwKTogMmEgMDAgMDYgMmEgODIgYTEg
MDAgMDAgMDggMDANClsgIDI1Mi4xODQyMzddIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxl
ZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MjM4XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAy
NTIuMTg0MjM4XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9
RFJJVkVSX09LDQpbICAyNTIuMTg0MjM5XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAg
MjUyLjE4NDI0MV0gV3JpdGUoMTApOiAyYSAwMCAwNiAyYSA4MiBjMSAwMCAwMCAwOCAwMA0K
WyAgMjUyLjE4NDI0Nl0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUN
ClsgIDI1Mi4xODQyNDddIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQyNDhdIFJl
c3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsg
IDI1Mi4xODQyNDhdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MjUxXSBX
cml0ZSgxMCk6IDJhIDAwIDA2IDJiIDdkIGE5IDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MjU2
XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDI1
N10gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDI1OF0gUmVzdWx0OiBob3N0Ynl0
ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDI1OF0g
c2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQyNjFdIFdyaXRlKDEwKTogMmEg
MDAgMDYgMmIgYTcgYzEgMDAgMDAgMTggMDANClsgIDI1Mi4xODQyNjldIHNkIDA6MDowOjA6
IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0MjcwXSBzZCAwOjA6MDow
OiBbc2RhXSAgDQpbICAyNTIuMTg0MjcxXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFS
R0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MjcxXSBzZCAwOjA6MDowOiBb
c2RhXSBDREI6IA0KWyAgMjUyLjE4NDI3NF0gV3JpdGUoMTApOiAyYSAwMCAwNiA2YSA1NiAw
MSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDI4MF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5k
bGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQyODBdIHNkIDA6MDowOjA6IFtzZGFdICANClsg
IDI1Mi4xODQyODFdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0
ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQyODFdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpb
ICAyNTIuMTg0Mjg0XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDZhIDU2IGYxIDAwIDAwIDA4IDAw
DQpbICAyNTIuMTg0MjkwXSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29k
ZQ0KWyAgMjUyLjE4NDI5MV0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDI5MV0g
UmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0K
WyAgMjUyLjE4NDI5Ml0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQyOTRd
IFdyaXRlKDEwKTogMmEgMDAgMDYgNmEgNTggMjEgMDAgMDAgMDggMDANClsgIDI1Mi4xODQz
MDBdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0
MzAxXSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MzAxXSBSZXN1bHQ6IGhvc3Ri
eXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0MzAy
XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDMwNV0gV3JpdGUoMTApOiAy
YSAwMCAwNiA2YSA2ZSBkOSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4NDMxMV0gc2QgMDowOjA6
MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4xODQzMTFdIHNkIDA6MDow
OjA6IFtzZGFdICANClsgIDI1Mi4xODQzMTJdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9U
QVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQzMTJdIHNkIDA6MDowOjA6
IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MzE1XSBXcml0ZSgxMCk6IDJhIDAwIDA2IDZhIDhi
IGYxIDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MzM1XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhh
bmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDMzNl0gc2QgMDowOjA6MDogW3NkYV0gIA0K
WyAgMjUyLjE4NDMzNl0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJi
eXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDMzN10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiAN
ClsgIDI1Mi4xODQzNDBdIFdyaXRlKDEwKTogMmEgMDAgMDYgNmIgNWIgNTEgMDAgMDAgMDgg
MDANClsgIDI1Mi4xODQzNDZdIHNkIDA6MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBj
b2RlDQpbICAyNTIuMTg0MzQ3XSBzZCAwOjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMTg0MzQ3
XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09L
DQpbICAyNTIuMTg0MzQ4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjE4NDM1
MV0gV3JpdGUoMTApOiAyYSAwMCAwNiA2YiA4OCBhMSAwMCAwMCAwOCAwMA0KWyAgMjUyLjE4
NDM1N10gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9yIGNvZGUNClsgIDI1Mi4x
ODQzNThdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQzNThdIFJlc3VsdDogaG9z
dGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1Mi4xODQz
NTldIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0MzYyXSBXcml0ZSgxMCk6
IDJhIDAwIDA3IDJhIDU1IDIxIDAwIDAwIDA4IDAwDQpbICAyNTIuMTg0MzY5XSBzZCAwOjA6
MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjE4NDM3MF0gc2QgMDow
OjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDM3MF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFE
X1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4NDM3MV0gc2QgMDowOjA6
MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQzNzRdIFdyaXRlKDEwKTogMmEgMDAgMDcgMmEg
NTYgMzEgMDAgMDAgMjAgMDANClsgIDI1Mi4xODQzODZdIHNkIDA6MDowOjA6IFtzZGFdIFVu
aGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMTg0Mzg2XSBzZCAwOjA6MDowOiBbc2RhXSAg
DQpbICAyNTIuMTg0Mzg3XSBSZXN1bHQ6IGhvc3RieXRlPURJRF9CQURfVEFSR0VUIGRyaXZl
cmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMTg0Mzg4XSBzZCAwOjA6MDowOiBbc2RhXSBDREI6
IA0KWyAgMjUyLjE4NDM5MV0gV3JpdGUoMTApOiAyYSAwMCA4NyAxMCAzNSA4MSAwMCAwMCAw
OCAwMA0KWyAgMjUyLjE4NDQwMl0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4xODQ0MDJdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4xODQ0
MDNdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4xODQ0MDRdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMTg0
NDA3XSBXcml0ZSgxMCk6IDJhIDAwIDhjIDUwIDM2IDgxIDAwIDAwIDE4IDAwDQpbICAyNTIu
MTg0NDE4XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUy
LjE4NDQxOF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUyLjE4NDQxOV0gUmVzdWx0OiBo
b3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjE4
NDQyMF0gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1Mi4xODQ0MjNdIFdyaXRlKDEw
KTogMmEgMDAgMDAgMDAgMDAgM2YgMDAgMDAgMDggMDANClsgIDI1Mi4yNzk2NTddIHNkIDA6
MDowOjA6IFtzZGFdIFVuaGFuZGxlZCBlcnJvciBjb2RlDQpbICAyNTIuMjc5NjU5XSBzZCAw
OjA6MDowOiBbc2RhXSAgDQpbICAyNTIuMjc5NjYwXSBSZXN1bHQ6IGhvc3RieXRlPURJRF9C
QURfVEFSR0VUIGRyaXZlcmJ5dGU9RFJJVkVSX09LDQpbICAyNTIuMjc5NjYxXSBzZCAwOjA6
MDowOiBbc2RhXSBDREI6IA0KWyAgMjUyLjI3OTY2NV0gUmVhZCgxMCk6IDI4IDAwIDAyIGJm
IGFlIDAxIDAwIDAwIDIwIDAwDQpbICAyNTIuMjc5Njk4XSBzZCAwOjA6MDowOiBbc2RhXSBV
bmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAgMjUyLjI3OTY5OV0gc2QgMDowOjA6MDogW3NkYV0g
IA0KWyAgMjUyLjI3OTcwMF0gUmVzdWx0OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2
ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUyLjI3OTcwMV0gc2QgMDowOjA6MDogW3NkYV0gQ0RC
OiANClsgIDI1Mi4yNzk3MDRdIFJlYWQoMTApOiAyOCAwMCAwMiBiZiBhZSAwMSAwMCAwMCAw
OCAwMA0KWyAgMjUyLjI3OTg0NF0gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVkIGVycm9y
IGNvZGUNClsgIDI1Mi4yNzk4NDRdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1Mi4yNzk4
NDVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJf
T0sNClsgIDI1Mi4yNzk4NDZdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTIuMjc5
ODQ5XSBSZWFkKDEwKTogMjggMDAgMDIgYmYgYWUgMDEgMDAgMDAgMDggMDANClsgIDI1My44
ODAwMzNdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI1My44ODUwMTddIFJlc3VsdDogaG9z
dGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1EUklWRVJfT0sNClsgIDI1My44ODk5
ODBdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAyNTMuODk0OTc4XSBSZWFkKDEwKTog
MjggMDAgZDEgMmQgMWUgMDEgMDAgMDAgMDggMDANClsgIDI1My45MDAzMzFdIEVYVDQtZnMg
ZXJyb3IgKGRldmljZSBkbS0wKSBpbiBleHQ0X3Jlc2VydmVfaW5vZGVfd3JpdGU6NDg0Mjog
Sm91cm5hbCBoYXMgYWJvcnRlZA0KWyAgMjUzLjkwNTUwNl0gW3NjaGVkX2RlbGF5ZWRdIHNj
aGVkOiBSVCB0aHJvdHRsaW5nIGFjdGl2YXRlZA0KWyAgMjUzLjkxMDgyNl0gRVhUNC1mcyAo
ZG0tMCk6IHByZXZpb3VzIEkvTyBlcnJvciB0byBzdXBlcmJsb2NrIGRldGVjdGVkDQpbICAy
NTMuOTE2MTc3XSBzZCAwOjA6MDowOiBbc2RhXSBVbmhhbmRsZWQgZXJyb3IgY29kZQ0KWyAg
MjUzLjkyMTQ0OF0gc2QgMDowOjA6MDogW3NkYV0gIA0KWyAgMjUzLjkyNjY4OF0gUmVzdWx0
OiBob3N0Ynl0ZT1ESURfQkFEX1RBUkdFVCBkcml2ZXJieXRlPURSSVZFUl9PSw0KWyAgMjUz
LjkzMjA3N10gc2QgMDowOjA6MDogW3NkYV0gQ0RCOiANClsgIDI1My45MzczNzZdIFdyaXRl
KDEwKTogMmEgMDAgMDEgMmEgNTUgMDEgMDAgMDAgMDggMDANClsgIDI1Ni4zMjUxMjldIEJV
Rzogc29mdCBsb2NrdXAgLSBDUFUjMSBzdHVjayBmb3IgMjFzISBbeGVuZG9tYWluczo5Njc5
XQ0KWyAgMjU2LjMzMDU4OV0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAyNTYuMzM1OTY0XSBp
cnEgZXZlbnQgc3RhbXA6IDE5ODE4OA0KWyAgMjU2LjM0MTM3Nl0gaGFyZGlycXMgbGFzdCAg
ZW5hYmxlZCBhdCAoMTk4MTg3KTogWzxmZmZmZmZmZjgxYWEwYjMzPl0gcmVzdG9yZV9hcmdz
KzB4MC8weDMwDQpbICAyNTYuMzQ2OTU2XSBoYXJkaXJxcyBsYXN0IGRpc2FibGVkIGF0ICgx
OTgxODgpOiBbPGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMjU2
LjM1MjQ5NV0gc29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoMTk4MTg2KTogWzxmZmZmZmZm
ZjgxMGE5ZGYxPl0gX19kb19zb2Z0aXJxKzB4MTkxLzB4MjIwDQpbICAyNTYuMzU4MDY4XSBz
b2Z0aXJxcyBsYXN0IGRpc2FibGVkIGF0ICgxOTgxODEpOiBbPGZmZmZmZmZmODEwYWEyZTI+
XSBpcnFfZXhpdCsweGEyLzB4ZDANClsgIDI1Ni4zNjM2MDRdIENQVTogMSBQSUQ6IDk2Nzkg
Q29tbTogeGVuZG9tYWlucyBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRl
dmVsKyAjMQ0KWyAgMjU2LjM2OTI5M10gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkw
RlhBLUdENzAgKE1TLTc2NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDI1Ni4z
NzUwMzBdIHRhc2s6IGZmZmY4ODAwNThkZWMyNDAgdGk6IGZmZmY4ODAwNGNkMzIwMDAgdGFz
ay50aTogZmZmZjg4MDA0Y2QzMjAwMA0KWyAgMjU2LjM4MDc1NF0gUklQOiBlMDMwOls8ZmZm
ZmZmZmY4MTEwOWE2MD5dICBbPGZmZmZmZmZmODExMDlhNjA+XSBnZW5lcmljX2V4ZWNfc2lu
Z2xlKzB4OTAvMHhjMA0KWyAgMjU2LjM4NjY1NF0gUlNQOiBlMDJiOmZmZmY4ODAwNGNkMzNh
ODggIEVGTEFHUzogMDAwMDAyMDINClsgIDI1Ni4zOTI1ODJdIFJBWDogMDAwMDAwMDAwMDAw
MDAwOCBSQlg6IGZmZmY4ODAwNWY2MTQwNDAgUkNYOiAwMDAwMDAwMDAwMDAwMDM4DQpbICAy
NTYuMzk4NjQzXSBSRFg6IDAwMDAwMDAwMDAwMDAwZmYgUlNJOiAwMDAwMDAwMDAwMDAwMDA4
IFJESTogMDAwMDAwMDAwMDAwMDAwOA0KWyAgMjU2LjQwNDY2N10gUkJQOiBmZmZmODgwMDRj
ZDMzYWM4IFIwODogZmZmZmZmZmY4MWMwZDQ2OCBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsg
IDI1Ni40MTA2ODhdIFIxMDogMDAwMDAwMDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAw
MDAgUjEyOiBmZmZmODgwMDRjZDMzYWYwDQpbICAyNTYuNDE2Njk4XSBSMTM6IDAwMDAwMDAw
MDAwMDAwMDEgUjE0OiAwMDAwMDAwMDAwMDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0K
WyAgMjU2LjQyMjYwMF0gRlM6ICAwMDAwN2YxNTJlYzRkNzAwKDAwMDApIEdTOmZmZmY4ODAw
NWY2NDAwMDAoMDAwMCkga25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjU2LjQyODY5MF0g
Q1M6ICBlMDMzIERTOiAwMDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAg
MjU2LjQzNDc3NV0gQ1IyOiAwMDAwN2YxNTJlMmExZTAyIENSMzogMDAwMDAwMDA0Y2QyYTAw
MCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjANClsgIDI1Ni40NDA4NzJdIFN0YWNrOg0KWyAgMjU2
LjQ0NjkyOF0gIDAwMDAwMDAwMDAwMDAyMDAgZmZmZjg4MDA1ZjYxNDA0MCAwMDAwMDAwMDAw
MDAwMDA2IDAwMDAwMDAwMDAwMDAwMDANClsgIDI1Ni40NTMxNzJdICAwMDAwMDAwMDAwMDAw
MDAxIGZmZmZmZmZmODIyZTczMDAgZmZmZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDAx
DQpbICAyNTYuNDU5NDU3XSAgZmZmZjg4MDA0Y2QzM2IzOCBmZmZmZmZmZjgxMTA5Y2M1IGZm
ZmZmZmZmODFhYTBiMzMgZmZmZjg4MDBjZWQ1NjAwMA0KWyAgMjU2LjQ2NTcxOF0gQ2FsbCBU
cmFjZToNClsgIDI1Ni40NzE4ODhdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0
cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNTYuNDc4MjUwXSAgWzxm
ZmZmZmZmZjgxMTA5Y2M1Pl0gc21wX2NhbGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTAN
ClsgIDI1Ni40ODQ2MTVdICBbPGZmZmZmZmZmODFhYTBiMzM+XSA/IHJldGludF9yZXN0b3Jl
X2FyZ3MrMHgxMy8weDEzDQpbICAyNTYuNDkxMDI1XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0g
PyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjU2LjQ5
NzQ2N10gIFs8ZmZmZmZmZmY4MTEwYTAzYT5dIHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkrMHgy
N2EvMHgyYTANClsgIDI1Ni41MDM4ODRdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9k
ZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNTYuNTEwMzY2XSAg
WzxmZmZmZmZmZjgxMDA4NDFlPl0geGVuX2V4aXRfbW1hcCsweGNlLzB4MWEwDQpbICAyNTYu
NTE2ODAzXSAgWzxmZmZmZmZmZjgxMDAxMjJhPl0gPyB4ZW5faHlwZXJjYWxsX3hlbl92ZXJz
aW9uKzB4YS8weDIwDQpbICAyNTYuNTIzMzE3XSAgWzxmZmZmZmZmZjgxMTY5NDI2Pl0gZXhp
dF9tbWFwKzB4NTYvMHgxODANClsgIDI1Ni41Mjk4MThdICBbPGZmZmZmZmZmODEwZTcxN2E+
XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMjU2LjUzNjIyOF0gIFs8ZmZmZmZm
ZmY4MTFkZGRlMD5dID8gZXhpdF9haW8rMHhiMC8weGUwDQpbICAyNTYuNTQyNjMwXSAgWzxm
ZmZmZmZmZjgxMWRkZDQ0Pl0gPyBleGl0X2FpbysweDE0LzB4ZTANClsgIDI1Ni41NDg5OTld
ICBbPGZmZmZmZmZmODEwYTI2ODk+XSBtbXB1dCsweDU5LzB4ZTANClsgIDI1Ni41NTUzODld
ICBbPGZmZmZmZmZmODExOWEzYTk+XSBmbHVzaF9vbGRfZXhlYysweDQzOS8weDgzMA0KWyAg
MjU2LjU2MTgwN10gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dIGxvYWRfZWxmX2JpbmFyeSsweDMy
YS8weDFhMDANClsgIDI1Ni41NjgxOTVdICBbPGZmZmZmZmZmODFhOWZmZTY+XSA/IF9yYXdf
cmVhZF91bmxvY2srMHgyNi8weDMwDQpbICAyNTYuNTc0NTQwXSAgWzxmZmZmZmZmZjgxMGU3
MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjU2LjU4MDg2MF0gIFs8ZmZm
ZmZmZmY4MTE5OTI0Mz5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4YzMvMHgxYjANClsg
IDI1Ni41ODcyNTFdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWlyZSsweGVj
LzB4MTEwDQpbICAyNTYuNTkzNjE1XSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3Jl
bGVhc2UrMHgxMmEvMHgyNTANClsgIDI1Ni41OTk5NDFdICBbPGZmZmZmZmZmODExOTkyMDQ+
XSBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAgMjU2LjYwNjI4MF0gIFs8
ZmZmZmZmZmY4MTE5YjI1Mj5dIGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDU5Mi8weDcx
MA0KWyAgMjU2LjYxMjYxOF0gIFs8ZmZmZmZmZmY4MTE5YjFlNz5dID8gZG9fZXhlY3ZlX2Nv
bW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAyNTYuNjE4OTY5XSAgWzxmZmZmZmZmZjgx
MTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjANClsgIDI1Ni42MjUyNjdd
ICBbPGZmZmZmZmZmODExOWIzZTM+XSBkb19leGVjdmUrMHgxMy8weDIwDQpbICAyNTYuNjMx
NTQ3XSAgWzxmZmZmZmZmZjgxMTliNjU4Pl0gU3lTX2V4ZWN2ZSsweDM4LzB4NjANClsgIDI1
Ni42Mzc3MjldICBbPGZmZmZmZmZmODFhYTE5ZTk+XSBzdHViX2V4ZWN2ZSsweDY5LzB4YTAN
ClsgIDI1Ni42NDM4NDJdIENvZGU6IDAwIDQ4IDhiIDQ1IGMwIDRjIDg5IGZmIDQ4IDg5IGM2
IGU4IGJiIDY3IDk5IDAwIDQ4IDNiIDVkIGM4IDc0IDJkIDQ1IDg1IGVkIDc1IDBhIGViIDEw
IDY2IDBmIDFmIDQ0IDAwIDAwIGYzIDkwIDQxIGY2IDQ0IDI0IDIwIDAxIDw3NT4gZjYgNDgg
OGIgNWQgZDggNGMgOGIgNjUgZTAgNGMgOGIgNmQgZTggNGMgOGIgNzUgZjAgNGMgOGIgN2Qg
DQpbICAyNjEuMDQ0MDY5XSBCbHVldG9vdGg6IGhjaTAgbGluayB0eCB0aW1lb3V0DQpbICAy
NjEuMDUwMzUyXSBCbHVldG9vdGg6IGhjaTAga2lsbGluZyBzdGFsbGVkIGNvbm5lY3Rpb24g
MDM6MTI6MDk6MjU6MTM6NzINClsgIDI2My4wNDQwNTFdIEJsdWV0b290aDogaGNpMCBsaW5r
IHR4IHRpbWVvdXQNClsgIDI2My4wNTA0NzldIEJsdWV0b290aDogaGNpMCBraWxsaW5nIHN0
YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjYzLjA1NjkwNV0gQmx1
ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjYzLjA2MzM1N10gQmx1ZXRvb3Ro
OiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI2My4wNjk2MzZdIEJsdWV0
b290aDogaGNpMCBraWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3
Mg0KWyAgMjY1LjA4MDgxMF0gQmx1ZXRvb3RoOiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRp
bWVvdXQNClsgIDI2Ny4wNDY2MzddIEJsdWV0b290aDogaGNpMCBsaW5rIHR4IHRpbWVvdXQN
ClsgIDI2Ny4wNTI5MThdIEJsdWV0b290aDogaGNpMCBraWxsaW5nIHN0YWxsZWQgY29ubmVj
dGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjY3LjA5OTg0Ml0gQmx1ZXRvb3RoOiBoY2kw
IGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI2OS4wNDMyNzddIEJsdWV0b290aDog
aGNpMCBsaW5rIHR4IHRpbWVvdXQNClsgIDI2OS4wNDkzMzJdIEJsdWV0b290aDogaGNpMCBr
aWxsaW5nIHN0YWxsZWQgY29ubmVjdGlvbiAwMzoxMjowOToyNToxMzo3Mg0KWyAgMjY5LjEw
NTQ4NV0gQmx1ZXRvb3RoOiBoY2kwIGNvbW1hbmQgMHgwNDA2IHR4IHRpbWVvdXQNClsgIDI3
MS4xMTc4NDBdIEJsdWV0b290aDogaGNpMCBjb21tYW5kIDB4MDQwNiB0eCB0aW1lb3V0DQpb
ICAyNzYuMzE1MjE0XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzIgc3R1Y2sgZm9yIDIzcyEg
W2Jhc2g6OTcxNF0NClsgIDI3Ni4zMjA5NTJdIE1vZHVsZXMgbGlua2VkIGluOg0KWyAgMjc2
LjMyNjYwM10gaXJxIGV2ZW50IHN0YW1wOiAxMjk4NzANClsgIDI3Ni4zMzIyNzhdIGhhcmRp
cnFzIGxhc3QgIGVuYWJsZWQgYXQgKDEyOTg2OSk6IFs8ZmZmZmZmZmY4MWFhMGIzMz5dIHJl
c3RvcmVfYXJncysweDAvMHgzMA0KWyAgMjc2LjMzODE3NF0gaGFyZGlycXMgbGFzdCBkaXNh
YmxlZCBhdCAoMTI5ODcwKTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jfc3RpKzB4NS8w
eDYNClsgIDI3Ni4zNDM5ODhdIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDEyOTg2OCk6
IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIyMA0KWyAgMjc2
LjM0OTg5MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMTI5ODUzKTogWzxmZmZmZmZm
ZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAyNzYuMzU1NzMxXSBDUFU6IDIg
UElEOiA5NzE0IENvbW06IGJhc2ggTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14
ZW5kZXZlbCsgIzENClsgIDI3Ni4zNjE1ODFdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQw
Lzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAy
NzYuMzY3NDY3XSB0YXNrOiBmZmZmODgwMDUyODY2MzYwIHRpOiBmZmZmODgwMDRjZjg4MDAw
IHRhc2sudGk6IGZmZmY4ODAwNGNmODgwMDANClsgIDI3Ni4zNzMzNjRdIFJJUDogZTAzMDpb
PGZmZmZmZmZmODExMDlhNjA+XSAgWzxmZmZmZmZmZjgxMTA5YTYwPl0gZ2VuZXJpY19leGVj
X3NpbmdsZSsweDkwLzB4YzANClsgIDI3Ni4zNzkzMjVdIFJTUDogZTAyYjpmZmZmODgwMDRj
Zjg5YTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpbICAyNzYuMzg1Mjc1XSBSQVg6IGZmZmY4ODAw
NTI4NjYzNjAgUkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAwNg0K
WyAgMjc2LjM5MTMxMF0gUkRYOiAwMDAwMDAwMDAwMDAwMDA2IFJTSTogZmZmZjg4MDA1Mjg2
NmEzOCBSREk6IDAwMDAwMDAwMDAwMDAyMDANClsgIDI3Ni4zOTczMDBdIFJCUDogZmZmZjg4
MDA0Y2Y4OWFjOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDYgUjA5OiAwMDAwMDAwMDAwMDAwMDAw
DQpbICAyNzYuNDAzMjA1XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAw
MDAwMDAwIFIxMjogZmZmZjg4MDA0Y2Y4OWFmMA0KWyAgMjc2LjQwOTA4Nl0gUjEzOiAwMDAw
MDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQw
NTANClsgIDI3Ni40MTQ5MDZdIEZTOiAgMDAwMDdmNDA3YjQ5ZjcwMCgwMDAwKSBHUzpmZmZm
ODgwMDVmNjgwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsgIDI3Ni40MjA3
NzldIENTOiAgZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2IN
ClsgIDI3Ni40MjY2NDNdIENSMjogMDAwMDdmNDA3YjA2YjA3MCBDUjM6IDAwMDAwMDAwNGNk
ZDUwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAyNzYuNDMyNTU5XSBTdGFjazoNClsg
IDI3Ni40MzgzNjJdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNGNkMzNhZjAgMDAwMDAw
MDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQpbICAyNzYuNDQ0MzY1XSAgMDAwMDAwMDAw
MDAwMDAwMiBmZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAw
MDAwMg0KWyAgMjc2LjQ1MDM0NV0gIGZmZmY4ODAwNGNmODliMzggZmZmZmZmZmY4MTEwOWNj
NSBmZmZmODgwMDU3N2I0ODc4IGZmZmZmZmZmODEwMDdkNjANClsgIDI3Ni40NTYzNTddIENh
bGwgVHJhY2U6DQpbICAyNzYuNDYyMzI4XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5f
ZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjc2LjQ2ODUzMF0g
IFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4
MWUwDQpbICAyNzYuNDc0NzU1XSAgWzxmZmZmZmZmZjgxMDA3ZDYwPl0gPyB4ZW5fcGluX3Bh
Z2UrMHgxMTAvMHgxMjANClsgIDI3Ni40ODA5NThdICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/
IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyNzYuNDg3
MjI2XSAgWzxmZmZmZmZmZjgxMTBhMDNhPl0gc21wX2NhbGxfZnVuY3Rpb25fbWFueSsweDI3
YS8weDJhMA0KWyAgMjc2LjQ5MzQ2M10gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rl
c3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI3Ni40OTk3ODZdICBb
PGZmZmZmZmZmODEwMDg0MWU+XSB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTANClsgIDI3Ni41
MDYwNzZdICBbPGZmZmZmZmZmODExNjk0MjY+XSBleGl0X21tYXArMHg1Ni8weDE4MA0KWyAg
Mjc2LjUxMjM0MV0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNlKzB4MTJh
LzB4MjUwDQpbICAyNzYuNTE4Njc0XSAgWzxmZmZmZmZmZjgxMWRkZGUwPl0gPyBleGl0X2Fp
bysweGIwLzB4ZTANClsgIDI3Ni41MjQ5NzFdICBbPGZmZmZmZmZmODExZGRkNDQ+XSA/IGV4
aXRfYWlvKzB4MTQvMHhlMA0KWyAgMjc2LjUzMTE4NF0gIFs8ZmZmZmZmZmY4MTBhMjY4OT5d
IG1tcHV0KzB4NTkvMHhlMA0KWyAgMjc2LjUzNzM1MF0gIFs8ZmZmZmZmZmY4MTE5YTNhOT5d
IGZsdXNoX29sZF9leGVjKzB4NDM5LzB4ODMwDQpbICAyNzYuNTQzNTIzXSAgWzxmZmZmZmZm
ZjgxMWU4Y2NhPl0gbG9hZF9lbGZfYmluYXJ5KzB4MzJhLzB4MWEwMA0KWyAgMjc2LjU0OTcz
N10gIFs8ZmZmZmZmZmY4MWE5ZmZlNj5dID8gX3Jhd19yZWFkX3VubG9jaysweDI2LzB4MzAN
ClsgIDI3Ni41NTU5MzZdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWlyZSsw
eGVjLzB4MTEwDQpbICAyNzYuNTYyMTYxXSAgWzxmZmZmZmZmZjgxMTk5MjQzPl0gPyBzZWFy
Y2hfYmluYXJ5X2hhbmRsZXIrMHhjMy8weDFiMA0KWyAgMjc2LjU2ODI1MF0gIFs8ZmZmZmZm
ZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI3Ni41NzQxMzdd
ICBbPGZmZmZmZmZmODEwZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAg
Mjc2LjU3OTk0N10gIFs8ZmZmZmZmZmY4MTE5OTIwND5dIHNlYXJjaF9iaW5hcnlfaGFuZGxl
cisweDg0LzB4MWIwDQpbICAyNzYuNTg1ODg1XSAgWzxmZmZmZmZmZjgxMTliMjUyPl0gZG9f
ZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTkyLzB4NzEwDQpbICAyNzYuNTkxNzQ5XSAgWzxm
ZmZmZmZmZjgxMTliMWU3Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1MjcvMHg3
MTANClsgIDI3Ni41OTc2MDldICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGttZW1fY2FjaGVf
YWxsb2MrMHhiNS8weDEyMA0KWyAgMjc2LjYwMzQ0Nl0gIFs8ZmZmZmZmZmY4MTE5YjNlMz5d
IGRvX2V4ZWN2ZSsweDEzLzB4MjANClsgIDI3Ni42MDkyNzddICBbPGZmZmZmZmZmODExOWI2
NTg+XSBTeVNfZXhlY3ZlKzB4MzgvMHg2MA0KWyAgMjc2LjYxNTA4Nl0gIFs8ZmZmZmZmZmY4
MWFhMTllOT5dIHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMjc2LjYyMDg0NV0gQ29kZTog
MDAgNDggOGIgNDUgYzAgNGMgODkgZmYgNDggODkgYzYgZTggYmIgNjcgOTkgMDAgNDggM2Ig
NWQgYzggNzQgMmQgNDUgODUgZWQgNzUgMGEgZWIgMTAgNjYgMGYgMWYgNDQgMDAgMDAgZjMg
OTAgNDEgZjYgNDQgMjQgMjAgMDEgPDc1PiBmNiA0OCA4YiA1ZCBkOCA0YyA4YiA2NSBlMCA0
YyA4YiA2ZCBlOCA0YyA4YiA3NSBmMCA0YyA4YiA3ZCANClsgIDI4MC4zMTMyMjhdIEJVRzog
c29mdCBsb2NrdXAgLSBDUFUjMyBzdHVjayBmb3IgMjNzISBbYmFzaDo5NzE4XQ0KWyAgMjgw
LjMxOTQyNl0gTW9kdWxlcyBsaW5rZWQgaW46DQpbICAyODAuMzI1NTM4XSBpcnEgZXZlbnQg
c3RhbXA6IDE0MjIzMA0KWyAgMjgwLjMzMTY1OV0gaGFyZGlycXMgbGFzdCAgZW5hYmxlZCBh
dCAoMTQyMjI5KTogWzxmZmZmZmZmZjgxYWEwYjMzPl0gcmVzdG9yZV9hcmdzKzB4MC8weDMw
DQpbICAyODAuMzM4MDA2XSBoYXJkaXJxcyBsYXN0IGRpc2FibGVkIGF0ICgxNDIyMzApOiBb
PGZmZmZmZmZmODFhYTEwMTY+XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMjgwLjM0NDM0NV0g
c29mdGlycXMgbGFzdCAgZW5hYmxlZCBhdCAoMTQyMjI4KTogWzxmZmZmZmZmZjgxMGE5ZGYx
Pl0gX19kb19zb2Z0aXJxKzB4MTkxLzB4MjIwDQpbICAyODAuMzUwNjkwXSBzb2Z0aXJxcyBs
YXN0IGRpc2FibGVkIGF0ICgxNDIyMTMpOiBbPGZmZmZmZmZmODEwYWEyZTI+XSBpcnFfZXhp
dCsweGEyLzB4ZDANClsgIDI4MC4zNTY5OThdIENQVTogMyBQSUQ6IDk3MTggQ29tbTogYmFz
aCBOb3QgdGFpbnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMjgw
LjM2MzQwMV0gSGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2
NDApICAsIEJJT1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDI4MC4zNjk4MDNdIHRhc2s6IGZm
ZmY4ODAwNThkZWQyZDAgdGk6IGZmZmY4ODAwNGNmNTYwMDAgdGFzay50aTogZmZmZjg4MDA0
Y2Y1NjAwMA0KWyAgMjgwLjM3NjMwMV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTEwOWE2MD5d
ICBbPGZmZmZmZmZmODExMDlhNjA+XSBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4OTAvMHhjMA0K
WyAgMjgwLjM4MjkzNV0gUlNQOiBlMDJiOmZmZmY4ODAwNGNmNTdhODggIEVGTEFHUzogMDAw
MDAyMDINClsgIDI4MC4zODk1MzBdIFJBWDogZmZmZjg4MDA1OGRlZDJkMCBSQlg6IGZmZmY4
ODAwNWY2MTQwNDAgUkNYOiAwMDAwMDAwMDAwMDAwMDA2DQpbICAyODAuMzk2MTI1XSBSRFg6
IDAwMDAwMDAwMDAwMDAwMDYgUlNJOiBmZmZmODgwMDU4ZGVkOWE4IFJESTogMDAwMDAwMDAw
MDAwMDIwMA0KWyAgMjgwLjQwMjY3N10gUkJQOiBmZmZmODgwMDRjZjU3YWM4IFIwODogMDAw
MDAwMDAwMDAwMDAwNiBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsgIDI4MC40MDkxMTVdIFIx
MDogMDAwMDAwMDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAwMDAgUjEyOiBmZmZmODgw
MDRjZjU3YWYwDQpbICAyODAuNDE1NDQ4XSBSMTM6IDAwMDAwMDAwMDAwMDAwMDEgUjE0OiAw
MDAwMDAwMDAwMDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0KWyAgMjgwLjQyMTY3Nl0g
RlM6ICAwMDAwN2ZlM2NhOTg3NzAwKDAwMDApIEdTOmZmZmY4ODAwNWY2YzAwMDAoMDAwMCkg
a25sR1M6MDAwMDAwMDAwMDAwMDAwMA0KWyAgMjgwLjQyNzk3MV0gQ1M6ICBlMDMzIERTOiAw
MDAwIEVTOiAwMDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAgMjgwLjQzNDI2MV0gQ1Iy
OiAwMDAwN2Y5NTc3NTcwZjMwIENSMzogMDAwMDAwMDA0Y2RjMjAwMCBDUjQ6IDAwMDAwMDAw
MDAwMDA2NjANClsgIDI4MC40NDA2MDldIFN0YWNrOg0KWyAgMjgwLjQ0NjkyN10gIDAwMDAw
MDAwMDAwMDAyMDAgZmZmZjg4MDA0Y2QzM2FmMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAw
MDAwMDAwMDANClsgIDI4MC40NTM0ODddICAwMDAwMDAwMDAwMDAwMDAzIGZmZmZmZmZmODIy
ZTczMDAgZmZmZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDAzDQpbICAyODAuNDU5OTUz
XSAgZmZmZjg4MDA0Y2Y1N2IzOCBmZmZmZmZmZjgxMTA5Y2M1IGZmZmY4ODAwNTc1ODUxNzgg
ZmZmZmZmZmY4MTAwN2Q2MA0KWyAgMjgwLjQ2NjM5Ml0gQ2FsbCBUcmFjZToNClsgIDI4MC40
NzI3NTldICBbPGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNf
cmVnaW9uKzB4MTYwLzB4MTYwDQpbICAyODAuNDc5MjYwXSAgWzxmZmZmZmZmZjgxMTA5Y2M1
Pl0gc21wX2NhbGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTANClsgIDI4MC40ODU2MzBd
ICBbPGZmZmZmZmZmODEwMDdkNjA+XSA/IHhlbl9waW5fcGFnZSsweDExMC8weDEyMA0KWyAg
MjgwLjQ5MjAyNV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGln
dW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI4MC40OTg0NDNdICBbPGZmZmZmZmZmODEx
MGEwM2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAyODAuNTA0
ODA5XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3Jl
Z2lvbisweDE2MC8weDE2MA0KWyAgMjgwLjUxMTE0M10gIFs8ZmZmZmZmZmY4MTAwODQxZT5d
IHhlbl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMjgwLjUxNzM4NF0gIFs8ZmZmZmZmZmY4
MTE2OTQyNj5dIGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAyODAuNTIzNjU5XSAgWzxmZmZm
ZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDI4MC41Mjk5
NDldICBbPGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjgw
LjUzNjE1NV0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpb
ICAyODAuNTQyMjkzXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpb
ICAyODAuNTQ4MzYwXSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0
MzkvMHg4MzANClsgIDI4MC41NTQ0NzFdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2Vs
Zl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAyODAuNTYwNjAyXSAgWzxmZmZmZmZmZjgxYTlm
ZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMjgwLjU2NjcxNF0gIFs8
ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI4MC41
NzI4ODldICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisw
eGMzLzB4MWIwDQpbICAyODAuNTc5MTA3XSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2Nr
X2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMjgwLjU4NTMyMl0gIFs8ZmZmZmZmZmY4MTBlNzE3
YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAyODAuNTkxNDg3XSAgWzxmZmZm
ZmZmZjgxMTk5MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDI4
MC41OTc3MjhdICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEu
MzErMHg1OTIvMHg3MTANClsgIDI4MC42MDM5NzRdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/
IGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMjgwLjYxMDIxNF0g
IFs8ZmZmZmZmZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpb
ICAyODAuNjE2NDUyXSAgWzxmZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgy
MA0KWyAgMjgwLjYyMjY5Nl0gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgz
OC8weDYwDQpbICAyODAuNjI4OTEwXSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVj
dmUrMHg2OS8weGEwDQpbICAyODAuNjM1MTM0XSBDb2RlOiAwMCA0OCA4YiA0NSBjMCA0YyA4
OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBl
ZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCBmMyA5MCA0MSBmNiA0NCAyNCAyMCAw
MSA8NzU+IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIDhiIDc1
IGYwIDRjIDhiIDdkIA0KWyAgMjgzLjE3MDg1OF0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHgg
dGltZW91dA0KWyAgMjgzLjE3NzMxN10gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxl
ZCBjb25uZWN0aW9uIDAzOjEyOjA5OjI1OjEzOjcyDQpbICAyODQuMzExMjMyXSBCVUc6IHNv
ZnQgbG9ja3VwIC0gQ1BVIzEgc3R1Y2sgZm9yIDIycyEgW3hlbmRvbWFpbnM6OTY3OV0NClsg
IDI4NC4zMTc5NTddIE1vZHVsZXMgbGlua2VkIGluOg0KWyAgMjg0LjMyNDQ5Ml0gaXJxIGV2
ZW50IHN0YW1wOiAyNjQ5MzINClsgIDI4NC4zMzA4MTRdIGhhcmRpcnFzIGxhc3QgIGVuYWJs
ZWQgYXQgKDI2NDkzMSk6IFs8ZmZmZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAv
MHgzMA0KWyAgMjg0LjMzNzI5NF0gaGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMjY0OTMy
KTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jfc3RpKzB4NS8weDYNClsgIDI4NC4zNDM3
MThdIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDI2NDkzMCk6IFs8ZmZmZmZmZmY4MTBh
OWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIyMA0KWyAgMjg0LjM1MDE4NV0gc29mdGly
cXMgbGFzdCBkaXNhYmxlZCBhdCAoMjY0OTI1KTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJx
X2V4aXQrMHhhMi8weGQwDQpbICAyODQuMzU2NTYzXSBDUFU6IDEgUElEOiA5Njc5IENvbW06
IHhlbmRvbWFpbnMgTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsg
IzENClsgIDI4NC4zNjMwMTVdIEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQwLzg5MEZYQS1H
RDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAyODQuMzY5NTU2
XSB0YXNrOiBmZmZmODgwMDU4ZGVjMjQwIHRpOiBmZmZmODgwMDRjZDMyMDAwIHRhc2sudGk6
IGZmZmY4ODAwNGNkMzIwMDANClsgIDI4NC4zNzYyMDldIFJJUDogZTAzMDpbPGZmZmZmZmZm
ODExMDlhNWE+XSAgWzxmZmZmZmZmZjgxMTA5YTVhPl0gZ2VuZXJpY19leGVjX3NpbmdsZSsw
eDhhLzB4YzANClsgIDI4NC4zODI4NTJdIFJTUDogZTAyYjpmZmZmODgwMDRjZDMzYTg4ICBF
RkxBR1M6IDAwMDAwMjAyDQpbICAyODQuMzg5NDQwXSBSQVg6IDAwMDAwMDAwMDAwMDAwMDgg
UkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAzOA0KWyAgMjg0LjM5
NjAzM10gUkRYOiAwMDAwMDAwMDAwMDAwMGZmIFJTSTogMDAwMDAwMDAwMDAwMDAwOCBSREk6
IDAwMDAwMDAwMDAwMDAwMDgNClsgIDI4NC40MDI0ODddIFJCUDogZmZmZjg4MDA0Y2QzM2Fj
OCBSMDg6IGZmZmZmZmZmODFjMGQ0NjggUjA5OiAwMDAwMDAwMDAwMDAwMDAwDQpbICAyODQu
NDA4ODYxXSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAwMDAwMDAwIFIx
MjogZmZmZjg4MDA0Y2QzM2FmMA0KWyAgMjg0LjQxNTI0Nl0gUjEzOiAwMDAwMDAwMDAwMDAw
MDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQwNTANClsgIDI4
NC40MjE2MDddIEZTOiAgMDAwMDdmMTUyZWM0ZDcwMCgwMDAwKSBHUzpmZmZmODgwMDVmNjQw
MDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsgIDI4NC40MjgwNDVdIENTOiAg
ZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDI4NC40
MzQ0NzRdIENSMjogMDAwMDdmMTUyZTJhMWUwMiBDUjM6IDAwMDAwMDAwNGNkMmEwMDAgQ1I0
OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAyODQuNDQxMDQ1XSBTdGFjazoNClsgIDI4NC40NDc0
NzVdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNWY2MTQwNDAgMDAwMDAwMDAwMDAwMDAw
NiAwMDAwMDAwMDAwMDAwMDAwDQpbICAyODQuNDU0MDMyXSAgMDAwMDAwMDAwMDAwMDAwMSBm
ZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAwMDAwMQ0KWyAg
Mjg0LjQ2MDU2OF0gIGZmZmY4ODAwNGNkMzNiMzggZmZmZmZmZmY4MTEwOWNjNSBmZmZmZmZm
ZjgxYWEwYjMzIGZmZmY4ODAwY2VkNTYwMDANClsgIDI4NC40NjcwNDNdIENhbGwgVHJhY2U6
DQpbICAyODQuNDczNDUwXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9j
b250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjg0LjQ3OTk2N10gIFs8ZmZmZmZm
ZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUwDQpbICAy
ODQuNDg2NDY1XSAgWzxmZmZmZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9yZV9hcmdz
KzB4MTMvMHgxMw0KWyAgMjg0LjQ5MjkyMl0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVu
X2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDI4NC40OTkzNzRd
ICBbPGZmZmZmZmZmODExMGEwM2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4
MmEwDQpbICAyODQuNTA1Nzg5XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJv
eV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMjg0LjUxMjI1NV0gIFs8ZmZm
ZmZmZmY4MTAwODQxZT5dIHhlbl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMjg0LjUxODcw
MF0gIFs8ZmZmZmZmZmY4MTAwMTIyYT5dID8geGVuX2h5cGVyY2FsbF94ZW5fdmVyc2lvbisw
eGEvMHgyMA0KWyAgMjg0LjUyNTEzOF0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5dIGV4aXRfbW1h
cCsweDU2LzB4MTgwDQpbICAyODQuNTMxNTUzXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBs
b2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDI4NC41Mzc3OTldICBbPGZmZmZmZmZmODEx
ZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMjg0LjU0NDEzOV0gIFs8ZmZmZmZm
ZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAyODQuNTUwNDYwXSAgWzxm
ZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpbICAyODQuNTU2NzMzXSAgWzxm
ZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzANClsgIDI4NC41
NjMwNjZdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkrMHgzMmEvMHgx
YTAwDQpbICAyODQuNTY5NDUwXSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBfcmF3X3JlYWRf
dW5sb2NrKzB4MjYvMHgzMA0KWyAgMjg0LjU3NTg1NF0gIFs8ZmZmZmZmZmY4MTBlNzM4Yz5d
ID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDI4NC41ODIyMTVdICBbPGZmZmZmZmZm
ODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIwDQpbICAyODQu
NTg4NjAwXSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDEx
MA0KWyAgMjg0LjU5NDk5N10gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNl
KzB4MTJhLzB4MjUwDQpbICAyODQuNjAxMzU0XSAgWzxmZmZmZmZmZjgxMTk5MjA0Pl0gc2Vh
cmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDI4NC42MDc3MDhdICBbPGZmZmZm
ZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIvMHg3MTANClsg
IDI4NC42MTQxMjVdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2ZV9jb21tb24u
aXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMjg0LjYyMDU0OF0gIFs8ZmZmZmZmZmY4MTE4NjM0
NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpbICAyODQuNjI2OTUxXSAgWzxm
ZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMjg0LjYzMzM2NF0g
IFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpbICAyODQuNjM5
Njk4XSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8weGEwDQpbICAy
ODQuNjQ2MDQxXSBDb2RlOiA4OSBkYSBlOCBlYSBjMSAzMiAwMCA0OCA4YiA0NSBjMCA0YyA4
OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBl
ZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCBmMyA5MCA8NDE+IGY2IDQ0IDI0IDIw
IDAxIDc1IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIA0KWyAg
Mjg1LjE3MDkwOF0gQmx1ZXRvb3RoOiBoY2kwIGxpbmsgdHggdGltZW91dA0KWyAgMjg1LjE3
NzM1NF0gQmx1ZXRvb3RoOiBoY2kwIGtpbGxpbmcgc3RhbGxlZCBjb25uZWN0aW9uIDAzOjEy
OjA5OjI1OjEzOjcyDQpbICAyODUuMTg0MTkyXSBCbHVldG9vdGg6IGhjaTAgY29tbWFuZCAw
eDA0MDYgdHggdGltZW91dA0KWyAgMjg1LjY1NTY4MF0gc2QgMDowOjA6MDogW3NkYV0gVW5o
YW5kbGVkIGVycm9yIGNvZGUNClsgIDI4NS42NjIwMDRdIHNkIDA6MDowOjA6IFtzZGFdICAN
ClsgIDI4NS42NjgxNDhdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVy
Ynl0ZT1EUklWRVJfT0sNClsgIDI4NS42NzQzNTddIHNkIDA6MDowOjA6IFtzZGFdIENEQjog
DQpbICAyODUuNjgwNTUxXSBSZWFkKDEwKTogMjggMDAgMDYgNmIgODggYTEgMDAgMDAgMDgg
MDANClsgIDI4NS42ODY3NDZdIGJsa191cGRhdGVfcmVxdWVzdDogNTMgY2FsbGJhY2tzIHN1
cHByZXNzZWQNClsgIDI4NS42OTI5MjZdIGVuZF9yZXF1ZXN0OiBJL08gZXJyb3IsIGRldiBz
ZGEsIHNlY3RvciAxMDc3MTA2MjUNClsgIDI4NS42OTkxOTJdIEVYVDQtZnMgZXJyb3IgKGRl
dmljZSBkbS0wKTogZXh0NF9maW5kX2VudHJ5OjEzMDk6IGlub2RlICMyNzY1MjQxOiBjb21t
IGRoY2xpZW50LXNjcmlwdDogcmVhZGluZyBkaXJlY3RvcnkgbGJsb2NrIDANClsgIDI4NS43
MDU3MDNdIEVYVDQtZnMgKGRtLTApOiBwcmV2aW91cyBJL08gZXJyb3IgdG8gc3VwZXJibG9j
ayBkZXRlY3RlZA0KWyAgMjg1LjcxMjE4M10gc2QgMDowOjA6MDogW3NkYV0gVW5oYW5kbGVk
IGVycm9yIGNvZGUNClsgIDI4NS43MTg1MjFdIHNkIDA6MDowOjA6IFtzZGFdICANClsgIDI4
NS43MjQ3MTVdIFJlc3VsdDogaG9zdGJ5dGU9RElEX0JBRF9UQVJHRVQgZHJpdmVyYnl0ZT1E
UklWRVJfT0sNClsgIDI4NS43MzA5MDNdIHNkIDA6MDowOjA6IFtzZGFdIENEQjogDQpbICAy
ODUuNzM2OTgyXSBXcml0ZSgxMCk6IDJhIDAwIDAxIDJhIDU1IDAxIDAwIDAwIDA4IDAwDQpb
ICAyODUuNzQzMTAxXSBlbmRfcmVxdWVzdDogSS9PIGVycm9yLCBkZXYgc2RhLCBzZWN0b3Ig
MTk1NTE0ODkNClsgIDI4NS43NDkyNDJdIHF1aWV0X2Vycm9yOiA1NiBjYWxsYmFja3Mgc3Vw
cHJlc3NlZA0KWyAgMjg1Ljc1NTM3M10gQnVmZmVyIEkvTyBlcnJvciBvbiBkZXZpY2UgZG0t
MCwgbG9naWNhbCBibG9jayAwDQpbICAyODUuNzYxNTE4XSBsb3N0IHBhZ2Ugd3JpdGUgZHVl
IHRvIEkvTyBlcnJvciBvbiBkbS0wDQpbICAyODcuMTg5Nzk3XSBCbHVldG9vdGg6IGhjaTAg
Y29tbWFuZCAweDA0MDYgdHggdGltZW91dA0KWyAgMzA0LjMwMTMwM10gQlVHOiBzb2Z0IGxv
Y2t1cCAtIENQVSMyIHN0dWNrIGZvciAyMnMhIFtiYXNoOjk3MTRdDQpbICAzMDQuMzA3NDcz
XSBNb2R1bGVzIGxpbmtlZCBpbjoNClsgIDMwNC4zMTM1MjddIGlycSBldmVudCBzdGFtcDog
Mjc4NzQyDQpbICAzMDQuMzE5NTExXSBoYXJkaXJxcyBsYXN0ICBlbmFibGVkIGF0ICgyNzg3
NDEpOiBbPGZmZmZmZmZmODFhYTBiMzM+XSByZXN0b3JlX2FyZ3MrMHgwLzB4MzANClsgIDMw
NC4zMjU2MzldIGhhcmRpcnFzIGxhc3QgZGlzYWJsZWQgYXQgKDI3ODc0Mik6IFs8ZmZmZmZm
ZmY4MWFhMTAxNj5dIGVycm9yX3N0aSsweDUvMHg2DQpbICAzMDQuMzMxNzU1XSBzb2Z0aXJx
cyBsYXN0ICBlbmFibGVkIGF0ICgyNzg3NDApOiBbPGZmZmZmZmZmODEwYTlkZjE+XSBfX2Rv
X3NvZnRpcnErMHgxOTEvMHgyMjANClsgIDMwNC4zMzc4ODVdIHNvZnRpcnFzIGxhc3QgZGlz
YWJsZWQgYXQgKDI3ODcyNSk6IFs8ZmZmZmZmZmY4MTBhYTJlMj5dIGlycV9leGl0KzB4YTIv
MHhkMA0KWyAgMzA0LjM0MzkzNl0gQ1BVOiAyIFBJRDogOTcxNCBDb21tOiBiYXNoIE5vdCB0
YWludGVkIDMuMTMuMC1yYzctMjAxNDAxMDcteGVuZGV2ZWwrICMxDQpbICAzMDQuMzQ5OTk0
XSBIYXJkd2FyZSBuYW1lOiBNU0kgTVMtNzY0MC84OTBGWEEtR0Q3MCAoTVMtNzY0MCkgICwg
QklPUyBWMS44QjEgMDkvMTMvMjAxMA0KWyAgMzA0LjM1NjEwOF0gdGFzazogZmZmZjg4MDA1
Mjg2NjM2MCB0aTogZmZmZjg4MDA0Y2Y4ODAwMCB0YXNrLnRpOiBmZmZmODgwMDRjZjg4MDAw
DQpbICAzMDQuMzYyMjg2XSBSSVA6IGUwMzA6WzxmZmZmZmZmZjgxMTA5YTVhPl0gIFs8ZmZm
ZmZmZmY4MTEwOWE1YT5dIGdlbmVyaWNfZXhlY19zaW5nbGUrMHg4YS8weGMwDQpbICAzMDQu
MzY4NTY1XSBSU1A6IGUwMmI6ZmZmZjg4MDA0Y2Y4OWE4OCAgRUZMQUdTOiAwMDAwMDIwMg0K
WyAgMzA0LjM3NDgxMF0gUkFYOiBmZmZmODgwMDUyODY2MzYwIFJCWDogZmZmZjg4MDA1ZjYx
NDA0MCBSQ1g6IDAwMDAwMDAwMDAwMDAwMDYNClsgIDMwNC4zODExMjFdIFJEWDogMDAwMDAw
MDAwMDAwMDAwNiBSU0k6IGZmZmY4ODAwNTI4NjZhMzggUkRJOiAwMDAwMDAwMDAwMDAwMjAw
DQpbICAzMDQuMzg3MzkzXSBSQlA6IGZmZmY4ODAwNGNmODlhYzggUjA4OiAwMDAwMDAwMDAw
MDAwMDA2IFIwOTogMDAwMDAwMDAwMDAwMDAwMA0KWyAgMzA0LjM5MzY2MF0gUjEwOiAwMDAw
MDAwMDAwMDAwMDAxIFIxMTogMDAwMDAwMDAwMDAwMDAwMCBSMTI6IGZmZmY4ODAwNGNmODlh
ZjANClsgIDMwNC4zOTk5MTVdIFIxMzogMDAwMDAwMDAwMDAwMDAwMSBSMTQ6IDAwMDAwMDAw
MDAwMDAwMDAgUjE1OiBmZmZmODgwMDVmNjE0MDUwDQpbICAzMDQuNDA2MTI5XSBGUzogIDAw
MDA3ZjQwN2I0OWY3MDAoMDAwMCkgR1M6ZmZmZjg4MDA1ZjY4MDAwMCgwMDAwKSBrbmxHUzow
MDAwMDAwMDAwMDAwMDAwDQpbICAzMDQuNDEyNDQ0XSBDUzogIGUwMzMgRFM6IDAwMDAgRVM6
IDAwMDAgQ1IwOiAwMDAwMDAwMDgwMDUwMDNiDQpbICAzMDQuNDE4ODM2XSBDUjI6IDAwMDA3
ZjQwN2IwNmIwNzAgQ1IzOiAwMDAwMDAwMDRjZGQ1MDAwIENSNDogMDAwMDAwMDAwMDAwMDY2
MA0KWyAgMzA0LjQyNTMyMV0gU3RhY2s6DQpbICAzMDQuNDMxNzMzXSAgMDAwMDAwMDAwMDAw
MDIwMCBmZmZmODgwMDRjZDMzYWYwIDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAw
MA0KWyAgMzA0LjQzODM1N10gIDAwMDAwMDAwMDAwMDAwMDIgZmZmZmZmZmY4MjJlNzMwMCBm
ZmZmZmZmZjgxMDA3OTgwIDAwMDAwMDAwMDAwMDAwMDINClsgIDMwNC40NDQ5NzRdICBmZmZm
ODgwMDRjZjg5YjM4IGZmZmZmZmZmODExMDljYzUgZmZmZjg4MDA1NzdiNDg3OCBmZmZmZmZm
ZjgxMDA3ZDYwDQpbICAzMDQuNDUxNjIyXSBDYWxsIFRyYWNlOg0KWyAgMzA0LjQ1ODE5OF0g
IFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24r
MHgxNjAvMHgxNjANClsgIDMwNC40NjQ5MzhdICBbPGZmZmZmZmZmODExMDljYzU+XSBzbXBf
Y2FsbF9mdW5jdGlvbl9zaW5nbGUrMHhlNS8weDFlMA0KWyAgMzA0LjQ3MTcyNl0gIFs8ZmZm
ZmZmZmY4MTAwN2Q2MD5dID8geGVuX3Bpbl9wYWdlKzB4MTEwLzB4MTIwDQpbICAzMDQuNDc4
NDkwXSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3Jl
Z2lvbisweDE2MC8weDE2MA0KWyAgMzA0LjQ4NTI5M10gIFs8ZmZmZmZmZmY4MTEwYTAzYT5d
IHNtcF9jYWxsX2Z1bmN0aW9uX21hbnkrMHgyN2EvMHgyYTANClsgIDMwNC40OTIwODVdICBb
PGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4
MTYwLzB4MTYwDQpbICAzMDQuNDk4OTA5XSAgWzxmZmZmZmZmZjgxMDA4NDFlPl0geGVuX2V4
aXRfbW1hcCsweGNlLzB4MWEwDQpbICAzMDQuNTA1NzI2XSAgWzxmZmZmZmZmZjgxMTY5NDI2
Pl0gZXhpdF9tbWFwKzB4NTYvMHgxODANClsgIDMwNC41MTI1NDVdICBbPGZmZmZmZmZmODEw
ZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8weDI1MA0KWyAgMzA0LjUxOTM4NF0gIFs8
ZmZmZmZmZmY4MTFkZGRlMD5dID8gZXhpdF9haW8rMHhiMC8weGUwDQpbICAzMDQuNTI2MDM3
XSAgWzxmZmZmZmZmZjgxMWRkZDQ0Pl0gPyBleGl0X2FpbysweDE0LzB4ZTANClsgIDMwNC41
MzI0NjVdICBbPGZmZmZmZmZmODEwYTI2ODk+XSBtbXB1dCsweDU5LzB4ZTANClsgIDMwNC41
Mzg4MjldICBbPGZmZmZmZmZmODExOWEzYTk+XSBmbHVzaF9vbGRfZXhlYysweDQzOS8weDgz
MA0KWyAgMzA0LjU0NTE5M10gIFs8ZmZmZmZmZmY4MTFlOGNjYT5dIGxvYWRfZWxmX2JpbmFy
eSsweDMyYS8weDFhMDANClsgIDMwNC41NTE1NThdICBbPGZmZmZmZmZmODFhOWZmZTY+XSA/
IF9yYXdfcmVhZF91bmxvY2srMHgyNi8weDMwDQpbICAzMDQuNTU3ODk0XSAgWzxmZmZmZmZm
ZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMzA0LjU2NDI1MV0g
IFs8ZmZmZmZmZmY4MTE5OTI0Mz5dID8gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4YzMvMHgx
YjANClsgIDMwNC41NzA1OTldICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tfYWNxdWly
ZSsweGVjLzB4MTEwDQpbICAzMDQuNTc2OTAyXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBs
b2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDMwNC41ODMxNzddICBbPGZmZmZmZmZmODEx
OTkyMDQ+XSBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHg4NC8weDFiMA0KWyAgMzA0LjU4OTQ1
Ml0gIFs8ZmZmZmZmZmY4MTE5YjI1Mj5dIGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDU5
Mi8weDcxMA0KWyAgMzA0LjU5NTcyM10gIFs8ZmZmZmZmZmY4MTE5YjFlNz5dID8gZG9fZXhl
Y3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTI3LzB4NzEwDQpbICAzMDQuNjAxODYzXSAgWzxmZmZm
ZmZmZjgxMTg2MzQ1Pl0gPyBrbWVtX2NhY2hlX2FsbG9jKzB4YjUvMHgxMjANClsgIDMwNC42
MDc4NzNdICBbPGZmZmZmZmZmODExOWIzZTM+XSBkb19leGVjdmUrMHgxMy8weDIwDQpbICAz
MDQuNjEzODAyXSAgWzxmZmZmZmZmZjgxMTliNjU4Pl0gU3lTX2V4ZWN2ZSsweDM4LzB4NjAN
ClsgIDMwNC42MTk3NDNdICBbPGZmZmZmZmZmODFhYTE5ZTk+XSBzdHViX2V4ZWN2ZSsweDY5
LzB4YTANClsgIDMwNC42MjU2NDNdIENvZGU6IDg5IGRhIGU4IGVhIGMxIDMyIDAwIDQ4IDhi
IDQ1IGMwIDRjIDg5IGZmIDQ4IDg5IGM2IGU4IGJiIDY3IDk5IDAwIDQ4IDNiIDVkIGM4IDc0
IDJkIDQ1IDg1IGVkIDc1IDBhIGViIDEwIDY2IDBmIDFmIDQ0IDAwIDAwIGYzIDkwIDw0MT4g
ZjYgNDQgMjQgMjAgMDEgNzUgZjYgNDggOGIgNWQgZDggNGMgOGIgNjUgZTAgNGMgOGIgNmQg
ZTggNGMgDQpbICAzMDguMjk5MzE4XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzMgc3R1Y2sg
Zm9yIDIzcyEgW2Jhc2g6OTcxOF0NClsgIDMwOC4zMDU1NjZdIE1vZHVsZXMgbGlua2VkIGlu
Og0KWyAgMzA4LjMxMTc0NF0gaXJxIGV2ZW50IHN0YW1wOiAyOTA5NjQNClsgIDMwOC4zMTc4
ODZdIGhhcmRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDI5MDk2Myk6IFs8ZmZmZmZmZmY4MWFh
MGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzA4LjMyNDIxMF0gaGFyZGlycXMg
bGFzdCBkaXNhYmxlZCBhdCAoMjkwOTY0KTogWzxmZmZmZmZmZjgxYWExMDE2Pl0gZXJyb3Jf
c3RpKzB4NS8weDYNClsgIDMwOC4zMzA0OTddIHNvZnRpcnFzIGxhc3QgIGVuYWJsZWQgYXQg
KDI5MDk2Mik6IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGlycSsweDE5MS8weDIy
MA0KWyAgMzA4LjMzNjc3MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMjkwOTQ3KTog
WzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAzMDguMzQzMDA3
XSBDUFU6IDMgUElEOiA5NzE4IENvbW06IGJhc2ggTm90IHRhaW50ZWQgMy4xMy4wLXJjNy0y
MDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDMwOC4zNDkzMzJdIEhhcmR3YXJlIG5hbWU6IE1T
SSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBCSU9TIFYxLjhCMSAwOS8xMy8y
MDEwDQpbICAzMDguMzU1Njc1XSB0YXNrOiBmZmZmODgwMDU4ZGVkMmQwIHRpOiBmZmZmODgw
MDRjZjU2MDAwIHRhc2sudGk6IGZmZmY4ODAwNGNmNTYwMDANClsgIDMwOC4zNjIwODVdIFJJ
UDogZTAzMDpbPGZmZmZmZmZmODExMDlhNTg+XSAgWzxmZmZmZmZmZjgxMTA5YTU4Pl0gZ2Vu
ZXJpY19leGVjX3NpbmdsZSsweDg4LzB4YzANClsgIDMwOC4zNjg2MzBdIFJTUDogZTAyYjpm
ZmZmODgwMDRjZjU3YTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpbICAzMDguMzc1MTczXSBSQVg6
IGZmZmY4ODAwNThkZWQyZDAgUkJYOiBmZmZmODgwMDVmNjE0MDQwIFJDWDogMDAwMDAwMDAw
MDAwMDAwNg0KWyAgMzA4LjM4MTcxNV0gUkRYOiAwMDAwMDAwMDAwMDAwMDA2IFJTSTogZmZm
Zjg4MDA1OGRlZDlhOCBSREk6IDAwMDAwMDAwMDAwMDAyMDANClsgIDMwOC4zODgxNjddIFJC
UDogZmZmZjg4MDA0Y2Y1N2FjOCBSMDg6IDAwMDAwMDAwMDAwMDAwMDYgUjA5OiAwMDAwMDAw
MDAwMDAwMDAwDQpbICAzMDguMzk0NTM2XSBSMTA6IDAwMDAwMDAwMDAwMDAwMDEgUjExOiAw
MDAwMDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDA0Y2Y1N2FmMA0KWyAgMzA4LjQwMDgxMl0g
UjEzOiAwMDAwMDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAwMDAwMDAwMCBSMTU6IGZmZmY4
ODAwNWY2MTQwNTANClsgIDMwOC40MDY5NjZdIEZTOiAgMDAwMDdmZTNjYTk4NzcwMCgwMDAw
KSBHUzpmZmZmODgwMDVmNmMwMDAwKDAwMDApIGtubEdTOjAwMDAwMDAwMDAwMDAwMDANClsg
IDMwOC40MTMxNTldIENTOiAgZTAzMyBEUzogMDAwMCBFUzogMDAwMCBDUjA6IDAwMDAwMDAw
ODAwNTAwM2INClsgIDMwOC40MTkzNjJdIENSMjogMDAwMDdmOTU3NzU3MGYzMCBDUjM6IDAw
MDAwMDAwNGNkYzIwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYwDQpbICAzMDguNDI1NjQzXSBT
dGFjazoNClsgIDMwOC40MzE4NTNdICAwMDAwMDAwMDAwMDAwMjAwIGZmZmY4ODAwNGNkMzNh
ZjAgMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQpbICAzMDguNDM4MzIzXSAg
MDAwMDAwMDAwMDAwMDAwMyBmZmZmZmZmZjgyMmU3MzAwIGZmZmZmZmZmODEwMDc5ODAgMDAw
MDAwMDAwMDAwMDAwMw0KWyAgMzA4LjQ0NDcwNl0gIGZmZmY4ODAwNGNmNTdiMzggZmZmZmZm
ZmY4MTEwOWNjNSBmZmZmODgwMDU3NTg1MTc4IGZmZmZmZmZmODEwMDdkNjANClsgIDMwOC40
NTEwNjJdIENhbGwgVHJhY2U6DQpbICAzMDguNDU3MzA1XSAgWzxmZmZmZmZmZjgxMDA3OTgw
Pl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2MC8weDE2MA0KWyAgMzA4
LjQ2MzY2NF0gIFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9jYWxsX2Z1bmN0aW9uX3Npbmds
ZSsweGU1LzB4MWUwDQpbICAzMDguNDY5OTY4XSAgWzxmZmZmZmZmZjgxMDA3ZDYwPl0gPyB4
ZW5fcGluX3BhZ2UrMHgxMTAvMHgxMjANClsgIDMwOC40NzYyNzVdICBbPGZmZmZmZmZmODEw
MDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4MTYwLzB4MTYwDQpb
ICAzMDguNDgyNjAxXSAgWzxmZmZmZmZmZjgxMTBhMDNhPl0gc21wX2NhbGxfZnVuY3Rpb25f
bWFueSsweDI3YS8weDJhMA0KWyAgMzA4LjQ4ODg3MV0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5d
ID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDMwOC40
OTUxMjFdICBbPGZmZmZmZmZmODEwMDg0MWU+XSB4ZW5fZXhpdF9tbWFwKzB4Y2UvMHgxYTAN
ClsgIDMwOC41MDEzMzVdICBbPGZmZmZmZmZmODExNjk0MjY+XSBleGl0X21tYXArMHg1Ni8w
eDE4MA0KWyAgMzA4LjUwNzUzMF0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxl
YXNlKzB4MTJhLzB4MjUwDQpbICAzMDguNTEzNzA0XSAgWzxmZmZmZmZmZjgxMWRkZGUwPl0g
PyBleGl0X2FpbysweGIwLzB4ZTANClsgIDMwOC41MTk4MzNdICBbPGZmZmZmZmZmODExZGRk
NDQ+XSA/IGV4aXRfYWlvKzB4MTQvMHhlMA0KWyAgMzA4LjUyNTg3N10gIFs8ZmZmZmZmZmY4
MTBhMjY4OT5dIG1tcHV0KzB4NTkvMHhlMA0KWyAgMzA4LjUzMTg2MF0gIFs8ZmZmZmZmZmY4
MTE5YTNhOT5dIGZsdXNoX29sZF9leGVjKzB4NDM5LzB4ODMwDQpbICAzMDguNTM3ODY5XSAg
WzxmZmZmZmZmZjgxMWU4Y2NhPl0gbG9hZF9lbGZfYmluYXJ5KzB4MzJhLzB4MWEwMA0KWyAg
MzA4LjU0Mzg4MV0gIFs8ZmZmZmZmZmY4MWE5ZmZlNj5dID8gX3Jhd19yZWFkX3VubG9jaysw
eDI2LzB4MzANClsgIDMwOC41NDk5MTNdICBbPGZmZmZmZmZmODEwZTczOGM+XSA/IGxvY2tf
YWNxdWlyZSsweGVjLzB4MTEwDQpbICAzMDguNTU1OTg5XSAgWzxmZmZmZmZmZjgxMTk5MjQz
Pl0gPyBzZWFyY2hfYmluYXJ5X2hhbmRsZXIrMHhjMy8weDFiMA0KWyAgMzA4LjU2MjA5N10g
IFs8ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDMw
OC41NjgyMDNdICBbPGZmZmZmZmZmODEwZTcxN2E+XSA/IGxvY2tfcmVsZWFzZSsweDEyYS8w
eDI1MA0KWyAgMzA4LjU3NDI3OF0gIFs8ZmZmZmZmZmY4MTE5OTIwND5dIHNlYXJjaF9iaW5h
cnlfaGFuZGxlcisweDg0LzB4MWIwDQpbICAzMDguNTgwMzgxXSAgWzxmZmZmZmZmZjgxMTli
MjUyPl0gZG9fZXhlY3ZlX2NvbW1vbi5pc3JhLjMxKzB4NTkyLzB4NzEwDQpbICAzMDguNTg2
NTAyXSAgWzxmZmZmZmZmZjgxMTliMWU3Pl0gPyBkb19leGVjdmVfY29tbW9uLmlzcmEuMzEr
MHg1MjcvMHg3MTANClsgIDMwOC41OTI2MjldICBbPGZmZmZmZmZmODExODYzNDU+XSA/IGtt
ZW1fY2FjaGVfYWxsb2MrMHhiNS8weDEyMA0KWyAgMzA4LjU5ODc0NF0gIFs8ZmZmZmZmZmY4
MTE5YjNlMz5dIGRvX2V4ZWN2ZSsweDEzLzB4MjANClsgIDMwOC42MDQ4ODBdICBbPGZmZmZm
ZmZmODExOWI2NTg+XSBTeVNfZXhlY3ZlKzB4MzgvMHg2MA0KWyAgMzA4LjYxMDk4MF0gIFs8
ZmZmZmZmZmY4MWFhMTllOT5dIHN0dWJfZXhlY3ZlKzB4NjkvMHhhMA0KWyAgMzA4LjYxNzA1
Nl0gQ29kZTogYzggNDggODkgZGEgZTggZWEgYzEgMzIgMDAgNDggOGIgNDUgYzAgNGMgODkg
ZmYgNDggODkgYzYgZTggYmIgNjcgOTkgMDAgNDggM2IgNWQgYzggNzQgMmQgNDUgODUgZWQg
NzUgMGEgZWIgMTAgNjYgMGYgMWYgNDQgMDAgMDAgPGYzPiA5MCA0MSBmNiA0NCAyNCAyMCAw
MSA3NSBmNiA0OCA4YiA1ZCBkOCA0YyA4YiA2NSBlMCA0YyA4YiA2ZCANClsgIDMxMi4yOTcz
MjFdIEJVRzogc29mdCBsb2NrdXAgLSBDUFUjNCBzdHVjayBmb3IgMjJzISBbZGhjbGllbnQt
c2NyaXB0Ojk3MjNdDQpbICAzMTIuMjk3MzI0XSBCVUc6IHNvZnQgbG9ja3VwIC0gQ1BVIzEg
c3R1Y2sgZm9yIDIycyEgW3hlbmRvbWFpbnM6OTY3OV0NClsgIDMxMi4yOTczMjZdIE1vZHVs
ZXMgbGlua2VkIGluOg0KWyAgMzEyLjI5NzMzMV0gaXJxIGV2ZW50IHN0YW1wOiAzMzE2NDgN
ClsgIDMxMi4yOTczMzRdIGhhcmRpcnFzIGxhc3QgIGVuYWJsZWQgYXQgKDMzMTY0Nyk6IFs8
ZmZmZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzEyLjI5NzMz
N10gaGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoMzMxNjQ4KTogWzxmZmZmZmZmZjgxYWEx
MDE2Pl0gZXJyb3Jfc3RpKzB4NS8weDYNClsgIDMxMi4yOTczMzldIHNvZnRpcnFzIGxhc3Qg
IGVuYWJsZWQgYXQgKDMzMTY0Nik6IFs8ZmZmZmZmZmY4MTBhOWRmMT5dIF9fZG9fc29mdGly
cSsweDE5MS8weDIyMA0KWyAgMzEyLjI5NzM0MV0gc29mdGlycXMgbGFzdCBkaXNhYmxlZCBh
dCAoMzMxNjQxKTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpb
ICAzMTIuMjk3MzQyXSBDUFU6IDEgUElEOiA5Njc5IENvbW06IHhlbmRvbWFpbnMgTm90IHRh
aW50ZWQgMy4xMy4wLXJjNy0yMDE0MDEwNy14ZW5kZXZlbCsgIzENClsgIDMxMi4yOTczNDNd
IEhhcmR3YXJlIG5hbWU6IE1TSSBNUy03NjQwLzg5MEZYQS1HRDcwIChNUy03NjQwKSAgLCBC
SU9TIFYxLjhCMSAwOS8xMy8yMDEwDQpbICAzMTIuMjk3MzQzXSB0YXNrOiBmZmZmODgwMDU4
ZGVjMjQwIHRpOiBmZmZmODgwMDRjZDMyMDAwIHRhc2sudGk6IGZmZmY4ODAwNGNkMzIwMDAN
ClsgIDMxMi4yOTczNDVdIFJJUDogZTAzMDpbPGZmZmZmZmZmODExMDlhNWE+XSAgWzxmZmZm
ZmZmZjgxMTA5YTVhPl0gZ2VuZXJpY19leGVjX3NpbmdsZSsweDhhLzB4YzANClsgIDMxMi4y
OTczNDZdIFJTUDogZTAyYjpmZmZmODgwMDRjZDMzYTg4ICBFRkxBR1M6IDAwMDAwMjAyDQpb
ICAzMTIuMjk3MzQ3XSBSQVg6IDAwMDAwMDAwMDAwMDAwMDggUkJYOiBmZmZmODgwMDVmNjE0
MDQwIFJDWDogMDAwMDAwMDAwMDAwMDAzOA0KWyAgMzEyLjI5NzM0N10gUkRYOiAwMDAwMDAw
MDAwMDAwMGZmIFJTSTogMDAwMDAwMDAwMDAwMDAwOCBSREk6IDAwMDAwMDAwMDAwMDAwMDgN
ClsgIDMxMi4yOTczNDhdIFJCUDogZmZmZjg4MDA0Y2QzM2FjOCBSMDg6IGZmZmZmZmZmODFj
MGQ0NjggUjA5OiAwMDAwMDAwMDAwMDAwMDAwDQpbICAzMTIuMjk3MzQ4XSBSMTA6IDAwMDAw
MDAwMDAwMDAwMDEgUjExOiAwMDAwMDAwMDAwMDAwMDAwIFIxMjogZmZmZjg4MDA0Y2QzM2Fm
MA0KWyAgMzEyLjI5NzM0OV0gUjEzOiAwMDAwMDAwMDAwMDAwMDAxIFIxNDogMDAwMDAwMDAw
MDAwMDAwMCBSMTU6IGZmZmY4ODAwNWY2MTQwNTANClsgIDMxMi4yOTczNTFdIEZTOiAgMDAw
MDdmMTUyZWM0ZDcwMCgwMDAwKSBHUzpmZmZmODgwMDVmNjQwMDAwKDAwMDApIGtubEdTOjAw
MDAwMDAwMDAwMDAwMDANClsgIDMxMi4yOTczNTJdIENTOiAgZTAzMyBEUzogMDAwMCBFUzog
MDAwMCBDUjA6IDAwMDAwMDAwODAwNTAwM2INClsgIDMxMi4yOTczNThdIENSMjogMDAwMDdm
MTUyZTJhMWUwMiBDUjM6IDAwMDAwMDAwNGNkMmEwMDAgQ1I0OiAwMDAwMDAwMDAwMDAwNjYw
DQpbICAzMTIuMjk3MzYyXSBTdGFjazoNClsgIDMxMi4yOTczNjhdICAwMDAwMDAwMDAwMDAw
MjAwIGZmZmY4ODAwNWY2MTQwNDAgMDAwMDAwMDAwMDAwMDAwNiAwMDAwMDAwMDAwMDAwMDAw
DQpbICAzMTIuMjk3MzcwXSAgMDAwMDAwMDAwMDAwMDAwMSBmZmZmZmZmZjgyMmU3MzAwIGZm
ZmZmZmZmODEwMDc5ODAgMDAwMDAwMDAwMDAwMDAwMQ0KWyAgMzEyLjI5NzM3MV0gIGZmZmY4
ODAwNGNkMzNiMzggZmZmZmZmZmY4MTEwOWNjNSBmZmZmZmZmZjgxYWEwYjMzIGZmZmY4ODAw
Y2VkNTYwMDANClsgIDMxMi4yOTczNzJdIENhbGwgVHJhY2U6DQpbICAzMTIuMjk3Mzc0XSAg
WzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisw
eDE2MC8weDE2MA0KWyAgMzEyLjI5NzM3Nl0gIFs8ZmZmZmZmZmY4MTEwOWNjNT5dIHNtcF9j
YWxsX2Z1bmN0aW9uX3NpbmdsZSsweGU1LzB4MWUwDQpbICAzMTIuMjk3Mzc4XSAgWzxmZmZm
ZmZmZjgxYWEwYjMzPl0gPyByZXRpbnRfcmVzdG9yZV9hcmdzKzB4MTMvMHgxMw0KWyAgMzEy
LjI5NzM4MF0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91
c19yZWdpb24rMHgxNjAvMHgxNjANClsgIDMxMi4yOTczODFdICBbPGZmZmZmZmZmODExMGEw
M2E+XSBzbXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAzMTIuMjk3Mzgz
XSAgWzxmZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lv
bisweDE2MC8weDE2MA0KWyAgMzEyLjI5NzM4NV0gIFs8ZmZmZmZmZmY4MTAwODQxZT5dIHhl
bl9leGl0X21tYXArMHhjZS8weDFhMA0KWyAgMzEyLjI5NzM4N10gIFs8ZmZmZmZmZmY4MTAw
MTIyYT5dID8geGVuX2h5cGVyY2FsbF94ZW5fdmVyc2lvbisweGEvMHgyMA0KWyAgMzEyLjI5
NzM4OV0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5dIGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAz
MTIuMjk3MzkxXSAgWzxmZmZmZmZmZjgxMGU3MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEv
MHgyNTANClsgIDMxMi4yOTczOTNdICBbPGZmZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlv
KzB4YjAvMHhlMA0KWyAgMzEyLjI5NzM5NV0gIFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhp
dF9haW8rMHgxNC8weGUwDQpbICAzMTIuMjk3Mzk3XSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0g
bW1wdXQrMHg1OS8weGUwDQpbICAzMTIuMjk3Mzk5XSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0g
Zmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzANClsgIDMxMi4yOTc0MDBdICBbPGZmZmZmZmZm
ODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkrMHgzMmEvMHgxYTAwDQpbICAzMTIuMjk3NDAz
XSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBfcmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0K
WyAgMzEyLjI5NzQwNV0gIFs8ZmZmZmZmZmY4MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4
ZWMvMHgxMTANClsgIDMxMi4yOTc0MDZdICBbPGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJj
aF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIwDQpbICAzMTIuMjk3NDA4XSAgWzxmZmZmZmZm
ZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUrMHhlYy8weDExMA0KWyAgMzEyLjI5NzQxMF0g
IFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9ja19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAz
MTIuMjk3NDEyXSAgWzxmZmZmZmZmZjgxMTk5MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVy
KzB4ODQvMHgxYjANClsgIDMxMi4yOTc0MTNdICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19l
eGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIvMHg3MTANClsgIDMxMi4yOTc0MTVdICBbPGZm
ZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcx
MA0KWyAgMzEyLjI5NzQxN10gIFs8ZmZmZmZmZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9h
bGxvYysweGI1LzB4MTIwDQpbICAzMTIuMjk3NDE4XSAgWzxmZmZmZmZmZjgxMTliM2UzPl0g
ZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMzEyLjI5NzQyMF0gIFs8ZmZmZmZmZmY4MTE5YjY1
OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpbICAzMTIuMjk3NDIyXSAgWzxmZmZmZmZmZjgx
YWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8weGEwDQpbICAzMTIuMjk3NDM3XSBDb2RlOiA4
OSBkYSBlOCBlYSBjMSAzMiAwMCA0OCA4YiA0NSBjMCA0YyA4OSBmZiA0OCA4OSBjNiBlOCBi
YiA2NyA5OSAwMCA0OCAzYiA1ZCBjOCA3NCAyZCA0NSA4NSBlZCA3NSAwYSBlYiAxMCA2NiAw
ZiAxZiA0NCAwMCAwMCBmMyA5MCA8NDE+IGY2IDQ0IDI0IDIwIDAxIDc1IGY2IDQ4IDhiIDVk
IGQ4IDRjIDhiIDY1IGUwIDRjIDhiIDZkIGU4IDRjIA0KWyAgMzEyLjYzMTM1OV0gTW9kdWxl
cyBsaW5rZWQgaW46DQpbICAzMTIuNjM3NDc0XSBpcnEgZXZlbnQgc3RhbXA6IDY1NjEwDQpb
ICAzMTIuNjQzNDA2XSBoYXJkaXJxcyBsYXN0ICBlbmFibGVkIGF0ICg2NTYwOSk6IFs8ZmZm
ZmZmZmY4MWFhMGIzMz5dIHJlc3RvcmVfYXJncysweDAvMHgzMA0KWyAgMzEyLjY0OTM2OV0g
aGFyZGlycXMgbGFzdCBkaXNhYmxlZCBhdCAoNjU2MTApOiBbPGZmZmZmZmZmODFhYTEwMTY+
XSBlcnJvcl9zdGkrMHg1LzB4Ng0KWyAgMzEyLjY1NTI0MV0gc29mdGlycXMgbGFzdCAgZW5h
YmxlZCBhdCAoNjU2MDgpOiBbPGZmZmZmZmZmODEwYTlkZjE+XSBfX2RvX3NvZnRpcnErMHgx
OTEvMHgyMjANClsgIDMxMi42NjExNzJdIHNvZnRpcnFzIGxhc3QgZGlzYWJsZWQgYXQgKDY1
NjAzKTogWzxmZmZmZmZmZjgxMGFhMmUyPl0gaXJxX2V4aXQrMHhhMi8weGQwDQpbICAzMTIu
NjY3MDU3XSBDUFU6IDQgUElEOiA5NzIzIENvbW06IGRoY2xpZW50LXNjcmlwdCBOb3QgdGFp
bnRlZCAzLjEzLjAtcmM3LTIwMTQwMTA3LXhlbmRldmVsKyAjMQ0KWyAgMzEyLjY3MzA3Nl0g
SGFyZHdhcmUgbmFtZTogTVNJIE1TLTc2NDAvODkwRlhBLUdENzAgKE1TLTc2NDApICAsIEJJ
T1MgVjEuOEIxIDA5LzEzLzIwMTANClsgIDMxMi42NzkwOTNdIHRhc2s6IGZmZmY4ODAwNTI4
MDUyZDAgdGk6IGZmZmY4ODAwNGM0ODgwMDAgdGFzay50aTogZmZmZjg4MDA0YzQ4ODAwMA0K
WyAgMzEyLjY4NTE5NV0gUklQOiBlMDMwOls8ZmZmZmZmZmY4MTEwOWE1OD5dICBbPGZmZmZm
ZmZmODExMDlhNTg+XSBnZW5lcmljX2V4ZWNfc2luZ2xlKzB4ODgvMHhjMA0KWyAgMzEyLjY5
MTQyOF0gUlNQOiBlMDJiOmZmZmY4ODAwNGM0ODlhODggIEVGTEFHUzogMDAwMDAyMDINClsg
IDMxMi42OTc2NjRdIFJBWDogZmZmZjg4MDA1MjgwNTJkMCBSQlg6IGZmZmY4ODAwNWY2MTQw
NDAgUkNYOiAwMDAwMDAwMDAwMDAwMDA2DQpbICAzMTIuNzAzOTA4XSBSRFg6IDAwMDAwMDAw
MDAwMDAwMDYgUlNJOiBmZmZmODgwMDUyODA1OWE4IFJESTogMDAwMDAwMDAwMDAwMDIwMA0K
WyAgMzEyLjcxMDA2MV0gUkJQOiBmZmZmODgwMDRjNDg5YWM4IFIwODogMDAwMDAwMDAwMDAw
MDAwNiBSMDk6IDAwMDAwMDAwMDAwMDAwMDANClsgIDMxMi43MTYxNDRdIFIxMDogMDAwMDAw
MDAwMDAwMDAwMSBSMTE6IDAwMDAwMDAwMDAwMDAwMDAgUjEyOiBmZmZmODgwMDRjNDg5YWYw
DQpbICAzMTIuNzIyMTM3XSBSMTM6IDAwMDAwMDAwMDAwMDAwMDEgUjE0OiAwMDAwMDAwMDAw
MDAwMDAwIFIxNTogZmZmZjg4MDA1ZjYxNDA1MA0KWyAgMzEyLjcyODAxMV0gRlM6ICAwMDAw
N2Y1NzIyYzU2NzAwKDAwMDApIEdTOmZmZmY4ODAwNWY3MDAwMDAoMDAwMCkga25sR1M6MDAw
MDAwMDAwMDAwMDAwMA0KWyAgMzEyLjczMzkzM10gQ1M6ICBlMDMzIERTOiAwMDAwIEVTOiAw
MDAwIENSMDogMDAwMDAwMDA4MDA1MDAzYg0KWyAgMzEyLjczOTg4OV0gQ1IyOiAwMDAwN2Y2
NjUwNDI4ZjMwIENSMzogMDAwMDAwMDA0Y2Y5MzAwMCBDUjQ6IDAwMDAwMDAwMDAwMDA2NjAN
ClsgIDMxMi43NDU5MzBdIFN0YWNrOg0KWyAgMzEyLjc1MTg3OV0gIDAwMDAwMDAwMDAwMDAy
MDAgZmZmZjg4MDA0Y2QzM2FmMCAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAwMDAN
ClsgIDMxMi43NTgwOTVdICAwMDAwMDAwMDAwMDAwMDA0IGZmZmZmZmZmODIyZTczMDAgZmZm
ZmZmZmY4MTAwNzk4MCAwMDAwMDAwMDAwMDAwMDA0DQpbICAzMTIuNzY0MjM0XSAgZmZmZjg4
MDA0YzQ4OWIzOCBmZmZmZmZmZjgxMTA5Y2M1IGZmZmY4ODAwNTc0NTk2ZjggZmZmZmZmZmY4
MTAwN2Q2MA0KWyAgMzEyLjc3MDM1NF0gQ2FsbCBUcmFjZToNClsgIDMxMi43NzYzNzRdICBb
PGZmZmZmZmZmODEwMDc5ODA+XSA/IHhlbl9kZXN0cm95X2NvbnRpZ3VvdXNfcmVnaW9uKzB4
MTYwLzB4MTYwDQpbICAzMTIuNzgyNTAyXSAgWzxmZmZmZmZmZjgxMTA5Y2M1Pl0gc21wX2Nh
bGxfZnVuY3Rpb25fc2luZ2xlKzB4ZTUvMHgxZTANClsgIDMxMi43ODg1OTVdICBbPGZmZmZm
ZmZmODEwMDdkNjA+XSA/IHhlbl9waW5fcGFnZSsweDExMC8weDEyMA0KWyAgMzEyLjc5NDY4
MF0gIFs8ZmZmZmZmZmY4MTAwNzk4MD5dID8geGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdp
b24rMHgxNjAvMHgxNjANClsgIDMxMi44MDA4MTBdICBbPGZmZmZmZmZmODExMGEwM2E+XSBz
bXBfY2FsbF9mdW5jdGlvbl9tYW55KzB4MjdhLzB4MmEwDQpbICAzMTIuODA2ODkyXSAgWzxm
ZmZmZmZmZjgxMDA3OTgwPl0gPyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbisweDE2
MC8weDE2MA0KWyAgMzEyLjgxMjk2MF0gIFs8ZmZmZmZmZmY4MTAwODQxZT5dIHhlbl9leGl0
X21tYXArMHhjZS8weDFhMA0KWyAgMzEyLjgxODk5OF0gIFs8ZmZmZmZmZmY4MTE2OTQyNj5d
IGV4aXRfbW1hcCsweDU2LzB4MTgwDQpbICAzMTIuODI1MDA5XSAgWzxmZmZmZmZmZjgxMGU3
MTdhPl0gPyBsb2NrX3JlbGVhc2UrMHgxMmEvMHgyNTANClsgIDMxMi44MzEwMDZdICBbPGZm
ZmZmZmZmODExZGRkZTA+XSA/IGV4aXRfYWlvKzB4YjAvMHhlMA0KWyAgMzEyLjgzNjk1OF0g
IFs8ZmZmZmZmZmY4MTFkZGQ0ND5dID8gZXhpdF9haW8rMHgxNC8weGUwDQpbICAzMTIuODQy
ODMwXSAgWzxmZmZmZmZmZjgxMGEyNjg5Pl0gbW1wdXQrMHg1OS8weGUwDQpbICAzMTIuODQ4
NjUzXSAgWzxmZmZmZmZmZjgxMTlhM2E5Pl0gZmx1c2hfb2xkX2V4ZWMrMHg0MzkvMHg4MzAN
ClsgIDMxMi44NTQ0OTFdICBbPGZmZmZmZmZmODExZThjY2E+XSBsb2FkX2VsZl9iaW5hcnkr
MHgzMmEvMHgxYTAwDQpbICAzMTIuODYwMzI2XSAgWzxmZmZmZmZmZjgxYTlmZmU2Pl0gPyBf
cmF3X3JlYWRfdW5sb2NrKzB4MjYvMHgzMA0KWyAgMzEyLjg2NjE4NF0gIFs8ZmZmZmZmZmY4
MTBlNzM4Yz5dID8gbG9ja19hY3F1aXJlKzB4ZWMvMHgxMTANClsgIDMxMi44NzIwMzVdICBb
PGZmZmZmZmZmODExOTkyNDM+XSA/IHNlYXJjaF9iaW5hcnlfaGFuZGxlcisweGMzLzB4MWIw
DQpbICAzMTIuODc3OTc5XSAgWzxmZmZmZmZmZjgxMGU3MzhjPl0gPyBsb2NrX2FjcXVpcmUr
MHhlYy8weDExMA0KWyAgMzEyLjg4Mzk0MF0gIFs8ZmZmZmZmZmY4MTBlNzE3YT5dID8gbG9j
a19yZWxlYXNlKzB4MTJhLzB4MjUwDQpbICAzMTIuODg5ODgwXSAgWzxmZmZmZmZmZjgxMTk5
MjA0Pl0gc2VhcmNoX2JpbmFyeV9oYW5kbGVyKzB4ODQvMHgxYjANClsgIDMxMi44OTU4NDFd
ICBbPGZmZmZmZmZmODExOWIyNTI+XSBkb19leGVjdmVfY29tbW9uLmlzcmEuMzErMHg1OTIv
MHg3MTANClsgIDMxMi45MDE4MjZdICBbPGZmZmZmZmZmODExOWIxZTc+XSA/IGRvX2V4ZWN2
ZV9jb21tb24uaXNyYS4zMSsweDUyNy8weDcxMA0KWyAgMzEyLjkwNzg0N10gIFs8ZmZmZmZm
ZmY4MTE4NjM0NT5dID8ga21lbV9jYWNoZV9hbGxvYysweGI1LzB4MTIwDQpbICAzMTIuOTEz
ODU3XSAgWzxmZmZmZmZmZjgxMTliM2UzPl0gZG9fZXhlY3ZlKzB4MTMvMHgyMA0KWyAgMzEy
LjkxOTg0M10gIFs8ZmZmZmZmZmY4MTE5YjY1OD5dIFN5U19leGVjdmUrMHgzOC8weDYwDQpb
ICAzMTIuOTI1ODQwXSAgWzxmZmZmZmZmZjgxYWExOWU5Pl0gc3R1Yl9leGVjdmUrMHg2OS8w
eGEwDQpbICAzMTIuOTMxODIyXSBDb2RlOiBjOCA0OCA4OSBkYSBlOCBlYSBjMSAzMiAwMCA0
OCA4YiA0NSBjMCA0YyA4OSBmZiA0OCA4OSBjNiBlOCBiYiA2NyA5OSAwMCA0OCAzYiA1ZCBj
OCA3NCAyZCA0NSA4NSBlZCA3NSAwYSBlYiAxMCA2NiAwZiAxZiA0NCAwMCAwMCA8ZjM+IDkw
IDQxIGY2IDQ0IDI0IDIwIDAxIDc1IGY2IDQ4IDhiIDVkIGQ4IDRjIDhiIDY1IGUwIDRjIDhi
IDZkIA0K
------------12706E23E275304AE
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------12706E23E275304AE--



From xen-devel-bounces@lists.xen.org Tue Jan 07 11:59:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 11:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VJQ-0000sz-9S; Tue, 07 Jan 2014 11:59:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0VJO-0000rN-Gw
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 11:59:10 +0000
Received: from [193.109.254.147:65298] by server-10.bemta-14.messagelabs.com
	id 13/65-20752-D0CEBC25; Tue, 07 Jan 2014 11:59:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389095948!9287740!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26640 invoked from network); 7 Jan 2014 11:59:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 11:59:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88237920"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 11:59:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 06:59:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W0VJK-0003qt-Uw; Tue, 07 Jan 2014 11:59:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 11:59:06 +0000
Message-ID: <1389095946-11932-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <52CBF78402000078001110E8@nat28.tlf.novell.com>
References: <52CBF78402000078001110E8@nat28.tlf.novell.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [Patch v2 1/4] common/sysctl: Don't leak status in
	SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If copy_to_guest() fails, the early 'break' skips the xfree(status) that
follows it, leaking the status buffer. Remove the break so the cleanup always
runs; in addition, 'copyback' should be cleared even in the error case.

Also fix the indentation of the arguments to copy_to_guest() to make it clear
that the 'ret = -EFAULT' is not part of the condition.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>

---

Changes in v2:
 * Still clear copyback even in the error case.
---
 xen/common/sysctl.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 117e095..0cb6ee1 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -230,12 +230,9 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
         }
 
         if ( copy_to_guest(
-            op->u.page_offline.status, status,
-            op->u.page_offline.end - op->u.page_offline.start + 1) )
-        {
+                 op->u.page_offline.status, status,
+                 op->u.page_offline.end - op->u.page_offline.start + 1) )
             ret = -EFAULT;
-            break;
-        }
 
         xfree(status);
         copyback = 0;
-- 
1.7.10.4
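The control flow this patch establishes (free the temporary status buffer and clear 'copyback' on every exit path, including the copy failure) can be modelled standalone. The sketch below uses libc stand-ins such as mock_copy_to_guest() and calloc()/free() in place of the real Xen primitives; the names are illustrative, not the hypervisor's API:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int copy_should_fail;            /* toggles the simulated fault */

/* Stand-in for copy_to_guest(): non-zero return means bytes not copied. */
static int mock_copy_to_guest(void *dst, const void *src, size_t n)
{
    if ( copy_should_fail )
        return 1;
    memcpy(dst, src, n);
    return 0;
}

/*
 * Mirrors the fixed control flow: no early break on copy failure, so
 * the buffer is freed and copyback is cleared on every path.
 */
static int page_offline_op(void *guest_buf, size_t n, int *copyback)
{
    int ret = 0;
    unsigned char *status = calloc(n, 1);  /* stands in for the status array */

    if ( !status )
        return -ENOMEM;

    if ( mock_copy_to_guest(guest_buf, status, n) )
        ret = -EFAULT;                  /* noted, but cleanup still runs */

    free(status);                       /* freed on success and on failure */
    *copyback = 0;                      /* cleared on both paths */
    return ret;
}
```

Before the fix, the error path broke out before free()/clearing, leaking the buffer and leaving copyback set.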


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:01:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VLk-00017v-0m; Tue, 07 Jan 2014 12:01:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0VLi-00017p-DQ
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 12:01:34 +0000
Received: from [193.109.254.147:3085] by server-2.bemta-14.messagelabs.com id
	1A/0C-00361-D9CEBC25; Tue, 07 Jan 2014 12:01:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389096091!7047326!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23356 invoked from network); 7 Jan 2014 12:01:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 12:01:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90401700"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 12:01:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	07:01:30 -0500
Message-ID: <1389096089.31766.140.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 7 Jan 2014 12:01:29 +0000
In-Reply-To: <1387461038.9925.77.camel@kazak.uk.xensource.com>
References: <alpine.DEB.2.02.1312171610140.8667@kaball.uk.xensource.com>
	<1387461038.9925.77.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: pass a struct pending_irq* as
 parameter to gic helper functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2013-12-19 at 13:50 +0000, Ian Campbell wrote:
> On Tue, 2013-12-17 at 16:16 +0000, Stefano Stabellini wrote:
> > gic_add_to_lr_pending and gic_set_lr should take a struct pending_irq*
> > as parameter instead of the virtual_irq number and the priority
> > separately and doing yet another irq_to_pending lookup.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> What do you think about this one for 4.4? It whiffs a bit of a cleanup
> rather than a bug fix (unless e.g. there is some subtle incorrectness in
> the priority which was getting passed around before, rather than just
> the potential for it)

Unless there is an actual bug fixed by this, I think I'll leave it for
4.5.

> 
> 
> > @@ -672,12 +670,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> >          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> >          if (i < nr_lrs) {
> >              set_bit(i, &this_cpu(lr_mask));
> > -            gic_set_lr(i, virtual_irq, state, priority);
> > +            gic_set_lr(i, irq_to_pending(v, virtual_irq), state);
> >              goto out;
> >          }
> >      }
> >  
> > -    gic_add_to_lr_pending(v, virtual_irq, priority);
> > +    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
> 
> I think you could take this one step further and make a similar change
> to gic_set_guest_irq, the only caller has the pending_irq * in its hand
> already.
> 
> Ian.
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
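The refactor under discussion, resolving the pending_irq once in the caller and handing the pointer to the helpers, can be illustrated with a simplified model. The names follow the patch, but the bodies and the lookup counter are illustrative, not the real GIC code:

```c
#include <assert.h>

struct pending_irq {
    unsigned int irq;
    unsigned int priority;
};

#define NR_IRQS 8
static struct pending_irq pending[NR_IRQS];
static unsigned int lookups;            /* counts irq_to_pending() calls */

static struct pending_irq *irq_to_pending(unsigned int virtual_irq)
{
    ++lookups;
    return &pending[virtual_irq];
}

/* After the change the helper takes the struct directly... */
static unsigned int gic_set_lr(int lr, struct pending_irq *p)
{
    (void)lr;                           /* placeholder: would program the LR */
    return p->priority;
}

/* ...so the whole injection path performs exactly one lookup. */
static unsigned int inject(unsigned int virtual_irq)
{
    struct pending_irq *p = irq_to_pending(virtual_irq);
    return gic_set_lr(0, p);
}
```

Extending the same change to gic_set_guest_irq, as suggested, would push the single lookup one level further up into its caller.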



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:12:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:12:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VVe-0001hc-8O; Tue, 07 Jan 2014 12:11:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VVd-0001hX-34
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:11:49 +0000
Received: from [85.158.137.68:45504] by server-12.bemta-3.messagelabs.com id
	C2/62-20055-40FEBC25; Tue, 07 Jan 2014 12:11:48 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389096707!6518576!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25340 invoked from network); 7 Jan 2014 12:11:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:11:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:11:46 +0000
Message-Id: <52CBFD0E0200007800111119@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:11:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1388595096-15787-1-git-send-email-dslutz@verizon.com>
	<52C54388.105@citrix.com>
In-Reply-To: <52C54388.105@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX] [PATCH] kexec/x86: Do map crash kernel area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 02.01.14 at 11:46, David Vrabel <david.vrabel@citrix.com> wrote:
> On 01/01/14 16:51, Don Slutz wrote:
>> Revert of commit 7113a45451a9f656deeff070e47672043ed83664
> 
> Since this commit introduced a regression, a revert is the best thing to
> do here.
> 
> Acked-by: David Vrabel <david.vrabel@citrix.com>

I don't agree: Reverting this would imply restoring the issue with
mapping the kexec range even when outside the direct map area.

Furthermore I intend to backport this change, as from all I can
tell prior to your rework of the kexec code it was entirely unused
(and hence should never have been there).

>> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>> +                     kexec_crash_area.start >> PAGE_SHIFT,
>> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>> +
> 
> This should be made conditional on the location of the crash region --
> it is wrong to do this for portions of the crash region that are outside
> the direct map.

Either that or the kexec code would become independent of the
specific behavioral aspects of the direct map and map_domain_page().
It's the latter that I'd prefer, not the least because I'm getting the
impression that restoring the code (even if conditional) would still not
help when the kexec area is not only outside the direct map area, but
beyond accessible RAM boundaries (i.e. on MFNs for which mfn_valid()
would return false).

Jan
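The conditional mapping suggested above amounts to clamping the crash range against the end of the direct map before calling map_pages_to_xen(). A hedged sketch of just that arithmetic, with an illustrative boundary parameter rather than Xen's real memory layout:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_UP(x)  (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

/*
 * Number of pages of [start, start + size) lying below directmap_end,
 * i.e. the prefix of the crash area that could safely be mapped.
 * The direct map boundary is a parameter here, not Xen's real value.
 */
static uint64_t mappable_pages(uint64_t start, uint64_t size,
                               uint64_t directmap_end)
{
    if ( start >= directmap_end )
        return 0;                       /* entirely outside the direct map */
    if ( start + size > directmap_end )
        size = directmap_end - start;   /* clamp the tail */
    return PFN_UP(size);
}
```

Jan's point is that even this clamping does not help once the area sits beyond accessible RAM entirely, which is why he prefers making kexec independent of the direct map.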


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:15:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VYz-0001oz-Si; Tue, 07 Jan 2014 12:15:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VYy-0001or-76
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:15:16 +0000
Received: from [85.158.143.35:11057] by server-3.bemta-4.messagelabs.com id
	7C/78-32360-3DFEBC25; Tue, 07 Jan 2014 12:15:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389096914!3001563!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23964 invoked from network); 7 Jan 2014 12:15:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:15:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:15:14 +0000
Message-Id: <52CBFDDD020000780011112C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:15:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Gordan Bobic" <gordan@bobich.net>,"Feng Wu" <feng.wu@intel.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
In-Reply-To: <6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 12:35, Gordan Bobic <gordan@bobich.net> wrote:
> On 2014-01-07 11:26, Wu, Feng wrote:
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org 
>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>> Sent: Tuesday, January 07, 2014 6:44 PM
>>> To: Andrew Cooper
>>> Cc: xen-devel@lists.xen.org 
>>> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: 
>>> iommuu/vt-d
>>> issues with LSI MegaSAS (PERC5i))
>>> 
>>> On 2014-01-07 10:38, Andrew Cooper wrote:
>>> > On 07/01/14 10:35, Gordan Bobic wrote:
>>> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>>> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>> >>>>> Which would look like this:
>>> >>>>>
>>> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>>> >>>>> on the card
>>> >>>>>           \--------------> IEEE-1394a
>>> >>>>>
>>> >>>>> I am actually wondering if this 07:00.0 device is the one that
>>> >>>>> reports itself as 08:00.0 (which I think is what you alluding to
>>> >>>>> Jan)
>>> >>>>>
>>> >>>>
>>> >>>> And to double check that theory I decided to pass in the IEEE-1394a
>>> >>>> to a guest:
>>> >>>>
>>> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>>> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>>> >>>>
>>> >>>>
>>> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>>> >>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>>> >>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
>>> >>>> fault
>>> >>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>>> >>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>>> >>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
>>> >>>> root_entry
>>> >>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
>>> >>>> context
>>> >>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
>>> >>>> ctxt_entry[0]
>>> >>>> not present
>>> >>>>
>>> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
>>> >>>>
>>> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>>> >>>> (prog-if 01 [Subtractive decode])
>>> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
>>> VGASnoop-
>>> >>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+
>>> >>>> 66MHz-
>>> >>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
>>> <MAbort+
>>> >>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
>>> >>>> secondary=08,
>>> >>>>         subordinate=08, sec-latency=32 Memory behind bridge:
>>> >>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>>> >>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>>> >>>> BridgeCtl:
>>> >>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat-
>>> DiscTmrSERREn-
>>> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>>> >>>> Device 0805
>>> >>>>         Capabilities: [a0] Power Management version 3
>>> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>> >>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
>>> >>>> NoSoftRst+
>>> >>>>                 PME-Enable- DSel=0 DScale=0 PME-
>>> >>>>
>>> >>>> or there is some unknown bridge in the motherboard.
>>> >>>
>>> >>> According your description above, the upstream Linux should also have
>>> >>> the same problem. Did you see it with upstream Linux?
>>> >>
>>> >> The problem I was seeing with LSI cards (phantom device doing DMA)
>>> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
>>> >> bare metal Linux, the same problem occurs as with Xen.
>>> >>
>>> >>> There may be some buggy device that generate DMA request with
>>> >>> internal
>>> >>> BDF but it didn't expose it(not like Phantom device). For those
>>> >>> devices, I think we need to setup the VT-d page table manually.
>>> >>
>>> >> I think what is needed is a pci-phantom style override that tells the
>>> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
>>> >> invisible device ID.
>>> >>
>>> >> Gordan
>>> >
>>> > There is.  See "pci-phantom" in
>>> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html 
>>> 
>>> I thought this was only applicable to phantom _functions_ (number 
>>> after
>>> the
>>> dot) rather than whole phantom _devices_. Is that not the case?
>> 
>> I think that's right. I went through the related code for the pci
>> phantom device just now, and I found that the information from the
>> command line 'pci-phantom' is stored in the variable 'phantom_devs[8]'
>> of type struct phantom_dev{}. This variable is used in the function
>> alloc_pdev() as follows:
>> 
>> 
>>                 for ( i = 0; i < nr_phantom_devs; ++i )
>>                     if ( phantom_devs[i].seg == pseg->nr &&
>>                          phantom_devs[i].bus == bus &&
>>                          phantom_devs[i].slot == PCI_SLOT(devfn) &&
>>                          phantom_devs[i].stride > PCI_FUNC(devfn) )
>>                     {
>>                         pdev->phantom_stride = phantom_devs[i].stride;
>>                         break;
>>                     }
>> 
>> So from the code, we can see this command line only works for phantom
>> _functions_, not for whole phantom _devices_.
> 
> What would it take to make it work for a whole phantom device?

First and foremost a definition of what a phantom device is and
how one would behave. Once again - phantom functions are part
of the PCIe specification, so those don't require a definition.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:15:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VYz-0001oz-Si; Tue, 07 Jan 2014 12:15:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VYy-0001or-76
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:15:16 +0000
Received: from [85.158.143.35:11057] by server-3.bemta-4.messagelabs.com id
	7C/78-32360-3DFEBC25; Tue, 07 Jan 2014 12:15:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389096914!3001563!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23964 invoked from network); 7 Jan 2014 12:15:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:15:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:15:14 +0000
Message-Id: <52CBFDDD020000780011112C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:15:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Gordan Bobic" <gordan@bobich.net>,"Feng Wu" <feng.wu@intel.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
In-Reply-To: <6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 12:35, Gordan Bobic <gordan@bobich.net> wrote:
> On 2014-01-07 11:26, Wu, Feng wrote:
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org 
>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>> Sent: Tuesday, January 07, 2014 6:44 PM
>>> To: Andrew Cooper
>>> Cc: xen-devel@lists.xen.org 
>>> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: 
>>> iommuu/vt-d
>>> issues with LSI MegaSAS (PERC5i))
>>> 
>>> On 2014-01-07 10:38, Andrew Cooper wrote:
>>> > On 07/01/14 10:35, Gordan Bobic wrote:
>>> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>>> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>> >>>>> Which would look like this:
>>> >>>>>
>>> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>>> >>>>> on the card
>>> >>>>>           \--------------> IEEE-1394a
>>> >>>>>
>>> >>>>> I am actually wondering if this 07:00.0 device is the one that
>>> >>>>> reports itself as 08:00.0 (which I think is what you were
>>> >>>>> alluding to, Jan)
>>> >>>>>
>>> >>>>
>>> >>>> And to double check that theory I decided to pass in the IEEE-1394a
>>> >>>> to a guest:
>>> >>>>
>>> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>>> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>>> >>>>
>>> >>>>
>>> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
>>> >>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
>>> >>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
>>> >>>> fault
>>> >>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
>>> >>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
>>> >>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
>>> >>>> root_entry
>>> >>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
>>> >>>> context
>>> >>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
>>> >>>> ctxt_entry[0]
>>> >>>> not present
>>> >>>>
>>> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
>>> >>>>
>>> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>>> >>>> (prog-if 01 [Subtractive decode])
>>> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
>>> VGASnoop-
>>> >>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+
>>> >>>> 66MHz-
>>> >>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
>>> <MAbort+
>>> >>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
>>> >>>> secondary=08,
>>> >>>>         subordinate=08, sec-latency=32 Memory behind bridge:
>>> >>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
>>> >>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>>> >>>> BridgeCtl:
>>> >>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat-
>>> DiscTmrSERREn-
>>> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>>> >>>> Device 0805
>>> >>>>         Capabilities: [a0] Power Management version 3
>>> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>> >>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
>>> >>>> NoSoftRst+
>>> >>>>                 PME-Enable- DSel=0 DScale=0 PME-
>>> >>>>
>>> >>>> or there is some unknown bridge on the motherboard.
>>> >>>
>>> >>> According to your description above, upstream Linux should also
>>> >>> have the same problem. Did you see it with upstream Linux?
>>> >>
>>> >> The problem I was seeing with LSI cards (phantom device doing DMA)
>>> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
>>> >> bare metal Linux, the same problem occurs as with Xen.
>>> >>
>>> >>> There may be some buggy devices that generate DMA requests with
>>> >>> internal BDFs but don't expose them (unlike phantom devices). For
>>> >>> those devices, I think we need to set up the VT-d page tables
>>> >>> manually.
>>> >>
>>> >> I think what is needed is a pci-phantom style override that tells the
>>> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
>>> >> invisible device ID.
>>> >>
>>> >> Gordan
>>> >
>>> > There is.  See "pci-phantom" in
>>> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html 
>>> 
>>> I thought this was only applicable to phantom _functions_ (the number
>>> after the dot) rather than whole phantom _devices_. Is that not the
>>> case?
>> 
>> I think that's right. I went through the related code for the PCI
>> phantom device just now, and I found that the information from the
>> 'pci-phantom' command-line option is stored in the variable
>> 'phantom_devs[8]', of type 'struct phantom_dev {}'. This variable is
>> used in the function alloc_pdev() as follows:
>> 
>> 
>>                 for ( i = 0; i < nr_phantom_devs; ++i )
>>                     if ( phantom_devs[i].seg == pseg->nr &&
>>                          phantom_devs[i].bus == bus &&
>>                          phantom_devs[i].slot == PCI_SLOT(devfn) &&
>>                          phantom_devs[i].stride > PCI_FUNC(devfn) )
>>                     {
>>                         pdev->phantom_stride = phantom_devs[i].stride;
>>                         break;
>>                     }
>> 
>> So from the code we can see that this command-line option only works
>> for phantom _functions_, not for whole phantom _devices_.
> 
> What would it take to make it work for a whole phantom device?

First and foremost a definition of what a phantom device is and
how one would behave. Once again - phantom functions are part
of the PCIe specification, so those don't require a definition.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:16:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VaC-0001wX-2X; Tue, 07 Jan 2014 12:16:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VaA-0001wH-4X
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 12:16:30 +0000
Received: from [85.158.143.35:23515] by server-2.bemta-4.messagelabs.com id
	87/47-11386-D10FBC25; Tue, 07 Jan 2014 12:16:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389096988!10139453!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28217 invoked from network); 7 Jan 2014 12:16:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:16:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:16:28 +0000
Message-Id: <52CBFE27020000780011112F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:16:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yijing Wang" <wangyijing@huawei.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1386243245-34280-1-git-send-email-wangyijing@huawei.com>
	<20131231145111.GB3349@phenom.dumpdata.com>
In-Reply-To: <20131231145111.GB3349@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	linux-kernel@vger.kernel.org, Hanjun Guo <guohanjun@huawei.com>
Subject: Re: [Xen-devel] [PATCH] xen: Use dev_is_pci() to check whether it
 is pci device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.12.13 at 15:51, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Thu, Dec 05, 2013 at 07:34:05PM +0800, Yijing Wang wrote:
>> Use the standard PCI macro dev_is_pci() instead of directly comparing
>> against pci_bus_type to check whether a device is a PCI device.
> 
> Jan, you OK with this?

Yes, sure - I simply wasn't aware of that wrapper.

Jan

>> Signed-off-by: Yijing Wang <wangyijing@huawei.com>
>> ---
>>  drivers/xen/dbgp.c |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>> 
>> diff --git a/drivers/xen/dbgp.c b/drivers/xen/dbgp.c
>> index f3ccc80..8145a59 100644
>> --- a/drivers/xen/dbgp.c
>> +++ b/drivers/xen/dbgp.c
>> @@ -19,7 +19,7 @@ static int xen_dbgp_op(struct usb_hcd *hcd, int op)
>>  	dbgp.op = op;
>>  
>>  #ifdef CONFIG_PCI
>> -	if (ctrlr->bus == &pci_bus_type) {
>> +	if (dev_is_pci(ctrlr)) {
>>  		const struct pci_dev *pdev = to_pci_dev(ctrlr);
>>  
>>  		dbgp.u.pci.seg = pci_domain_nr(pdev->bus);
>> -- 
>> 1.7.1
>> 
>> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:25:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ViI-0002gK-81; Tue, 07 Jan 2014 12:24:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ViH-0002gE-2F
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 12:24:53 +0000
Received: from [193.109.254.147:37132] by server-3.bemta-14.messagelabs.com id
	35/AD-11000-412FBC25; Tue, 07 Jan 2014 12:24:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389097490!9294771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19687 invoked from network); 7 Jan 2014 12:24:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 12:24:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90410097"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 12:24:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	07:24:49 -0500
Message-ID: <1389097488.31766.142.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Jan 2014 12:24:48 +0000
In-Reply-To: <52B44F3F020000780010F8E9@nat28.tlf.novell.com>
References: <52B44E7B020000780010F8BD@nat28.tlf.novell.com>
	<52B44F3F020000780010F8E9@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] rename XENMEM_add_to_physmap_{range =>
	batch} (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2013-12-20 at 13:07 +0000, Jan Beulich wrote:
> The use of "range" here wasn't really correct - there are no ranges
> involved. As the comment in the public header already correctly said,
> all this is about is batching of XENMEM_add_to_physmap calls (with
> the addition of having a way to specify a foreign domain for
> XENMAPSPACE_gmfn_foreign).
> 
> Suggested-by: Ian Campbell <Ian.Campbell@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Were you targeting this one at 4.4?

> ---
> v2: fix the compatibility DEFINE_XEN_GUEST_HANDLE()
> 
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -595,54 +595,54 @@ static int xenmem_add_to_physmap(struct 
>      return rc;
>  }
>  
> -static int xenmem_add_to_physmap_range(struct domain *d,
> -                                       struct xen_add_to_physmap_range *xatpr,
> +static int xenmem_add_to_physmap_batch(struct domain *d,
> +                                       struct xen_add_to_physmap_batch *xatpb,
>                                         unsigned int start)
>  {
>      unsigned int done = 0;
>      int rc;
>  
> -    if ( xatpr->size < start )
> +    if ( xatpb->size < start )
>          return -EILSEQ;
>  
> -    guest_handle_add_offset(xatpr->idxs, start);
> -    guest_handle_add_offset(xatpr->gpfns, start);
> -    guest_handle_add_offset(xatpr->errs, start);
> -    xatpr->size -= start;
> +    guest_handle_add_offset(xatpb->idxs, start);
> +    guest_handle_add_offset(xatpb->gpfns, start);
> +    guest_handle_add_offset(xatpb->errs, start);
> +    xatpb->size -= start;
>  
> -    while ( xatpr->size > done )
> +    while ( xatpb->size > done )
>      {
>          xen_ulong_t idx;
>          xen_pfn_t gpfn;
>  
> -        if ( unlikely(__copy_from_guest_offset(&idx, xatpr->idxs, 0, 1)) )
> +        if ( unlikely(__copy_from_guest_offset(&idx, xatpb->idxs, 0, 1)) )
>          {
>              rc = -EFAULT;
>              goto out;
>          }
>  
> -        if ( unlikely(__copy_from_guest_offset(&gpfn, xatpr->gpfns, 0, 1)) )
> +        if ( unlikely(__copy_from_guest_offset(&gpfn, xatpb->gpfns, 0, 1)) )
>          {
>              rc = -EFAULT;
>              goto out;
>          }
>  
> -        rc = xenmem_add_to_physmap_one(d, xatpr->space,
> -                                       xatpr->foreign_domid,
> +        rc = xenmem_add_to_physmap_one(d, xatpb->space,
> +                                       xatpb->foreign_domid,
>                                         idx, gpfn);
>  
> -        if ( unlikely(__copy_to_guest_offset(xatpr->errs, 0, &rc, 1)) )
> +        if ( unlikely(__copy_to_guest_offset(xatpb->errs, 0, &rc, 1)) )
>          {
>              rc = -EFAULT;
>              goto out;
>          }
>  
> -        guest_handle_add_offset(xatpr->idxs, 1);
> -        guest_handle_add_offset(xatpr->gpfns, 1);
> -        guest_handle_add_offset(xatpr->errs, 1);
> +        guest_handle_add_offset(xatpb->idxs, 1);
> +        guest_handle_add_offset(xatpb->gpfns, 1);
> +        guest_handle_add_offset(xatpb->errs, 1);
>  
>          /* Check for continuation if it's not the last iteration. */
> -        if ( xatpr->size > ++done && hypercall_preempt_check() )
> +        if ( xatpb->size > ++done && hypercall_preempt_check() )
>          {
>              rc = start + done;
>              goto out;
> @@ -797,7 +797,7 @@ long do_memory_op(unsigned long cmd, XEN
>          if ( copy_from_guest(&xatp, arg, 1) )
>              return -EFAULT;
>  
> -        /* Foreign mapping is only possible via add_to_physmap_range. */
> +        /* Foreign mapping is only possible via add_to_physmap_batch. */
>          if ( xatp.space == XENMAPSPACE_gmfn_foreign )
>              return -ENOSYS;
>  
> @@ -824,29 +824,29 @@ long do_memory_op(unsigned long cmd, XEN
>          return rc;
>      }
>  
> -    case XENMEM_add_to_physmap_range:
> +    case XENMEM_add_to_physmap_batch:
>      {
> -        struct xen_add_to_physmap_range xatpr;
> +        struct xen_add_to_physmap_batch xatpb;
>          struct domain *d;
>  
> -        BUILD_BUG_ON((typeof(xatpr.size))-1 >
> +        BUILD_BUG_ON((typeof(xatpb.size))-1 >
>                       (UINT_MAX >> MEMOP_EXTENT_SHIFT));
>  
>          /* Check for malicious or buggy input. */
> -        if ( start_extent != (typeof(xatpr.size))start_extent )
> +        if ( start_extent != (typeof(xatpb.size))start_extent )
>              return -EDOM;
>  
> -        if ( copy_from_guest(&xatpr, arg, 1) ||
> -             !guest_handle_okay(xatpr.idxs, xatpr.size) ||
> -             !guest_handle_okay(xatpr.gpfns, xatpr.size) ||
> -             !guest_handle_okay(xatpr.errs, xatpr.size) )
> +        if ( copy_from_guest(&xatpb, arg, 1) ||
> +             !guest_handle_okay(xatpb.idxs, xatpb.size) ||
> +             !guest_handle_okay(xatpb.gpfns, xatpb.size) ||
> +             !guest_handle_okay(xatpb.errs, xatpb.size) )
>              return -EFAULT;
>  
>          /* This mapspace is unsupported for this hypercall. */
> -        if ( xatpr.space == XENMAPSPACE_gmfn_range )
> +        if ( xatpb.space == XENMAPSPACE_gmfn_range )
>              return -EOPNOTSUPP;
>  
> -        d = rcu_lock_domain_by_any_id(xatpr.domid);
> +        d = rcu_lock_domain_by_any_id(xatpb.domid);
>          if ( d == NULL )
>              return -ESRCH;
>  
> @@ -857,7 +857,7 @@ long do_memory_op(unsigned long cmd, XEN
>              return rc;
>          }
>  
> -        rc = xenmem_add_to_physmap_range(d, &xatpr, start_extent);
> +        rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>  
>          rcu_unlock_domain(d);
>  
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -79,7 +79,7 @@
>   *
>   *   In addition the following arch specific sub-ops:
>   *    * XENMEM_add_to_physmap
> - *    * XENMEM_add_to_physmap_range
> + *    * XENMEM_add_to_physmap_batch
>   *
>   *  HYPERVISOR_domctl
>   *   All generic sub-operations, with the exception of:
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -207,8 +207,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_map
>  #define XENMAPSPACE_gmfn         2 /* GMFN */
>  #define XENMAPSPACE_gmfn_range   3 /* GMFN range, XENMEM_add_to_physmap only. */
>  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another dom,
> -                                    * XENMEM_add_to_physmap_range only.
> -                                    */
> +                                    * XENMEM_add_to_physmap_batch only. */
>  /* ` } */
>  
>  /*
> @@ -238,8 +237,8 @@ typedef struct xen_add_to_physmap xen_ad
>  DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_t);
>  
>  /* A batched version of add_to_physmap. */
> -#define XENMEM_add_to_physmap_range 23
> -struct xen_add_to_physmap_range {
> +#define XENMEM_add_to_physmap_batch 23
> +struct xen_add_to_physmap_batch {
>      /* IN */
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> @@ -260,8 +259,15 @@ struct xen_add_to_physmap_range {
>      /* Per index error code. */
>      XEN_GUEST_HANDLE(int) errs;
>  };
> -typedef struct xen_add_to_physmap_range xen_add_to_physmap_range_t;
> -DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_batch_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_batch_t);
> +
> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
> +#define XENMEM_add_to_physmap_range XENMEM_add_to_physmap_batch
> +#define xen_add_to_physmap_range xen_add_to_physmap_batch
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> +#endif
>  
>  /*
>   * Unmaps the page appearing at a particular GPFN from the specified guest's
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +        if ( xatpb->size > ++done && hypercall_preempt_check() )
>          {
>              rc = start + done;
>              goto out;
> @@ -797,7 +797,7 @@ long do_memory_op(unsigned long cmd, XEN
>          if ( copy_from_guest(&xatp, arg, 1) )
>              return -EFAULT;
>  
> -        /* Foreign mapping is only possible via add_to_physmap_range. */
> +        /* Foreign mapping is only possible via add_to_physmap_batch. */
>          if ( xatp.space == XENMAPSPACE_gmfn_foreign )
>              return -ENOSYS;
>  
> @@ -824,29 +824,29 @@ long do_memory_op(unsigned long cmd, XEN
>          return rc;
>      }
>  
> -    case XENMEM_add_to_physmap_range:
> +    case XENMEM_add_to_physmap_batch:
>      {
> -        struct xen_add_to_physmap_range xatpr;
> +        struct xen_add_to_physmap_batch xatpb;
>          struct domain *d;
>  
> -        BUILD_BUG_ON((typeof(xatpr.size))-1 >
> +        BUILD_BUG_ON((typeof(xatpb.size))-1 >
>                       (UINT_MAX >> MEMOP_EXTENT_SHIFT));
>  
>          /* Check for malicious or buggy input. */
> -        if ( start_extent != (typeof(xatpr.size))start_extent )
> +        if ( start_extent != (typeof(xatpb.size))start_extent )
>              return -EDOM;
>  
> -        if ( copy_from_guest(&xatpr, arg, 1) ||
> -             !guest_handle_okay(xatpr.idxs, xatpr.size) ||
> -             !guest_handle_okay(xatpr.gpfns, xatpr.size) ||
> -             !guest_handle_okay(xatpr.errs, xatpr.size) )
> +        if ( copy_from_guest(&xatpb, arg, 1) ||
> +             !guest_handle_okay(xatpb.idxs, xatpb.size) ||
> +             !guest_handle_okay(xatpb.gpfns, xatpb.size) ||
> +             !guest_handle_okay(xatpb.errs, xatpb.size) )
>              return -EFAULT;
>  
>          /* This mapspace is unsupported for this hypercall. */
> -        if ( xatpr.space == XENMAPSPACE_gmfn_range )
> +        if ( xatpb.space == XENMAPSPACE_gmfn_range )
>              return -EOPNOTSUPP;
>  
> -        d = rcu_lock_domain_by_any_id(xatpr.domid);
> +        d = rcu_lock_domain_by_any_id(xatpb.domid);
>          if ( d == NULL )
>              return -ESRCH;
>  
> @@ -857,7 +857,7 @@ long do_memory_op(unsigned long cmd, XEN
>              return rc;
>          }
>  
> -        rc = xenmem_add_to_physmap_range(d, &xatpr, start_extent);
> +        rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>  
>          rcu_unlock_domain(d);
>  
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -79,7 +79,7 @@
>   *
>   *   In addition the following arch specific sub-ops:
>   *    * XENMEM_add_to_physmap
> - *    * XENMEM_add_to_physmap_range
> + *    * XENMEM_add_to_physmap_batch
>   *
>   *  HYPERVISOR_domctl
>   *   All generic sub-operations, with the exception of:
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -207,8 +207,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_map
>  #define XENMAPSPACE_gmfn         2 /* GMFN */
>  #define XENMAPSPACE_gmfn_range   3 /* GMFN range, XENMEM_add_to_physmap only. */
>  #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another dom,
> -                                    * XENMEM_add_to_physmap_range only.
> -                                    */
> +                                    * XENMEM_add_to_physmap_batch only. */
>  /* ` } */
>  
>  /*
> @@ -238,8 +237,8 @@ typedef struct xen_add_to_physmap xen_ad
>  DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_t);
>  
>  /* A batched version of add_to_physmap. */
> -#define XENMEM_add_to_physmap_range 23
> -struct xen_add_to_physmap_range {
> +#define XENMEM_add_to_physmap_batch 23
> +struct xen_add_to_physmap_batch {
>      /* IN */
>      /* Which domain to change the mapping for. */
>      domid_t domid;
> @@ -260,8 +259,15 @@ struct xen_add_to_physmap_range {
>      /* Per index error code. */
>      XEN_GUEST_HANDLE(int) errs;
>  };
> -typedef struct xen_add_to_physmap_range xen_add_to_physmap_range_t;
> -DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_batch_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_batch_t);
> +
> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
> +#define XENMEM_add_to_physmap_range XENMEM_add_to_physmap_batch
> +#define xen_add_to_physmap_range xen_add_to_physmap_batch
> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> +#endif
>  
>  /*
>   * Unmaps the page appearing at a particular GPFN from the specified guest's
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:25:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VjC-0002kI-O2; Tue, 07 Jan 2014 12:25:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0VjB-0002k8-Pu
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:25:49 +0000
Received: from [85.158.139.211:2365] by server-10.bemta-5.messagelabs.com id
	8E/7E-01405-C42FBC25; Tue, 07 Jan 2014 12:25:48 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389097547!8262737!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7566 invoked from network); 7 Jan 2014 12:25:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:25:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:25:47 +0000
Message-Id: <52CC0057020000780011114D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:25:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>,
	"Jeroen Groenewegen van der Weyden" <groen692@grosc.com>
References: <52C65D06.8080404@grosc.com> <52C6A3E6.3060801@citrix.com>
In-Reply-To: <52C6A3E6.3060801@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] BUG: unable to handle kernel NULL pointer
 dereference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.01.14 at 12:49, David Vrabel <david.vrabel@citrix.com> wrote:
> On 03/01/14 06:47, Jeroen Groenewegen van der Weyden wrote:
>> Hi All,
>> 
>> Yesterday my xen machine stopped working, after a kernel panic.
>> To me it seems the problem started at something called xen_spin_kick.
>> 
>> I attached a screenshot of the console of the xen machine.
>> 
>> Is this mailing list the right one to address this bug?
> 
> You should report this to a Suse specific list as the Suse kernel with
> Xen support is very different to the Xen support in upstream kernels.

Actually issues like this are best reported through bugzilla.novell.com.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:26:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Vk0-0002sn-DJ; Tue, 07 Jan 2014 12:26:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Vjy-0002sN-9Y
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:26:38 +0000
Received: from [85.158.137.68:49641] by server-10.bemta-3.messagelabs.com id
	AB/C8-23989-D72FBC25; Tue, 07 Jan 2014 12:26:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389097595!6522730!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31008 invoked from network); 7 Jan 2014 12:26:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 12:26:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90410512"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 12:26:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	07:26:34 -0500
Message-ID: <1389097593.31766.143.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rob Hoes <rob.hoes@citrix.com>
Date: Tue, 7 Jan 2014 12:26:33 +0000
In-Reply-To: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, dave.scott@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] libxl: ocaml: fix tests makefile and osevent
	callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2013-12-12 at 16:36 +0000, Rob Hoes wrote:
> I'd really appreciate it if these fixes can still be considered for Xen 4.4,
> for the same reasons as the larger libxl/ocaml series that went in yesterday.

I haven't delved into these threads but I think we are waiting for a v2?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:29:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Vmn-0003QC-3N; Tue, 07 Jan 2014 12:29:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Vmm-0003OF-5n
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 12:29:32 +0000
Received: from [85.158.139.211:16298] by server-17.bemta-5.messagelabs.com id
	2B/73-19152-923FBC25; Tue, 07 Jan 2014 12:29:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389097769!8315917!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11347 invoked from network); 7 Jan 2014 12:29:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:29:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:29:28 +0000
Message-Id: <52CC01350200007800111164@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:29:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <osstest-24250-mainreport@xen.org>
In-Reply-To: <osstest-24250-mainreport@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, ian.jackson@eu.citrix.com
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> flight 24250 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> 
> Failures :-/ but no regressions.
> 
> Regressions which are regarded as allowable (not blocking):
>  test-armhf-armhf-xl           7 debian-install               fail   like 24146
>  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 23938
>  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24146

These windows-install failures have been pretty persistent for
the last month or two. I've been looking at the logs from the
hypervisor side a number of times without spotting anything. It'd
be nice to know whether anyone also did so from the tools and
qemu sides... In any event we will need to do something about
this before 4.4 goes out.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:32:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Vp8-0003hF-F7; Tue, 07 Jan 2014 12:31:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Vp7-0003gz-EP
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 12:31:57 +0000
Received: from [85.158.139.211:4908] by server-13.bemta-5.messagelabs.com id
	9D/09-11357-CB3FBC25; Tue, 07 Jan 2014 12:31:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389097916!8316661!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29882 invoked from network); 7 Jan 2014 12:31:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:31:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:31:55 +0000
Message-Id: <52CC01C50200007800111167@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:31:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52B44E7B020000780010F8BD@nat28.tlf.novell.com>
	<52B44F3F020000780010F8E9@nat28.tlf.novell.com>
	<1389097488.31766.142.camel@kazak.uk.xensource.com>
In-Reply-To: <1389097488.31766.142.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] rename XENMEM_add_to_physmap_{range =>
 batch} (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 13:24, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2013-12-20 at 13:07 +0000, Jan Beulich wrote:
>> The use of "range" here wasn't really correct - there are no ranges
>> involved. As the comment in the public header already correctly said,
>> all this is about is batching of XENMEM_add_to_physmap calls (with
>> the addition of having a way to specify a foreign domain for
>> XENMAPSPACE_gmfn_foreign).
>> 
>> Suggested-by: Ian Campbell <Ian.Campbell@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Were you targeting this one at 4.4?

Yes, so that the old form of it doesn't go into widespread use.
Also implied by ...

>> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
>> +#define XENMEM_add_to_physmap_range XENMEM_add_to_physmap_batch
>> +#define xen_add_to_physmap_range xen_add_to_physmap_batch
>> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
>> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
>> +#endif

... the conditional here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:34:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VrV-0003wg-EC; Tue, 07 Jan 2014 12:34:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0VrT-0003wT-Ng
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:34:23 +0000
Received: from [85.158.139.211:39134] by server-14.bemta-5.messagelabs.com id
	13/CD-24200-F44FBC25; Tue, 07 Jan 2014 12:34:23 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389098060!8310927!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29441 invoked from network); 7 Jan 2014 12:34:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 12:34:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90413022"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 12:34:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 07:34:18 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0VrO-0004MP-38;
	Tue, 07 Jan 2014 12:34:18 +0000
Date: Tue, 7 Jan 2014 12:34:18 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140107123417.GG10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CB17D1.2060400@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 06, 2014 at 09:53:37PM +0100, Paolo Bonzini wrote:
> On 06/01/2014 19:00, Andreas Färber wrote:
> > On 06.01.2014 16:39, Anthony Liguori wrote:
> >> We already have accel=xen.  I'm echoing Peter's suggestion of having the
> >> ability to compile out accel=tcg.
> > 
> > Didn't you and Paolo even have patches for that a while ago?
> 
> Yes, but some code shuffling is required in each target to make sure you
> can compile out translate-all.c, cputlb.c, etc.  So my patches only
> worked for x86 at the time.

Hi Paolo, I don't monitor qemu-devel on a daily basis. Do you have a
reference to your patches? Thanks.

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:36:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Vtn-00049d-1m; Tue, 07 Jan 2014 12:36:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0Vtl-00049S-M0
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 12:36:45 +0000
Received: from [85.158.137.68:35557] by server-1.bemta-3.messagelabs.com id
	0B/4E-29598-BD4FBC25; Tue, 07 Jan 2014 12:36:43 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389098200!6525473!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5027 invoked from network); 7 Jan 2014 12:36:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 12:36:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90413675"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 12:36:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 07:36:40 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0Vtg-0004O4-0j;
	Tue, 07 Jan 2014 12:36:40 +0000
Date: Tue, 7 Jan 2014 12:35:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389096089.31766.140.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401071235410.8667@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1312171610140.8667@kaball.uk.xensource.com>
	<1387461038.9925.77.camel@kazak.uk.xensource.com>
	<1389096089.31766.140.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: pass a struct pending_irq* as
 parameter to gic helper functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014, Ian Campbell wrote:
> On Thu, 2013-12-19 at 13:50 +0000, Ian Campbell wrote:
> > On Tue, 2013-12-17 at 16:16 +0000, Stefano Stabellini wrote:
> > > gic_add_to_lr_pending and gic_set_lr should take a struct pending_irq*
> > > as parameter instead of taking the virtual_irq number and the priority
> > > separately and doing yet another irq_to_pending lookup.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > What do you think about this one for 4.4? It whiffs a bit of a cleanup
> > rather than a bug fix (unless e.g. there is some subtle incorrectness in
> > the priority which was getting passed around before, rather than just
> > the potential for it)
> 
> Unless there is an actual bug fixed by this I think I'll leave it for
> 4.5.

That's fine.

> > 
> > > @@ -672,12 +670,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> > >          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> > >          if (i < nr_lrs) {
> > >              set_bit(i, &this_cpu(lr_mask));
> > > -            gic_set_lr(i, virtual_irq, state, priority);
> > > +            gic_set_lr(i, irq_to_pending(v, virtual_irq), state);
> > >              goto out;
> > >          }
> > >      }
> > >  
> > > -    gic_add_to_lr_pending(v, virtual_irq, priority);
> > > +    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
> > 
> > I think you could take this one step further and make a similar change
> > to gic_set_guest_irq, the only caller has the pending_irq * in its hand
> > already.
> > 
> > Ian.
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:41:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Vxs-0004iL-QM; Tue, 07 Jan 2014 12:41:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Vxr-0004iG-N3
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:40:59 +0000
Received: from [85.158.143.35:4473] by server-2.bemta-4.messagelabs.com id
	0D/40-11386-BD5FBC25; Tue, 07 Jan 2014 12:40:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389098458!7460990!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17119 invoked from network); 7 Jan 2014 12:40:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:40:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 12:40:57 +0000
Message-Id: <52CC03E60200007800111191@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 12:40:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Igor Kozhukhov" <ikozhukhov@gmail.com>
References: <167FDCF3-92DC-4CF9-97F8-A636122FB8B2@gmail.com>
	<52C704CE.5080003@citrix.com>
	<BB7137D2-CDC4-4C42-B227-30BD2E0604BA@gmail.com>
In-Reply-To: <BB7137D2-CDC4-4C42-B227-30BD2E0604BA@gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] conflict in public header for values definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.01.14 at 20:49, Igor Kozhukhov <ikozhukhov@gmail.com> wrote:
> it is impossible to do this outside the public headers, because trace.h is
> included by other public headers - for example: xen/public/hvm/hvm_op.h

Just create suitable wrapper headers for any such instance.

> it would be better to have the changes committed in the public headers, so
> future ports don't need additional changes in the sources.
> 
> is it possible to add '#ifdef __sun' to public headers ?

Not if there's a better way. Yes, I agree that the lack of xen_
prefixes in various places is unfortunate, but I don't see why
you shouldn't be able to cope if everyone else can.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:42:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0VzC-0004oo-FH; Tue, 07 Jan 2014 12:42:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W0VzB-0004oi-Ha
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 12:42:21 +0000
Received: from [85.158.139.211:2465] by server-2.bemta-5.messagelabs.com id
	E3/EF-29392-C26FBC25; Tue, 07 Jan 2014 12:42:20 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389098539!8317523!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23404 invoked from network); 7 Jan 2014 12:42:19 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 12:42:19 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id F26AF2202F2;
	Tue,  7 Jan 2014 12:42:17 +0000 (GMT)
MIME-Version: 1.0
Date: Tue, 07 Jan 2014 12:42:17 +0000
From: Gordan Bobic <gordan@bobich.net>
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52CBFDDD020000780011112C@nat28.tlf.novell.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
	<52CBFDDD020000780011112C@nat28.tlf.novell.com>
Message-ID: <5dcec6d652a27688050262f949e9dc9e@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Feng Wu <feng.wu@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-07 12:15, Jan Beulich wrote:
>>>> On 07.01.14 at 12:35, Gordan Bobic <gordan@bobich.net> wrote:
>> On 2014-01-07 11:26, Wu, Feng wrote:
>>>> -----Original Message-----
>>>> From: xen-devel-bounces@lists.xen.org
>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>>> Sent: Tuesday, January 07, 2014 6:44 PM
>>>> To: Andrew Cooper
>>>> Cc: xen-devel@lists.xen.org
>>>> Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re:
>>>> iommuu/vt-d
>>>> issues with LSI MegaSAS (PERC5i))
>>>> 
>>>> On 2014-01-07 10:38, Andrew Cooper wrote:
>>>> > On 07/01/14 10:35, Gordan Bobic wrote:
>>>> >> On 2014-01-07 03:17, Zhang, Yang Z wrote:
>>>> >>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
>>>> >>>>> Which would look like this:
>>>> >>>>>
>>>> >>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
>>>> >>>>> on the card
>>>> >>>>>           \--------------> IEEE-1394a
>>>> >>>>>
>>>> >>>>> I am actually wondering if this 07:00.0 device is the one that
>>>> >>>>> reports itself as 08:00.0 (which I think is what you were alluding
>>>> >>>>> to, Jan)
>>>> >>>>>
>>>> >>>>
>>>> >>>> And to double check that theory I decided to pass in the IEEE-1394a
>>>> >>>> to a guest:
>>>> >>>>
>>>> >>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
>>>> >>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
>>>> >>>>
>>>> >>>>
>>>> >>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow
>>>> >>>> (XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
>>>> >>>> (XEN) [VT-D]iommu.c:865: DMAR:[DMA Read] Request device
>>>> >>>> [0000:08:00.0] fault addr 370f1000, iommu reg = ffff82c3ffd53000
>>>> >>>> (XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
>>>> >>>> (XEN) print_vtd_entries: iommu ffff83083d4939b0 dev 0000:08:00.0
>>>> >>>> gmfn 370f1
>>>> >>>> (XEN) root_entry = ffff83083d47f000
>>>> >>>> (XEN)     root_entry[8] = 72569b001
>>>> >>>> (XEN) context = ffff83072569b000
>>>> >>>> (XEN)     context[0] = 0_0
>>>> >>>> (XEN) ctxt_entry[0] not present
>>>> >>>>
>>>> >>>> So, capture card OK - Likely the Tundra bridge has an issue:
>>>> >>>>
>>>> >>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
>>>> >>>> (prog-if 01 [Subtractive decode])
>>>> >>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
>>>> >>>>         ParErr- Stepping- SERR- FastB2B- DisINTx-
>>>> >>>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast
>>>> >>>>         >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
>>>> >>>>         Latency: 0
>>>> >>>>         Bus: primary=07, secondary=08, subordinate=08, sec-latency=32
>>>> >>>>         Memory behind bridge: f0600000-f06fffff
>>>> >>>>         Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=medium
>>>> >>>>         TAbort- <TAbort- <MAbort+ <SERR- <PERR-
>>>> >>>>         BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
>>>> >>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
>>>> >>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
>>>> >>>>         Device 0805
>>>> >>>>         Capabilities: [a0] Power Management version 3
>>>> >>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
>>>> >>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+)
>>>> >>>>                 Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
>>>> >>>> or there is some unknown bridge in the motherboard.
>>>> >>>
>>>> >>> According to your description above, upstream Linux should also have
>>>> >>> the same problem. Did you see it with upstream Linux?
>>>> >>
>>>> >> The problem I was seeing with LSI cards (phantom device doing DMA)
>>>> >> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
>>>> >> bare metal Linux, the same problem occurs as with Xen.
>>>> >>
>>>> >>> There may be some buggy devices that generate DMA requests with an
>>>> >>> internal BDF but don't expose it (unlike phantom devices). For those
>>>> >>> devices, I think we need to set up the VT-d page table manually.
>>>> >>
>>>> >> I think what is needed is a pci-phantom style override that tells the
>>>> >> hypervisor to tell the IOMMU to allow DMA traffic from a specific
>>>> >> invisible device ID.
>>>> >>
>>>> >> Gordan
>>>> >
>>>> > There is.  See "pci-phantom" in
>>>> > http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
>>>> 
>>>> I thought this was only applicable to phantom _functions_ (number
>>>> after the dot) rather than whole phantom _devices_. Is that not the case?
>>> 
>>> I think that's right. I went through the related code for the pci
>>> phantom device just now, and found that the command line 'pci-phantom'
>>> information is stored in the variable 'phantom_devs[8]' of type
>>> 'struct phantom_dev {}'. This variable is used in the function
>>> alloc_pdev() as follows:
>>> 
>>> 
>>>                 for ( i = 0; i < nr_phantom_devs; ++i )
>>>                     if ( phantom_devs[i].seg == pseg->nr &&
>>>                          phantom_devs[i].bus == bus &&
>>>                          phantom_devs[i].slot == PCI_SLOT(devfn) &&
>>>                          phantom_devs[i].stride > PCI_FUNC(devfn) )
>>>                     {
>>>                         pdev->phantom_stride = phantom_devs[i].stride;
>>>                         break;
>>>                     }
>>> 
>>> So from the code, we can see this command line only works for phantom
>>> _functions_, not for whole phantom _devices_.
>> 
>> What would it take to make it work for a whole phantom device?
> 
> First and foremost a definition of what a phantom device is and
> how one would behave. Once again - phantom functions are part
> of the PCIe specification, so those don't require a definition.

Konrad's patch from a while back seemed to do the required thing to
allow an otherwise invisible/undetected device to do DMA transfers
without freaking out the IOMMU that doesn't know about it.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 12:56:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 12:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WCe-0005Pd-8x; Tue, 07 Jan 2014 12:56:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tomi.valkeinen@ti.com>) id 1W0W6T-0005OH-Pj
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 12:49:53 +0000
Received: from [85.158.139.211:33941] by server-16.bemta-5.messagelabs.com id
	73/A8-11843-0F7FBC25; Tue, 07 Jan 2014 12:49:52 +0000
X-Env-Sender: tomi.valkeinen@ti.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389098990!8270236!1
X-Originating-IP: [192.94.94.40]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjk0Ljk0LjQwID0+IDE3MDg0MQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16238 invoked from network); 7 Jan 2014 12:49:51 -0000
Received: from arroyo.ext.ti.com (HELO arroyo.ext.ti.com) (192.94.94.40)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 12:49:51 -0000
Received: from dflxv15.itg.ti.com ([128.247.5.124])
	by arroyo.ext.ti.com (8.13.7/8.13.7) with ESMTP id s07CnW46011336;
	Tue, 7 Jan 2014 06:49:32 -0600
Received: from DFLE72.ent.ti.com (dfle72.ent.ti.com [128.247.5.109])
	by dflxv15.itg.ti.com (8.14.3/8.13.8) with ESMTP id s07CnWcC000912;
	Tue, 7 Jan 2014 06:49:32 -0600
Received: from dflp32.itg.ti.com (10.64.6.15) by DFLE72.ent.ti.com
	(128.247.5.109) with Microsoft SMTP Server id 14.2.342.3;
	Tue, 7 Jan 2014 06:49:31 -0600
Received: from [192.168.2.6] (ileax41-snat.itg.ti.com [10.172.224.153])	by
	dflp32.itg.ti.com (8.14.3/8.13.8) with ESMTP id s07CnT4L031506;
	Tue, 7 Jan 2014 06:49:30 -0600
Message-ID: <52CBF7D9.6050602@ti.com>
Date: Tue, 7 Jan 2014 14:49:29 +0200
From: Tomi Valkeinen <tomi.valkeinen@ti.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>
References: <1388775730-2984-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140103203422.GA2668@phenom.dumpdata.com>
In-Reply-To: <20140103203422.GA2668@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-Mailman-Approved-At: Tue, 07 Jan 2014 12:56:14 +0000
Cc: boris.ostrovsky@oracle.com, xen-devel@lists.xensource.com,
	plagnioj@jcrosoft.com, linux-kernel@vger.kernel.org,
	linux-fbdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v2] allow xenfb initialization for hvm guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2411136448840932688=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2411136448840932688==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature";
	boundary="brmrbF089gFfMnvoao7WWtqq70TPj1JG7"

--brmrbF089gFfMnvoao7WWtqq70TPj1JG7
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

On 2014-01-03 22:34, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 03, 2014 at 07:02:09PM +0000, Stefano Stabellini wrote:
>
> The title needs 'xen/fb' prefixed but that is easy enough.
>
>> There is no reason why an HVM guest shouldn't be allowed to use xenfb.
>> As a matter of fact ARM guests, HVM from Linux POV, can use xenfb.
>> Given that no Xen toolstacks configure a xenfb backend for x86 HVM
>> guests, they are not affected.
>>
>> Please note that at this time QEMU needs a few outstanding fixes to
>> provide xenfb on ARM:
>>
>> http://marc.info/?l=qemu-devel&m=138739419700837&w=2
>
> Cool. Is the video maintainer OK with the Xen maintainers stashing it
> in the Xen tree for Linus?

Yep, fine for me.

 Tomi



--brmrbF089gFfMnvoao7WWtqq70TPj1JG7
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBAgAGBQJSy/fZAAoJEPo9qoy8lh71w08P/RlAHiaj9Ece7pg9DMw16o7+
+KRe4v+J9izF2Vj41y70qR2qvy3FM3QuTnhg3caN6xyZBHy2QQwy9Ybw5oolU5TQ
9m8eBCcAsp6+IzE/OuDaRCRupI0u8gcmhxCcIM3+ZSUhNM1gFzYc0be9GVyyj796
lLFBQ6vijkfgZgbYH7QtdwNrr6b3nO7k+LIpB0Yh6D0aY957Rq1mUsFHfm5FMd76
esbnSNnRCMHpOJkWWAlvd4kKJN7eZVzuXq/0zX14s68kqiom5MrxAEDyvL+usofj
NqMn47GH7cgyaGrJBEJ++wf1qdY0wkhi30+DuGeDmGCfVALk6Hk6DvcX+va+pGKH
p/M9mMeFRYZAspfTJjbN7856b2cBP1HFJrEAQZCQXfru9vfYH3ThAj9RhDAp6hr3
nZm4x7owiHuiEX5ohpatWueTxfKhfsJvZ+YiRPa+wTksvruMwofj7HkL1mGPfgUS
4A+xuKtylWZ/98T9uUAQQBMk8StllYTlyzTTHUXPInwQM+ObfOqcv/S3BZ9h4uam
TZhqsO8WMMdRuE4kv3H2ydcLMBft+KEXGCqx1miNf0KumJr5nfKuIxk44lDH5y8T
3JHmc5X/I/lDzPH4ZF5nFvpfWhHHU01pShpTKrlIKOuUbXj0EcNlJYfLZsKbVbT9
RgX/XB1nCsnTbXv74bzy
=EOHX
-----END PGP SIGNATURE-----

--brmrbF089gFfMnvoao7WWtqq70TPj1JG7--


--===============2411136448840932688==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2411136448840932688==--


From xen-devel-bounces@lists.xen.org Tue Jan 07 13:05:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:05:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WLk-000663-GH; Tue, 07 Jan 2014 13:05:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0WLi-00065t-CF
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:05:38 +0000
Received: from [85.158.137.68:35013] by server-17.bemta-3.messagelabs.com id
	93/08-15965-1ABFBC25; Tue, 07 Jan 2014 13:05:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389099934!7682373!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 416 invoked from network); 7 Jan 2014 13:05:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:05:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90420991"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:05:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:05:30 -0500
Message-ID: <1389099929.31766.153.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Jan 2014 13:05:29 +0000
In-Reply-To: <52CC01C50200007800111167@nat28.tlf.novell.com>
References: <52B44E7B020000780010F8BD@nat28.tlf.novell.com>
	<52B44F3F020000780010F8E9@nat28.tlf.novell.com>
	<1389097488.31766.142.camel@kazak.uk.xensource.com>
	<52CC01C50200007800111167@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] rename XENMEM_add_to_physmap_{range =>
	batch} (v2)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 12:31 +0000, Jan Beulich wrote:
> >>> On 07.01.14 at 13:24, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2013-12-20 at 13:07 +0000, Jan Beulich wrote:
> >> The use of "range" here wasn't really correct - there are no ranges
> >> involved. As the comment in the public header already correctly said,
> >> all this is about is batching of XENMEM_add_to_physmap calls (with
> >> the addition of having a way to specify a foreign domain for
> >> XENMAPSPACE_gmfn_foreign).
> >> 
> >> Suggested-by: Ian Campbell <Ian.Campbell@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Were you targeting this one at 4.4?
> 
> Yes, so that the old form of it doesn't go into widespread use.
> Also implied by ...
> 
> >> +#if __XEN_INTERFACE_VERSION__ < 0x00040400
> >> +#define XENMEM_add_to_physmap_range XENMEM_add_to_physmap_batch
> >> +#define xen_add_to_physmap_range xen_add_to_physmap_batch
> >> +typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
> >> +DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
> >> +#endif
> 
> ... the conditional here.

Yes, that's what I inferred.

So if you need a release Ack then I think you can have one from me in
George's absence. The risk is basically build breakage which should be
trivially caught and the benefit is not exposing a broken API in the new
release.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:11:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WR9-0006io-NT; Tue, 07 Jan 2014 13:11:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0WR8-0006ii-Q0
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:11:15 +0000
Received: from [85.158.139.211:49086] by server-5.bemta-5.messagelabs.com id
	63/26-14928-2FCFBC25; Tue, 07 Jan 2014 13:11:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389100271!8324412!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8280 invoked from network); 7 Jan 2014 13:11:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:11:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90423324"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:11:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:11:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0WR4-0004rY-Og;
	Tue, 07 Jan 2014 13:11:10 +0000
Date: Tue, 7 Jan 2014 13:10:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140106182541.GE10654@zion.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401071306111.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAFEAcA8vE1inUFk3n1TX9d1GqwD60S0gG66_ZWj=OnQqXEhajA@mail.gmail.com>
	<20140106151251.GB10654@zion.uk.xensource.com>
	<52CAF1F7.80304@suse.de>
	<20140106182541.GE10654@zion.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1374307092-1389100219=:8667"
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	=?UTF-8?Q?Andreas_F=C3=A4rber?= <afaerber@suse.de>,
	QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1374307092-1389100219=:8667
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Mon, 6 Jan 2014, Wei Liu wrote:
> On Mon, Jan 06, 2014 at 07:12:07PM +0100, Andreas Färber wrote:
> > On 06.01.2014 16:12, Wei Liu wrote:
> > > On Mon, Jan 06, 2014 at 01:30:20PM +0000, Peter Maydell wrote:
> > >> On 6 January 2014 12:54, Wei Liu <wei.liu2@citrix.com> wrote:
> > >>> In fact I've already hacked a prototype during Christmas. What I've
> > >>> done so far:
> > >>>
> > >>> 1. create target-null which only has some stubs to the CPU emulation
> > >>>    framework.
> > >>>
> > >>> 2. add a few lines to configure / Makefiles*, create
> > >>>    default-configs/null-softmmu
> > >>
> > >> I think it would be better to add support to allow you to
> > >> configure with --disable-tcg. This would match the existing
> > >> --disable/--enable switches for KVM and Xen, and then you
> > >> could configure --disable-kvm --disable-tcg --enable-xen
> > >> and get a qemu-system-i386 or qemu-system-arm with only
> > >> the Xen support and none of the TCG emulation code.
> > >>
> >
> > > In this case the architecture-specific code in target-* is still
> > > included, which might not help reduce the size much.
> >
> > Define target-specific code in target-*? Most of that is TCG-specific
> > and wouldn't be compiled in in that case. The KVM-specific bits already
> > don't get compiled in with --disable-kvm today, save for a few stubs.
> >
>
> Probably I used the wrong terminology. I meant, say,
> target-i386/translate.c, exec.c etc., which won't be necessary for Xen. I
> guess that's what you mean by TCG-specific. I see the possibility of
> creating some stubs for them, if that's what you mean.
>
> > Adding yet another separate binary with no added functional value
> > doesn't strike me as the most helpful idea for the community, compared
> > to configure-optimizing the binaries built today.
> >
> > Who would use the stripped-down binaries anyway? Just Citrix? Because
> > SUSE is headed for sharing QEMU packages between Xen and KVM, we
> > couldn't enable such Xen-only-optimized binaries.
> >
>
> No, I don't speak for Citrix. I work for the open-source Xen project; I
> just happen to be hired by Citrix. The general idea is to have an option
> for the user to create a smaller binary, without unnecessary code
> compiled or linked in. How vendors make their choice is out of my
> reach. :-)

Right.

Lots of people trying Xen on ARM today come from the embedded world:
routers, set-top boxes, in-vehicle entertainment, etc. In these cases one
wouldn't want to run a full-blown distro or a generic QEMU binary. The
smaller the better.

I am sure that this work would be useful even on bigger systems, as
somebody already pointed out in this thread.
--1342847746-1374307092-1389100219=:8667
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1374307092-1389100219=:8667--


From xen-devel-bounces@lists.xen.org Tue Jan 07 13:17:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WXT-0006wM-OL; Tue, 07 Jan 2014 13:17:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <weiyj.lk@gmail.com>) id 1W0WR1-0006iK-7P
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:11:07 +0000
Received: from [85.158.139.211:42413] by server-9.bemta-5.messagelabs.com id
	9B/E5-15098-AECFBC25; Tue, 07 Jan 2014 13:11:06 +0000
X-Env-Sender: weiyj.lk@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389100265!8317647!1
X-Originating-IP: [209.85.214.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2965 invoked from network); 7 Jan 2014 13:11:06 -0000
Received: from mail-bk0-f50.google.com (HELO mail-bk0-f50.google.com)
	(209.85.214.50)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:11:06 -0000
Received: by mail-bk0-f50.google.com with SMTP id e11so219670bkh.23
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 05:11:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=qxRdPkMfGgzi9oSQMadZnQJX7n/L91/cFQ2aJie1UDg=;
	b=yLFn/yfRRBAvFxRPADO5UKR0LQCMvZsnPOvH8VJyWoGsN8RIWZ945GnMuDCETnC7Vd
	mjAzdJhtrBkpqdB13wd9BQMB9VnOIX3yjWJfgu0O6y894kNL4JY2nmxZz485tgD8dBN0
	3wMPm5vfJuoDS7ZYeqhRH3ZslTQ63nxreqGbUWRjHlQObXs8FneW2Ykkq3seJR4y9N+l
	tgTn9Ab6DCG98tgNa3IxzT6AEEKpILcTi/4eXCYFd+Va5PkfuvsJasD6Ur4FRCYQhWrd
	mQmAIZWxcNsOTwmv69PL6bIHmOhxaCafq8jS6VAZflXRFuhM+cxe4tOh0QolOX/2ue95
	oacw==
MIME-Version: 1.0
X-Received: by 10.204.200.201 with SMTP id ex9mr2222345bkb.75.1389100265460;
	Tue, 07 Jan 2014 05:11:05 -0800 (PST)
Received: by 10.204.74.130 with HTTP; Tue, 7 Jan 2014 05:11:05 -0800 (PST)
Date: Tue, 7 Jan 2014 21:11:05 +0800
Message-ID: <CAPgLHd886_=FOPw5MFRRzdpfRU63nW=eR0sVF+vAzw7TnHHByA@mail.gmail.com>
From: Wei Yongjun <weiyj.lk@gmail.com>
To: konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, 
	david.vrabel@citrix.com
X-Mailman-Approved-At: Tue, 07 Jan 2014 13:17:46 +0000
Cc: xen-devel@lists.xenproject.org, yongjun_wei@trendmicro.com.cn,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH -next] xen-platform: fix error return code in
	platform_pci_init()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Return a negative error code from the error handling
case instead of 0; otherwise the error condition can't be
reflected in the return value.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
---
 drivers/xen/platform-pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index f1947ac..a1361c3 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -156,7 +156,8 @@ static int platform_pci_init(struct pci_dev *pdev,
 
 	max_nr_gframes = gnttab_max_grant_frames();
 	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
-	if (gnttab_setup_auto_xlat_frames(grant_frames))
+	ret = gnttab_setup_auto_xlat_frames(grant_frames);
+	if (ret)
 		goto out;
 	ret = gnttab_init();
 	if (ret)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:17:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WXU-0006wT-3H; Tue, 07 Jan 2014 13:17:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <weiyj.lk@gmail.com>) id 1W0WRL-0006jm-Ri
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:11:28 +0000
Received: from [193.109.254.147:59501] by server-9.bemta-14.messagelabs.com id
	A6/AA-13957-FFCFBC25; Tue, 07 Jan 2014 13:11:27 +0000
X-Env-Sender: weiyj.lk@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389100285!9303682!1
X-Originating-IP: [209.85.214.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20523 invoked from network); 7 Jan 2014 13:11:26 -0000
Received: from mail-bk0-f49.google.com (HELO mail-bk0-f49.google.com)
	(209.85.214.49)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:11:26 -0000
Received: by mail-bk0-f49.google.com with SMTP id my13so221057bkb.36
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 05:11:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=7bbM9N9H/oqBWjYefyh7LjmQeappm2TKVCW5VVchieo=;
	b=Awf0U3DyQUNjqdTw23xsmgsk5W1p026ISojIiIZWQ7JtYTXOvQhkMq6OmevOpTGlz1
	gRlFgU5rGaF9PkUg8A70jWV+1aeQKYd9YG22VzNBsSMva31KJ7lg42ZM3zeXJm80G7y6
	hqHVxOOZqTGIBImDeGbMTNDBHyCTutpEKGJrt1LJpke88KkIkqiQIvd17On3MnNU416l
	9Ak6ozxyvO36WcoodLE3luuLPtQy30e9IkEvM4aBvqjZ8iF+a/EY0N24JdqygMbF4OCo
	fviJRZf76bhH03PaSPcWv5W14C1drWBYahk56rlW327C8n7WRQYRepqJ7AQ+pjv5WYNM
	bZTw==
MIME-Version: 1.0
X-Received: by 10.204.98.205 with SMTP id r13mr164167bkn.149.1389100285690;
	Tue, 07 Jan 2014 05:11:25 -0800 (PST)
Received: by 10.204.74.130 with HTTP; Tue, 7 Jan 2014 05:11:25 -0800 (PST)
Date: Tue, 7 Jan 2014 21:11:25 +0800
Message-ID: <CAPgLHd_AO6mEwaemSobCMwCg2J=xmH+-FF0Ek+PS8ZYdHJStXw@mail.gmail.com>
From: Wei Yongjun <weiyj.lk@gmail.com>
To: konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, 
	david.vrabel@citrix.com
X-Mailman-Approved-At: Tue, 07 Jan 2014 13:17:46 +0000
Cc: xen-devel@lists.xenproject.org, yongjun_wei@trendmicro.com.cn,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH -next] xen/evtchn_fifo: fix error return code in
	evtchn_fifo_setup()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Return -ENOMEM from the error handling case instead of 0
(the initial value is overwritten by the
HYPERVISOR_event_channel_op call); otherwise the error
condition can't be reflected in the return value.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
---
 drivers/xen/events/events_fifo.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index e2bf957..89e4893 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -109,7 +109,7 @@ static int evtchn_fifo_setup(struct irq_info *info)
 {
 	unsigned port = info->evtchn;
 	unsigned new_array_pages;
-	int ret = -ENOMEM;
+	int ret;
 
 	new_array_pages = port / EVENT_WORDS_PER_PAGE + 1;
 
@@ -124,8 +124,10 @@ static int evtchn_fifo_setup(struct irq_info *info)
 		array_page = event_array[event_array_pages];
 		if (!array_page) {
 			array_page = (void *)__get_free_page(GFP_KERNEL);
-			if (array_page == NULL)
+			if (array_page == NULL) {
+				ret = -ENOMEM;
 				goto error;
+			}
 			event_array[event_array_pages] = array_page;
 		}
 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:17:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WXT-0006wM-OL; Tue, 07 Jan 2014 13:17:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <weiyj.lk@gmail.com>) id 1W0WR1-0006iK-7P
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:11:07 +0000
Received: from [85.158.139.211:42413] by server-9.bemta-5.messagelabs.com id
	9B/E5-15098-AECFBC25; Tue, 07 Jan 2014 13:11:06 +0000
X-Env-Sender: weiyj.lk@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389100265!8317647!1
X-Originating-IP: [209.85.214.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2965 invoked from network); 7 Jan 2014 13:11:06 -0000
Received: from mail-bk0-f50.google.com (HELO mail-bk0-f50.google.com)
	(209.85.214.50)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:11:06 -0000
Received: by mail-bk0-f50.google.com with SMTP id e11so219670bkh.23
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 05:11:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=qxRdPkMfGgzi9oSQMadZnQJX7n/L91/cFQ2aJie1UDg=;
	b=yLFn/yfRRBAvFxRPADO5UKR0LQCMvZsnPOvH8VJyWoGsN8RIWZ945GnMuDCETnC7Vd
	mjAzdJhtrBkpqdB13wd9BQMB9VnOIX3yjWJfgu0O6y894kNL4JY2nmxZz485tgD8dBN0
	3wMPm5vfJuoDS7ZYeqhRH3ZslTQ63nxreqGbUWRjHlQObXs8FneW2Ykkq3seJR4y9N+l
	tgTn9Ab6DCG98tgNa3IxzT6AEEKpILcTi/4eXCYFd+Va5PkfuvsJasD6Ur4FRCYQhWrd
	mQmAIZWxcNsOTwmv69PL6bIHmOhxaCafq8jS6VAZflXRFuhM+cxe4tOh0QolOX/2ue95
	oacw==
MIME-Version: 1.0
X-Received: by 10.204.200.201 with SMTP id ex9mr2222345bkb.75.1389100265460;
	Tue, 07 Jan 2014 05:11:05 -0800 (PST)
Received: by 10.204.74.130 with HTTP; Tue, 7 Jan 2014 05:11:05 -0800 (PST)
Date: Tue, 7 Jan 2014 21:11:05 +0800
Message-ID: <CAPgLHd886_=FOPw5MFRRzdpfRU63nW=eR0sVF+vAzw7TnHHByA@mail.gmail.com>
From: Wei Yongjun <weiyj.lk@gmail.com>
To: konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, 
	david.vrabel@citrix.com
X-Mailman-Approved-At: Tue, 07 Jan 2014 13:17:46 +0000
Cc: xen-devel@lists.xenproject.org, yongjun_wei@trendmicro.com.cn,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH -next] xen-platform: fix error return code in
	platform_pci_init()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Return a negative error code from the error handling case instead
of 0; otherwise the error condition cannot be reflected in the
return value.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
---
 drivers/xen/platform-pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index f1947ac..a1361c3 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -156,7 +156,8 @@ static int platform_pci_init(struct pci_dev *pdev,
 
 	max_nr_gframes = gnttab_max_grant_frames();
 	grant_frames = alloc_xen_mmio(PAGE_SIZE * max_nr_gframes);
-	if (gnttab_setup_auto_xlat_frames(grant_frames))
+	ret = gnttab_setup_auto_xlat_frames(grant_frames);
+	if (ret)
 		goto out;
 	ret = gnttab_init();
 	if (ret)
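The bug class fixed here is a callee's status being tested but never stored, so the enclosing function falls through to `out` and still returns its initial 0. A minimal standalone sketch of the buggy and fixed shapes (the `setup_frames` helper is a hypothetical stand-in, not the real `gnttab_setup_auto_xlat_frames` API):

```c
#include <errno.h>

/* Hypothetical stand-in for gnttab_setup_auto_xlat_frames(). */
static int setup_frames(int should_fail)
{
	return should_fail ? -ENOMEM : 0;
}

/* Buggy shape: the callee's status is tested but thrown away,
 * so after "goto out" the function still returns the initial 0. */
static int init_buggy(int should_fail)
{
	int ret = 0;

	if (setup_frames(should_fail))
		goto out;	/* error detected, but ret is still 0 */
out:
	return ret;
}

/* Fixed shape (what the patch does): capture the status in ret
 * before branching, so the caller sees the real error code. */
static int init_fixed(int should_fail)
{
	int ret;

	ret = setup_frames(should_fail);
	if (ret)
		goto out;
out:
	return ret;
}
```

On the failure path `init_buggy()` returns 0 while `init_fixed()` propagates -ENOMEM, which is exactly the difference the one-line patch makes.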


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:17:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WXU-0006wT-3H; Tue, 07 Jan 2014 13:17:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <weiyj.lk@gmail.com>) id 1W0WRL-0006jm-Ri
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:11:28 +0000
Received: from [193.109.254.147:59501] by server-9.bemta-14.messagelabs.com id
	A6/AA-13957-FFCFBC25; Tue, 07 Jan 2014 13:11:27 +0000
X-Env-Sender: weiyj.lk@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389100285!9303682!1
X-Originating-IP: [209.85.214.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20523 invoked from network); 7 Jan 2014 13:11:26 -0000
Received: from mail-bk0-f49.google.com (HELO mail-bk0-f49.google.com)
	(209.85.214.49)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:11:26 -0000
Received: by mail-bk0-f49.google.com with SMTP id my13so221057bkb.36
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 05:11:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=7bbM9N9H/oqBWjYefyh7LjmQeappm2TKVCW5VVchieo=;
	b=Awf0U3DyQUNjqdTw23xsmgsk5W1p026ISojIiIZWQ7JtYTXOvQhkMq6OmevOpTGlz1
	gRlFgU5rGaF9PkUg8A70jWV+1aeQKYd9YG22VzNBsSMva31KJ7lg42ZM3zeXJm80G7y6
	hqHVxOOZqTGIBImDeGbMTNDBHyCTutpEKGJrt1LJpke88KkIkqiQIvd17On3MnNU416l
	9Ak6ozxyvO36WcoodLE3luuLPtQy30e9IkEvM4aBvqjZ8iF+a/EY0N24JdqygMbF4OCo
	fviJRZf76bhH03PaSPcWv5W14C1drWBYahk56rlW327C8n7WRQYRepqJ7AQ+pjv5WYNM
	bZTw==
MIME-Version: 1.0
X-Received: by 10.204.98.205 with SMTP id r13mr164167bkn.149.1389100285690;
	Tue, 07 Jan 2014 05:11:25 -0800 (PST)
Received: by 10.204.74.130 with HTTP; Tue, 7 Jan 2014 05:11:25 -0800 (PST)
Date: Tue, 7 Jan 2014 21:11:25 +0800
Message-ID: <CAPgLHd_AO6mEwaemSobCMwCg2J=xmH+-FF0Ek+PS8ZYdHJStXw@mail.gmail.com>
From: Wei Yongjun <weiyj.lk@gmail.com>
To: konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, 
	david.vrabel@citrix.com
X-Mailman-Approved-At: Tue, 07 Jan 2014 13:17:46 +0000
Cc: xen-devel@lists.xenproject.org, yongjun_wei@trendmicro.com.cn,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH -next] xen/evtchn_fifo: fix error return code in
	evtchn_fifo_setup()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Return -ENOMEM from the error handling case instead of 0 (ret is
overwritten with 0 by the successful HYPERVISOR_event_channel_op
call); otherwise the error condition cannot be reflected in the
return value.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
---
 drivers/xen/events/events_fifo.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index e2bf957..89e4893 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -109,7 +109,7 @@ static int evtchn_fifo_setup(struct irq_info *info)
 {
 	unsigned port = info->evtchn;
 	unsigned new_array_pages;
-	int ret = -ENOMEM;
+	int ret;
 
 	new_array_pages = port / EVENT_WORDS_PER_PAGE + 1;
 
@@ -124,8 +124,10 @@ static int evtchn_fifo_setup(struct irq_info *info)
 		array_page = event_array[event_array_pages];
 		if (!array_page) {
 			array_page = (void *)__get_free_page(GFP_KERNEL);
-			if (array_page == NULL)
+			if (array_page == NULL) {
+				ret = -ENOMEM;
 				goto error;
+			}
 			event_array[event_array_pages] = array_page;
 		}
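This second patch addresses the converse mistake: pre-loading `ret = -ENOMEM` at the top does not help once an intermediate successful call reassigns `ret` to 0 before the allocation-failure path runs. A minimal sketch under that assumption (`event_channel_op` is a hypothetical stand-in for the hypercall, not the real API):

```c
#include <errno.h>

/* Hypothetical stand-in for HYPERVISOR_event_channel_op(): succeeds. */
static int event_channel_op(void)
{
	return 0;
}

/* Buggy shape: ret starts as -ENOMEM, but a successful intermediate
 * call overwrites it with 0 before the failure path is taken. */
static int setup_buggy(int alloc_fails)
{
	int ret = -ENOMEM;

	ret = event_channel_op();	/* clobbers the pre-set error code */
	if (alloc_fails)
		goto error;		/* ret is now 0, not -ENOMEM */
	return 0;
error:
	return ret;
}

/* Fixed shape (what the patch does): assign -ENOMEM at the point
 * where the allocation actually fails. */
static int setup_fixed(int alloc_fails)
{
	int ret;

	ret = event_channel_op();
	if (ret)
		return ret;
	if (alloc_fails) {
		ret = -ENOMEM;
		goto error;
	}
	return 0;
error:
	return ret;
}
```

Setting the error code at the failure site, rather than as an initializer, keeps it from being clobbered by earlier successful calls.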
 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:23:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wd1-0007am-UK; Tue, 07 Jan 2014 13:23:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Wd0-0007ah-HY
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:23:30 +0000
Received: from [85.158.139.211:54914] by server-11.bemta-5.messagelabs.com id
	14/6D-23268-1DFFBC25; Tue, 07 Jan 2014 13:23:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389101008!8310686!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15981 invoked from network); 7 Jan 2014 13:23:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 13:23:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 13:23:27 +0000
Message-Id: <52CC0DDB0200007800111213@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 13:23:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>
References: <52A744B7020000780010BEF1@nat28.tlf.novell.com>
	<52A74539020000780010BF1F@nat28.tlf.novell.com>
	<52A8B1A5.9040302@citrix.com>
	<52AB20DB020000780010D060@nat28.tlf.novell.com>
	<52AB230E.90607@citrix.com>
	<52AB38A3020000780010D1DF@nat28.tlf.novell.com>
	<52AB2B55.9030508@citrix.com>
	<B6C2EB9186482D47BD0C5A9A4834564404E702CA@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <B6C2EB9186482D47BD0C5A9A4834564404E702CA@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v2 1/5] IOMMU: make page table population
 preemptible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.12.13 at 14:43, "Zhang, Xiantao" <xiantao.zhang@intel.com> wrote:
> Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

Thanks - but what about patch 2 of this same series?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:27:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:27:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WgU-0007iS-If; Tue, 07 Jan 2014 13:27:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0WgT-0007iM-Gv
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:27:05 +0000
Received: from [85.158.137.68:4407] by server-7.bemta-3.messagelabs.com id
	0D/1A-27599-8A00CC25; Tue, 07 Jan 2014 13:27:04 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389101222!7702664!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11100 invoked from network); 7 Jan 2014 13:27:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:27:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90428510"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:27:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:27:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0WgP-00055R-Dq;
	Tue, 07 Jan 2014 13:27:01 +0000
Date: Tue, 7 Jan 2014 13:26:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Peter Maydell <peter.maydell@linaro.org>
In-Reply-To: <CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Peter Maydell wrote:
> On 6 January 2014 17:34, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 6 Jan 2014, Peter Maydell wrote:
> >> However I don't think we can have a qemu-system-null
> >> (regardless of use cases) until/unless we get rid of
> >> all the things which are compile-time decided by
> >> the system config. In an ideal world we wouldn't have
> >> any of those (and perhaps you could even build
> >> support for more than one kind of CPU into one QEMU
> >> binary), but as it is we do have them, and so a
> >> qemu-system-null is not possible.
> >
> > What are these compile-time things you are referring to?
> 
> The identifiers poisoned by include/qemu/poison.h are
> an initial but not complete list. Host and target
> endianness is a particularly obvious one, as is the
> size of a target long. You may not use these things
> in your Xen devices, but "qemu-system-null" implies
> more than "weird special purpose thing which only
> has Xen devices in it".

I see your point.
Could we allow the target endianness and long size to be selected at
configure time for target-null?
The default could be the same as the host, or could even be simply
statically determined, maybe little endian, 4 bytes.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:31:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:31:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wke-0008EO-GH; Tue, 07 Jan 2014 13:31:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0Wkd-0008EH-1d
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 13:31:23 +0000
Received: from [85.158.139.211:30616] by server-7.bemta-5.messagelabs.com id
	72/94-04824-AA10CC25; Tue, 07 Jan 2014 13:31:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389101480!8325946!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 351 invoked from network); 7 Jan 2014 13:31:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:31:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88268192"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:31:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:31:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0WkH-00058Y-Eu;
	Tue, 07 Jan 2014 13:31:01 +0000
Date: Tue, 7 Jan 2014 13:30:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Yinghai Lu <yinghai@kernel.org>
In-Reply-To: <CAE9FiQVba9K_3LDkHTP4YZJ-FzwE_mF3h6iiXEFb1shUUmhGpg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401071327510.8667@kaball.uk.xensource.com>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
	<1388707565-16535-17-git-send-email-yinghai@kernel.org>
	<alpine.DEB.2.02.1401031749340.8667@kaball.uk.xensource.com>
	<CAE9FiQVba9K_3LDkHTP4YZJ-FzwE_mF3h6iiXEFb1shUUmhGpg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Tony Luck <tony.luck@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@sisk.pl>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Xen Devel <xen-devel@lists.xensource.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Bjorn Helgaas <bhelgaas@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH v5 16/33] xen,
 irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Yinghai Lu wrote:
> On Fri, Jan 3, 2014 at 9:50 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> 
> >>  drivers/xen/events.c | 8 ++++++--
> >>  1 file changed, 6 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> >> index 4035e83..020cd77 100644
> >> --- a/drivers/xen/events.c
> >> +++ b/drivers/xen/events.c
> >> @@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
> >>       /* Legacy IRQ descriptors are already allocated by the arch. */
> >>       if (gsi < NR_IRQS_LEGACY)
> >>               irq = gsi;
> >> -     else
> >> -             irq = irq_alloc_desc_at(gsi, -1);
> >> +     else {
> >> +             /* for x86, irq already get reserved for gsi */
> >> +             irq = irq_alloc_reserved_desc_at(gsi, -1);
> >> +             if (irq < 0)
> >> +                     irq = irq_alloc_desc_at(gsi, -1);
> >> +     }
> >
> > This is common code. On ARM I get:
> >
> > drivers/xen/events.c: In function 'xen_allocate_irq_gsi':
> > drivers/xen/events.c:513:3: error: implicit declaration of function 'irq_alloc_reserved_desc_at' [-Werror=implicit-function-declaration]
> >    irq = irq_alloc_reserved_desc_at(gsi, -1);
> >    ^
> > cc1: some warnings being treated as errors
> 
> It's strange.
> 
> That function is defined alongside irq_alloc_desc_at in
> include/linux/irq.h and kernel/irq/irqdesc.c.
> 
> Did you try the whole tree, or just this patch?

Just this patch.
The whole tree (yinghai/for-x86-irq-3.14) builds just fine.
Thanks!

- Stefano
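The hunk under discussion implements a try-then-fall-back allocation: prefer the descriptor pre-reserved for the GSI (on x86), and only do a plain allocation if no reservation exists. A minimal sketch of that control flow, with hypothetical `alloc_reserved`/`alloc_fresh` helpers standing in for `irq_alloc_reserved_desc_at()`/`irq_alloc_desc_at()`:

```c
/* Hypothetical stand-ins for irq_alloc_reserved_desc_at() /
 * irq_alloc_desc_at(): return the irq number or a negative errno. */
static int alloc_reserved(int gsi, int reserved_ok)
{
	return reserved_ok ? gsi : -2;	/* fails when nothing was reserved */
}

static int alloc_fresh(int gsi)
{
	return gsi;
}

#define NR_IRQS_LEGACY 16

/* Mirrors the control flow of the patched xen_allocate_irq_gsi(). */
static int allocate_irq_gsi(int gsi, int reserved_ok)
{
	int irq;

	/* Legacy IRQ descriptors are already allocated by the arch. */
	if (gsi < NR_IRQS_LEGACY)
		irq = gsi;
	else {
		/* Prefer the descriptor reserved for this GSI... */
		irq = alloc_reserved(gsi, reserved_ok);
		/* ...and fall back to a plain allocation if none exists. */
		if (irq < 0)
			irq = alloc_fresh(gsi);
	}
	return irq;
}
```

The build failure reported above was only about `irq_alloc_reserved_desc_at` not being declared when the patch was applied in isolation; the fallback logic itself is unchanged by that.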

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:31:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:31:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wke-0008EO-GH; Tue, 07 Jan 2014 13:31:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0Wkd-0008EH-1d
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 13:31:23 +0000
Received: from [85.158.139.211:30616] by server-7.bemta-5.messagelabs.com id
	72/94-04824-AA10CC25; Tue, 07 Jan 2014 13:31:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389101480!8325946!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 351 invoked from network); 7 Jan 2014 13:31:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:31:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88268192"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:31:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:31:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0WkH-00058Y-Eu;
	Tue, 07 Jan 2014 13:31:01 +0000
Date: Tue, 7 Jan 2014 13:30:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Yinghai Lu <yinghai@kernel.org>
In-Reply-To: <CAE9FiQVba9K_3LDkHTP4YZJ-FzwE_mF3h6iiXEFb1shUUmhGpg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401071327510.8667@kaball.uk.xensource.com>
References: <1388707565-16535-1-git-send-email-yinghai@kernel.org>
	<1388707565-16535-17-git-send-email-yinghai@kernel.org>
	<alpine.DEB.2.02.1401031749340.8667@kaball.uk.xensource.com>
	<CAE9FiQVba9K_3LDkHTP4YZJ-FzwE_mF3h6iiXEFb1shUUmhGpg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Tony Luck <tony.luck@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@sisk.pl>,
	ACPI Devel Maling List <linux-acpi@vger.kernel.org>,
	Xen Devel <xen-devel@lists.xensource.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Bjorn Helgaas <bhelgaas@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH v5 16/33] xen,
 irq: Call irq_alloc_reserved_desc_at() at first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 6 Jan 2014, Yinghai Lu wrote:
> On Fri, Jan 3, 2014 at 9:50 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> 
> >>  drivers/xen/events.c | 8 ++++++--
> >>  1 file changed, 6 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> >> index 4035e83..020cd77 100644
> >> --- a/drivers/xen/events.c
> >> +++ b/drivers/xen/events.c
> >> @@ -508,8 +508,12 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
> >>       /* Legacy IRQ descriptors are already allocated by the arch. */
> >>       if (gsi < NR_IRQS_LEGACY)
> >>               irq = gsi;
> >> -     else
> >> -             irq = irq_alloc_desc_at(gsi, -1);
> >> +     else {
> >> +             /* for x86, irq already get reserved for gsi */
> >> +             irq = irq_alloc_reserved_desc_at(gsi, -1);
> >> +             if (irq < 0)
> >> +                     irq = irq_alloc_desc_at(gsi, -1);
> >> +     }
> >
> > This is common code. On ARM I get:
> >
> > drivers/xen/events.c: In function 'xen_allocate_irq_gsi':
> > drivers/xen/events.c:513:3: error: implicit declaration of function 'irq_alloc_reserved_desc_at' [-Werror=implicit-function-declaration]
> >    irq = irq_alloc_reserved_desc_at(gsi, -1);
> >    ^
> > cc1: some warnings being treated as errors
> 
> It's strange.
> 
> That function is defined alongside irq_alloc_desc_at in
> include/linux/irq.h and kernel/irq/irqdesc.c.
> 
> Did you try the whole tree, or just this patch?

Just this patch.
The whole tree (yinghai/for-x86-irq-3.14) builds just fine.
Thanks!

- Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:32:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:32:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wly-0008K7-Vj; Tue, 07 Jan 2014 13:32:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W0Wlx-0008K0-SO
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:32:46 +0000
Received: from [85.158.143.35:37672] by server-3.bemta-4.messagelabs.com id
	FB/AC-32360-DF10CC25; Tue, 07 Jan 2014 13:32:45 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389101564!10161626!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14278 invoked from network); 7 Jan 2014 13:32:44 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:32:44 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so77758eek.4
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 05:32:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=vfIyBNiuGzclUzm7HabdTAl+IShylDpUccy5wKRqGd0=;
	b=ljKCm083J7B/fJSVQcI3Y1W3oDMJDUbwgH7NdqWa5zf4QxFe6av7Tp3w+e7Z/O+Ppw
	acaMIgonISpYsAIvCm+ckpDYg7MPoqhOeH7vlLtOBKZ8vKR+cX8pcFVHn25gBmty9hRh
	fVGxs+/eoG3vhryInJVd8e92V2WGQtaP9/mhDkP5rMGr7PhjED/1r8GWLAt7Ujp07cAt
	iDBdl3es6b55nsw3w+8hDg00VVfLuZ9mbBlJ/XZrcVxkQXUHTp3mO4kSYMwTz80sAAO5
	JuaTtgkYp511OnIO3pTUE1SzPoC3zj1e/anDdptRx3rz8Oa3jbE2eRW0qX57zXoYJBDU
	AGWw==
X-Received: by 10.14.48.74 with SMTP id u50mr1837892eeb.107.1389101564384;
	Tue, 07 Jan 2014 05:32:44 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-192-186.cust.dsl.vodafone.it.
	[2.35.192.186]) by mx.google.com with ESMTPSA id
	n1sm180164799eep.20.2014.01.07.05.32.40 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 05:32:42 -0800 (PST)
Message-ID: <52CC01F6.6050502@redhat.com>
Date: Tue, 07 Jan 2014 14:32:38 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
In-Reply-To: <20140107123417.GG10654@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	=?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/2014 13:34, Wei Liu wrote:
> On Mon, Jan 06, 2014 at 09:53:37PM +0100, Paolo Bonzini wrote:
>> On 06/01/2014 19:00, Andreas Färber wrote:
>>> On 06.01.2014 16:39, Anthony Liguori wrote:
>>>> We already have accel=xen.  I'm echoing Peter's suggestion of having
>>>> the ability to compile out accel=tcg.
>>>
>>> Didn't you and Paolo even have patches for that a while ago?
>>
>> Yes, but some code shuffling is required in each target to make sure you
>> can compile out translate-all.c, cputlb.c, etc.  So my patches only
>> worked for x86 at the time.
>>
>

> Hi Paolo, I don't monitor qemu-devel on a daily basis. Do you have
> reference to your patches? Thanks.

Googling "disable tcg" would have provided an answer, but the patches
were old enough to be basically useless.  I'll refresh the current
version in the next few days.  Currently I am (or try to be) on
vacation, so I cannot really say when, but I'll do my best. :)

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WmX-0008Pl-Lh; Tue, 07 Jan 2014 13:33:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0WmV-0008PH-Ra
	for Xen-devel@lists.xensource.com; Tue, 07 Jan 2014 13:33:20 +0000
Received: from [85.158.143.35:33055] by server-1.bemta-4.messagelabs.com id
	E3/02-02132-F120CC25; Tue, 07 Jan 2014 13:33:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389101597!10007274!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4157 invoked from network); 7 Jan 2014 13:33:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:33:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90430170"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:33:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:33:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0WmS-0005AB-F0;
	Tue, 07 Jan 2014 13:33:16 +0000
Date: Tue, 7 Jan 2014 13:32:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stephen Rothwell <sfr@canb.auug.org.au>
In-Reply-To: <20140107154222.b2c0c8081441671f32d78bc1@canb.auug.org.au>
Message-ID: <alpine.DEB.2.02.1401071332140.8667@kaball.uk.xensource.com>
References: <20140107154222.b2c0c8081441671f32d78bc1@canb.auug.org.au>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Xen Devel <Xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Wei Liu <liuw@liuw.name>, linux-kernel@vger.kernel.org,
	Rob Herring <rob.herring@calxeda.com>,
	linux-next@vger.kernel.org, Russell King <rmk@arm.linux.org.uk>
Subject: Re: [Xen-devel] linux-next: manual merge of the xen-tip tree with
 the arm-current tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014, Stephen Rothwell wrote:
> Hi all,
> 
> Today's linux-next merge of the xen-tip tree got a conflict in
> arch/arm/include/asm/xen/page.h between commit 0a5ccc86507f ("ARM:
> 7933/1: rename ioremap_cached to ioremap_cache") from the arm-current tree and
> commit 02bcf053e9c5 ("asm/xen/page.h: remove redundant semicolon") from
> the xen-tip tree.
> 
> I fixed it up (see below) and can carry the fix as necessary (no action
> is required).

That's fine, thanks!


> -- 
> Cheers,
> Stephen Rothwell                    sfr@canb.auug.org.au
> 
> diff --cc arch/arm/include/asm/xen/page.h
> index 3759cacdd7f8,709c4b4d2f1d..000000000000
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@@ -117,6 -117,7 +117,7 @@@ static inline bool set_phys_to_machine(
>   	return __set_phys_to_machine(pfn, mfn);
>   }
>   
> - #define xen_remap(cookie, size) ioremap_cache((cookie), (size));
>  -#define xen_remap(cookie, size) ioremap_cached((cookie), (size))
> ++#define xen_remap(cookie, size) ioremap_cache((cookie), (size))
> + #define xen_unmap(cookie) iounmap((cookie))
>   
>   #endif /* _ASM_ARM_XEN_PAGE_H */
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:36:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WpE-0000Cd-9w; Tue, 07 Jan 2014 13:36:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0WpC-0000CQ-NK
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:36:06 +0000
Received: from [193.109.254.147:22120] by server-12.bemta-14.messagelabs.com
	id 5C/F3-13681-6C20CC25; Tue, 07 Jan 2014 13:36:06 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389101764!9315961!1
X-Originating-IP: [209.85.217.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32562 invoked from network); 7 Jan 2014 13:36:05 -0000
Received: from mail-lb0-f182.google.com (HELO mail-lb0-f182.google.com)
	(209.85.217.182)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:36:05 -0000
Received: by mail-lb0-f182.google.com with SMTP id l4so280706lbv.13
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 05:36:04 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=JH6COA+ZtA8t8RQUrrcG22r6nkPTtjLiEyd4DsfkF4M=;
	b=QIT7I7ARjtXR1tqqqI0ibmUmX0KrXU2Or44nNDugaXYz5DgkBBHtUGO6QEIbfPpclc
	ZXZyhLQ6y/5WEtmHIJQN9DpNVevmHYGie2iOXeXB8wInf9//7/lf2i1Gs3uEvmCb7WpJ
	mI7qkr8c3f1exdTOPG89STpNwGYF6lROa4arcHZYfa0mHGbaqaoB2zrhnom60QS9CTrm
	9tXMvczdxrLHgk358NSsVTyHfVOPvYKTP0j0zE1lMlSiKyXp3ZznWbUWJh3lwoD86zng
	gKxi93CXnAXtUoyp+p9RzeammFGmdaoYht/dHRC5VAa/I6usN0zu0Y7VDht5azES8BVP
	AQXA==
X-Gm-Message-State: ALoCoQmArM3jnAXJr6DNntkt3ULRnkN0bWZPcGIhHt3HgHjIB8AU2Y8emQ02wyEpHzDqIOzHNSfg
X-Received: by 10.152.234.231 with SMTP id uh7mr46947555lac.10.1389101764655; 
	Tue, 07 Jan 2014 05:36:04 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Tue, 7 Jan 2014 05:35:44 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 7 Jan 2014 13:35:44 +0000
Message-ID: <CAFEAcA_X-PSZwoTu-X7mMeD33s0SqgEY0U2nz-uQv992YQV_wA@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 7 January 2014 13:26, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Jan 2014, Peter Maydell wrote:
>> The identifiers poisoned by include/qemu/poison.h are
>> an initial but not complete list. Host and target
>> endianness is a particularly obvious one, as is the
>> size of a target long. You may not use these things
>> in your Xen devices, but "qemu-system-null" implies
>> more than "weird special purpose thing which only
>> has Xen devices in it".
>
> I see your point.
> Could we allow target endianness and long size to be selected at
> configure time for target-null?
> The default could be the same as the host, or could even be simply
> statically determined, maybe little endian, 4 bytes.

I think this is heading down the "weird special case for
Xen" path, which seems a bad idea. I'd rather see us
able to configure with "no tcg, no kvm" and "only
build in the devices for this minimal xen board",
which pretty much gets you to the same place without
adding a misleading target-null extra binary.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:36:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WpE-0000Cd-9w; Tue, 07 Jan 2014 13:36:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0WpC-0000CQ-NK
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:36:06 +0000
Received: from [193.109.254.147:22120] by server-12.bemta-14.messagelabs.com
	id 5C/F3-13681-6C20CC25; Tue, 07 Jan 2014 13:36:06 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389101764!9315961!1
X-Originating-IP: [209.85.217.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32562 invoked from network); 7 Jan 2014 13:36:05 -0000
Received: from mail-lb0-f182.google.com (HELO mail-lb0-f182.google.com)
	(209.85.217.182)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:36:05 -0000
Received: by mail-lb0-f182.google.com with SMTP id l4so280706lbv.13
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 05:36:04 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=JH6COA+ZtA8t8RQUrrcG22r6nkPTtjLiEyd4DsfkF4M=;
	b=QIT7I7ARjtXR1tqqqI0ibmUmX0KrXU2Or44nNDugaXYz5DgkBBHtUGO6QEIbfPpclc
	ZXZyhLQ6y/5WEtmHIJQN9DpNVevmHYGie2iOXeXB8wInf9//7/lf2i1Gs3uEvmCb7WpJ
	mI7qkr8c3f1exdTOPG89STpNwGYF6lROa4arcHZYfa0mHGbaqaoB2zrhnom60QS9CTrm
	9tXMvczdxrLHgk358NSsVTyHfVOPvYKTP0j0zE1lMlSiKyXp3ZznWbUWJh3lwoD86zng
	gKxi93CXnAXtUoyp+p9RzeammFGmdaoYht/dHRC5VAa/I6usN0zu0Y7VDht5azES8BVP
	AQXA==
X-Gm-Message-State: ALoCoQmArM3jnAXJr6DNntkt3ULRnkN0bWZPcGIhHt3HgHjIB8AU2Y8emQ02wyEpHzDqIOzHNSfg
X-Received: by 10.152.234.231 with SMTP id uh7mr46947555lac.10.1389101764655; 
	Tue, 07 Jan 2014 05:36:04 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Tue, 7 Jan 2014 05:35:44 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 7 Jan 2014 13:35:44 +0000
Message-ID: <CAFEAcA_X-PSZwoTu-X7mMeD33s0SqgEY0U2nz-uQv992YQV_wA@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Qemu-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 7 January 2014 13:26, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 6 Jan 2014, Peter Maydell wrote:
>> The identifiers poisoned by include/qemu/poison.h are
>> an initial but not complete list. Host and target
>> endianness is a particularly obvious one, as is the
>> size of a target long. You may not use these things
>> in your Xen devices, but "qemu-system-null" implies
>> more than "weird special purpose thing which only
>> has Xen devices in it".
>
> I see your point.
> Could we allow target endianness and long size to be selected at
> configure time for target-null?
> The default could be the same as the host, or could even be simply
> statically determined, maybe little endian, 4 bytes.

I think this is heading down the "weird special case for
Xen" path, which seems a bad idea. I'd rather see us
able to configure with "no tcg, no kvm" and "only
build in the devices for this minimal xen board",
which pretty much gets you to the same place without
adding a misleading target-null extra binary.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:37:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:37:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wqv-0000Kp-TI; Tue, 07 Jan 2014 13:37:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0Wqv-0000Kd-2q
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:37:53 +0000
Received: from [85.158.143.35:24227] by server-3.bemta-4.messagelabs.com id
	D0/35-32360-0330CC25; Tue, 07 Jan 2014 13:37:52 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389101870!10166837!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30747 invoked from network); 7 Jan 2014 13:37:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:37:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="90431697"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:37:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 08:37:49 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0Wqr-0005Eo-Ls;
	Tue, 07 Jan 2014 13:37:49 +0000
Date: Tue, 7 Jan 2014 13:37:49 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140107133749.GH10654@zion.uk.xensource.com>
References: <1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CC01F6.6050502@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:32:38PM +0100, Paolo Bonzini wrote:
> Il 07/01/2014 13:34, Wei Liu ha scritto:
> > On Mon, Jan 06, 2014 at 09:53:37PM +0100, Paolo Bonzini wrote:
> >> Il 06/01/2014 19:00, Andreas Färber ha scritto:
> >>> Am 06.01.2014 16:39, schrieb Anthony Liguori:
> >>>> We already have accel=xen.  I'm echoing Peter's suggestion of having the
> >>>> ability to compile out accel=tcg.
> >>>
> >>> Didn't you and Paolo even have patches for that a while ago?
> >>
> >> Yes, but some code shuffling is required in each target to make sure you
> >> can compile out translate-all.c, cputlb.c, etc.  So my patches only
> >> worked for x86 at the time.
> >>
> > 
> > Hi Paolo, I don't monitor qemu-devel on a daily basis. Do you have
> > reference to your patches? Thanks.
> 
> Googling "disable tcg" would have provided an answer, but the patches
> were old enough to be basically useless.  I'll refresh the current
> version in the next few days.  Currently I am (or try to be) on
> vacation, so I cannot really say when, but I'll do my best. :)
> 

Thanks. I found them. Enjoy your vacation!

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:39:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0WsS-0000kc-D4; Tue, 07 Jan 2014 13:39:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0WsR-0000kC-3m
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:39:27 +0000
Received: from [193.109.254.147:57318] by server-11.bemta-14.messagelabs.com
	id EC/81-20576-E830CC25; Tue, 07 Jan 2014 13:39:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389101964!9349336!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30597 invoked from network); 7 Jan 2014 13:39:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:39:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88271025"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:39:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:39:09 -0500
Message-ID: <1389101948.31766.155.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Jan 2014 13:39:08 +0000
In-Reply-To: <52CBF81D02000078001110EF@nat28.tlf.novell.com>
References: <20131220175735.GA619@phenom.dumpdata.com>
	<1387624194.1025.70.camel@dagon.hellion.org.uk>
	<52B8046302000078000A9C8C@nat28.tlf.novell.com>
	<52B8235D02000078000A9CB1@nat28.tlf.novell.com>
	<20131224015650.GA2191@pegasus.dumpdata.com>
	<52CBC7900200007800110EF7@nat28.tlf.novell.com>
	<1389094274.31766.133.camel@kazak.uk.xensource.com>
	<52CBF81D02000078001110EF@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.1 + Linux compiled with PVH == BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 11:50 +0000, Jan Beulich wrote:
> >>> On 07.01.14 at 12:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-07 at 08:23 +0000, Jan Beulich wrote:
> >> >> that feature statically in the first place - that should be done only if 
> > the 
> >> > kernel
> >> >> could _only_ boot in PVH mode.
> >> > 
> >> > The feature is not marked as "required" but rather - it can utilize said
> >> > extension (so supported). I am advocating that the caller checks that
> >> > all of the required pieces are correct - and it can ignore the ones it
> >> > has no idea of (which it does for some of the Xen ELF notes - ignores
> >> > them if it has no idea of what they are).
> >> 
> >> What would be the point of telling the hypervisor that the kernel
> >> can utilize a certain extension? The kernel could just utilize it, and
> >> the hypervisor would know by that simple fact.
> > 
> > But for PVH doesn't the hypervisor need to know at dom0 build time
> > whether to build a PV or PVH domain?
> 
> Which needs to be communicated via hypervisor command line option
> anyway.

I would expect that the plan is to eventually enable PVH by default if
the kernel can cope with it.

>  Specifying the option without having a suitable kernel is (of
> course) a user error (generally expected to result in a kernel crash).

In the case where the user has specified the option sure.

Note that the original issue was a PVH capable kernel under a non-PVH
capable Xen, although we've strayed a bit from that topic.

Ian.

> 
> Jan
> 
> > So it needs to know upfront if the
> > kernel could do PVH or not, and then pick, but once it has picked the
> > kernel had best follow that choice.
> > 
> > So in the PVH case it's not just a simple case of the kernel deciding to
> > utilize an optional feature, the optional feature has already been
> > enabled.
> > 
> > Ian
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:41:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Wuj-0000yZ-VT; Tue, 07 Jan 2014 13:41:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0Wui-0000yP-OX
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:41:48 +0000
Received: from [193.109.254.147:52088] by server-11.bemta-14.messagelabs.com
	id DB/15-20576-C140CC25; Tue, 07 Jan 2014 13:41:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389102106!9321790!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12838 invoked from network); 7 Jan 2014 13:41:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:41:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,618,1384300800"; d="scan'208";a="88271678"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:41:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:41:45 -0500
Message-ID: <52CC0418.7090405@citrix.com>
Date: Tue, 7 Jan 2014 13:41:44 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Yongjun <weiyj.lk@gmail.com>
References: <CAPgLHd_AO6mEwaemSobCMwCg2J=xmH+-FF0Ek+PS8ZYdHJStXw@mail.gmail.com>
In-Reply-To: <CAPgLHd_AO6mEwaemSobCMwCg2J=xmH+-FF0Ek+PS8ZYdHJStXw@mail.gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	yongjun_wei@trendmicro.com.cn, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH -next] xen/evtchn_fifo: fix error return
	code in evtchn_fifo_setup()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 13:11, Wei Yongjun wrote:
> From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
> 
> Fix to return -ENOMEM from the error handling case instead of
> 0 (overwritten to 0 by the HYPERVISOR_event_channel_op call);
> otherwise the error condition can't be reflected in the
> return value.

Thanks.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:50:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X2x-0001X8-WD; Tue, 07 Jan 2014 13:50:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W0X2w-0001X3-Ix
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:50:19 +0000
Received: from [85.158.137.68:27689] by server-14.bemta-3.messagelabs.com id
	6D/B5-06105-9160CC25; Tue, 07 Jan 2014 13:50:17 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389102616!6921252!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11229 invoked from network); 7 Jan 2014 13:50:16 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:50:16 -0000
Received: by mail-ea0-f170.google.com with SMTP id k10so221332eaj.15
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 05:50:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=KYwtEW7K3nUtU59pCbicLS8K4fqWtAaYJvhUeZE3rqI=;
	b=up/IFn72ftlwYDQw9cQ/mo0L7zbcphDVFtLS0nPbg03qHzVhhMUNjM6UFSw1FHWMSN
	NlPzMb1xqQDavfIdXdlafEe5m7XO/i+0Nxjk5UOim/K1NKUnw1G9AjKHgdB6Q+MjOtnk
	zMRI38mF0IKwc0kCJBjAOBJ9HlsbjlfsltUGydaXs9+mYcpihVTUrg9fP98I+gltNno4
	JLREdV6HIuO3nBNxkMkZTJQ/UMAi4KYw3dyLtwbdoU2FbbnkQIJ6e8ZFyf8KOeiYYtct
	LVDu6mqTluEL6+5dL558zZNnoFPPFog2RZEot+FM1Xzriv3ut9kW0wIvV7Z/I81cLEWg
	TT3Q==
X-Received: by 10.14.207.194 with SMTP id n42mr21593125eeo.76.1389102616359;
	Tue, 07 Jan 2014 05:50:16 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-192-186.cust.dsl.vodafone.it.
	[2.35.192.186])
	by mx.google.com with ESMTPSA id 1sm180130988eeg.4.2014.01.07.05.50.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 05:50:14 -0800 (PST)
Message-ID: <52CC0614.5050402@redhat.com>
Date: Tue, 07 Jan 2014 14:50:12 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/2014 14:26, Stefano Stabellini wrote:
> > The identifiers poisoned by include/qemu/poison.h are
> > an initial but not complete list. Host and target
> > endianness is a particularly obvious one, as is the
> > size of a target long. You may not use these things
> > in your Xen devices, but "qemu-system-null" implies
> > more than "weird special purpose thing which only
> > has Xen devices in it".
> 
> I see your point.
> Could we allow target endianness and long size to be selected at
> configure time for target-null?
> The default could be the same as the host, or could even be simply
> statically determined, maybe little endian, 4 bytes.

For Xen both long sizes are already supported by the block backend.  Are
there still guests that use BLKIF_PROTOCOL_NATIVE?  If not, long size
might not matter at all.

And if in the future Xen were to grow support for a big-endian target,
you could either enforce little-endian for the ring buffers, or
negotiate it in xenstore like you do for sizeof(long).
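The negotiation described above can be sketched roughly as follows (a minimal illustration; the function name and return values are hypothetical, with the protocol strings modeled on the blkif ABI tags):

```shell
# Hypothetical sketch of xenstore-based protocol negotiation: the
# backend maps the frontend's advertised "protocol" value onto a ring
# layout.  An empty value stands in for BLKIF_PROTOCOL_NATIVE, which
# is what ties the ring format to the guest's sizeof(long).
negotiate_ring_proto() {
    case "$1" in
        x86_32-abi) echo 32 ;;
        x86_64-abi) echo 64 ;;
        *)          echo native ;;
    esac
}

negotiate_ring_proto x86_64-abi   # prints "64"
negotiate_ring_proto ""           # prints "native"
```

An endianness key could be negotiated the same way, defaulting to little-endian when the frontend does not advertise one.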

So let's call things by their name and add qemu-system-xenpv that covers
both x86 and ARM and anything else in the future.  Phasing out the
i386/x86_64 xenpv machine type makes total sense if the exact same code
can support ARM PV domains too.  This machine would only be compiled if
you had support for Xen.  My current patches have:

supported_target() {
    test "$tcg" = "yes" && return 0
    supported_kvm_target && return 0
    supported_xen_target && return 0
    return 1
}

but adding a more refined test for supported-on-TCG would be easy.
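For instance, such a refinement might look like this (a hypothetical sketch, not the actual patches; the stubs stand in for configure's real KVM/Xen checks):

```shell
# Stubs standing in for configure's real accelerator checks (hypothetical).
supported_kvm_target() { false; }
supported_xen_target() { test "$1" = "xenpv"; }

# Hypothetical per-target TCG check: a PV-only machine has no TCG CPU
# model, so it is only buildable via the accelerator checks below.
supported_tcg_target() {
    test "$tcg" = "yes" || return 1
    case "$1" in
        xenpv) return 1 ;;
        *)     return 0 ;;
    esac
}

supported_target() {
    supported_tcg_target "$1" && return 0
    supported_kvm_target "$1" && return 0
    supported_xen_target "$1" && return 0
    return 1
}
```

With tcg=yes, supported_target x86_64 succeeds through the TCG path, while supported_target xenpv succeeds only through the Xen check.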

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:52:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X4x-0001cl-Iq; Tue, 07 Jan 2014 13:52:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X4v-0001cZ-Qg
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:52:22 +0000
Received: from [85.158.137.68:6687] by server-12.bemta-3.messagelabs.com id
	4F/69-20055-5960CC25; Tue, 07 Jan 2014 13:52:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389102738!7709300!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6161 invoked from network); 7 Jan 2014 13:52:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436026"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:52:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:52:17 -0500
Message-ID: <1389102736.12612.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 7 Jan 2014 13:52:16 +0000
In-Reply-To: <1389020689.31766.42.camel@kazak.uk.xensource.com>
References: <1387824442-368-1-git-send-email-andrew.cooper3@citrix.com>
	<1389020689.31766.42.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] docs: Honour --{en,
 dis}able-xend when building docs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 15:04 +0000, Ian Campbell wrote:
> On Mon, 2013-12-23 at 18:47 +0000, Andrew Cooper wrote:
> > If a user has specified --disable-xend, they won't want the manpages either.
> > 
> > Propagating this parameter requires reorganising the way in which the
> > makefile chooses which documents to build.
> > 
> > There is now a split of {MAN1,MAN5,MARKDOWN,TXT}SRC-y to select which
> > documentation to build, which is separate from the patsubst section which
> > generates appropriate paths to trigger the later rules.
> > 
> > The manpages are quite easy to split between xend, xl and xenstore, and have
> > been.  Items from misc/ are much harder and have been left.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> 
> > CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> > CC: George Dunlap <george.dunlap@eu.citrix.com>
> > 
> > --

NB only "---" (three dashes) has the effect you were intending here.
Possibly "-- " (two dashes and a space) does too but there was no space
in the above (and I don't know if that syntax works in the context of
git am...)
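For illustration, the convention works like this in a patch mail (a made-up example):

```
From: A U Thor <author@example.org>
Subject: [PATCH] docs: example change

Commit message kept by git am.

Signed-off-by: A U Thor <author@example.org>
---
Commentary below the three-dash line is discarded by git am
(only the diffstat and the diff proper are used from here down).

 docs/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
```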
 
> > The configure scripts should be regenerated as part of applying this patch.
> > 
> > George:
> >    I request this gets a release ack for 4.4.  It could be argued as a bug in
> >    the current implementation of --disable-xend, and the extent of potential
> >    problems are that I have accidentally missed some of the manpages during
> >    the reorg, but this can be easily confirmed by comparing the results of the
> >    two builds (which I have done).
> 
> (as acting RM In George's absence):
> 
> I suppose I agree:
> Release-Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

With this applied I get the following difference to the set of installed
files. Given this and the fact that too many manpages being installed is
not the worst thing in the world I'm now inclined to say this should
wait for 4.5, since it's obviously not as trivial as it might seem.


--- ../FILE_LIST.BASE	2014-01-07 13:38:03.000000000 +0000
+++ ../FILE_LIST	2014-01-07 13:45:55.000000000 +0000
@@ -988,33 +988,25 @@
 dist/install/usr/local/share/doc/xen/html/man/xmdomain.cfg.5.html
 dist/install/usr/local/share/doc/xen/html/misc
 dist/install/usr/local/share/doc/xen/html/misc/console.txt
-dist/install/usr/local/share/doc/xen/html/misc/coverage.html
 dist/install/usr/local/share/doc/xen/html/misc/crashdb.txt
 dist/install/usr/local/share/doc/xen/html/misc/distro_mapping.txt
 dist/install/usr/local/share/doc/xen/html/misc/dump-core-format.txt
-dist/install/usr/local/share/doc/xen/html/misc/efi.html
 dist/install/usr/local/share/doc/xen/html/misc/grant-tables.txt
-dist/install/usr/local/share/doc/xen/html/misc/hvm-emulated-unplug.html
 dist/install/usr/local/share/doc/xen/html/misc/index.html
 dist/install/usr/local/share/doc/xen/html/misc/kexec_and_kdump.txt
 dist/install/usr/local/share/doc/xen/html/misc/libxl_memory.txt
 dist/install/usr/local/share/doc/xen/html/misc/pci-device-reservations.txt
 dist/install/usr/local/share/doc/xen/html/misc/printk-formats.txt
 dist/install/usr/local/share/doc/xen/html/misc/pvh-readme.txt
-dist/install/usr/local/share/doc/xen/html/misc/qemu-upstream_howto_use_it.html
 dist/install/usr/local/share/doc/xen/html/misc/sedf_scheduler_mini-HOWTO.txt
 dist/install/usr/local/share/doc/xen/html/misc/tscmode.txt
 dist/install/usr/local/share/doc/xen/html/misc/vbd-interface.txt
 dist/install/usr/local/share/doc/xen/html/misc/vtd.txt
 dist/install/usr/local/share/doc/xen/html/misc/vtpm.txt
-dist/install/usr/local/share/doc/xen/html/misc/xen-command-line.html
 dist/install/usr/local/share/doc/xen/html/misc/xen-error-handling.txt
 dist/install/usr/local/share/doc/xen/html/misc/xenpaging.txt
-dist/install/usr/local/share/doc/xen/html/misc/xenstore-paths.html
 dist/install/usr/local/share/doc/xen/html/misc/xenstore.txt
 dist/install/usr/local/share/doc/xen/html/misc/xl-disk-configuration.txt
-dist/install/usr/local/share/doc/xen/html/misc/xl-network-configuration.html
-dist/install/usr/local/share/doc/xen/html/misc/xl-numa-placement.html
 dist/install/usr/local/share/doc/xen/html/misc/xsm-flask.txt
 dist/install/usr/local/share/doc/xen/README.stubdom
 dist/install/usr/local/share/doc/xen/README.xenmon




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:53:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:53:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X5v-0001jl-6Z; Tue, 07 Jan 2014 13:53:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X5t-0001jb-Ck
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:53:21 +0000
Received: from [85.158.143.35:53859] by server-1.bemta-4.messagelabs.com id
	5D/93-02132-0D60CC25; Tue, 07 Jan 2014 13:53:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389102798!10181433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5859 invoked from network); 7 Jan 2014 13:53:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:53:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436378"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:53:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:53:17 -0500
Message-ID: <1389102796.12612.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <tsahee@gmx.com>
Date: Tue, 7 Jan 2014 13:53:16 +0000
In-Reply-To: <1387710091-1843-1-git-send-email-tsahee@gmx.com>
References: <1387710091-1843-1-git-send-email-tsahee@gmx.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@citrix.com, stefano.stabellini@eu.citrix.com, tim@xen.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/dts: specific bad cell count error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2013-12-22 at 13:01 +0200, tsahee@gmx.com wrote:
> From: Tsahee Zidenberg <tsahee@gmx.com>
> 
> Specify in the error message whether the bad cell count is in the device or the parent.
> 
> Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

RM hat: clearly no risk to the release here, and it improves the diagnostics
in a useful way.

Applied, thanks.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:53:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X6K-0001nv-Jm; Tue, 07 Jan 2014 13:53:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <weiyj.lk@gmail.com>) id 1W0WqE-0000Ii-PB
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:37:10 +0000
Received: from [85.158.139.211:4763] by server-7.bemta-5.messagelabs.com id
	81/5E-04824-6030CC25; Tue, 07 Jan 2014 13:37:10 +0000
X-Env-Sender: weiyj.lk@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389101829!8327203!1
X-Originating-IP: [209.85.214.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29353 invoked from network); 7 Jan 2014 13:37:09 -0000
Received: from mail-bk0-f41.google.com (HELO mail-bk0-f41.google.com)
	(209.85.214.41)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:37:09 -0000
Received: by mail-bk0-f41.google.com with SMTP id v15so236987bkz.28
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 05:37:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=zenXD+M5QxtzjxURkDY2INtIP1odRrcBBrRi9rDYVF0=;
	b=cTSZuwipoBjmB6NJQkELj0c64WJI3915jxctp3rb0k9BFnajkbS5nXTuoIzHv4Vz0F
	fXxU6uG8R7AvaknVYx+wk25kWN/R0d5U2ZVx6CZtp+y2AeNIEXfNlQrMJgKQsVBc3j87
	p4HivbkPQXeQueg+qIIs5Z7T8+m+N5aWIHOH1pacM98y5VP6sjidjaotKDhDjb7sYUbV
	zCbtspu+pwq8a9bmomE4OBP5LFSJQc+TnY/Zs/FHAdG9Owddf3+cYXhC5Munc/jm1Q/M
	J9A/hiD5f14nYWzAgupMvtPM5gHGKbreQsp/Tew5cUHZx3do6gcuYWtYoxAMIwHHopc6
	9gwg==
MIME-Version: 1.0
X-Received: by 10.205.65.81 with SMTP id xl17mr2149195bkb.66.1389101829153;
	Tue, 07 Jan 2014 05:37:09 -0800 (PST)
Received: by 10.204.74.130 with HTTP; Tue, 7 Jan 2014 05:37:09 -0800 (PST)
Date: Tue, 7 Jan 2014 21:37:09 +0800
Message-ID: <CAPgLHd-ADur1e3T_jBFK7=+s+j4ggRVKutqYBS0pXMbtDSdjOA@mail.gmail.com>
From: Wei Yongjun <weiyj.lk@gmail.com>
To: konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, 
	david.vrabel@citrix.com, tglx@linutronix.de, mingo@redhat.com,
	hpa@zytor.com
X-Mailman-Approved-At: Tue, 07 Jan 2014 13:53:46 +0000
Cc: xen-devel@lists.xenproject.org, yongjun_wei@trendmicro.com.cn,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH -next] xen/pvh: remove duplicated include from
	enlighten.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Remove duplicated include.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
---
 arch/x86/xen/enlighten.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4e2f30..b6d61c3 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -46,7 +46,6 @@
 #include <xen/hvm.h>
 #include <xen/hvc-console.h>
 #include <xen/acpi.h>
-#include <xen/features.h>
 
 #include <asm/paravirt.h>
 #include <asm/apic.h>
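Duplicate includes like the one removed above can also be caught mechanically. A minimal sketch of such a check (illustrative only; `has_duplicate_include` is a made-up helper, not the tool used to find this instance), scanning a source buffer for repeated `#include` lines:

```c
#include <assert.h>
#include <string.h>

/* Return 1 if any #include line occurs more than once in src, else 0.
 * Capacity limits (64 includes, 127 chars each) are arbitrary for the sketch. */
static int has_duplicate_include(const char *src)
{
    char seen[64][128];
    int nseen = 0;
    const char *p = src;

    while (*p) {
        const char *nl = strchr(p, '\n');
        size_t len = nl ? (size_t)(nl - p) : strlen(p);

        if (len < sizeof(seen[0]) && strncmp(p, "#include", 8) == 0) {
            for (int i = 0; i < nseen; i++)
                if (strlen(seen[i]) == len && strncmp(seen[i], p, len) == 0)
                    return 1;          /* same #include line seen before */
            if (nseen < 64) {
                memcpy(seen[nseen], p, len);
                seen[nseen][len] = '\0';
                nseen++;
            }
        }
        p = nl ? nl + 1 : p + len;     /* advance to the next line */
    }
    return 0;
}
```

Run over a whole tree, a check like this flags exactly the kind of redundancy this patch removes.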


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:53:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X6U-0001qH-17; Tue, 07 Jan 2014 13:53:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X6S-0001pi-KZ
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:53:56 +0000
Received: from [85.158.137.68:23672] by server-2.bemta-3.messagelabs.com id
	C8/99-17329-3F60CC25; Tue, 07 Jan 2014 13:53:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389102833!7660676!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32721 invoked from network); 7 Jan 2014 13:53:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:53:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436544"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:53:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:53:52 -0500
Message-ID: <1389102830.12612.14.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 13:53:50 +0000
In-Reply-To: <1389008899.31766.13.camel@kazak.uk.xensource.com>
References: <1387290499-3970-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1312171455110.8667@kaball.uk.xensource.com>
	<1387461097.9925.78.camel@kazak.uk.xensource.com>
	<52CA9717.1060700@linaro.org>
	<1389008899.31766.13.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	patches@linaro.org, ian.jackson@eu.citrix.com,
	stefano.stabellini@citrix.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/arm: Allow balooning working with
 1:1 memory mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 11:48 +0000, Ian Campbell wrote:
> On Mon, 2014-01-06 at 11:44 +0000, Julien Grall wrote:
> > 
> > On 12/19/2013 01:51 PM, Ian Campbell wrote:
> > > On Tue, 2013-12-17 at 14:55 +0000, Stefano Stabellini wrote:
> > >> On Tue, 17 Dec 2013, Julien Grall wrote:
> > >>> Without an IOMMU, dom0 must have a 1:1 memory mapping for all of
> > >>> these guest physical addresses. When the balloon decides to give back a
> > >>> page to the kernel, this page must have the same address as before.
> > >>> Otherwise, we will lose the 1:1 mapping and will break DMA-capable
> > >>> devices.
> > >>>
> > >>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > >>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > >>> Cc: Keir Fraser <keir@xen.org>
> > >>
> > >> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > >
> > > Keir, any objections to this patch?
> > 
> > Hi Ian,
> > 
> > Do we need to wait for Keir's Ack?
> 
> I think he's had ample opportunity to object (last call!) and since this
> code is explicitly dead on everything apart from ARM I intend to commit
> it next time I go through my queue.

Which I've now done.
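The constraint the patch enforces is that, without an IOMMU, dom0's guest frame numbers must equal the backing machine frame numbers, so the balloon driver may only repopulate a frame at the address it originally released. A toy model of that invariant (hypothetical names and structure; not the actual Linux or Xen balloon code):

```c
#include <assert.h>

#define NPAGES 8
#define INVALID_MFN (~0UL)

/* gpfn_to_mfn[i] holds the machine frame backing guest frame i,
 * or INVALID_MFN when the page has been ballooned out. */
static unsigned long gpfn_to_mfn[NPAGES];

static void init_identity_map(void)
{
    for (unsigned long i = 0; i < NPAGES; i++)
        gpfn_to_mfn[i] = i;              /* 1:1 mapping: gpfn == mfn */
}

static void balloon_out(unsigned long gpfn)
{
    gpfn_to_mfn[gpfn] = INVALID_MFN;     /* frame handed back to Xen */
}

/* Repopulate gpfn with machine frame mfn. With no IOMMU this is only
 * safe when mfn == gpfn; otherwise DMA to that address goes astray. */
static int balloon_in(unsigned long gpfn, unsigned long mfn)
{
    if (mfn != gpfn)
        return -1;                       /* would break the 1:1 mapping */
    gpfn_to_mfn[gpfn] = mfn;
    return 0;
}
```

Any balloon-in path that accepted an arbitrary `mfn` would silently break DMA-capable devices, which is exactly the failure mode the commit message describes.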



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:54:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X6z-0001yK-Ft; Tue, 07 Jan 2014 13:54:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X6y-0001xy-PQ
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:54:28 +0000
Received: from [85.158.137.68:30623] by server-2.bemta-3.messagelabs.com id
	53/8A-17329-3170CC25; Tue, 07 Jan 2014 13:54:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389102865!7666027!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3663 invoked from network); 7 Jan 2014 13:54:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:54:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88275795"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:54:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:54:24 -0500
Message-ID: <1389102863.12612.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 13:54:23 +0000
In-Reply-To: <52B71FD5.5090000@linaro.org>
References: <1387709997-1551-1-git-send-email-tsahee@gmx.com>
	<52B71FD5.5090000@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@citrix.com, tsahee@gmx.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] ns16550: support ns16550a
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2013-12-22 at 17:22 +0000, Julien Grall wrote:
> 
> On 12/22/2013 10:59 AM, tsahee@gmx.com wrote:
> > From: Tsahee Zidenberg <tsahee@gmx.com>
> >
> > Ns16550a devices are Ns16550 devices with additional capabilities.
> > Declare Xen compatible with this device, so that unmodified device
> > trees can be used.
> >
> > Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Julien Grall <julien.grall@linaro.org>

RM hat: There is basically no risk to existing systems, since this just
adds a new compatible string. The benefit is that Xen will work on some
new set of hardware; if things turn out to be buggy on that type of
hardware, we haven't lost anything versus not trying to run on it at
all.
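The change itself amounts to adding "ns16550a" to the driver's list of accepted compatible strings, so that a device tree written for an ns16550a-compatible UART matches without modification. A simplified sketch of that matching (illustrative; Xen's real driver uses its own device-tree match tables, not these names):

```c
#include <assert.h>
#include <string.h>

/* Compatible strings this UART driver claims; the patch in question
 * adds "ns16550a" alongside the existing "ns16550". */
static const char *const ns16550_compat[] = {
    "ns16550",
    "ns16550a",
    NULL,                                /* sentinel */
};

/* Return 1 if a device tree node's compatible string is handled. */
static int uart_driver_matches(const char *compatible)
{
    for (const char *const *c = ns16550_compat; *c; c++)
        if (strcmp(*c, compatible) == 0)
            return 1;
    return 0;
}
```

Since an ns16550a is a superset of an ns16550, treating both strings as equivalent is safe for the driver's existing feature set.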

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:55:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X7U-00025E-U0; Tue, 07 Jan 2014 13:55:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X7T-00024o-9s
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:54:59 +0000
Received: from [85.158.143.35:10152] by server-3.bemta-4.messagelabs.com id
	77/A3-32360-2370CC25; Tue, 07 Jan 2014 13:54:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389102896!10013337!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14503 invoked from network); 7 Jan 2014 13:54:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:54:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436831"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:54:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:54:55 -0500
Message-ID: <1389102894.12612.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 13:54:54 +0000
In-Reply-To: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
References: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v2] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 16:36 +0000, Julien Grall wrote:
> ePAPR specifies that if the property "ranges" doesn't exist in a bus node:
> 
> "it is assumed that no mapping exists between children of node and the parent
> address space".
> 
> Modify dt_number_of_address to check whether the list of ranges is valid.
> Return 0 (i.e. there are zero ranges) if the list is not valid.
> 
> This patch has been tested on the Arndale where the bug can occur with the
> '/hdmi' node.
> 
> Reported-by: <tsahee@gmx.com>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the Arndale
> because it fails while trying to translate an invalid address.

RM hat: Arndale is one of our main platforms today, so not working out
of the box is a pretty serious issue. The risk of course is that we
break things on some other platform. I think we know that none of the
other platforms have this problem, since otherwise Xen wouldn't boot on
them. Given the ePAPR specification, trying to translate these addresses
is meaningless, so it seems unlikely that no longer attempting the
translation would cause an issue. So I think this is OK.
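The ePAPR rule the patch implements: a bus node with no "ranges" property defines no mapping between its children and the parent address space, so child addresses must not be translated at all. An illustrative sketch of that rule (simplified; `dt_node` and `dt_number_of_ranges` here are stand-ins, not the actual Xen `dt_number_of_address` implementation):

```c
#include <assert.h>

/* Minimal stand-in for a device tree bus node. */
struct dt_node {
    int has_ranges;      /* does the node carry a "ranges" property? */
    unsigned nranges;    /* entries in that property, if present */
};

/* Per ePAPR: a missing "ranges" property means no mapping exists
 * between the children and the parent address space, so report zero
 * translatable ranges instead of attempting a bogus translation. */
static unsigned dt_number_of_ranges(const struct dt_node *bus)
{
    if (!bus->has_ranges)
        return 0;
    return bus->nranges;
}
```

With this behaviour, a node like the Arndale's '/hdmi' simply yields no addresses to translate, rather than producing the invalid translation that prevented boot.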

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:55:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X7U-00025E-U0; Tue, 07 Jan 2014 13:55:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X7T-00024o-9s
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:54:59 +0000
Received: from [85.158.143.35:10152] by server-3.bemta-4.messagelabs.com id
	77/A3-32360-2370CC25; Tue, 07 Jan 2014 13:54:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389102896!10013337!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14503 invoked from network); 7 Jan 2014 13:54:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:54:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436831"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:54:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:54:55 -0500
Message-ID: <1389102894.12612.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 13:54:54 +0000
In-Reply-To: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
References: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v2] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 16:36 +0000, Julien Grall wrote:
> ePAPR specifies that if the property "ranges" doesn't exist in a bus node:
> 
> "it is assumed that no mapping exists between children of node and the parent
> address space".
> 
> Modify dt_number_of_address to check whether the list of ranges is valid.
> Return 0 (i.e. there are zero ranges) if the list is not valid.
> 
> This patch has been tested on the Arndale where the bug can occur with the
> '/hdmi' node.
> 
> Reported-by: <tsahee@gmx.com>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> This patch is a bug fix for Xen 4.4. Without it, Xen can't boot on the Arndale
> because it fails while trying to translate an invalid address.

RM hat: Arndale is one of our main platforms today, so not working out
of the box is a pretty serious issue. The risk, of course, is that we
break things on some other platform. I think we know that none of the
other platforms has this problem, since otherwise Xen wouldn't boot on
them. Given the ePAPR specification, trying to translate these addresses
is meaningless, so it seems unlikely that no longer attempting to
translate them would cause an issue. So I think this is OK.
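A minimal sketch of the check the patch describes, under the ePAPR rule
that a bus node without a "ranges" property has no mapping to the parent
address space. The struct and function here are simplified stand-ins,
not Xen's actual dt_number_of_address implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a parsed device-tree property. */
struct dt_property {
    const void *data;
    size_t len;          /* length of the property value in bytes */
};

/* Per ePAPR: if "ranges" is absent, assume no mapping exists between
 * the children of the node and the parent address space, i.e. report
 * zero ranges.  A length that is not a whole multiple of the entry
 * size is malformed, so also treat it as zero ranges. */
static unsigned int number_of_ranges(const struct dt_property *ranges,
                                     size_t entry_size)
{
    if (ranges == NULL)
        return 0;                       /* property absent: no mapping */
    if (entry_size == 0 || ranges->len % entry_size != 0)
        return 0;                       /* invalid list: no usable ranges */
    return ranges->len / entry_size;
}
```

With, say, 16-byte entries, a missing property or a 33-byte value yields
zero ranges, while a 32-byte value yields two.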

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:55:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X7t-0002Bl-Da; Tue, 07 Jan 2014 13:55:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X7r-0002B7-Ds
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 13:55:23 +0000
Received: from [193.109.254.147:34037] by server-11.bemta-14.messagelabs.com
	id 1C/6A-20576-A470CC25; Tue, 07 Jan 2014 13:55:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389102920!9325797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28612 invoked from network); 7 Jan 2014 13:55:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90436937"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:55:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:55:20 -0500
Message-ID: <1389102919.12612.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 7 Jan 2014 13:55:19 +0000
In-Reply-To: <1387373714.28680.21.camel@kazak.uk.xensource.com>
References: <1387305337-15355-1-git-send-email-ian.jackson@eu.citrix.com>
	<52B184DD.8090909@eu.citrix.com>
	<1387373714.28680.21.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>, xen-devel@lists.xensource.com,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2013-12-18 at 13:35 +0000, Ian Campbell wrote:
> On Wed, 2013-12-18 at 11:19 +0000, George Dunlap wrote:
> > On 12/17/2013 06:35 PM, Ian Jackson wrote:
> > > This series removes the usleeps and waiting loops in libxl_dom.c and
> > > replaces them with event-callback code.
> > >
> > > Firstly, some documentation of things I had to reverse-engineer:
> > >   01/23 xen: Document XEN_DOMCTL_subscribe
> > >   02/23 xen: Document that EVTCHNOP_bind_interdomain signals
> > >   03/23 docs: Document event-channel-based suspend protocol
> > >   04/23 libxc: Document xenctrl.h event channel calls
> > > Arguably these might be 4.4 material (hence the CC to George).
> > 
> > These document pretty important aspects of behavior (fixing what might 
> > be considered documentation bugs), and obviously can have no functional 
> > impact.  I guess the only risk is that they might be wrong and mislead 
> > someone, but I think that's pretty low. :-)
> > 
> > These four:
> > 
> > Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> They look good to me too:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

And I've now applied just those 4 docs changes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:57:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0X9i-0002XD-Tg; Tue, 07 Jan 2014 13:57:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0X9h-0002Ww-6p
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:57:17 +0000
Received: from [85.158.137.68:15611] by server-15.bemta-3.messagelabs.com id
	D6/37-11556-CB70CC25; Tue, 07 Jan 2014 13:57:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389102964!7695327!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17754 invoked from network); 7 Jan 2014 13:56:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:56:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90437071"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 13:56:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:56:03 -0500
Message-ID: <1389102962.12612.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Date: Tue, 7 Jan 2014 13:56:02 +0000
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:

> Question: am I missing anything, or is this feature (passing smbios)
> still a work in progress? 

Under Xen, SMBIOS tables are supplied by hvmloader, not by qemu.

Which tables and/or values do you want to override or supply?

I believe that libxc supports passing in extra smbios tables when
building the guest (via struct xc_hvm_build_args.smbios_module) but
nothing has been plumbed in to make use of this.
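For illustration only, a sketch of how such a module might be attached.
The struct definitions below are hypothetical simplifications mirroring
the shape of libxc's struct xc_hvm_build_args (defined in libxc's
headers), not the real types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the relevant part of libxc's
 * struct xc_hvm_build_args; see libxc's headers for the real thing. */
struct hvm_module {
    const void *data;    /* blob of extra SMBIOS tables */
    size_t length;       /* blob size in bytes */
};

struct hvm_build_args {
    struct hvm_module smbios_module;
};

/* Attach a caller-supplied blob of extra SMBIOS tables to the build
 * arguments; hvmloader would then consume it during guest build. */
static void attach_smbios(struct hvm_build_args *args,
                          const void *tables, size_t len)
{
    args->smbios_module.data = tables;
    args->smbios_module.length = len;
}
```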

I'm not aware of any ongoing work to plumb that stuff further up, e.g.
to libxl and xl or other toolstacks. (I think the libxc functionality is
only consumed by the XenClient toolstack.)

There is also some support in hvmloader for setting certain SMBIOS
parameters via xenstore keys. See the various HVM_XS_* in
tools/firmware/hvmloader/smbios.c. It includes things like the system
manufacturer, chassis number, etc.

Does either of those cover your use case? Are you interested in plumbing
them up?

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:58:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XAm-0002jK-Cf; Tue, 07 Jan 2014 13:58:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XAk-0002iw-KX
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 13:58:22 +0000
Received: from [193.109.254.147:6943] by server-15.bemta-14.messagelabs.com id
	B7/6E-22186-DF70CC25; Tue, 07 Jan 2014 13:58:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389103100!9320472!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16537 invoked from network); 7 Jan 2014 13:58:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:58:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88276819"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:58:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:58:19 -0500
Message-ID: <1389103098.12612.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Tue, 7 Jan 2014 13:58:18 +0000
In-Reply-To: <CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 16:48 +0000, karim.allah.ahmed@gmail.com wrote:

> If you still can't boot with more than 128M of memory, as a quick
> workaround you can apply this patch.

I wonder if it might be possible to work around this by more carefully
selecting the load addresses for Xen+Linux+DTB+initrd, such that they
are packed into the top end of RAM, leaving a larger contiguous chunk
available at the beginning. E.g. if sizeof(Xen)=X, sizeof(Linux)=L and
sizeof(DTB)=D (all rounded up to a 2M boundary), then load things at:
	MEMMAX-X:	Leave free for high relocation of hypervisor
	MEMMAX-X-L:	Load Linux here
	MEMMAX-X-L-D:	Load DTB here
	MEMMAX-X-L-D-X: Load initial Xen image here
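
The packing arithmetic above can be sketched as follows. This is
illustrative only, not Xen's boot code; the names (MEMMAX, X, L, D) are
taken from the scheme in this mail:

```c
#include <assert.h>
#include <stdint.h>

#define MB (1ULL << 20)

/* Round a size up to the next 2M boundary, as the scheme assumes. */
static uint64_t round_2m(uint64_t size)
{
    return (size + 2 * MB - 1) & ~(2 * MB - 1);
}

struct layout {
    uint64_t xen_reloc;   /* left free for high relocation of Xen */
    uint64_t linux_addr;  /* load Linux here */
    uint64_t dtb_addr;    /* load DTB here */
    uint64_t xen_load;    /* load the initial Xen image here */
};

/* Pack Xen (X), Linux (L) and the DTB (D) at the top of RAM so the
 * largest possible contiguous chunk stays free at the bottom. */
static struct layout pack_high(uint64_t memmax,
                               uint64_t x, uint64_t l, uint64_t d)
{
    struct layout lay;

    x = round_2m(x);
    l = round_2m(l);
    d = round_2m(d);

    lay.xen_reloc  = memmax - x;          /* MEMMAX-X       */
    lay.linux_addr = lay.xen_reloc - l;   /* MEMMAX-X-L     */
    lay.dtb_addr   = lay.linux_addr - d;  /* MEMMAX-X-L-D   */
    lay.xen_load   = lay.dtb_addr - x;    /* MEMMAX-X-L-D-X */
    return lay;
}
```

E.g. on a 1GB board with a 1MB Xen, a 5MB kernel and a 100KB DTB, the
sizes round to 2MB, 6MB and 2MB, and everything lands in the top 12MB.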

Ultimately this is because allocations need to be aligned to their size,
so on a 1GB system there are only two possible 512MB allocations; if
even one page is allocated in each half then the request cannot be
satisfied. I don't think the core allocator gives us the option to do
non-aligned allocations. Disabling the 1:1 mapping workaround allocates
the region a page at a time, so it doesn't suffer from this.

We are probably mostly stuck with this for 4.4. As Julien says, for 4.5
we should probably look into giving dom0 multiple banks where necessary.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 13:58:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XB7-0002zh-Qo; Tue, 07 Jan 2014 13:58:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XB6-0002y5-O2
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:58:44 +0000
Received: from [85.158.139.211:28696] by server-15.bemta-5.messagelabs.com id
	B3/D1-08490-4180CC25; Tue, 07 Jan 2014 13:58:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389103122!8331139!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1932 invoked from network); 7 Jan 2014 13:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:58:43 -0000
From xen-devel-bounces@lists.xen.org Tue Jan 07 13:58:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 13:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XB7-0002zh-Qo; Tue, 07 Jan 2014 13:58:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XB6-0002y5-O2
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 13:58:44 +0000
Received: from [85.158.139.211:28696] by server-15.bemta-5.messagelabs.com id
	B3/D1-08490-4180CC25; Tue, 07 Jan 2014 13:58:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389103122!8331139!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1932 invoked from network); 7 Jan 2014 13:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 13:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88276892"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 13:58:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	08:58:41 -0500
Message-ID: <1389103119.12612.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 13:58:39 +0000
In-Reply-To: <1389020752.31766.43.camel@kazak.uk.xensource.com>
References: <1387884527-6067-1-git-send-email-julien.grall@linaro.org>
	<1389020752.31766.43.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, patches@linaro.org,
	keir@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: driver/char: fix const declaration of
 DT compatible list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 15:05 +0000, Ian Campbell wrote:
> On Tue, 2013-12-24 at 11:28 +0000, Julien Grall wrote:
> > The data type for DT compatible list should be:
> >     const char * const[]  __initconst
> > 
> > Fix every serial driver which supports device tree.
> > 
> > Spotted-by: Jan Beulich <jbeulich@suse.com>
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> WRT the release I think this is a bug fix.

The risk here is a build error, which would be easily detected, and this
pattern of const is used in all the existing non-serial DT drivers, so I
think we can be pretty confident in it.

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:01:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XE0-0003fF-6z; Tue, 07 Jan 2014 14:01:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0XDy-0003ey-TZ
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:01:43 +0000
Received: from [193.109.254.147:2801] by server-16.bemta-14.messagelabs.com id
	36/94-20600-6C80CC25; Tue, 07 Jan 2014 14:01:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389103300!9326112!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20399 invoked from network); 7 Jan 2014 14:01:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:01:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90438771"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:01:40 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:01:39 -0500
Message-ID: <52CC08C2.5090004@citrix.com>
Date: Tue, 7 Jan 2014 14:01:38 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>
References: <1536712177.20140107125352@eikelenboom.it>
In-Reply-To: <1536712177.20140107125352@eikelenboom.it>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
 -	CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]	[<ffffffff81109a58>]
 generic_exec_single+0x88/0xc0	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 11:53, Sander Eikelenboom wrote:
> Hi Konrad,
> 
> A new year and a new Linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
> But dom0 seems to blow up for me (without this branch pulled it works OK).
> 
> Xen: latest xen-unstable

The FIFO-based event channel ABI is broken in current xen-unstable.

You need the two patches from:

http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html

You can also disable the guest's use of the FIFO ABI with the
xen.fifo_events=0 kernel command line option.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:06:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XIe-0003wk-Ux; Tue, 07 Jan 2014 14:06:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jim.epost@gmail.com>) id 1W0XG9-0003rc-8M
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:03:58 +0000
Received: from [193.109.254.147:48450] by server-10.bemta-14.messagelabs.com
	id 78/0B-20752-C490CC25; Tue, 07 Jan 2014 14:03:56 +0000
X-Env-Sender: jim.epost@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389103430!7014512!1
X-Originating-IP: [209.85.223.177]
X-SpamReason: No, hits=2.7 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,UNIQUE_WORDS,UPPERCASE_75_100,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17383 invoked from network); 7 Jan 2014 14:03:52 -0000
Received: from mail-ie0-f177.google.com (HELO mail-ie0-f177.google.com)
	(209.85.223.177)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:03:52 -0000
Received: by mail-ie0-f177.google.com with SMTP id tp5so380193ieb.36
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 06:03:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=N0lIvdIHswSrVrabYt91Q1QJ/PpxNuv3tVduDJwpgVs=;
	b=ZlzIa+ZA8ILWsuyYAXAURgEHCW6AwavWouI6RSD25INbd1inkEV10rw/JYau7EtTTH
	stMZXuGJgYRUV4rZ6V9cIZrkmT1ui7EPz4bUMhOHeh1BWIgUU3riBxcLcrnjNBweyr9s
	h99EccGZQBGGm2GCNXKLk/Q0HOYTWgufsO5siDr7zc4V10XQwmZwejnaGGl2fmc8puWz
	Ah1FGNNmpLP/sb/1n2nGPlaDQnP8cC0o3XeJLx3FCWolpH7N2Rl6R/VWOdUY2tB+Qva+
	slEAXZ6YrBK4WbW3qXVGtVkBqNKFUeEno1KEJF8mcSrA6cvGf8HZTItfPOahQvNvzW8U
	vXVw==
MIME-Version: 1.0
X-Received: by 10.50.128.72 with SMTP id nm8mr25707563igb.10.1389103430449;
	Tue, 07 Jan 2014 06:03:50 -0800 (PST)
Received: by 10.42.85.71 with HTTP; Tue, 7 Jan 2014 06:03:50 -0800 (PST)
Date: Tue, 7 Jan 2014 07:03:50 -0700
Message-ID: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
From: Jim Davis <jim.epost@gmail.com>
To: Stephen Rothwell <sfr@canb.auug.org.au>, linux-next@vger.kernel.org, 
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com, 
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com, tglx@linutronix.de,
	mingo@redhat.com, hpa@zytor.com, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary=089e013c5af08767ff04ef61d88f
X-Mailman-Approved-At: Tue, 07 Jan 2014 14:06:30 +0000
Subject: [Xen-devel] randconfig build error with next-20140107,
	in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Building with the attached random configuration file,

arch/x86/xen/grant-table.c: In function ‘xen_pvh_gnttab_setup’:
arch/x86/xen/grant-table.c:181:2: error: implicit declaration of
function ‘xen_pvh_domain’ [-Werror=implicit-function-declaration]
  if (!xen_pvh_domain())
  ^
cc1: some warnings being treated as errors
make[2]: *** [arch/x86/xen/grant-table.o] Error 1

--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset=US-ASCII; name="randconfig-1389085366.txt"
Content-Disposition: attachment; filename="randconfig-1389085366.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hq589mq70

Iw0KIyBBdXRvbWF0aWNhbGx5IGdlbmVyYXRlZCBmaWxlOyBETyBOT1QgRURJVC4NCiMgTGludXgv
eDg2IDMuMTMuMC1yYzcgS2VybmVsIENvbmZpZ3VyYXRpb24NCiMNCkNPTkZJR182NEJJVD15DQpD
T05GSUdfWDg2XzY0PXkNCkNPTkZJR19YODY9eQ0KQ09ORklHX0lOU1RSVUNUSU9OX0RFQ09ERVI9
eQ0KQ09ORklHX09VVFBVVF9GT1JNQVQ9ImVsZjY0LXg4Ni02NCINCkNPTkZJR19BUkNIX0RFRkNP
TkZJRz0iYXJjaC94ODYvY29uZmlncy94ODZfNjRfZGVmY29uZmlnIg0KQ09ORklHX0xPQ0tERVBf
U1VQUE9SVD15DQpDT05GSUdfU1RBQ0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19IQVZFX0xBVEVO
Q1lUT1BfU1VQUE9SVD15DQpDT05GSUdfTU1VPXkNCkNPTkZJR19ORUVEX0RNQV9NQVBfU1RBVEU9
eQ0KQ09ORklHX05FRURfU0dfRE1BX0xFTkdUSD15DQpDT05GSUdfR0VORVJJQ19IV0VJR0hUPXkN
CkNPTkZJR19SV1NFTV9YQ0hHQUREX0FMR09SSVRITT15DQpDT05GSUdfR0VORVJJQ19DQUxJQlJB
VEVfREVMQVk9eQ0KQ09ORklHX0FSQ0hfSEFTX0NQVV9SRUxBWD15DQpDT05GSUdfQVJDSF9IQVNf
Q0FDSEVfTElORV9TSVpFPXkNCkNPTkZJR19BUkNIX0hBU19DUFVfQVVUT1BST0JFPXkNCkNPTkZJ
R19IQVZFX1NFVFVQX1BFUl9DUFVfQVJFQT15DQpDT05GSUdfTkVFRF9QRVJfQ1BVX0VNQkVEX0ZJ
UlNUX0NIVU5LPXkNCkNPTkZJR19ORUVEX1BFUl9DUFVfUEFHRV9GSVJTVF9DSFVOSz15DQpDT05G
SUdfQVJDSF9ISUJFUk5BVElPTl9QT1NTSUJMRT15DQpDT05GSUdfQVJDSF9TVVNQRU5EX1BPU1NJ
QkxFPXkNCkNPTkZJR19BUkNIX1dBTlRfSFVHRV9QTURfU0hBUkU9eQ0KQ09ORklHX0FSQ0hfV0FO
VF9HRU5FUkFMX0hVR0VUTEI9eQ0KQ09ORklHX1pPTkVfRE1BMzI9eQ0KQ09ORklHX0FVRElUX0FS
Q0g9eQ0KQ09ORklHX0FSQ0hfU1VQUE9SVFNfT1BUSU1JWkVEX0lOTElOSU5HPXkNCkNPTkZJR19B
UkNIX1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15DQpDT05GSUdfQVJDSF9IV0VJR0hUX0NGTEFH
Uz0iLWZjYWxsLXNhdmVkLXJkaSAtZmNhbGwtc2F2ZWQtcnNpIC1mY2FsbC1zYXZlZC1yZHggLWZj
YWxsLXNhdmVkLXJjeCAtZmNhbGwtc2F2ZWQtcjggLWZjYWxsLXNhdmVkLXI5IC1mY2FsbC1zYXZl
ZC1yMTAgLWZjYWxsLXNhdmVkLXIxMSINCkNPTkZJR19BUkNIX1NVUFBPUlRTX1VQUk9CRVM9eQ0K
Q09ORklHX0RFRkNPTkZJR19MSVNUPSIvbGliL21vZHVsZXMvJFVOQU1FX1JFTEVBU0UvLmNvbmZp
ZyINCkNPTkZJR19DT05TVFJVQ1RPUlM9eQ0KQ09ORklHX0lSUV9XT1JLPXkNCkNPTkZJR19CVUlM
RFRJTUVfRVhUQUJMRV9TT1JUPXkNCg0KIw0KIyBHZW5lcmFsIHNldHVwDQojDQpDT05GSUdfQlJP
S0VOX09OX1NNUD15DQpDT05GSUdfSU5JVF9FTlZfQVJHX0xJTUlUPTMyDQpDT05GSUdfQ1JPU1Nf
Q09NUElMRT0iIg0KQ09ORklHX0NPTVBJTEVfVEVTVD15DQpDT05GSUdfTE9DQUxWRVJTSU9OPSIi
DQpDT05GSUdfTE9DQUxWRVJTSU9OX0FVVE89eQ0KQ09ORklHX0hBVkVfS0VSTkVMX0daSVA9eQ0K
Q09ORklHX0hBVkVfS0VSTkVMX0JaSVAyPXkNCkNPTkZJR19IQVZFX0tFUk5FTF9MWk1BPXkNCkNP
TkZJR19IQVZFX0tFUk5FTF9YWj15DQpDT05GSUdfSEFWRV9LRVJORUxfTFpPPXkNCkNPTkZJR19I
QVZFX0tFUk5FTF9MWjQ9eQ0KQ09ORklHX0tFUk5FTF9HWklQPXkNCiMgQ09ORklHX0tFUk5FTF9C
WklQMiBpcyBub3Qgc2V0DQojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0DQojIENPTkZJ
R19LRVJORUxfWFogaXMgbm90IHNldA0KIyBDT05GSUdfS0VSTkVMX0xaTyBpcyBub3Qgc2V0DQoj
IENPTkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX0hPU1ROQU1FPSIo
bm9uZSkiDQpDT05GSUdfU1lTVklQQz15DQojIENPTkZJR19QT1NJWF9NUVVFVUUgaXMgbm90IHNl
dA0KQ09ORklHX0ZIQU5ETEU9eQ0KQ09ORklHX0FVRElUPXkNCkNPTkZJR19BVURJVFNZU0NBTEw9
eQ0KQ09ORklHX0FVRElUX1dBVENIPXkNCkNPTkZJR19BVURJVF9UUkVFPXkNCg0KIw0KIyBJUlEg
c3Vic3lzdGVtDQojDQpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQ0KQ09ORklHX0dFTkVSSUNf
SVJRX1NIT1c9eQ0KQ09ORklHX0dFTkVSSUNfSVJRX0NISVA9eQ0KQ09ORklHX0lSUV9ET01BSU49
eQ0KQ09ORklHX0lSUV9ET01BSU5fREVCVUc9eQ0KQ09ORklHX0lSUV9GT1JDRURfVEhSRUFESU5H
PXkNCkNPTkZJR19TUEFSU0VfSVJRPXkNCkNPTkZJR19DTE9DS1NPVVJDRV9XQVRDSERPRz15DQpD
T05GSUdfQVJDSF9DTE9DS1NPVVJDRV9EQVRBPXkNCkNPTkZJR19HRU5FUklDX1RJTUVfVlNZU0NB
TEw9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFM9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tF
VkVOVFNfQlVJTEQ9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlJPQURDQVNUPXkNCkNP
TkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX01JTl9BREpVU1Q9eQ0KQ09ORklHX0dFTkVSSUNfQ01P
U19VUERBVEU9eQ0KDQojDQojIFRpbWVycyBzdWJzeXN0ZW0NCiMNCkNPTkZJR19USUNLX09ORVNI
T1Q9eQ0KQ09ORklHX05PX0haX0NPTU1PTj15DQojIENPTkZJR19IWl9QRVJJT0RJQyBpcyBub3Qg
c2V0DQpDT05GSUdfTk9fSFpfSURMRT15DQpDT05GSUdfTk9fSFo9eQ0KQ09ORklHX0hJR0hfUkVT
X1RJTUVSUz15DQoNCiMNCiMgQ1BVL1Rhc2sgdGltZSBhbmQgc3RhdHMgYWNjb3VudGluZw0KIw0K
Q09ORklHX1RJQ0tfQ1BVX0FDQ09VTlRJTkc9eQ0KIyBDT05GSUdfVklSVF9DUFVfQUNDT1VOVElO
R19HRU4gaXMgbm90IHNldA0KIyBDT05GSUdfSVJRX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0
DQojIENPTkZJR19CU0RfUFJPQ0VTU19BQ0NUIGlzIG5vdCBzZXQNCkNPTkZJR19UQVNLU1RBVFM9
eQ0KQ09ORklHX1RBU0tfREVMQVlfQUNDVD15DQojIENPTkZJR19UQVNLX1hBQ0NUIGlzIG5vdCBz
ZXQNCg0KIw0KIyBSQ1UgU3Vic3lzdGVtDQojDQpDT05GSUdfVElOWV9SQ1U9eQ0KIyBDT05GSUdf
UFJFRU1QVF9SQ1UgaXMgbm90IHNldA0KQ09ORklHX1JDVV9TVEFMTF9DT01NT049eQ0KIyBDT05G
SUdfVFJFRV9SQ1VfVFJBQ0UgaXMgbm90IHNldA0KQ09ORklHX0lLQ09ORklHPXkNCkNPTkZJR19M
T0dfQlVGX1NISUZUPTE3DQpDT05GSUdfSEFWRV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15DQpDT05G
SUdfQVJDSF9TVVBQT1JUU19OVU1BX0JBTEFOQ0lORz15DQpDT05GSUdfQVJDSF9TVVBQT1JUU19J
TlQxMjg9eQ0KQ09ORklHX0FSQ0hfV0FOVFNfUFJPVF9OVU1BX1BST1RfTk9ORT15DQojIENPTkZJ
R19DR1JPVVBTIGlzIG5vdCBzZXQNCkNPTkZJR19DSEVDS1BPSU5UX1JFU1RPUkU9eQ0KQ09ORklH
X05BTUVTUEFDRVM9eQ0KIyBDT05GSUdfVVRTX05TIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQQ19O
UyBpcyBub3Qgc2V0DQpDT05GSUdfVVNFUl9OUz15DQpDT05GSUdfUElEX05TPXkNCiMgQ09ORklH
X05FVF9OUyBpcyBub3Qgc2V0DQpDT05GSUdfVUlER0lEX1NUUklDVF9UWVBFX0NIRUNLUz15DQoj
IENPTkZJR19TQ0hFRF9BVVRPR1JPVVAgaXMgbm90IHNldA0KIyBDT05GSUdfU1lTRlNfREVQUkVD
QVRFRCBpcyBub3Qgc2V0DQpDT05GSUdfUkVMQVk9eQ0KQ09ORklHX0JMS19ERVZfSU5JVFJEPXkN
CkNPTkZJR19JTklUUkFNRlNfU09VUkNFPSIiDQojIENPTkZJR19SRF9HWklQIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JEX0JaSVAyIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEX0xaTUEgaXMgbm90IHNl
dA0KQ09ORklHX1JEX1haPXkNCiMgQ09ORklHX1JEX0xaTyBpcyBub3Qgc2V0DQojIENPTkZJR19S
RF9MWjQgaXMgbm90IHNldA0KQ09ORklHX0NDX09QVElNSVpFX0ZPUl9TSVpFPXkNCkNPTkZJR19B
Tk9OX0lOT0RFUz15DQpDT05GSUdfU1lTQ1RMX0VYQ0VQVElPTl9UUkFDRT15DQpDT05GSUdfSEFW
RV9QQ1NQS1JfUExBVEZPUk09eQ0KQ09ORklHX0VYUEVSVD15DQpDT05GSUdfS0FMTFNZTVM9eQ0K
Q09ORklHX0tBTExTWU1TX0FMTD15DQpDT05GSUdfUFJJTlRLPXkNCiMgQ09ORklHX0JVRyBpcyBu
b3Qgc2V0DQpDT05GSUdfUENTUEtSX1BMQVRGT1JNPXkNCiMgQ09ORklHX0JBU0VfRlVMTCBpcyBu
b3Qgc2V0DQpDT05GSUdfRlVURVg9eQ0KQ09ORklHX0VQT0xMPXkNCiMgQ09ORklHX1NJR05BTEZE
IGlzIG5vdCBzZXQNCkNPTkZJR19USU1FUkZEPXkNCkNPTkZJR19FVkVOVEZEPXkNCkNPTkZJR19T
SE1FTT15DQpDT05GSUdfQUlPPXkNCiMgQ09ORklHX1BDSV9RVUlSS1MgaXMgbm90IHNldA0KQ09O
RklHX0VNQkVEREVEPXkNCkNPTkZJR19IQVZFX1BFUkZfRVZFTlRTPXkNCg0KIw0KIyBLZXJuZWwg
UGVyZm9ybWFuY2UgRXZlbnRzIEFuZCBDb3VudGVycw0KIw0KQ09ORklHX1BFUkZfRVZFTlRTPXkN
CiMgQ09ORklHX0RFQlVHX1BFUkZfVVNFX1ZNQUxMT0MgaXMgbm90IHNldA0KIyBDT05GSUdfVk1f
RVZFTlRfQ09VTlRFUlMgaXMgbm90IHNldA0KIyBDT05GSUdfU0xVQl9ERUJVRyBpcyBub3Qgc2V0
DQpDT05GSUdfQ09NUEFUX0JSSz15DQojIENPTkZJR19TTEFCIGlzIG5vdCBzZXQNCkNPTkZJR19T
TFVCPXkNCiMgQ09ORklHX1NMT0IgaXMgbm90IHNldA0KIyBDT05GSUdfUFJPRklMSU5HIGlzIG5v
dCBzZXQNCkNPTkZJR19IQVZFX09QUk9GSUxFPXkNCkNPTkZJR19PUFJPRklMRV9OTUlfVElNRVI9
eQ0KIyBDT05GSUdfSlVNUF9MQUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19IQVZFXzY0QklUX0FM
SUdORURfQUNDRVNTIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZFX0VGRklDSUVOVF9VTkFMSUdORURf
QUNDRVNTPXkNCkNPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkNCkNPTkZJR19VU0VSX1JF
VFVSTl9OT1RJRklFUj15DQpDT05GSUdfSEFWRV9JT1JFTUFQX1BST1Q9eQ0KQ09ORklHX0hBVkVf
S1BST0JFUz15DQpDT05GSUdfSEFWRV9LUkVUUFJPQkVTPXkNCkNPTkZJR19IQVZFX09QVFBST0JF
Uz15DQpDT05GSUdfSEFWRV9LUFJPQkVTX09OX0ZUUkFDRT15DQpDT05GSUdfSEFWRV9BUkNIX1RS
QUNFSE9PSz15DQpDT05GSUdfSEFWRV9ETUFfQVRUUlM9eQ0KQ09ORklHX0dFTkVSSUNfU01QX0lE
TEVfVEhSRUFEPXkNCkNPTkZJR19IQVZFX1JFR1NfQU5EX1NUQUNLX0FDQ0VTU19BUEk9eQ0KQ09O
RklHX0hBVkVfRE1BX0FQSV9ERUJVRz15DQpDT05GSUdfSEFWRV9IV19CUkVBS1BPSU5UPXkNCkNP
TkZJR19IQVZFX01JWEVEX0JSRUFLUE9JTlRTX1JFR1M9eQ0KQ09ORklHX0hBVkVfVVNFUl9SRVRV
Uk5fTk9USUZJRVI9eQ0KQ09ORklHX0hBVkVfUEVSRl9FVkVOVFNfTk1JPXkNCkNPTkZJR19IQVZF
X1BFUkZfUkVHUz15DQpDT05GSUdfSEFWRV9QRVJGX1VTRVJfU1RBQ0tfRFVNUD15DQpDT05GSUdf
SEFWRV9BUkNIX0pVTVBfTEFCRUw9eQ0KQ09ORklHX0FSQ0hfSEFWRV9OTUlfU0FGRV9DTVBYQ0hH
PXkNCkNPTkZJR19IQVZFX0FMSUdORURfU1RSVUNUX1BBR0U9eQ0KQ09ORklHX0hBVkVfQ01QWENI
R19MT0NBTD15DQpDT05GSUdfSEFWRV9DTVBYQ0hHX0RPVUJMRT15DQpDT05GSUdfSEFWRV9BUkNI
X1NFQ0NPTVBfRklMVEVSPXkNCkNPTkZJR19IQVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkNCiMgQ09O
RklHX0NDX1NUQUNLUFJPVEVDVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19DQ19TVEFDS1BST1RFQ1RP
Ul9OT05FPXkNCiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldA0K
IyBDT05GSUdfQ0NfU1RBQ0tQUk9URUNUT1JfU1RST05HIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZF
X0NPTlRFWFRfVFJBQ0tJTkc9eQ0KQ09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49
eQ0KQ09ORklHX0hBVkVfSVJRX1RJTUVfQUNDT1VOVElORz15DQpDT05GSUdfSEFWRV9BUkNIX1RS
QU5TUEFSRU5UX0hVR0VQQUdFPXkNCkNPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15DQpDT05G
SUdfTU9EVUxFU19VU0VfRUxGX1JFTEE9eQ0KQ09ORklHX0hBVkVfSVJRX0VYSVRfT05fSVJRX1NU
QUNLPXkNCg0KIw0KIyBHQ09WLWJhc2VkIGtlcm5lbCBwcm9maWxpbmcNCiMNCkNPTkZJR19HQ09W
X0tFUk5FTD15DQojIENPTkZJR19HQ09WX1BST0ZJTEVfQUxMIGlzIG5vdCBzZXQNCkNPTkZJR19H
Q09WX0ZPUk1BVF9BVVRPREVURUNUPXkNCiMgQ09ORklHX0dDT1ZfRk9STUFUXzNfNCBpcyBub3Qg
c2V0DQojIENPTkZJR19HQ09WX0ZPUk1BVF80XzcgaXMgbm90IHNldA0KIyBDT05GSUdfSEFWRV9H
RU5FUklDX0RNQV9DT0hFUkVOVCBpcyBub3Qgc2V0DQpDT05GSUdfUlRfTVVURVhFUz15DQpDT05G
SUdfQkFTRV9TTUFMTD0xDQojIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01PRFVMRVMgaXMgbm90IHNldA0KIyBDT05GSUdfQkxPQ0sgaXMgbm90IHNl
dA0KQ09ORklHX1BSRUVNUFRfTk9USUZJRVJTPXkNCkNPTkZJR19VTklOTElORV9TUElOX1VOTE9D
Sz15DQpDT05GSUdfRlJFRVpFUj15DQoNCiMNCiMgUHJvY2Vzc29yIHR5cGUgYW5kIGZlYXR1cmVz
DQojDQojIENPTkZJR19aT05FX0RNQSBpcyBub3Qgc2V0DQojIENPTkZJR19TTVAgaXMgbm90IHNl
dA0KQ09ORklHX1g4Nl9NUFBBUlNFPXkNCiMgQ09ORklHX1g4Nl9FWFRFTkRFRF9QTEFURk9STSBp
cyBub3Qgc2V0DQojIENPTkZJR19YODZfSU5URUxfTFBTUyBpcyBub3Qgc2V0DQpDT05GSUdfU0NI
RURfT01JVF9GUkFNRV9QT0lOVEVSPXkNCkNPTkZJR19IWVBFUlZJU09SX0dVRVNUPXkNCkNPTkZJ
R19QQVJBVklSVD15DQpDT05GSUdfUEFSQVZJUlRfREVCVUc9eQ0KQ09ORklHX1hFTj15DQpDT05G
SUdfWEVOX0RPTTA9eQ0KQ09ORklHX1hFTl9QUklWSUxFR0VEX0dVRVNUPXkNCkNPTkZJR19YRU5f
UFZIVk09eQ0KQ09ORklHX1hFTl9NQVhfRE9NQUlOX01FTU9SWT01MDANCkNPTkZJR19YRU5fU0FW
RV9SRVNUT1JFPXkNCiMgQ09ORklHX1hFTl9ERUJVR19GUyBpcyBub3Qgc2V0DQpDT05GSUdfWEVO
X1BWSD15DQojIENPTkZJR19LVk1fR1VFU1QgaXMgbm90IHNldA0KIyBDT05GSUdfUEFSQVZJUlRf
VElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQNCkNPTkZJR19QQVJBVklSVF9DTE9DSz15DQpDT05G
SUdfTk9fQk9PVE1FTT15DQpDT05GSUdfTUVNVEVTVD15DQojIENPTkZJR19NSzggaXMgbm90IHNl
dA0KIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0DQojIENPTkZJR19NQ09SRTIgaXMgbm90IHNldA0K
IyBDT05GSUdfTUFUT00gaXMgbm90IHNldA0KQ09ORklHX0dFTkVSSUNfQ1BVPXkNCkNPTkZJR19Y
ODZfSU5URVJOT0RFX0NBQ0hFX1NISUZUPTYNCkNPTkZJR19YODZfTDFfQ0FDSEVfU0hJRlQ9Ng0K
Q09ORklHX1g4Nl9UU0M9eQ0KQ09ORklHX1g4Nl9DTVBYQ0hHNjQ9eQ0KQ09ORklHX1g4Nl9DTU9W
PXkNCkNPTkZJR19YODZfTUlOSU1VTV9DUFVfRkFNSUxZPTY0DQpDT05GSUdfWDg2X0RFQlVHQ1RM
TVNSPXkNCiMgQ09ORklHX1BST0NFU1NPUl9TRUxFQ1QgaXMgbm90IHNldA0KQ09ORklHX0NQVV9T
VVBfSU5URUw9eQ0KQ09ORklHX0NQVV9TVVBfQU1EPXkNCkNPTkZJR19DUFVfU1VQX0NFTlRBVVI9
eQ0KQ09ORklHX0hQRVRfVElNRVI9eQ0KQ09ORklHX0hQRVRfRU1VTEFURV9SVEM9eQ0KQ09ORklH
X0RNST15DQojIENPTkZJR19HQVJUX0lPTU1VIGlzIG5vdCBzZXQNCkNPTkZJR19DQUxHQVJZX0lP
TU1VPXkNCkNPTkZJR19DQUxHQVJZX0lPTU1VX0VOQUJMRURfQllfREVGQVVMVD15DQpDT05GSUdf
U1dJT1RMQj15DQpDT05GSUdfSU9NTVVfSEVMUEVSPXkNCkNPTkZJR19OUl9DUFVTPTENCkNPTkZJ
R19QUkVFTVBUX05PTkU9eQ0KIyBDT05GSUdfUFJFRU1QVF9WT0xVTlRBUlkgaXMgbm90IHNldA0K
IyBDT05GSUdfUFJFRU1QVCBpcyBub3Qgc2V0DQpDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQ0KQ09O
RklHX1g4Nl9JT19BUElDPXkNCiMgQ09ORklHX1g4Nl9SRVJPVVRFX0ZPUl9CUk9LRU5fQk9PVF9J
UlFTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1g4Nl9NQ0UgaXMgbm90IHNldA0KQ09ORklHX0k4Sz15
DQpDT05GSUdfTUlDUk9DT0RFPXkNCiMgQ09ORklHX01JQ1JPQ09ERV9JTlRFTCBpcyBub3Qgc2V0
DQpDT05GSUdfTUlDUk9DT0RFX0FNRD15DQpDT05GSUdfTUlDUk9DT0RFX09MRF9JTlRFUkZBQ0U9
eQ0KIyBDT05GSUdfTUlDUk9DT0RFX0lOVEVMX0VBUkxZIGlzIG5vdCBzZXQNCiMgQ09ORklHX01J
Q1JPQ09ERV9BTURfRUFSTFkgaXMgbm90IHNldA0KIyBDT05GSUdfTUlDUk9DT0RFX0VBUkxZIGlz
IG5vdCBzZXQNCkNPTkZJR19YODZfTVNSPXkNCkNPTkZJR19YODZfQ1BVSUQ9eQ0KQ09ORklHX0FS
Q0hfUEhZU19BRERSX1RfNjRCSVQ9eQ0KQ09ORklHX0FSQ0hfRE1BX0FERFJfVF82NEJJVD15DQpD
T05GSUdfRElSRUNUX0dCUEFHRVM9eQ0KQ09ORklHX0FSQ0hfU1BBUlNFTUVNX0VOQUJMRT15DQpD
T05GSUdfQVJDSF9TUEFSU0VNRU1fREVGQVVMVD15DQpDT05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZ
X01PREVMPXkNCiMgQ09ORklHX0FSQ0hfTUVNT1JZX1BST0JFIGlzIG5vdCBzZXQNCkNPTkZJR19J
TExFR0FMX1BPSU5URVJfVkFMVUU9MHhkZWFkMDAwMDAwMDAwMDAwDQpDT05GSUdfU0VMRUNUX01F
TU9SWV9NT0RFTD15DQpDT05GSUdfU1BBUlNFTUVNX01BTlVBTD15DQpDT05GSUdfU1BBUlNFTUVN
PXkNCkNPTkZJR19IQVZFX01FTU9SWV9QUkVTRU5UPXkNCkNPTkZJR19TUEFSU0VNRU1fRVhUUkVN
RT15DQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkNCkNPTkZJR19TUEFSU0VNRU1f
QUxMT0NfTUVNX01BUF9UT0dFVEhFUj15DQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVA9eQ0KQ09O
RklHX0hBVkVfTUVNQkxPQ0s9eQ0KQ09ORklHX0hBVkVfTUVNQkxPQ0tfTk9ERV9NQVA9eQ0KQ09O
RklHX0FSQ0hfRElTQ0FSRF9NRU1CTE9DSz15DQojIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19O
T0RFIGlzIG5vdCBzZXQNCkNPTkZJR19NRU1PUllfSE9UUExVRz15DQpDT05GSUdfTUVNT1JZX0hP
VFBMVUdfU1BBUlNFPXkNCiMgQ09ORklHX01FTU9SWV9IT1RSRU1PVkUgaXMgbm90IHNldA0KQ09O
RklHX1BBR0VGTEFHU19FWFRFTkRFRD15DQpDT05GSUdfU1BMSVRfUFRMT0NLX0NQVVM9NA0KQ09O
RklHX0FSQ0hfRU5BQkxFX1NQTElUX1BNRF9QVExPQ0s9eQ0KQ09ORklHX0JBTExPT05fQ09NUEFD
VElPTj15DQpDT05GSUdfQ09NUEFDVElPTj15DQpDT05GSUdfTUlHUkFUSU9OPXkNCkNPTkZJR19Q
SFlTX0FERFJfVF82NEJJVD15DQpDT05GSUdfWk9ORV9ETUFfRkxBRz0wDQpDT05GSUdfVklSVF9U
T19CVVM9eQ0KQ09ORklHX01NVV9OT1RJRklFUj15DQojIENPTkZJR19LU00gaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfTU1BUF9NSU5fQUREUj00MDk2DQojIENPTkZJR19UUkFOU1BBUkVOVF9I
VUdFUEFHRSBpcyBub3Qgc2V0DQojIENPTkZJR19DUk9TU19NRU1PUllfQVRUQUNIIGlzIG5vdCBz
ZXQNCkNPTkZJR19ORUVEX1BFUl9DUFVfS009eQ0KQ09ORklHX0NMRUFOQ0FDSEU9eQ0KIyBDT05G
SUdfQ01BIGlzIG5vdCBzZXQNCiMgQ09ORklHX1pCVUQgaXMgbm90IHNldA0KQ09ORklHX1pTTUFM
TE9DPXkNCkNPTkZJR19QR1RBQkxFX01BUFBJTkc9eQ0KQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NP
UlJVUFRJT049eQ0KQ09ORklHX1g4Nl9CT09UUEFSQU1fTUVNT1JZX0NPUlJVUFRJT05fQ0hFQ0s9
eQ0KQ09ORklHX1g4Nl9SRVNFUlZFX0xPVz02NA0KIyBDT05GSUdfTVRSUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX1JBTkRPTSBpcyBub3Qgc2V0DQojIENPTkZJR19YODZfU01BUCBpcyBub3Qg
c2V0DQpDT05GSUdfRUZJPXkNCiMgQ09ORklHX0VGSV9TVFVCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ0NPTVAgaXMgbm90IHNldA0KIyBDT05GSUdfSFpfMTAwIGlzIG5vdCBzZXQNCkNPTkZJR19I
Wl8yNTA9eQ0KIyBDT05GSUdfSFpfMzAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0haXzEwMDAgaXMg
bm90IHNldA0KQ09ORklHX0haPTI1MA0KQ09ORklHX1NDSEVEX0hSVElDSz15DQpDT05GSUdfS0VY
RUM9eQ0KIyBDT05GSUdfQ1JBU0hfRFVNUCBpcyBub3Qgc2V0DQpDT05GSUdfUEhZU0lDQUxfU1RB
UlQ9MHgxMDAwMDAwDQojIENPTkZJR19SRUxPQ0FUQUJMRSBpcyBub3Qgc2V0DQpDT05GSUdfUEhZ
U0lDQUxfQUxJR049MHgyMDAwMDANCiMgQ09ORklHX0NNRExJTkVfQk9PTCBpcyBub3Qgc2V0DQpD
T05GSUdfQVJDSF9FTkFCTEVfTUVNT1JZX0hPVFBMVUc9eQ0KQ09ORklHX0FSQ0hfRU5BQkxFX01F
TU9SWV9IT1RSRU1PVkU9eQ0KDQojDQojIFBvd2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9u
cw0KIw0KIyBDT05GSUdfU1VTUEVORCBpcyBub3Qgc2V0DQpDT05GSUdfSElCRVJOQVRFX0NBTExC
QUNLUz15DQpDT05GSUdfUE1fU0xFRVA9eQ0KQ09ORklHX1BNX0FVVE9TTEVFUD15DQojIENPTkZJ
R19QTV9XQUtFTE9DS1MgaXMgbm90IHNldA0KIyBDT05GSUdfUE1fUlVOVElNRSBpcyBub3Qgc2V0
DQpDT05GSUdfUE09eQ0KQ09ORklHX1BNX0RFQlVHPXkNCiMgQ09ORklHX1BNX0FEVkFOQ0VEX0RF
QlVHIGlzIG5vdCBzZXQNCkNPTkZJR19QTV9TTEVFUF9ERUJVRz15DQpDT05GSUdfUE1fVFJBQ0U9
eQ0KQ09ORklHX1BNX1RSQUNFX1JUQz15DQpDT05GSUdfV1FfUE9XRVJfRUZGSUNJRU5UX0RFRkFV
TFQ9eQ0KQ09ORklHX0FDUEk9eQ0KIyBDT05GSUdfQUNQSV9FQ19ERUJVR0ZTIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0FDUElfQUMgaXMgbm90IHNldA0KQ09ORklHX0FDUElfQkFUVEVSWT15DQpDT05G
SUdfQUNQSV9CVVRUT049eQ0KQ09ORklHX0FDUElfVklERU89eQ0KQ09ORklHX0FDUElfRkFOPXkN
CkNPTkZJR19BQ1BJX0RPQ0s9eQ0KQ09ORklHX0FDUElfUFJPQ0VTU09SPXkNCiMgQ09ORklHX0FD
UElfUFJPQ0VTU09SX0FHR1JFR0FUT1IgaXMgbm90IHNldA0KQ09ORklHX0FDUElfVEhFUk1BTD15
DQpDT05GSUdfQUNQSV9DVVNUT01fRFNEVF9GSUxFPSIiDQojIENPTkZJR19BQ1BJX0NVU1RPTV9E
U0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FDUElfSU5JVFJEX1RBQkxFX09WRVJSSURFIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX0FDUElfUENJX1NM
T1Q9eQ0KIyBDT05GSUdfWDg2X1BNX1RJTUVSIGlzIG5vdCBzZXQNCkNPTkZJR19BQ1BJX0NPTlRB
SU5FUj15DQpDT05GSUdfQUNQSV9IT1RQTFVHX01FTU9SWT15DQpDT05GSUdfQUNQSV9TQlM9eQ0K
IyBDT05GSUdfQUNQSV9IRUQgaXMgbm90IHNldA0KQ09ORklHX0FDUElfQ1VTVE9NX01FVEhPRD15
DQojIENPTkZJR19BQ1BJX0JHUlQgaXMgbm90IHNldA0KIyBDT05GSUdfQUNQSV9BUEVJIGlzIG5v
dCBzZXQNCkNPTkZJR19TRkk9eQ0KDQojDQojIENQVSBGcmVxdWVuY3kgc2NhbGluZw0KIw0KQ09O
RklHX0NQVV9GUkVRPXkNCkNPTkZJR19DUFVfRlJFUV9HT1ZfQ09NTU9OPXkNCiMgQ09ORklHX0NQ
VV9GUkVRX1NUQVQgaXMgbm90IHNldA0KQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1BFUkZP
Uk1BTkNFPXkNCiMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1BPV0VSU0FWRSBpcyBub3Qg
c2V0DQojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9VU0VSU1BBQ0UgaXMgbm90IHNldA0K
IyBDT05GSUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfT05ERU1BTkQgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfQ09OU0VSVkFUSVZFIGlzIG5vdCBzZXQNCkNPTkZJR19D
UFVfRlJFUV9HT1ZfUEVSRk9STUFOQ0U9eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9QT1dFUlNBVkU9
eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9VU0VSU1BBQ0U9eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9P
TkRFTUFORD15DQpDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTlNFUlZBVElWRT15DQoNCiMNCiMgeDg2
IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJzDQojDQojIENPTkZJR19YODZfSU5URUxfUFNU
QVRFIGlzIG5vdCBzZXQNCkNPTkZJR19YODZfUENDX0NQVUZSRVE9eQ0KIyBDT05GSUdfWDg2X0FD
UElfQ1BVRlJFUSBpcyBub3Qgc2V0DQojIENPTkZJR19YODZfU1BFRURTVEVQX0NFTlRSSU5PIGlz
IG5vdCBzZXQNCkNPTkZJR19YODZfUDRfQ0xPQ0tNT0Q9eQ0KDQojDQojIHNoYXJlZCBvcHRpb25z
DQojDQpDT05GSUdfWDg2X1NQRUVEU1RFUF9MSUI9eQ0KDQojDQojIENQVSBJZGxlDQojDQpDT05G
SUdfQ1BVX0lETEU9eQ0KIyBDT05GSUdfQ1BVX0lETEVfTVVMVElQTEVfRFJJVkVSUyBpcyBub3Qg
c2V0DQpDT05GSUdfQ1BVX0lETEVfR09WX0xBRERFUj15DQpDT05GSUdfQ1BVX0lETEVfR09WX01F
TlU9eQ0KIyBDT05GSUdfQVJDSF9ORUVEU19DUFVfSURMRV9DT1VQTEVEIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0lOVEVMX0lETEUgaXMgbm90IHNldA0KDQojDQojIE1lbW9yeSBwb3dlciBzYXZpbmdz
DQojDQpDT05GSUdfSTczMDBfSURMRV9JT0FUX0NIQU5ORUw9eQ0KQ09ORklHX0k3MzAwX0lETEU9
eQ0KDQojDQojIEJ1cyBvcHRpb25zIChQQ0kgZXRjLikNCiMNCkNPTkZJR19QQ0k9eQ0KQ09ORklH
X1BDSV9ESVJFQ1Q9eQ0KQ09ORklHX1BDSV9NTUNPTkZJRz15DQpDT05GSUdfUENJX1hFTj15DQpD
T05GSUdfUENJX0RPTUFJTlM9eQ0KQ09ORklHX1BDSV9DTkIyMExFX1FVSVJLPXkNCiMgQ09ORklH
X1BDSUVQT1JUQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BDSV9NU0kgaXMgbm90IHNldA0KQ09O
RklHX1BDSV9ERUJVRz15DQojIENPTkZJR19QQ0lfUkVBTExPQ19FTkFCTEVfQVVUTyBpcyBub3Qg
c2V0DQpDT05GSUdfUENJX1NUVUI9eQ0KQ09ORklHX1hFTl9QQ0lERVZfRlJPTlRFTkQ9eQ0KQ09O
RklHX0hUX0lSUT15DQpDT05GSUdfUENJX0FUUz15DQojIENPTkZJR19QQ0lfSU9WIGlzIG5vdCBz
ZXQNCkNPTkZJR19QQ0lfUFJJPXkNCiMgQ09ORklHX1BDSV9QQVNJRCBpcyBub3Qgc2V0DQpDT05G
SUdfUENJX0lPQVBJQz15DQpDT05GSUdfUENJX0xBQkVMPXkNCg0KIw0KIyBQQ0kgaG9zdCBjb250
cm9sbGVyIGRyaXZlcnMNCiMNCiMgQ09ORklHX0lTQV9ETUFfQVBJIGlzIG5vdCBzZXQNCkNPTkZJ
R19BTURfTkI9eQ0KIyBDT05GSUdfUENDQVJEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hPVFBMVUdf
UENJIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JBUElESU8gaXMgbm90IHNldA0KQ09ORklHX1g4Nl9T
WVNGQj15DQoNCiMNCiMgRXhlY3V0YWJsZSBmaWxlIGZvcm1hdHMgLyBFbXVsYXRpb25zDQojDQoj
IENPTkZJR19CSU5GTVRfRUxGIGlzIG5vdCBzZXQNCkNPTkZJR19BUkNIX0JJTkZNVF9FTEZfUkFO
RE9NSVpFX1BJRT15DQpDT05GSUdfQklORk1UX1NDUklQVD15DQojIENPTkZJR19IQVZFX0FPVVQg
aXMgbm90IHNldA0KIyBDT05GSUdfQklORk1UX01JU0MgaXMgbm90IHNldA0KIyBDT05GSUdfQ09S
RURVTVAgaXMgbm90IHNldA0KIyBDT05GSUdfSUEzMl9FTVVMQVRJT04gaXMgbm90IHNldA0KQ09O
RklHX1g4Nl9ERVZfRE1BX09QUz15DQpDT05GSUdfTkVUPXkNCg0KIw0KIyBOZXR3b3JraW5nIG9w
dGlvbnMNCiMNCkNPTkZJR19QQUNLRVQ9eQ0KIyBDT05GSUdfUEFDS0VUX0RJQUcgaXMgbm90IHNl
dA0KIyBDT05GSUdfVU5JWCBpcyBub3Qgc2V0DQpDT05GSUdfWEZSTT15DQpDT05GSUdfWEZSTV9B
TEdPPXkNCkNPTkZJR19YRlJNX1VTRVI9eQ0KIyBDT05GSUdfWEZSTV9TVUJfUE9MSUNZIGlzIG5v
dCBzZXQNCkNPTkZJR19YRlJNX01JR1JBVEU9eQ0KQ09ORklHX1hGUk1fSVBDT01QPXkNCkNPTkZJ
R19ORVRfS0VZPXkNCiMgQ09ORklHX05FVF9LRVlfTUlHUkFURSBpcyBub3Qgc2V0DQpDT05GSUdf
SU5FVD15DQojIENPTkZJR19JUF9NVUxUSUNBU1QgaXMgbm90IHNldA0KQ09ORklHX0lQX0FEVkFO
Q0VEX1JPVVRFUj15DQpDT05GSUdfSVBfRklCX1RSSUVfU1RBVFM9eQ0KQ09ORklHX0lQX01VTFRJ
UExFX1RBQkxFUz15DQojIENPTkZJR19JUF9ST1VURV9NVUxUSVBBVEggaXMgbm90IHNldA0KIyBD
T05GSUdfSVBfUk9VVEVfVkVSQk9TRSBpcyBub3Qgc2V0DQojIENPTkZJR19JUF9QTlAgaXMgbm90
IHNldA0KQ09ORklHX05FVF9JUElQPXkNCkNPTkZJR19ORVRfSVBHUkVfREVNVVg9eQ0KQ09ORklH
X05FVF9JUF9UVU5ORUw9eQ0KIyBDT05GSUdfTkVUX0lQR1JFIGlzIG5vdCBzZXQNCkNPTkZJR19T
WU5fQ09PS0lFUz15DQojIENPTkZJR19ORVRfSVBWVEkgaXMgbm90IHNldA0KIyBDT05GSUdfSU5F
VF9BSCBpcyBub3Qgc2V0DQpDT05GSUdfSU5FVF9FU1A9eQ0KQ09ORklHX0lORVRfSVBDT01QPXkN
CkNPTkZJR19JTkVUX1hGUk1fVFVOTkVMPXkNCkNPTkZJR19JTkVUX1RVTk5FTD15DQpDT05GSUdf
SU5FVF9YRlJNX01PREVfVFJBTlNQT1JUPXkNCkNPTkZJR19JTkVUX1hGUk1fTU9ERV9UVU5ORUw9
eQ0KQ09ORklHX0lORVRfWEZSTV9NT0RFX0JFRVQ9eQ0KQ09ORklHX0lORVRfTFJPPXkNCkNPTkZJ
R19JTkVUX0RJQUc9eQ0KQ09ORklHX0lORVRfVENQX0RJQUc9eQ0KQ09ORklHX0lORVRfVURQX0RJ
QUc9eQ0KQ09ORklHX1RDUF9DT05HX0FEVkFOQ0VEPXkNCiMgQ09ORklHX1RDUF9DT05HX0JJQyBp
cyBub3Qgc2V0DQojIENPTkZJR19UQ1BfQ09OR19DVUJJQyBpcyBub3Qgc2V0DQpDT05GSUdfVENQ
X0NPTkdfV0VTVFdPT0Q9eQ0KQ09ORklHX1RDUF9DT05HX0hUQ1A9eQ0KIyBDT05GSUdfVENQX0NP
TkdfSFNUQ1AgaXMgbm90IHNldA0KIyBDT05GSUdfVENQX0NPTkdfSFlCTEEgaXMgbm90IHNldA0K
Q09ORklHX1RDUF9DT05HX1ZFR0FTPXkNCiMgQ09ORklHX1RDUF9DT05HX1NDQUxBQkxFIGlzIG5v
dCBzZXQNCkNPTkZJR19UQ1BfQ09OR19MUD15DQpDT05GSUdfVENQX0NPTkdfVkVOTz15DQojIENP
TkZJR19UQ1BfQ09OR19ZRUFIIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RDUF9DT05HX0lMTElOT0lT
IGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX0hUQ1A9eQ0KIyBDT05GSUdfREVGQVVMVF9WRUdB
UyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUZBVUxUX1ZFTk8gaXMgbm90IHNldA0KIyBDT05GSUdf
REVGQVVMVF9XRVNUV09PRCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUZBVUxUX1JFTk8gaXMgbm90
IHNldA0KQ09ORklHX0RFRkFVTFRfVENQX0NPTkc9Imh0Y3AiDQpDT05GSUdfVENQX01ENVNJRz15
DQpDT05GSUdfSVBWNj15DQpDT05GSUdfSVBWNl9ST1VURVJfUFJFRj15DQojIENPTkZJR19JUFY2
X1JPVVRFX0lORk8gaXMgbm90IHNldA0KIyBDT05GSUdfSVBWNl9PUFRJTUlTVElDX0RBRCBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTkVUNl9BSCBpcyBub3Qgc2V0DQpDT05GSUdfSU5FVDZfRVNQPXkN
CiMgQ09ORklHX0lORVQ2X0lQQ09NUCBpcyBub3Qgc2V0DQpDT05GSUdfSVBWNl9NSVA2PXkNCiMg
Q09ORklHX0lORVQ2X1hGUk1fVFVOTkVMIGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUNl9UVU5ORUw9
eQ0KQ09ORklHX0lORVQ2X1hGUk1fTU9ERV9UUkFOU1BPUlQ9eQ0KQ09ORklHX0lORVQ2X1hGUk1f
TU9ERV9UVU5ORUw9eQ0KIyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNldA0K
IyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX1JPVVRFT1BUSU1JWkFUSU9OIGlzIG5vdCBzZXQNCkNP
TkZJR19JUFY2X1ZUST15DQojIENPTkZJR19JUFY2X1NJVCBpcyBub3Qgc2V0DQpDT05GSUdfSVBW
Nl9UVU5ORUw9eQ0KQ09ORklHX0lQVjZfR1JFPXkNCkNPTkZJR19JUFY2X01VTFRJUExFX1RBQkxF
Uz15DQojIENPTkZJR19JUFY2X1NVQlRSRUVTIGlzIG5vdCBzZXQNCkNPTkZJR19JUFY2X01ST1VU
RT15DQpDT05GSUdfSVBWNl9NUk9VVEVfTVVMVElQTEVfVEFCTEVTPXkNCiMgQ09ORklHX0lQVjZf
UElNU01fVjIgaXMgbm90IHNldA0KIyBDT05GSUdfTkVUTEFCRUwgaXMgbm90IHNldA0KQ09ORklH
X05FVFdPUktfU0VDTUFSSz15DQpDT05GSUdfTkVUV09SS19QSFlfVElNRVNUQU1QSU5HPXkNCkNP
TkZJR19ORVRGSUxURVI9eQ0KIyBDT05GSUdfTkVURklMVEVSX0RFQlVHIGlzIG5vdCBzZXQNCkNP
TkZJR19ORVRGSUxURVJfQURWQU5DRUQ9eQ0KIyBDT05GSUdfQlJJREdFX05FVEZJTFRFUiBpcyBu
b3Qgc2V0DQoNCiMNCiMgQ29yZSBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KQ09ORklHX05F
VEZJTFRFUl9ORVRMSU5LPXkNCkNPTkZJR19ORVRGSUxURVJfTkVUTElOS19BQ0NUPXkNCiMgQ09O
RklHX05FVEZJTFRFUl9ORVRMSU5LX1FVRVVFIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJf
TkVUTElOS19MT0c9eQ0KQ09ORklHX05GX0NPTk5UUkFDSz15DQpDT05GSUdfTkZfQ09OTlRSQUNL
X01BUks9eQ0KQ09ORklHX05GX0NPTk5UUkFDS19TRUNNQVJLPXkNCkNPTkZJR19ORl9DT05OVFJB
Q0tfRVZFTlRTPXkNCiMgQ09ORklHX05GX0NPTk5UUkFDS19USU1FT1VUIGlzIG5vdCBzZXQNCiMg
Q09ORklHX05GX0NPTk5UUkFDS19USU1FU1RBTVAgaXMgbm90IHNldA0KQ09ORklHX05GX0NPTk5U
UkFDS19MQUJFTFM9eQ0KIyBDT05GSUdfTkZfQ1RfUFJPVE9fRENDUCBpcyBub3Qgc2V0DQpDT05G
SUdfTkZfQ1RfUFJPVE9fR1JFPXkNCkNPTkZJR19ORl9DVF9QUk9UT19TQ1RQPXkNCkNPTkZJR19O
Rl9DVF9QUk9UT19VRFBMSVRFPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfQU1BTkRBPXkNCkNPTkZJ
R19ORl9DT05OVFJBQ0tfRlRQPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15DQojIENPTkZJ
R19ORl9DT05OVFJBQ0tfSVJDIGlzIG5vdCBzZXQNCkNPTkZJR19ORl9DT05OVFJBQ0tfQlJPQURD
QVNUPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfTkVUQklPU19OUz15DQpDT05GSUdfTkZfQ09OTlRS
QUNLX1NOTVA9eQ0KQ09ORklHX05GX0NPTk5UUkFDS19QUFRQPXkNCkNPTkZJR19ORl9DT05OVFJB
Q0tfU0FORT15DQojIENPTkZJR19ORl9DT05OVFJBQ0tfU0lQIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05GX0NPTk5UUkFDS19URlRQIGlzIG5vdCBzZXQNCiMgQ09ORklHX05GX0NUX05FVExJTksgaXMg
bm90IHNldA0KQ09ORklHX05GX0NUX05FVExJTktfVElNRU9VVD15DQojIENPTkZJR19ORl9UQUJM
RVMgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVEFCTEVTPXkNCg0KIw0KIyBYdGFibGVz
IGNvbWJpbmVkIG1vZHVsZXMNCiMNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFSSz15DQpDT05GSUdf
TkVURklMVEVSX1hUX0NPTk5NQVJLPXkNCkNPTkZJR19ORVRGSUxURVJfWFRfU0VUPXkNCg0KIw0K
IyBYdGFibGVzIHRhcmdldHMNCiMNCkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0FVRElUPXkN
CiMgQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfQ0xBU1NJRlkgaXMgbm90IHNldA0KQ09ORklH
X05FVEZJTFRFUl9YVF9UQVJHRVRfQ09OTk1BUks9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX1RB
UkdFVF9DT05OU0VDTUFSSyBpcyBub3Qgc2V0DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9I
TUFSSz15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9JRExFVElNRVI9eQ0KIyBDT05GSUdf
TkVURklMVEVSX1hUX1RBUkdFVF9MRUQgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9U
QVJHRVRfTE9HPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfTUFSSyBpcyBub3Qgc2V0
DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORkxPRz15DQojIENPTkZJR19ORVRGSUxURVJf
WFRfVEFSR0VUX05GUVVFVUUgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRf
UkFURUVTVD15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9URUU9eQ0KQ09ORklHX05FVEZJ
TFRFUl9YVF9UQVJHRVRfU0VDTUFSSz15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9UQ1BN
U1M9eQ0KDQojDQojIFh0YWJsZXMgbWF0Y2hlcw0KIw0KIyBDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0FERFJUWVBFIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQlBGPXkN
CiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DTFVTVEVSIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05FVEZJTFRFUl9YVF9NQVRDSF9DT01NRU5UIGlzIG5vdCBzZXQNCiMgQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9DT05OQllURVMgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRD
SF9DT05OTEFCRUw9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTElNSVQ9eQ0KQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTUFSSz15DQojIENPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfQ09OTlRSQUNLIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ1BV
PXkNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfRENDUD15DQojIENPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfREVWR1JPVVAgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9E
U0NQPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9FQ04gaXMgbm90IHNldA0KQ09ORklH
X05FVEZJTFRFUl9YVF9NQVRDSF9FU1A9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9IQVNI
TElNSVQ9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0hFTFBFUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEwgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9JUENPTVA9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9JUFJBTkdFPXkN
CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTEVOR1RIPXkNCkNPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfTElNSVQ9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX01BQyBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTUFSSyBpcyBub3Qgc2V0DQojIENPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfTVVMVElQT1JUIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfTkZBQ0NUPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9PU0YgaXMgbm90
IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9PV05FUj15DQojIENPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfUE9MSUNZIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
UEtUVFlQRT15DQojIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfUVVPVEEgaXMgbm90IHNldA0K
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9SQVRFRVNUPXkNCiMgQ09ORklHX05FVEZJTFRFUl9Y
VF9NQVRDSF9SRUFMTSBpcyBub3Qgc2V0DQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1JFQ0VO
VD15DQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1NDVFA9eQ0KQ09ORklHX05FVEZJTFRFUl9Y
VF9NQVRDSF9TT0NLRVQ9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TVEFURT15DQojIENP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU1RBVElTVElDIGlzIG5vdCBzZXQNCiMgQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9TVFJJTkcgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9UQ1BNU1M9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9USU1FPXkNCkNPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfVTMyPXkNCkNPTkZJR19JUF9TRVQ9eQ0KQ09ORklHX0lQX1NFVF9N
QVg9MjU2DQpDT05GSUdfSVBfU0VUX0JJVE1BUF9JUD15DQpDT05GSUdfSVBfU0VUX0JJVE1BUF9J
UE1BQz15DQojIENPTkZJR19JUF9TRVRfQklUTUFQX1BPUlQgaXMgbm90IHNldA0KQ09ORklHX0lQ
X1NFVF9IQVNIX0lQPXkNCiMgQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVCBpcyBub3Qgc2V0DQpD
T05GSUdfSVBfU0VUX0hBU0hfSVBQT1JUSVA9eQ0KQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVE5F
VD15DQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVE5FVD15DQojIENPTkZJR19JUF9TRVRfSEFT
SF9ORVQgaXMgbm90IHNldA0KQ09ORklHX0lQX1NFVF9IQVNIX05FVE5FVD15DQojIENPTkZJR19J
UF9TRVRfSEFTSF9ORVRQT1JUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQX1NFVF9IQVNIX05FVElG
QUNFIGlzIG5vdCBzZXQNCkNPTkZJR19JUF9TRVRfTElTVF9TRVQ9eQ0KIyBDT05GSUdfSVBfVlMg
aXMgbm90IHNldA0KDQojDQojIElQOiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KQ09ORklH
X05GX0RFRlJBR19JUFY0PXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfSVBWND15DQojIENPTkZJR19J
UF9ORl9JUFRBQkxFUyBpcyBub3Qgc2V0DQpDT05GSUdfSVBfTkZfQVJQVEFCTEVTPXkNCkNPTkZJ
R19JUF9ORl9BUlBGSUxURVI9eQ0KQ09ORklHX0lQX05GX0FSUF9NQU5HTEU9eQ0KDQojDQojIElQ
djY6IE5ldGZpbHRlciBDb25maWd1cmF0aW9uDQojDQpDT05GSUdfTkZfREVGUkFHX0lQVjY9eQ0K
Q09ORklHX05GX0NPTk5UUkFDS19JUFY2PXkNCiMgQ09ORklHX0lQNl9ORl9JUFRBQkxFUyBpcyBu
b3Qgc2V0DQoNCiMNCiMgREVDbmV0OiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KIyBDT05G
SUdfREVDTkVUX05GX0dSQUJVTEFUT1IgaXMgbm90IHNldA0KQ09ORklHX0JSSURHRV9ORl9FQlRB
QkxFUz15DQpDT05GSUdfQlJJREdFX0VCVF9CUk9VVEU9eQ0KQ09ORklHX0JSSURHRV9FQlRfVF9G
SUxURVI9eQ0KIyBDT05GSUdfQlJJREdFX0VCVF9UX05BVCBpcyBub3Qgc2V0DQpDT05GSUdfQlJJ
REdFX0VCVF84MDJfMz15DQpDT05GSUdfQlJJREdFX0VCVF9BTU9ORz15DQojIENPTkZJR19CUklE
R0VfRUJUX0FSUCBpcyBub3Qgc2V0DQpDT05GSUdfQlJJREdFX0VCVF9JUD15DQpDT05GSUdfQlJJ
REdFX0VCVF9JUDY9eQ0KQ09ORklHX0JSSURHRV9FQlRfTElNSVQ9eQ0KIyBDT05GSUdfQlJJREdF
X0VCVF9NQVJLIGlzIG5vdCBzZXQNCkNPTkZJR19CUklER0VfRUJUX1BLVFRZUEU9eQ0KQ09ORklH
X0JSSURHRV9FQlRfU1RQPXkNCiMgQ09ORklHX0JSSURHRV9FQlRfVkxBTiBpcyBub3Qgc2V0DQpD
T05GSUdfQlJJREdFX0VCVF9BUlBSRVBMWT15DQpDT05GSUdfQlJJREdFX0VCVF9ETkFUPXkNCkNP
TkZJR19CUklER0VfRUJUX01BUktfVD15DQpDT05GSUdfQlJJREdFX0VCVF9SRURJUkVDVD15DQoj
IENPTkZJR19CUklER0VfRUJUX1NOQVQgaXMgbm90IHNldA0KQ09ORklHX0JSSURHRV9FQlRfTE9H
PXkNCkNPTkZJR19CUklER0VfRUJUX1VMT0c9eQ0KQ09ORklHX0JSSURHRV9FQlRfTkZMT0c9eQ0K
Q09ORklHX0lQX0RDQ1A9eQ0KQ09ORklHX0lORVRfRENDUF9ESUFHPXkNCg0KIw0KIyBEQ0NQIEND
SURzIENvbmZpZ3VyYXRpb24NCiMNCkNPTkZJR19JUF9EQ0NQX0NDSUQyX0RFQlVHPXkNCkNPTkZJ
R19JUF9EQ0NQX0NDSUQzPXkNCiMgQ09ORklHX0lQX0RDQ1BfQ0NJRDNfREVCVUcgaXMgbm90IHNl
dA0KQ09ORklHX0lQX0RDQ1BfVEZSQ19MSUI9eQ0KDQojDQojIERDQ1AgS2VybmVsIEhhY2tpbmcN
CiMNCkNPTkZJR19JUF9EQ0NQX0RFQlVHPXkNCiMgQ09ORklHX0lQX1NDVFAgaXMgbm90IHNldA0K
Q09ORklHX1JEUz15DQojIENPTkZJR19SRFNfUkRNQSBpcyBub3Qgc2V0DQojIENPTkZJR19SRFNf
VENQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEU19ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfVElQ
Qz15DQpDT05GSUdfVElQQ19QT1JUUz04MTkxDQojIENPTkZJR19BVE0gaXMgbm90IHNldA0KIyBD
T05GSUdfTDJUUCBpcyBub3Qgc2V0DQpDT05GSUdfU1RQPXkNCkNPTkZJR19CUklER0U9eQ0KIyBD
T05GSUdfQlJJREdFX0lHTVBfU05PT1BJTkcgaXMgbm90IHNldA0KIyBDT05GSUdfVkxBTl84MDIx
USBpcyBub3Qgc2V0DQpDT05GSUdfREVDTkVUPXkNCiMgQ09ORklHX0RFQ05FVF9ST1VURVIgaXMg
bm90IHNldA0KQ09ORklHX0xMQz15DQpDT05GSUdfTExDMj15DQpDT05GSUdfSVBYPXkNCkNPTkZJ
R19JUFhfSU5URVJOPXkNCiMgQ09ORklHX0FUQUxLIGlzIG5vdCBzZXQNCkNPTkZJR19YMjU9eQ0K
IyBDT05GSUdfTEFQQiBpcyBub3Qgc2V0DQojIENPTkZJR19QSE9ORVQgaXMgbm90IHNldA0KQ09O
RklHX0lFRUU4MDIxNTQ9eQ0KIyBDT05GSUdfSUVFRTgwMjE1NF82TE9XUEFOIGlzIG5vdCBzZXQN
CiMgQ09ORklHX01BQzgwMjE1NCBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfU0NIRUQgaXMgbm90
IHNldA0KIyBDT05GSUdfRENCIGlzIG5vdCBzZXQNCkNPTkZJR19ETlNfUkVTT0xWRVI9eQ0KIyBD
T05GSUdfQkFUTUFOX0FEViBpcyBub3Qgc2V0DQpDT05GSUdfT1BFTlZTV0lUQ0g9eQ0KIyBDT05G
SUdfT1BFTlZTV0lUQ0hfR1JFIGlzIG5vdCBzZXQNCkNPTkZJR19WU09DS0VUUz15DQpDT05GSUdf
Vk1XQVJFX1ZNQ0lfVlNPQ0tFVFM9eQ0KQ09ORklHX05FVExJTktfTU1BUD15DQpDT05GSUdfTkVU
TElOS19ESUFHPXkNCkNPTkZJR19ORVRfTVBMU19HU089eQ0KQ09ORklHX0hTUj15DQpDT05GSUdf
TkVUX1JYX0JVU1lfUE9MTD15DQpDT05GSUdfQlFMPXkNCg0KIw0KIyBOZXR3b3JrIHRlc3RpbmcN
CiMNCkNPTkZJR19IQU1SQURJTz15DQoNCiMNCiMgUGFja2V0IFJhZGlvIHByb3RvY29scw0KIw0K
Q09ORklHX0FYMjU9eQ0KQ09ORklHX0FYMjVfREFNQV9TTEFWRT15DQpDT05GSUdfTkVUUk9NPXkN
CiMgQ09ORklHX1JPU0UgaXMgbm90IHNldA0KDQojDQojIEFYLjI1IG5ldHdvcmsgZGV2aWNlIGRy
aXZlcnMNCiMNCkNPTkZJR19NS0lTUz15DQojIENPTkZJR182UEFDSyBpcyBub3Qgc2V0DQpDT05G
SUdfQlBRRVRIRVI9eQ0KQ09ORklHX0JBWUNPTV9TRVJfRkRYPXkNCkNPTkZJR19CQVlDT01fU0VS
X0hEWD15DQpDT05GSUdfQkFZQ09NX1BBUj15DQpDT05GSUdfWUFNPXkNCiMgQ09ORklHX0NBTiBp
cyBub3Qgc2V0DQpDT05GSUdfSVJEQT15DQoNCiMNCiMgSXJEQSBwcm90b2NvbHMNCiMNCkNPTkZJ
R19JUkxBTj15DQojIENPTkZJR19JUkNPTU0gaXMgbm90IHNldA0KIyBDT05GSUdfSVJEQV9VTFRS
QSBpcyBub3Qgc2V0DQoNCiMNCiMgSXJEQSBvcHRpb25zDQojDQpDT05GSUdfSVJEQV9DQUNIRV9M
QVNUX0xTQVA9eQ0KQ09ORklHX0lSREFfRkFTVF9SUj15DQpDT05GSUdfSVJEQV9ERUJVRz15DQoN
CiMNCiMgSW5mcmFyZWQtcG9ydCBkZXZpY2UgZHJpdmVycw0KIw0KDQojDQojIFNJUiBkZXZpY2Ug
ZHJpdmVycw0KIw0KIyBDT05GSUdfSVJUVFlfU0lSIGlzIG5vdCBzZXQNCg0KIw0KIyBEb25nbGUg
c3VwcG9ydA0KIw0KDQojDQojIEZJUiBkZXZpY2UgZHJpdmVycw0KIw0KQ09ORklHX1ZMU0lfRklS
PXkNCiMgQ09ORklHX0JUIGlzIG5vdCBzZXQNCkNPTkZJR19BRl9SWFJQQz15DQpDT05GSUdfQUZf
UlhSUENfREVCVUc9eQ0KIyBDT05GSUdfUlhLQUQgaXMgbm90IHNldA0KQ09ORklHX0ZJQl9SVUxF
Uz15DQojIENPTkZJR19XSVJFTEVTUyBpcyBub3Qgc2V0DQpDT05GSUdfV0lNQVg9eQ0KQ09ORklH
X1dJTUFYX0RFQlVHX0xFVkVMPTgNCiMgQ09ORklHX1JGS0lMTCBpcyBub3Qgc2V0DQojIENPTkZJ
R19SRktJTExfUkVHVUxBVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfOVA9eQ0KQ09ORklHX05F
VF85UF9WSVJUSU89eQ0KQ09ORklHX05FVF85UF9SRE1BPXkNCiMgQ09ORklHX05FVF85UF9ERUJV
RyBpcyBub3Qgc2V0DQojIENPTkZJR19DQUlGIGlzIG5vdCBzZXQNCkNPTkZJR19DRVBIX0xJQj15
DQojIENPTkZJR19DRVBIX0xJQl9QUkVUVFlERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19DRVBI
X0xJQl9VU0VfRE5TX1JFU09MVkVSIGlzIG5vdCBzZXQNCkNPTkZJR19ORkM9eQ0KQ09ORklHX05G
Q19ESUdJVEFMPXkNCkNPTkZJR19ORkNfTkNJPXkNCkNPTkZJR19ORkNfSENJPXkNCkNPTkZJR19O
RkNfU0hETEM9eQ0KDQojDQojIE5lYXIgRmllbGQgQ29tbXVuaWNhdGlvbiAoTkZDKSBkZXZpY2Vz
DQojDQojIENPTkZJR19ORkNfV0lMSU5LIGlzIG5vdCBzZXQNCkNPTkZJR19ORkNfTUVJX1BIWT15
DQpDT05GSUdfTkZDX1NJTT15DQpDT05GSUdfTkZDX1BONTQ0PXkNCiMgQ09ORklHX05GQ19QTjU0
NF9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTkZDX1BONTQ0X01FSSBpcyBub3Qgc2V0DQpDT05G
SUdfTkZDX01JQ1JPUkVBRD15DQpDT05GSUdfTkZDX01JQ1JPUkVBRF9JMkM9eQ0KQ09ORklHX05G
Q19NSUNST1JFQURfTUVJPXkNCkNPTkZJR19IQVZFX0JQRl9KSVQ9eQ0KDQojDQojIERldmljZSBE
cml2ZXJzDQojDQoNCiMNCiMgR2VuZXJpYyBEcml2ZXIgT3B0aW9ucw0KIw0KQ09ORklHX1VFVkVO
VF9IRUxQRVJfUEFUSD0iIg0KIyBDT05GSUdfREVWVE1QRlMgaXMgbm90IHNldA0KIyBDT05GSUdf
U1RBTkRBTE9ORSBpcyBub3Qgc2V0DQpDT05GSUdfUFJFVkVOVF9GSVJNV0FSRV9CVUlMRD15DQpD
T05GSUdfRldfTE9BREVSPXkNCkNPTkZJR19GSVJNV0FSRV9JTl9LRVJORUw9eQ0KQ09ORklHX0VY
VFJBX0ZJUk1XQVJFPSIiDQojIENPTkZJR19GV19MT0FERVJfVVNFUl9IRUxQRVIgaXMgbm90IHNl
dA0KQ09ORklHX0RFQlVHX0RSSVZFUj15DQojIENPTkZJR19ERUJVR19ERVZSRVMgaXMgbm90IHNl
dA0KQ09ORklHX1NZU19IWVBFUlZJU09SPXkNCiMgQ09ORklHX0dFTkVSSUNfQ1BVX0RFVklDRVMg
aXMgbm90IHNldA0KQ09ORklHX1JFR01BUD15DQpDT05GSUdfUkVHTUFQX0kyQz15DQpDT05GSUdf
UkVHTUFQX01NSU89eQ0KQ09ORklHX1JFR01BUF9JUlE9eQ0KQ09ORklHX0RNQV9TSEFSRURfQlVG
RkVSPXkNCg0KIw0KIyBCdXMgZGV2aWNlcw0KIw0KQ09ORklHX0NPTk5FQ1RPUj15DQpDT05GSUdf
UFJPQ19FVkVOVFM9eQ0KQ09ORklHX01URD15DQojIENPTkZJR19NVERfUkVEQk9PVF9QQVJUUyBp
cyBub3Qgc2V0DQpDT05GSUdfTVREX0NNRExJTkVfUEFSVFM9eQ0KQ09ORklHX01URF9BUjdfUEFS
VFM9eQ0KDQojDQojIFVzZXIgTW9kdWxlcyBBbmQgVHJhbnNsYXRpb24gTGF5ZXJzDQojDQpDT05G
SUdfTVREX09PUFM9eQ0KDQojDQojIFJBTS9ST00vRmxhc2ggY2hpcCBkcml2ZXJzDQojDQpDT05G
SUdfTVREX0NGST15DQpDT05GSUdfTVREX0pFREVDUFJPQkU9eQ0KQ09ORklHX01URF9HRU5fUFJP
QkU9eQ0KIyBDT05GSUdfTVREX0NGSV9BRFZfT1BUSU9OUyBpcyBub3Qgc2V0DQpDT05GSUdfTVRE
X01BUF9CQU5LX1dJRFRIXzE9eQ0KQ09ORklHX01URF9NQVBfQkFOS19XSURUSF8yPXkNCkNPTkZJ
R19NVERfTUFQX0JBTktfV0lEVEhfND15DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfOCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMTYgaXMgbm90IHNldA0KIyBD
T05GSUdfTVREX01BUF9CQU5LX1dJRFRIXzMyIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfQ0ZJX0kx
PXkNCkNPTkZJR19NVERfQ0ZJX0kyPXkNCiMgQ09ORklHX01URF9DRklfSTQgaXMgbm90IHNldA0K
IyBDT05GSUdfTVREX0NGSV9JOCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0lOVEVMRVhU
IGlzIG5vdCBzZXQNCkNPTkZJR19NVERfQ0ZJX0FNRFNURD15DQpDT05GSUdfTVREX0NGSV9TVEFB
PXkNCkNPTkZJR19NVERfQ0ZJX1VUSUw9eQ0KQ09ORklHX01URF9SQU09eQ0KQ09ORklHX01URF9S
T009eQ0KIyBDT05GSUdfTVREX0FCU0VOVCBpcyBub3Qgc2V0DQoNCiMNCiMgTWFwcGluZyBkcml2
ZXJzIGZvciBjaGlwIGFjY2Vzcw0KIw0KIyBDT05GSUdfTVREX0NPTVBMRVhfTUFQUElOR1MgaXMg
bm90IHNldA0KIyBDT05GSUdfTVREX1BIWVNNQVAgaXMgbm90IHNldA0KQ09ORklHX01URF9TQzUy
MENEUD15DQpDT05GSUdfTVREX05FVFNDNTIwPXkNCiMgQ09ORklHX01URF9UUzU1MDAgaXMgbm90
IHNldA0KQ09ORklHX01URF9BTUQ3NlhST009eQ0KQ09ORklHX01URF9JQ0hYUk9NPXkNCkNPTkZJ
R19NVERfRVNCMlJPTT15DQojIENPTkZJR19NVERfQ0s4MDRYUk9NIGlzIG5vdCBzZXQNCkNPTkZJ
R19NVERfU0NCMl9GTEFTSD15DQpDT05GSUdfTVREX05FVHRlbD15DQojIENPTkZJR19NVERfTDQ0
MEdYIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfSU5URUxfVlJfTk9SPXkNCkNPTkZJR19NVERfUExB
VFJBTT15DQoNCiMNCiMgU2VsZi1jb250YWluZWQgTVREIGRldmljZSBkcml2ZXJzDQojDQpDT05G
SUdfTVREX1BNQzU1MT15DQojIENPTkZJR19NVERfUE1DNTUxX0JVR0ZJWCBpcyBub3Qgc2V0DQoj
IENPTkZJR19NVERfUE1DNTUxX0RFQlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9TTFJBTSBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfUEhSQU0gaXMgbm90IHNldA0KQ09ORklHX01URF9NVERS
QU09eQ0KQ09ORklHX01URFJBTV9UT1RBTF9TSVpFPTQwOTYNCkNPTkZJR19NVERSQU1fRVJBU0Vf
U0laRT0xMjgNCkNPTkZJR19NVERSQU1fQUJTX1BPUz0wDQoNCiMNCiMgRGlzay1Pbi1DaGlwIERl
dmljZSBEcml2ZXJzDQojDQpDT05GSUdfTVREX0RPQ0czPXkNCkNPTkZJR19CQ0hfQ09OU1RfTT0x
NA0KQ09ORklHX0JDSF9DT05TVF9UPTQNCiMgQ09ORklHX01URF9OQU5EIGlzIG5vdCBzZXQNCiMg
Q09ORklHX01URF9PTkVOQU5EIGlzIG5vdCBzZXQNCg0KIw0KIyBMUEREUiBmbGFzaCBtZW1vcnkg
ZHJpdmVycw0KIw0KIyBDT05GSUdfTVREX0xQRERSIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfVUJJ
PXkNCkNPTkZJR19NVERfVUJJX1dMX1RIUkVTSE9MRD00MDk2DQpDT05GSUdfTVREX1VCSV9CRUJf
TElNSVQ9MjANCkNPTkZJR19NVERfVUJJX0ZBU1RNQVA9eQ0KQ09ORklHX01URF9VQklfR0xVRUJJ
PXkNCkNPTkZJR19QQVJQT1JUPXkNCkNPTkZJR19BUkNIX01JR0hUX0hBVkVfUENfUEFSUE9SVD15
DQpDT05GSUdfUEFSUE9SVF9QQz15DQpDT05GSUdfUEFSUE9SVF9TRVJJQUw9eQ0KQ09ORklHX1BB
UlBPUlRfUENfRklGTz15DQojIENPTkZJR19QQVJQT1JUX1BDX1NVUEVSSU8gaXMgbm90IHNldA0K
IyBDT05GSUdfUEFSUE9SVF9HU0MgaXMgbm90IHNldA0KQ09ORklHX1BBUlBPUlRfQVg4ODc5Nj15
DQojIENPTkZJR19QQVJQT1JUXzEyODQgaXMgbm90IHNldA0KQ09ORklHX1BBUlBPUlRfTk9UX1BD
PXkNCkNPTkZJR19QTlA9eQ0KIyBDT05GSUdfUE5QX0RFQlVHX01FU1NBR0VTIGlzIG5vdCBzZXQN
Cg0KIw0KIyBQcm90b2NvbHMNCiMNCkNPTkZJR19QTlBBQ1BJPXkNCg0KIw0KIyBNaXNjIGRldmlj
ZXMNCiMNCkNPTkZJR19TRU5TT1JTX0xJUzNMVjAyRD15DQojIENPTkZJR19BRDUyNVhfRFBPVCBp
cyBub3Qgc2V0DQpDT05GSUdfRFVNTVlfSVJRPXkNCkNPTkZJR19JQk1fQVNNPXkNCkNPTkZJR19Q
SEFOVE9NPXkNCiMgQ09ORklHX0lOVEVMX01JRF9QVEkgaXMgbm90IHNldA0KQ09ORklHX1NHSV9J
T0M0PXkNCkNPTkZJR19USUZNX0NPUkU9eQ0KIyBDT05GSUdfVElGTV83WFgxIGlzIG5vdCBzZXQN
CkNPTkZJR19JQ1M5MzJTNDAxPXkNCkNPTkZJR19BVE1FTF9TU0M9eQ0KIyBDT05GSUdfRU5DTE9T
VVJFX1NFUlZJQ0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hQX0lMTyBpcyBub3Qgc2V0DQpDT05G
SUdfQVBEUzk4MDJBTFM9eQ0KQ09ORklHX0lTTDI5MDAzPXkNCiMgQ09ORklHX0lTTDI5MDIwIGlz
IG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX1RTTDI1NTA9eQ0KQ09ORklHX1NFTlNPUlNfQkgxNzgw
PXkNCiMgQ09ORklHX1NFTlNPUlNfQkgxNzcwIGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX0FQ
RFM5OTBYPXkNCkNPTkZJR19ITUM2MzUyPXkNCkNPTkZJR19EUzE2ODI9eQ0KQ09ORklHX1ZNV0FS
RV9CQUxMT09OPXkNCiMgQ09ORklHX0JNUDA4NV9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfUENI
X1BIVUIgaXMgbm90IHNldA0KIyBDT05GSUdfVVNCX1NXSVRDSF9GU0E5NDgwIGlzIG5vdCBzZXQN
CkNPTkZJR19TUkFNPXkNCkNPTkZJR19DMlBPUlQ9eQ0KQ09ORklHX0MyUE9SVF9EVVJBTUFSXzIx
NTA9eQ0KDQojDQojIEVFUFJPTSBzdXBwb3J0DQojDQpDT05GSUdfRUVQUk9NX0FUMjQ9eQ0KIyBD
T05GSUdfRUVQUk9NX0xFR0FDWSBpcyBub3Qgc2V0DQojIENPTkZJR19FRVBST01fTUFYNjg3NSBp
cyBub3Qgc2V0DQojIENPTkZJR19FRVBST01fOTNDWDYgaXMgbm90IHNldA0KIyBDT05GSUdfQ0I3
MTBfQ09SRSBpcyBub3Qgc2V0DQoNCiMNCiMgVGV4YXMgSW5zdHJ1bWVudHMgc2hhcmVkIHRyYW5z
cG9ydCBsaW5lIGRpc2NpcGxpbmUNCiMNCkNPTkZJR19USV9TVD15DQpDT05GSUdfU0VOU09SU19M
SVMzX0kyQz15DQoNCiMNCiMgQWx0ZXJhIEZQR0EgZmlybXdhcmUgZG93bmxvYWQgbW9kdWxlDQoj
DQpDT05GSUdfQUxURVJBX1NUQVBMPXkNCkNPTkZJR19JTlRFTF9NRUk9eQ0KQ09ORklHX0lOVEVM
X01FSV9NRT15DQpDT05GSUdfVk1XQVJFX1ZNQ0k9eQ0KDQojDQojIEludGVsIE1JQyBIb3N0IERy
aXZlcg0KIw0KQ09ORklHX0lOVEVMX01JQ19IT1NUPXkNCg0KIw0KIyBJbnRlbCBNSUMgQ2FyZCBE
cml2ZXINCiMNCkNPTkZJR19JTlRFTF9NSUNfQ0FSRD15DQpDT05GSUdfR0VOV1FFPXkNCkNPTkZJ
R19IQVZFX0lERT15DQoNCiMNCiMgU0NTSSBkZXZpY2Ugc3VwcG9ydA0KIw0KQ09ORklHX1NDU0lf
TU9EPXkNCiMgQ09ORklHX1NDU0lfRE1BIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfTkVUTElO
SyBpcyBub3Qgc2V0DQojIENPTkZJR19GVVNJT04gaXMgbm90IHNldA0KDQojDQojIElFRUUgMTM5
NCAoRmlyZVdpcmUpIHN1cHBvcnQNCiMNCiMgQ09ORklHX0ZJUkVXSVJFIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0ZJUkVXSVJFX05PU1kgaXMgbm90IHNldA0KQ09ORklHX0kyTz15DQojIENPTkZJR19J
Mk9fTENUX05PVElGWV9PTl9DSEFOR0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0kyT19FWFRfQURB
UFRFQyBpcyBub3Qgc2V0DQojIENPTkZJR19JMk9fQ09ORklHIGlzIG5vdCBzZXQNCkNPTkZJR19J
Mk9fQlVTPXkNCiMgQ09ORklHX0kyT19QUk9DIGlzIG5vdCBzZXQNCkNPTkZJR19NQUNJTlRPU0hf
RFJJVkVSUz15DQojIENPTkZJR19ORVRERVZJQ0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZIT1NU
X05FVCBpcyBub3Qgc2V0DQpDT05GSUdfVkhPU1RfUklORz15DQoNCiMNCiMgSW5wdXQgZGV2aWNl
IHN1cHBvcnQNCiMNCkNPTkZJR19JTlBVVD15DQpDT05GSUdfSU5QVVRfRkZfTUVNTEVTUz15DQpD
T05GSUdfSU5QVVRfUE9MTERFVj15DQpDT05GSUdfSU5QVVRfU1BBUlNFS01BUD15DQpDT05GSUdf
SU5QVVRfTUFUUklYS01BUD15DQoNCiMNCiMgVXNlcmxhbmQgaW50ZXJmYWNlcw0KIw0KQ09ORklH
X0lOUFVUX01PVVNFREVWPXkNCkNPTkZJR19JTlBVVF9NT1VTRURFVl9QU0FVWD15DQpDT05GSUdf
SU5QVVRfTU9VU0VERVZfU0NSRUVOX1g9MTAyNA0KQ09ORklHX0lOUFVUX01PVVNFREVWX1NDUkVF
Tl9ZPTc2OA0KIyBDT05GSUdfSU5QVVRfSk9ZREVWIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9F
VkRFVj15DQpDT05GSUdfSU5QVVRfRVZCVUc9eQ0KDQojDQojIElucHV0IERldmljZSBEcml2ZXJz
DQojDQpDT05GSUdfSU5QVVRfS0VZQk9BUkQ9eQ0KIyBDT05GSUdfS0VZQk9BUkRfQURQNTU4OCBp
cyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9BRFA1NTg5IGlzIG5vdCBzZXQNCkNPTkZJR19L
RVlCT0FSRF9BVEtCRD15DQpDT05GSUdfS0VZQk9BUkRfUVQxMDcwPXkNCkNPTkZJR19LRVlCT0FS
RF9RVDIxNjA9eQ0KIyBDT05GSUdfS0VZQk9BUkRfTEtLQkQgaXMgbm90IHNldA0KQ09ORklHX0tF
WUJPQVJEX0dQSU89eQ0KQ09ORklHX0tFWUJPQVJEX0dQSU9fUE9MTEVEPXkNCiMgQ09ORklHX0tF
WUJPQVJEX1RDQTY0MTYgaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfVENBODQxOCBpcyBu
b3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9NQVRSSVggaXMgbm90IHNldA0KQ09ORklHX0tFWUJP
QVJEX0xNODMyMz15DQojIENPTkZJR19LRVlCT0FSRF9MTTgzMzMgaXMgbm90IHNldA0KQ09ORklH
X0tFWUJPQVJEX01BWDczNTk9eQ0KQ09ORklHX0tFWUJPQVJEX01DUz15DQojIENPTkZJR19LRVlC
T0FSRF9NUFIxMjEgaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfTkVXVE9OIGlzIG5vdCBz
ZXQNCkNPTkZJR19LRVlCT0FSRF9PUEVOQ09SRVM9eQ0KQ09ORklHX0tFWUJPQVJEX1NUT1dBV0FZ
PXkNCiMgQ09ORklHX0tFWUJPQVJEX1NVTktCRCBpcyBub3Qgc2V0DQpDT05GSUdfS0VZQk9BUkRf
U0hfS0VZU0M9eQ0KQ09ORklHX0tFWUJPQVJEX1RDMzU4OVg9eQ0KQ09ORklHX0tFWUJPQVJEX1RX
TDQwMzA9eQ0KIyBDT05GSUdfS0VZQk9BUkRfWFRLQkQgaXMgbm90IHNldA0KQ09ORklHX0lOUFVU
X0xFRFM9eQ0KIyBDT05GSUdfSU5QVVRfTU9VU0UgaXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRf
Sk9ZU1RJQ0sgaXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRfVEFCTEVUIGlzIG5vdCBzZXQNCkNP
TkZJR19JTlBVVF9UT1VDSFNDUkVFTj15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fODhQTTg2MFg9eQ0K
Q09ORklHX1RPVUNIU0NSRUVOX0FENzg3OT15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQUQ3ODc5X0ky
Qz15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQVRNRUxfTVhUPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X0FVT19QSVhDSVIgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVOX0JVMjEwMTM9eQ0KQ09O
RklHX1RPVUNIU0NSRUVOX0NZOENUTUcxMTA9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0NZVFRTUF9D
T1JFPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX0NZVFRTUF9JMkMgaXMgbm90IHNldA0KQ09ORklH
X1RPVUNIU0NSRUVOX0NZVFRTUDRfQ09SRT15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQ1lUVFNQNF9J
MkM9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0RZTkFQUk89eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0hB
TVBTSElSRT15DQojIENPTkZJR19UT1VDSFNDUkVFTl9FRVRJIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1RPVUNIU0NSRUVOX0ZVSklUU1UgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVOX0lMSTIx
MFg9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0dVTlpFPXkNCkNPTkZJR19UT1VDSFNDUkVFTl9FTE89
eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1dBQ09NX1c4MDAxPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X1dBQ09NX0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfVE9VQ0hTQ1JFRU5fTUFYMTE4MDE9eQ0KIyBD
T05GSUdfVE9VQ0hTQ1JFRU5fTUNTNTAwMCBpcyBub3Qgc2V0DQpDT05GSUdfVE9VQ0hTQ1JFRU5f
TU1TMTE0PXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX01UT1VDSCBpcyBub3Qgc2V0DQpDT05GSUdf
VE9VQ0hTQ1JFRU5fSU5FWElPPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX01LNzEyIGlzIG5vdCBz
ZXQNCkNPTkZJR19UT1VDSFNDUkVFTl9QRU5NT1VOVD15DQojIENPTkZJR19UT1VDSFNDUkVFTl9F
RFRfRlQ1WDA2IGlzIG5vdCBzZXQNCiMgQ09ORklHX1RPVUNIU0NSRUVOX1RPVUNIUklHSFQgaXMg
bm90IHNldA0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVE9VQ0hXSU4gaXMgbm90IHNldA0KQ09ORklH
X1RPVUNIU0NSRUVOX1RJX0FNMzM1WF9UU0M9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1BJWENJUj15
DQpDT05GSUdfVE9VQ0hTQ1JFRU5fV004MzFYPXkNCkNPTkZJR19UT1VDSFNDUkVFTl9XTTk3WFg9
eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fV005NzA1IGlzIG5vdCBzZXQNCkNPTkZJR19UT1VDSFND
UkVFTl9XTTk3MTI9eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fV005NzEzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1RPVUNIU0NSRUVOX01DMTM3ODMgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVO
X1RPVUNISVQyMTM9eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVFNDX1NFUklPIGlzIG5vdCBzZXQN
CkNPTkZJR19UT1VDSFNDUkVFTl9UU0MyMDA3PXkNCkNPTkZJR19UT1VDSFNDUkVFTl9TVDEyMzI9
eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1RQUzY1MDdYPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX1pG
T1JDRSBpcyBub3Qgc2V0DQpDT05GSUdfSU5QVVRfTUlTQz15DQpDT05GSUdfSU5QVVRfODhQTTg2
MFhfT05LRVk9eQ0KIyBDT05GSUdfSU5QVVRfODhQTTgwWF9PTktFWSBpcyBub3Qgc2V0DQpDT05G
SUdfSU5QVVRfQUQ3MTRYPXkNCkNPTkZJR19JTlBVVF9BRDcxNFhfSTJDPXkNCiMgQ09ORklHX0lO
UFVUX0FSSVpPTkFfSEFQVElDUyBpcyBub3Qgc2V0DQojIENPTkZJR19JTlBVVF9CTUExNTAgaXMg
bm90IHNldA0KQ09ORklHX0lOUFVUX1BDU1BLUj15DQojIENPTkZJR19JTlBVVF9NQzEzNzgzX1BX
UkJVVFRPTiBpcyBub3Qgc2V0DQpDT05GSUdfSU5QVVRfTU1BODQ1MD15DQojIENPTkZJR19JTlBV
VF9NUFUzMDUwIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9BUEFORUw9eQ0KQ09ORklHX0lOUFVU
X0dQMkE9eQ0KQ09ORklHX0lOUFVUX0dQSU9fVElMVF9QT0xMRUQ9eQ0KQ09ORklHX0lOUFVUX0FU
TEFTX0JUTlM9eQ0KIyBDT05GSUdfSU5QVVRfS1hUSjkgaXMgbm90IHNldA0KQ09ORklHX0lOUFVU
X1JFVFVfUFdSQlVUVE9OPXkNCkNPTkZJR19JTlBVVF9UV0w0MDMwX1BXUkJVVFRPTj15DQpDT05G
SUdfSU5QVVRfVFdMNDAzMF9WSUJSQT15DQpDT05GSUdfSU5QVVRfVFdMNjA0MF9WSUJSQT15DQpD
T05GSUdfSU5QVVRfVUlOUFVUPXkNCkNPTkZJR19JTlBVVF9QQ0Y1MDYzM19QTVU9eQ0KQ09ORklH
X0lOUFVUX1BDRjg1NzQ9eQ0KIyBDT05GSUdfSU5QVVRfR1BJT19ST1RBUllfRU5DT0RFUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTlBVVF9EQTkwNTVfT05LRVkgaXMgbm90IHNldA0KQ09ORklHX0lO
UFVUX1dNODMxWF9PTj15DQpDT05GSUdfSU5QVVRfQURYTDM0WD15DQpDT05GSUdfSU5QVVRfQURY
TDM0WF9JMkM9eQ0KIyBDT05GSUdfSU5QVVRfQ01BMzAwMCBpcyBub3Qgc2V0DQojIENPTkZJR19J
TlBVVF9YRU5fS0JEREVWX0ZST05URU5EIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9JREVBUEFE
X1NMSURFQkFSPXkNCg0KIw0KIyBIYXJkd2FyZSBJL08gcG9ydHMNCiMNCkNPTkZJR19TRVJJTz15
DQpDT05GSUdfQVJDSF9NSUdIVF9IQVZFX1BDX1NFUklPPXkNCkNPTkZJR19TRVJJT19JODA0Mj15
DQpDT05GSUdfU0VSSU9fU0VSUE9SVD15DQojIENPTkZJR19TRVJJT19DVDgyQzcxMCBpcyBub3Qg
c2V0DQpDT05GSUdfU0VSSU9fUEFSS0JEPXkNCkNPTkZJR19TRVJJT19QQ0lQUzI9eQ0KQ09ORklH
X1NFUklPX0xJQlBTMj15DQojIENPTkZJR19TRVJJT19SQVcgaXMgbm90IHNldA0KIyBDT05GSUdf
U0VSSU9fQUxURVJBX1BTMiBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSU9fUFMyTVVMVD15DQojIENP
TkZJR19TRVJJT19BUkNfUFMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hZUEVSVl9LRVlCT0FSRCBp
cyBub3Qgc2V0DQpDT05GSUdfR0FNRVBPUlQ9eQ0KQ09ORklHX0dBTUVQT1JUX05TNTU4PXkNCkNP
TkZJR19HQU1FUE9SVF9MND15DQojIENPTkZJR19HQU1FUE9SVF9FTVUxMEsxIGlzIG5vdCBzZXQN
CkNPTkZJR19HQU1FUE9SVF9GTTgwMT15DQoNCiMNCiMgQ2hhcmFjdGVyIGRldmljZXMNCiMNCkNP
TkZJR19UVFk9eQ0KIyBDT05GSUdfVlQgaXMgbm90IHNldA0KIyBDT05GSUdfVU5JWDk4X1BUWVMg
aXMgbm90IHNldA0KQ09ORklHX0xFR0FDWV9QVFlTPXkNCkNPTkZJR19MRUdBQ1lfUFRZX0NPVU5U
PTI1Ng0KIyBDT05GSUdfU0VSSUFMX05PTlNUQU5EQVJEIGlzIG5vdCBzZXQNCkNPTkZJR19OT1pP
TUk9eQ0KQ09ORklHX05fR1NNPXkNCiMgQ09ORklHX1RSQUNFX1NJTksgaXMgbm90IHNldA0KQ09O
RklHX0RFVktNRU09eQ0KDQojDQojIFNlcmlhbCBkcml2ZXJzDQojDQpDT05GSUdfU0VSSUFMXzgy
NTA9eQ0KQ09ORklHX1NFUklBTF84MjUwX0RFUFJFQ0FURURfT1BUSU9OUz15DQpDT05GSUdfU0VS
SUFMXzgyNTBfUE5QPXkNCkNPTkZJR19TRVJJQUxfODI1MF9DT05TT0xFPXkNCkNPTkZJR19GSVhf
RUFSTFlDT05fTUVNPXkNCkNPTkZJR19TRVJJQUxfODI1MF9ETUE9eQ0KQ09ORklHX1NFUklBTF84
MjUwX1BDST15DQpDT05GSUdfU0VSSUFMXzgyNTBfTlJfVUFSVFM9NA0KQ09ORklHX1NFUklBTF84
MjUwX1JVTlRJTUVfVUFSVFM9NA0KQ09ORklHX1NFUklBTF84MjUwX0VYVEVOREVEPXkNCiMgQ09O
RklHX1NFUklBTF84MjUwX01BTllfUE9SVFMgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84MjUw
X1NIQVJFX0lSUT15DQpDT05GSUdfU0VSSUFMXzgyNTBfREVURUNUX0lSUT15DQpDT05GSUdfU0VS
SUFMXzgyNTBfUlNBPXkNCkNPTkZJR19TRVJJQUxfODI1MF9EVz15DQoNCiMNCiMgTm9uLTgyNTAg
c2VyaWFsIHBvcnQgc3VwcG9ydA0KIw0KQ09ORklHX1NFUklBTF9LR0RCX05NST15DQojIENPTkZJ
R19TRVJJQUxfTUZEX0hTVSBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSUFMX1VBUlRMSVRFPXkNCiMg
Q09ORklHX1NFUklBTF9VQVJUTElURV9DT05TT0xFIGlzIG5vdCBzZXQNCkNPTkZJR19TRVJJQUxf
Q09SRT15DQpDT05GSUdfU0VSSUFMX0NPUkVfQ09OU09MRT15DQpDT05GSUdfQ09OU09MRV9QT0xM
PXkNCkNPTkZJR19TRVJJQUxfSlNNPXkNCiMgQ09ORklHX1NFUklBTF9TQ0NOWFAgaXMgbm90IHNl
dA0KIyBDT05GSUdfU0VSSUFMX1RJTUJFUkRBTEUgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF9B
TFRFUkFfSlRBR1VBUlQ9eQ0KQ09ORklHX1NFUklBTF9BTFRFUkFfSlRBR1VBUlRfQ09OU09MRT15
DQpDT05GSUdfU0VSSUFMX0FMVEVSQV9KVEFHVUFSVF9DT05TT0xFX0JZUEFTUz15DQpDT05GSUdf
U0VSSUFMX0FMVEVSQV9VQVJUPXkNCkNPTkZJR19TRVJJQUxfQUxURVJBX1VBUlRfTUFYUE9SVFM9
NA0KQ09ORklHX1NFUklBTF9BTFRFUkFfVUFSVF9CQVVEUkFURT0xMTUyMDANCiMgQ09ORklHX1NF
UklBTF9BTFRFUkFfVUFSVF9DT05TT0xFIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFUklBTF9QQ0hf
VUFSVCBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxfQVJDIGlzIG5vdCBzZXQNCkNPTkZJR19T
RVJJQUxfUlAyPXkNCkNPTkZJR19TRVJJQUxfUlAyX05SX1VBUlRTPTMyDQpDT05GSUdfU0VSSUFM
X0ZTTF9MUFVBUlQ9eQ0KQ09ORklHX1NFUklBTF9GU0xfTFBVQVJUX0NPTlNPTEU9eQ0KQ09ORklH
X1NFUklBTF9TVF9BU0M9eQ0KQ09ORklHX1NFUklBTF9TVF9BU0NfQ09OU09MRT15DQojIENPTkZJ
R19UVFlfUFJJTlRLIGlzIG5vdCBzZXQNCkNPTkZJR19QUklOVEVSPXkNCkNPTkZJR19MUF9DT05T
T0xFPXkNCkNPTkZJR19QUERFVj15DQpDT05GSUdfSFZDX0RSSVZFUj15DQpDT05GSUdfSFZDX0lS
UT15DQpDT05GSUdfSFZDX1hFTj15DQpDT05GSUdfSFZDX1hFTl9GUk9OVEVORD15DQojIENPTkZJ
R19WSVJUSU9fQ09OU09MRSBpcyBub3Qgc2V0DQojIENPTkZJR19JUE1JX0hBTkRMRVIgaXMgbm90
IHNldA0KQ09ORklHX0hXX1JBTkRPTT15DQpDT05GSUdfSFdfUkFORE9NX1RJTUVSSU9NRU09eQ0K
Q09ORklHX0hXX1JBTkRPTV9JTlRFTD15DQpDT05GSUdfSFdfUkFORE9NX0FNRD15DQpDT05GSUdf
SFdfUkFORE9NX1ZJQT15DQpDT05GSUdfSFdfUkFORE9NX1ZJUlRJTz15DQpDT05GSUdfSFdfUkFO
RE9NX1RQTT15DQpDT05GSUdfTlZSQU09eQ0KIyBDT05GSUdfUjM5NjQgaXMgbm90IHNldA0KIyBD
T05GSUdfQVBQTElDT00gaXMgbm90IHNldA0KIyBDT05GSUdfTVdBVkUgaXMgbm90IHNldA0KIyBD
T05GSUdfSFBFVCBpcyBub3Qgc2V0DQpDT05GSUdfSEFOR0NIRUNLX1RJTUVSPXkNCkNPTkZJR19U
Q0dfVFBNPXkNCkNPTkZJR19UQ0dfVElTPXkNCiMgQ09ORklHX1RDR19USVNfSTJDX0FUTUVMIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1RDR19USVNfSTJDX0lORklORU9OIGlzIG5vdCBzZXQNCkNPTkZJ
R19UQ0dfVElTX0kyQ19OVVZPVE9OPXkNCkNPTkZJR19UQ0dfTlNDPXkNCiMgQ09ORklHX1RDR19B
VE1FTCBpcyBub3Qgc2V0DQpDT05GSUdfVENHX0lORklORU9OPXkNCiMgQ09ORklHX1RDR19TVDMz
X0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfVENHX1hFTj15DQojIENPTkZJR19URUxDTE9DSyBpcyBu
b3Qgc2V0DQpDT05GSUdfREVWUE9SVD15DQpDT05GSUdfSTJDPXkNCkNPTkZJR19JMkNfQk9BUkRJ
TkZPPXkNCiMgQ09ORklHX0kyQ19DT01QQVQgaXMgbm90IHNldA0KQ09ORklHX0kyQ19DSEFSREVW
PXkNCkNPTkZJR19JMkNfTVVYPXkNCg0KIw0KIyBNdWx0aXBsZXhlciBJMkMgQ2hpcCBzdXBwb3J0
DQojDQojIENPTkZJR19JMkNfTVVYX0dQSU8gaXMgbm90IHNldA0KQ09ORklHX0kyQ19NVVhfUENB
OTU0MT15DQpDT05GSUdfSTJDX01VWF9QQ0E5NTR4PXkNCkNPTkZJR19JMkNfSEVMUEVSX0FVVE89
eQ0KQ09ORklHX0kyQ19TTUJVUz15DQpDT05GSUdfSTJDX0FMR09CSVQ9eQ0KQ09ORklHX0kyQ19B
TEdPUENBPXkNCg0KIw0KIyBJMkMgSGFyZHdhcmUgQnVzIHN1cHBvcnQNCiMNCg0KIw0KIyBQQyBT
TUJ1cyBob3N0IGNvbnRyb2xsZXIgZHJpdmVycw0KIw0KIyBDT05GSUdfSTJDX0FMSTE1MzUgaXMg
bm90IHNldA0KQ09ORklHX0kyQ19BTEkxNTYzPXkNCkNPTkZJR19JMkNfQUxJMTVYMz15DQojIENP
TkZJR19JMkNfQU1ENzU2IGlzIG5vdCBzZXQNCkNPTkZJR19JMkNfQU1EODExMT15DQojIENPTkZJ
R19JMkNfSTgwMSBpcyBub3Qgc2V0DQpDT05GSUdfSTJDX0lTQ0g9eQ0KQ09ORklHX0kyQ19JU01U
PXkNCkNPTkZJR19JMkNfUElJWDQ9eQ0KIyBDT05GSUdfSTJDX05GT1JDRTIgaXMgbm90IHNldA0K
Q09ORklHX0kyQ19TSVM1NTk1PXkNCkNPTkZJR19JMkNfU0lTNjMwPXkNCkNPTkZJR19JMkNfU0lT
OTZYPXkNCiMgQ09ORklHX0kyQ19WSUEgaXMgbm90IHNldA0KIyBDT05GSUdfSTJDX1ZJQVBSTyBp
cyBub3Qgc2V0DQoNCiMNCiMgQUNQSSBkcml2ZXJzDQojDQpDT05GSUdfSTJDX1NDTUk9eQ0KDQoj
DQojIEkyQyBzeXN0ZW0gYnVzIGRyaXZlcnMgKG1vc3RseSBlbWJlZGRlZCAvIHN5c3RlbS1vbi1j
aGlwKQ0KIw0KIyBDT05GSUdfSTJDX0NCVVNfR1BJTyBpcyBub3Qgc2V0DQpDT05GSUdfSTJDX0RF
U0lHTldBUkVfQ09SRT15DQpDT05GSUdfSTJDX0RFU0lHTldBUkVfUENJPXkNCkNPTkZJR19JMkNf
RUcyMFQ9eQ0KQ09ORklHX0kyQ19HUElPPXkNCkNPTkZJR19JMkNfS0VNUExEPXkNCkNPTkZJR19J
MkNfT0NPUkVTPXkNCkNPTkZJR19JMkNfUENBX1BMQVRGT1JNPXkNCiMgQ09ORklHX0kyQ19QWEFf
UENJIGlzIG5vdCBzZXQNCkNPTkZJR19JMkNfUklJQz15DQpDT05GSUdfSTJDX1NIX01PQklMRT15
DQpDT05GSUdfSTJDX1NJTVRFQz15DQpDT05GSUdfSTJDX1hJTElOWD15DQojIENPTkZJR19JMkNf
UkNBUiBpcyBub3Qgc2V0DQoNCiMNCiMgRXh0ZXJuYWwgSTJDL1NNQnVzIGFkYXB0ZXIgZHJpdmVy
cw0KIw0KQ09ORklHX0kyQ19QQVJQT1JUPXkNCkNPTkZJR19JMkNfUEFSUE9SVF9MSUdIVD15DQoj
IENPTkZJR19JMkNfVEFPU19FVk0gaXMgbm90IHNldA0KDQojDQojIE90aGVyIEkyQy9TTUJ1cyBi
dXMgZHJpdmVycw0KIw0KQ09ORklHX0kyQ19ERUJVR19DT1JFPXkNCkNPTkZJR19JMkNfREVCVUdf
QUxHTz15DQojIENPTkZJR19JMkNfREVCVUdfQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NQSSBp
cyBub3Qgc2V0DQpDT05GSUdfSFNJPXkNCkNPTkZJR19IU0lfQk9BUkRJTkZPPXkNCg0KIw0KIyBI
U0kgY2xpZW50cw0KIw0KIyBDT05GSUdfSFNJX0NIQVIgaXMgbm90IHNldA0KDQojDQojIFBQUyBz
dXBwb3J0DQojDQpDT05GSUdfUFBTPXkNCiMgQ09ORklHX1BQU19ERUJVRyBpcyBub3Qgc2V0DQoN
CiMNCiMgUFBTIGNsaWVudHMgc3VwcG9ydA0KIw0KQ09ORklHX1BQU19DTElFTlRfS1RJTUVSPXkN
CkNPTkZJR19QUFNfQ0xJRU5UX0xESVNDPXkNCiMgQ09ORklHX1BQU19DTElFTlRfUEFSUE9SVCBp
cyBub3Qgc2V0DQpDT05GSUdfUFBTX0NMSUVOVF9HUElPPXkNCg0KIw0KIyBQUFMgZ2VuZXJhdG9y
cyBzdXBwb3J0DQojDQoNCiMNCiMgUFRQIGNsb2NrIHN1cHBvcnQNCiMNCkNPTkZJR19QVFBfMTU4
OF9DTE9DSz15DQoNCiMNCiMgRW5hYmxlIFBIWUxJQiBhbmQgTkVUV09SS19QSFlfVElNRVNUQU1Q
SU5HIHRvIHNlZSB0aGUgYWRkaXRpb25hbCBjbG9ja3MuDQojDQojIENPTkZJR19QVFBfMTU4OF9D
TE9DS19QQ0ggaXMgbm90IHNldA0KQ09ORklHX0FSQ0hfV0FOVF9PUFRJT05BTF9HUElPTElCPXkN
CkNPTkZJR19HUElPTElCPXkNCkNPTkZJR19HUElPX0RFVlJFUz15DQpDT05GSUdfR1BJT19BQ1BJ
PXkNCkNPTkZJR19ERUJVR19HUElPPXkNCkNPTkZJR19HUElPX1NZU0ZTPXkNCkNPTkZJR19HUElP
X0dFTkVSSUM9eQ0KQ09ORklHX0dQSU9fREE5MDU1PXkNCkNPTkZJR19HUElPX01BWDczMFg9eQ0K
DQojDQojIE1lbW9yeSBtYXBwZWQgR1BJTyBkcml2ZXJzOg0KIw0KQ09ORklHX0dQSU9fR0VORVJJ
Q19QTEFURk9STT15DQojIENPTkZJR19HUElPX0lUODc2MUUgaXMgbm90IHNldA0KQ09ORklHX0dQ
SU9fRjcxODhYPXkNCiMgQ09ORklHX0dQSU9fU0NIMzExWCBpcyBub3Qgc2V0DQojIENPTkZJR19H
UElPX1RTNTUwMCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19TQ0g9eQ0KIyBDT05GSUdfR1BJT19J
Q0ggaXMgbm90IHNldA0KQ09ORklHX0dQSU9fVlg4NTU9eQ0KQ09ORklHX0dQSU9fTFlOWFBPSU5U
PXkNCg0KIw0KIyBJMkMgR1BJTyBleHBhbmRlcnM6DQojDQpDT05GSUdfR1BJT19BUklaT05BPXkN
CkNPTkZJR19HUElPX01BWDczMDA9eQ0KQ09ORklHX0dQSU9fTUFYNzMyWD15DQojIENPTkZJR19H
UElPX01BWDczMlhfSVJRIGlzIG5vdCBzZXQNCiMgQ09ORklHX0dQSU9fUENBOTUzWCBpcyBub3Qg
c2V0DQpDT05GSUdfR1BJT19QQ0Y4NTdYPXkNCiMgQ09ORklHX0dQSU9fU1gxNTBYIGlzIG5vdCBz
ZXQNCkNPTkZJR19HUElPX1RDMzU4OVg9eQ0KIyBDT05GSUdfR1BJT19UV0w0MDMwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0dQSU9fVFdMNjA0MCBpcyBub3Qgc2V0DQojIENPTkZJR19HUElPX1dNODMx
WCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19XTTgzNTA9eQ0KQ09ORklHX0dQSU9fQURQNTU4OD15
DQojIENPTkZJR19HUElPX0FEUDU1ODhfSVJRIGlzIG5vdCBzZXQNCg0KIw0KIyBQQ0kgR1BJTyBl
eHBhbmRlcnM6DQojDQojIENPTkZJR19HUElPX0JUOFhYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0dQ
SU9fQU1EODExMSBpcyBub3Qgc2V0DQojIENPTkZJR19HUElPX0lOVEVMX01JRCBpcyBub3Qgc2V0
DQojIENPTkZJR19HUElPX1BDSCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19NTF9JT0g9eQ0KQ09O
RklHX0dQSU9fVElNQkVSREFMRT15DQpDT05GSUdfR1BJT19SREMzMjFYPXkNCg0KIw0KIyBTUEkg
R1BJTyBleHBhbmRlcnM6DQojDQpDT05GSUdfR1BJT19NQ1AyM1MwOD15DQoNCiMNCiMgQUM5NyBH
UElPIGV4cGFuZGVyczoNCiMNCg0KIw0KIyBMUEMgR1BJTyBleHBhbmRlcnM6DQojDQojIENPTkZJ
R19HUElPX0tFTVBMRCBpcyBub3Qgc2V0DQoNCiMNCiMgTU9EVUxidXMgR1BJTyBleHBhbmRlcnM6
DQojDQpDT05GSUdfR1BJT19KQU5aX1RUTD15DQpDT05GSUdfR1BJT19QQUxNQVM9eQ0KIyBDT05G
SUdfR1BJT19UUFM2NTg2WCBpcyBub3Qgc2V0DQoNCiMNCiMgVVNCIEdQSU8gZXhwYW5kZXJzOg0K
Iw0KQ09ORklHX1cxPXkNCkNPTkZJR19XMV9DT049eQ0KDQojDQojIDEtd2lyZSBCdXMgTWFzdGVy
cw0KIw0KIyBDT05GSUdfVzFfTUFTVEVSX01BVFJPWCBpcyBub3Qgc2V0DQojIENPTkZJR19XMV9N
QVNURVJfRFMyNDgyIGlzIG5vdCBzZXQNCkNPTkZJR19XMV9NQVNURVJfRFMxV009eQ0KIyBDT05G
SUdfVzFfTUFTVEVSX0dQSU8gaXMgbm90IHNldA0KDQojDQojIDEtd2lyZSBTbGF2ZXMNCiMNCkNP
TkZJR19XMV9TTEFWRV9USEVSTT15DQojIENPTkZJR19XMV9TTEFWRV9TTUVNIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1cxX1NMQVZFX0RTMjQwOCBpcyBub3Qgc2V0DQojIENPTkZJR19XMV9TTEFWRV9E
UzI0MTMgaXMgbm90IHNldA0KQ09ORklHX1cxX1NMQVZFX0RTMjQyMz15DQojIENPTkZJR19XMV9T
TEFWRV9EUzI0MzEgaXMgbm90IHNldA0KQ09ORklHX1cxX1NMQVZFX0RTMjQzMz15DQojIENPTkZJ
R19XMV9TTEFWRV9EUzI0MzNfQ1JDIGlzIG5vdCBzZXQNCiMgQ09ORklHX1cxX1NMQVZFX0RTMjc2
MCBpcyBub3Qgc2V0DQpDT05GSUdfVzFfU0xBVkVfRFMyNzgwPXkNCkNPTkZJR19XMV9TTEFWRV9E
UzI3ODE9eQ0KQ09ORklHX1cxX1NMQVZFX0RTMjhFMDQ9eQ0KQ09ORklHX1cxX1NMQVZFX0JRMjcw
MDA9eQ0KQ09ORklHX1BPV0VSX1NVUFBMWT15DQojIENPTkZJR19QT1dFUl9TVVBQTFlfREVCVUcg
aXMgbm90IHNldA0KQ09ORklHX1BEQV9QT1dFUj15DQojIENPTkZJR19XTTgzMVhfQkFDS1VQIGlz
IG5vdCBzZXQNCkNPTkZJR19XTTgzMVhfUE9XRVI9eQ0KQ09ORklHX1dNODM1MF9QT1dFUj15DQpD
T05GSUdfVEVTVF9QT1dFUj15DQpDT05GSUdfQkFUVEVSWV84OFBNODYwWD15DQpDT05GSUdfQkFU
VEVSWV9EUzI3ODA9eQ0KQ09ORklHX0JBVFRFUllfRFMyNzgxPXkNCkNPTkZJR19CQVRURVJZX0RT
Mjc4Mj15DQojIENPTkZJR19CQVRURVJZX1dNOTdYWCBpcyBub3Qgc2V0DQpDT05GSUdfQkFUVEVS
WV9TQlM9eQ0KIyBDT05GSUdfQkFUVEVSWV9CUTI3eDAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0JB
VFRFUllfTUFYMTcwNDAgaXMgbm90IHNldA0KQ09ORklHX0JBVFRFUllfTUFYMTcwNDI9eQ0KIyBD
T05GSUdfQ0hBUkdFUl84OFBNODYwWCBpcyBub3Qgc2V0DQojIENPTkZJR19DSEFSR0VSX1BDRjUw
NjMzIGlzIG5vdCBzZXQNCkNPTkZJR19DSEFSR0VSX01BWDg5MDM9eQ0KQ09ORklHX0NIQVJHRVJf
VFdMNDAzMD15DQpDT05GSUdfQ0hBUkdFUl9MUDg3Mjc9eQ0KQ09ORklHX0NIQVJHRVJfR1BJTz15
DQpDT05GSUdfQ0hBUkdFUl9NQU5BR0VSPXkNCkNPTkZJR19DSEFSR0VSX0JRMjQxNVg9eQ0KIyBD
T05GSUdfQ0hBUkdFUl9CUTI0MTkwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NIQVJHRVJfQlEyNDcz
NSBpcyBub3Qgc2V0DQpDT05GSUdfQ0hBUkdFUl9TTUIzNDc9eQ0KQ09ORklHX0JBVFRFUllfR09M
REZJU0g9eQ0KIyBDT05GSUdfUE9XRVJfUkVTRVQgaXMgbm90IHNldA0KQ09ORklHX1BPV0VSX0FW
Uz15DQpDT05GSUdfSFdNT049eQ0KQ09ORklHX0hXTU9OX1ZJRD15DQpDT05GSUdfSFdNT05fREVC
VUdfQ0hJUD15DQoNCiMNCiMgTmF0aXZlIGRyaXZlcnMNCiMNCkNPTkZJR19TRU5TT1JTX0FCSVRV
R1VSVT15DQpDT05GSUdfU0VOU09SU19BQklUVUdVUlUzPXkNCkNPTkZJR19TRU5TT1JTX0FENzQx
ND15DQpDT05GSUdfU0VOU09SU19BRDc0MTg9eQ0KQ09ORklHX1NFTlNPUlNfQURNMTAyMT15DQpD
T05GSUdfU0VOU09SU19BRE0xMDI1PXkNCkNPTkZJR19TRU5TT1JTX0FETTEwMjY9eQ0KQ09ORklH
X1NFTlNPUlNfQURNMTAyOT15DQpDT05GSUdfU0VOU09SU19BRE0xMDMxPXkNCiMgQ09ORklHX1NF
TlNPUlNfQURNOTI0MCBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19BRFQ3WDEwPXkNCkNPTkZJ
R19TRU5TT1JTX0FEVDc0MTA9eQ0KQ09ORklHX1NFTlNPUlNfQURUNzQxMT15DQpDT05GSUdfU0VO
U09SU19BRFQ3NDYyPXkNCkNPTkZJR19TRU5TT1JTX0FEVDc0NzA9eQ0KIyBDT05GSUdfU0VOU09S
U19BRFQ3NDc1IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfQVNDNzYyMSBpcyBub3Qgc2V0
DQpDT05GSUdfU0VOU09SU19LOFRFTVA9eQ0KQ09ORklHX1NFTlNPUlNfSzEwVEVNUD15DQpDT05G
SUdfU0VOU09SU19GQU0xNUhfUE9XRVI9eQ0KQ09ORklHX1NFTlNPUlNfQVNCMTAwPXkNCkNPTkZJ
R19TRU5TT1JTX0FUWFAxPXkNCkNPTkZJR19TRU5TT1JTX0RTNjIwPXkNCkNPTkZJR19TRU5TT1JT
X0RTMTYyMT15DQpDT05GSUdfU0VOU09SU19EQTkwNTU9eQ0KQ09ORklHX1NFTlNPUlNfSTVLX0FN
Qj15DQpDT05GSUdfU0VOU09SU19GNzE4MDVGPXkNCkNPTkZJR19TRU5TT1JTX0Y3MTg4MkZHPXkN
CkNPTkZJR19TRU5TT1JTX0Y3NTM3NVM9eQ0KIyBDT05GSUdfU0VOU09SU19GU0NITUQgaXMgbm90
IHNldA0KQ09ORklHX1NFTlNPUlNfRzc2MEE9eQ0KQ09ORklHX1NFTlNPUlNfRzc2Mj15DQojIENP
TkZJR19TRU5TT1JTX0dMNTE4U00gaXMgbm90IHNldA0KQ09ORklHX1NFTlNPUlNfR0w1MjBTTT15
DQpDT05GSUdfU0VOU09SU19HUElPX0ZBTj15DQpDT05GSUdfU0VOU09SU19ISUg2MTMwPXkNCiMg
Q09ORklHX1NFTlNPUlNfSFRVMjEgaXMgbm90IHNldA0KQ09ORklHX1NFTlNPUlNfQ09SRVRFTVA9
eQ0KQ09ORklHX1NFTlNPUlNfSVQ4Nz15DQpDT05GSUdfU0VOU09SU19KQzQyPXkNCkNPTkZJR19T
RU5TT1JTX0xJTkVBR0U9eQ0KQ09ORklHX1NFTlNPUlNfTE02Mz15DQpDT05GSUdfU0VOU09SU19M
TTczPXkNCkNPTkZJR19TRU5TT1JTX0xNNzU9eQ0KIyBDT05GSUdfU0VOU09SU19MTTc3IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfTE03OCBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19M
TTgwPXkNCkNPTkZJR19TRU5TT1JTX0xNODM9eQ0KQ09ORklHX1NFTlNPUlNfTE04NT15DQpDT05G
SUdfU0VOU09SU19MTTg3PXkNCkNPTkZJR19TRU5TT1JTX0xNOTA9eQ0KQ09ORklHX1NFTlNPUlNf
TE05Mj15DQpDT05GSUdfU0VOU09SU19MTTkzPXkNCkNPTkZJR19TRU5TT1JTX0xUQzQxNTE9eQ0K
Q09ORklHX1NFTlNPUlNfTFRDNDIxNT15DQpDT05GSUdfU0VOU09SU19MVEM0MjQ1PXkNCiMgQ09O
RklHX1NFTlNPUlNfTFRDNDI2MSBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19MTTk1MjM0PXkN
CkNPTkZJR19TRU5TT1JTX0xNOTUyNDE9eQ0KQ09ORklHX1NFTlNPUlNfTE05NTI0NT15DQpDT05G
SUdfU0VOU09SU19NQVgxNjA2NT15DQpDT05GSUdfU0VOU09SU19NQVgxNjE5PXkNCiMgQ09ORklH
X1NFTlNPUlNfTUFYMTY2OCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JTX01BWDE5NyBpcyBu
b3Qgc2V0DQpDT05GSUdfU0VOU09SU19NQVg2NjM5PXkNCkNPTkZJR19TRU5TT1JTX01BWDY2NDI9
eQ0KQ09ORklHX1NFTlNPUlNfTUFYNjY1MD15DQojIENPTkZJR19TRU5TT1JTX01BWDY2OTcgaXMg
bm90IHNldA0KQ09ORklHX1NFTlNPUlNfTUNQMzAyMT15DQpDT05GSUdfU0VOU09SU19OQ1Q2Nzc1
PXkNCkNPTkZJR19TRU5TT1JTX05UQ19USEVSTUlTVE9SPXkNCkNPTkZJR19TRU5TT1JTX1BDODcz
NjA9eQ0KQ09ORklHX1NFTlNPUlNfUEM4NzQyNz15DQpDT05GSUdfU0VOU09SU19QQ0Y4NTkxPXkN
CiMgQ09ORklHX1BNQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfU0hUMTUgaXMgbm90
IHNldA0KIyBDT05GSUdfU0VOU09SU19TSFQyMSBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JT
X1NJUzU1OTUgaXMgbm90IHNldA0KIyBDT05GSUdfU0VOU09SU19TTU02NjUgaXMgbm90IHNldA0K
IyBDT05GSUdfU0VOU09SU19ETUUxNzM3IGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX0VNQzE0
MDM9eQ0KQ09ORklHX1NFTlNPUlNfRU1DMjEwMz15DQpDT05GSUdfU0VOU09SU19FTUM2VzIwMT15
DQpDT05GSUdfU0VOU09SU19TTVNDNDdNMT15DQpDT05GSUdfU0VOU09SU19TTVNDNDdNMTkyPXkN
CkNPTkZJR19TRU5TT1JTX1NNU0M0N0IzOTc9eQ0KQ09ORklHX1NFTlNPUlNfU0NINTZYWF9DT01N
T049eQ0KQ09ORklHX1NFTlNPUlNfU0NINTYyNz15DQpDT05GSUdfU0VOU09SU19TQ0g1NjM2PXkN
CkNPTkZJR19TRU5TT1JTX0FEUzEwMTU9eQ0KIyBDT05GSUdfU0VOU09SU19BRFM3ODI4IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfQU1DNjgyMSBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5T
T1JTX0lOQTIwOSBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19JTkEyWFg9eQ0KQ09ORklHX1NF
TlNPUlNfVEhNQzUwPXkNCiMgQ09ORklHX1NFTlNPUlNfVE1QMTAyIGlzIG5vdCBzZXQNCkNPTkZJ
R19TRU5TT1JTX1RNUDQwMT15DQojIENPTkZJR19TRU5TT1JTX1RNUDQyMSBpcyBub3Qgc2V0DQoj
IENPTkZJR19TRU5TT1JTX1ZJQV9DUFVURU1QIGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX1ZJ
QTY4NkE9eQ0KQ09ORklHX1NFTlNPUlNfVlQxMjExPXkNCiMgQ09ORklHX1NFTlNPUlNfVlQ4MjMx
IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfVzgzNzgxRCBpcyBub3Qgc2V0DQpDT05GSUdf
U0VOU09SU19XODM3OTFEPXkNCkNPTkZJR19TRU5TT1JTX1c4Mzc5MkQ9eQ0KQ09ORklHX1NFTlNP
UlNfVzgzNzkzPXkNCiMgQ09ORklHX1NFTlNPUlNfVzgzNzk1IGlzIG5vdCBzZXQNCkNPTkZJR19T
RU5TT1JTX1c4M0w3ODVUUz15DQpDT05GSUdfU0VOU09SU19XODNMNzg2Tkc9eQ0KQ09ORklHX1NF
TlNPUlNfVzgzNjI3SEY9eQ0KQ09ORklHX1NFTlNPUlNfVzgzNjI3RUhGPXkNCkNPTkZJR19TRU5T
T1JTX1dNODMxWD15DQpDT05GSUdfU0VOU09SU19XTTgzNTA9eQ0KQ09ORklHX1NFTlNPUlNfQVBQ
TEVTTUM9eQ0KQ09ORklHX1NFTlNPUlNfTUMxMzc4M19BREM9eQ0KDQojDQojIEFDUEkgZHJpdmVy
cw0KIw0KQ09ORklHX1NFTlNPUlNfQUNQSV9QT1dFUj15DQpDT05GSUdfU0VOU09SU19BVEswMTEw
PXkNCkNPTkZJR19USEVSTUFMPXkNCiMgQ09ORklHX1RIRVJNQUxfSFdNT04gaXMgbm90IHNldA0K
Q09ORklHX1RIRVJNQUxfREVGQVVMVF9HT1ZfU1RFUF9XSVNFPXkNCiMgQ09ORklHX1RIRVJNQUxf
REVGQVVMVF9HT1ZfRkFJUl9TSEFSRSBpcyBub3Qgc2V0DQojIENPTkZJR19USEVSTUFMX0RFRkFV
TFRfR09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldA0KQ09ORklHX1RIRVJNQUxfR09WX0ZBSVJfU0hB
UkU9eQ0KQ09ORklHX1RIRVJNQUxfR09WX1NURVBfV0lTRT15DQpDT05GSUdfVEhFUk1BTF9HT1Zf
VVNFUl9TUEFDRT15DQojIENPTkZJR19USEVSTUFMX0VNVUxBVElPTiBpcyBub3Qgc2V0DQpDT05G
SUdfUkNBUl9USEVSTUFMPXkNCkNPTkZJR19JTlRFTF9QT1dFUkNMQU1QPXkNCkNPTkZJR19BQ1BJ
X0lOVDM0MDNfVEhFUk1BTD15DQoNCiMNCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBkcml2
ZXJzDQojDQpDT05GSUdfV0FUQ0hET0c9eQ0KQ09ORklHX1dBVENIRE9HX0NPUkU9eQ0KQ09ORklH
X1dBVENIRE9HX05PV0FZT1VUPXkNCg0KIw0KIyBXYXRjaGRvZyBEZXZpY2UgRHJpdmVycw0KIw0K
IyBDT05GSUdfU09GVF9XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05GSUdfREE5MDU1X1dBVENIRE9H
PXkNCiMgQ09ORklHX1dNODMxWF9XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05GSUdfV004MzUwX1dB
VENIRE9HPXkNCkNPTkZJR19UV0w0MDMwX1dBVENIRE9HPXkNCkNPTkZJR19SRVRVX1dBVENIRE9H
PXkNCiMgQ09ORklHX0FDUVVJUkVfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FEVkFOVEVDSF9X
RFQgaXMgbm90IHNldA0KQ09ORklHX0FMSU0xNTM1X1dEVD15DQpDT05GSUdfQUxJTTcxMDFfV0RU
PXkNCiMgQ09ORklHX0Y3MTgwOEVfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NQNTEwMF9UQ08g
aXMgbm90IHNldA0KIyBDT05GSUdfU0M1MjBfV0RUIGlzIG5vdCBzZXQNCkNPTkZJR19TQkNfRklU
UEMyX1dBVENIRE9HPXkNCkNPTkZJR19FVVJPVEVDSF9XRFQ9eQ0KQ09ORklHX0lCNzAwX1dEVD15
DQpDT05GSUdfSUJNQVNSPXkNCkNPTkZJR19XQUZFUl9XRFQ9eQ0KQ09ORklHX0k2MzAwRVNCX1dE
VD15DQpDT05GSUdfSUU2WFhfV0RUPXkNCkNPTkZJR19JVENPX1dEVD15DQpDT05GSUdfSVRDT19W
RU5ET1JfU1VQUE9SVD15DQpDT05GSUdfSVQ4NzEyRl9XRFQ9eQ0KQ09ORklHX0lUODdfV0RUPXkN
CiMgQ09ORklHX0hQX1dBVENIRE9HIGlzIG5vdCBzZXQNCkNPTkZJR19LRU1QTERfV0RUPXkNCkNP
TkZJR19TQzEyMDBfV0RUPXkNCkNPTkZJR19QQzg3NDEzX1dEVD15DQpDT05GSUdfTlZfVENPPXkN
CkNPTkZJR182MFhYX1dEVD15DQpDT05GSUdfU0JDODM2MF9XRFQ9eQ0KQ09ORklHX0NQVTVfV0RU
PXkNCkNPTkZJR19TTVNDX1NDSDMxMVhfV0RUPXkNCkNPTkZJR19TTVNDMzdCNzg3X1dEVD15DQoj
IENPTkZJR19WSUFfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1c4MzYyN0hGX1dEVCBpcyBub3Qg
c2V0DQpDT05GSUdfVzgzNjk3SEZfV0RUPXkNCkNPTkZJR19XODM2OTdVR19XRFQ9eQ0KQ09ORklH
X1c4Mzg3N0ZfV0RUPXkNCiMgQ09ORklHX1c4Mzk3N0ZfV0RUIGlzIG5vdCBzZXQNCkNPTkZJR19N
QUNIWl9XRFQ9eQ0KIyBDT05GSUdfU0JDX0VQWF9DM19XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05G
SUdfTUVOX0EyMV9XRFQ9eQ0KQ09ORklHX1hFTl9XRFQ9eQ0KDQojDQojIFBDSS1iYXNlZCBXYXRj
aGRvZyBDYXJkcw0KIw0KQ09ORklHX1BDSVBDV0FUQ0hET0c9eQ0KQ09ORklHX1dEVFBDST15DQpD
T05GSUdfU1NCX1BPU1NJQkxFPXkNCg0KIw0KIyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUNCiMN
CiMgQ09ORklHX1NTQiBpcyBub3Qgc2V0DQpDT05GSUdfQkNNQV9QT1NTSUJMRT15DQoNCiMNCiMg
QnJvYWRjb20gc3BlY2lmaWMgQU1CQQ0KIw0KQ09ORklHX0JDTUE9eQ0KQ09ORklHX0JDTUFfSE9T
VF9QQ0lfUE9TU0lCTEU9eQ0KQ09ORklHX0JDTUFfSE9TVF9QQ0k9eQ0KQ09ORklHX0JDTUFfSE9T
VF9TT0M9eQ0KQ09ORklHX0JDTUFfRFJJVkVSX0dNQUNfQ01OPXkNCkNPTkZJR19CQ01BX0RSSVZF
Ul9HUElPPXkNCkNPTkZJR19CQ01BX0RFQlVHPXkNCg0KIw0KIyBNdWx0aWZ1bmN0aW9uIGRldmlj
ZSBkcml2ZXJzDQojDQpDT05GSUdfTUZEX0NPUkU9eQ0KIyBDT05GSUdfTUZEX0NTNTUzNSBpcyBu
b3Qgc2V0DQpDT05GSUdfTUZEX0FTMzcxMT15DQojIENPTkZJR19QTUlDX0FEUDU1MjAgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX0FBVDI4NzBfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRf
Q1JPU19FQyBpcyBub3Qgc2V0DQojIENPTkZJR19QTUlDX0RBOTAzWCBpcyBub3Qgc2V0DQojIENP
TkZJR19NRkRfREE5MDUyX0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX0RBOTA1NT15DQpDT05G
SUdfTUZEX0RBOTA2Mz15DQpDT05GSUdfTUZEX01DMTNYWFg9eQ0KQ09ORklHX01GRF9NQzEzWFhY
X0kyQz15DQpDT05GSUdfSFRDX1BBU0lDMz15DQojIENPTkZJR19IVENfSTJDUExEIGlzIG5vdCBz
ZXQNCkNPTkZJR19MUENfSUNIPXkNCkNPTkZJR19MUENfU0NIPXkNCkNPTkZJR19NRkRfSkFOWl9D
TU9ESU89eQ0KQ09ORklHX01GRF9LRU1QTEQ9eQ0KQ09ORklHX01GRF84OFBNODAwPXkNCiMgQ09O
RklHX01GRF84OFBNODA1IGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfODhQTTg2MFg9eQ0KIyBDT05G
SUdfTUZEX01BWDE0NTc3IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9NQVg3NzY4NiBpcyBub3Qg
c2V0DQpDT05GSUdfTUZEX01BWDc3NjkzPXkNCiMgQ09ORklHX01GRF9NQVg4OTA3IGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01GRF9NQVg4OTI1IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9NQVg4OTk3
IGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfTUFYODk5OD15DQpDT05GSUdfTUZEX1JFVFU9eQ0KQ09O
RklHX01GRF9QQ0Y1MDYzMz15DQpDT05GSUdfUENGNTA2MzNfQURDPXkNCkNPTkZJR19QQ0Y1MDYz
M19HUElPPXkNCiMgQ09ORklHX1VDQjE0MDBfQ09SRSBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX1JE
QzMyMVg9eQ0KIyBDT05GSUdfTUZEX1JUU1hfUENJIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9S
QzVUNTgzIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBub3Qgc2V0DQojIENP
TkZJR19NRkRfU0k0NzZYX0NPUkUgaXMgbm90IHNldA0KQ09ORklHX01GRF9TTTUwMT15DQojIENP
TkZJR19NRkRfU001MDFfR1BJTyBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX1NNU0M9eQ0KIyBDT05G
SUdfQUJYNTAwX0NPUkUgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1NUTVBFIGlzIG5vdCBzZXQN
CkNPTkZJR19NRkRfU1lTQ09OPXkNCkNPTkZJR19NRkRfVElfQU0zMzVYX1RTQ0FEQz15DQojIENP
TkZJR19NRkRfTFAzOTQzIGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfTFA4Nzg4PXkNCkNPTkZJR19N
RkRfUEFMTUFTPXkNCkNPTkZJR19UUFM2MTA1WD15DQpDT05GSUdfVFBTNjUwMTA9eQ0KQ09ORklH
X1RQUzY1MDdYPXkNCiMgQ09ORklHX01GRF9UUFM2NTA5MCBpcyBub3Qgc2V0DQpDT05GSUdfTUZE
X1RQUzY1MjE3PXkNCkNPTkZJR19NRkRfVFBTNjU4Nlg9eQ0KIyBDT05GSUdfTUZEX1RQUzY1OTEw
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9UUFM2NTkxMiBpcyBub3Qgc2V0DQojIENPTkZJR19N
RkRfVFBTNjU5MTJfSTJDIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9UUFM4MDAzMSBpcyBub3Qg
c2V0DQpDT05GSUdfVFdMNDAzMF9DT1JFPXkNCiMgQ09ORklHX1RXTDQwMzBfTUFEQyBpcyBub3Qg
c2V0DQpDT05GSUdfTUZEX1RXTDQwMzBfQVVESU89eQ0KQ09ORklHX1RXTDYwNDBfQ09SRT15DQpD
T05GSUdfTUZEX1dMMTI3M19DT1JFPXkNCkNPTkZJR19NRkRfTE0zNTMzPXkNCkNPTkZJR19NRkRf
VElNQkVSREFMRT15DQpDT05GSUdfTUZEX1RDMzU4OVg9eQ0KIyBDT05GSUdfTUZEX1RNSU8gaXMg
bm90IHNldA0KQ09ORklHX01GRF9WWDg1NT15DQpDT05GSUdfTUZEX0FSSVpPTkE9eQ0KQ09ORklH
X01GRF9BUklaT05BX0kyQz15DQojIENPTkZJR19NRkRfV001MTAyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX01GRF9XTTUxMTAgaXMgbm90IHNldA0KQ09ORklHX01GRF9XTTg5OTc9eQ0KQ09ORklHX01G
RF9XTTg0MDA9eQ0KQ09ORklHX01GRF9XTTgzMVg9eQ0KQ09ORklHX01GRF9XTTgzMVhfSTJDPXkN
CkNPTkZJR19NRkRfV004MzUwPXkNCkNPTkZJR19NRkRfV004MzUwX0kyQz15DQojIENPTkZJR19N
RkRfV004OTk0IGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1I9eQ0KIyBDT05GSUdfUkVHVUxB
VE9SX0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1JfRklYRURfVk9MVEFHRT15DQoj
IENPTkZJR19SRUdVTEFUT1JfVklSVFVBTF9DT05TVU1FUiBpcyBub3Qgc2V0DQpDT05GSUdfUkVH
VUxBVE9SX1VTRVJTUEFDRV9DT05TVU1FUj15DQpDT05GSUdfUkVHVUxBVE9SXzg4UE04MDA9eQ0K
IyBDT05GSUdfUkVHVUxBVE9SXzg4UE04NjA3IGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1Jf
QUNUODg2NT15DQojIENPTkZJR19SRUdVTEFUT1JfQUQ1Mzk4IGlzIG5vdCBzZXQNCkNPTkZJR19S
RUdVTEFUT1JfQU5BVE9QPXkNCiMgQ09ORklHX1JFR1VMQVRPUl9BUklaT05BIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JFR1VMQVRPUl9BUzM3MTEgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9E
QTkwNTU9eQ0KIyBDT05GSUdfUkVHVUxBVE9SX0RBOTA2MyBpcyBub3Qgc2V0DQpDT05GSUdfUkVH
VUxBVE9SX0RBOTIxMD15DQojIENPTkZJR19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNldA0K
IyBDT05GSUdfUkVHVUxBVE9SX0dQSU8gaXMgbm90IHNldA0KIyBDT05GSUdfUkVHVUxBVE9SX0lT
TDYyNzFBIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1JfTFAzOTcxPXkNCkNPTkZJR19SRUdV
TEFUT1JfTFAzOTcyPXkNCkNPTkZJR19SRUdVTEFUT1JfTFA4NzJYPXkNCiMgQ09ORklHX1JFR1VM
QVRPUl9MUDg3NTUgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9MUDg3ODg9eQ0KIyBDT05G
SUdfUkVHVUxBVE9SX01BWDE1ODYgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9NQVg4NjQ5
PXkNCiMgQ09ORklHX1JFR1VMQVRPUl9NQVg4NjYwIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFU
T1JfTUFYODk1Mj15DQojIENPTkZJR19SRUdVTEFUT1JfTUFYODk3MyBpcyBub3Qgc2V0DQojIENP
TkZJR19SRUdVTEFUT1JfTUFYODk5OCBpcyBub3Qgc2V0DQpDT05GSUdfUkVHVUxBVE9SX01BWDc3
NjkzPXkNCkNPTkZJR19SRUdVTEFUT1JfTUMxM1hYWF9DT1JFPXkNCkNPTkZJR19SRUdVTEFUT1Jf
TUMxMzc4Mz15DQojIENPTkZJR19SRUdVTEFUT1JfTUMxMzg5MiBpcyBub3Qgc2V0DQpDT05GSUdf
UkVHVUxBVE9SX1BBTE1BUz15DQojIENPTkZJR19SRUdVTEFUT1JfUENGNTA2MzMgaXMgbm90IHNl
dA0KQ09ORklHX1JFR1VMQVRPUl9QRlVaRTEwMD15DQojIENPTkZJR19SRUdVTEFUT1JfVFBTNTE2
MzIgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9UUFM2MTA1WD15DQojIENPTkZJR19SRUdV
TEFUT1JfVFBTNjIzNjAgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9UUFM2NTAyMz15DQpD
T05GSUdfUkVHVUxBVE9SX1RQUzY1MDdYPXkNCkNPTkZJR19SRUdVTEFUT1JfVFBTNjUyMTc9eQ0K
IyBDT05GSUdfUkVHVUxBVE9SX1RQUzY1ODZYIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1Jf
VFdMNDAzMD15DQojIENPTkZJR19SRUdVTEFUT1JfV004MzFYIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1JFR1VMQVRPUl9XTTgzNTAgaXMgbm90IHNldA0KIyBDT05GSUdfUkVHVUxBVE9SX1dNODQwMCBp
cyBub3Qgc2V0DQpDT05GSUdfTUVESUFfU1VQUE9SVD15DQoNCiMNCiMgTXVsdGltZWRpYSBjb3Jl
IHN1cHBvcnQNCiMNCkNPTkZJR19NRURJQV9DQU1FUkFfU1VQUE9SVD15DQojIENPTkZJR19NRURJ
QV9BTkFMT0dfVFZfU1VQUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfTUVESUFfRElHSVRBTF9UVl9T
VVBQT1JUPXkNCiMgQ09ORklHX01FRElBX1JBRElPX1NVUFBPUlQgaXMgbm90IHNldA0KQ09ORklH
X01FRElBX1JDX1NVUFBPUlQ9eQ0KIyBDT05GSUdfTUVESUFfQ09OVFJPTExFUiBpcyBub3Qgc2V0
DQpDT05GSUdfVklERU9fREVWPXkNCkNPTkZJR19WSURFT19WNEwyPXkNCiMgQ09ORklHX1ZJREVP
X0FEVl9ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9fRklYRURfTUlOT1JfUkFOR0VTPXkN
CkNPTkZJR19WNEwyX01FTTJNRU1fREVWPXkNCkNPTkZJR19WSURFT0JVRjJfQ09SRT15DQpDT05G
SUdfVklERU9CVUYyX01FTU9QUz15DQpDT05GSUdfVklERU9CVUYyX0RNQV9DT05USUc9eQ0KQ09O
RklHX1ZJREVPQlVGMl9WTUFMTE9DPXkNCkNPTkZJR19WSURFT0JVRjJfRE1BX1NHPXkNCkNPTkZJ
R19EVkJfQ09SRT15DQojIENPTkZJR19EVkJfTkVUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RUUENJ
X0VFUFJPTSBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX01BWF9BREFQVEVSUz04DQojIENPTkZJR19E
VkJfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldA0KDQojDQojIE1lZGlhIGRyaXZlcnMNCiMNCkNP
TkZJR19SQ19DT1JFPXkNCkNPTkZJR19SQ19NQVA9eQ0KQ09ORklHX1JDX0RFQ09ERVJTPXkNCkNP
TkZJR19MSVJDPXkNCkNPTkZJR19JUl9MSVJDX0NPREVDPXkNCiMgQ09ORklHX0lSX05FQ19ERUNP
REVSIGlzIG5vdCBzZXQNCkNPTkZJR19JUl9SQzVfREVDT0RFUj15DQpDT05GSUdfSVJfUkM2X0RF
Q09ERVI9eQ0KIyBDT05GSUdfSVJfSlZDX0RFQ09ERVIgaXMgbm90IHNldA0KQ09ORklHX0lSX1NP
TllfREVDT0RFUj15DQpDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9eQ0KQ09ORklHX0lSX1NBTllP
X0RFQ09ERVI9eQ0KQ09ORklHX0lSX01DRV9LQkRfREVDT0RFUj15DQpDT05GSUdfUkNfREVWSUNF
Uz15DQojIENPTkZJR19JUl9FTkUgaXMgbm90IHNldA0KQ09ORklHX0lSX0lURV9DSVI9eQ0KQ09O
RklHX0lSX0ZJTlRFSz15DQpDT05GSUdfSVJfTlVWT1RPTj15DQpDT05GSUdfSVJfV0lOQk9ORF9D
SVI9eQ0KQ09ORklHX1JDX0xPT1BCQUNLPXkNCkNPTkZJR19JUl9HUElPX0NJUj15DQojIENPTkZJ
R19NRURJQV9QQ0lfU1VQUE9SVCBpcyBub3Qgc2V0DQojIENPTkZJR19WNExfUExBVEZPUk1fRFJJ
VkVSUyBpcyBub3Qgc2V0DQpDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUz15DQpDT05GSUdfVklE
RU9fTUVNMk1FTV9ERUlOVEVSTEFDRT15DQpDT05GSUdfVklERU9fU0hfVkVVPXkNCiMgQ09ORklH
X1Y0TF9URVNUX0RSSVZFUlMgaXMgbm90IHNldA0KDQojDQojIFN1cHBvcnRlZCBNTUMvU0RJTyBh
ZGFwdGVycw0KIw0KQ09ORklHX01FRElBX1BBUlBPUlRfU1VQUE9SVD15DQpDT05GSUdfVklERU9f
QldRQ0FNPXkNCiMgQ09ORklHX1ZJREVPX0NRQ0FNIGlzIG5vdCBzZXQNCg0KIw0KIyBNZWRpYSBh
bmNpbGxhcnkgZHJpdmVycyAodHVuZXJzLCBzZW5zb3JzLCBpMmMsIGZyb250ZW5kcykNCiMNCiMg
Q09ORklHX01FRElBX1NVQkRSVl9BVVRPU0VMRUNUIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19J
Ul9JMkM9eQ0KDQojDQojIEVuY29kZXJzLCBkZWNvZGVycywgc2Vuc29ycyBhbmQgb3RoZXIgaGVs
cGVyIGNoaXBzDQojDQoNCiMNCiMgQXVkaW8gZGVjb2RlcnMsIHByb2Nlc3NvcnMgYW5kIG1peGVy
cw0KIw0KIyBDT05GSUdfVklERU9fVFZBVURJTyBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9fVERB
NzQzMj15DQojIENPTkZJR19WSURFT19UREE5ODQwIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZJREVP
X1RFQTY0MTVDIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19URUE2NDIwPXkNCiMgQ09ORklHX1ZJ
REVPX01TUDM0MDAgaXMgbm90IHNldA0KIyBDT05GSUdfVklERU9fQ1M1MzQ1IGlzIG5vdCBzZXQN
CkNPTkZJR19WSURFT19DUzUzTDMyQT15DQojIENPTkZJR19WSURFT19UTFYzMjBBSUMyM0IgaXMg
bm90IHNldA0KIyBDT05GSUdfVklERU9fVURBMTM0MiBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9f
V004Nzc1PXkNCkNPTkZJR19WSURFT19XTTg3Mzk9eQ0KQ09ORklHX1ZJREVPX1ZQMjdTTVBYPXkN
CiMgQ09ORklHX1ZJREVPX1NPTllfQlRGX01QWCBpcyBub3Qgc2V0DQoNCiMNCiMgUkRTIGRlY29k
ZXJzDQojDQpDT05GSUdfVklERU9fU0FBNjU4OD15DQoNCiMNCiMgVmlkZW8gZGVjb2RlcnMNCiMN
CkNPTkZJR19WSURFT19BRFY3MTgwPXkNCiMgQ09ORklHX1ZJREVPX0FEVjcxODMgaXMgbm90IHNl
dA0KQ09ORklHX1ZJREVPX0JUODE5PXkNCkNPTkZJR19WSURFT19CVDg1Nj15DQpDT05GSUdfVklE
RU9fQlQ4NjY9eQ0KIyBDT05GSUdfVklERU9fS1MwMTI3IGlzIG5vdCBzZXQNCkNPTkZJR19WSURF
T19NTDg2Vjc2Njc9eQ0KIyBDT05GSUdfVklERU9fU0FBNzExMCBpcyBub3Qgc2V0DQpDT05GSUdf
VklERU9fU0FBNzExWD15DQpDT05GSUdfVklERU9fU0FBNzE5MT15DQpDT05GSUdfVklERU9fVFZQ
NTE0WD15DQpDT05GSUdfVklERU9fVFZQNTE1MD15DQpDT05GSUdfVklERU9fVFZQNzAwMj15DQpD
T05GSUdfVklERU9fVFcyODA0PXkNCkNPTkZJR19WSURFT19UVzk5MDM9eQ0KQ09ORklHX1ZJREVP
X1RXOTkwNj15DQpDT05GSUdfVklERU9fVlBYMzIyMD15DQoNCiMNCiMgVmlkZW8gYW5kIGF1ZGlv
IGRlY29kZXJzDQojDQpDT05GSUdfVklERU9fU0FBNzE3WD15DQojIENPTkZJR19WSURFT19DWDI1
ODQwIGlzIG5vdCBzZXQNCg0KIw0KIyBWaWRlbyBlbmNvZGVycw0KIw0KIyBDT05GSUdfVklERU9f
U0FBNzEyNyBpcyBub3Qgc2V0DQojIENPTkZJR19WSURFT19TQUE3MTg1IGlzIG5vdCBzZXQNCkNP
TkZJR19WSURFT19BRFY3MTcwPXkNCkNPTkZJR19WSURFT19BRFY3MTc1PXkNCkNPTkZJR19WSURF
T19BRFY3MzQzPXkNCiMgQ09ORklHX1ZJREVPX0FEVjczOTMgaXMgbm90IHNldA0KIyBDT05GSUdf
VklERU9fQUs4ODFYIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19USFM4MjAwPXkNCg0KIw0KIyBD
YW1lcmEgc2Vuc29yIGRldmljZXMNCiMNCkNPTkZJR19WSURFT19PVjc2NDA9eQ0KQ09ORklHX1ZJ
REVPX09WNzY3MD15DQojIENPTkZJR19WSURFT19WUzY2MjQgaXMgbm90IHNldA0KQ09ORklHX1ZJ
REVPX01UOVYwMTE9eQ0KIyBDT05GSUdfVklERU9fU1IwMzBQQzMwIGlzIG5vdCBzZXQNCg0KIw0K
IyBGbGFzaCBkZXZpY2VzDQojDQoNCiMNCiMgVmlkZW8gaW1wcm92ZW1lbnQgY2hpcHMNCiMNCkNP
TkZJR19WSURFT19VUEQ2NDAzMUE9eQ0KQ09ORklHX1ZJREVPX1VQRDY0MDgzPXkNCg0KIw0KIyBN
aXNjZWxsYW5lb3VzIGhlbHBlciBjaGlwcw0KIw0KQ09ORklHX1ZJREVPX1RIUzczMDM9eQ0KQ09O
RklHX1ZJREVPX001Mjc5MD15DQoNCiMNCiMgU2Vuc29ycyB1c2VkIG9uIHNvY19jYW1lcmEgZHJp
dmVyDQojDQoNCiMNCiMgQ3VzdG9taXplIFRWIHR1bmVycw0KIw0KQ09ORklHX01FRElBX1RVTkVS
X1NJTVBMRT15DQojIENPTkZJR19NRURJQV9UVU5FUl9UREE4MjkwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX01FRElBX1RVTkVSX1REQTgyN1ggaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1RE
QTE4MjcxPXkNCkNPTkZJR19NRURJQV9UVU5FUl9UREE5ODg3PXkNCkNPTkZJR19NRURJQV9UVU5F
Ul9URUE1NzYxPXkNCkNPTkZJR19NRURJQV9UVU5FUl9URUE1NzY3PXkNCiMgQ09ORklHX01FRElB
X1RVTkVSX01UMjBYWCBpcyBub3Qgc2V0DQojIENPTkZJR19NRURJQV9UVU5FUl9NVDIwNjAgaXMg
bm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX01UMjA2Mz15DQpDT05GSUdfTUVESUFfVFVORVJf
TVQyMjY2PXkNCiMgQ09ORklHX01FRElBX1RVTkVSX01UMjEzMSBpcyBub3Qgc2V0DQojIENPTkZJ
R19NRURJQV9UVU5FUl9RVDEwMTAgaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1hDMjAy
OD15DQpDT05GSUdfTUVESUFfVFVORVJfWEM1MDAwPXkNCkNPTkZJR19NRURJQV9UVU5FUl9YQzQw
MDA9eQ0KQ09ORklHX01FRElBX1RVTkVSX01YTDUwMDVTPXkNCiMgQ09ORklHX01FRElBX1RVTkVS
X01YTDUwMDdUIGlzIG5vdCBzZXQNCiMgQ09ORklHX01FRElBX1RVTkVSX01DNDRTODAzIGlzIG5v
dCBzZXQNCkNPTkZJR19NRURJQV9UVU5FUl9NQVgyMTY1PXkNCiMgQ09ORklHX01FRElBX1RVTkVS
X1REQTE4MjE4IGlzIG5vdCBzZXQNCiMgQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMSBpcyBub3Qg
c2V0DQpDT05GSUdfTUVESUFfVFVORVJfRkMwMDEyPXkNCkNPTkZJR19NRURJQV9UVU5FUl9GQzAw
MTM9eQ0KQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjEyPXkNCkNPTkZJR19NRURJQV9UVU5FUl9F
NDAwMD15DQojIENPTkZJR19NRURJQV9UVU5FUl9GQzI1ODAgaXMgbm90IHNldA0KQ09ORklHX01F
RElBX1RVTkVSX004OFRTMjAyMj15DQpDT05GSUdfTUVESUFfVFVORVJfVFVBOTAwMT15DQojIENP
TkZJR19NRURJQV9UVU5FUl9JVDkxM1ggaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1I4
MjBUPXkNCg0KIw0KIyBDdXN0b21pc2UgRFZCIEZyb250ZW5kcw0KIw0KDQojDQojIE11bHRpc3Rh
bmRhcmQgKHNhdGVsbGl0ZSkgZnJvbnRlbmRzDQojDQojIENPTkZJR19EVkJfU1RCMDg5OSBpcyBu
b3Qgc2V0DQojIENPTkZJR19EVkJfU1RCNjEwMCBpcyBub3Qgc2V0DQojIENPTkZJR19EVkJfU1RW
MDkweCBpcyBub3Qgc2V0DQojIENPTkZJR19EVkJfU1RWNjExMHggaXMgbm90IHNldA0KQ09ORklH
X0RWQl9NODhEUzMxMDM9eQ0KDQojDQojIE11bHRpc3RhbmRhcmQgKGNhYmxlICsgdGVycmVzdHJp
YWwpIGZyb250ZW5kcw0KIw0KQ09ORklHX0RWQl9EUlhLPXkNCiMgQ09ORklHX0RWQl9UREExODI3
MUMyREQgaXMgbm90IHNldA0KDQojDQojIERWQi1TIChzYXRlbGxpdGUpIGZyb250ZW5kcw0KIw0K
Q09ORklHX0RWQl9DWDI0MTEwPXkNCiMgQ09ORklHX0RWQl9DWDI0MTIzIGlzIG5vdCBzZXQNCkNP
TkZJR19EVkJfTVQzMTI9eQ0KIyBDT05GSUdfRFZCX1pMMTAwMzYgaXMgbm90IHNldA0KQ09ORklH
X0RWQl9aTDEwMDM5PXkNCkNPTkZJR19EVkJfUzVIMTQyMD15DQpDT05GSUdfRFZCX1NUVjAyODg9
eQ0KQ09ORklHX0RWQl9TVEI2MDAwPXkNCkNPTkZJR19EVkJfU1RWMDI5OT15DQpDT05GSUdfRFZC
X1NUVjYxMTA9eQ0KQ09ORklHX0RWQl9TVFYwOTAwPXkNCkNPTkZJR19EVkJfVERBODA4Mz15DQoj
IENPTkZJR19EVkJfVERBMTAwODYgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX1REQTgyNjEgaXMg
bm90IHNldA0KQ09ORklHX0RWQl9WRVMxWDkzPXkNCkNPTkZJR19EVkJfVFVORVJfSVREMTAwMD15
DQojIENPTkZJR19EVkJfVFVORVJfQ1gyNDExMyBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX1REQTgy
Nlg9eQ0KQ09ORklHX0RWQl9UVUE2MTAwPXkNCkNPTkZJR19EVkJfQ1gyNDExNj15DQpDT05GSUdf
RFZCX0NYMjQxMTc9eQ0KIyBDT05GSUdfRFZCX1NJMjFYWCBpcyBub3Qgc2V0DQojIENPTkZJR19E
VkJfVFMyMDIwIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfRFMzMDAwPXkNCkNPTkZJR19EVkJfTUI4
NkExNj15DQpDT05GSUdfRFZCX1REQTEwMDcxPXkNCg0KIw0KIyBEVkItVCAodGVycmVzdHJpYWwp
IGZyb250ZW5kcw0KIw0KQ09ORklHX0RWQl9TUDg4NzA9eQ0KIyBDT05GSUdfRFZCX1NQODg3WCBp
cyBub3Qgc2V0DQpDT05GSUdfRFZCX0NYMjI3MDA9eQ0KQ09ORklHX0RWQl9DWDIyNzAyPXkNCiMg
Q09ORklHX0RWQl9TNUgxNDMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RWQl9EUlhEIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0RWQl9MNjQ3ODEgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX1REQTEwMDRY
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0RWQl9OWFQ2MDAwIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJf
TVQzNTI9eQ0KIyBDT05GSUdfRFZCX1pMMTAzNTMgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0RJ
QjMwMDBNQiBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX0RJQjMwMDBNQz15DQpDT05GSUdfRFZCX0RJ
QjcwMDBNPXkNCkNPTkZJR19EVkJfRElCNzAwMFA9eQ0KQ09ORklHX0RWQl9ESUI5MDAwPXkNCiMg
Q09ORklHX0RWQl9UREExMDA0OCBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX0FGOTAxMz15DQpDT05G
SUdfRFZCX0VDMTAwPXkNCiMgQ09ORklHX0RWQl9IRDI5TDIgaXMgbm90IHNldA0KIyBDT05GSUdf
RFZCX1NUVjAzNjcgaXMgbm90IHNldA0KQ09ORklHX0RWQl9DWEQyODIwUj15DQpDT05GSUdfRFZC
X1JUTDI4MzA9eQ0KQ09ORklHX0RWQl9SVEwyODMyPXkNCg0KIw0KIyBEVkItQyAoY2FibGUpIGZy
b250ZW5kcw0KIw0KQ09ORklHX0RWQl9WRVMxODIwPXkNCkNPTkZJR19EVkJfVERBMTAwMjE9eQ0K
Q09ORklHX0RWQl9UREExMDAyMz15DQpDT05GSUdfRFZCX1NUVjAyOTc9eQ0KDQojDQojIEFUU0Mg
KE5vcnRoIEFtZXJpY2FuL0tvcmVhbiBUZXJyZXN0cmlhbC9DYWJsZSBEVFYpIGZyb250ZW5kcw0K
Iw0KIyBDT05GSUdfRFZCX05YVDIwMFggaXMgbm90IHNldA0KQ09ORklHX0RWQl9PUjUxMjExPXkN
CiMgQ09ORklHX0RWQl9PUjUxMTMyIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfQkNNMzUxMD15DQoj
IENPTkZJR19EVkJfTEdEVDMzMFggaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0xHRFQzMzA1IGlz
IG5vdCBzZXQNCiMgQ09ORklHX0RWQl9MRzIxNjAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9TNUgx
NDA5PXkNCkNPTkZJR19EVkJfQVU4NTIyPXkNCkNPTkZJR19EVkJfQVU4NTIyX0RUVj15DQpDT05G
SUdfRFZCX0FVODUyMl9WNEw9eQ0KIyBDT05GSUdfRFZCX1M1SDE0MTEgaXMgbm90IHNldA0KDQoj
DQojIElTREItVCAodGVycmVzdHJpYWwpIGZyb250ZW5kcw0KIw0KIyBDT05GSUdfRFZCX1M5MjEg
aXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0RJQjgwMDAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9N
Qjg2QTIwUz15DQoNCiMNCiMgRGlnaXRhbCB0ZXJyZXN0cmlhbCBvbmx5IHR1bmVycy9QTEwNCiMN
CiMgQ09ORklHX0RWQl9QTEwgaXMgbm90IHNldA0KQ09ORklHX0RWQl9UVU5FUl9ESUIwMDcwPXkN
CiMgQ09ORklHX0RWQl9UVU5FUl9ESUIwMDkwIGlzIG5vdCBzZXQNCg0KIw0KIyBTRUMgY29udHJv
bCBkZXZpY2VzIGZvciBEVkItUw0KIw0KQ09ORklHX0RWQl9MTkJQMjE9eQ0KQ09ORklHX0RWQl9M
TkJQMjI9eQ0KIyBDT05GSUdfRFZCX0lTTDY0MDUgaXMgbm90IHNldA0KQ09ORklHX0RWQl9JU0w2
NDIxPXkNCkNPTkZJR19EVkJfSVNMNjQyMz15DQpDT05GSUdfRFZCX0E4MjkzPXkNCiMgQ09ORklH
X0RWQl9MR1M4R0w1IGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfTEdTOEdYWD15DQojIENPTkZJR19E
VkJfQVRCTTg4MzAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9UREE2NjV4PXkNCkNPTkZJR19EVkJf
SVgyNTA1Vj15DQpDT05GSUdfRFZCX0lUOTEzWF9GRT15DQpDT05GSUdfRFZCX004OFJTMjAwMD15
DQpDT05GSUdfRFZCX0FGOTAzMz15DQoNCiMNCiMgVG9vbHMgdG8gZGV2ZWxvcCBuZXcgZnJvbnRl
bmRzDQojDQpDT05GSUdfRFZCX0RVTU1ZX0ZFPXkNCg0KIw0KIyBHcmFwaGljcyBzdXBwb3J0DQoj
DQpDT05GSUdfQUdQPXkNCiMgQ09ORklHX0FHUF9BTUQ2NCBpcyBub3Qgc2V0DQpDT05GSUdfQUdQ
X0lOVEVMPXkNCkNPTkZJR19BR1BfU0lTPXkNCkNPTkZJR19BR1BfVklBPXkNCkNPTkZJR19JTlRF
TF9HVFQ9eQ0KQ09ORklHX1ZHQV9BUkI9eQ0KQ09ORklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYNCkNP
TkZJR19WR0FfU1dJVENIRVJPTz15DQpDT05GSUdfRFJNPXkNCkNPTkZJR19EUk1fS01TX0hFTFBF
Uj15DQpDT05GSUdfRFJNX0tNU19GQl9IRUxQRVI9eQ0KQ09ORklHX0RSTV9MT0FEX0VESURfRklS
TVdBUkU9eQ0KQ09ORklHX0RSTV9UVE09eQ0KDQojDQojIEkyQyBlbmNvZGVyIG9yIGhlbHBlciBj
aGlwcw0KIw0KQ09ORklHX0RSTV9JMkNfQ0g3MDA2PXkNCiMgQ09ORklHX0RSTV9JMkNfU0lMMTY0
IGlzIG5vdCBzZXQNCkNPTkZJR19EUk1fSTJDX05YUF9UREE5OThYPXkNCiMgQ09ORklHX0RSTV9U
REZYIGlzIG5vdCBzZXQNCkNPTkZJR19EUk1fUjEyOD15DQpDT05GSUdfRFJNX1JBREVPTj15DQpD
T05GSUdfRFJNX1JBREVPTl9VTVM9eQ0KIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNldA0K
Q09ORklHX0RSTV9JODEwPXkNCkNPTkZJR19EUk1fSTkxNT15DQojIENPTkZJR19EUk1fSTkxNV9L
TVMgaXMgbm90IHNldA0KIyBDT05GSUdfRFJNX0k5MTVfRkJERVYgaXMgbm90IHNldA0KQ09ORklH
X0RSTV9JOTE1X1BSRUxJTUlOQVJZX0hXX1NVUFBPUlQ9eQ0KIyBDT05GSUdfRFJNX0k5MTVfVU1T
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9NR0EgaXMgbm90IHNldA0KQ09ORklHX0RSTV9TSVM9
eQ0KQ09ORklHX0RSTV9WSUE9eQ0KIyBDT05GSUdfRFJNX1NBVkFHRSBpcyBub3Qgc2V0DQpDT05G
SUdfRFJNX1ZNV0dGWD15DQpDT05GSUdfRFJNX1ZNV0dGWF9GQkNPTj15DQpDT05GSUdfRFJNX0dN
QTUwMD15DQojIENPTkZJR19EUk1fR01BNjAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9HTUEz
NjAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9BU1QgaXMgbm90IHNldA0KIyBDT05GSUdfRFJN
X01HQUcyMDAgaXMgbm90IHNldA0KQ09ORklHX0RSTV9DSVJSVVNfUUVNVT15DQpDT05GSUdfRFJN
X1FYTD15DQpDT05GSUdfVkdBU1RBVEU9eQ0KQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MPXkN
CkNPTkZJR19IRE1JPXkNCkNPTkZJR19GQj15DQpDT05GSUdfRklSTVdBUkVfRURJRD15DQpDT05G
SUdfRkJfRERDPXkNCkNPTkZJR19GQl9CT09UX1ZFU0FfU1VQUE9SVD15DQpDT05GSUdfRkJfQ0ZC
X0ZJTExSRUNUPXkNCkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQ0KQ09ORklHX0ZCX0NGQl9JTUFH
RUJMSVQ9eQ0KIyBDT05GSUdfRkJfQ0ZCX1JFVl9QSVhFTFNfSU5fQllURSBpcyBub3Qgc2V0DQpD
T05GSUdfRkJfU1lTX0ZJTExSRUNUPXkNCkNPTkZJR19GQl9TWVNfQ09QWUFSRUE9eQ0KQ09ORklH
X0ZCX1NZU19JTUFHRUJMSVQ9eQ0KQ09ORklHX0ZCX0ZPUkVJR05fRU5ESUFOPXkNCkNPTkZJR19G
Ql9CT1RIX0VORElBTj15DQojIENPTkZJR19GQl9CSUdfRU5ESUFOIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0ZCX0xJVFRMRV9FTkRJQU4gaXMgbm90IHNldA0KQ09ORklHX0ZCX1NZU19GT1BTPXkNCkNP
TkZJR19GQl9ERUZFUlJFRF9JTz15DQpDT05GSUdfRkJfSEVDVUJBPXkNCkNPTkZJR19GQl9TVkdB
TElCPXkNCiMgQ09ORklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9CQUNLTElH
SFQ9eQ0KQ09ORklHX0ZCX01PREVfSEVMUEVSUz15DQpDT05GSUdfRkJfVElMRUJMSVRUSU5HPXkN
Cg0KIw0KIyBGcmFtZSBidWZmZXIgaGFyZHdhcmUgZHJpdmVycw0KIw0KIyBDT05GSUdfRkJfQ0lS
UlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX1BNMiBpcyBub3Qgc2V0DQpDT05GSUdfRkJfQ1lC
RVIyMDAwPXkNCkNPTkZJR19GQl9DWUJFUjIwMDBfRERDPXkNCkNPTkZJR19GQl9BUkM9eQ0KIyBD
T05GSUdfRkJfQVNJTElBTlQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfSU1TVFQgaXMgbm90IHNl
dA0KQ09ORklHX0ZCX1ZHQTE2PXkNCiMgQ09ORklHX0ZCX1VWRVNBIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0ZCX1ZFU0EgaXMgbm90IHNldA0KQ09ORklHX0ZCX0VGST15DQpDT05GSUdfRkJfTjQxMT15
DQpDT05GSUdfRkJfSEdBPXkNCkNPTkZJR19GQl9TMUQxM1hYWD15DQojIENPTkZJR19GQl9OVklE
SUEgaXMgbm90IHNldA0KQ09ORklHX0ZCX1JJVkE9eQ0KQ09ORklHX0ZCX1JJVkFfSTJDPXkNCiMg
Q09ORklHX0ZCX1JJVkFfREVCVUcgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfUklWQV9CQUNLTElH
SFQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfSTc0MCBpcyBub3Qgc2V0DQpDT05GSUdfRkJfTEU4
MDU3OD15DQpDT05GSUdfRkJfQ0FSSUxMT19SQU5DSD15DQojIENPTkZJR19GQl9NQVRST1ggaXMg
bm90IHNldA0KIyBDT05GSUdfRkJfUkFERU9OIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9BVFkxMjg9
eQ0KQ09ORklHX0ZCX0FUWTEyOF9CQUNLTElHSFQ9eQ0KQ09ORklHX0ZCX0FUWT15DQpDT05GSUdf
RkJfQVRZX0NUPXkNCkNPTkZJR19GQl9BVFlfR0VORVJJQ19MQ0Q9eQ0KIyBDT05GSUdfRkJfQVRZ
X0dYIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9BVFlfQkFDS0xJR0hUPXkNCkNPTkZJR19GQl9TMz15
DQojIENPTkZJR19GQl9TM19EREMgaXMgbm90IHNldA0KQ09ORklHX0ZCX1NBVkFHRT15DQpDT05G
SUdfRkJfU0FWQUdFX0kyQz15DQojIENPTkZJR19GQl9TQVZBR0VfQUNDRUwgaXMgbm90IHNldA0K
Q09ORklHX0ZCX1NJUz15DQpDT05GSUdfRkJfU0lTXzMwMD15DQpDT05GSUdfRkJfU0lTXzMxNT15
DQojIENPTkZJR19GQl9WSUEgaXMgbm90IHNldA0KQ09ORklHX0ZCX05FT01BR0lDPXkNCkNPTkZJ
R19GQl9LWVJPPXkNCiMgQ09ORklHX0ZCXzNERlggaXMgbm90IHNldA0KQ09ORklHX0ZCX1ZPT0RP
TzE9eQ0KIyBDT05GSUdfRkJfVlQ4NjIzIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9UUklERU5UPXkN
CkNPTkZJR19GQl9BUks9eQ0KIyBDT05GSUdfRkJfUE0zIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9D
QVJNSU5FPXkNCkNPTkZJR19GQl9DQVJNSU5FX0RSQU1fRVZBTD15DQojIENPTkZJR19DQVJNSU5F
X0RSQU1fQ1VTVE9NIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9HRU9ERT15DQpDT05GSUdfRkJfR0VP
REVfTFg9eQ0KQ09ORklHX0ZCX0dFT0RFX0dYPXkNCkNPTkZJR19GQl9HRU9ERV9HWDE9eQ0KQ09O
RklHX0ZCX1RNSU89eQ0KIyBDT05GSUdfRkJfVE1JT19BQ0NFTEwgaXMgbm90IHNldA0KQ09ORklH
X0ZCX1NNNTAxPXkNCkNPTkZJR19GQl9HT0xERklTSD15DQpDT05GSUdfRkJfVklSVFVBTD15DQoj
IENPTkZJR19YRU5fRkJERVZfRlJPTlRFTkQgaXMgbm90IHNldA0KQ09ORklHX0ZCX01FVFJPTk9N
RT15DQpDT05GSUdfRkJfTUI4NjJYWD15DQpDT05GSUdfRkJfTUI4NjJYWF9QQ0lfR0RDPXkNCkNP
TkZJR19GQl9NQjg2MlhYX0kyQz15DQojIENPTkZJR19GQl9CUk9BRFNIRUVUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX0FVT19LMTkwWCBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9IWVBFUlYgaXMg
bm90IHNldA0KQ09ORklHX0ZCX1NJTVBMRT15DQpDT05GSUdfRVhZTk9TX1ZJREVPPXkNCkNPTkZJ
R19CQUNLTElHSFRfTENEX1NVUFBPUlQ9eQ0KQ09ORklHX0xDRF9DTEFTU19ERVZJQ0U9eQ0KIyBD
T05GSUdfTENEX1BMQVRGT1JNIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQ0xBU1NfREVW
SUNFPXkNCkNPTkZJR19CQUNLTElHSFRfR0VORVJJQz15DQojIENPTkZJR19CQUNLTElHSFRfTE0z
NTMzIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQ0FSSUxMT19SQU5DSD15DQojIENPTkZJ
R19CQUNLTElHSFRfUFdNIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQVBQTEU9eQ0KQ09O
RklHX0JBQ0tMSUdIVF9TQUhBUkE9eQ0KQ09ORklHX0JBQ0tMSUdIVF9XTTgzMVg9eQ0KQ09ORklH
X0JBQ0tMSUdIVF9BRFA4ODYwPXkNCiMgQ09ORklHX0JBQ0tMSUdIVF9BRFA4ODcwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0JBQ0tMSUdIVF84OFBNODYwWCBpcyBub3Qgc2V0DQojIENPTkZJR19CQUNL
TElHSFRfUENGNTA2MzMgaXMgbm90IHNldA0KQ09ORklHX0JBQ0tMSUdIVF9MTTM2MzBBPXkNCkNP
TkZJR19CQUNLTElHSFRfTE0zNjM5PXkNCkNPTkZJR19CQUNLTElHSFRfTFA4NTVYPXkNCkNPTkZJ
R19CQUNLTElHSFRfTFA4Nzg4PXkNCkNPTkZJR19CQUNLTElHSFRfUEFORE9SQT15DQojIENPTkZJ
R19CQUNLTElHSFRfVFBTNjUyMTcgaXMgbm90IHNldA0KQ09ORklHX0JBQ0tMSUdIVF9BUzM3MTE9
eQ0KQ09ORklHX0JBQ0tMSUdIVF9HUElPPXkNCkNPTkZJR19CQUNLTElHSFRfTFY1MjA3TFA9eQ0K
IyBDT05GSUdfQkFDS0xJR0hUX0JENjEwNyBpcyBub3Qgc2V0DQojIENPTkZJR19MT0dPIGlzIG5v
dCBzZXQNCkNPTkZJR19TT1VORD15DQpDT05GSUdfU09VTkRfT1NTX0NPUkU9eQ0KQ09ORklHX1NP
VU5EX09TU19DT1JFX1BSRUNMQUlNPXkNCkNPTkZJR19TTkQ9eQ0KQ09ORklHX1NORF9USU1FUj15
DQpDT05GSUdfU05EX1BDTT15DQpDT05GSUdfU05EX0hXREVQPXkNCkNPTkZJR19TTkRfUkFXTUlE
ST15DQpDT05GSUdfU05EX0NPTVBSRVNTX09GRkxPQUQ9eQ0KQ09ORklHX1NORF9KQUNLPXkNCiMg
Q09ORklHX1NORF9TRVFVRU5DRVIgaXMgbm90IHNldA0KQ09ORklHX1NORF9PU1NFTVVMPXkNCkNP
TkZJR19TTkRfTUlYRVJfT1NTPXkNCkNPTkZJR19TTkRfUENNX09TUz15DQojIENPTkZJR19TTkRf
UENNX09TU19QTFVHSU5TIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSFJUSU1FUj15DQpDT05GSUdf
U05EX0RZTkFNSUNfTUlOT1JTPXkNCkNPTkZJR19TTkRfTUFYX0NBUkRTPTMyDQpDT05GSUdfU05E
X1NVUFBPUlRfT0xEX0FQST15DQojIENPTkZJR19TTkRfVkVSQk9TRV9QUklOVEsgaXMgbm90IHNl
dA0KQ09ORklHX1NORF9ERUJVRz15DQpDT05GSUdfU05EX0RFQlVHX1ZFUkJPU0U9eQ0KQ09ORklH
X1NORF9WTUFTVEVSPXkNCkNPTkZJR19TTkRfRE1BX1NHQlVGPXkNCiMgQ09ORklHX1NORF9SQVdN
SURJX1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfT1BMM19MSUJfU0VRIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NORF9PUEw0X0xJQl9TRVEgaXMgbm90IHNldA0KIyBDT05GSUdfU05EX1NCQVdF
X1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfRU1VMTBLMV9TRVEgaXMgbm90IHNldA0KQ09O
RklHX1NORF9NUFU0MDFfVUFSVD15DQpDT05GSUdfU05EX09QTDNfTElCPXkNCkNPTkZJR19TTkRf
VlhfTElCPXkNCkNPTkZJR19TTkRfQUM5N19DT0RFQz15DQpDT05GSUdfU05EX0RSSVZFUlM9eQ0K
IyBDT05GSUdfU05EX1BDU1AgaXMgbm90IHNldA0KQ09ORklHX1NORF9EVU1NWT15DQpDT05GSUdf
U05EX0FMT09QPXkNCkNPTkZJR19TTkRfTVRQQVY9eQ0KIyBDT05GSUdfU05EX01UUzY0IGlzIG5v
dCBzZXQNCkNPTkZJR19TTkRfU0VSSUFMX1UxNjU1MD15DQpDT05GSUdfU05EX01QVTQwMT15DQpD
T05GSUdfU05EX1BPUlRNQU4yWDQ9eQ0KIyBDT05GSUdfU05EX0FDOTdfUE9XRVJfU0FWRSBpcyBu
b3Qgc2V0DQpDT05GSUdfU05EX1BDST15DQpDT05GSUdfU05EX0FEMTg4OT15DQpDT05GSUdfU05E
X0FMUzMwMD15DQpDT05GSUdfU05EX0FMSTU0NTE9eQ0KIyBDT05GSUdfU05EX0FTSUhQSSBpcyBu
b3Qgc2V0DQpDT05GSUdfU05EX0FUSUlYUD15DQpDT05GSUdfU05EX0FUSUlYUF9NT0RFTT15DQoj
IENPTkZJR19TTkRfQVU4ODEwIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfQVU4ODIwPXkNCkNPTkZJ
R19TTkRfQVU4ODMwPXkNCkNPTkZJR19TTkRfQVcyPXkNCiMgQ09ORklHX1NORF9BWlQzMzI4IGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9CVDg3WCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0NBMDEw
Nj15DQojIENPTkZJR19TTkRfQ01JUENJIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfT1hZR0VOX0xJ
Qj15DQojIENPTkZJR19TTkRfT1hZR0VOIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9DUzQyODEg
aXMgbm90IHNldA0KQ09ORklHX1NORF9DUzQ2WFg9eQ0KQ09ORklHX1NORF9DUzQ2WFhfTkVXX0RT
UD15DQpDT05GSUdfU05EX0NTNTUzNUFVRElPPXkNCiMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qg
c2V0DQojIENPTkZJR19TTkRfREFSTEEyMCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0dJTkEyMD15
DQpDT05GSUdfU05EX0xBWUxBMjA9eQ0KIyBDT05GSUdfU05EX0RBUkxBMjQgaXMgbm90IHNldA0K
Q09ORklHX1NORF9HSU5BMjQ9eQ0KQ09ORklHX1NORF9MQVlMQTI0PXkNCkNPTkZJR19TTkRfTU9O
QT15DQpDT05GSUdfU05EX01JQT15DQojIENPTkZJR19TTkRfRUNITzNHIGlzIG5vdCBzZXQNCkNP
TkZJR19TTkRfSU5ESUdPPXkNCkNPTkZJR19TTkRfSU5ESUdPSU89eQ0KIyBDT05GSUdfU05EX0lO
RElHT0RKIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSU5ESUdPSU9YPXkNCiMgQ09ORklHX1NORF9J
TkRJR09ESlggaXMgbm90IHNldA0KQ09ORklHX1NORF9FTVUxMEsxPXkNCkNPTkZJR19TTkRfRU1V
MTBLMVg9eQ0KIyBDT05GSUdfU05EX0VOUzEzNzAgaXMgbm90IHNldA0KQ09ORklHX1NORF9FTlMx
MzcxPXkNCkNPTkZJR19TTkRfRVMxOTM4PXkNCkNPTkZJR19TTkRfRVMxOTY4PXkNCkNPTkZJR19T
TkRfRVMxOTY4X0lOUFVUPXkNCkNPTkZJR19TTkRfRk04MDE9eQ0KIyBDT05GSUdfU05EX0hEQV9J
TlRFTCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0hEU1A9eQ0KDQojDQojIERvbid0IGZvcmdldCB0
byBhZGQgYnVpbHQtaW4gZmlybXdhcmVzIGZvciBIRFNQIGRyaXZlcg0KIw0KIyBDT05GSUdfU05E
X0hEU1BNIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSUNFMTcxMj15DQpDT05GSUdfU05EX0lDRTE3
MjQ9eQ0KQ09ORklHX1NORF9JTlRFTDhYMD15DQpDT05GSUdfU05EX0lOVEVMOFgwTT15DQpDT05G
SUdfU05EX0tPUkcxMjEyPXkNCkNPTkZJR19TTkRfTE9MQT15DQpDT05GSUdfU05EX0xYNjQ2NEVT
PXkNCkNPTkZJR19TTkRfTUFFU1RSTzM9eQ0KIyBDT05GSUdfU05EX01BRVNUUk8zX0lOUFVUIGlz
IG5vdCBzZXQNCkNPTkZJR19TTkRfTUlYQVJUPXkNCkNPTkZJR19TTkRfTk0yNTY9eQ0KIyBDT05G
SUdfU05EX1BDWEhSIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfUklQVElERT15DQojIENPTkZJR19T
TkRfUk1FMzIgaXMgbm90IHNldA0KQ09ORklHX1NORF9STUU5Nj15DQpDT05GSUdfU05EX1JNRTk2
NTI9eQ0KQ09ORklHX1NORF9TT05JQ1ZJQkVTPXkNCiMgQ09ORklHX1NORF9UUklERU5UIGlzIG5v
dCBzZXQNCkNPTkZJR19TTkRfVklBODJYWD15DQojIENPTkZJR19TTkRfVklBODJYWF9NT0RFTSBp
cyBub3Qgc2V0DQpDT05GSUdfU05EX1ZJUlRVT1NPPXkNCkNPTkZJR19TTkRfVlgyMjI9eQ0KQ09O
RklHX1NORF9ZTUZQQ0k9eQ0KQ09ORklHX1NORF9TT0M9eQ0KIyBDT05GSUdfU05EX1NPQ19BREkg
aXMgbm90IHNldA0KIyBDT05GSUdfU05EX0FUTUVMX1NPQyBpcyBub3Qgc2V0DQojIENPTkZJR19T
TkRfQkNNMjgzNV9TT0NfSTJTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9FUDkzWFhfU09DIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9JTVhfU09DIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9L
SVJLV09PRF9TT0MgaXMgbm90IHNldA0KQ09ORklHX1NORF9TT0NfSTJDX0FORF9TUEk9eQ0KQ09O
RklHX1NORF9TT0NfQUxMX0NPREVDUz15DQpDT05GSUdfU05EX1NPQ184OFBNODYwWD15DQpDT05G
SUdfU05EX1NPQ19BUklaT05BPXkNCkNPTkZJR19TTkRfU09DX1dNX0hVQlM9eQ0KQ09ORklHX1NO
RF9TT0NfV01fQURTUD15DQpDT05GSUdfU05EX1NPQ19BRDE5M1g9eQ0KQ09ORklHX1NORF9TT0Nf
QUQ3MzMxMT15DQpDT05GSUdfU05EX1NPQ19BREFVMTcwMT15DQpDT05GSUdfU05EX1NPQ19BREFV
MTM3Mz15DQpDT05GSUdfU05EX1NPQ19BREFWODBYPXkNCkNPTkZJR19TTkRfU09DX0FEUzExN1g9
eQ0KQ09ORklHX1NORF9TT0NfQUs0NTM1PXkNCkNPTkZJR19TTkRfU09DX0FLNDY0MT15DQpDT05G
SUdfU05EX1NPQ19BSzQ2NDI9eQ0KQ09ORklHX1NORF9TT0NfQUs0NjcxPXkNCkNPTkZJR19TTkRf
U09DX0FLNTM4Nj15DQpDT05GSUdfU05EX1NPQ19BTEM1NjIzPXkNCkNPTkZJR19TTkRfU09DX0FM
QzU2MzI9eQ0KQ09ORklHX1NORF9TT0NfQ1M0Mkw1MT15DQpDT05GSUdfU05EX1NPQ19DUzQyTDUy
PXkNCkNPTkZJR19TTkRfU09DX0NTNDJMNzM9eQ0KQ09ORklHX1NORF9TT0NfQ1M0MjcwPXkNCkNP
TkZJR19TTkRfU09DX0NTNDI3MT15DQpDT05GSUdfU05EX1NPQ19DWDIwNDQyPXkNCkNPTkZJR19T
TkRfU09DX0paNDc0MF9DT0RFQz15DQpDT05GSUdfU05EX1NPQ19MMz15DQpDT05GSUdfU05EX1NP
Q19EQTcyMTA9eQ0KQ09ORklHX1NORF9TT0NfREE3MjEzPXkNCkNPTkZJR19TTkRfU09DX0RBNzMy
WD15DQpDT05GSUdfU05EX1NPQ19EQTkwNTU9eQ0KQ09ORklHX1NORF9TT0NfQlRfU0NPPXkNCkNP
TkZJR19TTkRfU09DX0lTQUJFTExFPXkNCkNPTkZJR19TTkRfU09DX0xNNDk0NTM9eQ0KQ09ORklH
X1NORF9TT0NfTUFYOTgwODg9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTgwOTA9eQ0KQ09ORklHX1NO
RF9TT0NfTUFYOTgwOTU9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTg1MD15DQpDT05GSUdfU05EX1NP
Q19IRE1JX0NPREVDPXkNCkNPTkZJR19TTkRfU09DX1BDTTE2ODE9eQ0KQ09ORklHX1NORF9TT0Nf
UENNMzAwOD15DQpDT05GSUdfU05EX1NPQ19SVDU2MzE9eQ0KQ09ORklHX1NORF9TT0NfUlQ1NjQw
PXkNCkNPTkZJR19TTkRfU09DX1NHVEw1MDAwPXkNCkNPTkZJR19TTkRfU09DX1NJR01BRFNQPXkN
CkNPTkZJR19TTkRfU09DX1NQRElGPXkNCkNPTkZJR19TTkRfU09DX1NTTTI1MTg9eQ0KQ09ORklH
X1NORF9TT0NfU1NNMjYwMj15DQpDT05GSUdfU05EX1NPQ19TVEEzMlg9eQ0KQ09ORklHX1NORF9T
T0NfU1RBNTI5PXkNCkNPTkZJR19TTkRfU09DX1RBUzUwODY9eQ0KQ09ORklHX1NORF9TT0NfVExW
MzIwQUlDMjM9eQ0KQ09ORklHX1NORF9TT0NfVExWMzIwQUlDMzJYND15DQpDT05GSUdfU05EX1NP
Q19UTFYzMjBBSUMzWD15DQpDT05GSUdfU05EX1NPQ19UTFYzMjBEQUMzMz15DQpDT05GSUdfU05E
X1NPQ19UV0w0MDMwPXkNCkNPTkZJR19TTkRfU09DX1RXTDYwNDA9eQ0KQ09ORklHX1NORF9TT0Nf
VURBMTM0WD15DQpDT05GSUdfU05EX1NPQ19VREExMzgwPXkNCkNPTkZJR19TTkRfU09DX1dMMTI3
Mz15DQpDT05GSUdfU05EX1NPQ19XTTEyNTBfRVYxPXkNCkNPTkZJR19TTkRfU09DX1dNMjAwMD15
DQpDT05GSUdfU05EX1NPQ19XTTIyMDA9eQ0KQ09ORklHX1NORF9TT0NfV001MTAwPXkNCkNPTkZJ
R19TTkRfU09DX1dNODM1MD15DQpDT05GSUdfU05EX1NPQ19XTTg0MDA9eQ0KQ09ORklHX1NORF9T
T0NfV004NTEwPXkNCkNPTkZJR19TTkRfU09DX1dNODUyMz15DQpDT05GSUdfU05EX1NPQ19XTTg1
ODA9eQ0KQ09ORklHX1NORF9TT0NfV004NzExPXkNCkNPTkZJR19TTkRfU09DX1dNODcyNz15DQpD
T05GSUdfU05EX1NPQ19XTTg3Mjg9eQ0KQ09ORklHX1NORF9TT0NfV004NzMxPXkNCkNPTkZJR19T
TkRfU09DX1dNODczNz15DQpDT05GSUdfU05EX1NPQ19XTTg3NDE9eQ0KQ09ORklHX1NORF9TT0Nf
V004NzUwPXkNCkNPTkZJR19TTkRfU09DX1dNODc1Mz15DQpDT05GSUdfU05EX1NPQ19XTTg3NzY9
eQ0KQ09ORklHX1NORF9TT0NfV004NzgyPXkNCkNPTkZJR19TTkRfU09DX1dNODgwND15DQpDT05G
SUdfU05EX1NPQ19XTTg5MDA9eQ0KQ09ORklHX1NORF9TT0NfV004OTAzPXkNCkNPTkZJR19TTkRf
U09DX1dNODkwND15DQpDT05GSUdfU05EX1NPQ19XTTg5NDA9eQ0KQ09ORklHX1NORF9TT0NfV004
OTU1PXkNCkNPTkZJR19TTkRfU09DX1dNODk2MD15DQpDT05GSUdfU05EX1NPQ19XTTg5NjE9eQ0K
Q09ORklHX1NORF9TT0NfV004OTYyPXkNCkNPTkZJR19TTkRfU09DX1dNODk3MT15DQpDT05GSUdf
U05EX1NPQ19XTTg5NzQ9eQ0KQ09ORklHX1NORF9TT0NfV004OTc4PXkNCkNPTkZJR19TTkRfU09D
X1dNODk4Mz15DQpDT05GSUdfU05EX1NPQ19XTTg5ODU9eQ0KQ09ORklHX1NORF9TT0NfV004OTg4
PXkNCkNPTkZJR19TTkRfU09DX1dNODk5MD15DQpDT05GSUdfU05EX1NPQ19XTTg5OTE9eQ0KQ09O
RklHX1NORF9TT0NfV004OTkzPXkNCkNPTkZJR19TTkRfU09DX1dNODk5NT15DQpDT05GSUdfU05E
X1NPQ19XTTg5OTY9eQ0KQ09ORklHX1NORF9TT0NfV004OTk3PXkNCkNPTkZJR19TTkRfU09DX1dN
OTA4MT15DQpDT05GSUdfU05EX1NPQ19XTTkwOTA9eQ0KQ09ORklHX1NORF9TT0NfTE00ODU3PXkN
CkNPTkZJR19TTkRfU09DX01BWDk3Njg9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTg3Nz15DQpDT05G
SUdfU05EX1NPQ19NQzEzNzgzPXkNCkNPTkZJR19TTkRfU09DX01MMjYxMjQ9eQ0KQ09ORklHX1NO
RF9TT0NfVFBBNjEzMEEyPXkNCkNPTkZJR19TTkRfU0lNUExFX0NBUkQ9eQ0KQ09ORklHX1NPVU5E
X1BSSU1FPXkNCkNPTkZJR19BQzk3X0JVUz15DQoNCiMNCiMgSElEIHN1cHBvcnQNCiMNCkNPTkZJ
R19ISUQ9eQ0KIyBDT05GSUdfSElEX0JBVFRFUllfU1RSRU5HVEggaXMgbm90IHNldA0KIyBDT05G
SUdfSElEUkFXIGlzIG5vdCBzZXQNCkNPTkZJR19VSElEPXkNCiMgQ09ORklHX0hJRF9HRU5FUklD
IGlzIG5vdCBzZXQNCg0KIw0KIyBTcGVjaWFsIEhJRCBkcml2ZXJzDQojDQpDT05GSUdfSElEX0E0
VEVDSD15DQpDT05GSUdfSElEX0FDUlVYPXkNCiMgQ09ORklHX0hJRF9BQ1JVWF9GRiBpcyBub3Qg
c2V0DQojIENPTkZJR19ISURfQVBQTEUgaXMgbm90IHNldA0KQ09ORklHX0hJRF9BVVJFQUw9eQ0K
Q09ORklHX0hJRF9CRUxLSU49eQ0KQ09ORklHX0hJRF9DSEVSUlk9eQ0KQ09ORklHX0hJRF9DSElD
T05ZPXkNCiMgQ09ORklHX0hJRF9QUk9ESUtFWVMgaXMgbm90IHNldA0KQ09ORklHX0hJRF9DWVBS
RVNTPXkNCiMgQ09ORklHX0hJRF9EUkFHT05SSVNFIGlzIG5vdCBzZXQNCkNPTkZJR19ISURfRU1T
X0ZGPXkNCkNPTkZJR19ISURfRUxFQ09NPXkNCiMgQ09ORklHX0hJRF9FWktFWSBpcyBub3Qgc2V0
DQpDT05GSUdfSElEX0tFWVRPVUNIPXkNCiMgQ09ORklHX0hJRF9LWUUgaXMgbm90IHNldA0KIyBD
T05GSUdfSElEX1VDTE9HSUMgaXMgbm90IHNldA0KQ09ORklHX0hJRF9XQUxUT1A9eQ0KQ09ORklH
X0hJRF9HWVJBVElPTj15DQpDT05GSUdfSElEX0lDQURFPXkNCkNPTkZJR19ISURfVFdJTkhBTj15
DQpDT05GSUdfSElEX0tFTlNJTkdUT049eQ0KQ09ORklHX0hJRF9MQ1BPV0VSPXkNCiMgQ09ORklH
X0hJRF9MRU5PVk9fVFBLQkQgaXMgbm90IHNldA0KQ09ORklHX0hJRF9MT0dJVEVDSD15DQpDT05G
SUdfTE9HSVRFQ0hfRkY9eQ0KQ09ORklHX0xPR0lSVU1CTEVQQUQyX0ZGPXkNCkNPTkZJR19MT0dJ
Rzk0MF9GRj15DQojIENPTkZJR19MT0dJV0hFRUxTX0ZGIGlzIG5vdCBzZXQNCkNPTkZJR19ISURf
TUFHSUNNT1VTRT15DQojIENPTkZJR19ISURfTUlDUk9TT0ZUIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0hJRF9NT05URVJFWSBpcyBub3Qgc2V0DQojIENPTkZJR19ISURfTVVMVElUT1VDSCBpcyBub3Qg
c2V0DQpDT05GSUdfSElEX09SVEVLPXkNCiMgQ09ORklHX0hJRF9QQU5USEVSTE9SRCBpcyBub3Qg
c2V0DQpDT05GSUdfSElEX1BFVEFMWU5YPXkNCkNPTkZJR19ISURfUElDT0xDRD15DQojIENPTkZJ
R19ISURfUElDT0xDRF9GQiBpcyBub3Qgc2V0DQpDT05GSUdfSElEX1BJQ09MQ0RfQkFDS0xJR0hU
PXkNCiMgQ09ORklHX0hJRF9QSUNPTENEX0xDRCBpcyBub3Qgc2V0DQpDT05GSUdfSElEX1BJQ09M
Q0RfTEVEUz15DQpDT05GSUdfSElEX1BJQ09MQ0RfQ0lSPXkNCiMgQ09ORklHX0hJRF9QUklNQVgg
aXMgbm90IHNldA0KQ09ORklHX0hJRF9TQUlURUs9eQ0KQ09ORklHX0hJRF9TQU1TVU5HPXkNCkNP
TkZJR19ISURfU1BFRURMSU5LPXkNCkNPTkZJR19ISURfU1RFRUxTRVJJRVM9eQ0KQ09ORklHX0hJ
RF9TVU5QTFVTPXkNCiMgQ09ORklHX0hJRF9HUkVFTkFTSUEgaXMgbm90IHNldA0KIyBDT05GSUdf
SElEX0hZUEVSVl9NT1VTRSBpcyBub3Qgc2V0DQojIENPTkZJR19ISURfU01BUlRKT1lQTFVTIGlz
IG5vdCBzZXQNCkNPTkZJR19ISURfVElWTz15DQpDT05GSUdfSElEX1RPUFNFRUQ9eQ0KQ09ORklH
X0hJRF9USElOR009eQ0KQ09ORklHX0hJRF9USFJVU1RNQVNURVI9eQ0KIyBDT05GSUdfVEhSVVNU
TUFTVEVSX0ZGIGlzIG5vdCBzZXQNCkNPTkZJR19ISURfV0FDT009eQ0KQ09ORklHX0hJRF9XSUlN
T1RFPXkNCkNPTkZJR19ISURfWElOTU89eQ0KQ09ORklHX0hJRF9aRVJPUExVUz15DQpDT05GSUdf
WkVST1BMVVNfRkY9eQ0KQ09ORklHX0hJRF9aWURBQ1JPTj15DQojIENPTkZJR19ISURfU0VOU09S
X0hVQiBpcyBub3Qgc2V0DQoNCiMNCiMgSTJDIEhJRCBzdXBwb3J0DQojDQojIENPTkZJR19JMkNf
SElEIGlzIG5vdCBzZXQNCkNPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5ESUFOPXkNCiMgQ09ORklH
X1VTQl9TVVBQT1JUIGlzIG5vdCBzZXQNCkNPTkZJR19VV0I9eQ0KQ09ORklHX1VXQl9XSENJPXkN
CiMgQ09ORklHX01NQyBpcyBub3Qgc2V0DQojIENPTkZJR19NRU1TVElDSyBpcyBub3Qgc2V0DQpD
T05GSUdfTkVXX0xFRFM9eQ0KQ09ORklHX0xFRFNfQ0xBU1M9eQ0KDQojDQojIExFRCBkcml2ZXJz
DQojDQpDT05GSUdfTEVEU184OFBNODYwWD15DQojIENPTkZJR19MRURTX0xNMzUzMCBpcyBub3Qg
c2V0DQojIENPTkZJR19MRURTX0xNMzUzMyBpcyBub3Qgc2V0DQpDT05GSUdfTEVEU19MTTM2NDI9
eQ0KIyBDT05GSUdfTEVEU19QQ0E5NTMyIGlzIG5vdCBzZXQNCkNPTkZJR19MRURTX0dQSU89eQ0K
Q09ORklHX0xFRFNfTFAzOTQ0PXkNCkNPTkZJR19MRURTX0xQNTVYWF9DT01NT049eQ0KQ09ORklH
X0xFRFNfTFA1NTIxPXkNCkNPTkZJR19MRURTX0xQNTUyMz15DQpDT05GSUdfTEVEU19MUDU1NjI9
eQ0KQ09ORklHX0xFRFNfTFA4NTAxPXkNCkNPTkZJR19MRURTX0xQODc4OD15DQpDT05GSUdfTEVE
U19DTEVWT19NQUlMPXkNCkNPTkZJR19MRURTX1BDQTk1NVg9eQ0KIyBDT05GSUdfTEVEU19QQ0E5
NjNYIGlzIG5vdCBzZXQNCkNPTkZJR19MRURTX1BDQTk2ODU9eQ0KQ09ORklHX0xFRFNfV004MzFY
X1NUQVRVUz15DQpDT05GSUdfTEVEU19XTTgzNTA9eQ0KQ09ORklHX0xFRFNfUFdNPXkNCkNPTkZJ
R19MRURTX1JFR1VMQVRPUj15DQpDT05GSUdfTEVEU19CRDI4MDI9eQ0KQ09ORklHX0xFRFNfSU5U
RUxfU1M0MjAwPXkNCkNPTkZJR19MRURTX0xUMzU5Mz15DQojIENPTkZJR19MRURTX01DMTM3ODMg
aXMgbm90IHNldA0KQ09ORklHX0xFRFNfVENBNjUwNz15DQpDT05GSUdfTEVEU19MTTM1NXg9eQ0K
IyBDT05GSUdfTEVEU19PVDIwMCBpcyBub3Qgc2V0DQpDT05GSUdfTEVEU19CTElOS009eQ0KDQoj
DQojIExFRCBUcmlnZ2Vycw0KIw0KQ09ORklHX0xFRFNfVFJJR0dFUlM9eQ0KQ09ORklHX0xFRFNf
VFJJR0dFUl9USU1FUj15DQpDT05GSUdfTEVEU19UUklHR0VSX09ORVNIT1Q9eQ0KQ09ORklHX0xF
RFNfVFJJR0dFUl9IRUFSVEJFQVQ9eQ0KIyBDT05GSUdfTEVEU19UUklHR0VSX0JBQ0tMSUdIVCBp
cyBub3Qgc2V0DQojIENPTkZJR19MRURTX1RSSUdHRVJfQ1BVIGlzIG5vdCBzZXQNCkNPTkZJR19M
RURTX1RSSUdHRVJfR1BJTz15DQpDT05GSUdfTEVEU19UUklHR0VSX0RFRkFVTFRfT049eQ0KDQoj
DQojIGlwdGFibGVzIHRyaWdnZXIgaXMgdW5kZXIgTmV0ZmlsdGVyIGNvbmZpZyAoTEVEIHRhcmdl
dCkNCiMNCiMgQ09ORklHX0xFRFNfVFJJR0dFUl9UUkFOU0lFTlQgaXMgbm90IHNldA0KQ09ORklH
X0xFRFNfVFJJR0dFUl9DQU1FUkE9eQ0KQ09ORklHX0FDQ0VTU0lCSUxJVFk9eQ0KQ09ORklHX0lO
RklOSUJBTkQ9eQ0KIyBDT05GSUdfSU5GSU5JQkFORF9VU0VSX01BRCBpcyBub3Qgc2V0DQpDT05G
SUdfSU5GSU5JQkFORF9VU0VSX0FDQ0VTUz15DQpDT05GSUdfSU5GSU5JQkFORF9VU0VSX01FTT15
DQpDT05GSUdfSU5GSU5JQkFORF9BRERSX1RSQU5TPXkNCiMgQ09ORklHX0lORklOSUJBTkRfTVRI
Q0EgaXMgbm90IHNldA0KQ09ORklHX0lORklOSUJBTkRfSVBBVEg9eQ0KQ09ORklHX0lORklOSUJB
TkRfUUlCPXkNCkNPTkZJR19JTkZJTklCQU5EX0FNU08xMTAwPXkNCkNPTkZJR19JTkZJTklCQU5E
X0FNU08xMTAwX0RFQlVHPXkNCkNPTkZJR19JTkZJTklCQU5EX05FUz15DQojIENPTkZJR19JTkZJ
TklCQU5EX05FU19ERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19FREFDIGlzIG5vdCBzZXQNCkNP
TkZJR19SVENfTElCPXkNCkNPTkZJR19SVENfQ0xBU1M9eQ0KIyBDT05GSUdfUlRDX0hDVE9TWVMg
aXMgbm90IHNldA0KIyBDT05GSUdfUlRDX1NZU1RPSEMgaXMgbm90IHNldA0KIyBDT05GSUdfUlRD
X0RFQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBSVEMgaW50ZXJmYWNlcw0KIw0KIyBDT05GSUdfUlRD
X0lOVEZfU1lTRlMgaXMgbm90IHNldA0KQ09ORklHX1JUQ19JTlRGX0RFVj15DQojIENPTkZJR19S
VENfSU5URl9ERVZfVUlFX0VNVUwgaXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfVEVTVD15DQoN
CiMNCiMgSTJDIFJUQyBkcml2ZXJzDQojDQpDT05GSUdfUlRDX0RSVl84OFBNODYwWD15DQojIENP
TkZJR19SVENfRFJWXzg4UE04MFggaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RSVl9EUzEzMDcg
aXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfRFMxMzc0PXkNCkNPTkZJR19SVENfRFJWX0RTMTY3
Mj15DQojIENPTkZJR19SVENfRFJWX0RTMzIzMiBpcyBub3Qgc2V0DQpDT05GSUdfUlRDX0RSVl9M
UDg3ODg9eQ0KQ09ORklHX1JUQ19EUlZfTUFYNjkwMD15DQpDT05GSUdfUlRDX0RSVl9NQVg4OTk4
PXkNCkNPTkZJR19SVENfRFJWX1JTNUMzNzI9eQ0KQ09ORklHX1JUQ19EUlZfSVNMMTIwOD15DQpD
T05GSUdfUlRDX0RSVl9JU0wxMjAyMj15DQojIENPTkZJR19SVENfRFJWX0lTTDEyMDU3IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1JUQ19EUlZfWDEyMDUgaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RS
Vl9QQUxNQVMgaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RSVl9QQ0YyMTI3IGlzIG5vdCBzZXQN
CkNPTkZJR19SVENfRFJWX1BDRjg1MjM9eQ0KQ09ORklHX1JUQ19EUlZfUENGODU2Mz15DQojIENP
TkZJR19SVENfRFJWX1BDRjg1ODMgaXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfTTQxVDgwPXkN
CiMgQ09ORklHX1JUQ19EUlZfTTQxVDgwX1dEVCBpcyBub3Qgc2V0DQojIENPTkZJR19SVENfRFJW
X0JRMzJLIGlzIG5vdCBzZXQNCkNPTkZJR19SVENfRFJWX1RXTDQwMzA9eQ0KQ09ORklHX1JUQ19E
UlZfVFBTNjU4Nlg9eQ0KQ09ORklHX1JUQ19EUlZfUzM1MzkwQT15DQojIENPTkZJR19SVENfRFJW
X0ZNMzEzMCBpcyBub3Qgc2V0DQojIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0DQpD
T05GSUdfUlRDX0RSVl9SWDgwMjU9eQ0KQ09ORklHX1JUQ19EUlZfRU0zMDI3PXkNCiMgQ09ORklH
X1JUQ19EUlZfUlYzMDI5QzIgaXMgbm90IHNldA0KDQojDQojIFNQSSBSVEMgZHJpdmVycw0KIw0K
DQojDQojIFBsYXRmb3JtIFJUQyBkcml2ZXJzDQojDQpDT05GSUdfUlRDX0RSVl9DTU9TPXkNCkNP
TkZJR19SVENfRFJWX0RTMTI4Nj15DQojIENPTkZJR19SVENfRFJWX0RTMTUxMSBpcyBub3Qgc2V0
DQojIENPTkZJR19SVENfRFJWX0RTMTU1MyBpcyBub3Qgc2V0DQpDT05GSUdfUlRDX0RSVl9EUzE3
NDI9eQ0KQ09ORklHX1JUQ19EUlZfREE5MDU1PXkNCkNPTkZJR19SVENfRFJWX1NUSzE3VEE4PXkN
CkNPTkZJR19SVENfRFJWX000OFQ4Nj15DQpDT05GSUdfUlRDX0RSVl9NNDhUMzU9eQ0KQ09ORklH
X1JUQ19EUlZfTTQ4VDU5PXkNCkNPTkZJR19SVENfRFJWX01TTTYyNDI9eQ0KQ09ORklHX1JUQ19E
UlZfQlE0ODAyPXkNCkNPTkZJR19SVENfRFJWX1JQNUMwMT15DQojIENPTkZJR19SVENfRFJWX1Yz
MDIwIGlzIG5vdCBzZXQNCkNPTkZJR19SVENfRFJWX0RTMjQwND15DQpDT05GSUdfUlRDX0RSVl9X
TTgzMVg9eQ0KQ09ORklHX1JUQ19EUlZfV004MzUwPXkNCkNPTkZJR19SVENfRFJWX1BDRjUwNjMz
PXkNCg0KIw0KIyBvbi1DUFUgUlRDIGRyaXZlcnMNCiMNCkNPTkZJR19SVENfRFJWX01DMTNYWFg9
eQ0KQ09ORklHX1JUQ19EUlZfTU9YQVJUPXkNCg0KIw0KIyBISUQgU2Vuc29yIFJUQyBkcml2ZXJz
DQojDQpDT05GSUdfRE1BREVWSUNFUz15DQojIENPTkZJR19ETUFERVZJQ0VTX0RFQlVHIGlzIG5v
dCBzZXQNCg0KIw0KIyBETUEgRGV2aWNlcw0KIw0KQ09ORklHX0lOVEVMX01JRF9ETUFDPXkNCiMg
Q09ORklHX0lOVEVMX0lPQVRETUEgaXMgbm90IHNldA0KIyBDT05GSUdfRFdfRE1BQ19DT1JFIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0RXX0RNQUMgaXMgbm90IHNldA0KIyBDT05GSUdfRFdfRE1BQ19Q
Q0kgaXMgbm90IHNldA0KQ09ORklHX1RJTUJfRE1BPXkNCkNPTkZJR19QQ0hfRE1BPXkNCkNPTkZJ
R19ETUFfRU5HSU5FPXkNCkNPTkZJR19ETUFfQUNQST15DQoNCiMNCiMgRE1BIENsaWVudHMNCiMN
CiMgQ09ORklHX0FTWU5DX1RYX0RNQSBpcyBub3Qgc2V0DQpDT05GSUdfRE1BVEVTVD15DQojIENP
TkZJR19BVVhESVNQTEFZIGlzIG5vdCBzZXQNCkNPTkZJR19VSU89eQ0KQ09ORklHX1VJT19DSUY9
eQ0KQ09ORklHX1VJT19QRFJWX0dFTklSUT15DQpDT05GSUdfVUlPX0RNRU1fR0VOSVJRPXkNCkNP
TkZJR19VSU9fQUVDPXkNCkNPTkZJR19VSU9fU0VSQ09TMz15DQojIENPTkZJR19VSU9fUENJX0dF
TkVSSUMgaXMgbm90IHNldA0KIyBDT05GSUdfVUlPX05FVFggaXMgbm90IHNldA0KQ09ORklHX1VJ
T19NRjYyND15DQojIENPTkZJR19WSVJUX0RSSVZFUlMgaXMgbm90IHNldA0KQ09ORklHX1ZJUlRJ
Tz15DQoNCiMNCiMgVmlydGlvIGRyaXZlcnMNCiMNCiMgQ09ORklHX1ZJUlRJT19QQ0kgaXMgbm90
IHNldA0KQ09ORklHX1ZJUlRJT19CQUxMT09OPXkNCiMgQ09ORklHX1ZJUlRJT19NTUlPIGlzIG5v
dCBzZXQNCg0KIw0KIyBNaWNyb3NvZnQgSHlwZXItViBndWVzdCBzdXBwb3J0DQojDQpDT05GSUdf
SFlQRVJWPXkNCkNPTkZJR19IWVBFUlZfVVRJTFM9eQ0KQ09ORklHX0hZUEVSVl9CQUxMT09OPXkN
Cg0KIw0KIyBYZW4gZHJpdmVyIHN1cHBvcnQNCiMNCiMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5v
dCBzZXQNCkNPTkZJR19YRU5fREVWX0VWVENITj15DQojIENPTkZJR19YRU5fQkFDS0VORCBpcyBu
b3Qgc2V0DQpDT05GSUdfWEVORlM9eQ0KQ09ORklHX1hFTl9DT01QQVRfWEVORlM9eQ0KQ09ORklH
X1hFTl9TWVNfSFlQRVJWSVNPUj15DQpDT05GSUdfWEVOX1hFTkJVU19GUk9OVEVORD15DQpDT05G
SUdfWEVOX0dOVERFVj15DQpDT05GSUdfWEVOX0dSQU5UX0RFVl9BTExPQz15DQpDT05GSUdfU1dJ
T1RMQl9YRU49eQ0KQ09ORklHX1hFTl9UTUVNPXkNCkNPTkZJR19YRU5fUFJJVkNNRD15DQojIENP
TkZJR19YRU5fQUNQSV9QUk9DRVNTT1IgaXMgbm90IHNldA0KQ09ORklHX1hFTl9IQVZFX1BWTU1V
PXkNCkNPTkZJR19TVEFHSU5HPXkNCkNPTkZJR19TTElDT1NTPXkNCkNPTkZJR19FQ0hPPXkNCkNP
TkZJR19QQU5FTD15DQpDT05GSUdfUEFORUxfUEFSUE9SVD0wDQpDT05GSUdfUEFORUxfUFJPRklM
RT01DQpDT05GSUdfUEFORUxfQ0hBTkdFX01FU1NBR0U9eQ0KQ09ORklHX1BBTkVMX0JPT1RfTUVT
U0FHRT0iIg0KIyBDT05GSUdfRFhfU0VQIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9TTTdYWD15DQpD
T05GSUdfQ1JZU1RBTEhEPXkNCkNPTkZJR19GQl9YR0k9eQ0KQ09ORklHX0FDUElfUVVJQ0tTVEFS
VD15DQpDT05GSUdfRlQxMDAwPXkNCg0KIw0KIyBTcGVha3VwIGNvbnNvbGUgc3BlZWNoDQojDQpD
T05GSUdfVE9VQ0hTQ1JFRU5fQ0xFQVJQQURfVE0xMjE3PXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X1NZTkFQVElDU19JMkNfUk1JNCBpcyBub3Qgc2V0DQpDT05GSUdfU1RBR0lOR19NRURJQT15DQpD
T05GSUdfRFZCX0NYRDIwOTk9eQ0KIyBDT05GSUdfVklERU9fRFQzMTU1IGlzIG5vdCBzZXQNCkNP
TkZJR19WSURFT19WNEwyX0lOVF9ERVZJQ0U9eQ0KQ09ORklHX1ZJREVPX1RDTTgyNVg9eQ0KQ09O
RklHX1VTQl9TTjlDMTAyPXkNCkNPTkZJR19TT0xPNlgxMD15DQojIENPTkZJR19MSVJDX1NUQUdJ
TkcgaXMgbm90IHNldA0KDQojDQojIEFuZHJvaWQNCiMNCkNPTkZJR19BTkRST0lEPXkNCkNPTkZJ
R19BTkRST0lEX0JJTkRFUl9JUEM9eQ0KIyBDT05GSUdfQVNITUVNIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FORFJPSURfTE9HR0VSIGlzIG5vdCBzZXQNCkNPTkZJR19BTkRST0lEX1RJTUVEX09VVFBV
VD15DQojIENPTkZJR19BTkRST0lEX1RJTUVEX0dQSU8gaXMgbm90IHNldA0KQ09ORklHX0FORFJP
SURfTE9XX01FTU9SWV9LSUxMRVI9eQ0KIyBDT05GSUdfQU5EUk9JRF9JTlRGX0FMQVJNX0RFViBp
cyBub3Qgc2V0DQojIENPTkZJR19TWU5DIGlzIG5vdCBzZXQNCkNPTkZJR19JT049eQ0KQ09ORklH
X0lPTl9URVNUPXkNCkNPTkZJR19ER1JQPXkNCkNPTkZJR19YSUxMWUJVUz15DQojIENPTkZJR19Y
SUxMWUJVU19QQ0lFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RHTkMgaXMgbm90IHNldA0KQ09ORklH
X0RHQVA9eQ0KQ09ORklHX1g4Nl9QTEFURk9STV9ERVZJQ0VTPXkNCiMgQ09ORklHX0FDRVJIREYg
aXMgbm90IHNldA0KIyBDT05GSUdfQVNVU19MQVBUT1AgaXMgbm90IHNldA0KQ09ORklHX0RFTExf
TEFQVE9QPXkNCkNPTkZJR19GVUpJVFNVX0xBUFRPUD15DQpDT05GSUdfRlVKSVRTVV9MQVBUT1Bf
REVCVUc9eQ0KIyBDT05GSUdfRlVKSVRTVV9UQUJMRVQgaXMgbm90IHNldA0KIyBDT05GSUdfSFBf
QUNDRUwgaXMgbm90IHNldA0KQ09ORklHX1BBTkFTT05JQ19MQVBUT1A9eQ0KQ09ORklHX1RISU5L
UEFEX0FDUEk9eQ0KQ09ORklHX1RISU5LUEFEX0FDUElfQUxTQV9TVVBQT1JUPXkNCkNPTkZJR19U
SElOS1BBRF9BQ1BJX0RFQlVHRkFDSUxJVElFUz15DQojIENPTkZJR19USElOS1BBRF9BQ1BJX0RF
QlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RISU5LUEFEX0FDUElfVU5TQUZFX0xFRFMgaXMgbm90
IHNldA0KQ09ORklHX1RISU5LUEFEX0FDUElfVklERU89eQ0KQ09ORklHX1RISU5LUEFEX0FDUElf
SE9US0VZX1BPTEw9eQ0KIyBDT05GSUdfU0VOU09SU19IREFQUyBpcyBub3Qgc2V0DQpDT05GSUdf
SU5URUxfTUVOTE9XPXkNCiMgQ09ORklHX0FDUElfV01JIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RP
UFNUQVJfTEFQVE9QIGlzIG5vdCBzZXQNCkNPTkZJR19UT1NISUJBX0JUX1JGS0lMTD15DQojIENP
TkZJR19BQ1BJX0NNUEMgaXMgbm90IHNldA0KQ09ORklHX0lOVEVMX0lQUz15DQpDT05GSUdfSUJN
X1JUTD15DQojIENPTkZJR19YTzE1X0VCT09LIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVNVTkdf
TEFQVE9QIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVNVTkdfUTEwIGlzIG5vdCBzZXQNCkNPTkZJ
R19BUFBMRV9HTVVYPXkNCkNPTkZJR19JTlRFTF9SU1Q9eQ0KIyBDT05GSUdfSU5URUxfU01BUlRD
T05ORUNUIGlzIG5vdCBzZXQNCkNPTkZJR19QVlBBTklDPXkNCkNPTkZJR19DSFJPTUVfUExBVEZP
Uk1TPXkNCkNPTkZJR19DSFJPTUVPU19MQVBUT1A9eQ0KIyBDT05GSUdfQ0hST01FT1NfUFNUT1JF
IGlzIG5vdCBzZXQNCg0KIw0KIyBIYXJkd2FyZSBTcGlubG9jayBkcml2ZXJzDQojDQpDT05GSUdf
Q0xLRVZUX0k4MjUzPXkNCkNPTkZJR19JODI1M19MT0NLPXkNCkNPTkZJR19DTEtCTERfSTgyNTM9
eQ0KQ09ORklHX01BSUxCT1g9eQ0KIyBDT05GSUdfSU9NTVVfU1VQUE9SVCBpcyBub3Qgc2V0DQoN
CiMNCiMgUmVtb3RlcHJvYyBkcml2ZXJzDQojDQpDT05GSUdfUkVNT1RFUFJPQz15DQpDT05GSUdf
U1RFX01PREVNX1JQUk9DPXkNCg0KIw0KIyBScG1zZyBkcml2ZXJzDQojDQpDT05GSUdfUE1fREVW
RlJFUT15DQoNCiMNCiMgREVWRlJFUSBHb3Zlcm5vcnMNCiMNCkNPTkZJR19ERVZGUkVRX0dPVl9T
SU1QTEVfT05ERU1BTkQ9eQ0KQ09ORklHX0RFVkZSRVFfR09WX1BFUkZPUk1BTkNFPXkNCiMgQ09O
RklHX0RFVkZSRVFfR09WX1BPV0VSU0FWRSBpcyBub3Qgc2V0DQojIENPTkZJR19ERVZGUkVRX0dP
Vl9VU0VSU1BBQ0UgaXMgbm90IHNldA0KDQojDQojIERFVkZSRVEgRHJpdmVycw0KIw0KQ09ORklH
X0VYVENPTj15DQoNCiMNCiMgRXh0Y29uIERldmljZSBEcml2ZXJzDQojDQpDT05GSUdfRVhUQ09O
X0dQSU89eQ0KQ09ORklHX0VYVENPTl9NQVg3NzY5Mz15DQpDT05GSUdfRVhUQ09OX0FSSVpPTkE9
eQ0KIyBDT05GSUdfRVhUQ09OX1BBTE1BUyBpcyBub3Qgc2V0DQojIENPTkZJR19NRU1PUlkgaXMg
bm90IHNldA0KIyBDT05GSUdfSUlPIGlzIG5vdCBzZXQNCkNPTkZJR19OVEI9eQ0KIyBDT05GSUdf
Vk1FX0JVUyBpcyBub3Qgc2V0DQpDT05GSUdfUFdNPXkNCkNPTkZJR19QV01fU1lTRlM9eQ0KQ09O
RklHX1BXTV9SRU5FU0FTX1RQVT15DQpDT05GSUdfUFdNX1RXTD15DQpDT05GSUdfUFdNX1RXTF9M
RUQ9eQ0KQ09ORklHX0lQQUNLX0JVUz15DQpDT05GSUdfQk9BUkRfVFBDSTIwMD15DQpDT05GSUdf
U0VSSUFMX0lQT0NUQUw9eQ0KQ09ORklHX1JFU0VUX0NPTlRST0xMRVI9eQ0KQ09ORklHX0ZNQz15
DQpDT05GSUdfRk1DX0ZBS0VERVY9eQ0KQ09ORklHX0ZNQ19UUklWSUFMPXkNCiMgQ09ORklHX0ZN
Q19XUklURV9FRVBST00gaXMgbm90IHNldA0KIyBDT05GSUdfRk1DX0NIQVJERVYgaXMgbm90IHNl
dA0KDQojDQojIFBIWSBTdWJzeXN0ZW0NCiMNCiMgQ09ORklHX0dFTkVSSUNfUEhZIGlzIG5vdCBz
ZXQNCkNPTkZJR19QSFlfRVhZTk9TX01JUElfVklERU89eQ0KIyBDT05GSUdfUE9XRVJDQVAgaXMg
bm90IHNldA0KDQojDQojIEZpcm13YXJlIERyaXZlcnMNCiMNCiMgQ09ORklHX0VERCBpcyBub3Qg
c2V0DQpDT05GSUdfRklSTVdBUkVfTUVNTUFQPXkNCiMgQ09ORklHX0RFTExfUkJVIGlzIG5vdCBz
ZXQNCkNPTkZJR19EQ0RCQVM9eQ0KQ09ORklHX0RNSUlEPXkNCiMgQ09ORklHX0RNSV9TWVNGUyBp
cyBub3Qgc2V0DQpDT05GSUdfRE1JX1NDQU5fTUFDSElORV9OT05fRUZJX0ZBTExCQUNLPXkNCkNP
TkZJR19JU0NTSV9JQkZUX0ZJTkQ9eQ0KIyBDT05GSUdfR09PR0xFX0ZJUk1XQVJFIGlzIG5vdCBz
ZXQNCg0KIw0KIyBFRkkgKEV4dGVuc2libGUgRmlybXdhcmUgSW50ZXJmYWNlKSBTdXBwb3J0DQoj
DQpDT05GSUdfRUZJX1ZBUlM9eQ0KQ09ORklHX0VGSV9SVU5USU1FX01BUD15DQoNCiMNCiMgRmls
ZSBzeXN0ZW1zDQojDQpDT05GSUdfRENBQ0hFX1dPUkRfQUNDRVNTPXkNCkNPTkZJR19GU19QT1NJ
WF9BQ0w9eQ0KQ09ORklHX0VYUE9SVEZTPXkNCkNPTkZJR19GSUxFX0xPQ0tJTkc9eQ0KQ09ORklH
X0ZTTk9USUZZPXkNCkNPTkZJR19ETk9USUZZPXkNCiMgQ09ORklHX0lOT1RJRllfVVNFUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19GQU5PVElGWSBpcyBub3Qgc2V0DQpDT05GSUdfUVVPVEE9eQ0KQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkNCiMgQ09ORklHX1BSSU5UX1FVT1RBX1dBUk5J
TkcgaXMgbm90IHNldA0KIyBDT05GSUdfUVVPVEFfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX1FV
T1RBX1RSRUU9eQ0KQ09ORklHX1FGTVRfVjE9eQ0KQ09ORklHX1FGTVRfVjI9eQ0KQ09ORklHX1FV
T1RBQ1RMPXkNCiMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfRlVTRV9G
UyBpcyBub3Qgc2V0DQpDT05GSUdfR0VORVJJQ19BQ0w9eQ0KDQojDQojIENhY2hlcw0KIw0KIyBD
T05GSUdfRlNDQUNIRSBpcyBub3Qgc2V0DQoNCiMNCiMgUHNldWRvIGZpbGVzeXN0ZW1zDQojDQoj
IENPTkZJR19QUk9DX0ZTIGlzIG5vdCBzZXQNCkNPTkZJR19TWVNGUz15DQpDT05GSUdfVE1QRlM9
eQ0KQ09ORklHX1RNUEZTX1BPU0lYX0FDTD15DQpDT05GSUdfVE1QRlNfWEFUVFI9eQ0KQ09ORklH
X0hVR0VUTEJGUz15DQpDT05GSUdfSFVHRVRMQl9QQUdFPXkNCiMgQ09ORklHX0NPTkZJR0ZTX0ZT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01JU0NfRklMRVNZU1RFTVMgaXMgbm90IHNldA0KQ09ORklH
X05FVFdPUktfRklMRVNZU1RFTVM9eQ0KQ09ORklHX05GU19GUz15DQojIENPTkZJR19ORlNfVjIg
aXMgbm90IHNldA0KQ09ORklHX05GU19WMz15DQpDT05GSUdfTkZTX1YzX0FDTD15DQpDT05GSUdf
TkZTX1Y0PXkNCkNPTkZJR19ORlNfU1dBUD15DQojIENPTkZJR19ORlNfVjRfMSBpcyBub3Qgc2V0
DQojIENPTkZJR19ORlNfVVNFX0xFR0FDWV9ETlMgaXMgbm90IHNldA0KQ09ORklHX05GU19VU0Vf
S0VSTkVMX0ROUz15DQojIENPTkZJR19ORlNEIGlzIG5vdCBzZXQNCkNPTkZJR19MT0NLRD15DQpD
T05GSUdfTE9DS0RfVjQ9eQ0KQ09ORklHX05GU19BQ0xfU1VQUE9SVD15DQpDT05GSUdfTkZTX0NP
TU1PTj15DQpDT05GSUdfU1VOUlBDPXkNCkNPTkZJR19TVU5SUENfR1NTPXkNCkNPTkZJR19TVU5S
UENfWFBSVF9SRE1BPXkNCkNPTkZJR19TVU5SUENfU1dBUD15DQojIENPTkZJR19SUENTRUNfR1NT
X0tSQjUgaXMgbm90IHNldA0KIyBDT05GSUdfQ0VQSF9GUyBpcyBub3Qgc2V0DQpDT05GSUdfQ0lG
Uz15DQpDT05GSUdfQ0lGU19TVEFUUz15DQpDT05GSUdfQ0lGU19TVEFUUzI9eQ0KIyBDT05GSUdf
Q0lGU19XRUFLX1BXX0hBU0ggaXMgbm90IHNldA0KIyBDT05GSUdfQ0lGU19VUENBTEwgaXMgbm90
IHNldA0KQ09ORklHX0NJRlNfWEFUVFI9eQ0KQ09ORklHX0NJRlNfUE9TSVg9eQ0KQ09ORklHX0NJ
RlNfQUNMPXkNCiMgQ09ORklHX0NJRlNfREVCVUcgaXMgbm90IHNldA0KIyBDT05GSUdfQ0lGU19E
RlNfVVBDQUxMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NJRlNfU01CMiBpcyBub3Qgc2V0DQpDT05G
SUdfTkNQX0ZTPXkNCiMgQ09ORklHX05DUEZTX1BBQ0tFVF9TSUdOSU5HIGlzIG5vdCBzZXQNCkNP
TkZJR19OQ1BGU19JT0NUTF9MT0NLSU5HPXkNCkNPTkZJR19OQ1BGU19TVFJPTkc9eQ0KQ09ORklH
X05DUEZTX05GU19OUz15DQpDT05GSUdfTkNQRlNfT1MyX05TPXkNCkNPTkZJR19OQ1BGU19TTUFM
TERPUz15DQojIENPTkZJR19OQ1BGU19OTFMgaXMgbm90IHNldA0KQ09ORklHX05DUEZTX0VYVFJB
Uz15DQpDT05GSUdfQ09EQV9GUz15DQojIENPTkZJR19BRlNfRlMgaXMgbm90IHNldA0KQ09ORklH
XzlQX0ZTPXkNCkNPTkZJR185UF9GU19QT1NJWF9BQ0w9eQ0KQ09ORklHXzlQX0ZTX1NFQ1VSSVRZ
PXkNCkNPTkZJR19OTFM9eQ0KQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiDQojIENPTkZJ
R19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfQ09ERVBBR0VfNzM3PXkN
CkNPTkZJR19OTFNfQ09ERVBBR0VfNzc1PXkNCiMgQ09ORklHX05MU19DT0RFUEFHRV84NTAgaXMg
bm90IHNldA0KQ09ORklHX05MU19DT0RFUEFHRV84NTI9eQ0KQ09ORklHX05MU19DT0RFUEFHRV84
NTU9eQ0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NP
REVQQUdFXzg2MD15DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODYxIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05MU19DT0RFUEFHRV84NjIgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2
MyBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15DQpDT05GSUdfTkxTX0NPREVQ
QUdFXzg2NT15DQpDT05GSUdfTkxTX0NPREVQQUdFXzg2Nj15DQojIENPTkZJR19OTFNfQ09ERVBB
R0VfODY5IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV85MzYgaXMgbm90IHNldA0K
Q09ORklHX05MU19DT0RFUEFHRV85NTA9eQ0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzMiBpcyBu
b3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzk0OT15DQpDT05GSUdfTkxTX0NPREVQQUdFXzg3
ND15DQojIENPTkZJR19OTFNfSVNPODg1OV84IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfQ09ERVBB
R0VfMTI1MD15DQpDT05GSUdfTkxTX0NPREVQQUdFXzEyNTE9eQ0KQ09ORklHX05MU19BU0NJST15
DQpDT05GSUdfTkxTX0lTTzg4NTlfMT15DQpDT05GSUdfTkxTX0lTTzg4NTlfMj15DQpDT05GSUdf
TkxTX0lTTzg4NTlfMz15DQpDT05GSUdfTkxTX0lTTzg4NTlfND15DQpDT05GSUdfTkxTX0lTTzg4
NTlfNT15DQpDT05GSUdfTkxTX0lTTzg4NTlfNj15DQpDT05GSUdfTkxTX0lTTzg4NTlfNz15DQoj
IENPTkZJR19OTFNfSVNPODg1OV85IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfSVNPODg1OV8xMz15
DQpDT05GSUdfTkxTX0lTTzg4NTlfMTQ9eQ0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldA0KQ09ORklHX05MU19LT0k4X1I9eQ0KIyBDT05GSUdfTkxTX0tPSThfVSBpcyBub3Qgc2V0
DQpDT05GSUdfTkxTX01BQ19ST01BTj15DQojIENPTkZJR19OTFNfTUFDX0NFTFRJQyBpcyBub3Qg
c2V0DQpDT05GSUdfTkxTX01BQ19DRU5URVVSTz15DQpDT05GSUdfTkxTX01BQ19DUk9BVElBTj15
DQojIENPTkZJR19OTFNfTUFDX0NZUklMTElDIGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfTUFDX0dB
RUxJQz15DQpDT05GSUdfTkxTX01BQ19HUkVFSz15DQpDT05GSUdfTkxTX01BQ19JQ0VMQU5EPXkN
CiMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMgbm90IHNldA0KQ09ORklHX05MU19NQUNfUk9NQU5J
QU49eQ0KQ09ORklHX05MU19NQUNfVFVSS0lTSD15DQpDT05GSUdfTkxTX1VURjg9eQ0KDQojDQoj
IEtlcm5lbCBoYWNraW5nDQojDQpDT05GSUdfVFJBQ0VfSVJRRkxBR1NfU1VQUE9SVD15DQoNCiMN
CiMgcHJpbnRrIGFuZCBkbWVzZyBvcHRpb25zDQojDQojIENPTkZJR19QUklOVEtfVElNRSBpcyBu
b3Qgc2V0DQpDT05GSUdfREVGQVVMVF9NRVNTQUdFX0xPR0xFVkVMPTQNCkNPTkZJR19CT09UX1BS
SU5US19ERUxBWT15DQojIENPTkZJR19EWU5BTUlDX0RFQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBD
b21waWxlLXRpbWUgY2hlY2tzIGFuZCBjb21waWxlciBvcHRpb25zDQojDQojIENPTkZJR19ERUJV
R19JTkZPIGlzIG5vdCBzZXQNCiMgQ09ORklHX0VOQUJMRV9XQVJOX0RFUFJFQ0FURUQgaXMgbm90
IHNldA0KQ09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkNCkNPTkZJR19GUkFNRV9XQVJOPTIwNDgN
CkNPTkZJR19TVFJJUF9BU01fU1lNUz15DQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMgbm90IHNl
dA0KIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldA0KQ09ORklHX0RFQlVHX0ZTPXkN
CiMgQ09ORklHX0hFQURFUlNfQ0hFQ0sgaXMgbm90IHNldA0KQ09ORklHX0RFQlVHX1NFQ1RJT05f
TUlTTUFUQ0g9eQ0KQ09ORklHX0FSQ0hfV0FOVF9GUkFNRV9QT0lOVEVSUz15DQpDT05GSUdfRlJB
TUVfUE9JTlRFUj15DQpDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BVPXkNCkNPTkZJR19N
QUdJQ19TWVNSUT15DQpDT05GSUdfTUFHSUNfU1lTUlFfREVGQVVMVF9FTkFCTEU9MHgxDQpDT05G
SUdfREVCVUdfS0VSTkVMPXkNCg0KIw0KIyBNZW1vcnkgRGVidWdnaW5nDQojDQpDT05GSUdfREVC
VUdfUEFHRUFMTE9DPXkNCkNPTkZJR19XQU5UX1BBR0VfREVCVUdfRkxBR1M9eQ0KQ09ORklHX1BB
R0VfR1VBUkQ9eQ0KQ09ORklHX0RFQlVHX09CSkVDVFM9eQ0KIyBDT05GSUdfREVCVUdfT0JKRUNU
U19TRUxGVEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfREVCVUdfT0JKRUNUU19GUkVFPXkNCkNPTkZJ
R19ERUJVR19PQkpFQ1RTX1RJTUVSUz15DQpDT05GSUdfREVCVUdfT0JKRUNUU19XT1JLPXkNCiMg
Q09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdf
T0JKRUNUU19QRVJDUFVfQ09VTlRFUiBpcyBub3Qgc2V0DQpDT05GSUdfREVCVUdfT0JKRUNUU19F
TkFCTEVfREVGQVVMVD0xDQpDT05GSUdfU0xVQl9TVEFUUz15DQpDT05GSUdfSEFWRV9ERUJVR19L
TUVNTEVBSz15DQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qgc2V0DQpDT05GSUdfREVC
VUdfU1RBQ0tfVVNBR0U9eQ0KIyBDT05GSUdfREVCVUdfVk0gaXMgbm90IHNldA0KIyBDT05GSUdf
REVCVUdfVklSVFVBTCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NRU1PUllfSU5JVCBpcyBu
b3Qgc2V0DQpDT05GSUdfTUVNT1JZX05PVElGSUVSX0VSUk9SX0lOSkVDVD15DQpDT05GSUdfSEFW
RV9ERUJVR19TVEFDS09WRVJGTE9XPXkNCiMgQ09ORklHX0RFQlVHX1NUQUNLT1ZFUkZMT1cgaXMg
bm90IHNldA0KQ09ORklHX0hBVkVfQVJDSF9LTUVNQ0hFQ0s9eQ0KIyBDT05GSUdfREVCVUdfU0hJ
UlEgaXMgbm90IHNldA0KDQojDQojIERlYnVnIExvY2t1cHMgYW5kIEhhbmdzDQojDQojIENPTkZJ
R19MT0NLVVBfREVURUNUT1IgaXMgbm90IHNldA0KQ09ORklHX0RFVEVDVF9IVU5HX1RBU0s9eQ0K
Q09ORklHX0RFRkFVTFRfSFVOR19UQVNLX1RJTUVPVVQ9MTIwDQojIENPTkZJR19CT09UUEFSQU1f
SFVOR19UQVNLX1BBTklDIGlzIG5vdCBzZXQNCkNPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BB
TklDX1ZBTFVFPTANCkNPTkZJR19QQU5JQ19PTl9PT1BTPXkNCkNPTkZJR19QQU5JQ19PTl9PT1BT
X1ZBTFVFPTENCkNPTkZJR19QQU5JQ19USU1FT1VUPTANCg0KIw0KIyBMb2NrIERlYnVnZ2luZyAo
c3BpbmxvY2tzLCBtdXRleGVzLCBldGMuLi4pDQojDQpDT05GSUdfREVCVUdfUlRfTVVURVhFUz15
DQpDT05GSUdfREVCVUdfUElfTElTVD15DQojIENPTkZJR19SVF9NVVRFWF9URVNURVIgaXMgbm90
IHNldA0KQ09ORklHX0RFQlVHX1NQSU5MT0NLPXkNCkNPTkZJR19ERUJVR19NVVRFWEVTPXkNCiMg
Q09ORklHX0RFQlVHX1dXX01VVEVYX1NMT1dQQVRIIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19M
T0NLX0FMTE9DPXkNCiMgQ09ORklHX1BST1ZFX0xPQ0tJTkcgaXMgbm90IHNldA0KQ09ORklHX0xP
Q0tERVA9eQ0KQ09ORklHX0xPQ0tfU1RBVD15DQojIENPTkZJR19ERUJVR19MT0NLREVQIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0RFQlVHX0FUT01JQ19TTEVFUCBpcyBub3Qgc2V0DQpDT05GSUdfREVC
VUdfTE9DS0lOR19BUElfU0VMRlRFU1RTPXkNCkNPTkZJR19TVEFDS1RSQUNFPXkNCkNPTkZJR19E
RUJVR19LT0JKRUNUPXkNCiMgQ09ORklHX0RFQlVHX0tPQkpFQ1RfUkVMRUFTRSBpcyBub3Qgc2V0
DQojIENPTkZJR19ERUJVR19XUklURUNPVU5UIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19MSVNU
PXkNCiMgQ09ORklHX0RFQlVHX1NHIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19OT1RJRklFUlM9
eQ0KIyBDT05GSUdfREVCVUdfQ1JFREVOVElBTFMgaXMgbm90IHNldA0KDQojDQojIFJDVSBEZWJ1
Z2dpbmcNCiMNCkNPTkZJR19TUEFSU0VfUkNVX1BPSU5URVI9eQ0KQ09ORklHX1JDVV9UT1JUVVJF
X1RFU1Q9eQ0KIyBDT05GSUdfUkNVX1RPUlRVUkVfVEVTVF9SVU5OQUJMRSBpcyBub3Qgc2V0DQpD
T05GSUdfUkNVX0NQVV9TVEFMTF9USU1FT1VUPTIxDQpDT05GSUdfUkNVX1RSQUNFPXkNCkNPTkZJ
R19OT1RJRklFUl9FUlJPUl9JTkpFQ1RJT049eQ0KIyBDT05GSUdfUE1fTk9USUZJRVJfRVJST1Jf
SU5KRUNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZBVUxUX0lOSkVDVElPTiBpcyBub3Qgc2V0DQpD
T05GSUdfQVJDSF9IQVNfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1M9eQ0KIyBDT05GSUdf
REVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1MgaXMgbm90IHNldA0KQ09ORklHX1VTRVJfU1RB
Q0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19IQVZFX0ZVTkNUSU9OX1RSQUNFUj15DQpDT05GSUdf
SEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQ0KQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhf
RlBfVEVTVD15DQpDT05GSUdfSEFWRV9GVU5DVElPTl9UUkFDRV9NQ09VTlRfVEVTVD15DQpDT05G
SUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15DQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRV9XSVRI
X1JFR1M9eQ0KQ09ORklHX0hBVkVfRlRSQUNFX01DT1VOVF9SRUNPUkQ9eQ0KQ09ORklHX0hBVkVf
U1lTQ0FMTF9UUkFDRVBPSU5UUz15DQpDT05GSUdfSEFWRV9GRU5UUlk9eQ0KQ09ORklHX0hBVkVf
Q19SRUNPUkRNQ09VTlQ9eQ0KQ09ORklHX1RSQUNFX0NMT0NLPXkNCkNPTkZJR19UUkFDSU5HX1NV
UFBPUlQ9eQ0KIyBDT05GSUdfRlRSQUNFIGlzIG5vdCBzZXQNCg0KIw0KIyBSdW50aW1lIFRlc3Rp
bmcNCiMNCkNPTkZJR19URVNUX0xJU1RfU09SVD15DQpDT05GSUdfQkFDS1RSQUNFX1NFTEZfVEVT
VD15DQojIENPTkZJR19SQlRSRUVfVEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfQVRPTUlDNjRfU0VM
RlRFU1Q9eQ0KQ09ORklHX1RFU1RfU1RSSU5HX0hFTFBFUlM9eQ0KQ09ORklHX1RFU1RfS1NUUlRP
WD15DQojIENPTkZJR19QUk9WSURFX09IQ0kxMzk0X0RNQV9JTklUIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0RNQV9BUElfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX1NBTVBMRVM9eQ0KQ09ORklHX0hB
VkVfQVJDSF9LR0RCPXkNCkNPTkZJR19LR0RCPXkNCkNPTkZJR19LR0RCX1NFUklBTF9DT05TT0xF
PXkNCiMgQ09ORklHX0tHREJfVEVTVFMgaXMgbm90IHNldA0KIyBDT05GSUdfS0dEQl9MT1dfTEVW
RUxfVFJBUCBpcyBub3Qgc2V0DQojIENPTkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0DQpDT05GSUdf
U1RSSUNUX0RFVk1FTT15DQojIENPTkZJR19YODZfVkVSQk9TRV9CT09UVVAgaXMgbm90IHNldA0K
IyBDT05GSUdfRUFSTFlfUFJJTlRLIGlzIG5vdCBzZXQNCiMgQ09ORklHX1g4Nl9QVERVTVAgaXMg
bm90IHNldA0KQ09ORklHX0RFQlVHX1JPREFUQT15DQojIENPTkZJR19ERUJVR19ST0RBVEFfVEVT
VCBpcyBub3Qgc2V0DQpDT05GSUdfRE9VQkxFRkFVTFQ9eQ0KIyBDT05GSUdfREVCVUdfVExCRkxV
U0ggaXMgbm90IHNldA0KIyBDT05GSUdfSU9NTVVfU1RSRVNTIGlzIG5vdCBzZXQNCkNPTkZJR19I
QVZFX01NSU9UUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19JT19ERUxBWV9UWVBFXzBYODA9MA0KQ09O
RklHX0lPX0RFTEFZX1RZUEVfMFhFRD0xDQpDT05GSUdfSU9fREVMQVlfVFlQRV9VREVMQVk9Mg0K
Q09ORklHX0lPX0RFTEFZX1RZUEVfTk9ORT0zDQpDT05GSUdfSU9fREVMQVlfMFg4MD15DQojIENP
TkZJR19JT19ERUxBWV8wWEVEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lPX0RFTEFZX1VERUxBWSBp
cyBub3Qgc2V0DQojIENPTkZJR19JT19ERUxBWV9OT05FIGlzIG5vdCBzZXQNCkNPTkZJR19ERUZB
VUxUX0lPX0RFTEFZX1RZUEU9MA0KIyBDT05GSUdfREVCVUdfQk9PVF9QQVJBTVMgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1BBX0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19PUFRJTUlaRV9JTkxJTklO
Rz15DQpDT05GSUdfREVCVUdfTk1JX1NFTEZURVNUPXkNCkNPTkZJR19YODZfREVCVUdfU1RBVElD
X0NQVV9IQVM9eQ0KDQojDQojIFNlY3VyaXR5IG9wdGlvbnMNCiMNCkNPTkZJR19LRVlTPXkNCiMg
Q09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90IHNldA0KQ09ORklHX0JJR19LRVlTPXkN
CiMgQ09ORklHX1RSVVNURURfS0VZUyBpcyBub3Qgc2V0DQpDT05GSUdfRU5DUllQVEVEX0tFWVM9
eQ0KIyBDT05GSUdfS0VZU19ERUJVR19QUk9DX0tFWVMgaXMgbm90IHNldA0KQ09ORklHX1NFQ1VS
SVRZX0RNRVNHX1JFU1RSSUNUPXkNCkNPTkZJR19TRUNVUklUWT15DQpDT05GSUdfU0VDVVJJVFlG
Uz15DQpDT05GSUdfU0VDVVJJVFlfTkVUV09SSz15DQpDT05GSUdfU0VDVVJJVFlfTkVUV09SS19Y
RlJNPXkNCkNPTkZJR19TRUNVUklUWV9QQVRIPXkNCiMgQ09ORklHX1NFQ1VSSVRZX1NFTElOVVgg
aXMgbm90IHNldA0KIyBDT05GSUdfU0VDVVJJVFlfU01BQ0sgaXMgbm90IHNldA0KQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZTz15DQpDT05GSUdfU0VDVVJJVFlfVE9NT1lPX01BWF9BQ0NFUFRfRU5UUlk9
MjA0OA0KQ09ORklHX1NFQ1VSSVRZX1RPTU9ZT19NQVhfQVVESVRfTE9HPTEwMjQNCkNPTkZJR19T
RUNVUklUWV9UT01PWU9fT01JVF9VU0VSU1BBQ0VfTE9BREVSPXkNCkNPTkZJR19TRUNVUklUWV9B
UFBBUk1PUj15DQpDT05GSUdfU0VDVVJJVFlfQVBQQVJNT1JfQk9PVFBBUkFNX1ZBTFVFPTENCkNP
TkZJR19TRUNVUklUWV9BUFBBUk1PUl9IQVNIPXkNCkNPTkZJR19TRUNVUklUWV9ZQU1BPXkNCkNP
TkZJR19TRUNVUklUWV9ZQU1BX1NUQUNLRUQ9eQ0KIyBDT05GSUdfSU1BIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0VWTSBpcyBub3Qgc2V0DQpDT05GSUdfREVGQVVMVF9TRUNVUklUWV9UT01PWU89eQ0K
IyBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9BUFBBUk1PUiBpcyBub3Qgc2V0DQojIENPTkZJR19E
RUZBVUxUX1NFQ1VSSVRZX1lBTUEgaXMgbm90IHNldA0KIyBDT05GSUdfREVGQVVMVF9TRUNVUklU
WV9EQUMgaXMgbm90IHNldA0KQ09ORklHX0RFRkFVTFRfU0VDVVJJVFk9InRvbW95byINCkNPTkZJ
R19DUllQVE89eQ0KDQojDQojIENyeXB0byBjb3JlIG9yIGhlbHBlcg0KIw0KIyBDT05GSUdfQ1JZ
UFRPX0ZJUFMgaXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19BTEdBUEk9eQ0KQ09ORklHX0NSWVBU
T19BTEdBUEkyPXkNCkNPTkZJR19DUllQVE9fQUVBRD15DQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkN
CkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkNCkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSMj15DQpD
T05GSUdfQ1JZUFRPX0hBU0g9eQ0KQ09ORklHX0NSWVBUT19IQVNIMj15DQpDT05GSUdfQ1JZUFRP
X1JORz15DQpDT05GSUdfQ1JZUFRPX1JORzI9eQ0KQ09ORklHX0NSWVBUT19QQ09NUDI9eQ0KQ09O
RklHX0NSWVBUT19NQU5BR0VSPXkNCkNPTkZJR19DUllQVE9fTUFOQUdFUjI9eQ0KQ09ORklHX0NS
WVBUT19VU0VSPXkNCiMgQ09ORklHX0NSWVBUT19NQU5BR0VSX0RJU0FCTEVfVEVTVFMgaXMgbm90
IHNldA0KQ09ORklHX0NSWVBUT19HRjEyOE1VTD15DQpDT05GSUdfQ1JZUFRPX05VTEw9eQ0KQ09O
RklHX0NSWVBUT19XT1JLUVVFVUU9eQ0KQ09ORklHX0NSWVBUT19DUllQVEQ9eQ0KQ09ORklHX0NS
WVBUT19BVVRIRU5DPXkNCkNPTkZJR19DUllQVE9fQUJMS19IRUxQRVI9eQ0KQ09ORklHX0NSWVBU
T19HTFVFX0hFTFBFUl9YODY9eQ0KDQojDQojIEF1dGhlbnRpY2F0ZWQgRW5jcnlwdGlvbiB3aXRo
IEFzc29jaWF0ZWQgRGF0YQ0KIw0KQ09ORklHX0NSWVBUT19DQ009eQ0KQ09ORklHX0NSWVBUT19H
Q009eQ0KQ09ORklHX0NSWVBUT19TRVFJVj15DQoNCiMNCiMgQmxvY2sgbW9kZXMNCiMNCkNPTkZJ
R19DUllQVE9fQ0JDPXkNCkNPTkZJR19DUllQVE9fQ1RSPXkNCkNPTkZJR19DUllQVE9fQ1RTPXkN
CkNPTkZJR19DUllQVE9fRUNCPXkNCkNPTkZJR19DUllQVE9fTFJXPXkNCkNPTkZJR19DUllQVE9f
UENCQz15DQpDT05GSUdfQ1JZUFRPX1hUUz15DQoNCiMNCiMgSGFzaCBtb2Rlcw0KIw0KQ09ORklH
X0NSWVBUT19DTUFDPXkNCkNPTkZJR19DUllQVE9fSE1BQz15DQojIENPTkZJR19DUllQVE9fWENC
QyBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX1ZNQUM9eQ0KDQojDQojIERpZ2VzdA0KIw0KQ09O
RklHX0NSWVBUT19DUkMzMkM9eQ0KQ09ORklHX0NSWVBUT19DUkMzMkNfSU5URUw9eQ0KIyBDT05G
SUdfQ1JZUFRPX0NSQzMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUwg
aXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19DUkNUMTBESUY9eQ0KQ09ORklHX0NSWVBUT19HSEFT
SD15DQpDT05GSUdfQ1JZUFRPX01END15DQpDT05GSUdfQ1JZUFRPX01ENT15DQpDT05GSUdfQ1JZ
UFRPX01JQ0hBRUxfTUlDPXkNCiMgQ09ORklHX0NSWVBUT19STUQxMjggaXMgbm90IHNldA0KQ09O
RklHX0NSWVBUT19STUQxNjA9eQ0KQ09ORklHX0NSWVBUT19STUQyNTY9eQ0KQ09ORklHX0NSWVBU
T19STUQzMjA9eQ0KQ09ORklHX0NSWVBUT19TSEExPXkNCkNPTkZJR19DUllQVE9fU0hBMV9TU1NF
Mz15DQpDT05GSUdfQ1JZUFRPX1NIQTI1Nl9TU1NFMz15DQpDT05GSUdfQ1JZUFRPX1NIQTUxMl9T
U1NFMz15DQpDT05GSUdfQ1JZUFRPX1NIQTI1Nj15DQpDT05GSUdfQ1JZUFRPX1NIQTUxMj15DQpD
T05GSUdfQ1JZUFRPX1RHUjE5Mj15DQpDT05GSUdfQ1JZUFRPX1dQNTEyPXkNCiMgQ09ORklHX0NS
WVBUT19HSEFTSF9DTE1VTF9OSV9JTlRFTCBpcyBub3Qgc2V0DQoNCiMNCiMgQ2lwaGVycw0KIw0K
Q09ORklHX0NSWVBUT19BRVM9eQ0KQ09ORklHX0NSWVBUT19BRVNfWDg2XzY0PXkNCiMgQ09ORklH
X0NSWVBUT19BRVNfTklfSU5URUwgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0FOVUJJUyBp
cyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX0FSQzQ9eQ0KIyBDT05GSUdfQ1JZUFRPX0JMT1dGSVNI
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19CTE9XRklTSF9YODZfNjQgaXMgbm90IHNldA0K
Q09ORklHX0NSWVBUT19DQU1FTExJQT15DQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX1g4Nl82ND15
DQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FFU05JX0FWWF9YODZfNjQ9eQ0KQ09ORklHX0NSWVBU
T19DQU1FTExJQV9BRVNOSV9BVlgyX1g4Nl82ND15DQpDT05GSUdfQ1JZUFRPX0NBU1RfQ09NTU9O
PXkNCiMgQ09ORklHX0NSWVBUT19DQVNUNSBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fQ0FT
VDVfQVZYX1g4Nl82NCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX0NBU1Q2PXkNCkNPTkZJR19D
UllQVE9fQ0FTVDZfQVZYX1g4Nl82ND15DQpDT05GSUdfQ1JZUFRPX0RFUz15DQpDT05GSUdfQ1JZ
UFRPX0ZDUllQVD15DQpDT05GSUdfQ1JZUFRPX0tIQVpBRD15DQojIENPTkZJR19DUllQVE9fU0FM
U0EyMCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX1NBTFNBMjBfWDg2XzY0PXkNCkNPTkZJR19D
UllQVE9fU0VFRD15DQpDT05GSUdfQ1JZUFRPX1NFUlBFTlQ9eQ0KQ09ORklHX0NSWVBUT19TRVJQ
RU5UX1NTRTJfWDg2XzY0PXkNCkNPTkZJR19DUllQVE9fU0VSUEVOVF9BVlhfWDg2XzY0PXkNCiMg
Q09ORklHX0NSWVBUT19TRVJQRU5UX0FWWDJfWDg2XzY0IGlzIG5vdCBzZXQNCkNPTkZJR19DUllQ
VE9fVEVBPXkNCiMgQ09ORklHX0NSWVBUT19UV09GSVNIIGlzIG5vdCBzZXQNCkNPTkZJR19DUllQ
VE9fVFdPRklTSF9DT01NT049eQ0KQ09ORklHX0NSWVBUT19UV09GSVNIX1g4Nl82ND15DQpDT05G
SUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0XzNXQVk9eQ0KQ09ORklHX0NSWVBUT19UV09GSVNIX0FW
WF9YODZfNjQ9eQ0KDQojDQojIENvbXByZXNzaW9uDQojDQpDT05GSUdfQ1JZUFRPX0RFRkxBVEU9
eQ0KIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19MWk89eQ0K
Q09ORklHX0NSWVBUT19MWjQ9eQ0KIyBDT05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBzZXQNCg0K
Iw0KIyBSYW5kb20gTnVtYmVyIEdlbmVyYXRpb24NCiMNCkNPTkZJR19DUllQVE9fQU5TSV9DUFJO
Rz15DQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJPXkNCkNPTkZJR19DUllQVE9fVVNFUl9BUElfSEFT
SD15DQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1NLQ0lQSEVSPXkNCkNPTkZJR19DUllQVE9fSFc9
eQ0KIyBDT05GSUdfQ1JZUFRPX0RFVl9QQURMT0NLIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBU
T19ERVZfQ0NQIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FTWU1NRVRSSUNfS0VZX1RZUEUgaXMgbm90
IHNldA0KQ09ORklHX0hBVkVfS1ZNPXkNCkNPTkZJR19IQVZFX0tWTV9JUlFDSElQPXkNCkNPTkZJ
R19IQVZFX0tWTV9JUlFfUk9VVElORz15DQpDT05GSUdfSEFWRV9LVk1fRVZFTlRGRD15DQpDT05G
SUdfS1ZNX0FQSUNfQVJDSElURUNUVVJFPXkNCkNPTkZJR19LVk1fTU1JTz15DQpDT05GSUdfS1ZN
X0FTWU5DX1BGPXkNCkNPTkZJR19IQVZFX0tWTV9NU0k9eQ0KQ09ORklHX0hBVkVfS1ZNX0NQVV9S
RUxBWF9JTlRFUkNFUFQ9eQ0KQ09ORklHX0tWTV9WRklPPXkNCkNPTkZJR19WSVJUVUFMSVpBVElP
Tj15DQpDT05GSUdfS1ZNPXkNCiMgQ09ORklHX0tWTV9JTlRFTCBpcyBub3Qgc2V0DQpDT05GSUdf
S1ZNX0FNRD15DQojIENPTkZJR19CSU5BUllfUFJJTlRGIGlzIG5vdCBzZXQNCg0KIw0KIyBMaWJy
YXJ5IHJvdXRpbmVzDQojDQpDT05GSUdfQklUUkVWRVJTRT15DQpDT05GSUdfR0VORVJJQ19TVFJO
Q1BZX0ZST01fVVNFUj15DQpDT05GSUdfR0VORVJJQ19TVFJOTEVOX1VTRVI9eQ0KQ09ORklHX0dF
TkVSSUNfTkVUX1VUSUxTPXkNCkNPTkZJR19HRU5FUklDX0ZJTkRfRklSU1RfQklUPXkNCkNPTkZJ
R19HRU5FUklDX1BDSV9JT01BUD15DQpDT05GSUdfR0VORVJJQ19JT01BUD15DQpDT05GSUdfR0VO
RVJJQ19JTz15DQpDT05GSUdfQVJDSF9VU0VfQ01QWENIR19MT0NLUkVGPXkNCkNPTkZJR19DUkNf
Q0NJVFQ9eQ0KQ09ORklHX0NSQzE2PXkNCiMgQ09ORklHX0NSQ19UMTBESUYgaXMgbm90IHNldA0K
Q09ORklHX0NSQ19JVFVfVD15DQpDT05GSUdfQ1JDMzI9eQ0KQ09ORklHX0NSQzMyX1NFTEZURVNU
PXkNCkNPTkZJR19DUkMzMl9TTElDRUJZOD15DQojIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUkMzMl9TQVJXQVRFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSQzMy
X0JJVCBpcyBub3Qgc2V0DQojIENPTkZJR19DUkM3IGlzIG5vdCBzZXQNCkNPTkZJR19MSUJDUkMz
MkM9eQ0KIyBDT05GSUdfQ1JDOCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JDNjRfRUNNQT15DQpDT05G
SUdfUkFORE9NMzJfU0VMRlRFU1Q9eQ0KQ09ORklHX1pMSUJfSU5GTEFURT15DQpDT05GSUdfWkxJ
Ql9ERUZMQVRFPXkNCkNPTkZJR19MWk9fQ09NUFJFU1M9eQ0KQ09ORklHX0xaT19ERUNPTVBSRVNT
PXkNCkNPTkZJR19MWjRfQ09NUFJFU1M9eQ0KQ09ORklHX0xaNF9ERUNPTVBSRVNTPXkNCkNPTkZJ
R19YWl9ERUM9eQ0KIyBDT05GSUdfWFpfREVDX1g4NiBpcyBub3Qgc2V0DQojIENPTkZJR19YWl9E
RUNfUE9XRVJQQyBpcyBub3Qgc2V0DQojIENPTkZJR19YWl9ERUNfSUE2NCBpcyBub3Qgc2V0DQpD
T05GSUdfWFpfREVDX0FSTT15DQojIENPTkZJR19YWl9ERUNfQVJNVEhVTUIgaXMgbm90IHNldA0K
IyBDT05GSUdfWFpfREVDX1NQQVJDIGlzIG5vdCBzZXQNCkNPTkZJR19YWl9ERUNfQkNKPXkNCkNP
TkZJR19YWl9ERUNfVEVTVD15DQpDT05GSUdfREVDT01QUkVTU19YWj15DQpDT05GSUdfR0VORVJJ
Q19BTExPQ0FUT1I9eQ0KQ09ORklHX0JDSD15DQpDT05GSUdfQkNIX0NPTlNUX1BBUkFNUz15DQpD
T05GSUdfVEVYVFNFQVJDSD15DQpDT05GSUdfVEVYVFNFQVJDSF9LTVA9eQ0KQ09ORklHX0FTU09D
SUFUSVZFX0FSUkFZPXkNCkNPTkZJR19IQVNfSU9NRU09eQ0KQ09ORklHX0hBU19JT1BPUlQ9eQ0K
Q09ORklHX0hBU19ETUE9eQ0KQ09ORklHX0NIRUNLX1NJR05BVFVSRT15DQpDT05GSUdfRFFMPXkN
CkNPTkZJR19OTEFUVFI9eQ0KQ09ORklHX0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElW
RT15DQojIENPTkZJR19BVkVSQUdFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPUkRJQyBpcyBub3Qg
c2V0DQojIENPTkZJR19ERFIgaXMgbm90IHNldA0KQ09ORklHX09JRF9SRUdJU1RSWT15DQpDT05G
SUdfVUNTMl9TVFJJTkc9eQ0KQ09ORklHX0ZPTlRfU1VQUE9SVD15DQpDT05GSUdfRk9OVF84eDE2
PXkNCkNPTkZJR19GT05UX0FVVE9TRUxFQ1Q9eQ0K
--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--089e013c5af08767ff04ef61d88f--


From xen-devel-bounces@lists.xen.org Tue Jan 07 14:06:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XIe-0003wk-Ux; Tue, 07 Jan 2014 14:06:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jim.epost@gmail.com>) id 1W0XG9-0003rc-8M
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:03:58 +0000
Received: from [193.109.254.147:48450] by server-10.bemta-14.messagelabs.com
	id 78/0B-20752-C490CC25; Tue, 07 Jan 2014 14:03:56 +0000
X-Env-Sender: jim.epost@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389103430!7014512!1
X-Originating-IP: [209.85.223.177]
X-SpamReason: No, hits=2.7 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,UNIQUE_WORDS,UPPERCASE_75_100,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17383 invoked from network); 7 Jan 2014 14:03:52 -0000
Received: from mail-ie0-f177.google.com (HELO mail-ie0-f177.google.com)
	(209.85.223.177)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:03:52 -0000
Received: by mail-ie0-f177.google.com with SMTP id tp5so380193ieb.36
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 06:03:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=N0lIvdIHswSrVrabYt91Q1QJ/PpxNuv3tVduDJwpgVs=;
	b=ZlzIa+ZA8ILWsuyYAXAURgEHCW6AwavWouI6RSD25INbd1inkEV10rw/JYau7EtTTH
	stMZXuGJgYRUV4rZ6V9cIZrkmT1ui7EPz4bUMhOHeh1BWIgUU3riBxcLcrnjNBweyr9s
	h99EccGZQBGGm2GCNXKLk/Q0HOYTWgufsO5siDr7zc4V10XQwmZwejnaGGl2fmc8puWz
	Ah1FGNNmpLP/sb/1n2nGPlaDQnP8cC0o3XeJLx3FCWolpH7N2Rl6R/VWOdUY2tB+Qva+
	slEAXZ6YrBK4WbW3qXVGtVkBqNKFUeEno1KEJF8mcSrA6cvGf8HZTItfPOahQvNvzW8U
	vXVw==
MIME-Version: 1.0
X-Received: by 10.50.128.72 with SMTP id nm8mr25707563igb.10.1389103430449;
	Tue, 07 Jan 2014 06:03:50 -0800 (PST)
Received: by 10.42.85.71 with HTTP; Tue, 7 Jan 2014 06:03:50 -0800 (PST)
Date: Tue, 7 Jan 2014 07:03:50 -0700
Message-ID: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
From: Jim Davis <jim.epost@gmail.com>
To: Stephen Rothwell <sfr@canb.auug.org.au>, linux-next@vger.kernel.org, 
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com, 
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com, tglx@linutronix.de,
	mingo@redhat.com, hpa@zytor.com, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary=089e013c5af08767ff04ef61d88f
X-Mailman-Approved-At: Tue, 07 Jan 2014 14:06:30 +0000
Subject: [Xen-devel] randconfig build error with next-20140107,
	in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Building with the attached random configuration file,

arch/x86/xen/grant-table.c: In function 'xen_pvh_gnttab_setup':
arch/x86/xen/grant-table.c:181:2: error: implicit declaration of
function 'xen_pvh_domain' [-Werror=implicit-function-declaration]
  if (!xen_pvh_domain())
  ^
cc1: some warnings being treated as errors
make[2]: *** [arch/x86/xen/grant-table.o] Error 1

--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset=US-ASCII; name="randconfig-1389085366.txt"
Content-Disposition: attachment; filename="randconfig-1389085366.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hq589mq70

Iw0KIyBBdXRvbWF0aWNhbGx5IGdlbmVyYXRlZCBmaWxlOyBETyBOT1QgRURJVC4NCiMgTGludXgv
eDg2IDMuMTMuMC1yYzcgS2VybmVsIENvbmZpZ3VyYXRpb24NCiMNCkNPTkZJR182NEJJVD15DQpD
T05GSUdfWDg2XzY0PXkNCkNPTkZJR19YODY9eQ0KQ09ORklHX0lOU1RSVUNUSU9OX0RFQ09ERVI9
eQ0KQ09ORklHX09VVFBVVF9GT1JNQVQ9ImVsZjY0LXg4Ni02NCINCkNPTkZJR19BUkNIX0RFRkNP
TkZJRz0iYXJjaC94ODYvY29uZmlncy94ODZfNjRfZGVmY29uZmlnIg0KQ09ORklHX0xPQ0tERVBf
U1VQUE9SVD15DQpDT05GSUdfU1RBQ0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19IQVZFX0xBVEVO
Q1lUT1BfU1VQUE9SVD15DQpDT05GSUdfTU1VPXkNCkNPTkZJR19ORUVEX0RNQV9NQVBfU1RBVEU9
eQ0KQ09ORklHX05FRURfU0dfRE1BX0xFTkdUSD15DQpDT05GSUdfR0VORVJJQ19IV0VJR0hUPXkN
CkNPTkZJR19SV1NFTV9YQ0hHQUREX0FMR09SSVRITT15DQpDT05GSUdfR0VORVJJQ19DQUxJQlJB
VEVfREVMQVk9eQ0KQ09ORklHX0FSQ0hfSEFTX0NQVV9SRUxBWD15DQpDT05GSUdfQVJDSF9IQVNf
Q0FDSEVfTElORV9TSVpFPXkNCkNPTkZJR19BUkNIX0hBU19DUFVfQVVUT1BST0JFPXkNCkNPTkZJ
R19IQVZFX1NFVFVQX1BFUl9DUFVfQVJFQT15DQpDT05GSUdfTkVFRF9QRVJfQ1BVX0VNQkVEX0ZJ
UlNUX0NIVU5LPXkNCkNPTkZJR19ORUVEX1BFUl9DUFVfUEFHRV9GSVJTVF9DSFVOSz15DQpDT05G
SUdfQVJDSF9ISUJFUk5BVElPTl9QT1NTSUJMRT15DQpDT05GSUdfQVJDSF9TVVNQRU5EX1BPU1NJ
QkxFPXkNCkNPTkZJR19BUkNIX1dBTlRfSFVHRV9QTURfU0hBUkU9eQ0KQ09ORklHX0FSQ0hfV0FO
VF9HRU5FUkFMX0hVR0VUTEI9eQ0KQ09ORklHX1pPTkVfRE1BMzI9eQ0KQ09ORklHX0FVRElUX0FS
Q0g9eQ0KQ09ORklHX0FSQ0hfU1VQUE9SVFNfT1BUSU1JWkVEX0lOTElOSU5HPXkNCkNPTkZJR19B
UkNIX1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15DQpDT05GSUdfQVJDSF9IV0VJR0hUX0NGTEFH
Uz0iLWZjYWxsLXNhdmVkLXJkaSAtZmNhbGwtc2F2ZWQtcnNpIC1mY2FsbC1zYXZlZC1yZHggLWZj
YWxsLXNhdmVkLXJjeCAtZmNhbGwtc2F2ZWQtcjggLWZjYWxsLXNhdmVkLXI5IC1mY2FsbC1zYXZl
ZC1yMTAgLWZjYWxsLXNhdmVkLXIxMSINCkNPTkZJR19BUkNIX1NVUFBPUlRTX1VQUk9CRVM9eQ0K
Q09ORklHX0RFRkNPTkZJR19MSVNUPSIvbGliL21vZHVsZXMvJFVOQU1FX1JFTEVBU0UvLmNvbmZp
ZyINCkNPTkZJR19DT05TVFJVQ1RPUlM9eQ0KQ09ORklHX0lSUV9XT1JLPXkNCkNPTkZJR19CVUlM
RFRJTUVfRVhUQUJMRV9TT1JUPXkNCg0KIw0KIyBHZW5lcmFsIHNldHVwDQojDQpDT05GSUdfQlJP
S0VOX09OX1NNUD15DQpDT05GSUdfSU5JVF9FTlZfQVJHX0xJTUlUPTMyDQpDT05GSUdfQ1JPU1Nf
Q09NUElMRT0iIg0KQ09ORklHX0NPTVBJTEVfVEVTVD15DQpDT05GSUdfTE9DQUxWRVJTSU9OPSIi
DQpDT05GSUdfTE9DQUxWRVJTSU9OX0FVVE89eQ0KQ09ORklHX0hBVkVfS0VSTkVMX0daSVA9eQ0K
Q09ORklHX0hBVkVfS0VSTkVMX0JaSVAyPXkNCkNPTkZJR19IQVZFX0tFUk5FTF9MWk1BPXkNCkNP
TkZJR19IQVZFX0tFUk5FTF9YWj15DQpDT05GSUdfSEFWRV9LRVJORUxfTFpPPXkNCkNPTkZJR19I
QVZFX0tFUk5FTF9MWjQ9eQ0KQ09ORklHX0tFUk5FTF9HWklQPXkNCiMgQ09ORklHX0tFUk5FTF9C
WklQMiBpcyBub3Qgc2V0DQojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0DQojIENPTkZJ
R19LRVJORUxfWFogaXMgbm90IHNldA0KIyBDT05GSUdfS0VSTkVMX0xaTyBpcyBub3Qgc2V0DQoj
IENPTkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX0hPU1ROQU1FPSIo
bm9uZSkiDQpDT05GSUdfU1lTVklQQz15DQojIENPTkZJR19QT1NJWF9NUVVFVUUgaXMgbm90IHNl
dA0KQ09ORklHX0ZIQU5ETEU9eQ0KQ09ORklHX0FVRElUPXkNCkNPTkZJR19BVURJVFNZU0NBTEw9
eQ0KQ09ORklHX0FVRElUX1dBVENIPXkNCkNPTkZJR19BVURJVF9UUkVFPXkNCg0KIw0KIyBJUlEg
c3Vic3lzdGVtDQojDQpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQ0KQ09ORklHX0dFTkVSSUNf
SVJRX1NIT1c9eQ0KQ09ORklHX0dFTkVSSUNfSVJRX0NISVA9eQ0KQ09ORklHX0lSUV9ET01BSU49
eQ0KQ09ORklHX0lSUV9ET01BSU5fREVCVUc9eQ0KQ09ORklHX0lSUV9GT1JDRURfVEhSRUFESU5H
PXkNCkNPTkZJR19TUEFSU0VfSVJRPXkNCkNPTkZJR19DTE9DS1NPVVJDRV9XQVRDSERPRz15DQpD
T05GSUdfQVJDSF9DTE9DS1NPVVJDRV9EQVRBPXkNCkNPTkZJR19HRU5FUklDX1RJTUVfVlNZU0NB
TEw9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFM9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tF
VkVOVFNfQlVJTEQ9eQ0KQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlJPQURDQVNUPXkNCkNP
TkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX01JTl9BREpVU1Q9eQ0KQ09ORklHX0dFTkVSSUNfQ01P
U19VUERBVEU9eQ0KDQojDQojIFRpbWVycyBzdWJzeXN0ZW0NCiMNCkNPTkZJR19USUNLX09ORVNI
T1Q9eQ0KQ09ORklHX05PX0haX0NPTU1PTj15DQojIENPTkZJR19IWl9QRVJJT0RJQyBpcyBub3Qg
c2V0DQpDT05GSUdfTk9fSFpfSURMRT15DQpDT05GSUdfTk9fSFo9eQ0KQ09ORklHX0hJR0hfUkVT
X1RJTUVSUz15DQoNCiMNCiMgQ1BVL1Rhc2sgdGltZSBhbmQgc3RhdHMgYWNjb3VudGluZw0KIw0K
Q09ORklHX1RJQ0tfQ1BVX0FDQ09VTlRJTkc9eQ0KIyBDT05GSUdfVklSVF9DUFVfQUNDT1VOVElO
R19HRU4gaXMgbm90IHNldA0KIyBDT05GSUdfSVJRX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0
DQojIENPTkZJR19CU0RfUFJPQ0VTU19BQ0NUIGlzIG5vdCBzZXQNCkNPTkZJR19UQVNLU1RBVFM9
eQ0KQ09ORklHX1RBU0tfREVMQVlfQUNDVD15DQojIENPTkZJR19UQVNLX1hBQ0NUIGlzIG5vdCBz
ZXQNCg0KIw0KIyBSQ1UgU3Vic3lzdGVtDQojDQpDT05GSUdfVElOWV9SQ1U9eQ0KIyBDT05GSUdf
UFJFRU1QVF9SQ1UgaXMgbm90IHNldA0KQ09ORklHX1JDVV9TVEFMTF9DT01NT049eQ0KIyBDT05G
SUdfVFJFRV9SQ1VfVFJBQ0UgaXMgbm90IHNldA0KQ09ORklHX0lLQ09ORklHPXkNCkNPTkZJR19M
T0dfQlVGX1NISUZUPTE3DQpDT05GSUdfSEFWRV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15DQpDT05G
SUdfQVJDSF9TVVBQT1JUU19OVU1BX0JBTEFOQ0lORz15DQpDT05GSUdfQVJDSF9TVVBQT1JUU19J
TlQxMjg9eQ0KQ09ORklHX0FSQ0hfV0FOVFNfUFJPVF9OVU1BX1BST1RfTk9ORT15DQojIENPTkZJ
R19DR1JPVVBTIGlzIG5vdCBzZXQNCkNPTkZJR19DSEVDS1BPSU5UX1JFU1RPUkU9eQ0KQ09ORklH
X05BTUVTUEFDRVM9eQ0KIyBDT05GSUdfVVRTX05TIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQQ19O
UyBpcyBub3Qgc2V0DQpDT05GSUdfVVNFUl9OUz15DQpDT05GSUdfUElEX05TPXkNCiMgQ09ORklH
X05FVF9OUyBpcyBub3Qgc2V0DQpDT05GSUdfVUlER0lEX1NUUklDVF9UWVBFX0NIRUNLUz15DQoj
IENPTkZJR19TQ0hFRF9BVVRPR1JPVVAgaXMgbm90IHNldA0KIyBDT05GSUdfU1lTRlNfREVQUkVD
QVRFRCBpcyBub3Qgc2V0DQpDT05GSUdfUkVMQVk9eQ0KQ09ORklHX0JMS19ERVZfSU5JVFJEPXkN
CkNPTkZJR19JTklUUkFNRlNfU09VUkNFPSIiDQojIENPTkZJR19SRF9HWklQIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JEX0JaSVAyIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEX0xaTUEgaXMgbm90IHNl
dA0KQ09ORklHX1JEX1haPXkNCiMgQ09ORklHX1JEX0xaTyBpcyBub3Qgc2V0DQojIENPTkZJR19S
RF9MWjQgaXMgbm90IHNldA0KQ09ORklHX0NDX09QVElNSVpFX0ZPUl9TSVpFPXkNCkNPTkZJR19B
Tk9OX0lOT0RFUz15DQpDT05GSUdfU1lTQ1RMX0VYQ0VQVElPTl9UUkFDRT15DQpDT05GSUdfSEFW
RV9QQ1NQS1JfUExBVEZPUk09eQ0KQ09ORklHX0VYUEVSVD15DQpDT05GSUdfS0FMTFNZTVM9eQ0K
Q09ORklHX0tBTExTWU1TX0FMTD15DQpDT05GSUdfUFJJTlRLPXkNCiMgQ09ORklHX0JVRyBpcyBu
b3Qgc2V0DQpDT05GSUdfUENTUEtSX1BMQVRGT1JNPXkNCiMgQ09ORklHX0JBU0VfRlVMTCBpcyBu
b3Qgc2V0DQpDT05GSUdfRlVURVg9eQ0KQ09ORklHX0VQT0xMPXkNCiMgQ09ORklHX1NJR05BTEZE
IGlzIG5vdCBzZXQNCkNPTkZJR19USU1FUkZEPXkNCkNPTkZJR19FVkVOVEZEPXkNCkNPTkZJR19T
SE1FTT15DQpDT05GSUdfQUlPPXkNCiMgQ09ORklHX1BDSV9RVUlSS1MgaXMgbm90IHNldA0KQ09O
RklHX0VNQkVEREVEPXkNCkNPTkZJR19IQVZFX1BFUkZfRVZFTlRTPXkNCg0KIw0KIyBLZXJuZWwg
UGVyZm9ybWFuY2UgRXZlbnRzIEFuZCBDb3VudGVycw0KIw0KQ09ORklHX1BFUkZfRVZFTlRTPXkN
CiMgQ09ORklHX0RFQlVHX1BFUkZfVVNFX1ZNQUxMT0MgaXMgbm90IHNldA0KIyBDT05GSUdfVk1f
RVZFTlRfQ09VTlRFUlMgaXMgbm90IHNldA0KIyBDT05GSUdfU0xVQl9ERUJVRyBpcyBub3Qgc2V0
DQpDT05GSUdfQ09NUEFUX0JSSz15DQojIENPTkZJR19TTEFCIGlzIG5vdCBzZXQNCkNPTkZJR19T
TFVCPXkNCiMgQ09ORklHX1NMT0IgaXMgbm90IHNldA0KIyBDT05GSUdfUFJPRklMSU5HIGlzIG5v
dCBzZXQNCkNPTkZJR19IQVZFX09QUk9GSUxFPXkNCkNPTkZJR19PUFJPRklMRV9OTUlfVElNRVI9
eQ0KIyBDT05GSUdfSlVNUF9MQUJFTCBpcyBub3Qgc2V0DQojIENPTkZJR19IQVZFXzY0QklUX0FM
SUdORURfQUNDRVNTIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZFX0VGRklDSUVOVF9VTkFMSUdORURf
QUNDRVNTPXkNCkNPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkNCkNPTkZJR19VU0VSX1JF
VFVSTl9OT1RJRklFUj15DQpDT05GSUdfSEFWRV9JT1JFTUFQX1BST1Q9eQ0KQ09ORklHX0hBVkVf
S1BST0JFUz15DQpDT05GSUdfSEFWRV9LUkVUUFJPQkVTPXkNCkNPTkZJR19IQVZFX09QVFBST0JF
Uz15DQpDT05GSUdfSEFWRV9LUFJPQkVTX09OX0ZUUkFDRT15DQpDT05GSUdfSEFWRV9BUkNIX1RS
QUNFSE9PSz15DQpDT05GSUdfSEFWRV9ETUFfQVRUUlM9eQ0KQ09ORklHX0dFTkVSSUNfU01QX0lE
TEVfVEhSRUFEPXkNCkNPTkZJR19IQVZFX1JFR1NfQU5EX1NUQUNLX0FDQ0VTU19BUEk9eQ0KQ09O
RklHX0hBVkVfRE1BX0FQSV9ERUJVRz15DQpDT05GSUdfSEFWRV9IV19CUkVBS1BPSU5UPXkNCkNP
TkZJR19IQVZFX01JWEVEX0JSRUFLUE9JTlRTX1JFR1M9eQ0KQ09ORklHX0hBVkVfVVNFUl9SRVRV
Uk5fTk9USUZJRVI9eQ0KQ09ORklHX0hBVkVfUEVSRl9FVkVOVFNfTk1JPXkNCkNPTkZJR19IQVZF
X1BFUkZfUkVHUz15DQpDT05GSUdfSEFWRV9QRVJGX1VTRVJfU1RBQ0tfRFVNUD15DQpDT05GSUdf
SEFWRV9BUkNIX0pVTVBfTEFCRUw9eQ0KQ09ORklHX0FSQ0hfSEFWRV9OTUlfU0FGRV9DTVBYQ0hH
PXkNCkNPTkZJR19IQVZFX0FMSUdORURfU1RSVUNUX1BBR0U9eQ0KQ09ORklHX0hBVkVfQ01QWENI
R19MT0NBTD15DQpDT05GSUdfSEFWRV9DTVBYQ0hHX0RPVUJMRT15DQpDT05GSUdfSEFWRV9BUkNI
X1NFQ0NPTVBfRklMVEVSPXkNCkNPTkZJR19IQVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkNCiMgQ09O
RklHX0NDX1NUQUNLUFJPVEVDVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19DQ19TVEFDS1BST1RFQ1RP
Ul9OT05FPXkNCiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldA0K
IyBDT05GSUdfQ0NfU1RBQ0tQUk9URUNUT1JfU1RST05HIGlzIG5vdCBzZXQNCkNPTkZJR19IQVZF
X0NPTlRFWFRfVFJBQ0tJTkc9eQ0KQ09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49
eQ0KQ09ORklHX0hBVkVfSVJRX1RJTUVfQUNDT1VOVElORz15DQpDT05GSUdfSEFWRV9BUkNIX1RS
QU5TUEFSRU5UX0hVR0VQQUdFPXkNCkNPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15DQpDT05G
SUdfTU9EVUxFU19VU0VfRUxGX1JFTEE9eQ0KQ09ORklHX0hBVkVfSVJRX0VYSVRfT05fSVJRX1NU
QUNLPXkNCg0KIw0KIyBHQ09WLWJhc2VkIGtlcm5lbCBwcm9maWxpbmcNCiMNCkNPTkZJR19HQ09W
X0tFUk5FTD15DQojIENPTkZJR19HQ09WX1BST0ZJTEVfQUxMIGlzIG5vdCBzZXQNCkNPTkZJR19H
Q09WX0ZPUk1BVF9BVVRPREVURUNUPXkNCiMgQ09ORklHX0dDT1ZfRk9STUFUXzNfNCBpcyBub3Qg
c2V0DQojIENPTkZJR19HQ09WX0ZPUk1BVF80XzcgaXMgbm90IHNldA0KIyBDT05GSUdfSEFWRV9H
RU5FUklDX0RNQV9DT0hFUkVOVCBpcyBub3Qgc2V0DQpDT05GSUdfUlRfTVVURVhFUz15DQpDT05G
SUdfQkFTRV9TTUFMTD0xDQojIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01PRFVMRVMgaXMgbm90IHNldA0KIyBDT05GSUdfQkxPQ0sgaXMgbm90IHNl
dA0KQ09ORklHX1BSRUVNUFRfTk9USUZJRVJTPXkNCkNPTkZJR19VTklOTElORV9TUElOX1VOTE9D
Sz15DQpDT05GSUdfRlJFRVpFUj15DQoNCiMNCiMgUHJvY2Vzc29yIHR5cGUgYW5kIGZlYXR1cmVz
DQojDQojIENPTkZJR19aT05FX0RNQSBpcyBub3Qgc2V0DQojIENPTkZJR19TTVAgaXMgbm90IHNl
dA0KQ09ORklHX1g4Nl9NUFBBUlNFPXkNCiMgQ09ORklHX1g4Nl9FWFRFTkRFRF9QTEFURk9STSBp
cyBub3Qgc2V0DQojIENPTkZJR19YODZfSU5URUxfTFBTUyBpcyBub3Qgc2V0DQpDT05GSUdfU0NI
RURfT01JVF9GUkFNRV9QT0lOVEVSPXkNCkNPTkZJR19IWVBFUlZJU09SX0dVRVNUPXkNCkNPTkZJ
R19QQVJBVklSVD15DQpDT05GSUdfUEFSQVZJUlRfREVCVUc9eQ0KQ09ORklHX1hFTj15DQpDT05G
SUdfWEVOX0RPTTA9eQ0KQ09ORklHX1hFTl9QUklWSUxFR0VEX0dVRVNUPXkNCkNPTkZJR19YRU5f
UFZIVk09eQ0KQ09ORklHX1hFTl9NQVhfRE9NQUlOX01FTU9SWT01MDANCkNPTkZJR19YRU5fU0FW
RV9SRVNUT1JFPXkNCiMgQ09ORklHX1hFTl9ERUJVR19GUyBpcyBub3Qgc2V0DQpDT05GSUdfWEVO
X1BWSD15DQojIENPTkZJR19LVk1fR1VFU1QgaXMgbm90IHNldA0KIyBDT05GSUdfUEFSQVZJUlRf
VElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQNCkNPTkZJR19QQVJBVklSVF9DTE9DSz15DQpDT05G
SUdfTk9fQk9PVE1FTT15DQpDT05GSUdfTUVNVEVTVD15DQojIENPTkZJR19NSzggaXMgbm90IHNl
dA0KIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0DQojIENPTkZJR19NQ09SRTIgaXMgbm90IHNldA0K
IyBDT05GSUdfTUFUT00gaXMgbm90IHNldA0KQ09ORklHX0dFTkVSSUNfQ1BVPXkNCkNPTkZJR19Y
ODZfSU5URVJOT0RFX0NBQ0hFX1NISUZUPTYNCkNPTkZJR19YODZfTDFfQ0FDSEVfU0hJRlQ9Ng0K
Q09ORklHX1g4Nl9UU0M9eQ0KQ09ORklHX1g4Nl9DTVBYQ0hHNjQ9eQ0KQ09ORklHX1g4Nl9DTU9W
PXkNCkNPTkZJR19YODZfTUlOSU1VTV9DUFVfRkFNSUxZPTY0DQpDT05GSUdfWDg2X0RFQlVHQ1RM
TVNSPXkNCiMgQ09ORklHX1BST0NFU1NPUl9TRUxFQ1QgaXMgbm90IHNldA0KQ09ORklHX0NQVV9T
VVBfSU5URUw9eQ0KQ09ORklHX0NQVV9TVVBfQU1EPXkNCkNPTkZJR19DUFVfU1VQX0NFTlRBVVI9
eQ0KQ09ORklHX0hQRVRfVElNRVI9eQ0KQ09ORklHX0hQRVRfRU1VTEFURV9SVEM9eQ0KQ09ORklH
X0RNST15DQojIENPTkZJR19HQVJUX0lPTU1VIGlzIG5vdCBzZXQNCkNPTkZJR19DQUxHQVJZX0lP
TU1VPXkNCkNPTkZJR19DQUxHQVJZX0lPTU1VX0VOQUJMRURfQllfREVGQVVMVD15DQpDT05GSUdf
U1dJT1RMQj15DQpDT05GSUdfSU9NTVVfSEVMUEVSPXkNCkNPTkZJR19OUl9DUFVTPTENCkNPTkZJ
R19QUkVFTVBUX05PTkU9eQ0KIyBDT05GSUdfUFJFRU1QVF9WT0xVTlRBUlkgaXMgbm90IHNldA0K
IyBDT05GSUdfUFJFRU1QVCBpcyBub3Qgc2V0DQpDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQ0KQ09O
RklHX1g4Nl9JT19BUElDPXkNCiMgQ09ORklHX1g4Nl9SRVJPVVRFX0ZPUl9CUk9LRU5fQk9PVF9J
UlFTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1g4Nl9NQ0UgaXMgbm90IHNldA0KQ09ORklHX0k4Sz15
DQpDT05GSUdfTUlDUk9DT0RFPXkNCiMgQ09ORklHX01JQ1JPQ09ERV9JTlRFTCBpcyBub3Qgc2V0
DQpDT05GSUdfTUlDUk9DT0RFX0FNRD15DQpDT05GSUdfTUlDUk9DT0RFX09MRF9JTlRFUkZBQ0U9
eQ0KIyBDT05GSUdfTUlDUk9DT0RFX0lOVEVMX0VBUkxZIGlzIG5vdCBzZXQNCiMgQ09ORklHX01J
Q1JPQ09ERV9BTURfRUFSTFkgaXMgbm90IHNldA0KIyBDT05GSUdfTUlDUk9DT0RFX0VBUkxZIGlz
IG5vdCBzZXQNCkNPTkZJR19YODZfTVNSPXkNCkNPTkZJR19YODZfQ1BVSUQ9eQ0KQ09ORklHX0FS
Q0hfUEhZU19BRERSX1RfNjRCSVQ9eQ0KQ09ORklHX0FSQ0hfRE1BX0FERFJfVF82NEJJVD15DQpD
T05GSUdfRElSRUNUX0dCUEFHRVM9eQ0KQ09ORklHX0FSQ0hfU1BBUlNFTUVNX0VOQUJMRT15DQpD
T05GSUdfQVJDSF9TUEFSU0VNRU1fREVGQVVMVD15DQpDT05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZ
X01PREVMPXkNCiMgQ09ORklHX0FSQ0hfTUVNT1JZX1BST0JFIGlzIG5vdCBzZXQNCkNPTkZJR19J
TExFR0FMX1BPSU5URVJfVkFMVUU9MHhkZWFkMDAwMDAwMDAwMDAwDQpDT05GSUdfU0VMRUNUX01F
TU9SWV9NT0RFTD15DQpDT05GSUdfU1BBUlNFTUVNX01BTlVBTD15DQpDT05GSUdfU1BBUlNFTUVN
PXkNCkNPTkZJR19IQVZFX01FTU9SWV9QUkVTRU5UPXkNCkNPTkZJR19TUEFSU0VNRU1fRVhUUkVN
RT15DQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkNCkNPTkZJR19TUEFSU0VNRU1f
QUxMT0NfTUVNX01BUF9UT0dFVEhFUj15DQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVA9eQ0KQ09O
RklHX0hBVkVfTUVNQkxPQ0s9eQ0KQ09ORklHX0hBVkVfTUVNQkxPQ0tfTk9ERV9NQVA9eQ0KQ09O
RklHX0FSQ0hfRElTQ0FSRF9NRU1CTE9DSz15DQojIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19O
T0RFIGlzIG5vdCBzZXQNCkNPTkZJR19NRU1PUllfSE9UUExVRz15DQpDT05GSUdfTUVNT1JZX0hP
VFBMVUdfU1BBUlNFPXkNCiMgQ09ORklHX01FTU9SWV9IT1RSRU1PVkUgaXMgbm90IHNldA0KQ09O
RklHX1BBR0VGTEFHU19FWFRFTkRFRD15DQpDT05GSUdfU1BMSVRfUFRMT0NLX0NQVVM9NA0KQ09O
RklHX0FSQ0hfRU5BQkxFX1NQTElUX1BNRF9QVExPQ0s9eQ0KQ09ORklHX0JBTExPT05fQ09NUEFD
VElPTj15DQpDT05GSUdfQ09NUEFDVElPTj15DQpDT05GSUdfTUlHUkFUSU9OPXkNCkNPTkZJR19Q
SFlTX0FERFJfVF82NEJJVD15DQpDT05GSUdfWk9ORV9ETUFfRkxBRz0wDQpDT05GSUdfVklSVF9U
T19CVVM9eQ0KQ09ORklHX01NVV9OT1RJRklFUj15DQojIENPTkZJR19LU00gaXMgbm90IHNldA0K
Q09ORklHX0RFRkFVTFRfTU1BUF9NSU5fQUREUj00MDk2DQojIENPTkZJR19UUkFOU1BBUkVOVF9I
VUdFUEFHRSBpcyBub3Qgc2V0DQojIENPTkZJR19DUk9TU19NRU1PUllfQVRUQUNIIGlzIG5vdCBz
ZXQNCkNPTkZJR19ORUVEX1BFUl9DUFVfS009eQ0KQ09ORklHX0NMRUFOQ0FDSEU9eQ0KIyBDT05G
SUdfQ01BIGlzIG5vdCBzZXQNCiMgQ09ORklHX1pCVUQgaXMgbm90IHNldA0KQ09ORklHX1pTTUFM
TE9DPXkNCkNPTkZJR19QR1RBQkxFX01BUFBJTkc9eQ0KQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NP
UlJVUFRJT049eQ0KQ09ORklHX1g4Nl9CT09UUEFSQU1fTUVNT1JZX0NPUlJVUFRJT05fQ0hFQ0s9
eQ0KQ09ORklHX1g4Nl9SRVNFUlZFX0xPVz02NA0KIyBDT05GSUdfTVRSUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19BUkNIX1JBTkRPTSBpcyBub3Qgc2V0DQojIENPTkZJR19YODZfU01BUCBpcyBub3Qg
c2V0DQpDT05GSUdfRUZJPXkNCiMgQ09ORklHX0VGSV9TVFVCIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1NFQ0NPTVAgaXMgbm90IHNldA0KIyBDT05GSUdfSFpfMTAwIGlzIG5vdCBzZXQNCkNPTkZJR19I
Wl8yNTA9eQ0KIyBDT05GSUdfSFpfMzAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0haXzEwMDAgaXMg
bm90IHNldA0KQ09ORklHX0haPTI1MA0KQ09ORklHX1NDSEVEX0hSVElDSz15DQpDT05GSUdfS0VY
RUM9eQ0KIyBDT05GSUdfQ1JBU0hfRFVNUCBpcyBub3Qgc2V0DQpDT05GSUdfUEhZU0lDQUxfU1RB
UlQ9MHgxMDAwMDAwDQojIENPTkZJR19SRUxPQ0FUQUJMRSBpcyBub3Qgc2V0DQpDT05GSUdfUEhZ
U0lDQUxfQUxJR049MHgyMDAwMDANCiMgQ09ORklHX0NNRExJTkVfQk9PTCBpcyBub3Qgc2V0DQpD
T05GSUdfQVJDSF9FTkFCTEVfTUVNT1JZX0hPVFBMVUc9eQ0KQ09ORklHX0FSQ0hfRU5BQkxFX01F
TU9SWV9IT1RSRU1PVkU9eQ0KDQojDQojIFBvd2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9u
cw0KIw0KIyBDT05GSUdfU1VTUEVORCBpcyBub3Qgc2V0DQpDT05GSUdfSElCRVJOQVRFX0NBTExC
QUNLUz15DQpDT05GSUdfUE1fU0xFRVA9eQ0KQ09ORklHX1BNX0FVVE9TTEVFUD15DQojIENPTkZJ
R19QTV9XQUtFTE9DS1MgaXMgbm90IHNldA0KIyBDT05GSUdfUE1fUlVOVElNRSBpcyBub3Qgc2V0
DQpDT05GSUdfUE09eQ0KQ09ORklHX1BNX0RFQlVHPXkNCiMgQ09ORklHX1BNX0FEVkFOQ0VEX0RF
QlVHIGlzIG5vdCBzZXQNCkNPTkZJR19QTV9TTEVFUF9ERUJVRz15DQpDT05GSUdfUE1fVFJBQ0U9
eQ0KQ09ORklHX1BNX1RSQUNFX1JUQz15DQpDT05GSUdfV1FfUE9XRVJfRUZGSUNJRU5UX0RFRkFV
TFQ9eQ0KQ09ORklHX0FDUEk9eQ0KIyBDT05GSUdfQUNQSV9FQ19ERUJVR0ZTIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0FDUElfQUMgaXMgbm90IHNldA0KQ09ORklHX0FDUElfQkFUVEVSWT15DQpDT05G
SUdfQUNQSV9CVVRUT049eQ0KQ09ORklHX0FDUElfVklERU89eQ0KQ09ORklHX0FDUElfRkFOPXkN
CkNPTkZJR19BQ1BJX0RPQ0s9eQ0KQ09ORklHX0FDUElfUFJPQ0VTU09SPXkNCiMgQ09ORklHX0FD
UElfUFJPQ0VTU09SX0FHR1JFR0FUT1IgaXMgbm90IHNldA0KQ09ORklHX0FDUElfVEhFUk1BTD15
DQpDT05GSUdfQUNQSV9DVVNUT01fRFNEVF9GSUxFPSIiDQojIENPTkZJR19BQ1BJX0NVU1RPTV9E
U0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FDUElfSU5JVFJEX1RBQkxFX09WRVJSSURFIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX0FDUElfUENJX1NM
T1Q9eQ0KIyBDT05GSUdfWDg2X1BNX1RJTUVSIGlzIG5vdCBzZXQNCkNPTkZJR19BQ1BJX0NPTlRB
SU5FUj15DQpDT05GSUdfQUNQSV9IT1RQTFVHX01FTU9SWT15DQpDT05GSUdfQUNQSV9TQlM9eQ0K
IyBDT05GSUdfQUNQSV9IRUQgaXMgbm90IHNldA0KQ09ORklHX0FDUElfQ1VTVE9NX01FVEhPRD15
DQojIENPTkZJR19BQ1BJX0JHUlQgaXMgbm90IHNldA0KIyBDT05GSUdfQUNQSV9BUEVJIGlzIG5v
dCBzZXQNCkNPTkZJR19TRkk9eQ0KDQojDQojIENQVSBGcmVxdWVuY3kgc2NhbGluZw0KIw0KQ09O
RklHX0NQVV9GUkVRPXkNCkNPTkZJR19DUFVfRlJFUV9HT1ZfQ09NTU9OPXkNCiMgQ09ORklHX0NQ
VV9GUkVRX1NUQVQgaXMgbm90IHNldA0KQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1BFUkZP
Uk1BTkNFPXkNCiMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1BPV0VSU0FWRSBpcyBub3Qg
c2V0DQojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9VU0VSU1BBQ0UgaXMgbm90IHNldA0K
IyBDT05GSUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfT05ERU1BTkQgaXMgbm90IHNldA0KIyBDT05G
SUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfQ09OU0VSVkFUSVZFIGlzIG5vdCBzZXQNCkNPTkZJR19D
UFVfRlJFUV9HT1ZfUEVSRk9STUFOQ0U9eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9QT1dFUlNBVkU9
eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9VU0VSU1BBQ0U9eQ0KQ09ORklHX0NQVV9GUkVRX0dPVl9P
TkRFTUFORD15DQpDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTlNFUlZBVElWRT15DQoNCiMNCiMgeDg2
IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJzDQojDQojIENPTkZJR19YODZfSU5URUxfUFNU
QVRFIGlzIG5vdCBzZXQNCkNPTkZJR19YODZfUENDX0NQVUZSRVE9eQ0KIyBDT05GSUdfWDg2X0FD
UElfQ1BVRlJFUSBpcyBub3Qgc2V0DQojIENPTkZJR19YODZfU1BFRURTVEVQX0NFTlRSSU5PIGlz
IG5vdCBzZXQNCkNPTkZJR19YODZfUDRfQ0xPQ0tNT0Q9eQ0KDQojDQojIHNoYXJlZCBvcHRpb25z
DQojDQpDT05GSUdfWDg2X1NQRUVEU1RFUF9MSUI9eQ0KDQojDQojIENQVSBJZGxlDQojDQpDT05G
SUdfQ1BVX0lETEU9eQ0KIyBDT05GSUdfQ1BVX0lETEVfTVVMVElQTEVfRFJJVkVSUyBpcyBub3Qg
c2V0DQpDT05GSUdfQ1BVX0lETEVfR09WX0xBRERFUj15DQpDT05GSUdfQ1BVX0lETEVfR09WX01F
TlU9eQ0KIyBDT05GSUdfQVJDSF9ORUVEU19DUFVfSURMRV9DT1VQTEVEIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0lOVEVMX0lETEUgaXMgbm90IHNldA0KDQojDQojIE1lbW9yeSBwb3dlciBzYXZpbmdz
DQojDQpDT05GSUdfSTczMDBfSURMRV9JT0FUX0NIQU5ORUw9eQ0KQ09ORklHX0k3MzAwX0lETEU9
eQ0KDQojDQojIEJ1cyBvcHRpb25zIChQQ0kgZXRjLikNCiMNCkNPTkZJR19QQ0k9eQ0KQ09ORklH
X1BDSV9ESVJFQ1Q9eQ0KQ09ORklHX1BDSV9NTUNPTkZJRz15DQpDT05GSUdfUENJX1hFTj15DQpD
T05GSUdfUENJX0RPTUFJTlM9eQ0KQ09ORklHX1BDSV9DTkIyMExFX1FVSVJLPXkNCiMgQ09ORklH
X1BDSUVQT1JUQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1BDSV9NU0kgaXMgbm90IHNldA0KQ09O
RklHX1BDSV9ERUJVRz15DQojIENPTkZJR19QQ0lfUkVBTExPQ19FTkFCTEVfQVVUTyBpcyBub3Qg
c2V0DQpDT05GSUdfUENJX1NUVUI9eQ0KQ09ORklHX1hFTl9QQ0lERVZfRlJPTlRFTkQ9eQ0KQ09O
RklHX0hUX0lSUT15DQpDT05GSUdfUENJX0FUUz15DQojIENPTkZJR19QQ0lfSU9WIGlzIG5vdCBz
ZXQNCkNPTkZJR19QQ0lfUFJJPXkNCiMgQ09ORklHX1BDSV9QQVNJRCBpcyBub3Qgc2V0DQpDT05G
SUdfUENJX0lPQVBJQz15DQpDT05GSUdfUENJX0xBQkVMPXkNCg0KIw0KIyBQQ0kgaG9zdCBjb250
cm9sbGVyIGRyaXZlcnMNCiMNCiMgQ09ORklHX0lTQV9ETUFfQVBJIGlzIG5vdCBzZXQNCkNPTkZJ
R19BTURfTkI9eQ0KIyBDT05GSUdfUENDQVJEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hPVFBMVUdf
UENJIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JBUElESU8gaXMgbm90IHNldA0KQ09ORklHX1g4Nl9T
WVNGQj15DQoNCiMNCiMgRXhlY3V0YWJsZSBmaWxlIGZvcm1hdHMgLyBFbXVsYXRpb25zDQojDQoj
IENPTkZJR19CSU5GTVRfRUxGIGlzIG5vdCBzZXQNCkNPTkZJR19BUkNIX0JJTkZNVF9FTEZfUkFO
RE9NSVpFX1BJRT15DQpDT05GSUdfQklORk1UX1NDUklQVD15DQojIENPTkZJR19IQVZFX0FPVVQg
aXMgbm90IHNldA0KIyBDT05GSUdfQklORk1UX01JU0MgaXMgbm90IHNldA0KIyBDT05GSUdfQ09S
RURVTVAgaXMgbm90IHNldA0KIyBDT05GSUdfSUEzMl9FTVVMQVRJT04gaXMgbm90IHNldA0KQ09O
RklHX1g4Nl9ERVZfRE1BX09QUz15DQpDT05GSUdfTkVUPXkNCg0KIw0KIyBOZXR3b3JraW5nIG9w
dGlvbnMNCiMNCkNPTkZJR19QQUNLRVQ9eQ0KIyBDT05GSUdfUEFDS0VUX0RJQUcgaXMgbm90IHNl
dA0KIyBDT05GSUdfVU5JWCBpcyBub3Qgc2V0DQpDT05GSUdfWEZSTT15DQpDT05GSUdfWEZSTV9B
TEdPPXkNCkNPTkZJR19YRlJNX1VTRVI9eQ0KIyBDT05GSUdfWEZSTV9TVUJfUE9MSUNZIGlzIG5v
dCBzZXQNCkNPTkZJR19YRlJNX01JR1JBVEU9eQ0KQ09ORklHX1hGUk1fSVBDT01QPXkNCkNPTkZJ
R19ORVRfS0VZPXkNCiMgQ09ORklHX05FVF9LRVlfTUlHUkFURSBpcyBub3Qgc2V0DQpDT05GSUdf
SU5FVD15DQojIENPTkZJR19JUF9NVUxUSUNBU1QgaXMgbm90IHNldA0KQ09ORklHX0lQX0FEVkFO
Q0VEX1JPVVRFUj15DQpDT05GSUdfSVBfRklCX1RSSUVfU1RBVFM9eQ0KQ09ORklHX0lQX01VTFRJ
UExFX1RBQkxFUz15DQojIENPTkZJR19JUF9ST1VURV9NVUxUSVBBVEggaXMgbm90IHNldA0KIyBD
T05GSUdfSVBfUk9VVEVfVkVSQk9TRSBpcyBub3Qgc2V0DQojIENPTkZJR19JUF9QTlAgaXMgbm90
IHNldA0KQ09ORklHX05FVF9JUElQPXkNCkNPTkZJR19ORVRfSVBHUkVfREVNVVg9eQ0KQ09ORklH
X05FVF9JUF9UVU5ORUw9eQ0KIyBDT05GSUdfTkVUX0lQR1JFIGlzIG5vdCBzZXQNCkNPTkZJR19T
WU5fQ09PS0lFUz15DQojIENPTkZJR19ORVRfSVBWVEkgaXMgbm90IHNldA0KIyBDT05GSUdfSU5F
VF9BSCBpcyBub3Qgc2V0DQpDT05GSUdfSU5FVF9FU1A9eQ0KQ09ORklHX0lORVRfSVBDT01QPXkN
CkNPTkZJR19JTkVUX1hGUk1fVFVOTkVMPXkNCkNPTkZJR19JTkVUX1RVTk5FTD15DQpDT05GSUdf
SU5FVF9YRlJNX01PREVfVFJBTlNQT1JUPXkNCkNPTkZJR19JTkVUX1hGUk1fTU9ERV9UVU5ORUw9
eQ0KQ09ORklHX0lORVRfWEZSTV9NT0RFX0JFRVQ9eQ0KQ09ORklHX0lORVRfTFJPPXkNCkNPTkZJ
R19JTkVUX0RJQUc9eQ0KQ09ORklHX0lORVRfVENQX0RJQUc9eQ0KQ09ORklHX0lORVRfVURQX0RJ
QUc9eQ0KQ09ORklHX1RDUF9DT05HX0FEVkFOQ0VEPXkNCiMgQ09ORklHX1RDUF9DT05HX0JJQyBp
cyBub3Qgc2V0DQojIENPTkZJR19UQ1BfQ09OR19DVUJJQyBpcyBub3Qgc2V0DQpDT05GSUdfVENQ
X0NPTkdfV0VTVFdPT0Q9eQ0KQ09ORklHX1RDUF9DT05HX0hUQ1A9eQ0KIyBDT05GSUdfVENQX0NP
TkdfSFNUQ1AgaXMgbm90IHNldA0KIyBDT05GSUdfVENQX0NPTkdfSFlCTEEgaXMgbm90IHNldA0K
Q09ORklHX1RDUF9DT05HX1ZFR0FTPXkNCiMgQ09ORklHX1RDUF9DT05HX1NDQUxBQkxFIGlzIG5v
dCBzZXQNCkNPTkZJR19UQ1BfQ09OR19MUD15DQpDT05GSUdfVENQX0NPTkdfVkVOTz15DQojIENP
TkZJR19UQ1BfQ09OR19ZRUFIIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RDUF9DT05HX0lMTElOT0lT
IGlzIG5vdCBzZXQNCkNPTkZJR19ERUZBVUxUX0hUQ1A9eQ0KIyBDT05GSUdfREVGQVVMVF9WRUdB
UyBpcyBub3Qgc2V0DQojIENPTkZJR19ERUZBVUxUX1ZFTk8gaXMgbm90IHNldA0KIyBDT05GSUdf
REVGQVVMVF9XRVNUV09PRCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUZBVUxUX1JFTk8gaXMgbm90
IHNldA0KQ09ORklHX0RFRkFVTFRfVENQX0NPTkc9Imh0Y3AiDQpDT05GSUdfVENQX01ENVNJRz15
DQpDT05GSUdfSVBWNj15DQpDT05GSUdfSVBWNl9ST1VURVJfUFJFRj15DQojIENPTkZJR19JUFY2
X1JPVVRFX0lORk8gaXMgbm90IHNldA0KIyBDT05GSUdfSVBWNl9PUFRJTUlTVElDX0RBRCBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTkVUNl9BSCBpcyBub3Qgc2V0DQpDT05GSUdfSU5FVDZfRVNQPXkN
CiMgQ09ORklHX0lORVQ2X0lQQ09NUCBpcyBub3Qgc2V0DQpDT05GSUdfSVBWNl9NSVA2PXkNCiMg
Q09ORklHX0lORVQ2X1hGUk1fVFVOTkVMIGlzIG5vdCBzZXQNCkNPTkZJR19JTkVUNl9UVU5ORUw9
eQ0KQ09ORklHX0lORVQ2X1hGUk1fTU9ERV9UUkFOU1BPUlQ9eQ0KQ09ORklHX0lORVQ2X1hGUk1f
TU9ERV9UVU5ORUw9eQ0KIyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNldA0K
IyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX1JPVVRFT1BUSU1JWkFUSU9OIGlzIG5vdCBzZXQNCkNP
TkZJR19JUFY2X1ZUST15DQojIENPTkZJR19JUFY2X1NJVCBpcyBub3Qgc2V0DQpDT05GSUdfSVBW
Nl9UVU5ORUw9eQ0KQ09ORklHX0lQVjZfR1JFPXkNCkNPTkZJR19JUFY2X01VTFRJUExFX1RBQkxF
Uz15DQojIENPTkZJR19JUFY2X1NVQlRSRUVTIGlzIG5vdCBzZXQNCkNPTkZJR19JUFY2X01ST1VU
RT15DQpDT05GSUdfSVBWNl9NUk9VVEVfTVVMVElQTEVfVEFCTEVTPXkNCiMgQ09ORklHX0lQVjZf
UElNU01fVjIgaXMgbm90IHNldA0KIyBDT05GSUdfTkVUTEFCRUwgaXMgbm90IHNldA0KQ09ORklH
X05FVFdPUktfU0VDTUFSSz15DQpDT05GSUdfTkVUV09SS19QSFlfVElNRVNUQU1QSU5HPXkNCkNP
TkZJR19ORVRGSUxURVI9eQ0KIyBDT05GSUdfTkVURklMVEVSX0RFQlVHIGlzIG5vdCBzZXQNCkNP
TkZJR19ORVRGSUxURVJfQURWQU5DRUQ9eQ0KIyBDT05GSUdfQlJJREdFX05FVEZJTFRFUiBpcyBu
b3Qgc2V0DQoNCiMNCiMgQ29yZSBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KQ09ORklHX05F
VEZJTFRFUl9ORVRMSU5LPXkNCkNPTkZJR19ORVRGSUxURVJfTkVUTElOS19BQ0NUPXkNCiMgQ09O
RklHX05FVEZJTFRFUl9ORVRMSU5LX1FVRVVFIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJf
TkVUTElOS19MT0c9eQ0KQ09ORklHX05GX0NPTk5UUkFDSz15DQpDT05GSUdfTkZfQ09OTlRSQUNL
X01BUks9eQ0KQ09ORklHX05GX0NPTk5UUkFDS19TRUNNQVJLPXkNCkNPTkZJR19ORl9DT05OVFJB
Q0tfRVZFTlRTPXkNCiMgQ09ORklHX05GX0NPTk5UUkFDS19USU1FT1VUIGlzIG5vdCBzZXQNCiMg
Q09ORklHX05GX0NPTk5UUkFDS19USU1FU1RBTVAgaXMgbm90IHNldA0KQ09ORklHX05GX0NPTk5U
UkFDS19MQUJFTFM9eQ0KIyBDT05GSUdfTkZfQ1RfUFJPVE9fRENDUCBpcyBub3Qgc2V0DQpDT05G
SUdfTkZfQ1RfUFJPVE9fR1JFPXkNCkNPTkZJR19ORl9DVF9QUk9UT19TQ1RQPXkNCkNPTkZJR19O
Rl9DVF9QUk9UT19VRFBMSVRFPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfQU1BTkRBPXkNCkNPTkZJ
R19ORl9DT05OVFJBQ0tfRlRQPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15DQojIENPTkZJ
R19ORl9DT05OVFJBQ0tfSVJDIGlzIG5vdCBzZXQNCkNPTkZJR19ORl9DT05OVFJBQ0tfQlJPQURD
QVNUPXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfTkVUQklPU19OUz15DQpDT05GSUdfTkZfQ09OTlRS
QUNLX1NOTVA9eQ0KQ09ORklHX05GX0NPTk5UUkFDS19QUFRQPXkNCkNPTkZJR19ORl9DT05OVFJB
Q0tfU0FORT15DQojIENPTkZJR19ORl9DT05OVFJBQ0tfU0lQIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05GX0NPTk5UUkFDS19URlRQIGlzIG5vdCBzZXQNCiMgQ09ORklHX05GX0NUX05FVExJTksgaXMg
bm90IHNldA0KQ09ORklHX05GX0NUX05FVExJTktfVElNRU9VVD15DQojIENPTkZJR19ORl9UQUJM
RVMgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVEFCTEVTPXkNCg0KIw0KIyBYdGFibGVz
IGNvbWJpbmVkIG1vZHVsZXMNCiMNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFSSz15DQpDT05GSUdf
TkVURklMVEVSX1hUX0NPTk5NQVJLPXkNCkNPTkZJR19ORVRGSUxURVJfWFRfU0VUPXkNCg0KIw0K
IyBYdGFibGVzIHRhcmdldHMNCiMNCkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0FVRElUPXkN
CiMgQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfQ0xBU1NJRlkgaXMgbm90IHNldA0KQ09ORklH
X05FVEZJTFRFUl9YVF9UQVJHRVRfQ09OTk1BUks9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX1RB
UkdFVF9DT05OU0VDTUFSSyBpcyBub3Qgc2V0DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9I
TUFSSz15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9JRExFVElNRVI9eQ0KIyBDT05GSUdf
TkVURklMVEVSX1hUX1RBUkdFVF9MRUQgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9U
QVJHRVRfTE9HPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfTUFSSyBpcyBub3Qgc2V0
DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORkxPRz15DQojIENPTkZJR19ORVRGSUxURVJf
WFRfVEFSR0VUX05GUVVFVUUgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRf
UkFURUVTVD15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9URUU9eQ0KQ09ORklHX05FVEZJ
TFRFUl9YVF9UQVJHRVRfU0VDTUFSSz15DQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9UQ1BN
U1M9eQ0KDQojDQojIFh0YWJsZXMgbWF0Y2hlcw0KIw0KIyBDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0FERFJUWVBFIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQlBGPXkN
CiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DTFVTVEVSIGlzIG5vdCBzZXQNCiMgQ09ORklH
X05FVEZJTFRFUl9YVF9NQVRDSF9DT01NRU5UIGlzIG5vdCBzZXQNCiMgQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9DT05OQllURVMgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRD
SF9DT05OTEFCRUw9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTElNSVQ9eQ0KQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTUFSSz15DQojIENPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfQ09OTlRSQUNLIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ1BV
PXkNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfRENDUD15DQojIENPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfREVWR1JPVVAgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9E
U0NQPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9FQ04gaXMgbm90IHNldA0KQ09ORklH
X05FVEZJTFRFUl9YVF9NQVRDSF9FU1A9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9IQVNI
TElNSVQ9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0hFTFBFUiBpcyBub3Qgc2V0DQoj
IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEwgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9JUENPTVA9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9JUFJBTkdFPXkN
CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTEVOR1RIPXkNCkNPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfTElNSVQ9eQ0KIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX01BQyBpcyBub3Qgc2V0
DQojIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTUFSSyBpcyBub3Qgc2V0DQojIENPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfTVVMVElQT1JUIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfTkZBQ0NUPXkNCiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9PU0YgaXMgbm90
IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9PV05FUj15DQojIENPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfUE9MSUNZIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
UEtUVFlQRT15DQojIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfUVVPVEEgaXMgbm90IHNldA0K
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9SQVRFRVNUPXkNCiMgQ09ORklHX05FVEZJTFRFUl9Y
VF9NQVRDSF9SRUFMTSBpcyBub3Qgc2V0DQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1JFQ0VO
VD15DQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1NDVFA9eQ0KQ09ORklHX05FVEZJTFRFUl9Y
VF9NQVRDSF9TT0NLRVQ9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TVEFURT15DQojIENP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU1RBVElTVElDIGlzIG5vdCBzZXQNCiMgQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9TVFJJTkcgaXMgbm90IHNldA0KQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9UQ1BNU1M9eQ0KQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9USU1FPXkNCkNPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfVTMyPXkNCkNPTkZJR19JUF9TRVQ9eQ0KQ09ORklHX0lQX1NFVF9N
QVg9MjU2DQpDT05GSUdfSVBfU0VUX0JJVE1BUF9JUD15DQpDT05GSUdfSVBfU0VUX0JJVE1BUF9J
UE1BQz15DQojIENPTkZJR19JUF9TRVRfQklUTUFQX1BPUlQgaXMgbm90IHNldA0KQ09ORklHX0lQ
X1NFVF9IQVNIX0lQPXkNCiMgQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVCBpcyBub3Qgc2V0DQpD
T05GSUdfSVBfU0VUX0hBU0hfSVBQT1JUSVA9eQ0KQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVE5F
VD15DQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVE5FVD15DQojIENPTkZJR19JUF9TRVRfSEFT
SF9ORVQgaXMgbm90IHNldA0KQ09ORklHX0lQX1NFVF9IQVNIX05FVE5FVD15DQojIENPTkZJR19J
UF9TRVRfSEFTSF9ORVRQT1JUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lQX1NFVF9IQVNIX05FVElG
QUNFIGlzIG5vdCBzZXQNCkNPTkZJR19JUF9TRVRfTElTVF9TRVQ9eQ0KIyBDT05GSUdfSVBfVlMg
aXMgbm90IHNldA0KDQojDQojIElQOiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KQ09ORklH
X05GX0RFRlJBR19JUFY0PXkNCkNPTkZJR19ORl9DT05OVFJBQ0tfSVBWND15DQojIENPTkZJR19J
UF9ORl9JUFRBQkxFUyBpcyBub3Qgc2V0DQpDT05GSUdfSVBfTkZfQVJQVEFCTEVTPXkNCkNPTkZJ
R19JUF9ORl9BUlBGSUxURVI9eQ0KQ09ORklHX0lQX05GX0FSUF9NQU5HTEU9eQ0KDQojDQojIElQ
djY6IE5ldGZpbHRlciBDb25maWd1cmF0aW9uDQojDQpDT05GSUdfTkZfREVGUkFHX0lQVjY9eQ0K
Q09ORklHX05GX0NPTk5UUkFDS19JUFY2PXkNCiMgQ09ORklHX0lQNl9ORl9JUFRBQkxFUyBpcyBu
b3Qgc2V0DQoNCiMNCiMgREVDbmV0OiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbg0KIw0KIyBDT05G
SUdfREVDTkVUX05GX0dSQUJVTEFUT1IgaXMgbm90IHNldA0KQ09ORklHX0JSSURHRV9ORl9FQlRB
QkxFUz15DQpDT05GSUdfQlJJREdFX0VCVF9CUk9VVEU9eQ0KQ09ORklHX0JSSURHRV9FQlRfVF9G
SUxURVI9eQ0KIyBDT05GSUdfQlJJREdFX0VCVF9UX05BVCBpcyBub3Qgc2V0DQpDT05GSUdfQlJJ
REdFX0VCVF84MDJfMz15DQpDT05GSUdfQlJJREdFX0VCVF9BTU9ORz15DQojIENPTkZJR19CUklE
R0VfRUJUX0FSUCBpcyBub3Qgc2V0DQpDT05GSUdfQlJJREdFX0VCVF9JUD15DQpDT05GSUdfQlJJ
REdFX0VCVF9JUDY9eQ0KQ09ORklHX0JSSURHRV9FQlRfTElNSVQ9eQ0KIyBDT05GSUdfQlJJREdF
X0VCVF9NQVJLIGlzIG5vdCBzZXQNCkNPTkZJR19CUklER0VfRUJUX1BLVFRZUEU9eQ0KQ09ORklH
X0JSSURHRV9FQlRfU1RQPXkNCiMgQ09ORklHX0JSSURHRV9FQlRfVkxBTiBpcyBub3Qgc2V0DQpD
T05GSUdfQlJJREdFX0VCVF9BUlBSRVBMWT15DQpDT05GSUdfQlJJREdFX0VCVF9ETkFUPXkNCkNP
TkZJR19CUklER0VfRUJUX01BUktfVD15DQpDT05GSUdfQlJJREdFX0VCVF9SRURJUkVDVD15DQoj
IENPTkZJR19CUklER0VfRUJUX1NOQVQgaXMgbm90IHNldA0KQ09ORklHX0JSSURHRV9FQlRfTE9H
PXkNCkNPTkZJR19CUklER0VfRUJUX1VMT0c9eQ0KQ09ORklHX0JSSURHRV9FQlRfTkZMT0c9eQ0K
Q09ORklHX0lQX0RDQ1A9eQ0KQ09ORklHX0lORVRfRENDUF9ESUFHPXkNCg0KIw0KIyBEQ0NQIEND
SURzIENvbmZpZ3VyYXRpb24NCiMNCkNPTkZJR19JUF9EQ0NQX0NDSUQyX0RFQlVHPXkNCkNPTkZJ
R19JUF9EQ0NQX0NDSUQzPXkNCiMgQ09ORklHX0lQX0RDQ1BfQ0NJRDNfREVCVUcgaXMgbm90IHNl
dA0KQ09ORklHX0lQX0RDQ1BfVEZSQ19MSUI9eQ0KDQojDQojIERDQ1AgS2VybmVsIEhhY2tpbmcN
CiMNCkNPTkZJR19JUF9EQ0NQX0RFQlVHPXkNCiMgQ09ORklHX0lQX1NDVFAgaXMgbm90IHNldA0K
Q09ORklHX1JEUz15DQojIENPTkZJR19SRFNfUkRNQSBpcyBub3Qgc2V0DQojIENPTkZJR19SRFNf
VENQIGlzIG5vdCBzZXQNCiMgQ09ORklHX1JEU19ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfVElQ
Qz15DQpDT05GSUdfVElQQ19QT1JUUz04MTkxDQojIENPTkZJR19BVE0gaXMgbm90IHNldA0KIyBD
T05GSUdfTDJUUCBpcyBub3Qgc2V0DQpDT05GSUdfU1RQPXkNCkNPTkZJR19CUklER0U9eQ0KIyBD
T05GSUdfQlJJREdFX0lHTVBfU05PT1BJTkcgaXMgbm90IHNldA0KIyBDT05GSUdfVkxBTl84MDIx
USBpcyBub3Qgc2V0DQpDT05GSUdfREVDTkVUPXkNCiMgQ09ORklHX0RFQ05FVF9ST1VURVIgaXMg
bm90IHNldA0KQ09ORklHX0xMQz15DQpDT05GSUdfTExDMj15DQpDT05GSUdfSVBYPXkNCkNPTkZJ
R19JUFhfSU5URVJOPXkNCiMgQ09ORklHX0FUQUxLIGlzIG5vdCBzZXQNCkNPTkZJR19YMjU9eQ0K
IyBDT05GSUdfTEFQQiBpcyBub3Qgc2V0DQojIENPTkZJR19QSE9ORVQgaXMgbm90IHNldA0KQ09O
RklHX0lFRUU4MDIxNTQ9eQ0KIyBDT05GSUdfSUVFRTgwMjE1NF82TE9XUEFOIGlzIG5vdCBzZXQN
CiMgQ09ORklHX01BQzgwMjE1NCBpcyBub3Qgc2V0DQojIENPTkZJR19ORVRfU0NIRUQgaXMgbm90
IHNldA0KIyBDT05GSUdfRENCIGlzIG5vdCBzZXQNCkNPTkZJR19ETlNfUkVTT0xWRVI9eQ0KIyBD
T05GSUdfQkFUTUFOX0FEViBpcyBub3Qgc2V0DQpDT05GSUdfT1BFTlZTV0lUQ0g9eQ0KIyBDT05G
SUdfT1BFTlZTV0lUQ0hfR1JFIGlzIG5vdCBzZXQNCkNPTkZJR19WU09DS0VUUz15DQpDT05GSUdf
Vk1XQVJFX1ZNQ0lfVlNPQ0tFVFM9eQ0KQ09ORklHX05FVExJTktfTU1BUD15DQpDT05GSUdfTkVU
TElOS19ESUFHPXkNCkNPTkZJR19ORVRfTVBMU19HU089eQ0KQ09ORklHX0hTUj15DQpDT05GSUdf
TkVUX1JYX0JVU1lfUE9MTD15DQpDT05GSUdfQlFMPXkNCg0KIw0KIyBOZXR3b3JrIHRlc3RpbmcN
CiMNCkNPTkZJR19IQU1SQURJTz15DQoNCiMNCiMgUGFja2V0IFJhZGlvIHByb3RvY29scw0KIw0K
Q09ORklHX0FYMjU9eQ0KQ09ORklHX0FYMjVfREFNQV9TTEFWRT15DQpDT05GSUdfTkVUUk9NPXkN
CiMgQ09ORklHX1JPU0UgaXMgbm90IHNldA0KDQojDQojIEFYLjI1IG5ldHdvcmsgZGV2aWNlIGRy
aXZlcnMNCiMNCkNPTkZJR19NS0lTUz15DQojIENPTkZJR182UEFDSyBpcyBub3Qgc2V0DQpDT05G
SUdfQlBRRVRIRVI9eQ0KQ09ORklHX0JBWUNPTV9TRVJfRkRYPXkNCkNPTkZJR19CQVlDT01fU0VS
X0hEWD15DQpDT05GSUdfQkFZQ09NX1BBUj15DQpDT05GSUdfWUFNPXkNCiMgQ09ORklHX0NBTiBp
cyBub3Qgc2V0DQpDT05GSUdfSVJEQT15DQoNCiMNCiMgSXJEQSBwcm90b2NvbHMNCiMNCkNPTkZJ
R19JUkxBTj15DQojIENPTkZJR19JUkNPTU0gaXMgbm90IHNldA0KIyBDT05GSUdfSVJEQV9VTFRS
QSBpcyBub3Qgc2V0DQoNCiMNCiMgSXJEQSBvcHRpb25zDQojDQpDT05GSUdfSVJEQV9DQUNIRV9M
QVNUX0xTQVA9eQ0KQ09ORklHX0lSREFfRkFTVF9SUj15DQpDT05GSUdfSVJEQV9ERUJVRz15DQoN
CiMNCiMgSW5mcmFyZWQtcG9ydCBkZXZpY2UgZHJpdmVycw0KIw0KDQojDQojIFNJUiBkZXZpY2Ug
ZHJpdmVycw0KIw0KIyBDT05GSUdfSVJUVFlfU0lSIGlzIG5vdCBzZXQNCg0KIw0KIyBEb25nbGUg
c3VwcG9ydA0KIw0KDQojDQojIEZJUiBkZXZpY2UgZHJpdmVycw0KIw0KQ09ORklHX1ZMU0lfRklS
PXkNCiMgQ09ORklHX0JUIGlzIG5vdCBzZXQNCkNPTkZJR19BRl9SWFJQQz15DQpDT05GSUdfQUZf
UlhSUENfREVCVUc9eQ0KIyBDT05GSUdfUlhLQUQgaXMgbm90IHNldA0KQ09ORklHX0ZJQl9SVUxF
Uz15DQojIENPTkZJR19XSVJFTEVTUyBpcyBub3Qgc2V0DQpDT05GSUdfV0lNQVg9eQ0KQ09ORklH
X1dJTUFYX0RFQlVHX0xFVkVMPTgNCiMgQ09ORklHX1JGS0lMTCBpcyBub3Qgc2V0DQojIENPTkZJ
R19SRktJTExfUkVHVUxBVE9SIGlzIG5vdCBzZXQNCkNPTkZJR19ORVRfOVA9eQ0KQ09ORklHX05F
VF85UF9WSVJUSU89eQ0KQ09ORklHX05FVF85UF9SRE1BPXkNCiMgQ09ORklHX05FVF85UF9ERUJV
RyBpcyBub3Qgc2V0DQojIENPTkZJR19DQUlGIGlzIG5vdCBzZXQNCkNPTkZJR19DRVBIX0xJQj15
DQojIENPTkZJR19DRVBIX0xJQl9QUkVUVFlERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19DRVBI
X0xJQl9VU0VfRE5TX1JFU09MVkVSIGlzIG5vdCBzZXQNCkNPTkZJR19ORkM9eQ0KQ09ORklHX05G
Q19ESUdJVEFMPXkNCkNPTkZJR19ORkNfTkNJPXkNCkNPTkZJR19ORkNfSENJPXkNCkNPTkZJR19O
RkNfU0hETEM9eQ0KDQojDQojIE5lYXIgRmllbGQgQ29tbXVuaWNhdGlvbiAoTkZDKSBkZXZpY2Vz
DQojDQojIENPTkZJR19ORkNfV0lMSU5LIGlzIG5vdCBzZXQNCkNPTkZJR19ORkNfTUVJX1BIWT15
DQpDT05GSUdfTkZDX1NJTT15DQpDT05GSUdfTkZDX1BONTQ0PXkNCiMgQ09ORklHX05GQ19QTjU0
NF9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfTkZDX1BONTQ0X01FSSBpcyBub3Qgc2V0DQpDT05G
SUdfTkZDX01JQ1JPUkVBRD15DQpDT05GSUdfTkZDX01JQ1JPUkVBRF9JMkM9eQ0KQ09ORklHX05G
Q19NSUNST1JFQURfTUVJPXkNCkNPTkZJR19IQVZFX0JQRl9KSVQ9eQ0KDQojDQojIERldmljZSBE
cml2ZXJzDQojDQoNCiMNCiMgR2VuZXJpYyBEcml2ZXIgT3B0aW9ucw0KIw0KQ09ORklHX1VFVkVO
VF9IRUxQRVJfUEFUSD0iIg0KIyBDT05GSUdfREVWVE1QRlMgaXMgbm90IHNldA0KIyBDT05GSUdf
U1RBTkRBTE9ORSBpcyBub3Qgc2V0DQpDT05GSUdfUFJFVkVOVF9GSVJNV0FSRV9CVUlMRD15DQpD
T05GSUdfRldfTE9BREVSPXkNCkNPTkZJR19GSVJNV0FSRV9JTl9LRVJORUw9eQ0KQ09ORklHX0VY
VFJBX0ZJUk1XQVJFPSIiDQojIENPTkZJR19GV19MT0FERVJfVVNFUl9IRUxQRVIgaXMgbm90IHNl
dA0KQ09ORklHX0RFQlVHX0RSSVZFUj15DQojIENPTkZJR19ERUJVR19ERVZSRVMgaXMgbm90IHNl
dA0KQ09ORklHX1NZU19IWVBFUlZJU09SPXkNCiMgQ09ORklHX0dFTkVSSUNfQ1BVX0RFVklDRVMg
aXMgbm90IHNldA0KQ09ORklHX1JFR01BUD15DQpDT05GSUdfUkVHTUFQX0kyQz15DQpDT05GSUdf
UkVHTUFQX01NSU89eQ0KQ09ORklHX1JFR01BUF9JUlE9eQ0KQ09ORklHX0RNQV9TSEFSRURfQlVG
RkVSPXkNCg0KIw0KIyBCdXMgZGV2aWNlcw0KIw0KQ09ORklHX0NPTk5FQ1RPUj15DQpDT05GSUdf
UFJPQ19FVkVOVFM9eQ0KQ09ORklHX01URD15DQojIENPTkZJR19NVERfUkVEQk9PVF9QQVJUUyBp
cyBub3Qgc2V0DQpDT05GSUdfTVREX0NNRExJTkVfUEFSVFM9eQ0KQ09ORklHX01URF9BUjdfUEFS
VFM9eQ0KDQojDQojIFVzZXIgTW9kdWxlcyBBbmQgVHJhbnNsYXRpb24gTGF5ZXJzDQojDQpDT05G
SUdfTVREX09PUFM9eQ0KDQojDQojIFJBTS9ST00vRmxhc2ggY2hpcCBkcml2ZXJzDQojDQpDT05G
SUdfTVREX0NGST15DQpDT05GSUdfTVREX0pFREVDUFJPQkU9eQ0KQ09ORklHX01URF9HRU5fUFJP
QkU9eQ0KIyBDT05GSUdfTVREX0NGSV9BRFZfT1BUSU9OUyBpcyBub3Qgc2V0DQpDT05GSUdfTVRE
X01BUF9CQU5LX1dJRFRIXzE9eQ0KQ09ORklHX01URF9NQVBfQkFOS19XSURUSF8yPXkNCkNPTkZJ
R19NVERfTUFQX0JBTktfV0lEVEhfND15DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfOCBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfTUFQX0JBTktfV0lEVEhfMTYgaXMgbm90IHNldA0KIyBD
T05GSUdfTVREX01BUF9CQU5LX1dJRFRIXzMyIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfQ0ZJX0kx
PXkNCkNPTkZJR19NVERfQ0ZJX0kyPXkNCiMgQ09ORklHX01URF9DRklfSTQgaXMgbm90IHNldA0K
IyBDT05GSUdfTVREX0NGSV9JOCBpcyBub3Qgc2V0DQojIENPTkZJR19NVERfQ0ZJX0lOVEVMRVhU
IGlzIG5vdCBzZXQNCkNPTkZJR19NVERfQ0ZJX0FNRFNURD15DQpDT05GSUdfTVREX0NGSV9TVEFB
PXkNCkNPTkZJR19NVERfQ0ZJX1VUSUw9eQ0KQ09ORklHX01URF9SQU09eQ0KQ09ORklHX01URF9S
T009eQ0KIyBDT05GSUdfTVREX0FCU0VOVCBpcyBub3Qgc2V0DQoNCiMNCiMgTWFwcGluZyBkcml2
ZXJzIGZvciBjaGlwIGFjY2Vzcw0KIw0KIyBDT05GSUdfTVREX0NPTVBMRVhfTUFQUElOR1MgaXMg
bm90IHNldA0KIyBDT05GSUdfTVREX1BIWVNNQVAgaXMgbm90IHNldA0KQ09ORklHX01URF9TQzUy
MENEUD15DQpDT05GSUdfTVREX05FVFNDNTIwPXkNCiMgQ09ORklHX01URF9UUzU1MDAgaXMgbm90
IHNldA0KQ09ORklHX01URF9BTUQ3NlhST009eQ0KQ09ORklHX01URF9JQ0hYUk9NPXkNCkNPTkZJ
R19NVERfRVNCMlJPTT15DQojIENPTkZJR19NVERfQ0s4MDRYUk9NIGlzIG5vdCBzZXQNCkNPTkZJ
R19NVERfU0NCMl9GTEFTSD15DQpDT05GSUdfTVREX05FVHRlbD15DQojIENPTkZJR19NVERfTDQ0
MEdYIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfSU5URUxfVlJfTk9SPXkNCkNPTkZJR19NVERfUExB
VFJBTT15DQoNCiMNCiMgU2VsZi1jb250YWluZWQgTVREIGRldmljZSBkcml2ZXJzDQojDQpDT05G
SUdfTVREX1BNQzU1MT15DQojIENPTkZJR19NVERfUE1DNTUxX0JVR0ZJWCBpcyBub3Qgc2V0DQoj
IENPTkZJR19NVERfUE1DNTUxX0RFQlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX01URF9TTFJBTSBp
cyBub3Qgc2V0DQojIENPTkZJR19NVERfUEhSQU0gaXMgbm90IHNldA0KQ09ORklHX01URF9NVERS
QU09eQ0KQ09ORklHX01URFJBTV9UT1RBTF9TSVpFPTQwOTYNCkNPTkZJR19NVERSQU1fRVJBU0Vf
U0laRT0xMjgNCkNPTkZJR19NVERSQU1fQUJTX1BPUz0wDQoNCiMNCiMgRGlzay1Pbi1DaGlwIERl
dmljZSBEcml2ZXJzDQojDQpDT05GSUdfTVREX0RPQ0czPXkNCkNPTkZJR19CQ0hfQ09OU1RfTT0x
NA0KQ09ORklHX0JDSF9DT05TVF9UPTQNCiMgQ09ORklHX01URF9OQU5EIGlzIG5vdCBzZXQNCiMg
Q09ORklHX01URF9PTkVOQU5EIGlzIG5vdCBzZXQNCg0KIw0KIyBMUEREUiBmbGFzaCBtZW1vcnkg
ZHJpdmVycw0KIw0KIyBDT05GSUdfTVREX0xQRERSIGlzIG5vdCBzZXQNCkNPTkZJR19NVERfVUJJ
PXkNCkNPTkZJR19NVERfVUJJX1dMX1RIUkVTSE9MRD00MDk2DQpDT05GSUdfTVREX1VCSV9CRUJf
TElNSVQ9MjANCkNPTkZJR19NVERfVUJJX0ZBU1RNQVA9eQ0KQ09ORklHX01URF9VQklfR0xVRUJJ
PXkNCkNPTkZJR19QQVJQT1JUPXkNCkNPTkZJR19BUkNIX01JR0hUX0hBVkVfUENfUEFSUE9SVD15
DQpDT05GSUdfUEFSUE9SVF9QQz15DQpDT05GSUdfUEFSUE9SVF9TRVJJQUw9eQ0KQ09ORklHX1BB
UlBPUlRfUENfRklGTz15DQojIENPTkZJR19QQVJQT1JUX1BDX1NVUEVSSU8gaXMgbm90IHNldA0K
IyBDT05GSUdfUEFSUE9SVF9HU0MgaXMgbm90IHNldA0KQ09ORklHX1BBUlBPUlRfQVg4ODc5Nj15
DQojIENPTkZJR19QQVJQT1JUXzEyODQgaXMgbm90IHNldA0KQ09ORklHX1BBUlBPUlRfTk9UX1BD
PXkNCkNPTkZJR19QTlA9eQ0KIyBDT05GSUdfUE5QX0RFQlVHX01FU1NBR0VTIGlzIG5vdCBzZXQN
Cg0KIw0KIyBQcm90b2NvbHMNCiMNCkNPTkZJR19QTlBBQ1BJPXkNCg0KIw0KIyBNaXNjIGRldmlj
ZXMNCiMNCkNPTkZJR19TRU5TT1JTX0xJUzNMVjAyRD15DQojIENPTkZJR19BRDUyNVhfRFBPVCBp
cyBub3Qgc2V0DQpDT05GSUdfRFVNTVlfSVJRPXkNCkNPTkZJR19JQk1fQVNNPXkNCkNPTkZJR19Q
SEFOVE9NPXkNCiMgQ09ORklHX0lOVEVMX01JRF9QVEkgaXMgbm90IHNldA0KQ09ORklHX1NHSV9J
T0M0PXkNCkNPTkZJR19USUZNX0NPUkU9eQ0KIyBDT05GSUdfVElGTV83WFgxIGlzIG5vdCBzZXQN
CkNPTkZJR19JQ1M5MzJTNDAxPXkNCkNPTkZJR19BVE1FTF9TU0M9eQ0KIyBDT05GSUdfRU5DTE9T
VVJFX1NFUlZJQ0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hQX0lMTyBpcyBub3Qgc2V0DQpDT05G
SUdfQVBEUzk4MDJBTFM9eQ0KQ09ORklHX0lTTDI5MDAzPXkNCiMgQ09ORklHX0lTTDI5MDIwIGlz
IG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX1RTTDI1NTA9eQ0KQ09ORklHX1NFTlNPUlNfQkgxNzgw
PXkNCiMgQ09ORklHX1NFTlNPUlNfQkgxNzcwIGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX0FQ
RFM5OTBYPXkNCkNPTkZJR19ITUM2MzUyPXkNCkNPTkZJR19EUzE2ODI9eQ0KQ09ORklHX1ZNV0FS
RV9CQUxMT09OPXkNCiMgQ09ORklHX0JNUDA4NV9JMkMgaXMgbm90IHNldA0KIyBDT05GSUdfUENI
X1BIVUIgaXMgbm90IHNldA0KIyBDT05GSUdfVVNCX1NXSVRDSF9GU0E5NDgwIGlzIG5vdCBzZXQN
CkNPTkZJR19TUkFNPXkNCkNPTkZJR19DMlBPUlQ9eQ0KQ09ORklHX0MyUE9SVF9EVVJBTUFSXzIx
NTA9eQ0KDQojDQojIEVFUFJPTSBzdXBwb3J0DQojDQpDT05GSUdfRUVQUk9NX0FUMjQ9eQ0KIyBD
T05GSUdfRUVQUk9NX0xFR0FDWSBpcyBub3Qgc2V0DQojIENPTkZJR19FRVBST01fTUFYNjg3NSBp
cyBub3Qgc2V0DQojIENPTkZJR19FRVBST01fOTNDWDYgaXMgbm90IHNldA0KIyBDT05GSUdfQ0I3
MTBfQ09SRSBpcyBub3Qgc2V0DQoNCiMNCiMgVGV4YXMgSW5zdHJ1bWVudHMgc2hhcmVkIHRyYW5z
cG9ydCBsaW5lIGRpc2NpcGxpbmUNCiMNCkNPTkZJR19USV9TVD15DQpDT05GSUdfU0VOU09SU19M
SVMzX0kyQz15DQoNCiMNCiMgQWx0ZXJhIEZQR0EgZmlybXdhcmUgZG93bmxvYWQgbW9kdWxlDQoj
DQpDT05GSUdfQUxURVJBX1NUQVBMPXkNCkNPTkZJR19JTlRFTF9NRUk9eQ0KQ09ORklHX0lOVEVM
X01FSV9NRT15DQpDT05GSUdfVk1XQVJFX1ZNQ0k9eQ0KDQojDQojIEludGVsIE1JQyBIb3N0IERy
aXZlcg0KIw0KQ09ORklHX0lOVEVMX01JQ19IT1NUPXkNCg0KIw0KIyBJbnRlbCBNSUMgQ2FyZCBE
cml2ZXINCiMNCkNPTkZJR19JTlRFTF9NSUNfQ0FSRD15DQpDT05GSUdfR0VOV1FFPXkNCkNPTkZJ
R19IQVZFX0lERT15DQoNCiMNCiMgU0NTSSBkZXZpY2Ugc3VwcG9ydA0KIw0KQ09ORklHX1NDU0lf
TU9EPXkNCiMgQ09ORklHX1NDU0lfRE1BIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NDU0lfTkVUTElO
SyBpcyBub3Qgc2V0DQojIENPTkZJR19GVVNJT04gaXMgbm90IHNldA0KDQojDQojIElFRUUgMTM5
NCAoRmlyZVdpcmUpIHN1cHBvcnQNCiMNCiMgQ09ORklHX0ZJUkVXSVJFIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0ZJUkVXSVJFX05PU1kgaXMgbm90IHNldA0KQ09ORklHX0kyTz15DQojIENPTkZJR19J
Mk9fTENUX05PVElGWV9PTl9DSEFOR0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0kyT19FWFRfQURB
UFRFQyBpcyBub3Qgc2V0DQojIENPTkZJR19JMk9fQ09ORklHIGlzIG5vdCBzZXQNCkNPTkZJR19J
Mk9fQlVTPXkNCiMgQ09ORklHX0kyT19QUk9DIGlzIG5vdCBzZXQNCkNPTkZJR19NQUNJTlRPU0hf
RFJJVkVSUz15DQojIENPTkZJR19ORVRERVZJQ0VTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZIT1NU
X05FVCBpcyBub3Qgc2V0DQpDT05GSUdfVkhPU1RfUklORz15DQoNCiMNCiMgSW5wdXQgZGV2aWNl
IHN1cHBvcnQNCiMNCkNPTkZJR19JTlBVVD15DQpDT05GSUdfSU5QVVRfRkZfTUVNTEVTUz15DQpD
T05GSUdfSU5QVVRfUE9MTERFVj15DQpDT05GSUdfSU5QVVRfU1BBUlNFS01BUD15DQpDT05GSUdf
SU5QVVRfTUFUUklYS01BUD15DQoNCiMNCiMgVXNlcmxhbmQgaW50ZXJmYWNlcw0KIw0KQ09ORklH
X0lOUFVUX01PVVNFREVWPXkNCkNPTkZJR19JTlBVVF9NT1VTRURFVl9QU0FVWD15DQpDT05GSUdf
SU5QVVRfTU9VU0VERVZfU0NSRUVOX1g9MTAyNA0KQ09ORklHX0lOUFVUX01PVVNFREVWX1NDUkVF
Tl9ZPTc2OA0KIyBDT05GSUdfSU5QVVRfSk9ZREVWIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9F
VkRFVj15DQpDT05GSUdfSU5QVVRfRVZCVUc9eQ0KDQojDQojIElucHV0IERldmljZSBEcml2ZXJz
DQojDQpDT05GSUdfSU5QVVRfS0VZQk9BUkQ9eQ0KIyBDT05GSUdfS0VZQk9BUkRfQURQNTU4OCBp
cyBub3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9BRFA1NTg5IGlzIG5vdCBzZXQNCkNPTkZJR19L
RVlCT0FSRF9BVEtCRD15DQpDT05GSUdfS0VZQk9BUkRfUVQxMDcwPXkNCkNPTkZJR19LRVlCT0FS
RF9RVDIxNjA9eQ0KIyBDT05GSUdfS0VZQk9BUkRfTEtLQkQgaXMgbm90IHNldA0KQ09ORklHX0tF
WUJPQVJEX0dQSU89eQ0KQ09ORklHX0tFWUJPQVJEX0dQSU9fUE9MTEVEPXkNCiMgQ09ORklHX0tF
WUJPQVJEX1RDQTY0MTYgaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfVENBODQxOCBpcyBu
b3Qgc2V0DQojIENPTkZJR19LRVlCT0FSRF9NQVRSSVggaXMgbm90IHNldA0KQ09ORklHX0tFWUJP
QVJEX0xNODMyMz15DQojIENPTkZJR19LRVlCT0FSRF9MTTgzMzMgaXMgbm90IHNldA0KQ09ORklH
X0tFWUJPQVJEX01BWDczNTk9eQ0KQ09ORklHX0tFWUJPQVJEX01DUz15DQojIENPTkZJR19LRVlC
T0FSRF9NUFIxMjEgaXMgbm90IHNldA0KIyBDT05GSUdfS0VZQk9BUkRfTkVXVE9OIGlzIG5vdCBz
ZXQNCkNPTkZJR19LRVlCT0FSRF9PUEVOQ09SRVM9eQ0KQ09ORklHX0tFWUJPQVJEX1NUT1dBV0FZ
PXkNCiMgQ09ORklHX0tFWUJPQVJEX1NVTktCRCBpcyBub3Qgc2V0DQpDT05GSUdfS0VZQk9BUkRf
U0hfS0VZU0M9eQ0KQ09ORklHX0tFWUJPQVJEX1RDMzU4OVg9eQ0KQ09ORklHX0tFWUJPQVJEX1RX
TDQwMzA9eQ0KIyBDT05GSUdfS0VZQk9BUkRfWFRLQkQgaXMgbm90IHNldA0KQ09ORklHX0lOUFVU
X0xFRFM9eQ0KIyBDT05GSUdfSU5QVVRfTU9VU0UgaXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRf
Sk9ZU1RJQ0sgaXMgbm90IHNldA0KIyBDT05GSUdfSU5QVVRfVEFCTEVUIGlzIG5vdCBzZXQNCkNP
TkZJR19JTlBVVF9UT1VDSFNDUkVFTj15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fODhQTTg2MFg9eQ0K
Q09ORklHX1RPVUNIU0NSRUVOX0FENzg3OT15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQUQ3ODc5X0ky
Qz15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQVRNRUxfTVhUPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X0FVT19QSVhDSVIgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVOX0JVMjEwMTM9eQ0KQ09O
RklHX1RPVUNIU0NSRUVOX0NZOENUTUcxMTA9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0NZVFRTUF9D
T1JFPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX0NZVFRTUF9JMkMgaXMgbm90IHNldA0KQ09ORklH
X1RPVUNIU0NSRUVOX0NZVFRTUDRfQ09SRT15DQpDT05GSUdfVE9VQ0hTQ1JFRU5fQ1lUVFNQNF9J
MkM9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0RZTkFQUk89eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0hB
TVBTSElSRT15DQojIENPTkZJR19UT1VDSFNDUkVFTl9FRVRJIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1RPVUNIU0NSRUVOX0ZVSklUU1UgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVOX0lMSTIx
MFg9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX0dVTlpFPXkNCkNPTkZJR19UT1VDSFNDUkVFTl9FTE89
eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1dBQ09NX1c4MDAxPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X1dBQ09NX0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfVE9VQ0hTQ1JFRU5fTUFYMTE4MDE9eQ0KIyBD
T05GSUdfVE9VQ0hTQ1JFRU5fTUNTNTAwMCBpcyBub3Qgc2V0DQpDT05GSUdfVE9VQ0hTQ1JFRU5f
TU1TMTE0PXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX01UT1VDSCBpcyBub3Qgc2V0DQpDT05GSUdf
VE9VQ0hTQ1JFRU5fSU5FWElPPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX01LNzEyIGlzIG5vdCBz
ZXQNCkNPTkZJR19UT1VDSFNDUkVFTl9QRU5NT1VOVD15DQojIENPTkZJR19UT1VDSFNDUkVFTl9F
RFRfRlQ1WDA2IGlzIG5vdCBzZXQNCiMgQ09ORklHX1RPVUNIU0NSRUVOX1RPVUNIUklHSFQgaXMg
bm90IHNldA0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVE9VQ0hXSU4gaXMgbm90IHNldA0KQ09ORklH
X1RPVUNIU0NSRUVOX1RJX0FNMzM1WF9UU0M9eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1BJWENJUj15
DQpDT05GSUdfVE9VQ0hTQ1JFRU5fV004MzFYPXkNCkNPTkZJR19UT1VDSFNDUkVFTl9XTTk3WFg9
eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fV005NzA1IGlzIG5vdCBzZXQNCkNPTkZJR19UT1VDSFND
UkVFTl9XTTk3MTI9eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fV005NzEzIGlzIG5vdCBzZXQNCiMg
Q09ORklHX1RPVUNIU0NSRUVOX01DMTM3ODMgaXMgbm90IHNldA0KQ09ORklHX1RPVUNIU0NSRUVO
X1RPVUNISVQyMTM9eQ0KIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVFNDX1NFUklPIGlzIG5vdCBzZXQN
CkNPTkZJR19UT1VDSFNDUkVFTl9UU0MyMDA3PXkNCkNPTkZJR19UT1VDSFNDUkVFTl9TVDEyMzI9
eQ0KQ09ORklHX1RPVUNIU0NSRUVOX1RQUzY1MDdYPXkNCiMgQ09ORklHX1RPVUNIU0NSRUVOX1pG
T1JDRSBpcyBub3Qgc2V0DQpDT05GSUdfSU5QVVRfTUlTQz15DQpDT05GSUdfSU5QVVRfODhQTTg2
MFhfT05LRVk9eQ0KIyBDT05GSUdfSU5QVVRfODhQTTgwWF9PTktFWSBpcyBub3Qgc2V0DQpDT05G
SUdfSU5QVVRfQUQ3MTRYPXkNCkNPTkZJR19JTlBVVF9BRDcxNFhfSTJDPXkNCiMgQ09ORklHX0lO
UFVUX0FSSVpPTkFfSEFQVElDUyBpcyBub3Qgc2V0DQojIENPTkZJR19JTlBVVF9CTUExNTAgaXMg
bm90IHNldA0KQ09ORklHX0lOUFVUX1BDU1BLUj15DQojIENPTkZJR19JTlBVVF9NQzEzNzgzX1BX
UkJVVFRPTiBpcyBub3Qgc2V0DQpDT05GSUdfSU5QVVRfTU1BODQ1MD15DQojIENPTkZJR19JTlBV
VF9NUFUzMDUwIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9BUEFORUw9eQ0KQ09ORklHX0lOUFVU
X0dQMkE9eQ0KQ09ORklHX0lOUFVUX0dQSU9fVElMVF9QT0xMRUQ9eQ0KQ09ORklHX0lOUFVUX0FU
TEFTX0JUTlM9eQ0KIyBDT05GSUdfSU5QVVRfS1hUSjkgaXMgbm90IHNldA0KQ09ORklHX0lOUFVU
X1JFVFVfUFdSQlVUVE9OPXkNCkNPTkZJR19JTlBVVF9UV0w0MDMwX1BXUkJVVFRPTj15DQpDT05G
SUdfSU5QVVRfVFdMNDAzMF9WSUJSQT15DQpDT05GSUdfSU5QVVRfVFdMNjA0MF9WSUJSQT15DQpD
T05GSUdfSU5QVVRfVUlOUFVUPXkNCkNPTkZJR19JTlBVVF9QQ0Y1MDYzM19QTVU9eQ0KQ09ORklH
X0lOUFVUX1BDRjg1NzQ9eQ0KIyBDT05GSUdfSU5QVVRfR1BJT19ST1RBUllfRU5DT0RFUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19JTlBVVF9EQTkwNTVfT05LRVkgaXMgbm90IHNldA0KQ09ORklHX0lO
UFVUX1dNODMxWF9PTj15DQpDT05GSUdfSU5QVVRfQURYTDM0WD15DQpDT05GSUdfSU5QVVRfQURY
TDM0WF9JMkM9eQ0KIyBDT05GSUdfSU5QVVRfQ01BMzAwMCBpcyBub3Qgc2V0DQojIENPTkZJR19J
TlBVVF9YRU5fS0JEREVWX0ZST05URU5EIGlzIG5vdCBzZXQNCkNPTkZJR19JTlBVVF9JREVBUEFE
X1NMSURFQkFSPXkNCg0KIw0KIyBIYXJkd2FyZSBJL08gcG9ydHMNCiMNCkNPTkZJR19TRVJJTz15
DQpDT05GSUdfQVJDSF9NSUdIVF9IQVZFX1BDX1NFUklPPXkNCkNPTkZJR19TRVJJT19JODA0Mj15
DQpDT05GSUdfU0VSSU9fU0VSUE9SVD15DQojIENPTkZJR19TRVJJT19DVDgyQzcxMCBpcyBub3Qg
c2V0DQpDT05GSUdfU0VSSU9fUEFSS0JEPXkNCkNPTkZJR19TRVJJT19QQ0lQUzI9eQ0KQ09ORklH
X1NFUklPX0xJQlBTMj15DQojIENPTkZJR19TRVJJT19SQVcgaXMgbm90IHNldA0KIyBDT05GSUdf
U0VSSU9fQUxURVJBX1BTMiBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSU9fUFMyTVVMVD15DQojIENP
TkZJR19TRVJJT19BUkNfUFMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0hZUEVSVl9LRVlCT0FSRCBp
cyBub3Qgc2V0DQpDT05GSUdfR0FNRVBPUlQ9eQ0KQ09ORklHX0dBTUVQT1JUX05TNTU4PXkNCkNP
TkZJR19HQU1FUE9SVF9MND15DQojIENPTkZJR19HQU1FUE9SVF9FTVUxMEsxIGlzIG5vdCBzZXQN
CkNPTkZJR19HQU1FUE9SVF9GTTgwMT15DQoNCiMNCiMgQ2hhcmFjdGVyIGRldmljZXMNCiMNCkNP
TkZJR19UVFk9eQ0KIyBDT05GSUdfVlQgaXMgbm90IHNldA0KIyBDT05GSUdfVU5JWDk4X1BUWVMg
aXMgbm90IHNldA0KQ09ORklHX0xFR0FDWV9QVFlTPXkNCkNPTkZJR19MRUdBQ1lfUFRZX0NPVU5U
PTI1Ng0KIyBDT05GSUdfU0VSSUFMX05PTlNUQU5EQVJEIGlzIG5vdCBzZXQNCkNPTkZJR19OT1pP
TUk9eQ0KQ09ORklHX05fR1NNPXkNCiMgQ09ORklHX1RSQUNFX1NJTksgaXMgbm90IHNldA0KQ09O
RklHX0RFVktNRU09eQ0KDQojDQojIFNlcmlhbCBkcml2ZXJzDQojDQpDT05GSUdfU0VSSUFMXzgy
NTA9eQ0KQ09ORklHX1NFUklBTF84MjUwX0RFUFJFQ0FURURfT1BUSU9OUz15DQpDT05GSUdfU0VS
SUFMXzgyNTBfUE5QPXkNCkNPTkZJR19TRVJJQUxfODI1MF9DT05TT0xFPXkNCkNPTkZJR19GSVhf
RUFSTFlDT05fTUVNPXkNCkNPTkZJR19TRVJJQUxfODI1MF9ETUE9eQ0KQ09ORklHX1NFUklBTF84
MjUwX1BDST15DQpDT05GSUdfU0VSSUFMXzgyNTBfTlJfVUFSVFM9NA0KQ09ORklHX1NFUklBTF84
MjUwX1JVTlRJTUVfVUFSVFM9NA0KQ09ORklHX1NFUklBTF84MjUwX0VYVEVOREVEPXkNCiMgQ09O
RklHX1NFUklBTF84MjUwX01BTllfUE9SVFMgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF84MjUw
X1NIQVJFX0lSUT15DQpDT05GSUdfU0VSSUFMXzgyNTBfREVURUNUX0lSUT15DQpDT05GSUdfU0VS
SUFMXzgyNTBfUlNBPXkNCkNPTkZJR19TRVJJQUxfODI1MF9EVz15DQoNCiMNCiMgTm9uLTgyNTAg
c2VyaWFsIHBvcnQgc3VwcG9ydA0KIw0KQ09ORklHX1NFUklBTF9LR0RCX05NST15DQojIENPTkZJ
R19TRVJJQUxfTUZEX0hTVSBpcyBub3Qgc2V0DQpDT05GSUdfU0VSSUFMX1VBUlRMSVRFPXkNCiMg
Q09ORklHX1NFUklBTF9VQVJUTElURV9DT05TT0xFIGlzIG5vdCBzZXQNCkNPTkZJR19TRVJJQUxf
Q09SRT15DQpDT05GSUdfU0VSSUFMX0NPUkVfQ09OU09MRT15DQpDT05GSUdfQ09OU09MRV9QT0xM
PXkNCkNPTkZJR19TRVJJQUxfSlNNPXkNCiMgQ09ORklHX1NFUklBTF9TQ0NOWFAgaXMgbm90IHNl
dA0KIyBDT05GSUdfU0VSSUFMX1RJTUJFUkRBTEUgaXMgbm90IHNldA0KQ09ORklHX1NFUklBTF9B
TFRFUkFfSlRBR1VBUlQ9eQ0KQ09ORklHX1NFUklBTF9BTFRFUkFfSlRBR1VBUlRfQ09OU09MRT15
DQpDT05GSUdfU0VSSUFMX0FMVEVSQV9KVEFHVUFSVF9DT05TT0xFX0JZUEFTUz15DQpDT05GSUdf
U0VSSUFMX0FMVEVSQV9VQVJUPXkNCkNPTkZJR19TRVJJQUxfQUxURVJBX1VBUlRfTUFYUE9SVFM9
NA0KQ09ORklHX1NFUklBTF9BTFRFUkFfVUFSVF9CQVVEUkFURT0xMTUyMDANCiMgQ09ORklHX1NF
UklBTF9BTFRFUkFfVUFSVF9DT05TT0xFIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFUklBTF9QQ0hf
VUFSVCBpcyBub3Qgc2V0DQojIENPTkZJR19TRVJJQUxfQVJDIGlzIG5vdCBzZXQNCkNPTkZJR19T
RVJJQUxfUlAyPXkNCkNPTkZJR19TRVJJQUxfUlAyX05SX1VBUlRTPTMyDQpDT05GSUdfU0VSSUFM
X0ZTTF9MUFVBUlQ9eQ0KQ09ORklHX1NFUklBTF9GU0xfTFBVQVJUX0NPTlNPTEU9eQ0KQ09ORklH
X1NFUklBTF9TVF9BU0M9eQ0KQ09ORklHX1NFUklBTF9TVF9BU0NfQ09OU09MRT15DQojIENPTkZJ
R19UVFlfUFJJTlRLIGlzIG5vdCBzZXQNCkNPTkZJR19QUklOVEVSPXkNCkNPTkZJR19MUF9DT05T
T0xFPXkNCkNPTkZJR19QUERFVj15DQpDT05GSUdfSFZDX0RSSVZFUj15DQpDT05GSUdfSFZDX0lS
UT15DQpDT05GSUdfSFZDX1hFTj15DQpDT05GSUdfSFZDX1hFTl9GUk9OVEVORD15DQojIENPTkZJ
R19WSVJUSU9fQ09OU09MRSBpcyBub3Qgc2V0DQojIENPTkZJR19JUE1JX0hBTkRMRVIgaXMgbm90
IHNldA0KQ09ORklHX0hXX1JBTkRPTT15DQpDT05GSUdfSFdfUkFORE9NX1RJTUVSSU9NRU09eQ0K
Q09ORklHX0hXX1JBTkRPTV9JTlRFTD15DQpDT05GSUdfSFdfUkFORE9NX0FNRD15DQpDT05GSUdf
SFdfUkFORE9NX1ZJQT15DQpDT05GSUdfSFdfUkFORE9NX1ZJUlRJTz15DQpDT05GSUdfSFdfUkFO
RE9NX1RQTT15DQpDT05GSUdfTlZSQU09eQ0KIyBDT05GSUdfUjM5NjQgaXMgbm90IHNldA0KIyBD
T05GSUdfQVBQTElDT00gaXMgbm90IHNldA0KIyBDT05GSUdfTVdBVkUgaXMgbm90IHNldA0KIyBD
T05GSUdfSFBFVCBpcyBub3Qgc2V0DQpDT05GSUdfSEFOR0NIRUNLX1RJTUVSPXkNCkNPTkZJR19U
Q0dfVFBNPXkNCkNPTkZJR19UQ0dfVElTPXkNCiMgQ09ORklHX1RDR19USVNfSTJDX0FUTUVMIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1RDR19USVNfSTJDX0lORklORU9OIGlzIG5vdCBzZXQNCkNPTkZJ
R19UQ0dfVElTX0kyQ19OVVZPVE9OPXkNCkNPTkZJR19UQ0dfTlNDPXkNCiMgQ09ORklHX1RDR19B
VE1FTCBpcyBub3Qgc2V0DQpDT05GSUdfVENHX0lORklORU9OPXkNCiMgQ09ORklHX1RDR19TVDMz
X0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfVENHX1hFTj15DQojIENPTkZJR19URUxDTE9DSyBpcyBu
b3Qgc2V0DQpDT05GSUdfREVWUE9SVD15DQpDT05GSUdfSTJDPXkNCkNPTkZJR19JMkNfQk9BUkRJ
TkZPPXkNCiMgQ09ORklHX0kyQ19DT01QQVQgaXMgbm90IHNldA0KQ09ORklHX0kyQ19DSEFSREVW
PXkNCkNPTkZJR19JMkNfTVVYPXkNCg0KIw0KIyBNdWx0aXBsZXhlciBJMkMgQ2hpcCBzdXBwb3J0
DQojDQojIENPTkZJR19JMkNfTVVYX0dQSU8gaXMgbm90IHNldA0KQ09ORklHX0kyQ19NVVhfUENB
OTU0MT15DQpDT05GSUdfSTJDX01VWF9QQ0E5NTR4PXkNCkNPTkZJR19JMkNfSEVMUEVSX0FVVE89
eQ0KQ09ORklHX0kyQ19TTUJVUz15DQpDT05GSUdfSTJDX0FMR09CSVQ9eQ0KQ09ORklHX0kyQ19B
TEdPUENBPXkNCg0KIw0KIyBJMkMgSGFyZHdhcmUgQnVzIHN1cHBvcnQNCiMNCg0KIw0KIyBQQyBT
TUJ1cyBob3N0IGNvbnRyb2xsZXIgZHJpdmVycw0KIw0KIyBDT05GSUdfSTJDX0FMSTE1MzUgaXMg
bm90IHNldA0KQ09ORklHX0kyQ19BTEkxNTYzPXkNCkNPTkZJR19JMkNfQUxJMTVYMz15DQojIENP
TkZJR19JMkNfQU1ENzU2IGlzIG5vdCBzZXQNCkNPTkZJR19JMkNfQU1EODExMT15DQojIENPTkZJ
R19JMkNfSTgwMSBpcyBub3Qgc2V0DQpDT05GSUdfSTJDX0lTQ0g9eQ0KQ09ORklHX0kyQ19JU01U
PXkNCkNPTkZJR19JMkNfUElJWDQ9eQ0KIyBDT05GSUdfSTJDX05GT1JDRTIgaXMgbm90IHNldA0K
Q09ORklHX0kyQ19TSVM1NTk1PXkNCkNPTkZJR19JMkNfU0lTNjMwPXkNCkNPTkZJR19JMkNfU0lT
OTZYPXkNCiMgQ09ORklHX0kyQ19WSUEgaXMgbm90IHNldA0KIyBDT05GSUdfSTJDX1ZJQVBSTyBp
cyBub3Qgc2V0DQoNCiMNCiMgQUNQSSBkcml2ZXJzDQojDQpDT05GSUdfSTJDX1NDTUk9eQ0KDQoj
DQojIEkyQyBzeXN0ZW0gYnVzIGRyaXZlcnMgKG1vc3RseSBlbWJlZGRlZCAvIHN5c3RlbS1vbi1j
aGlwKQ0KIw0KIyBDT05GSUdfSTJDX0NCVVNfR1BJTyBpcyBub3Qgc2V0DQpDT05GSUdfSTJDX0RF
U0lHTldBUkVfQ09SRT15DQpDT05GSUdfSTJDX0RFU0lHTldBUkVfUENJPXkNCkNPTkZJR19JMkNf
RUcyMFQ9eQ0KQ09ORklHX0kyQ19HUElPPXkNCkNPTkZJR19JMkNfS0VNUExEPXkNCkNPTkZJR19J
MkNfT0NPUkVTPXkNCkNPTkZJR19JMkNfUENBX1BMQVRGT1JNPXkNCiMgQ09ORklHX0kyQ19QWEFf
UENJIGlzIG5vdCBzZXQNCkNPTkZJR19JMkNfUklJQz15DQpDT05GSUdfSTJDX1NIX01PQklMRT15
DQpDT05GSUdfSTJDX1NJTVRFQz15DQpDT05GSUdfSTJDX1hJTElOWD15DQojIENPTkZJR19JMkNf
UkNBUiBpcyBub3Qgc2V0DQoNCiMNCiMgRXh0ZXJuYWwgSTJDL1NNQnVzIGFkYXB0ZXIgZHJpdmVy
cw0KIw0KQ09ORklHX0kyQ19QQVJQT1JUPXkNCkNPTkZJR19JMkNfUEFSUE9SVF9MSUdIVD15DQoj
IENPTkZJR19JMkNfVEFPU19FVk0gaXMgbm90IHNldA0KDQojDQojIE90aGVyIEkyQy9TTUJ1cyBi
dXMgZHJpdmVycw0KIw0KQ09ORklHX0kyQ19ERUJVR19DT1JFPXkNCkNPTkZJR19JMkNfREVCVUdf
QUxHTz15DQojIENPTkZJR19JMkNfREVCVUdfQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NQSSBp
cyBub3Qgc2V0DQpDT05GSUdfSFNJPXkNCkNPTkZJR19IU0lfQk9BUkRJTkZPPXkNCg0KIw0KIyBI
U0kgY2xpZW50cw0KIw0KIyBDT05GSUdfSFNJX0NIQVIgaXMgbm90IHNldA0KDQojDQojIFBQUyBz
dXBwb3J0DQojDQpDT05GSUdfUFBTPXkNCiMgQ09ORklHX1BQU19ERUJVRyBpcyBub3Qgc2V0DQoN
CiMNCiMgUFBTIGNsaWVudHMgc3VwcG9ydA0KIw0KQ09ORklHX1BQU19DTElFTlRfS1RJTUVSPXkN
CkNPTkZJR19QUFNfQ0xJRU5UX0xESVNDPXkNCiMgQ09ORklHX1BQU19DTElFTlRfUEFSUE9SVCBp
cyBub3Qgc2V0DQpDT05GSUdfUFBTX0NMSUVOVF9HUElPPXkNCg0KIw0KIyBQUFMgZ2VuZXJhdG9y
cyBzdXBwb3J0DQojDQoNCiMNCiMgUFRQIGNsb2NrIHN1cHBvcnQNCiMNCkNPTkZJR19QVFBfMTU4
OF9DTE9DSz15DQoNCiMNCiMgRW5hYmxlIFBIWUxJQiBhbmQgTkVUV09SS19QSFlfVElNRVNUQU1Q
SU5HIHRvIHNlZSB0aGUgYWRkaXRpb25hbCBjbG9ja3MuDQojDQojIENPTkZJR19QVFBfMTU4OF9D
TE9DS19QQ0ggaXMgbm90IHNldA0KQ09ORklHX0FSQ0hfV0FOVF9PUFRJT05BTF9HUElPTElCPXkN
CkNPTkZJR19HUElPTElCPXkNCkNPTkZJR19HUElPX0RFVlJFUz15DQpDT05GSUdfR1BJT19BQ1BJ
PXkNCkNPTkZJR19ERUJVR19HUElPPXkNCkNPTkZJR19HUElPX1NZU0ZTPXkNCkNPTkZJR19HUElP
X0dFTkVSSUM9eQ0KQ09ORklHX0dQSU9fREE5MDU1PXkNCkNPTkZJR19HUElPX01BWDczMFg9eQ0K
DQojDQojIE1lbW9yeSBtYXBwZWQgR1BJTyBkcml2ZXJzOg0KIw0KQ09ORklHX0dQSU9fR0VORVJJ
Q19QTEFURk9STT15DQojIENPTkZJR19HUElPX0lUODc2MUUgaXMgbm90IHNldA0KQ09ORklHX0dQ
SU9fRjcxODhYPXkNCiMgQ09ORklHX0dQSU9fU0NIMzExWCBpcyBub3Qgc2V0DQojIENPTkZJR19H
UElPX1RTNTUwMCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19TQ0g9eQ0KIyBDT05GSUdfR1BJT19J
Q0ggaXMgbm90IHNldA0KQ09ORklHX0dQSU9fVlg4NTU9eQ0KQ09ORklHX0dQSU9fTFlOWFBPSU5U
PXkNCg0KIw0KIyBJMkMgR1BJTyBleHBhbmRlcnM6DQojDQpDT05GSUdfR1BJT19BUklaT05BPXkN
CkNPTkZJR19HUElPX01BWDczMDA9eQ0KQ09ORklHX0dQSU9fTUFYNzMyWD15DQojIENPTkZJR19H
UElPX01BWDczMlhfSVJRIGlzIG5vdCBzZXQNCiMgQ09ORklHX0dQSU9fUENBOTUzWCBpcyBub3Qg
c2V0DQpDT05GSUdfR1BJT19QQ0Y4NTdYPXkNCiMgQ09ORklHX0dQSU9fU1gxNTBYIGlzIG5vdCBz
ZXQNCkNPTkZJR19HUElPX1RDMzU4OVg9eQ0KIyBDT05GSUdfR1BJT19UV0w0MDMwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0dQSU9fVFdMNjA0MCBpcyBub3Qgc2V0DQojIENPTkZJR19HUElPX1dNODMx
WCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19XTTgzNTA9eQ0KQ09ORklHX0dQSU9fQURQNTU4OD15
DQojIENPTkZJR19HUElPX0FEUDU1ODhfSVJRIGlzIG5vdCBzZXQNCg0KIw0KIyBQQ0kgR1BJTyBl
eHBhbmRlcnM6DQojDQojIENPTkZJR19HUElPX0JUOFhYIGlzIG5vdCBzZXQNCiMgQ09ORklHX0dQ
SU9fQU1EODExMSBpcyBub3Qgc2V0DQojIENPTkZJR19HUElPX0lOVEVMX01JRCBpcyBub3Qgc2V0
DQojIENPTkZJR19HUElPX1BDSCBpcyBub3Qgc2V0DQpDT05GSUdfR1BJT19NTF9JT0g9eQ0KQ09O
RklHX0dQSU9fVElNQkVSREFMRT15DQpDT05GSUdfR1BJT19SREMzMjFYPXkNCg0KIw0KIyBTUEkg
R1BJTyBleHBhbmRlcnM6DQojDQpDT05GSUdfR1BJT19NQ1AyM1MwOD15DQoNCiMNCiMgQUM5NyBH
UElPIGV4cGFuZGVyczoNCiMNCg0KIw0KIyBMUEMgR1BJTyBleHBhbmRlcnM6DQojDQojIENPTkZJ
R19HUElPX0tFTVBMRCBpcyBub3Qgc2V0DQoNCiMNCiMgTU9EVUxidXMgR1BJTyBleHBhbmRlcnM6
DQojDQpDT05GSUdfR1BJT19KQU5aX1RUTD15DQpDT05GSUdfR1BJT19QQUxNQVM9eQ0KIyBDT05G
SUdfR1BJT19UUFM2NTg2WCBpcyBub3Qgc2V0DQoNCiMNCiMgVVNCIEdQSU8gZXhwYW5kZXJzOg0K
Iw0KQ09ORklHX1cxPXkNCkNPTkZJR19XMV9DT049eQ0KDQojDQojIDEtd2lyZSBCdXMgTWFzdGVy
cw0KIw0KIyBDT05GSUdfVzFfTUFTVEVSX01BVFJPWCBpcyBub3Qgc2V0DQojIENPTkZJR19XMV9N
QVNURVJfRFMyNDgyIGlzIG5vdCBzZXQNCkNPTkZJR19XMV9NQVNURVJfRFMxV009eQ0KIyBDT05G
SUdfVzFfTUFTVEVSX0dQSU8gaXMgbm90IHNldA0KDQojDQojIDEtd2lyZSBTbGF2ZXMNCiMNCkNP
TkZJR19XMV9TTEFWRV9USEVSTT15DQojIENPTkZJR19XMV9TTEFWRV9TTUVNIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1cxX1NMQVZFX0RTMjQwOCBpcyBub3Qgc2V0DQojIENPTkZJR19XMV9TTEFWRV9E
UzI0MTMgaXMgbm90IHNldA0KQ09ORklHX1cxX1NMQVZFX0RTMjQyMz15DQojIENPTkZJR19XMV9T
TEFWRV9EUzI0MzEgaXMgbm90IHNldA0KQ09ORklHX1cxX1NMQVZFX0RTMjQzMz15DQojIENPTkZJ
R19XMV9TTEFWRV9EUzI0MzNfQ1JDIGlzIG5vdCBzZXQNCiMgQ09ORklHX1cxX1NMQVZFX0RTMjc2
MCBpcyBub3Qgc2V0DQpDT05GSUdfVzFfU0xBVkVfRFMyNzgwPXkNCkNPTkZJR19XMV9TTEFWRV9E
UzI3ODE9eQ0KQ09ORklHX1cxX1NMQVZFX0RTMjhFMDQ9eQ0KQ09ORklHX1cxX1NMQVZFX0JRMjcw
MDA9eQ0KQ09ORklHX1BPV0VSX1NVUFBMWT15DQojIENPTkZJR19QT1dFUl9TVVBQTFlfREVCVUcg
aXMgbm90IHNldA0KQ09ORklHX1BEQV9QT1dFUj15DQojIENPTkZJR19XTTgzMVhfQkFDS1VQIGlz
IG5vdCBzZXQNCkNPTkZJR19XTTgzMVhfUE9XRVI9eQ0KQ09ORklHX1dNODM1MF9QT1dFUj15DQpD
T05GSUdfVEVTVF9QT1dFUj15DQpDT05GSUdfQkFUVEVSWV84OFBNODYwWD15DQpDT05GSUdfQkFU
VEVSWV9EUzI3ODA9eQ0KQ09ORklHX0JBVFRFUllfRFMyNzgxPXkNCkNPTkZJR19CQVRURVJZX0RT
Mjc4Mj15DQojIENPTkZJR19CQVRURVJZX1dNOTdYWCBpcyBub3Qgc2V0DQpDT05GSUdfQkFUVEVS
WV9TQlM9eQ0KIyBDT05GSUdfQkFUVEVSWV9CUTI3eDAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0JB
VFRFUllfTUFYMTcwNDAgaXMgbm90IHNldA0KQ09ORklHX0JBVFRFUllfTUFYMTcwNDI9eQ0KIyBD
T05GSUdfQ0hBUkdFUl84OFBNODYwWCBpcyBub3Qgc2V0DQojIENPTkZJR19DSEFSR0VSX1BDRjUw
NjMzIGlzIG5vdCBzZXQNCkNPTkZJR19DSEFSR0VSX01BWDg5MDM9eQ0KQ09ORklHX0NIQVJHRVJf
VFdMNDAzMD15DQpDT05GSUdfQ0hBUkdFUl9MUDg3Mjc9eQ0KQ09ORklHX0NIQVJHRVJfR1BJTz15
DQpDT05GSUdfQ0hBUkdFUl9NQU5BR0VSPXkNCkNPTkZJR19DSEFSR0VSX0JRMjQxNVg9eQ0KIyBD
T05GSUdfQ0hBUkdFUl9CUTI0MTkwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NIQVJHRVJfQlEyNDcz
NSBpcyBub3Qgc2V0DQpDT05GSUdfQ0hBUkdFUl9TTUIzNDc9eQ0KQ09ORklHX0JBVFRFUllfR09M
REZJU0g9eQ0KIyBDT05GSUdfUE9XRVJfUkVTRVQgaXMgbm90IHNldA0KQ09ORklHX1BPV0VSX0FW
Uz15DQpDT05GSUdfSFdNT049eQ0KQ09ORklHX0hXTU9OX1ZJRD15DQpDT05GSUdfSFdNT05fREVC
VUdfQ0hJUD15DQoNCiMNCiMgTmF0aXZlIGRyaXZlcnMNCiMNCkNPTkZJR19TRU5TT1JTX0FCSVRV
R1VSVT15DQpDT05GSUdfU0VOU09SU19BQklUVUdVUlUzPXkNCkNPTkZJR19TRU5TT1JTX0FENzQx
ND15DQpDT05GSUdfU0VOU09SU19BRDc0MTg9eQ0KQ09ORklHX1NFTlNPUlNfQURNMTAyMT15DQpD
T05GSUdfU0VOU09SU19BRE0xMDI1PXkNCkNPTkZJR19TRU5TT1JTX0FETTEwMjY9eQ0KQ09ORklH
X1NFTlNPUlNfQURNMTAyOT15DQpDT05GSUdfU0VOU09SU19BRE0xMDMxPXkNCiMgQ09ORklHX1NF
TlNPUlNfQURNOTI0MCBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19BRFQ3WDEwPXkNCkNPTkZJ
R19TRU5TT1JTX0FEVDc0MTA9eQ0KQ09ORklHX1NFTlNPUlNfQURUNzQxMT15DQpDT05GSUdfU0VO
U09SU19BRFQ3NDYyPXkNCkNPTkZJR19TRU5TT1JTX0FEVDc0NzA9eQ0KIyBDT05GSUdfU0VOU09S
U19BRFQ3NDc1IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfQVNDNzYyMSBpcyBub3Qgc2V0
DQpDT05GSUdfU0VOU09SU19LOFRFTVA9eQ0KQ09ORklHX1NFTlNPUlNfSzEwVEVNUD15DQpDT05G
SUdfU0VOU09SU19GQU0xNUhfUE9XRVI9eQ0KQ09ORklHX1NFTlNPUlNfQVNCMTAwPXkNCkNPTkZJ
R19TRU5TT1JTX0FUWFAxPXkNCkNPTkZJR19TRU5TT1JTX0RTNjIwPXkNCkNPTkZJR19TRU5TT1JT
X0RTMTYyMT15DQpDT05GSUdfU0VOU09SU19EQTkwNTU9eQ0KQ09ORklHX1NFTlNPUlNfSTVLX0FN
Qj15DQpDT05GSUdfU0VOU09SU19GNzE4MDVGPXkNCkNPTkZJR19TRU5TT1JTX0Y3MTg4MkZHPXkN
CkNPTkZJR19TRU5TT1JTX0Y3NTM3NVM9eQ0KIyBDT05GSUdfU0VOU09SU19GU0NITUQgaXMgbm90
IHNldA0KQ09ORklHX1NFTlNPUlNfRzc2MEE9eQ0KQ09ORklHX1NFTlNPUlNfRzc2Mj15DQojIENP
TkZJR19TRU5TT1JTX0dMNTE4U00gaXMgbm90IHNldA0KQ09ORklHX1NFTlNPUlNfR0w1MjBTTT15
DQpDT05GSUdfU0VOU09SU19HUElPX0ZBTj15DQpDT05GSUdfU0VOU09SU19ISUg2MTMwPXkNCiMg
Q09ORklHX1NFTlNPUlNfSFRVMjEgaXMgbm90IHNldA0KQ09ORklHX1NFTlNPUlNfQ09SRVRFTVA9
eQ0KQ09ORklHX1NFTlNPUlNfSVQ4Nz15DQpDT05GSUdfU0VOU09SU19KQzQyPXkNCkNPTkZJR19T
RU5TT1JTX0xJTkVBR0U9eQ0KQ09ORklHX1NFTlNPUlNfTE02Mz15DQpDT05GSUdfU0VOU09SU19M
TTczPXkNCkNPTkZJR19TRU5TT1JTX0xNNzU9eQ0KIyBDT05GSUdfU0VOU09SU19MTTc3IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfTE03OCBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19M
TTgwPXkNCkNPTkZJR19TRU5TT1JTX0xNODM9eQ0KQ09ORklHX1NFTlNPUlNfTE04NT15DQpDT05G
SUdfU0VOU09SU19MTTg3PXkNCkNPTkZJR19TRU5TT1JTX0xNOTA9eQ0KQ09ORklHX1NFTlNPUlNf
TE05Mj15DQpDT05GSUdfU0VOU09SU19MTTkzPXkNCkNPTkZJR19TRU5TT1JTX0xUQzQxNTE9eQ0K
Q09ORklHX1NFTlNPUlNfTFRDNDIxNT15DQpDT05GSUdfU0VOU09SU19MVEM0MjQ1PXkNCiMgQ09O
RklHX1NFTlNPUlNfTFRDNDI2MSBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19MTTk1MjM0PXkN
CkNPTkZJR19TRU5TT1JTX0xNOTUyNDE9eQ0KQ09ORklHX1NFTlNPUlNfTE05NTI0NT15DQpDT05G
SUdfU0VOU09SU19NQVgxNjA2NT15DQpDT05GSUdfU0VOU09SU19NQVgxNjE5PXkNCiMgQ09ORklH
X1NFTlNPUlNfTUFYMTY2OCBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JTX01BWDE5NyBpcyBu
b3Qgc2V0DQpDT05GSUdfU0VOU09SU19NQVg2NjM5PXkNCkNPTkZJR19TRU5TT1JTX01BWDY2NDI9
eQ0KQ09ORklHX1NFTlNPUlNfTUFYNjY1MD15DQojIENPTkZJR19TRU5TT1JTX01BWDY2OTcgaXMg
bm90IHNldA0KQ09ORklHX1NFTlNPUlNfTUNQMzAyMT15DQpDT05GSUdfU0VOU09SU19OQ1Q2Nzc1
PXkNCkNPTkZJR19TRU5TT1JTX05UQ19USEVSTUlTVE9SPXkNCkNPTkZJR19TRU5TT1JTX1BDODcz
NjA9eQ0KQ09ORklHX1NFTlNPUlNfUEM4NzQyNz15DQpDT05GSUdfU0VOU09SU19QQ0Y4NTkxPXkN
CiMgQ09ORklHX1BNQlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfU0hUMTUgaXMgbm90
IHNldA0KIyBDT05GSUdfU0VOU09SU19TSFQyMSBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5TT1JT
X1NJUzU1OTUgaXMgbm90IHNldA0KIyBDT05GSUdfU0VOU09SU19TTU02NjUgaXMgbm90IHNldA0K
IyBDT05GSUdfU0VOU09SU19ETUUxNzM3IGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX0VNQzE0
MDM9eQ0KQ09ORklHX1NFTlNPUlNfRU1DMjEwMz15DQpDT05GSUdfU0VOU09SU19FTUM2VzIwMT15
DQpDT05GSUdfU0VOU09SU19TTVNDNDdNMT15DQpDT05GSUdfU0VOU09SU19TTVNDNDdNMTkyPXkN
CkNPTkZJR19TRU5TT1JTX1NNU0M0N0IzOTc9eQ0KQ09ORklHX1NFTlNPUlNfU0NINTZYWF9DT01N
T049eQ0KQ09ORklHX1NFTlNPUlNfU0NINTYyNz15DQpDT05GSUdfU0VOU09SU19TQ0g1NjM2PXkN
CkNPTkZJR19TRU5TT1JTX0FEUzEwMTU9eQ0KIyBDT05GSUdfU0VOU09SU19BRFM3ODI4IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfQU1DNjgyMSBpcyBub3Qgc2V0DQojIENPTkZJR19TRU5T
T1JTX0lOQTIwOSBpcyBub3Qgc2V0DQpDT05GSUdfU0VOU09SU19JTkEyWFg9eQ0KQ09ORklHX1NF
TlNPUlNfVEhNQzUwPXkNCiMgQ09ORklHX1NFTlNPUlNfVE1QMTAyIGlzIG5vdCBzZXQNCkNPTkZJ
R19TRU5TT1JTX1RNUDQwMT15DQojIENPTkZJR19TRU5TT1JTX1RNUDQyMSBpcyBub3Qgc2V0DQoj
IENPTkZJR19TRU5TT1JTX1ZJQV9DUFVURU1QIGlzIG5vdCBzZXQNCkNPTkZJR19TRU5TT1JTX1ZJ
QTY4NkE9eQ0KQ09ORklHX1NFTlNPUlNfVlQxMjExPXkNCiMgQ09ORklHX1NFTlNPUlNfVlQ4MjMx
IGlzIG5vdCBzZXQNCiMgQ09ORklHX1NFTlNPUlNfVzgzNzgxRCBpcyBub3Qgc2V0DQpDT05GSUdf
U0VOU09SU19XODM3OTFEPXkNCkNPTkZJR19TRU5TT1JTX1c4Mzc5MkQ9eQ0KQ09ORklHX1NFTlNP
UlNfVzgzNzkzPXkNCiMgQ09ORklHX1NFTlNPUlNfVzgzNzk1IGlzIG5vdCBzZXQNCkNPTkZJR19T
RU5TT1JTX1c4M0w3ODVUUz15DQpDT05GSUdfU0VOU09SU19XODNMNzg2Tkc9eQ0KQ09ORklHX1NF
TlNPUlNfVzgzNjI3SEY9eQ0KQ09ORklHX1NFTlNPUlNfVzgzNjI3RUhGPXkNCkNPTkZJR19TRU5T
T1JTX1dNODMxWD15DQpDT05GSUdfU0VOU09SU19XTTgzNTA9eQ0KQ09ORklHX1NFTlNPUlNfQVBQ
TEVTTUM9eQ0KQ09ORklHX1NFTlNPUlNfTUMxMzc4M19BREM9eQ0KDQojDQojIEFDUEkgZHJpdmVy
cw0KIw0KQ09ORklHX1NFTlNPUlNfQUNQSV9QT1dFUj15DQpDT05GSUdfU0VOU09SU19BVEswMTEw
PXkNCkNPTkZJR19USEVSTUFMPXkNCiMgQ09ORklHX1RIRVJNQUxfSFdNT04gaXMgbm90IHNldA0K
Q09ORklHX1RIRVJNQUxfREVGQVVMVF9HT1ZfU1RFUF9XSVNFPXkNCiMgQ09ORklHX1RIRVJNQUxf
REVGQVVMVF9HT1ZfRkFJUl9TSEFSRSBpcyBub3Qgc2V0DQojIENPTkZJR19USEVSTUFMX0RFRkFV
TFRfR09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldA0KQ09ORklHX1RIRVJNQUxfR09WX0ZBSVJfU0hB
UkU9eQ0KQ09ORklHX1RIRVJNQUxfR09WX1NURVBfV0lTRT15DQpDT05GSUdfVEhFUk1BTF9HT1Zf
VVNFUl9TUEFDRT15DQojIENPTkZJR19USEVSTUFMX0VNVUxBVElPTiBpcyBub3Qgc2V0DQpDT05G
SUdfUkNBUl9USEVSTUFMPXkNCkNPTkZJR19JTlRFTF9QT1dFUkNMQU1QPXkNCkNPTkZJR19BQ1BJ
X0lOVDM0MDNfVEhFUk1BTD15DQoNCiMNCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBkcml2
ZXJzDQojDQpDT05GSUdfV0FUQ0hET0c9eQ0KQ09ORklHX1dBVENIRE9HX0NPUkU9eQ0KQ09ORklH
X1dBVENIRE9HX05PV0FZT1VUPXkNCg0KIw0KIyBXYXRjaGRvZyBEZXZpY2UgRHJpdmVycw0KIw0K
IyBDT05GSUdfU09GVF9XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05GSUdfREE5MDU1X1dBVENIRE9H
PXkNCiMgQ09ORklHX1dNODMxWF9XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05GSUdfV004MzUwX1dB
VENIRE9HPXkNCkNPTkZJR19UV0w0MDMwX1dBVENIRE9HPXkNCkNPTkZJR19SRVRVX1dBVENIRE9H
PXkNCiMgQ09ORklHX0FDUVVJUkVfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FEVkFOVEVDSF9X
RFQgaXMgbm90IHNldA0KQ09ORklHX0FMSU0xNTM1X1dEVD15DQpDT05GSUdfQUxJTTcxMDFfV0RU
PXkNCiMgQ09ORklHX0Y3MTgwOEVfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NQNTEwMF9UQ08g
aXMgbm90IHNldA0KIyBDT05GSUdfU0M1MjBfV0RUIGlzIG5vdCBzZXQNCkNPTkZJR19TQkNfRklU
UEMyX1dBVENIRE9HPXkNCkNPTkZJR19FVVJPVEVDSF9XRFQ9eQ0KQ09ORklHX0lCNzAwX1dEVD15
DQpDT05GSUdfSUJNQVNSPXkNCkNPTkZJR19XQUZFUl9XRFQ9eQ0KQ09ORklHX0k2MzAwRVNCX1dE
VD15DQpDT05GSUdfSUU2WFhfV0RUPXkNCkNPTkZJR19JVENPX1dEVD15DQpDT05GSUdfSVRDT19W
RU5ET1JfU1VQUE9SVD15DQpDT05GSUdfSVQ4NzEyRl9XRFQ9eQ0KQ09ORklHX0lUODdfV0RUPXkN
CiMgQ09ORklHX0hQX1dBVENIRE9HIGlzIG5vdCBzZXQNCkNPTkZJR19LRU1QTERfV0RUPXkNCkNP
TkZJR19TQzEyMDBfV0RUPXkNCkNPTkZJR19QQzg3NDEzX1dEVD15DQpDT05GSUdfTlZfVENPPXkN
CkNPTkZJR182MFhYX1dEVD15DQpDT05GSUdfU0JDODM2MF9XRFQ9eQ0KQ09ORklHX0NQVTVfV0RU
PXkNCkNPTkZJR19TTVNDX1NDSDMxMVhfV0RUPXkNCkNPTkZJR19TTVNDMzdCNzg3X1dEVD15DQoj
IENPTkZJR19WSUFfV0RUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1c4MzYyN0hGX1dEVCBpcyBub3Qg
c2V0DQpDT05GSUdfVzgzNjk3SEZfV0RUPXkNCkNPTkZJR19XODM2OTdVR19XRFQ9eQ0KQ09ORklH
X1c4Mzg3N0ZfV0RUPXkNCiMgQ09ORklHX1c4Mzk3N0ZfV0RUIGlzIG5vdCBzZXQNCkNPTkZJR19N
QUNIWl9XRFQ9eQ0KIyBDT05GSUdfU0JDX0VQWF9DM19XQVRDSERPRyBpcyBub3Qgc2V0DQpDT05G
SUdfTUVOX0EyMV9XRFQ9eQ0KQ09ORklHX1hFTl9XRFQ9eQ0KDQojDQojIFBDSS1iYXNlZCBXYXRj
aGRvZyBDYXJkcw0KIw0KQ09ORklHX1BDSVBDV0FUQ0hET0c9eQ0KQ09ORklHX1dEVFBDST15DQpD
T05GSUdfU1NCX1BPU1NJQkxFPXkNCg0KIw0KIyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUNCiMN
CiMgQ09ORklHX1NTQiBpcyBub3Qgc2V0DQpDT05GSUdfQkNNQV9QT1NTSUJMRT15DQoNCiMNCiMg
QnJvYWRjb20gc3BlY2lmaWMgQU1CQQ0KIw0KQ09ORklHX0JDTUE9eQ0KQ09ORklHX0JDTUFfSE9T
VF9QQ0lfUE9TU0lCTEU9eQ0KQ09ORklHX0JDTUFfSE9TVF9QQ0k9eQ0KQ09ORklHX0JDTUFfSE9T
VF9TT0M9eQ0KQ09ORklHX0JDTUFfRFJJVkVSX0dNQUNfQ01OPXkNCkNPTkZJR19CQ01BX0RSSVZF
Ul9HUElPPXkNCkNPTkZJR19CQ01BX0RFQlVHPXkNCg0KIw0KIyBNdWx0aWZ1bmN0aW9uIGRldmlj
ZSBkcml2ZXJzDQojDQpDT05GSUdfTUZEX0NPUkU9eQ0KIyBDT05GSUdfTUZEX0NTNTUzNSBpcyBu
b3Qgc2V0DQpDT05GSUdfTUZEX0FTMzcxMT15DQojIENPTkZJR19QTUlDX0FEUDU1MjAgaXMgbm90
IHNldA0KIyBDT05GSUdfTUZEX0FBVDI4NzBfQ09SRSBpcyBub3Qgc2V0DQojIENPTkZJR19NRkRf
Q1JPU19FQyBpcyBub3Qgc2V0DQojIENPTkZJR19QTUlDX0RBOTAzWCBpcyBub3Qgc2V0DQojIENP
TkZJR19NRkRfREE5MDUyX0kyQyBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX0RBOTA1NT15DQpDT05G
SUdfTUZEX0RBOTA2Mz15DQpDT05GSUdfTUZEX01DMTNYWFg9eQ0KQ09ORklHX01GRF9NQzEzWFhY
X0kyQz15DQpDT05GSUdfSFRDX1BBU0lDMz15DQojIENPTkZJR19IVENfSTJDUExEIGlzIG5vdCBz
ZXQNCkNPTkZJR19MUENfSUNIPXkNCkNPTkZJR19MUENfU0NIPXkNCkNPTkZJR19NRkRfSkFOWl9D
TU9ESU89eQ0KQ09ORklHX01GRF9LRU1QTEQ9eQ0KQ09ORklHX01GRF84OFBNODAwPXkNCiMgQ09O
RklHX01GRF84OFBNODA1IGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfODhQTTg2MFg9eQ0KIyBDT05G
SUdfTUZEX01BWDE0NTc3IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9NQVg3NzY4NiBpcyBub3Qg
c2V0DQpDT05GSUdfTUZEX01BWDc3NjkzPXkNCiMgQ09ORklHX01GRF9NQVg4OTA3IGlzIG5vdCBz
ZXQNCiMgQ09ORklHX01GRF9NQVg4OTI1IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9NQVg4OTk3
IGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfTUFYODk5OD15DQpDT05GSUdfTUZEX1JFVFU9eQ0KQ09O
RklHX01GRF9QQ0Y1MDYzMz15DQpDT05GSUdfUENGNTA2MzNfQURDPXkNCkNPTkZJR19QQ0Y1MDYz
M19HUElPPXkNCiMgQ09ORklHX1VDQjE0MDBfQ09SRSBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX1JE
QzMyMVg9eQ0KIyBDT05GSUdfTUZEX1JUU1hfUENJIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9S
QzVUNTgzIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBub3Qgc2V0DQojIENP
TkZJR19NRkRfU0k0NzZYX0NPUkUgaXMgbm90IHNldA0KQ09ORklHX01GRF9TTTUwMT15DQojIENP
TkZJR19NRkRfU001MDFfR1BJTyBpcyBub3Qgc2V0DQpDT05GSUdfTUZEX1NNU0M9eQ0KIyBDT05G
SUdfQUJYNTAwX0NPUkUgaXMgbm90IHNldA0KIyBDT05GSUdfTUZEX1NUTVBFIGlzIG5vdCBzZXQN
CkNPTkZJR19NRkRfU1lTQ09OPXkNCkNPTkZJR19NRkRfVElfQU0zMzVYX1RTQ0FEQz15DQojIENP
TkZJR19NRkRfTFAzOTQzIGlzIG5vdCBzZXQNCkNPTkZJR19NRkRfTFA4Nzg4PXkNCkNPTkZJR19N
RkRfUEFMTUFTPXkNCkNPTkZJR19UUFM2MTA1WD15DQpDT05GSUdfVFBTNjUwMTA9eQ0KQ09ORklH
X1RQUzY1MDdYPXkNCiMgQ09ORklHX01GRF9UUFM2NTA5MCBpcyBub3Qgc2V0DQpDT05GSUdfTUZE
X1RQUzY1MjE3PXkNCkNPTkZJR19NRkRfVFBTNjU4Nlg9eQ0KIyBDT05GSUdfTUZEX1RQUzY1OTEw
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9UUFM2NTkxMiBpcyBub3Qgc2V0DQojIENPTkZJR19N
RkRfVFBTNjU5MTJfSTJDIGlzIG5vdCBzZXQNCiMgQ09ORklHX01GRF9UUFM4MDAzMSBpcyBub3Qg
c2V0DQpDT05GSUdfVFdMNDAzMF9DT1JFPXkNCiMgQ09ORklHX1RXTDQwMzBfTUFEQyBpcyBub3Qg
c2V0DQpDT05GSUdfTUZEX1RXTDQwMzBfQVVESU89eQ0KQ09ORklHX1RXTDYwNDBfQ09SRT15DQpD
T05GSUdfTUZEX1dMMTI3M19DT1JFPXkNCkNPTkZJR19NRkRfTE0zNTMzPXkNCkNPTkZJR19NRkRf
VElNQkVSREFMRT15DQpDT05GSUdfTUZEX1RDMzU4OVg9eQ0KIyBDT05GSUdfTUZEX1RNSU8gaXMg
bm90IHNldA0KQ09ORklHX01GRF9WWDg1NT15DQpDT05GSUdfTUZEX0FSSVpPTkE9eQ0KQ09ORklH
X01GRF9BUklaT05BX0kyQz15DQojIENPTkZJR19NRkRfV001MTAyIGlzIG5vdCBzZXQNCiMgQ09O
RklHX01GRF9XTTUxMTAgaXMgbm90IHNldA0KQ09ORklHX01GRF9XTTg5OTc9eQ0KQ09ORklHX01G
RF9XTTg0MDA9eQ0KQ09ORklHX01GRF9XTTgzMVg9eQ0KQ09ORklHX01GRF9XTTgzMVhfSTJDPXkN
CkNPTkZJR19NRkRfV004MzUwPXkNCkNPTkZJR19NRkRfV004MzUwX0kyQz15DQojIENPTkZJR19N
RkRfV004OTk0IGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1I9eQ0KIyBDT05GSUdfUkVHVUxB
VE9SX0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1JfRklYRURfVk9MVEFHRT15DQoj
IENPTkZJR19SRUdVTEFUT1JfVklSVFVBTF9DT05TVU1FUiBpcyBub3Qgc2V0DQpDT05GSUdfUkVH
VUxBVE9SX1VTRVJTUEFDRV9DT05TVU1FUj15DQpDT05GSUdfUkVHVUxBVE9SXzg4UE04MDA9eQ0K
IyBDT05GSUdfUkVHVUxBVE9SXzg4UE04NjA3IGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1Jf
QUNUODg2NT15DQojIENPTkZJR19SRUdVTEFUT1JfQUQ1Mzk4IGlzIG5vdCBzZXQNCkNPTkZJR19S
RUdVTEFUT1JfQU5BVE9QPXkNCiMgQ09ORklHX1JFR1VMQVRPUl9BUklaT05BIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1JFR1VMQVRPUl9BUzM3MTEgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9E
QTkwNTU9eQ0KIyBDT05GSUdfUkVHVUxBVE9SX0RBOTA2MyBpcyBub3Qgc2V0DQpDT05GSUdfUkVH
VUxBVE9SX0RBOTIxMD15DQojIENPTkZJR19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNldA0K
IyBDT05GSUdfUkVHVUxBVE9SX0dQSU8gaXMgbm90IHNldA0KIyBDT05GSUdfUkVHVUxBVE9SX0lT
TDYyNzFBIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1JfTFAzOTcxPXkNCkNPTkZJR19SRUdV
TEFUT1JfTFAzOTcyPXkNCkNPTkZJR19SRUdVTEFUT1JfTFA4NzJYPXkNCiMgQ09ORklHX1JFR1VM
QVRPUl9MUDg3NTUgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9MUDg3ODg9eQ0KIyBDT05G
SUdfUkVHVUxBVE9SX01BWDE1ODYgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9NQVg4NjQ5
PXkNCiMgQ09ORklHX1JFR1VMQVRPUl9NQVg4NjYwIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFU
T1JfTUFYODk1Mj15DQojIENPTkZJR19SRUdVTEFUT1JfTUFYODk3MyBpcyBub3Qgc2V0DQojIENP
TkZJR19SRUdVTEFUT1JfTUFYODk5OCBpcyBub3Qgc2V0DQpDT05GSUdfUkVHVUxBVE9SX01BWDc3
NjkzPXkNCkNPTkZJR19SRUdVTEFUT1JfTUMxM1hYWF9DT1JFPXkNCkNPTkZJR19SRUdVTEFUT1Jf
TUMxMzc4Mz15DQojIENPTkZJR19SRUdVTEFUT1JfTUMxMzg5MiBpcyBub3Qgc2V0DQpDT05GSUdf
UkVHVUxBVE9SX1BBTE1BUz15DQojIENPTkZJR19SRUdVTEFUT1JfUENGNTA2MzMgaXMgbm90IHNl
dA0KQ09ORklHX1JFR1VMQVRPUl9QRlVaRTEwMD15DQojIENPTkZJR19SRUdVTEFUT1JfVFBTNTE2
MzIgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9UUFM2MTA1WD15DQojIENPTkZJR19SRUdV
TEFUT1JfVFBTNjIzNjAgaXMgbm90IHNldA0KQ09ORklHX1JFR1VMQVRPUl9UUFM2NTAyMz15DQpD
T05GSUdfUkVHVUxBVE9SX1RQUzY1MDdYPXkNCkNPTkZJR19SRUdVTEFUT1JfVFBTNjUyMTc9eQ0K
IyBDT05GSUdfUkVHVUxBVE9SX1RQUzY1ODZYIGlzIG5vdCBzZXQNCkNPTkZJR19SRUdVTEFUT1Jf
VFdMNDAzMD15DQojIENPTkZJR19SRUdVTEFUT1JfV004MzFYIGlzIG5vdCBzZXQNCiMgQ09ORklH
X1JFR1VMQVRPUl9XTTgzNTAgaXMgbm90IHNldA0KIyBDT05GSUdfUkVHVUxBVE9SX1dNODQwMCBp
cyBub3Qgc2V0DQpDT05GSUdfTUVESUFfU1VQUE9SVD15DQoNCiMNCiMgTXVsdGltZWRpYSBjb3Jl
IHN1cHBvcnQNCiMNCkNPTkZJR19NRURJQV9DQU1FUkFfU1VQUE9SVD15DQojIENPTkZJR19NRURJ
QV9BTkFMT0dfVFZfU1VQUE9SVCBpcyBub3Qgc2V0DQpDT05GSUdfTUVESUFfRElHSVRBTF9UVl9T
VVBQT1JUPXkNCiMgQ09ORklHX01FRElBX1JBRElPX1NVUFBPUlQgaXMgbm90IHNldA0KQ09ORklH
X01FRElBX1JDX1NVUFBPUlQ9eQ0KIyBDT05GSUdfTUVESUFfQ09OVFJPTExFUiBpcyBub3Qgc2V0
DQpDT05GSUdfVklERU9fREVWPXkNCkNPTkZJR19WSURFT19WNEwyPXkNCiMgQ09ORklHX1ZJREVP
X0FEVl9ERUJVRyBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9fRklYRURfTUlOT1JfUkFOR0VTPXkN
CkNPTkZJR19WNEwyX01FTTJNRU1fREVWPXkNCkNPTkZJR19WSURFT0JVRjJfQ09SRT15DQpDT05G
SUdfVklERU9CVUYyX01FTU9QUz15DQpDT05GSUdfVklERU9CVUYyX0RNQV9DT05USUc9eQ0KQ09O
RklHX1ZJREVPQlVGMl9WTUFMTE9DPXkNCkNPTkZJR19WSURFT0JVRjJfRE1BX1NHPXkNCkNPTkZJ
R19EVkJfQ09SRT15DQojIENPTkZJR19EVkJfTkVUIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RUUENJ
X0VFUFJPTSBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX01BWF9BREFQVEVSUz04DQojIENPTkZJR19E
VkJfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldA0KDQojDQojIE1lZGlhIGRyaXZlcnMNCiMNCkNP
TkZJR19SQ19DT1JFPXkNCkNPTkZJR19SQ19NQVA9eQ0KQ09ORklHX1JDX0RFQ09ERVJTPXkNCkNP
TkZJR19MSVJDPXkNCkNPTkZJR19JUl9MSVJDX0NPREVDPXkNCiMgQ09ORklHX0lSX05FQ19ERUNP
REVSIGlzIG5vdCBzZXQNCkNPTkZJR19JUl9SQzVfREVDT0RFUj15DQpDT05GSUdfSVJfUkM2X0RF
Q09ERVI9eQ0KIyBDT05GSUdfSVJfSlZDX0RFQ09ERVIgaXMgbm90IHNldA0KQ09ORklHX0lSX1NP
TllfREVDT0RFUj15DQpDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9eQ0KQ09ORklHX0lSX1NBTllP
X0RFQ09ERVI9eQ0KQ09ORklHX0lSX01DRV9LQkRfREVDT0RFUj15DQpDT05GSUdfUkNfREVWSUNF
Uz15DQojIENPTkZJR19JUl9FTkUgaXMgbm90IHNldA0KQ09ORklHX0lSX0lURV9DSVI9eQ0KQ09O
RklHX0lSX0ZJTlRFSz15DQpDT05GSUdfSVJfTlVWT1RPTj15DQpDT05GSUdfSVJfV0lOQk9ORF9D
SVI9eQ0KQ09ORklHX1JDX0xPT1BCQUNLPXkNCkNPTkZJR19JUl9HUElPX0NJUj15DQojIENPTkZJ
R19NRURJQV9QQ0lfU1VQUE9SVCBpcyBub3Qgc2V0DQojIENPTkZJR19WNExfUExBVEZPUk1fRFJJ
VkVSUyBpcyBub3Qgc2V0DQpDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUz15DQpDT05GSUdfVklE
RU9fTUVNMk1FTV9ERUlOVEVSTEFDRT15DQpDT05GSUdfVklERU9fU0hfVkVVPXkNCiMgQ09ORklH
X1Y0TF9URVNUX0RSSVZFUlMgaXMgbm90IHNldA0KDQojDQojIFN1cHBvcnRlZCBNTUMvU0RJTyBh
ZGFwdGVycw0KIw0KQ09ORklHX01FRElBX1BBUlBPUlRfU1VQUE9SVD15DQpDT05GSUdfVklERU9f
QldRQ0FNPXkNCiMgQ09ORklHX1ZJREVPX0NRQ0FNIGlzIG5vdCBzZXQNCg0KIw0KIyBNZWRpYSBh
bmNpbGxhcnkgZHJpdmVycyAodHVuZXJzLCBzZW5zb3JzLCBpMmMsIGZyb250ZW5kcykNCiMNCiMg
Q09ORklHX01FRElBX1NVQkRSVl9BVVRPU0VMRUNUIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19J
Ul9JMkM9eQ0KDQojDQojIEVuY29kZXJzLCBkZWNvZGVycywgc2Vuc29ycyBhbmQgb3RoZXIgaGVs
cGVyIGNoaXBzDQojDQoNCiMNCiMgQXVkaW8gZGVjb2RlcnMsIHByb2Nlc3NvcnMgYW5kIG1peGVy
cw0KIw0KIyBDT05GSUdfVklERU9fVFZBVURJTyBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9fVERB
NzQzMj15DQojIENPTkZJR19WSURFT19UREE5ODQwIGlzIG5vdCBzZXQNCiMgQ09ORklHX1ZJREVP
X1RFQTY0MTVDIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19URUE2NDIwPXkNCiMgQ09ORklHX1ZJ
REVPX01TUDM0MDAgaXMgbm90IHNldA0KIyBDT05GSUdfVklERU9fQ1M1MzQ1IGlzIG5vdCBzZXQN
CkNPTkZJR19WSURFT19DUzUzTDMyQT15DQojIENPTkZJR19WSURFT19UTFYzMjBBSUMyM0IgaXMg
bm90IHNldA0KIyBDT05GSUdfVklERU9fVURBMTM0MiBpcyBub3Qgc2V0DQpDT05GSUdfVklERU9f
V004Nzc1PXkNCkNPTkZJR19WSURFT19XTTg3Mzk9eQ0KQ09ORklHX1ZJREVPX1ZQMjdTTVBYPXkN
CiMgQ09ORklHX1ZJREVPX1NPTllfQlRGX01QWCBpcyBub3Qgc2V0DQoNCiMNCiMgUkRTIGRlY29k
ZXJzDQojDQpDT05GSUdfVklERU9fU0FBNjU4OD15DQoNCiMNCiMgVmlkZW8gZGVjb2RlcnMNCiMN
CkNPTkZJR19WSURFT19BRFY3MTgwPXkNCiMgQ09ORklHX1ZJREVPX0FEVjcxODMgaXMgbm90IHNl
dA0KQ09ORklHX1ZJREVPX0JUODE5PXkNCkNPTkZJR19WSURFT19CVDg1Nj15DQpDT05GSUdfVklE
RU9fQlQ4NjY9eQ0KIyBDT05GSUdfVklERU9fS1MwMTI3IGlzIG5vdCBzZXQNCkNPTkZJR19WSURF
T19NTDg2Vjc2Njc9eQ0KIyBDT05GSUdfVklERU9fU0FBNzExMCBpcyBub3Qgc2V0DQpDT05GSUdf
VklERU9fU0FBNzExWD15DQpDT05GSUdfVklERU9fU0FBNzE5MT15DQpDT05GSUdfVklERU9fVFZQ
NTE0WD15DQpDT05GSUdfVklERU9fVFZQNTE1MD15DQpDT05GSUdfVklERU9fVFZQNzAwMj15DQpD
T05GSUdfVklERU9fVFcyODA0PXkNCkNPTkZJR19WSURFT19UVzk5MDM9eQ0KQ09ORklHX1ZJREVP
X1RXOTkwNj15DQpDT05GSUdfVklERU9fVlBYMzIyMD15DQoNCiMNCiMgVmlkZW8gYW5kIGF1ZGlv
IGRlY29kZXJzDQojDQpDT05GSUdfVklERU9fU0FBNzE3WD15DQojIENPTkZJR19WSURFT19DWDI1
ODQwIGlzIG5vdCBzZXQNCg0KIw0KIyBWaWRlbyBlbmNvZGVycw0KIw0KIyBDT05GSUdfVklERU9f
U0FBNzEyNyBpcyBub3Qgc2V0DQojIENPTkZJR19WSURFT19TQUE3MTg1IGlzIG5vdCBzZXQNCkNP
TkZJR19WSURFT19BRFY3MTcwPXkNCkNPTkZJR19WSURFT19BRFY3MTc1PXkNCkNPTkZJR19WSURF
T19BRFY3MzQzPXkNCiMgQ09ORklHX1ZJREVPX0FEVjczOTMgaXMgbm90IHNldA0KIyBDT05GSUdf
VklERU9fQUs4ODFYIGlzIG5vdCBzZXQNCkNPTkZJR19WSURFT19USFM4MjAwPXkNCg0KIw0KIyBD
YW1lcmEgc2Vuc29yIGRldmljZXMNCiMNCkNPTkZJR19WSURFT19PVjc2NDA9eQ0KQ09ORklHX1ZJ
REVPX09WNzY3MD15DQojIENPTkZJR19WSURFT19WUzY2MjQgaXMgbm90IHNldA0KQ09ORklHX1ZJ
REVPX01UOVYwMTE9eQ0KIyBDT05GSUdfVklERU9fU1IwMzBQQzMwIGlzIG5vdCBzZXQNCg0KIw0K
IyBGbGFzaCBkZXZpY2VzDQojDQoNCiMNCiMgVmlkZW8gaW1wcm92ZW1lbnQgY2hpcHMNCiMNCkNP
TkZJR19WSURFT19VUEQ2NDAzMUE9eQ0KQ09ORklHX1ZJREVPX1VQRDY0MDgzPXkNCg0KIw0KIyBN
aXNjZWxsYW5lb3VzIGhlbHBlciBjaGlwcw0KIw0KQ09ORklHX1ZJREVPX1RIUzczMDM9eQ0KQ09O
RklHX1ZJREVPX001Mjc5MD15DQoNCiMNCiMgU2Vuc29ycyB1c2VkIG9uIHNvY19jYW1lcmEgZHJp
dmVyDQojDQoNCiMNCiMgQ3VzdG9taXplIFRWIHR1bmVycw0KIw0KQ09ORklHX01FRElBX1RVTkVS
X1NJTVBMRT15DQojIENPTkZJR19NRURJQV9UVU5FUl9UREE4MjkwIGlzIG5vdCBzZXQNCiMgQ09O
RklHX01FRElBX1RVTkVSX1REQTgyN1ggaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1RE
QTE4MjcxPXkNCkNPTkZJR19NRURJQV9UVU5FUl9UREE5ODg3PXkNCkNPTkZJR19NRURJQV9UVU5F
Ul9URUE1NzYxPXkNCkNPTkZJR19NRURJQV9UVU5FUl9URUE1NzY3PXkNCiMgQ09ORklHX01FRElB
X1RVTkVSX01UMjBYWCBpcyBub3Qgc2V0DQojIENPTkZJR19NRURJQV9UVU5FUl9NVDIwNjAgaXMg
bm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX01UMjA2Mz15DQpDT05GSUdfTUVESUFfVFVORVJf
TVQyMjY2PXkNCiMgQ09ORklHX01FRElBX1RVTkVSX01UMjEzMSBpcyBub3Qgc2V0DQojIENPTkZJ
R19NRURJQV9UVU5FUl9RVDEwMTAgaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1hDMjAy
OD15DQpDT05GSUdfTUVESUFfVFVORVJfWEM1MDAwPXkNCkNPTkZJR19NRURJQV9UVU5FUl9YQzQw
MDA9eQ0KQ09ORklHX01FRElBX1RVTkVSX01YTDUwMDVTPXkNCiMgQ09ORklHX01FRElBX1RVTkVS
X01YTDUwMDdUIGlzIG5vdCBzZXQNCiMgQ09ORklHX01FRElBX1RVTkVSX01DNDRTODAzIGlzIG5v
dCBzZXQNCkNPTkZJR19NRURJQV9UVU5FUl9NQVgyMTY1PXkNCiMgQ09ORklHX01FRElBX1RVTkVS
X1REQTE4MjE4IGlzIG5vdCBzZXQNCiMgQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMSBpcyBub3Qg
c2V0DQpDT05GSUdfTUVESUFfVFVORVJfRkMwMDEyPXkNCkNPTkZJR19NRURJQV9UVU5FUl9GQzAw
MTM9eQ0KQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjEyPXkNCkNPTkZJR19NRURJQV9UVU5FUl9F
NDAwMD15DQojIENPTkZJR19NRURJQV9UVU5FUl9GQzI1ODAgaXMgbm90IHNldA0KQ09ORklHX01F
RElBX1RVTkVSX004OFRTMjAyMj15DQpDT05GSUdfTUVESUFfVFVORVJfVFVBOTAwMT15DQojIENP
TkZJR19NRURJQV9UVU5FUl9JVDkxM1ggaXMgbm90IHNldA0KQ09ORklHX01FRElBX1RVTkVSX1I4
MjBUPXkNCg0KIw0KIyBDdXN0b21pc2UgRFZCIEZyb250ZW5kcw0KIw0KDQojDQojIE11bHRpc3Rh
bmRhcmQgKHNhdGVsbGl0ZSkgZnJvbnRlbmRzDQojDQojIENPTkZJR19EVkJfU1RCMDg5OSBpcyBu
b3Qgc2V0DQojIENPTkZJR19EVkJfU1RCNjEwMCBpcyBub3Qgc2V0DQojIENPTkZJR19EVkJfU1RW
MDkweCBpcyBub3Qgc2V0DQojIENPTkZJR19EVkJfU1RWNjExMHggaXMgbm90IHNldA0KQ09ORklH
X0RWQl9NODhEUzMxMDM9eQ0KDQojDQojIE11bHRpc3RhbmRhcmQgKGNhYmxlICsgdGVycmVzdHJp
YWwpIGZyb250ZW5kcw0KIw0KQ09ORklHX0RWQl9EUlhLPXkNCiMgQ09ORklHX0RWQl9UREExODI3
MUMyREQgaXMgbm90IHNldA0KDQojDQojIERWQi1TIChzYXRlbGxpdGUpIGZyb250ZW5kcw0KIw0K
Q09ORklHX0RWQl9DWDI0MTEwPXkNCiMgQ09ORklHX0RWQl9DWDI0MTIzIGlzIG5vdCBzZXQNCkNP
TkZJR19EVkJfTVQzMTI9eQ0KIyBDT05GSUdfRFZCX1pMMTAwMzYgaXMgbm90IHNldA0KQ09ORklH
X0RWQl9aTDEwMDM5PXkNCkNPTkZJR19EVkJfUzVIMTQyMD15DQpDT05GSUdfRFZCX1NUVjAyODg9
eQ0KQ09ORklHX0RWQl9TVEI2MDAwPXkNCkNPTkZJR19EVkJfU1RWMDI5OT15DQpDT05GSUdfRFZC
X1NUVjYxMTA9eQ0KQ09ORklHX0RWQl9TVFYwOTAwPXkNCkNPTkZJR19EVkJfVERBODA4Mz15DQoj
IENPTkZJR19EVkJfVERBMTAwODYgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX1REQTgyNjEgaXMg
bm90IHNldA0KQ09ORklHX0RWQl9WRVMxWDkzPXkNCkNPTkZJR19EVkJfVFVORVJfSVREMTAwMD15
DQojIENPTkZJR19EVkJfVFVORVJfQ1gyNDExMyBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX1REQTgy
Nlg9eQ0KQ09ORklHX0RWQl9UVUE2MTAwPXkNCkNPTkZJR19EVkJfQ1gyNDExNj15DQpDT05GSUdf
RFZCX0NYMjQxMTc9eQ0KIyBDT05GSUdfRFZCX1NJMjFYWCBpcyBub3Qgc2V0DQojIENPTkZJR19E
VkJfVFMyMDIwIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfRFMzMDAwPXkNCkNPTkZJR19EVkJfTUI4
NkExNj15DQpDT05GSUdfRFZCX1REQTEwMDcxPXkNCg0KIw0KIyBEVkItVCAodGVycmVzdHJpYWwp
IGZyb250ZW5kcw0KIw0KQ09ORklHX0RWQl9TUDg4NzA9eQ0KIyBDT05GSUdfRFZCX1NQODg3WCBp
cyBub3Qgc2V0DQpDT05GSUdfRFZCX0NYMjI3MDA9eQ0KQ09ORklHX0RWQl9DWDIyNzAyPXkNCiMg
Q09ORklHX0RWQl9TNUgxNDMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RWQl9EUlhEIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0RWQl9MNjQ3ODEgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX1REQTEwMDRY
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0RWQl9OWFQ2MDAwIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJf
TVQzNTI9eQ0KIyBDT05GSUdfRFZCX1pMMTAzNTMgaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0RJ
QjMwMDBNQiBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX0RJQjMwMDBNQz15DQpDT05GSUdfRFZCX0RJ
QjcwMDBNPXkNCkNPTkZJR19EVkJfRElCNzAwMFA9eQ0KQ09ORklHX0RWQl9ESUI5MDAwPXkNCiMg
Q09ORklHX0RWQl9UREExMDA0OCBpcyBub3Qgc2V0DQpDT05GSUdfRFZCX0FGOTAxMz15DQpDT05G
SUdfRFZCX0VDMTAwPXkNCiMgQ09ORklHX0RWQl9IRDI5TDIgaXMgbm90IHNldA0KIyBDT05GSUdf
RFZCX1NUVjAzNjcgaXMgbm90IHNldA0KQ09ORklHX0RWQl9DWEQyODIwUj15DQpDT05GSUdfRFZC
X1JUTDI4MzA9eQ0KQ09ORklHX0RWQl9SVEwyODMyPXkNCg0KIw0KIyBEVkItQyAoY2FibGUpIGZy
b250ZW5kcw0KIw0KQ09ORklHX0RWQl9WRVMxODIwPXkNCkNPTkZJR19EVkJfVERBMTAwMjE9eQ0K
Q09ORklHX0RWQl9UREExMDAyMz15DQpDT05GSUdfRFZCX1NUVjAyOTc9eQ0KDQojDQojIEFUU0Mg
KE5vcnRoIEFtZXJpY2FuL0tvcmVhbiBUZXJyZXN0cmlhbC9DYWJsZSBEVFYpIGZyb250ZW5kcw0K
Iw0KIyBDT05GSUdfRFZCX05YVDIwMFggaXMgbm90IHNldA0KQ09ORklHX0RWQl9PUjUxMjExPXkN
CiMgQ09ORklHX0RWQl9PUjUxMTMyIGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfQkNNMzUxMD15DQoj
IENPTkZJR19EVkJfTEdEVDMzMFggaXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0xHRFQzMzA1IGlz
IG5vdCBzZXQNCiMgQ09ORklHX0RWQl9MRzIxNjAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9TNUgx
NDA5PXkNCkNPTkZJR19EVkJfQVU4NTIyPXkNCkNPTkZJR19EVkJfQVU4NTIyX0RUVj15DQpDT05G
SUdfRFZCX0FVODUyMl9WNEw9eQ0KIyBDT05GSUdfRFZCX1M1SDE0MTEgaXMgbm90IHNldA0KDQoj
DQojIElTREItVCAodGVycmVzdHJpYWwpIGZyb250ZW5kcw0KIw0KIyBDT05GSUdfRFZCX1M5MjEg
aXMgbm90IHNldA0KIyBDT05GSUdfRFZCX0RJQjgwMDAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9N
Qjg2QTIwUz15DQoNCiMNCiMgRGlnaXRhbCB0ZXJyZXN0cmlhbCBvbmx5IHR1bmVycy9QTEwNCiMN
CiMgQ09ORklHX0RWQl9QTEwgaXMgbm90IHNldA0KQ09ORklHX0RWQl9UVU5FUl9ESUIwMDcwPXkN
CiMgQ09ORklHX0RWQl9UVU5FUl9ESUIwMDkwIGlzIG5vdCBzZXQNCg0KIw0KIyBTRUMgY29udHJv
bCBkZXZpY2VzIGZvciBEVkItUw0KIw0KQ09ORklHX0RWQl9MTkJQMjE9eQ0KQ09ORklHX0RWQl9M
TkJQMjI9eQ0KIyBDT05GSUdfRFZCX0lTTDY0MDUgaXMgbm90IHNldA0KQ09ORklHX0RWQl9JU0w2
NDIxPXkNCkNPTkZJR19EVkJfSVNMNjQyMz15DQpDT05GSUdfRFZCX0E4MjkzPXkNCiMgQ09ORklH
X0RWQl9MR1M4R0w1IGlzIG5vdCBzZXQNCkNPTkZJR19EVkJfTEdTOEdYWD15DQojIENPTkZJR19E
VkJfQVRCTTg4MzAgaXMgbm90IHNldA0KQ09ORklHX0RWQl9UREE2NjV4PXkNCkNPTkZJR19EVkJf
SVgyNTA1Vj15DQpDT05GSUdfRFZCX0lUOTEzWF9GRT15DQpDT05GSUdfRFZCX004OFJTMjAwMD15
DQpDT05GSUdfRFZCX0FGOTAzMz15DQoNCiMNCiMgVG9vbHMgdG8gZGV2ZWxvcCBuZXcgZnJvbnRl
bmRzDQojDQpDT05GSUdfRFZCX0RVTU1ZX0ZFPXkNCg0KIw0KIyBHcmFwaGljcyBzdXBwb3J0DQoj
DQpDT05GSUdfQUdQPXkNCiMgQ09ORklHX0FHUF9BTUQ2NCBpcyBub3Qgc2V0DQpDT05GSUdfQUdQ
X0lOVEVMPXkNCkNPTkZJR19BR1BfU0lTPXkNCkNPTkZJR19BR1BfVklBPXkNCkNPTkZJR19JTlRF
TF9HVFQ9eQ0KQ09ORklHX1ZHQV9BUkI9eQ0KQ09ORklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYNCkNP
TkZJR19WR0FfU1dJVENIRVJPTz15DQpDT05GSUdfRFJNPXkNCkNPTkZJR19EUk1fS01TX0hFTFBF
Uj15DQpDT05GSUdfRFJNX0tNU19GQl9IRUxQRVI9eQ0KQ09ORklHX0RSTV9MT0FEX0VESURfRklS
TVdBUkU9eQ0KQ09ORklHX0RSTV9UVE09eQ0KDQojDQojIEkyQyBlbmNvZGVyIG9yIGhlbHBlciBj
aGlwcw0KIw0KQ09ORklHX0RSTV9JMkNfQ0g3MDA2PXkNCiMgQ09ORklHX0RSTV9JMkNfU0lMMTY0
IGlzIG5vdCBzZXQNCkNPTkZJR19EUk1fSTJDX05YUF9UREE5OThYPXkNCiMgQ09ORklHX0RSTV9U
REZYIGlzIG5vdCBzZXQNCkNPTkZJR19EUk1fUjEyOD15DQpDT05GSUdfRFJNX1JBREVPTj15DQpD
T05GSUdfRFJNX1JBREVPTl9VTVM9eQ0KIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNldA0K
Q09ORklHX0RSTV9JODEwPXkNCkNPTkZJR19EUk1fSTkxNT15DQojIENPTkZJR19EUk1fSTkxNV9L
TVMgaXMgbm90IHNldA0KIyBDT05GSUdfRFJNX0k5MTVfRkJERVYgaXMgbm90IHNldA0KQ09ORklH
X0RSTV9JOTE1X1BSRUxJTUlOQVJZX0hXX1NVUFBPUlQ9eQ0KIyBDT05GSUdfRFJNX0k5MTVfVU1T
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9NR0EgaXMgbm90IHNldA0KQ09ORklHX0RSTV9TSVM9
eQ0KQ09ORklHX0RSTV9WSUE9eQ0KIyBDT05GSUdfRFJNX1NBVkFHRSBpcyBub3Qgc2V0DQpDT05G
SUdfRFJNX1ZNV0dGWD15DQpDT05GSUdfRFJNX1ZNV0dGWF9GQkNPTj15DQpDT05GSUdfRFJNX0dN
QTUwMD15DQojIENPTkZJR19EUk1fR01BNjAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9HTUEz
NjAwIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RSTV9BU1QgaXMgbm90IHNldA0KIyBDT05GSUdfRFJN
X01HQUcyMDAgaXMgbm90IHNldA0KQ09ORklHX0RSTV9DSVJSVVNfUUVNVT15DQpDT05GSUdfRFJN
X1FYTD15DQpDT05GSUdfVkdBU1RBVEU9eQ0KQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MPXkN
CkNPTkZJR19IRE1JPXkNCkNPTkZJR19GQj15DQpDT05GSUdfRklSTVdBUkVfRURJRD15DQpDT05G
SUdfRkJfRERDPXkNCkNPTkZJR19GQl9CT09UX1ZFU0FfU1VQUE9SVD15DQpDT05GSUdfRkJfQ0ZC
X0ZJTExSRUNUPXkNCkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQ0KQ09ORklHX0ZCX0NGQl9JTUFH
RUJMSVQ9eQ0KIyBDT05GSUdfRkJfQ0ZCX1JFVl9QSVhFTFNfSU5fQllURSBpcyBub3Qgc2V0DQpD
T05GSUdfRkJfU1lTX0ZJTExSRUNUPXkNCkNPTkZJR19GQl9TWVNfQ09QWUFSRUE9eQ0KQ09ORklH
X0ZCX1NZU19JTUFHRUJMSVQ9eQ0KQ09ORklHX0ZCX0ZPUkVJR05fRU5ESUFOPXkNCkNPTkZJR19G
Ql9CT1RIX0VORElBTj15DQojIENPTkZJR19GQl9CSUdfRU5ESUFOIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0ZCX0xJVFRMRV9FTkRJQU4gaXMgbm90IHNldA0KQ09ORklHX0ZCX1NZU19GT1BTPXkNCkNP
TkZJR19GQl9ERUZFUlJFRF9JTz15DQpDT05GSUdfRkJfSEVDVUJBPXkNCkNPTkZJR19GQl9TVkdB
TElCPXkNCiMgQ09ORklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9CQUNLTElH
SFQ9eQ0KQ09ORklHX0ZCX01PREVfSEVMUEVSUz15DQpDT05GSUdfRkJfVElMRUJMSVRUSU5HPXkN
Cg0KIw0KIyBGcmFtZSBidWZmZXIgaGFyZHdhcmUgZHJpdmVycw0KIw0KIyBDT05GSUdfRkJfQ0lS
UlVTIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZCX1BNMiBpcyBub3Qgc2V0DQpDT05GSUdfRkJfQ1lC
RVIyMDAwPXkNCkNPTkZJR19GQl9DWUJFUjIwMDBfRERDPXkNCkNPTkZJR19GQl9BUkM9eQ0KIyBD
T05GSUdfRkJfQVNJTElBTlQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfSU1TVFQgaXMgbm90IHNl
dA0KQ09ORklHX0ZCX1ZHQTE2PXkNCiMgQ09ORklHX0ZCX1VWRVNBIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0ZCX1ZFU0EgaXMgbm90IHNldA0KQ09ORklHX0ZCX0VGST15DQpDT05GSUdfRkJfTjQxMT15
DQpDT05GSUdfRkJfSEdBPXkNCkNPTkZJR19GQl9TMUQxM1hYWD15DQojIENPTkZJR19GQl9OVklE
SUEgaXMgbm90IHNldA0KQ09ORklHX0ZCX1JJVkE9eQ0KQ09ORklHX0ZCX1JJVkFfSTJDPXkNCiMg
Q09ORklHX0ZCX1JJVkFfREVCVUcgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfUklWQV9CQUNLTElH
SFQgaXMgbm90IHNldA0KIyBDT05GSUdfRkJfSTc0MCBpcyBub3Qgc2V0DQpDT05GSUdfRkJfTEU4
MDU3OD15DQpDT05GSUdfRkJfQ0FSSUxMT19SQU5DSD15DQojIENPTkZJR19GQl9NQVRST1ggaXMg
bm90IHNldA0KIyBDT05GSUdfRkJfUkFERU9OIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9BVFkxMjg9
eQ0KQ09ORklHX0ZCX0FUWTEyOF9CQUNLTElHSFQ9eQ0KQ09ORklHX0ZCX0FUWT15DQpDT05GSUdf
RkJfQVRZX0NUPXkNCkNPTkZJR19GQl9BVFlfR0VORVJJQ19MQ0Q9eQ0KIyBDT05GSUdfRkJfQVRZ
X0dYIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9BVFlfQkFDS0xJR0hUPXkNCkNPTkZJR19GQl9TMz15
DQojIENPTkZJR19GQl9TM19EREMgaXMgbm90IHNldA0KQ09ORklHX0ZCX1NBVkFHRT15DQpDT05G
SUdfRkJfU0FWQUdFX0kyQz15DQojIENPTkZJR19GQl9TQVZBR0VfQUNDRUwgaXMgbm90IHNldA0K
Q09ORklHX0ZCX1NJUz15DQpDT05GSUdfRkJfU0lTXzMwMD15DQpDT05GSUdfRkJfU0lTXzMxNT15
DQojIENPTkZJR19GQl9WSUEgaXMgbm90IHNldA0KQ09ORklHX0ZCX05FT01BR0lDPXkNCkNPTkZJ
R19GQl9LWVJPPXkNCiMgQ09ORklHX0ZCXzNERlggaXMgbm90IHNldA0KQ09ORklHX0ZCX1ZPT0RP
TzE9eQ0KIyBDT05GSUdfRkJfVlQ4NjIzIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9UUklERU5UPXkN
CkNPTkZJR19GQl9BUks9eQ0KIyBDT05GSUdfRkJfUE0zIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9D
QVJNSU5FPXkNCkNPTkZJR19GQl9DQVJNSU5FX0RSQU1fRVZBTD15DQojIENPTkZJR19DQVJNSU5F
X0RSQU1fQ1VTVE9NIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9HRU9ERT15DQpDT05GSUdfRkJfR0VP
REVfTFg9eQ0KQ09ORklHX0ZCX0dFT0RFX0dYPXkNCkNPTkZJR19GQl9HRU9ERV9HWDE9eQ0KQ09O
RklHX0ZCX1RNSU89eQ0KIyBDT05GSUdfRkJfVE1JT19BQ0NFTEwgaXMgbm90IHNldA0KQ09ORklH
X0ZCX1NNNTAxPXkNCkNPTkZJR19GQl9HT0xERklTSD15DQpDT05GSUdfRkJfVklSVFVBTD15DQoj
IENPTkZJR19YRU5fRkJERVZfRlJPTlRFTkQgaXMgbm90IHNldA0KQ09ORklHX0ZCX01FVFJPTk9N
RT15DQpDT05GSUdfRkJfTUI4NjJYWD15DQpDT05GSUdfRkJfTUI4NjJYWF9QQ0lfR0RDPXkNCkNP
TkZJR19GQl9NQjg2MlhYX0kyQz15DQojIENPTkZJR19GQl9CUk9BRFNIRUVUIGlzIG5vdCBzZXQN
CiMgQ09ORklHX0ZCX0FVT19LMTkwWCBpcyBub3Qgc2V0DQojIENPTkZJR19GQl9IWVBFUlYgaXMg
bm90IHNldA0KQ09ORklHX0ZCX1NJTVBMRT15DQpDT05GSUdfRVhZTk9TX1ZJREVPPXkNCkNPTkZJ
R19CQUNLTElHSFRfTENEX1NVUFBPUlQ9eQ0KQ09ORklHX0xDRF9DTEFTU19ERVZJQ0U9eQ0KIyBD
T05GSUdfTENEX1BMQVRGT1JNIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQ0xBU1NfREVW
SUNFPXkNCkNPTkZJR19CQUNLTElHSFRfR0VORVJJQz15DQojIENPTkZJR19CQUNLTElHSFRfTE0z
NTMzIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQ0FSSUxMT19SQU5DSD15DQojIENPTkZJ
R19CQUNLTElHSFRfUFdNIGlzIG5vdCBzZXQNCkNPTkZJR19CQUNLTElHSFRfQVBQTEU9eQ0KQ09O
RklHX0JBQ0tMSUdIVF9TQUhBUkE9eQ0KQ09ORklHX0JBQ0tMSUdIVF9XTTgzMVg9eQ0KQ09ORklH
X0JBQ0tMSUdIVF9BRFA4ODYwPXkNCiMgQ09ORklHX0JBQ0tMSUdIVF9BRFA4ODcwIGlzIG5vdCBz
ZXQNCiMgQ09ORklHX0JBQ0tMSUdIVF84OFBNODYwWCBpcyBub3Qgc2V0DQojIENPTkZJR19CQUNL
TElHSFRfUENGNTA2MzMgaXMgbm90IHNldA0KQ09ORklHX0JBQ0tMSUdIVF9MTTM2MzBBPXkNCkNP
TkZJR19CQUNLTElHSFRfTE0zNjM5PXkNCkNPTkZJR19CQUNLTElHSFRfTFA4NTVYPXkNCkNPTkZJ
R19CQUNLTElHSFRfTFA4Nzg4PXkNCkNPTkZJR19CQUNLTElHSFRfUEFORE9SQT15DQojIENPTkZJ
R19CQUNLTElHSFRfVFBTNjUyMTcgaXMgbm90IHNldA0KQ09ORklHX0JBQ0tMSUdIVF9BUzM3MTE9
eQ0KQ09ORklHX0JBQ0tMSUdIVF9HUElPPXkNCkNPTkZJR19CQUNLTElHSFRfTFY1MjA3TFA9eQ0K
IyBDT05GSUdfQkFDS0xJR0hUX0JENjEwNyBpcyBub3Qgc2V0DQojIENPTkZJR19MT0dPIGlzIG5v
dCBzZXQNCkNPTkZJR19TT1VORD15DQpDT05GSUdfU09VTkRfT1NTX0NPUkU9eQ0KQ09ORklHX1NP
VU5EX09TU19DT1JFX1BSRUNMQUlNPXkNCkNPTkZJR19TTkQ9eQ0KQ09ORklHX1NORF9USU1FUj15
DQpDT05GSUdfU05EX1BDTT15DQpDT05GSUdfU05EX0hXREVQPXkNCkNPTkZJR19TTkRfUkFXTUlE
ST15DQpDT05GSUdfU05EX0NPTVBSRVNTX09GRkxPQUQ9eQ0KQ09ORklHX1NORF9KQUNLPXkNCiMg
Q09ORklHX1NORF9TRVFVRU5DRVIgaXMgbm90IHNldA0KQ09ORklHX1NORF9PU1NFTVVMPXkNCkNP
TkZJR19TTkRfTUlYRVJfT1NTPXkNCkNPTkZJR19TTkRfUENNX09TUz15DQojIENPTkZJR19TTkRf
UENNX09TU19QTFVHSU5TIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSFJUSU1FUj15DQpDT05GSUdf
U05EX0RZTkFNSUNfTUlOT1JTPXkNCkNPTkZJR19TTkRfTUFYX0NBUkRTPTMyDQpDT05GSUdfU05E
X1NVUFBPUlRfT0xEX0FQST15DQojIENPTkZJR19TTkRfVkVSQk9TRV9QUklOVEsgaXMgbm90IHNl
dA0KQ09ORklHX1NORF9ERUJVRz15DQpDT05GSUdfU05EX0RFQlVHX1ZFUkJPU0U9eQ0KQ09ORklH
X1NORF9WTUFTVEVSPXkNCkNPTkZJR19TTkRfRE1BX1NHQlVGPXkNCiMgQ09ORklHX1NORF9SQVdN
SURJX1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfT1BMM19MSUJfU0VRIGlzIG5vdCBzZXQN
CiMgQ09ORklHX1NORF9PUEw0X0xJQl9TRVEgaXMgbm90IHNldA0KIyBDT05GSUdfU05EX1NCQVdF
X1NFUSBpcyBub3Qgc2V0DQojIENPTkZJR19TTkRfRU1VMTBLMV9TRVEgaXMgbm90IHNldA0KQ09O
RklHX1NORF9NUFU0MDFfVUFSVD15DQpDT05GSUdfU05EX09QTDNfTElCPXkNCkNPTkZJR19TTkRf
VlhfTElCPXkNCkNPTkZJR19TTkRfQUM5N19DT0RFQz15DQpDT05GSUdfU05EX0RSSVZFUlM9eQ0K
IyBDT05GSUdfU05EX1BDU1AgaXMgbm90IHNldA0KQ09ORklHX1NORF9EVU1NWT15DQpDT05GSUdf
U05EX0FMT09QPXkNCkNPTkZJR19TTkRfTVRQQVY9eQ0KIyBDT05GSUdfU05EX01UUzY0IGlzIG5v
dCBzZXQNCkNPTkZJR19TTkRfU0VSSUFMX1UxNjU1MD15DQpDT05GSUdfU05EX01QVTQwMT15DQpD
T05GSUdfU05EX1BPUlRNQU4yWDQ9eQ0KIyBDT05GSUdfU05EX0FDOTdfUE9XRVJfU0FWRSBpcyBu
b3Qgc2V0DQpDT05GSUdfU05EX1BDST15DQpDT05GSUdfU05EX0FEMTg4OT15DQpDT05GSUdfU05E
X0FMUzMwMD15DQpDT05GSUdfU05EX0FMSTU0NTE9eQ0KIyBDT05GSUdfU05EX0FTSUhQSSBpcyBu
b3Qgc2V0DQpDT05GSUdfU05EX0FUSUlYUD15DQpDT05GSUdfU05EX0FUSUlYUF9NT0RFTT15DQoj
IENPTkZJR19TTkRfQVU4ODEwIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfQVU4ODIwPXkNCkNPTkZJ
R19TTkRfQVU4ODMwPXkNCkNPTkZJR19TTkRfQVcyPXkNCiMgQ09ORklHX1NORF9BWlQzMzI4IGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9CVDg3WCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0NBMDEw
Nj15DQojIENPTkZJR19TTkRfQ01JUENJIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfT1hZR0VOX0xJ
Qj15DQojIENPTkZJR19TTkRfT1hZR0VOIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9DUzQyODEg
aXMgbm90IHNldA0KQ09ORklHX1NORF9DUzQ2WFg9eQ0KQ09ORklHX1NORF9DUzQ2WFhfTkVXX0RT
UD15DQpDT05GSUdfU05EX0NTNTUzNUFVRElPPXkNCiMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qg
c2V0DQojIENPTkZJR19TTkRfREFSTEEyMCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0dJTkEyMD15
DQpDT05GSUdfU05EX0xBWUxBMjA9eQ0KIyBDT05GSUdfU05EX0RBUkxBMjQgaXMgbm90IHNldA0K
Q09ORklHX1NORF9HSU5BMjQ9eQ0KQ09ORklHX1NORF9MQVlMQTI0PXkNCkNPTkZJR19TTkRfTU9O
QT15DQpDT05GSUdfU05EX01JQT15DQojIENPTkZJR19TTkRfRUNITzNHIGlzIG5vdCBzZXQNCkNP
TkZJR19TTkRfSU5ESUdPPXkNCkNPTkZJR19TTkRfSU5ESUdPSU89eQ0KIyBDT05GSUdfU05EX0lO
RElHT0RKIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSU5ESUdPSU9YPXkNCiMgQ09ORklHX1NORF9J
TkRJR09ESlggaXMgbm90IHNldA0KQ09ORklHX1NORF9FTVUxMEsxPXkNCkNPTkZJR19TTkRfRU1V
MTBLMVg9eQ0KIyBDT05GSUdfU05EX0VOUzEzNzAgaXMgbm90IHNldA0KQ09ORklHX1NORF9FTlMx
MzcxPXkNCkNPTkZJR19TTkRfRVMxOTM4PXkNCkNPTkZJR19TTkRfRVMxOTY4PXkNCkNPTkZJR19T
TkRfRVMxOTY4X0lOUFVUPXkNCkNPTkZJR19TTkRfRk04MDE9eQ0KIyBDT05GSUdfU05EX0hEQV9J
TlRFTCBpcyBub3Qgc2V0DQpDT05GSUdfU05EX0hEU1A9eQ0KDQojDQojIERvbid0IGZvcmdldCB0
byBhZGQgYnVpbHQtaW4gZmlybXdhcmVzIGZvciBIRFNQIGRyaXZlcg0KIw0KIyBDT05GSUdfU05E
X0hEU1BNIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfSUNFMTcxMj15DQpDT05GSUdfU05EX0lDRTE3
MjQ9eQ0KQ09ORklHX1NORF9JTlRFTDhYMD15DQpDT05GSUdfU05EX0lOVEVMOFgwTT15DQpDT05G
SUdfU05EX0tPUkcxMjEyPXkNCkNPTkZJR19TTkRfTE9MQT15DQpDT05GSUdfU05EX0xYNjQ2NEVT
PXkNCkNPTkZJR19TTkRfTUFFU1RSTzM9eQ0KIyBDT05GSUdfU05EX01BRVNUUk8zX0lOUFVUIGlz
IG5vdCBzZXQNCkNPTkZJR19TTkRfTUlYQVJUPXkNCkNPTkZJR19TTkRfTk0yNTY9eQ0KIyBDT05G
SUdfU05EX1BDWEhSIGlzIG5vdCBzZXQNCkNPTkZJR19TTkRfUklQVElERT15DQojIENPTkZJR19T
TkRfUk1FMzIgaXMgbm90IHNldA0KQ09ORklHX1NORF9STUU5Nj15DQpDT05GSUdfU05EX1JNRTk2
NTI9eQ0KQ09ORklHX1NORF9TT05JQ1ZJQkVTPXkNCiMgQ09ORklHX1NORF9UUklERU5UIGlzIG5v
dCBzZXQNCkNPTkZJR19TTkRfVklBODJYWD15DQojIENPTkZJR19TTkRfVklBODJYWF9NT0RFTSBp
cyBub3Qgc2V0DQpDT05GSUdfU05EX1ZJUlRVT1NPPXkNCkNPTkZJR19TTkRfVlgyMjI9eQ0KQ09O
RklHX1NORF9ZTUZQQ0k9eQ0KQ09ORklHX1NORF9TT0M9eQ0KIyBDT05GSUdfU05EX1NPQ19BREkg
aXMgbm90IHNldA0KIyBDT05GSUdfU05EX0FUTUVMX1NPQyBpcyBub3Qgc2V0DQojIENPTkZJR19T
TkRfQkNNMjgzNV9TT0NfSTJTIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9FUDkzWFhfU09DIGlz
IG5vdCBzZXQNCiMgQ09ORklHX1NORF9JTVhfU09DIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NORF9L
SVJLV09PRF9TT0MgaXMgbm90IHNldA0KQ09ORklHX1NORF9TT0NfSTJDX0FORF9TUEk9eQ0KQ09O
RklHX1NORF9TT0NfQUxMX0NPREVDUz15DQpDT05GSUdfU05EX1NPQ184OFBNODYwWD15DQpDT05G
SUdfU05EX1NPQ19BUklaT05BPXkNCkNPTkZJR19TTkRfU09DX1dNX0hVQlM9eQ0KQ09ORklHX1NO
RF9TT0NfV01fQURTUD15DQpDT05GSUdfU05EX1NPQ19BRDE5M1g9eQ0KQ09ORklHX1NORF9TT0Nf
QUQ3MzMxMT15DQpDT05GSUdfU05EX1NPQ19BREFVMTcwMT15DQpDT05GSUdfU05EX1NPQ19BREFV
MTM3Mz15DQpDT05GSUdfU05EX1NPQ19BREFWODBYPXkNCkNPTkZJR19TTkRfU09DX0FEUzExN1g9
eQ0KQ09ORklHX1NORF9TT0NfQUs0NTM1PXkNCkNPTkZJR19TTkRfU09DX0FLNDY0MT15DQpDT05G
SUdfU05EX1NPQ19BSzQ2NDI9eQ0KQ09ORklHX1NORF9TT0NfQUs0NjcxPXkNCkNPTkZJR19TTkRf
U09DX0FLNTM4Nj15DQpDT05GSUdfU05EX1NPQ19BTEM1NjIzPXkNCkNPTkZJR19TTkRfU09DX0FM
QzU2MzI9eQ0KQ09ORklHX1NORF9TT0NfQ1M0Mkw1MT15DQpDT05GSUdfU05EX1NPQ19DUzQyTDUy
PXkNCkNPTkZJR19TTkRfU09DX0NTNDJMNzM9eQ0KQ09ORklHX1NORF9TT0NfQ1M0MjcwPXkNCkNP
TkZJR19TTkRfU09DX0NTNDI3MT15DQpDT05GSUdfU05EX1NPQ19DWDIwNDQyPXkNCkNPTkZJR19T
TkRfU09DX0paNDc0MF9DT0RFQz15DQpDT05GSUdfU05EX1NPQ19MMz15DQpDT05GSUdfU05EX1NP
Q19EQTcyMTA9eQ0KQ09ORklHX1NORF9TT0NfREE3MjEzPXkNCkNPTkZJR19TTkRfU09DX0RBNzMy
WD15DQpDT05GSUdfU05EX1NPQ19EQTkwNTU9eQ0KQ09ORklHX1NORF9TT0NfQlRfU0NPPXkNCkNP
TkZJR19TTkRfU09DX0lTQUJFTExFPXkNCkNPTkZJR19TTkRfU09DX0xNNDk0NTM9eQ0KQ09ORklH
X1NORF9TT0NfTUFYOTgwODg9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTgwOTA9eQ0KQ09ORklHX1NO
RF9TT0NfTUFYOTgwOTU9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTg1MD15DQpDT05GSUdfU05EX1NP
Q19IRE1JX0NPREVDPXkNCkNPTkZJR19TTkRfU09DX1BDTTE2ODE9eQ0KQ09ORklHX1NORF9TT0Nf
UENNMzAwOD15DQpDT05GSUdfU05EX1NPQ19SVDU2MzE9eQ0KQ09ORklHX1NORF9TT0NfUlQ1NjQw
PXkNCkNPTkZJR19TTkRfU09DX1NHVEw1MDAwPXkNCkNPTkZJR19TTkRfU09DX1NJR01BRFNQPXkN
CkNPTkZJR19TTkRfU09DX1NQRElGPXkNCkNPTkZJR19TTkRfU09DX1NTTTI1MTg9eQ0KQ09ORklH
X1NORF9TT0NfU1NNMjYwMj15DQpDT05GSUdfU05EX1NPQ19TVEEzMlg9eQ0KQ09ORklHX1NORF9T
T0NfU1RBNTI5PXkNCkNPTkZJR19TTkRfU09DX1RBUzUwODY9eQ0KQ09ORklHX1NORF9TT0NfVExW
MzIwQUlDMjM9eQ0KQ09ORklHX1NORF9TT0NfVExWMzIwQUlDMzJYND15DQpDT05GSUdfU05EX1NP
Q19UTFYzMjBBSUMzWD15DQpDT05GSUdfU05EX1NPQ19UTFYzMjBEQUMzMz15DQpDT05GSUdfU05E
X1NPQ19UV0w0MDMwPXkNCkNPTkZJR19TTkRfU09DX1RXTDYwNDA9eQ0KQ09ORklHX1NORF9TT0Nf
VURBMTM0WD15DQpDT05GSUdfU05EX1NPQ19VREExMzgwPXkNCkNPTkZJR19TTkRfU09DX1dMMTI3
Mz15DQpDT05GSUdfU05EX1NPQ19XTTEyNTBfRVYxPXkNCkNPTkZJR19TTkRfU09DX1dNMjAwMD15
DQpDT05GSUdfU05EX1NPQ19XTTIyMDA9eQ0KQ09ORklHX1NORF9TT0NfV001MTAwPXkNCkNPTkZJ
R19TTkRfU09DX1dNODM1MD15DQpDT05GSUdfU05EX1NPQ19XTTg0MDA9eQ0KQ09ORklHX1NORF9T
T0NfV004NTEwPXkNCkNPTkZJR19TTkRfU09DX1dNODUyMz15DQpDT05GSUdfU05EX1NPQ19XTTg1
ODA9eQ0KQ09ORklHX1NORF9TT0NfV004NzExPXkNCkNPTkZJR19TTkRfU09DX1dNODcyNz15DQpD
T05GSUdfU05EX1NPQ19XTTg3Mjg9eQ0KQ09ORklHX1NORF9TT0NfV004NzMxPXkNCkNPTkZJR19T
TkRfU09DX1dNODczNz15DQpDT05GSUdfU05EX1NPQ19XTTg3NDE9eQ0KQ09ORklHX1NORF9TT0Nf
V004NzUwPXkNCkNPTkZJR19TTkRfU09DX1dNODc1Mz15DQpDT05GSUdfU05EX1NPQ19XTTg3NzY9
eQ0KQ09ORklHX1NORF9TT0NfV004NzgyPXkNCkNPTkZJR19TTkRfU09DX1dNODgwND15DQpDT05G
SUdfU05EX1NPQ19XTTg5MDA9eQ0KQ09ORklHX1NORF9TT0NfV004OTAzPXkNCkNPTkZJR19TTkRf
U09DX1dNODkwND15DQpDT05GSUdfU05EX1NPQ19XTTg5NDA9eQ0KQ09ORklHX1NORF9TT0NfV004
OTU1PXkNCkNPTkZJR19TTkRfU09DX1dNODk2MD15DQpDT05GSUdfU05EX1NPQ19XTTg5NjE9eQ0K
Q09ORklHX1NORF9TT0NfV004OTYyPXkNCkNPTkZJR19TTkRfU09DX1dNODk3MT15DQpDT05GSUdf
U05EX1NPQ19XTTg5NzQ9eQ0KQ09ORklHX1NORF9TT0NfV004OTc4PXkNCkNPTkZJR19TTkRfU09D
X1dNODk4Mz15DQpDT05GSUdfU05EX1NPQ19XTTg5ODU9eQ0KQ09ORklHX1NORF9TT0NfV004OTg4
PXkNCkNPTkZJR19TTkRfU09DX1dNODk5MD15DQpDT05GSUdfU05EX1NPQ19XTTg5OTE9eQ0KQ09O
RklHX1NORF9TT0NfV004OTkzPXkNCkNPTkZJR19TTkRfU09DX1dNODk5NT15DQpDT05GSUdfU05E
X1NPQ19XTTg5OTY9eQ0KQ09ORklHX1NORF9TT0NfV004OTk3PXkNCkNPTkZJR19TTkRfU09DX1dN
OTA4MT15DQpDT05GSUdfU05EX1NPQ19XTTkwOTA9eQ0KQ09ORklHX1NORF9TT0NfTE00ODU3PXkN
CkNPTkZJR19TTkRfU09DX01BWDk3Njg9eQ0KQ09ORklHX1NORF9TT0NfTUFYOTg3Nz15DQpDT05G
SUdfU05EX1NPQ19NQzEzNzgzPXkNCkNPTkZJR19TTkRfU09DX01MMjYxMjQ9eQ0KQ09ORklHX1NO
RF9TT0NfVFBBNjEzMEEyPXkNCkNPTkZJR19TTkRfU0lNUExFX0NBUkQ9eQ0KQ09ORklHX1NPVU5E
X1BSSU1FPXkNCkNPTkZJR19BQzk3X0JVUz15DQoNCiMNCiMgSElEIHN1cHBvcnQNCiMNCkNPTkZJ
R19ISUQ9eQ0KIyBDT05GSUdfSElEX0JBVFRFUllfU1RSRU5HVEggaXMgbm90IHNldA0KIyBDT05G
SUdfSElEUkFXIGlzIG5vdCBzZXQNCkNPTkZJR19VSElEPXkNCiMgQ09ORklHX0hJRF9HRU5FUklD
IGlzIG5vdCBzZXQNCg0KIw0KIyBTcGVjaWFsIEhJRCBkcml2ZXJzDQojDQpDT05GSUdfSElEX0E0
VEVDSD15DQpDT05GSUdfSElEX0FDUlVYPXkNCiMgQ09ORklHX0hJRF9BQ1JVWF9GRiBpcyBub3Qg
c2V0DQojIENPTkZJR19ISURfQVBQTEUgaXMgbm90IHNldA0KQ09ORklHX0hJRF9BVVJFQUw9eQ0K
Q09ORklHX0hJRF9CRUxLSU49eQ0KQ09ORklHX0hJRF9DSEVSUlk9eQ0KQ09ORklHX0hJRF9DSElD
T05ZPXkNCiMgQ09ORklHX0hJRF9QUk9ESUtFWVMgaXMgbm90IHNldA0KQ09ORklHX0hJRF9DWVBS
RVNTPXkNCiMgQ09ORklHX0hJRF9EUkFHT05SSVNFIGlzIG5vdCBzZXQNCkNPTkZJR19ISURfRU1T
X0ZGPXkNCkNPTkZJR19ISURfRUxFQ09NPXkNCiMgQ09ORklHX0hJRF9FWktFWSBpcyBub3Qgc2V0
DQpDT05GSUdfSElEX0tFWVRPVUNIPXkNCiMgQ09ORklHX0hJRF9LWUUgaXMgbm90IHNldA0KIyBD
T05GSUdfSElEX1VDTE9HSUMgaXMgbm90IHNldA0KQ09ORklHX0hJRF9XQUxUT1A9eQ0KQ09ORklH
X0hJRF9HWVJBVElPTj15DQpDT05GSUdfSElEX0lDQURFPXkNCkNPTkZJR19ISURfVFdJTkhBTj15
DQpDT05GSUdfSElEX0tFTlNJTkdUT049eQ0KQ09ORklHX0hJRF9MQ1BPV0VSPXkNCiMgQ09ORklH
X0hJRF9MRU5PVk9fVFBLQkQgaXMgbm90IHNldA0KQ09ORklHX0hJRF9MT0dJVEVDSD15DQpDT05G
SUdfTE9HSVRFQ0hfRkY9eQ0KQ09ORklHX0xPR0lSVU1CTEVQQUQyX0ZGPXkNCkNPTkZJR19MT0dJ
Rzk0MF9GRj15DQojIENPTkZJR19MT0dJV0hFRUxTX0ZGIGlzIG5vdCBzZXQNCkNPTkZJR19ISURf
TUFHSUNNT1VTRT15DQojIENPTkZJR19ISURfTUlDUk9TT0ZUIGlzIG5vdCBzZXQNCiMgQ09ORklH
X0hJRF9NT05URVJFWSBpcyBub3Qgc2V0DQojIENPTkZJR19ISURfTVVMVElUT1VDSCBpcyBub3Qg
c2V0DQpDT05GSUdfSElEX09SVEVLPXkNCiMgQ09ORklHX0hJRF9QQU5USEVSTE9SRCBpcyBub3Qg
c2V0DQpDT05GSUdfSElEX1BFVEFMWU5YPXkNCkNPTkZJR19ISURfUElDT0xDRD15DQojIENPTkZJ
R19ISURfUElDT0xDRF9GQiBpcyBub3Qgc2V0DQpDT05GSUdfSElEX1BJQ09MQ0RfQkFDS0xJR0hU
PXkNCiMgQ09ORklHX0hJRF9QSUNPTENEX0xDRCBpcyBub3Qgc2V0DQpDT05GSUdfSElEX1BJQ09M
Q0RfTEVEUz15DQpDT05GSUdfSElEX1BJQ09MQ0RfQ0lSPXkNCiMgQ09ORklHX0hJRF9QUklNQVgg
aXMgbm90IHNldA0KQ09ORklHX0hJRF9TQUlURUs9eQ0KQ09ORklHX0hJRF9TQU1TVU5HPXkNCkNP
TkZJR19ISURfU1BFRURMSU5LPXkNCkNPTkZJR19ISURfU1RFRUxTRVJJRVM9eQ0KQ09ORklHX0hJ
RF9TVU5QTFVTPXkNCiMgQ09ORklHX0hJRF9HUkVFTkFTSUEgaXMgbm90IHNldA0KIyBDT05GSUdf
SElEX0hZUEVSVl9NT1VTRSBpcyBub3Qgc2V0DQojIENPTkZJR19ISURfU01BUlRKT1lQTFVTIGlz
IG5vdCBzZXQNCkNPTkZJR19ISURfVElWTz15DQpDT05GSUdfSElEX1RPUFNFRUQ9eQ0KQ09ORklH
X0hJRF9USElOR009eQ0KQ09ORklHX0hJRF9USFJVU1RNQVNURVI9eQ0KIyBDT05GSUdfVEhSVVNU
TUFTVEVSX0ZGIGlzIG5vdCBzZXQNCkNPTkZJR19ISURfV0FDT009eQ0KQ09ORklHX0hJRF9XSUlN
T1RFPXkNCkNPTkZJR19ISURfWElOTU89eQ0KQ09ORklHX0hJRF9aRVJPUExVUz15DQpDT05GSUdf
WkVST1BMVVNfRkY9eQ0KQ09ORklHX0hJRF9aWURBQ1JPTj15DQojIENPTkZJR19ISURfU0VOU09S
X0hVQiBpcyBub3Qgc2V0DQoNCiMNCiMgSTJDIEhJRCBzdXBwb3J0DQojDQojIENPTkZJR19JMkNf
SElEIGlzIG5vdCBzZXQNCkNPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5ESUFOPXkNCiMgQ09ORklH
X1VTQl9TVVBQT1JUIGlzIG5vdCBzZXQNCkNPTkZJR19VV0I9eQ0KQ09ORklHX1VXQl9XSENJPXkN
CiMgQ09ORklHX01NQyBpcyBub3Qgc2V0DQojIENPTkZJR19NRU1TVElDSyBpcyBub3Qgc2V0DQpD
T05GSUdfTkVXX0xFRFM9eQ0KQ09ORklHX0xFRFNfQ0xBU1M9eQ0KDQojDQojIExFRCBkcml2ZXJz
DQojDQpDT05GSUdfTEVEU184OFBNODYwWD15DQojIENPTkZJR19MRURTX0xNMzUzMCBpcyBub3Qg
c2V0DQojIENPTkZJR19MRURTX0xNMzUzMyBpcyBub3Qgc2V0DQpDT05GSUdfTEVEU19MTTM2NDI9
eQ0KIyBDT05GSUdfTEVEU19QQ0E5NTMyIGlzIG5vdCBzZXQNCkNPTkZJR19MRURTX0dQSU89eQ0K
Q09ORklHX0xFRFNfTFAzOTQ0PXkNCkNPTkZJR19MRURTX0xQNTVYWF9DT01NT049eQ0KQ09ORklH
X0xFRFNfTFA1NTIxPXkNCkNPTkZJR19MRURTX0xQNTUyMz15DQpDT05GSUdfTEVEU19MUDU1NjI9
eQ0KQ09ORklHX0xFRFNfTFA4NTAxPXkNCkNPTkZJR19MRURTX0xQODc4OD15DQpDT05GSUdfTEVE
U19DTEVWT19NQUlMPXkNCkNPTkZJR19MRURTX1BDQTk1NVg9eQ0KIyBDT05GSUdfTEVEU19QQ0E5
NjNYIGlzIG5vdCBzZXQNCkNPTkZJR19MRURTX1BDQTk2ODU9eQ0KQ09ORklHX0xFRFNfV004MzFY
X1NUQVRVUz15DQpDT05GSUdfTEVEU19XTTgzNTA9eQ0KQ09ORklHX0xFRFNfUFdNPXkNCkNPTkZJ
R19MRURTX1JFR1VMQVRPUj15DQpDT05GSUdfTEVEU19CRDI4MDI9eQ0KQ09ORklHX0xFRFNfSU5U
RUxfU1M0MjAwPXkNCkNPTkZJR19MRURTX0xUMzU5Mz15DQojIENPTkZJR19MRURTX01DMTM3ODMg
aXMgbm90IHNldA0KQ09ORklHX0xFRFNfVENBNjUwNz15DQpDT05GSUdfTEVEU19MTTM1NXg9eQ0K
IyBDT05GSUdfTEVEU19PVDIwMCBpcyBub3Qgc2V0DQpDT05GSUdfTEVEU19CTElOS009eQ0KDQoj
DQojIExFRCBUcmlnZ2Vycw0KIw0KQ09ORklHX0xFRFNfVFJJR0dFUlM9eQ0KQ09ORklHX0xFRFNf
VFJJR0dFUl9USU1FUj15DQpDT05GSUdfTEVEU19UUklHR0VSX09ORVNIT1Q9eQ0KQ09ORklHX0xF
RFNfVFJJR0dFUl9IRUFSVEJFQVQ9eQ0KIyBDT05GSUdfTEVEU19UUklHR0VSX0JBQ0tMSUdIVCBp
cyBub3Qgc2V0DQojIENPTkZJR19MRURTX1RSSUdHRVJfQ1BVIGlzIG5vdCBzZXQNCkNPTkZJR19M
RURTX1RSSUdHRVJfR1BJTz15DQpDT05GSUdfTEVEU19UUklHR0VSX0RFRkFVTFRfT049eQ0KDQoj
DQojIGlwdGFibGVzIHRyaWdnZXIgaXMgdW5kZXIgTmV0ZmlsdGVyIGNvbmZpZyAoTEVEIHRhcmdl
dCkNCiMNCiMgQ09ORklHX0xFRFNfVFJJR0dFUl9UUkFOU0lFTlQgaXMgbm90IHNldA0KQ09ORklH
X0xFRFNfVFJJR0dFUl9DQU1FUkE9eQ0KQ09ORklHX0FDQ0VTU0lCSUxJVFk9eQ0KQ09ORklHX0lO
RklOSUJBTkQ9eQ0KIyBDT05GSUdfSU5GSU5JQkFORF9VU0VSX01BRCBpcyBub3Qgc2V0DQpDT05G
SUdfSU5GSU5JQkFORF9VU0VSX0FDQ0VTUz15DQpDT05GSUdfSU5GSU5JQkFORF9VU0VSX01FTT15
DQpDT05GSUdfSU5GSU5JQkFORF9BRERSX1RSQU5TPXkNCiMgQ09ORklHX0lORklOSUJBTkRfTVRI
Q0EgaXMgbm90IHNldA0KQ09ORklHX0lORklOSUJBTkRfSVBBVEg9eQ0KQ09ORklHX0lORklOSUJB
TkRfUUlCPXkNCkNPTkZJR19JTkZJTklCQU5EX0FNU08xMTAwPXkNCkNPTkZJR19JTkZJTklCQU5E
X0FNU08xMTAwX0RFQlVHPXkNCkNPTkZJR19JTkZJTklCQU5EX05FUz15DQojIENPTkZJR19JTkZJ
TklCQU5EX05FU19ERUJVRyBpcyBub3Qgc2V0DQojIENPTkZJR19FREFDIGlzIG5vdCBzZXQNCkNP
TkZJR19SVENfTElCPXkNCkNPTkZJR19SVENfQ0xBU1M9eQ0KIyBDT05GSUdfUlRDX0hDVE9TWVMg
aXMgbm90IHNldA0KIyBDT05GSUdfUlRDX1NZU1RPSEMgaXMgbm90IHNldA0KIyBDT05GSUdfUlRD
X0RFQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBSVEMgaW50ZXJmYWNlcw0KIw0KIyBDT05GSUdfUlRD
X0lOVEZfU1lTRlMgaXMgbm90IHNldA0KQ09ORklHX1JUQ19JTlRGX0RFVj15DQojIENPTkZJR19S
VENfSU5URl9ERVZfVUlFX0VNVUwgaXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfVEVTVD15DQoN
CiMNCiMgSTJDIFJUQyBkcml2ZXJzDQojDQpDT05GSUdfUlRDX0RSVl84OFBNODYwWD15DQojIENP
TkZJR19SVENfRFJWXzg4UE04MFggaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RSVl9EUzEzMDcg
aXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfRFMxMzc0PXkNCkNPTkZJR19SVENfRFJWX0RTMTY3
Mj15DQojIENPTkZJR19SVENfRFJWX0RTMzIzMiBpcyBub3Qgc2V0DQpDT05GSUdfUlRDX0RSVl9M
UDg3ODg9eQ0KQ09ORklHX1JUQ19EUlZfTUFYNjkwMD15DQpDT05GSUdfUlRDX0RSVl9NQVg4OTk4
PXkNCkNPTkZJR19SVENfRFJWX1JTNUMzNzI9eQ0KQ09ORklHX1JUQ19EUlZfSVNMMTIwOD15DQpD
T05GSUdfUlRDX0RSVl9JU0wxMjAyMj15DQojIENPTkZJR19SVENfRFJWX0lTTDEyMDU3IGlzIG5v
dCBzZXQNCiMgQ09ORklHX1JUQ19EUlZfWDEyMDUgaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RS
Vl9QQUxNQVMgaXMgbm90IHNldA0KIyBDT05GSUdfUlRDX0RSVl9QQ0YyMTI3IGlzIG5vdCBzZXQN
CkNPTkZJR19SVENfRFJWX1BDRjg1MjM9eQ0KQ09ORklHX1JUQ19EUlZfUENGODU2Mz15DQojIENP
TkZJR19SVENfRFJWX1BDRjg1ODMgaXMgbm90IHNldA0KQ09ORklHX1JUQ19EUlZfTTQxVDgwPXkN
CiMgQ09ORklHX1JUQ19EUlZfTTQxVDgwX1dEVCBpcyBub3Qgc2V0DQojIENPTkZJR19SVENfRFJW
X0JRMzJLIGlzIG5vdCBzZXQNCkNPTkZJR19SVENfRFJWX1RXTDQwMzA9eQ0KQ09ORklHX1JUQ19E
UlZfVFBTNjU4Nlg9eQ0KQ09ORklHX1JUQ19EUlZfUzM1MzkwQT15DQojIENPTkZJR19SVENfRFJW
X0ZNMzEzMCBpcyBub3Qgc2V0DQojIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0DQpD
T05GSUdfUlRDX0RSVl9SWDgwMjU9eQ0KQ09ORklHX1JUQ19EUlZfRU0zMDI3PXkNCiMgQ09ORklH
X1JUQ19EUlZfUlYzMDI5QzIgaXMgbm90IHNldA0KDQojDQojIFNQSSBSVEMgZHJpdmVycw0KIw0K
DQojDQojIFBsYXRmb3JtIFJUQyBkcml2ZXJzDQojDQpDT05GSUdfUlRDX0RSVl9DTU9TPXkNCkNP
TkZJR19SVENfRFJWX0RTMTI4Nj15DQojIENPTkZJR19SVENfRFJWX0RTMTUxMSBpcyBub3Qgc2V0
DQojIENPTkZJR19SVENfRFJWX0RTMTU1MyBpcyBub3Qgc2V0DQpDT05GSUdfUlRDX0RSVl9EUzE3
NDI9eQ0KQ09ORklHX1JUQ19EUlZfREE5MDU1PXkNCkNPTkZJR19SVENfRFJWX1NUSzE3VEE4PXkN
CkNPTkZJR19SVENfRFJWX000OFQ4Nj15DQpDT05GSUdfUlRDX0RSVl9NNDhUMzU9eQ0KQ09ORklH
X1JUQ19EUlZfTTQ4VDU5PXkNCkNPTkZJR19SVENfRFJWX01TTTYyNDI9eQ0KQ09ORklHX1JUQ19E
UlZfQlE0ODAyPXkNCkNPTkZJR19SVENfRFJWX1JQNUMwMT15DQojIENPTkZJR19SVENfRFJWX1Yz
MDIwIGlzIG5vdCBzZXQNCkNPTkZJR19SVENfRFJWX0RTMjQwND15DQpDT05GSUdfUlRDX0RSVl9X
TTgzMVg9eQ0KQ09ORklHX1JUQ19EUlZfV004MzUwPXkNCkNPTkZJR19SVENfRFJWX1BDRjUwNjMz
PXkNCg0KIw0KIyBvbi1DUFUgUlRDIGRyaXZlcnMNCiMNCkNPTkZJR19SVENfRFJWX01DMTNYWFg9
eQ0KQ09ORklHX1JUQ19EUlZfTU9YQVJUPXkNCg0KIw0KIyBISUQgU2Vuc29yIFJUQyBkcml2ZXJz
DQojDQpDT05GSUdfRE1BREVWSUNFUz15DQojIENPTkZJR19ETUFERVZJQ0VTX0RFQlVHIGlzIG5v
dCBzZXQNCg0KIw0KIyBETUEgRGV2aWNlcw0KIw0KQ09ORklHX0lOVEVMX01JRF9ETUFDPXkNCiMg
Q09ORklHX0lOVEVMX0lPQVRETUEgaXMgbm90IHNldA0KIyBDT05GSUdfRFdfRE1BQ19DT1JFIGlz
IG5vdCBzZXQNCiMgQ09ORklHX0RXX0RNQUMgaXMgbm90IHNldA0KIyBDT05GSUdfRFdfRE1BQ19Q
Q0kgaXMgbm90IHNldA0KQ09ORklHX1RJTUJfRE1BPXkNCkNPTkZJR19QQ0hfRE1BPXkNCkNPTkZJ
R19ETUFfRU5HSU5FPXkNCkNPTkZJR19ETUFfQUNQST15DQoNCiMNCiMgRE1BIENsaWVudHMNCiMN
CiMgQ09ORklHX0FTWU5DX1RYX0RNQSBpcyBub3Qgc2V0DQpDT05GSUdfRE1BVEVTVD15DQojIENP
TkZJR19BVVhESVNQTEFZIGlzIG5vdCBzZXQNCkNPTkZJR19VSU89eQ0KQ09ORklHX1VJT19DSUY9
eQ0KQ09ORklHX1VJT19QRFJWX0dFTklSUT15DQpDT05GSUdfVUlPX0RNRU1fR0VOSVJRPXkNCkNP
TkZJR19VSU9fQUVDPXkNCkNPTkZJR19VSU9fU0VSQ09TMz15DQojIENPTkZJR19VSU9fUENJX0dF
TkVSSUMgaXMgbm90IHNldA0KIyBDT05GSUdfVUlPX05FVFggaXMgbm90IHNldA0KQ09ORklHX1VJ
T19NRjYyND15DQojIENPTkZJR19WSVJUX0RSSVZFUlMgaXMgbm90IHNldA0KQ09ORklHX1ZJUlRJ
Tz15DQoNCiMNCiMgVmlydGlvIGRyaXZlcnMNCiMNCiMgQ09ORklHX1ZJUlRJT19QQ0kgaXMgbm90
IHNldA0KQ09ORklHX1ZJUlRJT19CQUxMT09OPXkNCiMgQ09ORklHX1ZJUlRJT19NTUlPIGlzIG5v
dCBzZXQNCg0KIw0KIyBNaWNyb3NvZnQgSHlwZXItViBndWVzdCBzdXBwb3J0DQojDQpDT05GSUdf
SFlQRVJWPXkNCkNPTkZJR19IWVBFUlZfVVRJTFM9eQ0KQ09ORklHX0hZUEVSVl9CQUxMT09OPXkN
Cg0KIw0KIyBYZW4gZHJpdmVyIHN1cHBvcnQNCiMNCiMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5v
dCBzZXQNCkNPTkZJR19YRU5fREVWX0VWVENITj15DQojIENPTkZJR19YRU5fQkFDS0VORCBpcyBu
b3Qgc2V0DQpDT05GSUdfWEVORlM9eQ0KQ09ORklHX1hFTl9DT01QQVRfWEVORlM9eQ0KQ09ORklH
X1hFTl9TWVNfSFlQRVJWSVNPUj15DQpDT05GSUdfWEVOX1hFTkJVU19GUk9OVEVORD15DQpDT05G
SUdfWEVOX0dOVERFVj15DQpDT05GSUdfWEVOX0dSQU5UX0RFVl9BTExPQz15DQpDT05GSUdfU1dJ
T1RMQl9YRU49eQ0KQ09ORklHX1hFTl9UTUVNPXkNCkNPTkZJR19YRU5fUFJJVkNNRD15DQojIENP
TkZJR19YRU5fQUNQSV9QUk9DRVNTT1IgaXMgbm90IHNldA0KQ09ORklHX1hFTl9IQVZFX1BWTU1V
PXkNCkNPTkZJR19TVEFHSU5HPXkNCkNPTkZJR19TTElDT1NTPXkNCkNPTkZJR19FQ0hPPXkNCkNP
TkZJR19QQU5FTD15DQpDT05GSUdfUEFORUxfUEFSUE9SVD0wDQpDT05GSUdfUEFORUxfUFJPRklM
RT01DQpDT05GSUdfUEFORUxfQ0hBTkdFX01FU1NBR0U9eQ0KQ09ORklHX1BBTkVMX0JPT1RfTUVT
U0FHRT0iIg0KIyBDT05GSUdfRFhfU0VQIGlzIG5vdCBzZXQNCkNPTkZJR19GQl9TTTdYWD15DQpD
T05GSUdfQ1JZU1RBTEhEPXkNCkNPTkZJR19GQl9YR0k9eQ0KQ09ORklHX0FDUElfUVVJQ0tTVEFS
VD15DQpDT05GSUdfRlQxMDAwPXkNCg0KIw0KIyBTcGVha3VwIGNvbnNvbGUgc3BlZWNoDQojDQpD
T05GSUdfVE9VQ0hTQ1JFRU5fQ0xFQVJQQURfVE0xMjE3PXkNCiMgQ09ORklHX1RPVUNIU0NSRUVO
X1NZTkFQVElDU19JMkNfUk1JNCBpcyBub3Qgc2V0DQpDT05GSUdfU1RBR0lOR19NRURJQT15DQpD
T05GSUdfRFZCX0NYRDIwOTk9eQ0KIyBDT05GSUdfVklERU9fRFQzMTU1IGlzIG5vdCBzZXQNCkNP
TkZJR19WSURFT19WNEwyX0lOVF9ERVZJQ0U9eQ0KQ09ORklHX1ZJREVPX1RDTTgyNVg9eQ0KQ09O
RklHX1VTQl9TTjlDMTAyPXkNCkNPTkZJR19TT0xPNlgxMD15DQojIENPTkZJR19MSVJDX1NUQUdJ
TkcgaXMgbm90IHNldA0KDQojDQojIEFuZHJvaWQNCiMNCkNPTkZJR19BTkRST0lEPXkNCkNPTkZJ
R19BTkRST0lEX0JJTkRFUl9JUEM9eQ0KIyBDT05GSUdfQVNITUVNIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0FORFJPSURfTE9HR0VSIGlzIG5vdCBzZXQNCkNPTkZJR19BTkRST0lEX1RJTUVEX09VVFBV
VD15DQojIENPTkZJR19BTkRST0lEX1RJTUVEX0dQSU8gaXMgbm90IHNldA0KQ09ORklHX0FORFJP
SURfTE9XX01FTU9SWV9LSUxMRVI9eQ0KIyBDT05GSUdfQU5EUk9JRF9JTlRGX0FMQVJNX0RFViBp
cyBub3Qgc2V0DQojIENPTkZJR19TWU5DIGlzIG5vdCBzZXQNCkNPTkZJR19JT049eQ0KQ09ORklH
X0lPTl9URVNUPXkNCkNPTkZJR19ER1JQPXkNCkNPTkZJR19YSUxMWUJVUz15DQojIENPTkZJR19Y
SUxMWUJVU19QQ0lFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0RHTkMgaXMgbm90IHNldA0KQ09ORklH
X0RHQVA9eQ0KQ09ORklHX1g4Nl9QTEFURk9STV9ERVZJQ0VTPXkNCiMgQ09ORklHX0FDRVJIREYg
aXMgbm90IHNldA0KIyBDT05GSUdfQVNVU19MQVBUT1AgaXMgbm90IHNldA0KQ09ORklHX0RFTExf
TEFQVE9QPXkNCkNPTkZJR19GVUpJVFNVX0xBUFRPUD15DQpDT05GSUdfRlVKSVRTVV9MQVBUT1Bf
REVCVUc9eQ0KIyBDT05GSUdfRlVKSVRTVV9UQUJMRVQgaXMgbm90IHNldA0KIyBDT05GSUdfSFBf
QUNDRUwgaXMgbm90IHNldA0KQ09ORklHX1BBTkFTT05JQ19MQVBUT1A9eQ0KQ09ORklHX1RISU5L
UEFEX0FDUEk9eQ0KQ09ORklHX1RISU5LUEFEX0FDUElfQUxTQV9TVVBQT1JUPXkNCkNPTkZJR19U
SElOS1BBRF9BQ1BJX0RFQlVHRkFDSUxJVElFUz15DQojIENPTkZJR19USElOS1BBRF9BQ1BJX0RF
QlVHIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RISU5LUEFEX0FDUElfVU5TQUZFX0xFRFMgaXMgbm90
IHNldA0KQ09ORklHX1RISU5LUEFEX0FDUElfVklERU89eQ0KQ09ORklHX1RISU5LUEFEX0FDUElf
SE9US0VZX1BPTEw9eQ0KIyBDT05GSUdfU0VOU09SU19IREFQUyBpcyBub3Qgc2V0DQpDT05GSUdf
SU5URUxfTUVOTE9XPXkNCiMgQ09ORklHX0FDUElfV01JIGlzIG5vdCBzZXQNCiMgQ09ORklHX1RP
UFNUQVJfTEFQVE9QIGlzIG5vdCBzZXQNCkNPTkZJR19UT1NISUJBX0JUX1JGS0lMTD15DQojIENP
TkZJR19BQ1BJX0NNUEMgaXMgbm90IHNldA0KQ09ORklHX0lOVEVMX0lQUz15DQpDT05GSUdfSUJN
X1JUTD15DQojIENPTkZJR19YTzE1X0VCT09LIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVNVTkdf
TEFQVE9QIGlzIG5vdCBzZXQNCiMgQ09ORklHX1NBTVNVTkdfUTEwIGlzIG5vdCBzZXQNCkNPTkZJ
R19BUFBMRV9HTVVYPXkNCkNPTkZJR19JTlRFTF9SU1Q9eQ0KIyBDT05GSUdfSU5URUxfU01BUlRD
T05ORUNUIGlzIG5vdCBzZXQNCkNPTkZJR19QVlBBTklDPXkNCkNPTkZJR19DSFJPTUVfUExBVEZP
Uk1TPXkNCkNPTkZJR19DSFJPTUVPU19MQVBUT1A9eQ0KIyBDT05GSUdfQ0hST01FT1NfUFNUT1JF
IGlzIG5vdCBzZXQNCg0KIw0KIyBIYXJkd2FyZSBTcGlubG9jayBkcml2ZXJzDQojDQpDT05GSUdf
Q0xLRVZUX0k4MjUzPXkNCkNPTkZJR19JODI1M19MT0NLPXkNCkNPTkZJR19DTEtCTERfSTgyNTM9
eQ0KQ09ORklHX01BSUxCT1g9eQ0KIyBDT05GSUdfSU9NTVVfU1VQUE9SVCBpcyBub3Qgc2V0DQoN
CiMNCiMgUmVtb3RlcHJvYyBkcml2ZXJzDQojDQpDT05GSUdfUkVNT1RFUFJPQz15DQpDT05GSUdf
U1RFX01PREVNX1JQUk9DPXkNCg0KIw0KIyBScG1zZyBkcml2ZXJzDQojDQpDT05GSUdfUE1fREVW
RlJFUT15DQoNCiMNCiMgREVWRlJFUSBHb3Zlcm5vcnMNCiMNCkNPTkZJR19ERVZGUkVRX0dPVl9T
SU1QTEVfT05ERU1BTkQ9eQ0KQ09ORklHX0RFVkZSRVFfR09WX1BFUkZPUk1BTkNFPXkNCiMgQ09O
RklHX0RFVkZSRVFfR09WX1BPV0VSU0FWRSBpcyBub3Qgc2V0DQojIENPTkZJR19ERVZGUkVRX0dP
Vl9VU0VSU1BBQ0UgaXMgbm90IHNldA0KDQojDQojIERFVkZSRVEgRHJpdmVycw0KIw0KQ09ORklH
X0VYVENPTj15DQoNCiMNCiMgRXh0Y29uIERldmljZSBEcml2ZXJzDQojDQpDT05GSUdfRVhUQ09O
X0dQSU89eQ0KQ09ORklHX0VYVENPTl9NQVg3NzY5Mz15DQpDT05GSUdfRVhUQ09OX0FSSVpPTkE9
eQ0KIyBDT05GSUdfRVhUQ09OX1BBTE1BUyBpcyBub3Qgc2V0DQojIENPTkZJR19NRU1PUlkgaXMg
bm90IHNldA0KIyBDT05GSUdfSUlPIGlzIG5vdCBzZXQNCkNPTkZJR19OVEI9eQ0KIyBDT05GSUdf
Vk1FX0JVUyBpcyBub3Qgc2V0DQpDT05GSUdfUFdNPXkNCkNPTkZJR19QV01fU1lTRlM9eQ0KQ09O
RklHX1BXTV9SRU5FU0FTX1RQVT15DQpDT05GSUdfUFdNX1RXTD15DQpDT05GSUdfUFdNX1RXTF9M
RUQ9eQ0KQ09ORklHX0lQQUNLX0JVUz15DQpDT05GSUdfQk9BUkRfVFBDSTIwMD15DQpDT05GSUdf
U0VSSUFMX0lQT0NUQUw9eQ0KQ09ORklHX1JFU0VUX0NPTlRST0xMRVI9eQ0KQ09ORklHX0ZNQz15
DQpDT05GSUdfRk1DX0ZBS0VERVY9eQ0KQ09ORklHX0ZNQ19UUklWSUFMPXkNCiMgQ09ORklHX0ZN
Q19XUklURV9FRVBST00gaXMgbm90IHNldA0KIyBDT05GSUdfRk1DX0NIQVJERVYgaXMgbm90IHNl
dA0KDQojDQojIFBIWSBTdWJzeXN0ZW0NCiMNCiMgQ09ORklHX0dFTkVSSUNfUEhZIGlzIG5vdCBz
ZXQNCkNPTkZJR19QSFlfRVhZTk9TX01JUElfVklERU89eQ0KIyBDT05GSUdfUE9XRVJDQVAgaXMg
bm90IHNldA0KDQojDQojIEZpcm13YXJlIERyaXZlcnMNCiMNCiMgQ09ORklHX0VERCBpcyBub3Qg
c2V0DQpDT05GSUdfRklSTVdBUkVfTUVNTUFQPXkNCiMgQ09ORklHX0RFTExfUkJVIGlzIG5vdCBz
ZXQNCkNPTkZJR19EQ0RCQVM9eQ0KQ09ORklHX0RNSUlEPXkNCiMgQ09ORklHX0RNSV9TWVNGUyBp
cyBub3Qgc2V0DQpDT05GSUdfRE1JX1NDQU5fTUFDSElORV9OT05fRUZJX0ZBTExCQUNLPXkNCkNP
TkZJR19JU0NTSV9JQkZUX0ZJTkQ9eQ0KIyBDT05GSUdfR09PR0xFX0ZJUk1XQVJFIGlzIG5vdCBz
ZXQNCg0KIw0KIyBFRkkgKEV4dGVuc2libGUgRmlybXdhcmUgSW50ZXJmYWNlKSBTdXBwb3J0DQoj
DQpDT05GSUdfRUZJX1ZBUlM9eQ0KQ09ORklHX0VGSV9SVU5USU1FX01BUD15DQoNCiMNCiMgRmls
ZSBzeXN0ZW1zDQojDQpDT05GSUdfRENBQ0hFX1dPUkRfQUNDRVNTPXkNCkNPTkZJR19GU19QT1NJ
WF9BQ0w9eQ0KQ09ORklHX0VYUE9SVEZTPXkNCkNPTkZJR19GSUxFX0xPQ0tJTkc9eQ0KQ09ORklH
X0ZTTk9USUZZPXkNCkNPTkZJR19ETk9USUZZPXkNCiMgQ09ORklHX0lOT1RJRllfVVNFUiBpcyBu
b3Qgc2V0DQojIENPTkZJR19GQU5PVElGWSBpcyBub3Qgc2V0DQpDT05GSUdfUVVPVEE9eQ0KQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkNCiMgQ09ORklHX1BSSU5UX1FVT1RBX1dBUk5J
TkcgaXMgbm90IHNldA0KIyBDT05GSUdfUVVPVEFfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX1FV
T1RBX1RSRUU9eQ0KQ09ORklHX1FGTVRfVjE9eQ0KQ09ORklHX1FGTVRfVjI9eQ0KQ09ORklHX1FV
T1RBQ1RMPXkNCiMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNldA0KIyBDT05GSUdfRlVTRV9G
UyBpcyBub3Qgc2V0DQpDT05GSUdfR0VORVJJQ19BQ0w9eQ0KDQojDQojIENhY2hlcw0KIw0KIyBD
T05GSUdfRlNDQUNIRSBpcyBub3Qgc2V0DQoNCiMNCiMgUHNldWRvIGZpbGVzeXN0ZW1zDQojDQoj
IENPTkZJR19QUk9DX0ZTIGlzIG5vdCBzZXQNCkNPTkZJR19TWVNGUz15DQpDT05GSUdfVE1QRlM9
eQ0KQ09ORklHX1RNUEZTX1BPU0lYX0FDTD15DQpDT05GSUdfVE1QRlNfWEFUVFI9eQ0KQ09ORklH
X0hVR0VUTEJGUz15DQpDT05GSUdfSFVHRVRMQl9QQUdFPXkNCiMgQ09ORklHX0NPTkZJR0ZTX0ZT
IGlzIG5vdCBzZXQNCiMgQ09ORklHX01JU0NfRklMRVNZU1RFTVMgaXMgbm90IHNldA0KQ09ORklH
X05FVFdPUktfRklMRVNZU1RFTVM9eQ0KQ09ORklHX05GU19GUz15DQojIENPTkZJR19ORlNfVjIg
aXMgbm90IHNldA0KQ09ORklHX05GU19WMz15DQpDT05GSUdfTkZTX1YzX0FDTD15DQpDT05GSUdf
TkZTX1Y0PXkNCkNPTkZJR19ORlNfU1dBUD15DQojIENPTkZJR19ORlNfVjRfMSBpcyBub3Qgc2V0
DQojIENPTkZJR19ORlNfVVNFX0xFR0FDWV9ETlMgaXMgbm90IHNldA0KQ09ORklHX05GU19VU0Vf
S0VSTkVMX0ROUz15DQojIENPTkZJR19ORlNEIGlzIG5vdCBzZXQNCkNPTkZJR19MT0NLRD15DQpD
T05GSUdfTE9DS0RfVjQ9eQ0KQ09ORklHX05GU19BQ0xfU1VQUE9SVD15DQpDT05GSUdfTkZTX0NP
TU1PTj15DQpDT05GSUdfU1VOUlBDPXkNCkNPTkZJR19TVU5SUENfR1NTPXkNCkNPTkZJR19TVU5S
UENfWFBSVF9SRE1BPXkNCkNPTkZJR19TVU5SUENfU1dBUD15DQojIENPTkZJR19SUENTRUNfR1NT
X0tSQjUgaXMgbm90IHNldA0KIyBDT05GSUdfQ0VQSF9GUyBpcyBub3Qgc2V0DQpDT05GSUdfQ0lG
Uz15DQpDT05GSUdfQ0lGU19TVEFUUz15DQpDT05GSUdfQ0lGU19TVEFUUzI9eQ0KIyBDT05GSUdf
Q0lGU19XRUFLX1BXX0hBU0ggaXMgbm90IHNldA0KIyBDT05GSUdfQ0lGU19VUENBTEwgaXMgbm90
IHNldA0KQ09ORklHX0NJRlNfWEFUVFI9eQ0KQ09ORklHX0NJRlNfUE9TSVg9eQ0KQ09ORklHX0NJ
RlNfQUNMPXkNCiMgQ09ORklHX0NJRlNfREVCVUcgaXMgbm90IHNldA0KIyBDT05GSUdfQ0lGU19E
RlNfVVBDQUxMIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NJRlNfU01CMiBpcyBub3Qgc2V0DQpDT05G
SUdfTkNQX0ZTPXkNCiMgQ09ORklHX05DUEZTX1BBQ0tFVF9TSUdOSU5HIGlzIG5vdCBzZXQNCkNP
TkZJR19OQ1BGU19JT0NUTF9MT0NLSU5HPXkNCkNPTkZJR19OQ1BGU19TVFJPTkc9eQ0KQ09ORklH
X05DUEZTX05GU19OUz15DQpDT05GSUdfTkNQRlNfT1MyX05TPXkNCkNPTkZJR19OQ1BGU19TTUFM
TERPUz15DQojIENPTkZJR19OQ1BGU19OTFMgaXMgbm90IHNldA0KQ09ORklHX05DUEZTX0VYVFJB
Uz15DQpDT05GSUdfQ09EQV9GUz15DQojIENPTkZJR19BRlNfRlMgaXMgbm90IHNldA0KQ09ORklH
XzlQX0ZTPXkNCkNPTkZJR185UF9GU19QT1NJWF9BQ0w9eQ0KQ09ORklHXzlQX0ZTX1NFQ1VSSVRZ
PXkNCkNPTkZJR19OTFM9eQ0KQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiDQojIENPTkZJ
R19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfQ09ERVBBR0VfNzM3PXkN
CkNPTkZJR19OTFNfQ09ERVBBR0VfNzc1PXkNCiMgQ09ORklHX05MU19DT0RFUEFHRV84NTAgaXMg
bm90IHNldA0KQ09ORklHX05MU19DT0RFUEFHRV84NTI9eQ0KQ09ORklHX05MU19DT0RFUEFHRV84
NTU9eQ0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NP
REVQQUdFXzg2MD15DQojIENPTkZJR19OTFNfQ09ERVBBR0VfODYxIGlzIG5vdCBzZXQNCiMgQ09O
RklHX05MU19DT0RFUEFHRV84NjIgaXMgbm90IHNldA0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2
MyBpcyBub3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15DQpDT05GSUdfTkxTX0NPREVQ
QUdFXzg2NT15DQpDT05GSUdfTkxTX0NPREVQQUdFXzg2Nj15DQojIENPTkZJR19OTFNfQ09ERVBB
R0VfODY5IGlzIG5vdCBzZXQNCiMgQ09ORklHX05MU19DT0RFUEFHRV85MzYgaXMgbm90IHNldA0K
Q09ORklHX05MU19DT0RFUEFHRV85NTA9eQ0KIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzMiBpcyBu
b3Qgc2V0DQpDT05GSUdfTkxTX0NPREVQQUdFXzk0OT15DQpDT05GSUdfTkxTX0NPREVQQUdFXzg3
ND15DQojIENPTkZJR19OTFNfSVNPODg1OV84IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfQ09ERVBB
R0VfMTI1MD15DQpDT05GSUdfTkxTX0NPREVQQUdFXzEyNTE9eQ0KQ09ORklHX05MU19BU0NJST15
DQpDT05GSUdfTkxTX0lTTzg4NTlfMT15DQpDT05GSUdfTkxTX0lTTzg4NTlfMj15DQpDT05GSUdf
TkxTX0lTTzg4NTlfMz15DQpDT05GSUdfTkxTX0lTTzg4NTlfND15DQpDT05GSUdfTkxTX0lTTzg4
NTlfNT15DQpDT05GSUdfTkxTX0lTTzg4NTlfNj15DQpDT05GSUdfTkxTX0lTTzg4NTlfNz15DQoj
IENPTkZJR19OTFNfSVNPODg1OV85IGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfSVNPODg1OV8xMz15
DQpDT05GSUdfTkxTX0lTTzg4NTlfMTQ9eQ0KIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldA0KQ09ORklHX05MU19LT0k4X1I9eQ0KIyBDT05GSUdfTkxTX0tPSThfVSBpcyBub3Qgc2V0
DQpDT05GSUdfTkxTX01BQ19ST01BTj15DQojIENPTkZJR19OTFNfTUFDX0NFTFRJQyBpcyBub3Qg
c2V0DQpDT05GSUdfTkxTX01BQ19DRU5URVVSTz15DQpDT05GSUdfTkxTX01BQ19DUk9BVElBTj15
DQojIENPTkZJR19OTFNfTUFDX0NZUklMTElDIGlzIG5vdCBzZXQNCkNPTkZJR19OTFNfTUFDX0dB
RUxJQz15DQpDT05GSUdfTkxTX01BQ19HUkVFSz15DQpDT05GSUdfTkxTX01BQ19JQ0VMQU5EPXkN
CiMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMgbm90IHNldA0KQ09ORklHX05MU19NQUNfUk9NQU5J
QU49eQ0KQ09ORklHX05MU19NQUNfVFVSS0lTSD15DQpDT05GSUdfTkxTX1VURjg9eQ0KDQojDQoj
IEtlcm5lbCBoYWNraW5nDQojDQpDT05GSUdfVFJBQ0VfSVJRRkxBR1NfU1VQUE9SVD15DQoNCiMN
CiMgcHJpbnRrIGFuZCBkbWVzZyBvcHRpb25zDQojDQojIENPTkZJR19QUklOVEtfVElNRSBpcyBu
b3Qgc2V0DQpDT05GSUdfREVGQVVMVF9NRVNTQUdFX0xPR0xFVkVMPTQNCkNPTkZJR19CT09UX1BS
SU5US19ERUxBWT15DQojIENPTkZJR19EWU5BTUlDX0RFQlVHIGlzIG5vdCBzZXQNCg0KIw0KIyBD
b21waWxlLXRpbWUgY2hlY2tzIGFuZCBjb21waWxlciBvcHRpb25zDQojDQojIENPTkZJR19ERUJV
R19JTkZPIGlzIG5vdCBzZXQNCiMgQ09ORklHX0VOQUJMRV9XQVJOX0RFUFJFQ0FURUQgaXMgbm90
IHNldA0KQ09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkNCkNPTkZJR19GUkFNRV9XQVJOPTIwNDgN
CkNPTkZJR19TVFJJUF9BU01fU1lNUz15DQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMgbm90IHNl
dA0KIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldA0KQ09ORklHX0RFQlVHX0ZTPXkN
CiMgQ09ORklHX0hFQURFUlNfQ0hFQ0sgaXMgbm90IHNldA0KQ09ORklHX0RFQlVHX1NFQ1RJT05f
TUlTTUFUQ0g9eQ0KQ09ORklHX0FSQ0hfV0FOVF9GUkFNRV9QT0lOVEVSUz15DQpDT05GSUdfRlJB
TUVfUE9JTlRFUj15DQpDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BVPXkNCkNPTkZJR19N
QUdJQ19TWVNSUT15DQpDT05GSUdfTUFHSUNfU1lTUlFfREVGQVVMVF9FTkFCTEU9MHgxDQpDT05G
SUdfREVCVUdfS0VSTkVMPXkNCg0KIw0KIyBNZW1vcnkgRGVidWdnaW5nDQojDQpDT05GSUdfREVC
VUdfUEFHRUFMTE9DPXkNCkNPTkZJR19XQU5UX1BBR0VfREVCVUdfRkxBR1M9eQ0KQ09ORklHX1BB
R0VfR1VBUkQ9eQ0KQ09ORklHX0RFQlVHX09CSkVDVFM9eQ0KIyBDT05GSUdfREVCVUdfT0JKRUNU
U19TRUxGVEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfREVCVUdfT0JKRUNUU19GUkVFPXkNCkNPTkZJ
R19ERUJVR19PQkpFQ1RTX1RJTUVSUz15DQpDT05GSUdfREVCVUdfT0JKRUNUU19XT1JLPXkNCiMg
Q09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldA0KIyBDT05GSUdfREVCVUdf
T0JKRUNUU19QRVJDUFVfQ09VTlRFUiBpcyBub3Qgc2V0DQpDT05GSUdfREVCVUdfT0JKRUNUU19F
TkFCTEVfREVGQVVMVD0xDQpDT05GSUdfU0xVQl9TVEFUUz15DQpDT05GSUdfSEFWRV9ERUJVR19L
TUVNTEVBSz15DQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qgc2V0DQpDT05GSUdfREVC
VUdfU1RBQ0tfVVNBR0U9eQ0KIyBDT05GSUdfREVCVUdfVk0gaXMgbm90IHNldA0KIyBDT05GSUdf
REVCVUdfVklSVFVBTCBpcyBub3Qgc2V0DQojIENPTkZJR19ERUJVR19NRU1PUllfSU5JVCBpcyBu
b3Qgc2V0DQpDT05GSUdfTUVNT1JZX05PVElGSUVSX0VSUk9SX0lOSkVDVD15DQpDT05GSUdfSEFW
RV9ERUJVR19TVEFDS09WRVJGTE9XPXkNCiMgQ09ORklHX0RFQlVHX1NUQUNLT1ZFUkZMT1cgaXMg
bm90IHNldA0KQ09ORklHX0hBVkVfQVJDSF9LTUVNQ0hFQ0s9eQ0KIyBDT05GSUdfREVCVUdfU0hJ
UlEgaXMgbm90IHNldA0KDQojDQojIERlYnVnIExvY2t1cHMgYW5kIEhhbmdzDQojDQojIENPTkZJ
R19MT0NLVVBfREVURUNUT1IgaXMgbm90IHNldA0KQ09ORklHX0RFVEVDVF9IVU5HX1RBU0s9eQ0K
Q09ORklHX0RFRkFVTFRfSFVOR19UQVNLX1RJTUVPVVQ9MTIwDQojIENPTkZJR19CT09UUEFSQU1f
SFVOR19UQVNLX1BBTklDIGlzIG5vdCBzZXQNCkNPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BB
TklDX1ZBTFVFPTANCkNPTkZJR19QQU5JQ19PTl9PT1BTPXkNCkNPTkZJR19QQU5JQ19PTl9PT1BT
X1ZBTFVFPTENCkNPTkZJR19QQU5JQ19USU1FT1VUPTANCg0KIw0KIyBMb2NrIERlYnVnZ2luZyAo
c3BpbmxvY2tzLCBtdXRleGVzLCBldGMuLi4pDQojDQpDT05GSUdfREVCVUdfUlRfTVVURVhFUz15
DQpDT05GSUdfREVCVUdfUElfTElTVD15DQojIENPTkZJR19SVF9NVVRFWF9URVNURVIgaXMgbm90
IHNldA0KQ09ORklHX0RFQlVHX1NQSU5MT0NLPXkNCkNPTkZJR19ERUJVR19NVVRFWEVTPXkNCiMg
Q09ORklHX0RFQlVHX1dXX01VVEVYX1NMT1dQQVRIIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19M
T0NLX0FMTE9DPXkNCiMgQ09ORklHX1BST1ZFX0xPQ0tJTkcgaXMgbm90IHNldA0KQ09ORklHX0xP
Q0tERVA9eQ0KQ09ORklHX0xPQ0tfU1RBVD15DQojIENPTkZJR19ERUJVR19MT0NLREVQIGlzIG5v
dCBzZXQNCiMgQ09ORklHX0RFQlVHX0FUT01JQ19TTEVFUCBpcyBub3Qgc2V0DQpDT05GSUdfREVC
VUdfTE9DS0lOR19BUElfU0VMRlRFU1RTPXkNCkNPTkZJR19TVEFDS1RSQUNFPXkNCkNPTkZJR19E
RUJVR19LT0JKRUNUPXkNCiMgQ09ORklHX0RFQlVHX0tPQkpFQ1RfUkVMRUFTRSBpcyBub3Qgc2V0
DQojIENPTkZJR19ERUJVR19XUklURUNPVU5UIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19MSVNU
PXkNCiMgQ09ORklHX0RFQlVHX1NHIGlzIG5vdCBzZXQNCkNPTkZJR19ERUJVR19OT1RJRklFUlM9
eQ0KIyBDT05GSUdfREVCVUdfQ1JFREVOVElBTFMgaXMgbm90IHNldA0KDQojDQojIFJDVSBEZWJ1
Z2dpbmcNCiMNCkNPTkZJR19TUEFSU0VfUkNVX1BPSU5URVI9eQ0KQ09ORklHX1JDVV9UT1JUVVJF
X1RFU1Q9eQ0KIyBDT05GSUdfUkNVX1RPUlRVUkVfVEVTVF9SVU5OQUJMRSBpcyBub3Qgc2V0DQpD
T05GSUdfUkNVX0NQVV9TVEFMTF9USU1FT1VUPTIxDQpDT05GSUdfUkNVX1RSQUNFPXkNCkNPTkZJ
R19OT1RJRklFUl9FUlJPUl9JTkpFQ1RJT049eQ0KIyBDT05GSUdfUE1fTk9USUZJRVJfRVJST1Jf
SU5KRUNUIGlzIG5vdCBzZXQNCiMgQ09ORklHX0ZBVUxUX0lOSkVDVElPTiBpcyBub3Qgc2V0DQpD
T05GSUdfQVJDSF9IQVNfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1M9eQ0KIyBDT05GSUdf
REVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1MgaXMgbm90IHNldA0KQ09ORklHX1VTRVJfU1RB
Q0tUUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19IQVZFX0ZVTkNUSU9OX1RSQUNFUj15DQpDT05GSUdf
SEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQ0KQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhf
RlBfVEVTVD15DQpDT05GSUdfSEFWRV9GVU5DVElPTl9UUkFDRV9NQ09VTlRfVEVTVD15DQpDT05G
SUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15DQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRV9XSVRI
X1JFR1M9eQ0KQ09ORklHX0hBVkVfRlRSQUNFX01DT1VOVF9SRUNPUkQ9eQ0KQ09ORklHX0hBVkVf
U1lTQ0FMTF9UUkFDRVBPSU5UUz15DQpDT05GSUdfSEFWRV9GRU5UUlk9eQ0KQ09ORklHX0hBVkVf
Q19SRUNPUkRNQ09VTlQ9eQ0KQ09ORklHX1RSQUNFX0NMT0NLPXkNCkNPTkZJR19UUkFDSU5HX1NV
UFBPUlQ9eQ0KIyBDT05GSUdfRlRSQUNFIGlzIG5vdCBzZXQNCg0KIw0KIyBSdW50aW1lIFRlc3Rp
bmcNCiMNCkNPTkZJR19URVNUX0xJU1RfU09SVD15DQpDT05GSUdfQkFDS1RSQUNFX1NFTEZfVEVT
VD15DQojIENPTkZJR19SQlRSRUVfVEVTVCBpcyBub3Qgc2V0DQpDT05GSUdfQVRPTUlDNjRfU0VM
RlRFU1Q9eQ0KQ09ORklHX1RFU1RfU1RSSU5HX0hFTFBFUlM9eQ0KQ09ORklHX1RFU1RfS1NUUlRP
WD15DQojIENPTkZJR19QUk9WSURFX09IQ0kxMzk0X0RNQV9JTklUIGlzIG5vdCBzZXQNCiMgQ09O
RklHX0RNQV9BUElfREVCVUcgaXMgbm90IHNldA0KQ09ORklHX1NBTVBMRVM9eQ0KQ09ORklHX0hB
VkVfQVJDSF9LR0RCPXkNCkNPTkZJR19LR0RCPXkNCkNPTkZJR19LR0RCX1NFUklBTF9DT05TT0xF
PXkNCiMgQ09ORklHX0tHREJfVEVTVFMgaXMgbm90IHNldA0KIyBDT05GSUdfS0dEQl9MT1dfTEVW
RUxfVFJBUCBpcyBub3Qgc2V0DQojIENPTkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0DQpDT05GSUdf
U1RSSUNUX0RFVk1FTT15DQojIENPTkZJR19YODZfVkVSQk9TRV9CT09UVVAgaXMgbm90IHNldA0K
IyBDT05GSUdfRUFSTFlfUFJJTlRLIGlzIG5vdCBzZXQNCiMgQ09ORklHX1g4Nl9QVERVTVAgaXMg
bm90IHNldA0KQ09ORklHX0RFQlVHX1JPREFUQT15DQojIENPTkZJR19ERUJVR19ST0RBVEFfVEVT
VCBpcyBub3Qgc2V0DQpDT05GSUdfRE9VQkxFRkFVTFQ9eQ0KIyBDT05GSUdfREVCVUdfVExCRkxV
U0ggaXMgbm90IHNldA0KIyBDT05GSUdfSU9NTVVfU1RSRVNTIGlzIG5vdCBzZXQNCkNPTkZJR19I
QVZFX01NSU9UUkFDRV9TVVBQT1JUPXkNCkNPTkZJR19JT19ERUxBWV9UWVBFXzBYODA9MA0KQ09O
RklHX0lPX0RFTEFZX1RZUEVfMFhFRD0xDQpDT05GSUdfSU9fREVMQVlfVFlQRV9VREVMQVk9Mg0K
Q09ORklHX0lPX0RFTEFZX1RZUEVfTk9ORT0zDQpDT05GSUdfSU9fREVMQVlfMFg4MD15DQojIENP
TkZJR19JT19ERUxBWV8wWEVEIGlzIG5vdCBzZXQNCiMgQ09ORklHX0lPX0RFTEFZX1VERUxBWSBp
cyBub3Qgc2V0DQojIENPTkZJR19JT19ERUxBWV9OT05FIGlzIG5vdCBzZXQNCkNPTkZJR19ERUZB
VUxUX0lPX0RFTEFZX1RZUEU9MA0KIyBDT05GSUdfREVCVUdfQk9PVF9QQVJBTVMgaXMgbm90IHNl
dA0KIyBDT05GSUdfQ1BBX0RFQlVHIGlzIG5vdCBzZXQNCkNPTkZJR19PUFRJTUlaRV9JTkxJTklO
Rz15DQpDT05GSUdfREVCVUdfTk1JX1NFTEZURVNUPXkNCkNPTkZJR19YODZfREVCVUdfU1RBVElD
X0NQVV9IQVM9eQ0KDQojDQojIFNlY3VyaXR5IG9wdGlvbnMNCiMNCkNPTkZJR19LRVlTPXkNCiMg
Q09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90IHNldA0KQ09ORklHX0JJR19LRVlTPXkN
CiMgQ09ORklHX1RSVVNURURfS0VZUyBpcyBub3Qgc2V0DQpDT05GSUdfRU5DUllQVEVEX0tFWVM9
eQ0KIyBDT05GSUdfS0VZU19ERUJVR19QUk9DX0tFWVMgaXMgbm90IHNldA0KQ09ORklHX1NFQ1VS
SVRZX0RNRVNHX1JFU1RSSUNUPXkNCkNPTkZJR19TRUNVUklUWT15DQpDT05GSUdfU0VDVVJJVFlG
Uz15DQpDT05GSUdfU0VDVVJJVFlfTkVUV09SSz15DQpDT05GSUdfU0VDVVJJVFlfTkVUV09SS19Y
RlJNPXkNCkNPTkZJR19TRUNVUklUWV9QQVRIPXkNCiMgQ09ORklHX1NFQ1VSSVRZX1NFTElOVVgg
aXMgbm90IHNldA0KIyBDT05GSUdfU0VDVVJJVFlfU01BQ0sgaXMgbm90IHNldA0KQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZTz15DQpDT05GSUdfU0VDVVJJVFlfVE9NT1lPX01BWF9BQ0NFUFRfRU5UUlk9
MjA0OA0KQ09ORklHX1NFQ1VSSVRZX1RPTU9ZT19NQVhfQVVESVRfTE9HPTEwMjQNCkNPTkZJR19T
RUNVUklUWV9UT01PWU9fT01JVF9VU0VSU1BBQ0VfTE9BREVSPXkNCkNPTkZJR19TRUNVUklUWV9B
UFBBUk1PUj15DQpDT05GSUdfU0VDVVJJVFlfQVBQQVJNT1JfQk9PVFBBUkFNX1ZBTFVFPTENCkNP
TkZJR19TRUNVUklUWV9BUFBBUk1PUl9IQVNIPXkNCkNPTkZJR19TRUNVUklUWV9ZQU1BPXkNCkNP
TkZJR19TRUNVUklUWV9ZQU1BX1NUQUNLRUQ9eQ0KIyBDT05GSUdfSU1BIGlzIG5vdCBzZXQNCiMg
Q09ORklHX0VWTSBpcyBub3Qgc2V0DQpDT05GSUdfREVGQVVMVF9TRUNVUklUWV9UT01PWU89eQ0K
IyBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9BUFBBUk1PUiBpcyBub3Qgc2V0DQojIENPTkZJR19E
RUZBVUxUX1NFQ1VSSVRZX1lBTUEgaXMgbm90IHNldA0KIyBDT05GSUdfREVGQVVMVF9TRUNVUklU
WV9EQUMgaXMgbm90IHNldA0KQ09ORklHX0RFRkFVTFRfU0VDVVJJVFk9InRvbW95byINCkNPTkZJ
R19DUllQVE89eQ0KDQojDQojIENyeXB0byBjb3JlIG9yIGhlbHBlcg0KIw0KIyBDT05GSUdfQ1JZ
UFRPX0ZJUFMgaXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19BTEdBUEk9eQ0KQ09ORklHX0NSWVBU
T19BTEdBUEkyPXkNCkNPTkZJR19DUllQVE9fQUVBRD15DQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkN
CkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkNCkNPTkZJR19DUllQVE9fQkxLQ0lQSEVSMj15DQpD
T05GSUdfQ1JZUFRPX0hBU0g9eQ0KQ09ORklHX0NSWVBUT19IQVNIMj15DQpDT05GSUdfQ1JZUFRP
X1JORz15DQpDT05GSUdfQ1JZUFRPX1JORzI9eQ0KQ09ORklHX0NSWVBUT19QQ09NUDI9eQ0KQ09O
RklHX0NSWVBUT19NQU5BR0VSPXkNCkNPTkZJR19DUllQVE9fTUFOQUdFUjI9eQ0KQ09ORklHX0NS
WVBUT19VU0VSPXkNCiMgQ09ORklHX0NSWVBUT19NQU5BR0VSX0RJU0FCTEVfVEVTVFMgaXMgbm90
IHNldA0KQ09ORklHX0NSWVBUT19HRjEyOE1VTD15DQpDT05GSUdfQ1JZUFRPX05VTEw9eQ0KQ09O
RklHX0NSWVBUT19XT1JLUVVFVUU9eQ0KQ09ORklHX0NSWVBUT19DUllQVEQ9eQ0KQ09ORklHX0NS
WVBUT19BVVRIRU5DPXkNCkNPTkZJR19DUllQVE9fQUJMS19IRUxQRVI9eQ0KQ09ORklHX0NSWVBU
T19HTFVFX0hFTFBFUl9YODY9eQ0KDQojDQojIEF1dGhlbnRpY2F0ZWQgRW5jcnlwdGlvbiB3aXRo
IEFzc29jaWF0ZWQgRGF0YQ0KIw0KQ09ORklHX0NSWVBUT19DQ009eQ0KQ09ORklHX0NSWVBUT19H
Q009eQ0KQ09ORklHX0NSWVBUT19TRVFJVj15DQoNCiMNCiMgQmxvY2sgbW9kZXMNCiMNCkNPTkZJ
R19DUllQVE9fQ0JDPXkNCkNPTkZJR19DUllQVE9fQ1RSPXkNCkNPTkZJR19DUllQVE9fQ1RTPXkN
CkNPTkZJR19DUllQVE9fRUNCPXkNCkNPTkZJR19DUllQVE9fTFJXPXkNCkNPTkZJR19DUllQVE9f
UENCQz15DQpDT05GSUdfQ1JZUFRPX1hUUz15DQoNCiMNCiMgSGFzaCBtb2Rlcw0KIw0KQ09ORklH
X0NSWVBUT19DTUFDPXkNCkNPTkZJR19DUllQVE9fSE1BQz15DQojIENPTkZJR19DUllQVE9fWENC
QyBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX1ZNQUM9eQ0KDQojDQojIERpZ2VzdA0KIw0KQ09O
RklHX0NSWVBUT19DUkMzMkM9eQ0KQ09ORklHX0NSWVBUT19DUkMzMkNfSU5URUw9eQ0KIyBDT05G
SUdfQ1JZUFRPX0NSQzMyIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUwg
aXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19DUkNUMTBESUY9eQ0KQ09ORklHX0NSWVBUT19HSEFT
SD15DQpDT05GSUdfQ1JZUFRPX01END15DQpDT05GSUdfQ1JZUFRPX01ENT15DQpDT05GSUdfQ1JZ
UFRPX01JQ0hBRUxfTUlDPXkNCiMgQ09ORklHX0NSWVBUT19STUQxMjggaXMgbm90IHNldA0KQ09O
RklHX0NSWVBUT19STUQxNjA9eQ0KQ09ORklHX0NSWVBUT19STUQyNTY9eQ0KQ09ORklHX0NSWVBU
T19STUQzMjA9eQ0KQ09ORklHX0NSWVBUT19TSEExPXkNCkNPTkZJR19DUllQVE9fU0hBMV9TU1NF
Mz15DQpDT05GSUdfQ1JZUFRPX1NIQTI1Nl9TU1NFMz15DQpDT05GSUdfQ1JZUFRPX1NIQTUxMl9T
U1NFMz15DQpDT05GSUdfQ1JZUFRPX1NIQTI1Nj15DQpDT05GSUdfQ1JZUFRPX1NIQTUxMj15DQpD
T05GSUdfQ1JZUFRPX1RHUjE5Mj15DQpDT05GSUdfQ1JZUFRPX1dQNTEyPXkNCiMgQ09ORklHX0NS
WVBUT19HSEFTSF9DTE1VTF9OSV9JTlRFTCBpcyBub3Qgc2V0DQoNCiMNCiMgQ2lwaGVycw0KIw0K
Q09ORklHX0NSWVBUT19BRVM9eQ0KQ09ORklHX0NSWVBUT19BRVNfWDg2XzY0PXkNCiMgQ09ORklH
X0NSWVBUT19BRVNfTklfSU5URUwgaXMgbm90IHNldA0KIyBDT05GSUdfQ1JZUFRPX0FOVUJJUyBp
cyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX0FSQzQ9eQ0KIyBDT05GSUdfQ1JZUFRPX0JMT1dGSVNI
IGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBUT19CTE9XRklTSF9YODZfNjQgaXMgbm90IHNldA0K
Q09ORklHX0NSWVBUT19DQU1FTExJQT15DQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX1g4Nl82ND15
DQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FFU05JX0FWWF9YODZfNjQ9eQ0KQ09ORklHX0NSWVBU
T19DQU1FTExJQV9BRVNOSV9BVlgyX1g4Nl82ND15DQpDT05GSUdfQ1JZUFRPX0NBU1RfQ09NTU9O
PXkNCiMgQ09ORklHX0NSWVBUT19DQVNUNSBpcyBub3Qgc2V0DQojIENPTkZJR19DUllQVE9fQ0FT
VDVfQVZYX1g4Nl82NCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX0NBU1Q2PXkNCkNPTkZJR19D
UllQVE9fQ0FTVDZfQVZYX1g4Nl82ND15DQpDT05GSUdfQ1JZUFRPX0RFUz15DQpDT05GSUdfQ1JZ
UFRPX0ZDUllQVD15DQpDT05GSUdfQ1JZUFRPX0tIQVpBRD15DQojIENPTkZJR19DUllQVE9fU0FM
U0EyMCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JZUFRPX1NBTFNBMjBfWDg2XzY0PXkNCkNPTkZJR19D
UllQVE9fU0VFRD15DQpDT05GSUdfQ1JZUFRPX1NFUlBFTlQ9eQ0KQ09ORklHX0NSWVBUT19TRVJQ
RU5UX1NTRTJfWDg2XzY0PXkNCkNPTkZJR19DUllQVE9fU0VSUEVOVF9BVlhfWDg2XzY0PXkNCiMg
Q09ORklHX0NSWVBUT19TRVJQRU5UX0FWWDJfWDg2XzY0IGlzIG5vdCBzZXQNCkNPTkZJR19DUllQ
VE9fVEVBPXkNCiMgQ09ORklHX0NSWVBUT19UV09GSVNIIGlzIG5vdCBzZXQNCkNPTkZJR19DUllQ
VE9fVFdPRklTSF9DT01NT049eQ0KQ09ORklHX0NSWVBUT19UV09GSVNIX1g4Nl82ND15DQpDT05G
SUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0XzNXQVk9eQ0KQ09ORklHX0NSWVBUT19UV09GSVNIX0FW
WF9YODZfNjQ9eQ0KDQojDQojIENvbXByZXNzaW9uDQojDQpDT05GSUdfQ1JZUFRPX0RFRkxBVEU9
eQ0KIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldA0KQ09ORklHX0NSWVBUT19MWk89eQ0K
Q09ORklHX0NSWVBUT19MWjQ9eQ0KIyBDT05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBzZXQNCg0K
Iw0KIyBSYW5kb20gTnVtYmVyIEdlbmVyYXRpb24NCiMNCkNPTkZJR19DUllQVE9fQU5TSV9DUFJO
Rz15DQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJPXkNCkNPTkZJR19DUllQVE9fVVNFUl9BUElfSEFT
SD15DQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1NLQ0lQSEVSPXkNCkNPTkZJR19DUllQVE9fSFc9
eQ0KIyBDT05GSUdfQ1JZUFRPX0RFVl9QQURMT0NLIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSWVBU
T19ERVZfQ0NQIGlzIG5vdCBzZXQNCiMgQ09ORklHX0FTWU1NRVRSSUNfS0VZX1RZUEUgaXMgbm90
IHNldA0KQ09ORklHX0hBVkVfS1ZNPXkNCkNPTkZJR19IQVZFX0tWTV9JUlFDSElQPXkNCkNPTkZJ
R19IQVZFX0tWTV9JUlFfUk9VVElORz15DQpDT05GSUdfSEFWRV9LVk1fRVZFTlRGRD15DQpDT05G
SUdfS1ZNX0FQSUNfQVJDSElURUNUVVJFPXkNCkNPTkZJR19LVk1fTU1JTz15DQpDT05GSUdfS1ZN
X0FTWU5DX1BGPXkNCkNPTkZJR19IQVZFX0tWTV9NU0k9eQ0KQ09ORklHX0hBVkVfS1ZNX0NQVV9S
RUxBWF9JTlRFUkNFUFQ9eQ0KQ09ORklHX0tWTV9WRklPPXkNCkNPTkZJR19WSVJUVUFMSVpBVElP
Tj15DQpDT05GSUdfS1ZNPXkNCiMgQ09ORklHX0tWTV9JTlRFTCBpcyBub3Qgc2V0DQpDT05GSUdf
S1ZNX0FNRD15DQojIENPTkZJR19CSU5BUllfUFJJTlRGIGlzIG5vdCBzZXQNCg0KIw0KIyBMaWJy
YXJ5IHJvdXRpbmVzDQojDQpDT05GSUdfQklUUkVWRVJTRT15DQpDT05GSUdfR0VORVJJQ19TVFJO
Q1BZX0ZST01fVVNFUj15DQpDT05GSUdfR0VORVJJQ19TVFJOTEVOX1VTRVI9eQ0KQ09ORklHX0dF
TkVSSUNfTkVUX1VUSUxTPXkNCkNPTkZJR19HRU5FUklDX0ZJTkRfRklSU1RfQklUPXkNCkNPTkZJ
R19HRU5FUklDX1BDSV9JT01BUD15DQpDT05GSUdfR0VORVJJQ19JT01BUD15DQpDT05GSUdfR0VO
RVJJQ19JTz15DQpDT05GSUdfQVJDSF9VU0VfQ01QWENIR19MT0NLUkVGPXkNCkNPTkZJR19DUkNf
Q0NJVFQ9eQ0KQ09ORklHX0NSQzE2PXkNCiMgQ09ORklHX0NSQ19UMTBESUYgaXMgbm90IHNldA0K
Q09ORklHX0NSQ19JVFVfVD15DQpDT05GSUdfQ1JDMzI9eQ0KQ09ORklHX0NSQzMyX1NFTEZURVNU
PXkNCkNPTkZJR19DUkMzMl9TTElDRUJZOD15DQojIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBu
b3Qgc2V0DQojIENPTkZJR19DUkMzMl9TQVJXQVRFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NSQzMy
X0JJVCBpcyBub3Qgc2V0DQojIENPTkZJR19DUkM3IGlzIG5vdCBzZXQNCkNPTkZJR19MSUJDUkMz
MkM9eQ0KIyBDT05GSUdfQ1JDOCBpcyBub3Qgc2V0DQpDT05GSUdfQ1JDNjRfRUNNQT15DQpDT05G
SUdfUkFORE9NMzJfU0VMRlRFU1Q9eQ0KQ09ORklHX1pMSUJfSU5GTEFURT15DQpDT05GSUdfWkxJ
Ql9ERUZMQVRFPXkNCkNPTkZJR19MWk9fQ09NUFJFU1M9eQ0KQ09ORklHX0xaT19ERUNPTVBSRVNT
PXkNCkNPTkZJR19MWjRfQ09NUFJFU1M9eQ0KQ09ORklHX0xaNF9ERUNPTVBSRVNTPXkNCkNPTkZJ
R19YWl9ERUM9eQ0KIyBDT05GSUdfWFpfREVDX1g4NiBpcyBub3Qgc2V0DQojIENPTkZJR19YWl9E
RUNfUE9XRVJQQyBpcyBub3Qgc2V0DQojIENPTkZJR19YWl9ERUNfSUE2NCBpcyBub3Qgc2V0DQpD
T05GSUdfWFpfREVDX0FSTT15DQojIENPTkZJR19YWl9ERUNfQVJNVEhVTUIgaXMgbm90IHNldA0K
IyBDT05GSUdfWFpfREVDX1NQQVJDIGlzIG5vdCBzZXQNCkNPTkZJR19YWl9ERUNfQkNKPXkNCkNP
TkZJR19YWl9ERUNfVEVTVD15DQpDT05GSUdfREVDT01QUkVTU19YWj15DQpDT05GSUdfR0VORVJJ
Q19BTExPQ0FUT1I9eQ0KQ09ORklHX0JDSD15DQpDT05GSUdfQkNIX0NPTlNUX1BBUkFNUz15DQpD
T05GSUdfVEVYVFNFQVJDSD15DQpDT05GSUdfVEVYVFNFQVJDSF9LTVA9eQ0KQ09ORklHX0FTU09D
SUFUSVZFX0FSUkFZPXkNCkNPTkZJR19IQVNfSU9NRU09eQ0KQ09ORklHX0hBU19JT1BPUlQ9eQ0K
Q09ORklHX0hBU19ETUE9eQ0KQ09ORklHX0NIRUNLX1NJR05BVFVSRT15DQpDT05GSUdfRFFMPXkN
CkNPTkZJR19OTEFUVFI9eQ0KQ09ORklHX0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElW
RT15DQojIENPTkZJR19BVkVSQUdFIGlzIG5vdCBzZXQNCiMgQ09ORklHX0NPUkRJQyBpcyBub3Qg
c2V0DQojIENPTkZJR19ERFIgaXMgbm90IHNldA0KQ09ORklHX09JRF9SRUdJU1RSWT15DQpDT05G
SUdfVUNTMl9TVFJJTkc9eQ0KQ09ORklHX0ZPTlRfU1VQUE9SVD15DQpDT05GSUdfRk9OVF84eDE2
PXkNCkNPTkZJR19GT05UX0FVVE9TRUxFQ1Q9eQ0K
--089e013c5af08767ff04ef61d88f
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--089e013c5af08767ff04ef61d88f--


From xen-devel-bounces@lists.xen.org Tue Jan 07 14:13:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XPf-0004YN-6X; Tue, 07 Jan 2014 14:13:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W0XPd-0004YE-Po
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:13:46 +0000
Received: from [85.158.143.35:25637] by server-1.bemta-4.messagelabs.com id
	FE/8A-02132-99B0CC25; Tue, 07 Jan 2014 14:13:45 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389104023!7489585!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7265 invoked from network); 7 Jan 2014 14:13:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:13:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90443681"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:13:43 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 09:13:42 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 15:13:41 +0100
From: Rob Hoes <Rob.Hoes@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: libxl: ocaml: fix tests makefile and osevent callbacks
Thread-Index: AQHPC6O9DduZFZKqk0uy9tG7XZmHppp5S8tQ
Date: Tue, 7 Jan 2014 14:13:40 +0000
Message-ID: <360717C0B01E6345BCBE64B758E22C2D1D6D01@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
	<1389097593.31766.143.camel@kazak.uk.xensource.com>
In-Reply-To: <1389097593.31766.143.camel@kazak.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl: ocaml: fix tests makefile and osevent
	callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> Sent: 07 January 2014 12:27 PM
> To: Rob Hoes
> Cc: xen-devel@lists.xen.org; Ian Jackson; Dave Scott
> Subject: Re: libxl: ocaml: fix tests makefile and osevent callbacks
> 
> On Thu, 2013-12-12 at 16:36 +0000, Rob Hoes wrote:
> > I'd really appreciate it if these fixes can still be considered for
> > Xen 4.4, for the same reasons as the larger libxl/ocaml series that went
> in yesterday.
> 
> I haven't delved into these threads but I think we are waiting for a v2?

There were 3 patches:
1: Already applied (the Makefile fix).
2: Dave acked it on 20/12. Ian J was happy with the patch after some discussion (but did not formally ack it yet). No v2 was needed.
3: I sent a v2 on 13/12. Dave acked it on 20/12. We had some discussion and I think you and Ian J seemed happy (but did not formally ack yet).

Cheers,
Rob
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:18:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XUT-0004is-0i; Tue, 07 Jan 2014 14:18:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XUR-0004ii-H6
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:18:43 +0000
Received: from [193.109.254.147:52860] by server-2.bemta-14.messagelabs.com id
	35/E8-00361-2CC0CC25; Tue, 07 Jan 2014 14:18:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389104320!9288312!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22562 invoked from network); 7 Jan 2014 14:18:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:18:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90445912"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:18:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:18:40 -0500
Message-ID: <1389104319.12612.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rob Hoes <Rob.Hoes@citrix.com>
Date: Tue, 7 Jan 2014 14:18:39 +0000
In-Reply-To: <360717C0B01E6345BCBE64B758E22C2D1D6D01@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
	<1389097593.31766.143.camel@kazak.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D6D01@AMSPEX01CL03.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl: ocaml: fix tests makefile and osevent
	callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 14:13 +0000, Rob Hoes wrote:
> 2: Dave acked it on 20/12. Ian J was happy with the patch after some discussion (but did not formally ack it yet). No v2 was needed.

I wasn't sure about Ian's response being approval:
        I mean that I approve of the plan to use two int64's as in Rob's
        proposal.

unless "the plan" == "the patch"?

> 3: I sent a v2 on 13/12. Dave acked it on 20/12. We had some discussion and I think you and Ian J seemed happy (but did not formally ack yet).

Thanks, I had missed v2 of #3.

Ian -- can you formally ack both of these if you are happy with them
please.

Release wise I don't see any point in shipping the ocaml bindings
without these issues fixed, so Release-ack-by: Ian Campbell.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:18:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XUT-0004is-0i; Tue, 07 Jan 2014 14:18:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XUR-0004ii-H6
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:18:43 +0000
Received: from [193.109.254.147:52860] by server-2.bemta-14.messagelabs.com id
	35/E8-00361-2CC0CC25; Tue, 07 Jan 2014 14:18:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389104320!9288312!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22562 invoked from network); 7 Jan 2014 14:18:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:18:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90445912"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:18:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:18:40 -0500
Message-ID: <1389104319.12612.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rob Hoes <Rob.Hoes@citrix.com>
Date: Tue, 7 Jan 2014 14:18:39 +0000
In-Reply-To: <360717C0B01E6345BCBE64B758E22C2D1D6D01@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
	<1389097593.31766.143.camel@kazak.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D6D01@AMSPEX01CL03.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxl: ocaml: fix tests makefile and osevent
	callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 14:13 +0000, Rob Hoes wrote:
> 2: Dave acked it on 20/12. Ian J was happy with the patch after some discussion (but did not formally ack it yet). No v2 was needed.

I wasn't sure about Ian's response being approval:
        I mean that I approve of the plan to use two int64's as in Rob's
        proposal.

unless "the plan" == "the patch"?

> 3: I sent a v2 on 13/12. Dave acked it on 20/12. We had some discussion and I think you and Ian J seemed happy (but did not formally ack yet).

Thanks, I had missed v2 of #3.

Ian -- can you formally ack both of these if you are happy with them
please.

Release-wise, I don't see any point in shipping the ocaml bindings
without these issues fixed, so Release-ack-by: Ian Campbell.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:23:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XYw-0005FY-NH; Tue, 07 Jan 2014 14:23:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0XYw-0005FR-3A
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 14:23:22 +0000
Received: from [85.158.139.211:12639] by server-9.bemta-5.messagelabs.com id
	81/60-15098-9DD0CC25; Tue, 07 Jan 2014 14:23:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389104600!7107069!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26346 invoked from network); 7 Jan 2014 14:23:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 14:23:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 14:23:19 +0000
Message-Id: <52CC1BE302000078001112AE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 14:23:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <Andrew.Cooper3@citrix.com>,
	"Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1388754947.28243.4.camel@hamster.uk.xensource.com>
In-Reply-To: <1388754947.28243.4.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xensource.com, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] Avoid race conditions in HPET initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.01.14 at 14:15, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> Avoid turning on legacy interrupts before hpet_event has been set up.
> Particularly, the spinlock can be uninitialised at the point at which
> the interrupt first arrives.

I suppose you actually saw this issue, but I currently fail to see how
it would occur:

        spin_lock_init(&hpet_events[i].lock);
        wmb();
        hpet_events[i].event_handler = handle_hpet_broadcast;

guarantees that the lock gets initialized before the handler gets set
(i.e. if anything you'd do a call through a NULL pointer). And this

    if ( !num_hpets_used )
        hpet_events->flags = HPET_EVT_LEGACY;

happens even later, yet hpet_legacy_irq_tick() checks that flag
before calling the handler (and hence before taking the lock).

Before applying the patch I'd like to understand what I'm
overlooking.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:27:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xcu-0005Nw-FA; Tue, 07 Jan 2014 14:27:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xcs-0005Np-KH
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:27:26 +0000
Received: from [85.158.143.35:39886] by server-3.bemta-4.messagelabs.com id
	A8/92-32360-ECE0CC25; Tue, 07 Jan 2014 14:27:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389104843!7494169!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1533 invoked from network); 7 Jan 2014 14:27:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:27:25 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07ER6hC010437
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:27:07 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ER4Av016098
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:27:05 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ER43M004933; Tue, 7 Jan 2014 14:27:04 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:27:04 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B20B51C18DC; Tue,  7 Jan 2014 09:27:02 -0500 (EST)
Date: Tue, 7 Jan 2014 09:27:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Bob Liu <bob.liu@oracle.com>, keir@xen.org
Message-ID: <20140107142702.GC3588@phenom.dumpdata.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
	<52CBDAA3.2000403@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CBDAA3.2000403@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Bob Liu <lliubbo@gmail.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> 
> On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> >> There are several one-line functions in tmem_xen.h which are useless; this patch
> >> embeds them into tmem.c directly.
> >> Also modify void *tmem in struct domain to struct client *tmem_client in order
> >> to make things more straightforward.
> >>
> >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> >> ---
> >>  xen/common/domain.c        |    4 ++--
> >>  xen/common/tmem.c          |   24 ++++++++++++------------
> >>  xen/include/xen/sched.h    |    2 +-
> >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> > 
> > Keir, are you OK with this simple name change?
> > 
> 
> Ping..

Let's make sure his email is on the 'To' line (I don't see it
in my email?)
> 
> Thanks!
> .. and Happy New Year
> -Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:27:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:27:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XdO-0005QT-Sw; Tue, 07 Jan 2014 14:27:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0XdN-0005Q9-6V
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:27:57 +0000
Received: from [85.158.143.35:61813] by server-1.bemta-4.messagelabs.com id
	0E/24-02132-CEE0CC25; Tue, 07 Jan 2014 14:27:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389104875!10023819!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7485 invoked from network); 7 Jan 2014 14:27:56 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:27:56 -0000
Received: by mail-ee0-f49.google.com with SMTP id c41so111435eek.8
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 06:27:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=oJrmNayZ7gcSCDVuYE0I7/7StUnf4uKZIdoVqCdk1rs=;
	b=CrC5EtaS+eR6znnG928zYHk4ILmlxPdGawoMrEL99tLKPGF9RnKyOI7WhCttSQJEWO
	odenKLZjTRbsWnk4AQ2/Jglr97268d/f6uppMvcat/dDMSxJicR0M2EKX9l7TRBF1ehf
	zmkVT7ryDN+BsNG9qnDr4sFh9RkkW9MAYKd2N8dfJaDrgwEt0sfErxopjtuygt3X5ZNK
	eO84j3bxsqwnCQ9S9uTb+brKGwr6DaONa/OUL3JpcwNk05fEiOJlKwplRH1HmFAuWFTW
	2QqOy6U30i72NLJumxfLJ4Q0LkOVtfZMEdvejTsBuPhR1J1iSof7RiPDBtLuqVSV2M/W
	RyMQ==
X-Gm-Message-State: ALoCoQkb3CAOy5Q9pyCCuJ2RKBYOnyn3lu4Cjw/71Ku5jHXEVG5UYSgFkdAQAQ8hpDAnBpEi4jPq
X-Received: by 10.14.99.66 with SMTP id w42mr90958eef.63.1389104875724;
	Tue, 07 Jan 2014 06:27:55 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 7sm35571011eee.12.2014.01.07.06.27.54
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 06:27:54 -0800 (PST)
Message-ID: <52CC0EE8.6060205@linaro.org>
Date: Tue, 07 Jan 2014 14:27:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org> <52CA78DE.9060502@citrix.com>
	<52CA9481.4090703@linaro.org> <52CBBB05.6020104@citrix.com>
In-Reply-To: <52CBBB05.6020104@citrix.com>
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 08:29 AM, Roger Pau Monné wrote:
> On 06/01/14 12:33, Julien Grall wrote:
>>
>>
>> On 01/06/2014 09:35 AM, Roger Pau Monné wrote:
>>> On 05/01/14 22:55, Julien Grall wrote:
>>>>
>>>>
>>>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>>>> Introduce a Xen specific nexus that is going to be in charge for
>>>>> attaching Xen specific devices.
>>>>
>>>> Now that we have a xenpv bus, do we really need a specific nexus for
>>>> Xen?
>>>> We should be able to use the identify callback of xenpv to create the
>>>> bus.
>>>>
>>>> The other part of this patch can be merged in the patch #14 "Introduce
>>>> xenpv bus and a dummy pvcpu device".
>>>
>>> On x86 at least we need the Xen specific nexus, or we will fall back to
>>> use the legacy nexus which is not what we really want.
>>>
>>
>> Oh right, in any case can we use the identify callback of xenpv to add
>> the bus?
> 
> AFAICT this kind of bus devices don't have a identify routine, and they
> are usually added manually from the specific nexus, see acpi or legacy.
> Could you add the device on ARM when you detect that you are running as
> a Xen guest, or in the generic ARM nexus if Xen is detected?

Is there any reason to not add identify callback? If it's possible, I
would like to avoid as much as possible #ifdef XENHVM in ARM code.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:29:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xel-0005yF-CK; Tue, 07 Jan 2014 14:29:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xek-0005y2-2P
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:29:22 +0000
Received: from [85.158.137.68:40711] by server-6.bemta-3.messagelabs.com id
	82/14-04868-04F0CC25; Tue, 07 Jan 2014 14:29:20 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389104957!7762973!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4839 invoked from network); 7 Jan 2014 14:29:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:29:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07ETGM5012959
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:29:16 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ETF6I013074
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:29:15 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ETEEh021935; Tue, 7 Jan 2014 14:29:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:29:14 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B27651C18DC; Tue,  7 Jan 2014 09:29:13 -0500 (EST)
Date: Tue, 7 Jan 2014 09:29:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140107142913.GD3588@phenom.dumpdata.com>
References: <20140106175713.GB28636@phenom.dumpdata.com>
	<1389093778.31766.131.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389093778.31766.131.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] xend vs xl with pci=['<bdf'] wherein the '<bdf>'
 are not owned by pciback or pcistub will still launch.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 11:22:58AM +0000, Ian Campbell wrote:
> create ^
> thanks
> 
> On Mon, 2014-01-06 at 12:57 -0500, Konrad Rzeszutek Wilk wrote:
> > In Xend, if I had a pci entry in the guest config and the 
> > PCI device was not assigned to xen-pciback or pci-stub it
> > would refuse to launch the guest.
> > 
> > Not so with 'xl'. It will complain but still launch:
> 
> It looks like domcreate_attach_pci() is ignoring the result of
> libxl__device_pci_add(). It appears to have always done so.
> 
> I suppose there is an argument that there are use cases where starting
> the domain even without the full set of devices is better than not
> starting it at all, but I think I agree that the default should be to
> fail if some devices are not available.

<nods> The guest wasn't too happy about some of them missing :-)

> 
> Is this a blocker for you for 4.4 or can it wait for 4.5?

Not a blocker. It can wait, just want to make sure we don't forget.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:29:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xeu-0005zZ-Ov; Tue, 07 Jan 2014 14:29:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0Xet-0005zI-QC
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:29:31 +0000
Received: from [85.158.139.211:63258] by server-13.bemta-5.messagelabs.com id
	73/FC-11357-B4F0CC25; Tue, 07 Jan 2014 14:29:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389104970!5658876!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27316 invoked from network); 7 Jan 2014 14:29:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 14:29:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 14:29:30 +0000
Message-Id: <52CC1D5702000078001112C5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 14:29:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, David Vrabel <david.vrabel@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/kexec: Identify which cpu the kexec
 image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 12:59, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/common/kexec.c
> +++ b/xen/common/kexec.c
> @@ -265,6 +265,8 @@ static int noinline one_cpu_only(void)
>      }
>  
>      set_bit(KEXEC_FLAG_IN_PROGRESS, &kexec_flags);
> +    printk("Executing crash image on cpu%u\n", cpu);
> +
>      return 0;
>  }
>  

With the calling function also being used from kexec_reboot(),
printing "crash image" here isn't really correct afaict.

Jan

> @@ -340,8 +342,6 @@ void kexec_crash(void)
>      if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
>          return;
>  
> -    printk("Executing crash image\n");
> -
>      kexecing = TRUE;
>  
>      if ( kexec_common_shutdown() != 0 )
> -- 
> 1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:31:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XgP-0006Dd-Ft; Tue, 07 Jan 2014 14:31:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W0XgN-0006DN-3J
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:31:03 +0000
Received: from [85.158.143.35:44748] by server-1.bemta-4.messagelabs.com id
	02/B9-02132-6AF0CC25; Tue, 07 Jan 2014 14:31:02 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389105060!10135300!1
X-Originating-IP: [209.85.128.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23458 invoked from network); 7 Jan 2014 14:31:01 -0000
Received: from mail-qe0-f41.google.com (HELO mail-qe0-f41.google.com)
	(209.85.128.41)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:31:01 -0000
Received: by mail-qe0-f41.google.com with SMTP id gh4so414948qeb.28
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 06:31:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=PeHJ479g1LtKaF/1nWXIofbfZNQBxP/u6TBqMa94YSI=;
	b=OoNvW4jyrztq3Wjo1btoQCJcmzQaCHgP6fBGy6CUk3QwMaiikbelZ5pngU8HLBu9Rc
	PtjWxQ/wub7uWH6Ok2Xdvl4h1kzoHM8rOKiav3MzLW++My4BuQBZXOnWWYu/O51TrTPM
	bqjTYZ8M2CSpTjdyTfuoJesJEdPE7KO4gPKEKXuH0PPqOQ25dGtVHNUkX+e5sEg+N/Bl
	CmVAQ2LOgT6qR5cVIxh4EUwuzgjQmD3UBaf2Yzce2eSORiCPPapxEkXw3Qaji6IGst4Q
	aK/Qjt/kwH94k01iNuSeaq4DY2TmTOLBeH06MrF3JuAhbFPb43CiCG4ouisElGYEyR8y
	wRDg==
MIME-Version: 1.0
X-Received: by 10.229.24.4 with SMTP id t4mr188560906qcb.13.1389105060556;
	Tue, 07 Jan 2014 06:31:00 -0800 (PST)
Received: by 10.224.77.17 with HTTP; Tue, 7 Jan 2014 06:31:00 -0800 (PST)
In-Reply-To: <1389103098.12612.21.camel@kazak.uk.xensource.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<1389103098.12612.21.camel@kazak.uk.xensource.com>
Date: Tue, 7 Jan 2014 14:31:00 +0000
Message-ID: <CAOTdubu9yQvaveFtk8H-iu+pH8xCbT3rv7fm==zcyr5usJT6JQ@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 7, 2014 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Sun, 2014-01-05 at 16:48 +0000, karim.allah.ahmed@gmail.com wrote:
>
>> If you still can't boot with any memory bigger than 128M, as a fast
>> workaround you can apply this patch.
>
> I wonder if it might be possible to work around this by more carefully
> selecting the load addresses for Xen+Linux+DTB+initrd, such that they
> are packed into the top end of RAM, leaving a larger contiguous chunk
> available at the beginning. e.g. if sizeof(Xen)=X and sizeof(Linux)=L
> and sizeof(DTB)=D (all rounded up to 2M boundary) then load things at:
>         MEMMAX-X:       Leave free for high relocation of hypervisor
>         MEMMAX-X-L:     Load Linux here
>         MEMMAX-X-L-D:   Load DTB here
>         MEMMAX-X-L-D-X: Load initial Xen image here
>
> Ultimately this is because allocations need to be aligned to their size,
> so on a 1GB system there are only two possible 512MB allocations; if
> even one page is allocated in each half then it isn't possible to
> satisfy things. I don't think the core allocator gives us the option to
> do non-aligned allocations.

What if we allocated dom0 from the boot allocator instead (before
ditching it)?


> Disabling the 1:1 mapping workaround allocates the region a page at a
> time so it doesn't suffer from this.
>
> We are probably mostly stuck with this for 4.4. As Julien says for 4.5
> we should probably look into giving dom0 multiple banks where necessary.
>
> Ian.
>
>



-- 
Karim Allah Ahmed.
LinkedIn

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:32:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XhW-0006N2-4q; Tue, 07 Jan 2014 14:32:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XhU-0006Mk-LD
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:32:12 +0000
Received: from [85.158.139.211:47906] by server-7.bemta-5.messagelabs.com id
	94/71-04824-BEF0CC25; Tue, 07 Jan 2014 14:32:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389105129!8329251!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30724 invoked from network); 7 Jan 2014 14:32:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:32:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90451561"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:32:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:32:08 -0500
Message-ID: <1389105127.12612.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 7 Jan 2014 14:32:07 +0000
In-Reply-To: <20140107142702.GC3588@phenom.dumpdata.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
	<52CBDAA3.2000403@oracle.com>
	<20140107142702.GC3588@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Bob Liu <lliubbo@gmail.com>, keir@xen.org, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> > 
> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> > >> There are several one-line functions in tmem_xen.h which are useless;
> > >> this patch embeds them into tmem.c directly.
> > >> Also modify void *tmem in struct domain to struct client *tmem_client
> > >> in order to make things more straightforward.
> > >>
> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> > >> ---
> > >>  xen/common/domain.c        |    4 ++--
> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
> > >>  xen/include/xen/sched.h    |    2 +-
> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> > > 
> > > Keir, are you OK with this simple name change?
> > > 
> > 
> > Ping..
> 
Let's make sure his email is on the 'To' line (I don't see it
in my email?)

I haven't reviewed the patch or anything but it says "cleanup" -- I
think we are past that point of the release process, aren't we?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:32:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XhW-0006N2-4q; Tue, 07 Jan 2014 14:32:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XhU-0006Mk-LD
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:32:12 +0000
Received: from [85.158.139.211:47906] by server-7.bemta-5.messagelabs.com id
	94/71-04824-BEF0CC25; Tue, 07 Jan 2014 14:32:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389105129!8329251!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30724 invoked from network); 7 Jan 2014 14:32:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:32:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90451561"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:32:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:32:08 -0500
Message-ID: <1389105127.12612.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 7 Jan 2014 14:32:07 +0000
In-Reply-To: <20140107142702.GC3588@phenom.dumpdata.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
	<52CBDAA3.2000403@oracle.com>
	<20140107142702.GC3588@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Bob Liu <lliubbo@gmail.com>, keir@xen.org, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> > 
> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> > >> There are several one-line functions in tmem_xen.h which are useless; this patch
> > >> embeds them into tmem.c directly.
> > >> It also changes void *tmem in struct domain to struct client *tmem_client in
> > >> order to make things more straightforward.
> > >>
> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> > >> ---
> > >>  xen/common/domain.c        |    4 ++--
> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
> > >>  xen/include/xen/sched.h    |    2 +-
> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> > > 
> > > Keir, are you OK with this simple name change?
> > > 
> > 
> > Ping..
> 
> Let's make sure his email is on the 'To' line (I don't see it
> in my copy).

I haven't reviewed the patch or anything but it says "cleanup" -- I
think we are past that point of the release process, aren't we?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:35:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XkC-0006bX-NJ; Tue, 07 Jan 2014 14:35:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0XkB-0006bO-BL
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:34:59 +0000
Received: from [85.158.143.35:44368] by server-3.bemta-4.messagelabs.com id
	2C/20-32360-2901CC25; Tue, 07 Jan 2014 14:34:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389105295!10183568!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28867 invoked from network); 7 Jan 2014 14:34:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:34:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90452438"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:34:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:34:38 -0500
Message-ID: <1389105276.12612.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Tue, 7 Jan 2014 14:34:36 +0000
In-Reply-To: <CAOTdubu9yQvaveFtk8H-iu+pH8xCbT3rv7fm==zcyr5usJT6JQ@mail.gmail.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<1389103098.12612.21.camel@kazak.uk.xensource.com>
	<CAOTdubu9yQvaveFtk8H-iu+pH8xCbT3rv7fm==zcyr5usJT6JQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, peter <peter@perkbv.com>,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 14:31 +0000, karim.allah.ahmed@gmail.com wrote:
> On Tue, Jan 7, 2014 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Sun, 2014-01-05 at 16:48 +0000, karim.allah.ahmed@gmail.com wrote:
> >
> >> If you still can't boot with any memory bigger than 128M, as a fast
> >> workaround you can apply this patch.
> >
> > I wonder if it might be possible to work around this by more carefully
> > selecting the load addresses for Xen+Linux+DTB+initrd, such that they
> > are packed into the top end of RAM, leaving a larger contiguous chunk
> > available at the beginning. e.g. if sizeof(Xen)=X and sizeof(Linux)=L
> > and sizeof(DTB)=D (all rounded up to 2M boundary) then load things at:
> >         MEMMAX-X:       Leave free for high relocation of hypervisor
> >         MEMMAX-X-L:     Load Linux here
> >         MEMMAX-X-L-D:   Load DTB here
> >         MEMMAX-X-L-D-X: Load initial Xen image here
> >
> > Ultimately this is because allocations need to be aligned to their size,
> > so on a 1GB system there are only two possible 512MB allocations; if
> > even one page is allocated in each half then it isn't possible to
> > satisfy things. I don't think the core allocator gives us the option to
> > do non-aligned allocations.
> 
> What if we allocated the dom0 from the boot allocator instead (before
> ditching it) ?

Have you checked if the boot allocator has the same constraints?

I'm also not sure if we have enough info during the early phase to know
what we are supposed to be doing (i.e. have we parsed the command line
yet?).

If you can come up with a patch we can consider it, but to be considered
for being 4.4 material it'd have to be pretty straightforward and
obvious.

Ian.
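The packed-layout idea quoted above, and the size-alignment constraint it works around, can be sketched numerically. All constants here (RAM base, image sizes) are made-up placeholders for illustration, not values from Xen or any real board:

```python
# Sketch of the proposed packed load layout. MEMBASE/MEMMAX and the image
# sizes are hypothetical; only the arithmetic mirrors the mail.
MB = 1 << 20

def round_up_2mb(n):
    """Round n up to the next 2 MiB boundary, as the mail assumes."""
    align = 2 * MB
    return (n + align - 1) & ~(align - 1)

MEMBASE = 0x40000000            # assumed RAM start
MEMMAX = MEMBASE + 1024 * MB    # assumed 1 GiB system

X = round_up_2mb(2 * MB)        # sizeof(Xen), assumed
L = round_up_2mb(5 * MB)        # sizeof(Linux), assumed
D = round_up_2mb(1 * MB)        # sizeof(DTB), assumed

layout = {
    "xen reloc hole": MEMMAX - X,            # left free for high relocation
    "linux":          MEMMAX - X - L,
    "dtb":            MEMMAX - X - L - D,
    "xen image":      MEMMAX - X - L - D - X,
}

# The alignment constraint: a 512 MiB allocation must start on a 512 MiB
# boundary, so a 1 GiB machine has exactly two candidate slots, and one
# stray page in each half defeats both.
SLOT = 512 * MB
candidates = [a for a in range(MEMBASE, MEMMAX, SLOT) if a % SLOT == 0]
```

With everything packed at the top, all RAM below `layout["xen image"]` remains one contiguous free chunk for dom0.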



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:35:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xkk-0006fG-4x; Tue, 07 Jan 2014 14:35:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0Xki-0006f4-QP
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:35:33 +0000
Received: from [85.158.139.211:6644] by server-14.bemta-5.messagelabs.com id
	E1/5F-24200-4B01CC25; Tue, 07 Jan 2014 14:35:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389105329!8339726!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31971 invoked from network); 7 Jan 2014 14:35:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:35:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88292456"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 14:35:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 09:35:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0Xke-00064U-Sf;
	Tue, 07 Jan 2014 14:35:28 +0000
Date: Tue, 7 Jan 2014 14:34:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <52CC0614.5050402@redhat.com>
Message-ID: <alpine.DEB.2.02.1401071434020.8667@kaball.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014, Paolo Bonzini wrote:
> Il 07/01/2014 14:26, Stefano Stabellini ha scritto:
> > > The identifiers poisoned by include/qemu/poison.h are
> > > an initial but not complete list. Host and target
> > > endianness is a particularly obvious one, as is the
> > > size of a target long. You may not use these things
> > > in your Xen devices, but "qemu-system-null" implies
> > > more than "weird special purpose thing which only
> > > has Xen devices in it".
> > 
> > I see your point.
> > Could we allow target endianness and long size to be selected at
> > configure time for target-null?
> > The default could be the same as the host, or could even be simply
> > statically determined, maybe little endian, 4 bytes.
> 
> For Xen both long sizes are already supported by the block backend.  Are
> there still guests that use BLKIF_PROTOCOL_NATIVE?  If not, long size
> might not matter at all.
> 
> And if in the future Xen were to grow support for a big-endian target,
> you could either enforce little-endian for the ring buffers, or
> negotiate it in xenstore like you do for sizeof(long).
> 
> So let's call things by their name and add qemu-system-xenpv that covers
> both x86 and ARM and anything else in the future.  Phasing out the
> i386/x86_64 xenpv machine type makes total sense if the exact same code
> can support ARM PV domains too.  This machine would only be compiled if
> you had support for Xen.

I agree with you; I would be happy with this solution.



> My current patches have:
> 
> supported_target() {
>     test "$tcg" = "yes" && return 0
>     supported_kvm_target && return 0
>     supported_xen_target && return 0
>     return 1
> }
> 
> but adding a more refined test for supported-on-TCG would be easy.
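The "more refined test" could look roughly like the sketch below. This is hypothetical, not QEMU's actual configure code: it assumes a per-target `supported_tcg_target` helper (which the real script may not have; the quoted version gates on the global `$tcg` flag), and all three helpers here are stubs for demonstration:

```shell
#!/bin/sh
# Hypothetical sketch of per-target accelerator checks; stubbed helpers.
supported_tcg_target() {            # stub: pretend only arm-softmmu has TCG
    [ "$1" = "arm-softmmu" ]
}
supported_kvm_target() { false; }   # stub: no KVM targets in this sketch
supported_xen_target() {            # stub: the proposed qemu-system-xenpv
    [ "$1" = "xenpv-softmmu" ]
}

supported_target() {
    supported_tcg_target "$1" && return 0
    supported_kvm_target "$1" && return 0
    supported_xen_target "$1" && return 0
    return 1
}

for t in arm-softmmu xenpv-softmmu sparc-softmmu; do
    if supported_target "$t"; then echo "$t: yes"; else echo "$t: no"; fi
done
```

The point is only that moving from `test "$tcg" = "yes"` to a per-target predicate lets a Xen-only machine build without dragging in TCG support.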


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xlf-0006nO-Jv; Tue, 07 Jan 2014 14:36:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Xle-0006nF-T1
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:36:30 +0000
Received: from [85.158.143.35:64068] by server-1.bemta-4.messagelabs.com id
	E1/13-02132-EE01CC25; Tue, 07 Jan 2014 14:36:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389105387!10101840!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9362 invoked from network); 7 Jan 2014 14:36:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:36:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88292730"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 14:35:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:35:59 -0500
Message-ID: <1389105357.12612.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 7 Jan 2014 14:35:57 +0000
In-Reply-To: <52B46365.4000706@linaro.org>
References: <1387552088-9976-1-git-send-email-ian.campbell@citrix.com>
	<52B46365.4000706@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: context switch the aux memory
 attribute registers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2013-12-20 at 15:33 +0000, Julien Grall wrote:
> 
> On 12/20/2013 03:08 PM, Ian Campbell wrote:
> > We appear to have somehow missed these. Linux doesn't actually use them and
> > none of the processors I've looked at actually define any bits in them (so
> > they are UNK/SBZP) but it is good form to context switch them anyway.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Julien Grall <julien.grall@linaro.org>

I feel a bit conflicted giving a release-ack to my own patch but I think
it is low risk and will avoid hard to diagnose failures if Xen is run on
a processor which uses these.

So applied.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xnz-0007IM-8i; Tue, 07 Jan 2014 14:38:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xny-0007ID-2a
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:38:54 +0000
Received: from [85.158.137.68:60763] by server-15.bemta-3.messagelabs.com id
	E8/EC-11556-D711CC25; Tue, 07 Jan 2014 14:38:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389105529!4053133!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9258 invoked from network); 7 Jan 2014 14:38:52 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:38:52 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07EcdfA006652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:38:40 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EccSa007699
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:38:38 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07Ecbj7019274; Tue, 7 Jan 2014 14:38:37 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:38:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1F43B1C18DC; Tue,  7 Jan 2014 09:38:36 -0500 (EST)
Date: Tue, 7 Jan 2014 09:38:36 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>
Message-ID: <20140107143836.GF3588@phenom.dumpdata.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
	<52CBFDDD020000780011112C@nat28.tlf.novell.com>
	<5dcec6d652a27688050262f949e9dc9e@mail.shatteredsilicon.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5dcec6d652a27688050262f949e9dc9e@mail.shatteredsilicon.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Feng Wu <feng.wu@intel.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 12:42:17PM +0000, Gordan Bobic wrote:
> On 2014-01-07 12:15, Jan Beulich wrote:
> >>>>On 07.01.14 at 12:35, Gordan Bobic <gordan@bobich.net> wrote:
> >>On 2014-01-07 11:26, Wu, Feng wrote:
> >>>>-----Original Message-----
> >>>>From: xen-devel-bounces@lists.xen.org
> >>>>[mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> >>>>Sent: Tuesday, January 07, 2014 6:44 PM
> >>>>To: Andrew Cooper
> >>>>Cc: xen-devel@lists.xen.org
> >>>>Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re:
> >>>>iommuu/vt-d
> >>>>issues with LSI MegaSAS (PERC5i))
> >>>>
> >>>>On 2014-01-07 10:38, Andrew Cooper wrote:
> >>>>> On 07/01/14 10:35, Gordan Bobic wrote:
> >>>>>> On 2014-01-07 03:17, Zhang, Yang Z wrote:
> >>>>>>> Konrad Rzeszutek Wilk wrote on 2014-01-07:
> >>>>>>>>> Which would look like this:
> >>>>>>>>>
> >>>>>>>>> C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
> >>>>>>>>> on the card
> >>>>>>>>>           \--------------> IEEE-1394a
> >>>>>>>>>
> >>>>>>>>> I am actually wondering if this 07:00.0 device is the one that
> >>>>>>>>> reports itself as 08:00.0 (which I think is what you were
> >>>>>>>>> alluding to, Jan)
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> And to double check that theory I decided to pass in the IEEE-1394a
> >>>>>>>> to a guest:
> >>>>>>>>
> >>>>>>>>            +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> >>>>>>>> TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> (XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
> >>>>>>>> [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
> >>>>>>>> [VT-D]iommu.c:865: DMAR:[DMA Read] Request device [0000:08:00.0]
> >>>>>>>> fault
> >>>>>>>> addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
> >>>>>>>> 02h] Present bit in context entry is clear (XEN) print_vtd_entries:
> >>>>>>>> iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
> >>>>>>>> root_entry
> >>>>>>>> = ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
> >>>>>>>> context
> >>>>>>>> = ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
> >>>>>>>> ctxt_entry[0]
> >>>>>>>> not present
> >>>>>>>>
> >>>>>>>> So, capture card OK - Likely the Tundra bridge has an issue:
> >>>>>>>>
> >>>>>>>> 07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
> >>>>>>>> (prog-if 01 [Subtractive decode])
> >>>>>>>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
> >>>>VGASnoop-
> >>>>>>>>         ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+
> >>>>>>>> 66MHz-
> >>>>>>>>         UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
> >>>><MAbort+
> >>>>>>>>         >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
> >>>>>>>> secondary=08,
> >>>>>>>>         subordinate=08, sec-latency=32 Memory behind bridge:
> >>>>>>>>         f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
> >>>>>>>>         DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
> >>>>>>>> BridgeCtl:
> >>>>>>>>         Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
> >>>>>>>>                 PriDiscTmr- SecDiscTmr- DiscTmrStat-
> >>>>DiscTmrSERREn-
> >>>>>>>>         Capabilities: [60] Subsystem: Super Micro Computer Inc
> >>>>>>>> Device 0805
> >>>>>>>>         Capabilities: [a0] Power Management version 3
> >>>>>>>>                 Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
> >>>>>>>>                 PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
> >>>>>>>> NoSoftRst+
> >>>>>>>>                 PME-Enable- DSel=0 DScale=0 PME-
> >>>>>>>>
> >>>>>>>> or there is some unknown bridge in the motherboard.
> >>>>>>>
> >>>>>>> According to your description above, upstream Linux should also
> >>>>>>> have the same problem. Did you see it with upstream Linux?

I did not even think to test that. Sadly, I won't be able to do many
reboot/shutdown cycles, as this is a production machine.

> >>>>>>
> >>>>>> The problem I was seeing with LSI cards (phantom device doing DMA)
> >>>>>> does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> >>>>>> bare metal Linux, the same problem occurs as with Xen.
> >>>>>>
> >>>>>>> There may be some buggy devices that generate DMA requests with an
> >>>>>>> internal BDF but don't expose it (unlike phantom devices). For those
> >>>>>>> devices, I think we need to set up the VT-d page tables manually.
> >>>>>>
> >>>>>> I think what is needed is a pci-phantom style override that tells the
> >>>>>> hypervisor to tell the IOMMU to allow DMA traffic from a specific
> >>>>>> invisible device ID.
> >>>>>>
> >>>>>> Gordan
> >>>>>
> >>>>> There is.  See "pci-phantom" in
> >>>>> http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> >>>>
> >>>>I thought this was only applicable to phantom _functions_ (number
> >>>>after
> >>>>the
> >>>>dot) rather than whole phantom _devices_. Is that not the case?
> >>>
> >>>I think that's right. I went through the related code for the PCI
> >>>phantom device just now, and I found that the information from the
> >>>command line 'pci-phantom' is stored in the variable 'phantom_devs[8]'
> >>>of type 'struct phantom_dev'. This variable is used in the function
> >>>alloc_pdev() as follows:
> >>>
> >>>
> >>>                for ( i = 0; i < nr_phantom_devs; ++i )
> >>>                    if ( phantom_devs[i].seg == pseg->nr &&
> >>>                         phantom_devs[i].bus == bus &&
> >>>                         phantom_devs[i].slot == PCI_SLOT(devfn) &&
> >>>                         phantom_devs[i].stride > PCI_FUNC(devfn) )
> >>>                    {
> >>>                        pdev->phantom_stride =
> >>>phantom_devs[i].stride;
> >>>                        break;
> >>>                    }
> >>>
> >>>So from the code, we can see this command-line option only works for
> >>>phantom _functions_, not for whole phantom _devices_.
> >>
> >>What would it take to make it work for a whole phantom device?
> >
> >First and foremost a definition of what a phantom device is and
> >how one would behave. Once again - phantom functions are part
> >of the PCIe specification, so those don't require a definition.
> 
> Konrad's patch from a while back seemed to do the required thing to
> allow an otherwise invisible/undetected device to do DMA transfers
> without freaking out the IOMMU that doesn't know about it.

Except it didn't work :-) That was the first thing I tried with this
motherboard. And it looks like there are extra things I would need
to modify in the hypervisor for it to work (such as making the
hypervisor create a fake PCI device with BARs and so on).

Which is actually what I was going to try out - see if I can make the
hypervisor add a PCI device entry for a non-existent PCI device (one
that does not show up in the PCI configuration scan).

That requires knowing the MMIO BARs the 'fake' device has, and
.. well, whatever else the Intel VT-d code requires.

For reference, here is the code that Gordan was mentioning:


#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/stat.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <linux/limits.h>
#include <linux/device.h>
#include <linux/pci.h>

#include <xen/interface/xen.h>
#include <xen/interface/physdev.h>

#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>

#define LSI_HACK  "0.1"

MODULE_AUTHOR("Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>");
MODULE_DESCRIPTION("lsi hack");
MODULE_LICENSE("GPL");
MODULE_VERSION(LSI_HACK);

static int __init lsi_hack_init(void)
{
        int r = 0;

        struct physdev_manage_pci manage_pci = {
                        .bus    = 0x8,
                        .devfn  = PCI_DEVFN(0,0),
                };
        r = HYPERVISOR_physdev_op(PHYSDEVOP_manage_pci_add,
                        &manage_pci);

        return r;
}

static void __exit lsi_hack_exit(void)
{
        int r = 0;
        struct physdev_manage_pci manage_pci;

        manage_pci.bus = 0x8;
        manage_pci.devfn = PCI_DEVFN(0,0);

        r = HYPERVISOR_physdev_op(PHYSDEVOP_manage_pci_remove,
                &manage_pci);
        if (r)
                printk(KERN_ERR "%s: %d\n", __FUNCTION__, r);
}

module_init(lsi_hack_init);
module_exit(lsi_hack_exit);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:39:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XoC-0007LS-Lv; Tue, 07 Jan 2014 14:39:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0Xo5-0007Ju-J0
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:39:06 +0000
Received: from [85.158.143.35:44208] by server-1.bemta-4.messagelabs.com id
	2B/57-02132-4811CC25; Tue, 07 Jan 2014 14:39:00 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389105538!10184869!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27438 invoked from network); 7 Jan 2014 14:39:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:39:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88294726"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 14:38:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 09:38:57 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0Xo1-00067f-4o;
	Tue, 07 Jan 2014 14:38:57 +0000
Date: Tue, 7 Jan 2014 14:38:57 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140107143857.GI10654@zion.uk.xensource.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CC0614.5050402@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:50:12PM +0100, Paolo Bonzini wrote:
> Il 07/01/2014 14:26, Stefano Stabellini ha scritto:
> > > The identifiers poisoned by include/qemu/poison.h are
> > > an initial but not complete list. Host and target
> > > endianness is a particularly obvious one, as is the
> > > size of a target long. You may not use these things
> > > in your Xen devices, but "qemu-system-null" implies
> > > more than "weird special purpose thing which only
> > > has Xen devices in it".
> > 
> > I see your point.
> > Could we allow target endianness and long size to be selected at
> > configure time for target-null?
> > The default could be the same as the host, or could even be simply
> > statically determined, maybe little-endian, 4 bytes.
> 
> For Xen both long sizes are already supported by the block backend.  Are
> there still guests that use BLKIF_PROTOCOL_NATIVE?  If not, long size
> might not matter at all.
> 
> And if in the future Xen were to grow support for a big-endian target,
> you could either enforce little-endian for the ring buffers, or
> negotiate it in xenstore like you do for sizeof(long).
> 
> So let's call things by their name and add qemu-system-xenpv that covers
> both x86 and ARM and anything else in the future.  Phasing out the

I think this makes sense. But does it deserve to be in default-configs/? It
would become default-configs/xenpv-softmmu.mak, and target-xenpv would need
to be created.

> i386/x86_64 xenpv machine type makes total sense if the exact same code
> can support ARM PV domains too.  This machine would only be compiled if
> you had support for Xen.  My current patches have:
> 
> supported_target() {
>     test "$tcg" = "yes" && return 0
>     supported_kvm_target && return 0
>     supported_xen_target && return 0
>     return 1
> }
> 
> but adding a more refined test for supported-on-TCG would be easy.
> 

I think implementing qemu-system-xenpv will be easier after your TCG
series goes in. In that case I don't need to worry about TCG stubs
anymore.

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:50:12PM +0100, Paolo Bonzini wrote:
> Il 07/01/2014 14:26, Stefano Stabellini ha scritto:
> > > The identifiers poisoned by include/qemu/poison.h are
> > > an initial but not complete list. Host and target
> > > endianness is a particularly obvious one, as is the
> > > size of a target long. You may not use these things
> > > in your Xen devices, but "qemu-system-null" implies
> > > more than "weird special purpose thing which only
> > > has Xen devices in it".
> > 
> > I see your point.
> > Could we allow target endianness and long size to be selected at
> > configure time for target-null?
> > The default could be the same as the host, or could even be simply
> > statically determined, maybe little endian, 4 bytes.
> 
> For Xen both long sizes are already supported by the block backend.  Are
> there still guests that use BLKIF_PROTOCOL_NATIVE?  If not, long size
> might not matter at all.
> 
> And if in the future Xen were to grow support for a big-endian target,
> you could either enforce little-endian for the ring buffers, or
> negotiate it in xenstore like you do for sizeof(long).
> 
> So let's call things by their name and add qemu-system-xenpv that covers
> both x86 and ARM and anything else in the future.  Phasing out the

I think this makes sense. But does it deserve to be in default-configs/? It
will become default-configs/xenpv-softmmu.mak and target-xenpv shall be
created.

> i386/x86_64 xenpv machine type makes total sense if the exact same code
> can support ARM PV domains too.  This machine would only be compiled if
> you had support for Xen.  My current patches have:
> 
> supported_target() {
>     test "$tcg" = "yes" && return 0
>     supported_kvm_target && return 0
>     supported_xen_target && return 0
>     return 1
> }
> 
> but adding a more refined test for supported-on-TCG would be easy.
> 
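The "more refined test" Paolo mentions could look something like the following sketch. The function name and wiring are hypothetical (not from Paolo's actual patches); the idea is just a per-target hook that requires Xen itself rather than falling back to the blanket TCG check:

```shell
# Hypothetical sketch only: a per-target check for a xenpv target.
# xenpv has no CPU emulation to fall back on, so TCG alone is not
# sufficient; require Xen support in the build.
supported_xenpv_target() {
    test "$xen" = "yes" && return 0
    return 1
}
```

configure would then call such a hook for the xenpv target instead of the generic `test "$tcg" = "yes"` fallback shown above.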

I think implementing qemu-system-xenpv will be easier after your TCG
series goes in. In that case I don't need to worry about TCG stubs
anymore.

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:39:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xol-0007Yi-B6; Tue, 07 Jan 2014 14:39:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0Xok-0007YQ-B7
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 14:39:42 +0000
Received: from [193.109.254.147:7867] by server-11.bemta-14.messagelabs.com id
	F1/AC-20576-DA11CC25; Tue, 07 Jan 2014 14:39:41 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389105579!9336519!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18773 invoked from network); 7 Jan 2014 14:39:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:39:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90454971"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:39:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 09:39:18 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W0XoL-00067l-Py;
	Tue, 07 Jan 2014 14:39:17 +0000
Message-ID: <52CC1195.4020808@citrix.com>
Date: Tue, 7 Jan 2014 14:39:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1388754947.28243.4.camel@hamster.uk.xensource.com>
	<52CC1BE302000078001112AE@nat28.tlf.novell.com>
In-Reply-To: <52CC1BE302000078001112AE@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>, xen-devel@lists.xensource.com,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] Avoid race conditions in HPET initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 14:23, Jan Beulich wrote:
>>>> On 03.01.14 at 14:15, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
>> Avoid turning on legacy interrupts before hpet_event has been set up.
>> Particularly, the spinlock can be uninitialised at the point at which
>> the interrupt first arrives.
> I suppose you actually saw this issue, but I currently fail to see how
> it would occur:
>
>         spin_lock_init(&hpet_events[i].lock);
>         wmb();
>         hpet_events[i].event_handler = handle_hpet_broadcast;
>
> guarantees that the lock gets initialized before the handler gets set
> (i.e. if anything you'd do a call through a NULL pointer). And this
>
>     if ( !num_hpets_used )
>         hpet_events->flags = HPET_EVT_LEGACY;
>
> happens even later, yet hpet_legacy_irq_tick() checks that flag
> before calling the handler (and hence before taking the lock).
>
> Before applying the patch I'd like to understand what I'm
> overlooking.
>
> Jan
>

We did indeed find this issue, but I overlooked a key factor.

XenServer is running with my HPET series to fix the stack overflows
which automated testing reliably finds.

My series changes the initialisation order of this, opening up this race
condition.


Overall, turning on the HPET interrupt before initialising its structure
is somewhat poor form, but now that you have pointed it out, I don't think
that current upstream is vulnerable to the uninitialised spinlock.

It might be better if I just folded this fix into my series.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:41:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xqe-0007nX-AD; Tue, 07 Jan 2014 14:41:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W0Xqc-0007nK-6s
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:41:38 +0000
Received: from [193.109.254.147:31071] by server-5.bemta-14.messagelabs.com id
	40/9C-03510-1221CC25; Tue, 07 Jan 2014 14:41:37 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389105696!9366621!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3241 invoked from network); 7 Jan 2014 14:41:36 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-8.tower-27.messagelabs.com with SMTP;
	7 Jan 2014 14:41:36 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s07EeVsW004631
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 09:40:31 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-36.ams2.redhat.com
	[10.36.112.36])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s07EeRS3030171; Tue, 7 Jan 2014 09:40:28 -0500
Message-ID: <52CC11DA.4050508@redhat.com>
Date: Tue, 07 Jan 2014 15:40:26 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
	<20140107143857.GI10654@zion.uk.xensource.com>
In-Reply-To: <20140107143857.GI10654@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 07/01/2014 15:38, Wei Liu ha scritto:
> On Tue, Jan 07, 2014 at 02:50:12PM +0100, Paolo Bonzini wrote:
>> Il 07/01/2014 14:26, Stefano Stabellini ha scritto:
>>>> The identifiers poisoned by include/qemu/poison.h are
>>>> an initial but not complete list. Host and target
>>>> endianness is a particularly obvious one, as is the
>>>> size of a target long. You may not use these things
>>>> in your Xen devices, but "qemu-system-null" implies
>>>> more than "weird special purpose thing which only
>>>> has Xen devices in it".
>>>
>>> I see your point.
>>> Could we allow target endianness and long size to be selected at
>>> configure time for target-null?
>>> The default could be the same as the host, or could even be simply
>>> statically determined, maybe little endian, 4 bytes.
>>
>> For Xen both long sizes are already supported by the block backend.  Are
>> there still guests that use BLKIF_PROTOCOL_NATIVE?  If not, long size
>> might not matter at all.
>>
>> And if in the future Xen were to grow support for a big-endian target,
>> you could either enforce little-endian for the ring buffers, or
>> negotiate it in xenstore like you do for sizeof(long).
>>
>> So let's call things by their name and add qemu-system-xenpv that covers
>> both x86 and ARM and anything else in the future.  Phasing out the
> 
> I think this makes sense. But does it deserve to be in default-configs/?

Sure.  You could build a qemu-system-xenpv variant that doesn't have the
framebuffer, for example.

> It will become default-configs/xenpv-softmmu.mak and target-xenpv shall be
> created.

Yes, exactly.

> I think implementing qemu-system-xenpv will be easier after your TCG
> series goes in. In that case I don't need to worry about TCG stubs
> anymore.

Right.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xt3-0007yv-Th; Tue, 07 Jan 2014 14:44:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xt2-0007yf-SI
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:44:09 +0000
Received: from [193.109.254.147:33686] by server-11.bemta-14.messagelabs.com
	id B8/B4-20576-8B21CC25; Tue, 07 Jan 2014 14:44:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389105846!9330145!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2926 invoked from network); 7 Jan 2014 14:44:07 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:44:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07Ei2S0002321
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:44:02 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07Ei05j019076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:44:00 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07Ehxo3012834; Tue, 7 Jan 2014 14:43:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:43:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F058A1C18DC; Tue,  7 Jan 2014 09:43:55 -0500 (EST)
Date: Tue, 7 Jan 2014 09:43:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140107144355.GH3588@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CC08C2.5090004@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	-	CPU#1 stuck for 22s! RIP:
	e030:[<ffffffff81109a58>]	[<ffffffff81109a58>]
	generic_exec_single+0x88/0xc0	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
> On 07/01/14 11:53, Sander Eikelenboom wrote:
> > Hi Konrad,
> > 
> > A new year and a new Linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
> > But dom0 seems to blow up for me... (without this branch pulled it works OK)

Hot damn! Thank you for testing so quickly!
> > 
> > Xen: latest xen-unstable
> 
> The FIFO-based event channel ABI is broken in current xen-unstable.
> 
> You need the two patches from:
> 
> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
> 
> You can also disable the guest's use of the FIFO ABI with the
> xen.fifo_events=0 kernel command line option.
> 
> David
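For reference, the parameter David mentions goes on the dom0/domU kernel command line. One way it might be applied on a grub-based system is sketched below; the file path and variable name vary by distro, so treat this as an assumption to verify locally:

```shell
# /etc/default/grub (illustrative): append the parameter so the kernel
# falls back to the 2-level event channel ABI instead of the FIFO ABI,
# then regenerate the grub config (e.g. update-grub or grub2-mkconfig).
GRUB_CMDLINE_LINUX_DEFAULT="quiet xen.fifo_events=0"
```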

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
> On 07/01/14 11:53, Sander Eikelenboom wrote:
> > Hi Konrad,
> > 
> > A new year and a new linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)

Hot damn! Thank you for testing so quickly!
> > 
> > Xen: latest xen-unstable
> 
> The FIFO-based event channel ABI is broken in current xen-unstable.
> 
> You need the two patches from:
> 
> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
> 
> You can also disable the guest's use of the FIFO ABI with the
> xen.fifo_events=0 kernel command-line option.
> 
> David
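
For anyone wanting to try the workaround, the parameter belongs on the dom0 kernel command line. A minimal sketch of a GRUB2 stanza (the paths, kernel version, and dom0_mem value are made-up examples):

```shell
# Hypothetical GRUB2 entry booting a Xen dom0 with the FIFO event
# channel ABI disabled.  Adjust paths and versions for your system.
menuentry 'Xen (FIFO events disabled)' {
    multiboot /boot/xen.gz dom0_mem=2048M
    # xen.fifo_events=0 tells the dom0 kernel to fall back to the
    # 2-level event channel ABI instead of the FIFO-based one.
    module /boot/vmlinuz-3.13 root=/dev/sda1 ro xen.fifo_events=0
    module /boot/initrd.img-3.13
}
```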

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:44:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XtN-00082E-BL; Tue, 07 Jan 2014 14:44:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0XtM-00081u-Dj
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:44:28 +0000
Received: from [85.158.137.68:24035] by server-10.bemta-3.messagelabs.com id
	60/9F-23989-BC21CC25; Tue, 07 Jan 2014 14:44:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389105866!6935870!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31620 invoked from network); 7 Jan 2014 14:44:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 14:44:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 14:44:26 +0000
Message-Id: <52CC20D50200007800111313@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 14:44:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
	<52CBDAA3.2000403@oracle.com>
	<20140107142702.GC3588@phenom.dumpdata.com>
	<1389105127.12612.35.camel@kazak.uk.xensource.com>
In-Reply-To: <1389105127.12612.35.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Bob Liu <lliubbo@gmail.com>, keir@xen.org, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 15:32, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
>> > 
>> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
>> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> > >> There are several one-line functions in tmem_xen.h which are useless;
>> > >> this patch embeds them into tmem.c directly.
>> > >> Also modify void *tmem in struct domain to struct client *tmem_client
>> > >> in order to make things more straightforward.
>> > >>
>> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> > >> ---
>> > >>  xen/common/domain.c        |    4 ++--
>> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
>> > >>  xen/include/xen/sched.h    |    2 +-
>> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
>> > > 
>> > > Keir, are you OK with this simple name change?
>> > > 
>> > 
>> > Ping..
>> 
>> Let's make sure his email is on the 'To' (I don't see it
>> in my email?)
> 
> I haven't reviewed the patch or anything but it says "cleanup" -- I
> think we are past that point of the release process, aren't we?

It has been pending for quite a while, and tmem isn't critical code,
so I would tend towards taking it if we can get Keir's ack (if there
wasn't that relatively trivial change to common code, I would have
pulled the set already).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:44:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xtq-00088T-PK; Tue, 07 Jan 2014 14:44:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0Xtp-000882-2Y
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:44:57 +0000
Received: from [193.109.254.147:46622] by server-11.bemta-14.messagelabs.com
	id 7B/96-20576-8E21CC25; Tue, 07 Jan 2014 14:44:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389105895!9367595!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29561 invoked from network); 7 Jan 2014 14:44:55 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:44:55 -0000
Received: by mail-ea0-f179.google.com with SMTP id r15so251272ead.24
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 06:44:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Wwndg7yk/tECAxQBPq4x2ilDXo8Mv/zINq8h77Yt0/w=;
	b=CWcJ/mXbmd3lijDW34hUjJMt2ceeDc2t3HTfvHiBbrxB8FhCBw9WCNRGVRJ7CxIL8l
	0bfEjkd6Sxt+FX+d4lj5Aq6hTuDdNJxODgPk++akHSQNoVs1zaiNz7PVUAKPK6Tbpb4S
	7lvd//Eo2zLQ0VsBKGenYoemN2O/TV+tKqYygAiiEbJG1tctAeMqzgMKx3yR/+cbGBGM
	P2CPDr83pG18n5LClarvKyVs+AUtdxYoFX7xBp3zqKF387p3UODofwNFrAkVtm2QkEfX
	Ltrcua65p2PeeEnPGvFFB7lIbVW0GODaNABkn+JufdOn66xeGkWe1oipGfRZ33NGM8x8
	ZBqQ==
X-Gm-Message-State: ALoCoQlphYXSpnde5zuwmCY3ye2YibydosqDsbWERBhTIdOl02KhCGiUl0LCSrfJYNbLaUsdnTsx
X-Received: by 10.14.209.129 with SMTP id s1mr94254164eeo.21.1389105895350;
	Tue, 07 Jan 2014 06:44:55 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id n7sm163014eef.5.2014.01.07.06.44.53
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 06:44:54 -0800 (PST)
Message-ID: <52CC12E5.1000202@linaro.org>
Date: Tue, 07 Jan 2014 14:44:53 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
References: <WC20131216075445.88000B@perkbv.com>
	<1387188577.20076.44.camel@kazak.uk.xensource.com>
	<WC20131216110403.65000D@perkbv.com>
	<1387199654.10247.30.camel@kazak.uk.xensource.com>
	<WC20131217092526.720011@perkbv.com> <52B05913.2080300@linaro.org>
	<CAOTdubt3ZpW4i=ce4U420OH_UrDoOjWV0f-B2eKXu+33UGFdgQ@mail.gmail.com>
	<1389103098.12612.21.camel@kazak.uk.xensource.com>
	<CAOTdubu9yQvaveFtk8H-iu+pH8xCbT3rv7fm==zcyr5usJT6JQ@mail.gmail.com>
In-Reply-To: <CAOTdubu9yQvaveFtk8H-iu+pH8xCbT3rv7fm==zcyr5usJT6JQ@mail.gmail.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>, peter <peter@perkbv.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] XEN[ARM] Master not working on Allwinner A20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 02:31 PM, karim.allah.ahmed@gmail.com wrote:
> On Tue, Jan 7, 2014 at 1:58 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Sun, 2014-01-05 at 16:48 +0000, karim.allah.ahmed@gmail.com wrote:
>>
>>> If you still can't boot with any memory bigger than 128M, as a fast
>>> workaround you can apply this patch.
>>
>> I wonder if it might be possible to work around this by more carefully
>> selecting the load addresses for Xen+Linux+DTB+initrd, such that they
>> are packed into the top end of RAM, leaving a larger contiguous chunk
>> available at the beginning. e.g. if sizeof(Xen)=X and sizeof(Linux)=L
>> and sizeof(DTB)=D (all rounded up to 2M boundary) then load things at:
>>         MEMMAX-X:       Leave free for high relocation of hypervisor
>>         MEMMAX-X-L:     Load Linux here
>>         MEMMAX-X-L-D:   Load DTB here
>>         MEMMAX-X-L-D-X: Load initial Xen image here
>>
>> Ultimately this is because allocations need to be aligned to their size,
>> so on a 1GB system there are only two possible 512MB allocations, if
>> even one page is allocated in each half then it isn't possible to
>> satisfy things. I don't think the core allocator gives us the option to
>> do non-aligned allocations.
> 
> What if we allocated dom0 from the boot allocator instead (before
> ditching it)?

If I remember correctly, Anthony did this kind of modification for the
first port of Xen on the Arndale.

It's too intrusive in the code. As I said previously, the best
solution is having multiple-bank support for dom0. It will take you less
time to write such a patch.

-- 
Julien Grall
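
The top-of-RAM packing Ian describes above can be checked with a few lines of arithmetic. A sketch (the 2M round-up mirrors the alignment he assumes; MEMMAX and the image sizes are made-up example values):

```python
MB = 1 << 20

def round_up_2m(size):
    """Round a size up to the next 2 MiB boundary."""
    return (size + 2 * MB - 1) & ~(2 * MB - 1)

def pack_high(memmax, xen, linux, dtb):
    """Stack the images down from the top of RAM, as in Ian's layout."""
    x, l, d = (round_up_2m(s) for s in (xen, linux, dtb))
    return {
        "xen_reloc": memmax - x,              # left free for high relocation
        "linux":     memmax - x - l,          # load Linux here
        "dtb":       memmax - x - l - d,      # load DTB here
        "xen_image": memmax - x - l - d - x,  # load initial Xen image here
    }

layout = pack_high(memmax=1024 * MB, xen=2 * MB, linux=6 * MB, dtb=1 * MB)
for name, addr in layout.items():
    print(f"{name:9s} 0x{addr:08x}")
```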

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:45:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xu3-0008AG-EC; Tue, 07 Jan 2014 14:45:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xtv-00089Q-0g
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:45:09 +0000
Received: from [85.158.139.211:34101] by server-13.bemta-5.messagelabs.com id
	6E/70-11357-EE21CC25; Tue, 07 Jan 2014 14:45:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389105898!8132131!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 7 Jan 2014 14:45:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:45:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07EiT50011819
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:44:30 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EiRsM003486
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:44:28 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07EiQb0014300; Tue, 7 Jan 2014 14:44:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:44:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 618DA1C18DC; Tue,  7 Jan 2014 09:44:24 -0500 (EST)
Date: Tue, 7 Jan 2014 09:44:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>
Message-ID: <20140107144424.GI3588@phenom.dumpdata.com>
References: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-next@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	mingo@redhat.com, tglx@linutronix.de
Subject: Re: [Xen-devel] randconfig build error with next-20140107,
 in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 07:03:50AM -0700, Jim Davis wrote:
> Building with the attached random configuration file,
> 
> arch/x86/xen/grant-table.c: In function ‘xen_pvh_gnttab_setup’:
> arch/x86/xen/grant-table.c:181:2: error: implicit declaration of
> function ‘xen_pvh_domain’ [-Werror=implicit-function-declaration]
>   if (!xen_pvh_domain())
>   ^
> cc1: some warnings being treated as errors
> make[2]: *** [arch/x86/xen/grant-table.o] Error 1

Yeah, I got the same error from the 0-build test system. Will
a fix in today.

Thank you for reporting!

> #
> # Automatically generated file; DO NOT EDIT.
> # Linux/x86 3.13.0-rc7 Kernel Configuration
> #
> CONFIG_64BIT=y
> CONFIG_X86_64=y
> CONFIG_X86=y
> CONFIG_INSTRUCTION_DECODER=y
> CONFIG_OUTPUT_FORMAT="elf64-x86-64"
> CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_STACKTRACE_SUPPORT=y
> CONFIG_HAVE_LATENCYTOP_SUPPORT=y
> CONFIG_MMU=y
> CONFIG_NEED_DMA_MAP_STATE=y
> CONFIG_NEED_SG_DMA_LENGTH=y
> CONFIG_GENERIC_HWEIGHT=y
> CONFIG_RWSEM_XCHGADD_ALGORITHM=y
> CONFIG_GENERIC_CALIBRATE_DELAY=y
> CONFIG_ARCH_HAS_CPU_RELAX=y
> CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
> CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
> CONFIG_HAVE_SETUP_PER_CPU_AREA=y
> CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
> CONFIG_ARCH_HIBERNATION_POSSIBLE=y
> CONFIG_ARCH_SUSPEND_POSSIBLE=y
> CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
> CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
> CONFIG_ZONE_DMA32=y
> CONFIG_AUDIT_ARCH=y
> CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
> CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
> CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
> CONFIG_ARCH_SUPPORTS_UPROBES=y
> CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
> CONFIG_CONSTRUCTORS=y
> CONFIG_IRQ_WORK=y
> CONFIG_BUILDTIME_EXTABLE_SORT=y
> 
> #
> # General setup
> #
> CONFIG_BROKEN_ON_SMP=y
> CONFIG_INIT_ENV_ARG_LIMIT=32
> CONFIG_CROSS_COMPILE=""
> CONFIG_COMPILE_TEST=y
> CONFIG_LOCALVERSION=""
> CONFIG_LOCALVERSION_AUTO=y
> CONFIG_HAVE_KERNEL_GZIP=y
> CONFIG_HAVE_KERNEL_BZIP2=y
> CONFIG_HAVE_KERNEL_LZMA=y
> CONFIG_HAVE_KERNEL_XZ=y
> CONFIG_HAVE_KERNEL_LZO=y
> CONFIG_HAVE_KERNEL_LZ4=y
> CONFIG_KERNEL_GZIP=y
> # CONFIG_KERNEL_BZIP2 is not set
> # CONFIG_KERNEL_LZMA is not set
> # CONFIG_KERNEL_XZ is not set
> # CONFIG_KERNEL_LZO is not set
> # CONFIG_KERNEL_LZ4 is not set
> CONFIG_DEFAULT_HOSTNAME="(none)"
> CONFIG_SYSVIPC=y
> # CONFIG_POSIX_MQUEUE is not set
> CONFIG_FHANDLE=y
> CONFIG_AUDIT=y
> CONFIG_AUDITSYSCALL=y
> CONFIG_AUDIT_WATCH=y
> CONFIG_AUDIT_TREE=y
> 
> #
> # IRQ subsystem
> #
> CONFIG_GENERIC_IRQ_PROBE=y
> CONFIG_GENERIC_IRQ_SHOW=y
> CONFIG_GENERIC_IRQ_CHIP=y
> CONFIG_IRQ_DOMAIN=y
> CONFIG_IRQ_DOMAIN_DEBUG=y
> CONFIG_IRQ_FORCED_THREADING=y
> CONFIG_SPARSE_IRQ=y
> CONFIG_CLOCKSOURCE_WATCHDOG=y
> CONFIG_ARCH_CLOCKSOURCE_DATA=y
> CONFIG_GENERIC_TIME_VSYSCALL=y
> CONFIG_GENERIC_CLOCKEVENTS=y
> CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
> CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
> CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
> CONFIG_GENERIC_CMOS_UPDATE=y
> 
> #
> # Timers subsystem
> #
> CONFIG_TICK_ONESHOT=y
> CONFIG_NO_HZ_COMMON=y
> # CONFIG_HZ_PERIODIC is not set
> CONFIG_NO_HZ_IDLE=y
> CONFIG_NO_HZ=y
> CONFIG_HIGH_RES_TIMERS=y
> 
> #
> # CPU/Task time and stats accounting
> #
> CONFIG_TICK_CPU_ACCOUNTING=y
> # CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
> # CONFIG_IRQ_TIME_ACCOUNTING is not set
> # CONFIG_BSD_PROCESS_ACCT is not set
> CONFIG_TASKSTATS=y
> CONFIG_TASK_DELAY_ACCT=y
> # CONFIG_TASK_XACCT is not set
> 
> #
> # RCU Subsystem
> #
> CONFIG_TINY_RCU=y
> # CONFIG_PREEMPT_RCU is not set
> CONFIG_RCU_STALL_COMMON=y
> # CONFIG_TREE_RCU_TRACE is not set
> CONFIG_IKCONFIG=y
> CONFIG_LOG_BUF_SHIFT=17
> CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
> CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> CONFIG_ARCH_SUPPORTS_INT128=y
> CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
> # CONFIG_CGROUPS is not set
> CONFIG_CHECKPOINT_RESTORE=y
> CONFIG_NAMESPACES=y
> # CONFIG_UTS_NS is not set
> # CONFIG_IPC_NS is not set
> CONFIG_USER_NS=y
> CONFIG_PID_NS=y
> # CONFIG_NET_NS is not set
> CONFIG_UIDGID_STRICT_TYPE_CHECKS=y
> # CONFIG_SCHED_AUTOGROUP is not set
> # CONFI
R19TWVNGU19ERVBSRUNBVEVEIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVMQVk9eQo+IENPTkZJR19C
TEtfREVWX0lOSVRSRD15Cj4gQ09ORklHX0lOSVRSQU1GU19TT1VSQ0U9IiIKPiAjIENPTkZJR19S
RF9HWklQIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SRF9CWklQMiBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfUkRfTFpNQSBpcyBub3Qgc2V0Cj4gQ09ORklHX1JEX1haPXkKPiAjIENPTkZJR19SRF9MWk8g
aXMgbm90IHNldAo+ICMgQ09ORklHX1JEX0xaNCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NDX09QVElN
SVpFX0ZPUl9TSVpFPXkKPiBDT05GSUdfQU5PTl9JTk9ERVM9eQo+IENPTkZJR19TWVNDVExfRVhD
RVBUSU9OX1RSQUNFPXkKPiBDT05GSUdfSEFWRV9QQ1NQS1JfUExBVEZPUk09eQo+IENPTkZJR19F
WFBFUlQ9eQo+IENPTkZJR19LQUxMU1lNUz15Cj4gQ09ORklHX0tBTExTWU1TX0FMTD15Cj4gQ09O
RklHX1BSSU5USz15Cj4gIyBDT05GSUdfQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfUENTUEtSX1BM
QVRGT1JNPXkKPiAjIENPTkZJR19CQVNFX0ZVTEwgaXMgbm90IHNldAo+IENPTkZJR19GVVRFWD15
Cj4gQ09ORklHX0VQT0xMPXkKPiAjIENPTkZJR19TSUdOQUxGRCBpcyBub3Qgc2V0Cj4gQ09ORklH
X1RJTUVSRkQ9eQo+IENPTkZJR19FVkVOVEZEPXkKPiBDT05GSUdfU0hNRU09eQo+IENPTkZJR19B
SU89eQo+ICMgQ09ORklHX1BDSV9RVUlSS1MgaXMgbm90IHNldAo+IENPTkZJR19FTUJFRERFRD15
Cj4gQ09ORklHX0hBVkVfUEVSRl9FVkVOVFM9eQo+IAo+ICMKPiAjIEtlcm5lbCBQZXJmb3JtYW5j
ZSBFdmVudHMgQW5kIENvdW50ZXJzCj4gIwo+IENPTkZJR19QRVJGX0VWRU5UUz15Cj4gIyBDT05G
SUdfREVCVUdfUEVSRl9VU0VfVk1BTExPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfVk1fRVZFTlRf
Q09VTlRFUlMgaXMgbm90IHNldAo+ICMgQ09ORklHX1NMVUJfREVCVUcgaXMgbm90IHNldAo+IENP
TkZJR19DT01QQVRfQlJLPXkKPiAjIENPTkZJR19TTEFCIGlzIG5vdCBzZXQKPiBDT05GSUdfU0xV
Qj15Cj4gIyBDT05GSUdfU0xPQiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUFJPRklMSU5HIGlzIG5v
dCBzZXQKPiBDT05GSUdfSEFWRV9PUFJPRklMRT15Cj4gQ09ORklHX09QUk9GSUxFX05NSV9USU1F
Uj15Cj4gIyBDT05GSUdfSlVNUF9MQUJFTCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSEFWRV82NEJJ
VF9BTElHTkVEX0FDQ0VTUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfRUZGSUNJRU5UX1VOQUxJ
R05FRF9BQ0NFU1M9eQo+IENPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkKPiBDT05GSUdf
VVNFUl9SRVRVUk5fTk9USUZJRVI9eQo+IENPTkZJR19IQVZFX0lPUkVNQVBfUFJPVD15Cj4gQ09O
RklHX0hBVkVfS1BST0JFUz15Cj4gQ09ORklHX0hBVkVfS1JFVFBST0JFUz15Cj4gQ09ORklHX0hB
VkVfT1BUUFJPQkVTPXkKPiBDT05GSUdfSEFWRV9LUFJPQkVTX09OX0ZUUkFDRT15Cj4gQ09ORklH
X0hBVkVfQVJDSF9UUkFDRUhPT0s9eQo+IENPTkZJR19IQVZFX0RNQV9BVFRSUz15Cj4gQ09ORklH
X0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkKPiBDT05GSUdfSEFWRV9SRUdTX0FORF9TVEFDS19B
Q0NFU1NfQVBJPXkKPiBDT05GSUdfSEFWRV9ETUFfQVBJX0RFQlVHPXkKPiBDT05GSUdfSEFWRV9I
V19CUkVBS1BPSU5UPXkKPiBDT05GSUdfSEFWRV9NSVhFRF9CUkVBS1BPSU5UU19SRUdTPXkKPiBD
T05GSUdfSEFWRV9VU0VSX1JFVFVSTl9OT1RJRklFUj15Cj4gQ09ORklHX0hBVkVfUEVSRl9FVkVO
VFNfTk1JPXkKPiBDT05GSUdfSEFWRV9QRVJGX1JFR1M9eQo+IENPTkZJR19IQVZFX1BFUkZfVVNF
Ul9TVEFDS19EVU1QPXkKPiBDT05GSUdfSEFWRV9BUkNIX0pVTVBfTEFCRUw9eQo+IENPTkZJR19B
UkNIX0hBVkVfTk1JX1NBRkVfQ01QWENIRz15Cj4gQ09ORklHX0hBVkVfQUxJR05FRF9TVFJVQ1Rf
UEFHRT15Cj4gQ09ORklHX0hBVkVfQ01QWENIR19MT0NBTD15Cj4gQ09ORklHX0hBVkVfQ01QWENI
R19ET1VCTEU9eQo+IENPTkZJR19IQVZFX0FSQ0hfU0VDQ09NUF9GSUxURVI9eQo+IENPTkZJR19I
QVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkKPiAjIENPTkZJR19DQ19TVEFDS1BST1RFQ1RPUiBpcyBu
b3Qgc2V0Cj4gQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX05PTkU9eQo+ICMgQ09ORklHX0NDX1NU
QUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldAo+ICMgQ09ORklHX0NDX1NUQUNLUFJPVEVD
VE9SX1NUUk9ORyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfQ09OVEVYVF9UUkFDS0lORz15Cj4g
Q09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49eQo+IENPTkZJR19IQVZFX0lSUV9U
SU1FX0FDQ09VTlRJTkc9eQo+IENPTkZJR19IQVZFX0FSQ0hfVFJBTlNQQVJFTlRfSFVHRVBBR0U9
eQo+IENPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15Cj4gQ09ORklHX01PRFVMRVNfVVNFX0VM
Rl9SRUxBPXkKPiBDT05GSUdfSEFWRV9JUlFfRVhJVF9PTl9JUlFfU1RBQ0s9eQo+IAo+ICMKPiAj
IEdDT1YtYmFzZWQga2VybmVsIHByb2ZpbGluZwo+ICMKPiBDT05GSUdfR0NPVl9LRVJORUw9eQo+
ICMgQ09ORklHX0dDT1ZfUFJPRklMRV9BTEwgaXMgbm90IHNldAo+IENPTkZJR19HQ09WX0ZPUk1B
VF9BVVRPREVURUNUPXkKPiAjIENPTkZJR19HQ09WX0ZPUk1BVF8zXzQgaXMgbm90IHNldAo+ICMg
Q09ORklHX0dDT1ZfRk9STUFUXzRfNyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSEFWRV9HRU5FUklD
X0RNQV9DT0hFUkVOVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1JUX01VVEVYRVM9eQo+IENPTkZJR19C
QVNFX1NNQUxMPTEKPiAjIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19NT0RVTEVTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19CTE9DSyBpcyBub3Qgc2V0
Cj4gQ09ORklHX1BSRUVNUFRfTk9USUZJRVJTPXkKPiBDT05GSUdfVU5JTkxJTkVfU1BJTl9VTkxP
Q0s9eQo+IENPTkZJR19GUkVFWkVSPXkKPiAKPiAjCj4gIyBQcm9jZXNzb3IgdHlwZSBhbmQgZmVh
dHVyZXMKPiAjCj4gIyBDT05GSUdfWk9ORV9ETUEgaXMgbm90IHNldAo+ICMgQ09ORklHX1NNUCBp
cyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9NUFBBUlNFPXkKPiAjIENPTkZJR19YODZfRVhURU5ERURf
UExBVEZPUk0gaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9JTlRFTF9MUFNTIGlzIG5vdCBzZXQK
PiBDT05GSUdfU0NIRURfT01JVF9GUkFNRV9QT0lOVEVSPXkKPiBDT05GSUdfSFlQRVJWSVNPUl9H
VUVTVD15Cj4gQ09ORklHX1BBUkFWSVJUPXkKPiBDT05GSUdfUEFSQVZJUlRfREVCVUc9eQo+IENP
TkZJR19YRU49eQo+IENPTkZJR19YRU5fRE9NMD15Cj4gQ09ORklHX1hFTl9QUklWSUxFR0VEX0dV
RVNUPXkKPiBDT05GSUdfWEVOX1BWSFZNPXkKPiBDT05GSUdfWEVOX01BWF9ET01BSU5fTUVNT1JZ
PTUwMAo+IENPTkZJR19YRU5fU0FWRV9SRVNUT1JFPXkKPiAjIENPTkZJR19YRU5fREVCVUdfRlMg
aXMgbm90IHNldAo+IENPTkZJR19YRU5fUFZIPXkKPiAjIENPTkZJR19LVk1fR1VFU1QgaXMgbm90
IHNldAo+ICMgQ09ORklHX1BBUkFWSVJUX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0Cj4gQ09O
RklHX1BBUkFWSVJUX0NMT0NLPXkKPiBDT05GSUdfTk9fQk9PVE1FTT15Cj4gQ09ORklHX01FTVRF
U1Q9eQo+ICMgQ09ORklHX01LOCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfTUNPUkUyIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NQVRPTSBpcyBub3Qgc2V0
Cj4gQ09ORklHX0dFTkVSSUNfQ1BVPXkKPiBDT05GSUdfWDg2X0lOVEVSTk9ERV9DQUNIRV9TSElG
VD02Cj4gQ09ORklHX1g4Nl9MMV9DQUNIRV9TSElGVD02Cj4gQ09ORklHX1g4Nl9UU0M9eQo+IENP
TkZJR19YODZfQ01QWENIRzY0PXkKPiBDT05GSUdfWDg2X0NNT1Y9eQo+IENPTkZJR19YODZfTUlO
SU1VTV9DUFVfRkFNSUxZPTY0Cj4gQ09ORklHX1g4Nl9ERUJVR0NUTE1TUj15Cj4gIyBDT05GSUdf
UFJPQ0VTU09SX1NFTEVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9TVVBfSU5URUw9eQo+IENP
TkZJR19DUFVfU1VQX0FNRD15Cj4gQ09ORklHX0NQVV9TVVBfQ0VOVEFVUj15Cj4gQ09ORklHX0hQ
RVRfVElNRVI9eQo+IENPTkZJR19IUEVUX0VNVUxBVEVfUlRDPXkKPiBDT05GSUdfRE1JPXkKPiAj
IENPTkZJR19HQVJUX0lPTU1VIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FMR0FSWV9JT01NVT15Cj4g
Q09ORklHX0NBTEdBUllfSU9NTVVfRU5BQkxFRF9CWV9ERUZBVUxUPXkKPiBDT05GSUdfU1dJT1RM
Qj15Cj4gQ09ORklHX0lPTU1VX0hFTFBFUj15Cj4gQ09ORklHX05SX0NQVVM9MQo+IENPTkZJR19Q
UkVFTVBUX05PTkU9eQo+ICMgQ09ORklHX1BSRUVNUFRfVk9MVU5UQVJZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19QUkVFTVBUIGlzIG5vdCBzZXQKPiBDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQo+IENP
TkZJR19YODZfSU9fQVBJQz15Cj4gIyBDT05GSUdfWDg2X1JFUk9VVEVfRk9SX0JST0tFTl9CT09U
X0lSUVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9NQ0UgaXMgbm90IHNldAo+IENPTkZJR19J
OEs9eQo+IENPTkZJR19NSUNST0NPREU9eQo+ICMgQ09ORklHX01JQ1JPQ09ERV9JTlRFTCBpcyBu
b3Qgc2V0Cj4gQ09ORklHX01JQ1JPQ09ERV9BTUQ9eQo+IENPTkZJR19NSUNST0NPREVfT0xEX0lO
VEVSRkFDRT15Cj4gIyBDT05GSUdfTUlDUk9DT0RFX0lOVEVMX0VBUkxZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19NSUNST0NPREVfQU1EX0VBUkxZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NSUNST0NP
REVfRUFSTFkgaXMgbm90IHNldAo+IENPTkZJR19YODZfTVNSPXkKPiBDT05GSUdfWDg2X0NQVUlE
PXkKPiBDT05GSUdfQVJDSF9QSFlTX0FERFJfVF82NEJJVD15Cj4gQ09ORklHX0FSQ0hfRE1BX0FE
RFJfVF82NEJJVD15Cj4gQ09ORklHX0RJUkVDVF9HQlBBR0VTPXkKPiBDT05GSUdfQVJDSF9TUEFS
U0VNRU1fRU5BQkxFPXkKPiBDT05GSUdfQVJDSF9TUEFSU0VNRU1fREVGQVVMVD15Cj4gQ09ORklH
X0FSQ0hfU0VMRUNUX01FTU9SWV9NT0RFTD15Cj4gIyBDT05GSUdfQVJDSF9NRU1PUllfUFJPQkUg
aXMgbm90IHNldAo+IENPTkZJR19JTExFR0FMX1BPSU5URVJfVkFMVUU9MHhkZWFkMDAwMDAwMDAw
MDAwCj4gQ09ORklHX1NFTEVDVF9NRU1PUllfTU9ERUw9eQo+IENPTkZJR19TUEFSU0VNRU1fTUFO
VUFMPXkKPiBDT05GSUdfU1BBUlNFTUVNPXkKPiBDT05GSUdfSEFWRV9NRU1PUllfUFJFU0VOVD15
Cj4gQ09ORklHX1NQQVJTRU1FTV9FWFRSRU1FPXkKPiBDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBf
RU5BQkxFPXkKPiBDT05GSUdfU1BBUlNFTUVNX0FMTE9DX01FTV9NQVBfVE9HRVRIRVI9eQo+IENP
TkZJR19TUEFSU0VNRU1fVk1FTU1BUD15Cj4gQ09ORklHX0hBVkVfTUVNQkxPQ0s9eQo+IENPTkZJ
R19IQVZFX01FTUJMT0NLX05PREVfTUFQPXkKPiBDT05GSUdfQVJDSF9ESVNDQVJEX01FTUJMT0NL
PXkKPiAjIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RFIGlzIG5vdCBzZXQKPiBDT05GSUdf
TUVNT1JZX0hPVFBMVUc9eQo+IENPTkZJR19NRU1PUllfSE9UUExVR19TUEFSU0U9eQo+ICMgQ09O
RklHX01FTU9SWV9IT1RSRU1PVkUgaXMgbm90IHNldAo+IENPTkZJR19QQUdFRkxBR1NfRVhURU5E
RUQ9eQo+IENPTkZJR19TUExJVF9QVExPQ0tfQ1BVUz00Cj4gQ09ORklHX0FSQ0hfRU5BQkxFX1NQ
TElUX1BNRF9QVExPQ0s9eQo+IENPTkZJR19CQUxMT09OX0NPTVBBQ1RJT049eQo+IENPTkZJR19D
T01QQUNUSU9OPXkKPiBDT05GSUdfTUlHUkFUSU9OPXkKPiBDT05GSUdfUEhZU19BRERSX1RfNjRC
SVQ9eQo+IENPTkZJR19aT05FX0RNQV9GTEFHPTAKPiBDT05GSUdfVklSVF9UT19CVVM9eQo+IENP
TkZJR19NTVVfTk9USUZJRVI9eQo+ICMgQ09ORklHX0tTTSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RF
RkFVTFRfTU1BUF9NSU5fQUREUj00MDk2Cj4gIyBDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0Ug
aXMgbm90IHNldAo+ICMgQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0ggaXMgbm90IHNldAo+IENP
TkZJR19ORUVEX1BFUl9DUFVfS009eQo+IENPTkZJR19DTEVBTkNBQ0hFPXkKPiAjIENPTkZJR19D
TUEgaXMgbm90IHNldAo+ICMgQ09ORklHX1pCVUQgaXMgbm90IHNldAo+IENPTkZJR19aU01BTExP
Qz15Cj4gQ09ORklHX1BHVEFCTEVfTUFQUElORz15Cj4gQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NP
UlJVUFRJT049eQo+IENPTkZJR19YODZfQk9PVFBBUkFNX01FTU9SWV9DT1JSVVBUSU9OX0NIRUNL
PXkKPiBDT05GSUdfWDg2X1JFU0VSVkVfTE9XPTY0Cj4gIyBDT05GSUdfTVRSUiBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQVJDSF9SQU5ET00gaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9TTUFQIGlz
IG5vdCBzZXQKPiBDT05GSUdfRUZJPXkKPiAjIENPTkZJR19FRklfU1RVQiBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfU0VDQ09NUCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSFpfMTAwIGlzIG5vdCBzZXQK
PiBDT05GSUdfSFpfMjUwPXkKPiAjIENPTkZJR19IWl8zMDAgaXMgbm90IHNldAo+ICMgQ09ORklH
X0haXzEwMDAgaXMgbm90IHNldAo+IENPTkZJR19IWj0yNTAKPiBDT05GSUdfU0NIRURfSFJUSUNL
PXkKPiBDT05GSUdfS0VYRUM9eQo+ICMgQ09ORklHX0NSQVNIX0RVTVAgaXMgbm90IHNldAo+IENP
TkZJR19QSFlTSUNBTF9TVEFSVD0weDEwMDAwMDAKPiAjIENPTkZJR19SRUxPQ0FUQUJMRSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX1BIWVNJQ0FMX0FMSUdOPTB4MjAwMDAwCj4gIyBDT05GSUdfQ01ETElO
RV9CT09MIGlzIG5vdCBzZXQKPiBDT05GSUdfQVJDSF9FTkFCTEVfTUVNT1JZX0hPVFBMVUc9eQo+
IENPTkZJR19BUkNIX0VOQUJMRV9NRU1PUllfSE9UUkVNT1ZFPXkKPiAKPiAjCj4gIyBQb3dlciBt
YW5hZ2VtZW50IGFuZCBBQ1BJIG9wdGlvbnMKPiAjCj4gIyBDT05GSUdfU1VTUEVORCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0hJQkVSTkFURV9DQUxMQkFDS1M9eQo+IENPTkZJR19QTV9TTEVFUD15Cj4g
Q09ORklHX1BNX0FVVE9TTEVFUD15Cj4gIyBDT05GSUdfUE1fV0FLRUxPQ0tTIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19QTV9SVU5USU1FIGlzIG5vdCBzZXQKPiBDT05GSUdfUE09eQo+IENPTkZJR19Q
TV9ERUJVRz15Cj4gIyBDT05GSUdfUE1fQURWQU5DRURfREVCVUcgaXMgbm90IHNldAo+IENPTkZJ
R19QTV9TTEVFUF9ERUJVRz15Cj4gQ09ORklHX1BNX1RSQUNFPXkKPiBDT05GSUdfUE1fVFJBQ0Vf
UlRDPXkKPiBDT05GSUdfV1FfUE9XRVJfRUZGSUNJRU5UX0RFRkFVTFQ9eQo+IENPTkZJR19BQ1BJ
PXkKPiAjIENPTkZJR19BQ1BJX0VDX0RFQlVHRlMgaXMgbm90IHNldAo+ICMgQ09ORklHX0FDUElf
QUMgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0JBVFRFUlk9eQo+IENPTkZJR19BQ1BJX0JVVFRP
Tj15Cj4gQ09ORklHX0FDUElfVklERU89eQo+IENPTkZJR19BQ1BJX0ZBTj15Cj4gQ09ORklHX0FD
UElfRE9DSz15Cj4gQ09ORklHX0FDUElfUFJPQ0VTU09SPXkKPiAjIENPTkZJR19BQ1BJX1BST0NF
U1NPUl9BR0dSRUdBVE9SIGlzIG5vdCBzZXQKPiBDT05GSUdfQUNQSV9USEVSTUFMPXkKPiBDT05G
SUdfQUNQSV9DVVNUT01fRFNEVF9GSUxFPSIiCj4gIyBDT05GSUdfQUNQSV9DVVNUT01fRFNEVCBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfQUNQSV9JTklUUkRfVEFCTEVfT1ZFUlJJREUgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX1BDSV9TTE9U
PXkKPiAjIENPTkZJR19YODZfUE1fVElNRVIgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0NPTlRB
SU5FUj15Cj4gQ09ORklHX0FDUElfSE9UUExVR19NRU1PUlk9eQo+IENPTkZJR19BQ1BJX1NCUz15
Cj4gIyBDT05GSUdfQUNQSV9IRUQgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0NVU1RPTV9NRVRI
T0Q9eQo+ICMgQ09ORklHX0FDUElfQkdSVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUNQSV9BUEVJ
IGlzIG5vdCBzZXQKPiBDT05GSUdfU0ZJPXkKPiAKPiAjCj4gIyBDUFUgRnJlcXVlbmN5IHNjYWxp
bmcKPiAjCj4gQ09ORklHX0NQVV9GUkVRPXkKPiBDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTU1PTj15
Cj4gIyBDT05GSUdfQ1BVX0ZSRVFfU1RBVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9GUkVRX0RF
RkFVTFRfR09WX1BFUkZPUk1BTkNFPXkKPiAjIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9Q
T1dFUlNBVkUgaXMgbm90IHNldAo+ICMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1VTRVJT
UEFDRSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfT05ERU1BTkQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX0NPTlNFUlZBVElWRSBp
cyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9GUkVRX0dPVl9QRVJGT1JNQU5DRT15Cj4gQ09ORklHX0NQ
VV9GUkVRX0dPVl9QT1dFUlNBVkU9eQo+IENPTkZJR19DUFVfRlJFUV9HT1ZfVVNFUlNQQUNFPXkK
PiBDT05GSUdfQ1BVX0ZSRVFfR09WX09OREVNQU5EPXkKPiBDT05GSUdfQ1BVX0ZSRVFfR09WX0NP
TlNFUlZBVElWRT15Cj4gCj4gIwo+ICMgeDg2IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJz
Cj4gIwo+ICMgQ09ORklHX1g4Nl9JTlRFTF9QU1RBVEUgaXMgbm90IHNldAo+IENPTkZJR19YODZf
UENDX0NQVUZSRVE9eQo+ICMgQ09ORklHX1g4Nl9BQ1BJX0NQVUZSRVEgaXMgbm90IHNldAo+ICMg
Q09ORklHX1g4Nl9TUEVFRFNURVBfQ0VOVFJJTk8gaXMgbm90IHNldAo+IENPTkZJR19YODZfUDRf
Q0xPQ0tNT0Q9eQo+IAo+ICMKPiAjIHNoYXJlZCBvcHRpb25zCj4gIwo+IENPTkZJR19YODZfU1BF
RURTVEVQX0xJQj15Cj4gCj4gIwo+ICMgQ1BVIElkbGUKPiAjCj4gQ09ORklHX0NQVV9JRExFPXkK
PiAjIENPTkZJR19DUFVfSURMRV9NVUxUSVBMRV9EUklWRVJTIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q1BVX0lETEVfR09WX0xBRERFUj15Cj4gQ09ORklHX0NQVV9JRExFX0dPVl9NRU5VPXkKPiAjIENP
TkZJR19BUkNIX05FRURTX0NQVV9JRExFX0NPVVBMRUQgaXMgbm90IHNldAo+ICMgQ09ORklHX0lO
VEVMX0lETEUgaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lbW9yeSBwb3dlciBzYXZpbmdzCj4gIwo+
IENPTkZJR19JNzMwMF9JRExFX0lPQVRfQ0hBTk5FTD15Cj4gQ09ORklHX0k3MzAwX0lETEU9eQo+
IAo+ICMKPiAjIEJ1cyBvcHRpb25zIChQQ0kgZXRjLikKPiAjCj4gQ09ORklHX1BDST15Cj4gQ09O
RklHX1BDSV9ESVJFQ1Q9eQo+IENPTkZJR19QQ0lfTU1DT05GSUc9eQo+IENPTkZJR19QQ0lfWEVO
PXkKPiBDT05GSUdfUENJX0RPTUFJTlM9eQo+IENPTkZJR19QQ0lfQ05CMjBMRV9RVUlSSz15Cj4g
IyBDT05GSUdfUENJRVBPUlRCVVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1BDSV9NU0kgaXMgbm90
IHNldAo+IENPTkZJR19QQ0lfREVCVUc9eQo+ICMgQ09ORklHX1BDSV9SRUFMTE9DX0VOQUJMRV9B
VVRPIGlzIG5vdCBzZXQKPiBDT05GSUdfUENJX1NUVUI9eQo+IENPTkZJR19YRU5fUENJREVWX0ZS
T05URU5EPXkKPiBDT05GSUdfSFRfSVJRPXkKPiBDT05GSUdfUENJX0FUUz15Cj4gIyBDT05GSUdf
UENJX0lPViBpcyBub3Qgc2V0Cj4gQ09ORklHX1BDSV9QUkk9eQo+ICMgQ09ORklHX1BDSV9QQVNJ
RCBpcyBub3Qgc2V0Cj4gQ09ORklHX1BDSV9JT0FQSUM9eQo+IENPTkZJR19QQ0lfTEFCRUw9eQo+
IAo+ICMKPiAjIFBDSSBob3N0IGNvbnRyb2xsZXIgZHJpdmVycwo+ICMKPiAjIENPTkZJR19JU0Ff
RE1BX0FQSSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FNRF9OQj15Cj4gIyBDT05GSUdfUENDQVJEIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19IT1RQTFVHX1BDSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUkFQ
SURJTyBpcyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9TWVNGQj15Cj4gCj4gIwo+ICMgRXhlY3V0YWJs
ZSBmaWxlIGZvcm1hdHMgLyBFbXVsYXRpb25zCj4gIwo+ICMgQ09ORklHX0JJTkZNVF9FTEYgaXMg
bm90IHNldAo+IENPTkZJR19BUkNIX0JJTkZNVF9FTEZfUkFORE9NSVpFX1BJRT15Cj4gQ09ORklH
X0JJTkZNVF9TQ1JJUFQ9eQo+ICMgQ09ORklHX0hBVkVfQU9VVCBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfQklORk1UX01JU0MgaXMgbm90IHNldAo+ICMgQ09ORklHX0NPUkVEVU1QIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19JQTMyX0VNVUxBVElPTiBpcyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9ERVZfRE1B
X09QUz15Cj4gQ09ORklHX05FVD15Cj4gCj4gIwo+ICMgTmV0d29ya2luZyBvcHRpb25zCj4gIwo+
IENPTkZJR19QQUNLRVQ9eQo+ICMgQ09ORklHX1BBQ0tFVF9ESUFHIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19VTklYIGlzIG5vdCBzZXQKPiBDT05GSUdfWEZSTT15Cj4gQ09ORklHX1hGUk1fQUxHTz15
Cj4gQ09ORklHX1hGUk1fVVNFUj15Cj4gIyBDT05GSUdfWEZSTV9TVUJfUE9MSUNZIGlzIG5vdCBz
ZXQKPiBDT05GSUdfWEZSTV9NSUdSQVRFPXkKPiBDT05GSUdfWEZSTV9JUENPTVA9eQo+IENPTkZJ
R19ORVRfS0VZPXkKPiAjIENPTkZJR19ORVRfS0VZX01JR1JBVEUgaXMgbm90IHNldAo+IENPTkZJ
R19JTkVUPXkKPiAjIENPTkZJR19JUF9NVUxUSUNBU1QgaXMgbm90IHNldAo+IENPTkZJR19JUF9B
RFZBTkNFRF9ST1VURVI9eQo+IENPTkZJR19JUF9GSUJfVFJJRV9TVEFUUz15Cj4gQ09ORklHX0lQ
X01VTFRJUExFX1RBQkxFUz15Cj4gIyBDT05GSUdfSVBfUk9VVEVfTVVMVElQQVRIIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19JUF9ST1VURV9WRVJCT1NFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JUF9Q
TlAgaXMgbm90IHNldAo+IENPTkZJR19ORVRfSVBJUD15Cj4gQ09ORklHX05FVF9JUEdSRV9ERU1V
WD15Cj4gQ09ORklHX05FVF9JUF9UVU5ORUw9eQo+ICMgQ09ORklHX05FVF9JUEdSRSBpcyBub3Qg
c2V0Cj4gQ09ORklHX1NZTl9DT09LSUVTPXkKPiAjIENPTkZJR19ORVRfSVBWVEkgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0lORVRfQUggaXMgbm90IHNldAo+IENPTkZJR19JTkVUX0VTUD15Cj4gQ09O
RklHX0lORVRfSVBDT01QPXkKPiBDT05GSUdfSU5FVF9YRlJNX1RVTk5FTD15Cj4gQ09ORklHX0lO
RVRfVFVOTkVMPXkKPiBDT05GSUdfSU5FVF9YRlJNX01PREVfVFJBTlNQT1JUPXkKPiBDT05GSUdf
SU5FVF9YRlJNX01PREVfVFVOTkVMPXkKPiBDT05GSUdfSU5FVF9YRlJNX01PREVfQkVFVD15Cj4g
Q09ORklHX0lORVRfTFJPPXkKPiBDT05GSUdfSU5FVF9ESUFHPXkKPiBDT05GSUdfSU5FVF9UQ1Bf
RElBRz15Cj4gQ09ORklHX0lORVRfVURQX0RJQUc9eQo+IENPTkZJR19UQ1BfQ09OR19BRFZBTkNF
RD15Cj4gIyBDT05GSUdfVENQX0NPTkdfQklDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19UQ1BfQ09O
R19DVUJJQyBpcyBub3Qgc2V0Cj4gQ09ORklHX1RDUF9DT05HX1dFU1RXT09EPXkKPiBDT05GSUdf
VENQX0NPTkdfSFRDUD15Cj4gIyBDT05GSUdfVENQX0NPTkdfSFNUQ1AgaXMgbm90IHNldAo+ICMg
Q09ORklHX1RDUF9DT05HX0hZQkxBIGlzIG5vdCBzZXQKPiBDT05GSUdfVENQX0NPTkdfVkVHQVM9
eQo+ICMgQ09ORklHX1RDUF9DT05HX1NDQUxBQkxFIGlzIG5vdCBzZXQKPiBDT05GSUdfVENQX0NP
TkdfTFA9eQo+IENPTkZJR19UQ1BfQ09OR19WRU5PPXkKPiAjIENPTkZJR19UQ1BfQ09OR19ZRUFI
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19UQ1BfQ09OR19JTExJTk9JUyBpcyBub3Qgc2V0Cj4gQ09O
RklHX0RFRkFVTFRfSFRDUD15Cj4gIyBDT05GSUdfREVGQVVMVF9WRUdBUyBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfREVGQVVMVF9WRU5PIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERUZBVUxUX1dFU1RX
T09EIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERUZBVUxUX1JFTk8gaXMgbm90IHNldAo+IENPTkZJ
R19ERUZBVUxUX1RDUF9DT05HPSJodGNwIgo+IENPTkZJR19UQ1BfTUQ1U0lHPXkKPiBDT05GSUdf
SVBWNj15Cj4gQ09ORklHX0lQVjZfUk9VVEVSX1BSRUY9eQo+ICMgQ09ORklHX0lQVjZfUk9VVEVf
SU5GTyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSVBWNl9PUFRJTUlTVElDX0RBRCBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfSU5FVDZfQUggaXMgbm90IHNldAo+IENPTkZJR19JTkVUNl9FU1A9eQo+ICMg
Q09ORklHX0lORVQ2X0lQQ09NUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0lQVjZfTUlQNj15Cj4gIyBD
T05GSUdfSU5FVDZfWEZSTV9UVU5ORUwgaXMgbm90IHNldAo+IENPTkZJR19JTkVUNl9UVU5ORUw9
eQo+IENPTkZJR19JTkVUNl9YRlJNX01PREVfVFJBTlNQT1JUPXkKPiBDT05GSUdfSU5FVDZfWEZS
TV9NT0RFX1RVTk5FTD15Cj4gIyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0lORVQ2X1hGUk1fTU9ERV9ST1VURU9QVElNSVpBVElPTiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0lQVjZfVlRJPXkKPiAjIENPTkZJR19JUFY2X1NJVCBpcyBub3Qgc2V0Cj4gQ09O
RklHX0lQVjZfVFVOTkVMPXkKPiBDT05GSUdfSVBWNl9HUkU9eQo+IENPTkZJR19JUFY2X01VTFRJ
UExFX1RBQkxFUz15Cj4gIyBDT05GSUdfSVBWNl9TVUJUUkVFUyBpcyBub3Qgc2V0Cj4gQ09ORklH
X0lQVjZfTVJPVVRFPXkKPiBDT05GSUdfSVBWNl9NUk9VVEVfTVVMVElQTEVfVEFCTEVTPXkKPiAj
IENPTkZJR19JUFY2X1BJTVNNX1YyIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRMQUJFTCBpcyBu
b3Qgc2V0Cj4gQ09ORklHX05FVFdPUktfU0VDTUFSSz15Cj4gQ09ORklHX05FVFdPUktfUEhZX1RJ
TUVTVEFNUElORz15Cj4gQ09ORklHX05FVEZJTFRFUj15Cj4gIyBDT05GSUdfTkVURklMVEVSX0RF
QlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX0FEVkFOQ0VEPXkKPiAjIENPTkZJR19C
UklER0VfTkVURklMVEVSIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBDb3JlIE5ldGZpbHRlciBDb25m
aWd1cmF0aW9uCj4gIwo+IENPTkZJR19ORVRGSUxURVJfTkVUTElOSz15Cj4gQ09ORklHX05FVEZJ
TFRFUl9ORVRMSU5LX0FDQ1Q9eQo+ICMgQ09ORklHX05FVEZJTFRFUl9ORVRMSU5LX1FVRVVFIGlz
IG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX05FVExJTktfTE9HPXkKPiBDT05GSUdfTkZfQ09O
TlRSQUNLPXkKPiBDT05GSUdfTkZfQ09OTlRSQUNLX01BUks9eQo+IENPTkZJR19ORl9DT05OVFJB
Q0tfU0VDTUFSSz15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19FVkVOVFM9eQo+ICMgQ09ORklHX05G
X0NPTk5UUkFDS19USU1FT1VUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORl9DT05OVFJBQ0tfVElN
RVNUQU1QIGlzIG5vdCBzZXQKPiBDT05GSUdfTkZfQ09OTlRSQUNLX0xBQkVMUz15Cj4gIyBDT05G
SUdfTkZfQ1RfUFJPVE9fRENDUCBpcyBub3Qgc2V0Cj4gQ09ORklHX05GX0NUX1BST1RPX0dSRT15
Cj4gQ09ORklHX05GX0NUX1BST1RPX1NDVFA9eQo+IENPTkZJR19ORl9DVF9QUk9UT19VRFBMSVRF
PXkKPiBDT05GSUdfTkZfQ09OTlRSQUNLX0FNQU5EQT15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19G
VFA9eQo+IENPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15Cj4gIyBDT05GSUdfTkZfQ09OTlRSQUNL
X0lSQyBpcyBub3Qgc2V0Cj4gQ09ORklHX05GX0NPTk5UUkFDS19CUk9BRENBU1Q9eQo+IENPTkZJ
R19ORl9DT05OVFJBQ0tfTkVUQklPU19OUz15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19TTk1QPXkK
PiBDT05GSUdfTkZfQ09OTlRSQUNLX1BQVFA9eQo+IENPTkZJR19ORl9DT05OVFJBQ0tfU0FORT15
Cj4gIyBDT05GSUdfTkZfQ09OTlRSQUNLX1NJUCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkZfQ09O
TlRSQUNLX1RGVFAgaXMgbm90IHNldAo+ICMgQ09ORklHX05GX0NUX05FVExJTksgaXMgbm90IHNl
dAo+IENPTkZJR19ORl9DVF9ORVRMSU5LX1RJTUVPVVQ9eQo+ICMgQ09ORklHX05GX1RBQkxFUyBp
cyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVEFCTEVTPXkKPiAKPiAjCj4gIyBYdGFibGVz
IGNvbWJpbmVkIG1vZHVsZXMKPiAjCj4gQ09ORklHX05FVEZJTFRFUl9YVF9NQVJLPXkKPiBDT05G
SUdfTkVURklMVEVSX1hUX0NPTk5NQVJLPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX1NFVD15Cj4g
Cj4gIwo+ICMgWHRhYmxlcyB0YXJnZXRzCj4gIwo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VU
X0FVRElUPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NMQVNTSUZZIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DT05OTUFSSz15Cj4gIyBDT05GSUdfTkVU
RklMVEVSX1hUX1RBUkdFVF9DT05OU0VDTUFSSyBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfSE1BUks9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0lETEVUSU1F
Uj15Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9MRUQgaXMgbm90IHNldAo+IENPTkZJ
R19ORVRGSUxURVJfWFRfVEFSR0VUX0xPRz15Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdF
VF9NQVJLIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORkxPRz15Cj4g
IyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORlFVRVVFIGlzIG5vdCBzZXQKPiBDT05GSUdf
TkVURklMVEVSX1hUX1RBUkdFVF9SQVRFRVNUPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdF
VF9URUU9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1NFQ01BUks9eQo+IENPTkZJR19O
RVRGSUxURVJfWFRfVEFSR0VUX1RDUE1TUz15Cj4gCj4gIwo+ICMgWHRhYmxlcyBtYXRjaGVzCj4g
Iwo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9BRERSVFlQRSBpcyBub3Qgc2V0Cj4gQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9CUEY9eQo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRD
SF9DTFVTVEVSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09NTUVO
VCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NPTk5CWVRFUyBpcyBu
b3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTEFCRUw9eQo+IENPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfQ09OTkxJTUlUPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENI
X0NPTk5NQVJLPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09OTlRSQUNLIGlzIG5v
dCBzZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NQVT15Cj4gQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9EQ0NQPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfREVWR1JPVVAg
aXMgbm90IHNldAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfRFNDUD15Cj4gIyBDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX0VDTiBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9FU1A9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEFTSExJTUlUPXkKPiAjIENP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEVMUEVSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRG
SUxURVJfWFRfTUFUQ0hfSEwgaXMgbm90IHNldAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
SVBDT01QPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0lQUkFOR0U9eQo+IENPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfTEVOR1RIPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0xJ
TUlUPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTUFDIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTUFSSyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkVURklM
VEVSX1hUX01BVENIX01VTFRJUE9SVCBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9ORkFDQ1Q9eQo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9PU0YgaXMgbm90IHNl
dAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfT1dORVI9eQo+ICMgQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9QT0xJQ1kgaXMgbm90IHNldAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
UEtUVFlQRT15Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1FVT1RBIGlzIG5vdCBzZXQK
PiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1JBVEVFU1Q9eQo+ICMgQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9SRUFMTSBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9S
RUNFTlQ9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU0NUUD15Cj4gQ09ORklHX05FVEZJ
TFRFUl9YVF9NQVRDSF9TT0NLRVQ9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU1RBVEU9
eQo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TVEFUSVNUSUMgaXMgbm90IHNldAo+ICMg
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TVFJJTkcgaXMgbm90IHNldAo+IENPTkZJR19ORVRG
SUxURVJfWFRfTUFUQ0hfVENQTVNTPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1RJTUU9
eQo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfVTMyPXkKPiBDT05GSUdfSVBfU0VUPXkKPiBD
T05GSUdfSVBfU0VUX01BWD0yNTYKPiBDT05GSUdfSVBfU0VUX0JJVE1BUF9JUD15Cj4gQ09ORklH
X0lQX1NFVF9CSVRNQVBfSVBNQUM9eQo+ICMgQ09ORklHX0lQX1NFVF9CSVRNQVBfUE9SVCBpcyBu
b3Qgc2V0Cj4gQ09ORklHX0lQX1NFVF9IQVNIX0lQPXkKPiAjIENPTkZJR19JUF9TRVRfSEFTSF9J
UFBPUlQgaXMgbm90IHNldAo+IENPTkZJR19JUF9TRVRfSEFTSF9JUFBPUlRJUD15Cj4gQ09ORklH
X0lQX1NFVF9IQVNIX0lQUE9SVE5FVD15Cj4gQ09ORklHX0lQX1NFVF9IQVNIX05FVFBPUlRORVQ9
eQo+ICMgQ09ORklHX0lQX1NFVF9IQVNIX05FVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0lQX1NFVF9I
QVNIX05FVE5FVD15Cj4gIyBDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVCBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfSVBfU0VUX0hBU0hfTkVUSUZBQ0UgaXMgbm90IHNldAo+IENPTkZJR19JUF9TRVRf
TElTVF9TRVQ9eQo+ICMgQ09ORklHX0lQX1ZTIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBJUDogTmV0
ZmlsdGVyIENvbmZpZ3VyYXRpb24KPiAjCj4gQ09ORklHX05GX0RFRlJBR19JUFY0PXkKPiBDT05G
SUdfTkZfQ09OTlRSQUNLX0lQVjQ9eQo+ICMgQ09ORklHX0lQX05GX0lQVEFCTEVTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfSVBfTkZfQVJQVEFCTEVTPXkKPiBDT05GSUdfSVBfTkZfQVJQRklMVEVSPXkK
PiBDT05GSUdfSVBfTkZfQVJQX01BTkdMRT15Cj4gCj4gIwo+ICMgSVB2NjogTmV0ZmlsdGVyIENv
bmZpZ3VyYXRpb24KPiAjCj4gQ09ORklHX05GX0RFRlJBR19JUFY2PXkKPiBDT05GSUdfTkZfQ09O
TlRSQUNLX0lQVjY9eQo+ICMgQ09ORklHX0lQNl9ORl9JUFRBQkxFUyBpcyBub3Qgc2V0Cj4gCj4g
Iwo+ICMgREVDbmV0OiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbgo+ICMKPiAjIENPTkZJR19ERUNO
RVRfTkZfR1JBQlVMQVRPUiBpcyBub3Qgc2V0Cj4gQ09ORklHX0JSSURHRV9ORl9FQlRBQkxFUz15
Cj4gQ09ORklHX0JSSURHRV9FQlRfQlJPVVRFPXkKPiBDT05GSUdfQlJJREdFX0VCVF9UX0ZJTFRF
Uj15Cj4gIyBDT05GSUdfQlJJREdFX0VCVF9UX05BVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0JSSURH
RV9FQlRfODAyXzM9eQo+IENPTkZJR19CUklER0VfRUJUX0FNT05HPXkKPiAjIENPTkZJR19CUklE
R0VfRUJUX0FSUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0JSSURHRV9FQlRfSVA9eQo+IENPTkZJR19C
UklER0VfRUJUX0lQNj15Cj4gQ09ORklHX0JSSURHRV9FQlRfTElNSVQ9eQo+ICMgQ09ORklHX0JS
SURHRV9FQlRfTUFSSyBpcyBub3Qgc2V0Cj4gQ09ORklHX0JSSURHRV9FQlRfUEtUVFlQRT15Cj4g
Q09ORklHX0JSSURHRV9FQlRfU1RQPXkKPiAjIENPTkZJR19CUklER0VfRUJUX1ZMQU4gaXMgbm90
IHNldAo+IENPTkZJR19CUklER0VfRUJUX0FSUFJFUExZPXkKPiBDT05GSUdfQlJJREdFX0VCVF9E
TkFUPXkKPiBDT05GSUdfQlJJREdFX0VCVF9NQVJLX1Q9eQo+IENPTkZJR19CUklER0VfRUJUX1JF
RElSRUNUPXkKPiAjIENPTkZJR19CUklER0VfRUJUX1NOQVQgaXMgbm90IHNldAo+IENPTkZJR19C
UklER0VfRUJUX0xPRz15Cj4gQ09ORklHX0JSSURHRV9FQlRfVUxPRz15Cj4gQ09ORklHX0JSSURH
RV9FQlRfTkZMT0c9eQo+IENPTkZJR19JUF9EQ0NQPXkKPiBDT05GSUdfSU5FVF9EQ0NQX0RJQUc9
eQo+IAo+ICMKPiAjIERDQ1AgQ0NJRHMgQ29uZmlndXJhdGlvbgo+ICMKPiBDT05GSUdfSVBfREND
UF9DQ0lEMl9ERUJVRz15Cj4gQ09ORklHX0lQX0RDQ1BfQ0NJRDM9eQo+ICMgQ09ORklHX0lQX0RD
Q1BfQ0NJRDNfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19JUF9EQ0NQX1RGUkNfTElCPXkKPiAK
PiAjCj4gIyBEQ0NQIEtlcm5lbCBIYWNraW5nCj4gIwo+IENPTkZJR19JUF9EQ0NQX0RFQlVHPXkK
PiAjIENPTkZJR19JUF9TQ1RQIGlzIG5vdCBzZXQKPiBDT05GSUdfUkRTPXkKPiAjIENPTkZJR19S
RFNfUkRNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUkRTX1RDUCBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfUkRTX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfVElQQz15Cj4gQ09ORklHX1RJUENfUE9S
VFM9ODE5MQo+ICMgQ09ORklHX0FUTSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTDJUUCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1NUUD15Cj4gQ09ORklHX0JSSURHRT15Cj4gIyBDT05GSUdfQlJJREdFX0lH
TVBfU05PT1BJTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX1ZMQU5fODAyMVEgaXMgbm90IHNldAo+
IENPTkZJR19ERUNORVQ9eQo+ICMgQ09ORklHX0RFQ05FVF9ST1VURVIgaXMgbm90IHNldAo+IENP
TkZJR19MTEM9eQo+IENPTkZJR19MTEMyPXkKPiBDT05GSUdfSVBYPXkKPiBDT05GSUdfSVBYX0lO
VEVSTj15Cj4gIyBDT05GSUdfQVRBTEsgaXMgbm90IHNldAo+IENPTkZJR19YMjU9eQo+ICMgQ09O
RklHX0xBUEIgaXMgbm90IHNldAo+ICMgQ09ORklHX1BIT05FVCBpcyBub3Qgc2V0Cj4gQ09ORklH
X0lFRUU4MDIxNTQ9eQo+ICMgQ09ORklHX0lFRUU4MDIxNTRfNkxPV1BBTiBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfTUFDODAyMTU0IGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRfU0NIRUQgaXMgbm90
IHNldAo+ICMgQ09ORklHX0RDQiBpcyBub3Qgc2V0Cj4gQ09ORklHX0ROU19SRVNPTFZFUj15Cj4g
IyBDT05GSUdfQkFUTUFOX0FEViBpcyBub3Qgc2V0Cj4gQ09ORklHX09QRU5WU1dJVENIPXkKPiAj
IENPTkZJR19PUEVOVlNXSVRDSF9HUkUgaXMgbm90IHNldAo+IENPTkZJR19WU09DS0VUUz15Cj4g
Q09ORklHX1ZNV0FSRV9WTUNJX1ZTT0NLRVRTPXkKPiBDT05GSUdfTkVUTElOS19NTUFQPXkKPiBD
T05GSUdfTkVUTElOS19ESUFHPXkKPiBDT05GSUdfTkVUX01QTFNfR1NPPXkKPiBDT05GSUdfSFNS
PXkKPiBDT05GSUdfTkVUX1JYX0JVU1lfUE9MTD15Cj4gQ09ORklHX0JRTD15Cj4gCj4gIwo+ICMg
TmV0d29yayB0ZXN0aW5nCj4gIwo+IENPTkZJR19IQU1SQURJTz15Cj4gCj4gIwo+ICMgUGFja2V0
IFJhZGlvIHByb3RvY29scwo+ICMKPiBDT05GSUdfQVgyNT15Cj4gQ09ORklHX0FYMjVfREFNQV9T
TEFWRT15Cj4gQ09ORklHX05FVFJPTT15Cj4gIyBDT05GSUdfUk9TRSBpcyBub3Qgc2V0Cj4gCj4g
Iwo+ICMgQVguMjUgbmV0d29yayBkZXZpY2UgZHJpdmVycwo+ICMKPiBDT05GSUdfTUtJU1M9eQo+
ICMgQ09ORklHXzZQQUNLIGlzIG5vdCBzZXQKPiBDT05GSUdfQlBRRVRIRVI9eQo+IENPTkZJR19C
QVlDT01fU0VSX0ZEWD15Cj4gQ09ORklHX0JBWUNPTV9TRVJfSERYPXkKPiBDT05GSUdfQkFZQ09N
X1BBUj15Cj4gQ09ORklHX1lBTT15Cj4gIyBDT05GSUdfQ0FOIGlzIG5vdCBzZXQKPiBDT05GSUdf
SVJEQT15Cj4gCj4gIwo+ICMgSXJEQSBwcm90b2NvbHMKPiAjCj4gQ09ORklHX0lSTEFOPXkKPiAj
IENPTkZJR19JUkNPTU0gaXMgbm90IHNldAo+ICMgQ09ORklHX0lSREFfVUxUUkEgaXMgbm90IHNl
dAo+IAo+ICMKPiAjIElyREEgb3B0aW9ucwo+ICMKPiBDT05GSUdfSVJEQV9DQUNIRV9MQVNUX0xT
QVA9eQo+IENPTkZJR19JUkRBX0ZBU1RfUlI9eQo+IENPTkZJR19JUkRBX0RFQlVHPXkKPiAKPiAj
Cj4gIyBJbmZyYXJlZC1wb3J0IGRldmljZSBkcml2ZXJzCj4gIwo+IAo+ICMKPiAjIFNJUiBkZXZp
Y2UgZHJpdmVycwo+ICMKPiAjIENPTkZJR19JUlRUWV9TSVIgaXMgbm90IHNldAo+IAo+ICMKPiAj
IERvbmdsZSBzdXBwb3J0Cj4gIwo+IAo+ICMKPiAjIEZJUiBkZXZpY2UgZHJpdmVycwo+ICMKPiBD
T05GSUdfVkxTSV9GSVI9eQo+ICMgQ09ORklHX0JUIGlzIG5vdCBzZXQKPiBDT05GSUdfQUZfUlhS
UEM9eQo+IENPTkZJR19BRl9SWFJQQ19ERUJVRz15Cj4gIyBDT05GSUdfUlhLQUQgaXMgbm90IHNl
dAo+IENPTkZJR19GSUJfUlVMRVM9eQo+ICMgQ09ORklHX1dJUkVMRVNTIGlzIG5vdCBzZXQKPiBD
T05GSUdfV0lNQVg9eQo+IENPTkZJR19XSU1BWF9ERUJVR19MRVZFTD04Cj4gIyBDT05GSUdfUkZL
SUxMIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SRktJTExfUkVHVUxBVE9SIGlzIG5vdCBzZXQKPiBD
T05GSUdfTkVUXzlQPXkKPiBDT05GSUdfTkVUXzlQX1ZJUlRJTz15Cj4gQ09ORklHX05FVF85UF9S
RE1BPXkKPiAjIENPTkZJR19ORVRfOVBfREVCVUcgaXMgbm90IHNldAo+ICMgQ09ORklHX0NBSUYg
aXMgbm90IHNldAo+IENPTkZJR19DRVBIX0xJQj15Cj4gIyBDT05GSUdfQ0VQSF9MSUJfUFJFVFRZ
REVCVUcgaXMgbm90IHNldAo+ICMgQ09ORklHX0NFUEhfTElCX1VTRV9ETlNfUkVTT0xWRVIgaXMg
bm90IHNldAo+IENPTkZJR19ORkM9eQo+IENPTkZJR19ORkNfRElHSVRBTD15Cj4gQ09ORklHX05G
cyBub3Qgc2V0Cj4gIyBDT05GSUdfUkVHVUxBVE9SX01BWDg5OTggaXMgbm90IHNldAo+IENPTkZJ
R19SRUdVTEFUT1JfTUFYNzc2OTM9eQo+IENPTkZJR19SRUdVTEFUT1JfTUMxM1hYWF9DT1JFPXkK
PiBDT05GSUdfUkVHVUxBVE9SX01DMTM3ODM9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9NQzEzODky
IGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1BBTE1BUz15Cj4gIyBDT05GSUdfUkVHVUxB
VE9SX1BDRjUwNjMzIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1BGVVpFMTAwPXkKPiAj
IENPTkZJR19SRUdVTEFUT1JfVFBTNTE2MzIgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFUT1Jf
VFBTNjEwNVg9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9UUFM2MjM2MCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1JFR1VMQVRPUl9UUFM2NTAyMz15Cj4gQ09ORklHX1JFR1VMQVRPUl9UUFM2NTA3WD15Cj4g
Q09ORklHX1JFR1VMQVRPUl9UUFM2NTIxNz15Cj4gIyBDT05GSUdfUkVHVUxBVE9SX1RQUzY1ODZY
IGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1RXTDQwMzA9eQo+ICMgQ09ORklHX1JFR1VM
QVRPUl9XTTgzMVggaXMgbm90IHNldAo+ICMgQ09ORklHX1JFR1VMQVRPUl9XTTgzNTAgaXMgbm90
IHNldAo+ICMgQ09ORklHX1JFR1VMQVRPUl9XTTg0MDAgaXMgbm90IHNldAo+IENPTkZJR19NRURJ
QV9TVVBQT1JUPXkKPiAKPiAjCj4gIyBNdWx0aW1lZGlhIGNvcmUgc3VwcG9ydAo+ICMKPiBDT05G
SUdfTUVESUFfQ0FNRVJBX1NVUFBPUlQ9eQo+ICMgQ09ORklHX01FRElBX0FOQUxPR19UVl9TVVBQ
T1JUIGlzIG5vdCBzZXQKPiBDT05GSUdfTUVESUFfRElHSVRBTF9UVl9TVVBQT1JUPXkKPiAjIENP
TkZJR19NRURJQV9SQURJT19TVVBQT1JUIGlzIG5vdCBzZXQKPiBDT05GSUdfTUVESUFfUkNfU1VQ
UE9SVD15Cj4gIyBDT05GSUdfTUVESUFfQ09OVFJPTExFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJ
REVPX0RFVj15Cj4gQ09ORklHX1ZJREVPX1Y0TDI9eQo+ICMgQ09ORklHX1ZJREVPX0FEVl9ERUJV
RyBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0ZJWEVEX01JTk9SX1JBTkdFUz15Cj4gQ09ORklH
X1Y0TDJfTUVNMk1FTV9ERVY9eQo+IENPTkZJR19WSURFT0JVRjJfQ09SRT15Cj4gQ09ORklHX1ZJ
REVPQlVGMl9NRU1PUFM9eQo+IENPTkZJR19WSURFT0JVRjJfRE1BX0NPTlRJRz15Cj4gQ09ORklH
X1ZJREVPQlVGMl9WTUFMTE9DPXkKPiBDT05GSUdfVklERU9CVUYyX0RNQV9TRz15Cj4gQ09ORklH
X0RWQl9DT1JFPXkKPiAjIENPTkZJR19EVkJfTkVUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19UVFBD
SV9FRVBST00gaXMgbm90IHNldAo+IENPTkZJR19EVkJfTUFYX0FEQVBURVJTPTgKPiAjIENPTkZJ
R19EVkJfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lZGlhIGRyaXZlcnMK
PiAjCj4gQ09ORklHX1JDX0NPUkU9eQo+IENPTkZJR19SQ19NQVA9eQo+IENPTkZJR19SQ19ERUNP
REVSUz15Cj4gQ09ORklHX0xJUkM9eQo+IENPTkZJR19JUl9MSVJDX0NPREVDPXkKPiAjIENPTkZJ
R19JUl9ORUNfREVDT0RFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX0lSX1JDNV9ERUNPREVSPXkKPiBD
T05GSUdfSVJfUkM2X0RFQ09ERVI9eQo+ICMgQ09ORklHX0lSX0pWQ19ERUNPREVSIGlzIG5vdCBz
ZXQKPiBDT05GSUdfSVJfU09OWV9ERUNPREVSPXkKPiBDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9
eQo+IENPTkZJR19JUl9TQU5ZT19ERUNPREVSPXkKPiBDT05GSUdfSVJfTUNFX0tCRF9ERUNPREVS
PXkKPiBDT05GSUdfUkNfREVWSUNFUz15Cj4gIyBDT05GSUdfSVJfRU5FIGlzIG5vdCBzZXQKPiBD
T05GSUdfSVJfSVRFX0NJUj15Cj4gQ09ORklHX0lSX0ZJTlRFSz15Cj4gQ09ORklHX0lSX05VVk9U
T049eQo+IENPTkZJR19JUl9XSU5CT05EX0NJUj15Cj4gQ09ORklHX1JDX0xPT1BCQUNLPXkKPiBD
T05GSUdfSVJfR1BJT19DSVI9eQo+ICMgQ09ORklHX01FRElBX1BDSV9TVVBQT1JUIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19WNExfUExBVEZPUk1fRFJJVkVSUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1Y0
TF9NRU0yTUVNX0RSSVZFUlM9eQo+IENPTkZJR19WSURFT19NRU0yTUVNX0RFSU5URVJMQUNFPXkK
PiBDT05GSUdfVklERU9fU0hfVkVVPXkKPiAjIENPTkZJR19WNExfVEVTVF9EUklWRVJTIGlzIG5v
dCBzZXQKPiAKPiAjCj4gIyBTdXBwb3J0ZWQgTU1DL1NESU8gYWRhcHRlcnMKPiAjCj4gQ09ORklH
X01FRElBX1BBUlBPUlRfU1VQUE9SVD15Cj4gQ09ORklHX1ZJREVPX0JXUUNBTT15Cj4gIyBDT05G
SUdfVklERU9fQ1FDQU0gaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lZGlhIGFuY2lsbGFyeSBkcml2
ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGkyYywgZnJvbnRlbmRzKQo+ICMKPiAjIENPTkZJR19NRURJ
QV9TVUJEUlZfQVVUT1NFTEVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0lSX0kyQz15Cj4g
Cj4gIwo+ICMgRW5jb2RlcnMsIGRlY29kZXJzLCBzZW5zb3JzIGFuZCBvdGhlciBoZWxwZXIgY2hp
cHMKPiAjCj4gCj4gIwo+ICMgQXVkaW8gZGVjb2RlcnMsIHByb2Nlc3NvcnMgYW5kIG1peGVycwo+
ICMKPiAjIENPTkZJR19WSURFT19UVkFVRElPIGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fVERB
NzQzMj15Cj4gIyBDT05GSUdfVklERU9fVERBOTg0MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfVklE
RU9fVEVBNjQxNUMgaXMgbm90IHNldAo+IENPTkZJR19WSURFT19URUE2NDIwPXkKPiAjIENPTkZJ
R19WSURFT19NU1AzNDAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19WSURFT19DUzUzNDUgaXMgbm90
IHNldAo+IENPTkZJR19WSURFT19DUzUzTDMyQT15Cj4gIyBDT05GSUdfVklERU9fVExWMzIwQUlD
MjNCIGlzIG5vdCBzZXQKPiAjIENPTkZJR19WSURFT19VREExMzQyIGlzIG5vdCBzZXQKPiBDT05G
SUdfVklERU9fV004Nzc1PXkKPiBDT05GSUdfVklERU9fV004NzM5PXkKPiBDT05GSUdfVklERU9f
VlAyN1NNUFg9eQo+ICMgQ09ORklHX1ZJREVPX1NPTllfQlRGX01QWCBpcyBub3Qgc2V0Cj4gCj4g
Iwo+ICMgUkRTIGRlY29kZXJzCj4gIwo+IENPTkZJR19WSURFT19TQUE2NTg4PXkKPiAKPiAjCj4g
IyBWaWRlbyBkZWNvZGVycwo+ICMKPiBDT05GSUdfVklERU9fQURWNzE4MD15Cj4gIyBDT05GSUdf
VklERU9fQURWNzE4MyBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0JUODE5PXkKPiBDT05GSUdf
VklERU9fQlQ4NTY9eQo+IENPTkZJR19WSURFT19CVDg2Nj15Cj4gIyBDT05GSUdfVklERU9fS1Mw
MTI3IGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fTUw4NlY3NjY3PXkKPiAjIENPTkZJR19WSURF
T19TQUE3MTEwIGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fU0FBNzExWD15Cj4gQ09ORklHX1ZJ
REVPX1NBQTcxOTE9eQo+IENPTkZJR19WSURFT19UVlA1MTRYPXkKPiBDT05GSUdfVklERU9fVFZQ
NTE1MD15Cj4gQ09ORklHX1ZJREVPX1RWUDcwMDI9eQo+IENPTkZJR19WSURFT19UVzI4MDQ9eQo+
IENPTkZJR19WSURFT19UVzk5MDM9eQo+IENPTkZJR19WSURFT19UVzk5MDY9eQo+IENPTkZJR19W
SURFT19WUFgzMjIwPXkKPiAKPiAjCj4gIyBWaWRlbyBhbmQgYXVkaW8gZGVjb2RlcnMKPiAjCj4g
Q09ORklHX1ZJREVPX1NBQTcxN1g9eQo+ICMgQ09ORklHX1ZJREVPX0NYMjU4NDAgaXMgbm90IHNl
dAo+IAo+ICMKPiAjIFZpZGVvIGVuY29kZXJzCj4gIwo+ICMgQ09ORklHX1ZJREVPX1NBQTcxMjcg
aXMgbm90IHNldAo+ICMgQ09ORklHX1ZJREVPX1NBQTcxODUgaXMgbm90IHNldAo+IENPTkZJR19W
SURFT19BRFY3MTcwPXkKPiBDT05GSUdfVklERU9fQURWNzE3NT15Cj4gQ09ORklHX1ZJREVPX0FE
VjczNDM9eQo+ICMgQ09ORklHX1ZJREVPX0FEVjczOTMgaXMgbm90IHNldAo+ICMgQ09ORklHX1ZJ
REVPX0FLODgxWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX1RIUzgyMDA9eQo+IAo+ICMKPiAj
IENhbWVyYSBzZW5zb3IgZGV2aWNlcwo+ICMKPiBDT05GSUdfVklERU9fT1Y3NjQwPXkKPiBDT05G
SUdfVklERU9fT1Y3NjcwPXkKPiAjIENPTkZJR19WSURFT19WUzY2MjQgaXMgbm90IHNldAo+IENP
TkZJR19WSURFT19NVDlWMDExPXkKPiAjIENPTkZJR19WSURFT19TUjAzMFBDMzAgaXMgbm90IHNl
dAo+IAo+ICMKPiAjIEZsYXNoIGRldmljZXMKPiAjCj4gCj4gIwo+ICMgVmlkZW8gaW1wcm92ZW1l
bnQgY2hpcHMKPiAjCj4gQ09ORklHX1ZJREVPX1VQRDY0MDMxQT15Cj4gQ09ORklHX1ZJREVPX1VQ
RDY0MDgzPXkKPiAKPiAjCj4gIyBNaXNjZWxsYW5lb3VzIGhlbHBlciBjaGlwcwo+ICMKPiBDT05G
SUdfVklERU9fVEhTNzMwMz15Cj4gQ09ORklHX1ZJREVPX001Mjc5MD15Cj4gCj4gIwo+ICMgU2Vu
c29ycyB1c2VkIG9uIHNvY19jYW1lcmEgZHJpdmVyCj4gIwo+IAo+ICMKPiAjIEN1c3RvbWl6ZSBU
ViB0dW5lcnMKPiAjCj4gQ09ORklHX01FRElBX1RVTkVSX1NJTVBMRT15Cj4gIyBDT05GSUdfTUVE
SUFfVFVORVJfVERBODI5MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfVERBODI3
WCBpcyBub3Qgc2V0Cj4gQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjcxPXkKPiBDT05GSUdfTUVE
SUFfVFVORVJfVERBOTg4Nz15Cj4gQ09ORklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQo+IENPTkZJ
R19NRURJQV9UVU5FUl9URUE1NzY3PXkKPiAjIENPTkZJR19NRURJQV9UVU5FUl9NVDIwWFggaXMg
bm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX01UMjA2MCBpcyBub3Qgc2V0Cj4gQ09ORklH
X01FRElBX1RVTkVSX01UMjA2Mz15Cj4gQ09ORklHX01FRElBX1RVTkVSX01UMjI2Nj15Cj4gIyBD
T05GSUdfTUVESUFfVFVORVJfTVQyMTMxIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRURJQV9UVU5F
Ul9RVDEwMTAgaXMgbm90IHNldAo+IENPTkZJR19NRURJQV9UVU5FUl9YQzIwMjg9eQo+IENPTkZJ
R19NRURJQV9UVU5FUl9YQzUwMDA9eQo+IENPTkZJR19NRURJQV9UVU5FUl9YQzQwMDA9eQo+IENP
TkZJR19NRURJQV9UVU5FUl9NWEw1MDA1Uz15Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfTVhMNTAw
N1QgaXMgbm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX01DNDRTODAzIGlzIG5vdCBzZXQK
PiBDT05GSUdfTUVESUFfVFVORVJfTUFYMjE2NT15Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfVERB
MTgyMTggaXMgbm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMSBpcyBub3Qgc2V0
Cj4gQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMj15Cj4gQ09ORklHX01FRElBX1RVTkVSX0ZDMDAx
Mz15Cj4gQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjEyPXkKPiBDT05GSUdfTUVESUFfVFVORVJf
RTQwMDA9eQo+ICMgQ09ORklHX01FRElBX1RVTkVSX0ZDMjU4MCBpcyBub3Qgc2V0Cj4gQ09ORklH
X01FRElBX1RVTkVSX004OFRTMjAyMj15Cj4gQ09ORklHX01FRElBX1RVTkVSX1RVQTkwMDE9eQo+
ICMgQ09ORklHX01FRElBX1RVTkVSX0lUOTEzWCBpcyBub3Qgc2V0Cj4gQ09ORklHX01FRElBX1RV
TkVSX1I4MjBUPXkKPiAKPiAjCj4gIyBDdXN0b21pc2UgRFZCIEZyb250ZW5kcwo+ICMKPiAKPiAj
Cj4gIyBNdWx0aXN0YW5kYXJkIChzYXRlbGxpdGUpIGZyb250ZW5kcwo+ICMKPiAjIENPTkZJR19E
VkJfU1RCMDg5OSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX1NUQjYxMDAgaXMgbm90IHNldAo+
ICMgQ09ORklHX0RWQl9TVFYwOTB4IGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfU1RWNjExMHgg
aXMgbm90IHNldAo+IENPTkZJR19EVkJfTTg4RFMzMTAzPXkKPiAKPiAjCj4gIyBNdWx0aXN0YW5k
YXJkIChjYWJsZSArIHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9EUlhL
PXkKPiAjIENPTkZJR19EVkJfVERBMTgyNzFDMkREIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBEVkIt
UyAoc2F0ZWxsaXRlKSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9DWDI0MTEwPXkKPiAjIENP
TkZJR19EVkJfQ1gyNDEyMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9NVDMxMj15Cj4gIyBDT05G
SUdfRFZCX1pMMTAwMzYgaXMgbm90IHNldAo+IENPTkZJR19EVkJfWkwxMDAzOT15Cj4gQ09ORklH
X0RWQl9TNUgxNDIwPXkKPiBDT05GSUdfRFZCX1NUVjAyODg9eQo+IENPTkZJR19EVkJfU1RCNjAw
MD15Cj4gQ09ORklHX0RWQl9TVFYwMjk5PXkKPiBDT05GSUdfRFZCX1NUVjYxMTA9eQo+IENPTkZJ
R19EVkJfU1RWMDkwMD15Cj4gQ09ORklHX0RWQl9UREE4MDgzPXkKPiAjIENPTkZJR19EVkJfVERB
MTAwODYgaXMgbm90IHNldAo+ICMgQ09ORklHX0RWQl9UREE4MjYxIGlzIG5vdCBzZXQKPiBDT05G
SUdfRFZCX1ZFUzFYOTM9eQo+IENPTkZJR19EVkJfVFVORVJfSVREMTAwMD15Cj4gIyBDT05GSUdf
RFZCX1RVTkVSX0NYMjQxMTMgaXMgbm90IHNldAo+IENPTkZJR19EVkJfVERBODI2WD15Cj4gQ09O
RklHX0RWQl9UVUE2MTAwPXkKPiBDT05GSUdfRFZCX0NYMjQxMTY9eQo+IENPTkZJR19EVkJfQ1gy
NDExNz15Cj4gIyBDT05GSUdfRFZCX1NJMjFYWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX1RT
MjAyMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9EUzMwMDA9eQo+IENPTkZJR19EVkJfTUI4NkEx
Nj15Cj4gQ09ORklHX0RWQl9UREExMDA3MT15Cj4gCj4gIwo+ICMgRFZCLVQgKHRlcnJlc3RyaWFs
KSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9TUDg4NzA9eQo+ICMgQ09ORklHX0RWQl9TUDg4
N1ggaXMgbm90IHNldAo+IENPTkZJR19EVkJfQ1gyMjcwMD15Cj4gQ09ORklHX0RWQl9DWDIyNzAy
PXkKPiAjIENPTkZJR19EVkJfUzVIMTQzMiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX0RSWEQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0RWQl9MNjQ3ODEgaXMgbm90IHNldAo+ICMgQ09ORklHX0RW
Ql9UREExMDA0WCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX05YVDYwMDAgaXMgbm90IHNldAo+
IENPTkZJR19EVkJfTVQzNTI9eQo+ICMgQ09ORklHX0RWQl9aTDEwMzUzIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19EVkJfRElCMzAwME1CIGlzIG5vdCBzZXQKPiBDT05GSUdfRFZCX0RJQjMwMDBNQz15
Cj4gQ09ORklHX0RWQl9ESUI3MDAwTT15Cj4gQ09ORklHX0RWQl9ESUI3MDAwUD15Cj4gQ09ORklH
X0RWQl9ESUI5MDAwPXkKPiAjIENPTkZJR19EVkJfVERBMTAwNDggaXMgbm90IHNldAo+IENPTkZJ
R19EVkJfQUY5MDEzPXkKPiBDT05GSUdfRFZCX0VDMTAwPXkKPiAjIENPTkZJR19EVkJfSEQyOUwy
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfU1RWMDM2NyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RW
Ql9DWEQyODIwUj15Cj4gQ09ORklHX0RWQl9SVEwyODMwPXkKPiBDT05GSUdfRFZCX1JUTDI4MzI9
eQo+IAo+ICMKPiAjIERWQi1DIChjYWJsZSkgZnJvbnRlbmRzCj4gIwo+IENPTkZJR19EVkJfVkVT
MTgyMD15Cj4gQ09ORklHX0RWQl9UREExMDAyMT15Cj4gQ09ORklHX0RWQl9UREExMDAyMz15Cj4g
Q09ORklHX0RWQl9TVFYwMjk3PXkKPiAKPiAjCj4gIyBBVFNDIChOb3J0aCBBbWVyaWNhbi9Lb3Jl
YW4gVGVycmVzdHJpYWwvQ2FibGUgRFRWKSBmcm9udGVuZHMKPiAjCj4gIyBDT05GSUdfRFZCX05Y
VDIwMFggaXMgbm90IHNldAo+IENPTkZJR19EVkJfT1I1MTIxMT15Cj4gIyBDT05GSUdfRFZCX09S
NTExMzIgaXMgbm90IHNldAo+IENPTkZJR19EVkJfQkNNMzUxMD15Cj4gIyBDT05GSUdfRFZCX0xH
RFQzMzBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfTEdEVDMzMDUgaXMgbm90IHNldAo+ICMg
Q09ORklHX0RWQl9MRzIxNjAgaXMgbm90IHNldAo+IENPTkZJR19EVkJfUzVIMTQwOT15Cj4gQ09O
RklHX0RWQl9BVTg1MjI9eQo+IENPTkZJR19EVkJfQVU4NTIyX0RUVj15Cj4gQ09ORklHX0RWQl9B
VTg1MjJfVjRMPXkKPiAjIENPTkZJR19EVkJfUzVIMTQxMSBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
SVNEQi1UICh0ZXJyZXN0cmlhbCkgZnJvbnRlbmRzCj4gIwo+ICMgQ09ORklHX0RWQl9TOTIxIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19EVkJfRElCODAwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9N
Qjg2QTIwUz15Cj4gCj4gIwo+ICMgRGlnaXRhbCB0ZXJyZXN0cmlhbCBvbmx5IHR1bmVycy9QTEwK
PiAjCj4gIyBDT05GSUdfRFZCX1BMTCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9UVU5FUl9ESUIw
MDcwPXkKPiAjIENPTkZJR19EVkJfVFVORVJfRElCMDA5MCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
U0VDIGNvbnRyb2wgZGV2aWNlcyBmb3IgRFZCLVMKPiAjCj4gQ09ORklHX0RWQl9MTkJQMjE9eQo+
IENPTkZJR19EVkJfTE5CUDIyPXkKPiAjIENPTkZJR19EVkJfSVNMNjQwNSBpcyBub3Qgc2V0Cj4g
Q09ORklHX0RWQl9JU0w2NDIxPXkKPiBDT05GSUdfRFZCX0lTTDY0MjM9eQo+IENPTkZJR19EVkJf
QTgyOTM9eQo+ICMgQ09ORklHX0RWQl9MR1M4R0w1IGlzIG5vdCBzZXQKPiBDT05GSUdfRFZCX0xH
UzhHWFg9eQo+ICMgQ09ORklHX0RWQl9BVEJNODgzMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9U
REE2NjV4PXkKPiBDT05GSUdfRFZCX0lYMjUwNVY9eQo+IENPTkZJR19EVkJfSVQ5MTNYX0ZFPXkK
PiBDT05GSUdfRFZCX004OFJTMjAwMD15Cj4gQ09ORklHX0RWQl9BRjkwMzM9eQo+IAo+ICMKPiAj
IFRvb2xzIHRvIGRldmVsb3AgbmV3IGZyb250ZW5kcwo+ICMKPiBDT05GSUdfRFZCX0RVTU1ZX0ZF
PXkKPiAKPiAjCj4gIyBHcmFwaGljcyBzdXBwb3J0Cj4gIwo+IENPTkZJR19BR1A9eQo+ICMgQ09O
RklHX0FHUF9BTUQ2NCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FHUF9JTlRFTD15Cj4gQ09ORklHX0FH
UF9TSVM9eQo+IENPTkZJR19BR1BfVklBPXkKPiBDT05GSUdfSU5URUxfR1RUPXkKPiBDT05GSUdf
VkdBX0FSQj15Cj4gQ09ORklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYKPiBDT05GSUdfVkdBX1NXSVRD
SEVST089eQo+IENPTkZJR19EUk09eQo+IENPTkZJR19EUk1fS01TX0hFTFBFUj15Cj4gQ09ORklH
X0RSTV9LTVNfRkJfSEVMUEVSPXkKPiBDT05GSUdfRFJNX0xPQURfRURJRF9GSVJNV0FSRT15Cj4g
Q09ORklHX0RSTV9UVE09eQo+IAo+ICMKPiAjIEkyQyBlbmNvZGVyIG9yIGhlbHBlciBjaGlwcwo+
ICMKPiBDT05GSUdfRFJNX0kyQ19DSDcwMDY9eQo+ICMgQ09ORklHX0RSTV9JMkNfU0lMMTY0IGlz
IG5vdCBzZXQKPiBDT05GSUdfRFJNX0kyQ19OWFBfVERBOTk4WD15Cj4gIyBDT05GSUdfRFJNX1RE
RlggaXMgbm90IHNldAo+IENPTkZJR19EUk1fUjEyOD15Cj4gQ09ORklHX0RSTV9SQURFT049eQo+
IENPTkZJR19EUk1fUkFERU9OX1VNUz15Cj4gIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNl
dAo+IENPTkZJR19EUk1fSTgxMD15Cj4gQ09ORklHX0RSTV9JOTE1PXkKPiAjIENPTkZJR19EUk1f
STkxNV9LTVMgaXMgbm90IHNldAo+ICMgQ09ORklHX0RSTV9JOTE1X0ZCREVWIGlzIG5vdCBzZXQK
PiBDT05GSUdfRFJNX0k5MTVfUFJFTElNSU5BUllfSFdfU1VQUE9SVD15Cj4gIyBDT05GSUdfRFJN
X0k5MTVfVU1TIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EUk1fTUdBIGlzIG5vdCBzZXQKPiBDT05G
SUdfRFJNX1NJUz15Cj4gQ09ORklHX0RSTV9WSUE9eQo+ICMgQ09ORklHX0RSTV9TQVZBR0UgaXMg
bm90IHNldAo+IENPTkZJR19EUk1fVk1XR0ZYPXkKPiBDT05GSUdfRFJNX1ZNV0dGWF9GQkNPTj15
Cj4gQ09ORklHX0RSTV9HTUE1MDA9eQo+ICMgQ09ORklHX0RSTV9HTUE2MDAgaXMgbm90IHNldAo+
ICMgQ09ORklHX0RSTV9HTUEzNjAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EUk1fQVNUIGlzIG5v
dCBzZXQKPiAjIENPTkZJR19EUk1fTUdBRzIwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RSTV9DSVJS
VVNfUUVNVT15Cj4gQ09ORklHX0RSTV9RWEw9eQo+IENPTkZJR19WR0FTVEFURT15Cj4gQ09ORklH
X1ZJREVPX09VVFBVVF9DT05UUk9MPXkKPiBDT05GSUdfSERNST15Cj4gQ09ORklHX0ZCPXkKPiBD
T05GSUdfRklSTVdBUkVfRURJRD15Cj4gQ09ORklHX0ZCX0REQz15Cj4gQ09ORklHX0ZCX0JPT1Rf
VkVTQV9TVVBQT1JUPXkKPiBDT05GSUdfRkJfQ0ZCX0ZJTExSRUNUPXkKPiBDT05GSUdfRkJfQ0ZC
X0NPUFlBUkVBPXkKPiBDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15Cj4gIyBDT05GSUdfRkJfQ0ZC
X1JFVl9QSVhFTFNfSU5fQllURSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1NZU19GSUxMUkVDVD15
Cj4gQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15Cj4gQ09ORklHX0ZCX1NZU19JTUFHRUJMSVQ9eQo+
IENPTkZJR19GQl9GT1JFSUdOX0VORElBTj15Cj4gQ09ORklHX0ZCX0JPVEhfRU5ESUFOPXkKPiAj
IENPTkZJR19GQl9CSUdfRU5ESUFOIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9MSVRUTEVfRU5E
SUFOIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfU1lTX0ZPUFM9eQo+IENPTkZJR19GQl9ERUZFUlJF
RF9JTz15Cj4gQ09ORklHX0ZCX0hFQ1VCQT15Cj4gQ09ORklHX0ZCX1NWR0FMSUI9eQo+ICMgQ09O
RklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfQkFDS0xJR0hUPXkKPiBDT05G
SUdfRkJfTU9ERV9IRUxQRVJTPXkKPiBDT05GSUdfRkJfVElMRUJMSVRUSU5HPXkKPiAKPiAjCj4g
IyBGcmFtZSBidWZmZXIgaGFyZHdhcmUgZHJpdmVycwo+ICMKPiAjIENPTkZJR19GQl9DSVJSVVMg
aXMgbm90IHNldAo+ICMgQ09ORklHX0ZCX1BNMiBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0NZQkVS
MjAwMD15Cj4gQ09ORklHX0ZCX0NZQkVSMjAwMF9EREM9eQo+IENPTkZJR19GQl9BUkM9eQo+ICMg
Q09ORklHX0ZCX0FTSUxJQU5UIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9JTVNUVCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0ZCX1ZHQTE2PXkKPiAjIENPTkZJR19GQl9VVkVTQSBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfRkJfVkVTQSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0VGST15Cj4gQ09ORklHX0ZC
X040MTE9eQo+IENPTkZJR19GQl9IR0E9eQo+IENPTkZJR19GQl9TMUQxM1hYWD15Cj4gIyBDT05G
SUdfRkJfTlZJRElBIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfUklWQT15Cj4gQ09ORklHX0ZCX1JJ
VkFfSTJDPXkKPiAjIENPTkZJR19GQl9SSVZBX0RFQlVHIGlzIG5vdCBzZXQKPiAjIENPTkZJR19G
Ql9SSVZBX0JBQ0tMSUdIVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRkJfSTc0MCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0ZCX0xFODA1Nzg9eQo+IENPTkZJR19GQl9DQVJJTExPX1JBTkNIPXkKPiAjIENP
TkZJR19GQl9NQVRST1ggaXMgbm90IHNldAo+ICMgQ09ORklHX0ZCX1JBREVPTiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0ZCX0FUWTEyOD15Cj4gQ09ORklHX0ZCX0FUWTEyOF9CQUNLTElHSFQ9eQo+IENP
TkZJR19GQl9BVFk9eQo+IENPTkZJR19GQl9BVFlfQ1Q9eQo+IENPTkZJR19GQl9BVFlfR0VORVJJ
Q19MQ0Q9eQo+ICMgQ09ORklHX0ZCX0FUWV9HWCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0FUWV9C
QUNLTElHSFQ9eQo+IENPTkZJR19GQl9TMz15Cj4gIyBDT05GSUdfRkJfUzNfRERDIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRkJfU0FWQUdFPXkKPiBDT05GSUdfRkJfU0FWQUdFX0kyQz15Cj4gIyBDT05G
SUdfRkJfU0FWQUdFX0FDQ0VMIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfU0lTPXkKPiBDT05GSUdf
RkJfU0lTXzMwMD15Cj4gQ09ORklHX0ZCX1NJU18zMTU9eQo+ICMgQ09ORklHX0ZCX1ZJQSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX0ZCX05FT01BR0lDPXkKPiBDT05GSUdfRkJfS1lSTz15Cj4gIyBDT05G
SUdfRkJfM0RGWCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1ZPT0RPTzE9eQo+ICMgQ09ORklHX0ZC
X1ZUODYyMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1RSSURFTlQ9eQo+IENPTkZJR19GQl9BUks9
eQo+ICMgQ09ORklHX0ZCX1BNMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0NBUk1JTkU9eQo+IENP
TkZJR19GQl9DQVJNSU5FX0RSQU1fRVZBTD15Cj4gIyBDT05GSUdfQ0FSTUlORV9EUkFNX0NVU1RP
TSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0dFT0RFPXkKPiBDT05GSUdfRkJfR0VPREVfTFg9eQo+
IENPTkZJR19GQl9HRU9ERV9HWD15Cj4gQ09ORklHX0ZCX0dFT0RFX0dYMT15Cj4gQ09ORklHX0ZC
X1RNSU89eQo+ICMgQ09ORklHX0ZCX1RNSU9fQUNDRUxMIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJf
U001MDE9eQo+IENPTkZJR19GQl9HT0xERklTSD15Cj4gQ09ORklHX0ZCX1ZJUlRVQUw9eQo+ICMg
Q09ORklHX1hFTl9GQkRFVl9GUk9OVEVORCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX01FVFJPTk9N
RT15Cj4gQ09ORklHX0ZCX01CODYyWFg9eQo+IENPTkZJR19GQl9NQjg2MlhYX1BDSV9HREM9eQo+
IENPTkZJR19GQl9NQjg2MlhYX0kyQz15Cj4gIyBDT05GSUdfRkJfQlJPQURTSEVFVCBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfRkJfQVVPX0sxOTBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9IWVBF
UlYgaXMgbm90IHNldAo+IENPTkZJR19GQl9TSU1QTEU9eQo+IENPTkZJR19FWFlOT1NfVklERU89
eQo+IENPTkZJR19CQUNLTElHSFRfTENEX1NVUFBPUlQ9eQo+IENPTkZJR19MQ0RfQ0xBU1NfREVW
SUNFPXkKPiAjIENPTkZJR19MQ0RfUExBVEZPUk0gaXMgbm90IHNldAo+IENPTkZJR19CQUNLTElH
SFRfQ0xBU1NfREVWSUNFPXkKPiBDT05GSUdfQkFDS0xJR0hUX0dFTkVSSUM9eQo+ICMgQ09ORklH
X0JBQ0tMSUdIVF9MTTM1MzMgaXMgbm90IHNldAo+IENPTkZJR19CQUNLTElHSFRfQ0FSSUxMT19S
QU5DSD15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX1BXTSBpcyBub3Qgc2V0Cj4gQ09ORklHX0JBQ0tM
SUdIVF9BUFBMRT15Cj4gQ09ORklHX0JBQ0tMSUdIVF9TQUhBUkE9eQo+IENPTkZJR19CQUNLTElH
SFRfV004MzFYPXkKPiBDT05GSUdfQkFDS0xJR0hUX0FEUDg4NjA9eQo+ICMgQ09ORklHX0JBQ0tM
SUdIVF9BRFA4ODcwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19CQUNLTElHSFRfODhQTTg2MFggaXMg
bm90IHNldAo+ICMgQ09ORklHX0JBQ0tMSUdIVF9QQ0Y1MDYzMyBpcyBub3Qgc2V0Cj4gQ09ORklH
X0JBQ0tMSUdIVF9MTTM2MzBBPXkKPiBDT05GSUdfQkFDS0xJR0hUX0xNMzYzOT15Cj4gQ09ORklH
X0JBQ0tMSUdIVF9MUDg1NVg9eQo+IENPTkZJR19CQUNLTElHSFRfTFA4Nzg4PXkKPiBDT05GSUdf
QkFDS0xJR0hUX1BBTkRPUkE9eQo+ICMgQ09ORklHX0JBQ0tMSUdIVF9UUFM2NTIxNyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0JBQ0tMSUdIVF9BUzM3MTE9eQo+IENPTkZJR19CQUNLTElHSFRfR1BJTz15
Cj4gQ09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUD15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX0JENjEw
NyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTE9HTyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NPVU5EPXkK
PiBDT05GSUdfU09VTkRfT1NTX0NPUkU9eQo+IENPTkZJR19TT1VORF9PU1NfQ09SRV9QUkVDTEFJ
TT15Cj4gQ09ORklHX1NORD15Cj4gQ09ORklHX1NORF9USU1FUj15Cj4gQ09ORklHX1NORF9QQ009
eQo+IENPTkZJR19TTkRfSFdERVA9eQo+IENPTkZJR19TTkRfUkFXTUlEST15Cj4gQ09ORklHX1NO
RF9DT01QUkVTU19PRkZMT0FEPXkKPiBDT05GSUdfU05EX0pBQ0s9eQo+ICMgQ09ORklHX1NORF9T
RVFVRU5DRVIgaXMgbm90IHNldAo+IENPTkZJR19TTkRfT1NTRU1VTD15Cj4gQ09ORklHX1NORF9N
SVhFUl9PU1M9eQo+IENPTkZJR19TTkRfUENNX09TUz15Cj4gIyBDT05GSUdfU05EX1BDTV9PU1Nf
UExVR0lOUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9IUlRJTUVSPXkKPiBDT05GSUdfU05EX0RZ
TkFNSUNfTUlOT1JTPXkKPiBDT05GSUdfU05EX01BWF9DQVJEUz0zMgo+IENPTkZJR19TTkRfU1VQ
UE9SVF9PTERfQVBJPXkKPiAjIENPTkZJR19TTkRfVkVSQk9TRV9QUklOVEsgaXMgbm90IHNldAo+
IENPTkZJR19TTkRfREVCVUc9eQo+IENPTkZJR19TTkRfREVCVUdfVkVSQk9TRT15Cj4gQ09ORklH
X1NORF9WTUFTVEVSPXkKPiBDT05GSUdfU05EX0RNQV9TR0JVRj15Cj4gIyBDT05GSUdfU05EX1JB
V01JRElfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRfT1BMM19MSUJfU0VRIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19TTkRfT1BMNF9MSUJfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRf
U0JBV0VfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRfRU1VMTBLMV9TRVEgaXMgbm90IHNl
dAo+IENPTkZJR19TTkRfTVBVNDAxX1VBUlQ9eQo+IENPTkZJR19TTkRfT1BMM19MSUI9eQo+IENP
TkZJR19TTkRfVlhfTElCPXkKPiBDT05GSUdfU05EX0FDOTdfQ09ERUM9eQo+IENPTkZJR19TTkRf
RFJJVkVSUz15Cj4gIyBDT05GSUdfU05EX1BDU1AgaXMgbm90IHNldAo+IENPTkZJR19TTkRfRFVN
TVk9eQo+IENPTkZJR19TTkRfQUxPT1A9eQo+IENPTkZJR19TTkRfTVRQQVY9eQo+ICMgQ09ORklH
X1NORF9NVFM2NCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9TRVJJQUxfVTE2NTUwPXkKPiBDT05G
SUdfU05EX01QVTQwMT15Cj4gQ09ORklHX1NORF9QT1JUTUFOMlg0PXkKPiAjIENPTkZJR19TTkRf
QUM5N19QT1dFUl9TQVZFIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX1BDST15Cj4gQ09ORklHX1NO
RF9BRDE4ODk9eQo+IENPTkZJR19TTkRfQUxTMzAwPXkKPiBDT05GSUdfU05EX0FMSTU0NTE9eQo+
ICMgQ09ORklHX1NORF9BU0lIUEkgaXMgbm90IHNldAo+IENPTkZJR19TTkRfQVRJSVhQPXkKPiBD
T05GSUdfU05EX0FUSUlYUF9NT0RFTT15Cj4gIyBDT05GSUdfU05EX0FVODgxMCBpcyBub3Qgc2V0
Cj4gQ09ORklHX1NORF9BVTg4MjA9eQo+IENPTkZJR19TTkRfQVU4ODMwPXkKPiBDT05GSUdfU05E
X0FXMj15Cj4gIyBDT05GSUdfU05EX0FaVDMzMjggaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9C
VDg3WCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9DQTAxMDY9eQo+ICMgQ09ORklHX1NORF9DTUlQ
Q0kgaXMgbm90IHNldAo+IENPTkZJR19TTkRfT1hZR0VOX0xJQj15Cj4gIyBDT05GSUdfU05EX09Y
WUdFTiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU05EX0NTNDI4MSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1NORF9DUzQ2WFg9eQo+IENPTkZJR19TTkRfQ1M0NlhYX05FV19EU1A9eQo+IENPTkZJR19TTkRf
Q1M1NTM1QVVESU89eQo+ICMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
U05EX0RBUkxBMjAgaXMgbm90IHNldAo+IENPTkZJR19TTkRfR0lOQTIwPXkKPiBDT05GSUdfU05E
X0xBWUxBMjA9eQo+ICMgQ09ORklHX1NORF9EQVJMQTI0IGlzIG5vdCBzZXQKPiBDT05GSUdfU05E
X0dJTkEyND15Cj4gQ09ORklHX1NORF9MQVlMQTI0PXkKPiBDT05GSUdfU05EX01PTkE9eQo+IENP
TkZJR19TTkRfTUlBPXkKPiAjIENPTkZJR19TTkRfRUNITzNHIGlzIG5vdCBzZXQKPiBDT05GSUdf
U05EX0lORElHTz15Cj4gQ09ORklHX1NORF9JTkRJR09JTz15Cj4gIyBDT05GSUdfU05EX0lORElH
T0RKIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX0lORElHT0lPWD15Cj4gIyBDT05GSUdfU05EX0lO
RElHT0RKWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9FTVUxMEsxPXkKPiBDT05GSUdfU05EX0VN
VTEwSzFYPXkKPiAjIENPTkZJR19TTkRfRU5TMTM3MCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9F
TlMxMzcxPXkKPiBDT05GSUdfU05EX0VTMTkzOD15Cj4gQ09ORklHX1NORF9FUzE5Njg9eQo+IENP
TkZJR19TTkRfRVMxOTY4X0lOUFVUPXkKPiBDT05GSUdfU05EX0ZNODAxPXkKPiAjIENPTkZJR19T
TkRfSERBX0lOVEVMIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX0hEU1A9eQo+IAo+ICMKPiAjIERv
bid0IGZvcmdldCB0byBhZGQgYnVpbHQtaW4gZmlybXdhcmVzIGZvciBIRFNQIGRyaXZlcgo+ICMK
PiAjIENPTkZJR19TTkRfSERTUE0gaXMgbm90IHNldAo+IENPTkZJR19TTkRfSUNFMTcxMj15Cj4g
Q09ORklHX1NORF9JQ0UxNzI0PXkKPiBDT05GSUdfU05EX0lOVEVMOFgwPXkKPiBDT05GSUdfU05E
X0lOVEVMOFgwTT15Cj4gQ09ORklHX1NORF9LT1JHMTIxMj15Cj4gQ09ORklHX1NORF9MT0xBPXkK
PiBDT05GSUdfU05EX0xYNjQ2NEVTPXkKPiBDT05GSUdfU05EX01BRVNUUk8zPXkKPiAjIENPTkZJ
R19TTkRfTUFFU1RSTzNfSU5QVVQgaXMgbm90IHNldAo+IENPTkZJR19TTkRfTUlYQVJUPXkKPiBD
T05GSUdfU05EX05NMjU2PXkKPiAjIENPTkZJR19TTkRfUENYSFIgaXMgbm90IHNldAo+IENPTkZJ
R19TTkRfUklQVElERT15Cj4gIyBDT05GSUdfU05EX1JNRTMyIGlzIG5vdCBzZXQKPiBDT05GSUdf
U05EX1JNRTk2PXkKPiBDT05GSUdfU05EX1JNRTk2NTI9eQo+IENPTkZJR19TTkRfU09OSUNWSUJF
Uz15Cj4gIyBDT05GSUdfU05EX1RSSURFTlQgaXMgbm90IHNldAo+IENPTkZJR19TTkRfVklBODJY
WD15Cj4gIyBDT05GSUdfU05EX1ZJQTgyWFhfTU9ERU0gaXMgbm90IHNldAo+IENPTkZJR19TTkRf
VklSVFVPU089eQo+IENPTkZJR19TTkRfVlgyMjI9eQo+IENPTkZJR19TTkRfWU1GUENJPXkKPiBD
T05GSUdfU05EX1NPQz15Cj4gIyBDT05GSUdfU05EX1NPQ19BREkgaXMgbm90IHNldAo+ICMgQ09O
RklHX1NORF9BVE1FTF9TT0MgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9CQ00yODM1X1NPQ19J
MlMgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9FUDkzWFhfU09DIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19TTkRfSU1YX1NPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU05EX0tJUktXT09EX1NPQyBp
cyBub3Qgc2V0Cj4gQ09ORklHX1NORF9TT0NfSTJDX0FORF9TUEk9eQo+IENPTkZJR19TTkRfU09D
X0FMTF9DT0RFQ1M9eQo+IENPTkZJR19TTkRfU09DXzg4UE04NjBYPXkKPiBDT05GSUdfU05EX1NP
Q19BUklaT05BPXkKPiBDT05GSUdfU05EX1NPQ19XTV9IVUJTPXkKPiBDT05GSUdfU05EX1NPQ19X
TV9BRFNQPXkKPiBDT05GSUdfU05EX1NPQ19BRDE5M1g9eQo+IENPTkZJR19TTkRfU09DX0FENzMz
MTE9eQo+IENPTkZJR19TTkRfU09DX0FEQVUxNzAxPXkKPiBDT05GSUdfU05EX1NPQ19BREFVMTM3
Mz15Cj4gQ09ORklHX1NORF9TT0NfQURBVjgwWD15Cj4gQ09ORklHX1NORF9TT0NfQURTMTE3WD15
Cj4gQ09ORklHX1NORF9TT0NfQUs0NTM1PXkKPiBDT05GSUdfU05EX1NPQ19BSzQ2NDE9eQo+IENP
TkZJR19TTkRfU09DX0FLNDY0Mj15Cj4gQ09ORklHX1NORF9TT0NfQUs0NjcxPXkKPiBDT05GSUdf
U05EX1NPQ19BSzUzODY9eQo+IENPTkZJR19TTkRfU09DX0FMQzU2MjM9eQo+IENPTkZJR19TTkRf
U09DX0FMQzU2MzI9eQo+IENPTkZJR19TTkRfU09DX0NTNDJMNTE9eQo+IENPTkZJR19TTkRfU09D
X0NTNDJMNTI9eQo+IENPTkZJR19TTkRfU09DX0NTNDJMNzM9eQo+IENPTkZJR19TTkRfU09DX0NT
NDI3MD15Cj4gQ09ORklHX1NORF9TT0NfQ1M0MjcxPXkKPiBDT05GSUdfU05EX1NPQ19DWDIwNDQy
PXkKPiBDT05GSUdfU05EX1NPQ19KWjQ3NDBfQ09ERUM9eQo+IENPTkZJR19TTkRfU09DX0wzPXkK
PiBDT05GSUdfU05EX1NPQ19EQTcyMTA9eQo+IENPTkZJR19TTkRfU09DX0RBNzIxMz15Cj4gQ09O
RklHX1NORF9TT0NfREE3MzJYPXkKPiBDT05GSUdfU05EX1NPQ19EQTkwNTU9eQo+IENPTkZJR19T
TkRfU09DX0JUX1NDTz15Cj4gQ09ORklHX1NORF9TT0NfSVNBQkVMTEU9eQo+IENPTkZJR19TTkRf
U09DX0xNNDk0NTM9eQo+IENPTkZJR19TTkRfU09DX01BWDk4MDg4PXkKPiBDT05GSUdfU05EX1NP
Q19NQVg5ODA5MD15Cj4gQ09ORklHX1NORF9TT0NfTUFYOTgwOTU9eQo+IENPTkZJR19TTkRfU09D
X01BWDk4NTA9eQo+IENPTkZJR19TTkRfU09DX0hETUlfQ09ERUM9eQo+IENPTkZJR19TTkRfU09D
X1BDTTE2ODE9eQo+IENPTkZJR19TTkRfU09DX1BDTTMwMDg9eQo+IENPTkZJR19TTkRfU09DX1JU
NTYzMT15Cj4gQ09ORklHX1NORF9TT0NfUlQ1NjQwPXkKPiBDT05GSUdfU05EX1NPQ19TR1RMNTAw
MD15Cj4gQ09ORklHX1NORF9TT0NfU0lHTUFEU1A9eQo+IENPTkZJR19TTkRfU09DX1NQRElGPXkK
PiBDT05GSUdfU05EX1NPQ19TU00yNTE4PXkKPiBDT05GSUdfU05EX1NPQ19TU00yNjAyPXkKPiBD
T05GSUdfU05EX1NPQ19TVEEzMlg9eQo+IENPTkZJR19TTkRfU09DX1NUQTUyOT15Cj4gQ09ORklH
X1NORF9TT0NfVEFTNTA4Nj15Cj4gQ09ORklHX1NORF9TT0NfVExWMzIwQUlDMjM9eQo+IENPTkZJ
R19TTkRfU09DX1RMVjMyMEFJQzMyWDQ9eQo+IENPTkZJR19TTkRfU09DX1RMVjMyMEFJQzNYPXkK
PiBDT05GSUdfU05EX1NPQ19UTFYzMjBEQUMzMz15Cj4gQ09ORklHX1NORF9TT0NfVFdMNDAzMD15
Cj4gQ09ORklHX1NORF9TT0NfVFdMNjA0MD15Cj4gQ09ORklHX1NORF9TT0NfVURBMTM0WD15Cj4g
Q09ORklHX1NORF9TT0NfVURBMTM4MD15Cj4gQ09ORklHX1NORF9TT0NfV0wxMjczPXkKPiBDT05G
SUdfU05EX1NPQ19XTTEyNTBfRVYxPXkKPiBDT05GSUdfU05EX1NPQ19XTTIwMDA9eQo+IENPTkZJ
R19TTkRfU09DX1dNMjIwMD15Cj4gQ09ORklHX1NORF9TT0NfV001MTAwPXkKPiBDT05GSUdfU05E
X1NPQ19XTTgzNTA9eQo+IENPTkZJR19TTkRfU09DX1dNODQwMD15Cj4gQ09ORklHX1NORF9TT0Nf
V004NTEwPXkKPiBDT05GSUdfU05EX1NPQ19XTTg1MjM9eQo+IENPTkZJR19TTkRfU09DX1dNODU4
MD15Cj4gQ09ORklHX1NORF9TT0NfV004NzExPXkKPiBDT05GSUdfU05EX1NPQ19XTTg3Mjc9eQo+
IENPTkZJR19TTkRfU09DX1dNODcyOD15Cj4gQ09ORklHX1NORF9TT0NfV004NzMxPXkKPiBDT05G
SUdfU05EX1NPQ19XTTg3Mzc9eQo+IENPTkZJR19TTkRfU09DX1dNODc0MT15Cj4gQ09ORklHX1NO
RF9TT0NfV004NzUwPXkKPiBDT05GSUdfU05EX1NPQ19XTTg3NTM9eQo+IENPTkZJR19TTkRfU09D
X1dNODc3Nj15Cj4gQ09ORklHX1NORF9TT0NfV004NzgyPXkKPiBDT05GSUdfU05EX1NPQ19XTTg4
MDQ9eQo+IENPTkZJR19TTkRfU09DX1dNODkwMD15Cj4gQ09ORklHX1NORF9TT0NfV004OTAzPXkK
PiBDT05GSUdfU05EX1NPQ19XTTg5MDQ9eQo+IENPTkZJR19TTkRfU09DX1dNODk0MD15Cj4gQ09O
RklHX1NORF9TT0NfV004OTU1PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5NjA9eQo+IENPTkZJR19T
TkRfU09DX1dNODk2MT15Cj4gQ09ORklHX1NORF9TT0NfV004OTYyPXkKPiBDT05GSUdfU05EX1NP
Q19XTTg5NzE9eQo+IENPTkZJR19TTkRfU09DX1dNODk3ND15Cj4gQ09ORklHX1NORF9TT0NfV004
OTc4PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5ODM9eQo+IENPTkZJR19TTkRfU09DX1dNODk4NT15
Cj4gQ09ORklHX1NORF9TT0NfV004OTg4PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5OTA9eQo+IENP
TkZJR19TTkRfU09DX1dNODk5MT15Cj4gQ09ORklHX1NORF9TT0NfV004OTkzPXkKPiBDT05GSUdf
U05EX1NPQ19XTTg5OTU9eQo+IENPTkZJR19TTkRfU09DX1dNODk5Nj15Cj4gQ09ORklHX1NORF9T
T0NfV004OTk3PXkKPiBDT05GSUdfU05EX1NPQ19XTTkwODE9eQo+IENPTkZJR19TTkRfU09DX1dN
OTA5MD15Cj4gQ09ORklHX1NORF9TT0NfTE00ODU3PXkKPiBDT05GSUdfU05EX1NPQ19NQVg5NzY4
PXkKPiBDT05GSUdfU05EX1NPQ19NQVg5ODc3PXkKPiBDT05GSUdfU05EX1NPQ19NQzEzNzgzPXkK
PiBDT05GSUdfU05EX1NPQ19NTDI2MTI0PXkKPiBDT05GSUdfU05EX1NPQ19UUEE2MTMwQTI9eQo+
IENPTkZJR19TTkRfU0lNUExFX0NBUkQ9eQo+IENPTkZJR19TT1VORF9QUklNRT15Cj4gQ09ORklH
X0FDOTdfQlVTPXkKPiAKPiAjCj4gIyBISUQgc3VwcG9ydAo+ICMKPiBDT05GSUdfSElEPXkKPiAj
IENPTkZJR19ISURfQkFUVEVSWV9TVFJFTkdUSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSElEUkFX
IGlzIG5vdCBzZXQKPiBDT05GSUdfVUhJRD15Cj4gIyBDT05GSUdfSElEX0dFTkVSSUMgaXMgbm90
IHNldAo+IAo+ICMKPiAjIFNwZWNpYWwgSElEIGRyaXZlcnMKPiAjCj4gQ09ORklHX0hJRF9BNFRF
Q0g9eQo+IENPTkZJR19ISURfQUNSVVg9eQo+ICMgQ09ORklHX0hJRF9BQ1JVWF9GRiBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfSElEX0FQUExFIGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX0FVUkVBTD15
Cj4gQ09ORklHX0hJRF9CRUxLSU49eQo+IENPTkZJR19ISURfQ0hFUlJZPXkKPiBDT05GSUdfSElE
X0NISUNPTlk9eQo+ICMgQ09ORklHX0hJRF9QUk9ESUtFWVMgaXMgbm90IHNldAo+IENPTkZJR19I
SURfQ1lQUkVTUz15Cj4gIyBDT05GSUdfSElEX0RSQUdPTlJJU0UgaXMgbm90IHNldAo+IENPTkZJ
R19ISURfRU1TX0ZGPXkKPiBDT05GSUdfSElEX0VMRUNPTT15Cj4gIyBDT05GSUdfSElEX0VaS0VZ
IGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX0tFWVRPVUNIPXkKPiAjIENPTkZJR19ISURfS1lFIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19ISURfVUNMT0dJQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hJRF9X
QUxUT1A9eQo+IENPTkZJR19ISURfR1lSQVRJT049eQo+IENPTkZJR19ISURfSUNBREU9eQo+IENP
TkZJR19ISURfVFdJTkhBTj15Cj4gQ09ORklHX0hJRF9LRU5TSU5HVE9OPXkKPiBDT05GSUdfSElE
X0xDUE9XRVI9eQo+ICMgQ09ORklHX0hJRF9MRU5PVk9fVFBLQkQgaXMgbm90IHNldAo+IENPTkZJ
R19ISURfTE9HSVRFQ0g9eQo+IENPTkZJR19MT0dJVEVDSF9GRj15Cj4gQ09ORklHX0xPR0lSVU1C
TEVQQUQyX0ZGPXkKPiBDT05GSUdfTE9HSUc5NDBfRkY9eQo+ICMgQ09ORklHX0xPR0lXSEVFTFNf
RkYgaXMgbm90IHNldAo+IENPTkZJR19ISURfTUFHSUNNT1VTRT15Cj4gIyBDT05GSUdfSElEX01J
Q1JPU09GVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSElEX01PTlRFUkVZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19ISURfTVVMVElUT1VDSCBpcyBub3Qgc2V0Cj4gQ09ORklHX0hJRF9PUlRFSz15Cj4g
IyBDT05GSUdfSElEX1BBTlRIRVJMT1JEIGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX1BFVEFMWU5Y
PXkKPiBDT05GSUdfSElEX1BJQ09MQ0Q9eQo+ICMgQ09ORklHX0hJRF9QSUNPTENEX0ZCIGlzIG5v
dCBzZXQKPiBDT05GSUdfSElEX1BJQ09MQ0RfQkFDS0xJR0hUPXkKPiAjIENPTkZJR19ISURfUElD
T0xDRF9MQ0QgaXMgbm90IHNldAo+IENPTkZJR19ISURfUElDT0xDRF9MRURTPXkKPiBDT05GSUdf
SElEX1BJQ09MQ0RfQ0lSPXkKPiAjIENPTkZJR19ISURfUFJJTUFYIGlzIG5vdCBzZXQKPiBDT05G
SUdfSElEX1NBSVRFSz15Cj4gQ09ORklHX0hJRF9TQU1TVU5HPXkKPiBDT05GSUdfSElEX1NQRUVE
TElOSz15Cj4gQ09ORklHX0hJRF9TVEVFTFNFUklFUz15Cj4gQ09ORklHX0hJRF9TVU5QTFVTPXkK
PiAjIENPTkZJR19ISURfR1JFRU5BU0lBIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ISURfSFlQRVJW
X01PVVNFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ISURfU01BUlRKT1lQTFVTIGlzIG5vdCBzZXQK
PiBDT05GSUdfSElEX1RJVk89eQo+IENPTkZJR19ISURfVE9QU0VFRD15Cj4gQ09ORklHX0hJRF9U
SElOR009eQo+IENPTkZJR19ISURfVEhSVVNUTUFTVEVSPXkKPiAjIENPTkZJR19USFJVU1RNQVNU
RVJfRkYgaXMgbm90IHNldAo+IENPTkZJR19ISURfV0FDT009eQo+IENPTkZJR19ISURfV0lJTU9U
RT15Cj4gQ09ORklHX0hJRF9YSU5NTz15Cj4gQ09ORklHX0hJRF9aRVJPUExVUz15Cj4gQ09ORklH
X1pFUk9QTFVTX0ZGPXkKPiBDT05GSUdfSElEX1pZREFDUk9OPXkKPiAjIENPTkZJR19ISURfU0VO
U09SX0hVQiBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgSTJDIEhJRCBzdXBwb3J0Cj4gIwo+ICMgQ09O
RklHX0kyQ19ISUQgaXMgbm90IHNldAo+IENPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5ESUFOPXkK
PiAjIENPTkZJR19VU0JfU1VQUE9SVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1VXQj15Cj4gQ09ORklH
X1VXQl9XSENJPXkKPiAjIENPTkZJR19NTUMgaXMgbm90IHNldAo+ICMgQ09ORklHX01FTVNUSUNL
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkVXX0xFRFM9eQo+IENPTkZJR19MRURTX0NMQVNTPXkKPiAK
PiAjCj4gIyBMRUQgZHJpdmVycwo+ICMKPiBDT05GSUdfTEVEU184OFBNODYwWD15Cj4gIyBDT05G
SUdfTEVEU19MTTM1MzAgaXMgbm90IHNldAo+ICMgQ09ORklHX0xFRFNfTE0zNTMzIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTEVEU19MTTM2NDI9eQo+ICMgQ09ORklHX0xFRFNfUENBOTUzMiBpcyBub3Qg
c2V0Cj4gQ09ORklHX0xFRFNfR1BJTz15Cj4gQ09ORklHX0xFRFNfTFAzOTQ0PXkKPiBDT05GSUdf
TEVEU19MUDU1WFhfQ09NTU9OPXkKPiBDT05GSUdfTEVEU19MUDU1MjE9eQo+IENPTkZJR19MRURT
X0xQNTUyMz15Cj4gQ09ORklHX0xFRFNfTFA1NTYyPXkKPiBDT05GSUdfTEVEU19MUDg1MDE9eQo+
IENPTkZJR19MRURTX0xQODc4OD15Cj4gQ09ORklHX0xFRFNfQ0xFVk9fTUFJTD15Cj4gQ09ORklH
X0xFRFNfUENBOTU1WD15Cj4gIyBDT05GSUdfTEVEU19QQ0E5NjNYIGlzIG5vdCBzZXQKPiBDT05G
SUdfTEVEU19QQ0E5Njg1PXkKPiBDT05GSUdfTEVEU19XTTgzMVhfU1RBVFVTPXkKPiBDT05GSUdf
TEVEU19XTTgzNTA9eQo+IENPTkZJR19MRURTX1BXTT15Cj4gQ09ORklHX0xFRFNfUkVHVUxBVE9S
PXkKPiBDT05GSUdfTEVEU19CRDI4MDI9eQo+IENPTkZJR19MRURTX0lOVEVMX1NTNDIwMD15Cj4g
Q09ORklHX0xFRFNfTFQzNTkzPXkKPiAjIENPTkZJR19MRURTX01DMTM3ODMgaXMgbm90IHNldAo+
IENPTkZJR19MRURTX1RDQTY1MDc9eQo+IENPTkZJR19MRURTX0xNMzU1eD15Cj4gIyBDT05GSUdf
TEVEU19PVDIwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0xFRFNfQkxJTktNPXkKPiAKPiAjCj4gIyBM
RUQgVHJpZ2dlcnMKPiAjCj4gQ09ORklHX0xFRFNfVFJJR0dFUlM9eQo+IENPTkZJR19MRURTX1RS
SUdHRVJfVElNRVI9eQo+IENPTkZJR19MRURTX1RSSUdHRVJfT05FU0hPVD15Cj4gQ09ORklHX0xF
RFNfVFJJR0dFUl9IRUFSVEJFQVQ9eQo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9CQUNLTElHSFQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9DUFUgaXMgbm90IHNldAo+IENPTkZJ
R19MRURTX1RSSUdHRVJfR1BJTz15Cj4gQ09ORklHX0xFRFNfVFJJR0dFUl9ERUZBVUxUX09OPXkK
PiAKPiAjCj4gIyBpcHRhYmxlcyB0cmlnZ2VyIGlzIHVuZGVyIE5ldGZpbHRlciBjb25maWcgKExF
RCB0YXJnZXQpCj4gIwo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9UUkFOU0lFTlQgaXMgbm90IHNl
dAo+IENPTkZJR19MRURTX1RSSUdHRVJfQ0FNRVJBPXkKPiBDT05GSUdfQUNDRVNTSUJJTElUWT15
Cj4gQ09ORklHX0lORklOSUJBTkQ9eQo+ICMgQ09ORklHX0lORklOSUJBTkRfVVNFUl9NQUQgaXMg
bm90IHNldAo+IENPTkZJR19JTkZJTklCQU5EX1VTRVJfQUNDRVNTPXkKPiBDT05GSUdfSU5GSU5J
QkFORF9VU0VSX01FTT15Cj4gQ09ORklHX0lORklOSUJBTkRfQUREUl9UUkFOUz15Cj4gIyBDT05G
SUdfSU5GSU5JQkFORF9NVEhDQSBpcyBub3Qgc2V0Cj4gQ09ORklHX0lORklOSUJBTkRfSVBBVEg9
eQo+IENPTkZJR19JTkZJTklCQU5EX1FJQj15Cj4gQ09ORklHX0lORklOSUJBTkRfQU1TTzExMDA9
eQo+IENPTkZJR19JTkZJTklCQU5EX0FNU08xMTAwX0RFQlVHPXkKPiBDT05GSUdfSU5GSU5JQkFO
RF9ORVM9eQo+ICMgQ09ORklHX0lORklOSUJBTkRfTkVTX0RFQlVHIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19FREFDIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0xJQj15Cj4gQ09ORklHX1JUQ19DTEFT
Uz15Cj4gIyBDT05GSUdfUlRDX0hDVE9TWVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19TWVNU
T0hDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfREVCVUcgaXMgbm90IHNldAo+IAo+ICMKPiAj
IFJUQyBpbnRlcmZhY2VzCj4gIwo+ICMgQ09ORklHX1JUQ19JTlRGX1NZU0ZTIGlzIG5vdCBzZXQK
PiBDT05GSUdfUlRDX0lOVEZfREVWPXkKPiAjIENPTkZJR19SVENfSU5URl9ERVZfVUlFX0VNVUwg
aXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX1RFU1Q9eQo+IAo+ICMKPiAjIEkyQyBSVEMgZHJp
dmVycwo+ICMKPiBDT05GSUdfUlRDX0RSVl84OFBNODYwWD15Cj4gIyBDT05GSUdfUlRDX0RSVl84
OFBNODBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfRFJWX0RTMTMwNyBpcyBub3Qgc2V0Cj4g
Q09ORklHX1JUQ19EUlZfRFMxMzc0PXkKPiBDT05GSUdfUlRDX0RSVl9EUzE2NzI9eQo+ICMgQ09O
RklHX1JUQ19EUlZfRFMzMjMyIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9MUDg3ODg9eQo+
IENPTkZJR19SVENfRFJWX01BWDY5MDA9eQo+IENPTkZJR19SVENfRFJWX01BWDg5OTg9eQo+IENP
TkZJR19SVENfRFJWX1JTNUMzNzI9eQo+IENPTkZJR19SVENfRFJWX0lTTDEyMDg9eQo+IENPTkZJ
R19SVENfRFJWX0lTTDEyMDIyPXkKPiAjIENPTkZJR19SVENfRFJWX0lTTDEyMDU3IGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19SVENfRFJWX1gxMjA1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfRFJW
X1BBTE1BUyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUlRDX0RSVl9QQ0YyMTI3IGlzIG5vdCBzZXQK
PiBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzPXkKPiBDT05GSUdfUlRDX0RSVl9QQ0Y4NTYzPXkKPiAj
IENPTkZJR19SVENfRFJWX1BDRjg1ODMgaXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX000MVQ4
MD15Cj4gIyBDT05GSUdfUlRDX0RSVl9NNDFUODBfV0RUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19S
VENfRFJWX0JRMzJLIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9UV0w0MDMwPXkKPiBDT05G
SUdfUlRDX0RSVl9UUFM2NTg2WD15Cj4gQ09ORklHX1JUQ19EUlZfUzM1MzkwQT15Cj4gIyBDT05G
SUdfUlRDX0RSVl9GTTMxMzAgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19EUlZfUlg4NTgxIGlz
IG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9SWDgwMjU9eQo+IENPTkZJR19SVENfRFJWX0VNMzAy
Nz15Cj4gIyBDT05GSUdfUlRDX0RSVl9SVjMwMjlDMiBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU1BJ
IFJUQyBkcml2ZXJzCj4gIwo+IAo+ICMKPiAjIFBsYXRmb3JtIFJUQyBkcml2ZXJzCj4gIwo+IENP
TkZJR19SVENfRFJWX0NNT1M9eQo+IENPTkZJR19SVENfRFJWX0RTMTI4Nj15Cj4gIyBDT05GSUdf
UlRDX0RSVl9EUzE1MTEgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19EUlZfRFMxNTUzIGlzIG5v
dCBzZXQKPiBDT05GSUdfUlRDX0RSVl9EUzE3NDI9eQo+IENPTkZJR19SVENfRFJWX0RBOTA1NT15
Cj4gQ09ORklHX1JUQ19EUlZfU1RLMTdUQTg9eQo+IENPTkZJR19SVENfRFJWX000OFQ4Nj15Cj4g
Q09ORklHX1JUQ19EUlZfTTQ4VDM1PXkKPiBDT05GSUdfUlRDX0RSVl9NNDhUNTk9eQo+IENPTkZJ
R19SVENfRFJWX01TTTYyNDI9eQo+IENPTkZJR19SVENfRFJWX0JRNDgwMj15Cj4gQ09ORklHX1JU
Q19EUlZfUlA1QzAxPXkKPiAjIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKPiBDT05G
SUdfUlRDX0RSVl9EUzI0MDQ9eQo+IENPTkZJR19SVENfRFJWX1dNODMxWD15Cj4gQ09ORklHX1JU
Q19EUlZfV004MzUwPXkKPiBDT05GSUdfUlRDX0RSVl9QQ0Y1MDYzMz15Cj4gCj4gIwo+ICMgb24t
Q1BVIFJUQyBkcml2ZXJzCj4gIwo+IENPTkZJR19SVENfRFJWX01DMTNYWFg9eQo+IENPTkZJR19S
VENfRFJWX01PWEFSVD15Cj4gCj4gIwo+ICMgSElEIFNlbnNvciBSVEMgZHJpdmVycwo+ICMKPiBD
T05GSUdfRE1BREVWSUNFUz15Cj4gIyBDT05GSUdfRE1BREVWSUNFU19ERUJVRyBpcyBub3Qgc2V0
Cj4gCj4gIwo+ICMgRE1BIERldmljZXMKPiAjCj4gQ09ORklHX0lOVEVMX01JRF9ETUFDPXkKPiAj
IENPTkZJR19JTlRFTF9JT0FURE1BIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EV19ETUFDX0NPUkUg
aXMgbm90IHNldAo+ICMgQ09ORklHX0RXX0RNQUMgaXMgbm90IHNldAo+ICMgQ09ORklHX0RXX0RN
QUNfUENJIGlzIG5vdCBzZXQKPiBDT05GSUdfVElNQl9ETUE9eQo+IENPTkZJR19QQ0hfRE1BPXkK
PiBDT05GSUdfRE1BX0VOR0lORT15Cj4gQ09ORklHX0RNQV9BQ1BJPXkKPiAKPiAjCj4gIyBETUEg
Q2xpZW50cwo+ICMKPiAjIENPTkZJR19BU1lOQ19UWF9ETUEgaXMgbm90IHNldAo+IENPTkZJR19E
TUFURVNUPXkKPiAjIENPTkZJR19BVVhESVNQTEFZIGlzIG5vdCBzZXQKPiBDT05GSUdfVUlPPXkK
PiBDT05GSUdfVUlPX0NJRj15Cj4gQ09ORklHX1VJT19QRFJWX0dFTklSUT15Cj4gQ09ORklHX1VJ
T19ETUVNX0dFTklSUT15Cj4gQ09ORklHX1VJT19BRUM9eQo+IENPTkZJR19VSU9fU0VSQ09TMz15
Cj4gIyBDT05GSUdfVUlPX1BDSV9HRU5FUklDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19VSU9fTkVU
WCBpcyBub3Qgc2V0Cj4gQ09ORklHX1VJT19NRjYyND15Cj4gIyBDT05GSUdfVklSVF9EUklWRVJT
IGlzIG5vdCBzZXQKPiBDT05GSUdfVklSVElPPXkKPiAKPiAjCj4gIyBWaXJ0aW8gZHJpdmVycwo+
ICMKPiAjIENPTkZJR19WSVJUSU9fUENJIGlzIG5vdCBzZXQKPiBDT05GSUdfVklSVElPX0JBTExP
T049eQo+ICMgQ09ORklHX1ZJUlRJT19NTUlPIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBNaWNyb3Nv
ZnQgSHlwZXItViBndWVzdCBzdXBwb3J0Cj4gIwo+IENPTkZJR19IWVBFUlY9eQo+IENPTkZJR19I
WVBFUlZfVVRJTFM9eQo+IENPTkZJR19IWVBFUlZfQkFMTE9PTj15Cj4gCj4gIwo+ICMgWGVuIGRy
aXZlciBzdXBwb3J0Cj4gIwo+ICMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5vdCBzZXQKPiBDT05G
SUdfWEVOX0RFVl9FVlRDSE49eQo+ICMgQ09ORklHX1hFTl9CQUNLRU5EIGlzIG5vdCBzZXQKPiBD
T05GSUdfWEVORlM9eQo+IENPTkZJR19YRU5fQ09NUEFUX1hFTkZTPXkKPiBDT05GSUdfWEVOX1NZ
U19IWVBFUlZJU09SPXkKPiBDT05GSUdfWEVOX1hFTkJVU19GUk9OVEVORD15Cj4gQ09ORklHX1hF
Tl9HTlRERVY9eQo+IENPTkZJR19YRU5fR1JBTlRfREVWX0FMTE9DPXkKPiBDT05GSUdfU1dJT1RM
Ql9YRU49eQo+IENPTkZJR19YRU5fVE1FTT15Cj4gQ09ORklHX1hFTl9QUklWQ01EPXkKPiAjIENP
TkZJR19YRU5fQUNQSV9QUk9DRVNTT1IgaXMgbm90IHNldAo+IENPTkZJR19YRU5fSEFWRV9QVk1N
VT15Cj4gQ09ORklHX1NUQUdJTkc9eQo+IENPTkZJR19TTElDT1NTPXkKPiBDT05GSUdfRUNITz15
Cj4gQ09ORklHX1BBTkVMPXkKPiBDT05GSUdfUEFORUxfUEFSUE9SVD0wCj4gQ09ORklHX1BBTkVM
X1BST0ZJTEU9NQo+IENPTkZJR19QQU5FTF9DSEFOR0VfTUVTU0FHRT15Cj4gQ09ORklHX1BBTkVM
X0JPT1RfTUVTU0FHRT0iIgo+ICMgQ09ORklHX0RYX1NFUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZC
X1NNN1hYPXkKPiBDT05GSUdfQ1JZU1RBTEhEPXkKPiBDT05GSUdfRkJfWEdJPXkKPiBDT05GSUdf
QUNQSV9RVUlDS1NUQVJUPXkKPiBDT05GSUdfRlQxMDAwPXkKPiAKPiAjCj4gIyBTcGVha3VwIGNv
bnNvbGUgc3BlZWNoCj4gIwo+IENPTkZJR19UT1VDSFNDUkVFTl9DTEVBUlBBRF9UTTEyMTc9eQo+
ICMgQ09ORklHX1RPVUNIU0NSRUVOX1NZTkFQVElDU19JMkNfUk1JNCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1NUQUdJTkdfTUVESUE9eQo+IENPTkZJR19EVkJfQ1hEMjA5OT15Cj4gIyBDT05GSUdfVklE
RU9fRFQzMTU1IGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fVjRMMl9JTlRfREVWSUNFPXkKPiBD
T05GSUdfVklERU9fVENNODI1WD15Cj4gQ09ORklHX1VTQl9TTjlDMTAyPXkKPiBDT05GSUdfU09M
TzZYMTA9eQo+ICMgQ09ORklHX0xJUkNfU1RBR0lORyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQW5k
cm9pZAo+ICMKPiBDT05GSUdfQU5EUk9JRD15Cj4gQ09ORklHX0FORFJPSURfQklOREVSX0lQQz15
Cj4gIyBDT05GSUdfQVNITUVNIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BTkRST0lEX0xPR0dFUiBp
cyBub3Qgc2V0Cj4gQ09ORklHX0FORFJPSURfVElNRURfT1VUUFVUPXkKPiAjIENPTkZJR19BTkRS
T0lEX1RJTUVEX0dQSU8gaXMgbm90IHNldAo+IENPTkZJR19BTkRST0lEX0xPV19NRU1PUllfS0lM
TEVSPXkKPiAjIENPTkZJR19BTkRST0lEX0lOVEZfQUxBUk1fREVWIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19TWU5DIGlzIG5vdCBzZXQKPiBDT05GSUdfSU9OPXkKPiBDT05GSUdfSU9OX1RFU1Q9eQo+
IENPTkZJR19ER1JQPXkKPiBDT05GSUdfWElMTFlCVVM9eQo+ICMgQ09ORklHX1hJTExZQlVTX1BD
SUUgaXMgbm90IHNldAo+ICMgQ09ORklHX0RHTkMgaXMgbm90IHNldAo+IENPTkZJR19ER0FQPXkK
PiBDT05GSUdfWDg2X1BMQVRGT1JNX0RFVklDRVM9eQo+ICMgQ09ORklHX0FDRVJIREYgaXMgbm90
IHNldAo+ICMgQ09ORklHX0FTVVNfTEFQVE9QIGlzIG5vdCBzZXQKPiBDT05GSUdfREVMTF9MQVBU
T1A9eQo+IENPTkZJR19GVUpJVFNVX0xBUFRPUD15Cj4gQ09ORklHX0ZVSklUU1VfTEFQVE9QX0RF
QlVHPXkKPiAjIENPTkZJR19GVUpJVFNVX1RBQkxFVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSFBf
QUNDRUwgaXMgbm90IHNldAo+IENPTkZJR19QQU5BU09OSUNfTEFQVE9QPXkKPiBDT05GSUdfVEhJ
TktQQURfQUNQST15Cj4gQ09ORklHX1RISU5LUEFEX0FDUElfQUxTQV9TVVBQT1JUPXkKPiBDT05G
SUdfVEhJTktQQURfQUNQSV9ERUJVR0ZBQ0lMSVRJRVM9eQo+ICMgQ09ORklHX1RISU5LUEFEX0FD
UElfREVCVUcgaXMgbm90IHNldAo+ICMgQ09ORklHX1RISU5LUEFEX0FDUElfVU5TQUZFX0xFRFMg
aXMgbm90IHNldAo+IENPTkZJR19USElOS1BBRF9BQ1BJX1ZJREVPPXkKPiBDT05GSUdfVEhJTktQ
QURfQUNQSV9IT1RLRVlfUE9MTD15Cj4gIyBDT05GSUdfU0VOU09SU19IREFQUyBpcyBub3Qgc2V0
Cj4gQ09ORklHX0lOVEVMX01FTkxPVz15Cj4gIyBDT05GSUdfQUNQSV9XTUkgaXMgbm90IHNldAo+
ICMgQ09ORklHX1RPUFNUQVJfTEFQVE9QIGlzIG5vdCBzZXQKPiBDT05GSUdfVE9TSElCQV9CVF9S
RktJTEw9eQo+ICMgQ09ORklHX0FDUElfQ01QQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0lOVEVMX0lQ
Uz15Cj4gQ09ORklHX0lCTV9SVEw9eQo+ICMgQ09ORklHX1hPMTVfRUJPT0sgaXMgbm90IHNldAo+
ICMgQ09ORklHX1NBTVNVTkdfTEFQVE9QIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQU1TVU5HX1Ex
MCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FQUExFX0dNVVg9eQo+IENPTkZJR19JTlRFTF9SU1Q9eQo+
ICMgQ09ORklHX0lOVEVMX1NNQVJUQ09OTkVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1BWUEFOSUM9
eQo+IENPTkZJR19DSFJPTUVfUExBVEZPUk1TPXkKPiBDT05GSUdfQ0hST01FT1NfTEFQVE9QPXkK
PiAjIENPTkZJR19DSFJPTUVPU19QU1RPUkUgaXMgbm90IHNldAo+IAo+ICMKPiAjIEhhcmR3YXJl
IFNwaW5sb2NrIGRyaXZlcnMKPiAjCj4gQ09ORklHX0NMS0VWVF9JODI1Mz15Cj4gQ09ORklHX0k4
MjUzX0xPQ0s9eQo+IENPTkZJR19DTEtCTERfSTgyNTM9eQo+IENPTkZJR19NQUlMQk9YPXkKPiAj
IENPTkZJR19JT01NVV9TVVBQT1JUIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBSZW1vdGVwcm9jIGRy
aXZlcnMKPiAjCj4gQ09ORklHX1JFTU9URVBST0M9eQo+IENPTkZJR19TVEVfTU9ERU1fUlBST0M9
eQo+IAo+ICMKPiAjIFJwbXNnIGRyaXZlcnMKPiAjCj4gQ09ORklHX1BNX0RFVkZSRVE9eQo+IAo+
ICMKPiAjIERFVkZSRVEgR292ZXJub3JzCj4gIwo+IENPTkZJR19ERVZGUkVRX0dPVl9TSU1QTEVf
T05ERU1BTkQ9eQo+IENPTkZJR19ERVZGUkVRX0dPVl9QRVJGT1JNQU5DRT15Cj4gIyBDT05GSUdf
REVWRlJFUV9HT1ZfUE9XRVJTQVZFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERVZGUkVRX0dPVl9V
U0VSU1BBQ0UgaXMgbm90IHNldAo+IAo+ICMKPiAjIERFVkZSRVEgRHJpdmVycwo+ICMKPiBDT05G
SUdfRVhUQ09OPXkKPiAKPiAjCj4gIyBFeHRjb24gRGV2aWNlIERyaXZlcnMKPiAjCj4gQ09ORklH
X0VYVENPTl9HUElPPXkKPiBDT05GSUdfRVhUQ09OX01BWDc3NjkzPXkKPiBDT05GSUdfRVhUQ09O
X0FSSVpPTkE9eQo+ICMgQ09ORklHX0VYVENPTl9QQUxNQVMgaXMgbm90IHNldAo+ICMgQ09ORklH
X01FTU9SWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSUlPIGlzIG5vdCBzZXQKPiBDT05GSUdfTlRC
PXkKPiAjIENPTkZJR19WTUVfQlVTIGlzIG5vdCBzZXQKPiBDT05GSUdfUFdNPXkKPiBDT05GSUdf
UFdNX1NZU0ZTPXkKPiBDT05GSUdfUFdNX1JFTkVTQVNfVFBVPXkKPiBDT05GSUdfUFdNX1RXTD15
Cj4gQ09ORklHX1BXTV9UV0xfTEVEPXkKPiBDT05GSUdfSVBBQ0tfQlVTPXkKPiBDT05GSUdfQk9B
UkRfVFBDSTIwMD15Cj4gQ09ORklHX1NFUklBTF9JUE9DVEFMPXkKPiBDT05GSUdfUkVTRVRfQ09O
VFJPTExFUj15Cj4gQ09ORklHX0ZNQz15Cj4gQ09ORklHX0ZNQ19GQUtFREVWPXkKPiBDT05GSUdf
Rk1DX1RSSVZJQUw9eQo+ICMgQ09ORklHX0ZNQ19XUklURV9FRVBST00gaXMgbm90IHNldAo+ICMg
Q09ORklHX0ZNQ19DSEFSREVWIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBQSFkgU3Vic3lzdGVtCj4g
Iwo+ICMgQ09ORklHX0dFTkVSSUNfUEhZIGlzIG5vdCBzZXQKPiBDT05GSUdfUEhZX0VYWU5PU19N
SVBJX1ZJREVPPXkKPiAjIENPTkZJR19QT1dFUkNBUCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgRmly
bXdhcmUgRHJpdmVycwo+ICMKPiAjIENPTkZJR19FREQgaXMgbm90IHNldAo+IENPTkZJR19GSVJN
V0FSRV9NRU1NQVA9eQo+ICMgQ09ORklHX0RFTExfUkJVIGlzIG5vdCBzZXQKPiBDT05GSUdfRENE
QkFTPXkKPiBDT05GSUdfRE1JSUQ9eQo+ICMgQ09ORklHX0RNSV9TWVNGUyBpcyBub3Qgc2V0Cj4g
Q09ORklHX0RNSV9TQ0FOX01BQ0hJTkVfTk9OX0VGSV9GQUxMQkFDSz15Cj4gQ09ORklHX0lTQ1NJ
X0lCRlRfRklORD15Cj4gIyBDT05GSUdfR09PR0xFX0ZJUk1XQVJFIGlzIG5vdCBzZXQKPiAKPiAj
Cj4gIyBFRkkgKEV4dGVuc2libGUgRmlybXdhcmUgSW50ZXJmYWNlKSBTdXBwb3J0Cj4gIwo+IENP
TkZJR19FRklfVkFSUz15Cj4gQ09ORklHX0VGSV9SVU5USU1FX01BUD15Cj4gCj4gIwo+ICMgRmls
ZSBzeXN0ZW1zCj4gIwo+IENPTkZJR19EQ0FDSEVfV09SRF9BQ0NFU1M9eQo+IENPTkZJR19GU19Q
T1NJWF9BQ0w9eQo+IENPTkZJR19FWFBPUlRGUz15Cj4gQ09ORklHX0ZJTEVfTE9DS0lORz15Cj4g
Q09ORklHX0ZTTk9USUZZPXkKPiBDT05GSUdfRE5PVElGWT15Cj4gIyBDT05GSUdfSU5PVElGWV9V
U0VSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQU5PVElGWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1FV
T1RBPXkKPiBDT05GSUdfUVVPVEFfTkVUTElOS19JTlRFUkZBQ0U9eQo+ICMgQ09ORklHX1BSSU5U
X1FVT1RBX1dBUk5JTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX1FVT1RBX0RFQlVHIGlzIG5vdCBz
ZXQKPiBDT05GSUdfUVVPVEFfVFJFRT15Cj4gQ09ORklHX1FGTVRfVjE9eQo+IENPTkZJR19RRk1U
X1YyPXkKPiBDT05GSUdfUVVPVEFDVEw9eQo+ICMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0ZVU0VfRlMgaXMgbm90IHNldAo+IENPTkZJR19HRU5FUklDX0FDTD15Cj4g
Cj4gIwo+ICMgQ2FjaGVzCj4gIwo+ICMgQ09ORklHX0ZTQ0FDSEUgaXMgbm90IHNldAo+IAo+ICMK
PiAjIFBzZXVkbyBmaWxlc3lzdGVtcwo+ICMKPiAjIENPTkZJR19QUk9DX0ZTIGlzIG5vdCBzZXQK
PiBDT05GSUdfU1lTRlM9eQo+IENPTkZJR19UTVBGUz15Cj4gQ09ORklHX1RNUEZTX1BPU0lYX0FD
TD15Cj4gQ09ORklHX1RNUEZTX1hBVFRSPXkKPiBDT05GSUdfSFVHRVRMQkZTPXkKPiBDT05GSUdf
SFVHRVRMQl9QQUdFPXkKPiAjIENPTkZJR19DT05GSUdGU19GUyBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfTUlTQ19GSUxFU1lTVEVNUyBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVFdPUktfRklMRVNZU1RF
TVM9eQo+IENPTkZJR19ORlNfRlM9eQo+ICMgQ09ORklHX05GU19WMiBpcyBub3Qgc2V0Cj4gQ09O
RklHX05GU19WMz15Cj4gQ09ORklHX05GU19WM19BQ0w9eQo+IENPTkZJR19ORlNfVjQ9eQo+IENP
TkZJR19ORlNfU1dBUD15Cj4gIyBDT05GSUdfTkZTX1Y0XzEgaXMgbm90IHNldAo+ICMgQ09ORklH
X05GU19VU0VfTEVHQUNZX0ROUyBpcyBub3Qgc2V0Cj4gQ09ORklHX05GU19VU0VfS0VSTkVMX0RO
Uz15Cj4gIyBDT05GSUdfTkZTRCBpcyBub3Qgc2V0Cj4gQ09ORklHX0xPQ0tEPXkKPiBDT05GSUdf
TE9DS0RfVjQ9eQo+IENPTkZJR19ORlNfQUNMX1NVUFBPUlQ9eQo+IENPTkZJR19ORlNfQ09NTU9O
PXkKPiBDT05GSUdfU1VOUlBDPXkKPiBDT05GSUdfU1VOUlBDX0dTUz15Cj4gQ09ORklHX1NVTlJQ
Q19YUFJUX1JETUE9eQo+IENPTkZJR19TVU5SUENfU1dBUD15Cj4gIyBDT05GSUdfUlBDU0VDX0dT
U19LUkI1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19DRVBIX0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q0lGUz15Cj4gQ09ORklHX0NJRlNfU1RBVFM9eQo+IENPTkZJR19DSUZTX1NUQVRTMj15Cj4gIyBD
T05GSUdfQ0lGU19XRUFLX1BXX0hBU0ggaXMgbm90IHNldAo+ICMgQ09ORklHX0NJRlNfVVBDQUxM
IGlzIG5vdCBzZXQKPiBDT05GSUdfQ0lGU19YQVRUUj15Cj4gQ09ORklHX0NJRlNfUE9TSVg9eQo+
IENPTkZJR19DSUZTX0FDTD15Cj4gIyBDT05GSUdfQ0lGU19ERUJVRyBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfQ0lGU19ERlNfVVBDQUxMIGlzIG5vdCBzZXQKPiAjIENPTkZJR19DSUZTX1NNQjIgaXMg
bm90IHNldAo+IENPTkZJR19OQ1BfRlM9eQo+ICMgQ09ORklHX05DUEZTX1BBQ0tFVF9TSUdOSU5H
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkNQRlNfSU9DVExfTE9DS0lORz15Cj4gQ09ORklHX05DUEZT
X1NUUk9ORz15Cj4gQ09ORklHX05DUEZTX05GU19OUz15Cj4gQ09ORklHX05DUEZTX09TMl9OUz15
Cj4gQ09ORklHX05DUEZTX1NNQUxMRE9TPXkKPiAjIENPTkZJR19OQ1BGU19OTFMgaXMgbm90IHNl
dAo+IENPTkZJR19OQ1BGU19FWFRSQVM9eQo+IENPTkZJR19DT0RBX0ZTPXkKPiAjIENPTkZJR19B
RlNfRlMgaXMgbm90IHNldAo+IENPTkZJR185UF9GUz15Cj4gQ09ORklHXzlQX0ZTX1BPU0lYX0FD
TD15Cj4gQ09ORklHXzlQX0ZTX1NFQ1VSSVRZPXkKPiBDT05GSUdfTkxTPXkKPiBDT05GSUdfTkxT
X0RFRkFVTFQ9Imlzbzg4NTktMSIKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkxTX0NPREVQQUdFXzczNz15Cj4gQ09ORklHX05MU19DT0RFUEFHRV83NzU9
eQo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NTAgaXMgbm90IHNldAo+IENPTkZJR19OTFNfQ09E
RVBBR0VfODUyPXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg1NT15Cj4gIyBDT05GSUdfTkxTX0NP
REVQQUdFXzg1NyBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NjA9eQo+ICMgQ09O
RklHX05MU19DT0RFUEFHRV84NjEgaXMgbm90IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84
NjIgaXMgbm90IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfQ09ERVBBR0VfODY0PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg2NT15Cj4gQ09O
RklHX05MU19DT0RFUEFHRV84NjY9eQo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NjkgaXMgbm90
IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV85MzYgaXMgbm90IHNldAo+IENPTkZJR19OTFNf
Q09ERVBBR0VfOTUwPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKPiBD
T05GSUdfTkxTX0NPREVQQUdFXzk0OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NzQ9eQo+ICMg
Q09ORklHX05MU19JU084ODU5XzggaXMgbm90IHNldAo+IENPTkZJR19OTFNfQ09ERVBBR0VfMTI1
MD15Cj4gQ09ORklHX05MU19DT0RFUEFHRV8xMjUxPXkKPiBDT05GSUdfTkxTX0FTQ0lJPXkKPiBD
T05GSUdfTkxTX0lTTzg4NTlfMT15Cj4gQ09ORklHX05MU19JU084ODU5XzI9eQo+IENPTkZJR19O
TFNfSVNPODg1OV8zPXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfND15Cj4gQ09ORklHX05MU19JU084
ODU5XzU9eQo+IENPTkZJR19OTFNfSVNPODg1OV82PXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfNz15
Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlfOSBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19JU084ODU5
XzEzPXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfMTQ9eQo+ICMgQ09ORklHX05MU19JU084ODU5XzE1
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0tPSThfUj15Cj4gIyBDT05GSUdfTkxTX0tPSThfVSBp
cyBub3Qgc2V0Cj4gQ09ORklHX05MU19NQUNfUk9NQU49eQo+ICMgQ09ORklHX05MU19NQUNfQ0VM
VElDIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX01BQ19DRU5URVVSTz15Cj4gQ09ORklHX05MU19N
QUNfQ1JPQVRJQU49eQo+ICMgQ09ORklHX05MU19NQUNfQ1lSSUxMSUMgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfTUFDX0dBRUxJQz15Cj4gQ09ORklHX05MU19NQUNfR1JFRUs9eQo+IENPTkZJR19O
TFNfTUFDX0lDRUxBTkQ9eQo+ICMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfTUFDX1JPTUFOSUFOPXkKPiBDT05GSUdfTkxTX01BQ19UVVJLSVNIPXkKPiBDT05G
SUdfTkxTX1VURjg9eQo+IAo+ICMKPiAjIEtlcm5lbCBoYWNraW5nCj4gIwo+IENPTkZJR19UUkFD
RV9JUlFGTEFHU19TVVBQT1JUPXkKPiAKPiAjCj4gIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMK
PiAjCj4gIyBDT05GSUdfUFJJTlRLX1RJTUUgaXMgbm90IHNldAo+IENPTkZJR19ERUZBVUxUX01F
U1NBR0VfTE9HTEVWRUw9NAo+IENPTkZJR19CT09UX1BSSU5US19ERUxBWT15Cj4gIyBDT05GSUdf
RFlOQU1JQ19ERUJVRyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQ29tcGlsZS10aW1lIGNoZWNrcyBh
bmQgY29tcGlsZXIgb3B0aW9ucwo+ICMKPiAjIENPTkZJR19ERUJVR19JTkZPIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQKPiBDT05GSUdfRU5B
QkxFX01VU1RfQ0hFQ0s9eQo+IENPTkZJR19GUkFNRV9XQVJOPTIwNDgKPiBDT05GSUdfU1RSSVBf
QVNNX1NZTVM9eQo+ICMgQ09ORklHX1JFQURBQkxFX0FTTSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
VU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19GUz15Cj4gIyBDT05GSUdf
SEVBREVSU19DSEVDSyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX1NFQ1RJT05fTUlTTUFUQ0g9
eQo+IENPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQo+IENPTkZJR19GUkFNRV9QT0lO
VEVSPXkKPiBDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BVPXkKPiBDT05GSUdfTUFHSUNf
U1lTUlE9eQo+IENPTkZJR19NQUdJQ19TWVNSUV9ERUZBVUxUX0VOQUJMRT0weDEKPiBDT05GSUdf
REVCVUdfS0VSTkVMPXkKPiAKPiAjCj4gIyBNZW1vcnkgRGVidWdnaW5nCj4gIwo+IENPTkZJR19E
RUJVR19QQUdFQUxMT0M9eQo+IENPTkZJR19XQU5UX1BBR0VfREVCVUdfRkxBR1M9eQo+IENPTkZJ
R19QQUdFX0dVQVJEPXkKPiBDT05GSUdfREVCVUdfT0JKRUNUUz15Cj4gIyBDT05GSUdfREVCVUdf
T0JKRUNUU19TRUxGVEVTVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15
Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfVElNRVJTPXkKPiBDT05GSUdfREVCVUdfT0JKRUNUU19X
T1JLPXkKPiAjIENPTkZJR19ERUJVR19PQkpFQ1RTX1JDVV9IRUFEIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19ERUJVR19PQkpFQ1RTX1BFUkNQVV9DT1VOVEVSIGlzIG5vdCBzZXQKPiBDT05GSUdfREVC
VUdfT0JKRUNUU19FTkFCTEVfREVGQVVMVD0xCj4gQ09ORklHX1NMVUJfU1RBVFM9eQo+IENPTkZJ
R19IQVZFX0RFQlVHX0tNRU1MRUFLPXkKPiAjIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0RFQlVHX1NUQUNLX1VTQUdFPXkKPiAjIENPTkZJR19ERUJVR19WTSBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfREVCVUdfVklSVFVBTCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfREVC
VUdfTUVNT1JZX0lOSVQgaXMgbm90IHNldAo+IENPTkZJR19NRU1PUllfTk9USUZJRVJfRVJST1Jf
SU5KRUNUPXkKPiBDT05GSUdfSEFWRV9ERUJVR19TVEFDS09WRVJGTE9XPXkKPiAjIENPTkZJR19E
RUJVR19TVEFDS09WRVJGTE9XIGlzIG5vdCBzZXQKPiBDT05GSUdfSEFWRV9BUkNIX0tNRU1DSEVD
Sz15Cj4gIyBDT05GSUdfREVCVUdfU0hJUlEgaXMgbm90IHNldAo+IAo+ICMKPiAjIERlYnVnIExv
Y2t1cHMgYW5kIEhhbmdzCj4gIwo+ICMgQ09ORklHX0xPQ0tVUF9ERVRFQ1RPUiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0RFVEVDVF9IVU5HX1RBU0s9eQo+IENPTkZJR19ERUZBVUxUX0hVTkdfVEFTS19U
SU1FT1VUPTEyMAo+ICMgQ09ORklHX0JPT1RQQVJBTV9IVU5HX1RBU0tfUEFOSUMgaXMgbm90IHNl
dAo+IENPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BBTklDX1ZBTFVFPTAKPiBDT05GSUdfUEFO
SUNfT05fT09QUz15Cj4gQ09ORklHX1BBTklDX09OX09PUFNfVkFMVUU9MQo+IENPTkZJR19QQU5J
Q19USU1FT1VUPTAKPiAKPiAjCj4gIyBMb2NrIERlYnVnZ2luZyAoc3BpbmxvY2tzLCBtdXRleGVz
LCBldGMuLi4pCj4gIwo+IENPTkZJR19ERUJVR19SVF9NVVRFWEVTPXkKPiBDT05GSUdfREVCVUdf
UElfTElTVD15Cj4gIyBDT05GSUdfUlRfTVVURVhfVEVTVEVSIGlzIG5vdCBzZXQKPiBDT05GSUdf
REVCVUdfU1BJTkxPQ0s9eQo+IENPTkZJR19ERUJVR19NVVRFWEVTPXkKPiAjIENPTkZJR19ERUJV
R19XV19NVVRFWF9TTE9XUEFUSCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0M9
eQo+ICMgQ09ORklHX1BST1ZFX0xPQ0tJTkcgaXMgbm90IHNldAo+IENPTkZJR19MT0NLREVQPXkK
PiBDT05GSUdfTE9DS19TVEFUPXkKPiAjIENPTkZJR19ERUJVR19MT0NLREVQIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19ERUJVR19BVE9NSUNfU0xFRVAgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19M
T0NLSU5HX0FQSV9TRUxGVEVTVFM9eQo+IENPTkZJR19TVEFDS1RSQUNFPXkKPiBDT05GSUdfREVC
VUdfS09CSkVDVD15Cj4gIyBDT05GSUdfREVCVUdfS09CSkVDVF9SRUxFQVNFIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19ERUJVR19XUklURUNPVU5UIGlzIG5vdCBzZXQKPiBDT05GSUdfREVCVUdfTElT
VD15Cj4gIyBDT05GSUdfREVCVUdfU0cgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19OT1RJRklF
UlM9eQo+ICMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBS
Q1UgRGVidWdnaW5nCj4gIwo+IENPTkZJR19TUEFSU0VfUkNVX1BPSU5URVI9eQo+IENPTkZJR19S
Q1VfVE9SVFVSRV9URVNUPXkKPiAjIENPTkZJR19SQ1VfVE9SVFVSRV9URVNUX1JVTk5BQkxFIGlz
IG5vdCBzZXQKPiBDT05GSUdfUkNVX0NQVV9TVEFMTF9USU1FT1VUPTIxCj4gQ09ORklHX1JDVV9U
UkFDRT15Cj4gQ09ORklHX05PVElGSUVSX0VSUk9SX0lOSkVDVElPTj15Cj4gIyBDT05GSUdfUE1f
Tk9USUZJRVJfRVJST1JfSU5KRUNUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQVVMVF9JTkpFQ1RJ
T04gaXMgbm90IHNldAo+IENPTkZJR19BUkNIX0hBU19ERUJVR19TVFJJQ1RfVVNFUl9DT1BZX0NI
RUNLUz15Cj4gIyBDT05GSUdfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1MgaXMgbm90IHNl
dAo+IENPTkZJR19VU0VSX1NUQUNLVFJBQ0VfU1VQUE9SVD15Cj4gQ09ORklHX0hBVkVfRlVOQ1RJ
T05fVFJBQ0VSPXkKPiBDT05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQo+IENPTkZJ
R19IQVZFX0ZVTkNUSU9OX0dSQVBIX0ZQX1RFU1Q9eQo+IENPTkZJR19IQVZFX0ZVTkNUSU9OX1RS
QUNFX01DT1VOVF9URVNUPXkKPiBDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15Cj4gQ09ORklH
X0hBVkVfRFlOQU1JQ19GVFJBQ0VfV0lUSF9SRUdTPXkKPiBDT05GSUdfSEFWRV9GVFJBQ0VfTUNP
VU5UX1JFQ09SRD15Cj4gQ09ORklHX0hBVkVfU1lTQ0FMTF9UUkFDRVBPSU5UUz15Cj4gQ09ORklH
X0hBVkVfRkVOVFJZPXkKPiBDT05GSUdfSEFWRV9DX1JFQ09SRE1DT1VOVD15Cj4gQ09ORklHX1RS
QUNFX0NMT0NLPXkKPiBDT05GSUdfVFJBQ0lOR19TVVBQT1JUPXkKPiAjIENPTkZJR19GVFJBQ0Ug
aXMgbm90IHNldAo+IAo+ICMKPiAjIFJ1bnRpbWUgVGVzdGluZwo+ICMKPiBDT05GSUdfVEVTVF9M
SVNUX1NPUlQ9eQo+IENPTkZJR19CQUNLVFJBQ0VfU0VMRl9URVNUPXkKPiAjIENPTkZJR19SQlRS
RUVfVEVTVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FUT01JQzY0X1NFTEZURVNUPXkKPiBDT05GSUdf
VEVTVF9TVFJJTkdfSEVMUEVSUz15Cj4gQ09ORklHX1RFU1RfS1NUUlRPWD15Cj4gIyBDT05GSUdf
UFJPVklERV9PSENJMTM5NF9ETUFfSU5JVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRE1BX0FQSV9E
RUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NBTVBMRVM9eQo+IENPTkZJR19IQVZFX0FSQ0hfS0dE
Qj15Cj4gQ09ORklHX0tHREI9eQo+IENPTkZJR19LR0RCX1NFUklBTF9DT05TT0xFPXkKPiAjIENP
TkZJR19LR0RCX1RFU1RTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19LR0RCX0xPV19MRVZFTF9UUkFQ
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NUUklD
VF9ERVZNRU09eQo+ICMgQ09ORklHX1g4Nl9WRVJCT1NFX0JPT1RVUCBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfRUFSTFlfUFJJTlRLIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YODZfUFREVU1QIGlzIG5v
dCBzZXQKPiBDT05GSUdfREVCVUdfUk9EQVRBPXkKPiAjIENPTkZJR19ERUJVR19ST0RBVEFfVEVT
VCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RPVUJMRUZBVUxUPXkKPiAjIENPTkZJR19ERUJVR19UTEJG
TFVTSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU9NTVVfU1RSRVNTIGlzIG5vdCBzZXQKPiBDT05G
SUdfSEFWRV9NTUlPVFJBQ0VfU1VQUE9SVD15Cj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfMFg4MD0w
Cj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfMFhFRD0xCj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfVURF
TEFZPTIKPiBDT05GSUdfSU9fREVMQVlfVFlQRV9OT05FPTMKPiBDT05GSUdfSU9fREVMQVlfMFg4
MD15Cj4gIyBDT05GSUdfSU9fREVMQVlfMFhFRCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU9fREVM
QVlfVURFTEFZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JT19ERUxBWV9OT05FIGlzIG5vdCBzZXQK
PiBDT05GSUdfREVGQVVMVF9JT19ERUxBWV9UWVBFPTAKPiAjIENPTkZJR19ERUJVR19CT09UX1BB
UkFNUyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1BBX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdf
T1BUSU1JWkVfSU5MSU5JTkc9eQo+IENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1Q9eQo+IENPTkZJ
R19YODZfREVCVUdfU1RBVElDX0NQVV9IQVM9eQo+IAo+ICMKPiAjIFNlY3VyaXR5IG9wdGlvbnMK
PiAjCj4gQ09ORklHX0tFWVM9eQo+ICMgQ09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90
IHNldAo+IENPTkZJR19CSUdfS0VZUz15Cj4gIyBDT05GSUdfVFJVU1RFRF9LRVlTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRU5DUllQVEVEX0tFWVM9eQo+ICMgQ09ORklHX0tFWVNfREVCVUdfUFJPQ19L
RVlTIGlzIG5vdCBzZXQKPiBDT05GSUdfU0VDVVJJVFlfRE1FU0dfUkVTVFJJQ1Q9eQo+IENPTkZJ
R19TRUNVUklUWT15Cj4gQ09ORklHX1NFQ1VSSVRZRlM9eQo+IENPTkZJR19TRUNVUklUWV9ORVRX
T1JLPXkKPiBDT05GSUdfU0VDVVJJVFlfTkVUV09SS19YRlJNPXkKPiBDT05GSUdfU0VDVVJJVFlf
UEFUSD15Cj4gIyBDT05GSUdfU0VDVVJJVFlfU0VMSU5VWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
U0VDVVJJVFlfU01BQ0sgaXMgbm90IHNldAo+IENPTkZJR19TRUNVUklUWV9UT01PWU89eQo+IENP
TkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FDQ0VQVF9FTlRSWT0yMDQ4Cj4gQ09ORklHX1NFQ1VS
SVRZX1RPTU9ZT19NQVhfQVVESVRfTE9HPTEwMjQKPiBDT05GSUdfU0VDVVJJVFlfVE9NT1lPX09N
SVRfVVNFUlNQQUNFX0xPQURFUj15Cj4gQ09ORklHX1NFQ1VSSVRZX0FQUEFSTU9SPXkKPiBDT05G
SUdfU0VDVVJJVFlfQVBQQVJNT1JfQk9PVFBBUkFNX1ZBTFVFPTEKPiBDT05GSUdfU0VDVVJJVFlf
QVBQQVJNT1JfSEFTSD15Cj4gQ09ORklHX1NFQ1VSSVRZX1lBTUE9eQo+IENPTkZJR19TRUNVUklU
WV9ZQU1BX1NUQUNLRUQ9eQo+ICMgQ09ORklHX0lNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRVZN
IGlzIG5vdCBzZXQKPiBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9UT01PWU89eQo+ICMgQ09ORklH
X0RFRkFVTFRfU0VDVVJJVFlfQVBQQVJNT1IgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFRkFVTFRf
U0VDVVJJVFlfWUFNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9EQUMg
aXMgbm90IHNldAo+IENPTkZJR19ERUZBVUxUX1NFQ1VSSVRZPSJ0b21veW8iCj4gQ09ORklHX0NS
WVBUTz15Cj4gCj4gIwo+ICMgQ3J5cHRvIGNvcmUgb3IgaGVscGVyCj4gIwo+ICMgQ09ORklHX0NS
WVBUT19GSVBTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0FMR0FQST15Cj4gQ09ORklHX0NS
WVBUT19BTEdBUEkyPXkKPiBDT05GSUdfQ1JZUFRPX0FFQUQ9eQo+IENPTkZJR19DUllQVE9fQUVB
RDI9eQo+IENPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkKPiBDT05GSUdfQ1JZUFRPX0JMS0NJUEhF
UjI9eQo+IENPTkZJR19DUllQVE9fSEFTSD15Cj4gQ09ORklHX0NSWVBUT19IQVNIMj15Cj4gQ09O
RklHX0NSWVBUT19STkc9eQo+IENPTkZJR19DUllQVE9fUk5HMj15Cj4gQ09ORklHX0NSWVBUT19Q
Q09NUDI9eQo+IENPTkZJR19DUllQVE9fTUFOQUdFUj15Cj4gQ09ORklHX0NSWVBUT19NQU5BR0VS
Mj15Cj4gQ09ORklHX0NSWVBUT19VU0VSPXkKPiAjIENPTkZJR19DUllQVE9fTUFOQUdFUl9ESVNB
QkxFX1RFU1RTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0dGMTI4TVVMPXkKPiBDT05GSUdf
Q1JZUFRPX05VTEw9eQo+IENPTkZJR19DUllQVE9fV09SS1FVRVVFPXkKPiBDT05GSUdfQ1JZUFRP
X0NSWVBURD15Cj4gQ09ORklHX0NSWVBUT19BVVRIRU5DPXkKPiBDT05GSUdfQ1JZUFRPX0FCTEtf
SEVMUEVSPXkKPiBDT05GSUdfQ1JZUFRPX0dMVUVfSEVMUEVSX1g4Nj15Cj4gCj4gIwo+ICMgQXV0
aGVudGljYXRlZCBFbmNyeXB0aW9uIHdpdGggQXNzb2NpYXRlZCBEYXRhCj4gIwo+IENPTkZJR19D
UllQVE9fQ0NNPXkKPiBDT05GSUdfQ1JZUFRPX0dDTT15Cj4gQ09ORklHX0NSWVBUT19TRVFJVj15
Cj4gCj4gIwo+ICMgQmxvY2sgbW9kZXMKPiAjCj4gQ09ORklHX0NSWVBUT19DQkM9eQo+IENPTkZJ
R19DUllQVE9fQ1RSPXkKPiBDT05GSUdfQ1JZUFRPX0NUUz15Cj4gQ09ORklHX0NSWVBUT19FQ0I9
eQo+IENPTkZJR19DUllQVE9fTFJXPXkKPiBDT05GSUdfQ1JZUFRPX1BDQkM9eQo+IENPTkZJR19D
UllQVE9fWFRTPXkKPiAKPiAjCj4gIyBIYXNoIG1vZGVzCj4gIwo+IENPTkZJR19DUllQVE9fQ01B
Qz15Cj4gQ09ORklHX0NSWVBUT19ITUFDPXkKPiAjIENPTkZJR19DUllQVE9fWENCQyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0NSWVBUT19WTUFDPXkKPiAKPiAjCj4gIyBEaWdlc3QKPiAjCj4gQ09ORklH
X0NSWVBUT19DUkMzMkM9eQo+IENPTkZJR19DUllQVE9fQ1JDMzJDX0lOVEVMPXkKPiAjIENPTkZJ
R19DUllQVE9fQ1JDMzIgaXMgbm90IHNldAo+ICMgQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUwg
aXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fQ1JDVDEwRElGPXkKPiBDT05GSUdfQ1JZUFRPX0dI
QVNIPXkKPiBDT05GSUdfQ1JZUFRPX01END15Cj4gQ09ORklHX0NSWVBUT19NRDU9eQo+IENPTkZJ
R19DUllQVE9fTUlDSEFFTF9NSUM9eQo+ICMgQ09ORklHX0NSWVBUT19STUQxMjggaXMgbm90IHNl
dAo+IENPTkZJR19DUllQVE9fUk1EMTYwPXkKPiBDT05GSUdfQ1JZUFRPX1JNRDI1Nj15Cj4gQ09O
RklHX0NSWVBUT19STUQzMjA9eQo+IENPTkZJR19DUllQVE9fU0hBMT15Cj4gQ09ORklHX0NSWVBU
T19TSEExX1NTU0UzPXkKPiBDT05GSUdfQ1JZUFRPX1NIQTI1Nl9TU1NFMz15Cj4gQ09ORklHX0NS
WVBUT19TSEE1MTJfU1NTRTM9eQo+IENPTkZJR19DUllQVE9fU0hBMjU2PXkKPiBDT05GSUdfQ1JZ
UFRPX1NIQTUxMj15Cj4gQ09ORklHX0NSWVBUT19UR1IxOTI9eQo+IENPTkZJR19DUllQVE9fV1A1
MTI9eQo+ICMgQ09ORklHX0NSWVBUT19HSEFTSF9DTE1VTF9OSV9JTlRFTCBpcyBub3Qgc2V0Cj4g
Cj4gIwo+ICMgQ2lwaGVycwo+ICMKPiBDT05GSUdfQ1JZUFRPX0FFUz15Cj4gQ09ORklHX0NSWVBU
T19BRVNfWDg2XzY0PXkKPiAjIENPTkZJR19DUllQVE9fQUVTX05JX0lOVEVMIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19DUllQVE9fQU5VQklTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0FSQzQ9
eQo+ICMgQ09ORklHX0NSWVBUT19CTE9XRklTSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JZUFRP
X0JMT1dGSVNIX1g4Nl82NCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NSWVBUT19DQU1FTExJQT15Cj4g
Q09ORklHX0NSWVBUT19DQU1FTExJQV9YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fQ0FNRUxMSUFf
QUVTTklfQVZYX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19DQU1FTExJQV9BRVNOSV9BVlgyX1g4
Nl82ND15Cj4gQ09ORklHX0NSWVBUT19DQVNUX0NPTU1PTj15Cj4gIyBDT05GSUdfQ1JZUFRPX0NB
U1Q1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19DUllQVE9fQ0FTVDVfQVZYX1g4Nl82NCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0NSWVBUT19DQVNUNj15Cj4gQ09ORklHX0NSWVBUT19DQVNUNl9BVlhfWDg2
XzY0PXkKPiBDT05GSUdfQ1JZUFRPX0RFUz15Cj4gQ09ORklHX0NSWVBUT19GQ1JZUFQ9eQo+IENP
TkZJR19DUllQVE9fS0hBWkFEPXkKPiAjIENPTkZJR19DUllQVE9fU0FMU0EyMCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0NSWVBUT19TQUxTQTIwX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19TRUVEPXkK
PiBDT05GSUdfQ1JZUFRPX1NFUlBFTlQ9eQo+IENPTkZJR19DUllQVE9fU0VSUEVOVF9TU0UyX1g4
Nl82ND15Cj4gQ09ORklHX0NSWVBUT19TRVJQRU5UX0FWWF9YODZfNjQ9eQo+ICMgQ09ORklHX0NS
WVBUT19TRVJQRU5UX0FWWDJfWDg2XzY0IGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX1RFQT15
Cj4gIyBDT05GSUdfQ1JZUFRPX1RXT0ZJU0ggaXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fVFdP
RklTSF9DT01NT049eQo+IENPTkZJR19DUllQVE9fVFdPRklTSF9YODZfNjQ9eQo+IENPTkZJR19D
UllQVE9fVFdPRklTSF9YODZfNjRfM1dBWT15Cj4gQ09ORklHX0NSWVBUT19UV09GSVNIX0FWWF9Y
ODZfNjQ9eQo+IAo+ICMKPiAjIENvbXByZXNzaW9uCj4gIwo+IENPTkZJR19DUllQVE9fREVGTEFU
RT15Cj4gIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fTFpP
PXkKPiBDT05GSUdfQ1JZUFRPX0xaND15Cj4gIyBDT05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBz
ZXQKPiAKPiAjCj4gIyBSYW5kb20gTnVtYmVyIEdlbmVyYXRpb24KPiAjCj4gQ09ORklHX0NSWVBU
T19BTlNJX0NQUk5HPXkKPiBDT05GSUdfQ1JZUFRPX1VTRVJfQVBJPXkKPiBDT05GSUdfQ1JZUFRP
X1VTRVJfQVBJX0hBU0g9eQo+IENPTkZJR19DUllQVE9fVVNFUl9BUElfU0tDSVBIRVI9eQo+IENP
TkZJR19DUllQVE9fSFc9eQo+ICMgQ09ORklHX0NSWVBUT19ERVZfUEFETE9DSyBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQ1JZUFRPX0RFVl9DQ1AgaXMgbm90IHNldAo+ICMgQ09ORklHX0FTWU1NRVRS
SUNfS0VZX1RZUEUgaXMgbm90IHNldAo+IENPTkZJR19IQVZFX0tWTT15Cj4gQ09ORklHX0hBVkVf
S1ZNX0lSUUNISVA9eQo+IENPTkZJR19IQVZFX0tWTV9JUlFfUk9VVElORz15Cj4gQ09ORklHX0hB
VkVfS1ZNX0VWRU5URkQ9eQo+IENPTkZJR19LVk1fQVBJQ19BUkNISVRFQ1RVUkU9eQo+IENPTkZJ
R19LVk1fTU1JTz15Cj4gQ09ORklHX0tWTV9BU1lOQ19QRj15Cj4gQ09ORklHX0hBVkVfS1ZNX01T
ST15Cj4gQ09ORklHX0hBVkVfS1ZNX0NQVV9SRUxBWF9JTlRFUkNFUFQ9eQo+IENPTkZJR19LVk1f
VkZJTz15Cj4gQ09ORklHX1ZJUlRVQUxJWkFUSU9OPXkKPiBDT05GSUdfS1ZNPXkKPiAjIENPTkZJ
R19LVk1fSU5URUwgaXMgbm90IHNldAo+IENPTkZJR19LVk1fQU1EPXkKPiAjIENPTkZJR19CSU5B
UllfUFJJTlRGIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBMaWJyYXJ5IHJvdXRpbmVzCj4gIwo+IENP
TkZJR19CSVRSRVZFUlNFPXkKPiBDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15Cj4g
Q09ORklHX0dFTkVSSUNfU1RSTkxFTl9VU0VSPXkKPiBDT05GSUdfR0VORVJJQ19ORVRfVVRJTFM9
eQo+IENPTkZJR19HRU5FUklDX0ZJTkRfRklSU1RfQklUPXkKPiBDT05GSUdfR0VORVJJQ19QQ0lf
SU9NQVA9eQo+IENPTkZJR19HRU5FUklDX0lPTUFQPXkKPiBDT05GSUdfR0VORVJJQ19JTz15Cj4g
Q09ORklHX0FSQ0hfVVNFX0NNUFhDSEdfTE9DS1JFRj15Cj4gQ09ORklHX0NSQ19DQ0lUVD15Cj4g
Q09ORklHX0NSQzE2PXkKPiAjIENPTkZJR19DUkNfVDEwRElGIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q1JDX0lUVV9UPXkKPiBDT05GSUdfQ1JDMzI9eQo+IENPTkZJR19DUkMzMl9TRUxGVEVTVD15Cj4g
Q09ORklHX0NSQzMyX1NMSUNFQlk4PXkKPiAjIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfQ1JDMzJfU0FSV0FURSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JDMzJf
QklUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19DUkM3IGlzIG5vdCBzZXQKPiBDT05GSUdfTElCQ1JD
MzJDPXkKPiAjIENPTkZJR19DUkM4IGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JDNjRfRUNNQT15Cj4g
Q09ORklHX1JBTkRPTTMyX1NFTEZURVNUPXkKPiBDT05GSUdfWkxJQl9JTkZMQVRFPXkKPiBDT05G
SUdfWkxJQl9ERUZMQVRFPXkKPiBDT05GSUdfTFpPX0NPTVBSRVNTPXkKPiBDT05GSUdfTFpPX0RF
Q09NUFJFU1M9eQo+IENPTkZJR19MWjRfQ09NUFJFU1M9eQo+IENPTkZJR19MWjRfREVDT01QUkVT
Uz15Cj4gQ09ORklHX1haX0RFQz15Cj4gIyBDT05GSUdfWFpfREVDX1g4NiBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfWFpfREVDX1BPV0VSUEMgaXMgbm90IHNldAo+ICMgQ09ORklHX1haX0RFQ19JQTY0
IGlzIG5vdCBzZXQKPiBDT05GSUdfWFpfREVDX0FSTT15Cj4gIyBDT05GSUdfWFpfREVDX0FSTVRI
VU1CIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YWl9ERUNfU1BBUkMgaXMgbm90IHNldAo+IENPTkZJ
R19YWl9ERUNfQkNKPXkKPiBDT05GSUdfWFpfREVDX1RFU1Q9eQo+IENPTkZJR19ERUNPTVBSRVNT
X1haPXkKPiBDT05GSUdfR0VORVJJQ19BTExPQ0FUT1I9eQo+IENPTkZJR19CQ0g9eQo+IENPTkZJ
R19CQ0hfQ09OU1RfUEFSQU1TPXkKPiBDT05GSUdfVEVYVFNFQVJDSD15Cj4gQ09ORklHX1RFWFRT
RUFSQ0hfS01QPXkKPiBDT05GSUdfQVNTT0NJQVRJVkVfQVJSQVk9eQo+IENPTkZJR19IQVNfSU9N
RU09eQo+IENPTkZJR19IQVNfSU9QT1JUPXkKPiBDT05GSUdfSEFTX0RNQT15Cj4gQ09ORklHX0NI
RUNLX1NJR05BVFVSRT15Cj4gQ09ORklHX0RRTD15Cj4gQ09ORklHX05MQVRUUj15Cj4gQ09ORklH
X0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElWRT15Cj4gIyBDT05GSUdfQVZFUkFHRSBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERFIgaXMg
bm90IHNldAo+IENPTkZJR19PSURfUkVHSVNUUlk9eQo+IENPTkZJR19VQ1MyX1NUUklORz15Cj4g
Q09ORklHX0ZPTlRfU1VQUE9SVD15Cj4gQ09ORklHX0ZPTlRfOHgxNj15Cj4gQ09ORklHX0ZPTlRf
QVVUT1NFTEVDVD15CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:45:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Xu3-0008AG-EC; Tue, 07 Jan 2014 14:45:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Xtv-00089Q-0g
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:45:09 +0000
Received: from [85.158.139.211:34101] by server-13.bemta-5.messagelabs.com id
	6E/70-11357-EE21CC25; Tue, 07 Jan 2014 14:45:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389105898!8132131!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 7 Jan 2014 14:45:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:45:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07EiT50011819
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:44:30 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EiRsM003486
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:44:28 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07EiQb0014300; Tue, 7 Jan 2014 14:44:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:44:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 618DA1C18DC; Tue,  7 Jan 2014 09:44:24 -0500 (EST)
Date: Tue, 7 Jan 2014 09:44:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>
Message-ID: <20140107144424.GI3588@phenom.dumpdata.com>
References: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-next@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	mingo@redhat.com, tglx@linutronix.de
Subject: Re: [Xen-devel] randconfig build error with next-20140107,
 in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCBKYW4gMDcsIDIwMTQgYXQgMDc6MDM6NTBBTSAtMDcwMCwgSmltIERhdmlzIHdyb3Rl
Ogo+IEJ1aWxkaW5nIHdpdGggdGhlIGF0dGFjaGVkIHJhbmRvbSBjb25maWd1cmF0aW9uIGZpbGUs
Cj4gCj4gYXJjaC94ODYveGVuL2dyYW50LXRhYmxlLmM6IEluIGZ1bmN0aW9uIOKAmHhlbl9wdmhf
Z250dGFiX3NldHVw4oCZOgo+IGFyY2gveDg2L3hlbi9ncmFudC10YWJsZS5jOjE4MToyOiBlcnJv
cjogaW1wbGljaXQgZGVjbGFyYXRpb24gb2YKPiBmdW5jdGlvbiDigJh4ZW5fcHZoX2RvbWFpbuKA
mSBbLVdlcnJvcj1pbXBsaWNpdC1mdW5jdGlvbi1kZWNsYXJhdGlvbl0KPiAgIGlmICgheGVuX3B2
aF9kb21haW4oKSkKPiAgIF4KPiBjYzE6IHNvbWUgd2FybmluZ3MgYmVpbmcgdHJlYXRlZCBhcyBl
cnJvcnMKPiBtYWtlWzJdOiAqKiogW2FyY2gveDg2L3hlbi9ncmFudC10YWJsZS5vXSBFcnJvciAx
CgpZZWFoLCBJIGdvdCB0aGUgc2FtZSBlcnJvciBmcm9tIHRoZSAwLWJ1aWxkIHRlc3Qgc3lzdGVt
LiBXaWxsCmEgZml4IGluIHRvZGF5LgoKVGhhbmsgeW91IGZvciByZXBvcnRpbmchCgo+ICMKPiAj
IEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgo+ICMgTGludXgveDg2
IDMuMTMuMC1yYzcgS2VybmVsIENvbmZpZ3VyYXRpb24KPiAjCj4gQ09ORklHXzY0QklUPXkKPiBD
T05GSUdfWDg2XzY0PXkKPiBDT05GSUdfWDg2PXkKPiBDT05GSUdfSU5TVFJVQ1RJT05fREVDT0RF
Uj15Cj4gQ09ORklHX09VVFBVVF9GT1JNQVQ9ImVsZjY0LXg4Ni02NCIKPiBDT05GSUdfQVJDSF9E
RUZDT05GSUc9ImFyY2gveDg2L2NvbmZpZ3MveDg2XzY0X2RlZmNvbmZpZyIKPiBDT05GSUdfTE9D
S0RFUF9TVVBQT1JUPXkKPiBDT05GSUdfU1RBQ0tUUkFDRV9TVVBQT1JUPXkKPiBDT05GSUdfSEFW
RV9MQVRFTkNZVE9QX1NVUFBPUlQ9eQo+IENPTkZJR19NTVU9eQo+IENPTkZJR19ORUVEX0RNQV9N
QVBfU1RBVEU9eQo+IENPTkZJR19ORUVEX1NHX0RNQV9MRU5HVEg9eQo+IENPTkZJR19HRU5FUklD
X0hXRUlHSFQ9eQo+IENPTkZJR19SV1NFTV9YQ0hHQUREX0FMR09SSVRITT15Cj4gQ09ORklHX0dF
TkVSSUNfQ0FMSUJSQVRFX0RFTEFZPXkKPiBDT05GSUdfQVJDSF9IQVNfQ1BVX1JFTEFYPXkKPiBD
T05GSUdfQVJDSF9IQVNfQ0FDSEVfTElORV9TSVpFPXkKPiBDT05GSUdfQVJDSF9IQVNfQ1BVX0FV
VE9QUk9CRT15Cj4gQ09ORklHX0hBVkVfU0VUVVBfUEVSX0NQVV9BUkVBPXkKPiBDT05GSUdfTkVF
RF9QRVJfQ1BVX0VNQkVEX0ZJUlNUX0NIVU5LPXkKPiBDT05GSUdfTkVFRF9QRVJfQ1BVX1BBR0Vf
RklSU1RfQ0hVTks9eQo+IENPTkZJR19BUkNIX0hJQkVSTkFUSU9OX1BPU1NJQkxFPXkKPiBDT05G
SUdfQVJDSF9TVVNQRU5EX1BPU1NJQkxFPXkKPiBDT05GSUdfQVJDSF9XQU5UX0hVR0VfUE1EX1NI
QVJFPXkKPiBDT05GSUdfQVJDSF9XQU5UX0dFTkVSQUxfSFVHRVRMQj15Cj4gQ09ORklHX1pPTkVf
RE1BMzI9eQo+IENPTkZJR19BVURJVF9BUkNIPXkKPiBDT05GSUdfQVJDSF9TVVBQT1JUU19PUFRJ
TUlaRURfSU5MSU5JTkc9eQo+IENPTkZJR19BUkNIX1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15
Cj4gQ09ORklHX0FSQ0hfSFdFSUdIVF9DRkxBR1M9Ii1mY2FsbC1zYXZlZC1yZGkgLWZjYWxsLXNh
dmVkLXJzaSAtZmNhbGwtc2F2ZWQtcmR4IC1mY2FsbC1zYXZlZC1yY3ggLWZjYWxsLXNhdmVkLXI4
IC1mY2FsbC1zYXZlZC1yOSAtZmNhbGwtc2F2ZWQtcjEwIC1mY2FsbC1zYXZlZC1yMTEiCj4gQ09O
RklHX0FSQ0hfU1VQUE9SVFNfVVBST0JFUz15Cj4gQ09ORklHX0RFRkNPTkZJR19MSVNUPSIvbGli
L21vZHVsZXMvJFVOQU1FX1JFTEVBU0UvLmNvbmZpZyIKPiBDT05GSUdfQ09OU1RSVUNUT1JTPXkK
PiBDT05GSUdfSVJRX1dPUks9eQo+IENPTkZJR19CVUlMRFRJTUVfRVhUQUJMRV9TT1JUPXkKPiAK
PiAjCj4gIyBHZW5lcmFsIHNldHVwCj4gIwo+IENPTkZJR19CUk9LRU5fT05fU01QPXkKPiBDT05G
SUdfSU5JVF9FTlZfQVJHX0xJTUlUPTMyCj4gQ09ORklHX0NST1NTX0NPTVBJTEU9IiIKPiBDT05G
SUdfQ09NUElMRV9URVNUPXkKPiBDT05GSUdfTE9DQUxWRVJTSU9OPSIiCj4gQ09ORklHX0xPQ0FM
VkVSU0lPTl9BVVRPPXkKPiBDT05GSUdfSEFWRV9LRVJORUxfR1pJUD15Cj4gQ09ORklHX0hBVkVf
S0VSTkVMX0JaSVAyPXkKPiBDT05GSUdfSEFWRV9LRVJORUxfTFpNQT15Cj4gQ09ORklHX0hBVkVf
S0VSTkVMX1haPXkKPiBDT05GSUdfSEFWRV9LRVJORUxfTFpPPXkKPiBDT05GSUdfSEFWRV9LRVJO
RUxfTFo0PXkKPiBDT05GSUdfS0VSTkVMX0daSVA9eQo+ICMgQ09ORklHX0tFUk5FTF9CWklQMiBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfS0VSTkVMX0xaTUEgaXMgbm90IHNldAo+ICMgQ09ORklHX0tF
Uk5FTF9YWiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfS0VSTkVMX0xaTyBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfS0VSTkVMX0xaNCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFRkFVTFRfSE9TVE5BTUU9Iihu
b25lKSIKPiBDT05GSUdfU1lTVklQQz15Cj4gIyBDT05GSUdfUE9TSVhfTVFVRVVFIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRkhBTkRMRT15Cj4gQ09ORklHX0FVRElUPXkKPiBDT05GSUdfQVVESVRTWVND
QUxMPXkKPiBDT05GSUdfQVVESVRfV0FUQ0g9eQo+IENPTkZJR19BVURJVF9UUkVFPXkKPiAKPiAj
Cj4gIyBJUlEgc3Vic3lzdGVtCj4gIwo+IENPTkZJR19HRU5FUklDX0lSUV9QUk9CRT15Cj4gQ09O
RklHX0dFTkVSSUNfSVJRX1NIT1c9eQo+IENPTkZJR19HRU5FUklDX0lSUV9DSElQPXkKPiBDT05G
SUdfSVJRX0RPTUFJTj15Cj4gQ09ORklHX0lSUV9ET01BSU5fREVCVUc9eQo+IENPTkZJR19JUlFf
Rk9SQ0VEX1RIUkVBRElORz15Cj4gQ09ORklHX1NQQVJTRV9JUlE9eQo+IENPTkZJR19DTE9DS1NP
VVJDRV9XQVRDSERPRz15Cj4gQ09ORklHX0FSQ0hfQ0xPQ0tTT1VSQ0VfREFUQT15Cj4gQ09ORklH
X0dFTkVSSUNfVElNRV9WU1lTQ0FMTD15Cj4gQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFM9eQo+
IENPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX0JVSUxEPXkKPiBDT05GSUdfR0VORVJJQ19DTE9D
S0VWRU5UU19CUk9BRENBU1Q9eQo+IENPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTX01JTl9BREpV
U1Q9eQo+IENPTkZJR19HRU5FUklDX0NNT1NfVVBEQVRFPXkKPiAKPiAjCj4gIyBUaW1lcnMgc3Vi
c3lzdGVtCj4gIwo+IENPTkZJR19USUNLX09ORVNIT1Q9eQo+IENPTkZJR19OT19IWl9DT01NT049
eQo+ICMgQ09ORklHX0haX1BFUklPRElDIGlzIG5vdCBzZXQKPiBDT05GSUdfTk9fSFpfSURMRT15
Cj4gQ09ORklHX05PX0haPXkKPiBDT05GSUdfSElHSF9SRVNfVElNRVJTPXkKPiAKPiAjCj4gIyBD
UFUvVGFzayB0aW1lIGFuZCBzdGF0cyBhY2NvdW50aW5nCj4gIwo+IENPTkZJR19USUNLX0NQVV9B
Q0NPVU5USU5HPXkKPiAjIENPTkZJR19WSVJUX0NQVV9BQ0NPVU5USU5HX0dFTiBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfSVJRX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQlNE
X1BST0NFU1NfQUNDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1RBU0tTVEFUUz15Cj4gQ09ORklHX1RB
U0tfREVMQVlfQUNDVD15Cj4gIyBDT05GSUdfVEFTS19YQUNDVCBpcyBub3Qgc2V0Cj4gCj4gIwo+
ICMgUkNVIFN1YnN5c3RlbQo+ICMKPiBDT05GSUdfVElOWV9SQ1U9eQo+ICMgQ09ORklHX1BSRUVN
UFRfUkNVIGlzIG5vdCBzZXQKPiBDT05GSUdfUkNVX1NUQUxMX0NPTU1PTj15Cj4gIyBDT05GSUdf
VFJFRV9SQ1VfVFJBQ0UgaXMgbm90IHNldAo+IENPTkZJR19JS0NPTkZJRz15Cj4gQ09ORklHX0xP
R19CVUZfU0hJRlQ9MTcKPiBDT05GSUdfSEFWRV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15Cj4gQ09O
RklHX0FSQ0hfU1VQUE9SVFNfTlVNQV9CQUxBTkNJTkc9eQo+IENPTkZJR19BUkNIX1NVUFBPUlRT
X0lOVDEyOD15Cj4gQ09ORklHX0FSQ0hfV0FOVFNfUFJPVF9OVU1BX1BST1RfTk9ORT15Cj4gIyBD
T05GSUdfQ0dST1VQUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0NIRUNLUE9JTlRfUkVTVE9SRT15Cj4g
Q09ORklHX05BTUVTUEFDRVM9eQo+ICMgQ09ORklHX1VUU19OUyBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfSVBDX05TIGlzIG5vdCBzZXQKPiBDT05GSUdfVVNFUl9OUz15Cj4gQ09ORklHX1BJRF9OUz15
Cj4gIyBDT05GSUdfTkVUX05TIGlzIG5vdCBzZXQKPiBDT05GSUdfVUlER0lEX1NUUklDVF9UWVBF
X0NIRUNLUz15Cj4gIyBDT05GSUdfU0NIRURfQVVUT0dST1VQIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19TWVNGU19ERVBSRUNBVEVEIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVMQVk9eQo+IENPTkZJR19C
TEtfREVWX0lOSVRSRD15Cj4gQ09ORklHX0lOSVRSQU1GU19TT1VSQ0U9IiIKPiAjIENPTkZJR19S
RF9HWklQIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SRF9CWklQMiBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfUkRfTFpNQSBpcyBub3Qgc2V0Cj4gQ09ORklHX1JEX1haPXkKPiAjIENPTkZJR19SRF9MWk8g
aXMgbm90IHNldAo+ICMgQ09ORklHX1JEX0xaNCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NDX09QVElN
SVpFX0ZPUl9TSVpFPXkKPiBDT05GSUdfQU5PTl9JTk9ERVM9eQo+IENPTkZJR19TWVNDVExfRVhD
RVBUSU9OX1RSQUNFPXkKPiBDT05GSUdfSEFWRV9QQ1NQS1JfUExBVEZPUk09eQo+IENPTkZJR19F
WFBFUlQ9eQo+IENPTkZJR19LQUxMU1lNUz15Cj4gQ09ORklHX0tBTExTWU1TX0FMTD15Cj4gQ09O
RklHX1BSSU5USz15Cj4gIyBDT05GSUdfQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfUENTUEtSX1BM
QVRGT1JNPXkKPiAjIENPTkZJR19CQVNFX0ZVTEwgaXMgbm90IHNldAo+IENPTkZJR19GVVRFWD15
Cj4gQ09ORklHX0VQT0xMPXkKPiAjIENPTkZJR19TSUdOQUxGRCBpcyBub3Qgc2V0Cj4gQ09ORklH
X1RJTUVSRkQ9eQo+IENPTkZJR19FVkVOVEZEPXkKPiBDT05GSUdfU0hNRU09eQo+IENPTkZJR19B
SU89eQo+ICMgQ09ORklHX1BDSV9RVUlSS1MgaXMgbm90IHNldAo+IENPTkZJR19FTUJFRERFRD15
Cj4gQ09ORklHX0hBVkVfUEVSRl9FVkVOVFM9eQo+IAo+ICMKPiAjIEtlcm5lbCBQZXJmb3JtYW5j
ZSBFdmVudHMgQW5kIENvdW50ZXJzCj4gIwo+IENPTkZJR19QRVJGX0VWRU5UUz15Cj4gIyBDT05G
SUdfREVCVUdfUEVSRl9VU0VfVk1BTExPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfVk1fRVZFTlRf
Q09VTlRFUlMgaXMgbm90IHNldAo+ICMgQ09ORklHX1NMVUJfREVCVUcgaXMgbm90IHNldAo+IENP
TkZJR19DT01QQVRfQlJLPXkKPiAjIENPTkZJR19TTEFCIGlzIG5vdCBzZXQKPiBDT05GSUdfU0xV
Qj15Cj4gIyBDT05GSUdfU0xPQiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUFJPRklMSU5HIGlzIG5v
dCBzZXQKPiBDT05GSUdfSEFWRV9PUFJPRklMRT15Cj4gQ09ORklHX09QUk9GSUxFX05NSV9USU1F
Uj15Cj4gIyBDT05GSUdfSlVNUF9MQUJFTCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSEFWRV82NEJJ
VF9BTElHTkVEX0FDQ0VTUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfRUZGSUNJRU5UX1VOQUxJ
R05FRF9BQ0NFU1M9eQo+IENPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkKPiBDT05GSUdf
VVNFUl9SRVRVUk5fTk9USUZJRVI9eQo+IENPTkZJR19IQVZFX0lPUkVNQVBfUFJPVD15Cj4gQ09O
RklHX0hBVkVfS1BST0JFUz15Cj4gQ09ORklHX0hBVkVfS1JFVFBST0JFUz15Cj4gQ09ORklHX0hB
VkVfT1BUUFJPQkVTPXkKPiBDT05GSUdfSEFWRV9LUFJPQkVTX09OX0ZUUkFDRT15Cj4gQ09ORklH
X0hBVkVfQVJDSF9UUkFDRUhPT0s9eQo+IENPTkZJR19IQVZFX0RNQV9BVFRSUz15Cj4gQ09ORklH
X0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkKPiBDT05GSUdfSEFWRV9SRUdTX0FORF9TVEFDS19B
Q0NFU1NfQVBJPXkKPiBDT05GSUdfSEFWRV9ETUFfQVBJX0RFQlVHPXkKPiBDT05GSUdfSEFWRV9I
V19CUkVBS1BPSU5UPXkKPiBDT05GSUdfSEFWRV9NSVhFRF9CUkVBS1BPSU5UU19SRUdTPXkKPiBD
T05GSUdfSEFWRV9VU0VSX1JFVFVSTl9OT1RJRklFUj15Cj4gQ09ORklHX0hBVkVfUEVSRl9FVkVO
VFNfTk1JPXkKPiBDT05GSUdfSEFWRV9QRVJGX1JFR1M9eQo+IENPTkZJR19IQVZFX1BFUkZfVVNF
Ul9TVEFDS19EVU1QPXkKPiBDT05GSUdfSEFWRV9BUkNIX0pVTVBfTEFCRUw9eQo+IENPTkZJR19B
UkNIX0hBVkVfTk1JX1NBRkVfQ01QWENIRz15Cj4gQ09ORklHX0hBVkVfQUxJR05FRF9TVFJVQ1Rf
UEFHRT15Cj4gQ09ORklHX0hBVkVfQ01QWENIR19MT0NBTD15Cj4gQ09ORklHX0hBVkVfQ01QWENI
R19ET1VCTEU9eQo+IENPTkZJR19IQVZFX0FSQ0hfU0VDQ09NUF9GSUxURVI9eQo+IENPTkZJR19I
QVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkKPiAjIENPTkZJR19DQ19TVEFDS1BST1RFQ1RPUiBpcyBu
b3Qgc2V0Cj4gQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX05PTkU9eQo+ICMgQ09ORklHX0NDX1NU
QUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldAo+ICMgQ09ORklHX0NDX1NUQUNLUFJPVEVD
VE9SX1NUUk9ORyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfQ09OVEVYVF9UUkFDS0lORz15Cj4g
Q09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49eQo+IENPTkZJR19IQVZFX0lSUV9U
SU1FX0FDQ09VTlRJTkc9eQo+IENPTkZJR19IQVZFX0FSQ0hfVFJBTlNQQVJFTlRfSFVHRVBBR0U9
eQo+IENPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15Cj4gQ09ORklHX01PRFVMRVNfVVNFX0VM
Rl9SRUxBPXkKPiBDT05GSUdfSEFWRV9JUlFfRVhJVF9PTl9JUlFfU1RBQ0s9eQo+IAo+ICMKPiAj
IEdDT1YtYmFzZWQga2VybmVsIHByb2ZpbGluZwo+ICMKPiBDT05GSUdfR0NPVl9LRVJORUw9eQo+
ICMgQ09ORklHX0dDT1ZfUFJPRklMRV9BTEwgaXMgbm90IHNldAo+IENPTkZJR19HQ09WX0ZPUk1B
VF9BVVRPREVURUNUPXkKPiAjIENPTkZJR19HQ09WX0ZPUk1BVF8zXzQgaXMgbm90IHNldAo+ICMg
Q09ORklHX0dDT1ZfRk9STUFUXzRfNyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSEFWRV9HRU5FUklD
X0RNQV9DT0hFUkVOVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1JUX01VVEVYRVM9eQo+IENPTkZJR19C
QVNFX1NNQUxMPTEKPiAjIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19NT0RVTEVTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19CTE9DSyBpcyBub3Qgc2V0
Cj4gQ09ORklHX1BSRUVNUFRfTk9USUZJRVJTPXkKPiBDT05GSUdfVU5JTkxJTkVfU1BJTl9VTkxP
Q0s9eQo+IENPTkZJR19GUkVFWkVSPXkKPiAKPiAjCj4gIyBQcm9jZXNzb3IgdHlwZSBhbmQgZmVh
dHVyZXMKPiAjCj4gIyBDT05GSUdfWk9ORV9ETUEgaXMgbm90IHNldAo+ICMgQ09ORklHX1NNUCBp
cyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9NUFBBUlNFPXkKPiAjIENPTkZJR19YODZfRVhURU5ERURf
UExBVEZPUk0gaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9JTlRFTF9MUFNTIGlzIG5vdCBzZXQK
PiBDT05GSUdfU0NIRURfT01JVF9GUkFNRV9QT0lOVEVSPXkKPiBDT05GSUdfSFlQRVJWSVNPUl9H
VUVTVD15Cj4gQ09ORklHX1BBUkFWSVJUPXkKPiBDT05GSUdfUEFSQVZJUlRfREVCVUc9eQo+IENP
TkZJR19YRU49eQo+IENPTkZJR19YRU5fRE9NMD15Cj4gQ09ORklHX1hFTl9QUklWSUxFR0VEX0dV
RVNUPXkKPiBDT05GSUdfWEVOX1BWSFZNPXkKPiBDT05GSUdfWEVOX01BWF9ET01BSU5fTUVNT1JZ
PTUwMAo+IENPTkZJR19YRU5fU0FWRV9SRVNUT1JFPXkKPiAjIENPTkZJR19YRU5fREVCVUdfRlMg
aXMgbm90IHNldAo+IENPTkZJR19YRU5fUFZIPXkKPiAjIENPTkZJR19LVk1fR1VFU1QgaXMgbm90
IHNldAo+ICMgQ09ORklHX1BBUkFWSVJUX1RJTUVfQUNDT1VOVElORyBpcyBub3Qgc2V0Cj4gQ09O
RklHX1BBUkFWSVJUX0NMT0NLPXkKPiBDT05GSUdfTk9fQk9PVE1FTT15Cj4gQ09ORklHX01FTVRF
U1Q9eQo+ICMgQ09ORklHX01LOCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfTUNPUkUyIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NQVRPTSBpcyBub3Qgc2V0
Cj4gQ09ORklHX0dFTkVSSUNfQ1BVPXkKPiBDT05GSUdfWDg2X0lOVEVSTk9ERV9DQUNIRV9TSElG
VD02Cj4gQ09ORklHX1g4Nl9MMV9DQUNIRV9TSElGVD02Cj4gQ09ORklHX1g4Nl9UU0M9eQo+IENP
TkZJR19YODZfQ01QWENIRzY0PXkKPiBDT05GSUdfWDg2X0NNT1Y9eQo+IENPTkZJR19YODZfTUlO
SU1VTV9DUFVfRkFNSUxZPTY0Cj4gQ09ORklHX1g4Nl9ERUJVR0NUTE1TUj15Cj4gIyBDT05GSUdf
UFJPQ0VTU09SX1NFTEVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9TVVBfSU5URUw9eQo+IENP
TkZJR19DUFVfU1VQX0FNRD15Cj4gQ09ORklHX0NQVV9TVVBfQ0VOVEFVUj15Cj4gQ09ORklHX0hQ
RVRfVElNRVI9eQo+IENPTkZJR19IUEVUX0VNVUxBVEVfUlRDPXkKPiBDT05GSUdfRE1JPXkKPiAj
IENPTkZJR19HQVJUX0lPTU1VIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FMR0FSWV9JT01NVT15Cj4g
Q09ORklHX0NBTEdBUllfSU9NTVVfRU5BQkxFRF9CWV9ERUZBVUxUPXkKPiBDT05GSUdfU1dJT1RM
Qj15Cj4gQ09ORklHX0lPTU1VX0hFTFBFUj15Cj4gQ09ORklHX05SX0NQVVM9MQo+IENPTkZJR19Q
UkVFTVBUX05PTkU9eQo+ICMgQ09ORklHX1BSRUVNUFRfVk9MVU5UQVJZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19QUkVFTVBUIGlzIG5vdCBzZXQKPiBDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQo+IENP
TkZJR19YODZfSU9fQVBJQz15Cj4gIyBDT05GSUdfWDg2X1JFUk9VVEVfRk9SX0JST0tFTl9CT09U
X0lSUVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9NQ0UgaXMgbm90IHNldAo+IENPTkZJR19J
OEs9eQo+IENPTkZJR19NSUNST0NPREU9eQo+ICMgQ09ORklHX01JQ1JPQ09ERV9JTlRFTCBpcyBu
b3Qgc2V0Cj4gQ09ORklHX01JQ1JPQ09ERV9BTUQ9eQo+IENPTkZJR19NSUNST0NPREVfT0xEX0lO
VEVSRkFDRT15Cj4gIyBDT05GSUdfTUlDUk9DT0RFX0lOVEVMX0VBUkxZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19NSUNST0NPREVfQU1EX0VBUkxZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NSUNST0NP
REVfRUFSTFkgaXMgbm90IHNldAo+IENPTkZJR19YODZfTVNSPXkKPiBDT05GSUdfWDg2X0NQVUlE
PXkKPiBDT05GSUdfQVJDSF9QSFlTX0FERFJfVF82NEJJVD15Cj4gQ09ORklHX0FSQ0hfRE1BX0FE
RFJfVF82NEJJVD15Cj4gQ09ORklHX0RJUkVDVF9HQlBBR0VTPXkKPiBDT05GSUdfQVJDSF9TUEFS
U0VNRU1fRU5BQkxFPXkKPiBDT05GSUdfQVJDSF9TUEFSU0VNRU1fREVGQVVMVD15Cj4gQ09ORklH
X0FSQ0hfU0VMRUNUX01FTU9SWV9NT0RFTD15Cj4gIyBDT05GSUdfQVJDSF9NRU1PUllfUFJPQkUg
aXMgbm90IHNldAo+IENPTkZJR19JTExFR0FMX1BPSU5URVJfVkFMVUU9MHhkZWFkMDAwMDAwMDAw
MDAwCj4gQ09ORklHX1NFTEVDVF9NRU1PUllfTU9ERUw9eQo+IENPTkZJR19TUEFSU0VNRU1fTUFO
VUFMPXkKPiBDT05GSUdfU1BBUlNFTUVNPXkKPiBDT05GSUdfSEFWRV9NRU1PUllfUFJFU0VOVD15
Cj4gQ09ORklHX1NQQVJTRU1FTV9FWFRSRU1FPXkKPiBDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBf
RU5BQkxFPXkKPiBDT05GSUdfU1BBUlNFTUVNX0FMTE9DX01FTV9NQVBfVE9HRVRIRVI9eQo+IENP
TkZJR19TUEFSU0VNRU1fVk1FTU1BUD15Cj4gQ09ORklHX0hBVkVfTUVNQkxPQ0s9eQo+IENPTkZJ
R19IQVZFX01FTUJMT0NLX05PREVfTUFQPXkKPiBDT05GSUdfQVJDSF9ESVNDQVJEX01FTUJMT0NL
PXkKPiAjIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RFIGlzIG5vdCBzZXQKPiBDT05GSUdf
TUVNT1JZX0hPVFBMVUc9eQo+IENPTkZJR19NRU1PUllfSE9UUExVR19TUEFSU0U9eQo+ICMgQ09O
RklHX01FTU9SWV9IT1RSRU1PVkUgaXMgbm90IHNldAo+IENPTkZJR19QQUdFRkxBR1NfRVhURU5E
RUQ9eQo+IENPTkZJR19TUExJVF9QVExPQ0tfQ1BVUz00Cj4gQ09ORklHX0FSQ0hfRU5BQkxFX1NQ
TElUX1BNRF9QVExPQ0s9eQo+IENPTkZJR19CQUxMT09OX0NPTVBBQ1RJT049eQo+IENPTkZJR19D
T01QQUNUSU9OPXkKPiBDT05GSUdfTUlHUkFUSU9OPXkKPiBDT05GSUdfUEhZU19BRERSX1RfNjRC
SVQ9eQo+IENPTkZJR19aT05FX0RNQV9GTEFHPTAKPiBDT05GSUdfVklSVF9UT19CVVM9eQo+IENP
TkZJR19NTVVfTk9USUZJRVI9eQo+ICMgQ09ORklHX0tTTSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RF
RkFVTFRfTU1BUF9NSU5fQUREUj00MDk2Cj4gIyBDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0Ug
aXMgbm90IHNldAo+ICMgQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0ggaXMgbm90IHNldAo+IENP
TkZJR19ORUVEX1BFUl9DUFVfS009eQo+IENPTkZJR19DTEVBTkNBQ0hFPXkKPiAjIENPTkZJR19D
TUEgaXMgbm90IHNldAo+ICMgQ09ORklHX1pCVUQgaXMgbm90IHNldAo+IENPTkZJR19aU01BTExP
Qz15Cj4gQ09ORklHX1BHVEFCTEVfTUFQUElORz15Cj4gQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NP
UlJVUFRJT049eQo+IENPTkZJR19YODZfQk9PVFBBUkFNX01FTU9SWV9DT1JSVVBUSU9OX0NIRUNL
PXkKPiBDT05GSUdfWDg2X1JFU0VSVkVfTE9XPTY0Cj4gIyBDT05GSUdfTVRSUiBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQVJDSF9SQU5ET00gaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9TTUFQIGlz
IG5vdCBzZXQKPiBDT05GSUdfRUZJPXkKPiAjIENPTkZJR19FRklfU1RVQiBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfU0VDQ09NUCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSFpfMTAwIGlzIG5vdCBzZXQK
PiBDT05GSUdfSFpfMjUwPXkKPiAjIENPTkZJR19IWl8zMDAgaXMgbm90IHNldAo+ICMgQ09ORklH
X0haXzEwMDAgaXMgbm90IHNldAo+IENPTkZJR19IWj0yNTAKPiBDT05GSUdfU0NIRURfSFJUSUNL
PXkKPiBDT05GSUdfS0VYRUM9eQo+ICMgQ09ORklHX0NSQVNIX0RVTVAgaXMgbm90IHNldAo+IENP
TkZJR19QSFlTSUNBTF9TVEFSVD0weDEwMDAwMDAKPiAjIENPTkZJR19SRUxPQ0FUQUJMRSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX1BIWVNJQ0FMX0FMSUdOPTB4MjAwMDAwCj4gIyBDT05GSUdfQ01ETElO
RV9CT09MIGlzIG5vdCBzZXQKPiBDT05GSUdfQVJDSF9FTkFCTEVfTUVNT1JZX0hPVFBMVUc9eQo+
IENPTkZJR19BUkNIX0VOQUJMRV9NRU1PUllfSE9UUkVNT1ZFPXkKPiAKPiAjCj4gIyBQb3dlciBt
YW5hZ2VtZW50IGFuZCBBQ1BJIG9wdGlvbnMKPiAjCj4gIyBDT05GSUdfU1VTUEVORCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0hJQkVSTkFURV9DQUxMQkFDS1M9eQo+IENPTkZJR19QTV9TTEVFUD15Cj4g
Q09ORklHX1BNX0FVVE9TTEVFUD15Cj4gIyBDT05GSUdfUE1fV0FLRUxPQ0tTIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19QTV9SVU5USU1FIGlzIG5vdCBzZXQKPiBDT05GSUdfUE09eQo+IENPTkZJR19Q
TV9ERUJVRz15Cj4gIyBDT05GSUdfUE1fQURWQU5DRURfREVCVUcgaXMgbm90IHNldAo+IENPTkZJ
R19QTV9TTEVFUF9ERUJVRz15Cj4gQ09ORklHX1BNX1RSQUNFPXkKPiBDT05GSUdfUE1fVFJBQ0Vf
UlRDPXkKPiBDT05GSUdfV1FfUE9XRVJfRUZGSUNJRU5UX0RFRkFVTFQ9eQo+IENPTkZJR19BQ1BJ
PXkKPiAjIENPTkZJR19BQ1BJX0VDX0RFQlVHRlMgaXMgbm90IHNldAo+ICMgQ09ORklHX0FDUElf
QUMgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0JBVFRFUlk9eQo+IENPTkZJR19BQ1BJX0JVVFRP
Tj15Cj4gQ09ORklHX0FDUElfVklERU89eQo+IENPTkZJR19BQ1BJX0ZBTj15Cj4gQ09ORklHX0FD
UElfRE9DSz15Cj4gQ09ORklHX0FDUElfUFJPQ0VTU09SPXkKPiAjIENPTkZJR19BQ1BJX1BST0NF
U1NPUl9BR0dSRUdBVE9SIGlzIG5vdCBzZXQKPiBDT05GSUdfQUNQSV9USEVSTUFMPXkKPiBDT05G
SUdfQUNQSV9DVVNUT01fRFNEVF9GSUxFPSIiCj4gIyBDT05GSUdfQUNQSV9DVVNUT01fRFNEVCBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfQUNQSV9JTklUUkRfVEFCTEVfT1ZFUlJJREUgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX1BDSV9TTE9U
PXkKPiAjIENPTkZJR19YODZfUE1fVElNRVIgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0NPTlRB
SU5FUj15Cj4gQ09ORklHX0FDUElfSE9UUExVR19NRU1PUlk9eQo+IENPTkZJR19BQ1BJX1NCUz15
Cj4gIyBDT05GSUdfQUNQSV9IRUQgaXMgbm90IHNldAo+IENPTkZJR19BQ1BJX0NVU1RPTV9NRVRI
T0Q9eQo+ICMgQ09ORklHX0FDUElfQkdSVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUNQSV9BUEVJ
IGlzIG5vdCBzZXQKPiBDT05GSUdfU0ZJPXkKPiAKPiAjCj4gIyBDUFUgRnJlcXVlbmN5IHNjYWxp
bmcKPiAjCj4gQ09ORklHX0NQVV9GUkVRPXkKPiBDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTU1PTj15
Cj4gIyBDT05GSUdfQ1BVX0ZSRVFfU1RBVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9GUkVRX0RF
RkFVTFRfR09WX1BFUkZPUk1BTkNFPXkKPiAjIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9Q
T1dFUlNBVkUgaXMgbm90IHNldAo+ICMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX1VTRVJT
UEFDRSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1BVX0ZSRVFfREVGQVVMVF9HT1ZfT05ERU1BTkQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0NQVV9GUkVRX0RFRkFVTFRfR09WX0NPTlNFUlZBVElWRSBp
cyBub3Qgc2V0Cj4gQ09ORklHX0NQVV9GUkVRX0dPVl9QRVJGT1JNQU5DRT15Cj4gQ09ORklHX0NQ
VV9GUkVRX0dPVl9QT1dFUlNBVkU9eQo+IENPTkZJR19DUFVfRlJFUV9HT1ZfVVNFUlNQQUNFPXkK
PiBDT05GSUdfQ1BVX0ZSRVFfR09WX09OREVNQU5EPXkKPiBDT05GSUdfQ1BVX0ZSRVFfR09WX0NP
TlNFUlZBVElWRT15Cj4gCj4gIwo+ICMgeDg2IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJz
Cj4gIwo+ICMgQ09ORklHX1g4Nl9JTlRFTF9QU1RBVEUgaXMgbm90IHNldAo+IENPTkZJR19YODZf
UENDX0NQVUZSRVE9eQo+ICMgQ09ORklHX1g4Nl9BQ1BJX0NQVUZSRVEgaXMgbm90IHNldAo+ICMg
Q09ORklHX1g4Nl9TUEVFRFNURVBfQ0VOVFJJTk8gaXMgbm90IHNldAo+IENPTkZJR19YODZfUDRf
Q0xPQ0tNT0Q9eQo+IAo+ICMKPiAjIHNoYXJlZCBvcHRpb25zCj4gIwo+IENPTkZJR19YODZfU1BF
RURTVEVQX0xJQj15Cj4gCj4gIwo+ICMgQ1BVIElkbGUKPiAjCj4gQ09ORklHX0NQVV9JRExFPXkK
PiAjIENPTkZJR19DUFVfSURMRV9NVUxUSVBMRV9EUklWRVJTIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q1BVX0lETEVfR09WX0xBRERFUj15Cj4gQ09ORklHX0NQVV9JRExFX0dPVl9NRU5VPXkKPiAjIENP
TkZJR19BUkNIX05FRURTX0NQVV9JRExFX0NPVVBMRUQgaXMgbm90IHNldAo+ICMgQ09ORklHX0lO
VEVMX0lETEUgaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lbW9yeSBwb3dlciBzYXZpbmdzCj4gIwo+
IENPTkZJR19JNzMwMF9JRExFX0lPQVRfQ0hBTk5FTD15Cj4gQ09ORklHX0k3MzAwX0lETEU9eQo+
IAo+ICMKPiAjIEJ1cyBvcHRpb25zIChQQ0kgZXRjLikKPiAjCj4gQ09ORklHX1BDST15Cj4gQ09O
RklHX1BDSV9ESVJFQ1Q9eQo+IENPTkZJR19QQ0lfTU1DT05GSUc9eQo+IENPTkZJR19QQ0lfWEVO
PXkKPiBDT05GSUdfUENJX0RPTUFJTlM9eQo+IENPTkZJR19QQ0lfQ05CMjBMRV9RVUlSSz15Cj4g
IyBDT05GSUdfUENJRVBPUlRCVVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1BDSV9NU0kgaXMgbm90
IHNldAo+IENPTkZJR19QQ0lfREVCVUc9eQo+ICMgQ09ORklHX1BDSV9SRUFMTE9DX0VOQUJMRV9B
VVRPIGlzIG5vdCBzZXQKPiBDT05GSUdfUENJX1NUVUI9eQo+IENPTkZJR19YRU5fUENJREVWX0ZS
T05URU5EPXkKPiBDT05GSUdfSFRfSVJRPXkKPiBDT05GSUdfUENJX0FUUz15Cj4gIyBDT05GSUdf
UENJX0lPViBpcyBub3Qgc2V0Cj4gQ09ORklHX1BDSV9QUkk9eQo+ICMgQ09ORklHX1BDSV9QQVNJ
RCBpcyBub3Qgc2V0Cj4gQ09ORklHX1BDSV9JT0FQSUM9eQo+IENPTkZJR19QQ0lfTEFCRUw9eQo+
IAo+ICMKPiAjIFBDSSBob3N0IGNvbnRyb2xsZXIgZHJpdmVycwo+ICMKPiAjIENPTkZJR19JU0Ff
RE1BX0FQSSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FNRF9OQj15Cj4gIyBDT05GSUdfUENDQVJEIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19IT1RQTFVHX1BDSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUkFQ
SURJTyBpcyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9TWVNGQj15Cj4gCj4gIwo+ICMgRXhlY3V0YWJs
ZSBmaWxlIGZvcm1hdHMgLyBFbXVsYXRpb25zCj4gIwo+ICMgQ09ORklHX0JJTkZNVF9FTEYgaXMg
bm90IHNldAo+IENPTkZJR19BUkNIX0JJTkZNVF9FTEZfUkFORE9NSVpFX1BJRT15Cj4gQ09ORklH
X0JJTkZNVF9TQ1JJUFQ9eQo+ICMgQ09ORklHX0hBVkVfQU9VVCBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfQklORk1UX01JU0MgaXMgbm90IHNldAo+ICMgQ09ORklHX0NPUkVEVU1QIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19JQTMyX0VNVUxBVElPTiBpcyBub3Qgc2V0Cj4gQ09ORklHX1g4Nl9ERVZfRE1B
X09QUz15Cj4gQ09ORklHX05FVD15Cj4gCj4gIwo+ICMgTmV0d29ya2luZyBvcHRpb25zCj4gIwo+
IENPTkZJR19QQUNLRVQ9eQo+ICMgQ09ORklHX1BBQ0tFVF9ESUFHIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19VTklYIGlzIG5vdCBzZXQKPiBDT05GSUdfWEZSTT15Cj4gQ09ORklHX1hGUk1fQUxHTz15
Cj4gQ09ORklHX1hGUk1fVVNFUj15Cj4gIyBDT05GSUdfWEZSTV9TVUJfUE9MSUNZIGlzIG5vdCBz
ZXQKPiBDT05GSUdfWEZSTV9NSUdSQVRFPXkKPiBDT05GSUdfWEZSTV9JUENPTVA9eQo+IENPTkZJ
R19ORVRfS0VZPXkKPiAjIENPTkZJR19ORVRfS0VZX01JR1JBVEUgaXMgbm90IHNldAo+IENPTkZJ
R19JTkVUPXkKPiAjIENPTkZJR19JUF9NVUxUSUNBU1QgaXMgbm90IHNldAo+IENPTkZJR19JUF9B
RFZBTkNFRF9ST1VURVI9eQo+IENPTkZJR19JUF9GSUJfVFJJRV9TVEFUUz15Cj4gQ09ORklHX0lQ
X01VTFRJUExFX1RBQkxFUz15Cj4gIyBDT05GSUdfSVBfUk9VVEVfTVVMVElQQVRIIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19JUF9ST1VURV9WRVJCT1NFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JUF9Q
TlAgaXMgbm90IHNldAo+IENPTkZJR19ORVRfSVBJUD15Cj4gQ09ORklHX05FVF9JUEdSRV9ERU1V
WD15Cj4gQ09ORklHX05FVF9JUF9UVU5ORUw9eQo+ICMgQ09ORklHX05FVF9JUEdSRSBpcyBub3Qg
c2V0Cj4gQ09ORklHX1NZTl9DT09LSUVTPXkKPiAjIENPTkZJR19ORVRfSVBWVEkgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0lORVRfQUggaXMgbm90IHNldAo+IENPTkZJR19JTkVUX0VTUD15Cj4gQ09O
RklHX0lORVRfSVBDT01QPXkKPiBDT05GSUdfSU5FVF9YRlJNX1RVTk5FTD15Cj4gQ09ORklHX0lO
RVRfVFVOTkVMPXkKPiBDT05GSUdfSU5FVF9YRlJNX01PREVfVFJBTlNQT1JUPXkKPiBDT05GSUdf
SU5FVF9YRlJNX01PREVfVFVOTkVMPXkKPiBDT05GSUdfSU5FVF9YRlJNX01PREVfQkVFVD15Cj4g
Q09ORklHX0lORVRfTFJPPXkKPiBDT05GSUdfSU5FVF9ESUFHPXkKPiBDT05GSUdfSU5FVF9UQ1Bf
RElBRz15Cj4gQ09ORklHX0lORVRfVURQX0RJQUc9eQo+IENPTkZJR19UQ1BfQ09OR19BRFZBTkNF
RD15Cj4gIyBDT05GSUdfVENQX0NPTkdfQklDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19UQ1BfQ09O
R19DVUJJQyBpcyBub3Qgc2V0Cj4gQ09ORklHX1RDUF9DT05HX1dFU1RXT09EPXkKPiBDT05GSUdf
VENQX0NPTkdfSFRDUD15Cj4gIyBDT05GSUdfVENQX0NPTkdfSFNUQ1AgaXMgbm90IHNldAo+ICMg
Q09ORklHX1RDUF9DT05HX0hZQkxBIGlzIG5vdCBzZXQKPiBDT05GSUdfVENQX0NPTkdfVkVHQVM9
eQo+ICMgQ09ORklHX1RDUF9DT05HX1NDQUxBQkxFIGlzIG5vdCBzZXQKPiBDT05GSUdfVENQX0NP
TkdfTFA9eQo+IENPTkZJR19UQ1BfQ09OR19WRU5PPXkKPiAjIENPTkZJR19UQ1BfQ09OR19ZRUFI
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19UQ1BfQ09OR19JTExJTk9JUyBpcyBub3Qgc2V0Cj4gQ09O
RklHX0RFRkFVTFRfSFRDUD15Cj4gIyBDT05GSUdfREVGQVVMVF9WRUdBUyBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfREVGQVVMVF9WRU5PIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERUZBVUxUX1dFU1RX
T09EIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERUZBVUxUX1JFTk8gaXMgbm90IHNldAo+IENPTkZJ
R19ERUZBVUxUX1RDUF9DT05HPSJodGNwIgo+IENPTkZJR19UQ1BfTUQ1U0lHPXkKPiBDT05GSUdf
SVBWNj15Cj4gQ09ORklHX0lQVjZfUk9VVEVSX1BSRUY9eQo+ICMgQ09ORklHX0lQVjZfUk9VVEVf
SU5GTyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSVBWNl9PUFRJTUlTVElDX0RBRCBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfSU5FVDZfQUggaXMgbm90IHNldAo+IENPTkZJR19JTkVUNl9FU1A9eQo+ICMg
Q09ORklHX0lORVQ2X0lQQ09NUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0lQVjZfTUlQNj15Cj4gIyBD
T05GSUdfSU5FVDZfWEZSTV9UVU5ORUwgaXMgbm90IHNldAo+IENPTkZJR19JTkVUNl9UVU5ORUw9
eQo+IENPTkZJR19JTkVUNl9YRlJNX01PREVfVFJBTlNQT1JUPXkKPiBDT05GSUdfSU5FVDZfWEZS
TV9NT0RFX1RVTk5FTD15Cj4gIyBDT05GSUdfSU5FVDZfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0lORVQ2X1hGUk1fTU9ERV9ST1VURU9QVElNSVpBVElPTiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0lQVjZfVlRJPXkKPiAjIENPTkZJR19JUFY2X1NJVCBpcyBub3Qgc2V0Cj4gQ09O
RklHX0lQVjZfVFVOTkVMPXkKPiBDT05GSUdfSVBWNl9HUkU9eQo+IENPTkZJR19JUFY2X01VTFRJ
UExFX1RBQkxFUz15Cj4gIyBDT05GSUdfSVBWNl9TVUJUUkVFUyBpcyBub3Qgc2V0Cj4gQ09ORklH
X0lQVjZfTVJPVVRFPXkKPiBDT05GSUdfSVBWNl9NUk9VVEVfTVVMVElQTEVfVEFCTEVTPXkKPiAj
IENPTkZJR19JUFY2X1BJTVNNX1YyIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRMQUJFTCBpcyBu
b3Qgc2V0Cj4gQ09ORklHX05FVFdPUktfU0VDTUFSSz15Cj4gQ09ORklHX05FVFdPUktfUEhZX1RJ
TUVTVEFNUElORz15Cj4gQ09ORklHX05FVEZJTFRFUj15Cj4gIyBDT05GSUdfTkVURklMVEVSX0RF
QlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX0FEVkFOQ0VEPXkKPiAjIENPTkZJR19C
UklER0VfTkVURklMVEVSIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBDb3JlIE5ldGZpbHRlciBDb25m
aWd1cmF0aW9uCj4gIwo+IENPTkZJR19ORVRGSUxURVJfTkVUTElOSz15Cj4gQ09ORklHX05FVEZJ
TFRFUl9ORVRMSU5LX0FDQ1Q9eQo+ICMgQ09ORklHX05FVEZJTFRFUl9ORVRMSU5LX1FVRVVFIGlz
IG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX05FVExJTktfTE9HPXkKPiBDT05GSUdfTkZfQ09O
TlRSQUNLPXkKPiBDT05GSUdfTkZfQ09OTlRSQUNLX01BUks9eQo+IENPTkZJR19ORl9DT05OVFJB
Q0tfU0VDTUFSSz15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19FVkVOVFM9eQo+ICMgQ09ORklHX05G
X0NPTk5UUkFDS19USU1FT1VUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORl9DT05OVFJBQ0tfVElN
RVNUQU1QIGlzIG5vdCBzZXQKPiBDT05GSUdfTkZfQ09OTlRSQUNLX0xBQkVMUz15Cj4gIyBDT05G
SUdfTkZfQ1RfUFJPVE9fRENDUCBpcyBub3Qgc2V0Cj4gQ09ORklHX05GX0NUX1BST1RPX0dSRT15
Cj4gQ09ORklHX05GX0NUX1BST1RPX1NDVFA9eQo+IENPTkZJR19ORl9DVF9QUk9UT19VRFBMSVRF
PXkKPiBDT05GSUdfTkZfQ09OTlRSQUNLX0FNQU5EQT15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19G
VFA9eQo+IENPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15Cj4gIyBDT05GSUdfTkZfQ09OTlRSQUNL
X0lSQyBpcyBub3Qgc2V0Cj4gQ09ORklHX05GX0NPTk5UUkFDS19CUk9BRENBU1Q9eQo+IENPTkZJ
R19ORl9DT05OVFJBQ0tfTkVUQklPU19OUz15Cj4gQ09ORklHX05GX0NPTk5UUkFDS19TTk1QPXkK
PiBDT05GSUdfTkZfQ09OTlRSQUNLX1BQVFA9eQo+IENPTkZJR19ORl9DT05OVFJBQ0tfU0FORT15
Cj4gIyBDT05GSUdfTkZfQ09OTlRSQUNLX1NJUCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkZfQ09O
TlRSQUNLX1RGVFAgaXMgbm90IHNldAo+ICMgQ09ORklHX05GX0NUX05FVExJTksgaXMgbm90IHNl
dAo+IENPTkZJR19ORl9DVF9ORVRMSU5LX1RJTUVPVVQ9eQo+ICMgQ09ORklHX05GX1RBQkxFUyBp
cyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVEFCTEVTPXkKPiAKPiAjCj4gIyBYdGFibGVz
IGNvbWJpbmVkIG1vZHVsZXMKPiAjCj4gQ09ORklHX05FVEZJTFRFUl9YVF9NQVJLPXkKPiBDT05G
SUdfTkVURklMVEVSX1hUX0NPTk5NQVJLPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX1NFVD15Cj4g
Cj4gIwo+ICMgWHRhYmxlcyB0YXJnZXRzCj4gIwo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VU
X0FVRElUPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NMQVNTSUZZIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DT05OTUFSSz15Cj4gIyBDT05GSUdfTkVU
RklMVEVSX1hUX1RBUkdFVF9DT05OU0VDTUFSSyBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfSE1BUks9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0lETEVUSU1F
Uj15Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9MRUQgaXMgbm90IHNldAo+IENPTkZJ
R19ORVRGSUxURVJfWFRfVEFSR0VUX0xPRz15Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdF
VF9NQVJLIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORkxPRz15Cj4g
IyBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORlFVRVVFIGlzIG5vdCBzZXQKPiBDT05GSUdf
TkVURklMVEVSX1hUX1RBUkdFVF9SQVRFRVNUPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX1RBUkdF
VF9URUU9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1NFQ01BUks9eQo+IENPTkZJR19O
RVRGSUxURVJfWFRfVEFSR0VUX1RDUE1TUz15Cj4gCj4gIwo+ICMgWHRhYmxlcyBtYXRjaGVzCj4g
Iwo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9BRERSVFlQRSBpcyBub3Qgc2V0Cj4gQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9CUEY9eQo+ICMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRD
SF9DTFVTVEVSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09NTUVO
VCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NPTk5CWVRFUyBpcyBu
b3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTEFCRUw9eQo+IENPTkZJR19O
RVRGSUxURVJfWFRfTUFUQ0hfQ09OTkxJTUlUPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENI
X0NPTk5NQVJLPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09OTlRSQUNLIGlzIG5v
dCBzZXQKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NQVT15Cj4gQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9EQ0NQPXkKPiAjIENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfREVWR1JPVVAg
aXMgbm90IHNldAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfRFNDUD15Cj4gIyBDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX0VDTiBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9FU1A9eQo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEFTSExJTUlUPXkKPiAjIENP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEVMUEVSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRG
SUxURVJfWFRfTUFUQ0hfSEwgaXMgbm90IHNldAo+IENPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hf
SVBDT01QPXkKPiBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0lQUkFOR0U9eQo+IENPTkZJR19O
> [Quoted kernel .config continues here; the list archive preserved this part of the message only as an undecoded base64 MIME body, truncated mid-stream at both ends. Decoded, it is the usual run of "CONFIG_FOO=y" / "# CONFIG_FOO is not set" lines, covering the netfilter/Xtables match, IP set, bridge ebtables, DCCP, RDS/TIPC, AX.25 and IrDA, NFC, MTD, parallel port, input and touchscreen, character device and serial (8250 and non-8250), I2C, GPIO, 1-wire, power supply, and hwmon sensor/thermal sections of the reporter's kernel configuration.]
X1NQQUNFIGlzIG5vdCBzZXQKPiBDT05GSUdfVEhFUk1BTF9HT1ZfRkFJUl9TSEFSRT15Cj4gQ09O
RklHX1RIRVJNQUxfR09WX1NURVBfV0lTRT15Cj4gQ09ORklHX1RIRVJNQUxfR09WX1VTRVJfU1BB
Q0U9eQo+ICMgQ09ORklHX1RIRVJNQUxfRU1VTEFUSU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfUkNB
Ul9USEVSTUFMPXkKPiBDT05GSUdfSU5URUxfUE9XRVJDTEFNUD15Cj4gQ09ORklHX0FDUElfSU5U
MzQwM19USEVSTUFMPXkKPiAKPiAjCj4gIyBUZXhhcyBJbnN0cnVtZW50cyB0aGVybWFsIGRyaXZl
cnMKPiAjCj4gQ09ORklHX1dBVENIRE9HPXkKPiBDT05GSUdfV0FUQ0hET0dfQ09SRT15Cj4gQ09O
RklHX1dBVENIRE9HX05PV0FZT1VUPXkKPiAKPiAjCj4gIyBXYXRjaGRvZyBEZXZpY2UgRHJpdmVy
cwo+ICMKPiAjIENPTkZJR19TT0ZUX1dBVENIRE9HIGlzIG5vdCBzZXQKPiBDT05GSUdfREE5MDU1
X1dBVENIRE9HPXkKPiAjIENPTkZJR19XTTgzMVhfV0FUQ0hET0cgaXMgbm90IHNldAo+IENPTkZJ
R19XTTgzNTBfV0FUQ0hET0c9eQo+IENPTkZJR19UV0w0MDMwX1dBVENIRE9HPXkKPiBDT05GSUdf
UkVUVV9XQVRDSERPRz15Cj4gIyBDT05GSUdfQUNRVUlSRV9XRFQgaXMgbm90IHNldAo+ICMgQ09O
RklHX0FEVkFOVEVDSF9XRFQgaXMgbm90IHNldAo+IENPTkZJR19BTElNMTUzNV9XRFQ9eQo+IENP
TkZJR19BTElNNzEwMV9XRFQ9eQo+ICMgQ09ORklHX0Y3MTgwOEVfV0RUIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19TUDUxMDBfVENPIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQzUyMF9XRFQgaXMgbm90
IHNldAo+IENPTkZJR19TQkNfRklUUEMyX1dBVENIRE9HPXkKPiBDT05GSUdfRVVST1RFQ0hfV0RU
PXkKPiBDT05GSUdfSUI3MDBfV0RUPXkKPiBDT05GSUdfSUJNQVNSPXkKPiBDT05GSUdfV0FGRVJf
V0RUPXkKPiBDT05GSUdfSTYzMDBFU0JfV0RUPXkKPiBDT05GSUdfSUU2WFhfV0RUPXkKPiBDT05G
SUdfSVRDT19XRFQ9eQo+IENPTkZJR19JVENPX1ZFTkRPUl9TVVBQT1JUPXkKPiBDT05GSUdfSVQ4
NzEyRl9XRFQ9eQo+IENPTkZJR19JVDg3X1dEVD15Cj4gIyBDT05GSUdfSFBfV0FUQ0hET0cgaXMg
bm90IHNldAo+IENPTkZJR19LRU1QTERfV0RUPXkKPiBDT05GSUdfU0MxMjAwX1dEVD15Cj4gQ09O
RklHX1BDODc0MTNfV0RUPXkKPiBDT05GSUdfTlZfVENPPXkKPiBDT05GSUdfNjBYWF9XRFQ9eQo+
IENPTkZJR19TQkM4MzYwX1dEVD15Cj4gQ09ORklHX0NQVTVfV0RUPXkKPiBDT05GSUdfU01TQ19T
Q0gzMTFYX1dEVD15Cj4gQ09ORklHX1NNU0MzN0I3ODdfV0RUPXkKPiAjIENPTkZJR19WSUFfV0RU
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19XODM2MjdIRl9XRFQgaXMgbm90IHNldAo+IENPTkZJR19X
ODM2OTdIRl9XRFQ9eQo+IENPTkZJR19XODM2OTdVR19XRFQ9eQo+IENPTkZJR19XODM4NzdGX1dE
VD15Cj4gIyBDT05GSUdfVzgzOTc3Rl9XRFQgaXMgbm90IHNldAo+IENPTkZJR19NQUNIWl9XRFQ9
eQo+ICMgQ09ORklHX1NCQ19FUFhfQzNfV0FUQ0hET0cgaXMgbm90IHNldAo+IENPTkZJR19NRU5f
QTIxX1dEVD15Cj4gQ09ORklHX1hFTl9XRFQ9eQo+IAo+ICMKPiAjIFBDSS1iYXNlZCBXYXRjaGRv
ZyBDYXJkcwo+ICMKPiBDT05GSUdfUENJUENXQVRDSERPRz15Cj4gQ09ORklHX1dEVFBDST15Cj4g
Q09ORklHX1NTQl9QT1NTSUJMRT15Cj4gCj4gIwo+ICMgU29uaWNzIFNpbGljb24gQmFja3BsYW5l
Cj4gIwo+ICMgQ09ORklHX1NTQiBpcyBub3Qgc2V0Cj4gQ09ORklHX0JDTUFfUE9TU0lCTEU9eQo+
IAo+ICMKPiAjIEJyb2FkY29tIHNwZWNpZmljIEFNQkEKPiAjCj4gQ09ORklHX0JDTUE9eQo+IENP
TkZJR19CQ01BX0hPU1RfUENJX1BPU1NJQkxFPXkKPiBDT05GSUdfQkNNQV9IT1NUX1BDST15Cj4g
Q09ORklHX0JDTUFfSE9TVF9TT0M9eQo+IENPTkZJR19CQ01BX0RSSVZFUl9HTUFDX0NNTj15Cj4g
Q09ORklHX0JDTUFfRFJJVkVSX0dQSU89eQo+IENPTkZJR19CQ01BX0RFQlVHPXkKPiAKPiAjCj4g
IyBNdWx0aWZ1bmN0aW9uIGRldmljZSBkcml2ZXJzCj4gIwo+IENPTkZJR19NRkRfQ09SRT15Cj4g
IyBDT05GSUdfTUZEX0NTNTUzNSBpcyBub3Qgc2V0Cj4gQ09ORklHX01GRF9BUzM3MTE9eQo+ICMg
Q09ORklHX1BNSUNfQURQNTUyMCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUZEX0FBVDI4NzBfQ09S
RSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUZEX0NST1NfRUMgaXMgbm90IHNldAo+ICMgQ09ORklH
X1BNSUNfREE5MDNYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfREE5MDUyX0kyQyBpcyBub3Qg
c2V0Cj4gQ09ORklHX01GRF9EQTkwNTU9eQo+IENPTkZJR19NRkRfREE5MDYzPXkKPiBDT05GSUdf
TUZEX01DMTNYWFg9eQo+IENPTkZJR19NRkRfTUMxM1hYWF9JMkM9eQo+IENPTkZJR19IVENfUEFT
SUMzPXkKPiAjIENPTkZJR19IVENfSTJDUExEIGlzIG5vdCBzZXQKPiBDT05GSUdfTFBDX0lDSD15
Cj4gQ09ORklHX0xQQ19TQ0g9eQo+IENPTkZJR19NRkRfSkFOWl9DTU9ESU89eQo+IENPTkZJR19N
RkRfS0VNUExEPXkKPiBDT05GSUdfTUZEXzg4UE04MDA9eQo+ICMgQ09ORklHX01GRF84OFBNODA1
IGlzIG5vdCBzZXQKPiBDT05GSUdfTUZEXzg4UE04NjBYPXkKPiAjIENPTkZJR19NRkRfTUFYMTQ1
NzcgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9NQVg3NzY4NiBpcyBub3Qgc2V0Cj4gQ09ORklH
X01GRF9NQVg3NzY5Mz15Cj4gIyBDT05GSUdfTUZEX01BWDg5MDcgaXMgbm90IHNldAo+ICMgQ09O
RklHX01GRF9NQVg4OTI1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfTUFYODk5NyBpcyBub3Qg
c2V0Cj4gQ09ORklHX01GRF9NQVg4OTk4PXkKPiBDT05GSUdfTUZEX1JFVFU9eQo+IENPTkZJR19N
RkRfUENGNTA2MzM9eQo+IENPTkZJR19QQ0Y1MDYzM19BREM9eQo+IENPTkZJR19QQ0Y1MDYzM19H
UElPPXkKPiAjIENPTkZJR19VQ0IxNDAwX0NPUkUgaXMgbm90IHNldAo+IENPTkZJR19NRkRfUkRD
MzIxWD15Cj4gIyBDT05GSUdfTUZEX1JUU1hfUENJIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRf
UkM1VDU4MyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUZEX1NFQ19DT1JFIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19NRkRfU0k0NzZYX0NPUkUgaXMgbm90IHNldAo+IENPTkZJR19NRkRfU001MDE9eQo+
ICMgQ09ORklHX01GRF9TTTUwMV9HUElPIGlzIG5vdCBzZXQKPiBDT05GSUdfTUZEX1NNU0M9eQo+
ICMgQ09ORklHX0FCWDUwMF9DT1JFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfU1RNUEUgaXMg
bm90IHNldAo+IENPTkZJR19NRkRfU1lTQ09OPXkKPiBDT05GSUdfTUZEX1RJX0FNMzM1WF9UU0NB
REM9eQo+ICMgQ09ORklHX01GRF9MUDM5NDMgaXMgbm90IHNldAo+IENPTkZJR19NRkRfTFA4Nzg4
PXkKPiBDT05GSUdfTUZEX1BBTE1BUz15Cj4gQ09ORklHX1RQUzYxMDVYPXkKPiBDT05GSUdfVFBT
NjUwMTA9eQo+IENPTkZJR19UUFM2NTA3WD15Cj4gIyBDT05GSUdfTUZEX1RQUzY1MDkwIGlzIG5v
dCBzZXQKPiBDT05GSUdfTUZEX1RQUzY1MjE3PXkKPiBDT05GSUdfTUZEX1RQUzY1ODZYPXkKPiAj
IENPTkZJR19NRkRfVFBTNjU5MTAgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9UUFM2NTkxMiBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfTUZEX1RQUzY1OTEyX0kyQyBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfTUZEX1RQUzgwMDMxIGlzIG5vdCBzZXQKPiBDT05GSUdfVFdMNDAzMF9DT1JFPXkKPiAjIENP
TkZJR19UV0w0MDMwX01BREMgaXMgbm90IHNldAo+IENPTkZJR19NRkRfVFdMNDAzMF9BVURJTz15
Cj4gQ09ORklHX1RXTDYwNDBfQ09SRT15Cj4gQ09ORklHX01GRF9XTDEyNzNfQ09SRT15Cj4gQ09O
RklHX01GRF9MTTM1MzM9eQo+IENPTkZJR19NRkRfVElNQkVSREFMRT15Cj4gQ09ORklHX01GRF9U
QzM1ODlYPXkKPiAjIENPTkZJR19NRkRfVE1JTyBpcyBub3Qgc2V0Cj4gQ09ORklHX01GRF9WWDg1
NT15Cj4gQ09ORklHX01GRF9BUklaT05BPXkKPiBDT05GSUdfTUZEX0FSSVpPTkFfSTJDPXkKPiAj
IENPTkZJR19NRkRfV001MTAyIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfV001MTEwIGlzIG5v
dCBzZXQKPiBDT05GSUdfTUZEX1dNODk5Nz15Cj4gQ09ORklHX01GRF9XTTg0MDA9eQo+IENPTkZJ
R19NRkRfV004MzFYPXkKPiBDT05GSUdfTUZEX1dNODMxWF9JMkM9eQo+IENPTkZJR19NRkRfV004
MzUwPXkKPiBDT05GSUdfTUZEX1dNODM1MF9JMkM9eQo+ICMgQ09ORklHX01GRF9XTTg5OTQgaXMg
bm90IHNldAo+IENPTkZJR19SRUdVTEFUT1I9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9ERUJVRyBp
cyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9GSVhFRF9WT0xUQUdFPXkKPiAjIENPTkZJR19S
RUdVTEFUT1JfVklSVFVBTF9DT05TVU1FUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9V
U0VSU1BBQ0VfQ09OU1VNRVI9eQo+IENPTkZJR19SRUdVTEFUT1JfODhQTTgwMD15Cj4gIyBDT05G
SUdfUkVHVUxBVE9SXzg4UE04NjA3IGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX0FDVDg4
NjU9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9BRDUzOTggaXMgbm90IHNldAo+IENPTkZJR19SRUdV
TEFUT1JfQU5BVE9QPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfQVJJWk9OQSBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfUkVHVUxBVE9SX0FTMzcxMSBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9E
QTkwNTU9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9EQTkwNjMgaXMgbm90IHNldAo+IENPTkZJR19S
RUdVTEFUT1JfREE5MjEwPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNl
dAo+ICMgQ09ORklHX1JFR1VMQVRPUl9HUElPIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SRUdVTEFU
T1JfSVNMNjI3MUEgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFUT1JfTFAzOTcxPXkKPiBDT05G
SUdfUkVHVUxBVE9SX0xQMzk3Mj15Cj4gQ09ORklHX1JFR1VMQVRPUl9MUDg3Mlg9eQo+ICMgQ09O
RklHX1JFR1VMQVRPUl9MUDg3NTUgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFUT1JfTFA4Nzg4
PXkKPiAjIENPTkZJR19SRUdVTEFUT1JfTUFYMTU4NiBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VM
QVRPUl9NQVg4NjQ5PXkKPiAjIENPTkZJR19SRUdVTEFUT1JfTUFYODY2MCBpcyBub3Qgc2V0Cj4g
Q09ORklHX1JFR1VMQVRPUl9NQVg4OTUyPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfTUFYODk3MyBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfUkVHVUxBVE9SX01BWDg5OTggaXMgbm90IHNldAo+IENPTkZJ
R19SRUdVTEFUT1JfTUFYNzc2OTM9eQo+IENPTkZJR19SRUdVTEFUT1JfTUMxM1hYWF9DT1JFPXkK
PiBDT05GSUdfUkVHVUxBVE9SX01DMTM3ODM9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9NQzEzODky
IGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1BBTE1BUz15Cj4gIyBDT05GSUdfUkVHVUxB
VE9SX1BDRjUwNjMzIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1BGVVpFMTAwPXkKPiAj
IENPTkZJR19SRUdVTEFUT1JfVFBTNTE2MzIgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFUT1Jf
VFBTNjEwNVg9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9UUFM2MjM2MCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1JFR1VMQVRPUl9UUFM2NTAyMz15Cj4gQ09ORklHX1JFR1VMQVRPUl9UUFM2NTA3WD15Cj4g
Q09ORklHX1JFR1VMQVRPUl9UUFM2NTIxNz15Cj4gIyBDT05GSUdfUkVHVUxBVE9SX1RQUzY1ODZY
IGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1RXTDQwMzA9eQo+ICMgQ09ORklHX1JFR1VM
QVRPUl9XTTgzMVggaXMgbm90IHNldAo+ICMgQ09ORklHX1JFR1VMQVRPUl9XTTgzNTAgaXMgbm90
IHNldAo+ICMgQ09ORklHX1JFR1VMQVRPUl9XTTg0MDAgaXMgbm90IHNldAo+IENPTkZJR19NRURJ
QV9TVVBQT1JUPXkKPiAKPiAjCj4gIyBNdWx0aW1lZGlhIGNvcmUgc3VwcG9ydAo+ICMKPiBDT05G
SUdfTUVESUFfQ0FNRVJBX1NVUFBPUlQ9eQo+ICMgQ09ORklHX01FRElBX0FOQUxPR19UVl9TVVBQ
T1JUIGlzIG5vdCBzZXQKPiBDT05GSUdfTUVESUFfRElHSVRBTF9UVl9TVVBQT1JUPXkKPiAjIENP
TkZJR19NRURJQV9SQURJT19TVVBQT1JUIGlzIG5vdCBzZXQKPiBDT05GSUdfTUVESUFfUkNfU1VQ
UE9SVD15Cj4gIyBDT05GSUdfTUVESUFfQ09OVFJPTExFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJ
REVPX0RFVj15Cj4gQ09ORklHX1ZJREVPX1Y0TDI9eQo+ICMgQ09ORklHX1ZJREVPX0FEVl9ERUJV
RyBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0ZJWEVEX01JTk9SX1JBTkdFUz15Cj4gQ09ORklH
X1Y0TDJfTUVNMk1FTV9ERVY9eQo+IENPTkZJR19WSURFT0JVRjJfQ09SRT15Cj4gQ09ORklHX1ZJ
REVPQlVGMl9NRU1PUFM9eQo+IENPTkZJR19WSURFT0JVRjJfRE1BX0NPTlRJRz15Cj4gQ09ORklH
X1ZJREVPQlVGMl9WTUFMTE9DPXkKPiBDT05GSUdfVklERU9CVUYyX0RNQV9TRz15Cj4gQ09ORklH
X0RWQl9DT1JFPXkKPiAjIENPTkZJR19EVkJfTkVUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19UVFBD
SV9FRVBST00gaXMgbm90IHNldAo+IENPTkZJR19EVkJfTUFYX0FEQVBURVJTPTgKPiAjIENPTkZJ
R19EVkJfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lZGlhIGRyaXZlcnMK
PiAjCj4gQ09ORklHX1JDX0NPUkU9eQo+IENPTkZJR19SQ19NQVA9eQo+IENPTkZJR19SQ19ERUNP
REVSUz15Cj4gQ09ORklHX0xJUkM9eQo+IENPTkZJR19JUl9MSVJDX0NPREVDPXkKPiAjIENPTkZJ
R19JUl9ORUNfREVDT0RFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX0lSX1JDNV9ERUNPREVSPXkKPiBD
T05GSUdfSVJfUkM2X0RFQ09ERVI9eQo+ICMgQ09ORklHX0lSX0pWQ19ERUNPREVSIGlzIG5vdCBz
ZXQKPiBDT05GSUdfSVJfU09OWV9ERUNPREVSPXkKPiBDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9
eQo+IENPTkZJR19JUl9TQU5ZT19ERUNPREVSPXkKPiBDT05GSUdfSVJfTUNFX0tCRF9ERUNPREVS
PXkKPiBDT05GSUdfUkNfREVWSUNFUz15Cj4gIyBDT05GSUdfSVJfRU5FIGlzIG5vdCBzZXQKPiBD
T05GSUdfSVJfSVRFX0NJUj15Cj4gQ09ORklHX0lSX0ZJTlRFSz15Cj4gQ09ORklHX0lSX05VVk9U
T049eQo+IENPTkZJR19JUl9XSU5CT05EX0NJUj15Cj4gQ09ORklHX1JDX0xPT1BCQUNLPXkKPiBD
T05GSUdfSVJfR1BJT19DSVI9eQo+ICMgQ09ORklHX01FRElBX1BDSV9TVVBQT1JUIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19WNExfUExBVEZPUk1fRFJJVkVSUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1Y0
TF9NRU0yTUVNX0RSSVZFUlM9eQo+IENPTkZJR19WSURFT19NRU0yTUVNX0RFSU5URVJMQUNFPXkK
PiBDT05GSUdfVklERU9fU0hfVkVVPXkKPiAjIENPTkZJR19WNExfVEVTVF9EUklWRVJTIGlzIG5v
dCBzZXQKPiAKPiAjCj4gIyBTdXBwb3J0ZWQgTU1DL1NESU8gYWRhcHRlcnMKPiAjCj4gQ09ORklH
X01FRElBX1BBUlBPUlRfU1VQUE9SVD15Cj4gQ09ORklHX1ZJREVPX0JXUUNBTT15Cj4gIyBDT05G
SUdfVklERU9fQ1FDQU0gaXMgbm90IHNldAo+IAo+ICMKPiAjIE1lZGlhIGFuY2lsbGFyeSBkcml2
ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGkyYywgZnJvbnRlbmRzKQo+ICMKPiAjIENPTkZJR19NRURJ
QV9TVUJEUlZfQVVUT1NFTEVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0lSX0kyQz15Cj4g
Cj4gIwo+ICMgRW5jb2RlcnMsIGRlY29kZXJzLCBzZW5zb3JzIGFuZCBvdGhlciBoZWxwZXIgY2hp
cHMKPiAjCj4gCj4gIwo+ICMgQXVkaW8gZGVjb2RlcnMsIHByb2Nlc3NvcnMgYW5kIG1peGVycwo+
ICMKPiAjIENPTkZJR19WSURFT19UVkFVRElPIGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fVERB
NzQzMj15Cj4gIyBDT05GSUdfVklERU9fVERBOTg0MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfVklE
RU9fVEVBNjQxNUMgaXMgbm90IHNldAo+IENPTkZJR19WSURFT19URUE2NDIwPXkKPiAjIENPTkZJ
R19WSURFT19NU1AzNDAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19WSURFT19DUzUzNDUgaXMgbm90
IHNldAo+IENPTkZJR19WSURFT19DUzUzTDMyQT15Cj4gIyBDT05GSUdfVklERU9fVExWMzIwQUlD
MjNCIGlzIG5vdCBzZXQKPiAjIENPTkZJR19WSURFT19VREExMzQyIGlzIG5vdCBzZXQKPiBDT05G
SUdfVklERU9fV004Nzc1PXkKPiBDT05GSUdfVklERU9fV004NzM5PXkKPiBDT05GSUdfVklERU9f
VlAyN1NNUFg9eQo+ICMgQ09ORklHX1ZJREVPX1NPTllfQlRGX01QWCBpcyBub3Qgc2V0Cj4gCj4g
Iwo+ICMgUkRTIGRlY29kZXJzCj4gIwo+IENPTkZJR19WSURFT19TQUE2NTg4PXkKPiAKPiAjCj4g
IyBWaWRlbyBkZWNvZGVycwo+ICMKPiBDT05GSUdfVklERU9fQURWNzE4MD15Cj4gIyBDT05GSUdf
VklERU9fQURWNzE4MyBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX0JUODE5PXkKPiBDT05GSUdf
VklERU9fQlQ4NTY9eQo+IENPTkZJR19WSURFT19CVDg2Nj15Cj4gIyBDT05GSUdfVklERU9fS1Mw
MTI3IGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fTUw4NlY3NjY3PXkKPiAjIENPTkZJR19WSURF
T19TQUE3MTEwIGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fU0FBNzExWD15Cj4gQ09ORklHX1ZJ
REVPX1NBQTcxOTE9eQo+IENPTkZJR19WSURFT19UVlA1MTRYPXkKPiBDT05GSUdfVklERU9fVFZQ
NTE1MD15Cj4gQ09ORklHX1ZJREVPX1RWUDcwMDI9eQo+IENPTkZJR19WSURFT19UVzI4MDQ9eQo+
IENPTkZJR19WSURFT19UVzk5MDM9eQo+IENPTkZJR19WSURFT19UVzk5MDY9eQo+IENPTkZJR19W
SURFT19WUFgzMjIwPXkKPiAKPiAjCj4gIyBWaWRlbyBhbmQgYXVkaW8gZGVjb2RlcnMKPiAjCj4g
Q09ORklHX1ZJREVPX1NBQTcxN1g9eQo+ICMgQ09ORklHX1ZJREVPX0NYMjU4NDAgaXMgbm90IHNl
dAo+IAo+ICMKPiAjIFZpZGVvIGVuY29kZXJzCj4gIwo+ICMgQ09ORklHX1ZJREVPX1NBQTcxMjcg
aXMgbm90IHNldAo+ICMgQ09ORklHX1ZJREVPX1NBQTcxODUgaXMgbm90IHNldAo+IENPTkZJR19W
SURFT19BRFY3MTcwPXkKPiBDT05GSUdfVklERU9fQURWNzE3NT15Cj4gQ09ORklHX1ZJREVPX0FE
VjczNDM9eQo+ICMgQ09ORklHX1ZJREVPX0FEVjczOTMgaXMgbm90IHNldAo+ICMgQ09ORklHX1ZJ
REVPX0FLODgxWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX1RIUzgyMDA9eQo+IAo+ICMKPiAj
IENhbWVyYSBzZW5zb3IgZGV2aWNlcwo+ICMKPiBDT05GSUdfVklERU9fT1Y3NjQwPXkKPiBDT05G
SUdfVklERU9fT1Y3NjcwPXkKPiAjIENPTkZJR19WSURFT19WUzY2MjQgaXMgbm90IHNldAo+IENP
TkZJR19WSURFT19NVDlWMDExPXkKPiAjIENPTkZJR19WSURFT19TUjAzMFBDMzAgaXMgbm90IHNl
dAo+IAo+ICMKPiAjIEZsYXNoIGRldmljZXMKPiAjCj4gCj4gIwo+ICMgVmlkZW8gaW1wcm92ZW1l
bnQgY2hpcHMKPiAjCj4gQ09ORklHX1ZJREVPX1VQRDY0MDMxQT15Cj4gQ09ORklHX1ZJREVPX1VQ
RDY0MDgzPXkKPiAKPiAjCj4gIyBNaXNjZWxsYW5lb3VzIGhlbHBlciBjaGlwcwo+ICMKPiBDT05G
SUdfVklERU9fVEhTNzMwMz15Cj4gQ09ORklHX1ZJREVPX001Mjc5MD15Cj4gCj4gIwo+ICMgU2Vu
c29ycyB1c2VkIG9uIHNvY19jYW1lcmEgZHJpdmVyCj4gIwo+IAo+ICMKPiAjIEN1c3RvbWl6ZSBU
ViB0dW5lcnMKPiAjCj4gQ09ORklHX01FRElBX1RVTkVSX1NJTVBMRT15Cj4gIyBDT05GSUdfTUVE
SUFfVFVORVJfVERBODI5MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfVERBODI3
WCBpcyBub3Qgc2V0Cj4gQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjcxPXkKPiBDT05GSUdfTUVE
SUFfVFVORVJfVERBOTg4Nz15Cj4gQ09ORklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQo+IENPTkZJ
R19NRURJQV9UVU5FUl9URUE1NzY3PXkKPiAjIENPTkZJR19NRURJQV9UVU5FUl9NVDIwWFggaXMg
bm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX01UMjA2MCBpcyBub3Qgc2V0Cj4gQ09ORklH
X01FRElBX1RVTkVSX01UMjA2Mz15Cj4gQ09ORklHX01FRElBX1RVTkVSX01UMjI2Nj15Cj4gIyBD
T05GSUdfTUVESUFfVFVORVJfTVQyMTMxIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRURJQV9UVU5F
Ul9RVDEwMTAgaXMgbm90IHNldAo+IENPTkZJR19NRURJQV9UVU5FUl9YQzIwMjg9eQo+IENPTkZJ
R19NRURJQV9UVU5FUl9YQzUwMDA9eQo+IENPTkZJR19NRURJQV9UVU5FUl9YQzQwMDA9eQo+IENP
TkZJR19NRURJQV9UVU5FUl9NWEw1MDA1Uz15Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfTVhMNTAw
N1QgaXMgbm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX01DNDRTODAzIGlzIG5vdCBzZXQK
PiBDT05GSUdfTUVESUFfVFVORVJfTUFYMjE2NT15Cj4gIyBDT05GSUdfTUVESUFfVFVORVJfVERB
MTgyMTggaXMgbm90IHNldAo+ICMgQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMSBpcyBub3Qgc2V0
Cj4gQ09ORklHX01FRElBX1RVTkVSX0ZDMDAxMj15Cj4gQ09ORklHX01FRElBX1RVTkVSX0ZDMDAx
Mz15Cj4gQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjEyPXkKPiBDT05GSUdfTUVESUFfVFVORVJf
RTQwMDA9eQo+ICMgQ09ORklHX01FRElBX1RVTkVSX0ZDMjU4MCBpcyBub3Qgc2V0Cj4gQ09ORklH
X01FRElBX1RVTkVSX004OFRTMjAyMj15Cj4gQ09ORklHX01FRElBX1RVTkVSX1RVQTkwMDE9eQo+
ICMgQ09ORklHX01FRElBX1RVTkVSX0lUOTEzWCBpcyBub3Qgc2V0Cj4gQ09ORklHX01FRElBX1RV
TkVSX1I4MjBUPXkKPiAKPiAjCj4gIyBDdXN0b21pc2UgRFZCIEZyb250ZW5kcwo+ICMKPiAKPiAj
Cj4gIyBNdWx0aXN0YW5kYXJkIChzYXRlbGxpdGUpIGZyb250ZW5kcwo+ICMKPiAjIENPTkZJR19E
VkJfU1RCMDg5OSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX1NUQjYxMDAgaXMgbm90IHNldAo+
ICMgQ09ORklHX0RWQl9TVFYwOTB4IGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfU1RWNjExMHgg
aXMgbm90IHNldAo+IENPTkZJR19EVkJfTTg4RFMzMTAzPXkKPiAKPiAjCj4gIyBNdWx0aXN0YW5k
YXJkIChjYWJsZSArIHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9EUlhL
PXkKPiAjIENPTkZJR19EVkJfVERBMTgyNzFDMkREIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBEVkIt
UyAoc2F0ZWxsaXRlKSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9DWDI0MTEwPXkKPiAjIENP
TkZJR19EVkJfQ1gyNDEyMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9NVDMxMj15Cj4gIyBDT05G
SUdfRFZCX1pMMTAwMzYgaXMgbm90IHNldAo+IENPTkZJR19EVkJfWkwxMDAzOT15Cj4gQ09ORklH
X0RWQl9TNUgxNDIwPXkKPiBDT05GSUdfRFZCX1NUVjAyODg9eQo+IENPTkZJR19EVkJfU1RCNjAw
MD15Cj4gQ09ORklHX0RWQl9TVFYwMjk5PXkKPiBDT05GSUdfRFZCX1NUVjYxMTA9eQo+IENPTkZJ
R19EVkJfU1RWMDkwMD15Cj4gQ09ORklHX0RWQl9UREE4MDgzPXkKPiAjIENPTkZJR19EVkJfVERB
MTAwODYgaXMgbm90IHNldAo+ICMgQ09ORklHX0RWQl9UREE4MjYxIGlzIG5vdCBzZXQKPiBDT05G
SUdfRFZCX1ZFUzFYOTM9eQo+IENPTkZJR19EVkJfVFVORVJfSVREMTAwMD15Cj4gIyBDT05GSUdf
RFZCX1RVTkVSX0NYMjQxMTMgaXMgbm90IHNldAo+IENPTkZJR19EVkJfVERBODI2WD15Cj4gQ09O
RklHX0RWQl9UVUE2MTAwPXkKPiBDT05GSUdfRFZCX0NYMjQxMTY9eQo+IENPTkZJR19EVkJfQ1gy
NDExNz15Cj4gIyBDT05GSUdfRFZCX1NJMjFYWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX1RT
MjAyMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9EUzMwMDA9eQo+IENPTkZJR19EVkJfTUI4NkEx
Nj15Cj4gQ09ORklHX0RWQl9UREExMDA3MT15Cj4gCj4gIwo+ICMgRFZCLVQgKHRlcnJlc3RyaWFs
KSBmcm9udGVuZHMKPiAjCj4gQ09ORklHX0RWQl9TUDg4NzA9eQo+ICMgQ09ORklHX0RWQl9TUDg4
N1ggaXMgbm90IHNldAo+IENPTkZJR19EVkJfQ1gyMjcwMD15Cj4gQ09ORklHX0RWQl9DWDIyNzAy
PXkKPiAjIENPTkZJR19EVkJfUzVIMTQzMiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX0RSWEQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0RWQl9MNjQ3ODEgaXMgbm90IHNldAo+ICMgQ09ORklHX0RW
Ql9UREExMDA0WCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRFZCX05YVDYwMDAgaXMgbm90IHNldAo+
IENPTkZJR19EVkJfTVQzNTI9eQo+ICMgQ09ORklHX0RWQl9aTDEwMzUzIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19EVkJfRElCMzAwME1CIGlzIG5vdCBzZXQKPiBDT05GSUdfRFZCX0RJQjMwMDBNQz15
Cj4gQ09ORklHX0RWQl9ESUI3MDAwTT15Cj4gQ09ORklHX0RWQl9ESUI3MDAwUD15Cj4gQ09ORklH
X0RWQl9ESUI5MDAwPXkKPiAjIENPTkZJR19EVkJfVERBMTAwNDggaXMgbm90IHNldAo+IENPTkZJ
R19EVkJfQUY5MDEzPXkKPiBDT05GSUdfRFZCX0VDMTAwPXkKPiAjIENPTkZJR19EVkJfSEQyOUwy
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfU1RWMDM2NyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RW
Ql9DWEQyODIwUj15Cj4gQ09ORklHX0RWQl9SVEwyODMwPXkKPiBDT05GSUdfRFZCX1JUTDI4MzI9
eQo+IAo+ICMKPiAjIERWQi1DIChjYWJsZSkgZnJvbnRlbmRzCj4gIwo+IENPTkZJR19EVkJfVkVT
MTgyMD15Cj4gQ09ORklHX0RWQl9UREExMDAyMT15Cj4gQ09ORklHX0RWQl9UREExMDAyMz15Cj4g
Q09ORklHX0RWQl9TVFYwMjk3PXkKPiAKPiAjCj4gIyBBVFNDIChOb3J0aCBBbWVyaWNhbi9Lb3Jl
YW4gVGVycmVzdHJpYWwvQ2FibGUgRFRWKSBmcm9udGVuZHMKPiAjCj4gIyBDT05GSUdfRFZCX05Y
VDIwMFggaXMgbm90IHNldAo+IENPTkZJR19EVkJfT1I1MTIxMT15Cj4gIyBDT05GSUdfRFZCX09S
NTExMzIgaXMgbm90IHNldAo+IENPTkZJR19EVkJfQkNNMzUxMD15Cj4gIyBDT05GSUdfRFZCX0xH
RFQzMzBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EVkJfTEdEVDMzMDUgaXMgbm90IHNldAo+ICMg
Q09ORklHX0RWQl9MRzIxNjAgaXMgbm90IHNldAo+IENPTkZJR19EVkJfUzVIMTQwOT15Cj4gQ09O
RklHX0RWQl9BVTg1MjI9eQo+IENPTkZJR19EVkJfQVU4NTIyX0RUVj15Cj4gQ09ORklHX0RWQl9B
VTg1MjJfVjRMPXkKPiAjIENPTkZJR19EVkJfUzVIMTQxMSBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
SVNEQi1UICh0ZXJyZXN0cmlhbCkgZnJvbnRlbmRzCj4gIwo+ICMgQ09ORklHX0RWQl9TOTIxIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19EVkJfRElCODAwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9N
Qjg2QTIwUz15Cj4gCj4gIwo+ICMgRGlnaXRhbCB0ZXJyZXN0cmlhbCBvbmx5IHR1bmVycy9QTEwK
PiAjCj4gIyBDT05GSUdfRFZCX1BMTCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9UVU5FUl9ESUIw
MDcwPXkKPiAjIENPTkZJR19EVkJfVFVORVJfRElCMDA5MCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
U0VDIGNvbnRyb2wgZGV2aWNlcyBmb3IgRFZCLVMKPiAjCj4gQ09ORklHX0RWQl9MTkJQMjE9eQo+
IENPTkZJR19EVkJfTE5CUDIyPXkKPiAjIENPTkZJR19EVkJfSVNMNjQwNSBpcyBub3Qgc2V0Cj4g
Q09ORklHX0RWQl9JU0w2NDIxPXkKPiBDT05GSUdfRFZCX0lTTDY0MjM9eQo+IENPTkZJR19EVkJf
QTgyOTM9eQo+ICMgQ09ORklHX0RWQl9MR1M4R0w1IGlzIG5vdCBzZXQKPiBDT05GSUdfRFZCX0xH
UzhHWFg9eQo+ICMgQ09ORklHX0RWQl9BVEJNODgzMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RWQl9U
REE2NjV4PXkKPiBDT05GSUdfRFZCX0lYMjUwNVY9eQo+IENPTkZJR19EVkJfSVQ5MTNYX0ZFPXkK
PiBDT05GSUdfRFZCX004OFJTMjAwMD15Cj4gQ09ORklHX0RWQl9BRjkwMzM9eQo+IAo+ICMKPiAj
IFRvb2xzIHRvIGRldmVsb3AgbmV3IGZyb250ZW5kcwo+ICMKPiBDT05GSUdfRFZCX0RVTU1ZX0ZF
PXkKPiAKPiAjCj4gIyBHcmFwaGljcyBzdXBwb3J0Cj4gIwo+IENPTkZJR19BR1A9eQo+ICMgQ09O
RklHX0FHUF9BTUQ2NCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FHUF9JTlRFTD15Cj4gQ09ORklHX0FH
UF9TSVM9eQo+IENPTkZJR19BR1BfVklBPXkKPiBDT05GSUdfSU5URUxfR1RUPXkKPiBDT05GSUdf
VkdBX0FSQj15Cj4gQ09ORklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYKPiBDT05GSUdfVkdBX1NXSVRD
SEVST089eQo+IENPTkZJR19EUk09eQo+IENPTkZJR19EUk1fS01TX0hFTFBFUj15Cj4gQ09ORklH
X0RSTV9LTVNfRkJfSEVMUEVSPXkKPiBDT05GSUdfRFJNX0xPQURfRURJRF9GSVJNV0FSRT15Cj4g
Q09ORklHX0RSTV9UVE09eQo+IAo+ICMKPiAjIEkyQyBlbmNvZGVyIG9yIGhlbHBlciBjaGlwcwo+
ICMKPiBDT05GSUdfRFJNX0kyQ19DSDcwMDY9eQo+ICMgQ09ORklHX0RSTV9JMkNfU0lMMTY0IGlz
IG5vdCBzZXQKPiBDT05GSUdfRFJNX0kyQ19OWFBfVERBOTk4WD15Cj4gIyBDT05GSUdfRFJNX1RE
RlggaXMgbm90IHNldAo+IENPTkZJR19EUk1fUjEyOD15Cj4gQ09ORklHX0RSTV9SQURFT049eQo+
IENPTkZJR19EUk1fUkFERU9OX1VNUz15Cj4gIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNl
dAo+IENPTkZJR19EUk1fSTgxMD15Cj4gQ09ORklHX0RSTV9JOTE1PXkKPiAjIENPTkZJR19EUk1f
STkxNV9LTVMgaXMgbm90IHNldAo+ICMgQ09ORklHX0RSTV9JOTE1X0ZCREVWIGlzIG5vdCBzZXQK
PiBDT05GSUdfRFJNX0k5MTVfUFJFTElNSU5BUllfSFdfU1VQUE9SVD15Cj4gIyBDT05GSUdfRFJN
X0k5MTVfVU1TIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EUk1fTUdBIGlzIG5vdCBzZXQKPiBDT05G
SUdfRFJNX1NJUz15Cj4gQ09ORklHX0RSTV9WSUE9eQo+ICMgQ09ORklHX0RSTV9TQVZBR0UgaXMg
bm90IHNldAo+IENPTkZJR19EUk1fVk1XR0ZYPXkKPiBDT05GSUdfRFJNX1ZNV0dGWF9GQkNPTj15
Cj4gQ09ORklHX0RSTV9HTUE1MDA9eQo+ICMgQ09ORklHX0RSTV9HTUE2MDAgaXMgbm90IHNldAo+
ICMgQ09ORklHX0RSTV9HTUEzNjAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EUk1fQVNUIGlzIG5v
dCBzZXQKPiAjIENPTkZJR19EUk1fTUdBRzIwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RSTV9DSVJS
VVNfUUVNVT15Cj4gQ09ORklHX0RSTV9RWEw9eQo+IENPTkZJR19WR0FTVEFURT15Cj4gQ09ORklH
X1ZJREVPX09VVFBVVF9DT05UUk9MPXkKPiBDT05GSUdfSERNST15Cj4gQ09ORklHX0ZCPXkKPiBD
T05GSUdfRklSTVdBUkVfRURJRD15Cj4gQ09ORklHX0ZCX0REQz15Cj4gQ09ORklHX0ZCX0JPT1Rf
VkVTQV9TVVBQT1JUPXkKPiBDT05GSUdfRkJfQ0ZCX0ZJTExSRUNUPXkKPiBDT05GSUdfRkJfQ0ZC
X0NPUFlBUkVBPXkKPiBDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15Cj4gIyBDT05GSUdfRkJfQ0ZC
X1JFVl9QSVhFTFNfSU5fQllURSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1NZU19GSUxMUkVDVD15
Cj4gQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15Cj4gQ09ORklHX0ZCX1NZU19JTUFHRUJMSVQ9eQo+
IENPTkZJR19GQl9GT1JFSUdOX0VORElBTj15Cj4gQ09ORklHX0ZCX0JPVEhfRU5ESUFOPXkKPiAj
IENPTkZJR19GQl9CSUdfRU5ESUFOIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9MSVRUTEVfRU5E
SUFOIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfU1lTX0ZPUFM9eQo+IENPTkZJR19GQl9ERUZFUlJF
RF9JTz15Cj4gQ09ORklHX0ZCX0hFQ1VCQT15Cj4gQ09ORklHX0ZCX1NWR0FMSUI9eQo+ICMgQ09O
RklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfQkFDS0xJR0hUPXkKPiBDT05G
SUdfRkJfTU9ERV9IRUxQRVJTPXkKPiBDT05GSUdfRkJfVElMRUJMSVRUSU5HPXkKPiAKPiAjCj4g
IyBGcmFtZSBidWZmZXIgaGFyZHdhcmUgZHJpdmVycwo+ICMKPiAjIENPTkZJR19GQl9DSVJSVVMg
aXMgbm90IHNldAo+ICMgQ09ORklHX0ZCX1BNMiBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0NZQkVS
MjAwMD15Cj4gQ09ORklHX0ZCX0NZQkVSMjAwMF9EREM9eQo+IENPTkZJR19GQl9BUkM9eQo+ICMg
Q09ORklHX0ZCX0FTSUxJQU5UIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9JTVNUVCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0ZCX1ZHQTE2PXkKPiAjIENPTkZJR19GQl9VVkVTQSBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfRkJfVkVTQSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0VGST15Cj4gQ09ORklHX0ZC
X040MTE9eQo+IENPTkZJR19GQl9IR0E9eQo+IENPTkZJR19GQl9TMUQxM1hYWD15Cj4gIyBDT05G
SUdfRkJfTlZJRElBIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfUklWQT15Cj4gQ09ORklHX0ZCX1JJ
VkFfSTJDPXkKPiAjIENPTkZJR19GQl9SSVZBX0RFQlVHIGlzIG5vdCBzZXQKPiAjIENPTkZJR19G
Ql9SSVZBX0JBQ0tMSUdIVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRkJfSTc0MCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0ZCX0xFODA1Nzg9eQo+IENPTkZJR19GQl9DQVJJTExPX1JBTkNIPXkKPiAjIENP
TkZJR19GQl9NQVRST1ggaXMgbm90IHNldAo+ICMgQ09ORklHX0ZCX1JBREVPTiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0ZCX0FUWTEyOD15Cj4gQ09ORklHX0ZCX0FUWTEyOF9CQUNLTElHSFQ9eQo+IENP
TkZJR19GQl9BVFk9eQo+IENPTkZJR19GQl9BVFlfQ1Q9eQo+IENPTkZJR19GQl9BVFlfR0VORVJJ
Q19MQ0Q9eQo+ICMgQ09ORklHX0ZCX0FUWV9HWCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0FUWV9C
QUNLTElHSFQ9eQo+IENPTkZJR19GQl9TMz15Cj4gIyBDT05GSUdfRkJfUzNfRERDIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRkJfU0FWQUdFPXkKPiBDT05GSUdfRkJfU0FWQUdFX0kyQz15Cj4gIyBDT05G
SUdfRkJfU0FWQUdFX0FDQ0VMIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfU0lTPXkKPiBDT05GSUdf
RkJfU0lTXzMwMD15Cj4gQ09ORklHX0ZCX1NJU18zMTU9eQo+ICMgQ09ORklHX0ZCX1ZJQSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX0ZCX05FT01BR0lDPXkKPiBDT05GSUdfRkJfS1lSTz15Cj4gIyBDT05G
SUdfRkJfM0RGWCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1ZPT0RPTzE9eQo+ICMgQ09ORklHX0ZC
X1ZUODYyMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1RSSURFTlQ9eQo+IENPTkZJR19GQl9BUks9
eQo+ICMgQ09ORklHX0ZCX1BNMyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0NBUk1JTkU9eQo+IENP
TkZJR19GQl9DQVJNSU5FX0RSQU1fRVZBTD15Cj4gIyBDT05GSUdfQ0FSTUlORV9EUkFNX0NVU1RP
TSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0dFT0RFPXkKPiBDT05GSUdfRkJfR0VPREVfTFg9eQo+
IENPTkZJR19GQl9HRU9ERV9HWD15Cj4gQ09ORklHX0ZCX0dFT0RFX0dYMT15Cj4gQ09ORklHX0ZC
X1RNSU89eQo+ICMgQ09ORklHX0ZCX1RNSU9fQUNDRUxMIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJf
U001MDE9eQo+IENPTkZJR19GQl9HT0xERklTSD15Cj4gQ09ORklHX0ZCX1ZJUlRVQUw9eQo+ICMg
Q09ORklHX1hFTl9GQkRFVl9GUk9OVEVORCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX01FVFJPTk9N
RT15Cj4gQ09ORklHX0ZCX01CODYyWFg9eQo+IENPTkZJR19GQl9NQjg2MlhYX1BDSV9HREM9eQo+
IENPTkZJR19GQl9NQjg2MlhYX0kyQz15Cj4gIyBDT05GSUdfRkJfQlJPQURTSEVFVCBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfRkJfQVVPX0sxOTBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9IWVBF
UlYgaXMgbm90IHNldAo+IENPTkZJR19GQl9TSU1QTEU9eQo+IENPTkZJR19FWFlOT1NfVklERU89
eQo+IENPTkZJR19CQUNLTElHSFRfTENEX1NVUFBPUlQ9eQo+IENPTkZJR19MQ0RfQ0xBU1NfREVW
SUNFPXkKPiAjIENPTkZJR19MQ0RfUExBVEZPUk0gaXMgbm90IHNldAo+IENPTkZJR19CQUNLTElH
SFRfQ0xBU1NfREVWSUNFPXkKPiBDT05GSUdfQkFDS0xJR0hUX0dFTkVSSUM9eQo+ICMgQ09ORklH
X0JBQ0tMSUdIVF9MTTM1MzMgaXMgbm90IHNldAo+IENPTkZJR19CQUNLTElHSFRfQ0FSSUxMT19S
QU5DSD15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX1BXTSBpcyBub3Qgc2V0Cj4gQ09ORklHX0JBQ0tM
SUdIVF9BUFBMRT15Cj4gQ09ORklHX0JBQ0tMSUdIVF9TQUhBUkE9eQo+IENPTkZJR19CQUNLTElH
SFRfV004MzFYPXkKPiBDT05GSUdfQkFDS0xJR0hUX0FEUDg4NjA9eQo+ICMgQ09ORklHX0JBQ0tM
SUdIVF9BRFA4ODcwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19CQUNLTElHSFRfODhQTTg2MFggaXMg
bm90IHNldAo+ICMgQ09ORklHX0JBQ0tMSUdIVF9QQ0Y1MDYzMyBpcyBub3Qgc2V0Cj4gQ09ORklH
X0JBQ0tMSUdIVF9MTTM2MzBBPXkKPiBDT05GSUdfQkFDS0xJR0hUX0xNMzYzOT15Cj4gQ09ORklH
X0JBQ0tMSUdIVF9MUDg1NVg9eQo+IENPTkZJR19CQUNLTElHSFRfTFA4Nzg4PXkKPiBDT05GSUdf
QkFDS0xJR0hUX1BBTkRPUkE9eQo+ICMgQ09ORklHX0JBQ0tMSUdIVF9UUFM2NTIxNyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0JBQ0tMSUdIVF9BUzM3MTE9eQo+IENPTkZJR19CQUNLTElHSFRfR1BJTz15
Cj4gQ09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUD15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX0JENjEw
NyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTE9HTyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NPVU5EPXkK
PiBDT05GSUdfU09VTkRfT1NTX0NPUkU9eQo+IENPTkZJR19TT1VORF9PU1NfQ09SRV9QUkVDTEFJ
TT15Cj4gQ09ORklHX1NORD15Cj4gQ09ORklHX1NORF9USU1FUj15Cj4gQ09ORklHX1NORF9QQ009
eQo+IENPTkZJR19TTkRfSFdERVA9eQo+IENPTkZJR19TTkRfUkFXTUlEST15Cj4gQ09ORklHX1NO
RF9DT01QUkVTU19PRkZMT0FEPXkKPiBDT05GSUdfU05EX0pBQ0s9eQo+ICMgQ09ORklHX1NORF9T
RVFVRU5DRVIgaXMgbm90IHNldAo+IENPTkZJR19TTkRfT1NTRU1VTD15Cj4gQ09ORklHX1NORF9N
SVhFUl9PU1M9eQo+IENPTkZJR19TTkRfUENNX09TUz15Cj4gIyBDT05GSUdfU05EX1BDTV9PU1Nf
UExVR0lOUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9IUlRJTUVSPXkKPiBDT05GSUdfU05EX0RZ
TkFNSUNfTUlOT1JTPXkKPiBDT05GSUdfU05EX01BWF9DQVJEUz0zMgo+IENPTkZJR19TTkRfU1VQ
UE9SVF9PTERfQVBJPXkKPiAjIENPTkZJR19TTkRfVkVSQk9TRV9QUklOVEsgaXMgbm90IHNldAo+
IENPTkZJR19TTkRfREVCVUc9eQo+IENPTkZJR19TTkRfREVCVUdfVkVSQk9TRT15Cj4gQ09ORklH
X1NORF9WTUFTVEVSPXkKPiBDT05GSUdfU05EX0RNQV9TR0JVRj15Cj4gIyBDT05GSUdfU05EX1JB
V01JRElfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRfT1BMM19MSUJfU0VRIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19TTkRfT1BMNF9MSUJfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRf
U0JBV0VfU0VRIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TTkRfRU1VMTBLMV9TRVEgaXMgbm90IHNl
dAo+IENPTkZJR19TTkRfTVBVNDAxX1VBUlQ9eQo+IENPTkZJR19TTkRfT1BMM19MSUI9eQo+IENP
TkZJR19TTkRfVlhfTElCPXkKPiBDT05GSUdfU05EX0FDOTdfQ09ERUM9eQo+IENPTkZJR19TTkRf
RFJJVkVSUz15Cj4gIyBDT05GSUdfU05EX1BDU1AgaXMgbm90IHNldAo+IENPTkZJR19TTkRfRFVN
TVk9eQo+IENPTkZJR19TTkRfQUxPT1A9eQo+IENPTkZJR19TTkRfTVRQQVY9eQo+ICMgQ09ORklH
X1NORF9NVFM2NCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9TRVJJQUxfVTE2NTUwPXkKPiBDT05G
SUdfU05EX01QVTQwMT15Cj4gQ09ORklHX1NORF9QT1JUTUFOMlg0PXkKPiAjIENPTkZJR19TTkRf
QUM5N19QT1dFUl9TQVZFIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX1BDST15Cj4gQ09ORklHX1NO
RF9BRDE4ODk9eQo+IENPTkZJR19TTkRfQUxTMzAwPXkKPiBDT05GSUdfU05EX0FMSTU0NTE9eQo+
ICMgQ09ORklHX1NORF9BU0lIUEkgaXMgbm90IHNldAo+IENPTkZJR19TTkRfQVRJSVhQPXkKPiBD
T05GSUdfU05EX0FUSUlYUF9NT0RFTT15Cj4gIyBDT05GSUdfU05EX0FVODgxMCBpcyBub3Qgc2V0
Cj4gQ09ORklHX1NORF9BVTg4MjA9eQo+IENPTkZJR19TTkRfQVU4ODMwPXkKPiBDT05GSUdfU05E
X0FXMj15Cj4gIyBDT05GSUdfU05EX0FaVDMzMjggaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9C
VDg3WCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9DQTAxMDY9eQo+ICMgQ09ORklHX1NORF9DTUlQ
Q0kgaXMgbm90IHNldAo+IENPTkZJR19TTkRfT1hZR0VOX0xJQj15Cj4gIyBDT05GSUdfU05EX09Y
WUdFTiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU05EX0NTNDI4MSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1NORF9DUzQ2WFg9eQo+IENPTkZJR19TTkRfQ1M0NlhYX05FV19EU1A9eQo+IENPTkZJR19TTkRf
Q1M1NTM1QVVESU89eQo+ICMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
U05EX0RBUkxBMjAgaXMgbm90IHNldAo+IENPTkZJR19TTkRfR0lOQTIwPXkKPiBDT05GSUdfU05E
X0xBWUxBMjA9eQo+ICMgQ09ORklHX1NORF9EQVJMQTI0IGlzIG5vdCBzZXQKPiBDT05GSUdfU05E
X0dJTkEyND15Cj4gQ09ORklHX1NORF9MQVlMQTI0PXkKPiBDT05GSUdfU05EX01PTkE9eQo+IENP
TkZJR19TTkRfTUlBPXkKPiAjIENPTkZJR19TTkRfRUNITzNHIGlzIG5vdCBzZXQKPiBDT05GSUdf
U05EX0lORElHTz15Cj4gQ09ORklHX1NORF9JTkRJR09JTz15Cj4gIyBDT05GSUdfU05EX0lORElH
T0RKIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX0lORElHT0lPWD15Cj4gIyBDT05GSUdfU05EX0lO
RElHT0RKWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9FTVUxMEsxPXkKPiBDT05GSUdfU05EX0VN
VTEwSzFYPXkKPiAjIENPTkZJR19TTkRfRU5TMTM3MCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9F
TlMxMzcxPXkKPiBDT05GSUdfU05EX0VTMTkzOD15Cj4gQ09ORklHX1NORF9FUzE5Njg9eQo+IENP
TkZJR19TTkRfRVMxOTY4X0lOUFVUPXkKPiBDT05GSUdfU05EX0ZNODAxPXkKPiAjIENPTkZJR19T
TkRfSERBX0lOVEVMIGlzIG5vdCBzZXQKPiBDT05GSUdfU05EX0hEU1A9eQo+IAo+ICMKPiAjIERv
bid0IGZvcmdldCB0byBhZGQgYnVpbHQtaW4gZmlybXdhcmVzIGZvciBIRFNQIGRyaXZlcgo+ICMK
PiAjIENPTkZJR19TTkRfSERTUE0gaXMgbm90IHNldAo+IENPTkZJR19TTkRfSUNFMTcxMj15Cj4g
Q09ORklHX1NORF9JQ0UxNzI0PXkKPiBDT05GSUdfU05EX0lOVEVMOFgwPXkKPiBDT05GSUdfU05E
X0lOVEVMOFgwTT15Cj4gQ09ORklHX1NORF9LT1JHMTIxMj15Cj4gQ09ORklHX1NORF9MT0xBPXkK
PiBDT05GSUdfU05EX0xYNjQ2NEVTPXkKPiBDT05GSUdfU05EX01BRVNUUk8zPXkKPiAjIENPTkZJ
R19TTkRfTUFFU1RSTzNfSU5QVVQgaXMgbm90IHNldAo+IENPTkZJR19TTkRfTUlYQVJUPXkKPiBD
T05GSUdfU05EX05NMjU2PXkKPiAjIENPTkZJR19TTkRfUENYSFIgaXMgbm90IHNldAo+IENPTkZJ
R19TTkRfUklQVElERT15Cj4gIyBDT05GSUdfU05EX1JNRTMyIGlzIG5vdCBzZXQKPiBDT05GSUdf
U05EX1JNRTk2PXkKPiBDT05GSUdfU05EX1JNRTk2NTI9eQo+IENPTkZJR19TTkRfU09OSUNWSUJF
Uz15Cj4gIyBDT05GSUdfU05EX1RSSURFTlQgaXMgbm90IHNldAo+IENPTkZJR19TTkRfVklBODJY
WD15Cj4gIyBDT05GSUdfU05EX1ZJQTgyWFhfTU9ERU0gaXMgbm90IHNldAo+IENPTkZJR19TTkRf
VklSVFVPU089eQo+IENPTkZJR19TTkRfVlgyMjI9eQo+IENPTkZJR19TTkRfWU1GUENJPXkKPiBD
T05GSUdfU05EX1NPQz15Cj4gIyBDT05GSUdfU05EX1NPQ19BREkgaXMgbm90IHNldAo+ICMgQ09O
RklHX1NORF9BVE1FTF9TT0MgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9CQ00yODM1X1NPQ19J
MlMgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9FUDkzWFhfU09DIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19TTkRfSU1YX1NPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU05EX0tJUktXT09EX1NPQyBp
cyBub3Qgc2V0Cj4gQ09ORklHX1NORF9TT0NfSTJDX0FORF9TUEk9eQo+IENPTkZJR19TTkRfU09D
X0FMTF9DT0RFQ1M9eQo+IENPTkZJR19TTkRfU09DXzg4UE04NjBYPXkKPiBDT05GSUdfU05EX1NP
Q19BUklaT05BPXkKPiBDT05GSUdfU05EX1NPQ19XTV9IVUJTPXkKPiBDT05GSUdfU05EX1NPQ19X
TV9BRFNQPXkKPiBDT05GSUdfU05EX1NPQ19BRDE5M1g9eQo+IENPTkZJR19TTkRfU09DX0FENzMz
MTE9eQo+IENPTkZJR19TTkRfU09DX0FEQVUxNzAxPXkKPiBDT05GSUdfU05EX1NPQ19BREFVMTM3
Mz15Cj4gQ09ORklHX1NORF9TT0NfQURBVjgwWD15Cj4gQ09ORklHX1NORF9TT0NfQURTMTE3WD15
Cj4gQ09ORklHX1NORF9TT0NfQUs0NTM1PXkKPiBDT05GSUdfU05EX1NPQ19BSzQ2NDE9eQo+IENP
TkZJR19TTkRfU09DX0FLNDY0Mj15Cj4gQ09ORklHX1NORF9TT0NfQUs0NjcxPXkKPiBDT05GSUdf
U05EX1NPQ19BSzUzODY9eQo+IENPTkZJR19TTkRfU09DX0FMQzU2MjM9eQo+IENPTkZJR19TTkRf
U09DX0FMQzU2MzI9eQo+IENPTkZJR19TTkRfU09DX0NTNDJMNTE9eQo+IENPTkZJR19TTkRfU09D
X0NTNDJMNTI9eQo+IENPTkZJR19TTkRfU09DX0NTNDJMNzM9eQo+IENPTkZJR19TTkRfU09DX0NT
NDI3MD15Cj4gQ09ORklHX1NORF9TT0NfQ1M0MjcxPXkKPiBDT05GSUdfU05EX1NPQ19DWDIwNDQy
PXkKPiBDT05GSUdfU05EX1NPQ19KWjQ3NDBfQ09ERUM9eQo+IENPTkZJR19TTkRfU09DX0wzPXkK
PiBDT05GSUdfU05EX1NPQ19EQTcyMTA9eQo+IENPTkZJR19TTkRfU09DX0RBNzIxMz15Cj4gQ09O
RklHX1NORF9TT0NfREE3MzJYPXkKPiBDT05GSUdfU05EX1NPQ19EQTkwNTU9eQo+IENPTkZJR19T
TkRfU09DX0JUX1NDTz15Cj4gQ09ORklHX1NORF9TT0NfSVNBQkVMTEU9eQo+IENPTkZJR19TTkRf
U09DX0xNNDk0NTM9eQo+IENPTkZJR19TTkRfU09DX01BWDk4MDg4PXkKPiBDT05GSUdfU05EX1NP
Q19NQVg5ODA5MD15Cj4gQ09ORklHX1NORF9TT0NfTUFYOTgwOTU9eQo+IENPTkZJR19TTkRfU09D
X01BWDk4NTA9eQo+IENPTkZJR19TTkRfU09DX0hETUlfQ09ERUM9eQo+IENPTkZJR19TTkRfU09D
X1BDTTE2ODE9eQo+IENPTkZJR19TTkRfU09DX1BDTTMwMDg9eQo+IENPTkZJR19TTkRfU09DX1JU
NTYzMT15Cj4gQ09ORklHX1NORF9TT0NfUlQ1NjQwPXkKPiBDT05GSUdfU05EX1NPQ19TR1RMNTAw
MD15Cj4gQ09ORklHX1NORF9TT0NfU0lHTUFEU1A9eQo+IENPTkZJR19TTkRfU09DX1NQRElGPXkK
PiBDT05GSUdfU05EX1NPQ19TU00yNTE4PXkKPiBDT05GSUdfU05EX1NPQ19TU00yNjAyPXkKPiBD
T05GSUdfU05EX1NPQ19TVEEzMlg9eQo+IENPTkZJR19TTkRfU09DX1NUQTUyOT15Cj4gQ09ORklH
X1NORF9TT0NfVEFTNTA4Nj15Cj4gQ09ORklHX1NORF9TT0NfVExWMzIwQUlDMjM9eQo+IENPTkZJ
R19TTkRfU09DX1RMVjMyMEFJQzMyWDQ9eQo+IENPTkZJR19TTkRfU09DX1RMVjMyMEFJQzNYPXkK
PiBDT05GSUdfU05EX1NPQ19UTFYzMjBEQUMzMz15Cj4gQ09ORklHX1NORF9TT0NfVFdMNDAzMD15
Cj4gQ09ORklHX1NORF9TT0NfVFdMNjA0MD15Cj4gQ09ORklHX1NORF9TT0NfVURBMTM0WD15Cj4g
Q09ORklHX1NORF9TT0NfVURBMTM4MD15Cj4gQ09ORklHX1NORF9TT0NfV0wxMjczPXkKPiBDT05G
SUdfU05EX1NPQ19XTTEyNTBfRVYxPXkKPiBDT05GSUdfU05EX1NPQ19XTTIwMDA9eQo+IENPTkZJ
R19TTkRfU09DX1dNMjIwMD15Cj4gQ09ORklHX1NORF9TT0NfV001MTAwPXkKPiBDT05GSUdfU05E
X1NPQ19XTTgzNTA9eQo+IENPTkZJR19TTkRfU09DX1dNODQwMD15Cj4gQ09ORklHX1NORF9TT0Nf
V004NTEwPXkKPiBDT05GSUdfU05EX1NPQ19XTTg1MjM9eQo+IENPTkZJR19TTkRfU09DX1dNODU4
MD15Cj4gQ09ORklHX1NORF9TT0NfV004NzExPXkKPiBDT05GSUdfU05EX1NPQ19XTTg3Mjc9eQo+
IENPTkZJR19TTkRfU09DX1dNODcyOD15Cj4gQ09ORklHX1NORF9TT0NfV004NzMxPXkKPiBDT05G
SUdfU05EX1NPQ19XTTg3Mzc9eQo+IENPTkZJR19TTkRfU09DX1dNODc0MT15Cj4gQ09ORklHX1NO
RF9TT0NfV004NzUwPXkKPiBDT05GSUdfU05EX1NPQ19XTTg3NTM9eQo+IENPTkZJR19TTkRfU09D
X1dNODc3Nj15Cj4gQ09ORklHX1NORF9TT0NfV004NzgyPXkKPiBDT05GSUdfU05EX1NPQ19XTTg4
MDQ9eQo+IENPTkZJR19TTkRfU09DX1dNODkwMD15Cj4gQ09ORklHX1NORF9TT0NfV004OTAzPXkK
PiBDT05GSUdfU05EX1NPQ19XTTg5MDQ9eQo+IENPTkZJR19TTkRfU09DX1dNODk0MD15Cj4gQ09O
RklHX1NORF9TT0NfV004OTU1PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5NjA9eQo+IENPTkZJR19T
TkRfU09DX1dNODk2MT15Cj4gQ09ORklHX1NORF9TT0NfV004OTYyPXkKPiBDT05GSUdfU05EX1NP
Q19XTTg5NzE9eQo+IENPTkZJR19TTkRfU09DX1dNODk3ND15Cj4gQ09ORklHX1NORF9TT0NfV004
OTc4PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5ODM9eQo+IENPTkZJR19TTkRfU09DX1dNODk4NT15
Cj4gQ09ORklHX1NORF9TT0NfV004OTg4PXkKPiBDT05GSUdfU05EX1NPQ19XTTg5OTA9eQo+IENP
TkZJR19TTkRfU09DX1dNODk5MT15Cj4gQ09ORklHX1NORF9TT0NfV004OTkzPXkKPiBDT05GSUdf
U05EX1NPQ19XTTg5OTU9eQo+IENPTkZJR19TTkRfU09DX1dNODk5Nj15Cj4gQ09ORklHX1NORF9T
T0NfV004OTk3PXkKPiBDT05GSUdfU05EX1NPQ19XTTkwODE9eQo+IENPTkZJR19TTkRfU09DX1dN
OTA5MD15Cj4gQ09ORklHX1NORF9TT0NfTE00ODU3PXkKPiBDT05GSUdfU05EX1NPQ19NQVg5NzY4
PXkKPiBDT05GSUdfU05EX1NPQ19NQVg5ODc3PXkKPiBDT05GSUdfU05EX1NPQ19NQzEzNzgzPXkK
PiBDT05GSUdfU05EX1NPQ19NTDI2MTI0PXkKPiBDT05GSUdfU05EX1NPQ19UUEE2MTMwQTI9eQo+
IENPTkZJR19TTkRfU0lNUExFX0NBUkQ9eQo+IENPTkZJR19TT1VORF9QUklNRT15Cj4gQ09ORklH
X0FDOTdfQlVTPXkKPiAKPiAjCj4gIyBISUQgc3VwcG9ydAo+ICMKPiBDT05GSUdfSElEPXkKPiAj
IENPTkZJR19ISURfQkFUVEVSWV9TVFJFTkdUSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSElEUkFX
IGlzIG5vdCBzZXQKPiBDT05GSUdfVUhJRD15Cj4gIyBDT05GSUdfSElEX0dFTkVSSUMgaXMgbm90
IHNldAo+IAo+ICMKPiAjIFNwZWNpYWwgSElEIGRyaXZlcnMKPiAjCj4gQ09ORklHX0hJRF9BNFRF
Q0g9eQo+IENPTkZJR19ISURfQUNSVVg9eQo+ICMgQ09ORklHX0hJRF9BQ1JVWF9GRiBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfSElEX0FQUExFIGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX0FVUkVBTD15
Cj4gQ09ORklHX0hJRF9CRUxLSU49eQo+IENPTkZJR19ISURfQ0hFUlJZPXkKPiBDT05GSUdfSElE
X0NISUNPTlk9eQo+ICMgQ09ORklHX0hJRF9QUk9ESUtFWVMgaXMgbm90IHNldAo+IENPTkZJR19I
SURfQ1lQUkVTUz15Cj4gIyBDT05GSUdfSElEX0RSQUdPTlJJU0UgaXMgbm90IHNldAo+IENPTkZJ
R19ISURfRU1TX0ZGPXkKPiBDT05GSUdfSElEX0VMRUNPTT15Cj4gIyBDT05GSUdfSElEX0VaS0VZ
IGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX0tFWVRPVUNIPXkKPiAjIENPTkZJR19ISURfS1lFIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19ISURfVUNMT0dJQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hJRF9X
QUxUT1A9eQo+IENPTkZJR19ISURfR1lSQVRJT049eQo+IENPTkZJR19ISURfSUNBREU9eQo+IENP
TkZJR19ISURfVFdJTkhBTj15Cj4gQ09ORklHX0hJRF9LRU5TSU5HVE9OPXkKPiBDT05GSUdfSElE
X0xDUE9XRVI9eQo+ICMgQ09ORklHX0hJRF9MRU5PVk9fVFBLQkQgaXMgbm90IHNldAo+IENPTkZJ
R19ISURfTE9HSVRFQ0g9eQo+IENPTkZJR19MT0dJVEVDSF9GRj15Cj4gQ09ORklHX0xPR0lSVU1C
TEVQQUQyX0ZGPXkKPiBDT05GSUdfTE9HSUc5NDBfRkY9eQo+ICMgQ09ORklHX0xPR0lXSEVFTFNf
RkYgaXMgbm90IHNldAo+IENPTkZJR19ISURfTUFHSUNNT1VTRT15Cj4gIyBDT05GSUdfSElEX01J
Q1JPU09GVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSElEX01PTlRFUkVZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19ISURfTVVMVElUT1VDSCBpcyBub3Qgc2V0Cj4gQ09ORklHX0hJRF9PUlRFSz15Cj4g
IyBDT05GSUdfSElEX1BBTlRIRVJMT1JEIGlzIG5vdCBzZXQKPiBDT05GSUdfSElEX1BFVEFMWU5Y
PXkKPiBDT05GSUdfSElEX1BJQ09MQ0Q9eQo+ICMgQ09ORklHX0hJRF9QSUNPTENEX0ZCIGlzIG5v
dCBzZXQKPiBDT05GSUdfSElEX1BJQ09MQ0RfQkFDS0xJR0hUPXkKPiAjIENPTkZJR19ISURfUElD
T0xDRF9MQ0QgaXMgbm90IHNldAo+IENPTkZJR19ISURfUElDT0xDRF9MRURTPXkKPiBDT05GSUdf
SElEX1BJQ09MQ0RfQ0lSPXkKPiAjIENPTkZJR19ISURfUFJJTUFYIGlzIG5vdCBzZXQKPiBDT05G
SUdfSElEX1NBSVRFSz15Cj4gQ09ORklHX0hJRF9TQU1TVU5HPXkKPiBDT05GSUdfSElEX1NQRUVE
TElOSz15Cj4gQ09ORklHX0hJRF9TVEVFTFNFUklFUz15Cj4gQ09ORklHX0hJRF9TVU5QTFVTPXkK
PiAjIENPTkZJR19ISURfR1JFRU5BU0lBIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ISURfSFlQRVJW
X01PVVNFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ISURfU01BUlRKT1lQTFVTIGlzIG5vdCBzZXQK
PiBDT05GSUdfSElEX1RJVk89eQo+IENPTkZJR19ISURfVE9QU0VFRD15Cj4gQ09ORklHX0hJRF9U
SElOR009eQo+IENPTkZJR19ISURfVEhSVVNUTUFTVEVSPXkKPiAjIENPTkZJR19USFJVU1RNQVNU
RVJfRkYgaXMgbm90IHNldAo+IENPTkZJR19ISURfV0FDT009eQo+IENPTkZJR19ISURfV0lJTU9U
RT15Cj4gQ09ORklHX0hJRF9YSU5NTz15Cj4gQ09ORklHX0hJRF9aRVJPUExVUz15Cj4gQ09ORklH
X1pFUk9QTFVTX0ZGPXkKPiBDT05GSUdfSElEX1pZREFDUk9OPXkKPiAjIENPTkZJR19ISURfU0VO
U09SX0hVQiBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgSTJDIEhJRCBzdXBwb3J0Cj4gIwo+ICMgQ09O
RklHX0kyQ19ISUQgaXMgbm90IHNldAo+IENPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5ESUFOPXkK
PiAjIENPTkZJR19VU0JfU1VQUE9SVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1VXQj15Cj4gQ09ORklH
X1VXQl9XSENJPXkKPiAjIENPTkZJR19NTUMgaXMgbm90IHNldAo+ICMgQ09ORklHX01FTVNUSUNL
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkVXX0xFRFM9eQo+IENPTkZJR19MRURTX0NMQVNTPXkKPiAK
PiAjCj4gIyBMRUQgZHJpdmVycwo+ICMKPiBDT05GSUdfTEVEU184OFBNODYwWD15Cj4gIyBDT05G
SUdfTEVEU19MTTM1MzAgaXMgbm90IHNldAo+ICMgQ09ORklHX0xFRFNfTE0zNTMzIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTEVEU19MTTM2NDI9eQo+ICMgQ09ORklHX0xFRFNfUENBOTUzMiBpcyBub3Qg
c2V0Cj4gQ09ORklHX0xFRFNfR1BJTz15Cj4gQ09ORklHX0xFRFNfTFAzOTQ0PXkKPiBDT05GSUdf
TEVEU19MUDU1WFhfQ09NTU9OPXkKPiBDT05GSUdfTEVEU19MUDU1MjE9eQo+IENPTkZJR19MRURT
X0xQNTUyMz15Cj4gQ09ORklHX0xFRFNfTFA1NTYyPXkKPiBDT05GSUdfTEVEU19MUDg1MDE9eQo+
IENPTkZJR19MRURTX0xQODc4OD15Cj4gQ09ORklHX0xFRFNfQ0xFVk9fTUFJTD15Cj4gQ09ORklH
X0xFRFNfUENBOTU1WD15Cj4gIyBDT05GSUdfTEVEU19QQ0E5NjNYIGlzIG5vdCBzZXQKPiBDT05G
SUdfTEVEU19QQ0E5Njg1PXkKPiBDT05GSUdfTEVEU19XTTgzMVhfU1RBVFVTPXkKPiBDT05GSUdf
TEVEU19XTTgzNTA9eQo+IENPTkZJR19MRURTX1BXTT15Cj4gQ09ORklHX0xFRFNfUkVHVUxBVE9S
PXkKPiBDT05GSUdfTEVEU19CRDI4MDI9eQo+IENPTkZJR19MRURTX0lOVEVMX1NTNDIwMD15Cj4g
Q09ORklHX0xFRFNfTFQzNTkzPXkKPiAjIENPTkZJR19MRURTX01DMTM3ODMgaXMgbm90IHNldAo+
IENPTkZJR19MRURTX1RDQTY1MDc9eQo+IENPTkZJR19MRURTX0xNMzU1eD15Cj4gIyBDT05GSUdf
TEVEU19PVDIwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0xFRFNfQkxJTktNPXkKPiAKPiAjCj4gIyBM
RUQgVHJpZ2dlcnMKPiAjCj4gQ09ORklHX0xFRFNfVFJJR0dFUlM9eQo+IENPTkZJR19MRURTX1RS
SUdHRVJfVElNRVI9eQo+IENPTkZJR19MRURTX1RSSUdHRVJfT05FU0hPVD15Cj4gQ09ORklHX0xF
RFNfVFJJR0dFUl9IRUFSVEJFQVQ9eQo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9CQUNLTElHSFQg
aXMgbm90IHNldAo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9DUFUgaXMgbm90IHNldAo+IENPTkZJ
R19MRURTX1RSSUdHRVJfR1BJTz15Cj4gQ09ORklHX0xFRFNfVFJJR0dFUl9ERUZBVUxUX09OPXkK
PiAKPiAjCj4gIyBpcHRhYmxlcyB0cmlnZ2VyIGlzIHVuZGVyIE5ldGZpbHRlciBjb25maWcgKExF
RCB0YXJnZXQpCj4gIwo+ICMgQ09ORklHX0xFRFNfVFJJR0dFUl9UUkFOU0lFTlQgaXMgbm90IHNl
dAo+IENPTkZJR19MRURTX1RSSUdHRVJfQ0FNRVJBPXkKPiBDT05GSUdfQUNDRVNTSUJJTElUWT15
Cj4gQ09ORklHX0lORklOSUJBTkQ9eQo+ICMgQ09ORklHX0lORklOSUJBTkRfVVNFUl9NQUQgaXMg
bm90IHNldAo+IENPTkZJR19JTkZJTklCQU5EX1VTRVJfQUNDRVNTPXkKPiBDT05GSUdfSU5GSU5J
QkFORF9VU0VSX01FTT15Cj4gQ09ORklHX0lORklOSUJBTkRfQUREUl9UUkFOUz15Cj4gIyBDT05G
SUdfSU5GSU5JQkFORF9NVEhDQSBpcyBub3Qgc2V0Cj4gQ09ORklHX0lORklOSUJBTkRfSVBBVEg9
eQo+IENPTkZJR19JTkZJTklCQU5EX1FJQj15Cj4gQ09ORklHX0lORklOSUJBTkRfQU1TTzExMDA9
eQo+IENPTkZJR19JTkZJTklCQU5EX0FNU08xMTAwX0RFQlVHPXkKPiBDT05GSUdfSU5GSU5JQkFO
RF9ORVM9eQo+ICMgQ09ORklHX0lORklOSUJBTkRfTkVTX0RFQlVHIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19FREFDIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0xJQj15Cj4gQ09ORklHX1JUQ19DTEFT
Uz15Cj4gIyBDT05GSUdfUlRDX0hDVE9TWVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19TWVNU
T0hDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfREVCVUcgaXMgbm90IHNldAo+IAo+ICMKPiAj
IFJUQyBpbnRlcmZhY2VzCj4gIwo+ICMgQ09ORklHX1JUQ19JTlRGX1NZU0ZTIGlzIG5vdCBzZXQK
PiBDT05GSUdfUlRDX0lOVEZfREVWPXkKPiAjIENPTkZJR19SVENfSU5URl9ERVZfVUlFX0VNVUwg
aXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX1RFU1Q9eQo+IAo+ICMKPiAjIEkyQyBSVEMgZHJp
dmVycwo+ICMKPiBDT05GSUdfUlRDX0RSVl84OFBNODYwWD15Cj4gIyBDT05GSUdfUlRDX0RSVl84
OFBNODBYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfRFJWX0RTMTMwNyBpcyBub3Qgc2V0Cj4g
Q09ORklHX1JUQ19EUlZfRFMxMzc0PXkKPiBDT05GSUdfUlRDX0RSVl9EUzE2NzI9eQo+ICMgQ09O
RklHX1JUQ19EUlZfRFMzMjMyIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9MUDg3ODg9eQo+
IENPTkZJR19SVENfRFJWX01BWDY5MDA9eQo+IENPTkZJR19SVENfRFJWX01BWDg5OTg9eQo+IENP
TkZJR19SVENfRFJWX1JTNUMzNzI9eQo+IENPTkZJR19SVENfRFJWX0lTTDEyMDg9eQo+IENPTkZJ
R19SVENfRFJWX0lTTDEyMDIyPXkKPiAjIENPTkZJR19SVENfRFJWX0lTTDEyMDU3IGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19SVENfRFJWX1gxMjA1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfRFJW
X1BBTE1BUyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUlRDX0RSVl9QQ0YyMTI3IGlzIG5vdCBzZXQK
PiBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzPXkKPiBDT05GSUdfUlRDX0RSVl9QQ0Y4NTYzPXkKPiAj
IENPTkZJR19SVENfRFJWX1BDRjg1ODMgaXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX000MVQ4
MD15Cj4gIyBDT05GSUdfUlRDX0RSVl9NNDFUODBfV0RUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19S
VENfRFJWX0JRMzJLIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9UV0w0MDMwPXkKPiBDT05G
SUdfUlRDX0RSVl9UUFM2NTg2WD15Cj4gQ09ORklHX1JUQ19EUlZfUzM1MzkwQT15Cj4gIyBDT05G
SUdfUlRDX0RSVl9GTTMxMzAgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19EUlZfUlg4NTgxIGlz
IG5vdCBzZXQKPiBDT05GSUdfUlRDX0RSVl9SWDgwMjU9eQo+IENPTkZJR19SVENfRFJWX0VNMzAy
Nz15Cj4gIyBDT05GSUdfUlRDX0RSVl9SVjMwMjlDMiBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU1BJ
IFJUQyBkcml2ZXJzCj4gIwo+IAo+ICMKPiAjIFBsYXRmb3JtIFJUQyBkcml2ZXJzCj4gIwo+IENP
TkZJR19SVENfRFJWX0NNT1M9eQo+IENPTkZJR19SVENfRFJWX0RTMTI4Nj15Cj4gIyBDT05GSUdf
UlRDX0RSVl9EUzE1MTEgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19EUlZfRFMxNTUzIGlzIG5v
dCBzZXQKPiBDT05GSUdfUlRDX0RSVl9EUzE3NDI9eQo+IENPTkZJR19SVENfRFJWX0RBOTA1NT15
Cj4gQ09ORklHX1JUQ19EUlZfU1RLMTdUQTg9eQo+IENPTkZJR19SVENfRFJWX000OFQ4Nj15Cj4g
Q09ORklHX1JUQ19EUlZfTTQ4VDM1PXkKPiBDT05GSUdfUlRDX0RSVl9NNDhUNTk9eQo+IENPTkZJ
R19SVENfRFJWX01TTTYyNDI9eQo+IENPTkZJR19SVENfRFJWX0JRNDgwMj15Cj4gQ09ORklHX1JU
Q19EUlZfUlA1QzAxPXkKPiAjIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKPiBDT05G
SUdfUlRDX0RSVl9EUzI0MDQ9eQo+IENPTkZJR19SVENfRFJWX1dNODMxWD15Cj4gQ09ORklHX1JU
Q19EUlZfV004MzUwPXkKPiBDT05GSUdfUlRDX0RSVl9QQ0Y1MDYzMz15Cj4gCj4gIwo+ICMgb24t
Q1BVIFJUQyBkcml2ZXJzCj4gIwo+IENPTkZJR19SVENfRFJWX01DMTNYWFg9eQo+IENPTkZJR19S
VENfRFJWX01PWEFSVD15Cj4gCj4gIwo+ICMgSElEIFNlbnNvciBSVEMgZHJpdmVycwo+ICMKPiBD
T05GSUdfRE1BREVWSUNFUz15Cj4gIyBDT05GSUdfRE1BREVWSUNFU19ERUJVRyBpcyBub3Qgc2V0
Cj4gCj4gIwo+ICMgRE1BIERldmljZXMKPiAjCj4gQ09ORklHX0lOVEVMX01JRF9ETUFDPXkKPiAj
IENPTkZJR19JTlRFTF9JT0FURE1BIGlzIG5vdCBzZXQKPiAjIENPTkZJR19EV19ETUFDX0NPUkUg
aXMgbm90IHNldAo+ICMgQ09ORklHX0RXX0RNQUMgaXMgbm90IHNldAo+ICMgQ09ORklHX0RXX0RN
QUNfUENJIGlzIG5vdCBzZXQKPiBDT05GSUdfVElNQl9ETUE9eQo+IENPTkZJR19QQ0hfRE1BPXkK
PiBDT05GSUdfRE1BX0VOR0lORT15Cj4gQ09ORklHX0RNQV9BQ1BJPXkKPiAKPiAjCj4gIyBETUEg
Q2xpZW50cwo+ICMKPiAjIENPTkZJR19BU1lOQ19UWF9ETUEgaXMgbm90IHNldAo+IENPTkZJR19E
TUFURVNUPXkKPiAjIENPTkZJR19BVVhESVNQTEFZIGlzIG5vdCBzZXQKPiBDT05GSUdfVUlPPXkK
PiBDT05GSUdfVUlPX0NJRj15Cj4gQ09ORklHX1VJT19QRFJWX0dFTklSUT15Cj4gQ09ORklHX1VJ
T19ETUVNX0dFTklSUT15Cj4gQ09ORklHX1VJT19BRUM9eQo+IENPTkZJR19VSU9fU0VSQ09TMz15
Cj4gIyBDT05GSUdfVUlPX1BDSV9HRU5FUklDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19VSU9fTkVU
WCBpcyBub3Qgc2V0Cj4gQ09ORklHX1VJT19NRjYyND15Cj4gIyBDT05GSUdfVklSVF9EUklWRVJT
IGlzIG5vdCBzZXQKPiBDT05GSUdfVklSVElPPXkKPiAKPiAjCj4gIyBWaXJ0aW8gZHJpdmVycwo+
ICMKPiAjIENPTkZJR19WSVJUSU9fUENJIGlzIG5vdCBzZXQKPiBDT05GSUdfVklSVElPX0JBTExP
T049eQo+ICMgQ09ORklHX1ZJUlRJT19NTUlPIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBNaWNyb3Nv
ZnQgSHlwZXItViBndWVzdCBzdXBwb3J0Cj4gIwo+IENPTkZJR19IWVBFUlY9eQo+IENPTkZJR19I
WVBFUlZfVVRJTFM9eQo+IENPTkZJR19IWVBFUlZfQkFMTE9PTj15Cj4gCj4gIwo+ICMgWGVuIGRy
aXZlciBzdXBwb3J0Cj4gIwo+ICMgQ09ORklHX1hFTl9CQUxMT09OIGlzIG5vdCBzZXQKPiBDT05G
SUdfWEVOX0RFVl9FVlRDSE49eQo+ICMgQ09ORklHX1hFTl9CQUNLRU5EIGlzIG5vdCBzZXQKPiBD
T05GSUdfWEVORlM9eQo+IENPTkZJR19YRU5fQ09NUEFUX1hFTkZTPXkKPiBDT05GSUdfWEVOX1NZ
U19IWVBFUlZJU09SPXkKPiBDT05GSUdfWEVOX1hFTkJVU19GUk9OVEVORD15Cj4gQ09ORklHX1hF
Tl9HTlRERVY9eQo+IENPTkZJR19YRU5fR1JBTlRfREVWX0FMTE9DPXkKPiBDT05GSUdfU1dJT1RM
Ql9YRU49eQo+IENPTkZJR19YRU5fVE1FTT15Cj4gQ09ORklHX1hFTl9QUklWQ01EPXkKPiAjIENP
TkZJR19YRU5fQUNQSV9QUk9DRVNTT1IgaXMgbm90IHNldAo+IENPTkZJR19YRU5fSEFWRV9QVk1N
VT15Cj4gQ09ORklHX1NUQUdJTkc9eQo+IENPTkZJR19TTElDT1NTPXkKPiBDT05GSUdfRUNITz15
Cj4gQ09ORklHX1BBTkVMPXkKPiBDT05GSUdfUEFORUxfUEFSUE9SVD0wCj4gQ09ORklHX1BBTkVM
X1BST0ZJTEU9NQo+IENPTkZJR19QQU5FTF9DSEFOR0VfTUVTU0FHRT15Cj4gQ09ORklHX1BBTkVM
X0JPT1RfTUVTU0FHRT0iIgo+ICMgQ09ORklHX0RYX1NFUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZC
X1NNN1hYPXkKPiBDT05GSUdfQ1JZU1RBTEhEPXkKPiBDT05GSUdfRkJfWEdJPXkKPiBDT05GSUdf
QUNQSV9RVUlDS1NUQVJUPXkKPiBDT05GSUdfRlQxMDAwPXkKPiAKPiAjCj4gIyBTcGVha3VwIGNv
bnNvbGUgc3BlZWNoCj4gIwo+IENPTkZJR19UT1VDSFNDUkVFTl9DTEVBUlBBRF9UTTEyMTc9eQo+
ICMgQ09ORklHX1RPVUNIU0NSRUVOX1NZTkFQVElDU19JMkNfUk1JNCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1NUQUdJTkdfTUVESUE9eQo+IENPTkZJR19EVkJfQ1hEMjA5OT15Cj4gIyBDT05GSUdfVklE
RU9fRFQzMTU1IGlzIG5vdCBzZXQKPiBDT05GSUdfVklERU9fVjRMMl9JTlRfREVWSUNFPXkKPiBD
T05GSUdfVklERU9fVENNODI1WD15Cj4gQ09ORklHX1VTQl9TTjlDMTAyPXkKPiBDT05GSUdfU09M
TzZYMTA9eQo+ICMgQ09ORklHX0xJUkNfU1RBR0lORyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQW5k
cm9pZAo+ICMKPiBDT05GSUdfQU5EUk9JRD15Cj4gQ09ORklHX0FORFJPSURfQklOREVSX0lQQz15
Cj4gIyBDT05GSUdfQVNITUVNIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BTkRST0lEX0xPR0dFUiBp
cyBub3Qgc2V0Cj4gQ09ORklHX0FORFJPSURfVElNRURfT1VUUFVUPXkKPiAjIENPTkZJR19BTkRS
T0lEX1RJTUVEX0dQSU8gaXMgbm90IHNldAo+IENPTkZJR19BTkRST0lEX0xPV19NRU1PUllfS0lM
TEVSPXkKPiAjIENPTkZJR19BTkRST0lEX0lOVEZfQUxBUk1fREVWIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19TWU5DIGlzIG5vdCBzZXQKPiBDT05GSUdfSU9OPXkKPiBDT05GSUdfSU9OX1RFU1Q9eQo+
IENPTkZJR19ER1JQPXkKPiBDT05GSUdfWElMTFlCVVM9eQo+ICMgQ09ORklHX1hJTExZQlVTX1BD
SUUgaXMgbm90IHNldAo+ICMgQ09ORklHX0RHTkMgaXMgbm90IHNldAo+IENPTkZJR19ER0FQPXkK
PiBDT05GSUdfWDg2X1BMQVRGT1JNX0RFVklDRVM9eQo+ICMgQ09ORklHX0FDRVJIREYgaXMgbm90
IHNldAo+ICMgQ09ORklHX0FTVVNfTEFQVE9QIGlzIG5vdCBzZXQKPiBDT05GSUdfREVMTF9MQVBU
T1A9eQo+IENPTkZJR19GVUpJVFNVX0xBUFRPUD15Cj4gQ09ORklHX0ZVSklUU1VfTEFQVE9QX0RF
QlVHPXkKPiAjIENPTkZJR19GVUpJVFNVX1RBQkxFVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSFBf
QUNDRUwgaXMgbm90IHNldAo+IENPTkZJR19QQU5BU09OSUNfTEFQVE9QPXkKPiBDT05GSUdfVEhJ
TktQQURfQUNQST15Cj4gQ09ORklHX1RISU5LUEFEX0FDUElfQUxTQV9TVVBQT1JUPXkKPiBDT05G
SUdfVEhJTktQQURfQUNQSV9ERUJVR0ZBQ0lMSVRJRVM9eQo+ICMgQ09ORklHX1RISU5LUEFEX0FD
UElfREVCVUcgaXMgbm90IHNldAo+ICMgQ09ORklHX1RISU5LUEFEX0FDUElfVU5TQUZFX0xFRFMg
aXMgbm90IHNldAo+IENPTkZJR19USElOS1BBRF9BQ1BJX1ZJREVPPXkKPiBDT05GSUdfVEhJTktQ
QURfQUNQSV9IT1RLRVlfUE9MTD15Cj4gIyBDT05GSUdfU0VOU09SU19IREFQUyBpcyBub3Qgc2V0
Cj4gQ09ORklHX0lOVEVMX01FTkxPVz15Cj4gIyBDT05GSUdfQUNQSV9XTUkgaXMgbm90IHNldAo+
ICMgQ09ORklHX1RPUFNUQVJfTEFQVE9QIGlzIG5vdCBzZXQKPiBDT05GSUdfVE9TSElCQV9CVF9S
RktJTEw9eQo+ICMgQ09ORklHX0FDUElfQ01QQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0lOVEVMX0lQ
Uz15Cj4gQ09ORklHX0lCTV9SVEw9eQo+ICMgQ09ORklHX1hPMTVfRUJPT0sgaXMgbm90IHNldAo+
ICMgQ09ORklHX1NBTVNVTkdfTEFQVE9QIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQU1TVU5HX1Ex
MCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FQUExFX0dNVVg9eQo+IENPTkZJR19JTlRFTF9SU1Q9eQo+
ICMgQ09ORklHX0lOVEVMX1NNQVJUQ09OTkVDVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1BWUEFOSUM9
eQo+IENPTkZJR19DSFJPTUVfUExBVEZPUk1TPXkKPiBDT05GSUdfQ0hST01FT1NfTEFQVE9QPXkK
PiAjIENPTkZJR19DSFJPTUVPU19QU1RPUkUgaXMgbm90IHNldAo+IAo+ICMKPiAjIEhhcmR3YXJl
IFNwaW5sb2NrIGRyaXZlcnMKPiAjCj4gQ09ORklHX0NMS0VWVF9JODI1Mz15Cj4gQ09ORklHX0k4
MjUzX0xPQ0s9eQo+IENPTkZJR19DTEtCTERfSTgyNTM9eQo+IENPTkZJR19NQUlMQk9YPXkKPiAj
IENPTkZJR19JT01NVV9TVVBQT1JUIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBSZW1vdGVwcm9jIGRy
aXZlcnMKPiAjCj4gQ09ORklHX1JFTU9URVBST0M9eQo+IENPTkZJR19TVEVfTU9ERU1fUlBST0M9
eQo+IAo+ICMKPiAjIFJwbXNnIGRyaXZlcnMKPiAjCj4gQ09ORklHX1BNX0RFVkZSRVE9eQo+IAo+
ICMKPiAjIERFVkZSRVEgR292ZXJub3JzCj4gIwo+IENPTkZJR19ERVZGUkVRX0dPVl9TSU1QTEVf
T05ERU1BTkQ9eQo+IENPTkZJR19ERVZGUkVRX0dPVl9QRVJGT1JNQU5DRT15Cj4gIyBDT05GSUdf
REVWRlJFUV9HT1ZfUE9XRVJTQVZFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERVZGUkVRX0dPVl9V
U0VSU1BBQ0UgaXMgbm90IHNldAo+IAo+ICMKPiAjIERFVkZSRVEgRHJpdmVycwo+ICMKPiBDT05G
SUdfRVhUQ09OPXkKPiAKPiAjCj4gIyBFeHRjb24gRGV2aWNlIERyaXZlcnMKPiAjCj4gQ09ORklH
X0VYVENPTl9HUElPPXkKPiBDT05GSUdfRVhUQ09OX01BWDc3NjkzPXkKPiBDT05GSUdfRVhUQ09O
X0FSSVpPTkE9eQo+ICMgQ09ORklHX0VYVENPTl9QQUxNQVMgaXMgbm90IHNldAo+ICMgQ09ORklH
X01FTU9SWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSUlPIGlzIG5vdCBzZXQKPiBDT05GSUdfTlRC
PXkKPiAjIENPTkZJR19WTUVfQlVTIGlzIG5vdCBzZXQKPiBDT05GSUdfUFdNPXkKPiBDT05GSUdf
UFdNX1NZU0ZTPXkKPiBDT05GSUdfUFdNX1JFTkVTQVNfVFBVPXkKPiBDT05GSUdfUFdNX1RXTD15
Cj4gQ09ORklHX1BXTV9UV0xfTEVEPXkKPiBDT05GSUdfSVBBQ0tfQlVTPXkKPiBDT05GSUdfQk9B
UkRfVFBDSTIwMD15Cj4gQ09ORklHX1NFUklBTF9JUE9DVEFMPXkKPiBDT05GSUdfUkVTRVRfQ09O
VFJPTExFUj15Cj4gQ09ORklHX0ZNQz15Cj4gQ09ORklHX0ZNQ19GQUtFREVWPXkKPiBDT05GSUdf
Rk1DX1RSSVZJQUw9eQo+ICMgQ09ORklHX0ZNQ19XUklURV9FRVBST00gaXMgbm90IHNldAo+ICMg
Q09ORklHX0ZNQ19DSEFSREVWIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBQSFkgU3Vic3lzdGVtCj4g
Iwo+ICMgQ09ORklHX0dFTkVSSUNfUEhZIGlzIG5vdCBzZXQKPiBDT05GSUdfUEhZX0VYWU5PU19N
SVBJX1ZJREVPPXkKPiAjIENPTkZJR19QT1dFUkNBUCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgRmly
bXdhcmUgRHJpdmVycwo+ICMKPiAjIENPTkZJR19FREQgaXMgbm90IHNldAo+IENPTkZJR19GSVJN
V0FSRV9NRU1NQVA9eQo+ICMgQ09ORklHX0RFTExfUkJVIGlzIG5vdCBzZXQKPiBDT05GSUdfRENE
QkFTPXkKPiBDT05GSUdfRE1JSUQ9eQo+ICMgQ09ORklHX0RNSV9TWVNGUyBpcyBub3Qgc2V0Cj4g
Q09ORklHX0RNSV9TQ0FOX01BQ0hJTkVfTk9OX0VGSV9GQUxMQkFDSz15Cj4gQ09ORklHX0lTQ1NJ
X0lCRlRfRklORD15Cj4gIyBDT05GSUdfR09PR0xFX0ZJUk1XQVJFIGlzIG5vdCBzZXQKPiAKPiAj
Cj4gIyBFRkkgKEV4dGVuc2libGUgRmlybXdhcmUgSW50ZXJmYWNlKSBTdXBwb3J0Cj4gIwo+IENP
TkZJR19FRklfVkFSUz15Cj4gQ09ORklHX0VGSV9SVU5USU1FX01BUD15Cj4gCj4gIwo+ICMgRmls
ZSBzeXN0ZW1zCj4gIwo+IENPTkZJR19EQ0FDSEVfV09SRF9BQ0NFU1M9eQo+IENPTkZJR19GU19Q
T1NJWF9BQ0w9eQo+IENPTkZJR19FWFBPUlRGUz15Cj4gQ09ORklHX0ZJTEVfTE9DS0lORz15Cj4g
Q09ORklHX0ZTTk9USUZZPXkKPiBDT05GSUdfRE5PVElGWT15Cj4gIyBDT05GSUdfSU5PVElGWV9V
U0VSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQU5PVElGWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1FV
T1RBPXkKPiBDT05GSUdfUVVPVEFfTkVUTElOS19JTlRFUkZBQ0U9eQo+ICMgQ09ORklHX1BSSU5U
X1FVT1RBX1dBUk5JTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX1FVT1RBX0RFQlVHIGlzIG5vdCBz
ZXQKPiBDT05GSUdfUVVPVEFfVFJFRT15Cj4gQ09ORklHX1FGTVRfVjE9eQo+IENPTkZJR19RRk1U
X1YyPXkKPiBDT05GSUdfUVVPVEFDVEw9eQo+ICMgQ09ORklHX0FVVE9GUzRfRlMgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0ZVU0VfRlMgaXMgbm90IHNldAo+IENPTkZJR19HRU5FUklDX0FDTD15Cj4g
Cj4gIwo+ICMgQ2FjaGVzCj4gIwo+ICMgQ09ORklHX0ZTQ0FDSEUgaXMgbm90IHNldAo+IAo+ICMK
PiAjIFBzZXVkbyBmaWxlc3lzdGVtcwo+ICMKPiAjIENPTkZJR19QUk9DX0ZTIGlzIG5vdCBzZXQK
PiBDT05GSUdfU1lTRlM9eQo+IENPTkZJR19UTVBGUz15Cj4gQ09ORklHX1RNUEZTX1BPU0lYX0FD
TD15Cj4gQ09ORklHX1RNUEZTX1hBVFRSPXkKPiBDT05GSUdfSFVHRVRMQkZTPXkKPiBDT05GSUdf
SFVHRVRMQl9QQUdFPXkKPiAjIENPTkZJR19DT05GSUdGU19GUyBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfTUlTQ19GSUxFU1lTVEVNUyBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVFdPUktfRklMRVNZU1RF
TVM9eQo+IENPTkZJR19ORlNfRlM9eQo+ICMgQ09ORklHX05GU19WMiBpcyBub3Qgc2V0Cj4gQ09O
RklHX05GU19WMz15Cj4gQ09ORklHX05GU19WM19BQ0w9eQo+IENPTkZJR19ORlNfVjQ9eQo+IENP
TkZJR19ORlNfU1dBUD15Cj4gIyBDT05GSUdfTkZTX1Y0XzEgaXMgbm90IHNldAo+ICMgQ09ORklH
X05GU19VU0VfTEVHQUNZX0ROUyBpcyBub3Qgc2V0Cj4gQ09ORklHX05GU19VU0VfS0VSTkVMX0RO
Uz15Cj4gIyBDT05GSUdfTkZTRCBpcyBub3Qgc2V0Cj4gQ09ORklHX0xPQ0tEPXkKPiBDT05GSUdf
TE9DS0RfVjQ9eQo+IENPTkZJR19ORlNfQUNMX1NVUFBPUlQ9eQo+IENPTkZJR19ORlNfQ09NTU9O
PXkKPiBDT05GSUdfU1VOUlBDPXkKPiBDT05GSUdfU1VOUlBDX0dTUz15Cj4gQ09ORklHX1NVTlJQ
Q19YUFJUX1JETUE9eQo+IENPTkZJR19TVU5SUENfU1dBUD15Cj4gIyBDT05GSUdfUlBDU0VDX0dT
U19LUkI1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19DRVBIX0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q0lGUz15Cj4gQ09ORklHX0NJRlNfU1RBVFM9eQo+IENPTkZJR19DSUZTX1NUQVRTMj15Cj4gIyBD
T05GSUdfQ0lGU19XRUFLX1BXX0hBU0ggaXMgbm90IHNldAo+ICMgQ09ORklHX0NJRlNfVVBDQUxM
IGlzIG5vdCBzZXQKPiBDT05GSUdfQ0lGU19YQVRUUj15Cj4gQ09ORklHX0NJRlNfUE9TSVg9eQo+
IENPTkZJR19DSUZTX0FDTD15Cj4gIyBDT05GSUdfQ0lGU19ERUJVRyBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfQ0lGU19ERlNfVVBDQUxMIGlzIG5vdCBzZXQKPiAjIENPTkZJR19DSUZTX1NNQjIgaXMg
bm90IHNldAo+IENPTkZJR19OQ1BfRlM9eQo+ICMgQ09ORklHX05DUEZTX1BBQ0tFVF9TSUdOSU5H
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkNQRlNfSU9DVExfTE9DS0lORz15Cj4gQ09ORklHX05DUEZT
X1NUUk9ORz15Cj4gQ09ORklHX05DUEZTX05GU19OUz15Cj4gQ09ORklHX05DUEZTX09TMl9OUz15
Cj4gQ09ORklHX05DUEZTX1NNQUxMRE9TPXkKPiAjIENPTkZJR19OQ1BGU19OTFMgaXMgbm90IHNl
dAo+IENPTkZJR19OQ1BGU19FWFRSQVM9eQo+IENPTkZJR19DT0RBX0ZTPXkKPiAjIENPTkZJR19B
RlNfRlMgaXMgbm90IHNldAo+IENPTkZJR185UF9GUz15Cj4gQ09ORklHXzlQX0ZTX1BPU0lYX0FD
TD15Cj4gQ09ORklHXzlQX0ZTX1NFQ1VSSVRZPXkKPiBDT05GSUdfTkxTPXkKPiBDT05GSUdfTkxT
X0RFRkFVTFQ9Imlzbzg4NTktMSIKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfNDM3IGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkxTX0NPREVQQUdFXzczNz15Cj4gQ09ORklHX05MU19DT0RFUEFHRV83NzU9
eQo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NTAgaXMgbm90IHNldAo+IENPTkZJR19OTFNfQ09E
RVBBR0VfODUyPXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg1NT15Cj4gIyBDT05GSUdfTkxTX0NP
REVQQUdFXzg1NyBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NjA9eQo+ICMgQ09O
RklHX05MU19DT0RFUEFHRV84NjEgaXMgbm90IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84
NjIgaXMgbm90IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfQ09ERVBBR0VfODY0PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg2NT15Cj4gQ09O
RklHX05MU19DT0RFUEFHRV84NjY9eQo+ICMgQ09ORklHX05MU19DT0RFUEFHRV84NjkgaXMgbm90
IHNldAo+ICMgQ09ORklHX05MU19DT0RFUEFHRV85MzYgaXMgbm90IHNldAo+IENPTkZJR19OTFNf
Q09ERVBBR0VfOTUwPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKPiBD
T05GSUdfTkxTX0NPREVQQUdFXzk0OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NzQ9eQo+ICMg
Q09ORklHX05MU19JU084ODU5XzggaXMgbm90IHNldAo+IENPTkZJR19OTFNfQ09ERVBBR0VfMTI1
MD15Cj4gQ09ORklHX05MU19DT0RFUEFHRV8xMjUxPXkKPiBDT05GSUdfTkxTX0FTQ0lJPXkKPiBD
T05GSUdfTkxTX0lTTzg4NTlfMT15Cj4gQ09ORklHX05MU19JU084ODU5XzI9eQo+IENPTkZJR19O
TFNfSVNPODg1OV8zPXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfND15Cj4gQ09ORklHX05MU19JU084
ODU5XzU9eQo+IENPTkZJR19OTFNfSVNPODg1OV82PXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfNz15
Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlfOSBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19JU084ODU5
XzEzPXkKPiBDT05GSUdfTkxTX0lTTzg4NTlfMTQ9eQo+ICMgQ09ORklHX05MU19JU084ODU5XzE1
IGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0tPSThfUj15Cj4gIyBDT05GSUdfTkxTX0tPSThfVSBp
cyBub3Qgc2V0Cj4gQ09ORklHX05MU19NQUNfUk9NQU49eQo+ICMgQ09ORklHX05MU19NQUNfQ0VM
VElDIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX01BQ19DRU5URVVSTz15Cj4gQ09ORklHX05MU19N
QUNfQ1JPQVRJQU49eQo+ICMgQ09ORklHX05MU19NQUNfQ1lSSUxMSUMgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfTUFDX0dBRUxJQz15Cj4gQ09ORklHX05MU19NQUNfR1JFRUs9eQo+IENPTkZJR19O
TFNfTUFDX0lDRUxBTkQ9eQo+ICMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMgbm90IHNldAo+IENP
TkZJR19OTFNfTUFDX1JPTUFOSUFOPXkKPiBDT05GSUdfTkxTX01BQ19UVVJLSVNIPXkKPiBDT05G
SUdfTkxTX1VURjg9eQo+IAo+ICMKPiAjIEtlcm5lbCBoYWNraW5nCj4gIwo+IENPTkZJR19UUkFD
RV9JUlFGTEFHU19TVVBQT1JUPXkKPiAKPiAjCj4gIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMK
PiAjCj4gIyBDT05GSUdfUFJJTlRLX1RJTUUgaXMgbm90IHNldAo+IENPTkZJR19ERUZBVUxUX01F
U1NBR0VfTE9HTEVWRUw9NAo+IENPTkZJR19CT09UX1BSSU5US19ERUxBWT15Cj4gIyBDT05GSUdf
RFlOQU1JQ19ERUJVRyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQ29tcGlsZS10aW1lIGNoZWNrcyBh
bmQgY29tcGlsZXIgb3B0aW9ucwo+ICMKPiAjIENPTkZJR19ERUJVR19JTkZPIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQKPiBDT05GSUdfRU5B
QkxFX01VU1RfQ0hFQ0s9eQo+IENPTkZJR19GUkFNRV9XQVJOPTIwNDgKPiBDT05GSUdfU1RSSVBf
QVNNX1NZTVM9eQo+ICMgQ09ORklHX1JFQURBQkxFX0FTTSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
VU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19GUz15Cj4gIyBDT05GSUdf
SEVBREVSU19DSEVDSyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX1NFQ1RJT05fTUlTTUFUQ0g9
eQo+IENPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQo+IENPTkZJR19GUkFNRV9QT0lO
VEVSPXkKPiBDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BVPXkKPiBDT05GSUdfTUFHSUNf
U1lTUlE9eQo+IENPTkZJR19NQUdJQ19TWVNSUV9ERUZBVUxUX0VOQUJMRT0weDEKPiBDT05GSUdf
REVCVUdfS0VSTkVMPXkKPiAKPiAjCj4gIyBNZW1vcnkgRGVidWdnaW5nCj4gIwo+IENPTkZJR19E
RUJVR19QQUdFQUxMT0M9eQo+IENPTkZJR19XQU5UX1BBR0VfREVCVUdfRkxBR1M9eQo+IENPTkZJ
R19QQUdFX0dVQVJEPXkKPiBDT05GSUdfREVCVUdfT0JKRUNUUz15Cj4gIyBDT05GSUdfREVCVUdf
T0JKRUNUU19TRUxGVEVTVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15
Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfVElNRVJTPXkKPiBDT05GSUdfREVCVUdfT0JKRUNUU19X
T1JLPXkKPiAjIENPTkZJR19ERUJVR19PQkpFQ1RTX1JDVV9IRUFEIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19ERUJVR19PQkpFQ1RTX1BFUkNQVV9DT1VOVEVSIGlzIG5vdCBzZXQKPiBDT05GSUdfREVC
VUdfT0JKRUNUU19FTkFCTEVfREVGQVVMVD0xCj4gQ09ORklHX1NMVUJfU1RBVFM9eQo+IENPTkZJ
R19IQVZFX0RFQlVHX0tNRU1MRUFLPXkKPiAjIENPTkZJR19ERUJVR19LTUVNTEVBSyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0RFQlVHX1NUQUNLX1VTQUdFPXkKPiAjIENPTkZJR19ERUJVR19WTSBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfREVCVUdfVklSVFVBTCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfREVC
VUdfTUVNT1JZX0lOSVQgaXMgbm90IHNldAo+IENPTkZJR19NRU1PUllfTk9USUZJRVJfRVJST1Jf
SU5KRUNUPXkKPiBDT05GSUdfSEFWRV9ERUJVR19TVEFDS09WRVJGTE9XPXkKPiAjIENPTkZJR19E
RUJVR19TVEFDS09WRVJGTE9XIGlzIG5vdCBzZXQKPiBDT05GSUdfSEFWRV9BUkNIX0tNRU1DSEVD
Sz15Cj4gIyBDT05GSUdfREVCVUdfU0hJUlEgaXMgbm90IHNldAo+IAo+ICMKPiAjIERlYnVnIExv
Y2t1cHMgYW5kIEhhbmdzCj4gIwo+ICMgQ09ORklHX0xPQ0tVUF9ERVRFQ1RPUiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0RFVEVDVF9IVU5HX1RBU0s9eQo+IENPTkZJR19ERUZBVUxUX0hVTkdfVEFTS19U
SU1FT1VUPTEyMAo+ICMgQ09ORklHX0JPT1RQQVJBTV9IVU5HX1RBU0tfUEFOSUMgaXMgbm90IHNl
dAo+IENPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BBTklDX1ZBTFVFPTAKPiBDT05GSUdfUEFO
SUNfT05fT09QUz15Cj4gQ09ORklHX1BBTklDX09OX09PUFNfVkFMVUU9MQo+IENPTkZJR19QQU5J
Q19USU1FT1VUPTAKPiAKPiAjCj4gIyBMb2NrIERlYnVnZ2luZyAoc3BpbmxvY2tzLCBtdXRleGVz
LCBldGMuLi4pCj4gIwo+IENPTkZJR19ERUJVR19SVF9NVVRFWEVTPXkKPiBDT05GSUdfREVCVUdf
UElfTElTVD15Cj4gIyBDT05GSUdfUlRfTVVURVhfVEVTVEVSIGlzIG5vdCBzZXQKPiBDT05GSUdf
REVCVUdfU1BJTkxPQ0s9eQo+IENPTkZJR19ERUJVR19NVVRFWEVTPXkKPiAjIENPTkZJR19ERUJV
R19XV19NVVRFWF9TTE9XUEFUSCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0M9
eQo+ICMgQ09ORklHX1BST1ZFX0xPQ0tJTkcgaXMgbm90IHNldAo+IENPTkZJR19MT0NLREVQPXkK
PiBDT05GSUdfTE9DS19TVEFUPXkKPiAjIENPTkZJR19ERUJVR19MT0NLREVQIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19ERUJVR19BVE9NSUNfU0xFRVAgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19M
T0NLSU5HX0FQSV9TRUxGVEVTVFM9eQo+IENPTkZJR19TVEFDS1RSQUNFPXkKPiBDT05GSUdfREVC
VUdfS09CSkVDVD15Cj4gIyBDT05GSUdfREVCVUdfS09CSkVDVF9SRUxFQVNFIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19ERUJVR19XUklURUNPVU5UIGlzIG5vdCBzZXQKPiBDT05GSUdfREVCVUdfTElT
VD15Cj4gIyBDT05GSUdfREVCVUdfU0cgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19OT1RJRklF
UlM9eQo+ICMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBS
Q1UgRGVidWdnaW5nCj4gIwo+IENPTkZJR19TUEFSU0VfUkNVX1BPSU5URVI9eQo+IENPTkZJR19S
Q1VfVE9SVFVSRV9URVNUPXkKPiAjIENPTkZJR19SQ1VfVE9SVFVSRV9URVNUX1JVTk5BQkxFIGlz
IG5vdCBzZXQKPiBDT05GSUdfUkNVX0NQVV9TVEFMTF9USU1FT1VUPTIxCj4gQ09ORklHX1JDVV9U
UkFDRT15Cj4gQ09ORklHX05PVElGSUVSX0VSUk9SX0lOSkVDVElPTj15Cj4gIyBDT05GSUdfUE1f
Tk9USUZJRVJfRVJST1JfSU5KRUNUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQVVMVF9JTkpFQ1RJ
T04gaXMgbm90IHNldAo+IENPTkZJR19BUkNIX0hBU19ERUJVR19TVFJJQ1RfVVNFUl9DT1BZX0NI
RUNLUz15Cj4gIyBDT05GSUdfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1MgaXMgbm90IHNl
dAo+IENPTkZJR19VU0VSX1NUQUNLVFJBQ0VfU1VQUE9SVD15Cj4gQ09ORklHX0hBVkVfRlVOQ1RJ
T05fVFJBQ0VSPXkKPiBDT05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQo+IENPTkZJ
R19IQVZFX0ZVTkNUSU9OX0dSQVBIX0ZQX1RFU1Q9eQo+IENPTkZJR19IQVZFX0ZVTkNUSU9OX1RS
QUNFX01DT1VOVF9URVNUPXkKPiBDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15Cj4gQ09ORklH
X0hBVkVfRFlOQU1JQ19GVFJBQ0VfV0lUSF9SRUdTPXkKPiBDT05GSUdfSEFWRV9GVFJBQ0VfTUNP
VU5UX1JFQ09SRD15Cj4gQ09ORklHX0hBVkVfU1lTQ0FMTF9UUkFDRVBPSU5UUz15Cj4gQ09ORklH
X0hBVkVfRkVOVFJZPXkKPiBDT05GSUdfSEFWRV9DX1JFQ09SRE1DT1VOVD15Cj4gQ09ORklHX1RS
QUNFX0NMT0NLPXkKPiBDT05GSUdfVFJBQ0lOR19TVVBQT1JUPXkKPiAjIENPTkZJR19GVFJBQ0Ug
aXMgbm90IHNldAo+IAo+ICMKPiAjIFJ1bnRpbWUgVGVzdGluZwo+ICMKPiBDT05GSUdfVEVTVF9M
SVNUX1NPUlQ9eQo+IENPTkZJR19CQUNLVFJBQ0VfU0VMRl9URVNUPXkKPiAjIENPTkZJR19SQlRS
RUVfVEVTVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FUT01JQzY0X1NFTEZURVNUPXkKPiBDT05GSUdf
VEVTVF9TVFJJTkdfSEVMUEVSUz15Cj4gQ09ORklHX1RFU1RfS1NUUlRPWD15Cj4gIyBDT05GSUdf
UFJPVklERV9PSENJMTM5NF9ETUFfSU5JVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRE1BX0FQSV9E
RUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NBTVBMRVM9eQo+IENPTkZJR19IQVZFX0FSQ0hfS0dE
Qj15Cj4gQ09ORklHX0tHREI9eQo+IENPTkZJR19LR0RCX1NFUklBTF9DT05TT0xFPXkKPiAjIENP
TkZJR19LR0RCX1RFU1RTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19LR0RCX0xPV19MRVZFTF9UUkFQ
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NUUklD
VF9ERVZNRU09eQo+ICMgQ09ORklHX1g4Nl9WRVJCT1NFX0JPT1RVUCBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfRUFSTFlfUFJJTlRLIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YODZfUFREVU1QIGlzIG5v
dCBzZXQKPiBDT05GSUdfREVCVUdfUk9EQVRBPXkKPiAjIENPTkZJR19ERUJVR19ST0RBVEFfVEVT
VCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RPVUJMRUZBVUxUPXkKPiAjIENPTkZJR19ERUJVR19UTEJG
TFVTSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU9NTVVfU1RSRVNTIGlzIG5vdCBzZXQKPiBDT05G
SUdfSEFWRV9NTUlPVFJBQ0VfU1VQUE9SVD15Cj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfMFg4MD0w
Cj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfMFhFRD0xCj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfVURF
TEFZPTIKPiBDT05GSUdfSU9fREVMQVlfVFlQRV9OT05FPTMKPiBDT05GSUdfSU9fREVMQVlfMFg4
MD15Cj4gIyBDT05GSUdfSU9fREVMQVlfMFhFRCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU9fREVM
QVlfVURFTEFZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JT19ERUxBWV9OT05FIGlzIG5vdCBzZXQK
PiBDT05GSUdfREVGQVVMVF9JT19ERUxBWV9UWVBFPTAKPiAjIENPTkZJR19ERUJVR19CT09UX1BB
UkFNUyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1BBX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdf
T1BUSU1JWkVfSU5MSU5JTkc9eQo+IENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1Q9eQo+IENPTkZJ
R19YODZfREVCVUdfU1RBVElDX0NQVV9IQVM9eQo+IAo+ICMKPiAjIFNlY3VyaXR5IG9wdGlvbnMK
PiAjCj4gQ09ORklHX0tFWVM9eQo+ICMgQ09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90
IHNldAo+IENPTkZJR19CSUdfS0VZUz15Cj4gIyBDT05GSUdfVFJVU1RFRF9LRVlTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRU5DUllQVEVEX0tFWVM9eQo+ICMgQ09ORklHX0tFWVNfREVCVUdfUFJPQ19L
RVlTIGlzIG5vdCBzZXQKPiBDT05GSUdfU0VDVVJJVFlfRE1FU0dfUkVTVFJJQ1Q9eQo+IENPTkZJ
R19TRUNVUklUWT15Cj4gQ09ORklHX1NFQ1VSSVRZRlM9eQo+IENPTkZJR19TRUNVUklUWV9ORVRX
T1JLPXkKPiBDT05GSUdfU0VDVVJJVFlfTkVUV09SS19YRlJNPXkKPiBDT05GSUdfU0VDVVJJVFlf
UEFUSD15Cj4gIyBDT05GSUdfU0VDVVJJVFlfU0VMSU5VWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
U0VDVVJJVFlfU01BQ0sgaXMgbm90IHNldAo+IENPTkZJR19TRUNVUklUWV9UT01PWU89eQo+IENP
TkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FDQ0VQVF9FTlRSWT0yMDQ4Cj4gQ09ORklHX1NFQ1VS
SVRZX1RPTU9ZT19NQVhfQVVESVRfTE9HPTEwMjQKPiBDT05GSUdfU0VDVVJJVFlfVE9NT1lPX09N
SVRfVVNFUlNQQUNFX0xPQURFUj15Cj4gQ09ORklHX1NFQ1VSSVRZX0FQUEFSTU9SPXkKPiBDT05G
SUdfU0VDVVJJVFlfQVBQQVJNT1JfQk9PVFBBUkFNX1ZBTFVFPTEKPiBDT05GSUdfU0VDVVJJVFlf
QVBQQVJNT1JfSEFTSD15Cj4gQ09ORklHX1NFQ1VSSVRZX1lBTUE9eQo+IENPTkZJR19TRUNVUklU
WV9ZQU1BX1NUQUNLRUQ9eQo+ICMgQ09ORklHX0lNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRVZN
IGlzIG5vdCBzZXQKPiBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9UT01PWU89eQo+ICMgQ09ORklH
X0RFRkFVTFRfU0VDVVJJVFlfQVBQQVJNT1IgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFRkFVTFRf
U0VDVVJJVFlfWUFNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfREVGQVVMVF9TRUNVUklUWV9EQUMg
aXMgbm90IHNldAo+IENPTkZJR19ERUZBVUxUX1NFQ1VSSVRZPSJ0b21veW8iCj4gQ09ORklHX0NS
WVBUTz15Cj4gCj4gIwo+ICMgQ3J5cHRvIGNvcmUgb3IgaGVscGVyCj4gIwo+ICMgQ09ORklHX0NS
WVBUT19GSVBTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0FMR0FQST15Cj4gQ09ORklHX0NS
WVBUT19BTEdBUEkyPXkKPiBDT05GSUdfQ1JZUFRPX0FFQUQ9eQo+IENPTkZJR19DUllQVE9fQUVB
RDI9eQo+IENPTkZJR19DUllQVE9fQkxLQ0lQSEVSPXkKPiBDT05GSUdfQ1JZUFRPX0JMS0NJUEhF
UjI9eQo+IENPTkZJR19DUllQVE9fSEFTSD15Cj4gQ09ORklHX0NSWVBUT19IQVNIMj15Cj4gQ09O
RklHX0NSWVBUT19STkc9eQo+IENPTkZJR19DUllQVE9fUk5HMj15Cj4gQ09ORklHX0NSWVBUT19Q
Q09NUDI9eQo+IENPTkZJR19DUllQVE9fTUFOQUdFUj15Cj4gQ09ORklHX0NSWVBUT19NQU5BR0VS
Mj15Cj4gQ09ORklHX0NSWVBUT19VU0VSPXkKPiAjIENPTkZJR19DUllQVE9fTUFOQUdFUl9ESVNB
QkxFX1RFU1RTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0dGMTI4TVVMPXkKPiBDT05GSUdf
Q1JZUFRPX05VTEw9eQo+IENPTkZJR19DUllQVE9fV09SS1FVRVVFPXkKPiBDT05GSUdfQ1JZUFRP
X0NSWVBURD15Cj4gQ09ORklHX0NSWVBUT19BVVRIRU5DPXkKPiBDT05GSUdfQ1JZUFRPX0FCTEtf
SEVMUEVSPXkKPiBDT05GSUdfQ1JZUFRPX0dMVUVfSEVMUEVSX1g4Nj15Cj4gCj4gIwo+ICMgQXV0
aGVudGljYXRlZCBFbmNyeXB0aW9uIHdpdGggQXNzb2NpYXRlZCBEYXRhCj4gIwo+IENPTkZJR19D
UllQVE9fQ0NNPXkKPiBDT05GSUdfQ1JZUFRPX0dDTT15Cj4gQ09ORklHX0NSWVBUT19TRVFJVj15
Cj4gCj4gIwo+ICMgQmxvY2sgbW9kZXMKPiAjCj4gQ09ORklHX0NSWVBUT19DQkM9eQo+IENPTkZJ
R19DUllQVE9fQ1RSPXkKPiBDT05GSUdfQ1JZUFRPX0NUUz15Cj4gQ09ORklHX0NSWVBUT19FQ0I9
eQo+IENPTkZJR19DUllQVE9fTFJXPXkKPiBDT05GSUdfQ1JZUFRPX1BDQkM9eQo+IENPTkZJR19D
UllQVE9fWFRTPXkKPiAKPiAjCj4gIyBIYXNoIG1vZGVzCj4gIwo+IENPTkZJR19DUllQVE9fQ01B
Qz15Cj4gQ09ORklHX0NSWVBUT19ITUFDPXkKPiAjIENPTkZJR19DUllQVE9fWENCQyBpcyBub3Qg
c2V0Cj4gQ09ORklHX0NSWVBUT19WTUFDPXkKPiAKPiAjCj4gIyBEaWdlc3QKPiAjCj4gQ09ORklH
X0NSWVBUT19DUkMzMkM9eQo+IENPTkZJR19DUllQVE9fQ1JDMzJDX0lOVEVMPXkKPiAjIENPTkZJ
R19DUllQVE9fQ1JDMzIgaXMgbm90IHNldAo+ICMgQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUwg
aXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fQ1JDVDEwRElGPXkKPiBDT05GSUdfQ1JZUFRPX0dI
QVNIPXkKPiBDT05GSUdfQ1JZUFRPX01END15Cj4gQ09ORklHX0NSWVBUT19NRDU9eQo+IENPTkZJ
R19DUllQVE9fTUlDSEFFTF9NSUM9eQo+ICMgQ09ORklHX0NSWVBUT19STUQxMjggaXMgbm90IHNl
dAo+IENPTkZJR19DUllQVE9fUk1EMTYwPXkKPiBDT05GSUdfQ1JZUFRPX1JNRDI1Nj15Cj4gQ09O
RklHX0NSWVBUT19STUQzMjA9eQo+IENPTkZJR19DUllQVE9fU0hBMT15Cj4gQ09ORklHX0NSWVBU
T19TSEExX1NTU0UzPXkKPiBDT05GSUdfQ1JZUFRPX1NIQTI1Nl9TU1NFMz15Cj4gQ09ORklHX0NS
WVBUT19TSEE1MTJfU1NTRTM9eQo+IENPTkZJR19DUllQVE9fU0hBMjU2PXkKPiBDT05GSUdfQ1JZ
UFRPX1NIQTUxMj15Cj4gQ09ORklHX0NSWVBUT19UR1IxOTI9eQo+IENPTkZJR19DUllQVE9fV1A1
MTI9eQo+ICMgQ09ORklHX0NSWVBUT19HSEFTSF9DTE1VTF9OSV9JTlRFTCBpcyBub3Qgc2V0Cj4g
Cj4gIwo+ICMgQ2lwaGVycwo+ICMKPiBDT05GSUdfQ1JZUFRPX0FFUz15Cj4gQ09ORklHX0NSWVBU
T19BRVNfWDg2XzY0PXkKPiAjIENPTkZJR19DUllQVE9fQUVTX05JX0lOVEVMIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19DUllQVE9fQU5VQklTIGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX0FSQzQ9
eQo+ICMgQ09ORklHX0NSWVBUT19CTE9XRklTSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JZUFRP
X0JMT1dGSVNIX1g4Nl82NCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NSWVBUT19DQU1FTExJQT15Cj4g
Q09ORklHX0NSWVBUT19DQU1FTExJQV9YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fQ0FNRUxMSUFf
QUVTTklfQVZYX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19DQU1FTExJQV9BRVNOSV9BVlgyX1g4
Nl82ND15Cj4gQ09ORklHX0NSWVBUT19DQVNUX0NPTU1PTj15Cj4gIyBDT05GSUdfQ1JZUFRPX0NB
U1Q1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19DUllQVE9fQ0FTVDVfQVZYX1g4Nl82NCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0NSWVBUT19DQVNUNj15Cj4gQ09ORklHX0NSWVBUT19DQVNUNl9BVlhfWDg2
XzY0PXkKPiBDT05GSUdfQ1JZUFRPX0RFUz15Cj4gQ09ORklHX0NSWVBUT19GQ1JZUFQ9eQo+IENP
TkZJR19DUllQVE9fS0hBWkFEPXkKPiAjIENPTkZJR19DUllQVE9fU0FMU0EyMCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0NSWVBUT19TQUxTQTIwX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19TRUVEPXkK
PiBDT05GSUdfQ1JZUFRPX1NFUlBFTlQ9eQo+IENPTkZJR19DUllQVE9fU0VSUEVOVF9TU0UyX1g4
Nl82ND15Cj4gQ09ORklHX0NSWVBUT19TRVJQRU5UX0FWWF9YODZfNjQ9eQo+ICMgQ09ORklHX0NS
WVBUT19TRVJQRU5UX0FWWDJfWDg2XzY0IGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JZUFRPX1RFQT15
Cj4gIyBDT05GSUdfQ1JZUFRPX1RXT0ZJU0ggaXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fVFdP
RklTSF9DT01NT049eQo+IENPTkZJR19DUllQVE9fVFdPRklTSF9YODZfNjQ9eQo+IENPTkZJR19D
UllQVE9fVFdPRklTSF9YODZfNjRfM1dBWT15Cj4gQ09ORklHX0NSWVBUT19UV09GSVNIX0FWWF9Y
ODZfNjQ9eQo+IAo+ICMKPiAjIENvbXByZXNzaW9uCj4gIwo+IENPTkZJR19DUllQVE9fREVGTEFU
RT15Cj4gIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fTFpP
PXkKPiBDT05GSUdfQ1JZUFRPX0xaND15Cj4gIyBDT05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBz
ZXQKPiAKPiAjCj4gIyBSYW5kb20gTnVtYmVyIEdlbmVyYXRpb24KPiAjCj4gQ09ORklHX0NSWVBU
T19BTlNJX0NQUk5HPXkKPiBDT05GSUdfQ1JZUFRPX1VTRVJfQVBJPXkKPiBDT05GSUdfQ1JZUFRP
X1VTRVJfQVBJX0hBU0g9eQo+IENPTkZJR19DUllQVE9fVVNFUl9BUElfU0tDSVBIRVI9eQo+IENP
TkZJR19DUllQVE9fSFc9eQo+ICMgQ09ORklHX0NSWVBUT19ERVZfUEFETE9DSyBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQ1JZUFRPX0RFVl9DQ1AgaXMgbm90IHNldAo+ICMgQ09ORklHX0FTWU1NRVRS
SUNfS0VZX1RZUEUgaXMgbm90IHNldAo+IENPTkZJR19IQVZFX0tWTT15Cj4gQ09ORklHX0hBVkVf
S1ZNX0lSUUNISVA9eQo+IENPTkZJR19IQVZFX0tWTV9JUlFfUk9VVElORz15Cj4gQ09ORklHX0hB
VkVfS1ZNX0VWRU5URkQ9eQo+IENPTkZJR19LVk1fQVBJQ19BUkNISVRFQ1RVUkU9eQo+IENPTkZJ
R19LVk1fTU1JTz15Cj4gQ09ORklHX0tWTV9BU1lOQ19QRj15Cj4gQ09ORklHX0hBVkVfS1ZNX01T
ST15Cj4gQ09ORklHX0hBVkVfS1ZNX0NQVV9SRUxBWF9JTlRFUkNFUFQ9eQo+IENPTkZJR19LVk1f
VkZJTz15Cj4gQ09ORklHX1ZJUlRVQUxJWkFUSU9OPXkKPiBDT05GSUdfS1ZNPXkKPiAjIENPTkZJ
R19LVk1fSU5URUwgaXMgbm90IHNldAo+IENPTkZJR19LVk1fQU1EPXkKPiAjIENPTkZJR19CSU5B
UllfUFJJTlRGIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBMaWJyYXJ5IHJvdXRpbmVzCj4gIwo+IENP
TkZJR19CSVRSRVZFUlNFPXkKPiBDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15Cj4g
Q09ORklHX0dFTkVSSUNfU1RSTkxFTl9VU0VSPXkKPiBDT05GSUdfR0VORVJJQ19ORVRfVVRJTFM9
eQo+IENPTkZJR19HRU5FUklDX0ZJTkRfRklSU1RfQklUPXkKPiBDT05GSUdfR0VORVJJQ19QQ0lf
SU9NQVA9eQo+IENPTkZJR19HRU5FUklDX0lPTUFQPXkKPiBDT05GSUdfR0VORVJJQ19JTz15Cj4g
Q09ORklHX0FSQ0hfVVNFX0NNUFhDSEdfTE9DS1JFRj15Cj4gQ09ORklHX0NSQ19DQ0lUVD15Cj4g
Q09ORklHX0NSQzE2PXkKPiAjIENPTkZJR19DUkNfVDEwRElGIGlzIG5vdCBzZXQKPiBDT05GSUdf
Q1JDX0lUVV9UPXkKPiBDT05GSUdfQ1JDMzI9eQo+IENPTkZJR19DUkMzMl9TRUxGVEVTVD15Cj4g
Q09ORklHX0NSQzMyX1NMSUNFQlk4PXkKPiAjIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfQ1JDMzJfU0FSV0FURSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JDMzJf
QklUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19DUkM3IGlzIG5vdCBzZXQKPiBDT05GSUdfTElCQ1JD
MzJDPXkKPiAjIENPTkZJR19DUkM4IGlzIG5vdCBzZXQKPiBDT05GSUdfQ1JDNjRfRUNNQT15Cj4g
Q09ORklHX1JBTkRPTTMyX1NFTEZURVNUPXkKPiBDT05GSUdfWkxJQl9JTkZMQVRFPXkKPiBDT05G
SUdfWkxJQl9ERUZMQVRFPXkKPiBDT05GSUdfTFpPX0NPTVBSRVNTPXkKPiBDT05GSUdfTFpPX0RF
Q09NUFJFU1M9eQo+IENPTkZJR19MWjRfQ09NUFJFU1M9eQo+IENPTkZJR19MWjRfREVDT01QUkVT
Uz15Cj4gQ09ORklHX1haX0RFQz15Cj4gIyBDT05GSUdfWFpfREVDX1g4NiBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfWFpfREVDX1BPV0VSUEMgaXMgbm90IHNldAo+ICMgQ09ORklHX1haX0RFQ19JQTY0
IGlzIG5vdCBzZXQKPiBDT05GSUdfWFpfREVDX0FSTT15Cj4gIyBDT05GSUdfWFpfREVDX0FSTVRI
VU1CIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YWl9ERUNfU1BBUkMgaXMgbm90IHNldAo+IENPTkZJ
R19YWl9ERUNfQkNKPXkKPiBDT05GSUdfWFpfREVDX1RFU1Q9eQo+IENPTkZJR19ERUNPTVBSRVNT
X1haPXkKPiBDT05GSUdfR0VORVJJQ19BTExPQ0FUT1I9eQo+IENPTkZJR19CQ0g9eQo+IENPTkZJ
R19CQ0hfQ09OU1RfUEFSQU1TPXkKPiBDT05GSUdfVEVYVFNFQVJDSD15Cj4gQ09ORklHX1RFWFRT
RUFSQ0hfS01QPXkKPiBDT05GSUdfQVNTT0NJQVRJVkVfQVJSQVk9eQo+IENPTkZJR19IQVNfSU9N
RU09eQo+IENPTkZJR19IQVNfSU9QT1JUPXkKPiBDT05GSUdfSEFTX0RNQT15Cj4gQ09ORklHX0NI
RUNLX1NJR05BVFVSRT15Cj4gQ09ORklHX0RRTD15Cj4gQ09ORklHX05MQVRUUj15Cj4gQ09ORklH
X0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElWRT15Cj4gIyBDT05GSUdfQVZFUkFHRSBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERFIgaXMg
bm90IHNldAo+IENPTkZJR19PSURfUkVHSVNUUlk9eQo+IENPTkZJR19VQ1MyX1NUUklORz15Cj4g
Q09ORklHX0ZPTlRfU1VQUE9SVD15Cj4gQ09ORklHX0ZPTlRfOHgxNj15Cj4gQ09ORklHX0ZPTlRf
QVVUT1NFTEVDVD15CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:47:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0XwL-0008U8-1n; Tue, 07 Jan 2014 14:47:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0XwJ-0008Tx-EY
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:47:31 +0000
Received: from [85.158.137.68:60948] by server-8.bemta-3.messagelabs.com id
	74/48-31081-2831CC25; Tue, 07 Jan 2014 14:47:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389106049!6936613!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22725 invoked from network); 7 Jan 2014 14:47:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Jan 2014 14:47:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 07 Jan 2014 14:47:29 +0000
Message-Id: <52CC218B0200007800111330@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 07 Jan 2014 14:47:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Gordan Bobic" <gordan@bobich.net>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
	<52CBFDDD020000780011112C@nat28.tlf.novell.com>
	<5dcec6d652a27688050262f949e9dc9e@mail.shatteredsilicon.net>
	<20140107143836.GF3588@phenom.dumpdata.com>
In-Reply-To: <20140107143836.GF3588@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Feng Wu <feng.wu@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.01.14 at 15:38, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> That requires knowing the MMIO BARs the 'fake' device has, and
> .. well, whatever else the Intel VT-d code requires.

Why would you need to know BAR values? Weren't we talking about
an invisible bridge (in which case one would expect that there are
no MSI-X interrupts to be used, which is the only reason I can
see for us needing to know/read the BARs)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:51:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y0E-0000iH-O8; Tue, 07 Jan 2014 14:51:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0Y0D-0000i7-23
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:51:33 +0000
Received: from [85.158.137.68:63254] by server-2.bemta-3.messagelabs.com id
	54/1E-17329-3741CC25; Tue, 07 Jan 2014 14:51:31 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389106289!6559320!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15680 invoked from network); 7 Jan 2014 14:51:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:51:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88299966"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 14:51:01 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:51:01 -0500
Message-ID: <52CC1453.3090804@citrix.com>
Date: Tue, 7 Jan 2014 14:50:59 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1386892097-15502-1-git-send-email-zoltan.kiss@citrix.com>
	<1386892097-15502-2-git-send-email-zoltan.kiss@citrix.com>
	<20131213153138.GL21900@zion.uk.xensource.com>
	<52AB506E.3040509@citrix.com>
	<20131213191423.GA12582@zion.uk.xensource.com>
	<52AF1A84.3090304@citrix.com>
	<20131216175036.GB25969@zion.uk.xensource.com>
In-Reply-To: <20131216175036.GB25969@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v2 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/12/13 17:50, Wei Liu wrote:
> On Mon, Dec 16, 2013 at 03:21:40PM +0000, Zoltan Kiss wrote:
> [...]
>>>>>>>
>>>>>>> Should this be BUG_ON? AIUI this kthread should be the only one doing
>>>>>>> unmap, right?
>>>>> The NAPI instance can do it as well if it is a small packet that fits
>>>>> into PKT_PROT_LEN. Still, this scenario shouldn't really happen;
>>>>> I was just not sure we have to crash immediately. Maybe handle it as
>>>>> a fatal error and destroy the vif?
>>>>>
>>> It depends. If this is within the trust boundary, i.e. everything at this
>>> stage should already have been sanitized, then we should BUG_ON, because
>>> there's clearly a bug somewhere in the sanitization process, or in the
>>> interaction of the various backend routines.
>>
>> My understanding is that crashing should be avoided if we can bail
>> out somehow. At this point there is clearly a bug in netback
>> somewhere: something unmapped that page before it should have
>> been, or at least that array got corrupted somehow. However,
>> there is a chance that xenvif_fatal_tx_err() can contain the issue,
>> and the rest of the system can remain unaffected.
>>
>
> IMHO that would make debugging much harder if a crash is caused by a
> previously corrupted array and we pretend we can carry on serving. Netback
> now has three routines (NAPI plus two kthreads) serving a single vif, and
> the interaction among them makes bugs hard to reproduce.

OK, I'll make this a BUG() in the next series.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:52:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y0e-0000kr-5v; Tue, 07 Jan 2014 14:52:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1W0Y0b-0000kV-KU
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:51:58 +0000
Received: from [193.109.254.147:32766] by server-16.bemta-14.messagelabs.com
	id FB/EC-20600-C841CC25; Tue, 07 Jan 2014 14:51:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389106315!9339294!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1783 invoked from network); 7 Jan 2014 14:51:55 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	7 Jan 2014 14:51:55 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 06:51:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,619,1384329600"; d="scan'208";a="435136391"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 07 Jan 2014 06:51:31 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 06:51:30 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX152.ccr.corp.intel.com ([10.239.6.52]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 22:51:29 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH v2 2/5] IOMMU: make page table deallocation preemptible
Thread-Index: AQHO+Au1l5I/TDxRlUukwlitXprzYZp5fKkA
Date: Tue, 7 Jan 2014 14:51:29 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A4834564404E7770B@SHSMSX104.ccr.corp.intel.com>
References: <52A744B7020000780010BEF1@nat28.tlf.novell.com>
	<52A7456A020000780010BF23@nat28.tlf.novell.com>
	<52A8B683.4050705@citrix.com>
	<52AB2105020000780010D064@nat28.tlf.novell.com>
In-Reply-To: <52AB2105020000780010D064@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Andrew
	Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH v2 2/5] IOMMU: make page table deallocation
	preemptible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan, Thanks! 
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Friday, December 13, 2013 10:00 PM
To: xen-devel
Cc: suravee.suthikulpanit@amd.com; Andrew Cooper; George Dunlap; Zhang, Xiantao; Keir Fraser
Subject: [PATCH v2 2/5] IOMMU: make page table deallocation preemptible

This too can take an arbitrary amount of time.

In fact, the bulk of the work is being moved to a tasklet, as handling the necessary preemption logic in line seems close to impossible given that the teardown may also be invoked on error paths.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: abstract out tasklet logic

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -405,11 +405,21 @@ static int amd_iommu_assign_device(struc
     return reassign_device(dom0, d, devfn, pdev);
 }
 
-static void deallocate_next_page_table(struct page_info* pg, int level)
+static void deallocate_next_page_table(struct page_info *pg, int level) 
+{
+    PFN_ORDER(pg) = level;
+    spin_lock(&iommu_pt_cleanup_lock);
+    page_list_add_tail(pg, &iommu_pt_cleanup_list);
+    spin_unlock(&iommu_pt_cleanup_lock);
+}
+
+static void deallocate_page_table(struct page_info *pg)
 {
     void *table_vaddr, *pde;
     u64 next_table_maddr;
-    int index, next_level;
+    unsigned int index, level = PFN_ORDER(pg), next_level;
+
+    PFN_ORDER(pg) = 0;
 
     if ( level <= 1 )
     {
@@ -599,6 +609,7 @@ const struct iommu_ops amd_iommu_ops = {
     .teardown = amd_iommu_domain_destroy,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
+    .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .update_ire_from_apic = amd_iommu_ioapic_update_ire,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -58,6 +58,10 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
+PAGE_LIST_HEAD(iommu_pt_cleanup_list);
+static struct tasklet iommu_pt_cleanup_tasklet;
+
 static struct keyhandler iommu_p2m_table = {
     .diagnostic = 0,
     .u.fn = iommu_dump_p2m_table,
@@ -235,6 +239,15 @@ int iommu_remove_device(struct pci_dev *
     return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
+static void iommu_teardown(struct domain *d)
+{
+    const struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    d->need_iommu = 0;
+    hd->platform_ops->teardown(d);
+    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+}
+
 /*
  * If the device isn't owned by dom0, it means it already
  * has been assigned to other domain, or it doesn't exist.
@@ -309,10 +322,7 @@ static int assign_device(struct domain *
 
  done:
     if ( !has_arch_pdevs(d) && need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
     spin_unlock(&pcidevs_lock);
 
     return rc;
@@ -377,10 +387,7 @@ static int iommu_populate_page_table(str
     if ( !rc )
         iommu_iotlb_flush_all(d);
     else if ( rc != -ERESTART )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     return rc;
 }
@@ -397,10 +404,7 @@ void iommu_domain_destroy(struct domain 
         return;
 
     if ( need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
     {
@@ -438,6 +442,23 @@ int iommu_unmap_page(struct domain *d, u
     return hd->platform_ops->unmap_page(d, gfn);
 }
 
+static void iommu_free_pagetables(unsigned long unused)
+{
+    do {
+        struct page_info *pg;
+
+        spin_lock(&iommu_pt_cleanup_lock);
+        pg = page_list_remove_head(&iommu_pt_cleanup_list);
+        spin_unlock(&iommu_pt_cleanup_lock);
+        if ( !pg )
+            return;
+        iommu_get_ops()->free_page_table(pg);
+    } while ( !softirq_pending(smp_processor_id()) );
+
+    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
+                            cpumask_cycle(smp_processor_id(),
+                                          &cpu_online_map));
+}
+
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
@@ -500,10 +521,7 @@ int deassign_device(struct domain *d, u1
     pdev->fault.count = 0;
 
     if ( !has_arch_pdevs(d) && need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     return ret;
 }
@@ -542,6 +560,7 @@ int __init iommu_setup(void)
                iommu_passthrough ? "Passthrough" :
                iommu_dom0_strict ? "Strict" : "Relaxed");
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
+        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, 0);
     }
 
     return rc;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -668,13 +668,24 @@ static void dma_pte_clear_one(struct dom
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
 {
-    int i;
-    struct dma_pte *pt_vaddr, *pte;
-    int next_level = level - 1;
+    struct page_info *pg = maddr_to_page(pt_maddr);
 
     if ( pt_maddr == 0 )
         return;
 
+    PFN_ORDER(pg) = level;
+    spin_lock(&iommu_pt_cleanup_lock);
+    page_list_add_tail(pg, &iommu_pt_cleanup_list);
+    spin_unlock(&iommu_pt_cleanup_lock);
+}
+
+static void iommu_free_page_table(struct page_info *pg)
+{
+    unsigned int i, next_level = PFN_ORDER(pg) - 1;
+    u64 pt_maddr = page_to_maddr(pg);
+    struct dma_pte *pt_vaddr, *pte;
+
+    PFN_ORDER(pg) = 0;
     pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
 
     for ( i = 0; i < PTE_NUM; i++ )
@@ -2430,6 +2441,7 @@ const struct iommu_ops intel_iommu_ops =
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
+    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .update_ire_from_apic = io_apic_write_remap_rte,
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -88,6 +88,7 @@ bool_t pt_irq_need_timer(uint32_t flags)
 
 struct msi_desc;
 struct msi_msg;
+struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
@@ -100,6 +101,7 @@ struct iommu_ops {
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
+    void (*free_page_table)(struct page_info *);
     int (*reassign_device)(struct domain *s, struct domain *t,
 			   u8 devfn, struct pci_dev *);
     int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
@@ -151,4 +153,7 @@ int adjust_vtd_irq_affinities(void);
  */
 DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+extern struct spinlock iommu_pt_cleanup_lock;
+extern struct page_list_head iommu_pt_cleanup_list;
+
 #endif /* _IOMMU_H_ */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:52:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y0e-0000kr-5v; Tue, 07 Jan 2014 14:52:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1W0Y0b-0000kV-KU
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:51:58 +0000
Received: from [193.109.254.147:32766] by server-16.bemta-14.messagelabs.com
	id FB/EC-20600-C841CC25; Tue, 07 Jan 2014 14:51:56 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389106315!9339294!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1783 invoked from network); 7 Jan 2014 14:51:55 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	7 Jan 2014 14:51:55 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 06:51:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,619,1384329600"; d="scan'208";a="435136391"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 07 Jan 2014 06:51:31 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 06:51:30 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX152.ccr.corp.intel.com ([10.239.6.52]) with mapi id
	14.03.0123.003; Tue, 7 Jan 2014 22:51:29 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH v2 2/5] IOMMU: make page table deallocation preemptible
Thread-Index: AQHO+Au1l5I/TDxRlUukwlitXprzYZp5fKkA
Date: Tue, 7 Jan 2014 14:51:29 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A4834564404E7770B@SHSMSX104.ccr.corp.intel.com>
References: <52A744B7020000780010BEF1@nat28.tlf.novell.com>
	<52A7456A020000780010BF23@nat28.tlf.novell.com>
	<52A8B683.4050705@citrix.com>
	<52AB2105020000780010D064@nat28.tlf.novell.com>
In-Reply-To: <52AB2105020000780010D064@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Andrew
	Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH v2 2/5] IOMMU: make page table deallocation
	preemptible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan, Thanks! 
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Friday, December 13, 2013 10:00 PM
To: xen-devel
Cc: suravee.suthikulpanit@amd.com; Andrew Cooper; George Dunlap; Zhang, Xiantao; Keir Fraser
Subject: [PATCH v2 2/5] IOMMU: make page table deallocation preemptible

This too can take an arbitrary amount of time.

In fact, the bulk of the work is being moved to a tasklet, as handling the necessary preemption logic in line seems close to impossible given that the teardown may also be invoked on error paths.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: abstract out tasklet logic

--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -405,11 +405,21 @@ static int amd_iommu_assign_device(struc
     return reassign_device(dom0, d, devfn, pdev);
 }
 
-static void deallocate_next_page_table(struct page_info* pg, int level)
+static void deallocate_next_page_table(struct page_info *pg, int level) 
+{
+    PFN_ORDER(pg) = level;
+    spin_lock(&iommu_pt_cleanup_lock);
+    page_list_add_tail(pg, &iommu_pt_cleanup_list);
+    spin_unlock(&iommu_pt_cleanup_lock);
+}
+
+static void deallocate_page_table(struct page_info *pg)
 {
     void *table_vaddr, *pde;
     u64 next_table_maddr;
-    int index, next_level;
+    unsigned int index, level = PFN_ORDER(pg), next_level;
+
+    PFN_ORDER(pg) = 0;
 
     if ( level <= 1 )
     {
@@ -599,6 +609,7 @@ const struct iommu_ops amd_iommu_ops = {
     .teardown = amd_iommu_domain_destroy,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
+    .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .update_ire_from_apic = amd_iommu_ioapic_update_ire,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -58,6 +58,10 @@ bool_t __read_mostly amd_iommu_perdev_in
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
+PAGE_LIST_HEAD(iommu_pt_cleanup_list);
+static struct tasklet iommu_pt_cleanup_tasklet;
+
 static struct keyhandler iommu_p2m_table = {
     .diagnostic = 0,
     .u.fn = iommu_dump_p2m_table,
@@ -235,6 +239,15 @@ int iommu_remove_device(struct pci_dev *
     return hd->platform_ops->remove_device(pdev->devfn, pdev);
 }
 
+static void iommu_teardown(struct domain *d)
+{
+    const struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    d->need_iommu = 0;
+    hd->platform_ops->teardown(d);
+    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+}
+
 /*
  * If the device isn't owned by dom0, it means it already
  * has been assigned to other domain, or it doesn't exist.
@@ -309,10 +322,7 @@ static int assign_device(struct domain *
 
  done:
     if ( !has_arch_pdevs(d) && need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
     spin_unlock(&pcidevs_lock);
 
     return rc;
@@ -377,10 +387,7 @@ static int iommu_populate_page_table(str
     if ( !rc )
         iommu_iotlb_flush_all(d);
     else if ( rc != -ERESTART )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     return rc;
 }
@@ -397,10 +404,7 @@ void iommu_domain_destroy(struct domain 
         return;
 
     if ( need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
     {
@@ -438,6 +442,23 @@ int iommu_unmap_page(struct domain *d, u
     return hd->platform_ops->unmap_page(d, gfn);
 }
 
+static void iommu_free_pagetables(unsigned long unused)
+{
+    do {
+        struct page_info *pg;
+
+        spin_lock(&iommu_pt_cleanup_lock);
+        pg = page_list_remove_head(&iommu_pt_cleanup_list);
+        spin_unlock(&iommu_pt_cleanup_lock);
+        if ( !pg )
+            return;
+        iommu_get_ops()->free_page_table(pg);
+    } while ( !softirq_pending(smp_processor_id()) );
+
+    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
+                            cpumask_cycle(smp_processor_id(),
+                                          &cpu_online_map));
+}
+
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
@@ -500,10 +521,7 @@ int deassign_device(struct domain *d, u1
     pdev->fault.count = 0;
 
     if ( !has_arch_pdevs(d) && need_iommu(d) )
-    {
-        d->need_iommu = 0;
-        hd->platform_ops->teardown(d);
-    }
+        iommu_teardown(d);
 
     return ret;
 }
@@ -542,6 +560,7 @@ int __init iommu_setup(void)
                iommu_passthrough ? "Passthrough" :
                iommu_dom0_strict ? "Strict" : "Relaxed");
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
+        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, 0);
     }
 
     return rc;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -668,13 +668,24 @@ static void dma_pte_clear_one(struct dom
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
 {
-    int i;
-    struct dma_pte *pt_vaddr, *pte;
-    int next_level = level - 1;
+    struct page_info *pg = maddr_to_page(pt_maddr);
 
     if ( pt_maddr == 0 )
         return;
 
+    PFN_ORDER(pg) = level;
+    spin_lock(&iommu_pt_cleanup_lock);
+    page_list_add_tail(pg, &iommu_pt_cleanup_list);
+    spin_unlock(&iommu_pt_cleanup_lock);
+}
+
+static void iommu_free_page_table(struct page_info *pg)
+{
+    unsigned int i, next_level = PFN_ORDER(pg) - 1;
+    u64 pt_maddr = page_to_maddr(pg);
+    struct dma_pte *pt_vaddr, *pte;
+
+    PFN_ORDER(pg) = 0;
     pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
 
     for ( i = 0; i < PTE_NUM; i++ )
@@ -2430,6 +2441,7 @@ const struct iommu_ops intel_iommu_ops =
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
+    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .update_ire_from_apic = io_apic_write_remap_rte,
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -88,6 +88,7 @@ bool_t pt_irq_need_timer(uint32_t flags)
 
 struct msi_desc;
 struct msi_msg;
+struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
@@ -100,6 +101,7 @@ struct iommu_ops {
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
+    void (*free_page_table)(struct page_info *);
     int (*reassign_device)(struct domain *s, struct domain *t,
 			   u8 devfn, struct pci_dev *);
     int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
@@ -151,4 +153,7 @@ int adjust_vtd_irq_affinities(void);
  */
 DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
+extern struct spinlock iommu_pt_cleanup_lock;
+extern struct page_list_head iommu_pt_cleanup_list;
+
 #endif /* _IOMMU_H_ */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:52:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y1O-0000rt-KS; Tue, 07 Jan 2014 14:52:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Y1N-0000rX-2H
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:52:45 +0000
Received: from [85.158.137.68:13036] by server-9.bemta-3.messagelabs.com id
	7B/EB-13104-BB41CC25; Tue, 07 Jan 2014 14:52:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389106361!7737296!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30028 invoked from network); 7 Jan 2014 14:52:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 14:52:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90460610"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 14:52:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	09:52:40 -0500
Message-ID: <1389106359.12612.39.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 7 Jan 2014 14:52:39 +0000
In-Reply-To: <52CC20D50200007800111313@nat28.tlf.novell.com>
References: <1386846315-13299-1-git-send-email-bob.liu@oracle.com>
	<1386846315-13299-12-git-send-email-bob.liu@oracle.com>
	<20131213164405.GA11305@phenom.dumpdata.com>
	<52CBDAA3.2000403@oracle.com>
	<20140107142702.GC3588@phenom.dumpdata.com>
	<1389105127.12612.35.camel@kazak.uk.xensource.com>
	<52CC20D50200007800111313@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Bob Liu <lliubbo@gmail.com>, keir@xen.org, andrew.cooper3@citrix.com,
	james.harper@bendigoit.com.au, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 14:44 +0000, Jan Beulich wrote:
> >>> On 07.01.14 at 15:32, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> >> > 
> >> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> >> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> >> > >> There are several one-line functions in tmem_xen.h which are useless;
> >> > >> this patch embeds them into tmem.c directly.
> >> > >> Also modify void *tmem in struct domain to struct client *tmem_client
> >> > >> in order to make things more straightforward.
> >> > >>
> >> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> >> > >> ---
> >> > >>  xen/common/domain.c        |    4 ++--
> >> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
> >> > >>  xen/include/xen/sched.h    |    2 +-
> >> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> >> > > 
> >> > > Keir, are you OK with this simple name change?
> >> > > 
> >> > 
> >> > Ping..
> >> 
> >> Let's make sure his email is on the 'To' (I don't see it
> >> in my email?)
> > 
> > I haven't reviewed the patch or anything but it says "cleanup" -- I
> > think we are past that point of the release process, aren't we?
> 
> It has been pending for quite a while, and tmem isn't critical code,
> so I would tend towards taking it if we can get Keir's ack (if there
> wasn't that relatively trivial change to common code, I would have
> pulled the set already).

OK, I'm happy to follow your lead on that decision.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:53:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y1z-0000wn-1g; Tue, 07 Jan 2014 14:53:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Y1x-0000wN-6f
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:53:21 +0000
Received: from [85.158.137.68:20464] by server-8.bemta-3.messagelabs.com id
	41/62-31081-FD41CC25; Tue, 07 Jan 2014 14:53:19 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389106396!7685504!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8631 invoked from network); 7 Jan 2014 14:53:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:53:18 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07ErCxq023559
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:53:13 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ErCRj024808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:53:12 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ErBus014721; Tue, 7 Jan 2014 14:53:11 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:53:11 -0800
Message-ID: <52CC1500.8050104@oracle.com>
Date: Tue, 07 Jan 2014 09:53:52 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
 IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 05:25 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
>> Sent: 31 December 2013 19:10
>> To: Paul Durrant
>> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
>> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
>> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
>>
>> On 11/26/2013 11:41 AM, Paul Durrant wrote:
>>> This patch adds support for IPv6 checksum offload and GSO when those
>>> features are available in the backend.
>> Sorry for late review. Mostly style comments.
>>
> Thanks for the review.
>
> The checksum related code essentially needs to be a duplicate of that in netback and it seems wasteful to have the code in both places. Could this code be moved perhaps to net/core/dev.c? It's not specific to netback/netfront usage.

Will any of these routines be called for anything other than Xen 
networking?

I don't know about net/core/dev.c, but given the large amount of 
duplicate code between netfront and netback I think the factoring out 
should be done at least for these two, into xen-netcore.c or some such.

-boris


>
> Opinions?
>
>    Paul
>
>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Cc: David Vrabel <david.vrabel@citrix.com>
>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>> Cc: Wei Liu <wei.liu2@citrix.com>
>>> Cc: Annie Li <annie.li@oracle.com>
>>> ---
>>>
>>> v3:
>>>    - Addressed comments raised by Annie Li
>>>
>>> v2:
>>>    - Addressed comments raised by Ian Campbell
>>>
>>>    drivers/net/xen-netfront.c |  239 ++++++++++++++++++++++++++++++++++++++++----
>>>    include/linux/ipv6.h       |    2 +
>>>    2 files changed, 224 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>>> index dd1011e..fe747e4 100644
>>> --- a/drivers/net/xen-netfront.c
>>> +++ b/drivers/net/xen-netfront.c
>>> @@ -616,7 +616,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>>    		tx->flags |= XEN_NETTXF_extra_info;
>>>
>>>    		gso->u.gso.size = skb_shinfo(skb)->gso_size;
>>> -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
>>> +		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
>>> +			          XEN_NETIF_GSO_TYPE_TCPV6 :
>>> +			          XEN_NETIF_GSO_TYPE_TCPV4;
>>>    		gso->u.gso.pad = 0;
>>>    		gso->u.gso.features = 0;
>>>
>>> @@ -808,15 +810,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
>>>    		return -EINVAL;
>>>    	}
>>>
>>> -	/* Currently only TCPv4 S.O. is supported. */
>>> -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
>>> +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
>>> +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
>>>    		if (net_ratelimit())
>>>    			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
>>>    		return -EINVAL;
>>>    	}
>>>
>>>    	skb_shinfo(skb)->gso_size = gso->u.gso.size;
>>> -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
>>> +	skb_shinfo(skb)->gso_type =
>>> +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
>>> +		SKB_GSO_TCPV4 :
>>> +		SKB_GSO_TCPV6;
>>>
>>>    	/* Header must be checked, and gso_segs computed. */
>>>    	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
>>> @@ -856,11 +861,42 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
>>>    	return cons;
>>>    }
>>>
>>> -static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
>>> +static inline bool maybe_pull_tail(struct sk_buff *skb, unsigned int min,
>>> +				   unsigned int max)
>> Should this routine return error code instead of a boolean? Otherwise
>> it's not clear what "false" should mean --- whether it is that it failed
>> to pull or that the pull wasn't needed.
>>
>>>    {
>>> -	struct iphdr *iph;
>>> -	int err = -EPROTO;
>>> +	int target;
>>> +
>>> +	BUG_ON(max < min);
>>> +
>>> +	if (!skb_is_nonlinear(skb) || skb_headlen(skb) >= min)
>>> +		return true;
>>> +
>>> +	/* If we need to pullup then pullup to max, so we hopefully
>>> +	 * won't need to do it again.
>>> +	 */
>> Comment style.
>>
>>> +	target = min_t(int, skb->len, max);
>>> +	__pskb_pull_tail(skb, target - skb_headlen(skb));
>>> +
>>> +	if (skb_headlen(skb) < min) {
>> Why not explicitly check whether __pskb_pull_tail() returned NULL?
>>
>>> +		net_err_ratelimited("Failed to pullup packet header\n");
>>> +		return false;
>>> +	}
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +/* This value should be large enough to cover a tagged ethernet header plus
>>> + * maximally sized IP and TCP or UDP headers.
>>> + */
>> Comment style.
>>
>>> +#define MAX_IP_HEADER 128
>>> +
>>> +static int checksum_setup_ip(struct net_device *dev, struct sk_buff *skb)
>>> +{
>>> +	struct iphdr *iph = (void *)skb->data;
>>> +	unsigned int header_size;
>>> +	unsigned int off;
>>>    	int recalculate_partial_csum = 0;
>>> +	int err = -EPROTO;
>>>
>>>    	/*
>>>    	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
>>> @@ -879,40 +915,158 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
>>>    	if (skb->ip_summed != CHECKSUM_PARTIAL)
>>>    		return 0;
>>>
>>> -	if (skb->protocol != htons(ETH_P_IP))
>>> +	off = sizeof(struct iphdr);
>>> +
>>> +	header_size = skb->network_header + off;
>>> +	if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
>>>    		goto out;
>>>
>>> -	iph = (void *)skb->data;
>>> +	off = iph->ihl * 4;
>>>
>>>    	switch (iph->protocol) {
>>>    	case IPPROTO_TCP:
>>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
>>> +		if (!skb_partial_csum_set(skb, off,
>>>    					  offsetof(struct tcphdr, check)))
>>>    			goto out;
>>>
>>>    		if (recalculate_partial_csum) {
>>>    			struct tcphdr *tcph = tcp_hdr(skb);
>>> +
>>> +			header_size = skb->network_header +
>>> +				off +
>>> +				sizeof(struct tcphdr);
>> You can put these (off and sizeof) onto the same line.
>>
>>> +			if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
>>> +				goto out;
>>> +
>>>    			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
>>> -							 skb->len - iph->ihl*4,
>>> +							 skb->len - off,
>>>    							 IPPROTO_TCP, 0);
>>>    		}
>>>    		break;
>>>    	case IPPROTO_UDP:
>>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
>>> +		if (!skb_partial_csum_set(skb, off,
>>>    					  offsetof(struct udphdr, check)))
>>>    			goto out;
>>>
>>>    		if (recalculate_partial_csum) {
>>>    			struct udphdr *udph = udp_hdr(skb);
>>> +
>>> +			header_size = skb->network_header +
>>> +				off +
>>> +				sizeof(struct udphdr);
>>> +			if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
>>> +				goto out;
>>> +
>>>    			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
>>> -							 skb->len - iph->ihl*4,
>>> +							 skb->len - off,
>>>    							 IPPROTO_UDP, 0);
>>>    		}
>>>    		break;
>>>    	default:
>>> -		if (net_ratelimit())
>>> -			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
>>> -			       iph->protocol);
>>> +		net_err_ratelimited("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
>>> +				    iph->protocol);
>>> +		goto out;
>>> +	}
>>> +
>>> +	err = 0;
>>> +
>>> +out:
>>> +	return err;
>>> +}
>>> +
>>> +/* This value should be large enough to cover a tagged ethernet header plus
>>> + * an IPv6 header, all options, and a maximal TCP or UDP header.
>>> + */
>>> +#define MAX_IPV6_HEADER 256
>>> +
>>> +static int checksum_setup_ipv6(struct net_device *dev, struct sk_buff *skb)
>>> +{
>>> +	struct ipv6hdr *ipv6h = (void *)skb->data;
>>> +	u8 nexthdr;
>>> +	unsigned int header_size;
>>> +	unsigned int off;
>>> +	bool fragment;
>>> +	bool done;
>>> +	int err = -EPROTO;
>>> +
>>> +	done = false;
>> This should probably be moved down to the beginning of the while loop.
>> And you also need to initialize fragment to "false" (and possibly rename
>> it to is_fragment?)
>>
>>> +
>>> +	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
>>> +	if (skb->ip_summed != CHECKSUM_PARTIAL)
>>> +		return 0;
>>> +
>>> +	off = sizeof(struct ipv6hdr);
>>> +
>>> +	header_size = skb->network_header + off;
>>> +	if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
>>> +		goto out;
>>> +
>>> +	nexthdr = ipv6h->nexthdr;
>>> +
>>> +	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) &&
>>> +	       !done) {
>>> +		switch (nexthdr) {
>>> +		case IPPROTO_DSTOPTS:
>>> +		case IPPROTO_HOPOPTS:
>>> +		case IPPROTO_ROUTING: {
>>> +			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
>>> +
>>> +			header_size = skb->network_header +
>>> +				off +
>>> +				sizeof(struct ipv6_opt_hdr);
>> I'd merge the last two lines.
>>
>>> +			if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
>>> +				goto out;
>>> +
>>> +			nexthdr = hp->nexthdr;
>>> +			off += ipv6_optlen(hp);
>>> +			break;
>>> +		}
>>> +		case IPPROTO_AH: {
>>> +			struct ip_auth_hdr *hp = (void *)(skb->data + off);
>>> +
>>> +			header_size = skb->network_header +
>>> +				off +
>>> +				sizeof(struct ip_auth_hdr);
>> Here as well.
>>
>>> +			if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
>>> +				goto out;
>>> +
>>> +			nexthdr = hp->nexthdr;
>>> +			off += ipv6_ahlen(hp);
>>> +			break;
>>> +		}
>>> +		case IPPROTO_FRAGMENT:
>>> +			fragment = true;
>>> +			/* fall through */
>>> +		default:
>>> +			done = true;
>>> +			break;
>>> +		}
>>> +	}
>>> +
>>> +	if (!done) {
>>> +		net_err_ratelimited("Failed to parse packet header\n");
>>> +		goto out;
>>> +	}
>>> +
>>> +	if (fragment) {
>>> +		net_err_ratelimited("Packet is a fragment!\n");
>>> +		goto out;
>>> +	}
>>> +
>>> +	switch (nexthdr) {
>>> +	case IPPROTO_TCP:
>>> +		if (!skb_partial_csum_set(skb, off,
>>> +					  offsetof(struct tcphdr, check)))
>>> +			goto out;
>>> +		break;
>>> +	case IPPROTO_UDP:
>>> +		if (!skb_partial_csum_set(skb, off,
>>> +					  offsetof(struct udphdr, check)))
>>> +			goto out;
>>> +		break;
>>> +	default:
>>> +		net_err_ratelimited("Attempting to checksum a non-
>> TCP/UDP packet, dropping a protocol %d packet\n",
>>> +				    nexthdr);
>>>    		goto out;
>>>    	}
>>>
>>> @@ -922,6 +1076,25 @@ out:
>>>    	return err;
>>>    }
>>>
>>> +static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
>>> +{
>>> +	int err;
>> Initialize to -EPROTO (just to keep consistent with the rest of the file)
>>
>>> +
>>> +	switch (skb->protocol) {
>>> +	case htons(ETH_P_IP):
>>> +		err = checksum_setup_ip(dev, skb);
>>> +		break;
>>> +	case htons(ETH_P_IPV6):
>>> +		err = checksum_setup_ipv6(dev, skb);
>>> +		break;
>>> +	default:
>>> +		err = -EPROTO;
>>> +		break;
>>> +	}
>>> +
>>> +	return err;
>>> +}
>>> +
>>>    static int handle_incoming_queue(struct net_device *dev,
>>>    				 struct sk_buff_head *rxq)
>>>    {
>>> @@ -1232,6 +1405,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
>>>    			features &= ~NETIF_F_SG;
>>>    	}
>>>
>>> +	if (features & NETIF_F_IPV6_CSUM) {
>>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
>>> +				 "feature-ipv6-csum-offload", "%d", &val) <
>> 0)
>>> +			val = 0;
>>> +
>>> +		if (!val)
>>> +			features &= ~NETIF_F_IPV6_CSUM;
>>> +	}
>>> +
>>>    	if (features & NETIF_F_TSO) {
>>>    		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
>>>    				 "feature-gso-tcpv4", "%d", &val) < 0)
>>> @@ -1241,6 +1423,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
>>>    			features &= ~NETIF_F_TSO;
>>>    	}
>>>
>>> +	if (features & NETIF_F_TSO6) {
>>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
>>> +				 "feature-gso-tcpv6", "%d", &val) < 0)
>>> +			val = 0;
>>> +
>>> +		if (!val)
>>> +			features &= ~NETIF_F_TSO6;
>>> +	}
>>> +
>>>    	return features;
>>>    }
>>>
>>> @@ -1373,7 +1564,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>>>    	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>>>    	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>>>    				  NETIF_F_GSO_ROBUST;
>>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
>>> +	netdev->hw_features	= NETIF_F_SG |
>>> +		                  NETIF_F_IPV6_CSUM |
>>> +		                  NETIF_F_TSO | NETIF_F_TSO6;
>> Can you merge these three lines and stay under 80? If not, merge at least two of them.
>>
>>
>> -boris
>>
>>>    	/*
>>>             * Assume that all hw features are available for now. This set
>>> @@ -1751,6 +1944,18 @@ again:
>>>    		goto abort_transaction;
>>>    	}
>>>
>>> +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
>> "%d", 1);
>>> +	if (err) {
>>> +		message = "writing feature-gso-tcpv6";
>>> +		goto abort_transaction;
>>> +	}
>>> +
>>> +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-
>> offload", "%d", 1);
>>> +	if (err) {
>>> +		message = "writing feature-ipv6-csum-offload";
>>> +		goto abort_transaction;
>>> +	}
>>> +
>>>    	err = xenbus_transaction_end(xbt, 0);
>>>    	if (err) {
>>>    		if (err == -EAGAIN)
>>> diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
>>> index 5d89d1b..10f1b03 100644
>>> --- a/include/linux/ipv6.h
>>> +++ b/include/linux/ipv6.h
>>> @@ -4,6 +4,8 @@
>>>    #include <uapi/linux/ipv6.h>
>>>
>>>    #define ipv6_optlen(p)  (((p)->hdrlen+1) << 3)
>>> +#define ipv6_ahlen(p)   (((p)->hdrlen+2) << 2);
>>> +
>>>    /*
>>>     * This structure contains configuration options per IPv6 link.
>>>     */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:53:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y1z-0000wn-1g; Tue, 07 Jan 2014 14:53:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0Y1x-0000wN-6f
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 14:53:21 +0000
Received: from [85.158.137.68:20464] by server-8.bemta-3.messagelabs.com id
	41/62-31081-FD41CC25; Tue, 07 Jan 2014 14:53:19 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389106396!7685504!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8631 invoked from network); 7 Jan 2014 14:53:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:53:18 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07ErCxq023559
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:53:13 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ErCRj024808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:53:12 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ErBus014721; Tue, 7 Jan 2014 14:53:11 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:53:11 -0800
Message-ID: <52CC1500.8050104@oracle.com>
Date: Tue, 07 Jan 2014 09:53:52 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
 IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 05:25 AM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
>> Sent: 31 December 2013 19:10
>> To: Paul Durrant
>> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
>> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
>> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6 offloads
>>
>> On 11/26/2013 11:41 AM, Paul Durrant wrote:
>>> This patch adds support for IPv6 checksum offload and GSO when those
>>> features are available in the backend.
>> Sorry for late review. Mostly style comments.
>>
> Thanks for the review.
>
> The checksum-related code essentially needs to duplicate that in netback, and it seems wasteful to have the code in both places. Could this code perhaps be moved to net/core/dev.c? It's not specific to netback/netfront usage.

Will any of these routines be called for anything other than Xen 
networking?

I don't know about net/core/dev.c but given the large amount of 
duplicate code between netfront and netback I think factoring out should 
be done at least for these two. Into xen-netcore.c or some such.

-boris


>
> Opinions?
>
>    Paul



From xen-devel-bounces@lists.xen.org Tue Jan 07 14:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y72-0001Io-Vy; Tue, 07 Jan 2014 14:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Y71-0001Ij-Mh
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:58:35 +0000
Received: from [85.158.139.211:28191] by server-7.bemta-5.messagelabs.com id
	B5/62-04824-A161CC25; Tue, 07 Jan 2014 14:58:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389106712!8308927!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22588 invoked from network); 7 Jan 2014 14:58:34 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:58:34 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07EwDeF029667
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:58:14 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EwAL8005928
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:58:11 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EwASE005887; Tue, 7 Jan 2014 14:58:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:58:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3B7311C18DC; Tue,  7 Jan 2014 09:58:08 -0500 (EST)
Date: Tue, 7 Jan 2014 09:58:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>
Message-ID: <20140107145808.GA4111@phenom.dumpdata.com>
References: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-next@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	mingo@redhat.com, tglx@linutronix.de
Subject: Re: [Xen-devel] randconfig build error with next-20140107,
 in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 07:03:50AM -0700, Jim Davis wrote:
> Building with the attached random configuration file,
> 
> arch/x86/xen/grant-table.c: In function ‘xen_pvh_gnttab_setup’:
> arch/x86/xen/grant-table.c:181:2: error: implicit declaration of
> function ‘xen_pvh_domain’ [-Werror=implicit-function-declaration]
>   if (!xen_pvh_domain())
>   ^
> cc1: some warnings being treated as errors
> make[2]: *** [arch/x86/xen/grant-table.o] Error 1

And this fixes it:

From dc5fadd89408a562b6b55566af061ad551a03b5c Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 7 Jan 2014 09:56:06 -0500
Subject: [PATCH] xen/pvh: Fix compile issues with xen_pvh_domain()

Oddly enough it compiles for my ancient compiler but with
the supplied .config it does blow up. Fix is easy enough.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Reported-by: Jim Davis <jim.epost@gmail.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/grant-table.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 2d71979..103c93f 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -128,6 +128,7 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 #ifdef CONFIG_XEN_PVH
 #include <xen/balloon.h>
 #include <xen/events.h>
+#include <xen/xen.h>
 #include <linux/slab.h>
 static int __init xlated_setup_gnttab_pages(void)
 {
-- 
1.8.3.1

From xen-devel-bounces@lists.xen.org Tue Jan 07 14:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 14:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Y72-0001Io-Vy; Tue, 07 Jan 2014 14:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Y71-0001Ij-Mh
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 14:58:35 +0000
Received: from [85.158.139.211:28191] by server-7.bemta-5.messagelabs.com id
	B5/62-04824-A161CC25; Tue, 07 Jan 2014 14:58:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389106712!8308927!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22588 invoked from network); 7 Jan 2014 14:58:34 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 14:58:34 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07EwDeF029667
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 14:58:14 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EwAL8005928
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 14:58:11 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07EwASE005887; Tue, 7 Jan 2014 14:58:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 06:58:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3B7311C18DC; Tue,  7 Jan 2014 09:58:08 -0500 (EST)
Date: Tue, 7 Jan 2014 09:58:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>
Message-ID: <20140107145808.GA4111@phenom.dumpdata.com>
References: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhgQ7vMjLdE7GDbb5_eEXxsKtEg8L47PbBv2aX571qRYow@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, x86@kernel.org,
	linux-kernel@vger.kernel.org, linux-next@vger.kernel.org,
	david.vrabel@citrix.com, hpa@zytor.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	mingo@redhat.com, tglx@linutronix.de
Subject: Re: [Xen-devel] randconfig build error with next-20140107,
 in arch/x86/xen/grant-table.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 07:03:50AM -0700, Jim Davis wrote:
> Building with the attached random configuration file,
> 
> arch/x86/xen/grant-table.c: In function ‘xen_pvh_gnttab_setup’:
> arch/x86/xen/grant-table.c:181:2: error: implicit declaration of
> function ‘xen_pvh_domain’ [-Werror=implicit-function-declaration]
>   if (!xen_pvh_domain())
>   ^
> cc1: some warnings being treated as errors
> make[2]: *** [arch/x86/xen/grant-table.o] Error 1

And this fixes it:

From dc5fadd89408a562b6b55566af061ad551a03b5c Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 7 Jan 2014 09:56:06 -0500
Subject: [PATCH] xen/pvh: Fix compile issues with xen_pvh_domain()

Oddly enough it compiles for my ancient compiler but with
the supplied .config it does blow up. Fix is easy enough.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Reported-by: Jim Davis <jim.epost@gmail.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/grant-table.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 2d71979..103c93f 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -128,6 +128,7 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 #ifdef CONFIG_XEN_PVH
 #include <xen/balloon.h>
 #include <xen/events.h>
+#include <xen/xen.h>
 #include <linux/slab.h>
 static int __init xlated_setup_gnttab_pages(void)
 {
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:05:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:05:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0YDi-0001u2-U6; Tue, 07 Jan 2014 15:05:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0YDi-0001tv-5G
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:05:30 +0000
Received: from [85.158.137.68:48131] by server-10.bemta-3.messagelabs.com id
	EE/0F-23989-9B71CC25; Tue, 07 Jan 2014 15:05:29 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389107125!7684736!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29613 invoked from network); 7 Jan 2014 15:05:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:05:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90466581"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 15:05:25 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 10:05:24 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 16:05:23 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Thread-Topic: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
	IPv6 offloads
Thread-Index: AQHO6sf9BfCBQXbBGka9crYXUPAkJppu0YSAgAp9oNCAADsXAIAAEvQw
Date: Tue, 7 Jan 2014 15:05:22 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD01E4F72@AMSPEX01CL01.citrite.net>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
	<52CC1500.8050104@oracle.com>
In-Reply-To: <52CC1500.8050104@oracle.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
 IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: 07 January 2014 14:54
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Wei Liu; Ian Campbell;
> Annie Li; David Vrabel
> Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
> IPv6 offloads
> 
> On 01/07/2014 05:25 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> >> Sent: 31 December 2013 19:10
> >> To: Paul Durrant
> >> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
> >> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
> >> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6
> offloads
> >>
> >> On 11/26/2013 11:41 AM, Paul Durrant wrote:
> >>> This patch adds support for IPv6 checksum offload and GSO when those
> >>> features are available in the backend.
> >> Sorry for late review. Mostly style comments.
> >>
> > Thanks for the review.
> >
> > The checksum related code essentially needs to be a duplicate of that in
> netback and it seems wasteful to have the code in both places. Could this
> code be moved perhaps to net/core/dev.c? It's not specific to
> netback/netfront usage.
> 
> Will any of these routines be called for anything other than Xen
> networking?
> 

I guess similar logic must be duplicated in other drivers - I can't believe that netback and netfront are the only ones to want to know where the TCP/UDP checksum field is located.

> I don't know about net/core/dev.c but given the large amount of
> duplicate code between netfront and netback I think factoring out should
> be done at least for these two. Into xen-netcore.c or some such.
> 

That's probably a pragmatic first step; I'll do that and post a patch series as v4.

  Paul

> -boris
> 
> 
> >
> > Opinions?
> >
> >    Paul
> >
> >>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> >>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >>> Cc: David Vrabel <david.vrabel@citrix.com>
> >>> Cc: Ian Campbell <ian.campbell@citrix.com>
> >>> Cc: Wei Liu <wei.liu2@citrix.com>
> >>> Cc: Annie Li <annie.li@oracle.com>
> >>> ---
> >>>
> >>> v3:
> >>>    - Addressed comments raised by Annie Li
> >>>
> >>> v2:
> >>>    - Addressed comments raised by Ian Campbell
> >>>
> >>>    drivers/net/xen-netfront.c |  239
> >> ++++++++++++++++++++++++++++++++++++++++----
> >>>    include/linux/ipv6.h       |    2 +
> >>>    2 files changed, 224 insertions(+), 17 deletions(-)
> >>>
> >>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> >>> index dd1011e..fe747e4 100644
> >>> --- a/drivers/net/xen-netfront.c
> >>> +++ b/drivers/net/xen-netfront.c
> >>> @@ -616,7 +616,9 @@ static int xennet_start_xmit(struct sk_buff *skb,
> >> struct net_device *dev)
> >>>    		tx->flags |= XEN_NETTXF_extra_info;
> >>>
> >>>    		gso->u.gso.size = skb_shinfo(skb)->gso_size;
> >>> -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
> >>> +		gso->u.gso.type = (skb_shinfo(skb)->gso_type &
> >> SKB_GSO_TCPV6) ?
> >>> +			          XEN_NETIF_GSO_TYPE_TCPV6 :
> >>> +			          XEN_NETIF_GSO_TYPE_TCPV4;
> >>>    		gso->u.gso.pad = 0;
> >>>    		gso->u.gso.features = 0;
> >>>
> >>> @@ -808,15 +810,18 @@ static int xennet_set_skb_gso(struct sk_buff
> >> *skb,
> >>>    		return -EINVAL;
> >>>    	}
> >>>
> >>> -	/* Currently only TCPv4 S.O. is supported. */
> >>> -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
> >>> +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
> >>> +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
> >>>    		if (net_ratelimit())
> >>>    			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
> >>>    		return -EINVAL;
> >>>    	}
> >>>
> >>>    	skb_shinfo(skb)->gso_size = gso->u.gso.size;
> >>> -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> >>> +	skb_shinfo(skb)->gso_type =
> >>> +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
> >>> +		SKB_GSO_TCPV4 :
> >>> +		SKB_GSO_TCPV6;
> >>>
> >>>    	/* Header must be checked, and gso_segs computed. */
> >>>    	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
> >>> @@ -856,11 +861,42 @@ static RING_IDX xennet_fill_frags(struct
> >> netfront_info *np,
> >>>    	return cons;
> >>>    }
> >>>
> >>> -static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> >>> +static inline bool maybe_pull_tail(struct sk_buff *skb, unsigned int min,
> >>> +				   unsigned int max)
> >> Should this routine return error code instead of a boolean? Otherwise
> >> it's not clear what "false" should mean --- whether it is that it failed
> >> to pull or that the pull wasn't needed.
> >>
> >>>    {
> >>> -	struct iphdr *iph;
> >>> -	int err = -EPROTO;
> >>> +	int target;
> >>> +
> >>> +	BUG_ON(max < min);
> >>> +
> >>> +	if (!skb_is_nonlinear(skb) || skb_headlen(skb) >= min)
> >>> +		return true;
> >>> +
> >>> +	/* If we need to pullup then pullup to max, so we hopefully
> >>> +	 * won't need to do it again.
> >>> +	 */
> >> Comment style.
> >>
> >>> +	target = min_t(int, skb->len, max);
> >>> +	__pskb_pull_tail(skb, target - skb_headlen(skb));
> >>> +
> >>> +	if (skb_headlen(skb) < min) {
> >> Why not explicitly check whether__pskb_pull_tail() returned NULL ?
> >>
> >>> +		net_err_ratelimited("Failed to pullup packet header\n");
> >>> +		return false;
> >>> +	}
> >>> +
> >>> +	return true;
> >>> +}
> >>> +
> >>> +/* This value should be large enough to cover a tagged ethernet
> header
> >> plus
> >>> + * maximally sized IP and TCP or UDP headers.
> >>> + */
> >> Comment style.
> >>
> >>> +#define MAX_IP_HEADER 128
> >>> +
> >>> +static int checksum_setup_ip(struct net_device *dev, struct sk_buff
> >> *skb)
> >>> +{
> >>> +	struct iphdr *iph = (void *)skb->data;
> >>> +	unsigned int header_size;
> >>> +	unsigned int off;
> >>>    	int recalculate_partial_csum = 0;
> >>> +	int err = -EPROTO;
> >>>
> >>>    	/*
> >>>    	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
> >>> @@ -879,40 +915,158 @@ static int checksum_setup(struct net_device
> >> *dev, struct sk_buff *skb)
> >>>    	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >>>    		return 0;
> >>>
> >>> -	if (skb->protocol != htons(ETH_P_IP))
> >>> +	off = sizeof(struct iphdr);
> >>> +
> >>> +	header_size = skb->network_header + off;
> >>> +	if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> >>>    		goto out;
> >>>
> >>> -	iph = (void *)skb->data;
> >>> +	off = iph->ihl * 4;
> >>>
> >>>    	switch (iph->protocol) {
> >>>    	case IPPROTO_TCP:
> >>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>>    					  offsetof(struct tcphdr, check)))
> >>>    			goto out;
> >>>
> >>>    		if (recalculate_partial_csum) {
> >>>    			struct tcphdr *tcph = tcp_hdr(skb);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct tcphdr);
> >> You can put these (off and sizeof) onto the same line.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IP_HEADER))
> >>> +				goto out;
> >>> +
> >>>    			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph-
> >>> daddr,
> >>> -							 skb->len - iph->ihl*4,
> >>> +							 skb->len - off,
> >>>    							 IPPROTO_TCP, 0);
> >>>    		}
> >>>    		break;
> >>>    	case IPPROTO_UDP:
> >>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>>    					  offsetof(struct udphdr, check)))
> >>>    			goto out;
> >>>
> >>>    		if (recalculate_partial_csum) {
> >>>    			struct udphdr *udph = udp_hdr(skb);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct udphdr);
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IP_HEADER))
> >>> +				goto out;
> >>> +
> >>>    			udph->check = ~csum_tcpudp_magic(iph->saddr,
> >> iph->daddr,
> >>> -							 skb->len - iph->ihl*4,
> >>> +							 skb->len - off,
> >>>    							 IPPROTO_UDP, 0);
> >>>    		}
> >>>    		break;
> >>>    	default:
> >>> -		if (net_ratelimit())
> >>> -			pr_err("Attempting to checksum a non-TCP/UDP
> >> packet, dropping a protocol %d packet\n",
> >>> -			       iph->protocol);
> >>> +		net_err_ratelimited("Attempting to checksum a non-
> >> TCP/UDP packet, dropping a protocol %d packet\n",
> >>> +				    iph->protocol);
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	err = 0;
> >>> +
> >>> +out:
> >>> +	return err;
> >>> +}
> >>> +
> >>> +/* This value should be large enough to cover a tagged ethernet
> header
> >> plus
> >>> + * an IPv6 header, all options, and a maximal TCP or UDP header.
> >>> + */
> >>> +#define MAX_IPV6_HEADER 256
> >>> +
> >>> +static int checksum_setup_ipv6(struct net_device *dev, struct sk_buff
> >> *skb)
> >>> +{
> >>> +	struct ipv6hdr *ipv6h = (void *)skb->data;
> >>> +	u8 nexthdr;
> >>> +	unsigned int header_size;
> >>> +	unsigned int off;
> >>> +	bool fragment;
> >>> +	bool done;
> >>> +	int err = -EPROTO;
> >>> +
> >>> +	done = false;
> >> This should probably be moved down to the beginning of the while loop.
> >> And you also need to initialize fragment to "false" (and possibly rename
> >> it to is_fragment?)
> >>
> >>> +
> >>> +	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
> >>> +	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >>> +		return 0;
> >>> +
> >>> +	off = sizeof(struct ipv6hdr);
> >>> +
> >>> +	header_size = skb->network_header + off;
> >>> +	if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> >>> +		goto out;
> >>> +
> >>> +	nexthdr = ipv6h->nexthdr;
> >>> +
> >>> +	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len))
> >> &&
> >>> +	       !done) {
> >>> +		switch (nexthdr) {
> >>> +		case IPPROTO_DSTOPTS:
> >>> +		case IPPROTO_HOPOPTS:
> >>> +		case IPPROTO_ROUTING: {
> >>> +			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct ipv6_opt_hdr);
> >> I'd merge the last two lines.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IPV6_HEADER))
> >>> +				goto out;
> >>> +
> >>> +			nexthdr = hp->nexthdr;
> >>> +			off += ipv6_optlen(hp);
> >>> +			break;
> >>> +		}
> >>> +		case IPPROTO_AH: {
> >>> +			struct ip_auth_hdr *hp = (void *)(skb->data + off);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct ip_auth_hdr);
> >> Here as well.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IPV6_HEADER))
> >>> +				goto out;
> >>> +
> >>> +			nexthdr = hp->nexthdr;
> >>> +			off += ipv6_ahlen(hp);
> >>> +			break;
> >>> +		}
> >>> +		case IPPROTO_FRAGMENT:
> >>> +			fragment = true;
> >>> +			/* fall through */
> >>> +		default:
> >>> +			done = true;
> >>> +			break;
> >>> +		}
> >>> +	}
> >>> +
> >>> +	if (!done) {
> >>> +		net_err_ratelimited("Failed to parse packet header\n");
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	if (fragment) {
> >>> +		net_err_ratelimited("Packet is a fragment!\n");
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	switch (nexthdr) {
> >>> +	case IPPROTO_TCP:
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>> +					  offsetof(struct tcphdr, check)))
> >>> +			goto out;
> >>> +		break;
> >>> +	case IPPROTO_UDP:
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>> +					  offsetof(struct udphdr, check)))
> >>> +			goto out;
> >>> +		break;
> >>> +	default:
> >>> +		net_err_ratelimited("Attempting to checksum a non-
> >> TCP/UDP packet, dropping a protocol %d packet\n",
> >>> +				    nexthdr);
> >>>    		goto out;
> >>>    	}
> >>>
> >>> @@ -922,6 +1076,25 @@ out:
> >>>    	return err;
> >>>    }
> >>>
> >>> +static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> >>> +{
> >>> +	int err;
> >> Initialize to -EPROTO (just to keep consistent with the rest of the file)
> >>
> >>> +
> >>> +	switch (skb->protocol) {
> >>> +	case htons(ETH_P_IP):
> >>> +		err = checksum_setup_ip(dev, skb);
> >>> +		break;
> >>> +	case htons(ETH_P_IPV6):
> >>> +		err = checksum_setup_ipv6(dev, skb);
> >>> +		break;
> >>> +	default:
> >>> +		err = -EPROTO;
> >>> +		break;
> >>> +	}
> >>> +
> >>> +	return err;
> >>> +}
> >>> +
> >>>    static int handle_incoming_queue(struct net_device *dev,
> >>>    				 struct sk_buff_head *rxq)
> >>>    {
> >>> @@ -1232,6 +1405,15 @@ static netdev_features_t
> >> xennet_fix_features(struct net_device *dev,
> >>>    			features &= ~NETIF_F_SG;
> >>>    	}
> >>>
> >>> +	if (features & NETIF_F_IPV6_CSUM) {
> >>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>> +				 "feature-ipv6-csum-offload", "%d", &val) <
> >> 0)
> >>> +			val = 0;
> >>> +
> >>> +		if (!val)
> >>> +			features &= ~NETIF_F_IPV6_CSUM;
> >>> +	}
> >>> +
> >>>    	if (features & NETIF_F_TSO) {
> >>>    		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>>    				 "feature-gso-tcpv4", "%d", &val) < 0)
> >>> @@ -1241,6 +1423,15 @@ static netdev_features_t
> >> xennet_fix_features(struct net_device *dev,
> >>>    			features &= ~NETIF_F_TSO;
> >>>    	}
> >>>
> >>> +	if (features & NETIF_F_TSO6) {
> >>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>> +				 "feature-gso-tcpv6", "%d", &val) < 0)
> >>> +			val = 0;
> >>> +
> >>> +		if (!val)
> >>> +			features &= ~NETIF_F_TSO6;
> >>> +	}
> >>> +
> >>>    	return features;
> >>>    }
> >>>
> >>> @@ -1373,7 +1564,9 @@ static struct net_device
> >> *xennet_create_dev(struct xenbus_device *dev)
> >>>    	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >>>    	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >>>    				  NETIF_F_GSO_ROBUST;
> >>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG |
> >> NETIF_F_TSO;
> >>> +	netdev->hw_features	= NETIF_F_SG |
> >>> +		                  NETIF_F_IPV6_CSUM |
> >>> +		                  NETIF_F_TSO | NETIF_F_TSO6;
> >> Can you merge these three lines and stay under 80? If not, merge either
> >> of the two of them.
> >>
> >>
> >> -boris
> >>
> >>>    	/*
> >>>             * Assume that all hw features are available for now. This set
> >>> @@ -1751,6 +1944,18 @@ again:
> >>>    		goto abort_transaction;
> >>>    	}
> >>>
> >>> +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
> >> "%d", 1);
> >>> +	if (err) {
> >>> +		message = "writing feature-gso-tcpv6";
> >>> +		goto abort_transaction;
> >>> +	}
> >>> +
> >>> +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-
> >> offload", "%d", 1);
> >>> +	if (err) {
> >>> +		message = "writing feature-ipv6-csum-offload";
> >>> +		goto abort_transaction;
> >>> +	}
> >>> +
> >>>    	err = xenbus_transaction_end(xbt, 0);
> >>>    	if (err) {
> >>>    		if (err == -EAGAIN)
> >>> diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
> >>> index 5d89d1b..10f1b03 100644
> >>> --- a/include/linux/ipv6.h
> >>> +++ b/include/linux/ipv6.h
> >>> @@ -4,6 +4,8 @@
> >>>    #include <uapi/linux/ipv6.h>
> >>>
> >>>    #define ipv6_optlen(p)  (((p)->hdrlen+1) << 3)
> >>> +#define ipv6_ahlen(p)   (((p)->hdrlen+2) << 2);
> >>> +
> >>>    /*
> >>>     * This structure contains configuration options per IPv6 link.
> >>>     */
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >>> @@ -856,11 +861,42 @@ static RING_IDX xennet_fill_frags(struct
> >> netfront_info *np,
> >>>    	return cons;
> >>>    }
> >>>
> >>> -static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> >>> +static inline bool maybe_pull_tail(struct sk_buff *skb, unsigned int min,
> >>> +				   unsigned int max)
> >> Should this routine return error code instead of a boolean? Otherwise
> >> it's not clear what "false" should mean --- whether it is that it failed
> >> to pull or that the pull wasn't needed.
> >>
> >>>    {
> >>> -	struct iphdr *iph;
> >>> -	int err = -EPROTO;
> >>> +	int target;
> >>> +
> >>> +	BUG_ON(max < min);
> >>> +
> >>> +	if (!skb_is_nonlinear(skb) || skb_headlen(skb) >= min)
> >>> +		return true;
> >>> +
> >>> +	/* If we need to pullup then pullup to max, so we hopefully
> >>> +	 * won't need to do it again.
> >>> +	 */
> >> Comment style.
> >>
> >>> +	target = min_t(int, skb->len, max);
> >>> +	__pskb_pull_tail(skb, target - skb_headlen(skb));
> >>> +
> >>> +	if (skb_headlen(skb) < min) {
> >> Why not explicitly check whether __pskb_pull_tail() returned NULL ?
> >>
> >>> +		net_err_ratelimited("Failed to pullup packet header\n");
> >>> +		return false;
> >>> +	}
> >>> +
> >>> +	return true;
> >>> +}
> >>> +
> >>> +/* This value should be large enough to cover a tagged ethernet
> header
> >> plus
> >>> + * maximally sized IP and TCP or UDP headers.
> >>> + */
> >> Comment style.
> >>
> >>> +#define MAX_IP_HEADER 128
> >>> +
> >>> +static int checksum_setup_ip(struct net_device *dev, struct sk_buff
> >> *skb)
> >>> +{
> >>> +	struct iphdr *iph = (void *)skb->data;
> >>> +	unsigned int header_size;
> >>> +	unsigned int off;
> >>>    	int recalculate_partial_csum = 0;
> >>> +	int err = -EPROTO;
> >>>
> >>>    	/*
> >>>    	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
> >>> @@ -879,40 +915,158 @@ static int checksum_setup(struct net_device
> >> *dev, struct sk_buff *skb)
> >>>    	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >>>    		return 0;
> >>>
> >>> -	if (skb->protocol != htons(ETH_P_IP))
> >>> +	off = sizeof(struct iphdr);
> >>> +
> >>> +	header_size = skb->network_header + off;
> >>> +	if (!maybe_pull_tail(skb, header_size, MAX_IP_HEADER))
> >>>    		goto out;
> >>>
> >>> -	iph = (void *)skb->data;
> >>> +	off = iph->ihl * 4;
> >>>
> >>>    	switch (iph->protocol) {
> >>>    	case IPPROTO_TCP:
> >>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>>    					  offsetof(struct tcphdr, check)))
> >>>    			goto out;
> >>>
> >>>    		if (recalculate_partial_csum) {
> >>>    			struct tcphdr *tcph = tcp_hdr(skb);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct tcphdr);
> >> You can put these (off and sizeof) onto the same line.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IP_HEADER))
> >>> +				goto out;
> >>> +
> >>>    			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph-
> >>> daddr,
> >>> -							 skb->len - iph->ihl*4,
> >>> +							 skb->len - off,
> >>>    							 IPPROTO_TCP, 0);
> >>>    		}
> >>>    		break;
> >>>    	case IPPROTO_UDP:
> >>> -		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>>    					  offsetof(struct udphdr, check)))
> >>>    			goto out;
> >>>
> >>>    		if (recalculate_partial_csum) {
> >>>    			struct udphdr *udph = udp_hdr(skb);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct udphdr);
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IP_HEADER))
> >>> +				goto out;
> >>> +
> >>>    			udph->check = ~csum_tcpudp_magic(iph->saddr,
> >> iph->daddr,
> >>> -							 skb->len - iph->ihl*4,
> >>> +							 skb->len - off,
> >>>    							 IPPROTO_UDP, 0);
> >>>    		}
> >>>    		break;
> >>>    	default:
> >>> -		if (net_ratelimit())
> >>> -			pr_err("Attempting to checksum a non-TCP/UDP
> >> packet, dropping a protocol %d packet\n",
> >>> -			       iph->protocol);
> >>> +		net_err_ratelimited("Attempting to checksum a non-
> >> TCP/UDP packet, dropping a protocol %d packet\n",
> >>> +				    iph->protocol);
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	err = 0;
> >>> +
> >>> +out:
> >>> +	return err;
> >>> +}
> >>> +
> >>> +/* This value should be large enough to cover a tagged ethernet
> header
> >> plus
> >>> + * an IPv6 header, all options, and a maximal TCP or UDP header.
> >>> + */
> >>> +#define MAX_IPV6_HEADER 256
> >>> +
> >>> +static int checksum_setup_ipv6(struct net_device *dev, struct sk_buff
> >> *skb)
> >>> +{
> >>> +	struct ipv6hdr *ipv6h = (void *)skb->data;
> >>> +	u8 nexthdr;
> >>> +	unsigned int header_size;
> >>> +	unsigned int off;
> >>> +	bool fragment;
> >>> +	bool done;
> >>> +	int err = -EPROTO;
> >>> +
> >>> +	done = false;
> >> This should probably be moved down to the beginning of the while loop.
> >> And you also need to initialize fragment to "false" (and possibly rename
> >> it to is_fragment?)
> >>
> >>> +
> >>> +	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
> >>> +	if (skb->ip_summed != CHECKSUM_PARTIAL)
> >>> +		return 0;
> >>> +
> >>> +	off = sizeof(struct ipv6hdr);
> >>> +
> >>> +	header_size = skb->network_header + off;
> >>> +	if (!maybe_pull_tail(skb, header_size, MAX_IPV6_HEADER))
> >>> +		goto out;
> >>> +
> >>> +	nexthdr = ipv6h->nexthdr;
> >>> +
> >>> +	while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len))
> >> &&
> >>> +	       !done) {
> >>> +		switch (nexthdr) {
> >>> +		case IPPROTO_DSTOPTS:
> >>> +		case IPPROTO_HOPOPTS:
> >>> +		case IPPROTO_ROUTING: {
> >>> +			struct ipv6_opt_hdr *hp = (void *)(skb->data + off);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct ipv6_opt_hdr);
> >> I'd merge the last two lines.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IPV6_HEADER))
> >>> +				goto out;
> >>> +
> >>> +			nexthdr = hp->nexthdr;
> >>> +			off += ipv6_optlen(hp);
> >>> +			break;
> >>> +		}
> >>> +		case IPPROTO_AH: {
> >>> +			struct ip_auth_hdr *hp = (void *)(skb->data + off);
> >>> +
> >>> +			header_size = skb->network_header +
> >>> +				off +
> >>> +				sizeof(struct ip_auth_hdr);
> >> Here as well.
> >>
> >>> +			if (!maybe_pull_tail(skb, header_size,
> >> MAX_IPV6_HEADER))
> >>> +				goto out;
> >>> +
> >>> +			nexthdr = hp->nexthdr;
> >>> +			off += ipv6_ahlen(hp);
> >>> +			break;
> >>> +		}
> >>> +		case IPPROTO_FRAGMENT:
> >>> +			fragment = true;
> >>> +			/* fall through */
> >>> +		default:
> >>> +			done = true;
> >>> +			break;
> >>> +		}
> >>> +	}
> >>> +
> >>> +	if (!done) {
> >>> +		net_err_ratelimited("Failed to parse packet header\n");
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	if (fragment) {
> >>> +		net_err_ratelimited("Packet is a fragment!\n");
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	switch (nexthdr) {
> >>> +	case IPPROTO_TCP:
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>> +					  offsetof(struct tcphdr, check)))
> >>> +			goto out;
> >>> +		break;
> >>> +	case IPPROTO_UDP:
> >>> +		if (!skb_partial_csum_set(skb, off,
> >>> +					  offsetof(struct udphdr, check)))
> >>> +			goto out;
> >>> +		break;
> >>> +	default:
> >>> +		net_err_ratelimited("Attempting to checksum a non-
> >> TCP/UDP packet, dropping a protocol %d packet\n",
> >>> +				    nexthdr);
> >>>    		goto out;
> >>>    	}
> >>>
> >>> @@ -922,6 +1076,25 @@ out:
> >>>    	return err;
> >>>    }
> >>>
> >>> +static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
> >>> +{
> >>> +	int err;
> >> Initialize to -EPROTO (just to keep consistent with the rest of the file)
> >>
> >>> +
> >>> +	switch (skb->protocol) {
> >>> +	case htons(ETH_P_IP):
> >>> +		err = checksum_setup_ip(dev, skb);
> >>> +		break;
> >>> +	case htons(ETH_P_IPV6):
> >>> +		err = checksum_setup_ipv6(dev, skb);
> >>> +		break;
> >>> +	default:
> >>> +		err = -EPROTO;
> >>> +		break;
> >>> +	}
> >>> +
> >>> +	return err;
> >>> +}
> >>> +
> >>>    static int handle_incoming_queue(struct net_device *dev,
> >>>    				 struct sk_buff_head *rxq)
> >>>    {
> >>> @@ -1232,6 +1405,15 @@ static netdev_features_t
> >> xennet_fix_features(struct net_device *dev,
> >>>    			features &= ~NETIF_F_SG;
> >>>    	}
> >>>
> >>> +	if (features & NETIF_F_IPV6_CSUM) {
> >>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>> +				 "feature-ipv6-csum-offload", "%d", &val) <
> >> 0)
> >>> +			val = 0;
> >>> +
> >>> +		if (!val)
> >>> +			features &= ~NETIF_F_IPV6_CSUM;
> >>> +	}
> >>> +
> >>>    	if (features & NETIF_F_TSO) {
> >>>    		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>>    				 "feature-gso-tcpv4", "%d", &val) < 0)
> >>> @@ -1241,6 +1423,15 @@ static netdev_features_t
> >> xennet_fix_features(struct net_device *dev,
> >>>    			features &= ~NETIF_F_TSO;
> >>>    	}
> >>>
> >>> +	if (features & NETIF_F_TSO6) {
> >>> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >>> +				 "feature-gso-tcpv6", "%d", &val) < 0)
> >>> +			val = 0;
> >>> +
> >>> +		if (!val)
> >>> +			features &= ~NETIF_F_TSO6;
> >>> +	}
> >>> +
> >>>    	return features;
> >>>    }
> >>>
> >>> @@ -1373,7 +1564,9 @@ static struct net_device
> >> *xennet_create_dev(struct xenbus_device *dev)
> >>>    	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >>>    	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >>>    				  NETIF_F_GSO_ROBUST;
> >>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG |
> >> NETIF_F_TSO;
> >>> +	netdev->hw_features	= NETIF_F_SG |
> >>> +		                  NETIF_F_IPV6_CSUM |
> >>> +		                  NETIF_F_TSO | NETIF_F_TSO6;
> >> Can you merge these three lines and stay under 80? If not, merge either
> >> of the two of them.
> >>
> >>
> >> -boris
> >>
> >>>    	/*
> >>>             * Assume that all hw features are available for now. This set
> >>> @@ -1751,6 +1944,18 @@ again:
> >>>    		goto abort_transaction;
> >>>    	}
> >>>
> >>> +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
> >> "%d", 1);
> >>> +	if (err) {
> >>> +		message = "writing feature-gso-tcpv6";
> >>> +		goto abort_transaction;
> >>> +	}
> >>> +
> >>> +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-
> >> offload", "%d", 1);
> >>> +	if (err) {
> >>> +		message = "writing feature-ipv6-csum-offload";
> >>> +		goto abort_transaction;
> >>> +	}
> >>> +
> >>>    	err = xenbus_transaction_end(xbt, 0);
> >>>    	if (err) {
> >>>    		if (err == -EAGAIN)
> >>> diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
> >>> index 5d89d1b..10f1b03 100644
> >>> --- a/include/linux/ipv6.h
> >>> +++ b/include/linux/ipv6.h
> >>> @@ -4,6 +4,8 @@
> >>>    #include <uapi/linux/ipv6.h>
> >>>
> >>>    #define ipv6_optlen(p)  (((p)->hdrlen+1) << 3)
> >>> +#define ipv6_ahlen(p)   (((p)->hdrlen+2) << 2);
> >>> +
> >>>    /*
> >>>     * This structure contains configuration options per IPv6 link.
> >>>     */
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:11:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:11:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0YJi-0002U9-TS; Tue, 07 Jan 2014 15:11:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0YJh-0002U4-Mt
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:11:41 +0000
Received: from [85.158.137.68:25238] by server-17.bemta-3.messagelabs.com id
	24/F0-15965-C291CC25; Tue, 07 Jan 2014 15:11:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389107498!6943135!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25423 invoked from network); 7 Jan 2014 15:11:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:11:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90471903"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 15:11:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 10:11:38 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0YJc-0004NS-W1;
	Tue, 07 Jan 2014 15:11:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0YJc-0003Ez-MX;
	Tue, 07 Jan 2014 15:11:36 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.6440.529434.66793@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 15:11:36 +0000
To: Rob Hoes <rob.hoes@citrix.com>
In-Reply-To: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
References: <1386866211-12639-4-git-send-email-rob.hoes@citrix.com>
	<1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: dave.scott@eu.citrix.com, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/3] libxl: ocaml: use
	'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH v2 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> This allows the application to pass a token to libxl in the fd/timeout
> registration callbacks, which it receives back in modification or
> deregistration callbacks.
> 
> It turns out that this is essential for timeout handling, in order to
> identify which timeout to change on a modify event.
> 
> Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
...
> -	caml_callbackN(*func, 4, args);
> +	for_app = malloc(sizeof(value));
> +	if (!for_app) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;
> +	}
> +
> +	*for_app = caml_callbackN_exn(*func, 4, args);
> +	if (Is_exception_result(*for_app)) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;
> +	}
> +
> +	caml_register_global_root(for_app);
> +	*for_app_registration_out = for_app;

I expect you have thought this through properly, and perhaps even
explained it already, but: is the ordering of these operations
(particularly, of the caml_register_global_root) guaranteed to be
correct ?

Eg, can Is_exception_result call the gc ?

>  int fd_modify(void *user, int fd, void **for_app_registration_update,
> @@ -1241,9 +1263,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
>  {
...
> +	/* If for_app == NULL, assume that something is wrong */
> +	assert(for_app);

While I'm reading this in detail, this is a slightly odd wording for
the comment, now that this is an assertion.  You probably mean
something like "If for_app == NULL, something is very wrong".
(Another occurrence of this later.)

>  void fd_deregister(void *user, int fd, void *for_app_registration)
>  {
...
> +	caml_callbackN_exn(*func, 3, args);
> +	/* If the callback were to raise an exception, this will be ignored;
> +	 * this hook does not return error codes */

Can you not do anything better here ?  I think crashing the whole
application would be better than carrying on and later calling back
into libxl with a stale for_libxl pointer!
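[Editor's note: one way to implement the fail-hard behaviour suggested above, as a sketch only. The runtime functions are stubbed here so the fragment compiles standalone; the real hook would call the OCaml runtime's caml_callbackN_exn and Is_exception_result directly.]

```c
#include <stdio.h>
#include <stdlib.h>

typedef long value;

/* Stand-ins for the OCaml runtime, so this sketch compiles on its own. */
static value caml_callbackN_exn(value func, int n, value *args)
{ (void)func; (void)n; return args[0]; }
static int Is_exception_result(value v) { return v < 0; }

/* fd_deregister has no error return, so if the OCaml hook raises there is
 * no way to report failure to libxl; failing hard beats carrying on and
 * later calling back into libxl with a stale for_libxl pointer. */
static void run_deregister_hook(value func, value *args)
{
	value ret = caml_callbackN_exn(func, 3, args);

	if (Is_exception_result(ret)) {
		fprintf(stderr, "fd_deregister: OCaml callback raised; aborting\n");
		abort();
	}
}
```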

> -	caml_callbackN(*func, 4, args);
> +	for_app = malloc(sizeof(value));
> +	if (!for_app) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;
> +	}
> +
> +	*for_app = caml_callbackN_exn(*func, 4, args);
> +	if (Is_exception_result(*for_app)) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;
> +	}
> +
> +	caml_register_global_root(for_app);
> +	*for_app_registration_out = for_app;

Aren't these functions getting incredibly formulaic ?  I guess it is
too late for 4.4 but if possible, later, I would like to see the
common stuff factored out.
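[Editor's note: the repeated malloc / callback / register-global-root sequence could plausibly be factored into one helper along these lines. This is a sketch: the runtime calls are stubbed so it compiles standalone (the real helper would use caml_callbackN_exn, Is_exception_result and caml_register_global_root from the OCaml runtime), the helper name is invented, and the ERROR_OSEVENT_REG_FAIL value is a placeholder for libxl's.]

```c
#include <stdlib.h>

typedef long value;
#define ERROR_OSEVENT_REG_FAIL (-1)  /* placeholder for libxl's error code */

/* Stand-ins for the OCaml runtime, so this sketch compiles on its own. */
static value caml_callbackN_exn(value func, int n, value *args)
{ (void)func; (void)n; return args[0]; }
static int Is_exception_result(value v) { return v < 0; }
static void caml_register_global_root(value *r) { (void)r; }

/* One helper for the formulaic "call back into OCaml, box the result as a
 * GC root, hand it to libxl" sequence shared by the register hooks. */
static int osevent_callback_to_root(value func, int nargs, value *args,
				    value **for_app_out)
{
	value *for_app = malloc(sizeof(value));

	if (!for_app)
		return ERROR_OSEVENT_REG_FAIL;

	*for_app = caml_callbackN_exn(func, nargs, args);
	if (Is_exception_result(*for_app)) {
		free(for_app);
		return ERROR_OSEVENT_REG_FAIL;
	}

	caml_register_global_root(for_app);
	*for_app_out = for_app;
	return 0;
}
```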

>  int timeout_modify(void *user, void **for_app_registration_update,
> @@ -1315,18 +1382,43 @@ int timeout_modify(void *user, void **for_app_registration_update,
>  {
>  	caml_leave_blocking_section();
>  	CAMLparam0();
> +	CAMLlocalN(args, 2);
> +	int ret = 0;
...
> +	/* This modify hook causes the timeout to fire immediately. Deregister
> +	 * won't be called, so we clean up our GC registration here. */
> +	caml_remove_global_root(for_app);
> +	free(for_app);
> +	*for_app_registration_update = NULL;

This can't be right, because what the timeout modify callback is
supposed to do is arrange for stub_libxl_osevent_occurred_timeout to
be called.

And looking at that, I see that stub_libxl_osevent_occurred_timeout
doesn't destroy the for_app.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:11:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0YJr-0002Uy-9f; Tue, 07 Jan 2014 15:11:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W0YJp-0002UY-6v
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:11:49 +0000
Received: from [85.158.139.211:18193] by server-17.bemta-5.messagelabs.com id
	CC/B1-19152-4391CC25; Tue, 07 Jan 2014 15:11:48 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389107507!8352448!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20297 invoked from network); 7 Jan 2014 15:11:48 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:11:48 -0000
Received: by mail-lb0-f169.google.com with SMTP id u14so371484lbd.0
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 07:11:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=Ff+hdb1NfzB7gJXZNpwDTQJ7oIZvbyDLSjoGKZOyBUc=;
	b=PbX5Vlw/Vc3qtX5C6uuiNtR4HeYPggA0hhQLw/P5VbQJNLosAmj16cAXkpr1H5zyF6
	R2AVa5xn3SIm/nkQj9cm24jEvE6gy2jjMqE0G4+8EF8SjyK7EnKwILg8yVS68SqedY3B
	JYsvat2E5IYNAf8FpKl5n0prizyo7Q8Yub1O2j0MFiy6wyqmAZlwL6dXmLScTFratO1W
	LmoCKoLHpXjYoSZOPcovcMyPjEjYd9UCk/XluZMQoUsULCOR8fSAawu4vl83mSSnJncp
	xCHQWDALqcpE57jzlqiEM0MWUp7zfNZ1AkcLyXQfMH35IrvBcPHfDfsGjyr6/FxmBQYp
	x4IA==
X-Gm-Message-State: ALoCoQkxNjOsn7WlzLORrUvguz592Uz2Pb9XHig2DFS3nF1sjCZMMVxYVDu22VHRjQ94vO5VfLol
X-Received: by 10.112.167.42 with SMTP id zl10mr235154lbb.92.1389107507267;
	Tue, 07 Jan 2014 07:11:47 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.125.3 with HTTP; Tue, 7 Jan 2014 07:11:27 -0800 (PST)
In-Reply-To: <52CC0614.5050402@redhat.com>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 7 Jan 2014 15:11:27 +0000
Message-ID: <CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 7 January 2014 13:50, Paolo Bonzini <pbonzini@redhat.com> wrote:
> So let's call things by their name and add qemu-system-xenpv that covers
> both x86 and ARM and anything else in the future.

How is this going to work? Do you define a fake architecture
name "xenpv" ?  I guess we'll see what the patches look like...

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 7 Jan 2014 15:11:27 +0000
Message-ID: <CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 7 January 2014 13:50, Paolo Bonzini <pbonzini@redhat.com> wrote:
> So let's call things by their name and add qemu-system-xenpv that covers
> both x86 and ARM and anything else in the future.

How is this going to work? Do you define a fake architecture
name "xenpv"? I guess we'll see what the patches look like...

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:12:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:12:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0YKo-0002et-Uq; Tue, 07 Jan 2014 15:12:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0YKn-0002eh-Kb
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:12:49 +0000
Received: from [85.158.139.211:35505] by server-11.bemta-5.messagelabs.com id
	4F/13-23268-0791CC25; Tue, 07 Jan 2014 15:12:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389107566!8312588!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13131 invoked from network); 7 Jan 2014 15:12:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:12:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90472594"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 15:12:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	10:12:36 -0500
Message-ID: <1389107554.12612.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>
Date: Tue, 7 Jan 2014 15:12:34 +0000
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD01E4F72@AMSPEX01CL01.citrite.net>
References: <1385484112-12975-1-git-send-email-paul.durrant@citrix.com>
	<52C31691.9040302@oracle.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD01E2C05@AMSPEX01CL01.citrite.net>
	<52CC1500.8050104@oracle.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD01E4F72@AMSPEX01CL01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Annie Li <annie.li@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
 IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 15:05 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> > Sent: 07 January 2014 14:54
> > To: Paul Durrant
> > Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Wei Liu; Ian Campbell;
> > Annie Li; David Vrabel
> > Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: Add support for
> > IPv6 offloads
> > 
> > On 01/07/2014 05:25 AM, Paul Durrant wrote:
> > >> -----Original Message-----
> > >> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> > >> Sent: 31 December 2013 19:10
> > >> To: Paul Durrant
> > >> Cc: xen-devel@lists.xen.org; netdev@vger.kernel.org; Konrad Rzeszutek
> > >> Wilk; David Vrabel; Ian Campbell; Wei Liu; Annie Li
> > >> Subject: Re: [PATCH net-next v3] xen-netfront: Add support for IPv6
> > offloads
> > >>
> > >> On 11/26/2013 11:41 AM, Paul Durrant wrote:
> > >>> This patch adds support for IPv6 checksum offload and GSO when those
> > >>> features are available in the backend.
> > >> Sorry for late review. Mostly style comments.
> > >>
> > > Thanks for the review.
> > >
> > > The checksum related code essentially needs to be a duplicate of that in
> > netback and it seems wasteful to have the code in both places. Could this
> > code be moved perhaps to net/core/dev.c? It's not specific to
> > netback/netfront usage.
> > 
> > Will any of these routines be called for anything other than Xen
> > networking?
> > 
> 
> I guess similar logic must be duplicated in other drivers - I can't
> believe that netback and netfront are the only ones to want to know
> where the TCP/UDP checksum field is located.

Me neither. Given that we already have two consumers (albeit both *xen*)
and that the functionality is generic in nature it seems to make sense
to me to have it in a generic place.
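(For illustration only: a minimal sketch of the kind of generic helper
under discussion, walking the IPv4 header to find the transport-layer
checksum field. The function name and shape are hypothetical, not the
actual netfront/netback code; field offsets follow RFC 791/793/768.)

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical sketch: given a linear IPv4 packet buffer, return the
 * byte offset of the TCP or UDP checksum field, or -1 if the packet
 * is malformed or carries a protocol we do not handle.
 */
static long csum_field_offset(const uint8_t *pkt, size_t len)
{
    if (len < 20)                        /* minimum IPv4 header */
        return -1;
    size_t ihl = (pkt[0] & 0x0f) * 4;    /* IPv4 header length in bytes */
    if (ihl < 20 || len < ihl + 20)      /* header + room for transport hdr */
        return -1;
    switch (pkt[9]) {                    /* IPv4 protocol field */
    case 6:  return ihl + 16;            /* TCP: checksum at offset 16 */
    case 17: return ihl + 6;             /* UDP: checksum at offset 6 */
    default: return -1;
    }
}
```
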

> > I don't know about net/core/dev.c but given the large amount of
> > duplicate code between netfront and netback I think factoring out should
> > be done at least for these two. Into xen-netcore.c or some such.
> > 
> 
> That's probably a pragmatic first step; I'll do that and post a patch series as v4.

I think this makes sense for code which has two consumers (both *xen*)
but which is actually Xen specific. Obviously if the network maintainers
don't think the checksum functionality is plausibly generically useful
then we could put it here instead.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:23:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0YV1-0003LR-BV; Tue, 07 Jan 2014 15:23:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0YV0-0003LM-9l
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 15:23:22 +0000
Received: from [85.158.137.68:60024] by server-15.bemta-3.messagelabs.com id
	4D/6D-11556-9EB1CC25; Tue, 07 Jan 2014 15:23:21 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389108199!7744519!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29281 invoked from network); 7 Jan 2014 15:23:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:23:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90479728"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 15:23:17 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	10:23:16 -0500
Message-ID: <52CC1BE3.8080502@citrix.com>
Date: Tue, 7 Jan 2014 15:23:15 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1386892097-15502-1-git-send-email-zoltan.kiss@citrix.com>
	<1386892097-15502-7-git-send-email-zoltan.kiss@citrix.com>
	<20131213154307.GN21900@zion.uk.xensource.com>
	<52AF2602.2000409@citrix.com>
	<20131216180908.GC25969@zion.uk.xensource.com>
In-Reply-To: <20131216180908.GC25969@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v2 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/12/13 18:09, Wei Liu wrote:
>>>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>>>> index e26cdda..f6ed1c8 100644
>>>> --- a/drivers/net/xen-netback/netback.c
>>>> +++ b/drivers/net/xen-netback/netback.c
>>>> @@ -906,11 +906,15 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
>>>>   	u16 pending_idx = *((u16 *)skb->data);
>>>>   	int start;
>>>>   	pending_ring_idx_t index;
>>>> -	unsigned int nr_slots;
>>>> +	unsigned int nr_slots, frag_overflow = 0;
>>>>
>>>>   	/* At this point shinfo->nr_frags is in fact the number of
>>>>   	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
>>>>   	 */
>>>> +	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
>>>> +		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
>>>> +		shinfo->nr_frags = MAX_SKB_FRAGS;
>>>> +	}
>>>>   	nr_slots = shinfo->nr_frags;
>>>>
>>>
>>> It is also probably better to check whether shinfo->nr_frags is too
>>> large, which makes frag_overflow > MAX_SKB_FRAGS. I know skb should
>>> already be valid at this point but it wouldn't hurt to be more careful.
>> Ok, I've added this:
>> 	/* At this point shinfo->nr_frags is in fact the number of
>> 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
>> 	 */
>> +	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
>> +		if (shinfo->nr_frags > XEN_NETBK_LEGACY_SLOTS_MAX) return NULL;
>> +		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
>>
>
> What I suggested is
>
>     BUG_ON(frag_overflow > MAX_SKB_FRAGS)

Ok, I've changed it.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yga-0003v8-MS; Tue, 07 Jan 2014 15:35:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W0YgZ-0003v3-Dy
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:35:19 +0000
Received: from [85.158.139.211:43237] by server-2.bemta-5.messagelabs.com id
	6E/AA-29392-6BE1CC25; Tue, 07 Jan 2014 15:35:18 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389108917!8314517!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11046 invoked from network); 7 Jan 2014 15:35:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-206.messagelabs.com with SMTP;
	7 Jan 2014 15:35:17 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s07FYCtp002201
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 10:34:13 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-36.ams2.redhat.com
	[10.36.112.36])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s07FY8R1029707; Tue, 7 Jan 2014 10:34:09 -0500
Message-ID: <52CC1E6F.7070109@redhat.com>
Date: Tue, 07 Jan 2014 16:34:07 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Peter Maydell <peter.maydell@linaro.org>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
	<CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
In-Reply-To: <CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 07/01/2014 16:11, Peter Maydell ha scritto:
>> > So let's call things by their name and add qemu-system-xenpv that covers
>> > both x86 and ARM and anything else in the future.
> How is this going to work? Do you define a fake architecture
> name "xenpv" ?

Yes, one that aborts if a CPU is created or something like that.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yga-0003v8-MS; Tue, 07 Jan 2014 15:35:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W0YgZ-0003v3-Dy
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:35:19 +0000
Received: from [85.158.139.211:43237] by server-2.bemta-5.messagelabs.com id
	6E/AA-29392-6BE1CC25; Tue, 07 Jan 2014 15:35:18 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389108917!8314517!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11046 invoked from network); 7 Jan 2014 15:35:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-206.messagelabs.com with SMTP;
	7 Jan 2014 15:35:17 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s07FYCtp002201
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 10:34:13 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-36.ams2.redhat.com
	[10.36.112.36])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s07FY8R1029707; Tue, 7 Jan 2014 10:34:09 -0500
Message-ID: <52CC1E6F.7070109@redhat.com>
Date: Tue, 07 Jan 2014 16:34:07 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Peter Maydell <peter.maydell@linaro.org>
References: <20140106125410.GD3119@zion.uk.xensource.com>
	<CAEgOgz66HeReFzTPMuNfkc7c-RURBwWZnkeWcmdSJScLnCBQ+g@mail.gmail.com>
	<20140106151154.GA10654@zion.uk.xensource.com>
	<CAFEAcA9u-n5fbEYBQR05iDTxu8ZkYeMNpTtT3gHHBc+wCW+qQg@mail.gmail.com>
	<alpine.DEB.2.02.1401061733490.8667@kaball.uk.xensource.com>
	<CAFEAcA_O-9TvrQfG06ZEvp2r8HVc27qM=W-tOO+CDsNyXhymUg@mail.gmail.com>
	<alpine.DEB.2.02.1401071312220.8667@kaball.uk.xensource.com>
	<52CC0614.5050402@redhat.com>
	<CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
In-Reply-To: <CAFEAcA9K5aJRfyV3ROGSMzRvFMKMHXev4AvzJmXXrBena+Djew@mail.gmail.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: Peter Crosthwaite <peter.crosthwaite@xilinx.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/2014 16:11, Peter Maydell wrote:
>> > So let's call things by their name and add qemu-system-xenpv that covers
>> > both x86 and ARM and anything else in the future.
> How is this going to work? Do you define a fake architecture
> name "xenpv" ?

Yes, one that aborts if a CPU is created or something like that.

Paolo
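
The idea sketched above (a fake "xenpv" architecture whose CPU-creation path fails immediately) could look roughly like this. This is a hypothetical illustration, not actual QEMU code: the function name and error handling are invented for the example.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical sketch, not real QEMU internals: a fake "xenpv" target
 * could route any attempt to instantiate a CPU into an immediate
 * failure, since a pure PV machine has no emulated CPUs to create. */
static int xenpv_cpu_create(const char *cpu_type)
{
    fprintf(stderr, "xenpv: cannot create CPU '%s': "
            "PV machines have no emulated CPUs\n", cpu_type);
    return -1;  /* a real target might simply abort() here, as suggested */
}
```

Failing (or aborting outright) at this single choke point would make the fake architecture safe to register for both x86 and ARM PV guests, since nothing CPU-specific can ever run.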


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:40:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Ylg-0004US-Gz; Tue, 07 Jan 2014 15:40:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Yle-0004UL-JV
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:40:34 +0000
Received: from [85.158.137.68:4892] by server-13.bemta-3.messagelabs.com id
	33/89-28603-1FF1CC25; Tue, 07 Jan 2014 15:40:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389109230!6572948!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3847 invoked from network); 7 Jan 2014 15:40:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 15:40:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07FeJQY020824
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 15:40:20 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07FeImT026515
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 15:40:18 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07FeISR026506; Tue, 7 Jan 2014 15:40:18 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 07:40:18 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 127E01C18DC; Tue,  7 Jan 2014 10:40:17 -0500 (EST)
Date: Tue, 7 Jan 2014 10:40:16 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140107154016.GA7680@phenom.dumpdata.com>
References: <E959C4978C3B6342920538CF579893F001D50F21@SHSMSX104.ccr.corp.intel.com>
	<6748185fb950f1aca45678675dc87b0f@mail.shatteredsilicon.net>
	<52CBFDDD020000780011112C@nat28.tlf.novell.com>
	<5dcec6d652a27688050262f949e9dc9e@mail.shatteredsilicon.net>
	<20140107143836.GF3588@phenom.dumpdata.com>
	<52CC218B0200007800111330@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CC218B0200007800111330@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Gordan Bobic <gordan@bobich.net>,
	Feng Wu <feng.wu@intel.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:47:23PM +0000, Jan Beulich wrote:
> >>> On 07.01.14 at 15:38, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > That requires knowing the MMIO BARs the 'fake' device has, and
> > .. well, whatever else the Intel VT-d code requires.
> 
> Why would you need to know BAR values? Weren't we talking of
> an invisible bridge (in which case one would expect that there's
> no MSI-X interrupts to be used, which is the only reason I can
> see us needing to know/read the BARs)?

I misspoke. What I was thinking of is the 'memory behind the bridge',
which is what I need to add in somewhere.

I really need to look at the VT-d spec and implementation to see
what data I need to provide to it.

> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:51:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yva-000556-8T; Tue, 07 Jan 2014 15:50:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0YvY-000551-Lo
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:50:48 +0000
Received: from [85.158.137.68:20823] by server-3.bemta-3.messagelabs.com id
	1E/14-10658-7522CC25; Tue, 07 Jan 2014 15:50:47 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389109847!7765499!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29696 invoked from network); 7 Jan 2014 15:50:47 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:50:47 -0000
Received: by mail-wi0-f182.google.com with SMTP id en1so885568wid.3
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 07:50:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=3X++ql22erRVkQhQDC5ioT1kmoCjnG6mRi6tz3XGoQg=;
	b=gvFkyLOgrx1BCGTeHJXBzOil/DuUVJqy/8c6zzqgfTVX5uOQa51AzPtN+ZkeKjUmrX
	lpaes0t26T0RTIGj6e25Fsuiy+G4Y6BkhTKdMfJArPyvYXw0hR6qQl4qq2GNgAD0qrZB
	saD7nPD27vv/0f5j6eVR3ycyE2dNBr2m+JlTcErxbehoQdc2xc8gFAGaJI4XtQYzIL62
	2Tbl9hboikIhm7rzOJ2kj1T+xwyn0aADZu5L5FEb2FSvaREcgBnFDdowzFMbN+jrn9hB
	Dj+vjoCUpLmdMaN5X/D/WkkxCtDrUl3/BXlnPDcdExgmi5X9Ax1jiLMACrUa4Mf2Q/kZ
	t48A==
X-Received: by 10.194.222.4 with SMTP id qi4mr8195158wjc.33.1389109847026;
	Tue, 07 Jan 2014 07:50:47 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165]) by mx.google.com with ESMTPSA id
	ci4sm45741275wjc.21.2014.01.07.07.50.44 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:50:46 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:50:28 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: David Vrabel <david.vrabel@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CEF1D2C4.47C2E%keir.xen@gmail.com>
Thread-Topic: [PATCH 1/2] evtchn/fifo: initialize priority when events are
	bound
Thread-Index: Ac8LwC/DCO5lLMiF50GQsiL93bG4Zg==
In-Reply-To: <1386683820-9834-2-git-send-email-david.vrabel@citrix.com>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 1/2] evtchn/fifo: initialize priority when
 events are bound
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/2013 13:56, "David Vrabel" <david.vrabel@citrix.com> wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> Event channel ports that are reused or that were not in the initial
> bucket would have a non-default priority.
> 
> Add an init evtchn_port_op hook and use this to set the priority when
> an event channel is bound.
> 
> Within this new evtchn_fifo_init() call, also check if the event is
> already on a queue and print a warning, as this event may have its
> first event delivered on a queue with the wrong VCPU or priority.
> The guest is expected to prevent this (if it cares) by not unbinding
> events that are still linked.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>
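
The init hook described in the commit message above can be sketched as follows. The structure and function names here are simplified stand-ins for illustration, not the actual hypervisor code (which lives in xen/common/event_fifo.c); the default-priority constant is assumed to match the FIFO ABI's EVTCHN_FIFO_PRIORITY_DEFAULT.

```c
#include <assert.h>
#include <stdio.h>

#define EVTCHN_FIFO_PRIORITY_DEFAULT 4

/* Simplified stand-in for the hypervisor's per-port state. */
struct evtchn {
    unsigned int priority;
    int linked;             /* still on a queue from a previous binding? */
};

struct evtchn_port_ops {
    void (*init)(struct evtchn *evtchn);
};

/* The new init hook: reset the priority at bind time so reused ports do
 * not keep a stale value, and warn if the event is still linked. */
static void evtchn_fifo_init(struct evtchn *evtchn)
{
    evtchn->priority = EVTCHN_FIFO_PRIORITY_DEFAULT;
    if (evtchn->linked)
        fprintf(stderr, "warning: event still linked; its first delivery "
                "may use the wrong VCPU or priority\n");
}

static const struct evtchn_port_ops evtchn_fifo_ops = {
    .init = evtchn_fifo_init,
};

/* Called from the generic bind path for every newly bound port. */
static void evtchn_bind(struct evtchn *evtchn)
{
    if (evtchn_fifo_ops.init)
        evtchn_fifo_ops.init(evtchn);
}
```

Routing the reset through an ops-table hook keeps the FIFO-specific default out of the generic event-channel code, which is the design point of the patch.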



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:51:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yvi-00055X-LU; Tue, 07 Jan 2014 15:50:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Yvg-00055P-W6
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:50:57 +0000
Received: from [193.109.254.147:35109] by server-7.bemta-14.messagelabs.com id
	2A/D2-15500-0622CC25; Tue, 07 Jan 2014 15:50:56 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389109855!9347470!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6753 invoked from network); 7 Jan 2014 15:50:55 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:50:55 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so4318018wib.0
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 07:50:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=EnNUz2LDnUhW6ea1+S99n4H0neTPeevn9dYAreboDPQ=;
	b=eZQSeJ0b8girq5ifbanwG7pQRlkPKOgCNsYCJkPHUsJFt//w5Ln+ymQ8Vs658im8Za
	JCjBQ7xu5IGAOKX7ifGZBHy9UmC62+oO2mBPN9dfZ0CYatRiexhwDUQ91w/A7Xs4+coL
	GaKwSYCh0u5R0nNlc3gE0H8eDET7QEY7/q0c/JJ3WnO337Zwm+i3kuJcHBsGUnHTORJw
	98atnbIaOvbef9Hvqs5loRrgHwdJOHv6+hqwTlzAN/RSYQ5kB9xZ4GwNc5bsBPVSnW79
	P9yNBpaHlisTNsM9EyVlYoUowo0CjZlBV7XP2WfeNGW0YJ7YZj7Sn2z4qLdrQNWF2kWa
	3LsA==
X-Received: by 10.180.188.175 with SMTP id gb15mr17411330wic.50.1389109855348; 
	Tue, 07 Jan 2014 07:50:55 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165]) by mx.google.com with ESMTPSA id
	ci4sm45741275wjc.21.2014.01.07.07.50.52 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:50:54 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:50:50 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: David Vrabel <david.vrabel@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CEF1D2DA.47C2F%keir.xen@gmail.com>
Thread-Topic: [PATCH 2/2] evtchn/fifo: don't corrupt queues if an old tail is
	linked
Thread-Index: Ac8LwDzgP5rYAe0qaEurJfZDwXUtmw==
In-Reply-To: <1386683820-9834-3-git-send-email-david.vrabel@citrix.com>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] evtchn/fifo: don't corrupt queues if an
 old tail is linked
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/2013 13:57, "David Vrabel" <david.vrabel@citrix.com> wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> An event may still be the tail of a queue even if the queue is now
> empty (an 'old tail' event).  There is logic to handle the case when
> this old tail event needs to be added to the now empty queue (by
> checking for q->tail == port).
> 
> However, this does not cover all cases.
> 
> 1. An old tail may be re-added simultaneously with another event.
>    LINKED is set on the old tail, and the other CPU may misinterpret
>    this as the old tail still being valid and set LINK instead of
>    HEAD.  All events on this queue will then be lost.
> 
> 2. If the old tail event on queue A is moved to a different queue B
>    (by changing its VCPU or priority), the event may then be linked
>    onto queue B.  When another event is linked onto queue A it will
>    check the old tail, see that it is linked (but on queue B) and
>    overwrite the LINK field, corrupting both queues.
> 
> When an event is linked, save the vcpu id and priority of the queue it
> is being linked onto.  Use this when linking an event to check if it
> is an unlinked old tail event.  If it is an old tail event, the old
> queue is empty and old_q->tail is invalidated to ensure adding another
> event to old_q will update HEAD.  The tail is invalidated by setting
> it to 0 since the event 0 is never linked.
> 
> The old_q->lock is held while setting LINKED to avoid the race with
> the test of LINKED in evtchn_fifo_set_link().
> 
> Since an event channel may move queues after old_q->lock is acquired,
> we must check that we have the correct lock and retry if not.  Since
> changes of VCPU or priority are expected to be rare events that are
> serialized in the guest, we try at most 3 times before dropping the
> event.  This prevents a malicious guest from repeatedly adjusting
> priority to prevent another domain from acquiring old_q->lock.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---

Acked-by: Keir Fraser <keir@xen.org>
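
The bounded lock-retry described in the commit message above can be sketched as below. The structures and the spin-lock stubs are simplified stand-ins for illustration; the real implementation is in xen/common/event_fifo.c and uses the hypervisor's own locking primitives.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins: a queue with a lock, and an event recording
 * which queue it was last linked onto. */
struct queue { int lock; };
struct event { struct queue *q; };

static void spin_lock(int *l)   { *l = 1; }
static void spin_unlock(int *l) { *l = 0; }

/* Take the lock of the queue the event was last linked onto.  The event
 * may be moved (by a VCPU or priority change) between reading ev->q and
 * taking the lock, so re-check after locking; give up after three
 * attempts so a malicious guest cannot stall us by moving the event
 * forever. */
static struct queue *lock_old_queue(struct event *ev)
{
    struct queue *old_q = ev->q;

    for (int tries = 0; tries < 3; tries++) {
        spin_lock(&old_q->lock);
        if (ev->q == old_q)
            return old_q;       /* correct lock held */
        spin_unlock(&old_q->lock);
        old_q = ev->q;          /* queue changed; retry with the new one */
    }
    return NULL;                /* caller drops the event */
}
```

On success the caller holds old_q->lock while setting LINKED, which closes the race with the LINKED test in evtchn_fifo_set_link(); on failure the event is dropped, as the commit message explains.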




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:51:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yvi-00055X-LU; Tue, 07 Jan 2014 15:50:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Yvg-00055P-W6
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:50:57 +0000
Received: from [193.109.254.147:35109] by server-7.bemta-14.messagelabs.com id
	2A/D2-15500-0622CC25; Tue, 07 Jan 2014 15:50:56 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389109855!9347470!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6753 invoked from network); 7 Jan 2014 15:50:55 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:50:55 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so4318018wib.0
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 07:50:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=EnNUz2LDnUhW6ea1+S99n4H0neTPeevn9dYAreboDPQ=;
	b=eZQSeJ0b8girq5ifbanwG7pQRlkPKOgCNsYCJkPHUsJFt//w5Ln+ymQ8Vs658im8Za
	JCjBQ7xu5IGAOKX7ifGZBHy9UmC62+oO2mBPN9dfZ0CYatRiexhwDUQ91w/A7Xs4+coL
	GaKwSYCh0u5R0nNlc3gE0H8eDET7QEY7/q0c/JJ3WnO337Zwm+i3kuJcHBsGUnHTORJw
	98atnbIaOvbef9Hvqs5loRrgHwdJOHv6+hqwTlzAN/RSYQ5kB9xZ4GwNc5bsBPVSnW79
	P9yNBpaHlisTNsM9EyVlYoUowo0CjZlBV7XP2WfeNGW0YJ7YZj7Sn2z4qLdrQNWF2kWa
	3LsA==
X-Received: by 10.180.188.175 with SMTP id gb15mr17411330wic.50.1389109855348; 
	Tue, 07 Jan 2014 07:50:55 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165]) by mx.google.com with ESMTPSA id
	ci4sm45741275wjc.21.2014.01.07.07.50.52 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:50:54 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:50:50 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: David Vrabel <david.vrabel@citrix.com>,
	<xen-devel@lists.xen.org>
Message-ID: <CEF1D2DA.47C2F%keir.xen@gmail.com>
Thread-Topic: [PATCH 2/2] evtchn/fifo: don't corrupt queues if an old tail is
	linked
Thread-Index: Ac8LwDzgP5rYAe0qaEurJfZDwXUtmw==
In-Reply-To: <1386683820-9834-3-git-send-email-david.vrabel@citrix.com>
Mime-version: 1.0
Cc: Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] evtchn/fifo: don't corrupt queues if an
 old tail is linked
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/12/2013 13:57, "David Vrabel" <david.vrabel@citrix.com> wrote:

> From: David Vrabel <david.vrabel@citrix.com>
> 
> An event may still be the tail of a queue even if the queue is now
> empty (an 'old tail' event).  There is logic to handle the case when
> this old tail event needs to be added to the now empty queue (by
> checking for q->tail == port).
> 
> However, this does not cover all cases.
> 
> 1. An old tail may be re-added simultaneously with another event.
>    LINKED is set on the old tail, and the other CPU may misinterpret
>    this as the old tail still being valid and set LINK instead of
>    HEAD.  All events on this queue will then be lost.
> 
> 2. If the old tail event on queue A is moved to a different queue B
>    (by changing its VCPU or priority), the event may then be linked
>    onto queue B.  When another event is linked onto queue A it will
>    check the old tail, see that it is linked (but on queue B) and
>    overwrite the LINK field, corrupting both queues.
> 
> When an event is linked, save the vcpu id and priority of the queue it
> is being linked onto.  Use this when linking an event to check if it
> is an unlinked old tail event.  If it is an old tail event, the old
> queue is empty and old_q->tail is invalidated to ensure adding another
> event to old_q will update HEAD.  The tail is invalidated by setting
> it to 0 since the event 0 is never linked.
> 
> The old_q->lock is held while setting LINKED to avoid the race with
> the test of LINKED in evtchn_fifo_set_link().
> 
> Since an event channel may move queues after old_q->lock is acquired,
> we must check that we have the correct lock and retry if not.  Since
> changing the VCPU or priority is expected to be a rare event that is
> serialized in the guest, we try at most 3 times before dropping the
> event.  This prevents a malicious guest from repeatedly adjusting
> priority to prevent another domain from acquiring old_q->lock.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---

Acked-by: Keir Fraser <keir@xen.org>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:52:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yws-0005Ep-4Q; Tue, 07 Jan 2014 15:52:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Ywq-0005Ee-TC
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 15:52:09 +0000
Received: from [85.158.143.35:55993] by server-1.bemta-4.messagelabs.com id
	AD/26-02132-8A22CC25; Tue, 07 Jan 2014 15:52:08 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389109927!10199135!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2313 invoked from network); 7 Jan 2014 15:52:07 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:52:07 -0000
Received: by mail-wg0-f51.google.com with SMTP id z12so137281wgg.30
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 07:52:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=pydQopzYeT1nQxQicAE17AP6PSCaQEJRncQz4Iu0ES8=;
	b=U5LvBCtswQ0lGDq0FV5LO85wya6umAmKZ5I0BQKypwT3XxuzVvpMcSBdFAn6ZFpBDw
	ECjvKn91h2cuObDHjNInmvPRJyA/HNpluPylrBDCYb+ExO2NX4wQVXUMs8vvXdxN9U7/
	m3MkVLDx8jUSyGDuLA19SsNlIzBR3sNtbsKC0A4EnRtNlwNV0kbHyO26+mZlwA77ty2m
	P6J/T08lnfdk9h7IPUiM2cnJ4VM7oKQPEFTLHmAQAdOlKyTMkj9c3/wUv6XSW+54u1T1
	3nyhR/mqRaESwUhPtWvdmtKgnOjeEKr4mLJpBnrF1l1bEjv4UWqKi52P3XcBLNfxouYi
	dpGQ==
X-Received: by 10.180.184.105 with SMTP id et9mr16937629wic.36.1389109926727; 
	Tue, 07 Jan 2014 07:52:06 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165])
	by mx.google.com with ESMTPSA id hy8sm45708715wjb.2.2014.01.07.07.52.03
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:52:05 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:51:57 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Bob Liu <lliubbo@gmail.com>
Message-ID: <CEF1D31D.47C31%keir.xen@gmail.com>
Thread-Topic: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head
	file
Thread-Index: Ac8LwGTPuugUv8z2LEa9Xvuitt7xJw==
In-Reply-To: <20131213164405.GA11305@phenom.dumpdata.com>
Mime-version: 1.0
Cc: james.harper@bendigoit.com.au, ian.campbell@citrix.com,
	andrew.cooper3@citrix.com, JBeulich@suse.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4 11/15] tmem: cleanup: drop useless
 functions from head file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/12/2013 16:44, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:

> On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> There are several useless one-line functions in tmem_xen.h; this patch
>> embeds them into tmem.c directly.
>> Also change void *tmem in struct domain to struct client *tmem_client
>> in order to make things more straightforward.
>> 
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/domain.c        |    4 ++--
>>  xen/common/tmem.c          |   24 ++++++++++++------------
>>  xen/include/xen/sched.h    |    2 +-
>>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> 
> Keir, are you OK with this simple name change?

Yes.

Acked-by: Keir Fraser <keir@xen.org>

> Thanks!
>>  4 files changed, 16 insertions(+), 44 deletions(-)
>> 
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 2cbc489..2636fc9 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -528,9 +528,9 @@ int domain_kill(struct domain *d)
>>          spin_barrier(&d->domain_lock);
>>          evtchn_destroy(d);
>>          gnttab_release_mappings(d);
>> -        tmem_destroy(d->tmem);
>> +        tmem_destroy(d->tmem_client);
>>          domain_set_outstanding_pages(d, 0);
>> -        d->tmem = NULL;
>> +        d->tmem_client = NULL;
>>          /* fallthrough */
>>      case DOMDYING_dying:
>>          rc = domain_relinquish_resources(d);
>> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
>> index 5bf5f04..2659651 100644
>> --- a/xen/common/tmem.c
>> +++ b/xen/common/tmem.c
>> @@ -1206,7 +1206,7 @@ static struct client *client_create(domid_t cli_id)
>>          goto fail;
>>      }
>>      if ( !d->is_dying ) {
>> -        d->tmem = client;
>> +        d->tmem_client = client;
>>          client->domain = d;
>>      }
>>      rcu_unlock_domain(d);
>> @@ -1324,7 +1324,7 @@ obj_unlock:
>>  
>>  static int tmem_evict(void)
>>  {
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
>>      struct tmem_object_root *obj;
>>      struct tmem_pool *pool;
>> @@ -1761,7 +1761,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct
>> oid *oidp, uint32_t index,
>>              list_del(&pgp->us.client_eph_pages);
>>              
>> list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
>>              spin_unlock(&eph_lists_spinlock);
>> -            obj->last_client = tmem_get_cli_id_from_current();
>> +            obj->last_client = current->domain->domain_id;
>>          }
>>      }
>>      if ( obj != NULL )
>> @@ -1836,7 +1836,7 @@ out:
>>  
>>  static int do_tmem_destroy_pool(uint32_t pool_id)
>>  {
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_pool *pool;
>>  
>>      if ( client->pools == NULL )
>> @@ -1867,7 +1867,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>>      int i;
>>  
>>      if ( this_cli_id == TMEM_CLI_ID_NULL )
>> -        cli_id = tmem_get_cli_id_from_current();
>> +        cli_id = current->domain->domain_id;
>>      else
>>          cli_id = this_cli_id;
>>      tmem_client_info("tmem: allocating %s-%s tmem pool for %s=%d...",
>> @@ -1908,7 +1908,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>>      }
>>      else
>>      {
>> -        client = tmem_client_from_current();
>> +        client = current->domain->tmem_client;
>>          ASSERT(client != NULL);
>>          for ( d_poolid = 0; d_poolid < MAX_POOLS_PER_DOMAIN; d_poolid++ )
>>              if ( client->pools[d_poolid] == NULL )
>> @@ -2511,7 +2511,7 @@ static int do_tmem_control(struct tmem_op *op)
>>      uint32_t subop = op->u.ctrl.subop;
>>      struct oid *oidp = (struct oid *)(&op->u.ctrl.oid[0]);
>>  
>> -    if (!tmem_current_is_privileged())
>> +    if ( xsm_tmem_control(XSM_PRIV) )
>>          return -EPERM;
>>  
>>      switch(subop)
>> @@ -2583,7 +2583,7 @@ static int do_tmem_control(struct tmem_op *op)
>>  long do_tmem_op(tmem_cli_op_t uops)
>>  {
>>      struct tmem_op op;
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_pool *pool = NULL;
>>      struct oid *oidp;
>>      int rc = 0;
>> @@ -2595,12 +2595,12 @@ long do_tmem_op(tmem_cli_op_t uops)
>>      if ( !tmem_initialized )
>>          return -ENODEV;
>>  
>> -    if ( !tmem_current_permitted() )
>> +    if ( xsm_tmem_op(XSM_HOOK) )
>>          return -EPERM;
>>  
>>      total_tmem_ops++;
>>  
>> -    if ( client != NULL && tmem_client_is_dying(client) )
>> +    if ( client != NULL && client->domain->is_dying )
>>      {
>>          rc = -ENODEV;
>>   simple_error:
>> @@ -2640,7 +2640,7 @@ long do_tmem_op(tmem_cli_op_t uops)
>>      {
>>          write_lock(&tmem_rwlock);
>>          write_lock_set = 1;
>> -        if ( (client = client_create(tmem_get_cli_id_from_current())) ==
>> NULL )
>> +        if ( (client = client_create(current->domain->domain_id)) == NULL )
>>          {
>>              tmem_client_err("tmem: can't create tmem structure for %s\n",
>>                             tmem_client_str);
>> @@ -2732,7 +2732,7 @@ void tmem_destroy(void *v)
>>      if ( client == NULL )
>>          return;
>>  
>> -    if ( !tmem_client_is_dying(client) )
>> +    if ( !client->domain->is_dying )
>>      {
>>          printk("tmem: tmem_destroy can only destroy dying client\n");
>>          return;
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index cbdf377..53ad32f 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -400,7 +400,7 @@ struct domain
>>      spinlock_t hypercall_deadlock_mutex;
>>  
>>      /* transcendent memory, auto-allocated on first tmem op by each domain
>> */
>> -    void *tmem;
>> +    struct client *tmem_client;
>>  
>>      struct lock_profile_qhead profile_head;
>>  
>> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
>> index 9cfa73f..11f4c2d 100644
>> --- a/xen/include/xen/tmem_xen.h
>> +++ b/xen/include/xen/tmem_xen.h
>> @@ -171,45 +171,17 @@ static inline unsigned long tmem_free_mb(void)
>>  
>>  /*  "Client" (==domain) abstraction */
>>  
>> -struct client;
>>  static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
>>  {
>>      struct client *c;
>>      struct domain *d = rcu_lock_domain_by_id(cli_id);
>>      if (d == NULL)
>>          return NULL;
>> -    c = (struct client *)(d->tmem);
>> +    c = d->tmem_client;
>>      rcu_unlock_domain(d);
>>      return c;
>>  }
>>  
>> -static inline struct client *tmem_client_from_current(void)
>> -{
>> -    return (struct client *)(current->domain->tmem);
>> -}
>> -
>> -#define tmem_client_is_dying(_client) (!!_client->domain->is_dying)
>> -
>> -static inline domid_t tmem_get_cli_id_from_current(void)
>> -{
>> -    return current->domain->domain_id;
>> -}
>> -
>> -static inline struct domain *tmem_get_cli_ptr_from_current(void)
>> -{
>> -    return current->domain;
>> -}
>> -
>> -static inline bool_t tmem_current_permitted(void)
>> -{
>> -    return !xsm_tmem_op(XSM_HOOK);
>> -}
>> -
>> -static inline bool_t tmem_current_is_privileged(void)
>> -{
>> -    return !xsm_tmem_control(XSM_PRIV);
>> -}
>> -
>>  static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
>>  {
>>      const uint8_t *p = __map_domain_page(pfp);
>> -- 
>> 1.7.10.4
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:52:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Yxe-0005MG-PG; Tue, 07 Jan 2014 15:52:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Yxc-0005Lm-LR
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 15:52:56 +0000
Received: from [85.158.139.211:29536] by server-12.bemta-5.messagelabs.com id
	FE/E0-30017-7D22CC25; Tue, 07 Jan 2014 15:52:55 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389109975!5678809!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11287 invoked from network); 7 Jan 2014 15:52:55 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:52:55 -0000
Received: by mail-wi0-f178.google.com with SMTP id bz8so887874wib.11
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 07:52:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=YPJaMY7PNOVJYnmy8CtQqzpwiX53c69Ku7j0aA2rO6c=;
	b=SZr7fmGufctaMmC/2XLPyhYEWM4wGDKaBWgD49pdQJYMXtPzCqZ5Kew8U6T/IchLX7
	tSJLougEoyblkW/P35Fi+lxWcWC+to4lH2wgtmtDEtUieekH/x+LlenMatV8Rb86vUlv
	qkvc2J29e7pwapNDFJmnVPyoaN3pAubycOR//ABJIxewWjzX88FMaY46kP9KvuQmPDAA
	KsjAFRA2GbADunrtaMB8g/wxuhb+RWOI1slSDF+bgISXvAc/vZTZALaqP8Y8QyEP0Xcp
	nKXjLpD5pxDp4y90cnRONl5LZWXpGuf1VD0LjqmYpi69IKSAxTsSy9Wb2U/DZq1osIm4
	3o3A==
X-Received: by 10.194.222.4 with SMTP id qi4mr8203451wjc.33.1389109975216;
	Tue, 07 Jan 2014 07:52:55 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165]) by mx.google.com with ESMTPSA id
	bj3sm45723000wjb.14.2014.01.07.07.52.53 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:52:54 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:52:44 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <CEF1D34C.47C32%keir.xen@gmail.com>
Thread-Topic: [PATCH 0/2] XENMEM_add_to_physmap_batch
Thread-Index: Ac8LwIDT9DjJSdH9Z0OGGM0lFvwJaw==
In-Reply-To: <52B44E7B020000780010F8BD@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/2] XENMEM_add_to_physmap_batch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/12/2013 13:04, "Jan Beulich" <JBeulich@suse.com> wrote:

> 1: rename XENMEM_add_to_physmap_{range => batch} (v2)
> 2: compat wrapper for XENMEM_add_to_physmap_batch
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:55:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Z0S-0005bR-Az; Tue, 07 Jan 2014 15:55:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Z0R-0005bI-At
	for xen-devel@lists.xenproject.org; Tue, 07 Jan 2014 15:55:51 +0000
Received: from [85.158.143.35:18100] by server-3.bemta-4.messagelabs.com id
	67/71-32360-6832CC25; Tue, 07 Jan 2014 15:55:50 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389110149!10210872!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24463 invoked from network); 7 Jan 2014 15:55:50 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:55:50 -0000
Received: by mail-wg0-f46.google.com with SMTP id m15so315465wgh.13
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 07:55:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=e2a3LxBCkplH1S3QXvD+sPfyuJUwzetfAJ1hPj/nQDA=;
	b=GkIjKabOabg1wc1Lt45c5mlxH0TJ6yN6Xg1wRTOHE7qthm1JmmeuxZAzOZiDJqQJVX
	grXb5+/fg+giGNf8OffJQcrXRDnFHg+uEeQ1yhbSGTfoQFycnUIqBtF2lYRXICrHCyzc
	4IPDnJO8ZQHiA90mhpRxSjDnAhDRcOvAOqwVCbBFYzfdSl0sojUtlCLymP4esJ4aqACY
	nra2TNzeiqKE1J24OQp5S2pncAX7p1eTqEvPsxpSwXWHktd81c9wO44dwryz3xweZVqE
	w5pinE2163DEE+u83wzJKgbHRWXQ1vuqPzxiSNW/zYpDO+L9V3pPOWC39zhxweqzD2Xn
	AYzA==
X-Received: by 10.180.23.99 with SMTP id l3mr17294331wif.26.1389110149856;
	Tue, 07 Jan 2014 07:55:49 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165])
	by mx.google.com with ESMTPSA id e5sm30972793wja.15.2014.01.07.07.55.45
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:55:48 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:55:39 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Don Slutz <dslutz@verizon.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <CEF1D3FB.47C33%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [BUGFIX][PATCH 3/4] hvm_save_one: return correct
	data.
Thread-Index: Ac8LwOkie5wkHICqkk2/nwmtgM7LXg==
In-Reply-To: <52B7401A.5070809@terremark.com>
Mime-version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 3/4] hvm_save_one: return correct
 data.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/12/2013 19:40, "Don Slutz" <dslutz@verizon.com> wrote:

> On 12/16/13 13:33, Andrew Cooper wrote:
> 
> Not sure why it took till late 12/21 for me to get this e-mail.
> 
>> On 16/12/2013 17:51, Don Slutz wrote:
>>> On 12/16/13 03:17, Jan Beulich wrote:
>>>>>>> On 15.12.13 at 17:51, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>>> On 15/12/2013 00:29, Don Slutz wrote:
> [snip]
>> Your loop condition needs to change to be "off < (ctxt.cur -
>> sizeof(*desc))", otherwise the "off +=  sizeof(*desc)" can wander beyond
>> ctxt.cur in the loop body.  You also need to verify that the
>> copy_to_guest doesn't exceed ctxt.cur.
> fixed.
>> Stylistically, "desc = (void *)ctxt.data + off;" needs to be "desc =
>> (void *)(ctxt.data + off);", as the latter is standards-compliant C
>> while the former is UB which GCC has an extension to deal with sensibly.
> fixed.
>> Also you have a double space before sizeof in "off +=  sizeof(*desc);"


> Fixed.  Version 4 attached.

Acked-by: Keir Fraser <keir@xen.org>

>> ~Andrew
>> 
>     -Don Slutz
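The loop-bound and cast fixes Andrew asks for can be sketched as follows. This is a simplified, hypothetical stand-in for the hvm_save machinery (the struct layout and find_record() are illustrative, not the version-4 patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for Xen's save-record descriptor. */
struct hvm_save_descriptor {
    uint16_t typecode;
    uint16_t instance;
    uint32_t length;        /* bytes of payload following the descriptor */
};

/*
 * Walk a buffer holding 'cur' valid bytes of descriptor+payload records.
 * The bound "off + sizeof(*desc) <= cur" guarantees the descriptor read
 * never runs past the valid data (the same intent as the reviewed
 * "off < ctxt.cur - sizeof(*desc)", without unsigned underflow for tiny
 * buffers), and the arithmetic is done on the byte pointer before the
 * cast, as the review asks.
 */
static const struct hvm_save_descriptor *
find_record(const uint8_t *data, size_t cur, uint16_t typecode)
{
    const struct hvm_save_descriptor *desc = NULL;
    size_t off;

    for ( off = 0; off + sizeof(*desc) <= cur;
          off += sizeof(*desc) + desc->length )
    {
        desc = (const struct hvm_save_descriptor *)(data + off);
        if ( desc->typecode == typecode )
            return desc;
    }
    return NULL;
}

/* Tiny demonstration buffer: two records with 16- and 8-byte payloads. */
static uint8_t demo[48];
static size_t demo_len;

static void build_demo(void)
{
    struct hvm_save_descriptor d1 = { .typecode = 2, .instance = 0, .length = 16 };
    struct hvm_save_descriptor d2 = { .typecode = 5, .instance = 0, .length = 8 };
    memcpy(demo, &d1, sizeof(d1));
    memcpy(demo + sizeof(d1) + 16, &d2, sizeof(d2));
    demo_len = sizeof(d1) + 16 + sizeof(d2) + 8;
}
```

A caller copying a payload out would additionally check off + sizeof(*desc) + desc->length <= cur before the copy, which is the copy_to_guest bound mentioned in the review.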



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 15:56:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 15:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Z14-0005fj-QZ; Tue, 07 Jan 2014 15:56:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W0Z13-0005fO-9g
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 15:56:29 +0000
Received: from [85.158.139.211:64794] by server-17.bemta-5.messagelabs.com id
	5A/D8-19152-CA32CC25; Tue, 07 Jan 2014 15:56:28 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389110187!8318258!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17331 invoked from network); 7 Jan 2014 15:56:27 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 15:56:27 -0000
Received: by mail-we0-f178.google.com with SMTP id t60so308898wes.23
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 07:56:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=spf0nRZqBvfZTAsZb0hwh0TDRK9WWsDDIGFhGQdAQJE=;
	b=B8a5m6mtJvgSzjxX2+Qs9O7UkV0hj/jxJ3QBBCxbzV4LGMxbKFbD+FI29WUAjC/lLc
	48s5xNmZcRMo/QNufuY87Hey1VepO37jJYkK7UOdE8wJ7O2b9KOsn5kj2a2cIJvBlDC7
	tDKGfI8sn06yTXLPxL7bKAaVYyl1qHYK2aMFOkelX0hrm+TqES8NO2ROmsWFVijIej/F
	U5KwJFoyibrzjf5vj+l2tPGMRJU2KXtlSDTDEOWsREtXTy5u8Q64SpQ5ZuqiBAsV1LcT
	1xB4aJsV9iWdxa++rhqWGN9ReofR/BQkIX122sGIxuwIQeg3Czp2bJKmUQlcGFfd5l6t
	KcHQ==
X-Received: by 10.194.78.97 with SMTP id a1mr326550wjx.95.1389110187534;
	Tue, 07 Jan 2014 07:56:27 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165]) by mx.google.com with ESMTPSA id
	ly8sm31763136wjb.17.2014.01.07.07.56.24 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 07:56:26 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 07 Jan 2014 15:56:17 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Message-ID: <CEF1D421.47C34%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [PATCH V3 RESEND] firmware: Change level-triggered
	GPE event to a edge one for qemu-xen
Thread-Index: Ac8LwP/IAgTN7wuN7EmOknI3gKvK1A==
In-Reply-To: <1389031241-3429-1-git-send-email-anthony.perard@citrix.com>
Mime-version: 1.0
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH V3 RESEND] firmware: Change level-triggered
 GPE event to a edge one for qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/01/2014 18:00, "Anthony PERARD" <anthony.perard@citrix.com> wrote:

> This should help to reduce a CPU hotplug race window where a cpu hotplug
> event might not be seen by the OS.
> 
> When hotplugging more than one vcpu, some of those vcpus might not be
> seen as plugged by the guest.
> 
> This is what is currently happening:
> 
> 1. hw adds cpu, sets GPE.2 bit and sends SCI
> 2. OSPM gets SCI, reads GPE00.sts and masks GPE.2 bit in GPE00.en
> 3. OSPM executes _L02 (level-triggered event associated with cpu hotplug)
> 4. hw adds second cpu and sets GPE.2 bit but SCI is not asserted
>     since GPE00.en masks event
> 5. OSPM resets GPE.2 bit in GPE00.sts and unmasks it in GPE00.en
> 
> As a result, the event from step 4 is lost because step 5 clears it, and
> the OS will not see the added second cpu.
> 
> The ACPI 5.0 spec (5.6.4 General-Purpose Event Handling)
> defines GPE event handling as follows:
> 
> 1. Disables the interrupt source (GPEx_BLK EN bit).
> 2. If an edge event, clears the status bit.
> 3. Performs one of the following:
> * Dispatches to an ACPI-aware device driver.
> * Queues the matching control method for execution.
> * Manages a wake event using device _PRW objects.
> 4. If a level event, clears the status bit.
> 5. Enables the interrupt source.
> 
> So, by using edge-triggered General-Purpose Event instead of a
> level-triggered GPE, OSPM is less likely to clear the status bit of the
> addition of the second CPU. On step 5, QEMU will resend an interrupt if
> the status bit is set.
> 
> This description also applies to PCI hotplug, since the same steps are
> followed by QEMU, so we also change the GPE event type for PCI hotplug.
> 
> This does not apply to qemu-xen-traditional because it does not resend
> an interrupt if necessary as a result of step 5.
> 
> Patch and description inspired by SeaBIOS's commit:
> Replace level gpe event with edge gpe event for hot-plug handlers
> 9c6635bd48d39a1d17d0a73df6e577ef6bd0037c
> from Igor Mammedov <imammedo@redhat.com>
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>


> ---
> Change in V3:
>   - description for: does not apply to qemu-dm
> Change in V2:
>   - better patch comment:
>     patch does not fix race, but reduce the window
>     include patch description of the quoted commit
>   - change also apply to pci hotplug.
> ---
>  tools/firmware/hvmloader/acpi/mk_dsdt.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c
> b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> index 996f30b..a4b693b 100644
> --- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
> +++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> @@ -220,9 +220,13 @@ int main(int argc, char **argv)
>  
>      pop_block();
>  
> -    /* Define GPE control method '_L02'. */
> +    /* Define GPE control method. */
>      push_block("Scope", "\\_GPE");
> -    push_block("Method", "_L02");
> +    if (dm_version == QEMU_XEN_TRADITIONAL) {
> +        push_block("Method", "_L02");
> +    } else {
> +        push_block("Method", "_E02");
> +    }
>      stmt("Return", "\\_SB.PRSC()");
>      pop_block();
>      pop_block();
> @@ -428,7 +432,7 @@ int main(int argc, char **argv)
>          decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
>          pop_block();
>      } else {
> -        push_block("Method", "_L01");
> +        push_block("Method", "_E01");
>          for (slot = 1; slot <= 31; slot++) {
>              push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
>              stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
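The five-step race Anthony describes, and why the edge-triggered variant recovers the event, can be modelled in a few lines of C. This is a hypothetical simulation of the register semantics, not code from QEMU or the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GPE_CPU_HOTPLUG (1u << 2)           /* the "GPE.2" bit above */

struct gpe { uint8_t sts, en; bool sci; };

/* qemu-xen re-evaluates the SCI line on every sts/en write, which is what
 * lets the edge-handler variant recover the latched event at step 5. */
static void gpe_update(struct gpe *g) { g->sci = (g->sts & g->en) != 0; }

static void hw_add_cpu(struct gpe *g) { g->sts |= GPE_CPU_HOTPLUG; gpe_update(g); }

/* One pass of OSPM GPE handling (ACPI 5.0 section 5.6.4), with a second
 * hotplug event arriving while the handler runs.  Returns whether the SCI
 * is asserted again afterwards, i.e. whether the second event survives. */
static bool ospm_handle(struct gpe *g, bool edge)
{
    g->en &= ~GPE_CPU_HOTPLUG; gpe_update(g);                   /* 1. disable */
    if ( edge )  { g->sts &= ~GPE_CPU_HOTPLUG; gpe_update(g); } /* 2. clear   */
    hw_add_cpu(g);                      /* 3. second cpu plugged mid-handler  */
    if ( !edge ) { g->sts &= ~GPE_CPU_HOTPLUG; gpe_update(g); } /* 4. clear   */
    g->en |= GPE_CPU_HOTPLUG; gpe_update(g);                    /* 5. enable  */
    return g->sci;
}
```

With the level handler, the step-4 clear discards the event latched at step 3; with the edge handler, the status bit set at step 3 survives, so re-enabling at step 5 re-asserts SCI.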



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:02:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Z6q-0006mb-Oi; Tue, 07 Jan 2014 16:02:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Z6p-0006mU-GF
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:02:27 +0000
Received: from [85.158.139.211:39722] by server-17.bemta-5.messagelabs.com id
	60/C8-19152-2152CC25; Tue, 07 Jan 2014 16:02:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389110543!5681411!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22184 invoked from network); 7 Jan 2014 16:02:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:02:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88336272"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 16:02:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 11:02:18 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W0Z6f-0004dN-E6;
	Tue, 07 Jan 2014 16:02:17 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 16:02:17 +0000
Message-ID: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
	cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, guest OSes are started with the MMU and caches disabled (as they are
on native hardware); however, caching is enabled in the domain running the
builder, and therefore we must ensure cache consistency.

The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
cache after loading images into guest memory") is to flush the caches after
loading the various blobs into guest RAM. However, this approach has two
shortcomings:

 - The cache flush primitives available to userspace on arm32 are not
   sufficient for our needs.
 - There is a race between the cache flush and the unmap of the guest page
   where the processor might speculatively dirty the cache line again.

(of these the second is the more fundamental)

This patch makes use of the hardware functionality to force all accesses
made from guest mode to be cached (the HCR.DC, "default cacheable", bit). This
means that we don't need to worry about the domain builder's writes being
cached, because the guest's "uncached" accesses will actually be cached.

Unfortunately the use of HCR.DC is incompatible with the guest enabling its
MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
detect when this happens and disable HCR.DC. This is done with the HCR.TVM
(trap virtual memory controls) bit which also causes various other registers
to be trapped, all of which can be passed straight through to the underlying
register. Once the guest has enabled its MMU we no longer need to trap so
there is no ongoing overhead. In my tests Linux makes about half a dozen
accesses to these registers before the MMU is enabled, I would expect other
OSes to behave similarly (the sequence of writes needed to setup the MMU is
pretty obvious).
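The trap handling reduces to a small state change. The sketch below uses the architectural bit positions but a hypothetical helper shape; it is not the code from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Architectural bit positions (ARMv7/v8); treat these as illustrative. */
#define SCTLR_M   (1u << 0)    /* guest MMU enable */
#define HCR_DC    (1u << 12)   /* force guest accesses default-cacheable */
#define HCR_TVM   (1u << 26)   /* trap virtual-memory control registers */

/*
 * Called when a guest write to SCTLR traps (because HCR.TVM is set).
 * The written value is passed through to the real register by the caller;
 * the only extra work is: once the guest enables its MMU, stop forcing
 * cacheable accesses and stop trapping.
 */
static uint32_t hcr_on_sctlr_write(uint32_t hcr, uint32_t sctlr_val)
{
    if ( (hcr & HCR_DC) && (sctlr_val & SCTLR_M) )
        hcr &= ~(HCR_DC | HCR_TVM);
    return hcr;
}
```

After the guest enables its MMU, both HCR.DC and HCR.TVM are clear, so subsequent SCTLR writes no longer trap and there is no ongoing overhead.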

Apart from this unfortunate need to trap these accesses, this approach is
incompatible with guests which attempt to do DMA operations with their MMU
disabled. In practice this means guests with passthrough which we do not yet
support. Since a typical guest (including dom0) does not access devices which
require DMA until after it is fully up and running with paging enabled the
main risk is to in-guest firmware which does DMA, i.e. running EFI in a guest,
with a disk passed through and booting from that disk. Since we know that dom0
is not using any such firmware and we do not support device passthrough to
guests yet we can live with this restriction. Once passthrough is implemented
this will need to be revisited.

The patch includes a couple of seemingly unrelated but necessary changes:

 - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
   with the existing set of system register we handled, but broke with the new
   ones introduced here.
 - The defines used to decode the HSR system register fields were named the
   same as the register. This breaks the accessor macros. This had gone
   unnoticed because the handling of the existing trapped registers did not
   require accessing the underlying hardware register. Rename those constants
   with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).

This patch has survived thousands of boot loops on a Midway system.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
My preferred solution here would be for the tools to use an uncached mapping
of guest memory when building the guest, which requires adding a new privcmd
ioctl (a relatively straightforward patch) and plumbing a "cached" flag
through the libxc foreign mapping interfaces (a twisty maze of passages, all
alike).  IMO the libxc side of this patch was not looking suitable for a 4.4
freeze exception, since it was quite large (because we have 4 or more mapping
interfaces in libxc, some of which call back into others).

So I propose this version for 4.4. The uncached mapping solution should be
revisited for a future release.

At the risk of appearing to be going mad:

<speaking hat="submitter">

This bug results in memory corruption in the guest, which mostly manifests
as a failure to boot the guest (subsequent bad behaviour is possible but, I
think, unlikely). The frequency of failure is perhaps 1 in 10 boots. This
would not constitute an awesome release.

Although the patch is large most of it is repetitive and mechanical (made
explicit through the use of macros in many cases). The biggest risk is that
one of the registers is not passed through correctly (i.e. the wrong size or
target registers). The ones which Linux uses have been tested and appear to
function OK.  The others might be buggy but this is mitigated through the use
of the same set of macros.

I think the chance of the patch having a bug wrt my understanding of the
hardware behaviour is pretty low. WRT there being bugs in my understanding of
the hardware documentation, I would say middle to low, but I have discussed it
with some folks at ARM and they didn't call me an idiot (in fact pretty much
the same thing has been proposed for KVM).

Overall I think the benefits outweigh the risks.

One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
It's reasonably recent so reverting it takes us back to a pretty well
understood state in the libraries. The functionality is harmless if
incomplete. I think given the first argument I would lean towards reverting.

</speaking>

<speaking hat="stand in release manager">

OK.

</speaking>

Obviously if you think I'm being too easy on myself please say so.

Actually, if you think my judgement is right I'd appreciate being told so too.
---
 xen/arch/arm/domain.c           |    5 ++
 xen/arch/arm/domain_build.c     |    1 +
 xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 +
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 ++++-
 7 files changed, 182 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 124cccf..104d228 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
+    if ( n->arch.sctlr & SCTLR_M )
+        hcr &= ~(HCR_TVM|HCR_DC);
+    else
+        hcr |= (HCR_TVM|HCR_DC);
+
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..bb31db8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
     else
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
 #endif
+    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
 
     /*
      * kernel_load will determine the placement of the initrd & fdt in
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7c5ab19..d00bba3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
+static void update_sctlr(uint32_t v)
+{
+    /*
+     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
+     * because they are incompatible.
+     *
+     * Once HCR.DC is disabled then we do not need HCR_TVM either,
+     * since its only purpose was to catch the MMU being enabled.
+     *
+     * Both are set appropriately on context switch but we need to
+     * clear them now since we may not context switch on return to
+     * guest.
+     */
+    if ( v & SCTLR_M )
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
+}
+
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1338,6 +1355,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
+
+/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
+#define CP32_PASSTHRU32(R...) do {              \
+    if ( cp32.read )                            \
+        *r = READ_SYSREG32(R);                  \
+    else                                        \
+        WRITE_SYSREG32(*r, R);                  \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates the lower 32-bits and clears the upper bits.
+ */
+#define CP32_PASSTHRU64(R...) do {              \
+    if ( cp32.read )                            \
+        *r = (uint32_t)READ_SYSREG64(R);        \
+    else                                        \
+        WRITE_SYSREG64((uint64_t)*r, R);        \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
+ * the other half.
+ */
+#ifdef CONFIG_ARM_64
+#define CP32_PASSTHRU64_HI(R...) do {                   \
+    if ( cp32.read )                                    \
+        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
+    else                                                \
+    {                                                   \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
+        t |= ((uint64_t)(*r)) << 32;                    \
+        WRITE_SYSREG64(t, R);                           \
+    }                                                   \
+} while(0)
+#define CP32_PASSTHRU64_LO(R...) do {                           \
+    if ( cp32.read )                                            \
+        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
+    else                                                        \
+    {                                                           \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
+        t |= *r;                                                \
+        WRITE_SYSREG64(t, R);                                   \
+    }                                                           \
+} while(0)
+#endif
+
+    /* HCR.TVM */
+    case HSR_CPREG32(SCTLR):
+        CP32_PASSTHRU32(SCTLR_EL1);
+        update_sctlr(*r);
+        break;
+    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
+    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
+    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
+    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
+    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
+    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
+    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
+    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
+    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
+
+#ifdef CONFIG_ARM_64
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
+#else
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
+#endif
+
+#undef CP32_PASSTHRU32
+#undef CP32_PASSTHRU64
+#undef CP32_PASSTHRU64_LO
+#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1351,6 +1451,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
+    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
+    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1368,6 +1471,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define CP64_PASSTHRU(R...) do {                                  \
+    if ( cp64.read )                                            \
+    {                                                           \
+        r = READ_SYSREG64(R);                                   \
+        *r1 = r & 0xffffffffUL;                                 \
+        *r2 = r >> 32;                                          \
+    }                                                           \
+    else                                                        \
+    {                                                           \
+        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
+        WRITE_SYSREG64(r, R);                                   \
+    }                                                           \
+} while(0)
+
+    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
+    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
+
+#undef CP64_PASSTHRU
+
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1382,11 +1505,12 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
+    register_t *x = select_user_reg(regs, sysreg.reg);
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1394,6 +1518,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define SYSREG_PASSTHRU(R...) do {              \
+    if ( sysreg.read )                          \
+        *x = READ_SYSREG(R);                    \
+    else                                        \
+        WRITE_SYSREG(*x, R);                    \
+} while(0)
+
+    case HSR_SYSREG_SCTLR_EL1:
+        SYSREG_PASSTHRU(SCTLR_EL1);
+        update_sctlr(*x);
+        break;
+    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
+    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
+    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
+    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
+    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
+    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
+    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
+    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
+    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
+    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
+
+#undef SYSREG_PASSTHRU
+
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 433ad55..e325f78 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_CPREG64(CNTPCT):
+    case HSR_SYSREG_CNTPCT_EL0:
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index f0f1d53..508467a 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,6 +121,8 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
+#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
+#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -260,7 +262,9 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
+#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
+#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index dfe807d..06e638f 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003800)
+#define HSR_SYSREG_CRN_MASK (0x00003c00)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 48ad07e..0cee0e9 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,8 +40,23 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
-#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
+#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
+#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
+#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
+#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
+#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
+#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
+#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
+#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
+#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
+#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
+#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
+
+#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
+#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
+#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
+
+
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:02:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Z6q-0006mb-Oi; Tue, 07 Jan 2014 16:02:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0Z6p-0006mU-GF
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:02:27 +0000
Received: from [85.158.139.211:39722] by server-17.bemta-5.messagelabs.com id
	60/C8-19152-2152CC25; Tue, 07 Jan 2014 16:02:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389110543!5681411!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22184 invoked from network); 7 Jan 2014 16:02:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:02:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88336272"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 16:02:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 11:02:18 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W0Z6f-0004dN-E6;
	Tue, 07 Jan 2014 16:02:17 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 16:02:17 +0000
Message-ID: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
	cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, guest OSes are started with the MMU and caches disabled (as they are
on native hardware); however, caching is enabled in the domain running the
builder, and therefore we must ensure cache consistency.

The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
cache after loading images into guest memory") is to flush the caches after
loading the various blobs into guest RAM. However this approach has two
shortcomings:

 - The cache flush primitives available to userspace on arm32 are not
   sufficient for our needs.
 - There is a race between the cache flush and the unmap of the guest page
   where the processor might speculatively dirty the cache line again.

(of these the second is the more fundamental)

This patch makes use of the hardware functionality to force all accesses
made from guest mode to be cached (the HCR.DC, "default cacheable", bit). This
means that we don't need to worry about the domain builder's writes being
cached, because the guest's "uncached" accesses will actually be cached.

Unfortunately the use of HCR.DC is incompatible with the guest enabling its
MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
detect when this happens and disable HCR.DC. This is done with the HCR.TVM
(trap virtual memory controls) bit which also causes various other registers
to be trapped, all of which can be passed straight through to the underlying
register. Once the guest has enabled its MMU we no longer need to trap so
there is no ongoing overhead. In my tests Linux makes about half a dozen
accesses to these registers before the MMU is enabled; I would expect other
OSes to behave similarly (the sequence of writes needed to set up the MMU is
pretty obvious).

Apart from this unfortunate need to trap these accesses, this approach is
incompatible with guests which attempt to do DMA operations with their MMU
disabled. In practice this means guests with passthrough which we do not yet
support. Since a typical guest (including dom0) does not access devices which
require DMA until after it is fully up and running with paging enabled the
main risk is to in-guest firmware which does DMA, e.g. running EFI in a guest,
with a disk passed through and booting from that disk. Since we know that dom0
is not using any such firmware and we do not support device passthrough to
guests yet we can live with this restriction. Once passthrough is implemented
this will need to be revisited.

The patch includes a couple of seemingly unrelated but necessary changes:

 - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
   with the existing set of system registers we handled, but broke with the new
   ones introduced here.
 - The defines used to decode the HSR system register fields were named the
   same as the register. This breaks the accessor macros. This had gone
   unnoticed because the handling of the existing trapped registers did not
   require accessing the underlying hardware register. Rename those constants
   with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).

This patch has survived thousands of boot loops on a Midway system.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
My preferred solution here would be for the tools to use an uncached mapping
of guest memory when building the guest, which requires adding a new privcmd
ioctl (a relatively straightforward patch) and plumbing a "cached" flag
through the libxc foreign mapping interfaces (a twisty maze of passages, all
alike).  IMO the libxc side of this patch was not looking suitable for a 4.4
freeze exception, since it was quite large (because we have 4 or more mapping
interfaces in libxc, some of which call back into others).

So I propose this version for 4.4. The uncached mapping solution should be
revisited for a future release.

At the risk of appearing to be going mad:

<speaking hat="submitter">

This bug results in memory corruption in the guest, which mostly manifests
as a failure to boot the guest (subsequent bad behaviour is possible but, I
think, unlikely). The frequency of failure is perhaps 1 in 10 boots. This
would not constitute an awesome release.

Although the patch is large most of it is repetitive and mechanical (made
explicit through the use of macros in many cases). The biggest risk is that
one of the registers is not passed through correctly (i.e. the wrong size or
target registers). The ones which Linux uses have been tested and appear to
function OK.  The others might be buggy but this is mitigated through the use
of the same set of macros.

I think the chance of the patch having a bug wrt my understanding of the
hardware behaviour is pretty low. WRT there being bugs in my understanding of
the hardware documentation, I would say middle to low, but I have discussed it
with some folks at ARM and they didn't call me an idiot (in fact pretty much
the same thing has been proposed for KVM).

Overall I think the benefits outweigh the risks.

One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
It's reasonably recent so reverting it takes us back to a pretty well
understood state in the libraries. The functionality is harmless if
incomplete. I think given the first argument I would lean towards reverting.

</speaking>

<speaking hat="stand in release manager">

OK.

</speaking>

Obviously if you think I'm being too easy on myself please say so.

Actually, if you think my judgement is right I'd appreciate being told so too.
---
 xen/arch/arm/domain.c           |    5 ++
 xen/arch/arm/domain_build.c     |    1 +
 xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 +
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 ++++-
 7 files changed, 182 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 124cccf..104d228 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
+    if ( n->arch.sctlr & SCTLR_M )
+        hcr &= ~(HCR_TVM|HCR_DC);
+    else
+        hcr |= (HCR_TVM|HCR_DC);
+
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..bb31db8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
     else
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
 #endif
+    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
 
     /*
      * kernel_load will determine the placement of the initrd & fdt in
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7c5ab19..d00bba3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
+static void update_sctlr(uint32_t v)
+{
+    /*
+     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
+     * because they are incompatible.
+     *
+     * Once HCR.DC is disabled then we do not need HCR_TVM either,
+     * since its only purpose was to catch the MMU being enabled.
+     *
+     * Both are set appropriately on context switch but we need to
+     * clear them now since we may not context switch on return to
+     * guest.
+     */
+    if ( v & SCTLR_M )
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
+}
+
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1338,6 +1355,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
+
+/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
+#define CP32_PASSTHRU32(R...) do {              \
+    if ( cp32.read )                            \
+        *r = READ_SYSREG32(R);                  \
+    else                                        \
+        WRITE_SYSREG32(*r, R);                  \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates the lower 32-bits and clears the upper bits.
+ */
+#define CP32_PASSTHRU64(R...) do {              \
+    if ( cp32.read )                            \
+        *r = (uint32_t)READ_SYSREG64(R);        \
+    else                                        \
+        WRITE_SYSREG64((uint64_t)*r, R);        \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
+ * the other half.
+ */
+#ifdef CONFIG_ARM_64
+#define CP32_PASSTHRU64_HI(R...) do {                   \
+    if ( cp32.read )                                    \
+        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
+    else                                                \
+    {                                                   \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
+        t |= ((uint64_t)(*r)) << 32;                    \
+        WRITE_SYSREG64(t, R);                           \
+    }                                                   \
+} while(0)
+#define CP32_PASSTHRU64_LO(R...) do {                           \
+    if ( cp32.read )                                            \
+        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
+    else                                                        \
+    {                                                           \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
+        t |= *r;                                                \
+        WRITE_SYSREG64(t, R);                                   \
+    }                                                           \
+} while(0)
+#endif
+
+    /* HCR.TVM */
+    case HSR_CPREG32(SCTLR):
+        CP32_PASSTHRU32(SCTLR_EL1);
+        update_sctlr(*r);
+        break;
+    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
+    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
+    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
+    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
+    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
+    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
+    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
+    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
+    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
+
+#ifdef CONFIG_ARM_64
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
+#else
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
+#endif
+
+#undef CP32_PASSTHRU32
+#undef CP32_PASSTHRU64
+#undef CP32_PASSTHRU64_LO
+#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1351,6 +1451,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
+    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
+    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1368,6 +1471,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define CP64_PASSTHRU(R...) do {                                  \
+    if ( cp64.read )                                            \
+    {                                                           \
+        r = READ_SYSREG64(R);                                   \
+        *r1 = r & 0xffffffffUL;                                 \
+        *r2 = r >> 32;                                          \
+    }                                                           \
+    else                                                        \
+    {                                                           \
+        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
+        WRITE_SYSREG64(r, R);                                   \
+    }                                                           \
+} while(0)
+
+    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
+    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
+
+#undef CP64_PASSTHRU
+
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1382,11 +1505,12 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
+    register_t *x = select_user_reg(regs, sysreg.reg);
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1394,6 +1518,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define SYSREG_PASSTHRU(R...) do {              \
+    if ( sysreg.read )                          \
+        *x = READ_SYSREG(R);                    \
+    else                                        \
+        WRITE_SYSREG(*x, R);                    \
+} while(0)
+
+    case HSR_SYSREG_SCTLR_EL1:
+        SYSREG_PASSTHRU(SCTLR_EL1);
+        update_sctlr(*x);
+        break;
+    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
+    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
+    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
+    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
+    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
+    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
+    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
+    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
+    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
+    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
+
+#undef SYSREG_PASSTHRU
+
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 433ad55..e325f78 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_CPREG64(CNTPCT):
+    case HSR_SYSREG_CNTPCT_EL0:
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index f0f1d53..508467a 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,6 +121,8 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
+#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
+#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -260,7 +262,9 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
+#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
+#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index dfe807d..06e638f 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003800)
+#define HSR_SYSREG_CRN_MASK (0x00003c00)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 48ad07e..0cee0e9 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,8 +40,23 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
-#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
+#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
+#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
+#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
+#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
+#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
+#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
+#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
+#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
+#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
+#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
+#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
+
+#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
+#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
+#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
+
+
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:06:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZAh-0006vb-Jf; Tue, 07 Jan 2014 16:06:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0ZAh-0006vW-2q
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:06:27 +0000
Received: from [85.158.143.35:53765] by server-1.bemta-4.messagelabs.com id
	18/4E-02132-2062CC25; Tue, 07 Jan 2014 16:06:26 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389110785!8915680!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26592 invoked from network); 7 Jan 2014 16:06:25 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 16:06:25 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55086 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0Yzw-0000Oa-Vl; Tue, 07 Jan 2014 16:55:21 +0100
Date: Tue, 7 Jan 2014 17:06:22 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1817666124.20140107170622@eikelenboom.it>
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52CC08C2.5090004@citrix.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	- CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, January 7, 2014, 3:01:38 PM, you wrote:

> On 07/01/14 11:53, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> A new year and a new Linux merge window looming, so I thought I would try out the "devel/for-linus-3.14" branch.
>> But dom0 seems to blow up for me (without this branch pulled it works OK).
>> 
>> Xen: latest xen-unstable

> The FIFO-based event channel ABI is broken in current xen-unstable.

> You need the two patches from:

> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html

> You can also disable the guest's use of the FIFO ABI with the
> xen.fifo_events=0 kernel command line option.

> David


Thanks, at first glance dom0 seems to run fine now. :-)

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:09:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZD6-0007Dq-5H; Tue, 07 Jan 2014 16:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ZD4-0007Dj-ND
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:08:55 +0000
Received: from [85.158.139.211:19967] by server-9.bemta-5.messagelabs.com id
	27/5F-15098-5962CC25; Tue, 07 Jan 2014 16:08:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389110931!8366848!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24319 invoked from network); 7 Jan 2014 16:08:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:08:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90499896"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 16:06:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	11:06:21 -0500
Message-ID: <1389110780.12612.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 16:06:20 +0000
In-Reply-To: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 16:02 +0000, Ian Campbell wrote:
> One thing I'm not sure about is reverting the previous fix in
> a0035ecc0d82.
> It's reasonably recent so reverting it takes us back to a pretty well
> understood state in the libraries. The functionality is harmless if
> incomplete. I think given the first argument I would lean towards
> reverting.

If the answer here is "revert" then that would be the following. I'd
probably rewrite "A previous patch" into a reference to the actual
applied patch as I committed it.

-------

>From b33eb6a3d5082d3497b8ef0872c0bbd89fb74af1 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Tue, 7 Jan 2014 15:52:29 +0000
Subject: [PATCH] Revert "tools: libxc: flush data cache after loading images
 into guest memory"

This reverts commit a0035ecc0d82c1d4dcd5e429e2fcc3192d89747a.

Even with this fix there is a period between the flush and the unmap where the
processor may speculate data into the cache. The solution is to map this
region uncached or to use the HCR.DC bit to make all guest accesses cacheable.
A previous patch has arranged to do the latter.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/xc_dom_armzimageloader.c |    1 -
 tools/libxc/xc_dom_binloader.c       |    1 -
 tools/libxc/xc_dom_core.c            |    2 --
 tools/libxc/xc_linux_osdep.c         |   41 ----------------------------------
 tools/libxc/xc_minios.c              |   12 ----------
 tools/libxc/xc_netbsd.c              |   14 ------------
 tools/libxc/xc_private.c             |    5 -----
 tools/libxc/xc_private.h             |    3 ---
 tools/libxc/xc_solaris.c             |   14 ------------
 tools/libxc/xenctrl_osdep_ENOSYS.c   |    9 --------
 tools/libxc/xenctrlosdep.h           |    1 -
 11 files changed, 103 deletions(-)

diff --git a/tools/libxc/xc_dom_armzimageloader.c b/tools/libxc/xc_dom_armzimageloader.c
index 508f74b..e6516a1 100644
--- a/tools/libxc/xc_dom_armzimageloader.c
+++ b/tools/libxc/xc_dom_armzimageloader.c
@@ -229,7 +229,6 @@ static int xc_dom_load_zimage_kernel(struct xc_dom_image *dom)
               __func__, dom->kernel_size, dom->kernel_blob, dst);
 
     memcpy(dst, dom->kernel_blob, dom->kernel_size);
-    xc_cache_flush(dom->xch, dst, dom->kernel_size);
 
     return 0;
 }
diff --git a/tools/libxc/xc_dom_binloader.c b/tools/libxc/xc_dom_binloader.c
index aa0463c..e1de5b5 100644
--- a/tools/libxc/xc_dom_binloader.c
+++ b/tools/libxc/xc_dom_binloader.c
@@ -301,7 +301,6 @@ static int xc_dom_load_bin_kernel(struct xc_dom_image *dom)
 
     memcpy(dest, image + skip, text_size);
     memset(dest + text_size, 0, bss_size);
-    xc_cache_flush(dom->xch, dest, text_size+bss_size);
 
     return 0;
 }
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index d46ac22..77a4e64 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -978,7 +978,6 @@ int xc_dom_build_image(struct xc_dom_image *dom)
         }
         else
             memcpy(ramdiskmap, dom->ramdisk_blob, dom->ramdisk_size);
-        xc_cache_flush(dom->xch, ramdiskmap, ramdisklen);
     }
 
     /* load devicetree */
@@ -998,7 +997,6 @@ int xc_dom_build_image(struct xc_dom_image *dom)
             goto err;
         }
         memcpy(devicetreemap, dom->devicetree_blob, dom->devicetree_size);
-        xc_cache_flush(dom->xch, devicetreemap, dom->devicetree_size);
     }
 
     /* allocate other pages */
diff --git a/tools/libxc/xc_linux_osdep.c b/tools/libxc/xc_linux_osdep.c
index e219b24..73860a2 100644
--- a/tools/libxc/xc_linux_osdep.c
+++ b/tools/libxc/xc_linux_osdep.c
@@ -30,7 +30,6 @@
 
 #include <sys/mman.h>
 #include <sys/ioctl.h>
-#include <sys/syscall.h>
 
 #include <xen/memory.h>
 #include <xen/sys/evtchn.h>
@@ -417,44 +416,6 @@ static void *linux_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handle
     return ret;
 }
 
-static void linux_privcmd_cache_flush(xc_interface *xch,
-				      const void *ptr, size_t nr)
-{
-#if defined(__arm__)
-    unsigned long start = (unsigned long)ptr;
-    unsigned long end = start + nr;
-    /* cacheflush(unsigned long start, unsigned long end, int flags) */
-    int rc = syscall(__ARM_NR_cacheflush, start, end, 0);
-    if ( rc < 0 )
-	    PERROR("cache flush operation failed: %d\n", errno);
-#elif defined(__aarch64__)
-    unsigned long start = (unsigned long)ptr;
-    unsigned long end = start + nr;
-    unsigned long p, ctr;
-    int stride;
-
-    /* Flush cache using direct DC CVAC instructions. This is
-     * available to EL0 when SCTLR_EL1.UCI is set, which Linux does.
-     *
-     * Bits 19:16 of CTR_EL0 are log2 of the minimum dcache line size
-     * in words, which we use as our stride length. This is readable
-     * with SCTLR_EL1.UCT is set, which Linux does.
-     */
-    asm volatile ("mrs %0, ctr_el0" : "=r" (ctr));
-
-    stride = 4 * (1 << ((ctr & 0xf0000UL) >> 16));
-
-    for ( p = start ; p < end ; p += stride )
-        asm volatile ("dc cvac, %0" :  : "r" (p));
-    asm volatile ("dsb sy");
-#elif defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops linux_privcmd_ops = {
     .open = &linux_privcmd_open,
     .close = &linux_privcmd_close,
@@ -469,8 +430,6 @@ static struct xc_osdep_ops linux_privcmd_ops = {
         .map_foreign_bulk = &linux_privcmd_map_foreign_bulk,
         .map_foreign_range = &linux_privcmd_map_foreign_range,
         .map_foreign_ranges = &linux_privcmd_map_foreign_ranges,
-
-        .cache_flush = &linux_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_minios.c b/tools/libxc/xc_minios.c
index 5026b2b..dec4d73 100644
--- a/tools/libxc/xc_minios.c
+++ b/tools/libxc/xc_minios.c
@@ -181,16 +181,6 @@ static void *minios_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handl
     return ret;
 }
 
-static void minios_privcmd_cache_flush(xc_interface *xch,
-                                       const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    printf("No cache flush operation defined for architecture");
-    BUG();
-#endif
-}
 
 static struct xc_osdep_ops minios_privcmd_ops = {
     .open = &minios_privcmd_open,
@@ -206,8 +196,6 @@ static struct xc_osdep_ops minios_privcmd_ops = {
         .map_foreign_bulk = &minios_privcmd_map_foreign_bulk,
         .map_foreign_range = &minios_privcmd_map_foreign_range,
         .map_foreign_ranges = &minios_privcmd_map_foreign_ranges,
-
-        .cache_flush = &minios_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_netbsd.c b/tools/libxc/xc_netbsd.c
index 0143305..8a90ef3 100644
--- a/tools/libxc/xc_netbsd.c
+++ b/tools/libxc/xc_netbsd.c
@@ -22,7 +22,6 @@
 
 #include <xen/sys/evtchn.h>
 #include <unistd.h>
-#include <stdlib.h>
 #include <fcntl.h>
 #include <malloc.h>
 #include <sys/mman.h>
@@ -208,17 +207,6 @@ mmap_failed:
 	return NULL;
 }
 
-static void netbsd_privcmd_cache_flush(xc_interface *xch,
-                                       const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops netbsd_privcmd_ops = {
     .open = &netbsd_privcmd_open,
     .close = &netbsd_privcmd_close,
@@ -233,8 +221,6 @@ static struct xc_osdep_ops netbsd_privcmd_ops = {
         .map_foreign_bulk = &xc_map_foreign_bulk_compat,
         .map_foreign_range = &netbsd_privcmd_map_foreign_range,
         .map_foreign_ranges = &netbsd_privcmd_map_foreign_ranges,
-
-        .cache_flush = &netbsd_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 3ccee2b..838fd21 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -249,11 +249,6 @@ int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall)
     return xch->ops->u.privcmd.hypercall(xch, xch->ops_handle, hypercall);
 }
 
-void xc_cache_flush(xc_interface *xch, const void *p, size_t n)
-{
-    xch->ops->u.privcmd.cache_flush(xch, p, n);
-}
-
 xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
                              unsigned open_flags)
 {
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 50a0aa7..92271c9 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,9 +304,6 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
-/* Flush data cache */
-void xc_cache_flush(xc_interface *xch, const void *p, size_t n);
-
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/tools/libxc/xc_solaris.c b/tools/libxc/xc_solaris.c
index edffab1..7257a54 100644
--- a/tools/libxc/xc_solaris.c
+++ b/tools/libxc/xc_solaris.c
@@ -23,7 +23,6 @@
 #include <xen/memory.h>
 #include <xen/sys/evtchn.h>
 #include <unistd.h>
-#include <stdlib.h>
 #include <fcntl.h>
 #include <malloc.h>
 
@@ -179,17 +178,6 @@ mmap_failed:
     return NULL;
 }
 
-static void solaris_privcmd_cache_flush(xc_interface *xch,
-                                        const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops solaris_privcmd_ops = {
     .open = &solaris_privcmd_open,
     .close = &solaris_privcmd_close,
@@ -204,8 +192,6 @@ static struct xc_osdep_ops solaris_privcmd_ops = {
         .map_foreign_bulk = &xc_map_foreign_bulk_compat,
         .map_foreign_range = &solaris_privcmd_map_foreign_range,
         .map_foreign_ranges = &solaris_privcmd_map_foreign_ranges,
-
-        .cache_flush = &solaris_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xenctrl_osdep_ENOSYS.c b/tools/libxc/xenctrl_osdep_ENOSYS.c
index d911b10..4821342 100644
--- a/tools/libxc/xenctrl_osdep_ENOSYS.c
+++ b/tools/libxc/xenctrl_osdep_ENOSYS.c
@@ -63,13 +63,6 @@ static void *ENOSYS_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handl
     return MAP_FAILED;
 }
 
-static void ENOSYS_privcmd_cache_flush(xc_interface *xch, const void *p, size_t n)
-{
-    unsigned long start = (unsigned long)p;
-    unsigned long end = start + n;
-    IPRINTF(xch, "ENOSYS_privcmd: cache_flush: %#lx-%#lx\n", start, end);
-}
-
 static struct xc_osdep_ops ENOSYS_privcmd_ops =
 {
     .open      = &ENOSYS_privcmd_open,
@@ -81,8 +74,6 @@ static struct xc_osdep_ops ENOSYS_privcmd_ops =
         .map_foreign_bulk = &ENOSYS_privcmd_map_foreign_bulk,
         .map_foreign_range = &ENOSYS_privcmd_map_foreign_range,
         .map_foreign_ranges = &ENOSYS_privcmd_map_foreign_ranges,
-
-        .cache_flush = &ENOSYS_privcmd_cache_flush,
     }
 };
 
diff --git a/tools/libxc/xenctrlosdep.h b/tools/libxc/xenctrlosdep.h
index 6c9a005..e610a24 100644
--- a/tools/libxc/xenctrlosdep.h
+++ b/tools/libxc/xenctrlosdep.h
@@ -89,7 +89,6 @@ struct xc_osdep_ops
             void *(*map_foreign_ranges)(xc_interface *xch, xc_osdep_handle h, uint32_t dom, size_t size, int prot,
                                         size_t chunksize, privcmd_mmap_entry_t entries[],
                                         int nentries);
-            void (*cache_flush)(xc_interface *xch, const void *p, size_t n);
         } privcmd;
         struct {
             int (*fd)(xc_evtchn *xce, xc_osdep_handle h);
-- 
1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:09:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZD6-0007Dq-5H; Tue, 07 Jan 2014 16:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ZD4-0007Dj-ND
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:08:55 +0000
Received: from [85.158.139.211:19967] by server-9.bemta-5.messagelabs.com id
	27/5F-15098-5962CC25; Tue, 07 Jan 2014 16:08:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389110931!8366848!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24319 invoked from network); 7 Jan 2014 16:08:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:08:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90499896"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 16:06:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	11:06:21 -0500
Message-ID: <1389110780.12612.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 16:06:20 +0000
In-Reply-To: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 16:02 +0000, Ian Campbell wrote:
> One thing I'm not sure about is reverting the previous fix in
> a0035ecc0d82.
> It's reasonably recent so reverting it takes us back to a pretty well
> understood state in the libraries. The functionality is harmless if
> incomplete. I think given the first argument I would lean towards
> reverting.

If the answer here is "revert" then that would be the following. I'd
probably rewrite "A previous patch" into a reference to the actual
applied patch as I committed it.

-------

>From b33eb6a3d5082d3497b8ef0872c0bbd89fb74af1 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Tue, 7 Jan 2014 15:52:29 +0000
Subject: [PATCH] Revert "tools: libxc: flush data cache after loading images
 into guest memory"

This reverts commit a0035ecc0d82c1d4dcd5e429e2fcc3192d89747a.

Even with this fix there is a period between the flush and the unmap where the
processor may speculate data into the cache. The solution is to map this
region uncached or to use the HCR.DC bit to make all guest accesses cacheable.
A previous patch has arranged to do the latter.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxc/xc_dom_armzimageloader.c |    1 -
 tools/libxc/xc_dom_binloader.c       |    1 -
 tools/libxc/xc_dom_core.c            |    2 --
 tools/libxc/xc_linux_osdep.c         |   41 ----------------------------------
 tools/libxc/xc_minios.c              |   12 ----------
 tools/libxc/xc_netbsd.c              |   14 ------------
 tools/libxc/xc_private.c             |    5 -----
 tools/libxc/xc_private.h             |    3 ---
 tools/libxc/xc_solaris.c             |   14 ------------
 tools/libxc/xenctrl_osdep_ENOSYS.c   |    9 --------
 tools/libxc/xenctrlosdep.h           |    1 -
 11 files changed, 103 deletions(-)

diff --git a/tools/libxc/xc_dom_armzimageloader.c b/tools/libxc/xc_dom_armzimageloader.c
index 508f74b..e6516a1 100644
--- a/tools/libxc/xc_dom_armzimageloader.c
+++ b/tools/libxc/xc_dom_armzimageloader.c
@@ -229,7 +229,6 @@ static int xc_dom_load_zimage_kernel(struct xc_dom_image *dom)
               __func__, dom->kernel_size, dom->kernel_blob, dst);
 
     memcpy(dst, dom->kernel_blob, dom->kernel_size);
-    xc_cache_flush(dom->xch, dst, dom->kernel_size);
 
     return 0;
 }
diff --git a/tools/libxc/xc_dom_binloader.c b/tools/libxc/xc_dom_binloader.c
index aa0463c..e1de5b5 100644
--- a/tools/libxc/xc_dom_binloader.c
+++ b/tools/libxc/xc_dom_binloader.c
@@ -301,7 +301,6 @@ static int xc_dom_load_bin_kernel(struct xc_dom_image *dom)
 
     memcpy(dest, image + skip, text_size);
     memset(dest + text_size, 0, bss_size);
-    xc_cache_flush(dom->xch, dest, text_size+bss_size);
 
     return 0;
 }
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index d46ac22..77a4e64 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -978,7 +978,6 @@ int xc_dom_build_image(struct xc_dom_image *dom)
         }
         else
             memcpy(ramdiskmap, dom->ramdisk_blob, dom->ramdisk_size);
-        xc_cache_flush(dom->xch, ramdiskmap, ramdisklen);
     }
 
     /* load devicetree */
@@ -998,7 +997,6 @@ int xc_dom_build_image(struct xc_dom_image *dom)
             goto err;
         }
         memcpy(devicetreemap, dom->devicetree_blob, dom->devicetree_size);
-        xc_cache_flush(dom->xch, devicetreemap, dom->devicetree_size);
     }
 
     /* allocate other pages */
diff --git a/tools/libxc/xc_linux_osdep.c b/tools/libxc/xc_linux_osdep.c
index e219b24..73860a2 100644
--- a/tools/libxc/xc_linux_osdep.c
+++ b/tools/libxc/xc_linux_osdep.c
@@ -30,7 +30,6 @@
 
 #include <sys/mman.h>
 #include <sys/ioctl.h>
-#include <sys/syscall.h>
 
 #include <xen/memory.h>
 #include <xen/sys/evtchn.h>
@@ -417,44 +416,6 @@ static void *linux_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handle
     return ret;
 }
 
-static void linux_privcmd_cache_flush(xc_interface *xch,
-				      const void *ptr, size_t nr)
-{
-#if defined(__arm__)
-    unsigned long start = (unsigned long)ptr;
-    unsigned long end = start + nr;
-    /* cacheflush(unsigned long start, unsigned long end, int flags) */
-    int rc = syscall(__ARM_NR_cacheflush, start, end, 0);
-    if ( rc < 0 )
-	    PERROR("cache flush operation failed: %d\n", errno);
-#elif defined(__aarch64__)
-    unsigned long start = (unsigned long)ptr;
-    unsigned long end = start + nr;
-    unsigned long p, ctr;
-    int stride;
-
-    /* Flush cache using direct DC CVAC instructions. This is
-     * available to EL0 when SCTLR_EL1.UCI is set, which Linux does.
-     *
-     * Bits 19:16 of CTR_EL0 are log2 of the minimum dcache line size
-     * in words, which we use as our stride length. This is readable
-     * with SCTLR_EL1.UCT is set, which Linux does.
-     */
-    asm volatile ("mrs %0, ctr_el0" : "=r" (ctr));
-
-    stride = 4 * (1 << ((ctr & 0xf0000UL) >> 16));
-
-    for ( p = start ; p < end ; p += stride )
-        asm volatile ("dc cvac, %0" :  : "r" (p));
-    asm volatile ("dsb sy");
-#elif defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops linux_privcmd_ops = {
     .open = &linux_privcmd_open,
     .close = &linux_privcmd_close,
@@ -469,8 +430,6 @@ static struct xc_osdep_ops linux_privcmd_ops = {
         .map_foreign_bulk = &linux_privcmd_map_foreign_bulk,
         .map_foreign_range = &linux_privcmd_map_foreign_range,
         .map_foreign_ranges = &linux_privcmd_map_foreign_ranges,
-
-        .cache_flush = &linux_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_minios.c b/tools/libxc/xc_minios.c
index 5026b2b..dec4d73 100644
--- a/tools/libxc/xc_minios.c
+++ b/tools/libxc/xc_minios.c
@@ -181,16 +181,6 @@ static void *minios_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handl
     return ret;
 }
 
-static void minios_privcmd_cache_flush(xc_interface *xch,
-                                       const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    printf("No cache flush operation defined for architecture");
-    BUG();
-#endif
-}
 
 static struct xc_osdep_ops minios_privcmd_ops = {
     .open = &minios_privcmd_open,
@@ -206,8 +196,6 @@ static struct xc_osdep_ops minios_privcmd_ops = {
         .map_foreign_bulk = &minios_privcmd_map_foreign_bulk,
         .map_foreign_range = &minios_privcmd_map_foreign_range,
         .map_foreign_ranges = &minios_privcmd_map_foreign_ranges,
-
-        .cache_flush = &minios_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_netbsd.c b/tools/libxc/xc_netbsd.c
index 0143305..8a90ef3 100644
--- a/tools/libxc/xc_netbsd.c
+++ b/tools/libxc/xc_netbsd.c
@@ -22,7 +22,6 @@
 
 #include <xen/sys/evtchn.h>
 #include <unistd.h>
-#include <stdlib.h>
 #include <fcntl.h>
 #include <malloc.h>
 #include <sys/mman.h>
@@ -208,17 +207,6 @@ mmap_failed:
 	return NULL;
 }
 
-static void netbsd_privcmd_cache_flush(xc_interface *xch,
-                                       const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops netbsd_privcmd_ops = {
     .open = &netbsd_privcmd_open,
     .close = &netbsd_privcmd_close,
@@ -233,8 +221,6 @@ static struct xc_osdep_ops netbsd_privcmd_ops = {
         .map_foreign_bulk = &xc_map_foreign_bulk_compat,
         .map_foreign_range = &netbsd_privcmd_map_foreign_range,
         .map_foreign_ranges = &netbsd_privcmd_map_foreign_ranges,
-
-        .cache_flush = &netbsd_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 3ccee2b..838fd21 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -249,11 +249,6 @@ int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall)
     return xch->ops->u.privcmd.hypercall(xch, xch->ops_handle, hypercall);
 }
 
-void xc_cache_flush(xc_interface *xch, const void *p, size_t n)
-{
-    xch->ops->u.privcmd.cache_flush(xch, p, n);
-}
-
 xc_evtchn *xc_evtchn_open(xentoollog_logger *logger,
                              unsigned open_flags)
 {
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 50a0aa7..92271c9 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,9 +304,6 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
-/* Flush data cache */
-void xc_cache_flush(xc_interface *xch, const void *p, size_t n);
-
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/tools/libxc/xc_solaris.c b/tools/libxc/xc_solaris.c
index edffab1..7257a54 100644
--- a/tools/libxc/xc_solaris.c
+++ b/tools/libxc/xc_solaris.c
@@ -23,7 +23,6 @@
 #include <xen/memory.h>
 #include <xen/sys/evtchn.h>
 #include <unistd.h>
-#include <stdlib.h>
 #include <fcntl.h>
 #include <malloc.h>
 
@@ -179,17 +178,6 @@ mmap_failed:
     return NULL;
 }
 
-static void solaris_privcmd_cache_flush(xc_interface *xch,
-                                        const void *ptr, size_t nr)
-{
-#if defined(__i386__) || defined(__x86_64__)
-    /* No need for cache maintenance on x86 */
-#else
-    PERROR("No cache flush operation defined for architecture");
-    abort();
-#endif
-}
-
 static struct xc_osdep_ops solaris_privcmd_ops = {
     .open = &solaris_privcmd_open,
     .close = &solaris_privcmd_close,
@@ -204,8 +192,6 @@ static struct xc_osdep_ops solaris_privcmd_ops = {
         .map_foreign_bulk = &xc_map_foreign_bulk_compat,
         .map_foreign_range = &solaris_privcmd_map_foreign_range,
         .map_foreign_ranges = &solaris_privcmd_map_foreign_ranges,
-
-        .cache_flush = &solaris_privcmd_cache_flush,
     },
 };
 
diff --git a/tools/libxc/xenctrl_osdep_ENOSYS.c b/tools/libxc/xenctrl_osdep_ENOSYS.c
index d911b10..4821342 100644
--- a/tools/libxc/xenctrl_osdep_ENOSYS.c
+++ b/tools/libxc/xenctrl_osdep_ENOSYS.c
@@ -63,13 +63,6 @@ static void *ENOSYS_privcmd_map_foreign_ranges(xc_interface *xch, xc_osdep_handl
     return MAP_FAILED;
 }
 
-static void ENOSYS_privcmd_cache_flush(xc_interface *xch, const void *p, size_t n)
-{
-    unsigned long start = (unsigned long)p;
-    unsigned long end = start + n;
-    IPRINTF(xch, "ENOSYS_privcmd: cache_flush: %#lx-%#lx\n", start, end);
-}
-
 static struct xc_osdep_ops ENOSYS_privcmd_ops =
 {
     .open      = &ENOSYS_privcmd_open,
@@ -81,8 +74,6 @@ static struct xc_osdep_ops ENOSYS_privcmd_ops =
         .map_foreign_bulk = &ENOSYS_privcmd_map_foreign_bulk,
         .map_foreign_range = &ENOSYS_privcmd_map_foreign_range,
         .map_foreign_ranges = &ENOSYS_privcmd_map_foreign_ranges,
-
-        .cache_flush = &ENOSYS_privcmd_cache_flush,
     }
 };
 
diff --git a/tools/libxc/xenctrlosdep.h b/tools/libxc/xenctrlosdep.h
index 6c9a005..e610a24 100644
--- a/tools/libxc/xenctrlosdep.h
+++ b/tools/libxc/xenctrlosdep.h
@@ -89,7 +89,6 @@ struct xc_osdep_ops
             void *(*map_foreign_ranges)(xc_interface *xch, xc_osdep_handle h, uint32_t dom, size_t size, int prot,
                                         size_t chunksize, privcmd_mmap_entry_t entries[],
                                         int nentries);
-            void (*cache_flush)(xc_interface *xch, const void *p, size_t n);
         } privcmd;
         struct {
             int (*fd)(xc_evtchn *xce, xc_osdep_handle h);
-- 
1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:15:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZJ1-0007fk-F1; Tue, 07 Jan 2014 16:15:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ZIy-0007ff-Me
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:15:00 +0000
Received: from [193.109.254.147:31676] by server-16.bemta-14.messagelabs.com
	id BA/40-20600-4082CC25; Tue, 07 Jan 2014 16:15:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389111298!9353182!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15517 invoked from network); 7 Jan 2014 16:14:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:14:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88344229"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 16:14:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	11:14:56 -0500
Message-ID: <1389111295.12612.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 7 Jan 2014 16:14:55 +0000
In-Reply-To: <1388787969-24576-1-git-send-email-andrew.cooper3@citrix.com>
References: <1388787969-24576-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] docs/Makefile: Split the install target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-03 at 22:26 +0000, Andrew Cooper wrote:
> Split the current install target into two subtargets, install-man-pages and
> install-html, with the main install target depending on both.
> 
> This helps packagers who want the man pages to put in appropriate rpms/debs,
> but don't want to build the html developer docs.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> ---
> 
> I am slightly dubious as to the behaviour of rm-rf'ing the DOCDIR, but the
> behaviour is left exactly as before, for sake-of-mind when reviewing.

It is rather suspect, I agree.

> I would like this to be taken for 4.4, as I have fallen over it yet again when
> packing 4.4-rc1 for testing in XenServer.  Unlike some of my other fixes to
> the Xen build system, this is trivial to fix correctly upstream, and will
> benefit everyone trying to package 4.4 for distros.

I presume the majority of people packaging Xen for distros either
already have the appropriate patch or actually do want the docs. In fact
I think your requirement to strip them out is likely to be a very
minority one; most distros package the docs for the software they are
shipping (maybe in a separate package).

Why don't you make the XenServer build system build a xen-docs(-html)
package and then not install it anywhere, or delete it, or put it in
your SDK or something?

In any case if there is a need for this level of control over which
types of docs get built and installed it should probably be done at the
configure level rather than the install target level.

Ian.

> ---
>  docs/Makefile |   15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/docs/Makefile b/docs/Makefile
> index 8d5d48e..e4bf28c 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -78,16 +78,21 @@ distclean: clean
>  	rm -rf $(XEN_ROOT)/config/Docs.mk config.log config.status config.cache \
>  		autom4te.cache
>  
> -.PHONY: install
> -install: all
> -	rm -rf $(DESTDIR)$(DOCDIR)
> -	$(INSTALL_DIR) $(DESTDIR)$(DOCDIR)
> -
> +.PHONY: install-man-pages
> +install-man-pages: man-pages
>  	$(INSTALL_DIR) $(DESTDIR)$(MANDIR)
>  	cp -R man1 $(DESTDIR)$(MANDIR)
>  	cp -R man5 $(DESTDIR)$(MANDIR)
> +
> +.PHONY: install-html
> +install-html: html txt figs
> +	rm -rf $(DESTDIR)$(DOCDIR)
> +	$(INSTALL_DIR) $(DESTDIR)$(DOCDIR)
>  	[ ! -d html ] || cp -R html $(DESTDIR)$(DOCDIR)
>  
> +.PHONY: install
> +install: install-man-pages install-html
> +
>  html/index.html: $(DOC_HTML) $(CURDIR)/gen-html-index INDEX
>  	$(PERL) -w -- $(CURDIR)/gen-html-index -i INDEX html $(DOC_HTML)
>  
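
[Editorial note: the split-install pattern in the quoted patch -- a phony
"install" that only depends on two phony subtargets -- can be demonstrated
with a tiny standalone mock Makefile. The targets below echo placeholders
instead of running the real docs/Makefile recipes; this is an illustration,
not the actual build system.]

```shell
# Build a throwaway Makefile using the same dependency structure as the
# patch: "install" does nothing itself, it just pulls in both subtargets.
tmp=$(mktemp -d)
printf '.PHONY: install install-man-pages install-html\ninstall-man-pages:\n\t@echo man-pages\ninstall-html:\n\t@echo html\ninstall: install-man-pages install-html\n' > "$tmp/Makefile"
# A packager can now run either subtarget alone, or both via "install".
make -s --no-print-directory -C "$tmp" install
rm -rf "$tmp"
```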



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:24:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZS3-0008G4-SK; Tue, 07 Jan 2014 16:24:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0ZS2-0008Fz-B6
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:24:22 +0000
Received: from [85.158.139.211:58505] by server-14.bemta-5.messagelabs.com id
	FD/72-24200-53A2CC25; Tue, 07 Jan 2014 16:24:21 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389111858!8358340!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23061 invoked from network); 7 Jan 2014 16:24:20 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 16:24:20 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Jan 2014 16:24:17 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="642600599"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.61])
	by fldsmtpi01.verizon.com with ESMTP; 07 Jan 2014 16:24:15 +0000
Message-ID: <52CC2A2F.7010700@terremark.com>
Date: Tue, 07 Jan 2014 11:24:15 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>	
	<1388857936-664-5-git-send-email-dslutz@verizon.com>	
	<20140106175349.6cbd190b@mantra.us.oracle.com>	
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
	<1389088937.31766.107.camel@kazak.uk.xensource.com>
In-Reply-To: <1389088937.31766.107.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/14 05:02, Ian Campbell wrote:
> On Tue, 2014-01-07 at 10:00 +0000, Ian Campbell wrote:
>> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
>>> On Sat,  4 Jan 2014 12:52:16 -0500
>>> Don Slutz <dslutz@verizon.com> wrote:
>>>
>>>> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
>>>> returned.
>>>>
>>>> Without this gdb does not report an error.
>>>>
>>>> With this patch and using a 1G hvm domU:
>>>>
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
>>>>
>>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>>> ---
>>>>   xen/arch/x86/domctl.c | 3 +--
>>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
>>>> index ef6c140..4aa751f 100644
>>>> --- a/xen/arch/x86/domctl.c
>>>> +++ b/xen/arch/x86/domctl.c
>>>> @@ -997,8 +997,7 @@ long arch_do_domctl(
>>>>               domctl->u.gdbsx_guest_memio.len;
>>>>   
>>>>           ret = gdbsx_guest_mem_io(domctl->domain,
>>>> &domctl->u.gdbsx_guest_memio);
>>>> -        if ( !ret )
>>>> -           copyback = 1;
>>>> +        copyback = 1;
>>>>       }
>>>>       break;
>>>>   
>>> Ooopsy... my thought was that an application should not even look at
>>> remain if the hcall/syscall failed, but forgot when writing the
>>> gdbsx itself :). Think of it this way, if the call didn't even make it to
>>> xen, and some reason the ioctl returned non-zero rc, then remain would
>>> still be zero.
> Thinking about this for a second longer -- how does this interface deal
> with partial success?
>
> It seems natural to me for it to return an error and also update
> ->remain, but perhaps you have a different scheme in mind?
>
>

I had assumed that this patch (which is not needed to "fix" the bugs I
found) was to be dropped in v2. However, I agree that currently there is
no way to know about partial success. The untested change:

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..0add07e 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -43,7 +43,7 @@ static int gdbsx_guest_mem_io(
      iop->remain = dbg_rw_mem(
          (dbgva_t)iop->gva, (dbgbyte_t *)l_uva, iop->len, domid,
          iop->gwr, iop->pgd3val);
-    return (iop->remain ? -EFAULT : 0);
+    return 0;
  }

  long arch_do_domctl(


Would appear to allow partial success to be reported, while still meeting
the expectation that remain is not examined when an error is returned.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>> returned.
>>>>
>>>> Without this gdb does not report an error.
>>>>
>>>> With this patch and using a 1G hvm domU:
>>>>
>>>> (gdb) x/1xh 0x6ae9168b
>>>> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
>>>>
>>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>>> ---
>>>>   xen/arch/x86/domctl.c | 3 +--
>>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
>>>> index ef6c140..4aa751f 100644
>>>> --- a/xen/arch/x86/domctl.c
>>>> +++ b/xen/arch/x86/domctl.c
>>>> @@ -997,8 +997,7 @@ long arch_do_domctl(
>>>>               domctl->u.gdbsx_guest_memio.len;
>>>>   
>>>>           ret = gdbsx_guest_mem_io(domctl->domain,
>>>> &domctl->u.gdbsx_guest_memio);
>>>> -        if ( !ret )
>>>> -           copyback = 1;
>>>> +        copyback = 1;
>>>>       }
>>>>       break;
>>>>   
>>> Ooopsy... my thought was that an application should not even look at
>>> remain if the hcall/syscall failed, but forgot when writing the
>>> gdbsx itself :). Think of it this way, if the call didn't even make it to
>>> xen, and some reason the ioctl returned non-zero rc, then remain would
>>> still be zero.
> Thinking about this for a second longer -- how does this interface deal
> with partial success?
>
> It seems natural to me for it to return an error and also update
> ->remain, but perhaps you have a different scheme in mind?
>
>

I had assumed that this patch (which is not needed to "fix" the bugs I found) was to be dropped in v2.  However, I agree that there is currently no way to know about partial success.  The untested change:

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..0add07e 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -43,7 +43,7 @@ static int gdbsx_guest_mem_io(
      iop->remain = dbg_rw_mem(
          (dbgva_t)iop->gva, (dbgbyte_t *)l_uva, iop->len, domid,
          iop->gwr, iop->pgd3val);
-    return (iop->remain ? -EFAULT : 0);
+    return 0;
  }

  long arch_do_domctl(


This would appear to allow partial success to be reported, while also honoring the rule that remain should not be examined when the call returns an error.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:26:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZUL-0008MH-Hz; Tue, 07 Jan 2014 16:26:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0ZUK-0008M8-8j
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:26:44 +0000
Received: from [85.158.139.211:63791] by server-16.bemta-5.messagelabs.com id
	17/51-11843-3CA2CC25; Tue, 07 Jan 2014 16:26:43 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389112000!8367156!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9547 invoked from network); 7 Jan 2014 16:26:42 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 16:26:42 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Jan 2014 16:26:37 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="642602813"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.61])
	by fldsmtpi01.verizon.com with ESMTP; 07 Jan 2014 16:26:36 +0000
Message-ID: <52CC2ABC.3040505@terremark.com>
Date: Tue, 07 Jan 2014 11:26:36 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>	
	<1388857936-664-5-git-send-email-dslutz@verizon.com>	
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
In-Reply-To: <1389088824.31766.105.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/14 05:00, Ian Campbell wrote:
> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
>> On Sat,  4 Jan 2014 12:52:16 -0500
>> Don Slutz <dslutz@verizon.com> wrote:
>>
>>> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
>>> returned.
>>>
>>> Without this gdb does not report an error.
>>>
>>> With this patch and using a 1G hvm domU:
>>>
>>> (gdb) x/1xh 0x6ae9168b
>>> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>> ---
>>>   xen/arch/x86/domctl.c | 3 +--
>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
>>> index ef6c140..4aa751f 100644
>>> --- a/xen/arch/x86/domctl.c
>>> +++ b/xen/arch/x86/domctl.c
>>> @@ -997,8 +997,7 @@ long arch_do_domctl(
>>>               domctl->u.gdbsx_guest_memio.len;
>>>   
>>>           ret = gdbsx_guest_mem_io(domctl->domain,
>>> &domctl->u.gdbsx_guest_memio);
>>> -        if ( !ret )
>>> -           copyback = 1;
>>> +        copyback = 1;
>>>       }
>>>       break;
>>>   
>> Ooopsy... my thought was that an application should not even look at
>> remain if the hcall/syscall failed, but forgot when writing the
>> gdbsx itself :). Think of it this way, if the call didn't even make it to
>> xen, and some reason the ioctl returned non-zero rc, then remain would
>> still be zero. So I think we should fix gdbsx instead of here:
>>
>> xg_write_mem():
>>      if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
>>      {
>>          XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
>>                 iop->remain, errno, rc);
> Isn't this still using iop->remain on failure which is what you say
> shouldn't be done?

I agree, which is why I took it only as a guideline.  What I have tested with is:

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 3b2a285..53a0ce1 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -787,8 +787,11 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
      iop->gwr = 0;       /* not writing to guest */

      if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
-        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
-              iop->remain, errno, rc);
+    {
+        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n",
+              errno, rc);
+        return tobuf_len;
+    }

      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
@@ -818,8 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
      iop->gwr = 1;       /* writing to guest */

      if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
-        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
-              iop->remain, errno, rc);
+    {
+        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
+              guestva, errno, rc);
+        return buflen;
+    }
      return iop->remain;
  }


This works fine, and I plan for it to be part of v2.
     -Don


>>          return iop->len;
>>      }
>>
>> Similarly in xg_read_mem().
>>
>> Hope that makes sense. Don't mean to create work for you for my mistake,
>> so if you don't have time, I can submit a patch for this too.
>>
>> thanks
>> Mukesh
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:30:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZXk-0000ZQ-18; Tue, 07 Jan 2014 16:30:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0ZXi-0000ZD-5s
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:30:14 +0000
Received: from [193.109.254.147:59385] by server-1.bemta-14.messagelabs.com id
	5C/93-15600-59B2CC25; Tue, 07 Jan 2014 16:30:13 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389112212!9388954!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20282 invoked from network); 7 Jan 2014 16:30:12 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 16:30:12 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55131 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0ZMy-0002L3-45; Tue, 07 Jan 2014 17:19:08 +0100
Date: Tue, 7 Jan 2014 17:30:09 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <306967039.20140107173009@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140107144355.GH3588@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	- CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, January 7, 2014, 3:43:55 PM, you wrote:

> On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
>> On 07/01/14 11:53, Sander Eikelenboom wrote:
>> > Hi Konrad,
>> > 
>> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
>> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)

> Hot damm! Thank you for testing so quickly!

Hrmm, PVH doesn't seem to be available on AMD systems yet?

Just tried "pvh=1" on a (former) PV guest, but it seems the guest can't be created:

xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
Parsing config from /etc/xen/domU/production/media.cfg
libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
xc: debug: hypercall buffer: total allocations:11 total releases:11
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0

XL doesn't seem to be too helpful as to *why*, though ..
but xl dmesg does give a clearer error message (though perhaps xl should report it too?):

(XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support

--
Sander


# xl info
host                   : serveerstertje
release                : 3.13.0-rc7-20140107-xendevel+
version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
machine                : x86_64
nr_cpus                : 6
max_cpu_id             : 5
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 3200
hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
virt_caps              : hvm hvm_directio
total_memory           : 20479
free_memory            : 12050
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : -unstable
xen_version            : 4.4-unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
cc_compile_by          : root
cc_compile_domain      : dyndns.org
cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
xend_config_format     : 4


>> > 
>> > Xen: latest xen-unstable
>> 
>> The FIFO-based event channel ABI is broken in current xen-unstable.
>> 
>> You need the two patches from:
>> 
>> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
>> 
>> You can also disable the use the FIFO ABI by the guest using the
>> xen.fifo_events = 0 kernel command line option.
>> 
>> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:30:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZXk-0000ZQ-18; Tue, 07 Jan 2014 16:30:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0ZXi-0000ZD-5s
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:30:14 +0000
Received: from [193.109.254.147:59385] by server-1.bemta-14.messagelabs.com id
	5C/93-15600-59B2CC25; Tue, 07 Jan 2014 16:30:13 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389112212!9388954!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20282 invoked from network); 7 Jan 2014 16:30:12 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 16:30:12 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55131 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0ZMy-0002L3-45; Tue, 07 Jan 2014 17:19:08 +0100
Date: Tue, 7 Jan 2014 17:30:09 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <306967039.20140107173009@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140107144355.GH3588@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	- CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, January 7, 2014, 3:43:55 PM, you wrote:

> On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
>> On 07/01/14 11:53, Sander Eikelenboom wrote:
>> > Hi Konrad,
>> > 
>> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
>> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)

> Hot damm! Thank you for testing so quickly!

Hrmm PVH doesn't seem to be available on AMD systems yet ?

Just tried the "pvh=1" on a (former) PV guest, but it seems it can't create the guest:

xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
Parsing config from /etc/xen/domU/production/media.cfg
libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
xc: debug: hypercall buffer: total allocations:11 total releases:11
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0

XL doesn't seem to be too helpful as to *why* though ..
but xl dmesg does give a clearer error message (but perhaps xl also should ?):

(XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support

--
Sander


# xl info
host                   : serveerstertje
release                : 3.13.0-rc7-20140107-xendevel+
version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
machine                : x86_64
nr_cpus                : 6
max_cpu_id             : 5
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 3200
hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
virt_caps              : hvm hvm_directio
total_memory           : 20479
free_memory            : 12050
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : -unstable
xen_version            : 4.4-unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
cc_compile_by          : root
cc_compile_domain      : dyndns.org
cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
xend_config_format     : 4


>> > 
>> > Xen: latest xen-unstable
>> 
>> The FIFO-based event channel ABI is broken in current xen-unstable.
>> 
>> You need the two patches from:
>> 
>> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
>> 
>> You can also disable the use of the FIFO ABI by the guest using the
>> xen.fifo_events=0 kernel command line option.
>> 
>> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:31:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ZYz-0000hm-C9; Tue, 07 Jan 2014 16:31:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ZYy-0000hd-SE
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:31:33 +0000
Received: from [193.109.254.147:25378] by server-16.bemta-14.messagelabs.com
	id 7E/2A-20600-4EB2CC25; Tue, 07 Jan 2014 16:31:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389112290!9376316!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19161 invoked from network); 7 Jan 2014 16:31:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:31:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90516509"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 16:31:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	11:31:29 -0500
Message-ID: <1389112288.12612.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Tue, 7 Jan 2014 16:31:28 +0000
In-Reply-To: <20140106175628.GD10654@zion.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
	<21194.59357.835065.559486@mariner.uk.xensource.com>
	<20140106175628.GD10654@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 17:56 +0000, Wei Liu wrote:
> On Mon, Jan 06, 2014 at 05:29:01PM +0000, Ian Jackson wrote:
> > Ian Campbell writes ("Re: [PATCH V4] xl: create VFB for PV guest when VNC is specified"):
> > > And now George has gone away and left me holding the can, smart move on
> > > his part ;-)
> > 
> > I have also reviewed this patch again.
> > 
> > Firstly,
> > 
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > 
> > Now, onto the RM questions.
> > 
> > > The potential risk is that this breaks the existing vfb syntax which
> > > works, or it breaks the hvm stuff. This is of particular concern because
> > > I don't think any of that is covered by osstest (except perhaps hvm
> > > console, but that might only be on some other error when osstest takes a
> > > screenshot), so the probability of finding it before release is reliant
> > > on manual testing/test days/user testing etc.
> > > Are you happy that all the existing options keep working?
> > 
> > Wei, could you tell us what configuration(s) you tested ?
> > 
> 
[.snip.]

That looks pretty comprehensive.

Given that George has this in his list of open items for 4.4 with
"blocker?" (his query marker) I'm inclined towards giving a release
exception here.

Release-acked-by: Ian Campbell 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:43:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Zk3-0001Yg-IM; Tue, 07 Jan 2014 16:42:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0Zk2-0001YM-0D
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:42:58 +0000
Received: from [85.158.143.35:9174] by server-1.bemta-4.messagelabs.com id
	19/78-02132-19E2CC25; Tue, 07 Jan 2014 16:42:57 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389112975!10061540!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26685 invoked from network); 7 Jan 2014 16:42:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:42:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="88357373"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 16:42:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 11:42:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0ZT9-0007kw-Ru;
	Tue, 07 Jan 2014 16:25:31 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 16:25:29 +0000
Message-ID: <1389111929-37231-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next] xen-netback: stop vif thread spinning
	if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The recent patch to improve guest receive side flow control (ca2f09f2) had a
slight flaw in the wait condition for the vif thread in that any remaining
skbs in the guest receive side netback internal queue would prevent the
thread from sleeping. An unresponsive frontend can lead to a permanently
non-empty internal queue and thus the thread will spin. In this case the
thread should really sleep until the frontend becomes responsive again.

This patch adds an extra flag to the vif which is set if the shared ring
is full and cleared when skbs are drained into the shared ring. Thus,
if the thread runs, finds the shared ring full and can make no progress the
flag remains set. If the flag remains set then the thread will sleep,
regardless of a non-empty queue, until the next event from the frontend.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netback/common.h  |    1 +
 drivers/net/xen-netback/netback.c |   12 ++++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c955fc3..4c76bcb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -143,6 +143,7 @@ struct xenvif {
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
+	bool rx_queue_stopped;
 	/* Set when the RX interrupt is triggered by the frontend.
 	 * The worker thread may need to wake the queue.
 	 */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..1c31ac5 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -477,6 +477,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
 	int need_to_notify = 0;
+	int ring_full = 0;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -509,6 +510,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
 			skb_queue_head(&vif->rx_queue, skb);
 			need_to_notify = 1;
+			ring_full = 1;
 			break;
 		}
 
@@ -521,8 +523,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
-	if (!npo.copy_prod)
+	if (!npo.copy_prod) {
+		if (ring_full)
+			vif->rx_queue_stopped = true;
 		goto done;
+	}
+
+	vif->rx_queue_stopped = false;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
 	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
@@ -1724,7 +1731,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return !skb_queue_empty(&vif->rx_queue) || vif->rx_event;
+	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
+		vif->rx_event;
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Zz2-0002fj-KI; Tue, 07 Jan 2014 16:58:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0Zz1-0002dw-8k
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:58:27 +0000
Received: from [85.158.139.211:17442] by server-15.bemta-5.messagelabs.com id
	90/11-08490-2323CC25; Tue, 07 Jan 2014 16:58:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389113904!8364930!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1641 invoked from network); 7 Jan 2014 16:58:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 16:58:25 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07GwKFR022382
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 16:58:21 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07GwJTl002295
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 16:58:20 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07GwJ1j028400; Tue, 7 Jan 2014 16:58:19 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 08:58:19 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 796651C18DC; Tue,  7 Jan 2014 11:58:18 -0500 (EST)
Date: Tue, 7 Jan 2014 11:58:18 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140107165818.GA8748@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
	<306967039.20140107173009@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <306967039.20140107173009@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
 - CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>] [<ffffffff81109a58>]
 generic_exec_single+0x88/0xc0 xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 05:30:09PM +0100, Sander Eikelenboom wrote:
> 
> Tuesday, January 7, 2014, 3:43:55 PM, you wrote:
> 
> > On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
> >> On 07/01/14 11:53, Sander Eikelenboom wrote:
> >> > Hi Konrad,
> >> > 
> >> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
> >> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)
> 
> > Hot damm! Thank you for testing so quickly!
> 
> Hrmm PVH doesn't seem to be available on AMD systems yet ?

Correct.
> 
> Just tried the "pvh=1" on a (former) PV guest, but it seems it can't create the guest:
> 
> xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
> Parsing config from /etc/xen/domU/production/media.cfg
> libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
> libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
> libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
> libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
> libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
> libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
> libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
> libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
> libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
> xc: debug: hypercall buffer: total allocations:11 total releases:11
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0
> 
> XL doesn't seem to be too helpful as to *why* though ..

Not at all.

> but xl dmesg does give a clearer error message (but perhaps xl also should ?):
> 
> (XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support
> 
> --
> Sander
> 
> 
> # xl info
> host                   : serveerstertje
> release                : 3.13.0-rc7-20140107-xendevel+
> version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
> machine                : x86_64
> nr_cpus                : 6
> max_cpu_id             : 5
> nr_nodes               : 1
> cores_per_socket       : 6
> threads_per_core       : 1
> cpu_mhz                : 3200
> hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
> virt_caps              : hvm hvm_directio
> total_memory           : 20479
> free_memory            : 12050
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 4
> xen_extra              : -unstable
> xen_version            : 4.4-unstable
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
> xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
> cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
> cc_compile_by          : root
> cc_compile_domain      : dyndns.org
> cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
> xend_config_format     : 4
> 
> 
> >> > 
> >> > Xen: latest xen-unstable
> >> 
> >> The FIFO-based event channel ABI is broken in current xen-unstable.
> >> 
> >> You need the two patches from:
> >> 
> >> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
> >> 
> >> You can also disable the use of the FIFO ABI by the guest using the
> >> xen.fifo_events=0 kernel command line option.
> >> 
> >> David
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 - CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>] [<ffffffff81109a58>]
 generic_exec_single+0x88/0xc0 xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 05:30:09PM +0100, Sander Eikelenboom wrote:
> 
> Tuesday, January 7, 2014, 3:43:55 PM, you wrote:
> 
> > On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
> >> On 07/01/14 11:53, Sander Eikelenboom wrote:
> >> > Hi Konrad,
> >> > 
> >> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
> >> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)
> 
> > Hot damm! Thank you for testing so quickly!
> 
> Hrmm PVH doesn't seem to be available on AMD systems yet ?

Correct.
> 
> Just tried the "pvh=1" on a (former) PV guest, but it seems it can't create the guest:
> 
> xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
> Parsing config from /etc/xen/domU/production/media.cfg
> libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
> libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
> libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
> libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
> libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
> libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
> libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
> libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
> libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
> xc: debug: hypercall buffer: total allocations:11 total releases:11
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0
> 
> XL doesn't seem to be too helpful as to *why* though ..

Not at all.

> but xl dmesg does give a clearer error message (but perhaps xl also should ?):
> 
> (XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support
> 
> --
> Sander
> 
> 
> # xl info
> host                   : serveerstertje
> release                : 3.13.0-rc7-20140107-xendevel+
> version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
> machine                : x86_64
> nr_cpus                : 6
> max_cpu_id             : 5
> nr_nodes               : 1
> cores_per_socket       : 6
> threads_per_core       : 1
> cpu_mhz                : 3200
> hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
> virt_caps              : hvm hvm_directio
> total_memory           : 20479
> free_memory            : 12050
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 4
> xen_extra              : -unstable
> xen_version            : 4.4-unstable
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
> xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
> cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
> cc_compile_by          : root
> cc_compile_domain      : dyndns.org
> cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
> xend_config_format     : 4
> 
> 
> >> > 
> >> > Xen: latest xen-unstable
> >> 
> >> The FIFO-based event channel ABI is broken in current xen-unstable.
> >> 
> >> You need the two patches from:
> >> 
> >> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
> >> 
> >> You can also disable the use of the FIFO ABI by the guest using the
> >> xen.fifo_events=0 kernel command line option.
> >> 
> >> David
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 16:59:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 16:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0Zzi-0002kf-2S; Tue, 07 Jan 2014 16:59:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0Zzf-0002kL-18
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 16:59:07 +0000
Received: from [85.158.137.68:34949] by server-5.bemta-3.messagelabs.com id
	54/AD-25188-A523CC25; Tue, 07 Jan 2014 16:59:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389113945!7742673!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9152 invoked from network); 7 Jan 2014 16:59:05 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 16:59:05 -0000
Received: by mail-ee0-f54.google.com with SMTP id e51so158125eek.41
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 08:59:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=x5G9sV7VJLDb7jWuNI1sBalY4J6es6ygvd26yqKzNN0=;
	b=b9lyWcO05pMXOZDSBUfztJPnlf+9yhbVpEZoixnbuVZqoOs7Zku+2ZQTsWtrtMzPbD
	fEo+tbUyPWyu6MEl2V2b1aknLWCnXtme7M5gJ9i83Xxp86dTnbDGUJqg0ARKHsKy0huW
	ejGjXF1Lm5HNfZF3NkbBGyAYpWOMiDpGeJyeVJdZV3W/O2CLo+9pcvsilVVABdHwK5wG
	KdCQE8mr3GFP7fEwev86F1OnN/sw1mSMKQgqtVkYnwc3Fcv97j4hGbaugnnWSbRycSZv
	2IveYhKbj2wLFCvQXDWXmIqZ/WW/q+enZ4+Zf6VkSDKJHfGNgQd5DTnyTfUpe1aRpLwW
	hOCQ==
X-Gm-Message-State: ALoCoQk9VpOrjv8RXkJ+19uVOp4MM0xyNESrb9AmSTlnuotx2ejpheFMpVfRUQHyUasd59ad9FIR
X-Received: by 10.14.175.131 with SMTP id z3mr22862427eel.65.1389113945431;
	Tue, 07 Jan 2014 08:59:05 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	p45sm181585488eeg.1.2014.01.07.08.59.03 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 08:59:04 -0800 (PST)
Message-ID: <52CC3256.4040006@linaro.org>
Date: Tue, 07 Jan 2014 16:59:02 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 04:02 PM, Ian Campbell wrote:
>
> One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
> It's reasonably recent so reverting it takes us back to a pretty well
> understood state in the libraries. The functionality is harmless if
> incomplete. I think given the first argument I would lean towards reverting.

I would also prefer reverting the previous patch.

> 
> Obviously if you think I'm being too easy on myself please say so.

Without this patch, we will likely crash a guest in production, which is
not acceptable for a release.

> 
> Actually, if you think my judgement is right I'd appreciate being told so too.
> ---
>  xen/arch/arm/domain.c           |    5 ++
>  xen/arch/arm/domain_build.c     |    1 +
>  xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
>  xen/arch/arm/vtimer.c           |    6 +-
>  xen/include/asm-arm/cpregs.h    |    4 +
>  xen/include/asm-arm/processor.h |    2 +-
>  xen/include/asm-arm/sysregs.h   |   19 ++++-
>  7 files changed, 182 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 124cccf..104d228 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
>      else
>          hcr |= HCR_RW;
>  
> +    if ( n->arch.sctlr & SCTLR_M )
> +        hcr &= ~(HCR_TVM|HCR_DC);
> +    else
> +        hcr |= (HCR_TVM|HCR_DC);
> +
>      WRITE_SYSREG(hcr, HCR_EL2);
>      isb();
>  
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..bb31db8 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
>      else
>          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
>  #endif
> +    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);

Is this useful? As I understand it, we will context switch at least once
before booting dom0.

If we need it, perhaps the better place to set it up is init_traps?

>  
>      /*
>       * kernel_load will determine the placement of the initrd & fdt in
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 7c5ab19..d00bba3 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
>      regs->pc += hsr.len ? 4 : 2;
>  }
>  
> +static void update_sctlr(uint32_t v)
> +{
> +    /*
> +     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
> +     * because they are incompatible.
> +     *
> +     * Once HCR.DC is disabled then we do not need HCR_TVM either,
> +     * since its only purpose was to catch the MMU being enabled.
> +     *
> +     * Both are set appropriately on context switch but we need to
> +     * clear them now since we may not context switch on return to
> +     * guest.
> +     */
> +    if ( v & SCTLR_M )
> +        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);

Even if it's unlikely, can we handle the case where the guest disables
the MMU again?

Also from ARM ARM B3.2.1, a TLB flush by VMID is required if HCR_DC is
disabled and the VMID is not changed.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 17:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 17:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0aOK-0004BB-2r; Tue, 07 Jan 2014 17:24:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0aOJ-0004B6-0B
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 17:24:35 +0000
Received: from [193.109.254.147:57455] by server-9.bemta-14.messagelabs.com id
	38/A9-13957-2583CC25; Tue, 07 Jan 2014 17:24:34 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389115473!9333382!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10363 invoked from network); 7 Jan 2014 17:24:33 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 17:24:33 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55288 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0aDZ-0006cb-3e; Tue, 07 Jan 2014 18:13:29 +0100
Date: Tue, 7 Jan 2014 18:24:30 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <816291453.20140107182430@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140107165818.GA8748@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
	<306967039.20140107173009@eikelenboom.it>
	<20140107165818.GA8748@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	- CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, January 7, 2014, 5:58:18 PM, you wrote:

> On Tue, Jan 07, 2014 at 05:30:09PM +0100, Sander Eikelenboom wrote:
>> 
>> Tuesday, January 7, 2014, 3:43:55 PM, you wrote:
>> 
>> > On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
>> >> On 07/01/14 11:53, Sander Eikelenboom wrote:
>> >> > Hi Konrad,
>> >> > 
>> >> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
>> >> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)
>> 
>> > Hot damm! Thank you for testing so quickly!
>> 
>> Hrmm PVH doesn't seem to be available on AMD systems yet ?

> Correct.

Ah thought i read something about that in the past ..
but it wasn't mentioned in your patch announcement which does have a nice "how to test" part :-)
(probably because it's more a restriction of Xen and not of the kernel patches announced)

So i thought let's give it a try ... ah well will try to be a bit more patient ;-)

>> 
>> Just tried the "pvh=1" on a (former) PV guest, but it seems it can't create the guest:
>> 
>> xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
>> Parsing config from /etc/xen/domU/production/media.cfg
>> libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
>> libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
>> libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
>> libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
>> libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
>> libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
>> libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
>> libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
>> libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
>> xc: debug: hypercall buffer: total allocations:11 total releases:11
>> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
>> xc: debug: hypercall buffer: cache current size:2
>> xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0
>> 
>> XL doesn't seem to be too helpful as to *why* though ..

> Not at all.

>> but xl dmesg does give a clearer error message (but perhaps xl also should ?):
>> 
>> (XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support
>> 
>> --
>> Sander
>> 
>> 
>> # xl info
>> host                   : serveerstertje
>> release                : 3.13.0-rc7-20140107-xendevel+
>> version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
>> machine                : x86_64
>> nr_cpus                : 6
>> max_cpu_id             : 5
>> nr_nodes               : 1
>> cores_per_socket       : 6
>> threads_per_core       : 1
>> cpu_mhz                : 3200
>> hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
>> virt_caps              : hvm hvm_directio
>> total_memory           : 20479
>> free_memory            : 12050
>> sharing_freed_memory   : 0
>> sharing_used_memory    : 0
>> outstanding_claims     : 0
>> free_cpus              : 0
>> xen_major              : 4
>> xen_minor              : 4
>> xen_extra              : -unstable
>> xen_version            : 4.4-unstable
>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler          : credit
>> xen_pagesize           : 4096
>> platform_params        : virt_start=0xffff800000000000
>> xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
>> xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
>> cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
>> cc_compile_by          : root
>> cc_compile_domain      : dyndns.org
>> cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
>> xend_config_format     : 4
>> 
>> 
>> >> > 
>> >> > Xen: latest xen-unstable
>> >> 
>> >> The FIFO-based event channel ABI is broken in current xen-unstable.
>> >> 
>> >> You need the two patches from:
>> >> 
>> >> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
>> >> 
>> >> You can also disable the use the FIFO ABI by the guest using the
>> >> xen.fifo_events = 0 kernel command line option.
>> >> 
>> >> David
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 17:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 17:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0aOK-0004BB-2r; Tue, 07 Jan 2014 17:24:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0aOJ-0004B6-0B
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 17:24:35 +0000
Received: from [193.109.254.147:57455] by server-9.bemta-14.messagelabs.com id
	38/A9-13957-2583CC25; Tue, 07 Jan 2014 17:24:34 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389115473!9333382!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10363 invoked from network); 7 Jan 2014 17:24:33 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 7 Jan 2014 17:24:33 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55288 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0aDZ-0006cb-3e; Tue, 07 Jan 2014 18:13:29 +0100
Date: Tue, 7 Jan 2014 18:24:30 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <816291453.20140107182430@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140107165818.GA8748@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
	<306967039.20140107173009@eikelenboom.it>
	<20140107165818.GA8748@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
	- CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>]
	[<ffffffff81109a58>] generic_exec_single+0x88/0xc0
	xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, January 7, 2014, 5:58:18 PM, you wrote:

> On Tue, Jan 07, 2014 at 05:30:09PM +0100, Sander Eikelenboom wrote:
>> 
>> Tuesday, January 7, 2014, 3:43:55 PM, you wrote:
>> 
>> > On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
>> >> On 07/01/14 11:53, Sander Eikelenboom wrote:
>> >> > Hi Konrad,
>> >> > 
>> >> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
>> >> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)
>> 
>> > Hot damm! Thank you for testing so quickly!
>> 
>> Hrmm PVH doesn't seem to be available on AMD systems yet ?

> Correct.

Ah, I thought I read something about that in the past,
but it wasn't mentioned in your patch announcement, which does have a nice "how to test" part :-)
(probably because it's more a restriction of Xen than of the kernel patches announced)

So I thought I'd give it a try ... ah well, I'll try to be a bit more patient ;-)

>> 
>> Just tried the "pvh=1" on a (former) PV guest, but it seems it can't create the guest:
>> 
>> xl -vvvvvvvvvvv create /etc/xen/domU/production/media.cfg
>> Parsing config from /etc/xen/domU/production/media.cfg
>> libxl: debug: libxl_create.c:1315:do_domain_create: ao 0x17c8a30: create: how=(nil) callback=(nil) poller=0x17c9230
>> libxl: error: libxl_create.c:484:libxl__domain_make: domain creation fail
>> libxl: error: libxl_create.c:728:initiate_domain_create: cannot make domain: -3
>> libxl: error: libxl.c:1388:libxl__destroy_domid: non-existant domain -1
>> libxl: error: libxl.c:1352:domain_destroy_callback: unable to destroy guest with domid 4294967295
>> libxl: error: libxl_create.c:1293:domcreate_destruction_cb: unable to destroy domain 4294967295 following failed creation
>> libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x17c8a30: complete, rc=-3
>> libxl: debug: libxl_create.c:1329:do_domain_create: ao 0x17c8a30: inprogress: poller=0x17c9230, flags=ic
>> libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x17c8a30: destroy
>> xc: debug: hypercall buffer: total allocations:11 total releases:11
>> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
>> xc: debug: hypercall buffer: cache current size:2
>> xc: debug: hypercall buffer: cache hits:9 misses:2 toobig:0
>> 
>> XL doesn't seem to be too helpful as to *why* though ..

> Not at all.

>> but xl dmesg does give a clearer error message (but perhaps xl also should ?):
>> 
>> (XEN) [2014-01-07 16:26:08] Attempt to create a PVH guest on a system without necessary hardware support
>> 
>> --
>> Sander
>> 
>> 
>> # xl info
>> host                   : serveerstertje
>> release                : 3.13.0-rc7-20140107-xendevel+
>> version                : #1 SMP Tue Jan 7 10:02:55 CET 2014
>> machine                : x86_64
>> nr_cpus                : 6
>> max_cpu_id             : 5
>> nr_nodes               : 1
>> cores_per_socket       : 6
>> threads_per_core       : 1
>> cpu_mhz                : 3200
>> hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
>> virt_caps              : hvm hvm_directio
>> total_memory           : 20479
>> free_memory            : 12050
>> sharing_freed_memory   : 0
>> sharing_used_memory    : 0
>> outstanding_claims     : 0
>> free_cpus              : 0
>> xen_major              : 4
>> xen_minor              : 4
>> xen_extra              : -unstable
>> xen_version            : 4.4-unstable
>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler          : credit
>> xen_pagesize           : 4096
>> platform_params        : virt_start=0xffff800000000000
>> xen_changeset          : Tue Jan 7 15:09:42 2014 +0100 git:81b1c7d-dirty
>> xen_commandline        : dom0_mem=1536M,max:1536M loglvl=all loglvl_guest=all console_timestamps vga=gfx-1280x1024x32 cpuidle cpufreq=xen debug lapic=debug apic_verbosity=debug apic=debug iommu=on,verbose,debug,amd-iommu-debug ivrs_ioapic[6]=00:14.0 ivrs_hpet[0]=00:14.0 com1=38400,8n1 console=vga,com1
>> cc_compiler            : gcc-4.7.real (Debian 4.7.2-5) 4.7.2
>> cc_compile_by          : root
>> cc_compile_domain      : dyndns.org
>> cc_compile_date        : Tue Jan  7 15:50:32 CET 2014
>> xend_config_format     : 4
>> 
>> 
>> >> > 
>> >> > Xen: latest xen-unstable
>> >> 
>> >> The FIFO-based event channel ABI is broken in current xen-unstable.
>> >> 
>> >> You need the two patches from:
>> >> 
>> >> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01458.html
>> >> 
>> >> You can also disable the use of the FIFO ABI by the guest using the
>> >> xen.fifo_events=0 kernel command line option.
>> >> 
>> >> David
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 17:25:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 17:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0aPP-0004EQ-I2; Tue, 07 Jan 2014 17:25:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W0aPN-0004EE-Gl
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 17:25:41 +0000
Received: from [193.109.254.147:51668] by server-7.bemta-14.messagelabs.com id
	B7/C6-15500-4983CC25; Tue, 07 Jan 2014 17:25:40 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389115538!9378633!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13369 invoked from network); 7 Jan 2014 17:25:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 17:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,619,1384300800"; d="scan'208";a="90540359"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 17:25:37 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 12:25:36 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL01.citrite.net ([10.69.46.32]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 18:25:35 +0100
From: Rob Hoes <Rob.Hoes@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Thread-Topic: [PATCH v2 3/3] libxl: ocaml: use 'for_app_registration' in
	osevent callbacks
Thread-Index: AQHO+CezzEG8gOV2ZUC53KMj+kp0lpp5dHAAgAAu4RA=
Date: Tue, 7 Jan 2014 17:25:34 +0000
Message-ID: <360717C0B01E6345BCBE64B758E22C2D1D7E5E@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-4-git-send-email-rob.hoes@citrix.com>
	<1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<21196.6440.529434.66793@mariner.uk.xensource.com>
In-Reply-To: <21196.6440.529434.66793@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Dave Scott <Dave.Scott@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Rob Hoes writes ("[PATCH v2 3/3] libxl: ocaml: use 'for_app_registration'
> in osevent callbacks"):
> > This allows the application to pass a token to libxl in the fd/timeout
> > registration callbacks, which it receives back in modification or
> > deregistration callbacks.
> >
> > It turns out that this is essential for timeout handling, in order to
> > identify which timeout to change on a modify event.
> >
> > Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
> ...
> > -	caml_callbackN(*func, 4, args);
> > +	for_app = malloc(sizeof(value));
> > +	if (!for_app) {
> > +		ret = ERROR_OSEVENT_REG_FAIL;
> > +		goto err;
> > +	}
> > +
> > +	*for_app = caml_callbackN_exn(*func, 4, args);
> > +	if (Is_exception_result(*for_app)) {
> > +		ret = ERROR_OSEVENT_REG_FAIL;
> > +		goto err;
> > +	}
> > +
> > +	caml_register_global_root(for_app);
> > +	*for_app_registration_out = for_app;
> 
> I expect you have thought this through properly, and perhaps even
> explained it already, but: is the ordering of these operations
> (particularly, of the caml_register_global_root) guaranteed to be
> correct ?
> 
> Eg, can Is_exception_result call the gc ?

This macro is defined as follows:

    #define Is_exception_result(v) (((v) & 3) == 2)

So this won't cause any harm, and I think the order is therefore correct. Also, we don't want to register for_app with the GC in the case of an exception (this may change if we later try to interpret the exception and log it).

> >  int fd_modify(void *user, int fd, void **for_app_registration_update,
> > @@ -1241,9 +1263,14 @@ int fd_modify(void *user, int fd, void
> > **for_app_registration_update,  {
> ...
> > +	/* If for_app == NULL, assume that something is wrong */
> > +	assert(for_app);
> 
> While I'm reading this in detail, this is a slightly odd wording for the
> comment, now that this is an assertion.  You probably mean something like
> "If for_app == NULL, something is very wrong".
> (Another occurrence of this later.)

Ok.

> >  void fd_deregister(void *user, int fd, void *for_app_registration)  {
> ...
> > +	caml_callbackN_exn(*func, 3, args);
> > +	/* If the callback were to raise an exception, this will be ignored;
> > +	 * this hook does not return error codes */
> 
> Can you not do anything better here ?  I think crashing the whole
> application would be better than carrying on and later calling back into
> libxl with a stale for_libxl pointer!

Ok, that makes sense.

> > -	caml_callbackN(*func, 4, args);
> > +	for_app = malloc(sizeof(value));
> > +	if (!for_app) {
> > +		ret = ERROR_OSEVENT_REG_FAIL;
> > +		goto err;
> > +	}
> > +
> > +	*for_app = caml_callbackN_exn(*func, 4, args);
> > +	if (Is_exception_result(*for_app)) {
> > +		ret = ERROR_OSEVENT_REG_FAIL;
> > +		goto err;
> > +	}
> > +
> > +	caml_register_global_root(for_app);
> > +	*for_app_registration_out = for_app;
> 
> Aren't these functions getting incredibly formulaic ?  I guess it is too
> late for 4.4 but if possible, later, I would like to see the common stuff
> factored out.

Yes, agreed.

> >  int timeout_modify(void *user, void **for_app_registration_update, @@
> > -1315,18 +1382,43 @@ int timeout_modify(void *user, void
> > **for_app_registration_update,  {
> >  	caml_leave_blocking_section();
> >  	CAMLparam0();
> > +	CAMLlocalN(args, 2);
> > +	int ret = 0;
> ...
> > +	/* This modify hook causes the timeout to fire immediately.
> Deregister
> > +	 * won't be called, so we clean up our GC registration here. */
> > +	caml_remove_global_root(for_app);
> > +	free(for_app);
> > +	*for_app_registration_update = NULL;
> 
> This can't be right, because what the timeout modify callback is supposed
> to do is arrange for stub_libxl_osevent_occurred_timeout to be called.
> 
> And looking at that, I see that stub_libxl_osevent_occurred_timeout
> doesn't destroy the for_app.

Hmm... I thought the for_app stuff is only for the registration bits? The osevent_occurred functions don't use or receive it? They do get for_libxl, but that's entirely in C and opaque to ocaml.

I do assume here that timeout_modify will be called only once for a given timeout registration. Is that correct?

I'll send an update for the comment and exception thing mentioned above.

Cheers,
Rob


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 17:40:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 17:40:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0adM-0005EX-Mj; Tue, 07 Jan 2014 17:40:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0adL-0005EP-0C
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 17:40:07 +0000
Received: from [85.158.143.35:35438] by server-3.bemta-4.messagelabs.com id
	06/40-32360-6FB3CC25; Tue, 07 Jan 2014 17:40:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389116404!10236006!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29648 invoked from network); 7 Jan 2014 17:40:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 17:40:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="90546573"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 17:39:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 12:39:54 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0ad7-00057D-Cb;
	Tue, 07 Jan 2014 17:39:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0ad7-00006g-2M;
	Tue, 07 Jan 2014 17:39:53 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.15336.725600.971346@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 17:39:52 +0000
To: Rob Hoes <Rob.Hoes@citrix.com>
In-Reply-To: <360717C0B01E6345BCBE64B758E22C2D1D7E5E@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-4-git-send-email-rob.hoes@citrix.com>
	<1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<21196.6440.529434.66793@mariner.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D7E5E@AMSPEX01CL03.citrite.net>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Dave Scott <Dave.Scott@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("RE: [PATCH v2 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> Ian Jackson wrote:
> > And looking at that, I see that stub_libxl_osevent_occurred_timeout
> > doesn't destroy the for_app.
> 
> Hmm... I thought the for_app stuff is only for the registration
> bits? The osevent_occurred functions don't use or receive it? They
> do get for_libxl, but that's entirely in C and opaque to ocaml.

The usual sequence of events is
  timeout_register
      with your new patch:
           stashes for_libxl value in ocaml gc
           calls ocaml libxl_timeout_register with for_libxl
           stashes that function's return in for_app and adds it to the gc
  timeout occurs
      the timeout machinery calls stub_libxl_osevent_occurred_timeout
          with the for_libxl value it has kept somehow
      stub_libxl_osevent_occurred_timeout calls
          libxl_osevent_occurred_timeout

Now the timeout is gone and nothing will deal with it again.  Who
cleans up the for_app value ?

Perhaps you are confused and don't realise that timeouts are one-shot.
See the comment next to libxl_osevent_occurred_timeout.

> I do assume here that timeout_modify will be called only once for a
> given timeout registration. Is that correct?

The specification is that it may be called more than once, or not at
all.  The cleanup needs to be done in
stub_libxl_osevent_occurred_timeout.

(And you probably don't want a binding for timeout_deregister.
That's only there for compatibility with what are now old libxls, and
only if those libxls don't have the race patches which are necessary
for reliable operation.)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 17:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 17:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0afE-0005K5-FG; Tue, 07 Jan 2014 17:42:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0afC-0005Jt-Tk
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 17:42:03 +0000
Received: from [85.158.139.211:22852] by server-10.bemta-5.messagelabs.com id
	38/C1-01405-A6C3CC25; Tue, 07 Jan 2014 17:42:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389116520!8198817!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12251 invoked from network); 7 Jan 2014 17:42:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 17:42:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="90547486"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 17:41:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 12:41:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0af8-00057i-JL;
	Tue, 07 Jan 2014 17:41:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0af8-000073-B4;
	Tue, 07 Jan 2014 17:41:58 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.15462.177720.502000@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 17:41:58 +0000
To: Rob Hoes <rob.hoes@citrix.com>
In-Reply-To: <1386866211-12639-3-git-send-email-rob.hoes@citrix.com>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
	<1386866211-12639-3-git-send-email-rob.hoes@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: dave.scott@eu.citrix.com, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] libxl: ocaml: use int64 for timeval
	fields in the timeout_register callback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH 2/3] libxl: ocaml: use int64 for timeval fields in the timeout_register callback"):
> The original code works fine on 64-bit, but on 32-bit, the OCaml int (which is
> 1 bit smaller than the C int) is likely to overflow.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:05:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:05:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0b1f-0006UU-Dh; Tue, 07 Jan 2014 18:05:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W0b1e-0006UN-77
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:05:14 +0000
Received: from [85.158.137.68:14536] by server-6.bemta-3.messagelabs.com id
	05/3D-04868-9D14CC25; Tue, 07 Jan 2014 18:05:13 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389117911!7753866!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21592 invoked from network); 7 Jan 2014 18:05:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:05:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88393619"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:05:10 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 13:05:10 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Tue, 7 Jan 2014 19:05:06 +0100
From: Rob Hoes <Rob.Hoes@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Thread-Topic: [PATCH v2 3/3] libxl: ocaml: use 'for_app_registration' in
	osevent callbacks
Thread-Index: AQHO+CezzEG8gOV2ZUC53KMj+kp0lpp5dHAAgAAu4RD///qMAIAAFRoA
Date: Tue, 7 Jan 2014 18:05:05 +0000
Message-ID: <360717C0B01E6345BCBE64B758E22C2D1D7E94@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-4-git-send-email-rob.hoes@citrix.com>
	<1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<21196.6440.529434.66793@mariner.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D7E5E@AMSPEX01CL03.citrite.net>
	<21196.15336.725600.971346@mariner.uk.xensource.com>
In-Reply-To: <21196.15336.725600.971346@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Dave Scott <Dave.Scott@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Rob Hoes writes ("RE: [PATCH v2 3/3] libxl: ocaml: use
> 'for_app_registration' in osevent callbacks"):
> > Ian Jackson wrote:
> > > And looking at that, I see that stub_libxl_osevent_occurred_timeout
> > > doesn't destroy the for_app.
> >
> > Hmm... I thought the for_app stuff is only for the registration bits?
> > The osevent_occurred functions don't use or receive it? They do get
> > for_libxl, but that's entirely in C and opaque to ocaml.
> 
> The usual sequence of events is
>   timeout_register
>       with your new patch:
>            stashes for_libxl value in ocaml gc
>            calls ocaml libxl_timeout_register with for_libxl
>            stashes that function's return in for_app and adds it to the gc
>   timeout occurs
>       the timeout machinery calls stub_libxl_osevent_occurred_timeout
>           with the for_libxl value it has kept somehow
>       stub_libxl_osevent_occurred_timeout calls
>           libxl_osevent_occurred_timeout
> 
> Now the timeout is gone and nothing will deal with it again.  Who cleans
> up the for_app value ?
> 
> Perhaps you are confused and don't realise that timeouts are one-shot.
> See the comment next to libxl_osevent_occurred_timeout.

One part of my brain knew that, but another part wrote this function... :)

I'll send an update.

> > I do assume here that timeout_modify will be called only once for a
> > given timeout registration. Is that correct?
> 
> The specification is that it may be called more than once, or not at all.
> The cleanup needs to be done in stub_libxl_osevent_occurred_timeout.
> 
> (And you probably don't want a binding for timeout_deregister.
> That's only there for compatibility with what are now old libxls, and only
> if those libxls don't have the race patches which are necessary for
> reliable operation.)

It is already absent on the ocaml side for this reason: there is just a stub that raises an error, which is given to osevent_register_hooks (just to be sure). I should probably put an abort in there rather than having it raise an ocaml exception as it does now...

Cheers,
Rob


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCD-0007Cp-BL; Tue, 07 Jan 2014 18:16:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCA-0007AR-SO
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:07 +0000
Received: from [193.109.254.147:58145] by server-10.bemta-14.messagelabs.com
	id 28/B6-20752-6644CC25; Tue, 07 Jan 2014 18:16:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21475 invoked from network); 7 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398092"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC5-0000vN-29;
	Tue, 07 Jan 2014 18:16:01 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:52 +0000
Message-ID: <1389118552-4853-8-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 7/7] x86: remove the Xen-specific _PAGE_IOMAP
	PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
that were used to map I/O regions that are 1:1 in the p2m.  This
allowed Xen to obtain the correct PFN when converting the MFNs read
from a PTE back to their PFN.

Xen guests no longer use _PAGE_IOMAP for this. Instead, mfn_to_pfn()
returns the correct PFN by using a combination of the m2p and p2m to
determine if an MFN corresponds to a 1:1 mapping in the p2m.

Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
future uses of the PTE flag.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
--
This depends on some Xen changes that are currently RFC and targeted
for 3.15. Please do not apply yet.
---
 arch/x86/include/asm/pgtable_types.h |   12 ++++++------
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 --
 arch/x86/xen/enlighten.c             |    2 --
 5 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0ecac25..0b12657 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -163,10 +163,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 4287f1f..9031593 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 104d56a..68bf948 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..f9c2d71 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1458,8 +1458,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCD-0007DJ-SI; Tue, 07 Jan 2014 18:16:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCB-00079v-CF
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:07 +0000
Received: from [85.158.137.68:36156] by server-10.bemta-3.messagelabs.com id
	1E/E7-23989-5644CC25; Tue, 07 Jan 2014 18:16:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389118563!6603331!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19211 invoked from network); 7 Jan 2014 18:16:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398089"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-VQ;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:49 +0000
Message-ID: <1389118552-4853-5-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] x86/xen: only warn once if bad MFNs are
	found during setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

In xen_add_extra_mem(), if the WARN() checks for bad MFNs trigger, it is
likely that they will trigger a lot, spamming the log.

Use WARN_ONCE() instead.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/setup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..cca635c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -85,10 +85,10 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
 
-		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+		if (WARN_ONCE(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
 			continue;
-		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
-			pfn, mfn);
+		WARN_ONCE(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			  pfn, mfn);
 
 		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 	}
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCA-0007AI-8m; Tue, 07 Jan 2014 18:16:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bC8-00079p-VS
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:05 +0000
Received: from [193.109.254.147:45862] by server-8.bemta-14.messagelabs.com id
	11/96-30921-4644CC25; Tue, 07 Jan 2014 18:16:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21307 invoked from network); 7 Jan 2014 18:16:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398087"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-To;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:46 +0000
Message-ID: <1389118552-4853-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/7] x86/xen: rename early_alloc_p2m() and
	early_alloc_p2m_middle()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

early_alloc_p2m_middle() allocates a new leaf page and
early_alloc_p2m() allocates a new middle page.  This is confusing.

Swap the names so they match what the functions actually do.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..de59822 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -585,7 +585,7 @@ static bool alloc_p2m(unsigned long pfn)
 	return true;
 }
 
-static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary)
+static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
 {
 	unsigned topidx, mididx, idx;
 	unsigned long *p2m;
@@ -627,7 +627,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary
 	return true;
 }
 
-static bool __init early_alloc_p2m(unsigned long pfn)
+static bool __init early_alloc_p2m_middle(unsigned long pfn)
 {
 	unsigned topidx = p2m_top_index(pfn);
 	unsigned long *mid_mfn_p;
@@ -652,7 +652,7 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
 		/* Note: we don't set mid_mfn_p[midix] here,
-		 * look in early_alloc_p2m_middle */
+		 * look in early_alloc_p2m() */
 	}
 	return true;
 }
@@ -728,7 +728,7 @@ found:
 
 	/* This shouldn't happen */
 	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
-		early_alloc_p2m(set_pfn);
+		early_alloc_p2m_middle(set_pfn);
 
 	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
 		return false;
@@ -743,13 +743,13 @@ found:
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
-		if (!early_alloc_p2m(pfn))
+		if (!early_alloc_p2m_middle(pfn))
 			return false;
 
 		if (early_can_reuse_p2m_middle(pfn, mfn))
 			return __set_phys_to_machine(pfn, mfn);
 
-		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
+		if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
 			return false;
 
 		if (!__set_phys_to_machine(pfn, mfn))
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCD-0007Cp-BL; Tue, 07 Jan 2014 18:16:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCA-0007AR-SO
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:07 +0000
Received: from [193.109.254.147:58145] by server-10.bemta-14.messagelabs.com
	id 28/B6-20752-6644CC25; Tue, 07 Jan 2014 18:16:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21475 invoked from network); 7 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398092"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC5-0000vN-29;
	Tue, 07 Jan 2014 18:16:01 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:52 +0000
Message-ID: <1389118552-4853-8-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 7/7] x86: remove the Xen-specific _PAGE_IOMAP
	PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
that were used to map I/O regions that are 1:1 in the p2m.  This
allowed Xen to obtain the correct PFN when converting the MFNs read
from a PTE back to their PFN.

Xen guests no longer use _PAGE_IOMAP for this. Instead mfn_to_pfn()
returns the correct PFN by using a combination of the m2p and p2m to
determine if an MFN corresponds to a 1:1 mapping in the p2m.

Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
future uses of the PTE flag.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
--
This depends on some Xen changes that are currently RFC and targeted
for 3.15. Please do not apply yet.
---
 arch/x86/include/asm/pgtable_types.h |   12 ++++++------
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 --
 arch/x86/xen/enlighten.c             |    2 --
 5 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0ecac25..0b12657 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -163,10 +163,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 4287f1f..9031593 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 104d56a..68bf948 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..f9c2d71 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1458,8 +1458,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCF-0007Dp-Bg; Tue, 07 Jan 2014 18:16:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCD-0007CT-AY
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:09 +0000
Received: from [85.158.139.211:27761] by server-3.bemta-5.messagelabs.com id
	D3/E9-04773-8644CC25; Tue, 07 Jan 2014 18:16:08 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389118564!8389595!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19014 invoked from network); 7 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398093"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-Ur;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:48 +0000
Message-ID: <1389118552-4853-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 3/7] x86/xen: compactly store large identity
	ranges in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Large (multi-GB) identity ranges currently require a unique middle page
(filled with p2m_identity entries) per 1 GB region.

Similar to the common p2m_mid_missing middle page for large missing
regions, introduce a p2m_mid_identity page (filled with p2m_identity
entries) which can be used instead.

set_phys_range_identity() thus only needs to allocate new middle pages
at the beginning and end of the range.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |  135 ++++++++++++++++++++++++++++++++++------------------
 1 files changed, 88 insertions(+), 47 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 5c6b83e..3b72adc 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -36,7 +36,7 @@
  *  pfn_to_mfn(0xc0000)=0xc0000
  *
  * The benefit of this is, that we can assume for non-RAM regions (think
- * PCI BARs, or ACPI spaces), we can create mappings easily b/c we
+ * PCI BARs, or ACPI spaces), we can create mappings easily because we
  * get the PFN value to match the MFN.
  *
  * For this to work efficiently we have one new page p2m_identity and
@@ -60,7 +60,7 @@
  * There is also a digram of the P2M at the end that can help.
  * Imagine your E820 looking as so:
  *
- *                    1GB                                           2GB
+ *                    1GB                                           2GB    4GB
  * /-------------------+---------\/----\         /----------\    /---+-----\
  * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
  * \-------------------+---------/\----/         \----------/    \---+-----/
@@ -77,9 +77,8 @@
  * of the PFN and the end PFN (263424 and 512256 respectively). The first step
  * is to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
  * covers 512^2 of page estate (1GB) and in case the start or end PFN is not
- * aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn
- * to end pfn.  We reserve_brk top leaf pages if they are missing (means they
- * point to p2m_mid_missing).
+ * aligned on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
+ * required to split any existing p2m_mid_missing middle pages.
  *
  * With the E820 example above, 263424 is not 1GB aligned so we allocate a
  * reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
@@ -88,7 +87,7 @@
  * Next stage is to determine if we need to do a more granular boundary check
  * on the 4MB (or 2MB depending on architecture) off the start and end pfn's.
  * We check if the start pfn and end pfn violate that boundary check, and if
- * so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
+ * so reserve_brk a (p2m[x][y]) leaf page. This way we have a much finer
  * granularity of setting which PFNs are missing and which ones are identity.
  * In our example 263424 and 512256 both fail the check so we reserve_brk two
  * pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing"
@@ -102,9 +101,10 @@
  *
  * The next step is to walk from the start pfn to the end pfn setting
  * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
- * If we find that the middle leaf is pointing to p2m_missing we can swap it
- * over to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this
- * point we do not need to worry about boundary aligment (so no need to
+ * If we find that the middle entry is pointing to p2m_missing we can swap it
+ * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
+ * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
+ * At this point we do not need to worry about boundary alignment (so no need to
  * reserve_brk a middle page, figure out which PFNs are "missing" and which
  * ones are identity), as that has been done earlier.  If we find that the
  * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
@@ -118,6 +118,9 @@
  * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
  * contain the INVALID_P2M_ENTRY value and are considered "missing."
  *
+ * Finally, the region beyond the end of the E820 (4 GB in this example)
+ * is set to be identity (in case there are MMIO regions placed here).
+ *
  * This is what the p2m ends up looking (for the E820 above) with this
  * fabulous drawing:
  *
@@ -129,21 +132,27 @@
  *  |-----|    \                      | [p2m_identity]+\\    | ....            |
  *  |  2  |--\  \-------------------->|  ...          | \\   \----------------/
  *  |-----|   \                       \---------------/  \\
- *  |  3  |\   \                                          \\  p2m_identity
- *  |-----| \   \-------------------->/---------------\   /-----------------\
- *  | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
- *  \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
- *         / /---------------\        | ....          |   \-----------------/
- *        /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
- *       /   | IDENTITY[@256]|<----/  \---------------/
- *      /    | ~0, ~0, ....  |
- *     |     \---------------/
- *     |
- *   p2m_mid_missing           p2m_missing
- * /-----------------\     /------------\
- * | [p2m_missing]   +---->| ~0, ~0, ~0 |
- * | [p2m_missing]   +---->| ..., ~0    |
- * \-----------------/     \------------/
+ *  |  3  |-\  \                                          \\  p2m_identity [1]
+ *  |-----|  \  \-------------------->/---------------\   /-----------------\
+ *  | ..  |\  |                       | [p2m_identity]+-->| ~0, ~0, ~0, ... |
+ *  \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
+ *          | |                       | ....          |   \-----------------/
+ *          | |                       +-[x], ~0, ~0.. +\
+ *          | |                       \---------------/ \
+ *          | |                                          \-> /---------------\
+ *          | V  p2m_mid_missing       p2m_missing           | IDENTITY[@0]  |
+ *          | /-----------------\     /------------\         | IDENTITY[@256]|
+ *          | | [p2m_missing]   +---->| ~0, ~0, ...|         | ~0, ~0, ....  |
+ *          | | [p2m_missing]   +---->| ..., ~0    |         \---------------/
+ *          | | ...             |     \------------/
+ *          | \-----------------/
+ *          |
+ *          |     p2m_mid_identity 
+ *          |   /-----------------\     
+ *          \-->| [p2m_identity]  +---->[1]
+ *              | [p2m_identity]  +---->[1]
+ *              | ...             |
+ *              \-----------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -187,13 +196,15 @@ static RESERVE_BRK_ARRAY(unsigned long, p2m_top_mfn, P2M_TOP_PER_PAGE);
 static RESERVE_BRK_ARRAY(unsigned long *, p2m_top_mfn_p, P2M_TOP_PER_PAGE);
 
 static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long, p2m_mid_identity_mfn, P2M_MID_PER_PAGE);
 
 RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 
 /* We might hit two boundary violations at the start and end, at max each
  * boundary violation will require three middle nodes. */
-RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
+RESERVE_BRK(p2m_mid_extra, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
@@ -242,20 +253,20 @@ static void p2m_top_mfn_p_init(unsigned long **top)
 		top[i] = p2m_mid_missing_mfn;
 }
 
-static void p2m_mid_init(unsigned long **mid)
+static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = p2m_missing;
+		mid[i] = leaf;
 }
 
-static void p2m_mid_mfn_init(unsigned long *mid)
+static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = virt_to_mfn(p2m_missing);
+		mid[i] = virt_to_mfn(leaf);
 }
 
 static void p2m_init(unsigned long *p2m)
@@ -283,7 +294,9 @@ void __ref xen_build_mfn_list_list(void)
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_identity_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 
 		p2m_top_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
 		p2m_top_mfn_p_init(p2m_top_mfn_p);
@@ -292,7 +305,8 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_top_mfn_init(p2m_top_mfn);
 	} else {
 		/* Reinitialise, mfn's all change after migration */
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 	}
 
 	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
@@ -324,7 +338,7 @@ void __ref xen_build_mfn_list_list(void)
 			 * it too late.
 			 */
 			mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_mfn_init(mid_mfn_p);
+			p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 			p2m_top_mfn_p[topidx] = mid_mfn_p;
 		}
@@ -356,7 +370,9 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	p2m_init(p2m_missing);
 
 	p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_mid_init(p2m_mid_missing);
+	p2m_mid_init(p2m_mid_missing, p2m_missing);
+	p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_mid_init(p2m_mid_identity, p2m_identity);
 
 	p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_top_init(p2m_top);
@@ -375,7 +391,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 		if (p2m_top[topidx] == p2m_mid_missing) {
 			unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_init(mid);
+			p2m_mid_init(mid, p2m_missing);
 
 			p2m_top[topidx] = mid;
 		}
@@ -534,7 +550,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid)
 			return false;
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
 			free_p2m_page(mid);
@@ -554,7 +570,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid_mfn)
 			return false;
 
-		p2m_mid_mfn_init(mid_mfn);
+		p2m_mid_mfn_init(mid_mfn, p2m_missing);
 
 		missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
 		mid_mfn_mfn = virt_to_mfn(mid_mfn);
@@ -638,7 +654,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	if (mid == p2m_mid_missing) {
 		mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		p2m_top[topidx] = mid;
 
@@ -647,7 +663,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	/* And the save/restore P2M tables.. */
 	if (mid_mfn_p == p2m_mid_missing_mfn) {
 		mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(mid_mfn_p);
+		p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
@@ -758,6 +774,24 @@ bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	return true;
 }
+
+static void __init early_split_p2m(unsigned long pfn)
+{
+	unsigned long mididx, idx;
+
+	mididx = p2m_mid_index(pfn);
+	idx = p2m_index(pfn);
+
+	/*
+	 * Allocate new middle and leaf pages if this pfn lies in the
+	 * middle of one.
+	 */
+	if (mididx || idx)
+		early_alloc_p2m_middle(pfn);
+	if (idx)
+		early_alloc_p2m(pfn, false);
+}
+
 unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 				      unsigned long pfn_e)
 {
@@ -775,15 +809,8 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_e > MAX_P2M_PFN)
 		pfn_e = MAX_P2M_PFN;
 
-	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
-		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
-		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-	{
-		WARN_ON(!early_alloc_p2m(pfn));
-	}
-
-	early_alloc_p2m_middle(pfn_s, true);
-	early_alloc_p2m_middle(pfn_e, true);
+	early_split_p2m(pfn_s);
+	early_split_p2m(pfn_e);
 
 	for (pfn = pfn_s; pfn < pfn_e; pfn++)
 		if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
@@ -817,8 +844,22 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	/* For sparse holes were the p2m leaf has real PFN along with
 	 * PCI holes, stick in the PFN as the MFN value.
+	 *
+	 * set_phys_range_identity() will have allocated new middle
+	 * and leaf pages as required so an existing p2m_mid_missing
+	 * or p2m_missing mean that whole range will be identity so
+	 * these can be switched to p2m_mid_identity or p2m_identity.
 	 */
 	if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
+		if (p2m_top[topidx] == p2m_mid_identity)
+			return true;
+
+		if (p2m_top[topidx] == p2m_mid_missing) {
+			WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
+					p2m_mid_identity) != p2m_mid_missing);
+			return true;
+		}
+
 		if (p2m_top[topidx][mididx] == p2m_identity)
 			return true;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCC-0007C5-5c; Tue, 07 Jan 2014 18:16:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCA-00079w-1P
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:06 +0000
Received: from [193.109.254.147:58093] by server-10.bemta-14.messagelabs.com
	id 36/B6-20752-5644CC25; Tue, 07 Jan 2014 18:16:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21414 invoked from network); 7 Jan 2014 18:16:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398090"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-Vw;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:50 +0000
Message-ID: <1389118552-4853-6-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 3b72adc..f2e7d2e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -497,7 +497,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index cca635c..3af2c11 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -451,6 +451,15 @@ char * __init xen_memory_setup(void)
 	}
 
 	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(xen_max_p2m_pfn, ~0ul);
+
+	/*
 	 * In domU, the ISA region is normal, usable memory, but we
 	 * reserve ISA memory anyway because too many things poke
 	 * about in there.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bC9-00079x-Kg; Tue, 07 Jan 2014 18:16:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bC8-00079j-7C
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:04 +0000
Received: from [193.109.254.147:57941] by server-9.bemta-14.messagelabs.com id
	9B/E8-13957-3644CC25; Tue, 07 Jan 2014 18:16:03 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21264 invoked from network); 7 Jan 2014 18:16:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398086"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-TD;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:45 +0000
Message-ID: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [RFC PATCHv2 0/7]: x86/xen: fixes for mapping high MMIO
	regions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a likely fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were
specifying _PAGE_IOMAP, which meant no valid MFN could be found and the
resulting PTEs would be set as not present, causing subsequent faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.

As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN-to-PFN and PFN-to-MFN
translations will now do the right thing for all I/O regions.  This
means the Xen-specific _PAGE_IOMAP can be removed.

This series is posted as an RFC since it hasn't seen the level of testing
I require (in particular, it has not been tested with a device with a high
MMIO region).  This is definitely 3.15 material since anything to do
with the p2m carries a high risk.

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v2:
- fix to actually set the region from end-of-RAM to 512 GiB as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCD-0007DJ-SI; Tue, 07 Jan 2014 18:16:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCB-00079v-CF
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:07 +0000
Received: from [85.158.137.68:36156] by server-10.bemta-3.messagelabs.com id
	1E/E7-23989-5644CC25; Tue, 07 Jan 2014 18:16:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389118563!6603331!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19211 invoked from network); 7 Jan 2014 18:16:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398089"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-VQ;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:49 +0000
Message-ID: <1389118552-4853-5-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] x86/xen: only warn once if bad MFNs are
	found during setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

In xen_add_extra_mem(), if the WARN() checks for bad MFNs trigger, it is
likely that they will trigger a lot, spamming the log.

Use WARN_ONCE() instead.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/setup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 68c054f..cca635c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -85,10 +85,10 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
 
-		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+		if (WARN_ONCE(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
 			continue;
-		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
-			pfn, mfn);
+		WARN_ONCE(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			  pfn, mfn);
 
 		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 	}
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCF-0007Dp-Bg; Tue, 07 Jan 2014 18:16:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCD-0007CT-AY
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:09 +0000
Received: from [85.158.139.211:27761] by server-3.bemta-5.messagelabs.com id
	D3/E9-04773-8644CC25; Tue, 07 Jan 2014 18:16:08 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389118564!8389595!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19014 invoked from network); 7 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398093"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-Ur;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:48 +0000
Message-ID: <1389118552-4853-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 3/7] x86/xen: compactly store large identity
	ranges in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Large (multi-GB) identity ranges currently require a unique middle page
(filled with p2m_identity entries) per 1 GB region.

Similar to the common p2m_mid_missing middle page for large missing
regions, introduce a p2m_mid_identity page (filled with p2m_identity
entries) which can be used instead.

set_phys_range_identity() thus only needs to allocate new middle pages
at the beginning and end of the range.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |  135 ++++++++++++++++++++++++++++++++++------------------
 1 files changed, 88 insertions(+), 47 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 5c6b83e..3b72adc 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -36,7 +36,7 @@
  *  pfn_to_mfn(0xc0000)=0xc0000
  *
  * The benefit of this is, that we can assume for non-RAM regions (think
- * PCI BARs, or ACPI spaces), we can create mappings easily b/c we
+ * PCI BARs, or ACPI spaces), we can create mappings easily because we
  * get the PFN value to match the MFN.
  *
  * For this to work efficiently we have one new page p2m_identity and
@@ -60,7 +60,7 @@
  * There is also a digram of the P2M at the end that can help.
  * Imagine your E820 looking as so:
  *
- *                    1GB                                           2GB
+ *                    1GB                                           2GB    4GB
  * /-------------------+---------\/----\         /----------\    /---+-----\
  * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
  * \-------------------+---------/\----/         \----------/    \---+-----/
@@ -77,9 +77,8 @@
  * of the PFN and the end PFN (263424 and 512256 respectively). The first step
  * is to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
  * covers 512^2 of page estate (1GB) and in case the start or end PFN is not
- * aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn
- * to end pfn.  We reserve_brk top leaf pages if they are missing (means they
- * point to p2m_mid_missing).
+ * aligned on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
+ * required to split any existing p2m_mid_missing middle pages.
  *
  * With the E820 example above, 263424 is not 1GB aligned so we allocate a
  * reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
@@ -88,7 +87,7 @@
  * Next stage is to determine if we need to do a more granular boundary check
  * on the 4MB (or 2MB depending on architecture) off the start and end pfn's.
  * We check if the start pfn and end pfn violate that boundary check, and if
- * so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
+ * so reserve_brk a (p2m[x][y]) leaf page. This way we have a much finer
  * granularity of setting which PFNs are missing and which ones are identity.
  * In our example 263424 and 512256 both fail the check so we reserve_brk two
  * pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing"
@@ -102,9 +101,10 @@
  *
  * The next step is to walk from the start pfn to the end pfn setting
  * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
- * If we find that the middle leaf is pointing to p2m_missing we can swap it
- * over to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this
- * point we do not need to worry about boundary aligment (so no need to
+ * If we find that the middle entry is pointing to p2m_missing we can swap it
+ * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
+ * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
+ * At this point we do not need to worry about boundary alignment (so no need to
  * reserve_brk a middle page, figure out which PFNs are "missing" and which
  * ones are identity), as that has been done earlier.  If we find that the
  * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
@@ -118,6 +118,9 @@
  * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
  * contain the INVALID_P2M_ENTRY value and are considered "missing."
  *
+ * Finally, the region beyond the end of the E820 (4 GB in this example)
+ * is set to be identity (in case there are MMIO regions placed here).
+ *
  * This is what the p2m ends up looking (for the E820 above) with this
  * fabulous drawing:
  *
@@ -129,21 +132,27 @@
  *  |-----|    \                      | [p2m_identity]+\\    | ....            |
  *  |  2  |--\  \-------------------->|  ...          | \\   \----------------/
  *  |-----|   \                       \---------------/  \\
- *  |  3  |\   \                                          \\  p2m_identity
- *  |-----| \   \-------------------->/---------------\   /-----------------\
- *  | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
- *  \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
- *         / /---------------\        | ....          |   \-----------------/
- *        /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
- *       /   | IDENTITY[@256]|<----/  \---------------/
- *      /    | ~0, ~0, ....  |
- *     |     \---------------/
- *     |
- *   p2m_mid_missing           p2m_missing
- * /-----------------\     /------------\
- * | [p2m_missing]   +---->| ~0, ~0, ~0 |
- * | [p2m_missing]   +---->| ..., ~0    |
- * \-----------------/     \------------/
+ *  |  3  |-\  \                                          \\  p2m_identity [1]
+ *  |-----|  \  \-------------------->/---------------\   /-----------------\
+ *  | ..  |\  |                       | [p2m_identity]+-->| ~0, ~0, ~0, ... |
+ *  \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
+ *          | |                       | ....          |   \-----------------/
+ *          | |                       +-[x], ~0, ~0.. +\
+ *          | |                       \---------------/ \
+ *          | |                                          \-> /---------------\
+ *          | V  p2m_mid_missing       p2m_missing           | IDENTITY[@0]  |
+ *          | /-----------------\     /------------\         | IDENTITY[@256]|
+ *          | | [p2m_missing]   +---->| ~0, ~0, ...|         | ~0, ~0, ....  |
+ *          | | [p2m_missing]   +---->| ..., ~0    |         \---------------/
+ *          | | ...             |     \------------/
+ *          | \-----------------/
+ *          |
+ *          |     p2m_mid_identity 
+ *          |   /-----------------\     
+ *          \-->| [p2m_identity]  +---->[1]
+ *              | [p2m_identity]  +---->[1]
+ *              | ...             |
+ *              \-----------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -187,13 +196,15 @@ static RESERVE_BRK_ARRAY(unsigned long, p2m_top_mfn, P2M_TOP_PER_PAGE);
 static RESERVE_BRK_ARRAY(unsigned long *, p2m_top_mfn_p, P2M_TOP_PER_PAGE);
 
 static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long, p2m_mid_identity_mfn, P2M_MID_PER_PAGE);
 
 RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 
 /* We might hit two boundary violations at the start and end, at max each
  * boundary violation will require three middle nodes. */
-RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
+RESERVE_BRK(p2m_mid_extra, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
@@ -242,20 +253,20 @@ static void p2m_top_mfn_p_init(unsigned long **top)
 		top[i] = p2m_mid_missing_mfn;
 }
 
-static void p2m_mid_init(unsigned long **mid)
+static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = p2m_missing;
+		mid[i] = leaf;
 }
 
-static void p2m_mid_mfn_init(unsigned long *mid)
+static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = virt_to_mfn(p2m_missing);
+		mid[i] = virt_to_mfn(leaf);
 }
 
 static void p2m_init(unsigned long *p2m)
@@ -283,7 +294,9 @@ void __ref xen_build_mfn_list_list(void)
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_identity_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 
 		p2m_top_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
 		p2m_top_mfn_p_init(p2m_top_mfn_p);
@@ -292,7 +305,8 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_top_mfn_init(p2m_top_mfn);
 	} else {
 		/* Reinitialise, mfn's all change after migration */
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 	}
 
 	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
@@ -324,7 +338,7 @@ void __ref xen_build_mfn_list_list(void)
 			 * it too late.
 			 */
 			mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_mfn_init(mid_mfn_p);
+			p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 			p2m_top_mfn_p[topidx] = mid_mfn_p;
 		}
@@ -356,7 +370,9 @@ void __init xen_build_dynamic_phys_to_machine(void)
 	p2m_init(p2m_missing);
 
 	p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_mid_init(p2m_mid_missing);
+	p2m_mid_init(p2m_mid_missing, p2m_missing);
+	p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_mid_init(p2m_mid_identity, p2m_identity);
 
 	p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_top_init(p2m_top);
@@ -375,7 +391,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 		if (p2m_top[topidx] == p2m_mid_missing) {
 			unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_init(mid);
+			p2m_mid_init(mid, p2m_missing);
 
 			p2m_top[topidx] = mid;
 		}
@@ -534,7 +550,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid)
 			return false;
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
 			free_p2m_page(mid);
@@ -554,7 +570,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid_mfn)
 			return false;
 
-		p2m_mid_mfn_init(mid_mfn);
+		p2m_mid_mfn_init(mid_mfn, p2m_missing);
 
 		missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
 		mid_mfn_mfn = virt_to_mfn(mid_mfn);
@@ -638,7 +654,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	if (mid == p2m_mid_missing) {
 		mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		p2m_top[topidx] = mid;
 
@@ -647,7 +663,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	/* And the save/restore P2M tables.. */
 	if (mid_mfn_p == p2m_mid_missing_mfn) {
 		mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(mid_mfn_p);
+		p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
@@ -758,6 +774,24 @@ bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	return true;
 }
+
+static void __init early_split_p2m(unsigned long pfn)
+{
+	unsigned long mididx, idx;
+
+	mididx = p2m_mid_index(pfn);
+	idx = p2m_index(pfn);
+
+	/*
+	 * Allocate new middle and leaf pages if this pfn lies in the
+	 * middle of an existing middle or leaf page's range.
+	 */
+	if (mididx || idx)
+		early_alloc_p2m_middle(pfn);
+	if (idx)
+		early_alloc_p2m(pfn, false);
+}
+
 unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 				      unsigned long pfn_e)
 {
@@ -775,15 +809,8 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_e > MAX_P2M_PFN)
 		pfn_e = MAX_P2M_PFN;
 
-	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
-		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
-		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-	{
-		WARN_ON(!early_alloc_p2m(pfn));
-	}
-
-	early_alloc_p2m_middle(pfn_s, true);
-	early_alloc_p2m_middle(pfn_e, true);
+	early_split_p2m(pfn_s);
+	early_split_p2m(pfn_e);
 
 	for (pfn = pfn_s; pfn < pfn_e; pfn++)
 		if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
@@ -817,8 +844,22 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	/* For sparse holes where the p2m leaf has real PFN along with
 	 * PCI holes, stick in the PFN as the MFN value.
+	 *
+	 * set_phys_range_identity() will have allocated new middle
+	 * and leaf pages as required, so an existing p2m_mid_missing
+	 * or p2m_missing means the whole range will be identity and
+	 * these can be switched to p2m_mid_identity or p2m_identity.
 	 */
 	if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
+		if (p2m_top[topidx] == p2m_mid_identity)
+			return true;
+
+		if (p2m_top[topidx] == p2m_mid_missing) {
+			WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
+					p2m_mid_identity) != p2m_mid_missing);
+			return true;
+		}
+
 		if (p2m_top[topidx][mididx] == p2m_identity)
 			return true;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
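[Archive note: the hunks in the patch above decide identity-ness of a p2m entry with the test `mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)`. A minimal userspace sketch of that frame-tagging scheme follows; the bit positions mirror arch/x86/include/asm/xen/page.h at the time of this series, but treat the exact values here as illustrative assumptions, not authoritative.]

```c
#include <assert.h>

/* Frame tagging used by the p2m: the top two bits of an entry mark
 * foreign and identity frames (assumed layout, per xen/page.h). */
#define P2M_LONG_BITS       (sizeof(unsigned long) * 8)
#define FOREIGN_FRAME_BIT   (1UL << (P2M_LONG_BITS - 1))
#define IDENTITY_FRAME_BIT  (1UL << (P2M_LONG_BITS - 2))
#define IDENTITY_FRAME(pfn) ((pfn) | IDENTITY_FRAME_BIT)
#define INVALID_P2M_ENTRY   (~0UL)

/* INVALID_P2M_ENTRY is all-ones, so it also has IDENTITY_FRAME_BIT
 * set; the invalid check must therefore come first, exactly as in
 * __set_phys_to_machine() in the hunk above. */
static int p2m_entry_is_identity(unsigned long mfn)
{
	return mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT);
}
```

This is why the old pte_pfn_to_mfn() comment insisted the identity test be done "_after_ the INVALID_P2M_ENTRY" check.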

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCC-0007CS-QY; Tue, 07 Jan 2014 18:16:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCA-0007AL-PI
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:07 +0000
Received: from [85.158.137.68:58065] by server-3.bemta-3.messagelabs.com id
	47/1B-10658-6644CC25; Tue, 07 Jan 2014 18:16:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389118563!6603331!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19266 invoked from network); 7 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398091"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC5-0000vN-0E;
	Tue, 07 Jan 2014 18:16:01 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:51 +0000
Message-ID: <1389118552-4853-7-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 6/7] x86/xen: do not use _PAGE_IOMAP PTE flag
	for I/O mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Since mfn_to_pfn() returns the correct PFN for identity mappings (as
used for MMIO regions), the use of _PAGE_IOMAP is not required in
pte_mfn_to_pfn().

Do not set the _PAGE_IOMAP flag in pte_pfn_to_mfn() and do not use it
in pte_mfn_to_pfn().

This will allow _PAGE_IOMAP to be removed, making it available for
future use.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |   50 ++++----------------------------------------------
 1 files changed, 4 insertions(+), 46 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..08cebf5 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -399,38 +399,14 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 static pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ static pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 static pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ static pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2084,7 +2044,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
@@ -2524,8 +2484,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
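[Archive note: the pte_pfn_to_mfn() hunk above collapses the old two-step masking (clear FOREIGN_FRAME_BIT, then conditionally clear IDENTITY_FRAME_BIT while setting _PAGE_IOMAP) into one combined mask. The sketch below checks that, once the _PAGE_IOMAP side effect is dropped, both forms yield the same frame number; it assumes a 64-bit unsigned long and hypothetical bit values.]

```c
#include <assert.h>

#define FOREIGN_FRAME_BIT   (1UL << 63)  /* assumed 64-bit layout */
#define IDENTITY_FRAME_BIT  (1UL << 62)

/* Old else-branch of pte_pfn_to_mfn(), minus the _PAGE_IOMAP flag
 * manipulation the patch deletes. */
static unsigned long mfn_mask_old(unsigned long mfn)
{
	mfn &= ~FOREIGN_FRAME_BIT;
	if (mfn & IDENTITY_FRAME_BIT)
		mfn &= ~IDENTITY_FRAME_BIT;
	return mfn;
}

/* New else-branch: a single combined mask. */
static unsigned long mfn_mask_new(unsigned long mfn)
{
	return mfn & ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
}
```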

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCB-0007Ay-NW; Tue, 07 Jan 2014 18:16:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bC9-00079q-Jr
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:05 +0000
Received: from [193.109.254.147:45899] by server-6.bemta-14.messagelabs.com id
	D0/FE-14958-5644CC25; Tue, 07 Jan 2014 18:16:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21349 invoked from network); 7 Jan 2014 18:16:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398088"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-UL;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:47 +0000
Message-ID: <1389118552-4853-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 2/7] x86/xen: fix set_phys_range_identity() if
	pfn_e > MAX_P2M_PFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Allow set_phys_range_identity() to work with a range that overlaps
MAX_P2M_PFN by clamping pfn_e to MAX_P2M_PFN.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index de59822..5c6b83e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -763,7 +763,7 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 {
 	unsigned long pfn;
 
-	if (unlikely(pfn_s >= MAX_P2M_PFN || pfn_e >= MAX_P2M_PFN))
+	if (unlikely(pfn_s >= MAX_P2M_PFN))
 		return 0;
 
 	if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
@@ -772,6 +772,9 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_s > pfn_e)
 		return 0;
 
+	if (pfn_e > MAX_P2M_PFN)
+		pfn_e = MAX_P2M_PFN;
+
 	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
 		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
 		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
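[Archive note: the entry checks of the patched set_phys_range_identity() can be isolated as below. MAX_P2M_PFN's real value depends on the kernel configuration; a small stand-in is used here purely for illustration.]

```c
#include <assert.h>

#define MAX_P2M_PFN 1024UL  /* stand-in; the real constant is much larger */

/* Mirrors the patched range validation: reject only when the start is
 * beyond the p2m or the range is backwards, and clamp the end to
 * MAX_P2M_PFN otherwise.  Returns the effective exclusive end pfn, or
 * 0 when nothing would be set. */
static unsigned long clamped_end(unsigned long pfn_s, unsigned long pfn_e)
{
	if (pfn_s >= MAX_P2M_PFN)
		return 0;
	if (pfn_s > pfn_e)
		return 0;
	if (pfn_e > MAX_P2M_PFN)
		pfn_e = MAX_P2M_PFN;
	return pfn_e;
}
```

The pre-patch code instead returned 0 whenever pfn_e >= MAX_P2M_PFN, silently skipping ranges that merely overlapped the end of the p2m.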

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCC-0007C5-5c; Tue, 07 Jan 2014 18:16:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bCA-00079w-1P
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:06 +0000
Received: from [193.109.254.147:58093] by server-10.bemta-14.messagelabs.com
	id 36/B6-20752-5644CC25; Tue, 07 Jan 2014 18:16:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21414 invoked from network); 7 Jan 2014 18:16:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398090"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-Vw;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:50 +0000
Message-ID: <1389118552-4853-6-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 3b72adc..f2e7d2e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -497,7 +497,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index cca635c..3af2c11 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -451,6 +451,15 @@ char * __init xen_memory_setup(void)
 	}
 
 	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(xen_max_p2m_pfn, ~0ul);
+
+	/*
 	 * In domU, the ISA region is normal, usable memory, but we
 	 * reserve ISA memory anyway because too many things poke
 	 * about in there.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
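[Archive note: the get_phys_to_machine() hunk above changes the out-of-range case from INVALID_P2M_ENTRY to IDENTITY_FRAME(pfn). A sketch of just that tail, with the in-range path stubbed out since it needs the real three-level p2m tree; the MAX_P2M_PFN value is a stand-in and the bit layout follows xen/page.h as an assumption.]

```c
#include <assert.h>

#define MAX_P2M_PFN          1024UL  /* stand-in value */
#define IDENTITY_FRAME_BIT   (1UL << (sizeof(unsigned long) * 8 - 2))
#define IDENTITY_FRAME(pfn)  ((pfn) | IDENTITY_FRAME_BIT)

/* Patched behaviour: pfns past the end of the p2m are reported as
 * identity-mapped, so PCI BARs located there resolve to mfn == pfn. */
static unsigned long get_phys_to_machine_tail(unsigned long pfn)
{
	if (pfn >= MAX_P2M_PFN)
		return IDENTITY_FRAME(pfn);
	return 0; /* placeholder for the normal three-level lookup */
}
```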

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bCA-0007AI-8m; Tue, 07 Jan 2014 18:16:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0bC8-00079p-VS
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:16:05 +0000
Received: from [193.109.254.147:45862] by server-8.bemta-14.messagelabs.com id
	11/96-30921-4644CC25; Tue, 07 Jan 2014 18:16:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389118561!9381737!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21307 invoked from network); 7 Jan 2014 18:16:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:16:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88398087"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 18:16:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 13:16:01 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0bC4-0000vN-To;
	Tue, 07 Jan 2014 18:16:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 7 Jan 2014 18:15:46 +0000
Message-ID: <1389118552-4853-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
References: <1389118552-4853-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/7] x86/xen: rename early_p2m_alloc() and
	early_p2m_alloc_middle()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

early_p2m_alloc_middle() allocates a new leaf page and
early_p2m_alloc() allocates a new middle page.  This is confusing.

Swap the names so they match what the functions actually do.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..de59822 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -585,7 +585,7 @@ static bool alloc_p2m(unsigned long pfn)
 	return true;
 }
 
-static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary)
+static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
 {
 	unsigned topidx, mididx, idx;
 	unsigned long *p2m;
@@ -627,7 +627,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary
 	return true;
 }
 
-static bool __init early_alloc_p2m(unsigned long pfn)
+static bool __init early_alloc_p2m_middle(unsigned long pfn)
 {
 	unsigned topidx = p2m_top_index(pfn);
 	unsigned long *mid_mfn_p;
@@ -652,7 +652,7 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
 		/* Note: we don't set mid_mfn_p[midix] here,
-		 * look in early_alloc_p2m_middle */
+		 * look in early_alloc_p2m() */
 	}
 	return true;
 }
@@ -728,7 +728,7 @@ found:
 
 	/* This shouldn't happen */
 	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
-		early_alloc_p2m(set_pfn);
+		early_alloc_p2m_middle(set_pfn);
 
 	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
 		return false;
@@ -743,13 +743,13 @@ found:
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
-		if (!early_alloc_p2m(pfn))
+		if (!early_alloc_p2m_middle(pfn))
 			return false;
 
 		if (early_can_reuse_p2m_middle(pfn, mfn))
 			return __set_phys_to_machine(pfn, mfn);
 
-		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
+		if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
 			return false;
 
 		if (!__set_phys_to_machine(pfn, mfn))
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2084,7 +2044,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
@@ -2524,8 +2484,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:18:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bEl-0008Iz-FZ; Tue, 07 Jan 2014 18:18:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0bEk-0008Hd-Gu
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:18:46 +0000
Received: from [193.109.254.147:25901] by server-7.bemta-14.messagelabs.com id
	AE/E4-15500-5054CC25; Tue, 07 Jan 2014 18:18:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389118723!9376391!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22540 invoked from network); 7 Jan 2014 18:18:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:18:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="90563436"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 18:18:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 13:18:42 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0bEf-0005KO-FU;
	Tue, 07 Jan 2014 18:18:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0bEf-00050m-9G;
	Tue, 07 Jan 2014 18:18:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.17665.148524.379238@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 18:18:41 +0000
To: Rob Hoes <Rob.Hoes@citrix.com>
In-Reply-To: <360717C0B01E6345BCBE64B758E22C2D1D7E94@AMSPEX01CL03.citrite.net>
References: <1386866211-12639-4-git-send-email-rob.hoes@citrix.com>
	<1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<21196.6440.529434.66793@mariner.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D7E5E@AMSPEX01CL03.citrite.net>
	<21196.15336.725600.971346@mariner.uk.xensource.com>
	<360717C0B01E6345BCBE64B758E22C2D1D7E94@AMSPEX01CL03.citrite.net>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Dave Scott <Dave.Scott@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("RE: [PATCH v2 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> Ian Jackson wrote:
> > Perhaps you are confused and don't realise that timeouts are one-shot.
> > See the comment next to libxl_osevent_occurred_timeout.
> 
> One part of my brain knew that, but another part wrote this function... :)
> 
> I'll send an update.

Ah :-).  Thanks.

> > (And you probably don't want a binding for timeout_deregister.
> > That's only there for compatibility with what are now old libxls, and only
> > if those libxls don't have the race patches which are necessary for
> > reliable operation.)
> 
> It is already not there on the ocaml side for this reason. There is just a stub that raises an error, which is given to osevent_register_hooks (just to be sure). I should probably just put an abort in there rather than it raising an ocaml exception as it does now...

Oh, I didn't look closely enough.  But, yes, an abort is probably
better.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:22:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0bHt-0000Pi-9k; Tue, 07 Jan 2014 18:22:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0bHr-0000PU-QQ
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:22:00 +0000
Received: from [85.158.137.68:21631] by server-4.bemta-3.messagelabs.com id
	BD/85-10414-7C54CC25; Tue, 07 Jan 2014 18:21:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389118916!7728392!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2131 invoked from network); 7 Jan 2014 18:21:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 18:21:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07ILnHO029823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 18:21:50 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ILmhm010050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 18:21:49 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07ILlrl020508; Tue, 7 Jan 2014 18:21:47 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 10:21:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5AA5F1C18DC; Tue,  7 Jan 2014 13:21:46 -0500 (EST)
Date: Tue, 7 Jan 2014 13:21:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140107182146.GA9596@phenom.dumpdata.com>
References: <1536712177.20140107125352@eikelenboom.it>
	<52CC08C2.5090004@citrix.com>
	<20140107144355.GH3588@phenom.dumpdata.com>
	<306967039.20140107173009@eikelenboom.it>
	<20140107165818.GA8748@phenom.dumpdata.com>
	<816291453.20140107182430@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <816291453.20140107182430@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] devel/for-linus-3.14 branch: dom0 BUG: soft lockup
 - CPU#1 stuck for 22s! RIP: e030:[<ffffffff81109a58>] [<ffffffff81109a58>]
 generic_exec_single+0x88/0xc0 xen_destroy_contiguous_region+0x160/0x160
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 06:24:30PM +0100, Sander Eikelenboom wrote:
> 
> Tuesday, January 7, 2014, 5:58:18 PM, you wrote:
> 
> > On Tue, Jan 07, 2014 at 05:30:09PM +0100, Sander Eikelenboom wrote:
> >> 
> >> Tuesday, January 7, 2014, 3:43:55 PM, you wrote:
> >> 
> >> > On Tue, Jan 07, 2014 at 02:01:38PM +0000, David Vrabel wrote:
> >> >> On 07/01/14 11:53, Sander Eikelenboom wrote:
> >> >> > Hi Konrad,
> >> >> > 
> >> >> > A new year and a new linux merge window looming, so i thought i would try out the "devel/for-linus-3.14" branch.
> >> >> > But dom0 seems to blow up for me .. (without this branch pulled it works ok)
> >> 
> >> > Hot damm! Thank you for testing so quickly!
> >> 
> >> Hrmm PVH doesn't seem to be available on AMD systems yet ?
> 
> > Correct.
> 
> Ah thought i read something about that in the past ..
> but it wasn't mentioned in your patch announcement which does have a nice "how to test" part :-)

Ugh. I knew I forgot something :-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:56:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0boq-0002Gu-75; Tue, 07 Jan 2014 18:56:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0bon-0002Go-Sc
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:56:03 +0000
Received: from [85.158.137.68:17242] by server-2.bemta-3.messagelabs.com id
	FE/88-17329-1CD4CC25; Tue, 07 Jan 2014 18:56:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389120958!7816105!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24964 invoked from network); 7 Jan 2014 18:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="90578790"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 18:55:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 13:55:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0boi-0005VH-LT;
	Tue, 07 Jan 2014 18:55:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0boi-0005Cn-Ey;
	Tue, 07 Jan 2014 18:55:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.19900.136146.867552@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 18:55:56 +0000
To: <konrad.wilk@oracle.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I did the following test:

   mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
   xl migrate debian.guest.osstest localhost

xl did what appears to be the right thing: it did most of the
migration, failed to run the block scripts at the end of the
migration, and destroyed the destination domain and instead resumed
the source guest.

However, the source guest immediately went mad spewing WARNINGs and
was after that no longer contactable via the network and not
apparently responsive on the console.  See below.

This is with:

  [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
  version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013

For reasons I don't understand it doesn't seem to print the actual
kernel git hash in dmesg, but I think it was that from flight 22264,
i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
64-bit Xen.

Thanks,
Ian.

debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
[  124.595991] PM: late freeze of devices complete after 0.013 msecs
[  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
[  124.601105] Grant tables using version 2 layout.
[  124.601105] ------------[ cut here ]------------
[  124.601105] kernel BUG at drivers/xen/events.c:1582!
[  124.601105] invalid opcode: 0000 [#1] SMP 
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] 
[  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
[  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
[  124.601105] EIP is at xen_irq_resume+0x215/0x370
[  124.601105] EAX: ffffffef EBX: deadbeef ECX: deadbeef EDX: 00000000
[  124.601105] ESI: c190b020 EDI: df461f24 EBP: df451eb8 ESP: df451e10
[  124.601105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
[  124.601105] CR0: 8005003b CR2: 08b7c8a8 CR3: 038f0000 CR4: 00002660
[  124.601105] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[  124.601105] DR6: ffff0ff0 DR7: 00000400
[  124.601105] Process migration/0 (pid: 6, ti=df450000 task=df43d860 task.ti=df450000)
[  124.601105] Stack:
[  124.601105]  c104ea40 df451e18 c398b80c deadbeef df461f10 df451e58 c12f350a c19b165c
[  124.601105]  df451e94 00000003 df451e78 c190b080 c190b020 00000000 00000010 00000000
[  124.601105]  00000000 00000000 9420f17e 0008a6c2 fc798ba3 0008a6df 00000004 00000413
[  124.601105] Call Trace:
[  124.601105]  [<c104ea40>] ? xen_iret_crit_fixup+0x3c/0x3c
[  124.601105]  [<c12f350a>] ? gnttab_map_frames_v2+0xda/0x120
[  124.601105]  [<c1055b90>] ? xen_spin_lock+0xa0/0x100
[  124.601105]  [<c104d155>] ? xen_mm_unpin_all+0x65/0x80
[  124.601105]  [<c12f6cad>] xen_suspend+0x8d/0xc0
[  124.601105]  [<c10e750b>] stop_machine_cpu_stop+0x9b/0x110
[  124.601105]  [<c10e71f7>] cpu_stopper_thread+0xc7/0x1a0
[  124.601105]  [<c10b3f6f>] ? finish_task_switch+0x5f/0xe0
[  124.601105]  [<c10e7470>] ? stop_one_cpu_nowait+0x40/0x40
[  124.601105]  [<c10b682b>] ? default_wake_function+0xb/0x10
[  124.601105]  [<c10af990>] ? __wake_up_common+0x40/0x70
[  124.601105]  [<c16441ad>] ? _raw_spin_unlock_irqrestore+0x2d/0x50
[  124.601105]  [<c10b2479>] ? complete+0x49/0x60
[  124.601105]  [<c10e7130>] ? res_counter_charge+0x180/0x180
[  124.601105]  [<c10a7474>] kthread+0x74/0x80
[  124.601105]  [<c10a7400>] ? kthread_freezable_should_stop+0x60/0x60
[  124.601105]  [<c164b276>] kernel_thread_helper+0x6/0x10
[  124.601105] Code: 22 e8 ff ff 8b 55 8c 89 d8 e8 88 e6 ff ff 83 45 94 01 83 7d 94 04 0f 84 80 fe ff ff 8b 55 8c 8b 04 95 e0 11 88 c1 e9 64 ff ff ff <0f> 0b eb fe 0f 0b eb fe 8b 1d 00 60 85 c1 81 fb 00 60 85 c1 74 
[  124.601105] EIP: [<c12f5d25>] xen_irq_resume+0x215/0x370 SS:ESP 0069:df451e10
[  124.601105] ---[ end trace 69a5c8cd56e77bce ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/tick-sched.c:464 tick_nohz_idle_enter+0x7a/0x90()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D      3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10d1b1a>] tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bcf ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c10886ea>] ? print_oops_end_marker+0x2a/0x30
[  124.601105]  [<c10888fd>] ? warn_slowpath_common+0x7d/0xa0
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c10d1ae5>] tick_nohz_idle_enter+0x45/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd0 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
[  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
[  124.601105]  [<c104e994>] check_events+0x8/0xc
[  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
[  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
[  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd1 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
[  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
[  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
[  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
[  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
[  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
[  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
[  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
[  124.601105]  [<c104e994>] check_events+0x8/0xc
[  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
[  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
[  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd2 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd3 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
[  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
[  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
[  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
[  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
[  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
[  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd4 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd5 ]---
[  124.601105] ------------[ cut here ]------------

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 18:56:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 18:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0boq-0002Gu-75; Tue, 07 Jan 2014 18:56:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0bon-0002Go-Sc
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 18:56:03 +0000
Received: from [85.158.137.68:17242] by server-2.bemta-3.messagelabs.com id
	FE/88-17329-1CD4CC25; Tue, 07 Jan 2014 18:56:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389120958!7816105!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24964 invoked from network); 7 Jan 2014 18:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 18:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="90578790"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 18:55:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 13:55:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0boi-0005VH-LT;
	Tue, 07 Jan 2014 18:55:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0boi-0005Cn-Ey;
	Tue, 07 Jan 2014 18:55:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.19900.136146.867552@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 18:55:56 +0000
To: <konrad.wilk@oracle.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I did the following test:

   mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
   xl migrate debian.guest.osstest localhost

xl did what appears to be the right thing: it did most of the
migration, failed to run the block scripts at the end of the
migration, and destroyed the destination domain and instead resumed
the source guest.

However, the source guest immediately went mad spewing WARNINGs and
was after that no longer contactable via the network and not
apparently responsive on the console.  See below.
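[Editorial note: the two commands above can be wrapped in a small guarded script for anyone retrying this. This is a sketch only; the guest name and hotplug-script path are taken verbatim from the report, and the `command -v xl` guard makes it a harmless no-op on machines without a Xen toolstack.]

```shell
#!/bin/sh
# Sketch of the failed-migration reproduction from this report.
# Assumes a Xen dom0 with the xl toolstack and a running PV guest
# named debian.guest.osstest (both taken from the report above).
reproduce_failed_migration() {
    if ! command -v xl >/dev/null 2>&1; then
        echo "xl not found; run this on a Xen dom0"
        return 1
    fi
    # Hide the block hotplug script so device setup on the
    # destination fails near the end of the migration.
    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
    # Local migration: expected to fail at the block-script stage,
    # destroy the destination domain, and resume the source guest,
    # which then hits the xen_irq_resume BUG shown below.
    xl migrate debian.guest.osstest localhost
    # Restore the hotplug script afterwards.
    mv /etc/xen/scripts/block.aside /etc/xen/scripts/block
}

# Outside a Xen dom0 this just prints a notice and exits:
reproduce_failed_migration || true
```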

This is with:

  [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
  version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013

For reasons I don't understand it doesn't seem to print the actual
kernel git hash in dmesg, but I think it was that from flight 22264,
i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
64-bit Xen.

Thanks,
Ian.

debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
[  124.595991] PM: late freeze of devices complete after 0.013 msecs
[  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
[  124.601105] Grant tables using version 2 layout.
[  124.601105] ------------[ cut here ]------------
[  124.601105] kernel BUG at drivers/xen/events.c:1582!
[  124.601105] invalid opcode: 0000 [#1] SMP 
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] 
[  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
[  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
[  124.601105] EIP is at xen_irq_resume+0x215/0x370
[  124.601105] EAX: ffffffef EBX: deadbeef ECX: deadbeef EDX: 00000000
[  124.601105] ESI: c190b020 EDI: df461f24 EBP: df451eb8 ESP: df451e10
[  124.601105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
[  124.601105] CR0: 8005003b CR2: 08b7c8a8 CR3: 038f0000 CR4: 00002660
[  124.601105] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[  124.601105] DR6: ffff0ff0 DR7: 00000400
[  124.601105] Process migration/0 (pid: 6, ti=df450000 task=df43d860 task.ti=df450000)
[  124.601105] Stack:
[  124.601105]  c104ea40 df451e18 c398b80c deadbeef df461f10 df451e58 c12f350a c19b165c
[  124.601105]  df451e94 00000003 df451e78 c190b080 c190b020 00000000 00000010 00000000
[  124.601105]  00000000 00000000 9420f17e 0008a6c2 fc798ba3 0008a6df 00000004 00000413
[  124.601105] Call Trace:
[  124.601105]  [<c104ea40>] ? xen_iret_crit_fixup+0x3c/0x3c
[  124.601105]  [<c12f350a>] ? gnttab_map_frames_v2+0xda/0x120
[  124.601105]  [<c1055b90>] ? xen_spin_lock+0xa0/0x100
[  124.601105]  [<c104d155>] ? xen_mm_unpin_all+0x65/0x80
[  124.601105]  [<c12f6cad>] xen_suspend+0x8d/0xc0
[  124.601105]  [<c10e750b>] stop_machine_cpu_stop+0x9b/0x110
[  124.601105]  [<c10e71f7>] cpu_stopper_thread+0xc7/0x1a0
[  124.601105]  [<c10b3f6f>] ? finish_task_switch+0x5f/0xe0
[  124.601105]  [<c10e7470>] ? stop_one_cpu_nowait+0x40/0x40
[  124.601105]  [<c10b682b>] ? default_wake_function+0xb/0x10
[  124.601105]  [<c10af990>] ? __wake_up_common+0x40/0x70
[  124.601105]  [<c16441ad>] ? _raw_spin_unlock_irqrestore+0x2d/0x50
[  124.601105]  [<c10b2479>] ? complete+0x49/0x60
[  124.601105]  [<c10e7130>] ? res_counter_charge+0x180/0x180
[  124.601105]  [<c10a7474>] kthread+0x74/0x80
[  124.601105]  [<c10a7400>] ? kthread_freezable_should_stop+0x60/0x60
[  124.601105]  [<c164b276>] kernel_thread_helper+0x6/0x10
[  124.601105] Code: 22 e8 ff ff 8b 55 8c 89 d8 e8 88 e6 ff ff 83 45 94 01 83 7d 94 04 0f 84 80 fe ff ff 8b 55 8c 8b 04 95 e0 11 88 c1 e9 64 ff ff ff <0f> 0b eb fe 0f 0b eb fe 8b 1d 00 60 85 c1 81 fb 00 60 85 c1 74 
[  124.601105] EIP: [<c12f5d25>] xen_irq_resume+0x215/0x370 SS:ESP 0069:df451e10
[  124.601105] ---[ end trace 69a5c8cd56e77bce ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/tick-sched.c:464 tick_nohz_idle_enter+0x7a/0x90()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D      3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10d1b1a>] tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bcf ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c10886ea>] ? print_oops_end_marker+0x2a/0x30
[  124.601105]  [<c10888fd>] ? warn_slowpath_common+0x7d/0xa0
[  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
[  124.601105]  [<c10d1ae5>] tick_nohz_idle_enter+0x45/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd0 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
[  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
[  124.601105]  [<c104e994>] check_events+0x8/0xc
[  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
[  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
[  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd1 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
[  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
[  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
[  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
[  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
[  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
[  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
[  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
[  124.601105]  [<c104e994>] check_events+0x8/0xc
[  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
[  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
[  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
[  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd2 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd3 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
[  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
[  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
[  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
[  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
[  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
[  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
[  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd4 ]---
[  124.601105] ------------[ cut here ]------------
[  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
[  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
[  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
[  124.601105] Call Trace:
[  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
[  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
[  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
[  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
[  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
[  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
[  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
[  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
[  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
[  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
[  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
[  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
[  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
[  124.601105]  [<c16242f8>] rest_init+0x58/0x60
[  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
[  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
[  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
[  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
[  124.601105] ---[ end trace 69a5c8cd56e77bd5 ]---
[  124.601105] ------------[ cut here ]------------


From xen-devel-bounces@lists.xen.org Tue Jan 07 19:13:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 19:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0c5M-0003Qb-78; Tue, 07 Jan 2014 19:13:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0c5K-0003QT-Ae
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 19:13:06 +0000
Received: from [85.158.143.35:60128] by server-2.bemta-4.messagelabs.com id
	C8/8F-11386-1C15CC25; Tue, 07 Jan 2014 19:13:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389121983!10166072!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29274 invoked from network); 7 Jan 2014 19:13:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 19:13:04 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07JBxGb023333
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 19:12:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07JBwRg019092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 19:11:59 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07JBwUH025456; Tue, 7 Jan 2014 19:11:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 11:11:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B22C91C18DC; Tue,  7 Jan 2014 14:11:56 -0500 (EST)
Date: Tue, 7 Jan 2014 14:11:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140107191156.GA10370@phenom.dumpdata.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <21196.19900.136146.867552@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 06:55:56PM +0000, Ian Jackson wrote:
> I did the following test:
> 
>    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
>    xl migrate debian.guest.osstest localhost
> 
> xl did what appears to be the right thing: it did most of the
> migration, failed to run the block scripts at the end of the
> migration, and destroyed the destination domain and instead resumed
> the source guest.
> 
> However, the source guest immediately went mad spewing WARNINGs and
> was after that no longer contactable via the network and not
> apparently responsive on the console.  See below.
> 
> This is with:
> 
>   [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
>   version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013
> 
> For reasons I don't understand it doesn't seem to print the actual
> kernel git hash in dmesg, but I think it was that from flight 22264,
> i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
> 64-bit Xen.

This is a bit of an ancient kernel. Does it show up with 3.12?

CC-ing the other maintainers.
> 
> Thanks,
> Ian.
> 
> debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
> [  124.595991] PM: late freeze of devices complete after 0.013 msecs
> [  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
> [  124.601105] Grant tables using version 2 layout.
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] kernel BUG at drivers/xen/events.c:1582!
> [  124.601105] invalid opcode: 0000 [#1] SMP 
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] 
> [  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
> [  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
> [  124.601105] EIP is at xen_irq_resume+0x215/0x370
> [  124.601105] EAX: ffffffef EBX: deadbeef ECX: deadbeef EDX: 00000000
> [  124.601105] ESI: c190b020 EDI: df461f24 EBP: df451eb8 ESP: df451e10
> [  124.601105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
> [  124.601105] CR0: 8005003b CR2: 08b7c8a8 CR3: 038f0000 CR4: 00002660
> [  124.601105] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
> [  124.601105] DR6: ffff0ff0 DR7: 00000400
> [  124.601105] Process migration/0 (pid: 6, ti=df450000 task=df43d860 task.ti=df450000)
> [  124.601105] Stack:
> [  124.601105]  c104ea40 df451e18 c398b80c deadbeef df461f10 df451e58 c12f350a c19b165c
> [  124.601105]  df451e94 00000003 df451e78 c190b080 c190b020 00000000 00000010 00000000
> [  124.601105]  00000000 00000000 9420f17e 0008a6c2 fc798ba3 0008a6df 00000004 00000413
> [  124.601105] Call Trace:
> [  124.601105]  [<c104ea40>] ? xen_iret_crit_fixup+0x3c/0x3c
> [  124.601105]  [<c12f350a>] ? gnttab_map_frames_v2+0xda/0x120
> [  124.601105]  [<c1055b90>] ? xen_spin_lock+0xa0/0x100
> [  124.601105]  [<c104d155>] ? xen_mm_unpin_all+0x65/0x80
> [  124.601105]  [<c12f6cad>] xen_suspend+0x8d/0xc0
> [  124.601105]  [<c10e750b>] stop_machine_cpu_stop+0x9b/0x110
> [  124.601105]  [<c10e71f7>] cpu_stopper_thread+0xc7/0x1a0
> [  124.601105]  [<c10b3f6f>] ? finish_task_switch+0x5f/0xe0
> [  124.601105]  [<c10e7470>] ? stop_one_cpu_nowait+0x40/0x40
> [  124.601105]  [<c10b682b>] ? default_wake_function+0xb/0x10
> [  124.601105]  [<c10af990>] ? __wake_up_common+0x40/0x70
> [  124.601105]  [<c16441ad>] ? _raw_spin_unlock_irqrestore+0x2d/0x50
> [  124.601105]  [<c10b2479>] ? complete+0x49/0x60
> [  124.601105]  [<c10e7130>] ? res_counter_charge+0x180/0x180
> [  124.601105]  [<c10a7474>] kthread+0x74/0x80
> [  124.601105]  [<c10a7400>] ? kthread_freezable_should_stop+0x60/0x60
> [  124.601105]  [<c164b276>] kernel_thread_helper+0x6/0x10
> [  124.601105] Code: 22 e8 ff ff 8b 55 8c 89 d8 e8 88 e6 ff ff 83 45 94 01 83 7d 94 04 0f 84 80 fe ff ff 8b 55 8c 8b 04 95 e0 11 88 c1 e9 64 ff ff ff <0f> 0b eb fe 0f 0b eb fe 8b 1d 00 60 85 c1 81 fb 00 60 85 c1 74 
> [  124.601105] EIP: [<c12f5d25>] xen_irq_resume+0x215/0x370 SS:ESP 0069:df451e10
> [  124.601105] ---[ end trace 69a5c8cd56e77bce ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/tick-sched.c:464 tick_nohz_idle_enter+0x7a/0x90()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D      3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10d1b1a>] tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bcf ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c10886ea>] ? print_oops_end_marker+0x2a/0x30
> [  124.601105]  [<c10888fd>] ? warn_slowpath_common+0x7d/0xa0
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c10d1ae5>] tick_nohz_idle_enter+0x45/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd0 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
> [  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
> [  124.601105]  [<c104e994>] check_events+0x8/0xc
> [  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
> [  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd1 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
> [  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
> [  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
> [  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
> [  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
> [  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
> [  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
> [  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
> [  124.601105]  [<c104e994>] check_events+0x8/0xc
> [  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
> [  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd2 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd3 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
> [  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
> [  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
> [  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
> [  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
> [  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
> [  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd4 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd5 ]---
> [  124.601105] ------------[ cut here ]------------

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 19:13:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 19:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0c5M-0003Qb-78; Tue, 07 Jan 2014 19:13:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0c5K-0003QT-Ae
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 19:13:06 +0000
Received: from [85.158.143.35:60128] by server-2.bemta-4.messagelabs.com id
	C8/8F-11386-1C15CC25; Tue, 07 Jan 2014 19:13:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389121983!10166072!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29274 invoked from network); 7 Jan 2014 19:13:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 19:13:04 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07JBxGb023333
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 19:12:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07JBwRg019092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 19:11:59 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07JBwUH025456; Tue, 7 Jan 2014 19:11:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 11:11:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B22C91C18DC; Tue,  7 Jan 2014 14:11:56 -0500 (EST)
Date: Tue, 7 Jan 2014 14:11:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140107191156.GA10370@phenom.dumpdata.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <21196.19900.136146.867552@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 06:55:56PM +0000, Ian Jackson wrote:
> I did the following test:
> 
>    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
>    xl migrate debian.guest.osstest localhost
> 
> xl did what appears to be the right thing: it did most of the
> migration, failed to run the block scripts at the end of the
> migration, and destroyed the destination domain and instead resumed
> the source guest.
> 
> However, the source guest immediately went mad spewing WARNINGs; after
> that it was no longer contactable via the network and apparently not
> responsive on the console.  See below.
> 
> This is with:
> 
>   [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
>   version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013
> 
> For reasons I don't understand it doesn't seem to print the actual
> kernel git hash in dmesg, but I think it was that from flight 22264,
> i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
> 64-bit Xen.

This is a bit of an ancient kernel. Does it show up with 3.12?

CC-ing the other maintainers.
> 
> Thanks,
> Ian.
> 
> debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
> [  124.595991] PM: late freeze of devices complete after 0.013 msecs
> [  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
> [  124.601105] Grant tables using version 2 layout.
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] kernel BUG at drivers/xen/events.c:1582!
> [  124.601105] invalid opcode: 0000 [#1] SMP 
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] 
> [  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
> [  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
> [  124.601105] EIP is at xen_irq_resume+0x215/0x370
> [  124.601105] EAX: ffffffef EBX: deadbeef ECX: deadbeef EDX: 00000000
> [  124.601105] ESI: c190b020 EDI: df461f24 EBP: df451eb8 ESP: df451e10
> [  124.601105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
> [  124.601105] CR0: 8005003b CR2: 08b7c8a8 CR3: 038f0000 CR4: 00002660
> [  124.601105] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
> [  124.601105] DR6: ffff0ff0 DR7: 00000400
> [  124.601105] Process migration/0 (pid: 6, ti=df450000 task=df43d860 task.ti=df450000)
> [  124.601105] Stack:
> [  124.601105]  c104ea40 df451e18 c398b80c deadbeef df461f10 df451e58 c12f350a c19b165c
> [  124.601105]  df451e94 00000003 df451e78 c190b080 c190b020 00000000 00000010 00000000
> [  124.601105]  00000000 00000000 9420f17e 0008a6c2 fc798ba3 0008a6df 00000004 00000413
> [  124.601105] Call Trace:
> [  124.601105]  [<c104ea40>] ? xen_iret_crit_fixup+0x3c/0x3c
> [  124.601105]  [<c12f350a>] ? gnttab_map_frames_v2+0xda/0x120
> [  124.601105]  [<c1055b90>] ? xen_spin_lock+0xa0/0x100
> [  124.601105]  [<c104d155>] ? xen_mm_unpin_all+0x65/0x80
> [  124.601105]  [<c12f6cad>] xen_suspend+0x8d/0xc0
> [  124.601105]  [<c10e750b>] stop_machine_cpu_stop+0x9b/0x110
> [  124.601105]  [<c10e71f7>] cpu_stopper_thread+0xc7/0x1a0
> [  124.601105]  [<c10b3f6f>] ? finish_task_switch+0x5f/0xe0
> [  124.601105]  [<c10e7470>] ? stop_one_cpu_nowait+0x40/0x40
> [  124.601105]  [<c10b682b>] ? default_wake_function+0xb/0x10
> [  124.601105]  [<c10af990>] ? __wake_up_common+0x40/0x70
> [  124.601105]  [<c16441ad>] ? _raw_spin_unlock_irqrestore+0x2d/0x50
> [  124.601105]  [<c10b2479>] ? complete+0x49/0x60
> [  124.601105]  [<c10e7130>] ? res_counter_charge+0x180/0x180
> [  124.601105]  [<c10a7474>] kthread+0x74/0x80
> [  124.601105]  [<c10a7400>] ? kthread_freezable_should_stop+0x60/0x60
> [  124.601105]  [<c164b276>] kernel_thread_helper+0x6/0x10
> [  124.601105] Code: 22 e8 ff ff 8b 55 8c 89 d8 e8 88 e6 ff ff 83 45 94 01 83 7d 94 04 0f 84 80 fe ff ff 8b 55 8c 8b 04 95 e0 11 88 c1 e9 64 ff ff ff <0f> 0b eb fe 0f 0b eb fe 8b 1d 00 60 85 c1 81 fb 00 60 85 c1 74 
> [  124.601105] EIP: [<c12f5d25>] xen_irq_resume+0x215/0x370 SS:ESP 0069:df451e10
> [  124.601105] ---[ end trace 69a5c8cd56e77bce ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/tick-sched.c:464 tick_nohz_idle_enter+0x7a/0x90()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D      3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10d1b1a>] tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bcf ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c10886ea>] ? print_oops_end_marker+0x2a/0x30
> [  124.601105]  [<c10888fd>] ? warn_slowpath_common+0x7d/0xa0
> [  124.601105]  [<c10d1b1a>] ? tick_nohz_idle_enter+0x7a/0x90
> [  124.601105]  [<c10d1ae5>] tick_nohz_idle_enter+0x45/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd0 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
> [  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
> [  124.601105]  [<c104e994>] check_events+0x8/0xc
> [  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
> [  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd1 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
> [  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
> [  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
> [  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
> [  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
> [  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
> [  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c1002227>] ? hypercall_page+0x227/0x1000
> [  124.601105]  [<c104e16a>] ? xen_force_evtchn_callback+0x1a/0x30
> [  124.601105]  [<c104e994>] check_events+0x8/0xc
> [  124.601105]  [<c104e93c>] ? xen_clocksource_get_cycles+0xc/0xc
> [  124.601105]  [<c104e953>] ? xen_irq_enable_direct_reloc+0x4/0x4
> [  124.601105]  [<c10d1af4>] ? tick_nohz_idle_enter+0x54/0x90
> [  124.601105]  [<c105e22a>] cpu_idle+0x1a/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd2 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd3 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1664>] tick_nohz_stop_sched_tick+0x34/0x3f0
> [  124.601105]  [<c12f4288>] ? info_for_irq+0x8/0x20
> [  124.601105]  [<c12f47c3>] ? evtchn_from_irq+0x13/0x40
> [  124.601105]  [<c104e789>] ? xen_clocksource_read+0x19/0x20
> [  124.601105]  [<c12f4b68>] ? __xen_evtchn_do_upcall+0x258/0x2b0
> [  124.601105]  [<c10d1a5f>] tick_nohz_irq_exit+0x3f/0x80
> [  124.601105]  [<c108ef1f>] irq_exit+0x4f/0xb0
> [  124.601105]  [<c12f4e40>] xen_evtchn_do_upcall+0x20/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd4 ]---
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] WARNING: at kernel/time/timekeeping.c:266 ktime_get+0xe9/0x100()
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] Pid: 0, comm: swapper/0 Tainted: G      D W    3.4.70+ #1
> [  124.601105] Call Trace:
> [  124.601105]  [<c10888ed>] warn_slowpath_common+0x6d/0xa0
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c10c9fc9>] ? ktime_get+0xe9/0x100
> [  124.601105]  [<c108893d>] warn_slowpath_null+0x1d/0x20
> [  124.601105]  [<c10c9fc9>] ktime_get+0xe9/0x100
> [  124.601105]  [<c10d1379>] tick_check_idle+0x39/0xf0
> [  124.601105]  [<c108f06c>] irq_enter+0x4c/0x70
> [  124.601105]  [<c12f4e36>] xen_evtchn_do_upcall+0x16/0x30
> [  124.601105]  [<c164b2c7>] xen_do_upcall+0x7/0xc
> [  124.601105]  [<c10023a7>] ? hypercall_page+0x3a7/0x1000
> [  124.601105]  [<c104e1c2>] ? xen_safe_halt+0x12/0x20
> [  124.601105]  [<c104e1b0>] ? xen_irq_disable+0x10/0x10
> [  124.601105]  [<c105ed2b>] default_idle+0x5b/0x190
> [  124.601105]  [<c1040054>] ? svm_set_tsc_khz+0x74/0x140
> [  124.601105]  [<c105e27f>] cpu_idle+0x6f/0xa0
> [  124.601105]  [<c16242f8>] rest_init+0x58/0x60
> [  124.601105]  [<c1887919>] start_kernel+0x355/0x35b
> [  124.601105]  [<c1887435>] ? kernel_init+0x1cf/0x1cf
> [  124.601105]  [<c18870ba>] i386_start_kernel+0xa9/0xb0
> [  124.601105]  [<c188b733>] xen_start_kernel+0x5c4/0x5cc
> [  124.601105] ---[ end trace 69a5c8cd56e77bd5 ]---
> [  124.601105] ------------[ cut here ]------------


From xen-devel-bounces@lists.xen.org Tue Jan 07 19:23:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 19:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0cFW-000424-It; Tue, 07 Jan 2014 19:23:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0cFV-00041z-LG
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 19:23:37 +0000
Received: from [85.158.139.211:46886] by server-6.bemta-5.messagelabs.com id
	4A/A7-16310-8345CC25; Tue, 07 Jan 2014 19:23:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389122614!8186832!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24837 invoked from network); 7 Jan 2014 19:23:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 19:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88429948"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 19:23:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 14:23:33 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0cFQ-0005dp-PS;
	Tue, 07 Jan 2014 19:23:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0cFQ-0005Fv-I1;
	Tue, 07 Jan 2014 19:23:32 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21196.21556.181273.225889@mariner.uk.xensource.com>
Date: Tue, 7 Jan 2014 19:23:32 +0000
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140107191156.GA10370@phenom.dumpdata.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<20140107191156.GA10370@phenom.dumpdata.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk writes ("Re: 3.4.70+ kernel WARNING spew dysfunction on failed migration"):
> On Tue, Jan 07, 2014 at 06:55:56PM +0000, Ian Jackson wrote:
> > For reasons I don't understand it doesn't seem to print the actual
> > kernel git hash in dmesg, but I think it was that from flight 22264,
> > i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
> > 64-bit Xen.
> 
> This is a bit of an ancient kernel. Does it show up with 3.12?

3.4.70 is what the osstest push gate is using.  (ISTR trying to switch
to 3.11 but encountering some problem.)

I haven't tried 3.12 but can do so.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 19:37:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 19:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0cSd-0004at-2s; Tue, 07 Jan 2014 19:37:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0cSa-0004al-Tn
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 19:37:09 +0000
Received: from [85.158.137.68:36660] by server-11.bemta-3.messagelabs.com id
	D2/81-19379-4675CC25; Tue, 07 Jan 2014 19:37:08 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389123425!7765169!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10489 invoked from network); 7 Jan 2014 19:37:07 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 19:37:07 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07Ja1fN019488
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 19:36:02 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07Ja0HC015765
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 19:36:00 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07JZxen008557; Tue, 7 Jan 2014 19:35:59 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 11:35:59 -0800
Message-ID: <52CC574C.8080405@oracle.com>
Date: Tue, 07 Jan 2014 14:36:44 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<20140107191156.GA10370@phenom.dumpdata.com>
	<21196.21556.181273.225889@mariner.uk.xensource.com>
In-Reply-To: <21196.21556.181273.225889@mariner.uk.xensource.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
	migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 02:23 PM, Ian Jackson wrote:
> Konrad Rzeszutek Wilk writes ("Re: 3.4.70+ kernel WARNING spew dysfunction on failed migration"):
>> On Tue, Jan 07, 2014 at 06:55:56PM +0000, Ian Jackson wrote:
>>> For reasons I don't understand it doesn't seem to print the actual
>>> kernel git hash in dmesg, but I think it was that from flight 22264,
>>> i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
>>> 64-bit Xen.
>> This is a bit of an ancient kernel. Does it show up with 3.12?
> 3.4.70 is what the osstest push gate is using.  (ISTR trying to switch
> to 3.11 but encountering some problem.)
>
> I haven't tried 3.12 but can do so.
>
> Ian.

This is the hypercall that is failing, btw:

https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/xen/events.c?id=refs/tags/v3.4.75#n1582

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 20:06:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 20:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0cuk-00062z-A8; Tue, 07 Jan 2014 20:06:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W0cuj-00062u-2g
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 20:06:13 +0000
Received: from [85.158.139.211:15484] by server-11.bemta-5.messagelabs.com id
	4E/DA-23268-43E5CC25; Tue, 07 Jan 2014 20:06:12 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389125170!8402068!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4925 invoked from network); 7 Jan 2014 20:06:11 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 20:06:11 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07K54PV020470
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 20:05:05 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07K53t7004764
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 20:05:04 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07K53NI027550; Tue, 7 Jan 2014 20:05:03 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 12:05:03 -0800
Message-ID: <52CC5E1C.90704@oracle.com>
Date: Tue, 07 Jan 2014 15:05:48 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<20140107191156.GA10370@phenom.dumpdata.com>
	<21196.21556.181273.225889@mariner.uk.xensource.com>
	<52CC574C.8080405@oracle.com>
In-Reply-To: <52CC574C.8080405@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/2014 02:36 PM, Boris Ostrovsky wrote:
> On 01/07/2014 02:23 PM, Ian Jackson wrote:
>> Konrad Rzeszutek Wilk writes ("Re: 3.4.70+ kernel WARNING spew 
>> dysfunction on failed migration"):
>>> On Tue, Jan 07, 2014 at 06:55:56PM +0000, Ian Jackson wrote:
>>>> For reasons I don't understand it doesn't seem to print the actual
>>>> kernel git hash in dmesg, but I think it was that from flight 22264,
>>>> i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
>>>> 64-bit Xen.
>>> This is a bit of an ancient kernel. Does it show up with 3.12?
>> 3.4.70 is what the osstest push gate is using.  (ISTR trying to switch
>> to 3.11 but encountering some problem.)
>>
>> I haven't tried 3.12 but can do so.
>>
>> Ian.
>
> This is the hypercall that is failing, btw:
>
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/xen/events.c?id=refs/tags/v3.4.75#n1582 
>

More specifically, it fails

     if ( v->virq_to_evtchn[virq] != 0 )
         ERROR_EXIT(-EEXIST);

in Xen's evtchn_bind_virq().

Would be interesting to see if this is still a problem in new kernels.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 20:40:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 20:40:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0dRf-0007ss-J8; Tue, 07 Jan 2014 20:40:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0dRe-0007sn-1Z
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 20:40:14 +0000
Received: from [85.158.143.35:44044] by server-2.bemta-4.messagelabs.com id
	F9/00-11386-D266CC25; Tue, 07 Jan 2014 20:40:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389127210!10211508!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28998 invoked from network); 7 Jan 2014 20:40:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 20:40:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88473547"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 20:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 15:40:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0dRZ-00036c-5K;
	Tue, 07 Jan 2014 20:40:09 +0000
Date: Tue, 7 Jan 2014 20:39:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	julien.grall@linaro.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014, Ian Campbell wrote:
> On ARM, guest OSes are started with MMU and caches disabled (as they are on
> native); however, caching is enabled in the domain running the builder and
> therefore we must ensure cache consistency.
> 
> The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
> cache after loading images into guest memory") is to flush the caches after
> loading the various blobs into guest RAM. However this approach has two
> shortcomings:
> 
>  - The cache flush primitives available to userspace on arm32 are not
>    sufficient for our needs.
>  - There is a race between the cache flush and the unmap of the guest page
>    where the processor might speculatively dirty the cache line again.
> 
> (of these the second is the more fundamental)
> 
> This patch makes use of the hardware functionality to force all accesses
> made from guest mode to be cached (the HCR.DC == default cached bit). This
> means that we don't need to worry about the domain builder's writes being
> cached because the guests "uncached" accesses will actually be cached.
> 
> Unfortunately the use of HCR.DC is incompatible with the guest enabling its
> MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
> detect when this happens and disable HCR.DC. This is done with the HCR.TVM
> (trap virtual memory controls) bit which also causes various other registers
> to be trapped, all of which can be passed straight through to the underlying
> register. Once the guest has enabled its MMU we no longer need to trap so
> there is no ongoing overhead. In my tests Linux makes about half a dozen
> accesses to these registers before the MMU is enabled, I would expect other
> OSes to behave similarly (the sequence of writes needed to setup the MMU is
> pretty obvious).
> 
> Apart from this unfortunate need to trap these accesses this approach is
> incompatible with guests which attempt to do DMA operations with their MMU
> disabled. In practice this means guests with passthrough which we do not yet
> support. Since a typical guest (including dom0) does not access devices which
> require DMA until after it is fully up and running with paging enabled the
> main risk is to in-guest firmware which does DMA i.e. running EFI in a guest,
> with a disk passed through and booting from that disk. Since we know that dom0
> is not using any such firmware and we do not support device passthrough to
> guests yet we can live with this restriction. Once passthrough is implemented
> this will need to be revisited.
> 
> The patch includes a couple of seemingly unrelated but necessary changes:
> 
>  - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
>    with the existing set of system registers we handled, but broke with the new
>    ones introduced here.
>  - The defines used to decode the HSR system register fields were named the
>    same as the register. This breaks the accessor macros. This had gone
>    unnoticed because the handling of the existing trapped registers did not
>    require accessing the underlying hardware register. Rename those constants
>    with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
> 
> This patch has survived thousands of boot loops on a Midway system.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> My preferred solution here would be for the tools to use an uncached mapping
> of guest memory when building the guest, which requires adding a new privcmd
> ioctl (a relatively straightforward patch) and plumbing a "cached" flag
> through the libxc foreign mapping interfaces (a twisty maze of passages, all
> alike).  IMO the libxc side of this patch was not looking suitable for a 4.4
> freeze exception, since it was quite large (because we have 4 or more mapping
> interfaces in libxc, some of which call back into others).
> 
> So I propose this version for 4.4. The uncached mapping solution should be
> revisited for a future release.
> 
> At the risk of appearing to be going mad:
> 
> <speaking hat="submitter">
> 
> This bug results in memory corruption in the guest, which mostly manifests
> as a failure to boot the guest (subsequent bad behaviour is possible but, I
> think, unlikely). The frequency of failure is perhaps 1 in 10 boots. This would
> not constitute an awesome release.
> 
> Although the patch is large, most of it is repetitive and mechanical (made
> explicit through the use of macros in many cases). The biggest risk is that
> one of the registers is not passed through correctly (i.e. the wrong size or
> target registers). The ones which Linux uses have been tested and appear to
> function OK.  The others might be buggy but this is mitigated through the use
> of the same set of macros.
> 
> I think the chance of the patch having a bug wrt my understanding of the
> hardware behaviour is pretty low. WRT there being bugs in my understanding of
> the hardware documentation, I would say middle to low, but I have discussed it
> with some folks at ARM and they didn't call me an idiot (in fact pretty much
> the same thing has been proposed for KVM).
> 
> Overall I think the benefits outweigh the risks.
> 
> One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
> It's reasonably recent so reverting it takes us back to a pretty well
> understood state in the libraries. The functionality is harmless if
> incomplete. I think given the first argument I would lean towards reverting.
> 
> </speaking>
> 
> <speaking hat="stand in release manager">
> 
> OK.
> 
> </speaking>
> 
> Obviously if you think I'm being too easy on myself please say so.
> 
> Actually, if you think my judgement is right I'd appreciate being told so too.
> ---
>  xen/arch/arm/domain.c           |    5 ++
>  xen/arch/arm/domain_build.c     |    1 +
>  xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
>  xen/arch/arm/vtimer.c           |    6 +-
>  xen/include/asm-arm/cpregs.h    |    4 +
>  xen/include/asm-arm/processor.h |    2 +-
>  xen/include/asm-arm/sysregs.h   |   19 ++++-
>  7 files changed, 182 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 124cccf..104d228 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
>      else
>          hcr |= HCR_RW;
>  
> +    if ( n->arch.sctlr & SCTLR_M )
> +        hcr &= ~(HCR_TVM|HCR_DC);
> +    else
> +        hcr |= (HCR_TVM|HCR_DC);
> +
>      WRITE_SYSREG(hcr, HCR_EL2);
>      isb();

Is this actually needed? Shouldn't HCR be already correctly updated by
update_sctlr?


> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..bb31db8 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
>      else
>          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
>  #endif
> +    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
>  
>      /*
>       * kernel_load will determine the placement of the initrd & fdt in
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 7c5ab19..d00bba3 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
>      regs->pc += hsr.len ? 4 : 2;
>  }
>  
> +static void update_sctlr(uint32_t v)
> +{
> +    /*
> +     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
> +     * because they are incompatible.
> +     *
> +     * Once HCR.DC is disabled then we do not need HCR_TVM either,
> +     * since its only purpose was to catch the MMU being enabled.
> +     *
> +     * Both are set appropriately on context switch but we need to
> +     * clear them now since we may not context switch on return to
> +     * guest.
> +     */
> +    if ( v & SCTLR_M )
> +        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
> +}
> +
>  static void do_cp15_32(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
> @@ -1338,6 +1355,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
>          if ( cp32.read )
>             *r = v->arch.actlr;
>          break;
> +
> +/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
> +#define CP32_PASSTHRU32(R...) do {              \
> +    if ( cp32.read )                            \
> +        *r = READ_SYSREG32(R);                  \
> +    else                                        \
> +        WRITE_SYSREG32(*r, R);                  \
> +} while(0)
> +
> +/*
> + * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
> + * Updates the lower 32-bits and clears the upper bits.
> + */
> +#define CP32_PASSTHRU64(R...) do {              \
> +    if ( cp32.read )                            \
> +        *r = (uint32_t)READ_SYSREG64(R);        \
> +    else                                        \
> +        WRITE_SYSREG64((uint64_t)*r, R);        \
> +} while(0)

Can/Should CP32_PASSTHRU64_LO be used instead of this?


> +/*
> + * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
> + * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
> + * the other half.
> + */
> +#ifdef CONFIG_ARM_64
> +#define CP32_PASSTHRU64_HI(R...) do {                   \
> +    if ( cp32.read )                                    \
> +        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
> +    else                                                \
> +    {                                                   \
> +        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
> +        t |= ((uint64_t)(*r)) << 32;                    \
> +        WRITE_SYSREG64(t, R);                           \
> +    }                                                   \
> +} while(0)
> +#define CP32_PASSTHRU64_LO(R...) do {                           \
> +    if ( cp32.read )                                            \
> +        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
> +    else                                                        \
> +    {                                                           \
> +        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
> +        t |= *r;                                                \
> +        WRITE_SYSREG64(t, R);                                   \
> +    }                                                           \
> +} while(0)
> +#endif
> +    /* HCR.TVM */
> +    case HSR_CPREG32(SCTLR):
> +        CP32_PASSTHRU32(SCTLR_EL1);
> +        update_sctlr(*r);
> +        break;
> +    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
> +    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
> +    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
> +    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
> +    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
> +    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
> +    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
> +    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
> +    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
> +
> +#ifdef CONFIG_ARM_64
> +    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
> +    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
> +    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
> +    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
> +    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
> +    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
> +#else
> +    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
> +    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
> +    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
> +    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
> +    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
> +    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
> +#endif
> +
> +#undef CP32_PASSTHRU32
> +#undef CP32_PASSTHRU64
> +#undef CP32_PASSTHRU64_LO
> +#undef CP32_PASSTHRU64_HI
>      default:
>          printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
>                 cp32.read ? "mrc" : "mcr",
> @@ -1351,6 +1451,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
>      struct hsr_cp64 cp64 = hsr.cp64;
> +    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
> +    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
> +    uint64_t r;
>  
>      if ( !check_conditional_instr(regs, hsr) )
>      {
> @@ -1368,6 +1471,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
>              domain_crash_synchronous();
>          }
>          break;
> +
> +#define CP64_PASSTHRU(R...) do {                                  \
> +    if ( cp64.read )                                            \
> +    {                                                           \
> +        r = READ_SYSREG64(R);                                   \
> +        *r1 = r & 0xffffffffUL;                                 \
> +        *r2 = r >> 32;                                          \

it doesn't look like r, r1 and r2 are used anywhere


> +    }                                                           \
> +    else                                                        \
> +    {                                                           \
> +        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
> +        WRITE_SYSREG64(r, R);                                   \
> +    }                                                           \
> +} while(0)
> +
> +    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
> +    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
> +
> +#undef CP64_PASSTHRU
> +
>      default:
>          printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
>                 cp64.read ? "mrrc" : "mcrr",
> @@ -1382,11 +1505,12 @@ static void do_sysreg(struct cpu_user_regs *regs,
>                        union hsr hsr)
>  {
>      struct hsr_sysreg sysreg = hsr.sysreg;
> +    register_t *x = select_user_reg(regs, sysreg.reg);
>  
>      switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
>      {
> -    case CNTP_CTL_EL0:
> -    case CNTP_TVAL_EL0:
> +    case HSR_SYSREG_CNTP_CTL_EL0:
> +    case HSR_SYSREG_CNTP_TVAL_EL0:
>          if ( !vtimer_emulate(regs, hsr) )
>          {
>              dprintk(XENLOG_ERR,
> @@ -1394,6 +1518,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
>              domain_crash_synchronous();
>          }
>          break;
> +
> +#define SYSREG_PASSTHRU(R...) do {              \
> +    if ( sysreg.read )                          \
> +        *x = READ_SYSREG(R);                    \
> +    else                                        \
> +        WRITE_SYSREG(*x, R);                    \
> +} while(0)
> +
> +    case HSR_SYSREG_SCTLR_EL1:
> +        SYSREG_PASSTHRU(SCTLR_EL1);
> +        update_sctlr(*x);
> +        break;
> +    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
> +    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
> +    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
> +    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
> +    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
> +    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
> +    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
> +    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
> +    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
> +    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
> +
> +#undef SYSREG_PASSTHRU
> +
>      default:
>          printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
>                 sysreg.read ? "mrs" : "msr",
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 433ad55..e325f78 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
>  
>      switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
>      {
> -    case CNTP_CTL_EL0:
> +    case HSR_SYSREG_CNTP_CTL_EL0:
>          vtimer_cntp_ctl(regs, &r, sysreg.read);
>          if ( sysreg.read )
>              *x = r;
>          return 1;
> -    case CNTP_TVAL_EL0:
> +    case HSR_SYSREG_CNTP_TVAL_EL0:
>          vtimer_cntp_tval(regs, &r, sysreg.read);
>          if ( sysreg.read )
>              *x = r;
>          return 1;
>  
> -    case HSR_CPREG64(CNTPCT):
> +    case HSR_SYSREG_CNTPCT_EL0:
>          return vtimer_cntpct(regs, x, sysreg.read);
>  
>      default:
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index f0f1d53..508467a 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -121,6 +121,8 @@
>  #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
>  #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
>  #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
> +#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
> +#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
>  #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
>  #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
>  #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
> @@ -260,7 +262,9 @@
>  #define CPACR_EL1               CPACR
>  #define CSSELR_EL1              CSSELR
>  #define DACR32_EL2              DACR
> +#define ESR_EL1                 DFSR
>  #define ESR_EL2                 HSR
> +#define FAR_EL1                 HIFAR
>  #define FAR_EL2                 HIFAR
>  #define HCR_EL2                 HCR
>  #define HPFAR_EL2               HPFAR
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index dfe807d..06e638f 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -342,7 +342,7 @@ union hsr {
>  #define HSR_SYSREG_OP0_SHIFT (20)
>  #define HSR_SYSREG_OP1_MASK (0x0001c000)
>  #define HSR_SYSREG_OP1_SHIFT (14)
> -#define HSR_SYSREG_CRN_MASK (0x00003800)
> +#define HSR_SYSREG_CRN_MASK (0x00003c00)
>  #define HSR_SYSREG_CRN_SHIFT (10)
>  #define HSR_SYSREG_CRM_MASK (0x0000001e)
>  #define HSR_SYSREG_CRM_SHIFT (1)
> diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
> index 48ad07e..0cee0e9 100644
> --- a/xen/include/asm-arm/sysregs.h
> +++ b/xen/include/asm-arm/sysregs.h
> @@ -40,8 +40,23 @@
>      ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
>      ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
>  
> -#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
> -#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
> +#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
> +#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
> +#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
> +#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
> +#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
> +#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
> +#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
> +#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
> +#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
> +#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
> +#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
> +
> +#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
> +#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
> +#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
> +
> +
>  #endif
>  
>  #endif
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 20:40:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 20:40:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0dRf-0007ss-J8; Tue, 07 Jan 2014 20:40:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0dRe-0007sn-1Z
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 20:40:14 +0000
Received: from [85.158.143.35:44044] by server-2.bemta-4.messagelabs.com id
	F9/00-11386-D266CC25; Tue, 07 Jan 2014 20:40:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389127210!10211508!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28998 invoked from network); 7 Jan 2014 20:40:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 20:40:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,620,1384300800"; d="scan'208";a="88473547"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 20:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 15:40:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0dRZ-00036c-5K;
	Tue, 07 Jan 2014 20:40:09 +0000
Date: Tue, 7 Jan 2014 20:39:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	julien.grall@linaro.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014, Ian Campbell wrote:
> On ARM, guest OSes are started with MMU and caches disabled (as they are on
> native) however caching is enabled in the domain running the builder and
> therefore we must ensure cache consistency.
> 
> The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
> cache after loading images into guest memory") is to flush the caches after
> loading the various blobs into guest RAM. However this approach has two
> shortcomings:
> 
>  - The cache flush primitives available to userspace on arm32 are not
>    sufficient for our needs.
>  - There is a race between the cache flush and the unmap of the guest page
>    where the processor might speculatively dirty the cache line again.
> 
> (of these the second is the more fundamental)
> 
> This patch makes use of the hardware functionality to force all accesses
> made from guest mode to be cached (the HCR.DC == default cached bit). This
> means that we don't need to worry about the domain builder's writes being
> cached because the guest's "uncached" accesses will actually be cached.
> 
> Unfortunately the use of HCR.DC is incompatible with the guest enabling its
> MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
> detect when this happens and disable HCR.DC. This is done with the HCR.TVM
> (trap virtual memory controls) bit which also causes various other registers
> to be trapped, all of which can be passed straight through to the underlying
> register. Once the guest has enabled its MMU we no longer need to trap so
> there is no ongoing overhead. In my tests Linux makes about half a dozen
> accesses to these registers before the MMU is enabled; I would expect other
> OSes to behave similarly (the sequence of writes needed to set up the MMU is
> pretty obvious).
> 
> Apart from this unfortunate need to trap these accesses this approach is
> incompatible with guests which attempt to do DMA operations with their MMU
> disabled. In practice this means guests with passthrough which we do not yet
> support. Since a typical guest (including dom0) does not access devices which
> require DMA until after it is fully up and running with paging enabled the
> main risk is to in-guest firmware which does DMA, i.e. running EFI in a guest,
> with a disk passed through and booting from that disk. Since we know that dom0
> is not using any such firmware and we do not support device passthrough to
> guests yet we can live with this restriction. Once passthrough is implemented
> this will need to be revisited.
> 
> The patch includes a couple of seemingly unrelated but necessary changes:
> 
>  - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
>    with the existing set of system registers we handled, but broke with the new
>    ones introduced here.
>  - The defines used to decode the HSR system register fields were named the
>    same as the register. This breaks the accessor macros. This had gone
>    unnoticed because the handling of the existing trapped registers did not
>    require accessing the underlying hardware register. Rename those constants
>    with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
> 
> This patch has survived thousands of boot loops on a Midway system.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> My preferred solution here would be for the tools to use an uncached mapping
> of guest memory when building the guest, which requires adding a new privcmd
> ioctl (a relatively straightforward patch) and plumbing a "cached" flag
> through the libxc foreign mapping interfaces (a twisty maze of passages, all
> alike).  IMO the libxc side of this patch was not looking suitable for a 4.4
> freeze exception, since it was quite large (because we have 4 or more mapping
> interfaces in libxc, some of which call back into others).
> 
> So I propose this version for 4.4. The uncached mapping solution should be
> revisited for a future release.
> 
> At the risk of appearing to be going mad:
> 
> <speaking hat="submitter">
> 
> This bug results in memory corruption in the guest, which mostly manifests
> as a failure to boot the guest (subsequent bad behaviour is possible but, I
> think, unlikely). The frequency of failure is perhaps 1 in 10 boots. This would
> not constitute an awesome release.
> 
> Although the patch is large, most of it is repetitive and mechanical (made
> explicit through the use of macros in many cases). The biggest risk is that
> one of the registers is not passed through correctly (i.e. the wrong size or
> target registers). The ones which Linux uses have been tested and appear to
> function OK.  The others might be buggy but this is mitigated through the use
> of the same set of macros.
> 
> I think the chance of the patch having a bug wrt my understanding of the
> hardware behaviour is pretty low. WRT there being bugs in my understanding of
> the hardware documentation, I would say middle to low, but I have discussed it
> with some folks at ARM and they didn't call me an idiot (in fact pretty much
> the same thing has been proposed for KVM).
> 
> Overall I think the benefits outweigh the risks.
> 
> One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
> It's reasonably recent so reverting it takes us back to a pretty well
> understood state in the libraries. The functionality is harmless if
> incomplete. I think given the first argument I would lean towards reverting.
> 
> </speaking>
> 
> <speaking hat="stand in release manager">
> 
> OK.
> 
> </speaking>
> 
> Obviously if you think I'm being too easy on myself please say so.
> 
> Actually, if you think my judgement is right I'd appreciate being told so too.
> ---
>  xen/arch/arm/domain.c           |    5 ++
>  xen/arch/arm/domain_build.c     |    1 +
>  xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
>  xen/arch/arm/vtimer.c           |    6 +-
>  xen/include/asm-arm/cpregs.h    |    4 +
>  xen/include/asm-arm/processor.h |    2 +-
>  xen/include/asm-arm/sysregs.h   |   19 ++++-
>  7 files changed, 182 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 124cccf..104d228 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
>      else
>          hcr |= HCR_RW;
>  
> +    if ( n->arch.sctlr & SCTLR_M )
> +        hcr &= ~(HCR_TVM|HCR_DC);
> +    else
> +        hcr |= (HCR_TVM|HCR_DC);
> +
>      WRITE_SYSREG(hcr, HCR_EL2);
>      isb();

Is this actually needed? Shouldn't HCR be already correctly updated by
update_sctlr?


> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..bb31db8 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
>      else
>          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
>  #endif
> +    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
>  
>      /*
>       * kernel_load will determine the placement of the initrd & fdt in
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 7c5ab19..d00bba3 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
>      regs->pc += hsr.len ? 4 : 2;
>  }
>  
> +static void update_sctlr(uint32_t v)
> +{
> +    /*
> +     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
> +     * because they are incompatible.
> +     *
> +     * Once HCR.DC is disabled then we do not need HCR_TVM either,
> +     * since its only purpose was to catch the MMU being enabled.
> +     *
> +     * Both are set appropriately on context switch but we need to
> +     * clear them now since we may not context switch on return to
> +     * guest.
> +     */
> +    if ( v & SCTLR_M )
> +        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
> +}
> +
>  static void do_cp15_32(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
> @@ -1338,6 +1355,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
>          if ( cp32.read )
>             *r = v->arch.actlr;
>          break;
> +
> +/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
> +#define CP32_PASSTHRU32(R...) do {              \
> +    if ( cp32.read )                            \
> +        *r = READ_SYSREG32(R);                  \
> +    else                                        \
> +        WRITE_SYSREG32(*r, R);                  \
> +} while(0)
> +
> +/*
> + * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
> + * Updates the lower 32-bits and clears the upper bits.
> + */
> +#define CP32_PASSTHRU64(R...) do {              \
> +    if ( cp32.read )                            \
> +        *r = (uint32_t)READ_SYSREG64(R);        \
> +    else                                        \
> +        WRITE_SYSREG64((uint64_t)*r, R);        \
> +} while(0)

Can/Should CP32_PASSTHRU64_LO be used instead of this?


> +/*
> + * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
> + * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
> + * the other half.
> + */
> +#ifdef CONFIG_ARM_64
> +#define CP32_PASSTHRU64_HI(R...) do {                   \
> +    if ( cp32.read )                                    \
> +        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
> +    else                                                \
> +    {                                                   \
> +        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
> +        t |= ((uint64_t)(*r)) << 32;                    \
> +        WRITE_SYSREG64(t, R);                           \
> +    }                                                   \
> +} while(0)
> +#define CP32_PASSTHRU64_LO(R...) do {                           \
> +    if ( cp32.read )                                            \
> +        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
> +    else                                                        \
> +    {                                                           \
> +        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
> +        t |= *r;                                                \
> +        WRITE_SYSREG64(t, R);                                   \
> +    }                                                           \
> +} while(0)
> +#endif
> +    /* HCR.TVM */
> +    case HSR_CPREG32(SCTLR):
> +        CP32_PASSTHRU32(SCTLR_EL1);
> +        update_sctlr(*r);
> +        break;
> +    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
> +    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
> +    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
> +    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
> +    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
> +    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
> +    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
> +    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
> +    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
> +
> +#ifdef CONFIG_ARM_64
> +    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
> +    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
> +    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
> +    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
> +    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
> +    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
> +#else
> +    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
> +    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
> +    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
> +    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
> +    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
> +    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
> +#endif
> +
> +#undef CP32_PASSTHRU32
> +#undef CP32_PASSTHRU64
> +#undef CP32_PASSTHRU64_LO
> +#undef CP32_PASSTHRU64_HI
>      default:
>          printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
>                 cp32.read ? "mrc" : "mcr",
> @@ -1351,6 +1451,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
>                         union hsr hsr)
>  {
>      struct hsr_cp64 cp64 = hsr.cp64;
> +    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
> +    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
> +    uint64_t r;
>  
>      if ( !check_conditional_instr(regs, hsr) )
>      {
> @@ -1368,6 +1471,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
>              domain_crash_synchronous();
>          }
>          break;
> +
> +#define CP64_PASSTHRU(R...) do {                                  \
> +    if ( cp64.read )                                            \
> +    {                                                           \
> +        r = READ_SYSREG64(R);                                   \
> +        *r1 = r & 0xffffffffUL;                                 \
> +        *r2 = r >> 32;                                          \

It doesn't look like r, r1 and r2 are used anywhere.

> +    }                                                           \
> +    else                                                        \
> +    {                                                           \
> +        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
> +        WRITE_SYSREG64(r, R);                                   \
> +    }                                                           \
> +} while(0)
> +
> +    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
> +    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
> +
> +#undef CP64_PASSTHRU
> +
>      default:
>          printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
>                 cp64.read ? "mrrc" : "mcrr",
> @@ -1382,11 +1505,12 @@ static void do_sysreg(struct cpu_user_regs *regs,
>                        union hsr hsr)
>  {
>      struct hsr_sysreg sysreg = hsr.sysreg;
> +    register_t *x = select_user_reg(regs, sysreg.reg);
>  
>      switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
>      {
> -    case CNTP_CTL_EL0:
> -    case CNTP_TVAL_EL0:
> +    case HSR_SYSREG_CNTP_CTL_EL0:
> +    case HSR_SYSREG_CNTP_TVAL_EL0:
>          if ( !vtimer_emulate(regs, hsr) )
>          {
>              dprintk(XENLOG_ERR,
> @@ -1394,6 +1518,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
>              domain_crash_synchronous();
>          }
>          break;
> +
> +#define SYSREG_PASSTHRU(R...) do {              \
> +    if ( sysreg.read )                          \
> +        *x = READ_SYSREG(R);                    \
> +    else                                        \
> +        WRITE_SYSREG(*x, R);                    \
> +} while(0)
> +
> +    case HSR_SYSREG_SCTLR_EL1:
> +        SYSREG_PASSTHRU(SCTLR_EL1);
> +        update_sctlr(*x);
> +        break;
> +    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
> +    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
> +    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
> +    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
> +    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
> +    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
> +    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
> +    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
> +    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
> +    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
> +
> +#undef SYSREG_PASSTHRU
> +
>      default:
>          printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
>                 sysreg.read ? "mrs" : "msr",
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index 433ad55..e325f78 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
>  
>      switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
>      {
> -    case CNTP_CTL_EL0:
> +    case HSR_SYSREG_CNTP_CTL_EL0:
>          vtimer_cntp_ctl(regs, &r, sysreg.read);
>          if ( sysreg.read )
>              *x = r;
>          return 1;
> -    case CNTP_TVAL_EL0:
> +    case HSR_SYSREG_CNTP_TVAL_EL0:
>          vtimer_cntp_tval(regs, &r, sysreg.read);
>          if ( sysreg.read )
>              *x = r;
>          return 1;
>  
> -    case HSR_CPREG64(CNTPCT):
> +    case HSR_SYSREG_CNTPCT_EL0:
>          return vtimer_cntpct(regs, x, sysreg.read);
>  
>      default:
> diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
> index f0f1d53..508467a 100644
> --- a/xen/include/asm-arm/cpregs.h
> +++ b/xen/include/asm-arm/cpregs.h
> @@ -121,6 +121,8 @@
>  #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
>  #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
>  #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
> +#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
> +#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
>  #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
>  #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
>  #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
> @@ -260,7 +262,9 @@
>  #define CPACR_EL1               CPACR
>  #define CSSELR_EL1              CSSELR
>  #define DACR32_EL2              DACR
> +#define ESR_EL1                 DFSR
>  #define ESR_EL2                 HSR
> +#define FAR_EL1                 HIFAR
>  #define FAR_EL2                 HIFAR
>  #define HCR_EL2                 HCR
>  #define HPFAR_EL2               HPFAR
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index dfe807d..06e638f 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -342,7 +342,7 @@ union hsr {
>  #define HSR_SYSREG_OP0_SHIFT (20)
>  #define HSR_SYSREG_OP1_MASK (0x0001c000)
>  #define HSR_SYSREG_OP1_SHIFT (14)
> -#define HSR_SYSREG_CRN_MASK (0x00003800)
> +#define HSR_SYSREG_CRN_MASK (0x00003c00)
>  #define HSR_SYSREG_CRN_SHIFT (10)
>  #define HSR_SYSREG_CRM_MASK (0x0000001e)
>  #define HSR_SYSREG_CRM_SHIFT (1)
> diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
> index 48ad07e..0cee0e9 100644
> --- a/xen/include/asm-arm/sysregs.h
> +++ b/xen/include/asm-arm/sysregs.h
> @@ -40,8 +40,23 @@
>      ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
>      ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
>  
> -#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
> -#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
> +#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
> +#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
> +#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
> +#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
> +#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
> +#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
> +#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
> +#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
> +#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
> +#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
> +#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
> +
> +#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
> +#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
> +#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
> +
> +
>  #endif
>  
>  #endif
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 21:26:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 21:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0eAH-0001EY-DG; Tue, 07 Jan 2014 21:26:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0eAG-0001ET-IB
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 21:26:20 +0000
Received: from [85.158.143.35:11089] by server-1.bemta-4.messagelabs.com id
	39/B1-02132-BF07CC25; Tue, 07 Jan 2014 21:26:19 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389129978!10266040!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22840 invoked from network); 7 Jan 2014 21:26:19 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 21:26:19 -0000
Received: by mail-ea0-f171.google.com with SMTP id h10so453382eak.16
	for <xen-devel@lists.xen.org>; Tue, 07 Jan 2014 13:26:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0SxlaLUou9Ffhh1tUeK/tDf4/FBO2J7S66YOw1gDYJ8=;
	b=JeSi7h2jihPPY0eNQTRBhW7gMt4QaUUhXevOZ8lwsrVTZ+h3R7+3gNGr89rBtH8qex
	cYF0laUQ/3uJhxh+CBmfo4/EPzDI0p5QOAlCgn4frV5IZCef1xjwON5kXPAQxFeIk17S
	6l/ROXTgd/gew1tdsAcHuthDRAtA98DNQ25DkUat2ihTuRGmOGPLQNskL7tfdX0eIDFZ
	6VSg3m5HxV2KWjKlcKktZV89Q762gjDuq7NHsHU63X2R83D3T1ljjy/+KKRxduoZl5vz
	1J3erJBYDhapxS/Qo2UZhEPkpF3R+evWnK/1XmG6Hg7lo41vcg7bMOZfWbXZXsIRQzqw
	usKQ==
X-Gm-Message-State: ALoCoQkkSWl2K0JrcPN+0nD8vT5MggOhP7+F+zs7Pt7iuQzCH3lozNJolEULFUrZxNxajQRC8cOi
X-Received: by 10.14.200.197 with SMTP id z45mr10150331een.98.1389129976815;
	Tue, 07 Jan 2014 13:26:16 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	a45sm183519622eem.6.2014.01.07.13.26.15 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 13:26:15 -0800 (PST)
Message-ID: <52CC70F6.5060502@linaro.org>
Date: Tue, 07 Jan 2014 21:26:14 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-4-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1388677433-49525-4-git-send-email-roger.pau@citrix.com>
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 03/19] xen: add and enable Xen console
 for PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
> This adds and enables the console used on Xen kernels.
> ---

[..]

> +/* Debug function, prints directly to hypervisor console */
> +void xc_printf(const char *, ...);
> +

Can you add __printflike(...)? It will make it easier to catch bad format strings.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 21:30:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 21:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0eDr-0001jW-9X; Tue, 07 Jan 2014 21:30:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W0eDo-0001jM-ER
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 21:30:00 +0000
Received: from [85.158.143.35:22179] by server-2.bemta-4.messagelabs.com id
	20/76-11386-7D17CC25; Tue, 07 Jan 2014 21:29:59 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389130198!10266400!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1888 invoked from network); 7 Jan 2014 21:29:58 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-4.tower-21.messagelabs.com with SMTP;
	7 Jan 2014 21:29:58 -0000
Received: from localhost (cpe-74-71-55-169.nyc.res.rr.com [74.71.55.169])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 21F9D58843E;
	Tue,  7 Jan 2014 13:29:57 -0800 (PST)
Date: Tue, 07 Jan 2014 16:29:56 -0500 (EST)
Message-Id: <20140107.162956.1062166230525232035.davem@davemloft.net>
To: paul.durrant@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389111929-37231-1-git-send-email-paul.durrant@citrix.com>
References: <1389111929-37231-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.1
	(shards.monkeyblade.net [0.0.0.0]);
	Tue, 07 Jan 2014 13:29:57 -0800 (PST)
Cc: netdev@vger.kernel.org, david.vrabel@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: stop vif thread
 spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Paul Durrant <paul.durrant@citrix.com>
Date: Tue, 7 Jan 2014 16:25:29 +0000

> @@ -477,6 +477,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>  	unsigned long offset;
>  	struct skb_cb_overlay *sco;
>  	int need_to_notify = 0;
> +	int ring_full = 0;

Please use bool, false, and true.

>  
> -	if (!npo.copy_prod)
> +	if (!npo.copy_prod) {
> +		if (ring_full)
> +			vif->rx_queue_stopped = true;
>  		goto done;
> +	}
> +
> +	vif->rx_queue_stopped = false;

And then you can code this as:

	vif->rx_queue_stopped = (!npo.copy_prod && ring_full);
	if (!npo.copy_prod)
		goto done;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 21:47:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 21:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0eTz-0002II-BO; Tue, 07 Jan 2014 21:46:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0eTy-0002ID-BE
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 21:46:42 +0000
Received: from [85.158.137.68:23854] by server-9.bemta-3.messagelabs.com id
	23/38-13104-1C57CC25; Tue, 07 Jan 2014 21:46:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389131198!7780335!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11407 invoked from network); 7 Jan 2014 21:46:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 21:46:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90660287"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 21:46:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 16:46:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W0eTs-0006LH-JZ;
	Tue, 07 Jan 2014 21:46:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W0eTp-0000jO-U3;
	Tue, 07 Jan 2014 21:46:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24295-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Jan 2014 21:46:34 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24295: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24295 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24295/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24250

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24250
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24250

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

------------------------------------------------------------
People who touched revisions under test:
From xen-devel-bounces@lists.xen.org Tue Jan 07 21:47:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 21:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0eTz-0002II-BO; Tue, 07 Jan 2014 21:46:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0eTy-0002ID-BE
	for xen-devel@lists.xensource.com; Tue, 07 Jan 2014 21:46:42 +0000
Received: from [85.158.137.68:23854] by server-9.bemta-3.messagelabs.com id
	23/38-13104-1C57CC25; Tue, 07 Jan 2014 21:46:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389131198!7780335!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11407 invoked from network); 7 Jan 2014 21:46:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 21:46:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90660287"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Jan 2014 21:46:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 16:46:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W0eTs-0006LH-JZ;
	Tue, 07 Jan 2014 21:46:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W0eTp-0000jO-U3;
	Tue, 07 Jan 2014 21:46:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24295-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 7 Jan 2014 21:46:34 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24295: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24295 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24295/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24250

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24250
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24250

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                     fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
  Tsahee Zidenberg <tsahee@gmx.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cedfdd43a9798e535a05690bb6f01394490d26bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 16:01:14 2014 +0100

    IOMMU: make page table deallocation preemptible
    
    This too can take an arbitrary amount of time.
    
    In fact, the bulk of the work is being moved to a tasklet, as handling
    the necessary preemption logic in line seems close to impossible given
    that the teardown may also be invoked on error paths.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

commit cc0a6c6c749a8693dc0e201773c10cd97e5e6ce0
Merge: 4746b5a... 81b1c7d...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 14:32:45 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 4746b5adc396bb2fc963b3156eab7267c6e7e541
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Dec 20 15:08:08 2013 +0000

    xen: arm: context switch the aux memory attribute registers
    
    We appear to have somehow missed these. Linux doesn't actually use them and
    none of the processors I've looked at actually define any bits in them (so
    they are UNK/SBZP) but it is good form to context switch them anyway.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 81b1c7de2339d2788352b162057e70130803f3cf
Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Date:   Tue Jan 7 15:09:42 2014 +0100

    AMD/IOMMU: fix infinite loop due to ivrs_bdf_entries larger than 16-bit value
    
    Certain AMD systems can have up to 0x10000 ivrs_bdf_entries.
    However, the loop variable (bdf) is declared as u16, which causes an
    infinite loop when parsing the IOMMU event log with an IO_PAGE_FAULT
    event.  This patch changes the variable to u32 instead.
    
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 62d33ca1048f4e08eaeb026c7b79239b4605b636
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:59:31 2014 +0100

    VTD/DMAR: free() correct pointer on error from acpi_parse_one_atsr()
    
    Free the allocated structure rather than the ACPI table ATS entry.
    
    On further analysis, there is another memory leak.  acpi_parse_dev_scope()
    could allocate scope->devices, and return with -ENOMEM.  All callers of
    acpi_parse_dev_scope() would then free the underlying structure, losing the
    pointer.
    
    These errors can only actually be reached through acpi_parse_dev_scope()
    (which passes type = DMAR_TYPE), but I am quite surprised Coverity didn't spot
    it.
    
    Coverity-ID: 1146949
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a9fe8c7dda440b84e178d65dcd64c0173b0a4b5d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:58:35 2014 +0100

    AMD/microcode: avoid use-after-free for the microcode buffer
    
    Previously it was possible to free the mc_old buffer and then store the
    freed pointer for use in the case of resume.
    
    This keeps the old semantics of being able to return an error even after a
    successful microcode application.
    
    Coverity-ID: 1146953
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 2af30e62c4e562d7a4ec4185fdab20fb29354fd8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:57:15 2014 +0100

    AMD/iommu_detect: don't leak iommu structure on error paths
    
    Tweak the logic slightly to return the real errors from
    get_iommu_{,msi_}capabilities(), which at the moment is no functional change.
    
    Coverity-ID: 1146950
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b764615c391fdc2648d460245c748a3a319a296e
Merge: 57a4578... f4fed54...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 13:50:35 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 57a45785584e651b807eed08f3a6950d4ade0156
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 24 11:28:47 2013 +0000

    xen: driver/char: fix const declaration of DT compatible list
    
    The data type for DT compatible list should be:
        const char * const[]  __initconst
    
    Fix every serial driver which supports device tree.
    
    Spotted-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit dd3ab3881494136999138d70f4fe28ebabe8660c
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 12:59:57 2013 +0200

    ns16550: support ns16550a
    
    Ns16550a devices are ns16550 devices with additional capabilities.
    Declare that Xen is compatible with this device, to be able to use
    unmodified device trees.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 7dde263e6cbe2b58dbff368f8a63dfc6152a70ef
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 13:01:31 2013 +0200

    xen/dts: specific bad cell count error
    
    Specify in the error message if bad cell count is in device or parent.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 8a91554484ad6977f641b308af38f337c20e97cc
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:18 2013 +0000

    libxc: Document xenctrl.h event channel calls
    
    Provide semantic documentation for how the libxc calls relate to the
    hypervisor interface, and how they are to be used.
    
    Also document the bug (present at least in Linux 3.12) that setting
    the evtchn fd to nonblocking doesn't in fact make xc_evtchn_pending
    nonblocking, and describe the appropriate workaround.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Jan Beulich <JBeulich@suse.com>
    CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit a8da8249506b55fe9314462e90cc6749fd50a5fa
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:17 2013 +0000

    docs: Document event-channel-based suspend protocol
    
    Document the event channel protocol in xenstore-paths.markdown, in the
    section for ~/device/suspend/event-channel.
    
    Protocol reverse-engineered from commentary and commit messages of
      4539594d46f9  Add facility to get notification of domain suspend ...
      17636f47a474  Teach xc_save to use event-channel-based ...
    and implementations in
      xc_save (current version)
      libxl (current version)
      linux-2.6.18-xen (mercurial 1241:2993033a77ca)
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>

commit 340702fd894add8adcdfd6c5742f41f89aa1fed2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:16 2013 +0000

    xen: Document that EVTCHNOP_bind_interdomain signals
    
    EVTCHNOP_bind_interdomain signals the event channel.  Document this.
    
    Also explain the usual use pattern.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>

commit f63b6c6ddcb44b5551e2f7748b0f5de6d73b35e5
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:15 2013 +0000

    xen: Document XEN_DOMCTL_subscribe
    
    Arguably this domctl is misnamed.  But, for now, document its actual
    behaviour (reverse-engineered from the code and found in the commit
    message for 4539594d46f9) under its actual name.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>
    CC: Jan Beulich <JBeulich@suse.com>

commit 36a31eb693774e61cdc119c276be90d67b675563
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 17 14:28:19 2013 +0000

    xen/arm: Allow ballooning working with 1:1 memory mapping
    
    In the absence of an IOMMU, dom0 must have a 1:1 memory mapping for
    all of its guest physical addresses. When the balloon driver decides
    to give a page back to the kernel, this page must have the same
    address as before. Otherwise, we will lose the 1:1 mapping and break
    DMA-capable devices.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Cc: Keir Fraser <keir@xen.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit f4fed540e78ac8a2bd3b1dee53a5206dde25f613
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:47 2014 +0100

    VMX: Eliminate cr3 save/loading exiting when UG enabled
    
    With the unrestricted guest (UG) feature, no vmexit should be
    triggered when the guest accesses cr3 in non-paging mode. This
    patch clears the cr3 save/load exiting bits in the VMCS control
    field to eliminate cr3 access vmexits on UG-capable hardware.
    
    A previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
    did the same thing, but it caused guests to fail to boot on non-UG
    hardware, as reported by Jan, and was reverted (commit
    1e2bf05ec37cf04b0e01585eae524509179f165e).
    
    This patch incorporates the fix; guests work well on both UG and
    non-UG platforms.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 185e83591ce420e0b004646b55c5e4783e388531
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:21 2014 +0100

    VMX,apicv: Set "NMI-window exiting" for NMI
    
    Enable NMI-window exiting if interrupt delivery is blocked by an
    NMI on APICv-enabled platforms.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>

commit 3e06b9890c0a691388ace5a6636728998b237b90
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 14:21:48 2014 +0100

    IOMMU: make page table population preemptible
    
    Since this can take an arbitrary amount of time, the domctl that
    roots the operation, as well as all involved code, must become
    aware that this requires a continuation.
    
    The subject domain's rel_mem_list is being (ab)used for this, in a way
    similar to and compatible with broken page offlining.
    
    Further, operations get slightly re-ordered in assign_device(): IOMMU
    page tables now get set up _before_ the first device gets assigned, at
    once closing a small timing window in which the guest may already see
    the device but wouldn't be able to access it.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 22:43:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 22:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0fMZ-0004qa-VX; Tue, 07 Jan 2014 22:43:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0fMY-0004qV-EF
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 22:43:06 +0000
Received: from [193.109.254.147:21175] by server-1.bemta-14.messagelabs.com id
	7A/72-15600-9F28CC25; Tue, 07 Jan 2014 22:43:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389134583!9446942!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3464 invoked from network); 7 Jan 2014 22:43:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Jan 2014 22:43:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88519394"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Jan 2014 22:43:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 7 Jan 2014 17:43:02 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1W0fMU-0004je-5S;
	Tue, 07 Jan 2014 22:43:02 +0000
Message-ID: <1389134581.6917.19.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 7 Jan 2014 22:43:01 +0000
In-Reply-To: <21196.19900.136146.867552@mariner.uk.xensource.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 18:55 +0000, Ian Jackson wrote:
> I did the following test:
> 
>    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
>    xl migrate debian.guest.osstest localhost
> 
> xl did what appears to be the right thing: it did most of the
> migration, failed to run the block scripts at the end of the
> migration, and destroyed the destination domain and instead resumed
> the source guest.
> 
> However, the source guest immediately went mad spewing WARNINGs and
> was after that no longer contactable via the network and not
> apparently responsive on the console.  See below.

Might this be the libxl resume thing described at the end of:
http://lists.xen.org/archives/html/xen-devel/2013-02/msg00130.html ?

I thought we'd switched to using fast resume by default to work around
this, but looking at the code it seems not.

It'd be lovely if the slow path finally got implemented instead of
falling through the cracks again.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 23:03:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 23:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ffo-0005o3-T1; Tue, 07 Jan 2014 23:03:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0ffm-0005ny-U2
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 23:02:59 +0000
Received: from [85.158.143.35:61307] by server-1.bemta-4.messagelabs.com id
	14/38-02132-2A78CC25; Tue, 07 Jan 2014 23:02:58 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389135776!7585746!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32039 invoked from network); 7 Jan 2014 23:02:57 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 23:02:57 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07N1qPa009808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 23:01:52 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07N1o1O012174
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 23:01:51 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s07N1owv003859; Tue, 7 Jan 2014 23:01:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 15:01:49 -0800
Date: Tue, 7 Jan 2014 15:01:48 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107150148.4cbf1a73@mantra.us.oracle.com>
In-Reply-To: <52CC2A2F.7010700@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
	<1389088937.31766.107.camel@kazak.uk.xensource.com>
	<52CC2A2F.7010700@terremark.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 07 Jan 2014 11:24:15 -0500
Don Slutz <dslutz@verizon.com> wrote:

> On 01/07/14 05:02, Ian Campbell wrote:
> > On Tue, 2014-01-07 at 10:00 +0000, Ian Campbell wrote:
> >> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> >>> On Sat,  4 Jan 2014 12:52:16 -0500
> >>> Don Slutz <dslutz@verizon.com> wrote:
> >>>
> >>>> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> >>>> returned.
> >>>>

..... 

> I had assumed that this patch (which is not needed to "fix" the bugs
> I found) was to be dropped in v2.  However, I will agree that
> currently there is no way to know about partial success.  The
> untested change:
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index ef6c140..0add07e 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -43,7 +43,7 @@ static int gdbsx_guest_mem_io(
>       iop->remain = dbg_rw_mem(
>           (dbgva_t)iop->gva, (dbgbyte_t *)l_uva, iop->len, domid,
>           iop->gwr, iop->pgd3val);
> -    return (iop->remain ? -EFAULT : 0);
> +    return 0;
>   }
> 
>   long arch_do_domctl(
> 
> 
> Would appear to allow partial success to be reported, while also
> ensuring that remain is not examined when an error is returned.

No, partial success is relevant in other cases, like EAGAIN, but not
EFAULT. If we make this preemptible in the future so that it can
return EAGAIN, we'd need to make sure remain was honoured. Again,
think of it this way: if the first copyin failed, remain would never
have been initialized.

So, since for now the only cause of an unfinished copy from dbg_rw_mem
is EFAULT, we should leave the Xen code above alone and just change
gdbsx.

thanks
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 23:06:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 23:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0fjU-0005xj-5e; Tue, 07 Jan 2014 23:06:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0fjS-0005xc-Ah
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 23:06:46 +0000
Received: from [85.158.137.68:25953] by server-15.bemta-3.messagelabs.com id
	20/2D-11556-5888CC25; Tue, 07 Jan 2014 23:06:45 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389136003!7753167!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22819 invoked from network); 7 Jan 2014 23:06:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 23:06:44 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07N5cR0017757
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 23:05:39 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07N5cmr008444
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 23:05:38 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07N5clK008439; Tue, 7 Jan 2014 23:05:38 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 15:05:37 -0800
Date: Tue, 7 Jan 2014 15:05:36 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140107150536.53a8c225@mantra.us.oracle.com>
In-Reply-To: <1389088824.31766.105.camel@kazak.uk.xensource.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014 10:00:24 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> > On Sat,  4 Jan 2014 12:52:16 -0500
> > Don Slutz <dslutz@verizon.com> wrote:
> > 
> > > The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> > > returned.
> > > 
> > > Without this gdb does not report an error.
> > > 
> > > With this patch and using a 1G hvm domU:
> > > 
> > > (gdb) x/1xh 0x6ae9168b
> > > 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> > > 
> > > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > > ---
> > >  xen/arch/x86/domctl.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > > index ef6c140..4aa751f 100644
> > > --- a/xen/arch/x86/domctl.c
> > > +++ b/xen/arch/x86/domctl.c
> > > @@ -997,8 +997,7 @@ long arch_do_domctl(
> > >              domctl->u.gdbsx_guest_memio.len;
> > >  
> > >          ret = gdbsx_guest_mem_io(domctl->domain,
> > > &domctl->u.gdbsx_guest_memio);
> > > -        if ( !ret )
> > > -           copyback = 1;
> > > +        copyback = 1;
> > >      }
> > >      break;
> > >  
> > 
> > Ooopsy... my thought was that an application should not even look
> > at remain if the hcall/syscall failed, but I forgot that when
> > writing gdbsx itself :). Think of it this way: if the call didn't
> > even make it to Xen, and for some reason the ioctl returned a
> > non-zero rc, then remain would still be zero. So I think we should
> > fix gdbsx instead of here:
> > 
> > xg_write_mem():
> >     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf,
> > buflen))) {
> >         XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
> >                iop->remain, errno, rc);
> 
> Isn't this still using iop->remain on failure which is what you say
> shouldn't be done?

Right, it was just guideline code with a bad cut-and-paste of
XGERR()... so picky :) :)...

Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 23:06:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 23:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0fjU-0005xj-5e; Tue, 07 Jan 2014 23:06:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0fjS-0005xc-Ah
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 23:06:46 +0000
Received: from [85.158.137.68:25953] by server-15.bemta-3.messagelabs.com id
	20/2D-11556-5888CC25; Tue, 07 Jan 2014 23:06:45 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389136003!7753167!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22819 invoked from network); 7 Jan 2014 23:06:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 23:06:44 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07N5cR0017757
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 23:05:39 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07N5cmr008444
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 23:05:38 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07N5clK008439; Tue, 7 Jan 2014 23:05:38 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 15:05:37 -0800
Date: Tue, 7 Jan 2014 15:05:36 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140107150536.53a8c225@mantra.us.oracle.com>
In-Reply-To: <1389088824.31766.105.camel@kazak.uk.xensource.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 7 Jan 2014 10:00:24 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> > On Sat,  4 Jan 2014 12:52:16 -0500
> > Don Slutz <dslutz@verizon.com> wrote:
> > 
> > > The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
> > > returned.
> > > 
> > > Without this gdb does not report an error.
> > > 
> > > With this patch and using a 1G hvm domU:
> > > 
> > > (gdb) x/1xh 0x6ae9168b
> > > 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> > > 
> > > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > > ---
> > >  xen/arch/x86/domctl.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > > index ef6c140..4aa751f 100644
> > > --- a/xen/arch/x86/domctl.c
> > > +++ b/xen/arch/x86/domctl.c
> > > @@ -997,8 +997,7 @@ long arch_do_domctl(
> > >              domctl->u.gdbsx_guest_memio.len;
> > >  
> > >         ret = gdbsx_guest_mem_io(domctl->domain, &domctl->u.gdbsx_guest_memio);
> > > -        if ( !ret )
> > > -           copyback = 1;
> > > +        copyback = 1;
> > >      }
> > >      break;
> > >  
> > 
> > Ooopsy... my thought was that an application should not even look at
> > remain if the hcall/syscall failed, but I forgot that when writing
> > gdbsx itself :). Think of it this way: if the call didn't even make
> > it to Xen, and for some reason the ioctl returned a non-zero rc, then
> > remain would still be zero. So I think we should fix gdbsx instead
> > of here:
> > 
> > xg_write_mem():
> >     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen))) {
> >         XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
> >                iop->remain, errno, rc);
> 
> Isn't this still using iop->remain on failure which is what you say
> shouldn't be done?

Right, it was just guideline code with a bad cut and paste of XGERR()...
so picky :) :)... 

Mukesh
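
Mukesh's point, that remain is only meaningful when the hypercall actually
ran, can be illustrated with a minimal userspace sketch (hypothetical names,
not the gdbsx source):

```c
#include <assert.h>

/* Models the domctl argument: the hypervisor only writes 'remain'
 * when the call actually reaches Xen and executes. */
struct memio { int remain; };

/* Stand-in for _domctl_hcall() failing before it reaches Xen:
 * returns a non-zero rc and never touches *iop. */
static int failing_hcall(struct memio *iop)
{
    (void)iop;
    return -1;
}

/* Buggy pattern: on failure it reports iop->remain, which is still
 * its initial 0, so the caller believes every byte was written. */
static int write_mem_buggy(struct memio *iop)
{
    if (failing_hcall(iop))
        return iop->remain;   /* stale: Xen never set it */
    return iop->remain;
}

/* Fixed pattern (what the v2 change does): on failure report the
 * whole buffer as unwritten, so gdb sees the error. */
static int write_mem_fixed(struct memio *iop, int buflen)
{
    if (failing_hcall(iop))
        return buflen;        /* all bytes remain unwritten */
    return iop->remain;
}
```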

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 07 23:12:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jan 2014 23:12:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0foZ-0006VW-A6; Tue, 07 Jan 2014 23:12:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0foX-0006VR-Re
	for xen-devel@lists.xen.org; Tue, 07 Jan 2014 23:12:02 +0000
Received: from [85.158.137.68:40142] by server-3.bemta-3.messagelabs.com id
	B8/08-10658-1C98CC25; Tue, 07 Jan 2014 23:12:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389136319!4130231!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20916 invoked from network); 7 Jan 2014 23:12:00 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Jan 2014 23:12:00 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s07NAsZY022207
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 7 Jan 2014 23:10:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07NAr1R018240
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 7 Jan 2014 23:10:53 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s07NArSv018232; Tue, 7 Jan 2014 23:10:53 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 15:10:52 -0800
Date: Tue, 7 Jan 2014 15:10:48 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107151048.48ef42f2@mantra.us.oracle.com>
In-Reply-To: <52CC2ABC.3040505@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
	<52CC2ABC.3040505@terremark.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 07 Jan 2014 11:26:36 -0500
Don Slutz <dslutz@verizon.com> wrote:

> On 01/07/14 05:00, Ian Campbell wrote:
> > On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
> >> On Sat,  4 Jan 2014 12:52:16 -0500
> >> Don Slutz <dslutz@verizon.com> wrote:
.....
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -787,8 +787,11 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->gwr = 0;       /* not writing to guest */
> 
>       if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> -        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n",
> +              errno, rc);


Probably would fit in just one line. 
           XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);


> +        return tobuf_len;
> +    }
> 
>       for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>       XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
> @@ -818,8 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>      iop->gwr = 1;       /* writing to guest */
> 
>       if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
> -        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
> +              guestva, errno, rc);
> +        return buflen;
> +    }
>       return iop->remain;
>   }
> 
> 
> works fine and I plan it to be part of v2.

Yes, this is it.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:11:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gjb-0001HF-0V; Wed, 08 Jan 2014 00:10:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gja-0001H3-6b
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:10:58 +0000
Received: from [85.158.139.211:18409] by server-10.bemta-5.messagelabs.com id
	9E/34-01405-1979CC25; Wed, 08 Jan 2014 00:10:57 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389139854!8423993!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18184 invoked from network); 8 Jan 2014 00:10:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90699151"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:10:25 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:10:25 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:09 +0000
Message-ID: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2 0/9] xen-netback: TX grant mapping
	with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A long-known problem of the upstream netback implementation is that on the TX
path (from guest to Dom0) it copies the whole packet from guest memory into
Dom0. That simply became a bottleneck with 10Gb NICs, and generally it's a
huge performance penalty. The classic kernel version of netback used grant
mapping, and to get notified when the page can be unmapped, it used page
destructors. Unfortunately that destructor is not an upstreamable solution.
Ian Campbell's skb fragment destructor patch series [1] tried to solve this
problem, however it seems to be very invasive of the network stack's code,
and therefore hasn't progressed very well.
This patch series uses the SKBTX_DEV_ZEROCOPY flag to tell the stack it needs
to know when the skb is freed up. That is the way KVM solved the same problem,
and based on my initial tests it can do the same for us. Avoiding the extra
copy boosted TX throughput from 6.8 Gbps to 7.9 Gbps (I used a slower
Interlagos box, both Dom0 and guest on upstream kernels, on the same NUMA
node, running iperf 2.0.5, and the remote end was a bare metal box on the
same 10Gb switch).
Based on my investigations the packet only gets copied if it is delivered to
the Dom0 stack, which is due to this [2] patch. That's a bit unfortunate, but
luckily it doesn't cause a major regression for this use case. In the future
we should try to eliminate that copy somehow.
There are a few spinoff tasks which will be addressed in separate patches:
- grant copy the header directly instead of map and memcpy; this should help
  us avoid TLB flushing
- use something other than ballooned pages
- fix grant map to use page->index properly
I will run some more extensive tests, but some basic XenRT tests have already
passed with good results.
I've tried to break it down into smaller patches, with mixed results, so I
welcome suggestions on that part as well:
1: Introduce TX grant map definitions
2: Change TX path from grant copy to mapping
3: Remove old TX grant copy definitions and fix indentations
4: Change RX path for mapped SKB fragments
5: Add stat counters for zerocopy
6: Handle guests with too many frags
7: Add stat counters for frag_list skbs
8: Timeout packets in RX path
9: Aggregate TX unmap operations

v2: I've fixed some smaller things, see the individual patches. I've added a
few new stat counters, and handling for the important use case when an older
guest sends lots of slots. Instead of a delayed copy we now time out packets
on the RX path, based on the assumption that packets shouldn't get stuck
anywhere else. Finally, some unmap batching to avoid too many TLB flushes.

v3: Apart from fixing a few things mentioned in responses, the important
change is to use the hypercall directly for grant [un]mapping, so we can
avoid the m2p override.

[1] http://lwn.net/Articles/491522/
[2] https://lkml.org/lkml/2012/7/20/363

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
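
The SKBTX_DEV_ZEROCOPY mechanism mentioned above boils down to a callback
contract: the driver attaches a ubuf_info descriptor to the skb, and the
stack fires its callback once the last reference to the pages is dropped, at
which point netback can safely unmap the grants. A minimal userspace model of
that contract (illustrative only, not the actual kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the kernel's struct ubuf_info: a callback the
 * stack invokes when the zerocopy pages are no longer referenced. */
struct ubuf_info {
    void (*callback)(struct ubuf_info *ubuf, int zerocopy_success);
    void *ctx;
    unsigned long desc;   /* e.g. index into the pending ring */
};

static int last_freed_idx = -1;

/* Stand-in for xenvif_zerocopy_callback: record which pending slot's
 * grant mapping may now be torn down by the dealloc thread. */
static void zerocopy_callback(struct ubuf_info *ubuf, int success)
{
    (void)success;
    last_freed_idx = (int)ubuf->desc;
}

/* Stand-in for the stack freeing a zerocopy skb: dropping the last
 * reference triggers the attached callback. */
static void free_zerocopy_skb(struct ubuf_info *ubuf)
{
    ubuf->callback(ubuf, 1);
}
```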


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gni-0001TX-3r; Wed, 08 Jan 2014 00:15:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gnf-0001TK-Ty
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:12 +0000
Received: from [85.158.143.35:24859] by server-1.bemta-4.messagelabs.com id
	82/42-02132-F889CC25; Wed, 08 Jan 2014 00:15:11 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389140106!10272325!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26255 invoked from network); 8 Jan 2014 00:15:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88543817"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:09 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:08 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:11 +0000
Message-ID: <1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
	from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch changes the grant copy on the TX path to grant mapping.

v2:
- delete the branch for handling fragmented packets that fit in a
  PKT_PROT_LEN sized first request
- mark the effect of using ballooned pages in a comment
- place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
  before netif_receive_skb, and mark the importance of it
- grab dealloc_lock before __napi_complete to avoid contention with the
  callback's napi_schedule
- handle fragmented packets where first request < PKT_PROT_LEN
- fix up error path when checksum_setup failed
- check for pending grants before teardown, and start complaining if they
  are still there after 10 seconds

v3:
- delete a surplus checking from tx_action
- remove stray line
- squash xenvif_idx_unmap changes into the first patch
- init spinlocks
- call map hypercall directly instead of gnttab_map_refs()
- fix unmapping timeout in xenvif_free()

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   57 +++++++-
 drivers/net/xen-netback/netback.c   |  251 ++++++++++++++---------------------
 2 files changed, 153 insertions(+), 155 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7170f97..3b2b249 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	BUG_ON(skb->dev != dev);
 
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (vif->task == NULL ||
+		vif->dealloc_task == NULL ||
+		!xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->pending_prod = MAX_PENDING_REQS;
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
+	spin_lock_init(&vif->dealloc_lock);
+	spin_lock_init(&vif->response_lock);
+	/* If ballooning is disabled, this will consume real memory, so you
+	 * better enable it. The long term solution would be to use just a
+	 * bunch of valid page descriptors, without dependency on ballooning
+	 */
+	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
+		vif->mmap_pages,
+		false);
+	if (err) {
+		netdev_err(dev, "Could not reserve mmap_pages\n");
+		return NULL;
+	}
+	for (i = 0; i < MAX_PENDING_REQS; i++) {
+		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
+			{ .callback = xenvif_zerocopy_callback,
+			  .ctx = NULL,
+			  .desc = i };
+		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
+	}
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err;
 
 	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&vif->dealloc_wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
@@ -431,6 +452,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err_rx_unbind;
 	}
 
+	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
+				   (void *)vif, "%s-dealloc", vif->dev->name);
+	if (IS_ERR(vif->dealloc_task)) {
+		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		err = PTR_ERR(vif->dealloc_task);
+		goto err_rx_unbind;
+	}
+
 	vif->task = task;
 
 	rtnl_lock();
@@ -443,6 +472,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	rtnl_unlock();
 
 	wake_up_process(vif->task);
+	wake_up_process(vif->dealloc_task);
 
 	return 0;
 
@@ -480,6 +510,11 @@ void xenvif_disconnect(struct xenvif *vif)
 		vif->task = NULL;
 	}
 
+	if (vif->dealloc_task) {
+		kthread_stop(vif->dealloc_task);
+		vif->dealloc_task = NULL;
+	}
+
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -495,6 +530,22 @@ void xenvif_disconnect(struct xenvif *vif)
 
 void xenvif_free(struct xenvif *vif)
 {
+	int i, unmap_timeout = 0;
+
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
+			unmap_timeout++;
+			schedule_timeout(msecs_to_jiffies(1000));
+			if (unmap_timeout > 9 &&
+				net_ratelimit())
+				netdev_err(vif->dev,
+					"Page still granted! Index: %x\n", i);
+			i = -1;
+		}
+	}
+
+	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7c241f9..53d7e78 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -644,9 +644,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -784,10 +787,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
 
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+					       struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -808,83 +811,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -906,9 +838,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+			NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				"Stale mapped handle! pending_idx %x handle %x\n",
+				pending_idx, vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
+			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -933,18 +877,26 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					"Stale mapped handle! pending_idx %x handle %x\n",
+					pending_idx,
+					vif->grant_tx_handle[pending_idx]);
+				xenvif_fatal_tx_err(vif);
+			}
+			set_phys_to_machine(idx_to_pfn(vif, pending_idx),
+				FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -957,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -972,7 +926,8 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb,
+		u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -986,6 +941,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous */
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -993,10 +959,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1358,7 +1329,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1466,30 +1437,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1518,17 +1469,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1552,12 +1503,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1567,7 +1523,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+			skb,
+			skb_shinfo(skb)->destructor_arg ?
+					pending_idx :
+					INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1581,6 +1541,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1606,6 +1568,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1715,7 +1685,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1725,7 +1695,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
+			vif->tx_map_ops,
+			nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1736,45 +1709,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:15 2014
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:10 +0000
Message-ID: <1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX grant
	map definitions

This patch contains the new definitions necessary for grant mapping.

v2:
- move unmapping to a separate thread. The NAPI instance has to be scheduled
  even from thread context, which can cause huge delays
- this unfortunately makes struct xenvif bigger
- store grant handle after checking validity

v3:
- fix comment in xenvif_tx_dealloc_action()
- call the unmap hypercall directly instead of gnttab_unmap_refs(), which does
  an unnecessary m2p_override. Also remove the pages_to_[un]map members
- BUG() if grant_tx_handle is corrupted

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/common.h    |   25 ++++++
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  163 +++++++++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index d218ccd..f1071e3 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -79,6 +79,11 @@ struct pending_tx_info {
 				  * if it is head of one or more tx
 				  * reqs
 				  */
+	/* callback data for released SKBs. The	callback is always
+	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
+	 * contains the pending_idx
+	 */
+	struct ubuf_info callback_struct;
 };
 
 #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
@@ -108,6 +113,8 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
+#define NETBACK_INVALID_HANDLE -1
+
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
@@ -126,13 +133,23 @@ struct xenvif {
 	pending_ring_idx_t pending_cons;
 	u16 pending_ring[MAX_PENDING_REQS];
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
 	/* Coalescing tx requests before copying makes number of grant
 	 * copy ops greater or equal to number of slots required. In
 	 * worst case a tx request consumes 2 gnttab_copy.
 	 */
 	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
+	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
+	spinlock_t dealloc_lock;
+	spinlock_t response_lock;
+	pending_ring_idx_t dealloc_prod;
+	pending_ring_idx_t dealloc_cons;
+	u16 dealloc_ring[MAX_PENDING_REQS];
+	struct task_struct *dealloc_task;
+	wait_queue_head_t dealloc_wq;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
@@ -221,6 +238,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
 int xenvif_kthread(void *data);
 void xenvif_kick_thread(struct xenvif *vif);
 
+int xenvif_dealloc_kthread(void *data);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
@@ -228,6 +247,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
 
 void xenvif_stop_queue(struct xenvif *vif);
 
+/* Callback from stack when TX packet can be released */
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+
+/* Unmap a pending page, usually has to be called before xenvif_idx_release */
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 8d6def2..7170f97 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -37,6 +37,7 @@
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
+#include <xen/balloon.h>
 
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index addfe1d1..7c241f9 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -771,6 +771,19 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	return page;
 }
 
+static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
+	       struct xen_netif_tx_request *txp,
+	       struct gnttab_map_grant_ref *gop)
+{
+	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  txp->gref, vif->domid);
+
+	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
+	       sizeof(*txp));
+
+}
+
 static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -1599,6 +1612,105 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	return work_done;
 }
 
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
+{
+	unsigned long flags;
+	pending_ring_idx_t index;
+	u16 pending_idx = ubuf->desc;
+	struct pending_tx_info *temp =
+		container_of(ubuf, struct pending_tx_info, callback_struct);
+	struct xenvif *vif =
+		container_of(temp - pending_idx, struct xenvif,
+			pending_tx_info[0]);
+
+	spin_lock_irqsave(&vif->dealloc_lock, flags);
+	do {
+		pending_idx = ubuf->desc;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+		index = pending_index(vif->dealloc_prod);
+		vif->dealloc_ring[index] = pending_idx;
+		/* Sync with xenvif_tx_action_dealloc:
+		 * insert idx then incr producer.
+		 */
+		smp_wmb();
+		vif->dealloc_prod++;
+	} while (ubuf);
+	wake_up(&vif->dealloc_wq);
+	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+}
+
+static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
+{
+	struct gnttab_unmap_grant_ref *gop;
+	pending_ring_idx_t dc, dp;
+	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
+	unsigned int i = 0;
+
+	dc = vif->dealloc_cons;
+	gop = vif->tx_unmap_ops;
+
+	/* Free up any grants we have finished using */
+	do {
+		dp = vif->dealloc_prod;
+
+		/* Ensure we see all indices enqueued by all
+		 * xenvif_zerocopy_callback().
+		 */
+		smp_rmb();
+
+		while (dc != dp) {
+			pending_idx =
+				vif->dealloc_ring[pending_index(dc++)];
+
+			/* Already unmapped? */
+			if (vif->grant_tx_handle[pending_idx] ==
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					"Trying to unmap invalid handle! "
+					"pending_idx: %x\n", pending_idx);
+				continue;
+			}
+
+			pending_idx_release[gop-vif->tx_unmap_ops] =
+				pending_idx;
+			gnttab_set_unmap_op(gop,
+					idx_to_kaddr(vif, pending_idx),
+					GNTMAP_host_map,
+					vif->grant_tx_handle[pending_idx]);
+			vif->grant_tx_handle[pending_idx] =
+				NETBACK_INVALID_HANDLE;
+			++gop;
+		}
+
+	} while (dp != vif->dealloc_prod);
+
+	vif->dealloc_cons = dc;
+
+	if (gop - vif->tx_unmap_ops > 0) {
+		int ret;
+		ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+				vif->tx_unmap_ops,
+				gop - vif->tx_unmap_ops);
+		if (ret) {
+			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
+				gop - vif->tx_unmap_ops, ret);
+			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
+				netdev_err(vif->dev,
+					" host_addr: %llx handle: %x status: %d\n",
+					gop[i].host_addr,
+					gop[i].handle,
+					gop[i].status);
+			}
+			BUG();
+		}
+	}
+
+	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
+		xenvif_idx_release(vif, pending_idx_release[i],
+				XEN_NETIF_RSP_OKAY);
+}
+
+
 /* Called after netfront has transmitted */
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
@@ -1665,6 +1777,27 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 	vif->mmap_pages[pending_idx] = NULL;
 }
 
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
+{
+	int ret;
+	struct gnttab_unmap_grant_ref tx_unmap_op;
+
+	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
+		netdev_err(vif->dev,
+				"Trying to unmap invalid handle! pending_idx: %x\n",
+				pending_idx);
+		return;
+	}
+	gnttab_set_unmap_op(&tx_unmap_op,
+			idx_to_kaddr(vif, pending_idx),
+			GNTMAP_host_map,
+			vif->grant_tx_handle[pending_idx]);
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+			&tx_unmap_op,
+			1);
+	BUG_ON(ret);
+	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
+}
 
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
@@ -1726,6 +1859,14 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static inline int tx_dealloc_work_todo(struct xenvif *vif)
+{
+	if (vif->dealloc_cons != vif->dealloc_prod)
+		return 1;
+
+	return 0;
+}
+
 void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1814,6 +1955,28 @@ int xenvif_kthread(void *data)
 	return 0;
 }
 
+int xenvif_dealloc_kthread(void *data)
+{
+	struct xenvif *vif = data;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->dealloc_wq,
+					tx_dealloc_work_todo(vif) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			break;
+
+		xenvif_tx_dealloc_action(vif);
+		cond_resched();
+	}
+
+	/* Unmap anything remaining */
+	if (tx_dealloc_work_todo(vif))
+		xenvif_tx_dealloc_action(vif);
+
+	return 0;
+}
+
 static int __init netback_init(void)
 {
 	int rc = 0;


From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:15 2014
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:11 +0000
Message-ID: <1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
	from grant copy to mapping

This patch changes the TX path from grant copy to grant mapping.

v2:
- delete the branch for handling fragmented packets whose first request fits
  into PKT_PROT_LEN
- mark the effect of using ballooned pages in a comment
- move the setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY to right
  before netif_receive_skb, and document why this matters
- grab dealloc_lock before __napi_complete to avoid contention with the
  callback's napi_schedule
- handle fragmented packets where first request < PKT_PROT_LEN
- fix up error path when checksum_setup failed
- check for pending grants before teardown, and start complaining if they are
  still there after 10 seconds

v3:
- delete a surplus check from tx_action
- remove stray line
- squash xenvif_idx_unmap changes into the first patch
- init spinlocks
- call the map hypercall directly instead of gnttab_map_refs()
- fix unmapping timeout in xenvif_free()

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   57 +++++++-
 drivers/net/xen-netback/netback.c   |  251 ++++++++++++++---------------------
 2 files changed, 153 insertions(+), 155 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7170f97..3b2b249 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	BUG_ON(skb->dev != dev);
 
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (vif->task == NULL ||
+		vif->dealloc_task == NULL ||
+		!xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->pending_prod = MAX_PENDING_REQS;
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
+	spin_lock_init(&vif->dealloc_lock);
+	spin_lock_init(&vif->response_lock);
+	/* If ballooning is disabled, this will consume real memory, so you
+	 * better enable it. The long term solution would be to use just a
+	 * bunch of valid page descriptors, without dependency on ballooning
+	 */
+	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
+		vif->mmap_pages,
+		false);
+	if (err) {
+		netdev_err(dev, "Could not reserve mmap_pages\n");
+		return NULL;
+	}
+	for (i = 0; i < MAX_PENDING_REQS; i++) {
+		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
+			{ .callback = xenvif_zerocopy_callback,
+			  .ctx = NULL,
+			  .desc = i };
+		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
+	}
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err;
 
 	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&vif->dealloc_wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
@@ -431,6 +452,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err_rx_unbind;
 	}
 
+	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
+				   (void *)vif, "%s-dealloc", vif->dev->name);
+	if (IS_ERR(vif->dealloc_task)) {
+		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		err = PTR_ERR(vif->dealloc_task);
+		goto err_rx_unbind;
+	}
+
 	vif->task = task;
 
 	rtnl_lock();
@@ -443,6 +472,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	rtnl_unlock();
 
 	wake_up_process(vif->task);
+	wake_up_process(vif->dealloc_task);
 
 	return 0;
 
@@ -480,6 +510,11 @@ void xenvif_disconnect(struct xenvif *vif)
 		vif->task = NULL;
 	}
 
+	if (vif->dealloc_task) {
+		kthread_stop(vif->dealloc_task);
+		vif->dealloc_task = NULL;
+	}
+
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -495,6 +530,22 @@ void xenvif_disconnect(struct xenvif *vif)
 
 void xenvif_free(struct xenvif *vif)
 {
+	int i, unmap_timeout = 0;
+
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
+			unmap_timeout++;
+			schedule_timeout(msecs_to_jiffies(1000));
+			if (unmap_timeout > 9 &&
+				net_ratelimit())
+				netdev_err(vif->dev,
+					"Page still granted! Index: %x\n", i);
+			i = -1;
+		}
+	}
+
+	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7c241f9..53d7e78 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -644,9 +644,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -784,10 +787,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
 
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+					       struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -808,83 +811,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -906,9 +838,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+			NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				"Stale mapped handle! pending_idx %x handle %x\n",
+				pending_idx, vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
+			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -933,18 +877,26 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					"Stale mapped handle! pending_idx %x handle %x\n",
+					pending_idx,
+					vif->grant_tx_handle[pending_idx]);
+				xenvif_fatal_tx_err(vif);
+			}
+			set_phys_to_machine(idx_to_pfn(vif, pending_idx),
+				FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -957,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -972,7 +926,8 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb,
+		u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -986,6 +941,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous*/
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -993,10 +959,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1358,7 +1329,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1466,30 +1437,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1518,17 +1469,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1552,12 +1503,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1567,7 +1523,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+			skb,
+			skb_shinfo(skb)->destructor_arg ?
+					pending_idx :
+					INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1581,6 +1541,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1606,6 +1568,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1715,7 +1685,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1725,7 +1695,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
+			vif->tx_map_ops,
+			nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1736,45 +1709,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gne-0001TD-N3; Wed, 08 Jan 2014 00:15:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gnd-0001T6-FP
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:09 +0000
Received: from [85.158.143.35:24787] by server-1.bemta-4.messagelabs.com id
	8C/32-02132-C889CC25; Wed, 08 Jan 2014 00:15:08 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389140106!10272325!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26197 invoked from network); 8 Jan 2014 00:15:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88543770"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:06 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:05 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:10 +0000
Message-ID: <1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX grant
	map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the new definitions necessary for grant mapping.

v2:
- move unmapping to a separate thread. The NAPI instance has to be scheduled
  even from thread context, which can cause huge delays
- that unfortunately makes struct xenvif bigger
- store grant handle after checking validity

v3:
- fix comment in xenvif_tx_dealloc_action()
- call unmap hypercall directly instead of gnttab_unmap_refs(), which does
  unnecessary m2p_override. Also remove pages_to_[un]map members
- BUG() if grant_tx_handle corrupted
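
A subtle piece of these definitions is how the zerocopy callback gets back to the vif: the ubuf_info is embedded in a pending_tx_info slot, the slots form a flat array inside struct xenvif, and ubuf->desc holds the slot's index, so one container_of() recovers the slot and a second one, after stepping back desc elements, recovers the vif. A compilable sketch of that layout trick follows; the struct bodies are trimmed stand-ins, not the real netback definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Same pointer arithmetic as the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct ubuf_info { unsigned short desc; };	/* desc = pending_idx */

struct pending_tx_info {
	int req;			/* placeholder for the tx request */
	struct ubuf_info callback_struct;
};

struct xenvif_demo {
	struct pending_tx_info pending_tx_info[4];
};

static struct xenvif_demo *ubuf_to_vif(struct ubuf_info *ubuf)
{
	unsigned short pending_idx = ubuf->desc;
	struct pending_tx_info *slot =
		container_of(ubuf, struct pending_tx_info, callback_struct);

	/* slot - pending_idx points at &vif->pending_tx_info[0] */
	return container_of(slot - pending_idx, struct xenvif_demo,
			    pending_tx_info[0]);
}
```

This works only because desc is kept equal to the slot's array index, which is exactly what the per-slot initialisation in xenvif_alloc establishes.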

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/common.h    |   25 ++++++
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  163 +++++++++++++++++++++++++++++++++++
 3 files changed, 189 insertions(+)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index d218ccd..f1071e3 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -79,6 +79,11 @@ struct pending_tx_info {
 				  * if it is head of one or more tx
 				  * reqs
 				  */
+	/* callback data for released SKBs. The	callback is always
+	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
+	 * contains the pending_idx
+	 */
+	struct ubuf_info callback_struct;
 };
 
 #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
@@ -108,6 +113,8 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
+#define NETBACK_INVALID_HANDLE -1
+
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
@@ -126,13 +133,23 @@ struct xenvif {
 	pending_ring_idx_t pending_cons;
 	u16 pending_ring[MAX_PENDING_REQS];
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
 	/* Coalescing tx requests before copying makes number of grant
 	 * copy ops greater or equal to number of slots required. In
 	 * worst case a tx request consumes 2 gnttab_copy.
 	 */
 	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
+	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
+	spinlock_t dealloc_lock;
+	spinlock_t response_lock;
+	pending_ring_idx_t dealloc_prod;
+	pending_ring_idx_t dealloc_cons;
+	u16 dealloc_ring[MAX_PENDING_REQS];
+	struct task_struct *dealloc_task;
+	wait_queue_head_t dealloc_wq;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
@@ -221,6 +238,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
 int xenvif_kthread(void *data);
 void xenvif_kick_thread(struct xenvif *vif);
 
+int xenvif_dealloc_kthread(void *data);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
@@ -228,6 +247,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
 
 void xenvif_stop_queue(struct xenvif *vif);
 
+/* Callback from stack when TX packet can be released */
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+
+/* Unmap a pending page, usually has to be called before xenvif_idx_release */
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 8d6def2..7170f97 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -37,6 +37,7 @@
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
+#include <xen/balloon.h>
 
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index addfe1d1..7c241f9 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -771,6 +771,19 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	return page;
 }
 
+static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
+	       struct xen_netif_tx_request *txp,
+	       struct gnttab_map_grant_ref *gop)
+{
+	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  txp->gref, vif->domid);
+
+	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
+	       sizeof(*txp));
+
+}
+
 static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -1599,6 +1612,105 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	return work_done;
 }
 
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
+{
+	unsigned long flags;
+	pending_ring_idx_t index;
+	u16 pending_idx = ubuf->desc;
+	struct pending_tx_info *temp =
+		container_of(ubuf, struct pending_tx_info, callback_struct);
+	struct xenvif *vif =
+		container_of(temp - pending_idx, struct xenvif,
+			pending_tx_info[0]);
+
+	spin_lock_irqsave(&vif->dealloc_lock, flags);
+	do {
+		pending_idx = ubuf->desc;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+		index = pending_index(vif->dealloc_prod);
+		vif->dealloc_ring[index] = pending_idx;
+		/* Sync with xenvif_tx_action_dealloc:
+		 * insert idx then incr producer.
+		 */
+		smp_wmb();
+		vif->dealloc_prod++;
+	} while (ubuf);
+	wake_up(&vif->dealloc_wq);
+	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+}
+
+static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
+{
+	struct gnttab_unmap_grant_ref *gop;
+	pending_ring_idx_t dc, dp;
+	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
+	unsigned int i = 0;
+
+	dc = vif->dealloc_cons;
+	gop = vif->tx_unmap_ops;
+
+	/* Free up any grants we have finished using */
+	do {
+		dp = vif->dealloc_prod;
+
+		/* Ensure we see all indices enqueued by all
+		 * xenvif_zerocopy_callback().
+		 */
+		smp_rmb();
+
+		while (dc != dp) {
+			pending_idx =
+				vif->dealloc_ring[pending_index(dc++)];
+
+			/* Already unmapped? */
+			if (vif->grant_tx_handle[pending_idx] ==
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					"Trying to unmap invalid handle! "
+					"pending_idx: %x\n", pending_idx);
+				continue;
+			}
+
+			pending_idx_release[gop-vif->tx_unmap_ops] =
+				pending_idx;
+			gnttab_set_unmap_op(gop,
+					idx_to_kaddr(vif, pending_idx),
+					GNTMAP_host_map,
+					vif->grant_tx_handle[pending_idx]);
+			vif->grant_tx_handle[pending_idx] =
+				NETBACK_INVALID_HANDLE;
+			++gop;
+		}
+
+	} while (dp != vif->dealloc_prod);
+
+	vif->dealloc_cons = dc;
+
+	if (gop - vif->tx_unmap_ops > 0) {
+		int ret;
+		ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+				vif->tx_unmap_ops,
+				gop - vif->tx_unmap_ops);
+		if (ret) {
+			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
+				gop - vif->tx_unmap_ops, ret);
+			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
+				netdev_err(vif->dev,
+					" host_addr: %llx handle: %x status: %d\n",
+					gop[i].host_addr,
+					gop[i].handle,
+					gop[i].status);
+			}
+			BUG();
+		}
+	}
+
+	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
+		xenvif_idx_release(vif, pending_idx_release[i],
+				XEN_NETIF_RSP_OKAY);
+}
+
+
 /* Called after netfront has transmitted */
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
@@ -1665,6 +1777,27 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 	vif->mmap_pages[pending_idx] = NULL;
 }
 
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
+{
+	int ret;
+	struct gnttab_unmap_grant_ref tx_unmap_op;
+
+	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
+		netdev_err(vif->dev,
+				"Trying to unmap invalid handle! pending_idx: %x\n",
+				pending_idx);
+		return;
+	}
+	gnttab_set_unmap_op(&tx_unmap_op,
+			idx_to_kaddr(vif, pending_idx),
+			GNTMAP_host_map,
+			vif->grant_tx_handle[pending_idx]);
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+			&tx_unmap_op,
+			1);
+	BUG_ON(ret);
+	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
+}
 
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
@@ -1726,6 +1859,14 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static inline int tx_dealloc_work_todo(struct xenvif *vif)
+{
+	if (vif->dealloc_cons != vif->dealloc_prod)
+		return 1;
+
+	return 0;
+}
+
 void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1814,6 +1955,28 @@ int xenvif_kthread(void *data)
 	return 0;
 }
 
+int xenvif_dealloc_kthread(void *data)
+{
+	struct xenvif *vif = data;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->dealloc_wq,
+					tx_dealloc_work_todo(vif) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			break;
+
+		xenvif_tx_dealloc_action(vif);
+		cond_resched();
+	}
+
+	/* Unmap anything remaining*/
+	if (tx_dealloc_work_todo(vif))
+		xenvif_tx_dealloc_action(vif);
+
+	return 0;
+}
+
 static int __init netback_init(void)
 {
 	int rc = 0;


From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gnj-0001Ui-Qp; Wed, 08 Jan 2014 00:15:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gni-0001TW-Bv
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:14 +0000
Received: from [85.158.143.35:50023] by server-2.bemta-4.messagelabs.com id
	42/65-11386-1989CC25; Wed, 08 Jan 2014 00:15:13 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389140106!10272325!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26416 invoked from network); 8 Jan 2014 00:15:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88543837"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:12 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:11 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:12 +0000
Message-ID: <1389139818-24458-4-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 3/9] xen-netback: Remove old TX
	grant copy definitions and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These definitions became obsolete with grant mapping. I intentionally left
the indentation untouched in the previous patches to keep their diffs
readable; this patch fixes it up.

v2:
- move the indentation fixup patch here

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h  |   37 +------------------
 drivers/net/xen-netback/netback.c |   72 ++++++++-----------------------------
 2 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 33cb12c..f286879 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -46,39 +46,9 @@
 #include <xen/xenbus.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
-/* For the head field in pending_tx_info: it is used to indicate
- * whether this tx info is the head of one or more coalesced requests.
- *
- * When head != INVALID_PENDING_RING_IDX, it means the start of a new
- * tx requests queue and the end of previous queue.
- *
- * An example sequence of head fields (I = INVALID_PENDING_RING_IDX):
- *
- * ...|0 I I I|5 I|9 I I I|...
- * -->|<-INUSE----------------
- *
- * After consuming the first slot(s) we have:
- *
- * ...|V V V V|5 I|9 I I I|...
- * -----FREE->|<-INUSE--------
- *
- * where V stands for "valid pending ring index". Any number other
- * than INVALID_PENDING_RING_IDX is OK. These entries are considered
- * free and can contain any number other than
- * INVALID_PENDING_RING_IDX. In practice we use 0.
- *
- * The in use non-INVALID_PENDING_RING_IDX (say 0, 5 and 9 in the
- * above example) number is the index into pending_tx_info and
- * mmap_pages arrays.
- */
 struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
+	struct xen_netif_tx_request req; /* tx request */
 	/* callback data for released SKBs. The	callback is always
 	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
 	 * contains the pending_idx
@@ -128,11 +98,6 @@ struct xenvif {
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 20352be..88a0fad 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,16 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-/*
- * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
- * one or more merged tx requests, otherwise it is the continuation of
- * previous tx request.
- */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
-{
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
-}
-
 static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status);
 
@@ -762,19 +752,6 @@ static int xenvif_count_requests(struct xenvif *vif,
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
-				      u16 pending_idx)
-{
-	struct page *page;
-
-	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-	if (!page)
-		return NULL;
-	vif->mmap_pages[pending_idx] = page;
-
-	return page;
-}
-
 static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
 	       struct xen_netif_tx_request *txp,
 	       struct gnttab_map_grant_ref *gop)
@@ -797,13 +774,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
 	u16 pending_idx = *((u16 *)skb->data);
-	u16 head_idx = 0;
-	int slot, start;
-	struct page *page;
-	pending_ring_idx_t index, start_idx = 0;
-	uint16_t dst_offset;
+	int start;
+	pending_ring_idx_t index;
 	unsigned int nr_slots;
-	struct pending_tx_info *first = NULL;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
@@ -815,8 +788,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
 	     shinfo->nr_frags++, txp++, gop++) {
-				index = pending_index(vif->pending_cons++);
-				pending_idx = vif->pending_ring[index];
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
 		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
@@ -824,18 +797,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
 	return gop;
-err:
-	/* Unwind, freeing all pages and sending error responses. */
-	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
-				frag_get_pending_idx(&frags[shinfo->nr_frags]),
-				XEN_NETIF_RSP_ERROR);
-	}
-	/* The head too, if necessary. */
-	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
-
-	return NULL;
 }
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
@@ -848,7 +809,6 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
-	u16 peek; /* peek into next tx request */
 
 	/* Check status of header. */
 	err = gop->status;
@@ -870,14 +830,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
 		tx_info = &vif->pending_tx_info[pending_idx];
-		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-			newerr = (++gop)->status;
+		newerr = (++gop)->status;
 
 		if (likely(!newerr)) {
 			if (vif->grant_tx_handle[pending_idx] !=
@@ -1343,7 +1301,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		(skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
-		struct page *page;
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
 		u16 pending_idx;
 		RING_IDX idx;
@@ -1705,18 +1662,17 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
-	u16 peek; /* peek into next tx request */
 	unsigned long flags;
 
-		pending_tx_info = &vif->pending_tx_info[pending_idx];
-		spin_lock_irqsave(&vif->response_lock, flags);
-		make_tx_response(vif, &pending_tx_info->req, status);
-		index = pending_index(vif->pending_prod);
-		vif->pending_ring[index] = pending_idx;
-		/* TX shouldn't use the index before we give it back here */
-		mb();
-		vif->pending_prod++;
-		spin_unlock_irqrestore(&vif->response_lock, flags);
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	spin_lock_irqsave(&vif->response_lock, flags);
+	make_tx_response(vif, &pending_tx_info->req, status);
+	index = pending_index(vif->pending_prod);
+	vif->pending_ring[index] = pending_idx;
+	/* TX shouldn't use the index before we give it back here */
+	mb();
+	vif->pending_prod++;
+	spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gnn-0001WQ-BK; Wed, 08 Jan 2014 00:15:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gnm-0001Vj-3m
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:18 +0000
Received: from [85.158.143.35:50123] by server-2.bemta-4.messagelabs.com id
	49/65-11386-5989CC25; Wed, 08 Jan 2014 00:15:17 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389140115!10282468!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17986 invoked from network); 8 Jan 2014 00:15:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700249"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:15 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:14 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:13 +0000
Message-ID: <1389139818-24458-5-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 4/9] xen-netback: Change RX path for
	mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The RX path needs to know whether the SKB fragments are stored on pages
granted by another domain.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/netback.c |   46 +++++++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 10d0cf0..e070475 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -322,7 +322,9 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
 static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
-				 unsigned long offset, int *head)
+				 unsigned long offset, int *head,
+				 struct xenvif *foreign_vif,
+				 grant_ref_t foreign_gref)
 {
 	struct gnttab_copy *copy_gop;
 	struct xenvif_rx_meta *meta;
@@ -364,8 +366,15 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->flags = GNTCOPY_dest_gref;
 		copy_gop->len = bytes;
 
-		copy_gop->source.domid = DOMID_SELF;
-		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
+		if (foreign_vif) {
+			copy_gop->source.domid = foreign_vif->domid;
+			copy_gop->source.u.ref = foreign_gref;
+			copy_gop->flags |= GNTCOPY_source_gref;
+		} else {
+			copy_gop->source.domid = DOMID_SELF;
+			copy_gop->source.u.gmfn =
+				virt_to_mfn(page_address(page));
+		}
 		copy_gop->source.offset = offset;
 
 		copy_gop->dest.domid = vif->domid;
@@ -426,6 +435,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	int old_meta_prod;
 	int gso_type;
 	int gso_size;
+	struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
+	grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
+	struct xenvif *foreign_vif = NULL;
 
 	old_meta_prod = npo->meta_prod;
 
@@ -466,6 +478,26 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	npo->copy_off = 0;
 	npo->copy_gref = req->gref;
 
+	if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
+		 (ubuf->callback == &xenvif_zerocopy_callback)) {
+		u16 pending_idx = ubuf->desc;
+		int i = 0;
+		struct pending_tx_info *temp =
+			container_of(ubuf,
+				struct pending_tx_info,
+				callback_struct);
+		foreign_vif =
+			container_of(temp - pending_idx,
+				struct xenvif,
+				pending_tx_info[0]);
+		do {
+			pending_idx = ubuf->desc;
+			foreign_grefs[i++] =
+				foreign_vif->pending_tx_info[pending_idx].req.gref;
+			ubuf = (struct ubuf_info *) ubuf->ctx;
+		} while (ubuf);
+	}
+
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
 		unsigned int offset = offset_in_page(data);
@@ -475,7 +507,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 			len = skb_tail_pointer(skb) - data;
 
 		xenvif_gop_frag_copy(vif, skb, npo,
-				     virt_to_page(data), len, offset, &head);
+				     virt_to_page(data), len, offset, &head,
+				     NULL,
+				     0);
 		data += len;
 	}
 
@@ -484,7 +518,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
-				     &head);
+				     &head,
+				     foreign_vif,
+				     foreign_grefs[i]);
 	}
 
 	return npo->meta_prod - old_meta_prod;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gno-0001XN-PB; Wed, 08 Jan 2014 00:15:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gno-0001Wc-28
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:20 +0000
Received: from [85.158.143.35:50175] by server-3.bemta-4.messagelabs.com id
	05/11-32360-7989CC25; Wed, 08 Jan 2014 00:15:19 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389140106!10272325!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26744 invoked from network); 8 Jan 2014 00:15:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88543860"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:18 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:17 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:14 +0000
Message-ID: <1389139818-24458-6-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 5/9] xen-netback: Add stat counters
	for zerocopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the buffers had to be copied. They also
help find out whether packets are leaked: if "sent != success + fail", some
packets were probably never freed up properly.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    3 +++
 drivers/net/xen-netback/interface.c |   15 +++++++++++++++
 drivers/net/xen-netback/netback.c   |    9 ++++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 419e63c..e3c28ff 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -155,6 +155,9 @@ struct xenvif {
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
+	unsigned long tx_zerocopy_sent;
+	unsigned long tx_zerocopy_success;
+	unsigned long tx_zerocopy_fail;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index af5216f..75fe683 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -239,6 +239,21 @@ static const struct xenvif_stat {
 		"rx_gso_checksum_fixup",
 		offsetof(struct xenvif, rx_gso_checksum_fixup)
 	},
+	/* If (sent != success + fail), there are probably packets never
+	 * freed up properly!
+	 */
+	{
+		"tx_zerocopy_sent",
+		offsetof(struct xenvif, tx_zerocopy_sent),
+	},
+	{
+		"tx_zerocopy_success",
+		offsetof(struct xenvif, tx_zerocopy_success),
+	},
+	{
+		"tx_zerocopy_fail",
+		offsetof(struct xenvif, tx_zerocopy_fail)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a1b03e4..e2dd565 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1611,8 +1611,10 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 		 * skb_copy_ubufs while we are still in control of the skb. E.g.
 		 * the __pskb_pull_tail earlier can do such thing.
 		 */
-		if (skb_shinfo(skb)->destructor_arg)
+		if (skb_shinfo(skb)->destructor_arg) {
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent++;
+		}
 
 		netif_receive_skb(skb);
 	}
@@ -1645,6 +1647,11 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
 		napi_schedule(&vif->napi);
 	} while (ubuf);
 	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+
+	if (likely(zerocopy_success))
+		vif->tx_zerocopy_success++;
+	else
+		vif->tx_zerocopy_fail++;
 }
 
 static inline void xenvif_tx_action_dealloc(struct xenvif *vif)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gnv-0001aK-Tq; Wed, 08 Jan 2014 00:15:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gns-0001Ya-0v
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:24 +0000
Received: from [85.158.143.35:25231] by server-1.bemta-4.messagelabs.com id
	69/52-02132-A989CC25; Wed, 08 Jan 2014 00:15:22 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389140115!10282468!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18184 invoked from network); 8 Jan 2014 00:15:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700267"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:21 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:20 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:15 +0000
Message-ID: <1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests with
	too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The Xen network protocol had an implicit dependency on MAX_SKB_FRAGS. Netback has
to handle guests sending up to XEN_NETBK_LEGACY_SLOTS_MAX slots. To achieve that:
- create a new skb
- map the leftover slots to its frags (it has no linear buffer!)
- chain it to the previous skb through skb_shinfo(skb)->frag_list
- map them
- copy everything into a brand new skb and send it to the stack
- unmap the two old skbs' pages

v3:
- adding extra check for frag number
- consolidate alloc_skb's into xenvif_alloc_skb()
- BUG_ON(frag_overflow > MAX_SKB_FRAGS)

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/netback.c |  115 +++++++++++++++++++++++++++++++++----
 1 file changed, 105 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index ea1e27d..3796cb3 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -800,6 +800,19 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
 
 }
 
+static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
+{
+	struct sk_buff *skb = alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
+			GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(skb == NULL))
+		return NULL;
+
+	/* Packets passed to netif_rx() must have some headroom. */
+	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+
+	return skb;
+}
+
 static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -810,11 +823,16 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	u16 pending_idx = *((u16 *)skb->data);
 	int start;
 	pending_ring_idx_t index;
-	unsigned int nr_slots;
+	unsigned int nr_slots, frag_overflow = 0;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
 	 */
+	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
+		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
+		BUG_ON(frag_overflow > MAX_SKB_FRAGS);
+		shinfo->nr_frags = MAX_SKB_FRAGS;
+	}
 	nr_slots = shinfo->nr_frags;
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -830,6 +848,29 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
+	if (frag_overflow) {
+		struct sk_buff *nskb = xenvif_alloc_skb(0);
+		if (unlikely(nskb == NULL)) {
+			netdev_err(vif->dev,
+				   "Can't allocate the frag_list skb.\n");
+			return NULL;
+		}
+
+		shinfo = skb_shinfo(nskb);
+		frags = shinfo->frags;
+
+		for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
+		     shinfo->nr_frags++, txp++, gop++) {
+			index = pending_index(vif->pending_cons++);
+			pending_idx = vif->pending_ring[index];
+			xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+			frag_set_pending_idx(&frags[shinfo->nr_frags],
+				pending_idx);
+		}
+
+		skb_shinfo(skb)->frag_list = nskb;
+	}
+
 	return gop;
 }
 
@@ -843,6 +884,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
+	struct sk_buff *first_skb = NULL;
 
 	/* Check status of header. */
 	err = gop->status;
@@ -862,6 +904,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
+check_frags:
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
 
@@ -896,11 +939,20 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
 			continue;
-
 		/* First error: invalidate header and preceding fragments. */
-		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_unmap(vif, pending_idx);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		if (!first_skb) {
+			pending_idx = *((u16 *)skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+				pending_idx,
+				XEN_NETIF_RSP_OKAY);
+		} else {
+			pending_idx = *((u16 *)first_skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+				pending_idx,
+				XEN_NETIF_RSP_OKAY);
+		}
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
 			xenvif_idx_unmap(vif, pending_idx);
@@ -912,6 +964,32 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		err = newerr;
 	}
 
+	if (shinfo->frag_list) {
+		first_skb = skb;
+		skb = shinfo->frag_list;
+		shinfo = skb_shinfo(skb);
+		nr_frags = shinfo->nr_frags;
+		start = 0;
+
+		goto check_frags;
+	}
+
+	/* There was a mapping error in the frag_list skb. We have to unmap
+	 * the first skb's frags
+	 */
+	if (first_skb && err) {
+		int j;
+		shinfo = skb_shinfo(first_skb);
+		pending_idx = *((u16 *)first_skb->data);
+		start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
+		for (j = start; j < shinfo->nr_frags; j++) {
+			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif, pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		}
+	}
+
 	*gopp = gop + 1;
 	return err;
 }
@@ -1403,8 +1481,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
 			PKT_PROT_LEN : txreq.size;
 
-		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
-				GFP_ATOMIC | __GFP_NOWARN);
+		skb = xenvif_alloc_skb(data_len);
 		if (unlikely(skb == NULL)) {
 			netdev_dbg(vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
@@ -1412,9 +1489,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			break;
 		}
 
-		/* Packets passed to netif_rx() must have some headroom. */
-		skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
-
 		if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
@@ -1476,6 +1550,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
+		struct sk_buff *nskb = NULL;
 
 		pending_idx = *((u16 *)skb->data);
 		txp = &vif->pending_tx_info[pending_idx].req;
@@ -1518,6 +1593,23 @@ static int xenvif_tx_submit(struct xenvif *vif)
 					pending_idx :
 					INVALID_PENDING_IDX);
 
+		if (skb_shinfo(skb)->frag_list) {
+			nskb = skb_shinfo(skb)->frag_list;
+			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
+			skb->len += nskb->len;
+			skb->data_len += nskb->len;
+			skb->truesize += nskb->truesize;
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent += 2;
+			nskb = skb;
+
+			skb = skb_copy_expand(skb,
+					0,
+					0,
+					GFP_ATOMIC | __GFP_NOWARN);
+			skb_shinfo(skb)->destructor_arg = NULL;
+		}
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
@@ -1568,6 +1660,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		}
 
 		netif_receive_skb(skb);
+
+		if (nskb)
+			kfree_skb(nskb);
 	}
 
 	return work_done;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gnx-0001bU-3I; Wed, 08 Jan 2014 00:15:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gnt-0001Zm-JZ
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:25 +0000
Received: from [85.158.143.35:25322] by server-1.bemta-4.messagelabs.com id
	7F/52-02132-C989CC25; Wed, 08 Jan 2014 00:15:24 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389140115!10282468!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18306 invoked from network); 8 Jan 2014 00:15:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700272"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:23 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:22 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:16 +0000
Message-ID: <1389139818-24458-8-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 7/9] xen-netback: Add stat counters
	for frag_list skbs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the guest sends a packet with more
than MAX_SKB_FRAGS frags.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    1 +
 drivers/net/xen-netback/interface.c |    7 +++++++
 drivers/net/xen-netback/netback.c   |    1 +
 3 files changed, 9 insertions(+)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index e3c28ff..c037efb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -158,6 +158,7 @@ struct xenvif {
 	unsigned long tx_zerocopy_sent;
 	unsigned long tx_zerocopy_success;
 	unsigned long tx_zerocopy_fail;
+	unsigned long tx_frag_overflow;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index ac27af3..b7daf8d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -254,6 +254,13 @@ static const struct xenvif_stat {
 		"tx_zerocopy_fail",
 		offsetof(struct xenvif, tx_zerocopy_fail)
 	},
+	/* Number of packets exceeding MAX_SKB_FRAGS slots. You should use
+	 * a guest with the same MAX_SKB_FRAGS value
+	 */
+	{
+		"tx_frag_overflow",
+		offsetof(struct xenvif, tx_frag_overflow)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9841429..4305965 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1656,6 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			vif->tx_zerocopy_sent += 2;
+			vif->tx_frag_overflow++;
 			nskb = skb;
 
 			skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC | __GFP_NOWARN);


From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0go1-0001ft-QR; Wed, 08 Jan 2014 00:15:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0gnz-0001e2-Sm
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:32 +0000
Received: from [193.109.254.147:33067] by server-13.bemta-14.messagelabs.com
	id FE/34-19374-3A89CC25; Wed, 08 Jan 2014 00:15:31 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389140129!9424936!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8633 invoked from network); 8 Jan 2014 00:15:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88543909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:29 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:28 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:18 +0000
Message-ID: <1389139818-24458-10-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 9/9] xen-netback: Aggregate TX unmap
	operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Unmapping causes TLB flushing, therefore we should do it in the largest
possible batches. However, we shouldn't starve the guest for too long: if
the guest has space for at least two big packets and we don't have at least a
quarter ring to unmap, delay it for at most 1 millisecond.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 ++
 drivers/net/xen-netback/interface.c |    2 ++
 drivers/net/xen-netback/netback.c   |   31 ++++++++++++++++++++++++++++++-
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 063fcda..55d1f14 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -115,6 +115,8 @@ struct xenvif {
 	u16 dealloc_ring[MAX_PENDING_REQS];
 	struct task_struct *dealloc_task;
 	wait_queue_head_t dealloc_wq;
+	struct timer_list dealloc_delay;
+	bool dealloc_delay_timed_out;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index ce032f9..0287d62 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -406,6 +406,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 			  .desc = i };
 		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
 	}
+	init_timer(&vif->dealloc_delay);
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -551,6 +552,7 @@ void xenvif_disconnect(struct xenvif *vif)
 	}
 
 	if (vif->dealloc_task) {
+		del_timer_sync(&vif->dealloc_delay);
 		kthread_stop(vif->dealloc_task);
 		vif->dealloc_task = NULL;
 	}
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 6bc5413..27cc36c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -134,6 +134,11 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 		vif->pending_prod + vif->pending_cons;
 }
 
+static inline pending_ring_idx_t nr_free_slots(struct xen_netif_tx_back_ring *ring)
+{
+	return ring->nr_ents - (ring->sring->req_prod - ring->rsp_prod_pvt);
+}
+
 bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
 {
 	RING_IDX prod, cons;
@@ -1904,10 +1909,34 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static void xenvif_dealloc_delay(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	vif->dealloc_delay_timed_out = true;
+	wake_up(&vif->dealloc_wq);
+}
+
 static inline int tx_dealloc_work_todo(struct xenvif *vif)
 {
-	if (vif->dealloc_cons != vif->dealloc_prod)
+	if (vif->dealloc_cons != vif->dealloc_prod) {
+		if ((nr_free_slots(&vif->tx) > 2 * XEN_NETBK_LEGACY_SLOTS_MAX) &&
+			(vif->dealloc_prod - vif->dealloc_cons < MAX_PENDING_REQS / 4) &&
+			!vif->dealloc_delay_timed_out) {
+			if (!timer_pending(&vif->dealloc_delay)) {
+				vif->dealloc_delay.function =
+					xenvif_dealloc_delay;
+				vif->dealloc_delay.data = (unsigned long)vif;
+				mod_timer(&vif->dealloc_delay,
+					jiffies + msecs_to_jiffies(1));
+
+			}
+			return 0;
+		}
+		del_timer_sync(&vif->dealloc_delay);
+		vif->dealloc_delay_timed_out = false;
 		return 1;
+	}
 
 	return 0;
 }


From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0goQ-0001wG-9r; Wed, 08 Jan 2014 00:15:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0goP-0001v2-Ci
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:57 +0000
Received: from [193.109.254.147:37963] by server-4.bemta-14.messagelabs.com id
	2E/9C-03916-CB89CC25; Wed, 08 Jan 2014 00:15:56 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389140153!9455655!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23026 invoked from network); 8 Jan 2014 00:15:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700279"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:26 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:25 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:17 +0000
Message-ID: <1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout packets in
	RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A malicious or buggy guest can leave its queue filled indefinitely, in which
case qdisc starts to queue packets for that VIF. If those packets came from
another guest, they can block its slots and prevent shutdown. To avoid that,
we make sure the queue is drained every 10 seconds.

v3:
- remove stale debug log
- tie unmap timeout in xenvif_free to this timeout

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    5 +++++
 drivers/net/xen-netback/interface.c |   22 ++++++++++++++++++++--
 drivers/net/xen-netback/netback.c   |    9 +++++++++
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index dda3fd5..063fcda 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -130,6 +130,8 @@ struct xenvif {
 	 */
 	bool rx_event;
 
+	struct timer_list wake_queue;
+
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
 
@@ -224,4 +226,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int rx_drain_timeout_msecs;
+extern unsigned int rx_drain_timeout_jiffies;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 95fcd63..ce032f9 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void xenvif_wake_queue(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	if (netif_queue_stopped(vif->dev)) {
+		netdev_err(vif->dev, "draining TX queue\n");
+		netif_wake_queue(vif->dev);
+	}
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
@@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
+	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
+		vif->wake_queue.function = xenvif_wake_queue;
+		vif->wake_queue.data = (unsigned long)vif;
 		xenvif_stop_queue(vif);
+		mod_timer(&vif->wake_queue,
+			jiffies + rx_drain_timeout_jiffies);
+	}
 
 	skb_queue_tail(&vif->rx_queue, skb);
 	xenvif_kick_thread(vif);
@@ -353,6 +368,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	/* Initialize 'expires' now: it's used to track the credit window. */
 	vif->credit_timeout.expires = jiffies;
 
+	init_timer(&vif->wake_queue);
+
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
@@ -528,6 +545,7 @@ void xenvif_disconnect(struct xenvif *vif)
 		xenvif_carrier_off(vif);
 
 	if (vif->task) {
+		del_timer_sync(&vif->wake_queue);
 		kthread_stop(vif->task);
 		vif->task = NULL;
 	}
@@ -558,7 +576,7 @@ void xenvif_free(struct xenvif *vif)
 		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
 			unmap_timeout++;
 			schedule_timeout(msecs_to_jiffies(1000));
-			if (unmap_timeout > 9 &&
+			if (unmap_timeout > (rx_drain_timeout_msecs/1000) &&
 				net_ratelimit())
 				netdev_err(vif->dev,
 					"Page still granted! Index: %x\n", i);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f815395..6bc5413 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -62,6 +62,13 @@ module_param(separate_tx_rx_irq, bool, 0644);
 static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
 module_param(fatal_skb_slots, uint, 0444);
 
+/* When the guest ring is filled up, qdisc queues the packets for us, but we
+ * have to time them out, otherwise other guests' packets can get stuck there
+ */
+unsigned int rx_drain_timeout_msecs = 10000;
+module_param(rx_drain_timeout_msecs, uint, 0444);
+unsigned int rx_drain_timeout_jiffies;
+
 /*
  * To avoid confusion, we define XEN_NETBK_LEGACY_SLOTS_MAX indicating
  * the maximum slots a valid packet can use. Now this value is defined
@@ -2032,6 +2039,8 @@ static int __init netback_init(void)
 	if (rc)
 		goto failed_init;
 
+	rx_drain_timeout_jiffies = msecs_to_jiffies(rx_drain_timeout_msecs);
+
 	return 0;
 
 failed_init:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:15:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0goQ-0001wG-9r; Wed, 08 Jan 2014 00:15:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0goP-0001v2-Ci
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:15:57 +0000
Received: from [193.109.254.147:37963] by server-4.bemta-14.messagelabs.com id
	2E/9C-03916-CB89CC25; Wed, 08 Jan 2014 00:15:56 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389140153!9455655!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23026 invoked from network); 8 Jan 2014 00:15:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:15:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700279"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:15:26 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 19:15:25 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 8 Jan 2014 00:10:17 +0000
Message-ID: <1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout packets in
	RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A malicious or buggy guest can leave its queue filled indefinitely, in which
case qdisc starts to queue packets for that VIF. If those packets came from
another guest, they can tie up that guest's slots and prevent its shutdown.
To avoid that, we make sure the queue is drained every 10 seconds.
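The drain-timer behaviour described above can be sketched in plain C. This is a hypothetical user-space model, not kernel code: `fake_vif`, `stop_and_arm` and `wake_queue` are illustrative names standing in for the vif state, `xenvif_stop_queue()`/`mod_timer()` and the timer callback respectively.

```c
/* Minimal sketch, assuming the semantics described above: when the
 * guest RX ring has no free slots the queue is stopped and a wake
 * timer is armed; if the guest never drains the ring, the timer
 * wakes the queue anyway so qdisc packets are not held forever.
 * All names here are illustrative, not the kernel API. */
#include <stdbool.h>

struct fake_vif {
    bool queue_stopped;
    bool timer_armed;
};

/* Models xenvif_stop_queue() plus mod_timer() in the patch. */
static void stop_and_arm(struct fake_vif *vif)
{
    vif->queue_stopped = true;
    vif->timer_armed = true;   /* fires after rx_drain_timeout_msecs */
}

/* Models xenvif_wake_queue(): wake only if still stopped. */
static void wake_queue(struct fake_vif *vif)
{
    vif->timer_armed = false;
    if (vif->queue_stopped)
        vif->queue_stopped = false;
}
```

Under this model a stalled guest leaves queue_stopped set until wake_queue() runs, which matches the 10 second upper bound the patch enforces.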

v3:
- remove stale debug log
- tie unmap timeout in xenvif_free to this timeout

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    5 +++++
 drivers/net/xen-netback/interface.c |   22 ++++++++++++++++++++--
 drivers/net/xen-netback/netback.c   |    9 +++++++++
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index dda3fd5..063fcda 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -130,6 +130,8 @@ struct xenvif {
 	 */
 	bool rx_event;
 
+	struct timer_list wake_queue;
+
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
 
@@ -224,4 +226,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int rx_drain_timeout_msecs;
+extern unsigned int rx_drain_timeout_jiffies;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 95fcd63..ce032f9 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void xenvif_wake_queue(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	if (netif_queue_stopped(vif->dev)) {
+		netdev_err(vif->dev, "draining TX queue\n");
+		netif_wake_queue(vif->dev);
+	}
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
@@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
+	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
+		vif->wake_queue.function = xenvif_wake_queue;
+		vif->wake_queue.data = (unsigned long)vif;
 		xenvif_stop_queue(vif);
+		mod_timer(&vif->wake_queue,
+			jiffies + rx_drain_timeout_jiffies);
+	}
 
 	skb_queue_tail(&vif->rx_queue, skb);
 	xenvif_kick_thread(vif);
@@ -353,6 +368,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	/* Initialize 'expires' now: it's used to track the credit window. */
 	vif->credit_timeout.expires = jiffies;
 
+	init_timer(&vif->wake_queue);
+
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
@@ -528,6 +545,7 @@ void xenvif_disconnect(struct xenvif *vif)
 		xenvif_carrier_off(vif);
 
 	if (vif->task) {
+		del_timer_sync(&vif->wake_queue);
 		kthread_stop(vif->task);
 		vif->task = NULL;
 	}
@@ -558,7 +576,7 @@ void xenvif_free(struct xenvif *vif)
 		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
 			unmap_timeout++;
 			schedule_timeout(msecs_to_jiffies(1000));
-			if (unmap_timeout > 9 &&
+			if (unmap_timeout > (rx_drain_timeout_msecs/1000) &&
 				net_ratelimit())
 				netdev_err(vif->dev,
 					"Page still granted! Index: %x\n", i);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f815395..6bc5413 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -62,6 +62,13 @@ module_param(separate_tx_rx_irq, bool, 0644);
 static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
 module_param(fatal_skb_slots, uint, 0444);
 
+/* When the guest ring is filled up, qdisc queues the packets for us, but we
+ * have to time them out, otherwise other guests' packets can get stuck there
+ */
+unsigned int rx_drain_timeout_msecs = 10000;
+module_param(rx_drain_timeout_msecs, uint, 0444);
+unsigned int rx_drain_timeout_jiffies;
+
 /*
  * To avoid confusion, we define XEN_NETBK_LEGACY_SLOTS_MAX indicating
  * the maximum slots a valid packet can use. Now this value is defined
@@ -2032,6 +2039,8 @@ static int __init netback_init(void)
 	if (rc)
 		goto failed_init;
 
+	rx_drain_timeout_jiffies = msecs_to_jiffies(rx_drain_timeout_msecs);
+
 	return 0;
 
 failed_init:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:16:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0goZ-00022j-Og; Wed, 08 Jan 2014 00:16:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0goY-00021H-Gt
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 00:16:06 +0000
Received: from [85.158.139.211:62050] by server-14.bemta-5.messagelabs.com id
	EB/20-24200-4C89CC25; Wed, 08 Jan 2014 00:16:04 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389140162!8413960!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6116 invoked from network); 8 Jan 2014 00:16:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:16:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90700427"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:16:02 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 7 Jan 2014
	19:16:02 -0500
Message-ID: <52CC98C0.8070008@citrix.com>
Date: Wed, 8 Jan 2014 00:16:00 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v2 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, the version number in the subject should be v3

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyJ-0003Pg-Bw; Wed, 08 Jan 2014 00:26:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy9-0003NZ-PQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:26:02 +0000
Received: from [85.158.137.68:48622] by server-1.bemta-3.messagelabs.com id
	5A/01-29598-91B9CC25; Wed, 08 Jan 2014 00:26:01 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!4
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23447 invoked from network); 8 Jan 2014 00:26:00 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:26:00 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:57 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933084"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:56 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:48 -0500
Message-Id: <1389140748-26524-6-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 5/5] xg_main: If
	XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Without this, gdb does not report an error.

With this patch and using a 1G hvm domU:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Cannot access memory at address 0x6ae9168b

Drop the output of iop->remain because it will most likely be zero,
which leads to a strange message:

ERROR: failed to read 0 bytes. errno:14 rc:-1

Add the address to the write error because it may be the only message
displayed.

Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
error and so iop->remain will be zero.
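The return convention this patch relies on can be sketched as follows. This is a hypothetical stand-in, not the gdbsx code: these helpers return the number of bytes still remaining (0 means success), so returning the full length on hypercall failure is what forces gdb to report the error.

```c
/* Hypothetical sketch of the xg_read_mem()-style convention: the
 * return value is the count of bytes NOT transferred, so 0 means
 * success. On hypercall failure the patch returns the full length,
 * which makes gdb print "Cannot access memory". The names and the
 * hcall_works flag are illustrative, not the gdbsx API. */
static int hcall_works;   /* stands in for _domctl_hcall() succeeding */

static int fake_read_mem(unsigned long guestva, char *buf, int len)
{
    (void)guestva;
    (void)buf;
    if (!hcall_works)
        return len;   /* force error: all bytes reported unread */
    return 0;         /* success: nothing remains */
}
```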

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 3b2a285..0fc3f82 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->gwr = 0;       /* not writing to guest */
 
     if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
-        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
-              iop->remain, errno, rc);
+    {
+        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);
+        return tobuf_len;
+    }
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
@@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
     iop->gwr = 1;       /* writing to guest */
 
     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
-        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n", 
-              iop->remain, errno, rc);
+    {
+        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
+              guestva, errno, rc);
+        return buflen;
+    }
     return iop->remain;
 }
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gy8-0003NJ-Ip; Wed, 08 Jan 2014 00:26:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy5-0003Mt-Th
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:58 +0000
Received: from [85.158.137.68:48463] by server-15.bemta-3.messagelabs.com id
	8E/FF-11556-51B9CC25; Wed, 08 Jan 2014 00:25:57 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23138 invoked from network); 8 Jan 2014 00:25:56 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:52 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933045"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:51 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:43 -0500
Message-Id: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 0/5] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes v1 to v2:
  From Konrad Rzeszutek Wilk and Ian Campbell:

    ??

  Split out the emacs local variables addition into its own new patch (number 1).

  From Andrew Cooper:

    What does matter is that the caller of dbg_hvm_va2mfn() should
    not have to cleanup a reference taken when it returns an error.

  So use his version of the change.

  From Ian Campbell:

    In all three cases what is missing is the "why" and the
    appropriate analysis from
    http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
    i.e. what is the impact of the bug (i.e. what are the advantages
    of the fix) and what are the risks of this change causing
    further breakage? I'm not really in a position to evaluate the
    risk of a change in gdbsx, so someone needs to tell me.

    I think given that gdbsx is a somewhat "peripheral" bit of code
    and that it is targeted at developers (who might be better able
    to tolerate any resulting issues and more able to apply
    subsequent fixups than regular users) we can accept a larger
    risk than we would with a change to the hypervisor itself etc
    (that's assuming that all of these changes cannot potentially
    impact non-debugger usecases which I expect is the case from the
    function names but I would like to see confirmed).
 
  My take on this below.

  From Mukesh Rathor:

    Ooopsy... my thought was that an application should not even
    look at remain if the hcall/syscall failed, but forgot when
    writing the gdbsx itself :). Think of it this way, if the call
    didn't even make it to xen, and some reason the ioctl returned
    non-zero rc, then remain would still be zero. So I think we
    should fix gdbsx instead of here:

  Dropped old patch 4, Added new patch 5.


Freeze:

  The benefit of this series is that the hypervisor stops calling
  panic (debug=y) or hanging (debug=n).  Also, a person using gdbsx
  no longer sees random data from gdbsx's heap instead of guest
  data.

  The risk is that gdbsx does something new wrong.

  My understanding is that all the changes here only affect gdbsx
  and so are very limited in scope.

Release manager requests:
  patch 1 and 3 are optional for 4.4.0.
  patch 2 should be in 4.4.0
  patch 4 and 5 would be good to be in 4.4.0

While tracking down a bug in seabios/grub I found the bug in patch
2.

There are 2 ways that gfn will not be INVALID_GFN and yet mfn will
be INVALID_MFN.

  1) p2m_is_readonly(gfntype) and writing memory.
  2) the requested vaddr does not exist.

This may only be an issue for an HVM guest that is in real mode
(i.e. no page tables).

Patch 3 is debug logging that was used to find the 2nd way.


Don Slutz (5):
  Add Emacs local variables to source files.
  dbg_rw_guest_mem: need to call put_gfn in error path.
  dbg_rw_guest_mem: Conditionally enable debug log output
  xg_read_mem: Report on error.
  xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.

 tools/debugger/gdbsx/xg/xg_main.c | 23 ++++++++++---
 xen/arch/x86/debug.c              | 71 +++++++++++++++++++++++----------------
 2 files changed, 61 insertions(+), 33 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyI-0003Or-0s; Wed, 08 Jan 2014 00:26:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy8-0003NF-Cf
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:26:00 +0000
Received: from [85.158.143.35:38367] by server-2.bemta-4.messagelabs.com id
	7D/69-11386-71B9CC25; Wed, 08 Jan 2014 00:25:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389140756!7593407!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4679 invoked from network); 8 Jan 2014 00:25:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:56 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933069"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:55 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:47 -0500
Message-Id: <1389140748-26524-5-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 4/5] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had coded this with XGERR, but gdb will try to read memory without
a direct request from the user, so the error message can be confusing.

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 0622ebd..3b2a285 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyI-0003PC-O4; Wed, 08 Jan 2014 00:26:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy8-0003NE-CJ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:26:00 +0000
Received: from [85.158.137.68:48570] by server-15.bemta-3.messagelabs.com id
	54/00-11556-71B9CC25; Wed, 08 Jan 2014 00:25:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!3
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23325 invoked from network); 8 Jan 2014 00:25:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:55 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933065"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:54 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:46 -0500
Message-Id: <1389140748-26524-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally enable
	debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If dbg_debug is non-zero, output debug logging.

Include put_gfn debug logging.

Here is a sample output at dbg_debug == 2:

(XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
(XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$2

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index ba6a64d..777e5ba 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -30,16 +30,9 @@
  * gdbsx, etc..
  */
 
-#ifdef XEN_KDB_CONFIG
-#include "../kdb/include/kdbdefs.h"
-#include "../kdb/include/kdbproto.h"
-#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
-#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
-#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
-#else
-#define DBGP1(...) ((void)0)
-#define DBGP2(...) ((void)0)
-#endif
+static volatile int dbg_debug;
+#define DBGP(...) {(dbg_debug) ? printk(__VA_ARGS__) : 0;}
+#define DBGP1(...) {(dbg_debug > 1) ? printk(__VA_ARGS__) : 0;}
 
 /* Returns: mfn for the given (hvm guest) vaddr */
 static unsigned long 
@@ -50,27 +43,28 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     uint32_t pfec = PFEC_page_present;
     p2m_type_t gfntype;
 
-    DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
+    DBGP1("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
 
     *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
     if ( *gfn == INVALID_GFN )
     {
-        DBGP2("kdb:bad gfn from gva_to_gfn\n");
+        DBGP1("kdb:bad gfn from gva_to_gfn\n");
         return INVALID_MFN;
     }
 
     mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
-        DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
+        DBGP1("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
         mfn = INVALID_MFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+    DBGP1("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
 
     if ( mfn == INVALID_MFN )
     {
         put_gfn(dp, *gfn);
+        DBGP1("R: domid:%d gfn:%lx\n", dp->domain_id, *gfn);
         *gfn = INVALID_GFN;
     }
 
@@ -100,7 +94,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
     unsigned long mfn = cr3 >> PAGE_SHIFT;
 
-    DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
+    DBGP1("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
           cr3, pgd3val);
 
     if ( pgd3val == 0 )
@@ -109,11 +103,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
         mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        DBGP1("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
+              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
 
@@ -121,12 +115,12 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP1("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
-            DBGP1("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
     }
@@ -135,20 +129,20 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP1("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
-        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
     l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
 
         unmap_domain_page(va);
         if ( gfn != INVALID_GFN )
+        {
             put_gfn(dp, gfn);
+            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, gfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     struct domain *dp = get_domain_by_id(domid);
     int hyp = (domid == DOMID_IDLE);
 
-    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
+    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
           addr, buf, len, domid, toaddr, dp);
     if ( hyp )
     {
@@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
         put_domain(dp);
     }
 
-    DBGP2("gmem:exit:len:$%d\n", len);
+    DBGP1("gmem:exit:len:$%d\n", len);
     return len;
 }
 
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gy7-0003N5-9D; Wed, 08 Jan 2014 00:25:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy5-0003Ms-CF
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:57 +0000
Received: from [85.158.139.211:6752] by server-14.bemta-5.messagelabs.com id
	A8/E5-24200-41B9CC25; Wed, 08 Jan 2014 00:25:56 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389140754!8414901!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23778 invoked from network); 8 Jan 2014 00:25:56 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 00:25:54 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933059"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:53 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:45 -0500
Message-Id: <1389140748-26524-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hang (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The 1st read worked because the p2m lock is recursive and the PCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..ba6a64d 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
     }
 
     DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
+    }
+
     return mfn;
 }
 
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyD-0003Nv-Mh; Wed, 08 Jan 2014 00:26:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy6-0003N3-Vm
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:59 +0000
Received: from [85.158.137.68:29304] by server-11.bemta-3.messagelabs.com id
	E9/E5-19379-61B9CC25; Wed, 08 Jan 2014 00:25:58 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23237 invoked from network); 8 Jan 2014 00:25:57 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:57 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:53 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933051"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:52 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:44 -0500
Message-Id: <1389140748-26524-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
	files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These 2 files are changed in this patch set, so add the allowed
"Emacs local variables" from CODING_STYLE.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 8 ++++++++
 xen/arch/x86/debug.c              | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 5736b86..0622ebd 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -821,3 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
     return iop->remain;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3e21ca8..a67a192 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -223,3 +223,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     return len;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gy8-0003NJ-Ip; Wed, 08 Jan 2014 00:26:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy5-0003Mt-Th
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:58 +0000
Received: from [85.158.137.68:48463] by server-15.bemta-3.messagelabs.com id
	8E/FF-11556-51B9CC25; Wed, 08 Jan 2014 00:25:57 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23138 invoked from network); 8 Jan 2014 00:25:56 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:52 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933045"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:51 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:43 -0500
Message-Id: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 0/5] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes v1 to v2:
  From Konrad Rzeszutek Wilk and Ian Campbell:

    ??

  Split out the emacs local variables addition into its own new patch (number 1).

  From Andrew Cooper:

    What does matter is that the caller of dbg_hvm_va2mfn() should
    not have to clean up a reference taken when it returns an error.

  So use his version of the change.

  From Ian Campbell:

    In all three cases what is missing is the "why" and the
    appropriate analysis from
    http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
    i.e. what is the impact of the bug (i.e. what are the advantages
    of the fix) and what are the risks of this change causing
    further breakage? I'm not really in a position to evaluate the
    risk of a change in gdbsx, so someone needs to tell me.

    I think given that gdbsx is a somewhat "peripheral" bit of code
    and that it is targeted at developers (who might be better able
    to tolerate any resulting issues and more able to apply
    subsequent fixups than regular users) we can accept a larger
    risk than we would with a change to the hypervisor itself etc
    (that's assuming that all of these changes cannot potentially
    impact non-debugger use cases, which I expect is the case from
    the function names, but I would like to see confirmed).
 
  My take on this below.

  From Mukesh Rathor:

    Ooopsy... my thought was that an application should not even
    look at remain if the hcall/syscall failed, but I forgot that
    when writing gdbsx itself :). Think of it this way: if the call
    didn't even make it to Xen, and for some reason the ioctl
    returned a non-zero rc, then remain would still be zero. So I
    think we should fix gdbsx instead of fixing it here.

  Dropped old patch 4, added new patch 5.


Freeze:

  The benefit of this series is that the hypervisor stops calling
  panic (debug=y) or hanging (debug=n).  Also, a person using gdbsx
  no longer sees random data from gdbsx's heap instead of guest
  data.

  The risk is that gdbsx breaks in some new way.

  My understanding is that all the changes here only affect gdbsx
  and so are very limited in scope.

Release manager requests:
  Patches 1 and 3 are optional for 4.4.0.
  Patch 2 should be in 4.4.0.
  Patches 4 and 5 would be good to have in 4.4.0.

While tracking down a bug in seabios/grub I found the bug fixed by
patch 2.

There are 2 ways that gfn will not be INVALID_GFN and yet mfn will
be INVALID_MFN.

  1) p2m_is_readonly(gfntype) and writing memory.
  2) the requested vaddr does not exist.

This may only be an issue for an HVM guest that is in real mode
(i.e. no page tables).

Patch 3 is debug logging that was used to find the 2nd way.


Don Slutz (5):
  Add Emacs local variables to source files.
  dbg_rw_guest_mem: need to call put_gfn in error path.
  dbg_rw_guest_mem: Conditionally enable debug log output
  xg_read_mem: Report on error.
  xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.

 tools/debugger/gdbsx/xg/xg_main.c | 23 ++++++++++---
 xen/arch/x86/debug.c              | 71 +++++++++++++++++++++++----------------
 2 files changed, 61 insertions(+), 33 deletions(-)

-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyI-0003PC-O4; Wed, 08 Jan 2014 00:26:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy8-0003NE-CJ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:26:00 +0000
Received: from [85.158.137.68:48570] by server-15.bemta-3.messagelabs.com id
	54/00-11556-71B9CC25; Wed, 08 Jan 2014 00:25:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!3
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23325 invoked from network); 8 Jan 2014 00:25:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:55 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933065"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:54 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:46 -0500
Message-Id: <1389140748-26524-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally enable
	debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If dbg_debug is non-zero, output debug logging.

Include put_gfn debug logging.

Here is a sample of the output at dbg_debug == 2:

(XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
(XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$2

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index ba6a64d..777e5ba 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -30,16 +30,9 @@
  * gdbsx, etc..
  */
 
-#ifdef XEN_KDB_CONFIG
-#include "../kdb/include/kdbdefs.h"
-#include "../kdb/include/kdbproto.h"
-#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
-#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
-#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
-#else
-#define DBGP1(...) ((void)0)
-#define DBGP2(...) ((void)0)
-#endif
+static volatile int dbg_debug;
+#define DBGP(...) {(dbg_debug) ? printk(__VA_ARGS__) : 0;}
+#define DBGP1(...) {(dbg_debug > 1) ? printk(__VA_ARGS__) : 0;}
 
 /* Returns: mfn for the given (hvm guest) vaddr */
 static unsigned long 
@@ -50,27 +43,28 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     uint32_t pfec = PFEC_page_present;
     p2m_type_t gfntype;
 
-    DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
+    DBGP1("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
 
     *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
     if ( *gfn == INVALID_GFN )
     {
-        DBGP2("kdb:bad gfn from gva_to_gfn\n");
+        DBGP1("kdb:bad gfn from gva_to_gfn\n");
         return INVALID_MFN;
     }
 
     mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
-        DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
+        DBGP1("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
         mfn = INVALID_MFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+    DBGP1("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
 
     if ( mfn == INVALID_MFN )
     {
         put_gfn(dp, *gfn);
+        DBGP1("R: domid:%d gfn:%lx\n", dp->domain_id, *gfn);
         *gfn = INVALID_GFN;
     }
 
@@ -100,7 +94,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
     unsigned long mfn = cr3 >> PAGE_SHIFT;
 
-    DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
+    DBGP1("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
           cr3, pgd3val);
 
     if ( pgd3val == 0 )
@@ -109,11 +103,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
         mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        DBGP1("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
+              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
 
@@ -121,12 +115,12 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP1("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
-            DBGP1("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
     }
@@ -135,20 +129,20 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP1("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
-        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
     l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
 
         unmap_domain_page(va);
         if ( gfn != INVALID_GFN )
+        {
             put_gfn(dp, gfn);
+            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, gfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     struct domain *dp = get_domain_by_id(domid);
     int hyp = (domid == DOMID_IDLE);
 
-    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
+    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
           addr, buf, len, domid, toaddr, dp);
     if ( hyp )
     {
@@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
         put_domain(dp);
     }
 
-    DBGP2("gmem:exit:len:$%d\n", len);
+    DBGP1("gmem:exit:len:$%d\n", len);
     return len;
 }
 
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyI-0003Or-0s; Wed, 08 Jan 2014 00:26:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy8-0003NF-Cf
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:26:00 +0000
Received: from [85.158.143.35:38367] by server-2.bemta-4.messagelabs.com id
	7D/69-11386-71B9CC25; Wed, 08 Jan 2014 00:25:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389140756!7593407!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4679 invoked from network); 8 Jan 2014 00:25:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:56 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933069"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:55 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:47 -0500
Message-Id: <1389140748-26524-5-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 4/5] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had coded this with XGERR, but gdb will try to read memory without
a direct request from the user.  So the error message can be confusing.

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 0622ebd..3b2a285 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gy7-0003N5-9D; Wed, 08 Jan 2014 00:25:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy5-0003Ms-CF
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:57 +0000
Received: from [85.158.139.211:6752] by server-14.bemta-5.messagelabs.com id
	A8/E5-24200-41B9CC25; Wed, 08 Jan 2014 00:25:56 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389140754!8414901!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23778 invoked from network); 8 Jan 2014 00:25:56 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 00:25:54 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933059"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:53 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:45 -0500
Message-Id: <1389140748-26524-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The 1st one worked because the p2m.lock is recursive and the PCPU
had not yet changed.
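
The lock behaviour can be modelled with a toy recursive counter
(illustrative names only, not the real mm_rwlock_t API):

```c
#include <assert.h>

/* Toy recursive p2m lock: the same CPU may re-take it, but each
 * acquire must be paired with a release or recurse_count stays
 * elevated and any other CPU spins forever. */
struct toy_p2m_lock {
    int recurse_count;
    int locker;          /* CPU currently holding the lock */
};

static int try_get_gfn(struct toy_p2m_lock *l, int cpu)
{
    if (l->recurse_count && l->locker != cpu)
        return 0;        /* would block: held by another CPU */
    l->locker = cpu;
    l->recurse_count++;
    return 1;
}

static void put_gfn_toy(struct toy_p2m_lock *l)
{
    l->recurse_count--;
}

/* Buggy error path: takes the lock, bails out early, never releases. */
static void buggy_access(struct toy_p2m_lock *l, int cpu)
{
    try_get_gfn(l, cpu);
    /* ... INVALID_MFN detected, early return, put_gfn() skipped ... */
}
```

The first gdb read succeeds because the second acquire comes from the
same PCPU; once the vCPU runs elsewhere, the next acquire behaves like
the other-CPU case and spins.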

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/debug.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..ba6a64d 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
     }
 
     DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
+    }
+
     return mfn;
 }
 
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0gyD-0003Nv-Mh; Wed, 08 Jan 2014 00:26:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0gy6-0003N3-Vm
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:25:59 +0000
Received: from [85.158.137.68:29304] by server-11.bemta-3.messagelabs.com id
	E9/E5-19379-61B9CC25; Wed, 08 Jan 2014 00:25:58 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389140754!6640621!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23237 invoked from network); 8 Jan 2014 00:25:57 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 00:25:57 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 00:25:53 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625933051"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.205])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 00:25:52 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Tue,  7 Jan 2014 19:25:44 -0500
Message-Id: <1389140748-26524-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
	files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These 2 files are changed in this patch set.  So add the allowed
"Emacs local variables" from CODING_STYLE.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 8 ++++++++
 xen/arch/x86/debug.c              | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 5736b86..0622ebd 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -821,3 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
     return iop->remain;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 3e21ca8..a67a192 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -223,3 +223,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     return len;
 }
 
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 00:56:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hRD-0005KG-MA; Wed, 08 Jan 2014 00:56:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0hRA-0005KB-SI
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:56:01 +0000
Received: from [85.158.137.68:3317] by server-12.bemta-3.messagelabs.com id
	99/16-20055-022ACC25; Wed, 08 Jan 2014 00:56:00 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389142557!6643235!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8060 invoked from network); 8 Jan 2014 00:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90707291"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:55:35 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 19:55:34 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	01:55:33 +0100
Message-ID: <52CCA204.2020601@citrix.com>
Date: Wed, 8 Jan 2014 00:55:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
In-Reply-To: <1389140748-26524-3-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 00:25, Don Slutz wrote:
> Using a 1G hvm domU (in grub) and gdbsx:
>
> (gdb) set arch i8086
> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
> of GDB.  Attempting to continue with the default i8086 settings.
>
> The target architecture is assumed to be i8086
> (gdb) target remote localhost:9999
> Remote debugging using localhost:9999
> Remote debugging from host 127.0.0.1
> 0x0000d475 in ?? ()
> (gdb) x/1xh 0x6ae9168b
>
> Will reproduce this bug.
>
> With a debug=y build you will get:
>
> Assertion '!preempt_count()' failed at preempt.c:37
>
> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>
>          [ffff82c4c0126eec] _write_lock+0x3c/0x50
>           ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>           ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>           ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>           ffff82c4c01709ed  get_page+0x2d/0x100
>           ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>           ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>           ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>           ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>           ffff82c4c012938b  add_entry+0x4b/0xb0
>           ffff82c4c02223f9  syscall_enter+0xa9/0xae
>
> And gdb output:
>
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     0x3024
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Ignoring packet error, continuing...
> Reply contains invalid hex digit 116
>
> The 1st one worked because the p2m.lock is recursive and the PCPU
> had not yet changed.
>
> crash reports (for example):
>
> crash> mm_rwlock_t 0xffff83083f913010
> struct mm_rwlock_t {
>   lock = {
>     raw = {
>       lock = 2147483647
>     },
>     debug = {<No data fields>}
>   },
>   unlock_level = 0,
>   recurse_count = 1,
>   locker = 1,
>   locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
> }
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Technically this should include a Signed-off-by: Andrew Cooper
<andrew.cooper3@citrix.com> tag (as I am the author of the code) as well
as your own (as the discoverer of the bug and author of the commit
message), but I notice I accidentally omitted it from the original email
thread, so my apologies.

It should probably also include your Tested-by: tag.

Ian (with RM hat on):
  This is a hypervisor reference counting error on a toolstack hypercall
path.  Irrespective of any of the other patches in this series, I think
this should be included ASAP (although probably subject to review from a
third person), which will fix the hypervisor crashes from gdbsx usage.

~Andrew

> ---
>  xen/arch/x86/debug.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index a67a192..ba6a64d 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> -        return INVALID_MFN;
> +        mfn = INVALID_MFN;
>      }
>  
>      DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +
> +    if ( mfn == INVALID_MFN )
> +    {
> +        put_gfn(dp, *gfn);
> +        *gfn = INVALID_GFN;
> +    }
> +
>      return mfn;
>  }
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 00:56:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 00:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hRD-0005KG-MA; Wed, 08 Jan 2014 00:56:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0hRA-0005KB-SI
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 00:56:01 +0000
Received: from [85.158.137.68:3317] by server-12.bemta-3.messagelabs.com id
	99/16-20055-022ACC25; Wed, 08 Jan 2014 00:56:00 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389142557!6643235!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8060 invoked from network); 8 Jan 2014 00:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 00:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90707291"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 00:55:35 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 19:55:34 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	01:55:33 +0100
Message-ID: <52CCA204.2020601@citrix.com>
Date: Wed, 8 Jan 2014 00:55:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
In-Reply-To: <1389140748-26524-3-git-send-email-dslutz@verizon.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 00:25, Don Slutz wrote:
> Using a 1G hvm domU (in grub) and gdbsx:
>
> (gdb) set arch i8086
> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
> of GDB.  Attempting to continue with the default i8086 settings.
>
> The target architecture is assumed to be i8086
> (gdb) target remote localhost:9999
> Remote debugging using localhost:9999
> Remote debugging from host 127.0.0.1
> 0x0000d475 in ?? ()
> (gdb) x/1xh 0x6ae9168b
>
> Will reproduce this bug.
>
> With a debug=y build you will get:
>
> Assertion '!preempt_count()' failed at preempt.c:37
>
> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>
>          [ffff82c4c0126eec] _write_lock+0x3c/0x50
>           ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>           ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>           ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>           ffff82c4c01709ed  get_page+0x2d/0x100
>           ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>           ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>           ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>           ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>           ffff82c4c012938b  add_entry+0x4b/0xb0
>           ffff82c4c02223f9  syscall_enter+0xa9/0xae
>
> And gdb output:
>
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     0x3024
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Ignoring packet error, continuing...
> Reply contains invalid hex digit 116
>
> The 1st one worked because the p2m.lock is recursive and the PCPU
> had not yet changed.
>
> crash reports (for example):
>
> crash> mm_rwlock_t 0xffff83083f913010
> struct mm_rwlock_t {
>   lock = {
>     raw = {
>       lock = 2147483647
>     },
>     debug = {<No data fields>}
>   },
>   unlock_level = 0,
>   recurse_count = 1,
>   locker = 1,
>   locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
> }
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Technically this should include a Signed-off-by: Andrew Cooper
<andrew.cooper3@citrix.com> tag (being the author of the code) as well
as your own (being the discoverer of the bug and author of the commit
message), but I notice I accidentally omitted it from the original email
thread, so my apologies.

It should probably also include your Tested-by: tag

Ian (with RM hat on):
  This is a hypervisor reference counting error on a toolstack hypercall
path.  Irrespective of any of the other patches in this series, I think
this should be included ASAP (although probably subject to review from a
third person), which will fix the hypervisor crashes from gdbsx usage.

~Andrew

> ---
>  xen/arch/x86/debug.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index a67a192..ba6a64d 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> -        return INVALID_MFN;
> +        mfn = INVALID_MFN;
>      }
>  
>      DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +
> +    if ( mfn == INVALID_MFN )
> +    {
> +        put_gfn(dp, *gfn);
> +        *gfn = INVALID_GFN;
> +    }
> +
>      return mfn;
>  }
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:02:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hXm-0007l0-0e; Wed, 08 Jan 2014 01:02:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0hXk-0007cB-Ty
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:02:49 +0000
Received: from [85.158.143.35:42158] by server-2.bemta-4.messagelabs.com id
	19/87-11386-8B3ACC25; Wed, 08 Jan 2014 01:02:48 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389142966!10283606!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17592 invoked from network); 8 Jan 2014 01:02:47 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:02:47 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 01:02:46 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625945856"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.245])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 01:02:42 +0000
Message-ID: <52CCA3B1.7070602@terremark.com>
Date: Tue, 07 Jan 2014 20:02:41 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, Don Slutz <dslutz@verizon.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>	<1388857936-664-5-git-send-email-dslutz@verizon.com>	<20140106175349.6cbd190b@mantra.us.oracle.com>	<1389088824.31766.105.camel@kazak.uk.xensource.com>	<1389088937.31766.107.camel@kazak.uk.xensource.com>	<52CC2A2F.7010700@terremark.com>
	<20140107150148.4cbf1a73@mantra.us.oracle.com>
In-Reply-To: <20140107150148.4cbf1a73@mantra.us.oracle.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/14 18:01, Mukesh Rathor wrote:
> On Tue, 07 Jan 2014 11:24:15 -0500
> Don Slutz <dslutz@verizon.com> wrote:
>
>> On 01/07/14 05:02, Ian Campbell wrote:
>>> On Tue, 2014-01-07 at 10:00 +0000, Ian Campbell wrote:
>>>> On Mon, 2014-01-06 at 17:53 -0800, Mukesh Rathor wrote:
>>>>> On Sat,  4 Jan 2014 12:52:16 -0500
>>>>> Don Slutz <dslutz@verizon.com> wrote:
>>>>>
>>>>>> The gdbsx code expects that domctl->u.gdbsx_guest_memio.remain is
>>>>>> returned.
>>>>>>
> .....
>
>> I had assumed that this patch (which is not needed to "fix" the bugs I
>> found) was to be dropped in v2.  However, I will agree that currently
>> there is no way to know about partial success.  The untested change:
>>
>> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
>> index ef6c140..0add07e 100644
>> --- a/xen/arch/x86/domctl.c
>> +++ b/xen/arch/x86/domctl.c
>> @@ -43,7 +43,7 @@ static int gdbsx_guest_mem_io(
>>        iop->remain = dbg_rw_mem(
>>            (dbgva_t)iop->gva, (dbgbyte_t *)l_uva, iop->len, domid,
>>            iop->gwr, iop->pgd3val);
>> -    return (iop->remain ? -EFAULT : 0);
>> +    return 0;
>>    }
>>
>>    long arch_do_domctl(
>>
>>
>> Would appear to allow partial success to be reported and also meet
>> with remain not to be looked at with an error.
> No, the partial success is relevant in other cases, like EAGAIN,
> but not EFAULT. If we make it pre-emptible in future to return
> EAGAIN, we'd need to make sure remain was honored. Again, think of it
> this way: if the first copyin failed, remain would not have been
> initialized.
>
> So, because now the only cause of unfinished copy from dbg_rw_mem is
> EFAULT, we should leave the above Xen code alone, and just change gdbsx.
>
> thanks
> mukesh

Since it did not look like we would reach an agreement on this soon, I have sent out the v2 series.

It is not at all clear this patch should be in 4.4.0 (domctl API change, high risk, and no clear use case: as far as I know, gdb does not do anything with partial success).

On this topic: you seem to be overlooking the page-crossing case.

Using the info that page 1f is good and 20 is bad, a domctl request at 1ffff for 2 bytes would call dbg_rw_mem() and dbg_rw_guest_mem(), which calculates pagecnt == 1, gets a valid mfn, and returns that byte.  The 2nd time pagecnt is also 1, but we get INVALID_MFN, so dbg_rw_guest_mem() returns 1.  dbg_rw_mem() also returns 1.  gdbsx_guest_mem_io() returns -EFAULT, so no copyback is done.

At this point, of the 2 requested bytes, 1 byte is valid and 1 is not.  Since the copyback is not done, the caller sees remain == 0.  So the caller gets the error but does not have this "partial success" information.

In the 1st version of this patch I proposed, the copyback is done, so the caller gets remain == 1, the valid byte, and an error.

The 2nd version of this patch (which has now been tested) sets remain == 1 and returns the valid byte with no error.

Hope this helps.

    -Don Slutz




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:06:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hbQ-0001jL-Al; Wed, 08 Jan 2014 01:06:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0hbO-0001jG-DF
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:06:34 +0000
Received: from [85.158.139.211:52480] by server-2.bemta-5.messagelabs.com id
	5F/21-29392-994ACC25; Wed, 08 Jan 2014 01:06:33 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389143192!8427839!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11241 invoked from network); 8 Jan 2014 01:06:32 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:06:32 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 01:06:31 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625947810"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.245])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 01:06:30 +0000
Message-ID: <52CCA496.80008@terremark.com>
Date: Tue, 07 Jan 2014 20:06:30 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
In-Reply-To: <52CCA204.2020601@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/14 19:55, Andrew Cooper wrote:
> On 08/01/2014 00:25, Don Slutz wrote:
>> Using a 1G hvm domU (in grub) and gdbsx:
>>
>> (gdb) set arch i8086
>> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
>> of GDB.  Attempting to continue with the default i8086 settings.
>>
>> The target architecture is assumed to be i8086
>> (gdb) target remote localhost:9999
>> Remote debugging using localhost:9999
>> Remote debugging from host 127.0.0.1
>> 0x0000d475 in ?? ()
>> (gdb) x/1xh 0x6ae9168b
>>
>> Will reproduce this bug.
>>
>> With a debug=y build you will get:
>>
>> Assertion '!preempt_count()' failed at preempt.c:37
>>
>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>
>>           [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>            ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>            ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>            ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>            ffff82c4c01709ed  get_page+0x2d/0x100
>>            ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>            ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>            ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>            ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>            ffff82c4c012938b  add_entry+0x4b/0xb0
>>            ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>
>> And gdb output:
>>
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     0x3024
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     Ignoring packet error, continuing...
>> Reply contains invalid hex digit 116
>>
>> The 1st one worked because the p2m.lock is recursive and the PCPU
>> had not yet changed.
>>
>> crash reports (for example):
>>
>> crash> mm_rwlock_t 0xffff83083f913010
>> struct mm_rwlock_t {
>>    lock = {
>>      raw = {
>>        lock = 2147483647
>>      },
>>      debug = {<No data fields>}
>>    },
>>    unlock_level = 0,
>>    recurse_count = 1,
>>    locker = 1,
>>    locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
>> }
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
> Technically this should include a Signed-off-by: Andrew Cooper
> <andrew.cooper3@citrix.com> tag (being the author of the code) as well
> as your own (being the discoverer of the bug and author of the commit
> message), but I notice I accidentally omitted it from the original email
> thread, so my apologies.

I was not sure if I should have added it without you providing it, so I went without.


> It should probably also include your Tested-by: tag

That is fine with me.  Should I make a v3 of just this with both tags?

    -Don Slutz

>
> Ian (with RM hat on):
>    This is a hypervisor reference counting error on a toolstack hypercall
> path.  Irrespective of any of the other patches in this series, I think
> this should be included ASAP (although probably subject to review from a
> third person), which will fix the hypervisor crashes from gdbsx usage.
>
> ~Andrew
>
>> ---
>>   xen/arch/x86/debug.c | 9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>> index a67a192..ba6a64d 100644
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>>       if ( p2m_is_readonly(gfntype) && toaddr )
>>       {
>>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>> -        return INVALID_MFN;
>> +        mfn = INVALID_MFN;
>>       }
>>   
>>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
>> +
>> +    if ( mfn == INVALID_MFN )
>> +    {
>> +        put_gfn(dp, *gfn);
>> +        *gfn = INVALID_GFN;
>> +    }
>> +
>>       return mfn;
>>   }
>>   


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:06:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hbQ-0001jL-Al; Wed, 08 Jan 2014 01:06:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0hbO-0001jG-DF
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:06:34 +0000
Received: from [85.158.139.211:52480] by server-2.bemta-5.messagelabs.com id
	5F/21-29392-994ACC25; Wed, 08 Jan 2014 01:06:33 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389143192!8427839!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11241 invoked from network); 8 Jan 2014 01:06:32 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:06:32 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 01:06:31 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="625947810"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.245])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 01:06:30 +0000
Message-ID: <52CCA496.80008@terremark.com>
Date: Tue, 07 Jan 2014 20:06:30 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
In-Reply-To: <52CCA204.2020601@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/07/14 19:55, Andrew Cooper wrote:
> On 08/01/2014 00:25, Don Slutz wrote:
>> Using a 1G hvm domU (in grub) and gdbsx:
>>
>> (gdb) set arch i8086
>> warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
>> of GDB.  Attempting to continue with the default i8086 settings.
>>
>> The target architecture is assumed to be i8086
>> (gdb) target remote localhost:9999
>> Remote debugging using localhost:9999
>> Remote debugging from host 127.0.0.1
>> 0x0000d475 in ?? ()
>> (gdb) x/1xh 0x6ae9168b
>>
>> Will reproduce this bug.
>>
>> With a debug=y build you will get:
>>
>> Assertion '!preempt_count()' failed at preempt.c:37
>>
>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>
>>           [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>            ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>            ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>            ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>            ffff82c4c01709ed  get_page+0x2d/0x100
>>            ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>            ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>            ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>            ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>            ffff82c4c012938b  add_entry+0x4b/0xb0
>>            ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>
>> And gdb output:
>>
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     0x3024
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     Ignoring packet error, continuing...
>> Reply contains invalid hex digit 116
>>
>> The first read worked because the p2m lock is recursive and the PCPU
>> had not yet changed.
>>
>> crash reports (for example):
>>
>> crash> mm_rwlock_t 0xffff83083f913010
>> struct mm_rwlock_t {
>>    lock = {
>>      raw = {
>>        lock = 2147483647
>>      },
>>      debug = {<No data fields>}
>>    },
>>    unlock_level = 0,
>>    recurse_count = 1,
>>    locker = 1,
>>    locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
>> }
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
> Technically this should include a Signed-off-by: Andrew Cooper
> <andrew.cooper3@citrix.com> tag (being the author of the code) as well
> as your own (being the discoverer of the bug and author of the commit
> message), but I notice I accidentally omitted it from the original email
> thread, so my apologies.

I was not sure whether I should add it without you having added it yourself, so I went without.


> It should probably also include your Tested-by: tag

That is fine with me.  Should I make a v3 of just this with both tags?

    -Don Slutz

>
> Ian (with RM hat on):
>    This is a hypervisor reference counting error on a toolstack hypercall
> path.  Irrespective of any of the other patches in this series, I think
> this should be included ASAP (although probably subject to review from a
> third person), which will fix the hypervisor crashes from gdbsx usage.
>
> ~Andrew
>
>> ---
>>   xen/arch/x86/debug.c | 9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>> index a67a192..ba6a64d 100644
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>>       if ( p2m_is_readonly(gfntype) && toaddr )
>>       {
>>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>> -        return INVALID_MFN;
>> +        mfn = INVALID_MFN;
>>       }
>>   
>>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
>> +
>> +    if ( mfn == INVALID_MFN )
>> +    {
>> +        put_gfn(dp, *gfn);
>> +        *gfn = INVALID_GFN;
>> +    }
>> +
>>       return mfn;
>>   }
>>   
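The discipline the fix restores (every get_gfn() balanced by a put_gfn() on every exit path, including errors) can be sketched with simplified stand-ins. The get_gfn()/put_gfn() bodies and the reference counter below are hypothetical models of the Xen primitives, not the real implementation; only the control flow mirrors the patched dbg_hvm_va2mfn():

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_MFN ((uint64_t)~0ULL)

/* Hypothetical model of the p2m reference taken by get_gfn() and
 * dropped by put_gfn(); in Xen this is a recursive lock/refcount. */
static int gfn_refcount;

static uint64_t get_gfn(uint64_t gfn)
{
    gfn_refcount++;   /* take the reference */
    return gfn;       /* pretend the translation succeeded */
}

static void put_gfn(uint64_t gfn)
{
    (void)gfn;
    gfn_refcount--;   /* drop the reference */
}

/* Mirrors the corrected control flow of dbg_hvm_va2mfn(): on failure
 * we no longer return early with the reference still held; instead we
 * record INVALID_MFN and fall through to a single exit that calls
 * put_gfn() before returning. */
static uint64_t lookup_mfn(uint64_t gfn, int readonly, int towrite)
{
    uint64_t mfn = get_gfn(gfn);

    if ( readonly && towrite )
        mfn = INVALID_MFN;   /* error: do NOT 'return INVALID_MFN' here */

    if ( mfn == INVALID_MFN )
        put_gfn(gfn);        /* error path now balances get_gfn() */

    return mfn;              /* on success the caller must put_gfn() */
}
```

An early `return INVALID_MFN` before the put_gfn() is exactly the leak the patch fixes: the reference stays held, and the next access spins on the p2m lock (or trips the `!preempt_count()` assertion in a debug build).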


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:12:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:12:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hhM-0002I8-7V; Wed, 08 Jan 2014 01:12:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0hhK-0002I1-LU
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:12:42 +0000
Received: from [85.158.137.68:43803] by server-6.bemta-3.messagelabs.com id
	1D/47-04868-906ACC25; Wed, 08 Jan 2014 01:12:41 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389143559!7834327!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13913 invoked from network); 8 Jan 2014 01:12:40 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:12:40 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081BYvg030158
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:11:35 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081BXQU011956
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:11:34 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081BXUN005113; Wed, 8 Jan 2014 01:11:33 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:11:33 -0800
Date: Tue, 7 Jan 2014 17:11:27 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107171127.7f3f28af@mantra.us.oracle.com>
In-Reply-To: <1389140748-26524-6-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-6-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 5/5] xg_main: If
 XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue,  7 Jan 2014 19:25:48 -0500
Don Slutz <dslutz@verizon.com> wrote:

> Without this, gdb does not report an error.
> 
> With this patch and using a 1G hvm domU:
> 
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> 
> Drop the output of iop->remain because it will most likely be zero,
> which leads to a strange message:
> 
> ERROR: failed to read 0 bytes. errno:14 rc:-1
> 
> Add the address to the write-error message because it may be the
> only message displayed.
> 
> Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
> error and so iop->remain will be zero.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 3b2a285..0fc3f82 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->gwr = 0;       /* not writing to guest */
>  
>      if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> -        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);
> +        return tobuf_len;
> +    }
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
> @@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>      iop->gwr = 1;       /* writing to guest */
>  
>      if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
> -        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
> +              guestva, errno, rc);
> +        return buflen;
> +    }
>      return iop->remain;
>  }
>  
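The read path's new contract can be sketched in isolation. fake_hcall() below is a hypothetical stand-in for _domctl_hcall(); the point is only the return-value convention: the function reports bytes *not* transferred, so forcing the full request length on failure is what makes gdb print "Cannot access memory":

```c
#include <assert.h>

/* Hypothetical stand-in for _domctl_hcall(): nonzero means the
 * XEN_DOMCTL_gdbsx_guestmemio hypercall failed. */
static int fake_hcall_rc;
static int fake_hcall(void) { return fake_hcall_rc; }

/* Mirrors the patched xg_read_mem() contract: return the number of
 * bytes that could NOT be read.  Before the patch, a failed hypercall
 * fell through and returned iop->remain (usually 0), which looked
 * like complete success to the caller. */
static int read_mem(int tobuf_len, int remain_after_call)
{
    if ( fake_hcall() )
        return tobuf_len;         /* failure: nothing was read */
    return remain_after_call;     /* bytes the hypervisor left unread */
}
```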



From xen-devel-bounces@lists.xen.org Wed Jan 08 01:13:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hiF-0002L7-OW; Wed, 08 Jan 2014 01:13:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0hiE-0002Kw-Ey
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:13:38 +0000
Received: from [85.158.139.211:15460] by server-4.bemta-5.messagelabs.com id
	34/4C-26791-146ACC25; Wed, 08 Jan 2014 01:13:37 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389143616!8428438!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32492 invoked from network); 8 Jan 2014 01:13:37 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Jan 2014 01:13:37 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 07 Jan 2014 17:09:36 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384329600"; d="scan'208";a="435427891"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 07 Jan 2014 17:13:28 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.18.116.9) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:13:28 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	fmsmsx109.amr.corp.intel.com (10.18.116.9) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:13:28 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.86]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 09:13:14 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH v3] VMX: Eliminate cr3 save/loading exiting when UG
	enabled
Thread-Index: AQHO/GRdiJc1D0vPN0aA1OgXIfYI9Zp6JG9g
Date: Wed, 8 Jan 2014 01:13:13 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
 when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2013-12-19:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> With the unrestricted guest (UG) feature, no vmexit should be
> triggered when the guest accesses cr3 in non-paging mode. This patch
> clears the cr3 save/load exiting bits in the VMCS control field to
> eliminate cr3-access vmexits on UG-capable hardware.
> 
> A previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
> did the same thing, but it caused guests to fail to boot on non-UG
> hardware, as reported by Jan, and it has been reverted
> (commit 1e2bf05ec37cf04b0e01585eae524509179f165e).
> 

Hi, Jun.

Can you help review this patch?

> This patch incorporates the fix, and guests are working well on both
> UG and non-UG platforms with it.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
> changes in v3:
> Revise the patch description according to Jan's suggestion
> 
> changes in v2:
> Fix the guest boot failure on non-UG platform.
> 
> There are some discussions around the first patch; please see the
> following link: http://www.gossamer-threads.com/lists/xen/devel/302810
> 
> ---
>  xen/arch/x86/hvm/vmx/vmx.c |    9 +++++----
>  1 files changed, 5 insertions(+), 4 deletions(-)
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index dfff628..f6409d6 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1157,7 +1157,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>              uint32_t cr3_ctls = (CPU_BASED_CR3_LOAD_EXITING |
>                                   CPU_BASED_CR3_STORE_EXITING);
>              v->arch.hvm_vmx.exec_control &= ~cr3_ctls;
> -            if ( !hvm_paging_enabled(v) )
> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>                  v->arch.hvm_vmx.exec_control |= cr3_ctls;
>  
>              /* Trap CR3 updates if CR3 memory events are enabled. */
> @@ -1231,7 +1231,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>      case 3:
>          if ( paging_mode_hap(v->domain) )
>          {
> -            if ( !hvm_paging_enabled(v) )
> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>                  v->arch.hvm_vcpu.hw_cr[3] =
>                      v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT];
>              vmx_load_pdptrs(v);
> @@ -2487,10 +2487,11 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>  
>      hvm_invalidate_regs_fields(regs);
>  
> -    if ( paging_mode_hap(v->domain) && hvm_paging_enabled(v) )
> +    if ( paging_mode_hap(v->domain) )
>      {
>          __vmread(GUEST_CR3, &v->arch.hvm_vcpu.hw_cr[3]);
> -        v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
> +        if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
> +            v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
>      }
>  
>      __vmread(VM_EXIT_REASON, &exit_reason);
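The exec-control logic in the patch above can be sketched on its own. The bit values follow the Intel SDM's primary processor-based VM-execution controls (CR3-load exiting is bit 15, CR3-store exiting bit 16); the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

#define CPU_BASED_CR3_LOAD_EXITING  (1u << 15)
#define CPU_BASED_CR3_STORE_EXITING (1u << 16)

/* Mirrors the patched branch in vmx_update_guest_cr(): CR3 load/store
 * exits are only needed while the guest runs without paging AND the
 * hardware lacks unrestricted-guest mode; with UG the CPU handles
 * non-paged CR3 accesses itself, so the exits can stay disabled. */
static uint32_t update_cr3_exiting(uint32_t exec_control,
                                   int paging_enabled,
                                   int unrestricted_guest)
{
    const uint32_t cr3_ctls = CPU_BASED_CR3_LOAD_EXITING |
                              CPU_BASED_CR3_STORE_EXITING;

    exec_control &= ~cr3_ctls;                  /* default: no CR3 exits */
    if ( !paging_enabled && !unrestricted_guest )
        exec_control |= cr3_ctls;               /* non-UG, non-paged: trap */
    return exec_control;
}
```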


Best regards,
Yang




From xen-devel-bounces@lists.xen.org Wed Jan 08 01:15:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hkK-0002VR-Ak; Wed, 08 Jan 2014 01:15:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0hkI-0002VI-Kw
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:15:46 +0000
Received: from [193.109.254.147:23958] by server-2.bemta-14.messagelabs.com id
	52/6B-00361-2C6ACC25; Wed, 08 Jan 2014 01:15:46 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389143743!7942779!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9870 invoked from network); 8 Jan 2014 01:15:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 01:15:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88555010"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 01:15:42 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 20:15:42 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	02:15:40 +0100
Message-ID: <52CCA6BC.5080406@citrix.com>
Date: Wed, 8 Jan 2014 01:15:40 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com> <52CCA496.80008@terremark.com>
In-Reply-To: <52CCA496.80008@terremark.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:06, Don Slutz wrote:
> On 01/07/14 19:55, Andrew Cooper wrote:
>> On 08/01/2014 00:25, Don Slutz wrote:
>>> Using a 1G hvm domU (in grub) and gdbsx:
>>>
>>> (gdb) set arch i8086
>>> warning: A handler for the OS ABI "GNU/Linux" is not built into this
>>> configuration
>>> of GDB.  Attempting to continue with the default i8086 settings.
>>>
>>> The target architecture is assumed to be i8086
>>> (gdb) target remote localhost:9999
>>> Remote debugging using localhost:9999
>>> Remote debugging from host 127.0.0.1
>>> 0x0000d475 in ?? ()
>>> (gdb) x/1xh 0x6ae9168b
>>>
>>> Will reproduce this bug.
>>>
>>> With a debug=y build you will get:
>>>
>>> Assertion '!preempt_count()' failed at preempt.c:37
>>>
>>> For a debug=n build you will get a dom0 VCPU hung (at some point) in:
>>>
>>>           [ffff82c4c0126eec] _write_lock+0x3c/0x50
>>>            ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
>>>            ffff82c4c0158885  dbg_rw_mem+0x115/0x360
>>>            ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
>>>            ffff82c4c01709ed  get_page+0x2d/0x100
>>>            ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
>>>            ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
>>>            ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
>>>            ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
>>>            ffff82c4c012938b  add_entry+0x4b/0xb0
>>>            ffff82c4c02223f9  syscall_enter+0xa9/0xae
>>>
>>> And gdb output:
>>>
>>> (gdb) x/1xh 0x6ae9168b
>>> 0x6ae9168b:     0x3024
>>> (gdb) x/1xh 0x6ae9168b
>>> 0x6ae9168b:     Ignoring packet error, continuing...
>>> Reply contains invalid hex digit 116
>>>
>>> The 1st one worked because the p2m.lock is recursive and the PCPU
>>> had not yet changed.
>>>
>>> crash reports (for example):
>>>
>>> crash> mm_rwlock_t 0xffff83083f913010
>>> struct mm_rwlock_t {
>>>    lock = {
>>>      raw = {
>>>        lock = 2147483647
>>>      },
>>>      debug = {<No data fields>}
>>>    },
>>>    unlock_level = 0,
>>>    recurse_count = 1,
>>>    locker = 1,
>>>    locker_function = 0xffff82c4c022c640 <__func__.13514>
>>> "__get_gfn_type_access"
>>> }
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> Technically this should include a Signed-off-by: Andrew Cooper
>> <andrew.cooper3@citrix.com> tag (being the author of the code) as well
>> as your own (being the discoverer of the bug and author of the commit
>> message), but I notice I accidentally omitted it from the original email
>> thread, so my apologies.
>
> I was not sure if I should have added it without you adding it... So I
> went without.

That is fair enough - it was my mistake to start with so no worries.

>
>
>> It should probably also include your Tested-by: tag
>
> That is fine with me.  Should I make a v3 of just this with both tags?
>
>    -Don Slutz

Depends whether the committers are happy accumulating tags and whether
there is further comment/changes required for the patch.

As a rule of thumb, I would say "no for now", with an "accumulate if a new
v3 is needed" or "a committer asks you to accumulate".

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:16:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:16:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hke-0002YO-Qe; Wed, 08 Jan 2014 01:16:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0hkd-0002Xy-4B
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:16:07 +0000
Received: from [85.158.137.68:60531] by server-14.bemta-3.messagelabs.com id
	D5/05-06105-6D6ACC25; Wed, 08 Jan 2014 01:16:06 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389143764!7021224!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28720 invoked from network); 8 Jan 2014 01:16:05 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:16:05 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081Ex58026202
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:15:00 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081Exng010165
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:14:59 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s081EwgI008085; Wed, 8 Jan 2014 01:14:58 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:14:57 -0800
Date: Tue, 7 Jan 2014 17:14:56 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140107171456.1a31e6c0@mantra.us.oracle.com>
In-Reply-To: <52CCA204.2020601@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014 00:55:32 +0000
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> On 08/01/2014 00:25, Don Slutz wrote:
> > Using a 1G hvm domU (in grub) and gdbsx:
> >
> > (gdb) set arch i8086
> > warning: A handler for the OS ABI "GNU/Linux" is not built into
> > this configuration of GDB.  Attempting to continue with the default
> > i8086 settings.
> >
> > The target architecture is assumed to be i8086
> > (gdb) target remote localhost:9999
> > Remote debugging using localhost:9999
> > Remote debugging from host 127.0.0.1
> > 0x0000d475 in ?? ()
> > (gdb) x/1xh 0x6ae9168b
> >
> > Will reproduce this bug.
> >
> > With a debug=y build you will get:
> >
> > Assertion '!preempt_count()' failed at preempt.c:37
> >
> > For a debug=n build you will get a dom0 VCPU hung (at some point)
> > in:
> >
> >          [ffff82c4c0126eec] _write_lock+0x3c/0x50
> >           ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
> >           ffff82c4c0158885  dbg_rw_mem+0x115/0x360
> >           ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
> >           ffff82c4c01709ed  get_page+0x2d/0x100
> >           ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
> >           ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
> >           ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
> >           ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
> >           ffff82c4c012938b  add_entry+0x4b/0xb0
> >           ffff82c4c02223f9  syscall_enter+0xa9/0xae
> >
> > And gdb output:
> >
> > (gdb) x/1xh 0x6ae9168b
> > 0x6ae9168b:     0x3024
> > (gdb) x/1xh 0x6ae9168b
> > 0x6ae9168b:     Ignoring packet error, continuing...
> > Reply contains invalid hex digit 116
> >
> > The 1st one worked because the p2m.lock is recursive and the PCPU
> > had not yet changed.
> >
> > crash reports (for example):
> >
> > crash> mm_rwlock_t 0xffff83083f913010
> > struct mm_rwlock_t {
> >   lock = {
> >     raw = {
> >       lock = 2147483647
> >     },
> >     debug = {<No data fields>}
> >   },
> >   unlock_level = 0,
> >   recurse_count = 1,
> >   locker = 1,
> >   locker_function = 0xffff82c4c022c640 <__func__.13514>
> > "__get_gfn_type_access" }
> >
> > Signed-off-by: Don Slutz <dslutz@verizon.com>
> 
> Technically this should include a Signed-off-by: Andrew Cooper
> <andrew.cooper3@citrix.com> tag (being the author of the code) as well
> as your own (being the discoverer of the bug and author of the commit
> message), but I notice I accidentally omitted it from the original
> email thread, so my apologies.
> 
> It should probably also include your Tested-by: tag

After the above changes:

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

> 
> Ian (with RM hat on):
>   This is a hypervisor reference counting error on a toolstack
> hypercall path.  Irrespective of any of the other patches in this
> series, I think this should be included ASAP (although probably
> subject to review from a third person), which will fix the hypervisor
> crashes from gdbsx usage. 
> ~Andrew
> 
> > ---
> >  xen/arch/x86/debug.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> > index a67a192..ba6a64d 100644
> > --- a/xen/arch/x86/debug.c
> > +++ b/xen/arch/x86/debug.c
> > @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain
> > *dp, int toaddr, if ( p2m_is_readonly(gfntype) && toaddr )
> >      {
> >          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> > -        return INVALID_MFN;
> > +        mfn = INVALID_MFN;
> >      }
> >  
> >      DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id,
> > mfn); +
> > +    if ( mfn == INVALID_MFN )
> > +    {
> > +        put_gfn(dp, *gfn);
> > +        *gfn = INVALID_GFN;
> > +    }
> > +
> >      return mfn;
> >  }
> >  
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:17:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hlg-0002j4-Ja; Wed, 08 Jan 2014 01:17:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0hle-0002il-AV
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:17:10 +0000
Received: from [193.109.254.147:35729] by server-14.bemta-14.messagelabs.com
	id 7F/D3-12628-517ACC25; Wed, 08 Jan 2014 01:17:09 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389143827!9385008!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11769 invoked from network); 8 Jan 2014 01:17:08 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:17:08 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081G5fm001933
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:16:06 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081G4jS019335
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:16:05 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081G4o7012726; Wed, 8 Jan 2014 01:16:04 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:16:04 -0800
Date: Tue, 7 Jan 2014 17:16:02 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107171602.1f9d8153@mantra.us.oracle.com>
In-Reply-To: <1389140748-26524-2-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-2-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
	files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue,  7 Jan 2014 19:25:44 -0500
Don Slutz <dslutz@verizon.com> wrote:

> These 2 files are changed in this patch set, so add the
> "Emacs local variables" blocks allowed by CODING_STYLE.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Will let some emacs user ack these... "vim" rules!!

Mukesh

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 8 ++++++++
>  xen/arch/x86/debug.c              | 8 ++++++++
>  2 files changed, 16 insertions(+)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c
> b/tools/debugger/gdbsx/xg/xg_main.c index 5736b86..0622ebd 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -821,3 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf,
> int buflen, uint64_t pgd3val) return iop->remain;
>  }
>  
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3e21ca8..a67a192 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -223,3 +223,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int
> len, domid_t domid, int toaddr, return len;
>  }
>  
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	with ESMTP ; Tue, 07 Jan 2014 17:16:04 -0800
Date: Tue, 7 Jan 2014 17:16:02 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107171602.1f9d8153@mantra.us.oracle.com>
In-Reply-To: <1389140748-26524-2-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-2-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
	files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue,  7 Jan 2014 19:25:44 -0500
Don Slutz <dslutz@verizon.com> wrote:

> These 2 files are changed in this patch set.  So add the allowed
> "Emacs local variables" from CODING_STYLE.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Will let some emacs user ack these... "vim" rules!!

Mukesh

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 8 ++++++++
>  xen/arch/x86/debug.c              | 8 ++++++++
>  2 files changed, 16 insertions(+)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 5736b86..0622ebd 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -821,3 +821,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>      return iop->remain;
>  }
>  
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index 3e21ca8..a67a192 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -223,3 +223,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>      return len;
>  }
>  
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:18:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hmV-0002qP-MW; Wed, 08 Jan 2014 01:18:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0hmT-0002q7-Pq
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:18:02 +0000
Received: from [85.158.137.68:6011] by server-17.bemta-3.messagelabs.com id
	B0/8E-15965-947ACC25; Wed, 08 Jan 2014 01:18:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389143878!7759450!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16772 invoked from network); 8 Jan 2014 01:18:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:18:00 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081GrnG027651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:16:53 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s081Gq6Z012405
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:16:52 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s081Gq8Y012390; Wed, 8 Jan 2014 01:16:52 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:16:51 -0800
Date: Tue, 7 Jan 2014 17:16:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107171649.2461264e@mantra.us.oracle.com>
In-Reply-To: <1389140748-26524-5-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-5-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 4/5] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue,  7 Jan 2014 19:25:47 -0500
Don Slutz <dslutz@verizon.com> wrote:

> I had coded this with XGERR, but gdb will try to read memory without
> a direct request from the user.  So the error message can be
> confusing.
> 
> Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

I was told the acked line must come after the sob.  jfyi.

Mukesh

> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 0622ebd..3b2a285 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>  {
>      struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
>      union {uint64_t llbuf8; char buf8[8];} u = {0};
> -    int i;
> +    int i, rc;
>  
>      XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
>  
> @@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->len = tobuf_len;
>      iop->gwr = 0;       /* not writing to guest */
>  
> -    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
> +    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> +        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> +              iop->remain, errno, rc);
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
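A minimal standalone sketch of the pattern in the hunk above: the hypercall's return value is captured in the `if` condition and reported through the trace macro rather than a user-facing error, since gdb issues speculative reads that are expected to fail at times. The stubbed `fail_hcall_stub()` and the `XGTRC` definition here are simplified stand-ins, not the real gdbsx interfaces.

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for gdbsx's XGTRC() trace macro. */
#define XGTRC(fmt, ...) fprintf(stderr, fmt, __VA_ARGS__)

/* Hypothetical stand-in for _domctl_hcall(): always fails, as if the
 * guest virtual address were unmapped. */
static int fail_hcall_stub(char *tobuf, int tobuf_len)
{
    (void)tobuf;
    (void)tobuf_len;
    errno = EFAULT;
    return -1;
}

/* The patch's idiom: assign-and-test in the condition, trace on failure,
 * propagate the return code unchanged. */
static int read_mem_demo(char *tobuf, int tobuf_len)
{
    int rc;

    if ( (rc = fail_hcall_stub(tobuf, tobuf_len)) )
        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
              tobuf_len, errno, rc);

    return rc;
}
```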


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:23:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hrq-0003XV-Jc; Wed, 08 Jan 2014 01:23:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0hrp-0003XQ-Cv
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:23:33 +0000
Received: from [85.158.139.211:60712] by server-17.bemta-5.messagelabs.com id
	A4/86-19152-498ACC25; Wed, 08 Jan 2014 01:23:32 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389144211!8219409!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1287 invoked from network); 8 Jan 2014 01:23:31 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-3.tower-206.messagelabs.com with SMTP;
	8 Jan 2014 01:23:31 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 17:23:16 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384329600"; d="scan'208";a="463220314"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 07 Jan 2014 17:23:16 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:23:15 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 09:23:11 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH 2/3] VMX,apicv: Set "NMI-window exiting" for NMI
Thread-Index: AQHO9t+CFwqLdVsuTkWsByJNYrdwupp6MmVw
Date: Wed, 8 Jan 2014 01:23:10 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A43DB@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-3-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1386814004-5574-3-git-send-email-yang.z.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/3] VMX,
	apicv: Set "NMI-window exiting" for NMI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2013-12-12:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> Enable NMI-window exiting if interrupt is blocked by NMI under apicv
> enabled platform.
> 

Hi Jun,

Can you help to review this patch? Thanks.

> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/hvm/vmx/intr.c |    7 ++++---
>  1 files changed, 4 insertions(+), 3 deletions(-)
> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> index 7757910..8507432 100644
> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c
> @@ -252,10 +252,11 @@ void vmx_intr_assist(void)
>          intblk = hvm_interrupt_blocked(v, intack);
>          if ( cpu_has_vmx_virtual_intr_delivery )
>          {
> -            /* Set "Interrupt-window exiting" for ExtINT */
> +            /* Set "Interrupt-window exiting" for ExtINT and NMI. */
>              if ( (intblk != hvm_intblk_none) &&
> -                 ( (intack.source == hvm_intsrc_pic) ||
> -                 ( intack.source == hvm_intsrc_vector) ) )
> +                 (intack.source == hvm_intsrc_pic ||
> +                  intack.source == hvm_intsrc_vector ||
> +                  intack.source == hvm_intsrc_nmi) )
>              {
>                  enable_intr_window(v, intack);
>                  goto out;
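The condition the hunk changes can be sketched in isolation: with virtual interrupt delivery enabled, an interrupt window is requested whenever delivery is blocked and the pending source is ExtINT (PIC), a plain vector, or (after the patch) an NMI. The enum names below are illustrative stand-ins, not the real hvm_intsrc_ and hvm_intblk_ definitions.

```c
#include <stdbool.h>

/* Illustrative stand-ins for Xen's interrupt-source and interrupt-blocking
 * enums; names and values are assumptions for this sketch only. */
enum intsrc { intsrc_none, intsrc_pic, intsrc_lapic, intsrc_vector, intsrc_nmi };
enum intblk { intblk_none, intblk_shadow, intblk_nmi_iret };

/* The predicate after the patch: NMI now also triggers an interrupt
 * window when delivery is blocked, alongside ExtINT and vector sources. */
static bool needs_intr_window(enum intblk blk, enum intsrc src)
{
    return (blk != intblk_none) &&
           (src == intsrc_pic ||
            src == intsrc_vector ||
            src == intsrc_nmi);
}
```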


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:28:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:28:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hw9-0003es-AU; Wed, 08 Jan 2014 01:28:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0hw7-0003en-Up
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:28:00 +0000
Received: from [85.158.143.35:47373] by server-3.bemta-4.messagelabs.com id
	CF/0C-32360-F99ACC25; Wed, 08 Jan 2014 01:27:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389144477!10207340!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4781 invoked from network); 8 Jan 2014 01:27:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 01:27:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90713413"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 01:27:56 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 20:27:56 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	02:27:54 +0100
Message-ID: <52CCA99A.3080709@citrix.com>
Date: Wed, 8 Jan 2014 01:27:54 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	<1389140748-26524-2-git-send-email-dslutz@verizon.com>
	<20140107171602.1f9d8153@mantra.us.oracle.com>
In-Reply-To: <20140107171602.1f9d8153@mantra.us.oracle.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
 files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:16, Mukesh Rathor wrote:
> On Tue,  7 Jan 2014 19:25:44 -0500
> Don Slutz <dslutz@verizon.com> wrote:
>
>> These 2 files are changed in this patch set.  So add the allowed
>> "Emacs local variables" from CODING_STYLE.
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
> Will let some emacs user ack these... "vim" rules!!
>
> Mukesh

Wasn't there a patch a little while back to add equivalent vim rules to
each of the files (or at least a thread suggesting a patch)?  In the
interest of cooperation, it is only fair.
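For readers unfamiliar with the vim side: a modeline carrying roughly the same settings as the quoted Emacs block might look like the sketch below. It is illustrative only, not taken from the series or thread under discussion.

```c
/*
 * Illustrative vim equivalent of the Emacs local-variables block:
 * 4-column indentation using spaces rather than tabs.
 *
 * vim: set expandtab shiftwidth=4 tabstop=4 softtabstop=4:
 */
```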

As a fellow emacsian,

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

As for 4.4-ness:  If the rest of the series is deemed ok for a
release-ack, then this should go in for completeness.  If part of the
series is decided to be deferred, then this warrants deferring as well.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:16, Mukesh Rathor wrote:
> On Tue,  7 Jan 2014 19:25:44 -0500
> Don Slutz <dslutz@verizon.com> wrote:
>
>> These 2 files are changed in this patch set.  So add the allowed
>> "Emacs local variables" from CODING_STYLE.
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
> Will let some emacs user ack these... "vim" rules!!
>
> Mukesh

Wasn't there a patch a little while back to add equivalent vim rules to
each of the files (or at least a thread suggesting a patch)?  In the
interest of cooperation, it is only fair.

As a fellow emacsian,

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

As for 4.4-ness:  If the rest of the series is deemed ok for a
release-ack, then this should go in for completeness.  If part of the
series is decided to be deferred, then this warrants deferring as well.
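
(For context: the trailer under discussion is the Emacs local-variables block that Xen's CODING_STYLE permits at the bottom of C source files. A representative example follows; the exact values are assumed from the tree's usual style rather than quoted from the patch itself:)

```c
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * tab-width: 4
 * indent-tabs-mode: nil
 * End:
 */
```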

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:29:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hx7-00045h-UA; Wed, 08 Jan 2014 01:29:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0hx6-00043I-2S
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:29:00 +0000
Received: from [85.158.143.35:48900] by server-2.bemta-4.messagelabs.com id
	97/CF-11386-BD9ACC25; Wed, 08 Jan 2014 01:28:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389144537!10205631!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27081 invoked from network); 8 Jan 2014 01:28:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 01:28:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90713527"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 01:28:57 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 20:28:56 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	02:28:55 +0100
Message-ID: <52CCA9D7.9050203@citrix.com>
Date: Wed, 8 Jan 2014 01:28:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>	<1386814004-5574-3-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A43DB@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A43DB@SHSMSX104.ccr.corp.intel.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: "Dong, Eddie" <eddie.dong@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] [PATCH 2/3] VMX,
 apicv: Set "NMI-window exiting" for NMI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:23, Zhang, Yang Z wrote:
> Zhang, Yang Z wrote on 2013-12-12:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>
>> Enable NMI-window exiting if the interrupt is blocked by an NMI on an
>> apicv-enabled platform.
>>
> Hi Jun,
>
> Can you help to review this patch? Thanks.

Also committed earlier today
(http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=185e83591ce420e0b004646b55c5e4783e388531)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:30:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:30:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0hy4-0004Ff-DB; Wed, 08 Jan 2014 01:30:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W0hy3-0004FY-Gx
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 01:29:59 +0000
Received: from [193.109.254.147:51469] by server-12.bemta-14.messagelabs.com
	id 32/68-13681-61AACC25; Wed, 08 Jan 2014 01:29:58 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389144597!9431877!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14287 invoked from network); 8 Jan 2014 01:29:57 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-5.tower-27.messagelabs.com with SMTP;
	8 Jan 2014 01:29:57 -0000
Received: from localhost (cpe-74-71-55-169.nyc.res.rr.com [74.71.55.169])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id B748E5885CC;
	Tue,  7 Jan 2014 17:29:55 -0800 (PST)
Date: Tue, 07 Jan 2014 20:29:51 -0500 (EST)
Message-Id: <20140107.202951.729942261773265015.davem@davemloft.net>
To: zoltan.kiss@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.1
	(shards.monkeyblade.net [0.0.0.0]);
	Tue, 07 Jan 2014 17:29:56 -0800 (PST)
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 8 Jan 2014 00:10:10 +0000

> This patch contains the new definitions necessary for grant mapping.
> 
> v2:
> - move unmapping to separate thread. The NAPI instance has to be scheduled
>   even from thread context, which can cause huge delays
> - that causes unfortunately bigger struct xenvif
> - store grant handle after checking validity
> 
> v3:
> - fix comment in xenvif_tx_dealloc_action()
> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>   unnecessary m2p_override. Also remove pages_to_[un]map members
> - BUG() if grant_tx_handle corrupted
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> 
> ---
>  drivers/net/xen-netback/common.h    |   25 ++++++
>  drivers/net/xen-netback/interface.c |    1 +
>  drivers/net/xen-netback/netback.c   |  163 +++++++++++++++++++++++++++++++++++
>  3 files changed, 189 insertions(+)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index d218ccd..f1071e3 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -79,6 +79,11 @@ struct pending_tx_info {
>  				  * if it is head of one or more tx
>  				  * reqs
>  				  */
> +	/* callback data for released SKBs. The	callback is always
> +	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
> +	 * contains the pending_idx
> +	 */
> +	struct ubuf_info callback_struct;
>  };
>  
>  #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
> @@ -108,6 +113,8 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
>  
> +#define NETBACK_INVALID_HANDLE -1
> +
>  struct xenvif {
>  	/* Unique identifier for this interface. */
>  	domid_t          domid;
> @@ -126,13 +133,23 @@ struct xenvif {
>  	pending_ring_idx_t pending_cons;
>  	u16 pending_ring[MAX_PENDING_REQS];
>  	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
> +	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
>  
>  	/* Coalescing tx requests before copying makes number of grant
>  	 * copy ops greater or equal to number of slots required. In
>  	 * worst case a tx request consumes 2 gnttab_copy.
>  	 */
>  	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
> +	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
> +	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
>  
> +	spinlock_t dealloc_lock;
> +	spinlock_t response_lock;
> +	pending_ring_idx_t dealloc_prod;
> +	pending_ring_idx_t dealloc_cons;
> +	u16 dealloc_ring[MAX_PENDING_REQS];
> +	struct task_struct *dealloc_task;
> +	wait_queue_head_t dealloc_wq;
>  
>  	/* Use kthread for guest RX */
>  	struct task_struct *task;
> @@ -221,6 +238,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
>  int xenvif_kthread(void *data);
>  void xenvif_kick_thread(struct xenvif *vif);
>  
> +int xenvif_dealloc_kthread(void *data);
> +
>  /* Determine whether the needed number of slots (req) are available,
>   * and set req_event if not.
>   */
> @@ -228,6 +247,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
>  
>  void xenvif_stop_queue(struct xenvif *vif);
>  
> +/* Callback from stack when TX packet can be released */
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
> +
> +/* Unmap a pending page, usually has to be called before xenvif_idx_release */
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
> +
>  extern bool separate_tx_rx_irq;
>  
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 8d6def2..7170f97 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -37,6 +37,7 @@
>  
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> +#include <xen/balloon.h>
>  
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index addfe1d1..7c241f9 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -771,6 +771,19 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	return page;
>  }
>  
> +static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
> +	       struct xen_netif_tx_request *txp,
> +	       struct gnttab_map_grant_ref *gop)
> +{
> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
> +			  GNTMAP_host_map | GNTMAP_readonly,
> +			  txp->gref, vif->domid);
> +
> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
> +	       sizeof(*txp));
> +
> +}
> +
>  static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
> @@ -1599,6 +1612,105 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  	return work_done;
>  }
>  
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
> +{
> +	unsigned long flags;
> +	pending_ring_idx_t index;
> +	u16 pending_idx = ubuf->desc;
> +	struct pending_tx_info *temp =
> +		container_of(ubuf, struct pending_tx_info, callback_struct);
> +	struct xenvif *vif =
> +		container_of(temp - pending_idx, struct xenvif,
> +			pending_tx_info[0]);
> +
> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
> +	do {
> +		pending_idx = ubuf->desc;
> +		ubuf = (struct ubuf_info *) ubuf->ctx;
> +		index = pending_index(vif->dealloc_prod);
> +		vif->dealloc_ring[index] = pending_idx;
> +		/* Sync with xenvif_tx_action_dealloc:
> +		 * insert idx then incr producer.
> +		 */
> +		smp_wmb();
> +		vif->dealloc_prod++;
> +	} while (ubuf);
> +	wake_up(&vif->dealloc_wq);
> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
> +}
> +
> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> +{
> +	struct gnttab_unmap_grant_ref *gop;
> +	pending_ring_idx_t dc, dp;
> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
> +	unsigned int i = 0;
> +
> +	dc = vif->dealloc_cons;
> +	gop = vif->tx_unmap_ops;
> +
> +	/* Free up any grants we have finished using */
> +	do {
> +		dp = vif->dealloc_prod;
> +
> +		/* Ensure we see all indices enqueued by all
> +		 * xenvif_zerocopy_callback().
> +		 */
> +		smp_rmb();
> +
> +		while (dc != dp) {
> +			pending_idx =
> +				vif->dealloc_ring[pending_index(dc++)];
> +
> +			/* Already unmapped? */
> +			if (vif->grant_tx_handle[pending_idx] ==
> +				NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					"Trying to unmap invalid handle! "
> +					"pending_idx: %x\n", pending_idx);
> +				continue;
> +			}
> +
> +			pending_idx_release[gop-vif->tx_unmap_ops] =
> +				pending_idx;
> +			gnttab_set_unmap_op(gop,
> +					idx_to_kaddr(vif, pending_idx),
> +					GNTMAP_host_map,
> +					vif->grant_tx_handle[pending_idx]);
> +			vif->grant_tx_handle[pending_idx] =
> +				NETBACK_INVALID_HANDLE;
> +			++gop;
> +		}
> +
> +	} while (dp != vif->dealloc_prod);
> +
> +	vif->dealloc_cons = dc;
> +
> +	if (gop - vif->tx_unmap_ops > 0) {
> +		int ret;
> +		ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +				vif->tx_unmap_ops,
> +				gop - vif->tx_unmap_ops);
> +		if (ret) {
> +			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
> +				gop - vif->tx_unmap_ops, ret);
> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
> +				netdev_err(vif->dev,
> +					" host_addr: %llx handle: %x status: %d\n",
> +					gop[i].host_addr,
> +					gop[i].handle,
> +					gop[i].status);
> +			}
> +			BUG();
> +		}
> +	}
> +
> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> +		xenvif_idx_release(vif, pending_idx_release[i],
> +				XEN_NETIF_RSP_OKAY);
> +}
> +
> +
>  /* Called after netfront has transmitted */
>  int xenvif_tx_action(struct xenvif *vif, int budget)
>  {
> @@ -1665,6 +1777,27 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  	vif->mmap_pages[pending_idx] = NULL;
>  }
>  
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +				"Trying to unmap invalid handle! pending_idx: %x\n",
> +				pending_idx);
> +		return;
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			idx_to_kaddr(vif, pending_idx),
> +			GNTMAP_host_map,
> +			vif->grant_tx_handle[pending_idx]);
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +			&tx_unmap_op,
> +			1);
> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1726,6 +1859,14 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline int tx_dealloc_work_todo(struct xenvif *vif)

Make this return bool.

> +		return 1;

return true;

> +	return 0;

return false;

> +		wait_event_interruptible(vif->dealloc_wq,
> +					tx_dealloc_work_todo(vif) ||
> +					 kthread_should_stop());

Inconsistent indentation.  You should make the arguments line up at
exactly the first column after the opening parenthesis of the function
call.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
> +				netdev_err(vif->dev,
> +					" host_addr: %llx handle: %x status: %d\n",
> +					gop[i].host_addr,
> +					gop[i].handle,
> +					gop[i].status);
> +			}
> +			BUG();
> +		}
> +	}
> +
> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> +		xenvif_idx_release(vif, pending_idx_release[i],
> +				XEN_NETIF_RSP_OKAY);
> +}
> +
> +
>  /* Called after netfront has transmitted */
>  int xenvif_tx_action(struct xenvif *vif, int budget)
>  {
> @@ -1665,6 +1777,27 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  	vif->mmap_pages[pending_idx] = NULL;
>  }
>  
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +				"Trying to unmap invalid handle! pending_idx: %x\n",
> +				pending_idx);
> +		return;
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			idx_to_kaddr(vif, pending_idx),
> +			GNTMAP_host_map,
> +			vif->grant_tx_handle[pending_idx]);
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +			&tx_unmap_op,
> +			1);
> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1726,6 +1859,14 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline int tx_dealloc_work_todo(struct xenvif *vif)

Make this return bool.

> +		return 1;

return true;

> +	return 0;

return false;

> +		wait_event_interruptible(vif->dealloc_wq,
> +					tx_dealloc_work_todo(vif) ||
> +					 kthread_should_stop());

Inconsistent indentation.  Make the arguments line up exactly one
column after the opening parenthesis of the function call.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:37:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0i59-0004cB-Ul; Wed, 08 Jan 2014 01:37:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0i57-0004c3-Sw
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:37:18 +0000
Received: from [85.158.143.35:11428] by server-3.bemta-4.messagelabs.com id
	B8/1F-32360-DCBACC25; Wed, 08 Jan 2014 01:37:17 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389145035!10207962!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31699 invoked from network); 8 Jan 2014 01:37:16 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-21.messagelabs.com with SMTP;
	8 Jan 2014 01:37:16 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 17:36:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384329600"; d="scan'208";a="435436378"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 07 Jan 2014 17:36:45 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:36:44 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:36:44 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 09:36:42 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [PATCH 2/3] VMX, apicv: Set "NMI-window exiting"
	for NMI
Thread-Index: AQHPDBIVVvD0s3QPBU6jDW5GgYOUsQ==
Date: Wed, 8 Jan 2014 01:36:42 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A442B@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-3-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A43DB@SHSMSX104.ccr.corp.intel.com>
	<52CCA9D7.9050203@citrix.com>
In-Reply-To: <52CCA9D7.9050203@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] [PATCH 2/3] VMX,
 apicv: Set "NMI-window exiting" for NMI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper wrote on 2014-01-08:
> On 08/01/2014 01:23, Zhang, Yang Z wrote:
>> Zhang, Yang Z wrote on 2013-12-12:
>>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>> 
>>> Enable NMI-window exiting if interrupt is blocked by NMI under
>>> apicv enabled platform.
>>> 
>> Hi Jun,
>> 
>> Can you help to review this patch? Thanks.
> 
> Also committed earlier today
> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=185e83591ce42
> 0e0 b004646b55c5e4783e388531)
> 
> ~Andrew

I see it. Thanks for your reminder.

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:39:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0i7W-00059O-Gi; Wed, 08 Jan 2014 01:39:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0i7V-00059H-9S
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:39:45 +0000
Received: from [85.158.143.35:18332] by server-2.bemta-4.messagelabs.com id
	1E/83-11386-06CACC25; Wed, 08 Jan 2014 01:39:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389145182!3146300!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7076 invoked from network); 8 Jan 2014 01:39:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:39:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081cdK6020826
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:38:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081cceW014039
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:38:39 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081ccTb014031; Wed, 8 Jan 2014 01:38:38 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:38:38 -0800
Date: Tue, 7 Jan 2014 17:38:37 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107173837.028938c9@mantra.us.oracle.com>
In-Reply-To: <1389140748-26524-4-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue,  7 Jan 2014 19:25:46 -0500
Don Slutz <dslutz@verizon.com> wrote:

> If dbg_debug is non-zero, output debug.
> 
> Include put_gfn debug logging.
> 
> Here is a sample output at dbg_debug == 2:
> 
> (XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
> (XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
> (XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
> (XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
> (XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
> (XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$2
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

> ---
>  xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
>  1 file changed, 26 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index ba6a64d..777e5ba 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -30,16 +30,9 @@
>   * gdbsx, etc..
>   */
>  
> -#ifdef XEN_KDB_CONFIG
> -#include "../kdb/include/kdbdefs.h"
> -#include "../kdb/include/kdbproto.h"
> -#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
> -#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
> -#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
> -#else
> -#define DBGP1(...) ((void)0)
> -#define DBGP2(...) ((void)0)
> -#endif
> +static volatile int dbg_debug;
> +#define DBGP(...) {(dbg_debug) ? printk(__VA_ARGS__) : 0;}
> +#define DBGP1(...) {(dbg_debug > 1) ? printk(__VA_ARGS__) : 0;}
>  
>  /* Returns: mfn for the given (hvm guest) vaddr */
>  static unsigned long 
> @@ -50,27 +43,28 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>      uint32_t pfec = PFEC_page_present;
>      p2m_type_t gfntype;
>  
> -    DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
> +    DBGP1("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
>  
>      *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
>      if ( *gfn == INVALID_GFN )
>      {
> -        DBGP2("kdb:bad gfn from gva_to_gfn\n");
> +        DBGP1("kdb:bad gfn from gva_to_gfn\n");
>          return INVALID_MFN;
>      }
>  
>      mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
> -        DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> +        DBGP1("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>          mfn = INVALID_MFN;
>      }
>  
> -    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> +    DBGP1("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
>      if ( mfn == INVALID_MFN )
>      {
>          put_gfn(dp, *gfn);
> +        DBGP1("R: domid:%d gfn:%lx\n", dp->domain_id, *gfn);
>          *gfn = INVALID_GFN;
>      }
>  
> @@ -100,7 +94,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>      unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
>      unsigned long mfn = cr3 >> PAGE_SHIFT;
>  
> -    DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
> +    DBGP1("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
>            cr3, pgd3val);
>  
>      if ( pgd3val == 0 )
> @@ -109,11 +103,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>          l4e = l4t[l4_table_offset(vaddr)];
>          unmap_domain_page(l4t);
>          mfn = l4e_get_pfn(l4e);
> -        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
> -              l4_table_offset(vaddr), l4e, mfn);
> +        DBGP1("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
> +              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
>          if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
>          {
> -            DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
> +            DBGP("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>              return INVALID_MFN;
>          }
>  
> @@ -121,12 +115,12 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>          l3e = l3t[l3_table_offset(vaddr)];
>          unmap_domain_page(l3t);
>          mfn = l3e_get_pfn(l3e);
> -        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
> -              l3_table_offset(vaddr), l3e, mfn);
> +        DBGP1("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
> +              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
>          if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
>               (l3e_get_flags(l3e) & _PAGE_PSE) )
>          {
> -            DBGP1("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
> +            DBGP("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>              return INVALID_MFN;
>          }
>      }
> @@ -135,20 +129,20 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
>      l2e = l2t[l2_table_offset(vaddr)];
>      unmap_domain_page(l2t);
>      mfn = l2e_get_pfn(l2e);
> -    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
> -          l2e, mfn);
> +    DBGP1("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
> +          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
>      if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
>           (l2e_get_flags(l2e) & _PAGE_PSE) )
>      {
> -        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
> +        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>          return INVALID_MFN;
>      }
>      l1t = map_domain_page(mfn);
>      l1e = l1t[l1_table_offset(vaddr)];
>      unmap_domain_page(l1t);
>      mfn = l1e_get_pfn(l1e);
> -    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
> -          l1e, mfn);
> +    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
> +          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
>  
>      return mfn_valid(mfn) ? mfn : INVALID_MFN;
>  }
> @@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
>          unmap_domain_page(va);
>          if ( gfn != INVALID_GFN )
> +        {
>              put_gfn(dp, gfn);
> +            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
> +                  addr, pagecnt, dp->domain_id, gfn);
> +        }
>  
>          addr += pagecnt;
>          buf += pagecnt;
> @@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>      struct domain *dp = get_domain_by_id(domid);
>      int hyp = (domid == DOMID_IDLE);
>  
> -    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
> +    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
>            addr, buf, len, domid, toaddr, dp);
>      if ( hyp )
>      {
> @@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
>      put_domain(dp);
>      }
>  
> -    DBGP2("gmem:exit:len:$%d\n", len);
> +    DBGP1("gmem:exit:len:$%d\n", len);
>      return len;
>  }
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
>      if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
>           (l2e_get_flags(l2e) & _PAGE_PSE) )
>      {
> -        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr,
> cr3);
> +        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
>          return INVALID_MFN;
>      }
>      l1t = map_domain_page(mfn);
>      l1e = l1t[l1_table_offset(vaddr)];
>      unmap_domain_page(l1t);
>      mfn = l1e_get_pfn(l1e);
> -    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t,
> l1_table_offset(vaddr),
> -          l1e, mfn);
> +    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
> +          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
>  
>      return mfn_valid(mfn) ? mfn : INVALID_MFN;
>  }
> @@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf,
> int len, struct domain *dp, 
>          unmap_domain_page(va);
>          if ( gfn != INVALID_GFN )
> +        {
>              put_gfn(dp, gfn);
> +            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
> +                  addr, pagecnt, dp->domain_id, gfn);
> +        }
>  
>          addr += pagecnt;
>          buf += pagecnt;
> @@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len,
> domid_t domid, int toaddr, struct domain *dp =
> get_domain_by_id(domid); int hyp = (domid == DOMID_IDLE);
>  
> -    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
> +    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
>            addr, buf, len, domid, toaddr, dp);
>      if ( hyp )
>      {
> @@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len,
> domid_t domid, int toaddr, put_domain(dp);
>      }
>  
> -    DBGP2("gmem:exit:len:$%d\n", len);
> +    DBGP1("gmem:exit:len:$%d\n", len);
>      return len;
>  }
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:42:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0i9z-0005Hm-4V; Wed, 08 Jan 2014 01:42:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0i9y-0005He-1l
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:42:18 +0000
Received: from [85.158.137.68:62250] by server-4.bemta-3.messagelabs.com id
	72/83-10414-9FCACC25; Wed, 08 Jan 2014 01:42:17 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389145333!7798839!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17507 invoked from network); 8 Jan 2014 01:42:16 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-6.tower-31.messagelabs.com with SMTP;
	8 Jan 2014 01:42:16 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Jan 2014 17:41:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384329600"; d="scan'208";a="435438391"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 07 Jan 2014 17:41:57 -0800
Received: from fmsmsx157.amr.corp.intel.com (10.18.116.73) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:41:57 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX157.amr.corp.intel.com (10.18.116.73) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 17:41:57 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.85]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 09:41:52 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH 2/3] VMX,apicv: Set "NMI-window exiting" for NMI
Thread-Index: AQHO9t+CFwqLdVsuTkWsByJNYrdwupp6MmVwgAAEF7A=
Date: Wed, 8 Jan 2014 01:41:52 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A4457@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-3-git-send-email-yang.z.zhang@intel.com> 
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/3] VMX,
	apicv: Set "NMI-window exiting" for NMI
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2014-01-08:
> Zhang, Yang Z wrote on 2013-12-12:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> Enable NMI-window exiting if interrupt is blocked by NMI under apicv
>> enabled platform.
>> 
> 
> Hi Jun,
> 
> Can you help to review this patch? Thanks.

Sorry, it was already committed earlier today.

> 
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>>  xen/arch/x86/hvm/vmx/intr.c |    7 ++++---
>>  1 files changed, 4 insertions(+), 3 deletions(-) diff --git
>> a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c index
>> 7757910..8507432 100644 --- a/xen/arch/x86/hvm/vmx/intr.c +++
>> b/xen/arch/x86/hvm/vmx/intr.c @@ -252,10 +252,11 @@ void
>> vmx_intr_assist(void)
>>          intblk = hvm_interrupt_blocked(v, intack);
>>          if ( cpu_has_vmx_virtual_intr_delivery )
>>          {
>> -            /* Set "Interrupt-window exiting" for ExtINT */
>> +            /* Set "Interrupt-window exiting" for ExtINT and NMI.
>> + */
>>              if ( (intblk != hvm_intblk_none) &&
>> -                 ( (intack.source == hvm_intsrc_pic) ||
>> -                 ( intack.source == hvm_intsrc_vector) ) )
>> +                 (intack.source == hvm_intsrc_pic ||
>> +                  intack.source == hvm_intsrc_vector ||
>> +                  intack.source == hvm_intsrc_nmi) )
>>              {
>>                  enable_intr_window(v, intack);
>>                  goto out;
> 
> 
> Best regards,
> Yang
>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 01:45:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 01:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0iD9-0005RW-PY; Wed, 08 Jan 2014 01:45:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0iD8-0005RN-2A
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 01:45:34 +0000
Received: from [85.158.139.211:8990] by server-10.bemta-5.messagelabs.com id
	EA/07-01405-DBDACC25; Wed, 08 Jan 2014 01:45:33 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389145531!8394269!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 858 invoked from network); 8 Jan 2014 01:45:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 01:45:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s081iQqw018041
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 01:44:27 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081iPbW013800
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 01:44:26 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s081iPNM013790; Wed, 8 Jan 2014 01:44:25 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 17:44:24 -0800
Date: Tue, 7 Jan 2014 17:44:23 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140107174423.66106c9c@mantra.us.oracle.com>
In-Reply-To: <52CCA204.2020601@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014 00:55:32 +0000
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> On 08/01/2014 00:25, Don Slutz wrote:
> > Using a 1G hvm domU (in grub) and gdbsx:
> >
..... 

> 
> Ian (with RM hat on):
>   This is a hypervisor reference counting error on a toolstack
> hypercall path.  Irrespective of any of the other patches in this
> series, I think this should be included ASAP (although probably
> subject to review from a third person), which will fix the hypervisor
> crashes from gdbsx usage.

I remember long ago mentioning to our packaging team to make gdbsx
root-executable only.

What would be a good place to document that gdbsx should be removed from
production systems, and/or be made root-executable only?

thanks
mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:12:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0idB-0007L8-11; Wed, 08 Jan 2014 02:12:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eric.dumazet@gmail.com>) id 1W0id9-0007L2-Sa
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 02:12:28 +0000
Received: from [85.158.139.211:56066] by server-4.bemta-5.messagelabs.com id
	85/7A-26791-B04BCC25; Wed, 08 Jan 2014 02:12:27 +0000
X-Env-Sender: eric.dumazet@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389147144!8396488!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14982 invoked from network); 8 Jan 2014 02:12:26 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 02:12:26 -0000
Received: by mail-pb0-f45.google.com with SMTP id rp16so929583pbb.32
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 18:12:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:content-transfer-encoding:mime-version;
	bh=2thiOsfmr1cq5lZs8KNR+LJ84ricbv0tcfClbpk5YkY=;
	b=hHesfCDssrG8jN5cxTtVpM7qiSqf18JA48syDjCG6V+JesN/s5eelv9dZMd1wbkrFr
	BN0iu5j7hL54G4NgoJnQAkf8aFRDfmOxsTsKchpTIZgz+rCmDcafasycKt4i6Tj3IXeA
	Vl0K1UGj0QFgmjkIsqlySAInfMdnYU8nZTJYtLDcrfpvvGDtWMFGjZVp1Dc4vxO15E3r
	OOp8s5M9vgoi1x8fbnWBZSQJ83Nx/EfPGxiaXdZhE1O6RYG/cAzfQzeFs8pLVuUi62bJ
	CWYsG0uknpbMQDfn2634LBNdVLvPJs6gCaoqSvncyKZiHRoZC7qtVScihb46GVhiKSsU
	lqZg==
X-Received: by 10.68.57.98 with SMTP id h2mr140004872pbq.17.1389147144110;
	Tue, 07 Jan 2014 18:12:24 -0800 (PST)
Received: from [172.29.161.154] ([172.29.161.154])
	by mx.google.com with ESMTPSA id
	ju10sm139178945pbd.33.2014.01.07.18.12.22 for <multiple recipients>
	(version=SSLv3 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 18:12:23 -0800 (PST)
Message-ID: <1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 07 Jan 2014 18:12:21 -0800
In-Reply-To: <1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: Evolution 3.2.3-0ubuntu6 
Mime-Version: 1.0
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 00:10 +0000, Zoltan Kiss wrote:

>  
> +		if (skb_shinfo(skb)->frag_list) {
> +			nskb = skb_shinfo(skb)->frag_list;
> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
> +			skb->len += nskb->len;
> +			skb->data_len += nskb->len;
> +			skb->truesize += nskb->truesize;
> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			vif->tx_zerocopy_sent += 2;
> +			nskb = skb;
> +
> +			skb = skb_copy_expand(skb,
> +					0,
> +					0,
> +					GFP_ATOMIC | __GFP_NOWARN);

skb can be NULL here

> +			skb_shinfo(skb)->destructor_arg = NULL;
> +		}
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
> @@ -1568,6 +1660,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		}
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:12:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0idB-0007L8-11; Wed, 08 Jan 2014 02:12:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eric.dumazet@gmail.com>) id 1W0id9-0007L2-Sa
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 02:12:28 +0000
Received: from [85.158.139.211:56066] by server-4.bemta-5.messagelabs.com id
	85/7A-26791-B04BCC25; Wed, 08 Jan 2014 02:12:27 +0000
X-Env-Sender: eric.dumazet@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389147144!8396488!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14982 invoked from network); 8 Jan 2014 02:12:26 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 02:12:26 -0000
Received: by mail-pb0-f45.google.com with SMTP id rp16so929583pbb.32
	for <xen-devel@lists.xenproject.org>;
	Tue, 07 Jan 2014 18:12:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:content-transfer-encoding:mime-version;
	bh=2thiOsfmr1cq5lZs8KNR+LJ84ricbv0tcfClbpk5YkY=;
	b=hHesfCDssrG8jN5cxTtVpM7qiSqf18JA48syDjCG6V+JesN/s5eelv9dZMd1wbkrFr
	BN0iu5j7hL54G4NgoJnQAkf8aFRDfmOxsTsKchpTIZgz+rCmDcafasycKt4i6Tj3IXeA
	Vl0K1UGj0QFgmjkIsqlySAInfMdnYU8nZTJYtLDcrfpvvGDtWMFGjZVp1Dc4vxO15E3r
	OOp8s5M9vgoi1x8fbnWBZSQJ83Nx/EfPGxiaXdZhE1O6RYG/cAzfQzeFs8pLVuUi62bJ
	CWYsG0uknpbMQDfn2634LBNdVLvPJs6gCaoqSvncyKZiHRoZC7qtVScihb46GVhiKSsU
	lqZg==
X-Received: by 10.68.57.98 with SMTP id h2mr140004872pbq.17.1389147144110;
	Tue, 07 Jan 2014 18:12:24 -0800 (PST)
Received: from [172.29.161.154] ([172.29.161.154])
	by mx.google.com with ESMTPSA id
	ju10sm139178945pbd.33.2014.01.07.18.12.22 for <multiple recipients>
	(version=SSLv3 cipher=RC4-SHA bits=128/128);
	Tue, 07 Jan 2014 18:12:23 -0800 (PST)
Message-ID: <1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 07 Jan 2014 18:12:21 -0800
In-Reply-To: <1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: Evolution 3.2.3-0ubuntu6 
Mime-Version: 1.0
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 00:10 +0000, Zoltan Kiss wrote:

>  
> +		if (skb_shinfo(skb)->frag_list) {
> +			nskb = skb_shinfo(skb)->frag_list;
> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
> +			skb->len += nskb->len;
> +			skb->data_len += nskb->len;
> +			skb->truesize += nskb->truesize;
> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			vif->tx_zerocopy_sent += 2;
> +			nskb = skb;
> +
> +			skb = skb_copy_expand(skb,
> +					0,
> +					0,
> +					GFP_ATOMIC | __GFP_NOWARN);

skb can be NULL here (skb_copy_expand() can fail under GFP_ATOMIC)

> +			skb_shinfo(skb)->destructor_arg = NULL;
> +		}
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
> @@ -1568,6 +1660,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		}
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:19:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ijp-0007oO-Tn; Wed, 08 Jan 2014 02:19:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0ijo-0007o2-NC
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 02:19:20 +0000
Received: from [85.158.137.68:33901] by server-14.bemta-3.messagelabs.com id
	01/70-06105-7A5BCC25; Wed, 08 Jan 2014 02:19:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389147557!7857456!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4390 invoked from network); 8 Jan 2014 02:19:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 02:19:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88566793"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 02:19:17 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 21:19:17 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	03:19:15 +0100
Message-ID: <52CCB5A3.2010308@citrix.com>
Date: Wed, 8 Jan 2014 02:19:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA1
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
	when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:13, Zhang, Yang Z wrote:
> Zhang, Yang Z wrote on 2013-12-19:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>
>> With the unrestricted guest feature, no vmexit should be triggered
>> when the guest accesses cr3 in non-paging mode. This patch clears the
>> cr3 save/load exiting bits in the VMCS control field to eliminate
>> cr3-access vmexits on UG-capable hardware.
>>
>> The previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
>> did the same thing as this one, but it caused guests to fail to boot
>> on non-UG hardware, as reported by Jan, and has been reverted
>> (commit 1e2bf05ec37cf04b0e01585eae524509179f165e).
>>
> Hi, Jun.
>
> Can you help to review this patch?

This got committed earlier today
(http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f4fed540e78ac8a2bd3b1dee53a5206dde25f613),
as you are a maintainer and I (as an independent party as far as the
patch goes) reviewed it.

>
>> This patch incorporates the fix; guests work well on both UG and
>> non-UG platforms with it.
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>> changes in v3:
>> Revise the patch description according to Jan's suggestion
>>
>> changes in v2:
>> Fix the guest boot failure on non-UG platform.
>>
>> There are some discussions around the first patch, please see the
>> following link: http://www.gossamer-threads.com/lists/xen/devel/302810
>>
>> ---
>>  xen/arch/x86/hvm/vmx/vmx.c |    9 +++++----
>>  1 files changed, 5 insertions(+), 4 deletions(-)
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index dfff628..f6409d6 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -1157,7 +1157,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>>              uint32_t cr3_ctls = (CPU_BASED_CR3_LOAD_EXITING |
>>                                   CPU_BASED_CR3_STORE_EXITING);
>>              v->arch.hvm_vmx.exec_control &= ~cr3_ctls;
>> -            if ( !hvm_paging_enabled(v) )
>> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>>                  v->arch.hvm_vmx.exec_control |= cr3_ctls;
>>              /* Trap CR3 updates if CR3 memory events are enabled. */
>> @@ -1231,7 +1231,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>>      case 3:
>>          if ( paging_mode_hap(v->domain) )
>>          {
>> -            if ( !hvm_paging_enabled(v) )
>> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>>                  v->arch.hvm_vcpu.hw_cr[3] =
>>                      v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT];
>>              vmx_load_pdptrs(v);
>> @@ -2487,10 +2487,11 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>
>>      hvm_invalidate_regs_fields(regs);
>> -    if ( paging_mode_hap(v->domain) && hvm_paging_enabled(v) )
>> +    if ( paging_mode_hap(v->domain) )
>>      {
>>          __vmread(GUEST_CR3, &v->arch.hvm_vcpu.hw_cr[3]);
>> -        v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
>> +        if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
>> +            v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
>>      }
>>
>>      __vmread(VM_EXIT_REASON, &exit_reason);
>
> Best regards,
> Yang
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0iue-0008Vn-F9; Wed, 08 Jan 2014 02:30:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0iuc-0008Vi-SB
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 02:30:31 +0000
Received: from [85.158.137.68:55074] by server-1.bemta-3.messagelabs.com id
	C2/85-29598-648BCC25; Wed, 08 Jan 2014 02:30:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389148227!6650423!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1019 invoked from network); 8 Jan 2014 02:30:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 02:30:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="90723867"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 02:30:26 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 7 Jan 2014 21:30:26 -0500
Received: from [10.68.19.21] (10.68.19.21) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	03:30:24 +0100
Message-ID: <52CCB840.80207@citrix.com>
Date: Wed, 8 Jan 2014 02:30:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	<1389140748-26524-3-git-send-email-dslutz@verizon.com>	<52CCA204.2020601@citrix.com>
	<20140107174423.66106c9c@mantra.us.oracle.com>
In-Reply-To: <20140107174423.66106c9c@mantra.us.oracle.com>
X-Originating-IP: [10.68.19.21]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/2014 01:44, Mukesh Rathor wrote:
> On Wed, 8 Jan 2014 00:55:32 +0000
> Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
>> On 08/01/2014 00:25, Don Slutz wrote:
>>> Using a 1G hvm domU (in grub) and gdbsx:
>>>
> ..... 
>
>> Ian (with RM hat on):
>>   This is a hypervisor reference counting error on a toolstack
>> hypercall path.  Irrespective of any of the other patches in this
>> series, I think this should be included ASAP (although probably
>> subject to review from a third person), which will fix the hypervisor
>> crashes from gdbsx usage.
> I remember long ago mentioning to our packaging team to make gdbsx
> root-executable only.
>
> What would be a good place to document that gdbsx should be removed from
> production systems, and/or be made root-executable only?
>
> thanks
> mukesh
>
>

[root@idol ~]# ls -la /dev/xen/privcmd
crw-rw---- 1 root root 10, 57 Jan  7 11:48 /dev/xen/privcmd

As it currently stands (Linux 3.10), only root can open privcmd and issue
ioctls, so a non-root gdbsx process would presumably not function at
all.  I am not sure that any documentation needs updating.

Having said that, with my "future ventures into reducing required dom0
privileges" hat on, it would be very nice for a subset of hypercalls to
be available in a non-privileged, read-only form.  This would allow
read-only information from xl (and xentop and suchlike) to be available
to non-root users in dom0.

On the other hand, anyone with shell access in dom0 is likely a system
administrator, so will almost certainly be running with sudo privileges
(or as root) anyway.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:34:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0iyL-0000Bh-5t; Wed, 08 Jan 2014 02:34:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0iyK-0000Bb-3Y
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 02:34:20 +0000
Received: from [193.109.254.147:20304] by server-11.bemta-14.messagelabs.com
	id 63/3B-20576-B29BCC25; Wed, 08 Jan 2014 02:34:19 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389148457!9460410!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7324 invoked from network); 8 Jan 2014 02:34:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 02:34:18 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s082XBin021913
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 02:33:12 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s082XB8P014893
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 02:33:11 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s082XB23014879; Wed, 8 Jan 2014 02:33:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 18:33:10 -0800
Date: Tue, 7 Jan 2014 18:33:09 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140107183309.21ae078f@mantra.us.oracle.com>
In-Reply-To: <52CCA3B1.7070602@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-5-git-send-email-dslutz@verizon.com>
	<20140106175349.6cbd190b@mantra.us.oracle.com>
	<1389088824.31766.105.camel@kazak.uk.xensource.com>
	<1389088937.31766.107.camel@kazak.uk.xensource.com>
	<52CC2A2F.7010700@terremark.com>
	<20140107150148.4cbf1a73@mantra.us.oracle.com>
	<52CCA3B1.7070602@terremark.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 4/4] XEN_DOMCTL_gdbsx_guestmemio:
 always do the copyback.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 07 Jan 2014 20:02:41 -0500
Don Slutz <dslutz@verizon.com> wrote:

> On 01/07/14 18:01, Mukesh Rathor wrote:
> > On Tue, 07 Jan 2014 11:24:15 -0500
> > Don Slutz <dslutz@verizon.com> wrote:
....

> Using the info that page 1f is good and 20 is bad, a domctl request
> for 1ffff for 2 bytes would call dbg_rw_mem(), then dbg_rw_guest_mem(),
> which calculates pagecnt == 1, gets a valid mfn and returns that byte.
> The 2nd time pagecnt is also 1, but we get INVALID_MFN, so
> dbg_rw_guest_mem() returns 1.  dbg_rw_mem() also returns 1.
> gdbsx_guest_mem_io() returns -EFAULT, so no copyback.
> 
> At this point, of the 2 requested bytes, 1 byte is valid and 1 is not.
> Since copyback is not done, remain is 0.  So the caller gets the error
> and does not have this "partial success" information.

Again, the application cannot and should not rely on the validity
of the field when an hcall/syscall fails, since the failure point
is not known to the application, unless the failure is EAGAIN.

Hope that makes sense.

Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:35:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0izm-0000IP-M5; Wed, 08 Jan 2014 02:35:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0izl-0000IE-77
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 02:35:49 +0000
Received: from [85.158.143.35:36446] by server-2.bemta-4.messagelabs.com id
	5B/56-11386-489BCC25; Wed, 08 Jan 2014 02:35:48 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389148546!3150945!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18646 invoked from network); 8 Jan 2014 02:35:47 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-21.messagelabs.com with SMTP;
	8 Jan 2014 02:35:47 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 07 Jan 2014 18:35:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,621,1384329600"; d="scan'208";a="461720202"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 07 Jan 2014 18:35:44 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 18:35:43 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 18:35:43 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 10:35:38 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH v3] VMX: Eliminate cr3 save/loading exiting when UG
	enabled
Thread-Index: AQHO/GRdiJc1D0vPN0aA1OgXIfYI9Zp6JG9g//+MuYCAAIZlUA==
Date: Wed, 8 Jan 2014 02:35:38 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A44CE@SHSMSX104.ccr.corp.intel.com>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
	<52CCB5A3.2010308@citrix.com>
In-Reply-To: <52CCB5A3.2010308@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Keir Fraser <keir@xen.org>, "Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
 when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper wrote on 2014-01-08:
> On 08/01/2014 01:13, Zhang, Yang Z wrote:
>> Zhang, Yang Z wrote on 2013-12-19:
>>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>> 
>>> With the unrestricted guest feature there should be no vmexit
>>> triggered when the guest accesses cr3 in non-paging mode.  This
>>> patch clears the cr3 save/load exiting bits in the vmcs control
>>> field to eliminate cr3-access vmexits on UG-capable hardware.
>>> 
>>> The previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
>>> did the same thing as this one, but it caused guests to fail to
>>> boot on non-UG hardware, which was reported by Jan, and it has been
>>> reverted (commit 1e2bf05ec37cf04b0e01585eae524509179f165e).
>>> 
>> Hi, Jun.
>> 
>> Can you help to review this patch?
> 
> This got committed earlier today
> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f4fed540e78ac8a2bd3b1dee53a5206dde25f613)
> as you are a maintainer, and I (as an
> independent party as far as the patch goes) reviewed it.

I remember that in the old days, after a patch was committed, the maintainer
would reply to the mail to tell the author it had been applied.  Otherwise,
it's hard for the author to know in time.  Should we still follow this rule?

BTW: KVM always follows this rule.

> 
>> 
>>> This patch incorporates that fix, and guests work well on both
>>> UG and non-UG platforms with this patch.
>>> 
>>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>>> ---
>>> changes in v3:
>>> Revise the patch description according to Jan's suggestion
>>> 
>>> changes in v2:
>>> Fix the guest boot failure on non-UG platform.
>>> 
>>> There are some discussions around the first patch, please see the
>>> following link:
>>> http://www.gossamer-threads.com/lists/xen/devel/302810
>>> 
>>> ---
>>>  xen/arch/x86/hvm/vmx/vmx.c |    9 +++++----
>>>  1 files changed, 5 insertions(+), 4 deletions(-)
>>> 
>>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>>> index dfff628..f6409d6 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -1157,7 +1157,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>>>              uint32_t cr3_ctls = (CPU_BASED_CR3_LOAD_EXITING |
>>>                                   CPU_BASED_CR3_STORE_EXITING);
>>>              v->arch.hvm_vmx.exec_control &= ~cr3_ctls;
>>> -            if ( !hvm_paging_enabled(v) )
>>> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>>>                  v->arch.hvm_vmx.exec_control |= cr3_ctls;
>>> 
>>>              /* Trap CR3 updates if CR3 memory events are enabled. */
>>> @@ -1231,7 +1231,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
>>>      case 3:
>>>          if ( paging_mode_hap(v->domain) )
>>>          {
>>> -            if ( !hvm_paging_enabled(v) )
>>> +            if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>>>                  v->arch.hvm_vcpu.hw_cr[3] =
>>>                      v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT];
>>>              vmx_load_pdptrs(v);
>>> @@ -2487,10 +2487,11 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>> 
>>>      hvm_invalidate_regs_fields(regs);
>>> -    if ( paging_mode_hap(v->domain) && hvm_paging_enabled(v) )
>>> +    if ( paging_mode_hap(v->domain) )
>>>      {
>>>          __vmread(GUEST_CR3, &v->arch.hvm_vcpu.hw_cr[3]);
>>> -        v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
>>> +        if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
>>> +            v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
>>>      }
>>> 
>>>      __vmread(VM_EXIT_REASON, &exit_reason);
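[The two hunks in vmx_update_guest_cr() hinge on the same condition: CR3
load/store exiting is only needed when the guest has paging disabled AND the
hardware lacks Unrestricted Guest.  A hedged, stand-alone model of that
decision (hypothetical helper name, simplified from the real VMCS handling;
the bit positions match the Intel-defined primary processor-based controls)
looks like:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Primary processor-based VM-execution control bits (Intel SDM). */
#define CPU_BASED_CR3_LOAD_EXITING  (1u << 15)
#define CPU_BASED_CR3_STORE_EXITING (1u << 16)

/*
 * Hypothetical model of the patched logic: start by clearing both CR3
 * exiting bits, then re-enable them only when Xen must intercept CR3
 * itself (non-paging guest on hardware without Unrestricted Guest).
 */
static uint32_t cr3_exec_control(uint32_t exec_control,
                                 bool paging_enabled,
                                 bool unrestricted_guest)
{
    uint32_t cr3_ctls = CPU_BASED_CR3_LOAD_EXITING |
                        CPU_BASED_CR3_STORE_EXITING;

    exec_control &= ~cr3_ctls;
    if (!paging_enabled && !unrestricted_guest)
        exec_control |= cr3_ctls;
    return exec_control;
}
```

[On UG-capable hardware the bits stay clear even in non-paging mode, which
is exactly what removes the CR3-access vmexits; on non-UG hardware the
pre-patch behaviour is preserved, avoiding the boot failure the earlier
commit caused.]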
>> 
>> Best regards,
>> Yang
>> 
>>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 02:45:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 02:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0j9A-0000un-Sj; Wed, 08 Jan 2014 02:45:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W0j99-0000ui-8p
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 02:45:31 +0000
Received: from [85.158.139.211:54238] by server-15.bemta-5.messagelabs.com id
	2C/3C-08490-ACBBCC25; Wed, 08 Jan 2014 02:45:30 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389149128!8436067!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24311 invoked from network); 8 Jan 2014 02:45:29 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 02:45:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s082iOWp007521
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 02:44:25 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s082iMaM028683
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 02:44:23 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s082iM7H013064; Wed, 8 Jan 2014 02:44:22 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 07 Jan 2014 18:44:21 -0800
Date: Tue, 7 Jan 2014 18:44:20 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140107184420.60bbdd33@mantra.us.oracle.com>
In-Reply-To: <52CCB840.80207@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
	<20140107174423.66106c9c@mantra.us.oracle.com>
	<52CCB840.80207@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014 02:30:24 +0000
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> On 08/01/2014 01:44, Mukesh Rathor wrote:
> > On Wed, 8 Jan 2014 00:55:32 +0000
> > Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >
> >> On 08/01/2014 00:25, Don Slutz wrote:
> >>> Using a 1G hvm domU (in grub) and gdbsx:
> >>>
> > ..... 
> >
> >> Ian (with RM hat on):
> >>   This is a hypervisor reference counting error on a toolstack
> >> hypercall path.  Irrespective of any of the other patches in this
> >> series, I think this should be included ASAP (although probably
> >> subject to review from a third person), which will fix the
> >> hypervisor crashes from gdbsx usage.
> > I remember long ago mentioning to our packaging team to make gdbsx
> > root-executable only. 
> >
> > What would be a good place to document that gdbsx should be removed
> > from production systems, and/or be made root-executable only?
> >
> > thanks
> > mukesh
> >
> >
> 
> [root@idol ~]# ls -la /dev/xen/privcmd
> crw-rw---- 1 root root 10, 57 Jan  7 11:48 /dev/xen/privcmd
> 
> As it currently stands (Linux 3.10), only root can open privcmd and issue
> ioctls, so a non-root gdbsx process would presumably not function at
> all.  I am not sure that any documentation needs updating.

Ah, right, I remember now...  that's much better.  At least currently it
can't be abused by just anyone being able to run it.
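[For packagers, the "root-executable only" restriction discussed above is a
one-line mode change at install time.  A minimal sketch (the path and the
helper name are examples for illustration, not where gdbsx is actually
installed):]

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Hypothetical packaging helper: tighten a debug tool's permissions so
 * that only its owner (root, once a package installs it) may run it.
 * Returns the resulting permission bits, or -1 on error.
 */
static int restrict_to_root(const char *path)
{
    struct stat st;

    /* rwx------ : owner only, mirroring the 0660 root:root privcmd node. */
    if (chmod(path, 0700) != 0)
        return -1;
    if (stat(path, &st) != 0)
        return -1;
    return (int)(st.st_mode & 0777);
}
```

[In practice, as noted above, the privcmd device node's own 0660 root:root
permissions already stop a non-root gdbsx from doing anything useful, so
this belt-and-braces step is optional.]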

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 04:20:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 04:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0kcD-0005YG-2Y; Wed, 08 Jan 2014 04:19:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0kcA-0005YB-Mz
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 04:19:35 +0000
Received: from [85.158.139.211:6172] by server-1.bemta-5.messagelabs.com id
	1A/5E-21065-5D1DCC25; Wed, 08 Jan 2014 04:19:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389154770!8442483!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 644 invoked from network); 8 Jan 2014 04:19:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 04:19:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88586140"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 04:19:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 23:19:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W0kc4-0000zx-6K;
	Wed, 08 Jan 2014 04:19:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W0kc4-0005Is-4z;
	Wed, 08 Jan 2014 04:19:28 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24297-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 04:19:28 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24297: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24297 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore        fail REGR. vs. 23827

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24295
 test-armhf-armhf-xl           3 host-install(3)  broken in 24295 pass in 24297

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24250
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10       fail   like 22592
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 24250
 test-armhf-armhf-xl         4 capture-logs(4) broken in 24295 blocked in 24250
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24295 like 24250

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24295 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24295 never pass

version targeted for testing:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
  Tsahee Zidenberg <tsahee@gmx.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cedfdd43a9798e535a05690bb6f01394490d26bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 16:01:14 2014 +0100

    IOMMU: make page table deallocation preemptible
    
    This too can take an arbitrary amount of time.
    
    In fact, the bulk of the work is being moved to a tasklet, as handling
    the necessary preemption logic in line seems close to impossible given
    that the teardown may also be invoked on error paths.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

commit cc0a6c6c749a8693dc0e201773c10cd97e5e6ce0
Merge: 4746b5a... 81b1c7d...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 14:32:45 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 4746b5adc396bb2fc963b3156eab7267c6e7e541
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Dec 20 15:08:08 2013 +0000

    xen: arm: context switch the aux memory attribute registers
    
    We appear to have somehow missed these. Linux doesn't actually use them and
    none of the processors I've looked at actually define any bits in them (so
    they are UNK/SBZP) but it is good form to context switch them anyway.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 81b1c7de2339d2788352b162057e70130803f3cf
Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Date:   Tue Jan 7 15:09:42 2014 +0100

    AMD/IOMMU: fix infinite loop due to ivrs_bdf_entries larger than 16-bit value
    
    Certain AMD systems can have up to 0x10000 ivrs_bdf_entries.
    However, the loop variable (bdf) is declared as u16, which causes an
    infinite loop when parsing an IOMMU event log with an IO_PAGE_FAULT
    event.  This patch changes the variable to u32 instead.
    
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 62d33ca1048f4e08eaeb026c7b79239b4605b636
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:59:31 2014 +0100

    VTD/DMAR: free() correct pointer on error from acpi_parse_one_atsr()
    
    Free the allocated structure rather than the ACPI table ATS entry.
    
    On further analysis, there is another memory leak.  acpi_parse_dev_scope()
    could allocate scope->devices, and return with -ENOMEM.  All callers of
    acpi_parse_dev_scope() would then free the underlying structure, losing the
    pointer.
    
    These errors can only actually be reached through acpi_parse_dev_scope()
    (which passes type = DMAR_TYPE), but I am quite surprised Coverity didn't spot
    it.
    
    Coverity-ID: 1146949
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a9fe8c7dda440b84e178d65dcd64c0173b0a4b5d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:58:35 2014 +0100

    AMD/microcode: avoid use-after-free for the microcode buffer
    
    It is possible to free the mc_old buffer and then store it for use in the case
    of resume.
    
    This keeps the old semantics of being able to return an error even after a
    successful microcode application.
    
    Coverity-ID 1146953
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 2af30e62c4e562d7a4ec4185fdab20fb29354fd8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:57:15 2014 +0100

    AMD/iommu_detect: don't leak iommu structure on error paths
    
    Tweak the logic slightly to return the real errors from
    get_iommu_{,msi_}capabilities(), which at the moment is no functional change.
    
    Coverity-ID: 1146950
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b764615c391fdc2648d460245c748a3a319a296e
Merge: 57a4578... f4fed54...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 13:50:35 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 57a45785584e651b807eed08f3a6950d4ade0156
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 24 11:28:47 2013 +0000

    xen: driver/char: fix const declaration of DT compatible list
    
    The data type for DT compatible list should be:
        const char * const[]  __initconst
    
    Fix every serial driver that supports device tree.
    
    Spotted-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit dd3ab3881494136999138d70f4fe28ebabe8660c
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 12:59:57 2013 +0200

    ns16550: support ns16550a
    
    Ns16550a devices are ns16550 devices with additional capabilities.
    Declare Xen compatible with this device, to be able to use unmodified
    device trees.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 7dde263e6cbe2b58dbff368f8a63dfc6152a70ef
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 13:01:31 2013 +0200

    xen/dts: specific bad cell count error
    
    Specify in the error message whether the bad cell count is in the
    device or the parent.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 8a91554484ad6977f641b308af38f337c20e97cc
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:18 2013 +0000

    libxc: Document xenctrl.h event channel calls
    
    Provide semantic documentation for how the libxc calls relate to the
    hypervisor interface, and how they are to be used.
    
    Also document the bug (present at least in Linux 3.12) that setting
    the evtchn fd to nonblocking doesn't in fact make xc_evtchn_pending
    nonblocking, and describe the appropriate workaround.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Jan Beulich <JBeulich@suse.com>
    CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit a8da8249506b55fe9314462e90cc6749fd50a5fa
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:17 2013 +0000

    docs: Document event-channel-based suspend protocol
    
    Document the event channel protocol in xenstore-paths.markdown, in the
    section for ~/device/suspend/event-channel.
    
    Protocol reverse-engineered from commentary and commit messages of
      4539594d46f9  Add facility to get notification of domain suspend ...
      17636f47a474  Teach xc_save to use event-channel-based ...
    and implementations in
      xc_save (current version)
      libxl (current version)
      linux-2.6.18-xen (mercurial 1241:2993033a77ca)
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>

commit 340702fd894add8adcdfd6c5742f41f89aa1fed2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:16 2013 +0000

    xen: Document that EVTCHNOP_bind_interdomain signals
    
    EVTCHNOP_bind_interdomain signals the event channel.  Document this.
    
    Also explain the usual use pattern.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>

commit f63b6c6ddcb44b5551e2f7748b0f5de6d73b35e5
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:15 2013 +0000

    xen: Document XEN_DOMCTL_subscribe
    
    Arguably this domctl is misnamed.  But, for now, document its actual
    behaviour (reverse-engineered from the code and found in the commit
    message for 4539594d46f9) under its actual name.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>
    CC: Jan Beulich <JBeulich@suse.com>

commit 36a31eb693774e61cdc119c276be90d67b675563
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 17 14:28:19 2013 +0000

    xen/arm: Allow ballooning working with 1:1 memory mapping
    
    In the absence of an IOMMU, dom0 must have a 1:1 memory mapping for
    all of these guest physical addresses.  When the balloon decides to
    give a page back to the kernel, this page must have the same address
    as before.  Otherwise, we will lose the 1:1 mapping and break
    DMA-capable devices.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Cc: Keir Fraser <keir@xen.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit f4fed540e78ac8a2bd3b1dee53a5206dde25f613
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:47 2014 +0100

    VMX: Eliminate cr3 save/loading exiting when UG enabled
    
    With the unrestricted guest feature, no vmexit should be triggered
    when the guest accesses cr3 in non-paging mode.  This patch clears
    the cr3 save/load exiting bits in the VMCS control field to
    eliminate cr3-access vmexits on UG-capable hardware.
    
    A previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
    did the same thing, but it caused guests to fail to boot on non-UG
    hardware, as reported by Jan, and was reverted
    (commit 1e2bf05ec37cf04b0e01585eae524509179f165e).
    
    This patch incorporates the fix; guests work well on both UG and
    non-UG platforms with it.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 185e83591ce420e0b004646b55c5e4783e388531
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:21 2014 +0100

    VMX,apicv: Set "NMI-window exiting" for NMI
    
    Enable NMI-window exiting if interrupt is blocked by NMI under apicv enabled
    platform.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>

commit 3e06b9890c0a691388ace5a6636728998b237b90
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 14:21:48 2014 +0100

    IOMMU: make page table population preemptible
    
    Since this can take an arbitrary amount of time, the rooting domctl as
    well as all involved code must become aware of this requiring a
    continuation.
    
    The subject domain's rel_mem_list is being (ab)used for this, in a way
    similar to and compatible with broken page offlining.
    
    Further, operations get slightly re-ordered in assign_device(): IOMMU
    page tables now get set up _before_ the first device gets assigned, at
    once closing a small timing window in which the guest may already see
    the device but wouldn't be able to access it.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 04:20:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 04:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0kcD-0005YG-2Y; Wed, 08 Jan 2014 04:19:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0kcA-0005YB-Mz
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 04:19:35 +0000
Received: from [85.158.139.211:6172] by server-1.bemta-5.messagelabs.com id
	1A/5E-21065-5D1DCC25; Wed, 08 Jan 2014 04:19:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389154770!8442483!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 644 invoked from network); 8 Jan 2014 04:19:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 04:19:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,621,1384300800"; d="scan'208";a="88586140"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 04:19:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 7 Jan 2014 23:19:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W0kc4-0000zx-6K;
	Wed, 08 Jan 2014 04:19:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W0kc4-0005Is-4z;
	Wed, 08 Jan 2014 04:19:28 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24297-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 04:19:28 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24297: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24297 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore        fail REGR. vs. 23827

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24295
 test-armhf-armhf-xl           3 host-install(3)  broken in 24295 pass in 24297

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24250
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10       fail   like 22592
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 24250
 test-armhf-armhf-xl         4 capture-logs(4) broken in 24295 blocked in 24250
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24295 like 24250

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24295 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24295 never pass

version targeted for testing:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
  Tsahee Zidenberg <tsahee@gmx.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cedfdd43a9798e535a05690bb6f01394490d26bb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 16:01:14 2014 +0100

    IOMMU: make page table deallocation preemptible
    
    This too can take an arbitrary amount of time.
    
    In fact, the bulk of the work is being moved to a tasklet, as handling
    the necessary preemption logic in line seems close to impossible given
    that the teardown may also be invoked on error paths.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>

commit cc0a6c6c749a8693dc0e201773c10cd97e5e6ce0
Merge: 4746b5a... 81b1c7d...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 14:32:45 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 4746b5adc396bb2fc963b3156eab7267c6e7e541
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Dec 20 15:08:08 2013 +0000

    xen: arm: context switch the aux memory attribute registers
    
    We appear to have somehow missed these. Linux doesn't actually use them and
    none of the processors I've looked at actually define any bits in them (so
    they are UNK/SBZP) but it is good form to context switch them anyway.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 81b1c7de2339d2788352b162057e70130803f3cf
Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Date:   Tue Jan 7 15:09:42 2014 +0100

    AMD/IOMMU: fix infinite loop due to ivrs_bdf_entries larger than 16-bit value
    
    Certain AMD systems could have upto 0x10000 ivrs_bdf_entries.
    However, the loop variable (bdf) is declared as u16 which causes
    inifinite loop when parsing IOMMU event log with IO_PAGE_FAULT event.
    This patch changes the variable to u32 instead.
    
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 62d33ca1048f4e08eaeb026c7b79239b4605b636
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:59:31 2014 +0100

    VTD/DMAR: free() correct pointer on error from acpi_parse_one_atsr()
    
    Free the allocated structure rather than the ACPI table ATS entry.
    
    On further analysis, there is another memory leak.  acpi_parse_dev_scope()
    could allocate scope->devices, and return with -ENOMEM.  All callers of
    acpi_parse_dev_scope() would then free the underlying structure, loosing the
    pointer.
    
    These errors can only actually be reached through acpi_parse_dev_scope()
    (which passes type = DMAR_TYPE), but I am quite surprised Coverity didn't spot
    it.
    
    Coverity-ID: 1146949
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a9fe8c7dda440b84e178d65dcd64c0173b0a4b5d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:58:35 2014 +0100

    AMD/microcode: avoid use-after-free for the microcode buffer
    
    It is possible to free the mc_old buffer and then store it for use in the case
    of resume.
    
    This keeps the old semantics of being able to return an error even after a
    successful microcode application.
    
    Coverity-ID: 1146953
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 2af30e62c4e562d7a4ec4185fdab20fb29354fd8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 14:57:15 2014 +0100

    AMD/iommu_detect: don't leak iommu structure on error paths
    
    Tweak the logic slightly to return the real errors from
    get_iommu_{,msi_}capabilities(), which at the moment is no functional change.
    
    Coverity-ID: 1146950
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b764615c391fdc2648d460245c748a3a319a296e
Merge: 57a4578... f4fed54...
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 13:50:35 2014 +0000

    Merge branch 'staging' of ssh://xenbits.xen.org/home/xen/git/xen into staging

commit 57a45785584e651b807eed08f3a6950d4ade0156
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 24 11:28:47 2013 +0000

    xen: driver/char: fix const declaration of DT compatible list
    
    The data type for DT compatible list should be:
        const char * const[]  __initconst
    
    Fix every serial driver which supports device tree.
    
    Spotted-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit dd3ab3881494136999138d70f4fe28ebabe8660c
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 12:59:57 2013 +0200

    ns16550: support ns16550a
    
    ns16550a devices are ns16550 devices with additional capabilities.
    Declare that Xen is compatible with this device, to be able to use
    unmodified device trees.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 7dde263e6cbe2b58dbff368f8a63dfc6152a70ef
Author: Tsahee Zidenberg <tsahee@gmx.com>
Date:   Sun Dec 22 13:01:31 2013 +0200

    xen/dts: specific bad cell count error
    
    Specify in the error message if bad cell count is in device or parent.
    
    Signed-off-by: Tsahee Zidenberg <tsahee@gmx.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit 8a91554484ad6977f641b308af38f337c20e97cc
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:18 2013 +0000

    libxc: Document xenctrl.h event channel calls
    
    Provide semantic documentation for how the libxc calls relate to the
    hypervisor interface, and how they are to be used.
    
    Also document the bug (present at least in Linux 3.12) that setting
    the evtchn fd to nonblocking doesn't in fact make xc_evtchn_pending
    nonblocking, and describe the appropriate workaround.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Jan Beulich <JBeulich@suse.com>
    CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit a8da8249506b55fe9314462e90cc6749fd50a5fa
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:17 2013 +0000

    docs: Document event-channel-based suspend protocol
    
    Document the event channel protocol in xenstore-paths.markdown, in the
    section for ~/device/suspend/event-channel.
    
    Protocol reverse-engineered from commentary and commit messages of
      4539594d46f9  Add facility to get notification of domain suspend ...
      17636f47a474  Teach xc_save to use event-channel-based ...
    and implementations in
      xc_save (current version)
      libxl (current version)
      linux-2.6.18-xen (mercurial 1241:2993033a77ca)
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>

commit 340702fd894add8adcdfd6c5742f41f89aa1fed2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:16 2013 +0000

    xen: Document that EVTCHNOP_bind_interdomain signals
    
    EVTCHNOP_bind_interdomain signals the event channel.  Document this.
    
    Also explain the usual use pattern.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>

commit f63b6c6ddcb44b5551e2f7748b0f5de6d73b35e5
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue Dec 17 18:35:15 2013 +0000

    xen: Document XEN_DOMCTL_subscribe
    
    Arguably this domctl is misnamed.  But, for now, document its actual
    behaviour (reverse-engineered from the code and found in the commit
    message for 4539594d46f9) under its actual name.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    CC: Shriram Rajagopalan <rshriram@cs.ubc.ca>
    CC: Jan Beulich <JBeulich@suse.com>

commit 36a31eb693774e61cdc119c276be90d67b675563
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Dec 17 14:28:19 2013 +0000

    xen/arm: Allow ballooning working with 1:1 memory mapping
    
    Without an IOMMU, dom0 must have a 1:1 memory mapping for all of
    these guest physical addresses. When the balloon decides to give a
    page back to the kernel, that page must have the same address as
    before. Otherwise, we will lose the 1:1 mapping and will break
    DMA-capable devices.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Cc: Keir Fraser <keir@xen.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit f4fed540e78ac8a2bd3b1dee53a5206dde25f613
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:47 2014 +0100

    VMX: Eliminate cr3 save/loading exiting when UG enabled
    
    With the unrestricted guest (UG) feature, no vmexit should be
    triggered when the guest accesses cr3 in non-paging mode. This
    patch clears the cr3 save/load exiting bits in the VMCS control
    fields to eliminate cr3-access vmexits on UG-capable hardware.
    
    A previous patch (commit c9efe34c119418a5ac776e5d91aeefcce4576518)
    did the same thing, but it caused guests to fail to boot on non-UG
    hardware, as reported by Jan, and has been reverted (commit
    1e2bf05ec37cf04b0e01585eae524509179f165e).
    
    This patch incorporates that fix; guests work well on both UG and
    non-UG platforms with it applied.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 185e83591ce420e0b004646b55c5e4783e388531
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Tue Jan 7 14:30:21 2014 +0100

    VMX,apicv: Set "NMI-window exiting" for NMI
    
    Enable NMI-window exiting if interrupt is blocked by NMI under apicv enabled
    platform.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>

commit 3e06b9890c0a691388ace5a6636728998b237b90
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 7 14:21:48 2014 +0100

    IOMMU: make page table population preemptible
    
    Since this can take an arbitrary amount of time, the invoking domctl as
    well as all involved code must become aware of this requiring a
    continuation.
    
    The subject domain's rel_mem_list is being (ab)used for this, in a way
    similar to and compatible with broken page offlining.
    
    Further, operations get slightly re-ordered in assign_device(): IOMMU
    page tables now get set up _before_ the first device gets assigned, at
    once closing a small timing window in which the guest may already see
    the device but wouldn't be able to access it.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 05:51:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 05:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0m2s-0001Na-0Q; Wed, 08 Jan 2014 05:51:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W0m2q-0001NV-VW
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 05:51:13 +0000
Received: from [193.109.254.147:5440] by server-13.bemta-14.messagelabs.com id
	6F/1D-19374-057ECC25; Wed, 08 Jan 2014 05:51:12 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389160270!7141366!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4912 invoked from network); 8 Jan 2014 05:51:11 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	8 Jan 2014 05:51:11 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 07 Jan 2014 21:47:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,622,1384329600"; d="scan'208";a="463308301"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 07 Jan 2014 21:51:08 -0800
Received: from fmsmsx113.amr.corp.intel.com (10.18.116.7) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 21:51:08 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX113.amr.corp.intel.com (10.18.116.7) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 7 Jan 2014 21:51:08 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.186]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.86]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 13:50:59 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w//+OMoAAOoMiMA==
Date: Wed, 8 Jan 2014 05:50:59 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
In-Reply-To: <52CBCC4F.8080500@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-01-07:
> On 07.01.14 09:54, Zhang, Yang Z wrote:
>> Jan Beulich wrote on 2014-01-07:
>>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com>
> wrote:
>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>> wrote:
>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z"
>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>> wrote:
>>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>>> processing, stash away and re-use what we already read. That
>>>>>>>>> way we can be certain that the retry won't do something
>>>>>>>>> different from what requested the retry, getting once again
>>>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>>>> is simply a bus operation, not involving redundant decoding of
>>>>>>>>> instructions).
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> This patch doesn't consider the nested case.
>>>>>>>> For example, if the buffer saved the L2's instruction, then
>>>>>>>> vmexit to
>>>>>>>> L1 and
>>>>>>>> L1 may use the wrong instruction.
>>>>>>> 
>>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>>> There should be, at any given point in time, at most one
>>>>>>> instruction being emulated. Can you please give a more
>>>>>>> elaborate explanation of the situation where you see a (theoretical?
>>>>>>> practical?)
>>>> problem?
>>>>>> 
>>>>>> I saw this issue when booting L1 hyper-v. I added some debug
>>>>>> info and saw the strange phenomenon:
>>>>>> 
>>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>> 
>>>>>> From the log, we can see different eip but using the same buffer.
>>>>>> Since I don't know how Hyper-V works, I cannot give more
>>>>>> information on why this happens. And I only saw it with L1
>>>>>> Hyper-V (Xen on Xen and KVM on Xen don't have this issue).
>>>>> 
>>>>> But in order to validate the fix is (a) needed and (b) correct, a
>>>>> proper understanding of the issue is necessary as the very first step.
>>>>> Which doesn't require knowing internals of Hyper-V, all you need
>>>>> is tracking of emulator state changes in Xen (which I realize is
>>>>> said easier than done, since you want to make sure you don't
>>>>> generate huge amounts of output before actually hitting the
>>>>> issue, making it close to
>>>> impossible to analyze).
>>>> 
>>>> Ok. It is an old issue that is merely exposed by your patch:
>>>> Sometimes L0 needs to decode an L2 instruction to handle IO access
>>>> directly, for example when L1 passes a device through (without
>>>> VT-d) to L2. L0 may get X86EMUL_RETRY when handling this IO
>>>> request. If at the same time a virtual vmexit is pending (for
>>>> example, an interrupt to inject into L1), the hypervisor will
>>>> switch the VCPU context from L2 to L1. We are now in L1's context,
>>>> but the X86EMUL_RETRY we just got means the hypervisor will retry
>>>> the IO request later, and unfortunately the retry will happen in
>>>> L1's context. Without your patch, L0 simply emulates an L1
>>>> instruction and everything goes on. With your patch, L0 will fetch
>>>> the instruction from the buffer belonging to L2, and the problem
>>>> arises.
>>>> 
>>>> So the fix is that if there is a pending IO request, no virtual
>>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>> 
>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>>      struct vcpu *v = current;
>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>> +    ioreq_t *p = get_ioreq(v);
>>>> 
>>>> +    if ( p->state != STATE_IOREQ_NONE )
>>>> +        return;
>>>>      /*
>>>>       * a softirq may interrupt us between a virtual vmentry is
>>>>       * just handled and the true vmentry. If during this window,
>>> 
>>> This change looks much more sensible; question is whether simply
>>> ignoring the switch request is acceptable (and I don't know the
>>> nested HVM code well enough to tell). Furthermore I wonder whether
>>> that's
> really a VMX-only issue.
>> 
>> From hardware's point of view, an IO operation is handled
>> atomically, so the problem can never happen there. But in Xen an IO
>> operation is divided into several steps. Without nested
>> virtualization, the VCPU is paused or looped until the IO emulation
>> is finished. In a nested environment, however, the VCPU continues
>> running even when the IO emulation is not finished. So my patch
>> checks this and allows the VCPU to continue running only if there is
>> no pending IO request. This matches hardware's behavior.
>> 
>> I guess SVM also has this problem. But since I don't know nested SVM
>> well, perhaps Christoph can help double-check.
> 
> For SVM this issue was fixed with commit
> d740d811925385c09553cbe6dee8e77c1d43b198
> 
> And there is a followup cleanup commit
> ac97fa6a21ccd395cca43890bbd0bf32e3255ebb
> 
> The change in nestedsvm.c in commit
> d740d811925385c09553cbe6dee8e77c1d43b198 is actually not SVM specific.
> 
> Move that into nhvm_interrupt_blocked() in hvm.c right before
> 
>     return hvm_funcs.nhvm_intr_blocked(v);
> and the fix applies for both SVM and VMX.
> 

I guess this is not enough. L2 may vmexit to L1 during IO emulation for reasons other than an interrupt. I cannot give an example right now, but Hyper-V cannot boot with your suggestion, so I guess considering only external interrupts is not sufficient. We should prohibit the vmswitch if there is a pending IO emulation from either L1 or L2 (this may never happen for L1, but to match hardware's behavior we should add the check).

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 08:29:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 08:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0oVr-0000T3-4g; Wed, 08 Jan 2014 08:29:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0oVp-0000Sy-Q0
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 08:29:17 +0000
Received: from [85.158.137.68:8762] by server-12.bemta-3.messagelabs.com id
	3D/62-20055-C5C0DC25; Wed, 08 Jan 2014 08:29:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389169756!4193374!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29744 invoked from network); 8 Jan 2014 08:29:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 08:29:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 08:29:33 +0000
Message-Id: <52CD1A5B02000078001116AD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 08:28:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 0/5] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 01:25, Don Slutz <dslutz@verizon.com> wrote:
> Release manager requests:
>   patch 1 and 3 are optional for 4.4.0.
>   patch 2 should be in 4.4.0
>   patch 4 and 5 would be good to be in 4.4.0

Which clearly shows that the series is badly ordered: You shouldn't
expect committers to know (or even have to guess) that applying
later patches without earlier ones is okay. I.e. if you think that
leaving out part of the series for 4.4 is fine, you should place the
required ones first, the optional ones second, and the 4.5 ones
last. Or, if the patches are in fact independent, another option
would be to not send the patches as a series in the first place.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 08:36:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 08:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ocx-00011D-1u; Wed, 08 Jan 2014 08:36:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0ocv-000118-QU
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 08:36:37 +0000
Received: from [193.109.254.147:49202] by server-10.bemta-14.messagelabs.com
	id 86/F0-20752-51E0DC25; Wed, 08 Jan 2014 08:36:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389170195!5987092!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18105 invoked from network); 8 Jan 2014 08:36:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 08:36:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 08:37:02 +0000
Message-Id: <52CD1C1E02000078001116BC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 08:36:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
In-Reply-To: <1389140748-26524-3-git-send-email-dslutz@verizon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 01:25, Don Slutz <dslutz@verizon.com> wrote:
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>      if ( p2m_is_readonly(gfntype) && toaddr )
>      {
>          DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
> -        return INVALID_MFN;
> +        mfn = INVALID_MFN;
>      }
>  
>      DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);

With the flow change above, this should be moved into an "else"
to the earlier "if".

Jan

> +
> +    if ( mfn == INVALID_MFN )
> +    {
> +        put_gfn(dp, *gfn);
> +        *gfn = INVALID_GFN;
> +    }
> +
>      return mfn;
>  }
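Applied to the quoted hunk, Jan's suggestion would look roughly as follows. The sketch stubs the Xen-specific symbols so the control flow can be exercised in isolation; it illustrates the suggested shape, not the committed code.

```c
#include <stdio.h>
#include <stdbool.h>

/* Stubbed stand-ins for the Xen symbols in the quoted hunk, so the
 * control-flow change can be exercised in isolation. */
#define INVALID_MFN  (~0UL)
#define INVALID_GFN  (~0UL)
#define DBGP2(...)   fprintf(stderr, __VA_ARGS__)

struct domain { int domain_id; };

static int put_gfn_calls;
static void put_gfn(struct domain *dp, unsigned long gfn)
{
    (void)dp; (void)gfn;
    put_gfn_calls++;
}

/* Jan's suggested shape: the second DBGP2 moves into an "else" of the
 * read-only check (so it no longer prints a bogus mfn on that path),
 * and the error path releases the gfn reference before returning. */
static unsigned long va2mfn_tail(bool readonly, bool toaddr,
                                 unsigned long vaddr, struct domain *dp,
                                 unsigned long mfn, unsigned long *gfn)
{
    if ( readonly && toaddr )
    {
        DBGP2("kdb:p2m_is_readonly\n");
        mfn = INVALID_MFN;
    }
    else
        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);

    if ( mfn == INVALID_MFN )
    {
        put_gfn(dp, *gfn);
        *gfn = INVALID_GFN;
    }

    return mfn;
}
```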



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 08:43:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 08:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0oj6-0001a7-Sn; Wed, 08 Jan 2014 08:43:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0oj4-0001a2-QL
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 08:42:59 +0000
Received: from [85.158.137.68:22039] by server-2.bemta-3.messagelabs.com id
	0E/7B-17329-09F0DC25; Wed, 08 Jan 2014 08:42:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389170576!7812579!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14645 invoked from network); 8 Jan 2014 08:42:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 08:42:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 08:43:41 +0000
Message-Id: <52CD1D9C02000078001116CB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 08:42:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
	<52CCB5A3.2010308@citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A44CE@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A44CE@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
 when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 03:35, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Andrew Cooper wrote on 2014-01-08:
>> On 08/01/2014 01:13, Zhang, Yang Z wrote:
>>> Can you help to review this patch?
>> 
>> This got committed earlier today
>> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f4fed540e78ac8a2bd3b1dee53a5206dde25f613)
>> as you are a maintainer, and I (as an
>> independent party as far as the patch goes) reviewed it.
> 
> I remember that in the old days, after a patch got committed, the maintainer 
> would reply to the mail to tell the author it was applied. Otherwise it's 
> hard for the author to know in time. Should we still follow this rule?

As it requires extra work, and it's easy to check the tree (there
aren't that many commits during a day), and there are generally
no intermediate trees (i.e. just a single canonical one to look at),
I never reply with commit notifications. If anything like that is
wanted, it should be done via an automatic commit notification
mechanism (and ISTR that there is a respective list you could
subscribe to).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 09:50:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 09:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0plv-0004s0-Dr; Wed, 08 Jan 2014 09:49:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0plt-0004rv-Vp
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 09:49:58 +0000
Received: from [85.158.139.211:8687] by server-4.bemta-5.messagelabs.com id
	94/D5-26791-54F1DC25; Wed, 08 Jan 2014 09:49:57 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389174595!8503291!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16312 invoked from network); 8 Jan 2014 09:49:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 09:49:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="88647281"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 09:49:54 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 8 Jan 2014 04:49:54 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Wed, 8 Jan 2014 10:49:54 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: David Miller <davem@davemloft.net>
Thread-Topic: [PATCH net-next] xen-netback: stop vif thread spinning if
	frontend is unresponsive
Thread-Index: AQHPC8eE3dlRMxAIZ0+4RgF8xTHTWZp5tuUAgADelkA=
Date: Wed, 8 Jan 2014 09:49:53 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD01E6DF8@AMSPEX01CL01.citrite.net>
References: <1389111929-37231-1-git-send-email-paul.durrant@citrix.com>
	<20140107.162956.1062166230525232035.davem@davemloft.net>
In-Reply-To: <20140107.162956.1062166230525232035.davem@davemloft.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: stop vif thread
 spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: netdev-owner@vger.kernel.org [mailto:netdev-
> owner@vger.kernel.org] On Behalf Of David Miller
> Sent: 07 January 2014 21:30
> To: Paul Durrant
> Cc: netdev@vger.kernel.org; xen-devel@lists.xen.org; Wei Liu; Ian Campbell;
> David Vrabel
> Subject: Re: [PATCH net-next] xen-netback: stop vif thread spinning if
> frontend is unresponsive
> 
> From: Paul Durrant <paul.durrant@citrix.com>
> Date: Tue, 7 Jan 2014 16:25:29 +0000
> 
> > @@ -477,6 +477,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >  	unsigned long offset;
> >  	struct skb_cb_overlay *sco;
> >  	int need_to_notify = 0;
> > +	int ring_full = 0;
> 
> Please use bool, false, and true.
> 
> >
> > -	if (!npo.copy_prod)
> > +	if (!npo.copy_prod) {
> > +		if (ring_full)
> > +			vif->rx_queue_stopped = true;
> >  		goto done;
> > +	}
> > +
> > +	vif->rx_queue_stopped = false;
> 
> And then you can code this as:
> 
> 	vif->rx_queue_stopped = (!npo.copy_prod && ring_full);
> 	if (!npo.copy_prod)
> 		goto done;

Sure. I was just following style (of need_to_notify). If you prefer bool then I'll use that and also convert need_to_notify.

  Paul

> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
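David's compact form can be modelled in isolation as below. The names mirror the quoted patch; this is an illustrative sketch of the suggested shape, not the netback driver itself.

```c
#include <stdbool.h>

/* Model of the suggested exit path of xenvif_rx_action(): the flag is
 * computed in one expression, then the early-out is taken separately.
 * Names follow the quoted patch; this is an illustration only. */
struct xenvif_model {
    bool rx_queue_stopped;
};

/* Returns true when the function would "goto done" early. */
static bool rx_action_tail(struct xenvif_model *vif, int copy_prod,
                           bool ring_full)
{
    vif->rx_queue_stopped = (!copy_prod && ring_full);
    if (!copy_prod)
        return true;    /* goto done */
    return false;
}
```

The single assignment covers both branches of the original if/else: the flag ends up true only when nothing was copied and the ring was full, and false otherwise.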

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 09:51:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 09:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0pnF-0004wJ-TB; Wed, 08 Jan 2014 09:51:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0pnE-0004w9-01
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 09:51:20 +0000
Received: from [85.158.143.35:50362] by server-1.bemta-4.messagelabs.com id
	EB/D3-02132-79F1DC25; Wed, 08 Jan 2014 09:51:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389174677!10371452!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21957 invoked from network); 8 Jan 2014 09:51:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 09:51:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90799543"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 09:51:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	04:51:16 -0500
Message-ID: <1389174675.12612.90.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 8 Jan 2014 09:51:15 +0000
In-Reply-To: <52CCA99A.3080709@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-2-git-send-email-dslutz@verizon.com>
	<20140107171602.1f9d8153@mantra.us.oracle.com>
	<52CCA99A.3080709@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
 files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 01:27 +0000, Andrew Cooper wrote:
> On 08/01/2014 01:16, Mukesh Rathor wrote:
> > On Tue,  7 Jan 2014 19:25:44 -0500
> > Don Slutz <dslutz@verizon.com> wrote:
> >
> >> These 2 files are changed in this patch set.  So add the allowed
> >> "Emacs local variables" from CODING_STYLE.
> >>
> >> Signed-off-by: Don Slutz <dslutz@verizon.com>
> > Will let some emacs user ack these... "vim" rules!!
> >
> > Mukesh
> 
> Wasn't there a patch a little while back to add equivalent vim rules to
> each of the files (or at least a thread suggesting a patch)?  In the
> interest of cooperation, it is only fair.

IIRC it had some shortcoming, like requiring vim users to always invoke
vim from the top-level xen source directory?

For my part I would be more than happy to accept vim magic runes into
anything which I'm the maintainer for (despite being an emacs type!).
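
For reference, the kind of per-file block under discussion looks like the following. The Emacs form follows the pattern given in Xen's CODING_STYLE; the vim modeline is only an illustrative equivalent sketched here, not something taken from the thread:

```c
/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * tab-width: 4
 * indent-tabs-mode: nil
 * End:
 */

/* A possible vim equivalent (hypothetical, not from the patch): */
/* vim: set shiftwidth=4 softtabstop=4 expandtab: */
```

Emacs reads the "Local variables:" list from the end of a file, and vim reads modelines from the first or last few lines of a file, which is why a per-file modeline avoids the invoke-from-top-level-directory problem mentioned above.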

> 
> As a fellow emacsian,

> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

> As for 4.4-ness:  If the rest of the series is deemed ok for a
> release-ack, then this should go in for completeness.  If part of the
> series is decided to be deferred, then this warrants deferring as well.

I don't think there is any reason to hold off on a change like this
irrespective of what happens to the rest of the series.

Release-acked-by: Ian Campbell

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 09:57:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 09:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0psk-0005Aq-R8; Wed, 08 Jan 2014 09:57:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <David.Laight@ACULAB.COM>) id 1W0psj-0005Al-8f
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 09:57:01 +0000
Received: from [85.158.137.68:35377] by server-5.bemta-3.messagelabs.com id
	E4/53-25188-CE02DC25; Wed, 08 Jan 2014 09:57:00 +0000
X-Env-Sender: David.Laight@ACULAB.COM
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389175019!6711290!1
X-Originating-IP: [213.249.233.131]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22414 invoked from network); 8 Jan 2014 09:56:59 -0000
Received: from mx0.aculab.com (HELO mx0.aculab.com) (213.249.233.131)
	by server-9.tower-31.messagelabs.com with SMTP;
	8 Jan 2014 09:56:59 -0000
Received: (qmail 29611 invoked from network); 8 Jan 2014 09:56:57 -0000
Received: from localhost (127.0.0.1)
	by mx0.aculab.com with SMTP; 8 Jan 2014 09:56:57 -0000
Received: from mx0.aculab.com ([127.0.0.1])
	by localhost (mx0.aculab.com [127.0.0.1]) (amavisd-new,
	port 10024) with SMTP id 27662-04 for <xen-devel@lists.xen.org>;
	Wed,  8 Jan 2014 09:56:55 +0000 (GMT)
Received: (qmail 29560 invoked by uid 599); 8 Jan 2014 09:56:55 -0000
Received: from unknown (HELO AcuExch.aculab.com) (10.202.163.4)
	by mx0.aculab.com (qpsmtpd/0.28) with ESMTP;
	Wed, 08 Jan 2014 09:56:55 +0000
Received: from ACUEXCH.Aculab.com ([::1]) by AcuExch.aculab.com ([::1]) with
	mapi id 14.03.0123.003; Wed, 8 Jan 2014 09:55:15 +0000
From: David Laight <David.Laight@ACULAB.COM>
To: 'David Miller' <davem@davemloft.net>, "paul.durrant@citrix.com"
	<paul.durrant@citrix.com>
Thread-Topic: [PATCH net-next] xen-netback: stop vif thread spinning if
	frontend is unresponsive
Thread-Index: AQHPC+9oIzTC/n4JfUaAhPU6YZfxeJp6lh1Q
Date: Wed, 8 Jan 2014 09:55:13 +0000
Message-ID: <063D6719AE5E284EB5DD2968C1650D6D45508D@AcuExch.aculab.com>
References: <1389111929-37231-1-git-send-email-paul.durrant@citrix.com>
	<20140107.162956.1062166230525232035.davem@davemloft.net>
In-Reply-To: <20140107.162956.1062166230525232035.davem@davemloft.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.202.99.200]
MIME-Version: 1.0
X-Virus-Scanned: by iCritical at mx0.aculab.com
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"wei.liu2@citrix.com" <wei.liu2@citrix.com>,
	"ian.campbell@citrix.com" <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: stop vif thread
 spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> From: David Miller
...
> > -	if (!npo.copy_prod)
> > +	if (!npo.copy_prod) {
> > +		if (ring_full)
> > +			vif->rx_queue_stopped = true;
> >  		goto done;
> > +	}
> > +
> > +	vif->rx_queue_stopped = false;
> 
> And then you can code this as:
> 
> 	vif->rx_queue_stopped = (!npo.copy_prod && ring_full);
> 	if (!npo.copy_prod)
> 		goto done;

Which isn't quite the same...
1) It always writes vif->rx_queue_stopped, the old code could
   leave it unchanged.
2) If 'npo' is global then the compiler can't assume that 'vif'
   doesn't alias it so may have to re-read it following the
   write to 'vif->rx_queue_stopped'.

	David




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:02:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0pxx-0005o8-Qu; Wed, 08 Jan 2014 10:02:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0pxx-0005o3-0V
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:02:25 +0000
Received: from [85.158.139.211:37878] by server-13.bemta-5.messagelabs.com id
	D6/A2-11357-0322DC25; Wed, 08 Jan 2014 10:02:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389175341!7270402!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12456 invoked from network); 8 Jan 2014 10:02:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:02:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90802283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 10:02:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:02:19 -0500
Message-ID: <1389175338.12612.98.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 8 Jan 2014 10:02:18 +0000
In-Reply-To: <52CC3256.4040006@linaro.org>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<52CC3256.4040006@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 16:59 +0000, Julien Grall wrote:
> On 01/07/2014 04:02 PM, Ian Campbell wrote:
> >
> > One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
> > It's reasonably recent so reverting it takes us back to a pretty well
> > understood state in the libraries. The functionality is harmless if
> > incomplete. I think given the first argument I would lean towards reverting.
> 
> I would also prefer reverting the previous patch.
> 
> > 
> > Obviously if you think I'm being too easy on myself please say so.
> 
> Without this patch, we will likely crash a guest in production, which
> is not acceptable for a release.
> 
> > 
> > Actually, if you think my judgement is right I'd appreciate being told so too.
> > ---
> >  xen/arch/arm/domain.c           |    5 ++
> >  xen/arch/arm/domain_build.c     |    1 +
> >  xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
> >  xen/arch/arm/vtimer.c           |    6 +-
> >  xen/include/asm-arm/cpregs.h    |    4 +
> >  xen/include/asm-arm/processor.h |    2 +-
> >  xen/include/asm-arm/sysregs.h   |   19 ++++-
> >  7 files changed, 182 insertions(+), 8 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 124cccf..104d228 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
> >      else
> >          hcr |= HCR_RW;
> >  
> > +    if ( n->arch.sctlr & SCTLR_M )
> > +        hcr &= ~(HCR_TVM|HCR_DC);
> > +    else
> > +        hcr |= (HCR_TVM|HCR_DC);
> > +
> >      WRITE_SYSREG(hcr, HCR_EL2);
> >      isb();
> >  
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 47b781b..bb31db8 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
> >      else
> >          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
> >  #endif
> > +    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
> 
> Is it useful? As I understand, we will at least context switch one time
> before booting dom0.

For some reason I was thinking that the initial context became d0v0 when
actually it becomes the first idle vcpu. Unlike HCR_RW etc I don't think
anything in the dom0 build code relies on this bit being correct, so I
think you are right that it can be removed.

> If we need it, perhaps the better place to setup it is init_traps?

This may not be quite true today, but: I think it would be good for
init_traps to setup the global/constant HCR settings and leave the
per-VM ones to more per-VM locations.

> 
> >  
> >      /*
> >       * kernel_load will determine the placement of the initrd & fdt in
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 7c5ab19..d00bba3 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
> >      regs->pc += hsr.len ? 4 : 2;
> >  }
> >  
> > +static void update_sctlr(uint32_t v)
> > +{
> > +    /*
> > +     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
> > +     * because they are incompatible.
> > +     *
> > +     * Once HCR.DC is disabled then we do not need HCR_TVM either,
> > +     * since its only purpose was to catch the MMU being enabled.
> > +     *
> > +     * Both are set appropriately on context switch but we need to
> > +     * clear them now since we may not context switch on return to
> > +     * guest.
> > +     */
> > +    if ( v & SCTLR_M )
> > +        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
> 
> Even if it's unlikely, can we handle the case where the guest disables
> the MMU again?

No, we disable the traps so we would never find out that the guest had
done so (at least, not without waiting for a context switch). We don't
really mind this shortcoming though because we expect guests to enable
caches early on and keep them on (in some sense this is part of our ABI)
but in any case if the guest were to disable its MMU it would have to
have taken care of making the caches consistent itself already (e.g. it
is required on native).

This does make me think that the code in ctxt switch is wrong though,
since if the guest does set SCTLR.M then we will enable HCR.DC at that
point, without necessarily doing everything which would be needed. I
think I'll add a per-vcpu flag which indicates whether DC should be set
and clear it in this function.


> Also from ARM ARM B3.2.1, a TLB flush by VMID is required if HCR_DC is
> disabled and the VMID is not changed.

Oh yes, well spotted, will add that.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:02:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0pxx-0005o8-Qu; Wed, 08 Jan 2014 10:02:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0pxx-0005o3-0V
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:02:25 +0000
Received: from [85.158.139.211:37878] by server-13.bemta-5.messagelabs.com id
	D6/A2-11357-0322DC25; Wed, 08 Jan 2014 10:02:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389175341!7270402!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12456 invoked from network); 8 Jan 2014 10:02:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:02:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90802283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 10:02:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:02:19 -0500
Message-ID: <1389175338.12612.98.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 8 Jan 2014 10:02:18 +0000
In-Reply-To: <52CC3256.4040006@linaro.org>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<52CC3256.4040006@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 16:59 +0000, Julien Grall wrote:
> On 01/07/2014 04:02 PM, Ian Campbell wrote:
> .>
> > One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
> > It's reasonably recent so reverting it takes us back to a pretty well
> > understood state in the libraries. The functionality is harmless if
> > incomplete. I think given the first argument I would lean towards reverting.
> 
> I would also prefer reverting the previous patch.
> 
> > 
> > Obviously if you think I'm being to easy on myself please say so.
> 
> Without this patch, we will likely crash a guest in production, that
> it's not acceptable for a release.
> 
> > 
> > Actually, if you think my judgement is right I'd appreciate being told so too.
> > ---
> >  xen/arch/arm/domain.c           |    5 ++
> >  xen/arch/arm/domain_build.c     |    1 +
> >  xen/arch/arm/traps.c            |  153 ++++++++++++++++++++++++++++++++++++++-
> >  xen/arch/arm/vtimer.c           |    6 +-
> >  xen/include/asm-arm/cpregs.h    |    4 +
> >  xen/include/asm-arm/processor.h |    2 +-
> >  xen/include/asm-arm/sysregs.h   |   19 ++++-
> >  7 files changed, 182 insertions(+), 8 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 124cccf..104d228 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
> >      else
> >          hcr |= HCR_RW;
> >  
> > +    if ( n->arch.sctlr & SCTLR_M )
> > +        hcr &= ~(HCR_TVM|HCR_DC);
> > +    else
> > +        hcr |= (HCR_TVM|HCR_DC);
> > +
> >      WRITE_SYSREG(hcr, HCR_EL2);
> >      isb();
> >  
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 47b781b..bb31db8 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -1026,6 +1026,7 @@ int construct_dom0(struct domain *d)
> >      else
> >          WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
> >  #endif
> > +    WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_DC | HCR_TVM, HCR_EL2);
> 
> Is it useful? As I understand, we will at least context switch one time
> before booting dom0.

For some reason I was thinking that the initial context became d0v0 when
actually it becomes the first idle vcpu. Unlike HCR_RW etc I don't think
anything in the dom0 build code relies on this bit being correct, so I
think you are right that it can be removed.

> If we need it, perhaps a better place to set it up is init_traps?

This may not be quite true today, but: I think it would be good for
init_traps to set up the global/constant HCR settings and leave the
per-VM ones to per-VM locations.
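
The split being proposed could look roughly like the following stand-alone sketch; the bit positions and the global/per-VM assignment here are placeholders for illustration, not the architectural values or the actual Xen code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical split of HCR_EL2 bits into "global" settings that
 * init_traps could own and "per-VM" settings applied at context
 * switch.  Bit positions are placeholders. */
#define HCR_VM   (1u << 0)    /* global: stage-2 translation enable */
#define HCR_DC   (1u << 12)   /* per-VM: default cacheable          */
#define HCR_RW   (1u << 31)   /* per-VM: 64-bit EL1                 */

#define HCR_PERVM_MASK (HCR_DC | HCR_RW)

/* init_traps: program only the constant, VM-independent settings. */
static uint32_t init_traps_hcr(void)
{
    return HCR_VM;
}

/* Context switch: overlay the incoming vcpu's per-VM bits, leaving
 * the global configuration untouched. */
static uint32_t ctxt_switch_hcr(uint32_t hcr, uint32_t pervm_bits)
{
    return (hcr & ~HCR_PERVM_MASK) | (pervm_bits & HCR_PERVM_MASK);
}
```

With this shape, a per-VM location only ever touches bits inside HCR_PERVM_MASK, so the global configuration chosen at boot cannot be clobbered by a context switch.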

> 
> >  
> >      /*
> >       * kernel_load will determine the placement of the initrd & fdt in
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 7c5ab19..d00bba3 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1279,6 +1279,23 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
> >      regs->pc += hsr.len ? 4 : 2;
> >  }
> >  
> > +static void update_sctlr(uint32_t v)
> > +{
> > +    /*
> > +     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
> > +     * because they are incompatible.
> > +     *
> > +     * Once HCR.DC is disabled then we do not need HCR_TVM either,
> > +     * since its only purpose was to catch the MMU being enabled.
> > +     *
> > +     * Both are set appropriately on context switch but we need to
> > +     * clear them now since we may not context switch on return to
> > +     * guest.
> > +     */
> > +    if ( v & SCTLR_M )
> > +        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
> 
> Even if it's unlikely, can we handle the case where the guest disables
> the MMU?

No, we disable the traps so we would never find out that the guest had
done so (at least, not without waiting for a context switch). We don't
really mind this shortcoming though, because we expect guests to enable
caches early on and keep them on (in some sense this is part of our
ABI). In any case, if the guest were to disable its MMU it would already
have had to take care of making the caches consistent itself (as is
required on native).

This does make me think that the code in ctxt switch is wrong though,
since if the guest does set SCTLR.M then we will enable HCR.DC at that
point, without necessarily doing everything which would be needed. I
think I'll add a per-vcpu flag which indicates whether DC should be set
and clear it in this function.
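
The per-vcpu flag idea can be sketched as a small user-space C model; all names, types and bit values below are hypothetical placeholders rather than the actual Xen definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder bit values, for illustration only. */
#define HCR_TVM (1u << 26)
#define HCR_DC  (1u << 12)
#define SCTLR_M (1u << 0)

struct vcpu_arch {
    uint32_t sctlr;
    bool default_cache;   /* per-vcpu flag: should HCR.DC/TVM be set? */
};

/* Context switch: derive DC/TVM from the per-vcpu flag rather than
 * from SCTLR.M directly, so a guest that enabled its MMU (clearing
 * the flag in the trap handler) never gets HCR.DC re-enabled. */
static uint32_t ctxt_switch_hcr(uint32_t hcr, const struct vcpu_arch *n)
{
    if (n->default_cache)
        hcr |= (HCR_TVM | HCR_DC);
    else
        hcr &= ~(HCR_TVM | HCR_DC);
    return hcr;
}

/* Trap handler path: the guest writes SCTLR with the M bit set, so
 * drop the traps now and remember that via the per-vcpu flag. */
static uint32_t update_sctlr(uint32_t hcr, struct vcpu_arch *v, uint32_t val)
{
    v->sctlr = val;
    if (val & SCTLR_M) {
        v->default_cache = false;
        hcr &= ~(HCR_DC | HCR_TVM);
    }
    return hcr;
}
```

The point of the flag is visible in the last step: once update_sctlr has cleared it, a later context switch leaves HCR.DC off instead of re-deriving it from SCTLR.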


> Also from ARM ARM B3.2.1, a TLB flush by VMID is required if HCR_DC is
> disabled and the VMID is not changed.

Oh yes, well spotted, will add that.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:04:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0q0J-0005uu-Cp; Wed, 08 Jan 2014 10:04:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0q0H-0005um-Hw
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:04:49 +0000
Received: from [85.158.137.68:30080] by server-17.bemta-3.messagelabs.com id
	4A/1E-15965-0C22DC25; Wed, 08 Jan 2014 10:04:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389175486!7867789!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27166 invoked from network); 8 Jan 2014 10:04:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:04:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90803019"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 10:04:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:04:45 -0500
Message-ID: <1389175483.12612.100.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 8 Jan 2014 10:04:43 +0000
In-Reply-To: <alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 20:39 +0000, Stefano Stabellini wrote:
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 124cccf..104d228 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
> >      else
> >          hcr |= HCR_RW;
> >  
> > +    if ( n->arch.sctlr & SCTLR_M )
> > +        hcr &= ~(HCR_TVM|HCR_DC);
> > +    else
> > +        hcr |= (HCR_TVM|HCR_DC);
> > +
> >      WRITE_SYSREG(hcr, HCR_EL2);
> >      isb();
> 
> Is this actually needed? Shouldn't HCR be already correctly updated by
> update_sctlr?

Not if we are switching back and forth between two guests which are in
different states.

> > +/*
> > + * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
> > + * Updates the lower 32-bits and clears the upper bits.
> > + */
> > +#define CP32_PASSTHRU64(R...) do {              \
> > +    if ( cp32.read )                            \
> > +        *r = (uint32_t)READ_SYSREG64(R);        \
> > +    else                                        \
> > +        WRITE_SYSREG64((uint64_t)*r, R);        \
> > +} while(0)
> 
> Can/Should CP32_PASSTHRU64_LO be used instead of this?

LO preserves the upper 32-bits which this macro deliberately does not.

Now, an AArch32 guest on an AArch64 hypervisor should never have
anything else in the top bits of a register which it sees as 32-bit, so
using LO would work, but I think having it be explicit like this is
better.
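
The difference between the two macros can be illustrated with a small stand-alone C model of the write path (the register value and helper names here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* A 64-bit system register as seen by an AArch64 hypervisor, which an
 * AArch32 guest accesses through a 32-bit view. */
static uint64_t sysreg = 0xdeadbeef00000000ULL;

/* CP32_PASSTHRU64-style write: the (uint64_t) cast zero-extends the
 * 32-bit value, so the upper 32 bits are cleared. */
static void write_clear_upper(uint32_t r)
{
    sysreg = (uint64_t)r;
}

/* A _LO-style write: only the lower 32 bits change, the upper 32 bits
 * are preserved. */
static void write_preserve_upper(uint32_t r)
{
    sysreg = (sysreg & 0xffffffff00000000ULL) | r;
}
```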

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:25:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qJm-0006yE-P6; Wed, 08 Jan 2014 10:24:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W0qH6-0006xP-GC; Wed, 08 Jan 2014 10:22:12 +0000
Received: from [85.158.143.35:17934] by server-3.bemta-4.messagelabs.com id
	76/ED-32360-3D62DC25; Wed, 08 Jan 2014 10:22:11 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389176529!10310490!1
X-Originating-IP: [220.181.15.35]
X-SpamReason: No, hits=0.9 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjM1ID0+IDU0MDI=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjM1ID0+IDU0MDI=\n,HTML_40_50,HTML_MESSAGE,
	MIME_BASE64_TEXT,UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22669 invoked from network); 8 Jan 2014 10:22:10 -0000
Received: from m15-35.126.com (HELO m15-35.126.com) (220.181.15.35)
	by server-16.tower-21.messagelabs.com with SMTP;
	8 Jan 2014 10:22:10 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=i+nYQ
	fnrJLLLV4cgP0oObRAxnUfz/pXpQAoMMiq2ts0=; b=dgbFhbNkVpcMLvI1h3/d5
	izpOSvHKmGjr0XdCP45tSTrwWPXZUGrewy7SUTYI/p8kBm8Txe2IN2plJ/CwlwyJ
	WP1SQ1It7Xu2GLs7P/IqyzU9jR6LPHuKLvLgcU4iiH/OOj+S8zj+qhZE/wpBoKWj
	ot611BWNv2QmrtG0V0c59o=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr35 (Coremail) ; Wed, 8 Jan 2014 18:22:06 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Wed, 8 Jan 2014 18:22:06 +0800 (CST)
From: topperxin <topperxin@126.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
X-CM-CTRLDATA: Lt6n8WZvb3Rlcl9odG09NTA3Ojgx
MIME-Version: 1.0
Message-ID: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
X-CM-TRANSID: I8qowECJVEPPJs1S9nxAAA--.12646W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbi2wENDkr1IdZpWQAAsh
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
X-Mailman-Approved-At: Wed, 08 Jan 2014 10:24:56 +0000
Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1084839159111182475=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1084839159111182475==
Content-Type: multipart/alternative; 
	boundary="----=_Part_653685_371027932.1389176526940"

------=_Part_653685_371027932.1389176526940
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

Hi list
        As we all know, SR-IOV technology can improve a VNIC's
performance, but it cannot support live migration. I recently learned
that on a KVM+Virtio platform, live migration can be done successfully
if MacVtap + SR-IOV is used. What I want to know is: may I configure
MacVtap on Xen?
        Any replies are welcome! Thanks a lot.
------=_Part_653685_371027932.1389176526940--



--===============1084839159111182475==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1084839159111182475==--



From xen-devel-bounces@lists.xen.org Wed Jan 08 10:36:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qUV-0007Yc-Mq; Wed, 08 Jan 2014 10:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0qUT-0007YX-6R
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:36:01 +0000
Received: from [85.158.139.211:11069] by server-4.bemta-5.messagelabs.com id
	03/AB-26791-01A2DC25; Wed, 08 Jan 2014 10:36:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389177358!8508533!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5413 invoked from network); 8 Jan 2014 10:35:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:35:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="88659114"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 10:35:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:35:56 -0500
Message-ID: <1389177355.4883.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 10:35:55 +0000
In-Reply-To: <1389140748-26524-6-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-6-git-send-email-dslutz@verizon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 5/5] xg_main: If
 XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 19:25 -0500, Don Slutz wrote:
> Without this gdb does not report an error.
> 
> With this patch and using a 1G hvm domU:
> 
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> 
> Drop output of iop->remain because it will most likely be zero,
> which leads to a strange message:
> 
> ERROR: failed to read 0 bytes. errno:14 rc:-1
> 
> Add address to write error because it may be the only message
> displayed.
> 
> Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
> error and so iop->remain will be zero.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 3b2a285..0fc3f82 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->gwr = 0;       /* not writing to guest */
>  
>      if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> -        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);

Is it worth printing the expected number (i.e. the input) of bytes? Is
that buflen here?

> +        return tobuf_len;
> +    }
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
> @@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>      iop->gwr = 1;       /* writing to guest */
>  
>      if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
> -        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n", 
> -              iop->remain, errno, rc);
> +    {
> +        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
> +              guestva, errno, rc);

Same here.

> +        return buflen;
> +    }
>      return iop->remain;
>  }
>  
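
The return convention the patch leans on can be sketched as follows; this is a simplified stand-alone model with invented names, whereas the real function goes through a domctl hypercall. The function reports the number of bytes left untransferred, so returning the full length on failure is exactly what makes gdb report an error instead of silently succeeding:

```c
#include <assert.h>
#include <string.h>

/* Simplified model of xg_read_mem's contract: return how many bytes
 * remain untransferred.  0 means complete success; returning the
 * whole length tells the caller that nothing was read. */
static int read_mem(int hcall_failed, char *dst, const char *src, int len)
{
    if (hcall_failed)
        return len;     /* force the error to be visible to the caller */
    memcpy(dst, src, len);
    return 0;
}
```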



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:36:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qUV-0007Yc-Mq; Wed, 08 Jan 2014 10:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0qUT-0007YX-6R
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:36:01 +0000
Received: from [85.158.139.211:11069] by server-4.bemta-5.messagelabs.com id
	03/AB-26791-01A2DC25; Wed, 08 Jan 2014 10:36:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389177358!8508533!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5413 invoked from network); 8 Jan 2014 10:35:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:35:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="88659114"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 10:35:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:35:56 -0500
Message-ID: <1389177355.4883.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 10:35:55 +0000
In-Reply-To: <1389140748-26524-6-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-6-git-send-email-dslutz@verizon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 5/5] xg_main: If
 XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 19:25 -0500, Don Slutz wrote:
> Without this gdb does not report an error.
> 
> With this patch and using a 1G hvm domU:
> 
> (gdb) x/1xh 0x6ae9168b
> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
> 
> Drop output of iop->remain because it most likely will be zero.
> This leads to a strange message:
> 
> ERROR: failed to read 0 bytes. errno:14 rc:-1
> 
> Add address to write error because it may be the only message
> displayed.
> 
> Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
> error and so iop->remain will be zero.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
> index 3b2a285..0fc3f82 100644
> --- a/tools/debugger/gdbsx/xg/xg_main.c
> +++ b/tools/debugger/gdbsx/xg/xg_main.c
> @@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>      iop->gwr = 0;       /* not writing to guest */
>  
>      if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
> -        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
> -              iop->remain, errno, rc);
> +    {
> +        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);

Is it worth printing the expected number (i.e. the input) of bytes? Is
that buflen here?

> +        return tobuf_len;
> +    }
>  
>      for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>      XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
> @@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>      iop->gwr = 1;       /* writing to guest */
>  
>      if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
> -        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n", 
> -              iop->remain, errno, rc);
> +    {
> +        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
> +              guestva, errno, rc);

Same here.

> +        return buflen;
> +    }
>      return iop->remain;
>  }
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:38:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qWz-0007y5-Rv; Wed, 08 Jan 2014 10:38:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0qWy-0007uz-KO
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:38:36 +0000
Received: from [193.109.254.147:7775] by server-13.bemta-14.messagelabs.com id
	4F/D1-19374-CAA2DC25; Wed, 08 Jan 2014 10:38:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389177513!9506850!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25334 invoked from network); 8 Jan 2014 10:38:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:38:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="88659601"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 10:38:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:38:32 -0500
Message-ID: <1389177510.4883.11.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 10:38:30 +0000
In-Reply-To: <1389140748-26524-4-git-send-email-dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 19:25 -0500, Don Slutz wrote:
> If dbg_debug is non-zero, output debug.
> 
> Include put_gfn debug logging.
> 
> Here is a sample output at dbg_debug == 2:
> 
> (XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
> (XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
> (XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
> (XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
> (XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
> (XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
> (XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
> (XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$2
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
>  1 file changed, 26 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
> index ba6a64d..777e5ba 100644
> --- a/xen/arch/x86/debug.c
> +++ b/xen/arch/x86/debug.c
> @@ -30,16 +30,9 @@
>   * gdbsx, etc..
>   */
>  
> -#ifdef XEN_KDB_CONFIG
> -#include "../kdb/include/kdbdefs.h"
> -#include "../kdb/include/kdbproto.h"
> -#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
> -#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
> -#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
> -#else
> -#define DBGP1(...) ((void)0)
> -#define DBGP2(...) ((void)0)
> -#endif
> +static volatile int dbg_debug;

Using volatile is almost always wrong. Why do you think it is needed
here?

If anything this variable is exactly the opposite, i.e. __read_mostly or
even const (given that I can't see anything which writes it I suppose
this is a compile time setting?)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:41:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qZM-0008J5-O9; Wed, 08 Jan 2014 10:41:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0qZL-0008Iv-HQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 10:41:03 +0000
Received: from [85.158.143.35:25128] by server-1.bemta-4.messagelabs.com id
	08/85-02132-E3B2DC25; Wed, 08 Jan 2014 10:41:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389177661!10332793!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23608 invoked from network); 8 Jan 2014 10:41:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:41:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90810810"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 10:41:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:41:00 -0500
Message-ID: <1389177658.4883.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 8 Jan 2014 10:40:58 +0000
In-Reply-To: <52CCA204.2020601@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CCA204.2020601@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 00:55 +0000, Andrew Cooper wrote:

> > Signed-off-by: Don Slutz <dslutz@verizon.com>
> 
> Technically this should include a Signed-off-by: Andrew Cooper
> <andrew.cooper3@citrix.com> tag (being the author of the code)

If you are the author then the first line of the mail should also be

From: Andrew Cooper <andre...@citrix.com>

Don, if you git commit --author='....' (or --amend to fixup an existing
commit) then git send-email will do the right thing.

> Ian (with RM hat on):
>   This is a hypervisor reference counting error on a toolstack hypercall
> path.  Irrespective of any of the other patches in this series, I think
> this should be included ASAP (although probably subject to review from a
> third person), which will fix the hypervisor crashes from gdbsx usage.

I've already given this stuff a release Ack, subject to Mukesh
approving it and someone saying for sure that this patch can only affect
debuggers.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 10:50:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 10:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qhv-0000ji-Fc; Wed, 08 Jan 2014 10:49:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0qhu-0000jY-0I
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 10:49:54 +0000
Received: from [85.158.137.68:3969] by server-10.bemta-3.messagelabs.com id
	DA/E0-23989-15D2DC25; Wed, 08 Jan 2014 10:49:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389178190!4222760!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9448 invoked from network); 8 Jan 2014 10:49:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 10:49:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90812469"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 10:49:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	05:49:50 -0500
Message-ID: <1389178188.4883.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 10:49:48 +0000
In-Reply-To: <52CC01350200007800111164@nat28.tlf.novell.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> > flight 24250 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> > 
> > Failures :-/ but no regressions.
> > 
> > Regressions which are regarded as allowable (not blocking):
> >  test-armhf-armhf-xl           7 debian-install               fail   like 24146
> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 23938
> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24146
> 
> These windows-install failures have been pretty persistent for
> the last month or two. I've been looking at the logs from the
> hypervisor side a number of times without spotting anything. It'd
> be nice to know whether anyone also did so from the tools and
> qemu sides... In any event we will need to do something about
> this before 4.4 goes out.

http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg

says that Windows experienced an unexpected error.

http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-winxpsp3-vcpus1/win.guest.osstest--vnc.jpeg

is a blue screen "BAD_POOL_CALLER".

I think this is unlikely to be a toolstack thing, but as to whether it
is a Xen or a Windows issue I wouldn't like to say. I had a look through
the toolstack logs anyway and didn't see anything untoward.
> Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 11:00:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 11:00:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0qrm-0001Yp-0C; Wed, 08 Jan 2014 11:00:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0qrk-0001Yj-1z
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 11:00:04 +0000
Received: from [85.158.137.68:17170] by server-6.bemta-3.messagelabs.com id
	28/BB-04868-1BF2DC25; Wed, 08 Jan 2014 11:00:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389178800!7107275!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27178 invoked from network); 8 Jan 2014 11:00:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 11:00:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 11:00:00 +0000
Message-Id: <52CD3DBF02000078001117A2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 10:59:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1389178188.4883.19.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 11:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
>> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
>> > flight 24250 xen-unstable real [real]
>> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
>> > 
>> > Failures :-/ but no regressions.
>> > 
>> > Regressions which are regarded as allowable (not blocking):
>> >  test-armhf-armhf-xl           7 debian-install               fail   like 
> 24146
>> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 
> 23938
>> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 
> 24146
>> 
>> These windows-install failures have been pretty persistent for
>> the last month or two. I've been looking at the logs from the
>> hypervisor side a number of times without spotting anything. It'd
>> be nice to know whether anyone also did so from the tools and
>> qemu sides... In any event we will need to do something about
>> this before 4.4 goes out.
> 
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> n7-amd64/win.guest.osstest--vnc.jpeg
> 
> says that Windows experienced an unexpected error.
> 
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> nxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
> 
> is a blue screen "BAD_POOL_CALLER".
> 
> I think this is unlikely to be a toolstack thing, but as to whether it
> is a Xen or a Windows issue I wouldn't like to say. I had a look through
> the toolstack logs anyway and didn't see anything untoward.

Right, neither did I. I was particularly thinking of qemu though,
since I think these pretty persistent failures started around the
time the qemu tree upgrade was done. Of course this could just
be coincidence with a hypervisor side change having bad effects.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 11:13:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 11:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0r4F-00029I-LG; Wed, 08 Jan 2014 11:12:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0r4E-00029D-KN
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 11:12:58 +0000
Received: from [85.158.137.68:41010] by server-10.bemta-3.messagelabs.com id
	81/01-23989-9B23DC25; Wed, 08 Jan 2014 11:12:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389179575!6732645!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1142 invoked from network); 8 Jan 2014 11:12:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 11:12:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,623,1384300800"; d="scan'208";a="90817043"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 11:12:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	06:12:54 -0500
Message-ID: <1389179573.4883.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 11:12:53 +0000
In-Reply-To: <52CD3DBF02000078001117A2@nat28.tlf.novell.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 10:59 +0000, Jan Beulich wrote:
> >>> On 08.01.14 at 11:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> >> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> >> > flight 24250 xen-unstable real [real]
> >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> >> > 
> >> > Failures :-/ but no regressions.
> >> > 
> >> > Regressions which are regarded as allowable (not blocking):
> >> >  test-armhf-armhf-xl           7 debian-install               fail   like 
> > 24146
> >> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 
> > 23938
> >> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 
> > 24146
> >> 
> >> These windows-install failures have been pretty persistent for
> >> the last month or two. I've been looking at the logs from the
> >> hypervisor side a number of times without spotting anything. It'd
> >> be nice to know whether anyone also did so from the tools and
> >> qemu sides... In any event we will need to do something about
> >> this before 4.4 goes out.
> > 
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > n7-amd64/win.guest.osstest--vnc.jpeg
> > 
> > says that Windows experienced an unexpected error.
> > 
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > nxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
> > 
> > is a blue screen "BAD_POOL_CALLER".

FWIW, it's code 0x7, which
http://msdn.microsoft.com/en-us/library/windows/hardware/ff560185%28v=vs.85%29.aspx
says means "The current thread attempted to free the pool, which was
already freed.".

The Internet(tm) seems to think this is often a driver issue. Not that
this helps us much!

> > I think this is unlikely to be a toolstack thing, but as to whether it
> > is a Xen or a Windows issue I wouldn't like to say. I had a look through
> > the toolstack logs anyway and didn't see anything untoward.
> 
> Right, neither did I. I was particularly thinking of qemu though,
> since I think these pretty persistent failures started around the
> time the qemu tree upgrade was done. Of course this could just
> be coincidence with a hypervisor side change having bad effects.

If there is a correlation then it would be interesting to investigate --
Anthony/Stefano, could you have a look please?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 11:25:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 11:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0rFo-0002yq-SR; Wed, 08 Jan 2014 11:24:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0rFn-0002yl-Tg
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 11:24:56 +0000
Received: from [85.158.139.211:39662] by server-5.bemta-5.messagelabs.com id
	10/AD-14928-7853DC25; Wed, 08 Jan 2014 11:24:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389180292!8522456!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20389 invoked from network); 8 Jan 2014 11:24:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 11:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88669848"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 11:24:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	06:24:51 -0500
Message-ID: <1389180290.4883.34.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 11:24:50 +0000
In-Reply-To: <1389179573.4883.30.camel@kazak.uk.xensource.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
	<1389179573.4883.30.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	ian.jackson@eu.citrix.com, Stefano
	Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 11:12 +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 10:59 +0000, Jan Beulich wrote:
> > >>> On 08.01.14 at 11:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> > >> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> > >> > flight 24250 xen-unstable real [real]
> > >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> > >> > 
> > >> > Failures :-/ but no regressions.
> > >> > 
> > >> > Regressions which are regarded as allowable (not blocking):
> > >> >  test-armhf-armhf-xl           7 debian-install               fail   like 
> > > 24146
> > >> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 
> > > 23938
> > >> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 
> > > 24146
> > >> 
> > >> These windows-install failures have been pretty persistent for
> > >> the last month or two. I've been looking at the logs from the
> > >> hypervisor side a number of times without spotting anything. It'd
> > >> be nice to know whether anyone also did so from the tools and
> > >> qemu sides... In any event we will need to do something about
> > >> this before 4.4 goes out.
> > > 
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > > n7-amd64/win.guest.osstest--vnc.jpeg
> > > 
> > > says that Windows experienced an unexpected error.
> > > 
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > > nxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
> > > 
> > > is a blue screen "BAD_POOL_CALLER".
> 
> FWIW It's code 0x7 which
> http://msdn.microsoft.com/en-us/library/windows/hardware/ff560185%28v=vs.85%29.aspx
> says is "The current thread attempted to free the pool, which was
> already freed.".
> 
> The Internet(tm) seems to think this is often a driver issue. Not that
> this helps us much!
> 
> > > I think this is unlikely to be a toolstack thing, but as to whether it
> > > is a Xen or a Windows issue I wouldn't like to say. I had a look through
> > > the toolstack logs anyway and didn't see anything untoward.
> > 
> > Right, neither did I. I was particularly thinking of qemu though,
> > since I think these pretty persistent failures started around the
> > time the qemu tree upgrade was done. Of course this could just
> > be coincidence with a hypervisor side change having bad effects.
> 
> If there is a correlation then it would be interesting to investigate --
> Anthony/Stefano could you guys have a look please.

The earliest instance I could see with logs was 22371 (10/12/2013).

21288 (30/10/2013) has a windows-install failure, but the logs have
expired so I cannot tell if it was the same failure.

I didn't look at the VNC screenshot for every failure I spotted; there
were a lot like these, and a lot where Windows was just sitting at its
login screen (quite a long-standing issue, I think? Thought not to be us?).

There was also a smattering of other failures, e.g.:
http://www.chiark.greenend.org.uk/~xensrcts/logs/22466/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg
http://www.chiark.greenend.org.uk/~xensrcts/logs/22455/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg

(bearing in mind that I didn't check all of them; if there were an easy
way to data-mine the screenshots for all the failures of this test into
a directory, it might be easier to scan)
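
For what it's worth, a rough sketch of such a collection script -- the URL
pattern is taken from the links quoted in this thread, and the flight
numbers below are just the ones mentioned here, not a complete list of the
failures:

```shell
# Sketch only: pull the osstest VNC screenshots for one test case across
# several flights into a single directory for quick scanning.
base="http://www.chiark.greenend.org.uk/~xensrcts/logs"
testcase="test-amd64-i386-xl-win7-amd64"
mkdir -p shots
for flight in 22371 22455 22466 24250; do
    url="$base/$flight/$testcase/win.guest.osstest--vnc.jpeg"
    echo "$url"                                # list what would be fetched
    # wget -q -O "shots/$flight.jpeg" "$url"   # uncomment to actually download
done
```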

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 11:25:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 11:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0rFo-0002yq-SR; Wed, 08 Jan 2014 11:24:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0rFn-0002yl-Tg
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 11:24:56 +0000
Received: from [85.158.139.211:39662] by server-5.bemta-5.messagelabs.com id
	10/AD-14928-7853DC25; Wed, 08 Jan 2014 11:24:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389180292!8522456!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20389 invoked from network); 8 Jan 2014 11:24:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 11:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88669848"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 11:24:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	06:24:51 -0500
Message-ID: <1389180290.4883.34.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 11:24:50 +0000
In-Reply-To: <1389179573.4883.30.camel@kazak.uk.xensource.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
	<1389179573.4883.30.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	ian.jackson@eu.citrix.com, Stefano
	Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 11:12 +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 10:59 +0000, Jan Beulich wrote:
> > >>> On 08.01.14 at 11:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> > >> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> > >> > flight 24250 xen-unstable real [real]
> > >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> > >> > 
> > >> > Failures :-/ but no regressions.
> > >> > 
> > >> > Regressions which are regarded as allowable (not blocking):
> > >> >  test-armhf-armhf-xl           7 debian-install               fail   like 
> > > 24146
> > >> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 
> > > 23938
> > >> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 
> > > 24146
> > >> 
> > >> These windows-install failures have been pretty persistent for
> > >> the last month or two. I've been looking at the logs from the
> > >> hypervisor side a number of times without spotting anything. It'd
> > >> be nice to know whether anyone also did so from the tools and
> > >> qemu sides... In any event we will need to do something about
> > >> this before 4.4 goes out.
> > > 
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg
> > > 
> > > says that Windows experienced an unexpected error.
> > > 
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-winxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
> > > 
> > > is a blue screen "BAD_POOL_CALLER".
> 
> FWIW it's code 0x7, which
> http://msdn.microsoft.com/en-us/library/windows/hardware/ff560185%28v=vs.85%29.aspx
> says means "The current thread attempted to free the pool, which was
> already freed."
> 
> The Internet(tm) seems to think this is often a driver issue. Not that
> this helps us much!
> 
> > > I think this is unlikely to be a toolstack thing, but as to whether it
> > > is a Xen or a Windows issue I wouldn't like to say. I had a look through
> > > the toolstack logs anyway and didn't see anything untoward.
> > 
> > Right, neither did I. I was particularly thinking of qemu though,
> > since I think these pretty persistent failures started around the
> > time the qemu tree upgrade was done. Of course this could just
> > be coincidence with a hypervisor side change having bad effects.
> 
> If there is a correlation then it would be interesting to investigate --
> Anthony/Stefano could you guys have a look please.

The earliest instance I could see with logs was 22371 (10/12/2013).

21288 (30/10/2013) has a windows-install failure but the logs have
expired so I cannot tell if it was the same failure.

I didn't look at the VNC screenshot for every failure I spotted; there
were a lot like these, and a lot where Windows was just sat at its login
screen (quite a long-standing issue, I think? Thought not to be us?).

There was a smattering of other failures too, e.g.:
http://www.chiark.greenend.org.uk/~xensrcts/logs/22466/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg
http://www.chiark.greenend.org.uk/~xensrcts/logs/22455/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg

(bearing in mind that I didn't check all of them; if there were an easy
way to data-mine the screenshots for all the failures of this test into
a directory it might be easier to scan)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:23:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:23:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sA3-0006Cq-JL; Wed, 08 Jan 2014 12:23:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0sA2-0006Cl-8c
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 12:23:02 +0000
Received: from [193.109.254.147:14139] by server-15.bemta-14.messagelabs.com
	id FE/F3-22186-5234DC25; Wed, 08 Jan 2014 12:23:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389183778!9495491!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25266 invoked from network); 8 Jan 2014 12:22:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 12:22:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88686221"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 12:22:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	07:22:57 -0500
Message-ID: <1389183776.4883.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 12:22:56 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: [Xen-devel] support PCI hole resize in qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create <alpine.DEB.2.02.1305291701580.4799@kaball.uk.xensource.com>
title it support PCI hole resize in qemu-xen
thanks

We took a workaround late in 4.3 for this and planned to fix it properly
for 4.4, but we seem to have forgotten. I think it is probably now also
too late for 4.4. I've created a bug in the hope that we can fix this
for 4.5.

I struggled to find a good reference for this (old) issue; the comments
at http://bugs.xenproject.org/xen/mid/%3Calpine.DEB.2.02.1305291701580.4799@kaball.uk.xensource.com%3E
seem like a good entry point into that massive thread.

The whole thing is at:
http://www.gossamer-threads.com/lists/engine?do=post_view_flat;post=273750;page=1;mh=-1;list=xen;sb=post_latest_reply;so=ASC

I suspect there were other relevant threads around the time too.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:26:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sD7-0006U2-9B; Wed, 08 Jan 2014 12:26:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0sD5-0006Tw-Rz
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 12:26:11 +0000
Received: from [193.109.254.147:57026] by server-9.bemta-14.messagelabs.com id
	B0/96-13957-3E34DC25; Wed, 08 Jan 2014 12:26:11 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389183969!9538841!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20001 invoked from network); 8 Jan 2014 12:26:10 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 12:26:10 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0sGn-0006zT-P2; Wed, 08 Jan 2014 12:30:01 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389184201.26873@bugs.xenproject.org>
References: <1389183776.4883.42.camel@kazak.uk.xensource.com>
In-Reply-To: <1389183776.4883.42.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 12:30:01 +0000
Subject: [Xen-devel] Processed: support PCI hole resize in qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create <alpine.DEB.2.02.1305291701580.4799@kaball.uk.xensource.com>
Created new bug #28 rooted at `<alpine.DEB.2.02.1305291701580.4799@kaball.uk.xensource.com>'
Title: `support PCI hole resize in qemu-xen'
> title it support PCI hole resize in qemu-xen
Set title for #28 to `support PCI hole resize in qemu-xen'
> thanks
Finished processing.

Modified/created Bugs:
 - 28: http://bugs.xenproject.org/xen/bug/28 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:27:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sE7-0006ZN-P6; Wed, 08 Jan 2014 12:27:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W0sE6-0006Z9-C6
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 12:27:14 +0000
Received: from [85.158.139.211:61722] by server-6.bemta-5.messagelabs.com id
	DD/2F-16310-1244DC25; Wed, 08 Jan 2014 12:27:13 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389184032!8496510!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16561 invoked from network); 8 Jan 2014 12:27:13 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 12:27:13 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 3E3291A26F4;
	Wed,  8 Jan 2014 14:27:10 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 23E5C36C01F; Wed,  8 Jan 2014 14:27:10 +0200 (EET)
Date: Wed, 8 Jan 2014 14:27:10 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: topperxin <topperxin@126.com>
Message-ID: <20140108122710.GZ2924@reaktio.net>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
>    Hi list,
>            As we all know, SR-IOV technology can improve VNIC performance,
>    but it cannot support live migration. I recently learned that on the
>    KVM+virtio platform, using MacVtap + SR-IOV, live migration can be
>    done successfully. What I want to know is: can MacVtap be configured
>    on Xen?
>           Any replies are welcome! Thanks a lot.

I think years ago (2009, perhaps) when SR-IOV was first demoed with Xen
it was demoed with live migration; it was a mixture of an SR-IOV VF + a Xen vif.

So with some toolstack/script hackery you can do it (i.e. use the vif PV
driver during migration, and the SR-IOV VF normally).

-- Pasi
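
[Editor's note: the toolstack hackery described above can be sketched with
plain xl commands. This is a hypothetical flow, not a script from osstest
or any production setup; the domain name "guest", the host "dsthost", and
the PCI BDFs are made-up placeholders.]

```shell
# Sketch: detach the SR-IOV VF so guest traffic fails over to the PV vif,
# live-migrate with only PV devices attached, then hand the guest a VF
# on the destination host. Assumes the guest bonds/fails over between
# the VF and the vif internally.
xl pci-detach guest 0000:04:10.0             # traffic falls back to the vif
xl migrate guest dsthost                     # migrate with PV devices only
ssh dsthost xl pci-attach guest 0000:04:10.1 # re-attach a VF on the target
```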


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:40:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:40:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sQN-0007vj-1T; Wed, 08 Jan 2014 12:39:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0sQL-0007vd-KF
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 12:39:53 +0000
Received: from [193.109.254.147:43277] by server-8.bemta-14.messagelabs.com id
	DB/19-30921-8174DC25; Wed, 08 Jan 2014 12:39:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389184791!9549174!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9796 invoked from network); 8 Jan 2014 12:39:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 12:39:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88690938"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 12:39:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	07:39:49 -0500
Message-ID: <1389184789.4883.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 12:39:49 +0000
In-Reply-To: <52CC01350200007800111164@nat28.tlf.novell.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it Windows install failures/BSOD
severity it blocker
thanks

On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> > flight 24250 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> > 
> > Failures :-/ but no regressions.
> > 
> > Regressions which are regarded as allowable (not blocking):
> >  test-armhf-armhf-xl           7 debian-install               fail   like 24146
> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 23938
> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24146
> 
> These windows-install failures have been pretty persistent for
> the last month or two. I've been looking at the logs from the
> hypervisor side a number of times without spotting anything. It'd
> be nice to know whether anyone also did so from the tools and
> qemu sides... In any event we will need to do something about
> this before 4.4 goes out.
> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:41:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sRe-000811-Gp; Wed, 08 Jan 2014 12:41:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0sRd-00080t-PQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 12:41:13 +0000
Received: from [85.158.143.35:14232] by server-3.bemta-4.messagelabs.com id
	2B/95-32360-7674DC25; Wed, 08 Jan 2014 12:41:11 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389184870!7731138!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21414 invoked from network); 8 Jan 2014 12:41:11 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 12:41:11 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0sVK-00079V-Mt; Wed, 08 Jan 2014 12:45:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389185102.27495@bugs.xenproject.org>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389184789.4883.47.camel@kazak.uk.xensource.com>
In-Reply-To: <1389184789.4883.47.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 12:45:02 +0000
Subject: [Xen-devel] Processed: Re: [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #29 rooted at `<52CC01350200007800111164@nat28.tlf.novell.com>'
Title: `Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL'
> title it Windows install failures/BSOD
Set title for #29 to `Windows install failures/BSOD'
> severity it blocker
Change severity for #29 to `blocker'
> thanks
Finished processing.

Modified/created Bugs:
 - 29: http://bugs.xenproject.org/xen/bug/29 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:42:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sSR-00086h-Ui; Wed, 08 Jan 2014 12:42:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0sSR-00086a-8o
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 12:42:03 +0000
Received: from [85.158.143.35:27448] by server-3.bemta-4.messagelabs.com id
	8D/17-32360-A974DC25; Wed, 08 Jan 2014 12:42:02 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389184920!10368483!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10576 invoked from network); 8 Jan 2014 12:42:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 12:42:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90841624"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 12:42:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 07:41:59 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0sSN-0008Ct-K1;
	Wed, 08 Jan 2014 12:41:59 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 12:41:58 +0000
Message-ID: <1389184918-42790-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2] xen-netback: stop vif thread
	spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The recent patch to improve guest receive side flow control (ca2f09f2) had a
slight flaw in the wait condition for the vif thread in that any remaining
skbs in the guest receive side netback internal queue would prevent the
thread from sleeping. An unresponsive frontend can lead to a permanently
non-empty internal queue and thus the thread will spin. In this case the
thread should really sleep until the frontend becomes responsive again.

This patch adds an extra flag to the vif which is set if the shared ring
is full and cleared when skbs are drained into the shared ring. Thus,
if the thread runs, finds the shared ring full and can make no progress the
flag remains set. If the flag remains set then the thread will sleep,
regardless of a non-empty queue, until the next event from the frontend.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
v2:
- Use bool for ring_full
- Convert need_to_notify to bool for consistency

 drivers/net/xen-netback/common.h  |    1 +
 drivers/net/xen-netback/netback.c |   14 +++++++++-----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c955fc3..4c76bcb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -143,6 +143,7 @@ struct xenvif {
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
+	bool rx_queue_stopped;
 	/* Set when the RX interrupt is triggered by the frontend.
 	 * The worker thread may need to wake the queue.
 	 */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..2738563 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -476,7 +476,8 @@ static void xenvif_rx_action(struct xenvif *vif)
 	int ret;
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
-	int need_to_notify = 0;
+	bool need_to_notify = false;
+	bool ring_full = false;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -508,7 +509,8 @@ static void xenvif_rx_action(struct xenvif *vif)
 		/* If the skb may not fit then bail out now */
 		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
 			skb_queue_head(&vif->rx_queue, skb);
-			need_to_notify = 1;
+			need_to_notify = true;
+			ring_full = true;
 			break;
 		}
 
@@ -521,6 +523,8 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
+	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
+
 	if (!npo.copy_prod)
 		goto done;
 
@@ -592,8 +596,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
 
-		if (ret)
-			need_to_notify = 1;
+		need_to_notify |= !!ret;
 
 		npo.meta_cons += sco->meta_slots_used;
 		dev_kfree_skb(skb);
@@ -1724,7 +1727,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return !skb_queue_empty(&vif->rx_queue) || vif->rx_event;
+	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
+		vif->rx_event;
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sUQ-0008Hz-Gc; Wed, 08 Jan 2014 12:44:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0sUO-0008Hj-Jj
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 12:44:04 +0000
Received: from [85.158.143.35:42446] by server-3.bemta-4.messagelabs.com id
	0A/BA-32360-3184DC25; Wed, 08 Jan 2014 12:44:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389185042!10336567!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1277 invoked from network); 8 Jan 2014 12:44:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 12:44:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90842047"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 12:44:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 07:44:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0sUL-0008Ei-5r;
	Wed, 08 Jan 2014 12:44:01 +0000
Date: Wed, 8 Jan 2014 12:43:08 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389175483.12612.100.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401081241511.21510@kaball.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
	<1389175483.12612.100.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org, tim@xen.org, george.dunlap@citrix.com,
	julien.grall@linaro.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-07 at 20:39 +0000, Stefano Stabellini wrote:
> > > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > > index 124cccf..104d228 100644
> > > --- a/xen/arch/arm/domain.c
> > > +++ b/xen/arch/arm/domain.c
> > > @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
> > >      else
> > >          hcr |= HCR_RW;
> > >  
> > > +    if ( n->arch.sctlr & SCTLR_M )
> > > +        hcr &= ~(HCR_TVM|HCR_DC);
> > > +    else
> > > +        hcr |= (HCR_TVM|HCR_DC);
> > > +
> > >      WRITE_SYSREG(hcr, HCR_EL2);
> > >      isb();
> > 
> > Is this actually needed? Shouldn't HCR be already correctly updated by
> > update_sctlr?
> 
> Not if we are switching back and forth between two guests which are in
> different states.

I didn't realize that HCR is not properly saved and restored during
context switch. Maybe it is worth doing that instead? If nothing else
for simplicity?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 12:55:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 12:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sfe-0000N8-0J; Wed, 08 Jan 2014 12:55:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W0sfc-0000N3-Gv
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 12:55:40 +0000
Received: from [85.158.139.211:20540] by server-17.bemta-5.messagelabs.com id
	D6/9F-19152-BCA4DC25; Wed, 08 Jan 2014 12:55:39 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389185733!8547887!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 991 invoked from network); 8 Jan 2014 12:55:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 12:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90844635"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 12:55:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 07:55:32 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W0sfU-0008QY-JI;
	Wed, 08 Jan 2014 12:55:32 +0000
Date: Wed, 8 Jan 2014 12:55:32 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108125531.GC1696@perard.uk.xensource.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
	<1389179573.4883.30.camel@kazak.uk.xensource.com>
	<1389180290.4883.34.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389180290.4883.34.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	ian.jackson@eu.citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 11:24:50AM +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 11:12 +0000, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 10:59 +0000, Jan Beulich wrote:
> > > >>> On 08.01.14 at 11:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > > On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
> > > >> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
> > > >> > flight 24250 xen-unstable real [real]
> > > >> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/ 
> > > >> > 
> > > >> > Failures :-/ but no regressions.
> > > >> > 
> > > >> > Regressions which are regarded as allowable (not blocking):
> > > >> >  test-armhf-armhf-xl           7 debian-install               fail   like 
> > > > 24146
> > > >> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 
> > > > 23938
> > > >> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 
> > > > 24146
> > > >> 
> > > >> These windows-install failures have been pretty persistent for
> > > >> the last month or two. I've been looking at the logs from the
> > > >> hypervisor side a number of times without spotting anything. It'd
> > > >> be nice to know whether anyone also did so from the tools and
> > > >> qemu sides... In any event we will need to do something about
> > > >> this before 4.4 goes out.
> > > > 
> > > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > > > n7-amd64/win.guest.osstest--vnc.jpeg
> > > > 
> > > > says that Windows experienced an unexpected error.
> > > > 
> > > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/test-amd64-i386-xl-wi 
> > > > nxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
> > > > 
> > > > is a blue screen "BAD_POOL_CALLER".
> > 
> > FWIW It's code 0x7 which
> > http://msdn.microsoft.com/en-us/library/windows/hardware/ff560185%28v=vs.85%29.aspx
> > says is "The current thread attempted to free the pool, which was
> > already freed.".
> > 
> > The Internet(tm) seems to think this is often a driver issue. Not that
> > this helps us much!
> > 
> > > > I think this is unlikely to be a toolstack thing, but as to whether it
> > > > is a Xen or a Windows issue I wouldn't like to say. I had a look through
> > > > the toolstack logs anyway and didn't see anything untoward.
> > > 
> > > Right, neither did I. I was particularly thinking of qemu though,
> > > since I think these pretty persistent failures started around the
> > > time the qemu tree upgrade was done. Of course this could just
> > > be coincidence with a hypervisor side change having bad effects.
> > 
> > If there is a correlation then it would be interesting to investigate --
> > Anthony/Stefano could you guys have a look please.
> 
> The earliest instance I could see with logs was 22371 (10/12/2013).
> 
> 21288 (30/10/2013) has a windows-install failure but the logs have
> expired so I cannot tell if it was the same failure.
> 
> I didn't look at the vnc for every failure I spotted, there were a lot
> like these, and a lot where Windows was just sat at its login screen
> (quite long standing issue I think? Thought not to be us?). 
> 
> There were a smattering of other failures too e.g.:
> http://www.chiark.greenend.org.uk/~xensrcts/logs/22466/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg
> http://www.chiark.greenend.org.uk/~xensrcts/logs/22455/test-amd64-i386-xl-win7-amd64/win.guest.osstest--vnc.jpeg

There is also this one:
http://www.chiark.greenend.org.uk/~xensrcts/logs/23724/test-amd64-amd64-xl-qemuu-winxpsp3/win.guest.osstest--vnc.jpeg
an issue with ntfs.sys.

Or this one:
http://www.chiark.greenend.org.uk/~xensrcts/logs/23035/test-amd64-i386-xl-winxpsp3-vcpus1/win.guest.osstest--vnc.jpeg
which says "a device driver has pool"
and Google responds with:
http://www.faultwire.com/solutions-fatal_error/A-device-driver-has-pool-0x000000C5-*1198.html
"A device driver has a bug that attempted to access memory either
nonexistent memory or memory it is not allowed to access."

So, maybe an emulated disk issue or a memory issue?

I'll try to reproduce the issue.

> (bearing in mind that I didn't check all of them, if there was an easy
> way to data mine the screen shots for all the failures of this test into
> a directory it might be easier to scan)

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:02:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0smF-0001BI-9c; Wed, 08 Jan 2014 13:02:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0smD-0001BD-Qh
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:02:30 +0000
Received: from [193.109.254.147:39970] by server-13.bemta-14.messagelabs.com
	id 75/1E-19374-56C4DC25; Wed, 08 Jan 2014 13:02:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389186146!9551668!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7867 invoked from network); 8 Jan 2014 13:02:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:02:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88696977"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:02:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:02:25 -0500
Message-ID: <1389186144.4883.60.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 13:02:24 +0000
In-Reply-To: <21196.19900.136146.867552@mariner.uk.xensource.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it libxl should implement non-suspend-cancel based resume path
owner Ian Jackson <Ian.Jackson@eu.citrix.com>
thanks

To summarise what I just said to Ian J in the corridor (and let's have a
bug to record it):

There are two mechanisms by which a suspend can be aborted and the
original domain resumed.

The older method is that the toolstack resets a bunch of state (see
tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then restarts
the domain. The domain will see HYPERVISOR_suspend return 0 and will
continue without any realisation that it is actually running in the
original domain and not in a new one. This method is supposed to be
implemented by libxl_domain_resume(suspend_cancel=0) but it is not.

The other method is newer and in this case the toolstack arranges that
HYPERVISOR_suspend returns 1 and restarts it (I believe). The domain will
observe this and realise that it has been restarted in the same domain
and will behave accordingly. This method is implemented, correctly
AFAIK, by libxl_domain_resume(suspend_cancel=1).

However the newer method is not available in all kernels. Although it
dates from the Linux 2.6.18 days and is implemented in all Linux
pvops kernels, I can't speak for others (e.g. BSD). The toolstack is
supposed to check for the XEN_ELFNOTE_SUSPEND_CANCEL ELF note when
building the domain. The presence/absence of this flag needs to be
remembered so that it can be consulted on resume (this also implies
preserving that knowledge over migration).

xl currently uses libxl_domain_resume(suspend_cancel=0) on migration
failure which as it stands won't work for *any* domain. Arguably
switching to suspend_cancel=1 for now will mean that some subset of
kernels will work, and those which don't will not have regressed, until
we can correctly implement the suspend_cancel=0 and the necessary
tracking of XEN_ELFNOTE_SUSPEND_CANCEL.

I've also just noticed that on failure to save (as opposed to migrate)
xl does use suspend_cancel=1.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:04:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0soU-0001I6-SF; Wed, 08 Jan 2014 13:04:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0soT-0001I0-HR
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:04:49 +0000
Received: from [85.158.143.35:58772] by server-2.bemta-4.messagelabs.com id
	4C/B7-11386-0FC4DC25; Wed, 08 Jan 2014 13:04:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389186286!7738823!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19701 invoked from network); 8 Jan 2014 13:04:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:04:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90848010"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:04:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:04:44 -0500
Message-ID: <1389186283.4883.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 8 Jan 2014 13:04:43 +0000
In-Reply-To: <20140108125531.GC1696@perard.uk.xensource.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
	<1389179573.4883.30.camel@kazak.uk.xensource.com>
	<1389180290.4883.34.camel@kazak.uk.xensource.com>
	<20140108125531.GC1696@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	ian.jackson@eu.citrix.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

owner 28 Anthony PERARD <anthony.perard@citrix.com>
thanks
On Wed, 2014-01-08 at 12:55 +0000, Anthony PERARD wrote:
> I'll try to reproduce the issue.

Thanks!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:09:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ssa-0001gF-O5; Wed, 08 Jan 2014 13:09:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ssY-0001el-Nr
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:09:03 +0000
Received: from [193.109.254.147:5339] by server-1.bemta-14.messagelabs.com id
	7B/7B-15600-EED4DC25; Wed, 08 Jan 2014 13:09:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389186540!9561028!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20344 invoked from network); 8 Jan 2014 13:09:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:09:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88699396"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:08:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:08:59 -0500
Message-ID: <1389186538.4883.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 8 Jan 2014 13:08:58 +0000
In-Reply-To: <alpine.DEB.2.02.1401081241511.21510@kaball.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
	<1389175483.12612.100.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401081241511.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 12:43 +0000, Stefano Stabellini wrote:
> On Wed, 8 Jan 2014, Ian Campbell wrote:
> > On Tue, 2014-01-07 at 20:39 +0000, Stefano Stabellini wrote:
> > > > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > > > index 124cccf..104d228 100644
> > > > --- a/xen/arch/arm/domain.c
> > > > +++ b/xen/arch/arm/domain.c
> > > > @@ -219,6 +219,11 @@ static void ctxt_switch_to(struct vcpu *n)
> > > >      else
> > > >          hcr |= HCR_RW;
> > > >  
> > > > +    if ( n->arch.sctlr & SCTLR_M )
> > > > +        hcr &= ~(HCR_TVM|HCR_DC);
> > > > +    else
> > > > +        hcr |= (HCR_TVM|HCR_DC);
> > > > +
> > > >      WRITE_SYSREG(hcr, HCR_EL2);
> > > >      isb();
> > > 
> > > Is this actually needed? Shouldn't HCR be already correctly updated by
> > > update_sctlr?
> > 
> > Not if we are switching back and forth between two guests which are in
> > different states.
> 
> I didn't realize that HCR is not properly saved and restored during
> context switch. Maybe it is worth doing that instead? If nothing else
> for simplicity?

Yes. We ended up here because originally the HCR content was completely
static.

I'm a little concerned about making this change for 4.4, mostly because
I'd need to reliably track down the right places to initialise
v->arch.hcr and the correct contents.
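
[Editor's note: the effect of the hunk quoted above can be modelled as a small standalone helper. This is an illustrative sketch only: `hcr_for_guest` is not a Xen function, and the bit positions below are taken from the ARM architecture manual rather than Xen's headers (the real definitions live in Xen's asm-arm includes).]

```c
#include <stdint.h>

/* Illustrative bit values (ARM architectural positions; assumed here). */
#define SCTLR_M   (1UL << 0)   /* guest stage-1 MMU enable */
#define HCR_DC    (1UL << 12)  /* default cacheable */
#define HCR_TVM   (1UL << 26)  /* trap virtual-memory control registers */

/*
 * Sketch of the quoted logic: while the incoming vcpu's MMU is off,
 * trap its VM control registers and force its memory accesses to be
 * cacheable; once it has enabled the MMU, stop doing both.
 */
static uint64_t hcr_for_guest(uint64_t hcr, uint64_t sctlr)
{
    if ( sctlr & SCTLR_M )
        hcr &= ~(HCR_TVM | HCR_DC);
    else
        hcr |= (HCR_TVM | HCR_DC);
    return hcr;
}
```

This also shows why the per-vcpu recomputation matters: two guests in different MMU states need different HCR values, so a statically configured HCR is no longer sufficient.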

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:10:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0stc-0001yA-Jm; Wed, 08 Jan 2014 13:10:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0stb-0001xz-Dv
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:10:07 +0000
Received: from [85.158.139.211:61156] by server-8.bemta-5.messagelabs.com id
	6C/17-29838-E2E4DC25; Wed, 08 Jan 2014 13:10:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389186604!8549962!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32621 invoked from network); 8 Jan 2014 13:10:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:10:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88699751"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:10:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:10:03 -0500
Message-ID: <1389186602.4883.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:10:02 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>
Subject: [Xen-devel] Closing #20: xen_platform_pci=0 doesn't work with
	qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

close 20
thanks

Anthony tells me this was fixed already.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:11:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0sui-00025M-OJ; Wed, 08 Jan 2014 13:11:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0suh-000255-Rr
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:11:15 +0000
Received: from [193.109.254.147:35899] by server-16.bemta-14.messagelabs.com
	id 9D/09-20600-37E4DC25; Wed, 08 Jan 2014 13:11:15 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389186673!7314832!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2337 invoked from network); 8 Jan 2014 13:11:14 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 13:11:14 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0syP-0007Ug-Iz; Wed, 08 Jan 2014 13:15:05 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389186903.28766@bugs.xenproject.org>
References: <1389186602.4883.64.camel@kazak.uk.xensource.com>
In-Reply-To: <1389186602.4883.64.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 13:15:05 +0000
Subject: [Xen-devel] Processed: Closing #20: xen_platform_pci=0 doesn't work
	with qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> close 20
Closing bug #20
> thanks
Finished processing.

Modified/created Bugs:
 - 20: http://bugs.xenproject.org/xen/bug/20

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:11:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0suh-000251-AI; Wed, 08 Jan 2014 13:11:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0suf-00024k-Cg
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:11:13 +0000
Received: from [193.109.254.147:61658] by server-1.bemta-14.messagelabs.com id
	F8/4E-15600-07E4DC25; Wed, 08 Jan 2014 13:11:12 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389186671!9576917!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 757 invoked from network); 8 Jan 2014 13:11:11 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 13:11:11 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0syN-0007U0-9c; Wed, 08 Jan 2014 13:15:03 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389186902.28765@bugs.xenproject.org>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389178188.4883.19.camel@kazak.uk.xensource.com>
	<52CD3DBF02000078001117A2@nat28.tlf.novell.com>
	<1389179573.4883.30.camel@kazak.uk.xensource.com>
	<1389180290.4883.34.camel@kazak.uk.xensource.com>
	<20140108125531.GC1696@perard.uk.xensource.com>
	<1389186283.4883.61.camel@kazak.uk.xensource.com>
In-Reply-To: <1389186283.4883.61.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 13:15:03 +0000
Subject: [Xen-devel] Processed: Re: [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> owner 28 Anthony PERARD <anthony.perard@citrix.com>
Change owner for #28 to `Anthony PERARD <anthony.perard@citrix.com>'
> thanks
Finished processing.

Modified/created Bugs:
 - 28: http://bugs.xenproject.org/xen/bug/28

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:16:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:16:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0szm-0002SB-IO; Wed, 08 Jan 2014 13:16:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0szl-0002S6-9O
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:16:29 +0000
Received: from [85.158.143.35:2805] by server-3.bemta-4.messagelabs.com id
	47/95-32360-CAF4DC25; Wed, 08 Jan 2014 13:16:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389186986!10265921!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15879 invoked from network); 8 Jan 2014 13:16:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88702139"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:16:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:16:25 -0500
Message-ID: <1389186984.4883.67.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 8 Jan 2014 13:16:24 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm filling in for George while he is on vacation and travelling to a
conference etc. I'm still coming up to speed on what is going on with
this release, so please do correct me where I'm wrong. George will be
back on 20 January.

This information will be mirrored on the Xen 4.4 Roadmap wiki page:
 http://wiki.xen.org/wiki/Xen_Roadmap/4.4

We tagged 4.4.0-rc1 on 19 December. Based on the conversation we had
last time and on George's final comments in [1], I think this means that
PVH dom0 support has not made the cut for 4.4, which is a shame, but there
is plenty of good functionality (including PVH domU support) in there.

[1] http://bugs.xenproject.org/xen/mid/%3C52B05C0A.4040404@eu.citrix.com%3E

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013 
* Code freezing point: 18 November 2013
* First RCs: 6 December 2013  <== WE ARE HERE
* Release: 21 January 2014

Last updated: 8 January 2014

== Completed ==

* Event channel scalability (FIFO event channels)

* Non-udev scripts for driver domains (non-Linux driver domains)

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support in libxl
 - Added Spice vdagent support
 - Added Spice clipboard sharing support
 - Spice USB redirection support for upstream qemu

* PVH domU (experimental only)

* pvgrub2 checked into grub upstream

* ARM64 guest

* Guest EFI booting (tianocore)

* kexec

* Testing: Xen on ARM

* Update to SeaBIOS 1.7.3.1

* Update to qemu 1.6

* SWIOTLB (in Linux 3.13)

* Disk: indirect descriptors (in 3.11)

* Reworked OCaml bindings

== Resolved since last update ==

== Open ==

* xl support for vnc and vnclisten options with PV guests
 > http://bugs.xenproject.org/xen/bug/25
 status: V4 patch posted. Should go in.
 Blocker?

* libxl / xl does not handle failure of remote qemu gracefully
  > Related to http://bugs.xenproject.org/xen/bug/29
  > Easiest way to reproduce: 
  >  - set "vncunused=0" and do a local migrate
  >  - The "remote" qemu will fail because the vnc port is in use
  > The failure isn't the problem, but everything being stuck afterwards is
 Ian J investigating
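For reference, the repro above can be sketched as follows (illustrative
CLI/config fragment; the guest name "guest1" and config path are
assumptions, and this of course needs a Xen host):

```
# In the guest config, pin the VNC port instead of picking a free one:
#   vnc = 1
#   vncunused = 0
xl create /etc/xen/guest1.cfg    # qemu binds the fixed VNC port
xl migrate guest1 localhost      # the "remote" qemu tries to bind the same
                                 # port, fails, and everything gets stuck
```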

* xl needs to disallow PoD with PCI passthrough
  > see http://xen.1045712.n5.nabble.com/PATCH-VT-d-Dis-allow-PCI-device-assignment-if-PoD-is-enabled-td2547788.html
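For context, PoD is in effect whenever a guest boots with maxmem > memory,
so the combination xl should reject looks roughly like this (illustrative
guest config fragment; the device BDF is an example):

```
# guest config fragment (illustrative)
memory = 1024                 # populated at boot
maxmem = 4096                 # larger p2m => Populate-on-Demand is active
pci    = [ "0000:01:00.0" ]   # a passthrough device can DMA into pages PoD
                              # has not populated yet, which is unsafe
```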

* qemu-upstream not freeing pirq 
 > http://www.gossamer-threads.com/lists/xen/devel/281498
 > http://marc.info/?l=xen-devel&m=137265766424502
 status: patches posted; latest patches need testing
 Not a blocker.

* Race in PV shutdown between tool detection and shutdown watch
 > http://www.gossamer-threads.com/lists/xen/devel/282467
 > Nothing to do with ACPI
 status: Patches posted
 Not a blocker.

* xl does not support specifying virtual function for passthrough device
 > http://bugs.xenproject.org/xen/bug/22
 Too much work to be a blocker.

* xl does not handle migrate interruption gracefully
  > If you start a localhost migrate, and press "Ctrl-C" in the middle,
  > you get two hung domains
 Ian J investigated -- can of worms, too big to be a blocker for 4.4

* Win2k3 SP2 RTC infinite loops
   > Regression introduced late in Xen-4.3 development
   owner: andrew.cooper@citrix
   status: patches posted, undergoing review. ( v2 ID
1386241748-9617-1-git-send-email-andrew.cooper3@citrix.com )

  > andyhhp: my proposed RTC fixes break migrate from older versions of
  > Xen, so I have to redesign it from scratch. no way it is going to
  > be ready for 4.4

* HPET interrupt stack overflow (when using hpet_broadcast mode and MSI
capable HPETs)
  owner: andyh@citrix
  status: patches posted, undergoing review iteration.

  > andyhhp: I have more work to do on the HPET series
  > andyhhp: no way it is going to be ready or safe for 4.4

* PCI hole resize support hvmloader/qemu-traditional/qemu-upstream with PCI/GPU passthrough
  > http://bugs.xenproject.org/xen/bug/28
  > http://lists.xen.org/archives/html/xen-devel/2013-05/msg02813.html
  > Where Stefano writes:
  > 2) for Xen 4.4 rework the two patches above and improve
  > i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not
  > enough, it also needs to be able to resize the system memory region
  > (xen.ram) to make room for the bigger pci_hole

  status: not going to be fixed for 4.4 either. Created bug #28.

* qemu memory leak?
  > http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html

* qemu-* parses "008" as octal in USB bus.addr format
  > http://bugs.xenproject.org/xen/bug/15
  > just needs documenting
  Anthony Perard to patch docs
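The pitfall here is ordinary strtol-style parsing with an auto-detected
base, where a leading zero selects octal; shell arithmetic behaves the same
way and makes a handy illustration (a sketch of the general behaviour, not
of qemu's actual parser):

```shell
echo $(( 010 ))     # a leading zero means octal, so this prints 8
# echo $(( 008 ))   # rejected: 8 is not a valid octal digit -- the same
#                   # class of failure as passing bus.addr "008" to qemu
addr="008"
echo "${addr#00}"   # stripping the leading zeros first prints 8
```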

* osstest windows-install failures
  > http://bugs.xenproject.org/xen/bug/29
  > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/
  Anthony and/or Jan investigating

=== Big ticket items ===

* PVH dom0 (w/ Linux) 
  blocker
  owner: mukesh@oracle, george@citrix
  status (Linux): Acked, waiting for ABI to be nailed down
  status (Xen): v6 posted; no longer considered a blocker

* libvirt/libxl integration (external)
 - owner: jfehlig@suse, dario@citrix
 - patches posted (should be released before 4.4)
  - migration
  - PCI pass-through
 - In progress
  - integration w/ libvirt's lock manager
  - improved concurrency



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:18:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t1r-0002bt-4P; Wed, 08 Jan 2014 13:18:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>)
	id 1W0t1p-0002bk-6s; Wed, 08 Jan 2014 13:18:37 +0000
Received: from [85.158.137.68:21312] by server-3.bemta-3.messagelabs.com id
	DD/AB-10658-C205DC25; Wed, 08 Jan 2014 13:18:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389187114!7888395!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 557 invoked from network); 8 Jan 2014 13:18:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:18:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90853318"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:18:33 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:18:33 -0500
Message-ID: <52CD5028.2060802@citrix.com>
Date: Wed, 8 Jan 2014 13:18:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108122710.GZ2924@reaktio.net>
In-Reply-To: <20140108122710.GZ2924@reaktio.net>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>,
	topperxin <topperxin@126.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 12:27, Pasi Kärkkäinen wrote:
> On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
>>    Hi list
>>            As we all know, SR-IOV technology can improve VNIC's performance,
>>    while it can not support live migration. I get some information recently
>>    that on KVM+Virtio platform, if we use MacVtap + SR-IOV, the live
>>    migration could be done successfully. What I want to know is may I
>>    configure MacVtap on Xen?
>>           Any replies are welcome! Thanks a lot.
>
> I think years ago (2009, perhaps) when SR-IOV was first demoed with Xen
> it was demoed with live migration.. it was a mixture of SR-IOV VF + Xen vif.
>
> So with some toolstack/script hackery you can do it. (=use the vif pv
> driver during migration, normally sr-iov vf).

You may find creating an active-backup bond in the guest useful.  The
active bond link is the SR-IOV device and the backup is the VIF.

I've not tried this myself.
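For reference, the bond described above could be set up in the guest
roughly like this (an untested sketch, per the caveat; the interface names
eth0 for the SR-IOV VF and eth1 for the Xen VIF are assumptions):

```
# active-backup bond: traffic uses the SR-IOV VF while it is present; when
# the VF is detached for live migration, the bond fails over to the VIF.
modprobe bonding mode=active-backup miimon=100 primary=eth0
ip link set eth0 down && ip link set eth0 master bond0   # SR-IOV VF (active)
ip link set eth1 down && ip link set eth1 master bond0   # Xen VIF (backup)
ip link set bond0 up
```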

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:19:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t2t-00031y-RU; Wed, 08 Jan 2014 13:19:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0t2r-00030d-R0
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:19:41 +0000
Received: from [85.158.139.211:13944] by server-13.bemta-5.messagelabs.com id
	AE/DC-11357-D605DC25; Wed, 08 Jan 2014 13:19:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389187178!8511501!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16916 invoked from network); 8 Jan 2014 13:19:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:19:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88703042"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:19:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:19:09 -0500
Message-ID: <1389187148.4883.68.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 8 Jan 2014 13:19:08 +0000
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: [Xen-devel] qemu-upstream not freeing pirq (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> 
> * qemu-upstream not freeing pirq 
>  > http://www.gossamer-threads.com/lists/xen/devel/281498
>  > http://marc.info/?l=xen-devel&m=137265766424502
>  status: patches posted; latest patches need testing
>  Not a blocker.

I had it in my mind that this was fixed -- true?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:20:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Wed Jan 08 13:20:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t3F-00035f-Fi; Wed, 08 Jan 2014 13:20:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0t3D-00034z-US
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:20:04 +0000
Received: from [193.109.254.147:44620] by server-13.bemta-14.messagelabs.com
	id 8C/E7-19374-3805DC25; Wed, 08 Jan 2014 13:20:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389187201!9560147!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21474 invoked from network); 8 Jan 2014 13:20:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90853665"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:19:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:19:58 -0500
Message-ID: <1389187198.4883.69.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 8 Jan 2014 13:19:58 +0000
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] Race in PV shutdown between tool detection and shutdown
 watch (Was: Re: Xen 4.4 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:

> * Race in PV shutdown between tool detection and shutdown watch
>  > http://www.gossamer-threads.com/lists/xen/devel/282467
>  > Nothing to do with ACPI
>  status: Patches posted
>  Not a blocker.

This is a Linux issue, I think? Did those patches go in?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:20:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t3L-00038k-GK; Wed, 08 Jan 2014 13:20:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0t3K-00037T-EE
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:20:10 +0000
Received: from [85.158.139.211:55851] by server-12.bemta-5.messagelabs.com id
	91/E7-30017-9805DC25; Wed, 08 Jan 2014 13:20:09 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389187207!8347668!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21211 invoked from network); 8 Jan 2014 13:20:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:20:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90853754"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:20:07 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:20:06 -0500
Message-ID: <52CD5085.1050903@citrix.com>
Date: Wed, 8 Jan 2014 13:20:05 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389009547-12886-1-git-send-email-andrew.cooper3@citrix.com>
	<52CC1D5702000078001112C5@nat28.tlf.novell.com>
In-Reply-To: <52CC1D5702000078001112C5@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/kexec: Identify which cpu the kexec
 image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 14:29, Jan Beulich wrote:
>>>> On 06.01.14 at 12:59, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/common/kexec.c
>> +++ b/xen/common/kexec.c
>> @@ -265,6 +265,8 @@ static int noinline one_cpu_only(void)
>>      }
>>  
>>      set_bit(KEXEC_FLAG_IN_PROGRESS, &kexec_flags);
>> +    printk("Executing crash image on cpu%u\n", cpu);
>> +
>>      return 0;
>>  }
>>  
> 
> With the calling function also being used from kexec_reboot(),
> printing "crash image" here isn't really correct afaict.

Good point.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:22:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t5E-0003V0-Ly; Wed, 08 Jan 2014 13:22:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0t5E-0003Up-4B
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:22:08 +0000
Received: from [85.158.143.35:34103] by server-3.bemta-4.messagelabs.com id
	61/DE-32360-FF05DC25; Wed, 08 Jan 2014 13:22:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389187325!7744035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26049 invoked from network); 8 Jan 2014 13:22:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:22:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88703917"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:22:05 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:22:05 -0500
Message-ID: <52CD50FB.90402@citrix.com>
Date: Wed, 8 Jan 2014 13:22:03 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187198.4883.69.camel@kazak.uk.xensource.com>
In-Reply-To: <1389187198.4883.69.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Race in PV shutdown between tool detection and
 shutdown watch (Was: Re: Xen 4.4 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 13:19, Ian Campbell wrote:
> On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> 
>> * Race in PV shutdown between tool detection and shutdown watch
>>  > http://www.gossamer-threads.com/lists/xen/devel/282467
>>  > Nothing to do with ACPI
>>  status: Patches posted
>>  Not a blocker.
> 
> This is a Linux issue, I think? Did those patches go in?

Konrad had a series.  I don't recall their status.  I think they needed
some more work.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:23:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:23:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t6A-0003cf-5i; Wed, 08 Jan 2014 13:23:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0t69-0003cV-CA
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:23:05 +0000
Received: from [193.109.254.147:23656] by server-12.bemta-14.messagelabs.com
	id 91/5C-13681-8315DC25; Wed, 08 Jan 2014 13:23:04 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389187382!9557454!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21696 invoked from network); 8 Jan 2014 13:23:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:23:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90854330"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:23:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:23:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0t66-0000Pa-1M;
	Wed, 08 Jan 2014 13:23:02 +0000
Date: Wed, 8 Jan 2014 13:22:09 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389187148.4883.68.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401081321260.21510@kaball.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187148.4883.68.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] qemu-upstream not freeing pirq (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> > 
> > * qemu-upstream not freeing pirq 
> >  > http://www.gossamer-threads.com/lists/xen/devel/281498
> >  > http://marc.info/?l=xen-devel&m=137265766424502
> >  status: patches posted; latest patches need testing
> >  Not a blocker.
> 
> I had it in my mind that this was fixed -- true?

It was fixed on qemu-traditional. We have a patch for upstream qemu but
it hasn't been tested because of the other passthrough issues.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:26:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0t9F-0003r7-UE; Wed, 08 Jan 2014 13:26:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0t9D-0003qx-Qn
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:26:15 +0000
Received: from [193.109.254.147:13204] by server-5.bemta-14.messagelabs.com id
	69/F0-03510-7F15DC25; Wed, 08 Jan 2014 13:26:15 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389187572!9558373!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17186 invoked from network); 8 Jan 2014 13:26:12 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 13:26:12 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0tCu-0007fH-4q; Wed, 08 Jan 2014 13:30:04 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389187804.29465@bugs.xenproject.org>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<1389186144.4883.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1389186144.4883.60.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 13:30:04 +0000
Subject: [Xen-devel] Processed: Re: 3.4.70+ kernel WARNING spew dysfunction
 on failed migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #30 rooted at `<21196.19900.136146.867552@mariner.uk.xensource.com>'
Title: `Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration'
> title it libxl should implement non-suspend-cancel based resume path
Set title for #30 to `libxl should implement non-suspend-cancel based resume path'
> owner Ian Jackson <Ian.Jackson@eu.citrix.com>
Command failed: Cannot parse arguments at /srv/xen-devel-bugs/lib/emesinae/control.pl line 301, <M> line 36.
Stop processing here.

Modified/created Bugs:
 - 30: http://bugs.xenproject.org/xen/bug/30 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:29:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tC7-0004Lo-Sj; Wed, 08 Jan 2014 13:29:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0tC5-0004Ix-Ob
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:29:13 +0000
Received: from [85.158.143.35:63253] by server-3.bemta-4.messagelabs.com id
	C2/DA-32360-9A25DC25; Wed, 08 Jan 2014 13:29:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389187751!10345410!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24678 invoked from network); 8 Jan 2014 13:29:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:29:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90855758"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 13:29:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:29:09 -0500
Message-ID: <1389187749.4883.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 8 Jan 2014 13:29:09 +0000
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> 
> * libxl / xl does not handle failure of remote qemu gracefully
>   > Related to http://bugs.xenproject.org/xen/bug/29

This should be http://bugs.xenproject.org/xen/bug/30

http://bugs.xenproject.org/xen/bug/29 is

* Windows install failures/BSOD
 > http://bugs.xenproject.org/xen/bug/29
 status: Anthony attempting to reproduce
 Blocker

which I forgot to list.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tDb-0004ZV-PE; Wed, 08 Jan 2014 13:30:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0tDa-0004ZN-4w
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:30:46 +0000
Received: from [85.158.143.35:49010] by server-1.bemta-4.messagelabs.com id
	A7/4F-02132-5035DC25; Wed, 08 Jan 2014 13:30:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389187843!10428291!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29408 invoked from network); 8 Jan 2014 13:30:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:30:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88706563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:30:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:30:42 -0500
Message-ID: <1389187841.4883.72.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 8 Jan 2014 13:30:41 +0000
In-Reply-To: <1389187749.4883.71.camel@kazak.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187749.4883.71.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:29 +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> > 
> > * libxl / xl does not handle failure of remote qemu gracefully
> >   > Related to http://bugs.xenproject.org/xen/bug/29
> 
> This should be http://bugs.xenproject.org/xen/bug/30
> 
> http://bugs.xenproject.org/xen/bug/29 is
> 
> * Windows install failures/BSOD
>  > http://bugs.xenproject.org/xen/bug/29
>  status: Anthony attempting to reproduce
>  Blocker
> 
> which I forgot to list.

Except I didn't -- it was at the end...




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:49:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tVG-0005NG-RJ; Wed, 08 Jan 2014 13:49:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0tVE-0005NB-7s
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:49:00 +0000
Received: from [193.109.254.147:35868] by server-5.bemta-14.messagelabs.com id
	CE/11-03510-B475DC25; Wed, 08 Jan 2014 13:48:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389188937!7324306!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32743 invoked from network); 8 Jan 2014 13:48:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 13:48:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 13:48:47 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; 
	d="scan'208,223";a="626242397"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 13:48:46 +0000
Message-ID: <52CD573E.1050506@terremark.com>
Date: Wed, 08 Jan 2014 08:48:46 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CD1C1E02000078001116BC@nat28.tlf.novell.com>
In-Reply-To: <52CD1C1E02000078001116BC@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="------------050102010801030001030404"
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050102010801030001030404
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/08/14 03:36, Jan Beulich wrote:
>>>> On 08.01.14 at 01:25, Don Slutz <dslutz@verizon.com> wrote:
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>>       if ( p2m_is_readonly(gfntype) && toaddr )
>>       {
>>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>> -        return INVALID_MFN;
>> +        mfn = INVALID_MFN;
>>       }
>>   
>>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> With the flow change above, this should be moved into an "else"
> to the earlier "if".

Ok.  I have done this and tested it.  (v3 attached).

Note: this means that patch v2 3/5 will not cleanly apply.  I am in the process of fixing and testing v3 of that.  Andrew Cooper in

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00595.html

says that this one patch "should be included ASAP", which is why I am sending it first.

I did not add:

Acked-by: Mukesh Rathor ...

Even though my understanding is that I could have (please let me know if this is wrong) based on:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00601.html

since the v3 change is very minor (like a comment change).  This is because DBGP2:

#define DBGP2(...) ((void)0)

does nothing.  And if it is changed to do something, the file fails to compile without more changes (i.e. patch 3/5).  Even though I knew this does nothing, I have compiled and run this code.


I did change the author to "Andrew Cooper <andrew.cooper3@citrix.com>" since he provided most of this change. (see

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00631.html

)  I am not sure that is correct, since we have both now made changes.

    -Don Slutz



> Jan
>
>> +
>> +    if ( mfn == INVALID_MFN )
>> +    {
>> +        put_gfn(dp, *gfn);
>> +        *gfn = INVALID_GFN;
>> +    }
>> +
>>       return mfn;
>>   }
>


--------------050102010801030001030404
Content-Type: text/x-patch;
 name="0001-dbg_rw_guest_mem-need-to-call-put_gfn-in-error-path.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0001-dbg_rw_guest_mem-need-to-call-put_gfn-in-error-path.pat";
 filename*1="ch"

>From 1c8ebc3cbc4f415c5942b32736ecb58ef9c2cc14 Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 3 Jan 2014 16:12:20 -0500
Subject: [BUGFIX][PATCH v3] dbg_rw_guest_mem: need to call put_gfn in error
 path.

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The 1st one worked because the p2m.lock is recursive and the PCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Tested-by: Don Slutz <dslutz@verizon.com>
---

Changes v2 to v3:
  Jan Beulich: Fix flow change (added an else to DBGP2).
  Ian Campbell: Fix author  

 xen/arch/x86/debug.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..435bd40 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
+    }
+    else
+        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     return mfn;
 }
 
-- 
1.8.4


--------------050102010801030001030404
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050102010801030001030404--


From xen-devel-bounces@lists.xen.org Wed Jan 08 13:49:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tVG-0005NG-RJ; Wed, 08 Jan 2014 13:49:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0tVE-0005NB-7s
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:49:00 +0000
Received: from [193.109.254.147:35868] by server-5.bemta-14.messagelabs.com id
	CE/11-03510-B475DC25; Wed, 08 Jan 2014 13:48:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389188937!7324306!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32743 invoked from network); 8 Jan 2014 13:48:58 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 13:48:58 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 13:48:47 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; 
	d="scan'208,223";a="626242397"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 13:48:46 +0000
Message-ID: <52CD573E.1050506@terremark.com>
Date: Wed, 08 Jan 2014 08:48:46 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>
	<52CD1C1E02000078001116BC@nat28.tlf.novell.com>
In-Reply-To: <52CD1C1E02000078001116BC@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="------------050102010801030001030404"
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050102010801030001030404
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/08/14 03:36, Jan Beulich wrote:
>>>> On 08.01.14 at 01:25, Don Slutz <dslutz@verizon.com> wrote:
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
>>       if ( p2m_is_readonly(gfntype) && toaddr )
>>       {
>>           DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
>> -        return INVALID_MFN;
>> +        mfn = INVALID_MFN;
>>       }
>>   
>>       DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
> With the flow change above, this should be moved into an "else"
> to the earlier "if".

Ok.  I have done this and tested it.  (v3 attached).

Note: this means that patch v2 3/5 will not cleanly apply.  I am in the process of fixing and testing v3 of that.  Andrew Cooper in

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00595.html

Says that this one patch "should be included ASAP"; which is why I am sending this 1st.

I did not add:

Acked-by: Mukesh Rathor ...

Even though my understanding is that I could have (please let me know if this is wrong) based on:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00601.html

since the v3 change is very minor (like a comment change).  This is because DBGP2:

#define DBGP2(...) ((void)0)

does nothing.  And if changed to do something, the file fails to compile without more changes (I.E. patch 3/5).  Even though I knew this does nothing I has compiled and run this code.


I did change the author to "Andrew Cooper <andrew.cooper3@citrix.com>" since he provided most of this change. (see

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00631.html

)  I am not sure that is correct, since we have both made changes now.

    -Don Slutz



> Jan
>
>> +
>> +    if ( mfn == INVALID_MFN )
>> +    {
>> +        put_gfn(dp, *gfn);
>> +        *gfn = INVALID_GFN;
>> +    }
>> +
>>       return mfn;
>>   }
>


--------------050102010801030001030404
Content-Type: text/x-patch;
 name="0001-dbg_rw_guest_mem-need-to-call-put_gfn-in-error-path.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0001-dbg_rw_guest_mem-need-to-call-put_gfn-in-error-path.pat";
 filename*1="ch"

>From 1c8ebc3cbc4f415c5942b32736ecb58ef9c2cc14 Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 3 Jan 2014 16:12:20 -0500
Subject: [BUGFIX][PATCH v3] dbg_rw_guest_mem: need to call put_gfn in error
 path.

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The first one worked because the p2m lock is recursive and the pCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Tested-by: Don Slutz <dslutz@verizon.com>
---

Changes v2 to v3:
  Jan Beulich: Fix flow change (added an else to DBGP2).
  Ian Campbell: Fix author  

 xen/arch/x86/debug.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..435bd40 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
+    }
+    else
+        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     return mfn;
 }
 
-- 
1.8.4


--------------050102010801030001030404
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050102010801030001030404--


From xen-devel-bounces@lists.xen.org Wed Jan 08 13:50:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tWC-0005fQ-Ss; Wed, 08 Jan 2014 13:50:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0tWB-0005f7-Dm
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:49:59 +0000
Received: from [85.158.143.35:2271] by server-2.bemta-4.messagelabs.com id
	4C/55-11386-6875DC25; Wed, 08 Jan 2014 13:49:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389188996!7751769!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31424 invoked from network); 8 Jan 2014 13:49:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:49:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88712697"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:49:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:49:55 -0500
Message-ID: <1389188993.4883.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 8 Jan 2014 13:49:53 +0000
In-Reply-To: <1389186538.4883.63.camel@kazak.uk.xensource.com>
References: <1389110537-14876-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401071639430.8667@kaball.uk.xensource.com>
	<1389175483.12612.100.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401081241511.21510@kaball.uk.xensource.com>
	<1389186538.4883.63.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:08 +0000, Ian Campbell wrote:
> I'm a little concerned about making this change for 4.4, mostly because
> I'd need to reliably track down the right places to initialise
> v->arch.hcr and the correct contents.

Stefano and I discussed this IRL. Changing how we handle HCR would
involve checking that places like gic_inject and the
XEN_DOMCTL_set_address_size handling did the right thing. In principle
it's easy, but it's a little too subtle for us to be comfortable making
such a change now. So I'm going to stick with the original approach
here.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:50:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:50:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tWF-0005h6-DT; Wed, 08 Jan 2014 13:50:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0tWE-0005gE-G1
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:50:02 +0000
Received: from [193.109.254.147:2489] by server-5.bemta-14.messagelabs.com id
	AB/D2-03510-9875DC25; Wed, 08 Jan 2014 13:50:01 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389188999!9580347!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29205 invoked from network); 8 Jan 2014 13:50:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:50:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88712747"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:49:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	08:49:58 -0500
Message-ID: <52CD5785.4050402@citrix.com>
Date: Wed, 8 Jan 2014 13:49:57 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Eric Dumazet <eric.dumazet@gmail.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>	
	<1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
	<1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>
In-Reply-To: <1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 02:12, Eric Dumazet wrote:
> On Wed, 2014-01-08 at 00:10 +0000, Zoltan Kiss wrote:
>
>>
>> +		if (skb_shinfo(skb)->frag_list) {
>> +			nskb = skb_shinfo(skb)->frag_list;
>> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
>> +			skb->len += nskb->len;
>> +			skb->data_len += nskb->len;
>> +			skb->truesize += nskb->truesize;
>> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>> +			vif->tx_zerocopy_sent += 2;
>> +			nskb = skb;
>> +
>> +			skb = skb_copy_expand(skb,
>> +					0,
>> +					0,
>> +					GFP_ATOMIC | __GFP_NOWARN);
>
> skb can be NULL here

Thanks, fixed that.

Zoli



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:54:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0taa-0006Tw-P2; Wed, 08 Jan 2014 13:54:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eric.dumazet@gmail.com>) id 1W0taY-0006TM-QU
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:54:31 +0000
Received: from [85.158.137.68:28351] by server-1.bemta-3.messagelabs.com id
	39/E5-29598-5985DC25; Wed, 08 Jan 2014 13:54:29 +0000
X-Env-Sender: eric.dumazet@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389189267!7929263!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9060 invoked from network); 8 Jan 2014 13:54:29 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:54:29 -0000
Received: by mail-pb0-f41.google.com with SMTP id jt11so1604244pbb.14
	for <xen-devel@lists.xenproject.org>;
	Wed, 08 Jan 2014 05:54:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:content-transfer-encoding:mime-version;
	bh=rAh7ss4xGjIc2lsnWeBspUnWeMJ8Yi7vCPrtIMI+8uQ=;
	b=DrCBGxVKgcyT+yxfmaSo6YJ75A06e9s7GidxK7XTAkoZmIovDqnxL34tIK+DRAQlTi
	A1icmXjIbWgDBDboohIhBYvR/lCfSpNktfSrCdON/3F/TrXtjtaBFJqgqeAjbwT95JwH
	z9wpY0pZLRLv/MisPHsxnKNQFAtFHM0TtSq3Tr65GBChF7SjA8vM8nexysAmO92LQjLF
	ETsV8SOcs+eo7gHERFhKXtsOgznoYrG16Iowo/Z8A0wW6oAUtU4xRgW9rklQSZUZ5b63
	pHCxZity5iCYvFNSdyevYM57zWe5iCM99qHx4OQAomSOk0BidSMBE3GpWjyyOlHWa5yJ
	mjqw==
X-Received: by 10.68.98.3 with SMTP id ee3mr58033258pbb.31.1389189267154;
	Wed, 08 Jan 2014 05:54:27 -0800 (PST)
Received: from [172.26.49.115] ([172.26.49.115])
	by mx.google.com with ESMTPSA id yz5sm3272548pac.9.2014.01.08.05.54.26
	for <multiple recipients>
	(version=SSLv3 cipher=RC4-SHA bits=128/128);
	Wed, 08 Jan 2014 05:54:26 -0800 (PST)
Message-ID: <1389189272.26646.89.camel@edumazet-glaptop2.roam.corp.google.com>
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 08 Jan 2014 05:54:32 -0800
In-Reply-To: <52CD5785.4050402@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>
	<1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>
	<52CD5785.4050402@citrix.com>
X-Mailer: Evolution 3.2.3-0ubuntu6 
Mime-Version: 1.0
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:49 +0000, Zoltan Kiss wrote:
> On 08/01/14 02:12, Eric Dumazet wrote:
> > On Wed, 2014-01-08 at 00:10 +0000, Zoltan Kiss wrote:
> >
> >>
> >> +		if (skb_shinfo(skb)->frag_list) {
> >> +			nskb = skb_shinfo(skb)->frag_list;
> >> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
> >> +			skb->len += nskb->len;
> >> +			skb->data_len += nskb->len;
> >> +			skb->truesize += nskb->truesize;
> >> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> >> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> >> +			vif->tx_zerocopy_sent += 2;
> >> +			nskb = skb;
> >> +
> >> +			skb = skb_copy_expand(skb,
> >> +					0,
> >> +					0,
> >> +					GFP_ATOMIC | __GFP_NOWARN);
> >
> > skb can be NULL here
> 
> Thanks, fixed that.

BTW, I am not sure why you copy the skb.

Is it to get rid of frag_list, and why ?






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:55:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tbX-0006dC-8l; Wed, 08 Jan 2014 13:55:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W0tbV-0006cp-Ge
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 13:55:29 +0000
Received: from [193.109.254.147:55946] by server-12.bemta-14.messagelabs.com
	id EA/CD-13681-0D85DC25; Wed, 08 Jan 2014 13:55:28 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389189327!6077639!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 785 invoked from network); 8 Jan 2014 13:55:28 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 13:55:28 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 4C7B7818D5;
	Wed,  8 Jan 2014 15:55:26 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id DC58B36C01F; Wed,  8 Jan 2014 15:55:25 +0200 (EET)
Date: Wed, 8 Jan 2014 15:55:25 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140108135525.GA2924@reaktio.net>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187148.4883.68.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401081321260.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401081321260.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] qemu-upstream not freeing pirq (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 01:22:09PM +0000, Stefano Stabellini wrote:
> On Wed, 8 Jan 2014, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> > > 
> > > * qemu-upstream not freeing pirq 
> > >  > http://www.gossamer-threads.com/lists/xen/devel/281498
> > >  > http://marc.info/?l=xen-devel&m=137265766424502
> > >  status: patches posted; latest patches need testing
> > >  Not a blocker.
> > 
> > I had it in my mind that this was fixed -- true?
> 
> It was fixed on qemu-traditional. We have a patch for upstream qemu but
> it hasn't been tested because of the other passthrough issues.
> 

Hmm, what other passthrough issues? Should those be added to the list as well?

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0teh-00078X-0Y; Wed, 08 Jan 2014 13:58:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0tef-000786-9x
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:45 +0000
Received: from [85.158.137.68:23540] by server-13.bemta-3.messagelabs.com id
	EC/E7-28603-3995DC25; Wed, 08 Jan 2014 13:58:43 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11817 invoked from network); 8 Jan 2014 13:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715308"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:40 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-1e;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:28 +0000
Message-ID: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Subject: [Xen-devel] [PATCH net-next 0/3] make skb_checksum_setup generally
	available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Both xen-netfront and xen-netback need to be able to set up the partial
checksum offset of an skb and may also need to recalculate the pseudo-
header checksum in the process. This functionality is currently private
and duplicated between the two drivers.

Patch #1 of this series moves the implementation into the core network code
as there is nothing xen-specific about it and it is potentially useful to
any network driver.
Patch #2 removes the private implementation from netback.
Patch #3 removes the private implementation from netfront.
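As a rough userspace sketch (not the kernel code itself), the consolidation
described above amounts to a single entry point that dispatches on the L3
protocol; the helper names below are illustrative stand-ins for the
per-family setup functions:

```c
#include <stdint.h>

#define ETH_P_IP   0x0800  /* IPv4 ethertype */
#define ETH_P_IPV6 0x86DD  /* IPv6 ethertype */
#define EPROTO     71      /* protocol error, as the drivers return it */

/* Illustrative stand-ins for the per-family helpers; in the real series
 * these parse the IP/TCP/UDP headers and set the skb's partial checksum
 * offset, optionally recalculating the pseudo-header checksum. */
static int setup_ip(int recalculate)   { (void)recalculate; return 0; }
static int setup_ipv6(int recalculate) { (void)recalculate; return 0; }

/* Sketch of the shared entry point: dispatch on the protocol and fail
 * with -EPROTO for anything that is neither IPv4 nor IPv6. */
static int checksum_setup_sketch(uint16_t protocol, int recalculate)
{
	switch (protocol) {
	case ETH_P_IP:
		return setup_ip(recalculate);
	case ETH_P_IPV6:
		return setup_ipv6(recalculate);
	default:
		return -EPROTO;
	}
}
```

Moving this dispatch into one shared function is what lets patches #2 and
#3 delete the two near-identical private copies.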


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tej-00079m-Bc; Wed, 08 Jan 2014 13:58:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teh-00078R-6P
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:47 +0000
Received: from [85.158.139.211:56058] by server-3.bemta-5.messagelabs.com id
	0D/E0-04773-6995DC25; Wed, 08 Jan 2014 13:58:46 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389189523!8523004!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9245 invoked from network); 8 Jan 2014 13:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715315"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-55;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:30 +0000
Message-ID: <1389189511-14568-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH net-next 2/3] xen-netback: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/netback.c |  260 +------------------------------------
 1 file changed, 3 insertions(+), 257 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..2605405 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -39,7 +39,6 @@
 #include <linux/udp.h>
 
 #include <net/tcp.h>
-#include <net/ip6_checksum.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -1048,257 +1047,9 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
-				  unsigned int max)
-{
-	if (skb_headlen(skb) >= len)
-		return 0;
-
-	/* If we need to pullup then pullup to the max, so we
-	 * won't need to do it again.
-	 */
-	if (max > skb->len)
-		max = skb->len;
-
-	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
-		return -ENOMEM;
-
-	if (skb_headlen(skb) < len)
-		return -EPROTO;
-
-	return 0;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * maximally sized IP and TCP or UDP headers.
- */
-#define MAX_IP_HDR_LEN 128
-
-static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
-			     int recalculate_partial_csum)
-{
-	unsigned int off;
-	bool fragment;
-	int err;
-
-	fragment = false;
-
-	err = maybe_pull_tail(skb,
-			      sizeof(struct iphdr),
-			      MAX_IP_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
-		fragment = true;
-
-	off = ip_hdrlen(skb);
-
-	err = -EPROTO;
-
-	if (fragment)
-		goto out;
-
-	switch (ip_hdr(skb)->protocol) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * an IPv6 header, all options, and a maximal TCP or UDP header.
- */
-#define MAX_IPV6_HDR_LEN 256
-
-#define OPT_HDR(type, skb, off) \
-	(type *)(skb_network_header(skb) + (off))
-
-static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
-			       int recalculate_partial_csum)
-{
-	int err;
-	u8 nexthdr;
-	unsigned int off;
-	unsigned int len;
-	bool fragment;
-	bool done;
-
-	fragment = false;
-	done = false;
-
-	off = sizeof(struct ipv6hdr);
-
-	err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	nexthdr = ipv6_hdr(skb)->nexthdr;
-
-	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
-	while (off <= len && !done) {
-		switch (nexthdr) {
-		case IPPROTO_DSTOPTS:
-		case IPPROTO_HOPOPTS:
-		case IPPROTO_ROUTING: {
-			struct ipv6_opt_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ipv6_opt_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_optlen(hp);
-			break;
-		}
-		case IPPROTO_AH: {
-			struct ip_auth_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ip_auth_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_authlen(hp);
-			break;
-		}
-		case IPPROTO_FRAGMENT: {
-			struct frag_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct frag_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct frag_hdr, skb, off);
-
-			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
-				fragment = true;
-
-			nexthdr = hp->nexthdr;
-			off += sizeof(struct frag_hdr);
-			break;
-		}
-		default:
-			done = true;
-			break;
-		}
-	}
-
-	err = -EPROTO;
-
-	if (!done || fragment)
-		goto out;
-
-	switch (nexthdr) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
 static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 {
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/* A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
 	 * peers can fail to set NETRXF_csum_blank when sending a GSO
@@ -1308,19 +1059,14 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		vif->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol == htons(ETH_P_IP))
-		err = checksum_setup_ip(vif, skb, recalculate_partial_csum);
-	else if (skb->protocol == htons(ETH_P_IPV6))
-		err = checksum_setup_ipv6(vif, skb, recalculate_partial_csum);
-
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tei-000794-TU; Wed, 08 Jan 2014 13:58:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teg-00078P-Sn
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:47 +0000
Received: from [85.158.137.68:19109] by server-5.bemta-3.messagelabs.com id
	31/AD-25188-6995DC25; Wed, 08 Jan 2014 13:58:46 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12001 invoked from network); 8 Jan 2014 13:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715314"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-68;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:31 +0000
Message-ID: <1389189511-14568-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next 3/3] xen-netfront: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++-----------------------------------------
 1 file changed, 3 insertions(+), 45 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..c41537b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -859,9 +859,7 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
 
 static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 {
-	struct iphdr *iph;
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/*
 	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
@@ -873,54 +871,14 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 		struct netfront_info *np = netdev_priv(dev);
 		np->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol != htons(ETH_P_IP))
-		goto out;
-
-	iph = (void *)skb->data;
-
-	switch (iph->protocol) {
-	case IPPROTO_TCP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct tcphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_TCP, 0);
-		}
-		break;
-	case IPPROTO_UDP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct udphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_UDP, 0);
-		}
-		break;
-	default:
-		if (net_ratelimit())
-			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
-			       iph->protocol);
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static int handle_incoming_queue(struct net_device *dev,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tei-00078t-EU; Wed, 08 Jan 2014 13:58:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teg-00078D-Bo
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:46 +0000
Received: from [85.158.137.68:33223] by server-8.bemta-3.messagelabs.com id
	C4/A7-31081-5995DC25; Wed, 08 Jan 2014 13:58:45 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11915 invoked from network); 8 Jan 2014 13:58:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715313"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-46;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:29 +0000
Message-ID: <1389189511-14568-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: [Xen-devel] [PATCH net-next 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds to the core network code a function that sets up the
partial checksum offset for IP packets (and optionally recalculates the
pseudo-header checksum in the process).
The implementation was previously private and duplicated between
xen-netback and xen-netfront; however, it is not xen-specific and is
potentially useful to any network driver.
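For context, a userspace approximation (not the kernel's implementation):
the seed the function writes into tcph->check/udph->check is the bitwise
complement of the IPv4 pseudo-header checksum that csum_tcpudp_magic
computes, roughly:

```c
#include <stdint.h>

/* Userspace approximation of csum_tcpudp_magic(): fold the IPv4
 * pseudo-header (source address, destination address, protocol and
 * TCP/UDP length) into a 16-bit one's-complement sum. Addresses and
 * length are taken in host byte order here for simplicity; the kernel
 * works on network-order header fields. */
static uint16_t pseudo_hdr_csum(uint32_t saddr, uint32_t daddr,
				uint32_t len, uint8_t proto)
{
	uint64_t sum = 0;

	sum += (saddr >> 16) + (saddr & 0xffff); /* saddr as two 16-bit words */
	sum += (daddr >> 16) + (daddr & 0xffff); /* daddr likewise */
	sum += proto;                            /* zero-padded protocol byte */
	sum += len;                              /* TCP/UDP length */

	/* Fold carries back into the low 16 bits (one's-complement add). */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)sum;
}
```

Storing the complement of this value as the partial checksum seed means
that later checksum completion over the payload (by hardware or by
skb_checksum_help) yields the correct final one's-complement checksum.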

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Veaceslav Falico <vfalico@redhat.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
---
 include/linux/netdevice.h |    1 +
 net/core/dev.c            |  271 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 272 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index a2a70cc..15b1003 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2940,6 +2940,7 @@ void netdev_upper_dev_unlink(struct net_device *dev,
 void *netdev_lower_dev_get_private(struct net_device *dev,
 				   struct net_device *lower_dev);
 int skb_checksum_help(struct sk_buff *skb);
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
 struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
 				  netdev_features_t features, bool tx_path);
 struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb,
diff --git a/net/core/dev.c b/net/core/dev.c
index ce01847..cf9fc30 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -101,6 +101,7 @@
 #include <net/dst.h>
 #include <net/pkt_sched.h>
 #include <net/checksum.h>
+#include <net/ip6_checksum.h>
 #include <net/xfrm.h>
 #include <linux/highmem.h>
 #include <linux/init.h>
@@ -2281,6 +2282,276 @@ out:
 }
 EXPORT_SYMBOL(skb_checksum_help);
 
+static inline int skb_maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+				      unsigned int max)
+{
+	if (skb_headlen(skb) >= len)
+		return 0;
+
+	/* If we need to pullup then pullup to the max, so we
+	 * won't need to do it again.
+	 */
+	if (max > skb->len)
+		max = skb->len;
+
+	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+		return -ENOMEM;
+
+	if (skb_headlen(skb) < len)
+		return -EPROTO;
+
+	return 0;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
+static int skb_checksum_setup_ip(struct sk_buff *skb, bool recalculate)
+{
+	unsigned int off;
+	bool fragment;
+	int err;
+
+	fragment = false;
+
+	err = skb_maybe_pull_tail(skb,
+				  sizeof(struct iphdr),
+				  MAX_IP_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+		fragment = true;
+
+	off = ip_hdrlen(skb);
+
+	err = -EPROTO;
+
+	if (fragment)
+		goto out;
+
+	switch (ip_hdr(skb)->protocol) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+	(type *)(skb_network_header(skb) + (off))
+
+static int skb_checksum_setup_ipv6(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+	u8 nexthdr;
+	unsigned int off;
+	unsigned int len;
+	bool fragment;
+	bool done;
+
+	fragment = false;
+	done = false;
+
+	off = sizeof(struct ipv6hdr);
+
+	err = skb_maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	nexthdr = ipv6_hdr(skb)->nexthdr;
+
+	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+	while (off <= len && !done) {
+		switch (nexthdr) {
+		case IPPROTO_DSTOPTS:
+		case IPPROTO_HOPOPTS:
+		case IPPROTO_ROUTING: {
+			struct ipv6_opt_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ipv6_opt_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_optlen(hp);
+			break;
+		}
+		case IPPROTO_AH: {
+			struct ip_auth_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ip_auth_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_authlen(hp);
+			break;
+		}
+		case IPPROTO_FRAGMENT: {
+			struct frag_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct frag_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct frag_hdr, skb, off);
+
+			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+				fragment = true;
+
+			nexthdr = hp->nexthdr;
+			off += sizeof(struct frag_hdr);
+			break;
+		}
+		default:
+			done = true;
+			break;
+		}
+	}
+
+	err = -EPROTO;
+
+	if (!done || fragment)
+		goto out;
+
+	switch (nexthdr) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/* Set up partial checksum offset and optionally recalculate pseudo header
+ * checksum.
+ */
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		err = skb_checksum_setup_ip(skb, recalculate);
+		break;
+
+	case htons(ETH_P_IPV6):
+		err = skb_checksum_setup_ipv6(skb, recalculate);
+		break;
+
+	default:
+		err = -EPROTO;
+		break;
+	}
+
+	return err;
+}
+EXPORT_SYMBOL(skb_checksum_setup);
+
 __be16 skb_network_protocol(struct sk_buff *skb)
 {
 	__be16 type = skb->protocol;
-- 
1.7.10.4
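
A note on the fragment bail-out in skb_checksum_setup_ip() above: partial checksum setup must refuse IPv4 fragments, because only the first fragment carries the transport header and the payload is incomplete in any case. The test reduces to checking the 13-bit fragment offset and the More Fragments bit. A minimal standalone sketch (host byte order, hypothetical helper name; the kernel does the equivalent comparison in network order via `ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF)`):

```c
#include <assert.h>
#include <stdint.h>

/* IPv4 frag_off field layout: the low 13 bits are the fragment offset
 * (IP_OFFSET) and bit 13 is More Fragments (IP_MF).  A packet belongs
 * to a fragment chain if either is non-zero.
 */
#define IP_OFFSET 0x1FFF
#define IP_MF     0x2000

/* Hypothetical demo helper, not a kernel API. */
static int ip_is_fragment_demo(uint16_t frag_off_host)
{
	return (frag_off_host & (IP_OFFSET | IP_MF)) != 0;
}
```

Note that a packet with only the Don't Fragment bit (0x4000) set is not a fragment and passes the check.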


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0teh-00078X-0Y; Wed, 08 Jan 2014 13:58:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0tef-000786-9x
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:45 +0000
Received: from [85.158.137.68:23540] by server-13.bemta-3.messagelabs.com id
	EC/E7-28603-3995DC25; Wed, 08 Jan 2014 13:58:43 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11817 invoked from network); 8 Jan 2014 13:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715308"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:40 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-1e;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:28 +0000
Message-ID: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Subject: [Xen-devel] [PATCH net-next 0/3] make skb_checksum_setup generally
	available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Both xen-netfront and xen-netback need to be able to set up the partial
checksum offset of an skb and may also need to recalculate the pseudo-
header checksum in the process. This functionality is currently private
and duplicated between the two drivers.

Patch #1 of this series moves the implementation into the core network code
as there is nothing xen-specific about it and it is potentially useful to
any network driver.
Patch #2 removes the private implementation from netback.
Patch #3 removes the private implementation from netfront.
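
For reference, the pseudo-header recalculation that skb_checksum_setup performs (via csum_tcpudp_magic() in the kernel) can be sketched outside the kernel. The helper names below are hypothetical; only the ones-complement arithmetic mirrors the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit ones-complement accumulator into 16 bits, as the
 * kernel's csum_fold() does. */
static uint16_t fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Sketch of the IPv4 pseudo-header sum computed by csum_tcpudp_magic():
 * source address, destination address, upper-layer length and protocol,
 * summed as 16-bit words in ones-complement arithmetic.  The drivers
 * store the complement (~) of this value in the transport checksum
 * field so that hardware completing the checksum over the payload
 * produces a valid result. */
static uint16_t pseudo_hdr_sum(uint32_t saddr, uint32_t daddr,
			       uint16_t len, uint8_t proto)
{
	uint32_t sum = 0;

	sum += saddr >> 16;
	sum += saddr & 0xffff;
	sum += daddr >> 16;
	sum += daddr & 0xffff;
	sum += proto;
	sum += len;
	return fold16(sum);
}
```

For example, a TCP segment of 20 bytes between 192.168.0.1 (0xC0A80001) and 192.168.0.2 (0xC0A80002) yields a pseudo-header sum of 0x816e.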



From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tei-000794-TU; Wed, 08 Jan 2014 13:58:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teg-00078P-Sn
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:47 +0000
Received: from [85.158.137.68:19109] by server-5.bemta-3.messagelabs.com id
	31/AD-25188-6995DC25; Wed, 08 Jan 2014 13:58:46 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12001 invoked from network); 8 Jan 2014 13:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715314"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-68;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:31 +0000
Message-ID: <1389189511-14568-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next 3/3] xen-netfront: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++-----------------------------------------
 1 file changed, 3 insertions(+), 45 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..c41537b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -859,9 +859,7 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
 
 static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 {
-	struct iphdr *iph;
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/*
 	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
@@ -873,54 +871,14 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 		struct netfront_info *np = netdev_priv(dev);
 		np->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol != htons(ETH_P_IP))
-		goto out;
-
-	iph = (void *)skb->data;
-
-	switch (iph->protocol) {
-	case IPPROTO_TCP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct tcphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_TCP, 0);
-		}
-		break;
-	case IPPROTO_UDP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct udphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_UDP, 0);
-		}
-		break;
-	default:
-		if (net_ratelimit())
-			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
-			       iph->protocol);
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static int handle_incoming_queue(struct net_device *dev,
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tej-00079m-Bc; Wed, 08 Jan 2014 13:58:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teh-00078R-6P
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:47 +0000
Received: from [85.158.139.211:56058] by server-3.bemta-5.messagelabs.com id
	0D/E0-04773-6995DC25; Wed, 08 Jan 2014 13:58:46 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389189523!8523004!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9245 invoked from network); 8 Jan 2014 13:58:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715315"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-55;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:30 +0000
Message-ID: <1389189511-14568-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH net-next 2/3] xen-netback: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/netback.c |  260 +------------------------------------
 1 file changed, 3 insertions(+), 257 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..2605405 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -39,7 +39,6 @@
 #include <linux/udp.h>
 
 #include <net/tcp.h>
-#include <net/ip6_checksum.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -1048,257 +1047,9 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
-				  unsigned int max)
-{
-	if (skb_headlen(skb) >= len)
-		return 0;
-
-	/* If we need to pullup then pullup to the max, so we
-	 * won't need to do it again.
-	 */
-	if (max > skb->len)
-		max = skb->len;
-
-	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
-		return -ENOMEM;
-
-	if (skb_headlen(skb) < len)
-		return -EPROTO;
-
-	return 0;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * maximally sized IP and TCP or UDP headers.
- */
-#define MAX_IP_HDR_LEN 128
-
-static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
-			     int recalculate_partial_csum)
-{
-	unsigned int off;
-	bool fragment;
-	int err;
-
-	fragment = false;
-
-	err = maybe_pull_tail(skb,
-			      sizeof(struct iphdr),
-			      MAX_IP_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
-		fragment = true;
-
-	off = ip_hdrlen(skb);
-
-	err = -EPROTO;
-
-	if (fragment)
-		goto out;
-
-	switch (ip_hdr(skb)->protocol) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * an IPv6 header, all options, and a maximal TCP or UDP header.
- */
-#define MAX_IPV6_HDR_LEN 256
-
-#define OPT_HDR(type, skb, off) \
-	(type *)(skb_network_header(skb) + (off))
-
-static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
-			       int recalculate_partial_csum)
-{
-	int err;
-	u8 nexthdr;
-	unsigned int off;
-	unsigned int len;
-	bool fragment;
-	bool done;
-
-	fragment = false;
-	done = false;
-
-	off = sizeof(struct ipv6hdr);
-
-	err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	nexthdr = ipv6_hdr(skb)->nexthdr;
-
-	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
-	while (off <= len && !done) {
-		switch (nexthdr) {
-		case IPPROTO_DSTOPTS:
-		case IPPROTO_HOPOPTS:
-		case IPPROTO_ROUTING: {
-			struct ipv6_opt_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ipv6_opt_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_optlen(hp);
-			break;
-		}
-		case IPPROTO_AH: {
-			struct ip_auth_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ip_auth_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_authlen(hp);
-			break;
-		}
-		case IPPROTO_FRAGMENT: {
-			struct frag_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct frag_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct frag_hdr, skb, off);
-
-			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
-				fragment = true;
-
-			nexthdr = hp->nexthdr;
-			off += sizeof(struct frag_hdr);
-			break;
-		}
-		default:
-			done = true;
-			break;
-		}
-	}
-
-	err = -EPROTO;
-
-	if (!done || fragment)
-		goto out;
-
-	switch (nexthdr) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
 static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 {
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/* A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
 	 * peers can fail to set NETRXF_csum_blank when sending a GSO
@@ -1308,19 +1059,14 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		vif->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol == htons(ETH_P_IP))
-		err = checksum_setup_ip(vif, skb, recalculate_partial_csum);
-	else if (skb->protocol == htons(ETH_P_IPV6))
-		err = checksum_setup_ipv6(vif, skb, recalculate_partial_csum);
-
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 08 13:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tei-00078t-EU; Wed, 08 Jan 2014 13:58:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W0teg-00078D-Bo
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 13:58:46 +0000
Received: from [85.158.137.68:33223] by server-8.bemta-3.messagelabs.com id
	C4/A7-31081-5995DC25; Wed, 08 Jan 2014 13:58:45 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389189522!7942722!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11915 invoked from network); 8 Jan 2014 13:58:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 13:58:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88715313"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 13:58:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 08:58:41 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W0teb-0000xF-46;
	Wed, 08 Jan 2014 13:58:41 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 13:58:29 +0000
Message-ID: <1389189511-14568-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: [Xen-devel] [PATCH net-next 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a function to the core network code that sets up the
partial checksum offset for IP packets (and optionally re-calculates
the pseudo-header checksum).
The implementation was previously private and duplicated between xen-netback
and xen-netfront; however, it is not xen-specific and is potentially useful
to any network driver.
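
As a rough illustration of the IPv6 extension-header walk performed by the moved skb_checksum_setup_ipv6(), here is a standalone sketch. It operates on a plain byte buffer rather than an sk_buff, the names are hypothetical, and it handles only the generic option headers (hop-by-hop, routing, destination options), whose length is encoded as (hdrlen + 1) * 8 bytes — the same quantity the kernel's ipv6_optlen() computes:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* IPv6 next-header protocol numbers (subset). */
enum { NEXTHDR_HOP = 0, NEXTHDR_TCP = 6, NEXTHDR_ROUTING = 43, NEXTHDR_DEST = 60 };

/* Demo only: walk extension headers starting at `off` (which would be
 * 40, the fixed IPv6 header size, in a real packet), store the offset
 * of the upper-layer header in *l4off and return its protocol number,
 * or -1 if the walk runs off the end of the buffer.  Each option
 * header begins with a next-header byte followed by a length byte. */
static int ipv6_skip_exthdr_demo(const uint8_t *pkt, size_t len,
				 uint8_t nexthdr, size_t off, size_t *l4off)
{
	while (off + 2 <= len) {
		switch (nexthdr) {
		case NEXTHDR_HOP:
		case NEXTHDR_ROUTING:
		case NEXTHDR_DEST:
			nexthdr = pkt[off];                    /* next header */
			off += ((size_t)pkt[off + 1] + 1) * 8; /* option length */
			break;
		default:
			*l4off = off;	/* reached the upper-layer header */
			return nexthdr;
		}
	}
	return -1;
}

/* An 8-byte hop-by-hop header (next = TCP, hdrlen = 0) followed by the
 * start of a TCP header. */
static const uint8_t demo_pkt[16] = { NEXTHDR_TCP, 0 };
```

The real function additionally parses AH and fragment headers and records whether the packet is a fragment, in which case checksum setup is refused just as in the IPv4 case.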

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Veaceslav Falico <vfalico@redhat.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
---
 include/linux/netdevice.h |    1 +
 net/core/dev.c            |  271 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 272 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index a2a70cc..15b1003 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2940,6 +2940,7 @@ void netdev_upper_dev_unlink(struct net_device *dev,
 void *netdev_lower_dev_get_private(struct net_device *dev,
 				   struct net_device *lower_dev);
 int skb_checksum_help(struct sk_buff *skb);
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
 struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
 				  netdev_features_t features, bool tx_path);
 struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb,
diff --git a/net/core/dev.c b/net/core/dev.c
index ce01847..cf9fc30 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -101,6 +101,7 @@
 #include <net/dst.h>
 #include <net/pkt_sched.h>
 #include <net/checksum.h>
+#include <net/ip6_checksum.h>
 #include <net/xfrm.h>
 #include <linux/highmem.h>
 #include <linux/init.h>
@@ -2281,6 +2282,276 @@ out:
 }
 EXPORT_SYMBOL(skb_checksum_help);
 
+static inline int skb_maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+				      unsigned int max)
+{
+	if (skb_headlen(skb) >= len)
+		return 0;
+
+	/* If we need to pullup then pullup to the max, so we
+	 * won't need to do it again.
+	 */
+	if (max > skb->len)
+		max = skb->len;
+
+	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+		return -ENOMEM;
+
+	if (skb_headlen(skb) < len)
+		return -EPROTO;
+
+	return 0;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
+static int skb_checksum_setup_ip(struct sk_buff *skb, bool recalculate)
+{
+	unsigned int off;
+	bool fragment;
+	int err;
+
+	fragment = false;
+
+	err = skb_maybe_pull_tail(skb,
+				  sizeof(struct iphdr),
+				  MAX_IP_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+		fragment = true;
+
+	off = ip_hdrlen(skb);
+
+	err = -EPROTO;
+
+	if (fragment)
+		goto out;
+
+	switch (ip_hdr(skb)->protocol) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+	(type *)(skb_network_header(skb) + (off))
+
+static int skb_checksum_setup_ipv6(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+	u8 nexthdr;
+	unsigned int off;
+	unsigned int len;
+	bool fragment;
+	bool done;
+
+	fragment = false;
+	done = false;
+
+	off = sizeof(struct ipv6hdr);
+
+	err = skb_maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	nexthdr = ipv6_hdr(skb)->nexthdr;
+
+	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+	while (off <= len && !done) {
+		switch (nexthdr) {
+		case IPPROTO_DSTOPTS:
+		case IPPROTO_HOPOPTS:
+		case IPPROTO_ROUTING: {
+			struct ipv6_opt_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ipv6_opt_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_optlen(hp);
+			break;
+		}
+		case IPPROTO_AH: {
+			struct ip_auth_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ip_auth_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_authlen(hp);
+			break;
+		}
+		case IPPROTO_FRAGMENT: {
+			struct frag_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct frag_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct frag_hdr, skb, off);
+
+			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+				fragment = true;
+
+			nexthdr = hp->nexthdr;
+			off += sizeof(struct frag_hdr);
+			break;
+		}
+		default:
+			done = true;
+			break;
+		}
+	}
+
+	err = -EPROTO;
+
+	if (!done || fragment)
+		goto out;
+
+	switch (nexthdr) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/* Set up partial checksum offset and optionally recalculate pseudo header
+ * checksum.
+ */
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		err = skb_checksum_setup_ip(skb, recalculate);
+		break;
+
+	case htons(ETH_P_IPV6):
+		err = skb_checksum_setup_ipv6(skb, recalculate);
+		break;
+
+	default:
+		err = -EPROTO;
+		break;
+	}
+
+	return err;
+}
+EXPORT_SYMBOL(skb_checksum_setup);
+
 __be16 skb_network_protocol(struct sk_buff *skb)
 {
 	__be16 type = skb->protocol;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:01:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0thN-00088P-BQ; Wed, 08 Jan 2014 14:01:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0thL-000885-VI
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:01:32 +0000
Received: from [85.158.139.211:61352] by server-11.bemta-5.messagelabs.com id
	DD/2C-23268-B3A5DC25; Wed, 08 Jan 2014 14:01:31 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389189689!8375600!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28524 invoked from network); 8 Jan 2014 14:01:30 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 14:01:30 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 14:01:28 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="626253918"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 14:01:26 +0000
Message-ID: <52CD5A36.4030504@terremark.com>
Date: Wed, 08 Jan 2014 09:01:26 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	
	<1389140748-26524-3-git-send-email-dslutz@verizon.com>	
	<52CCA204.2020601@citrix.com>
	<1389177658.4883.13.camel@kazak.uk.xensource.com>
In-Reply-To: <1389177658.4883.13.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 2/5] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/14 05:40, Ian Campbell wrote:
> On Wed, 2014-01-08 at 00:55 +0000, Andrew Cooper wrote:
>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> Technically this should include a Signed-off-by: Andrew Cooper
>> <andrew.cooper3@citrix.com> tag (being the author of the code)
> If you are the author then the first line of the mail should also be
>
> From: Andrew Cooper <andre...@citrix.com>
>
> Don, if you git commit --author='....' (or --amend to fixup an existing
> commit) then git send-email will do the right thing.

I have taken my best shot at doing the right thing here in a different email.

>> Ian (with RM hat on):
>>    This is a hypervisor reference counting error on a toolstack hypercall
>> path.  Irrespective of any of the other patches in this series, I think
>> this should be included ASAP (although probably subject to review from a
>> third person), which will fix the hypervisor crashes from gdbsx usage.
> I've already given this stuff a release Ack, subject to Mukesh
> approving it and someone saying for sure that this patch can only affect
> debuggers.

For what it is worth, I will say that this patch can only affect debuggers.

     -Don Slutz


> Ian.
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:08:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tnk-00006Y-3O; Wed, 08 Jan 2014 14:08:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0tnj-00006P-8H
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:08:07 +0000
Received: from [193.109.254.147:38735] by server-9.bemta-14.messagelabs.com id
	83/D4-13957-6CB5DC25; Wed, 08 Jan 2014 14:08:06 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389190084!9600635!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3735 invoked from network); 8 Jan 2014 14:08:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:08:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88719806"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:08:04 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:08:03 -0500
Message-ID: <52CD5BC2.5020803@citrix.com>
Date: Wed, 8 Jan 2014 14:08:02 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Miller <davem@davemloft.net>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140107.202951.729942261773265015.davem@davemloft.net>
In-Reply-To: <20140107.202951.729942261773265015.davem@davemloft.net>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 01:29, David Miller wrote:
>> +static inline int tx_dealloc_work_todo(struct xenvif *vif)
>
> Make this return bool.
Done, also in the last patch.

>> +		wait_event_interruptible(vif->dealloc_wq,
>> +					tx_dealloc_work_todo(vif) ||
>> +					 kthread_should_stop());
>
> Inconsistent indentation.  You should make the arguments line up at
> exactly the first column after the opening parenthesis of the function
> call.
Done, thanks.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:09:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tom-0000Sn-Te; Wed, 08 Jan 2014 14:09:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0toi-0000Ra-Sv
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:09:09 +0000
Received: from [85.158.137.68:13215] by server-8.bemta-3.messagelabs.com id
	22/D9-31081-30C5DC25; Wed, 08 Jan 2014 14:09:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389190144!7905346!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5620 invoked from network); 8 Jan 2014 14:09:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:09:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88720165"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:09:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 09:09:02 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W0tob-0003zG-W0;
	Wed, 08 Jan 2014 14:09:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 14:09:01 +0000
Message-ID: <1389190141-29262-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] xen: arm: force guest memory accesses to
	cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, guest OSes are started with the MMU and caches disabled (as they are
on native hardware); however, caching is enabled in the domain running the
builder, and therefore we must ensure cache consistency.

The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
cache after loading images into guest memory") is to flush the caches after
loading the various blobs into guest RAM. However, this approach has two
shortcomings:

 - The cache flush primitives available to userspace on arm32 are not
   sufficient for our needs.
 - There is a race between the cache flush and the unmap of the guest page
   where the processor might speculatively dirty the cache line again.

(of these the second is the more fundamental)

This patch makes use of the hardware functionality to force all accesses made
from guest mode to be cached (the HCR.DC, "default cacheable", bit). This
means that we don't need to worry about the domain builder's writes being
cached, because the guest's "uncached" accesses will actually be cached.

Unfortunately the use of HCR.DC is incompatible with the guest enabling its
MMU (the SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we
can detect when this happens and disable HCR.DC. This is done with the HCR.TVM
(trap virtual memory controls) bit, which also causes various other registers
to be trapped, all of which can be passed straight through to the underlying
register. Once the guest has enabled its MMU we no longer need to trap, so
there is no ongoing overhead. In my tests Linux makes about half a dozen
accesses to these registers before the MMU is enabled; I would expect other
OSes to behave similarly (the sequence of writes needed to set up the MMU is
pretty obvious).

Apart from this unfortunate need to trap these accesses, this approach is
incompatible with guests which attempt to do DMA operations with their MMU
disabled. In practice this means guests with passthrough, which we do not yet
support. Since a typical guest (including dom0) does not access devices which
require DMA until after it is fully up and running with paging enabled, the
main risk is to in-guest firmware which does DMA, e.g. running EFI in a guest
with a disk passed through and booting from that disk. Since we know that dom0
is not using any such firmware, and we do not support device passthrough to
guests yet, we can live with this restriction. Once passthrough is implemented
this will need to be revisited.

The patch includes a couple of seemingly unrelated but necessary changes:

 - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
   with the existing set of system registers we handled, but broke with the
   new ones introduced here.
 - The defines used to decode the HSR system register fields were named the
   same as the register. This breaks the accessor macros. This had gone
   unnoticed because the handling of the existing trapped registers did not
   require accessing the underlying hardware register. Rename those constants
   with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).

This patch has survived thousands of boot loops on a Midway system.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
---
v2:
 - ensure that HCR.DC is permanently disabled, even if the guest turns off
   SCTLR.M
 - flush tlb after disabling HCR.DC.
 - There is no need to set HCR.DC while building dom0.

My preferred solution here would be for the tools to use an uncached mapping
of guest memory when building the guest, which requires adding a new privcmd
ioctl (a relatively straightforward patch) and plumbing a "cached" flag
through the libxc foreign mapping interfaces (a twisty maze of passages, all
alike).  IMO the libxc side of this patch was not looking suitable for a 4.4
freeze exception, since it was quite large (because we have 4 or more mapping
interfaces in libxc, some of which call back into others).

So I propose this version for 4.4. The uncached mapping solution should be
revisited for a future release.

At the risk of appearing to be going mad:

<speaking hat="submitter">

This bug results in memory corruption in the guest, which mostly manifests as
a failure to boot the guest (subsequent bad behaviour is possible but, I
think, unlikely); the frequency of failure is perhaps 1 in 10 boots. This
would not constitute an awesome release.

Although the patch is large, most of it is repetitive and mechanical (made
explicit through the use of macros in many cases). The biggest risk is that
one of the registers is not passed through correctly (i.e. the wrong size or
target registers). The ones which Linux uses have been tested and appear to
function OK.  The others might be buggy but this is mitigated through the use
of the same set of macros.

I think the chance of the patch having a bug wrt my understanding of the
hardware behaviour is pretty low. WRT there being bugs in my understanding of
the hardware documentation, I would say middle to low, but I have discussed it
with some folks at ARM and they didn't call me an idiot (in fact pretty much
the same thing has been proposed for KVM).

Overall I think the benefits outweigh the risks.

One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
It's reasonably recent so reverting it takes us back to a pretty well
understood state in the libraries. The functionality is harmless if
incomplete. I think given the first argument I would lean towards reverting.

</speaking>

<speaking hat="stand in release manager">

OK.

</speaking>

Obviously if you think I'm being too easy on myself please say so.

Actually, if you think my judgement is right I'd appreciate being told so too.
---
 xen/arch/arm/domain.c           |    7 ++
 xen/arch/arm/traps.c            |  163 ++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 +
 xen/include/asm-arm/domain.h    |    2 +
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 ++++-
 7 files changed, 194 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 124cccf..635a9a4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,6 +19,7 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
+#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -219,6 +220,11 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
+    if ( n->arch.default_cache )
+        hcr |= (HCR_TVM|HCR_DC);
+    else
+        hcr &= ~(HCR_TVM|HCR_DC);
+
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -469,6 +475,7 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
+    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7c5ab19..48a6fcc 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,12 +29,14 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
+#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
+#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1279,6 +1281,29 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
+static void update_sctlr(struct vcpu *v, uint32_t val)
+{
+    /*
+     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
+     * because they are incompatible.
+     *
+     * Once HCR.DC is disabled then we do not need HCR_TVM either,
+     * since its only purpose was to catch the MMU being enabled.
+     *
+     * Both are set appropriately on context switch but we need to
+     * clear them now since we may not context switch on return to
+     * guest.
+     */
+    if ( val & SCTLR_M )
+    {
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
+        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
+         * VMID requires us to flush the TLB for that VMID. */
+        flush_tlb();
+        v->arch.default_cache = false;
+    }
+}
+
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1338,6 +1363,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
+
+/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
+#define CP32_PASSTHRU32(R...) do {              \
+    if ( cp32.read )                            \
+        *r = READ_SYSREG32(R);                  \
+    else                                        \
+        WRITE_SYSREG32(*r, R);                  \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates the lower 32-bits and clears the upper bits.
+ */
+#define CP32_PASSTHRU64(R...) do {              \
+    if ( cp32.read )                            \
+        *r = (uint32_t)READ_SYSREG64(R);        \
+    else                                        \
+        WRITE_SYSREG64((uint64_t)*r, R);        \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
+ * the other half.
+ */
+#ifdef CONFIG_ARM_64
+#define CP32_PASSTHRU64_HI(R...) do {                   \
+    if ( cp32.read )                                    \
+        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
+    else                                                \
+    {                                                   \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
+        t |= ((uint64_t)(*r)) << 32;                    \
+        WRITE_SYSREG64(t, R);                           \
+    }                                                   \
+} while(0)
+#define CP32_PASSTHRU64_LO(R...) do {                           \
+    if ( cp32.read )                                            \
+        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
+    else                                                        \
+    {                                                           \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
+        t |= *r;                                                \
+        WRITE_SYSREG64(t, R);                                   \
+    }                                                           \
+} while(0)
+#endif
+
+    /* HCR.TVM */
+    case HSR_CPREG32(SCTLR):
+        CP32_PASSTHRU32(SCTLR_EL1);
+        update_sctlr(v, *r);
+        break;
+    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
+    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
+    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
+    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
+    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
+    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
+    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
+    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
+    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
+
+#ifdef CONFIG_ARM_64
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
+#else
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
+#endif
+
+#undef CP32_PASSTHRU32
+#undef CP32_PASSTHRU64
+#undef CP32_PASSTHRU64_LO
+#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1351,6 +1459,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
+    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
+    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1368,6 +1479,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define CP64_PASSTHRU(R...) do {                                  \
+    if ( cp64.read )                                            \
+    {                                                           \
+        r = READ_SYSREG64(R);                                   \
+        *r1 = r & 0xffffffffUL;                                 \
+        *r2 = r >> 32;                                          \
+    }                                                           \
+    else                                                        \
+    {                                                           \
+        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
+        WRITE_SYSREG64(r, R);                                   \
+    }                                                           \
+} while(0)
+
+    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
+    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
+
+#undef CP64_PASSTHRU
+
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1382,11 +1513,13 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
+    register_t *x = select_user_reg(regs, sysreg.reg);
+    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1394,6 +1527,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define SYSREG_PASSTHRU(R...) do {              \
+    if ( sysreg.read )                          \
+        *x = READ_SYSREG(R);                    \
+    else                                        \
+        WRITE_SYSREG(*x, R);                    \
+} while(0)
+
+    case HSR_SYSREG_SCTLR_EL1:
+        SYSREG_PASSTHRU(SCTLR_EL1);
+        update_sctlr(v, *x);
+        break;
+    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
+    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
+    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
+    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
+    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
+    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
+    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
+    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
+    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
+    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
+
+#undef SYSREG_PASSTHRU
+
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
@@ -1466,7 +1624,6 @@ done:
     if (first) unmap_domain_page(first);
 }
 
-
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 433ad55..e325f78 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_CPREG64(CNTPCT):
+    case HSR_SYSREG_CNTPCT_EL0:
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index f0f1d53..508467a 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,6 +121,8 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
+#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
+#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -260,7 +262,9 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
+#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
+#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bc20a15..af8c64b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,6 +257,8 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
+    bool_t default_cache;
+
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index dfe807d..06e638f 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003800)
+#define HSR_SYSREG_CRN_MASK (0x00003c00)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 48ad07e..0cee0e9 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,8 +40,23 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
-#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
+#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
+#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
+#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
+#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
+#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
+#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
+#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
+#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
+#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
+#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
+#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
+
+#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
+#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
+#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
+
+
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:09:15 2014
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 14:09:01 +0000
Message-ID: <1389190141-29262-1-git-send-email-ian.campbell@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] xen: arm: force guest memory accesses to
	cacheable when MMU is disabled

On ARM, guest OSes are started with the MMU and caches disabled (as they are on
native hardware); however, caching is enabled in the domain running the domain
builder, and therefore we must ensure cache consistency.

The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
cache after loading images into guest memory") is to flush the caches after
loading the various blobs into guest RAM. However this approach has two
shortcomings:

 - The cache flush primitives available to userspace on arm32 are not
   sufficient for our needs.
 - There is a race between the cache flush and the unmap of the guest page,
   during which the processor might speculatively dirty the cache line again.

(Of these, the second is the more fundamental.)

This patch instead uses the hardware's ability to force all accesses made from
guest mode to be cached (the HCR.DC, "default cacheable", bit). This means
that we don't need to worry about the domain builder's writes being cached,
because the guest's "uncached" accesses will actually be cached.

Unfortunately the use of HCR.DC is incompatible with the guest enabling its
MMU (the SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we
can detect when this happens and disable HCR.DC. This is done with the HCR.TVM
(trap virtual memory controls) bit, which also causes various other registers
to be trapped, all of which can be passed straight through to the underlying
register. Once the guest has enabled its MMU we no longer need to trap, so
there is no ongoing overhead. In my tests Linux makes about half a dozen
accesses to these registers before the MMU is enabled; I would expect other
OSes to behave similarly (the sequence of writes needed to set up the MMU is
pretty obvious).

Apart from this unfortunate need to trap these accesses, the approach is
incompatible with guests which attempt to do DMA operations with their MMU
disabled. In practice this means guests with device passthrough, which we do
not yet support. Since a typical guest (including dom0) does not access
devices which require DMA until it is fully up and running with paging
enabled, the main risk is to in-guest firmware which does DMA, e.g. running
EFI in a guest with a disk passed through and booting from that disk. Since we
know that dom0 is not using any such firmware and we do not support device
passthrough to guests yet, we can live with this restriction. Once passthrough
is implemented this will need to be revisited.

The patch includes a couple of seemingly unrelated but necessary changes:

 - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
   with the existing set of system registers we handled, but broke with the
   new ones introduced here.
 - The defines used to decode the HSR system register fields were named the
   same as the registers themselves. This breaks the accessor macros. It had
   gone unnoticed because the handling of the existing trapped registers did
   not require accessing the underlying hardware register. Rename those
   constants with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit
   registers).

This patch has survived thousands of boot loops on a Midway system.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
 - Ensure that HCR.DC is permanently disabled, even if the guest later turns
   off SCTLR.M.
 - Flush the TLB after disabling HCR.DC.
 - There is no need to set HCR.DC while building dom0.

My preferred solution here would be for the tools to use an uncached mapping
of guest memory when building the guest, which requires adding a new privcmd
ioctl (a relatively straightforward patch) and plumbing a "cached" flag
through the libxc foreign mapping interfaces (a maze of twisty passages, all
alike). IMO the libxc side of this patch was not looking suitable for a 4.4
freeze exception, since it was quite large (because we have 4 or more mapping
interfaces in libxc, some of which call back into others).

So I propose this version for 4.4. The uncached mapping solution should be
revisited for a future release.

At the risk of appearing to be going mad:

<speaking hat="submitter">

This bug results in memory corruption in the guest, which mostly manifests as
a failure to boot the guest (subsequent bad behaviour is possible but, I
think, unlikely). The failure rate is perhaps 1 in 10 boots. This would not
constitute an awesome release.

Although the patch is large, most of it is repetitive and mechanical (made
explicit through the use of macros in many cases). The biggest risk is that
one of the registers is not passed through correctly (i.e. with the wrong size
or target register). The ones which Linux uses have been tested and appear to
function OK. The others might be buggy, but this is mitigated by their use of
the same set of macros.

I think the chance of the patch having a bug with respect to my understanding
of the hardware behaviour is pretty low. As for bugs in my understanding of
the hardware documentation, I would say middle to low, but I have discussed it
with some folks at ARM and they didn't call me an idiot (in fact pretty much
the same thing has been proposed for KVM).

Overall I think the benefits outweigh the risks.

One thing I'm not sure about is reverting the previous fix in a0035ecc0d82.
It is reasonably recent, so reverting it takes us back to a pretty well
understood state in the libraries, and the functionality is harmless if
incomplete. Given the first argument, I would lean towards reverting.

</speaking>

<speaking hat="stand in release manager">

OK.

</speaking>

Obviously if you think I'm being too easy on myself please say so.

Actually, if you think my judgement is right I'd appreciate being told so too.
---
 xen/arch/arm/domain.c           |    7 ++
 xen/arch/arm/traps.c            |  163 ++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 +
 xen/include/asm-arm/domain.h    |    2 +
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 ++++-
 7 files changed, 194 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 124cccf..635a9a4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,6 +19,7 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
+#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -219,6 +220,11 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
+    if ( n->arch.default_cache )
+        hcr |= (HCR_TVM|HCR_DC);
+    else
+        hcr &= ~(HCR_TVM|HCR_DC);
+
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -469,6 +475,7 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
+    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 7c5ab19..48a6fcc 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,12 +29,14 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
+#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
+#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1279,6 +1281,29 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
+static void update_sctlr(struct vcpu *v, uint32_t val)
+{
+    /*
+     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
+     * because they are incompatible.
+     *
+     * Once HCR.DC is disabled then we do not need HCR_TVM either,
+     * since its only purpose was to catch the MMU being enabled.
+     *
+     * Both are set appropriately on context switch but we need to
+     * clear them now since we may not context switch on return to
+     * guest.
+     */
+    if ( val & SCTLR_M )
+    {
+        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
+        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
+         * VMID requires us to flush the TLB for that VMID. */
+        flush_tlb();
+        v->arch.default_cache = false;
+    }
+}
+
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1338,6 +1363,89 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
+
+/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
+#define CP32_PASSTHRU32(R...) do {              \
+    if ( cp32.read )                            \
+        *r = READ_SYSREG32(R);                  \
+    else                                        \
+        WRITE_SYSREG32(*r, R);                  \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates the lower 32-bits and clears the upper bits.
+ */
+#define CP32_PASSTHRU64(R...) do {              \
+    if ( cp32.read )                            \
+        *r = (uint32_t)READ_SYSREG64(R);        \
+    else                                        \
+        WRITE_SYSREG64((uint64_t)*r, R);        \
+} while(0)
+
+/*
+ * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
+ * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
+ * the other half.
+ */
+#ifdef CONFIG_ARM_64
+#define CP32_PASSTHRU64_HI(R...) do {                   \
+    if ( cp32.read )                                    \
+        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
+    else                                                \
+    {                                                   \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
+        t |= ((uint64_t)(*r)) << 32;                    \
+        WRITE_SYSREG64(t, R);                           \
+    }                                                   \
+} while(0)
+#define CP32_PASSTHRU64_LO(R...) do {                           \
+    if ( cp32.read )                                            \
+        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
+    else                                                        \
+    {                                                           \
+        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
+        t |= *r;                                                \
+        WRITE_SYSREG64(t, R);                                   \
+    }                                                           \
+} while(0)
+#endif
+
+    /* HCR.TVM */
+    case HSR_CPREG32(SCTLR):
+        CP32_PASSTHRU32(SCTLR_EL1);
+        update_sctlr(v, *r);
+        break;
+    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
+    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
+    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
+    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
+    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
+    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
+    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
+    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
+    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
+
+#ifdef CONFIG_ARM_64
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
+#else
+    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
+    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
+    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
+    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
+    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
+    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
+#endif
+
+#undef CP32_PASSTHRU32
+#undef CP32_PASSTHRU64
+#undef CP32_PASSTHRU64_LO
+#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1351,6 +1459,9 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
+    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
+    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
+    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1368,6 +1479,26 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define CP64_PASSTHRU(R...) do {                                  \
+    if ( cp64.read )                                            \
+    {                                                           \
+        r = READ_SYSREG64(R);                                   \
+        *r1 = r & 0xffffffffUL;                                 \
+        *r2 = r >> 32;                                          \
+    }                                                           \
+    else                                                        \
+    {                                                           \
+        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
+        WRITE_SYSREG64(r, R);                                   \
+    }                                                           \
+} while(0)
+
+    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
+    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
+
+#undef CP64_PASSTHRU
+
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1382,11 +1513,13 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
+    register_t *x = select_user_reg(regs, sysreg.reg);
+    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1394,6 +1527,31 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
+
+#define SYSREG_PASSTHRU(R...) do {              \
+    if ( sysreg.read )                          \
+        *x = READ_SYSREG(R);                    \
+    else                                        \
+        WRITE_SYSREG(*x, R);                    \
+} while(0)
+
+    case HSR_SYSREG_SCTLR_EL1:
+        SYSREG_PASSTHRU(SCTLR_EL1);
+        update_sctlr(v, *x);
+        break;
+    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
+    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
+    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
+    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
+    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
+    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
+    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
+    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
+    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
+    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
+
+#undef SYSREG_PASSTHRU
+
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
@@ -1466,7 +1624,6 @@ done:
     if (first) unmap_domain_page(first);
 }
 
-
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index 433ad55..e325f78 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case CNTP_CTL_EL0:
+    case HSR_SYSREG_CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case CNTP_TVAL_EL0:
+    case HSR_SYSREG_CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_CPREG64(CNTPCT):
+    case HSR_SYSREG_CNTPCT_EL0:
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index f0f1d53..508467a 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,6 +121,8 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
+#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
+#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -260,7 +262,9 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
+#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
+#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bc20a15..af8c64b 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,6 +257,8 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
+    bool_t default_cache;
+
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index dfe807d..06e638f 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003800)
+#define HSR_SYSREG_CRN_MASK (0x00003c00)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 48ad07e..0cee0e9 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,8 +40,23 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
-#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
+#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
+#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
+#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
+#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
+#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
+#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
+#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
+#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
+#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
+#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
+#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
+
+#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
+#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
+#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
+
+
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:13:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tsq-0000w4-5o; Wed, 08 Jan 2014 14:13:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0tsp-0000vx-Ds
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:13:23 +0000
Received: from [85.158.137.68:16967] by server-11.bemta-3.messagelabs.com id
	BE/A0-19379-20D5DC25; Wed, 08 Jan 2014 14:13:22 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389190398!7932965!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4714 invoked from network); 8 Jan 2014 14:13:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:13:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88721708"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:13:15 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:13:14 -0500
Message-ID: <52CD5CF8.4000004@citrix.com>
Date: Wed, 8 Jan 2014 14:13:12 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Eric Dumazet <eric.dumazet@gmail.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>		
	<1389139818-24458-7-git-send-email-zoltan.kiss@citrix.com>	
	<1389147141.26646.74.camel@edumazet-glaptop2.roam.corp.google.com>	
	<52CD5785.4050402@citrix.com>
	<1389189272.26646.89.camel@edumazet-glaptop2.roam.corp.google.com>
In-Reply-To: <1389189272.26646.89.camel@edumazet-glaptop2.roam.corp.google.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 13:54, Eric Dumazet wrote:
> On Wed, 2014-01-08 at 13:49 +0000, Zoltan Kiss wrote:
>> On 08/01/14 02:12, Eric Dumazet wrote:
>>> On Wed, 2014-01-08 at 00:10 +0000, Zoltan Kiss wrote:
>>>
>>>>
>>>> +		if (skb_shinfo(skb)->frag_list) {
>>>> +			nskb = skb_shinfo(skb)->frag_list;
>>>> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
>>>> +			skb->len += nskb->len;
>>>> +			skb->data_len += nskb->len;
>>>> +			skb->truesize += nskb->truesize;
>>>> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>>>> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>>>> +			vif->tx_zerocopy_sent += 2;
>>>> +			nskb = skb;
>>>> +
>>>> +			skb = skb_copy_expand(skb,
>>>> +					0,
>>>> +					0,
>>>> +					GFP_ATOMIC | __GFP_NOWARN);
>>>
>>> skb can be NULL here
>>
>> Thanks, fixed that.
>
> BTW, I am not sure why you copy the skb.
>
> Is it to get rid of frag_list, and why ?

Yes, it is to get rid of the frag_list, just to be on the safe side. I'm 
not sure if it is normal to send a big skb with MAX_SKB_FRAGS frags plus 
an empty skb on the frag_list with one frag, so I just consolidate them 
here. This scenario shouldn't happen very often anyway; even guests 
which can send more than MAX_SKB_FRAGS slots tend to do it rarely.
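[Editorial sketch: the consolidation described above is byte-count bookkeeping on the parent skb before the copy. A minimal user-space model of that accounting — plain structs whose field names mirror struct sk_buff, but this is not kernel code:]

```c
#include <assert.h>
#include <stddef.h>

/* User-space stand-in for the few sk_buff fields touched here. */
struct fake_skb {
    size_t len;        /* total data length */
    size_t data_len;   /* bytes held in (paged) fragments */
    size_t truesize;   /* memory accounted against the socket */
    struct fake_skb *frag_list;
};

/* Fold the frag_list skb's byte counts into the parent, as the patch
 * does before skb_copy_expand() linearizes everything into one skb. */
static void merge_frag_list(struct fake_skb *skb)
{
    struct fake_skb *nskb = skb->frag_list;

    if (!nskb)
        return;
    skb->len      += nskb->len;
    skb->data_len += nskb->len;
    skb->truesize += nskb->truesize;
}
```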

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:20:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0tzA-0001MH-O6; Wed, 08 Jan 2014 14:19:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0tz9-0001LF-0X
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:19:55 +0000
Received: from [85.158.139.211:61203] by server-10.bemta-5.messagelabs.com id
	33/0D-01405-A8E5DC25; Wed, 08 Jan 2014 14:19:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389190792!8530161!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20784 invoked from network); 8 Jan 2014 14:19:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:19:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88724344"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:19:39 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:19:38 -0500
Message-ID: <52CD5E79.9000008@citrix.com>
Date: Wed, 8 Jan 2014 14:19:37 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
In-Reply-To: <21196.19900.136146.867552@mariner.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 18:55, Ian Jackson wrote:
> I did the following test:
> 
>    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
>    xl migrate debian.guest.osstest localhost
> 
> xl did what appears to be the right thing: it did most of the
> migration, failed to run the block scripts at the end of the
> migration, and destroyed the destination domain and instead resumed
> the source guest.
> 
> However, the source guest immediately went mad spewing WARNINGs and
> was after that no longer contactable via the network and not
> apparently responsive on the console.  See below.
> 
> This is with:
> 
>   [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
>   version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013
> 
> For reasons I don't understand it doesn't seem to print the actual
> kernel git hash in dmesg, but I think it was that from flight 22264,
> i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
> 64-bit Xen.
> 
> Thanks,
> Ian.
> 
> debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
> [  124.595991] PM: late freeze of devices complete after 0.013 msecs
> [  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
> [  124.601105] Grant tables using version 2 layout.
> [  124.601105] ------------[ cut here ]------------
> [  124.601105] kernel BUG at drivers/xen/events.c:1582!
> [  124.601105] invalid opcode: 0000 [#1] SMP 
> [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> [  124.601105] 
> [  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
> [  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
> [  124.601105] EIP is at xen_irq_resume+0x215/0x370

We shouldn't be calling xen_irq_resume() when resuming the source VM.
The EVTCHNOP_bind_irq is failing because the VIRQ is still bound.

This would suggest that the suspend hypercall has not correctly returned
the cancelled state.

Could this be because of the tools issue mentioned by Ian C?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:20:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u06-0001ea-O7; Wed, 08 Jan 2014 14:20:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0u05-0001eP-99
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:20:53 +0000
Received: from [85.158.139.211:13712] by server-7.bemta-5.messagelabs.com id
	F3/B0-04824-4CE5DC25; Wed, 08 Jan 2014 14:20:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389190850!8363280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2899 invoked from network); 8 Jan 2014 14:20:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:20:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88724797"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:20:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:20:48 -0500
Message-ID: <1389190848.4883.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 8 Jan 2014 14:20:48 +0000
In-Reply-To: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] tools/libxc: Correct read_exact() error
 messages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 10:04 +0000, Andrew Cooper wrote:
> The errors have been incorrectly identifying their function since c/s
> 861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
> cleanup.
> 
> Use __func__ to ensure the name remains correct in the future.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>

A simple string change seems harmless from a release PoV, so on that
front 
Release-Acked-by: Ian Campbell.

For the actual change though, most uses of ERROR in this function just
have a descriptive error without the function name. If we are going to
change it then I'm not convinced "rdexact failed..." is as useful as
something like "Failed to read exactly %d bytes (select returned...)".
Other thoughts?
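[Editorial sketch of the __func__ point: a hypothetical ERROR() stand-in that formats into a buffer rather than logging through libxc, showing how the reported name tracks the function automatically:]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical stand-in for libxc's ERROR(): captures the message in a
 * buffer so the effect of __func__ is easy to inspect. */
static char errbuf[128];
#define ERROR(fmt, ...) snprintf(errbuf, sizeof(errbuf), fmt, __VA_ARGS__)

/* __func__ expands at the use site, so the reported name stays correct
 * across renames, unlike the stale hard-coded "read_exact_timed". */
static ssize_t rdexact(void)
{
    ssize_t len = -1;

    ERROR("%s failed (select returned %zd)", __func__, len);
    return len;
}
```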

(that said, I'm still somewhat inclined to just bung this one in...)

Ian.

> ---
>  tools/libxc/xc_domain_restore.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 80769a7..ca2fb51 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -87,7 +87,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>              if ( len == -1 && errno == EINTR )
>                  continue;
>              if ( !FD_ISSET(fd, &rfds) ) {
> -                ERROR("read_exact_timed failed (select returned %zd)", len);
> +                ERROR("%s failed (select returned %zd)", __func__, len);
>                  errno = ETIMEDOUT;
>                  return -1;
>              }
> @@ -101,7 +101,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>              errno = 0;
>          }
>          if ( len <= 0 ) {
> -            ERROR("read_exact_timed failed (read rc: %d, errno: %d)", len, errno);
> +            ERROR("%s failed (read rc: %d, errno: %d)", __func__, len, errno);
>              return -1;
>          }
>          offset += len;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:20:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u06-0001ea-O7; Wed, 08 Jan 2014 14:20:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0u05-0001eP-99
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:20:53 +0000
Received: from [85.158.139.211:13712] by server-7.bemta-5.messagelabs.com id
	F3/B0-04824-4CE5DC25; Wed, 08 Jan 2014 14:20:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389190850!8363280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2899 invoked from network); 8 Jan 2014 14:20:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:20:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88724797"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:20:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:20:48 -0500
Message-ID: <1389190848.4883.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 8 Jan 2014 14:20:48 +0000
In-Reply-To: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] tools/libxc: Correct read_exact() error
 messages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 10:04 +0000, Andrew Cooper wrote:
> The errors have been incorrectly identifying their function since c/s
> 861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
> cleanup.
> 
> Use __func__ to ensure the name remains correct in the future.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>

A simple string change seems harmless from a release PoV, so on that
front:
Release-Acked-by: Ian Campbell.

For the actual change though, most uses of ERROR in this function just
have a descriptive error without the function name. If we are going to
change it then I'm not convinced "rdexact failed..." is as useful as
something like "Failed to read exactly %d bytes (select returned...)".
Other thoughts?

(that said, I'm still somewhat inclined to just bung this one in...)

Ian.

> ---
>  tools/libxc/xc_domain_restore.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 80769a7..ca2fb51 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -87,7 +87,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>              if ( len == -1 && errno == EINTR )
>                  continue;
>              if ( !FD_ISSET(fd, &rfds) ) {
> -                ERROR("read_exact_timed failed (select returned %zd)", len);
> +                ERROR("%s failed (select returned %zd)", __func__, len);
>                  errno = ETIMEDOUT;
>                  return -1;
>              }
> @@ -101,7 +101,7 @@ static ssize_t rdexact(xc_interface *xch, struct restore_ctx *ctx,
>              errno = 0;
>          }
>          if ( len <= 0 ) {
> -            ERROR("read_exact_timed failed (read rc: %d, errno: %d)", len, errno);
> +            ERROR("%s failed (read rc: %d, errno: %d)", __func__, len, errno);
>              return -1;
>          }
>          offset += len;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:21:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u0r-0001ji-HB; Wed, 08 Jan 2014 14:21:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W0u0p-0001jV-5b
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:21:40 +0000
Received: from [193.109.254.147:16142] by server-4.bemta-14.messagelabs.com id
	11/30-03916-2FE5DC25; Wed, 08 Jan 2014 14:21:38 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389190897!9574634!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11935 invoked from network); 8 Jan 2014 14:21:37 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 14:21:37 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51303 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W0tpz-0007MG-Tv; Wed, 08 Jan 2014 15:10:28 +0100
Date: Wed, 8 Jan 2014 15:21:33 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1054934236.20140108152133@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, January 8, 2014, 2:16:24 PM, you wrote:

> I'm filling in for George while he is on vacation and travelling to a
> conference etc. I'm still coming up to speed wrt what is going on with
> this release so please do correct me when I'm wrong. George will be
> back on 20 January.

> This information will be mirrored on the Xen 4.4 Roadmap wiki page:
>  http://wiki.xen.org/wiki/Xen_Roadmap/4.4

> We tagged 4.4.0-rc1 on 19 December. Based on the conversation had last
> time and on George's final comments in [1] I think this means that PVH
> dom0 support has not made the cut for 4.4, which is a shame but there
> is plenty of good functionality (including PVH domU support) in there.

> [1] http://bugs.xenproject.org/xen/mid/%3C52B05C0A.4040404@eu.citrix.com%3E

> = Timeline =

> Here is our current timeline based on a 6-month release:

> * Feature freeze: 18 October 2013 
> * Code freezing point: 18 November 2013
> * First RCs: 6 December 2013  <== WE ARE HERE
> * Release: 21 January 2014

> Last updated: 8 January 2014

> == Completed ==

<snip>

> * PVH domU (experimental only)

<snip>

> === Big ticket items ===

> * PVH dom0 (w/ Linux) 
>   blocker
>   owner: mukesh@oracle, george@citrix
>   status (Linux): Acked, waiting for ABI to be nailed down
>   status (Xen): v6 posted; no longer considered a blocker

Perhaps worth noting as a separate non-blocking (for 4.5) item:
* PVH support for AMD (/SVM)

And also mention this "lack" of support in the announcement of the new experimental feature PVH.






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:23:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:23:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u2O-0001tt-32; Wed, 08 Jan 2014 14:23:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0u2M-0001tk-KD
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:23:14 +0000
Received: from [85.158.139.211:40906] by server-5.bemta-5.messagelabs.com id
	A4/78-14928-15F5DC25; Wed, 08 Jan 2014 14:23:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389190991!8573369!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30589 invoked from network); 8 Jan 2014 14:23:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:23:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90875400"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 14:23:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:23:08 -0500
Message-ID: <1389190988.4883.85.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Wed, 8 Jan 2014 14:23:08 +0000
In-Reply-To: <1054934236.20140108152133@eikelenboom.it>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1054934236.20140108152133@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 15:21 +0100, Sander Eikelenboom wrote:
> We
> Perhaps worth noting as a separate non-blocking (for 4.5) item:
> * PVH support for AMD (/SVM)

I leave that up to the 4.5 RM.

> And also mention this "lack" of support in the announcement of the new experimental feature PVH.

It is certainly worth making the point that it is Intel only in the
Release Notes or announcements etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:24:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:24:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u3r-00023S-3A; Wed, 08 Jan 2014 14:24:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0u3p-00023H-Mi
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:24:46 +0000
Received: from [85.158.137.68:11305] by server-2.bemta-3.messagelabs.com id
	F5/03-17329-CAF5DC25; Wed, 08 Jan 2014 14:24:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389191081!7937616!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2358 invoked from network); 8 Jan 2014 14:24:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 14:24:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08ENRi5007005
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 14:23:28 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08ENQuh019729
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 14:23:26 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08ENQDM000115; Wed, 8 Jan 2014 14:23:26 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 06:23:26 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E30631C18DC; Wed,  8 Jan 2014 09:23:24 -0500 (EST)
Date: Wed, 8 Jan 2014 09:23:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Message-ID: <20140108142324.GA13101@phenom.dumpdata.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187148.4883.68.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401081321260.21510@kaball.uk.xensource.com>
	<20140108135525.GA2924@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108135525.GA2924@reaktio.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] qemu-upstream not freeing pirq (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 03:55:25PM +0200, Pasi Kärkkäinen wrote:
> On Wed, Jan 08, 2014 at 01:22:09PM +0000, Stefano Stabellini wrote:
> > On Wed, 8 Jan 2014, Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> > > >
> > > > * qemu-upstream not freeing pirq
> > > >  > http://www.gossamer-threads.com/lists/xen/devel/281498
> > > >  > http://marc.info/?l=xen-devel&m=137265766424502
> > > >  status: patches posted; latest patches need testing
> > > >  Not a blocker.
> > >
> > > I had it in my mind that this was fixed -- true?
> >
> > It was fixed on qemu-traditional. We have a patch for upstream qemu but
> > it hasn't been tested because of the other passthrough issues.
> >
>
> Hmm, what other passthrough issues? Should those be added to the list as well?

http://www.gossamer-threads.com/lists/xen/devel/309476


>
> -- Pasi
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u43-00025V-RH; Wed, 08 Jan 2014 14:24:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0u42-000252-HE
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:24:58 +0000
Received: from [85.158.139.211:34557] by server-14.bemta-5.messagelabs.com id
	F2/F7-24200-9BF5DC25; Wed, 08 Jan 2014 14:24:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389191094!8567277!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23856 invoked from network); 8 Jan 2014 14:24:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:24:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88726237"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:24:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:24:29 -0500
Message-ID: <1389191068.4883.86.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 8 Jan 2014 14:24:28 +0000
In-Reply-To: <52CD5E79.9000008@citrix.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<52CD5E79.9000008@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 14:19 +0000, David Vrabel wrote:
> On 07/01/14 18:55, Ian Jackson wrote:
> > I did the following test:
> > 
> >    mv /etc/xen/scripts/block /etc/xen/scripts/block.aside
> >    xl migrate debian.guest.osstest localhost
> > 
> > xl did what appears to be the right thing: it did most of the
> > migration, failed to run the block scripts at the end of the
> > migration, and destroyed the destination domain and instead resumed
> > the source guest.
> > 
> > However, the source guest immediately went mad spewing WARNINGs; after
> > that it was no longer contactable via the network and was apparently
> > unresponsive on the console.  See below.
> > 
> > This is with:
> > 
> >   [    0.000000] Linux version 3.4.70+ (osstest@rice-weevil) (gcc
> >   version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed Dec 4 03:14:51 GMT 2013
> > 
> > For reasons I don't understand it doesn't seem to print the actual
> > kernel git hash in dmesg, but I think it was that from flight 22264,
> > i.e.  234d96ee0f3b8e49501d068a2a3165aa4db60903.  It's i386, on a
> > 64-bit Xen.
> > 
> > Thanks,
> > Ian.
> > 
> > debian login: [  124.595658] PM: freeze of devices complete after 2.980 msecs
> > [  124.595991] PM: late freeze of devices complete after 0.013 msecs
> > [  124.600919] PM: noirq freeze of devices complete after 4.884 msecs
> > [  124.601105] Grant tables using version 2 layout.
> > [  124.601105] ------------[ cut here ]------------
> > [  124.601105] kernel BUG at drivers/xen/events.c:1582!
> > [  124.601105] invalid opcode: 0000 [#1] SMP 
> > [  124.601105] Modules linked in: [last unloaded: scsi_wait_scan]
> > [  124.601105] 
> > [  124.601105] Pid: 6, comm: migration/0 Not tainted 3.4.70+ #1  
> > [  124.601105] EIP: 0061:[<c12f5d25>] EFLAGS: 00010082 CPU: 0
> > [  124.601105] EIP is at xen_irq_resume+0x215/0x370
> 
> We shouldn't be calling xen_irq_resume() when resuming the source VM.
> The EVTCHNOP_bind_irq is failing because the VIRQ is still bound.
> 
> This would suggest that the suspend hypercall has not correctly returned
> the cancelled state.
> 
> Could this be because of the tools issue mentioned by Ian C?

I'm fairly confident that it is, yes.

(well, "this" is actually that the toolstack failed to implement the
old-style resume but told the guest it had, without returning the
cancelled state...)
Ian.
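[Editorial note: the resume logic being debated above can be sketched as
follows. This is a hypothetical stand-in, not the actual drivers/xen/manage.c
or toolstack code; the function and variable names are invented for
illustration.]

```c
#include <assert.h>

/* Sketch of the point David makes: the suspend call reports whether the
 * suspend was cancelled.  On a cancelled suspend the source VM's event
 * channels are still bound, so rebinding them (what xen_irq_resume() does)
 * trips the BUG.  The failure mode in this thread is the toolstack not
 * propagating the cancelled state, so the guest wrongly takes the
 * !cancelled path. */

static int irq_resume_calls;

static void sketch_xen_irq_resume(void)
{
    /* stands in for xen_irq_resume(): rebinds VIRQs / event channels */
    irq_resume_calls++;
}

/* suspend_rc: nonzero means the hypervisor reported the suspend as
 * cancelled (guest resuming in the same, still-bound domain). */
static int sketch_resume(int suspend_rc)
{
    int cancelled = suspend_rc;

    if (!cancelled)
        sketch_xen_irq_resume();  /* only rebind on a real resume */

    return cancelled;
}
```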



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:25:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u4M-00029v-Eh; Wed, 08 Jan 2014 14:25:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0u4K-00029M-Lh
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:25:16 +0000
Received: from [193.109.254.147:20856] by server-13.bemta-14.messagelabs.com
	id FA/00-19374-CCF5DC25; Wed, 08 Jan 2014 14:25:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389191114!8094122!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15741 invoked from network); 8 Jan 2014 14:25:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 14:25:15 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08EOBwP008105
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 14:24:11 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08EOAKa001995
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 14:24:10 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08EO9TA002468; Wed, 8 Jan 2014 14:24:09 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 06:24:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E5F3C1C18DC; Wed,  8 Jan 2014 09:24:07 -0500 (EST)
Date: Wed, 8 Jan 2014 09:24:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140108142407.GB13101@phenom.dumpdata.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<1389187198.4883.69.camel@kazak.uk.xensource.com>
	<52CD50FB.90402@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CD50FB.90402@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Race in PV shutdown between tool detection and
 shutdown watch (Was: Re: Xen 4.4 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 01:22:03PM +0000, David Vrabel wrote:
> On 08/01/14 13:19, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 13:16 +0000, Ian Campbell wrote:
> > 
> >> * Race in PV shutdown between tool detection and shutdown watch
> >>  > http://www.gossamer-threads.com/lists/xen/devel/282467
> >>  > Nothing to do with ACPI
> >>  status: Patches posted
> >>  Not a blocker.
> > 
> > This is a Linux issue, I think? Did those patches go in?
> 
> Konrad had a series.  I don't recall their status.  I think they needed
> some more work.

<nods> David pointed out some improvements - but I had my head buried
in PVH so hadn't been able to make any progress.

They will be stalled for some time.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:29:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0u82-0002gP-Bu; Wed, 08 Jan 2014 14:29:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0u80-0002et-UI
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:29:05 +0000
Received: from [85.158.143.35:58243] by server-1.bemta-4.messagelabs.com id
	AA/B6-02132-0B06DC25; Wed, 08 Jan 2014 14:29:04 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389191342!10451234!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21559 invoked from network); 8 Jan 2014 14:29:03 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 14:29:03 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 14:28:59 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="626288193"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 14:28:58 +0000
Message-ID: <52CD60AA.9010607@terremark.com>
Date: Wed, 08 Jan 2014 09:28:58 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
 Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
In-Reply-To: <1389177510.4883.11.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/14 05:38, Ian Campbell wrote:
> On Tue, 2014-01-07 at 19:25 -0500, Don Slutz wrote:
>> If dbg_debug is non-zero, output debug.
>>
>> Include put_gfn debug logging.
>>
>> Here is a sample output at dbg_debug == 2:
>>
>> (XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
>> (XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
>> (XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
>> (XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
>> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
>> (XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
>> (XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
>> (XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
>> (XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
>> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
>> (XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
>> (XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
>> (XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
>> (XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
>> (XEN) [2014-01-07 03:20:09] gmem:exit:len:$2
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>>   xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
>>   1 file changed, 26 insertions(+), 28 deletions(-)
>>
>> diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
>> index ba6a64d..777e5ba 100644
>> --- a/xen/arch/x86/debug.c
>> +++ b/xen/arch/x86/debug.c
>> @@ -30,16 +30,9 @@
>>    * gdbsx, etc..
>>    */
>>   
>> -#ifdef XEN_KDB_CONFIG
>> -#include "../kdb/include/kdbdefs.h"
>> -#include "../kdb/include/kdbproto.h"
>> -#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
>> -#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
>> -#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
>> -#else
>> -#define DBGP1(...) ((void)0)
>> -#define DBGP2(...) ((void)0)
>> -#endif
>> +static volatile int dbg_debug;
> Using volatile is almost always wrong. Why do you think it is needed
> here?

This was from Mukesh Rathor:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html

I saw no reason to make it volatile, but maybe "kdb" needs this? Happy to change any way you want.

> If anything this variable is exactly the opposite, i.e. __read_mostly or
> even const (given that I can't see anything which writes it I suppose
> this is a compile time setting?)

That has been how I have been testing it so far (changing the source to set values).  Mukesh claims to be able to change it at will.  Not sure how const may affect this.

       -Don Slutz

> Ian.
>
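[Editorial note: the `volatile` vs `__read_mostly` distinction debated above
can be sketched as follows. These are standalone stand-ins, not the Xen
sources; the real `__read_mostly` macro is stubbed out so the example
compiles on its own.]

```c
#include <assert.h>

/* In the kernel and Xen, __read_mostly expands to a section attribute that
 * groups rarely-written variables together for cache friendliness.
 * volatile, by contrast, forces a memory load on every access and is only
 * warranted when the value can change behind the compiler's back -- which
 * a debugger poking the variable in memory technically is, hence the
 * disagreement in this thread. */
#define __read_mostly /* section attribute elided in this sketch */

static int __read_mostly dbg_debug = 2;  /* debug level, rarely written */

/* Mirrors the DBGP2-style check: emit output only above a given level. */
static int dbgp_would_print(int level)
{
    return dbg_debug > level;
}
```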


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:35:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uDu-0003Dw-TB; Wed, 08 Jan 2014 14:35:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0uDs-0003Do-L1
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:35:08 +0000
Received: from [85.158.143.35:37754] by server-2.bemta-4.messagelabs.com id
	0D/D5-11386-B126DC25; Wed, 08 Jan 2014 14:35:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389191705!7764674!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29359 invoked from network); 8 Jan 2014 14:35:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:35:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88730514"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:35:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:35:05 -0500
Message-ID: <1389191703.4883.89.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 8 Jan 2014 14:35:03 +0000
In-Reply-To: <20140106174620.GA28636@phenom.dumpdata.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
	<20140106174620.GA28636@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
	specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-06 at 12:46 -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 06, 2014 at 04:39:30PM +0000, Ian Campbell wrote:

> > Konrad, to what extent is this a blocker for you (or the OVM tooling)
> > vs. it just being something you spotted by random chance?
> 
> No blocker. Just me diligently filing issues with 'xend vs xl'
> as I spot them.

Thanks for doing so. In the end I gave this a release exception anyway
(see other subthread).




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:35:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uEC-0003FI-DF; Wed, 08 Jan 2014 14:35:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0uEA-0003F0-Tm
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:35:27 +0000
Received: from [193.109.254.147:54226] by server-16.bemta-14.messagelabs.com
	id FB/1D-20600-E226DC25; Wed, 08 Jan 2014 14:35:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389191724!9533188!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18252 invoked from network); 8 Jan 2014 14:35:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:35:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88730696"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:35:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 09:35:22 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0uE6-0001Vj-Qu;
	Wed, 08 Jan 2014 14:35:22 +0000
Date: Wed, 8 Jan 2014 14:35:22 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108143522.GA6218@zion.uk.xensource.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, wei.liu2@citrix.com
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 01:16:24PM +0000, Ian Campbell wrote:
[...]
> == Open ==
> 
> * xl support for vnc and vnclisten options with PV guests
>  > http://bugs.xenproject.org/xen/bug/25
>  status: V4 patch posted. Should go in.
>  Blocker?
> 

Konrad (the reporter) confirmed this is not a blocker.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:35:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uED-0003Fv-QM; Wed, 08 Jan 2014 14:35:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0uEC-0003F9-1A
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:35:28 +0000
Received: from [85.158.137.68:57322] by server-16.bemta-3.messagelabs.com id
	2E/9D-26128-F226DC25; Wed, 08 Jan 2014 14:35:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389191724!7961837!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17722 invoked from network); 8 Jan 2014 14:35:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:35:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90880453"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 14:35:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:35:23 -0500
Message-ID: <1389191722.4883.90.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 8 Jan 2014 14:35:22 +0000
In-Reply-To: <1389112288.12612.61.camel@kazak.uk.xensource.com>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
	<21194.59357.835065.559486@mariner.uk.xensource.com>
	<20140106175628.GD10654@zion.uk.xensource.com>
	<1389112288.12612.61.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4] xl: create VFB for PV guest when VNC is
 specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

close 25
thanks

> > > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > > [...]
> Release-acked-by: Ian Campbell 

And applied & bug closed, thanks.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:36:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uEs-0003Pj-O1; Wed, 08 Jan 2014 14:36:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0uEr-0003PV-PS
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:36:09 +0000
Received: from [193.109.254.147:63002] by server-14.bemta-14.messagelabs.com
	id 00/12-12628-9526DC25; Wed, 08 Jan 2014 14:36:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389191760!6091363!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7819 invoked from network); 8 Jan 2014 14:36:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:36:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88730990"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:36:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:35:59 -0500
Message-ID: <1389191758.4883.91.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 14:35:58 +0000
In-Reply-To: <21196.15462.177720.502000@mariner.uk.xensource.com>
References: <1386866211-12639-1-git-send-email-rob.hoes@citrix.com>
	<1386866211-12639-3-git-send-email-rob.hoes@citrix.com>
	<21196.15462.177720.502000@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: dave.scott@eu.citrix.com, Rob Hoes <rob.hoes@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] libxl: ocaml: use int64 for timeval
 fields in the timeout_register callback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 17:41 +0000, Ian Jackson wrote:
> Rob Hoes writes ("[PATCH 2/3] libxl: ocaml: use int64 for timeval fields in the timeout_register callback"):
> > The original code works fine on 64-bit, but on 32-bit, the OCaml int (which is
> > 1 bit smaller than the C int) is likely to overflow.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Applied, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:38:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uH1-0003gC-Ty; Wed, 08 Jan 2014 14:38:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0uH0-0003f8-Un
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:38:23 +0000
Received: from [193.109.254.147:37128] by server-6.bemta-14.messagelabs.com id
	BF/39-14958-ED26DC25; Wed, 08 Jan 2014 14:38:22 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389191900!9578669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9791 invoked from network); 8 Jan 2014 14:38:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="90881674"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 14:38:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 09:38:19 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W0uGx-0001Ys-AS;
	Wed, 08 Jan 2014 14:38:19 +0000
Message-ID: <52CD62DA.3010102@citrix.com>
Date: Wed, 8 Jan 2014 14:38:18 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
	<1389190848.4883.84.camel@kazak.uk.xensource.com>
In-Reply-To: <1389190848.4883.84.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] tools/libxc: Correct read_exact() error
	messages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 14:20, Ian Campbell wrote:
> On Tue, 2014-01-07 at 10:04 +0000, Andrew Cooper wrote:
>> The errors have been incorrectly identifying their function since c/s
>> 861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
>> cleanup.
>>
>> Use __func__ to ensure the name remains correct in the future.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> A simple string change seems harmless from a release PoV, so on that
> front 
> Release-Acked-by: Ian Campbell.
>
> For the actual change though, most uses of ERROR in this function just
> have a descriptive error without the function name. If we are going to
> change it then I'm not convinced "rdexact failed..." is as useful as
> something like "Failed to read exactly %d bytes (select returned...)".
> Other thoughts?
>
> (that said, I'm still somewhat inclined to just bung this one in...)
>
> Ian.

When triaging problems after-the-fact from logfiles alone, a lack of
file/line/function references often makes debugging harder than it
should be.

In the specific case I encountered, the error message as it was sufficed
for working out what had gone wrong (an -EIO).

I would possibly throw it straight in now, with a note that there needs
to be some consistency applied to the error reporting in this and other
areas of libxc.

~Andrew
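
As an aside on the mechanism under discussion: a minimal, self-contained
sketch of the __func__ idiom (the ERR macro, errbuf, and read_exactly
below are hypothetical illustrations, not libxc code) shows how the
function-name prefix stays correct through future renames:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static char errbuf[128];   /* stand-in for libxc's error channel */

/* Prefix every message with the enclosing function's name.  Because
 * __func__ is supplied by the compiler, the prefix remains correct
 * even if the function is renamed later. */
#define ERR(fmt, ...) \
    snprintf(errbuf, sizeof(errbuf), "%s: " fmt, __func__, ##__VA_ARGS__)

static int read_exactly(int want, int got)
{
    if (got != want) {
        ERR("failed to read exactly %d bytes (got %d)", want, got);
        return -1;
    }
    return 0;
}
```

With this idiom, the descriptive message Ian suggests ("Failed to read
exactly %d bytes ...") and an accurate function name are no longer in
tension: the name comes for free from the compiler.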

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:40:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uIn-0004LQ-GE; Wed, 08 Jan 2014 14:40:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0uIm-0004LI-AH
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:40:12 +0000
Received: from [193.109.254.147:26933] by server-6.bemta-14.messagelabs.com id
	06/DB-14958-B436DC25; Wed, 08 Jan 2014 14:40:11 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389192006!9576806!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28053 invoked from network); 8 Jan 2014 14:40:09 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 14:40:09 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 14:39:33 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="626298203"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 14:39:32 +0000
Message-ID: <52CD6324.9020908@terremark.com>
Date: Wed, 08 Jan 2014 09:39:32 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
 Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	
	<1389140748-26524-6-git-send-email-dslutz@verizon.com>
	<1389177355.4883.8.camel@kazak.uk.xensource.com>
In-Reply-To: <1389177355.4883.8.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 5/5] xg_main: If
 XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/14 05:35, Ian Campbell wrote:
> On Tue, 2014-01-07 at 19:25 -0500, Don Slutz wrote:
>> Without this gdb does not report an error.
>>
>> With this patch and using a 1G hvm domU:
>>
>> (gdb) x/1xh 0x6ae9168b
>> 0x6ae9168b:     Cannot access memory at address 0x6ae9168b
>>
>> Drop output of iop->remain because it most likely will be zero.
>> This leads to a strange message:
>>
>> ERROR: failed to read 0 bytes. errno:14 rc:-1
>>
>> Add address to write error because it may be the only message
>> displayed.
>>
>> Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
>> error and so iop->remain will be zero.
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>>   tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
>>   1 file changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
>> index 3b2a285..0fc3f82 100644
>> --- a/tools/debugger/gdbsx/xg/xg_main.c
>> +++ b/tools/debugger/gdbsx/xg/xg_main.c
>> @@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
>>       iop->gwr = 0;       /* not writing to guest */
>>   
>>       if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
>> -        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
>> -              iop->remain, errno, rc);
>> +    {
>> +        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);
> Is it worth printing the expect number (i.e. the input) of bytes? Is
> that buflen here?

Not in my testing.  If I turn on debug output (-d), I get:


...
process_remote_request:E:m8957006a,1 curvcpu:0
xg_read_mem:E:gva:8957006a tobuf:1c81020 len:1
ERROR:xg_read_mem:ERROR: failed to read bytes. errno:14 rc:-1
xg_read_mem:X:remain:0 buf8:0x24
process_m_request:Failed read mem. addr:0x8957006a len:1 remn:1 errno:14
process_remote_request:X:E01 curvcpu:0
...


That already outputs the length (1) a few times. The expected number here is tobuf_len.


>> +        return tobuf_len;
>> +    }
>>   
>>       for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
>>       XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
>> @@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
>>       iop->gwr = 1;       /* writing to guest */
>>   
>>       if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
>> -        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n",
>> -              iop->remain, errno, rc);
>> +    {
>> +        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
>> +              guestva, errno, rc);
> Same here.

Since this error message is output without other context, it might help. Most of the time, the user knows the write size requested. Since this code has an ACK from Mukesh, I lean toward not making a change, but if you want it, I will.

    -Don Slutz

>> +        return buflen;
>> +    }
>>       return iop->remain;
>>   }
>>   
>
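
The return convention the patch above relies on can be sketched
independently of gdbsx (all names and globals here are hypothetical,
for illustration only): the reader returns the number of bytes left
untransferred, so zero means success, and on a hard hypercall failure
it returns the full requested length so the caller cannot mistake the
error for a clean read even though the kernel left the remain counter
at zero.

```c
#include <assert.h>

/* Simulated hypercall state (hypothetical, for illustration only). */
static int hcall_ok;      /* nonzero: the domctl hypercall succeeds  */
static int hcall_remain;  /* bytes the hypercall left untransferred  */

/* Returns the number of bytes NOT read; 0 means complete success.
 * On hypercall failure, report the whole request as unread rather
 * than trusting hcall_remain, which the failed call never updated. */
static int read_mem(int len)
{
    if (!hcall_ok)
        return len;
    return hcall_remain;
}
```

A caller such as a gdb stub then treats any nonzero return as "Cannot
access memory", which is the behaviour the commit message demonstrates.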


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uJn-0004SG-QA; Wed, 08 Jan 2014 14:41:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0uJl-0004S4-Kk
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:41:13 +0000
Received: from [85.158.139.211:63206] by server-16.bemta-5.messagelabs.com id
	15/87-11843-8836DC25; Wed, 08 Jan 2014 14:41:12 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389192071!8578385!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29538 invoked from network); 8 Jan 2014 14:41:12 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 14:41:12 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W0uNT-0008U7-E9; Wed, 08 Jan 2014 14:45:03 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389192303.32617@bugs.xenproject.org>
References: <1387320825-21953-1-git-send-email-wei.liu2@citrix.com>
	<1387372333.28680.6.camel@kazak.uk.xensource.com>
	<20131218134629.GG25969@zion.uk.xensource.com>
	<1389026370.31766.83.camel@kazak.uk.xensource.com>
	<21194.59357.835065.559486@mariner.uk.xensource.com>
	<20140106175628.GD10654@zion.uk.xensource.com>
	<1389112288.12612.61.camel@kazak.uk.xensource.com>
	<1389191722.4883.90.camel@kazak.uk.xensource.com>
In-Reply-To: <1389191722.4883.90.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 08 Jan 2014 14:45:03 +0000
Subject: [Xen-devel] Processed: Re: [PATCH V4] xl: create VFB for PV guest
 when VNC is specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> close 25
Closing bug #25
> thanks
Finished processing.

Modified/created Bugs:
 - 25: http://bugs.xenproject.org/xen/bug/25

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:43:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uLx-0004eO-OS; Wed, 08 Jan 2014 14:43:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0uLw-0004eF-6f
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:43:28 +0000
Received: from [85.158.137.68:10389] by server-14.bemta-3.messagelabs.com id
	6D/85-06105-F046DC25; Wed, 08 Jan 2014 14:43:27 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389192205!7942900!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24701 invoked from network); 8 Jan 2014 14:43:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:43:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88733900"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:43:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 09:43:24 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0uLs-0001dQ-6C;
	Wed, 08 Jan 2014 14:43:24 +0000
Date: Wed, 8 Jan 2014 14:43:24 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140108144324.GA6984@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v2 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

You once mentioned that you have a trick to avoid touching TLB, is it in
this series?

(Haven't really looked at this series as I'm in today. Will have a
closer look tonight. I'm just curious now.)

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:44:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uMX-0004ij-6E; Wed, 08 Jan 2014 14:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0uMV-0004iD-Af
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 14:44:03 +0000
Received: from [85.158.139.211:48426] by server-10.bemta-5.messagelabs.com id
	C2/6F-01405-2346DC25; Wed, 08 Jan 2014 14:44:02 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389192240!8389029!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16557 invoked from network); 8 Jan 2014 14:44:01 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 14:44:01 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 14:43:54 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="626301909"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 14:43:53 +0000
Message-ID: <52CD6428.9060102@terremark.com>
Date: Wed, 08 Jan 2014 09:43:52 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<52CD1A5B02000078001116AD@nat28.tlf.novell.com>
In-Reply-To: <52CD1A5B02000078001116AD@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2 0/5] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/14 03:28, Jan Beulich wrote:
>>>> On 08.01.14 at 01:25, Don Slutz <dslutz@verizon.com> wrote:
>> Release manager requests:
>>    patch 1 and 3 are optional for 4.4.0.
>>    patch 2 should be in 4.4.0
>>    patch 4 and 5 would be good to be in 4.4.0
> Which clearly shows that the series is badly ordered: You shouldn't
> expect committers to know (or even have to guess) that applying
> later patches without earlier ones is okay. I.e. if you think that
> leaving out part of the series for 4.4 is fine, you should place the
> required ones first, the optional ones second, and the 4.5 ones
> last. Or, if the patches are in fact independent, another option
> would be to not send the patches as a series in the first place.

If this were not so close to the release date, I would not have added this information. Clearly the way I wrote it is not the way it should be expressed. Thanks for your help, I will try to do that ordering in the future.

    -Don Slutz


> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 14:45:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 14:45:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0uNW-0004sI-Ql; Wed, 08 Jan 2014 14:45:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W0uNV-0004s9-O4
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 14:45:05 +0000
Received: from [85.158.139.211:51385] by server-5.bemta-5.messagelabs.com id
	41/4C-14928-0746DC25; Wed, 08 Jan 2014 14:45:04 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389192302!8569569!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25399 invoked from network); 8 Jan 2014 14:45:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 14:45:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88734272"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 14:44:45 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	09:44:45 -0500
Message-ID: <52CD645B.6060700@citrix.com>
Date: Wed, 8 Jan 2014 14:44:43 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<20140108144324.GA6984@zion.uk.xensource.com>
In-Reply-To: <20140108144324.GA6984@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v2 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 14:43, Wei Liu wrote:
> You once mentioned that you have a trick to avoid touching TLB, is it in
> this series?
>
> (Haven't really looked at this series as I'm in today. Will have a
> closer look tonight. I'm just curious now.)
>
> Wei.
>
No, I'm currently working on that, it will be a separate series, as it 
also needs some Xen modifications which haven't reached upstream yet AFAIK.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:00:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ucg-0006Er-Le; Wed, 08 Jan 2014 15:00:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W0ucf-0006Em-DZ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:00:45 +0000
Received: from [85.158.137.68:5559] by server-3.bemta-3.messagelabs.com id
	8C/61-10658-C186DC25; Wed, 08 Jan 2014 15:00:44 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389193241!7969250!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26010 invoked from network); 8 Jan 2014 15:00:43 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 15:00:43 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 08 Jan 2014 15:00:40 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="626317501"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.42])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 15:00:39 +0000
Message-ID: <52CD6817.5000604@terremark.com>
Date: Wed, 08 Jan 2014 10:00:39 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	George Dunlap <george.dunlap@eu.citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] 4.4.0 release -- libxl still says 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I do not know if this is something that happens later, or should be fixed soon.

I know the kexec changes say that the Xen interface is now 4.4 for kexec, but the libraries still say 4.3:

/usr/lib/libxenctrl.so.4.3
/usr/lib/libxenctrl.so.4.3.0
/usr/lib/libxenguest.so.4.3
/usr/lib/libxenguest.so.4.3.0
/usr/lib/libxenlight.so.4.3
/usr/lib/libxenlight.so.4.3.0
/usr/lib/libxlutil.so.4.3
/usr/lib/libxlutil.so.4.3.0


commit dac66a5b2db37a40c7eb4b9d25ee8095106906c0
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Tue May 7 11:39:10 2013 +0100

was where they were changed to 4.3

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:03:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ufZ-0006M7-Lr; Wed, 08 Jan 2014 15:03:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ufY-0006Lz-DQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:03:44 +0000
Received: from [85.158.143.35:62370] by server-2.bemta-4.messagelabs.com id
	D9/48-11386-FC86DC25; Wed, 08 Jan 2014 15:03:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389193422!9163530!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29373 invoked from network); 8 Jan 2014 15:03:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:03:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88742713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:03:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	10:03:40 -0500
Message-ID: <1389193419.4883.94.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 15:03:39 +0000
In-Reply-To: <52CD6817.5000604@terremark.com>
References: <52CD6817.5000604@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0 release -- libxl still says 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 10:00 -0500, Don Slutz wrote:
> I do not know if this is something that happens later, or should be fixed soon.
> 
> I know kexec changes says that the Xen interface is now 4.4 for kexec,
> but the library still says 4.3:

This is an ABI version, so it would only need to change if the ABI has. I'm
pretty certain that it has in this case.

Normally we do a sweep of all the libraries late on in the release
process and pick up any ones we missed during development.

I'm sure it's on Ian J's release checklist.
> 
> /usr/lib/libxenctrl.so.4.3
> /usr/lib/libxenctrl.so.4.3.0
> /usr/lib/libxenguest.so.4.3
> /usr/lib/libxenguest.so.4.3.0
> /usr/lib/libxenlight.so.4.3
> /usr/lib/libxenlight.so.4.3.0
> /usr/lib/libxlutil.so.4.3
> /usr/lib/libxlutil.so.4.3.0
> 
> 
> commit dac66a5b2db37a40c7eb4b9d25ee8095106906c0
> Author: Ian Jackson <ian.jackson@eu.citrix.com>
> Date:   Tue May 7 11:39:10 2013 +0100
> 
> was where they changed to 4.3
> 
>     -Don Slutz



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:03:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ufZ-0006M7-Lr; Wed, 08 Jan 2014 15:03:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0ufY-0006Lz-DQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:03:44 +0000
Received: from [85.158.143.35:62370] by server-2.bemta-4.messagelabs.com id
	D9/48-11386-FC86DC25; Wed, 08 Jan 2014 15:03:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389193422!9163530!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29373 invoked from network); 8 Jan 2014 15:03:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:03:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,624,1384300800"; d="scan'208";a="88742713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:03:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	10:03:40 -0500
Message-ID: <1389193419.4883.94.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 15:03:39 +0000
In-Reply-To: <52CD6817.5000604@terremark.com>
References: <52CD6817.5000604@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0 release -- libxl still says 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 10:00 -0500, Don Slutz wrote:
> I do not know if this is something that happens later, or should be fixed soon.
> 
> I know kexec changes says that the Xen interface is now 4.4 for kexec,
> but the library still says 4.3:

This is an ABI version, so it would only need to change if the ABI has
changed. I'm pretty certain that it has in this case.

Normally we do a sweep of all the libraries late in the release
process and pick up any we missed during development.

I'm sure it's on Ian J's release checklist.
> 
> /usr/lib/libxenctrl.so.4.3
> /usr/lib/libxenctrl.so.4.3.0
> /usr/lib/libxenguest.so.4.3
> /usr/lib/libxenguest.so.4.3.0
> /usr/lib/libxenlight.so.4.3
> /usr/lib/libxenlight.so.4.3.0
> /usr/lib/libxlutil.so.4.3
> /usr/lib/libxlutil.so.4.3.0
> 
> 
> commit dac66a5b2db37a40c7eb4b9d25ee8095106906c0
> Author: Ian Jackson <ian.jackson@eu.citrix.com>
> Date:   Tue May 7 11:39:10 2013 +0100
> 
> was where they changed to 4.3
> 
>     -Don Slutz



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:44:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vIu-0000BV-7L; Wed, 08 Jan 2014 15:44:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0vIs-0000BQ-PQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:44:22 +0000
Received: from [193.109.254.147:33754] by server-3.bemta-14.messagelabs.com id
	04/A2-11000-6527DC25; Wed, 08 Jan 2014 15:44:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389195861!9618765!1
X-Originating-IP: [74.125.82.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13252 invoked from network); 8 Jan 2014 15:44:21 -0000
Received: from mail-wg0-f45.google.com (HELO mail-wg0-f45.google.com)
	(74.125.82.45)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:44:21 -0000
Received: by mail-wg0-f45.google.com with SMTP id y10so1588548wgg.0
	for <xen-devel@lists.xen.org>; Wed, 08 Jan 2014 07:44:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=FkIpFQYxTBmautW6lZDZGs996vBVYk8WU7+9p6Q4P1U=;
	b=Q09qxGnjE8b2iDR4I2HRXvXRavIqgnb0+PZwpZ7huY9XedSPwAw9N8311OSKIW6YUU
	FNItotSOT1EvIrSS4Lf2JH5y5QB/EGZ5Wm93bLPjCqFlSizeTxBNuEhDniPLIY0XmzlV
	bc4Y4642EM4qNkE4QhJGHMXgBaCnT9uvuR7yZBaVqzwLeLJ/Tz+mEi9e15tjVKXuLTvv
	W3FjPBVNZ1ENET8wrBBdpP+82pKwT/oVgzJGuUD4ZN3VqxYycVhKVJm+wA4K3Fc+AGum
	clZBRl/GS/R07tEwtP3lPYtHIXTPXt7BppY8WMP9GIlsEXajrxfTnk5Je7dB/Oi2Nayw
	4ImA==
X-Gm-Message-State: ALoCoQlXVKJUDAiyKJIZHc2E7uE1WHyMl5T/PVJN+cEN4MuHGjKVCTuO0P5HxTJBCNnshvyaAwde
X-Received: by 10.194.82.68 with SMTP id g4mr10754330wjy.85.1389195860956;
	Wed, 08 Jan 2014 07:44:20 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id j3sm3017874wiy.3.2014.01.08.07.44.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 08 Jan 2014 07:44:20 -0800 (PST)
Message-ID: <52CD7252.4050903@linaro.org>
Date: Wed, 08 Jan 2014 15:44:18 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1389190141-29262-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1389190141-29262-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/2014 02:09 PM, Ian Campbell wrote:
> On ARM, guest OSes are started with MMU and caches disabled (as they are on
> native); however, caching is enabled in the domain running the builder and
> therefore we must ensure cache consistency.
> 
> The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
> cache after loading images into guest memory") is to flush the caches after
> loading the various blobs into guest RAM. However, this approach has two
> shortcomings:
> 
>  - The cache flush primitives available to userspace on arm32 are not
>    sufficient for our needs.
>  - There is a race between the cache flush and the unmap of the guest page
>    where the processor might speculatively dirty the cache line again.
> 
> (of these the second is the more fundamental)
> 
> This patch makes use of the hardware functionality to force all accesses
> made from guest mode to be cached (the HCR.DC == default cached bit). This
> means that we don't need to worry about the domain builder's writes being
> cached because the guest's "uncached" accesses will actually be cached.
> 
> Unfortunately the use of HCR.DC is incompatible with the guest enabling its
> MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
> detect when this happens and disable HCR.DC. This is done with the HCR.TVM
> (trap virtual memory controls) bit which also causes various other registers
> to be trapped, all of which can be passed straight through to the underlying
> register. Once the guest has enabled its MMU we no longer need to trap so
> there is no ongoing overhead. In my tests Linux makes about half a dozen
> accesses to these registers before the MMU is enabled; I would expect other
> OSes to behave similarly (the sequence of writes needed to set up the MMU is
> pretty obvious).
> 
> Apart from this unfortunate need to trap these accesses, this approach is
> incompatible with guests which attempt to do DMA operations with their MMU
> disabled. In practice this means guests with passthrough, which we do not yet
> support. Since a typical guest (including dom0) does not access devices which
> require DMA until after it is fully up and running with paging enabled the
> main risk is to in-guest firmware which does DMA, i.e. running EFI in a guest,
> with a disk passed through and booting from that disk. Since we know that dom0
> is not using any such firmware and we do not support device passthrough to
> guests yet we can live with this restriction. Once passthrough is implemented
> this will need to be revisited.
> 
> The patch includes a couple of seemingly unrelated but necessary changes:
> 
>  - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
>    with the existing set of system registers we handled, but broke with the new
>    ones introduced here.
>  - The defines used to decode the HSR system register fields were named the
>    same as the register. This breaks the accessor macros. This had gone
>    unnoticed because the handling of the existing trapped registers did not
>    require accessing the underlying hardware register. Rename those constants
>    with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
> 
> This patch has survived thousands of boot loops on a Midway system.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vVA-0000qV-04; Wed, 08 Jan 2014 15:57:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV8-0000qE-3R
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:02 +0000
Received: from [193.109.254.147:38250] by server-2.bemta-14.messagelabs.com id
	8A/9A-00361-D457DC25; Wed, 08 Jan 2014 15:57:01 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11342 invoked from network); 8 Jan 2014 15:57:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:57:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772822"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-MC;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:40 +0000
Message-ID: <1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] [PATCH 1/2] Revert "kexec/x86: do not map crash kernel
	area"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

This reverts commit 7113a45451a9f656deeff070e47672043ed83664.

The kexec code uses map_domain_page() on pages within the crash
region, so this mapping is required if the crash region is within the
direct map area.

Without this revert, loading a crash kernel may cause a fatal page
fault when trying to zero the first control page allocated from the
crash area.  The fault will occur on non-debug builds of Xen when the
crash area is below 5 TiB (which will be most systems).

Reported-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/arch/x86/setup.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 4833ca3..f07ee2b 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1097,7 +1097,9 @@ void __init __start_xen(unsigned long mbi_p)
                          mod[i].mod_start,
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
-
+    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                     kexec_crash_area.start >> PAGE_SHIFT,
+                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vVA-0000qk-DX; Wed, 08 Jan 2014 15:57:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV8-0000qF-PJ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:02 +0000
Received: from [193.109.254.147:38298] by server-12.bemta-14.messagelabs.com
	id 78/89-13681-E457DC25; Wed, 08 Jan 2014 15:57:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11424 invoked from network); 8 Jan 2014 15:57:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:57:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772823"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-Mh;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:41 +0000
Message-ID: <1389196601-12219-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] [PATCH 2/2] x86: map portion of kexec crash area that
	is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

map_domain_page() is used to temporarily map the crash area when
loading a kexec crash image.

If some or all of the crash area is within the direct map area then
this portion must have a direct mapping, otherwise map_domain_page()
(on non-debug builds) will return a direct map VA that is not actually
mapped.

Parts of the crash area outside the direct map area (i.e., above 5 TiB)
do not need and should not have such a mapping.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/arch/x86/setup.c |   14 +++++++++++---
 1 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index f07ee2b..8e10bdf 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1097,9 +1097,17 @@ void __init __start_xen(unsigned long mbi_p)
                          mod[i].mod_start,
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
-    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
-                     kexec_crash_area.start >> PAGE_SHIFT,
-                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
+
+    if ( kexec_crash_area.size )
+    {
+        unsigned long s = PFN_DOWN(kexec_crash_area.start);
+        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
+                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
+
+        map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                         s, e - s, PAGE_HYPERVISOR);
+    }
+
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vV8-0000qK-Gl; Wed, 08 Jan 2014 15:57:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV7-0000q9-9D
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:01 +0000
Received: from [193.109.254.147:59093] by server-9.bemta-14.messagelabs.com id
	08/C8-13957-C457DC25; Wed, 08 Jan 2014 15:57:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11290 invoked from network); 8 Jan 2014 15:56:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:56:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772821"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-KR;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:39 +0000
Message-ID: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] x86: fix kexec crash regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A recent commit causes Xen to crash if a kexec crash image is loaded
into a crash area below 5 TiB (and a non-debug Xen is used).

This series reverts the bad commit and then re-implements the part of
the original fix that does not cause a regression.

This is an important bug fix necessary for 4.4.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vVA-0000qV-04; Wed, 08 Jan 2014 15:57:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV8-0000qE-3R
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:02 +0000
Received: from [193.109.254.147:38250] by server-2.bemta-14.messagelabs.com id
	8A/9A-00361-D457DC25; Wed, 08 Jan 2014 15:57:01 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11342 invoked from network); 8 Jan 2014 15:57:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:57:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772822"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-MC;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:40 +0000
Message-ID: <1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] [PATCH 1/2] Revert "kexec/x86: do not map crash kernel
	area"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

This reverts commit 7113a45451a9f656deeff070e47672043ed83664.

The kexec code uses map_domain_page() on pages within the crash
region, so this mapping is required if the crash region is within the
direct map area.

Without this revert, loading a crash kernel may cause a fatal page
fault when trying to zero the first control page allocated from the
crash area.  The fault will occur on non-debug builds of Xen when the
crash area is below 5 TiB (which will be most systems).

Reported-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/arch/x86/setup.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 4833ca3..f07ee2b 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1097,7 +1097,9 @@ void __init __start_xen(unsigned long mbi_p)
                          mod[i].mod_start,
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
-
+    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                     kexec_crash_area.start >> PAGE_SHIFT,
+                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vVA-0000qk-DX; Wed, 08 Jan 2014 15:57:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV8-0000qF-PJ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:02 +0000
Received: from [193.109.254.147:38298] by server-12.bemta-14.messagelabs.com
	id 78/89-13681-E457DC25; Wed, 08 Jan 2014 15:57:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11424 invoked from network); 8 Jan 2014 15:57:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:57:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772823"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-Mh;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:41 +0000
Message-ID: <1389196601-12219-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] [PATCH 2/2] x86: map portion of kexec crash area that
	is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

map_domain_page() is used to temporarily map the crash area when
loading a kexec crash image.

If some or all of the crash area is within the direct map area then
this portion must have a direct mapping, otherwise map_domain_page()
(on non-debug builds) will return a direct map VA that is not actually
mapped.

Parts of the crash area outside the direct map area (i.e., above 5 TiB)
do not need and should not have such a mapping.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 xen/arch/x86/setup.c |   14 +++++++++++---
 1 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index f07ee2b..8e10bdf 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1097,9 +1097,17 @@ void __init __start_xen(unsigned long mbi_p)
                          mod[i].mod_start,
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
-    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
-                     kexec_crash_area.start >> PAGE_SHIFT,
-                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
+
+    if ( kexec_crash_area.size )
+    {
+        unsigned long s = PFN_DOWN(kexec_crash_area.start);
+        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
+                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
+
+        map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                         s, e - s, PAGE_HYPERVISOR);
+    }
+
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vV8-0000qK-Gl; Wed, 08 Jan 2014 15:57:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0vV7-0000q9-9D
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:57:01 +0000
Received: from [193.109.254.147:59093] by server-9.bemta-14.messagelabs.com id
	08/C8-13957-C457DC25; Wed, 08 Jan 2014 15:57:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389196618!9622266!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11290 invoked from network); 8 Jan 2014 15:56:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:56:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88772821"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 15:56:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 10:56:57 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0vV3-0002xH-KR;
	Wed, 08 Jan 2014 15:56:57 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 15:56:39 +0000
Message-ID: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: [Xen-devel] x86: fix kexec crash regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A recent commit causes Xen to crash if a kexec crash image is loaded
into a crash area below 5 TiB (and a non-debug Xen is used).

This series reverts the bad commit and then re-implements the part of
the original fix that does not cause a regression.

This is an important bug fix necessary for 4.4.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 15:58:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 15:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vWS-00017f-Qm; Wed, 08 Jan 2014 15:58:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0vWR-00017M-HX
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 15:58:23 +0000
Received: from [193.109.254.147:12927] by server-14.bemta-14.messagelabs.com
	id B7/F1-12628-E957DC25; Wed, 08 Jan 2014 15:58:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389196700!9639986!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29408 invoked from network); 8 Jan 2014 15:58:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 15:58:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90923203"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 15:58:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	10:58:11 -0500
Message-ID: <1389196689.4883.104.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 8 Jan 2014 15:58:09 +0000
In-Reply-To: <1389174675.12612.90.camel@kazak.uk.xensource.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-2-git-send-email-dslutz@verizon.com>
	<20140107171602.1f9d8153@mantra.us.oracle.com>
	<52CCA99A.3080709@citrix.com>
	<1389174675.12612.90.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 1/5] Add Emacs local variables to source
 files.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 09:51 +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 01:27 +0000, Andrew Cooper wrote:
> > 
> > As a fellow emacsian,
> 
> > Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Thanks.
> 
> > As for 4.4-ness:  If the rest of the series is deemed ok for a
> > release-ack, then this should go in for completeness.  If part of the
> > series is decided to be deferred, then this warrants deferring as well.
> 
> I don't think there is any reason to hold off on a change like this
> irrespective of what happens to the rest of the series.

I've just applied it.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:00:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:00:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vYU-00026b-Ob; Wed, 08 Jan 2014 16:00:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W0vYT-00026S-UX
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:00:30 +0000
Received: from [85.158.139.211:56586] by server-12.bemta-5.messagelabs.com id
	3A/88-30017-D167DC25; Wed, 08 Jan 2014 16:00:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389196826!8558116!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16799 invoked from network); 8 Jan 2014 16:00:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:00:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90924224"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 16:00:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 11:00:26 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W0vYQ-00030a-2p;
	Wed, 08 Jan 2014 16:00:26 +0000
Message-ID: <52CD7619.8070109@citrix.com>
Date: Wed, 8 Jan 2014 16:00:25 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] x86: fix kexec crash regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 15:56, David Vrabel wrote:
> A recent commit causes Xen to crash if a kexec crash image is loaded
> into a crash area below 5 TiB (and a non-debug Xen is used).
>
> This series reverts the bad commit and then re-implements the part of
> the original fix that does not cause a regression.
>
> This is an important bug fix necessary for 4.4.
>
> David

Both Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

~Andrew

>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:18:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vpo-0002z5-Mt; Wed, 08 Jan 2014 16:18:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W0vpm-0002yh-Un
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:18:23 +0000
Received: from [85.158.139.211:50441] by server-13.bemta-5.messagelabs.com id
	6C/CD-11357-E4A7DC25; Wed, 08 Jan 2014 16:18:22 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389197898!8413619!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7017 invoked from network); 8 Jan 2014 16:18:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:18:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90935199"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 16:18:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 11:18:03 -0500
Received: from [10.80.3.142] (helo=cuijk.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<rob.hoes@citrix.com>)	id 1W0vpS-0003Iz-Kd;
	Wed, 08 Jan 2014 16:18:02 +0000
From: Rob Hoes <rob.hoes@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 16:17:43 +0000
Message-ID: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
References: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	Rob Hoes <rob.hoes@citrix.com>
Subject: [Xen-devel] [PATCH v3 3/3] libxl: ocaml: use 'for_app_registration'
	in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the application to pass a token to libxl in the fd/timeout
registration callbacks, which it receives back in modification or
deregistration callbacks.

It turns out that this is essential for timeout handling, in order to
identify which timeout to change on a modify event.

Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
Acked-by: David Scott <dave.scott@eu.citrix.com>

---
v2:
* assert if for_app == NULL
* catch any exceptions from callbacks
* use goto-style error handling ;)

v3:
* for timeouts, cleanup for_app in occurred_timeout (not in
  timeout_deregister)
* improve comments
* abort in fd_deregister when the app raises an exception
---
 tools/ocaml/libs/xl/xenlight.mli.in  |   10 +--
 tools/ocaml/libs/xl/xenlight_stubs.c |  146 +++++++++++++++++++++++++++++-----
 2 files changed, 133 insertions(+), 23 deletions(-)

diff --git a/tools/ocaml/libs/xl/xenlight.mli.in b/tools/ocaml/libs/xl/xenlight.mli.in
index b9819e1..277e81d 100644
--- a/tools/ocaml/libs/xl/xenlight.mli.in
+++ b/tools/ocaml/libs/xl/xenlight.mli.in
@@ -68,11 +68,11 @@ module Async : sig
 
 	val osevent_register_hooks : ctx ->
 		user:'a ->
-		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> unit) ->
-		fd_modify:('a -> Unix.file_descr -> event list -> unit) ->
-		fd_deregister:('a -> Unix.file_descr -> unit) ->
-		timeout_register:('a -> int64 -> int64 -> for_libxl -> unit) ->
-		timeout_modify:('a -> unit) ->
+		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> 'b) ->
+		fd_modify:('a -> Unix.file_descr -> 'b -> event list -> 'b) ->
+		fd_deregister:('a -> Unix.file_descr -> 'b -> unit) ->
+		timeout_register:('a -> int64 -> int64 -> for_libxl -> 'c) ->
+		timeout_modify:('a -> 'c -> 'c) ->
 		osevent_hooks
 
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2e2606a..50cd223 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -31,6 +31,7 @@
 #include <libxl_utils.h>
 
 #include <unistd.h>
+#include <assert.h>
 
 #include "caml_xentoollog.h"
 
@@ -1211,14 +1212,20 @@ value Val_poll_events(short events)
 	CAMLreturn(event_list);
 }
 
+/* The process for dealing with the for_app_registration_  values in the
+ * callbacks below (GC registrations etc) is similar to the way for_callback is
+ * handled in the asynchronous operations above. */
+
 int fd_register(void *user, int fd, void **for_app_registration_out,
                      short events, void *for_libxl)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1230,10 +1237,25 @@ int fd_register(void *user, int fd, void **for_app_registration_out,
 	args[2] = Val_poll_events(events);
 	args[3] = (value) for_libxl;
 
-	caml_callbackN(*func, 4, args);
+	for_app = malloc(sizeof(value));
+	if (!for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	caml_register_global_root(for_app);
+	*for_app_registration_out = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int fd_modify(void *user, int fd, void **for_app_registration_update,
@@ -1241,9 +1263,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 3);
+	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1252,21 +1279,37 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
-	args[2] = Val_poll_events(events);
+	args[2] = *for_app;
+	args[3] = Val_poll_events(events);
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
 
-	caml_callbackN(*func, 3, args);
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void fd_deregister(void *user, int fd, void *for_app_registration)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 2);
+	CAMLlocalN(args, 3);
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = for_app_registration;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1275,12 +1318,26 @@ void fd_deregister(void *user, int fd, void *for_app_registration)
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
+	args[2] = *for_app;
+
+	caml_callbackN_exn(*func, 3, args);
+	/* This hook does not return error codes, so the best thing we can do
+	 * to avoid trouble, if we catch an exception from the app, is abort. */
+	if (Is_exception_result(*for_app))
+		abort();
+
+	caml_remove_global_root(for_app);
+	free(for_app);
 
-	caml_callbackN(*func, 2, args);
 	CAMLdone;
 	caml_enter_blocking_section();
 }
 
+struct for_libxl_timeout {
+	void *for_libxl;
+	value *for_app;
+};
+
 int timeout_register(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl)
 {
@@ -1288,8 +1345,10 @@ int timeout_register(void *user, void **for_app_registration_out,
 	CAMLparam0();
 	CAMLlocal2(sec, usec);
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct for_libxl_timeout *handles;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1299,15 +1358,41 @@ int timeout_register(void *user, void **for_app_registration_out,
 	sec = caml_copy_int64(abs.tv_sec);
 	usec = caml_copy_int64(abs.tv_usec);
 
+	/* This struct of "handles" will contain "for_libxl" as well as "for_app".
+	 * We'll give a pointer to the struct to the app, and get it back in
+	 * occurred_timeout, where we can clean it all up. */
+	handles = malloc(sizeof(*handles));
+	if (!handles) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_libxl = for_libxl;
+
 	args[0] = *p;
 	args[1] = sec;
 	args[2] = usec;
-	args[3] = (value) for_libxl;
+	args[3] = (value) handles;
 
-	caml_callbackN(*func, 4, args);
+	handles->for_app = malloc(sizeof(value));
+	if (!handles->for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*(handles->for_app) = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*(handles->for_app))) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	caml_register_global_root(handles->for_app);
+	*for_app_registration_out = handles->for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int timeout_modify(void *user, void **for_app_registration_update,
@@ -1315,25 +1400,45 @@ int timeout_modify(void *user, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
+	CAMLlocalN(args, 2);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
 		func = caml_named_value("libxl_timeout_modify");
 	}
 
-	caml_callback(*func, *p);
+	args[0] = *p;
+	args[1] = *for_app;
+
+	*for_app = caml_callbackN_exn(*func, 2, args);
+
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void timeout_deregister(void *user, void *for_app_registration)
 {
-	caml_leave_blocking_section();
-	failwith_xl(ERROR_FAIL, "timeout_deregister not yet implemented");
-	caml_enter_blocking_section();
+	/* This hook will never be called by libxl. */
+	abort();
 }
 
 value stub_libxl_osevent_register_hooks(value ctx, value user)
@@ -1386,12 +1491,17 @@ value stub_libxl_osevent_occurred_fd(value ctx, value for_libxl, value fd,
 
 value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
 {
-	CAMLparam2(ctx, for_libxl);
+	CAMLparam1(ctx);
+	struct for_libxl_timeout *handles = (struct for_libxl_timeout *) for_libxl;
 
 	caml_enter_blocking_section();
-	libxl_osevent_occurred_timeout(CTX, (void *) for_libxl);
+	libxl_osevent_occurred_timeout(CTX, (void *) handles->for_libxl);
 	caml_leave_blocking_section();
 
+	caml_remove_global_root(handles->for_app);
+	free(handles->for_app);
+	free(handles);
+
 	CAMLreturn(Val_unit);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:18:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vpo-0002z5-Mt; Wed, 08 Jan 2014 16:18:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W0vpm-0002yh-Un
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:18:23 +0000
Received: from [85.158.139.211:50441] by server-13.bemta-5.messagelabs.com id
	6C/CD-11357-E4A7DC25; Wed, 08 Jan 2014 16:18:22 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389197898!8413619!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7017 invoked from network); 8 Jan 2014 16:18:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:18:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90935199"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 16:18:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 11:18:03 -0500
Received: from [10.80.3.142] (helo=cuijk.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<rob.hoes@citrix.com>)	id 1W0vpS-0003Iz-Kd;
	Wed, 08 Jan 2014 16:18:02 +0000
From: Rob Hoes <rob.hoes@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 16:17:43 +0000
Message-ID: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
References: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	Rob Hoes <rob.hoes@citrix.com>
Subject: [Xen-devel] [PATCH v3 3/3] libxl: ocaml: use 'for_app_registration'
	in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the application to pass a token to libxl in the fd/timeout
registration callbacks, which it receives back in modification or
deregistration callbacks.

It turns out that this is essential for timeout handling, in order to
identify which timeout to change on a modify event.

Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
Acked-by: David Scott <dave.scott@eu.citrix.com>

---
v2:
* assert if for_app == NULL
* catch any exceptions from callbacks
* use goto-style error handling ;)

v3:
* for timeouts, cleanup for_app in occurred_timeout (not in
  timeout_deregister)
* improve comments
* abort in fd_deregister when the app raises an exception
---
 tools/ocaml/libs/xl/xenlight.mli.in  |   10 +--
 tools/ocaml/libs/xl/xenlight_stubs.c |  146 +++++++++++++++++++++++++++++-----
 2 files changed, 133 insertions(+), 23 deletions(-)

diff --git a/tools/ocaml/libs/xl/xenlight.mli.in b/tools/ocaml/libs/xl/xenlight.mli.in
index b9819e1..277e81d 100644
--- a/tools/ocaml/libs/xl/xenlight.mli.in
+++ b/tools/ocaml/libs/xl/xenlight.mli.in
@@ -68,11 +68,11 @@ module Async : sig
 
 	val osevent_register_hooks : ctx ->
 		user:'a ->
-		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> unit) ->
-		fd_modify:('a -> Unix.file_descr -> event list -> unit) ->
-		fd_deregister:('a -> Unix.file_descr -> unit) ->
-		timeout_register:('a -> int64 -> int64 -> for_libxl -> unit) ->
-		timeout_modify:('a -> unit) ->
+		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> 'b) ->
+		fd_modify:('a -> Unix.file_descr -> 'b -> event list -> 'b) ->
+		fd_deregister:('a -> Unix.file_descr -> 'b -> unit) ->
+		timeout_register:('a -> int64 -> int64 -> for_libxl -> 'c) ->
+		timeout_modify:('a -> 'c -> 'c) ->
 		osevent_hooks
 
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2e2606a..50cd223 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -31,6 +31,7 @@
 #include <libxl_utils.h>
 
 #include <unistd.h>
+#include <assert.h>
 
 #include "caml_xentoollog.h"
 
@@ -1211,14 +1212,20 @@ value Val_poll_events(short events)
 	CAMLreturn(event_list);
 }
 
+/* The process for dealing with the for_app_registration_  values in the
+ * callbacks below (GC registrations etc) is similar to the way for_callback is
+ * handled in the asynchronous operations above. */
+
 int fd_register(void *user, int fd, void **for_app_registration_out,
                      short events, void *for_libxl)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1230,10 +1237,25 @@ int fd_register(void *user, int fd, void **for_app_registration_out,
 	args[2] = Val_poll_events(events);
 	args[3] = (value) for_libxl;
 
-	caml_callbackN(*func, 4, args);
+	for_app = malloc(sizeof(value));
+	if (!for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	caml_register_global_root(for_app);
+	*for_app_registration_out = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int fd_modify(void *user, int fd, void **for_app_registration_update,
@@ -1241,9 +1263,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 3);
+	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1252,21 +1279,37 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
-	args[2] = Val_poll_events(events);
+	args[2] = *for_app;
+	args[3] = Val_poll_events(events);
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
 
-	caml_callbackN(*func, 3, args);
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void fd_deregister(void *user, int fd, void *for_app_registration)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 2);
+	CAMLlocalN(args, 3);
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = for_app_registration;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1275,12 +1318,26 @@ void fd_deregister(void *user, int fd, void *for_app_registration)
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
+	args[2] = *for_app;
+
+	caml_callbackN_exn(*func, 3, args);
+	/* This hook does not return error codes, so the best thing we can do
+	 * to avoid trouble, if we catch an exception from the app, is abort. */
+	if (Is_exception_result(*for_app))
+		abort();
+
+	caml_remove_global_root(for_app);
+	free(for_app);
 
-	caml_callbackN(*func, 2, args);
 	CAMLdone;
 	caml_enter_blocking_section();
 }
 
+struct for_libxl_timeout {
+	void *for_libxl;
+	value *for_app;
+};
+
 int timeout_register(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl)
 {
@@ -1288,8 +1345,10 @@ int timeout_register(void *user, void **for_app_registration_out,
 	CAMLparam0();
 	CAMLlocal2(sec, usec);
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct for_libxl_timeout *handles;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1299,15 +1358,41 @@ int timeout_register(void *user, void **for_app_registration_out,
 	sec = caml_copy_int64(abs.tv_sec);
 	usec = caml_copy_int64(abs.tv_usec);
 
+	/* This struct of "handles" will contain "for_libxl" as well as "for_app".
+	 * We'll give a pointer to the struct to the app, and get it back in
+	 * occurred_timeout, where we can clean it all up. */
+	handles = malloc(sizeof(*handles));
+	if (!handles) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_libxl = for_libxl;
+
 	args[0] = *p;
 	args[1] = sec;
 	args[2] = usec;
-	args[3] = (value) for_libxl;
+	args[3] = (value) handles;
 
-	caml_callbackN(*func, 4, args);
+	handles->for_app = malloc(sizeof(value));
+	if (!handles->for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*(handles->for_app) = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*(handles->for_app))) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	caml_register_global_root(handles->for_app);
+	*for_app_registration_out = handles->for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int timeout_modify(void *user, void **for_app_registration_update,
@@ -1315,25 +1400,45 @@ int timeout_modify(void *user, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
+	CAMLlocalN(args, 2);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
 		func = caml_named_value("libxl_timeout_modify");
 	}
 
-	caml_callback(*func, *p);
+	args[0] = *p;
+	args[1] = *for_app;
+
+	*for_app = caml_callbackN_exn(*func, 2, args);
+
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void timeout_deregister(void *user, void *for_app_registration)
 {
-	caml_leave_blocking_section();
-	failwith_xl(ERROR_FAIL, "timeout_deregister not yet implemented");
-	caml_enter_blocking_section();
+	/* This hook will never be called by libxl. */
+	abort();
 }
 
 value stub_libxl_osevent_register_hooks(value ctx, value user)
@@ -1386,12 +1491,17 @@ value stub_libxl_osevent_occurred_fd(value ctx, value for_libxl, value fd,
 
 value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
 {
-	CAMLparam2(ctx, for_libxl);
+	CAMLparam1(ctx);
+	struct for_libxl_timeout *handles = (struct for_libxl_timeout *) for_libxl;
 
 	caml_enter_blocking_section();
-	libxl_osevent_occurred_timeout(CTX, (void *) for_libxl);
+	libxl_osevent_occurred_timeout(CTX, (void *) handles->for_libxl);
 	caml_leave_blocking_section();
 
+	caml_remove_global_root(handles->for_app);
+	free(handles->for_app);
+	free(handles);
+
 	CAMLreturn(Val_unit);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:27:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0vyh-0003cn-Kc; Wed, 08 Jan 2014 16:27:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0vyg-0003ci-KQ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:27:34 +0000
Received: from [193.109.254.147:48010] by server-1.bemta-14.messagelabs.com id
	5D/5B-15600-57C7DC25; Wed, 08 Jan 2014 16:27:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389198453!6122820!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16307 invoked from network); 8 Jan 2014 16:27:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 16:27:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 16:27:32 +0000
Message-Id: <52CD8A820200007800111962@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 16:27:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
	<1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] Revert "kexec/x86: do not map crash
 kernel area"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 16:56, David Vrabel <david.vrabel@citrix.com> wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> This reverts commit 7113a45451a9f656deeff070e47672043ed83664.

As indicated before - I don't think reverting is the right thing here.
Just add the necessary code in the same (or another, if more
suitable) place.

Jan

> The kexec code uses map_domain_page() on pages within the crash
> region, so this mapping is required if the crash region is within the
> direct map area.
> 
> Without this revert, loading a crash kernel may cause a fatal page
> fault when trying to zero the first control page allocated from the
> crash area.  The fault will occur on non-debug builds of Xen when the
> crash area is below 5 TiB (which will be most systems).
> 
> Reported-by: Don Slutz <dslutz@verizon.com>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  xen/arch/x86/setup.c |    4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 4833ca3..f07ee2b 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1097,7 +1097,9 @@ void __init __start_xen(unsigned long mbi_p)
>                           mod[i].mod_start,
>                           PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
>      }
> -
> +    map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                     kexec_crash_area.start >> PAGE_SHIFT,
> +                     PFN_UP(kexec_crash_area.size), PAGE_HYPERVISOR);
>      xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
>                     ~((1UL << L2_PAGETABLE_SHIFT) - 1);
>      destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + 
> BOOTSTRAP_MAP_BASE);
> -- 
> 1.7.2.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:35:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0w5w-0004Gx-EN; Wed, 08 Jan 2014 16:35:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0w5t-0004Gs-Nc
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:35:02 +0000
Received: from [193.109.254.147:23768] by server-8.bemta-14.messagelabs.com id
	BE/1A-30921-53E7DC25; Wed, 08 Jan 2014 16:35:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389198899!9615021!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5411 invoked from network); 8 Jan 2014 16:35:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:35:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90944311"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 16:34:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	11:34:58 -0500
Message-ID: <1389198896.4883.109.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 8 Jan 2014 16:34:56 +0000
In-Reply-To: <52CD8A820200007800111962@nat28.tlf.novell.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
	<1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
	<52CD8A820200007800111962@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] Revert "kexec/x86: do not map crash
 kernel area"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 16:27 +0000, Jan Beulich wrote:
> >>> On 08.01.14 at 16:56, David Vrabel <david.vrabel@citrix.com> wrote:
> > From: David Vrabel <david.vrabel@citrix.com>
> > 
> > This reverts commit 7113a45451a9f656deeff070e47672043ed83664.
> 
> As indicated before - I don't think reverting is the right thing here.
> Just add the necessary code in the same (or another, if more
> suitable) place.

Isn't that what the second patch does? Perhaps this would be better
structured as a single fix rather than a revert + do it properly, but
the latter approach seems reasonable enough...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:38:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0w9R-0004TJ-7F; Wed, 08 Jan 2014 16:38:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0w9P-0004Qb-Eo
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:38:39 +0000
Received: from [85.158.137.68:28887] by server-11.bemta-3.messagelabs.com id
	A7/01-19379-E0F7DC25; Wed, 08 Jan 2014 16:38:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389199117!7969974!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6128 invoked from network); 8 Jan 2014 16:38:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 16:38:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 16:38:37 +0000
Message-Id: <52CD8D1A0200007800111981@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 16:38:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
	<1389196601-12219-3-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1389196601-12219-3-git-send-email-david.vrabel@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86: map portion of kexec crash area
 that is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 16:56, David Vrabel <david.vrabel@citrix.com> wrote:
> +    if ( kexec_crash_area.size )

Wouldn't this better also include a kexec_crash_area.start range
check?

> +    {
> +        unsigned long s = PFN_DOWN(kexec_crash_area.start);
> +        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
> +                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
> +
> +        map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                         s, e - s, PAGE_HYPERVISOR);

map_pages_to_xen() doesn't tolerate a huge count resulting when
e < s (which is possible due to the min() above).

Furthermore, with PFN compression and a badly specified address
range (overlapping a hole) this would likely crash during boot. While
that mistake might later also lead to problems, I think it would be
better if booting nevertheless succeeded.

And of course the whole thing will break the moment we allow RAM
to fall into PFN compression holes (either by not using some small
amount of memory in order to be able to cover a larger total
amount, like would be desirable in cases where we need to chop
off a large piece at the top, but could do with not using a couple
of gigabytes relocated from the range below 4Gb, or via command
line option), which is why I told you that using map_domain_page()
for the kexec crash area is a bad thing in the first place.

Jan

> +    }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:40:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wB8-00050L-2a; Wed, 08 Jan 2014 16:40:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W0wB6-00050B-6X
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:40:24 +0000
Received: from [85.158.137.68:38890] by server-13.bemta-3.messagelabs.com id
	23/D9-28603-77F7DC25; Wed, 08 Jan 2014 16:40:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389199222!7942009!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1072 invoked from network); 8 Jan 2014 16:40:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 16:40:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 08 Jan 2014 16:40:22 +0000
Message-Id: <52CD8D820200007800111984@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 08 Jan 2014 16:40:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
	<1389196601-12219-2-git-send-email-david.vrabel@citrix.com>
	<52CD8A820200007800111962@nat28.tlf.novell.com>
	<1389198896.4883.109.camel@kazak.uk.xensource.com>
In-Reply-To: <1389198896.4883.109.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: David Vrabel <david.vrabel@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2] Revert "kexec/x86: do not map crash
 kernel area"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 08.01.14 at 17:34, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-08 at 16:27 +0000, Jan Beulich wrote:
>> >>> On 08.01.14 at 16:56, David Vrabel <david.vrabel@citrix.com> wrote:
>> > From: David Vrabel <david.vrabel@citrix.com>
>> > 
>> > This reverts commit 7113a45451a9f656deeff070e47672043ed83664.
>> 
>> As indicated before - I don't think reverting is the right thing here.
>> Just add the necessary code in the same (or another, if more
>> suitable) place.
> 
> Isn't that what the second patch does? Perhaps this would be better
> structured as a single fix rather than a revert + do it properly, but
> the latter approach seems reasonable enough...

Right - that's what I'm asking for: Do this as a single patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:46:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wGr-0005E1-5b; Wed, 08 Jan 2014 16:46:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W0wGq-0005Dw-1y
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:46:20 +0000
Received: from [85.158.143.35:39482] by server-3.bemta-4.messagelabs.com id
	58/CD-32360-BD08DC25; Wed, 08 Jan 2014 16:46:19 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389199578!7801624!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19580 invoked from network); 8 Jan 2014 16:46:18 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 16:46:18 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389199578; l=453;
	s=domk; d=aepfle.de;
	h=Content-Type:MIME-Version:Subject:To:From:Date:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=ObtwNW0NGaMHQSS0kKzd0NQhrjo=;
	b=rzWyWDtyBiDf81lnoC07m/SJlwGYQxu+BEiXn3s/JD0h/F0L/JI3dsTf8HexhYu43f5
	uzegpIHYRL0k0XRv4ibwaiUjlo70TVVxDl1pyvm9lKBaj/2ApFXAD+9oIirPZlFbEfLm3
	FK9HS4KTR3tqN09JLf6JOoI0+k8yhY6eKpI=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJwKkjb5rHQwZTViw==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-064-030.pools.arcor-ip.net [88.65.64.30])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 2006b8q08GkIgda
	for <xen-devel@lists.xen.org>; Wed, 8 Jan 2014 17:46:18 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 1E14F50245; Wed,  8 Jan 2014 17:46:18 +0100 (CET)
Date: Wed, 8 Jan 2014 17:46:17 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140108164617.GA20476@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Subject: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

With xm it was possible to have a global keymap="de" to map the physical
keyboard correctly. Now with xl this fails, at least in xen-4.3:
xl create -d shows keymap:NULL in the vfb part.
Only moving keymap= into vfb=[] fixes it for me.

xl.cfg(5) indicates that keymap= can be specified as a global option (just
like vnc=) as well as a suboption of vfb=[].

Was this already fixed in xen-unstable? git log shows no keymap-related
changes.
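For reference, the workaround described above as an xl domain config fragment (hypothetical values; vnc= and keymap= as suboptions are as documented in xl.cfg(5)):

```
# Global form from xl.cfg(5) -- not picked up by this xl version:
#keymap = "de"

# Workaround: pass keymap as a vfb suboption instead:
vfb = [ "vnc=1,keymap=de" ]
```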

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:47:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wHh-0005H0-L5; Wed, 08 Jan 2014 16:47:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0wHg-0005Gq-18
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:47:12 +0000
Received: from [85.158.143.35:51097] by server-3.bemta-4.messagelabs.com id
	12/3F-32360-F018DC25; Wed, 08 Jan 2014 16:47:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389199629!10422862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25708 invoked from network); 8 Jan 2014 16:47:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:47:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88798460"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 16:47:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	11:47:08 -0500
Message-ID: <1389199626.4883.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 8 Jan 2014 16:47:06 +0000
In-Reply-To: <52CD60AA.9010607@terremark.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > Using volatile is almost always wrong. Why do you think it is needed
> > here?
> 
> This was from Mukesh Rathor:
> 
> http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> 
> I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> to change any way you want.

I'm not the maintainer but if I were I'd say drop the volatile and maybe
mark it __read_mostly and perhaps __used too (since it's static it might
otherwise get eliminated).

> > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > even const (given that I can't see anything which writes it I suppose
> > this is a compile time setting?)
> 
> That has been how I have been testing it so far (changing the source
> to set values).  Mukesh claims to be able to change it at will.  Not
> sure how const may affect this.

I presume this is using kdb; marking it const would probably cause the
dead code to be eliminated, so it is worth not doing that for his
sake.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:51:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wM7-00060R-TT; Wed, 08 Jan 2014 16:51:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0wM6-00060J-5o
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:51:46 +0000
Received: from [85.158.139.211:40698] by server-9.bemta-5.messagelabs.com id
	2E/59-15098-1228DC25; Wed, 08 Jan 2014 16:51:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389199903!8612278!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21652 invoked from network); 8 Jan 2014 16:51:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 16:51:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90951788"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 16:51:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	11:51:41 -0500
Message-ID: <1389199900.27473.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 8 Jan 2014 16:51:40 +0000
In-Reply-To: <20140108164617.GA20476@aepfle.de>
References: <20140108164617.GA20476@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> With xm it was possible to have a global keymap="de" to map the physical
> keyboard correctly. Now with xl this fails, at least in xen-4.3.
> xl create -d shows keymap:NULL in the vfb part.
> Only moving keymap= into vfb=[] fixes it for me.
> 
> xl.cfg(5) indicates that keymap= can be specified as a global option (just
> like vnc=) as well as a suboption for vfb=[].
> 
> Was this already fixed in xen-unstable? git log shows no keymap-related
> changes.

I don't think Wei covered this one with his VNC patches. It does sound
like it should be fixed though, yes. I think this is 4.5 material at this
point.
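A minimal xl domain config fragment illustrating the workaround Olaf
describes (the vnc=1 suboption is illustrative; only keymap= is taken
from the report):

```
# Global form, documented in xl.cfg(5) but reportedly ignored by xl 4.3:
# keymap = "de"

# Workaround: pass keymap as a vfb= suboption instead:
vfb = [ 'vnc=1,keymap=de' ]
```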

Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
Sorry for not thinking of that during review.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 16:59:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wTI-0006Wa-JB; Wed, 08 Jan 2014 16:59:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1W0wTH-0006WV-FZ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:59:11 +0000
Received: from [85.158.143.35:5552] by server-2.bemta-4.messagelabs.com id
	1E/7D-11386-ED38DC25; Wed, 08 Jan 2014 16:59:10 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389200344!10426026!1
X-Originating-IP: [15.201.24.17]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUuMjAxLjI0LjE3ID0+IDcyNjQ1Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30892 invoked from network); 8 Jan 2014 16:59:05 -0000
Received: from g4t0014.houston.hp.com (HELO g4t0014.houston.hp.com)
	(15.201.24.17)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 16:59:05 -0000
Received: from G9W0364.americas.hpqcorp.net (g9w0364.houston.hp.com
	[16.216.193.45]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t0014.houston.hp.com (Postfix) with ESMTPS id 0F33824005;
	Wed,  8 Jan 2014 16:59:03 +0000 (UTC)
Received: from G4W6301.americas.hpqcorp.net (16.210.26.226) by
	G9W0364.americas.hpqcorp.net (16.216.193.45) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 8 Jan 2014 16:58:18 +0000
Received: from G4W3221.americas.hpqcorp.net ([169.254.3.234]) by
	G4W6301.americas.hpqcorp.net ([16.210.26.226]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 16:58:18 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] passing smbios table from qemu
Thread-Index: Ac8LIYIALIWytgb1QG2L+qIOhaKtGQAjrFMAAAW7DgAAAi0g0AAAGt+QAAFbrhUAApTFAAAsZ8Aw
Date: Wed, 8 Jan 2014 16:58:17 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D450762FED@G4W3221.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>,
	<3B22ECA2D19A3D408C83F4F15A9CB7D45076233C@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE57D@G2W2433.americas.hpqcorp.net>,
	<3B22ECA2D19A3D408C83F4F15A9CB7D450762410@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE61D@G2W2433.americas.hpqcorp.net>
	<2559B97E2BCECA4BA2AF63DA8CF5C7200D095CD7@G1W3640.americas.hpqcorp.net>
In-Reply-To: <2559B97E2BCECA4BA2AF63DA8CF5C7200D095CD7@G1W3640.americas.hpqcorp.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1574490690870549404=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1574490690870549404==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_"

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Dear Ian,

Thanks for the clarification.

None of the things you mentioned addresses my needs: I wanted to pass
part of the SMBIOS table to the guest and thus make it look more like
the physical host we are running on.  I actually made it work with some
extra code.  The change is small, but I would like our corporate legal
staff to review it before releasing it to the Xen community.

On top of that, I still doubt that anyone other than myself would be
interested in this.  After all, the essence of virtualization is to
abstract physical systems rather than trying to make the guest look
similar to the host.

Regards/Eniac

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Tuesday, January 07, 2014 6:56 AM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu

On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:

> Question, am I missing anything, or this feature (passing smbios) is
> still work in progress?

Under Xen smbios tables are supplied via hvmloader, not via qemu.

What tables and/or values do you want to override/supply?

I believe that libxc supports passing in extra smbios tables when
building the guest (via struct xc_hvm_build_args.smbios_module) but
nothing has been plumbed in to make use of this.

I'm not aware of any ongoing work to plumb that stuff further up, e.g.
to libxl and xl or other toolstacks. (I think the libxc functionality
is only consumed by the XenClient toolstack.)

There is also some support in hvmloader for setting certain SMBIOS
parameters via xenstore keys. See the various HVM_XS_* in
tools/firmware/hvmloader/smbios.c. It includes things like the system
manufacturer, chassis number etc.

Do either of those cover your use case? Are you interested in plumbing
them up?
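As an editorial sketch of the xenstore-key mechanism described above
(key paths follow xen/include/public/hvm/hvm_xs_strings.h as I read it;
the domid 5 and the string values are hypothetical, and the keys must
be written before hvmloader runs):

```
# Hypothetical example: override SMBIOS strings for domain 5 before
# hvmloader reads them (paths per hvm_xs_strings.h; values illustrative):
xenstore-write /local/domain/5/bios-strings/system-manufacturer "Example Corp"
xenstore-write /local/domain/5/bios-strings/system-product-name "Example 9000"
```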

Ian.


--_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_--


--===============1574490690870549404==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1574490690870549404==--


From xen-devel-bounces@lists.xen.org Wed Jan 08 16:59:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 16:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wTI-0006Wa-JB; Wed, 08 Jan 2014 16:59:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1W0wTH-0006WV-FZ
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 16:59:11 +0000
Received: from [85.158.143.35:5552] by server-2.bemta-4.messagelabs.com id
	1E/7D-11386-ED38DC25; Wed, 08 Jan 2014 16:59:10 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389200344!10426026!1
X-Originating-IP: [15.201.24.17]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUuMjAxLjI0LjE3ID0+IDcyNjQ1Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30892 invoked from network); 8 Jan 2014 16:59:05 -0000
Received: from g4t0014.houston.hp.com (HELO g4t0014.houston.hp.com)
	(15.201.24.17)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 16:59:05 -0000
Received: from G9W0364.americas.hpqcorp.net (g9w0364.houston.hp.com
	[16.216.193.45]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t0014.houston.hp.com (Postfix) with ESMTPS id 0F33824005;
	Wed,  8 Jan 2014 16:59:03 +0000 (UTC)
Received: from G4W6301.americas.hpqcorp.net (16.210.26.226) by
	G9W0364.americas.hpqcorp.net (16.216.193.45) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 8 Jan 2014 16:58:18 +0000
Received: from G4W3221.americas.hpqcorp.net ([169.254.3.234]) by
	G4W6301.americas.hpqcorp.net ([16.210.26.226]) with mapi id
	14.03.0123.003; Wed, 8 Jan 2014 16:58:18 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] passing smbios table from qemu
Thread-Index: Ac8LIYIALIWytgb1QG2L+qIOhaKtGQAjrFMAAAW7DgAAAi0g0AAAGt+QAAFbrhUAApTFAAAsZ8Aw
Date: Wed, 8 Jan 2014 16:58:17 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D450762FED@G4W3221.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>,
	<3B22ECA2D19A3D408C83F4F15A9CB7D45076233C@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE57D@G2W2433.americas.hpqcorp.net>,
	<3B22ECA2D19A3D408C83F4F15A9CB7D450762410@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE61D@G2W2433.americas.hpqcorp.net>
	<2559B97E2BCECA4BA2AF63DA8CF5C7200D095CD7@G1W3640.americas.hpqcorp.net>
In-Reply-To: <2559B97E2BCECA4BA2AF63DA8CF5C7200D095CD7@G1W3640.americas.hpqcorp.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1574490690870549404=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1574490690870549404==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_"

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Dear Ian,

Thanks for the clarification.

None of the things you mentioned addresses my need, which is to pass part of
the host's SMBIOS table to the guest and thus make it look more like the
physical host we are running on.  I actually made it work with some extra
code.  The change is small, but I would like our corporate legal staff to
review it before releasing it to the Xen community.

On top of that, I still doubt that anyone other than myself would be
interested in this.  After all, the essence of virtualization is to abstract
physical systems rather than to make the guest look similar to the host.

Regards/Eniac

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Tuesday, January 07, 2014 6:56 AM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] passing smbios table from qemu

On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:

> Question: am I missing anything, or is this feature (passing smbios)
> still a work in progress?

Under Xen, SMBIOS tables are supplied via hvmloader, not via qemu.

What tables and/or values do you want to override/supply?

I believe that libxc supports passing in extra smbios tables when building
the guest (via struct xc_hvm_build_args.smbios_module) but nothing has been
plumbed in to make use of this.

I'm not aware of any ongoing work to plumb that stuff further up, e.g.
to libxl and xl or other toolstacks. (I think the libxc functionality is
only consumed by the XenClient toolstack.)

There is also some support in hvmloader for setting certain SMBIOS
parameters via xenstore keys. See the various HVM_XS_* keys in
tools/firmware/hvmloader/smbios.c. It includes things like the system
manufacturer, chassis number etc.

Does either of those cover your use case? Are you interested in plumbing
them up?

Ian.

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D450762FEDG4W3221americas_--


--===============1574490690870549404==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1574490690870549404==--


From xen-devel-bounces@lists.xen.org Wed Jan 08 17:05:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wYn-0006qe-TY; Wed, 08 Jan 2014 17:04:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W0wYn-0006qZ-0Q
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 17:04:53 +0000
Received: from [193.109.254.147:47910] by server-6.bemta-14.messagelabs.com id
	77/02-14958-4358DC25; Wed, 08 Jan 2014 17:04:52 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389200691!9628744!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7212 invoked from network); 8 Jan 2014 17:04:51 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 17:04:51 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W0wYd-0000kD-0E; Wed, 08 Jan 2014 17:04:43 +0000
Date: Wed, 8 Jan 2014 18:04:42 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108170442.GA75747@deinos.phlegethon.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389199626.4883.112.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > Using volatile is almost always wrong. Why do you think it is needed
> > > here?
> > 
> > This was from Mukesh Rathor:
> > 
> > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > 
> > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > to change any way you want.
> 
> I'm not the maintainer but if I were I'd say drop the volatile and maybe
> mark it __read_mostly and perhaps __used too (since it's static it might
> otherwise get eliminated).
> 
> > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > even const (given that I can't see anything which writes it I suppose
> > > this is a compile time setting?)
> > 
> > That has been how I have been testing it so far (changing the source
> > to set values).  Mukesh claims to be able to change it at will.  Not
> > sure how const may affect this.

If the idea is to use kdb itself to frob the value, then it does need
something to stop the compiler caching it.  This might even be one of
the few cases where 'volatile' actually DTRT; it would still be more
in keeping with Xen style to use an explicit read op (like
atomic_read()) where the value is consumed.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 17:06:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wa0-0006vH-OD; Wed, 08 Jan 2014 17:06:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W0wZz-0006v8-H4
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 17:06:07 +0000
Received: from [193.109.254.147:52908] by server-11.bemta-14.messagelabs.com
	id 1C/43-20576-E758DC25; Wed, 08 Jan 2014 17:06:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389200765!9657375!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19155 invoked from network); 8 Jan 2014 17:06:06 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:06:06 -0000
Received: by mail-ee0-f51.google.com with SMTP id b15so815707eek.38
	for <xen-devel@lists.xenproject.org>;
	Wed, 08 Jan 2014 09:06:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=XrxvfOx47XB88ZqBz7xy6seLpCnkQKFeoVdwJ46y/TE=;
	b=WUjwXw+EUnDoSKr6bK3fNdxeU0wyYTLg1EnKQ1m5Wybr7t7Y5mFPEAFpa3+sc1v6Yv
	0FeOPTITyfgBn7JfJ8NT6NWtXovCNORMGXBxa44F94VEU9tl8MYoVdLSgiA0CRTrE8hr
	7LwYbqhgJsh6KU1dbIS6Rf1QAFV9AAIJTPZ35/TDlvljbsaxkAEeHnd/scqEzt9ykdNa
	IXK+k2YYxOehFJvijIi79tmkSY8YDVSqsSMu3pX0QZjpMd7TJevwNDGzWMJ5BGPb5eLC
	Xkb+1mD0sSNyWEHirOQRv1wLbahcpYgXMv/U34Jhl98ALAuAgiav7w41DSl2ZxqKighg
	mfjQ==
X-Gm-Message-State: ALoCoQlEmVXS8QEsybAMOWV7A3iDwSKo59ZfL3Q04BRiLuORmwUQuWASUHalpep5sBX14fzXqs3w
X-Received: by 10.15.44.4 with SMTP id y4mr27497953eev.71.1389200765523;
	Wed, 08 Jan 2014 09:06:05 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	n1sm191334786eep.20.2014.01.08.09.06.04 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 08 Jan 2014 09:06:04 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Wed,  8 Jan 2014 17:05:59 +0000
Message-Id: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The p2m is shared between the VCPUs of each domain. Currently Xen only
flushes the TLB on the local PCPU. This could result in a mismatch between
the mappings in the p2m and the TLBs.

Flush TLBs used by this domain on every PCPU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

This is a possible bug fix (found by reading the code) for Xen 4.4. I have
added a small optimisation to avoid flushing all TLBs when a VCPU of this
domain is running on the current PCPU.

The downside of this patch is that the function can be a little slower
because Xen flushes more TLB entries.
---
 xen/arch/arm/p2m.c                   |    7 ++++++-
 xen/include/asm-arm/arm32/flushtlb.h |    6 +++---
 xen/include/asm-arm/arm64/flushtlb.h |    6 +++---
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 11f4714..9ab0378 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -374,7 +374,12 @@ static int create_p2m_entries(struct domain *d,
         }
 
         if ( flush )
-            flush_tlb_all_local();
+        {
+            if ( current->domain == d )
+                flush_tlb();
+            else
+                flush_tlb_all();
+        }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
         if ( op == RELINQUISH && count >= 0x2000 )
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index ab166f3..6ff6f75 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -23,12 +23,12 @@ static inline void flush_tlb(void)
     isb();
 }
 
-/* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all_local(void)
+/* Flush inner shareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
 {
     dsb();
 
-    WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
+    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
 
     dsb();
     isb();
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 9ce79a8..687eda1 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -23,12 +23,12 @@ static inline void flush_tlb(void)
         : : : "memory");
 }
 
-/* Flush local TLBs, all VMIDs, non-hypervisor mode */
-static inline void flush_tlb_all_local(void)
+/* Flush inner shareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
 {
     asm volatile(
         "dsb sy;"
-        "tlbi alle1;"
+        "tlbi alle1is;"
         "dsb sy;"
         "isb;"
         : : : "memory");
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 17:06:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wae-00070f-Nv; Wed, 08 Jan 2014 17:06:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0wad-00070H-24
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 17:06:47 +0000
Received: from [85.158.137.68:57777] by server-17.bemta-3.messagelabs.com id
	1C/66-15965-6A58DC25; Wed, 08 Jan 2014 17:06:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389200803!6817162!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12724 invoked from network); 8 Jan 2014 17:06:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:06:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90959100"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 17:06:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 12:06:42 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0waX-0004vt-VE;
	Wed, 08 Jan 2014 17:06:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W0waX-0007ef-Om;
	Wed, 08 Jan 2014 17:06:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21197.34209.628619.102215@mariner.uk.xensource.com>
Date: Wed, 8 Jan 2014 17:06:41 +0000
To: Rob Hoes <rob.hoes@citrix.com>
In-Reply-To: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
References: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3 3/3] libxl: ocaml: use
	'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH v3 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> This allows the application to pass a token to libxl in the fd/timeout
> registration callbacks, which it receives back in modification or
> deregistration callbacks.
> 
> It turns out that this is essential for timeout handling, in order to
> identify which timeout to change on a modify event.
...
> +	handles = malloc(sizeof(*handles));
> +	if (!handles) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;
> +	}
> +
> +	handles->for_libxl = for_libxl;
> +
>  	args[0] = *p;
>  	args[1] = sec;
>  	args[2] = usec;
> -	args[3] = (value) for_libxl;
> +	args[3] = (value) handles;
>  
> -	caml_callbackN(*func, 4, args);
> +	handles->for_app = malloc(sizeof(value));

This may seem like a daft question, but why does handles contain
a value* rather than just a value ?

Also, your error paths fail to free handles (or handles->for_app,
although I'm implicitly proposing that the latter be abolished).
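The leak pointed out above follows a common C pattern. Below is a minimal, self-contained sketch of a registration function that frees its allocation on every failure path, and that stores for_app inline rather than through a second malloc, as proposed. All names are hypothetical stand-ins; the real shim wraps OCaml `value` handles and calls into the OCaml runtime, which is modelled here by a simple failure flag.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the shim's types: the real code stores OCaml
 * `value` handles; here `value` is modelled as a plain integer type. */
typedef long value;
struct handles {
    void *for_libxl;
    value for_app;      /* stored inline, not via a second malloc */
};

/* Sketch of a registration that never leaks: every exit after the malloc
 * either hands ownership to the caller or frees the struct. The
 * `fail_callback` flag models caml_callbackN failing/raising. */
int register_timeout(void *for_libxl, int fail_callback,
                     struct handles **out)
{
    struct handles *h = malloc(sizeof(*h));
    if (!h)
        return -1;              /* ERROR_OSEVENT_REG_FAIL in the patch */

    h->for_libxl = for_libxl;
    h->for_app = 0;

    if (fail_callback) {
        free(h);                /* the error path must release h */
        return -1;
    }

    *out = h;                   /* success: caller owns h */
    return 0;
}
```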

> +	*for_app_registration_out = handles->for_app;

I think this is counterintuitive.  Why not pass handles to
for_app_registration_out ?  I know that your timeout_modify doesn't
need handles->for_libxl, but it would seem clearer to have your shim
proxy everything neatly: i.e., to always pass its own context
("handles") to both sides.

>  int timeout_modify(void *user, void **for_app_registration_update,
> @@ -1315,25 +1400,45 @@ int timeout_modify(void *user, void **for_app_registration_update,
>  {
>  	caml_leave_blocking_section();
>  	CAMLparam0();
> +	CAMLlocalN(args, 2);
> +	int ret = 0;
>  	static value *func = NULL;
>  	value *p = (value *) user;
> +	value *for_app = *for_app_registration_update;
...
> -	caml_callback(*func, *p);
> +	args[0] = *p;
> +	args[1] = *for_app;

I see that you're relying on the promise in modern libxl.h that abs is
always {0,0} meaning "right away".

You should probably assert that.
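Such an assertion could look like the sketch below, assuming the absolute time arrives as a struct timeval (as in libxl's osevent hooks); the helper name is illustrative only.

```c
#include <assert.h>
#include <sys/time.h>

/* libxl promises that the absolute time passed to timeout_modify is
 * always {0,0}, meaning "right away".  A shim relying on that promise
 * can make it explicit before calling into the application. */
int timeout_is_now(struct timeval abs)
{
    return abs.tv_sec == 0 && abs.tv_usec == 0;
}
```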

Also, perhaps, your OCaml function should have a different name.
"libxl_timeout_modify_now" or something.  We have avoided renaming it
in the C version to avoid API churn.  Of course you may prefer to keep
the two names identical, in which case presumably this will be
addressed in the documentation.  (Um.  What documentation.)  Anyway,
I wanted you to think about this question and explicitly choose to
give it a different name, or to avoid doing so :-).

Thanks,
Ian.


From xen-devel-bounces@lists.xen.org Wed Jan 08 17:15:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0wiv-0007ps-4M; Wed, 08 Jan 2014 17:15:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0wit-0007pn-2K
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 17:15:19 +0000
Received: from [85.158.139.211:50537] by server-16.bemta-5.messagelabs.com id
	55/6B-11843-6A78DC25; Wed, 08 Jan 2014 17:15:18 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389201316!5899522!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30139 invoked from network); 8 Jan 2014 17:15:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:15:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88810931"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 17:14:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 12:14:44 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0wiK-0004R9-Cw;
	Wed, 08 Jan 2014 17:14:44 +0000
Date: Wed, 8 Jan 2014 17:13:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014, Julien Grall wrote:
> The p2m is shared between the VCPUs of each domain. Currently Xen only flushes
> the TLB on the local PCPU. This can result in a mismatch between the mappings
> in the p2m and the TLBs on other PCPUs.
> 
> Flush TLBs used by this domain on every PCPU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

The fix makes sense to me.

> ---
> 
> This is a possible bug fix (found by reading the code) for Xen 4.4. I have
> added a small optimisation to avoid flushing all TLBs when a VCPU of this
> domain is running on the current CPU.
> 
> The downside of this patch is that the function can be a little slower because
> Xen now flushes more TLBs.

Yes, I wonder how much slower it is going to be, considering that the flush
is executed for every iteration of the loop.
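The cost structure behind this concern can be illustrated with a toy model: flushing inside the mapping loop pays one full flush per iteration, whereas deferring a single flush until after the loop (where that is safe for the guest) would bound the cost. The sketch below only counts simulated flushes; all names are illustrative and nothing here models the real p2m code.

```c
#include <assert.h>

/* Counts simulated TLB flushes so the two strategies can be compared. */
int flushes;

void flush_tlb_model(void)
{
    flushes++;
}

/* Per-iteration flush, as create_p2m_entries does in the patch. */
void map_entries_flush_each(int n)
{
    for (int i = 0; i < n; i++)
        flush_tlb_model();
}

/* One deferred flush after all entries are updated. */
void map_entries_flush_once(int n)
{
    for (int i = 0; i < n; i++)
        ;                       /* ... update p2m entries ... */
    flush_tlb_model();
}
```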


>  xen/arch/arm/p2m.c                   |    7 ++++++-
>  xen/include/asm-arm/arm32/flushtlb.h |    6 +++---
>  xen/include/asm-arm/arm64/flushtlb.h |    6 +++---
>  3 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 11f4714..9ab0378 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -374,7 +374,12 @@ static int create_p2m_entries(struct domain *d,
>          }
>  
>          if ( flush )
> -            flush_tlb_all_local();
> +        {
> +            if ( current->domain == d )
> +                flush_tlb();
> +            else
> +                flush_tlb_all();
> +        }
>  
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
>          if ( op == RELINQUISH && count >= 0x2000 )
> diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
> index ab166f3..6ff6f75 100644
> --- a/xen/include/asm-arm/arm32/flushtlb.h
> +++ b/xen/include/asm-arm/arm32/flushtlb.h
> @@ -23,12 +23,12 @@ static inline void flush_tlb(void)
>      isb();
>  }
>  
> -/* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all_local(void)
> +/* Flush inner shareable TLBs, all VMIDs, non-hypervisor mode */
> +static inline void flush_tlb_all(void)
>  {
>      dsb();
>  
> -    WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
> +    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
>  
>      dsb();
>      isb();
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
> index 9ce79a8..687eda1 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -23,12 +23,12 @@ static inline void flush_tlb(void)
>          : : : "memory");
>  }
>  
> -/* Flush local TLBs, all VMIDs, non-hypervisor mode */
> -static inline void flush_tlb_all_local(void)
> +/* Flush inner shareable TLBs, all VMIDs, non-hypervisor mode */
> +static inline void flush_tlb_all(void)
>  {
>      asm volatile(
>          "dsb sy;"
> -        "tlbi alle1;"
> +        "tlbi alle1is;"
>          "dsb sy;"
>          "isb;"
>          : : : "memory");
> -- 
> 1.7.10.4
> 


From xen-devel-bounces@lists.xen.org Wed Jan 08 17:44:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xB6-00017T-MU; Wed, 08 Jan 2014 17:44:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0xB5-00017O-Vy
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 17:44:28 +0000
Received: from [85.158.137.68:29271] by server-12.bemta-3.messagelabs.com id
	6E/87-20055-B7E8DC25; Wed, 08 Jan 2014 17:44:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389203065!8042329!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32727 invoked from network); 8 Jan 2014 17:44:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:44:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90973220"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 17:44:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	12:44:23 -0500
Message-ID: <1389203062.27473.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 8 Jan 2014 17:44:22 +0000
In-Reply-To: <20140108170442.GA75747@deinos.phlegethon.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > here?
> > > 
> > > This was from Mukesh Rathor:
> > > 
> > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > 
> > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > to change any way you want.
> > 
> > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > mark it __read_mostly and perhaps __used too (since it's static it might
> > otherwise get eliminated).
> > 
> > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > even const (given that I can't see anything which writes it I suppose
> > > > this is a compile time setting?)
> > > 
> > > That has been how I have been testing it so far (changing the source
> > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > sure how const may effect this.
> 
> If the idea is to use kdb itself to frob the value, then it does need
> something to stop the compiler caching it.  This might even be one of
> the few cases where 'volatile' actually DTRT; it would still be more
> in keeping with Xen style to use an explicit read op (like
> atomic_read()) where the value is consumed.

Is there any need to be asynchronously frobbing this value in the middle
of a function within this file and expecting it to be reliable? I'd have
thought that changing the value and having it take effect on the next
debug event/hypercall/whatever would be what was wanted.
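The "explicit read op" style Tim mentions can be sketched as follows. ACCESS_ONCE here mirrors the Linux/Xen macro of that name (it forces a fresh load through a volatile-qualified pointer at the point of use, rather than volatile-qualifying the variable itself); the variable and function names are illustrative.

```c
#include <assert.h>

/* Force a single, non-cached read of x at the point of consumption.
 * This mirrors the classic Linux/Xen ACCESS_ONCE macro. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* A flag frobbed asynchronously, e.g. from a debugger. */
int dbg_log_enabled;

/* The value is re-read from memory on every call, even if the compiler
 * would otherwise cache it in a register. */
int should_log(void)
{
    return ACCESS_ONCE(dbg_log_enabled);
}
```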

Ian.



Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > here?
> > > 
> > > This was from Mukesh Rathor:
> > > 
> > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > 
> > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > to change any way you want.
> > 
> > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > mark it __read_mostly and perhaps __used too (since it's static it might
> > otherwise get eliminated).
> > 
> > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > even const (given that I can't see anything which writes it I suppose
> > > > this is a compile time setting?)
> > > 
> > > That has been how I have been testing it so far (changing the source
> > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > sure how const may affect this.
> 
> If the idea is to use kdb itself to frob the value, then it does need
> something to stop the compiler caching it.  This might even be one of
> the few cases where 'volatile' actually DTRT; it would still be more
> in keeping with Xen style to use an explicit read op (like
> atomic_read()) where the value is consumed.

Is there any need to be asynchronously frobbing this value in the middle
of a function within this file and expecting it to be reliable? I'd have
thought that changing the value and having it take effect on the next
debug event/hypercall/whatever would be what was wanted.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 17:46:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xCg-0001CJ-8b; Wed, 08 Jan 2014 17:46:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0xCf-0001CC-1w
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 17:46:05 +0000
Received: from [193.109.254.147:36316] by server-15.bemta-14.messagelabs.com
	id E4/12-22186-CDE8DC25; Wed, 08 Jan 2014 17:46:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389203162!7322298!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14298 invoked from network); 8 Jan 2014 17:46:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:46:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90973809"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 17:46:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	12:46:01 -0500
Message-ID: <1389203159.27473.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 8 Jan 2014 17:45:59 +0000
In-Reply-To: <1389102894.12612.17.camel@kazak.uk.xensource.com>
References: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
	<1389102894.12612.17.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, tim@xen.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v2] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 13:54 +0000, Ian Campbell wrote:
> Applied.

Except I seem to have failed to actually do it... I've put this back in
my queue and will pick it up on my next attempt...

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 17:51:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xHw-0001uI-AR; Wed, 08 Jan 2014 17:51:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W0xHv-0001uD-7A
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 17:51:31 +0000
Received: from [85.158.143.35:8150] by server-2.bemta-4.messagelabs.com id
	0D/15-11386-2209DC25; Wed, 08 Jan 2014 17:51:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389203488!10439108!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16911 invoked from network); 8 Jan 2014 17:51:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:51:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90975949"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 17:51:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	12:51:27 -0500
Message-ID: <1389203486.27473.11.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Date: Wed, 8 Jan 2014 17:51:26 +0000
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D450762FED@G4W3221.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com> ,
	<3B22ECA2D19A3D408C83F4F15A9CB7D45076233C@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE57D@G2W2433.americas.hpqcorp.net>
	,
	<3B22ECA2D19A3D408C83F4F15A9CB7D450762410@G4W3221.americas.hpqcorp.net>
	<DF407A7CCC374747A5BE5B7739FDEB181EFEE61D@G2W2433.americas.hpqcorp.net>
	<2559B97E2BCECA4BA2AF63DA8CF5C7200D095CD7@G1W3640.americas.hpqcorp.net>
	<3B22ECA2D19A3D408C83F4F15A9CB7D450762FED@G4W3221.americas.hpqcorp.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 16:58 +0000, Zhang, Eniac wrote:
> None of the things you mentioned addresses my needs, where I wanted to
> pass part of the SMBIOS table to the guest and thus make it look more
> like the physical host we are running on.

I thought that was what the code to let various bits of the tables be set
from xenstore was for -- to allow the tools to propagate bits of host
smbios tables. Is it just not covering some specific value you are
interested in?

>  I actually made it work with some extra code.  The change is small but
> I would like to have our corporate legal staff review it before releasing
> that to the xen community.

Understood.

> On top of that, I still doubt that anyone else would be interested in
> this except myself.  After all, the essence of virtualization is to
> abstract physical systems rather than trying to make the guest look
> similar to the host.

As I say, some of the existing code is there to serve this use case; I
don't think it is that weird, e.g. on the client end in particular.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 17:57:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 17:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xO1-00023X-Cw; Wed, 08 Jan 2014 17:57:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0xO0-00023S-1E
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 17:57:48 +0000
Received: from [85.158.137.68:51177] by server-10.bemta-3.messagelabs.com id
	C1/E8-23989-B919DC25; Wed, 08 Jan 2014 17:57:47 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389203865!7988262!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6036 invoked from network); 8 Jan 2014 17:57:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 17:57:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90978780"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 17:57:44 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	12:57:44 -0500
Message-ID: <52CD9197.50305@citrix.com>
Date: Wed, 8 Jan 2014 17:57:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389196601-12219-1-git-send-email-david.vrabel@citrix.com>
	<1389196601-12219-3-git-send-email-david.vrabel@citrix.com>
	<52CD8D1A0200007800111981@nat28.tlf.novell.com>
In-Reply-To: <52CD8D1A0200007800111981@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86: map portion of kexec crash area
 that is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/01/14 16:38, Jan Beulich wrote:
>>>> On 08.01.14 at 16:56, David Vrabel <david.vrabel@citrix.com> wrote:
>> +    if ( kexec_crash_area.size )
> 
> Wouldn't this better also include a kexec_crash_area.start range
> check?

It's a "if there is a crash area" check.  It seems fine as-is to me.

>> +    {
>> +        unsigned long s = PFN_DOWN(kexec_crash_area.start);
>> +        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
>> +                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
>> +
>> +        map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
>> +                         s, e - s, PAGE_HYPERVISOR);
> 
> map_pages_to_xen() doesn't tolerate a huge count resulting when
> e < s (which is possible due to the min() above).

Yes, you're right.  This needs to be:

   if ( e > s )
       map_pages_to_xen(...)

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:10:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xa4-0003K8-Hi; Wed, 08 Jan 2014 18:10:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W0xa2-0003K3-EY
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:10:14 +0000
Received: from [193.109.254.147:41638] by server-11.bemta-14.messagelabs.com
	id EC/69-20576-5849DC25; Wed, 08 Jan 2014 18:10:13 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389204612!9586179!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19861 invoked from network); 8 Jan 2014 18:10:12 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 18:10:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W0xZv-0001oU-M1; Wed, 08 Jan 2014 18:10:07 +0000
Date: Wed, 8 Jan 2014 19:10:07 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108181007.GB75747@deinos.phlegethon.org>
From xen-devel-bounces@lists.xen.org Wed Jan 08 18:10:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xa4-0003K8-Hi; Wed, 08 Jan 2014 18:10:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W0xa2-0003K3-EY
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:10:14 +0000
Received: from [193.109.254.147:41638] by server-11.bemta-14.messagelabs.com
	id EC/69-20576-5849DC25; Wed, 08 Jan 2014 18:10:13 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389204612!9586179!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19861 invoked from network); 8 Jan 2014 18:10:12 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 18:10:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W0xZv-0001oU-M1; Wed, 08 Jan 2014 18:10:07 +0000
Date: Wed, 8 Jan 2014 19:10:07 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108181007.GB75747@deinos.phlegethon.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389203062.27473.8.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:44 +0000 on 08 Jan (1389199462), Ian Campbell wrote:
> On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > > here?
> > > > 
> > > > This was from Mukesh Rathor:
> > > > 
> > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > 
> > > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > > to change any way you want.
> > > 
> > > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > > mark it __read_mostly and perhaps __used too (since it's static it might
> > > otherwise get eliminated).
> > > 
> > > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > > even const (given that I can't see anything which writes it I suppose
> > > > > this is a compile time setting?)
> > > > 
> > > > That has been how I have been testing it so far (changing the source
> > > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > > sure how const may affect this.
> > 
> > If the idea is to use kdb itself to frob the value, then it does need
> > something to stop the compiler caching it.  This might even be one of
> > the few cases where 'volatile' actually DTRT; it would still be more
> > in keeping with Xen style to use an explicit read op (like
> > atomic_read()) where the value is consumed.
> 
> Is there any need to be asynchronously frobbing this value in the middle
> of a function within this file and expecting it to be reliable? I'd have
> thought that changing the value and having it take effect on the next
> debug event/hypercall/whatever would be what was wanted.

The variable is static and there's nothing in the file that updates
it, so the compiler might drop it entirely.  Maybe __used__ would be
good enough to stop the compiler dropping all reads, but I'm not sure.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xoc-0004fo-JL; Wed, 08 Jan 2014 18:25:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W0xoa-0004fL-RQ
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:25:17 +0000
Received: from [85.158.139.211:4304] by server-5.bemta-5.messagelabs.com id
	EB/69-14928-C089DC25; Wed, 08 Jan 2014 18:25:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389205513!8618266!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21831 invoked from network); 8 Jan 2014 18:25:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:25:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90989673"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:25:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 13:25:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W0xoU-0005Jr-UI;
	Wed, 08 Jan 2014 18:25:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W0xoU-00038u-R8;
	Wed, 08 Jan 2014 18:25:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24298-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 18:25:10 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24298: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24298 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24298/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24295
 test-armhf-armhf-xl           3 host-install(3)  broken in 24295 pass in 24298

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24250
 test-armhf-armhf-xl         4 capture-logs(4) broken in 24295 blocked in 24250
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24295 like 24250

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24295 never pass

version targeted for testing:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb
baseline version:
 xen                  9a80d5056766535ac624774b96495f8b97b1d28b

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
  Tsahee Zidenberg <tsahee@gmx.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=cedfdd43a9798e535a05690bb6f01394490d26bb
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable cedfdd43a9798e535a05690bb6f01394490d26bb
+ branch=xen-unstable
+ revision=cedfdd43a9798e535a05690bb6f01394490d26bb
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git cedfdd43a9798e535a05690bb6f01394490d26bb:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   9a80d50..cedfdd4  cedfdd43a9798e535a05690bb6f01394490d26bb -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:30:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xtI-0005Ay-8G; Wed, 08 Jan 2014 18:30:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W0xtB-0005As-FF
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:30:02 +0000
Received: from [85.158.139.211:28048] by server-6.bemta-5.messagelabs.com id
	8B/D5-16310-8299DC25; Wed, 08 Jan 2014 18:30:00 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389205798!8585410!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25580 invoked from network); 8 Jan 2014 18:29:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:29:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90991232"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:29:57 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Wed, 8 Jan 2014 13:29:57 -0500
From: Ross Philipson <Ross.Philipson@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, "Zhang, Eniac"
	<eniac-xw.zhang@hp.com>
Thread-Topic: [Xen-devel] passing smbios table from qemu
Thread-Index: Ac8LIYIALIWytgb1QG2L+qIOhaKtGQAuJokAADvH4fA=
Date: Wed, 8 Jan 2014 18:29:56 +0000
Message-ID: <92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1389102962.12612.19.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.2.30]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of Ian Campbell
> Sent: Tuesday, January 07, 2014 8:56 AM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] passing smbios table from qemu
> 
> On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:
> 
> > Question: am I missing anything, or is this feature (passing smbios)
> > still work in progress?
> 
> Under Xen, SMBIOS tables are supplied via hvmloader, not via qemu.
> 
> What tables and/or values do you want to override/supply?
> 
> I believe that libxc supports passing in extra smbios tables when
> building the guest (via struct xc_hvm_build_args.smbios_module) but
> nothing has been plumbed in to make use of this.
> 
> I'm not aware of any ongoing work to plumb that stuff further up, e.g.
> to libxl and xl or other toolstacks. (I think the libxc functionality is
> only consumed by the XenClient toolstack).

Just FYI, I did go back and add the support (and docs) for it in
libxl. I did this after the first set of patches went in per someone's
request (can't recall who it was at the moment).

> 
> There is also some support in hvmloader for setting certain SMBIOS
> parameters via xenstore keys. See the various HVM_XS_* in
> tools/firmware/hvmloader/smbios.c. It includes things like the system
> manufacturer, chassis number etc.
> 
> Do either of those cover your use case? Are you interested in plumbing
> them up?
> 
> Ian.
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
> -----
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2014.0.4259 / Virus Database: 3658/6979 - Release Date:
> 01/06/14

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:34:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xxJ-0005be-VD; Wed, 08 Jan 2014 18:34:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0xxI-0005bW-1h
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:34:16 +0000
Received: from [85.158.139.211:54127] by server-7.bemta-5.messagelabs.com id
	3A/2D-04824-72A9DC25; Wed, 08 Jan 2014 18:34:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389206053!8586720!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10413 invoked from network); 8 Jan 2014 18:34:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:34:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90993442"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:34:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:34:12 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0xxE-0005b9-0h;
	Wed, 08 Jan 2014 18:34:12 +0000
Date: Wed, 8 Jan 2014 18:34:12 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108183411.GA13867@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389199900.27473.3.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > With xm it was possible to have a global keymap="de" to map the physical
> > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > xl create -d shows keymap:NULL in the vfb part.
> > Only moving keymap= into vfb=[] fixes it for me.
> > 
> > xl.cfg(5) indicates that keymap= can be specified as a global option (just
> > like vnc=) as well as a suboption of vfb=[].
> > 
> > Was this already fixed in xen-unstable? git log shows no keymap-related
> > changes.
> 
> I don't think Wei covered this one with his VNC patches. It does sound
> like it should be moved though, yes. I think this is 4.5 material at this
> point.
> 

You're right, my patch didn't cover that aspect because I tried hard to
make it minimal.

> Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
> Sorry for not thinking of that during review.
> 

I don't think so. All VNC / VFB options are already documented.

Wei.

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
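The workaround Olaf describes can be written as an xl.cfg fragment like the following (the domain's other settings are omitted; the German keymap is just the value from his report):

```
# With xl in 4.3 a top-level keymap= is ignored; specifying it as a
# suboption inside vfb=[] is what takes effect:
vfb = [ 'vnc=1,keymap=de' ]
```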

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:35:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0xyb-0005iG-Fd; Wed, 08 Jan 2014 18:35:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W0xyY-0005i1-TK
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:35:35 +0000
Received: from [85.158.137.68:26201] by server-10.bemta-3.messagelabs.com id
	18/EA-23989-67A9DC25; Wed, 08 Jan 2014 18:35:34 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389206132!7214408!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5574 invoked from network); 8 Jan 2014 18:35:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:35:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90994023"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:35:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:35:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W0xyV-0005cB-2J;
	Wed, 08 Jan 2014 18:35:31 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 8 Jan 2014 18:35:19 +0000
Message-ID: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Daniel Kiper <daniel.kiper@oracle.com>, Don Slutz <dslutz@verizon.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that is
	within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
crash kernel area) causes fatal page faults when loading a crash
image.  The attempt to zero the first control page allocated from the
crash region will fault, as the VA returned by map_domain_page() has no
mapping.

The fault will occur on non-debug builds of Xen when the crash area is
below 5 TiB (which will be most systems).

The assumption that the crash area mapping was not used is incorrect.
map_domain_page() is used when loading an image and building the
image's page tables to temporarily map the crash area, thus the
mapping is required if the crash area is in the direct map area.

Reintroduce the mapping, but only the portions of the crash area that
are within the direct map area.

Reported-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>
---
This fixes a Xen crash, so it is an important fix for the 4.4 release.

Changes in v2:
- merge patches into one
- add check for e > s before mapping
---
 xen/arch/x86/setup.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 4833ca3..b49256d 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1098,6 +1098,17 @@ void __init __start_xen(unsigned long mbi_p)
                          PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
     }
 
+    if ( kexec_crash_area.size )
+    {
+        unsigned long s = PFN_DOWN(kexec_crash_area.start);
+        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
+                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
+
+        if ( e > s ) 
+            map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
+                             s, e - s, PAGE_HYPERVISOR);
+    }
+
     xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
                    ~((1UL << L2_PAGETABLE_SHIFT) - 1);
     destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0y6s-0006Tv-OM; Wed, 08 Jan 2014 18:44:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1W0y6r-0006Tn-Kw; Wed, 08 Jan 2014 18:44:09 +0000
Received: from [85.158.137.68:47619] by server-5.bemta-3.messagelabs.com id
	66/F4-25188-87C9DC25; Wed, 08 Jan 2014 18:44:08 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389206646!7959097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4651 invoked from network); 8 Jan 2014 18:44:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:44:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88844843"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:44:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:44:05 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0y6n-0005lx-7r;
	Wed, 08 Jan 2014 18:44:05 +0000
Date: Wed, 8 Jan 2014 18:44:05 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: topperxin <topperxin@126.com>
Message-ID: <20140108184405.GB13867@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xensource.com>, wei.liu2@citrix.com,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
> Hi list
>         As we all know, SR-IOV can improve a VNIC's
>         performance, but it does not support live migration. I
>         recently learned that on a KVM+VirtIO platform, live
>         migration can succeed if MacVtap is used together with
>         SR-IOV. What I want to know is: can I configure MacVtap
>         on Xen? 
>        Any replies are welcome! Thanks a lot.

AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
the feature you want. That would mean you would also need to use the
VirtIO network driver in Xen's HVM domain, if you manage to configure
MacVtap for Xen.

Basically that means a configuration that nobody has ever tried. Good luck.
:-)

Wei.


> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:44:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0y7b-0006ZV-P1; Wed, 08 Jan 2014 18:44:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W0y7a-0006ZM-NL
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 18:44:54 +0000
Received: from [85.158.137.68:49539] by server-14.bemta-3.messagelabs.com id
	FF/66-06105-5AC9DC25; Wed, 08 Jan 2014 18:44:53 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389206691!7989582!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3683 invoked from network); 8 Jan 2014 18:44:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 18:44:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08IinTH029639
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 18:44:50 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s08Iimrr005305
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 18:44:49 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s08IimMw005292; Wed, 8 Jan 2014 18:44:48 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 10:44:47 -0800
Date: Wed, 8 Jan 2014 19:44:32 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140108184432.GA3633@olila.local.net-space.pl>
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:35:19PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
> crash kernel area) causes fatal page faults when loading a crash
> image.  The attempt to zero the first control page allocated from the
> crash region will fault, as the VA returned by map_domain_page() has no
> mapping.
>
> The fault will occur on non-debug builds of Xen when the crash area is
> below 5 TiB (which will be most systems).
>
> The assumption that the crash area mapping was not used is incorrect.
> map_domain_page() is used when loading an image and building the
> image's page tables to temporarily map the crash area, thus the
> mapping is required if the crash area is in the direct map area.
>
> Reintroduce the mapping, but only the portions of the crash area that
> are within the direct map area.
>
> Reported-by: Don Slutz <dslutz@verizon.com>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Daniel Kiper <daniel.kiper@oracle.com>
> ---
> This fixes a Xen crash, so it is an important fix for the 4.4 release.

Thanks. It looks quite good to me, but I would like to run some tests.
I will send you the results by the end of this week.

Daniel
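
The clamping logic the quoted commit message describes can be sketched as
follows. This is a hypothetical user-space illustration, not Xen's actual
code: `DIRECTMAP_END` and `mappable_bytes` are made-up names, with the
5 TiB boundary taken from the commit message above.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the top of the direct map area (5 TiB,
 * per the commit message); not Xen's real symbol. */
#define DIRECTMAP_END (5ULL << 40)

/* Return how many bytes of the crash area [start, start + size) fall
 * inside the direct map area and therefore need a mapping reintroduced. */
static uint64_t mappable_bytes(uint64_t start, uint64_t size)
{
    if (start >= DIRECTMAP_END)
        return 0;                     /* entirely above: nothing to map */
    if (start + size <= DIRECTMAP_END)
        return size;                  /* entirely below: map all of it */
    return DIRECTMAP_END - start;     /* straddles: map the lower part */
}
```

The straddling case is why the patch maps only "portions" of the crash
area: a crash region placed across the boundary gets a mapping for its
lower part only.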

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:49:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yCB-0007O6-Dd; Wed, 08 Jan 2014 18:49:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yCA-0007Lq-4a
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:49:38 +0000
Received: from [193.109.254.147:44723] by server-1.bemta-14.messagelabs.com id
	97/AE-15600-1CD9DC25; Wed, 08 Jan 2014 18:49:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389206975!7395712!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9076 invoked from network); 8 Jan 2014 18:49:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:49:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846571"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:49:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:49:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yC6-0005rC-Fh;
	Wed, 08 Jan 2014 18:49:34 +0000
Date: Wed, 8 Jan 2014 18:48:42 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Olof Johansson <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v8 0/6] xen/arm/arm64: CONFIG_PARAVIRT and
 stolen ticks accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series introduces stolen ticks accounting for Xen on ARM and
ARM64.
Stolen ticks are clocksource ticks that have been "stolen" from the cpu,
typically because Linux is running in a virtual machine and the vcpu has
been descheduled.
To account for these ticks we introduce CONFIG_PARAVIRT and pv_time_ops
so that we can make use of:

kernel/sched/cputime.c:steal_account_process_tick
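
The wiring the series introduces can be sketched in user space as below.
This is a simplified, hypothetical analogue (not kernel code): the
scheduler asks a hypervisor-specific `steal_clock` hook for cumulative
stolen time and accounts only the delta since the previous tick; names
such as `demo_steal_clock` and `account_steal_tick` are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Hook table analogous to pv_time_ops: the hypervisor backend fills in
 * steal_clock, which reports cumulative stolen nanoseconds for a cpu. */
struct pv_time_ops {
    uint64_t (*steal_clock)(int cpu);
};

static uint64_t fake_stolen_ns;       /* stands in for Xen runstate data */

static uint64_t demo_steal_clock(int cpu)
{
    (void)cpu;
    return fake_stolen_ns;
}

static struct pv_time_ops pv_time_ops = { .steal_clock = demo_steal_clock };
static uint64_t steal_accounted;      /* per-cpu in the real kernel */

/* Simplified analogue of steal_account_process_tick(): account the
 * stolen time newly observed since the last call. */
static uint64_t account_steal_tick(int cpu)
{
    uint64_t now = pv_time_ops.steal_clock(cpu);
    uint64_t delta = now - steal_accounted;

    steal_accounted = now;
    return delta;
}
```

Because the hook reports a cumulative counter, the accounting side only
ever charges the difference between successive readings.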


Changes in v8:
- rebased on 3.13-rc6.



Stefano Stabellini (6):
      xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
      kernel: missing include in cputime.c
      arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
      arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
      core: remove ifdef CONFIG_PARAVIRT
      xen/arm: account for stolen ticks

 arch/arm/Kconfig           |   20 ++++++++++
 arch/arm/kernel/Makefile   |    1 +
 arch/arm/xen/enlighten.c   |   21 ++++++++++
 arch/arm64/Kconfig         |   20 ++++++++++
 arch/arm64/kernel/Makefile |    1 +
 arch/ia64/xen/time.c       |   48 +++--------------------
 arch/x86/xen/time.c        |   76 +-----------------------------------
 drivers/xen/Makefile       |    2 +-
 drivers/xen/time.c         |   91 ++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h      |    5 +++
 kernel/sched/core.c        |    2 -
 kernel/sched/cputime.c     |    1 +
 12 files changed, 168 insertions(+), 120 deletions(-)
 create mode 100644 drivers/xen/time.c

git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_8


Cheers,

Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDW-0007dB-6h; Wed, 08 Jan 2014 18:51:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDU-0007cX-8l
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:00 +0000
Received: from [193.109.254.147:60268] by server-4.bemta-14.messagelabs.com id
	08/C0-03916-31E9DC25; Wed, 08 Jan 2014 18:50:59 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389207057!7331345!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8163 invoked from network); 8 Jan 2014 18:50:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:50:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90999329"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-9b;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:58 +0000
Message-ID: <1389206998-27875-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v8 6/6] xen/arm: account for stolen ticks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Register the runstate_memory_area with the hypervisor.
Use pv_time_ops.steal_clock to account for stolen ticks.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


---

Changes in v4:
- don't use paravirt_steal_rq_enabled: we do not support retrieving
stolen ticks for vcpus other than the one we are running on.

Changes in v3:
- use BUG_ON and smp_processor_id.
---
 arch/arm/xen/enlighten.c |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..fa8bdc0 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -14,7 +14,10 @@
 #include <xen/xen-ops.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
+#include <asm/arch_timer.h>
 #include <asm/system_misc.h>
+#include <asm/paravirt.h>
+#include <linux/jump_label.h>
 #include <linux/interrupt.h>
 #include <linux/irqreturn.h>
 #include <linux/module.h>
@@ -154,6 +157,19 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
+unsigned long long xen_stolen_accounting(int cpu)
+{
+	struct vcpu_runstate_info state;
+
+	BUG_ON(cpu != smp_processor_id());
+
+	xen_get_runstate_snapshot(&state);
+
+	WARN_ON(state.state != RUNSTATE_running);
+
+	return state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];
+}
+
 static void __init xen_percpu_init(void *unused)
 {
 	struct vcpu_register_vcpu_info info;
@@ -171,6 +187,8 @@ static void __init xen_percpu_init(void *unused)
 	BUG_ON(err);
 	per_cpu(xen_vcpu, cpu) = vcpup;
 
+	xen_setup_runstate_info(cpu);
+
 	enable_percpu_irq(xen_events_irq, 0);
 	put_cpu();
 }
@@ -313,6 +331,9 @@ static int __init xen_init_events(void)
 
 	on_each_cpu(xen_percpu_init, NULL, 0);
 
+	pv_time_ops.steal_clock = xen_stolen_accounting;
+	static_key_slow_inc(&paravirt_steal_enabled);
+
 	return 0;
 }
 postcore_initcall(xen_init_events);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Register the runstate_memory_area with the hypervisor.
Use pv_time_ops.steal_clock to account for stolen ticks.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


---

Changes in v4:
- don't use paravirt_steal_rq_enabled: we do not support retrieving
stolen ticks for vcpus other than the one we are running on.

Changes in v3:
- use BUG_ON and smp_processor_id.
---
 arch/arm/xen/enlighten.c |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..fa8bdc0 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -14,7 +14,10 @@
 #include <xen/xen-ops.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
+#include <asm/arch_timer.h>
 #include <asm/system_misc.h>
+#include <asm/paravirt.h>
+#include <linux/jump_label.h>
 #include <linux/interrupt.h>
 #include <linux/irqreturn.h>
 #include <linux/module.h>
@@ -154,6 +157,19 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
+unsigned long long xen_stolen_accounting(int cpu)
+{
+	struct vcpu_runstate_info state;
+
+	BUG_ON(cpu != smp_processor_id());
+
+	xen_get_runstate_snapshot(&state);
+
+	WARN_ON(state.state != RUNSTATE_running);
+
+	return state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];
+}
+
 static void __init xen_percpu_init(void *unused)
 {
 	struct vcpu_register_vcpu_info info;
@@ -171,6 +187,8 @@ static void __init xen_percpu_init(void *unused)
 	BUG_ON(err);
 	per_cpu(xen_vcpu, cpu) = vcpup;
 
+	xen_setup_runstate_info(cpu);
+
 	enable_percpu_irq(xen_events_irq, 0);
 	put_cpu();
 }
@@ -313,6 +331,9 @@ static int __init xen_init_events(void)
 
 	on_each_cpu(xen_percpu_init, NULL, 0);
 
+	pv_time_ops.steal_clock = xen_stolen_accounting;
+	static_key_slow_inc(&paravirt_steal_enabled);
+
 	return 0;
 }
 postcore_initcall(xen_init_events);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDW-0007dK-NC; Wed, 08 Jan 2014 18:51:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDU-0007cY-Er
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:00 +0000
Received: from [85.158.143.35:64198] by server-2.bemta-4.messagelabs.com id
	FE/63-11386-31E9DC25; Wed, 08 Jan 2014 18:50:59 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389207057!10508606!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15644 invoked from network); 8 Jan 2014 18:50:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:50:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846947"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-54;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:55 +0000
Message-ID: <1389206998-27875-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	nico@linaro.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org, cov@codeaurora.org
Subject: [Xen-devel] [PATCH v8 3/6] arm: introduce CONFIG_PARAVIRT,
	PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.

The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching is needed.

This allows us to make use of steal_account_process_tick for stolen
ticks accounting.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Christopher Covington <cov@codeaurora.org>
CC: linux@arm.linux.org.uk
CC: will.deacon@arm.com
CC: nico@linaro.org
CC: marc.zyngier@arm.com
CC: cov@codeaurora.org
CC: arnd@arndb.de
CC: olof@lixom.net

---

Changes in v7:
- ifdef CONFIG_PARAVIRT the content of paravirt.h.

Changes in v3:
- improve commit description and Kconfig help text;
- no need to initialize pv_time_ops;
- add PARAVIRT_TIME_ACCOUNTING.
---
 arch/arm/Kconfig         |   20 ++++++++++++++++++++
 arch/arm/kernel/Makefile |    1 +
 2 files changed, 21 insertions(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..d6c3ba1 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1874,6 +1874,25 @@ config SWIOTLB
 config IOMMU_HELPER
 	def_bool SWIOTLB
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	---help---
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.
+
+config PARAVIRT_TIME_ACCOUNTING
+	bool "Paravirtual steal time accounting"
+	select PARAVIRT
+	default n
+	---help---
+	  Select this option to enable fine granularity task steal time
+	  accounting. Time spent executing other tasks in parallel with
+	  the current vCPU is discounted from the vCPU power. To account for
+	  that, there can be a small performance impact.
+
+	  If in doubt, say N here.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -1885,6 +1904,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select PARAVIRT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index a30fc9b..bcd2b38 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
 ifneq ($(CONFIG_ARCH_EBSA110),y)
   obj-y		+= io.o
 endif
+obj-$(CONFIG_PARAVIRT)	+= paravirt.o
 
 head-y			:= head$(MMUEXT).o
 obj-$(CONFIG_DEBUG_LL)	+= debug.o
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDV-0007cp-Ga; Wed, 08 Jan 2014 18:51:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDT-0007cS-FI
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:50:59 +0000
Received: from [85.158.143.35:64163] by server-3.bemta-4.messagelabs.com id
	9E/DC-32360-21E9DC25; Wed, 08 Jan 2014 18:50:58 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389207057!10508606!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15599 invoked from network); 8 Jan 2014 18:50:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:50:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846946"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-4D;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:54 +0000
Message-ID: <1389206998-27875-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	peterz@infradead.org, mingo@redhat.com, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v8 2/6] kernel: missing include in cputime.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

steal_account_process_tick calls paravirt_steal_clock, but paravirt.h is
currently missing among the included header files.
Include <asm/paravirt.h>.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: mingo@redhat.com
CC: peterz@infradead.org

---

Changes in v7:
- remove ifdef CONFIG_PARAVIRT (the ifdef has been moved inside the
  paravirt headers).

Changes in v5:
- add ifdef CONFIG_PARAVIRT.
---
 kernel/sched/cputime.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 9994791..9f9b76a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -5,6 +5,7 @@
 #include <linux/static_key.h>
 #include <linux/context_tracking.h>
 #include "sched.h"
+#include <asm/paravirt.h>
 
 
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDX-0007dm-6V; Wed, 08 Jan 2014 18:51:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDV-0007cl-BO
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:01 +0000
Received: from [85.158.143.35:44577] by server-2.bemta-4.messagelabs.com id
	A0/73-11386-41E9DC25; Wed, 08 Jan 2014 18:51:00 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389207057!10508606!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15682 invoked from network); 8 Jan 2014 18:50:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:50:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846948"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-3P;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:53 +0000
Message-ID: <1389206998-27875-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v8 1/6] xen: move xen_setup_runstate_info and
	get_runstate_snapshot to drivers/xen/time.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: konrad.wilk@oracle.com

---

Changes in v2:
- leave do_stolen_accounting in arch/x86/xen/time.c;
- use the new common functions in arch/ia64/xen/time.c.
---
 arch/ia64/xen/time.c  |   48 ++++----------------------
 arch/x86/xen/time.c   |   76 +----------------------------------------
 drivers/xen/Makefile  |    2 +-
 drivers/xen/time.c    |   91 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |    5 +++
 5 files changed, 104 insertions(+), 118 deletions(-)
 create mode 100644 drivers/xen/time.c

diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index 1f8244a..79a0b8c 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -34,53 +34,17 @@
 
 #include "../kernel/fsyscall_gtod_data.h"
 
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 static DEFINE_PER_CPU(unsigned long, xen_stolen_time);
 static DEFINE_PER_CPU(unsigned long, xen_blocked_time);
 
 /* taken from i386/kernel/time-xen.c */
 static void xen_init_missing_ticks_accounting(int cpu)
 {
-	struct vcpu_register_runstate_memory_area area;
-	struct vcpu_runstate_info *runstate = &per_cpu(xen_runstate, cpu);
-	int rc;
+	xen_setup_runstate_info(&runstate);
 
-	memset(runstate, 0, sizeof(*runstate));
-
-	area.addr.v = runstate;
-	rc = HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area, cpu,
-				&area);
-	WARN_ON(rc && rc != -ENOSYS);
-
-	per_cpu(xen_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
-	per_cpu(xen_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
-					    + runstate->time[RUNSTATE_offline];
-}
-
-/*
- * Runstate accounting
- */
-/* stolen from arch/x86/xen/time.c */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = state->state_entry_time;
-		rmb();
-		*res = *state;
-		rmb();
-	} while (state->state_entry_time != state_time);
+	per_cpu(xen_blocked_time, cpu) = runstate.time[RUNSTATE_blocked];
+	per_cpu(xen_stolen_time, cpu) = runstate.time[RUNSTATE_runnable]
+					    + runstate.time[RUNSTATE_offline];
 }
 
 #define NS_PER_TICK (1000000000LL/HZ)
@@ -94,7 +58,7 @@ consider_steal_time(unsigned long new_itm)
 	struct vcpu_runstate_info runstate;
 	struct task_struct *p = current;
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	/*
 	 * Check for vcpu migration effect
@@ -202,7 +166,7 @@ static unsigned long long xen_sched_clock(void)
 	 */
 	now = ia64_native_sched_clock();
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	WARN_ON(runstate.state != RUNSTATE_running);
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 12a1ca7..d479444 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -32,86 +32,12 @@
 #define TIMER_SLOP	100000
 #define NS_PER_TICK	(1000000000LL / HZ)
 
-/* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
-
 /* snapshots of runstate info */
 static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
 
 /* unused ns of stolen time */
 static DEFINE_PER_CPU(u64, xen_residual_stolen);
 
-/* return an consistent snapshot of 64-bit time/counter value */
-static u64 get64(const u64 *p)
-{
-	u64 ret;
-
-	if (BITS_PER_LONG < 64) {
-		u32 *p32 = (u32 *)p;
-		u32 h, l;
-
-		/*
-		 * Read high then low, and then make sure high is
-		 * still the same; this will only loop if low wraps
-		 * and carries into high.
-		 * XXX some clean way to make this endian-proof?
-		 */
-		do {
-			h = p32[1];
-			barrier();
-			l = p32[0];
-			barrier();
-		} while (p32[1] != h);
-
-		ret = (((u64)h) << 32) | l;
-	} else
-		ret = *p;
-
-	return ret;
-}
-
-/*
- * Runstate accounting
- */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = get64(&state->state_entry_time);
-		barrier();
-		*res = *state;
-		barrier();
-	} while (get64(&state->state_entry_time) != state_time);
-}
-
-/* return true when a vcpu could run but has no real cpu to run on */
-bool xen_vcpu_stolen(int vcpu)
-{
-	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
-}
-
-void xen_setup_runstate_info(int cpu)
-{
-	struct vcpu_register_runstate_memory_area area;
-
-	area.addr.v = &per_cpu(xen_runstate, cpu);
-
-	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
-			       cpu, &area))
-		BUG();
-}
-
 static void do_stolen_accounting(void)
 {
 	struct vcpu_runstate_info state;
@@ -119,7 +45,7 @@ static void do_stolen_accounting(void)
 	s64 runnable, offline, stolen;
 	cputime_t ticks;
 
-	get_runstate_snapshot(&state);
+	xen_get_runstate_snapshot(&state);
 
 	WARN_ON(state.state != RUNSTATE_running);
 
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 14fe79d..cd95fb7 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o events.o balloon.o manage.o
+obj-y	+= grant-table.o features.o events.o balloon.o manage.o time.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
new file mode 100644
index 0000000..c2e39d3
--- /dev/null
+++ b/drivers/xen/time.c
@@ -0,0 +1,91 @@
+/*
+ * Xen stolen ticks accounting.
+ */
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
+#include <linux/gfp.h>
+
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+
+#include <xen/events.h>
+#include <xen/features.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/vcpu.h>
+#include <xen/xen-ops.h>
+
+/* runstate info updated by Xen */
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
+
+/* return a consistent snapshot of 64-bit time/counter value */
+static u64 get64(const u64 *p)
+{
+	u64 ret;
+
+	if (BITS_PER_LONG < 64) {
+		u32 *p32 = (u32 *)p;
+		u32 h, l;
+
+		/*
+		 * Read high then low, and then make sure high is
+		 * still the same; this will only loop if low wraps
+		 * and carries into high.
+		 * XXX some clean way to make this endian-proof?
+		 */
+		do {
+			h = p32[1];
+			barrier();
+			l = p32[0];
+			barrier();
+		} while (p32[1] != h);
+
+		ret = (((u64)h) << 32) | l;
+	} else
+		ret = *p;
+
+	return ret;
+}
+
+/*
+ * Runstate accounting
+ */
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res)
+{
+	u64 state_time;
+	struct vcpu_runstate_info *state;
+
+	BUG_ON(preemptible());
+
+	state = &__get_cpu_var(xen_runstate);
+
+	/*
+	 * The runstate info is always updated by the hypervisor on
+	 * the current CPU, so there's no need to use anything
+	 * stronger than a compiler barrier when fetching it.
+	 */
+	do {
+		state_time = get64(&state->state_entry_time);
+		barrier();
+		*res = *state;
+		barrier();
+	} while (get64(&state->state_entry_time) != state_time);
+}
+
+/* return true when a vcpu could run but has no real cpu to run on */
+bool xen_vcpu_stolen(int vcpu)
+{
+	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
+}
+
+void xen_setup_runstate_info(int cpu)
+{
+	struct vcpu_register_runstate_memory_area area;
+
+	area.addr.v = &per_cpu(xen_runstate, cpu);
+
+	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
+			       cpu, &area))
+		BUG();
+}
+
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..ee3303b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -3,6 +3,7 @@
 
 #include <linux/percpu.h>
 #include <asm/xen/interface.h>
+#include <xen/interface/vcpu.h>
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
 
@@ -16,6 +17,10 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+bool xen_vcpu_stolen(int vcpu);
+void xen_setup_runstate_info(int cpu);
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDX-0007dm-6V; Wed, 08 Jan 2014 18:51:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDV-0007cl-BO
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:01 +0000
Received: from [85.158.143.35:44577] by server-2.bemta-4.messagelabs.com id
	A0/73-11386-41E9DC25; Wed, 08 Jan 2014 18:51:00 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389207057!10508606!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15682 invoked from network); 8 Jan 2014 18:50:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:50:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846948"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-3P;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:53 +0000
Message-ID: <1389206998-27875-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v8 1/6] xen: move xen_setup_runstate_info and
	get_runstate_snapshot to drivers/xen/time.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: konrad.wilk@oracle.com

---

Changes in v2:
- leave do_stolen_accounting in arch/x86/xen/time.c;
- use the new common functions in arch/ia64/xen/time.c.
---
 arch/ia64/xen/time.c  |   48 ++++----------------------
 arch/x86/xen/time.c   |   76 +----------------------------------------
 drivers/xen/Makefile  |    2 +-
 drivers/xen/time.c    |   91 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |    5 +++
 5 files changed, 104 insertions(+), 118 deletions(-)
 create mode 100644 drivers/xen/time.c

diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index 1f8244a..79a0b8c 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -34,53 +34,17 @@
 
 #include "../kernel/fsyscall_gtod_data.h"
 
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 static DEFINE_PER_CPU(unsigned long, xen_stolen_time);
 static DEFINE_PER_CPU(unsigned long, xen_blocked_time);
 
 /* taken from i386/kernel/time-xen.c */
 static void xen_init_missing_ticks_accounting(int cpu)
 {
-	struct vcpu_register_runstate_memory_area area;
-	struct vcpu_runstate_info *runstate = &per_cpu(xen_runstate, cpu);
-	int rc;
+	struct vcpu_runstate_info runstate;
+
+	xen_setup_runstate_info(cpu);
+	xen_get_runstate_snapshot(&runstate);
 
-	memset(runstate, 0, sizeof(*runstate));
-
-	area.addr.v = runstate;
-	rc = HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area, cpu,
-				&area);
-	WARN_ON(rc && rc != -ENOSYS);
-
-	per_cpu(xen_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
-	per_cpu(xen_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
-					    + runstate->time[RUNSTATE_offline];
-}
-
-/*
- * Runstate accounting
- */
-/* stolen from arch/x86/xen/time.c */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = state->state_entry_time;
-		rmb();
-		*res = *state;
-		rmb();
-	} while (state->state_entry_time != state_time);
+	per_cpu(xen_blocked_time, cpu) = runstate.time[RUNSTATE_blocked];
+	per_cpu(xen_stolen_time, cpu) = runstate.time[RUNSTATE_runnable]
+					    + runstate.time[RUNSTATE_offline];
 }
 
 #define NS_PER_TICK (1000000000LL/HZ)
@@ -94,7 +58,7 @@ consider_steal_time(unsigned long new_itm)
 	struct vcpu_runstate_info runstate;
 	struct task_struct *p = current;
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	/*
 	 * Check for vcpu migration effect
@@ -202,7 +166,7 @@ static unsigned long long xen_sched_clock(void)
 	 */
 	now = ia64_native_sched_clock();
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	WARN_ON(runstate.state != RUNSTATE_running);
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 12a1ca7..d479444 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -32,86 +32,12 @@
 #define TIMER_SLOP	100000
 #define NS_PER_TICK	(1000000000LL / HZ)
 
-/* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
-
 /* snapshots of runstate info */
 static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
 
 /* unused ns of stolen time */
 static DEFINE_PER_CPU(u64, xen_residual_stolen);
 
-/* return an consistent snapshot of 64-bit time/counter value */
-static u64 get64(const u64 *p)
-{
-	u64 ret;
-
-	if (BITS_PER_LONG < 64) {
-		u32 *p32 = (u32 *)p;
-		u32 h, l;
-
-		/*
-		 * Read high then low, and then make sure high is
-		 * still the same; this will only loop if low wraps
-		 * and carries into high.
-		 * XXX some clean way to make this endian-proof?
-		 */
-		do {
-			h = p32[1];
-			barrier();
-			l = p32[0];
-			barrier();
-		} while (p32[1] != h);
-
-		ret = (((u64)h) << 32) | l;
-	} else
-		ret = *p;
-
-	return ret;
-}
-
-/*
- * Runstate accounting
- */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = get64(&state->state_entry_time);
-		barrier();
-		*res = *state;
-		barrier();
-	} while (get64(&state->state_entry_time) != state_time);
-}
-
-/* return true when a vcpu could run but has no real cpu to run on */
-bool xen_vcpu_stolen(int vcpu)
-{
-	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
-}
-
-void xen_setup_runstate_info(int cpu)
-{
-	struct vcpu_register_runstate_memory_area area;
-
-	area.addr.v = &per_cpu(xen_runstate, cpu);
-
-	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
-			       cpu, &area))
-		BUG();
-}
-
 static void do_stolen_accounting(void)
 {
 	struct vcpu_runstate_info state;
@@ -119,7 +45,7 @@ static void do_stolen_accounting(void)
 	s64 runnable, offline, stolen;
 	cputime_t ticks;
 
-	get_runstate_snapshot(&state);
+	xen_get_runstate_snapshot(&state);
 
 	WARN_ON(state.state != RUNSTATE_running);
 
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 14fe79d..cd95fb7 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o events.o balloon.o manage.o
+obj-y	+= grant-table.o features.o events.o balloon.o manage.o time.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
new file mode 100644
index 0000000..c2e39d3
--- /dev/null
+++ b/drivers/xen/time.c
@@ -0,0 +1,91 @@
+/*
+ * Xen stolen ticks accounting.
+ */
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
+#include <linux/gfp.h>
+
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+
+#include <xen/events.h>
+#include <xen/features.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/vcpu.h>
+#include <xen/xen-ops.h>
+
+/* runstate info updated by Xen */
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
+
+/* return a consistent snapshot of 64-bit time/counter value */
+static u64 get64(const u64 *p)
+{
+	u64 ret;
+
+	if (BITS_PER_LONG < 64) {
+		u32 *p32 = (u32 *)p;
+		u32 h, l;
+
+		/*
+		 * Read high then low, and then make sure high is
+		 * still the same; this will only loop if low wraps
+		 * and carries into high.
+		 * XXX some clean way to make this endian-proof?
+		 */
+		do {
+			h = p32[1];
+			barrier();
+			l = p32[0];
+			barrier();
+		} while (p32[1] != h);
+
+		ret = (((u64)h) << 32) | l;
+	} else
+		ret = *p;
+
+	return ret;
+}
+
+/*
+ * Runstate accounting
+ */
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res)
+{
+	u64 state_time;
+	struct vcpu_runstate_info *state;
+
+	BUG_ON(preemptible());
+
+	state = &__get_cpu_var(xen_runstate);
+
+	/*
+	 * The runstate info is always updated by the hypervisor on
+	 * the current CPU, so there's no need to use anything
+	 * stronger than a compiler barrier when fetching it.
+	 */
+	do {
+		state_time = get64(&state->state_entry_time);
+		barrier();
+		*res = *state;
+		barrier();
+	} while (get64(&state->state_entry_time) != state_time);
+}
+
+/* return true when a vcpu could run but has no real cpu to run on */
+bool xen_vcpu_stolen(int vcpu)
+{
+	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
+}
+
+void xen_setup_runstate_info(int cpu)
+{
+	struct vcpu_register_runstate_memory_area area;
+
+	area.addr.v = &per_cpu(xen_runstate, cpu);
+
+	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
+			       cpu, &area))
+		BUG();
+}
+
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..ee3303b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -3,6 +3,7 @@
 
 #include <linux/percpu.h>
 #include <asm/xen/interface.h>
+#include <xen/interface/vcpu.h>
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
 
@@ -16,6 +17,10 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+bool xen_vcpu_stolen(int vcpu);
+void xen_setup_runstate_info(int cpu);
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDY-0007ev-11; Wed, 08 Jan 2014 18:51:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDV-0007cn-Is
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:01 +0000
Received: from [85.158.139.211:45092] by server-7.bemta-5.messagelabs.com id
	F2/6B-04824-41E9DC25; Wed, 08 Jan 2014 18:51:00 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389207058!8620316!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32647 invoked from network); 8 Jan 2014 18:51:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:51:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="88846949"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 18:50:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:50:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-8u;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:57 +0000
Message-ID: <1389206998-27875-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v8 5/6] core: remove ifdef CONFIG_PARAVIRT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All the asm/paravirt.h headers (x86, ia64, arm, arm64) are properly
ifdef'ed CONFIG_PARAVIRT inside so they can be safely included.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 kernel/sched/core.c |    2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a88f4a4..4f9b239 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -78,9 +78,7 @@
 #include <asm/tlb.h>
 #include <asm/irq_regs.h>
 #include <asm/mutex.h>
-#ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
-#endif
 
 #include "sched.h"
 #include "../workqueue_internal.h"
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDb-0007hj-63; Wed, 08 Jan 2014 18:51:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDZ-0007ei-Dr
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:05 +0000
Received: from [193.109.254.147:41773] by server-3.bemta-14.messagelabs.com id
	F4/06-11000-71E9DC25; Wed, 08 Jan 2014 18:51:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389207057!7331345!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8399 invoked from network); 8 Jan 2014 18:51:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:51:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90999357"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:51:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:51:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-73;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:56 +0000
Message-ID: <1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, Catalin.Marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	nico@linaro.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org, cov@codeaurora.org
Subject: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
	PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
Necessary duplication of paravirt.h and paravirt.c with ARM.

The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching needed.

This allows us to make use of steal_account_process_tick for stolen
ticks accounting.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: will.deacon@arm.com
CC: nico@linaro.org
CC: marc.zyngier@arm.com
CC: cov@codeaurora.org
CC: arnd@arndb.de
CC: olof@lixom.net
CC: Catalin.Marinas@arm.com

---

Changes in v7:
- ifdef CONFIG_PARAVIRT the content of paravirt.h.
---
 arch/arm64/Kconfig         |   20 ++++++++++++++++++++
 arch/arm64/kernel/Makefile |    1 +
 2 files changed, 21 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22..d1003ba 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -212,6 +212,25 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 
 source "mm/Kconfig"
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	---help---
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.
+
+config PARAVIRT_TIME_ACCOUNTING
+	bool "Paravirtual steal time accounting"
+	select PARAVIRT
+	default n
+	---help---
+	  Select this option to enable fine granularity task steal time
+	  accounting. Time spent executing other tasks in parallel with
+	  the current vCPU is discounted from the vCPU power. To account for
+	  that, there can be a small performance impact.
+
+	  If in doubt, say N here.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -220,6 +239,7 @@ config XEN
 	bool "Xen guest support on ARM64 (EXPERIMENTAL)"
 	depends on ARM64 && OF
 	select SWIOTLB_XEN
+	select PARAVIRT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 5ba2fd4..1dee735 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 18:51:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 18:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yDb-0007hj-63; Wed, 08 Jan 2014 18:51:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W0yDZ-0007ei-Dr
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 18:51:05 +0000
Received: from [193.109.254.147:41773] by server-3.bemta-14.messagelabs.com id
	F4/06-11000-71E9DC25; Wed, 08 Jan 2014 18:51:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389207057!7331345!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8399 invoked from network); 8 Jan 2014 18:51:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 18:51:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,625,1384300800"; d="scan'208";a="90999357"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 18:51:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 13:51:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W0yDL-0005sr-73;
	Wed, 08 Jan 2014 18:50:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 8 Jan 2014 18:49:56 +0000
Message-ID: <1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, Catalin.Marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	nico@linaro.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org, cov@codeaurora.org
Subject: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
	PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
Necessary duplication of paravirt.h and paravirt.c with ARM.

The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching needed.

This allows us to make use of steal_account_process_tick for stolen
ticks accounting.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: will.deacon@arm.com
CC: nico@linaro.org
CC: marc.zyngier@arm.com
CC: cov@codeaurora.org
CC: arnd@arndb.de
CC: olof@lixom.net
CC: Catalin.Marinas@arm.com

---

Changes in v7:
- ifdef CONFIG_PARAVIRT the content of paravirt.h.
---
 arch/arm64/Kconfig         |   20 ++++++++++++++++++++
 arch/arm64/kernel/Makefile |    1 +
 2 files changed, 21 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22..d1003ba 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -212,6 +212,25 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 
 source "mm/Kconfig"
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	---help---
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.
+
+config PARAVIRT_TIME_ACCOUNTING
+	bool "Paravirtual steal time accounting"
+	select PARAVIRT
+	default n
+	---help---
+	  Select this option to enable fine granularity task steal time
+	  accounting. Time spent executing other tasks in parallel with
+	  the current vCPU is discounted from the vCPU power. To account for
+	  that, there can be a small performance impact.
+
+	  If in doubt, say N here.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -220,6 +239,7 @@ config XEN
 	bool "Xen guest support on ARM64 (EXPERIMENTAL)"
 	depends on ARM64 && OF
 	select SWIOTLB_XEN
+	select PARAVIRT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 5ba2fd4..1dee735 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 19:13:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0yZB-0001XT-PY; Wed, 08 Jan 2014 19:13:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1W0yZA-0001XN-Ib
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 19:13:24 +0000
Received: from [85.158.137.68:34645] by server-14.bemta-3.messagelabs.com id
	8B/5E-06105-353ADC25; Wed, 08 Jan 2014 19:13:23 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389208401!7966120!1
X-Originating-IP: [198.145.11.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21228 invoked from network); 8 Jan 2014 19:13:23 -0000
Received: from smtp.codeaurora.org (HELO smtp.codeaurora.org) (198.145.11.231)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 8 Jan 2014 19:13:23 -0000
Received: from smtp.codeaurora.org (localhost [127.0.0.1])
	by smtp.codeaurora.org (Postfix) with ESMTP id 7498C13EF3A;
	Wed,  8 Jan 2014 19:13:20 +0000 (UTC)
Received: by smtp.codeaurora.org (Postfix, from userid 486)
	id 6341913F114; Wed,  8 Jan 2014 19:13:20 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-caf-smtp.dmz.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.1
Received: from [10.228.82.110] (rrcs-67-52-130-30.west.biz.rr.com
	[67.52.130.30])
	(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: cov@smtp.codeaurora.org)
	by smtp.codeaurora.org (Postfix) with ESMTPSA id B9EAA13EF3A;
	Wed,  8 Jan 2014 19:13:18 +0000 (UTC)
Message-ID: <52CDA34C.2030605@codeaurora.org>
Date: Wed, 08 Jan 2014 14:13:16 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130106 Thunderbird/17.0.2
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Cc: xen-devel@lists.xensource.com, linux@arm.linux.org.uk,
	Ian.Campbell@citrix.com, arnd@arndb.de, marc.zyngier@arm.com,
	Catalin.Marinas@arm.com, nico@linaro.org, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 01/08/2014 01:49 PM, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
> Necessary duplication of paravirt.h and paravirt.c with ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net
> CC: Catalin.Marinas@arm.com

Looks good to me.

Acked-by: Christopher Covington <cov@codeaurora.org>

While I don't think it should necessarily gate these changes, I wonder if at
some point the config options could be consolidated across the various
architectures using them.

Regards,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 19:25:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:25:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ykV-0002Go-8A; Wed, 08 Jan 2014 19:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <majieyue@gmail.com>) id 1W0ykT-0002Gj-H9
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 19:25:05 +0000
Received: from [85.158.137.68:18313] by server-9.bemta-3.messagelabs.com id
	04/BA-13104-016ADC25; Wed, 08 Jan 2014 19:25:04 +0000
X-Env-Sender: majieyue@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389209102!8031562!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31677 invoked from network); 8 Jan 2014 19:25:03 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 19:25:03 -0000
Received: by mail-pd0-f170.google.com with SMTP id g10so2177713pdj.1
	for <xen-devel@lists.xen.org>; Wed, 08 Jan 2014 11:25:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=+SRGqs2NpyrldQ1SNcsfgV0jLfKYd02yp9aIa8vra8I=;
	b=1CRUuZQQk85Yod7/QdxGAxTT94OtWKCWzyLPuNlTLBOv7MT4NiaIao0ePEdWcB+oHk
	Q+EOxEyf4onH4vt2T57N5cpk/BQ7XnpCK/UWh91i1FCkIMgZreZiUgTie4SWVy8kxJKN
	TyuVDarRiPBWlMp65conb7olQKWc9EjH022++TkVXQ4HbQ3O3x5dsZpicAEahN14vnjG
	/IRJlSzrDRjAHxZ6LeFYncF1DvuElpnYzvrz0VFl+4ezO3El56vHgZvq4unHKVjmIS5G
	64Igcp/En84VwAzsnsSWfosa9BjeeYfiUiz+69rWeYoVQT0ugx1tNS39mC8I1cleLbg+
	0XAw==
X-Received: by 10.66.121.234 with SMTP id ln10mr14929381pab.20.1389209101671; 
	Wed, 08 Jan 2014 11:25:01 -0800 (PST)
Received: from houyi-vm14.dev.sd.aliyun.com ([110.75.164.3])
	by mx.google.com with ESMTPSA id oa3sm4400100pbb.15.2014.01.08.11.24.54
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Wed, 08 Jan 2014 11:25:01 -0800 (PST)
From: Ma JieYue <majieyue@gmail.com>
To: netdev@vger.kernel.org,
	xen-devel@lists.xen.org
Date: Thu,  9 Jan 2014 03:24:21 +0800
Message-Id: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
X-Mailer: git-send-email 1.8.4
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ma JieYue <jieyue.majy@alibaba-inc.com>,
	Fu Tienan <tienan.ftn@alibaba-inc.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Wang Yingbin <yingbin.wangyb@alibaba-inc.com>
Subject: [Xen-devel] [PATCH net] xen-netback: fix vif tx queue race in
	xenvif_rx_interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ma JieYue <jieyue.majy@alibaba-inc.com>

There is a race between waking up and stopping the xenvif tx queue, and it
leads to unnecessary packet drops. The problem is that the rx ring can still be
full when xenvif_start_xmit is entered. In xenvif_rx_interrupt, netif_wake_queue
may be called some time after the ring was observed to have room, so the check
and the wake are not atomic. Here is part of the debug log from when the race
happened:

wake_queue: req_cons_peek 2679757 req_cons 2679586 req_prod 2679841
stop_queue: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841
[tx_queue_stopped true]
wake_queue: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841
[tx_queue_stopped false]
drop packet: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841

The debug log was written each time netif_wake_queue was called in
xenvif_rx_interrupt, each time netif_stop_queue was called in
xenvif_start_xmit, and each time a packet was dropped in xenvif_start_xmit.
As the log shows, the second wake_queue appeared where it should not: the ring
had been checked before the stop_queue, but the wake_queue based on that stale
check only took place after the stop_queue. As a result, xenvif_start_xmit was
entered with the ring full but the queue not stopped.

The patch fixes the race by checking whether the tx queue is stopped before
trying to wake it in xenvif_rx_interrupt. The queue is only woken when it is
stopped and the ring is schedulable (i.e. no longer full).

Signed-off-by: Ma JieYue <jieyue.majy@alibaba-inc.com>
Signed-off-by: Wang Yingbin <yingbin.wangyb@alibaba-inc.com>
Signed-off-by: Fu Tienan <tienan.ftn@alibaba-inc.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netback/interface.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index fff8cdd..e099f62 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -105,7 +105,7 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;
 
-	if (xenvif_rx_schedulable(vif))
+	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
 		netif_wake_queue(vif->dev);
 
 	return IRQ_HANDLED;
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 19:37:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ywL-0003Cv-GL; Wed, 08 Jan 2014 19:37:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0ywJ-0003Cq-WD
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 19:37:20 +0000
Received: from [193.109.254.147:20048] by server-11.bemta-14.messagelabs.com
	id 3A/38-20576-FE8ADC25; Wed, 08 Jan 2014 19:37:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389209837!9679623!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21227 invoked from network); 8 Jan 2014 19:37:18 -0000
From xen-devel-bounces@lists.xen.org Wed Jan 08 19:37:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0ywL-0003Cv-GL; Wed, 08 Jan 2014 19:37:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W0ywJ-0003Cq-WD
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 19:37:20 +0000
Received: from [193.109.254.147:20048] by server-11.bemta-14.messagelabs.com
	id 3A/38-20576-FE8ADC25; Wed, 08 Jan 2014 19:37:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389209837!9679623!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21227 invoked from network); 8 Jan 2014 19:37:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 19:37:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="88866430"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 19:37:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 14:37:16 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0ywG-0006YU-7C;
	Wed, 08 Jan 2014 19:37:16 +0000
Date: Wed, 8 Jan 2014 19:37:16 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
Message-ID: <20140108193716.GA16009@zion.uk.xensource.com>
References: <1389184918-42790-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389184918-42790-1-git-send-email-paul.durrant@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: netdev@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: stop vif thread
 spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 12:41:58PM +0000, Paul Durrant wrote:
> The recent patch to improve guest receive side flow control (ca2f09f2) had a
> slight flaw in the wait condition for the vif thread in that any remaining
> skbs in the guest receive side netback internal queue would prevent the
> thread from sleeping. An unresponsive frontend can lead to a permanently
> non-empty internal queue and thus the thread will spin. In this case the
> thread should really sleep until the frontend becomes responsive again.
> 
> This patch adds an extra flag to the vif which is set if the shared ring
> is full and cleared when skbs are drained into the shared ring. Thus, if
> the thread runs, finds the shared ring full and can make no progress, the
> flag remains set. If the flag remains set then the thread will sleep, even
> if the queue is non-empty, until the next event from the frontend.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>

Thanks
Wei.
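The wait-condition change described in the patch above can be modelled as
follows. This is a hypothetical Python sketch with invented names, not the
actual xen-netback code:

```python
# Hypothetical model of the vif thread's wait condition; the names are
# invented and this is NOT the actual xen-netback code.

def vif_thread_should_sleep(queue_empty, rx_stalled):
    """Decide whether the vif kernel thread should sleep.

    Before the fix, any skb left in the internal queue kept the thread
    awake, so an unresponsive frontend (shared ring permanently full)
    made it spin.  With the extra flag, the thread also sleeps when the
    ring was found full and no progress was made (rx_stalled), until
    the next event from the frontend clears the flag.
    """
    return queue_empty or rx_stalled

# Unresponsive frontend: queue non-empty but ring full -> sleep, not spin.
print(vif_thread_should_sleep(queue_empty=False, rx_stalled=True))   # True
# Frontend drained some slots, flag cleared -> keep processing the queue.
print(vif_thread_should_sleep(queue_empty=False, rx_stalled=False))  # False
```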

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 19:45:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0z3o-000465-6k; Wed, 08 Jan 2014 19:45:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W0z3m-00045y-TA
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 19:45:03 +0000
Received: from [193.109.254.147:24195] by server-3.bemta-14.messagelabs.com id
	71/3A-11000-EBAADC25; Wed, 08 Jan 2014 19:45:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389210299!9640365!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2298 invoked from network); 8 Jan 2014 19:45:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 19:45:01 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08JitE5007260
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 19:44:56 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08JirNA020464
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 19:44:54 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08Jire0020727; Wed, 8 Jan 2014 19:44:53 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 11:44:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DA5AE1C18DC; Wed,  8 Jan 2014 14:44:51 -0500 (EST)
Date: Wed, 8 Jan 2014 14:44:51 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Anthony PERARD <anthony.perard@citrix.com>, donald.d.dugger@intel.com
Message-ID: <20140108194451.GA15956@phenom.dumpdata.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20131218144823.GB6081@perard.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > [...]
> > > > > Those Xen report something like:
> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > 131328
> > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > memflags=0 (62 of 64)
> > > > > 
> > > > > ?
> > > > > 
> > > > > (I tried to reproduce the issue by simply adding many emulated
> > > > > e1000 NICs in QEMU :) )
> > > > > 
> 
> > -bash-4.1# lspci -s 01:00.0 -v 
> > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >         Flags: fast devsel, IRQ 16
> >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> >         I/O ports at e020 [disabled] [size=32]
> >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> >         Expansion ROM at fb400000 [disabled] [size=4M]
> 
> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> allocate memory for it, so we will maybe have to find another way.
> qemu-trad does not seem to allocate memory for it, but I haven't got
> very far in trying to check that.

And indeed that is the case. The "Fix" below fixes it.


Based on that and this guest config:
disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
pci = ["01:00.0"]

I can boot the guest.
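The over-allocation in the log above can be modelled roughly as follows.
This is a hypothetical Python sketch with invented names, not the actual
Xen page_alloc.c logic:

```python
# Hypothetical model of the over-allocation check seen in the Xen log
# ("Over-allocation for domain 46: 131329 > 131328"); invented names,
# not the actual hypervisor code.

PAGE_SIZE = 4096

def try_populate(cur_pages, max_pages, nr_bytes):
    """Try to populate extra pages for a domain; refuse past max_pages."""
    nr_pages = -(-nr_bytes // PAGE_SIZE)  # round up to whole pages
    if cur_pages + nr_pages > max_pages:
        return False, cur_pages           # "Over-allocation for domain"
    return True, cur_pages + nr_pages

# A domain already at its limit cannot take even one page of a 4M ROM BAR:
ok, pages = try_populate(cur_pages=131328, max_pages=131328,
                         nr_bytes=4 * 1024 * 1024)
print(ok, pages)  # False 131328
```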

diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index ca2d460..82b3890 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -425,6 +425,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
                    i, r->size, r->base_addr, type);
     }
 
+#if 0
     /* Register expansion ROM address */
     if (d->rom.base_addr && d->rom.size) {
         uint32_t bar_data = 0;
@@ -449,7 +450,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
                    " base_addr=0x%08"PRIx64")\n",
                    d->rom.size, d->rom.base_addr);
     }
-
+#endif
     return 0;
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 19:58:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 19:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zGc-0004uW-1c; Wed, 08 Jan 2014 19:58:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1W0zGa-0004uO-Tj; Wed, 08 Jan 2014 19:58:17 +0000
Received: from [85.158.137.68:13132] by server-2.bemta-3.messagelabs.com id
	BB/22-17329-8DDADC25; Wed, 08 Jan 2014 19:58:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389211093!8006649!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28225 invoked from network); 8 Jan 2014 19:58:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 19:58:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08Jw6CQ022209
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 19:58:06 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s08Jw4Xu014982
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 19:58:05 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08Jw4UV003406; Wed, 8 Jan 2014 19:58:04 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 11:58:04 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B15F11C18DC; Wed,  8 Jan 2014 14:58:03 -0500 (EST)
Date: Wed, 8 Jan 2014 14:58:03 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140108195803.GB16230@phenom.dumpdata.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108184405.GB13867@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>,
	topperxin <topperxin@126.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:44:05PM +0000, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
> > Hi list,
> >         As we all know, SR-IOV technology can improve a VNIC's
> >         performance, but it cannot support live migration. I recently
> >         got some information that on a KVM+VirtIO platform, if we use
> >         MacVtap + SR-IOV, live migration can be done successfully.
> >         What I want to know is: can I configure MacVtap on Xen?
> >         Any replies are welcome! Thanks a lot.
> 
> AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
> the feature you want. That would mean you would also need to use the
> VirtIO network driver in Xen's HVM domain, if you manage to configure
> MacVtap for Xen.
> 
> Basically that means a configuration that nobody has ever tried. Good
> luck. :-)

Do you know what would be needed to make MacVtap run with Xen's drivers?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:03:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zLK-0005pk-Jq; Wed, 08 Jan 2014 20:03:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1W0zLI-0005pS-TD; Wed, 08 Jan 2014 20:03:09 +0000
Received: from [85.158.139.211:19561] by server-4.bemta-5.messagelabs.com id
	97/23-26791-BFEADC25; Wed, 08 Jan 2014 20:03:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389211385!8596411!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2469 invoked from network); 8 Jan 2014 20:03:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 20:03:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="88875962"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 20:03:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 15:03:05 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0zLF-0006xg-2S;
	Wed, 08 Jan 2014 20:03:05 +0000
Date: Wed, 8 Jan 2014 20:03:05 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140108200305.GC16009@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<20140108195803.GB16230@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108195803.GB16230@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xensource.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>,
	topperxin <topperxin@126.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 02:58:03PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 08, 2014 at 06:44:05PM +0000, Wei Liu wrote:
> > On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
> > > Hi list,
> > >         As we all know, SR-IOV technology can improve a VNIC's
> > >         performance, but it cannot support live migration. I recently
> > >         got some information that on a KVM+VirtIO platform, if we use
> > >         MacVtap + SR-IOV, live migration can be done successfully.
> > >         What I want to know is: can I configure MacVtap on Xen?
> > >         Any replies are welcome! Thanks a lot.
> > 
> > AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
> > the feature you want. That would mean you also need to use VirtIO
> > network driver for Xen's HVM domain, if you manage to configure MacVtap
> > for Xen.
> > 
> > Basically that means a configuration that nobody ever tried. Good luck.
> > :-)
> 
> Do you know what would be needed to make MacVtap run with Xen's drivers?

No idea. Never looked into it.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:03:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zLK-0005pk-Jq; Wed, 08 Jan 2014 20:03:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1W0zLI-0005pS-TD; Wed, 08 Jan 2014 20:03:09 +0000
Received: from [85.158.139.211:19561] by server-4.bemta-5.messagelabs.com id
	97/23-26791-BFEADC25; Wed, 08 Jan 2014 20:03:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389211385!8596411!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2469 invoked from network); 8 Jan 2014 20:03:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 20:03:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="88875962"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 20:03:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 8 Jan 2014 15:03:05 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W0zLF-0006xg-2S;
	Wed, 08 Jan 2014 20:03:05 +0000
Date: Wed, 8 Jan 2014 20:03:05 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140108200305.GC16009@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<20140108195803.GB16230@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108195803.GB16230@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xensource.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>,
	topperxin <topperxin@126.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 02:58:03PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 08, 2014 at 06:44:05PM +0000, Wei Liu wrote:
> > On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
> > > Hi list
> > >         As we all know, SR-IOV can improve a VNIC's performance,
> > >         but it does not support live migration. I recently learned
> > >         that on a KVM+VirtIO platform, live migration can succeed
> > >         when MacVtap + SR-IOV is used. What I want to know is: can
> > >         I configure MacVtap on Xen?
> > >        Any replies are welcome! Thanks a lot.
> > 
> > AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
> > the feature you want. That would mean you also need to use VirtIO
> > network driver for Xen's HVM domain, if you manage to configure MacVtap
> > for Xen.
> > 
> > Basically that means a configuration that nobody ever tried. Good luck.
> > :-)
> 
> Do you know what would be needed to make MacVtap run with Xen's drivers?

No idea. Never looked into it.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:11:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zTe-0006tw-6J; Wed, 08 Jan 2014 20:11:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W0zTc-0006tn-D0
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 20:11:44 +0000
Received: from [85.158.139.211:51624] by server-1.bemta-5.messagelabs.com id
	22/EC-21065-FF0BDC25; Wed, 08 Jan 2014 20:11:43 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389211902!8638157!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21729 invoked from network); 8 Jan 2014 20:11:42 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-12.tower-206.messagelabs.com with SMTP;
	8 Jan 2014 20:11:42 -0000
Received: from localhost (nat-pool-rdu-t.redhat.com [66.187.233.202])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id A08F6588F7D;
	Wed,  8 Jan 2014 12:11:40 -0800 (PST)
Date: Wed, 08 Jan 2014 15:11:39 -0500 (EST)
Message-Id: <20140108.151139.4522881609697040.davem@davemloft.net>
To: majieyue@gmail.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
References: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.1
	(shards.monkeyblade.net [0.0.0.0]);
	Wed, 08 Jan 2014 12:11:41 -0800 (PST)
Cc: yingbin.wangyb@alibaba-inc.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, jieyue.majy@alibaba-inc.com,
	tienan.ftn@alibaba-inc.com, david.vrabel@citrix.com,
	wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net] xen-netback: fix vif tx queue race in
 xenvif_rx_interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ma JieYue <majieyue@gmail.com>
Date: Thu,  9 Jan 2014 03:24:21 +0800

> -	if (xenvif_rx_schedulable(vif))
> +	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))

I do not see anything which prevents a netif_stop_queue() call from happening
between these two tests in another thread of control.

This therefore looks like a bandaid and not a real fix.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:17:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zZF-00076Q-Bn; Wed, 08 Jan 2014 20:17:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kristian@hagsted.dk>) id 1W0zZE-00076K-4E
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 20:17:32 +0000
Received: from [193.109.254.147:56754] by server-12.bemta-14.messagelabs.com
	id EE/42-13681-B52BDC25; Wed, 08 Jan 2014 20:17:31 +0000
X-Env-Sender: kristian@hagsted.dk
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389212249!9596984!1
X-Originating-IP: [80.160.77.98]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuMTYwLjc3Ljk4ID0+IDE2NDE4OA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17116 invoked from network); 8 Jan 2014 20:17:30 -0000
Received: from pasmtpb.tele.dk (HELO pasmtpB.tele.dk) (80.160.77.98)
	by server-9.tower-27.messagelabs.com with SMTP;
	8 Jan 2014 20:17:30 -0000
Received: from hagsted.dk (2-108-99-186-static.dk.customer.tdc.net
	[2.108.99.186])
	by pasmtpB.tele.dk (Postfix) with ESMTP id 437FE2D80B1;
	Wed,  8 Jan 2014 21:17:28 +0100 (CET)
Received: from HAGSTED-CSERVER.hagsted.dk (192.168.2.11) by
	hagsted-cserver.hagsted.dk (192.168.2.11) with Microsoft SMTP Server
	(TLS) id 15.0.620.29; Wed, 8 Jan 2014 21:16:16 +0100
Received: from HAGSTED-CSERVER.hagsted.dk ([fe80::b00a:5a81:2ebe:40e]) by
	hagsted-cserver.hagsted.dk ([fe80::b00a:5a81:2ebe:40e%16]) with mapi id
	15.00.0620.020; Wed, 8 Jan 2014 21:16:16 +0100
From: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
To: James Harper <james.harper@bendigoit.com.au>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer, since it
	is not compiled
Thread-Index: AQHPCw0x+RJzyumU502/wz3qQ25xQ5p7RhkQ
Date: Wed, 8 Jan 2014 20:16:15 +0000
Message-ID: <68204cf929664299be5b25e3a22488b8@hagsted-cserver.hagsted.dk>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
	<d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F359BBB@BITCOM1.int.sbss.com.au>
	<f1e201f7cd014619b6a786ec4d53367a@hagsted-cserver.hagsted.dk>
In-Reply-To: <f1e201f7cd014619b6a786ec4d53367a@hagsted-cserver.hagsted.dk>
Accept-Language: en-US, da-DK
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.2.41]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> > > The line in question is the following:
> > >   HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;
> > >
> > > XenVbd_HwStorFindAdapter is the data structure which you have
> > > corrected a few lines in, in the patch. As it is a level 4 warning,
> > > it can be ignored by setting /W3 in the MSC_WARNING_LEVEL. However I
> > > suspect that it would be preferred to find the cause of the warning.
> >
> > That would imply that my function definition doesn't match the
> > expected function definition in the HW_INITIALIZATION_DATA structure,
> > but according to the docs I have everything right. Can you check the
> > storport headers and check the declaration there against my function?
> 
> For windows 8 and newer HwFindAdapter is declared as
>   PVOID                     		HwFindAdapter;
> While for earlier versions of windows it is declared as:
>   PHW_FIND_ADAPTER		HwFindAdapter;
> 

I have tried to typecast the HwFindAdapter like:
  (PHW_FIND_ADAPTER) HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;

However this results in the following error and warnings:
  error C2220: warning treated as error - no 'object' file generated
  warning C4055: 'type cast' : from data pointer 'PVOID' to function pointer 'PHW_FIND_ADAPTER'
  warning C4213: nonstandard extension used : cast on l-value

I might be doing something wrong, however my experience in C++ is limited.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:17:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W0zZL-00076t-OW; Wed, 08 Jan 2014 20:17:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W0zZK-00076l-CV
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 20:17:38 +0000
Received: from [85.158.137.68:61044] by server-15.bemta-3.messagelabs.com id
	51/FA-11556-162BDC25; Wed, 08 Jan 2014 20:17:37 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389212255!8062436!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1086 invoked from network); 8 Jan 2014 20:17:37 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 20:17:37 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s08KH81o010808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 8 Jan 2014 20:17:09 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s08KH7gN028224
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 8 Jan 2014 20:17:07 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s08KH6SA011703; Wed, 8 Jan 2014 20:17:06 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 12:17:05 -0800
Date: Wed, 8 Jan 2014 21:16:59 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140108201659.GB3633@olila.local.net-space.pl>
References: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1388862686-1832-1-git-send-email-dslutz@verizon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org,
	kexec@lists.infradead.org, crash-utility@redhat.com
Subject: Re: [Xen-devel] [PATCH 0/4] Enable use of crash on xen 4.4.0 vmcore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 04, 2014 at 02:11:22PM -0500, Don Slutz wrote:
> With the addition of PVH code to xen 4.4, domain.is_hvm no longer
> exists.  This prevents crash from using a xen 4.4.0 vmcore.
>
> Patch 1 "fixes" this.
>
> Patch 2 is a minor fix in that outputting the offset in hex for
> domain_domain_flags is different.
>
> Patch 3 is a bug fix to get all "domain_flags" set, not just the 1st
> one found.
>
> Patch 4 is a quick way to add domain.guest_type support.

Sorry for the late reply, but I was on holiday.

Nice work, thanks. I have done some tests with old
and new Xen versions, and it looks like it works
without any issue.

Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Daniel Kiper <daniel.kiper@oracle.com>

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 20:54:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 20:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1092-0001ED-V6; Wed, 08 Jan 2014 20:54:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1091-0001E8-9G
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 20:54:31 +0000
Received: from [193.109.254.147:47950] by server-10.bemta-14.messagelabs.com
	id B5/02-20752-60BBDC25; Wed, 08 Jan 2014 20:54:30 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389214469!9600962!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1946 invoked from network); 8 Jan 2014 20:54:29 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Jan 2014 20:54:29 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 08 Jan 2014 20:54:28 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="626687919"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.218])
	by fldsmtpi02.verizon.com with ESMTP; 08 Jan 2014 20:54:27 +0000
Message-ID: <52CDBB02.6070702@terremark.com>
Date: Wed, 08 Jan 2014 15:54:26 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>, Don Slutz <dslutz@verizon.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/08/14 13:35, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
> crash kernel area) causes fatal page faults when loading a crash
> image.  The attempt to zero the first control page allocated from the
> crash region will fault as the VA returned by map_domain_page() has no
> mapping.
>
> The fault will occur on non-debug builds of Xen when the crash area is
> below 5 TiB (which will be most systems).
>
> The assumption that the crash area mapping was not used is incorrect.
> map_domain_page() is used when loading an image and building the
> image's page tables to temporarily map the crash area, thus the
> mapping is required if the crash area is in the direct map area.
>
> Reintroduce the mapping, but only the portions of the crash area that
> are within the direct map area.
>
> Reported-by: Don Slutz <dslutz@verizon.com>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Daniel Kiper <daniel.kiper@oracle.com>
> ---
> This fixes a Xen crash so is an important fix for the 4.4 release.
>
> Changes in v2:
> - merge patches into one
> - add check for e > s before mapping
> ---
>   xen/arch/x86/setup.c |   11 +++++++++++
>   1 files changed, 11 insertions(+), 0 deletions(-)
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 4833ca3..b49256d 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1098,6 +1098,17 @@ void __init __start_xen(unsigned long mbi_p)
>                            PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
>       }
>   
> +    if ( kexec_crash_area.size )
> +    {
> +        unsigned long s = PFN_DOWN(kexec_crash_area.start);
> +        unsigned long e = min(s + PFN_UP(kexec_crash_area.size),
> +                              PFN_UP(__pa(HYPERVISOR_VIRT_END - 1)));
> +
> +        if ( e > s )
> +            map_pages_to_xen((unsigned long)__va(kexec_crash_area.start),
> +                             s, e - s, PAGE_HYPERVISOR);
> +    }
> +
>       xen_virt_end = ((unsigned long)_end + (1UL << L2_PAGETABLE_SHIFT) - 1) &
>                      ~((1UL << L2_PAGETABLE_SHIFT) - 1);
>       destroy_xen_mappings(xen_virt_end, XEN_VIRT_START + BOOTSTRAP_MAP_BASE);


4.4.0-rc1 + this patch works for me.  So you can add:

Tested-by: Don Slutz <dslutz@verizon.com>

    -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 21:19:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 21:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W10Ww-0003Ap-2C; Wed, 08 Jan 2014 21:19:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W10Wv-0003Aj-3W
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 21:19:13 +0000
Received: from [85.158.139.211:62938] by server-3.bemta-5.messagelabs.com id
	37/4F-04773-0D0CDC25; Wed, 08 Jan 2014 21:19:12 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389215948!7417613!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5106 invoked from network); 8 Jan 2014 21:19:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 21:19:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="91058314"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 21:19:03 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	16:19:02 -0500
Message-ID: <52CDC0C3.2080303@citrix.com>
Date: Wed, 8 Jan 2014 21:18:59 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ma JieYue <majieyue@gmail.com>, <netdev@vger.kernel.org>,
	<xen-devel@lists.xen.org>
References: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
In-Reply-To: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ma JieYue <jieyue.majy@alibaba-inc.com>,
	Fu Tienan <tienan.ftn@alibaba-inc.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Wang Yingbin <yingbin.wangyb@alibaba-inc.com>
Subject: Re: [Xen-devel] [PATCH net] xen-netback: fix vif tx queue race in
	xenvif_rx_interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

With Paul's recent flow control improvement I think this became invalid:

http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=ca2f09f2b2c6c25047cfc545d057c4edfcfe561c

Zoli

On 08/01/14 19:24, Ma JieYue wrote:
> From: Ma JieYue <jieyue.majy@alibaba-inc.com>
>
> There is a race when waking up or stopping the xenvif tx queue, and it leads to
> unnecessary packet drops. The problem is that the rx ring is still full when
> entering xenvif_start_xmit. In xenvif_rx_interrupt, netif_wake_queue may be
> called not just right after the ring stops being full, so the check and the
> wake are not atomic. Here is part of the debug log when the race happened:
>
> wake_queue: req_cons_peek 2679757 req_cons 2679586 req_prod 2679841
> stop_queue: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841
> [tx_queue_stopped true]
> wake_queue: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841
> [tx_queue_stopped false]
> drop packet: req_cons_peek 2679837 req_cons 2679757 req_prod 2679841
>
> The debug log was written every time right after netif_wake_queue was called
> in xenvif_rx_interrupt, every time after netif_stop_queue was called in
> xenvif_start_xmit, and every time a packet drop happened in xenvif_start_xmit.
> As we can see, the second wake_queue appeared where it should not be: the
> ring had been checked before the stop_queue, but the actual wake_queue
> action took place after the stop_queue, so that when entering
> xenvif_start_xmit the ring was full but the queue was not stopped.
>
> The patch fixes the race by checking whether the tx queue is stopped before
> trying to wake it up in xenvif_rx_interrupt. It only wakes the queue when it
> is stopped, the ring is no longer full, and the vif is schedulable.
>
> Signed-off-by: Ma JieYue <jieyue.majy@alibaba-inc.com>
> Signed-off-by: Wang Yingbin <yingbin.wangyb@alibaba-inc.com>
> Signed-off-by: Fu Tienan <tienan.ftn@alibaba-inc.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> ---
>   drivers/net/xen-netback/interface.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index fff8cdd..e099f62 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -105,7 +105,7 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>   {
>   	struct xenvif *vif = dev_id;
>
> -	if (xenvif_rx_schedulable(vif))
> +	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
>   		netif_wake_queue(vif->dev);
>
>   	return IRQ_HANDLED;
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 21:34:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 21:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W10lh-0004OC-Ur; Wed, 08 Jan 2014 21:34:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W10lf-0004O7-Rb
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 21:34:28 +0000
Received: from [85.158.137.68:58277] by server-2.bemta-3.messagelabs.com id
	58/E7-17329-364CDC25; Wed, 08 Jan 2014 21:34:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389216864!6851096!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10039 invoked from network); 8 Jan 2014 21:34:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 21:34:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="88913999"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 21:34:24 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	16:34:23 -0500
Message-ID: <52CDC45D.3050509@citrix.com>
Date: Wed, 8 Jan 2014 21:34:21 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I just realized when answering Ma's mail that this doesn't have the 
desired effect after Paul's flow control improvement: starting the queue 
doesn't drop the packets which cannot fit into the ring, which in fact 
might not be good. We are adding the skb to vif->rx_queue even when 
xenvif_rx_ring_slots_available(vif, min_slots_needed) said there is no 
space for it. Or am I missing something? Paul?

Zoli

On 08/01/14 00:10, Zoltan Kiss wrote:
> A malicious or buggy guest can leave its queue filled indefinitely, in which
> case the qdisc starts to queue packets for that VIF. If those packets came
> from another guest, it can block its slots and prevent shutdown. To avoid
> that, we make sure the queue is drained every 10 seconds.
...
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 95fcd63..ce032f9 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>   	return IRQ_HANDLED;
>   }
>
> +static void xenvif_wake_queue(unsigned long data)
> +{
> +	struct xenvif *vif = (struct xenvif *)data;
> +
> +	if (netif_queue_stopped(vif->dev)) {
> +		netdev_err(vif->dev, "draining TX queue\n");
> +		netif_wake_queue(vif->dev);
> +	}
> +}
> +
>   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>   {
>   	struct xenvif *vif = netdev_priv(dev);
> @@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>   	 * then turn off the queue to give the ring a chance to
>   	 * drain.
>   	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> +		vif->wake_queue.function = xenvif_wake_queue;
> +		vif->wake_queue.data = (unsigned long)vif;
>   		xenvif_stop_queue(vif);
> +		mod_timer(&vif->wake_queue,
> +			jiffies + rx_drain_timeout_jiffies);
> +	}
>
>   	skb_queue_tail(&vif->rx_queue, skb);
>   	xenvif_kick_thread(vif);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 21:34:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 21:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W10lh-0004OC-Ur; Wed, 08 Jan 2014 21:34:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W10lf-0004O7-Rb
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 21:34:28 +0000
Received: from [85.158.137.68:58277] by server-2.bemta-3.messagelabs.com id
	58/E7-17329-364CDC25; Wed, 08 Jan 2014 21:34:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389216864!6851096!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10039 invoked from network); 8 Jan 2014 21:34:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 21:34:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="88913999"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 21:34:24 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	16:34:23 -0500
Message-ID: <52CDC45D.3050509@citrix.com>
Date: Wed, 8 Jan 2014 21:34:21 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I just realized while answering Ma's mail that this doesn't have the 
desired effect after Paul's flow control improvement: starting the queue 
doesn't drop the packets which cannot fit into the ring, which might 
actually be a problem. We add the skb to vif->rx_queue even when 
xenvif_rx_ring_slots_available(vif, min_slots_needed) said there is no 
space for it. Or am I missing something? Paul?

Zoli

On 08/01/14 00:10, Zoltan Kiss wrote:
> A malicious or buggy guest can leave its queue filled indefinitely, in which
> case the qdisc starts to queue packets for that VIF. If those packets came
> from another guest, they can block its slots and prevent shutdown. To avoid
> that, we make sure the queue is drained every 10 seconds.
...
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 95fcd63..ce032f9 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>   	return IRQ_HANDLED;
>   }
>
> +static void xenvif_wake_queue(unsigned long data)
> +{
> +	struct xenvif *vif = (struct xenvif *)data;
> +
> +	if (netif_queue_stopped(vif->dev)) {
> +		netdev_err(vif->dev, "draining TX queue\n");
> +		netif_wake_queue(vif->dev);
> +	}
> +}
> +
>   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>   {
>   	struct xenvif *vif = netdev_priv(dev);
> @@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>   	 * then turn off the queue to give the ring a chance to
>   	 * drain.
>   	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> +		vif->wake_queue.function = xenvif_wake_queue;
> +		vif->wake_queue.data = (unsigned long)vif;
>   		xenvif_stop_queue(vif);
> +		mod_timer(&vif->wake_queue,
> +			jiffies + rx_drain_timeout_jiffies);
> +	}
>
>   	skb_queue_tail(&vif->rx_queue, skb);
>   	xenvif_kick_thread(vif);
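For reference, the stop/arm/wake sequence in the hunk above can be modelled in plain user-space C. This is only a sketch: struct fake_vif and the helper functions are hypothetical stand-ins that mirror the kernel calls in comments, not the netback implementation itself.

```c
#include <assert.h>

/* Hypothetical user-space model of the stop/arm/wake pattern. */
struct fake_vif {
	int queue_stopped;
	unsigned long timer_expires;	/* "jiffies" at which the wake timer fires */
};

/* What start_xmit does when no ring slots are available. */
static void stop_and_arm(struct fake_vif *vif, unsigned long now,
			 unsigned long drain_timeout)
{
	vif->queue_stopped = 1;			/* xenvif_stop_queue() */
	vif->timer_expires = now + drain_timeout; /* mod_timer() */
}

/* Timer callback: only wake if still stopped, as xenvif_wake_queue() does. */
static void timer_fire(struct fake_vif *vif)
{
	if (vif->queue_stopped)
		vif->queue_stopped = 0;		/* netif_wake_queue() */
}
```

Note that in this model, as in the patch, waking the queue only restarts transmission; nothing removes the skbs already sitting on rx_queue, which is the concern raised above.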


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 22:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11Kf-0007Do-G9; Wed, 08 Jan 2014 22:10:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.harper@bendigoit.com.au>) id 1W11Kd-0007Db-FZ
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 22:10:35 +0000
Received: from [85.158.137.68:63459] by server-14.bemta-3.messagelabs.com id
	9E/39-06105-ADCCDC25; Wed, 08 Jan 2014 22:10:34 +0000
X-Env-Sender: james.harper@bendigoit.com.au
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389219030!6854458!1
X-Originating-IP: [203.16.207.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29632 invoked from network); 8 Jan 2014 22:10:33 -0000
Received: from mail.bendigoit.com.au (HELO smtp2.bendigoit.com.au)
	(203.16.207.99)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 8 Jan 2014 22:10:33 -0000
Received: from bitcom1.int.sbss.com.au
	([2002:cb10:e0fe:201:a5ca:4fd3:14f:ad5d])
	by smtp2.bendigoit.com.au with esmtp (Exim 4.80)
	(envelope-from <james.harper@bendigoit.com.au>)
	id 1W11KV-0007FZ-GQ; Thu, 09 Jan 2014 09:10:27 +1100
Received: from BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d]) by
	BITCOM1.int.sbss.com.au ([fe80::a5ca:4fd3:14f:ad5d%12]) with mapi id
	14.03.0174.001; Thu, 9 Jan 2014 09:10:28 +1100
From: James Harper <james.harper@bendigoit.com.au>
To: Kristian Hagsted Rasmussen <kristian@hagsted.dk>
Thread-Topic: [Xen-devel] [GPLPV] exclude xenscsi from installer, since it
	is not compiled
Thread-Index: AQHPDK60R00LXWePCkCzso9TPwwVBpp7YYKQ
Date: Wed, 8 Jan 2014 22:10:27 +0000
Message-ID: <6035A0D088A63A46850C3988ED045A4B6F3666EC@BITCOM1.int.sbss.com.au>
References: <975f6d4ac79f4467875a54f1d1e421f5@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F358B35@BITCOM1.int.sbss.com.au>
	<2fdac9757fc5437fb788adfc5be47d6d@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F35914A@BITCOM1.int.sbss.com.au>
	<6035A0D088A63A46850C3988ED045A4B6F35941B@BITCOM1.int.sbss.com.au>
	<d8f3c90b129444cbab328b1124caaacf@hagsted-cserver.hagsted.dk>
	<6035A0D088A63A46850C3988ED045A4B6F359BBB@BITCOM1.int.sbss.com.au>
	<f1e201f7cd014619b6a786ec4d53367a@hagsted-cserver.hagsted.dk>
	<68204cf929664299be5b25e3a22488b8@hagsted-cserver.hagsted.dk>
In-Reply-To: <68204cf929664299be5b25e3a22488b8@hagsted-cserver.hagsted.dk>
Accept-Language: en-AU, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.200.84]
x-tm-as-product-ver: SMEX-11.0.0.1191-7.000.1014-20418.002
x-tm-as-result: No--49.290000-0.000000-31
x-tm-as-user-approved-sender: Yes
x-tm-as-user-blocked-sender: No
MIME-Version: 1.0
X-Really-From-Bendigo-IT: magichashvalue
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [GPLPV] exclude xenscsi from installer,
 since it is not compiled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> > > > The line in question is the following:
> > > >   HwInitializationData.HwFindAdapter = XenVbd_HwStorFindAdapter;
> > > >
> > > > XenVbd_HwStorFindAdapter is the data structure which you have
> > > > corrected a few lines in, in the patch. As it is a level 4 warning,
> > > > it can be ignored by setting /W3 in the MSC_WARNING_LEVEL.
> However I
> > > > suspect that it would be preferred to find the cause of the warning.
> > >
> > > That would imply that my function definition doesn't match the
> > > expected function definition in the HW_INITIALIZATION_DATA structure,
> > > but according to the docs I have everything right. Can you check the
> > > storport headers and check the declaration there against my function?
> >
> > For windows 8 and newer HwFindAdapter is declared as
> >   PVOID                     		HwFindAdapter;
> > While for earlier versions of windows it is declared as:
> >   PHW_FIND_ADAPTER		HwFindAdapter;
> >
> 
> I have tried to typecast the HwFindAdapter like:
>   (PHW_FIND_ADAPTER) HwInitializationData.HwFindAdapter =
> XenVbd_HwStorFindAdapter;
> 
> However this results in the following error and warnings:
>   error C2220: warning treated as error - no 'object' file generated
>   warning C4055: 'type cast' : from data pointer 'PVOID' to function pointer
> 'PHW_FIND_ADAPTER'
>   warning C4213: nonstandard extension used : cast on l-value
> 
> I might be doing something wrong, however my experience in C++ is limited.

At line 28, please insert the following:

HW_FIND_ADAPTER XenVbd_HwStorFindAdapter;

This pre-declares the function as conforming to the expected declaration.

Alternatively, your cast above is backwards, and should be:

HwInitializationData.HwFindAdapter = (PHW_FIND_ADAPTER)XenVbd_HwStorFindAdapter;

But try the declaration above first.
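To illustrate why the pre-declaration works: storport declares HW_FIND_ADAPTER as a function *type*, so a line of the form "HW_FIND_ADAPTER XenVbd_HwStorFindAdapter;" is itself a function declaration carrying the full expected prototype, which the compiler then checks the definition against. The typedefs below are simplified stand-ins for the real storport ones, only a sketch:

```c
#include <assert.h>

/* Simplified stand-in for the storport function type (not the real prototype). */
typedef unsigned long HW_FIND_ADAPTER(void *dev_ext, void *context);

/* Pre-declaration through the function typedef: the definition below
 * must now match this prototype exactly. */
HW_FIND_ADAPTER XenVbd_HwStorFindAdapter;

unsigned long XenVbd_HwStorFindAdapter(void *dev_ext, void *context)
{
	(void)dev_ext;
	(void)context;
	return 0;	/* stand-in for SP_RETURN_FOUND */
}

/* With a matching pointer type, the assignment needs no cast. */
typedef HW_FIND_ADAPTER *PHW_FIND_ADAPTER;
```

The earlier compile errors came from putting the cast on the left-hand side of the assignment (a cast on an l-value), which MSVC rejects; casting the function name on the right-hand side avoids that, but the pre-declaration is cleaner because it keeps the type check.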

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 08 22:32:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11fV-0000RK-4h; Wed, 08 Jan 2014 22:32:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jim.epost@gmail.com>) id 1W11fS-0000RC-Hf
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 22:32:07 +0000
Received: from [193.109.254.147:2525] by server-12.bemta-14.messagelabs.com id
	38/1F-13681-5E1DDC25; Wed, 08 Jan 2014 22:32:05 +0000
X-Env-Sender: jim.epost@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389220321!9692059!1
X-Originating-IP: [209.85.223.170]
X-SpamReason: No, hits=1.5 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,UNIQUE_WORDS,UPPERCASE_50_75,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12040 invoked from network); 8 Jan 2014 22:32:02 -0000
Received: from mail-ie0-f170.google.com (HELO mail-ie0-f170.google.com)
	(209.85.223.170)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 22:32:02 -0000
Received: by mail-ie0-f170.google.com with SMTP id tq11so2021026ieb.15
	for <xen-devel@lists.xenproject.org>;
	Wed, 08 Jan 2014 14:32:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=TZKVv5H0KhwAFDVtlslfrb9WqsTAISGVU32Bm78XUpU=;
	b=sWKGdhlxGA5DQDJ6mWbJmc8/C0DRn/qARW1Wk0n6enav8eM5Q+NM4cXaZzUcgGd84R
	JpgXpAjHmuQXZINqyA8ETVXHd8wA4jKsGY8iyprRUg0aag2RWyHJPZFgCjAdxFKMXfWe
	jABmDBkDUDXKoAma1g+qd3Z9/9houDawFItjgHGpeD7VcmXPK0A71xQ2wvUP8FbF/xDh
	rroBM4hBLl/T5XR/WOCxSG3SiAgp7zm/4ylJZKJPhr0GVpaKfl+R3zbuYLfuxZnyRXv0
	NWENhK4anKthF+FXK9ubq9Czjvnx5jzdvHtnzDt+a8cXs5oa7E3ian/1n32KIhV8C8JZ
	lAUw==
MIME-Version: 1.0
X-Received: by 10.43.65.145 with SMTP id xm17mr90954232icb.35.1389220320720;
	Wed, 08 Jan 2014 14:32:00 -0800 (PST)
Received: by 10.42.85.71 with HTTP; Wed, 8 Jan 2014 14:32:00 -0800 (PST)
Date: Wed, 8 Jan 2014 15:32:00 -0700
Message-ID: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
From: Jim Davis <jim.epost@gmail.com>
To: Stephen Rothwell <sfr@canb.auug.org.au>, linux-next@vger.kernel.org, 
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com, 
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary=bcaec51d2998bb54ca04ef7d0f58
Subject: [Xen-devel] randconfig build error with next-20140108,
	in drivers/xen/platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Building with the attached random configuration file,

drivers/xen/platform-pci.c: In function 'platform_pci_init':
drivers/xen/platform-pci.c:131:2: error: implicit declaration of
function 'pci_request_region' [-Werror=implicit-function-declaration]
  ret = pci_request_region(pdev, 1, DRV_NAME);
  ^
drivers/xen/platform-pci.c:170:2: error: implicit declaration of
function 'pci_release_region' [-Werror=implicit-function-declaration]
  pci_release_region(pdev, 0);
  ^
cc1: some warnings being treated as errors
make[2]: *** [drivers/xen/platform-pci.o] Error 1

This warning appeared too:

warning: (XEN_PVH) selects XEN_PVHVM which has unmet direct
dependencies (HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC)
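The Kconfig warning suggests the likely cause: XEN_PVH selects XEN_PVHVM even when CONFIG_PCI is off, so platform-pci.c gets built without the PCI region helpers. One possible shape of a fix, purely a sketch and not the patch that was actually merged (the symbol names and dependency list are taken from the warning above), would be to make XEN_PVH honour those dependencies:

```
# arch/x86/xen/Kconfig (hypothetical sketch only)
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC
	select XEN_PVHVM
```

Alternatively the select could be dropped in favour of a plain dependency, since "select" ignores the selected symbol's own dependencies by design.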

--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset=US-ASCII; name="randconfig-1389218754.txt"
Content-Disposition: attachment; filename="randconfig-1389218754.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hq75v4xc0

IwojIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgojIExpbnV4L3g4
NiAzLjEzLjAtcmM3IEtlcm5lbCBDb25maWd1cmF0aW9uCiMKQ09ORklHXzY0QklUPXkKQ09ORklH
X1g4Nl82ND15CkNPTkZJR19YODY9eQpDT05GSUdfSU5TVFJVQ1RJT05fREVDT0RFUj15CkNPTkZJ
R19PVVRQVVRfRk9STUFUPSJlbGY2NC14ODYtNjQiCkNPTkZJR19BUkNIX0RFRkNPTkZJRz0iYXJj
aC94ODYvY29uZmlncy94ODZfNjRfZGVmY29uZmlnIgpDT05GSUdfTE9DS0RFUF9TVVBQT1JUPXkK
Q09ORklHX1NUQUNLVFJBQ0VfU1VQUE9SVD15CkNPTkZJR19IQVZFX0xBVEVOQ1lUT1BfU1VQUE9S
VD15CkNPTkZJR19NTVU9eQpDT05GSUdfTkVFRF9ETUFfTUFQX1NUQVRFPXkKQ09ORklHX05FRURf
U0dfRE1BX0xFTkdUSD15CkNPTkZJR19HRU5FUklDX0lTQV9ETUE9eQpDT05GSUdfR0VORVJJQ19I
V0VJR0hUPXkKQ09ORklHX0FSQ0hfTUFZX0hBVkVfUENfRkRDPXkKQ09ORklHX1JXU0VNX1hDSEdB
RERfQUxHT1JJVEhNPXkKQ09ORklHX0dFTkVSSUNfQ0FMSUJSQVRFX0RFTEFZPXkKQ09ORklHX0FS
Q0hfSEFTX0NQVV9SRUxBWD15CkNPTkZJR19BUkNIX0hBU19DQUNIRV9MSU5FX1NJWkU9eQpDT05G
SUdfQVJDSF9IQVNfQ1BVX0FVVE9QUk9CRT15CkNPTkZJR19IQVZFX1NFVFVQX1BFUl9DUFVfQVJF
QT15CkNPTkZJR19ORUVEX1BFUl9DUFVfRU1CRURfRklSU1RfQ0hVTks9eQpDT05GSUdfTkVFRF9Q
RVJfQ1BVX1BBR0VfRklSU1RfQ0hVTks9eQpDT05GSUdfQVJDSF9ISUJFUk5BVElPTl9QT1NTSUJM
RT15CkNPTkZJR19BUkNIX1NVU1BFTkRfUE9TU0lCTEU9eQpDT05GSUdfQVJDSF9XQU5UX0hVR0Vf
UE1EX1NIQVJFPXkKQ09ORklHX0FSQ0hfV0FOVF9HRU5FUkFMX0hVR0VUTEI9eQpDT05GSUdfWk9O
RV9ETUEzMj15CkNPTkZJR19BVURJVF9BUkNIPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfT1BUSU1J
WkVEX0lOTElOSU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfREVCVUdfUEFHRUFMTE9DPXkKQ09O
RklHX1g4Nl82NF9TTVA9eQpDT05GSUdfWDg2X0hUPXkKQ09ORklHX0FSQ0hfSFdFSUdIVF9DRkxB
R1M9Ii1mY2FsbC1zYXZlZC1yZGkgLWZjYWxsLXNhdmVkLXJzaSAtZmNhbGwtc2F2ZWQtcmR4IC1m
Y2FsbC1zYXZlZC1yY3ggLWZjYWxsLXNhdmVkLXI4IC1mY2FsbC1zYXZlZC1yOSAtZmNhbGwtc2F2
ZWQtcjEwIC1mY2FsbC1zYXZlZC1yMTEiCkNPTkZJR19BUkNIX1NVUFBPUlRTX1VQUk9CRVM9eQpD
T05GSUdfREVGQ09ORklHX0xJU1Q9Ii9saWIvbW9kdWxlcy8kVU5BTUVfUkVMRUFTRS8uY29uZmln
IgpDT05GSUdfSVJRX1dPUks9eQpDT05GSUdfQlVJTERUSU1FX0VYVEFCTEVfU09SVD15CgojCiMg
R2VuZXJhbCBzZXR1cAojCkNPTkZJR19JTklUX0VOVl9BUkdfTElNSVQ9MzIKQ09ORklHX0NST1NT
X0NPTVBJTEU9IiIKQ09ORklHX0NPTVBJTEVfVEVTVD15CkNPTkZJR19MT0NBTFZFUlNJT049IiIK
IyBDT05GSUdfTE9DQUxWRVJTSU9OX0FVVE8gaXMgbm90IHNldApDT05GSUdfSEFWRV9LRVJORUxf
R1pJUD15CkNPTkZJR19IQVZFX0tFUk5FTF9CWklQMj15CkNPTkZJR19IQVZFX0tFUk5FTF9MWk1B
PXkKQ09ORklHX0hBVkVfS0VSTkVMX1haPXkKQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15CkNPTkZJ
R19IQVZFX0tFUk5FTF9MWjQ9eQpDT05GSUdfS0VSTkVMX0daSVA9eQojIENPTkZJR19LRVJORUxf
QlpJUDIgaXMgbm90IHNldAojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0CiMgQ09ORklH
X0tFUk5FTF9YWiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWk8gaXMgbm90IHNldAojIENP
TkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSE9TVE5BTUU9Iihub25l
KSIKQ09ORklHX1NXQVA9eQpDT05GSUdfU1lTVklQQz15CkNPTkZJR19QT1NJWF9NUVVFVUU9eQoj
IENPTkZJR19GSEFORExFIGlzIG5vdCBzZXQKQ09ORklHX0FVRElUPXkKQ09ORklHX0FVRElUU1lT
Q0FMTD15CkNPTkZJR19BVURJVF9XQVRDSD15CkNPTkZJR19BVURJVF9UUkVFPXkKCiMKIyBJUlEg
c3Vic3lzdGVtCiMKQ09ORklHX0dFTkVSSUNfSVJRX1BST0JFPXkKQ09ORklHX0dFTkVSSUNfSVJR
X1NIT1c9eQpDT05GSUdfR0VORVJJQ19QRU5ESU5HX0lSUT15CkNPTkZJR19JUlFfRE9NQUlOPXkK
IyBDT05GSUdfSVJRX0RPTUFJTl9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19JUlFfRk9SQ0VEX1RI
UkVBRElORz15CkNPTkZJR19TUEFSU0VfSVJRPXkKQ09ORklHX0NMT0NLU09VUkNFX1dBVENIRE9H
PXkKQ09ORklHX0FSQ0hfQ0xPQ0tTT1VSQ0VfREFUQT15CkNPTkZJR19HRU5FUklDX1RJTUVfVlNZ
U0NBTEw9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5UUz15CkNPTkZJR19HRU5FUklDX0NMT0NL
RVZFTlRTX0JVSUxEPXkKQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlJPQURDQVNUPXkKQ09O
RklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfTUlOX0FESlVTVD15CkNPTkZJR19HRU5FUklDX0NNT1Nf
VVBEQVRFPXkKCiMKIyBUaW1lcnMgc3Vic3lzdGVtCiMKQ09ORklHX1RJQ0tfT05FU0hPVD15CkNP
TkZJR19OT19IWl9DT01NT049eQojIENPTkZJR19IWl9QRVJJT0RJQyBpcyBub3Qgc2V0CkNPTkZJ
R19OT19IWl9JRExFPXkKIyBDT05GSUdfTk9fSFpfRlVMTCBpcyBub3Qgc2V0CkNPTkZJR19OT19I
Wj15CkNPTkZJR19ISUdIX1JFU19USU1FUlM9eQoKIwojIENQVS9UYXNrIHRpbWUgYW5kIHN0YXRz
IGFjY291bnRpbmcKIwpDT05GSUdfVElDS19DUFVfQUNDT1VOVElORz15CiMgQ09ORklHX1ZJUlRf
Q1BVX0FDQ09VTlRJTkdfR0VOIGlzIG5vdCBzZXQKIyBDT05GSUdfSVJRX1RJTUVfQUNDT1VOVElO
RyBpcyBub3Qgc2V0CkNPTkZJR19CU0RfUFJPQ0VTU19BQ0NUPXkKQ09ORklHX0JTRF9QUk9DRVNT
X0FDQ1RfVjM9eQpDT05GSUdfVEFTS1NUQVRTPXkKQ09ORklHX1RBU0tfREVMQVlfQUNDVD15CiMg
Q09ORklHX1RBU0tfWEFDQ1QgaXMgbm90IHNldAoKIwojIFJDVSBTdWJzeXN0ZW0KIwpDT05GSUdf
VFJFRV9SQ1U9eQojIENPTkZJR19QUkVFTVBUX1JDVSBpcyBub3Qgc2V0CkNPTkZJR19SQ1VfU1RB
TExfQ09NTU9OPXkKQ09ORklHX0NPTlRFWFRfVFJBQ0tJTkc9eQpDT05GSUdfUkNVX1VTRVJfUVM9
eQpDT05GSUdfQ09OVEVYVF9UUkFDS0lOR19GT1JDRT15CkNPTkZJR19SQ1VfRkFOT1VUPTY0CkNP
TkZJR19SQ1VfRkFOT1VUX0xFQUY9MTYKQ09ORklHX1JDVV9GQU5PVVRfRVhBQ1Q9eQojIENPTkZJ
R19SQ1VfRkFTVF9OT19IWiBpcyBub3Qgc2V0CkNPTkZJR19UUkVFX1JDVV9UUkFDRT15CiMgQ09O
RklHX1JDVV9OT0NCX0NQVSBpcyBub3Qgc2V0CkNPTkZJR19JS0NPTkZJRz15CiMgQ09ORklHX0lL
Q09ORklHX1BST0MgaXMgbm90IHNldApDT05GSUdfTE9HX0JVRl9TSElGVD0xNwpDT05GSUdfSEFW
RV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX05VTUFfQkFMQU5D
SU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfSU5UMTI4PXkKQ09ORklHX0FSQ0hfV0FOVFNfUFJP
VF9OVU1BX1BST1RfTk9ORT15CkNPTkZJR19BUkNIX1VTRVNfTlVNQV9QUk9UX05PTkU9eQojIENP
TkZJR19OVU1BX0JBTEFOQ0lOR19ERUZBVUxUX0VOQUJMRUQgaXMgbm90IHNldApDT05GSUdfTlVN
QV9CQUxBTkNJTkc9eQpDT05GSUdfQ0dST1VQUz15CiMgQ09ORklHX0NHUk9VUF9ERUJVRyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NHUk9VUF9GUkVFWkVSIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0dST1VQ
X0RFVklDRSBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVVNFVFMgaXMgbm90IHNldApDT05GSUdfQ0dS
T1VQX0NQVUFDQ1Q9eQojIENPTkZJR19SRVNPVVJDRV9DT1VOVEVSUyBpcyBub3Qgc2V0CiMgQ09O
RklHX0NHUk9VUF9QRVJGIGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUF9TQ0hFRD15CkNPTkZJR19G
QUlSX0dST1VQX1NDSEVEPXkKIyBDT05GSUdfQ0ZTX0JBTkRXSURUSCBpcyBub3Qgc2V0CkNPTkZJ
R19SVF9HUk9VUF9TQ0hFRD15CiMgQ09ORklHX0JMS19DR1JPVVAgaXMgbm90IHNldApDT05GSUdf
Q0hFQ0tQT0lOVF9SRVNUT1JFPXkKIyBDT05GSUdfTkFNRVNQQUNFUyBpcyBub3Qgc2V0CiMgQ09O
RklHX1VJREdJRF9TVFJJQ1RfVFlQRV9DSEVDS1MgaXMgbm90IHNldApDT05GSUdfU0NIRURfQVVU
T0dST1VQPXkKQ09ORklHX1NZU0ZTX0RFUFJFQ0FURUQ9eQojIENPTkZJR19TWVNGU19ERVBSRUNB
VEVEX1YyIGlzIG5vdCBzZXQKQ09ORklHX1JFTEFZPXkKQ09ORklHX0JMS19ERVZfSU5JVFJEPXkK
Q09ORklHX0lOSVRSQU1GU19TT1VSQ0U9IiIKQ09ORklHX1JEX0daSVA9eQpDT05GSUdfUkRfQlpJ
UDI9eQojIENPTkZJR19SRF9MWk1BIGlzIG5vdCBzZXQKIyBDT05GSUdfUkRfWFogaXMgbm90IHNl
dApDT05GSUdfUkRfTFpPPXkKQ09ORklHX1JEX0xaND15CkNPTkZJR19DQ19PUFRJTUlaRV9GT1Jf
U0laRT15CkNPTkZJR19BTk9OX0lOT0RFUz15CkNPTkZJR19TWVNDVExfRVhDRVBUSU9OX1RSQUNF
PXkKQ09ORklHX0hBVkVfUENTUEtSX1BMQVRGT1JNPXkKQ09ORklHX0VYUEVSVD15CkNPTkZJR19L
QUxMU1lNUz15CkNPTkZJR19LQUxMU1lNU19BTEw9eQojIENPTkZJR19QUklOVEsgaXMgbm90IHNl
dAojIENPTkZJR19CVUcgaXMgbm90IHNldApDT05GSUdfRUxGX0NPUkU9eQpDT05GSUdfUENTUEtS
X1BMQVRGT1JNPXkKQ09ORklHX0JBU0VfRlVMTD15CiMgQ09ORklHX0ZVVEVYIGlzIG5vdCBzZXQK
Q09ORklHX0VQT0xMPXkKIyBDT05GSUdfU0lHTkFMRkQgaXMgbm90IHNldApDT05GSUdfVElNRVJG
RD15CkNPTkZJR19FVkVOVEZEPXkKIyBDT05GSUdfU0hNRU0gaXMgbm90IHNldAojIENPTkZJR19B
SU8gaXMgbm90IHNldAojIENPTkZJR19FTUJFRERFRCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX1BF
UkZfRVZFTlRTPXkKQ09ORklHX1BFUkZfVVNFX1ZNQUxMT0M9eQoKIwojIEtlcm5lbCBQZXJmb3Jt
YW5jZSBFdmVudHMgQW5kIENvdW50ZXJzCiMKQ09ORklHX1BFUkZfRVZFTlRTPXkKQ09ORklHX0RF
QlVHX1BFUkZfVVNFX1ZNQUxMT0M9eQojIENPTkZJR19WTV9FVkVOVF9DT1VOVEVSUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1NMVUJfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19DT01QQVRfQlJLIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0xBQiBpcyBub3Qgc2V0CkNPTkZJR19TTFVCPXkKIyBDT05GSUdf
U0xPQiBpcyBub3Qgc2V0CkNPTkZJR19TTFVCX0NQVV9QQVJUSUFMPXkKQ09ORklHX1BST0ZJTElO
Rz15CkNPTkZJR19UUkFDRVBPSU5UUz15CkNPTkZJR19PUFJPRklMRT15CkNPTkZJR19PUFJPRklM
RV9FVkVOVF9NVUxUSVBMRVg9eQpDT05GSUdfSEFWRV9PUFJPRklMRT15CkNPTkZJR19PUFJPRklM
RV9OTUlfVElNRVI9eQojIENPTkZJR19KVU1QX0xBQkVMIGlzIG5vdCBzZXQKIyBDT05GSUdfSEFW
RV82NEJJVF9BTElHTkVEX0FDQ0VTUyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0VGRklDSUVOVF9V
TkFMSUdORURfQUNDRVNTPXkKQ09ORklHX0FSQ0hfVVNFX0JVSUxUSU5fQlNXQVA9eQpDT05GSUdf
VVNFUl9SRVRVUk5fTk9USUZJRVI9eQpDT05GSUdfSEFWRV9JT1JFTUFQX1BST1Q9eQpDT05GSUdf
SEFWRV9LUFJPQkVTPXkKQ09ORklHX0hBVkVfS1JFVFBST0JFUz15CkNPTkZJR19IQVZFX09QVFBS
T0JFUz15CkNPTkZJR19IQVZFX0tQUk9CRVNfT05fRlRSQUNFPXkKQ09ORklHX0hBVkVfQVJDSF9U
UkFDRUhPT0s9eQpDT05GSUdfSEFWRV9ETUFfQVRUUlM9eQpDT05GSUdfR0VORVJJQ19TTVBfSURM
RV9USFJFQUQ9eQpDT05GSUdfSEFWRV9SRUdTX0FORF9TVEFDS19BQ0NFU1NfQVBJPXkKQ09ORklH
X0hBVkVfRE1BX0FQSV9ERUJVRz15CkNPTkZJR19IQVZFX0hXX0JSRUFLUE9JTlQ9eQpDT05GSUdf
SEFWRV9NSVhFRF9CUkVBS1BPSU5UU19SRUdTPXkKQ09ORklHX0hBVkVfVVNFUl9SRVRVUk5fTk9U
SUZJRVI9eQpDT05GSUdfSEFWRV9QRVJGX0VWRU5UU19OTUk9eQpDT05GSUdfSEFWRV9QRVJGX1JF
R1M9eQpDT05GSUdfSEFWRV9QRVJGX1VTRVJfU1RBQ0tfRFVNUD15CkNPTkZJR19IQVZFX0FSQ0hf
SlVNUF9MQUJFTD15CkNPTkZJR19BUkNIX0hBVkVfTk1JX1NBRkVfQ01QWENIRz15CkNPTkZJR19I
QVZFX0FMSUdORURfU1RSVUNUX1BBR0U9eQpDT05GSUdfSEFWRV9DTVBYQ0hHX0xPQ0FMPXkKQ09O
RklHX0hBVkVfQ01QWENIR19ET1VCTEU9eQpDT05GSUdfSEFWRV9BUkNIX1NFQ0NPTVBfRklMVEVS
PXkKQ09ORklHX0hBVkVfQ0NfU1RBQ0tQUk9URUNUT1I9eQojIENPTkZJR19DQ19TVEFDS1BST1RF
Q1RPUiBpcyBub3Qgc2V0CkNPTkZJR19DQ19TVEFDS1BST1RFQ1RPUl9OT05FPXkKIyBDT05GSUdf
Q0NfU1RBQ0tQUk9URUNUT1JfUkVHVUxBUiBpcyBub3Qgc2V0CiMgQ09ORklHX0NDX1NUQUNLUFJP
VEVDVE9SX1NUUk9ORyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0NPTlRFWFRfVFJBQ0tJTkc9eQpD
T05GSUdfSEFWRV9WSVJUX0NQVV9BQ0NPVU5USU5HX0dFTj15CkNPTkZJR19IQVZFX0lSUV9USU1F
X0FDQ09VTlRJTkc9eQpDT05GSUdfSEFWRV9BUkNIX1RSQU5TUEFSRU5UX0hVR0VQQUdFPXkKQ09O
RklHX0hBVkVfQVJDSF9TT0ZUX0RJUlRZPXkKQ09ORklHX01PRFVMRVNfVVNFX0VMRl9SRUxBPXkK
Q09ORklHX0hBVkVfSVJRX0VYSVRfT05fSVJRX1NUQUNLPXkKCiMKIyBHQ09WLWJhc2VkIGtlcm5l
bCBwcm9maWxpbmcKIwojIENPTkZJR19HQ09WX0tFUk5FTCBpcyBub3Qgc2V0CiMgQ09ORklHX0hB
VkVfR0VORVJJQ19ETUFfQ09IRVJFTlQgaXMgbm90IHNldApDT05GSUdfUlRfTVVURVhFUz15CkNP
TkZJR19CQVNFX1NNQUxMPTAKIyBDT05GSUdfTU9EVUxFUyBpcyBub3Qgc2V0CkNPTkZJR19TVE9Q
X01BQ0hJTkU9eQpDT05GSUdfQkxPQ0s9eQpDT05GSUdfQkxLX0RFVl9CU0c9eQpDT05GSUdfQkxL
X0RFVl9CU0dMSUI9eQojIENPTkZJR19CTEtfREVWX0lOVEVHUklUWSBpcyBub3Qgc2V0CkNPTkZJ
R19CTEtfQ01ETElORV9QQVJTRVI9eQoKIwojIFBhcnRpdGlvbiBUeXBlcwojCkNPTkZJR19QQVJU
SVRJT05fQURWQU5DRUQ9eQpDT05GSUdfQUNPUk5fUEFSVElUSU9OPXkKQ09ORklHX0FDT1JOX1BB
UlRJVElPTl9DVU1BTkE9eQojIENPTkZJR19BQ09STl9QQVJUSVRJT05fRUVTT1ggaXMgbm90IHNl
dAojIENPTkZJR19BQ09STl9QQVJUSVRJT05fSUNTIGlzIG5vdCBzZXQKIyBDT05GSUdfQUNPUk5f
UEFSVElUSU9OX0FERlMgaXMgbm90IHNldAojIENPTkZJR19BQ09STl9QQVJUSVRJT05fUE9XRVJU
RUMgaXMgbm90IHNldApDT05GSUdfQUNPUk5fUEFSVElUSU9OX1JJU0NJWD15CiMgQ09ORklHX0FJ
WF9QQVJUSVRJT04gaXMgbm90IHNldAojIENPTkZJR19PU0ZfUEFSVElUSU9OIGlzIG5vdCBzZXQK
IyBDT05GSUdfQU1JR0FfUEFSVElUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRBUklfUEFSVElU
SU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFDX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19N
U0RPU19QQVJUSVRJT049eQpDT05GSUdfQlNEX0RJU0tMQUJFTD15CiMgQ09ORklHX01JTklYX1NV
QlBBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19TT0xBUklTX1g4Nl9QQVJUSVRJT049eQpDT05G
SUdfVU5JWFdBUkVfRElTS0xBQkVMPXkKQ09ORklHX0xETV9QQVJUSVRJT049eQpDT05GSUdfTERN
X0RFQlVHPXkKQ09ORklHX1NHSV9QQVJUSVRJT049eQpDT05GSUdfVUxUUklYX1BBUlRJVElPTj15
CiMgQ09ORklHX1NVTl9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfS0FSTUFfUEFSVElUSU9O
PXkKIyBDT05GSUdfRUZJX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19TWVNWNjhfUEFSVElU
SU9OPXkKQ09ORklHX0NNRExJTkVfUEFSVElUSU9OPXkKCiMKIyBJTyBTY2hlZHVsZXJzCiMKQ09O
RklHX0lPU0NIRURfTk9PUD15CkNPTkZJR19JT1NDSEVEX0RFQURMSU5FPXkKIyBDT05GSUdfSU9T
Q0hFRF9DRlEgaXMgbm90IHNldApDT05GSUdfREVGQVVMVF9ERUFETElORT15CiMgQ09ORklHX0RF
RkFVTFRfTk9PUCBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX0lPU0NIRUQ9ImRlYWRsaW5lIgpD
T05GSUdfUFJFRU1QVF9OT1RJRklFUlM9eQpDT05GSUdfUEFEQVRBPXkKQ09ORklHX1VOSU5MSU5F
X1NQSU5fVU5MT0NLPXkKQ09ORklHX0ZSRUVaRVI9eQoKIwojIFByb2Nlc3NvciB0eXBlIGFuZCBm
ZWF0dXJlcwojCkNPTkZJR19aT05FX0RNQT15CkNPTkZJR19TTVA9eQojIENPTkZJR19YODZfTVBQ
QVJTRSBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9FWFRFTkRFRF9QTEFURk9STSBpcyBub3Qgc2V0
CkNPTkZJR19YODZfU1VQUE9SVFNfTUVNT1JZX0ZBSUxVUkU9eQojIENPTkZJR19TQ0hFRF9PTUlU
X0ZSQU1FX1BPSU5URVIgaXMgbm90IHNldApDT05GSUdfSFlQRVJWSVNPUl9HVUVTVD15CkNPTkZJ
R19QQVJBVklSVD15CiMgQ09ORklHX1BBUkFWSVJUX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
UEFSQVZJUlRfU1BJTkxPQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1hFTj15CiMgQ09ORklHX1hFTl9Q
UklWSUxFR0VEX0dVRVNUIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9QVkhWTT15CkNPTkZJR19YRU5f
TUFYX0RPTUFJTl9NRU1PUlk9NTAwCkNPTkZJR19YRU5fU0FWRV9SRVNUT1JFPXkKQ09ORklHX1hF
Tl9ERUJVR19GUz15CkNPTkZJR19YRU5fUFZIPXkKIyBDT05GSUdfS1ZNX0dVRVNUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUEFSQVZJUlRfVElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQKQ09ORklHX1BB
UkFWSVJUX0NMT0NLPXkKQ09ORklHX05PX0JPT1RNRU09eQojIENPTkZJR19NRU1URVNUIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUs4IGlzIG5vdCBzZXQKIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0CiMg
Q09ORklHX01DT1JFMiBpcyBub3Qgc2V0CiMgQ09ORklHX01BVE9NIGlzIG5vdCBzZXQKQ09ORklH
X0dFTkVSSUNfQ1BVPXkKQ09ORklHX1g4Nl9JTlRFUk5PREVfQ0FDSEVfU0hJRlQ9NgpDT05GSUdf
WDg2X0wxX0NBQ0hFX1NISUZUPTYKQ09ORklHX1g4Nl9UU0M9eQpDT05GSUdfWDg2X0NNUFhDSEc2
ND15CkNPTkZJR19YODZfQ01PVj15CkNPTkZJR19YODZfTUlOSU1VTV9DUFVfRkFNSUxZPTY0CkNP
TkZJR19YODZfREVCVUdDVExNU1I9eQojIENPTkZJR19QUk9DRVNTT1JfU0VMRUNUIGlzIG5vdCBz
ZXQKQ09ORklHX0NQVV9TVVBfSU5URUw9eQpDT05GSUdfQ1BVX1NVUF9BTUQ9eQpDT05GSUdfQ1BV
X1NVUF9DRU5UQVVSPXkKQ09ORklHX0hQRVRfVElNRVI9eQpDT05GSUdfSFBFVF9FTVVMQVRFX1JU
Qz15CiMgQ09ORklHX0RNSSBpcyBub3Qgc2V0CkNPTkZJR19TV0lPVExCPXkKQ09ORklHX0lPTU1V
X0hFTFBFUj15CkNPTkZJR19NQVhTTVA9eQpDT05GSUdfTlJfQ1BVUz04MTkyCkNPTkZJR19TQ0hF
RF9TTVQ9eQpDT05GSUdfU0NIRURfTUM9eQpDT05GSUdfUFJFRU1QVF9OT05FPXkKIyBDT05GSUdf
UFJFRU1QVF9WT0xVTlRBUlkgaXMgbm90IHNldAojIENPTkZJR19QUkVFTVBUIGlzIG5vdCBzZXQK
Q09ORklHX1BSRUVNUFRfQ09VTlQ9eQpDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQpDT05GSUdfWDg2
X0lPX0FQSUM9eQojIENPTkZJR19YODZfUkVST1VURV9GT1JfQlJPS0VOX0JPT1RfSVJRUyBpcyBu
b3Qgc2V0CkNPTkZJR19YODZfTUNFPXkKIyBDT05GSUdfWDg2X01DRV9JTlRFTCBpcyBub3Qgc2V0
CiMgQ09ORklHX1g4Nl9NQ0VfQU1EIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9NQ0VfSU5KRUNUPXkK
Q09ORklHX0k4Sz15CiMgQ09ORklHX01JQ1JPQ09ERSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JP
Q09ERV9JTlRFTF9FQVJMWSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERV9BTURfRUFSTFkg
aXMgbm90IHNldAojIENPTkZJR19YODZfTVNSIGlzIG5vdCBzZXQKIyBDT05GSUdfWDg2X0NQVUlE
IGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfUEhZU19BRERSX1RfNjRCSVQ9eQpDT05GSUdfQVJDSF9E
TUFfQUREUl9UXzY0QklUPXkKQ09ORklHX0RJUkVDVF9HQlBBR0VTPXkKQ09ORklHX05VTUE9eQoj
IENPTkZJR19OVU1BX0VNVSBpcyBub3Qgc2V0CkNPTkZJR19OT0RFU19TSElGVD0xMApDT05GSUdf
QVJDSF9TUEFSU0VNRU1fRU5BQkxFPXkKQ09ORklHX0FSQ0hfU1BBUlNFTUVNX0RFRkFVTFQ9eQpD
T05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZX01PREVMPXkKIyBDT05GSUdfQVJDSF9NRU1PUllfUFJP
QkUgaXMgbm90IHNldApDT05GSUdfSUxMRUdBTF9QT0lOVEVSX1ZBTFVFPTB4ZGVhZDAwMDAwMDAw
MDAwMApDT05GSUdfU0VMRUNUX01FTU9SWV9NT0RFTD15CkNPTkZJR19TUEFSU0VNRU1fTUFOVUFM
PXkKQ09ORklHX1NQQVJTRU1FTT15CkNPTkZJR19ORUVEX01VTFRJUExFX05PREVTPXkKQ09ORklH
X0hBVkVfTUVNT1JZX1BSRVNFTlQ9eQpDT05GSUdfU1BBUlNFTUVNX0VYVFJFTUU9eQpDT05GSUdf
U1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkKQ09ORklHX1NQQVJTRU1FTV9BTExPQ19NRU1fTUFQ
X1RPR0VUSEVSPXkKQ09ORklHX1NQQVJTRU1FTV9WTUVNTUFQPXkKQ09ORklHX0hBVkVfTUVNQkxP
Q0s9eQpDT05GSUdfSEFWRV9NRU1CTE9DS19OT0RFX01BUD15CkNPTkZJR19BUkNIX0RJU0NBUkRf
TUVNQkxPQ0s9eQpDT05GSUdfTUVNT1JZX0lTT0xBVElPTj15CiMgQ09ORklHX01PVkFCTEVfTk9E
RSBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RFPXkKQ09ORklHX01FTU9S
WV9IT1RQTFVHPXkKQ09ORklHX01FTU9SWV9IT1RQTFVHX1NQQVJTRT15CkNPTkZJR19NRU1PUllf
SE9UUkVNT1ZFPXkKQ09ORklHX1BBR0VGTEFHU19FWFRFTkRFRD15CkNPTkZJR19TUExJVF9QVExP
Q0tfQ1BVUz00CkNPTkZJR19BUkNIX0VOQUJMRV9TUExJVF9QTURfUFRMT0NLPXkKIyBDT05GSUdf
QkFMTE9PTl9DT01QQUNUSU9OIGlzIG5vdCBzZXQKQ09ORklHX0NPTVBBQ1RJT049eQpDT05GSUdf
TUlHUkFUSU9OPXkKQ09ORklHX1BIWVNfQUREUl9UXzY0QklUPXkKQ09ORklHX1pPTkVfRE1BX0ZM
QUc9MQojIENPTkZJR19CT1VOQ0UgaXMgbm90IHNldApDT05GSUdfVklSVF9UT19CVVM9eQpDT05G
SUdfTU1VX05PVElGSUVSPXkKIyBDT05GSUdfS1NNIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRf
TU1BUF9NSU5fQUREUj00MDk2CkNPTkZJR19BUkNIX1NVUFBPUlRTX01FTU9SWV9GQUlMVVJFPXkK
IyBDT05GSUdfTUVNT1JZX0ZBSUxVUkUgaXMgbm90IHNldApDT05GSUdfVFJBTlNQQVJFTlRfSFVH
RVBBR0U9eQpDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0VfQUxXQVlTPXkKIyBDT05GSUdfVFJB
TlNQQVJFTlRfSFVHRVBBR0VfTUFEVklTRSBpcyBub3Qgc2V0CkNPTkZJR19DUk9TU19NRU1PUllf
QVRUQUNIPXkKIyBDT05GSUdfQ0xFQU5DQUNIRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZST05UU1dB
UCBpcyBub3Qgc2V0CiMgQ09ORklHX0NNQSBpcyBub3Qgc2V0CiMgQ09ORklHX1pCVUQgaXMgbm90
IHNldApDT05GSUdfTUVNX1NPRlRfRElSVFk9eQpDT05GSUdfWlNNQUxMT0M9eQpDT05GSUdfUEdU
QUJMRV9NQVBQSU5HPXkKQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJVUFRJT049eQojIENPTkZJ
R19YODZfQk9PVFBBUkFNX01FTU9SWV9DT1JSVVBUSU9OX0NIRUNLIGlzIG5vdCBzZXQKQ09ORklH
X1g4Nl9SRVNFUlZFX0xPVz02NAojIENPTkZJR19NVFJSIGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hf
UkFORE9NPXkKQ09ORklHX1g4Nl9TTUFQPXkKIyBDT05GSUdfU0VDQ09NUCBpcyBub3Qgc2V0CiMg
Q09ORklHX0haXzEwMCBpcyBub3Qgc2V0CkNPTkZJR19IWl8yNTA9eQojIENPTkZJR19IWl8zMDAg
aXMgbm90IHNldAojIENPTkZJR19IWl8xMDAwIGlzIG5vdCBzZXQKQ09ORklHX0haPTI1MApDT05G
SUdfU0NIRURfSFJUSUNLPXkKQ09ORklHX0tFWEVDPXkKIyBDT05GSUdfQ1JBU0hfRFVNUCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0tFWEVDX0pVTVAgaXMgbm90IHNldApDT05GSUdfUEhZU0lDQUxfU1RB
UlQ9MHgxMDAwMDAwCkNPTkZJR19SRUxPQ0FUQUJMRT15CkNPTkZJR19QSFlTSUNBTF9BTElHTj0w
eDIwMDAwMApDT05GSUdfSE9UUExVR19DUFU9eQojIENPTkZJR19CT09UUEFSQU1fSE9UUExVR19D
UFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfSE9UUExVR19DUFUwIGlzIG5vdCBzZXQKQ09O
RklHX0NNRExJTkVfQk9PTD15CkNPTkZJR19DTURMSU5FPSIiCkNPTkZJR19DTURMSU5FX09WRVJS
SURFPXkKQ09ORklHX0FSQ0hfRU5BQkxFX01FTU9SWV9IT1RQTFVHPXkKQ09ORklHX0FSQ0hfRU5B
QkxFX01FTU9SWV9IT1RSRU1PVkU9eQpDT05GSUdfVVNFX1BFUkNQVV9OVU1BX05PREVfSUQ9eQoK
IwojIFBvd2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9ucwojCkNPTkZJR19BUkNIX0hJQkVS
TkFUSU9OX0hFQURFUj15CiMgQ09ORklHX1NVU1BFTkQgaXMgbm90IHNldApDT05GSUdfSElCRVJO
QVRFX0NBTExCQUNLUz15CkNPTkZJR19ISUJFUk5BVElPTj15CkNPTkZJR19QTV9TVERfUEFSVElU
SU9OPSIiCkNPTkZJR19QTV9TTEVFUD15CkNPTkZJR19QTV9TTEVFUF9TTVA9eQpDT05GSUdfUE1f
QVVUT1NMRUVQPXkKIyBDT05GSUdfUE1fV0FLRUxPQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1BNX1JV
TlRJTUU9eQpDT05GSUdfUE09eQojIENPTkZJR19QTV9ERUJVRyBpcyBub3Qgc2V0CiMgQ09ORklH
X1dRX1BPV0VSX0VGRklDSUVOVF9ERUZBVUxUIGlzIG5vdCBzZXQKQ09ORklHX1NGST15CgojCiMg
Q1BVIEZyZXF1ZW5jeSBzY2FsaW5nCiMKIyBDT05GSUdfQ1BVX0ZSRVEgaXMgbm90IHNldAoKIwoj
IENQVSBJZGxlCiMKQ09ORklHX0NQVV9JRExFPXkKIyBDT05GSUdfQ1BVX0lETEVfTVVMVElQTEVf
RFJJVkVSUyBpcyBub3Qgc2V0CkNPTkZJR19DUFVfSURMRV9HT1ZfTEFEREVSPXkKQ09ORklHX0NQ
VV9JRExFX0dPVl9NRU5VPXkKIyBDT05GSUdfQVJDSF9ORUVEU19DUFVfSURMRV9DT1VQTEVEIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU5URUxfSURMRSBpcyBub3Qgc2V0CgojCiMgTWVtb3J5IHBvd2Vy
IHNhdmluZ3MKIwojIENPTkZJR19JNzMwMF9JRExFIGlzIG5vdCBzZXQKCiMKIyBCdXMgb3B0aW9u
cyAoUENJIGV0Yy4pCiMKIyBDT05GSUdfUENJIGlzIG5vdCBzZXQKQ09ORklHX0lTQV9ETUFfQVBJ
PXkKQ09ORklHX1BDQ0FSRD15CkNPTkZJR19QQ01DSUE9eQpDT05GSUdfUENNQ0lBX0xPQURfQ0lT
PXkKCiMKIyBQQy1jYXJkIGJyaWRnZXMKIwpDT05GSUdfWDg2X1NZU0ZCPXkKCiMKIyBFeGVjdXRh
YmxlIGZpbGUgZm9ybWF0cyAvIEVtdWxhdGlvbnMKIwpDT05GSUdfQklORk1UX0VMRj15CkNPTkZJ
R19BUkNIX0JJTkZNVF9FTEZfUkFORE9NSVpFX1BJRT15CkNPTkZJR19DT1JFX0RVTVBfREVGQVVM
VF9FTEZfSEVBREVSUz15CiMgQ09ORklHX0JJTkZNVF9TQ1JJUFQgaXMgbm90IHNldAojIENPTkZJ
R19IQVZFX0FPVVQgaXMgbm90IHNldAojIENPTkZJR19CSU5GTVRfTUlTQyBpcyBub3Qgc2V0CkNP
TkZJR19DT1JFRFVNUD15CiMgQ09ORklHX0lBMzJfRU1VTEFUSU9OIGlzIG5vdCBzZXQKQ09ORklH
X1g4Nl9ERVZfRE1BX09QUz15CkNPTkZJR19ORVQ9eQoKIwojIE5ldHdvcmtpbmcgb3B0aW9ucwoj
CkNPTkZJR19QQUNLRVQ9eQpDT05GSUdfUEFDS0VUX0RJQUc9eQpDT05GSUdfVU5JWD15CkNPTkZJ
R19VTklYX0RJQUc9eQojIENPTkZJR19ORVRfS0VZIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5FVCBp
cyBub3Qgc2V0CkNPTkZJR19ORVRXT1JLX1NFQ01BUks9eQpDT05GSUdfTkVUV09SS19QSFlfVElN
RVNUQU1QSU5HPXkKQ09ORklHX05FVEZJTFRFUj15CiMgQ09ORklHX05FVEZJTFRFUl9ERUJVRyBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVEZJTFRFUl9BRFZBTkNFRCBpcyBub3Qgc2V0CkNPTkZJR19B
VE09eQpDT05GSUdfQVRNX0xBTkU9eQojIENPTkZJR19CUklER0UgaXMgbm90IHNldApDT05GSUdf
SEFWRV9ORVRfRFNBPXkKQ09ORklHX05FVF9EU0E9eQpDT05GSUdfTkVUX0RTQV9UQUdfRFNBPXkK
Q09ORklHX05FVF9EU0FfVEFHX0VEU0E9eQpDT05GSUdfTkVUX0RTQV9UQUdfVFJBSUxFUj15CkNP
TkZJR19WTEFOXzgwMjFRPXkKIyBDT05GSUdfVkxBTl84MDIxUV9HVlJQIGlzIG5vdCBzZXQKIyBD
T05GSUdfVkxBTl84MDIxUV9NVlJQIGlzIG5vdCBzZXQKQ09ORklHX0RFQ05FVD15CkNPTkZJR19E
RUNORVRfUk9VVEVSPXkKQ09ORklHX0xMQz15CkNPTkZJR19MTEMyPXkKQ09ORklHX0lQWD15CkNP
TkZJR19JUFhfSU5URVJOPXkKIyBDT05GSUdfQVRBTEsgaXMgbm90IHNldAojIENPTkZJR19YMjUg
aXMgbm90IHNldApDT05GSUdfTEFQQj15CkNPTkZJR19QSE9ORVQ9eQojIENPTkZJR19JRUVFODAy
MTU0IGlzIG5vdCBzZXQKQ09ORklHX05FVF9TQ0hFRD15CgojCiMgUXVldWVpbmcvU2NoZWR1bGlu
ZwojCiMgQ09ORklHX05FVF9TQ0hfQ0JRIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9IVEIg
aXMgbm90IHNldApDT05GSUdfTkVUX1NDSF9IRlNDPXkKQ09ORklHX05FVF9TQ0hfQVRNPXkKIyBD
T05GSUdfTkVUX1NDSF9QUklPIGlzIG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfTVVMVElRPXkKQ09O
RklHX05FVF9TQ0hfUkVEPXkKQ09ORklHX05FVF9TQ0hfU0ZCPXkKQ09ORklHX05FVF9TQ0hfU0ZR
PXkKQ09ORklHX05FVF9TQ0hfVEVRTD15CiMgQ09ORklHX05FVF9TQ0hfVEJGIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1NDSF9HUkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EU01BUksg
aXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX05FVEVNIGlzIG5vdCBzZXQKQ09ORklHX05FVF9T
Q0hfRFJSPXkKIyBDT05GSUdfTkVUX1NDSF9NUVBSSU8gaXMgbm90IHNldApDT05GSUdfTkVUX1ND
SF9DSE9LRT15CkNPTkZJR19ORVRfU0NIX1FGUT15CkNPTkZJR19ORVRfU0NIX0NPREVMPXkKIyBD
T05GSUdfTkVUX1NDSF9GUV9DT0RFTCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfU0NIX0ZRPXkKQ09O
RklHX05FVF9TQ0hfSEhGPXkKQ09ORklHX05FVF9TQ0hfUElFPXkKQ09ORklHX05FVF9TQ0hfSU5H
UkVTUz15CkNPTkZJR19ORVRfU0NIX1BMVUc9eQoKIwojIENsYXNzaWZpY2F0aW9uCiMKQ09ORklH
X05FVF9DTFM9eQpDT05GSUdfTkVUX0NMU19CQVNJQz15CkNPTkZJR19ORVRfQ0xTX1RDSU5ERVg9
eQpDT05GSUdfTkVUX0NMU19GVz15CkNPTkZJR19ORVRfQ0xTX1UzMj15CiMgQ09ORklHX0NMU19V
MzJfUEVSRiBpcyBub3Qgc2V0CiMgQ09ORklHX0NMU19VMzJfTUFSSyBpcyBub3Qgc2V0CkNPTkZJ
R19ORVRfQ0xTX1JTVlA9eQpDT05GSUdfTkVUX0NMU19SU1ZQNj15CkNPTkZJR19ORVRfQ0xTX0ZM
T1c9eQojIENPTkZJR19ORVRfQ0xTX0NHUk9VUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfQ0xTX0JQ
Rj15CkNPTkZJR19ORVRfRU1BVENIPXkKQ09ORklHX05FVF9FTUFUQ0hfU1RBQ0s9MzIKIyBDT05G
SUdfTkVUX0VNQVRDSF9DTVAgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX05CWVRFIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX0VNQVRDSF9VMzIgaXMgbm90IHNldAojIENPTkZJR19ORVRf
RU1BVENIX01FVEEgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX1RFWFQgaXMgbm90IHNl
dApDT05GSUdfTkVUX0VNQVRDSF9DQU5JRD15CkNPTkZJR19ORVRfQ0xTX0FDVD15CiMgQ09ORklH
X05FVF9BQ1RfUE9MSUNFIGlzIG5vdCBzZXQKQ09ORklHX05FVF9BQ1RfR0FDVD15CiMgQ09ORklH
X0dBQ1RfUFJPQiBpcyBub3Qgc2V0CkNPTkZJR19ORVRfQUNUX01JUlJFRD15CkNPTkZJR19ORVRf
QUNUX05BVD15CkNPTkZJR19ORVRfQUNUX1BFRElUPXkKIyBDT05GSUdfTkVUX0FDVF9TSU1QIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9BQ1RfU0tCRURJVD15CiMgQ09ORklHX05FVF9DTFNfSU5EIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfRklGTz15CkNPTkZJR19EQ0I9eQpDT05GSUdfQkFUTUFO
X0FEVj15CiMgQ09ORklHX0JBVE1BTl9BRFZfTkMgaXMgbm90IHNldApDT05GSUdfQkFUTUFOX0FE
Vl9ERUJVRz15CkNPTkZJR19PUEVOVlNXSVRDSD15CkNPTkZJR19WU09DS0VUUz15CkNPTkZJR19O
RVRMSU5LX01NQVA9eQpDT05GSUdfTkVUTElOS19ESUFHPXkKIyBDT05GSUdfTkVUX01QTFNfR1NP
IGlzIG5vdCBzZXQKQ09ORklHX0hTUj15CkNPTkZJR19SUFM9eQpDT05GSUdfUkZTX0FDQ0VMPXkK
Q09ORklHX1hQUz15CiMgQ09ORklHX0NHUk9VUF9ORVRfUFJJTyBpcyBub3Qgc2V0CkNPTkZJR19D
R1JPVVBfTkVUX0NMQVNTSUQ9eQpDT05GSUdfTkVUX1JYX0JVU1lfUE9MTD15CkNPTkZJR19CUUw9
eQpDT05GSUdfTkVUX0ZMT1dfTElNSVQ9eQoKIwojIE5ldHdvcmsgdGVzdGluZwojCiMgQ09ORklH
X0hBTVJBRElPIGlzIG5vdCBzZXQKQ09ORklHX0NBTj15CkNPTkZJR19DQU5fUkFXPXkKIyBDT05G
SUdfQ0FOX0JDTSBpcyBub3Qgc2V0CkNPTkZJR19DQU5fR1c9eQoKIwojIENBTiBEZXZpY2UgRHJp
dmVycwojCiMgQ09ORklHX0NBTl9WQ0FOIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9ERVY9eQojIENP
TkZJR19DQU5fQ0FMQ19CSVRUSU1JTkcgaXMgbm90IHNldAojIENPTkZJR19DQU5fTEVEUyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NBTl9NQ1AyNTFYIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9TSkExMDAw
PXkKIyBDT05GSUdfQ0FOX1NKQTEwMDBfSVNBIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9TSkExMDAw
X1BMQVRGT1JNPXkKQ09ORklHX0NBTl9FTVNfUENNQ0lBPXkKQ09ORklHX0NBTl9QRUFLX1BDTUNJ
QT15CkNPTkZJR19DQU5fQ19DQU49eQpDT05GSUdfQ0FOX0NfQ0FOX1BMQVRGT1JNPXkKQ09ORklH
X0NBTl9DQzc3MD15CkNPTkZJR19DQU5fQ0M3NzBfSVNBPXkKQ09ORklHX0NBTl9DQzc3MF9QTEFU
Rk9STT15CkNPTkZJR19DQU5fU09GVElORz15CkNPTkZJR19DQU5fU09GVElOR19DUz15CkNPTkZJ
R19DQU5fREVCVUdfREVWSUNFUz15CiMgQ09ORklHX0lSREEgaXMgbm90IHNldApDT05GSUdfQlQ9
eQpDT05GSUdfQlRfUkZDT01NPXkKQ09ORklHX0JUX0JORVA9eQpDT05GSUdfQlRfQk5FUF9NQ19G
SUxURVI9eQojIENPTkZJR19CVF9CTkVQX1BST1RPX0ZJTFRFUiBpcyBub3Qgc2V0CgojCiMgQmx1
ZXRvb3RoIGRldmljZSBkcml2ZXJzCiMKQ09ORklHX0JUX0hDSURUTDE9eQpDT05GSUdfQlRfSENJ
QlQzQz15CkNPTkZJR19CVF9IQ0lCTFVFQ0FSRD15CkNPTkZJR19CVF9IQ0lCVFVBUlQ9eQojIENP
TkZJR19CVF9IQ0lWSENJIGlzIG5vdCBzZXQKQ09ORklHX0JUX01SVkw9eQpDT05GSUdfRklCX1JV
TEVTPXkKIyBDT05GSUdfV0lSRUxFU1MgaXMgbm90IHNldApDT05GSUdfV0lNQVg9eQpDT05GSUdf
V0lNQVhfREVCVUdfTEVWRUw9OApDT05GSUdfUkZLSUxMPXkKQ09ORklHX1JGS0lMTF9SRUdVTEFU
T1I9eQpDT05GSUdfTkVUXzlQPXkKQ09ORklHX05FVF85UF9WSVJUSU89eQojIENPTkZJR19ORVRf
OVBfREVCVUcgaXMgbm90IHNldApDT05GSUdfQ0FJRj15CkNPTkZJR19DQUlGX0RFQlVHPXkKIyBD
T05GSUdfQ0FJRl9ORVRERVYgaXMgbm90IHNldApDT05GSUdfQ0FJRl9VU0I9eQojIENPTkZJR19O
RkMgaXMgbm90IHNldApDT05GSUdfSEFWRV9CUEZfSklUPXkKCiMKIyBEZXZpY2UgRHJpdmVycwoj
CgojCiMgR2VuZXJpYyBEcml2ZXIgT3B0aW9ucwojCkNPTkZJR19VRVZFTlRfSEVMUEVSX1BBVEg9
IiIKQ09ORklHX0RFVlRNUEZTPXkKQ09ORklHX0RFVlRNUEZTX01PVU5UPXkKIyBDT05GSUdfU1RB
TkRBTE9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJTEQgaXMgbm90
IHNldApDT05GSUdfRldfTE9BREVSPXkKQ09ORklHX0ZJUk1XQVJFX0lOX0tFUk5FTD15CkNPTkZJ
R19FWFRSQV9GSVJNV0FSRT0iIgpDT05GSUdfRldfTE9BREVSX1VTRVJfSEVMUEVSPXkKIyBDT05G
SUdfREVCVUdfRFJJVkVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX0RFVlJFUz15CkNPTkZJR19T
WVNfSFlQRVJWSVNPUj15CiMgQ09ORklHX0dFTkVSSUNfQ1BVX0RFVklDRVMgaXMgbm90IHNldApD
T05GSUdfUkVHTUFQPXkKQ09ORklHX1JFR01BUF9JMkM9eQpDT05GSUdfUkVHTUFQX1NQST15CkNP
TkZJR19SRUdNQVBfTU1JTz15CkNPTkZJR19SRUdNQVBfSVJRPXkKQ09ORklHX0RNQV9TSEFSRURf
QlVGRkVSPXkKCiMKIyBCdXMgZGV2aWNlcwojCkNPTkZJR19DT05ORUNUT1I9eQpDT05GSUdfUFJP
Q19FVkVOVFM9eQojIENPTkZJR19NVEQgaXMgbm90IHNldApDT05GSUdfUEFSUE9SVD15CkNPTkZJ
R19BUkNIX01JR0hUX0hBVkVfUENfUEFSUE9SVD15CiMgQ09ORklHX1BBUlBPUlRfUEMgaXMgbm90
IHNldAojIENPTkZJR19QQVJQT1JUX0dTQyBpcyBub3Qgc2V0CkNPTkZJR19QQVJQT1JUX0FYODg3
OTY9eQojIENPTkZJR19QQVJQT1JUXzEyODQgaXMgbm90IHNldApDT05GSUdfUEFSUE9SVF9OT1Rf
UEM9eQpDT05GSUdfQkxLX0RFVj15CkNPTkZJR19CTEtfREVWX05VTExfQkxLPXkKQ09ORklHX0JM
S19ERVZfRkQ9eQojIENPTkZJR19aUkFNIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9DT1df
Q09NTU9OIGlzIG5vdCBzZXQKQ09ORklHX0JMS19ERVZfTE9PUD15CkNPTkZJR19CTEtfREVWX0xP
T1BfTUlOX0NPVU5UPTgKIyBDT05GSUdfQkxLX0RFVl9DUllQVE9MT09QIGlzIG5vdCBzZXQKCiMK
IyBEUkJEIGRpc2FibGVkIGJlY2F1c2UgUFJPQ19GUyBvciBJTkVUIG5vdCBzZWxlY3RlZAojCkNP
TkZJR19CTEtfREVWX05CRD15CiMgQ09ORklHX0JMS19ERVZfUkFNIGlzIG5vdCBzZXQKQ09ORklH
X0NEUk9NX1BLVENEVkQ9eQpDT05GSUdfQ0RST01fUEtUQ0RWRF9CVUZGRVJTPTgKIyBDT05GSUdf
Q0RST01fUEtUQ0RWRF9XQ0FDSEUgaXMgbm90IHNldApDT05GSUdfQVRBX09WRVJfRVRIPXkKQ09O
RklHX1hFTl9CTEtERVZfRlJPTlRFTkQ9eQpDT05GSUdfVklSVElPX0JMSz15CiMgQ09ORklHX0JM
S19ERVZfSEQgaXMgbm90IHNldAoKIwojIE1pc2MgZGV2aWNlcwojCkNPTkZJR19BRDUyNVhfRFBP
VD15CiMgQ09ORklHX0FENTI1WF9EUE9UX0kyQyBpcyBub3Qgc2V0CkNPTkZJR19BRDUyNVhfRFBP
VF9TUEk9eQojIENPTkZJR19EVU1NWV9JUlEgaXMgbm90IHNldApDT05GSUdfSUNTOTMyUzQwMT15
CkNPTkZJR19BVE1FTF9TU0M9eQojIENPTkZJR19FTkNMT1NVUkVfU0VSVklDRVMgaXMgbm90IHNl
dApDT05GSUdfQVBEUzk4MDJBTFM9eQpDT05GSUdfSVNMMjkwMDM9eQojIENPTkZJR19JU0wyOTAy
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVFNMMjU1MCBpcyBub3Qgc2V0CkNPTkZJR19T
RU5TT1JTX0JIMTc4MD15CkNPTkZJR19TRU5TT1JTX0JIMTc3MD15CiMgQ09ORklHX1NFTlNPUlNf
QVBEUzk5MFggaXMgbm90IHNldApDT05GSUdfSE1DNjM1Mj15CkNPTkZJR19EUzE2ODI9eQpDT05G
SUdfVElfREFDNzUxMj15CkNPTkZJR19WTVdBUkVfQkFMTE9PTj15CkNPTkZJR19CTVAwODU9eQpD
T05GSUdfQk1QMDg1X0kyQz15CkNPTkZJR19CTVAwODVfU1BJPXkKIyBDT05GSUdfVVNCX1NXSVRD
SF9GU0E5NDgwIGlzIG5vdCBzZXQKQ09ORklHX0xBVFRJQ0VfRUNQM19DT05GSUc9eQpDT05GSUdf
U1JBTT15CkNPTkZJR19DMlBPUlQ9eQpDT05GSUdfQzJQT1JUX0RVUkFNQVJfMjE1MD15CgojCiMg
RUVQUk9NIHN1cHBvcnQKIwpDT05GSUdfRUVQUk9NX0FUMjQ9eQojIENPTkZJR19FRVBST01fQVQy
NSBpcyBub3Qgc2V0CiMgQ09ORklHX0VFUFJPTV9MRUdBQ1kgaXMgbm90IHNldApDT05GSUdfRUVQ
Uk9NX01BWDY4NzU9eQpDT05GSUdfRUVQUk9NXzkzQ1g2PXkKIyBDT05GSUdfRUVQUk9NXzkzWFg0
NiBpcyBub3Qgc2V0CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgc2hhcmVkIHRyYW5zcG9ydCBsaW5l
IGRpc2NpcGxpbmUKIwoKIwojIEFsdGVyYSBGUEdBIGZpcm13YXJlIGRvd25sb2FkIG1vZHVsZQoj
CiMgQ09ORklHX0FMVEVSQV9TVEFQTCBpcyBub3Qgc2V0CgojCiMgSW50ZWwgTUlDIEhvc3QgRHJp
dmVyCiMKCiMKIyBJbnRlbCBNSUMgQ2FyZCBEcml2ZXIKIwojIENPTkZJR19JTlRFTF9NSUNfQ0FS
RCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0lERT15CkNPTkZJR19JREU9eQoKIwojIFBsZWFzZSBz
ZWUgRG9jdW1lbnRhdGlvbi9pZGUvaWRlLnR4dCBmb3IgaGVscC9pbmZvIG9uIElERSBkcml2ZXMK
IwpDT05GSUdfSURFX0FUQVBJPXkKIyBDT05GSUdfQkxLX0RFVl9JREVfU0FUQSBpcyBub3Qgc2V0
CkNPTkZJR19JREVfR0Q9eQojIENPTkZJR19JREVfR0RfQVRBIGlzIG5vdCBzZXQKIyBDT05GSUdf
SURFX0dEX0FUQVBJIGlzIG5vdCBzZXQKQ09ORklHX0JMS19ERVZfSURFQ1M9eQpDT05GSUdfQkxL
X0RFVl9JREVDRD15CkNPTkZJR19CTEtfREVWX0lERUNEX1ZFUkJPU0VfRVJST1JTPXkKIyBDT05G
SUdfQkxLX0RFVl9JREVUQVBFIGlzIG5vdCBzZXQKIyBDT05GSUdfSURFX1RBU0tfSU9DVEwgaXMg
bm90IHNldApDT05GSUdfSURFX1BST0NfRlM9eQoKIwojIElERSBjaGlwc2V0IHN1cHBvcnQvYnVn
Zml4ZXMKIwpDT05GSUdfSURFX0dFTkVSSUM9eQpDT05GSUdfQkxLX0RFVl9QTEFURk9STT15CiMg
Q09ORklHX0JMS19ERVZfQ01ENjQwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9JREVETUEg
aXMgbm90IHNldAoKIwojIFNDU0kgZGV2aWNlIHN1cHBvcnQKIwpDT05GSUdfU0NTSV9NT0Q9eQpD
T05GSUdfUkFJRF9BVFRSUz15CkNPTkZJR19TQ1NJPXkKQ09ORklHX1NDU0lfRE1BPXkKQ09ORklH
X1NDU0lfVEdUPXkKIyBDT05GSUdfU0NTSV9ORVRMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NT
SV9QUk9DX0ZTIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIHN1cHBvcnQgdHlwZSAoZGlzaywgdGFwZSwg
Q0QtUk9NKQojCkNPTkZJR19CTEtfREVWX1NEPXkKIyBDT05GSUdfQ0hSX0RFVl9TVCBpcyBub3Qg
c2V0CiMgQ09ORklHX0NIUl9ERVZfT1NTVCBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX1NSPXkK
Q09ORklHX0JMS19ERVZfU1JfVkVORE9SPXkKQ09ORklHX0NIUl9ERVZfU0c9eQpDT05GSUdfQ0hS
X0RFVl9TQ0g9eQojIENPTkZJR19TQ1NJX01VTFRJX0xVTiBpcyBub3Qgc2V0CiMgQ09ORklHX1ND
U0lfQ09OU1RBTlRTIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9MT0dHSU5HIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0NTSV9TQ0FOX0FTWU5DIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIFRyYW5zcG9ydHMK
IwojIENPTkZJR19TQ1NJX1NQSV9BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfRkNfQVRU
UlMgaXMgbm90IHNldApDT05GSUdfU0NTSV9JU0NTSV9BVFRSUz15CkNPTkZJR19TQ1NJX1NBU19B
VFRSUz15CkNPTkZJR19TQ1NJX1NBU19MSUJTQVM9eQpDT05GSUdfU0NTSV9TQVNfQVRBPXkKQ09O
RklHX1NDU0lfU0FTX0hPU1RfU01QPXkKQ09ORklHX1NDU0lfU1JQX0FUVFJTPXkKIyBDT05GSUdf
U0NTSV9TUlBfVEdUX0FUVFJTIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfTE9XTEVWRUw9eQpDT05G
SUdfSVNDU0lfQk9PVF9TWVNGUz15CiMgQ09ORklHX1NDU0lfVUZTSENEIGlzIG5vdCBzZXQKIyBD
T05GSUdfTElCRkMgaXMgbm90IHNldAojIENPTkZJR19MSUJGQ09FIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJX1ZJUlRJTz15CiMgQ09ORklHX1ND
U0lfTE9XTEVWRUxfUENNQ0lBIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfREg9eQpDT05GSUdfU0NT
SV9ESF9SREFDPXkKQ09ORklHX1NDU0lfREhfSFBfU1c9eQpDT05GSUdfU0NTSV9ESF9FTUM9eQpD
T05GSUdfU0NTSV9ESF9BTFVBPXkKQ09ORklHX1NDU0lfT1NEX0lOSVRJQVRPUj15CiMgQ09ORklH
X1NDU0lfT1NEX1VMRCBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJX09TRF9EUFJJTlRfU0VOU0U9MQoj
IENPTkZJR19TQ1NJX09TRF9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19BVEE9eQojIENPTkZJR19B
VEFfTk9OU1RBTkRBUkQgaXMgbm90IHNldApDT05GSUdfQVRBX1ZFUkJPU0VfRVJST1I9eQpDT05G
SUdfU0FUQV9QTVA9eQoKIwojIENvbnRyb2xsZXJzIHdpdGggbm9uLVNGRiBuYXRpdmUgaW50ZXJm
YWNlCiMKIyBDT05GSUdfU0FUQV9BSENJX1BMQVRGT1JNIGlzIG5vdCBzZXQKQ09ORklHX0FUQV9T
RkY9eQoKIwojIFNGRiBjb250cm9sbGVycyB3aXRoIGN1c3RvbSBETUEgaW50ZXJmYWNlCiMKQ09O
RklHX0FUQV9CTURNQT15CgojCiMgU0FUQSBTRkYgY29udHJvbGxlcnMgd2l0aCBCTURNQQojCkNP
TkZJR19TQVRBX0hJR0hCQU5LPXkKQ09ORklHX1NBVEFfTVY9eQpDT05GSUdfU0FUQV9SQ0FSPXkK
CiMKIyBQQVRBIFNGRiBjb250cm9sbGVycyB3aXRoIEJNRE1BCiMKIyBDT05GSUdfUEFUQV9BUkFT
QU5fQ0YgaXMgbm90IHNldAoKIwojIFBJTy1vbmx5IFNGRiBjb250cm9sbGVycwojCkNPTkZJR19Q
QVRBX1BDTUNJQT15CiMgQ09ORklHX1BBVEFfUExBVEZPUk0gaXMgbm90IHNldAoKIwojIEdlbmVy
aWMgZmFsbGJhY2sgLyBsZWdhY3kgZHJpdmVycwojCkNPTkZJR19NRD15CiMgQ09ORklHX0JMS19E
RVZfTUQgaXMgbm90IHNldApDT05GSUdfQkNBQ0hFPXkKQ09ORklHX0JDQUNIRV9ERUJVRz15CiMg
Q09ORklHX0JDQUNIRV9DTE9TVVJFU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX0RN
PXkKIyBDT05GSUdfRE1fREVCVUcgaXMgbm90IHNldApDT05GSUdfRE1fQlVGSU89eQpDT05GSUdf
RE1fQklPX1BSSVNPTj15CkNPTkZJR19ETV9QRVJTSVNURU5UX0RBVEE9eQojIENPTkZJR19ETV9D
UllQVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX1NOQVBTSE9UIGlzIG5vdCBzZXQKQ09ORklHX0RN
X1RISU5fUFJPVklTSU9OSU5HPXkKQ09ORklHX0RNX0RFQlVHX0JMT0NLX1NUQUNLX1RSQUNJTkc9
eQpDT05GSUdfRE1fQ0FDSEU9eQojIENPTkZJR19ETV9DQUNIRV9NUSBpcyBub3Qgc2V0CkNPTkZJ
R19ETV9DQUNIRV9DTEVBTkVSPXkKQ09ORklHX0RNX01JUlJPUj15CkNPTkZJR19ETV9MT0dfVVNF
UlNQQUNFPXkKIyBDT05GSUdfRE1fUkFJRCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX1pFUk8gaXMg
bm90IHNldAojIENPTkZJR19ETV9NVUxUSVBBVEggaXMgbm90IHNldAojIENPTkZJR19ETV9ERUxB
WSBpcyBub3Qgc2V0CkNPTkZJR19ETV9VRVZFTlQ9eQpDT05GSUdfRE1fRkxBS0VZPXkKQ09ORklH
X0RNX1ZFUklUWT15CkNPTkZJR19ETV9TV0lUQ0g9eQpDT05GSUdfVEFSR0VUX0NPUkU9eQojIENP
TkZJR19UQ01fSUJMT0NLIGlzIG5vdCBzZXQKQ09ORklHX1RDTV9GSUxFSU89eQojIENPTkZJR19U
Q01fUFNDU0kgaXMgbm90IHNldAojIENPTkZJR19MT09QQkFDS19UQVJHRVQgaXMgbm90IHNldApD
T05GSUdfSVNDU0lfVEFSR0VUPXkKQ09ORklHX01BQ0lOVE9TSF9EUklWRVJTPXkKQ09ORklHX05F
VERFVklDRVM9eQpDT05GSUdfTUlJPXkKIyBDT05GSUdfTkVUX0NPUkUgaXMgbm90IHNldAojIENP
TkZJR19BUkNORVQgaXMgbm90IHNldAojIENPTkZJR19BVE1fRFJJVkVSUyBpcyBub3Qgc2V0Cgoj
CiMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwojCkNPTkZJR19DQUlGX1NQSV9TTEFWRT15CkNPTkZJ
R19DQUlGX1NQSV9TWU5DPXkKIyBDT05GSUdfQ0FJRl9IU0kgaXMgbm90IHNldApDT05GSUdfQ0FJ
Rl9WSVJUSU89eQpDT05GSUdfVkhPU1RfTkVUPXkKQ09ORklHX1ZIT1NUX1JJTkc9eQpDT05GSUdf
VkhPU1Q9eQoKIwojIERpc3RyaWJ1dGVkIFN3aXRjaCBBcmNoaXRlY3R1cmUgZHJpdmVycwojCkNP
TkZJR19ORVRfRFNBX01WODhFNlhYWD15CkNPTkZJR19ORVRfRFNBX01WODhFNjA2MD15CkNPTkZJ
R19ORVRfRFNBX01WODhFNlhYWF9ORUVEX1BQVT15CkNPTkZJR19ORVRfRFNBX01WODhFNjEzMT15
CkNPTkZJR19ORVRfRFNBX01WODhFNjEyM182MV82NT15CkNPTkZJR19FVEhFUk5FVD15CkNPTkZJ
R19ORVRfVkVORE9SXzNDT009eQpDT05GSUdfUENNQ0lBXzNDNTc0PXkKQ09ORklHX1BDTUNJQV8z
QzU4OT15CkNPTkZJR19ORVRfVkVORE9SX0FNRD15CkNPTkZJR19QQ01DSUFfTk1DTEFOPXkKQ09O
RklHX05FVF9WRU5ET1JfQVJDPXkKIyBDT05GSUdfTkVUX0NBREVOQ0UgaXMgbm90IHNldApDT05G
SUdfTkVUX1ZFTkRPUl9CUk9BRENPTT15CiMgQ09ORklHX0I0NCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfQ0FMWEVEQV9YR01BQz15CkNPTkZJR19ETkVUPXkKIyBDT05GSUdfTkVUX1ZFTkRPUl9GVUpJ
VFNVIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfSU5URUw9eQpDT05GSUdfTkVUX1ZFTkRP
Ul9JODI1WFg9eQojIENPTkZJR19ORVRfVkVORE9SX01JQ1JFTCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfVkVORE9SX01JQ1JPQ0hJUD15CkNPTkZJR19FTkMyOEo2MD15CkNPTkZJR19FTkMyOEo2MF9X
UklURVZFUklGWT15CiMgQ09ORklHX05FVF9WRU5ET1JfTkFUU0VNSSBpcyBub3Qgc2V0CkNPTkZJ
R19FVEhPQz15CkNPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQojIENPTkZJR19BVFAgaXMgbm90
IHNldApDT05GSUdfU0hfRVRIPXkKQ09ORklHX05FVF9WRU5ET1JfU0VFUT15CkNPTkZJR19ORVRf
VkVORE9SX1NNU0M9eQpDT05GSUdfUENNQ0lBX1NNQzkxQzkyPXkKQ09ORklHX1NNU0M5MTFYPXkK
IyBDT05GSUdfU01TQzkxMVhfQVJDSF9IT09LUyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9S
X1NUTUlDUk89eQojIENPTkZJR19TVE1NQUNfRVRIIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5E
T1JfVklBPXkKIyBDT05GSUdfTkVUX1ZFTkRPUl9XSVpORVQgaXMgbm90IHNldApDT05GSUdfTkVU
X1ZFTkRPUl9YSVJDT009eQojIENPTkZJR19QQ01DSUFfWElSQzJQUyBpcyBub3Qgc2V0CkNPTkZJ
R19QSFlMSUI9eQoKIwojIE1JSSBQSFkgZGV2aWNlIGRyaXZlcnMKIwpDT05GSUdfQVQ4MDNYX1BI
WT15CiMgQ09ORklHX0FNRF9QSFkgaXMgbm90IHNldApDT05GSUdfTUFSVkVMTF9QSFk9eQpDT05G
SUdfREFWSUNPTV9QSFk9eQpDT05GSUdfUVNFTUlfUEhZPXkKIyBDT05GSUdfTFhUX1BIWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NJQ0FEQV9QSFkgaXMgbm90IHNldApDT05GSUdfVklURVNTRV9QSFk9
eQpDT05GSUdfU01TQ19QSFk9eQpDT05GSUdfQlJPQURDT01fUEhZPXkKIyBDT05GSUdfQkNNODdY
WF9QSFkgaXMgbm90IHNldAojIENPTkZJR19JQ1BMVVNfUEhZIGlzIG5vdCBzZXQKQ09ORklHX1JF
QUxURUtfUEhZPXkKIyBDT05GSUdfTkFUSU9OQUxfUEhZIGlzIG5vdCBzZXQKQ09ORklHX1NURTEw
WFA9eQojIENPTkZJR19MU0lfRVQxMDExQ19QSFkgaXMgbm90IHNldAojIENPTkZJR19NSUNSRUxf
UEhZIGlzIG5vdCBzZXQKQ09ORklHX0ZJWEVEX1BIWT15CkNPTkZJR19NRElPX0JJVEJBTkc9eQpD
T05GSUdfTUlDUkVMX0tTODk5NU1BPXkKIyBDT05GSUdfUExJUCBpcyBub3Qgc2V0CkNPTkZJR19Q
UFA9eQpDT05GSUdfUFBQX0JTRENPTVA9eQojIENPTkZJR19QUFBfREVGTEFURSBpcyBub3Qgc2V0
CkNPTkZJR19QUFBfRklMVEVSPXkKIyBDT05GSUdfUFBQX01QUEUgaXMgbm90IHNldApDT05GSUdf
UFBQX01VTFRJTElOSz15CkNPTkZJR19QUFBPQVRNPXkKQ09ORklHX1BQUE9FPXkKQ09ORklHX1NM
SEM9eQojIENPTkZJR19XTEFOIGlzIG5vdCBzZXQKCiMKIyBXaU1BWCBXaXJlbGVzcyBCcm9hZGJh
bmQgZGV2aWNlcwojCgojCiMgRW5hYmxlIFVTQiBzdXBwb3J0IHRvIHNlZSBXaU1BWCBVU0IgZHJp
dmVycwojCkNPTkZJR19XQU49eQpDT05GSUdfSERMQz15CkNPTkZJR19IRExDX1JBVz15CiMgQ09O
RklHX0hETENfUkFXX0VUSCBpcyBub3Qgc2V0CkNPTkZJR19IRExDX0NJU0NPPXkKQ09ORklHX0hE
TENfRlI9eQpDT05GSUdfSERMQ19QUFA9eQpDT05GSUdfSERMQ19YMjU9eQpDT05GSUdfRExDST15
CkNPTkZJR19ETENJX01BWD04CkNPTkZJR19TQk5JPXkKIyBDT05GSUdfU0JOSV9NVUxUSUxJTkUg
aXMgbm90IHNldApDT05GSUdfWEVOX05FVERFVl9GUk9OVEVORD15CiMgQ09ORklHX0lTRE4gaXMg
bm90IHNldAoKIwojIElucHV0IGRldmljZSBzdXBwb3J0CiMKIyBDT05GSUdfSU5QVVQgaXMgbm90
IHNldAoKIwojIEhhcmR3YXJlIEkvTyBwb3J0cwojCiMgQ09ORklHX1NFUklPIGlzIG5vdCBzZXQK
Q09ORklHX0FSQ0hfTUlHSFRfSEFWRV9QQ19TRVJJTz15CkNPTkZJR19HQU1FUE9SVD15CiMgQ09O
RklHX0dBTUVQT1JUX05TNTU4IGlzIG5vdCBzZXQKIyBDT05GSUdfR0FNRVBPUlRfTDQgaXMgbm90
IHNldAoKIwojIENoYXJhY3RlciBkZXZpY2VzCiMKIyBDT05GSUdfVFRZIGlzIG5vdCBzZXQKQ09O
RklHX0RFVktNRU09eQojIENPTkZJR19QUklOVEVSIGlzIG5vdCBzZXQKQ09ORklHX1BQREVWPXkK
Q09ORklHX0lQTUlfSEFORExFUj15CiMgQ09ORklHX0lQTUlfUEFOSUNfRVZFTlQgaXMgbm90IHNl
dAojIENPTkZJR19JUE1JX0RFVklDRV9JTlRFUkZBQ0UgaXMgbm90IHNldApDT05GSUdfSVBNSV9T
ST15CkNPTkZJR19JUE1JX1dBVENIRE9HPXkKQ09ORklHX0lQTUlfUE9XRVJPRkY9eQpDT05GSUdf
SFdfUkFORE9NPXkKIyBDT05GSUdfSFdfUkFORE9NX1RJTUVSSU9NRU0gaXMgbm90IHNldAojIENP
TkZJR19IV19SQU5ET01fVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfSFdfUkFORE9NX1ZJUlRJTyBp
cyBub3Qgc2V0CkNPTkZJR19IV19SQU5ET01fVFBNPXkKIyBDT05GSUdfTlZSQU0gaXMgbm90IHNl
dAoKIwojIFBDTUNJQSBjaGFyYWN0ZXIgZGV2aWNlcwojCkNPTkZJR19DQVJETUFOXzQwMDA9eQoj
IENPTkZJR19DQVJETUFOXzQwNDAgaXMgbm90IHNldApDT05GSUdfUkFXX0RSSVZFUj15CkNPTkZJ
R19NQVhfUkFXX0RFVlM9MjU2CkNPTkZJR19IQU5HQ0hFQ0tfVElNRVI9eQpDT05GSUdfVENHX1RQ
TT15CkNPTkZJR19UQ0dfVElTPXkKQ09ORklHX1RDR19USVNfSTJDX0FUTUVMPXkKQ09ORklHX1RD
R19USVNfSTJDX0lORklORU9OPXkKQ09ORklHX1RDR19USVNfSTJDX05VVk9UT049eQojIENPTkZJ
R19UQ0dfTlNDIGlzIG5vdCBzZXQKQ09ORklHX1RDR19BVE1FTD15CkNPTkZJR19UQ0dfWEVOPXkK
Q09ORklHX1RFTENMT0NLPXkKQ09ORklHX0kyQz15CkNPTkZJR19JMkNfQk9BUkRJTkZPPXkKIyBD
T05GSUdfSTJDX0NPTVBBVCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfQ0hBUkRFVj15CkNPTkZJR19J
MkNfTVVYPXkKCiMKIyBNdWx0aXBsZXhlciBJMkMgQ2hpcCBzdXBwb3J0CiMKQ09ORklHX0kyQ19N
VVhfUENBOTU0MT15CkNPTkZJR19JMkNfTVVYX1BDQTk1NHg9eQojIENPTkZJR19JMkNfSEVMUEVS
X0FVVE8gaXMgbm90IHNldApDT05GSUdfSTJDX1NNQlVTPXkKCiMKIyBJMkMgQWxnb3JpdGhtcwoj
CkNPTkZJR19JMkNfQUxHT0JJVD15CkNPTkZJR19JMkNfQUxHT1BDRj15CiMgQ09ORklHX0kyQ19B
TEdPUENBIGlzIG5vdCBzZXQKCiMKIyBJMkMgSGFyZHdhcmUgQnVzIHN1cHBvcnQKIwoKIwojIEky
QyBzeXN0ZW0gYnVzIGRyaXZlcnMgKG1vc3RseSBlbWJlZGRlZCAvIHN5c3RlbS1vbi1jaGlwKQoj
CiMgQ09ORklHX0kyQ19LRU1QTEQgaXMgbm90IHNldAojIENPTkZJR19JMkNfT0NPUkVTIGlzIG5v
dCBzZXQKIyBDT05GSUdfSTJDX1BDQV9QTEFURk9STSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19Q
WEFfUENJIGlzIG5vdCBzZXQKQ09ORklHX0kyQ19SSUlDPXkKIyBDT05GSUdfSTJDX1NIX01PQklM
RSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSU1URUMgaXMgbm90IHNldApDT05GSUdfSTJDX1hJ
TElOWD15CiMgQ09ORklHX0kyQ19SQ0FSIGlzIG5vdCBzZXQKCiMKIyBFeHRlcm5hbCBJMkMvU01C
dXMgYWRhcHRlciBkcml2ZXJzCiMKQ09ORklHX0kyQ19QQVJQT1JUPXkKIyBDT05GSUdfSTJDX1BB
UlBPUlRfTElHSFQgaXMgbm90IHNldAoKIwojIE90aGVyIEkyQy9TTUJ1cyBidXMgZHJpdmVycwoj
CkNPTkZJR19JMkNfREVCVUdfQ09SRT15CkNPTkZJR19JMkNfREVCVUdfQUxHTz15CkNPTkZJR19J
MkNfREVCVUdfQlVTPXkKQ09ORklHX1NQST15CiMgQ09ORklHX1NQSV9ERUJVRyBpcyBub3Qgc2V0
CkNPTkZJR19TUElfTUFTVEVSPXkKCiMKIyBTUEkgTWFzdGVyIENvbnRyb2xsZXIgRHJpdmVycwoj
CkNPTkZJR19TUElfQUxURVJBPXkKQ09ORklHX1NQSV9BVE1FTD15CkNPTkZJR19TUElfQkNNMjgz
NT15CkNPTkZJR19TUElfQkNNNjNYWF9IU1NQST15CkNPTkZJR19TUElfQklUQkFORz15CkNPTkZJ
R19TUElfQlVUVEVSRkxZPXkKQ09ORklHX1NQSV9FUDkzWFg9eQpDT05GSUdfU1BJX0lNWD15CkNP
TkZJR19TUElfTE03MF9MTFA9eQojIENPTkZJR19TUElfRlNMX0RTUEkgaXMgbm90IHNldApDT05G
SUdfU1BJX1RJX1FTUEk9eQojIENPTkZJR19TUElfT01BUF8xMDBLIGlzIG5vdCBzZXQKQ09ORklH
X1NQSV9PUklPTj15CiMgQ09ORklHX1NQSV9QWEEyWFhfUENJIGlzIG5vdCBzZXQKIyBDT05GSUdf
U1BJX1NDMThJUzYwMiBpcyBub3Qgc2V0CkNPTkZJR19TUElfU0g9eQojIENPTkZJR19TUElfU0hf
SFNQSSBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9URUdSQTExNCBpcyBub3Qgc2V0CkNPTkZJR19T
UElfVEVHUkEyMF9TRkxBU0g9eQpDT05GSUdfU1BJX1RFR1JBMjBfU0xJTks9eQojIENPTkZJR19T
UElfWENPTU0gaXMgbm90IHNldAojIENPTkZJR19TUElfWElMSU5YIGlzIG5vdCBzZXQKQ09ORklH
X1NQSV9ERVNJR05XQVJFPXkKCiMKIyBTUEkgUHJvdG9jb2wgTWFzdGVycwojCkNPTkZJR19TUElf
U1BJREVWPXkKQ09ORklHX1NQSV9UTEU2MlgwPXkKQ09ORklHX0hTST15CkNPTkZJR19IU0lfQk9B
UkRJTkZPPXkKCiMKIyBIU0kgY2xpZW50cwojCiMgQ09ORklHX0hTSV9DSEFSIGlzIG5vdCBzZXQK
CiMKIyBQUFMgc3VwcG9ydAojCkNPTkZJR19QUFM9eQpDT05GSUdfUFBTX0RFQlVHPXkKCiMKIyBQ
UFMgY2xpZW50cyBzdXBwb3J0CiMKIyBDT05GSUdfUFBTX0NMSUVOVF9LVElNRVIgaXMgbm90IHNl
dApDT05GSUdfUFBTX0NMSUVOVF9QQVJQT1JUPXkKQ09ORklHX1BQU19DTElFTlRfR1BJTz15Cgoj
CiMgUFBTIGdlbmVyYXRvcnMgc3VwcG9ydAojCgojCiMgUFRQIGNsb2NrIHN1cHBvcnQKIwpDT05G
SUdfUFRQXzE1ODhfQ0xPQ0s9eQpDT05GSUdfRFA4MzY0MF9QSFk9eQpDT05GSUdfUFRQXzE1ODhf
Q0xPQ0tfUENIPXkKQ09ORklHX0FSQ0hfV0FOVF9PUFRJT05BTF9HUElPTElCPXkKIyBDT05GSUdf
R1BJT0xJQiBpcyBub3Qgc2V0CkNPTkZJR19XMT15CkNPTkZJR19XMV9DT049eQoKIwojIDEtd2ly
ZSBCdXMgTWFzdGVycwojCkNPTkZJR19XMV9NQVNURVJfRFMyNDgyPXkKIyBDT05GSUdfVzFfTUFT
VEVSX0RTMVdNIGlzIG5vdCBzZXQKCiMKIyAxLXdpcmUgU2xhdmVzCiMKQ09ORklHX1cxX1NMQVZF
X1RIRVJNPXkKQ09ORklHX1cxX1NMQVZFX1NNRU09eQpDT05GSUdfVzFfU0xBVkVfRFMyNDA4PXkK
Q09ORklHX1cxX1NMQVZFX0RTMjQwOF9SRUFEQkFDSz15CkNPTkZJR19XMV9TTEFWRV9EUzI0MTM9
eQojIENPTkZJR19XMV9TTEFWRV9EUzI0MjMgaXMgbm90IHNldApDT05GSUdfVzFfU0xBVkVfRFMy
NDMxPXkKQ09ORklHX1cxX1NMQVZFX0RTMjQzMz15CiMgQ09ORklHX1cxX1NMQVZFX0RTMjQzM19D
UkMgaXMgbm90IHNldAojIENPTkZJR19XMV9TTEFWRV9EUzI3NjAgaXMgbm90IHNldApDT05GSUdf
VzFfU0xBVkVfRFMyNzgwPXkKQ09ORklHX1cxX1NMQVZFX0RTMjc4MT15CiMgQ09ORklHX1cxX1NM
QVZFX0RTMjhFMDQgaXMgbm90IHNldApDT05GSUdfVzFfU0xBVkVfQlEyNzAwMD15CkNPTkZJR19Q
T1dFUl9TVVBQTFk9eQpDT05GSUdfUE9XRVJfU1VQUExZX0RFQlVHPXkKIyBDT05GSUdfUERBX1BP
V0VSIGlzIG5vdCBzZXQKQ09ORklHX0dFTkVSSUNfQURDX0JBVFRFUlk9eQpDT05GSUdfTUFYODky
NV9QT1dFUj15CiMgQ09ORklHX1dNODMxWF9CQUNLVVAgaXMgbm90IHNldApDT05GSUdfV004MzFY
X1BPV0VSPXkKIyBDT05GSUdfV004MzUwX1BPV0VSIGlzIG5vdCBzZXQKQ09ORklHX1RFU1RfUE9X
RVI9eQpDT05GSUdfQkFUVEVSWV9EUzI3ODA9eQpDT05GSUdfQkFUVEVSWV9EUzI3ODE9eQojIENP
TkZJR19CQVRURVJZX0RTMjc4MiBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX1NCUz15CiMgQ09O
RklHX0JBVFRFUllfQlEyN3gwMCBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX0RBOTAzMD15CiMg
Q09ORklHX0JBVFRFUllfREE5MDUyIGlzIG5vdCBzZXQKQ09ORklHX0JBVFRFUllfTUFYMTcwNDA9
eQpDT05GSUdfQkFUVEVSWV9NQVgxNzA0Mj15CkNPTkZJR19DSEFSR0VSX1BDRjUwNjMzPXkKIyBD
T05GSUdfQ0hBUkdFUl9JU1AxNzA0IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9NQVg4OTAz
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9UV0w0MDMwIGlzIG5vdCBzZXQKQ09ORklHX0NI
QVJHRVJfTFA4NzI3PXkKIyBDT05GSUdfQ0hBUkdFUl9NQU5BR0VSIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9CUTI0MTVYIGlzIG5vdCBzZXQKQ09ORklHX0NIQVJHRVJfU01CMzQ3PXkKIyBD
T05GSUdfQ0hBUkdFUl9UUFM2NTA5MCBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX0dPTERGSVNI
PXkKQ09ORklHX1BPV0VSX1JFU0VUPXkKIyBDT05GSUdfUE9XRVJfQVZTIGlzIG5vdCBzZXQKQ09O
RklHX0hXTU9OPXkKQ09ORklHX0hXTU9OX1ZJRD15CiMgQ09ORklHX0hXTU9OX0RFQlVHX0NISVAg
aXMgbm90IHNldAoKIwojIE5hdGl2ZSBkcml2ZXJzCiMKIyBDT05GSUdfU0VOU09SU19BRDczMTQg
aXMgbm90IHNldApDT05GSUdfU0VOU09SU19BRDc0MTQ9eQpDT05GSUdfU0VOU09SU19BRDc0MTg9
eQpDT05GSUdfU0VOU09SU19BRENYWD15CkNPTkZJR19TRU5TT1JTX0FETTEwMjE9eQojIENPTkZJ
R19TRU5TT1JTX0FETTEwMjUgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0FETTEwMjYgaXMg
bm90IHNldApDT05GSUdfU0VOU09SU19BRE0xMDI5PXkKQ09ORklHX1NFTlNPUlNfQURNMTAzMT15
CiMgQ09ORklHX1NFTlNPUlNfQURNOTI0MCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0FEVDdY
MTA9eQpDT05GSUdfU0VOU09SU19BRFQ3MzEwPXkKIyBDT05GSUdfU0VOU09SU19BRFQ3NDEwIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BRFQ3NDExIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNP
UlNfQURUNzQ2Mj15CkNPTkZJR19TRU5TT1JTX0FEVDc0NzA9eQpDT05GSUdfU0VOU09SU19BRFQ3
NDc1PXkKQ09ORklHX1NFTlNPUlNfQVNDNzYyMT15CkNPTkZJR19TRU5TT1JTX0FTQjEwMD15CkNP
TkZJR19TRU5TT1JTX0FUWFAxPXkKIyBDT05GSUdfU0VOU09SU19EUzYyMCBpcyBub3Qgc2V0CkNP
TkZJR19TRU5TT1JTX0RTMTYyMT15CkNPTkZJR19TRU5TT1JTX0RBOTA1Ml9BREM9eQpDT05GSUdf
U0VOU09SU19EQTkwNTU9eQpDT05GSUdfU0VOU09SU19GNzE4MDVGPXkKQ09ORklHX1NFTlNPUlNf
RjcxODgyRkc9eQpDT05GSUdfU0VOU09SU19GNzUzNzVTPXkKIyBDT05GSUdfU0VOU09SU19GU0NI
TUQgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19HNzYwQT15CkNPTkZJR19TRU5TT1JTX0c3NjI9
eQpDT05GSUdfU0VOU09SU19HTDUxOFNNPXkKQ09ORklHX1NFTlNPUlNfR0w1MjBTTT15CiMgQ09O
RklHX1NFTlNPUlNfSElINjEzMCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0hUVTIxPXkKQ09O
RklHX1NFTlNPUlNfQ09SRVRFTVA9eQpDT05GSUdfU0VOU09SU19JQk1BRU09eQojIENPTkZJR19T
RU5TT1JTX0lCTVBFWCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0lJT19IV01PTj15CiMgQ09O
RklHX1NFTlNPUlNfSVQ4NyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSkM0MiBpcyBub3Qg
c2V0CkNPTkZJR19TRU5TT1JTX0xJTkVBR0U9eQojIENPTkZJR19TRU5TT1JTX0xNNjMgaXMgbm90
IHNldApDT05GSUdfU0VOU09SU19MTTcwPXkKQ09ORklHX1NFTlNPUlNfTE03Mz15CiMgQ09ORklH
X1NFTlNPUlNfTE03NSBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0xNNzc9eQojIENPTkZJR19T
RU5TT1JTX0xNNzggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19MTTgwPXkKQ09ORklHX1NFTlNP
UlNfTE04Mz15CkNPTkZJR19TRU5TT1JTX0xNODU9eQojIENPTkZJR19TRU5TT1JTX0xNODcgaXMg
bm90IHNldApDT05GSUdfU0VOU09SU19MTTkwPXkKQ09ORklHX1NFTlNPUlNfTE05Mj15CkNPTkZJ
R19TRU5TT1JTX0xNOTM9eQpDT05GSUdfU0VOU09SU19MVEM0MTUxPXkKQ09ORklHX1NFTlNPUlNf
TFRDNDIxNT15CkNPTkZJR19TRU5TT1JTX0xUQzQyNDU9eQojIENPTkZJR19TRU5TT1JTX0xUQzQy
NjEgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19MTTk1MjM0PXkKQ09ORklHX1NFTlNPUlNfTE05
NTI0MT15CkNPTkZJR19TRU5TT1JTX0xNOTUyNDU9eQpDT05GSUdfU0VOU09SU19NQVgxMTExPXkK
Q09ORklHX1NFTlNPUlNfTUFYMTYwNjU9eQpDT05GSUdfU0VOU09SU19NQVgxNjE5PXkKQ09ORklH
X1NFTlNPUlNfTUFYMTY2OD15CkNPTkZJR19TRU5TT1JTX01BWDE5Nz15CkNPTkZJR19TRU5TT1JT
X01BWDY2Mzk9eQpDT05GSUdfU0VOU09SU19NQVg2NjQyPXkKQ09ORklHX1NFTlNPUlNfTUFYNjY1
MD15CiMgQ09ORklHX1NFTlNPUlNfTUFYNjY5NyBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX01D
UDMwMjE9eQpDT05GSUdfU0VOU09SU19OQ1Q2Nzc1PXkKQ09ORklHX1NFTlNPUlNfUEM4NzM2MD15
CkNPTkZJR19TRU5TT1JTX1BDODc0Mjc9eQpDT05GSUdfU0VOU09SU19QQ0Y4NTkxPXkKQ09ORklH
X1BNQlVTPXkKQ09ORklHX1NFTlNPUlNfUE1CVVM9eQpDT05GSUdfU0VOU09SU19BRE0xMjc1PXkK
Q09ORklHX1NFTlNPUlNfTE0yNTA2Nj15CkNPTkZJR19TRU5TT1JTX0xUQzI5Nzg9eQpDT05GSUdf
U0VOU09SU19NQVgxNjA2ND15CkNPTkZJR19TRU5TT1JTX01BWDM0NDQwPXkKIyBDT05GSUdfU0VO
U09SU19NQVg4Njg4IGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfVUNEOTAwMD15CkNPTkZJR19T
RU5TT1JTX1VDRDkyMDA9eQpDT05GSUdfU0VOU09SU19aTDYxMDA9eQpDT05GSUdfU0VOU09SU19T
SFQyMT15CkNPTkZJR19TRU5TT1JTX1NNTTY2NT15CkNPTkZJR19TRU5TT1JTX0RNRTE3Mzc9eQpD
T05GSUdfU0VOU09SU19FTUMxNDAzPXkKQ09ORklHX1NFTlNPUlNfRU1DMjEwMz15CkNPTkZJR19T
RU5TT1JTX0VNQzZXMjAxPXkKQ09ORklHX1NFTlNPUlNfU01TQzQ3TTE9eQpDT05GSUdfU0VOU09S
U19TTVNDNDdNMTkyPXkKQ09ORklHX1NFTlNPUlNfU01TQzQ3QjM5Nz15CiMgQ09ORklHX1NFTlNP
UlNfU0NINTZYWF9DT01NT04gaXMgbm90IHNldApDT05GSUdfU0VOU09SU19BRFMxMDE1PXkKIyBD
T05GSUdfU0VOU09SU19BRFM3ODI4IGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfQURTNzg3MT15
CkNPTkZJR19TRU5TT1JTX0FNQzY4MjE9eQojIENPTkZJR19TRU5TT1JTX0lOQTIwOSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfSU5BMlhYIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfVEhN
QzUwPXkKQ09ORklHX1NFTlNPUlNfVE1QMTAyPXkKQ09ORklHX1NFTlNPUlNfVE1QNDAxPXkKIyBD
T05GSUdfU0VOU09SU19UTVA0MjEgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19WSUFfQ1BVVEVN
UD15CkNPTkZJR19TRU5TT1JTX1ZUMTIxMT15CkNPTkZJR19TRU5TT1JTX1c4Mzc4MUQ9eQpDT05G
SUdfU0VOU09SU19XODM3OTFEPXkKIyBDT05GSUdfU0VOU09SU19XODM3OTJEIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19XODM3OTMgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19XODM3OTU9
eQojIENPTkZJR19TRU5TT1JTX1c4Mzc5NV9GQU5DVFJMIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNP
UlNfVzgzTDc4NVRTPXkKQ09ORklHX1NFTlNPUlNfVzgzTDc4Nk5HPXkKIyBDT05GSUdfU0VOU09S
U19XODM2MjdIRiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNjI3RUhGIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19XTTgzMVggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19XTTgz
NTA9eQpDT05GSUdfVEhFUk1BTD15CiMgQ09ORklHX1RIRVJNQUxfSFdNT04gaXMgbm90IHNldApD
T05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9TVEVQX1dJU0U9eQojIENPTkZJR19USEVSTUFMX0RF
RkFVTFRfR09WX0ZBSVJfU0hBUkUgaXMgbm90IHNldAojIENPTkZJR19USEVSTUFMX0RFRkFVTFRf
R09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldApDT05GSUdfVEhFUk1BTF9HT1ZfRkFJUl9TSEFSRT15
CkNPTkZJR19USEVSTUFMX0dPVl9TVEVQX1dJU0U9eQpDT05GSUdfVEhFUk1BTF9HT1ZfVVNFUl9T
UEFDRT15CkNPTkZJR19USEVSTUFMX0VNVUxBVElPTj15CkNPTkZJR19SQ0FSX1RIRVJNQUw9eQpD
T05GSUdfSU5URUxfUE9XRVJDTEFNUD15CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBk
cml2ZXJzCiMKIyBDT05GSUdfV0FUQ0hET0cgaXMgbm90IHNldApDT05GSUdfU1NCX1BPU1NJQkxF
PXkKCiMKIyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUKIwpDT05GSUdfU1NCPXkKQ09ORklHX1NT
Ql9QQ01DSUFIT1NUX1BPU1NJQkxFPXkKIyBDT05GSUdfU1NCX1BDTUNJQUhPU1QgaXMgbm90IHNl
dAojIENPTkZJR19TU0JfU0lMRU5UIGlzIG5vdCBzZXQKQ09ORklHX1NTQl9ERUJVRz15CkNPTkZJ
R19CQ01BX1BPU1NJQkxFPXkKCiMKIyBCcm9hZGNvbSBzcGVjaWZpYyBBTUJBCiMKQ09ORklHX0JD
TUE9eQojIENPTkZJR19CQ01BX0hPU1RfU09DIGlzIG5vdCBzZXQKIyBDT05GSUdfQkNNQV9EUklW
RVJfR01BQ19DTU4gaXMgbm90IHNldApDT05GSUdfQkNNQV9ERUJVRz15CgojCiMgTXVsdGlmdW5j
dGlvbiBkZXZpY2UgZHJpdmVycwojCkNPTkZJR19NRkRfQ09SRT15CkNPTkZJR19NRkRfQVMzNzEx
PXkKQ09ORklHX1BNSUNfQURQNTUyMD15CkNPTkZJR19NRkRfQ1JPU19FQz15CkNPTkZJR19NRkRf
Q1JPU19FQ19JMkM9eQpDT05GSUdfUE1JQ19EQTkwM1g9eQpDT05GSUdfUE1JQ19EQTkwNTI9eQoj
IENPTkZJR19NRkRfREE5MDUyX1NQSSBpcyBub3Qgc2V0CkNPTkZJR19NRkRfREE5MDUyX0kyQz15
CkNPTkZJR19NRkRfREE5MDU1PXkKQ09ORklHX01GRF9EQTkwNjM9eQojIENPTkZJR19NRkRfTUMx
M1hYWF9TUEkgaXMgbm90IHNldAojIENPTkZJR19NRkRfTUMxM1hYWF9JMkMgaXMgbm90IHNldAoj
IENPTkZJR19IVENfUEFTSUMzIGlzIG5vdCBzZXQKQ09ORklHX01GRF9LRU1QTEQ9eQpDT05GSUdf
TUZEXzg4UE04MDA9eQojIENPTkZJR19NRkRfODhQTTgwNSBpcyBub3Qgc2V0CiMgQ09ORklHX01G
RF84OFBNODYwWCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVgxNDU3NyBpcyBub3Qgc2V0CkNP
TkZJR19NRkRfTUFYNzc2ODY9eQpDT05GSUdfTUZEX01BWDc3NjkzPXkKIyBDT05GSUdfTUZEX01B
WDg5MDcgaXMgbm90IHNldApDT05GSUdfTUZEX01BWDg5MjU9eQojIENPTkZJR19NRkRfTUFYODk5
NyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg4OTk4IGlzIG5vdCBzZXQKIyBDT05GSUdfRVpY
X1BDQVAgaXMgbm90IHNldApDT05GSUdfTUZEX1JFVFU9eQpDT05GSUdfTUZEX1BDRjUwNjMzPXkK
IyBDT05GSUdfUENGNTA2MzNfQURDIGlzIG5vdCBzZXQKIyBDT05GSUdfUENGNTA2MzNfR1BJTyBp
cyBub3Qgc2V0CkNPTkZJR19NRkRfUkM1VDU4Mz15CiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBu
b3Qgc2V0CkNPTkZJR19NRkRfU0k0NzZYX0NPUkU9eQojIENPTkZJR19NRkRfU001MDEgaXMgbm90
IHNldAojIENPTkZJR19NRkRfU01TQyBpcyBub3Qgc2V0CkNPTkZJR19BQlg1MDBfQ09SRT15CiMg
Q09ORklHX0FCMzEwMF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1NUTVBFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUZEX1NZU0NPTiBpcyBub3Qgc2V0CkNPTkZJR19NRkRfVElfQU0zMzVYX1RT
Q0FEQz15CiMgQ09ORklHX01GRF9MUDM5NDMgaXMgbm90IHNldApDT05GSUdfTUZEX0xQODc4OD15
CkNPTkZJR19NRkRfUEFMTUFTPXkKQ09ORklHX1RQUzYxMDVYPXkKQ09ORklHX1RQUzY1MDdYPXkK
Q09ORklHX01GRF9UUFM2NTA5MD15CkNPTkZJR19NRkRfVFBTNjUyMTc9eQojIENPTkZJR19NRkRf
VFBTNjU4NlggaXMgbm90IHNldAojIENPTkZJR19NRkRfVFBTODAwMzEgaXMgbm90IHNldApDT05G
SUdfVFdMNDAzMF9DT1JFPXkKIyBDT05GSUdfVFdMNDAzMF9NQURDIGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX1RXTDQwMzBfQVVESU8gaXMgbm90IHNldAojIENPTkZJR19UV0w2MDQwX0NPUkUgaXMg
bm90IHNldApDT05GSUdfTUZEX1dMMTI3M19DT1JFPXkKQ09ORklHX01GRF9MTTM1MzM9eQojIENP
TkZJR19NRkRfVEMzNTg5WCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UTUlPIGlzIG5vdCBzZXQK
Q09ORklHX01GRF9BUklaT05BPXkKIyBDT05GSUdfTUZEX0FSSVpPTkFfSTJDIGlzIG5vdCBzZXQK
Q09ORklHX01GRF9BUklaT05BX1NQST15CiMgQ09ORklHX01GRF9XTTUxMDIgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfV001MTEwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dNODk5NyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9XTTg0MDAgaXMgbm90IHNldApDT05GSUdfTUZEX1dNODMxWD15CiMg
Q09ORklHX01GRF9XTTgzMVhfSTJDIGlzIG5vdCBzZXQKQ09ORklHX01GRF9XTTgzMVhfU1BJPXkK
Q09ORklHX01GRF9XTTgzNTA9eQpDT05GSUdfTUZEX1dNODM1MF9JMkM9eQojIENPTkZJR19NRkRf
V004OTk0IGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUj15CkNPTkZJR19SRUdVTEFUT1JfREVC
VUc9eQpDT05GSUdfUkVHVUxBVE9SX0ZJWEVEX1ZPTFRBR0U9eQpDT05GSUdfUkVHVUxBVE9SX1ZJ
UlRVQUxfQ09OU1VNRVI9eQpDT05GSUdfUkVHVUxBVE9SX1VTRVJTUEFDRV9DT05TVU1FUj15CiMg
Q09ORklHX1JFR1VMQVRPUl84OFBNODAwIGlzIG5vdCBzZXQKIyBDT05GSUdfUkVHVUxBVE9SX0FD
VDg4NjUgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX0FENTM5OD15CkNPTkZJR19SRUdVTEFU
T1JfQVMzNzExPXkKQ09ORklHX1JFR1VMQVRPUl9EQTkwM1g9eQojIENPTkZJR19SRUdVTEFUT1Jf
REE5MDUyIGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUl9EQTkwNTU9eQpDT05GSUdfUkVHVUxB
VE9SX0RBOTA2Mz15CiMgQ09ORklHX1JFR1VMQVRPUl9EQTkyMTAgaXMgbm90IHNldAojIENPTkZJ
R19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1JfSVNMNjI3
MUEgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX0xQMzk3MT15CkNPTkZJR19SRUdVTEFUT1Jf
TFAzOTcyPXkKQ09ORklHX1JFR1VMQVRPUl9MUDg3Mlg9eQpDT05GSUdfUkVHVUxBVE9SX0xQODc1
NT15CkNPTkZJR19SRUdVTEFUT1JfTFA4Nzg4PXkKQ09ORklHX1JFR1VMQVRPUl9NQVgxNTg2PXkK
Q09ORklHX1JFR1VMQVRPUl9NQVg4NjQ5PXkKIyBDT05GSUdfUkVHVUxBVE9SX01BWDg2NjAgaXMg
bm90IHNldApDT05GSUdfUkVHVUxBVE9SX01BWDg5MjU9eQpDT05GSUdfUkVHVUxBVE9SX01BWDg5
NTI9eQojIENPTkZJR19SRUdVTEFUT1JfTUFYODk3MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JFR1VM
QVRPUl9NQVg3NzY4NiBpcyBub3Qgc2V0CiMgQ09ORklHX1JFR1VMQVRPUl9NQVg3NzY5MyBpcyBu
b3Qgc2V0CkNPTkZJR19SRUdVTEFUT1JfUEFMTUFTPXkKQ09ORklHX1JFR1VMQVRPUl9QQ0Y1MDYz
Mz15CkNPTkZJR19SRUdVTEFUT1JfUEZVWkUxMDA9eQpDT05GSUdfUkVHVUxBVE9SX1JDNVQ1ODM9
eQpDT05GSUdfUkVHVUxBVE9SX1RQUzUxNjMyPXkKIyBDT05GSUdfUkVHVUxBVE9SX1RQUzYxMDVY
IGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUl9UUFM2MjM2MD15CiMgQ09ORklHX1JFR1VMQVRP
Ul9UUFM2NTAyMyBpcyBub3Qgc2V0CkNPTkZJR19SRUdVTEFUT1JfVFBTNjUwN1g9eQojIENPTkZJ
R19SRUdVTEFUT1JfVFBTNjUwOTAgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1JfVFBTNjUy
MTcgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX1RQUzY1MjRYPXkKQ09ORklHX1JFR1VMQVRP
Ul9UV0w0MDMwPXkKIyBDT05GSUdfUkVHVUxBVE9SX1dNODMxWCBpcyBub3Qgc2V0CkNPTkZJR19S
RUdVTEFUT1JfV004MzUwPXkKQ09ORklHX01FRElBX1NVUFBPUlQ9eQoKIwojIE11bHRpbWVkaWEg
Y29yZSBzdXBwb3J0CiMKQ09ORklHX01FRElBX0NBTUVSQV9TVVBQT1JUPXkKIyBDT05GSUdfTUVE
SUFfQU5BTE9HX1RWX1NVUFBPUlQgaXMgbm90IHNldAojIENPTkZJR19NRURJQV9ESUdJVEFMX1RW
X1NVUFBPUlQgaXMgbm90IHNldApDT05GSUdfTUVESUFfUkFESU9fU1VQUE9SVD15CiMgQ09ORklH
X01FRElBX0NPTlRST0xMRVIgaXMgbm90IHNldApDT05GSUdfVklERU9fREVWPXkKQ09ORklHX1ZJ
REVPX1Y0TDI9eQojIENPTkZJR19WSURFT19BRFZfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19W
SURFT19GSVhFRF9NSU5PUl9SQU5HRVMgaXMgbm90IHNldApDT05GSUdfVjRMMl9NRU0yTUVNX0RF
Vj15CkNPTkZJR19WSURFT0JVRl9HRU49eQpDT05GSUdfVklERU9CVUZfRE1BX0NPTlRJRz15CkNP
TkZJR19WSURFT0JVRjJfQ09SRT15CkNPTkZJR19WSURFT0JVRjJfTUVNT1BTPXkKQ09ORklHX1ZJ
REVPQlVGMl9ETUFfQ09OVElHPXkKIyBDT05GSUdfVFRQQ0lfRUVQUk9NIGlzIG5vdCBzZXQKCiMK
IyBNZWRpYSBkcml2ZXJzCiMKQ09ORklHX1Y0TF9QTEFURk9STV9EUklWRVJTPXkKIyBDT05GSUdf
VklERU9fU0hfVk9VIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX1RJTUJFUkRBTEU9eQpDT05GSUdf
U09DX0NBTUVSQT15CkNPTkZJR19TT0NfQ0FNRVJBX1BMQVRGT1JNPXkKIyBDT05GSUdfVklERU9f
UkNBUl9WSU4gaXMgbm90IHNldApDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUz15CkNPTkZJR19W
SURFT19NRU0yTUVNX0RFSU5URVJMQUNFPXkKIyBDT05GSUdfVklERU9fU0hfVkVVIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVjRMX1RFU1RfRFJJVkVSUyBpcyBub3Qgc2V0CgojCiMgU3VwcG9ydGVkIE1N
Qy9TRElPIGFkYXB0ZXJzCiMKIyBDT05GSUdfTUVESUFfUEFSUE9SVF9TVVBQT1JUIGlzIG5vdCBz
ZXQKQ09ORklHX1JBRElPX0FEQVBURVJTPXkKQ09ORklHX1JBRElPX1NJNDcwWD15CkNPTkZJR19J
MkNfU0k0NzBYPXkKIyBDT05GSUdfUkFESU9fU0k0NzEzIGlzIG5vdCBzZXQKQ09ORklHX1JBRElP
X1RFQTU3NjQ9eQpDT05GSUdfUkFESU9fVEVBNTc2NF9YVEFMPXkKQ09ORklHX1JBRElPX1NBQTc3
MDZIPXkKQ09ORklHX1JBRElPX1RFRjY4NjI9eQpDT05GSUdfUkFESU9fV0wxMjczPXkKCiMKIyBU
ZXhhcyBJbnN0cnVtZW50cyBXTDEyOHggRk0gZHJpdmVyIChTVCBiYXNlZCkKIwoKIwojIE1lZGlh
IGFuY2lsbGFyeSBkcml2ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGkyYywgZnJvbnRlbmRzKQojCkNP
TkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15CgojCiMgQXVkaW8gZGVjb2RlcnMsIHByb2Nl
c3NvcnMgYW5kIG1peGVycwojCgojCiMgUkRTIGRlY29kZXJzCiMKCiMKIyBWaWRlbyBkZWNvZGVy
cwojCkNPTkZJR19WSURFT19BRFY3MTgwPXkKCiMKIyBWaWRlbyBhbmQgYXVkaW8gZGVjb2RlcnMK
IwoKIwojIFZpZGVvIGVuY29kZXJzCiMKCiMKIyBDYW1lcmEgc2Vuc29yIGRldmljZXMKIwoKIwoj
IEZsYXNoIGRldmljZXMKIwoKIwojIFZpZGVvIGltcHJvdmVtZW50IGNoaXBzCiMKCiMKIyBNaXNj
ZWxsYW5lb3VzIGhlbHBlciBjaGlwcwojCgojCiMgU2Vuc29ycyB1c2VkIG9uIHNvY19jYW1lcmEg
ZHJpdmVyCiMKCiMKIyBzb2NfY2FtZXJhIHNlbnNvciBkcml2ZXJzCiMKQ09ORklHX1NPQ19DQU1F
UkFfSU1YMDc0PXkKQ09ORklHX1NPQ19DQU1FUkFfTVQ5TTAwMT15CkNPTkZJR19TT0NfQ0FNRVJB
X01UOU0xMTE9eQpDT05GSUdfU09DX0NBTUVSQV9NVDlUMDMxPXkKIyBDT05GSUdfU09DX0NBTUVS
QV9NVDlUMTEyIGlzIG5vdCBzZXQKQ09ORklHX1NPQ19DQU1FUkFfTVQ5VjAyMj15CkNPTkZJR19T
T0NfQ0FNRVJBX09WMjY0MD15CkNPTkZJR19TT0NfQ0FNRVJBX09WNTY0Mj15CkNPTkZJR19TT0Nf
Q0FNRVJBX09WNjY1MD15CkNPTkZJR19TT0NfQ0FNRVJBX09WNzcyWD15CiMgQ09ORklHX1NPQ19D
QU1FUkFfT1Y5NjQwIGlzIG5vdCBzZXQKQ09ORklHX1NPQ19DQU1FUkFfT1Y5NzQwPXkKIyBDT05G
SUdfU09DX0NBTUVSQV9SSjU0TjEgaXMgbm90IHNldApDT05GSUdfU09DX0NBTUVSQV9UVzk5MTA9
eQpDT05GSUdfTUVESUFfVFVORVI9eQpDT05GSUdfTUVESUFfVFVORVJfU0lNUExFPXkKQ09ORklH
X01FRElBX1RVTkVSX1REQTgyOTA9eQpDT05GSUdfTUVESUFfVFVORVJfVERBODI3WD15CkNPTkZJ
R19NRURJQV9UVU5FUl9UREExODI3MT15CkNPTkZJR19NRURJQV9UVU5FUl9UREE5ODg3PXkKQ09O
RklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQpDT05GSUdfTUVESUFfVFVORVJfVEVBNTc2Nz15CkNP
TkZJR19NRURJQV9UVU5FUl9NVDIwWFg9eQpDT05GSUdfTUVESUFfVFVORVJfWEMyMDI4PXkKQ09O
RklHX01FRElBX1RVTkVSX1hDNTAwMD15CkNPTkZJR19NRURJQV9UVU5FUl9YQzQwMDA9eQpDT05G
SUdfTUVESUFfVFVORVJfTUM0NFM4MDM9eQoKIwojIFRvb2xzIHRvIGRldmVsb3AgbmV3IGZyb250
ZW5kcwojCiMgQ09ORklHX0RWQl9EVU1NWV9GRSBpcyBub3Qgc2V0CgojCiMgR3JhcGhpY3Mgc3Vw
cG9ydAojCkNPTkZJR19EUk09eQojIENPTkZJR19EUk1fVURMIGlzIG5vdCBzZXQKQ09ORklHX1ZH
QVNUQVRFPXkKQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MPXkKQ09ORklHX0hETUk9eQpDT05G
SUdfRkI9eQojIENPTkZJR19GSVJNV0FSRV9FRElEIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfRERD
IGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JPT1RfVkVTQV9TVVBQT1JUPXkKQ09ORklHX0ZCX0NGQl9G
SUxMUkVDVD15CkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQpDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJ
VD15CiMgQ09ORklHX0ZCX0NGQl9SRVZfUElYRUxTX0lOX0JZVEUgaXMgbm90IHNldApDT05GSUdf
RkJfU1lTX0ZJTExSRUNUPXkKQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15CkNPTkZJR19GQl9TWVNf
SU1BR0VCTElUPXkKQ09ORklHX0ZCX0ZPUkVJR05fRU5ESUFOPXkKQ09ORklHX0ZCX0JPVEhfRU5E
SUFOPXkKIyBDT05GSUdfRkJfQklHX0VORElBTiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0xJVFRM
RV9FTkRJQU4gaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZPUFM9eQpDT05GSUdfRkJfREVGRVJS
RURfSU89eQpDT05GSUdfRkJfSEVDVUJBPXkKIyBDT05GSUdfRkJfU1ZHQUxJQiBpcyBub3Qgc2V0
CiMgQ09ORklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfQkFDS0xJR0hUIGlz
IG5vdCBzZXQKQ09ORklHX0ZCX01PREVfSEVMUEVSUz15CkNPTkZJR19GQl9USUxFQkxJVFRJTkc9
eQoKIwojIEZyYW1lIGJ1ZmZlciBoYXJkd2FyZSBkcml2ZXJzCiMKQ09ORklHX0ZCX0FSQz15CkNP
TkZJR19GQl9WR0ExNj15CkNPTkZJR19GQl9VVkVTQT15CkNPTkZJR19GQl9WRVNBPXkKQ09ORklH
X0ZCX040MTE9eQpDT05GSUdfRkJfSEdBPXkKQ09ORklHX0ZCX1MxRDEzWFhYPXkKIyBDT05GSUdf
RkJfVE1JTyBpcyBub3Qgc2V0CkNPTkZJR19GQl9HT0xERklTSD15CiMgQ09ORklHX0ZCX1ZJUlRV
QUwgaXMgbm90IHNldApDT05GSUdfWEVOX0ZCREVWX0ZST05URU5EPXkKIyBDT05GSUdfRkJfTUVU
Uk9OT01FIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JST0FEU0hFRVQ9eQpDT05GSUdfRkJfQVVPX0sx
OTBYPXkKIyBDT05GSUdfRkJfQVVPX0sxOTAwIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0FVT19LMTkw
MT15CiMgQ09ORklHX0ZCX1NJTVBMRSBpcyBub3Qgc2V0CkNPTkZJR19FWFlOT1NfVklERU89eQpD
T05GSUdfQkFDS0xJR0hUX0xDRF9TVVBQT1JUPXkKQ09ORklHX0xDRF9DTEFTU19ERVZJQ0U9eQpD
T05GSUdfTENEX0xUVjM1MFFWPXkKQ09ORklHX0xDRF9JTEk5MjJYPXkKQ09ORklHX0xDRF9JTEk5
MzIwPXkKQ09ORklHX0xDRF9URE8yNE09eQpDT05GSUdfTENEX1ZHRzI0MzJBND15CiMgQ09ORklH
X0xDRF9QTEFURk9STSBpcyBub3Qgc2V0CkNPTkZJR19MQ0RfUzZFNjNNMD15CkNPTkZJR19MQ0Rf
TEQ5MDQwPXkKIyBDT05GSUdfTENEX0FNUzM2OUZHMDYgaXMgbm90IHNldApDT05GSUdfTENEX0xN
UzUwMUtGMDM9eQojIENPTkZJR19MQ0RfSFg4MzU3IGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tMSUdI
VF9DTEFTU19ERVZJQ0U9eQojIENPTkZJR19CQUNLTElHSFRfR0VORVJJQyBpcyBub3Qgc2V0CkNP
TkZJR19CQUNLTElHSFRfTE0zNTMzPXkKQ09ORklHX0JBQ0tMSUdIVF9QV009eQojIENPTkZJR19C
QUNLTElHSFRfREE5MDNYIGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tMSUdIVF9EQTkwNTI9eQpDT05G
SUdfQkFDS0xJR0hUX01BWDg5MjU9eQpDT05GSUdfQkFDS0xJR0hUX1NBSEFSQT15CiMgQ09ORklH
X0JBQ0tMSUdIVF9XTTgzMVggaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfQURQNTUyMCBp
cyBub3Qgc2V0CkNPTkZJR19CQUNLTElHSFRfQURQODg2MD15CkNPTkZJR19CQUNLTElHSFRfQURQ
ODg3MD15CkNPTkZJR19CQUNLTElHSFRfUENGNTA2MzM9eQpDT05GSUdfQkFDS0xJR0hUX0xNMzYz
MEE9eQpDT05GSUdfQkFDS0xJR0hUX0xNMzYzOT15CkNPTkZJR19CQUNLTElHSFRfTFA4NTVYPXkK
Q09ORklHX0JBQ0tMSUdIVF9MUDg3ODg9eQpDT05GSUdfQkFDS0xJR0hUX1BBTkRPUkE9eQpDT05G
SUdfQkFDS0xJR0hUX1RQUzY1MjE3PXkKIyBDT05GSUdfQkFDS0xJR0hUX0FTMzcxMSBpcyBub3Qg
c2V0CiMgQ09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tM
SUdIVF9CRDYxMDcgaXMgbm90IHNldApDT05GSUdfTE9HTz15CiMgQ09ORklHX0xPR09fTElOVVhf
TU9OTyBpcyBub3Qgc2V0CiMgQ09ORklHX0xPR09fTElOVVhfVkdBMTYgaXMgbm90IHNldApDT05G
SUdfTE9HT19MSU5VWF9DTFVUMjI0PXkKQ09ORklHX1NPVU5EPXkKQ09ORklHX1NPVU5EX09TU19D
T1JFPXkKQ09ORklHX1NPVU5EX09TU19DT1JFX1BSRUNMQUlNPXkKQ09ORklHX1NORD15CkNPTkZJ
R19TTkRfVElNRVI9eQpDT05GSUdfU05EX1BDTT15CkNPTkZJR19TTkRfSFdERVA9eQpDT05GSUdf
U05EX1JBV01JREk9eQojIENPTkZJR19TTkRfU0VRVUVOQ0VSIGlzIG5vdCBzZXQKQ09ORklHX1NO
RF9PU1NFTVVMPXkKQ09ORklHX1NORF9NSVhFUl9PU1M9eQojIENPTkZJR19TTkRfUENNX09TUyBp
cyBub3Qgc2V0CkNPTkZJR19TTkRfSFJUSU1FUj15CkNPTkZJR19TTkRfRFlOQU1JQ19NSU5PUlM9
eQpDT05GSUdfU05EX01BWF9DQVJEUz0zMgojIENPTkZJR19TTkRfU1VQUE9SVF9PTERfQVBJIGlz
IG5vdCBzZXQKQ09ORklHX1NORF9WRVJCT1NFX1BST0NGUz15CkNPTkZJR19TTkRfVkVSQk9TRV9Q
UklOVEs9eQojIENPTkZJR19TTkRfREVCVUcgaXMgbm90IHNldApDT05GSUdfU05EX0RNQV9TR0JV
Rj15CiMgQ09ORklHX1NORF9SQVdNSURJX1NFUSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9PUEwz
X0xJQl9TRVEgaXMgbm90IHNldAojIENPTkZJR19TTkRfT1BMNF9MSUJfU0VRIGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX1NCQVdFX1NFUSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9FTVUxMEsxX1NF
USBpcyBub3Qgc2V0CkNPTkZJR19TTkRfTVBVNDAxX1VBUlQ9eQpDT05GSUdfU05EX1ZYX0xJQj15
CkNPTkZJR19TTkRfRFJJVkVSUz15CkNPTkZJR19TTkRfRFVNTVk9eQpDT05GSUdfU05EX0FMT09Q
PXkKQ09ORklHX1NORF9NVFBBVj15CkNPTkZJR19TTkRfTVRTNjQ9eQpDT05GSUdfU05EX1NFUklB
TF9VMTY1NTA9eQpDT05GSUdfU05EX01QVTQwMT15CkNPTkZJR19TTkRfUE9SVE1BTjJYND15CkNP
TkZJR19TTkRfU1BJPXkKQ09ORklHX1NORF9BVDczQzIxMz15CkNPTkZJR19TTkRfQVQ3M0MyMTNf
VEFSR0VUX0JJVFJBVEU9NDgwMDAKQ09ORklHX1NORF9QQ01DSUE9eQpDT05GSUdfU05EX1ZYUE9D
S0VUPXkKQ09ORklHX1NORF9QREFVRElPQ0Y9eQojIENPTkZJR19TTkRfU09DIGlzIG5vdCBzZXQK
IyBDT05GSUdfU09VTkRfUFJJTUUgaXMgbm90IHNldApDT05GSUdfVVNCX09IQ0lfTElUVExFX0VO
RElBTj15CkNPTkZJR19VU0JfU1VQUE9SVD15CkNPTkZJR19VU0JfQVJDSF9IQVNfSENEPXkKIyBD
T05GSUdfVVNCIGlzIG5vdCBzZXQKCiMKIyBVU0IgcG9ydCBkcml2ZXJzCiMKCiMKIyBVU0IgUGh5
c2ljYWwgTGF5ZXIgZHJpdmVycwojCkNPTkZJR19VU0JfUEhZPXkKIyBDT05GSUdfS0VZU1RPTkVf
VVNCX1BIWSBpcyBub3Qgc2V0CkNPTkZJR19OT1BfVVNCX1hDRUlWPXkKQ09ORklHX09NQVBfQ09O
VFJPTF9VU0I9eQpDT05GSUdfT01BUF9VU0IzPXkKQ09ORklHX0FNMzM1WF9DT05UUk9MX1VTQj15
CkNPTkZJR19BTTMzNVhfUEhZX1VTQj15CkNPTkZJR19TQU1TVU5HX1VTQlBIWT15CiMgQ09ORklH
X1NBTVNVTkdfVVNCMlBIWSBpcyBub3Qgc2V0CkNPTkZJR19TQU1TVU5HX1VTQjNQSFk9eQpDT05G
SUdfVEFIVk9fVVNCPXkKIyBDT05GSUdfVEFIVk9fVVNCX0hPU1RfQllfREVGQVVMVCBpcyBub3Qg
c2V0CkNPTkZJR19VU0JfUkNBUl9HRU4yX1BIWT15CiMgQ09ORklHX1VTQl9HQURHRVQgaXMgbm90
IHNldAojIENPTkZJR19NTUMgaXMgbm90IHNldApDT05GSUdfTUVNU1RJQ0s9eQojIENPTkZJR19N
RU1TVElDS19ERUJVRyBpcyBub3Qgc2V0CgojCiMgTWVtb3J5U3RpY2sgZHJpdmVycwojCkNPTkZJ
R19NRU1TVElDS19VTlNBRkVfUkVTVU1FPXkKQ09ORklHX01TUFJPX0JMT0NLPXkKQ09ORklHX01T
X0JMT0NLPXkKCiMKIyBNZW1vcnlTdGljayBIb3N0IENvbnRyb2xsZXIgRHJpdmVycwojCkNPTkZJ
R19ORVdfTEVEUz15CkNPTkZJR19MRURTX0NMQVNTPXkKCiMKIyBMRUQgZHJpdmVycwojCkNPTkZJ
R19MRURTX0xNMzUzMD15CkNPTkZJR19MRURTX0xNMzUzMz15CkNPTkZJR19MRURTX0xNMzY0Mj15
CkNPTkZJR19MRURTX0xQMzk0ND15CkNPTkZJR19MRURTX0xQNTVYWF9DT01NT049eQpDT05GSUdf
TEVEU19MUDU1MjE9eQojIENPTkZJR19MRURTX0xQNTUyMyBpcyBub3Qgc2V0CkNPTkZJR19MRURT
X0xQNTU2Mj15CkNPTkZJR19MRURTX0xQODUwMT15CkNPTkZJR19MRURTX0xQODc4OD15CkNPTkZJ
R19MRURTX1BDQTk1NVg9eQpDT05GSUdfTEVEU19QQ0E5NjNYPXkKQ09ORklHX0xFRFNfUENBOTY4
NT15CkNPTkZJR19MRURTX1dNODMxWF9TVEFUVVM9eQojIENPTkZJR19MRURTX1dNODM1MCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0xFRFNfREE5MDNYIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19EQTkw
NTIgaXMgbm90IHNldApDT05GSUdfTEVEU19EQUMxMjRTMDg1PXkKIyBDT05GSUdfTEVEU19QV00g
aXMgbm90IHNldApDT05GSUdfTEVEU19SRUdVTEFUT1I9eQpDT05GSUdfTEVEU19CRDI4MDI9eQpD
T05GSUdfTEVEU19BRFA1NTIwPXkKIyBDT05GSUdfTEVEU19UQ0E2NTA3IGlzIG5vdCBzZXQKQ09O
RklHX0xFRFNfTE0zNTV4PXkKQ09ORklHX0xFRFNfT1QyMDA9eQpDT05GSUdfTEVEU19CTElOS009
eQoKIwojIExFRCBUcmlnZ2VycwojCiMgQ09ORklHX0xFRFNfVFJJR0dFUlMgaXMgbm90IHNldAoj
IENPTkZJR19BQ0NFU1NJQklMSVRZIGlzIG5vdCBzZXQKIyBDT05GSUdfRURBQyBpcyBub3Qgc2V0
CkNPTkZJR19SVENfTElCPXkKQ09ORklHX1JUQ19DTEFTUz15CiMgQ09ORklHX1JUQ19IQ1RPU1lT
IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX1NZU1RPSEMgaXMgbm90IHNldAojIENPTkZJR19SVENf
REVCVUcgaXMgbm90IHNldAoKIwojIFJUQyBpbnRlcmZhY2VzCiMKIyBDT05GSUdfUlRDX0lOVEZf
U1lTRlMgaXMgbm90IHNldApDT05GSUdfUlRDX0lOVEZfUFJPQz15CiMgQ09ORklHX1JUQ19JTlRG
X0RFViBpcyBub3Qgc2V0CkNPTkZJR19SVENfRFJWX1RFU1Q9eQoKIwojIEkyQyBSVEMgZHJpdmVy
cwojCkNPTkZJR19SVENfRFJWXzg4UE04MFg9eQojIENPTkZJR19SVENfRFJWX0RTMTMwNyBpcyBu
b3Qgc2V0CkNPTkZJR19SVENfRFJWX0RTMTM3ND15CkNPTkZJR19SVENfRFJWX0RTMTY3Mj15CkNP
TkZJR19SVENfRFJWX0RTMzIzMj15CkNPTkZJR19SVENfRFJWX0xQODc4OD15CiMgQ09ORklHX1JU
Q19EUlZfTUFYNjkwMCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTUFYODkyNSBpcyBub3Qg
c2V0CiMgQ09ORklHX1JUQ19EUlZfTUFYNzc2ODYgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9S
UzVDMzcyPXkKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjA4IGlzIG5vdCBzZXQKQ09ORklHX1JUQ19E
UlZfSVNMMTIwMjI9eQpDT05GSUdfUlRDX0RSVl9JU0wxMjA1Nz15CiMgQ09ORklHX1JUQ19EUlZf
WDEyMDUgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9QQUxNQVM9eQpDT05GSUdfUlRDX0RSVl9Q
Q0YyMTI3PXkKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9QQ0Y4NTYzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTgzIGlzIG5vdCBz
ZXQKQ09ORklHX1JUQ19EUlZfTTQxVDgwPXkKIyBDT05GSUdfUlRDX0RSVl9NNDFUODBfV0RUIGlz
IG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfQlEzMks9eQpDT05GSUdfUlRDX0RSVl9UV0w0MDMwPXkK
Q09ORklHX1JUQ19EUlZfUkM1VDU4Mz15CkNPTkZJR19SVENfRFJWX1MzNTM5MEE9eQpDT05GSUdf
UlRDX0RSVl9GTTMxMzA9eQojIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0CkNPTkZJ
R19SVENfRFJWX1JYODAyNT15CkNPTkZJR19SVENfRFJWX0VNMzAyNz15CkNPTkZJR19SVENfRFJW
X1JWMzAyOUMyPXkKCiMKIyBTUEkgUlRDIGRyaXZlcnMKIwpDT05GSUdfUlRDX0RSVl9NNDFUOTM9
eQojIENPTkZJR19SVENfRFJWX000MVQ5NCBpcyBub3Qgc2V0CkNPTkZJR19SVENfRFJWX0RTMTMw
NT15CkNPTkZJR19SVENfRFJWX0RTMTM5MD15CkNPTkZJR19SVENfRFJWX01BWDY5MDI9eQojIENP
TkZJR19SVENfRFJWX1I5NzAxIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUzVDMzQ4IGlz
IG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzMyMzQgaXMgbm90IHNldApDT05GSUdfUlRDX0RS
Vl9QQ0YyMTIzPXkKQ09ORklHX1JUQ19EUlZfUlg0NTgxPXkKCiMKIyBQbGF0Zm9ybSBSVEMgZHJp
dmVycwojCkNPTkZJR19SVENfRFJWX0NNT1M9eQpDT05GSUdfUlRDX0RSVl9EUzEyODY9eQojIENP
TkZJR19SVENfRFJWX0RTMTUxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxNTUzIGlz
IG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfRFMxNzQyPXkKIyBDT05GSUdfUlRDX0RSVl9EQTkwNTIg
aXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9EQTkwNTU9eQojIENPTkZJR19SVENfRFJWX1NUSzE3
VEE4IGlzIG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfTTQ4VDg2PXkKQ09ORklHX1JUQ19EUlZfTTQ4
VDM1PXkKIyBDT05GSUdfUlRDX0RSVl9NNDhUNTkgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJW
X01TTTYyNDIgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9CUTQ4MDI9eQpDT05GSUdfUlRDX0RS
Vl9SUDVDMDE9eQojIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKQ09ORklHX1JUQ19E
UlZfRFMyNDA0PXkKQ09ORklHX1JUQ19EUlZfV004MzFYPXkKQ09ORklHX1JUQ19EUlZfV004MzUw
PXkKIyBDT05GSUdfUlRDX0RSVl9QQ0Y1MDYzMyBpcyBub3Qgc2V0CgojCiMgb24tQ1BVIFJUQyBk
cml2ZXJzCiMKQ09ORklHX1JUQ19EUlZfTU9YQVJUPXkKCiMKIyBISUQgU2Vuc29yIFJUQyBkcml2
ZXJzCiMKQ09ORklHX0RNQURFVklDRVM9eQpDT05GSUdfRE1BREVWSUNFU19ERUJVRz15CiMgQ09O
RklHX0RNQURFVklDRVNfVkRFQlVHIGlzIG5vdCBzZXQKCiMKIyBETUEgRGV2aWNlcwojCkNPTkZJ
R19EV19ETUFDX0NPUkU9eQojIENPTkZJR19EV19ETUFDIGlzIG5vdCBzZXQKQ09ORklHX1RJTUJf
RE1BPXkKQ09ORklHX0RNQV9FTkdJTkU9eQoKIwojIERNQSBDbGllbnRzCiMKIyBDT05GSUdfQVNZ
TkNfVFhfRE1BIGlzIG5vdCBzZXQKQ09ORklHX0RNQVRFU1Q9eQojIENPTkZJR19BVVhESVNQTEFZ
IGlzIG5vdCBzZXQKIyBDT05GSUdfVUlPIGlzIG5vdCBzZXQKQ09ORklHX1ZJUlRfRFJJVkVSUz15
CkNPTkZJR19WSVJUSU89eQoKIwojIFZpcnRpbyBkcml2ZXJzCiMKQ09ORklHX1ZJUlRJT19CQUxM
T09OPXkKIyBDT05GSUdfVklSVElPX01NSU8gaXMgbm90IHNldAoKIwojIE1pY3Jvc29mdCBIeXBl
ci1WIGd1ZXN0IHN1cHBvcnQKIwoKIwojIFhlbiBkcml2ZXIgc3VwcG9ydAojCkNPTkZJR19YRU5f
QkFMTE9PTj15CkNPTkZJR19YRU5fQkFMTE9PTl9NRU1PUllfSE9UUExVRz15CkNPTkZJR19YRU5f
U0NSVUJfUEFHRVM9eQpDT05GSUdfWEVOX0RFVl9FVlRDSE49eQojIENPTkZJR19YRU5GUyBpcyBu
b3Qgc2V0CkNPTkZJR19YRU5fU1lTX0hZUEVSVklTT1I9eQpDT05GSUdfWEVOX1hFTkJVU19GUk9O
VEVORD15CkNPTkZJR19YRU5fR05UREVWPXkKQ09ORklHX1hFTl9HUkFOVF9ERVZfQUxMT0M9eQpD
T05GSUdfU1dJT1RMQl9YRU49eQpDT05GSUdfWEVOX1BSSVZDTUQ9eQpDT05GSUdfWEVOX0hBVkVf
UFZNTVU9eQpDT05GSUdfU1RBR0lORz15CiMgQ09ORklHX0VDSE8gaXMgbm90IHNldApDT05GSUdf
UEFORUw9eQpDT05GSUdfUEFORUxfUEFSUE9SVD0wCkNPTkZJR19QQU5FTF9QUk9GSUxFPTUKQ09O
RklHX1BBTkVMX0NIQU5HRV9NRVNTQUdFPXkKQ09ORklHX1BBTkVMX0JPT1RfTUVTU0FHRT0iIgoK
IwojIElJTyBzdGFnaW5nIGRyaXZlcnMKIwoKIwojIEFjY2VsZXJvbWV0ZXJzCiMKQ09ORklHX0FE
SVMxNjIwMT15CkNPTkZJR19BRElTMTYyMDM9eQojIENPTkZJR19BRElTMTYyMDQgaXMgbm90IHNl
dAojIENPTkZJR19BRElTMTYyMDkgaXMgbm90IHNldAojIENPTkZJR19BRElTMTYyMjAgaXMgbm90
IHNldApDT05GSUdfQURJUzE2MjQwPXkKQ09ORklHX1NDQTMwMDA9eQoKIwojIEFuYWxvZyB0byBk
aWdpdGFsIGNvbnZlcnRlcnMKIwojIENPTkZJR19BRDcyOTEgaXMgbm90IHNldApDT05GSUdfQUQ3
OTlYPXkKQ09ORklHX0FENzk5WF9SSU5HX0JVRkZFUj15CiMgQ09ORklHX0FENzE5MiBpcyBub3Qg
c2V0CkNPTkZJR19BRDcyODA9eQojIENPTkZJR19MUEMzMlhYX0FEQyBpcyBub3Qgc2V0CkNPTkZJ
R19TUEVBUl9BREM9eQoKIwojIEFuYWxvZyBkaWdpdGFsIGJpLWRpcmVjdGlvbiBjb252ZXJ0ZXJz
CiMKCiMKIyBDYXBhY2l0YW5jZSB0byBkaWdpdGFsIGNvbnZlcnRlcnMKIwojIENPTkZJR19BRDcx
NTAgaXMgbm90IHNldAojIENPTkZJR19BRDcxNTIgaXMgbm90IHNldAojIENPTkZJR19BRDc3NDYg
aXMgbm90IHNldAoKIwojIERpcmVjdCBEaWdpdGFsIFN5bnRoZXNpcwojCiMgQ09ORklHX0FENTkz
MCBpcyBub3Qgc2V0CiMgQ09ORklHX0FEOTgzMiBpcyBub3Qgc2V0CkNPTkZJR19BRDk4MzQ9eQpD
T05GSUdfQUQ5ODUwPXkKQ09ORklHX0FEOTg1Mj15CkNPTkZJR19BRDk5MTA9eQojIENPTkZJR19B
RDk5NTEgaXMgbm90IHNldAoKIwojIERpZ2l0YWwgZ3lyb3Njb3BlIHNlbnNvcnMKIwojIENPTkZJ
R19BRElTMTYwNjAgaXMgbm90IHNldAoKIwojIE5ldHdvcmsgQW5hbHl6ZXIsIEltcGVkYW5jZSBD
b252ZXJ0ZXJzCiMKQ09ORklHX0FENTkzMz15CgojCiMgTGlnaHQgc2Vuc29ycwojCiMgQ09ORklH
X1NFTlNPUlNfSVNMMjkwMTggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19JU0wyOTAyOD15CkNP
TkZJR19UU0wyNTgzPXkKQ09ORklHX1RTTDJ4N3g9eQoKIwojIE1hZ25ldG9tZXRlciBzZW5zb3Jz
CiMKQ09ORklHX1NFTlNPUlNfSE1DNTg0Mz15CgojCiMgQWN0aXZlIGVuZXJneSBtZXRlcmluZyBJ
QwojCiMgQ09ORklHX0FERTc3NTMgaXMgbm90IHNldApDT05GSUdfQURFNzc1ND15CiMgQ09ORklH
X0FERTc3NTggaXMgbm90IHNldApDT05GSUdfQURFNzc1OT15CkNPTkZJR19BREU3ODU0PXkKQ09O
RklHX0FERTc4NTRfSTJDPXkKIyBDT05GSUdfQURFNzg1NF9TUEkgaXMgbm90IHNldAoKIwojIFJl
c29sdmVyIHRvIGRpZ2l0YWwgY29udmVydGVycwojCkNPTkZJR19BRDJTOTA9eQoKIwojIFRyaWdn
ZXJzIC0gc3RhbmRhbG9uZQojCiMgQ09ORklHX0lJT19QRVJJT0RJQ19SVENfVFJJR0dFUiBpcyBu
b3Qgc2V0CkNPTkZJR19JSU9fU0lNUExFX0RVTU1ZPXkKIyBDT05GSUdfSUlPX1NJTVBMRV9EVU1N
WV9FVkVOVFMgaXMgbm90IHNldAojIENPTkZJR19JSU9fU0lNUExFX0RVTU1ZX0JVRkZFUiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0ZUMTAwMCBpcyBub3Qgc2V0CgojCiMgU3BlYWt1cCBjb25zb2xlIHNw
ZWVjaAojCkNPTkZJR19TVEFHSU5HX01FRElBPXkKQ09ORklHX0kyQ19CQ00yMDQ4PXkKIyBDT05G
SUdfVklERU9fVENNODI1WCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TTjlDMTAyIGlzIG5vdCBz
ZXQKCiMKIyBBbmRyb2lkCiMKQ09ORklHX0FORFJPSUQ9eQpDT05GSUdfQU5EUk9JRF9CSU5ERVJf
SVBDPXkKQ09ORklHX0FORFJPSURfTE9HR0VSPXkKQ09ORklHX0FORFJPSURfVElNRURfT1VUUFVU
PXkKIyBDT05GSUdfQU5EUk9JRF9MT1dfTUVNT1JZX0tJTExFUiBpcyBub3Qgc2V0CiMgQ09ORklH
X0FORFJPSURfSU5URl9BTEFSTV9ERVYgaXMgbm90IHNldApDT05GSUdfU1lOQz15CiMgQ09ORklH
X1NXX1NZTkMgaXMgbm90IHNldApDT05GSUdfSU9OPXkKQ09ORklHX0lPTl9URVNUPXkKIyBDT05G
SUdfWDg2X1BMQVRGT1JNX0RFVklDRVMgaXMgbm90IHNldApDT05GSUdfQ0hST01FX1BMQVRGT1JN
Uz15CiMgQ09ORklHX0NIUk9NRU9TX1BTVE9SRSBpcyBub3Qgc2V0CgojCiMgSGFyZHdhcmUgU3Bp
bmxvY2sgZHJpdmVycwojCkNPTkZJR19DTEtFVlRfSTgyNTM9eQpDT05GSUdfSTgyNTNfTE9DSz15
CkNPTkZJR19DTEtCTERfSTgyNTM9eQojIENPTkZJR19NQUlMQk9YIGlzIG5vdCBzZXQKIyBDT05G
SUdfSU9NTVVfU1VQUE9SVCBpcyBub3Qgc2V0CgojCiMgUmVtb3RlcHJvYyBkcml2ZXJzCiMKIyBD
T05GSUdfU1RFX01PREVNX1JQUk9DIGlzIG5vdCBzZXQKCiMKIyBScG1zZyBkcml2ZXJzCiMKIyBD
T05GSUdfUE1fREVWRlJFUSBpcyBub3Qgc2V0CkNPTkZJR19FWFRDT049eQoKIwojIEV4dGNvbiBE
ZXZpY2UgRHJpdmVycwojCkNPTkZJR19FWFRDT05fQURDX0pBQ0s9eQpDT05GSUdfRVhUQ09OX1BB
TE1BUz15CiMgQ09ORklHX01FTU9SWSBpcyBub3Qgc2V0CkNPTkZJR19JSU89eQpDT05GSUdfSUlP
X0JVRkZFUj15CiMgQ09ORklHX0lJT19CVUZGRVJfQ0IgaXMgbm90IHNldApDT05GSUdfSUlPX0tG
SUZPX0JVRj15CkNPTkZJR19JSU9fVFJJR0dFUkVEX0JVRkZFUj15CkNPTkZJR19JSU9fVFJJR0dF
Uj15CkNPTkZJR19JSU9fQ09OU1VNRVJTX1BFUl9UUklHR0VSPTIKCiMKIyBBY2NlbGVyb21ldGVy
cwojCkNPTkZJR19CTUExODA9eQpDT05GSUdfSUlPX1NUX0FDQ0VMXzNBWElTPXkKQ09ORklHX0lJ
T19TVF9BQ0NFTF9JMkNfM0FYSVM9eQpDT05GSUdfSUlPX1NUX0FDQ0VMX1NQSV8zQVhJUz15CkNP
TkZJR19LWFNEOT15CgojCiMgQW5hbG9nIHRvIGRpZ2l0YWwgY29udmVydGVycwojCkNPTkZJR19B
RF9TSUdNQV9ERUxUQT15CiMgQ09ORklHX0FENzI2NiBpcyBub3Qgc2V0CiMgQ09ORklHX0FENzI5
OCBpcyBub3Qgc2V0CkNPTkZJR19BRDc0NzY9eQojIENPTkZJR19BRDc3OTEgaXMgbm90IHNldApD
T05GSUdfQUQ3NzkzPXkKQ09ORklHX0FENzg4Nz15CiMgQ09ORklHX0FENzkyMyBpcyBub3Qgc2V0
CiMgQ09ORklHX0xQODc4OF9BREMgaXMgbm90IHNldApDT05GSUdfTUFYMTM2Mz15CkNPTkZJR19N
Q1AzMjBYPXkKQ09ORklHX01DUDM0MjI9eQojIENPTkZJR19OQVU3ODAyIGlzIG5vdCBzZXQKQ09O
RklHX1RJX0FEQzA4MUM9eQojIENPTkZJR19USV9BTTMzNVhfQURDIGlzIG5vdCBzZXQKQ09ORklH
X1RXTDYwMzBfR1BBREM9eQoKIwojIEFtcGxpZmllcnMKIwpDT05GSUdfQUQ4MzY2PXkKCiMKIyBI
aWQgU2Vuc29yIElJTyBDb21tb24KIwpDT05GSUdfSUlPX1NUX1NFTlNPUlNfSTJDPXkKQ09ORklH
X0lJT19TVF9TRU5TT1JTX1NQST15CkNPTkZJR19JSU9fU1RfU0VOU09SU19DT1JFPXkKCiMKIyBE
aWdpdGFsIHRvIGFuYWxvZyBjb252ZXJ0ZXJzCiMKQ09ORklHX0FENTA2ND15CkNPTkZJR19BRDUz
NjA9eQojIENPTkZJR19BRDUzODAgaXMgbm90IHNldAojIENPTkZJR19BRDU0MjEgaXMgbm90IHNl
dApDT05GSUdfQUQ1NDQ2PXkKQ09ORklHX0FENTQ0OT15CkNPTkZJR19BRDU1MDQ9eQpDT05GSUdf
QUQ1NjI0Ul9TUEk9eQpDT05GSUdfQUQ1Njg2PXkKQ09ORklHX0FENTc1NT15CkNPTkZJR19BRDU3
NjQ9eQojIENPTkZJR19BRDU3OTEgaXMgbm90IHNldAojIENPTkZJR19BRDczMDMgaXMgbm90IHNl
dApDT05GSUdfTUFYNTE3PXkKQ09ORklHX01DUDQ3MjU9eQoKIwojIEZyZXF1ZW5jeSBTeW50aGVz
aXplcnMgRERTL1BMTAojCgojCiMgQ2xvY2sgR2VuZXJhdG9yL0Rpc3RyaWJ1dGlvbgojCkNPTkZJ
R19BRDk1MjM9eQoKIwojIFBoYXNlLUxvY2tlZCBMb29wIChQTEwpIGZyZXF1ZW5jeSBzeW50aGVz
aXplcnMKIwojIENPTkZJR19BREY0MzUwIGlzIG5vdCBzZXQKCiMKIyBEaWdpdGFsIGd5cm9zY29w
ZSBzZW5zb3JzCiMKQ09ORklHX0FESVMxNjA4MD15CkNPTkZJR19BRElTMTYxMzA9eQpDT05GSUdf
QURJUzE2MTM2PXkKIyBDT05GSUdfQURJUzE2MjYwIGlzIG5vdCBzZXQKQ09ORklHX0FEWFJTNDUw
PXkKQ09ORklHX0lJT19TVF9HWVJPXzNBWElTPXkKQ09ORklHX0lJT19TVF9HWVJPX0kyQ18zQVhJ
Uz15CkNPTkZJR19JSU9fU1RfR1lST19TUElfM0FYSVM9eQpDT05GSUdfSVRHMzIwMD15CgojCiMg
SHVtaWRpdHkgc2Vuc29ycwojCgojCiMgSW5lcnRpYWwgbWVhc3VyZW1lbnQgdW5pdHMKIwojIENP
TkZJR19BRElTMTY0MDAgaXMgbm90IHNldAojIENPTkZJR19BRElTMTY0ODAgaXMgbm90IHNldApD
T05GSUdfSUlPX0FESVNfTElCPXkKQ09ORklHX0lJT19BRElTX0xJQl9CVUZGRVI9eQpDT05GSUdf
SU5WX01QVTYwNTBfSUlPPXkKCiMKIyBMaWdodCBzZW5zb3JzCiMKIyBDT05GSUdfQURKRF9TMzEx
IGlzIG5vdCBzZXQKQ09ORklHX0FQRFM5MzAwPXkKQ09ORklHX0NNMzY2NTE9eQpDT05GSUdfR1Ay
QVAwMjBBMDBGPXkKQ09ORklHX1NFTlNPUlNfTE0zNTMzPXkKQ09ORklHX1RDUzM0NzI9eQojIENP
TkZJR19TRU5TT1JTX1RTTDI1NjMgaXMgbm90IHNldAojIENPTkZJR19UU0w0NTMxIGlzIG5vdCBz
ZXQKQ09ORklHX1ZDTkw0MDAwPXkKCiMKIyBNYWduZXRvbWV0ZXIgc2Vuc29ycwojCkNPTkZJR19N
QUczMTEwPXkKQ09ORklHX0lJT19TVF9NQUdOXzNBWElTPXkKQ09ORklHX0lJT19TVF9NQUdOX0ky
Q18zQVhJUz15CkNPTkZJR19JSU9fU1RfTUFHTl9TUElfM0FYSVM9eQoKIwojIEluY2xpbm9tZXRl
ciBzZW5zb3JzCiMKCiMKIyBUcmlnZ2VycyAtIHN0YW5kYWxvbmUKIwpDT05GSUdfSUlPX0lOVEVS
UlVQVF9UUklHR0VSPXkKQ09ORklHX0lJT19TWVNGU19UUklHR0VSPXkKCiMKIyBQcmVzc3VyZSBz
ZW5zb3JzCiMKQ09ORklHX01QTDMxMTU9eQpDT05GSUdfSUlPX1NUX1BSRVNTPXkKQ09ORklHX0lJ
T19TVF9QUkVTU19JMkM9eQpDT05GSUdfSUlPX1NUX1BSRVNTX1NQST15CgojCiMgVGVtcGVyYXR1
cmUgc2Vuc29ycwojCkNPTkZJR19UTVAwMDY9eQpDT05GSUdfUFdNPXkKQ09ORklHX1BXTV9TWVNG
Uz15CiMgQ09ORklHX1BXTV9SRU5FU0FTX1RQVSBpcyBub3Qgc2V0CkNPTkZJR19QV01fVFdMPXkK
Q09ORklHX1BXTV9UV0xfTEVEPXkKIyBDT05GSUdfSVBBQ0tfQlVTIGlzIG5vdCBzZXQKQ09ORklH
X1JFU0VUX0NPTlRST0xMRVI9eQpDT05GSUdfRk1DPXkKQ09ORklHX0ZNQ19GQUtFREVWPXkKQ09O
RklHX0ZNQ19UUklWSUFMPXkKIyBDT05GSUdfRk1DX1dSSVRFX0VFUFJPTSBpcyBub3Qgc2V0CkNP
TkZJR19GTUNfQ0hBUkRFVj15CgojCiMgUEhZIFN1YnN5c3RlbQojCkNPTkZJR19HRU5FUklDX1BI
WT15CkNPTkZJR19QSFlfRVhZTk9TX01JUElfVklERU89eQpDT05GSUdfQkNNX0tPTkFfVVNCMl9Q
SFk9eQpDT05GSUdfUE9XRVJDQVA9eQpDT05GSUdfSU5URUxfUkFQTD15CgojCiMgRmlybXdhcmUg
RHJpdmVycwojCkNPTkZJR19FREQ9eQojIENPTkZJR19FRERfT0ZGIGlzIG5vdCBzZXQKIyBDT05G
SUdfRklSTVdBUkVfTUVNTUFQIGlzIG5vdCBzZXQKQ09ORklHX0RFTExfUkJVPXkKQ09ORklHX0RD
REJBUz15CkNPTkZJR19HT09HTEVfRklSTVdBUkU9eQoKIwojIEdvb2dsZSBGaXJtd2FyZSBEcml2
ZXJzCiMKCiMKIyBGaWxlIHN5c3RlbXMKIwpDT05GSUdfRENBQ0hFX1dPUkRfQUNDRVNTPXkKIyBD
T05GSUdfRVhUMl9GUyBpcyBub3Qgc2V0CkNPTkZJR19FWFQzX0ZTPXkKIyBDT05GSUdfRVhUM19E
RUZBVUxUU19UT19PUkRFUkVEIGlzIG5vdCBzZXQKQ09ORklHX0VYVDNfRlNfWEFUVFI9eQpDT05G
SUdfRVhUM19GU19QT1NJWF9BQ0w9eQpDT05GSUdfRVhUM19GU19TRUNVUklUWT15CiMgQ09ORklH
X0VYVDRfRlMgaXMgbm90IHNldApDT05GSUdfSkJEPXkKQ09ORklHX0pCRF9ERUJVRz15CkNPTkZJ
R19KQkQyPXkKQ09ORklHX0pCRDJfREVCVUc9eQpDT05GSUdfRlNfTUJDQUNIRT15CiMgQ09ORklH
X1JFSVNFUkZTX0ZTIGlzIG5vdCBzZXQKQ09ORklHX0pGU19GUz15CkNPTkZJR19KRlNfUE9TSVhf
QUNMPXkKQ09ORklHX0pGU19TRUNVUklUWT15CkNPTkZJR19KRlNfREVCVUc9eQojIENPTkZJR19K
RlNfU1RBVElTVElDUyBpcyBub3Qgc2V0CiMgQ09ORklHX1hGU19GUyBpcyBub3Qgc2V0CkNPTkZJ
R19HRlMyX0ZTPXkKQ09ORklHX09DRlMyX0ZTPXkKQ09ORklHX09DRlMyX0ZTX08yQ0I9eQpDT05G
SUdfT0NGUzJfRlNfU1RBVFM9eQpDT05GSUdfT0NGUzJfREVCVUdfTUFTS0xPRz15CiMgQ09ORklH
X09DRlMyX0RFQlVHX0ZTIGlzIG5vdCBzZXQKQ09ORklHX0JUUkZTX0ZTPXkKIyBDT05GSUdfQlRS
RlNfRlNfUE9TSVhfQUNMIGlzIG5vdCBzZXQKQ09ORklHX0JUUkZTX0ZTX0NIRUNLX0lOVEVHUklU
WT15CkNPTkZJR19CVFJGU19GU19SVU5fU0FOSVRZX1RFU1RTPXkKIyBDT05GSUdfQlRSRlNfREVC
VUcgaXMgbm90IHNldAojIENPTkZJR19CVFJGU19BU1NFUlQgaXMgbm90IHNldApDT05GSUdfTklM
RlMyX0ZTPXkKQ09ORklHX0ZTX1BPU0lYX0FDTD15CiMgQ09ORklHX0ZJTEVfTE9DS0lORyBpcyBu
b3Qgc2V0CkNPTkZJR19GU05PVElGWT15CiMgQ09ORklHX0ROT1RJRlkgaXMgbm90IHNldApDT05G
SUdfSU5PVElGWV9VU0VSPXkKIyBDT05GSUdfRkFOT1RJRlkgaXMgbm90IHNldApDT05GSUdfUVVP
VEE9eQpDT05GSUdfUVVPVEFfTkVUTElOS19JTlRFUkZBQ0U9eQpDT05GSUdfUFJJTlRfUVVPVEFf
V0FSTklORz15CkNPTkZJR19RVU9UQV9ERUJVRz15CkNPTkZJR19RVU9UQV9UUkVFPXkKIyBDT05G
SUdfUUZNVF9WMSBpcyBub3Qgc2V0CkNPTkZJR19RRk1UX1YyPXkKQ09ORklHX1FVT1RBQ1RMPXkK
IyBDT05GSUdfQVVUT0ZTNF9GUyBpcyBub3Qgc2V0CkNPTkZJR19GVVNFX0ZTPXkKIyBDT05GSUdf
Q1VTRSBpcyBub3Qgc2V0CgojCiMgQ2FjaGVzCiMKQ09ORklHX0ZTQ0FDSEU9eQojIENPTkZJR19G
U0NBQ0hFX1NUQVRTIGlzIG5vdCBzZXQKQ09ORklHX0ZTQ0FDSEVfSElTVE9HUkFNPXkKQ09ORklH
X0ZTQ0FDSEVfREVCVUc9eQpDT05GSUdfRlNDQUNIRV9PQkpFQ1RfTElTVD15CkNPTkZJR19DQUNI
RUZJTEVTPXkKIyBDT05GSUdfQ0FDSEVGSUxFU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19DQUNI
RUZJTEVTX0hJU1RPR1JBTT15CgojCiMgQ0QtUk9NL0RWRCBGaWxlc3lzdGVtcwojCiMgQ09ORklH
X0lTTzk2NjBfRlMgaXMgbm90IHNldApDT05GSUdfVURGX0ZTPXkKQ09ORklHX1VERl9OTFM9eQoK
IwojIERPUy9GQVQvTlQgRmlsZXN5c3RlbXMKIwpDT05GSUdfRkFUX0ZTPXkKQ09ORklHX01TRE9T
X0ZTPXkKQ09ORklHX1ZGQVRfRlM9eQpDT05GSUdfRkFUX0RFRkFVTFRfQ09ERVBBR0U9NDM3CkNP
TkZJR19GQVRfREVGQVVMVF9JT0NIQVJTRVQ9Imlzbzg4NTktMSIKQ09ORklHX05URlNfRlM9eQpD
T05GSUdfTlRGU19ERUJVRz15CiMgQ09ORklHX05URlNfUlcgaXMgbm90IHNldAoKIwojIFBzZXVk
byBmaWxlc3lzdGVtcwojCkNPTkZJR19QUk9DX0ZTPXkKIyBDT05GSUdfUFJPQ19LQ09SRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1BST0NfU1lTQ1RMIGlzIG5vdCBzZXQKQ09ORklHX1BST0NfUEFHRV9N
T05JVE9SPXkKQ09ORklHX1NZU0ZTPXkKIyBDT05GSUdfSFVHRVRMQkZTIGlzIG5vdCBzZXQKIyBD
T05GSUdfSFVHRVRMQl9QQUdFIGlzIG5vdCBzZXQKQ09ORklHX0NPTkZJR0ZTX0ZTPXkKQ09ORklH
X01JU0NfRklMRVNZU1RFTVM9eQpDT05GSUdfQURGU19GUz15CiMgQ09ORklHX0FERlNfRlNfUlcg
aXMgbm90IHNldApDT05GSUdfQUZGU19GUz15CiMgQ09ORklHX0hGU19GUyBpcyBub3Qgc2V0CkNP
TkZJR19IRlNQTFVTX0ZTPXkKIyBDT05GSUdfSEZTUExVU19GU19QT1NJWF9BQ0wgaXMgbm90IHNl
dApDT05GSUdfQkVGU19GUz15CkNPTkZJR19CRUZTX0RFQlVHPXkKQ09ORklHX0JGU19GUz15CkNP
TkZJR19FRlNfRlM9eQpDT05GSUdfTE9HRlM9eQpDT05GSUdfQ1JBTUZTPXkKQ09ORklHX1NRVUFT
SEZTPXkKQ09ORklHX1NRVUFTSEZTX0ZJTEVfQ0FDSEU9eQojIENPTkZJR19TUVVBU0hGU19GSUxF
X0RJUkVDVCBpcyBub3Qgc2V0CkNPTkZJR19TUVVBU0hGU19ERUNPTVBfU0lOR0xFPXkKIyBDT05G
SUdfU1FVQVNIRlNfREVDT01QX01VTFRJIGlzIG5vdCBzZXQKIyBDT05GSUdfU1FVQVNIRlNfREVD
T01QX01VTFRJX1BFUkNQVSBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZTX1hBVFRSIGlzIG5v
dCBzZXQKIyBDT05GSUdfU1FVQVNIRlNfWkxJQiBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZT
X0xaTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZTX1haIGlzIG5vdCBzZXQKQ09ORklHX1NR
VUFTSEZTXzRLX0RFVkJMS19TSVpFPXkKQ09ORklHX1NRVUFTSEZTX0VNQkVEREVEPXkKQ09ORklH
X1NRVUFTSEZTX0ZSQUdNRU5UX0NBQ0hFX1NJWkU9MwpDT05GSUdfVlhGU19GUz15CiMgQ09ORklH
X01JTklYX0ZTIGlzIG5vdCBzZXQKQ09ORklHX09NRlNfRlM9eQpDT05GSUdfSFBGU19GUz15CiMg
Q09ORklHX1FOWDRGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19RTlg2RlNfRlM9eQojIENPTkZJR19R
Tlg2RlNfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19ST01GU19GUyBpcyBub3Qgc2V0CkNPTkZJ
R19QU1RPUkU9eQpDT05GSUdfUFNUT1JFX0NPTlNPTEU9eQojIENPTkZJR19QU1RPUkVfRlRSQUNF
IGlzIG5vdCBzZXQKQ09ORklHX1BTVE9SRV9SQU09eQpDT05GSUdfU1lTVl9GUz15CkNPTkZJR19V
RlNfRlM9eQojIENPTkZJR19VRlNfRlNfV1JJVEUgaXMgbm90IHNldAojIENPTkZJR19VRlNfREVC
VUcgaXMgbm90IHNldApDT05GSUdfRjJGU19GUz15CkNPTkZJR19GMkZTX1NUQVRfRlM9eQpDT05G
SUdfRjJGU19GU19YQVRUUj15CiMgQ09ORklHX0YyRlNfRlNfUE9TSVhfQUNMIGlzIG5vdCBzZXQK
IyBDT05GSUdfRjJGU19GU19TRUNVUklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX0YyRlNfQ0hFQ0tf
RlMgaXMgbm90IHNldApDT05GSUdfTkVUV09SS19GSUxFU1lTVEVNUz15CkNPTkZJR19OQ1BfRlM9
eQpDT05GSUdfTkNQRlNfUEFDS0VUX1NJR05JTkc9eQojIENPTkZJR19OQ1BGU19JT0NUTF9MT0NL
SU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfTkNQRlNfU1RST05HIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkNQRlNfTkZTX05TIGlzIG5vdCBzZXQKQ09ORklHX05DUEZTX09TMl9OUz15CkNPTkZJR19OQ1BG
U19TTUFMTERPUz15CkNPTkZJR19OQ1BGU19OTFM9eQojIENPTkZJR19OQ1BGU19FWFRSQVMgaXMg
bm90IHNldApDT05GSUdfTkxTPXkKQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiCiMgQ09O
RklHX05MU19DT0RFUEFHRV80MzcgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfNzM3
IGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV83NzU9eQpDT05GSUdfTkxTX0NPREVQQUdF
Xzg1MD15CkNPTkZJR19OTFNfQ09ERVBBR0VfODUyPXkKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1
NSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NTcgaXMgbm90IHNldAojIENPTkZJ
R19OTFNfQ09ERVBBR0VfODYwIGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV84NjE9eQpD
T05GSUdfTkxTX0NPREVQQUdFXzg2Mj15CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90
IHNldApDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15CkNPTkZJR19OTFNfQ09ERVBBR0VfODY1PXkK
Q09ORklHX05MU19DT0RFUEFHRV84NjY9eQpDT05GSUdfTkxTX0NPREVQQUdFXzg2OT15CkNPTkZJ
R19OTFNfQ09ERVBBR0VfOTM2PXkKQ09ORklHX05MU19DT0RFUEFHRV85NTA9eQojIENPTkZJR19O
TFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV85NDk9eQpDT05G
SUdfTkxTX0NPREVQQUdFXzg3ND15CkNPTkZJR19OTFNfSVNPODg1OV84PXkKQ09ORklHX05MU19D
T0RFUEFHRV8xMjUwPXkKQ09ORklHX05MU19DT0RFUEFHRV8xMjUxPXkKIyBDT05GSUdfTkxTX0FT
Q0lJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfMSBpcyBub3Qgc2V0CiMgQ09ORklH
X05MU19JU084ODU5XzIgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfMz15CiMgQ09ORklH
X05MU19JU084ODU5XzQgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfNT15CiMgQ09ORklH
X05MU19JU084ODU5XzYgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfNz15CiMgQ09ORklH
X05MU19JU084ODU5XzkgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfMTM9eQpDT05GSUdf
TkxTX0lTTzg4NTlfMTQ9eQojIENPTkZJR19OTFNfSVNPODg1OV8xNSBpcyBub3Qgc2V0CiMgQ09O
RklHX05MU19LT0k4X1IgaXMgbm90IHNldApDT05GSUdfTkxTX0tPSThfVT15CiMgQ09ORklHX05M
U19NQUNfUk9NQU4gaXMgbm90IHNldApDT05GSUdfTkxTX01BQ19DRUxUSUM9eQpDT05GSUdfTkxT
X01BQ19DRU5URVVSTz15CkNPTkZJR19OTFNfTUFDX0NST0FUSUFOPXkKQ09ORklHX05MU19NQUNf
Q1lSSUxMSUM9eQpDT05GSUdfTkxTX01BQ19HQUVMSUM9eQpDT05GSUdfTkxTX01BQ19HUkVFSz15
CiMgQ09ORklHX05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0CkNPTkZJR19OTFNfTUFDX0lOVUlU
PXkKQ09ORklHX05MU19NQUNfUk9NQU5JQU49eQpDT05GSUdfTkxTX01BQ19UVVJLSVNIPXkKQ09O
RklHX05MU19VVEY4PXkKCiMKIyBLZXJuZWwgaGFja2luZwojCkNPTkZJR19UUkFDRV9JUlFGTEFH
U19TVVBQT1JUPXkKCiMKIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMKIwpDT05GSUdfREVGQVVM
VF9NRVNTQUdFX0xPR0xFVkVMPTQKCiMKIyBDb21waWxlLXRpbWUgY2hlY2tzIGFuZCBjb21waWxl
ciBvcHRpb25zCiMKQ09ORklHX0RFQlVHX0lORk89eQojIENPTkZJR19ERUJVR19JTkZPX1JFRFVD
RUQgaXMgbm90IHNldAojIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQK
Q09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkKQ09ORklHX0ZSQU1FX1dBUk49MjA0OApDT05GSUdf
U1RSSVBfQVNNX1NZTVM9eQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMgbm90IHNldAojIENPTkZJ
R19VTlVTRURfU1lNQk9MUyBpcyBub3Qgc2V0CkNPTkZJR19ERUJVR19GUz15CiMgQ09ORklHX0hF
QURFUlNfQ0hFQ0sgaXMgbm90IHNldApDT05GSUdfREVCVUdfU0VDVElPTl9NSVNNQVRDSD15CkNP
TkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQpDT05GSUdfRlJBTUVfUE9JTlRFUj15CkNP
TkZJR19ERUJVR19GT1JDRV9XRUFLX1BFUl9DUFU9eQojIENPTkZJR19NQUdJQ19TWVNSUSBpcyBu
b3Qgc2V0CkNPTkZJR19ERUJVR19LRVJORUw9eQoKIwojIE1lbW9yeSBEZWJ1Z2dpbmcKIwojIENP
TkZJR19ERUJVR19QQUdFQUxMT0MgaXMgbm90IHNldApDT05GSUdfREVCVUdfT0JKRUNUUz15CkNP
TkZJR19ERUJVR19PQkpFQ1RTX1NFTEZURVNUPXkKQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15
CiMgQ09ORklHX0RFQlVHX09CSkVDVFNfVElNRVJTIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX09C
SkVDVFNfV09SSz15CiMgQ09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldAoj
IENPTkZJR19ERUJVR19PQkpFQ1RTX1BFUkNQVV9DT1VOVEVSIGlzIG5vdCBzZXQKQ09ORklHX0RF
QlVHX09CSkVDVFNfRU5BQkxFX0RFRkFVTFQ9MQojIENPTkZJR19TTFVCX1NUQVRTIGlzIG5vdCBz
ZXQKQ09ORklHX0hBVkVfREVCVUdfS01FTUxFQUs9eQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBp
cyBub3Qgc2V0CkNPTkZJR19ERUJVR19TVEFDS19VU0FHRT15CiMgQ09ORklHX0RFQlVHX1ZNIGlz
IG5vdCBzZXQKIyBDT05GSUdfREVCVUdfVklSVFVBTCBpcyBub3Qgc2V0CkNPTkZJR19ERUJVR19N
RU1PUllfSU5JVD15CkNPTkZJR19NRU1PUllfTk9USUZJRVJfRVJST1JfSU5KRUNUPXkKIyBDT05G
SUdfREVCVUdfUEVSX0NQVV9NQVBTIGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfREVCVUdfU1RBQ0tP
VkVSRkxPVz15CiMgQ09ORklHX0RFQlVHX1NUQUNLT1ZFUkZMT1cgaXMgbm90IHNldApDT05GSUdf
SEFWRV9BUkNIX0tNRU1DSEVDSz15CiMgQ09ORklHX0RFQlVHX1NISVJRIGlzIG5vdCBzZXQKCiMK
IyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5ncwojCkNPTkZJR19MT0NLVVBfREVURUNUT1I9eQpDT05G
SUdfSEFSRExPQ0tVUF9ERVRFQ1RPUj15CiMgQ09ORklHX0JPT1RQQVJBTV9IQVJETE9DS1VQX1BB
TklDIGlzIG5vdCBzZXQKQ09ORklHX0JPT1RQQVJBTV9IQVJETE9DS1VQX1BBTklDX1ZBTFVFPTAK
IyBDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUMgaXMgbm90IHNldApDT05GSUdfQk9P
VFBBUkFNX1NPRlRMT0NLVVBfUEFOSUNfVkFMVUU9MAojIENPTkZJR19ERVRFQ1RfSFVOR19UQVNL
IGlzIG5vdCBzZXQKIyBDT05GSUdfUEFOSUNfT05fT09QUyBpcyBub3Qgc2V0CkNPTkZJR19QQU5J
Q19PTl9PT1BTX1ZBTFVFPTAKQ09ORklHX1BBTklDX1RJTUVPVVQ9MAojIENPTkZJR19TQ0hFRF9E
RUJVRyBpcyBub3Qgc2V0CkNPTkZJR19TQ0hFRFNUQVRTPXkKQ09ORklHX1RJTUVSX1NUQVRTPXkK
CiMKIyBMb2NrIERlYnVnZ2luZyAoc3BpbmxvY2tzLCBtdXRleGVzLCBldGMuLi4pCiMKQ09ORklH
X0RFQlVHX1JUX01VVEVYRVM9eQpDT05GSUdfREVCVUdfUElfTElTVD15CkNPTkZJR19SVF9NVVRF
WF9URVNURVI9eQpDT05GSUdfREVCVUdfU1BJTkxPQ0s9eQpDT05GSUdfREVCVUdfTVVURVhFUz15
CkNPTkZJR19ERUJVR19XV19NVVRFWF9TTE9XUEFUSD15CkNPTkZJR19ERUJVR19MT0NLX0FMTE9D
PXkKQ09ORklHX1BST1ZFX0xPQ0tJTkc9eQpDT05GSUdfTE9DS0RFUD15CkNPTkZJR19MT0NLX1NU
QVQ9eQpDT05GSUdfREVCVUdfTE9DS0RFUD15CkNPTkZJR19ERUJVR19BVE9NSUNfU0xFRVA9eQoj
IENPTkZJR19ERUJVR19MT0NLSU5HX0FQSV9TRUxGVEVTVFMgaXMgbm90IHNldApDT05GSUdfVFJB
Q0VfSVJRRkxBR1M9eQpDT05GSUdfU1RBQ0tUUkFDRT15CkNPTkZJR19ERUJVR19LT0JKRUNUPXkK
IyBDT05GSUdfREVCVUdfV1JJVEVDT1VOVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX0xJU1Qg
aXMgbm90IHNldAojIENPTkZJR19ERUJVR19TRyBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX05P
VElGSUVSUyBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQK
CiMKIyBSQ1UgRGVidWdnaW5nCiMKIyBDT05GSUdfUFJPVkVfUkNVIGlzIG5vdCBzZXQKIyBDT05G
SUdfU1BBUlNFX1JDVV9QT0lOVEVSIGlzIG5vdCBzZXQKQ09ORklHX1JDVV9UT1JUVVJFX1RFU1Q9
eQpDT05GSUdfUkNVX1RPUlRVUkVfVEVTVF9SVU5OQUJMRT15CkNPTkZJR19SQ1VfQ1BVX1NUQUxM
X1RJTUVPVVQ9MjEKIyBDT05GSUdfUkNVX0NQVV9TVEFMTF9JTkZPIGlzIG5vdCBzZXQKQ09ORklH
X1JDVV9UUkFDRT15CiMgQ09ORklHX0RFQlVHX0JMT0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQKQ09O
RklHX05PVElGSUVSX0VSUk9SX0lOSkVDVElPTj15CkNPTkZJR19DUFVfTk9USUZJRVJfRVJST1Jf
SU5KRUNUPXkKQ09ORklHX1BNX05PVElGSUVSX0VSUk9SX0lOSkVDVD15CiMgQ09ORklHX0ZBVUxU
X0lOSkVDVElPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0xBVEVOQ1lUT1AgaXMgbm90IHNldApDT05G
SUdfQVJDSF9IQVNfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1M9eQojIENPTkZJR19ERUJV
R19TVFJJQ1RfVVNFUl9DT1BZX0NIRUNLUyBpcyBub3Qgc2V0CkNPTkZJR19VU0VSX1NUQUNLVFJB
Q0VfU1VQUE9SVD15CkNPTkZJR19OT1BfVFJBQ0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fVFJB
Q0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhfVFJBQ0VSPXkKQ09ORklHX0hBVkVfRlVO
Q1RJT05fR1JBUEhfRlBfVEVTVD15CkNPTkZJR19IQVZFX0ZVTkNUSU9OX1RSQUNFX01DT1VOVF9U
RVNUPXkKQ09ORklHX0hBVkVfRFlOQU1JQ19GVFJBQ0U9eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZU
UkFDRV9XSVRIX1JFR1M9eQpDT05GSUdfSEFWRV9GVFJBQ0VfTUNPVU5UX1JFQ09SRD15CkNPTkZJ
R19IQVZFX1NZU0NBTExfVFJBQ0VQT0lOVFM9eQpDT05GSUdfSEFWRV9GRU5UUlk9eQpDT05GSUdf
SEFWRV9DX1JFQ09SRE1DT1VOVD15CkNPTkZJR19UUkFDRVJfTUFYX1RSQUNFPXkKQ09ORklHX1RS
QUNFX0NMT0NLPXkKQ09ORklHX1JJTkdfQlVGRkVSPXkKQ09ORklHX0VWRU5UX1RSQUNJTkc9eQpD
T05GSUdfQ09OVEVYVF9TV0lUQ0hfVFJBQ0VSPXkKQ09ORklHX1JJTkdfQlVGRkVSX0FMTE9XX1NX
QVA9eQpDT05GSUdfVFJBQ0lORz15CkNPTkZJR19HRU5FUklDX1RSQUNFUj15CkNPTkZJR19UUkFD
SU5HX1NVUFBPUlQ9eQpDT05GSUdfRlRSQUNFPXkKQ09ORklHX0ZVTkNUSU9OX1RSQUNFUj15CiMg
Q09ORklHX0ZVTkNUSU9OX0dSQVBIX1RSQUNFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0lSUVNPRkZf
VFJBQ0VSIGlzIG5vdCBzZXQKQ09ORklHX1NDSEVEX1RSQUNFUj15CkNPTkZJR19GVFJBQ0VfU1lT
Q0FMTFM9eQpDT05GSUdfVFJBQ0VSX1NOQVBTSE9UPXkKQ09ORklHX1RSQUNFUl9TTkFQU0hPVF9Q
RVJfQ1BVX1NXQVA9eQpDT05GSUdfQlJBTkNIX1BST0ZJTEVfTk9ORT15CiMgQ09ORklHX1BST0ZJ
TEVfQU5OT1RBVEVEX0JSQU5DSEVTIGlzIG5vdCBzZXQKIyBDT05GSUdfUFJPRklMRV9BTExfQlJB
TkNIRVMgaXMgbm90IHNldApDT05GSUdfU1RBQ0tfVFJBQ0VSPXkKQ09ORklHX0JMS19ERVZfSU9f
VFJBQ0U9eQojIENPTkZJR19VUFJPQkVfRVZFTlQgaXMgbm90IHNldAojIENPTkZJR19QUk9CRV9F
VkVOVFMgaXMgbm90IHNldAojIENPTkZJR19EWU5BTUlDX0ZUUkFDRSBpcyBub3Qgc2V0CkNPTkZJ
R19GVU5DVElPTl9QUk9GSUxFUj15CiMgQ09ORklHX0ZUUkFDRV9TVEFSVFVQX1RFU1QgaXMgbm90
IHNldApDT05GSUdfUklOR19CVUZGRVJfQkVOQ0hNQVJLPXkKIyBDT05GSUdfUklOR19CVUZGRVJf
U1RBUlRVUF9URVNUIGlzIG5vdCBzZXQKCiMKIyBSdW50aW1lIFRlc3RpbmcKIwpDT05GSUdfTEtE
VE09eQpDT05GSUdfVEVTVF9MSVNUX1NPUlQ9eQojIENPTkZJR19CQUNLVFJBQ0VfU0VMRl9URVNU
IGlzIG5vdCBzZXQKIyBDT05GSUdfUkJUUkVFX1RFU1QgaXMgbm90IHNldApDT05GSUdfQVRPTUlD
NjRfU0VMRlRFU1Q9eQpDT05GSUdfVEVTVF9TVFJJTkdfSEVMUEVSUz15CkNPTkZJR19URVNUX0tT
VFJUT1g9eQojIENPTkZJR19ETUFfQVBJX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FNUExF
UyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0FSQ0hfS0dEQj15CkNPTkZJR19LR0RCPXkKQ09ORklH
X0tHREJfVEVTVFM9eQpDT05GSUdfS0dEQl9URVNUU19PTl9CT09UPXkKQ09ORklHX0tHREJfVEVT
VFNfQk9PVF9TVFJJTkc9IlYxRjEwMCIKQ09ORklHX0tHREJfTE9XX0xFVkVMX1RSQVA9eQojIENP
TkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0CkNPTkZJR19TVFJJQ1RfREVWTUVNPXkKQ09ORklHX1g4
Nl9WRVJCT1NFX0JPT1RVUD15CiMgQ09ORklHX0VBUkxZX1BSSU5USyBpcyBub3Qgc2V0CiMgQ09O
RklHX1g4Nl9QVERVTVAgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19ST0RBVEEgaXMgbm90IHNl
dAojIENPTkZJR19ET1VCTEVGQVVMVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1RMQkZMVVNI
IGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX1NUUkVTUz15CkNPTkZJR19IQVZFX01NSU9UUkFDRV9T
VVBQT1JUPXkKQ09ORklHX0lPX0RFTEFZX1RZUEVfMFg4MD0wCkNPTkZJR19JT19ERUxBWV9UWVBF
XzBYRUQ9MQpDT05GSUdfSU9fREVMQVlfVFlQRV9VREVMQVk9MgpDT05GSUdfSU9fREVMQVlfVFlQ
RV9OT05FPTMKQ09ORklHX0lPX0RFTEFZXzBYODA9eQojIENPTkZJR19JT19ERUxBWV8wWEVEIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU9fREVMQVlfVURFTEFZIGlzIG5vdCBzZXQKIyBDT05GSUdfSU9f
REVMQVlfTk9ORSBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX0lPX0RFTEFZX1RZUEU9MApDT05G
SUdfREVCVUdfQk9PVF9QQVJBTVM9eQpDT05GSUdfQ1BBX0RFQlVHPXkKIyBDT05GSUdfT1BUSU1J
WkVfSU5MSU5JTkcgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1QgaXMgbm90
IHNldAojIENPTkZJR19YODZfREVCVUdfU1RBVElDX0NQVV9IQVMgaXMgbm90IHNldAoKIwojIFNl
Y3VyaXR5IG9wdGlvbnMKIwojIENPTkZJR19LRVlTIGlzIG5vdCBzZXQKQ09ORklHX1NFQ1VSSVRZ
X0RNRVNHX1JFU1RSSUNUPXkKQ09ORklHX1NFQ1VSSVRZPXkKQ09ORklHX1NFQ1VSSVRZRlM9eQpD
T05GSUdfU0VDVVJJVFlfTkVUV09SSz15CkNPTkZJR19TRUNVUklUWV9QQVRIPXkKQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZTz15CkNPTkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FDQ0VQVF9FTlRSWT0y
MDQ4CkNPTkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FVRElUX0xPRz0xMDI0CiMgQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZT19PTUlUX1VTRVJTUEFDRV9MT0FERVIgaXMgbm90IHNldApDT05GSUdfU0VD
VVJJVFlfVE9NT1lPX1BPTElDWV9MT0FERVI9Ii9zYmluL3RvbW95by1pbml0IgpDT05GSUdfU0VD
VVJJVFlfVE9NT1lPX0FDVElWQVRJT05fVFJJR0dFUj0iL3NiaW4vaW5pdCIKQ09ORklHX1NFQ1VS
SVRZX0FQUEFSTU9SPXkKQ09ORklHX1NFQ1VSSVRZX0FQUEFSTU9SX0JPT1RQQVJBTV9WQUxVRT0x
CkNPTkZJR19TRUNVUklUWV9BUFBBUk1PUl9IQVNIPXkKIyBDT05GSUdfU0VDVVJJVFlfWUFNQSBp
cyBub3Qgc2V0CiMgQ09ORklHX0lNQSBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX1NFQ1VSSVRZ
X1RPTU9ZTz15CiMgQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlfQVBQQVJNT1IgaXMgbm90IHNldAoj
IENPTkZJR19ERUZBVUxUX1NFQ1VSSVRZX0RBQyBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX1NF
Q1VSSVRZPSJ0b21veW8iCkNPTkZJR19YT1JfQkxPQ0tTPXkKQ09ORklHX0NSWVBUTz15CgojCiMg
Q3J5cHRvIGNvcmUgb3IgaGVscGVyCiMKQ09ORklHX0NSWVBUT19BTEdBUEk9eQpDT05GSUdfQ1JZ
UFRPX0FMR0FQSTI9eQpDT05GSUdfQ1JZUFRPX0FFQUQ9eQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkK
Q09ORklHX0NSWVBUT19CTEtDSVBIRVI9eQpDT05GSUdfQ1JZUFRPX0JMS0NJUEhFUjI9eQpDT05G
SUdfQ1JZUFRPX0hBU0g9eQpDT05GSUdfQ1JZUFRPX0hBU0gyPXkKQ09ORklHX0NSWVBUT19STkc9
eQpDT05GSUdfQ1JZUFRPX1JORzI9eQpDT05GSUdfQ1JZUFRPX1BDT01QMj15CkNPTkZJR19DUllQ
VE9fTUFOQUdFUj15CkNPTkZJR19DUllQVE9fTUFOQUdFUjI9eQpDT05GSUdfQ1JZUFRPX1VTRVI9
eQpDT05GSUdfQ1JZUFRPX01BTkFHRVJfRElTQUJMRV9URVNUUz15CkNPTkZJR19DUllQVE9fR0Yx
MjhNVUw9eQpDT05GSUdfQ1JZUFRPX05VTEw9eQpDT05GSUdfQ1JZUFRPX1BDUllQVD15CkNPTkZJ
R19DUllQVE9fV09SS1FVRVVFPXkKQ09ORklHX0NSWVBUT19DUllQVEQ9eQojIENPTkZJR19DUllQ
VE9fQVVUSEVOQyBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fQUJMS19IRUxQRVI9eQpDT05GSUdf
Q1JZUFRPX0dMVUVfSEVMUEVSX1g4Nj15CgojCiMgQXV0aGVudGljYXRlZCBFbmNyeXB0aW9uIHdp
dGggQXNzb2NpYXRlZCBEYXRhCiMKQ09ORklHX0NSWVBUT19DQ009eQpDT05GSUdfQ1JZUFRPX0dD
TT15CkNPTkZJR19DUllQVE9fU0VRSVY9eQoKIwojIEJsb2NrIG1vZGVzCiMKQ09ORklHX0NSWVBU
T19DQkM9eQpDT05GSUdfQ1JZUFRPX0NUUj15CiMgQ09ORklHX0NSWVBUT19DVFMgaXMgbm90IHNl
dApDT05GSUdfQ1JZUFRPX0VDQj15CkNPTkZJR19DUllQVE9fTFJXPXkKQ09ORklHX0NSWVBUT19Q
Q0JDPXkKQ09ORklHX0NSWVBUT19YVFM9eQoKIwojIEhhc2ggbW9kZXMKIwojIENPTkZJR19DUllQ
VE9fQ01BQyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19ITUFDIGlzIG5vdCBzZXQKQ09ORklH
X0NSWVBUT19YQ0JDPXkKQ09ORklHX0NSWVBUT19WTUFDPXkKCiMKIyBEaWdlc3QKIwpDT05GSUdf
Q1JZUFRPX0NSQzMyQz15CkNPTkZJR19DUllQVE9fQ1JDMzJDX0lOVEVMPXkKIyBDT05GSUdfQ1JZ
UFRPX0NSQzMyIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUw9eQpDT05GSUdf
Q1JZUFRPX0NSQ1QxMERJRj15CiMgQ09ORklHX0NSWVBUT19DUkNUMTBESUZfUENMTVVMIGlzIG5v
dCBzZXQKQ09ORklHX0NSWVBUT19HSEFTSD15CkNPTkZJR19DUllQVE9fTUQ0PXkKQ09ORklHX0NS
WVBUT19NRDU9eQpDT05GSUdfQ1JZUFRPX01JQ0hBRUxfTUlDPXkKIyBDT05GSUdfQ1JZUFRPX1JN
RDEyOCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fUk1EMTYwPXkKQ09ORklHX0NSWVBUT19STUQy
NTY9eQpDT05GSUdfQ1JZUFRPX1JNRDMyMD15CkNPTkZJR19DUllQVE9fU0hBMT15CkNPTkZJR19D
UllQVE9fU0hBMV9TU1NFMz15CkNPTkZJR19DUllQVE9fU0hBMjU2X1NTU0UzPXkKQ09ORklHX0NS
WVBUT19TSEE1MTJfU1NTRTM9eQpDT05GSUdfQ1JZUFRPX1NIQTI1Nj15CkNPTkZJR19DUllQVE9f
U0hBNTEyPXkKQ09ORklHX0NSWVBUT19UR1IxOTI9eQpDT05GSUdfQ1JZUFRPX1dQNTEyPXkKIyBD
T05GSUdfQ1JZUFRPX0dIQVNIX0NMTVVMX05JX0lOVEVMIGlzIG5vdCBzZXQKCiMKIyBDaXBoZXJz
CiMKQ09ORklHX0NSWVBUT19BRVM9eQpDT05GSUdfQ1JZUFRPX0FFU19YODZfNjQ9eQpDT05GSUdf
Q1JZUFRPX0FFU19OSV9JTlRFTD15CkNPTkZJR19DUllQVE9fQU5VQklTPXkKQ09ORklHX0NSWVBU
T19BUkM0PXkKQ09ORklHX0NSWVBUT19CTE9XRklTSD15CkNPTkZJR19DUllQVE9fQkxPV0ZJU0hf
Q09NTU9OPXkKQ09ORklHX0NSWVBUT19CTE9XRklTSF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NB
TUVMTElBPXkKQ09ORklHX0NSWVBUT19DQU1FTExJQV9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NB
TUVMTElBX0FFU05JX0FWWF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FFU05JX0FW
WDJfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19DQVNUX0NPTU1PTj15CkNPTkZJR19DUllQVE9fQ0FT
VDU9eQpDT05GSUdfQ1JZUFRPX0NBU1Q1X0FWWF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBU1Q2
PXkKQ09ORklHX0NSWVBUT19DQVNUNl9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19ERVM9eQpD
T05GSUdfQ1JZUFRPX0ZDUllQVD15CiMgQ09ORklHX0NSWVBUT19LSEFaQUQgaXMgbm90IHNldApD
T05GSUdfQ1JZUFRPX1NBTFNBMjA9eQpDT05GSUdfQ1JZUFRPX1NBTFNBMjBfWDg2XzY0PXkKQ09O
RklHX0NSWVBUT19TRUVEPXkKQ09ORklHX0NSWVBUT19TRVJQRU5UPXkKQ09ORklHX0NSWVBUT19T
RVJQRU5UX1NTRTJfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19TRVJQRU5UX0FWWF9YODZfNjQ9eQpD
T05GSUdfQ1JZUFRPX1NFUlBFTlRfQVZYMl9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX1RFQT15CiMg
Q09ORklHX0NSWVBUT19UV09GSVNIIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19UV09GSVNIX0NP
TU1PTj15CkNPTkZJR19DUllQVE9fVFdPRklTSF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX1RXT0ZJ
U0hfWDg2XzY0XzNXQVk9eQojIENPTkZJR19DUllQVE9fVFdPRklTSF9BVlhfWDg2XzY0IGlzIG5v
dCBzZXQKCiMKIyBDb21wcmVzc2lvbgojCiMgQ09ORklHX0NSWVBUT19ERUZMQVRFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fTFpPIGlz
IG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0xaNCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fTFo0
SEM9eQoKIwojIFJhbmRvbSBOdW1iZXIgR2VuZXJhdGlvbgojCkNPTkZJR19DUllQVE9fQU5TSV9D
UFJORz15CkNPTkZJR19DUllQVE9fVVNFUl9BUEk9eQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX0hB
U0g9eQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1NLQ0lQSEVSPXkKIyBDT05GSUdfQ1JZUFRPX0hX
IGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfS1ZNPXkKQ09ORklHX0hBVkVfS1ZNX0lSUUNISVA9eQpD
T05GSUdfSEFWRV9LVk1fSVJRX1JPVVRJTkc9eQpDT05GSUdfSEFWRV9LVk1fRVZFTlRGRD15CkNP
TkZJR19LVk1fQVBJQ19BUkNISVRFQ1RVUkU9eQpDT05GSUdfS1ZNX01NSU89eQpDT05GSUdfS1ZN
X0FTWU5DX1BGPXkKQ09ORklHX0hBVkVfS1ZNX01TST15CkNPTkZJR19IQVZFX0tWTV9DUFVfUkVM
QVhfSU5URVJDRVBUPXkKQ09ORklHX0tWTV9WRklPPXkKQ09ORklHX1ZJUlRVQUxJWkFUSU9OPXkK
Q09ORklHX0tWTT15CkNPTkZJR19LVk1fSU5URUw9eQojIENPTkZJR19LVk1fQU1EIGlzIG5vdCBz
ZXQKIyBDT05GSUdfS1ZNX01NVV9BVURJVCBpcyBub3Qgc2V0CkNPTkZJR19CSU5BUllfUFJJTlRG
PXkKCiMKIyBMaWJyYXJ5IHJvdXRpbmVzCiMKQ09ORklHX1JBSUQ2X1BRPXkKQ09ORklHX0JJVFJF
VkVSU0U9eQpDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15CkNPTkZJR19HRU5FUklD
X1NUUk5MRU5fVVNFUj15CkNPTkZJR19HRU5FUklDX05FVF9VVElMUz15CkNPTkZJR19HRU5FUklD
X0ZJTkRfRklSU1RfQklUPXkKQ09ORklHX0dFTkVSSUNfUENJX0lPTUFQPXkKQ09ORklHX0dFTkVS
SUNfSU9NQVA9eQpDT05GSUdfR0VORVJJQ19JTz15CkNPTkZJR19BUkNIX1VTRV9DTVBYQ0hHX0xP
Q0tSRUY9eQpDT05GSUdfQ1JDX0NDSVRUPXkKQ09ORklHX0NSQzE2PXkKQ09ORklHX0NSQ19UMTBE
SUY9eQpDT05GSUdfQ1JDX0lUVV9UPXkKQ09ORklHX0NSQzMyPXkKQ09ORklHX0NSQzMyX1NFTEZU
RVNUPXkKQ09ORklHX0NSQzMyX1NMSUNFQlk4PXkKIyBDT05GSUdfQ1JDMzJfU0xJQ0VCWTQgaXMg
bm90IHNldAojIENPTkZJR19DUkMzMl9TQVJXQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JDMzJf
QklUIGlzIG5vdCBzZXQKQ09ORklHX0NSQzc9eQpDT05GSUdfTElCQ1JDMzJDPXkKQ09ORklHX0NS
Qzg9eQpDT05GSUdfQ1JDNjRfRUNNQT15CkNPTkZJR19SQU5ET00zMl9TRUxGVEVTVD15CkNPTkZJ
R19aTElCX0lORkxBVEU9eQpDT05GSUdfWkxJQl9ERUZMQVRFPXkKQ09ORklHX0xaT19DT01QUkVT
Uz15CkNPTkZJR19MWk9fREVDT01QUkVTUz15CkNPTkZJR19MWjRIQ19DT01QUkVTUz15CkNPTkZJ
R19MWjRfREVDT01QUkVTUz15CkNPTkZJR19YWl9ERUM9eQojIENPTkZJR19YWl9ERUNfWDg2IGlz
IG5vdCBzZXQKIyBDT05GSUdfWFpfREVDX1BPV0VSUEMgaXMgbm90IHNldApDT05GSUdfWFpfREVD
X0lBNjQ9eQpDT05GSUdfWFpfREVDX0FSTT15CkNPTkZJR19YWl9ERUNfQVJNVEhVTUI9eQpDT05G
SUdfWFpfREVDX1NQQVJDPXkKQ09ORklHX1haX0RFQ19CQ0o9eQojIENPTkZJR19YWl9ERUNfVEVT
VCBpcyBub3Qgc2V0CkNPTkZJR19ERUNPTVBSRVNTX0daSVA9eQpDT05GSUdfREVDT01QUkVTU19C
WklQMj15CkNPTkZJR19ERUNPTVBSRVNTX0xaTz15CkNPTkZJR19ERUNPTVBSRVNTX0xaND15CkNP
TkZJR19HRU5FUklDX0FMTE9DQVRPUj15CkNPTkZJR19SRUVEX1NPTE9NT049eQpDT05GSUdfUkVF
RF9TT0xPTU9OX0VOQzg9eQpDT05GSUdfUkVFRF9TT0xPTU9OX0RFQzg9eQpDT05GSUdfQlRSRUU9
eQpDT05GSUdfSEFTX0lPTUVNPXkKQ09ORklHX0hBU19JT1BPUlQ9eQpDT05GSUdfSEFTX0RNQT15
CkNPTkZJR19DUFVNQVNLX09GRlNUQUNLPXkKQ09ORklHX0NQVV9STUFQPXkKQ09ORklHX0RRTD15
CkNPTkZJR19OTEFUVFI9eQpDT05GSUdfQVJDSF9IQVNfQVRPTUlDNjRfREVDX0lGX1BPU0lUSVZF
PXkKIyBDT05GSUdfQVZFUkFHRSBpcyBub3Qgc2V0CiMgQ09ORklHX0NPUkRJQyBpcyBub3Qgc2V0
CiMgQ09ORklHX0REUiBpcyBub3Qgc2V0Cg==
--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec51d2998bb54ca04ef7d0f58--


From xen-devel-bounces@lists.xen.org Wed Jan 08 22:32:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11fV-0000RK-4h; Wed, 08 Jan 2014 22:32:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jim.epost@gmail.com>) id 1W11fS-0000RC-Hf
	for xen-devel@lists.xenproject.org; Wed, 08 Jan 2014 22:32:07 +0000
Received: from [193.109.254.147:2525] by server-12.bemta-14.messagelabs.com id
	38/1F-13681-5E1DDC25; Wed, 08 Jan 2014 22:32:05 +0000
X-Env-Sender: jim.epost@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389220321!9692059!1
X-Originating-IP: [209.85.223.170]
X-SpamReason: No, hits=1.5 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,UNIQUE_WORDS,UPPERCASE_50_75,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12040 invoked from network); 8 Jan 2014 22:32:02 -0000
Received: from mail-ie0-f170.google.com (HELO mail-ie0-f170.google.com)
	(209.85.223.170)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 22:32:02 -0000
Received: by mail-ie0-f170.google.com with SMTP id tq11so2021026ieb.15
	for <xen-devel@lists.xenproject.org>;
	Wed, 08 Jan 2014 14:32:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=TZKVv5H0KhwAFDVtlslfrb9WqsTAISGVU32Bm78XUpU=;
	b=sWKGdhlxGA5DQDJ6mWbJmc8/C0DRn/qARW1Wk0n6enav8eM5Q+NM4cXaZzUcgGd84R
	JpgXpAjHmuQXZINqyA8ETVXHd8wA4jKsGY8iyprRUg0aag2RWyHJPZFgCjAdxFKMXfWe
	jABmDBkDUDXKoAma1g+qd3Z9/9houDawFItjgHGpeD7VcmXPK0A71xQ2wvUP8FbF/xDh
	rroBM4hBLl/T5XR/WOCxSG3SiAgp7zm/4ylJZKJPhr0GVpaKfl+R3zbuYLfuxZnyRXv0
	NWENhK4anKthF+FXK9ubq9Czjvnx5jzdvHtnzDt+a8cXs5oa7E3ian/1n32KIhV8C8JZ
	lAUw==
MIME-Version: 1.0
X-Received: by 10.43.65.145 with SMTP id xm17mr90954232icb.35.1389220320720;
	Wed, 08 Jan 2014 14:32:00 -0800 (PST)
Received: by 10.42.85.71 with HTTP; Wed, 8 Jan 2014 14:32:00 -0800 (PST)
Date: Wed, 8 Jan 2014 15:32:00 -0700
Message-ID: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
From: Jim Davis <jim.epost@gmail.com>
To: Stephen Rothwell <sfr@canb.auug.org.au>, linux-next@vger.kernel.org, 
	linux-kernel@vger.kernel.org, konrad.wilk@oracle.com, 
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com, 
	xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary=bcaec51d2998bb54ca04ef7d0f58
Subject: [Xen-devel] randconfig build error with next-20140108,
	in drivers/xen/platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Building with the attached random configuration file,

drivers/xen/platform-pci.c: In function ‘platform_pci_init’:
drivers/xen/platform-pci.c:131:2: error: implicit declaration of
function ‘pci_request_region’ [-Werror=implicit-function-declaration]
  ret = pci_request_region(pdev, 1, DRV_NAME);
  ^
drivers/xen/platform-pci.c:170:2: error: implicit declaration of
function ‘pci_release_region’ [-Werror=implicit-function-declaration]
  pci_release_region(pdev, 0);
  ^
cc1: some warnings being treated as errors
make[2]: *** [drivers/xen/platform-pci.o] Error 1

These warnings appeared too:

warning: (XEN_PVH) selects XEN_PVHVM which has unmet direct
dependencies (HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC)
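The unmet-dependency warning is the likely root cause of the build errors: the randconfig enables XEN_PVH, whose `select XEN_PVHVM` forces in code that calls pci_request_region()/pci_release_region() even though CONFIG_PCI is not set, so the PCI declarations are never visible. A plausible shape of the fix (a sketch of the usual Kconfig pattern, not the actual patch; option names and help text as they appear in the warning) would be to make the selecting option carry the selected option's dependencies itself:

```kconfig
# Hypothetical sketch: a "select" does not check the target's
# dependencies, so the selector must repeat them explicitly.
config XEN_PVH
	bool "Support for running as a PVH guest"
	depends on X86_64 && XEN && PCI && X86_LOCAL_APIC
	select XEN_PVHVM
```

With the dependencies duplicated on the selector, a randconfig that lacks PCI can no longer turn on XEN_PVH, and platform-pci.c is only built in configurations where the pci_*_region() declarations exist.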

--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset=US-ASCII; name="randconfig-1389218754.txt"
Content-Disposition: attachment; filename="randconfig-1389218754.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hq75v4xc0

IwojIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgojIExpbnV4L3g4
NiAzLjEzLjAtcmM3IEtlcm5lbCBDb25maWd1cmF0aW9uCiMKQ09ORklHXzY0QklUPXkKQ09ORklH
X1g4Nl82ND15CkNPTkZJR19YODY9eQpDT05GSUdfSU5TVFJVQ1RJT05fREVDT0RFUj15CkNPTkZJ
R19PVVRQVVRfRk9STUFUPSJlbGY2NC14ODYtNjQiCkNPTkZJR19BUkNIX0RFRkNPTkZJRz0iYXJj
aC94ODYvY29uZmlncy94ODZfNjRfZGVmY29uZmlnIgpDT05GSUdfTE9DS0RFUF9TVVBQT1JUPXkK
Q09ORklHX1NUQUNLVFJBQ0VfU1VQUE9SVD15CkNPTkZJR19IQVZFX0xBVEVOQ1lUT1BfU1VQUE9S
VD15CkNPTkZJR19NTVU9eQpDT05GSUdfTkVFRF9ETUFfTUFQX1NUQVRFPXkKQ09ORklHX05FRURf
U0dfRE1BX0xFTkdUSD15CkNPTkZJR19HRU5FUklDX0lTQV9ETUE9eQpDT05GSUdfR0VORVJJQ19I
V0VJR0hUPXkKQ09ORklHX0FSQ0hfTUFZX0hBVkVfUENfRkRDPXkKQ09ORklHX1JXU0VNX1hDSEdB
RERfQUxHT1JJVEhNPXkKQ09ORklHX0dFTkVSSUNfQ0FMSUJSQVRFX0RFTEFZPXkKQ09ORklHX0FS
Q0hfSEFTX0NQVV9SRUxBWD15CkNPTkZJR19BUkNIX0hBU19DQUNIRV9MSU5FX1NJWkU9eQpDT05G
SUdfQVJDSF9IQVNfQ1BVX0FVVE9QUk9CRT15CkNPTkZJR19IQVZFX1NFVFVQX1BFUl9DUFVfQVJF
QT15CkNPTkZJR19ORUVEX1BFUl9DUFVfRU1CRURfRklSU1RfQ0hVTks9eQpDT05GSUdfTkVFRF9Q
RVJfQ1BVX1BBR0VfRklSU1RfQ0hVTks9eQpDT05GSUdfQVJDSF9ISUJFUk5BVElPTl9QT1NTSUJM
RT15CkNPTkZJR19BUkNIX1NVU1BFTkRfUE9TU0lCTEU9eQpDT05GSUdfQVJDSF9XQU5UX0hVR0Vf
UE1EX1NIQVJFPXkKQ09ORklHX0FSQ0hfV0FOVF9HRU5FUkFMX0hVR0VUTEI9eQpDT05GSUdfWk9O
RV9ETUEzMj15CkNPTkZJR19BVURJVF9BUkNIPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfT1BUSU1J
WkVEX0lOTElOSU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfREVCVUdfUEFHRUFMTE9DPXkKQ09O
RklHX1g4Nl82NF9TTVA9eQpDT05GSUdfWDg2X0hUPXkKQ09ORklHX0FSQ0hfSFdFSUdIVF9DRkxB
R1M9Ii1mY2FsbC1zYXZlZC1yZGkgLWZjYWxsLXNhdmVkLXJzaSAtZmNhbGwtc2F2ZWQtcmR4IC1m
Y2FsbC1zYXZlZC1yY3ggLWZjYWxsLXNhdmVkLXI4IC1mY2FsbC1zYXZlZC1yOSAtZmNhbGwtc2F2
ZWQtcjEwIC1mY2FsbC1zYXZlZC1yMTEiCkNPTkZJR19BUkNIX1NVUFBPUlRTX1VQUk9CRVM9eQpD
T05GSUdfREVGQ09ORklHX0xJU1Q9Ii9saWIvbW9kdWxlcy8kVU5BTUVfUkVMRUFTRS8uY29uZmln
IgpDT05GSUdfSVJRX1dPUks9eQpDT05GSUdfQlVJTERUSU1FX0VYVEFCTEVfU09SVD15CgojCiMg
R2VuZXJhbCBzZXR1cAojCkNPTkZJR19JTklUX0VOVl9BUkdfTElNSVQ9MzIKQ09ORklHX0NST1NT
X0NPTVBJTEU9IiIKQ09ORklHX0NPTVBJTEVfVEVTVD15CkNPTkZJR19MT0NBTFZFUlNJT049IiIK
IyBDT05GSUdfTE9DQUxWRVJTSU9OX0FVVE8gaXMgbm90IHNldApDT05GSUdfSEFWRV9LRVJORUxf
R1pJUD15CkNPTkZJR19IQVZFX0tFUk5FTF9CWklQMj15CkNPTkZJR19IQVZFX0tFUk5FTF9MWk1B
PXkKQ09ORklHX0hBVkVfS0VSTkVMX1haPXkKQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15CkNPTkZJ
R19IQVZFX0tFUk5FTF9MWjQ9eQpDT05GSUdfS0VSTkVMX0daSVA9eQojIENPTkZJR19LRVJORUxf
QlpJUDIgaXMgbm90IHNldAojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0CiMgQ09ORklH
X0tFUk5FTF9YWiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWk8gaXMgbm90IHNldAojIENP
TkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSE9TVE5BTUU9Iihub25l
KSIKQ09ORklHX1NXQVA9eQpDT05GSUdfU1lTVklQQz15CkNPTkZJR19QT1NJWF9NUVVFVUU9eQoj
IENPTkZJR19GSEFORExFIGlzIG5vdCBzZXQKQ09ORklHX0FVRElUPXkKQ09ORklHX0FVRElUU1lT
Q0FMTD15CkNPTkZJR19BVURJVF9XQVRDSD15CkNPTkZJR19BVURJVF9UUkVFPXkKCiMKIyBJUlEg
c3Vic3lzdGVtCiMKQ09ORklHX0dFTkVSSUNfSVJRX1BST0JFPXkKQ09ORklHX0dFTkVSSUNfSVJR
X1NIT1c9eQpDT05GSUdfR0VORVJJQ19QRU5ESU5HX0lSUT15CkNPTkZJR19JUlFfRE9NQUlOPXkK
IyBDT05GSUdfSVJRX0RPTUFJTl9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19JUlFfRk9SQ0VEX1RI
UkVBRElORz15CkNPTkZJR19TUEFSU0VfSVJRPXkKQ09ORklHX0NMT0NLU09VUkNFX1dBVENIRE9H
PXkKQ09ORklHX0FSQ0hfQ0xPQ0tTT1VSQ0VfREFUQT15CkNPTkZJR19HRU5FUklDX1RJTUVfVlNZ
U0NBTEw9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5UUz15CkNPTkZJR19HRU5FUklDX0NMT0NL
RVZFTlRTX0JVSUxEPXkKQ09ORklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlJPQURDQVNUPXkKQ09O
RklHX0dFTkVSSUNfQ0xPQ0tFVkVOVFNfTUlOX0FESlVTVD15CkNPTkZJR19HRU5FUklDX0NNT1Nf
VVBEQVRFPXkKCiMKIyBUaW1lcnMgc3Vic3lzdGVtCiMKQ09ORklHX1RJQ0tfT05FU0hPVD15CkNP
TkZJR19OT19IWl9DT01NT049eQojIENPTkZJR19IWl9QRVJJT0RJQyBpcyBub3Qgc2V0CkNPTkZJ
R19OT19IWl9JRExFPXkKIyBDT05GSUdfTk9fSFpfRlVMTCBpcyBub3Qgc2V0CkNPTkZJR19OT19I
Wj15CkNPTkZJR19ISUdIX1JFU19USU1FUlM9eQoKIwojIENQVS9UYXNrIHRpbWUgYW5kIHN0YXRz
IGFjY291bnRpbmcKIwpDT05GSUdfVElDS19DUFVfQUNDT1VOVElORz15CiMgQ09ORklHX1ZJUlRf
Q1BVX0FDQ09VTlRJTkdfR0VOIGlzIG5vdCBzZXQKIyBDT05GSUdfSVJRX1RJTUVfQUNDT1VOVElO
RyBpcyBub3Qgc2V0CkNPTkZJR19CU0RfUFJPQ0VTU19BQ0NUPXkKQ09ORklHX0JTRF9QUk9DRVNT
X0FDQ1RfVjM9eQpDT05GSUdfVEFTS1NUQVRTPXkKQ09ORklHX1RBU0tfREVMQVlfQUNDVD15CiMg
Q09ORklHX1RBU0tfWEFDQ1QgaXMgbm90IHNldAoKIwojIFJDVSBTdWJzeXN0ZW0KIwpDT05GSUdf
VFJFRV9SQ1U9eQojIENPTkZJR19QUkVFTVBUX1JDVSBpcyBub3Qgc2V0CkNPTkZJR19SQ1VfU1RB
TExfQ09NTU9OPXkKQ09ORklHX0NPTlRFWFRfVFJBQ0tJTkc9eQpDT05GSUdfUkNVX1VTRVJfUVM9
eQpDT05GSUdfQ09OVEVYVF9UUkFDS0lOR19GT1JDRT15CkNPTkZJR19SQ1VfRkFOT1VUPTY0CkNP
TkZJR19SQ1VfRkFOT1VUX0xFQUY9MTYKQ09ORklHX1JDVV9GQU5PVVRfRVhBQ1Q9eQojIENPTkZJ
R19SQ1VfRkFTVF9OT19IWiBpcyBub3Qgc2V0CkNPTkZJR19UUkVFX1JDVV9UUkFDRT15CiMgQ09O
RklHX1JDVV9OT0NCX0NQVSBpcyBub3Qgc2V0CkNPTkZJR19JS0NPTkZJRz15CiMgQ09ORklHX0lL
Q09ORklHX1BST0MgaXMgbm90IHNldApDT05GSUdfTE9HX0JVRl9TSElGVD0xNwpDT05GSUdfSEFW
RV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX05VTUFfQkFMQU5D
SU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfSU5UMTI4PXkKQ09ORklHX0FSQ0hfV0FOVFNfUFJP
VF9OVU1BX1BST1RfTk9ORT15CkNPTkZJR19BUkNIX1VTRVNfTlVNQV9QUk9UX05PTkU9eQojIENP
TkZJR19OVU1BX0JBTEFOQ0lOR19ERUZBVUxUX0VOQUJMRUQgaXMgbm90IHNldApDT05GSUdfTlVN
QV9CQUxBTkNJTkc9eQpDT05GSUdfQ0dST1VQUz15CiMgQ09ORklHX0NHUk9VUF9ERUJVRyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NHUk9VUF9GUkVFWkVSIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0dST1VQ
X0RFVklDRSBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVVNFVFMgaXMgbm90IHNldApDT05GSUdfQ0dS
T1VQX0NQVUFDQ1Q9eQojIENPTkZJR19SRVNPVVJDRV9DT1VOVEVSUyBpcyBub3Qgc2V0CiMgQ09O
RklHX0NHUk9VUF9QRVJGIGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUF9TQ0hFRD15CkNPTkZJR19G
QUlSX0dST1VQX1NDSEVEPXkKIyBDT05GSUdfQ0ZTX0JBTkRXSURUSCBpcyBub3Qgc2V0CkNPTkZJ
R19SVF9HUk9VUF9TQ0hFRD15CiMgQ09ORklHX0JMS19DR1JPVVAgaXMgbm90IHNldApDT05GSUdf
Q0hFQ0tQT0lOVF9SRVNUT1JFPXkKIyBDT05GSUdfTkFNRVNQQUNFUyBpcyBub3Qgc2V0CiMgQ09O
RklHX1VJREdJRF9TVFJJQ1RfVFlQRV9DSEVDS1MgaXMgbm90IHNldApDT05GSUdfU0NIRURfQVVU
T0dST1VQPXkKQ09ORklHX1NZU0ZTX0RFUFJFQ0FURUQ9eQojIENPTkZJR19TWVNGU19ERVBSRUNB
VEVEX1YyIGlzIG5vdCBzZXQKQ09ORklHX1JFTEFZPXkKQ09ORklHX0JMS19ERVZfSU5JVFJEPXkK
Q09ORklHX0lOSVRSQU1GU19TT1VSQ0U9IiIKQ09ORklHX1JEX0daSVA9eQpDT05GSUdfUkRfQlpJ
UDI9eQojIENPTkZJR19SRF9MWk1BIGlzIG5vdCBzZXQKIyBDT05GSUdfUkRfWFogaXMgbm90IHNl
dApDT05GSUdfUkRfTFpPPXkKQ09ORklHX1JEX0xaND15CkNPTkZJR19DQ19PUFRJTUlaRV9GT1Jf
U0laRT15CkNPTkZJR19BTk9OX0lOT0RFUz15CkNPTkZJR19TWVNDVExfRVhDRVBUSU9OX1RSQUNF
PXkKQ09ORklHX0hBVkVfUENTUEtSX1BMQVRGT1JNPXkKQ09ORklHX0VYUEVSVD15CkNPTkZJR19L
QUxMU1lNUz15CkNPTkZJR19LQUxMU1lNU19BTEw9eQojIENPTkZJR19QUklOVEsgaXMgbm90IHNl
dAojIENPTkZJR19CVUcgaXMgbm90IHNldApDT05GSUdfRUxGX0NPUkU9eQpDT05GSUdfUENTUEtS
X1BMQVRGT1JNPXkKQ09ORklHX0JBU0VfRlVMTD15CiMgQ09ORklHX0ZVVEVYIGlzIG5vdCBzZXQK
Q09ORklHX0VQT0xMPXkKIyBDT05GSUdfU0lHTkFMRkQgaXMgbm90IHNldApDT05GSUdfVElNRVJG
RD15CkNPTkZJR19FVkVOVEZEPXkKIyBDT05GSUdfU0hNRU0gaXMgbm90IHNldAojIENPTkZJR19B
SU8gaXMgbm90IHNldAojIENPTkZJR19FTUJFRERFRCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX1BF
UkZfRVZFTlRTPXkKQ09ORklHX1BFUkZfVVNFX1ZNQUxMT0M9eQoKIwojIEtlcm5lbCBQZXJmb3Jt
YW5jZSBFdmVudHMgQW5kIENvdW50ZXJzCiMKQ09ORklHX1BFUkZfRVZFTlRTPXkKQ09ORklHX0RF
QlVHX1BFUkZfVVNFX1ZNQUxMT0M9eQojIENPTkZJR19WTV9FVkVOVF9DT1VOVEVSUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1NMVUJfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19DT01QQVRfQlJLIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0xBQiBpcyBub3Qgc2V0CkNPTkZJR19TTFVCPXkKIyBDT05GSUdf
U0xPQiBpcyBub3Qgc2V0CkNPTkZJR19TTFVCX0NQVV9QQVJUSUFMPXkKQ09ORklHX1BST0ZJTElO
Rz15CkNPTkZJR19UUkFDRVBPSU5UUz15CkNPTkZJR19PUFJPRklMRT15CkNPTkZJR19PUFJPRklM
RV9FVkVOVF9NVUxUSVBMRVg9eQpDT05GSUdfSEFWRV9PUFJPRklMRT15CkNPTkZJR19PUFJPRklM
RV9OTUlfVElNRVI9eQojIENPTkZJR19KVU1QX0xBQkVMIGlzIG5vdCBzZXQKIyBDT05GSUdfSEFW
RV82NEJJVF9BTElHTkVEX0FDQ0VTUyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0VGRklDSUVOVF9V
TkFMSUdORURfQUNDRVNTPXkKQ09ORklHX0FSQ0hfVVNFX0JVSUxUSU5fQlNXQVA9eQpDT05GSUdf
VVNFUl9SRVRVUk5fTk9USUZJRVI9eQpDT05GSUdfSEFWRV9JT1JFTUFQX1BST1Q9eQpDT05GSUdf
SEFWRV9LUFJPQkVTPXkKQ09ORklHX0hBVkVfS1JFVFBST0JFUz15CkNPTkZJR19IQVZFX09QVFBS
T0JFUz15CkNPTkZJR19IQVZFX0tQUk9CRVNfT05fRlRSQUNFPXkKQ09ORklHX0hBVkVfQVJDSF9U
UkFDRUhPT0s9eQpDT05GSUdfSEFWRV9ETUFfQVRUUlM9eQpDT05GSUdfR0VORVJJQ19TTVBfSURM
RV9USFJFQUQ9eQpDT05GSUdfSEFWRV9SRUdTX0FORF9TVEFDS19BQ0NFU1NfQVBJPXkKQ09ORklH
X0hBVkVfRE1BX0FQSV9ERUJVRz15CkNPTkZJR19IQVZFX0hXX0JSRUFLUE9JTlQ9eQpDT05GSUdf
SEFWRV9NSVhFRF9CUkVBS1BPSU5UU19SRUdTPXkKQ09ORklHX0hBVkVfVVNFUl9SRVRVUk5fTk9U
SUZJRVI9eQpDT05GSUdfSEFWRV9QRVJGX0VWRU5UU19OTUk9eQpDT05GSUdfSEFWRV9QRVJGX1JF
R1M9eQpDT05GSUdfSEFWRV9QRVJGX1VTRVJfU1RBQ0tfRFVNUD15CkNPTkZJR19IQVZFX0FSQ0hf
SlVNUF9MQUJFTD15CkNPTkZJR19BUkNIX0hBVkVfTk1JX1NBRkVfQ01QWENIRz15CkNPTkZJR19I
QVZFX0FMSUdORURfU1RSVUNUX1BBR0U9eQpDT05GSUdfSEFWRV9DTVBYQ0hHX0xPQ0FMPXkKQ09O
RklHX0hBVkVfQ01QWENIR19ET1VCTEU9eQpDT05GSUdfSEFWRV9BUkNIX1NFQ0NPTVBfRklMVEVS
PXkKQ09ORklHX0hBVkVfQ0NfU1RBQ0tQUk9URUNUT1I9eQojIENPTkZJR19DQ19TVEFDS1BST1RF
Q1RPUiBpcyBub3Qgc2V0CkNPTkZJR19DQ19TVEFDS1BST1RFQ1RPUl9OT05FPXkKIyBDT05GSUdf
Q0NfU1RBQ0tQUk9URUNUT1JfUkVHVUxBUiBpcyBub3Qgc2V0CiMgQ09ORklHX0NDX1NUQUNLUFJP
VEVDVE9SX1NUUk9ORyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0NPTlRFWFRfVFJBQ0tJTkc9eQpD
T05GSUdfSEFWRV9WSVJUX0NQVV9BQ0NPVU5USU5HX0dFTj15CkNPTkZJR19IQVZFX0lSUV9USU1F
X0FDQ09VTlRJTkc9eQpDT05GSUdfSEFWRV9BUkNIX1RSQU5TUEFSRU5UX0hVR0VQQUdFPXkKQ09O
RklHX0hBVkVfQVJDSF9TT0ZUX0RJUlRZPXkKQ09ORklHX01PRFVMRVNfVVNFX0VMRl9SRUxBPXkK
Q09ORklHX0hBVkVfSVJRX0VYSVRfT05fSVJRX1NUQUNLPXkKCiMKIyBHQ09WLWJhc2VkIGtlcm5l
bCBwcm9maWxpbmcKIwojIENPTkZJR19HQ09WX0tFUk5FTCBpcyBub3Qgc2V0CiMgQ09ORklHX0hB
VkVfR0VORVJJQ19ETUFfQ09IRVJFTlQgaXMgbm90IHNldApDT05GSUdfUlRfTVVURVhFUz15CkNP
TkZJR19CQVNFX1NNQUxMPTAKIyBDT05GSUdfTU9EVUxFUyBpcyBub3Qgc2V0CkNPTkZJR19TVE9Q
X01BQ0hJTkU9eQpDT05GSUdfQkxPQ0s9eQpDT05GSUdfQkxLX0RFVl9CU0c9eQpDT05GSUdfQkxL
X0RFVl9CU0dMSUI9eQojIENPTkZJR19CTEtfREVWX0lOVEVHUklUWSBpcyBub3Qgc2V0CkNPTkZJ
R19CTEtfQ01ETElORV9QQVJTRVI9eQoKIwojIFBhcnRpdGlvbiBUeXBlcwojCkNPTkZJR19QQVJU
SVRJT05fQURWQU5DRUQ9eQpDT05GSUdfQUNPUk5fUEFSVElUSU9OPXkKQ09ORklHX0FDT1JOX1BB
UlRJVElPTl9DVU1BTkE9eQojIENPTkZJR19BQ09STl9QQVJUSVRJT05fRUVTT1ggaXMgbm90IHNl
dAojIENPTkZJR19BQ09STl9QQVJUSVRJT05fSUNTIGlzIG5vdCBzZXQKIyBDT05GSUdfQUNPUk5f
UEFSVElUSU9OX0FERlMgaXMgbm90IHNldAojIENPTkZJR19BQ09STl9QQVJUSVRJT05fUE9XRVJU
RUMgaXMgbm90IHNldApDT05GSUdfQUNPUk5fUEFSVElUSU9OX1JJU0NJWD15CiMgQ09ORklHX0FJ
WF9QQVJUSVRJT04gaXMgbm90IHNldAojIENPTkZJR19PU0ZfUEFSVElUSU9OIGlzIG5vdCBzZXQK
IyBDT05GSUdfQU1JR0FfUEFSVElUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRBUklfUEFSVElU
SU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfTUFDX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19N
U0RPU19QQVJUSVRJT049eQpDT05GSUdfQlNEX0RJU0tMQUJFTD15CiMgQ09ORklHX01JTklYX1NV
QlBBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19TT0xBUklTX1g4Nl9QQVJUSVRJT049eQpDT05G
SUdfVU5JWFdBUkVfRElTS0xBQkVMPXkKQ09ORklHX0xETV9QQVJUSVRJT049eQpDT05GSUdfTERN
X0RFQlVHPXkKQ09ORklHX1NHSV9QQVJUSVRJT049eQpDT05GSUdfVUxUUklYX1BBUlRJVElPTj15
CiMgQ09ORklHX1NVTl9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfS0FSTUFfUEFSVElUSU9O
PXkKIyBDT05GSUdfRUZJX1BBUlRJVElPTiBpcyBub3Qgc2V0CkNPTkZJR19TWVNWNjhfUEFSVElU
SU9OPXkKQ09ORklHX0NNRExJTkVfUEFSVElUSU9OPXkKCiMKIyBJTyBTY2hlZHVsZXJzCiMKQ09O
RklHX0lPU0NIRURfTk9PUD15CkNPTkZJR19JT1NDSEVEX0RFQURMSU5FPXkKIyBDT05GSUdfSU9T
Q0hFRF9DRlEgaXMgbm90IHNldApDT05GSUdfREVGQVVMVF9ERUFETElORT15CiMgQ09ORklHX0RF
RkFVTFRfTk9PUCBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX0lPU0NIRUQ9ImRlYWRsaW5lIgpD
T05GSUdfUFJFRU1QVF9OT1RJRklFUlM9eQpDT05GSUdfUEFEQVRBPXkKQ09ORklHX1VOSU5MSU5F
X1NQSU5fVU5MT0NLPXkKQ09ORklHX0ZSRUVaRVI9eQoKIwojIFByb2Nlc3NvciB0eXBlIGFuZCBm
ZWF0dXJlcwojCkNPTkZJR19aT05FX0RNQT15CkNPTkZJR19TTVA9eQojIENPTkZJR19YODZfTVBQ
QVJTRSBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9FWFRFTkRFRF9QTEFURk9STSBpcyBub3Qgc2V0
CkNPTkZJR19YODZfU1VQUE9SVFNfTUVNT1JZX0ZBSUxVUkU9eQojIENPTkZJR19TQ0hFRF9PTUlU
X0ZSQU1FX1BPSU5URVIgaXMgbm90IHNldApDT05GSUdfSFlQRVJWSVNPUl9HVUVTVD15CkNPTkZJ
R19QQVJBVklSVD15CiMgQ09ORklHX1BBUkFWSVJUX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
UEFSQVZJUlRfU1BJTkxPQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1hFTj15CiMgQ09ORklHX1hFTl9Q
UklWSUxFR0VEX0dVRVNUIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9QVkhWTT15CkNPTkZJR19YRU5f
TUFYX0RPTUFJTl9NRU1PUlk9NTAwCkNPTkZJR19YRU5fU0FWRV9SRVNUT1JFPXkKQ09ORklHX1hF
Tl9ERUJVR19GUz15CkNPTkZJR19YRU5fUFZIPXkKIyBDT05GSUdfS1ZNX0dVRVNUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUEFSQVZJUlRfVElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQKQ09ORklHX1BB
UkFWSVJUX0NMT0NLPXkKQ09ORklHX05PX0JPT1RNRU09eQojIENPTkZJR19NRU1URVNUIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUs4IGlzIG5vdCBzZXQKIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0CiMg
Q09ORklHX01DT1JFMiBpcyBub3Qgc2V0CiMgQ09ORklHX01BVE9NIGlzIG5vdCBzZXQKQ09ORklH
X0dFTkVSSUNfQ1BVPXkKQ09ORklHX1g4Nl9JTlRFUk5PREVfQ0FDSEVfU0hJRlQ9NgpDT05GSUdf
WDg2X0wxX0NBQ0hFX1NISUZUPTYKQ09ORklHX1g4Nl9UU0M9eQpDT05GSUdfWDg2X0NNUFhDSEc2
ND15CkNPTkZJR19YODZfQ01PVj15CkNPTkZJR19YODZfTUlOSU1VTV9DUFVfRkFNSUxZPTY0CkNP
TkZJR19YODZfREVCVUdDVExNU1I9eQojIENPTkZJR19QUk9DRVNTT1JfU0VMRUNUIGlzIG5vdCBz
ZXQKQ09ORklHX0NQVV9TVVBfSU5URUw9eQpDT05GSUdfQ1BVX1NVUF9BTUQ9eQpDT05GSUdfQ1BV
X1NVUF9DRU5UQVVSPXkKQ09ORklHX0hQRVRfVElNRVI9eQpDT05GSUdfSFBFVF9FTVVMQVRFX1JU
Qz15CiMgQ09ORklHX0RNSSBpcyBub3Qgc2V0CkNPTkZJR19TV0lPVExCPXkKQ09ORklHX0lPTU1V
X0hFTFBFUj15CkNPTkZJR19NQVhTTVA9eQpDT05GSUdfTlJfQ1BVUz04MTkyCkNPTkZJR19TQ0hF
RF9TTVQ9eQpDT05GSUdfU0NIRURfTUM9eQpDT05GSUdfUFJFRU1QVF9OT05FPXkKIyBDT05GSUdf
UFJFRU1QVF9WT0xVTlRBUlkgaXMgbm90IHNldAojIENPTkZJR19QUkVFTVBUIGlzIG5vdCBzZXQK
Q09ORklHX1BSRUVNUFRfQ09VTlQ9eQpDT05GSUdfWDg2X0xPQ0FMX0FQSUM9eQpDT05GSUdfWDg2
X0lPX0FQSUM9eQojIENPTkZJR19YODZfUkVST1VURV9GT1JfQlJPS0VOX0JPT1RfSVJRUyBpcyBu
b3Qgc2V0CkNPTkZJR19YODZfTUNFPXkKIyBDT05GSUdfWDg2X01DRV9JTlRFTCBpcyBub3Qgc2V0
CiMgQ09ORklHX1g4Nl9NQ0VfQU1EIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9NQ0VfSU5KRUNUPXkK
Q09ORklHX0k4Sz15CiMgQ09ORklHX01JQ1JPQ09ERSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JP
Q09ERV9JTlRFTF9FQVJMWSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERV9BTURfRUFSTFkg
aXMgbm90IHNldAojIENPTkZJR19YODZfTVNSIGlzIG5vdCBzZXQKIyBDT05GSUdfWDg2X0NQVUlE
IGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hfUEhZU19BRERSX1RfNjRCSVQ9eQpDT05GSUdfQVJDSF9E
TUFfQUREUl9UXzY0QklUPXkKQ09ORklHX0RJUkVDVF9HQlBBR0VTPXkKQ09ORklHX05VTUE9eQoj
IENPTkZJR19OVU1BX0VNVSBpcyBub3Qgc2V0CkNPTkZJR19OT0RFU19TSElGVD0xMApDT05GSUdf
QVJDSF9TUEFSU0VNRU1fRU5BQkxFPXkKQ09ORklHX0FSQ0hfU1BBUlNFTUVNX0RFRkFVTFQ9eQpD
T05GSUdfQVJDSF9TRUxFQ1RfTUVNT1JZX01PREVMPXkKIyBDT05GSUdfQVJDSF9NRU1PUllfUFJP
QkUgaXMgbm90IHNldApDT05GSUdfSUxMRUdBTF9QT0lOVEVSX1ZBTFVFPTB4ZGVhZDAwMDAwMDAw
MDAwMApDT05GSUdfU0VMRUNUX01FTU9SWV9NT0RFTD15CkNPTkZJR19TUEFSU0VNRU1fTUFOVUFM
PXkKQ09ORklHX1NQQVJTRU1FTT15CkNPTkZJR19ORUVEX01VTFRJUExFX05PREVTPXkKQ09ORklH
X0hBVkVfTUVNT1JZX1BSRVNFTlQ9eQpDT05GSUdfU1BBUlNFTUVNX0VYVFJFTUU9eQpDT05GSUdf
U1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkKQ09ORklHX1NQQVJTRU1FTV9BTExPQ19NRU1fTUFQ
X1RPR0VUSEVSPXkKQ09ORklHX1NQQVJTRU1FTV9WTUVNTUFQPXkKQ09ORklHX0hBVkVfTUVNQkxP
Q0s9eQpDT05GSUdfSEFWRV9NRU1CTE9DS19OT0RFX01BUD15CkNPTkZJR19BUkNIX0RJU0NBUkRf
TUVNQkxPQ0s9eQpDT05GSUdfTUVNT1JZX0lTT0xBVElPTj15CiMgQ09ORklHX01PVkFCTEVfTk9E
RSBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RFPXkKQ09ORklHX01FTU9S
WV9IT1RQTFVHPXkKQ09ORklHX01FTU9SWV9IT1RQTFVHX1NQQVJTRT15CkNPTkZJR19NRU1PUllf
SE9UUkVNT1ZFPXkKQ09ORklHX1BBR0VGTEFHU19FWFRFTkRFRD15CkNPTkZJR19TUExJVF9QVExP
Q0tfQ1BVUz00CkNPTkZJR19BUkNIX0VOQUJMRV9TUExJVF9QTURfUFRMT0NLPXkKIyBDT05GSUdf
QkFMTE9PTl9DT01QQUNUSU9OIGlzIG5vdCBzZXQKQ09ORklHX0NPTVBBQ1RJT049eQpDT05GSUdf
TUlHUkFUSU9OPXkKQ09ORklHX1BIWVNfQUREUl9UXzY0QklUPXkKQ09ORklHX1pPTkVfRE1BX0ZM
QUc9MQojIENPTkZJR19CT1VOQ0UgaXMgbm90IHNldApDT05GSUdfVklSVF9UT19CVVM9eQpDT05G
SUdfTU1VX05PVElGSUVSPXkKIyBDT05GSUdfS1NNIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRf
TU1BUF9NSU5fQUREUj00MDk2CkNPTkZJR19BUkNIX1NVUFBPUlRTX01FTU9SWV9GQUlMVVJFPXkK
IyBDT05GSUdfTUVNT1JZX0ZBSUxVUkUgaXMgbm90IHNldApDT05GSUdfVFJBTlNQQVJFTlRfSFVH
RVBBR0U9eQpDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0VfQUxXQVlTPXkKIyBDT05GSUdfVFJB
TlNQQVJFTlRfSFVHRVBBR0VfTUFEVklTRSBpcyBub3Qgc2V0CkNPTkZJR19DUk9TU19NRU1PUllf
QVRUQUNIPXkKIyBDT05GSUdfQ0xFQU5DQUNIRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZST05UU1dB
UCBpcyBub3Qgc2V0CiMgQ09ORklHX0NNQSBpcyBub3Qgc2V0CiMgQ09ORklHX1pCVUQgaXMgbm90
IHNldApDT05GSUdfTUVNX1NPRlRfRElSVFk9eQpDT05GSUdfWlNNQUxMT0M9eQpDT05GSUdfUEdU
QUJMRV9NQVBQSU5HPXkKQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJVUFRJT049eQojIENPTkZJ
R19YODZfQk9PVFBBUkFNX01FTU9SWV9DT1JSVVBUSU9OX0NIRUNLIGlzIG5vdCBzZXQKQ09ORklH
X1g4Nl9SRVNFUlZFX0xPVz02NAojIENPTkZJR19NVFJSIGlzIG5vdCBzZXQKQ09ORklHX0FSQ0hf
UkFORE9NPXkKQ09ORklHX1g4Nl9TTUFQPXkKIyBDT05GSUdfU0VDQ09NUCBpcyBub3Qgc2V0CiMg
Q09ORklHX0haXzEwMCBpcyBub3Qgc2V0CkNPTkZJR19IWl8yNTA9eQojIENPTkZJR19IWl8zMDAg
aXMgbm90IHNldAojIENPTkZJR19IWl8xMDAwIGlzIG5vdCBzZXQKQ09ORklHX0haPTI1MApDT05G
SUdfU0NIRURfSFJUSUNLPXkKQ09ORklHX0tFWEVDPXkKIyBDT05GSUdfQ1JBU0hfRFVNUCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0tFWEVDX0pVTVAgaXMgbm90IHNldApDT05GSUdfUEhZU0lDQUxfU1RB
UlQ9MHgxMDAwMDAwCkNPTkZJR19SRUxPQ0FUQUJMRT15CkNPTkZJR19QSFlTSUNBTF9BTElHTj0w
eDIwMDAwMApDT05GSUdfSE9UUExVR19DUFU9eQojIENPTkZJR19CT09UUEFSQU1fSE9UUExVR19D
UFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfSE9UUExVR19DUFUwIGlzIG5vdCBzZXQKQ09O
RklHX0NNRExJTkVfQk9PTD15CkNPTkZJR19DTURMSU5FPSIiCkNPTkZJR19DTURMSU5FX09WRVJS
SURFPXkKQ09ORklHX0FSQ0hfRU5BQkxFX01FTU9SWV9IT1RQTFVHPXkKQ09ORklHX0FSQ0hfRU5B
QkxFX01FTU9SWV9IT1RSRU1PVkU9eQpDT05GSUdfVVNFX1BFUkNQVV9OVU1BX05PREVfSUQ9eQoK
IwojIFBvd2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9ucwojCkNPTkZJR19BUkNIX0hJQkVS
TkFUSU9OX0hFQURFUj15CiMgQ09ORklHX1NVU1BFTkQgaXMgbm90IHNldApDT05GSUdfSElCRVJO
QVRFX0NBTExCQUNLUz15CkNPTkZJR19ISUJFUk5BVElPTj15CkNPTkZJR19QTV9TVERfUEFSVElU
SU9OPSIiCkNPTkZJR19QTV9TTEVFUD15CkNPTkZJR19QTV9TTEVFUF9TTVA9eQpDT05GSUdfUE1f
QVVUT1NMRUVQPXkKIyBDT05GSUdfUE1fV0FLRUxPQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1BNX1JV
TlRJTUU9eQpDT05GSUdfUE09eQojIENPTkZJR19QTV9ERUJVRyBpcyBub3Qgc2V0CiMgQ09ORklH
X1dRX1BPV0VSX0VGRklDSUVOVF9ERUZBVUxUIGlzIG5vdCBzZXQKQ09ORklHX1NGST15CgojCiMg
Q1BVIEZyZXF1ZW5jeSBzY2FsaW5nCiMKIyBDT05GSUdfQ1BVX0ZSRVEgaXMgbm90IHNldAoKIwoj
IENQVSBJZGxlCiMKQ09ORklHX0NQVV9JRExFPXkKIyBDT05GSUdfQ1BVX0lETEVfTVVMVElQTEVf
RFJJVkVSUyBpcyBub3Qgc2V0CkNPTkZJR19DUFVfSURMRV9HT1ZfTEFEREVSPXkKQ09ORklHX0NQ
VV9JRExFX0dPVl9NRU5VPXkKIyBDT05GSUdfQVJDSF9ORUVEU19DUFVfSURMRV9DT1VQTEVEIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU5URUxfSURMRSBpcyBub3Qgc2V0CgojCiMgTWVtb3J5IHBvd2Vy
IHNhdmluZ3MKIwojIENPTkZJR19JNzMwMF9JRExFIGlzIG5vdCBzZXQKCiMKIyBCdXMgb3B0aW9u
cyAoUENJIGV0Yy4pCiMKIyBDT05GSUdfUENJIGlzIG5vdCBzZXQKQ09ORklHX0lTQV9ETUFfQVBJ
PXkKQ09ORklHX1BDQ0FSRD15CkNPTkZJR19QQ01DSUE9eQpDT05GSUdfUENNQ0lBX0xPQURfQ0lT
PXkKCiMKIyBQQy1jYXJkIGJyaWRnZXMKIwpDT05GSUdfWDg2X1NZU0ZCPXkKCiMKIyBFeGVjdXRh
YmxlIGZpbGUgZm9ybWF0cyAvIEVtdWxhdGlvbnMKIwpDT05GSUdfQklORk1UX0VMRj15CkNPTkZJ
R19BUkNIX0JJTkZNVF9FTEZfUkFORE9NSVpFX1BJRT15CkNPTkZJR19DT1JFX0RVTVBfREVGQVVM
VF9FTEZfSEVBREVSUz15CiMgQ09ORklHX0JJTkZNVF9TQ1JJUFQgaXMgbm90IHNldAojIENPTkZJ
R19IQVZFX0FPVVQgaXMgbm90IHNldAojIENPTkZJR19CSU5GTVRfTUlTQyBpcyBub3Qgc2V0CkNP
TkZJR19DT1JFRFVNUD15CiMgQ09ORklHX0lBMzJfRU1VTEFUSU9OIGlzIG5vdCBzZXQKQ09ORklH
X1g4Nl9ERVZfRE1BX09QUz15CkNPTkZJR19ORVQ9eQoKIwojIE5ldHdvcmtpbmcgb3B0aW9ucwoj
CkNPTkZJR19QQUNLRVQ9eQpDT05GSUdfUEFDS0VUX0RJQUc9eQpDT05GSUdfVU5JWD15CkNPTkZJ
R19VTklYX0RJQUc9eQojIENPTkZJR19ORVRfS0VZIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5FVCBp
cyBub3Qgc2V0CkNPTkZJR19ORVRXT1JLX1NFQ01BUks9eQpDT05GSUdfTkVUV09SS19QSFlfVElN
RVNUQU1QSU5HPXkKQ09ORklHX05FVEZJTFRFUj15CiMgQ09ORklHX05FVEZJTFRFUl9ERUJVRyBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVEZJTFRFUl9BRFZBTkNFRCBpcyBub3Qgc2V0CkNPTkZJR19B
VE09eQpDT05GSUdfQVRNX0xBTkU9eQojIENPTkZJR19CUklER0UgaXMgbm90IHNldApDT05GSUdf
SEFWRV9ORVRfRFNBPXkKQ09ORklHX05FVF9EU0E9eQpDT05GSUdfTkVUX0RTQV9UQUdfRFNBPXkK
Q09ORklHX05FVF9EU0FfVEFHX0VEU0E9eQpDT05GSUdfTkVUX0RTQV9UQUdfVFJBSUxFUj15CkNP
TkZJR19WTEFOXzgwMjFRPXkKIyBDT05GSUdfVkxBTl84MDIxUV9HVlJQIGlzIG5vdCBzZXQKIyBD
T05GSUdfVkxBTl84MDIxUV9NVlJQIGlzIG5vdCBzZXQKQ09ORklHX0RFQ05FVD15CkNPTkZJR19E
RUNORVRfUk9VVEVSPXkKQ09ORklHX0xMQz15CkNPTkZJR19MTEMyPXkKQ09ORklHX0lQWD15CkNP
TkZJR19JUFhfSU5URVJOPXkKIyBDT05GSUdfQVRBTEsgaXMgbm90IHNldAojIENPTkZJR19YMjUg
aXMgbm90IHNldApDT05GSUdfTEFQQj15CkNPTkZJR19QSE9ORVQ9eQojIENPTkZJR19JRUVFODAy
MTU0IGlzIG5vdCBzZXQKQ09ORklHX05FVF9TQ0hFRD15CgojCiMgUXVldWVpbmcvU2NoZWR1bGlu
ZwojCiMgQ09ORklHX05FVF9TQ0hfQ0JRIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9IVEIg
aXMgbm90IHNldApDT05GSUdfTkVUX1NDSF9IRlNDPXkKQ09ORklHX05FVF9TQ0hfQVRNPXkKIyBD
T05GSUdfTkVUX1NDSF9QUklPIGlzIG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfTVVMVElRPXkKQ09O
RklHX05FVF9TQ0hfUkVEPXkKQ09ORklHX05FVF9TQ0hfU0ZCPXkKQ09ORklHX05FVF9TQ0hfU0ZR
PXkKQ09ORklHX05FVF9TQ0hfVEVRTD15CiMgQ09ORklHX05FVF9TQ0hfVEJGIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1NDSF9HUkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EU01BUksg
aXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX05FVEVNIGlzIG5vdCBzZXQKQ09ORklHX05FVF9T
Q0hfRFJSPXkKIyBDT05GSUdfTkVUX1NDSF9NUVBSSU8gaXMgbm90IHNldApDT05GSUdfTkVUX1ND
SF9DSE9LRT15CkNPTkZJR19ORVRfU0NIX1FGUT15CkNPTkZJR19ORVRfU0NIX0NPREVMPXkKIyBD
T05GSUdfTkVUX1NDSF9GUV9DT0RFTCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfU0NIX0ZRPXkKQ09O
RklHX05FVF9TQ0hfSEhGPXkKQ09ORklHX05FVF9TQ0hfUElFPXkKQ09ORklHX05FVF9TQ0hfSU5H
UkVTUz15CkNPTkZJR19ORVRfU0NIX1BMVUc9eQoKIwojIENsYXNzaWZpY2F0aW9uCiMKQ09ORklH
X05FVF9DTFM9eQpDT05GSUdfTkVUX0NMU19CQVNJQz15CkNPTkZJR19ORVRfQ0xTX1RDSU5ERVg9
eQpDT05GSUdfTkVUX0NMU19GVz15CkNPTkZJR19ORVRfQ0xTX1UzMj15CiMgQ09ORklHX0NMU19V
MzJfUEVSRiBpcyBub3Qgc2V0CiMgQ09ORklHX0NMU19VMzJfTUFSSyBpcyBub3Qgc2V0CkNPTkZJ
R19ORVRfQ0xTX1JTVlA9eQpDT05GSUdfTkVUX0NMU19SU1ZQNj15CkNPTkZJR19ORVRfQ0xTX0ZM
T1c9eQojIENPTkZJR19ORVRfQ0xTX0NHUk9VUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfQ0xTX0JQ
Rj15CkNPTkZJR19ORVRfRU1BVENIPXkKQ09ORklHX05FVF9FTUFUQ0hfU1RBQ0s9MzIKIyBDT05G
SUdfTkVUX0VNQVRDSF9DTVAgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX05CWVRFIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX0VNQVRDSF9VMzIgaXMgbm90IHNldAojIENPTkZJR19ORVRf
RU1BVENIX01FVEEgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX1RFWFQgaXMgbm90IHNl
dApDT05GSUdfTkVUX0VNQVRDSF9DQU5JRD15CkNPTkZJR19ORVRfQ0xTX0FDVD15CiMgQ09ORklH
X05FVF9BQ1RfUE9MSUNFIGlzIG5vdCBzZXQKQ09ORklHX05FVF9BQ1RfR0FDVD15CiMgQ09ORklH
X0dBQ1RfUFJPQiBpcyBub3Qgc2V0CkNPTkZJR19ORVRfQUNUX01JUlJFRD15CkNPTkZJR19ORVRf
QUNUX05BVD15CkNPTkZJR19ORVRfQUNUX1BFRElUPXkKIyBDT05GSUdfTkVUX0FDVF9TSU1QIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9BQ1RfU0tCRURJVD15CiMgQ09ORklHX05FVF9DTFNfSU5EIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfRklGTz15CkNPTkZJR19EQ0I9eQpDT05GSUdfQkFUTUFO
X0FEVj15CiMgQ09ORklHX0JBVE1BTl9BRFZfTkMgaXMgbm90IHNldApDT05GSUdfQkFUTUFOX0FE
Vl9ERUJVRz15CkNPTkZJR19PUEVOVlNXSVRDSD15CkNPTkZJR19WU09DS0VUUz15CkNPTkZJR19O
RVRMSU5LX01NQVA9eQpDT05GSUdfTkVUTElOS19ESUFHPXkKIyBDT05GSUdfTkVUX01QTFNfR1NP
IGlzIG5vdCBzZXQKQ09ORklHX0hTUj15CkNPTkZJR19SUFM9eQpDT05GSUdfUkZTX0FDQ0VMPXkK
Q09ORklHX1hQUz15CiMgQ09ORklHX0NHUk9VUF9ORVRfUFJJTyBpcyBub3Qgc2V0CkNPTkZJR19D
R1JPVVBfTkVUX0NMQVNTSUQ9eQpDT05GSUdfTkVUX1JYX0JVU1lfUE9MTD15CkNPTkZJR19CUUw9
eQpDT05GSUdfTkVUX0ZMT1dfTElNSVQ9eQoKIwojIE5ldHdvcmsgdGVzdGluZwojCiMgQ09ORklH
X0hBTVJBRElPIGlzIG5vdCBzZXQKQ09ORklHX0NBTj15CkNPTkZJR19DQU5fUkFXPXkKIyBDT05G
SUdfQ0FOX0JDTSBpcyBub3Qgc2V0CkNPTkZJR19DQU5fR1c9eQoKIwojIENBTiBEZXZpY2UgRHJp
dmVycwojCiMgQ09ORklHX0NBTl9WQ0FOIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9ERVY9eQojIENP
TkZJR19DQU5fQ0FMQ19CSVRUSU1JTkcgaXMgbm90IHNldAojIENPTkZJR19DQU5fTEVEUyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NBTl9NQ1AyNTFYIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9TSkExMDAw
PXkKIyBDT05GSUdfQ0FOX1NKQTEwMDBfSVNBIGlzIG5vdCBzZXQKQ09ORklHX0NBTl9TSkExMDAw
X1BMQVRGT1JNPXkKQ09ORklHX0NBTl9FTVNfUENNQ0lBPXkKQ09ORklHX0NBTl9QRUFLX1BDTUNJ
QT15CkNPTkZJR19DQU5fQ19DQU49eQpDT05GSUdfQ0FOX0NfQ0FOX1BMQVRGT1JNPXkKQ09ORklH
X0NBTl9DQzc3MD15CkNPTkZJR19DQU5fQ0M3NzBfSVNBPXkKQ09ORklHX0NBTl9DQzc3MF9QTEFU
Rk9STT15CkNPTkZJR19DQU5fU09GVElORz15CkNPTkZJR19DQU5fU09GVElOR19DUz15CkNPTkZJ
R19DQU5fREVCVUdfREVWSUNFUz15CiMgQ09ORklHX0lSREEgaXMgbm90IHNldApDT05GSUdfQlQ9
eQpDT05GSUdfQlRfUkZDT01NPXkKQ09ORklHX0JUX0JORVA9eQpDT05GSUdfQlRfQk5FUF9NQ19G
SUxURVI9eQojIENPTkZJR19CVF9CTkVQX1BST1RPX0ZJTFRFUiBpcyBub3Qgc2V0CgojCiMgQmx1
ZXRvb3RoIGRldmljZSBkcml2ZXJzCiMKQ09ORklHX0JUX0hDSURUTDE9eQpDT05GSUdfQlRfSENJ
QlQzQz15CkNPTkZJR19CVF9IQ0lCTFVFQ0FSRD15CkNPTkZJR19CVF9IQ0lCVFVBUlQ9eQojIENP
TkZJR19CVF9IQ0lWSENJIGlzIG5vdCBzZXQKQ09ORklHX0JUX01SVkw9eQpDT05GSUdfRklCX1JV
TEVTPXkKIyBDT05GSUdfV0lSRUxFU1MgaXMgbm90IHNldApDT05GSUdfV0lNQVg9eQpDT05GSUdf
V0lNQVhfREVCVUdfTEVWRUw9OApDT05GSUdfUkZLSUxMPXkKQ09ORklHX1JGS0lMTF9SRUdVTEFU
T1I9eQpDT05GSUdfTkVUXzlQPXkKQ09ORklHX05FVF85UF9WSVJUSU89eQojIENPTkZJR19ORVRf
OVBfREVCVUcgaXMgbm90IHNldApDT05GSUdfQ0FJRj15CkNPTkZJR19DQUlGX0RFQlVHPXkKIyBD
T05GSUdfQ0FJRl9ORVRERVYgaXMgbm90IHNldApDT05GSUdfQ0FJRl9VU0I9eQojIENPTkZJR19O
RkMgaXMgbm90IHNldApDT05GSUdfSEFWRV9CUEZfSklUPXkKCiMKIyBEZXZpY2UgRHJpdmVycwoj
CgojCiMgR2VuZXJpYyBEcml2ZXIgT3B0aW9ucwojCkNPTkZJR19VRVZFTlRfSEVMUEVSX1BBVEg9
IiIKQ09ORklHX0RFVlRNUEZTPXkKQ09ORklHX0RFVlRNUEZTX01PVU5UPXkKIyBDT05GSUdfU1RB
TkRBTE9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJTEQgaXMgbm90
IHNldApDT05GSUdfRldfTE9BREVSPXkKQ09ORklHX0ZJUk1XQVJFX0lOX0tFUk5FTD15CkNPTkZJ
R19FWFRSQV9GSVJNV0FSRT0iIgpDT05GSUdfRldfTE9BREVSX1VTRVJfSEVMUEVSPXkKIyBDT05G
SUdfREVCVUdfRFJJVkVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX0RFVlJFUz15CkNPTkZJR19T
WVNfSFlQRVJWSVNPUj15CiMgQ09ORklHX0dFTkVSSUNfQ1BVX0RFVklDRVMgaXMgbm90IHNldApD
T05GSUdfUkVHTUFQPXkKQ09ORklHX1JFR01BUF9JMkM9eQpDT05GSUdfUkVHTUFQX1NQST15CkNP
TkZJR19SRUdNQVBfTU1JTz15CkNPTkZJR19SRUdNQVBfSVJRPXkKQ09ORklHX0RNQV9TSEFSRURf
QlVGRkVSPXkKCiMKIyBCdXMgZGV2aWNlcwojCkNPTkZJR19DT05ORUNUT1I9eQpDT05GSUdfUFJP
Q19FVkVOVFM9eQojIENPTkZJR19NVEQgaXMgbm90IHNldApDT05GSUdfUEFSUE9SVD15CkNPTkZJ
R19BUkNIX01JR0hUX0hBVkVfUENfUEFSUE9SVD15CiMgQ09ORklHX1BBUlBPUlRfUEMgaXMgbm90
IHNldAojIENPTkZJR19QQVJQT1JUX0dTQyBpcyBub3Qgc2V0CkNPTkZJR19QQVJQT1JUX0FYODg3
OTY9eQojIENPTkZJR19QQVJQT1JUXzEyODQgaXMgbm90IHNldApDT05GSUdfUEFSUE9SVF9OT1Rf
UEM9eQpDT05GSUdfQkxLX0RFVj15CkNPTkZJR19CTEtfREVWX05VTExfQkxLPXkKQ09ORklHX0JM
S19ERVZfRkQ9eQojIENPTkZJR19aUkFNIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9DT1df
Q09NTU9OIGlzIG5vdCBzZXQKQ09ORklHX0JMS19ERVZfTE9PUD15CkNPTkZJR19CTEtfREVWX0xP
T1BfTUlOX0NPVU5UPTgKIyBDT05GSUdfQkxLX0RFVl9DUllQVE9MT09QIGlzIG5vdCBzZXQKCiMK
IyBEUkJEIGRpc2FibGVkIGJlY2F1c2UgUFJPQ19GUyBvciBJTkVUIG5vdCBzZWxlY3RlZAojCkNP
TkZJR19CTEtfREVWX05CRD15CiMgQ09ORklHX0JMS19ERVZfUkFNIGlzIG5vdCBzZXQKQ09ORklH
X0NEUk9NX1BLVENEVkQ9eQpDT05GSUdfQ0RST01fUEtUQ0RWRF9CVUZGRVJTPTgKIyBDT05GSUdf
Q0RST01fUEtUQ0RWRF9XQ0FDSEUgaXMgbm90IHNldApDT05GSUdfQVRBX09WRVJfRVRIPXkKQ09O
RklHX1hFTl9CTEtERVZfRlJPTlRFTkQ9eQpDT05GSUdfVklSVElPX0JMSz15CiMgQ09ORklHX0JM
S19ERVZfSEQgaXMgbm90IHNldAoKIwojIE1pc2MgZGV2aWNlcwojCkNPTkZJR19BRDUyNVhfRFBP
VD15CiMgQ09ORklHX0FENTI1WF9EUE9UX0kyQyBpcyBub3Qgc2V0CkNPTkZJR19BRDUyNVhfRFBP
VF9TUEk9eQojIENPTkZJR19EVU1NWV9JUlEgaXMgbm90IHNldApDT05GSUdfSUNTOTMyUzQwMT15
CkNPTkZJR19BVE1FTF9TU0M9eQojIENPTkZJR19FTkNMT1NVUkVfU0VSVklDRVMgaXMgbm90IHNl
dApDT05GSUdfQVBEUzk4MDJBTFM9eQpDT05GSUdfSVNMMjkwMDM9eQojIENPTkZJR19JU0wyOTAy
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVFNMMjU1MCBpcyBub3Qgc2V0CkNPTkZJR19T
RU5TT1JTX0JIMTc4MD15CkNPTkZJR19TRU5TT1JTX0JIMTc3MD15CiMgQ09ORklHX1NFTlNPUlNf
QVBEUzk5MFggaXMgbm90IHNldApDT05GSUdfSE1DNjM1Mj15CkNPTkZJR19EUzE2ODI9eQpDT05G
SUdfVElfREFDNzUxMj15CkNPTkZJR19WTVdBUkVfQkFMTE9PTj15CkNPTkZJR19CTVAwODU9eQpD
T05GSUdfQk1QMDg1X0kyQz15CkNPTkZJR19CTVAwODVfU1BJPXkKIyBDT05GSUdfVVNCX1NXSVRD
SF9GU0E5NDgwIGlzIG5vdCBzZXQKQ09ORklHX0xBVFRJQ0VfRUNQM19DT05GSUc9eQpDT05GSUdf
U1JBTT15CkNPTkZJR19DMlBPUlQ9eQpDT05GSUdfQzJQT1JUX0RVUkFNQVJfMjE1MD15CgojCiMg
RUVQUk9NIHN1cHBvcnQKIwpDT05GSUdfRUVQUk9NX0FUMjQ9eQojIENPTkZJR19FRVBST01fQVQy
NSBpcyBub3Qgc2V0CiMgQ09ORklHX0VFUFJPTV9MRUdBQ1kgaXMgbm90IHNldApDT05GSUdfRUVQ
Uk9NX01BWDY4NzU9eQpDT05GSUdfRUVQUk9NXzkzQ1g2PXkKIyBDT05GSUdfRUVQUk9NXzkzWFg0
NiBpcyBub3Qgc2V0CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgc2hhcmVkIHRyYW5zcG9ydCBsaW5l
IGRpc2NpcGxpbmUKIwoKIwojIEFsdGVyYSBGUEdBIGZpcm13YXJlIGRvd25sb2FkIG1vZHVsZQoj
CiMgQ09ORklHX0FMVEVSQV9TVEFQTCBpcyBub3Qgc2V0CgojCiMgSW50ZWwgTUlDIEhvc3QgRHJp
dmVyCiMKCiMKIyBJbnRlbCBNSUMgQ2FyZCBEcml2ZXIKIwojIENPTkZJR19JTlRFTF9NSUNfQ0FS
RCBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0lERT15CkNPTkZJR19JREU9eQoKIwojIFBsZWFzZSBz
ZWUgRG9jdW1lbnRhdGlvbi9pZGUvaWRlLnR4dCBmb3IgaGVscC9pbmZvIG9uIElERSBkcml2ZXMK
IwpDT05GSUdfSURFX0FUQVBJPXkKIyBDT05GSUdfQkxLX0RFVl9JREVfU0FUQSBpcyBub3Qgc2V0
CkNPTkZJR19JREVfR0Q9eQojIENPTkZJR19JREVfR0RfQVRBIGlzIG5vdCBzZXQKIyBDT05GSUdf
SURFX0dEX0FUQVBJIGlzIG5vdCBzZXQKQ09ORklHX0JMS19ERVZfSURFQ1M9eQpDT05GSUdfQkxL
X0RFVl9JREVDRD15CkNPTkZJR19CTEtfREVWX0lERUNEX1ZFUkJPU0VfRVJST1JTPXkKIyBDT05G
SUdfQkxLX0RFVl9JREVUQVBFIGlzIG5vdCBzZXQKIyBDT05GSUdfSURFX1RBU0tfSU9DVEwgaXMg
bm90IHNldApDT05GSUdfSURFX1BST0NfRlM9eQoKIwojIElERSBjaGlwc2V0IHN1cHBvcnQvYnVn
Zml4ZXMKIwpDT05GSUdfSURFX0dFTkVSSUM9eQpDT05GSUdfQkxLX0RFVl9QTEFURk9STT15CiMg
Q09ORklHX0JMS19ERVZfQ01ENjQwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9JREVETUEg
aXMgbm90IHNldAoKIwojIFNDU0kgZGV2aWNlIHN1cHBvcnQKIwpDT05GSUdfU0NTSV9NT0Q9eQpD
T05GSUdfUkFJRF9BVFRSUz15CkNPTkZJR19TQ1NJPXkKQ09ORklHX1NDU0lfRE1BPXkKQ09ORklH
X1NDU0lfVEdUPXkKIyBDT05GSUdfU0NTSV9ORVRMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NT
SV9QUk9DX0ZTIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIHN1cHBvcnQgdHlwZSAoZGlzaywgdGFwZSwg
Q0QtUk9NKQojCkNPTkZJR19CTEtfREVWX1NEPXkKIyBDT05GSUdfQ0hSX0RFVl9TVCBpcyBub3Qg
c2V0CiMgQ09ORklHX0NIUl9ERVZfT1NTVCBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX1NSPXkK
Q09ORklHX0JMS19ERVZfU1JfVkVORE9SPXkKQ09ORklHX0NIUl9ERVZfU0c9eQpDT05GSUdfQ0hS
X0RFVl9TQ0g9eQojIENPTkZJR19TQ1NJX01VTFRJX0xVTiBpcyBub3Qgc2V0CiMgQ09ORklHX1ND
U0lfQ09OU1RBTlRTIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9MT0dHSU5HIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0NTSV9TQ0FOX0FTWU5DIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIFRyYW5zcG9ydHMK
IwojIENPTkZJR19TQ1NJX1NQSV9BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfRkNfQVRU
UlMgaXMgbm90IHNldApDT05GSUdfU0NTSV9JU0NTSV9BVFRSUz15CkNPTkZJR19TQ1NJX1NBU19B
VFRSUz15CkNPTkZJR19TQ1NJX1NBU19MSUJTQVM9eQpDT05GSUdfU0NTSV9TQVNfQVRBPXkKQ09O
RklHX1NDU0lfU0FTX0hPU1RfU01QPXkKQ09ORklHX1NDU0lfU1JQX0FUVFJTPXkKIyBDT05GSUdf
U0NTSV9TUlBfVEdUX0FUVFJTIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfTE9XTEVWRUw9eQpDT05G
SUdfSVNDU0lfQk9PVF9TWVNGUz15CiMgQ09ORklHX1NDU0lfVUZTSENEIGlzIG5vdCBzZXQKIyBD
T05GSUdfTElCRkMgaXMgbm90IHNldAojIENPTkZJR19MSUJGQ09FIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJX1ZJUlRJTz15CiMgQ09ORklHX1ND
U0lfTE9XTEVWRUxfUENNQ0lBIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfREg9eQpDT05GSUdfU0NT
SV9ESF9SREFDPXkKQ09ORklHX1NDU0lfREhfSFBfU1c9eQpDT05GSUdfU0NTSV9ESF9FTUM9eQpD
T05GSUdfU0NTSV9ESF9BTFVBPXkKQ09ORklHX1NDU0lfT1NEX0lOSVRJQVRPUj15CiMgQ09ORklH
X1NDU0lfT1NEX1VMRCBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJX09TRF9EUFJJTlRfU0VOU0U9MQoj
IENPTkZJR19TQ1NJX09TRF9ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19BVEE9eQojIENPTkZJR19B
VEFfTk9OU1RBTkRBUkQgaXMgbm90IHNldApDT05GSUdfQVRBX1ZFUkJPU0VfRVJST1I9eQpDT05G
SUdfU0FUQV9QTVA9eQoKIwojIENvbnRyb2xsZXJzIHdpdGggbm9uLVNGRiBuYXRpdmUgaW50ZXJm
YWNlCiMKIyBDT05GSUdfU0FUQV9BSENJX1BMQVRGT1JNIGlzIG5vdCBzZXQKQ09ORklHX0FUQV9T
RkY9eQoKIwojIFNGRiBjb250cm9sbGVycyB3aXRoIGN1c3RvbSBETUEgaW50ZXJmYWNlCiMKQ09O
RklHX0FUQV9CTURNQT15CgojCiMgU0FUQSBTRkYgY29udHJvbGxlcnMgd2l0aCBCTURNQQojCkNP
TkZJR19TQVRBX0hJR0hCQU5LPXkKQ09ORklHX1NBVEFfTVY9eQpDT05GSUdfU0FUQV9SQ0FSPXkK
CiMKIyBQQVRBIFNGRiBjb250cm9sbGVycyB3aXRoIEJNRE1BCiMKIyBDT05GSUdfUEFUQV9BUkFT
QU5fQ0YgaXMgbm90IHNldAoKIwojIFBJTy1vbmx5IFNGRiBjb250cm9sbGVycwojCkNPTkZJR19Q
QVRBX1BDTUNJQT15CiMgQ09ORklHX1BBVEFfUExBVEZPUk0gaXMgbm90IHNldAoKIwojIEdlbmVy
aWMgZmFsbGJhY2sgLyBsZWdhY3kgZHJpdmVycwojCkNPTkZJR19NRD15CiMgQ09ORklHX0JMS19E
RVZfTUQgaXMgbm90IHNldApDT05GSUdfQkNBQ0hFPXkKQ09ORklHX0JDQUNIRV9ERUJVRz15CiMg
Q09ORklHX0JDQUNIRV9DTE9TVVJFU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX0RN
PXkKIyBDT05GSUdfRE1fREVCVUcgaXMgbm90IHNldApDT05GSUdfRE1fQlVGSU89eQpDT05GSUdf
RE1fQklPX1BSSVNPTj15CkNPTkZJR19ETV9QRVJTSVNURU5UX0RBVEE9eQojIENPTkZJR19ETV9D
UllQVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX1NOQVBTSE9UIGlzIG5vdCBzZXQKQ09ORklHX0RN
X1RISU5fUFJPVklTSU9OSU5HPXkKQ09ORklHX0RNX0RFQlVHX0JMT0NLX1NUQUNLX1RSQUNJTkc9
eQpDT05GSUdfRE1fQ0FDSEU9eQojIENPTkZJR19ETV9DQUNIRV9NUSBpcyBub3Qgc2V0CkNPTkZJ
R19ETV9DQUNIRV9DTEVBTkVSPXkKQ09ORklHX0RNX01JUlJPUj15CkNPTkZJR19ETV9MT0dfVVNF
UlNQQUNFPXkKIyBDT05GSUdfRE1fUkFJRCBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX1pFUk8gaXMg
bm90IHNldAojIENPTkZJR19ETV9NVUxUSVBBVEggaXMgbm90IHNldAojIENPTkZJR19ETV9ERUxB
WSBpcyBub3Qgc2V0CkNPTkZJR19ETV9VRVZFTlQ9eQpDT05GSUdfRE1fRkxBS0VZPXkKQ09ORklH
X0RNX1ZFUklUWT15CkNPTkZJR19ETV9TV0lUQ0g9eQpDT05GSUdfVEFSR0VUX0NPUkU9eQojIENP
TkZJR19UQ01fSUJMT0NLIGlzIG5vdCBzZXQKQ09ORklHX1RDTV9GSUxFSU89eQojIENPTkZJR19U
Q01fUFNDU0kgaXMgbm90IHNldAojIENPTkZJR19MT09QQkFDS19UQVJHRVQgaXMgbm90IHNldApD
T05GSUdfSVNDU0lfVEFSR0VUPXkKQ09ORklHX01BQ0lOVE9TSF9EUklWRVJTPXkKQ09ORklHX05F
VERFVklDRVM9eQpDT05GSUdfTUlJPXkKIyBDT05GSUdfTkVUX0NPUkUgaXMgbm90IHNldAojIENP
TkZJR19BUkNORVQgaXMgbm90IHNldAojIENPTkZJR19BVE1fRFJJVkVSUyBpcyBub3Qgc2V0Cgoj
CiMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwojCkNPTkZJR19DQUlGX1NQSV9TTEFWRT15CkNPTkZJ
R19DQUlGX1NQSV9TWU5DPXkKIyBDT05GSUdfQ0FJRl9IU0kgaXMgbm90IHNldApDT05GSUdfQ0FJ
Rl9WSVJUSU89eQpDT05GSUdfVkhPU1RfTkVUPXkKQ09ORklHX1ZIT1NUX1JJTkc9eQpDT05GSUdf
VkhPU1Q9eQoKIwojIERpc3RyaWJ1dGVkIFN3aXRjaCBBcmNoaXRlY3R1cmUgZHJpdmVycwojCkNP
TkZJR19ORVRfRFNBX01WODhFNlhYWD15CkNPTkZJR19ORVRfRFNBX01WODhFNjA2MD15CkNPTkZJ
R19ORVRfRFNBX01WODhFNlhYWF9ORUVEX1BQVT15CkNPTkZJR19ORVRfRFNBX01WODhFNjEzMT15
CkNPTkZJR19ORVRfRFNBX01WODhFNjEyM182MV82NT15CkNPTkZJR19FVEhFUk5FVD15CkNPTkZJ
R19ORVRfVkVORE9SXzNDT009eQpDT05GSUdfUENNQ0lBXzNDNTc0PXkKQ09ORklHX1BDTUNJQV8z
QzU4OT15CkNPTkZJR19ORVRfVkVORE9SX0FNRD15CkNPTkZJR19QQ01DSUFfTk1DTEFOPXkKQ09O
RklHX05FVF9WRU5ET1JfQVJDPXkKIyBDT05GSUdfTkVUX0NBREVOQ0UgaXMgbm90IHNldApDT05G
SUdfTkVUX1ZFTkRPUl9CUk9BRENPTT15CiMgQ09ORklHX0I0NCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfQ0FMWEVEQV9YR01BQz15CkNPTkZJR19ETkVUPXkKIyBDT05GSUdfTkVUX1ZFTkRPUl9GVUpJ
VFNVIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfSU5URUw9eQpDT05GSUdfTkVUX1ZFTkRP
Ul9JODI1WFg9eQojIENPTkZJR19ORVRfVkVORE9SX01JQ1JFTCBpcyBub3Qgc2V0CkNPTkZJR19O
RVRfVkVORE9SX01JQ1JPQ0hJUD15CkNPTkZJR19FTkMyOEo2MD15CkNPTkZJR19FTkMyOEo2MF9X
UklURVZFUklGWT15CiMgQ09ORklHX05FVF9WRU5ET1JfTkFUU0VNSSBpcyBub3Qgc2V0CkNPTkZJ
R19FVEhPQz15CkNPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQojIENPTkZJR19BVFAgaXMgbm90
IHNldApDT05GSUdfU0hfRVRIPXkKQ09ORklHX05FVF9WRU5ET1JfU0VFUT15CkNPTkZJR19ORVRf
VkVORE9SX1NNU0M9eQpDT05GSUdfUENNQ0lBX1NNQzkxQzkyPXkKQ09ORklHX1NNU0M5MTFYPXkK
IyBDT05GSUdfU01TQzkxMVhfQVJDSF9IT09LUyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9S
X1NUTUlDUk89eQojIENPTkZJR19TVE1NQUNfRVRIIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5E
T1JfVklBPXkKIyBDT05GSUdfTkVUX1ZFTkRPUl9XSVpORVQgaXMgbm90IHNldApDT05GSUdfTkVU
X1ZFTkRPUl9YSVJDT009eQojIENPTkZJR19QQ01DSUFfWElSQzJQUyBpcyBub3Qgc2V0CkNPTkZJ
R19QSFlMSUI9eQoKIwojIE1JSSBQSFkgZGV2aWNlIGRyaXZlcnMKIwpDT05GSUdfQVQ4MDNYX1BI
WT15CiMgQ09ORklHX0FNRF9QSFkgaXMgbm90IHNldApDT05GSUdfTUFSVkVMTF9QSFk9eQpDT05G
SUdfREFWSUNPTV9QSFk9eQpDT05GSUdfUVNFTUlfUEhZPXkKIyBDT05GSUdfTFhUX1BIWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0NJQ0FEQV9QSFkgaXMgbm90IHNldApDT05GSUdfVklURVNTRV9QSFk9
eQpDT05GSUdfU01TQ19QSFk9eQpDT05GSUdfQlJPQURDT01fUEhZPXkKIyBDT05GSUdfQkNNODdY
WF9QSFkgaXMgbm90IHNldAojIENPTkZJR19JQ1BMVVNfUEhZIGlzIG5vdCBzZXQKQ09ORklHX1JF
QUxURUtfUEhZPXkKIyBDT05GSUdfTkFUSU9OQUxfUEhZIGlzIG5vdCBzZXQKQ09ORklHX1NURTEw
WFA9eQojIENPTkZJR19MU0lfRVQxMDExQ19QSFkgaXMgbm90IHNldAojIENPTkZJR19NSUNSRUxf
UEhZIGlzIG5vdCBzZXQKQ09ORklHX0ZJWEVEX1BIWT15CkNPTkZJR19NRElPX0JJVEJBTkc9eQpD
T05GSUdfTUlDUkVMX0tTODk5NU1BPXkKIyBDT05GSUdfUExJUCBpcyBub3Qgc2V0CkNPTkZJR19Q
UFA9eQpDT05GSUdfUFBQX0JTRENPTVA9eQojIENPTkZJR19QUFBfREVGTEFURSBpcyBub3Qgc2V0
CkNPTkZJR19QUFBfRklMVEVSPXkKIyBDT05GSUdfUFBQX01QUEUgaXMgbm90IHNldApDT05GSUdf
UFBQX01VTFRJTElOSz15CkNPTkZJR19QUFBPQVRNPXkKQ09ORklHX1BQUE9FPXkKQ09ORklHX1NM
SEM9eQojIENPTkZJR19XTEFOIGlzIG5vdCBzZXQKCiMKIyBXaU1BWCBXaXJlbGVzcyBCcm9hZGJh
bmQgZGV2aWNlcwojCgojCiMgRW5hYmxlIFVTQiBzdXBwb3J0IHRvIHNlZSBXaU1BWCBVU0IgZHJp
dmVycwojCkNPTkZJR19XQU49eQpDT05GSUdfSERMQz15CkNPTkZJR19IRExDX1JBVz15CiMgQ09O
RklHX0hETENfUkFXX0VUSCBpcyBub3Qgc2V0CkNPTkZJR19IRExDX0NJU0NPPXkKQ09ORklHX0hE
TENfRlI9eQpDT05GSUdfSERMQ19QUFA9eQpDT05GSUdfSERMQ19YMjU9eQpDT05GSUdfRExDST15
CkNPTkZJR19ETENJX01BWD04CkNPTkZJR19TQk5JPXkKIyBDT05GSUdfU0JOSV9NVUxUSUxJTkUg
aXMgbm90IHNldApDT05GSUdfWEVOX05FVERFVl9GUk9OVEVORD15CiMgQ09ORklHX0lTRE4gaXMg
bm90IHNldAoKIwojIElucHV0IGRldmljZSBzdXBwb3J0CiMKIyBDT05GSUdfSU5QVVQgaXMgbm90
IHNldAoKIwojIEhhcmR3YXJlIEkvTyBwb3J0cwojCiMgQ09ORklHX1NFUklPIGlzIG5vdCBzZXQK
Q09ORklHX0FSQ0hfTUlHSFRfSEFWRV9QQ19TRVJJTz15CkNPTkZJR19HQU1FUE9SVD15CiMgQ09O
RklHX0dBTUVQT1JUX05TNTU4IGlzIG5vdCBzZXQKIyBDT05GSUdfR0FNRVBPUlRfTDQgaXMgbm90
IHNldAoKIwojIENoYXJhY3RlciBkZXZpY2VzCiMKIyBDT05GSUdfVFRZIGlzIG5vdCBzZXQKQ09O
RklHX0RFVktNRU09eQojIENPTkZJR19QUklOVEVSIGlzIG5vdCBzZXQKQ09ORklHX1BQREVWPXkK
Q09ORklHX0lQTUlfSEFORExFUj15CiMgQ09ORklHX0lQTUlfUEFOSUNfRVZFTlQgaXMgbm90IHNl
dAojIENPTkZJR19JUE1JX0RFVklDRV9JTlRFUkZBQ0UgaXMgbm90IHNldApDT05GSUdfSVBNSV9T
ST15CkNPTkZJR19JUE1JX1dBVENIRE9HPXkKQ09ORklHX0lQTUlfUE9XRVJPRkY9eQpDT05GSUdf
SFdfUkFORE9NPXkKIyBDT05GSUdfSFdfUkFORE9NX1RJTUVSSU9NRU0gaXMgbm90IHNldAojIENP
TkZJR19IV19SQU5ET01fVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfSFdfUkFORE9NX1ZJUlRJTyBp
cyBub3Qgc2V0CkNPTkZJR19IV19SQU5ET01fVFBNPXkKIyBDT05GSUdfTlZSQU0gaXMgbm90IHNl
dAoKIwojIFBDTUNJQSBjaGFyYWN0ZXIgZGV2aWNlcwojCkNPTkZJR19DQVJETUFOXzQwMDA9eQoj
IENPTkZJR19DQVJETUFOXzQwNDAgaXMgbm90IHNldApDT05GSUdfUkFXX0RSSVZFUj15CkNPTkZJ
R19NQVhfUkFXX0RFVlM9MjU2CkNPTkZJR19IQU5HQ0hFQ0tfVElNRVI9eQpDT05GSUdfVENHX1RQ
TT15CkNPTkZJR19UQ0dfVElTPXkKQ09ORklHX1RDR19USVNfSTJDX0FUTUVMPXkKQ09ORklHX1RD
R19USVNfSTJDX0lORklORU9OPXkKQ09ORklHX1RDR19USVNfSTJDX05VVk9UT049eQojIENPTkZJ
R19UQ0dfTlNDIGlzIG5vdCBzZXQKQ09ORklHX1RDR19BVE1FTD15CkNPTkZJR19UQ0dfWEVOPXkK
Q09ORklHX1RFTENMT0NLPXkKQ09ORklHX0kyQz15CkNPTkZJR19JMkNfQk9BUkRJTkZPPXkKIyBD
T05GSUdfSTJDX0NPTVBBVCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfQ0hBUkRFVj15CkNPTkZJR19J
MkNfTVVYPXkKCiMKIyBNdWx0aXBsZXhlciBJMkMgQ2hpcCBzdXBwb3J0CiMKQ09ORklHX0kyQ19N
VVhfUENBOTU0MT15CkNPTkZJR19JMkNfTVVYX1BDQTk1NHg9eQojIENPTkZJR19JMkNfSEVMUEVS
X0FVVE8gaXMgbm90IHNldApDT05GSUdfSTJDX1NNQlVTPXkKCiMKIyBJMkMgQWxnb3JpdGhtcwoj
CkNPTkZJR19JMkNfQUxHT0JJVD15CkNPTkZJR19JMkNfQUxHT1BDRj15CiMgQ09ORklHX0kyQ19B
TEdPUENBIGlzIG5vdCBzZXQKCiMKIyBJMkMgSGFyZHdhcmUgQnVzIHN1cHBvcnQKIwoKIwojIEky
QyBzeXN0ZW0gYnVzIGRyaXZlcnMgKG1vc3RseSBlbWJlZGRlZCAvIHN5c3RlbS1vbi1jaGlwKQoj
CiMgQ09ORklHX0kyQ19LRU1QTEQgaXMgbm90IHNldAojIENPTkZJR19JMkNfT0NPUkVTIGlzIG5v
dCBzZXQKIyBDT05GSUdfSTJDX1BDQV9QTEFURk9STSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19Q
WEFfUENJIGlzIG5vdCBzZXQKQ09ORklHX0kyQ19SSUlDPXkKIyBDT05GSUdfSTJDX1NIX01PQklM
RSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSU1URUMgaXMgbm90IHNldApDT05GSUdfSTJDX1hJ
TElOWD15CiMgQ09ORklHX0kyQ19SQ0FSIGlzIG5vdCBzZXQKCiMKIyBFeHRlcm5hbCBJMkMvU01C
dXMgYWRhcHRlciBkcml2ZXJzCiMKQ09ORklHX0kyQ19QQVJQT1JUPXkKIyBDT05GSUdfSTJDX1BB
UlBPUlRfTElHSFQgaXMgbm90IHNldAoKIwojIE90aGVyIEkyQy9TTUJ1cyBidXMgZHJpdmVycwoj
CkNPTkZJR19JMkNfREVCVUdfQ09SRT15CkNPTkZJR19JMkNfREVCVUdfQUxHTz15CkNPTkZJR19J
MkNfREVCVUdfQlVTPXkKQ09ORklHX1NQST15CiMgQ09ORklHX1NQSV9ERUJVRyBpcyBub3Qgc2V0
CkNPTkZJR19TUElfTUFTVEVSPXkKCiMKIyBTUEkgTWFzdGVyIENvbnRyb2xsZXIgRHJpdmVycwoj
CkNPTkZJR19TUElfQUxURVJBPXkKQ09ORklHX1NQSV9BVE1FTD15CkNPTkZJR19TUElfQkNNMjgz
NT15CkNPTkZJR19TUElfQkNNNjNYWF9IU1NQST15CkNPTkZJR19TUElfQklUQkFORz15CkNPTkZJ
R19TUElfQlVUVEVSRkxZPXkKQ09ORklHX1NQSV9FUDkzWFg9eQpDT05GSUdfU1BJX0lNWD15CkNP
TkZJR19TUElfTE03MF9MTFA9eQojIENPTkZJR19TUElfRlNMX0RTUEkgaXMgbm90IHNldApDT05G
SUdfU1BJX1RJX1FTUEk9eQojIENPTkZJR19TUElfT01BUF8xMDBLIGlzIG5vdCBzZXQKQ09ORklH
X1NQSV9PUklPTj15CiMgQ09ORklHX1NQSV9QWEEyWFhfUENJIGlzIG5vdCBzZXQKIyBDT05GSUdf
U1BJX1NDMThJUzYwMiBpcyBub3Qgc2V0CkNPTkZJR19TUElfU0g9eQojIENPTkZJR19TUElfU0hf
SFNQSSBpcyBub3Qgc2V0CiMgQ09ORklHX1NQSV9URUdSQTExNCBpcyBub3Qgc2V0CkNPTkZJR19T
UElfVEVHUkEyMF9TRkxBU0g9eQpDT05GSUdfU1BJX1RFR1JBMjBfU0xJTks9eQojIENPTkZJR19T
UElfWENPTU0gaXMgbm90IHNldAojIENPTkZJR19TUElfWElMSU5YIGlzIG5vdCBzZXQKQ09ORklH
X1NQSV9ERVNJR05XQVJFPXkKCiMKIyBTUEkgUHJvdG9jb2wgTWFzdGVycwojCkNPTkZJR19TUElf
U1BJREVWPXkKQ09ORklHX1NQSV9UTEU2MlgwPXkKQ09ORklHX0hTST15CkNPTkZJR19IU0lfQk9B
UkRJTkZPPXkKCiMKIyBIU0kgY2xpZW50cwojCiMgQ09ORklHX0hTSV9DSEFSIGlzIG5vdCBzZXQK
CiMKIyBQUFMgc3VwcG9ydAojCkNPTkZJR19QUFM9eQpDT05GSUdfUFBTX0RFQlVHPXkKCiMKIyBQ
UFMgY2xpZW50cyBzdXBwb3J0CiMKIyBDT05GSUdfUFBTX0NMSUVOVF9LVElNRVIgaXMgbm90IHNl
dApDT05GSUdfUFBTX0NMSUVOVF9QQVJQT1JUPXkKQ09ORklHX1BQU19DTElFTlRfR1BJTz15Cgoj
CiMgUFBTIGdlbmVyYXRvcnMgc3VwcG9ydAojCgojCiMgUFRQIGNsb2NrIHN1cHBvcnQKIwpDT05G
SUdfUFRQXzE1ODhfQ0xPQ0s9eQpDT05GSUdfRFA4MzY0MF9QSFk9eQpDT05GSUdfUFRQXzE1ODhf
Q0xPQ0tfUENIPXkKQ09ORklHX0FSQ0hfV0FOVF9PUFRJT05BTF9HUElPTElCPXkKIyBDT05GSUdf
R1BJT0xJQiBpcyBub3Qgc2V0CkNPTkZJR19XMT15CkNPTkZJR19XMV9DT049eQoKIwojIDEtd2ly
ZSBCdXMgTWFzdGVycwojCkNPTkZJR19XMV9NQVNURVJfRFMyNDgyPXkKIyBDT05GSUdfVzFfTUFT
VEVSX0RTMVdNIGlzIG5vdCBzZXQKCiMKIyAxLXdpcmUgU2xhdmVzCiMKQ09ORklHX1cxX1NMQVZF
X1RIRVJNPXkKQ09ORklHX1cxX1NMQVZFX1NNRU09eQpDT05GSUdfVzFfU0xBVkVfRFMyNDA4PXkK
Q09ORklHX1cxX1NMQVZFX0RTMjQwOF9SRUFEQkFDSz15CkNPTkZJR19XMV9TTEFWRV9EUzI0MTM9
eQojIENPTkZJR19XMV9TTEFWRV9EUzI0MjMgaXMgbm90IHNldApDT05GSUdfVzFfU0xBVkVfRFMy
NDMxPXkKQ09ORklHX1cxX1NMQVZFX0RTMjQzMz15CiMgQ09ORklHX1cxX1NMQVZFX0RTMjQzM19D
UkMgaXMgbm90IHNldAojIENPTkZJR19XMV9TTEFWRV9EUzI3NjAgaXMgbm90IHNldApDT05GSUdf
VzFfU0xBVkVfRFMyNzgwPXkKQ09ORklHX1cxX1NMQVZFX0RTMjc4MT15CiMgQ09ORklHX1cxX1NM
QVZFX0RTMjhFMDQgaXMgbm90IHNldApDT05GSUdfVzFfU0xBVkVfQlEyNzAwMD15CkNPTkZJR19Q
T1dFUl9TVVBQTFk9eQpDT05GSUdfUE9XRVJfU1VQUExZX0RFQlVHPXkKIyBDT05GSUdfUERBX1BP
V0VSIGlzIG5vdCBzZXQKQ09ORklHX0dFTkVSSUNfQURDX0JBVFRFUlk9eQpDT05GSUdfTUFYODky
NV9QT1dFUj15CiMgQ09ORklHX1dNODMxWF9CQUNLVVAgaXMgbm90IHNldApDT05GSUdfV004MzFY
X1BPV0VSPXkKIyBDT05GSUdfV004MzUwX1BPV0VSIGlzIG5vdCBzZXQKQ09ORklHX1RFU1RfUE9X
RVI9eQpDT05GSUdfQkFUVEVSWV9EUzI3ODA9eQpDT05GSUdfQkFUVEVSWV9EUzI3ODE9eQojIENP
TkZJR19CQVRURVJZX0RTMjc4MiBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX1NCUz15CiMgQ09O
RklHX0JBVFRFUllfQlEyN3gwMCBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX0RBOTAzMD15CiMg
Q09ORklHX0JBVFRFUllfREE5MDUyIGlzIG5vdCBzZXQKQ09ORklHX0JBVFRFUllfTUFYMTcwNDA9
eQpDT05GSUdfQkFUVEVSWV9NQVgxNzA0Mj15CkNPTkZJR19DSEFSR0VSX1BDRjUwNjMzPXkKIyBD
T05GSUdfQ0hBUkdFUl9JU1AxNzA0IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9NQVg4OTAz
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9UV0w0MDMwIGlzIG5vdCBzZXQKQ09ORklHX0NI
QVJHRVJfTFA4NzI3PXkKIyBDT05GSUdfQ0hBUkdFUl9NQU5BR0VSIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hBUkdFUl9CUTI0MTVYIGlzIG5vdCBzZXQKQ09ORklHX0NIQVJHRVJfU01CMzQ3PXkKIyBD
T05GSUdfQ0hBUkdFUl9UUFM2NTA5MCBpcyBub3Qgc2V0CkNPTkZJR19CQVRURVJZX0dPTERGSVNI
PXkKQ09ORklHX1BPV0VSX1JFU0VUPXkKIyBDT05GSUdfUE9XRVJfQVZTIGlzIG5vdCBzZXQKQ09O
RklHX0hXTU9OPXkKQ09ORklHX0hXTU9OX1ZJRD15CiMgQ09ORklHX0hXTU9OX0RFQlVHX0NISVAg
aXMgbm90IHNldAoKIwojIE5hdGl2ZSBkcml2ZXJzCiMKIyBDT05GSUdfU0VOU09SU19BRDczMTQg
aXMgbm90IHNldApDT05GSUdfU0VOU09SU19BRDc0MTQ9eQpDT05GSUdfU0VOU09SU19BRDc0MTg9
eQpDT05GSUdfU0VOU09SU19BRENYWD15CkNPTkZJR19TRU5TT1JTX0FETTEwMjE9eQojIENPTkZJ
R19TRU5TT1JTX0FETTEwMjUgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0FETTEwMjYgaXMg
bm90IHNldApDT05GSUdfU0VOU09SU19BRE0xMDI5PXkKQ09ORklHX1NFTlNPUlNfQURNMTAzMT15
CiMgQ09ORklHX1NFTlNPUlNfQURNOTI0MCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0FEVDdY
MTA9eQpDT05GSUdfU0VOU09SU19BRFQ3MzEwPXkKIyBDT05GSUdfU0VOU09SU19BRFQ3NDEwIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BRFQ3NDExIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNP
UlNfQURUNzQ2Mj15CkNPTkZJR19TRU5TT1JTX0FEVDc0NzA9eQpDT05GSUdfU0VOU09SU19BRFQ3
NDc1PXkKQ09ORklHX1NFTlNPUlNfQVNDNzYyMT15CkNPTkZJR19TRU5TT1JTX0FTQjEwMD15CkNP
TkZJR19TRU5TT1JTX0FUWFAxPXkKIyBDT05GSUdfU0VOU09SU19EUzYyMCBpcyBub3Qgc2V0CkNP
TkZJR19TRU5TT1JTX0RTMTYyMT15CkNPTkZJR19TRU5TT1JTX0RBOTA1Ml9BREM9eQpDT05GSUdf
U0VOU09SU19EQTkwNTU9eQpDT05GSUdfU0VOU09SU19GNzE4MDVGPXkKQ09ORklHX1NFTlNPUlNf
RjcxODgyRkc9eQpDT05GSUdfU0VOU09SU19GNzUzNzVTPXkKIyBDT05GSUdfU0VOU09SU19GU0NI
TUQgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19HNzYwQT15CkNPTkZJR19TRU5TT1JTX0c3NjI9
eQpDT05GSUdfU0VOU09SU19HTDUxOFNNPXkKQ09ORklHX1NFTlNPUlNfR0w1MjBTTT15CiMgQ09O
RklHX1NFTlNPUlNfSElINjEzMCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0hUVTIxPXkKQ09O
RklHX1NFTlNPUlNfQ09SRVRFTVA9eQpDT05GSUdfU0VOU09SU19JQk1BRU09eQojIENPTkZJR19T
RU5TT1JTX0lCTVBFWCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0lJT19IV01PTj15CiMgQ09O
RklHX1NFTlNPUlNfSVQ4NyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSkM0MiBpcyBub3Qg
c2V0CkNPTkZJR19TRU5TT1JTX0xJTkVBR0U9eQojIENPTkZJR19TRU5TT1JTX0xNNjMgaXMgbm90
IHNldApDT05GSUdfU0VOU09SU19MTTcwPXkKQ09ORklHX1NFTlNPUlNfTE03Mz15CiMgQ09ORklH
X1NFTlNPUlNfTE03NSBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX0xNNzc9eQojIENPTkZJR19T
RU5TT1JTX0xNNzggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19MTTgwPXkKQ09ORklHX1NFTlNP
UlNfTE04Mz15CkNPTkZJR19TRU5TT1JTX0xNODU9eQojIENPTkZJR19TRU5TT1JTX0xNODcgaXMg
bm90IHNldApDT05GSUdfU0VOU09SU19MTTkwPXkKQ09ORklHX1NFTlNPUlNfTE05Mj15CkNPTkZJ
R19TRU5TT1JTX0xNOTM9eQpDT05GSUdfU0VOU09SU19MVEM0MTUxPXkKQ09ORklHX1NFTlNPUlNf
TFRDNDIxNT15CkNPTkZJR19TRU5TT1JTX0xUQzQyNDU9eQojIENPTkZJR19TRU5TT1JTX0xUQzQy
NjEgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19MTTk1MjM0PXkKQ09ORklHX1NFTlNPUlNfTE05
NTI0MT15CkNPTkZJR19TRU5TT1JTX0xNOTUyNDU9eQpDT05GSUdfU0VOU09SU19NQVgxMTExPXkK
Q09ORklHX1NFTlNPUlNfTUFYMTYwNjU9eQpDT05GSUdfU0VOU09SU19NQVgxNjE5PXkKQ09ORklH
X1NFTlNPUlNfTUFYMTY2OD15CkNPTkZJR19TRU5TT1JTX01BWDE5Nz15CkNPTkZJR19TRU5TT1JT
X01BWDY2Mzk9eQpDT05GSUdfU0VOU09SU19NQVg2NjQyPXkKQ09ORklHX1NFTlNPUlNfTUFYNjY1
MD15CiMgQ09ORklHX1NFTlNPUlNfTUFYNjY5NyBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JTX01D
UDMwMjE9eQpDT05GSUdfU0VOU09SU19OQ1Q2Nzc1PXkKQ09ORklHX1NFTlNPUlNfUEM4NzM2MD15
CkNPTkZJR19TRU5TT1JTX1BDODc0Mjc9eQpDT05GSUdfU0VOU09SU19QQ0Y4NTkxPXkKQ09ORklH
X1BNQlVTPXkKQ09ORklHX1NFTlNPUlNfUE1CVVM9eQpDT05GSUdfU0VOU09SU19BRE0xMjc1PXkK
Q09ORklHX1NFTlNPUlNfTE0yNTA2Nj15CkNPTkZJR19TRU5TT1JTX0xUQzI5Nzg9eQpDT05GSUdf
U0VOU09SU19NQVgxNjA2ND15CkNPTkZJR19TRU5TT1JTX01BWDM0NDQwPXkKIyBDT05GSUdfU0VO
U09SU19NQVg4Njg4IGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfVUNEOTAwMD15CkNPTkZJR19T
RU5TT1JTX1VDRDkyMDA9eQpDT05GSUdfU0VOU09SU19aTDYxMDA9eQpDT05GSUdfU0VOU09SU19T
SFQyMT15CkNPTkZJR19TRU5TT1JTX1NNTTY2NT15CkNPTkZJR19TRU5TT1JTX0RNRTE3Mzc9eQpD
T05GSUdfU0VOU09SU19FTUMxNDAzPXkKQ09ORklHX1NFTlNPUlNfRU1DMjEwMz15CkNPTkZJR19T
RU5TT1JTX0VNQzZXMjAxPXkKQ09ORklHX1NFTlNPUlNfU01TQzQ3TTE9eQpDT05GSUdfU0VOU09S
U19TTVNDNDdNMTkyPXkKQ09ORklHX1NFTlNPUlNfU01TQzQ3QjM5Nz15CiMgQ09ORklHX1NFTlNP
UlNfU0NINTZYWF9DT01NT04gaXMgbm90IHNldApDT05GSUdfU0VOU09SU19BRFMxMDE1PXkKIyBD
T05GSUdfU0VOU09SU19BRFM3ODI4IGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfQURTNzg3MT15
CkNPTkZJR19TRU5TT1JTX0FNQzY4MjE9eQojIENPTkZJR19TRU5TT1JTX0lOQTIwOSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NFTlNPUlNfSU5BMlhYIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNPUlNfVEhN
QzUwPXkKQ09ORklHX1NFTlNPUlNfVE1QMTAyPXkKQ09ORklHX1NFTlNPUlNfVE1QNDAxPXkKIyBD
T05GSUdfU0VOU09SU19UTVA0MjEgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19WSUFfQ1BVVEVN
UD15CkNPTkZJR19TRU5TT1JTX1ZUMTIxMT15CkNPTkZJR19TRU5TT1JTX1c4Mzc4MUQ9eQpDT05G
SUdfU0VOU09SU19XODM3OTFEPXkKIyBDT05GSUdfU0VOU09SU19XODM3OTJEIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19XODM3OTMgaXMgbm90IHNldApDT05GSUdfU0VOU09SU19XODM3OTU9
eQojIENPTkZJR19TRU5TT1JTX1c4Mzc5NV9GQU5DVFJMIGlzIG5vdCBzZXQKQ09ORklHX1NFTlNP
UlNfVzgzTDc4NVRTPXkKQ09ORklHX1NFTlNPUlNfVzgzTDc4Nk5HPXkKIyBDT05GSUdfU0VOU09S
U19XODM2MjdIRiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNjI3RUhGIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19XTTgzMVggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19XTTgz
NTA9eQpDT05GSUdfVEhFUk1BTD15CiMgQ09ORklHX1RIRVJNQUxfSFdNT04gaXMgbm90IHNldApD
T05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9TVEVQX1dJU0U9eQojIENPTkZJR19USEVSTUFMX0RF
RkFVTFRfR09WX0ZBSVJfU0hBUkUgaXMgbm90IHNldAojIENPTkZJR19USEVSTUFMX0RFRkFVTFRf
R09WX1VTRVJfU1BBQ0UgaXMgbm90IHNldApDT05GSUdfVEhFUk1BTF9HT1ZfRkFJUl9TSEFSRT15
CkNPTkZJR19USEVSTUFMX0dPVl9TVEVQX1dJU0U9eQpDT05GSUdfVEhFUk1BTF9HT1ZfVVNFUl9T
UEFDRT15CkNPTkZJR19USEVSTUFMX0VNVUxBVElPTj15CkNPTkZJR19SQ0FSX1RIRVJNQUw9eQpD
T05GSUdfSU5URUxfUE9XRVJDTEFNUD15CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBk
cml2ZXJzCiMKIyBDT05GSUdfV0FUQ0hET0cgaXMgbm90IHNldApDT05GSUdfU1NCX1BPU1NJQkxF
PXkKCiMKIyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUKIwpDT05GSUdfU1NCPXkKQ09ORklHX1NT
Ql9QQ01DSUFIT1NUX1BPU1NJQkxFPXkKIyBDT05GSUdfU1NCX1BDTUNJQUhPU1QgaXMgbm90IHNl
dAojIENPTkZJR19TU0JfU0lMRU5UIGlzIG5vdCBzZXQKQ09ORklHX1NTQl9ERUJVRz15CkNPTkZJ
R19CQ01BX1BPU1NJQkxFPXkKCiMKIyBCcm9hZGNvbSBzcGVjaWZpYyBBTUJBCiMKQ09ORklHX0JD
TUE9eQojIENPTkZJR19CQ01BX0hPU1RfU09DIGlzIG5vdCBzZXQKIyBDT05GSUdfQkNNQV9EUklW
RVJfR01BQ19DTU4gaXMgbm90IHNldApDT05GSUdfQkNNQV9ERUJVRz15CgojCiMgTXVsdGlmdW5j
dGlvbiBkZXZpY2UgZHJpdmVycwojCkNPTkZJR19NRkRfQ09SRT15CkNPTkZJR19NRkRfQVMzNzEx
PXkKQ09ORklHX1BNSUNfQURQNTUyMD15CkNPTkZJR19NRkRfQ1JPU19FQz15CkNPTkZJR19NRkRf
Q1JPU19FQ19JMkM9eQpDT05GSUdfUE1JQ19EQTkwM1g9eQpDT05GSUdfUE1JQ19EQTkwNTI9eQoj
IENPTkZJR19NRkRfREE5MDUyX1NQSSBpcyBub3Qgc2V0CkNPTkZJR19NRkRfREE5MDUyX0kyQz15
CkNPTkZJR19NRkRfREE5MDU1PXkKQ09ORklHX01GRF9EQTkwNjM9eQojIENPTkZJR19NRkRfTUMx
M1hYWF9TUEkgaXMgbm90IHNldAojIENPTkZJR19NRkRfTUMxM1hYWF9JMkMgaXMgbm90IHNldAoj
IENPTkZJR19IVENfUEFTSUMzIGlzIG5vdCBzZXQKQ09ORklHX01GRF9LRU1QTEQ9eQpDT05GSUdf
TUZEXzg4UE04MDA9eQojIENPTkZJR19NRkRfODhQTTgwNSBpcyBub3Qgc2V0CiMgQ09ORklHX01G
RF84OFBNODYwWCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVgxNDU3NyBpcyBub3Qgc2V0CkNP
TkZJR19NRkRfTUFYNzc2ODY9eQpDT05GSUdfTUZEX01BWDc3NjkzPXkKIyBDT05GSUdfTUZEX01B
WDg5MDcgaXMgbm90IHNldApDT05GSUdfTUZEX01BWDg5MjU9eQojIENPTkZJR19NRkRfTUFYODk5
NyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg4OTk4IGlzIG5vdCBzZXQKIyBDT05GSUdfRVpY
X1BDQVAgaXMgbm90IHNldApDT05GSUdfTUZEX1JFVFU9eQpDT05GSUdfTUZEX1BDRjUwNjMzPXkK
IyBDT05GSUdfUENGNTA2MzNfQURDIGlzIG5vdCBzZXQKIyBDT05GSUdfUENGNTA2MzNfR1BJTyBp
cyBub3Qgc2V0CkNPTkZJR19NRkRfUkM1VDU4Mz15CiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBu
b3Qgc2V0CkNPTkZJR19NRkRfU0k0NzZYX0NPUkU9eQojIENPTkZJR19NRkRfU001MDEgaXMgbm90
IHNldAojIENPTkZJR19NRkRfU01TQyBpcyBub3Qgc2V0CkNPTkZJR19BQlg1MDBfQ09SRT15CiMg
Q09ORklHX0FCMzEwMF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1NUTVBFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUZEX1NZU0NPTiBpcyBub3Qgc2V0CkNPTkZJR19NRkRfVElfQU0zMzVYX1RT
Q0FEQz15CiMgQ09ORklHX01GRF9MUDM5NDMgaXMgbm90IHNldApDT05GSUdfTUZEX0xQODc4OD15
CkNPTkZJR19NRkRfUEFMTUFTPXkKQ09ORklHX1RQUzYxMDVYPXkKQ09ORklHX1RQUzY1MDdYPXkK
Q09ORklHX01GRF9UUFM2NTA5MD15CkNPTkZJR19NRkRfVFBTNjUyMTc9eQojIENPTkZJR19NRkRf
VFBTNjU4NlggaXMgbm90IHNldAojIENPTkZJR19NRkRfVFBTODAwMzEgaXMgbm90IHNldApDT05G
SUdfVFdMNDAzMF9DT1JFPXkKIyBDT05GSUdfVFdMNDAzMF9NQURDIGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX1RXTDQwMzBfQVVESU8gaXMgbm90IHNldAojIENPTkZJR19UV0w2MDQwX0NPUkUgaXMg
bm90IHNldApDT05GSUdfTUZEX1dMMTI3M19DT1JFPXkKQ09ORklHX01GRF9MTTM1MzM9eQojIENP
TkZJR19NRkRfVEMzNTg5WCBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UTUlPIGlzIG5vdCBzZXQK
Q09ORklHX01GRF9BUklaT05BPXkKIyBDT05GSUdfTUZEX0FSSVpPTkFfSTJDIGlzIG5vdCBzZXQK
Q09ORklHX01GRF9BUklaT05BX1NQST15CiMgQ09ORklHX01GRF9XTTUxMDIgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfV001MTEwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dNODk5NyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9XTTg0MDAgaXMgbm90IHNldApDT05GSUdfTUZEX1dNODMxWD15CiMg
Q09ORklHX01GRF9XTTgzMVhfSTJDIGlzIG5vdCBzZXQKQ09ORklHX01GRF9XTTgzMVhfU1BJPXkK
Q09ORklHX01GRF9XTTgzNTA9eQpDT05GSUdfTUZEX1dNODM1MF9JMkM9eQojIENPTkZJR19NRkRf
V004OTk0IGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUj15CkNPTkZJR19SRUdVTEFUT1JfREVC
VUc9eQpDT05GSUdfUkVHVUxBVE9SX0ZJWEVEX1ZPTFRBR0U9eQpDT05GSUdfUkVHVUxBVE9SX1ZJ
UlRVQUxfQ09OU1VNRVI9eQpDT05GSUdfUkVHVUxBVE9SX1VTRVJTUEFDRV9DT05TVU1FUj15CiMg
Q09ORklHX1JFR1VMQVRPUl84OFBNODAwIGlzIG5vdCBzZXQKIyBDT05GSUdfUkVHVUxBVE9SX0FD
VDg4NjUgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX0FENTM5OD15CkNPTkZJR19SRUdVTEFU
T1JfQVMzNzExPXkKQ09ORklHX1JFR1VMQVRPUl9EQTkwM1g9eQojIENPTkZJR19SRUdVTEFUT1Jf
REE5MDUyIGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUl9EQTkwNTU9eQpDT05GSUdfUkVHVUxB
VE9SX0RBOTA2Mz15CiMgQ09ORklHX1JFR1VMQVRPUl9EQTkyMTAgaXMgbm90IHNldAojIENPTkZJ
R19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1JfSVNMNjI3
MUEgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX0xQMzk3MT15CkNPTkZJR19SRUdVTEFUT1Jf
TFAzOTcyPXkKQ09ORklHX1JFR1VMQVRPUl9MUDg3Mlg9eQpDT05GSUdfUkVHVUxBVE9SX0xQODc1
NT15CkNPTkZJR19SRUdVTEFUT1JfTFA4Nzg4PXkKQ09ORklHX1JFR1VMQVRPUl9NQVgxNTg2PXkK
Q09ORklHX1JFR1VMQVRPUl9NQVg4NjQ5PXkKIyBDT05GSUdfUkVHVUxBVE9SX01BWDg2NjAgaXMg
bm90IHNldApDT05GSUdfUkVHVUxBVE9SX01BWDg5MjU9eQpDT05GSUdfUkVHVUxBVE9SX01BWDg5
NTI9eQojIENPTkZJR19SRUdVTEFUT1JfTUFYODk3MyBpcyBub3Qgc2V0CiMgQ09ORklHX1JFR1VM
QVRPUl9NQVg3NzY4NiBpcyBub3Qgc2V0CiMgQ09ORklHX1JFR1VMQVRPUl9NQVg3NzY5MyBpcyBu
b3Qgc2V0CkNPTkZJR19SRUdVTEFUT1JfUEFMTUFTPXkKQ09ORklHX1JFR1VMQVRPUl9QQ0Y1MDYz
Mz15CkNPTkZJR19SRUdVTEFUT1JfUEZVWkUxMDA9eQpDT05GSUdfUkVHVUxBVE9SX1JDNVQ1ODM9
eQpDT05GSUdfUkVHVUxBVE9SX1RQUzUxNjMyPXkKIyBDT05GSUdfUkVHVUxBVE9SX1RQUzYxMDVY
IGlzIG5vdCBzZXQKQ09ORklHX1JFR1VMQVRPUl9UUFM2MjM2MD15CiMgQ09ORklHX1JFR1VMQVRP
Ul9UUFM2NTAyMyBpcyBub3Qgc2V0CkNPTkZJR19SRUdVTEFUT1JfVFBTNjUwN1g9eQojIENPTkZJ
R19SRUdVTEFUT1JfVFBTNjUwOTAgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1JfVFBTNjUy
MTcgaXMgbm90IHNldApDT05GSUdfUkVHVUxBVE9SX1RQUzY1MjRYPXkKQ09ORklHX1JFR1VMQVRP
Ul9UV0w0MDMwPXkKIyBDT05GSUdfUkVHVUxBVE9SX1dNODMxWCBpcyBub3Qgc2V0CkNPTkZJR19S
RUdVTEFUT1JfV004MzUwPXkKQ09ORklHX01FRElBX1NVUFBPUlQ9eQoKIwojIE11bHRpbWVkaWEg
Y29yZSBzdXBwb3J0CiMKQ09ORklHX01FRElBX0NBTUVSQV9TVVBQT1JUPXkKIyBDT05GSUdfTUVE
SUFfQU5BTE9HX1RWX1NVUFBPUlQgaXMgbm90IHNldAojIENPTkZJR19NRURJQV9ESUdJVEFMX1RW
X1NVUFBPUlQgaXMgbm90IHNldApDT05GSUdfTUVESUFfUkFESU9fU1VQUE9SVD15CiMgQ09ORklH
X01FRElBX0NPTlRST0xMRVIgaXMgbm90IHNldApDT05GSUdfVklERU9fREVWPXkKQ09ORklHX1ZJ
REVPX1Y0TDI9eQojIENPTkZJR19WSURFT19BRFZfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19W
SURFT19GSVhFRF9NSU5PUl9SQU5HRVMgaXMgbm90IHNldApDT05GSUdfVjRMMl9NRU0yTUVNX0RF
Vj15CkNPTkZJR19WSURFT0JVRl9HRU49eQpDT05GSUdfVklERU9CVUZfRE1BX0NPTlRJRz15CkNP
TkZJR19WSURFT0JVRjJfQ09SRT15CkNPTkZJR19WSURFT0JVRjJfTUVNT1BTPXkKQ09ORklHX1ZJ
REVPQlVGMl9ETUFfQ09OVElHPXkKIyBDT05GSUdfVFRQQ0lfRUVQUk9NIGlzIG5vdCBzZXQKCiMK
IyBNZWRpYSBkcml2ZXJzCiMKQ09ORklHX1Y0TF9QTEFURk9STV9EUklWRVJTPXkKIyBDT05GSUdf
VklERU9fU0hfVk9VIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX1RJTUJFUkRBTEU9eQpDT05GSUdf
U09DX0NBTUVSQT15CkNPTkZJR19TT0NfQ0FNRVJBX1BMQVRGT1JNPXkKIyBDT05GSUdfVklERU9f
UkNBUl9WSU4gaXMgbm90IHNldApDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUz15CkNPTkZJR19W
SURFT19NRU0yTUVNX0RFSU5URVJMQUNFPXkKIyBDT05GSUdfVklERU9fU0hfVkVVIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVjRMX1RFU1RfRFJJVkVSUyBpcyBub3Qgc2V0CgojCiMgU3VwcG9ydGVkIE1N
Qy9TRElPIGFkYXB0ZXJzCiMKIyBDT05GSUdfTUVESUFfUEFSUE9SVF9TVVBQT1JUIGlzIG5vdCBz
ZXQKQ09ORklHX1JBRElPX0FEQVBURVJTPXkKQ09ORklHX1JBRElPX1NJNDcwWD15CkNPTkZJR19J
MkNfU0k0NzBYPXkKIyBDT05GSUdfUkFESU9fU0k0NzEzIGlzIG5vdCBzZXQKQ09ORklHX1JBRElP
X1RFQTU3NjQ9eQpDT05GSUdfUkFESU9fVEVBNTc2NF9YVEFMPXkKQ09ORklHX1JBRElPX1NBQTc3
MDZIPXkKQ09ORklHX1JBRElPX1RFRjY4NjI9eQpDT05GSUdfUkFESU9fV0wxMjczPXkKCiMKIyBU
ZXhhcyBJbnN0cnVtZW50cyBXTDEyOHggRk0gZHJpdmVyIChTVCBiYXNlZCkKIwoKIwojIE1lZGlh
IGFuY2lsbGFyeSBkcml2ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGkyYywgZnJvbnRlbmRzKQojCkNP
TkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15CgojCiMgQXVkaW8gZGVjb2RlcnMsIHByb2Nl
c3NvcnMgYW5kIG1peGVycwojCgojCiMgUkRTIGRlY29kZXJzCiMKCiMKIyBWaWRlbyBkZWNvZGVy
cwojCkNPTkZJR19WSURFT19BRFY3MTgwPXkKCiMKIyBWaWRlbyBhbmQgYXVkaW8gZGVjb2RlcnMK
IwoKIwojIFZpZGVvIGVuY29kZXJzCiMKCiMKIyBDYW1lcmEgc2Vuc29yIGRldmljZXMKIwoKIwoj
IEZsYXNoIGRldmljZXMKIwoKIwojIFZpZGVvIGltcHJvdmVtZW50IGNoaXBzCiMKCiMKIyBNaXNj
ZWxsYW5lb3VzIGhlbHBlciBjaGlwcwojCgojCiMgU2Vuc29ycyB1c2VkIG9uIHNvY19jYW1lcmEg
ZHJpdmVyCiMKCiMKIyBzb2NfY2FtZXJhIHNlbnNvciBkcml2ZXJzCiMKQ09ORklHX1NPQ19DQU1F
UkFfSU1YMDc0PXkKQ09ORklHX1NPQ19DQU1FUkFfTVQ5TTAwMT15CkNPTkZJR19TT0NfQ0FNRVJB
X01UOU0xMTE9eQpDT05GSUdfU09DX0NBTUVSQV9NVDlUMDMxPXkKIyBDT05GSUdfU09DX0NBTUVS
QV9NVDlUMTEyIGlzIG5vdCBzZXQKQ09ORklHX1NPQ19DQU1FUkFfTVQ5VjAyMj15CkNPTkZJR19T
T0NfQ0FNRVJBX09WMjY0MD15CkNPTkZJR19TT0NfQ0FNRVJBX09WNTY0Mj15CkNPTkZJR19TT0Nf
Q0FNRVJBX09WNjY1MD15CkNPTkZJR19TT0NfQ0FNRVJBX09WNzcyWD15CiMgQ09ORklHX1NPQ19D
QU1FUkFfT1Y5NjQwIGlzIG5vdCBzZXQKQ09ORklHX1NPQ19DQU1FUkFfT1Y5NzQwPXkKIyBDT05G
SUdfU09DX0NBTUVSQV9SSjU0TjEgaXMgbm90IHNldApDT05GSUdfU09DX0NBTUVSQV9UVzk5MTA9
eQpDT05GSUdfTUVESUFfVFVORVI9eQpDT05GSUdfTUVESUFfVFVORVJfU0lNUExFPXkKQ09ORklH
X01FRElBX1RVTkVSX1REQTgyOTA9eQpDT05GSUdfTUVESUFfVFVORVJfVERBODI3WD15CkNPTkZJ
R19NRURJQV9UVU5FUl9UREExODI3MT15CkNPTkZJR19NRURJQV9UVU5FUl9UREE5ODg3PXkKQ09O
RklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQpDT05GSUdfTUVESUFfVFVORVJfVEVBNTc2Nz15CkNP
TkZJR19NRURJQV9UVU5FUl9NVDIwWFg9eQpDT05GSUdfTUVESUFfVFVORVJfWEMyMDI4PXkKQ09O
RklHX01FRElBX1RVTkVSX1hDNTAwMD15CkNPTkZJR19NRURJQV9UVU5FUl9YQzQwMDA9eQpDT05G
SUdfTUVESUFfVFVORVJfTUM0NFM4MDM9eQoKIwojIFRvb2xzIHRvIGRldmVsb3AgbmV3IGZyb250
ZW5kcwojCiMgQ09ORklHX0RWQl9EVU1NWV9GRSBpcyBub3Qgc2V0CgojCiMgR3JhcGhpY3Mgc3Vw
cG9ydAojCkNPTkZJR19EUk09eQojIENPTkZJR19EUk1fVURMIGlzIG5vdCBzZXQKQ09ORklHX1ZH
QVNUQVRFPXkKQ09ORklHX1ZJREVPX09VVFBVVF9DT05UUk9MPXkKQ09ORklHX0hETUk9eQpDT05G
SUdfRkI9eQojIENPTkZJR19GSVJNV0FSRV9FRElEIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfRERD
IGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JPT1RfVkVTQV9TVVBQT1JUPXkKQ09ORklHX0ZCX0NGQl9G
SUxMUkVDVD15CkNPTkZJR19GQl9DRkJfQ09QWUFSRUE9eQpDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJ
VD15CiMgQ09ORklHX0ZCX0NGQl9SRVZfUElYRUxTX0lOX0JZVEUgaXMgbm90IHNldApDT05GSUdf
RkJfU1lTX0ZJTExSRUNUPXkKQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15CkNPTkZJR19GQl9TWVNf
SU1BR0VCTElUPXkKQ09ORklHX0ZCX0ZPUkVJR05fRU5ESUFOPXkKQ09ORklHX0ZCX0JPVEhfRU5E
SUFOPXkKIyBDT05GSUdfRkJfQklHX0VORElBTiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0xJVFRM
RV9FTkRJQU4gaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZPUFM9eQpDT05GSUdfRkJfREVGRVJS
RURfSU89eQpDT05GSUdfRkJfSEVDVUJBPXkKIyBDT05GSUdfRkJfU1ZHQUxJQiBpcyBub3Qgc2V0
CiMgQ09ORklHX0ZCX01BQ01PREVTIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfQkFDS0xJR0hUIGlz
IG5vdCBzZXQKQ09ORklHX0ZCX01PREVfSEVMUEVSUz15CkNPTkZJR19GQl9USUxFQkxJVFRJTkc9
eQoKIwojIEZyYW1lIGJ1ZmZlciBoYXJkd2FyZSBkcml2ZXJzCiMKQ09ORklHX0ZCX0FSQz15CkNP
TkZJR19GQl9WR0ExNj15CkNPTkZJR19GQl9VVkVTQT15CkNPTkZJR19GQl9WRVNBPXkKQ09ORklH
X0ZCX040MTE9eQpDT05GSUdfRkJfSEdBPXkKQ09ORklHX0ZCX1MxRDEzWFhYPXkKIyBDT05GSUdf
RkJfVE1JTyBpcyBub3Qgc2V0CkNPTkZJR19GQl9HT0xERklTSD15CiMgQ09ORklHX0ZCX1ZJUlRV
QUwgaXMgbm90IHNldApDT05GSUdfWEVOX0ZCREVWX0ZST05URU5EPXkKIyBDT05GSUdfRkJfTUVU
Uk9OT01FIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JST0FEU0hFRVQ9eQpDT05GSUdfRkJfQVVPX0sx
OTBYPXkKIyBDT05GSUdfRkJfQVVPX0sxOTAwIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0FVT19LMTkw
MT15CiMgQ09ORklHX0ZCX1NJTVBMRSBpcyBub3Qgc2V0CkNPTkZJR19FWFlOT1NfVklERU89eQpD
T05GSUdfQkFDS0xJR0hUX0xDRF9TVVBQT1JUPXkKQ09ORklHX0xDRF9DTEFTU19ERVZJQ0U9eQpD
T05GSUdfTENEX0xUVjM1MFFWPXkKQ09ORklHX0xDRF9JTEk5MjJYPXkKQ09ORklHX0xDRF9JTEk5
MzIwPXkKQ09ORklHX0xDRF9URE8yNE09eQpDT05GSUdfTENEX1ZHRzI0MzJBND15CiMgQ09ORklH
X0xDRF9QTEFURk9STSBpcyBub3Qgc2V0CkNPTkZJR19MQ0RfUzZFNjNNMD15CkNPTkZJR19MQ0Rf
TEQ5MDQwPXkKIyBDT05GSUdfTENEX0FNUzM2OUZHMDYgaXMgbm90IHNldApDT05GSUdfTENEX0xN
UzUwMUtGMDM9eQojIENPTkZJR19MQ0RfSFg4MzU3IGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tMSUdI
VF9DTEFTU19ERVZJQ0U9eQojIENPTkZJR19CQUNLTElHSFRfR0VORVJJQyBpcyBub3Qgc2V0CkNP
TkZJR19CQUNLTElHSFRfTE0zNTMzPXkKQ09ORklHX0JBQ0tMSUdIVF9QV009eQojIENPTkZJR19C
QUNLTElHSFRfREE5MDNYIGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tMSUdIVF9EQTkwNTI9eQpDT05G
SUdfQkFDS0xJR0hUX01BWDg5MjU9eQpDT05GSUdfQkFDS0xJR0hUX1NBSEFSQT15CiMgQ09ORklH
X0JBQ0tMSUdIVF9XTTgzMVggaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfQURQNTUyMCBp
cyBub3Qgc2V0CkNPTkZJR19CQUNLTElHSFRfQURQODg2MD15CkNPTkZJR19CQUNLTElHSFRfQURQ
ODg3MD15CkNPTkZJR19CQUNLTElHSFRfUENGNTA2MzM9eQpDT05GSUdfQkFDS0xJR0hUX0xNMzYz
MEE9eQpDT05GSUdfQkFDS0xJR0hUX0xNMzYzOT15CkNPTkZJR19CQUNLTElHSFRfTFA4NTVYPXkK
Q09ORklHX0JBQ0tMSUdIVF9MUDg3ODg9eQpDT05GSUdfQkFDS0xJR0hUX1BBTkRPUkE9eQpDT05G
SUdfQkFDS0xJR0hUX1RQUzY1MjE3PXkKIyBDT05GSUdfQkFDS0xJR0hUX0FTMzcxMSBpcyBub3Qg
c2V0CiMgQ09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tM
SUdIVF9CRDYxMDcgaXMgbm90IHNldApDT05GSUdfTE9HTz15CiMgQ09ORklHX0xPR09fTElOVVhf
TU9OTyBpcyBub3Qgc2V0CiMgQ09ORklHX0xPR09fTElOVVhfVkdBMTYgaXMgbm90IHNldApDT05G
SUdfTE9HT19MSU5VWF9DTFVUMjI0PXkKQ09ORklHX1NPVU5EPXkKQ09ORklHX1NPVU5EX09TU19D
T1JFPXkKQ09ORklHX1NPVU5EX09TU19DT1JFX1BSRUNMQUlNPXkKQ09ORklHX1NORD15CkNPTkZJ
R19TTkRfVElNRVI9eQpDT05GSUdfU05EX1BDTT15CkNPTkZJR19TTkRfSFdERVA9eQpDT05GSUdf
U05EX1JBV01JREk9eQojIENPTkZJR19TTkRfU0VRVUVOQ0VSIGlzIG5vdCBzZXQKQ09ORklHX1NO
RF9PU1NFTVVMPXkKQ09ORklHX1NORF9NSVhFUl9PU1M9eQojIENPTkZJR19TTkRfUENNX09TUyBp
cyBub3Qgc2V0CkNPTkZJR19TTkRfSFJUSU1FUj15CkNPTkZJR19TTkRfRFlOQU1JQ19NSU5PUlM9
eQpDT05GSUdfU05EX01BWF9DQVJEUz0zMgojIENPTkZJR19TTkRfU1VQUE9SVF9PTERfQVBJIGlz
IG5vdCBzZXQKQ09ORklHX1NORF9WRVJCT1NFX1BST0NGUz15CkNPTkZJR19TTkRfVkVSQk9TRV9Q
UklOVEs9eQojIENPTkZJR19TTkRfREVCVUcgaXMgbm90IHNldApDT05GSUdfU05EX0RNQV9TR0JV
Rj15CiMgQ09ORklHX1NORF9SQVdNSURJX1NFUSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9PUEwz
X0xJQl9TRVEgaXMgbm90IHNldAojIENPTkZJR19TTkRfT1BMNF9MSUJfU0VRIGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX1NCQVdFX1NFUSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9FTVUxMEsxX1NF
USBpcyBub3Qgc2V0CkNPTkZJR19TTkRfTVBVNDAxX1VBUlQ9eQpDT05GSUdfU05EX1ZYX0xJQj15
CkNPTkZJR19TTkRfRFJJVkVSUz15CkNPTkZJR19TTkRfRFVNTVk9eQpDT05GSUdfU05EX0FMT09Q
PXkKQ09ORklHX1NORF9NVFBBVj15CkNPTkZJR19TTkRfTVRTNjQ9eQpDT05GSUdfU05EX1NFUklB
TF9VMTY1NTA9eQpDT05GSUdfU05EX01QVTQwMT15CkNPTkZJR19TTkRfUE9SVE1BTjJYND15CkNP
TkZJR19TTkRfU1BJPXkKQ09ORklHX1NORF9BVDczQzIxMz15CkNPTkZJR19TTkRfQVQ3M0MyMTNf
VEFSR0VUX0JJVFJBVEU9NDgwMDAKQ09ORklHX1NORF9QQ01DSUE9eQpDT05GSUdfU05EX1ZYUE9D
S0VUPXkKQ09ORklHX1NORF9QREFVRElPQ0Y9eQojIENPTkZJR19TTkRfU09DIGlzIG5vdCBzZXQK
IyBDT05GSUdfU09VTkRfUFJJTUUgaXMgbm90IHNldApDT05GSUdfVVNCX09IQ0lfTElUVExFX0VO
RElBTj15CkNPTkZJR19VU0JfU1VQUE9SVD15CkNPTkZJR19VU0JfQVJDSF9IQVNfSENEPXkKIyBD
T05GSUdfVVNCIGlzIG5vdCBzZXQKCiMKIyBVU0IgcG9ydCBkcml2ZXJzCiMKCiMKIyBVU0IgUGh5
c2ljYWwgTGF5ZXIgZHJpdmVycwojCkNPTkZJR19VU0JfUEhZPXkKIyBDT05GSUdfS0VZU1RPTkVf
VVNCX1BIWSBpcyBub3Qgc2V0CkNPTkZJR19OT1BfVVNCX1hDRUlWPXkKQ09ORklHX09NQVBfQ09O
VFJPTF9VU0I9eQpDT05GSUdfT01BUF9VU0IzPXkKQ09ORklHX0FNMzM1WF9DT05UUk9MX1VTQj15
CkNPTkZJR19BTTMzNVhfUEhZX1VTQj15CkNPTkZJR19TQU1TVU5HX1VTQlBIWT15CiMgQ09ORklH
X1NBTVNVTkdfVVNCMlBIWSBpcyBub3Qgc2V0CkNPTkZJR19TQU1TVU5HX1VTQjNQSFk9eQpDT05G
SUdfVEFIVk9fVVNCPXkKIyBDT05GSUdfVEFIVk9fVVNCX0hPU1RfQllfREVGQVVMVCBpcyBub3Qg
c2V0CkNPTkZJR19VU0JfUkNBUl9HRU4yX1BIWT15CiMgQ09ORklHX1VTQl9HQURHRVQgaXMgbm90
IHNldAojIENPTkZJR19NTUMgaXMgbm90IHNldApDT05GSUdfTUVNU1RJQ0s9eQojIENPTkZJR19N
RU1TVElDS19ERUJVRyBpcyBub3Qgc2V0CgojCiMgTWVtb3J5U3RpY2sgZHJpdmVycwojCkNPTkZJ
R19NRU1TVElDS19VTlNBRkVfUkVTVU1FPXkKQ09ORklHX01TUFJPX0JMT0NLPXkKQ09ORklHX01T
X0JMT0NLPXkKCiMKIyBNZW1vcnlTdGljayBIb3N0IENvbnRyb2xsZXIgRHJpdmVycwojCkNPTkZJ
R19ORVdfTEVEUz15CkNPTkZJR19MRURTX0NMQVNTPXkKCiMKIyBMRUQgZHJpdmVycwojCkNPTkZJ
R19MRURTX0xNMzUzMD15CkNPTkZJR19MRURTX0xNMzUzMz15CkNPTkZJR19MRURTX0xNMzY0Mj15
CkNPTkZJR19MRURTX0xQMzk0ND15CkNPTkZJR19MRURTX0xQNTVYWF9DT01NT049eQpDT05GSUdf
TEVEU19MUDU1MjE9eQojIENPTkZJR19MRURTX0xQNTUyMyBpcyBub3Qgc2V0CkNPTkZJR19MRURT
X0xQNTU2Mj15CkNPTkZJR19MRURTX0xQODUwMT15CkNPTkZJR19MRURTX0xQODc4OD15CkNPTkZJ
R19MRURTX1BDQTk1NVg9eQpDT05GSUdfTEVEU19QQ0E5NjNYPXkKQ09ORklHX0xFRFNfUENBOTY4
NT15CkNPTkZJR19MRURTX1dNODMxWF9TVEFUVVM9eQojIENPTkZJR19MRURTX1dNODM1MCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0xFRFNfREE5MDNYIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19EQTkw
NTIgaXMgbm90IHNldApDT05GSUdfTEVEU19EQUMxMjRTMDg1PXkKIyBDT05GSUdfTEVEU19QV00g
aXMgbm90IHNldApDT05GSUdfTEVEU19SRUdVTEFUT1I9eQpDT05GSUdfTEVEU19CRDI4MDI9eQpD
T05GSUdfTEVEU19BRFA1NTIwPXkKIyBDT05GSUdfTEVEU19UQ0E2NTA3IGlzIG5vdCBzZXQKQ09O
RklHX0xFRFNfTE0zNTV4PXkKQ09ORklHX0xFRFNfT1QyMDA9eQpDT05GSUdfTEVEU19CTElOS009
eQoKIwojIExFRCBUcmlnZ2VycwojCiMgQ09ORklHX0xFRFNfVFJJR0dFUlMgaXMgbm90IHNldAoj
IENPTkZJR19BQ0NFU1NJQklMSVRZIGlzIG5vdCBzZXQKIyBDT05GSUdfRURBQyBpcyBub3Qgc2V0
CkNPTkZJR19SVENfTElCPXkKQ09ORklHX1JUQ19DTEFTUz15CiMgQ09ORklHX1JUQ19IQ1RPU1lT
IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX1NZU1RPSEMgaXMgbm90IHNldAojIENPTkZJR19SVENf
REVCVUcgaXMgbm90IHNldAoKIwojIFJUQyBpbnRlcmZhY2VzCiMKIyBDT05GSUdfUlRDX0lOVEZf
U1lTRlMgaXMgbm90IHNldApDT05GSUdfUlRDX0lOVEZfUFJPQz15CiMgQ09ORklHX1JUQ19JTlRG
X0RFViBpcyBub3Qgc2V0CkNPTkZJR19SVENfRFJWX1RFU1Q9eQoKIwojIEkyQyBSVEMgZHJpdmVy
cwojCkNPTkZJR19SVENfRFJWXzg4UE04MFg9eQojIENPTkZJR19SVENfRFJWX0RTMTMwNyBpcyBu
b3Qgc2V0CkNPTkZJR19SVENfRFJWX0RTMTM3ND15CkNPTkZJR19SVENfRFJWX0RTMTY3Mj15CkNP
TkZJR19SVENfRFJWX0RTMzIzMj15CkNPTkZJR19SVENfRFJWX0xQODc4OD15CiMgQ09ORklHX1JU
Q19EUlZfTUFYNjkwMCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTUFYODkyNSBpcyBub3Qg
c2V0CiMgQ09ORklHX1JUQ19EUlZfTUFYNzc2ODYgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9S
UzVDMzcyPXkKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjA4IGlzIG5vdCBzZXQKQ09ORklHX1JUQ19E
UlZfSVNMMTIwMjI9eQpDT05GSUdfUlRDX0RSVl9JU0wxMjA1Nz15CiMgQ09ORklHX1JUQ19EUlZf
WDEyMDUgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9QQUxNQVM9eQpDT05GSUdfUlRDX0RSVl9Q
Q0YyMTI3PXkKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9QQ0Y4NTYzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTgzIGlzIG5vdCBz
ZXQKQ09ORklHX1JUQ19EUlZfTTQxVDgwPXkKIyBDT05GSUdfUlRDX0RSVl9NNDFUODBfV0RUIGlz
IG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfQlEzMks9eQpDT05GSUdfUlRDX0RSVl9UV0w0MDMwPXkK
Q09ORklHX1JUQ19EUlZfUkM1VDU4Mz15CkNPTkZJR19SVENfRFJWX1MzNTM5MEE9eQpDT05GSUdf
UlRDX0RSVl9GTTMxMzA9eQojIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0CkNPTkZJ
R19SVENfRFJWX1JYODAyNT15CkNPTkZJR19SVENfRFJWX0VNMzAyNz15CkNPTkZJR19SVENfRFJW
X1JWMzAyOUMyPXkKCiMKIyBTUEkgUlRDIGRyaXZlcnMKIwpDT05GSUdfUlRDX0RSVl9NNDFUOTM9
eQojIENPTkZJR19SVENfRFJWX000MVQ5NCBpcyBub3Qgc2V0CkNPTkZJR19SVENfRFJWX0RTMTMw
NT15CkNPTkZJR19SVENfRFJWX0RTMTM5MD15CkNPTkZJR19SVENfRFJWX01BWDY5MDI9eQojIENP
TkZJR19SVENfRFJWX1I5NzAxIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUzVDMzQ4IGlz
IG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzMyMzQgaXMgbm90IHNldApDT05GSUdfUlRDX0RS
Vl9QQ0YyMTIzPXkKQ09ORklHX1JUQ19EUlZfUlg0NTgxPXkKCiMKIyBQbGF0Zm9ybSBSVEMgZHJp
dmVycwojCkNPTkZJR19SVENfRFJWX0NNT1M9eQpDT05GSUdfUlRDX0RSVl9EUzEyODY9eQojIENP
TkZJR19SVENfRFJWX0RTMTUxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfRFMxNTUzIGlz
IG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfRFMxNzQyPXkKIyBDT05GSUdfUlRDX0RSVl9EQTkwNTIg
aXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9EQTkwNTU9eQojIENPTkZJR19SVENfRFJWX1NUSzE3
VEE4IGlzIG5vdCBzZXQKQ09ORklHX1JUQ19EUlZfTTQ4VDg2PXkKQ09ORklHX1JUQ19EUlZfTTQ4
VDM1PXkKIyBDT05GSUdfUlRDX0RSVl9NNDhUNTkgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJW
X01TTTYyNDIgaXMgbm90IHNldApDT05GSUdfUlRDX0RSVl9CUTQ4MDI9eQpDT05GSUdfUlRDX0RS
Vl9SUDVDMDE9eQojIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKQ09ORklHX1JUQ19E
UlZfRFMyNDA0PXkKQ09ORklHX1JUQ19EUlZfV004MzFYPXkKQ09ORklHX1JUQ19EUlZfV004MzUw
PXkKIyBDT05GSUdfUlRDX0RSVl9QQ0Y1MDYzMyBpcyBub3Qgc2V0CgojCiMgb24tQ1BVIFJUQyBk
cml2ZXJzCiMKQ09ORklHX1JUQ19EUlZfTU9YQVJUPXkKCiMKIyBISUQgU2Vuc29yIFJUQyBkcml2
ZXJzCiMKQ09ORklHX0RNQURFVklDRVM9eQpDT05GSUdfRE1BREVWSUNFU19ERUJVRz15CiMgQ09O
RklHX0RNQURFVklDRVNfVkRFQlVHIGlzIG5vdCBzZXQKCiMKIyBETUEgRGV2aWNlcwojCkNPTkZJ
R19EV19ETUFDX0NPUkU9eQojIENPTkZJR19EV19ETUFDIGlzIG5vdCBzZXQKQ09ORklHX1RJTUJf
RE1BPXkKQ09ORklHX0RNQV9FTkdJTkU9eQoKIwojIERNQSBDbGllbnRzCiMKIyBDT05GSUdfQVNZ
TkNfVFhfRE1BIGlzIG5vdCBzZXQKQ09ORklHX0RNQVRFU1Q9eQojIENPTkZJR19BVVhESVNQTEFZ
IGlzIG5vdCBzZXQKIyBDT05GSUdfVUlPIGlzIG5vdCBzZXQKQ09ORklHX1ZJUlRfRFJJVkVSUz15
CkNPTkZJR19WSVJUSU89eQoKIwojIFZpcnRpbyBkcml2ZXJzCiMKQ09ORklHX1ZJUlRJT19CQUxM
T09OPXkKIyBDT05GSUdfVklSVElPX01NSU8gaXMgbm90IHNldAoKIwojIE1pY3Jvc29mdCBIeXBl
ci1WIGd1ZXN0IHN1cHBvcnQKIwoKIwojIFhlbiBkcml2ZXIgc3VwcG9ydAojCkNPTkZJR19YRU5f
QkFMTE9PTj15CkNPTkZJR19YRU5fQkFMTE9PTl9NRU1PUllfSE9UUExVRz15CkNPTkZJR19YRU5f
U0NSVUJfUEFHRVM9eQpDT05GSUdfWEVOX0RFVl9FVlRDSE49eQojIENPTkZJR19YRU5GUyBpcyBu
b3Qgc2V0CkNPTkZJR19YRU5fU1lTX0hZUEVSVklTT1I9eQpDT05GSUdfWEVOX1hFTkJVU19GUk9O
VEVORD15CkNPTkZJR19YRU5fR05UREVWPXkKQ09ORklHX1hFTl9HUkFOVF9ERVZfQUxMT0M9eQpD
T05GSUdfU1dJT1RMQl9YRU49eQpDT05GSUdfWEVOX1BSSVZDTUQ9eQpDT05GSUdfWEVOX0hBVkVf
UFZNTVU9eQpDT05GSUdfU1RBR0lORz15CiMgQ09ORklHX0VDSE8gaXMgbm90IHNldApDT05GSUdf
UEFORUw9eQpDT05GSUdfUEFORUxfUEFSUE9SVD0wCkNPTkZJR19QQU5FTF9QUk9GSUxFPTUKQ09O
RklHX1BBTkVMX0NIQU5HRV9NRVNTQUdFPXkKQ09ORklHX1BBTkVMX0JPT1RfTUVTU0FHRT0iIgoK
IwojIElJTyBzdGFnaW5nIGRyaXZlcnMKIwoKIwojIEFjY2VsZXJvbWV0ZXJzCiMKQ09ORklHX0FE
SVMxNjIwMT15CkNPTkZJR19BRElTMTYyMDM9eQojIENPTkZJR19BRElTMTYyMDQgaXMgbm90IHNl
dAojIENPTkZJR19BRElTMTYyMDkgaXMgbm90IHNldAojIENPTkZJR19BRElTMTYyMjAgaXMgbm90
IHNldApDT05GSUdfQURJUzE2MjQwPXkKQ09ORklHX1NDQTMwMDA9eQoKIwojIEFuYWxvZyB0byBk
aWdpdGFsIGNvbnZlcnRlcnMKIwojIENPTkZJR19BRDcyOTEgaXMgbm90IHNldApDT05GSUdfQUQ3
OTlYPXkKQ09ORklHX0FENzk5WF9SSU5HX0JVRkZFUj15CiMgQ09ORklHX0FENzE5MiBpcyBub3Qg
c2V0CkNPTkZJR19BRDcyODA9eQojIENPTkZJR19MUEMzMlhYX0FEQyBpcyBub3Qgc2V0CkNPTkZJ
R19TUEVBUl9BREM9eQoKIwojIEFuYWxvZyBkaWdpdGFsIGJpLWRpcmVjdGlvbiBjb252ZXJ0ZXJz
CiMKCiMKIyBDYXBhY2l0YW5jZSB0byBkaWdpdGFsIGNvbnZlcnRlcnMKIwojIENPTkZJR19BRDcx
NTAgaXMgbm90IHNldAojIENPTkZJR19BRDcxNTIgaXMgbm90IHNldAojIENPTkZJR19BRDc3NDYg
aXMgbm90IHNldAoKIwojIERpcmVjdCBEaWdpdGFsIFN5bnRoZXNpcwojCiMgQ09ORklHX0FENTkz
MCBpcyBub3Qgc2V0CiMgQ09ORklHX0FEOTgzMiBpcyBub3Qgc2V0CkNPTkZJR19BRDk4MzQ9eQpD
T05GSUdfQUQ5ODUwPXkKQ09ORklHX0FEOTg1Mj15CkNPTkZJR19BRDk5MTA9eQojIENPTkZJR19B
RDk5NTEgaXMgbm90IHNldAoKIwojIERpZ2l0YWwgZ3lyb3Njb3BlIHNlbnNvcnMKIwojIENPTkZJ
R19BRElTMTYwNjAgaXMgbm90IHNldAoKIwojIE5ldHdvcmsgQW5hbHl6ZXIsIEltcGVkYW5jZSBD
b252ZXJ0ZXJzCiMKQ09ORklHX0FENTkzMz15CgojCiMgTGlnaHQgc2Vuc29ycwojCiMgQ09ORklH
X1NFTlNPUlNfSVNMMjkwMTggaXMgbm90IHNldApDT05GSUdfU0VOU09SU19JU0wyOTAyOD15CkNP
TkZJR19UU0wyNTgzPXkKQ09ORklHX1RTTDJ4N3g9eQoKIwojIE1hZ25ldG9tZXRlciBzZW5zb3Jz
CiMKQ09ORklHX1NFTlNPUlNfSE1DNTg0Mz15CgojCiMgQWN0aXZlIGVuZXJneSBtZXRlcmluZyBJ
QwojCiMgQ09ORklHX0FERTc3NTMgaXMgbm90IHNldApDT05GSUdfQURFNzc1ND15CiMgQ09ORklH
X0FERTc3NTggaXMgbm90IHNldApDT05GSUdfQURFNzc1OT15CkNPTkZJR19BREU3ODU0PXkKQ09O
RklHX0FERTc4NTRfSTJDPXkKIyBDT05GSUdfQURFNzg1NF9TUEkgaXMgbm90IHNldAoKIwojIFJl
c29sdmVyIHRvIGRpZ2l0YWwgY29udmVydGVycwojCkNPTkZJR19BRDJTOTA9eQoKIwojIFRyaWdn
ZXJzIC0gc3RhbmRhbG9uZQojCiMgQ09ORklHX0lJT19QRVJJT0RJQ19SVENfVFJJR0dFUiBpcyBu
b3Qgc2V0CkNPTkZJR19JSU9fU0lNUExFX0RVTU1ZPXkKIyBDT05GSUdfSUlPX1NJTVBMRV9EVU1N
WV9FVkVOVFMgaXMgbm90IHNldAojIENPTkZJR19JSU9fU0lNUExFX0RVTU1ZX0JVRkZFUiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0ZUMTAwMCBpcyBub3Qgc2V0CgojCiMgU3BlYWt1cCBjb25zb2xlIHNw
ZWVjaAojCkNPTkZJR19TVEFHSU5HX01FRElBPXkKQ09ORklHX0kyQ19CQ00yMDQ4PXkKIyBDT05G
SUdfVklERU9fVENNODI1WCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TTjlDMTAyIGlzIG5vdCBz
ZXQKCiMKIyBBbmRyb2lkCiMKQ09ORklHX0FORFJPSUQ9eQpDT05GSUdfQU5EUk9JRF9CSU5ERVJf
SVBDPXkKQ09ORklHX0FORFJPSURfTE9HR0VSPXkKQ09ORklHX0FORFJPSURfVElNRURfT1VUUFVU
PXkKIyBDT05GSUdfQU5EUk9JRF9MT1dfTUVNT1JZX0tJTExFUiBpcyBub3Qgc2V0CiMgQ09ORklH
X0FORFJPSURfSU5URl9BTEFSTV9ERVYgaXMgbm90IHNldApDT05GSUdfU1lOQz15CiMgQ09ORklH
X1NXX1NZTkMgaXMgbm90IHNldApDT05GSUdfSU9OPXkKQ09ORklHX0lPTl9URVNUPXkKIyBDT05G
SUdfWDg2X1BMQVRGT1JNX0RFVklDRVMgaXMgbm90IHNldApDT05GSUdfQ0hST01FX1BMQVRGT1JN
Uz15CiMgQ09ORklHX0NIUk9NRU9TX1BTVE9SRSBpcyBub3Qgc2V0CgojCiMgSGFyZHdhcmUgU3Bp
bmxvY2sgZHJpdmVycwojCkNPTkZJR19DTEtFVlRfSTgyNTM9eQpDT05GSUdfSTgyNTNfTE9DSz15
CkNPTkZJR19DTEtCTERfSTgyNTM9eQojIENPTkZJR19NQUlMQk9YIGlzIG5vdCBzZXQKIyBDT05G
SUdfSU9NTVVfU1VQUE9SVCBpcyBub3Qgc2V0CgojCiMgUmVtb3RlcHJvYyBkcml2ZXJzCiMKIyBD
T05GSUdfU1RFX01PREVNX1JQUk9DIGlzIG5vdCBzZXQKCiMKIyBScG1zZyBkcml2ZXJzCiMKIyBD
T05GSUdfUE1fREVWRlJFUSBpcyBub3Qgc2V0CkNPTkZJR19FWFRDT049eQoKIwojIEV4dGNvbiBE
ZXZpY2UgRHJpdmVycwojCkNPTkZJR19FWFRDT05fQURDX0pBQ0s9eQpDT05GSUdfRVhUQ09OX1BB
TE1BUz15CiMgQ09ORklHX01FTU9SWSBpcyBub3Qgc2V0CkNPTkZJR19JSU89eQpDT05GSUdfSUlP
X0JVRkZFUj15CiMgQ09ORklHX0lJT19CVUZGRVJfQ0IgaXMgbm90IHNldApDT05GSUdfSUlPX0tG
SUZPX0JVRj15CkNPTkZJR19JSU9fVFJJR0dFUkVEX0JVRkZFUj15CkNPTkZJR19JSU9fVFJJR0dF
Uj15CkNPTkZJR19JSU9fQ09OU1VNRVJTX1BFUl9UUklHR0VSPTIKCiMKIyBBY2NlbGVyb21ldGVy
cwojCkNPTkZJR19CTUExODA9eQpDT05GSUdfSUlPX1NUX0FDQ0VMXzNBWElTPXkKQ09ORklHX0lJ
T19TVF9BQ0NFTF9JMkNfM0FYSVM9eQpDT05GSUdfSUlPX1NUX0FDQ0VMX1NQSV8zQVhJUz15CkNP
TkZJR19LWFNEOT15CgojCiMgQW5hbG9nIHRvIGRpZ2l0YWwgY29udmVydGVycwojCkNPTkZJR19B
RF9TSUdNQV9ERUxUQT15CiMgQ09ORklHX0FENzI2NiBpcyBub3Qgc2V0CiMgQ09ORklHX0FENzI5
OCBpcyBub3Qgc2V0CkNPTkZJR19BRDc0NzY9eQojIENPTkZJR19BRDc3OTEgaXMgbm90IHNldApD
T05GSUdfQUQ3NzkzPXkKQ09ORklHX0FENzg4Nz15CiMgQ09ORklHX0FENzkyMyBpcyBub3Qgc2V0
CiMgQ09ORklHX0xQODc4OF9BREMgaXMgbm90IHNldApDT05GSUdfTUFYMTM2Mz15CkNPTkZJR19N
Q1AzMjBYPXkKQ09ORklHX01DUDM0MjI9eQojIENPTkZJR19OQVU3ODAyIGlzIG5vdCBzZXQKQ09O
RklHX1RJX0FEQzA4MUM9eQojIENPTkZJR19USV9BTTMzNVhfQURDIGlzIG5vdCBzZXQKQ09ORklH
X1RXTDYwMzBfR1BBREM9eQoKIwojIEFtcGxpZmllcnMKIwpDT05GSUdfQUQ4MzY2PXkKCiMKIyBI
aWQgU2Vuc29yIElJTyBDb21tb24KIwpDT05GSUdfSUlPX1NUX1NFTlNPUlNfSTJDPXkKQ09ORklH
X0lJT19TVF9TRU5TT1JTX1NQST15CkNPTkZJR19JSU9fU1RfU0VOU09SU19DT1JFPXkKCiMKIyBE
aWdpdGFsIHRvIGFuYWxvZyBjb252ZXJ0ZXJzCiMKQ09ORklHX0FENTA2ND15CkNPTkZJR19BRDUz
NjA9eQojIENPTkZJR19BRDUzODAgaXMgbm90IHNldAojIENPTkZJR19BRDU0MjEgaXMgbm90IHNl
dApDT05GSUdfQUQ1NDQ2PXkKQ09ORklHX0FENTQ0OT15CkNPTkZJR19BRDU1MDQ9eQpDT05GSUdf
QUQ1NjI0Ul9TUEk9eQpDT05GSUdfQUQ1Njg2PXkKQ09ORklHX0FENTc1NT15CkNPTkZJR19BRDU3
NjQ9eQojIENPTkZJR19BRDU3OTEgaXMgbm90IHNldAojIENPTkZJR19BRDczMDMgaXMgbm90IHNl
dApDT05GSUdfTUFYNTE3PXkKQ09ORklHX01DUDQ3MjU9eQoKIwojIEZyZXF1ZW5jeSBTeW50aGVz
aXplcnMgRERTL1BMTAojCgojCiMgQ2xvY2sgR2VuZXJhdG9yL0Rpc3RyaWJ1dGlvbgojCkNPTkZJ
R19BRDk1MjM9eQoKIwojIFBoYXNlLUxvY2tlZCBMb29wIChQTEwpIGZyZXF1ZW5jeSBzeW50aGVz
aXplcnMKIwojIENPTkZJR19BREY0MzUwIGlzIG5vdCBzZXQKCiMKIyBEaWdpdGFsIGd5cm9zY29w
ZSBzZW5zb3JzCiMKQ09ORklHX0FESVMxNjA4MD15CkNPTkZJR19BRElTMTYxMzA9eQpDT05GSUdf
QURJUzE2MTM2PXkKIyBDT05GSUdfQURJUzE2MjYwIGlzIG5vdCBzZXQKQ09ORklHX0FEWFJTNDUw
PXkKQ09ORklHX0lJT19TVF9HWVJPXzNBWElTPXkKQ09ORklHX0lJT19TVF9HWVJPX0kyQ18zQVhJ
Uz15CkNPTkZJR19JSU9fU1RfR1lST19TUElfM0FYSVM9eQpDT05GSUdfSVRHMzIwMD15CgojCiMg
SHVtaWRpdHkgc2Vuc29ycwojCgojCiMgSW5lcnRpYWwgbWVhc3VyZW1lbnQgdW5pdHMKIwojIENP
TkZJR19BRElTMTY0MDAgaXMgbm90IHNldAojIENPTkZJR19BRElTMTY0ODAgaXMgbm90IHNldApD
T05GSUdfSUlPX0FESVNfTElCPXkKQ09ORklHX0lJT19BRElTX0xJQl9CVUZGRVI9eQpDT05GSUdf
SU5WX01QVTYwNTBfSUlPPXkKCiMKIyBMaWdodCBzZW5zb3JzCiMKIyBDT05GSUdfQURKRF9TMzEx
IGlzIG5vdCBzZXQKQ09ORklHX0FQRFM5MzAwPXkKQ09ORklHX0NNMzY2NTE9eQpDT05GSUdfR1Ay
QVAwMjBBMDBGPXkKQ09ORklHX1NFTlNPUlNfTE0zNTMzPXkKQ09ORklHX1RDUzM0NzI9eQojIENP
TkZJR19TRU5TT1JTX1RTTDI1NjMgaXMgbm90IHNldAojIENPTkZJR19UU0w0NTMxIGlzIG5vdCBz
ZXQKQ09ORklHX1ZDTkw0MDAwPXkKCiMKIyBNYWduZXRvbWV0ZXIgc2Vuc29ycwojCkNPTkZJR19N
QUczMTEwPXkKQ09ORklHX0lJT19TVF9NQUdOXzNBWElTPXkKQ09ORklHX0lJT19TVF9NQUdOX0ky
Q18zQVhJUz15CkNPTkZJR19JSU9fU1RfTUFHTl9TUElfM0FYSVM9eQoKIwojIEluY2xpbm9tZXRl
ciBzZW5zb3JzCiMKCiMKIyBUcmlnZ2VycyAtIHN0YW5kYWxvbmUKIwpDT05GSUdfSUlPX0lOVEVS
UlVQVF9UUklHR0VSPXkKQ09ORklHX0lJT19TWVNGU19UUklHR0VSPXkKCiMKIyBQcmVzc3VyZSBz
ZW5zb3JzCiMKQ09ORklHX01QTDMxMTU9eQpDT05GSUdfSUlPX1NUX1BSRVNTPXkKQ09ORklHX0lJ
T19TVF9QUkVTU19JMkM9eQpDT05GSUdfSUlPX1NUX1BSRVNTX1NQST15CgojCiMgVGVtcGVyYXR1
cmUgc2Vuc29ycwojCkNPTkZJR19UTVAwMDY9eQpDT05GSUdfUFdNPXkKQ09ORklHX1BXTV9TWVNG
Uz15CiMgQ09ORklHX1BXTV9SRU5FU0FTX1RQVSBpcyBub3Qgc2V0CkNPTkZJR19QV01fVFdMPXkK
Q09ORklHX1BXTV9UV0xfTEVEPXkKIyBDT05GSUdfSVBBQ0tfQlVTIGlzIG5vdCBzZXQKQ09ORklH
X1JFU0VUX0NPTlRST0xMRVI9eQpDT05GSUdfRk1DPXkKQ09ORklHX0ZNQ19GQUtFREVWPXkKQ09O
RklHX0ZNQ19UUklWSUFMPXkKIyBDT05GSUdfRk1DX1dSSVRFX0VFUFJPTSBpcyBub3Qgc2V0CkNP
TkZJR19GTUNfQ0hBUkRFVj15CgojCiMgUEhZIFN1YnN5c3RlbQojCkNPTkZJR19HRU5FUklDX1BI
WT15CkNPTkZJR19QSFlfRVhZTk9TX01JUElfVklERU89eQpDT05GSUdfQkNNX0tPTkFfVVNCMl9Q
SFk9eQpDT05GSUdfUE9XRVJDQVA9eQpDT05GSUdfSU5URUxfUkFQTD15CgojCiMgRmlybXdhcmUg
RHJpdmVycwojCkNPTkZJR19FREQ9eQojIENPTkZJR19FRERfT0ZGIGlzIG5vdCBzZXQKIyBDT05G
SUdfRklSTVdBUkVfTUVNTUFQIGlzIG5vdCBzZXQKQ09ORklHX0RFTExfUkJVPXkKQ09ORklHX0RD
REJBUz15CkNPTkZJR19HT09HTEVfRklSTVdBUkU9eQoKIwojIEdvb2dsZSBGaXJtd2FyZSBEcml2
ZXJzCiMKCiMKIyBGaWxlIHN5c3RlbXMKIwpDT05GSUdfRENBQ0hFX1dPUkRfQUNDRVNTPXkKIyBD
T05GSUdfRVhUMl9GUyBpcyBub3Qgc2V0CkNPTkZJR19FWFQzX0ZTPXkKIyBDT05GSUdfRVhUM19E
RUZBVUxUU19UT19PUkRFUkVEIGlzIG5vdCBzZXQKQ09ORklHX0VYVDNfRlNfWEFUVFI9eQpDT05G
SUdfRVhUM19GU19QT1NJWF9BQ0w9eQpDT05GSUdfRVhUM19GU19TRUNVUklUWT15CiMgQ09ORklH
X0VYVDRfRlMgaXMgbm90IHNldApDT05GSUdfSkJEPXkKQ09ORklHX0pCRF9ERUJVRz15CkNPTkZJ
R19KQkQyPXkKQ09ORklHX0pCRDJfREVCVUc9eQpDT05GSUdfRlNfTUJDQUNIRT15CiMgQ09ORklH
X1JFSVNFUkZTX0ZTIGlzIG5vdCBzZXQKQ09ORklHX0pGU19GUz15CkNPTkZJR19KRlNfUE9TSVhf
QUNMPXkKQ09ORklHX0pGU19TRUNVUklUWT15CkNPTkZJR19KRlNfREVCVUc9eQojIENPTkZJR19K
RlNfU1RBVElTVElDUyBpcyBub3Qgc2V0CiMgQ09ORklHX1hGU19GUyBpcyBub3Qgc2V0CkNPTkZJ
R19HRlMyX0ZTPXkKQ09ORklHX09DRlMyX0ZTPXkKQ09ORklHX09DRlMyX0ZTX08yQ0I9eQpDT05G
SUdfT0NGUzJfRlNfU1RBVFM9eQpDT05GSUdfT0NGUzJfREVCVUdfTUFTS0xPRz15CiMgQ09ORklH
X09DRlMyX0RFQlVHX0ZTIGlzIG5vdCBzZXQKQ09ORklHX0JUUkZTX0ZTPXkKIyBDT05GSUdfQlRS
RlNfRlNfUE9TSVhfQUNMIGlzIG5vdCBzZXQKQ09ORklHX0JUUkZTX0ZTX0NIRUNLX0lOVEVHUklU
WT15CkNPTkZJR19CVFJGU19GU19SVU5fU0FOSVRZX1RFU1RTPXkKIyBDT05GSUdfQlRSRlNfREVC
VUcgaXMgbm90IHNldAojIENPTkZJR19CVFJGU19BU1NFUlQgaXMgbm90IHNldApDT05GSUdfTklM
RlMyX0ZTPXkKQ09ORklHX0ZTX1BPU0lYX0FDTD15CiMgQ09ORklHX0ZJTEVfTE9DS0lORyBpcyBu
b3Qgc2V0CkNPTkZJR19GU05PVElGWT15CiMgQ09ORklHX0ROT1RJRlkgaXMgbm90IHNldApDT05G
SUdfSU5PVElGWV9VU0VSPXkKIyBDT05GSUdfRkFOT1RJRlkgaXMgbm90IHNldApDT05GSUdfUVVP
VEE9eQpDT05GSUdfUVVPVEFfTkVUTElOS19JTlRFUkZBQ0U9eQpDT05GSUdfUFJJTlRfUVVPVEFf
V0FSTklORz15CkNPTkZJR19RVU9UQV9ERUJVRz15CkNPTkZJR19RVU9UQV9UUkVFPXkKIyBDT05G
SUdfUUZNVF9WMSBpcyBub3Qgc2V0CkNPTkZJR19RRk1UX1YyPXkKQ09ORklHX1FVT1RBQ1RMPXkK
IyBDT05GSUdfQVVUT0ZTNF9GUyBpcyBub3Qgc2V0CkNPTkZJR19GVVNFX0ZTPXkKIyBDT05GSUdf
Q1VTRSBpcyBub3Qgc2V0CgojCiMgQ2FjaGVzCiMKQ09ORklHX0ZTQ0FDSEU9eQojIENPTkZJR19G
U0NBQ0hFX1NUQVRTIGlzIG5vdCBzZXQKQ09ORklHX0ZTQ0FDSEVfSElTVE9HUkFNPXkKQ09ORklH
X0ZTQ0FDSEVfREVCVUc9eQpDT05GSUdfRlNDQUNIRV9PQkpFQ1RfTElTVD15CkNPTkZJR19DQUNI
RUZJTEVTPXkKIyBDT05GSUdfQ0FDSEVGSUxFU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19DQUNI
RUZJTEVTX0hJU1RPR1JBTT15CgojCiMgQ0QtUk9NL0RWRCBGaWxlc3lzdGVtcwojCiMgQ09ORklH
X0lTTzk2NjBfRlMgaXMgbm90IHNldApDT05GSUdfVURGX0ZTPXkKQ09ORklHX1VERl9OTFM9eQoK
IwojIERPUy9GQVQvTlQgRmlsZXN5c3RlbXMKIwpDT05GSUdfRkFUX0ZTPXkKQ09ORklHX01TRE9T
X0ZTPXkKQ09ORklHX1ZGQVRfRlM9eQpDT05GSUdfRkFUX0RFRkFVTFRfQ09ERVBBR0U9NDM3CkNP
TkZJR19GQVRfREVGQVVMVF9JT0NIQVJTRVQ9Imlzbzg4NTktMSIKQ09ORklHX05URlNfRlM9eQpD
T05GSUdfTlRGU19ERUJVRz15CiMgQ09ORklHX05URlNfUlcgaXMgbm90IHNldAoKIwojIFBzZXVk
byBmaWxlc3lzdGVtcwojCkNPTkZJR19QUk9DX0ZTPXkKIyBDT05GSUdfUFJPQ19LQ09SRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1BST0NfU1lTQ1RMIGlzIG5vdCBzZXQKQ09ORklHX1BST0NfUEFHRV9N
T05JVE9SPXkKQ09ORklHX1NZU0ZTPXkKIyBDT05GSUdfSFVHRVRMQkZTIGlzIG5vdCBzZXQKIyBD
T05GSUdfSFVHRVRMQl9QQUdFIGlzIG5vdCBzZXQKQ09ORklHX0NPTkZJR0ZTX0ZTPXkKQ09ORklH
X01JU0NfRklMRVNZU1RFTVM9eQpDT05GSUdfQURGU19GUz15CiMgQ09ORklHX0FERlNfRlNfUlcg
aXMgbm90IHNldApDT05GSUdfQUZGU19GUz15CiMgQ09ORklHX0hGU19GUyBpcyBub3Qgc2V0CkNP
TkZJR19IRlNQTFVTX0ZTPXkKIyBDT05GSUdfSEZTUExVU19GU19QT1NJWF9BQ0wgaXMgbm90IHNl
dApDT05GSUdfQkVGU19GUz15CkNPTkZJR19CRUZTX0RFQlVHPXkKQ09ORklHX0JGU19GUz15CkNP
TkZJR19FRlNfRlM9eQpDT05GSUdfTE9HRlM9eQpDT05GSUdfQ1JBTUZTPXkKQ09ORklHX1NRVUFT
SEZTPXkKQ09ORklHX1NRVUFTSEZTX0ZJTEVfQ0FDSEU9eQojIENPTkZJR19TUVVBU0hGU19GSUxF
X0RJUkVDVCBpcyBub3Qgc2V0CkNPTkZJR19TUVVBU0hGU19ERUNPTVBfU0lOR0xFPXkKIyBDT05G
SUdfU1FVQVNIRlNfREVDT01QX01VTFRJIGlzIG5vdCBzZXQKIyBDT05GSUdfU1FVQVNIRlNfREVD
T01QX01VTFRJX1BFUkNQVSBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZTX1hBVFRSIGlzIG5v
dCBzZXQKIyBDT05GSUdfU1FVQVNIRlNfWkxJQiBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZT
X0xaTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NRVUFTSEZTX1haIGlzIG5vdCBzZXQKQ09ORklHX1NR
VUFTSEZTXzRLX0RFVkJMS19TSVpFPXkKQ09ORklHX1NRVUFTSEZTX0VNQkVEREVEPXkKQ09ORklH
X1NRVUFTSEZTX0ZSQUdNRU5UX0NBQ0hFX1NJWkU9MwpDT05GSUdfVlhGU19GUz15CiMgQ09ORklH
X01JTklYX0ZTIGlzIG5vdCBzZXQKQ09ORklHX09NRlNfRlM9eQpDT05GSUdfSFBGU19GUz15CiMg
Q09ORklHX1FOWDRGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19RTlg2RlNfRlM9eQojIENPTkZJR19R
Tlg2RlNfREVCVUcgaXMgbm90IHNldAojIENPTkZJR19ST01GU19GUyBpcyBub3Qgc2V0CkNPTkZJ
R19QU1RPUkU9eQpDT05GSUdfUFNUT1JFX0NPTlNPTEU9eQojIENPTkZJR19QU1RPUkVfRlRSQUNF
IGlzIG5vdCBzZXQKQ09ORklHX1BTVE9SRV9SQU09eQpDT05GSUdfU1lTVl9GUz15CkNPTkZJR19V
RlNfRlM9eQojIENPTkZJR19VRlNfRlNfV1JJVEUgaXMgbm90IHNldAojIENPTkZJR19VRlNfREVC
VUcgaXMgbm90IHNldApDT05GSUdfRjJGU19GUz15CkNPTkZJR19GMkZTX1NUQVRfRlM9eQpDT05G
SUdfRjJGU19GU19YQVRUUj15CiMgQ09ORklHX0YyRlNfRlNfUE9TSVhfQUNMIGlzIG5vdCBzZXQK
IyBDT05GSUdfRjJGU19GU19TRUNVUklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX0YyRlNfQ0hFQ0tf
RlMgaXMgbm90IHNldApDT05GSUdfTkVUV09SS19GSUxFU1lTVEVNUz15CkNPTkZJR19OQ1BfRlM9
eQpDT05GSUdfTkNQRlNfUEFDS0VUX1NJR05JTkc9eQojIENPTkZJR19OQ1BGU19JT0NUTF9MT0NL
SU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfTkNQRlNfU1RST05HIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkNQRlNfTkZTX05TIGlzIG5vdCBzZXQKQ09ORklHX05DUEZTX09TMl9OUz15CkNPTkZJR19OQ1BG
U19TTUFMTERPUz15CkNPTkZJR19OQ1BGU19OTFM9eQojIENPTkZJR19OQ1BGU19FWFRSQVMgaXMg
bm90IHNldApDT05GSUdfTkxTPXkKQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiCiMgQ09O
RklHX05MU19DT0RFUEFHRV80MzcgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfNzM3
IGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV83NzU9eQpDT05GSUdfTkxTX0NPREVQQUdF
Xzg1MD15CkNPTkZJR19OTFNfQ09ERVBBR0VfODUyPXkKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1
NSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NTcgaXMgbm90IHNldAojIENPTkZJ
R19OTFNfQ09ERVBBR0VfODYwIGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV84NjE9eQpD
T05GSUdfTkxTX0NPREVQQUdFXzg2Mj15CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90
IHNldApDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15CkNPTkZJR19OTFNfQ09ERVBBR0VfODY1PXkK
Q09ORklHX05MU19DT0RFUEFHRV84NjY9eQpDT05GSUdfTkxTX0NPREVQQUdFXzg2OT15CkNPTkZJ
R19OTFNfQ09ERVBBR0VfOTM2PXkKQ09ORklHX05MU19DT0RFUEFHRV85NTA9eQojIENPTkZJR19O
TFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKQ09ORklHX05MU19DT0RFUEFHRV85NDk9eQpDT05G
SUdfTkxTX0NPREVQQUdFXzg3ND15CkNPTkZJR19OTFNfSVNPODg1OV84PXkKQ09ORklHX05MU19D
T0RFUEFHRV8xMjUwPXkKQ09ORklHX05MU19DT0RFUEFHRV8xMjUxPXkKIyBDT05GSUdfTkxTX0FT
Q0lJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfMSBpcyBub3Qgc2V0CiMgQ09ORklH
X05MU19JU084ODU5XzIgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfMz15CiMgQ09ORklH
X05MU19JU084ODU5XzQgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfNT15CiMgQ09ORklH
X05MU19JU084ODU5XzYgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfNz15CiMgQ09ORklH
X05MU19JU084ODU5XzkgaXMgbm90IHNldApDT05GSUdfTkxTX0lTTzg4NTlfMTM9eQpDT05GSUdf
TkxTX0lTTzg4NTlfMTQ9eQojIENPTkZJR19OTFNfSVNPODg1OV8xNSBpcyBub3Qgc2V0CiMgQ09O
RklHX05MU19LT0k4X1IgaXMgbm90IHNldApDT05GSUdfTkxTX0tPSThfVT15CiMgQ09ORklHX05M
U19NQUNfUk9NQU4gaXMgbm90IHNldApDT05GSUdfTkxTX01BQ19DRUxUSUM9eQpDT05GSUdfTkxT
X01BQ19DRU5URVVSTz15CkNPTkZJR19OTFNfTUFDX0NST0FUSUFOPXkKQ09ORklHX05MU19NQUNf
Q1lSSUxMSUM9eQpDT05GSUdfTkxTX01BQ19HQUVMSUM9eQpDT05GSUdfTkxTX01BQ19HUkVFSz15
CiMgQ09ORklHX05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0CkNPTkZJR19OTFNfTUFDX0lOVUlU
PXkKQ09ORklHX05MU19NQUNfUk9NQU5JQU49eQpDT05GSUdfTkxTX01BQ19UVVJLSVNIPXkKQ09O
RklHX05MU19VVEY4PXkKCiMKIyBLZXJuZWwgaGFja2luZwojCkNPTkZJR19UUkFDRV9JUlFGTEFH
U19TVVBQT1JUPXkKCiMKIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMKIwpDT05GSUdfREVGQVVM
VF9NRVNTQUdFX0xPR0xFVkVMPTQKCiMKIyBDb21waWxlLXRpbWUgY2hlY2tzIGFuZCBjb21waWxl
ciBvcHRpb25zCiMKQ09ORklHX0RFQlVHX0lORk89eQojIENPTkZJR19ERUJVR19JTkZPX1JFRFVD
RUQgaXMgbm90IHNldAojIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQK
Q09ORklHX0VOQUJMRV9NVVNUX0NIRUNLPXkKQ09ORklHX0ZSQU1FX1dBUk49MjA0OApDT05GSUdf
U1RSSVBfQVNNX1NZTVM9eQojIENPTkZJR19SRUFEQUJMRV9BU00gaXMgbm90IHNldAojIENPTkZJ
R19VTlVTRURfU1lNQk9MUyBpcyBub3Qgc2V0CkNPTkZJR19ERUJVR19GUz15CiMgQ09ORklHX0hF
QURFUlNfQ0hFQ0sgaXMgbm90IHNldApDT05GSUdfREVCVUdfU0VDVElPTl9NSVNNQVRDSD15CkNP
TkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQpDT05GSUdfRlJBTUVfUE9JTlRFUj15CkNP
TkZJR19ERUJVR19GT1JDRV9XRUFLX1BFUl9DUFU9eQojIENPTkZJR19NQUdJQ19TWVNSUSBpcyBu
b3Qgc2V0CkNPTkZJR19ERUJVR19LRVJORUw9eQoKIwojIE1lbW9yeSBEZWJ1Z2dpbmcKIwojIENP
TkZJR19ERUJVR19QQUdFQUxMT0MgaXMgbm90IHNldApDT05GSUdfREVCVUdfT0JKRUNUUz15CkNP
TkZJR19ERUJVR19PQkpFQ1RTX1NFTEZURVNUPXkKQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15
CiMgQ09ORklHX0RFQlVHX09CSkVDVFNfVElNRVJTIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX09C
SkVDVFNfV09SSz15CiMgQ09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldAoj
IENPTkZJR19ERUJVR19PQkpFQ1RTX1BFUkNQVV9DT1VOVEVSIGlzIG5vdCBzZXQKQ09ORklHX0RF
QlVHX09CSkVDVFNfRU5BQkxFX0RFRkFVTFQ9MQojIENPTkZJR19TTFVCX1NUQVRTIGlzIG5vdCBz
ZXQKQ09ORklHX0hBVkVfREVCVUdfS01FTUxFQUs9eQojIENPTkZJR19ERUJVR19LTUVNTEVBSyBp
cyBub3Qgc2V0CkNPTkZJR19ERUJVR19TVEFDS19VU0FHRT15CiMgQ09ORklHX0RFQlVHX1ZNIGlz
IG5vdCBzZXQKIyBDT05GSUdfREVCVUdfVklSVFVBTCBpcyBub3Qgc2V0CkNPTkZJR19ERUJVR19N
RU1PUllfSU5JVD15CkNPTkZJR19NRU1PUllfTk9USUZJRVJfRVJST1JfSU5KRUNUPXkKIyBDT05G
SUdfREVCVUdfUEVSX0NQVV9NQVBTIGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfREVCVUdfU1RBQ0tP
VkVSRkxPVz15CiMgQ09ORklHX0RFQlVHX1NUQUNLT1ZFUkZMT1cgaXMgbm90IHNldApDT05GSUdf
SEFWRV9BUkNIX0tNRU1DSEVDSz15CiMgQ09ORklHX0RFQlVHX1NISVJRIGlzIG5vdCBzZXQKCiMK
IyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5ncwojCkNPTkZJR19MT0NLVVBfREVURUNUT1I9eQpDT05G
SUdfSEFSRExPQ0tVUF9ERVRFQ1RPUj15CiMgQ09ORklHX0JPT1RQQVJBTV9IQVJETE9DS1VQX1BB
TklDIGlzIG5vdCBzZXQKQ09ORklHX0JPT1RQQVJBTV9IQVJETE9DS1VQX1BBTklDX1ZBTFVFPTAK
IyBDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUMgaXMgbm90IHNldApDT05GSUdfQk9P
VFBBUkFNX1NPRlRMT0NLVVBfUEFOSUNfVkFMVUU9MAojIENPTkZJR19ERVRFQ1RfSFVOR19UQVNL
IGlzIG5vdCBzZXQKIyBDT05GSUdfUEFOSUNfT05fT09QUyBpcyBub3Qgc2V0CkNPTkZJR19QQU5J
Q19PTl9PT1BTX1ZBTFVFPTAKQ09ORklHX1BBTklDX1RJTUVPVVQ9MAojIENPTkZJR19TQ0hFRF9E
RUJVRyBpcyBub3Qgc2V0CkNPTkZJR19TQ0hFRFNUQVRTPXkKQ09ORklHX1RJTUVSX1NUQVRTPXkK
CiMKIyBMb2NrIERlYnVnZ2luZyAoc3BpbmxvY2tzLCBtdXRleGVzLCBldGMuLi4pCiMKQ09ORklH
X0RFQlVHX1JUX01VVEVYRVM9eQpDT05GSUdfREVCVUdfUElfTElTVD15CkNPTkZJR19SVF9NVVRF
WF9URVNURVI9eQpDT05GSUdfREVCVUdfU1BJTkxPQ0s9eQpDT05GSUdfREVCVUdfTVVURVhFUz15
CkNPTkZJR19ERUJVR19XV19NVVRFWF9TTE9XUEFUSD15CkNPTkZJR19ERUJVR19MT0NLX0FMTE9D
PXkKQ09ORklHX1BST1ZFX0xPQ0tJTkc9eQpDT05GSUdfTE9DS0RFUD15CkNPTkZJR19MT0NLX1NU
QVQ9eQpDT05GSUdfREVCVUdfTE9DS0RFUD15CkNPTkZJR19ERUJVR19BVE9NSUNfU0xFRVA9eQoj
IENPTkZJR19ERUJVR19MT0NLSU5HX0FQSV9TRUxGVEVTVFMgaXMgbm90IHNldApDT05GSUdfVFJB
Q0VfSVJRRkxBR1M9eQpDT05GSUdfU1RBQ0tUUkFDRT15CkNPTkZJR19ERUJVR19LT0JKRUNUPXkK
IyBDT05GSUdfREVCVUdfV1JJVEVDT1VOVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX0xJU1Qg
aXMgbm90IHNldAojIENPTkZJR19ERUJVR19TRyBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX05P
VElGSUVSUyBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQK
CiMKIyBSQ1UgRGVidWdnaW5nCiMKIyBDT05GSUdfUFJPVkVfUkNVIGlzIG5vdCBzZXQKIyBDT05G
SUdfU1BBUlNFX1JDVV9QT0lOVEVSIGlzIG5vdCBzZXQKQ09ORklHX1JDVV9UT1JUVVJFX1RFU1Q9
eQpDT05GSUdfUkNVX1RPUlRVUkVfVEVTVF9SVU5OQUJMRT15CkNPTkZJR19SQ1VfQ1BVX1NUQUxM
X1RJTUVPVVQ9MjEKIyBDT05GSUdfUkNVX0NQVV9TVEFMTF9JTkZPIGlzIG5vdCBzZXQKQ09ORklH
X1JDVV9UUkFDRT15CiMgQ09ORklHX0RFQlVHX0JMT0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQKQ09O
RklHX05PVElGSUVSX0VSUk9SX0lOSkVDVElPTj15CkNPTkZJR19DUFVfTk9USUZJRVJfRVJST1Jf
SU5KRUNUPXkKQ09ORklHX1BNX05PVElGSUVSX0VSUk9SX0lOSkVDVD15CiMgQ09ORklHX0ZBVUxU
X0lOSkVDVElPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0xBVEVOQ1lUT1AgaXMgbm90IHNldApDT05G
SUdfQVJDSF9IQVNfREVCVUdfU1RSSUNUX1VTRVJfQ09QWV9DSEVDS1M9eQojIENPTkZJR19ERUJV
R19TVFJJQ1RfVVNFUl9DT1BZX0NIRUNLUyBpcyBub3Qgc2V0CkNPTkZJR19VU0VSX1NUQUNLVFJB
Q0VfU1VQUE9SVD15CkNPTkZJR19OT1BfVFJBQ0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fVFJB
Q0VSPXkKQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhfVFJBQ0VSPXkKQ09ORklHX0hBVkVfRlVO
Q1RJT05fR1JBUEhfRlBfVEVTVD15CkNPTkZJR19IQVZFX0ZVTkNUSU9OX1RSQUNFX01DT1VOVF9U
RVNUPXkKQ09ORklHX0hBVkVfRFlOQU1JQ19GVFJBQ0U9eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZU
UkFDRV9XSVRIX1JFR1M9eQpDT05GSUdfSEFWRV9GVFJBQ0VfTUNPVU5UX1JFQ09SRD15CkNPTkZJ
R19IQVZFX1NZU0NBTExfVFJBQ0VQT0lOVFM9eQpDT05GSUdfSEFWRV9GRU5UUlk9eQpDT05GSUdf
SEFWRV9DX1JFQ09SRE1DT1VOVD15CkNPTkZJR19UUkFDRVJfTUFYX1RSQUNFPXkKQ09ORklHX1RS
QUNFX0NMT0NLPXkKQ09ORklHX1JJTkdfQlVGRkVSPXkKQ09ORklHX0VWRU5UX1RSQUNJTkc9eQpD
T05GSUdfQ09OVEVYVF9TV0lUQ0hfVFJBQ0VSPXkKQ09ORklHX1JJTkdfQlVGRkVSX0FMTE9XX1NX
QVA9eQpDT05GSUdfVFJBQ0lORz15CkNPTkZJR19HRU5FUklDX1RSQUNFUj15CkNPTkZJR19UUkFD
SU5HX1NVUFBPUlQ9eQpDT05GSUdfRlRSQUNFPXkKQ09ORklHX0ZVTkNUSU9OX1RSQUNFUj15CiMg
Q09ORklHX0ZVTkNUSU9OX0dSQVBIX1RSQUNFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0lSUVNPRkZf
VFJBQ0VSIGlzIG5vdCBzZXQKQ09ORklHX1NDSEVEX1RSQUNFUj15CkNPTkZJR19GVFJBQ0VfU1lT
Q0FMTFM9eQpDT05GSUdfVFJBQ0VSX1NOQVBTSE9UPXkKQ09ORklHX1RSQUNFUl9TTkFQU0hPVF9Q
RVJfQ1BVX1NXQVA9eQpDT05GSUdfQlJBTkNIX1BST0ZJTEVfTk9ORT15CiMgQ09ORklHX1BST0ZJ
TEVfQU5OT1RBVEVEX0JSQU5DSEVTIGlzIG5vdCBzZXQKIyBDT05GSUdfUFJPRklMRV9BTExfQlJB
TkNIRVMgaXMgbm90IHNldApDT05GSUdfU1RBQ0tfVFJBQ0VSPXkKQ09ORklHX0JMS19ERVZfSU9f
VFJBQ0U9eQojIENPTkZJR19VUFJPQkVfRVZFTlQgaXMgbm90IHNldAojIENPTkZJR19QUk9CRV9F
VkVOVFMgaXMgbm90IHNldAojIENPTkZJR19EWU5BTUlDX0ZUUkFDRSBpcyBub3Qgc2V0CkNPTkZJ
R19GVU5DVElPTl9QUk9GSUxFUj15CiMgQ09ORklHX0ZUUkFDRV9TVEFSVFVQX1RFU1QgaXMgbm90
IHNldApDT05GSUdfUklOR19CVUZGRVJfQkVOQ0hNQVJLPXkKIyBDT05GSUdfUklOR19CVUZGRVJf
U1RBUlRVUF9URVNUIGlzIG5vdCBzZXQKCiMKIyBSdW50aW1lIFRlc3RpbmcKIwpDT05GSUdfTEtE
VE09eQpDT05GSUdfVEVTVF9MSVNUX1NPUlQ9eQojIENPTkZJR19CQUNLVFJBQ0VfU0VMRl9URVNU
IGlzIG5vdCBzZXQKIyBDT05GSUdfUkJUUkVFX1RFU1QgaXMgbm90IHNldApDT05GSUdfQVRPTUlD
NjRfU0VMRlRFU1Q9eQpDT05GSUdfVEVTVF9TVFJJTkdfSEVMUEVSUz15CkNPTkZJR19URVNUX0tT
VFJUT1g9eQojIENPTkZJR19ETUFfQVBJX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FNUExF
UyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0FSQ0hfS0dEQj15CkNPTkZJR19LR0RCPXkKQ09ORklH
X0tHREJfVEVTVFM9eQpDT05GSUdfS0dEQl9URVNUU19PTl9CT09UPXkKQ09ORklHX0tHREJfVEVT
VFNfQk9PVF9TVFJJTkc9IlYxRjEwMCIKQ09ORklHX0tHREJfTE9XX0xFVkVMX1RSQVA9eQojIENP
TkZJR19LR0RCX0tEQiBpcyBub3Qgc2V0CkNPTkZJR19TVFJJQ1RfREVWTUVNPXkKQ09ORklHX1g4
Nl9WRVJCT1NFX0JPT1RVUD15CiMgQ09ORklHX0VBUkxZX1BSSU5USyBpcyBub3Qgc2V0CiMgQ09O
RklHX1g4Nl9QVERVTVAgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19ST0RBVEEgaXMgbm90IHNl
dAojIENPTkZJR19ET1VCTEVGQVVMVCBpcyBub3Qgc2V0CiMgQ09ORklHX0RFQlVHX1RMQkZMVVNI
IGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX1NUUkVTUz15CkNPTkZJR19IQVZFX01NSU9UUkFDRV9T
VVBQT1JUPXkKQ09ORklHX0lPX0RFTEFZX1RZUEVfMFg4MD0wCkNPTkZJR19JT19ERUxBWV9UWVBF
XzBYRUQ9MQpDT05GSUdfSU9fREVMQVlfVFlQRV9VREVMQVk9MgpDT05GSUdfSU9fREVMQVlfVFlQ
RV9OT05FPTMKQ09ORklHX0lPX0RFTEFZXzBYODA9eQojIENPTkZJR19JT19ERUxBWV8wWEVEIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU9fREVMQVlfVURFTEFZIGlzIG5vdCBzZXQKIyBDT05GSUdfSU9f
REVMQVlfTk9ORSBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX0lPX0RFTEFZX1RZUEU9MApDT05G
SUdfREVCVUdfQk9PVF9QQVJBTVM9eQpDT05GSUdfQ1BBX0RFQlVHPXkKIyBDT05GSUdfT1BUSU1J
WkVfSU5MSU5JTkcgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1QgaXMgbm90
IHNldAojIENPTkZJR19YODZfREVCVUdfU1RBVElDX0NQVV9IQVMgaXMgbm90IHNldAoKIwojIFNl
Y3VyaXR5IG9wdGlvbnMKIwojIENPTkZJR19LRVlTIGlzIG5vdCBzZXQKQ09ORklHX1NFQ1VSSVRZ
X0RNRVNHX1JFU1RSSUNUPXkKQ09ORklHX1NFQ1VSSVRZPXkKQ09ORklHX1NFQ1VSSVRZRlM9eQpD
T05GSUdfU0VDVVJJVFlfTkVUV09SSz15CkNPTkZJR19TRUNVUklUWV9QQVRIPXkKQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZTz15CkNPTkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FDQ0VQVF9FTlRSWT0y
MDQ4CkNPTkZJR19TRUNVUklUWV9UT01PWU9fTUFYX0FVRElUX0xPRz0xMDI0CiMgQ09ORklHX1NF
Q1VSSVRZX1RPTU9ZT19PTUlUX1VTRVJTUEFDRV9MT0FERVIgaXMgbm90IHNldApDT05GSUdfU0VD
VVJJVFlfVE9NT1lPX1BPTElDWV9MT0FERVI9Ii9zYmluL3RvbW95by1pbml0IgpDT05GSUdfU0VD
VVJJVFlfVE9NT1lPX0FDVElWQVRJT05fVFJJR0dFUj0iL3NiaW4vaW5pdCIKQ09ORklHX1NFQ1VS
SVRZX0FQUEFSTU9SPXkKQ09ORklHX1NFQ1VSSVRZX0FQUEFSTU9SX0JPT1RQQVJBTV9WQUxVRT0x
CkNPTkZJR19TRUNVUklUWV9BUFBBUk1PUl9IQVNIPXkKIyBDT05GSUdfU0VDVVJJVFlfWUFNQSBp
cyBub3Qgc2V0CiMgQ09ORklHX0lNQSBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX1NFQ1VSSVRZ
X1RPTU9ZTz15CiMgQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlfQVBQQVJNT1IgaXMgbm90IHNldAoj
IENPTkZJR19ERUZBVUxUX1NFQ1VSSVRZX0RBQyBpcyBub3Qgc2V0CkNPTkZJR19ERUZBVUxUX1NF
Q1VSSVRZPSJ0b21veW8iCkNPTkZJR19YT1JfQkxPQ0tTPXkKQ09ORklHX0NSWVBUTz15CgojCiMg
Q3J5cHRvIGNvcmUgb3IgaGVscGVyCiMKQ09ORklHX0NSWVBUT19BTEdBUEk9eQpDT05GSUdfQ1JZ
UFRPX0FMR0FQSTI9eQpDT05GSUdfQ1JZUFRPX0FFQUQ9eQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkK
Q09ORklHX0NSWVBUT19CTEtDSVBIRVI9eQpDT05GSUdfQ1JZUFRPX0JMS0NJUEhFUjI9eQpDT05G
SUdfQ1JZUFRPX0hBU0g9eQpDT05GSUdfQ1JZUFRPX0hBU0gyPXkKQ09ORklHX0NSWVBUT19STkc9
eQpDT05GSUdfQ1JZUFRPX1JORzI9eQpDT05GSUdfQ1JZUFRPX1BDT01QMj15CkNPTkZJR19DUllQ
VE9fTUFOQUdFUj15CkNPTkZJR19DUllQVE9fTUFOQUdFUjI9eQpDT05GSUdfQ1JZUFRPX1VTRVI9
eQpDT05GSUdfQ1JZUFRPX01BTkFHRVJfRElTQUJMRV9URVNUUz15CkNPTkZJR19DUllQVE9fR0Yx
MjhNVUw9eQpDT05GSUdfQ1JZUFRPX05VTEw9eQpDT05GSUdfQ1JZUFRPX1BDUllQVD15CkNPTkZJ
R19DUllQVE9fV09SS1FVRVVFPXkKQ09ORklHX0NSWVBUT19DUllQVEQ9eQojIENPTkZJR19DUllQ
VE9fQVVUSEVOQyBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fQUJMS19IRUxQRVI9eQpDT05GSUdf
Q1JZUFRPX0dMVUVfSEVMUEVSX1g4Nj15CgojCiMgQXV0aGVudGljYXRlZCBFbmNyeXB0aW9uIHdp
dGggQXNzb2NpYXRlZCBEYXRhCiMKQ09ORklHX0NSWVBUT19DQ009eQpDT05GSUdfQ1JZUFRPX0dD
TT15CkNPTkZJR19DUllQVE9fU0VRSVY9eQoKIwojIEJsb2NrIG1vZGVzCiMKQ09ORklHX0NSWVBU
T19DQkM9eQpDT05GSUdfQ1JZUFRPX0NUUj15CiMgQ09ORklHX0NSWVBUT19DVFMgaXMgbm90IHNl
dApDT05GSUdfQ1JZUFRPX0VDQj15CkNPTkZJR19DUllQVE9fTFJXPXkKQ09ORklHX0NSWVBUT19Q
Q0JDPXkKQ09ORklHX0NSWVBUT19YVFM9eQoKIwojIEhhc2ggbW9kZXMKIwojIENPTkZJR19DUllQ
VE9fQ01BQyBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19ITUFDIGlzIG5vdCBzZXQKQ09ORklH
X0NSWVBUT19YQ0JDPXkKQ09ORklHX0NSWVBUT19WTUFDPXkKCiMKIyBEaWdlc3QKIwpDT05GSUdf
Q1JZUFRPX0NSQzMyQz15CkNPTkZJR19DUllQVE9fQ1JDMzJDX0lOVEVMPXkKIyBDT05GSUdfQ1JZ
UFRPX0NSQzMyIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19DUkMzMl9QQ0xNVUw9eQpDT05GSUdf
Q1JZUFRPX0NSQ1QxMERJRj15CiMgQ09ORklHX0NSWVBUT19DUkNUMTBESUZfUENMTVVMIGlzIG5v
dCBzZXQKQ09ORklHX0NSWVBUT19HSEFTSD15CkNPTkZJR19DUllQVE9fTUQ0PXkKQ09ORklHX0NS
WVBUT19NRDU9eQpDT05GSUdfQ1JZUFRPX01JQ0hBRUxfTUlDPXkKIyBDT05GSUdfQ1JZUFRPX1JN
RDEyOCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fUk1EMTYwPXkKQ09ORklHX0NSWVBUT19STUQy
NTY9eQpDT05GSUdfQ1JZUFRPX1JNRDMyMD15CkNPTkZJR19DUllQVE9fU0hBMT15CkNPTkZJR19D
UllQVE9fU0hBMV9TU1NFMz15CkNPTkZJR19DUllQVE9fU0hBMjU2X1NTU0UzPXkKQ09ORklHX0NS
WVBUT19TSEE1MTJfU1NTRTM9eQpDT05GSUdfQ1JZUFRPX1NIQTI1Nj15CkNPTkZJR19DUllQVE9f
U0hBNTEyPXkKQ09ORklHX0NSWVBUT19UR1IxOTI9eQpDT05GSUdfQ1JZUFRPX1dQNTEyPXkKIyBD
T05GSUdfQ1JZUFRPX0dIQVNIX0NMTVVMX05JX0lOVEVMIGlzIG5vdCBzZXQKCiMKIyBDaXBoZXJz
CiMKQ09ORklHX0NSWVBUT19BRVM9eQpDT05GSUdfQ1JZUFRPX0FFU19YODZfNjQ9eQpDT05GSUdf
Q1JZUFRPX0FFU19OSV9JTlRFTD15CkNPTkZJR19DUllQVE9fQU5VQklTPXkKQ09ORklHX0NSWVBU
T19BUkM0PXkKQ09ORklHX0NSWVBUT19CTE9XRklTSD15CkNPTkZJR19DUllQVE9fQkxPV0ZJU0hf
Q09NTU9OPXkKQ09ORklHX0NSWVBUT19CTE9XRklTSF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NB
TUVMTElBPXkKQ09ORklHX0NSWVBUT19DQU1FTExJQV9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NB
TUVMTElBX0FFU05JX0FWWF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FFU05JX0FW
WDJfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19DQVNUX0NPTU1PTj15CkNPTkZJR19DUllQVE9fQ0FT
VDU9eQpDT05GSUdfQ1JZUFRPX0NBU1Q1X0FWWF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0NBU1Q2
PXkKQ09ORklHX0NSWVBUT19DQVNUNl9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19ERVM9eQpD
T05GSUdfQ1JZUFRPX0ZDUllQVD15CiMgQ09ORklHX0NSWVBUT19LSEFaQUQgaXMgbm90IHNldApD
T05GSUdfQ1JZUFRPX1NBTFNBMjA9eQpDT05GSUdfQ1JZUFRPX1NBTFNBMjBfWDg2XzY0PXkKQ09O
RklHX0NSWVBUT19TRUVEPXkKQ09ORklHX0NSWVBUT19TRVJQRU5UPXkKQ09ORklHX0NSWVBUT19T
RVJQRU5UX1NTRTJfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19TRVJQRU5UX0FWWF9YODZfNjQ9eQpD
T05GSUdfQ1JZUFRPX1NFUlBFTlRfQVZYMl9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX1RFQT15CiMg
Q09ORklHX0NSWVBUT19UV09GSVNIIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19UV09GSVNIX0NP
TU1PTj15CkNPTkZJR19DUllQVE9fVFdPRklTSF9YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX1RXT0ZJ
U0hfWDg2XzY0XzNXQVk9eQojIENPTkZJR19DUllQVE9fVFdPRklTSF9BVlhfWDg2XzY0IGlzIG5v
dCBzZXQKCiMKIyBDb21wcmVzc2lvbgojCiMgQ09ORklHX0NSWVBUT19ERUZMQVRFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQ1JZUFRPX1pMSUIgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fTFpPIGlz
IG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0xaNCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fTFo0
SEM9eQoKIwojIFJhbmRvbSBOdW1iZXIgR2VuZXJhdGlvbgojCkNPTkZJR19DUllQVE9fQU5TSV9D
UFJORz15CkNPTkZJR19DUllQVE9fVVNFUl9BUEk9eQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX0hB
U0g9eQpDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1NLQ0lQSEVSPXkKIyBDT05GSUdfQ1JZUFRPX0hX
IGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfS1ZNPXkKQ09ORklHX0hBVkVfS1ZNX0lSUUNISVA9eQpD
T05GSUdfSEFWRV9LVk1fSVJRX1JPVVRJTkc9eQpDT05GSUdfSEFWRV9LVk1fRVZFTlRGRD15CkNP
TkZJR19LVk1fQVBJQ19BUkNISVRFQ1RVUkU9eQpDT05GSUdfS1ZNX01NSU89eQpDT05GSUdfS1ZN
X0FTWU5DX1BGPXkKQ09ORklHX0hBVkVfS1ZNX01TST15CkNPTkZJR19IQVZFX0tWTV9DUFVfUkVM
QVhfSU5URVJDRVBUPXkKQ09ORklHX0tWTV9WRklPPXkKQ09ORklHX1ZJUlRVQUxJWkFUSU9OPXkK
Q09ORklHX0tWTT15CkNPTkZJR19LVk1fSU5URUw9eQojIENPTkZJR19LVk1fQU1EIGlzIG5vdCBz
ZXQKIyBDT05GSUdfS1ZNX01NVV9BVURJVCBpcyBub3Qgc2V0CkNPTkZJR19CSU5BUllfUFJJTlRG
PXkKCiMKIyBMaWJyYXJ5IHJvdXRpbmVzCiMKQ09ORklHX1JBSUQ2X1BRPXkKQ09ORklHX0JJVFJF
VkVSU0U9eQpDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15CkNPTkZJR19HRU5FUklD
X1NUUk5MRU5fVVNFUj15CkNPTkZJR19HRU5FUklDX05FVF9VVElMUz15CkNPTkZJR19HRU5FUklD
X0ZJTkRfRklSU1RfQklUPXkKQ09ORklHX0dFTkVSSUNfUENJX0lPTUFQPXkKQ09ORklHX0dFTkVS
SUNfSU9NQVA9eQpDT05GSUdfR0VORVJJQ19JTz15CkNPTkZJR19BUkNIX1VTRV9DTVBYQ0hHX0xP
Q0tSRUY9eQpDT05GSUdfQ1JDX0NDSVRUPXkKQ09ORklHX0NSQzE2PXkKQ09ORklHX0NSQ19UMTBE
SUY9eQpDT05GSUdfQ1JDX0lUVV9UPXkKQ09ORklHX0NSQzMyPXkKQ09ORklHX0NSQzMyX1NFTEZU
RVNUPXkKQ09ORklHX0NSQzMyX1NMSUNFQlk4PXkKIyBDT05GSUdfQ1JDMzJfU0xJQ0VCWTQgaXMg
bm90IHNldAojIENPTkZJR19DUkMzMl9TQVJXQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JDMzJf
QklUIGlzIG5vdCBzZXQKQ09ORklHX0NSQzc9eQpDT05GSUdfTElCQ1JDMzJDPXkKQ09ORklHX0NS
Qzg9eQpDT05GSUdfQ1JDNjRfRUNNQT15CkNPTkZJR19SQU5ET00zMl9TRUxGVEVTVD15CkNPTkZJ
R19aTElCX0lORkxBVEU9eQpDT05GSUdfWkxJQl9ERUZMQVRFPXkKQ09ORklHX0xaT19DT01QUkVT
Uz15CkNPTkZJR19MWk9fREVDT01QUkVTUz15CkNPTkZJR19MWjRIQ19DT01QUkVTUz15CkNPTkZJ
R19MWjRfREVDT01QUkVTUz15CkNPTkZJR19YWl9ERUM9eQojIENPTkZJR19YWl9ERUNfWDg2IGlz
IG5vdCBzZXQKIyBDT05GSUdfWFpfREVDX1BPV0VSUEMgaXMgbm90IHNldApDT05GSUdfWFpfREVD
X0lBNjQ9eQpDT05GSUdfWFpfREVDX0FSTT15CkNPTkZJR19YWl9ERUNfQVJNVEhVTUI9eQpDT05G
SUdfWFpfREVDX1NQQVJDPXkKQ09ORklHX1haX0RFQ19CQ0o9eQojIENPTkZJR19YWl9ERUNfVEVT
VCBpcyBub3Qgc2V0CkNPTkZJR19ERUNPTVBSRVNTX0daSVA9eQpDT05GSUdfREVDT01QUkVTU19C
WklQMj15CkNPTkZJR19ERUNPTVBSRVNTX0xaTz15CkNPTkZJR19ERUNPTVBSRVNTX0xaND15CkNP
TkZJR19HRU5FUklDX0FMTE9DQVRPUj15CkNPTkZJR19SRUVEX1NPTE9NT049eQpDT05GSUdfUkVF
RF9TT0xPTU9OX0VOQzg9eQpDT05GSUdfUkVFRF9TT0xPTU9OX0RFQzg9eQpDT05GSUdfQlRSRUU9
eQpDT05GSUdfSEFTX0lPTUVNPXkKQ09ORklHX0hBU19JT1BPUlQ9eQpDT05GSUdfSEFTX0RNQT15
CkNPTkZJR19DUFVNQVNLX09GRlNUQUNLPXkKQ09ORklHX0NQVV9STUFQPXkKQ09ORklHX0RRTD15
CkNPTkZJR19OTEFUVFI9eQpDT05GSUdfQVJDSF9IQVNfQVRPTUlDNjRfREVDX0lGX1BPU0lUSVZF
PXkKIyBDT05GSUdfQVZFUkFHRSBpcyBub3Qgc2V0CiMgQ09ORklHX0NPUkRJQyBpcyBub3Qgc2V0
CiMgQ09ORklHX0REUiBpcyBub3Qgc2V0Cg==
--bcaec51d2998bb54ca04ef7d0f58
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec51d2998bb54ca04ef7d0f58--


From xen-devel-bounces@lists.xen.org Wed Jan 08 22:43:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11pz-0001CU-T2; Wed, 08 Jan 2014 22:42:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eric.dumazet@gmail.com>) id 1W11py-0001CP-TW
	for xen-devel@lists.xen.org; Wed, 08 Jan 2014 22:42:59 +0000
Received: from [193.109.254.147:43824] by server-12.bemta-14.messagelabs.com
	id 0B/23-13681-274DDC25; Wed, 08 Jan 2014 22:42:58 +0000
X-Env-Sender: eric.dumazet@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389220975!9678989!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28705 invoked from network); 8 Jan 2014 22:42:57 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 22:42:57 -0000
Received: by mail-pb0-f45.google.com with SMTP id rp16so2173746pbb.4
	for <xen-devel@lists.xen.org>; Wed, 08 Jan 2014 14:42:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:content-transfer-encoding:mime-version;
	bh=Gok7LNuqO7dGy4zI1ECeyo9ye4UBbxF/zxXlchAZcVo=;
	b=VrJxDnQh+nh1RhRbctRulacpDUQj2pdZeE1ljFKqqjuSpGrsias668EDMbj9BOlTPT
	28tOGl/2dcVWNmPIVkbv7WfHhcXvIHrolwFDl5ZBkR/luMtw9KPYpDeblgGPWyHo649W
	RWK7xQrlaZ+O/6AJU2SK5FhkVo6T9qPdOwVGMYBh6KdndEGqsEbJPI3S05DB8tTLL/Gj
	DZAAODcSNpRV/oMkcamIL+l7hEcmaAQnUUuxSnVJQT3o6FGg1mUNUm1nh6H7v7no/aNS
	deOD5eps0my7Dgpieih27UvtFNp2vNrujsypCNn+agtc6kgYBp/TZVLtujO0mLvrHhgR
	4dzg==
X-Received: by 10.67.22.38 with SMTP id hp6mr15964348pad.53.1389220975390;
	Wed, 08 Jan 2014 14:42:55 -0800 (PST)
Received: from [172.26.49.115] ([172.26.49.115])
	by mx.google.com with ESMTPSA id
	il15sm6255951pac.16.2014.01.08.14.42.54 for <multiple recipients>
	(version=SSLv3 cipher=RC4-SHA bits=128/128);
	Wed, 08 Jan 2014 14:42:54 -0800 (PST)
Message-ID: <1389220984.31367.22.camel@edumazet-glaptop2.roam.corp.google.com>
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Paul Durrant <paul.durrant@citrix.com>
Date: Wed, 08 Jan 2014 14:43:04 -0800
In-Reply-To: <1389189511-14568-2-git-send-email-paul.durrant@citrix.com>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
	<1389189511-14568-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: Evolution 3.2.3-0ubuntu6 
Mime-Version: 1.0
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, Eric Dumazet <edumazet@google.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 13:58 +0000, Paul Durrant wrote:
> This patch adds a function to set up the partial checksum offset for IP
> packets (and optionally re-calculate the pseudo-header checksum) into the
> core network code.
> The implementation was previously private and duplicated between xen-netback
> and xen-netfront, however it is not xen-specific and is potentially useful
> to any network driver.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: David Miller <davem@davemloft.net>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Veaceslav Falico <vfalico@redhat.com>
> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
> ---
>  include/linux/netdevice.h |    1 +
>  net/core/dev.c            |  271 +++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 272 insertions(+)

Is there any reason to put this in net/core/dev.c instead of
net/core/skbuff.c ?


Also, no inline should be used in a .c file

( skb_maybe_pull_tail )




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Jan 08 22:51:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:51:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11y5-0001vu-DD; Wed, 08 Jan 2014 22:51:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1W11y3-0001vL-Kd; Wed, 08 Jan 2014 22:51:19 +0000
Received: from [193.109.254.147:47051] by server-7.bemta-14.messagelabs.com id
	AC/5E-15500-666DDC25; Wed, 08 Jan 2014 22:51:18 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389221466!9619514!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE5NTIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26946 invoked from network); 8 Jan 2014 22:51:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 22:51:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; 
	d="asc'?scan'208";a="88938087"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Jan 2014 22:50:49 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 8 Jan 2014
	17:50:48 -0500
Message-ID: <1389221447.16457.24.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: <lars.kurth@xen.org>
Date: Wed, 8 Jan 2014 23:50:47 +0100
In-Reply-To: <52C2CD67.6050603@xen.org>
References: <52B03A2A.807@xen.org> <1388246226.15148.4.camel@Solace>
	<52C2CD67.6050603@xen.org>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	mirageos-devel@lists.xenproject.org,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [Publicity] Xen booth at FOSDEM : invitation to
 community members to help man the booth, show demos,
 have your hand-outs there, etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4596102129665510804=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4596102129665510804==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-W8K1qlr634eyxl2U9E1o"

--=-W8K1qlr634eyxl2U9E1o
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2013-12-31 at 13:57 +0000, Lars Kurth wrote:
> On 28/12/2013 15:57, Dario Faggioli wrote:
> > So I was wondering, we did not get that much of a response about this
> > (or perhaps we did, but not in public, in which case, ignore me).
> >
> > I know, holidays are not helping, etc. but, anyway, should we turn this
> > e-mail into a blog post to send out in early January? I'm up for it if
> > we think it's a good thing to do.
> It is probably the holidays: I know that Cloudius Systems (the makers
> of OSv) are interested in showing OSv running on top of Xen, and so are
> the XO guys.
>
> A blog post could be a good idea (tying it up with the DevRoom program,
> which has not yet been published).
>
Ok. I checked this out earlier today, and the program for Saturday is
out... Still waiting for Sunday.

As soon as that is available, I think, as you suggest, we should
blog about these two things, i.e.:

 - the devroom program
 - the chance of showing demos at the booth (mentioning which ones we
   already know will be performed)

> Another approach would be to reach out pro-actively to people we want
> to do demos (e.g. Samsung, people/projects with RT schedulers, ...).
> Unfortunately I will struggle to do this given that I am in China.
>
Right. I did that for Qubes, and it worked, as it did for OSv. I'll try
to ping some more people directly, let's see how that goes.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-W8K1qlr634eyxl2U9E1o
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLN1kcACgkQk4XaBE3IOsS/BQCeKkExjbBjYrffaDvQ8Y1Uljtg
7hsAnifpbMIrvyYu/l69uVvPIszvAalN
=WHwC
-----END PGP SIGNATURE-----

--=-W8K1qlr634eyxl2U9E1o--


--===============4596102129665510804==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4596102129665510804==--


From xen-devel-bounces@lists.xen.org Wed Jan 08 22:51:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jan 2014 22:51:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W11y7-0001wk-GD; Wed, 08 Jan 2014 22:51:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W11y5-0001vs-PL
	for xen-devel@lists.xensource.com; Wed, 08 Jan 2014 22:51:22 +0000
Received: from [193.109.254.147:20539] by server-10.bemta-14.messagelabs.com
	id 2F/D4-20752-966DDC25; Wed, 08 Jan 2014 22:51:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389221478!9673982!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 458 invoked from network); 8 Jan 2014 22:51:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Jan 2014 22:51:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,626,1384300800"; d="scan'208";a="91089386"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Jan 2014 22:51:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 17:51:17 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W11y1-0006dq-7S;
	Wed, 08 Jan 2014 22:51:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W11y0-0001Rx-5I;
	Wed, 08 Jan 2014 22:51:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24308-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 8 Jan 2014 22:51:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24308: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5101941793555310937=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5101941793555310937==
Content-Type: text/plain

flight 24308 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24308/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate  fail REGR. vs. 22644
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 22644

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 22644
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22644

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937
baseline version:
 linux                84dfcb758ba7cce52ef475ac96861a558e1a20ca

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ben Segall <bsegall@google.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris J Arges <chris.j.arges@canonical.com>
  Dan Williams <dan.j.williams@intel.com>
  Dave Airlie <airlied@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  JongHo Kim <furmuwon@gmail.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Larry Finger <Larry.Finger@lwfinger.net>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Mark Brown <broonie@linaro.org>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Neuling <mikey@neuling.org>
  Michele Baldessari <michele@acksyn.org>
  Nicholas <arealityfarbetween@googlemail.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul Moore <pmoore@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafał Miłecki <zajec5@gmail.com>
  Sage Weil <sage@inktank.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1000 lines long.)


--===============5101941793555310937==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5101941793555310937==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 00:39:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 00:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W13ep-0001wh-V1; Thu, 09 Jan 2014 00:39:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W13eo-0001wc-7H
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 00:39:34 +0000
Received: from [85.158.137.68:19401] by server-16.bemta-3.messagelabs.com id
	69/BC-26128-5CFEDC25; Thu, 09 Jan 2014 00:39:33 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389227971!7998047!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18307 invoked from network); 9 Jan 2014 00:39:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 00:39:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s090cPBD007129
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 00:38:26 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s090cOnO010860
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 00:38:24 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s090cNkw003784; Thu, 9 Jan 2014 00:38:23 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 08 Jan 2014 16:38:23 -0800
Date: Wed, 8 Jan 2014 16:38:22 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140108163822.30b6f87a@mantra.us.oracle.com>
In-Reply-To: <1389203062.27473.8.camel@kazak.uk.xensource.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 8 Jan 2014 17:44:22 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > Using volatile is almost always wrong. Why do you think it is
> > > > > needed here?
> > > > 
> > > > This was from Mukesh Rathor:
> > > > 
> > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > 
> > > > I saw no reason to make it volatile, but maybe "kdb" needs
> > > > this? Happy to change any way you want.
> > > 
> > > I'm not the maintainer but if I were I'd say drop the volatile
> > > and maybe mark it __read_mostly and perhaps __used too (since
> > > it's static it might otherwise get eliminated).
> > > 
> > > > > If anything this variable is exactly the opposite, i.e.
> > > > > __read_mostly or even const (given that I can't see anything
> > > > > which writes it I suppose this is a compile time setting?)
> > > > 
> > > > That has been how I have been testing it so far (changing the
> > > > source to set values).  Mukesh claims to be able to change it
> > > > at will.  Not sure how const may affect this.
> > 
> > If the idea is to use kdb itself to frob the value, then it does
> > need something to stop the compiler caching it.  This might even be
> > one of the few cases where 'volatile' actually DTRT; it would still
> > be more in keeping with Xen style to use an explicit read op (like
> > atomic_read()) where the value is consumed.
> 
> Is there any need to be asynchronously frobbing this value in the
> middle of a function within this file and expecting it to be

Yes. I can stop the machine via kdb or another debugger, change the
value during debug, and upon resuming it will start printing stuff.
Often this is needed when going through several iterations of a call
before the problem is seen. Making it volatile makes sure the compiler
loads it on every use. This is not in the main path, only the debugger
path, so the overhead should not be an issue.
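To illustrate the two options discussed above, here is a minimal
standalone sketch (not the actual Xen patch; `dbg_verbose`,
`dbg_verbose2`, `dbg_rw_mem_loop`, and `READ_ONCE_INT` are hypothetical
names). A plain non-volatile flag could legally be hoisted out of the
loop by the compiler, so a value poked in by kdb mid-run might never be
observed; `volatile` on the declaration, or an explicit volatile read
at each point of use (the style Tim suggests, akin to atomic_read()),
both force a fresh load on every iteration.

```c
/* Sketch only: a debug flag that an external debugger (e.g. kdb) may
 * change while execution is stopped. All names here are hypothetical. */

/* Option 1: volatile on the declaration. Every use reloads from
 * memory, so a change made while stopped in the debugger is seen as
 * soon as execution resumes. */
static volatile int dbg_verbose;

/* Option 2: keep the variable plain, and make each read explicitly
 * volatile at the point of use, keeping the cost visible in the code. */
static int dbg_verbose2;
#define READ_ONCE_INT(x) (*(volatile int *)&(x))

/* Counts how many iterations observed a non-zero flag. With either
 * option, the flag is re-read on every pass through the loop rather
 * than cached in a register. */
int dbg_rw_mem_loop(int iterations)
{
    int printed = 0;

    for (int i = 0; i < iterations; i++) {
        if (dbg_verbose)                /* reloaded every iteration */
            printed++;
        if (READ_ONCE_INT(dbg_verbose2)) /* explicit volatile read */
            printed++;
    }
    return printed;
}
```

Since the flag only gates extra log output on a debugger-only path, the
per-use reload cost is negligible, which matches the reasoning above.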

thanks,
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 03:12:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 03:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W162n-0007MW-O7; Thu, 09 Jan 2014 03:12:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W162l-0007MP-E1
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 03:12:27 +0000
Received: from [85.158.139.211:51644] by server-15.bemta-5.messagelabs.com id
	FC/C3-08490-A931EC25; Thu, 09 Jan 2014 03:12:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389237144!8681629!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13945 invoked from network); 9 Jan 2014 03:12:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 03:12:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,628,1384300800"; d="scan'208";a="91140257"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 03:12:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 22:12:22 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W162g-0007vk-6K;
	Thu, 09 Jan 2014 03:12:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W162g-0008ND-66;
	Thu, 09 Jan 2014 03:12:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24309-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 03:12:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24309: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24309 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24309/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 24298

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24298

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  025c1b755afc9a9f42f71ef167c20fdc616b1d2d
baseline version:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bob Liu <bob.liu@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 04:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 04:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W16qy-0002B2-5F; Thu, 09 Jan 2014 04:04:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W16qw-0002Ax-Ac
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 04:04:18 +0000
Received: from [193.109.254.147:40222] by server-14.bemta-14.messagelabs.com
	id 72/D5-12628-1CF1EC25; Thu, 09 Jan 2014 04:04:17 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389240255!9720792!1
From xen-devel-bounces@lists.xen.org Thu Jan 09 04:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 04:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W16qy-0002B2-5F; Thu, 09 Jan 2014 04:04:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W16qw-0002Ax-Ac
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 04:04:18 +0000
Received: from [193.109.254.147:40222] by server-14.bemta-14.messagelabs.com
	id 72/D5-12628-1CF1EC25; Thu, 09 Jan 2014 04:04:17 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389240255!9720792!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15927 invoked from network); 9 Jan 2014 04:04:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 04:04:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,628,1384300800"; d="scan'208";a="88997482"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 04:04:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 8 Jan 2014 23:04:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W16qr-0008Cn-0X;
	Thu, 09 Jan 2014 04:04:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W16qq-0004Ai-UA;
	Thu, 09 Jan 2014 04:04:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24311-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 04:04:12 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24311: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2590674171351718702=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2590674171351718702==
Content-Type: text/plain

flight 24311 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24311/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 22644

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 22644
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22644

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937
baseline version:
 linux                84dfcb758ba7cce52ef475ac96861a558e1a20ca

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ben Segall <bsegall@google.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris J Arges <chris.j.arges@canonical.com>
  Dan Williams <dan.j.williams@intel.com>
  Dave Airlie <airlied@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  JongHo Kim <furmuwon@gmail.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Larry Finger <Larry.Finger@lwfinger.net>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Mark Brown <broonie@linaro.org>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Neuling <mikey@neuling.org>
  Michele Baldessari <michele@acksyn.org>
  Nicholas <arealityfarbetween@googlemail.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul Moore <pmoore@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafał Miłecki <zajec5@gmail.com>
  Sage Weil <sage@inktank.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1000 lines long.)


--===============2590674171351718702==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2590674171351718702==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 08:03:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 08:03:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1AZh-0000y1-HN; Thu, 09 Jan 2014 08:02:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1W1AZg-0000xu-Da
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 08:02:44 +0000
Received: from [85.158.139.211:17763] by server-11.bemta-5.messagelabs.com id
	C9/36-23268-3A75EC25; Thu, 09 Jan 2014 08:02:43 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389254561!8705218!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7776 invoked from network); 9 Jan 2014 08:02:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 08:02:42 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0981cIH022394
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 08:01:39 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0981bpl029930
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 08:01:38 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0981bjk011239; Thu, 9 Jan 2014 08:01:37 GMT
Received: from [10.191.11.220] (/10.191.11.220)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 00:01:36 -0800
Message-ID: <52CE575F.9050303@oracle.com>
Date: Thu, 09 Jan 2014 16:01:35 +0800
From: Zhenzhong Duan <zhenzhong.duan@oracle.com>
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: boris.ostrovsky@oracle.com
Subject: [Xen-devel] Ask about status about 64 bit pci hotplug support on
	qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: zhenzhong.duan@oracle.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Maintainer

I recently added 64-bit PCI hotplug support for HVM guests.
I then found that XudongHao had previously sent a similar patch, but it
was not merged into qemu-xen-traditional.

http://lists.xen.org/archives/html/xen-devel/2012-08/msg01168.html

I would like to know what became of that patch.

thanks
zduan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 08:42:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 08:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1BBd-0003Ef-Cr; Thu, 09 Jan 2014 08:41:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1BBc-0003ES-0c
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 08:41:56 +0000
Received: from [85.158.137.68:63810] by server-11.bemta-3.messagelabs.com id
	BB/13-19379-3D06EC25; Thu, 09 Jan 2014 08:41:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389256912!8115155!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16371 invoked from network); 9 Jan 2014 08:41:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 08:41:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91195679"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 08:41:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 03:41:52 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1W1BBX-0000fb-OG;
	Thu, 09 Jan 2014 08:41:51 +0000
Message-ID: <1389256911.6917.36.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 9 Jan 2014 08:41:51 +0000
In-Reply-To: <20140108181007.GB75747@deinos.phlegethon.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108181007.GB75747@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 19:10 +0100, Tim Deegan wrote:
> At 17:44 +0000 on 08 Jan (1389199462), Ian Campbell wrote:
> > On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > > > here?
> > > > > 
> > > > > This was from Mukesh Rathor:
> > > > > 
> > > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > > 
> > > > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > > > to change any way you want.
> > > > 
> > > > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > > > mark it __read_mostly and perhaps __used too (since it's static it might
> > > > otherwise get eliminated).
> > > > 
> > > > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > > > even const (given that I can't see anything which writes it I suppose
> > > > > > this is a compile time setting?)
> > > > > 
> > > > > That has been how I have been testing it so far (changing the source
> > > > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > > > sure how const may affect this.
> > > 
> > > If the idea is to use kdb itself to frob the value, then it does need
> > > something to stop the compiler caching it.  This might even be one of
> > > the few cases where 'volatile' actually DTRT; it would still be more
> > > in keeping with Xen style to use an explicit read op (like
> > > atomic_read()) where the value is consumed.
> > 
> > Is there any need to be asynchronously frobbing this value in the middle
> > of a function within this file and expecting it to be reliable? I'd have
> > thought that changing the value and having it take affect on the next
> > debug event/hypercall/whatever would be what was wanted.
> 
> The variable is static and there's nothing in the file that updates
> it, so the compiler might drop it entirely.  Maybe __used__ would be
> good enough to stop the compiler dropping all reads, but I'm not sure.

Isn't that exactly what __used (or __used__) is for?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108181007.GB75747@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 19:10 +0100, Tim Deegan wrote:
> At 17:44 +0000 on 08 Jan (1389199462), Ian Campbell wrote:
> > On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > > > here?
> > > > > 
> > > > > This was from Mukesh Rathor:
> > > > > 
> > > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > > 
> > > > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > > > to change any way you want.
> > > > 
> > > > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > > > mark it __read_mostly and perhaps __used too (since it's static it might
> > > > otherwise get eliminated).
> > > > 
> > > > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > > > even const (given that I can't see anything which writes it I suppose
> > > > > > this is a compile time setting?)
> > > > > 
> > > > > That has been how I have been testing it so far (changing the source
> > > > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > > > sure how const may affect this.
> > > 
> > > If the idea is to use kdb itself to frob the value, then it does need
> > > something to stop the compiler caching it.  This might even be one of
> > > the few cases where 'volatile' actually DTRT; it would still be more
> > > in keeping with Xen style to use an explicit read op (like
> > > atomic_read()) where the value is consumed.
> > 
> > Is there any need to be asynchronously frobbing this value in the middle
> > of a function within this file and expecting it to be reliable? I'd have
> > thought that changing the value and having it take effect on the next
> > debug event/hypercall/whatever would be what was wanted.
> 
> The variable is static and there's nothing in the file that updates
> it, so the compiler might drop it entirely.  Maybe __used__ would be
> good enough to stop the compiler dropping all reads, but I'm not sure.

Isn't that exactly what __used (or __used__) is for?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 08:59:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 08:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1BSe-00059I-Db; Thu, 09 Jan 2014 08:59:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W1BSc-00059B-Tg
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 08:59:31 +0000
Received: from [85.158.139.211:11333] by server-11.bemta-5.messagelabs.com id
	FB/F2-23268-2F46EC25; Thu, 09 Jan 2014 08:59:30 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389257967!8677753!1
X-Originating-IP: [209.85.160.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26535 invoked from network); 9 Jan 2014 08:59:29 -0000
Received: from mail-pb0-f54.google.com (HELO mail-pb0-f54.google.com)
	(209.85.160.54)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 08:59:29 -0000
Received: by mail-pb0-f54.google.com with SMTP id un15so2769684pbc.27
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 00:59:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=pcKwMdb+YrxgWqY2PTzjvGO/vXDUPRr7a+Gu03BVj3g=;
	b=0bha/q/4xvHlgrYE1JVpR8waW/niO5k/UK7pr9+29AxKg2bPBT9E2NLa6YqlpBWTcG
	xUEa+RJLDcKZ6NrmDjlIxiUkGHhiCDAOo8BSaeLmEOaoTEmrSAknybQVn2x8cQ0R9wtJ
	KHH+vd+EqdxpTzpdegnsHIXX3wOTdzxG52yB8SSb1vx1XpyWQLrtPU5ny8ynWaL6sXiV
	fQZd7/QPJvP6Ykkq6fnfiXHdRZySEYh2Wnqw1DdlberlagkPIWNHLlbtTRoIvLfituSN
	gjxBHx7J0yuGk+lP+Lr8ANEoynl5OblogrRInlSWrRpIDCus3A6ugchhY8D458O4Wd+t
	BMaw==
X-Received: by 10.66.226.46 with SMTP id rp14mr2248063pac.133.1389257967399;
	Thu, 09 Jan 2014 00:59:27 -0800 (PST)
Received: from localhost ([220.202.153.59])
	by mx.google.com with ESMTPSA id nv7sm8082651pbc.31.2014.01.09.00.59.23
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 00:59:26 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Thu,  9 Jan 2014 16:59:11 +0800
Message-Id: <1389257951-1650-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: Chen Baozi <baozich@gmail.com>
Subject: [Xen-devel] [PATCH] xen/arm64: Avoid trying to map of paddr(start)
	as a block entry in boot_pgtable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Block entries are only supported from the first-level table onwards. Since the
physical address of 'start' is usually below 512GB, this section of code would
never be executed and is therefore useless. Remove it to avoid a potential bug
should it ever try to map paddr(start) as a level-0 block entry by accident.

As a consequence, we need to specify that the Xen image must be loaded at an
address below 512GB.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/arm64/head.S | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..bebddf0 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -266,18 +266,7 @@ skip_bss:
         orr   x2, x1, x3             /*       + rights for linear PT */
         str   x2, [x4, #0]           /* Map it in slot 0 */
 
-        /* ... map of paddr(start) in boot_pgtable */
-        lsr   x1, x19, #39           /* Offset of base paddr in boot_pgtable */
-        cbz   x1, 1f                 /* It's in slot 0, map in boot_first
-                                      * or boot_second later on */
-
-        lsl   x2, x1, #39            /* Base address for 512GB mapping */
-        mov   x3, #PT_MEM            /* x2 := Section mapping */
-        orr   x2, x2, x3
-        lsl   x1, x1, #3             /* x1 := Slot offset */
-        str   x2, [x4, x1]           /* Mapping of paddr(start)*/
-
-1:      /* Setup boot_first: */
+        /* Setup boot_first: */
         ldr   x4, =boot_first        /* Next level into boot_first */
         add   x4, x4, x20            /* x4 := paddr(boot_first) */
 
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:21:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1BnD-0006AX-Tc; Thu, 09 Jan 2014 09:20:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1BnC-0006AS-7L
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 09:20:46 +0000
Received: from [85.158.139.211:28791] by server-5.bemta-5.messagelabs.com id
	5C/F6-14928-DE96EC25; Thu, 09 Jan 2014 09:20:45 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389259242!8684229!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28311 invoked from network); 9 Jan 2014 09:20:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:20:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91204335"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 09:20:34 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 9 Jan 2014 04:20:34 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 10:20:32 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "Jonathan
	Davies" <Jonathan.Davies@citrix.com>
Thread-Topic: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX path
Thread-Index: AQHPDLl4Wu4tZBxeyUWvbT3V/hLkQ5p8HC/w
Date: Thu, 9 Jan 2014 09:20:32 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0200C2A@AMSPEX01CL01.citrite.net>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
	<52CDC45D.3050509@citrix.com>
In-Reply-To: <52CDC45D.3050509@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Zoltan Kiss
> Sent: 08 January 2014 21:34
> To: Ian Campbell; Wei Liu; xen-devel@lists.xenproject.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Jonathan Davies
> Cc: Zoltan Kiss; Paul Durrant
> Subject: Re: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX
> path
> 
> I just realized when answering Ma's mail that this doesn't cause the
> desired effect after Paul's flow control improvement: starting the queue
> doesn't drop the packets which cannot fit in the ring, which in fact
> might not be good.

No, that would not be good.

> We are adding the skb to vif->rx_queue even when
> xenvif_rx_ring_slots_available(vif, min_slots_needed) said there is no
> space for that. Or am I missing something? Paul?
> 

That's correct. Part of the flow control improvement was to get rid of needless packet drops. For your purposes, you basically need to avoid using the queuing discipline and take packets into netback's vif->rx_queue regardless of the state of the shared ring so that you can drop them if they get beyond a certain age. So, perhaps you should never stop the netif queue, place an upper limit on vif->rx_queue (either packet or byte count) and drop when that is exceeded (i.e. mimicking pfifo or bfifo internally).

  Paul

> Zoli
> 
> On 08/01/14 00:10, Zoltan Kiss wrote:
> > A malicious or buggy guest can leave its queue filled indefinitely, in which
> > case the qdisc starts to queue packets for that VIF. If those packets came
> > from another guest, they can block its slots and prevent shutdown. To avoid
> > that, we make sure the queue is drained every 10 seconds.
> ...
> > diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> > index 95fcd63..ce032f9 100644
> > --- a/drivers/net/xen-netback/interface.c
> > +++ b/drivers/net/xen-netback/interface.c
> > @@ -114,6 +114,16 @@ static irqreturn_t xenvif_interrupt(int irq, void
> *dev_id)
> >   	return IRQ_HANDLED;
> >   }
> >
> > +static void xenvif_wake_queue(unsigned long data)
> > +{
> > +	struct xenvif *vif = (struct xenvif *)data;
> > +
> > +	if (netif_queue_stopped(vif->dev)) {
> > +		netdev_err(vif->dev, "draining TX queue\n");
> > +		netif_wake_queue(vif->dev);
> > +	}
> > +}
> > +
> >   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >   {
> >   	struct xenvif *vif = netdev_priv(dev);
> > @@ -143,8 +153,13 @@ static int xenvif_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
> >   	 * then turn off the queue to give the ring a chance to
> >   	 * drain.
> >   	 */
> > -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> > +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> > +		vif->wake_queue.function = xenvif_wake_queue;
> > +		vif->wake_queue.data = (unsigned long)vif;
> >   		xenvif_stop_queue(vif);
> > +		mod_timer(&vif->wake_queue,
> > +			jiffies + rx_drain_timeout_jiffies);
> > +	}
> >
> >   	skb_queue_tail(&vif->rx_queue, skb);
> >   	xenvif_kick_thread(vif);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:22:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Bp3-0006FW-GD; Thu, 09 Jan 2014 09:22:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1Bp2-0006FQ-Ax
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 09:22:40 +0000
Received: from [85.158.143.35:4203] by server-1.bemta-4.messagelabs.com id
	89/0B-02132-F5A6EC25; Thu, 09 Jan 2014 09:22:39 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389259357!10616778!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 808 invoked from network); 9 Jan 2014 09:22:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:22:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89059226"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 09:22:20 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 9 Jan 2014 04:22:19 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 10:22:18 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Thread-Topic: [PATCH net-next 1/3] net: add skb_checksum_setup
Thread-Index: AQHPDHm9fXvWpC/Pr0aGMlSTugjoIZp7XEQAgADDExA=
Date: Thu, 9 Jan 2014 09:22:17 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0200C83@AMSPEX01CL01.citrite.net>
References: <1389189511-14568-1-git-send-email-paul.durrant@citrix.com>
	<1389189511-14568-2-git-send-email-paul.durrant@citrix.com>
	<1389220984.31367.22.camel@edumazet-glaptop2.roam.corp.google.com>
In-Reply-To: <1389220984.31367.22.camel@edumazet-glaptop2.roam.corp.google.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Eric Dumazet <edumazet@google.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Eric Dumazet [mailto:eric.dumazet@gmail.com]
> Sent: 08 January 2014 22:43
> To: Paul Durrant
> Cc: netdev@vger.kernel.org; xen-devel@lists.xen.org; David Miller; Eric
> Dumazet; Veaceslav Falico; Alexander Duyck; Nicolas Dichtel
> Subject: Re: [PATCH net-next 1/3] net: add skb_checksum_setup
> 
> On Wed, 2014-01-08 at 13:58 +0000, Paul Durrant wrote:
> > This patch adds a function to set up the partial checksum offset for IP
> > packets (and optionally re-calculate the pseudo-header checksum) into the
> > core network code.
> > The implementation was previously private and duplicated between xen-
> netback
> > and xen-netfront, however it is not xen-specific and is potentially useful
> > to any network driver.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Cc: David Miller <davem@davemloft.net>
> > Cc: Eric Dumazet <edumazet@google.com>
> > Cc: Veaceslav Falico <vfalico@redhat.com>
> > Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> > Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
> > ---
> >  include/linux/netdevice.h |    1 +
> >  net/core/dev.c            |  271
> +++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 272 insertions(+)
> 
> Is there any reason to put this in net/core/dev.c instead of
> net/core/skbuff.c ?
> 

No, no reason. I just wasn't sure which was the better place. I'll put it in skbuff.c if that is more appropriate.

> 
> Also, no inline should be used in a .c file
> 
> ( skb_maybe_pull_tail )
> 

Ok. I'll fix that.

Thanks,

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:34:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Bzu-0007SZ-6S; Thu, 09 Jan 2014 09:33:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Bzs-0007SU-Lw
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 09:33:52 +0000
Received: from [85.158.143.35:10939] by server-2.bemta-4.messagelabs.com id
	D0/95-11386-FFC6EC25; Thu, 09 Jan 2014 09:33:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389260030!10466090!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15689 invoked from network); 9 Jan 2014 09:33:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:33:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91207238"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 09:33:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	04:33:49 -0500
Message-ID: <1389260028.27473.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Thu, 9 Jan 2014 09:33:48 +0000
In-Reply-To: <1389257951-1650-1-git-send-email-baozich@gmail.com>
References: <1389257951-1650-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm64: Avoid trying to map of
 paddr(start) as a block entry in boot_pgtable
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 16:59 +0800, Chen Baozi wrote:
> Block entries are only supported after the first-level table. Since the physical
> address of 'start' is usually less than 512GB,

Thanks, but I don't believe this is universally true. I've not been able
to check but I am reasonably sure that the architectural maximum phys
addr on v8 is more than 2^39.

We've already seen a processor whose RAM starts at 128GB; it's not
beyond the realm of possibility that another processor might map it
higher up.

>  this section of code wouldn't
> be executed and is therefore useless. Remove this code to avoid a
> potential bug if it tries to map paddr(start) as a level-0 block entry by
> accident.

I think this bug should be fixed rather than just deleting the code.
This probably means we need boot_first to be split into two pages in
order to support both the 1:1 mapping and the final virtual mapping.

I think the fix for this can wait for 4.5 since processors with RAM
above 512GB are not something we've actually seen in reality, and they
are broken whether we remove the existing code or not so sticking with
the status quo is the lowest risk WRT accidentally breaking existing
systems.

> Thus, we need to specify that the xen image should be loaded at an address less
> than 512GB.

> Signed-off-by: Chen Baozi <baozich@gmail.com>
> ---
>  xen/arch/arm/arm64/head.S | 13 +------------
>  1 file changed, 1 insertion(+), 12 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 31afdd0..bebddf0 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -266,18 +266,7 @@ skip_bss:
>          orr   x2, x1, x3             /*       + rights for linear PT */
>          str   x2, [x4, #0]           /* Map it in slot 0 */
>  
> -        /* ... map of paddr(start) in boot_pgtable */
> -        lsr   x1, x19, #39           /* Offset of base paddr in boot_pgtable */
> -        cbz   x1, 1f                 /* It's in slot 0, map in boot_first
> -                                      * or boot_second later on */
> -
> -        lsl   x2, x1, #39            /* Base address for 512GB mapping */
> -        mov   x3, #PT_MEM            /* x2 := Section mapping */
> -        orr   x2, x2, x3
> -        lsl   x1, x1, #3             /* x1 := Slot offset */
> -        str   x2, [x4, x1]           /* Mapping of paddr(start)*/
> -
> -1:      /* Setup boot_first: */
> +        /* Setup boot_first: */
>          ldr   x4, =boot_first        /* Next level into boot_first */
>          add   x4, x4, x20            /* x4 := paddr(boot_first) */
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1C3X-0007eu-T8; Thu, 09 Jan 2014 09:37:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1C3W-0007eo-0w
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 09:37:38 +0000
Received: from [85.158.137.68:37763] by server-3.bemta-3.messagelabs.com id
	41/CA-10658-1ED6EC25; Thu, 09 Jan 2014 09:37:37 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389260254!6935058!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29623 invoked from network); 9 Jan 2014 09:37:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:37:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89063418"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 09:37:06 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 9 Jan 2014 04:37:05 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 10:35:50 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: David Miller <davem@davemloft.net>, "majieyue@gmail.com"
	<majieyue@gmail.com>
Thread-Topic: [PATCH net] xen-netback: fix vif tx queue race in
	xenvif_rx_interrupt
Thread-Index: AQHPDKdrWDNme7NaEU+wedx505SOepp7MZuAgADw7qA=
Date: Thu, 9 Jan 2014 09:35:49 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0200D0D@AMSPEX01CL01.citrite.net>
References: <1389209061-29494-1-git-send-email-jieyue.majy@alibaba-inc.com>
	<20140108.151139.4522881609697040.davem@davemloft.net>
In-Reply-To: <20140108.151139.4522881609697040.davem@davemloft.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "yingbin.wangyb@alibaba-inc.com" <yingbin.wangyb@alibaba-inc.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"jieyue.majy@alibaba-inc.com" <jieyue.majy@alibaba-inc.com>,
	"tienan.ftn@alibaba-inc.com" <tienan.ftn@alibaba-inc.com>,
	David Vrabel <david.vrabel@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net] xen-netback: fix vif tx queue race in
 xenvif_rx_interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: netdev-owner@vger.kernel.org [mailto:netdev-
> owner@vger.kernel.org] On Behalf Of David Miller
> Sent: 08 January 2014 20:12
> To: majieyue@gmail.com
> Cc: netdev@vger.kernel.org; xen-devel@lists.xen.org; jieyue.majy@alibaba-
> inc.com; yingbin.wangyb@alibaba-inc.com; tienan.ftn@alibaba-inc.com; Wei
> Liu; Ian Campbell; David Vrabel
> Subject: Re: [PATCH net] xen-netback: fix vif tx queue race in
> xenvif_rx_interrupt
> 
> From: Ma JieYue <majieyue@gmail.com>
> Date: Thu,  9 Jan 2014 03:24:21 +0800
> 
> > -	if (xenvif_rx_schedulable(vif))
> > +	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
> 
> I do not see anything which prevents a netif_stop_queue() call from
> happening
> between these two tests in another thread of control.
> 
> This therefore looks like a bandaid and not a real fix.

My fix "xen-netback: improve guest-receive-side flow control" in net-next will make this change irrelevant as it completely removes all this somewhat fragile code.

  Paul

> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:46:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CC2-0008Qs-3N; Thu, 09 Jan 2014 09:46:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1CC1-0008Qn-9m
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 09:46:25 +0000
Received: from [85.158.137.68:43349] by server-10.bemta-3.messagelabs.com id
	AC/19-23989-0FF6EC25; Thu, 09 Jan 2014 09:46:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389260457!6936072!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26996 invoked from network); 9 Jan 2014 09:40:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:40:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89064211"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 09:40:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	04:40:55 -0500
Message-ID: <1389260454.27473.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <zhenzhong.duan@oracle.com>
Date: Thu, 9 Jan 2014 09:40:54 +0000
In-Reply-To: <52CE575F.9050303@oracle.com>
References: <52CE575F.9050303@oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Xudong Hao <xudong.hao@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Ask about status about 64 bit pci hotplug support
 on qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 16:01 +0800, Zhenzhong Duan wrote:
> Hi Maintainer
> 
> I recently added 64-bit PCI hotplug support for HVM guests.
> Then I found that Xudong Hao had previously sent a similar patch, but it
> wasn't merged into qemu-xen-traditional.

Stefano is not the maintainer of this tree, Ian Jackson is. On the other
hand the patch you link to is against qemu-xen, which Stefano does
maintain, so I'm a bit confused.

Perhaps you should also have CCd Xudong? I've done that too.

This may also relate to http://bugs.xenproject.org/xen/bug/28 ?

Ian.

> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01168.html
> 
> I am interested in the result about that patch.
> 
> thanks
> zduan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:48:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CDq-000050-Kk; Thu, 09 Jan 2014 09:48:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1CDp-00004s-75
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 09:48:17 +0000
Received: from [85.158.139.211:24294] by server-8.bemta-5.messagelabs.com id
	AD/4A-29838-0607EC25; Thu, 09 Jan 2014 09:48:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389260893!8739762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16414 invoked from network); 9 Jan 2014 09:48:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:48:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89066468"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 09:48:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 04:48:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1CDj-0001Yz-Gv;
	Thu, 09 Jan 2014 09:48:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1CDj-0005az-00;
	Thu, 09 Jan 2014 09:48:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24315-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 09:48:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24315: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4357161777868034979=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4357161777868034979==
Content-Type: text/plain

flight 24315 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24315/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail like 24312-bisect
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail like 22644
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 22644
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22644

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937
baseline version:
 linux                84dfcb758ba7cce52ef475ac96861a558e1a20ca

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ben Segall <bsegall@google.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris J Arges <chris.j.arges@canonical.com>
  Dan Williams <dan.j.williams@intel.com>
  Dave Airlie <airlied@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  JongHo Kim <furmuwon@gmail.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Larry Finger <Larry.Finger@lwfinger.net>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Mark Brown <broonie@linaro.org>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Neuling <mikey@neuling.org>
  Michele Baldessari <michele@acksyn.org>
  Nicholas <arealityfarbetween@googlemail.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul Moore <pmoore@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafał Miłecki <zajec5@gmail.com>
  Sage Weil <sage@inktank.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ branch=linux-3.4
+ revision=94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 94f578e6aba14bb2aeb00db2e7f6e5f704fee937:tested/linux-3.4
Counting objects: 338, done.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (242/242), 41.36 KiB, done.
Total 242 (delta 195), reused 242 (delta 195)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   84dfcb7..94f578e  94f578e6aba14bb2aeb00db2e7f6e5f704fee937 -> tested/linux-3.4
+ exit 0


--===============4357161777868034979==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4357161777868034979==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:48:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CDq-000050-Kk; Thu, 09 Jan 2014 09:48:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1CDp-00004s-75
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 09:48:17 +0000
Received: from [85.158.139.211:24294] by server-8.bemta-5.messagelabs.com id
	AD/4A-29838-0607EC25; Thu, 09 Jan 2014 09:48:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389260893!8739762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16414 invoked from network); 9 Jan 2014 09:48:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:48:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89066468"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 09:48:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 04:48:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1CDj-0001Yz-Gv;
	Thu, 09 Jan 2014 09:48:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1CDj-0005az-00;
	Thu, 09 Jan 2014 09:48:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24315-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 09:48:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24315: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4357161777868034979=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4357161777868034979==
Content-Type: text/plain

flight 24315 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24315/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail like 24312-bisect
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail like 22644
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 22644
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22644

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937
baseline version:
 linux                84dfcb758ba7cce52ef475ac96861a558e1a20ca

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ben Segall <bsegall@google.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  BjÃ¸rn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris J Arges <chris.j.arges@canonical.com>
  Dan Williams <dan.j.williams@intel.com>
  Dave Airlie <airlied@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  JongHo Kim <furmuwon@gmail.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Larry Finger <Larry.Finger@lwfinger.net>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Mark Brown <broonie@linaro.org>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Neuling <mikey@neuling.org>
  Michele Baldessari <michele@acksyn.org>
  Nicholas <arealityfarbetween@googlemail.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul Moore <pmoore@redhat.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafał Miłecki <zajec5@gmail.com>
  Sage Weil <sage@inktank.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ branch=linux-3.4
+ revision=94f578e6aba14bb2aeb00db2e7f6e5f704fee937
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 94f578e6aba14bb2aeb00db2e7f6e5f704fee937:tested/linux-3.4
Counting objects: 338, done.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (242/242), 41.36 KiB, done.
Total 242 (delta 195), reused 242 (delta 195)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   84dfcb7..94f578e  94f578e6aba14bb2aeb00db2e7f6e5f704fee937 -> tested/linux-3.4
+ exit 0


--===============4357161777868034979==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4357161777868034979==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 09:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 09:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1COS-0001E2-NH; Thu, 09 Jan 2014 09:59:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1COQ-0001Dx-TB
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 09:59:15 +0000
Received: from [85.158.139.211:2373] by server-2.bemta-5.messagelabs.com id
	A9/00-29392-2F27EC25; Thu, 09 Jan 2014 09:59:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389261551!8527421!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10630 invoked from network); 9 Jan 2014 09:59:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 09:59:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91213967"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 09:59:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	04:59:10 -0500
Message-ID: <1389261548.27473.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Thu, 9 Jan 2014 09:59:08 +0000
In-Reply-To: <20140108163822.30b6f87a@mantra.us.oracle.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 16:38 -0800, Mukesh Rathor wrote:
> On Wed, 8 Jan 2014 17:44:22 +0000
> Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > > Using volatile is almost always wrong. Why do you think it is
> > > > > > needed here?
> > > > > 
> > > > > This was from Mukesh Rathor:
> > > > > 
> > > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > > 
> > > > > I saw no reason to make it volatile, but maybe "kdb" needs
> > > > > this? Happy to change any way you want.
> > > > 
> > > > I'm not the maintainer but if I were I'd say drop the volatile
> > > > and maybe mark it __read_mostly and perhaps __used too (since
> > > > it's static it might otherwise get eliminated).
> > > > 
> > > > > > If anything this variable is exactly the opposite, i.e.
> > > > > > __read_mostly or even const (given that I can't see anything
> > > > > > which writes it I suppose this is a compile time setting?)
> > > > > 
> > > > > That has been how I have been testing it so far (changing the
> > > > > source to set values).  Mukesh claims to be able to change it
> > > > > at will.  Not sure how const may affect this.
> > > 
> > > If the idea is to use kdb itself to frob the value, then it does
> > > need something to stop the compiler caching it.  This might even be
> > > one of the few cases where 'volatile' actually DTRT; it would still
> > > be more in keeping with Xen style to use an explicit read op (like
> > > atomic_read()) where the value is consumed.
> > 
> > Is there any need to be asynchronously frobbing this value in the
> > middle of a function within this file and expecting it to be
> 
> Yes. I can stop the machine via kdb or other debugger, change the value
> during debug, and upon resuming it will start printing stuff. Often
> this is needed when going thru several iterations of call before problem
> is seen. Making it volatile makes sure the compiler loads it every
> instance of its use. This is not in main path, only debugger path, so
> the overhead should not be an issue.

So you want to be able to toggle the value in between two immediately
adjacent debug print calls? While debugging the debugging infrastructure
itself? (using itself?).

I'm surprised that even works, but if you say so then OK.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:02:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CS0-0001m1-GC; Thu, 09 Jan 2014 10:02:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1CRz-0001lW-EP
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:02:55 +0000
Received: from [85.158.143.35:10833] by server-3.bemta-4.messagelabs.com id
	BB/4F-32360-EC37EC25; Thu, 09 Jan 2014 10:02:54 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389261771!7948080!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8396 invoked from network); 9 Jan 2014 10:02:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:02:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215120"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:02:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 05:02:51 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W1CRu-0001yH-Uw;
	Thu, 09 Jan 2014 10:02:50 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 10:02:48 +0000
Message-ID: <1389261768-30606-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2 3/3] xen-netfront: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++-----------------------------------------
 1 file changed, 3 insertions(+), 45 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..c41537b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -859,9 +859,7 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
 
 static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 {
-	struct iphdr *iph;
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/*
 	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
@@ -873,54 +871,14 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 		struct netfront_info *np = netdev_priv(dev);
 		np->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol != htons(ETH_P_IP))
-		goto out;
-
-	iph = (void *)skb->data;
-
-	switch (iph->protocol) {
-	case IPPROTO_TCP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct tcphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_TCP, 0);
-		}
-		break;
-	case IPPROTO_UDP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct udphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_UDP, 0);
-		}
-		break;
-	default:
-		if (net_ratelimit())
-			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
-			       iph->protocol);
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static int handle_incoming_queue(struct net_device *dev,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:02:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CRz-0001lX-30; Thu, 09 Jan 2014 10:02:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1CRy-0001lO-Cl
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:02:54 +0000
Received: from [85.158.143.35:10664] by server-3.bemta-4.messagelabs.com id
	31/4F-32360-DC37EC25; Thu, 09 Jan 2014 10:02:53 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389261771!7948080!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8275 invoked from network); 9 Jan 2014 10:02:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:02:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215119"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:02:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 05:02:51 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W1CRu-0001yH-Qy;
	Thu, 09 Jan 2014 10:02:50 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 10:02:45 +0000
Message-ID: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH net-next v2 0/3] make skb_checksum_setup
	generally available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Both xen-netfront and xen-netback need to be able to set up the partial
checksum offset of an skb and may also need to recalculate the pseudo-
header checksum in the process. This functionality is currently private
and duplicated between the two drivers.

Patch #1 of this series moves the implementation into the core network code
as there is nothing xen-specific about it and it is potentially useful to
any network driver.
Patch #2 removes the private implementation from netback.
Patch #3 removes the private implementation from netfront.

v2:
- Put skb_checksum_setup in skbuff.c rather than dev.c
- Remove inline


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:02:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CS0-0001m1-GC; Thu, 09 Jan 2014 10:02:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1CRz-0001lW-EP
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:02:55 +0000
Received: from [85.158.143.35:10833] by server-3.bemta-4.messagelabs.com id
	BB/4F-32360-EC37EC25; Thu, 09 Jan 2014 10:02:54 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389261771!7948080!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8396 invoked from network); 9 Jan 2014 10:02:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:02:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215120"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:02:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 05:02:51 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W1CRu-0001yH-Uw;
	Thu, 09 Jan 2014 10:02:50 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 10:02:48 +0000
Message-ID: <1389261768-30606-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2 3/3] xen-netfront: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++-----------------------------------------
 1 file changed, 3 insertions(+), 45 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..c41537b 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -859,9 +859,7 @@ static RING_IDX xennet_fill_frags(struct netfront_info *np,
 
 static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 {
-	struct iphdr *iph;
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/*
 	 * A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
@@ -873,54 +871,14 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 		struct netfront_info *np = netdev_priv(dev);
 		np->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol != htons(ETH_P_IP))
-		goto out;
-
-	iph = (void *)skb->data;
-
-	switch (iph->protocol) {
-	case IPPROTO_TCP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct tcphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct tcphdr *tcph = tcp_hdr(skb);
-			tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_TCP, 0);
-		}
-		break;
-	case IPPROTO_UDP:
-		if (!skb_partial_csum_set(skb, 4 * iph->ihl,
-					  offsetof(struct udphdr, check)))
-			goto out;
-
-		if (recalculate_partial_csum) {
-			struct udphdr *udph = udp_hdr(skb);
-			udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
-							 skb->len - iph->ihl*4,
-							 IPPROTO_UDP, 0);
-		}
-		break;
-	default:
-		if (net_ratelimit())
-			pr_err("Attempting to checksum a non-TCP/UDP packet, dropping a protocol %d packet\n",
-			       iph->protocol);
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static int handle_incoming_queue(struct net_device *dev,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:02:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:02:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CS1-0001nB-UW; Thu, 09 Jan 2014 10:02:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1CS0-0001lh-BI
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:02:56 +0000
Received: from [85.158.143.35:10912] by server-1.bemta-4.messagelabs.com id
	D2/DE-02132-FC37EC25; Thu, 09 Jan 2014 10:02:55 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389261771!7948080!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8497 invoked from network); 9 Jan 2014 10:02:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:02:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215125"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:02:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 05:02:51 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W1CRu-0001yH-TL;
	Thu, 09 Jan 2014 10:02:50 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 10:02:46 +0000
Message-ID: <1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: [Xen-devel] [PATCH net-next v2 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a function to the core network code that sets up the
partial checksum offset for IP packets (and optionally re-calculates the
pseudo-header checksum).
The implementation was previously private and duplicated between xen-netback
and xen-netfront; however, it is not xen-specific and is potentially useful
to any network driver.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Veaceslav Falico <vfalico@redhat.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
---
 include/linux/skbuff.h |    2 +
 net/core/skbuff.c      |  273 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 275 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d97f2d0..48b7605 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2893,6 +2893,8 @@ static inline void skb_checksum_none_assert(const struct sk_buff *skb)
 
 bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off);
 
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
+
 u32 __skb_get_poff(const struct sk_buff *skb);
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1d641e7..15057d2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -65,6 +65,7 @@
 #include <net/dst.h>
 #include <net/sock.h>
 #include <net/checksum.h>
+#include <net/ip6_checksum.h>
 #include <net/xfrm.h>
 
 #include <asm/uaccess.h>
@@ -3549,6 +3550,278 @@ bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off)
 }
 EXPORT_SYMBOL_GPL(skb_partial_csum_set);
 
+static int skb_maybe_pull_tail(struct sk_buff *skb, unsigned int len,
+			       unsigned int max)
+{
+	if (skb_headlen(skb) >= len)
+		return 0;
+
+	/* If we need to pullup then pullup to the max, so we
+	 * won't need to do it again.
+	 */
+	if (max > skb->len)
+		max = skb->len;
+
+	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
+		return -ENOMEM;
+
+	if (skb_headlen(skb) < len)
+		return -EPROTO;
+
+	return 0;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * maximally sized IP and TCP or UDP headers.
+ */
+#define MAX_IP_HDR_LEN 128
+
+static int skb_checksum_setup_ip(struct sk_buff *skb, bool recalculate)
+{
+	unsigned int off;
+	bool fragment;
+	int err;
+
+	fragment = false;
+
+	err = skb_maybe_pull_tail(skb,
+				  sizeof(struct iphdr),
+				  MAX_IP_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
+		fragment = true;
+
+	off = ip_hdrlen(skb);
+
+	err = -EPROTO;
+
+	if (fragment)
+		goto out;
+
+	switch (ip_hdr(skb)->protocol) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IP_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+						   ip_hdr(skb)->daddr,
+						   skb->len - off,
+						   IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/* This value should be large enough to cover a tagged ethernet header plus
+ * an IPv6 header, all options, and a maximal TCP or UDP header.
+ */
+#define MAX_IPV6_HDR_LEN 256
+
+#define OPT_HDR(type, skb, off) \
+	(type *)(skb_network_header(skb) + (off))
+
+static int skb_checksum_setup_ipv6(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+	u8 nexthdr;
+	unsigned int off;
+	unsigned int len;
+	bool fragment;
+	bool done;
+
+	fragment = false;
+	done = false;
+
+	off = sizeof(struct ipv6hdr);
+
+	err = skb_maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
+	if (err < 0)
+		goto out;
+
+	nexthdr = ipv6_hdr(skb)->nexthdr;
+
+	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
+	while (off <= len && !done) {
+		switch (nexthdr) {
+		case IPPROTO_DSTOPTS:
+		case IPPROTO_HOPOPTS:
+		case IPPROTO_ROUTING: {
+			struct ipv6_opt_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ipv6_opt_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_optlen(hp);
+			break;
+		}
+		case IPPROTO_AH: {
+			struct ip_auth_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct ip_auth_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
+			nexthdr = hp->nexthdr;
+			off += ipv6_authlen(hp);
+			break;
+		}
+		case IPPROTO_FRAGMENT: {
+			struct frag_hdr *hp;
+
+			err = skb_maybe_pull_tail(skb,
+						  off +
+						  sizeof(struct frag_hdr),
+						  MAX_IPV6_HDR_LEN);
+			if (err < 0)
+				goto out;
+
+			hp = OPT_HDR(struct frag_hdr, skb, off);
+
+			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
+				fragment = true;
+
+			nexthdr = hp->nexthdr;
+			off += sizeof(struct frag_hdr);
+			break;
+		}
+		default:
+			done = true;
+			break;
+		}
+	}
+
+	err = -EPROTO;
+
+	if (!done || fragment)
+		goto out;
+
+	switch (nexthdr) {
+	case IPPROTO_TCP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct tcphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct tcphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			tcp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_TCP, 0);
+		break;
+	case IPPROTO_UDP:
+		err = skb_maybe_pull_tail(skb,
+					  off + sizeof(struct udphdr),
+					  MAX_IPV6_HDR_LEN);
+		if (err < 0)
+			goto out;
+
+		if (!skb_partial_csum_set(skb, off,
+					  offsetof(struct udphdr, check))) {
+			err = -EPROTO;
+			goto out;
+		}
+
+		if (recalculate)
+			udp_hdr(skb)->check =
+				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+						 &ipv6_hdr(skb)->daddr,
+						 skb->len - off,
+						 IPPROTO_UDP, 0);
+		break;
+	default:
+		goto out;
+	}
+
+	err = 0;
+
+out:
+	return err;
+}
+
+/**
+ * skb_checksum_setup - set up partial checksum offset
+ * @skb: the skb to set up
+ * @recalculate: if true the pseudo-header checksum will be recalculated
+ */
+int skb_checksum_setup(struct sk_buff *skb, bool recalculate)
+{
+	int err;
+
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		err = skb_checksum_setup_ip(skb, recalculate);
+		break;
+
+	case htons(ETH_P_IPV6):
+		err = skb_checksum_setup_ipv6(skb, recalculate);
+		break;
+
+	default:
+		err = -EPROTO;
+		break;
+	}
+
+	return err;
+}
+EXPORT_SYMBOL(skb_checksum_setup);
+
 void __skb_warn_lro_forwarding(const struct sk_buff *skb)
 {
 	net_warn_ratelimited("%s: received packets cannot be forwarded while LRO is enabled\n",
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:02:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:02:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CS2-0001ns-DX; Thu, 09 Jan 2014 10:02:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W1CS0-0001lx-Oo
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:02:57 +0000
Received: from [85.158.139.211:42701] by server-4.bemta-5.messagelabs.com id
	CA/45-26791-0D37EC25; Thu, 09 Jan 2014 10:02:56 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389261773!8528786!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12443 invoked from network); 9 Jan 2014 10:02:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:02:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215128"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:02:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 05:02:51 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W1CRu-0001yH-U2;
	Thu, 09 Jan 2014 10:02:50 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 10:02:47 +0000
Message-ID: <1389261768-30606-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2 2/3] xen-netback: use new
	skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use skb_checksum_setup to set up partial checksum offsets rather
than a private implementation.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/netback.c |  260 +------------------------------------
 1 file changed, 3 insertions(+), 257 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..2605405 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -39,7 +39,6 @@
 #include <linux/udp.h>
 
 #include <net/tcp.h>
-#include <net/ip6_checksum.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -1048,257 +1047,9 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len,
-				  unsigned int max)
-{
-	if (skb_headlen(skb) >= len)
-		return 0;
-
-	/* If we need to pullup then pullup to the max, so we
-	 * won't need to do it again.
-	 */
-	if (max > skb->len)
-		max = skb->len;
-
-	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
-		return -ENOMEM;
-
-	if (skb_headlen(skb) < len)
-		return -EPROTO;
-
-	return 0;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * maximally sized IP and TCP or UDP headers.
- */
-#define MAX_IP_HDR_LEN 128
-
-static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb,
-			     int recalculate_partial_csum)
-{
-	unsigned int off;
-	bool fragment;
-	int err;
-
-	fragment = false;
-
-	err = maybe_pull_tail(skb,
-			      sizeof(struct iphdr),
-			      MAX_IP_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
-		fragment = true;
-
-	off = ip_hdrlen(skb);
-
-	err = -EPROTO;
-
-	if (fragment)
-		goto out;
-
-	switch (ip_hdr(skb)->protocol) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IP_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
-						   ip_hdr(skb)->daddr,
-						   skb->len - off,
-						   IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
-/* This value should be large enough to cover a tagged ethernet header plus
- * an IPv6 header, all options, and a maximal TCP or UDP header.
- */
-#define MAX_IPV6_HDR_LEN 256
-
-#define OPT_HDR(type, skb, off) \
-	(type *)(skb_network_header(skb) + (off))
-
-static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb,
-			       int recalculate_partial_csum)
-{
-	int err;
-	u8 nexthdr;
-	unsigned int off;
-	unsigned int len;
-	bool fragment;
-	bool done;
-
-	fragment = false;
-	done = false;
-
-	off = sizeof(struct ipv6hdr);
-
-	err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
-	if (err < 0)
-		goto out;
-
-	nexthdr = ipv6_hdr(skb)->nexthdr;
-
-	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
-	while (off <= len && !done) {
-		switch (nexthdr) {
-		case IPPROTO_DSTOPTS:
-		case IPPROTO_HOPOPTS:
-		case IPPROTO_ROUTING: {
-			struct ipv6_opt_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ipv6_opt_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_optlen(hp);
-			break;
-		}
-		case IPPROTO_AH: {
-			struct ip_auth_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct ip_auth_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
-			nexthdr = hp->nexthdr;
-			off += ipv6_authlen(hp);
-			break;
-		}
-		case IPPROTO_FRAGMENT: {
-			struct frag_hdr *hp;
-
-			err = maybe_pull_tail(skb,
-					      off +
-					      sizeof(struct frag_hdr),
-					      MAX_IPV6_HDR_LEN);
-			if (err < 0)
-				goto out;
-
-			hp = OPT_HDR(struct frag_hdr, skb, off);
-
-			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
-				fragment = true;
-
-			nexthdr = hp->nexthdr;
-			off += sizeof(struct frag_hdr);
-			break;
-		}
-		default:
-			done = true;
-			break;
-		}
-	}
-
-	err = -EPROTO;
-
-	if (!done || fragment)
-		goto out;
-
-	switch (nexthdr) {
-	case IPPROTO_TCP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct tcphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct tcphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			tcp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_TCP, 0);
-		break;
-	case IPPROTO_UDP:
-		err = maybe_pull_tail(skb,
-				      off + sizeof(struct udphdr),
-				      MAX_IPV6_HDR_LEN);
-		if (err < 0)
-			goto out;
-
-		if (!skb_partial_csum_set(skb, off,
-					  offsetof(struct udphdr, check))) {
-			err = -EPROTO;
-			goto out;
-		}
-
-		if (recalculate_partial_csum)
-			udp_hdr(skb)->check =
-				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
-						 &ipv6_hdr(skb)->daddr,
-						 skb->len - off,
-						 IPPROTO_UDP, 0);
-		break;
-	default:
-		goto out;
-	}
-
-	err = 0;
-
-out:
-	return err;
-}
-
 static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 {
-	int err = -EPROTO;
-	int recalculate_partial_csum = 0;
+	bool recalculate_partial_csum = false;
 
 	/* A GSO SKB must be CHECKSUM_PARTIAL. However some buggy
 	 * peers can fail to set NETRXF_csum_blank when sending a GSO
@@ -1308,19 +1059,14 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		vif->rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
-		recalculate_partial_csum = 1;
+		recalculate_partial_csum = true;
 	}
 
 	/* A non-CHECKSUM_PARTIAL SKB does not require setup. */
 	if (skb->ip_summed != CHECKSUM_PARTIAL)
 		return 0;
 
-	if (skb->protocol == htons(ETH_P_IP))
-		err = checksum_setup_ip(vif, skb, recalculate_partial_csum);
-	else if (skb->protocol == htons(ETH_P_IPV6))
-		err = checksum_setup_ipv6(vif, skb, recalculate_partial_csum);
-
-	return err;
+	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
 static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:05:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:05:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CTy-0002Gb-Bu; Thu, 09 Jan 2014 10:04:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1CTw-0002G4-HJ
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:04:56 +0000
Received: from [85.158.139.211:48495] by server-12.bemta-5.messagelabs.com id
	29/61-30017-7447EC25; Thu, 09 Jan 2014 10:04:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389261893!8735081!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6057 invoked from network); 9 Jan 2014 10:04:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:04:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91215720"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:04:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	05:04:52 -0500
Message-ID: <1389261891.27473.45.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 10:04:51 +0000
In-Reply-To: <20140108183411.GA13867@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
	<20140108183411.GA13867@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 18:34 +0000, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > > With xm it was possible to have a global keymap="de" to map the physical
> > > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > > xl create -d shows keymap:NULL in the vfb part.
> > > Only moving keymap= into vfb=[] fixes it for me.
> > > 
> > > xl.cfg(5) indicates that keymap= can be specified as a global option (just
> > > like vnc=) as well as a suboption for vfb=[].
> > > 
> > > Was this already fixed in xen-unstable? git log shows no keymap-related
> > > changes.
> > 
> > I don't think Wei covered this one with his VNC patches. It does sound
> > like it should be moved though, yes. I think this is 4.5 material at this
> > point.
> > 
> 
> You're right, my patch didn't cover that aspect because I tried hard to
> make it minimal.

Do you think you could revisit this bit for 4.5?

> > Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
> > Sorry for not thinking of that during review.
> > 
> 
> I don't think so. All VNC / VFB options are already documented.

The top level vnc*= options seem to be under "Emulated VGA Graphics
Device" though.
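For anyone hitting the same symptom in the meantime, the workaround Olaf describes is to move keymap into the vfb list, along these lines (an illustrative xl.cfg fragment; adjust the keymap value as needed):

```
vfb = [ 'vnc=1,keymap=de' ]
```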

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:06:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:06:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CVR-0002TF-T9; Thu, 09 Jan 2014 10:06:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1CVQ-0002T4-SZ
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:06:29 +0000
Received: from [193.109.254.147:24687] by server-12.bemta-14.messagelabs.com
	id E0/F3-13681-4A47EC25; Thu, 09 Jan 2014 10:06:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389261986!9792227!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28121 invoked from network); 9 Jan 2014 10:06:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:06:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89071119"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 10:06:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	05:06:25 -0500
Message-ID: <1389261984.27473.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <Ross.Philipson@citrix.com>
Date: Thu, 9 Jan 2014 10:06:24 +0000
In-Reply-To: <92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 18:29 +0000, Ross Philipson wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> > bounces@lists.xen.org] On Behalf Of Ian Campbell
> > Sent: Tuesday, January 07, 2014 8:56 AM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] passing smbios table from qemu
> > 
> > On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:
> > 
> > > Question, am I missing anything, or is this feature (passing smbios)
> > > still a work in progress?
> > 
> > Under Xen smbios tables are supplied via hvmloader, not via qemu.
> > 
> > What tables and or values do you want to override/supply?
> > 
> > I believe that libxc supports passing in extra smbios tables when
> > building the guest (via struct xc_hvm_build_args.smbios_module) but
> > nothing has been plumbed in to make use of this.
> > 
> > I'm not aware of any ongoing work to plumb that stuff further up, e.g.
> > to libxl and xl or other toolstacks. (I think the libxc functionality is
> > only consumed by the XenClient toolstack.)
> 
> Just FYI, I did go back and add the support (and docs) for it in
> libxl. I did this after the first set of patches went in per someone's
> request (can't recall who it was at the moment).

Ah yes, here it is:
       smbios_firmware="STRING"
           Specify a path to a file that contains extra SMBIOS firmware
           structures to pass in to a guest. The file can contain a set of
           DMTF predefined structures which will override the internal
           defaults. Not all predefined structures can be overridden; only
           the following types can: 0, 1, 2, 3, 11, 22, 39. The file can
           also contain any number of vendor-defined SMBIOS structures
           (types 128 - 255). Since SMBIOS structures do not present their
           overall size, each entry in the file must be preceded by a
           32-bit integer indicating the size of the following structure.
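
[The length-prefixed file format described above can be sketched in a few lines (an illustration only, not Xen code; the 32-bit prefix is assumed little-endian here, and the Type 11 structure below is a made-up example):

```python
import struct

def wrap_structure(structure: bytes) -> bytes:
    # The file format requires each SMBIOS structure to be preceded by a
    # 32-bit integer (assumed little-endian here) giving its size, since
    # the structures do not carry their overall size themselves.
    return struct.pack("<I", len(structure)) + structure

# Hypothetical Type 11 (OEM Strings) structure: 4-byte header
# (type, formatted-area length, handle) plus a string count, then the
# string-set terminated by a double NUL.
oem_strings = struct.pack("<BBHB", 11, 5, 0, 1) + b"example\x00\x00"

blob = wrap_structure(oem_strings)  # bytes to write into the file
```
]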

Did you not have a tool/library for helping to create such blobs
somewhere? Or is my memory playing tricks?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CXf-00036w-4C; Thu, 09 Jan 2014 10:08:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lausgans@gmail.com>) id 1W1A9t-00077D-2x
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 07:36:05 +0000
Received: from [85.158.139.211:19928] by server-13.bemta-5.messagelabs.com id
	EF/97-11357-4615EC25; Thu, 09 Jan 2014 07:36:04 +0000
X-Env-Sender: lausgans@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389252963!8511477!1
X-Originating-IP: [209.85.212.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25323 invoked from network); 9 Jan 2014 07:36:03 -0000
Received: from mail-wi0-f174.google.com (HELO mail-wi0-f174.google.com)
	(209.85.212.174)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 07:36:03 -0000
Received: by mail-wi0-f174.google.com with SMTP id z2so6559224wiv.7
	for <xen-devel@lists.xen.org>; Wed, 08 Jan 2014 23:36:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type:content-transfer-encoding;
	bh=iAsTmRwjGZfYNYDdzMHp1YR94jM2tF8o9SrACL0IEdg=;
	b=zEVwIMBDMGwUfoJ+suCdbIAAXrrU4jqxmLY8YwBP/5lCDuW1PANvCgZAMMMCxUPXVW
	xfc2zIQDgooASXkDx3ulv5o/eZwGBny/NV1l0xALKm9TXq02SpodMuLXwoRbf19qFfkz
	pGYkFljVJwNVspSIh5UynJfxovrk1huzmzLIfWJTb+QeuaWcmQIlRm9VPez8h94za0Pm
	Qsp5KLPadPiZ9QGrgWvcGKP02vfxntw0HLZsxEhqOtCYBUXoVm1iAROLzozl0PWqAqM8
	S1mK0AWv32UcqX20UxCNPzJMMKhtt3M53rj+jWYt2a8KCUsoIn/GGx8jGwzlK5oaKric
	YsIQ==
MIME-Version: 1.0
X-Received: by 10.180.189.6 with SMTP id ge6mr1759572wic.1.1389252963431; Wed,
	08 Jan 2014 23:36:03 -0800 (PST)
Received: by 10.216.9.69 with HTTP; Wed, 8 Jan 2014 23:36:03 -0800 (PST)
In-Reply-To: <e0fc2703-6eb7-4667-b13b-4ccaf69502d5@chromium.org>
References: <0d0b4c75-5a62-4d7c-9b4b-f4998257e398@chromium.org>
	<CALiw-2Efb41+=+iv3Q6oTKS-g7FsdCnxq3zLpvx_PX787UXdcg@mail.gmail.com>
	<alpine.DEB.2.02.1304102022330.5353@kaball.uk.xensource.com>
	<51668C9B.8030607@citrix.com> <5169062B.5080909@gmail.com>
	<516998D0.5090805@gmail.com>
	<CAAbOSc=iCd1Lrv5tGrwjp2g-Zd84n6kGAvUFFuDiUYQun3z_6A@mail.gmail.com>
	<CAJJyHj+h832_fjg0FUtDH6qzkXwmhvJCBMJJumpZhO-LkgoLuw@mail.gmail.com>
	<516A37B1.7070902@gmail.com>
	<e0fc2703-6eb7-4667-b13b-4ccaf69502d5@chromium.org>
Date: Thu, 9 Jan 2014 11:36:03 +0400
Message-ID: <CANoehN-HmJyv3VY-s-00jhEp-SPbOQGJhRDpOuUdoUtQB3RsVg@mail.gmail.com>
From: John Johnson <lausgans@gmail.com>
To: chromium-os-discuss@chromium.org, Xen Devel <xen-devel@lists.xen.org>
X-Mailman-Approved-At: Thu, 09 Jan 2014 10:08:44 +0000
Subject: [Xen-devel] Fwd: [cros-discuss] Boot in Hyp mode on Samsung ARM
	Chromebook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Meanwhile, I've tried to boot just a bare linux-chromebook kernel from
the xen repo, to no avail. Same symptoms ("I don't see anything on the
display")...
None of these helped:
1. Using the defconfig from the kernel repo at chromium.googlesource.com
2. Attaching both the new exynos5250-snow-rev4 & exynos5250-snow-rev5 fdt
blobs instead of the old exynos5250-snow one.
Ideas?

---------- Forwarded message ----------
From:  <lausgans@>
Date: 2013/12/17
Subject: Re: [Xen-devel] [cros-discuss] Boot in Hyp mode on Samsung
ARM Chromebook
To: chromium-os-discuss@chromium.org
Cc: Anthony PERARD <anthony.perard@>, Mike Frysinger <vapier@>, Ian
Campbell <Ian.Campbell@>, Xen Devel <xen-devel@lists.xen.org>,
Francesco Gringoli <francesco.gringoli@>, Chen Baozi <baozich@>


> 17.12.2013, 13:31, "Ian Campbell" <Ian.Campbell@xxxxxxxx>:
> > On Tue, 2013-12-17 at 05:37 +0000, Held Bier wrote:
> >
> >>   <francesco.gringoli <at> ing.unibs.it> writes:
> >>>>  ...
> >>>  ...
> >>>>  After reboot I don't see anything on the
> >>>  display and this is expected: unfortunately after rebooting into
> >>>   either linux or chromeos I don't see logs saved in dmesg-ramoops.
> >>>  ...
> >>>  I'm wondering if the hardware could have been changed in the
> >>>  meanwhile: Chen, would you mind posting
> >>>  somewhere the unsigned image that you used to get those interesting
> >>>>  dmesg-ramoops messages? so that I can
> >>>  try to see after reboot if I'm getting something similar in /dev/pstore ?
> >>>  ...
> >>  PING!
> >
> > Did you mean to also send this to cros-discuss where it appears the
> > people you are quoting are subscribed? Very little of this discussion
> > seems to have happened on xen-devel.
> >
> > You might also want to consider CCing the people/person you are
> > addressing with your ping directly.
> >
> > Ian.


Done. Thanks, Ian.
Though, the original was sent only to the xen lists:
http://lists.xen.org/archives/html/xen-devel/2013-09/msg00912.html

> >>  I'm experiencing exactly the same, though with natively built tools
> >>  (while Francesco cross-compiles).

P.S.: Maybe drop the current subject of this thread? Though the secure
mode escape hack has been removed from recent trees, the Chromebook Xen
tree still has it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CXe-00036j-Kk; Thu, 09 Jan 2014 10:08:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W16fb-0001Ug-Vo; Thu, 09 Jan 2014 03:52:36 +0000
Received: from [85.158.143.35:7525] by server-1.bemta-4.messagelabs.com id
	AF/95-02132-30D1EC25; Thu, 09 Jan 2014 03:52:35 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389239551!10486080!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11742 invoked from network); 9 Jan 2014 03:52:33 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-7.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 03:52:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=E4b8X
	EXOI4j0xn9MBj1tuFN5LQ6GMWeMgR4w6CF8FpM=; b=S1mj0TK4d1exy1MWSl68U
	0V5QrCembzks5tG43qURgFaYnQbe893hdPFTsORyV72V1t8E9jpoSUGwqy2+3PWx
	e6feMkuepnhWqm9qFVJvdQS4DrEVZayEkMfsPI5x6rkenRstjnjrp4EuM3U21GCK
	DBNI9JMBv1v8lk01ufcXUM=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Thu, 9 Jan 2014 11:52:23 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Thu, 9 Jan 2014 11:52:23 +0800 (CST)
From: topperxin <topperxin@126.com>
To: "Wei Liu" <wei.liu2@citrix.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <20140108184405.GB13867@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
X-CM-CTRLDATA: MDH+jWZvb3Rlcl9odG09NDE5Nzo4MQ==
MIME-Version: 1.0
Message-ID: <4088fa33.894a.14375212530.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAAHD6v4HM5St+1BAA--.12811W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiYAQODk3APfrtBAABsG
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
X-Mailman-Approved-At: Thu, 09 Jan 2014 10:08:44 +0000
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4124172115908359542=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4124172115908359542==
Content-Type: multipart/alternative; 
	boundary="----=_Part_128210_1787703616.1389239543088"

------=_Part_128210_1787703616.1389239543088
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Wei

     Thanks for your reply. I know you are in charge of porting Virtio to
Xen; how is that work progressing? May we configure Virtio on Xen?
     From what you said, MacVtap was written specifically for Virtio, and
other virtual NIC driver models cannot use it, right?
      I got this information from http://virt.kernelnewbies.org/MacVTap
      "Macvtap is a new device driver meant to simplify virtualized
bridged networking. It replaces the combination of the tun/tap and bridge
drivers with a single module based on the macvlan device driver. A macvtap
endpoint is a character device that largely follows the tun/tap ioctl
interface and can be used directly by kvm/qemu and other hypervisors that
support the tun/tap interface."
       As far as I understand, any hypervisor can be configured with
MacVtap as long as it supports the tun/tap interface, right? So may I say
there is no close relationship between MacVtap and Virtio?
         Looking forward to your reply. Thanks.

At 2014-01-09 02:44:05, "Wei Liu" <wei.liu2@citrix.com> wrote:
>On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
>> Hi list
>>         As we all know, SR-IOV technology can improve a VNIC's
>>         performance, but it cannot support live migration. I got
>>         some information recently that on a KVM+Virtio platform, if we
>>         use MacVtap + SR-IOV, live migration can be done
>>         successfully. What I want to know is: may I configure MacVtap
>>         on Xen?
>>        Any replies are welcome! Thanks a lot.
>
>AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
>the feature you want. That would mean you also need to use the VirtIO
>network driver for Xen's HVM domain, if you manage to configure MacVtap
>for Xen.
>
>Basically that means a configuration that nobody has ever tried. Good
>luck. :-)
>
>Wei.
>
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
------=_Part_128210_1787703616.1389239543088--



--===============4124172115908359542==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4124172115908359542==--



From xen-devel-bounces@lists.xen.org Thu Jan 09 10:08:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CXd-00035y-Ji; Thu, 09 Jan 2014 10:08:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W15yc-0006j6-Fo; Thu, 09 Jan 2014 03:08:10 +0000
Received: from [193.109.254.147:44767] by server-2.bemta-14.messagelabs.com id
	B9/DB-00361-9921EC25; Thu, 09 Jan 2014 03:08:09 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389236886!9706543!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12689 invoked from network); 9 Jan 2014 03:08:08 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-6.tower-27.messagelabs.com with SMTP;
	9 Jan 2014 03:08:08 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=2Qlpu
	oJGYGhxt4GupHnMEhwoQ8ckZowtuXqWDv5P03g=; b=Vm0jVYQCc4iYFLOCUYtrD
	ePJYr3rXggzn17FGYrfn6OVmOGFycFFIqW92LVvIakdgIv4hIXnpQnvVXM8L95Kg
	zmI/ApP6CFF3jSEP8luG5NcPuOl0t9b/qt8QNAhXuHogD6VVsKpwIHiA2onjjocO
	k77s14ac+vdtwMoXXP5DAw=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Thu, 9 Jan 2014 11:07:58 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Thu, 9 Jan 2014 11:07:58 +0800 (CST)
From: topperxin <topperxin@126.com>
To: =?UTF-8?Q?Pasi_K=C3=A4rkk=C3=A4inen?= <pasik@iki.fi>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <20140108122710.GZ2924@reaktio.net>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108122710.GZ2924@reaktio.net>
X-CM-CTRLDATA: 7KxGOmZvb3Rlcl9odG09MjI1MTo4MQ==
MIME-Version: 1.0
Message-ID: <45da0d40.6e0f.14374f87dda.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAC3yfSPEs5S_MdBAA--.12254W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiYAgODk3APfp6VAAAsM
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
X-Mailman-Approved-At: Thu, 09 Jan 2014 10:08:44 +0000
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7532472034875460006=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7532472034875460006==
Content-Type: multipart/alternative; 
	boundary="----=_Part_102888_1891204376.1389236878809"

------=_Part_102888_1891204376.1389236878809
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

hi Pasi
    Thanks for your reply, I think we know what you said, we have ever tried these ways.
     While you know, these ways are not stable enough for the engineering requirement, we have to apply many patches, and there are many restrictions for the Guest VM types.
     We need a more general & engineering solution.
     Any way thanks for your reply.

At 2014-01-08 20:27:10, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
>On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
>>    Hi list
>>            As we all know, SR-IOV technology can improve VNIC's performance,
>>    while it can not support live migration. I get some information recently
>>    that on KVM+Virtio platform, if we use MacVtap + SR-IOV, the live
>>    migration could be done successfully. What I want to know is may I
>>    configure MacVtap on Xen?
>>           Any replies are welcome! Thanks a lot.
>
>I think years ago (2009, perhaps) when SR-IOV was first demoed with Xen
>it was demoed with live migration.. it was a mixture of SR-IOV VF + Xen vif.
>
>So with some toolstack/script hackery you can do it. (=use the vif pv driver during migration, normally sr-iov vf).
>
>-- Pasi
>
------=_Part_102888_1891204376.1389236878809--



--===============7532472034875460006==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7532472034875460006==--



From xen-devel-bounces@lists.xen.org Thu Jan 09 10:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CXe-00036j-Kk; Thu, 09 Jan 2014 10:08:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W16fb-0001Ug-Vo; Thu, 09 Jan 2014 03:52:36 +0000
Received: from [85.158.143.35:7525] by server-1.bemta-4.messagelabs.com id
	AF/95-02132-30D1EC25; Thu, 09 Jan 2014 03:52:35 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389239551!10486080!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11742 invoked from network); 9 Jan 2014 03:52:33 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-7.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 03:52:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=E4b8X
	EXOI4j0xn9MBj1tuFN5LQ6GMWeMgR4w6CF8FpM=; b=S1mj0TK4d1exy1MWSl68U
	0V5QrCembzks5tG43qURgFaYnQbe893hdPFTsORyV72V1t8E9jpoSUGwqy2+3PWx
	e6feMkuepnhWqm9qFVJvdQS4DrEVZayEkMfsPI5x6rkenRstjnjrp4EuM3U21GCK
	DBNI9JMBv1v8lk01ufcXUM=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Thu, 9 Jan 2014 11:52:23 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Thu, 9 Jan 2014 11:52:23 +0800 (CST)
From: topperxin <topperxin@126.com>
To: "Wei Liu" <wei.liu2@citrix.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <20140108184405.GB13867@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
X-CM-CTRLDATA: MDH+jWZvb3Rlcl9odG09NDE5Nzo4MQ==
MIME-Version: 1.0
Message-ID: <4088fa33.894a.14375212530.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAAHD6v4HM5St+1BAA--.12811W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiYAQODk3APfrtBAABsG
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
X-Mailman-Approved-At: Thu, 09 Jan 2014 10:08:44 +0000
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4124172115908359542=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4124172115908359542==
Content-Type: multipart/alternative; 
	boundary="----=_Part_128210_1787703616.1389239543088"

------=_Part_128210_1787703616.1389239543088
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi Wei

     Thanks for your reply, I know you are in charge of porting Virtio to xen, how about the process? May we configure Virtio on xen?
     So far as you said, the MacVtap was written specially for Virtio, other virtual NIC driver model can not use it, right?
      I get the information from http://virt.kernelnewbies.org/MacVTap
      "Macvtap is a new device driver meant to simplify virtualized bridged networking. It replaces the combination of the tun/tap and bridge drivers with a single module based on the macvlan device driver. A macvtap endpoint is a character device that largely follows the tun/tap ioctl interface and can be used directly by kvm/qemu and other hypervisors that support the tun/tap interface."
       So far as I comprehend, any hypervisors can configure MacVtap so long as it can support tun/tap interface, right?  So may I say there is no so closely relationship between MacVtap and Virtio, right?
         Looking forward to your reply. thanks.

At 2014-01-09 02:44:05, "Wei Liu" <wei.liu2@citrix.com> wrote:
>On Wed, Jan 08, 2014 at 06:22:06PM +0800, topperxin wrote:
>> Hi list
>>         As we all know, SR-IOV technology can improve VNIC's
>>         performance, while it can not support live migration. I get
>>         some information recently that on KVM+Virtio platform, if we
>>         use MacVtap + SR-IOV, the live migration could be done
>>         successfully. What I want to know is may I configure MacVtap
>>         on Xen?
>>        Any replies are welcome! Thanks a lot.
>
>AIUI MacVtap runs in emulation mode and connects to VirtIO to implement
>the feature you want. That would mean you also need to use VirtIO
>network driver for Xen's HVM domain, if you manage to configure MacVtap
>for Xen.
>
>Basically that means a configuration that nobody ever tried. Good luck.
>:-)
>
>Wei.
>
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
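[The macvtap endpoint described in the quote above can be created with plain iproute2 in dom0/the host. This is a minimal sketch, not something anyone in this thread reports having run on Xen; the interface names eth0/macvtap0 are illustrative assumptions, and the commands require root and a real NIC.]

```shell
# Create a macvtap endpoint stacked on a physical NIC (names illustrative).
ip link add link eth0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up

# As the quoted text says, the endpoint is a character device following the
# tun/tap ioctl interface: /dev/tapN, where N is the interface's ifindex.
ls -l "/dev/tap$(cat /sys/class/net/macvtap0/ifindex)"
```

[Whether a Xen HVM guest can actually use it then depends on the emulated/VirtIO NIC model QEMU exposes, which is exactly the open question in this thread.]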
------=_Part_128210_1787703616.1389239543088
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPjxzcGFuIHN0eWxlPSJjb2xvcjogcmdiKDAsIDAsIDApOyAiPu+7
vzwvc3Bhbj48YnI+PGJyPjxicj48c3BhbiBjbGFzcz0iQXBwbGUtc3R5bGUtc3BhbiIgc3R5bGU9
IndoaXRlLXNwYWNlOiBwcmUtd3JhcDsgIj5IaSBXZWk8L3NwYW4+PGJyPjxwcmU+ICAgICBUaGFu
a3MgZm9yIHlvdXIgcmVwbHksIEkga25vdyB5b3UgYXJlIGluIGNoYXJnZSBvZiBwb3J0aW5nIFZp
cnRpbyB0byB4ZW4sIGhvdyBhYm91dCB0aGUgcHJvY2Vzcz8gTWF5IHdlIGNvbmZpZ3VyZSBWaXJ0
aW8gb24geGVuPzwvcHJlPjxwcmU+ICAgICBTbyBmYXIgYXMgeW91IHNhaWQsIHRoZSBNYWNWdGFw
IHdhcyB3cml0dGVuIHNwZWNpYWxseSBmb3IgVmlydGlvLCBvdGhlciB2aXJ0dWFsIE5JQyBkcml2
ZXIgbW9kZWwgY2FuIG5vdCB1c2UgaXQsIHJpZ2h0PzwvcHJlPjxwcmU+ICAgICAgSSBnZXQgdGhl
IGluZm9ybWF0aW9uIGZyb20gPHNwYW4gY2xhc3M9IkFwcGxlLXN0eWxlLXNwYW4iIHN0eWxlPSJ3
aGl0ZS1zcGFjZTogbm9ybWFsOyAiPjxhIGhyZWY9Imh0dHA6Ly92aXJ0Lmtlcm5lbG5ld2JpZXMu
b3JnL01hY1ZUYXAiPmh0dHA6Ly92aXJ0Lmtlcm5lbG5ld2JpZXMub3JnL01hY1ZUYXA8L2E+PC9z
cGFuPjwvcHJlPjxwcmU+ICAgICAgIjxzcGFuIGNsYXNzPSJBcHBsZS1zdHlsZS1zcGFuIiBzdHls
ZT0iZm9udC1mYW1pbHk6IEFyaWFsLCAnTHVjaWRhIEdyYW5kZScsIHNhbnMtc2VyaWY7IGZvbnQt
c2l6ZTogMTJweDsgbGluZS1oZWlnaHQ6IDE1cHg7IHdoaXRlLXNwYWNlOiBub3JtYWw7ICI+TWFj
dnRhcCBpcyBhIG5ldyBkZXZpY2UgZHJpdmVyIG1lYW50IHRvIHNpbXBsaWZ5IHZpcnR1YWxpemVk
IGJyaWRnZWQgbmV0d29ya2luZy4gSXQgcmVwbGFjZXMgdGhlIGNvbWJpbmF0aW9uIG9mIHRoZSB0
dW4vdGFwIGFuZCBicmlkZ2UgZHJpdmVycyB3aXRoIGEgc2luZ2xlIG1vZHVsZSBiYXNlZCBvbiB0
aGUgbWFjdmxhbiBkZXZpY2UgZHJpdmVyLiBBIG1hY3Z0YXAgZW5kcG9pbnQgaXMgYSBjaGFyYWN0
ZXIgZGV2aWNlIHRoYXQgbGFyZ2VseSBmb2xsb3dzIHRoZSB0dW4vdGFwIGlvY3RsIGludGVyZmFj
ZSA8c3BhbiBzdHlsZT0iY29sb3I6IHJnYigyNTUsIDAsIDApOyAiPmFuZCBjYW4gYmUgdXNlZCBk
aXJlY3RseSBieSBrdm0vcWVtdSBhbmQgb3RoZXIgaHlwZXJ2aXNvcnMgdGhhdCBzdXBwb3J0IHRo
ZSB0dW4vdGFwIGludGVyZmFjZTwvc3Bhbj4uPC9zcGFuPiImbmJzcDs8L3ByZT48cHJlPiAgICAg
ICBTbyBmYXIgYXMgSSBjb21wcmVoZW5kLCBhbnkgaHlwZXJ2aXNvcnMgY2FuIGNvbmZpZ3VyZSBN
YWNWdGFwIHNvIGxvbmcgYXMgaXQgY2FuIHN1cHBvcnQgPGZvbnQgY2xhc3M9IkFwcGxlLXN0eWxl
LXNwYW4iIGNvbG9yPSIjZmYwMDAwIiBmYWNlPSJBcmlhbCwgJ0x1Y2lkYSBHcmFuZGUnLCBzYW5z
LXNlcmlmIj48c3BhbiBjbGFzcz0iQXBwbGUtc3R5bGUtc3BhbiIgc3R5bGU9ImZvbnQtc2l6ZTog
MTJweDsgbGluZS1oZWlnaHQ6IDE1cHg7IHdoaXRlLXNwYWNlOiBub3JtYWw7Ij50dW4vdGFwIGlu
dGVyZmFjZSwmbmJzcDtyaWdodD8gJm5ic3A7PHNwYW4gc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwg
MCk7ICI+U288L3NwYW4+Jm5ic3A7PHNwYW4gc3R5bGU9ImNvbG9yOiByZ2IoMCwgMCwgMCk7ICI+
TWF5IEkgc2F5IHRoZXJlIGlzIG5vIHNvIGNsb3NlbHkgcmVsYXRpb25zaGlwIGJldHdlZW4gTWFj
VnRhcCBhbmQgVmlydGlvICwgcmlnaHQ/Jm5ic3A7PC9zcGFuPjwvc3Bhbj48L2ZvbnQ+PC9wcmU+
PHByZT48Zm9udCBjbGFzcz0iQXBwbGUtc3R5bGUtc3BhbiIgY29sb3I9IiNmZjAwMDAiIGZhY2U9
IkFyaWFsLCAnTHVjaWRhIEdyYW5kZScsIHNhbnMtc2VyaWYiPjxzcGFuIGNsYXNzPSJBcHBsZS1z
dHlsZS1zcGFuIiBzdHlsZT0iZm9udC1zaXplOiAxMnB4OyBsaW5lLWhlaWdodDogMTVweDsgd2hp
dGUtc3BhY2U6IG5vcm1hbDsiPjxzcGFuIHN0eWxlPSJjb2xvcjogcmdiKDAsIDAsIDApOyAiPiZu
YnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDtMb29raW5nIGZvcndhcmQgdG8geW91ciBy
ZXBseS4gdGhhbmtzLjwvc3Bhbj48L3NwYW4+PC9mb250PjwvcHJlPjxwcmU+PGJyPkF0Jm5ic3A7
MjAxNC0wMS0wOSZuYnNwOzAyOjQ0OjA1LCJXZWkmbmJzcDtMaXUiJm5ic3A7Jmx0O3dlaS5saXUy
QGNpdHJpeC5jb20mZ3Q7Jm5ic3A7d3JvdGU6CiZndDtPbiZuYnNwO1dlZCwmbmJzcDtKYW4mbmJz
cDswOCwmbmJzcDsyMDE0Jm5ic3A7YXQmbmJzcDswNjoyMjowNlBNJm5ic3A7KzA4MDAsJm5ic3A7
dG9wcGVyeGluJm5ic3A7d3JvdGU6CiZndDsmZ3Q7Jm5ic3A7SGkmbmJzcDtsaXN0CiZndDsmZ3Q7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7QXMm
bmJzcDt3ZSZuYnNwO2FsbCZuYnNwO2tub3csJm5ic3A7U1ItSU9WJm5ic3A7dGVjaG5vbG9neSZu
YnNwO2NhbiZuYnNwO2ltcHJvdmUmbmJzcDtWTklDJ3MKJmd0OyZndDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDtwZXJmb3JtYW5jZSwmbmJzcDt3
aGlsZSZuYnNwO2l0Jm5ic3A7Y2FuJm5ic3A7bm90Jm5ic3A7c3VwcG9ydCZuYnNwO2xpdmUmbmJz
cDttaWdyYXRpb24uJm5ic3A7SSZuYnNwO2dldAomZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO3NvbWUmbmJzcDtpbmZvcm1hdGlvbiZu
YnNwO3JlY2VudGx5Jm5ic3A7dGhhdCZuYnNwO29uJm5ic3A7S1ZNK1ZpcnRpbyZuYnNwO3BsYXRm
b3JtLCZuYnNwO2lmJm5ic3A7d2UKJmd0OyZndDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDt1c2UmbmJzcDtNYWNWdGFwJm5ic3A7KyZuYnNwO1NS
LUlPViwmbmJzcDt0aGUmbmJzcDtsaXZlJm5ic3A7bWlncmF0aW9uJm5ic3A7Y291bGQmbmJzcDti
ZSZuYnNwO2RvbmUKJmd0OyZndDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDtzdWNjZXNzZnVsbHkuJm5ic3A7V2hhdCZuYnNwO0kmbmJzcDt3YW50
Jm5ic3A7dG8mbmJzcDtrbm93Jm5ic3A7aXMmbmJzcDttYXkmbmJzcDtJJm5ic3A7Y29uZmlndXJl
Jm5ic3A7TWFjVnRhcAomZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwO29uJm5ic3A7WGVuPyZuYnNwOwomZ3Q7Jmd0OyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO0FueSZuYnNwO3JlcGxpZXMmbmJz
cDthcmUmbmJzcDt3ZWxjb21lISZuYnNwO1RoYW5rcyZuYnNwO2EmbmJzcDtsb3QuCiZndDsKJmd0
O0FJVUkmbmJzcDtNYWNWdGFwJm5ic3A7cnVucyZuYnNwO2luJm5ic3A7ZW11bGF0aW9uJm5ic3A7
bW9kZSZuYnNwO2FuZCZuYnNwO2Nvbm5lY3RzJm5ic3A7dG8mbmJzcDtWaXJ0SU8mbmJzcDt0byZu
YnNwO2ltcGxlbWVudAomZ3Q7dGhlJm5ic3A7ZmVhdHVyZSZuYnNwO3lvdSZuYnNwO3dhbnQuJm5i
c3A7VGhhdCZuYnNwO3dvdWxkJm5ic3A7bWVhbiZuYnNwO3lvdSZuYnNwO2Fsc28mbmJzcDtuZWVk
Jm5ic3A7dG8mbmJzcDt1c2UmbmJzcDtWaXJ0SU8KJmd0O25ldHdvcmsmbmJzcDtkcml2ZXImbmJz
cDtmb3ImbmJzcDtYZW4ncyZuYnNwO0hWTSZuYnNwO2RvbWFpbiwmbmJzcDtpZiZuYnNwO3lvdSZu
YnNwO21hbmFnZSZuYnNwO3RvJm5ic3A7Y29uZmlndXJlJm5ic3A7TWFjVnRhcAomZ3Q7Zm9yJm5i
c3A7WGVuLgomZ3Q7CiZndDtCYXNpY2FsbHkmbmJzcDt0aGF0Jm5ic3A7bWVhbnMmbmJzcDthJm5i
c3A7Y29uZmlndXJhdGlvbiZuYnNwO3RoYXQmbmJzcDtub2JvZHkmbmJzcDtldmVyJm5ic3A7dHJp
ZWQuJm5ic3A7R29vZCZuYnNwO2x1Y2suCiZndDs6LSkKJmd0OwomZ3Q7V2VpLgomZ3Q7CiZndDsK
Jmd0OyZndDsmbmJzcDtfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fXwomZ3Q7Jmd0OyZuYnNwO1hlbi1kZXZlbCZuYnNwO21haWxpbmcmbmJzcDtsaXN0CiZndDsm
Z3Q7Jm5ic3A7WGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKJmd0OyZndDsmbmJzcDtodHRwOi8vbGlz
dHMueGVuLm9yZy94ZW4tZGV2ZWwKJmd0Owo8L3ByZT48L2Rpdj48YnI+PGJyPjxzcGFuIHRpdGxl
PSJuZXRlYXNlZm9vdGVyIj48c3BhbiBpZD0ibmV0ZWFzZV9tYWlsX2Zvb3RlciI+PC9zcGFuPjwv
c3Bhbj4=
------=_Part_128210_1787703616.1389239543088--



--===============4124172115908359542==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4124172115908359542==--



From xen-devel-bounces@lists.xen.org Thu Jan 09 10:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CXe-00036O-1n; Thu, 09 Jan 2014 10:08:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W16Ks-0000HM-63; Thu, 09 Jan 2014 03:31:10 +0000
Received: from [85.158.139.211:38350] by server-5.bemta-5.messagelabs.com id
	92/C9-14928-DF71EC25; Thu, 09 Jan 2014 03:31:09 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389238265!8676703!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28860 invoked from network); 9 Jan 2014 03:31:07 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-4.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 03:31:07 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=BV3X6
	K+vKZkDOorbSYMceIBItu1S5HJG736B+49GGwc=; b=HIx+WmKNCiEZwWitNb6HT
	THM6rPxBbIYYIT+rcLlX8eKyWf887LBMWszC1lMMF1/AepfBkxUu0ty4NibT0OMH
	U8meL1lIdeEmA5erwyu6knE39SSkukxq4uk/yKSTbCpYW6qkAVA8ZxhL9w616hmY
	smkvdUPcBu/I2bhy3RzQNE=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Thu, 9 Jan 2014 11:30:48 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Thu, 9 Jan 2014 11:30:48 +0800 (CST)
From: topperxin <topperxin@126.com>
To: "David Vrabel" <david.vrabel@citrix.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <52CD5028.2060802@citrix.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108122710.GZ2924@reaktio.net> <52CD5028.2060802@citrix.com>
X-CM-CTRLDATA: PUquYWZvb3Rlcl9odG09MjgyNjo4MQ==
MIME-Version: 1.0
Message-ID: <7b7ef2c9.7d57.143750d6389.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAAHD6vtF85SZt1BAA--.12579W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiYBcODk3APfq3tgAAs8
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
X-Mailman-Approved-At: Thu, 09 Jan 2014 10:08:44 +0000
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0682092576828045042=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0682092576828045042==
Content-Type: multipart/alternative; 
	boundary="----=_Part_117059_1116309397.1389238248329"

------=_Part_117059_1116309397.1389238248329
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: base64

SGkgRGF2aWQKVGhhbmtzIGZvciB5b3VyIHJlcGx5LgpJIGhhdmUgZXZlciBoZWFyZCBib25kIHdh
eSwgbm90IHRyeS4KV2hhdCBJIHdhbnQgdG8ga25vdyBpcyB3aGV0aGVyIGJvbmQgZHJpdmVyIGNh
biBwcm92aWRlIHN0YWJsZSBhbmQgZW5naW5lZXJpbmcgc29sdXRpb24gZm9yIHRoZSBsaXZlIG1p
Z3JhdGlvbj8KCgrlnKggMjAxNC0wMS0wOCAyMToxODozMu+8jCJEYXZpZCBWcmFiZWwiIDxkYXZp
ZC52cmFiZWxAY2l0cml4LmNvbT4g5YaZ6YGT77yaCj5PbiAwOC8wMS8xNCAxMjoyNywgUGFzaSBL
w6Rya2vDpGluZW4gd3JvdGU6Cj4+IE9uIFdlZCwgSmFuIDA4LCAyMDE0IGF0IDA2OjIyOjA2UE0g
KzA4MDAsIHRvcHBlcnhpbiB3cm90ZToKPj4+ICAgIEhpIGxpc3QKPj4+ICAgICAgICAgICAgQXMg
d2UgYWxsIGtub3csIFNSLUlPViB0ZWNobm9sb2d5IGNhbiBpbXByb3ZlIFZOSUMncyBwZXJmb3Jt
YW5jZSwKPj4+ICAgIHdoaWxlIGl0IGNhbiBub3Qgc3VwcG9ydCBsaXZlIG1pZ3JhdGlvbi4gSSBn
ZXQgc29tZSBpbmZvcm1hdGlvbiByZWNlbnRseQo+Pj4gICAgdGhhdCBvbiBLVk0rVmlydGlvIHBs
YXRmb3JtLCBpZiB3ZSB1c2UgTWFjVnRhcCArIFNSLUlPViwgdGhlIGxpdmUKPj4+ICAgIG1pZ3Jh
dGlvbiBjb3VsZCBiZSBkb25lIHN1Y2Nlc3NmdWxseS4gV2hhdCBJIHdhbnQgdG8ga25vdyBpcyBt
YXkgSQo+Pj4gICAgY29uZmlndXJlIE1hY1Z0YXAgb24gWGVuPwo+Pj4gICAgICAgICAgIEFueSBy
ZXBsaWVzIGFyZSB3ZWxjb21lISBUaGFua3MgYSBsb3QuCj4+IAo+PiBJIHRoaW5rIHllYXJzIGFn
byAoMjAwOSwgcGVyaGFwcykgd2hlbiBTUi1JT1Ygd2FzIGZpcnN0IGRlbW9lZCB3aXRoIFhlbgo+
PiBpdCB3YXMgZGVtb2VkIHdpdGggbGl2ZSBtaWdyYXRpb24uLiBpdCB3YXMgYSBtaXh0dXJlIG9m
IFNSLUlPViBWRiArIFhlbiB2aWYuCj4+IAo+PiBTbyB3aXRoIHNvbWUgdG9vbHN0YWNrL3Njcmlw
dCBoYWNrZXJ5IHlvdSBjYW4gZG8gaXQuICg9dXNlIHRoZSB2aWYgcHYgZHJpdmVyIGR1cmluZyBt
aWdyYXRpb24sIG5vcm1hbGx5IHNyLWlvdiB2ZikuCj4KPllvdSBtYXkgZmluZCBjcmVhdGluZyBh
biBhY3RpdmUtYmFja3VwIGJvbmQgaW4gdGhlIGd1ZXN0IHVzZWZ1bC4gIFRoZQo+YWN0aXZlIGJv
bmQgbGluayBpcyB0aGUgU1ItSU9WIGRldmljZSBhbmQgdGhlIGJhY2t1cCBpcyB0aGUgVklGLgo+
Cj5JJ3ZlIG5vdCB0cmllZCB0aGlzIG15c2VsZi4KPgo+RGF2aWQK
------=_Part_117059_1116309397.1389238248329
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: base64

PGRpdiBzdHlsZT0ibGluZS1oZWlnaHQ6MS43O2NvbG9yOiMwMDAwMDA7Zm9udC1zaXplOjE0cHg7
Zm9udC1mYW1pbHk6YXJpYWwiPkhpJm5ic3A7PHNwYW4gY2xhc3M9IkFwcGxlLXN0eWxlLXNwYW4i
IHN0eWxlPSJ3aGl0ZS1zcGFjZTogcHJlLXdyYXA7ICI+RGF2aWQ8L3NwYW4+PGRpdj48c3BhbiBj
bGFzcz0iQXBwbGUtc3R5bGUtc3BhbiIgc3R5bGU9IndoaXRlLXNwYWNlOiBwcmUtd3JhcDsiPiAg
ICBUaGFua3MgZm9yIHlvdXIgcmVwbHkuPC9zcGFuPjwvZGl2PjxkaXY+PHNwYW4gY2xhc3M9IkFw
cGxlLXN0eWxlLXNwYW4iIHN0eWxlPSJ3aGl0ZS1zcGFjZTogcHJlLXdyYXA7Ij4gICAgSSBoYXZl
IGV2ZXIgaGVhcmQgYm9uZCB3YXksIG5vdCB0cnkuPC9zcGFuPjwvZGl2PjxkaXY+PHNwYW4gY2xh
c3M9IkFwcGxlLXN0eWxlLXNwYW4iIHN0eWxlPSJ3aGl0ZS1zcGFjZTogcHJlLXdyYXA7Ij4gICAg
V2hhdCBJIHdhbnQgdG8ga25vdyBpcyB3aGV0aGVyIGJvbmQgZHJpdmVyIGNhbiBwcm92aWRlIHN0
YWJsZSBhbmQgZW5naW5lZXJpbmcgc29sdXRpb24gZm9yIHRoZSBsaXZlIG1pZ3JhdGlvbj88L3Nw
YW4+PC9kaXY+PGRpdj48c3BhbiBjbGFzcz0iQXBwbGUtc3R5bGUtc3BhbiIgc3R5bGU9IndoaXRl
LXNwYWNlOiBwcmUtd3JhcDsiPiAgICAgPGJyPjwvc3Bhbj48cHJlPuWcqCZuYnNwOzIwMTQtMDEt
MDgmbmJzcDsyMToxODozMu+8jCJEYXZpZCZuYnNwO1ZyYWJlbCImbmJzcDsmbHQ7ZGF2aWQudnJh
YmVsQGNpdHJpeC5jb20mZ3Q7Jm5ic3A75YaZ6YGT77yaCiZndDtPbiZuYnNwOzA4LzAxLzE0Jm5i
c3A7MTI6MjcsJm5ic3A7UGFzaSZuYnNwO0vDpHJra8OkaW5lbiZuYnNwO3dyb3RlOgomZ3Q7Jmd0
OyZuYnNwO09uJm5ic3A7V2VkLCZuYnNwO0phbiZuYnNwOzA4LCZuYnNwOzIwMTQmbmJzcDthdCZu
YnNwOzA2OjIyOjA2UE0mbmJzcDsrMDgwMCwmbmJzcDt0b3BwZXJ4aW4mbmJzcDt3cm90ZToKJmd0
OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7SGkmbmJzcDtsaXN0CiZndDsmZ3Q7Jmd0
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwO0FzJm5ic3A7d2UmbmJzcDthbGwmbmJzcDtrbm93LCZuYnNwO1NSLUlP
ViZuYnNwO3RlY2hub2xvZ3kmbmJzcDtjYW4mbmJzcDtpbXByb3ZlJm5ic3A7Vk5JQydzJm5ic3A7
cGVyZm9ybWFuY2UsCiZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO3doaWxlJm5i
c3A7aXQmbmJzcDtjYW4mbmJzcDtub3QmbmJzcDtzdXBwb3J0Jm5ic3A7bGl2ZSZuYnNwO21pZ3Jh
dGlvbi4mbmJzcDtJJm5ic3A7Z2V0Jm5ic3A7c29tZSZuYnNwO2luZm9ybWF0aW9uJm5ic3A7cmVj
ZW50bHkKJmd0OyZndDsmZ3Q7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7dGhhdCZuYnNwO29uJm5i
c3A7S1ZNK1ZpcnRpbyZuYnNwO3BsYXRmb3JtLCZuYnNwO2lmJm5ic3A7d2UmbmJzcDt1c2UmbmJz
cDtNYWNWdGFwJm5ic3A7KyZuYnNwO1NSLUlPViwmbmJzcDt0aGUmbmJzcDtsaXZlCiZndDsmZ3Q7
Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO21pZ3JhdGlvbiZuYnNwO2NvdWxkJm5ic3A7YmUm
bmJzcDtkb25lJm5ic3A7c3VjY2Vzc2Z1bGx5LiZuYnNwO1doYXQmbmJzcDtJJm5ic3A7d2FudCZu
YnNwO3RvJm5ic3A7a25vdyZuYnNwO2lzJm5ic3A7bWF5Jm5ic3A7SQomZ3Q7Jmd0OyZndDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDtjb25maWd1cmUmbmJzcDtNYWNWdGFwJm5ic3A7b24mbmJzcDtY
ZW4/CiZndDsmZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwO0FueSZuYnNwO3JlcGxpZXMmbmJzcDthcmUmbmJzcDt3
ZWxjb21lISZuYnNwO1RoYW5rcyZuYnNwO2EmbmJzcDtsb3QuCiZndDsmZ3Q7Jm5ic3A7CiZndDsm
Z3Q7Jm5ic3A7SSZuYnNwO3RoaW5rJm5ic3A7eWVhcnMmbmJzcDthZ28mbmJzcDsoMjAwOSwmbmJz
cDtwZXJoYXBzKSZuYnNwO3doZW4mbmJzcDtTUi1JT1YmbmJzcDt3YXMmbmJzcDtmaXJzdCZuYnNw
O2RlbW9lZCZuYnNwO3dpdGgmbmJzcDtYZW4KJmd0OyZndDsmbmJzcDtpdCZuYnNwO3dhcyZuYnNw
O2RlbW9lZCZuYnNwO3dpdGgmbmJzcDtsaXZlJm5ic3A7bWlncmF0aW9uLi4mbmJzcDtpdCZuYnNw
O3dhcyZuYnNwO2EmbmJzcDttaXh0dXJlJm5ic3A7b2YmbmJzcDtTUi1JT1YmbmJzcDtWRiZuYnNw
OysmbmJzcDtYZW4mbmJzcDt2aWYuCiZndDsmZ3Q7Jm5ic3A7CiZndDsmZ3Q7Jm5ic3A7U28mbmJz
cDt3aXRoJm5ic3A7c29tZSZuYnNwO3Rvb2xzdGFjay9zY3JpcHQmbmJzcDtoYWNrZXJ5Jm5ic3A7
eW91Jm5ic3A7Y2FuJm5ic3A7ZG8mbmJzcDtpdC4mbmJzcDsoPXVzZSZuYnNwO3RoZSZuYnNwO3Zp
ZiZuYnNwO3B2Jm5ic3A7ZHJpdmVyJm5ic3A7ZHVyaW5nJm5ic3A7bWlncmF0aW9uLCZuYnNwO25v
cm1hbGx5Jm5ic3A7c3ItaW92Jm5ic3A7dmYpLgomZ3Q7CiZndDtZb3UmbmJzcDttYXkmbmJzcDtm
aW5kJm5ic3A7Y3JlYXRpbmcmbmJzcDthbiZuYnNwO2FjdGl2ZS1iYWNrdXAmbmJzcDtib25kJm5i
c3A7aW4mbmJzcDt0aGUmbmJzcDtndWVzdCZuYnNwO3VzZWZ1bC4mbmJzcDsmbmJzcDtUaGUKJmd0
O2FjdGl2ZSZuYnNwO2JvbmQmbmJzcDtsaW5rJm5ic3A7aXMmbmJzcDt0aGUmbmJzcDtTUi1JT1Ym
bmJzcDtkZXZpY2UmbmJzcDthbmQmbmJzcDt0aGUmbmJzcDtiYWNrdXAmbmJzcDtpcyZuYnNwO3Ro
ZSZuYnNwO1ZJRi4KJmd0OwomZ3Q7SSd2ZSZuYnNwO25vdCZuYnNwO3RyaWVkJm5ic3A7dGhpcyZu
YnNwO215c2VsZi4KJmd0OwomZ3Q7RGF2aWQKPC9wcmU+PC9kaXY+PC9kaXY+PGJyPjxicj48c3Bh
biB0aXRsZT0ibmV0ZWFzZWZvb3RlciI+PHNwYW4gaWQ9Im5ldGVhc2VfbWFpbF9mb290ZXIiPjwv
c3Bhbj48L3NwYW4+
------=_Part_117059_1116309397.1389238248329--



--===============0682092576828045042==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0682092576828045042==--



From xen-devel-bounces@lists.xen.org Thu Jan 09 10:10:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CYz-0003kB-UC; Thu, 09 Jan 2014 10:10:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1CYx-0003jk-TA
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 10:10:08 +0000
Received: from [85.158.137.68:41877] by server-14.bemta-3.messagelabs.com id
	EE/9E-06105-F757EC25; Thu, 09 Jan 2014 10:10:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389262204!8121172!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31063 invoked from network); 9 Jan 2014 10:10:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:10:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91217364"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:10:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	05:10:03 -0500
Message-ID: <1389262202.27473.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 10:10:02 +0000
In-Reply-To: <osstest-24309-mainreport@xen.org>
References: <osstest-24309-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24309: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 03:12 +0000, xen.org wrote:
> Regressions which are regarded as allowable (not blocking):
>  test-armhf-armhf-xl           7 debian-install               fail   like 24298

http://www.chiark.greenend.org.uk/~xensrcts/logs/24309/test-armhf-armhf-xl/7.ts-debian-install.log:
        Creating ext3 filesystem on /dev/marilith-n5/debian.guest.osstest-disk
        Done
        Installation method: debootstrap
        Done
        System installation failed.  Aborting
        /tmp/LeR4jpAMA2/etc/ssh/ssh_host_rsa_key.pub: No such file or directory
        cannot remove directory for /tmp/LeR4jpAMA2: Device or resource busy at /usr/share/perl/5.14/File/Temp.pm line 902
        Running command 'umount /tmp/LeR4jpAMA2/proc 2>&1' failed with exit code 256.
        Aborting
        See /var/log/xen-tools/debian.guest.osstest.log for details

http://www.chiark.greenend.org.uk/~xensrcts/logs/24309/test-armhf-armhf-xl/marilith-n5---var-log-xen-tools-debian.guest.osstest.log:
        I: Retrieving Release
        I: Retrieving Release.gpg
        I: Checking Release signature
        I: Valid Release signature (key id 0E4EDE2C7F3E1FC0D033800E64481591B98321F9)
        E: Invalid Release file, no entry for main/binary-armhf/Packages
        WARNING (/usr/bin/xt-install-image): The installed system at /tmp/LeR4jpAMA2 doesn't seem to be a full system.
        WARNING (/usr/bin/xt-install-image): The installed system is missing the common file: /bin/ls.
        WARNING (/usr/bin/xt-install-image): The installed system at /tmp/LeR4jpAMA2 doesn't seem to be a full system.
        WARNING (/usr/bin/xt-install-image): The installed system is missing the common file: /bin/cp.
        
        Copying files from new installation to host.
        Copying files from /tmp/LeR4jpAMA2/var/cache/apt/archives -> /var/cache/apt/archives/
        Done
        Done
        Done
        System installation failed.  Aborting
        umount: /tmp/LeR4jpAMA2/proc: not found
        Running command 'umount /tmp/LeR4jpAMA2/proc 2>&1' failed with exit code 256.

Is this some weirdness in the mirror? It looks like this is
http://10.80.16.196/debian but I can't seem to browse it. Perhaps it has
indexes turned off and so I need to guess the correct name.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

        Done
        Done
        Done
        System installation failed.  Aborting
        umount: /tmp/LeR4jpAMA2/proc: not found
        Running command 'umount /tmp/LeR4jpAMA2/proc 2>&1' failed with exit code 256.

Is this some weirdness in the mirror? It looks like this is
http://10.80.16.196/debian but I can't seem to browse it. Perhaps it has
indexes turned off and so I need to guess the correct name.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:21:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ck3-0004Cm-FX; Thu, 09 Jan 2014 10:21:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1Ck1-0004Ch-IQ
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 10:21:33 +0000
Received: from [85.158.143.35:8982] by server-2.bemta-4.messagelabs.com id
	B5/05-11386-C287EC25; Thu, 09 Jan 2014 10:21:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389262889!10595298!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15613 invoked from network); 9 Jan 2014 10:21:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:21:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91219784"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:21:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 05:21:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1Cjv-0001jS-LM;
	Thu, 09 Jan 2014 10:21:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1Cjv-0003kg-KZ;
	Thu, 09 Jan 2014 10:21:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24313-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 10:21:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24313: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24313 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24313/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  7 debian-install              fail pass in 24309
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 24309
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 24309 pass in 24313

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24298

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24309 never pass

version targeted for testing:
 xen                  025c1b755afc9a9f42f71ef167c20fdc616b1d2d
baseline version:
 xen                  cedfdd43a9798e535a05690bb6f01394490d26bb

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Bob Liu <bob.liu@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=025c1b755afc9a9f42f71ef167c20fdc616b1d2d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 025c1b755afc9a9f42f71ef167c20fdc616b1d2d
+ branch=xen-unstable
+ revision=025c1b755afc9a9f42f71ef167c20fdc616b1d2d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 025c1b755afc9a9f42f71ef167c20fdc616b1d2d:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   cedfdd4..025c1b7  025c1b755afc9a9f42f71ef167c20fdc616b1d2d -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:24:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Cmu-0004tj-OO; Thu, 09 Jan 2014 10:24:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Cmt-0004sw-8F
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 10:24:31 +0000
Received: from [193.109.254.147:30739] by server-6.bemta-14.messagelabs.com id
	ED/82-14958-ED87EC25; Thu, 09 Jan 2014 10:24:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389263067!7516940!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5328 invoked from network); 9 Jan 2014 10:24:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:24:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91220475"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:24:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	05:24:26 -0500
Message-ID: <1389263065.27473.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 10:24:25 +0000
In-Reply-To: <1389262202.27473.50.camel@kazak.uk.xensource.com>
References: <osstest-24309-mainreport@xen.org>
	<1389262202.27473.50.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24309: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 10:10 +0000, Ian Campbell wrote:
>         E: Invalid Release file, no entry for main/binary-armhf/Packages

Ah:
xen-create-image \
            --dhcp --mac 5a:36:0e:f5:00:01 \
            --memory 512Mb --swap 1000Mb \
            --dist squeeze \
            --mirror http://10.80.16.196/debian \
            --hostname debian.guest.osstest \
            --lvm marilith-n5 --force \
            --kernel /boot/vmlinuz-3.13.0-rc6+ \
            --genpass 0 --password xenroot \
            --initrd /boot/initrd.img-3.13.0-rc6+ \
            --arch armhf

-- notice the "--dist squeeze". I'll look into an osstest patch!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:32:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Cuk-0006Sg-Sy; Thu, 09 Jan 2014 10:32:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W1Cuj-0006Sb-95
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:32:37 +0000
Received: from [193.109.254.147:53115] by server-4.bemta-14.messagelabs.com id
	46/42-03916-4CA7EC25; Thu, 09 Jan 2014 10:32:36 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389263555!7454069!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14920 invoked from network); 9 Jan 2014 10:32:35 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 10:32:35 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W1CuU-000HLI-Fk; Thu, 09 Jan 2014 10:32:22 +0000
Date: Thu, 9 Jan 2014 11:32:22 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140109103222.GA64369@deinos.phlegethon.org>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108181007.GB75747@deinos.phlegethon.org>
	<1389256911.6917.36.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389256911.6917.36.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:41 +0000 on 09 Jan (1389253311), Ian Campbell wrote:
> On Wed, 2014-01-08 at 19:10 +0100, Tim Deegan wrote:
> > At 17:44 +0000 on 08 Jan (1389199462), Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > > > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > > > Using volatile is almost always wrong. Why do you think it is needed
> > > > > > > here?
> > > > > > 
> > > > > > This was from Mukesh Rathor:
> > > > > > 
> > > > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > > > 
> > > > > > I saw no reason to make it volatile, but maybe "kdb" needs this? Happy
> > > > > > to change any way you want.
> > > > > 
> > > > > I'm not the maintainer but if I were I'd say drop the volatile and maybe
> > > > > mark it __read_mostly and perhaps __used too (since it's static it might
> > > > > otherwise get eliminated).
> > > > > 
> > > > > > > If anything this variable is exactly the opposite, i.e. __read_mostly or
> > > > > > > even const (given that I can't see anything which writes it I suppose
> > > > > > > this is a compile time setting?)
> > > > > > 
> > > > > > That has been how I have been testing it so far (changing the source
> > > > > > to set values).  Mukesh claims to be able to change it at will.  Not
> > > > > > sure how const may affect this.
> > > > 
> > > > If the idea is to use kdb itself to frob the value, then it does need
> > > > something to stop the compiler caching it.  This might even be one of
> > > > the few cases where 'volatile' actually DTRT; it would still be more
> > > > in keeping with Xen style to use an explicit read op (like
> > > > atomic_read()) where the value is consumed.
> > > 
> > > Is there any need to be asynchronously frobbing this value in the middle
> > > of a function within this file and expecting it to be reliable? I'd have
> > > thought that changing the value and having it take effect on the next
> > > debug event/hypercall/whatever would be what was wanted.
> > 
> > The variable is static and there's nothing in the file that updates
> > it, so the compiler might drop it entirely.  Maybe __used__ would be
> > good enough to stop the compiler dropping all reads, but I'm not sure.
> 
> Isn't that exactly what __used (or __used__) is for?

The docs say that "the variable must be emitted even if it appears
that the variable is not referenced" but what that means for
individual accesses I don't know.  So, probably?  I would be inclined
to distrust the compiler here and just wrap the accesses in a
read_atomic.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:33:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1CvL-0006VN-CQ; Thu, 09 Jan 2014 10:33:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.bader@canonical.com>) id 1W1CvJ-0006VD-SQ
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:33:14 +0000
Received: from [85.158.137.68:62895] by server-3.bemta-3.messagelabs.com id
	96/B9-10658-8EA7EC25; Thu, 09 Jan 2014 10:33:12 +0000
X-Env-Sender: stefan.bader@canonical.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389263592!8084113!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6339 invoked from network); 9 Jan 2014 10:33:12 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-8.tower-31.messagelabs.com with SMTP;
	9 Jan 2014 10:33:12 -0000
Received: from hsi-kbw-078-042-118-041.hsi3.kabel-badenwuerttemberg.de
	([78.42.118.41] helo=canonical.com)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <stefan.bader@canonical.com>)
	id 1W1CvG-0006lE-8a; Thu, 09 Jan 2014 10:33:10 +0000
From: Stefan Bader <stefan.bader@canonical.com>
To: xen-devel@lists.xen.org
Date: Thu,  9 Jan 2014 11:33:09 +0100
Message-Id: <1389263589-11955-1-git-send-email-stefan.bader@canonical.com>
X-Mailer: git-send-email 1.7.9.5
Cc: jfehlig@suse.com, Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] libxl: Auto-assign NIC devids in
	initiate_domain_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This appeared to be working on a quick test with a caller leaving
all devids unset when starting an HVM guest and one that sets them
all. Possible optimizations (maybe nice to have but probably not
important):
1. something more complicated to find gaps in devids
2. limit auto-assignment in initiate_domain_create to HVM domains

-Stefan

>From bafc8f62ee3e3175ec4d978bceba4b5f891a597d Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@canonical.com>
Date: Wed, 8 Jan 2014 18:26:59 +0100
Subject: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create

This will change initiate_domain_create to walk through NIC definitions
and automatically assign devids to those which do not have one assigned.
The devids are needed later in domcreate_launch_dm (for HVM domains
using emulated NICs). The command string for starting the device-model
has those ids as part of its arguments.
Assignment of devids in the hotplug case is handled by libxl_device_nic_add
but that would be called too late in the startup case.
I also moved the call to libxl__device_nic_setdefault here as this seems
to be the only path leading there and avoids doing the loop a third time.
The two loops are trying to handle a case where the caller sets some devids
(not sure that should be valid) but leaves some unset.

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
---
 tools/libxl/libxl_create.c |   35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e03bb55..543e0c8 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     uint32_t domid;
     int i, ret;
+    size_t last_devid = -1;
 
     /* convenience aliases */
     libxl_domain_config *const d_config = dcs->guest_config;
@@ -746,6 +747,29 @@ static void initiate_domain_create(libxl__egc *egc,
     libxl_device_disk *bootdisk =
         d_config->num_disks > 0 ? &d_config->disks[0] : NULL;
 
+    /*
+     * The devid has to be set before launching the device model. For the
+     * hotplug case this is done in libxl_device_nic_add but on domain
+     * creation this is called too late.
+     * Make two runs over configured NICs in order to avoid duplicate IDs
+     * in case the caller partially assigned IDs.
+     */
+    for (i = 0; i < d_config->num_nics; i++) {
+        /* We have to init the nic here, because we still haven't
+         * called libxl_device_nic_add when domcreate_launch_dm gets called,
+         * but qemu needs the nic information to be complete.
+         */
+        ret = libxl__device_nic_setdefault(gc, &d_config->nics[i], domid);
+        if (ret) goto error_out;
+
+        if (d_config->nics[i].devid > last_devid)
+            last_devid = d_config->nics[i].devid;
+    }
+    for (i = 0; i < d_config->num_nics; i++) {
+        if (d_config->nics[i].devid < 0)
+            d_config->nics[i].devid = ++last_devid;
+    }
+
     if (restore_fd >= 0) {
         LOG(DEBUG, "restoring, not running bootloader\n");
         domcreate_bootloader_done(egc, &dcs->bl, 0);
@@ -1058,17 +1082,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         }
     }
 
-
-
-    for (i = 0; i < d_config->num_nics; i++) {
-        /* We have to init the nic here, because we still haven't
-         * called libxl_device_nic_add at this point, but qemu needs
-         * the nic information to be complete.
-         */
-        ret = libxl__device_nic_setdefault(gc, &d_config->nics[i], domid);
-        if (ret)
-            goto error_out;
-    }
     switch (d_config->c_info.type) {
     case LIBXL_DOMAIN_TYPE_HVM:
     {
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:48:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:48:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DA1-0007Zm-8M; Thu, 09 Jan 2014 10:48:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W1D9z-0007Zb-ME
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:48:24 +0000
Received: from [85.158.139.211:7912] by server-8.bemta-5.messagelabs.com id
	98/E9-29838-67E7EC25; Thu, 09 Jan 2014 10:48:22 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389264500!8749925!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 9 Jan 2014 10:48:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 10:48:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09AmHtD023005
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 10:48:18 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09AmGrh010292
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 10:48:16 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s09AmFSY028236; Thu, 9 Jan 2014 10:48:15 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 02:48:14 -0800
Message-ID: <52CE7E67.5080708@oracle.com>
Date: Thu, 09 Jan 2014 18:48:07 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
In-Reply-To: <52CBC700.1060602@zynstra.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/07/2014 05:21 PM, James Dingwall wrote:
> Bob Liu wrote:
>> Could you confirm that this problem doesn't exist if loading tmem with
>> selfshrinking=0 during compile gcc? It seems that you are compiling
>> difference packages during your testing.
>> This will help to figure out whether selfshrinking is the root cause.
> Got an oom with selfshrinking=0, again during a gcc compile.
> Unfortunately I don't have a single test case which demonstrates the
> problem but as I mentioned before it will generally show up under
> compiles of large packages such as glibc, kdelibs, gcc etc.
> 

So the root cause is not that selfshrinking was enabled.
What I can think of then is that the xen-selfballoon driver was too
aggressive: too many pages were ballooned out, which caused heavy memory
pressure on the guest OS.
kswapd then started to reclaim pages until most of them were
unreclaimable (all_unreclaimable=yes for all zones), at which point the
OOM killer was triggered.
In theory the balloon driver should give ballooned-out pages back to the
guest OS, but I'm afraid this procedure is not fast enough.

My suggestion is to reserve a minimum amount of memory for your guest OS
so that xen-selfballoon won't be so aggressive. You can do this through
the selfballoon_reserved_mb or selfballoon_min_usable_mb parameters.

> I don't know if this is a separate or related issue but over the
> holidays I also had a problem with six of the guests on my system where
> kswapd was running at 100% and had clocked up >9000 minutes of cpu time
> even though there was otherwise no load on them.  Of the guests I
> restarted yesterday in this state two have already got in to the same
> state again, they are running a kernel with the first patch that you sent.
> 

Could you get the meminfo in guest OS at that time?
cat /proc/meminfo
cat /proc/vmstat

Thanks,
-Bob

> /sys/module/tmem/parameters/cleancache Y
> /sys/module/tmem/parameters/frontswap Y
> /sys/module/tmem/parameters/selfballooning Y
> /sys/module/tmem/parameters/selfshrinking N
> 
> James
> 
> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
> oom_score_adj=0
> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
> ffff88001f90e8e8
> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
> ffff88000094f9b8
> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
> ffff88000094f9a8
> [ 8212.940542] Call Trace:
> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
> [ 8212.940578]  [<ffffffff81494abe>] ?
> _raw_spin_unlock_irqrestore+0x47/0x62
> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
> [ 8212.940679] Mem-Info:
> [ 8212.940681] Node 0 DMA per-cpu:
> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
> [ 8212.940686] Node 0 DMA32 per-cpu:
> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
> [ 8212.940691] Node 0 Normal per-cpu:
> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>  active_file:8412 inactive_file:8612 isolated_file:0
>  unevictable:0 dirty:0 writeback:0 unstable:0
>  free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>  mapped:3792 shmem:6 pagetables:2534 bounce:0
>  free_cma:0 totalram:246132 balloontarget:306242
> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
> active_anon:5092kB inactive_anon:5328kB active_file:416kB
> inactive_file:608kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:26951 all_unreclaimable? yes
> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
> high:4092kB active_anon:181456kB inactive_anon:181528kB
> active_file:22296kB inactive_file:22644kB unevictable:0kB
> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:612393 all_unreclaimable? yes
> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:745963 all_unreclaimable? yes
> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
> [ 8212.940765] 16847 total pagecache pages
> [ 8212.940766] 8381 pages in swap cache
> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
> 250268/342284
> [ 8212.940769] Free swap  = 1925576kB
> [ 8212.940770] Total swap = 2097148kB
> [ 8212.951044] 262143 pages RAM
> [ 8212.951046] 11939 pages reserved
> [ 8212.951047] 540820 pages shared
> [ 8212.951048] 240248 pages non-shared
> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
> oom_score_adj name
> <snip process list>
> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
> sacrifice child
> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
> anon-rss:350980kB, file-rss:9408kB
> [54810.683658] kjournald starting.  Commit interval 5 seconds
> [54810.684381] EXT3-fs (xvda1): using internal journal
> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:48:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:48:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DA1-0007Zm-8M; Thu, 09 Jan 2014 10:48:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W1D9z-0007Zb-ME
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:48:24 +0000
Received: from [85.158.139.211:7912] by server-8.bemta-5.messagelabs.com id
	98/E9-29838-67E7EC25; Thu, 09 Jan 2014 10:48:22 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389264500!8749925!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 9 Jan 2014 10:48:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 10:48:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09AmHtD023005
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 10:48:18 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09AmGrh010292
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 10:48:16 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s09AmFSY028236; Thu, 9 Jan 2014 10:48:15 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 02:48:14 -0800
Message-ID: <52CE7E67.5080708@oracle.com>
Date: Thu, 09 Jan 2014 18:48:07 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
In-Reply-To: <52CBC700.1060602@zynstra.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/07/2014 05:21 PM, James Dingwall wrote:
> Bob Liu wrote:
>> Could you confirm that this problem doesn't exist if loading tmem with
>> selfshrinking=0 while compiling gcc? It seems that you are compiling
>> different packages during your testing.
>> This will help to figure out whether selfshrinking is the root cause.
> Got an oom with selfshrinking=0, again during a gcc compile.
> Unfortunately I don't have a single test case which demonstrates the
> problem, but as I mentioned before it will generally show up under
> compiles of large packages such as glibc, kdelibs, gcc etc.
> 

So the root cause is not the enabled selfshrinking.
Then what I can think of is that the xen-selfballoon driver was too
aggressive: too many pages were ballooned out, which caused heavy memory
pressure in the guest OS.
kswapd then started to reclaim pages until most of them were
unreclaimable (all_unreclaimable=yes for all zones), and the OOM killer
was triggered.
In theory the balloon driver should give ballooned-out pages back to the
guest OS, but I'm afraid this procedure is not fast enough.

My suggestion is to reserve a minimum amount of memory for your guest OS
so that xen-selfballoon won't be so aggressive.
You can do that through the selfballoon_reserved_mb or
selfballoon_min_usable_mb parameters.
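
To be concrete, a sketch of what I mean (the sysfs path and the 128/192 MB
values are my assumptions from reading the 3.12 xen-selfballoon driver;
please verify the location and pick values to suit your guests):

```shell
# Raise the self-balloon floor so the guest keeps more memory resident.
# NOTE: the sysfs path below is an assumption based on the 3.12
# drivers/xen/xen-selfballoon.c; verify that it exists on your kernel.
XM=/sys/devices/system/xen_memory/xen_memory0
if [ -w "$XM/selfballoon_reserved_mb" ]; then
    # keep at least 128 MB out of the balloon's reach
    echo 128 > "$XM/selfballoon_reserved_mb"
    # don't let usable memory drop below 192 MB
    echo 192 > "$XM/selfballoon_min_usable_mb"
else
    echo "selfballoon knobs not found; is xen-selfballoon active?" >&2
fi
```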

> I don't know if this is a separate or related issue but over the
> holidays I also had a problem with six of the guests on my system where
> kswapd was running at 100% and had clocked up >9000 minutes of cpu time
> even though there was otherwise no load on them.  Of the guests I
> restarted yesterday in this state, two have already got into the same
> state again; they are running a kernel with the first patch that you sent.
> 

Could you get the meminfo from the guest OS at that time?
cat /proc/meminfo
cat /proc/vmstat
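
When a guest is in that state, a snapshot script along these lines would
capture everything in one go (plain procfs reads; the ps flags assume a
procps-style ps and are untested elsewhere):

```shell
#!/bin/sh
# Collect a timestamped memory snapshot from the guest for the list.
ts=$(date +%Y%m%d-%H%M%S)
out="memsnap-$ts.txt"
{
  echo "=== /proc/meminfo ==="
  cat /proc/meminfo
  echo "=== /proc/vmstat ==="
  cat /proc/vmstat
  echo "=== kswapd CPU time ==="
  # procps -eo syntax; may need adjusting for other ps implementations
  ps -eo pid,comm,time | grep '[k]swapd' || echo "no kswapd thread found"
} > "$out"
echo "wrote $out"
```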

Thanks,
-Bob

> /sys/module/tmem/parameters/cleancache Y
> /sys/module/tmem/parameters/frontswap Y
> /sys/module/tmem/parameters/selfballooning Y
> /sys/module/tmem/parameters/selfshrinking N
> 
> James
> 
> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
> oom_score_adj=0
> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
> ffff88001f90e8e8
> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
> ffff88000094f9b8
> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
> ffff88000094f9a8
> [ 8212.940542] Call Trace:
> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
> [ 8212.940578]  [<ffffffff81494abe>] ?
> _raw_spin_unlock_irqrestore+0x47/0x62
> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
> [ 8212.940679] Mem-Info:
> [ 8212.940681] Node 0 DMA per-cpu:
> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
> [ 8212.940686] Node 0 DMA32 per-cpu:
> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
> [ 8212.940691] Node 0 Normal per-cpu:
> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>  active_file:8412 inactive_file:8612 isolated_file:0
>  unevictable:0 dirty:0 writeback:0 unstable:0
>  free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>  mapped:3792 shmem:6 pagetables:2534 bounce:0
>  free_cma:0 totalram:246132 balloontarget:306242
> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
> active_anon:5092kB inactive_anon:5328kB active_file:416kB
> inactive_file:608kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:26951 all_unreclaimable? yes
> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
> high:4092kB active_anon:181456kB inactive_anon:181528kB
> active_file:22296kB inactive_file:22644kB unevictable:0kB
> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:612393 all_unreclaimable? yes
> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:745963 all_unreclaimable? yes
> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
> [ 8212.940765] 16847 total pagecache pages
> [ 8212.940766] 8381 pages in swap cache
> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
> 250268/342284
> [ 8212.940769] Free swap  = 1925576kB
> [ 8212.940770] Total swap = 2097148kB
> [ 8212.951044] 262143 pages RAM
> [ 8212.951046] 11939 pages reserved
> [ 8212.951047] 540820 pages shared
> [ 8212.951048] 240248 pages non-shared
> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
> oom_score_adj name
> <snip process list>
> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
> sacrifice child
> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
> anon-rss:350980kB, file-rss:9408kB
> [54810.683658] kjournald starting.  Commit interval 5 seconds
> [54810.684381] EXT3-fs (xvda1): using internal journal
> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:54:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DG2-0008Qu-6M; Thu, 09 Jan 2014 10:54:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W1DG1-0008Qo-Gn
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:54:37 +0000
Received: from [85.158.143.35:11272] by server-2.bemta-4.messagelabs.com id
	38/A6-11386-CEF7EC25; Thu, 09 Jan 2014 10:54:36 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389264875!7966601!1
X-Originating-IP: [213.199.154.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32517 invoked from network); 9 Jan 2014 10:54:35 -0000
Received: from mail-am1lp0012.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.12)
	by server-15.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	9 Jan 2014 10:54:35 -0000
Received: from DBXPRD0310HT002.eurprd03.prod.outlook.com (10.255.65.165) by
	AMSPR03MB307.eurprd03.prod.outlook.com (10.242.83.28) with Microsoft
	SMTP Server (TLS) id 15.0.842.7; Thu, 9 Jan 2014 10:54:28 +0000
Received: from [192.168.10.196] (193.63.64.25) by pod51013.outlook.com
	(10.255.65.165) with Microsoft SMTP Server (TLS) id 14.16.395.1;
	Thu, 9 Jan 2014 10:54:28 +0000
Message-ID: <52CE7FE2.6020306@zynstra.com>
Date: Thu, 9 Jan 2014 10:54:26 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com>
In-Reply-To: <52CE7E67.5080708@oracle.com>
X-Originating-IP: [193.63.64.25]
X-Forefront-PRVS: 008663486A
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(189002)(51704005)(479174003)(199002)(377454003)(24454002)(164054003)(80976001)(81342001)(77982001)(69226001)(59896001)(47776003)(54356001)(81686001)(53806001)(76482001)(63696002)(83072002)(33656001)(36756003)(80316001)(79102001)(81542001)(50466002)(66066001)(59766001)(49866001)(47736001)(50986001)(81816001)(47976001)(74366001)(87936001)(54316002)(83506001)(46102001)(74662001)(74502001)(31966008)(23756003)(85306002)(76786001)(47446002)(74706001)(90146001)(56816005)(83322001)(19580395003)(64126003)(51856001)(80022001)(85852003)(56776001)(74876001)(76796001)(4396001)(92566001)(92726001)(414714003)(473944003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AMSPR03MB307;
	H:DBXPRD0310HT002.eurprd03.prod.outlook.com; CLIP:193.63.64.25;
	FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/07/2014 05:21 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> Could you confirm that this problem doesn't exist if loading tmem with
>>> selfshrinking=0 while compiling gcc? It seems that you are compiling
>>> different packages during your testing.
>>> This will help to figure out whether selfshrinking is the root cause.
>> Got an oom with selfshrinking=0, again during a gcc compile.
>> Unfortunately I don't have a single test case which demonstrates the
>> problem, but as I mentioned before it will generally show up under
>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>
> So the root cause is not the enabled selfshrinking.
> Then what I can think of is that the xen-selfballoon driver was too
> aggressive: too many pages were ballooned out, which caused heavy memory
> pressure in the guest OS.
> kswapd then started to reclaim pages until most of them were
> unreclaimable (all_unreclaimable=yes for all zones), and the OOM killer
> was triggered.
> In theory the balloon driver should give ballooned-out pages back to the
> guest OS, but I'm afraid this procedure is not fast enough.
>
> My suggestion is to reserve a minimum amount of memory for your guest OS
> so that xen-selfballoon won't be so aggressive.
> You can do that through the selfballoon_reserved_mb or
> selfballoon_min_usable_mb parameters.
I will try your suggestions and let you know.
>
>> I don't know if this is a separate or related issue but over the
>> holidays I also had a problem with six of the guests on my system where
>> kswapd was running at 100% and had clocked up >9000 minutes of cpu time
>> even though there was otherwise no load on them.  Of the guests I
>> restarted yesterday in this state, two have already got into the same
>> state again; they are running a kernel with the first patch that you sent.
>>
> Could you get the meminfo from the guest OS at that time?
> cat /proc/meminfo
MemTotal:         364080 kB
MemFree:          130448 kB
Buffers:            1260 kB
Cached:           129352 kB
SwapCached:          300 kB
Active:            21412 kB
Inactive:         160888 kB
Active(anon):       7732 kB
Inactive(anon):    44676 kB
Active(file):      13680 kB
Inactive(file):   116212 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097148 kB
SwapFree:        2096704 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:         51532 kB
Mapped:            14172 kB
Shmem:               720 kB
Slab:              19580 kB
SReclaimable:       7732 kB
SUnreclaim:        11848 kB
KernelStack:        1824 kB
PageTables:         7968 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2279188 kB
Committed_AS:     338792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        9020 kB
VmallocChunk:   34359716472 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
DirectMap4k:     1048576 kB
DirectMap2M:           0 kB

> cat /proc/vmstat
nr_free_pages 32775
nr_alloc_batch 0
nr_inactive_anon 11167
nr_active_anon 1904
nr_inactive_file 29054
nr_active_file 3420
nr_unevictable 0
nr_mlock 0
nr_anon_pages 12869
nr_mapped 3543
nr_file_pages 32724
nr_dirty 5
nr_writeback 0
nr_slab_reclaimable 1933
nr_slab_unreclaimable 2959
nr_page_table_pages 1988
nr_kernel_stack 228
nr_unstable 0
nr_bounce 0
nr_vmscan_write 781197
nr_vmscan_immediate_reclaim 6241
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 180
nr_dirtied 86426
nr_written 860157
numa_hit 8323637
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 8323637
numa_other 0
nr_anon_transparent_hugepages 0
nr_free_cma 0
nr_dirty_threshold 15359
nr_dirty_background_threshold 7679
pgpgin 2044246
pgpgout 643646
pswpin 123
pswpout 153
pgalloc_dma 164528
pgalloc_dma32 7332263
pgalloc_normal 1018515
pgalloc_movable 0
pgfree 8548450
pgactivate 2011347
pgdeactivate 2274842
pgfault 7231978
pgmajfault 345038
pgrefill_dma 55260
pgrefill_dma32 2261099
pgrefill_normal 1771
pgrefill_movable 0
pgsteal_kswapd_dma 44877
pgsteal_kswapd_dma32 2586249
pgsteal_kswapd_normal 0
pgsteal_kswapd_movable 0
pgsteal_direct_dma 0
pgsteal_direct_dma32 37
pgsteal_direct_normal 0
pgsteal_direct_movable 0
pgscan_kswapd_dma 204746
pgscan_kswapd_dma32 4474736
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 39
pgscan_direct_normal 0
pgscan_direct_movable 0
pgscan_direct_throttle 0
zone_reclaim_failed 0
pginodesteal 0
slabs_scanned 2713984
kswapd_inodesteal 41065
kswapd_low_wmark_hit_quickly 14894
kswapd_high_wmark_hit_quickly 115972041
pageoutrun 115992287
allocstall 1
pgrotated 8495
numa_pte_updates 0
numa_huge_pte_updates 0
numa_hint_faults 0
numa_hint_faults_local 0
numa_pages_migrated 0
pgmigrate_success 0
pgmigrate_fail 0
compact_migrate_scanned 0
compact_free_scanned 0
compact_isolated 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 29364
unevictable_pgs_scanned 0
unevictable_pgs_rescued 29137
unevictable_pgs_mlocked 29542
unevictable_pgs_munlocked 29542
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
nr_tlb_remote_flush 10666
nr_tlb_remote_flush_received 21336
nr_tlb_local_flush_all 65481
nr_tlb_local_flush_one 1431260

>
> Thanks,
> -Bob
>
>> /sys/module/tmem/parameters/cleancache Y
>> /sys/module/tmem/parameters/frontswap Y
>> /sys/module/tmem/parameters/selfballooning Y
>> /sys/module/tmem/parameters/selfshrinking N
>>
>> James
>>
>> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
>> oom_score_adj=0
>> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
>> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
>> ffff88001f90e8e8
>> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
>> ffff88000094f9b8
>> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
>> ffff88000094f9a8
>> [ 8212.940542] Call Trace:
>> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
>> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
>> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
>> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
>> [ 8212.940578]  [<ffffffff81494abe>] ?
>> _raw_spin_unlock_irqrestore+0x47/0x62
>> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
>> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
>> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
>> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
>> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
>> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
>> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
>> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
>> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
>> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
>> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
>> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
>> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
>> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
>> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
>> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
>> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
>> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
>> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
>> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
>> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
>> [ 8212.940679] Mem-Info:
>> [ 8212.940681] Node 0 DMA per-cpu:
>> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940686] Node 0 DMA32 per-cpu:
>> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
>> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
>> [ 8212.940691] Node 0 Normal per-cpu:
>> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>>   active_file:8412 inactive_file:8612 isolated_file:0
>>   unevictable:0 dirty:0 writeback:0 unstable:0
>>   free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>>   mapped:3792 shmem:6 pagetables:2534 bounce:0
>>   free_cma:0 totalram:246132 balloontarget:306242
>> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
>> active_anon:5092kB inactive_anon:5328kB active_file:416kB
>> inactive_file:608kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
>> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
>> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:26951 all_unreclaimable? yes
>> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
>> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
>> high:4092kB active_anon:181456kB inactive_anon:181528kB
>> active_file:22296kB inactive_file:22644kB unevictable:0kB
>> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
>> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
>> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
>> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:612393 all_unreclaimable? yes
>> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
>> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
>> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
>> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
>> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:745963 all_unreclaimable? yes
>> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
>> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
>> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
>> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
>> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
>> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
>> [ 8212.940765] 16847 total pagecache pages
>> [ 8212.940766] 8381 pages in swap cache
>> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
>> 250268/342284
>> [ 8212.940769] Free swap  = 1925576kB
>> [ 8212.940770] Total swap = 2097148kB
>> [ 8212.951044] 262143 pages RAM
>> [ 8212.951046] 11939 pages reserved
>> [ 8212.951047] 540820 pages shared
>> [ 8212.951048] 240248 pages non-shared
>> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
>> oom_score_adj name
>> <snip process list>
>> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
>> sacrifice child
>> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
>> anon-rss:350980kB, file-rss:9408kB
>> [54810.683658] kjournald starting.  Commit interval 5 seconds
>> [54810.684381] EXT3-fs (xvda1): using internal journal
>> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 10:54:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DG2-0008Qu-6M; Thu, 09 Jan 2014 10:54:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W1DG1-0008Qo-Gn
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 10:54:37 +0000
Received: from [85.158.143.35:11272] by server-2.bemta-4.messagelabs.com id
	38/A6-11386-CEF7EC25; Thu, 09 Jan 2014 10:54:36 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389264875!7966601!1
X-Originating-IP: [213.199.154.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32517 invoked from network); 9 Jan 2014 10:54:35 -0000
Received: from mail-am1lp0012.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.12)
	by server-15.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	9 Jan 2014 10:54:35 -0000
Received: from DBXPRD0310HT002.eurprd03.prod.outlook.com (10.255.65.165) by
	AMSPR03MB307.eurprd03.prod.outlook.com (10.242.83.28) with Microsoft
	SMTP Server (TLS) id 15.0.842.7; Thu, 9 Jan 2014 10:54:28 +0000
Received: from [192.168.10.196] (193.63.64.25) by pod51013.outlook.com
	(10.255.65.165) with Microsoft SMTP Server (TLS) id 14.16.395.1;
	Thu, 9 Jan 2014 10:54:28 +0000
Message-ID: <52CE7FE2.6020306@zynstra.com>
Date: Thu, 9 Jan 2014 10:54:26 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com>
In-Reply-To: <52CE7E67.5080708@oracle.com>
X-Originating-IP: [193.63.64.25]
X-Forefront-PRVS: 008663486A
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(189002)(51704005)(479174003)(199002)(377454003)(24454002)(164054003)(80976001)(81342001)(77982001)(69226001)(59896001)(47776003)(54356001)(81686001)(53806001)(76482001)(63696002)(83072002)(33656001)(36756003)(80316001)(79102001)(81542001)(50466002)(66066001)(59766001)(49866001)(47736001)(50986001)(81816001)(47976001)(74366001)(87936001)(54316002)(83506001)(46102001)(74662001)(74502001)(31966008)(23756003)(85306002)(76786001)(47446002)(74706001)(90146001)(56816005)(83322001)(19580395003)(64126003)(51856001)(80022001)(85852003)(56776001)(74876001)(76796001)(4396001)(92566001)(92726001)(414714003)(473944003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AMSPR03MB307;
	H:DBXPRD0310HT002.eurprd03.prod.outlook.com; CLIP:193.63.64.25;
	FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/07/2014 05:21 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> Could you confirm that this problem doesn't exist when tmem is loaded
>>> with selfshrinking=0 while compiling gcc? It seems that you are
>>> compiling different packages during your testing.
>>> This will help figure out whether selfshrinking is the root cause.
>> Got an oom with selfshrinking=0, again during a gcc compile.
>> Unfortunately I don't have a single test case which demonstrates the
>> problem but as I mentioned before it will generally show up under
>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>
> So the root cause is not the enabled selfshrinking.
> Then what I can think of is that the xen-selfballoon driver was too
> aggressive: too many pages were ballooned out, which caused heavy memory
> pressure in the guest OS.
> kswapd then started reclaiming pages until most pages were
> unreclaimable (all_unreclaimable=yes for all zones), at which point the
> OOM killer was triggered.
> In theory the balloon driver should give ballooned-out pages back to
> the guest OS, but I'm afraid this procedure is not fast enough.
>
> My suggestion is to reserve a minimum amount of memory for your guest OS
> so that xen-selfballoon won't be so aggressive.
> You can do that through the selfballoon_reserved_mb or
> selfballoon_min_usable_mb parameters.
I will try your suggestions and let you know.
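A minimal user-space sketch of applying that suggestion, assuming the tunables are exposed as writable sysfs nodes; the exact path is an assumption here (on many kernels the knobs live under /sys/devices/system/xen_memory/xen_memory0/), so verify it on your kernel before relying on it:

```c
#include <assert.h>
#include <stdio.h>

/* Write a megabyte floor into a selfballoon tunable such as
 * selfballoon_reserved_mb or selfballoon_min_usable_mb.  Returns 0 on
 * success, -1 if the node is absent or not writable. */
static int write_tunable(const char *path, long mb)
{
    FILE *f = fopen(path, "w");
    int rc = -1;

    if (!f)
        return rc;
    if (fprintf(f, "%ld\n", mb) > 0)
        rc = 0;
    if (fclose(f) != 0)
        rc = -1;
    return rc;
}
```

For example, write_tunable("/sys/devices/system/xen_memory/xen_memory0/selfballoon_reserved_mb", 512) would ask the driver to leave at least 512 MiB un-ballooned; again, the path is an assumed location, not confirmed by this thread.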
>
>> I don't know if this is a separate or a related issue, but over the
>> holidays I also had a problem with six of the guests on my system where
>> kswapd was running at 100% and had clocked up >9000 minutes of CPU time
>> even though there was otherwise no load on them.  Of the guests I
>> restarted yesterday in this state, two have already got into the same
>> state again; they are running a kernel with the first patch that you sent.
>>
> Could you get the meminfo in guest OS at that time?
> cat /proc/meminfo
MemTotal:         364080 kB
MemFree:          130448 kB
Buffers:            1260 kB
Cached:           129352 kB
SwapCached:          300 kB
Active:            21412 kB
Inactive:         160888 kB
Active(anon):       7732 kB
Inactive(anon):    44676 kB
Active(file):      13680 kB
Inactive(file):   116212 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097148 kB
SwapFree:        2096704 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:         51532 kB
Mapped:            14172 kB
Shmem:               720 kB
Slab:              19580 kB
SReclaimable:       7732 kB
SUnreclaim:        11848 kB
KernelStack:        1824 kB
PageTables:         7968 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2279188 kB
Committed_AS:     338792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        9020 kB
VmallocChunk:   34359716472 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
DirectMap4k:     1048576 kB
DirectMap2M:           0 kB

> cat /proc/vmstat
nr_free_pages 32775
nr_alloc_batch 0
nr_inactive_anon 11167
nr_active_anon 1904
nr_inactive_file 29054
nr_active_file 3420
nr_unevictable 0
nr_mlock 0
nr_anon_pages 12869
nr_mapped 3543
nr_file_pages 32724
nr_dirty 5
nr_writeback 0
nr_slab_reclaimable 1933
nr_slab_unreclaimable 2959
nr_page_table_pages 1988
nr_kernel_stack 228
nr_unstable 0
nr_bounce 0
nr_vmscan_write 781197
nr_vmscan_immediate_reclaim 6241
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 180
nr_dirtied 86426
nr_written 860157
numa_hit 8323637
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 8323637
numa_other 0
nr_anon_transparent_hugepages 0
nr_free_cma 0
nr_dirty_threshold 15359
nr_dirty_background_threshold 7679
pgpgin 2044246
pgpgout 643646
pswpin 123
pswpout 153
pgalloc_dma 164528
pgalloc_dma32 7332263
pgalloc_normal 1018515
pgalloc_movable 0
pgfree 8548450
pgactivate 2011347
pgdeactivate 2274842
pgfault 7231978
pgmajfault 345038
pgrefill_dma 55260
pgrefill_dma32 2261099
pgrefill_normal 1771
pgrefill_movable 0
pgsteal_kswapd_dma 44877
pgsteal_kswapd_dma32 2586249
pgsteal_kswapd_normal 0
pgsteal_kswapd_movable 0
pgsteal_direct_dma 0
pgsteal_direct_dma32 37
pgsteal_direct_normal 0
pgsteal_direct_movable 0
pgscan_kswapd_dma 204746
pgscan_kswapd_dma32 4474736
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 39
pgscan_direct_normal 0
pgscan_direct_movable 0
pgscan_direct_throttle 0
zone_reclaim_failed 0
pginodesteal 0
slabs_scanned 2713984
kswapd_inodesteal 41065
kswapd_low_wmark_hit_quickly 14894
kswapd_high_wmark_hit_quickly 115972041
pageoutrun 115992287
allocstall 1
pgrotated 8495
numa_pte_updates 0
numa_huge_pte_updates 0
numa_hint_faults 0
numa_hint_faults_local 0
numa_pages_migrated 0
pgmigrate_success 0
pgmigrate_fail 0
compact_migrate_scanned 0
compact_free_scanned 0
compact_isolated 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 29364
unevictable_pgs_scanned 0
unevictable_pgs_rescued 29137
unevictable_pgs_mlocked 29542
unevictable_pgs_munlocked 29542
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
nr_tlb_remote_flush 10666
nr_tlb_remote_flush_received 21336
nr_tlb_local_flush_all 65481
nr_tlb_local_flush_one 1431260

>
> Thanks,
> -Bob
>
>> /sys/module/tmem/parameters/cleancache Y
>> /sys/module/tmem/parameters/frontswap Y
>> /sys/module/tmem/parameters/selfballooning Y
>> /sys/module/tmem/parameters/selfshrinking N
>>
>> James
>>
>> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
>> oom_score_adj=0
>> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
>> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
>> ffff88001f90e8e8
>> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
>> ffff88000094f9b8
>> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
>> ffff88000094f9a8
>> [ 8212.940542] Call Trace:
>> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
>> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
>> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
>> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
>> [ 8212.940578]  [<ffffffff81494abe>] ?
>> _raw_spin_unlock_irqrestore+0x47/0x62
>> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
>> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
>> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
>> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
>> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
>> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
>> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
>> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
>> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
>> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
>> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
>> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
>> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
>> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
>> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
>> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
>> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
>> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
>> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
>> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
>> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
>> [ 8212.940679] Mem-Info:
>> [ 8212.940681] Node 0 DMA per-cpu:
>> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940686] Node 0 DMA32 per-cpu:
>> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
>> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
>> [ 8212.940691] Node 0 Normal per-cpu:
>> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>>   active_file:8412 inactive_file:8612 isolated_file:0
>>   unevictable:0 dirty:0 writeback:0 unstable:0
>>   free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>>   mapped:3792 shmem:6 pagetables:2534 bounce:0
>>   free_cma:0 totalram:246132 balloontarget:306242
>> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
>> active_anon:5092kB inactive_anon:5328kB active_file:416kB
>> inactive_file:608kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
>> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
>> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:26951 all_unreclaimable? yes
>> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
>> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
>> high:4092kB active_anon:181456kB inactive_anon:181528kB
>> active_file:22296kB inactive_file:22644kB unevictable:0kB
>> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
>> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
>> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
>> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:612393 all_unreclaimable? yes
>> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
>> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
>> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
>> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
>> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:745963 all_unreclaimable? yes
>> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
>> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
>> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
>> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
>> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
>> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
>> [ 8212.940765] 16847 total pagecache pages
>> [ 8212.940766] 8381 pages in swap cache
>> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
>> 250268/342284
>> [ 8212.940769] Free swap  = 1925576kB
>> [ 8212.940770] Total swap = 2097148kB
>> [ 8212.951044] 262143 pages RAM
>> [ 8212.951046] 11939 pages reserved
>> [ 8212.951047] 540820 pages shared
>> [ 8212.951048] 240248 pages non-shared
>> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
>> oom_score_adj name
>> <snip process list>
>> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
>> sacrifice child
>> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
>> anon-rss:350980kB, file-rss:9408kB
>> [54810.683658] kjournald starting.  Commit interval 5 seconds
>> [54810.684381] EXT3-fs (xvda1): using internal journal
>> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
>>



From xen-devel-bounces@lists.xen.org Thu Jan 09 10:59:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 10:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DKw-0000yX-MP; Thu, 09 Jan 2014 10:59:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1DKu-0000yO-KI
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 10:59:40 +0000
Received: from [85.158.139.211:4439] by server-11.bemta-5.messagelabs.com id
	B7/B2-23268-B118EC25; Thu, 09 Jan 2014 10:59:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389265177!8752663!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29546 invoked from network); 9 Jan 2014 10:59:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 10:59:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91227882"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 10:59:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	05:59:36 -0500
Message-ID: <1389265175.27473.67.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 9 Jan 2014 10:59:35 +0000
In-Reply-To: <alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
> On Wed, 8 Jan 2014, Julien Grall wrote:
> > The p2m is shared between VCPUs for each domain. Currently Xen only flushes
> > the TLB on the local PCPU. This could result in a mismatch between the
> > mappings in the p2m and the TLBs.
> > 
> > Flush TLBs used by this domain on every PCPU.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> The fix makes sense to me.

Me too. Has anyone had a grep for similar issues?

(the reason for getting this wrong is that for cache flushes the "is"
suffix restricts the flush to the IS domain, whereas with tlb flushes
the "is" suffix broadcasts instead of keeping it local, which is a bit
confusing ;-))

> 
> > ---
> > 
> > This is a possible bug fix (found by reading the code) for Xen 4.4. I have
> > added a small optimisation to avoid flushing all TLBs when a VCPU of this
> > domain is running on the current cpu.
> > 
> > The downside of this patch is that the function can be a little bit slower
> > because Xen is flushing more TLBs.
> 
> Yes, I wonder how much slower it is going to be, considering that the flush
> is executed for every iteration of the loop.

It might be better to set the current VMID to the target domain for the
duration of this function, we'd still need the broadcast but at least we
wouldn't be killing unrelated VMIDs.

Pulling the flush out of the loop would require great care WRT accesses
from other VCPUs, e.g. you'd have to put the pages on a list (page->list
might be available?) and issue the put_page() after the flush, otherwise
a page might get recycled into another domain while the first domain still
has TLB entries for it.
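The deferred-release scheme described above can be sketched in plain C. Names such as struct page, put_page() and flush_tlb_domain() are illustrative stand-ins here, not the real Xen API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of "put the pages on a list and issue the put_page() after the
 * flush": the reference on each page is held across the single broadcast
 * TLB flush, so no page can be recycled into another domain while stale
 * TLB entries for it may still exist. */

struct page {
    struct page *next;          /* stands in for page->list */
    int refcount;
    bool released_after_flush;  /* instrumentation for this sketch */
};

static bool tlb_flushed;

static void flush_tlb_domain(void) /* stand-in for the broadcast flush */
{
    tlb_flushed = true;
}

static void put_page(struct page *pg)
{
    if (--pg->refcount == 0)
        pg->released_after_flush = tlb_flushed;
}

static void unmap_pages(struct page **pages, size_t n)
{
    struct page *deferred = NULL;

    for (size_t i = 0; i < n; i++) {
        /* ...remove the p2m entry for pages[i] here... */
        pages[i]->next = deferred;   /* defer the release */
        deferred = pages[i];
    }

    flush_tlb_domain();              /* one flush after the whole loop */

    while (deferred) {               /* only now drop the references */
        struct page *pg = deferred;

        deferred = pg->next;
        put_page(pg);
    }
}
```

If the put_page() sat inside the loop instead, a page whose refcount hit zero could be reused before the flush, which is exactly the hazard being discussed.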

Or is there always something outside this function which holds another
ref count such that the page definitely won't be freed by the put_page
here?

Actually, does the existing code not have this issue already? The put_page
is before the flush. If this bug does exist now, then I'd be inclined to
consider this a bug fix for 4.4, rather than a potential optimisation
for 4.5.

While looking at this function I'm now wondering what happens to the
existing page on ALLOCATE or INSERT: is it leaked?

Ian.



From xen-devel-bounces@lists.xen.org Thu Jan 09 11:03:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DOS-0001EW-I7; Thu, 09 Jan 2014 11:03:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1DOR-0001EP-NU
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 11:03:19 +0000
Received: from [85.158.143.35:62695] by server-2.bemta-4.messagelabs.com id
	3A/98-11386-7F18EC25; Thu, 09 Jan 2014 11:03:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389265397!10651878!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24042 invoked from network); 9 Jan 2014 11:03:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 11:03:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91228646"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 11:03:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	06:03:16 -0500
Message-ID: <1389265395.27473.69.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefan Bader <stefan.bader@canonical.com>
Date: Thu, 9 Jan 2014 11:03:15 +0000
In-Reply-To: <1389263589-11955-1-git-send-email-stefan.bader@canonical.com>
References: <1389263589-11955-1-git-send-email-stefan.bader@canonical.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: jfehlig@suse.com, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: Auto-assign NIC devids in
	initiate_domain_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 11:33 +0100, Stefan Bader wrote:
> This appeared to be working on a quick test with a caller leaving
> all devids unset when starting an HVM guest and one that sets them
> all. Possible optimisations (maybe nice to have but probably not
> important):
> 1. something more complicated to find gaps in devids
> 2. limit auto-assignment in initiate_domain_create to HVM domains
> 
> -Stefan
> 
> From bafc8f62ee3e3175ec4d978bceba4b5f891a597d Mon Sep 17 00:00:00 2001
> From: Stefan Bader <stefan.bader@canonical.com>
> Date: Wed, 8 Jan 2014 18:26:59 +0100
> Subject: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create
> 
> This will change initiate_domain_create to walk through NIC definitions
> and automatically assign devids to those which have not assigned one.
> The devids are needed later in domcreate_launch_dm (for HVM domains
> using emulated NICs). The command string for starting the device-model
> has those ids as part of its arguments.
> Assignment of devids in the hotplug case is handled by libxl_device_nic_add
> but that would be called too late in the startup case.
> I also moved the call to libxl__device_nic_setdefault here as this seems
> to be the only path leading there and avoids doing the loop a third time.
> The two loops are trying to handle a case where the caller sets some devids
> (not sure that should be valid) but leaves some unset.
> 
> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I think from a release point of view we should take this since it is a
bug fix to the API which at least libvirt has tripped over (although
libvirt has worked around it, others may not have done so).

Ian J: Does that make sense?

> ---
>  tools/libxl/libxl_create.c |   35 ++++++++++++++++++++++++-----------
>  1 file changed, 24 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index e03bb55..543e0c8 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      uint32_t domid;
>      int i, ret;
> +    size_t last_devid = -1;
>  
>      /* convenience aliases */
>      libxl_domain_config *const d_config = dcs->guest_config;
> @@ -746,6 +747,29 @@ static void initiate_domain_create(libxl__egc *egc,
>      libxl_device_disk *bootdisk =
>          d_config->num_disks > 0 ? &d_config->disks[0] : NULL;
>  
> +    /*
> +     * The devid has to be set before launching the device model. For the
> +     * hotplug case this is done in libxl_device_nic_add but on domain
> +     * creation this is called too late.
> +     * Make two runs over configured NICs in order to avoid duplicate IDs
> +     * in case the caller partially assigned IDs.
> +     */
> +    for (i = 0; i < d_config->num_nics; i++) {
> +        /* We have to init the nic here, because we still haven't
> +         * called libxl_device_nic_add when domcreate_launch_dm gets called,
> +         * but qemu needs the nic information to be complete.
> +         */
> +        ret = libxl__device_nic_setdefault(gc, &d_config->nics[i], domid);
> +        if (ret) goto error_out;
> +
> +        if (d_config->nics[i].devid > last_devid)
> +            last_devid = d_config->nics[i].devid;
> +    }
> +    for (i = 0; i < d_config->num_nics; i++) {
> +        if (d_config->nics[i].devid < 0)
> +            d_config->nics[i].devid = ++last_devid;
> +    }
> +
>      if (restore_fd >= 0) {
>          LOG(DEBUG, "restoring, not running bootloader\n");
>          domcreate_bootloader_done(egc, &dcs->bl, 0);
> @@ -1058,17 +1082,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>          }
>      }
>  
> -
> -
> -    for (i = 0; i < d_config->num_nics; i++) {
> -        /* We have to init the nic here, because we still haven't
> -         * called libxl_device_nic_add at this point, but qemu needs
> -         * the nic information to be complete.
> -         */
> -        ret = libxl__device_nic_setdefault(gc, &d_config->nics[i], domid);
> -        if (ret)
> -            goto error_out;
> -    }
>      switch (d_config->c_info.type) {
>      case LIBXL_DOMAIN_TYPE_HVM:
>      {



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 11:05:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DQ9-0001Lf-5H; Thu, 09 Jan 2014 11:05:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W1DQ7-0001LY-Go
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 11:05:03 +0000
Received: from [193.109.254.147:27137] by server-4.bemta-14.messagelabs.com id
	C6/62-03916-E528EC25; Thu, 09 Jan 2014 11:05:02 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389265501!9782138!1
X-Originating-IP: [213.199.154.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18037 invoked from network); 9 Jan 2014 11:05:01 -0000
Received: from mail-am1lp0012.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.12)
	by server-5.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	9 Jan 2014 11:05:01 -0000
Received: from DBXPRD0310HT005.eurprd03.prod.outlook.com (10.255.65.168) by
	AMSPR03MB196.eurprd03.prod.outlook.com (10.242.85.19) with Microsoft
	SMTP Server (TLS) id 15.0.842.7; Thu, 9 Jan 2014 11:05:00 +0000
Received: from [192.168.10.196] (193.63.64.25) by pod51013.outlook.com
	(10.255.65.168) with Microsoft SMTP Server (TLS) id 14.16.395.1;
	Thu, 9 Jan 2014 11:04:59 +0000
Message-ID: <52CE825A.9090006@zynstra.com>
Date: Thu, 9 Jan 2014 11:04:58 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com>
In-Reply-To: <52CE7E67.5080708@oracle.com>
X-Originating-IP: [193.63.64.25]
X-Forefront-PRVS: 008663486A
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(51704005)(479174003)(377454003)(164054003)(189002)(199002)(24454002)(69226001)(85306002)(83506001)(23756003)(81686001)(74366001)(63696002)(80022001)(59896001)(74502001)(80976001)(76786001)(47776003)(76796001)(47446002)(64126003)(92726001)(92566001)(33656001)(81542001)(79102001)(66066001)(81342001)(77982001)(83072002)(53806001)(59766001)(80316001)(83322001)(76482001)(50466002)(56776001)(4396001)(46102001)(85852003)(47976001)(56816005)(49866001)(47736001)(54356001)(90146001)(50986001)(51856001)(81816001)(74706001)(87936001)(74662001)(31966008)(36756003)(54316002)(19580395003)(74876001)(414714003)(473944003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AMSPR03MB196;
	H:DBXPRD0310HT005.eurprd03.prod.outlook.com; CLIP:193.63.64.25;
	FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/07/2014 05:21 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> Could you confirm that this problem doesn't exist if loading tmem with
>>> selfshrinking=0 during compile gcc? It seems that you are compiling
>>> difference packages during your testing.
>>> This will help to figure out whether selfshrinking is the root cause.
>> Got an oom with selfshrinking=0, again during a gcc compile.
>> Unfortunately I don't have a single test case which demonstrates the
>> problem but as I mentioned before it will generally show up under
>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>
> So the root cause is not the enabled selfshrinking.
> Then what I can think of is that the xen-selfballoon driver was too
> aggressive: too many pages were ballooned out, which caused heavy memory
> pressure in the guest OS.
> And kswapd started to reclaim pages until most of them were
> unreclaimable (all_unreclaimable=yes for all zones), at which point the
> OOM killer was triggered.
> In theory the balloon driver should give ballooned-out pages back to the
> guest OS, but I'm afraid that procedure is not fast enough.
>
> My suggestion is to reserve a minimum amount of memory for your guest OS
> so that xen-selfballoon won't be so aggressive.
> You can do this through the parameters selfballoon_reserved_mb or
> selfballoon_min_usable_mb.
>
>> I don't know if this is a separate or related issue but over the
>> holidays I also had a problem with six of the guests on my system where
>> kswapd was running at 100% and had clocked up >9000 minutes of cpu time
>> even though there was otherwise no load on them.  Of the guests I
>> restarted yesterday in this state two have already got in to the same
>> state again, they are running a kernel with the first patch that you sent.
As soon as I echoed 32 into both of (originally 0)
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_reserved_mb
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_min_usable_mb
the kswapd process stopped running at 100%.  Unfortunately I didn't 
check between the two commands to see whether one by itself made a 
difference, but I'll look for that next time.
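[The adjustment reported above boils down to two sysfs writes; the paths
are as given in this thread, and the 32 MB value is simply what worked
in this report, not a universal recommendation.]

```shell
# Raise the selfballoon floor so the guest keeps some memory in reserve.
# Values are per-workload; 32 MB is the figure used in this thread.
echo 32 > /sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_reserved_mb
echo 32 > /sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_min_usable_mb
```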
> Could you get the meminfo in guest OS at that time?
After
> cat /proc/meminfo
MemTotal:         397028 kB
MemFree:          163756 kB
Buffers:            1260 kB
Cached:           129284 kB
SwapCached:          132 kB
Active:            22664 kB
Inactive:         159576 kB
Active(anon):       8004 kB
Inactive(anon):    44412 kB
Active(file):      14660 kB
Inactive(file):   115164 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097148 kB
SwapFree:        2096896 kB
Dirty:                20 kB
Writeback:             0 kB
AnonPages:         51640 kB
Mapped:            14136 kB
Shmem:               720 kB
Slab:              19492 kB
SReclaimable:       7692 kB
SUnreclaim:        11800 kB
KernelStack:        1816 kB
PageTables:         7928 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2295660 kB
Committed_AS:     338552 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        9020 kB
VmallocChunk:   34359716408 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
DirectMap4k:     1048576 kB
DirectMap2M:           0 kB

> cat /proc/vmstat
nr_free_pages 40916
nr_alloc_batch 0
nr_inactive_anon 11102
nr_active_anon 2009
nr_inactive_file 28791
nr_active_file 3665
nr_unevictable 0
nr_mlock 0
nr_anon_pages 12904
nr_mapped 3534
nr_file_pages 32669
nr_dirty 5
nr_writeback 0
nr_slab_reclaimable 1923
nr_slab_unreclaimable 2945
nr_page_table_pages 1982
nr_kernel_stack 227
nr_unstable 0
nr_bounce 0
nr_vmscan_write 781891
nr_vmscan_immediate_reclaim 6245
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 180
nr_dirtied 86609
nr_written 861010
numa_hit 8353372
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 8353372
numa_other 0
nr_anon_transparent_hugepages 0
nr_free_cma 0
nr_dirty_threshold 16991
nr_dirty_background_threshold 8495
pgpgin 2044575
pgpgout 645866
pswpin 123
pswpout 153
pgalloc_dma 164944
pgalloc_dma32 7347917
pgalloc_normal 1032559
pgalloc_movable 0
pgfree 8586607
pgactivate 2012718
pgdeactivate 2276721
pgfault 7295414
pgmajfault 345301
pgrefill_dma 55271
pgrefill_dma32 2263007
pgrefill_normal 1771
pgrefill_movable 0
pgsteal_kswapd_dma 44880
pgsteal_kswapd_dma32 2587500
pgsteal_kswapd_normal 0
pgsteal_kswapd_movable 0
pgsteal_direct_dma 0
pgsteal_direct_dma32 37
pgsteal_direct_normal 0
pgsteal_direct_movable 0
pgscan_kswapd_dma 204749
pgscan_kswapd_dma32 4477230
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 39
pgscan_direct_normal 0
pgscan_direct_movable 0
pgscan_direct_throttle 0
zone_reclaim_failed 0
pginodesteal 0
slabs_scanned 2720128
kswapd_inodesteal 41065
kswapd_low_wmark_hit_quickly 14897
kswapd_high_wmark_hit_quickly 116697740
pageoutrun 116717997
allocstall 1
pgrotated 8497
numa_pte_updates 0
numa_huge_pte_updates 0
numa_hint_faults 0
numa_hint_faults_local 0
numa_pages_migrated 0
pgmigrate_success 0
pgmigrate_fail 0
compact_migrate_scanned 0
compact_free_scanned 0
compact_isolated 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 29365
unevictable_pgs_scanned 0
unevictable_pgs_rescued 29145
unevictable_pgs_mlocked 29550
unevictable_pgs_munlocked 29550
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
nr_tlb_remote_flush 10780
nr_tlb_remote_flush_received 21564
nr_tlb_local_flush_all 66247
nr_tlb_local_flush_one 1446496

>
> Thanks,
> -Bob
>
>> /sys/module/tmem/parameters/cleancache Y
>> /sys/module/tmem/parameters/frontswap Y
>> /sys/module/tmem/parameters/selfballooning Y
>> /sys/module/tmem/parameters/selfshrinking N
>>
>> James
>>
>> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
>> oom_score_adj=0
>> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
>> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
>> ffff88001f90e8e8
>> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
>> ffff88000094f9b8
>> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
>> ffff88000094f9a8
>> [ 8212.940542] Call Trace:
>> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
>> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
>> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
>> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
>> [ 8212.940578]  [<ffffffff81494abe>] ?
>> _raw_spin_unlock_irqrestore+0x47/0x62
>> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
>> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
>> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
>> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
>> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
>> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
>> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
>> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
>> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
>> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
>> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
>> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
>> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
>> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
>> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
>> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
>> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
>> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
>> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
>> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
>> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
>> [ 8212.940679] Mem-Info:
>> [ 8212.940681] Node 0 DMA per-cpu:
>> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940686] Node 0 DMA32 per-cpu:
>> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
>> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
>> [ 8212.940691] Node 0 Normal per-cpu:
>> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>>   active_file:8412 inactive_file:8612 isolated_file:0
>>   unevictable:0 dirty:0 writeback:0 unstable:0
>>   free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>>   mapped:3792 shmem:6 pagetables:2534 bounce:0
>>   free_cma:0 totalram:246132 balloontarget:306242
>> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
>> active_anon:5092kB inactive_anon:5328kB active_file:416kB
>> inactive_file:608kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
>> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
>> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:26951 all_unreclaimable? yes
>> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
>> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
>> high:4092kB active_anon:181456kB inactive_anon:181528kB
>> active_file:22296kB inactive_file:22644kB unevictable:0kB
>> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
>> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
>> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
>> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:612393 all_unreclaimable? yes
>> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
>> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
>> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
>> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
>> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:745963 all_unreclaimable? yes
>> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
>> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
>> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
>> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
>> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
>> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
>> [ 8212.940765] 16847 total pagecache pages
>> [ 8212.940766] 8381 pages in swap cache
>> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
>> 250268/342284
>> [ 8212.940769] Free swap  = 1925576kB
>> [ 8212.940770] Total swap = 2097148kB
>> [ 8212.951044] 262143 pages RAM
>> [ 8212.951046] 11939 pages reserved
>> [ 8212.951047] 540820 pages shared
>> [ 8212.951048] 240248 pages non-shared
>> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
>> oom_score_adj name
>> <snip process list>
>> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
>> sacrifice child
>> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
>> anon-rss:350980kB, file-rss:9408kB
>> [54810.683658] kjournald starting.  Commit interval 5 seconds
>> [54810.684381] EXT3-fs (xvda1): using internal journal
>> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
>>


-- 

*James Dingwall*

Script Monkey


Zynstra is a private limited company registered in England and Wales 
(registered number 07864369).  Our registered office is 5 New Street 
Square, London, EC4A 3TW and our headquarters are at Bath Ventures, 
Broad Quay, Bath, BA1 1UD.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 11:05:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1DQ9-0001Lf-5H; Thu, 09 Jan 2014 11:05:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W1DQ7-0001LY-Go
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 11:05:03 +0000
Received: from [193.109.254.147:27137] by server-4.bemta-14.messagelabs.com id
	C6/62-03916-E528EC25; Thu, 09 Jan 2014 11:05:02 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389265501!9782138!1
X-Originating-IP: [213.199.154.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18037 invoked from network); 9 Jan 2014 11:05:01 -0000
Received: from mail-am1lp0012.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.12)
	by server-5.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	9 Jan 2014 11:05:01 -0000
Received: from DBXPRD0310HT005.eurprd03.prod.outlook.com (10.255.65.168) by
	AMSPR03MB196.eurprd03.prod.outlook.com (10.242.85.19) with Microsoft
	SMTP Server (TLS) id 15.0.842.7; Thu, 9 Jan 2014 11:05:00 +0000
Received: from [192.168.10.196] (193.63.64.25) by pod51013.outlook.com
	(10.255.65.168) with Microsoft SMTP Server (TLS) id 14.16.395.1;
	Thu, 9 Jan 2014 11:04:59 +0000
Message-ID: <52CE825A.9090006@zynstra.com>
Date: Thu, 9 Jan 2014 11:04:58 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com>
In-Reply-To: <52CE7E67.5080708@oracle.com>
X-Originating-IP: [193.63.64.25]
X-Forefront-PRVS: 008663486A
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(51704005)(479174003)(377454003)(164054003)(189002)(199002)(24454002)(69226001)(85306002)(83506001)(23756003)(81686001)(74366001)(63696002)(80022001)(59896001)(74502001)(80976001)(76786001)(47776003)(76796001)(47446002)(64126003)(92726001)(92566001)(33656001)(81542001)(79102001)(66066001)(81342001)(77982001)(83072002)(53806001)(59766001)(80316001)(83322001)(76482001)(50466002)(56776001)(4396001)(46102001)(85852003)(47976001)(56816005)(49866001)(47736001)(54356001)(90146001)(50986001)(51856001)(81816001)(74706001)(87936001)(74662001)(31966008)(36756003)(54316002)(19580395003)(74876001)(414714003)(473944003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AMSPR03MB196;
	H:DBXPRD0310HT005.eurprd03.prod.outlook.com; CLIP:193.63.64.25;
	FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/07/2014 05:21 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> Could you confirm that this problem doesn't exist when tmem is loaded
>>> with selfshrinking=0 while compiling gcc? It seems that you are
>>> compiling different packages during your testing.
>>> This will help to figure out whether selfshrinking is the root cause.
>> Got an oom with selfshrinking=0, again during a gcc compile.
>> Unfortunately I don't have a single test case which demonstrates the
>> problem but as I mentioned before it will generally show up under
>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>
> So the root cause is not the enabled selfshrinking.
> Then what I can think of is that the xen-selfballoon driver was too
> aggressive: too many pages were ballooned out, which caused heavy memory
> pressure in the guest OS.
> kswapd then started to reclaim pages until most pages were
> unreclaimable (all_unreclaimable=yes for all zones), at which point the
> OOM killer was triggered.
> In theory the balloon driver should give ballooned-out pages back to
> the guest OS, but I'm afraid this procedure is not fast enough.
>
> My suggestion is to reserve a minimum amount of memory for your guest OS
> so that xen-selfballoon won't be so aggressive.
> You can do this through the parameters selfballoon_reserved_mb or
> selfballoon_min_usable_mb.
>
>> I don't know if this is a separate or related issue but over the
>> holidays I also had a problem with six of the guests on my system where
>> kswapd was running at 100% and had clocked up >9000 minutes of cpu time
>> even though there was otherwise no load on them.  Of the guests I
>> restarted yesterday in this state, two have already got into the same
>> state again; they are running a kernel with the first patch that you sent.
As soon as I echoed 32 into both of (originally 0):
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_reserved_mb
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_min_usable_mb
the kswapd process stopped running at 100%.  Unfortunately I didn't
check between the two commands to see whether one by itself made the
difference, but I'll look for that next time.
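For reference, the tuning described above amounts to the following commands (a sketch: the sysfs paths are taken from this report, they must be run as root inside the guest, and 32 MB is simply the value that happened to work here, not a tuned recommendation):

```shell
# Raise both selfballoon thresholds from their default of 0 to 32 MB,
# so the xen-selfballoon driver leaves the guest a minimum working set.
SB=/sys/devices/system/xen_memory/xen_memory0/selfballoon
echo 32 > "$SB/selfballoon_reserved_mb"
echo 32 > "$SB/selfballoon_min_usable_mb"
```

These files only exist when the guest kernel has the Xen selfballoon driver loaded; the values can be read back with cat to confirm they took effect.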
> Could you get the meminfo in guest OS at that time?
This is from after making the change:
> cat /proc/meminfo
MemTotal:         397028 kB
MemFree:          163756 kB
Buffers:            1260 kB
Cached:           129284 kB
SwapCached:          132 kB
Active:            22664 kB
Inactive:         159576 kB
Active(anon):       8004 kB
Inactive(anon):    44412 kB
Active(file):      14660 kB
Inactive(file):   115164 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097148 kB
SwapFree:        2096896 kB
Dirty:                20 kB
Writeback:             0 kB
AnonPages:         51640 kB
Mapped:            14136 kB
Shmem:               720 kB
Slab:              19492 kB
SReclaimable:       7692 kB
SUnreclaim:        11800 kB
KernelStack:        1816 kB
PageTables:         7928 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2295660 kB
Committed_AS:     338552 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        9020 kB
VmallocChunk:   34359716408 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
DirectMap4k:     1048576 kB
DirectMap2M:           0 kB

> cat /proc/vmstat
nr_free_pages 40916
nr_alloc_batch 0
nr_inactive_anon 11102
nr_active_anon 2009
nr_inactive_file 28791
nr_active_file 3665
nr_unevictable 0
nr_mlock 0
nr_anon_pages 12904
nr_mapped 3534
nr_file_pages 32669
nr_dirty 5
nr_writeback 0
nr_slab_reclaimable 1923
nr_slab_unreclaimable 2945
nr_page_table_pages 1982
nr_kernel_stack 227
nr_unstable 0
nr_bounce 0
nr_vmscan_write 781891
nr_vmscan_immediate_reclaim 6245
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 180
nr_dirtied 86609
nr_written 861010
numa_hit 8353372
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 8353372
numa_other 0
nr_anon_transparent_hugepages 0
nr_free_cma 0
nr_dirty_threshold 16991
nr_dirty_background_threshold 8495
pgpgin 2044575
pgpgout 645866
pswpin 123
pswpout 153
pgalloc_dma 164944
pgalloc_dma32 7347917
pgalloc_normal 1032559
pgalloc_movable 0
pgfree 8586607
pgactivate 2012718
pgdeactivate 2276721
pgfault 7295414
pgmajfault 345301
pgrefill_dma 55271
pgrefill_dma32 2263007
pgrefill_normal 1771
pgrefill_movable 0
pgsteal_kswapd_dma 44880
pgsteal_kswapd_dma32 2587500
pgsteal_kswapd_normal 0
pgsteal_kswapd_movable 0
pgsteal_direct_dma 0
pgsteal_direct_dma32 37
pgsteal_direct_normal 0
pgsteal_direct_movable 0
pgscan_kswapd_dma 204749
pgscan_kswapd_dma32 4477230
pgscan_kswapd_normal 0
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 39
pgscan_direct_normal 0
pgscan_direct_movable 0
pgscan_direct_throttle 0
zone_reclaim_failed 0
pginodesteal 0
slabs_scanned 2720128
kswapd_inodesteal 41065
kswapd_low_wmark_hit_quickly 14897
kswapd_high_wmark_hit_quickly 116697740
pageoutrun 116717997
allocstall 1
pgrotated 8497
numa_pte_updates 0
numa_huge_pte_updates 0
numa_hint_faults 0
numa_hint_faults_local 0
numa_pages_migrated 0
pgmigrate_success 0
pgmigrate_fail 0
compact_migrate_scanned 0
compact_free_scanned 0
compact_isolated 0
compact_stall 0
compact_fail 0
compact_success 0
unevictable_pgs_culled 29365
unevictable_pgs_scanned 0
unevictable_pgs_rescued 29145
unevictable_pgs_mlocked 29550
unevictable_pgs_munlocked 29550
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
nr_tlb_remote_flush 10780
nr_tlb_remote_flush_received 21564
nr_tlb_local_flush_all 66247
nr_tlb_local_flush_one 1446496

>
> Thanks,
> -Bob
>
>> /sys/module/tmem/parameters/cleancache Y
>> /sys/module/tmem/parameters/frontswap Y
>> /sys/module/tmem/parameters/selfballooning Y
>> /sys/module/tmem/parameters/selfshrinking N
>>
>> James
>>
>> [ 8212.940520] cc1plus invoked oom-killer: gfp_mask=0x200da, order=0,
>> oom_score_adj=0
>> [ 8212.940529] CPU: 1 PID: 23678 Comm: cc1plus Tainted: G W    3.12.5 #88
>> [ 8212.940532]  ffff88001e38cdf8 ffff88000094f968 ffffffff8148f200
>> ffff88001f90e8e8
>> [ 8212.940536]  ffff88001e38c8c0 ffff88000094fa08 ffffffff8148ccf7
>> ffff88000094f9b8
>> [ 8212.940538]  ffffffff810f8d97 ffff88000094f998 ffffffff81006dc8
>> ffff88000094f9a8
>> [ 8212.940542] Call Trace:
>> [ 8212.940554]  [<ffffffff8148f200>] dump_stack+0x46/0x58
>> [ 8212.940558]  [<ffffffff8148ccf7>] dump_header.isra.9+0x6d/0x1cc
>> [ 8212.940564]  [<ffffffff810f8d97>] ? super_cache_count+0xa8/0xb8
>> [ 8212.940569]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940573]  [<ffffffff81006ea9>] ? xen_clocksource_get_cycles+0x9/0xb
>> [ 8212.940578]  [<ffffffff81494abe>] ?
>> _raw_spin_unlock_irqrestore+0x47/0x62
>> [ 8212.940583]  [<ffffffff81296b27>] ? ___ratelimit+0xcb/0xe8
>> [ 8212.940588]  [<ffffffff810b2bbf>] oom_kill_process+0x70/0x2fd
>> [ 8212.940592]  [<ffffffff810bca0e>] ? zone_reclaimable+0x11/0x1e
>> [ 8212.940597]  [<ffffffff81048779>] ? has_ns_capability_noaudit+0x12/0x19
>> [ 8212.940600]  [<ffffffff81048792>] ? has_capability_noaudit+0x12/0x14
>> [ 8212.940603]  [<ffffffff810b32de>] out_of_memory+0x31b/0x34e
>> [ 8212.940608]  [<ffffffff810b7438>] __alloc_pages_nodemask+0x65b/0x792
>> [ 8212.940612]  [<ffffffff810e3da3>] alloc_pages_vma+0xd0/0x10c
>> [ 8212.940617]  [<ffffffff810dd5a4>] read_swap_cache_async+0x70/0x120
>> [ 8212.940620]  [<ffffffff810dd6e4>] swapin_readahead+0x90/0xd4
>> [ 8212.940623]  [<ffffffff81005b35>] ? pte_mfn_to_pfn+0x59/0xcb
>> [ 8212.940627]  [<ffffffff810cf99d>] handle_mm_fault+0x8a4/0xd54
>> [ 8212.940630]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940634]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940638]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940641]  [<ffffffff8106823b>] ? arch_vtime_task_switch+0x81/0x86
>> [ 8212.940646]  [<ffffffff81037f40>] __do_page_fault+0x3d8/0x437
>> [ 8212.940649]  [<ffffffff81006dc8>] ? xen_clocksource_read+0x20/0x22
>> [ 8212.940652]  [<ffffffff810115d2>] ? sched_clock+0x9/0xd
>> [ 8212.940654]  [<ffffffff8106772f>] ? sched_clock_local+0x12/0x75
>> [ 8212.940658]  [<ffffffff810a45cc>] ? __acct_update_integrals+0xb4/0xbf
>> [ 8212.940661]  [<ffffffff810a493f>] ? acct_account_cputime+0x17/0x19
>> [ 8212.940663]  [<ffffffff81067c28>] ? account_user_time+0x67/0x92
>> [ 8212.940666]  [<ffffffff8106811b>] ? vtime_account_user+0x4d/0x52
>> [ 8212.940669]  [<ffffffff81037fd8>] do_page_fault+0x1a/0x5a
>> [ 8212.940674]  [<ffffffff810a065f>] ? rcu_user_enter+0xe/0x10
>> [ 8212.940677]  [<ffffffff81495158>] page_fault+0x28/0x30
>> [ 8212.940679] Mem-Info:
>> [ 8212.940681] Node 0 DMA per-cpu:
>> [ 8212.940684] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940685] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940686] Node 0 DMA32 per-cpu:
>> [ 8212.940688] CPU    0: hi:  186, btch:  31 usd: 116
>> [ 8212.940690] CPU    1: hi:  186, btch:  31 usd: 124
>> [ 8212.940691] Node 0 Normal per-cpu:
>> [ 8212.940693] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 8212.940694] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 8212.940700] active_anon:105765 inactive_anon:105882 isolated_anon:0
>>   active_file:8412 inactive_file:8612 isolated_file:0
>>   unevictable:0 dirty:0 writeback:0 unstable:0
>>   free:1143 slab_reclaimable:3575 slab_unreclaimable:3464
>>   mapped:3792 shmem:6 pagetables:2534 bounce:0
>>   free_cma:0 totalram:246132 balloontarget:306242
>> [ 8212.940702] Node 0 DMA free:1964kB min:88kB low:108kB high:132kB
>> active_anon:5092kB inactive_anon:5328kB active_file:416kB
>> inactive_file:608kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:15996kB managed:15392kB mlocked:0kB dirty:0kB
>> writeback:0kB mapped:320kB shmem:0kB slab_reclaimable:252kB
>> slab_unreclaimable:492kB kernel_stack:120kB pagetables:252kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:26951 all_unreclaimable? yes
>> [ 8212.940711] lowmem_reserve[]: 0 469 469 469
>> [ 8212.940715] Node 0 DMA32 free:2608kB min:2728kB low:3408kB
>> high:4092kB active_anon:181456kB inactive_anon:181528kB
>> active_file:22296kB inactive_file:22644kB unevictable:0kB
>> isolated(anon):0kB isolated(file):0kB present:507904kB managed:466364kB
>> mlocked:0kB dirty:0kB writeback:0kB mapped:8628kB shmem:20kB
>> slab_reclaimable:10756kB slab_unreclaimable:12548kB kernel_stack:1688kB
>> pagetables:8876kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:612393 all_unreclaimable? yes
>> [ 8212.940722] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940725] Node 0 Normal free:0kB min:0kB low:0kB high:0kB
>> active_anon:236512kB inactive_anon:236672kB active_file:10936kB
>> inactive_file:11196kB unevictable:0kB isolated(anon):0kB
>> isolated(file):0kB present:524288kB managed:502772kB mlocked:0kB
>> dirty:0kB writeback:0kB mapped:6220kB shmem:4kB slab_reclaimable:3292kB
>> slab_unreclaimable:816kB kernel_stack:64kB pagetables:1008kB
>> unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB
>> pages_scanned:745963 all_unreclaimable? yes
>> [ 8212.940732] lowmem_reserve[]: 0 0 0 0
>> [ 8212.940735] Node 0 DMA: 1*4kB (R) 0*8kB 4*16kB (R) 1*32kB (R) 1*64kB
>> (R) 2*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
>> [ 8212.940747] Node 0 DMA32: 652*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
>> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2608kB
>> [ 8212.940756] Node 0 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB
>> 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
>> [ 8212.940765] 16847 total pagecache pages
>> [ 8212.940766] 8381 pages in swap cache
>> [ 8212.940768] Swap cache stats: add 741397, delete 733016, find
>> 250268/342284
>> [ 8212.940769] Free swap  = 1925576kB
>> [ 8212.940770] Total swap = 2097148kB
>> [ 8212.951044] 262143 pages RAM
>> [ 8212.951046] 11939 pages reserved
>> [ 8212.951047] 540820 pages shared
>> [ 8212.951048] 240248 pages non-shared
>> [ 8212.951050] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents
>> oom_score_adj name
>> <snip process list>
>> [ 8212.951310] Out of memory: Kill process 23721 (cc1plus) score 119 or
>> sacrifice child
>> [ 8212.951313] Killed process 23721 (cc1plus) total-vm:530268kB,
>> anon-rss:350980kB, file-rss:9408kB
>> [54810.683658] kjournald starting.  Commit interval 5 seconds
>> [54810.684381] EXT3-fs (xvda1): using internal journal
>> [54810.684402] EXT3-fs (xvda1): mounted filesystem with writeback data mode
>>


-- 

*James Dingwall*

Script Monkey




From xen-devel-bounces@lists.xen.org Thu Jan 09 11:52:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1E9y-0004Ut-J9; Thu, 09 Jan 2014 11:52:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1E9w-0004Uo-WD
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 11:52:25 +0000
Received: from [85.158.143.35:57901] by server-1.bemta-4.messagelabs.com id
	1B/5B-02132-87D8EC25; Thu, 09 Jan 2014 11:52:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389268341!9372305!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3971 invoked from network); 9 Jan 2014 11:52:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 11:52:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89094750"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 11:52:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 06:52:20 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1E9s-0003VU-O8;
	Thu, 09 Jan 2014 11:52:20 +0000
Date: Thu, 9 Jan 2014 11:52:20 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140109115220.GA32437@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
	<20140108183411.GA13867@zion.uk.xensource.com>
	<1389261891.27473.45.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389261891.27473.45.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 10:04:51AM +0000, Ian Campbell wrote:
> On Wed, 2014-01-08 at 18:34 +0000, Wei Liu wrote:
> > On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > > > With xm it was possible to have a global keymap="de" to map the physical
> > > > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > > > xl create -d shows keymap:NULL in the vfb part.
> > > > Only moving keymap= into vfb=[] fixes it for me.
> > > > 
> > > > xl.cfg(5) indicates that keymap= can be specified as a global option
> > > > (just like vnc=) as well as a suboption of vfb=[].
> > > > 
> > > > Was this already fixed in xen-unstable? git log shows no keymap-related
> > > > changes.
> > > 
> > > I don't think Wei covered this one with his VNC patches. It does sound
> > > like it should be moved though, yes. I think this is 4.5 material at this
> > > point.
> > > 
> > 
> > You're right, my patch didn't cover that aspect because I tried hard to
> > make it minimal.
> 
> Do you think you could revisit this bit for 4.5?
> 

Yes, I think so. That should be relatively straightforward, I hope. :-)
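In the meantime, the workaround Olaf found (moving keymap into the vfb spec) looks like this in xl.cfg — a sketch, with vnc=1 added purely for illustration:

```
# PV guest config fragment: keymap as a vfb suboption works today;
# the global keymap= form is what currently gets dropped by xl.
vfb = [ 'vnc=1,keymap=de' ]
```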

> > > Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
> > > Sorry for not thinking of that during review.
> > > 
> > 
> > I don't think so. All VNC / VFB options are already documented.
> 
> The top level vnc*= options seem to be under "Emulated VGA Graphics
> Device" though.
> 

How about this

>From eba072e1362de55f8c6b7fc1101543852c7ee683 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 11:48:13 +0000
Subject: [PATCH] docs/man/xl.cfg.pod.5: document global VNC options for VFB
 device

Update xl.cfg to reflect change in 706d4ab74 "xl: create VFB for PV
guest when VNC is specified".

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 docs/man/xl.cfg.pod.5 |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 72efd88..00a89b2 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -387,7 +387,9 @@ to the domain.
 
 This options does not control the emulated graphics card presented to
 an HVM guest. See L<Emulated VGA Graphics Device> below for how to
-configure the emulated device.
+configure the emulated device. If L<Emulated VGA Graphics Device> options
+are used in a PV guest configuration, xl will extract relevant bits to
+create a paravirtual framebuffer device for the guest.
 
 Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
 settings, from the following list:
-- 
1.7.10.4


> Ian.


From xen-devel-bounces@lists.xen.org Thu Jan 09 11:59:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EH8-0005Af-LF; Thu, 09 Jan 2014 11:59:50 +0000
From xen-devel-bounces@lists.xen.org Thu Jan 09 11:59:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 11:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EH8-0005Af-LF; Thu, 09 Jan 2014 11:59:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1EH6-0005Aa-Vn
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 11:59:49 +0000
Received: from [193.109.254.147:14768] by server-2.bemta-14.messagelabs.com id
	B9/32-00361-43F8EC25; Thu, 09 Jan 2014 11:59:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389268786!9804572!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22551 invoked from network); 9 Jan 2014 11:59:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 11:59:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91240715"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 11:59:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	06:59:45 -0500
Message-ID: <1389268784.27473.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 11:59:44 +0000
In-Reply-To: <20140109115220.GA32437@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
	<20140108183411.GA13867@zion.uk.xensource.com>
	<1389261891.27473.45.camel@kazak.uk.xensource.com>
	<20140109115220.GA32437@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 11:52 +0000, Wei Liu wrote:
> On Thu, Jan 09, 2014 at 10:04:51AM +0000, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 18:34 +0000, Wei Liu wrote:
> > > On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> > > > On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > > > > With xm it was possible to have a global keymap="de" to map the physical
> > > > > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > > > > xl create -d shows keymap:NULL in the vfb part.
> > > > > Only moving keymap= into vfb=[] fixes it for me.
> > > > > 
> > > > > xl.cfg(5) indicates that keymap= can be specified as a global option (just
> > > > > like vnc=) as well as a suboption for vfb=[]. 
> > > > > 
> > > > > Was this already fixed in xen-unstable? git log shows no keymap-related
> > > > > changes.
> > > > 
> > > > I don't think Wei covered this one with his VNC patches. It does sound
> > > > like it should be moved though, yes. I think this is 4.5 material at this
> > > > point.
> > > > 
> > > 
> > > You're right, my patch didn't cover that aspect because I tried hard to
> > > make it minimal.
> > 
> > Do you think you could revisit this bit for 4.5?
> > 
> 
> Yes, I think so. That should be relatively straightforward, I hope. :-)
> 
> > > > Wei, BTW, did you VNC change not require any doc (e.g. manpage) updates?
> > > > Sorry for not thinking of that during review.
> > > > 
> > > 
> > > I don't think so. All VNC / VFB options are already documented.
> > 
> > The top level vnc*= options seem to be under "Emulated VGA Graphics
> > Device" though.
> > 
> 
> How about this

HRM, I was more thinking about pulling those options out into a new
"Primary Graphics Device" or something section with a little intro blurb
about how for PV this is a PVFB and for HVM this is an emulated VNC.

With your approach I think at a minimum it would need to enumerate which
specific options work for both.
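[Editorial note, not part of the thread: a minimal xl.cfg sketch of the workaround Olaf describes, assuming a German keymap and VNC enabled on the VFB.]

```
# Global option; per this thread, ignored by xl for PV guests in 4.3:
keymap = "de"

# Workaround: pass keymap as a vfb=[] suboption instead:
vfb = [ "vnc=1,keymap=de" ]
```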

> From eba072e1362de55f8c6b7fc1101543852c7ee683 Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Thu, 9 Jan 2014 11:48:13 +0000
> Subject: [PATCH] docs/man/xl.cfg.pod.5: document global VNC options for VFB
>  device
> 
> Update xl.cfg to reflect change in 706d4ab74 "xl: create VFB for PV
> guest when VNC is specified".
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  docs/man/xl.cfg.pod.5 |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 72efd88..00a89b2 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -387,7 +387,9 @@ to the domain.
>  
>  This options does not control the emulated graphics card presented to
>  an HVM guest. See L<Emulated VGA Graphics Device> below for how to
> -configure the emulated device.
> +configure the emulated device. If L<Emulated VGA Graphics Device> options
> +are used in a PV guest configuration, xl will extract the relevant bits to
> +create a paravirtual framebuffer device for the guest.
>  
>  Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
>  settings, from the following list:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:13:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EUX-0006MG-V2; Thu, 09 Jan 2014 12:13:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <catalin.marinas@arm.com>) id 1W1EUW-0006M0-16
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 12:13:40 +0000
Received: from [85.158.137.68:49981] by server-1.bemta-3.messagelabs.com id
	C1/5E-29598-3729EC25; Thu, 09 Jan 2014 12:13:39 +0000
X-Env-Sender: catalin.marinas@arm.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389269618!8099719!1
X-Originating-IP: [217.140.110.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19069 invoked from network); 9 Jan 2014 12:13:38 -0000
Received: from fw-tnat.austin.arm.com (HELO collaborate-mta1.arm.com)
	(217.140.110.23) by server-14.tower-31.messagelabs.com with SMTP;
	9 Jan 2014 12:13:38 -0000
Received: from arm.com (e102109-lin.cambridge.arm.com [10.1.203.24])
	by collaborate-mta1.arm.com (Postfix) with ESMTPS id 7AF7013F626;
	Thu,  9 Jan 2014 06:13:35 -0600 (CST)
Date: Thu, 9 Jan 2014 12:13:08 +0000
From: Catalin Marinas <catalin.marinas@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140109121308.GH3081@arm.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"nico@linaro.org" <nico@linaro.org>, Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>,
	"olof@lixom.net" <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:49:56PM +0000, Stefano Stabellini wrote:
>  arch/arm64/Kconfig         |   20 ++++++++++++++++++++
>  arch/arm64/kernel/Makefile |    1 +
>  2 files changed, 21 insertions(+)
[...]
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index 5ba2fd4..1dee735 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
>  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
>  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
>  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o

Did you forget a git add?

-- 
Catalin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:13:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EUg-0006N6-BT; Thu, 09 Jan 2014 12:13:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1EUe-0006Ms-Lv
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 12:13:48 +0000
Received: from [85.158.137.68:52092] by server-16.bemta-3.messagelabs.com id
	54/C1-26128-B729EC25; Thu, 09 Jan 2014 12:13:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389269625!8112548!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6309 invoked from network); 9 Jan 2014 12:13:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:13:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91245435"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 12:13:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	07:13:44 -0500
Message-ID: <1389269622.27473.72.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 9 Jan 2014 12:13:42 +0000
In-Reply-To: <1389206998-27875-3-git-send-email-stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-3-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, linux@arm.linux.org.uk, arnd@arndb.de,
	marc.zyngier@arm.com, catalin.marinas@arm.com, nico@linaro.org,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	cov@codeaurora.org, olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v8 3/6] arm: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 18:49 +0000, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Christopher Covington <cov@codeaurora.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> CC: linux@arm.linux.org.uk
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net
> 
> ---
> 
> Changes in v7:
> - ifdef CONFIG_PARAVIRT the content of paravirt.h.
> 
> Changes in v3:
> - improve commit description and Kconfig help text;
> - no need to initialize pv_time_ops;
> - add PARAVIRT_TIME_ACCOUNTING.
> ---
>  arch/arm/Kconfig         |   20 ++++++++++++++++++++
>  arch/arm/kernel/Makefile |    1 +
>  2 files changed, 21 insertions(+)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..d6c3ba1 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1874,6 +1874,25 @@ config SWIOTLB
>  config IOMMU_HELPER
>  	def_bool SWIOTLB
>  
> +config PARAVIRT
> +	bool "Enable paravirtualization code"
> +	---help---
> +	  This changes the kernel so it can modify itself when it is run
> +	  under a hypervisor, potentially improving performance significantly
> +	  over full virtualization.
> +
> +config PARAVIRT_TIME_ACCOUNTING
> +	bool "Paravirtual steal time accounting"
> +	select PARAVIRT
> +	default n
> +	---help---
> +	  Select this option to enable fine granularity task steal time
> +	  accounting. Time spent executing other tasks in parallel with
> +	  the current vCPU is discounted from the vCPU power. To account for
> +	  that, there can be a small performance impact.
> +
> +	  If in doubt, say N here.
> +
>  config XEN_DOM0
>  	def_bool y
>  	depends on XEN
> @@ -1885,6 +1904,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select PARAVIRT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
> index a30fc9b..bcd2b38 100644
> --- a/arch/arm/kernel/Makefile
> +++ b/arch/arm/kernel/Makefile
> @@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
>  ifneq ($(CONFIG_ARCH_EBSA110),y)
>    obj-y		+= io.o
>  endif
> +obj-$(CONFIG_PARAVIRT)	+= paravirt.o
>  
>  head-y			:= head$(MMUEXT).o
>  obj-$(CONFIG_DEBUG_LL)	+= debug.o
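[Editorial note, not part of the thread: a hedged C sketch of the single paravirt hook this series wires up, pv_time_ops.steal_clock, which steal_account_process_tick() consults for time the hypervisor spent running other vCPUs. The names follow the x86 pvops convention; the struct layout and default implementation here are illustrative assumptions, not code from the patch.]

```c
#include <stdint.h>

struct pv_time_ops {
	/* nanoseconds of CPU time stolen from this vCPU so far */
	uint64_t (*steal_clock)(int cpu);
};

/* Bare-metal default: nothing is ever stolen. */
static uint64_t native_steal_clock(int cpu)
{
	(void)cpu;
	return 0;
}

struct pv_time_ops pv_time_ops = {
	.steal_clock = native_steal_clock,
};
```

A hypervisor backend (Xen, in this series) would install its own .steal_clock at boot; because this is the only hook, no runtime pvops patching is needed, which matches the commit message.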



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:14:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EV4-0006SN-Ov; Thu, 09 Jan 2014 12:14:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1W1EV3-0006Ry-6s; Thu, 09 Jan 2014 12:14:13 +0000
Received: from [85.158.137.68:59001] by server-9.bemta-3.messagelabs.com id
	6E/41-13104-4929EC25; Thu, 09 Jan 2014 12:14:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389269649!7362681!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5631 invoked from network); 9 Jan 2014 12:14:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:14:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89101908"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 12:14:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 07:14:09 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1EUy-0003qW-Vv;
	Thu, 09 Jan 2014 12:14:08 +0000
Date: Thu, 9 Jan 2014 12:14:08 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: topperxin <topperxin@126.com>
Message-ID: <20140109121408.GA12164@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<4088fa33.894a.14375212530.Coremail.topperxin@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <4088fa33.894a.14375212530.Coremail.topperxin@126.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xensource.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 11:52:23AM +0800, topperxin wrote:
> 
> 
> 
> Hi Wei
> 
>      Thanks for your reply, I know you are in charge of porting Virtio to xen, how about the process? May we configure Virtio on xen?

I wouldn't say "I'm in charge". That was sort of experimental project
two years ago. And I stopped after that. So my knowledge is in fact
quite out dated.

I tried VirtIO on HVM guest about two months ago. It worked for me.
That's the only thing I'm qualified to say. However Fabio reported it
didn't work for him. So in short your mileage may vary.

>      So far as you said, the MacVtap was written specially for Virtio, other virtual NIC driver model can not use it, right?

No, I didn't say that.

>       I get the information from http://virt.kernelnewbies.org/MacVTap
>       "Macvtap is a new device driver meant to simplify virtualized bridged networking. It replaces the combination of the tun/tap and bridge drivers with a single module based on the macvlan device driver. A macvtap endpoint is a character device that largely follows the tun/tap ioctl interface and can be used directly by kvm/qemu and other hypervisors that support the tun/tap interface."
>        So far as I comprehend, any hypervisors can configure MacVtap so long as it can support tun/tap interface, right?  So May I say there is no so closely relationship between MacVtap and Virtio , right?

I can't speak for the thing I'm not familiar with. It would make sense
to just try that configuration and see if it works or not.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:15:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EWB-0006jc-Sg; Thu, 09 Jan 2014 12:15:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1EWA-0006j5-AW
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 12:15:22 +0000
Received: from [85.158.143.35:42400] by server-2.bemta-4.messagelabs.com id
	27/3F-11386-9D29EC25; Thu, 09 Jan 2014 12:15:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389269720!10681972!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20070 invoked from network); 9 Jan 2014 12:15:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:15:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91246180"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 12:15:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	07:15:18 -0500
Message-ID: <1389269716.27473.73.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Catalin Marinas <catalin.marinas@arm.com>
Date: Thu, 9 Jan 2014 12:15:16 +0000
In-Reply-To: <20140109121308.GH3081@arm.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140109121308.GH3081@arm.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"nico@linaro.org" <nico@linaro.org>, "olof@lixom.net" <olof@lixom.net>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>
Subject: Re: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 12:13 +0000, Catalin Marinas wrote:
> On Wed, Jan 08, 2014 at 06:49:56PM +0000, Stefano Stabellini wrote:
> >  arch/arm64/Kconfig         |   20 ++++++++++++++++++++
> >  arch/arm64/kernel/Makefile |    1 +
> >  2 files changed, 21 insertions(+)
> [...]
> > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > index 5ba2fd4..1dee735 100644
> > --- a/arch/arm64/kernel/Makefile
> > +++ b/arch/arm64/kernel/Makefile
> > @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
> >  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
> >  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
> >  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> > +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
> 
> Did you forget a git add?

I was just about to say the same thing for the previous arm patch too.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:20:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Eax-0007om-Qd; Thu, 09 Jan 2014 12:20:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=079244837=chegger@amazon.de>)
	id 1W1Eaw-0007of-Oi
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 12:20:19 +0000
Received: from [85.158.143.35:29060] by server-3.bemta-4.messagelabs.com id
	EE/14-32360-2049EC25; Thu, 09 Jan 2014 12:20:18 +0000
X-Env-Sender: prvs=079244837=chegger@amazon.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389270014!10686214!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8023 invoked from network); 9 Jan 2014 12:20:16 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:20:16 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1389270016; x=1420806016;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=6xN0ygR005hl/wxcfr5knPqoOQZP35V+cr73CvtD0Gc=;
	b=Do7XQoCqDZpkfZQ3KMu66pvgr601S2FHlghGmeZaSiQeJOrKYLBoOl52
	Uw9QW8xRuU8Xf1ZX2aTvXOkxfAcf7ptku9+AaSiD9o3R5BhPxPXK0euvy
	ZvDp5y/rEy+DFLHNw8reAmNESQGEyPSd2WKvmdZSZV1yB9snaEcvDeO8T I=;
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="57936655"
Received: from email-inbound-relay-62040.pdx2.amazon.com ([10.241.21.71])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 09 Jan 2014 12:20:13 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by email-inbound-relay-62040.pdx2.amazon.com (8.14.7/8.14.7) with ESMTP
	id s09CK9aT002558
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Thu, 9 Jan 2014 12:20:12 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.342.3; Thu, 9 Jan 2014 04:19:39 -0800
Message-ID: <52CE93D8.4080201@amazon.de>
Date: Thu, 9 Jan 2014 13:19:36 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08.01.14 06:50, Zhang, Yang Z wrote:
> Egger, Christoph wrote on 2014-01-07:
>> On 07.01.14 09:54, Zhang, Yang Z wrote:
>>> Jan Beulich wrote on 2014-01-07:
>>>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>>> wrote:
>>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z"
>>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>>> wrote:
>>>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>>>> processing, stash away and re-use what we already read. That
>>>>>>>>>> way we can be certain that the retry won't do something
>>>>>>>>>> different from what requested the retry, getting once again
>>>>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>>>>> is simply a bus operation, not involving redundant decoding of
>>>>>>>>>> instructions).
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This patch doesn't consider the nested case. For example, if
>>>>>>>>> the buffer saved an L2 instruction and we then vmexit to L1,
>>>>>>>>> L1 may use the wrong instruction.
>>>>>>>>
>>>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>>>> There should be, at any given point in time, at most one
>>>>>>>> instruction being emulated. Can you please give a more
>>>>>>>> elaborate explanation of the situation where you see a
>>>>>>>> (theoretical? practical?) problem?
>>>>>>>
>>>>>>> I saw this issue when booting L1 hyper-v. I added some debug
>>>>>>> info and saw the strange phenomenon:
>>>>>>>
>>>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>
>>>>>>> From the log, we can see different eip but using the same buffer.
>>>>>>> Since I don't know how hyper-v works, I cannot give more
>>>>>>> information on why this happens. I only saw it with L1 hyper-v
>>>>>>> (Xen on Xen and KVM on Xen don't have this issue).
>>>>>>
>>>>>> But in order to validate the fix is (a) needed and (b) correct, a
>>>>>> proper understanding of the issue is necessary as the very first step.
>>>>>> This doesn't require knowing the internals of Hyper-V; all you
>>>>>> need is tracking of emulator state changes in Xen (which I realize
>>>>>> is easier said than done, since you want to avoid generating huge
>>>>>> amounts of output before actually hitting the issue, which makes
>>>>>> it close to impossible to analyze).
>>>>>
>>>>> Ok. It is an old issue that your patch merely exposes:
>>>>> Sometimes L0 needs to decode an L2 instruction to handle an IO
>>>>> access directly, for example if L1 passes a device through (without
>>>>> VT-d) to L2. L0 may get X86EMUL_RETRY when handling this IO request.
>>>>> If at the same time a virtual vmexit is pending (for example, an
>>>>> interrupt to inject into L1), the hypervisor will switch the VCPU
>>>>> context from L2 to L1. We are now in L1's context, but because we
>>>>> just got an X86EMUL_RETRY, the hypervisor will retry the IO request
>>>>> later, and unfortunately the retry happens in L1's context. Without
>>>>> your patch, L0 simply re-emulates an L1 instruction and everything
>>>>> goes on. With your patch, L0 fetches the instruction from the buffer
>>>>> belonging to L2, and the problem arises.
>>>>>
>>>>> So the fix is that if there is a pending IO request, no virtual
>>>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>>>
>>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>>>      struct vcpu *v = current;
>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>
>>>>> +    if ( p->state != STATE_IOREQ_NONE )
>>>>> +        return;
>>>>>      /*
>>>>>       * a softirq may interrupt us between a virtual vmentry is
>>>>>       * just handled and the true vmentry. If during this window,
>>>>
>>>> This change looks much more sensible; question is whether simply
>>>> ignoring the switch request is acceptable (and I don't know the
>>>> nested HVM code well enough to tell). Furthermore I wonder whether
>>>> that's really a VMX-only issue.
>>>
>>> From hardware's point of view, an IO operation is handled atomically,
>>> so the problem can never happen. But in Xen, an IO operation is
>>> divided into several steps. Without nested virtualization, the VCPU
>>> is paused or looped until the IO emulation is finished. But in a
>>> nested environment the VCPU continues running even if the IO emulation
>>> is not finished. So my patch checks this and allows the VCPU to
>>> continue running only if there is no pending IO request, matching
>>> hardware's behavior.
>>>
>>> I guess SVM also has this problem, but since I don't know nested SVM
>>> well, perhaps Christoph can help double-check.
>>
>> For SVM this issue was fixed with commit
>> d740d811925385c09553cbe6dee8e77c1d43b198
>>
>> And there is a followup cleanup commit
>> ac97fa6a21ccd395cca43890bbd0bf32e3255ebb
>>
>> The change in nestedsvm.c in commit
>> d740d811925385c09553cbe6dee8e77c1d43b198 is actually not SVM specific.
>>
>> Move that into nhvm_interrupt_blocked() in hvm.c right before
>>
>>     return hvm_funcs.nhvm_intr_blocked(v);
>> and the fix applies to both SVM and VMX.
>>
> 
> I guess this is not enough. L2 may vmexit to L1 during IO emulation
> not only due to interrupts. I cannot give an example right now, but
> hyper-v cannot boot up with your suggestion, so I guess considering
> only external interrupts is not enough. We should prohibit vmswitch if
> there is a pending IO emulation from either L1 or L2 (this may never
> happen for L1, but to match hardware's behavior we should add the
> check).

I compared nsvm_vcpu_switch() with nvmx_switch_guest() (both are
called from entry.S) and came up with one question:

How do you handle the case of a virtual TLB flush (which shoots down
EPT tables) from another virtual CPU while you are in the middle of a
vmentry/vmexit emulation? This happens when one vCPU sends an IPI to
another vCPU.

If you do not handle this case, you launch an L2 guest with an empty
EPT table (you set it up correctly, but another CPU shot it down right
after you set it up), and you end up in an endless loop of EPT faults.

Christoph
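The pending-ioreq guard discussed in this thread can be sketched in isolation like this. The states and names below are simplified stand-ins for Xen's ioreq machinery, not the actual nvmx_switch_guest() code: the idea is that an in-flight emulated I/O request makes the vCPU non-switchable, mirroring how real hardware completes a bus operation atomically:

```c
#include <assert.h>

/* Simplified stand-in for Xen's ioreq states. */
enum ioreq_state { STATE_IOREQ_NONE, STATE_IOREQ_READY, STATE_IOREQ_INPROCESS };

struct vcpu_sketch {
    enum ioreq_state ioreq;   /* pending emulated I/O, if any */
    int in_l2;                /* 1 while running the nested (L2) guest */
};

/* Returns 1 if the virtual vmexit/vmentry was performed, 0 if deferred.
 * The switch is simply skipped while an I/O emulation is in flight; it
 * will be retried on a later iteration, once the I/O has completed. */
static int nvmx_switch_guest_sketch(struct vcpu_sketch *v)
{
    if (v->ioreq != STATE_IOREQ_NONE)
        return 0;             /* defer: finish the emulated I/O first */
    v->in_l2 = !v->in_l2;     /* perform the virtual world switch */
    return 1;
}
```

With this ordering, the stashed instruction buffer can only ever belong to the context that issued the I/O, because no L1/L2 switch can slide in between the X86EMUL_RETRY and the retry itself.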


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08.01.14 06:50, Zhang, Yang Z wrote:
> Egger, Christoph wrote on 2014-01-07:
>> On 07.01.14 09:54, Zhang, Yang Z wrote:
>>> Jan Beulich wrote on 2014-01-07:
>>>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>>> wrote:
>>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z"
>>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>>> wrote:
>>>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>>>> processing, stash away and re-use what we already read. That
>>>>>>>>>> way we can be certain that the retry won't do something
>>>>>>>>>> different from what requested the retry, getting once again
>>>>>>>>>> closer to real hardware behavior (where what we use retries for
>>>>>>>>>> is simply a bus operation, not involving redundant decoding of
>>>>>>>>>> instructions).
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> This patch doesn't consider the nested case.
>>>>>>>>> For example, if the buffer saved the L2's instruction, then
>>>>>>>>> vmexit to
>>>>>>>>> L1 and
>>>>>>>>> L1 may use the wrong instruction.
>>>>>>>>
>>>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>>>> There should be, at any given point in time, at most one
>>>>>>>> instruction being emulated. Can you please give a more
>>>>>>>> elaborate explanation of the situation where you see a (theoretical?
>>>>>>>> practical?)
>>>>> problem?
>>>>>>>
>>>>>>> I saw this issue when booting L1 hyper-v. I added some debug
>>>>>>> info and saw the strange phenomenon:
>>>>>>>
>>>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>
>>>>>>> From the log, we can see different eip values but the same buffer.
>>>>>>> Since I don't know how hyper-v works, I cannot give more
>>>>>>> information on why this happens. And I only saw it with L1 hyper-v
>>>>>>> (Xen on Xen and KVM on Xen don't have this issue).
>>>>>>
>>>>>> But in order to validate the fix is (a) needed and (b) correct, a
>>>>>> proper understanding of the issue is necessary as the very first step.
>>>>>> That doesn't require knowing the internals of Hyper-V; all you need
>>>>>> is tracking of emulator state changes in Xen (which I realize is
>>>>>> easier said than done, since you want to make sure you don't
>>>>>> generate huge amounts of output before actually hitting the
>>>>>> issue, making it close to
>>>>> impossible to analyze).
>>>>>
>>>>> Ok. It should be an old issue that is merely exposed by your patch:
>>>>> Sometimes L0 needs to decode an L2 instruction to handle an IO access
>>>>> directly, for example if L1 passes a device through (without VT-d) to
>>>>> L2. L0 may get X86EMUL_RETRY while handling this IO request. If, at
>>>>> the same time, a virtual vmexit is pending (for example, an interrupt
>>>>> to inject into L1), the hypervisor will switch the vCPU context from
>>>>> L2 to L1. We are now in L1's context, but because we just got
>>>>> X86EMUL_RETRY, the hypervisor will retry the IO request later, and
>>>>> unfortunately that retry happens in L1's context. Without your patch,
>>>>> L0 simply emulates an L1 instruction and everything goes on. With
>>>>> your patch, L0 fetches the instruction from the buffer, which belongs
>>>>> to L2, and the problem arises.
>>>>>
>>>>> So the fix is that while there is a pending IO request, no virtual
>>>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>>>
>>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>>>      struct vcpu *v = current;
>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>
>>>>> +    if ( p->state != STATE_IOREQ_NONE )
>>>>> +        return;
>>>>>      /*
>>>>>       * a softirq may interrupt us between a virtual vmentry is
>>>>>       * just handled and the true vmentry. If during this window,
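For illustration, here is a self-contained sketch of what the guard in the quoted diff does. The structures below are simplified stand-ins for Xen's `ioreq_t` and `struct vcpu` (the real definitions live in the hypervisor headers), and the `in_l1_context` field is purely illustrative:

```c
/* Simplified stand-ins for Xen's types; in_l1_context is illustrative. */
enum { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };
typedef struct { int state; } ioreq_t;
struct vcpu { ioreq_t ioreq; int in_l1_context; };

static ioreq_t *get_ioreq(struct vcpu *v) { return &v->ioreq; }

/* Sketch of the guard in nvmx_switch_guest(): refuse to switch the
 * vCPU between L1 and L2 while an emulated IO request is in flight,
 * so the later retry runs in the guest context that triggered it.
 * Returns 1 if the switch was performed, 0 if it was deferred. */
static int nvmx_switch_guest_sketch(struct vcpu *v)
{
    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
        return 0;                             /* pending IO: defer */
    v->in_l1_context = !v->in_l1_context;     /* perform the switch */
    return 1;
}
```

With a pending request the switch is simply skipped and retried on a later pass through the switch path, mirroring the early `return` in the patch.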
>>>>
>>>> This change looks much more sensible; question is whether simply
>>>> ignoring the switch request is acceptable (and I don't know the
>>>> nested HVM code well enough to tell). Furthermore I wonder whether
>>>> that's
>> really a VMX-only issue.
>>>
>>> From hardware's point of view, an IO operation is handled atomically,
>>> so the problem can never happen there. But in Xen, an IO operation is
>>> divided into several steps. Without nested virtualization, the VCPU is
>>> paused or looped until the IO emulation is finished. In a nested
>>> environment, however, the VCPU continues running even though the IO
>>> emulation is not finished. So my patch checks this and allows the VCPU
>>> to continue running only if there is no pending IO request. This
>>> matches hardware's behavior.
>>>
>>> I guess SVM also has this problem, but since I don't know nested SVM
>>> well, perhaps Christoph can help to double-check.
>>
>> For SVM this issue was fixed with commit
>> d740d811925385c09553cbe6dee8e77c1d43b198
>>
>> And there is a followup cleanup commit
>> ac97fa6a21ccd395cca43890bbd0bf32e3255ebb
>>
>> The change in nestedsvm.c in commit
>> d740d811925385c09553cbe6dee8e77c1d43b198 is actually not SVM specific.
>>
>> Move that into nhvm_interrupt_blocked() in hvm.c right before
>>
>>     return hvm_funcs.nhvm_intr_blocked(v);
>> and the fix applies for both SVM and VMX.
>>
> 
> I guess this is not enough. L2 may vmexit to L1 during IO emulation
> for reasons other than an interrupt. I cannot give an example right
> now, but hyper-v cannot boot up with your suggestion, so considering
> only the external interrupt is not enough. We should prohibit a
> vmswitch if there is a pending IO emulation from either L1 or L2 (this
> may never happen for L1, but to match hardware's behavior we should
> add the check).

I compared nsvm_vcpu_switch() with nvmx_switch_guest() (both are
called from entry.S) and came up with one question:

How do you handle the case of a virtual TLB flush (which shoots down
EPT tables) from another virtual CPU while you are in the middle
of a vmentry/vmexit emulation? This happens when a vCPU sends
an IPI to another vCPU.

If you do not handle this case, you launch an L2 guest with
an empty EPT table (you set it up correctly, but another CPU shot it
down right after you set it up), and you end up
in an endless EPT-fault loop.
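One way to close that race can be sketched as follows. This is purely illustrative: the `nested_state` structure, the `np2m_flushed` flag, and `commit_nested_vmentry` are hypothetical names, not Xen's actual API — the point is only that the flush flag must be re-checked after the tables are built but before the real vmentry:

```c
#include <stdbool.h>

/* Hypothetical per-vCPU nested state; names are illustrative only. */
struct nested_state {
    bool np2m_flushed;   /* set by another vCPU's virtual TLB flush */
};

/* Sketch: after building the nested p2m/EPT tables but before the
 * real vmentry, re-check whether a remote flush invalidated them and
 * rebuild if so, instead of entering L2 with empty tables (which
 * would otherwise fault on EPT forever). */
static bool commit_nested_vmentry(struct nested_state *ns,
                                  void (*rebuild)(struct nested_state *))
{
    if ( ns->np2m_flushed )
    {
        ns->np2m_flushed = false;
        rebuild(ns);             /* repopulate the nested p2m */
    }
    return !ns->np2m_flushed;    /* safe to enter L2 */
}
```

If the flush can fire again during the rebuild, the caller would loop on this check until it observes no pending flush.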

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:27:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:27:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1EhK-00086Q-33; Thu, 09 Jan 2014 12:26:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1EhH-00086J-TW
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:26:52 +0000
Received: from [85.158.143.35:61978] by server-1.bemta-4.messagelabs.com id
	27/4B-02132-B859EC25; Thu, 09 Jan 2014 12:26:51 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389270409!10633499!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18101 invoked from network); 9 Jan 2014 12:26:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:26:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89106166"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 12:26:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 07:26:48 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1EhE-00041S-Da;
	Thu, 09 Jan 2014 12:26:48 +0000
Date: Thu, 9 Jan 2014 12:26:48 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
Message-ID: <20140109122648.GB12164@zion.uk.xensource.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-3-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389261768-30606-3-git-send-email-paul.durrant@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2 2/3] xen-netback: use new
 skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 10:02:47AM +0000, Paul Durrant wrote:
> Use skb_checksum_setup to set up partial checksum offsets rather
> than a private implementation.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>

If your change to core driver goes in then:

Acked-by: Wei Liu <wei.liu2@citrix.com>

Thanks
Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 12:58:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FB9-0001gp-9q; Thu, 09 Jan 2014 12:57:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W1F8T-0001fy-Mw
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:54:57 +0000
Received: from [85.158.139.211:39638] by server-12.bemta-5.messagelabs.com id
	4A/27-30017-02C9EC25; Thu, 09 Jan 2014 12:54:56 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389272096!8745612!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3577 invoked from network); 9 Jan 2014 12:54:56 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-14.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 12:54:56 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 467361941185
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:54 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 3C67A1941186
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:54 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024) with ESMTP id iQyX0ZWi64Kz for <xen-devel@lists.xen.org>;
	Thu,  9 Jan 2014 13:54:53 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id C167E1941185
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:53 +0100 (CET)
Date: Thu, 9 Jan 2014 13:54:55 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: xen-devel@lists.xen.org
Message-ID: <1634172506.945.1389272095263.open-xchange@panna.pingst.univention.de>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Thu, 09 Jan 2014 12:57:42 +0000
Subject: [Xen-devel] Spelling mistake to be corrected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8496717745240042785=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8496717745240042785==
Content-Type: multipart/alternative; 
	boundary="----=_Part_944_1356226781.1389272095141"

------=_Part_944_1356226781.1389272095141
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi,

One more thing, I forgot earlier:

There is a spelling mistake at:

http://wiki.xen.org/wiki/XenWindowsGplPv

The link name to our homepage for installing the signed GPLPV drivers says
"uninvention". There is one "n" too many; the correct spelling is "univention".

It would be great if you could correct that, too.

Best regards

Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel. : +49 421 22232-68
Fax : +49 421 22232-99
------=_Part_944_1356226781.1389272095141
MIME-Version: 1.0
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 
 </head><body style="">
 
  <div>
   Hi,
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   One more thing, I forgot earlier:
  </div> 
  <div>
   <br /> There is a spelling mistake at:
   <br /> 
   <br /> http://wiki.xen.org/wiki/XenWindowsGplPv
   <br /> 
   <br /> The link name to our homepage for installing the signed GPLPV drivers says &#34;uninvention&#34;. There is one &#34;n&#34; too many; the correct spelling is &#34;univention&#34;.
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   It would be great if you could correct that, too.
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   Best regards
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   Maren Abatielos
  </div> 
  <div id="ox-signature">
   <br />---
   <br />Marketing
   <br />
   <br />Univention GmbH
   <br />be open.
   <br />Mary-Somerville-Str.1
   <br />28359 Bremen
   <br />
   <br />E-Mail: abatielos@univention.de
   <br />Tel. : +49 421 22232-68
   <br />Fax : +49 421 22232-99
  </div>
 
</body></html>
------=_Part_944_1356226781.1389272095141--


--===============8496717745240042785==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8496717745240042785==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 12:58:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FB8-0001gi-Uh; Thu, 09 Jan 2014 12:57:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W1Eqg-0000WH-K6
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:36:34 +0000
Received: from [85.158.139.211:33957] by server-11.bemta-5.messagelabs.com id
	74/71-23268-1D79EC25; Thu, 09 Jan 2014 12:36:33 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389270992!7553349!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27429 invoked from network); 9 Jan 2014 12:36:32 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-16.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 12:36:32 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 898EC1939281
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:30 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 7E19A1939282
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:30 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024) with ESMTP id JXO8m2-OWLZC for <xen-devel@lists.xen.org>;
	Thu,  9 Jan 2014 13:36:29 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id D3F091939281
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:29 +0100 (CET)
Date: Thu, 9 Jan 2014 13:36:31 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: xen-devel@lists.xen.org
Message-ID: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Thu, 09 Jan 2014 12:57:42 +0000
Subject: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5264110016079902980=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5264110016079902980==
Content-Type: multipart/alternative; 
	boundary="----=_Part_921_1415385226.1389270991203"

------=_Part_921_1415385226.1389270991203
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

I would like our product UCS Virtual Machine Manager (UVMM) to be listed
in the section "Management tools and interfaces" on the following page:

http://wiki.xen.org/wiki/Xen_Management_Tools

UVMM is a web-based virtualization management tool for Xen and KVM. It is
included in our Linux server operating system Univention Corporate Server,
as is Xen.

Could you please list it using the following URL:

http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/

Thanks a lot,

Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel. : +49 421 22232-68
Fax : +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax-No.: 71-597-02876
------=_Part_921_1415385226.1389270991203
MIME-Version: 1.0
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 
 </head><body style="">
 
  <div>
   Hello,
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   I would like our product UCS Virtual Machine Manager (UVMM) to be listed in the section &#34;Management tools and interfaces&#34; on the following page:
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   http://wiki.xen.org/wiki/Xen_Management_Tools
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   UVMM is a web-based virtualization management tool for Xen and KVM. It is included in our Linux server operating system Univention Corporate Server, as is Xen.
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   Could you please list it using the following URL:
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/
  </div> 
  <div>
   &#160;
  </div> 
  <div>
   Thanks a lot,
  </div> 
  <div>
   &#160;
  </div> 
  <div id="ox-signature">
   Maren Abatielos
   <br />
   <br />---
   <br />Marketing
   <br />
   <br />Univention GmbH
   <br />be open.
   <br />Mary-Somerville-Str.1
   <br />28359 Bremen
   <br />
   <br />E-Mail: abatielos@univention.de
   <br />Tel. : +49 421 22232-68
   <br />Fax : +49 421 22232-99
   <br />
   <br />https://www.univention.de
   <br />http://gplus.to/Univention
   <br />http://www.facebook.com/univention
   <br />https://twitter.com/univention
   <br />http://www.youtube.com/univentionvideo
   <br />
   <br />Managing director: Peter H. Ganten
   <br />HRB 20755 Local Court Bremen
   <br />Tax-No.: 71-597-02876
  </div>
 
</body></html>
------=_Part_921_1415385226.1389270991203--


--===============5264110016079902980==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5264110016079902980==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 12:58:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FB9-0001gp-9q; Thu, 09 Jan 2014 12:57:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W1F8T-0001fy-Mw
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:54:57 +0000
Received: from [85.158.139.211:39638] by server-12.bemta-5.messagelabs.com id
	4A/27-30017-02C9EC25; Thu, 09 Jan 2014 12:54:56 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389272096!8745612!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3577 invoked from network); 9 Jan 2014 12:54:56 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-14.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 12:54:56 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 467361941185
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:54 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 3C67A1941186
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:54 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024) with ESMTP id iQyX0ZWi64Kz for <xen-devel@lists.xen.org>;
	Thu,  9 Jan 2014 13:54:53 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id C167E1941185
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:54:53 +0100 (CET)
Date: Thu, 9 Jan 2014 13:54:55 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: xen-devel@lists.xen.org
Message-ID: <1634172506.945.1389272095263.open-xchange@panna.pingst.univention.de>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Thu, 09 Jan 2014 12:57:42 +0000
Subject: [Xen-devel] Spelling mistake to be corrected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8496717745240042785=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8496717745240042785==
Content-Type: multipart/alternative; 
	boundary="----=_Part_944_1356226781.1389272095141"

------=_Part_944_1356226781.1389272095141
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi,

One more thing, I forgot earlier:

There is a spelling mistake at:

http://wiki.xen.org/wiki/XenWindowsGplPv

The link to our homepage for installing the signed GPLPV drivers says
"uninvention". There is one "n" too many; the correct spelling is "univention".

It would be great if you could correct that, too.

Best regards

Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel. : +49 421 22232-68
Fax : +49 421 22232-99
------=_Part_944_1356226781.1389272095141--


--===============8496717745240042785==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8496717745240042785==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 12:58:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FB8-0001gi-Uh; Thu, 09 Jan 2014 12:57:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W1Eqg-0000WH-K6
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:36:34 +0000
Received: from [85.158.139.211:33957] by server-11.bemta-5.messagelabs.com id
	74/71-23268-1D79EC25; Thu, 09 Jan 2014 12:36:33 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389270992!7553349!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27429 invoked from network); 9 Jan 2014 12:36:32 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-16.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 12:36:32 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 898EC1939281
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:30 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 7E19A1939282
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:30 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024) with ESMTP id JXO8m2-OWLZC for <xen-devel@lists.xen.org>;
	Thu,  9 Jan 2014 13:36:29 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id D3F091939281
	for <xen-devel@lists.xen.org>; Thu,  9 Jan 2014 13:36:29 +0100 (CET)
Date: Thu, 9 Jan 2014 13:36:31 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: xen-devel@lists.xen.org
Message-ID: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Thu, 09 Jan 2014 12:57:42 +0000
Subject: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5264110016079902980=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5264110016079902980==
Content-Type: multipart/alternative; 
	boundary="----=_Part_921_1415385226.1389270991203"

------=_Part_921_1415385226.1389270991203
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

I would like our product UCS Virtual Machine Manager (UVMM) to be listed
in the section "Management tools and interfaces" on the following page:

http://wiki.xen.org/wiki/Xen_Management_Tools

UVMM is a web-based virtualization management tool for Xen and KVM. It is
included in our Linux server operating system, Univention Corporate Server, as
is Xen.

Could you please add the listing using the following URL:

http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/

Thanks a lot,

Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel. : +49 421 22232-68
Fax : +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax-No.: 71-597-02876
------=_Part_921_1415385226.1389270991203--


--===============5264110016079902980==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5264110016079902980==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 12:58:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 12:58:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FC6-00029p-Pk; Thu, 09 Jan 2014 12:58:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1FC4-00027v-T4
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 12:58:41 +0000
Received: from [85.158.137.68:24046] by server-12.bemta-3.messagelabs.com id
	84/A8-20055-FFC9EC25; Thu, 09 Jan 2014 12:58:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389272317!8112148!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13376 invoked from network); 9 Jan 2014 12:58:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 12:58:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="89114049"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 12:58:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 07:58:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1FC0-0004Ty-9I;
	Thu, 09 Jan 2014 12:58:36 +0000
Date: Thu, 9 Jan 2014 12:57:43 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389260454.27473.27.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401091256440.21510@kaball.uk.xensource.com>
References: <52CE575F.9050303@oracle.com>
	<1389260454.27473.27.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xudong Hao <xudong.hao@intel.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	zhenzhong.duan@oracle.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] Ask about status about 64 bit pci hotplug support
 on qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Ian Campbell wrote:
> On Thu, 2014-01-09 at 16:01 +0800, Zhenzhong Duan wrote:
> > Hi Maintainer
> > 
> > I added 64-bit PCI hotplug support to HVM guests recently.
> > Then I found Xudong Hao had already sent a similar patch, but it wasn't
> > merged into qemu-xen-traditional.
> 
> Stefano is not the maintainer of this tree, Ian Jackson is. On the other
> hand the patch you link to is against qemu-xen, which Stefano does
> maintain, so I'm a bit confused.

That is not the case: the patch is against qemu-xen-traditional
(hw/pass-through.c doesn't exist on QEMU upstream based trees).


> Perhaps you should also have CCd Xudong? I've done that too.
> 
> This may also relate to http://bugs.xenproject.org/xen/bug/28 ?
> 
> Ian.
> 
> > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01168.html
> > 
> > I am interested in the result about that patch.
> > 
> > thanks
> > zduan
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 09 13:14:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 13:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FRO-0004AK-MU; Thu, 09 Jan 2014 13:14:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1FRN-0004AF-Nk
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 13:14:29 +0000
Received: from [85.158.139.211:39330] by server-16.bemta-5.messagelabs.com id
	A8/D9-11843-5B0AEC25; Thu, 09 Jan 2014 13:14:29 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389273268!8790797!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9806 invoked from network); 9 Jan 2014 13:14:28 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 13:14:28 -0000
Received: by mail-we0-f179.google.com with SMTP id q59so2788034wes.10
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 05:14:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=/tXSTYjAyo87ksl0uLn7xwgLx9So2x/kyJG4NTRKx3w=;
	b=WOyf7WgZBnQEYUYLZCnt1Nsg4ZSjlNYH7Z0kKdFg7w7rIw5fX/oSyMHN7bXWGPm0zK
	1CV6b8pdaB1HPs4y31LL+Eql50Lr6+fma6+OOFFeJnUwZYh8WyutFjHJz11PDu+VtjCE
	wJ/6i3h1idkADMrBEfKbacnhHWHZ8W8VjKtOLDlZTKfa0flwmbCKepeXHGVc+pwt7h1H
	WwkYnnKqC/t3sQppJrRKXILOcWKrZ8fOpCdxZY1zNxf5t30P0kRR0nxWLrNnejxN0oFK
	W6ZjasJrhRnOQksRtlE5uYVrkqupgX/3LSNiv/s1e1dRF1yuX5R4votvohioH4JfDe2y
	4esQ==
X-Gm-Message-State: ALoCoQkg4PaKdMTuPNr1heTy0vLCVJzwWSVF2cg7KPbFNVFAhTnd2iCHfB0gaaMkS6cJpfroNjJf
X-Received: by 10.194.59.210 with SMTP id b18mr2948567wjr.60.1389273267776;
	Thu, 09 Jan 2014 05:14:27 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id z2sm7597946wiy.11.2014.01.09.05.14.26
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 05:14:27 -0800 (PST)
Message-ID: <52CEA0B1.3020403@linaro.org>
Date: Thu, 09 Jan 2014 13:14:25 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>	
	<alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
	<1389265175.27473.67.camel@kazak.uk.xensource.com>
In-Reply-To: <1389265175.27473.67.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/09/2014 10:59 AM, Ian Campbell wrote:
> On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
>> On Wed, 8 Jan 2014, Julien Grall wrote:
>>> The p2m is shared between the VCPUs of each domain. Currently Xen only
>>> flushes the TLB on the local PCPU. This could result in a mismatch between
>>> the mappings in the p2m and the TLBs.
>>>
>>> Flush TLBs used by this domain on every PCPU.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> The fix makes sense to me.
>
> Me too. Has anyone had a grep for similar issues?

I think flush_xen_data_tlb and flush_xen_text_tlb should also be 
innershareable.

The former is used by flush_tlb_mask. But this function seems badly
implemented: it's weird to use flush_xen_data_tlb, because we are mainly
using flush_tlb_mask in common/grant-table.c. Any ideas?


> (the reason for getting this wrong is that for cache flushes the "is"
> suffix restricts the flush to the IS domain, whereas with tlb flushes
> the "is" suffix broadcasts instead of keeping it local, which is a bit
> confusing ;-))
>
>>
>>> ---
>>>
>>> This is a possible bug fix (found by reading the code) for Xen 4.4. I have
>>> added a small optimisation to avoid flushing all TLBs when a VCPU of this
>>> domain is running on the current cpu.
>>>
>>> The downside of this patch is that the function can be a little bit slower
>>> because Xen is flushing more TLBs.
>>
>> Yes, I wonder how much slower it is going to be, considering that the flush
>> is executed for every iteration of the loop.
>
> It might be better to set the current VMID to the target domain for the
> duration of this function, we'd still need the broadcast but at least we
> wouldn't be killing unrelated VMIDs.

I can modify the patch to handle that.

>
> Pulling the flush out of the loop would require great care WRT accesses
> from other VCPUs, e.g. you'd have to put the pages on a list (page->list
> might be available?) and issue the put_page() after the flush, otherwise
> it might get recycled into another domain while the first domain still
> has TLB entries for it.
>
> Or is there always something outside this function which holds another
> ref count such that the page definitely won't be freed by the put_page
> here?
>
> Actually, does the existing code not have this issue already? The put_page
> is before the flush. If this bug does exist now then I'd be inclined to
> consider this a bug fix for 4.4, rather than a potential optimisation
> for 4.5.

For now we don't take a reference when we map/unmap a mapping. Most of the 
time create_p2m_entries is called from common code which takes care of 
holding a reference while this function is called. So we should be safe.

I would prefer to wait for 4.5 for this optimisation (moving the flush 
outside the loop).
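For what it's worth, Ian's suggested ordering (queue the pages on a local
list during the loop, do a single flush, and only then drop the references)
can be modelled roughly as below. This is a standalone, hypothetical sketch,
not actual Xen code: `struct page`, `put_page`, `flush_tlb_domain` and
`unmap_pages` here are simplified stand-ins, and the assert inside `put_page`
just encodes the invariant that a page may only become reusable after the
TLB flush has happened.

```c
/* Hypothetical model of "defer put_page() until after one flush".
 * Not Xen code; names are stand-ins for the real p2m/TLB primitives. */
#include <assert.h>
#include <stddef.h>

struct page {
    int refcount;
    int freed;          /* 1 once the page could be recycled */
    struct page *next;  /* link for the local deferred list */
};

static int tlb_flushed; /* stands in for the broadcast TLB flush */

static void put_page(struct page *pg)
{
    if (--pg->refcount == 0)
    {
        /* Invariant: a page must not become reusable while another
         * PCPU may still hold stale TLB entries for it. */
        assert(tlb_flushed);
        pg->freed = 1;
    }
}

static void flush_tlb_domain(void)
{
    tlb_flushed = 1;    /* single flush, after the whole loop */
}

/* Unmap a batch of pages: clear the entries, queue each page on a
 * local list, flush once, then release the references. */
static void unmap_pages(struct page **pages, size_t n)
{
    struct page *defer = NULL;

    for (size_t i = 0; i < n; i++)
    {
        /* ... clear the p2m entry here ... */
        pages[i]->next = defer;
        defer = pages[i];
    }

    flush_tlb_domain();

    while (defer)       /* now safe to drop the references */
    {
        struct page *pg = defer;
        defer = pg->next;
        put_page(pg);
    }
}
```

The point of the model is only the ordering: if `put_page` ran inside the
loop, before the flush, the assert would trip for any page whose last
reference was dropped there.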

> While looking at this function I'm now wondering what happens to the
> existing page on ALLOCATE or INSERT, is it leaked?

For an ALLOCATE page, it's on the domain page list, so the page will be 
freed during relinquish.

For INSERT, except for foreign mappings, we don't have a refcount for the 
mapping. So the only issue could be with foreign mappings.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 13:14:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 13:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1FRO-0004AK-MU; Thu, 09 Jan 2014 13:14:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1FRN-0004AF-Nk
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 13:14:29 +0000
Received: from [85.158.139.211:39330] by server-16.bemta-5.messagelabs.com id
	A8/D9-11843-5B0AEC25; Thu, 09 Jan 2014 13:14:29 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389273268!8790797!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9806 invoked from network); 9 Jan 2014 13:14:28 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 13:14:28 -0000
Received: by mail-we0-f179.google.com with SMTP id q59so2788034wes.10
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 05:14:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=/tXSTYjAyo87ksl0uLn7xwgLx9So2x/kyJG4NTRKx3w=;
	b=WOyf7WgZBnQEYUYLZCnt1Nsg4ZSjlNYH7Z0kKdFg7w7rIw5fX/oSyMHN7bXWGPm0zK
	1CV6b8pdaB1HPs4y31LL+Eql50Lr6+fma6+OOFFeJnUwZYh8WyutFjHJz11PDu+VtjCE
	wJ/6i3h1idkADMrBEfKbacnhHWHZ8W8VjKtOLDlZTKfa0flwmbCKepeXHGVc+pwt7h1H
	WwkYnnKqC/t3sQppJrRKXILOcWKrZ8fOpCdxZY1zNxf5t30P0kRR0nxWLrNnejxN0oFK
	W6ZjasJrhRnOQksRtlE5uYVrkqupgX/3LSNiv/s1e1dRF1yuX5R4votvohioH4JfDe2y
	4esQ==
X-Gm-Message-State: ALoCoQkg4PaKdMTuPNr1heTy0vLCVJzwWSVF2cg7KPbFNVFAhTnd2iCHfB0gaaMkS6cJpfroNjJf
X-Received: by 10.194.59.210 with SMTP id b18mr2948567wjr.60.1389273267776;
	Thu, 09 Jan 2014 05:14:27 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id z2sm7597946wiy.11.2014.01.09.05.14.26
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 05:14:27 -0800 (PST)
Message-ID: <52CEA0B1.3020403@linaro.org>
Date: Thu, 09 Jan 2014 13:14:25 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>	
	<alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
	<1389265175.27473.67.camel@kazak.uk.xensource.com>
In-Reply-To: <1389265175.27473.67.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/09/2014 10:59 AM, Ian Campbell wrote:
> On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
>> On Wed, 8 Jan 2014, Julien Grall wrote:
>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>>> p2m and TLBs.
>>>
>>> Flush TLBs used by this domain on every PCPU.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> The fix makes sense to me.
>
> Me too. Has anyone had a grep for similar issues?

I think flush_xen_data_tlb and flush_xen_text_tlb should also be 
inner-shareable.

The former is used by flush_tlb_mask. But... that function seems badly 
implemented: it's odd for it to use flush_xen_data_tlb, because 
flush_tlb_mask is mainly used in common/grant-table.c. Any ideas?


> (the reason for getting this wrong is that for cache flushes the "is"
> suffix restricts the flush to the IS domain, whereas with tlb flushes
> the "is" suffix broadcasts instead of keeping it local, which is a bit
> confusing ;-))
>
>>
>>> ---
>>>
>>> This is a possible bug fix (found by reading the code) for Xen 4.4. I have
>>> added a small optimisation to avoid flushing all TLBs when a VCPU of this
>>> domain is running on the current cpu.
>>>
>>> The downside of this patch is that the function can be a little slower
>>> because Xen is flushing more TLBs.
>>
>> Yes, I wonder how much slower it is going to be, considering that the flush
>> is executed for every iteration of the loop.
>
> It might be better to set the current VMID to the target domain for the
> duration of this function, we'd still need the broadcast but at least we
> wouldn't be killing unrelated VMIDs.

I can modify the patch to handle that.

>
> Pulling the flush out of the loop would require great care WRT accesses
> from other VCPUs, e.g. you'd have to put the pages on a list (page->list
> might be available?) and issue the put_page() after the flush, otherwise
> it might get recycled into another domain while the first domain still
> has TLB entries for it.
>
> Or is there always something outside this function which holds another
> ref count such that the page definitely won't be freed by the put_page
> here?
>
> Actually, does the existing code not have this issue already? The put_page
> is before the flush. If this bug does exist now then I'd be inclined to
> consider this a bug fix for 4.4, rather than a potential optimisation
> for 4.5.

For now we don't take a reference when we map/unmap a mapping. Most of the 
time create_p2m_entries is called by common code which takes care of 
holding a reference while this function is called. So we should be safe.

I would prefer to wait for 4.5 for this optimisation (moving the flush 
outside the loop).

> While looking at this function I'm now wondering what happens to the
> existing page on ALLOCATE or INSERT, is it leaked?

For an ALLOCATE page, it's on the domain page list, so the page will be 
freed during relinquish.

For INSERT, we don't refcount mappings except for foreign mappings. So 
the only issue could be with foreign mappings.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 13:39:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 13:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Fpj-00061D-HO; Thu, 09 Jan 2014 13:39:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Fph-000618-KB
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 13:39:37 +0000
Received: from [193.109.254.147:25187] by server-8.bemta-14.messagelabs.com id
	E8/4B-30921-896AEC25; Thu, 09 Jan 2014 13:39:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389274775!9767427!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20444 invoked from network); 9 Jan 2014 13:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 13:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,630,1384300800"; d="scan'208";a="91271449"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 13:39:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	08:39:33 -0500
Message-ID: <1389274773.27473.111.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Maren Abatielos <abatielos@univention.de>
Date: Thu, 9 Jan 2014 13:39:33 +0000
In-Reply-To: <1634172506.945.1389272095263.open-xchange@panna.pingst.univention.de>
References: <1634172506.945.1389272095263.open-xchange@panna.pingst.univention.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Spelling mistake to be corrected
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 13:54 +0100, Maren Abatielos wrote:
> Hi, 
>   
> One more thing, I forgot earlier: 
> 
> There is a spelling mistake at: 
> 
> http://wiki.xen.org/wiki/XenWindowsGplPv 
> 
> The link name to our homepage for installing the signed GPLPV drivers
> says "uninvention". There is one "n" too many; the correct spelling is
> "univention". 
>   
> Would be great if you could correct that, too. 

Please send me your wiki login ID and I'll give it write permissions
(has to be manual due to spammers :-()

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 13:50:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 13:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Fzu-0006oA-96; Thu, 09 Jan 2014 13:50:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Fzs-0006o5-Lf
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 13:50:08 +0000
Received: from [85.158.143.35:51189] by server-1.bemta-4.messagelabs.com id
	06/98-02132-019AEC25; Thu, 09 Jan 2014 13:50:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389275405!10700207!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26353 invoked from network); 9 Jan 2014 13:50:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 13:50:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91274414"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 13:50:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	08:50:04 -0500
Message-ID: <1389275403.27473.117.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 13:50:03 +0000
In-Reply-To: <1387542927-4727-1-git-send-email-ian.campbell@citrix.com>
References: <1387542927-4727-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [OSSTEST PATCH v2] do not install xend for xl tests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ping?

On Fri, 2013-12-20 at 12:35 +0000, Ian Campbell wrote:
> We need to check that xl works correctly when xend is not even installed (in
> case we are subtly relying on some file which xend installs).
> 
> Therefore for xen 4.4 onwards never build xend in the default build job and
> instead create two new build jobs (for i386 and amd64) with xend enabled.
> Update the tests to use the correct xenbuildjob.
> 
> Tested only to the extent of running make-flight for xen-4.{2,3,4}-testing and
> xen-unstable and observing that the jobs do not differ for 4.2 and 4.3, that
> 4.4 and unstable have gained the new build-{i386,amd64}-xend jobs, and that
> the relevant tests have switched their xenbuildjob runvar to have the suffix.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> v2: Pass down $xenarch and construct $xenbuildjob from it rather than passing
> down the latter.
> ---
>  make-flight |   71 +++++++++++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 57 insertions(+), 14 deletions(-)
> 
> diff --git a/make-flight b/make-flight
> index 5b96153..2b7bc24 100755
> --- a/make-flight
> +++ b/make-flight
> @@ -83,9 +83,22 @@ if [ x$buildflight = x ]; then
>          suite_runvars=
>      fi
>  
> +    # In 4.4 onwards xend is off by default. If necessary we build a
> +    # separate set of binaries with xend enabled in order to run those
> +    # tests which use xend.
>      case "$arch" in
> -    i386|amd64) enable_xend=true;;
> -    *) enable_xend=false;;
> +    i386|amd64) want_xend=true;;
> +    *) want_xend=false;;
> +    esac
> +
> +    case "$xenbranch" in
> +    xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
> +    xen-4.0-testing) build_defxend=$want_xend; build_extraxend=false;;
> +    xen-4.1-testing) build_defxend=$want_xend; build_extraxend=false;;
> +    xen-4.2-testing) build_defxend=$want_xend; build_extraxend=false;;
> +    xen-4.3-testing) build_defxend=$want_xend; build_extraxend=false;;
> +    *) build_defxend=false;
> +       build_extraxend=$want_xend
>      esac
>  
>      case "$xenbranch" in
> @@ -104,7 +117,7 @@ if [ x$buildflight = x ]; then
>      build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
>  
>      ./cs-job-create $flight build-$arch build				     \
> -		arch=$arch enable_xend=$enable_xend enable_ovmf=$enable_ovmf	     \
> +		arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf	     \
>  	tree_qemu=$TREE_QEMU	     \
>  	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
>  	tree_xen=$TREE_XEN		     \
> @@ -115,6 +128,20 @@ if [ x$buildflight = x ]; then
>  		revision_qemu=$REVISION_QEMU				     \
>  		revision_qemuu=$REVISION_QEMU_UPSTREAM
>  
> +    if [ $build_extraxend = "true" ] ; then
> +    ./cs-job-create $flight build-$arch-xend build			     \
> +		arch=$arch enable_xend=true enable_ovmf=$enable_ovmf	     \
> +	tree_qemu=$TREE_QEMU	     \
> +	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
> +	tree_xen=$TREE_XEN		     \
> +		$RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
> +		$suite_runvars                                               \
> +		host_hostflags=$build_hostflags    \
> +		revision_xen=$REVISION_XEN				     \
> +		revision_qemu=$REVISION_QEMU				     \
> +		revision_qemuu=$REVISION_QEMU_UPSTREAM
> +    fi
> +
>      ./cs-job-create $flight build-$arch-pvops build-kern		     \
>  		arch=$arch kconfighow=xen-enable-xen-config		     \
>  	tree_xen=$TREE_XEN		     \
> @@ -198,10 +225,22 @@ job_create_test () {
>  	local job=$1; shift
>  	local recipe=$1; shift
>  	local toolstack=$1; shift
> +	local xenarch=$1; shift
>  
>          local job_md5=`echo "$job" | md5sum`
>          job_md5="${job_md5%  -}"
>  
> +	xenbuildjob="${bfi}build-$xenarch"
> +
> +        case "$xenbranch:$toolstack" in
> +        xen-3.*-testing:*) ;;
> +        xen-4.0-testing:*) ;;
> +        xen-4.1-testing:*) ;;
> +        xen-4.2-testing:*) ;;
> +        xen-4.3-testing:*) ;;
> +        *:xend) xenbuildjob="$xenbuildjob-xend";;
> +        esac
> +
>          if [ "x$JOB_MD5_PATTERN" != x ]; then
>                  case "$job_md5" in
>                  $JOB_MD5_PATTERN)       ;;
> @@ -237,7 +276,7 @@ job_create_test () {
>          esac
>  
>  	./cs-job-create $flight $job $recipe toolstack=$toolstack \
> -		$RUNVARS $TEST_RUNVARS $most_runvars "$@"
> +		$RUNVARS $TEST_RUNVARS $most_runvars xenbuildjob=$xenbuildjob "$@"
>  }
>  
>  for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
> @@ -331,7 +370,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>  
>        most_runvars="
>  		arch=$dom0arch			        	\
> -		xenbuildjob=${bfi}build-$xenarch        	\
>  		kernbuildjob=${bfi}build-$dom0arch-$kernbuild 	\
>  		buildjob=${bfi}build-$dom0arch	        	\
>  		kernkind=$kernkind		        	\
> @@ -339,6 +377,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>  		"
>        if [ $dom0arch = armhf ]; then
>  	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
> +		$xenarch						  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
> @@ -346,11 +385,13 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>        fi
>  
>        job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
> +		$xenarch						  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
> +		$xenarch						  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
> @@ -360,7 +401,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>          for freebsdarch in amd64 i386; do
>  
>   job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
> -			test-freebsd xl \
> +			test-freebsd xl $xenarch  \
>  			freebsd_arch=$freebsdarch \
>   freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
>  			all_hostflags=$most_hostflags
> @@ -406,7 +447,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>  
>        job_create_test \
>                  test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
> -                test-win $toolstack $qemuu_runvar \
> +                test-win $toolstack $xenarch $qemuu_runvar \
>  		win_image=winxpsp3.iso $vcpus_runvars	\
>  		all_hostflags=$most_hostflags,hvm
>  
> @@ -416,7 +457,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>        if [ $xenarch = amd64 ]; then
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
> -                test-win xl $qemuu_runvar \
> +                test-win xl $xenarch $qemuu_runvar \
>  		win_image=win7-x64.iso \
>  		all_hostflags=$most_hostflags,hvm
>  
> @@ -427,7 +468,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>  	for cpuvendor in amd intel; do
>  
>      job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
> -						test-rhelhvm xl \
> +						test-rhelhvm xl $xenarch \
>  		redhat_image=rhel-server-6.1-i386-dvd.iso		\
>  		all_hostflags=$most_hostflags,hvm-$cpuvendor \
>                  $qemuu_runvar
> @@ -439,7 +480,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>        done # qemuu_suffix
>  
>        job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
> -		$onetoolstack \
> +		$onetoolstack $xenarch \
>                  !host !host_hostflags \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
> @@ -450,7 +491,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>         for pin in '' -pin; do
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
> -           test-debian xl guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
> +           test-debian xl $xenarch \
> +		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
> @@ -462,13 +504,14 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>        if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
> -                        test-debian xl guests_vcpus=4 \
> +                        test-debian xl $xenarch guests_vcpus=4 \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
> -           test-debian xl guests_vcpus=4 xen_boot_append='sched=credit2'      \
> +           test-debian xl $xenarch					  \
> +		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
> @@ -480,7 +523,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>          for cpuvendor in intel; do
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
> -                        test-debian-nomigr xl guests_vcpus=4 \
> +                        test-debian-nomigr xl $xenarch guests_vcpus=4	  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		debian_pcipassthrough_nic=host				  \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> -                        test-debian xl guests_vcpus=4 \
> +                        test-debian xl $xenarch guests_vcpus=4 \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
> -           test-debian xl guests_vcpus=4 xen_boot_append='sched=credit2'      \
> +           test-debian xl $xenarch					  \
> +		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		all_hostflags=$most_hostflags
> @@ -480,7 +523,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
>          for cpuvendor in intel; do
>  
>        job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
> -                        test-debian-nomigr xl guests_vcpus=4 \
> +                        test-debian-nomigr xl $xenarch guests_vcpus=4	  \
>  		debian_kernkind=$kernkind				  \
>  		debian_arch=$dom0arch   				  \
>  		debian_pcipassthrough_nic=host				  \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1GFE-0007dD-Gm; Thu, 09 Jan 2014 14:06:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1GFD-0007d8-IX
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 14:05:59 +0000
Received: from [85.158.139.211:37536] by server-15.bemta-5.messagelabs.com id
	27/E0-08490-6CCAEC25; Thu, 09 Jan 2014 14:05:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389276356!6089266!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28302 invoked from network); 9 Jan 2014 14:05:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:05:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89137199"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 14:05:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:05:55 -0500
Message-ID: <1389276353.27473.126.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 9 Jan 2014 14:05:53 +0000
In-Reply-To: <52CEA0B1.3020403@linaro.org>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
	<1389265175.27473.67.camel@kazak.uk.xensource.com>
	<52CEA0B1.3020403@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 13:14 +0000, Julien Grall wrote:
> 
> On 01/09/2014 10:59 AM, Ian Campbell wrote:
> > On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
> >> On Wed, 8 Jan 2014, Julien Grall wrote:
> >>> The p2m is shared between the VCPUs of each domain. Currently Xen only
> >>> flushes the TLB on the local PCPU. This could result in a mismatch between
> >>> the mappings in the p2m and the TLBs.
> >>>
> >>> Flush TLBs used by this domain on every PCPU.
> >>>
> >>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> >>
> >> The fix makes sense to me.
> >
> > Me too. Has anyone had a grep for similar issues?
> 
> I think flush_xen_data_tlb and flush_xen_text_tlb should also be 
> innershareable.

I think text_tlb is OK; it's only used at start of day. The uses in
setup_pagetables and mmu_init_secondary_cpus are both explicitly local, I
think. Perhaps it should be renamed _local.

I'm less sure about the case in set_pte_flags_on_range via free_init_memory,
but I don't think stale TLB entries are actually an issue there, since there
will certainly be one when the page becomes used again. But maybe it would be
safest to make it global.

> The former one is used by flush_tlb_mask.

Yes, the comment there is just wrong. I think this was my doing based on
the confusion I mentioned before.

We need to be careful not to change (un)map_domain_page, since those mappings
are not shared between processors; I don't think this change would do that,
though.

>  But ... this function seems
> badly implemented; it's weird to use flush_xen_data_tlb, because we are
> mainly using flush_tlb_mask in common/grant-table.c. Any ideas?

Do you mean that this should be flushing the guest TLBs and not Xen's?
That does seem right... We actually need to be flushing for all VMIDs
too, I think -- for the alloc_heap_pages case.

Ian.

> 
> 
> > (the reason for getting this wrong is that for cache flushes the "is"
> > suffix restricts the flush to the IS domain, whereas with tlb flushes
> > the "is" suffix broadcasts instead of keeping it local, which is a bit
> > confusing ;-))
> >
> >>
> >>> ---
> >>>
> >>> This is a possible bug fix (found by reading the code) for Xen 4.4. I have
> >>> added a small optimisation to avoid flushing all TLBs when a VCPU of this
> >>> domain is running on the current cpu.
> >>>
> >>> The downside of this patch is the function can be a little bit slower because
> >>> Xen is flushing more TLBs.
> >>
> >> Yes, I wonder how much slower it is going to be, considering that the flush
> >> is executed for every iteration of the loop.
> >
> > It might be better to set the current VMID to the target domain for the
> > duration of this function, we'd still need the broadcast but at least we
> > wouldn't be killing unrelated VMIDs.
> 
> I can modify the patch to handle that.
> 
> >
> > Pulling the flush out of the loop would require great care WRT accesses
> > from other VCPUs, e.g. you'd have to put the pages on a list (page->list
> > might be available?) and issue the put_page() after the flush, otherwise
> > it might get recycled into another domain while the first domain still
> > has TLB entries for it.
> >
> > Or is there always something outside this function which holds another
> > ref count such that the page definitely won't be freed by the put_page
> > here?
> >
> > Actually, does the existing code not have this issue already? The put_page
> > is before the flush. If this bug does exist now then I'd be inclined to
> > consider this a bug fix for 4.4, rather than a potential optimisation
> > for 4.5.
> 
> For now we don't take a reference when we map/unmap a mapping. Most of the
> time create_p2m_entries is called by common code which takes care of
> holding a reference when this function is called. So we should be safe.
> 
> I would prefer to wait for 4.5 for this optimisation (moving the flush
> outside the loop).
> 
> > While looking at this function I'm now wondering what happens to the
> > existing page on ALLOCATE or INSERT, is it leaked?
> 
> For an ALLOCATE page, it's on the domain page list, so the page will be
> freed during relinquish.
> 
> For INSERT, except for foreign mappings we don't take a refcount on the
> mapping. So the only issue could be with foreign mappings.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:27:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1GZa-0000gc-Ig; Thu, 09 Jan 2014 14:27:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W1GZY-0000gX-CN
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:27:00 +0000
Received: from [193.109.254.147:27525] by server-8.bemta-14.messagelabs.com id
	77/97-30921-3B1BEC25; Thu, 09 Jan 2014 14:26:59 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389277617!9781850!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26242 invoked from network); 9 Jan 2014 14:26:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:26:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89146060"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 14:26:57 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 9 Jan 2014 09:26:56 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL01.citrite.net ([10.69.46.32]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 15:25:04 +0100
From: Rob Hoes <Rob.Hoes@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Thread-Topic: [PATCH v3 3/3] libxl: ocaml: use 'for_app_registration' in
	osevent callbacks
Thread-Index: AQHPDI1ZfF++bAj2bEaXCe0N71x4LZp6/iGAgAFzyvA=
Date: Thu, 9 Jan 2014 14:25:03 +0000
Message-ID: <360717C0B01E6345BCBE64B758E22C2D1DAF45@AMSPEX01CL03.citrite.net>
References: <1386955247-22437-1-git-send-email-rob.hoes@citrix.com>
	<1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
	<21197.34209.628619.102215@mariner.uk.xensource.com>
In-Reply-To: <21197.34209.628619.102215@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> This may seem like a daft question, but why does handles contain a value*
> rather than just a value ?

Right, the important thing is that this "value" is malloc'd, which it will be if it is part of the handles struct. I'll change that.

> Also, your error paths fail to free handles (or handles->for_app, although
> I'm implicitly proposing that the latter be abolished).
> 
> > +	*for_app_registration_out = handles->for_app;
> 
> I think this is counterintuitive.  Why not pass handles to
> for_app_registration_out ?  I know that your timeout_modify doesn't need
> handles->for_libxl, but it would seem clearer to have your shim proxy
> everything neatly: i.e., to always pass its own context
> ("handles") to both sides.

Ok, makes sense.
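A compilable toy of the pattern being agreed here -- one handles struct owning both tokens, freed on every error path, and passed whole to for_app_registration_out. All names are illustrative, and `value` is stubbed as a plain integer so the sketch builds without the OCaml runtime (real code would use caml/mlvalues.h):

```c
/* Toy sketch of the registration pattern under review.  Illustrative
   only; not the actual libxl/ocaml shim code. */
#include <assert.h>
#include <stdlib.h>

typedef long value;                  /* stand-in for OCaml's value type  */

struct handles {
    void *for_libxl;                 /* libxl's side of the registration */
    value for_app;                   /* the application's side, owned by
                                        the struct rather than a value*  */
};

/* On success, hand the whole handles struct back as the application
   registration token; on any failure, free it before returning. */
static int register_event(void *for_libxl, value for_app,
                          void **for_app_registration_out)
{
    struct handles *h = malloc(sizeof(*h));
    if (!h)
        return -1;
    h->for_libxl = for_libxl;
    h->for_app = for_app;
    if (for_app < 0) {               /* stand-in for some later failure  */
        free(h);                     /* error paths must free handles    */
        return -1;
    }
    *for_app_registration_out = h;   /* pass handles, not for_app        */
    return 0;
}
```

Passing the whole struct to both sides means each callback can reach whichever half it needs, instead of some callbacks getting only for_app.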

> >  int timeout_modify(void *user, void **for_app_registration_update, @@
> > -1315,25 +1400,45 @@ int timeout_modify(void *user, void
> > **for_app_registration_update,  {
> >  	caml_leave_blocking_section();
> >  	CAMLparam0();
> > +	CAMLlocalN(args, 2);
> > +	int ret = 0;
> >  	static value *func = NULL;
> >  	value *p = (value *) user;
> > +	value *for_app = *for_app_registration_update;
> ...
> > -	caml_callback(*func, *p);
> > +	args[0] = *p;
> > +	args[1] = *for_app;
> 
> I see that you're relying on the promise in modern libxl.h that abs is
> always {0,0} meaning "right away".
> 
> You should probably assert that.

Ok.
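The suggested assertion might look something like this (a sketch with stubbed types; real code would check the struct timeval argument libxl actually passes to the hook):

```c
/* Toy sketch of asserting the documented promise: modern libxl always
   passes abs == {0,0} ("fire right away") to the timeout-modify hook.
   Types are stand-ins; this is not the real shim code. */
#include <assert.h>

struct abs_time { long tv_sec, tv_usec; }; /* stand-in for struct timeval */

static void timeout_fire_now(struct abs_time abs)
{
    /* Rely on, and check, the promise instead of silently ignoring a
       deadline we do not implement. */
    assert(abs.tv_sec == 0 && abs.tv_usec == 0);
    /* ... invoke the OCaml callback to fire the timeout immediately ... */
}
```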

> Also, perhaps, your ocaml function should have a different name.
> "libxl_timeout_modify_now" or something.  We have avoided renaming it in
> the C version to avoid API churn.  Of course you may prefer to keep the
> two names identical, in which case presumably this will be addressed in
> the documentation.  (Um.  What documentation.)  Anyway, I wanted you to
> think about this question and explicitly choose to give it a different
> name, or to avoid doing so :-).

Yes, it is probably clearer to change the name on the OCaml side to better reflect what the function really does. I was thinking of "fire_now".

I'll send an update soon.

Thanks,
Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:38:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gke-0001W2-F9; Thu, 09 Jan 2014 14:38:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Gkd-0001Vx-BM
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 14:38:27 +0000
Received: from [193.109.254.147:65438] by server-8.bemta-14.messagelabs.com id
	A0/56-30921-264BEC25; Thu, 09 Jan 2014 14:38:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389278304!9784356!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10981 invoked from network); 9 Jan 2014 14:38:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91292588"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:38:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:38:23 -0500
Message-ID: <1389278302.27473.132.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 9 Jan 2014 14:38:22 +0000
In-Reply-To: <alpine.DEB.2.02.1401091256440.21510@kaball.uk.xensource.com>
References: <52CE575F.9050303@oracle.com>
	<1389260454.27473.27.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401091256440.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	zhenzhong.duan@oracle.com, Xudong Hao <xudong.hao@intel.com>
Subject: Re: [Xen-devel] Ask about status about 64 bit pci hotplug support
 on qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 12:57 +0000, Stefano Stabellini wrote:
> On Thu, 9 Jan 2014, Ian Campbell wrote:
> > On Thu, 2014-01-09 at 16:01 +0800, Zhenzhong Duan wrote:
> > > Hi Maintainer
> > > 
> > > I added 64-bit PCI hotplug support to HVM guests recently.
> > > Then I found that Xudong Hao had previously sent a similar patch, but
> > > it wasn't merged into qemu-xen-traditional.
> > 
> > Stefano is not the maintainer of this tree, Ian Jackson is. On the other
> > hand the patch you link to is against qemu-xen, which Stefano does
> > maintain, so I'm a bit confused.
> 
> That is not the case: the patch is against qemu-xen-traditional
> (hw/pass-through.c doesn't exist on QEMU upstream based trees).

Ah, I was tricked by the subject line which says qemu-xen!

> 
> > Perhaps you should also have CCd Xudong? I've done that too.
> > 
> > This may also relate to http://bugs.xenproject.org/xen/bug/28 ?
> > 
> > Ian.
> > 
> > > http://lists.xen.org/archives/html/xen-devel/2012-08/msg01168.html
> > > 
> > > I am interested in the outcome of that patch.
> > > 
> > > thanks
> > > zduan
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:39:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1GlU-0001ZW-Un; Thu, 09 Jan 2014 14:39:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1GlS-0001ZH-RW
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:39:19 +0000
Received: from [193.109.254.147:3973] by server-6.bemta-14.messagelabs.com id
	2A/BC-14958-694BEC25; Thu, 09 Jan 2014 14:39:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389278356!9871553!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27608 invoked from network); 9 Jan 2014 14:39:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:39:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89150575"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 14:39:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 09:39:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1GlP-0005yw-5M;
	Thu, 09 Jan 2014 14:39:15 +0000
Date: Thu, 9 Jan 2014 14:39:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140109143915.GC12164@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
	<20140108183411.GA13867@zion.uk.xensource.com>
	<1389261891.27473.45.camel@kazak.uk.xensource.com>
	<20140109115220.GA32437@zion.uk.xensource.com>
	<1389268784.27473.71.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389268784.27473.71.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 11:59:44AM +0000, Ian Campbell wrote:
> On Thu, 2014-01-09 at 11:52 +0000, Wei Liu wrote:
> > On Thu, Jan 09, 2014 at 10:04:51AM +0000, Ian Campbell wrote:
> > > On Wed, 2014-01-08 at 18:34 +0000, Wei Liu wrote:
> > > > On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> > > > > On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > > > > > With xm it was possible to have a global keymap="de" to map the physical
> > > > > > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > > > > > xl create -d shows keymap:NULL in the vfb part.
> > > > > > Only moving keymap= into vfb=[] fixes it for me.
> > > > > > 
> > > > > > xl.cfg(5) indicates that keymap= can be specified as a global option (just
> > > > > > like vnc=) as well as a suboption of vfb=[].
> > > > > > 
> > > > > > Was this already fixed in xen-unstable? git log shows no keymap-related
> > > > > > changes.
> > > > > 
> > > > > I don't think Wei covered this one with his VNC patches. It does sound
> > > > > like it should be moved though, yes. I think this is 4.5 material at this
> > > > > point.
> > > > > 
> > > > 
> > > > You're right, my patch didn't cover that aspect because I tried hard to
> > > > make it minimal.
> > > 
> > > Do you think you could revisit this bit for 4.5?
> > > 
> > 
> > Yes, I think so. That should be relatively straightforward, I hope. :-)
> > 
> > > > > Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
> > > > > Sorry for not thinking of that during review.
> > > > > 
> > > > 
> > > > I don't think so. All VNC / VFB options are already documented.
> > > 
> > > The top level vnc*= options seem to be under "Emulated VGA Graphics
> > > Device" though.
> > > 
> > 
> > How about this
> 
> HRM, I was more thinking about pulling those options out into a new
> "Primary Graphics Device" or something section with a little intro blurb
> about how for PV this is a PVFB and for HVM this is an emulated VNC.
> 
> With your approach I think at a minimum it would need to enumerate which
> specific options work for both.
> 

>From 935b18da7a9cf60186a87dcf97d39722d0e49937 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 11:48:13 +0000
Subject: [PATCH] docs/man/xl.cfg.pod.5: document global VNC options for VFB
 device

Update xl.cfg to reflect change in 706d4ab74 "xl: create VFB for PV
guest when VNC is specified".

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 docs/man/xl.cfg.pod.5 |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 72efd88..9941395 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -387,7 +387,10 @@ to the domain.
 
 This options does not control the emulated graphics card presented to
 an HVM guest. See L<Emulated VGA Graphics Device> below for how to
-configure the emulated device.
+configure the emulated device. If L<Emulated VGA Graphics Device> options
+are used in a PV guest configuration, xl will pick up B<vnc>, B<vnclisten>,
+B<vncpasswd>, B<vncdisplay> and B<vncunused> to construct paravirtual
+framebuffer device for the guest.
 
 Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
 settings, from the following list:
-- 
1.7.10.4
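
For reference, the two keymap placements discussed in this thread would look like this in an xl domain configuration. This is an illustrative sketch only (option names per xl.cfg(5), values hypothetical), not a tested configuration:

```
# Global option, documented in xl.cfg(5) but reported in this thread
# not to reach the VFB in xen-4.3:
keymap = "de"

# Suboption of vfb=[], the form Olaf reports as working:
vfb = [ 'vnc=1, keymap=de' ]
```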



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gt6-0002f5-Gq; Thu, 09 Jan 2014 14:47:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W1Gt5-0002ey-0k
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:47:11 +0000
Received: from [193.109.254.147:24675] by server-11.bemta-14.messagelabs.com
	id F2/A0-20576-E66BEC25; Thu, 09 Jan 2014 14:47:10 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389278828!9864664!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17198 invoked from network); 9 Jan 2014 14:47:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:47:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91295562"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:47:07 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:47:07 -0500
Message-ID: <52CEB663.4070609@citrix.com>
Date: Thu, 9 Jan 2014 09:46:59 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1389261984.27473.46.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA2
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/09/2014 05:06 AM, Ian Campbell wrote:
> On Wed, 2014-01-08 at 18:29 +0000, Ross Philipson wrote:
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
>>> bounces@lists.xen.org] On Behalf Of Ian Campbell
>>> Sent: Tuesday, January 07, 2014 8:56 AM
>>> To: Zhang, Eniac
>>> Cc: xen-devel@lists.xen.org
>>> Subject: Re: [Xen-devel] passing smbios table from qemu
>>>
>>> On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:
>>>
>>>> Question: am I missing anything, or is this feature (passing smbios)
>>>> still a work in progress?
>>>
>>> Under Xen smbios tables are supplied via hvmloader, not via qemu.
>>>
>>> What tables and or values do you want to override/supply?
>>>
>>> I believe that libxc supports passing in extra smbios tables when
>>> building the guest (via struct xc_hvm_build_args.smbios_module) but
>>> nothing has been plumbed in to make use of this.
>>>
>>> I'm not aware of any ongoing work to plumb that stuff further up, e.g.
>>> to libxl and xl or other toolstacks. (I think the libxc functionality is
>>> only consumed by the XenClient toolstack).
>>
>> Just FYI, I did go back and add the support (and docs) for it in
>> libxl. I did this after the first set of patches went in per someone's
>> request (can't recall who it was at the moment).
>
> Ah yes, here it is:
>         smbios_firmware="STRING"
>             Specify a path to a file that contains extra SMBIOS firmware
>             structures to pass in to a guest. The file can contain a set DMTF
>             predefined structures which will override the internal defaults.
>             Not all predefined structures can be overridden, only the
>             following types: 0, 1, 2, 3, 11, 22, 39. The file can also contain
>             any number of vendor defined SMBIOS structures (type 128 - 255).
>             Since SMBIOS structures do not present their overall size, each
>             entry in the file must be preceded by a 32b integer indicating the
>             size of the next structure.
>
> Did you not have a tool/library for helping to create such blobs
> somewhere? Or is my memory playing tricks?

Your memory is intact; I did provide a helper library. I posted it as a 
tarball since I could not figure out where such a thing might live in 
the xen tree. I posted it twice - the second time with some fixes:

http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html
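
The blob layout described in the manpage text quoted above (each raw SMBIOS structure preceded by a 32-bit size integer) can be sketched as follows. This is an illustrative snippet, not the posted helper library; the little-endian encoding of the length prefix is an assumption:

```python
import struct

def smbios_blob(structures):
    """Concatenate raw SMBIOS structures, each preceded by a 32-bit
    length prefix, per the xl.cfg smbios_firmware description.
    Little-endian encoding of the prefix is assumed."""
    blob = b""
    for s in structures:
        blob += struct.pack("<I", len(s)) + s
    return blob

# Minimal vendor-defined structure (type 128): a 4-byte header
# (type, formatted-area length, handle) followed by the double-NUL
# terminator of an empty string-set.
vendor = struct.pack("<BBH", 128, 4, 0) + b"\x00\x00"
blob = smbios_blob([vendor])  # 4-byte size prefix + 6-byte structure
```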


>
> Ian.
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:47:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gt6-0002f5-Gq; Thu, 09 Jan 2014 14:47:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W1Gt5-0002ey-0k
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:47:11 +0000
Received: from [193.109.254.147:24675] by server-11.bemta-14.messagelabs.com
	id F2/A0-20576-E66BEC25; Thu, 09 Jan 2014 14:47:10 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389278828!9864664!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17198 invoked from network); 9 Jan 2014 14:47:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:47:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91295562"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:47:07 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:47:07 -0500
Message-ID: <52CEB663.4070609@citrix.com>
Date: Thu, 9 Jan 2014 09:46:59 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1389261984.27473.46.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA2
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/09/2014 05:06 AM, Ian Campbell wrote:
> On Wed, 2014-01-08 at 18:29 +0000, Ross Philipson wrote:
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
>>> bounces@lists.xen.org] On Behalf Of Ian Campbell
>>> Sent: Tuesday, January 07, 2014 8:56 AM
>>> To: Zhang, Eniac
>>> Cc: xen-devel@lists.xen.org
>>> Subject: Re: [Xen-devel] passing smbios table from qemu
>>>
>>> On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:
>>>
>>>> Question, am I missing anything, or this feature (passing smbios) is
>>>> still work in progress?
>>>
>>> Under Xen smbios tables are supplied via hvmloader, not via qemu.
>>>
>>> What tables and or values do you want to override/supply?
>>>
>>> I believe that libxc supports passing in extra smbios tables when
>>> building the guest (via struct xc_hvm_build_args.smbios_module) but
>>> nothing has been plumbed in to make use of this.
>>>
>>> I'm not aware of any on going work to plumb that stuff further up, e.g.
>>> to libxl and xl or other toolstacks. (I think the libxc functionality is
>>> only consumed by the XenClient toolstack).
>>
>> Just FYI, I did go back and add the support (and docs) for it in
>> libxl. I did this after the first set of patches went in per someone's
>> request (can't recall who it was at the moment).
>
> Ah yes, here it is:
>         smbios_firmware="STRING"
>             Specify a path to a file that contains extra SMBIOS firmware
>             structures to pass in to a guest. The file can contain a set DMTF
>             predefined structures which will override the internal defaults.
>             Not all predefined structures can be overridden, only the
>             following types: 0, 1, 2, 3, 11, 22, 39. The file can also contain
>             any number of vendor defined SMBIOS structures (type 128 - 255).
>             Since SMBIOS structures do not present their overall size, each
>             entry in the file must be preceded by a 32b integer indicating the
>             size of the next structure.
>
> Did you not have a tool/library for helping to create such blobs
> somewhere? Or is my memory playing tricks?

Your memory is intact; I did provide a helper library. I posted it as a 
tarball since I could not figure out where such a thing might live in 
the xen tree. I posted it twice - the second time with some fixes:

http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html
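For anyone who just wants to hand-roll a small blob in the meantime, the
size-prefixed layout described in the quoted xl.cfg text can be produced with
a few lines of C. This is only a sketch, not the helper library above: the
type-11 handle and string values are made-up examples, and the 32-bit length
prefix is assumed to be little-endian (matching an x86 host).

```c
/* Sketch: emit one SMBIOS type 11 (OEM Strings) structure into buf,
 * preceded by the 32-bit length that the smbios_firmware file format
 * requires. Returns the total number of bytes written. */
#include <stdint.h>
#include <string.h>

size_t emit_type11(uint8_t *buf, const char *s1, const char *s2)
{
    uint8_t *p = buf + 4;           /* leave room for the length prefix */
    uint8_t *start = p;

    *p++ = 11;                      /* type: OEM Strings */
    *p++ = 5;                       /* formatted-area length (4-byte header + count) */
    *p++ = 0x00; *p++ = 0x0B;       /* handle (example value only) */
    *p++ = 2;                       /* number of strings in the string set */

    size_t l1 = strlen(s1) + 1;     /* strings are NUL-terminated... */
    memcpy(p, s1, l1); p += l1;
    size_t l2 = strlen(s2) + 1;
    memcpy(p, s2, l2); p += l2;
    *p++ = 0;                       /* ...and an extra NUL ends the set */

    uint32_t len = (uint32_t)(p - start);
    buf[0] = len & 0xff;            /* little-endian 32-bit length prefix */
    buf[1] = (len >> 8) & 0xff;
    buf[2] = (len >> 16) & 0xff;
    buf[3] = (len >> 24) & 0xff;
    return 4 + len;
}
```

Concatenating such length-prefixed records into a file and pointing
smbios_firmware= at it should match the format the docs describe.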


>
> Ian.
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:51:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:51:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gx7-0003Rt-AM; Thu, 09 Jan 2014 14:51:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1Gx5-0003Ri-Us
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:51:20 +0000
Received: from [193.109.254.147:14828] by server-11.bemta-14.messagelabs.com
	id 46/28-20576-767BEC25; Thu, 09 Jan 2014 14:51:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389279077!7530101!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27717 invoked from network); 9 Jan 2014 14:51:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:51:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89154852"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 14:51:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 09:51:16 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1Gx1-000370-TM;
	Thu, 09 Jan 2014 14:51:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1Gx1-0002MP-Lt;
	Thu, 09 Jan 2014 14:51:15 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.46947.403608.202162@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 14:51:15 +0000
To: Matthew Daley <mattd@bugfuzz.com>
In-Reply-To: <1385944047-29139-1-git-send-email-mattd@bugfuzz.com>
References: <CAD3Cand9Ot=zqf8VgMOjPfrd16FRr5TEHMehGtSArd7VqzLFcA@mail.gmail.com>
	<1385944047-29139-1-git-send-email-mattd@bugfuzz.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 04/13 v2] libxl: don't leak p in
	libxl__wait_for_backend
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matthew Daley writes ("[PATCH 04/13 v2] libxl: don't leak p in libxl__wait_for_backend"):
> Use libxl__xs_read_checked instead of xs_read. While at it, tidy up the
> function as well.

This was applied to staging, passed the tests, etc., and I also put it
on my backport list.

However it doesn't apply cleanly to 4.3 and the conflicts weren't
trivial.  Would anyone care to backport it, or to fix the same bug
another way?

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:53:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gz9-0003bo-20; Thu, 09 Jan 2014 14:53:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek@citrix.com>) id 1W1Gz6-0003bc-N9
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:53:25 +0000
Received: from [85.158.137.68:41538] by server-17.bemta-3.messagelabs.com id
	0C/2F-15965-3E7BEC25; Thu, 09 Jan 2014 14:53:23 +0000
X-Env-Sender: russell.pavlicek@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389279199!8152234!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9291 invoked from network); 9 Jan 2014 14:53:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:53:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208,217";a="89155313"
Received: from sjcpex01cl01.citrite.net ([10.216.14.143])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	09 Jan 2014 14:52:47 +0000
Received: from SJCPEX01CL03.citrite.net ([169.254.3.18]) by
	SJCPEX01CL01.citrite.net ([10.216.14.143]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 06:52:46 -0800
From: Russell Pavlicek <russell.pavlicek@citrix.com>
To: Maren Abatielos <abatielos@univention.de>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Xen Management Tools - new project to be listed
Thread-Index: AQHPDTrA0qt1YRM8SU6gsw+1kCsh3Jp8eJoA
Date: Thu, 9 Jan 2014 14:52:46 +0000
Message-ID: <55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
References: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
In-Reply-To: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.2.30]
MIME-Version: 1.0
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2373090515291070549=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2373090515291070549==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_"

--_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Maren,

Sorry for the confusion.  That’s the old system (we will need to mark it as such).

To list your project, product, or service, simply go to XenProject.org.  Register for an account, if you don’t have one already.  Click on Directory > Ecosystem.  Press the button labeled “Add your listing here”.  Fill out the form and submit.

Once your listing is approved (generally within a few hours), it will be listed in the XenProject.org Ecosystem Directory.  What’s more, you will have control over the listing content, so as your product changes or expands, you can modify your listing to keep it up to date.

If you have any questions or difficulty, please contact me.  I’d be happy to help!

Russ Pavlicek
Xen Project Evangelist, Citrix Systems
Home Office: +1-301-829-5327
Mobile: +1-240-397-0199
UK VoIP: +44 1223 852 894

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Maren Abatielos
Sent: Thursday, January 09, 2014 7:37 AM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen Management Tools - new project to be listed

Hello,

I would like to have our product UCS Virtual Machine Manager (UVMM) listed in the section "Management tools and interfaces" on the following page:

http://wiki.xen.org/wiki/Xen_Management_Tools

UVMM is a web-based virtualization management tool for Xen and KVM. It is included in our Linux server operating system Univention Corporate Server, as is Xen.

Could you please add it, using the following URL for the listing:

http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/

Thanks a lot,

Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de<mailto:abatielos@univention.de>
Tel. : +49 421 22232-68
Fax : +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax-No.: 71-597-02876

--_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_--


--===============2373090515291070549==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2373090515291070549==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 14:53:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Gz9-0003bo-20; Thu, 09 Jan 2014 14:53:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek@citrix.com>) id 1W1Gz6-0003bc-N9
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:53:25 +0000
Received: from [85.158.137.68:41538] by server-17.bemta-3.messagelabs.com id
	0C/2F-15965-3E7BEC25; Thu, 09 Jan 2014 14:53:23 +0000
X-Env-Sender: russell.pavlicek@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389279199!8152234!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9291 invoked from network); 9 Jan 2014 14:53:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:53:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208,217";a="89155313"
Received: from sjcpex01cl01.citrite.net ([10.216.14.143])
	by FTLPIPO02.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	09 Jan 2014 14:52:47 +0000
Received: from SJCPEX01CL03.citrite.net ([169.254.3.18]) by
	SJCPEX01CL01.citrite.net ([10.216.14.143]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 06:52:46 -0800
From: Russell Pavlicek <russell.pavlicek@citrix.com>
To: Maren Abatielos <abatielos@univention.de>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Xen Management Tools - new project to be listed
Thread-Index: AQHPDTrA0qt1YRM8SU6gsw+1kCsh3Jp8eJoA
Date: Thu, 9 Jan 2014 14:52:46 +0000
Message-ID: <55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
References: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
In-Reply-To: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.2.30]
MIME-Version: 1.0
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2373090515291070549=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2373090515291070549==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_"

--_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

TWFyZW4sDQoNClNvcnJ5IGZvciB0aGUgY29uZnVzaW9uLiAgVGhhdOKAmXMgdGhlIG9sZCBzeXN0
ZW0gKHdlIHdpbGwgbmVlZCB0byBtYXJrIGl0IGFzIHN1Y2gpLg0KDQpUbyBsaXN0IHlvdXIgcHJv
amVjdCwgcHJvZHVjdCwgb3Igc2VydmljZSwgc2ltcGx5IGdvIHRvIFhlblByb2plY3Qub3JnLiAg
UmVnaXN0ZXIgZm9yIGFuIGFjY291bnQsIGlmIHlvdSBkb27igJl0IGhhdmUgb25lIGFscmVhZHku
ICBDbGljayBvbiBEaXJlY3RvcnkgPiBFY29zeXN0ZW0uICBQcmVzcyB0aGUgYnV0dG9uIGxhYmVs
ZWQg4oCcQWRkIHlvdXIgbGlzdGluZyBoZXJl4oCdLiAgRmlsbCBvdXQgdGhlIGZvcm0gYW5kIHN1
Ym1pdC4NCg0KT25jZSB5b3VyIGxpc3RpbmcgaXMgYXBwcm92ZWQgKGdlbmVyYWxseSB3aXRoaW4g
YSBmZXcgaG91cnMpLCBpdCB3aWxsIGJlIGxpc3RlZCBpbiB0aGUgWGVuUHJvamVjdC5vcmcgRWNv
c3lzdGVtIERpcmVjdG9yeS4gIFdoYXTigJlzIG1vcmUsIHlvdSB3aWxsIGhhdmUgY29udHJvbCBv
dmVyIHRoZSBsaXN0aW5nIGNvbnRlbnQsIHNvIGFzIHlvdXIgcHJvZHVjdCBjaGFuZ2VzIG9yIGV4
cGFuZHMsIHlvdSBjYW4gbW9kaWZ5IHlvdXIgbGlzdGluZyB0byBrZWVwIGl0IHVwIHRvIGRhdGUu
DQoNCklmIHlvdSBoYXZlIGFueSBxdWVzdGlvbnMgb3IgZGlmZmljdWx0eSwgcGxlYXNlIGNvbnRh
Y3QgbWUuICBJ4oCZZCBiZSBoYXBweSB0byBoZWxwIQ0KDQpSdXNzIFBhdmxpY2VrDQpYZW4gUHJv
amVjdCBFdmFuZ2VsaXN0LCBDaXRyaXggU3lzdGVtcw0KSG9tZSBPZmZpY2U6ICsxLTMwMS04Mjkt
NTMyNw0KTW9iaWxlOiArMS0yNDAtMzk3LTAxOTkNClVLIFZvSVA6ICs0NCAxMjIzIDg1MiA4OTQN
Cg0KRnJvbTogeGVuLWRldmVsLWJvdW5jZXNAbGlzdHMueGVuLm9yZyBbbWFpbHRvOnhlbi1kZXZl
bC1ib3VuY2VzQGxpc3RzLnhlbi5vcmddIE9uIEJlaGFsZiBPZiBNYXJlbiBBYmF0aWVsb3MNClNl
bnQ6IFRodXJzZGF5LCBKYW51YXJ5IDA5LCAyMDE0IDc6MzcgQU0NClRvOiB4ZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZw0KU3ViamVjdDogW1hlbi1kZXZlbF0gWGVuIE1hbmFnZW1lbnQgVG9vbHMgLSBu
ZXcgcHJvamVjdCB0byBiZSBsaXN0ZWQNCg0KSGVsbG8sDQoNCkkgd291bGQgbGlrZSB0byBoYXZl
IG91ciBwcm9kdWN0IFVDUyBWaXJ0dWFsIE1hY2hpbmUgTWFuYWdlciAoVVZNTSkgdG8gYmUgbGlz
dGVkIGluIHRoZSBzZWN0aW9uICJNYW5hZ2VtZW50IHRvb2xzIGFuZCBpbnRlcmZhY2VzIiBvbiB0
aGUgZm9sbG93aW5nIHBhZ2U6DQoNCmh0dHA6Ly93aWtpLnhlbi5vcmcvd2lraS9YZW5fTWFuYWdl
bWVudF9Ub29scw0KDQpVVk1NIGlzIGEgd2ViLWJhc2VkIHZpcnR1YWxpemF0aW9uIG1hbmFnZW1l
bnQgdG9vbCBmb3IgWGVuIGFuZCBLVk0uIEl0IGlzIGluY2x1ZGVkIGluIG91ciBMaW51eCBzZXJ2
ZXIgb3BlcmF0aW5nIHN5c3RlbSBVbml2ZW50aW9uIENvcnBvcmF0ZSBTZXJ2ZXIsIHNvIGlzIFhl
bi4NCg0KQ291bGQgeW91IHBsZWFzZSBkbyB0aGF0IHVzaW5nIHRoZSBmb2xsb3dpbmcgVVJMIHRv
IGJlIGxpc3RlZDoNCg0KaHR0cDovL3d3dy51bml2ZW50aW9uLmRlL2VuL3Byb2R1Y3RzL3Vjcy91
Y3MtY29tcG9uZW50cy92aXJ0dWFsaXphdGlvbi91Y3MtdmlydHVhbC1tYWNoaW5lLW1hbmFnZXIv
DQoNClRoYW5rcyBhIGxvdCwNCg0KTWFyZW4gQWJhdGllbG9zDQoNCi0tLQ0KTWFya2V0aW5nDQoN
ClVuaXZlbnRpb24gR21iSA0KYmUgb3Blbi4NCk1hcnktU29tZXJ2aWxsZS1TdHIuMQ0KMjgzNTkg
QnJlbWVuDQoNCkUtTWFpbDogYWJhdGllbG9zQHVuaXZlbnRpb24uZGU8bWFpbHRvOmFiYXRpZWxv
c0B1bml2ZW50aW9uLmRlPg0KVGVsLiA6ICs0OSA0MjEgMjIyMzItNjgNCkZheCA6ICs0OSA0MjEg
MjIyMzItOTkNCg0KaHR0cHM6Ly93d3cudW5pdmVudGlvbi5kZQ0KaHR0cDovL2dwbHVzLnRvL1Vu
aXZlbnRpb24NCmh0dHA6Ly93d3cuZmFjZWJvb2suY29tL3VuaXZlbnRpb24NCmh0dHBzOi8vdHdp
dHRlci5jb20vdW5pdmVudGlvbg0KaHR0cDovL3d3dy55b3V0dWJlLmNvbS91bml2ZW50aW9udmlk
ZW8NCg0KTWFuYWdpbmcgZGlyZWN0b3I6IFBldGVyIEguIEdhbnRlbg0KSFJCIDIwNzU1IExvY2Fs
IENvdXJ0IEJyZW1lbg0KVGF4LU5vLjogNzEtNTk3LTAyODc2DQo=

--_000_55E78A57290FB64FA0D3CF672F9F3DA214487ESJCPEX01CL03citri_--


--===============2373090515291070549==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2373090515291070549==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 14:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1GzD-0003cd-F6; Thu, 09 Jan 2014 14:53:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1GzC-0003cA-I1
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 14:53:30 +0000
Received: from [193.109.254.147:45275] by server-1.bemta-14.messagelabs.com id
	02/D7-15600-9E7BEC25; Thu, 09 Jan 2014 14:53:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389279207!9845952!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23393 invoked from network); 9 Jan 2014 14:53:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:53:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89155471"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 14:53:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 09:53:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1Gz4-00037Y-Jl;
	Thu, 09 Jan 2014 14:53:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1Gz4-0002MZ-Cn;
	Thu, 09 Jan 2014 14:53:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.47074.134445.478548@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 14:53:22 +0000
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xenproject.org>, 
	<patches@linaro.org>, <ian.campbell@citrix.com>
In-Reply-To: <21171.10493.795698.466565@mariner.uk.xensource.com>
References: <1387471503-20794-1-git-send-email-julien.grall@linaro.org>
	<21171.9437.668231.442440@mariner.uk.xensource.com>
	<52B32737.4000303@linaro.org>
	<21171.10493.795698.466565@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] tools/libx: xl uptime doesn't require
	argument
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [PATCH] tools/libx: xl uptime doesn't require argument"):
> Julien Grall writes ("Re: [PATCH] tools/libx: xl uptime doesn't require argument"):
> > No, the last argument of SWITCH_FOREACH_OPT indicates how many argument
> > is required.
> > 
> > With this simple patch I can do:
> > 42sh> xl uptime
> > Name                                ID Uptime
> > Domain-0                             0 12:25:52
> 
> Oh wait I see the "nb_domains==0" case in print_uptime.  Foolish me
> for expecting xl's logic to have any kind of rationale.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> And added to my backport list.

This applied cleanly to 4.3.

But it didn't apply cleanly to 4.2 and the code there is somewhat
different so I wasn't sure just fixing up the textual changes would be
sufficient.  I would be inclined not to backport it to 4.2 unless
someone thinks this fix is important and would like to prepare a
tested backport.

Thanks,
Ian.


From xen-devel-bounces@lists.xen.org Thu Jan 09 14:56:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H2D-0003tc-5s; Thu, 09 Jan 2014 14:56:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1H2C-0003tR-DE
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:56:36 +0000
Received: from [193.109.254.147:31195] by server-14.bemta-14.messagelabs.com
	id 67/8F-12628-3A8BEC25; Thu, 09 Jan 2014 14:56:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389279393!9876541!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7817 invoked from network); 9 Jan 2014 14:56:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:56:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91298819"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:56:14 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:56:13 -0500
Message-ID: <1389279372.19805.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Date: Thu, 9 Jan 2014 14:56:12 +0000
In-Reply-To: <52CEB663.4070609@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 09:46 -0500, Ross Philipson wrote:
> On 01/09/2014 05:06 AM, Ian Campbell wrote:

> > Did you not have a tool/library for helping to create such blobs
> > somewhere? Or is my memory playing tricks?
> 
> Your memory is intact; I did provide a helper library. I posted it as a 
> tarball since I could not figure out where such a thing might live in 
> the xen tree. I posted it twice - the second time with some fixes:
> 
> http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html

Thanks!

I went looking for a suitable wiki page to link this from, but didn't
find one... Oh well, maybe next time I'll have enough memory to look in
the archives...

Ian.




From xen-devel-bounces@lists.xen.org Thu Jan 09 14:56:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H2G-0003uF-LD; Thu, 09 Jan 2014 14:56:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1H2E-0003tr-K6
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 14:56:38 +0000
Received: from [85.158.143.35:27923] by server-3.bemta-4.messagelabs.com id
	12/EF-32360-5A8BEC25; Thu, 09 Jan 2014 14:56:37 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389279396!10719797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15908 invoked from network); 9 Jan 2014 14:56:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:56:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91298867"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:56:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 09:56:24 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1H20-0006Hs-V9;
	Thu, 09 Jan 2014 14:56:24 +0000
Date: Thu, 9 Jan 2014 14:56:24 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140109145624.GD1696@perard.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108194451.GA15956@phenom.dumpdata.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > > [...]
> > > > > > Those Xen report something like:
> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > 131328
> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > memflags=0 (62 of 64)
> > > > > > 
> > > > > > ?
> > > > > > 
> > > > > > (I tryied to reproduce the issue by simply add many emulated e1000 in
> > > > > > QEMU :) )
> > > > > > 
> > 
> > > -bash-4.1# lspci -s 01:00.0 -v 
> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > >         Flags: fast devsel, IRQ 16
> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > >         I/O ports at e020 [disabled] [size=32]
> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > 
> > BTW, I think this is the issue, the Expansion ROM. qemu-xen will
> > allocate memory for it. Will have maybe have to find another way.
> > qemu-trad those not seems to allocate memory, but I haven't been very
> > far in trying to check that.
> 
> And indeed that is the case. The "Fix" below fixes it.
> 
> 
> Based on that and this guest config:
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> memory = 2048
> boot="d"
> maxvcpus=32
> vcpus=1
> serial='pty'
> vnclisten="0.0.0.0"
> name="latest"
> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> pci = ["01:00.0"]
> 
> I can boot the guest.

And can you access the ROM from the guest?


Also, I have another patch that initializes the PCI ROM BAR like any
other BAR. In this case, if QEMU is involved in an access to the ROM, it
will print an error, as is the case for the other BARs.

I tried to test it, but only with an embedded VGA card. When I dumped
the ROM, I got the same one as the emulated card instead of the ROM from
the device.


diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 6dd7a68..2bbdb6d 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
 
         s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
 
-        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
-                                      "xen-pci-pt-rom", d->rom.size);
+        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
+                              "xen-pci-pt-rom", d->rom.size);
         pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
                          &s->rom);
 

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:56:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H2D-0003tc-5s; Thu, 09 Jan 2014 14:56:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1H2C-0003tR-DE
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 14:56:36 +0000
Received: from [193.109.254.147:31195] by server-14.bemta-14.messagelabs.com
	id 67/8F-12628-3A8BEC25; Thu, 09 Jan 2014 14:56:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389279393!9876541!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7817 invoked from network); 9 Jan 2014 14:56:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:56:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91298819"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:56:14 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	09:56:13 -0500
Message-ID: <1389279372.19805.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Date: Thu, 9 Jan 2014 14:56:12 +0000
In-Reply-To: <52CEB663.4070609@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 09:46 -0500, Ross Philipson wrote:
> On 01/09/2014 05:06 AM, Ian Campbell wrote:

> > Did you not have a tool/library for helping to create such blobs
> > somewhere? Or is my memory playing tricks?
> 
> Your memory is intact; I did provide a helper library. I posted it as a 
> tarball since I could not figure out where such a thing might live in 
> the xen tree. I posted it twice - the second time with some fixes:
> 
> http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html

Thanks!

I went looking for a suitable wiki page to link this from, but didn't
find one... Oh well, maybe next time I'll have enough memory to look in
the archives...

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 14:56:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 14:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H2G-0003uF-LD; Thu, 09 Jan 2014 14:56:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1H2E-0003tr-K6
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 14:56:38 +0000
Received: from [85.158.143.35:27923] by server-3.bemta-4.messagelabs.com id
	12/EF-32360-5A8BEC25; Thu, 09 Jan 2014 14:56:37 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389279396!10719797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15908 invoked from network); 9 Jan 2014 14:56:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 14:56:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91298867"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 14:56:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 09:56:24 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1H20-0006Hs-V9;
	Thu, 09 Jan 2014 14:56:24 +0000
Date: Thu, 9 Jan 2014 14:56:24 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140109145624.GD1696@perard.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140108194451.GA15956@phenom.dumpdata.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > > [...]
> > > > > > Those Xen report something like:
> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > 131328
> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > memflags=0 (62 of 64)
> > > > > > 
> > > > > > ?
> > > > > > 
> > > > > > (I tried to reproduce the issue by simply adding many emulated
> > > > > > e1000 devices in QEMU :) )
> > > > > > 
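As a back-of-the-envelope illustration of the over-allocation message quoted
above (the numbers are taken from the Xen log and the lspci output in this
thread; the accounting itself is an assumption about qemu-xen backing the ROM
BAR with populated guest memory):

```python
# Hypothetical page accounting, not from the thread itself.
PAGE_SIZE = 4096

max_pages = 131328            # domain 46's limit, from the Xen log above
rom_bytes = 4 * 1024 * 1024   # 4M Expansion ROM, from the lspci output
rom_pages = rom_bytes // PAGE_SIZE

# If qemu-xen populates guest memory for the ROM BAR on top of the normal
# allocation, the very first extra page already trips the limit, matching
# "Over-allocation for domain 46: 131329 > 131328" in the log.
print(max_pages, "page limit;", rom_pages, "extra pages wanted for the ROM")
```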
> > 
> > > -bash-4.1# lspci -s 01:00.0 -v 
> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > >         Flags: fast devsel, IRQ 16
> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > >         I/O ports at e020 [disabled] [size=32]
> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > 
> > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> > allocate memory for it. We will maybe have to find another way;
> > qemu-trad does not seem to allocate memory, but I haven't gotten very
> > far in trying to check that.
> 
> And indeed that is the case. The "Fix" below fixes it.
> 
> 
> Based on that and this guest config:
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> memory = 2048
> boot="d"
> maxvcpus=32
> vcpus=1
> serial='pty'
> vnclisten="0.0.0.0"
> name="latest"
> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> pci = ["01:00.0"]
> 
> I can boot the guest.

And can you access the ROM from the guest?


Also, I have another patch; it initializes the PCI ROM BAR like any
other BAR. In this case, if qemu is involved in an access to the ROM, it
will print an error, as is the case for the other BARs.

I tried to test it, but it was with an embedded VGA card: when I dumped
the ROM, I got the same one as the emulated card instead of the ROM from
the device.


diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 6dd7a68..2bbdb6d 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
 
         s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
 
-        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
-                                      "xen-pci-pt-rom", d->rom.size);
+        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
+                              "xen-pci-pt-rom", d->rom.size);
         pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
                          &s->rom);
 

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:03:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H8g-0004q5-U5; Thu, 09 Jan 2014 15:03:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W1H8e-0004q0-Vn
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:03:17 +0000
Received: from [193.109.254.147:47134] by server-9.bemta-14.messagelabs.com id
	99/46-13957-43ABEC25; Thu, 09 Jan 2014 15:03:16 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389279794!7596478!1
X-Originating-IP: [209.85.215.45]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29917 invoked from network); 9 Jan 2014 15:03:15 -0000
Received: from mail-la0-f45.google.com (HELO mail-la0-f45.google.com)
	(209.85.215.45)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:03:15 -0000
Received: by mail-la0-f45.google.com with SMTP id b8so529666lan.18
	for <xen-devel@lists.xen.org>; Thu, 09 Jan 2014 07:03:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:date:message-id:cc:to:mime-version;
	bh=Xf1g+cU6KFLMEvrVQtRcoOaq3EqE1X6wRhfeWxPhST0=;
	b=KlcZxf+kUVHP+zgzvekFSGfuhPC4ucSlm01yNH5tLGB//U1kEHyPzV4tZTLe/dMlFG
	8aB46QHD/kmdhEVm+mmF4bZT+Ih+YtVcowgyDesVOSTG6EAuT03QXEOAsfRYfaK40pKF
	hQ8Cqu5YJZ6LDZaEDCxZscmGdgRj5iDZZcWQfCmd1rHr9NfeYa7fkpQzA3D3AjRw9LTc
	uZADDyLx7wPDQfl32TbVpgkMKtY1RiLE+XB1lKKW2LyTdna/VwABJgy2MhrLrLnqSrSQ
	U17LPBOIUZ+0QiuigIGOd+cvak2Xe0DZWZRrlkoAwYP6aj8IFilw3/My9wRQ9hIvlL4a
	60GQ==
X-Received: by 10.152.45.8 with SMTP id i8mr1503414lam.12.1389279794491;
	Thu, 09 Jan 2014 07:03:14 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id xl4sm1810017lac.9.2014.01.09.07.03.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 07:03:12 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 9 Jan 2014 19:03:10 +0400
Message-Id: <D0238B2F-D009-44FE-9577-368EAAD17999@gmail.com>
To: dilos-dev@lists.illumos.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] first port of xen-4.2 to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8909856274276204576=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8909856274276204576==
Content-Type: multipart/alternative; boundary="Apple-Mail=_B27E89CF-4D99-41F4-9CDE-C4ACAF60E7B0"


--Apple-Mail=_B27E89CF-4D99-41F4-9CDE-C4ACAF60E7B0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Hello All,

Let me introduce the status of the port of xen-4.2 to DilOS (an
illumos-based platform).

I have finished:

illumos code:
- updated the xen public headers in the dilos-illumos-gate tree
- updated some changes for domU (PV & HVM) - I have tested domU on
dilos-xen-3.4-dom0, and it works well
- updated some changes for dom0

userland xen-4.2:
- updated some changes for the build environment
- merged some changes to build xen.gz + xen-syms

Now I have booted dilos-xen-4.2-dom0, with some issues:
- no access to the local console, but access via SSH works
- very slow boot process - many timeout issues with pci-ide
- a problem with the FMA service
- a problem with the HAL service


If anyone else is interested in helping to resolve these issues and the
xen ports, you are welcome on the dilos-dev@ mailing list with questions,
and on FreeNode IRC: #dilos.

I can help with setting up a build environment and a test environment
based on VMware ESXi with output to a serial console.


What still needs to be done:

illumos:
- try to find and resolve the problems with the services and APIC IRQ
remapping (probably a problem with the services)
- look into the other problems

userland xen-4.2:
- fix the build problems with QEMU
- merge additional changes from xen-3.4
- try to identify the issues with APIC remapping here - probably
incorrect/mistaken merges

Resolving these issues will need work on both sides: the illumos and xen
sources.

I have limited time to work on this - I'll do what I can.
I can share my work and provide access to the sources.

If you have questions, please send email to the dilos-dev@ mailing list.

--
Best regards,
Igor Kozhukhov


--Apple-Mail=_B27E89CF-4D99-41F4-9CDE-C4ACAF60E7B0--


--===============8909856274276204576==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8909856274276204576==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 15:04:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1H9t-0004vv-JG; Thu, 09 Jan 2014 15:04:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1H9r-0004vp-WA
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:04:32 +0000
Received: from [85.158.139.211:16835] by server-9.bemta-5.messagelabs.com id
	DC/52-15098-F7ABEC25; Thu, 09 Jan 2014 15:04:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389279868!8779902!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2840 invoked from network); 9 Jan 2014 15:04:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:04:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91303409"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:04:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 10:04:27 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1H9m-0003BB-R8;
	Thu, 09 Jan 2014 15:04:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1H9m-0002Nz-KD;
	Thu, 09 Jan 2014 15:04:26 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.47738.357463.155514@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 15:04:26 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389265395.27473.69.camel@kazak.uk.xensource.com>
References: <1389263589-11955-1-git-send-email-stefan.bader@canonical.com>
	<1389265395.27473.69.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: jfehlig@suse.com, Stefan Bader <stefan.bader@canonical.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: Auto-assign NIC devids in
	initiate_domain_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create"):
> On Thu, 2014-01-09 at 11:33 +0100, Stefan Bader wrote:
> > From bafc8f62ee3e3175ec4d978bceba4b5f891a597d Mon Sep 17 00:00:00 2001
> > From: Stefan Bader <stefan.bader@canonical.com>
> > Date: Wed, 8 Jan 2014 18:26:59 +0100
> > Subject: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create
> > 
> > This will change initiate_domain_create to walk through NIC definitions
> > and automatically assign devids to those which have not assigned one.
> > The devids are needed later in domcreate_launch_dm (for HVM domains
> > using emulated NICs). The command string for starting the device-model
> > has those ids as part of its arguments.
> > Assignment of devids in the hotplug case is handled by libxl_device_nic_add
> > but that would be called too late in the startup case.
> > I also moved the call to libxl__device_nic_setdefault here as this seems
> > to be the only path leading there and avoids doing the loop a third time.
> > The two loops are trying to handle a case where the caller sets some devids
> > (not sure that should be valid) but leaves some unset.

Thanks.  Thanks also for the careful and comprehensive explanation.

> > Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

> I think from a release point of view we should take this since it is a
> bug fix to the API which at least libvirt has tripped over (although
> libvirt has worked around it, others may not have done so).
 
> Ian J: Does that make sense?

I agree.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:08:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HDY-00057e-Ax; Thu, 09 Jan 2014 15:08:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1HDX-00057W-7c
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:08:19 +0000
Received: from [193.109.254.147:62570] by server-13.bemta-14.messagelabs.com
	id D1/D1-19374-26BBEC25; Thu, 09 Jan 2014 15:08:18 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389280095!9879916!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17397 invoked from network); 9 Jan 2014 15:08:16 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:08:16 -0000
Received: by mail-ea0-f169.google.com with SMTP id l9so1282100eaj.0
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 07:08:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=PdPZYxVdLhWHxxTz4jidci9G+YG8VOIAOAdjh8wTxbo=;
	b=nKpImb/r2H4kc6x0Y3Sah0I6VMZW19wwDNC+dQsNvH4GOzcQGWfsJbajoJVR3fA93Z
	uDhZD9RR4eItKFbgjma3i0VkbDPKuqIvN5o4oLB7X7LTmB1g4kr1qXU5XpLRdtmYnfFr
	rAVTR27YM7h2RDSXCf+kClMWn2b6/0VnhiPxS4sbyVBss7VEtZHhQzLY4BZkpdJJAV9/
	3sd6xQK8vPD4WGLfpkCiaYdhyfvB/mZYt8mvKMEqVMv+mPUqwc64bUb1WhMZyVO2P7ch
	gXl2YfK7mMs7Jh5QzOk2tvxpY0H8mRG52MYmbSSBJthBuS/5ywiODde6bE9uAEtGArr9
	RBKA==
X-Gm-Message-State: ALoCoQn4b3VuWqTme76a/Qi9MyN9Fl7szdrII3/+HVVSOKTUu8jnPMHG925N35dkH7dMpwO1BqKF
X-Received: by 10.14.148.138 with SMTP id v10mr3869675eej.37.1389280095571;
	Thu, 09 Jan 2014 07:08:15 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id l4sm6298960een.13.2014.01.09.07.08.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 07:08:14 -0800 (PST)
Message-ID: <52CEBB5C.9030104@linaro.org>
Date: Thu, 09 Jan 2014 15:08:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389200759-22177-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401081711210.21510@kaball.uk.xensource.com>
	<1389265175.27473.67.camel@kazak.uk.xensource.com>
	<52CEA0B1.3020403@linaro.org>
	<1389276353.27473.126.camel@kazak.uk.xensource.com>
In-Reply-To: <1389276353.27473.126.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/09/2014 02:05 PM, Ian Campbell wrote:
> On Thu, 2014-01-09 at 13:14 +0000, Julien Grall wrote:
>>
>> On 01/09/2014 10:59 AM, Ian Campbell wrote:
>>> On Wed, 2014-01-08 at 17:13 +0000, Stefano Stabellini wrote:
>>>> On Wed, 8 Jan 2014, Julien Grall wrote:
>>>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>>>>> p2m and TLBs.
>>>>>
>>>>> Flush TLBs used by this domain on every PCPU.
>>>>>
>>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>>
>>>> The fix makes sense to me.
>>>
>>> Me too. Has anyone had a grep for similar issues?
>>
>> I think flush_xen_data_tlb and flush_xen_text_tlb should also be 
>> innershareable.
> 
> I think text_tlb is ok, it's only used at start of day. The usage in
> setup_pagetables and mmu_init_secondary_cpus are both explicitly local I
> think. Perhaps it should be renamed _local.

After looking at the code, both functions are only used to flush for the
current CPU. Suffixing flush_xen_data_tlb, flush_xen_text_tlb and
flush_xen_data_tlb_range_va with _local sounds like a good idea. Is it 4.4
material?

> The case in set_pte_flags_on_range via free_init_memory I'm less sure
> about, but I don't think stale tlb entries are actually an issue here,
> since there will certainly be one when the page becomes used again. But
> maybe it would be safest to make it global.

These translations will only be used in hypervisor mode. It should be
safe ...

We could flush the TLB on every CPU. That would mean creating both
flush_xen_text_tlb and flush_xen_text_tlb_local.

>> The former one is used by flush_tlb_mask.
> 
> Yes, the comment there is just wrong. I think this was my doing based on
> the confusion I mentioned before.
> 
> We need to be careful not to change the (un)map_domain_page since those
> are not shared between processors, I don't think this change would do
> that.

Right, this function doesn't need to be changed. We need to modify the
behaviour of flush_tlb_mask.

>>  But ... this function seems 
>> badly implement, it's weird to use flush_xen_data_tlb because we are 
>> mainly using flush_tlb_mask in common/grant-table.c. Any ideas?
> 
> Do you mean that this should be flushing the guest TLBs and not Xen's?
> That does seem right... We actually need to be flushing for all vmid's
> too I think -- for the alloc_heap_pages case.

After looking at the code, flush_tlb_mask is called in two specific places
for ARM:
   - alloc_heap_pages: the flush is only called if the newly allocated
page was used by a domain before. So we only need to flush non-secure
non-hyp TLB entries, inner-shareable.
For Xen 4.5, this flush can be removed for ARM; Xen already flushes the
TLB in create_p2m_entries.
   - common/grant_table.c: every call to flush_tlb_mask is made with the
current domain. A simple TLB flush by VMID, inner-shareable, should be
enough.

For Xen 4.4, I suggest making flush_tlb_mask flush the TLB non-secure
non-hyp inner-shareable.

We would need to rework this after 4.4 for optimisation.

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HYv-00079F-5M; Thu, 09 Jan 2014 15:30:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1HYs-000797-F5
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:30:22 +0000
Received: from [193.109.254.147:48834] by server-16.bemta-14.messagelabs.com
	id EF/E1-20600-D80CEC25; Thu, 09 Jan 2014 15:30:21 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389281419!9868159!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19884 invoked from network); 9 Jan 2014 15:30:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:30:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91316759"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:30:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 10:30:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1HYl-00070G-N4;
	Thu, 09 Jan 2014 15:30:15 +0000
Date: Thu, 9 Jan 2014 15:30:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140109153015.GF12164@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 12:10:11AM +0000, Zoltan Kiss wrote:
> This patch changes the grant copy on the TX patch to grant mapping
> 
> v2:
> - delete branch for handling fragmented packets fit PKT_PROT_LEN sized first
>   request
> - mark the effect of using ballooned pages in a comment
> - place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
>   before netif_receive_skb, and mark the importance of it
> - grab dealloc_lock before __napi_complete to avoid contention with the
>   callback's napi_schedule
> - handle fragmented packets where first request < PKT_PROT_LEN
> - fix up error path when checksum_setup failed
> - check before teardown for pending grants, and start complain if they are
>   there after 10 second
> 
> v3:
> - delete a surplus checking from tx_action
> - remove stray line
> - squash xenvif_idx_unmap changes into the first patch
> - init spinlocks
> - call map hypercall directly instead of gnttab_map_refs()

I suppose this is to avoid touching the m2p override as well, just as the
previous patch uses the unmap hypercall directly.

> - fix unmapping timeout in xenvif_free()
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   57 +++++++-
>  drivers/net/xen-netback/netback.c   |  251 ++++++++++++++---------------------
>  2 files changed, 153 insertions(+), 155 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 7170f97..3b2b249 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	BUG_ON(skb->dev != dev);
>  
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (vif->task == NULL ||
> +		vif->dealloc_task == NULL ||
> +		!xenvif_schedulable(vif))

Indentation.

>  		goto drop;
>  
>  	/* At best we'll need one slot for the header and one for each
> @@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	vif->pending_prod = MAX_PENDING_REQS;
>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>  		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> +	spin_lock_init(&vif->dealloc_lock);
> +	spin_lock_init(&vif->response_lock);
> +	/* If ballooning is disabled, this will consume real memory, so you
> +	 * better enable it. The long term solution would be to use just a
> +	 * bunch of valid page descriptors, without dependency on ballooning
> +	 */
> +	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +		vif->mmap_pages,
> +		false);

Ditto.

> +	if (err) {
> +		netdev_err(dev, "Could not reserve mmap_pages\n");
> +		return NULL;
> +	}
> +	for (i = 0; i < MAX_PENDING_REQS; i++) {
> +		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
> +			{ .callback = xenvif_zerocopy_callback,
> +			  .ctx = NULL,
> +			  .desc = i };
> +		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
> +	}
>  
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
> @@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err;
>  
>  	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&vif->dealloc_wq);
>  
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
> @@ -431,6 +452,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err_rx_unbind;
>  	}
>  
> +	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HYv-00079F-5M; Thu, 09 Jan 2014 15:30:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1HYs-000797-F5
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:30:22 +0000
Received: from [193.109.254.147:48834] by server-16.bemta-14.messagelabs.com
	id EF/E1-20600-D80CEC25; Thu, 09 Jan 2014 15:30:21 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389281419!9868159!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19884 invoked from network); 9 Jan 2014 15:30:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:30:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91316759"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:30:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 10:30:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1HYl-00070G-N4;
	Thu, 09 Jan 2014 15:30:15 +0000
Date: Thu, 9 Jan 2014 15:30:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140109153015.GF12164@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 12:10:11AM +0000, Zoltan Kiss wrote:
> This patch changes the grant copy on the TX path to grant mapping
> 
> v2:
> - delete branch for handling fragmented packets that fit the PKT_PROT_LEN
>   sized first request
> - mark the effect of using ballooned pages in a comment
> - place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
>   before netif_receive_skb, and mark the importance of it
> - grab dealloc_lock before __napi_complete to avoid contention with the
>   callback's napi_schedule
> - handle fragmented packets where first request < PKT_PROT_LEN
> - fix up error path when checksum_setup failed
> - check before teardown for pending grants, and start complaining if they
>   are still there after 10 seconds
> 
> v3:
> - delete a surplus checking from tx_action
> - remove stray line
> - squash xenvif_idx_unmap changes into the first patch
> - init spinlocks
> - call map hypercall directly instead of gnttab_map_refs()

I suppose this is to avoid touching the m2p override as well, just as the
previous patch uses the unmap hypercall directly.

> - fix unmapping timeout in xenvif_free()
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   57 +++++++-
>  drivers/net/xen-netback/netback.c   |  251 ++++++++++++++---------------------
>  2 files changed, 153 insertions(+), 155 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 7170f97..3b2b249 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	BUG_ON(skb->dev != dev);
>  
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (vif->task == NULL ||
> +		vif->dealloc_task == NULL ||
> +		!xenvif_schedulable(vif))

Indentation.

>  		goto drop;
>  
>  	/* At best we'll need one slot for the header and one for each
> @@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	vif->pending_prod = MAX_PENDING_REQS;
>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>  		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> +	spin_lock_init(&vif->dealloc_lock);
> +	spin_lock_init(&vif->response_lock);
> +	/* If ballooning is disabled, this will consume real memory, so you
> +	 * better enable it. The long term solution would be to use just a
> +	 * bunch of valid page descriptors, without dependency on ballooning
> +	 */
> +	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +		vif->mmap_pages,
> +		false);

Ditto.

> +	if (err) {
> +		netdev_err(dev, "Could not reserve mmap_pages\n");
> +		return NULL;
> +	}
> +	for (i = 0; i < MAX_PENDING_REQS; i++) {
> +		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
> +			{ .callback = xenvif_zerocopy_callback,
> +			  .ctx = NULL,
> +			  .desc = i };
> +		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
> +	}
>  
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
> @@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err;
>  
>  	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&vif->dealloc_wq);
>  
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
> @@ -431,6 +452,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err_rx_unbind;
>  	}
>  
> +	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
> +				   (void *)vif, "%s-dealloc", vif->dev->name);

Ditto.

> +	if (IS_ERR(vif->dealloc_task)) {
> +		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		err = PTR_ERR(vif->dealloc_task);
> +		goto err_rx_unbind;
> +	}
> +
>  	vif->task = task;
>  
>  	rtnl_lock();
[...]
>  
>  static int xenvif_tx_check_gop(struct xenvif *vif,
>  			       struct sk_buff *skb,
> -			       struct gnttab_copy **gopp)
> +			       struct gnttab_map_grant_ref **gopp)
>  {
> -	struct gnttab_copy *gop = *gopp;
> +	struct gnttab_map_grant_ref *gop = *gopp;
>  	u16 pending_idx = *((u16 *)skb->data);
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	struct pending_tx_info *tx_info;
> @@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	err = gop->status;
>  	if (unlikely(err))
>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +	else {
> +		if (vif->grant_tx_handle[pending_idx] !=
> +			NETBACK_INVALID_HANDLE) {
> +			netdev_err(vif->dev,
> +				"Stale mapped handle! pending_idx %x handle %x\n",
> +				pending_idx, vif->grant_tx_handle[pending_idx]);
> +			BUG();
> +		}
> +		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
> +			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));

What happens when you don't have this?

> +		vif->grant_tx_handle[pending_idx] = gop->handle;
> +	}
>  
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -933,18 +877,26 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		head = tx_info->head;
>  
[...]
>  		}
> @@ -1567,7 +1523,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
>  
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(vif,
> +			skb,
> +			skb_shinfo(skb)->destructor_arg ?
> +					pending_idx :
> +					INVALID_PENDING_IDX);
>  

Indentation.

>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
> @@ -1581,6 +1541,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		if (checksum_setup(vif, skb)) {
>  			netdev_dbg(vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
> +			if (skb_shinfo(skb)->destructor_arg)
> +				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;

Do you still care about setting the flag even if this skb is not going to
be delivered? If so, can you state the reason clearly, just like in the
following hunk?

>  			kfree_skb(skb);
>  			continue;
>  		}
> @@ -1606,6 +1568,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  
>  		work_done++;
>  
> +		/* Set this flag right before netif_receive_skb, otherwise
> +		 * someone might think this packet already left netback, and
> +		 * do a skb_copy_ubufs while we are still in control of the
> +		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
> +		 */
> +		if (skb_shinfo(skb)->destructor_arg)
> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +
>  		netif_receive_skb(skb);
>  	}
>  
> @@ -1715,7 +1685,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>  int xenvif_tx_action(struct xenvif *vif, int budget)
>  {
>  	unsigned nr_gops;
> -	int work_done;
> +	int work_done, ret;
>  
>  	if (unlikely(!tx_work_todo(vif)))
>  		return 0;
> @@ -1725,7 +1695,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
>  	if (nr_gops == 0)
>  		return 0;
>  
> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
> +			vif->tx_map_ops,
> +			nr_gops);

Why do you need to replace gnttab_batch_copy with a direct hypercall? In
the ideal situation gnttab_batch_copy should behave the same as calling
the hypercall directly, but it also handles GNTST_eagain for you.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:30:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HYn-00078v-DX; Thu, 09 Jan 2014 15:30:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1HYl-00078q-KH
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:30:15 +0000
Received: from [85.158.139.211:60796] by server-16.bemta-5.messagelabs.com id
	16/4C-11843-680CEC25; Thu, 09 Jan 2014 15:30:14 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389281412!8639005!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15114 invoked from network); 9 Jan 2014 15:30:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:30:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89173435"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 15:30:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 10:30:10 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1HYg-00070D-HQ;
	Thu, 09 Jan 2014 15:30:10 +0000
Date: Thu, 9 Jan 2014 15:30:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140109153010.GE12164@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
> This patch contains the new definitions necessary for grant mapping.
> 
> v2:
> - move unmapping to separate thread. The NAPI instance has to be scheduled
>   even from thread context, which can cause huge delays
> - that causes unfortunately bigger struct xenvif
> - store grant handle after checking validity
> 
> v3:
> - fix comment in xenvif_tx_dealloc_action()
> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>   unnecessary m2p_override. Also remove pages_to_[un]map members

Is it worthwhile to have another function, gnttab_unmap_refs_no_m2p_override,
in the Xen core driver, or just to add a parameter to control whether we need
to touch m2p_override? I *think* it would benefit the block driver as well.

(CC Roger and David for input)

> - BUG() if grant_tx_handle corrupted
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> 
> ---
[...]
>  
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index addfe1d1..7c241f9 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -771,6 +771,19 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	return page;
>  }
>  
> +static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
> +	       struct xen_netif_tx_request *txp,
> +	       struct gnttab_map_grant_ref *gop)

Indentation.

> +{
> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
> +			  GNTMAP_host_map | GNTMAP_readonly,
> +			  txp->gref, vif->domid);
> +
> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
> +	       sizeof(*txp));
> +
> +}
> +
[...]
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
> +{
> +	unsigned long flags;
> +	pending_ring_idx_t index;
> +	u16 pending_idx = ubuf->desc;
> +	struct pending_tx_info *temp =
> +		container_of(ubuf, struct pending_tx_info, callback_struct);
> +	struct xenvif *vif =
> +		container_of(temp - pending_idx, struct xenvif,
> +			pending_tx_info[0]);

Indentation.

> +
> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
> +	do {
> +		pending_idx = ubuf->desc;
> +		ubuf = (struct ubuf_info *) ubuf->ctx;
> +		index = pending_index(vif->dealloc_prod);
> +		vif->dealloc_ring[index] = pending_idx;
> +		/* Sync with xenvif_tx_action_dealloc:

xenvif_tx_dealloc_action I suppose.

> +		 * insert idx then incr producer.
> +		 */
> +		smp_wmb();
> +		vif->dealloc_prod++;
> +	} while (ubuf);
> +	wake_up(&vif->dealloc_wq);
> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
> +}
> +
> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> +{
> +	struct gnttab_unmap_grant_ref *gop;
> +	pending_ring_idx_t dc, dp;
> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
> +	unsigned int i = 0;
> +
> +	dc = vif->dealloc_cons;
> +	gop = vif->tx_unmap_ops;
> +
> +	/* Free up any grants we have finished using */
> +	do {
> +		dp = vif->dealloc_prod;
> +
> +		/* Ensure we see all indices enqueued by all
> +		 * xenvif_zerocopy_callback().
> +		 */
> +		smp_rmb();
> +
> +		while (dc != dp) {
> +			pending_idx =
> +				vif->dealloc_ring[pending_index(dc++)];
> +
> +			/* Already unmapped? */
> +			if (vif->grant_tx_handle[pending_idx] ==
> +				NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					"Trying to unmap invalid handle! "
> +					"pending_idx: %x\n", pending_idx);
> +				continue;

You seemed to miss the BUG_ON we discussed?

See thread starting <52AF1A84.3090304@citrix.com>.

Wei.

> +			}
> +
> +			pending_idx_release[gop-vif->tx_unmap_ops] =
> +				pending_idx;
> +			gnttab_set_unmap_op(gop,
> +					idx_to_kaddr(vif, pending_idx),
> +					GNTMAP_host_map,
> +					vif->grant_tx_handle[pending_idx]);
> +			vif->grant_tx_handle[pending_idx] =
> +				NETBACK_INVALID_HANDLE;
> +			++gop;
> +		}
> +


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:36:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hf6-0007Qe-1W; Thu, 09 Jan 2014 15:36:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W1Hf4-0007QY-8a
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:36:46 +0000
Received: from [85.158.137.68:59477] by server-1.bemta-3.messagelabs.com id
	66/1F-29598-D02CEC25; Thu, 09 Jan 2014 15:36:45 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389281803!8200854!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28418 invoked from network); 9 Jan 2014 15:36:43 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 15:36:43 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 34FC88185E;
	Thu,  9 Jan 2014 17:36:41 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id EB49A36C01F; Thu,  9 Jan 2014 17:36:40 +0200 (EET)
Date: Thu, 9 Jan 2014 17:36:40 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <20140109153640.GC2924@reaktio.net>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140109145624.GD1696@perard.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 02:56:24PM +0000, Anthony PERARD wrote:
> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> > > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > > > [...]
> > > > > > > Those Xen report something like:
> > > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > > 131328
> > > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > > memflags=0 (62 of 64)
> > > > > > > 
> > > > > > > ?
> > > > > > > 
> > > > > > > (I tryied to reproduce the issue by simply add many emulated e1000 in
> > > > > > > QEMU :) )
> > > > > > > 
> > > 
> > > > -bash-4.1# lspci -s 01:00.0 -v 
> > > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > > >         Flags: fast devsel, IRQ 16
> > > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > > >         I/O ports at e020 [disabled] [size=32]
> > > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > > 
> > > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> > > allocate memory for it. We may have to find another way.
> > > qemu-trad does not seem to allocate memory, but I haven't gone very
> > > far in trying to check that.
> > 
> > And indeed that is the case. The "Fix" below fixes it.
> > 
> > 
> > Based on that and this guest config:
> > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> > memory = 2048
> > boot="d"
> > maxvcpus=32
> > vcpus=1
> > serial='pty'
> > vnclisten="0.0.0.0"
> > name="latest"
> > vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> > pci = ["01:00.0"]
> > 
> > I can boot the guest.
> 
> And can you access the ROM from the guest?
> 
> 
> Also, I have another patch; it initializes the PCI ROM BAR like any
> other BAR. In that case, if QEMU is involved in an access to the ROM, it
> will print an error, as is the case for the other BARs.
> 
> I tried to test it, but it was with an embedded VGA card. When I dump
> the ROM, I got the same one as the emulated card instead of the ROM from
> the device.
> 

This issue has been reported multiple times on the list, and discussed in
other threads over the last couple of months, mostly related to GPU
passthrough.

I think some clues were found recently about why you get the emulated ROM
instead of the actual device ROM. Sorry that I don't have the link available
right now.

-- Pasi

> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6dd7a68..2bbdb6d 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>  
>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>  
> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> -                                      "xen-pci-pt-rom", d->rom.size);
> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> +                              "xen-pci-pt-rom", d->rom.size);
>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>                           &s->rom);
>  
> 
> -- 
> Anthony PERARD
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HhM-00080x-RG; Thu, 09 Jan 2014 15:39:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1HhL-0007zU-Of
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 15:39:08 +0000
Received: from [85.158.143.35:64721] by server-3.bemta-4.messagelabs.com id
	2A/1D-32360-B92CEC25; Thu, 09 Jan 2014 15:39:07 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389281944!10745705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21219 invoked from network); 9 Jan 2014 15:39:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:39:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89177121"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 15:39:03 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 10:39:03 -0500
From: Simon Graham <simon.graham@citrix.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Thread-Topic: Possible issue with x86_emulate when writing results back to
	memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTg==
Date: Thu, 9 Jan 2014 15:39:02 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
Content-Type: multipart/mixed;
	boundary="_002_31EF1F85386F3941A65C4C158E12835D195E4042FTLPEX01CL03cit_"
MIME-Version: 1.0
X-DLP: MIA1
Subject: [Xen-devel] Possible issue with x86_emulate when writing results
	back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_31EF1F85386F3941A65C4C158E12835D195E4042FTLPEX01CL03cit_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

We've been seeing very infrequent crashes in Windows VMs for a while where
it appears that the top byte in a longword that is supposed to hold a
pointer is being set to zero - for example:

BUG CHECK 0000000A (00B49A40 00000002 00000001 8261092E)

The first parameter to KeBugCheck is the faulting address - 00B49A40 in
this case.

When we look at the dump, we are in a routine releasing a queued spinlock,
and the correct address that should have been used was 0xA3B49A40; indeed,
the memory contents in the Windows dump have this value. Looking around
some more, we see that the failing processor is executing the code to
release a queued spinlock while another processor is executing the code to
acquire the same queued spinlock, and the latter has recently written the
0xA3B49A40 value to the location from which the failing instruction stream
read it.

If we look at the disassembly for the two code paths, the writing code does:

	mov dword ptr [edx],eax

and the reading code does the following to read this same value:

	mov ebx,dword ptr [eax]

On a hunch that this might be a problem with the x86_emulate code, I took a
look at how the mov instruction would be emulated - in both cases where
emulation can be done (hvm/emulate.c and mm/shadow/multi.c), the routines
that write instruction results back to memory use memcpy() to actually copy
the data. Looking at the implementation of memcpy() in Xen, I see that, in
a 64-bit build such as ours, it uses 'rep movsq' to move the data in
quadwords and then 'rep movsb' to move the last 1-7 bytes -- so the
instructions above will, I think, always use byte instructions for the four
bytes.

Now, according to the x86 architecture, 32-bit movs are supposed to be
atomic, but based on the above they will not be, and I am speculating that
this is the cause of our occasional crash - the code path unlocking the
spinlock on the other processor sees a partially written value in memory.
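
The approach taken in the attached patch - dispatching on the copy length so
that a 2- or 4-byte transfer becomes a single store instead of a byte-wise
memcpy() tail - can be sketched roughly as below. The helper name is
illustrative only; the real code is in the base64 attachment at the end of
this message.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Width-matched copy: a 2- or 4-byte transfer is performed as a single
 * store, which x86 executes atomically for naturally aligned addresses,
 * so a concurrent reader never observes a torn value.  Anything else
 * falls back to plain memcpy(); in a 64-bit Xen build that already moves
 * 8-byte chunks with 'rep movsq'.
 */
static inline void atomic_sized_copy(void *to, const void *from, size_t count)
{
    if (count == sizeof(uint32_t))
        *(uint32_t *)to = *(const uint32_t *)from;
    else if (count == sizeof(uint16_t))
        *(uint16_t *)to = *(const uint16_t *)from;
    else
        memcpy(to, from, count);
}
```

Every caller pays one length comparison, but 16- and 32-bit emulated writes
become single stores rather than byte moves.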

Does this seem a reasonable explanation of the issue?

On the assumption that it is correct, I developed the attached patch
(against 4.3.1), which updates all the code paths used to read and write
back the results of instruction emulation to use a simple assignment when
the length is 2 or 4 bytes -- this doesn't fix the general case where the
length is > 8, but it does handle emulation of MOV instructions.
Unfortunately, emulation in the HVM code goes through a generic routine
for copying memory to the guest, so every place that guest memory is read
or written pays the penalty of the extra length check - I'm not sure
whether that is terrible or not. Since making this change we have not seen
a single instance of the crash - but it's only been a month!

The attached patch is for discussion purposes only - if it is deemed
acceptable I'll resubmit a proper patch request against unstable.

Simon Graham
Citrix Systems, Inc

--_002_31EF1F85386F3941A65C4C158E12835D195E4042FTLPEX01CL03cit_
Content-Type: application/octet-stream; name="fix-memcpy-in-x86-emulate"
Content-Description: fix-memcpy-in-x86-emulate
Content-Disposition: attachment; filename="fix-memcpy-in-x86-emulate";
	size=2838; creation-date="Thu, 09 Jan 2014 15:11:42 GMT";
	modification-date="Thu, 09 Jan 2014 14:58:24 GMT"
Content-Transfer-Encoding: base64

ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9odm0vaHZtLmMgYi94ZW4vYXJjaC94ODYvaHZtL2h2
bS5jCmluZGV4IDJkZDliN2UuLjJhZDYwM2UgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9odm0v
aHZtLmMKKysrIGIveGVuL2FyY2gveDg2L2h2bS9odm0uYwpAQCAtMjYwMiw2ICsyNjAyLDI2IEBA
IHZvaWQgaHZtX3Rhc2tfc3dpdGNoKAogICAgIGh2bV91bm1hcF9lbnRyeShucHRzc19kZXNjKTsK
IH0KIAorLyoKKyAqIFJvdXRpbmUgdG8gbWFrZSBfX2h2bV9jb3B5IGFwcHJvcHJpYXRlIHRvIHVz
ZSBmb3IgY29weWluZyB0aGUKKyAqIHJlc3VsdHMgb2YgaW5zdHJ1Y3Rpb24gZW11bGF0aW9uIGJh
Y2sgdG8gZ3Vlc3QgbWVtb3J5IC0gdGhlc2UKKyAqIHR5cGljYWxseSByZXF1aXJlIDY0LWJpdCwg
MzItYml0IGFuZCAxNi1iaXQgd3JpdGVzIHRvIGJlIGF0b21pYworICogd2hlcmVhcyBtZW1jcHkg
aXMgb25seSBhdG9taWMgZm9yIDY0LWJpdCB3cml0ZXMuIFRoaXMgaXMgc3RpbGwKKyAqIG5vdCAx
MDAlIGNvcnJlY3Qgc2luY2UgY29waWVzIGxhcmdlciB0aGFuIDY0LWJpdHMgd2lsbCBub3QgYmUK
KyAqIGF0b21pYyBmb3IgdGhlIGxhc3QgMi02IGJ5dGVzIGJ1dCBzaG91bGQgYmUgZ29vZCBlbm91
Z2ggZm9yCisgKiBpbnN0cnVjdGlvbiBlbXVsYXRpb24KKyAqLworc3RhdGljIGlubGluZSB2b2lk
IF9faHZtX2F0b21pY19jb3B5KAorICAgIHZvaWQgKnRvLCB2b2lkICpmcm9tLCBpbnQgY291bnQp
Cit7CisgICAgaWYgKGNvdW50ID09IHNpemVvZih1aW50MzJfdCkpCisgICAgICAgICoodWludDMy
X3QgKil0byA9ICoodWludDMyX3QgKilmcm9tOworICAgIGVsc2UgaWYgKGNvdW50ID09IHNpemVv
Zih1aW50MTZfdCkpCisgICAgICAgICoodWludDE2X3QgKil0byA9ICoodWludDE2X3QgKilmcm9t
OworICAgIGVsc2UKKyAgICAgICAgbWVtY3B5KHRvLCBmcm9tLCBjb3VudCk7Cit9CisKICNkZWZp
bmUgSFZNQ09QWV9mcm9tX2d1ZXN0ICgwdTw8MCkKICNkZWZpbmUgSFZNQ09QWV90b19ndWVzdCAg
ICgxdTw8MCkKICNkZWZpbmUgSFZNQ09QWV9ub19mYXVsdCAgICgwdTw8MSkKQEAgLTI3MDEsNyAr
MjcyMSw3IEBAIHN0YXRpYyBlbnVtIGh2bV9jb3B5X3Jlc3VsdCBfX2h2bV9jb3B5KAogICAgICAg
ICAgICAgfQogICAgICAgICAgICAgZWxzZQogICAgICAgICAgICAgewotICAgICAgICAgICAgICAg
IG1lbWNweShwLCBidWYsIGNvdW50KTsKKyAgICAgICAgICAgICAgICBfX2h2bV9hdG9taWNfY29w
eShwLCBidWYsIGNvdW50KTsKICAgICAgICAgICAgICAgICBwYWdpbmdfbWFya19kaXJ0eShjdXJy
LT5kb21haW4sIHBhZ2VfdG9fbWZuKHBhZ2UpKTsKICAgICAgICAgICAgIH0KICAgICAgICAgfQpk
aWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tL3NoYWRvdy9tdWx0aS5jIGIveGVuL2FyY2gveDg2
L21tL3NoYWRvdy9tdWx0aS5jCmluZGV4IDNmZWQwYjYuLjVlMGRhODIgMTAwNjQ0Ci0tLSBhL3hl
bi9hcmNoL3g4Ni9tbS9zaGFkb3cvbXVsdGkuYworKysgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93
L211bHRpLmMKQEAgLTQ3NjIsNiArNDc2MiwyNiBAQCBzdGF0aWMgdm9pZCBlbXVsYXRlX3VubWFw
X2Rlc3Qoc3RydWN0IHZjcHUgKnYsCiAgICAgYXRvbWljX2luYygmdi0+ZG9tYWluLT5hcmNoLnBh
Z2luZy5zaGFkb3cuZ3RhYmxlX2RpcnR5X3ZlcnNpb24pOwogfQogCisvKgorICogUm91dGluZSB0
byBtYWtlIHNoX3g4Nl9lbXVsYXRlX3dyaXRlIGFwcHJvcHJpYXRlIHRvIHVzZSBmb3IgY29weWlu
ZyB0aGUKKyAqIHJlc3VsdHMgb2YgaW5zdHJ1Y3Rpb24gZW11bGF0aW9uIGJhY2sgdG8gZ3Vlc3Qg
bWVtb3J5IC0gdGhlc2UKKyAqIHR5cGljYWxseSByZXF1aXJlIDY0LWJpdCwgMzItYml0IGFuZCAx
Ni1iaXQgd3JpdGVzIHRvIGJlIGF0b21pYworICogd2hlcmVhcyBtZW1jcHkgaXMgb25seSBhdG9t
aWMgZm9yIDY0LWJpdCB3cml0ZXMuIFRoaXMgaXMgc3RpbGwKKyAqIG5vdCAxMDAlIGNvcnJlY3Qg
c2luY2UgY29waWVzIGxhcmdlciB0aGFuIDY0LWJpdHMgd2lsbCBub3QgYmUKKyAqIGF0b21pYyBm
b3IgdGhlIGxhc3QgMi02IGJ5dGVzIGJ1dCBzaG91bGQgYmUgZ29vZCBlbm91Z2ggZm9yCisgKiBp
bnN0cnVjdGlvbiBlbXVsYXRpb24KKyAqLworc3RhdGljIGlubGluZSB2b2lkIF9fc2hfYXRvbWlj
X3dyaXRlKAorICAgIHZvaWQgKnRvLCB2b2lkICpmcm9tLCBpbnQgY291bnQpCit7CisgICAgaWYg
KGNvdW50ID09IHNpemVvZih1aW50MzJfdCkpCisgICAgICAgICoodWludDMyX3QgKil0byA9ICoo
dWludDMyX3QgKilmcm9tOworICAgIGVsc2UgaWYgKGNvdW50ID09IHNpemVvZih1aW50MTZfdCkp
CisgICAgICAgICoodWludDE2X3QgKil0byA9ICoodWludDE2X3QgKilmcm9tOworICAgIGVsc2UK
KyAgICAgICAgbWVtY3B5KHRvLCBmcm9tLCBjb3VudCk7Cit9CisKIHN0YXRpYyBpbnQKIHNoX3g4
Nl9lbXVsYXRlX3dyaXRlKHN0cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBsb25nIHZhZGRyLCB2b2lk
ICpzcmMsCiAgICAgICAgICAgICAgICAgICAgICB1MzIgYnl0ZXMsIHN0cnVjdCBzaF9lbXVsYXRl
X2N0eHQgKnNoX2N0eHQpCkBAIC00Nzc3LDcgKzQ3OTcsNyBAQCBzaF94ODZfZW11bGF0ZV93cml0
ZShzdHJ1Y3QgdmNwdSAqdiwgdW5zaWduZWQgbG9uZyB2YWRkciwgdm9pZCAqc3JjLAogICAgICAg
ICByZXR1cm4gKGxvbmcpYWRkcjsKIAogICAgIHBhZ2luZ19sb2NrKHYtPmRvbWFpbik7Ci0gICAg
bWVtY3B5KGFkZHIsIHNyYywgYnl0ZXMpOworICAgIF9fc2hfYXRvbWljX3dyaXRlKGFkZHIsIHNy
YywgYnl0ZXMpOwogCiAgICAgaWYgKCB0Yl9pbml0X2RvbmUgKQogICAgIHsK

--_002_31EF1F85386F3941A65C4C158E12835D195E4042FTLPEX01CL03cit_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_31EF1F85386F3941A65C4C158E12835D195E4042FTLPEX01CL03cit_--


From xen-devel-bounces@lists.xen.org Thu Jan 09 15:42:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hks-0008IM-Vx; Thu, 09 Jan 2014 15:42:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1Hkr-0008IH-V7
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:42:46 +0000
Received: from [193.109.254.147:26025] by server-9.bemta-14.messagelabs.com id
	20/18-13957-573CEC25; Thu, 09 Jan 2014 15:42:45 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389282162!7606714!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23633 invoked from network); 9 Jan 2014 15:42:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:42:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91322742"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:42:42 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:42:41 -0500
Message-ID: <52CEC370.10503@citrix.com>
Date: Thu, 9 Jan 2014 15:42:40 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
In-Reply-To: <20140109153010.GE12164@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 15:30, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
>> This patch contains the new definitions necessary for grant mapping.
>>
>> v2:
>> - move unmapping to a separate thread. The NAPI instance has to be scheduled
>>   even from thread context, which can cause huge delays
>> - that unfortunately causes a bigger struct xenvif
>> - store grant handle after checking validity
>>
>> v3:
>> - fix comment in xenvif_tx_dealloc_action()
>> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>>   unnecessary m2p_override. Also remove pages_to_[un]map members
> 
> Is it worthwhile to have another function,
> gnttab_unmap_refs_no_m2p_override, in the Xen core driver, or just to add a
> parameter to control whether we need to touch m2p_override? I *think* it
> would benefit the block driver as well?

The add_m2p_override and remove_m2p_override calls should be moved into the
gntdev device, as that should be the only user.

David


From xen-devel-bounces@lists.xen.org Thu Jan 09 15:43:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hln-0008N7-G0; Thu, 09 Jan 2014 15:43:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W1Hlm-0008Mz-CQ
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:43:42 +0000
Received: from [85.158.143.35:51615] by server-3.bemta-4.messagelabs.com id
	59/E4-32360-DA3CEC25; Thu, 09 Jan 2014 15:43:41 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389282220!10581589!1
X-Originating-IP: [62.142.5.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA3ID0+IDk5ODc1\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13345 invoked from network); 9 Jan 2014 15:43:41 -0000
Received: from emh01.mail.saunalahti.fi (HELO emh01.mail.saunalahti.fi)
	(62.142.5.107)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 15:43:41 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh01.mail.saunalahti.fi (Postfix) with ESMTP id C503F9004B;
	Thu,  9 Jan 2014 17:43:38 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 8F9DB36C01F; Thu,  9 Jan 2014 17:43:38 +0200 (EET)
Date: Thu, 9 Jan 2014 17:43:38 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <20140109154338.GD2924@reaktio.net>
References: <20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140109153640.GC2924@reaktio.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140109153640.GC2924@reaktio.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 05:36:40PM +0200, Pasi Kärkkäinen wrote:
> > > 
> > > I can boot the guest.
> > 
> > And can you access the ROM from the guest ?
> > 
> > 
> > Also, I have another patch, it will initialize the PCI ROM BAR like any
> > other BAR. In this case, if qemu is involved in the access to the ROM, it
> > will print an error, like it is the case for other BARs.
> > 
> > I tried to test it, but it was with an embedded VGA card. When I dump
> > the ROM, I got the same one as the emulated card instead of the ROM from
> > the device.
> > 
> 
> This issue has been reported multiple times on the list, and discussed during
> the last couple of months in other threads, mostly related to GPU passthrough.
> 
> I think some clues were found recently about why you get the emulated ROM
> instead of the actual device ROM.
> Sorry that I don't have the link available right now..
> 

Heh.. it seems it was Konrad in this very same thread :)

http://lists.xen.org/archives/html/xen-devel/2013-12/msg02837.html


-- Pasi




From xen-devel-bounces@lists.xen.org Thu Jan 09 15:49:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hr4-0000al-EH; Thu, 09 Jan 2014 15:49:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Hr2-0000ag-K5
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:49:08 +0000
Received: from [85.158.143.35:5387] by server-1.bemta-4.messagelabs.com id
	EC/7D-02132-4F4CEC25; Thu, 09 Jan 2014 15:49:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389282545!10666063!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26742 invoked from network); 9 Jan 2014 15:49:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:49:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89181387"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 15:49:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:49:04 -0500
Message-ID: <1389282543.19805.44.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 15:49:03 +0000
In-Reply-To: <21198.47738.357463.155514@mariner.uk.xensource.com>
References: <1389263589-11955-1-git-send-email-stefan.bader@canonical.com>
	<1389265395.27473.69.camel@kazak.uk.xensource.com>
	<21198.47738.357463.155514@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: jfehlig@suse.com, Stefan Bader <stefan.bader@canonical.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: Auto-assign NIC devids in
	initiate_domain_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 15:04 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create"):
> > On Thu, 2014-01-09 at 11:33 +0100, Stefan Bader wrote:
> > > From bafc8f62ee3e3175ec4d978bceba4b5f891a597d Mon Sep 17 00:00:00 2001
> > > From: Stefan Bader <stefan.bader@canonical.com>
> > > Date: Wed, 8 Jan 2014 18:26:59 +0100
> > > Subject: [PATCH] libxl: Auto-assign NIC devids in initiate_domain_create
> > > 
> > > This will change initiate_domain_create to walk through NIC definitions
> > > and automatically assign devids to those which have not had one assigned.
> > > The devids are needed later in domcreate_launch_dm (for HVM domains
> > > using emulated NICs). The command string for starting the device-model
> > > has those ids as part of its arguments.
> > > Assignment of devids in the hotplug case is handled by libxl_device_nic_add
> > > but that would be called too late in the startup case.
> > > I also moved the call to libxl__device_nic_setdefault here as this seems
> > > to be the only path leading there and avoids doing the loop a third time.
> > > The two loops are trying to handle a case where the caller sets some devids
> > > (not sure that should be valid) but leaves some unset.
> 
> Thanks.  Thanks also for the careful and comprehensive explanation.
> 
> > > Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> > I think from a release point of view we should take this since it is a
> > bug fix to the API which at least libvirt has tripped over (although
> > libvirt has worked around it, others may not have done so).
>  
> > Ian J: Does that make sense?
> 
> I agree.

Applied, thanks.

Ian.



From xen-devel-bounces@lists.xen.org Thu Jan 09 15:49:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HrS-0000kf-SQ; Thu, 09 Jan 2014 15:49:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1HrR-0000kR-V1
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:49:34 +0000
Received: from [193.109.254.147:53460] by server-4.bemta-14.messagelabs.com id
	F5/39-03916-D05CEC25; Thu, 09 Jan 2014 15:49:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389282571!7608656!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9710 invoked from network); 9 Jan 2014 15:49:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:49:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91325678"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:49:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:49:30 -0500
Message-ID: <1389282569.19805.45.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 15:49:29 +0000
In-Reply-To: <20140109143915.GC12164@zion.uk.xensource.com>
References: <20140108164617.GA20476@aepfle.de>
	<1389199900.27473.3.camel@kazak.uk.xensource.com>
	<20140108183411.GA13867@zion.uk.xensource.com>
	<1389261891.27473.45.camel@kazak.uk.xensource.com>
	<20140109115220.GA32437@zion.uk.xensource.com>
	<1389268784.27473.71.camel@kazak.uk.xensource.com>
	<20140109143915.GC12164@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 14:39 +0000, Wei Liu wrote:
> On Thu, Jan 09, 2014 at 11:59:44AM +0000, Ian Campbell wrote:
> > On Thu, 2014-01-09 at 11:52 +0000, Wei Liu wrote:
> > > On Thu, Jan 09, 2014 at 10:04:51AM +0000, Ian Campbell wrote:
> > > > On Wed, 2014-01-08 at 18:34 +0000, Wei Liu wrote:
> > > > > On Wed, Jan 08, 2014 at 04:51:40PM +0000, Ian Campbell wrote:
> > > > > > On Wed, 2014-01-08 at 17:46 +0100, Olaf Hering wrote:
> > > > > > > With xm it was possible to have a global keymap="de" to map the physical
> > > > > > > keyboard correctly. Now with xl this fails, at least in xen-4.3.
> > > > > > > xl create -d shows keymap:NULL in the vfb part.
> > > > > > > Only moving keymap= into vfb=[] fixes it for me.
> > > > > > > 
> > > > > > > xl.cfg(5) indicates that keymap= can be specified as a global option (just
> > > > > > > like vnc=) as well as a suboption for vfb=[].
> > > > > > > 
> > > > > > > Was this already fixed in xen-unstable? git log shows no keymap-related
> > > > > > > changes.
> > > > > > 
> > > > > > I don't think Wei covered this one with his VNC patches. It does sound
> > > > > > like it should be moved though, yes. I think this is 4.5 material at this
> > > > > > point.
> > > > > > 
> > > > > 
> > > > > You're right, my patch didn't cover that aspect because I tried hard to
> > > > > make it minimal.
> > > > 
> > > > Do you think you could revisit this bit for 4.5?
> > > > 
> > > 
> > > Yes, I think so. That should be relatively straightforward, I hope. :-)
> > > 
> > > > > > Wei, BTW, did your VNC change not require any doc (e.g. manpage) updates?
> > > > > > Sorry for not thinking of that during review.
> > > > > > 
> > > > > 
> > > > > I don't think so. All VNC / VFB options are already documented.
> > > > 
> > > > The top level vnc*= options seem to be under "Emulated VGA Graphics
> > > > Device" though.
> > > > 
> > > 
> > > How about this
> > 
> > HRM, I was more thinking about pulling those options out into a new
> > "Primary Graphics Device" or something section with a little intro blurb
> > about how for PV this is a PVFB and for HVM this is an emulated VNC.
> > 
> > With your approach I think at a minimum it would need to enumerate which
> > specific options work for both.
> > 
> 
> From 935b18da7a9cf60186a87dcf97d39722d0e49937 Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Thu, 9 Jan 2014 11:48:13 +0000
> Subject: [PATCH] docs/man/xl.cfg.pod.5: document global VNC options for VFB
>  device
> 
> Update xl.cfg to reflect change in 706d4ab74 "xl: create VFB for PV
> guest when VNC is specified".
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

I still think this is inferior to a proper refactoring but it's better
than nothing so, Acked + applied.

RM hat: barrier to docs improvements is very low.
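For reference, a minimal xl.cfg fragment illustrating what Olaf reports (the domain details are omitted; in xen-4.3 only the vfb suboption form actually takes effect):

```
# Global form, documented in xl.cfg(5) but ignored by xl in xen-4.3:
keymap = "de"

# Workaround: specify keymap as a vfb suboption instead:
vfb = [ "vnc=1,keymap=de" ]
```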



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:49:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:49:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hrj-0000ns-9h; Thu, 09 Jan 2014 15:49:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W1Hrh-0000nU-RA
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:49:50 +0000
Received: from [85.158.137.68:6620] by server-15.bemta-3.messagelabs.com id
	2A/4B-11556-C15CEC25; Thu, 09 Jan 2014 15:49:48 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389282586!8204597!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25419 invoked from network); 9 Jan 2014 15:49:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:49:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91325783"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:49:46 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:49:45 -0500
Message-ID: <52CEC518.60606@citrix.com>
Date: Thu, 9 Jan 2014 16:49:44 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
In-Reply-To: <20140109153010.GE12164@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>, aliguori@amazon.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 16:30, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
>> This patch contains the new definitions necessary for grant mapping.
>>
>> v2:
>> - move unmapping to separate thread. The NAPI instance has to be scheduled
>>   even from thread context, which can cause huge delays
>> - that unfortunately results in a bigger struct xenvif
>> - store grant handle after checking validity
>>
>> v3:
>> - fix comment in xenvif_tx_dealloc_action()
>> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>>   unnecessary m2p_override. Also remove pages_to_[un]map members
> 
> Is it worth having another function,
> gnttab_unmap_refs_no_m2p_override, in the Xen core driver, or just adding a
> parameter to control whether we need to touch m2p_override? I *think* it
> will benefit the block driver as well?

Anthony Liguori posted a patch to perform something similar in blkback,
but I think the patch never made it upstream:

https://lkml.org/lkml/2013/11/12/749

Probably a good time to revisit it so this mechanism can be used by both
blkback and netback?

Roger.

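A toy sketch of the parameterized shape Wei floats above, with one unmap entry point taking a flag instead of a separate *_no_m2p_override variant. All names, types, and the bookkeeping here are hypothetical simplifications, not the real grant-table API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a grant unmap operation. */
struct toy_unmap_op { int handle; };

static int m2p_override_calls;   /* counts how often the override path ran */

static void toy_m2p_override_clear(struct toy_unmap_op *op)
{
    (void)op;
    m2p_override_calls++;        /* stand-in for the real m2p bookkeeping */
}

/* One entry point; the caller decides whether the (costly) m2p_override
 * bookkeeping runs, so netback can skip it while blkback keeps it. */
static int toy_unmap_refs(struct toy_unmap_op *ops, size_t count,
                          bool touch_m2p_override)
{
    for (size_t i = 0; i < count; i++) {
        ops[i].handle = -1;      /* pretend the grant is now unmapped */
        if (touch_m2p_override)
            toy_m2p_override_clear(&ops[i]);
    }
    return 0;
}
```

The design choice is purely about API surface: a boolean parameter keeps one code path with a branch, while a second function name keeps call sites self-describing.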

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:49:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hrq-0000ql-VR; Thu, 09 Jan 2014 15:49:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Hrp-0000qG-E1
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:49:57 +0000
Received: from [85.158.139.211:18221] by server-13.bemta-5.messagelabs.com id
	40/82-11357-425CEC25; Thu, 09 Jan 2014 15:49:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389282594!8644552!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14644 invoked from network); 9 Jan 2014 15:49:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:49:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91325832"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:49:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:49:53 -0500
Message-ID: <1389282592.19805.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu, 9 Jan 2014 15:49:52 +0000
In-Reply-To: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389089063-31631-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] tools/libxc: Correct read_exact() error
 messages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-07 at 10:04 +0000, Andrew Cooper wrote:
> The errors have been incorrectly identifying their function since c/s
> 861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
> cleanup.
> 
> Use __func__ to ensure the name remains correct in the future.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>

Acked + applied, thanks.
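The fix Andrew describes can be sketched as follows; read_helper and format_error are hypothetical stand-ins, not the actual libxc code:

```c
#include <stdio.h>
#include <string.h>

/* Build an error message prefixed with the reporting function's name. */
static void format_error(char *buf, size_t len, const char *func, int err)
{
    snprintf(buf, len, "%s: read failed (%d)", func, err);
}

static void read_helper(char *buf, size_t len)
{
    /* __func__ expands to the enclosing function's name at compile time,
     * so the message stays correct even if the function is later renamed,
     * unlike a hardcoded string that silently goes stale. */
    format_error(buf, len, __func__, -1);
}
```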



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:50:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hs9-0000wf-HZ; Thu, 09 Jan 2014 15:50:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Hs8-0000wF-OK
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 15:50:16 +0000
Received: from [85.158.137.68:35445] by server-14.bemta-3.messagelabs.com id
	BA/5F-06105-735CEC25; Thu, 09 Jan 2014 15:50:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389282613!8212411!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11827 invoked from network); 9 Jan 2014 15:50:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:50:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89181993"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 15:50:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:50:11 -0500
Message-ID: <1389282610.19805.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 9 Jan 2014 15:50:10 +0000
In-Reply-To: <1389203159.27473.9.camel@kazak.uk.xensource.com>
References: <1389026178-8792-1-git-send-email-julien.grall@linaro.org>
	<1389102894.12612.17.camel@kazak.uk.xensource.com>
	<1389203159.27473.9.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v2] xen/dts: Don't translate invalid address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 17:45 +0000, Ian Campbell wrote:
> On Tue, 2014-01-07 at 13:54 +0000, Ian Campbell wrote:
> > Applied.
> 
> Except I seem to have failed to actually do it... I've put this back in
> my queue and will pick it up on my next attempt...

Really done this time...



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:52:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1HuG-0001HX-0h; Thu, 09 Jan 2014 15:52:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1HuE-0001HL-Lb
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 15:52:26 +0000
Received: from [85.158.143.35:65069] by server-2.bemta-4.messagelabs.com id
	71/09-11386-9B5CEC25; Thu, 09 Jan 2014 15:52:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389282743!10686934!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4264 invoked from network); 9 Jan 2014 15:52:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:52:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89182951"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 15:52:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	10:52:22 -0500
Message-ID: <1389282741.19805.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 9 Jan 2014 15:52:21 +0000
In-Reply-To: <20140108164617.GA20476@aepfle.de>
References: <20140108164617.GA20476@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it xl: global keymap= option not recognised
prune it <20140109115220.GA32437@zion.uk.xensource.com>
owner it Wei Liu <wei.liu2@citrix.com>
thanks



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 15:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 15:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Hvi-0001Ra-I6; Thu, 09 Jan 2014 15:53:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1Hvg-0001RH-KF
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 15:53:56 +0000
Received: from [85.158.137.68:51041] by server-2.bemta-3.messagelabs.com id
	58/8D-17329-316CEC25; Thu, 09 Jan 2014 15:53:55 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389282833!8169123!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11202 invoked from network); 9 Jan 2014 15:53:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 15:53:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91327570"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 15:53:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 10:53:52 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W1Hvc-0007Lb-GT;
	Thu, 09 Jan 2014 15:53:52 +0000
Message-ID: <52CEC610.9040502@citrix.com>
Date: Thu, 9 Jan 2014 15:53:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Simon Graham <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 15:39, Simon Graham wrote:
> We've been seeing very infrequent crashes in Windows VMs for a while where it appears that the top-byte in a longword that is supposed to hold a pointer is being set to zero - for example:
>
> BUG CHECK 0000000A (00B49A40 00000002 00000001 8261092E)
>
> The first parameter to keBugCheck is the faulting address - 00B49A40 in this case.
>
> When we look at the dump, we are in a routine releasing a queued spinlock and the correct address that should have been used was 0xA3B49A40 and indeed memory contents in the windows dump have this value. Looking around some more, we see that the failing processor is executing the release queued spinlock code and another processor is executing the code to acquire the same queued spinlock and has recently written the 0xA3B49A40 value to the location where the failing instruction stream read it from.
>
> If we look at the disassembly for the two code paths, the writing code does:
>
> 	mov dword ptr [edx],eax
>
> and the reading code does the following to read this same value:
>
> 	mov ebx,dword ptr [eax]
>
> On a hunch that this might be a problem with the x86_emulate code, I took a look at how the mov instruction would be emulated - in both cases where emulation can be done (hvm/emulate.c and mm/shadow/multi.c), the routines that write instruction results back to memory use memcpy() to actually copy the data. Looking at the implementation of memcpy in Xen, I see that, in a 64-bit build as ours is, it will use 'rep movsq' to move the data in quadwords and then uses 'rep movsb' to move the last 1-7 bytes -- so the instructions above will, I think, always use byte instructions for the four bytes.
>
> Now, according to the X86 arch, 32-bit mov's are supposed to be atomic but based on the above they will not be and I am speculating that this is the cause of our occasional crash - the code path unlocking the spin lock on the other processor sees a partially written value in memory.
>
> Does this seem a reasonable explanation of the issue? 
>
> On the assumption that this is correct, I developed the attached patch (against 4.3.1) which updates all the code paths that are used to read and writeback the results of instruction emulation to use a simple assignment if the length is 2 or 4 bytes -- this doesn't fix the general case where you have a length > 8 but it does handle emulation of MOV instructions. Unfortunately, the use of emulation in the HVM code uses a generic routine for copying memory to the guest so every single place that guest memory is read or written will pay the penalty of the extra check for length - not sure if that is terrible or not. Since doing this we have not seen a single instance of the crash - but it's only been a month!
>
> The attached patch is for discussion purposes only - if it is deemed acceptable I'll resubmit a proper patch request against unstable.

That seems like a plausible explanation.

The patch however needs some work.  As this function is identical in
both places, it should have a common implementation somewhere, possibly
as part of x86_emulate.h.

To better match real hardware, it might be appropriate for
"memcpy_atomic()" (name subject to improved suggestions) to use a while
loop and issue 8-byte writes at a time, falling back to 4, 2 and then 1
when reaching the end of the data to be copied.
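[Editorial note: as a hedged illustration only — this is not code from any Xen tree, and "memcpy_atomic" is just the placeholder name proposed above — the suggested loop might be sketched in C as:]

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the loop described above: copy in 8-byte chunks, then fall
 * back to 4, 2 and 1 bytes for the tail, so that a 2-, 4- or 8-byte
 * quantity is written with a single store rather than byte-by-byte as
 * a 'rep movsb' tail would do.  Note: on x86 the individual stores are
 * only architecturally atomic when the destination is naturally
 * aligned for the access width used. */
static void memcpy_atomic(void *dst, const void *src, size_t n)
{
    const unsigned char *s = src;
    unsigned char *d = dst;

    for ( ; n >= 8; d += 8, s += 8, n -= 8 )
        *(volatile uint64_t *)d = *(const uint64_t *)s;
    for ( ; n >= 4; d += 4, s += 4, n -= 4 )
        *(volatile uint32_t *)d = *(const uint32_t *)s;
    for ( ; n >= 2; d += 2, s += 2, n -= 2 )
        *(volatile uint16_t *)d = *(const uint16_t *)s;
    for ( ; n; d++, s++, n-- )
        *(volatile uint8_t *)d = *s;
}
```

With such a routine, the emulator's 4-byte writeback of a spinlock word would be issued as one 32-bit store, matching what the guest's own 'mov dword ptr [edx],eax' would have done.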

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:00:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I1q-0002Kg-QF; Thu, 09 Jan 2014 16:00:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1I1o-0002Kb-Si
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:00:17 +0000
Received: from [85.158.143.35:13480] by server-1.bemta-4.messagelabs.com id
	7E/6F-02132-097CEC25; Thu, 09 Jan 2014 16:00:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389283215!10586492!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25010 invoked from network); 9 Jan 2014 16:00:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:00:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Jan 2014 16:00:15 +0000
Message-Id: <52CED59D020000780011204A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 09 Jan 2014 16:00:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 16:39, Simon Graham <simon.graham@citrix.com> wrote:
> We've been seeing very infrequent crashes in Windows VMs for a while where it 
> appears that the top-byte in a longword that is supposed to hold a pointer is 
> being set to zero - for example:
> 
> BUG CHECK 0000000A (00B49A40 00000002 00000001 8261092E)
> 
> The first parameter to keBugCheck is the faulting address - 00B49A40 in this 
> case.
> 
> When we look at the dump, we are in a routine releasing a queued spinlock 
> and the correct address that should have been used was 0xA3B49A40 and indeed 
> memory contents in the windows dump have this value. Looking around some 
> more, we see that the failing processor is executing the release queued 
> spinlock code and another processor is executing the code to acquire the same 
> queued spinlock and has recently written the 0xA3B49A40 value to the location 
> where the failing instruction stream read it from.
> 
> If we look at the disassembly for the two code paths, the writing code does:
> 
> 	mov dword ptr [edx],eax
> 
> and the reading code does the following to read this same value:
> 
> 	mov ebx,dword ptr [eax]
> 
> On a hunch that this might be a problem with the x86_emulate code, I took a 
> look at how the mov instruction would be emulated - in both cases where 
> emulation can be done (hvm/emulate.c and mm/shadow/multi.c), the routines 
> that write instruction results back to memory use memcpy() to actually copy 
> the data. Looking at the implementation of memcpy in Xen, I see that, in a 
> 64-bit build as ours is, it will use 'rep movsq' to move the data in 
> quadwords and then uses 'rep movsb' to move the last 1-7 bytes -- so the 
> instructions above will, I think, always use byte instructions for the four 
> bytes.
> 
> Now, according to the X86 arch, 32-bit mov's are supposed to be atomic but 
> based on the above they will not be and I am speculating that this is the 
> cause of our occasional crash - the code path unlocking the spin lock on the 
> other processor sees a partially written value in memory.
> 
> Does this seem a reasonable explanation of the issue? 

Yes - as long as you can also explain why a spin lock operation
would make it into the emulation code in the first place.

> On the assumption that this is correct, I developed the attached patch 
> (against 4.3.1) which updates all the code paths that are used to read and 
> writeback the results of instruction emulation to use a simple assignment if 
> the length is 2 or 4 bytes -- this doesn't fix the general case where you have 
> a length > 8 but it does handle emulation of MOV instructions. Unfortunately, 
> the use of emulation in the HVM code uses a generic routine for copying 
> memory to the guest so every single place that guest memory is read or 
> written will pay the penalty of the extra check for length - not sure if that 
> is terrible or not. Since doing this we have not seen a single instance of 
> the crash - but it's only been a month!
> 
> The attached patch is for discussion purposes only - if it is deemed 
> acceptable I'll resubmit a proper patch request against unstable.

I'd rather not add limited scope special casing like that, but instead
make the copying much more like real hardware (i.e. not just deal
with the 16- and 32-bit cases, and especially not rely on memcpy()
using 64-bit reads/writes when it can). IOW - don't use memcpy()
here at all (and have a single routine doing The Right Thing (tm)
rather than having two clones now, and perhaps more later on -
I'd in particular think that the read side in shadow code would also
need a similar adjustment).

On the mechanical side of things: Such a generic routine should
have proper parameter types: "const void *" for the source pointer
and "unsigned long" or "size_t" for the count.
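[Editorial note: to illustrate the "much more like real hardware" point — a single generic routine would have to pick, at each step, the widest store that both fits the remaining length and is naturally aligned at the destination. A hypothetical helper (names illustrative, not from any Xen tree) might compute that width as:]

```c
#include <stddef.h>
#include <stdint.h>

/* Return the widest power-of-two chunk (8, 4, 2 or 1 bytes) that is
 * naturally aligned at 'dst' and no larger than the 'left' bytes still
 * to be copied.  A copy loop using this never issues a store wider
 * than the alignment permits, so each chunk is a single atomic write
 * on x86. */
static size_t chunk_size(uintptr_t dst, size_t left)
{
    size_t c = 8;

    while ( c > 1 && ((dst & (c - 1)) || left < c) )
        c >>= 1;
    return c;
}
```

For example, a 4-byte copy to a 4-byte-aligned destination yields one 4-byte chunk, while an unaligned destination degrades gracefully to narrower stores.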

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:00:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I1q-0002Kg-QF; Thu, 09 Jan 2014 16:00:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1I1o-0002Kb-Si
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:00:17 +0000
Received: from [85.158.143.35:13480] by server-1.bemta-4.messagelabs.com id
	7E/6F-02132-097CEC25; Thu, 09 Jan 2014 16:00:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389283215!10586492!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25010 invoked from network); 9 Jan 2014 16:00:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:00:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Jan 2014 16:00:15 +0000
Message-Id: <52CED59D020000780011204A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 09 Jan 2014 16:00:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 16:39, Simon Graham <simon.graham@citrix.com> wrote:
> We've been seeing very infrequent crashes in Windows VMs for a while where it 
> appears that the top-byte in a longword that is supposed to hold a pointer is 
> being set to zero - for example:
> 
> BUG CHECK 0000000A (00B49A40 00000002 00000001 8261092E)
> 
> The first parameter to keBugCheck is the faulting address - 00B49A40 in this 
> case.
> 
> When we look at the dump, we are in a routine releasing a queued spinlock 
> and the correct address that should have been used was 0xA3B49A40 and indeed 
> memory contents in the windows dump have this value. Looking around some 
> more, we see that the failing processor is executing the release queued 
> spinlock code and another processor is executing the code to acquire the same 
> queued spinlock and has recently written the 0xA3B49A40 value to the location 
> where the failing instruction stream read it from.
> 
> If we look at the disassembly for the two code paths, the writing code does:
> 
> 	mov dword ptr [edx],eax
> 
> and the reading code does the following to read this same value:
> 
> 	mov ebx,dword ptr [eax]
> 
> On a hunch that this might be a problem with the x86_emulate code, I took a 
> look at how the mov instruction would be emulated - in both cases where 
> emulation can be done (hvm/emulate.c and mm/shadow/multi.c), the routines 
> that write instruction results back to memory use memcpy() to actually copy 
> the data. Looking at the implementation of memcpy in Xen, I see that, in a 
> 64-bit build as ours is, it will use 'rep movsq' to move the data in 
> quadwords and then uses 'rep movsb' to move the last 1-7 bytes -- so the 
> instructions above will, I think, always use byte instructions for the four 
> bytes.
> 
> Now, according to the X86 arch, 32-bit mov's are supposed to be atomic but 
> based on the above they will not be and I am speculating that this is the 
> cause of our occasional crash - the code path unlocking the spin lock on the 
> other processor sees a partially written value in memory.
> 
> Does this seem a reasonable explanation of the issue? 

Yes - as long as you can also explain why a spin lock operation
would make it into the emulation code in the first place.

> On the assumption that this is correct, I developed the attached patch 
> (against 4.3.1) which updates all the code paths that are used to read and 
> writeback the results of instruction emulation to use a simple assignment if 
> the length is 2 or 4 bytes -- this doesn't fix the general case where you have 
> a length > 8 but it does handle emulation of MOV instructions. Unfortunately, 
> the use of emulation in the HVM code uses a generic routine for copying 
> memory to the guest so every single place that guest memory is read or 
> written will pay the penalty of the extra check for length - not sure if that 
> is terrible or not. Since doing this we have not seen a single instance of 
> the crash - but it's only been a month!
> 
> The attached patch is for discussion purposes only - if it is deemed 
> acceptable I'll resubmit a proper patch request against unstable.

I'd rather not add limited scope special casing like that, but instead
make the copying much more like real hardware (i.e. not just deal
with the 16- and 32-bit cases, and especially not rely on memcpy()
using 64-bit reads/writes when it can). IOW - don't use memcpy()
here at all (and have a single routine doing The Right Thing (tm)
rather than having two clones now, and perhaps more later on -
I'd in particular think that the read side in shadow code would also
need a similar adjustment).

On the mechanical side of things: Such a generic routine should
have proper parameter types: "const void *" for the source pointer
and "unsigned long" or "size_t" for the count.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:03:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I4V-0002Yp-DE; Thu, 09 Jan 2014 16:03:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1I4T-0002Yi-0D
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 16:03:01 +0000
Received: from [85.158.137.68:59509] by server-13.bemta-3.messagelabs.com id
	0F/EA-28603-438CEC25; Thu, 09 Jan 2014 16:03:00 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389283377!7043728!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24841 invoked from network); 9 Jan 2014 16:02:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:02:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91331799"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 16:02:56 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 11:02:56 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXp110AAApZD0A=
Date: Thu, 9 Jan 2014 16:02:56 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E42FF@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CEC610.9040502@citrix.com>
In-Reply-To: <52CEC610.9040502@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks,

> > The attached patch is for discussion purposes only - if it is deemed
> acceptable I'll resubmit a proper patch request against unstable.
> 
> That seems like a plausible explanation.
> 
> The patch however needs some work.  As this function is identical, it
> should have a common implementation somewhere, possibly part of
> x86_emulate.h, and it would probably be better as:
> 

Not sure I want the generic HVM code to be dependent on x86_emulate... not sure it should be in hvm.c either.

> To better match real hardware, it might be appropriate for
> "memcpy_atomic()" (name subject to improved suggestions) to use a while
> loop and issue 8 byte writes at a time, falling down to 4, 2 then 1 when
> reaching the end of the data to be copied.
> 

My concern here would be that the generic hvm routine __hvm_copy needs to use this when emulating instructions but not the rest of the time, and I'd be concerned about the perf impact.

I'll noodle on a suitable single place to put this...

> ~Andrew


From xen-devel-bounces@lists.xen.org Thu Jan 09 16:06:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I7I-0002xO-0A; Thu, 09 Jan 2014 16:05:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1I7G-0002xH-4i
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:05:54 +0000
Received: from [85.158.137.68:53332] by server-15.bemta-3.messagelabs.com id
	96/C9-11556-1E8CEC25; Thu, 09 Jan 2014 16:05:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389283552!8200574!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32044 invoked from network); 9 Jan 2014 16:05:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:05:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Jan 2014 16:05:52 +0000
Message-Id: <52CED6ED0200007800112058@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 09 Jan 2014 16:05:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CEC610.9040502@citrix.com>
In-Reply-To: <52CEC610.9040502@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 16:53, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> To better match real hardware, it might be appropriate for
> "memcpy_atomic()" (name subject to improved suggestions) to use a while
> loop and issue 8 byte writes at a time, falling down to 4, 2 then 1 when
> reaching the end of the data to be copied.

Except that's not what real hardware does. You'd want to take
initial alignment into account here, shrinking the access width
right away to one suitable for the passed in alignment. Mis-
aligned locked accesses may need extra consideration (but I'd
hope the emulator already does well enough when LOCK is in
use).

Jan



From xen-devel-bounces@lists.xen.org Thu Jan 09 16:06:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:06:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I8D-00032S-Ev; Thu, 09 Jan 2014 16:06:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1I8B-000329-Hp
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 16:06:51 +0000
Received: from [85.158.139.211:30611] by server-12.bemta-5.messagelabs.com id
	C4/45-30017-A19CEC25; Thu, 09 Jan 2014 16:06:50 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389283608!8798379!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12924 invoked from network); 9 Jan 2014 16:06:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:06:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91333479"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 16:06:28 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	11:06:27 -0500
Message-ID: <52CEC902.2050707@citrix.com>
Date: Thu, 9 Jan 2014 16:06:26 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CEC610.9040502@citrix.com>
In-Reply-To: <52CEC610.9040502@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Simon Graham <simon.graham@citrix.com>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 15:53, Andrew Cooper wrote:
> On 09/01/14 15:39, Simon Graham wrote:
>> We've been seeing very infrequent crashes in Windows VMs for a while where it appears that the top-byte in a longword that is supposed to hold a pointer is being set to zero - for example:
>>
>> BUG CHECK 0000000A (00B49A40 00000002 00000001 8261092E)
>>
>> The first parameter to keBugCheck is the faulting address - 00B49A40 in this case.
>>
>> When we look at the dump, we are in a routine releasing a queued spinlock and the correct address that should have been used was 0xA3B49A40 and indeed memory contents in the windows dump have this value. Looking around some more, we see that the failing processor is executing the release queued spinlock code and another processor is executing the code to acquire the same queued spinlock and has recently written the 0xA3B49A40 value to the location where the failing instruction stream read it from.
>>
>> If we look at the disassembly for the two code paths, the writing code does:
>>
>> 	mov dword ptr [edx],eax
>>
>> and the reading code does the following to read this same value:
>>
>> 	mov ebx,dword ptr [eax]
>>
>> On a hunch that this might be a problem with the x86_emulate code, I took a look at how the mov instruction would be emulated - in both cases where emulation can be done (hvm/emulate.c and mm/shadow/multi.c), the routines that write instruction results back to memory use memcpy() to actually copy the data. Looking at the implementation of memcpy in Xen, I see that, in a 64-bit build as ours is, it will use 'rep movsq' to move the data in quadwords and then uses 'rep movsb' to move the last 1-7 bytes -- so the instructions above will, I think, always use byte instructions for the four bytes.
>>
>> Now, according to the X86 arch, 32-bit mov's are supposed to be atomic but based on the above they will not be and I am speculating that this is the cause of our occasional crash - the code path unlocking the spin lock on the other processor sees a partially written value in memory.
>>
>> Does this seem a reasonable explanation of the issue? 
>>
>> On the assumption that this is correct, I developed the attached patch (against 4.3.1) which updates all the code paths that are used to read and writeback the results of instruction emulation to use a simple assignment if the length is 2 or 4 bytes -- this doesn't fix the general case where you have a length > 8 but it does handle emulation of MOV instructions. Unfortunately, the use of emulation in the HVM code uses a generic routine for copying memory to the guest so every single place that guest memory is read or written will pay the penalty of the extra check for length - not sure if that is terrible or not. Since doing this we have not seen a single instance of the crash - but it's only been a month!
>>
>> The attached patch is for discussion purposes only - if it is deemed acceptable I'll resubmit a proper patch request against unstable.
> 
> That seems like a plausible explanation.
> 
> The patch however needs some work.  As this function is identical, it
> should have a common implementation somewhere, possibly part of
> x86_emulate.h, and it would probably be better as:
> 
> To better match real hardware, it might be appropriate for
> "memcpy_atomic()" (name subject to improved suggestions) to use a while
> loop and issue 8 byte writes at a time, falling down to 4, 2 then 1 when
> reaching the end of the data to be copied.

Definitely not "memcpy_atomic()" as that suggests the whole copy is
atomic which isn't the case.

David


From xen-devel-bounces@lists.xen.org Thu Jan 09 16:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1I8n-00038X-Dp; Thu, 09 Jan 2014 16:07:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1I8m-00038G-92
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:07:28 +0000
Received: from [193.109.254.147:15263] by server-4.bemta-14.messagelabs.com id
	94/13-03916-F39CEC25; Thu, 09 Jan 2014 16:07:27 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389283645!9851069!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16224 invoked from network); 9 Jan 2014 16:07:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:07:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89190311"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 16:07:24 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 11:07:24 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkA=
Date: Thu, 9 Jan 2014 16:07:23 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
In-Reply-To: <52CED59D020000780011204A@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Does this seem a reasonable explanation of the issue?
> 
> Yes - as long as you can also explain why a spin lock operation
> would make it into the emulation code in the first place.
> 

Well, that one is tough and I don't have a good answer... the only thing I would say is that, in our system, we ALWAYS have shadow memory tracking enabled (to track changes to framebuffers).

> > The attached patch is for discussion purposes only - if it is deemed
> > acceptable I'll resubmit a proper patch request against unstable.
> 
> I'd rather not add limited scope special casing like that, but instead
> make the copying much more like real hardware (i.e. not just deal
> with the 16- and 32-bit cases, and especially not rely on memcpy()
> using 64-bit reads/writes when it can). IOW - don't use memcpy()
> here at all (and have a single routine doing The Right Thing (tm)
> rather than having two clones now, and perhaps more later on -
> I'd in particular think that the read side in shadow code would also
> need a similar adjustment).

My concern was that memcpy is (I assume!) highly optimized - it certainly should be if it isn't - and I would worry that a change making it atomic for the purposes of instruction emulation would result in an across-the-board perf hit, when in most cases it isn't necessary that the copy be atomic.

This would be fine for the writeback code in the shadow module, BUT the __hvm_copy routine is used generically in situations where atomicity is not required...
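For what it's worth, the kind of single copy routine Jan describes above - one that does naturally aligned 1/2/4/8-byte writes as single stores instead of trusting memcpy() to use wide accesses - might look roughly like this. This is an illustrative sketch only, not Xen's actual code: it uses the GCC/Clang __atomic builtins and a made-up name (emul_write), where Xen would use its own primitives.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch: copy 'len' bytes to 'dst', using one naturally aligned
 * atomic store when size and alignment allow, so 2/4/8-byte guest
 * writes are seen whole by other vCPUs.  Falls back to plain
 * memcpy() otherwise (where atomicity is not required anyway).
 */
static void emul_write(void *dst, const void *src, size_t len)
{
    uintptr_t d = (uintptr_t)dst;
    uint16_t v2; uint32_t v4; uint64_t v8;

    /* Power-of-two size up to 8 bytes, destination naturally aligned? */
    if ( len && len <= 8 && !(len & (len - 1)) && !(d & (len - 1)) )
    {
        switch ( len )
        {
        case 1:
            __atomic_store_n((uint8_t *)dst, *(const uint8_t *)src,
                             __ATOMIC_RELAXED);
            return;
        case 2:
            memcpy(&v2, src, 2);            /* src may be unaligned */
            __atomic_store_n((uint16_t *)dst, v2, __ATOMIC_RELAXED);
            return;
        case 4:
            memcpy(&v4, src, 4);
            __atomic_store_n((uint32_t *)dst, v4, __ATOMIC_RELAXED);
            return;
        case 8:
            memcpy(&v8, src, 8);
            __atomic_store_n((uint64_t *)dst, v8, __ATOMIC_RELAXED);
            return;
        }
    }
    memcpy(dst, src, len);   /* fallback: not atomic, as before */
}
```

Note the prototype also follows Jan's mechanical point below: "const void *" for the source and "size_t" for the count.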

> 
> On the mechanical side of things: Such a generic routine should
> have proper parameter types: "const void *" for the source pointer
> and "unsigned long" or "size_t" for the count.

Sure - thanks.

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:09:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IAj-0003vj-Jg; Thu, 09 Jan 2014 16:09:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1IAi-0003va-Gi
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:09:28 +0000
Received: from [85.158.139.211:56154] by server-10.bemta-5.messagelabs.com id
	62/0F-01405-7B9CEC25; Thu, 09 Jan 2014 16:09:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389283765!8840220!1
X-Originating-IP: [199.249.25.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1830 invoked from network); 9 Jan 2014 16:09:26 -0000
Received: from omzsmtpe02.verizonbusiness.com (HELO
	omzsmtpe02.verizonbusiness.com) (199.249.25.209)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:09:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe02.verizonbusiness.com with ESMTP; 09 Jan 2014 16:09:23 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; 
	d="scan'208,223";a="628069652"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by fldsmtpi03.verizon.com with ESMTP; 09 Jan 2014 16:09:23 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Thu, 9 Jan 2014 11:08:25 -0500
Message-ID: <52CEC978.7040705@terremark.com>
Date: Thu, 9 Jan 2014 11:08:24 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Mukesh Rathor
	<mukesh.rathor@oracle.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>	
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>	
	<1389177510.4883.11.camel@kazak.uk.xensource.com>	
	<52CD60AA.9010607@terremark.com>	
	<1389199626.4883.112.camel@kazak.uk.xensource.com>	
	<20140108170442.GA75747@deinos.phlegethon.org>	
	<1389203062.27473.8.camel@kazak.uk.xensource.com>	
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
In-Reply-To: <1389261548.27473.42.camel@kazak.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------070809020903090506040507"
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>, Don
	Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--------------070809020903090506040507
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit

On 01/09/14 04:59, Ian Campbell wrote:
> On Wed, 2014-01-08 at 16:38 -0800, Mukesh Rathor wrote:
>> On Wed, 8 Jan 2014 17:44:22 +0000
>> Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>
>>> On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
>>>> At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
>>>>> On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
>>>>>>> Using volatile is almost always wrong. Why do you think it is
>>>>>>> needed here?
>>>>>> This was from Mukesh Rathor:
>>>>>>
>>>>>> http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
>>>>>>
>>>>>> I saw no reason to make it volatile, but maybe "kdb" needs
>>>>>> this? Happy to change any way you want.
>>>>> I'm not the maintainer but if I were I'd say drop the volatile
>>>>> and maybe mark it __read_mostly and perhaps __used too (since
>>>>> it's static it might otherwise get eliminated).
>>>>>
>>>>>>> If anything this variable is exactly the opposite, i..e
>>>>>>> __read_mostly or even const (given that I can't see anything
>>>>>>> which writes it I suppose this is a compile time setting?)
>>>>>> That has been how I have been testing it so far (changing the
>>>>>> source to set values).  Mukesh claims to be able to change it
>>>>>> at will.  Not sure how const may effect this.
>>>> If the idea is to use kdb itself to frob the value, then it does
>>>> need something to stop the compiler caching it.  This might even be
>>>> one of the few cases where 'volatile' actually DTRT; it would still
>>>> be more in keeping with Xen style to use an explicit read op (like
>>>> atomic_read()) where the value is consumed.
>>> Is there any need to be asynchronously frobbing this value in the
>>> middle of a function within this file and expecting it to be
>> Yes. I can stop the machine via kdb or other debugger, change the value
>> during debug, and upon resuming it will start printing stuff. Often
>> this is needed when going thru several iterations of call before problem
>> is seen. Making it volatile makes sure the compiler loads it every
>> instance of its use. This is not in main path, only debugger path, so
>> the overhead should not be an issue.
> So you want to be able to toggle the value in between two immediately
> adjacent debug print calls? While debugging the debugging infrastructure
> itself? (using itself?).
>
> I'm surprised that even works, but if you say so then OK.
>
> Ian.
>

Based on Mukesh's statement, attached is the rebased version of this patch (labeled v3).  I included Mukesh's ack.

     -Don Slutz
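The volatile behaviour discussed above can be sketched in a few lines of plain C (printf standing in for Xen's printk, and dbg_lines is a counter added purely for this illustration; none of this is the actual patch below):

```c
#include <stdio.h>

/*
 * Marking dbg_debug 'volatile' forces a fresh load at every use, so a
 * debugger that stops the machine and pokes the value sees logging
 * start on resume, instead of the compiler caching the flag in a
 * register across the whole function.
 */
static volatile int dbg_debug;
static int dbg_lines;            /* counts emitted lines, for illustration */

#define DBGP(...) \
    do { if ( dbg_debug ) { printf(__VA_ARGS__); dbg_lines++; } } while ( 0 )
#define DBGP1(...) \
    do { if ( dbg_debug > 1 ) { printf(__VA_ARGS__); dbg_lines++; } } while ( 0 )

static void probe(void)
{
    DBGP("gmem: probe, dbg_debug=%d\n", dbg_debug);
    DBGP1("gmem: verbose detail\n");
}
```

With dbg_debug at 0 probe() is silent; setting it to 1 or 2 (as a debugger would) enables the first or both messages on the next call.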


--------------070809020903090506040507
Content-Type: text/x-patch;
	name="0003-dbg_rw_guest_mem-Conditionally-enable-debug-log-outp.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename*0="0003-dbg_rw_guest_mem-Conditionally-enable-debug-log-outp.pa";
	filename*1="tch"

>From aac1a83a34e5a9d07975015e925563d399c5cd13 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Sat, 4 Jan 2014 11:24:36 -0500
Subject: [BUGFIX][PATCH v3 3/5] dbg_rw_guest_mem: Conditionally enable debug
 log output

If dbg_debug is non-zero, output debug logging.

Include put_gfn debug logging.

Here is a sample output at dbg_debug == 2:

(XEN) [2014-01-07 03:20:09] gmem:addr:8f56 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f56 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f56 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f56 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:8f57 buf:00000000006e2020 len:$1 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:8f57 domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:8f57 domid:1 mfn:64331a
(XEN) [2014-01-07 03:20:09] R: addr:8f57 pagecnt=1 domid:1 gfn:8
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$0
(XEN) [2014-01-07 03:20:09] gmem:addr:6ae9168b buf:00000000006e2020 len:$2 domid:1 toaddr:0 dp:ffff83083e5fe000
(XEN) [2014-01-07 03:20:09] vaddr:6ae9168b domid:1
(XEN) [2014-01-07 03:20:09] X: vaddr:6ae9168b domid:1 mfn:ffffffffffffffff
(XEN) [2014-01-07 03:20:09] R: domid:1 gfn:6ae91
(XEN) [2014-01-07 03:20:09] gmem:exit:len:$2

Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/debug.c | 54 +++++++++++++++++++++++++---------------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 435bd40..d28fb70 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -30,16 +30,9 @@
  * gdbsx, etc..
  */
 
-#ifdef XEN_KDB_CONFIG
-#include "../kdb/include/kdbdefs.h"
-#include "../kdb/include/kdbproto.h"
-#define DBGP(...) {(kdbdbg) ? kdbp(__VA_ARGS__):0;}
-#define DBGP1(...) {(kdbdbg>1) ? kdbp(__VA_ARGS__):0;}
-#define DBGP2(...) {(kdbdbg>2) ? kdbp(__VA_ARGS__):0;}
-#else
-#define DBGP1(...) ((void)0)
-#define DBGP2(...) ((void)0)
-#endif
+static volatile int dbg_debug;
+#define DBGP(...) {(dbg_debug) ? printk(__VA_ARGS__) : 0;}
+#define DBGP1(...) {(dbg_debug > 1) ? printk(__VA_ARGS__) : 0;}
 
 /* Returns: mfn for the given (hvm guest) vaddr */
 static unsigned long 
@@ -50,27 +43,28 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     uint32_t pfec = PFEC_page_present;
     p2m_type_t gfntype;
 
-    DBGP2("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
+    DBGP1("vaddr:%lx domid:%d\n", vaddr, dp->domain_id);
 
     *gfn = paging_gva_to_gfn(dp->vcpu[0], vaddr, &pfec);
     if ( *gfn == INVALID_GFN )
     {
-        DBGP2("kdb:bad gfn from gva_to_gfn\n");
+        DBGP1("kdb:bad gfn from gva_to_gfn\n");
         return INVALID_MFN;
     }
 
     mfn = mfn_x(get_gfn(dp, *gfn, &gfntype)); 
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
-        DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
+        DBGP1("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
         mfn = INVALID_MFN;
     }
     else
-        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+        DBGP1("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
 
     if ( mfn == INVALID_MFN )
     {
         put_gfn(dp, *gfn);
+        DBGP1("R: domid:%d gfn:%lx\n", dp->domain_id, *gfn);
         *gfn = INVALID_GFN;
     }
 
@@ -100,7 +94,7 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     unsigned long cr3 = (pgd3val ? pgd3val : dp->vcpu[0]->arch.cr3);
     unsigned long mfn = cr3 >> PAGE_SHIFT;
 
-    DBGP2("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id, 
+    DBGP1("vaddr:%lx domid:%d cr3:%lx pgd3:%lx\n", vaddr, dp->domain_id,
           cr3, pgd3val);
 
     if ( pgd3val == 0 )
@@ -109,11 +103,11 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l4e = l4t[l4_table_offset(vaddr)];
         unmap_domain_page(l4t);
         mfn = l4e_get_pfn(l4e);
-        DBGP2("l4t:%p l4to:%lx l4e:%lx mfn:%lx\n", l4t, 
-              l4_table_offset(vaddr), l4e, mfn);
+        DBGP1("l4t:%p l4to:%lx l4e:%" PRIpte " mfn:%lx\n",
+              l4t, l4_table_offset(vaddr), l4e_get_intpte(l4e), mfn);
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            DBGP1("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l4 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
 
@@ -121,12 +115,12 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP1("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
-            DBGP1("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
     }
@@ -135,20 +129,20 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP1("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
-        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
     l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
 
         unmap_domain_page(va);
         if ( gfn != INVALID_GFN )
+        {
             put_gfn(dp, gfn);
+            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, gfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     struct domain *dp = get_domain_by_id(domid);
     int hyp = (domid == DOMID_IDLE);
 
-    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
+    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
           addr, buf, len, domid, toaddr, dp);
     if ( hyp )
     {
@@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
         put_domain(dp);
     }
 
-    DBGP2("gmem:exit:len:$%d\n", len);
+    DBGP1("gmem:exit:len:$%d\n", len);
     return len;
 }
 
-- 
1.8.4


--------------070809020903090506040507
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070809020903090506040507--


         l3e = l3t[l3_table_offset(vaddr)];
         unmap_domain_page(l3t);
         mfn = l3e_get_pfn(l3e);
-        DBGP2("l3t:%p l3to:%lx l3e:%lx mfn:%lx\n", l3t, 
-              l3_table_offset(vaddr), l3e, mfn);
+        DBGP1("l3t:%p l3to:%lx l3e:%" PRIpte " mfn:%lx\n",
+              l3t, l3_table_offset(vaddr), l3e_get_intpte(l3e), mfn);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
-            DBGP1("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+            DBGP("l3 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
             return INVALID_MFN;
         }
     }
@@ -135,20 +129,20 @@ dbg_pv_va2mfn(dbgva_t vaddr, struct domain *dp, uint64_t pgd3val)
     l2e = l2t[l2_table_offset(vaddr)];
     unmap_domain_page(l2t);
     mfn = l2e_get_pfn(l2e);
-    DBGP2("l2t:%p l2to:%lx l2e:%lx mfn:%lx\n", l2t, l2_table_offset(vaddr),
-          l2e, mfn);
+    DBGP1("l2t:%p l2to:%lx l2e:%" PRIpte " mfn:%lx\n",
+          l2t, l2_table_offset(vaddr), l2e_get_intpte(l2e), mfn);
     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) ||
          (l2e_get_flags(l2e) & _PAGE_PSE) )
     {
-        DBGP1("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
+        DBGP("l2 PAGE not present. vaddr:%lx cr3:%lx\n", vaddr, cr3);
         return INVALID_MFN;
     }
     l1t = map_domain_page(mfn);
     l1e = l1t[l1_table_offset(vaddr)];
     unmap_domain_page(l1t);
     mfn = l1e_get_pfn(l1e);
-    DBGP2("l1t:%p l1to:%lx l1e:%lx mfn:%lx\n", l1t, l1_table_offset(vaddr),
-          l1e, mfn);
+    DBGP1("l1t:%p l1to:%lx l1e:%" PRIpte " mfn:%lx\n",
+          l1t, l1_table_offset(vaddr), l1e_get_intpte(l1e), mfn);
 
     return mfn_valid(mfn) ? mfn : INVALID_MFN;
 }
@@ -186,7 +180,11 @@ dbg_rw_guest_mem(dbgva_t addr, dbgbyte_t *buf, int len, struct domain *dp,
 
         unmap_domain_page(va);
         if ( gfn != INVALID_GFN )
+        {
             put_gfn(dp, gfn);
+            DBGP1("R: addr:%lx pagecnt=%ld domid:%d gfn:%lx\n",
+                  addr, pagecnt, dp->domain_id, gfn);
+        }
 
         addr += pagecnt;
         buf += pagecnt;
@@ -210,7 +208,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
     struct domain *dp = get_domain_by_id(domid);
     int hyp = (domid == DOMID_IDLE);
 
-    DBGP2("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n", 
+    DBGP1("gmem:addr:%lx buf:%p len:$%d domid:%x toaddr:%x dp:%p\n",
           addr, buf, len, domid, toaddr, dp);
     if ( hyp )
     {
@@ -226,7 +224,7 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int len, domid_t domid, int toaddr,
         put_domain(dp);
     }
 
-    DBGP2("gmem:exit:len:$%d\n", len);
+    DBGP1("gmem:exit:len:$%d\n", len);
     return len;
 }
 
-- 
1.8.4


--------------070809020903090506040507
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------070809020903090506040507--


From xen-devel-bounces@lists.xen.org Thu Jan 09 16:17:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IIm-0004Bh-4K; Thu, 09 Jan 2014 16:17:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W1IIk-0004BR-HI
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:17:46 +0000
Received: from [85.158.137.68:29676] by server-13.bemta-3.messagelabs.com id
	78/86-28603-9ABCEC25; Thu, 09 Jan 2014 16:17:45 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389284263!8178159!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27855 invoked from network); 9 Jan 2014 16:17:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:17:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; 
	d="asc'?scan'208";a="91338704"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 16:17:42 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	11:17:41 -0500
Message-ID: <1389284260.16457.96.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 9 Jan 2014 17:17:40 +0100
In-Reply-To: <D0238B2F-D009-44FE-9577-368EAAD17999@gmail.com>
References: <D0238B2F-D009-44FE-9577-368EAAD17999@gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: dilos-dev@lists.illumos.org,
	illumos-dev Developer <developer@lists.illumos.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] first port of xen-4.2 to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2337438716748712516=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2337438716748712516==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-qBiNyXZJnhgEiNZuDcHw"

--=-qBiNyXZJnhgEiNZuDcHw
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2014-01-09 at 19:03 +0400, Igor Kozhukhov wrote:
> Hello All,
> 
Hi,

> let me introduce status of port xen-4.2 to DilOS(illumos based
> platform)
> 
Wow, that sounds great! The only thing I'm not sure I understand (sorry
if you covered this already in the past) is why 4.2, rather than
something more current.

Anyway, both to show the world you are working and making progress on
this and, potentially, to attract attention and new dev/testing forces,
would you be interested in writing a blog post about this work for the
Xen Project blog?

http://blog.xen.org/

If yes, reach out to me and/or to the publicity mailing list (Cc-ed).

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-qBiNyXZJnhgEiNZuDcHw
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLOy6QACgkQk4XaBE3IOsRSdACgiHGpDYxOf9jVbL6AB3zMcByk
X2sAoKUDyDnIQ7CGKtLJmgqH3DBt+xNq
=+C++
-----END PGP SIGNATURE-----

--=-qBiNyXZJnhgEiNZuDcHw--


--===============2337438716748712516==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2337438716748712516==--


From xen-devel-bounces@lists.xen.org Thu Jan 09 16:21:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:21:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IMi-0004v5-Vc; Thu, 09 Jan 2014 16:21:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1IMh-0004uz-N3
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:21:51 +0000
Received: from [85.158.137.68:33206] by server-12.bemta-3.messagelabs.com id
	40/56-20055-E9CCEC25; Thu, 09 Jan 2014 16:21:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389284508!8242812!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3655 invoked from network); 9 Jan 2014 16:21:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:21:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89196795"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 16:21:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 11:21:47 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W1IMd-0007mF-KR;
	Thu, 09 Jan 2014 16:21:47 +0000
Message-ID: <52CECC9B.50100@citrix.com>
Date: Thu, 9 Jan 2014 16:21:47 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Simon Graham <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 16:07, Simon Graham wrote:
>>> Does this seem a reasonable explanation of the issue?
>> Yes - as long as you can also explain why a spin lock operation
>> would make it into the emulation code in the first place.
>>
> Well, that one is tough and I don't have a good answer... the only thing I would say is that in our system we ALWAYS have the shadow memory tracking enabled (to track changes to framebuffers)

Full shadow, or just logdirty?

With logdirty, frequent pagefaults will occur (which are costly in terms
of vmexits), but I would not expect emulation to occur.

Even with full shadow, emulation only kicks in for non-standard RAM,
which is basically IO to qemu, and instructions trying to write to the
pagetables themselves, which are trapped and emulated for safety reasons.

>
>>> The attached patch is for discussion purposes only - if it is deemed
>>> acceptable I'll resubmit a proper patch request against unstable.
>> I'd rather not add limited scope special casing like that, but instead
>> make the copying much more like real hardware (i.e. not just deal
>> with the 16- and 32-bit cases, and especially not rely on memcpy()
>> using 64-bit reads/writes when it can). IOW - don't use memcpy()
>> here at all (and have a single routine doing The Right Thing (tm)
>> rather than having two clones now, and perhaps more later on -
>> I'd in particular think that the read side in shadow code would also
>> need a similar adjustment).
> My concern was that memcpy is (I assume!) highly optimized - it certainly should be if it isn't and I would worry that a change to make it atomic for the purposes of instruction emulation would result in an across the board perf hit when in most cases it isn't necessary that it be atomic.
>
> This would be fine for the writeback code in the shadow module BUT the __hvm_copy routine is used generically in situations where atomicity is not required...

__hvm_copy() is probably too low to be thinking about this.  There are
many things such as grant_copy() which do not want "hardware like" copy
properties, preferring instead to have less overhead.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:23:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IOQ-00051j-V9; Thu, 09 Jan 2014 16:23:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1IOO-00051c-OA
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:23:36 +0000
Received: from [85.158.139.211:62674] by server-11.bemta-5.messagelabs.com id
	8B/55-23268-80DCEC25; Thu, 09 Jan 2014 16:23:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389284615!8836115!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25064 invoked from network); 9 Jan 2014 16:23:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:23:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Jan 2014 16:23:35 +0000
Message-Id: <52CEDB160200007800112085@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 09 Jan 2014 16:23:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 17:07, Simon Graham <simon.graham@citrix.com> wrote:
>> > The attached patch is for discussion purposes only - if it is deemed
>> > acceptable I'll resubmit a proper patch request against unstable.
>> 
>> I'd rather not add limited scope special casing like that, but instead
>> make the copying much more like real hardware (i.e. not just deal
>> with the 16- and 32-bit cases, and especially not rely on memcpy()
>> using 64-bit reads/writes when it can). IOW - don't use memcpy()
>> here at all (and have a single routine doing The Right Thing (tm)
>> rather than having two clones now, and perhaps more later on -
>> I'd in particular think that the read side in shadow code would also
>> need a similar adjustment).
> 
> My concern was that memcpy is (I assume!) highly optimized - it certainly 
> should be if it isn't and I would worry that a change to make it atomic for 
> the purposes of instruction emulation would result in an across the board 
> perf hit when in most cases it isn't necessary that it be atomic.

And I didn't mean to fiddle with memcpy(), but rather create a
specialized copying function just for the use in the context of
emulation.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> rather than having two clones now, and perhaps more later on -
>> I'd in particular think that the read side in shadow code would also
>> need a similar adjustment).
> 
> My concern was that memcpy is (I assume!) highly optimized - it certainly
> should be, if it isn't already - and I would worry that a change to make it
> atomic for the purposes of instruction emulation would result in an
> across-the-board perf hit when in most cases it isn't necessary that it be
> atomic.

And I didn't mean to fiddle with memcpy(), but rather create a
specialized copying function just for the use in the context of
emulation.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
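[Editor's note] The specialized copy routine Jan describes could look roughly like the sketch below. This is illustrative only: the helper name `emul_copy()` and its exact shape are assumptions, not the routine that eventually went into Xen. The idea is to issue a single naturally aligned store for power-of-two sizes up to 8 bytes, so another vCPU can never observe a torn value, and to fall back to a plain byte copy for everything else.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical sketch -- not the actual Xen code.  Unlike memcpy(),
 * which may legally split the transfer however it likes, this helper
 * performs one naturally aligned 1-, 2-, 4- or 8-byte store whenever
 * the size and alignment allow, matching what the emulated instruction
 * would have done on real hardware.
 */
static void emul_copy(void *dst, const void *src, unsigned int bytes)
{
    /* A single store is only atomic if the size is a power of two
     * (<= 8) and both pointers are naturally aligned for it. */
    if (bytes && bytes <= 8 && !(bytes & (bytes - 1)) &&
        !(((uintptr_t)dst | (uintptr_t)src) & (bytes - 1))) {
        switch (bytes) {
        case 1: *(volatile uint8_t  *)dst = *(const uint8_t  *)src; return;
        case 2: *(volatile uint16_t *)dst = *(const uint16_t *)src; return;
        case 4: *(volatile uint32_t *)dst = *(const uint32_t *)src; return;
        case 8: *(volatile uint64_t *)dst = *(const uint64_t *)src; return;
        }
    }
    /* Odd sizes or misaligned accesses: no atomicity guarantee. */
    memcpy(dst, src, bytes);
}
```

On x86, aligned stores up to the native word size are architecturally single accesses; `volatile` here only keeps the compiler from splitting or reordering the store. Production code would more likely use the compiler's atomic builtins or hand-written accessors.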

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:30:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IUk-0005sU-GK; Thu, 09 Jan 2014 16:30:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1IUj-0005sP-76
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:30:09 +0000
Received: from [85.158.143.35:43461] by server-2.bemta-4.messagelabs.com id
	1D/68-11386-09ECEC25; Thu, 09 Jan 2014 16:30:08 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389285006!10750455!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4525 invoked from network); 9 Jan 2014 16:30:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:30:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="91344120"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 16:30:05 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 11:30:05 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkD//7OCAIAAUwIQ
Date: Thu, 9 Jan 2014 16:30:04 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E44C4@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CEDB160200007800112085@nat28.tlf.novell.com>
In-Reply-To: <52CEDB160200007800112085@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > My concern was that memcpy is (I assume!) highly optimized - it certainly
> > should be, if it isn't already - and I would worry that a change to make it
> > atomic for the purposes of instruction emulation would result in an
> > across-the-board perf hit when in most cases it isn't necessary that it be
> > atomic.
> 
> And I didn't mean to fiddle with memcpy(), but rather create a
> specialized copying function just for the use in the context of
> emulation.
> 

-sigh- OK I'll look at that - might not be as bad as I originally thought anyway.

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:30:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IVQ-0005vw-Vk; Thu, 09 Jan 2014 16:30:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W1IVP-0005vM-RB
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:30:52 +0000
Received: from [85.158.143.35:51814] by server-3.bemta-4.messagelabs.com id
	B1/67-32360-BBECEC25; Thu, 09 Jan 2014 16:30:51 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389285049!3610386!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21765 invoked from network); 9 Jan 2014 16:30:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:30:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89200731"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 16:30:36 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	11:30:35 -0500
Message-ID: <52CECEAA.107@citrix.com>
Date: Thu, 9 Jan 2014 17:30:34 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org> <52CA78DE.9060502@citrix.com>
	<52CA9481.4090703@linaro.org> <52CBBB05.6020104@citrix.com>
	<52CC0EE8.6060205@linaro.org>
In-Reply-To: <52CC0EE8.6060205@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 15:27, Julien Grall wrote:
> On 01/07/2014 08:29 AM, Roger Pau Monné wrote:
>> On 06/01/14 12:33, Julien Grall wrote:
>>>
>>>
>>> On 01/06/2014 09:35 AM, Roger Pau Monné wrote:
>>>> On 05/01/14 22:55, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>>>>> Introduce a Xen specific nexus that is going to be in charge for
>>>>>> attaching Xen specific devices.
>>>>>
>>>>> Now that we have a xenpv bus, do we really need a specific nexus for
>>>>> Xen?
>>>>> We should be able to use the identify callback of xenpv to create the
>>>>> bus.
>>>>>
>>>>> The other part of this patch can be merged in the patch #14 "Introduce
>>>>> xenpv bus and a dummy pvcpu device".
>>>>
>>>> On x86 at least we need the Xen specific nexus, or we will fall back to
>>>> use the legacy nexus which is not what we really want.
>>>>
>>>
>>> Oh right, in any case can we use the identify callback of xenpv to add
>>> the bus?
>>
>> AFAICT this kind of bus devices don't have a identify routine, and they
>> are usually added manually from the specific nexus, see acpi or legacy.
>> Could you add the device on ARM when you detect that you are running as
>> a Xen guest, or in the generic ARM nexus if Xen is detected?
> 
> Is there any reason to not add identify callback? If it's possible, I
> would like to avoid as much as possible #ifdef XENHVM in ARM code.

Maybe the x86 world is really different from the ARM world in how nexus
works, but I rather prefer to have a #ifdef XENHVM and a BUS_ADD_CHILD
that attaches the xenpv bus in the generic ARM nexus rather than having
something that completely diverges from what buses usually do in
FreeBSD. It's going to be much more difficult to track in case of bugs,
and it's not what people expects, but that's just my opinion. I can
certainly add the identify routine if there's an agreement that it's the
best way to deal with it.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
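[Editor's note] The thread above debates whether the xenpv bus should be created via a newbus identify callback or attached explicitly from a nexus. A rough, non-compilable sketch of the identify approach, following FreeBSD newbus conventions (the function names and method table are illustrative assumptions, not code taken from this series):

```c
/* Sketch of a newbus identify method that lets the xenpv bus add
 * itself, instead of an architecture nexus attaching it explicitly. */
static void
xenpv_identify(driver_t *driver, device_t parent)
{
	if (!xen_domain())	/* only when running as a Xen guest */
		return;
	/* Add the bus as a child of the nexus if not already present. */
	if (device_find_child(parent, "xenpv", -1) == NULL)
		BUS_ADD_CHILD(parent, 0, "xenpv", 0);
}

static device_method_t xenpv_methods[] = {
	DEVMETHOD(device_identify,	xenpv_identify),
	/* ... probe/attach methods ... */
	DEVMETHOD_END
};
```

With an identify method, the `#ifdef XENHVM` hook in the generic ARM nexus that Roger describes would be unnecessary; the trade-off he raises is that the x86 side still wants a dedicated Xen nexus, so the bus would be added two different ways.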

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:31:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IVc-0005xX-CZ; Thu, 09 Jan 2014 16:31:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1IVb-0005xK-BG
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:31:03 +0000
Received: from [193.109.254.147:30939] by server-10.bemta-14.messagelabs.com
	id 81/0D-20752-6CECEC25; Thu, 09 Jan 2014 16:31:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389285062!9902165!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7832 invoked from network); 9 Jan 2014 16:31:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 16:31:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 09 Jan 2014 16:31:01 +0000
Message-Id: <52CEDCD30200007800112096@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 09 Jan 2014 16:30:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
In-Reply-To: <52CEC978.7040705@terremark.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
> Based on Mukesh's statement, attached is the rebased version of this patch 
> (labeled v3).  I included Mukesh's ack.

Unless this is meant just for reviewing purposes (albeit even then
it's likely problematic), could you please get used to sending
patch revisions with updated mail subjects (i.e. not retaining the
prior version indicator), so there is a reasonable chance to
reconstruct things by searching just the titles in a mail archive?
(It's still fine - at least as far as I'm concerned - to reply to an
earlier version, thus tying things into a single thread on the archive.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:34:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1IYb-0006E5-3b; Thu, 09 Jan 2014 16:34:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1IYZ-0006Dy-Vy
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:34:08 +0000
Received: from [193.109.254.147:14844] by server-9.bemta-14.messagelabs.com id
	1F/BC-13957-F7FCEC25; Thu, 09 Jan 2014 16:34:07 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389285246!6377206!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18967 invoked from network); 9 Jan 2014 16:34:06 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:34:06 -0000
Received: by mail-ee0-f54.google.com with SMTP id e51so1202255eek.27
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 08:34:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=F4U21s+i4XZQQyMyGi3WdHDh9HA1isj12LVdk5m9jdU=;
	b=llHodm5J5zGaaiq6VpytXOz03HvaYEve66HgfesO/Ne8gJt7jutb0FyK4vS+OSLo8x
	mHUFvv4AYMWYVfzqEXlXJRenerFtFV6tlhAmJFZKOAOAnsmFcM9EwAju1F19GK4b9bC7
	oJCG6gNwaB65fzNEXxroiTr3f7Y8oHVr+CyZ6PGs0jVaFfn+xobNb5IwIom0qX/O8TnR
	fP3fuE6WaU1TErsByyMhDyioyTvkCoBk2mkarLoCoJnJcs2n3k0nvT9+MrgUagl1UAQ4
	pSKefAKQoybpU+a8xjF61LSkzG2co5FcpOT2UEjF4b0pve2dGSauP911Y5PSK5zfEmdY
	ITgA==
X-Gm-Message-State: ALoCoQnpHylOsvfJl056qPisVabilzFuVe2jkyNkW/6+l+bImUe/vyo9WByycYCJbZmbj1uqkWmk
X-Received: by 10.14.88.134 with SMTP id a6mr4252672eef.5.1389285246091;
	Thu, 09 Jan 2014 08:34:06 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 1sm6940951eeg.4.2014.01.09.08.34.04
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 09 Jan 2014 08:34:04 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Thu,  9 Jan 2014 16:34:00 +0000
Message-Id: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The p2m is shared between the VCPUs of a domain. Currently Xen only flushes
the TLB on the local PCPU. This could result in a mismatch between the
mappings in the p2m and the TLBs.

Flush the TLB entries used by this domain on every PCPU. The flush can also be
moved out of the loop because:
    - ALLOCATE: only called for dom0 RAM allocation, so the flush is never
    needed
    - INSERT: if valid == 1, that means we have replaced a page that already
    belongs to the domain, so at worst a VCPU writes to the wrong page. This
    can happen for dom0 with the 1:1 mapping because the mapping is not
    removed from the p2m.
    - REMOVE: except for grant tables (replace_grant_host_mapping), each
    call to guest_physmap_remove_page is protected by the callers via
    get_page -> ... -> guest_physmap_remove_page -> ... -> put_page, so
    the page can't be allocated to another domain until the last put_page.
    - RELINQUISH: the domain is not running anymore, so stale entries are
    harmless.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Switch to the domain for only flush its TLBs entries
        - Move the flush out of the loop

This is a possible bug fix (found by reading the code) for Xen 4.4. I moved
the flush out of the loop, which should be safe (see the commit message for
why). Without this patch, the guest can have stale TLB entries when a VCPU is
moved to another PCPU.

Except for grant tables (I can't find the {get,put}_page calls in the
grant-table code???), all the callers are protected by a get_page before
removing the page. So if another VCPU tries to access the page before the
flush, it will just read/write the wrong page.

The trade-off of this patch is the scope of the flush. Instead of flushing
all TLB entries on the current PCPU, Xen flushes the TLB entries of a specific
VMID on every PCPU. This should be safe because create_p2m_entries only deals
with a single domain.

I don't think I have missed a case in this function. Let me know if I have.
---
 xen/arch/arm/p2m.c |   24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 11f4714..ad6f76e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
                      int mattr,
                      p2m_type_t t)
 {
-    int rc, flush;
+    int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *first = NULL, *second = NULL, *third = NULL;
     paddr_t addr;
@@ -246,10 +246,14 @@ static int create_p2m_entries(struct domain *d,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
     unsigned long count = 0;
+    unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
 
     spin_lock(&p2m->lock);
 
+    if ( d != current->domain )
+        p2m_load_VTTBR(d);
+
     addr = start_gpaddr;
     while ( addr < end_gpaddr )
     {
@@ -316,7 +320,7 @@ static int create_p2m_entries(struct domain *d,
             cur_second_offset = second_table_offset(addr);
         }
 
-        flush = third[third_table_offset(addr)].p2m.valid;
+        flush |= third[third_table_offset(addr)].p2m.valid;
 
         /* Allocate a new RAM page and attach */
         switch (op) {
@@ -373,9 +377,6 @@ static int create_p2m_entries(struct domain *d,
                 break;
         }
 
-        if ( flush )
-            flush_tlb_all_local();
-
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
         if ( op == RELINQUISH && count >= 0x2000 )
         {
@@ -392,6 +393,16 @@ static int create_p2m_entries(struct domain *d,
         addr += PAGE_SIZE;
     }
 
+    if ( flush )
+    {
+        /* At the beginning of the function, Xen is updating VTTBR
+         * with the domain where the mappings are created. In this
+         * case it's only necessary to flush TLBs on every CPUs with
+         * the current VMID (our domain).
+         */
+        flush_tlb();
+    }
+
     if ( op == ALLOCATE || op == INSERT )
     {
         unsigned long sgfn = paddr_to_pfn(start_gpaddr);
@@ -409,6 +420,9 @@ out:
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);
 
+    if ( d != current->domain )
+        p2m_load_VTTBR(current->domain);
+
     spin_unlock(&p2m->lock);
 
     return rc;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:36:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ib0-0006N9-Kg; Thu, 09 Jan 2014 16:36:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avsm@dark.recoil.org>) id 1W1Iaz-0006N2-Ds
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:36:37 +0000
Received: from [85.158.143.35:40451] by server-2.bemta-4.messagelabs.com id
	62/B2-11386-410DEC25; Thu, 09 Jan 2014 16:36:36 +0000
X-Env-Sender: avsm@dark.recoil.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389285395!10679584!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23252 invoked from network); 9 Jan 2014 16:36:35 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-7.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 16:36:35 -0000
Received: (qmail 8417 invoked by uid 10000); 9 Jan 2014 16:36:35 -0000
Date: Thu, 9 Jan 2014 16:36:34 +0000
From: Anil Madhavapeddy <anil@recoil.org>
To: xen-devel@lists.xenproject.org
Message-ID: <20140109163632.GA27164@dark.recoil.org>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: [Xen-devel] [PATCH] libxl: ocaml: guard x86-specific functions
	behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The various cpuid functions are not available on ARM, so this
makes them raise an OCaml exception.  Omitting the functions
completely results in a link failure in oxenstored due to
the missing symbols, so this is preferable to the much bigger
patch that would result from adding conditional compilation into
the OCaml interfaces.

Signed-off-by: Anil Madhavapeddy <anil@recoil.org>

---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f5cf0ed..76864cc 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 {
 	CAMLparam4(xch, domid, input, config);
 	CAMLlocal2(array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 			 c_input, (const char **)c_config, out_config);
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	failwith_xc(_H(xch));
+#endif
 	CAMLreturn(array);
 }
 
 CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value domid)
 {
 	CAMLparam2(xch, domid);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 
 	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	failwith_xc(_H(xch));
+#endif
 	CAMLreturn(Val_unit);
 }
 
@@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 {
 	CAMLparam3(xch, input, config);
 	CAMLlocal3(ret, array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 	Store_field(ret, 0, Val_bool(r));
 	Store_field(ret, 1, array);
 
+#else
+	failwith_xc(_H(xch));
+#endif
 	CAMLreturn(ret);
 }
 
-- 
1.8.1.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Iep-00079i-G8; Thu, 09 Jan 2014 16:40:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W1Ien-00079N-OJ
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:40:33 +0000
Received: from [85.158.143.35:11392] by server-2.bemta-4.messagelabs.com id
	6F/58-11386-FF0DEC25; Thu, 09 Jan 2014 16:40:31 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389285630!10707616!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_23,ML_RADAR_SPEW_LINKS_8,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13710 invoked from network); 9 Jan 2014 16:40:31 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:40:31 -0000
Received: by mail-lb0-f171.google.com with SMTP id w7so2566741lbi.30
	for <xen-devel@lists.xen.org>; Thu, 09 Jan 2014 08:40:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=csG6J9DU9O4eqiFfb65x/wGI+tpLBZADqVaW4LhDlas=;
	b=ZTjfaBRUGBiH0mNMMAC1D/XNGVMRV7XNKiSJfuV7qTENR30lLRxwx8GSc6fT3YxGBq
	xxxj4WZMCOMMEuX0CUOCEgdBL/ef9hytBE3vgKqBm47iJ6cYl8MOOuQAcglkzo7r3pQ9
	qx89cGIDSlhw2eYCJ+Y7TAsy4/LvyaXEitLwuJcC7ruKEGrhbEGaMdZmVIFZcR0zY0jv
	mfCraJS7CMif3kfMa9kVcvj72HkWyYzqUuA1+R1gNrcHPJTYjIXP3j7LBIEAQoo+GubS
	WWLZbg72LtmFpeRvScPmaQnsJ9S6kcKNst1GORB8rX29m8+NshE/f4peYNQw4uXxyGhj
	vTIw==
X-Received: by 10.112.130.35 with SMTP id ob3mr1658732lbb.2.1389285630286;
	Thu, 09 Jan 2014 08:40:30 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id r10sm1996837lag.7.2014.01.09.08.40.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 08:40:29 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1389284260.16457.96.camel@Solace>
Date: Thu, 9 Jan 2014 20:40:27 +0400
Message-Id: <B3223294-8162-471A-87C9-01EB2BBB8ABA@gmail.com>
References: <D0238B2F-D009-44FE-9577-368EAAD17999@gmail.com>
	<1389284260.16457.96.camel@Solace>
To: Dario Faggioli <dario.faggioli@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: dilos-dev@lists.illumos.org,
	illumos-dev Developer <developer@lists.illumos.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] first port of xen-4.2 to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Dario,

On Jan 9, 2014, at 8:17 PM, Dario Faggioli wrote:

> On gio, 2014-01-09 at 19:03 +0400, Igor Kozhukhov wrote:
>> Hello All,
>> 
> Hi,
> 
>> let me introduce status of port xen-4.2 to DilOS(illumos based
>> platform)
>> 
> Wow... That sounds great! The only thing I'm not sure I understand (sorry
> if you covered this already in the past) is why 4.2, instead of
> something more current.
4.2 because it is the step before 4.4 :)
4.2 is marked as stable and some people are using it - I'm interested in an additional test environment and in comparisons.

> Anyway, to both show the world you are working and making progress on
> this, and, potentially, to attract attention and new dev/testing forces,
> would you be interested to write a blog post about this work you're
> doing for the Xen-Project blog?
> 
> http://blog.xen.org/
> 
> If yes, reach out to me and/or to the publicity mailing list (Cc-ed).
It is too early to post a blog about it - I need to have a demo environment first.
For the DilOS project I have an Atlassian wiki (thanks to Atlassian):
https://dilos-dev.atlassian.net/wiki/display/DS/DilOS+platform+Home

dilos-xen-3.4-dom0 works well with the latest xen-3.4.x sources plus additional patches.

-Igor

> Thanks and Regards,
> Dario
> 
> -- 
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Iep-00079i-G8; Thu, 09 Jan 2014 16:40:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W1Ien-00079N-OJ
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:40:33 +0000
Received: from [85.158.143.35:11392] by server-2.bemta-4.messagelabs.com id
	6F/58-11386-FF0DEC25; Thu, 09 Jan 2014 16:40:31 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389285630!10707616!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_23,ML_RADAR_SPEW_LINKS_8,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13710 invoked from network); 9 Jan 2014 16:40:31 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:40:31 -0000
Received: by mail-lb0-f171.google.com with SMTP id w7so2566741lbi.30
	for <xen-devel@lists.xen.org>; Thu, 09 Jan 2014 08:40:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=csG6J9DU9O4eqiFfb65x/wGI+tpLBZADqVaW4LhDlas=;
	b=ZTjfaBRUGBiH0mNMMAC1D/XNGVMRV7XNKiSJfuV7qTENR30lLRxwx8GSc6fT3YxGBq
	xxxj4WZMCOMMEuX0CUOCEgdBL/ef9hytBE3vgKqBm47iJ6cYl8MOOuQAcglkzo7r3pQ9
	qx89cGIDSlhw2eYCJ+Y7TAsy4/LvyaXEitLwuJcC7ruKEGrhbEGaMdZmVIFZcR0zY0jv
	mfCraJS7CMif3kfMa9kVcvj72HkWyYzqUuA1+R1gNrcHPJTYjIXP3j7LBIEAQoo+GubS
	WWLZbg72LtmFpeRvScPmaQnsJ9S6kcKNst1GORB8rX29m8+NshE/f4peYNQw4uXxyGhj
	vTIw==
X-Received: by 10.112.130.35 with SMTP id ob3mr1658732lbb.2.1389285630286;
	Thu, 09 Jan 2014 08:40:30 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id r10sm1996837lag.7.2014.01.09.08.40.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 08:40:29 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1389284260.16457.96.camel@Solace>
Date: Thu, 9 Jan 2014 20:40:27 +0400
Message-Id: <B3223294-8162-471A-87C9-01EB2BBB8ABA@gmail.com>
References: <D0238B2F-D009-44FE-9577-368EAAD17999@gmail.com>
	<1389284260.16457.96.camel@Solace>
To: Dario Faggioli <dario.faggioli@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: dilos-dev@lists.illumos.org,
	illumos-dev Developer <developer@lists.illumos.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] first port of xen-4.2 to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Dario,

On Jan 9, 2014, at 8:17 PM, Dario Faggioli wrote:

> On Thu, 2014-01-09 at 19:03 +0400, Igor Kozhukhov wrote:
>> Hello All,
>> 
> Hi,
> 
>> let me introduce the status of the xen-4.2 port to DilOS (an
>> illumos-based platform)
>> 
> Wow.. That sounds great! The only thing I'm not sure I understand (sorry
> if you covered this already in the past) is why 4.2, instead of
> something more current.
4.2 because it is the step before 4.4 :)
4.2 is marked as stable and some people are using it - I'm interested in an additional test environment and in comparisons.

> Anyway, to both show the world you are working and making progress on
> this, and, potentially, to attract attention and new dev/testing forces,
> would you be interested to write a blog post about this work you're
> doing for the Xen-Project blog?
> 
> http://blog.xen.org/
> 
> If yes, reach out to me and/or to the publicity mailing list (Cc-ed).
It is too early to post a blog about it - I need to have a demo environment first.
For the DilOS project I have an Atlassian wiki (thanks to Atlassian):
https://dilos-dev.atlassian.net/wiki/display/DS/DilOS+platform+Home

dilos-xen-3.4-dom0 works well with the latest xen-3.4.x sources plus additional patches.

-Igor

> Thanks and Regards,
> Dario
> 
> -- 
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:43:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ihq-0007JC-8r; Thu, 09 Jan 2014 16:43:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1Iho-0007J5-Ve
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:43:41 +0000
Received: from [193.109.254.147:15231] by server-8.bemta-14.messagelabs.com id
	79/72-30921-CB1DEC25; Thu, 09 Jan 2014 16:43:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389285817!9905368!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5530 invoked from network); 9 Jan 2014 16:43:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:43:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,631,1384300800"; d="scan'208";a="89207834"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 16:43:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 11:43:14 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W1IhO-000855-G6;
	Thu, 09 Jan 2014 16:43:14 +0000
Message-ID: <52CED1A2.3000708@citrix.com>
Date: Thu, 9 Jan 2014 16:43:14 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Anil Madhavapeddy <anil@recoil.org>
References: <20140109163632.GA27164@dark.recoil.org>
In-Reply-To: <20140109163632.GA27164@dark.recoil.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] libxl: ocaml: guard x86-specific functions
 behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 16:36, Anil Madhavapeddy wrote:
> The various cpuid functions are not available on ARM, so this
> makes them raise an OCaml exception.  Omitting the functions
> completely them results in a link failure in oxenstored due to
> the missing symbols, so this is preferable to the much bigger
> patch that would result from adding conditional compilation into
> the OCaml interfaces.
>
> Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
>
> ---
>  tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index f5cf0ed..76864cc 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
>  {
>  	CAMLparam4(xch, domid, input, config);
>  	CAMLlocal2(array, tmp);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
>  	unsigned int c_input[2];
>  	char *c_config[4], *out_config[4];
> @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
>  			 c_input, (const char **)c_config, out_config);
>  	if (r < 0)
>  		failwith_xc(_H(xch));
> +#else
> +	failwith_xc(_H(xch));

You probably want to set xc's last error so failwith_xc() gives an
exception with a relevant error message.

~Andrew

> +#endif
>  	CAMLreturn(array);
>  }
>  
>  CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value domid)
>  {
>  	CAMLparam2(xch, domid);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
>  
>  	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
>  	if (r < 0)
>  		failwith_xc(_H(xch));
> +#else
> +	failwith_xc(_H(xch));
> +#endif
>  	CAMLreturn(Val_unit);
>  }
>  
> @@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
>  {
>  	CAMLparam3(xch, input, config);
>  	CAMLlocal3(ret, array, tmp);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
>  	unsigned int c_input[2];
>  	char *c_config[4], *out_config[4];
> @@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
>  	Store_field(ret, 0, Val_bool(r));
>  	Store_field(ret, 1, array);
>  
> +#else
> +	failwith_xc(_H(xch));
> +#endif
>  	CAMLreturn(ret);
>  }
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:56:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Itw-00088S-5m; Thu, 09 Jan 2014 16:56:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W1Itu-00088N-Sx
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 16:56:10 +0000
Received: from [85.158.143.35:25964] by server-3.bemta-4.messagelabs.com id
	3F/3E-32360-AA4DEC25; Thu, 09 Jan 2014 16:56:10 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389286568!10676679!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16419 invoked from network); 9 Jan 2014 16:56:09 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 9 Jan 2014 16:56:09 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W1Ixe-0007qU-4m; Thu, 09 Jan 2014 17:00:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389286802.30160@bugs.xenproject.org>
References: <20140108164617.GA20476@aepfle.de>
	<1389286148.19805.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1389286148.19805.50.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Thu, 09 Jan 2014 17:00:02 +0000
Subject: [Xen-devel] Processed: Re:  global keymap= option not recognized
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #31 rooted at `<20140108164617.GA20476@aepfle.de>'
Title: `Re: [Xen-devel] global keymap= option not recognized'
> title it xl: global keymap= option not recognised
Set title for #31 to `xl: global keymap= option not recognised'
> prune it <20140109115220.GA32437@zion.uk.xensource.com>
Prune `<20140109115220.GA32437@zion.uk.xensource.com>' from #31
> owner it Wei Liu <wei.liu2@citrix.com>
Change owner for #31 to `Wei Liu <wei.liu2@citrix.com>'
> thanks
Finished processing.

Modified/created Bugs:
 - 31: http://bugs.xenproject.org/xen/bug/31 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:58:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ivq-0008ED-MW; Thu, 09 Jan 2014 16:58:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1Ivp-0008E7-D5
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:58:09 +0000
Received: from [85.158.143.35:25329] by server-1.bemta-4.messagelabs.com id
	7F/2B-02132-025DEC25; Thu, 09 Jan 2014 16:58:08 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389286687!10712347!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10900 invoked from network); 9 Jan 2014 16:58:07 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:58:07 -0000
Received: by mail-ee0-f53.google.com with SMTP id b57so1463986eek.12
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 08:58:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Lt9aWjvaqtrgesNP25LZrLrzpkJSUUG3hn3JzZdkDBI=;
	b=k4+DCmhEI0ccWCUeYR7KEQF+i/L09o/iwAgDWlRIIOxmsMalH8lv+IG7o/3m6se7J0
	aYcd2CzeOr0SIc35HEWXPm93NF3wS7FO8cOaJ08JUiF1lILQa3ih1A89YAUCJafd8BD1
	QFcPEz7rzbh6KVDGZrxeLRmFkfMi3R93fL00bZenseujMAQcP3r0/PrvYYGjKdNyqJvq
	tIe+lzLsOQsO9ub5Ch+yeyYn+gsGRwVRTHFDROGQgvq6/BN45ZafMpf5hMxH00hyV8WX
	IAGvcV0D87m/BESDvVm3LQCtkJ00b8RQYvg00p2/n75avznQirLgrBpja27JBoCSp5Pv
	XvAg==
X-Gm-Message-State: ALoCoQkxmtt8WjaTbQK9w3eaDD2LdbX9VuLvOX94ZmN9kGte4ndlA34tXtswG7cppwXo4VWDfoqZ
X-Received: by 10.14.208.199 with SMTP id q47mr4368478eeo.77.1389286687387;
	Thu, 09 Jan 2014 08:58:07 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id b41sm7127152eef.16.2014.01.09.08.58.05
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 09 Jan 2014 08:58:06 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Thu,  9 Jan 2014 16:58:03 +0000
Message-Id: <1389286683-11656-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: correct flush_tlb_mask behaviour
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, flush_tlb_mask is used in the common code:
    - alloc_heap_pages: the flush is only called if the newly allocated
    page was previously used by a domain, so flushing only the non-secure
    non-hyp inner-shareable TLBs is sufficient.
    - common/grant-table.c: every call to flush_tlb_mask is made for the
    current domain, so a TLB flush by current VMID inner-shareable is enough.

The current code only flushes the hypervisor TLB on the current PCPU. For
now, flush the non-secure non-hyp TLBs on every PCPU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch is a bug fix for Xen 4.4. We were safe until now because there
is already a flush in create_p2m_entries if the previous mapping was valid.

For Xen 4.5, we should optimise the function to avoid flushing every VMID
each time a new page is allocated.
---
 xen/arch/arm/smp.c                   |    3 ++-
 xen/include/asm-arm/arm32/flushtlb.h |   11 +++++++++++
 xen/include/asm-arm/arm64/flushtlb.h |   11 +++++++++++
 3 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
index 4042db5..30203b8 100644
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ -4,11 +4,12 @@
 #include <asm/cpregs.h>
 #include <asm/page.h>
 #include <asm/gic.h>
+#include <asm/flushtlb.h>
 
 void flush_tlb_mask(const cpumask_t *mask)
 {
     /* No need to IPI other processors on ARM, the processor takes care of it. */
-    flush_xen_data_tlb();
+    flush_tlb_all();
 }
 
 void smp_send_event_check_mask(const cpumask_t *mask)
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index ab166f3..7183a07 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -34,6 +34,17 @@ static inline void flush_tlb_all_local(void)
     isb();
 }
 
+/* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    dsb();
+
+    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
+
+    dsb();
+    isb();
+}
+
 #endif /* __ASM_ARM_ARM32_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 9ce79a8..a73df92 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -34,6 +34,17 @@ static inline void flush_tlb_all_local(void)
         : : : "memory");
 }
 
+/* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    asm volatile(
+        "dsb sy;"
+        "tlbi alle1is;"
+        "dsb sy;"
+        "isb;"
+        : : : "memory");
+}
+
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
  * Local variables:
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 16:58:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 16:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ivq-0008ED-MW; Thu, 09 Jan 2014 16:58:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1Ivp-0008E7-D5
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 16:58:09 +0000
Received: from [85.158.143.35:25329] by server-1.bemta-4.messagelabs.com id
	7F/2B-02132-025DEC25; Thu, 09 Jan 2014 16:58:08 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389286687!10712347!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10900 invoked from network); 9 Jan 2014 16:58:07 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 16:58:07 -0000
Received: by mail-ee0-f53.google.com with SMTP id b57so1463986eek.12
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 08:58:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Lt9aWjvaqtrgesNP25LZrLrzpkJSUUG3hn3JzZdkDBI=;
	b=k4+DCmhEI0ccWCUeYR7KEQF+i/L09o/iwAgDWlRIIOxmsMalH8lv+IG7o/3m6se7J0
	aYcd2CzeOr0SIc35HEWXPm93NF3wS7FO8cOaJ08JUiF1lILQa3ih1A89YAUCJafd8BD1
	QFcPEz7rzbh6KVDGZrxeLRmFkfMi3R93fL00bZenseujMAQcP3r0/PrvYYGjKdNyqJvq
	tIe+lzLsOQsO9ub5Ch+yeyYn+gsGRwVRTHFDROGQgvq6/BN45ZafMpf5hMxH00hyV8WX
	IAGvcV0D87m/BESDvVm3LQCtkJ00b8RQYvg00p2/n75avznQirLgrBpja27JBoCSp5Pv
	XvAg==
X-Gm-Message-State: ALoCoQkxmtt8WjaTbQK9w3eaDD2LdbX9VuLvOX94ZmN9kGte4ndlA34tXtswG7cppwXo4VWDfoqZ
X-Received: by 10.14.208.199 with SMTP id q47mr4368478eeo.77.1389286687387;
	Thu, 09 Jan 2014 08:58:07 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id b41sm7127152eef.16.2014.01.09.08.58.05
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 09 Jan 2014 08:58:06 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Thu,  9 Jan 2014 16:58:03 +0000
Message-Id: <1389286683-11656-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: correct flush_tlb_mask behaviour
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, flush_tlb_mask is used in the common code:
    - alloc_heap_pages: the flush is only called if the newly allocated
    page was used by a domain before, so a non-secure non-hypervisor
    inner-shareable TLB flush is sufficient.
    - common/grant-table.c: every call to flush_tlb_mask is made with
    the current domain, so an inner-shareable TLB flush by the current
    VMID is enough.

The current code only flushes the hypervisor TLB on the current PCPU.
For now, flush the non-secure non-hypervisor TLBs on every PCPU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch is a bug fix for Xen 4.4. We were safe until now because
there is already a flush in create_p2m_entries if the previous mapping
was valid.

For Xen 4.5, we should optimize the function to avoid flushing every
VMID each time we allocate a new page.
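To make the scoping distinction above concrete, here is a toy model in plain C (invented names, not Xen code): each TLB entry is tagged with the VMID that installed it. The patch's flush_tlb_all behaves like the first function, invalidating every guest VMID; the Xen 4.5 optimization sketched above would behave like the second, touching only the current VMID.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model, not Xen code: entries are tagged with the installing VMID. */
struct tlb_entry {
    unsigned vmid;
    bool valid;
};

/* Like the patch's flush_tlb_all: every guest VMID is invalidated. */
static void flush_all_guest(struct tlb_entry *tlb, size_t n)
{
    for (size_t i = 0; i < n; i++)
        tlb[i].valid = false;
}

/* Like the suggested per-VMID optimization: other guests keep their
 * entries, only the current VMID is invalidated. */
static void flush_current_vmid(struct tlb_entry *tlb, size_t n, unsigned vmid)
{
    for (size_t i = 0; i < n; i++)
        if (tlb[i].vmid == vmid)
            tlb[i].valid = false;
}
```

The per-VMID variant is what makes the grant-table callers cheap: they always operate on the current domain, so invalidating unrelated VMIDs is pure overhead.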
---
 xen/arch/arm/smp.c                   |    3 ++-
 xen/include/asm-arm/arm32/flushtlb.h |   11 +++++++++++
 xen/include/asm-arm/arm64/flushtlb.h |   11 +++++++++++
 3 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
index 4042db5..30203b8 100644
--- a/xen/arch/arm/smp.c
+++ b/xen/arch/arm/smp.c
@@ -4,11 +4,12 @@
 #include <asm/cpregs.h>
 #include <asm/page.h>
 #include <asm/gic.h>
+#include <asm/flushtlb.h>
 
 void flush_tlb_mask(const cpumask_t *mask)
 {
     /* No need to IPI other processors on ARM, the processor takes care of it. */
-    flush_xen_data_tlb();
+    flush_tlb_all();
 }
 
 void smp_send_event_check_mask(const cpumask_t *mask)
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index ab166f3..7183a07 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -34,6 +34,17 @@ static inline void flush_tlb_all_local(void)
     isb();
 }
 
+/* Flush inner-shareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    dsb();
+
+    WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
+
+    dsb();
+    isb();
+}
+
 #endif /* __ASM_ARM_ARM32_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/arm64/flushtlb.h b/xen/include/asm-arm/arm64/flushtlb.h
index 9ce79a8..a73df92 100644
--- a/xen/include/asm-arm/arm64/flushtlb.h
+++ b/xen/include/asm-arm/arm64/flushtlb.h
@@ -34,6 +34,17 @@ static inline void flush_tlb_all_local(void)
         : : : "memory");
 }
 
+/* Flush inner-shareable TLBs, all VMIDs, non-hypervisor mode */
+static inline void flush_tlb_all(void)
+{
+    asm volatile(
+        "dsb sy;"
+        "tlbi alle1is;"
+        "dsb sy;"
+        "isb;"
+        : : : "memory");
+}
+
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
  * Local variables:
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1J2G-0008Uh-Id; Thu, 09 Jan 2014 17:04:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avsm@dark.recoil.org>) id 1W1J2E-0008Uc-9P
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 17:04:46 +0000
Received: from [85.158.139.211:33026] by server-5.bemta-5.messagelabs.com id
	CE/DD-14928-DA6DEC25; Thu, 09 Jan 2014 17:04:45 +0000
X-Env-Sender: avsm@dark.recoil.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389287084!8810544!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1020 invoked from network); 9 Jan 2014 17:04:45 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-14.tower-206.messagelabs.com with SMTP;
	9 Jan 2014 17:04:45 -0000
Received: (qmail 2372 invoked by uid 10000); 9 Jan 2014 17:04:44 -0000
Date: Thu, 9 Jan 2014 17:04:44 +0000
From: Anil Madhavapeddy <anil@recoil.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140109170444.GA938@dark.recoil.org>
References: <20140109163632.GA27164@dark.recoil.org>
	<52CED1A2.3000708@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CED1A2.3000708@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] libxl: ocaml: guard x86-specific functions
 behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 04:43:14PM +0000, Andrew Cooper wrote:
> On 09/01/14 16:36, Anil Madhavapeddy wrote:
> > The various cpuid functions are not available on ARM, so this
> > makes them raise an OCaml exception.  Omitting the functions
> > completely results in a link failure in oxenstored due to
> > the missing symbols, so this is preferable to the much bigger
> > patch that would result from adding conditional compilation into
> > the OCaml interfaces.
> >
> > Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
> >
> > ---
> >  tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index f5cf0ed..76864cc 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
> >  {
> >  	CAMLparam4(xch, domid, input, config);
> >  	CAMLlocal2(array, tmp);
> > +#if defined(__i386__) || defined(__x86_64__)
> >  	int r;
> >  	unsigned int c_input[2];
> >  	char *c_config[4], *out_config[4];
> > @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
> >  			 c_input, (const char **)c_config, out_config);
> >  	if (r < 0)
> >  		failwith_xc(_H(xch));
> > +#else
> > +	failwith_xc(_H(xch));
> 
> You probably want to set xc's last error so failwith_xc() gives an
> exception with a relevant error message.

Yeah; I'm just stumbling through getting my Cubieboard2 dom0 to boot a VM
at the moment, so I'll test out oxenstored when the compile finishes and
resubmit the patch.
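The fallback Andrew suggests could look roughly like this stand-alone sketch: guard the x86-only body behind the same ifdef, and set a meaningful errno before failing so the raised exception carries a useful message. The function name here is illustrative, not the actual libxc/stub API.

```c
#include <assert.h>
#include <errno.h>

/* Illustrative sketch (not the real stub): on non-x86 builds, record
 * why the call failed before raising the failure, instead of failing
 * with whatever stale error happens to be set. */
static int domain_cpuid_set(void)
{
#if defined(__i386__) || defined(__x86_64__)
    /* ... real cpuid policy work would go here ... */
    return 0;
#else
    errno = ENOSYS;   /* "Function not implemented": a clear message */
    return -1;
#endif
}
```

With ENOSYS recorded, the OCaml side can report "function not implemented" rather than an unrelated leftover error.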

-- 
Anil Madhavapeddy                                 http://anil.recoil.org

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:25:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1JLW-0002Fc-60; Thu, 09 Jan 2014 17:24:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1JLU-0002FX-T5
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 17:24:41 +0000
Received: from [85.158.139.211:49773] by server-17.bemta-5.messagelabs.com id
	5D/0A-19152-85BDEC25; Thu, 09 Jan 2014 17:24:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389288277!8845059!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30408 invoked from network); 9 Jan 2014 17:24:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:24:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89226283"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 17:24:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 12:24:36 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1JLQ-0003so-1L;
	Thu, 09 Jan 2014 17:24:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1JLP-0002xr-Q9;
	Thu, 09 Jan 2014 17:24:35 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.56147.409374.599573@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 17:24:35 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1387542927-4727-1-git-send-email-ian.campbell@citrix.com>
References: <1387542927-4727-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [OSSTEST PATCH v2] do not install xend for xl tests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[OSSTEST PATCH v2] do not install xend for xl tests"):
> We need to check that xl works correctly when xend is not even installed (in
> case we are subtly relying on some file which xend installs).

I have committed this and thrown it into the push gate.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:29:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:29:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1JQ0-0002W9-Su; Thu, 09 Jan 2014 17:29:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1JPz-0002W3-0y
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 17:29:19 +0000
Received: from [85.158.143.35:57736] by server-2.bemta-4.messagelabs.com id
	98/C7-11386-E6CDEC25; Thu, 09 Jan 2014 17:29:18 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389288556!10609285!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9743 invoked from network); 9 Jan 2014 17:29:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:29:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89227883"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 17:29:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 12:29:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1JPu-0000Fl-Cv;
	Thu, 09 Jan 2014 17:29:14 +0000
Date: Thu, 9 Jan 2014 17:28:21 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52CEC370.10503@citrix.com>
Message-ID: <alpine.DEB.2.02.1401091722150.21510@kaball.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
	<52CEC370.10503@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, David Vrabel wrote:
> On 09/01/14 15:30, Wei Liu wrote:
> > On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
> >> This patch contains the new definitions necessary for grant mapping.
> >>
> >> v2:
> >> - move unmapping to separate thread. The NAPI instance has to be scheduled
> >>   even from thread context, which can cause huge delays
> >> - that causes unfortunately bigger struct xenvif
> >> - store grant handle after checking validity
> >>
> >> v3:
> >> - fix comment in xenvif_tx_dealloc_action()
> >> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
> >>   unnecessary m2p_override. Also remove pages_to_[un]map members
> > 
> > Is it worth having another function,
> > gnttab_unmap_refs_no_m2p_override, in the Xen core driver, or just
> > adding a parameter to control whether we need to touch m2p_override?
> > I *think* it will benefit the block driver as well?
> 
> add_m2p_override and remove_m2p_override calls should be moved into the
> gntdev device as that should be the only user.

First of all, the gntdev device is common code, while the m2p_override
is an x86 concept.

Then I would like to point out that there are no guarantees that a
network driver, or any other kernel subsystem, won't come to rely on
mfn_to_pfn translations for any reason at any time.
It just happens that today the only known user is gupf, but tomorrow,
who knows?
If we move the m2p_override calls to the gntdev device somehow (avoid
ifdefs, please), we should be very well aware of the risks involved.

Of course my practical self realizes that we don't want a performance
regression and this is the quickest way to fix it, so I am not
completely opposed to it.
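For readers following along, the m2p_override mechanism under discussion can be modelled as a small shadow table in plain C (a toy sketch with invented names, nothing from the real Linux implementation): the global machine-to-physical table gives the default mfn-to-pfn answer, and an override entry temporarily shadows it while a granted foreign page is mapped locally.

```c
#include <assert.h>

#define NR_MFNS 8
#define NO_OVERRIDE ((unsigned)-1)

/* Toy model, not Linux code: m2p[] is the global machine-to-physical
 * table; m2p_override[] shadows individual entries while a foreign
 * (granted) page is mapped into a local pfn. */
static unsigned m2p[NR_MFNS];
static unsigned m2p_override[NR_MFNS];

static void m2p_init(void)
{
    for (unsigned i = 0; i < NR_MFNS; i++) {
        m2p[i] = i;                    /* identity default */
        m2p_override[i] = NO_OVERRIDE;
    }
}

static void add_m2p_override(unsigned mfn, unsigned local_pfn)
{
    m2p_override[mfn] = local_pfn;
}

static void remove_m2p_override(unsigned mfn)
{
    m2p_override[mfn] = NO_OVERRIDE;
}

static unsigned mfn_to_pfn(unsigned mfn)
{
    /* An override, when present, wins over the global table. */
    return m2p_override[mfn] != NO_OVERRIDE ? m2p_override[mfn] : m2p[mfn];
}
```

Skipping the override update, as the netback change does for speed, is safe only while nothing later asks mfn_to_pfn about that page — which is exactly the risk raised above.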

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:33:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1JTj-00038S-Px; Thu, 09 Jan 2014 17:33:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1JTi-00038L-WF
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 17:33:11 +0000
Received: from [85.158.143.35:2495] by server-1.bemta-4.messagelabs.com id
	43/45-02132-65DDEC25; Thu, 09 Jan 2014 17:33:10 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389288788!10774771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4465 invoked from network); 9 Jan 2014 17:33:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:33:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91374136"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 17:33:08 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL01.citrite.net ([10.13.107.78]) with mapi id 14.02.0342.004;
	Thu, 9 Jan 2014 12:33:07 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkD//7MDgIAAUXVA
Date: Thu, 9 Jan 2014 17:33:07 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
In-Reply-To: <52CECC9B.50100@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Well, that one is tough and I don't have a good answer... the only thing I
> would say is that in our system we ALWAYS have the shadow memory
> tracking enabled (to track changes to framebuffers)
> 
> Full shadow, or just logdirty?
> 
> With logdirty, frequent pagefaults will occur (which are costly in terms
> of vmexits), but I would not expect emulation to occur.
> 
> Even with full shadow, emulation only kicks in for non-standard RAM,
> which is basically IO to qemu, and instructions trying to write to the
> pagetables themselves, which are trapped and emulated for safety reasons.
> 

logdirty (somewhat modified to enable multiple framebuffers to be tracked).

I agree that it _shouldn't_ end up emulating -- but the shadow page fault routine has a ton of code paths that I've never managed to fully grok.

(As an aside, I've previously looked at other cases where the shadow code unexpectedly ends up emulating instructions and hangs VMs because the shadow module doesn't have a proper implementation of the x86_emulate callbacks... e.g. if you try to run the old MS Virtual Server product inside a Xen VM that has logdirty enabled, it _will_ hard hang.)

> > My concern was that memcpy is (I assume!) highly optimized - it certainly
> should be if it isn't already - and I would worry that a change to make it atomic
> for the purposes of instruction emulation would result in an across-the-board
> perf hit when in most cases it isn't necessary that it be atomic.
> >
> > This would be fine for the writeback code in the shadow module BUT the
> __hvm_copy routine is used generically in situations where atomicity is not
> required...
> 
> __hvm_copy() is probably too low to be thinking about this.  There are
> many things such as grant_copy() which do not want "hardware like" copy
> properties, preferring instead to have less overhead.
> 

Yeah... I'll rework the patch to do this...
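To make the trade-off above concrete, here is a minimal sketch (illustrative names, not Xen's actual writeback code) of the approach being discussed: keep the optimised memcpy for bulk copies, but use a single size-matched store for naturally aligned 1/2/4/8-byte emulated writes, which x86 guarantees other vCPUs cannot observe half-done.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch: atomic-when-it-matters writeback for emulated instructions.
 * A single naturally aligned store of 1/2/4/8 bytes is atomic on x86,
 * so only those cases need special handling; everything else keeps the
 * fast bulk memcpy.  Function and variable names are hypothetical.
 */
static void emulate_writeback(void *dst, const void *src, unsigned int bytes)
{
    unsigned long addr = (unsigned long)dst;

    /* Single store for small, naturally aligned sizes => atomic. */
    if ( (bytes == 1 || bytes == 2 || bytes == 4 || bytes == 8) &&
         !(addr & (bytes - 1)) )
    {
        switch ( bytes )
        {
        case 1: *(volatile uint8_t  *)dst = *(const uint8_t  *)src; break;
        case 2: *(volatile uint16_t *)dst = *(const uint16_t *)src; break;
        case 4: *(volatile uint32_t *)dst = *(const uint32_t *)src; break;
        case 8: *(volatile uint64_t *)dst = *(const uint64_t *)src; break;
        }
        return;
    }

    /* Larger or misaligned writes keep the optimised bulk copy. */
    memcpy(dst, src, bytes);
}
```

Doing the size check at the emulation-writeback call site, rather than inside a generic copy routine like __hvm_copy(), is what avoids the across-the-board cost for callers that never needed atomicity.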

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:47:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Jgx-0003uB-Bx; Thu, 09 Jan 2014 17:46:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1Jgv-0003u6-R9
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 17:46:50 +0000
Received: from [85.158.139.211:34829] by server-6.bemta-5.messagelabs.com id
	1F/80-16310-980EEC25; Thu, 09 Jan 2014 17:46:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389289606!8819376!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11885 invoked from network); 9 Jan 2014 17:46:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:46:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89234514"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 17:46:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	12:46:45 -0500
Message-ID: <52CEE084.5000409@citrix.com>
Date: Thu, 9 Jan 2014 17:46:44 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
	<52CEC370.10503@citrix.com>
	<alpine.DEB.2.02.1401091722150.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401091722150.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 17:28, Stefano Stabellini wrote:
> On Thu, 9 Jan 2014, David Vrabel wrote:
>> On 09/01/14 15:30, Wei Liu wrote:
>>> On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
>>>> This patch contains the new definitions necessary for grant mapping.
>>>>
>>>> v2:
>>>> - move unmapping to separate thread. The NAPI instance has to be scheduled
>>>>   even from thread context, which can cause huge delays
>>>> - that causes unfortunately bigger struct xenvif
>>>> - store grant handle after checking validity
>>>>
>>>> v3:
>>>> - fix comment in xenvif_tx_dealloc_action()
>>>> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>>>>   unnecessary m2p_override. Also remove pages_to_[un]map members
>>>
>>> Is it worthwhile to have another function called
>>> gnttab_unmap_refs_no_m2p_override in the Xen core driver, or just add a
>>> parameter to control whether we need to touch m2p_override? I *think* it
>>> will benefit the block driver as well?
>>
>> add_m2p_override and remove_m2p_override calls should be moved into the
>> gntdev device as that should be the only user.
> 
> First of all the gntdev device is common code, while the m2p_override is
> an x86 concept.

m2p_add_override() and m2p_remove_override() are already called from
common code and ARM already provides inline stubs.

The m2p override mechanism is also broken by design (local PFN to
foreign MFN may be many-to-one, but the m2p override only works if local
PFN to foreign MFN is one-to-one). So I want the m2p override to be only
used where it is /currently/ necessary.  I think there should be no new
users of it nor should it be considered a fix for any other use case.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:51:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Jlf-0004d9-2p; Thu, 09 Jan 2014 17:51:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1Jld-0004d3-Ji
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 17:51:41 +0000
Received: from [85.158.137.68:46274] by server-8.bemta-3.messagelabs.com id
	B1/7F-31081-CA1EEC25; Thu, 09 Jan 2014 17:51:40 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389289898!8195201!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14728 invoked from network); 9 Jan 2014 17:51:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:51:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89235786"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 17:51:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 12:51:38 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1JlZ-0000bI-MC;
	Thu, 09 Jan 2014 17:51:37 +0000
Date: Thu, 9 Jan 2014 17:50:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389269716.27473.73.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401091750120.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140109121308.GH3081@arm.com>
	<1389269716.27473.73.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"nico@linaro.org" <nico@linaro.org>, "olof@lixom.net" <olof@lixom.net>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>
Subject: Re: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Ian Campbell wrote:
> On Thu, 2014-01-09 at 12:13 +0000, Catalin Marinas wrote:
> > On Wed, Jan 08, 2014 at 06:49:56PM +0000, Stefano Stabellini wrote:
> > >  arch/arm64/Kconfig         |   20 ++++++++++++++++++++
> > >  arch/arm64/kernel/Makefile |    1 +
> > >  2 files changed, 21 insertions(+)
> > [...]
> > > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > > index 5ba2fd4..1dee735 100644
> > > --- a/arch/arm64/kernel/Makefile
> > > +++ b/arch/arm64/kernel/Makefile
> > > @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
> > >  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
> > >  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
> > >  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> > > +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
> > 
> > Did you forget a git add?
> 
> I was just about to say the same thing for the previous arm patch too.

That is what I get for still using patch queues :-/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:52:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Jm3-0004gx-MO; Thu, 09 Jan 2014 17:52:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W1Jm2-0004gf-5c
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 17:52:06 +0000
Received: from [85.158.137.68:20763] by server-17.bemta-3.messagelabs.com id
	3F/67-15965-5C1EEC25; Thu, 09 Jan 2014 17:52:05 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389289922!8195261!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15804 invoked from network); 9 Jan 2014 17:52:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 17:52:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91380739"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 17:52:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 12:52:02 -0500
Received: from [10.80.3.142] (helo=cuijk.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<rob.hoes@citrix.com>)	id 1W1JWx-0000Nk-8X;
	Thu, 09 Jan 2014 17:36:31 +0000
From: Rob Hoes <rob.hoes@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 17:36:21 +0000
Message-ID: <1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
References: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Rob Hoes <rob.hoes@citrix.com>, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, dave.scott@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 3/3] libxl: ocaml: use 'for_app_registration'
	in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the application to pass a token to libxl in the fd/timeout
registration callbacks, which it receives back in modification or
deregistration callbacks.

It turns out that this is essential for timeout handling, in order to
identify which timeout to change on a modify event.
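The token-passing pattern can be sketched in plain OCaml as follows. This is a self-contained model only: the function names, the `[]` event lists, and the string tokens are illustrative stand-ins, not the real Xenlight API, where the token is the abstract type variable 'b.

```ocaml
(* Model of the callback protocol: the app returns a token from the
   registration callback, and libxl hands that token back on modify and
   deregister, so the app can locate its own per-registration state. *)

let registrations : (int, string) Hashtbl.t = Hashtbl.create 8

(* Registration: mint a token and remember it; libxl keeps it as for_app. *)
let fd_register fd _events =
  let token = Printf.sprintf "fd-%d" fd in
  Hashtbl.replace registrations fd token;
  token

(* Modification: the token identifies which registration to update;
   the callback may return a replacement token. *)
let fd_modify fd token _events =
  assert (Hashtbl.find registrations fd = token);
  token

(* Deregistration: the token comes back one last time for cleanup. *)
let fd_deregister fd token =
  assert (Hashtbl.find registrations fd = token);
  Hashtbl.remove registrations fd

let () =
  let t = fd_register 5 [] in
  let t = fd_modify 5 t [] in
  fd_deregister 5 t;
  assert (not (Hashtbl.mem registrations 5))
```

Without the token, the modify callback would have no way to tell which of several outstanding timeouts it refers to, which is the problem this patch fixes.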

Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
Acked-by: David Scott <dave.scott@eu.citrix.com>

---
v2:
* assert if for_app == NULL
* catch any exceptions from callbacks
* use goto-style error handling ;)

v3:
* for timeouts, clean up for_app in occurred_timeout (not in
  timeout_deregister)
* improve comments
* abort in fd_deregister when the app raises an exception

v4:
* made for_app a value inside the handles struct, rather than a value*
* ensure handles are cleaned up in the error path
* rename timeout_modify to timeout_fire_now on the ocaml side
---
 tools/ocaml/libs/xl/xenlight.ml.in   |    4 +-
 tools/ocaml/libs/xl/xenlight.mli.in  |   10 +--
 tools/ocaml/libs/xl/xenlight_stubs.c |  147 +++++++++++++++++++++++++++++-----
 3 files changed, 135 insertions(+), 26 deletions(-)

diff --git a/tools/ocaml/libs/xl/xenlight.ml.in b/tools/ocaml/libs/xl/xenlight.ml.in
index 47f3487..80e620a 100644
--- a/tools/ocaml/libs/xl/xenlight.ml.in
+++ b/tools/ocaml/libs/xl/xenlight.ml.in
@@ -68,12 +68,12 @@ module Async = struct
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
 	external osevent_occurred_timeout : ctx -> for_libxl -> unit = "stub_libxl_osevent_occurred_timeout"
 
-	let osevent_register_hooks ctx ~user ~fd_register ~fd_modify ~fd_deregister ~timeout_register ~timeout_modify =
+	let osevent_register_hooks ctx ~user ~fd_register ~fd_modify ~fd_deregister ~timeout_register ~timeout_fire_now =
 		Callback.register "libxl_fd_register" fd_register;
 		Callback.register "libxl_fd_modify" fd_modify;
 		Callback.register "libxl_fd_deregister" fd_deregister;
 		Callback.register "libxl_timeout_register" timeout_register;
-		Callback.register "libxl_timeout_modify" timeout_modify;
+		Callback.register "libxl_timeout_fire_now" timeout_fire_now;
 		osevent_register_hooks' ctx user
 
 	let async_register_callback ~async_callback =
diff --git a/tools/ocaml/libs/xl/xenlight.mli.in b/tools/ocaml/libs/xl/xenlight.mli.in
index b9819e1..b2c06b5 100644
--- a/tools/ocaml/libs/xl/xenlight.mli.in
+++ b/tools/ocaml/libs/xl/xenlight.mli.in
@@ -68,11 +68,11 @@ module Async : sig
 
 	val osevent_register_hooks : ctx ->
 		user:'a ->
-		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> unit) ->
-		fd_modify:('a -> Unix.file_descr -> event list -> unit) ->
-		fd_deregister:('a -> Unix.file_descr -> unit) ->
-		timeout_register:('a -> int64 -> int64 -> for_libxl -> unit) ->
-		timeout_modify:('a -> unit) ->
+		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> 'b) ->
+		fd_modify:('a -> Unix.file_descr -> 'b -> event list -> 'b) ->
+		fd_deregister:('a -> Unix.file_descr -> 'b -> unit) ->
+		timeout_register:('a -> int64 -> int64 -> for_libxl -> 'c) ->
+		timeout_fire_now:('a -> 'c -> 'c) ->
 		osevent_hooks
 
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2e2606a..21fbe00 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -31,6 +31,7 @@
 #include <libxl_utils.h>
 
 #include <unistd.h>
+#include <assert.h>
 
 #include "caml_xentoollog.h"
 
@@ -1211,14 +1212,20 @@ value Val_poll_events(short events)
 	CAMLreturn(event_list);
 }
 
+/* The process for dealing with the for_app_registration_  values in the
+ * callbacks below (GC registrations etc) is similar to the way for_callback is
+ * handled in the asynchronous operations above. */
+
 int fd_register(void *user, int fd, void **for_app_registration_out,
                      short events, void *for_libxl)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1230,10 +1237,25 @@ int fd_register(void *user, int fd, void **for_app_registration_out,
 	args[2] = Val_poll_events(events);
 	args[3] = (value) for_libxl;
 
-	caml_callbackN(*func, 4, args);
+	for_app = malloc(sizeof(value));
+	if (!for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	caml_register_global_root(for_app);
+	*for_app_registration_out = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int fd_modify(void *user, int fd, void **for_app_registration_update,
@@ -1241,9 +1263,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 3);
+	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1252,21 +1279,37 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
-	args[2] = Val_poll_events(events);
+	args[2] = *for_app;
+	args[3] = Val_poll_events(events);
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
 
-	caml_callbackN(*func, 3, args);
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void fd_deregister(void *user, int fd, void *for_app_registration)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 2);
+	CAMLlocalN(args, 3);
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = for_app_registration;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1275,12 +1318,26 @@ void fd_deregister(void *user, int fd, void *for_app_registration)
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
+	args[2] = *for_app;
+
+	caml_callbackN_exn(*func, 3, args);
+	/* This hook does not return error codes, so the best thing we can do
+	 * to avoid trouble, if we catch an exception from the app, is abort. */
+	if (Is_exception_result(*for_app))
+		abort();
+
+	caml_remove_global_root(for_app);
+	free(for_app);
 
-	caml_callbackN(*func, 2, args);
 	CAMLdone;
 	caml_enter_blocking_section();
 }
 
+struct timeout_handles {
+	void *for_libxl;
+	value for_app;
+};
+
 int timeout_register(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl)
 {
@@ -1288,8 +1345,10 @@ int timeout_register(void *user, void **for_app_registration_out,
 	CAMLparam0();
 	CAMLlocal2(sec, usec);
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1299,15 +1358,37 @@ int timeout_register(void *user, void **for_app_registration_out,
 	sec = caml_copy_int64(abs.tv_sec);
 	usec = caml_copy_int64(abs.tv_usec);
 
+	/* This struct of "handles" will contain "for_libxl" as well as "for_app".
+	 * We'll give a pointer to the struct to the app, and get it back in
+	 * occurred_timeout, where we can clean it all up. */
+	handles = malloc(sizeof(*handles));
+	if (!handles) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_libxl = for_libxl;
+
 	args[0] = *p;
 	args[1] = sec;
 	args[2] = usec;
-	args[3] = (value) for_libxl;
+	args[3] = (value) handles;
+
+	handles->for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(handles->for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		caml_remove_global_root(&handles->for_app);
+		free(handles);
+		goto err;
+	}
 
-	caml_callbackN(*func, 4, args);
+	caml_register_global_root(&handles->for_app);
+	*for_app_registration_out = handles;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int timeout_modify(void *user, void **for_app_registration_update,
@@ -1315,25 +1396,49 @@ int timeout_modify(void *user, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
+	CAMLlocal1(for_app_update);
+	CAMLlocalN(args, 2);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(handles->for_app);
+
+	/* Libxl currently promises that timeout_modify is only ever called with
+	 * abs={0,0}, meaning "right away". We cannot deal with other values. */
+	assert(abs.tv_sec == 0 && abs.tv_usec == 0);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
-		func = caml_named_value("libxl_timeout_modify");
+		func = caml_named_value("libxl_timeout_fire_now");
 	}
 
-	caml_callback(*func, *p);
+	args[0] = *p;
+	args[1] = handles->for_app;
+
+	for_app_update = caml_callbackN_exn(*func, 2, args);
+	if (Is_exception_result(for_app_update)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_app = for_app_update;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void timeout_deregister(void *user, void *for_app_registration)
 {
-	caml_leave_blocking_section();
-	failwith_xl(ERROR_FAIL, "timeout_deregister not yet implemented");
-	caml_enter_blocking_section();
+	/* This hook will never be called by libxl. */
+	abort();
 }
 
 value stub_libxl_osevent_register_hooks(value ctx, value user)
@@ -1386,12 +1491,16 @@ value stub_libxl_osevent_occurred_fd(value ctx, value for_libxl, value fd,
 
 value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
 {
-	CAMLparam2(ctx, for_libxl);
+	CAMLparam1(ctx);
+	struct timeout_handles *handles = (struct timeout_handles *) for_libxl;
 
 	caml_enter_blocking_section();
-	libxl_osevent_occurred_timeout(CTX, (void *) for_libxl);
+	libxl_osevent_occurred_timeout(CTX, (void *) handles->for_libxl);
 	caml_leave_blocking_section();
 
+	caml_remove_global_root(&handles->for_app);
+	free(handles);
+
 	CAMLreturn(Val_unit);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:55:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Jop-0004tg-9y; Thu, 09 Jan 2014 17:54:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <catalin.marinas@arm.com>) id 1W1Jon-0004tY-Vn
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 17:54:58 +0000
Received: from [85.158.143.35:23665] by server-1.bemta-4.messagelabs.com id
	01/F9-02132-172EEC25; Thu, 09 Jan 2014 17:54:57 +0000
X-Env-Sender: catalin.marinas@arm.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389290096!10778634!1
X-Originating-IP: [217.140.110.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31232 invoked from network); 9 Jan 2014 17:54:56 -0000
Received: from unknown (HELO collaborate-mta1.arm.com) (217.140.110.23)
	by server-4.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 17:54:56 -0000
Received: from arm.com (e102109-lin.cambridge.arm.com [10.1.203.24])
	by collaborate-mta1.arm.com (Postfix) with ESMTPS id 3EB5013F69F;
	Thu,  9 Jan 2014 11:54:18 -0600 (CST)
Date: Thu, 9 Jan 2014 17:53:51 +0000
From: Catalin Marinas <catalin.marinas@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140109175351.GC19628@arm.com>
References: <alpine.DEB.2.02.1401081845040.21510@kaball.uk.xensource.com>
	<1389206998-27875-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140109121308.GH3081@arm.com>
	<1389269716.27473.73.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401091750120.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401091750120.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Marc Zyngier <Marc.Zyngier@arm.com>, "nico@linaro.org" <nico@linaro.org>,
	Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>,
	"olof@lixom.net" <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v8 4/6] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 05:50:44PM +0000, Stefano Stabellini wrote:
> On Thu, 9 Jan 2014, Ian Campbell wrote:
> > On Thu, 2014-01-09 at 12:13 +0000, Catalin Marinas wrote:
> > > On Wed, Jan 08, 2014 at 06:49:56PM +0000, Stefano Stabellini wrote:
> > > >  arch/arm64/Kconfig         |   20 ++++++++++++++++++++
> > > >  arch/arm64/kernel/Makefile |    1 +
> > > >  2 files changed, 21 insertions(+)
> > > [...]
> > > > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > > > index 5ba2fd4..1dee735 100644
> > > > --- a/arch/arm64/kernel/Makefile
> > > > +++ b/arch/arm64/kernel/Makefile
> > > > @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
> > > >  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
> > > >  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
> > > >  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> > > > +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
> > > 
> > > Did you forget a git add?
> > 
> > I was just about to say the same thing for the previous arm patch too.
> 
> That is what I get for still using patch queues :-/

You should upgrade to stgit ;)

-- 
Catalin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 05:50:44PM +0000, Stefano Stabellini wrote:
> On Thu, 9 Jan 2014, Ian Campbell wrote:
> > On Thu, 2014-01-09 at 12:13 +0000, Catalin Marinas wrote:
> > > On Wed, Jan 08, 2014 at 06:49:56PM +0000, Stefano Stabellini wrote:
> > > >  arch/arm64/Kconfig         |   20 ++++++++++++++++++++
> > > >  arch/arm64/kernel/Makefile |    1 +
> > > >  2 files changed, 21 insertions(+)
> > > [...]
> > > > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > > > index 5ba2fd4..1dee735 100644
> > > > --- a/arch/arm64/kernel/Makefile
> > > > +++ b/arch/arm64/kernel/Makefile
> > > > @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
> > > >  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
> > > >  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
> > > >  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> > > > +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
> > > 
> > > Did you forget a git add?
> > 
> > I was just about to say the same thing for the previous arm patch too.
> 
> That is what I get for still using patch queues :-/

You should upgrade to stgit ;)

-- 
Catalin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 17:58:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 17:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Js2-00057M-6H; Thu, 09 Jan 2014 17:58:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1Js0-00057E-Vv
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 17:58:17 +0000
Received: from [85.158.139.211:49849] by server-10.bemta-5.messagelabs.com id
	DB/71-01405-833EEC25; Thu, 09 Jan 2014 17:58:16 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389290293!8819838!1
X-Originating-IP: [199.249.25.208]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7029 invoked from network); 9 Jan 2014 17:58:15 -0000
Received: from omzsmtpe03.verizonbusiness.com (HELO
	omzsmtpe03.verizonbusiness.com) (199.249.25.208)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 17:58:15 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe03.verizonbusiness.com with ESMTP; 09 Jan 2014 17:58:12 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="240159935"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 09 Jan 2014 17:58:10 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Thu, 9 Jan 2014 12:56:51 -0500
Message-ID: <52CEE2E2.2030501@terremark.com>
Date: Thu, 9 Jan 2014 12:56:50 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
In-Reply-To: <52CEDCD30200007800112096@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/09/14 11:30, Jan Beulich wrote:
>>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
>> Based on Mukesh's statement, attached is the rebased version of this patch
>> (labeled v3).  I included Mukesh's ack.
> Unless this is meant just for reviewing purposes (albeit even then
> it's likely problematic), could you please get used to sending
> patch revisions with updated mail subjects (i.e. not retaining the prior
> version indicator), so there is a reasonable chance to reconstruct
> things by searching just the titles in a mail archive. (It's still fine -
> at least as far as I'm concerned - to reply to an earlier version,
> thus tying things into a single thread on the archive.)
>
> Jan
>

I will try to.  I had not noticed this in the past.

    -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1K3W-0006aD-SK; Thu, 09 Jan 2014 18:10:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1K3V-0006a8-8t
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 18:10:09 +0000
Received: from [193.109.254.147:33618] by server-12.bemta-14.messagelabs.com
	id B0/95-13681-006EEC25; Thu, 09 Jan 2014 18:10:08 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389291006!9833843!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18223 invoked from network); 9 Jan 2014 18:10:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89243078"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:10:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:10:05 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1K3Q-0000tn-Np;
	Thu, 09 Jan 2014 18:10:04 +0000
Date: Thu, 9 Jan 2014 18:09:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52CEE084.5000409@citrix.com>
Message-ID: <alpine.DEB.2.02.1401091759100.21510@kaball.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
	<52CEC370.10503@citrix.com>
	<alpine.DEB.2.02.1401091722150.21510@kaball.uk.xensource.com>
	<52CEE084.5000409@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, David Vrabel wrote:
> On 09/01/14 17:28, Stefano Stabellini wrote:
> > On Thu, 9 Jan 2014, David Vrabel wrote:
> >> On 09/01/14 15:30, Wei Liu wrote:
> >>> On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
> >>>> This patch contains the new definitions necessary for grant mapping.
> >>>>
> >>>> v2:
> >>>> - move unmapping to separate thread. The NAPI instance has to be scheduled
> >>>>   even from thread context, which can cause huge delays
> >>>> - that causes unfortunately bigger struct xenvif
> >>>> - store grant handle after checking validity
> >>>>
> >>>> v3:
> >>>> - fix comment in xenvif_tx_dealloc_action()
> >>>> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
> >>>>   unnecessary m2p_override. Also remove pages_to_[un]map members
> >>>
> >>> Is it worthwhile to have another function,
> >>> gnttab_unmap_refs_no_m2p_override, in the Xen core driver, or just add a
> >>> parameter to control whether we need to touch m2p_override? I *think* it
> >>> will benefit the block driver as well.
> >>
> >> add_m2p_override and remove_m2p_override calls should be moved into the
> >> gntdev device as that should be the only user.
> > 
> > First of all the gntdev device is common code, while the m2p_override is
> > an x86 concept.
> 
> m2p_add_override() and m2p_remove_override() are already called from
> common code and ARM already provides inline stubs.

This is the right time to fix it, then :)
Maybe we should add the m2p_add_override call to the x86 implementation
of set_phys_to_machine, or maybe we need a new generic
set_machine_to_phys call.


> The m2p override mechanism is also broken by design (local PFN to
> foreign MFN may be many-to-one, but the m2p override only works if local
> PFN to foreign MFN is one-to-one). So I want the m2p override to be only
> used where it is /currently/ necessary.  I think there should be no new
> users of it nor should it be considered a fix for any other use case.

I agree, but I think that we have different views on the use case.
To me the m2p_override use case is "everywhere an mfn_to_pfn translation
is required", which unfortunately is potentially everywhere at this time.

I would love to restrict it further but at the very least we would need
something written down under Documentation. Otherwise when the next
Linux hacker comes along with a performance optimization for her new
network driver that breaks Xen because Xen is incapable of doing mfn to
pfn translations, the maintainers might (rightfully) decide that it is
simply our problem.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KAX-0006jK-Td; Thu, 09 Jan 2014 18:17:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1KAW-0006jF-O6
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 18:17:24 +0000
Received: from [193.109.254.147:11349] by server-3.bemta-14.messagelabs.com id
	10/62-11000-4B7EEC25; Thu, 09 Jan 2014 18:17:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389291441!9895516!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16633 invoked from network); 9 Jan 2014 18:17:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:17:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89246214"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:17:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 13:17:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1KAR-0004Ab-WB;
	Thu, 09 Jan 2014 18:17:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1KAR-00034a-NG;
	Thu, 09 Jan 2014 18:17:19 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.59311.539848.932465@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 18:17:19 +0000
To: Rob Hoes <rob.hoes@citrix.com>
In-Reply-To: <1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
References: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
	<1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: dave.scott@eu.citrix.com, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/3] libxl: ocaml: use
	'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH v4 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> This allows the application to pass a token to libxl in the fd/timeout
> registration callbacks, which it receives back in modification or
> deregistration callbacks.
...
>  int fd_register(void *user, int fd, void **for_app_registration_out,
>                       short events, void *for_libxl)
>  {
...
> -	caml_callbackN(*func, 4, args);
> +	for_app = malloc(sizeof(value));
...
> +	*for_app = caml_callbackN_exn(*func, 4, args);
> +	if (Is_exception_result(*for_app)) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;

Doesn't this leak for_app?  ISTR spotting this before but perhaps I
forgot to mention it.

> +err:
>  	CAMLdone;
>  	caml_enter_blocking_section();
> -	return 0;
> +	return ret;
>  }


And:

>  int timeout_register(void *user, void **for_app_registration_out,
>                            struct timeval abs, void *for_libxl)
...
> +	caml_register_global_root(&handles->for_app);
...
> +	*for_app_registration_out = handles;
>  }
>  
>  int timeout_modify(void *user, void **for_app_registration_update,
...
> +	handles->for_app = for_app_update;
> +

This is allowed, then ?  (Updating foo when &foo has been registered
as a global root.)  I guess so.


Finally:

>  value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
>  {

Calling this formal parameter "for_libxl" is confusing.  It's actually
the value passed to the ocaml register function, ie handles but with a
different type, and not "for_libxl" at all.


Nearly there ... the rest is fine!

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH v4 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> This allows the application to pass a token to libxl in the fd/timeout
> registration callbacks, which it receives back in modification or
> deregistration callbacks.
...
>  int fd_register(void *user, int fd, void **for_app_registration_out,
>                       short events, void *for_libxl)
>  {
...
> -	caml_callbackN(*func, 4, args);
> +	for_app = malloc(sizeof(value));
...
> +	*for_app = caml_callbackN_exn(*func, 4, args);
> +	if (Is_exception_result(*for_app)) {
> +		ret = ERROR_OSEVENT_REG_FAIL;
> +		goto err;

Doesn't this leak for_app ?  ISTR spotting this before but perhaps I
forgot to mention it.

> +err:
>  	CAMLdone;
>  	caml_enter_blocking_section();
> -	return 0;
> +	return ret;
>  }
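The leak is in the pattern, not any one line: the slot for the application token is malloc'd before a callback that can fail, so the error path must free it. A minimal plain-C sketch of the fixed shape (illustrative only, not the actual libxl ocaml stubs; the function name, error constant, and stored value are made up, and `callback_fails` stands in for `Is_exception_result()`):

```c
#include <stdlib.h>

typedef long value;                    /* stand-in for OCaml's value type */
#define ERROR_OSEVENT_REG_FAIL (-24)   /* illustrative error code */

int register_with_token(int callback_fails, value **for_app_out)
{
    int ret = 0;
    value *for_app = malloc(sizeof(value));
    if (!for_app)
        return ERROR_OSEVENT_REG_FAIL;

    if (callback_fails) {              /* models Is_exception_result(...) */
        ret = ERROR_OSEVENT_REG_FAIL;
        goto err;
    }
    *for_app = 42;                     /* models storing the callback result */
    *for_app_out = for_app;
    return 0;

err:
    free(for_app);                     /* without this free, for_app leaks */
    return ret;
}
```

The point is simply that every exit path taken after the malloc either hands ownership of `for_app` to the caller or frees it.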


And:

>  int timeout_register(void *user, void **for_app_registration_out,
>                            struct timeval abs, void *for_libxl)
...
> +	caml_register_global_root(&handles->for_app);
...
> +	*for_app_registration_out = handles;
>  }
>  
>  int timeout_modify(void *user, void **for_app_registration_update,
...
> +	handles->for_app = for_app_update;
> +

This is allowed, then ?  (Updating foo when &foo has been registered
as a global root.)  I guess so.
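(It is: caml_register_global_root records the *address* of the slot, and each GC scan reads whatever the slot holds at that moment, so assigning a new value into a registered slot is legal. A toy model in plain C, with no OCaml runtime involved and all names illustrative, where a tiny "root table" plays the GC's part:)

```c
#include <stddef.h>

typedef long value;

#define MAX_ROOTS 8
value *root_table[MAX_ROOTS];
size_t nr_roots;

void toy_register_global_root(value *slot)
{
    root_table[nr_roots++] = slot;     /* remember the address, not the value */
}

/* a toy GC scan: visits the current contents of every registered slot */
value toy_scan_roots(void)
{
    value sum = 0;
    for (size_t i = 0; i < nr_roots; i++)
        sum += *root_table[i];
    return sum;
}
```

Registering &handles->for_app once at timeout_register time and later storing a new value into handles->for_app (as timeout_modify does) follows the same shape: the next scan just sees the new contents.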


Finally:

>  value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
>  {

Calling this formal parameter "for_libxl" is confusing.  It's actually
the value passed to the ocaml register function, ie handles but with a
different type, and not "for_libxl" at all.


Nearly there ... the rest is fine!

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:23:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KGQ-0007hj-Tn; Thu, 09 Jan 2014 18:23:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1KGP-0007hJ-Rk
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:23:30 +0000
Received: from [193.109.254.147:42845] by server-7.bemta-14.messagelabs.com id
	A7/E2-15500-129EEC25; Thu, 09 Jan 2014 18:23:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389291806!9893723!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2429 invoked from network); 9 Jan 2014 18:23:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:23:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91392944"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 18:23:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 13:23:25 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1KGK-0004C6-Ri;
	Thu, 09 Jan 2014 18:23:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1KGK-0006KZ-QN;
	Thu, 09 Jan 2014 18:23:24 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24319-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 18:23:24 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24319: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8519239522945357400=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8519239522945357400==
Content-Type: text/plain

flight 24319 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24319/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  c5c697651b009a0672faa2a902a0c12d2e975d97
baseline version:
 xen                  43359e14ffc64f95343c35c9a1eafbc36adad531

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=c5c697651b009a0672faa2a902a0c12d2e975d97
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing c5c697651b009a0672faa2a902a0c12d2e975d97
+ branch=xen-4.2-testing
+ revision=c5c697651b009a0672faa2a902a0c12d2e975d97
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git c5c697651b009a0672faa2a902a0c12d2e975d97:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   43359e1..c5c6976  c5c697651b009a0672faa2a902a0c12d2e975d97 -> stable-4.2


--===============8519239522945357400==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8519239522945357400==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KGu-0007pl-RA; Thu, 09 Jan 2014 18:24:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1KGt-0007oU-F4
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 18:23:59 +0000
Received: from [85.158.137.68:30328] by server-17.bemta-3.messagelabs.com id
	C6/E6-15965-D39EEC25; Thu, 09 Jan 2014 18:23:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389291835!8245989!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29150 invoked from network); 9 Jan 2014 18:23:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:23:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89248592"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:23:27 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	13:23:27 -0500
Message-ID: <52CEE91D.50105@citrix.com>
Date: Thu, 9 Jan 2014 18:23:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
	<52CEC370.10503@citrix.com>
	<alpine.DEB.2.02.1401091722150.21510@kaball.uk.xensource.com>
	<52CEE084.5000409@citrix.com>
	<alpine.DEB.2.02.1401091759100.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401091759100.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 18:09, Stefano Stabellini wrote:
> 
> I agree, but I think that we have different views on the use case.
> To me the m2p_override use case is "everywhere an mfn_to_pfn translation
> is required", that unfortunately is potentially everywhere at this time.

mfn_to_pfn() cannot be made to work correctly with foreign MFNs.  It's a
fundamentally unsolvable problem.

IMO, the only sensible use of the m2p_override is to cause mfn_to_pfn()
to BUG() if a foreign MFN is used.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:32:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:32:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KOn-0000J7-03; Thu, 09 Jan 2014 18:32:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KOh-0000Iz-Gq
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:32:07 +0000
Received: from [85.158.137.68:25845] by server-6.bemta-3.messagelabs.com id
	44/AD-04868-22BEEC25; Thu, 09 Jan 2014 18:32:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389292320!4580363!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17128 invoked from network); 9 Jan 2014 18:32:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:32:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89253149"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:32:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:31:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1K9A-0000z6-K8;
	Thu, 09 Jan 2014 18:16:00 +0000
Date: Thu, 9 Jan 2014 18:15:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Olof Johansson <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: [Xen-devel] [PATCH v9 0/5] xen/arm/arm64: CONFIG_PARAVIRT and
 stolen ticks accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
This patch series introduces stolen ticks accounting for Xen on ARM and
ARM64.
Stolen ticks are clocksource ticks that have been "stolen" from the cpu,
typically because Linux is running in a virtual machine and the vcpu has
been descheduled.
To account for these ticks, we introduce CONFIG_PARAVIRT and pv_time_ops
so that we can make use of:

kernel/sched/cputime.c:steal_account_process_tick


Changes in v9:
- add back missing new files from patches;
- fix compilation on avr32 (remove patch #5, revert to the previous version
  of patch #2).



Stefano Stabellini (5):
      xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
      kernel: missing include in cputime.c
      arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
      arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
      xen/arm: account for stolen ticks

 arch/arm/Kconfig                  |   20 ++++++++
 arch/arm/include/asm/paravirt.h   |   20 ++++++++
 arch/arm/kernel/Makefile          |    2 +
 arch/arm/kernel/paravirt.c        |   25 ++++++++++
 arch/arm/xen/enlighten.c          |   21 +++++++++
 arch/arm64/Kconfig                |   20 ++++++++
 arch/arm64/include/asm/paravirt.h |   20 ++++++++
 arch/arm64/kernel/Makefile        |    1 +
 arch/arm64/kernel/paravirt.c      |   25 ++++++++++
 arch/ia64/xen/time.c              |   48 +++----------------
 arch/x86/xen/time.c               |   76 +------------------------------
 drivers/xen/Makefile              |    2 +-
 drivers/xen/time.c                |   91 +++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h             |    5 ++
 kernel/sched/cputime.c            |    3 ++
 15 files changed, 261 insertions(+), 118 deletions(-)
 create mode 100644 arch/arm/include/asm/paravirt.h
 create mode 100644 arch/arm/kernel/paravirt.c
 create mode 100644 arch/arm64/include/asm/paravirt.h
 create mode 100644 arch/arm64/kernel/paravirt.c
 create mode 100644 drivers/xen/time.c


git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_9


Cheers,

Stefano


From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KPz-0000OA-8n; Thu, 09 Jan 2014 18:33:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KPx-0000Nb-6E
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:21 +0000
Received: from [85.158.139.211:42505] by server-2.bemta-5.messagelabs.com id
	3B/DF-29392-07BEEC25; Thu, 09 Jan 2014 18:33:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389292397!8871941!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8710 invoked from network); 9 Jan 2014 18:33:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89253907"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPm-0001LZ-Tj;
	Thu, 09 Jan 2014 18:33:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:12 +0000
Message-ID: <1389292336-9292-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v9 1/5] xen: move xen_setup_runstate_info and
	get_runstate_snapshot to drivers/xen/time.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: konrad.wilk@oracle.com

---

Changes in v2:
- leave do_stolen_accounting in arch/x86/xen/time.c;
- use the new common functions in arch/ia64/xen/time.c.
---
 arch/ia64/xen/time.c  |   48 ++++----------------------
 arch/x86/xen/time.c   |   76 +----------------------------------------
 drivers/xen/Makefile  |    2 +-
 drivers/xen/time.c    |   91 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |    5 +++
 5 files changed, 104 insertions(+), 118 deletions(-)
 create mode 100644 drivers/xen/time.c

diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index 1f8244a..79a0b8c 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -34,53 +34,17 @@
 
 #include "../kernel/fsyscall_gtod_data.h"
 
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 static DEFINE_PER_CPU(unsigned long, xen_stolen_time);
 static DEFINE_PER_CPU(unsigned long, xen_blocked_time);
 
 /* taken from i386/kernel/time-xen.c */
 static void xen_init_missing_ticks_accounting(int cpu)
 {
-	struct vcpu_register_runstate_memory_area area;
-	struct vcpu_runstate_info *runstate = &per_cpu(xen_runstate, cpu);
-	int rc;
+	struct vcpu_runstate_info runstate;
+
+	xen_setup_runstate_info(cpu);
+	xen_get_runstate_snapshot(&runstate);
 
-	memset(runstate, 0, sizeof(*runstate));
-
-	area.addr.v = runstate;
-	rc = HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area, cpu,
-				&area);
-	WARN_ON(rc && rc != -ENOSYS);
-
-	per_cpu(xen_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
-	per_cpu(xen_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
-					    + runstate->time[RUNSTATE_offline];
-}
-
-/*
- * Runstate accounting
- */
-/* stolen from arch/x86/xen/time.c */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = state->state_entry_time;
-		rmb();
-		*res = *state;
-		rmb();
-	} while (state->state_entry_time != state_time);
+	per_cpu(xen_blocked_time, cpu) = runstate.time[RUNSTATE_blocked];
+	per_cpu(xen_stolen_time, cpu) = runstate.time[RUNSTATE_runnable]
+					    + runstate.time[RUNSTATE_offline];
 }
 
 #define NS_PER_TICK (1000000000LL/HZ)
@@ -94,7 +58,7 @@ consider_steal_time(unsigned long new_itm)
 	struct vcpu_runstate_info runstate;
 	struct task_struct *p = current;
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	/*
 	 * Check for vcpu migration effect
@@ -202,7 +166,7 @@ static unsigned long long xen_sched_clock(void)
 	 */
 	now = ia64_native_sched_clock();
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	WARN_ON(runstate.state != RUNSTATE_running);
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 12a1ca7..d479444 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -32,86 +32,12 @@
 #define TIMER_SLOP	100000
 #define NS_PER_TICK	(1000000000LL / HZ)
 
-/* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
-
 /* snapshots of runstate info */
 static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
 
 /* unused ns of stolen time */
 static DEFINE_PER_CPU(u64, xen_residual_stolen);
 
-/* return an consistent snapshot of 64-bit time/counter value */
-static u64 get64(const u64 *p)
-{
-	u64 ret;
-
-	if (BITS_PER_LONG < 64) {
-		u32 *p32 = (u32 *)p;
-		u32 h, l;
-
-		/*
-		 * Read high then low, and then make sure high is
-		 * still the same; this will only loop if low wraps
-		 * and carries into high.
-		 * XXX some clean way to make this endian-proof?
-		 */
-		do {
-			h = p32[1];
-			barrier();
-			l = p32[0];
-			barrier();
-		} while (p32[1] != h);
-
-		ret = (((u64)h) << 32) | l;
-	} else
-		ret = *p;
-
-	return ret;
-}
-
-/*
- * Runstate accounting
- */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = get64(&state->state_entry_time);
-		barrier();
-		*res = *state;
-		barrier();
-	} while (get64(&state->state_entry_time) != state_time);
-}
-
-/* return true when a vcpu could run but has no real cpu to run on */
-bool xen_vcpu_stolen(int vcpu)
-{
-	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
-}
-
-void xen_setup_runstate_info(int cpu)
-{
-	struct vcpu_register_runstate_memory_area area;
-
-	area.addr.v = &per_cpu(xen_runstate, cpu);
-
-	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
-			       cpu, &area))
-		BUG();
-}
-
 static void do_stolen_accounting(void)
 {
 	struct vcpu_runstate_info state;
@@ -119,7 +45,7 @@ static void do_stolen_accounting(void)
 	s64 runnable, offline, stolen;
 	cputime_t ticks;
 
-	get_runstate_snapshot(&state);
+	xen_get_runstate_snapshot(&state);
 
 	WARN_ON(state.state != RUNSTATE_running);
 
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 14fe79d..cd95fb7 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o events.o balloon.o manage.o
+obj-y	+= grant-table.o features.o events.o balloon.o manage.o time.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
new file mode 100644
index 0000000..c2e39d3
--- /dev/null
+++ b/drivers/xen/time.c
@@ -0,0 +1,91 @@
+/*
+ * Xen stolen ticks accounting.
+ */
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
+#include <linux/gfp.h>
+
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+
+#include <xen/events.h>
+#include <xen/features.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/vcpu.h>
+#include <xen/xen-ops.h>
+
+/* runstate info updated by Xen */
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
+
+/* return a consistent snapshot of 64-bit time/counter value */
+static u64 get64(const u64 *p)
+{
+	u64 ret;
+
+	if (BITS_PER_LONG < 64) {
+		u32 *p32 = (u32 *)p;
+		u32 h, l;
+
+		/*
+		 * Read high then low, and then make sure high is
+		 * still the same; this will only loop if low wraps
+		 * and carries into high.
+		 * XXX some clean way to make this endian-proof?
+		 */
+		do {
+			h = p32[1];
+			barrier();
+			l = p32[0];
+			barrier();
+		} while (p32[1] != h);
+
+		ret = (((u64)h) << 32) | l;
+	} else
+		ret = *p;
+
+	return ret;
+}
+
+/*
+ * Runstate accounting
+ */
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res)
+{
+	u64 state_time;
+	struct vcpu_runstate_info *state;
+
+	BUG_ON(preemptible());
+
+	state = &__get_cpu_var(xen_runstate);
+
+	/*
+	 * The runstate info is always updated by the hypervisor on
+	 * the current CPU, so there's no need to use anything
+	 * stronger than a compiler barrier when fetching it.
+	 */
+	do {
+		state_time = get64(&state->state_entry_time);
+		barrier();
+		*res = *state;
+		barrier();
+	} while (get64(&state->state_entry_time) != state_time);
+}
+
+/* return true when a vcpu could run but has no real cpu to run on */
+bool xen_vcpu_stolen(int vcpu)
+{
+	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
+}
+
+void xen_setup_runstate_info(int cpu)
+{
+	struct vcpu_register_runstate_memory_area area;
+
+	area.addr.v = &per_cpu(xen_runstate, cpu);
+
+	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
+			       cpu, &area))
+		BUG();
+}
+
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..ee3303b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -3,6 +3,7 @@
 
 #include <linux/percpu.h>
 #include <asm/xen/interface.h>
+#include <xen/interface/vcpu.h>
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
 
@@ -16,6 +17,10 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+bool xen_vcpu_stolen(int vcpu);
+void xen_setup_runstate_info(int cpu);
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KPx-0000Nh-Fo; Thu, 09 Jan 2014 18:33:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KPw-0000NR-2j
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:20 +0000
Received: from [85.158.139.211:45870] by server-14.bemta-5.messagelabs.com id
	7D/65-24200-F6BEEC25; Thu, 09 Jan 2014 18:33:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389292397!8871941!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8664 invoked from network); 9 Jan 2014 18:33:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89253906"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPm-0001LZ-UX;
	Thu, 09 Jan 2014 18:33:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:13 +0000
Message-ID: <1389292336-9292-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	peterz@infradead.org, mingo@redhat.com, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v9 2/5] kernel: missing include in cputime.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

steal_account_process_tick calls paravirt_steal_clock, but paravirt.h is
currently missing among the included header files.
Include asm/paravirt.h.
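
As context (not part of the patch itself): steal_account_process_tick folds
the stolen nanoseconds reported by paravirt_steal_clock into whole scheduler
ticks, carrying the sub-tick remainder forward to the next call. A minimal
userspace sketch of that conversion follows; the HZ value and the helper name
are hypothetical, chosen only for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define HZ 250                              /* hypothetical config value */
#define NS_PER_TICK (1000000000LL / HZ)

/* Hypothetical model of residual-carrying steal accounting: whole
 * ticks are returned, left-over nanoseconds are kept for next time. */
static int64_t steal_residual_ns;

static int64_t account_steal_ticks_from_ns(int64_t stolen_ns)
{
	int64_t total = steal_residual_ns + stolen_ns;
	int64_t ticks = total / NS_PER_TICK;

	steal_residual_ns = total % NS_PER_TICK;
	return ticks;
}
```

With NS_PER_TICK = 4000000, feeding in 2*NS_PER_TICK + 5 ns accounts two
ticks and leaves 5 ns of residual for the following call.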

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: mingo@redhat.com
CC: peterz@infradead.org
---
 kernel/sched/cputime.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 9994791..951833e 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -5,6 +5,9 @@
 #include <linux/static_key.h>
 #include <linux/context_tracking.h>
 #include "sched.h"
+#ifdef CONFIG_PARAVIRT
+#include <asm/paravirt.h>
+#endif
 
 
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KPz-0000OA-8n; Thu, 09 Jan 2014 18:33:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KPx-0000Nb-6E
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:21 +0000
Received: from [85.158.139.211:42505] by server-2.bemta-5.messagelabs.com id
	3B/DF-29392-07BEEC25; Thu, 09 Jan 2014 18:33:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389292397!8871941!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8710 invoked from network); 9 Jan 2014 18:33:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89253907"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPm-0001LZ-Tj;
	Thu, 09 Jan 2014 18:33:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:12 +0000
Message-ID: <1389292336-9292-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v9 1/5] xen: move xen_setup_runstate_info and
	get_runstate_snapshot to drivers/xen/time.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: konrad.wilk@oracle.com

---

Changes in v2:
- leave do_stolen_accounting in arch/x86/xen/time.c;
- use the new common functions in arch/ia64/xen/time.c.
---
 arch/ia64/xen/time.c  |   48 ++++----------------------
 arch/x86/xen/time.c   |   76 +----------------------------------------
 drivers/xen/Makefile  |    2 +-
 drivers/xen/time.c    |   91 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |    5 +++
 5 files changed, 104 insertions(+), 118 deletions(-)
 create mode 100644 drivers/xen/time.c

diff --git a/arch/ia64/xen/time.c b/arch/ia64/xen/time.c
index 1f8244a..79a0b8c 100644
--- a/arch/ia64/xen/time.c
+++ b/arch/ia64/xen/time.c
@@ -34,53 +34,17 @@
 
 #include "../kernel/fsyscall_gtod_data.h"
 
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 static DEFINE_PER_CPU(unsigned long, xen_stolen_time);
 static DEFINE_PER_CPU(unsigned long, xen_blocked_time);
 
 /* taken from i386/kernel/time-xen.c */
 static void xen_init_missing_ticks_accounting(int cpu)
 {
-	struct vcpu_register_runstate_memory_area area;
-	struct vcpu_runstate_info *runstate = &per_cpu(xen_runstate, cpu);
-	int rc;
+	xen_setup_runstate_info(&runstate);
 
-	memset(runstate, 0, sizeof(*runstate));
-
-	area.addr.v = runstate;
-	rc = HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area, cpu,
-				&area);
-	WARN_ON(rc && rc != -ENOSYS);
-
-	per_cpu(xen_blocked_time, cpu) = runstate->time[RUNSTATE_blocked];
-	per_cpu(xen_stolen_time, cpu) = runstate->time[RUNSTATE_runnable]
-					    + runstate->time[RUNSTATE_offline];
-}
-
-/*
- * Runstate accounting
- */
-/* stolen from arch/x86/xen/time.c */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = state->state_entry_time;
-		rmb();
-		*res = *state;
-		rmb();
-	} while (state->state_entry_time != state_time);
+	per_cpu(xen_blocked_time, cpu) = runstate.time[RUNSTATE_blocked];
+	per_cpu(xen_stolen_time, cpu) = runstate.time[RUNSTATE_runnable]
+					    + runstate.time[RUNSTATE_offline];
 }
 
 #define NS_PER_TICK (1000000000LL/HZ)
@@ -94,7 +58,7 @@ consider_steal_time(unsigned long new_itm)
 	struct vcpu_runstate_info runstate;
 	struct task_struct *p = current;
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	/*
 	 * Check for vcpu migration effect
@@ -202,7 +166,7 @@ static unsigned long long xen_sched_clock(void)
 	 */
 	now = ia64_native_sched_clock();
 
-	get_runstate_snapshot(&runstate);
+	xen_get_runstate_snapshot(&runstate);
 
 	WARN_ON(runstate.state != RUNSTATE_running);
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 12a1ca7..d479444 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -32,86 +32,12 @@
 #define TIMER_SLOP	100000
 #define NS_PER_TICK	(1000000000LL / HZ)
 
-/* runstate info updated by Xen */
-static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
-
 /* snapshots of runstate info */
 static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate_snapshot);
 
 /* unused ns of stolen time */
 static DEFINE_PER_CPU(u64, xen_residual_stolen);
 
-/* return an consistent snapshot of 64-bit time/counter value */
-static u64 get64(const u64 *p)
-{
-	u64 ret;
-
-	if (BITS_PER_LONG < 64) {
-		u32 *p32 = (u32 *)p;
-		u32 h, l;
-
-		/*
-		 * Read high then low, and then make sure high is
-		 * still the same; this will only loop if low wraps
-		 * and carries into high.
-		 * XXX some clean way to make this endian-proof?
-		 */
-		do {
-			h = p32[1];
-			barrier();
-			l = p32[0];
-			barrier();
-		} while (p32[1] != h);
-
-		ret = (((u64)h) << 32) | l;
-	} else
-		ret = *p;
-
-	return ret;
-}
-
-/*
- * Runstate accounting
- */
-static void get_runstate_snapshot(struct vcpu_runstate_info *res)
-{
-	u64 state_time;
-	struct vcpu_runstate_info *state;
-
-	BUG_ON(preemptible());
-
-	state = &__get_cpu_var(xen_runstate);
-
-	/*
-	 * The runstate info is always updated by the hypervisor on
-	 * the current CPU, so there's no need to use anything
-	 * stronger than a compiler barrier when fetching it.
-	 */
-	do {
-		state_time = get64(&state->state_entry_time);
-		barrier();
-		*res = *state;
-		barrier();
-	} while (get64(&state->state_entry_time) != state_time);
-}
-
-/* return true when a vcpu could run but has no real cpu to run on */
-bool xen_vcpu_stolen(int vcpu)
-{
-	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
-}
-
-void xen_setup_runstate_info(int cpu)
-{
-	struct vcpu_register_runstate_memory_area area;
-
-	area.addr.v = &per_cpu(xen_runstate, cpu);
-
-	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
-			       cpu, &area))
-		BUG();
-}
-
 static void do_stolen_accounting(void)
 {
 	struct vcpu_runstate_info state;
@@ -119,7 +45,7 @@ static void do_stolen_accounting(void)
 	s64 runnable, offline, stolen;
 	cputime_t ticks;
 
-	get_runstate_snapshot(&state);
+	xen_get_runstate_snapshot(&state);
 
 	WARN_ON(state.state != RUNSTATE_running);
 
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 14fe79d..cd95fb7 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o events.o balloon.o manage.o
+obj-y	+= grant-table.o features.o events.o balloon.o manage.o time.o
 obj-y	+= xenbus/
 
 nostackp := $(call cc-option, -fno-stack-protector)
diff --git a/drivers/xen/time.c b/drivers/xen/time.c
new file mode 100644
index 0000000..c2e39d3
--- /dev/null
+++ b/drivers/xen/time.c
@@ -0,0 +1,91 @@
+/*
+ * Xen stolen ticks accounting.
+ */
+#include <linux/kernel.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
+#include <linux/gfp.h>
+
+#include <asm/xen/hypervisor.h>
+#include <asm/xen/hypercall.h>
+
+#include <xen/events.h>
+#include <xen/features.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/vcpu.h>
+#include <xen/xen-ops.h>
+
+/* runstate info updated by Xen */
+static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
+
+/* return an consistent snapshot of 64-bit time/counter value */
+static u64 get64(const u64 *p)
+{
+	u64 ret;
+
+	if (BITS_PER_LONG < 64) {
+		u32 *p32 = (u32 *)p;
+		u32 h, l;
+
+		/*
+		 * Read high then low, and then make sure high is
+		 * still the same; this will only loop if low wraps
+		 * and carries into high.
+		 * XXX some clean way to make this endian-proof?
+		 */
+		do {
+			h = p32[1];
+			barrier();
+			l = p32[0];
+			barrier();
+		} while (p32[1] != h);
+
+		ret = (((u64)h) << 32) | l;
+	} else
+		ret = *p;
+
+	return ret;
+}
+
+/*
+ * Runstate accounting
+ */
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res)
+{
+	u64 state_time;
+	struct vcpu_runstate_info *state;
+
+	BUG_ON(preemptible());
+
+	state = &__get_cpu_var(xen_runstate);
+
+	/*
+	 * The runstate info is always updated by the hypervisor on
+	 * the current CPU, so there's no need to use anything
+	 * stronger than a compiler barrier when fetching it.
+	 */
+	do {
+		state_time = get64(&state->state_entry_time);
+		barrier();
+		*res = *state;
+		barrier();
+	} while (get64(&state->state_entry_time) != state_time);
+}
+
+/* return true when a vcpu could run but has no real cpu to run on */
+bool xen_vcpu_stolen(int vcpu)
+{
+	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
+}
+
+void xen_setup_runstate_info(int cpu)
+{
+	struct vcpu_register_runstate_memory_area area;
+
+	area.addr.v = &per_cpu(xen_runstate, cpu);
+
+	if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
+			       cpu, &area))
+		BUG();
+}
+
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..ee3303b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -3,6 +3,7 @@
 
 #include <linux/percpu.h>
 #include <asm/xen/interface.h>
+#include <xen/interface/vcpu.h>
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
 
@@ -16,6 +17,10 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+bool xen_vcpu_stolen(int vcpu);
+void xen_setup_runstate_info(int cpu);
+void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KPy-0000Ny-SL; Thu, 09 Jan 2014 18:33:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KPw-0000NY-Sc
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:21 +0000
Received: from [85.158.137.68:43896] by server-16.bemta-3.messagelabs.com id
	4E/D9-26128-07BEEC25; Thu, 09 Jan 2014 18:33:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389292397!4580572!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22241 invoked from network); 9 Jan 2014 18:33:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91397937"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPm-0001LZ-VT;
	Thu, 09 Jan 2014 18:33:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:14 +0000
Message-ID: <1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	nico@linaro.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org, cov@codeaurora.org
Subject: [Xen-devel] [PATCH v9 3/5] arm: introduce CONFIG_PARAVIRT,
	PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.

The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching is needed.

This allows us to make use of steal_account_process_tick for stolen
ticks accounting.
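
For reference, the indirection this series relies on can be modelled in
plain userspace C: pv_time_ops holds a single function pointer that a
hypervisor backend fills in, and paravirt_steal_clock is a thin inline
wrapper around it. The fake_steal_clock backend and its values below are
made up for illustration only.

```c
#include <assert.h>

/* Userspace model of the kernel's pv_time_ops: one function pointer
 * that a hypervisor backend (e.g. Xen) fills in at boot. */
struct pv_time_ops {
	unsigned long long (*steal_clock)(int cpu);
};
static struct pv_time_ops pv_time_ops;

static inline unsigned long long paravirt_steal_clock(int cpu)
{
	return pv_time_ops.steal_clock(cpu);
}

/* Stand-in backend: pretend cpu N has had N*1000 ns stolen. */
static unsigned long long fake_steal_clock(int cpu)
{
	return 1000ULL * (unsigned long long)cpu;
}
```

Since only this one hook exists on ARM, plugging in the backend is a
single pointer assignment; there is no code patching involved.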

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: linux@arm.linux.org.uk
CC: will.deacon@arm.com
CC: nico@linaro.org
CC: marc.zyngier@arm.com
CC: cov@codeaurora.org
CC: arnd@arndb.de
CC: olof@lixom.net

---

Changes in v7:
- ifdef CONFIG_PARAVIRT the content of paravirt.h.

Changes in v3:
- improve commit description and Kconfig help text;
- no need to initialize pv_time_ops;
- add PARAVIRT_TIME_ACCOUNTING.
---
 arch/arm/Kconfig                |   20 ++++++++++++++++++++
 arch/arm/include/asm/paravirt.h |   20 ++++++++++++++++++++
 arch/arm/kernel/Makefile        |    2 ++
 arch/arm/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
 4 files changed, 67 insertions(+)
 create mode 100644 arch/arm/include/asm/paravirt.h
 create mode 100644 arch/arm/kernel/paravirt.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..d6c3ba1 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1874,6 +1874,25 @@ config SWIOTLB
 config IOMMU_HELPER
 	def_bool SWIOTLB
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	---help---
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.
+
+config PARAVIRT_TIME_ACCOUNTING
+	bool "Paravirtual steal time accounting"
+	select PARAVIRT
+	default n
+	---help---
+	  Select this option to enable fine granularity task steal time
+	  accounting. Time spent executing other tasks in parallel with
+	  the current vCPU is discounted from the vCPU power. To account for
+	  that, there can be a small performance impact.
+
+	  If in doubt, say N here.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -1885,6 +1904,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select PARAVIRT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
new file mode 100644
index 0000000..8435ff59
--- /dev/null
+++ b/arch/arm/include/asm/paravirt.h
@@ -0,0 +1,20 @@
+#ifndef _ASM_ARM_PARAVIRT_H
+#define _ASM_ARM_PARAVIRT_H
+
+#ifdef CONFIG_PARAVIRT
+struct static_key;
+extern struct static_key paravirt_steal_enabled;
+extern struct static_key paravirt_steal_rq_enabled;
+
+struct pv_time_ops {
+	unsigned long long (*steal_clock)(int cpu);
+};
+extern struct pv_time_ops pv_time_ops;
+
+static inline u64 paravirt_steal_clock(int cpu)
+{
+	return pv_time_ops.steal_clock(cpu);
+}
+#endif
+
+#endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index a30fc9b..34cf9a6 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
 ifneq ($(CONFIG_ARCH_EBSA110),y)
   obj-y		+= io.o
 endif
+obj-$(CONFIG_PARAVIRT)	+= paravirt.o
 
 head-y			:= head$(MMUEXT).o
 obj-$(CONFIG_DEBUG_LL)	+= debug.o
@@ -97,5 +98,6 @@ ifeq ($(CONFIG_ARM_PSCI),y)
 obj-y				+= psci.o
 obj-$(CONFIG_SMP)		+= psci_smp.o
 endif
+obj-$(CONFIG_PARAVIRT)	+= paravirt.o
 
 extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
new file mode 100644
index 0000000..53f371e
--- /dev/null
+++ b/arch/arm/kernel/paravirt.c
@@ -0,0 +1,25 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2013 Citrix Systems
+ *
+ * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ */
+
+#include <linux/export.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <asm/paravirt.h>
+
+struct static_key paravirt_steal_enabled;
+struct static_key paravirt_steal_rq_enabled;
+
+struct pv_time_ops pv_time_ops;
+EXPORT_SYMBOL_GPL(pv_time_ops);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KQ1-0000Pg-EK; Thu, 09 Jan 2014 18:33:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KPx-0000Ng-Pz
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:22 +0000
Received: from [85.158.139.211:21372] by server-12.bemta-5.messagelabs.com id
	7D/19-30017-17BEEC25; Thu, 09 Jan 2014 18:33:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389292397!8871941!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8754 invoked from network); 9 Jan 2014 18:33:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89253908"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPn-0001LZ-1D;
	Thu, 09 Jan 2014 18:33:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:16 +0000
Message-ID: <1389292336-9292-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, catalin.marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v9 5/5] xen/arm: account for stolen ticks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Register the runstate_memory_area with the hypervisor.
Use pv_time_ops.steal_clock to account for stolen ticks.

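The accounting flow described above can be sketched in plain user-space C. This is a mock, not kernel code: `mock_steal_clock` and `account_steal_tick` are illustrative stand-ins for `xen_stolen_accounting` and the scheduler's `steal_account_process_tick`, and the fixed runstate values replace a real `xen_get_runstate_snapshot` call:

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the pv_time_ops hook from the previous patch: the only
 * field is steal_clock, returning cumulative stolen nanoseconds. */
struct pv_time_ops {
    uint64_t (*steal_clock)(int cpu);
};

/* Stand-ins for the per-vCPU runstate snapshot a Xen guest would read;
 * real code sums state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline]. */
static uint64_t mock_runnable_ns = 1000;
static uint64_t mock_offline_ns  = 200;

static uint64_t mock_steal_clock(int cpu)
{
    (void)cpu;
    return mock_runnable_ns + mock_offline_ns;
}

static struct pv_time_ops pv_time_ops = { .steal_clock = mock_steal_clock };

/* Per-cpu bookkeeping as the scheduler does it: steal_clock is
 * cumulative, so each tick charges only the delta since last read. */
static uint64_t last_steal;

static uint64_t account_steal_tick(int cpu)
{
    uint64_t cur = pv_time_ops.steal_clock(cpu);
    uint64_t delta = cur - last_steal;
    last_steal = cur;
    return delta;  /* nanoseconds to charge as steal time */
}
```

The key point is that the hook reports a monotonically growing total, and the accounting side turns it into per-tick increments.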
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


---

Changes in v4:
- don't use paravirt_steal_rq_enabled: we do not support retrieving
stolen ticks for vCPUs other than the one we are running on.

Changes in v3:
- use BUG_ON and smp_processor_id.
---
 arch/arm/xen/enlighten.c |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 8550123..fa8bdc0 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -14,7 +14,10 @@
 #include <xen/xen-ops.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/hypercall.h>
+#include <asm/arch_timer.h>
 #include <asm/system_misc.h>
+#include <asm/paravirt.h>
+#include <linux/jump_label.h>
 #include <linux/interrupt.h>
 #include <linux/irqreturn.h>
 #include <linux/module.h>
@@ -154,6 +157,19 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
+unsigned long long xen_stolen_accounting(int cpu)
+{
+	struct vcpu_runstate_info state;
+
+	BUG_ON(cpu != smp_processor_id());
+
+	xen_get_runstate_snapshot(&state);
+
+	WARN_ON(state.state != RUNSTATE_running);
+
+	return state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];
+}
+
 static void __init xen_percpu_init(void *unused)
 {
 	struct vcpu_register_vcpu_info info;
@@ -171,6 +187,8 @@ static void __init xen_percpu_init(void *unused)
 	BUG_ON(err);
 	per_cpu(xen_vcpu, cpu) = vcpup;
 
+	xen_setup_runstate_info(cpu);
+
 	enable_percpu_irq(xen_events_irq, 0);
 	put_cpu();
 }
@@ -313,6 +331,9 @@ static int __init xen_init_events(void)
 
 	on_each_cpu(xen_percpu_init, NULL, 0);
 
+	pv_time_ops.steal_clock = xen_stolen_accounting;
+	static_key_slow_inc(&paravirt_steal_enabled);
+
 	return 0;
 }
 postcore_initcall(xen_init_events);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KQ3-0000R3-Aj; Thu, 09 Jan 2014 18:33:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W1KQ0-0000OM-9h
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 18:33:24 +0000
Received: from [85.158.137.68:42269] by server-14.bemta-3.messagelabs.com id
	93/F4-06105-37BEEC25; Thu, 09 Jan 2014 18:33:23 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389292397!4580572!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22355 invoked from network); 9 Jan 2014 18:33:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 18:33:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91397969"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 18:33:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 13:33:20 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W1KPn-0001LZ-07;
	Thu, 09 Jan 2014 18:33:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 9 Jan 2014 18:32:15 +0000
Message-ID: <1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, Ian.Campbell@citrix.com, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	marc.zyngier@arm.com, Catalin.Marinas@arm.com,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	nico@linaro.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org, cov@codeaurora.org
Subject: [Xen-devel] [PATCH v9 4/5] arm64: introduce CONFIG_PARAVIRT,
	PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
This necessarily duplicates paravirt.h and paravirt.c from ARM.

The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching is needed.

This allows us to make use of steal_account_process_tick for stolen
ticks accounting.

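The registration pattern the series relies on (install the hook, then flip a key that guards callers) can be sketched in user-space C. This is a simplified mock, not the kernel API: a plain int stands in for the jump-label `struct static_key`, and `xen_stolen_accounting_mock` is a hypothetical placeholder for the real Xen callback:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the structures in the patch. */
struct pv_time_ops {
    uint64_t (*steal_clock)(int cpu);
};
static struct pv_time_ops pv_time_ops;   /* zero-initialized, like the kernel's */
static int paravirt_steal_enabled;       /* mock of the static key */

static uint64_t xen_stolen_accounting_mock(int cpu)
{
    (void)cpu;
    return 42;  /* a real guest would return runnable + offline runstate time */
}

/* Mirrors the ordering in xen_init_events(): install the hook first,
 * then enable the key, so callers never see an enabled key with a
 * NULL steal_clock pointer. */
static void register_steal_clock(void)
{
    pv_time_ops.steal_clock = xen_stolen_accounting_mock;
    paravirt_steal_enabled = 1;
}

/* Guarded accessor, as paravirt_steal_clock() is after the key check. */
static uint64_t paravirt_steal_clock(int cpu)
{
    if (!paravirt_steal_enabled || pv_time_ops.steal_clock == NULL)
        return 0;
    return pv_time_ops.steal_clock(cpu);
}
```

In the kernel the key check is a patched no-op/jump rather than a load, which is why no further runtime pvops patching is needed for this single hook.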
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: will.deacon@arm.com
CC: nico@linaro.org
CC: marc.zyngier@arm.com
CC: cov@codeaurora.org
CC: arnd@arndb.de
CC: olof@lixom.net
CC: Catalin.Marinas@arm.com

---

Changes in v7:
- ifdef CONFIG_PARAVIRT the content of paravirt.h.
---
 arch/arm64/Kconfig                |   20 ++++++++++++++++++++
 arch/arm64/include/asm/paravirt.h |   20 ++++++++++++++++++++
 arch/arm64/kernel/Makefile        |    1 +
 arch/arm64/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
 4 files changed, 66 insertions(+)
 create mode 100644 arch/arm64/include/asm/paravirt.h
 create mode 100644 arch/arm64/kernel/paravirt.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22..d1003ba 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -212,6 +212,25 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 
 source "mm/Kconfig"
 
+config PARAVIRT
+	bool "Enable paravirtualization code"
+	---help---
+	  This changes the kernel so it can modify itself when it is run
+	  under a hypervisor, potentially improving performance significantly
+	  over full virtualization.
+
+config PARAVIRT_TIME_ACCOUNTING
+	bool "Paravirtual steal time accounting"
+	select PARAVIRT
+	default n
+	---help---
+	  Select this option to enable fine granularity task steal time
+	  accounting. Time spent executing other tasks in parallel with
+	  the current vCPU is discounted from the vCPU power. To account for
+	  that, there can be a small performance impact.
+
+	  If in doubt, say N here.
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
@@ -220,6 +239,7 @@ config XEN
 	bool "Xen guest support on ARM64 (EXPERIMENTAL)"
 	depends on ARM64 && OF
 	select SWIOTLB_XEN
+	select PARAVIRT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
 
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
new file mode 100644
index 0000000..fd5f428
--- /dev/null
+++ b/arch/arm64/include/asm/paravirt.h
@@ -0,0 +1,20 @@
+#ifndef _ASM_ARM64_PARAVIRT_H
+#define _ASM_ARM64_PARAVIRT_H
+
+#ifdef CONFIG_PARAVIRT
+struct static_key;
+extern struct static_key paravirt_steal_enabled;
+extern struct static_key paravirt_steal_rq_enabled;
+
+struct pv_time_ops {
+	unsigned long long (*steal_clock)(int cpu);
+};
+extern struct pv_time_ops pv_time_ops;
+
+static inline u64 paravirt_steal_clock(int cpu)
+{
+	return pv_time_ops.steal_clock(cpu);
+}
+#endif
+
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 5ba2fd4..1dee735 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
new file mode 100644
index 0000000..53f371e
--- /dev/null
+++ b/arch/arm64/kernel/paravirt.c
@@ -0,0 +1,25 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2013 Citrix Systems
+ *
+ * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
+ */
+
+#include <linux/export.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <asm/paravirt.h>
+
+struct static_key paravirt_steal_enabled;
+struct static_key paravirt_steal_rq_enabled;
+
+struct pv_time_ops pv_time_ops;
+EXPORT_SYMBOL_GPL(pv_time_ops);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 18:43:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KZg-0001lj-1Z; Thu, 09 Jan 2014 18:43:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <francesco.gringoli@ing.unibs.it>) id 1W1KZe-0001le-09
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 18:43:22 +0000
Received: from [85.158.143.35:26970] by server-2.bemta-4.messagelabs.com id
	E9/4B-11386-9CDEEC25; Thu, 09 Jan 2014 18:43:21 +0000
X-Env-Sender: francesco.gringoli@ing.unibs.it
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389293000!10731080!1
X-Originating-IP: [192.167.20.249]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15054 invoked from network); 9 Jan 2014 18:43:20 -0000
Received: from zmx2.ing.unibs.it (HELO zmx2.ing.unibs.it) (192.167.20.249)
	by server-3.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 18:43:20 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
From xen-devel-bounces@lists.xen.org Thu Jan 09 18:43:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 18:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KZg-0001lj-1Z; Thu, 09 Jan 2014 18:43:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <francesco.gringoli@ing.unibs.it>) id 1W1KZe-0001le-09
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 18:43:22 +0000
Received: from [85.158.143.35:26970] by server-2.bemta-4.messagelabs.com id
	E9/4B-11386-9CDEEC25; Thu, 09 Jan 2014 18:43:21 +0000
X-Env-Sender: francesco.gringoli@ing.unibs.it
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389293000!10731080!1
X-Originating-IP: [192.167.20.249]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15054 invoked from network); 9 Jan 2014 18:43:20 -0000
Received: from zmx2.ing.unibs.it (HELO zmx2.ing.unibs.it) (192.167.20.249)
	by server-3.tower-21.messagelabs.com with SMTP;
	9 Jan 2014 18:43:20 -0000
Received: from localhost (localhost.localdomain [127.0.0.1])
	by zmx2.ing.unibs.it (Postfix) with ESMTP id 079F6276E14;
	Thu,  9 Jan 2014 19:43:10 +0100 (CET)
X-Virus-Scanned: amavisd-new at ing.unibs.it
Received: from zmx2.ing.unibs.it ([127.0.0.1])
	by localhost (zmx2.ing.unibs.it [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id La7q3LpGYK0x; Thu,  9 Jan 2014 19:43:04 +0100 (CET)
Received: from [10.20.10.12] (unknown [10.20.10.12])
	by zmx2.ing.unibs.it (Postfix) with ESMTPSA id 84775276DE3;
	Thu,  9 Jan 2014 19:43:04 +0100 (CET)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: francesco.gringoli@ing.unibs.it
In-Reply-To: <CANoehN-HmJyv3VY-s-00jhEp-SPbOQGJhRDpOuUdoUtQB3RsVg@mail.gmail.com>
Date: Thu, 9 Jan 2014 19:43:04 +0100
Message-Id: <DAC57AF2-E5F4-49A6-BBE9-098F94809FF1@ing.unibs.it>
References: <0d0b4c75-5a62-4d7c-9b4b-f4998257e398@chromium.org>
	<CALiw-2Efb41+=+iv3Q6oTKS-g7FsdCnxq3zLpvx_PX787UXdcg@mail.gmail.com>
	<alpine.DEB.2.02.1304102022330.5353@kaball.uk.xensource.com>
	<51668C9B.8030607@citrix.com> <5169062B.5080909@gmail.com>
	<516998D0.5090805@gmail.com>
	<CAAbOSc=iCd1Lrv5tGrwjp2g-Zd84n6kGAvUFFuDiUYQun3z_6A@mail.gmail.com>
	<CAJJyHj+h832_fjg0FUtDH6qzkXwmhvJCBMJJumpZhO-LkgoLuw@mail.gmail.com>
	<516A37B1.7070902@gmail.com>
	<e0fc2703-6eb7-4667-b13b-4ccaf69502d5@chromium.org>
	<CANoehN-HmJyv3VY-s-00jhEp-SPbOQGJhRDpOuUdoUtQB3RsVg@mail.gmail.com>
To: John Johnson <lausgans@gmail.com>
X-Mailer: Apple Mail (2.1510)
Cc: chromium-os-discuss@chromium.org, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Fwd: [cros-discuss] Boot in Hyp mode on Samsung ARM
	Chromebook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 9, 2014, at 8:36 AM, John Johnson <lausgans@gmail.com> wrote:

> Meanwhile, i've tried to boot just bare linux-chromebook kernel from
> xen repo to no avail. Same symptoms ("I don't see anything on the
> display")...
> Nothing of these hand't helped:
> 1. Usage of defconfig from kernel repo from chromium.googlesource.com
> 2. Attaching both new exynos5250-snow-rev4 & exynos5250-snow-rev5 fdt
> blobs instead of an old exynos5250-snow one.
> Ideas?

Hello John,

I spent a lot of time back in September/October debugging this. Unfortunately I had to stop; this is what I remember:

1) Resetting the Chromebook (at least on my hardware revision) turns off power to the RAM for a while, so everything in the RAM log is lost.

2) To avoid 1) I had to do a soft reset before the crash, so I started putting soft-resets in early Xen code, and magically I started accessing the RAM log. Pay attention: after paging is turned on, you have to map a page at the top of the page vector to keep using soft reset.

3) It turned out that Xen boots and also starts booting the Linux kernel (btw you can raise soft-resets also from within Linux, and you can write from Linux to a different RAM log). Pay attention also in Linux: when paging starts, the address of the soft reset register changes. I don't remember where the dom0 boot hangs...

I'm restarting work on this; the first goal is to understand exactly where the dom0 kernel boot process crashes.

Best regards,
-Francesco

Linux boot process goes fairly beyond 
> 
> ---------- Forwarded message ----------
> From:  <lausgans@>
> Date: 2013/12/17
> Subject: Re: [Xen-devel] [cros-discuss] Boot in Hyp mode on Samsung
> ARM Chromebook
> To: chromium-os-discuss@chromium.org
> Cc: Anthony PERARD <anthony.perard@>, Mike Frysinger <vapier@>, Ian
> Campbell <Ian.Campbell@>, Xen Devel <xen-devel@lists.xen.org>,
> Francesco Gringoli <francesco.gringoli@>, Chen Baozi <baozich@>
> 
> 
>> 17.12.2013, 13:31, "Ian Campbell" <Ian.Campbell@xxxxxxxx>:
>>> On Tue, 2013-12-17 at 05:37 +0000, Held Bier wrote:
>>> 
>>>>  <francesco.gringoli <at> ing.unibs.it> writes:
>>>>>> ...
>>>>> ...
>>>>> After reboot I don't see anything on the
>>>>> display and this is expected: unfortunately after rebooting into
>>>>>  either linux or chromeos I don't see logs saved in dmesg-ramoops.
>>>>> ...
>>>>> I'm wondering if the hardware could have been changed in the
>>>>> meanwhile: Chen, would you mind please post
>>>>> somewhere the unsigned image that you used to get those interesting
>>>>> dmesg-ramoops messages? so that I can
>>>>> try to see after reboot if I'm getting something similar in /dev/pstore ?
>>>>> ...
>>>> PING!
>>> 
>>> Did you mean to also send this to cros-discuss where it appears the
>>> people you are quoting are subscribed? Very little of this discussion
>>> seems to have happened on xen-devel.
>>> 
>>> You might also want to consider CCing the people/person you are
>>> addressing with your ping directly.
>>> 
>>> Ian.
> 
> 
> Done. Thanks, Ian.
> Though, an original was sent only in xen lists:
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg00912.html
> 
>>>> I'm experiencing exactly the same, though upon natively built tools
>>>> (while Francesco cross-compiles).
> 
> P.S.: Maybe drop current subject of this thread? Though secure mode
> escape hack had removed from recent trees, Chromebook Xen tree still
> has it.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Kqf-0002bo-AX; Thu, 09 Jan 2014 19:00:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1Kqe-0002bj-F1
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 19:00:56 +0000
Received: from [85.158.139.211:62178] by server-17.bemta-5.messagelabs.com id
	AB/C4-19152-7E1FEC25; Thu, 09 Jan 2014 19:00:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389294052!8867405!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2195 invoked from network); 9 Jan 2014 19:00:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89265315"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 19:00:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 14:00:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1KqY-0004Nr-SW;
	Thu, 09 Jan 2014 19:00:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1KqY-0002cu-Rs;
	Thu, 09 Jan 2014 19:00:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24318-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 19:00:50 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24318: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1961382166632400417=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1961382166632400417==
Content-Type: text/plain

flight 24318 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24318/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 22383
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22383

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard to track down VM corruption issue caused by exactly
    this issue.  Xenconsoled would connect to a junk mfn and increment the ring
    pointer if the junk happened to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libxl: xl uptime doesn't require an argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should list uptime for each domain when there are no
    parameters.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix usage of CHK_ERRNO in do_daemonize and also remove the usage of a
    bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd for -1 on failure is no longer true, because
    libxl__create_qemu_logfile will now return ERROR_FAIL on failure which
    is -3.
    
    While there also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures in this function is rather
    difficult
    given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============1961382166632400417==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1961382166632400417==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Kql-0002cT-J7; Thu, 09 Jan 2014 19:01:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1Kqi-0002bz-4U
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:01:00 +0000
Received: from [85.158.139.211:62386] by server-10.bemta-5.messagelabs.com id
	41/B6-01405-BE1FEC25; Thu, 09 Jan 2014 19:00:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389294057!8828674!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9306 invoked from network); 9 Jan 2014 19:00:58 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 19:00:58 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09J0rgv004873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 19:00:53 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09J0q6C020587
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 19:00:53 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09J0qOG008538; Thu, 9 Jan 2014 19:00:52 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 11:00:51 -0800
Date: Thu, 9 Jan 2014 14:00:50 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ross Philipson <ross.philipson@citrix.com>
Message-ID: <20140109190049.GB17806@pegasus.dumpdata.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CEB663.4070609@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 09:46:59AM -0500, Ross Philipson wrote:
> On 01/09/2014 05:06 AM, Ian Campbell wrote:
> >On Wed, 2014-01-08 at 18:29 +0000, Ross Philipson wrote:
> >>>-----Original Message-----
> >>>From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> >>>bounces@lists.xen.org] On Behalf Of Ian Campbell
> >>>Sent: Tuesday, January 07, 2014 8:56 AM
> >>>To: Zhang, Eniac
> >>>Cc: xen-devel@lists.xen.org
> >>>Subject: Re: [Xen-devel] passing smbios table from qemu
> >>>
> >>>On Mon, 2014-01-06 at 21:01 +0000, Zhang, Eniac wrote:
> >>>
> >>>>Question, am I missing anything, or is this feature (passing smbios)
> >>>>still a work in progress?
> >>>
> >>>Under Xen smbios tables are supplied via hvmloader, not via qemu.
> >>>
> >>>What tables and or values do you want to override/supply?
> >>>
> >>>I believe that libxc supports passing in extra smbios tables when
> >>>building the guest (via struct xc_hvm_build_args.smbios_module) but
> >>>nothing has been plumbed in to make use of this.
> >>>
> >>>I'm not aware of any ongoing work to plumb that stuff further up, e.g.
> >>>to libxl and xl or other toolstacks. (I think the libxc functionality is
> >>>only consumed by the XenClient toolstack).
> >>
> >>Just FYI, I did go back and add the support (and docs) for it in
> >>libxl. I did this after the first set of patches went in per someone's
> >>request (can't recall who it was at the moment).
> >
> >Ah yes, here it is:
> >        smbios_firmware="STRING"
> >            Specify a path to a file that contains extra SMBIOS firmware
> >            structures to pass in to a guest. The file can contain a set of DMTF
> >            predefined structures which will override the internal defaults.
> >            Not all predefined structures can be overridden, only the
> >            following types: 0, 1, 2, 3, 11, 22, 39. The file can also contain
> >            any number of vendor defined SMBIOS structures (type 128 - 255).
> >            Since SMBIOS structures do not present their overall size, each
> >            entry in the file must be preceded by a 32b integer indicating the
> >            size of the next structure.
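As a sketch only (helper names here are hypothetical, and the 32-bit size prefix is assumed to be native little-endian as consumed on x86), a blob in the format described above could be built like this, using a type 11 "OEM strings" structure as the vendor-visible example:

```python
import struct

def smbios_type11(strings):
    # SMBIOS structure: 4-byte header (type, length, handle), formatted
    # fields, then a string-set where each string is NUL-terminated and
    # the set ends with one extra NUL.
    formatted = struct.pack("<BBHB", 11, 5, 0, len(strings))  # type 11, length 5
    tail = b"".join(s.encode("ascii") + b"\0" for s in strings) + b"\0"
    return formatted + tail

def write_smbios_blob(path, structures):
    with open(path, "wb") as f:
        for s in structures:
            # Each entry is preceded by a 32-bit size, per the description above.
            f.write(struct.pack("<I", len(s)))
            f.write(s)

write_smbios_blob("smbios.bin", [smbios_type11(["Example OEM string"])])
```

The resulting file would then be referenced from the guest config via smbios_firmware="/path/to/smbios.bin".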
> >
> >Did you not have a tool/library for helping to create such blobs
> >somewhere? Or is my memory playing tricks?
> 
> Your memory is intact; I did provide a helper library. I posted it
> as a tarball since I could not figure out where such a thing might
> live in the xen tree. I posted it twice - the second time with some
> fixes:
> 
> http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html

Would it make sense to have it as part of the Xen tree?
It looks to be in good shape.
> 
> 
> >
> >Ian.
> >
> >
> >-----
> >No virus found in this message.
> >Checked by AVG - www.avg.com
> >Version: 2014.0.4259 / Virus Database: 3658/6986 - Release Date: 01/08/14
> >
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Kqf-0002bo-AX; Thu, 09 Jan 2014 19:00:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1Kqe-0002bj-F1
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 19:00:56 +0000
Received: from [85.158.139.211:62178] by server-17.bemta-5.messagelabs.com id
	AB/C4-19152-7E1FEC25; Thu, 09 Jan 2014 19:00:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389294052!8867405!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2195 invoked from network); 9 Jan 2014 19:00:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89265315"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 19:00:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 14:00:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1KqY-0004Nr-SW;
	Thu, 09 Jan 2014 19:00:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1KqY-0002cu-Rs;
	Thu, 09 Jan 2014 19:00:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24318-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 9 Jan 2014 19:00:50 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24318: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1961382166632400417=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1961382166632400417==
Content-Type: text/plain

flight 24318 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24318/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 22383
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22383

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard to track down VM corruption issue caused by exactly
    this issue.  Xenconsoled would connect to a junk mfn and increment the ring
    pointer if the junk happened to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libxl: xl uptime doesn't require argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should be to list the uptime of each domain when there
    are no parameters.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)
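In general terms, the check added here is a bounds test on the length field read from the wire, applied before that length is used to size a read or allocation. A simplified model (Python, not the actual C client code; 4096 is XENSTORE_PAYLOAD_MAX from Xen's public io/xs_wire.h header):

```python
XENSTORE_PAYLOAD_MAX = 4096  # from xen/include/public/io/xs_wire.h

def check_body_len(body_len):
    # Reject implausible lengths from the wire before allocating a buffer
    # or reading that many bytes from the ring.
    if body_len > XENSTORE_PAYLOAD_MAX:
        raise ValueError("xenstore message body too large: %d" % body_len)
    return body_len
```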

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix usage of CHK_ERRNO in do_daemonize and also remove the usage of a
    bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd for -1 on failure is no longer true, because
    libxl__create_qemu_logfile will now return ERROR_FAIL on failure which
    is -3.
    
    While there also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures in this function is rather difficult
    given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)
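The pattern being corrected can be modeled like this (Python needs an explicit 32-bit mask to imitate C's int multiplication; the values are illustrative, not taken from libxl):

```python
def mul_as_i32(a, b):
    # Model a C 'int * int' multiply: the product is truncated to 32 bits
    # before any later cast to a wider type can help.
    return (a * b) & 0xFFFFFFFF

period_ms = 5_000_000                 # illustrative value only
bad = mul_as_i32(period_ms, 1000)     # widened only after the 32-bit multiply
good = period_ms * 1000               # widened before multiplying: exact result
```

Here `bad` comes out as 705032704 while `good` is the intended 5000000000, which is exactly the difference between casting after and before the multiply.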

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============1961382166632400417==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1961382166632400417==--

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:09:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1KyY-0003T5-Lb; Thu, 09 Jan 2014 19:09:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1KyX-0003T0-0I
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:09:05 +0000
Received: from [193.109.254.147:35329] by server-2.bemta-14.messagelabs.com id
	85/C9-00361-0D3FEC25; Thu, 09 Jan 2014 19:09:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389294540!7647128!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17151 invoked from network); 9 Jan 2014 19:09:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:09:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89268480"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 19:08:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 14:08:43 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1KyA-0004Pi-EM;
	Thu, 09 Jan 2014 19:08:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1KyA-0003VN-7t;
	Thu, 09 Jan 2014 19:08:42 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21198.62393.819485.361532@mariner.uk.xensource.com>
Date: Thu, 9 Jan 2014 19:08:41 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389186144.4883.60.camel@kazak.uk.xensource.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<1389186144.4883.60.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration"):
> The older method is that the toolstack resets a bunch of state (see
> tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then restarts
> the domain. The domain will see HYPERVISOR_suspend return 0 and will
> continue without any realisation that it is actually running in the
> original domain and not in a new one. This method is supposed to be
> implemented by libxl_domain_resume(suspend_cancel=0) but it is not.

I have looked into this and I think I can fairly simply implement the
old protocol in libxl.  This is necessary, I think, to preserve our
back-to-3.0 ABI compatibility guarantee.

Looking at a modern pvops Linux kernel, it does seem to try to cope with
older hypervisors which don't do the "new" protocol.  So that's a
reasonable thing to start with, but looking at the code in Linux I
suspect it may not actually work very well.  So if anyone has an
ancient test case of some kind, that would be helpful...
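To make the distinction between the two protocols concrete, here is a rough
sketch of the guest-side view (the names and helper are illustrative, not the
actual Linux or libxl code):

```c
#include <stdbool.h>

/* Illustrative only. Under the old protocol the toolstack resets domain
 * state and restarts the domain, so the guest always sees 0 from
 * HYPERVISOR_suspend and goes through its normal resume path without
 * realising it is still the original domain. Under the newer
 * suspend-cancel protocol a cancelled suspend instead returns 1, so the
 * guest can take a lightweight resume path. */
enum suspend_result { SUSPEND_RESUMED = 0, SUSPEND_CANCELLED = 1 };

/* Hypothetical guest-side decision: does the return value require the
 * full resume path (reconnect frontends, rebind event channels, ...)? */
static bool guest_needs_full_resume(int suspend_ret)
{
    return suspend_ret == SUSPEND_RESUMED;
}
```

The point of the old protocol is that the toolstack's state reset makes the
full-resume (return 0) path work even though the guest never left the
original domain.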

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:10:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L0E-00049R-9Q; Thu, 09 Jan 2014 19:10:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1L0C-00049L-U0
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:10:49 +0000
Received: from [85.158.137.68:13300] by server-6.bemta-3.messagelabs.com id
	2F/9C-04868-834FEC25; Thu, 09 Jan 2014 19:10:48 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389294645!4571354!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23827 invoked from network); 9 Jan 2014 19:10:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:10:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="89269330"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 19:10:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 14:10:45 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W1L08-0001ql-Sd;
	Thu, 09 Jan 2014 19:10:44 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 19:10:44 +0000
Message-ID: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment for
	HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
This was listed in the 4.4 development update. A quick skim through
hypervisor vtd changesets suggests the situation has stayed unchanged for
the last 3 years -- at least I didn't find any log message related to "PoD".

Run: git log --since="2010-01-21" xen/drivers/passthrough/vtd
(It was first reported on 2010-01-21)

This patch was tested by setting the memory=, maxmem= and pci=[] parameters
in both HVM and PV guests. In the HVM guest's case I needed to have
claim_mode=0 in /etc/xen/xl.conf to make xl actually create the HVM guest
with PoD mode enabled.
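For reference, a guest config of the shape this patch now rejects would look
something like this (values purely illustrative):

```
# requires claim_mode=0 in /etc/xen/xl.conf for xl to build with PoD
builder = "hvm"
memory  = 512          # target < maxmem => PoD enabled for an HVM guest
maxmem  = 1024
pci     = [ '04:00.0' ]  # device assignment + PoD => now refused by xl
```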
---
 tools/libxl/xl_cmdimpl.c |   29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..59aba7d 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -738,6 +738,7 @@ static void parse_config_data(const char *config_source,
     int pci_msitranslate = 0;
     int pci_permissive = 0;
     int i, e;
+    bool pod_enabled = false;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -916,6 +917,12 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building an HVM domain will enable PoD mode.
+     */
+    pod_enabled = (c_info->type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (b_info->target_memkb < b_info->max_memkb);
+
     libxl_defbool_set(&b_info->claim_mode, claim_mode);
 
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
@@ -1468,9 +1475,9 @@ skip_vfb:
         xlu_cfg_get_defbool(config, "e820_host", &b_info->u.pv.e820_host, 0);
     }
 
+    d_config->num_pcidevs = 0;
+    d_config->pcidevs = NULL;
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
             libxl_device_pci *pcidev;
 
@@ -1488,6 +1495,24 @@ skip_vfb:
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
+    /* We cannot have PoD and PCI device assignment at the same
+     * time. The VT-d engine needs to set up the entire page table for
+     * the domain. However, if PoD is enabled, un-populated memory is
+     * marked as populate_on_demand, and the VT-d engine won't set up
+     * page tables for it. Therefore any DMA towards that memory may
+     * cause a DMA fault.
+     *
+     * This is restricted to HVM guests, as only VT-d was relevant
+     * in the Xend counterpart. We're late in the release cycle so the
+     * change should only do what's necessary. We can revisit whether
+     * the same check is needed for PV guests in the future.
+     */
+    if (c_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        fprintf(stderr, "PCI device assignment for HVM guest is not allowed when Populate-on-Demand (PoD) is enabled\n");
+        exit(1);
+    }
+
     switch (xlu_cfg_get_list(config, "cpuid", &cpuids, 0, 1)) {
     case 0:
         {
-- 
1.7.10.4
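Distilled, the refusal added in parse_config_data() amounts to the following
predicate (a standalone restatement with hypothetical names, not the patch
itself):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirror of the patch's logic: an HVM guest gets PoD when its target
 * memory is below its maximum, and PoD is incompatible with PCI
 * passthrough because the IOMMU (VT-d) has no mappings for
 * not-yet-populated pages, so device DMA to them would fault. */
static bool pod_blocks_passthrough(bool is_hvm, int num_pcidevs,
                                   uint64_t target_memkb,
                                   uint64_t max_memkb)
{
    bool pod_enabled = is_hvm && (target_memkb < max_memkb);
    return is_hvm && num_pcidevs > 0 && pod_enabled;
}
```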


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:14:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:14:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L3Y-0004Lu-4m; Thu, 09 Jan 2014 19:14:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wigyori@uid0.hu>) id 1W1L3W-0004Lo-NH
	for xen-devel@lists.xensource.com; Thu, 09 Jan 2014 19:14:14 +0000
Received: from [85.158.139.211:41643] by server-17.bemta-5.messagelabs.com id
	21/40-19152-605FEC25; Thu, 09 Jan 2014 19:14:14 +0000
X-Env-Sender: wigyori@uid0.hu
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389294852!8870125!1
X-Originating-IP: [81.0.124.200]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13794 invoked from network); 9 Jan 2014 19:14:13 -0000
Received: from trabant.uid0.hu (HELO trabant.uid0.hu) (81.0.124.200)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Jan 2014 19:14:13 -0000
Received: from apn-94-44-122-119.vodafone.hu ([94.44.122.119]:51485)
	by trabant.uid0.hu with esmtpsa (Exim 4.80 #2 (Debian))
	id 1W1L3K-0002xi-E8
	from <wigyori@uid0.hu>; Thu, 09 Jan 2014 20:14:06 +0100
Date: Thu, 09 Jan 2014 20:13:58 +0100
Message-ID: <xocg2t9u8bji0r73dls3pkqc.1389294838857@email.android.com>
From: Zoltan HERPAI <wigyori@uid0.hu>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
MIME-Version: 1.0
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"	marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"	linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Olof Johansson <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org	"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v9 0/5] xen/arm/arm64: CONFIG_PARAVIRT and
 stolen ticks accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

>Hi all,
>this patch series introduces stolen ticks accounting for Xen on ARM and
>ARM64.
>Stolen ticks are clocksource ticks that have been "stolen" from the cpu,
>typically because Linux is running in a virtual machine and the vcpu has
>been descheduled.
>To account for these ticks we introduce CONFIG_PARAVIRT and pv_time_ops
>so that we can make use of:
>
>kernel/sched/cputime.c:steal_account_process_tick
>
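The accounting this enables is simple once the hypervisor exposes per-vcpu
runstate times; schematically (a simplified restatement, not the series'
actual code):

```c
#include <stdint.h>

/* Schematic steal-time accounting in the spirit of
 * steal_account_process_tick(): the hypervisor exports per-vcpu
 * runstate times, and "stolen" time is time spent runnable but
 * descheduled, plus offline time. Only the delta since the last
 * snapshot is accounted each tick. All names here are illustrative. */
struct runstate_snapshot {
    uint64_t running, runnable, blocked, offline; /* nanoseconds */
};

static uint64_t steal_since(const struct runstate_snapshot *now,
                            const struct runstate_snapshot *last)
{
    uint64_t stolen_now  = now->runnable  + now->offline;
    uint64_t stolen_last = last->runnable + last->offline;
    return stolen_now - stolen_last;
}
```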
>
>Changes in v9:
>- added back missing new files from patches;
>- fix compilation on avr32 (remove patch #5, revert to previous version
>  of patch #2).
>
>
>
>Stefano Stabellini (5):
>      xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
>      kernel: missing include in cputime.c
>      arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
>      arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
>      xen/arm: account for stolen ticks
>
> arch/arm/Kconfig                  |   20 ++++++++
> arch/arm/include/asm/paravirt.h   |   20 ++++++++
> arch/arm/kernel/Makefile          |    2 +
> arch/arm/kernel/paravirt.c        |   25 ++++++++++
> arch/arm/xen/enlighten.c          |   21 +++++++++
> arch/arm64/Kconfig                |   20 ++++++++
> arch/arm64/include/asm/paravirt.h |   20 ++++++++
> arch/arm64/kernel/Makefile        |    1 +
> arch/arm64/kernel/paravirt.c      |   25 ++++++++++
> arch/ia64/xen/time.c              |   48 +++----------------
> arch/x86/xen/time.c               |   76 +------------------------------
> drivers/xen/Makefile              |    2 +-
> drivers/xen/time.c                |   91 +++++++++++++++++++++++++++++++++++++
> include/xen/xen-ops.h             |    5 ++
> kernel/sched/cputime.c            |    3 ++
> 15 files changed, 261 insertions(+), 118 deletions(-)
> create mode 100644 arch/arm/include/asm/paravirt.h
> create mode 100644 arch/arm/kernel/paravirt.c
> create mode 100644 arch/arm64/include/asm/paravirt.h
> create mode 100644 arch/arm64/kernel/paravirt.c
> create mode 100644 drivers/xen/time.c
>
>
>git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_9
>
>
>Cheers,
>
>Stefano
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L5K-0004Sn-17; Thu, 09 Jan 2014 19:16:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1L5I-0004Sd-4r
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:16:04 +0000
Received: from [85.158.139.211:54959] by server-6.bemta-5.messagelabs.com id
	BB/CD-16310-375FEC25; Thu, 09 Jan 2014 19:16:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389294961!8877869!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3482 invoked from network); 9 Jan 2014 19:16:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:16:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91415973"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 19:16:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 14:16:00 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1L5E-0001ub-3B;
	Thu, 09 Jan 2014 19:16:00 +0000
Date: Thu, 9 Jan 2014 19:16:00 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140109191600.GA29180@zion.uk.xensource.com>
References: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 07:10:44PM +0000, Wei Liu wrote:
> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> device assignment if PoD is enabled.").
> 
> Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrx.com>
> Cc: Ian Jackson <ian.jackson@eu.citrx.com>

Oops... Missing "i". Will resend...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L5R-0004Tl-Hf; Thu, 09 Jan 2014 19:16:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W1Kgt-0002XQ-Dd
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 18:50:51 +0000
Received: from [193.109.254.147:2597] by server-7.bemta-14.messagelabs.com id
	A6/E6-15500-A8FEEC25; Thu, 09 Jan 2014 18:50:50 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389293448!9840039!1
X-Originating-IP: [144.92.197.226]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19846 invoked from network); 9 Jan 2014 18:50:49 -0000
Received: from wmauth3.doit.wisc.edu (HELO smtpauth3.wiscmail.wisc.edu)
	(144.92.197.226)
	by server-9.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	9 Jan 2014 18:50:49 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth3.wiscmail.wisc.edu by
	smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZ500200CXBNW00@smtpauth3.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Thu, 09 Jan 2014 12:50:48 -0600 (CST)
X-Spam-PmxInfo: Server=avs-3, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.9.183915,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from wanderer.tachypleus.net
	(pool-72-66-107-173.washdc.fios.verizon.net [72.66.107.173])
	by smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZ500510DOK7500@smtpauth3.wiscmail.wisc.edu>; Thu,
	09 Jan 2014 12:50:46 -0600 (CST)
Message-id: <52CEEF84.2070701@freebsd.org>
Date: Thu, 09 Jan 2014 13:50:44 -0500
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.2.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	Julien Grall <julien.grall@linaro.org>
References: <1388677433-49525-1-git-send-email-roger.pau@citrix.com>
	<1388677433-49525-16-git-send-email-roger.pau@citrix.com>
	<52C9D4CA.6070403@linaro.org> <52CA78DE.9060502@citrix.com>
	<52CA9481.4090703@linaro.org> <52CBBB05.6020104@citrix.com>
	<52CC0EE8.6060205@linaro.org> <52CECEAA.107@citrix.com>
In-reply-to: <52CECEAA.107@citrix.com>
X-Enigmail-Version: 1.6
X-Mailman-Approved-At: Thu, 09 Jan 2014 19:16:12 +0000
Cc: xen-devel@lists.xen.org, julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v9 15/19] xen: create a Xen nexus to use in
 PV/PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/09/14 11:30, Roger Pau Monné wrote:
> On 07/01/14 15:27, Julien Grall wrote:
>> On 01/07/2014 08:29 AM, Roger Pau Monné wrote:
>>> On 06/01/14 12:33, Julien Grall wrote:
>>>>
>>>> On 01/06/2014 09:35 AM, Roger Pau Monné wrote:
>>>>> On 05/01/14 22:55, Julien Grall wrote:
>>>>>>
>>>>>> On 01/02/2014 03:43 PM, Roger Pau Monne wrote:
>>>>>>> Introduce a Xen specific nexus that is going to be in charge for
>>>>>>> attaching Xen specific devices.
>>>>>> Now that we have a xenpv bus, do we really need a specific nexus for
>>>>>> Xen?
>>>>>> We should be able to use the identify callback of xenpv to create the
>>>>>> bus.
>>>>>>
>>>>>> The other part of this patch can be merged in the patch #14 "Introduce
>>>>>> xenpv bus and a dummy pvcpu device".
>>>>> On x86 at least we need the Xen specific nexus, or we will fall back to
>>>>> use the legacy nexus which is not what we really want.
>>>>>
>>>> Oh right, in any case can we use the identify callback of xenpv to add
>>>> the bus?
>>> AFAICT this kind of bus devices don't have a identify routine, and they
>>> are usually added manually from the specific nexus, see acpi or legacy.
>>> Could you add the device on ARM when you detect that you are running as
>>> a Xen guest, or in the generic ARM nexus if Xen is detected?
>> Is there any reason to not add identify callback? If it's possible, I
>> would like to avoid as much as possible #ifdef XENHVM in ARM code.
> Maybe the x86 world is really different from the ARM world in how nexus
> works, but I rather prefer to have a #ifdef XENHVM and a BUS_ADD_CHILD
> that attaches the xenpv bus in the generic ARM nexus rather than having
> something that completely diverges from what buses usually do in
> FreeBSD. It's going to be much more difficult to track in case of bugs,
> and it's not what people expects, but that's just my opinion. I can
> certainly add the identify routine if there's an agreement that it's the
> best way to deal with it.
>
> Roger.
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
>

Attaching sub-devices to nexus using device_identify() is the usual way
to do this kind of thing. Note that if you do this, your device_probe()
routine should return BUS_PROBE_NOWILDCARD to deal with platforms (ARM,
MIPS, PowerPC, sparc64) with real autoconfigured devices hanging
directly from nexus.
-Nathan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
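
[Editorial sketch] Nathan's pattern — add the child from a device_identify()
routine and return BUS_PROBE_NOWILDCARD from probe — can be illustrated with a
minimal FreeBSD newbus driver skeleton. This is kernel code and only builds
inside a FreeBSD kernel source tree; the "xenpv" name is taken from the
thread, and the Xen-detection logic is elided:

```c
/* Illustrative-only sketch of the newbus pattern described above.
 * Assumes a FreeBSD kernel build environment of the era discussed. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/bus.h>

static void
xenpv_identify(driver_t *driver, device_t parent)
{
	/* Add the bus under nexus exactly once; a real driver would
	 * first check that it is running as a Xen guest. */
	if (device_find_child(parent, "xenpv", -1) == NULL)
		BUS_ADD_CHILD(parent, 0, "xenpv", -1);
}

static int
xenpv_probe(device_t dev)
{
	device_set_desc(dev, "Xen PV bus");
	/* NOWILDCARD: never claim wildcard children of nexus, which
	 * matters on platforms (ARM, MIPS, PowerPC, sparc64) where
	 * real autoconfigured devices hang directly from nexus. */
	return (BUS_PROBE_NOWILDCARD);
}

static int
xenpv_attach(device_t dev)
{
	return (0);
}

static device_method_t xenpv_methods[] = {
	DEVMETHOD(device_identify, xenpv_identify),
	DEVMETHOD(device_probe, xenpv_probe),
	DEVMETHOD(device_attach, xenpv_attach),
	DEVMETHOD_END
};

static driver_t xenpv_driver = {
	"xenpv",
	xenpv_methods,
	0,
};

static devclass_t xenpv_devclass;
DRIVER_MODULE(xenpv, nexus, xenpv_driver, xenpv_devclass, 0, 0);
```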

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:17:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L6e-0004ew-6L; Thu, 09 Jan 2014 19:17:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1L6d-0004el-2t
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:17:27 +0000
Received: from [85.158.143.35:21754] by server-1.bemta-4.messagelabs.com id
	EB/BE-02132-6C5FEC25; Thu, 09 Jan 2014 19:17:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389295038!10792564!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25259 invoked from network); 9 Jan 2014 19:17:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91416337"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 19:17:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 14:17:03 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W1L6F-0001w0-GI;
	Thu, 09 Jan 2014 19:17:03 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 19:17:03 +0000
Message-ID: <1389295023-25507-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment for
	HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
This was listed in the 4.4 development update. A quick skim through the
hypervisor VT-d changesets suggests the situation has stayed unchanged for
the past three years -- at least I didn't find any log message related to "PoD".

Ran: git log --since="2010-01-21" xen/drivers/passthrough/vtd
(It was first reported on 2010-01-21)

This patch was tested by setting the memory=, maxmem= and pci=[] parameters
for both HVM and PV guests. In the HVM case I needed claim_mode=0 in
/etc/xen/xl.conf to make xl actually create the HVM guest with PoD mode
enabled.
---
 tools/libxl/xl_cmdimpl.c |   29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..59aba7d 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -738,6 +738,7 @@ static void parse_config_data(const char *config_source,
     int pci_msitranslate = 0;
     int pci_permissive = 0;
     int i, e;
+    bool pod_enabled = false;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -916,6 +917,12 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building an HVM domain will enable PoD mode.
+     */
+    pod_enabled = (c_info->type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (b_info->target_memkb < b_info->max_memkb);
+
     libxl_defbool_set(&b_info->claim_mode, claim_mode);
 
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
@@ -1468,9 +1475,9 @@ skip_vfb:
         xlu_cfg_get_defbool(config, "e820_host", &b_info->u.pv.e820_host, 0);
     }
 
+    d_config->num_pcidevs = 0;
+    d_config->pcidevs = NULL;
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
             libxl_device_pci *pcidev;
 
@@ -1488,6 +1495,24 @@ skip_vfb:
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
+    /* We cannot have PoD and PCI device assignment at the same
+     * time. The VT-d engine needs to set up the entire page table
+     * for the domain. However, if PoD is enabled, unpopulated memory
+     * is marked as populate_on_demand and the VT-d engine won't set
+     * up page tables for it, so any DMA to that memory may cause a
+     * DMA fault.
+     *
+     * This is restricted to HVM guests, as only VT-d was relevant
+     * in the Xend counterpart. We're late in the release cycle, so
+     * the change should only do what's necessary. We can revisit
+     * this if we need to do the same thing for PV guests later.
+     */
+    if (c_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        fprintf(stderr, "PCI device assignment for HVM guest failed because Populate-on-Demand is enabled\n");
+        exit(1);
+    }
+
     switch (xlu_cfg_get_list(config, "cpuid", &cpuids, 0, 1)) {
     case 0:
         {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:17:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L6e-0004ew-6L; Thu, 09 Jan 2014 19:17:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1L6d-0004el-2t
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:17:27 +0000
Received: from [85.158.143.35:21754] by server-1.bemta-4.messagelabs.com id
	EB/BE-02132-6C5FEC25; Thu, 09 Jan 2014 19:17:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389295038!10792564!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25259 invoked from network); 9 Jan 2014 19:17:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,632,1384300800"; d="scan'208";a="91416337"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 19:17:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 14:17:03 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W1L6F-0001w0-GI;
	Thu, 09 Jan 2014 19:17:03 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 9 Jan 2014 19:17:03 +0000
Message-ID: <1389295023-25507-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment for
	HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
This was listed in 4.4 development update. A quick skim through
hypervisor vtd changesets suggests the situation stays unchanged since 3
years ago -- at least I didn't find any log message related to "PoD".

Rune: git log --since="2010-01-21" xen/drivers/passthrough/vtd
(It was first reported on 2010-01-21)

This patch was tested with the memory=, maxmem= and pci=[] parameters set
in both HVM and PV guests. In the HVM case I needed claim_mode=0 in
/etc/xen/xl.conf to make xl actually create the HVM guest with PoD mode
enabled.
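For illustration, a minimal xl guest configuration along these lines (the
device BDF and memory sizes are made-up values, not from the patch) creates
an HVM guest with PoD active, since memory= below maxmem= enables
populate-on-demand; with this patch, the pci= line makes xl reject it:

```
# Illustrative xl guest config; values are made up.
builder = "hvm"
memory  = 1024          # target memory (MiB)
maxmem  = 2048          # memory < maxmem => PoD is enabled for HVM
pci     = [ '04:00.0' ] # PCI passthrough; now rejected together with PoD
```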
---
 tools/libxl/xl_cmdimpl.c |   29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..59aba7d 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -738,6 +738,7 @@ static void parse_config_data(const char *config_source,
     int pci_msitranslate = 0;
     int pci_permissive = 0;
     int i, e;
+    bool pod_enabled = false;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -916,6 +917,12 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building an HVM domain will enable PoD mode.
+     */
+    pod_enabled = (c_info->type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (b_info->target_memkb < b_info->max_memkb);
+
     libxl_defbool_set(&b_info->claim_mode, claim_mode);
 
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
@@ -1468,9 +1475,9 @@ skip_vfb:
         xlu_cfg_get_defbool(config, "e820_host", &b_info->u.pv.e820_host, 0);
     }
 
+    d_config->num_pcidevs = 0;
+    d_config->pcidevs = NULL;
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
             libxl_device_pci *pcidev;
 
@@ -1488,6 +1495,24 @@ skip_vfb:
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
+    /* We cannot have PoD and PCI device assignment at the same
+     * time. The VT-d engine needs to set up the entire page table for
+     * the domain. However, if PoD is enabled, unpopulated memory is
+     * marked as populate_on_demand, and the VT-d engine won't set up
+     * page tables for it. Therefore any DMA to that memory may cause
+     * a DMA fault.
+     *
+     * This is restricted to HVM guests, as only VT-d was relevant in
+     * the Xend counterpart. We're late in the release cycle, so the
+     * change should only do what's necessary. We can probably revisit
+     * doing the same for PV guests in the future.
+     */
+    if (c_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        fprintf(stderr, "PCI device assignment for HVM guests is not allowed when Populate-on-Demand (PoD) is enabled\n");
+        exit(1);
+    }
+
     switch (xlu_cfg_get_list(config, "cpuid", &cpuids, 0, 1)) {
     case 0:
         {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:20:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:20:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1L9V-0005UW-0Q; Thu, 09 Jan 2014 19:20:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1L9T-0005UK-6W
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:20:23 +0000
Received: from [193.109.254.147:39887] by server-3.bemta-14.messagelabs.com id
	A9/41-11000-676FEC25; Thu, 09 Jan 2014 19:20:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389295220!9844578!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11073 invoked from network); 9 Jan 2014 19:20:21 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 19:20:21 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09JJHSo029159
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 19:19:17 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09JJGLu024132
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 19:19:17 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09JJGRd024126; Thu, 9 Jan 2014 19:19:16 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 11:19:16 -0800
Date: Thu, 9 Jan 2014 14:19:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140109191913.GD17806@pegasus.dumpdata.com>
References: <1389295023-25507-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389295023-25507-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 07:17:03PM +0000, Wei Liu wrote:
> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> device assignment if PoD is enabled.").
> 
> Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> This was listed in 4.4 development update. A quick skim through
> hypervisor vtd changesets suggests the situation stays unchanged since 3
> years ago -- at least I didn't find any log message related to "PoD".
> 
> Rune: git log --since="2010-01-21" xen/drivers/passthrough/vtd
> (It was first reported on 2010-01-21)
> 
> This patch is tested with setting memory=, maxmem= and pci=[] parameters
> in both HVM and PV guests. In HVM guest's case I need to have
> claim_mode=0 in /etc/xen/xl.conf to make xl actually create HVM with PoD
> mode enabled.

Which implies that something is amiss with the PoD memory usage being
greater than or equal to nr_pages for the domain; in other words, it
allocates more memory than it asked for.

We should track that as a bug, I think.
> ---
>  tools/libxl/xl_cmdimpl.c |   29 +++++++++++++++++++++++++++--
>  1 file changed, 27 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index c30f495..59aba7d 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -738,6 +738,7 @@ static void parse_config_data(const char *config_source,
>      int pci_msitranslate = 0;
>      int pci_permissive = 0;
>      int i, e;
> +    bool pod_enabled = false;
>  
>      libxl_domain_create_info *c_info = &d_config->c_info;
>      libxl_domain_build_info *b_info = &d_config->b_info;
> @@ -916,6 +917,12 @@ static void parse_config_data(const char *config_source,
>      if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
>          b_info->max_memkb = l * 1024;
>  
> +    /* If target_memkb is smaller than max_memkb, the subsequent call
> +     * to libxc when building HVM omain will enable PoD mode.
> +     */
> +    pod_enabled = (c_info->type == LIBXL_DOMAIN_TYPE_HVM) &&
> +        (b_info->target_memkb < b_info->max_memkb);
> +
>      libxl_defbool_set(&b_info->claim_mode, claim_mode);
>  
>      if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
> @@ -1468,9 +1475,9 @@ skip_vfb:
>          xlu_cfg_get_defbool(config, "e820_host", &b_info->u.pv.e820_host, 0);
>      }
>  
> +    d_config->num_pcidevs = 0;
> +    d_config->pcidevs = NULL;
>      if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
> -        d_config->num_pcidevs = 0;
> -        d_config->pcidevs = NULL;
>          for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
>              libxl_device_pci *pcidev;
>  
> @@ -1488,6 +1495,24 @@ skip_vfb:
>              libxl_defbool_set(&b_info->u.pv.e820_host, true);
>      }
>  
> +    /* We cannot have PoD and PCI device assignment at the same
> +     * time. VT-d engine needs to set up the entire page table for
> +     * the domain. However if PoD is enabled, un-populated memory is
> +     * marked as populate_on_demand, and VT-d engine won't set up page
> +     * tables for them. Therefore any DMA towards those memory may
> +     * cause DMA fault.
> +     *
> +     * This is restricted to HVM guest, as only VT-d is relevant
> +     * in the counterpart in Xend. We're late in release cycle so the change
> +     * should only does what's necessary. Probably we can revisit if
> +     * we need to do the same thing for PV guest in the future.
> +     */
> +    if (c_info->type == LIBXL_DOMAIN_TYPE_HVM &&
> +        d_config->num_pcidevs && pod_enabled) {
> +        fprintf(stderr, "PCI device assignment for HVM guest failed due to Paging-on-Demand enabled\n");
> +        exit(1);
> +    }
> +
>      switch (xlu_cfg_get_list(config, "cpuid", &cpuids, 0, 1)) {
>      case 0:
>          {
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Lcy-0007WI-E4; Thu, 09 Jan 2014 19:50:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W1Lcw-0007WD-M3
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 19:50:50 +0000
Received: from [85.158.143.35:25225] by server-3.bemta-4.messagelabs.com id
	70/AD-32360-A9DFEC25; Thu, 09 Jan 2014 19:50:50 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389297047!10787117!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6061 invoked from network); 9 Jan 2014 19:50:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 19:50:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09Joiud006244
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 19:50:45 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09Johct020406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 19:50:44 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09Joh5v006641; Thu, 9 Jan 2014 19:50:43 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 11:50:43 -0800
Date: Thu, 9 Jan 2014 20:50:38 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140109195038.GC3633@olila.local.net-space.pl>
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 06:35:19PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
> crash kernel area) causes fatal page faults when loading a crash
> image.  The attempt to zero the first control page allocated from the
> crash region will fault as the VA returned by map_domain_page() has no
> mapping.
>
> The fault will occur on non-debug builds of Xen when the crash area is
> below 5 TiB (which will be most systems).
>
> The assumption that the crash area mapping was not used is incorrect.
> map_domain_page() is used when loading an image and building the
> image's page tables to temporarily map the crash area, thus the
> mapping is required if the crash area is in the direct map area.
>
> Reintroduce the mapping, but only the portions of the crash area that
> are within the direct map area.
>
> Reported-by: Don Slutz <dslutz@verizon.com>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Daniel Kiper <daniel.kiper@oracle.com>
> ---
> This fixes a Xen crash so is an important fix for the 4.4 release..

This really fixes the page fault found by Don. After deeper testing I can
confirm that the issue introduced by commit
7113a45451a9f656deeff070e47672043ed83664 shows up only when Xen is
compiled with the debug option disabled.

By the way, why does map_domain_page() behaviour depend on the debug
option? That is not nice, because this could trap us in more serious
places in the future. Could map_domain_page() work the same way with
and without the debug option?

Anyway...

Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
Tested-by: Daniel Kiper <daniel.kiper@oracle.com>

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:53:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:53:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1LfF-0007dv-4M; Thu, 09 Jan 2014 19:53:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W1LfD-0007dm-DE
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 19:53:11 +0000
Received: from [85.158.137.68:36132] by server-16.bemta-3.messagelabs.com id
	E1/15-26128-62EFEC25; Thu, 09 Jan 2014 19:53:10 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389297188!4576580!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3701 invoked from network); 9 Jan 2014 19:53:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 19:53:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,633,1384300800"; d="scan'208";a="89285057"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 19:53:07 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	14:53:07 -0500
Message-ID: <52CEFE21.8060608@citrix.com>
Date: Thu, 9 Jan 2014 19:53:05 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-2-git-send-email-zoltan.kiss@citrix.com>
	<20140109153010.GE12164@zion.uk.xensource.com>
In-Reply-To: <20140109153010.GE12164@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v3 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 15:30, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 12:10:10AM +0000, Zoltan Kiss wrote:
>> v3:
>> - fix comment in xenvif_tx_dealloc_action()
>> - call unmap hypercall directly instead of gnttab_unmap_refs(), which does
>>    unnecessary m2p_override. Also remove pages_to_[un]map members
>
> Is it worth having another function,
> gnttab_unmap_refs_no_m2p_override, in the Xen core driver, or just
> adding a parameter to control whether we need to touch m2p_override? I
> *think* it will benefit the block driver as well?
>
> (CC Roger and David for input)

Yep, it's worth doing, but let's make that a different patch.
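As a rough illustration of the parameter-based shape being discussed (all names, the struct, and the stub hypercall below are made up for this sketch; this is not the real grant-table code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the real grant-table types and hypercall. */
struct gnttab_unmap_grant_ref_sketch {
    unsigned int handle;
};

static int hypercall_unmap_sketch(struct gnttab_unmap_grant_ref_sketch *unmap,
                                  unsigned int count)
{
    (void)unmap;
    (void)count;
    return 0; /* pretend the unmap hypercall succeeded */
}

static void m2p_remove_override_sketch(unsigned int handle)
{
    printf("m2p override removed for handle %u\n", handle);
}

/*
 * One entry point with a flag instead of a separate
 * gnttab_unmap_refs_no_m2p_override(): callers that never installed an
 * m2p override (like netback here) pass touch_m2p = false and skip the
 * extra bookkeeping.
 */
static int gnttab_unmap_refs_sketch(struct gnttab_unmap_grant_ref_sketch *unmap,
                                    unsigned int count, bool touch_m2p)
{
    int ret = hypercall_unmap_sketch(unmap, count);
    if (ret)
        return ret;
    if (touch_m2p) {
        for (unsigned int i = 0; i < count; i++)
            m2p_remove_override_sketch(unmap[i].handle);
    }
    return 0;
}
```

Either shape (flag parameter or a second exported function) gives the block driver the same fast path; the flag avoids duplicating the loop.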

>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -771,6 +771,19 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	return page;
>>   }
>>
>> +static inline void xenvif_tx_create_gop(struct xenvif *vif, u16 pending_idx,
>> +	       struct xen_netif_tx_request *txp,
>> +	       struct gnttab_map_grant_ref *gop)
>
> Indentation.
I fixed this one and the later ones; hopefully I haven't missed anything.

>
>> +
>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>> +	do {
>> +		pending_idx = ubuf->desc;
>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>> +		index = pending_index(vif->dealloc_prod);
>> +		vif->dealloc_ring[index] = pending_idx;
>> +		/* Sync with xenvif_tx_action_dealloc:
>
> xenvif_tx_dealloc_action I suppose.
Yes.

>> +			/* Already unmapped? */
>> +			if (vif->grant_tx_handle[pending_idx] ==
>> +				NETBACK_INVALID_HANDLE) {
>> +				netdev_err(vif->dev,
>> +					"Trying to unmap invalid handle! "
>> +					"pending_idx: %x\n", pending_idx);
>> +				continue;
>
> You seemed to miss the BUG_ON we discussed?
>
> See thread starting <52AF1A84.3090304@citrix.com>.
Indeed, even though I wrote it in the version history :)
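For context, the change being asked for is roughly the following (a userspace sketch: assert() stands in for the kernel's BUG_ON(), and the array is a stand-in for vif->grant_tx_handle):

```c
#include <assert.h>

#define NETBACK_INVALID_HANDLE 0xFFFFu

/* Userspace stand-in: in the kernel, BUG_ON() crashes on an impossible state. */
#define BUG_ON(cond) assert(!(cond))

static unsigned int grant_tx_handle_sketch[16];

static void dealloc_entry_sketch(unsigned int pending_idx)
{
    /*
     * An already-unmapped entry here means the producer/consumer
     * bookkeeping is broken, so crash loudly rather than logging a
     * netdev_err() and continuing past the corruption.
     */
    BUG_ON(grant_tx_handle_sketch[pending_idx] == NETBACK_INVALID_HANDLE);

    /* ... issue the actual unmap here, then invalidate the handle ... */
    grant_tx_handle_sketch[pending_idx] = NETBACK_INVALID_HANDLE;
}
```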



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 19:59:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 19:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1LlT-0008Po-Vy; Thu, 09 Jan 2014 19:59:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W1LlS-0008Pj-Tp
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 19:59:39 +0000
Received: from [85.158.139.211:38198] by server-4.bemta-5.messagelabs.com id
	45/F9-26791-AAFFEC25; Thu, 09 Jan 2014 19:59:38 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389297574!8866806!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1338 invoked from network); 9 Jan 2014 19:59:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 19:59:36 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s09JxWUn018468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 9 Jan 2014 19:59:32 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s09JxUBY024361
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 9 Jan 2014 19:59:31 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s09JxUk4005811; Thu, 9 Jan 2014 19:59:30 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 11:59:29 -0800
Date: Thu, 9 Jan 2014 20:59:25 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140109195925.GD3633@olila.local.net-space.pl>
References: <1386612106-21488-1-git-send-email-daniel.kiper@oracle.com>
	<52A6DD9E020000780010BAFF@nat28.tlf.novell.com>
	<20131210095916.GI3916@olila.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20131210095916.GI3916@olila.local.net-space.pl>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: andrew.cooper3@citrix.com, keir@xen.org, david.vrabel@citrix.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: Add me as Xen kexec maintainer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Dec 10, 2013 at 10:59:16AM +0100, Daniel Kiper wrote:
> On Tue, Dec 10, 2013 at 08:23:42AM +0000, Jan Beulich wrote:
> > >>> On 09.12.13 at 19:01, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > > If there is no objection and according to earlier discussion
> >
> > What earlier discussion?
>
> Here it is: http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg01689.html

Folks, any comments? Yes or no?

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 20:10:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 20:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Lv6-0000qM-Es; Thu, 09 Jan 2014 20:09:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Lv4-0000oI-Ih
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 20:09:34 +0000
Received: from [193.109.254.147:43964] by server-7.bemta-14.messagelabs.com id
	A0/17-15500-DF10FC25; Thu, 09 Jan 2014 20:09:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389298172!8413649!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17719 invoked from network); 9 Jan 2014 20:09:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 20:09:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,633,1384300800"; d="scan'208";a="91436232"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Jan 2014 20:09:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 9 Jan 2014 15:09:31 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1W1Lv0-0002dr-PV;
	Thu, 09 Jan 2014 20:09:30 +0000
Message-ID: <1389298170.6917.44.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Thu, 9 Jan 2014 20:09:30 +0000
In-Reply-To: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
References: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrx.com>, Ian
	Campbell <ian.campbell@citrx.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 19:10 +0000, Wei Liu wrote:
> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> device assignment if PoD is enabled.").

Thanks, but I think this should be handled in libxl for the benefit of
all toolstacks. libxl_domain_create has all the necessary inputs to make
the decision and log + return ERROR_INVAL?
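A minimal sketch of what such a libxl-side check might look like (the struct and the ERROR_INVAL value below are illustrative stand-ins, not libxl's real types):

```c
#include <assert.h>
#include <stdio.h>

#define ERROR_INVAL_SKETCH (-6) /* stand-in for libxl's ERROR_INVAL */

/* Stand-in for the relevant bits of the libxl domain config. */
struct domain_config_sketch {
    int num_pcidevs;    /* PCI devices requested for passthrough */
    long target_memkb;  /* PoD target */
    long max_memkb;     /* maximum memory */
};

/*
 * PoD is in effect when the target is below the maximum; combining
 * that with PCI passthrough is the case to reject early, before any
 * domain state is created.
 */
static int validate_pod_vs_pci_sketch(const struct domain_config_sketch *cfg)
{
    int pod_enabled = cfg->target_memkb < cfg->max_memkb;

    if (pod_enabled && cfg->num_pcidevs > 0) {
        fprintf(stderr, "PCI passthrough is incompatible with PoD\n");
        return ERROR_INVAL_SKETCH;
    }
    return 0;
}
```

Doing the check in domain-creation validation rather than in xl means every toolstack built on libxl gets the same rejection.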



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 22:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 22:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1NfI-0008JR-7y; Thu, 09 Jan 2014 22:01:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1NfG-0008JM-2q
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 22:01:22 +0000
Received: from [193.109.254.147:49138] by server-10.bemta-14.messagelabs.com
	id ED/A8-20752-13C1FC25; Thu, 09 Jan 2014 22:01:21 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389304880!6424106!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15678 invoked from network); 9 Jan 2014 22:01:20 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 22:01:20 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389304880; l=3835;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yWy+gyw/Vvm1MFDDGz7Jv+UU1Aw=;
	b=yK7AaqqzfpcJF3esAYYGMeRr6vki5OqB8za8MZ9wKQJf1AlU9CQpjdNKrTObnQw3VgG
	w+ScrfohowT+piCCBt7DsCrPGecbzmtOWvj5iLX+MRGlKYh+9nRgqqQMpfJW/eJpjMyME
	dXw9Hg38nRQL+JKdrNCgX21NiJ52LHR681M=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJwKkjb5r/WwRRF6g==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-082-094.pools.arcor-ip.net [88.65.82.94])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id Z07de2q09M1JJiu ; 
	Thu, 9 Jan 2014 23:01:19 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 79F335024C; Thu,  9 Jan 2014 23:01:19 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu,  9 Jan 2014 23:01:15 +0100
Message-Id: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Implement discard support for xen_disk. It makes use of the existing
discard code in qemu.

The discard support is enabled unconditionally. But it would be worth
having a knob to disable it in case the backing file was intentionally
created non-sparse to avoid fragmentation.
How could this knob be passed from domU.cfg:disk=[] to the actual
qemu process?

blkfront_setup_discard should check for "qdisk" instead of (or in
addition to?) "file" to actually make use of this new feature.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 hw/block/xen_blkif.h | 12 ++++++++++++
 hw/block/xen_disk.c  | 16 ++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index c0f4136..711b692 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
@@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 03e30d7..555c2d6 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -68,6 +68,8 @@ struct ioreq {
     int                 presync;
     int                 postsync;
     uint8_t             mapped;
+    int64_t             sector_num;
+    int                 nb_sectors;
 
     /* grant mapping */
     uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
@@ -232,6 +234,7 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 static int ioreq_parse(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
+    struct blkif_request_discard *discard_req = (void *)&ioreq->req;
     uintptr_t mem;
     size_t len;
     int i;
@@ -244,6 +247,10 @@ static int ioreq_parse(struct ioreq *ioreq)
     case BLKIF_OP_READ:
         ioreq->prot = PROT_WRITE; /* to memory */
         break;
+    case BLKIF_OP_DISCARD:
+        ioreq->sector_num = discard_req->sector_number;
+        ioreq->nb_sectors = discard_req->nr_sectors;
+        return 0;
     case BLKIF_OP_FLUSH_DISKCACHE:
         ioreq->presync = 1;
         if (!ioreq->req.nr_segments) {
@@ -521,6 +528,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
                         &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                         qemu_aio_complete, ioreq);
         break;
+    case BLKIF_OP_DISCARD:
+        bdrv_acct_start(blkdev->bs, &ioreq->acct, ioreq->nb_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
+        ioreq->aio_inflight++;
+        bdrv_aio_discard(blkdev->bs,
+                        ioreq->sector_num, ioreq->nb_sectors,
+                        qemu_aio_complete, ioreq);
+        break;
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
         goto err;
@@ -764,6 +778,7 @@ static int blk_init(struct XenDevice *xendev)
      */
     xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
     xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
+    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
     g_free(directiosafe);
@@ -801,6 +816,7 @@ static int blk_connect(struct XenDevice *xendev)
         qflags |= BDRV_O_RDWR;
         readonly = false;
     }
+    qflags |= BDRV_O_UNMAP;
 
     /* init qemu block driver */
     index = (blkdev->xendev.dev - 202 * 256) / 16;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 22:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 22:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1NfI-0008JR-7y; Thu, 09 Jan 2014 22:01:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1NfG-0008JM-2q
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 22:01:22 +0000
Received: from [193.109.254.147:49138] by server-10.bemta-14.messagelabs.com
	id ED/A8-20752-13C1FC25; Thu, 09 Jan 2014 22:01:21 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389304880!6424106!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15678 invoked from network); 9 Jan 2014 22:01:20 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 9 Jan 2014 22:01:20 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389304880; l=3835;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yWy+gyw/Vvm1MFDDGz7Jv+UU1Aw=;
	b=yK7AaqqzfpcJF3esAYYGMeRr6vki5OqB8za8MZ9wKQJf1AlU9CQpjdNKrTObnQw3VgG
	w+ScrfohowT+piCCBt7DsCrPGecbzmtOWvj5iLX+MRGlKYh+9nRgqqQMpfJW/eJpjMyME
	dXw9Hg38nRQL+JKdrNCgX21NiJ52LHR681M=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJwKkjb5r/WwRRF6g==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-088-065-082-094.pools.arcor-ip.net [88.65.82.94])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id Z07de2q09M1JJiu ; 
	Thu, 9 Jan 2014 23:01:19 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 79F335024C; Thu,  9 Jan 2014 23:01:19 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu,  9 Jan 2014 23:01:15 +0100
Message-Id: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Implement discard support for xen_disk. It makes use of the existing
discard code in qemu.

The discard support is enabled unconditionally. But it would be worth
having a knob to disable it in case the backing file was intentionally
created non-sparse to avoid fragmentation.
How could this knob be passed from domU.cfg:disk=[] to the actual
qemu process?

blkfront_setup_discard should check for "qdisk" instead of (or in
addition to?) "file" to actually make use of this new feature.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 hw/block/xen_blkif.h | 12 ++++++++++++
 hw/block/xen_disk.c  | 16 ++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index c0f4136..711b692 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
@@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 03e30d7..555c2d6 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -68,6 +68,8 @@ struct ioreq {
     int                 presync;
     int                 postsync;
     uint8_t             mapped;
+    int64_t             sector_num;
+    int                 nb_sectors;
 
     /* grant mapping */
     uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
@@ -232,6 +234,7 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 static int ioreq_parse(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
+    struct blkif_request_discard *discard_req = (void *)&ioreq->req;
     uintptr_t mem;
     size_t len;
     int i;
@@ -244,6 +247,10 @@ static int ioreq_parse(struct ioreq *ioreq)
     case BLKIF_OP_READ:
         ioreq->prot = PROT_WRITE; /* to memory */
         break;
+    case BLKIF_OP_DISCARD:
+        ioreq->sector_num = discard_req->sector_number;
+        ioreq->nb_sectors = discard_req->nr_sectors;
+        return 0;
     case BLKIF_OP_FLUSH_DISKCACHE:
         ioreq->presync = 1;
         if (!ioreq->req.nr_segments) {
@@ -521,6 +528,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
                         &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                         qemu_aio_complete, ioreq);
         break;
+    case BLKIF_OP_DISCARD:
+        bdrv_acct_start(blkdev->bs, &ioreq->acct, ioreq->nb_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
+        ioreq->aio_inflight++;
+        bdrv_aio_discard(blkdev->bs,
+                        ioreq->sector_num, ioreq->nb_sectors,
+                        qemu_aio_complete, ioreq);
+        break;
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
         goto err;
@@ -764,6 +778,7 @@ static int blk_init(struct XenDevice *xendev)
      */
     xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
     xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
+    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
     g_free(directiosafe);
@@ -801,6 +816,7 @@ static int blk_connect(struct XenDevice *xendev)
         qflags |= BDRV_O_RDWR;
         readonly = false;
     }
+    qflags |= BDRV_O_UNMAP;
 
     /* init qemu block driver */
     index = (blkdev->xendev.dev - 202 * 256) / 16;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 23:03:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 23:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Od5-0003s8-6W; Thu, 09 Jan 2014 23:03:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jljusten@gmail.com>) id 1W1Od3-0003s3-Vn
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 23:03:10 +0000
Received: from [85.158.139.211:35329] by server-9.bemta-5.messagelabs.com id
	D0/0A-15098-DAA2FC25; Thu, 09 Jan 2014 23:03:09 +0000
X-Env-Sender: jljusten@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389308587!8854501!1
X-Originating-IP: [209.85.223.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28257 invoked from network); 9 Jan 2014 23:03:08 -0000
Received: from mail-ie0-f180.google.com (HELO mail-ie0-f180.google.com)
	(209.85.223.180)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 23:03:08 -0000
Received: by mail-ie0-f180.google.com with SMTP id ar20so130572iec.39
	for <xen-devel@lists.xen.org>; Thu, 09 Jan 2014 15:03:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=7lThcXoxLoLh22BSKcAz3rH31aZSud33LwXkJIGSpfw=;
	b=kRbzmkBK/G6OOGh2qKifecZY7nkj8Pp1j2rBDXSS3A9Gg1QM1a4ZMW3DAXWZxOhseV
	eFavW0oogvZ+I1hIyeAwhyS6MUkNHV6fB34zB25I1RCMWJWuws3Lv06bHIAtWHFdbZqP
	gIQoK/W4LCAEYCz9+ad/DMlB/uMZpbaI8l4Jyt5L/iZjT+h+AItidOWUyQZRAcOgF8MN
	zb6O26rF9o6dk/vKPyb6LIvv7UjBCa36lPwNE/k0e7SswvPsKzZP8EhTzgygsz2alIeV
	0K3KFD7KOEyTMCieGSk/gfCyAMU+7BpFJV0tq5Fht3sJLaxLqLPXw0uly27I+XurUqvJ
	+VtA==
MIME-Version: 1.0
X-Received: by 10.50.50.236 with SMTP id f12mr40301628igo.8.1389308586786;
	Thu, 09 Jan 2014 15:03:06 -0800 (PST)
Received: by 10.50.184.232 with HTTP; Thu, 9 Jan 2014 15:03:06 -0800 (PST)
In-Reply-To: <52CF1C02.2030504@redhat.com>
References: <1389228311-2452-1-git-send-email-jordan.l.justen@intel.com>
	<1389228311-2452-17-git-send-email-jordan.l.justen@intel.com>
	<52CF0966.5090404@redhat.com>
	<CAFe8ug__qAuX_4+2inONeCeqY_fU6oAQxYF9h4QCaHNgEpzwFQ@mail.gmail.com>
	<52CF1C02.2030504@redhat.com>
Date: Thu, 9 Jan 2014 15:03:06 -0800
Message-ID: <CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
From: Jordan Justen <jljusten@gmail.com>
To: Laszlo Ersek <lersek@redhat.com>
Cc: "edk2-devel@lists.sourceforge.net" <edk2-devel@lists.sourceforge.net>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [edk2] [PATCH v4 16/26] OvmfPkg: PlatformPei:
 reserve SEC/PEI temp RAM for S3 resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 9, 2014 at 2:00 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> On 01/09/14 22:47, Jordan Justen wrote:
>> On Thu, Jan 9, 2014 at 12:41 PM, Laszlo Ersek <lersek@redhat.com> wrote:
>>> On 01/09/14 01:45, Jordan Justen wrote:
>>>> From: Laszlo Ersek <lersek@redhat.com>
>>>>
>>>> Contributed-under: TianoCore Contribution Agreement 1.0
>>>> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
>>>> [jordan.l.justen@intel.com: move to MemDetect.c; use PCDs]
>>>
>>> PCDs are fine of course, but MemDetect() is not called on Xen
>>> (unless that's the intent, but please explain then).
>>
>> I don't think this series claims to enable S3 for Xen, right?
>>
>> When someone looks at S3 for Xen, I might try to steer them towards
>> having Xen call MemDetect again, and branch off for Xen specific things
>> within MemDetect. I was not too excited about that aspect of r14946.
>
> No, the series doesn't *claim* to do that :), and I didn't test it, but
> since I could not see any immediate blocker when running on Xen, I
> figured we should add the feature generally, and then Xen users could
> happily hunt bugs in the common code. By adding code that doesn't run
> specifically on Xen we're making that harder.

I'll try to update this to make a best effort of having S3 potentially
work for Xen.

We should probably see if someone from xen-devel can verify that we
haven't managed to break normal OVMF boots on Xen (aside from the S3
issue).

-Jordan

> ... I guess at least! :) I don't have proof either way.
>
> Also (but I didn't investigate this in particular) it's not that all
> S3-related stuff is non-Xen only. Some of it seems to be Xen and KVM,
> and some non-Xen only. But again I could be wrong.
>
> Anyway if we declare this, then I'll add my R-b to the patches where Xen
> was my only question.
>
> Thanks!
> Laszlo
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 23:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 23:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Ook-0004cl-Kj; Thu, 09 Jan 2014 23:15:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1Ooj-0004cg-6j
	for xen-devel@lists.xen.org; Thu, 09 Jan 2014 23:15:13 +0000
Received: from [85.158.143.35:23008] by server-1.bemta-4.messagelabs.com id
	25/39-02132-08D2FC25; Thu, 09 Jan 2014 23:15:12 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389309310!10763330!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15350 invoked from network); 9 Jan 2014 23:15:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Jan 2014 23:15:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,634,1384300800"; d="scan'208";a="89356708"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Jan 2014 23:15:09 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 9 Jan 2014 18:15:09 -0500
Received: from [192.168.0.17] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Fri, 10 Jan 2014
	00:15:07 +0100
Message-ID: <52CF2D78.4@citrix.com>
Date: Thu, 9 Jan 2014 23:15:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>, David Vrabel
	<david.vrabel@citrix.com>
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
	<20140109195038.GC3633@olila.local.net-space.pl>
In-Reply-To: <20140109195038.GC3633@olila.local.net-space.pl>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: Jan Beulich <jbeulich@suse.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/2014 19:50, Daniel Kiper wrote:
> By the way, why does map_domain_page() behavior depend on the debug
> option? It is not nice, because we could be trapped by this in the
> future in more serious places. Could map_domain_page() work in the
> same way with or without the debug option?

With a debug build of Xen, map_domain_page() always mutates the
pagetables and hands out virtual addresses from the mapcache region. 
This is to test map_domain_page() itself, as well as making domain
mapping leaks more obvious (as the mapcache is under heavier load).

For a non-debug build of Xen, any map_domain_page() calls which can be
satisfied by returning a virtual address from the direct map region
(i.e. for pages below the 5TiB boundary, which is basically all of them
unless you have more money than sense) are, which avoids excessive use
of the mapcache, and avoids a TLB shootdown/flush on unmap.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 23:55:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 23:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1PR8-0007A5-59; Thu, 09 Jan 2014 23:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1PR6-0007A0-B8
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 23:54:52 +0000
Received: from [85.158.143.35:19601] by server-3.bemta-4.messagelabs.com id
	2B/78-32360-BC63FC25; Thu, 09 Jan 2014 23:54:51 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389311690!10760438!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18869 invoked from network); 9 Jan 2014 23:54:50 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 9 Jan 2014 23:54:50 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:62828 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1PG7-0001Z2-7t; Fri, 10 Jan 2014 00:43:31 +0100
Date: Fri, 10 Jan 2014 00:54:43 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1079284348.20140110005443@eikelenboom.it>
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140109145624.GD1696@perard.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, January 9, 2014, 3:56:24 PM, you wrote:

> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>> > > > [...]
>> > > > > > Those Xen report something like:
>> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
>> > > > > > 131328
>> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
>> > > > > > memflags=0 (62 of 64)
>> > > > > > 
>> > > > > > ?
>> > > > > > 
>> > > > > > (I tryied to reproduce the issue by simply add many emulated e1000 in
>> > > > > > QEMU :) )
>> > > > > > 
>> > 
>> > > -bash-4.1# lspci -s 01:00.0 -v 
>> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>> > >         Flags: fast devsel, IRQ 16
>> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>> > >         I/O ports at e020 [disabled] [size=32]
>> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>> > >         Expansion ROM at fb400000 [disabled] [size=4M]
>> > 
>> > BTW, I think this is the issue, the Expansion ROM. qemu-xen will
>> > allocate memory for it. Will have maybe have to find another way.
>> > qemu-trad does not seem to allocate memory, but I haven't been very
>> > far in trying to check that.
>> 
>> And indeed that is the case. The "Fix" below fixes it.
>> 
>> 
>> Based on that and this guest config:
>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>> memory = 2048
>> boot="d"
>> maxvcpus=32
>> vcpus=1
>> serial='pty'
>> vnclisten="0.0.0.0"
>> name="latest"
>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>> pci = ["01:00.0"]
>> 
>> I can boot the guest.

> And can you access the ROM from the guest ?


> Also, I have another patch; it will initialize the PCI ROM BAR like any
> other BAR. In this case, if qemu is involved in the access to the ROM, it
> will print an error, as is the case for the other BARs. 

> I tried to test it, but it was with an embedded VGA card. When I dump
> the ROM, I got the same one as the emulated card instead of the ROM from
> the device.

Ah, this is what I reported earlier ..

If you would like it more funky .. use Dario's patches to be able to use vga="none" and use:
vga="none"
nographic=1
xen_platform_pci=1

And when you dump the ROM BAR of the passed-through VGA card, you will end up with the iPXE ROM of the emulated NIC.

So it is pointing at the first / last / a random ROM ... but at least it doesn't seem to be directly tied to
another VGA ROM (which was my first assumption some time ago).

When I go one step further, by also disabling the Xen platform PCI device, it doesn't boot any more.


The strange thing is that all the addresses in the debug messages (host kernel, hvmloader, SeaBIOS, QEMU and guest kernel) for the ROM BAR seem to correspond once the translation is taken into account,
so nothing obvious there ...

--
Sander

> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6dd7a68..2bbdb6d 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>  
>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>  
> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> -                                      "xen-pci-pt-rom", d->rom.size);
> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> +                              "xen-pci-pt-rom", d->rom.size);
>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>                           &s->rom);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 09 23:55:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jan 2014 23:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1PR8-0007A5-59; Thu, 09 Jan 2014 23:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1PR6-0007A0-B8
	for xen-devel@lists.xenproject.org; Thu, 09 Jan 2014 23:54:52 +0000
Received: from [85.158.143.35:19601] by server-3.bemta-4.messagelabs.com id
	2B/78-32360-BC63FC25; Thu, 09 Jan 2014 23:54:51 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389311690!10760438!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18869 invoked from network); 9 Jan 2014 23:54:50 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 9 Jan 2014 23:54:50 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:62828 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1PG7-0001Z2-7t; Fri, 10 Jan 2014 00:43:31 +0100
Date: Fri, 10 Jan 2014 00:54:43 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1079284348.20140110005443@eikelenboom.it>
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140109145624.GD1696@perard.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, January 9, 2014, 3:56:24 PM, you wrote:

> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>> > > > [...]
>> > > > > > Then Xen reports something like:
>> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
>> > > > > > 131328
>> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
>> > > > > > memflags=0 (62 of 64)
>> > > > > > 
>> > > > > > ?
>> > > > > > 
>> > > > > > (I tried to reproduce the issue by simply adding many emulated e1000 NICs in
>> > > > > > QEMU :) )
>> > > > > > 
>> > 
>> > > -bash-4.1# lspci -s 01:00.0 -v 
>> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>> > >         Flags: fast devsel, IRQ 16
>> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>> > >         I/O ports at e020 [disabled] [size=32]
>> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>> > >         Expansion ROM at fb400000 [disabled] [size=4M]
>> > 
>> > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>> > allocate memory for it. We may have to find another way.
>> > qemu-trad does not seem to allocate memory, but I haven't gone very
>> > far in trying to check that.
>> 
>> And indeed that is the case. The "Fix" below fixes it.
>> 
>> 
>> Based on that and this guest config:
>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>> memory = 2048
>> boot="d"
>> maxvcpus=32
>> vcpus=1
>> serial='pty'
>> vnclisten="0.0.0.0"
>> name="latest"
>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>> pci = ["01:00.0"]
>> 
>> I can boot the guest.

> And can you access the ROM from the guest ?


> Also, I have another patch that initializes the PCI ROM BAR like any
> other BAR. In that case, if QEMU is involved in an access to the ROM, it
> will print an error, as is the case for the other BARs.

> I tried to test it, but it was with an embedded VGA card. When I dump
> the ROM, I got the same one as the emulated card instead of the ROM from
> the device.

Ah, this is what I reported earlier ..

If you would like it even more funky, use Dario's patches that enable vga="none", and use:
vga="none"
nographic=1
xen_platform_pci=1

And when you dump the ROM BAR of the passed-through VGA card, you will end up with the iPXE ROM of the emulated NIC.

So it is pointing at the first / last / a random ROM ... but at least it doesn't seem to be directly tied to
another VGA ROM (which was my first assumption some time ago).

When I go one step further, by also disabling the Xen platform PCI device, it doesn't boot any more.


The strange thing is that all the addresses in the debug messages (host kernel, hvmloader, SeaBIOS, QEMU and guest kernel) for the ROM BAR seem to correspond once the translation is taken into account,
so nothing obvious there ...

--
Sander

> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6dd7a68..2bbdb6d 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>  
>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>  
> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> -                                      "xen-pci-pt-rom", d->rom.size);
> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> +                              "xen-pci-pt-rom", d->rom.size);
>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>                           &s->rom);
>  




From xen-devel-bounces@lists.xen.org Fri Jan 10 00:17:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 00:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Pmr-0000XQ-K6; Fri, 10 Jan 2014 00:17:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1Pmq-0000XL-EC
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 00:17:20 +0000
Received: from [85.158.137.68:16210] by server-10.bemta-3.messagelabs.com id
	32/4A-23989-F0C3FC25; Fri, 10 Jan 2014 00:17:19 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389313038!4615728!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4997 invoked from network); 10 Jan 2014 00:17:18 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 00:17:18 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:62952 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1Pbn-0003MR-Rt; Fri, 10 Jan 2014 01:05:55 +0100
Date: Fri, 10 Jan 2014 01:17:08 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <8910389999.20140110011708@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <1079284348.20140110005443@eikelenboom.it>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<1079284348.20140110005443@eikelenboom.it>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 12:54:43 AM, you wrote:


> Thursday, January 9, 2014, 3:56:24 PM, you wrote:

>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>>> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>> > > > [...]
>>> > > > > > Then Xen reports something like:
>>> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
>>> > > > > > 131328
>>> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
>>> > > > > > memflags=0 (62 of 64)
>>> > > > > > 
>>> > > > > > ?
>>> > > > > > 
>>> > > > > > (I tried to reproduce the issue by simply adding many emulated e1000 NICs in
>>> > > > > > QEMU :) )
>>> > > > > > 
>>> > 
>>> > > -bash-4.1# lspci -s 01:00.0 -v 
>>> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>> > >         Flags: fast devsel, IRQ 16
>>> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>> > >         I/O ports at e020 [disabled] [size=32]
>>> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>> > >         Expansion ROM at fb400000 [disabled] [size=4M]
>>> > 
>>> > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>> > allocate memory for it. We may have to find another way.
>>> > qemu-trad does not seem to allocate memory, but I haven't gone very
>>> > far in trying to check that.
>>> 
>>> And indeed that is the case. The "Fix" below fixes it.
>>> 
>>> 
>>> Based on that and this guest config:
>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>> memory = 2048
>>> boot="d"
>>> maxvcpus=32
>>> vcpus=1
>>> serial='pty'
>>> vnclisten="0.0.0.0"
>>> name="latest"
>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>>> pci = ["01:00.0"]
>>> 
>>> I can boot the guest.

>> And can you access the ROM from the guest ?


>> Also, I have another patch that initializes the PCI ROM BAR like any
>> other BAR. In that case, if QEMU is involved in an access to the ROM, it
>> will print an error, as is the case for the other BARs.

>> I tried to test it, but it was with an embedded VGA card. When I dump
>> the ROM, I got the same one as the emulated card instead of the ROM from
>> the device.

> Ah, this is what I reported earlier ..

> If you would like it even more funky, use Dario's patches that enable vga="none", and use:
> vga="none"
> nographic=1
> xen_platform_pci=1

> And when you dump the ROM BAR of the passed-through VGA card, you will end up with the iPXE ROM of the emulated NIC.

> So it is pointing at the first / last / a random ROM ... but at least it doesn't seem to be directly tied to
> another VGA ROM (which was my first assumption some time ago).

> When I go one step further, by also disabling the Xen platform PCI device, it doesn't boot any more.


> The strange thing is that all the addresses in the debug messages (host kernel, hvmloader, SeaBIOS, QEMU and guest kernel) for the ROM BAR seem to correspond once the translation is taken into account,
> so nothing obvious there ...

Perhaps I have to correct myself here ... with:
 vga="none"
 nographic=1
 xen_platform_pci=1

There seems to be a discrepancy between what the guest kernel reports at boot and the guest lspci output:
[    0.000000] e820: [mem 0x40000000-0xfbffffff] available for PCI devices
<snip>
[    1.453530] PCI host bridge to bus 0000:00
[    1.460018] pci_bus 0000:00: root bus resource [bus 00-ff]
[    1.466690] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[    1.473355] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    1.480024] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    1.490023] pci_bus 0000:00: root bus resource [mem 0xe0000000-0xfbffffff]
[    1.500013] pci_bus 0000:00: scanning bus
[    1.500504] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    1.500600] pci 0000:00:00.0: calling quirk_mmio_always_on+0x0/0x10
[    1.508515] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    1.515216] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    1.531714] pci 0000:00:01.1: reg 0x20: [io  0xc240-0xc24f]
[    1.540859] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    1.540859] pci 0000:00:01.3: calling acpi_pm_check_blacklist+0x0/0x40
[    1.545776] pci 0000:00:01.3: calling quirk_piix4_acpi+0x0/0x140
[    1.545883] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    1.546916] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[    1.551173] pci 0000:00:01.3: calling pci_fixup_piix4_acpi+0x0/0x10
[    1.554237] pci 0000:00:02.0: [5853:0001] type 00 class 0xff8000
[    1.556666] pci 0000:00:02.0: reg 0x10: [io  0xc000-0xc0ff]
[    1.566373] pci 0000:00:02.0: reg 0x14: [mem 0xf0000000-0xf0ffffff pref]
[    1.596780] pci 0000:00:04.0: [1002:6759] type 00 class 0x030000
[    1.656719] pci 0000:00:04.0: reg 0x10: [mem 0xe0000000-0xefffffff 64bit pref]
[    1.693355] pci 0000:00:04.0: reg 0x18: [mem 0xf1060000-0xf107ffff 64bit]
[    1.743361] pci 0000:00:04.0: reg 0x20: [io  0xc100-0xc1ff]
[    1.826692] pci 0000:00:04.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
[    1.828722] pci 0000:00:04.0: supports D1 D2

00:04.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 6570] (prog-if 00 [VGA controller])
        Subsystem: PC Partner Limited Device e193
        Physical Slot: 4
        Flags: fast devsel, IRQ 32
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        Memory at f1060000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at c100 [size=256]
        [virtual] Expansion ROM at f1000000 [disabled] [size=128K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [100] #1002


At least these two don't seem to match up:
[    1.826692] pci 0000:00:04.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]

[virtual] Expansion ROM at f1000000 [disabled] [size=128K]

> --
> Sander

>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 6dd7a68..2bbdb6d 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>>  
>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>>  
>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>> -                                      "xen-pci-pt-rom", d->rom.size);
>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>> +                              "xen-pci-pt-rom", d->rom.size);
>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>                           &s->rom);
>>  







From xen-devel-bounces@lists.xen.org Fri Jan 10 00:23:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 00:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1PsO-0001Fz-Fz; Fri, 10 Jan 2014 00:23:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1PsN-0001Ft-Dt
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 00:23:03 +0000
Received: from [85.158.139.211:31227] by server-1.bemta-5.messagelabs.com id
	33/5D-21065-66D3FC25; Fri, 10 Jan 2014 00:23:02 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389313381!8900194!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12784 invoked from network); 10 Jan 2014 00:23:01 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 10 Jan 2014 00:23:01 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:62972 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1PhO-0003px-Rz; Fri, 10 Jan 2014 01:11:42 +0100
Date: Fri, 10 Jan 2014 01:22:56 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <845943822.20140110012256@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <8910389999.20140110011708@eikelenboom.it>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<1079284348.20140110005443@eikelenboom.it>
	<8910389999.20140110011708@eikelenboom.it>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 1:17:08 AM, you wrote:


> Friday, January 10, 2014, 12:54:43 AM, you wrote:


>> Thursday, January 9, 2014, 3:56:24 PM, you wrote:

>>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>>> > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>>>> > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>> > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>> > > > [...]
>>>> > > > > > Does Xen report something like:
>>>> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
>>>> > > > > > 131328
>>>> > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
>>>> > > > > > memflags=0 (62 of 64)
>>>> > > > > > 
>>>> > > > > > ?
>>>> > > > > > 
>>>> > > > > > (I tried to reproduce the issue by simply adding many emulated e1000s in
>>>> > > > > > QEMU :) )
>>>> > > > > > 
>>>> > 
>>>> > > -bash-4.1# lspci -s 01:00.0 -v 
>>>> > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>> > >         Flags: fast devsel, IRQ 16
>>>> > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>> > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>> > >         I/O ports at e020 [disabled] [size=32]
>>>> > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>> > >         Expansion ROM at fb400000 [disabled] [size=4M]
>>>> > 
>>>> > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>>> > allocate memory for it. We will maybe have to find another way.
>>>> > qemu-trad does not seem to allocate memory, but I haven't gone very
>>>> > far in trying to check that.
>>>> 
>>>> And indeed that is the case. The "Fix" below fixes it.
>>>> 
>>>> 
>>>> Based on that and this guest config:
>>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>>> memory = 2048
>>>> boot="d"
>>>> maxvcpus=32
>>>> vcpus=1
>>>> serial='pty'
>>>> vnclisten="0.0.0.0"
>>>> name="latest"
>>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>>>> pci = ["01:00.0"]
>>>> 
>>>> I can boot the guest.

>>> And can you access the ROM from the guest?


>>> Also, I have another patch; it will initialize the PCI ROM BAR like any
>>> other BAR. In this case, if qemu is involved in the access to the ROM, it
>>> will print an error, as is the case for the other BARs.

>>> I tried to test it, but it was with an embedded VGA card. When I dumped
>>> the ROM, I got the same one as the emulated card instead of the ROM from
>>> the device.

>> Ah, this is what I reported earlier ..

>> If you would like it more funky .. use Dario's patches to be able to use vga="none" and use:
>> vga="none"
>> nographic=1
>> xen_platform_pci=1

>> And when you dump the ROM BAR of the passed-through VGA card, you will end up with the iPXE ROM of the emulated NIC.

>> So it is pointing at the first / last / a random ROM ... but at least it doesn't seem directly tied to pointing at
>> another VGA ROM (which was my first assumption some time ago).

>> When I go one step further, by also disabling the xen platform pci device, it doesn't boot any more.


>> The strange thing is that all the addresses in the debug messages (host kernel, hvmloader, seabios, qemu and guest kernel) for the ROM BAR seem to correspond when the translation is taken into account,
>> so nothing obvious there ...

> Perhaps I have to correct myself here ... with:
>  vga="none"
>  nographic=1
>  xen_platform_pci=1

> There seems to be a discrepancy between what the guest kernel reports at boot and the guest lspci output:
> [    0.000000] e820: [mem 0x40000000-0xfbffffff] available for PCI devices
> <snip>
> [    1.453530] PCI host bridge to bus 0000:00
> [    1.460018] pci_bus 0000:00: root bus resource [bus 00-ff]
> [    1.466690] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
> [    1.473355] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
> [    1.480024] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
> [    1.490023] pci_bus 0000:00: root bus resource [mem 0xe0000000-0xfbffffff]
> [    1.500013] pci_bus 0000:00: scanning bus
> [    1.500504] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
> [    1.500600] pci 0000:00:00.0: calling quirk_mmio_always_on+0x0/0x10
> [    1.508515] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
> [    1.515216] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
> [    1.531714] pci 0000:00:01.1: reg 0x20: [io  0xc240-0xc24f]
> [    1.540859] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
> [    1.540859] pci 0000:00:01.3: calling acpi_pm_check_blacklist+0x0/0x40
> [    1.545776] pci 0000:00:01.3: calling quirk_piix4_acpi+0x0/0x140
> [    1.545883] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
> [    1.546916] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
> [    1.551173] pci 0000:00:01.3: calling pci_fixup_piix4_acpi+0x0/0x10
> [    1.554237] pci 0000:00:02.0: [5853:0001] type 00 class 0xff8000
> [    1.556666] pci 0000:00:02.0: reg 0x10: [io  0xc000-0xc0ff]
> [    1.566373] pci 0000:00:02.0: reg 0x14: [mem 0xf0000000-0xf0ffffff pref]
> [    1.596780] pci 0000:00:04.0: [1002:6759] type 00 class 0x030000
> [    1.656719] pci 0000:00:04.0: reg 0x10: [mem 0xe0000000-0xefffffff 64bit pref]
> [    1.693355] pci 0000:00:04.0: reg 0x18: [mem 0xf1060000-0xf107ffff 64bit]
> [    1.743361] pci 0000:00:04.0: reg 0x20: [io  0xc100-0xc1ff]
> [    1.826692] pci 0000:00:04.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
> [    1.828722] pci 0000:00:04.0: supports D1 D2

> 00:04.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 6570] (prog-if 00 [VGA controller])
>         Subsystem: PC Partner Limited Device e193
>         Physical Slot: 4
>         Flags: fast devsel, IRQ 32
>         Memory at e0000000 (64-bit, prefetchable) [size=256M]
>         Memory at f1060000 (64-bit, non-prefetchable) [size=128K]
>         I/O ports at c100 [size=256]
>         [virtual] Expansion ROM at f1000000 [disabled] [size=128K]
>         Capabilities: [50] Power Management version 3
>         Capabilities: [58] Express Legacy Endpoint, MSI 00
>         Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
>         Capabilities: [100] #1002


> At least these 2 don't seem to match up:
> [    1.826692] pci 0000:00:04.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]

> [virtual] Expansion ROM at f1000000 [disabled] [size=128K]

And from xl dmesg:

(d23) [2014-01-10 00:07:47] pci dev 03:0 bar 30 size 000040000: 0f1000000

which is probably the NIC, but which is mysteriously never shown in lspci (I thought I had reported that earlier as well):

lspci -v
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
        Subsystem: Red Hat, Inc Qemu virtual machine
        Flags: bus master, fast devsel, latency 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
        Subsystem: Red Hat, Inc Qemu virtual machine
        Physical Slot: 1
        Flags: bus master, medium devsel, latency 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
        Subsystem: Red Hat, Inc Qemu virtual machine
        Physical Slot: 1
        Flags: bus master, medium devsel, latency 0
        [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
        [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
        [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
        [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
        I/O ports at c240 [size=16]
        Kernel driver in use: PIIX_IDE

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
        Subsystem: Red Hat, Inc Qemu virtual machine
        Physical Slot: 1
        Flags: bus master, medium devsel, latency 0, IRQ 9

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
        Subsystem: XenSource, Inc. Xen Platform Device
        Physical Slot: 2
        Flags: bus master, fast devsel, latency 0, IRQ 24
        I/O ports at c000 [size=256]
        Memory at f0000000 (32-bit, prefetchable) [size=16M]
        Kernel driver in use: xen-platform-pci

00:04.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 6570] (prog-if 00 [VGA controller])
        Subsystem: PC Partner Limited Device e193
        Physical Slot: 4
        Flags: fast devsel, IRQ 32
        Memory at e0000000 (64-bit, prefetchable) [size=256M]
        Memory at f1060000 (64-bit, non-prefetchable) [size=128K]
        I/O ports at c100 [size=256]
        [virtual] Expansion ROM at f1000000 [disabled] [size=128K]
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Legacy Endpoint, MSI 00
        Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [100] #1002


which is defined in the guest config as:
vif = [ 'bridge=xen_bridge,ip=192.168.1.44,mac=00:16:3A:C6:76:65, model=e1000'  ]
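As an aside, the "bar 30 size 000040000" hvmloader line above is produced by the standard PCI BAR sizing mechanism: software writes all-ones to the ROM BAR, reads it back, and the bits hardwired to zero encode the size. A minimal sketch of that decoding (this is the generic PCI algorithm, not code from hvmloader; the sample read-back values are chosen to match the 256K NIC ROM and 128K VGA ROM discussed above):

```python
# Standard PCI expansion ROM BAR sizing: write all-ones to the BAR, read
# it back, and the bits hardwired to zero give the size.
ROM_ADDR_MASK = 0xFFFFF800  # address bits 31:11; bit 0 is the enable bit

def rom_bar_size(readback):
    """Decode the ROM size from the value read back after writing ~0."""
    masked = readback & ROM_ADDR_MASK
    # Two's complement of the masked value yields the region size.
    return (~masked + 1) & 0xFFFFFFFF

# A 256K ROM (the NIC's, per the hvmloader line) reads back 0xFFFC0000,
# a 128K ROM (the VGA card's, per lspci) reads back 0xFFFE0000:
print(hex(rom_bar_size(0xFFFC0000)))  # 0x40000
print(hex(rom_bar_size(0xFFFE0000)))  # 0x20000
```

So 0x40000 in that hvmloader line is indeed a 256K ROM, which matches the NIC rather than the VGA card's 128K ROM.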


>> --
>> Sander

>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>> index 6dd7a68..2bbdb6d 100644
>>> --- a/hw/xen/xen_pt.c
>>> +++ b/hw/xen/xen_pt.c
>>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>>>  
>>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>>>  
>>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>>> -                                      "xen-pci-pt-rom", d->rom.size);
>>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>>> +                              "xen-pci-pt-rom", d->rom.size);
>>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>>                           &s->rom);
>>>  








_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 00:44:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 00:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1QCh-0002bS-5c; Fri, 10 Jan 2014 00:44:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W1QCf-0002bN-9m
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 00:44:01 +0000
Received: from [193.109.254.147:24715] by server-2.bemta-14.messagelabs.com id
	D3/B5-00361-0524FC25; Fri, 10 Jan 2014 00:44:00 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389314628!9938249!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE0MzcgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16990 invoked from network); 10 Jan 2014 00:43:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 00:43:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,634,1384300800"; 
	d="asc'?scan'208";a="91521946"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 00:43:48 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	19:43:47 -0500
Message-ID: <1389314626.16457.122.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 10 Jan 2014 01:43:46 +0100
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Missing/wrong tag in xenbits'
	qemu-upstream-unstable.git ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0200519551843347808=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0200519551843347808==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-qRi9XYaUtT+UUsDFgMut"

--=-qRi9XYaUtT+UUsDFgMut
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,

Building Xen via OSSTest in standalone mode is failing for me, with the
following message:

make[2]: Entering directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
if test -d git://xenbits.xen.org/staging/qemu-upstream-unstable.git ; then \
		mkdir -p qemu-xen-dir; \
	else \
		export GIT=git; \
		/home/osstest/build.standalone.build-amd64/xen-unstable/tools/../scripts/git-checkout.sh git://xenbits.xen.org/staging/qemu-upstream-unstable.git qemu-xen-4.4.0-rc1 qemu-xen-dir ; \
	fi
Cloning into 'qemu-xen-dir-remote.tmp'...
fatal: git checkout: updating paths is incompatible with switching branches.
Did you intend to checkout 'qemu-xen-4.4.0-rc1' which can not be resolved as commit?
make[2]: Leaving directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
make[2]: *** [qemu-xen-dir-find] Error 128
make[1]: Leaving directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
make[1]: *** [subdirs-install] Error 2
make: *** [install-tools] Error 2

What I'm doing is:

 $ ./standalone-reset
 $ ./sg-run-job build-amd64

which wipes the build box, so I don't think I have stale files, config,
etc. The commit responsible for making this look for the
'qemu-xen-4.4.0-rc1' tag is:

$ git show d84a6e2f
commit d84a6e2fa077d07f91ac72c3d8334b75b45fcba2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Dec 19 16:28:29 2013 +0000

    Update QEMU_UPSTREAM_REVISION

    Switch to specific tag, for 4.4.0 RC1 release.

    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

However, looking here: http://xenbits.xen.org/gitweb/?p=staging/qemu-upstream-unstable.git;a=tags
there does not seem to be any such tag:

staging/qemu-upstream-unstable.git
6 months ago	qemu-xen-4.3.0	qemu-xen-4.3.0	tag	 | commit | shortlog | log
8 months ago	qemu-xen-4.3.0-rc1	Xen 4.3.0 RC1	tag	 | commit | shortlog | log

What am I missing or doing wrong?
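(For what it's worth, the failure mode itself is easy to reproduce without OSSTest: git-checkout.sh simply asks the clone for a tag it does not have. The sketch below builds a throwaway repo carrying only the qemu-xen-4.3.0 tag and shows the same resolution failure; all repo names and tags here are illustrative, and it assumes git is installed.)

```python
# Minimal reproduction: resolving a tag that the repository does not have
# fails, just like the qemu-xen-4.4.0-rc1 checkout above.
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the throwaway repo."""
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True)

git("init", "-q")
git("-c", "user.name=t", "-c", "user.email=t@example.org",
    "commit", "-q", "--allow-empty", "-m", "init")
git("tag", "qemu-xen-4.3.0")  # the only tag the remote actually had

present = git("rev-parse", "--verify", "qemu-xen-4.3.0")
missing = git("rev-parse", "--verify", "qemu-xen-4.4.0-rc1")
print(present.returncode, missing.returncode)  # 0 means resolvable
```

The same check against the real remote would be `git ls-remote --tags <url>`: an empty result for the tag means the build can only fail this way.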

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-qRi9XYaUtT+UUsDFgMut
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLPQkIACgkQk4XaBE3IOsRm6ACgrJ8rUUX4i/4Zhf9r1dsGAPt6
PjoAniaTMLIiBuyAKXgxWPMB4fOc0DeK
=57QW
-----END PGP SIGNATURE-----

--=-qRi9XYaUtT+UUsDFgMut--


--===============0200519551843347808==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0200519551843347808==--


From xen-devel-bounces@lists.xen.org Fri Jan 10 00:44:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 00:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1QCh-0002bS-5c; Fri, 10 Jan 2014 00:44:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W1QCf-0002bN-9m
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 00:44:01 +0000
Received: from [193.109.254.147:24715] by server-2.bemta-14.messagelabs.com id
	D3/B5-00361-0524FC25; Fri, 10 Jan 2014 00:44:00 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389314628!9938249!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE0MzcgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16990 invoked from network); 10 Jan 2014 00:43:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 00:43:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,634,1384300800"; 
	d="asc'?scan'208";a="91521946"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 00:43:48 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 9 Jan 2014
	19:43:47 -0500
Message-ID: <1389314626.16457.122.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 10 Jan 2014 01:43:46 +0100
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Missing/wrong tag in xenbits'
	qemu-upstream-unstable.git ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0200519551843347808=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0200519551843347808==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-qRi9XYaUtT+UUsDFgMut"

--=-qRi9XYaUtT+UUsDFgMut
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,

Building Xen via OSSTest in standalone mode is failing for me, with the
following message:

make[2]: Entering directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
if test -d git://xenbits.xen.org/staging/qemu-upstream-unstable.git ; then \
		mkdir -p qemu-xen-dir; \
	else \
		export GIT=git; \
		/home/osstest/build.standalone.build-amd64/xen-unstable/tools/../scripts/git-checkout.sh git://xenbits.xen.org/staging/qemu-upstream-unstable.git qemu-xen-4.4.0-rc1 qemu-xen-dir ; \
	fi
Cloning into 'qemu-xen-dir-remote.tmp'...
fatal: git checkout: updating paths is incompatible with switching branches.
Did you intend to checkout 'qemu-xen-4.4.0-rc1' which can not be resolved as commit?
make[2]: Leaving directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
make[2]: *** [qemu-xen-dir-find] Error 128
make[1]: Leaving directory `/home/osstest/build.standalone.build-amd64/xen-unstable/tools'
make[1]: *** [subdirs-install] Error 2
make: *** [install-tools] Error 2

What I'm doing is:

 $ ./standalone-reset
 $ ./sg-run-job build-amd64

which wipes the build box, so I don't think I have stale files, config,
etc. The commit responsible for having this looking for a
'qemu-xen-4.4.0-rc1' tag is:

$ git show d84a6e2f
commit d84a6e2fa077d07f91ac72c3d8334b75b45fcba2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Dec 19 16:28:29 2013 +0000

    Update QEMU_UPSTREAM_REVISION

    Switch to specific tag, for 4.4.0 RC1 release.

    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

However, looking here: http://xenbits.xen.org/gitweb/?p=staging/qemu-upstream-unstable.git;a=tags
there does not seem to be any such tag:

staging/qemu-upstream-unstable.git
6 months ago	qemu-xen-4.3.0	qemu-xen-4.3.0	tag
8 months ago	qemu-xen-4.3.0-rc1	Xen 4.3.0 RC1	tag
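A quick way to double-check which tags a remote actually carries, without a full clone, is `git ls-remote --tags <url>`. The sketch below (not part of the original report; repository and tag names are illustrative) exercises it against a throwaway local repository so it runs offline, and also shows that checking out an absent tag fails, as in the build log above (the exact error wording depends on the git version):

```shell
#!/bin/sh
# Build a throwaway repository with one tag, then probe it the same way
# one would probe git://xenbits.xen.org/staging/qemu-upstream-unstable.git.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
git -C "$tmp/repo" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init
git -C "$tmp/repo" tag qemu-xen-4.3.0
# ls-remote accepts a local path as well as a git:// URL; it lists only
# the tags that really exist on the remote.
tags=$(git ls-remote --tags "$tmp/repo")
echo "$tags"
# Checking out a tag the repository does not have fails with a non-zero
# exit status; capture that instead of aborting under `set -e`.
if git -C "$tmp/repo" checkout -q qemu-xen-4.4.0-rc1 2>/dev/null; then
    status=unexpected
else
    status=missing-tag
fi
echo "$status"
rm -rf "$tmp"
```

Running `git ls-remote --tags` against the real xenbits URL would show directly whether `qemu-xen-4.4.0-rc1` had been pushed yet.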

What am I missing or doing wrong?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-qRi9XYaUtT+UUsDFgMut
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLPQkIACgkQk4XaBE3IOsRm6ACgrJ8rUUX4i/4Zhf9r1dsGAPt6
PjoAniaTMLIiBuyAKXgxWPMB4fOc0DeK
=57QW
-----END PGP SIGNATURE-----

--=-qRi9XYaUtT+UUsDFgMut--


--===============0200519551843347808==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0200519551843347808==--


From xen-devel-bounces@lists.xen.org Fri Jan 10 01:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 01:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1QsL-0000cs-LU; Fri, 10 Jan 2014 01:27:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1QsJ-0000cn-6g
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 01:27:03 +0000
Received: from [85.158.139.211:32118] by server-2.bemta-5.messagelabs.com id
	10/64-29392-66C4FC25; Fri, 10 Jan 2014 01:27:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389317219!8900731!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23006 invoked from network); 10 Jan 2014 01:27:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 01:27:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,635,1384300800"; d="scan'208";a="89383928"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 01:26:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 20:26:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1QsC-0006Le-O7;
	Fri, 10 Jan 2014 01:26:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1QsC-0000j9-Jh;
	Fri, 10 Jan 2014 01:26:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24322-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 01:26:56 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24322: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3377792055816728757=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3377792055816728757==
Content-Type: text/plain

flight 24322 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24322/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 22383
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22383

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           6 capture-logs(6)           broken pass in 24318

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard to track down VM corruption issue caused by exactly
    this issue.  Xenconsoled would connect to a junk mfn and increment the ring
    pointer if the junk happened to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libxl: xl uptime doesn't require argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should be to list the uptime of each domain when there
    are no parameters.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix usage of CHK_ERRNO in do_daemonize and also remove the usage of a
    bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd for -1 on failure is no longer true, because
    libxl__create_qemu_logfile will now return ERROR_FAIL on failure which
    is -3.
    
    While there also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures function is rather difficult
    given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)
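The pattern this patch describes (widen an operand before the multiply, not the product afterwards) can be sketched numerically. Shell arithmetic is 64-bit, so the sketch below simulates the 32-bit wraparound with an explicit modulus; the period value is invented for illustration and is not taken from the patch:

```shell
#!/bin/sh
# Illustrative numbers only. If two 32-bit ints are multiplied first and
# only the result is widened, the product has already wrapped at 2^32;
# widening an operand first keeps the whole multiply in 64 bits.
period=5000000                               # fits comfortably in int
bad=$(( (period * 1000) % 4294967296 ))      # int*int, then widen: wrapped
good=$(( period * 1000 ))                    # widen first: exact
echo "$bad $good"
```

Here 5000000 * 1000 = 5000000000 exceeds 2^32, so the simulated int-width product wraps to 705032704 while the widened-first product stays exact.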

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============3377792055816728757==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3377792055816728757==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 01:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 01:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1QsL-0000cs-LU; Fri, 10 Jan 2014 01:27:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1QsJ-0000cn-6g
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 01:27:03 +0000
Received: from [85.158.139.211:32118] by server-2.bemta-5.messagelabs.com id
	10/64-29392-66C4FC25; Fri, 10 Jan 2014 01:27:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389317219!8900731!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23006 invoked from network); 10 Jan 2014 01:27:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 01:27:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,635,1384300800"; d="scan'208";a="89383928"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 01:26:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 20:26:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1QsC-0006Le-O7;
	Fri, 10 Jan 2014 01:26:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1QsC-0000j9-Jh;
	Fri, 10 Jan 2014 01:26:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24322-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 01:26:56 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24322: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3377792055816728757=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3377792055816728757==
Content-Type: text/plain

flight 24322 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24322/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 22383
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22383

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           6 capture-logs(6)           broken pass in 24318

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau MonnÃ© <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard to track down VM corruption issue caused by exactly
    this issue.  Xenconsoled would connect to a junk mfn and incremented the ring
    pointer if the junk happend to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)
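
The pattern the fix enforces can be sketched as follows. This is an illustrative fragment, not libxc's actual code: the struct and function names are invented here to show why zero-initialising the out-parameters makes an absent chunk detectable instead of returning stack junk.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical out-parameters for restored MFNs.  If the migration
 * stream never supplies the console/store chunks, an uninitialised
 * local would hand back stack junk; zero-initialising makes the
 * "absent" case detectable. */
struct restore_mfns {
    uint64_t console_mfn;
    uint64_t store_mfn;
};

static void restore_mfns_init(struct restore_mfns *m)
{
    memset(m, 0, sizeof(*m));   /* 0 is treated as "not provided" here */
}

static int mfn_was_provided(uint64_t mfn)
{
    return mfn != 0;
}
```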

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libxl: xl uptime doesn't require argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should be to list the uptime of each domain when no
    parameters are given.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)
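
The shape of such a sanity check can be sketched like this. The function name is illustrative; 4096 mirrors the xenstore protocol's XENSTORE_PAYLOAD_MAX, and the point is simply to reject an implausible declared length before allocating or reading the body.

```c
#include <stdint.h>

/* Illustrative bound check: refuse message bodies whose declared
 * length exceeds the protocol's payload limit. */
#define PAYLOAD_MAX 4096u

static int body_len_ok(uint32_t len)
{
    return len <= PAYLOAD_MAX;
}
```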

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)
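
A minimal sketch of the pattern, with invented names (this is not libxl's actual constructor): if initialising the context's lock fails, tear down the partially built context and report the error rather than returning a context with an unusable lock.

```c
#include <pthread.h>
#include <stdlib.h>

struct ctx {
    pthread_mutex_t lock;
};

static int ctx_new(struct ctx **out)
{
    struct ctx *c = calloc(1, sizeof(*c));

    *out = NULL;
    if (!c)
        return -1;
    if (pthread_mutex_init(&c->lock, NULL) != 0) {
        free(c);            /* don't keep going with a broken lock */
        return -1;
    }
    *out = c;
    return 0;
}
```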

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix usage of CHK_ERRNO in do_daemonize and also remove the usage of a
    bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd for -1 on failure is no longer true, because
    libxl__create_qemu_logfile will now return ERROR_FAIL on failure which
    is -3.
    
    While there also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c
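
The class of bug is easy to show in isolation. A sketch, with an illustrative error constant: once a helper can return distinct negative error codes, a success test written as "fd != -1" silently accepts failures, whereas "fd < 0" covers every negative return.

```c
/* Illustrative: a helper that once returned -1 on failure may later
 * return richer codes such as an ERROR_FAIL of -3. */
#define ERROR_FAIL (-3)

static int is_valid_fd(int fd)
{
    return fd >= 0;         /* not fd != -1 */
}
```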

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)
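
The idea can be sketched as follows, with invented names (this is not libxl's code): when allocation of the context itself fails, any logging helper that dereferences that context must not be used; fall back to a channel that needs no context.

```c
#include <stdio.h>
#include <stdlib.h>

struct logctx {
    FILE *log;
};

static int logctx_alloc(struct logctx **out)
{
    struct logctx *c = malloc(sizeof(*c));

    *out = NULL;
    if (!c) {
        /* A LOG(ctx, ...) macro here would dereference NULL;
         * report through a ctx-free channel instead. */
        fprintf(stderr, "cannot allocate context\n");
        return -1;
    }
    c->log = stderr;
    *out = c;
    return 0;
}
```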

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures in this function is rather
    difficult given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)
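
The guard itself is tiny; a sketch with an illustrative helper name: lseek() reports failure as (off_t)-1, and feeding that value through as a mapping length would ask mmap() for effectively SIZE_MAX bytes, so the size must be validated before use.

```c
#include <sys/types.h>

/* Illustrative predicate: a seek result of (off_t)-1 means the
 * lseek() failed and must not be reused as a mapping length. */
static int file_size_usable(off_t size)
{
    return size >= 0;
}
```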

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)
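
The overflow class is worth seeing side by side. A sketch (the values and function names are illustrative, not the sched_sedf parameters): casting only the result widens after the 32-bit multiplication has already wrapped, while casting an operand makes the multiplication itself happen in 64 bits.

```c
#include <stdint.h>

static uint64_t widen_after(unsigned int ms)   /* buggy shape */
{
    return (uint64_t)(ms * 1000000u);  /* 32-bit multiply wraps first */
}

static uint64_t widen_before(unsigned int ms)  /* fixed shape */
{
    return (uint64_t)ms * 1000000u;    /* multiply done in 64 bits */
}
```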

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============3377792055816728757==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3377792055816728757==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 01:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 01:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1R8a-0001ys-KL; Fri, 10 Jan 2014 01:43:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1R8Z-0001yn-2P
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 01:43:51 +0000
Received: from [85.158.143.35:35859] by server-3.bemta-4.messagelabs.com id
	1A/0A-32360-6505FC25; Fri, 10 Jan 2014 01:43:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389318228!10670475!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13227 invoked from network); 10 Jan 2014 01:43:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 01:43:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,635,1384300800"; d="scan'208";a="89387340"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 01:43:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 20:43:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1R8U-0006Rl-2S;
	Fri, 10 Jan 2014 01:43:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1R8T-0008VP-Jr;
	Fri, 10 Jan 2014 01:43:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24320-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 01:43:45 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24320: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24320 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24320/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install     fail REGR. vs. 24313

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24313

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9
baseline version:
 xen                  025c1b755afc9a9f42f71ef167c20fdc616b1d2d

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Stefan Bader <stefan.bader@canonical.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2d03be65d5c50053fec4a5fa1d691972e5d953c9
Author: Stefan Bader <stefan.bader@canonical.com>
Date:   Wed Jan 8 18:26:59 2014 +0100

    libxl: Auto-assign NIC devids in initiate_domain_create
    
    This changes initiate_domain_create to walk through the NIC definitions
    and automatically assign devids to those which do not have one assigned.
    The devids are needed later in domcreate_launch_dm (for HVM domains
    using emulated NICs), since the command string for starting the
    device model carries those ids in its arguments.
    Assignment of devids in the hotplug case is handled by libxl_device_nic_add,
    but that would be called too late in the startup case.
    I also moved the call to libxl__device_nic_setdefault here, as this seems
    to be the only path leading there and it avoids doing the loop a third time.
    The two loops handle the case where the caller sets some devids
    (not sure that should be valid) but leaves others unset.
    
    Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
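
The two-loop walk described above can be sketched like this. The struct and sentinel are invented for illustration (libxl's real code uses its own types and id allocation): the first pass finds an id above anything the caller chose, the second hands out fresh ids to the unassigned entries without clobbering caller-set ones.

```c
struct nic {
    int devid;              /* -1 == unassigned (illustrative sentinel) */
};

static void assign_devids(struct nic *nics, int n)
{
    int next = 0, i;

    /* First loop: find the lowest id above anything the caller set. */
    for (i = 0; i < n; i++)
        if (nics[i].devid >= next)
            next = nics[i].devid + 1;

    /* Second loop: hand out fresh ids to the unassigned entries. */
    for (i = 0; i < n; i++)
        if (nics[i].devid < 0)
            nics[i].devid = next++;
}
```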

commit 1a5279fa1e1407c9b8d6ae5d9b1b69f3c3b72d0b
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Thu Jan 9 11:48:13 2014 +0000

    docs/man/xl.cfg.pod.5: document global VNC options for VFB device
    
    Update xl.cfg to reflect change in 706d4ab74 "xl: create VFB for PV
    guest when VNC is specified".
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1671cdeac7da663fb2963f3e587fa279dcd0238b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jan 7 10:04:23 2014 +0000

    tools/libxc: Correct read_exact() error messages
    
    The errors have been incorrectly identifying their function since c/s
    861aef6e1558bebad8fc60c1c723f0706fd3ed87 which did a lot of error handling
    cleanup.
    
    Use __func__ to ensure the name remains correct in the future.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
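
The __func__ technique can be sketched as follows; the macro name is illustrative, not libxc's actual helper. Because __func__ expands at the use site, the message keeps naming the right function even if the code is later moved or renamed.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative error-formatting macro: __func__ expands to the name
 * of the function the macro is used in, so messages stay accurate. */
#define FMT_ERR(buf, n, fmt, ...) \
    snprintf((buf), (n), "%s: " fmt, __func__, __VA_ARGS__)

static void read_exact_demo(char *buf, size_t n)
{
    FMT_ERR(buf, n, "read %d of %d bytes", 0, 16);
}
```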

commit 0b1d127081896411bf91693acb1932345e0e627a
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Jan 6 16:36:18 2014 +0000

    xen/dts: Don't translate invalid address
    
    ePAPR specifies that if the property "ranges" doesn't exist in a bus node:
    
    "it is assumed that no mapping exists between children of node and the parent
    address space".
    
    Modify dt_number_of_address to check whether the list of ranges is valid.
    Return 0 (i.e. there are zero ranges) if the list is not valid.
    
    This patch has been tested on the Arndale where the bug can occur with the
    '/hdmi' node.
    
    Reported-by: <tsahee@gmx.com>
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 01:55:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 01:55:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1RJl-0002iO-TB; Fri, 10 Jan 2014 01:55:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W1RJk-0002iJ-Al
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 01:55:24 +0000
Received: from [85.158.139.211:16387] by server-12.bemta-5.messagelabs.com id
	D1/73-30017-B035FC25; Fri, 10 Jan 2014 01:55:23 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389318921!8910974!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2083 invoked from network); 10 Jan 2014 01:55:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 01:55:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0A1sGkm010217
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 01:54:17 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A1sEl3018416
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 01:54:14 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A1sDR3018409; Fri, 10 Jan 2014 01:54:13 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 17:54:13 -0800
Date: Thu, 9 Jan 2014 17:54:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140109175410.63e78561@mantra.us.oracle.com>
In-Reply-To: <1389261548.27473.42.camel@kazak.uk.xensource.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org, Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014 09:59:08 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Wed, 2014-01-08 at 16:38 -0800, Mukesh Rathor wrote:
> > On Wed, 8 Jan 2014 17:44:22 +0000
> > Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > 
> > > On Wed, 2014-01-08 at 18:04 +0100, Tim Deegan wrote:
> > > > At 16:47 +0000 on 08 Jan (1389196026), Ian Campbell wrote:
> > > > > On Wed, 2014-01-08 at 09:28 -0500, Don Slutz wrote:
> > > > > > > Using volatile is almost always wrong. Why do you think
> > > > > > > it is needed here?
> > > > > > 
> > > > > > This was from Mukesh Rathor:
> > > > > > 
> > > > > > http://lists.xen.org/archives/html/xen-devel/2014-01/msg00426.html
> > > > > > 
> > > > > > I saw no reason to make it volatile, but maybe "kdb" needs
> > > > > > this? Happy to change any way you want.
> > > > > 
> > > > > I'm not the maintainer but if I were I'd say drop the volatile
> > > > > and maybe mark it __read_mostly and perhaps __used too (since
> > > > > it's static it might otherwise get eliminated).
> > > > > 
> > > > > > > If anything this variable is exactly the opposite, i.e.
> > > > > > > __read_mostly or even const (given that I can't see
> > > > > > > anything which writes it I suppose this is a compile time
> > > > > > > setting?)
> > > > > > 
> > > > > > That has been how I have been testing it so far (changing
> > > > > > the source to set values).  Mukesh claims to be able to
> > > > > > change it at will.  Not sure how const may effect this.
> > > > 
> > > > If the idea is to use kdb itself to frob the value, then it does
> > > > need something to stop the compiler caching it.  This might
> > > > even be one of the few cases where 'volatile' actually DTRT; it
> > > > would still be more in keeping with Xen style to use an
> > > > explicit read op (like atomic_read()) where the value is
> > > > consumed.
> > > 
> > > Is there any need to be asynchronously frobbing this value in the
> > > middle of a function within this file and expecting it to be
> > 
> > Yes. I can stop the machine via kdb or other debugger, change the
> > value during debug, and upon resuming it will start printing stuff.
> > Often this is needed when going thru several iterations of call
> > before problem is seen. Making it volatile makes sure the compiler
> > loads it every instance of its use. This is not in main path, only
> > debugger path, so the overhead should not be an issue.
> 
> So you want to be able to toggle the value in between two immediately
> adjacent debug print calls? While debugging the debugging
> infrastructure itself? (using itself?).

I often debug gdbsx via kdb, and one could also use JTAG or some other
debugger to debug kdb itself. Debugging a debugger is hard. Since the
variable is accessed by outside means that the compiler is not aware of,
making it volatile makes the most sense to me.
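The pattern being argued over can be sketched in a few lines of C. This is an illustrative example only: the names `dbg_log_enabled` and `dbg_print` are hypothetical, not Xen's actual symbols, and the in-program write below stands in for a debugger (kdb, JTAG) rewriting the flag while the CPU is stopped.

```c
#include <stdio.h>

/* A static debug flag that an external debugger may rewrite behind the
 * compiler's back.  Declaring it volatile forbids the compiler from
 * caching the value in a register, so every use re-loads it from memory
 * and a toggle made mid-run takes effect at the very next check. */
static volatile int dbg_log_enabled;   /* frobbed from outside, e.g. by kdb */

/* Returns the number of characters printed, or 0 when logging is off. */
static int dbg_print(const char *msg)
{
    if (dbg_log_enabled)               /* fresh load on each call */
        return printf("dbg: %s\n", msg);
    return 0;
}
```

Tim's suggested alternative would keep the variable non-volatile and instead use an explicit read op such as atomic_read() at each point the value is consumed, achieving the same forced re-load in Xen's usual style.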

thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 03:10:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 03:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1STa-0008K7-64; Fri, 10 Jan 2014 03:09:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1STY-0008K2-4p
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 03:09:36 +0000
Received: from [85.158.139.211:61424] by server-6.bemta-5.messagelabs.com id
	AB/8F-16310-F646FC25; Fri, 10 Jan 2014 03:09:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389323372!8916502!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25601 invoked from network); 10 Jan 2014 03:09:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 03:09:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,635,1384300800"; d="scan'208";a="89402366"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 03:09:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 9 Jan 2014 22:09:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1STS-0006rY-Na;
	Fri, 10 Jan 2014 03:09:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1STS-0007Z4-MM;
	Fri, 10 Jan 2014 03:09:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24325-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 03:09:30 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24325: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9149920296351649036=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9149920296351649036==
Content-Type: text/plain

flight 24325 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24325/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-win7-amd64  5 xen-boot                 fail REGR. vs. 22687
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install     fail REGR. vs. 22687
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22687

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============9149920296351649036==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9149920296351649036==--

  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============9149920296351649036==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9149920296351649036==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 03:28:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 03:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1SlQ-0000cr-8a; Fri, 10 Jan 2014 03:28:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1SlN-0000cm-AA
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 03:28:01 +0000
Received: from [85.158.139.211:4752] by server-11.bemta-5.messagelabs.com id
	AC/C8-23268-0C86FC25; Fri, 10 Jan 2014 03:28:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389324479!8914436!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24267 invoked from network); 10 Jan 2014 03:28:00 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 03:28:00 -0000
Received: by mail-ee0-f44.google.com with SMTP id b57so1680165eek.3
	for <xen-devel@lists.xenproject.org>;
	Thu, 09 Jan 2014 19:27:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=LTNFVdW2feoY37KIutRHEvROiZ3XgoKoUTUbbT7zhHU=;
	b=cN4XD2sPViL5UL2tdqbHFJL6UyNtGQ4p5q712aoFI6VY3kKxKAjr2mbTVcEVKPIlXP
	urybvk3e6ChtjRsOMEXry7rpBEZudAhV1e4KpRAGe40STzqtUwpm3INbAXiUAolBeoqn
	BIhCsibIoQcescK+ESKkTy4Xi1FFJVRkknBr2YCldsdV2dcYRLsq1ZZG369eUoHllJw1
	7I6NVljkcL7d6iwmRZXGVVPHm76JEmnYFuGDMItfkuSW1EVqcg43cFKNueQiDGjPQM5Q
	8E08reqFfMb1rhVkPIDfVoHwqyxtdAsJEHXlO2ZlSOI4a6djCPGtWqlX2bg0uDKwrada
	vTsA==
X-Gm-Message-State: ALoCoQndZcZbMaBjvlPDqpZfc8Gny87pzdyLxgmdi8WdBj32xbul1bJpUYIOCitSa6lW7YYOGmfw
X-Received: by 10.14.87.2 with SMTP id x2mr6500106eee.79.1389324479606;
	Thu, 09 Jan 2014 19:27:59 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id e43sm10842592eep.7.2014.01.09.19.27.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 09 Jan 2014 19:27:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 10 Jan 2014 03:27:55 +0000
Message-Id: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Heap page scrubbing was disabled because it was slow on the models. Now that
Xen supports real hardware, scrubbing can be enabled by default.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch should go to Xen 4.4. It avoids handing non-cleared pages to
    a domain.

    The downside is that boot is now slow on models.

    The current implementation of scrub_heap_pages loops over every page in
    the frametable. On ARM, some of the frames it covers can contain MMIO.
    We are safe because those pages are marked in-use when the frametable is
    initialized, so the function won't clear them.
---
 xen/arch/arm/setup.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9fc40c8..d7c7f4d 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -764,10 +764,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     if ( construct_dom0(dom0) != 0)
             panic("Could not set up DOM0 guest OS");
 
-    /* Scrub RAM that is still free and so may go to an unprivileged domain.
-       XXX too slow in simulator
-       scrub_heap_pages();
-    */
+    /* Scrub RAM that is still free and so may go to an unprivileged domain. */
+    scrub_heap_pages();
 
     init_constructors();
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 03:29:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 03:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1SmP-0001B6-Ua; Fri, 10 Jan 2014 03:29:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1SmN-00019I-4i
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 03:29:04 +0000
Received: from [85.158.143.35:65156] by server-2.bemta-4.messagelabs.com id
	5A/98-11386-EF86FC25; Fri, 10 Jan 2014 03:29:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389324540!10833124!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26707 invoked from network); 10 Jan 2014 03:29:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 03:29:01 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0A3SsVA016638
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 03:28:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0A3SrGL014359
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 03:28:54 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A3SrbU007036; Fri, 10 Jan 2014 03:28:53 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 19:28:52 -0800
Date: Thu, 9 Jan 2014 22:28:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <20140110032845.GA3660@konrad-lan.dumpdata.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140109145624.GD1696@perard.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 02:56:24PM +0000, Anthony PERARD wrote:
> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> > > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > > > [...]
> > > > > > > Does Xen report something like:
> > > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > > 131328
> > > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > > memflags=0 (62 of 64)
> > > > > > > 
> > > > > > > ?
> > > > > > > 
> > > > > > > (I tried to reproduce the issue by simply adding many emulated e1000 NICs in
> > > > > > > QEMU :) )
> > > > > > > 
> > > 
> > > > -bash-4.1# lspci -s 01:00.0 -v 
> > > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > > >         Flags: fast devsel, IRQ 16
> > > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > > >         I/O ports at e020 [disabled] [size=32]
> > > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > > 
> > > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> > > allocate memory for it. We will maybe have to find another way.
> > > qemu-trad does not seem to allocate memory, but I haven't gone very
> > > far in trying to check that.
> > 
> > And indeed that is the case. The "Fix" below fixes it.
> > 
> > 
> > Based on that and this guest config:
> > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> > memory = 2048
> > boot="d"
> > maxvcpus=32
> > vcpus=1
> > serial='pty'
> > vnclisten="0.0.0.0"
> > name="latest"
> > vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> > pci = ["01:00.0"]
> > 
> > I can boot the guest.
> 
> And can you access the ROM from the guest ?

I hadn't tried it. This is with a NIC and I just wanted to see if it
could do PCI passthrough without using the Option ROM.
> 
> 
> Also, I have another patch that will initialize the PCI ROM BAR like any
> other BAR. In this case, if qemu is involved in the access to the ROM, it
> will print an error, as is the case for the other BARs.
> 
> I tried to test it, but it was with an embedded VGA card. When I dump
> the ROM, I got the same one as the emulated card instead of the ROM from
> the device.

Oddly enough for me with your patch the NIC's BIOS was invoked and
it tried to PXE boot:

(d1) [2014-01-10 03:20:29] Running option rom at ca00:0003

(d1) [2014-01-10 03:20:47] Booting from DVD/CD...
(d1) [2014-01-10 03:20:47] Booting from 0000:7c00
..
and I did see the PXE boot menu in my guest - so even
better!

I have not yet done the GPU - this issue was preventing me from using
qemu-xen as it would always blow up before SeaBIOS was in the picture.

If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com>' please do.

Thank you!
> 
> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6dd7a68..2bbdb6d 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>  
>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>  
> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> -                                      "xen-pci-pt-rom", d->rom.size);
> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> +                              "xen-pci-pt-rom", d->rom.size);
>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>                           &s->rom);
>  
> 
> -- 
> Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 04:07:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 04:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1TMi-0003Oq-E3; Fri, 10 Jan 2014 04:06:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W1TMg-0003Ol-Rg
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 04:06:35 +0000
Received: from [85.158.137.68:29755] by server-2.bemta-3.messagelabs.com id
	22/0D-17329-9C17FC25; Fri, 10 Jan 2014 04:06:33 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389326791!7125949!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2041 invoked from network); 10 Jan 2014 04:06:32 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-9.tower-31.messagelabs.com with SMTP;
	10 Jan 2014 04:06:32 -0000
Received: from localhost (cpe-74-71-55-169.nyc.res.rr.com [74.71.55.169])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 614F9589BC1;
	Thu,  9 Jan 2014 20:06:30 -0800 (PST)
Date: Thu, 09 Jan 2014 23:06:29 -0500 (EST)
Message-Id: <20140109.230629.123446421132170371.davem@davemloft.net>
To: wei.liu2@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <20140108193716.GA16009@zion.uk.xensource.com>
References: <1389184918-42790-1-git-send-email-paul.durrant@citrix.com>
	<20140108193716.GA16009@zion.uk.xensource.com>
X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.1
	(shards.monkeyblade.net [0.0.0.0]);
	Thu, 09 Jan 2014 20:06:31 -0800 (PST)
Cc: david.vrabel@citrix.com, netdev@vger.kernel.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: stop vif thread
 spinning if frontend is unresponsive
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 8 Jan 2014 19:37:16 +0000

> On Wed, Jan 08, 2014 at 12:41:58PM +0000, Paul Durrant wrote:
>> The recent patch to improve guest receive side flow control (ca2f09f2) had a
>> slight flaw in the wait condition for the vif thread in that any remaining
>> skbs in the guest receive side netback internal queue would prevent the
>> thread from sleeping. An unresponsive frontend can lead to a permanently
>> non-empty internal queue and thus the thread will spin. In this case the
>> thread should really sleep until the frontend becomes responsive again.
>> 
>> This patch adds an extra flag to the vif which is set if the shared ring
>> is full and cleared when skbs are drained into the shared ring. Thus,
>> if the thread runs, finds the shared ring full and can make no progress the
>> flag remains set. If the flag remains set then the thread will sleep,
>> regardless of a non-empty queue, until the next event from the frontend.
>> 
>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
> 
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Applied.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 04:13:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 04:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1TTF-00048Q-EV; Fri, 10 Jan 2014 04:13:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W1TTD-00048L-Ks
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 04:13:19 +0000
Received: from [85.158.143.35:61522] by server-1.bemta-4.messagelabs.com id
	7B/83-02132-F537FC25; Fri, 10 Jan 2014 04:13:19 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389327197!10829333!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20765 invoked from network); 10 Jan 2014 04:13:17 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 04:13:17 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so1290183wgg.25
	for <xen-devel@lists.xen.org>; Thu, 09 Jan 2014 20:13:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=PT0/FJSr3X8kte2+7QOt46pI6DAMPb8Re5GeITkL8eI=;
	b=UjLja5lv9MmJ/rWGQWr5W7TQDAAqMytv8u86yherp0M2DnAXvytnJvfrV+KeWCFww/
	sdwjTZGxKm1U1T1ED4S5WXD3V7ZEPKp6+nhMTvG5DGAE+f6/aiKdAxkAYw/ExrbdKY82
	VJCkc0g2AXcPO2WSfDD9JBn+mZhQuHlZfDpy1sDKWdHD+h9+FJ949z1TeauERgNB0FkM
	Z5ndbOxRJtJQ357wl/clQuzWDTw1Y2ZCpHeedyNgepnaWSUFBMW/unHs8b9hug85jW43
	BgZX4C219tb535WTFQ+2VF2TcPJvd53dFw9aCrDFzxmm4A1E+iRDUdy/M96xQnyg82g0
	eShw==
X-Received: by 10.180.86.9 with SMTP id l9mr533966wiz.20.1389327197578;
	Thu, 09 Jan 2014 20:13:17 -0800 (PST)
Received: from localhost.localdomain
	(cpc24-watf10-2-0-cust41.15-2.cable.virginm.net. [86.18.37.42])
	by mx.google.com with ESMTPSA id j3sm586009wiy.3.2014.01.09.20.13.16
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 09 Jan 2014 20:13:16 -0800 (PST)
From: Karim Raslan <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 04:12:51 +0000
Message-Id: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
X-Mailer: git-send-email 1.7.9.5
Cc: tim@xen.org, julien.grall@linaro.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1 mapping
	if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Create multiple banks to hold dom0 memory in the 1:1 mapping case
if we fail to find one large contiguous region of memory to hold
the whole thing.

Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
---
 xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
 xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 121 insertions(+), 39 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index faff88e..bb44cdd 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
 {
     paddr_t start;
     paddr_t size;
+    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
     struct page_info *pg = NULL;
-    unsigned int order = get_order_from_bytes(dom0_mem);
+    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
     int res;
     paddr_t spfn;
     unsigned int bits;
 
-    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
+#define MIN_BANK_ORDER 10
+
+    kinfo->mem.nr_banks = 0;
+
+    /*
+     * We always first try to allocate all dom0 memory in 1 bank.
+     * However, if we failed to allocate physically contiguous memory
+     * from the allocator, then we try to create more than one bank.
+     */
+    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
     {
-        pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
-        if ( pg != NULL )
-            break;
+        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
+        {
+            for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
+			{
+				pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
+				if ( pg != NULL )
+					break;
+			}
+
+			if ( !pg )
+				break;
+
+			spfn = page_to_mfn(pg);
+			start = pfn_to_paddr(spfn);
+			size = pfn_to_paddr((1 << cur_order));
+
+		    kinfo->mem.bank[index].start = start;
+		    kinfo->mem.bank[index].size = size;
+		    index++;
+		    kinfo->mem.nr_banks++;
+    	}
+
+    	if( pg )
+    		break;
+
+    	nr_banks = (nr_banks - cur_bank + 1) << 1;
     }
 
-    if ( !pg )
-        panic("Failed to allocate contiguous memory for dom0");
+	if ( !pg )
+		panic("Failed to allocate contiguous memory for dom0");
 
-    spfn = page_to_mfn(pg);
-    start = pfn_to_paddr(spfn);
-    size = pfn_to_paddr((1 << order));
+	for ( index = 0; index < kinfo->mem.nr_banks; index++ )
+	{
+	    start = kinfo->mem.bank[index].start;
+	    size = kinfo->mem.bank[index].size;
+	    spfn = paddr_to_pfn(start);
+	    order = get_order_from_bytes(size);
 
-    // 1:1 mapping
-    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0)\n",
-           start, start + size);
-    res = guest_physmap_add_page(d, spfn, spfn, order);
-
-    if ( res )
-        panic("Unable to add pages in DOM0: %d", res);
+	    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0 - order : %i)\n",
+	            start, start + size, order);
+	    res = guest_physmap_add_page(d, spfn, spfn, order);
 
-    kinfo->mem.bank[0].start = start;
-    kinfo->mem.bank[0].size = size;
-    kinfo->mem.nr_banks = 1;
+	    if ( res )
+	        panic("Unable to add pages in DOM0: %d", res);
 
-    kinfo->unassigned_mem -= size;
+	    kinfo->unassigned_mem -= size;
+	}
 }
 
 static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..658c3de 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
     const paddr_t total = initrd_len + dtb_len;
 
     /* Convenient */
-    const paddr_t mem_start = info->mem.bank[0].start;
-    const paddr_t mem_size = info->mem.bank[0].size;
-    const paddr_t mem_end = mem_start + mem_size;
-    const paddr_t kernel_size = kernel_end - kernel_start;
+    unsigned int i, min_i = -1;
+    bool_t same_bank = false;
+
+    paddr_t mem_start, mem_end, mem_size;
+    paddr_t kernel_size;
 
     paddr_t addr;
 
-    if ( total + kernel_size > mem_size )
-        panic("Not enough memory in the first bank for the dtb+initrd");
+    kernel_size = kernel_end - kernel_start;
+
+    for ( i = 0; i < info->mem.nr_banks; i++ )
+    {
+        mem_start = info->mem.bank[i].start;
+        mem_size = info->mem.bank[i].size;
+        mem_end = mem_start + mem_size - 1;
+
+        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
+            same_bank = true;
+        else
+            same_bank = false;
+
+        if ( same_bank && ((total + kernel_size) < mem_size) )
+            min_i = i;
+        else if ( (!same_bank) && (total < mem_size) )
+            min_i = i;
+        else
+            continue;
+
+        break;
+    }
+
+    if ( min_i == -1 )
+        panic("Not enough memory for the dtb+initrd");
+
+    mem_start = info->mem.bank[min_i].start;
+    mem_size = info->mem.bank[min_i].size;
+    mem_end = mem_start + mem_size;
 
     /*
      * DTB must be loaded such that it does not conflict with the
@@ -104,17 +132,25 @@ static void place_modules(struct kernel_info *info,
      * just after the kernel, if there is room, otherwise just before.
      */
 
-    if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
-        addr = MIN(mem_start + MB(128), mem_end - total);
-    else if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
-        addr = ROUNDUP(kernel_end, MB(2));
-    else if ( kernel_start - mem_start >= total )
-        addr = kernel_start - total;
-    else
+    if ( same_bank )
     {
-        panic("Unable to find suitable location for dtb+initrd");
-        return;
-    }
+        if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
+            addr = MIN(mem_start + MB(128), mem_end - total);
+        if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
+            addr = ROUNDUP(kernel_end, MB(2));
+        else if ( kernel_start - mem_start >= total )
+            addr = kernel_start - total;
+        else
+        {
+            /*
+             * We should never hit this condition because we've already
+             * done the check while choosing the bank.
+             */
+            panic("Unable to find suitable location for dtb+initrd");
+            return;
+        }
+    } else
+        addr = ROUNDUP(mem_end - total, MB(2));
 
     info->dtb_paddr = addr;
     info->initrd_paddr = info->dtb_paddr + dtb_len;
@@ -264,10 +300,24 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
      */
     if (start == 0)
     {
+        unsigned int i, min_i = 0, min_start = -1;
         paddr_t load_end;
 
-        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
-        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
+        /*
+         * Load kernel at the lowest possible bank
+         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
+         */
+        for ( i = 0; i < info->mem.nr_banks; i++ )
+        {
+            if( (unsigned int)info->mem.bank[i].start < min_start )
+            {
+                min_start = info->mem.bank[i].start;
+                min_i = i;
+            }
+        }
+
+        load_end = info->mem.bank[min_i].start + info->mem.bank[min_i].size;
+        load_end = MIN(info->mem.bank[min_i].start + MB(128), load_end);
 
         info->zimage.load_addr = load_end - end;
         /* Align to 2MB */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 06:44:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 06:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1VpP-0005VZ-5a; Fri, 10 Jan 2014 06:44:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W1VpM-0005VU-PA
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 06:44:20 +0000
Received: from [85.158.137.68:36792] by server-9.bemta-3.messagelabs.com id
	77/B2-13104-3C69FC25; Fri, 10 Jan 2014 06:44:19 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389336257!8268631!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5406 invoked from network); 10 Jan 2014 06:44:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 06:44:18 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0A6iCU4012649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 06:44:13 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A6iBSJ025042
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 06:44:12 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A6iBVc029500; Fri, 10 Jan 2014 06:44:11 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 09 Jan 2014 22:44:10 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Fri, 10 Jan 2014 06:48:38 +0800
Message-Id: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Annie Li <Annie.li@oracle.com>, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current netfront only grants pages for grant copy, not for grant
transfer, so remove the corresponding transfer code and add the
receive-side copy cleanup in xennet_release_rx_bufs.

Signed-off-by: Annie Li <Annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   60 ++-----------------------------------------
 1 files changed, 3 insertions(+), 57 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..692589e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
 	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
+		if (ref == GRANT_INVALID_REF)
 			continue;
-		}
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
+		gnttab_end_foreign_access_ref(ref, 0);
 		gnttab_release_grant_reference(&np->gref_rx_head, ref);
 		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
-			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
-
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
-
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
-
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
-- 
1.7.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 08:20:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 08:20:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1XKc-0004DK-Uk; Fri, 10 Jan 2014 08:20:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1XKb-0004DB-9a
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 08:20:41 +0000
Received: from [85.158.143.35:9368] by server-1.bemta-4.messagelabs.com id
	F9/A0-02132-85DAFC25; Fri, 10 Jan 2014 08:20:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389342039!10863343!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9072 invoked from network); 10 Jan 2014 08:20:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 08:20:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 08:20:39 +0000
Message-Id: <52CFBB640200007800112355@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 08:20:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel Kiper" <daniel.kiper@oracle.com>
References: <1386612106-21488-1-git-send-email-daniel.kiper@oracle.com>
	<52A6DD9E020000780010BAFF@nat28.tlf.novell.com>
	<20131210095916.GI3916@olila.local.net-space.pl>
	<20140109195925.GD3633@olila.local.net-space.pl>
In-Reply-To: <20140109195925.GD3633@olila.local.net-space.pl>
Mime-Version: 1.0
Content-Disposition: inline
Cc: andrew.cooper3@citrix.com, keir@xen.org, david.vrabel@citrix.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: Add me as Xen kexec maintainer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 20:59, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> On Tue, Dec 10, 2013 at 10:59:16AM +0100, Daniel Kiper wrote:
>> On Tue, Dec 10, 2013 at 08:23:42AM +0000, Jan Beulich wrote:
>> > >>> On 09.12.13 at 19:01, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>> > > If there is no objection and according to earlier discussion
>> >
>> > What earlier discussion?
>>
>> Here it is: 
> http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg01689.html 
> 
> Folks, any comments? Yes or no?

I'd prefer to stick with just David as the maintainer for the time being.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 08:51:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 08:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1XoU-0006NT-54; Fri, 10 Jan 2014 08:51:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1XoS-0006NO-0f
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 08:51:32 +0000
Received: from [85.158.137.68:14820] by server-5.bemta-3.messagelabs.com id
	05/76-25188-294BFC25; Fri, 10 Jan 2014 08:51:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389343888!4669933!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10663 invoked from network); 10 Jan 2014 08:51:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 08:51:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="91603096"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 08:51:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 03:51:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1XoM-0000Eg-5G;
	Fri, 10 Jan 2014 08:51:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1XoL-0007qr-SI;
	Fri, 10 Jan 2014 08:51:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24333-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 08:51:25 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24333: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7339564948488485607=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7339564948488485607==
Content-Type: text/plain

flight 24333 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24333/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install     fail REGR. vs. 22687
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22687

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24325
 test-amd64-i386-xl-win7-amd64  5 xen-boot          fail in 24325 pass in 24333

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22652

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24325 never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============7339564948488485607==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7339564948488485607==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 08:51:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 08:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1XoU-0006NT-54; Fri, 10 Jan 2014 08:51:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1XoS-0006NO-0f
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 08:51:32 +0000
Received: from [85.158.137.68:14820] by server-5.bemta-3.messagelabs.com id
	05/76-25188-294BFC25; Fri, 10 Jan 2014 08:51:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389343888!4669933!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10663 invoked from network); 10 Jan 2014 08:51:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 08:51:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="91603096"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 08:51:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 03:51:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1XoM-0000Eg-5G;
	Fri, 10 Jan 2014 08:51:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1XoL-0007qr-SI;
	Fri, 10 Jan 2014 08:51:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24333-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 08:51:25 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24333: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7339564948488485607=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7339564948488485607==
Content-Type: text/plain

flight 24333 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24333/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install     fail REGR. vs. 22687
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22687

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24325
 test-amd64-i386-xl-win7-amd64  5 xen-boot          fail in 24325 pass in 24333

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 22652

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24325 never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============7339564948488485607==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7339564948488485607==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 08:54:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 08:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Xrf-0006WB-W5; Fri, 10 Jan 2014 08:54:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W1Xrd-0006W0-Jb; Fri, 10 Jan 2014 08:54:50 +0000
Received: from [85.158.143.35:45843] by server-1.bemta-4.messagelabs.com id
	DE/7F-02132-855BFC25; Fri, 10 Jan 2014 08:54:48 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389344085!10878752!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26469 invoked from network); 10 Jan 2014 08:54:47 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-14.tower-21.messagelabs.com with SMTP;
	10 Jan 2014 08:54:47 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=709/Y
	+QXZgq6PqpA6imG1d+M8TtD8S95gCKWohaHhHI=; b=ivsGi18aMuaz9IrlQkPHN
	XNPJcTxelpKHa342+YFxH9ZnDPKgjxqwSnr0ukAPHNBad3YOBMw2WfpWfv+pUCtP
	77eJw/VvQDoGJHWAjLTQw8KGI3hOug2MhQ4M2mX9hgcDoV+yBV0ZZReuHhYcaeeg
	iw2MYrYYFZXKxPbrxAScNk=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Fri, 10 Jan 2014 16:54:34 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Fri, 10 Jan 2014 16:54:34 +0800 (CST)
From: topperxin <topperxin@126.com>
To: "Wei Liu" <wei.liu2@citrix.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <20140109121408.GA12164@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<4088fa33.894a.14375212530.Coremail.topperxin@126.com>
	<20140109121408.GA12164@zion.uk.xensource.com>
X-CM-CTRLDATA: Er/2sWZvb3Rlcl9odG09MzQxNTo4MQ==
MIME-Version: 1.0
Message-ID: <109e8760.29921.1437b5c2b84.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAD3_5tLtc9SzNFEAA--.3311W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiLQoPDk6AWFoZ-QAAsO
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2021749618113189918=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2021749618113189918==
Content-Type: multipart/alternative; 
	boundary="----=_Part_620230_1291171218.1389344074627"

------=_Part_620230_1291171218.1389344074627
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

At 2014-01-09 20:14:08, "Wei Liu" <wei.liu2@citrix.com> wrote:
>On Thu, Jan 09, 2014 at 11:52:23AM +0800, topperxin wrote:
>>
>> Hi Wei
>>
>>      Thanks for your reply. I know you were in charge of porting Virtio
>> to Xen; how is that work progressing? Can we configure Virtio on Xen?
>
>I wouldn't say "I'm in charge". That was a sort of experimental project
>two years ago, and I stopped after that. So my knowledge is in fact
>quite out of date.
>
>I tried VirtIO on an HVM guest about two months ago. It worked for me.
>That's the only thing I'm qualified to say. However, Fabio reported it
>didn't work for him. So in short, your mileage may vary.

Could you please share the detailed steps for enabling VirtIO on a Xen HVM
guest? Thanks a lot.

>>      As far as you said, MacVtap was written specially for Virtio, so
>> other virtual NIC driver models cannot use it, right?
>
>No, I didn't say that.
>
>>       I got this information from http://virt.kernelnewbies.org/MacVTap
>>       "Macvtap is a new device driver meant to simplify virtualized
>> bridged networking. It replaces the combination of the tun/tap and
>> bridge drivers with a single module based on the macvlan device driver.
>> A macvtap endpoint is a character device that largely follows the
>> tun/tap ioctl interface and can be used directly by kvm/qemu and other
>> hypervisors that support the tun/tap interface."
>>        As far as I understand, any hypervisor can use MacVtap as long
>> as it supports the tun/tap interface, right? So may I say there is no
>> close relationship between MacVtap and Virtio?
>
>I can't speak for something I'm not familiar with. It would make sense
>to just try that configuration and see whether it works.
>
>Wei.
------=_Part_620230_1291171218.1389344074627--



--===============2021749618113189918==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2021749618113189918==--



From xen-devel-bounces@lists.xen.org Fri Jan 10 08:54:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 08:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Xrf-0006WB-W5; Fri, 10 Jan 2014 08:54:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <topperxin@126.com>)
	id 1W1Xrd-0006W0-Jb; Fri, 10 Jan 2014 08:54:50 +0000
Received: from [85.158.143.35:45843] by server-1.bemta-4.messagelabs.com id
	DE/7F-02132-855BFC25; Fri, 10 Jan 2014 08:54:48 +0000
X-Env-Sender: topperxin@126.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389344085!10878752!1
X-Originating-IP: [220.181.15.31]
X-SpamReason: No, hits=0.1 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,sa_preprocessor: 
	QmFkIElQOiAyMjAuMTgxLjE1LjMxID0+IDY3MjM=\n,HTML_50_60,HTML_MESSAGE,
	UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26469 invoked from network); 10 Jan 2014 08:54:47 -0000
Received: from m15-31.126.com (HELO m15-31.126.com) (220.181.15.31)
	by server-14.tower-21.messagelabs.com with SMTP;
	10 Jan 2014 08:54:47 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=126.com;
	s=s110527; h=Date:From:Subject:MIME-Version:Message-ID; bh=709/Y
	+QXZgq6PqpA6imG1d+M8TtD8S95gCKWohaHhHI=; b=ivsGi18aMuaz9IrlQkPHN
	XNPJcTxelpKHa342+YFxH9ZnDPKgjxqwSnr0ukAPHNBad3YOBMw2WfpWfv+pUCtP
	77eJw/VvQDoGJHWAjLTQw8KGI3hOug2MhQ4M2mX9hgcDoV+yBV0ZZReuHhYcaeeg
	iw2MYrYYFZXKxPbrxAScNk=
Received: from topperxin$126.com ( [221.123.156.2, 176.34.62.243] ) by
	ajax-webmail-wmsvr31 (Coremail) ; Fri, 10 Jan 2014 16:54:34 +0800 (CST)
X-Originating-IP: [221.123.156.2, 176.34.62.243]
Date: Fri, 10 Jan 2014 16:54:34 +0800 (CST)
From: topperxin <topperxin@126.com>
To: "Wei Liu" <wei.liu2@citrix.com>
X-Priority: 3
X-Mailer: Coremail Webmail Server Version SP_ntes V3.5 build
	20131204(24406.5820.5783) Copyright (c) 2002-2014 www.mailtech.cn
	126com
In-Reply-To: <20140109121408.GA12164@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<4088fa33.894a.14375212530.Coremail.topperxin@126.com>
	<20140109121408.GA12164@zion.uk.xensource.com>
X-CM-CTRLDATA: Er/2sWZvb3Rlcl9odG09MzQxNTo4MQ==
MIME-Version: 1.0
Message-ID: <109e8760.29921.1437b5c2b84.Coremail.topperxin@126.com>
X-CM-TRANSID: H8qowAD3_5tLtc9SzNFEAA--.3311W
X-CM-SenderInfo: xwrs1vhu0l0qqrswhudrp/1tbiLQoPDk6AWFoZ-QAAsO
X-Coremail-Antispam: 1U5529EdanIXcx71UUUUU7vcSsGvfC2KfnxnUU==
Cc: xen-devel <xen-devel@lists.xensource.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2021749618113189918=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2021749618113189918==
Content-Type: multipart/alternative; 
	boundary="----=_Part_620230_1291171218.1389344074627"

------=_Part_620230_1291171218.1389344074627
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: base64

CgrlnKggMjAxNC0wMS0wOSAyMDoxNDowOO+8jCJXZWkgTGl1IiA8d2VpLmxpdTJAY2l0cml4LmNv
bT4g5YaZ6YGT77yaCj5PbiBUaHUsIEphbiAwOSwgMjAxNCBhdCAxMTo1MjoyM0FNICswODAwLCB0
b3BwZXJ4aW4gd3JvdGU6Cj4+IO+7vwo+PiAKPj4gCj4+IEhpIFdlaQo+PiAKPj4gICAgICBUaGFu
a3MgZm9yIHlvdXIgcmVwbHksIEkga25vdyB5b3UgYXJlIGluIGNoYXJnZSBvZiBwb3J0aW5nIFZp
cnRpbyB0byB4ZW4sIGhvdyBhYm91dCB0aGUgcHJvY2Vzcz8gTWF5IHdlIGNvbmZpZ3VyZSBWaXJ0
aW8gb24geGVuPwo+Cj5JIHdvdWxkbid0IHNheSAiSSdtIGluIGNoYXJnZSIuIFRoYXQgd2FzIHNv
cnQgb2YgZXhwZXJpbWVudGFsIHByb2plY3QKPnR3byB5ZWFycyBhZ28uIEFuZCBJIHN0b3BwZWQg
YWZ0ZXIgdGhhdC4gU28gbXkga25vd2xlZGdlIGlzIGluIGZhY3QKPnF1aXRlIG91dCBkYXRlZC4K
Pgo+SSB0cmllZCBWaXJ0SU8gb24gSFZNIGd1ZXN0IGFib3V0IHR3byBtb250aHMgYWdvLiBJdCB3
b3JrZWQgZm9yIG1lLgo+VGhhdCdzIHRoZSBvbmx5IHRoaW5nIEknbSBxdWFsaWZpZWQgdG8gc2F5
LiBIb3dldmVyIEZhYmlvIHJlcG9ydGVkIGl0Cj5kaWRuJ3Qgd29yayBmb3IgaGltLiBTbyBpbiBz
aG9ydCB5b3VyIG1pbGVhZ2UgbWF5IHZhcnkuCgoKQ291bGQgeW91IHBsZWFzZSBzaGFyZSB1cyB0
aGUgZWxhYm9yYXRlIHN0ZXBzIGFib3V0IGhvdyB0byBlbmFibGUgVmlydElPIG9uIHhlbiBIVk0g
Z3Vlc3Q/IFRoYW5rcyBhIGxvdAoKPj4gICAgICBTbyBmYXIgYXMgeW91IHNhaWQsIHRoZSBNYWNW
dGFwIHdhcyB3cml0dGVuIHNwZWNpYWxseSBmb3IgVmlydGlvLCBvdGhlciB2aXJ0dWFsIE5JQyBk
cml2ZXIgbW9kZWwgY2FuIG5vdCB1c2UgaXQsIHJpZ2h0Pwo+Cj5ObywgSSBkaWRuJ3Qgc2F5IHRo
YXQuCj4KPj4gICAgICAgSSBnZXQgdGhlIGluZm9ybWF0aW9uIGZyb20gaHR0cDovL3ZpcnQua2Vy
bmVsbmV3Ymllcy5vcmcvTWFjVlRhcAo+PiAgICAgICAiTWFjdnRhcCBpcyBhIG5ldyBkZXZpY2Ug
ZHJpdmVyIG1lYW50IHRvIHNpbXBsaWZ5IHZpcnR1YWxpemVkIGJyaWRnZWQgbmV0d29ya2luZy4g
SXQgcmVwbGFjZXMgdGhlIGNvbWJpbmF0aW9uIG9mIHRoZSB0dW4vdGFwIGFuZCBicmlkZ2UgZHJp
dmVycyB3aXRoIGEgc2luZ2xlIG1vZHVsZSBiYXNlZCBvbiB0aGUgbWFjdmxhbiBkZXZpY2UgZHJp
dmVyLiBBIG1hY3Z0YXAgZW5kcG9pbnQgaXMgYSBjaGFyYWN0ZXIgZGV2aWNlIHRoYXQgbGFyZ2Vs
eSBmb2xsb3dzIHRoZSB0dW4vdGFwIGlvY3RsIGludGVyZmFjZSBhbmQgY2FuIGJlIHVzZWQgZGly
ZWN0bHkgYnkga3ZtL3FlbXUgYW5kIG90aGVyIGh5cGVydmlzb3JzIHRoYXQgc3VwcG9ydCB0aGUg
dHVuL3RhcCBpbnRlcmZhY2UuIiAKPj4gICAgICAgIFNvIGZhciBhcyBJIGNvbXByZWhlbmQsIGFu
eSBoeXBlcnZpc29ycyBjYW4gY29uZmlndXJlIE1hY1Z0YXAgc28gbG9uZyBhcyBpdCBjYW4gc3Vw
cG9ydCB0dW4vdGFwIGludGVyZmFjZSwgcmlnaHQ/ICBTbyBNYXkgSSBzYXkgdGhlcmUgaXMgbm8g
c28gY2xvc2VseSByZWxhdGlvbnNoaXAgYmV0d2VlbiBNYWNWdGFwIGFuZCBWaXJ0aW8gLCByaWdo
dD8gCj4KPkkgY2FuJ3Qgc3BlYWsgZm9yIHRoZSB0aGluZyBJJ20gbm90IGZhbWlsaWFyIHdpdGgu
IEl0IHdvdWxkIG1ha2Ugc2Vuc2UKPnRvIGp1c3QgdHJ5IHRoYXQgY29uZmlndXJhdGlvbiBhbmQg
c2VlIGlmIGl0IHdvcmtzIG9yIG5vdC4KPgo+V2VpLgo=
------=_Part_620230_1291171218.1389344074627--



--===============2021749618113189918==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2021749618113189918==--



From xen-devel-bounces@lists.xen.org Fri Jan 10 09:20:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 09:20:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1YFo-00008y-94; Fri, 10 Jan 2014 09:19:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1YFn-00008t-Fw
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 09:19:47 +0000
Received: from [85.158.143.35:36810] by server-3.bemta-4.messagelabs.com id
	0C/70-32360-13BBFC25; Fri, 10 Jan 2014 09:19:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389345584!10806467!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8223 invoked from network); 10 Jan 2014 09:19:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 09:19:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 09:19:44 +0000
Message-Id: <52CFC93C0200007800112390@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 09:19:40 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>
References: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 20:10, Wei Liu <wei.liu2@citrix.com> wrote:
> +    /* We cannot have PoD and PCI device assignment at the same
> +     * time. The VT-d engine needs to set up the entire page table for
> +     * the domain. However, if PoD is enabled, unpopulated memory is
> +     * marked as populate_on_demand, and the VT-d engine won't set up
> +     * page tables for it. Therefore any DMA to that memory may
> +     * cause a DMA fault.
> +     *
> +     * This is restricted to HVM guests, as only VT-d is relevant
> +     * in the Xend counterpart. We're late in the release cycle, so the
> +     * change should only do what's necessary. We can revisit if we
> +     * need to do the same thing for PV guests in the future.
> +     */

The comment shouldn't say VT-d as long as the code isn't special
casing VT-d.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 09:22:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 09:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1YID-0000FN-V9; Fri, 10 Jan 2014 09:22:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1YIC-0000FG-8t
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 09:22:16 +0000
Received: from [193.109.254.147:31656] by server-16.bemta-14.messagelabs.com
	id CB/11-20600-7CBBFC25; Fri, 10 Jan 2014 09:22:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389345733!10003228!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13019 invoked from network); 10 Jan 2014 09:22:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 09:22:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="89465126"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 09:22:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 04:22:12 -0500
Message-ID: <1389345731.19142.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 10 Jan 2014 09:22:11 +0000
In-Reply-To: <52CFC93C0200007800112390@nat28.tlf.novell.com>
References: <1389294644-25423-1-git-send-email-wei.liu2@citrix.com>
	<52CFC93C0200007800112390@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 09:19 +0000, Jan Beulich wrote:
> >>> On 09.01.14 at 20:10, Wei Liu <wei.liu2@citrix.com> wrote:
> > +    /* We cannot have PoD and PCI device assignment at the same
> > +     * time. The VT-d engine needs to set up the entire page table for
> > +     * the domain. However, if PoD is enabled, unpopulated memory is
> > +     * marked as populate_on_demand, and the VT-d engine won't set up
> > +     * page tables for it. Therefore any DMA to that memory may
> > +     * cause a DMA fault.
> > +     *
> > +     * This is restricted to HVM guests, as only VT-d is relevant
> > +     * in the Xend counterpart. We're late in the release cycle, so the
> > +     * change should only do what's necessary. We can revisit if we
> > +     * need to do the same thing for PV guests in the future.
> > +     */
> 
> The comment shouldn't say VT-d as long as the code isn't special
> casing VT-d.

Also this second paragraph looks more suitable to the commit message to
me.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 09:29:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 09:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1YP4-00012H-2J; Fri, 10 Jan 2014 09:29:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1YP3-00012C-3u
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 09:29:21 +0000
Received: from [85.158.137.68:5023] by server-17.bemta-3.messagelabs.com id
	2C/D4-15965-07DBFC25; Fri, 10 Jan 2014 09:29:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389346159!8385565!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30707 invoked from network); 10 Jan 2014 09:29:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 09:29:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 09:29:18 +0000
Message-Id: <52CFCB7B02000078001123AF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 09:29:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <Andrew.Cooper3@citrix.com>,
	"Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
> I agree that it _shouldn't_ end up emulating -- but the shadow page fault
> routine has a ton of code paths that I've never managed to fully grok.
> 
> (As an aside, I've previously looked at other cases where the shadow code
> ends up emulating unexpected instructions that cause VMs to hang,
> because the shadow module doesn't have a proper implementation of the
> x86_emulate callbacks... e.g. if you try to run the old MS Virtual Server
> product inside a Xen VM that has logdirty enabled, it _will_ hard hang.)

Perhaps that's then what really needs fixing?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 09:49:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 09:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Yic-0002RI-Ef; Fri, 10 Jan 2014 09:49:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zhenzhong.duan@oracle.com>) id 1W1Yia-0002RD-NU
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 09:49:32 +0000
Received: from [193.109.254.147:29111] by server-14.bemta-14.messagelabs.com
	id 0E/5B-12628-B22CFC25; Fri, 10 Jan 2014 09:49:31 +0000
X-Env-Sender: zhenzhong.duan@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389347369!6512078!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28399 invoked from network); 10 Jan 2014 09:49:31 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 09:49:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0A9mA5i027296
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 09:48:11 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A9m9pL020700
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 09:48:10 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0A9m9Zn024837; Fri, 10 Jan 2014 09:48:09 GMT
Received: from [192.168.0.102] (/117.79.232.153)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 01:48:09 -0800
Message-ID: <52CFC1CD.6060605@oracle.com>
Date: Fri, 10 Jan 2014 17:47:57 +0800
From: Zhenzhong Duan <zhenzhong.duan@oracle.com>
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <52CE575F.9050303@oracle.com>	
	<1389260454.27473.27.camel@kazak.uk.xensource.com>	
	<alpine.DEB.2.02.1401091256440.21510@kaball.uk.xensource.com>
	<1389278302.27473.132.camel@kazak.uk.xensource.com>
In-Reply-To: <1389278302.27473.132.camel@kazak.uk.xensource.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Xudong Hao <xudong.hao@intel.com>
Subject: Re: [Xen-devel] Ask about status about 64 bit pci hotplug support
 on qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-9 22:38, Ian Campbell wrote:
> On Thu, 2014-01-09 at 12:57 +0000, Stefano Stabellini wrote:
>> On Thu, 9 Jan 2014, Ian Campbell wrote:
>>> On Thu, 2014-01-09 at 16:01 +0800, Zhenzhong Duan wrote:
>>>> Hi Maintainer
>>>>
>>>> I added 64-bit PCI hotplug support to HVM guests recently.
>>>> Then I found that Xudong Hao had previously sent a similar patch, but it
>>>> wasn't merged into qemu-xen-traditional.
>>> Stefano is not the maintainer of this tree, Ian Jackson is. On the other
>>> hand the patch you link to is against qemu-xen, which Stefano does
>>> maintain, so I'm a bit confused.
>> That is not the case: the patch is against qemu-xen-traditional
>> (hw/pass-through.c doesn't exist on QEMU upstream based trees).
So does anyone know whether it could be accepted?
I think it could be used to fix an Oracle-internal bug.
We found that when we hotplug many PCI VFs into an HVM guest, some regions
may need to be mapped above 4G.
That patch adds the support.
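
As a toy sketch of the problem being described (not the actual patch logic; the function and sizes below are purely illustrative): with a fixed-size 32-bit MMIO hole, hotplugging many VFs can exhaust it, so later BARs have to be placed above 4G.

```python
# Toy illustration only, NOT the patch's logic: greedily place BAR-sized
# regions into a fixed 32-bit MMIO hole; whatever doesn't fit must be
# mapped above 4G. All names and sizes here are hypothetical.
def place_bars(bar_sizes, hole_size):
    """Return (low, high): BARs placed below 4G vs. forced above 4G."""
    low, high, used = [], [], 0
    for size in bar_sizes:
        if used + size <= hole_size:
            low.append(size)   # still fits in the 32-bit hole
            used += size
        else:
            high.append(size)  # hole exhausted: map above 4G
    return low, high

# Six 1 MiB BARs against a 4 MiB hole: two spill above 4G.
low, high = place_bars([0x100000] * 6, hole_size=0x400000)
```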

thanks
zduan
> Ah, I was tricked by the subject line which says qemu-xen!
>
>>> Perhaps you should also have CCd Xudong? I've done that too.
>>>
>>> This may also relate to http://bugs.xenproject.org/xen/bug/28 ?
>>>
>>> Ian.
>>>
>>>> http://lists.xen.org/archives/html/xen-devel/2012-08/msg01168.html
>>>>
>>>> I am interested in the outcome of that patch.
>>>>
>>>> thanks
>>>> zduan
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 09:52:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 09:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1Yl5-0002Xe-31; Fri, 10 Jan 2014 09:52:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1Yl4-0002XX-EE
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 09:52:06 +0000
Received: from [85.158.139.211:48418] by server-2.bemta-5.messagelabs.com id
	DB/E1-29392-5C2CFC25; Fri, 10 Jan 2014 09:52:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389347523!8928649!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18037 invoked from network); 10 Jan 2014 09:52:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 09:52:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="91614365"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 09:52:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 04:52:02 -0500
Message-ID: <1389347521.19142.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 10 Jan 2014 09:52:01 +0000
In-Reply-To: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
References: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
> Scrubbing of heap pages was disabled because it was slow on the models. Now that
> Xen supports real hardware, it's possible to enable scrubbing by default.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>     This patch should go to Xen 4.4. It avoids giving non-cleared pages to
>     a domain.
> 
>     The downside is it's now slow on models.

There is a no-bootscrub command-line option which can be used in that
case. Could you update the relevant model wiki pages to mention it
please?
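
For anyone following along, a hedged sketch of how that option is passed: no-bootscrub goes on the hypervisor's (not dom0's) command line. Paths and the kernel line below are illustrative only; on ARM models the option would instead go in whatever bootargs the model or bootloader hands to Xen.

```
# Illustrative GRUB-style fragment: append "no-bootscrub" to the Xen line
# to skip the boot-time heap scrub (useful on slow models).
multiboot /boot/xen.gz no-bootscrub
module /boot/vmlinuz ...
```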

>     The current implementation of scrub_heap_pages loops over every page in the
>     frametable. On ARM, only some of these pages can contain MMIO. We are safe
>     because when the frametable is initialized, those pages are marked in-use,
>     so the function won't clear them.

I don't think this behaviour is specific to ARM, x86 has MMIO regions
mixed in with RAM as well.

From an RM PoV I think this is a necessary fix since it can otherwise
potentially leak information from a previous boot. I also think it is
low risk, nothing should have been relying on non-zero content of any
page.

> ---
>  xen/arch/arm/setup.c |    6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 9fc40c8..d7c7f4d 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -764,10 +764,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>      if ( construct_dom0(dom0) != 0)
>              panic("Could not set up DOM0 guest OS");
>  
> -    /* Scrub RAM that is still free and so may go to an unprivileged domain.
> -       XXX too slow in simulator
> -       scrub_heap_pages();
> -    */
> +    /* Scrub RAM that is still free and so may go to an unprivileged domain. */
> +    scrub_heap_pages();
>  
>      init_constructors();
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 10:02:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 10:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1YvH-0003PL-8V; Fri, 10 Jan 2014 10:02:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1YvG-0003PG-9w
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 10:02:38 +0000
Received: from [85.158.137.68:47459] by server-2.bemta-3.messagelabs.com id
	6E/0F-17329-D35CFC25; Fri, 10 Jan 2014 10:02:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389348153!4689733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6682 invoked from network); 10 Jan 2014 10:02:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 10:02:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="89473632"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 10:02:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 05:02:31 -0500
Message-ID: <1389348151.19142.11.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 10 Jan 2014 10:02:31 +0000
In-Reply-To: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 23:01 +0100, Olaf Hering wrote:
> Implement discard support for xen_disk. It makes use of the existing
> discard code in qemu.

Please always CC the maintainers on patches. In this case that's
Stefano.

Stefano, I wonder if the MAINTAINERS entry for upstream qemu ought to
list Anthony and/or the upstream qemu list as well?

Ian.
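
As context for readers of the archive, once such backend support is wired up a guest disk can opt in through the xl disk specification's discard keyword (see xl-disk-configuration); the fragment below is a hedged sketch, with the image path purely illustrative and availability depending on the qemu backend in use.

```
# Illustrative xl domain config fragment: "discard" asks the backend to
# forward the guest's TRIM/UNMAP requests to the underlying image.
disk = [ 'format=qcow2, vdev=xvda, access=rw, discard, target=/var/lib/xen/images/guest.qcow2' ]
```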



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 10:22:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 10:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ZE0-0004xV-1q; Fri, 10 Jan 2014 10:22:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1ZDy-0004xN-Es
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 10:21:58 +0000
Received: from [193.109.254.147:41793] by server-5.bemta-14.messagelabs.com id
	10/FC-03510-5C9CFC25; Fri, 10 Jan 2014 10:21:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389349315!10025286!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19610 invoked from network); 10 Jan 2014 10:21:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 10:21:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="89477366"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 10:20:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 05:20:13 -0500
Message-ID: <1389349212.19142.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Jan 2014 10:20:12 +0000
In-Reply-To: <20140109190049.GB17806@pegasus.dumpdata.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	Ross Philipson <ross.philipson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 14:00 -0500, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 09, 2014 at 09:46:59AM -0500, Ross Philipson wrote:

> > Your memory is intact; I did provide a helper library. I posted it
> > as a tarball since I could not figure out where such a thing might
> > live in the xen tree. I posted it twice - the second time with some
> > fixes:
> > 
> > http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html
> 
> Would it make sense to try to have it as part of the Xen tree?
> It looks in good shape.

I think we considered this before, but I can't remember why we didn't
(assuming we even explicitly decided that rather than just letting it
slip through the cracks).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 10:26:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 10:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ZIT-0005Fa-Qv; Fri, 10 Jan 2014 10:26:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1ZIR-0005FS-VC
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 10:26:36 +0000
Received: from [85.158.137.68:55847] by server-12.bemta-3.messagelabs.com id
	0D/E2-20055-BDACFC25; Fri, 10 Jan 2014 10:26:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389349592!8317185!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22970 invoked from network); 10 Jan 2014 10:26:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 10:26:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="89479028"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 10:26:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 05:26:31 -0500
Message-ID: <1389349591.19142.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 10:26:31 +0000
In-Reply-To: <21198.62393.819485.361532@mariner.uk.xensource.com>
References: <21196.19900.136146.867552@mariner.uk.xensource.com>
	<1389186144.4883.60.camel@kazak.uk.xensource.com>
	<21198.62393.819485.361532@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed
 migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 19:08 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration"):
> > The older method is that the toolstack resets a bunch of state (see
> > tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then restarts
> > the domain. The domain will see HYPERVISOR_suspend return 0 and will
> > continue without any realisation that it is actually running in the
> > original domain and not in a new one. This method is supposed to be
> > implemented by libxl_domain_resume(suspend_cancel=0) but it is not.
> 
> I have looked into this and I think I can fairly simply implement the
> old protocol in libxl.  This is necessary, I think, to preserve our
> back-to-3.0 ABI compatibility guarantee.
> 
> Looking at a modern pvops Linux kernel, it does seem to try to cope with
> older hypervisors which don't do the "new" protocol.  So that's a
> reasonable thing to start with, but looking at the code in Linux I
> suspect it may not actually work very well.  So if anyone has an
> ancient test case of some kind that would be helpful...

The linux-2.6.18-xen.hg kernel ought to work in the old mode I think. Or
any of the SLES fwd ports?

Looks like RHEL4 (linux-2.6.9-89.0.16.EL kernel) doesn't have the
support for the new mode at all.

It would probably be wise to validate this under xend before chasing
red herrings with xl.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 10:31:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 10:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ZN7-00064Y-Jy; Fri, 10 Jan 2014 10:31:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>)
	id 1W1ZN6-00064N-2w; Fri, 10 Jan 2014 10:31:24 +0000
Received: from [85.158.137.68:21684] by server-16.bemta-3.messagelabs.com id
	EF/56-26128-BFBCFC25; Fri, 10 Jan 2014 10:31:23 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389349880!8345513!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19671 invoked from network); 10 Jan 2014 10:31:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 10:31:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,637,1384300800"; d="scan'208";a="91621901"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 10:31:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 05:31:19 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1ZN1-0005qp-1h;
	Fri, 10 Jan 2014 10:31:19 +0000
Date: Fri, 10 Jan 2014 10:31:19 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: topperxin <topperxin@126.com>
Message-ID: <20140110103118.GB29180@zion.uk.xensource.com>
References: <41ded77d.2bbc4.143715f985c.Coremail.topperxin@126.com>
	<20140108184405.GB13867@zion.uk.xensource.com>
	<4088fa33.894a.14375212530.Coremail.topperxin@126.com>
	<20140109121408.GA12164@zion.uk.xensource.com>
	<109e8760.29921.1437b5c2b84.Coremail.topperxin@126.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <109e8760.29921.1437b5c2b84.Coremail.topperxin@126.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xensource.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Subject: Re: [Xen-devel] xen & MacVtap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[Body decoded from base64; the Chinese attribution line is translated to English.]

On Fri, Jan 10, 2014 at 04:54:34PM +0800, topperxin wrote:
> 
> 
> On 2014-01-09 20:14:08, "Wei Liu" <wei.liu2@citrix.com> wrote:
> >On Thu, Jan 09, 2014 at 11:52:23AM +0800, topperxin wrote:
> >> 
> >> Hi Wei
> >> 
> >>      Thanks for your reply. I know you are in charge of porting
> >> VirtIO to Xen; how is it progressing? Can we configure VirtIO on Xen?
> >
> >I wouldn't say "I'm in charge". That was a sort of experimental project
> >two years ago, and I stopped after that. So my knowledge is in fact
> >quite out of date.
> >
> >I tried VirtIO on an HVM guest about two months ago. It worked for me.
> >That's the only thing I'm qualified to say. However, Fabio reported it
> >didn't work for him. So in short, your mileage may vary.
> 
> 
> Could you please share with us the detailed steps for enabling VirtIO
> on a Xen HVM guest? Thanks a lot.
> 

In your vif configuration, add "model=virtio-net-pci" to specify
VirtIO net as your emulated device. You also need to add
xen_emul_unplug=never to your *guest* kernel command line.

You might also find device_model_args= useful for appending arbitrary
arguments to QEMU. Presumably you need to append some runes to QEMU to
enable MacVtap.

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
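[Editorial note: a minimal xl config fragment for the setup Wei describes above. The bridge name and the rest of the guest config are assumptions for illustration only.]

```
# Hypothetical xl config fragment: VirtIO net as the emulated NIC for
# an HVM guest (bridge name is an assumption).
vif = [ "bridge=xenbr0, model=virtio-net-pci" ]

# And on the *guest* kernel command line:
#   xen_emul_unplug=never
```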

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:28:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aFh-0001Io-53; Fri, 10 Jan 2014 11:27:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1aFf-0001Ij-35
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 11:27:47 +0000
Received: from [85.158.137.68:48010] by server-3.bemta-3.messagelabs.com id
	D5/51-10658-239DFC25; Fri, 10 Jan 2014 11:27:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389353263!4714280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31474 invoked from network); 10 Jan 2014 11:27:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:27:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89492886"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:27:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:27:42 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W1aFa-0006h3-Ak;
	Fri, 10 Jan 2014 11:27:42 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 10 Jan 2014 11:27:42 +0000
Message-ID: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for HVM
	guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

This change is restricted to HVM guests, as only VT-d was relevant in
the Xend counterpart. We're late in the release cycle, so the change
should only do what's necessary. We can revisit it if we need to do the
same thing for PV guests in the future.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 tools/libxl/libxl_create.c |   22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e03bb55..b7adf34 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     uint32_t domid;
     int i, ret;
+    bool pod_enabled = false;
 
     /* convenience aliases */
     libxl_domain_config *const d_config = dcs->guest_config;
@@ -714,6 +715,27 @@ static void initiate_domain_create(libxl__egc *egc,
 
     domid = 0;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building the HVM domain will enable PoD mode.
+     */
+    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
+
+    /* We cannot have PoD and PCI device assignment at the same time
+     * for an HVM guest. It was reported that the VT-d engine cannot
+     * work with PoD enabled because it needs to populate the entire
+     * page table for the guest. A quick grep through the AMD IOMMU
+     * code suggests it does not cope with PoD either. To stay on
+     * the safe side, we disable PCI device assignment with PoD
+     * altogether, regardless of the underlying IOMMU in use.
+     */
+    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        ret = ERROR_INVAL;
+        LOG(ERROR, "PCI device assignment for HVM guest not allowed with PoD enabled");
+        goto error_out;
+    }
+
     ret = libxl__domain_create_info_setdefault(gc, &d_config->c_info);
     if (ret) goto error_out;
 
-- 
1.7.10.4
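
[Editorial note: for context, PoD is enabled whenever the guest config's target memory is below its maximum. A hypothetical HVM guest config that would now be rejected by this patch (device BDF and sizes are illustrative):]

```
# memory < maxmem makes libxl pass target_memkb < max_memkb to libxc,
# which enables PoD at domain-build time...
builder = "hvm"
memory  = 1024
maxmem  = 4096
# ...so combining it with PCI passthrough now fails with ERROR_INVAL.
pci     = [ "0000:03:00.0" ]
```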


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:35:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aMx-00022T-95; Fri, 10 Jan 2014 11:35:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W1aMr-00022M-1d
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 11:35:18 +0000
Received: from [193.109.254.147:13407] by server-13.bemta-14.messagelabs.com
	id 35/68-19374-0FADFC25; Fri, 10 Jan 2014 11:35:12 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389353710!10067817!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12214 invoked from network); 10 Jan 2014 11:35:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:35:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89494472"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:35:09 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:35:09 -0500
Message-ID: <52CFDAEC.5080708@citrix.com>
Date: Fri, 10 Jan 2014 11:35:08 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
	<20140109153015.GF12164@zion.uk.xensource.com>
In-Reply-To: <20140109153015.GF12164@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 15:30, Wei Liu wrote:
> On Wed, Jan 08, 2014 at 12:10:11AM +0000, Zoltan Kiss wrote:
>> v3:
>> - delete a surplus checking from tx_action
>> - remove stray line
>> - squash xenvif_idx_unmap changes into the first patch
>> - init spinlocks
>> - call map hypercall directly instead of gnttab_map_refs()
>
> I suppose this is to avoid touching the m2p override as well, just as
> the previous patch uses the unmap hypercall directly.

Yes.

>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	BUG_ON(skb->dev != dev);
>>
>>   	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL || !xenvif_schedulable(vif))
>> +	if (vif->task == NULL ||
>> +		vif->dealloc_task == NULL ||
>> +		!xenvif_schedulable(vif))
>
> Indentation.
Fixed, and the later ones as well.

>> @@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   	err = gop->status;
>>   	if (unlikely(err))
>>   		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +	else {
>> +		if (vif->grant_tx_handle[pending_idx] !=
>> +			NETBACK_INVALID_HANDLE) {
>> +			netdev_err(vif->dev,
>> +				"Stale mapped handle! pending_idx %x handle %x\n",
>> +				pending_idx, vif->grant_tx_handle[pending_idx]);
>> +			BUG();
>> +		}
>> +		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
>> +			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
>
> What happens when you don't have this?
Your frags will be filled with garbage. I don't understand exactly what 
this function does; someone might want to enlighten us. I took its 
usage from the classic kernel.
Also, it might be worthwhile to check the return value and BUG if it's 
false, but I don't know exactly what that return value means.
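[Editorial note: the effect being discussed can be illustrated with a toy phys-to-machine table. The names and the FOREIGN_FRAME encoding below are simplified stand-ins, not the real Linux implementation: the point is that after a grant map, the backend's pfn must translate to the foreign mfn, or later pfn-to-mfn lookups see stale data.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy p2m table, purely to illustrate why set_phys_to_machine() is
 * called after the grant-map hypercall succeeds. */
#define P2M_SIZE          16
#define FOREIGN_FRAME_BIT (1UL << 31)     /* marks an mfn as foreign */
#define FOREIGN_FRAME(m)  ((m) | FOREIGN_FRAME_BIT)

static unsigned long p2m[P2M_SIZE];       /* zero-initialized */

/* Returns false when the entry cannot be installed; the real function
 * can fail too, which is why checking its result was suggested. */
static bool toy_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
    if (pfn >= P2M_SIZE)
        return false;
    p2m[pfn] = mfn;
    return true;
}
```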

>
>>   		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>>   			int target = min_t(int, skb->len, PKT_PROT_LEN);
>> @@ -1581,6 +1541,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   		if (checksum_setup(vif, skb)) {
>>   			netdev_dbg(vif->dev,
>>   				   "Can't setup checksum in net_tx_action\n");
>> +			if (skb_shinfo(skb)->destructor_arg)
>> +				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>
> Do you still care setting the flag even if this skb is not going to be
> delivered? If so can you state clearly the reason just like the
> following hunk?
Of course, otherwise the pages wouldn't be sent back to the guest. I've 
added a comment.

>> @@ -1715,7 +1685,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>>   int xenvif_tx_action(struct xenvif *vif, int budget)
>>   {
>>   	unsigned nr_gops;
>> -	int work_done;
>> +	int work_done, ret;
>>
>>   	if (unlikely(!tx_work_todo(vif)))
>>   		return 0;
>> @@ -1725,7 +1695,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
>>   	if (nr_gops == 0)
>>   		return 0;
>>
>> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
>> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
>> +			vif->tx_map_ops,
>> +			nr_gops);
>
> Why do you need to replace gnttab_batch_copy with hypercall? In the
> ideal situation gnttab_batch_copy should behave the same as directly
> hypercall but it also handles GNTST_eagain for you.

I don't need gnttab_batch_copy at all; I'm using the grant mapping 
hypercall here.
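[Editorial note on the GNTST_eagain point: a sketch of what the batching helpers add over a raw hypercall. The types and the mocked hypercall below are made up for illustration; this is not the Linux or Xen API.]

```c
#include <assert.h>

/* GNTST_eagain means "retry this op"; -12 mirrors Xen's value, but
 * any distinct code would do for this mock. */
enum { GNTST_okay = 0, GNTST_eagain = -12 };

/* Mock op: succeeds once tries_left drops to zero. */
struct mock_gnttab_op {
    int status;
    int tries_left;
};

/* Stand-in for a raw HYPERVISOR_grant_table_op(): processes the batch
 * once, with no retry of its own. */
static void mock_hypercall(struct mock_gnttab_op *ops, unsigned int count)
{
    for (unsigned int i = 0; i < count; i++)
        ops[i].status = (ops[i].tries_left-- > 0) ? GNTST_eagain
                                                  : GNTST_okay;
}

/* What a gnttab_batch_copy()-style helper adds on top: re-issue any
 * op that came back GNTST_eagain until it resolves. */
static void batch_with_eagain_retry(struct mock_gnttab_op *ops,
                                    unsigned int count)
{
    mock_hypercall(ops, count);
    for (unsigned int i = 0; i < count; i++)
        while (ops[i].status == GNTST_eagain)
            mock_hypercall(&ops[i], 1);
}
```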

Regards,

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:39:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aQe-0002pe-KK; Fri, 10 Jan 2014 11:39:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1aQd-0002pV-Oq
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 11:39:08 +0000
Received: from [85.158.137.68:24258] by server-11.bemta-3.messagelabs.com id
	8B/8D-19379-BDBDFC25; Fri, 10 Jan 2014 11:39:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389353944!8335846!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11501 invoked from network); 10 Jan 2014 11:39:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:39:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89495040"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:39:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:39:03 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1aQZ-0006qT-Mh;
	Fri, 10 Jan 2014 11:39:03 +0000
Date: Fri, 10 Jan 2014 11:39:03 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110113903.GD29180@zion.uk.xensource.com>
References: <1389295023-25507-1-git-send-email-wei.liu2@citrix.com>
	<20140109191913.GD17806@pegasus.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140109191913.GD17806@pegasus.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 02:19:14PM -0500, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 09, 2014 at 07:17:03PM +0000, Wei Liu wrote:
> > This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> > device assignment if PoD is enabled.").
> > 
> > Allegedly-reported-by: Konrad Wilk <konrad.wilk@oracle.com>
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > ---
> > This was listed in the 4.4 development update. A quick skim through
> > hypervisor vtd changesets suggests the situation has stayed unchanged
> > for the last 3 years -- at least I didn't find any log message related
> > to "PoD".
> > 
> > Run: git log --since="2010-01-21" xen/drivers/passthrough/vtd
> > (It was first reported on 2010-01-21)
> > 
> > This patch is tested with setting the memory=, maxmem= and pci=[]
> > parameters in both HVM and PV guests. In the HVM guest's case I needed
> > to set claim_mode=0 in /etc/xen/xl.conf to make xl actually create the
> > HVM guest with PoD mode enabled.
> 
> Which implies something is amiss with the PoD memory usage being
> >= nr_pages for the domain -- that is, it allocates more memory than
> it asked for.
> 
> We should track that as a bug I think.

Ian, can you track this in Xen bug tracker?

Claim mode doesn't work for HVM with PoD enabled.

Wei.
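[Editorial note: a guest configuration along the lines Wei describes above could look like this. The values are hypothetical; the option names are the standard xl ones.]

```
# /etc/xen/xl.conf: disable claiming so xl creates the HVM guest with PoD
claim_mode=0

# guest.cfg (hypothetical values)
builder = "hvm"
memory  = 1024   # target; smaller than maxmem, so PoD is enabled
maxmem  = 2048
pci     = [ "0000:01:00.0" ]   # rejected once the libxl patch is applied
```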

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aRS-0002vO-2x; Fri, 10 Jan 2014 11:39:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1aRQ-0002vA-IN
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 11:39:56 +0000
Received: from [85.158.143.35:20918] by server-2.bemta-4.messagelabs.com id
	34/F9-11386-B0CDFC25; Fri, 10 Jan 2014 11:39:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389353995!10882241!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19079 invoked from network); 10 Jan 2014 11:39:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 11:39:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 11:39:54 +0000
Message-Id: <52CFEA170200007800112465@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 11:39:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> +    /* We cannot have PoD and PCI device assignment at the same time
> +     * for an HVM guest. It was reported that the VT-d engine cannot

There's still a "VT-d" left in here...

Jan

> +     * work with PoD enabled because it needs to populate the entire
> +     * page table for the guest. Also, a quick grep through the AMD
> +     * IOMMU related code suggests it does not cope with PoD either.
> +     * Just to stay on the safe side, we disable PCI device assignment
> +     * with PoD altogether, regardless of the underlying IOMMU in use.
> +     */




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:44:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aVt-0003EH-50; Fri, 10 Jan 2014 11:44:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1aVr-0003E6-Jd
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 11:44:31 +0000
Received: from [85.158.139.211:38499] by server-12.bemta-5.messagelabs.com id
	20/E3-30017-E1DDFC25; Fri, 10 Jan 2014 11:44:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389354268!8794671!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19942 invoked from network); 10 Jan 2014 11:44:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:44:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91635876"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 11:44:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 06:44:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1aVQ-00019o-02;
	Fri, 10 Jan 2014 11:44:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1aVP-00058d-PE;
	Fri, 10 Jan 2014 11:44:03 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.56579.619219.693765@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 11:44:03 +0000
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1389314626.16457.122.camel@Solace>
References: <1389314626.16457.122.camel@Solace>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Missing/wrong tag in xenbits'
	qemu-upstream-unstable.git ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli writes ("Missing/wrong tag in xenbits' qemu-upstream-unstable.git ?"):
> which wipes the build box, so I don't think I have stale files, config,
> etc. The commit responsible for having this looking for a
> 'qemu-xen-4.4.0-rc1' tag is:
> 
> $ git show d84a6e2f
> commit d84a6e2fa077d07f91ac72c3d8334b75b45fcba2
> Author: Ian Jackson <ian.jackson@eu.citrix.com>
> Date:   Thu Dec 19 16:28:29 2013 +0000
...
> However, looking here: http://xenbits.xen.org/gitweb/?p=staging/qemu-upstream-unstable.git;a=tags
> there does not seem to be any such tag:
> 
> staging/qemu-upstream-unstable.git
> 6 months ago	qemu-xen-4.3.0	qemu-xen-4.3.0	tag	 | commit | shortlog | log
> 8 months ago	qemu-xen-4.3.0-rc1	Xen 4.3.0 RC1	tag	 | commit | shortlog | log
> 
> What am I missing or doing wrong?

It looks like this tag was in the non-staging tree only, not in
staging.  I have pushed it to staging too.

It looks like this was my fault, sorry.  (Normally Stefano tags this
tree, but not on this occasion.)

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:45:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aWy-0003MV-R3; Fri, 10 Jan 2014 11:45:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1aWx-0003ML-LW
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 11:45:39 +0000
Received: from [193.109.254.147:55459] by server-11.bemta-14.messagelabs.com
	id 60/03-20576-36DDFC25; Fri, 10 Jan 2014 11:45:39 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389354335!7726661!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5867 invoked from network); 10 Jan 2014 11:45:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:45:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91636255"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 11:45:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:45:34 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1aWs-0006vl-Hv;
	Fri, 10 Jan 2014 11:45:34 +0000
Date: Fri, 10 Jan 2014 11:45:34 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140110114534.GE29180@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
	<20140109153015.GF12164@zion.uk.xensource.com>
	<52CFDAEC.5080708@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CFDAEC.5080708@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 11:35:08AM +0000, Zoltan Kiss wrote:
[...]
> 
> >>@@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> >>  	err = gop->status;
> >>  	if (unlikely(err))
> >>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> >>+	else {
> >>+		if (vif->grant_tx_handle[pending_idx] !=
> >>+			NETBACK_INVALID_HANDLE) {
> >>+			netdev_err(vif->dev,
> >>+				"Stale mapped handle! pending_idx %x handle %x\n",
> >>+				pending_idx, vif->grant_tx_handle[pending_idx]);
> >>+			BUG();
> >>+		}
> >>+		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
> >>+			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
> >
> >What happens when you don't have this?
> Your frags will be filled with garbage. I don't understand exactly
> what this function does; someone might want to enlighten us. I've
> taken its usage from the classic kernel.
> Also, it might be worthwhile to check the return value and BUG if
> it's false, but I don't know exactly what that return value means.
> 

This is actually part of gnttab_map_refs. As you're using the hypercall
directly, this becomes very fragile.

So the right thing to do is to fix gnttab_map_refs.

> >
> >>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
> >>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
> >>@@ -1581,6 +1541,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
> >>  		if (checksum_setup(vif, skb)) {
> >>  			netdev_dbg(vif->dev,
> >>  				   "Can't setup checksum in net_tx_action\n");
> >>+			if (skb_shinfo(skb)->destructor_arg)
> >>+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> >
> >Do you still care setting the flag even if this skb is not going to be
> >delivered? If so can you state clearly the reason just like the
> >following hunk?
> Of course, otherwise the pages wouldn't be sent back to the guest.
> I've added a comment.
> 

OK, thanks! That means whenever an SKB leaves netback we need to add
this flag.

> >>@@ -1715,7 +1685,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> >>  int xenvif_tx_action(struct xenvif *vif, int budget)
> >>  {
> >>  	unsigned nr_gops;
> >>-	int work_done;
> >>+	int work_done, ret;
> >>
> >>  	if (unlikely(!tx_work_todo(vif)))
> >>  		return 0;
> >>@@ -1725,7 +1695,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
> >>  	if (nr_gops == 0)
> >>  		return 0;
> >>
> >>-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> >>+	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
> >>+			vif->tx_map_ops,
> >>+			nr_gops);
> >
> >Why do you need to replace gnttab_batch_copy with the hypercall? In
> >the ideal situation gnttab_batch_copy should behave the same as the
> >direct hypercall, but it also handles GNTST_eagain for you.
> 
> I don't need gnttab_batch_copy at all, I'm using the grant mapping
> hypercall here.
> 

Oops, my bad! Ignore that one.

Wei.

> Regards,
> 
> Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:46:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1aY5-0003Tc-BC; Fri, 10 Jan 2014 11:46:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1aY3-0003TN-I9
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 11:46:47 +0000
Received: from [85.158.139.211:61778] by server-7.bemta-5.messagelabs.com id
	7A/14-04824-6ADDFC25; Fri, 10 Jan 2014 11:46:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389354404!8816100!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16977 invoked from network); 10 Jan 2014 11:46:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:46:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89496814"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:46:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:46:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1aXz-0006xC-IJ;
	Fri, 10 Jan 2014 11:46:43 +0000
Date: Fri, 10 Jan 2014 11:46:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140110114643.GF29180@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CFEA170200007800112465@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> >>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> > +    /* We cannot have PoD and PCI device assignment at the same time
> > +     * for an HVM guest. It was reported that the VT-d engine cannot
> 
> There's still a "VT-d" left in here...
> 

I mentioned AMD as well. I was trying to clarify things a bit more...

Wei.

> Jan
> 
> > +     * work with PoD enabled because it needs to populate the entire
> > +     * page table for the guest. Also a quick grep through AMD IOMMU
> > +     * related code suggests it has not coped with PoD either. Just
> > +     * to stay on the safe side, we disable PCI device assignment
> > +     * with PoD altogether, regardless of the underlying IOMMU in use.
> > +     */
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:56:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ahf-0004NW-KD; Fri, 10 Jan 2014 11:56:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1ahe-0004NR-Kr
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 11:56:42 +0000
Received: from [193.109.254.147:49483] by server-6.bemta-14.messagelabs.com id
	07/11-14958-9FFDFC25; Fri, 10 Jan 2014 11:56:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389354999!10061562!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31036 invoked from network); 10 Jan 2014 11:56:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:56:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89498776"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:56:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:56:39 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1ahb-00075b-0e;
	Fri, 10 Jan 2014 11:56:39 +0000
Date: Fri, 10 Jan 2014 11:56:39 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140110115638.GG29180@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When I have following configuration in HVM config file:
  memory=128
  maxmem=256
and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with

xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82

With claim_mode=0, I can successfully create the HVM guest.

PV guests are not affected.
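
A toy model of the suspected interaction may help: with PoD the p2m spans maxmem while only `memory` is backed by RAM, so if the toolstack stakes its claim for one of the two sizes and populates pages against the other, the claim hypercall can fail with EINVAL. This is a sketch with made-up names and simplified semantics; the real path goes through xc_domain_claim_pages and XENMEM_claim_pages.

```python
# Toy model only: made-up function, loosely shaped after XENMEM_claim_pages.
EINVAL = 22

def claim_pages(free_mb, already_allocated_mb, claim_mb):
    """Stake a claim of claim_mb for a domain that already holds
    already_allocated_mb, with free_mb of host memory available."""
    if claim_mb < already_allocated_mb:
        return -EINVAL  # claim smaller than what is already allocated
    if claim_mb - already_allocated_mb > free_mb:
        return -EINVAL  # not enough free host memory to back the claim
    return 0

memory_mb, maxmem_mb = 128, 256  # the config from this report

# If pages were populated against maxmem before a claim staked for
# `memory`, the claim is too small and fails with -EINVAL (-22):
assert claim_pages(4096, maxmem_mb, memory_mb) == -EINVAL
```

In the model, claim_mode=0 simply skips the claim step, which matches the observation above that creation succeeds without it.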

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 11:59:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 11:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1akf-00055d-89; Fri, 10 Jan 2014 11:59:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1akd-00055V-Fv
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 11:59:47 +0000
Received: from [193.109.254.147:7100] by server-12.bemta-14.messagelabs.com id
	25/50-13681-2B0EFC25; Fri, 10 Jan 2014 11:59:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389355184!10045277!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29572 invoked from network); 10 Jan 2014 11:59:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 11:59:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89499335"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 11:59:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 06:59:43 -0500
Message-ID: <1389355182.19142.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Fri, 10 Jan 2014 11:59:42 +0000
In-Reply-To: <20140110115638.GG29180@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
owner Wei Liu <wei.liu2@citrix.com>
thanks

On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> When I have the following configuration in an HVM config file:
>   memory=128
>   maxmem=256
> and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> 
> xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> 
> With claim_mode=0, I can successfully create the HVM guest.

Is it trying to claim 256M instead of 128M? (although the likelihood
that you only have 128-255M free is quite low, or are you
autoballooning?)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:11:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1avg-00062I-CD; Fri, 10 Jan 2014 12:11:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W1avf-00062D-E0
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 12:11:11 +0000
Received: from [193.109.254.147:5752] by server-16.bemta-14.messagelabs.com id
	A1/31-20600-E53EFC25; Fri, 10 Jan 2014 12:11:10 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389355869!7733995!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4730 invoked from network); 10 Jan 2014 12:11:09 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 12:11:09 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W1azO-0002jT-QN; Fri, 10 Jan 2014 12:15:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389356102.10505@bugs.xenproject.org>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1389355182.19142.38.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Fri, 10 Jan 2014 12:15:02 +0000
Subject: [Xen-devel] Processed: Re: Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #32 rooted at `<20140110115638.GG29180@zion.uk.xensource.com>'
Title: `Re: Claim mode and HVM PoD interact badly'
> owner Wei Liu <wei.liu2@citrix.com>
Command failed: Cannot parse arguments at /srv/xen-devel-bugs/lib/emesinae/control.pl line 301, <M> line 32.
Stop processing here.

Modified/created Bugs:
 - 32: http://bugs.xenproject.org/xen/bug/32 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:16:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1b0o-0006Aq-K6; Fri, 10 Jan 2014 12:16:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1b0n-0006Ai-3S
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 12:16:29 +0000
Received: from [193.109.254.147:21143] by server-5.bemta-14.messagelabs.com id
	14/F8-03510-C94EFC25; Fri, 10 Jan 2014 12:16:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389356186!9992607!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30853 invoked from network); 10 Jan 2014 12:16:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 12:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91645416"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 12:16:26 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Fri, 10 Jan 2014 07:16:25 -0500
Received: from [192.168.42.152] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Fri, 10 Jan 2014
	13:16:24 +0100
Message-ID: <52CFE496.5090906@citrix.com>
Date: Fri, 10 Jan 2014 12:16:22 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
In-Reply-To: <20140110114643.GF29180@zion.uk.xensource.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/2014 11:46, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
>>> +    /* We cannot have PoD and PCI device assignment at the same time
>>> +     * for HVM guest. It was reported that VT-d engine cannot
>> There's still a "VT-d" left in here...
>>
> I mentioned AMD as well. Was trying to clarify things a bit more...
>
> Wei.

Use "IOMMU"/"PCI passthrough"/etc. as appropriate; these terms are vendor-neutral.

There is no way for PoD (or paging, for that matter) to work in
combination with PCI passthrough, as the backing RAM for all gfns needs
to exist to receive DMA.
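
The check being discussed can be sketched like this (a Python rendering with hypothetical names; the actual patch is C code in libxl's domain-creation path):

```python
# Sketch of the proposed guard: PoD is in effect when the memory target
# is below maxmem, and in that state some gfns have no backing RAM, so a
# passed-through device's DMA into such a gfn cannot be satisfied.
# Names are hypothetical; the real check is C code in libxl.

def validate_hvm_config(target_memkb, max_memkb, num_pcidevs):
    pod_enabled = target_memkb < max_memkb
    if pod_enabled and num_pcidevs > 0:
        raise ValueError(
            "PoD (memory < maxmem) and PCI device assignment cannot be "
            "combined: every gfn needs backing RAM to receive DMA")
    return True

# memory == maxmem: no PoD in effect, so passthrough is allowed.
validate_hvm_config(131072, 131072, num_pcidevs=1)
```

Rejecting the combination at configuration time keeps the failure vendor-neutral, rather than depending on how a particular IOMMU reacts to an unbacked gfn.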

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:16:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1b0j-0006AS-7H; Fri, 10 Jan 2014 12:16:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1b0h-0006AK-As
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 12:16:23 +0000
Received: from [193.109.254.147:2601] by server-3.bemta-14.messagelabs.com id
	5A/30-11000-694EFC25; Fri, 10 Jan 2014 12:16:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389356180!10032019!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2900 invoked from network); 10 Jan 2014 12:16:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 12:16:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91645409"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 12:16:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 07:16:18 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1b0c-0007NX-Gi;
	Fri, 10 Jan 2014 12:16:18 +0000
Date: Fri, 10 Jan 2014 12:16:18 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jordan Justen <jljusten@gmail.com>
Message-ID: <20140110121618.GH29180@zion.uk.xensource.com>
References: <1389228311-2452-1-git-send-email-jordan.l.justen@intel.com>
	<1389228311-2452-17-git-send-email-jordan.l.justen@intel.com>
	<52CF0966.5090404@redhat.com>
	<CAFe8ug__qAuX_4+2inONeCeqY_fU6oAQxYF9h4QCaHNgEpzwFQ@mail.gmail.com>
	<52CF1C02.2030504@redhat.com>
	<CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com,
	"edk2-devel@lists.sourceforge.net" <edk2-devel@lists.sourceforge.net>,
	Laszlo Ersek <lersek@redhat.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [edk2] [PATCH v4 16/26] OvmfPkg: PlatformPei:
 reserve SEC/PEI temp RAM for S3 resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for bringing this to Xen-devel!

On Thu, Jan 09, 2014 at 03:03:06PM -0800, Jordan Justen wrote:
> On Thu, Jan 9, 2014 at 2:00 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> > On 01/09/14 22:47, Jordan Justen wrote:
> >> On Thu, Jan 9, 2014 at 12:41 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> >>> On 01/09/14 01:45, Jordan Justen wrote:
> >>>> From: Laszlo Ersek <lersek@redhat.com>
> >>>>
> >>>> Contributed-under: TianoCore Contribution Agreement 1.0
> >>>> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
> >>>> [jordan.l.justen@intel.com: move to MemDetect.c; use PCDs]
> >>>
> >>> PCDs are fine of course, but MemDetect() is not called on Xen
> >>> (unless that's the intent, but please explain then).
> >>

Yes, that was the intent IIRC. MemDetect looks at CMOS for the memory
size, which potentially limits the amount of RAM that can be represented.

Another aspect is that the location used in CMOS is not a standardized
one. That didn't draw strong objections, though.
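
The CMOS convention in question can be sketched as follows. To my understanding, QEMU reports RAM above 16 MiB at CMOS offsets 0x34/0x35 in 64 KiB units; that offset pair is a de facto QEMU/firmware convention rather than a standard, which is the point above, and the two-byte field caps how much RAM the encoding can describe:

```python
# Sketch of the CMOS memory-size convention: offsets 0x34 (low byte) and
# 0x35 (high byte) hold the RAM above 16 MiB in 64 KiB units. With only
# 16 bits available, the encoding tops out, which is the sense in which
# it limits the amount of RAM represented.

MIB = 1024 * 1024

def memory_below_4g(cmos_0x34, cmos_0x35):
    """Reconstruct the below-4G RAM size from the two CMOS bytes."""
    units_64k = cmos_0x34 | (cmos_0x35 << 8)
    return 16 * MIB + units_64k * 64 * 1024

assert memory_below_4g(0x00, 0x01) == 32 * MIB  # 16 MiB base + 256 units
```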

> >> I don't think this series claims to enable S3 for Xen, right?
> >>
> >> When someone looks at S3 for Xen, I might try to steer them towards
> >> having Xen call MemDetect again, and branch off for Xen-specific things
> >> within MemDetect. I was not too excited about that aspect of r14946.
> >
> > No, the series doesn't *claim* to do that :), and I didn't test it, but
> > since I could not see any immediate blocker when running on Xen, I
> > figured we should add the feature generally, and then Xen users could
> > happily hunt bugs in the common code. By adding code that doesn't run
> > specifically on Xen we're making that harder.
> 
> I'll try to update this to make a best effort of having S3 potentially
> work for Xen.
> 
> We should probably see if someone from xen-devel can verify that we
> haven't managed to break normal OVMF boots on Xen (aside from the S3
> issue).
> 

Do you have a git tree somewhere? And some critical steps to test this
series?

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:16:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1b0o-0006Aq-K6; Fri, 10 Jan 2014 12:16:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W1b0n-0006Ai-3S
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 12:16:29 +0000
Received: from [193.109.254.147:21143] by server-5.bemta-14.messagelabs.com id
	14/F8-03510-C94EFC25; Fri, 10 Jan 2014 12:16:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389356186!9992607!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30853 invoked from network); 10 Jan 2014 12:16:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 12:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91645416"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 12:16:26 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Fri, 10 Jan 2014 07:16:25 -0500
Received: from [192.168.42.152] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Fri, 10 Jan 2014
	13:16:24 +0100
Message-ID: <52CFE496.5090906@citrix.com>
Date: Fri, 10 Jan 2014 12:16:22 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
In-Reply-To: <20140110114643.GF29180@zion.uk.xensource.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/2014 11:46, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
>>> +    /* We cannot have PoD and PCI device assignment at the same time
>>> +     * for HVM guest. It was reported that VT-d engine cannot
>> There's still a "VT-d" left in here...
>>
> I mentioned AMD as well. Was trying to clarify things a bit more...
>
> Wei.

Use "IOMMU"/"PCI passthrough"/etc. as appropriate; they are vendor-neutral.

There is no way for PoD (or paging, for that matter) to work in
combination with PCI passthrough, as you need the backing RAM for all
gfns to exist to receive DMA.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:16:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1b0j-0006AS-7H; Fri, 10 Jan 2014 12:16:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1b0h-0006AK-As
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 12:16:23 +0000
Received: from [193.109.254.147:2601] by server-3.bemta-14.messagelabs.com id
	5A/30-11000-694EFC25; Fri, 10 Jan 2014 12:16:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389356180!10032019!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2900 invoked from network); 10 Jan 2014 12:16:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 12:16:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91645409"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 12:16:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 07:16:18 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1b0c-0007NX-Gi;
	Fri, 10 Jan 2014 12:16:18 +0000
Date: Fri, 10 Jan 2014 12:16:18 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jordan Justen <jljusten@gmail.com>
Message-ID: <20140110121618.GH29180@zion.uk.xensource.com>
References: <1389228311-2452-1-git-send-email-jordan.l.justen@intel.com>
	<1389228311-2452-17-git-send-email-jordan.l.justen@intel.com>
	<52CF0966.5090404@redhat.com>
	<CAFe8ug__qAuX_4+2inONeCeqY_fU6oAQxYF9h4QCaHNgEpzwFQ@mail.gmail.com>
	<52CF1C02.2030504@redhat.com>
	<CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com,
	"edk2-devel@lists.sourceforge.net" <edk2-devel@lists.sourceforge.net>,
	Laszlo Ersek <lersek@redhat.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [edk2] [PATCH v4 16/26] OvmfPkg: PlatformPei:
 reserve SEC/PEI temp RAM for S3 resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for bringing this to Xen-devel!

On Thu, Jan 09, 2014 at 03:03:06PM -0800, Jordan Justen wrote:
> On Thu, Jan 9, 2014 at 2:00 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> > On 01/09/14 22:47, Jordan Justen wrote:
> >> On Thu, Jan 9, 2014 at 12:41 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> >>> On 01/09/14 01:45, Jordan Justen wrote:
> >>>> From: Laszlo Ersek <lersek@redhat.com>
> >>>>
> >>>> Contributed-under: TianoCore Contribution Agreement 1.0
> >>>> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
> >>>> [jordan.l.justen@intel.com: move to MemDetect.c; use PCDs]
> >>>
> >>> PCDs are fine of course, but MemDetect() is not called on Xen
> >>> (unless that's the intent, but please explain then).
> >>

Yes, that was the intent IIRC. MemDetect reads the memory size from
CMOS, which potentially limits the amount of RAM represented.

Another aspect is that the location used in CMOS is not a standardized
one. That didn't draw strong objections, though.

> >> I don't think this series claims to enable S3 for Xen, right?
> >>
> >> When someone looks at S3 for Xen, I might try to steer them towards
> >> having Xen call MemDetect again, and branch off for Xen-specific things
> >> within MemDetect. I was not too excited about that aspect of r14946.
> >
> > No, the series doesn't *claim* to do that :), and I didn't test it, but
> > since I could not see any immediate blocker when running on Xen, I
> > figured we should add the feature generally, and then Xen users could
> > happily hunt bugs in the common code. By adding code that doesn't run
> > specifically on Xen we're making that harder.
> 
> I'll try to update this to make a best effort of having S3 potentially
> work for Xen.
> 
> We should probably see if someone from xen-devel can verify that we
> haven't managed to break normal OVMF boots on Xen (aside from the S3
> issue).
> 

Do you have a git tree somewhere? And the essential steps to test this
series?

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:22:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1b6E-00072G-NY; Fri, 10 Jan 2014 12:22:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1b6B-000727-Fi
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 12:22:05 +0000
Received: from [85.158.137.68:15408] by server-1.bemta-3.messagelabs.com id
	EA/6F-29598-AE5EFC25; Fri, 10 Jan 2014 12:22:02 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389356520!8413075!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31737 invoked from network); 10 Jan 2014 12:22:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 12:22:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89507292"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 12:22:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 07:21:59 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1b67-0007Sb-E6;
	Fri, 10 Jan 2014 12:21:59 +0000
Date: Fri, 10 Jan 2014 12:21:59 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140110122159.GI29180@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CFE496.5090906@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 12:16:22PM +0000, Andrew Cooper wrote:
> On 10/01/2014 11:46, Wei Liu wrote:
> > On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> >>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> >>> +    /* We cannot have PoD and PCI device assignment at the same time
> >>> +     * for HVM guest. It was reported that VT-d engine cannot
> >> There's still a "VT-d" left in here...
> >>
> > I mentioned AMD as well. Was trying to clarify things a bit more...
> >
> > Wei.
> 
> Use "IOMMU"/"PCI passthrough"/etc. as appropriate; they are vendor-neutral.
> 
> There is no way for PoD (or paging, for that matter) to work in
> combination with PCI passthrough, as you need the backing RAM for all
> gfns to exist to receive DMA.
> 

Thanks for the clarification. I will fix this comment.

Wei.

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 12:29:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 12:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bCs-0007FM-MR; Fri, 10 Jan 2014 12:28:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1bCr-0007FG-8W
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 12:28:57 +0000
Received: from [85.158.143.35:20391] by server-2.bemta-4.messagelabs.com id
	E6/41-11386-887EFC25; Fri, 10 Jan 2014 12:28:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389356935!8259179!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12144 invoked from network); 10 Jan 2014 12:28:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 12:28:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 12:28:54 +0000
Message-Id: <52CFF59402000078001124DC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 12:28:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Wei Liu" <wei.liu2@citrix.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
In-Reply-To: <52CFE496.5090906@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 10/01/2014 11:46, Wei Liu wrote:
>> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
>>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
>>>> +    /* We cannot have PoD and PCI device assignment at the same time
>>>> +     * for HVM guest. It was reported that VT-d engine cannot
>>> There's still a "VT-d" left in here...
>>>
>> I mentioned AMD as well. Was trying to clarify things a bit more...
> 
> Use "IOMMU"/"PCI passthrough"/etc. as appropriate; they are vendor-neutral.
> 
> There is no way for PoD (or paging, for that matter) to work in
> combination with PCI passthrough, as you need the backing RAM for all
> gfns to exist to receive DMA.

That's going a little too far: if IOMMU faults were recoverable, dealing
with non-present pages would become possible (with other caveats,
of course). So this is not a fundamental attribute of IOMMUs, but
there seems to be no reason to believe the current model will change
any time soon for either of the vendors.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:10:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bqK-00022C-6v; Fri, 10 Jan 2014 13:09:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1bqI-000225-QD
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:09:43 +0000
Received: from [85.158.139.211:7196] by server-1.bemta-5.messagelabs.com id
	58/2C-21065-511FFC25; Fri, 10 Jan 2014 13:09:41 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389359379!7793702!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24768 invoked from network); 10 Jan 2014 13:09:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:09:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91658195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:09:19 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Fri, 10 Jan 2014 08:09:19 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkD//7MDgIAAUXVAgADNnYCAABeRYA==
Date: Fri, 10 Jan 2014 13:09:18 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E8C41@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
	<52CFCB7B02000078001123AF@nat28.tlf.novell.com>
In-Reply-To: <52CFCB7B02000078001123AF@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.13.74.31]
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
> > I agree that it _shouldn't_ end up emulating -- but the shadow page fault
> > routine has a ton of code paths that I've never managed to fully grok
> >
> > (As an aside, I've previously looked at other cases where the shadow code
> > ends up emulating instructions that are unexpected that cause VMs to
> hang
> > because the shadow module doesn't have a proper implementation of the
> > x86_emulate callbacks... e.g. if you try to run the old MS Virtual Server
> > product inside a Xen VM that has logdirty enabled it _will_ hard hang).
> 
> Perhaps that's then what really needs fixing?
> 

Well, I don't disagree, but I also think the two problems are orthogonal: the shadow use of x86_emulate is incomplete, but every use of x86_emulate suffers from the problem that copies to and from memory do not follow the definition of the x86 architecture.

I previously looked at fixing the shadow use of x86_emulate, but it's a big job that I don't have the expertise or time to address.

Simon

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:10:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bqK-00022C-6v; Fri, 10 Jan 2014 13:09:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1bqI-000225-QD
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:09:43 +0000
Received: from [85.158.139.211:7196] by server-1.bemta-5.messagelabs.com id
	58/2C-21065-511FFC25; Fri, 10 Jan 2014 13:09:41 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389359379!7793702!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24768 invoked from network); 10 Jan 2014 13:09:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:09:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91658195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:09:19 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL02.citrite.net ([169.254.2.8]) with mapi id 14.02.0342.004;
	Fri, 10 Jan 2014 08:09:19 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkD//7MDgIAAUXVAgADNnYCAABeRYA==
Date: Fri, 10 Jan 2014 13:09:18 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195E8C41@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
	<52CFCB7B02000078001123AF@nat28.tlf.novell.com>
In-Reply-To: <52CFCB7B02000078001123AF@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.13.74.31]
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
> > I agree that it _shouldn't_ end up emulating -- but the shadow page fault
> > routine has a ton of code paths that I've never managed to fully grok
> >
> > (As an aside, I've previously looked at other cases where the shadow code
> > ends up emulating instructions that are unexpected that cause VMs to
> hang
> > because the shadow module doesn't have a proper implementation of the
> > x86_emulate callbacks... e.g. if you try to run the old MS Virtual Server
> > product inside a Xen VM that has logdirty enabled it _will_ hard hang).
> 
> Perhaps that's then what really needs fixing?
> 

Well, I don't disagree, but I also think the two problems are orthogonal: the shadow code's use of x86_emulate is incomplete, and every use of x86_emulate suffers from the problem that its copies to and from memory do not follow the x86 architecture's definition.

I previously looked at fixing the shadow use of x86_emulate but it's a big job that I don't have the expertise or time to address.

Simon

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:11:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bsC-00026t-QS; Fri, 10 Jan 2014 13:11:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1bsB-00026m-8h
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 13:11:39 +0000
Received: from [85.158.139.211:42337] by server-6.bemta-5.messagelabs.com id
	65/A0-16310-A81FFC25; Fri, 10 Jan 2014 13:11:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389359495!9012776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30054 invoked from network); 10 Jan 2014 13:11:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:11:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89519108"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:11:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:11:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1bs0-0001lG-32;
	Fri, 10 Jan 2014 13:11:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1brz-0005Gn-OC;
	Fri, 10 Jan 2014 13:11:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24332-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 13:11:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5874940318360099055=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5874940318360099055==
Content-Type: text/plain

flight 24332 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24332/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build        fail in 24322 REGR. vs. 22383

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore   fail pass in 24322
 test-armhf-armhf-xl           6 capture-logs(6)  broken in 24322 pass in 24332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24322 never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau MonnÃ© <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard-to-track-down VM corruption issue caused by exactly
    this.  Xenconsoled would connect to a junk mfn and increment the ring
    pointer if the junk happened to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libxl: xl uptime doesn't require an argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should be to list the uptime of each domain when no
    parameters are given.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix the usage of CHK_ERRNO in do_daemonize and remove a bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd against -1 no longer detects failure, because
    libxl__create_qemu_logfile now returns ERROR_FAIL (-3) on failure.
    
    While there, also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures in this function is rather
    difficult given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in
    the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before the multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============5874940318360099055==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5874940318360099055==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:11:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bsC-00026t-QS; Fri, 10 Jan 2014 13:11:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1bsB-00026m-8h
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 13:11:39 +0000
Received: from [85.158.139.211:42337] by server-6.bemta-5.messagelabs.com id
	65/A0-16310-A81FFC25; Fri, 10 Jan 2014 13:11:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389359495!9012776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30054 invoked from network); 10 Jan 2014 13:11:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:11:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89519108"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:11:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:11:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1bs0-0001lG-32;
	Fri, 10 Jan 2014 13:11:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1brz-0005Gn-OC;
	Fri, 10 Jan 2014 13:11:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24332-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 13:11:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5874940318360099055=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5874940318360099055==
Content-Type: text/plain

flight 24332 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24332/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build        fail in 24322 REGR. vs. 22383

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore   fail pass in 24322
 test-armhf-armhf-xl           6 capture-logs(6)  broken in 24322 pass in 24332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24322 never pass

version targeted for testing:
 xen                  6d7b67c67039ceac36a780b59c2b890739094b95
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau MonnÃ© <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d7b67c67039ceac36a780b59c2b890739094b95
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:47 2013 +0000

    tools/xc_restore: Initialise console and store mfns
    
    If the console or store mfn chunks are not present in the migration stream,
    stack junk gets reported for the mfns.
    
    XenServer had a very hard to track down VM corruption issue caused by exactly
    this issue.  Xenconsoled would connect to a junk mfn and incremented the ring
    pointer if the junk happend to look like a valid gfn.
    
    Coverity ID: 1056093 1056094
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 592b614f3469bb83d1158c3dc8c15b67aacfbf4f)

commit a859a20735421164b718136d6134b4385235d48e
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 9 12:29:54 2014 +0000

    QEMU_TAG update

commit 0c815a0e5308aa5048e5c9959eeb9836917cf17e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Dec 19 16:45:03 2013 +0000

    tools/libx: xl uptime doesn't require argument
    
    The current behavior is:
    
    42sh> xl uptime
    'xl uptime' requires at least 1 argument.
    
    Usage: xl [-v] uptime [-s] [Domain]
    
    The normal behavior should list uptime for each domain when there is no
    parameters.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3533972f6d423e71533ffbce5cb9d84bd1a9a674)

commit 014f9219f1dca3ee92948f0cfcda8d1befa6cbcd
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sat Nov 30 13:20:04 2013 +1300

    xenstore: sanity check incoming message body lengths
    
    This is for the client-side receiving messages from xenstored, so there
    is no security impact, unlike XSA-72.
    
    Coverity-ID: 1055449
    Coverity-ID: 1056028
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 8da1ed9031341381c218b7e6eaab5b4f239a327b)

commit cfa252b05855a712eda0da80cd638c7093ddf89f
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:03 2013 +1300

    libxl: don't leak pcidevs in libxl_pcidev_assignable
    
    Coverity-ID: 1055896
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 26b35b9ace97f433fcf4c5dfbdfb573d1075255f)

commit d41c205e0173ee923e791c2fd320c7eb25f2e9cb
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:01 2013 +1300

    libxl: don't leak output vcpu info on error in libxl_list_vcpu
    
    Coverity-ID: 1055887
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 3c113a57f55dc4e36e3552342721db01efa832c6)

commit 62f88c08b31259032c81163f4133d6f25f033c1e
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Sun Dec 1 23:15:00 2013 +1300

    libxl: actually abort if initializing a ctx's lock fails
    
    If initializing the ctx's lock fails, don't keep going, but instead
    error out.
    
    Coverity-ID: 1055289
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit b1cb2bdde1f2393d75a925e6c15862b93d3e7abd)

commit c393ff09ade45d1a2a8f1c12eac5eab4d38947a3
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:09 2013 +0100

    xl: fixes for do_daemonize
    
    Fix usage of CHK_ERRNO in do_daemonize and also remove the usage of a
    bogus for(;;).
    
    Coverity-ID: 1130516 and 1130520
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit ed8c9047f6fc6d28fc27d37576ec8c8c1be68efe)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c

commit 8f1bd27fcd7f8be1353e7309f450283f3e5f7cd0
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Fri Nov 22 12:54:08 2013 +0100

    libxl: fix fd check in libxl__spawn_local_dm
    
    Checking the logfile_w fd against -1 to detect failure is no longer
    correct, because libxl__create_qemu_logfile now returns ERROR_FAIL
    (which is -3) on failure.
    
    While there also add an error check for opening /dev/null.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Cc: Andrew Cooper <andrew.cooper3@citrix.com>
    (cherry picked from commit 3b88d95e9c0a5ff91d5b60e94d81f1982af57e7f)
    
    Conflicts:
    	tools/libxl/libxl_dm.c

commit 4cbbbdfb775d387dc1e0931b44e14d3205c92265
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:50 2013 +0000

    tools/libxl: Avoid deliberate NULL pointer dereference
    
    Coverity ID: 1055290
    
    Calling LIBXL__LOG_ERRNO(ctx,) with a ctx pointer we have just failed to
    allocate is going to end badly.  Opencode a suitable use of xtl_log() instead.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 1677af03c14f2d8d88d2ed9ed8ce6d4906d19fb4)

commit a5febe4aeff4ab80ce0411f63f336c25951098cf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:05:49 2013 +0000

    tools/libxc: Improve xc_dom_malloc_filemap() error handling
    
    Coverity ID 1055563
    
    In the original function, mmap() could be called with a length of -1 if the
    second lseek failed and the caller had not provided max_size.
    
    While fixing up this error, improve the logging of other error paths.  I know
    from personal experience that debugging failures in this function is rather
    given only "xc_dom_malloc_filemap: failed (on file <somefile>)" in the logs.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit c635c1ef7833e7505423f6567bf99bd355101587)
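[Editor's note: the pitfall described above can be sketched like this. The helper is made up for illustration and is not libxc's code; the underlying facts are that lseek() reports failure as (off_t)-1, and feeding that straight into a length parameter yields an enormous unsigned size.]

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Made-up helper illustrating the bug class: derive an mmap() length from
 * an lseek() result, fall back to a caller-supplied max_size, and return 0
 * when neither is usable so the caller can error out instead of asking
 * mmap() for (size_t)-1 bytes. */
static size_t pick_map_length(off_t seek_result, size_t max_size)
{
    if (seek_result != (off_t)-1)
        return (size_t)seek_result;   /* lseek told us the file size */
    return max_size;                  /* 0 here means: fail, don't mmap */
}
```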

commit 6f6d936af8acb7d9e36b70e5e70953f695ca3b36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:39 2013 +0000

    tools/xenconsoled: Fix file handle leaks
    
    Coverity ID: 715218 1055876 1055877
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit 9ab1792e1ce9e77afe2cd230d69e56a0737a735f)

commit 74cd17f84649012bec7ce484bf7b9c3f3a9e79ae
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:06:38 2013 +0000

    tools/xenconsole: Use xc_domain_getinfo() correctly
    
    Coverity ID: 1055018
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    (cherry picked from commit aa344500a3bfceb3ef01931609ac1cfaf6dcf52d)

commit 2de748569f827b037ec10104f7c12f44d01d0ffa
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:12:51 2013 +0000

    tools/libxl: Fix integer overflows in sched_sedf_domain_set()
    
    Coverity ID: 1055662 1055663 1055664
    
    Widen from int to uint64_t before multiplication, rather than afterwards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit 9c01516fee7d548af58fd310d3c93dd71ea9ea28)
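[Editor's note: the overflow pattern being fixed is easy to reproduce. This is an illustrative sketch with made-up values, not libxl's code; unsigned arithmetic is used so the wraparound is well-defined.]

```c
#include <assert.h>
#include <stdint.h>

/* Widening only the destination does not help: the multiplication in
 * scale_wrong() still happens in 32-bit arithmetic and wraps before the
 * cast.  Widening one operand first, as in scale_right(), performs the
 * whole product in 64 bits. */
static uint64_t scale_wrong(unsigned int v)
{
    return (uint64_t)(v * 1000u);   /* product wraps at 2^32 */
}

static uint64_t scale_right(unsigned int v)
{
    return (uint64_t)v * 1000u;     /* product computed in 64 bits */
}
```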

commit 338a8b13757d6ef36ff4e321cb4ef4190ba6ec02
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Nov 25 11:16:48 2013 +0000

    tools/libxl: Fix memory leak in sched_domain_output()
    
    Coverity ID: 1055904
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    CC: Keir Fraser <keir@xen.org>
    CC: Jan Beulich <JBeulich@suse.com>
    (cherry picked from commit 0792426b798fd3b39909d618cf8fe8bac30594f4)
    
    Conflicts:
    	tools/libxl/xl_cmdimpl.c
(qemu changes not included)


--===============5874940318360099055==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5874940318360099055==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:12:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bsp-0002Bq-FT; Fri, 10 Jan 2014 13:12:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1bso-0002Bb-4f
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:12:18 +0000
Received: from [85.158.143.35:30725] by server-2.bemta-4.messagelabs.com id
	25/49-11386-1B1FFC25; Fri, 10 Jan 2014 13:12:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389359535!10905238!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30077 invoked from network); 10 Jan 2014 13:12:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:12:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89519305"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:12:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:12:14 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W1bsj-0001lo-Tb;
	Fri, 10 Jan 2014 13:12:13 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 13:12:13 +0000
Message-ID: <1389359533-18669-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] make-flight: override the Debian guest
	suite on armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We already override the host Debian suite, but we need to override the guest
suite too (armhf doesn't exist in squeeze).

Consolidate the Debian guest runvars into one place and add a debian_suite
runvar when appropriate.
---
 make-flight | 41 ++++++++++++++++-------------------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/make-flight b/make-flight
index 7ac84b4..a13b38e 100755
--- a/make-flight
+++ b/make-flight
@@ -30,6 +30,7 @@ flight=`./cs-flight-create $blessing $branch`
 . cri-common
 
 defsuite=`getconfig DebianSuite`
+defguestsuite=`getconfig GuestDebianSuite`
 
 xenrt_images=/usr/groups/images/autoinstall
 
@@ -324,8 +325,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
   esac
 
   case "$xenarch" in
-  armhf) suite="wheezy";;
-  *)     suite=$defsuite;;
+  armhf) suite="wheezy";  guestsuite="wheezy";;
+  *)     suite=$defsuite; guestsuite=$defguestsuite;;
   esac
 
   if [ $suite != $defsuite ] ; then
@@ -372,6 +373,11 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       if [ x$kern = x-xcpkern -a $dom0arch != i386 ]; then continue; fi
 
+      debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
+      if [ $guestsuite != $defguestsuite ] ; then
+          debian_runvars="$debian_runvars debian_suite=$guestsuite"
+      fi
+
       most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
 
       most_runvars="
@@ -383,23 +389,17 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       if [ $dom0arch = armhf ]; then
 	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
 		$xenarch $dom0arch					  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		$debian_runvars all_hostflags=$most_hostflags
 	  continue
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
 		$xenarch $dom0arch					  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		$debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
 		$xenarch $dom0arch					  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		$debian_runvars all_hostflags=$most_hostflags
 
       if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
@@ -487,8 +487,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
 		$onetoolstack $xenarch $dom0arch \
                 !host !host_hostflags \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
+		$debian_runvars \
 		all_hostflags=$most_hostflags,equiv-1
 
       if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
@@ -498,9 +497,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
            test-debian xl $xenarch $dom0arch \
 		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		$debian_runvars all_hostflags=$most_hostflags
 
        done
 
@@ -510,16 +507,12 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
                         test-debian xl $xenarch $dom0arch guests_vcpus=4  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		        $debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
            test-debian xl $xenarch $dom0arch				  \
 		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		all_hostflags=$most_hostflags
+		$debian_runvars all_hostflags=$most_hostflags
 
       fi
 
@@ -530,9 +523,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
                         test-debian-nomigr xl $xenarch $dom0arch	  \
 		guests_vcpus=4						  \
-		debian_kernkind=$kernkind				  \
-		debian_arch=$dom0arch   				  \
-		debian_pcipassthrough_nic=host				  \
+		$debian_runvars debian_pcipassthrough_nic=host		  \
 		all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
 
         done
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:16:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1bwt-0002RM-8R; Fri, 10 Jan 2014 13:16:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1bwr-0002RH-RK
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:16:29 +0000
Received: from [85.158.143.35:44300] by server-3.bemta-4.messagelabs.com id
	C3/31-32360-DA2FFC25; Fri, 10 Jan 2014 13:16:29 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389359786!3814578!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11609 invoked from network); 10 Jan 2014 13:16:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91660148"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:16:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 08:16:25 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1bwn-0008LR-HN;
	Fri, 10 Jan 2014 13:16:25 +0000
Date: Fri, 10 Jan 2014 13:16:25 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140110131625.GJ29180@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
	<20140109153015.GF12164@zion.uk.xensource.com>
	<52CFDAEC.5080708@citrix.com>
	<20140110114534.GE29180@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110114534.GE29180@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 11:45:34AM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 11:35:08AM +0000, Zoltan Kiss wrote:
> [...]
> > 
> > >>@@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> > >>  	err = gop->status;
> > >>  	if (unlikely(err))
> > >>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> > >>+	else {
> > >>+		if (vif->grant_tx_handle[pending_idx] !=
> > >>+			NETBACK_INVALID_HANDLE) {
> > >>+			netdev_err(vif->dev,
> > >>+				"Stale mapped handle! pending_idx %x handle %x\n",
> > >>+				pending_idx, vif->grant_tx_handle[pending_idx]);
> > >>+			BUG();
> > >>+		}
> > >>+		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
> > >>+			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
> > >
> > >What happens when you don't have this?
> > Your frags will be filled with garbage. I don't understand exactly
> > what this function does; someone might want to enlighten us? I've
> > taken its usage from the classic kernel.
> > Also, it might be worthwhile to check the return value and BUG if
> > it's false, but I don't know what exactly that return value means.
> > 
> 
> This is actually part of gnttab_map_refs. As you're using the hypercall
> directly, this becomes very fragile.
> 

To make it clear, set_phys_to_machine is done within m2p_add_override.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 10 13:19:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1c0A-0003Bl-U5; Fri, 10 Jan 2014 13:19:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1c08-0003Be-Rb
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:19:52 +0000
Received: from [193.109.254.147:57342] by server-7.bemta-14.messagelabs.com id
	2F/F0-15500-873FFC25; Fri, 10 Jan 2014 13:19:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389359991!10008131!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15464 invoked from network); 10 Jan 2014 13:19:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 13:19:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 13:19:50 +0000
Message-Id: <52D00184020000780011255E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 13:19:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ian.jackson@eu.citrix.com>
References: <osstest-24332-mainreport@xen.org>
In-Reply-To: <osstest-24332-mainreport@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 14:11, xen.org <ian.jackson@eu.citrix.com> wrote:
> flight 24332 xen-4.3-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24332/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64-oldkern           4 xen-build        fail in 24322 REGR. vs. 22383

So this succeeded in this run, yet blocks the push?

> Tests which are failing intermittently (not blocking):
>  test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore   fail pass in 
> 24322
>  test-armhf-armhf-xl           6 capture-logs(6)  broken in 24322 pass in 
> 24332
> 
> Regressions which are regarded as allowable (not blocking):
>  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect

Whereas this one (which iirc blocked the previous two test runs
from doing a push) is now considered allowable?

I'm confused...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:45:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:45:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cOR-0004hN-5B; Fri, 10 Jan 2014 13:44:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W1cOP-0004hI-Nm
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:44:57 +0000
Received: from [85.158.139.211:50201] by server-14.bemta-5.messagelabs.com id
	FA/96-24200-859FFC25; Fri, 10 Jan 2014 13:44:56 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389361494!9021703!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17530 invoked from network); 10 Jan 2014 13:44:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:44:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89528096"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:44:54 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Fri, 10 Jan 2014 08:44:54 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Fri, 10 Jan 2014 14:44:52 +0100
From: Rob Hoes <Rob.Hoes@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Thread-Topic: [PATCH v4 3/3] libxl: ocaml: use 'for_app_registration' in
	osevent callbacks
Thread-Index: AQHPDWOAoL1o/UTVNU29YXFGm/XPUJp8ooaAgAEb4KA=
Date: Fri, 10 Jan 2014 13:44:52 +0000
Message-ID: <360717C0B01E6345BCBE64B758E22C2D1DBF8B@AMSPEX01CL03.citrite.net>
References: <1389197863-30692-1-git-send-email-rob.hoes@citrix.com>
	<1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
	<21198.59311.539848.932465@mariner.uk.xensource.com>
In-Reply-To: <21198.59311.539848.932465@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Dave Scott <Dave.Scott@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 3/3] libxl: ocaml: use
 'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Rob Hoes writes ("[PATCH v4 3/3] libxl: ocaml: use 'for_app_registration'
> in osevent callbacks"):
> > This allows the application to pass a token to libxl in the fd/timeout
> > registration callbacks, which it receives back in modification or
> > deregistration callbacks.
> ...
> >  int fd_register(void *user, int fd, void **for_app_registration_out,
> >                       short events, void *for_libxl)  {
> ...
> > -	caml_callbackN(*func, 4, args);
> > +	for_app = malloc(sizeof(value));
> ...
> > +	*for_app = caml_callbackN_exn(*func, 4, args);
> > +	if (Is_exception_result(*for_app)) {
> > +		ret = ERROR_OSEVENT_REG_FAIL;
> > +		goto err;
> 
> Doesn't this leak for_app ?  ISTR spotting this before but perhaps I
> forgot to mention it.

Yep, I forgot a "free" there...
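A minimal sketch of the fix being agreed on here: on the exception path the
malloc'ed slot has to be freed before jumping to the error exit, otherwise it
leaks. Names like fd_register_sketch and the fake callback are stand-ins for
illustration, not the real libxl/ocaml glue:

```c
#include <stdlib.h>

/* Stand-in for libxl's real error code. */
#define ERROR_OSEVENT_REG_FAIL (-26)

/* Pretend ocaml callback: nonzero simulates a raised exception. */
static int fake_callback_raises(void) { return 1; }

int fd_register_sketch(void **for_app_registration_out)
{
    int ret = 0;
    long *for_app = malloc(sizeof(*for_app));
    if (!for_app)
        return ERROR_OSEVENT_REG_FAIL;

    if (fake_callback_raises()) {
        ret = ERROR_OSEVENT_REG_FAIL;
        free(for_app);          /* the "free" that was forgotten */
        goto err;
    }
    *for_app_registration_out = for_app;
err:
    return ret;
}
```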

> > +err:
> >  	CAMLdone;
> >  	caml_enter_blocking_section();
> > -	return 0;
> > +	return ret;
> >  }
> 
> 
> And:
> 
> >  int timeout_register(void *user, void **for_app_registration_out,
> >                            struct timeval abs, void *for_libxl)
> ...
> > +	caml_register_global_root(&handles->for_app);
> ...
> > +	*for_app_registration_out = handles;
> >  }
> >
> >  int timeout_modify(void *user, void **for_app_registration_update,
> ...
> > +	handles->for_app = for_app_update;
> > +
> 
> This is allowed, then ?  (Updating foo when &foo has been registered as a
> global root.)  I guess so.

Yes, I believe so.

foo is of type "value", which is typically a pointer to something on the
OCaml heap. When the GC runs, it may move the thing on the heap. You
therefore need to give the GC the address of foo, so it can update foo
itself (the heap pointer). Between GC runs, you can change foo to point to
something else on the heap. This just means that what foo used to point to
may now be GC'ed (unless it is referred to by another root).
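As a toy analogy (plain C, not the OCaml runtime): because the collector is
handed the *address* of the root, it can rewrite the root when it moves the
object, and reassigning the root variable between collections simply
re-points it at a different object. All names here are illustrative:

```c
#include <stddef.h>

#define MAX_ROOTS 4
static long **roots[MAX_ROOTS];   /* addresses of registered roots */
static int nroots;

/* Analogue of caml_register_global_root: remember &foo, not foo. */
static void toy_register_global_root(long **root)
{
    roots[nroots++] = root;
}

/* "Move" an object from *from to *to and fix up every registered
 * root that pointed at the old location, as a moving GC would. */
static void toy_gc_move(long *from, long *to)
{
    *to = *from;
    for (int i = 0; i < nroots; i++)
        if (*roots[i] == from)
            *roots[i] = to;
}
```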

> Finally:
> 
> >  value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
> > {
> 
> Calling this formal parameter "for_libxl" is confusing.  It's actually the
> value passed to the ocaml register function, ie handles but with a
> different type, and not "for_libxl" at all.

Ok, I'll rename it.

> 
> Nearly there ... the rest is fine!
> 
> Thanks,
> Ian.

Cheers,
Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:46:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cPn-0004l9-Le; Fri, 10 Jan 2014 13:46:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1cPm-0004kz-6P
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:46:22 +0000
Received: from [85.158.139.211:56806] by server-16.bemta-5.messagelabs.com id
	F8/FE-11843-DA9FFC25; Fri, 10 Jan 2014 13:46:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389361579!9038348!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5552 invoked from network); 10 Jan 2014 13:46:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:46:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91667456"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:46:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:46:18 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cPi-0001x1-0i;
	Fri, 10 Jan 2014 13:46:18 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cPh-0005NC-QP;
	Fri, 10 Jan 2014 13:46:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.63913.652989.23779@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 13:46:17 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1389359533-18669-1-git-send-email-ian.campbell@citrix.com>
References: <1389359533-18669-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] make-flight: override the Debian
	guest suite on armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] make-flight: override the Debian guest suite on armhf"):
> We already override the host Debian suite but we need to override the guest too
> (since armhf doesn't exist in squeeze).
> 
> Consolidate the Debian guest vars into one place and add the
> debian_suite runvar when appropriate.

LGTM.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Please do push this right away if you haven't already.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cRd-0004u8-7P; Fri, 10 Jan 2014 13:48:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1cRb-0004tv-L5
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:48:15 +0000
Received: from [85.158.143.35:51808] by server-2.bemta-4.messagelabs.com id
	23/82-11386-E1AFFC25; Fri, 10 Jan 2014 13:48:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389361692!10916746!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20337 invoked from network); 10 Jan 2014 13:48:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:48:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89529191"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:48:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:48:10 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cRW-0001xU-4w;
	Fri, 10 Jan 2014 13:48:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cRV-0005Nf-Ue;
	Fri, 10 Jan 2014 13:48:09 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.64025.811465.724716@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 13:48:09 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52D00184020000780011255E@nat28.tlf.novell.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL"):
> On 10.01.14 at 14:11, xen.org <ian.jackson@eu.citrix.com> wrote:
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-amd64-oldkern        4 xen-build     fail in 24322 REGR. vs. 22383
> 
> So this succeeded in this run, yet blocks the push?

Yes, because it needed to consider flight 24322 to justify the next
two failures:

> > Tests which are failing intermittently (not blocking):
> >  test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore   fail pass in 
> > 24322
> >  test-armhf-armhf-xl           6 capture-logs(6)  broken in 24322 pass in 
> > 24332

And if it disregarded the build failure in 24322 then it might fail to
pay proper attention to an actual regression which was masked
(blocked) by the build failure.

> > Regressions which are regarded as allowable (not blocking):
> >  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect
> 
> Whereas this one (which iirc blocked the previous two test runs
> from doing a push) is now considered allowable?

Yes, because 24336 proves it doesn't happen consistently.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cRe-0004uO-Ji; Fri, 10 Jan 2014 13:48:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1cRc-0004u1-HB
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:48:16 +0000
Received: from [85.158.143.35:51875] by server-2.bemta-4.messagelabs.com id
	B7/82-11386-F1AFFC25; Fri, 10 Jan 2014 13:48:15 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389361694!10962679!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14891 invoked from network); 10 Jan 2014 13:48:15 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:48:15 -0000
Received: by mail-wi0-f179.google.com with SMTP id z2so4843702wiv.12
	for <xen-devel@lists.xenproject.org>;
	Fri, 10 Jan 2014 05:48:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=iUEAVGUEHeyMkzIGi6MpyQnITl/jTV3ikJm/7GxYrMY=;
	b=igCig8M+4NV+GhacrTB5y4M6LLqwPuNrHMGwCLg9/0LmKPdbEAjXypoPMyRyBBhQFH
	0ah8JqpXnBlGuBHwwAKw2WTV6OEasgkjWJkb5xuh0nGTBNl0iwgTsIPTvwqe1UtvOu5j
	sKa1XXZrYbrnqB0ShrLBrO0Frtk8KkPLCaBGIPKGBwN4FoDrsETbElHnsBHk9w/jEGZk
	Md7qdGQXSsdW7VHa10EGFwU/wkwXL/CtHKFsJC2wKhPeiJEm9bXcVkcohvEfxt1jBVuH
	MVJ/zJsILxFqFyYCd4u5UcO50W7Zp4uRPQ8m9svmmUoh6F9U31ABflNv5OJ+prRdX2hX
	afdA==
X-Gm-Message-State: ALoCoQnfP6YqbMWFzhLUoF53+E/tImae/sX2/rI59lldlet5uDX7MjcTgXP2h65UWup/p9wRd10w
X-Received: by 10.194.60.103 with SMTP id g7mr8595829wjr.37.1389361694628;
	Fri, 10 Jan 2014 05:48:14 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm4173808wjc.5.2014.01.10.05.48.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 10 Jan 2014 05:48:13 -0800 (PST)
Message-ID: <52CFFA1C.7000500@linaro.org>
Date: Fri, 10 Jan 2014 13:48:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
	<1389347521.19142.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1389347521.19142.9.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/10/2014 09:52 AM, Ian Campbell wrote:
> On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
>> Scrubbing of heap pages was disabled because it was slow on the models. Now that
>> Xen supports real hardware, scrubbing can be enabled by default.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks.


>> ---
>>      This patch should go to Xen 4.4. It avoids handing non-cleared pages to
>>      a domain.
>>
>>      The downside is it's now slow on models.
>
> There is a no-bootscrub command-line option which can be used in that
> case. Could you update the relevant model wiki pages to mention it
> please?

I have updated the wiki page.

>
>>      The current implementation of scrub_heap_pages loops over every page in the
>>      frametable. On ARM, only RAM can contain MMIO regions. We are safe
>>      because, when the frametable is initialized, those pages are marked in-use,
>>      so the function won't clear them.
>
> I don't think this behaviour is specific to ARM, x86 has MMIO regions
> mixed in with RAM as well.

I was not sure, so I preferred to explain why it's OK.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cRd-0004u8-7P; Fri, 10 Jan 2014 13:48:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1cRb-0004tv-L5
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:48:15 +0000
Received: from [85.158.143.35:51808] by server-2.bemta-4.messagelabs.com id
	23/82-11386-E1AFFC25; Fri, 10 Jan 2014 13:48:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389361692!10916746!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20337 invoked from network); 10 Jan 2014 13:48:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:48:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89529191"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:48:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:48:10 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cRW-0001xU-4w;
	Fri, 10 Jan 2014 13:48:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cRV-0005Nf-Ue;
	Fri, 10 Jan 2014 13:48:09 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.64025.811465.724716@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 13:48:09 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52D00184020000780011255E@nat28.tlf.novell.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL"):
> On 10.01.14 at 14:11, xen.org <ian.jackson@eu.citrix.com> wrote:
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-amd64-oldkern        4 xen-build     fail in 24322 REGR. vs. 22383
> 
> So this succeeded in this run, yet blocks the push?

Yes, because it needed to consider flight 24322 to justify the next
two failures:

> > Tests which are failing intermittently (not blocking):
> >  test-amd64-amd64-xl-qemuu-win7-amd64  8 guest-saverestore   fail pass in 
> > 24322
> >  test-armhf-armhf-xl           6 capture-logs(6)  broken in 24322 pass in 
> > 24332

And if it disregarded the build failure in 24322 then it might fail to
pay proper attention to an actual regression which was masked
(blocked) by the build failure.

> > Regressions which are regarded as allowable (not blocking):
> >  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect
> 
> Whereas this one (which iirc blocked the previous two test runs
> from doing a push) is now considered allowable?

Yes, because 24336 proves it doesn't happen consistently.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cVU-0005q7-I7; Fri, 10 Jan 2014 13:52:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Rob.Hoes@citrix.com>) id 1W1cVT-0005q1-9W
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:52:15 +0000
Received: from [85.158.143.35:28307] by server-2.bemta-4.messagelabs.com id
	D6/09-11386-E0BFFC25; Fri, 10 Jan 2014 13:52:14 +0000
X-Env-Sender: Rob.Hoes@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389361932!8282314!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13822 invoked from network); 10 Jan 2014 13:52:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:52:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89530268"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:52:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 08:52:11 -0500
Received: from [10.80.3.142] (helo=cuijk.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<rob.hoes@citrix.com>)	id 1W1cVP-0000PC-DD;
	Fri, 10 Jan 2014 13:52:11 +0000
From: Rob Hoes <rob.hoes@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 10 Jan 2014 13:52:04 +0000
Message-ID: <1389361925-10790-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
References: <1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Rob Hoes <rob.hoes@citrix.com>, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, dave.scott@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 3/3] libxl: ocaml: use 'for_app_registration'
	in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows the application to pass a token to libxl in the fd/timeout
registration callbacks, which it receives back in modification or
deregistration callbacks.

It turns out that this is essential for timeout handling, in order to
identify which timeout to change on a modify event.

Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
Acked-by: David Scott <dave.scott@eu.citrix.com>

---
v2:
* assert if for_app == NULL
* catch any exceptions from callbacks
* use goto-style error handling ;)

v3:
* for timeouts, cleanup for_app in occurred_timeout (not in
  timeout_deregister)
* improve comments
* abort in fd_deregister when the app raises an exception

v4:
* made for_app a value inside the handles struct, rather than a value*
* ensure handles are cleaned up in the error path
* rename timeout_modify to timeout_fire_now on the ocaml side

v5:
* add a missing "free"
* rename the for_libxl param of stub_libxl_occurred_timeout to handles
---
 tools/ocaml/libs/xl/xenlight.ml.in   |    4 +-
 tools/ocaml/libs/xl/xenlight.mli.in  |   10 +--
 tools/ocaml/libs/xl/xenlight_stubs.c |  149 +++++++++++++++++++++++++++++-----
 3 files changed, 136 insertions(+), 27 deletions(-)

diff --git a/tools/ocaml/libs/xl/xenlight.ml.in b/tools/ocaml/libs/xl/xenlight.ml.in
index 47f3487..80e620a 100644
--- a/tools/ocaml/libs/xl/xenlight.ml.in
+++ b/tools/ocaml/libs/xl/xenlight.ml.in
@@ -68,12 +68,12 @@ module Async = struct
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
 	external osevent_occurred_timeout : ctx -> for_libxl -> unit = "stub_libxl_osevent_occurred_timeout"
 
-	let osevent_register_hooks ctx ~user ~fd_register ~fd_modify ~fd_deregister ~timeout_register ~timeout_modify =
+	let osevent_register_hooks ctx ~user ~fd_register ~fd_modify ~fd_deregister ~timeout_register ~timeout_fire_now =
 		Callback.register "libxl_fd_register" fd_register;
 		Callback.register "libxl_fd_modify" fd_modify;
 		Callback.register "libxl_fd_deregister" fd_deregister;
 		Callback.register "libxl_timeout_register" timeout_register;
-		Callback.register "libxl_timeout_modify" timeout_modify;
+		Callback.register "libxl_timeout_fire_now" timeout_fire_now;
 		osevent_register_hooks' ctx user
 
 	let async_register_callback ~async_callback =
diff --git a/tools/ocaml/libs/xl/xenlight.mli.in b/tools/ocaml/libs/xl/xenlight.mli.in
index b9819e1..b2c06b5 100644
--- a/tools/ocaml/libs/xl/xenlight.mli.in
+++ b/tools/ocaml/libs/xl/xenlight.mli.in
@@ -68,11 +68,11 @@ module Async : sig
 
 	val osevent_register_hooks : ctx ->
 		user:'a ->
-		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> unit) ->
-		fd_modify:('a -> Unix.file_descr -> event list -> unit) ->
-		fd_deregister:('a -> Unix.file_descr -> unit) ->
-		timeout_register:('a -> int64 -> int64 -> for_libxl -> unit) ->
-		timeout_modify:('a -> unit) ->
+		fd_register:('a -> Unix.file_descr -> event list -> for_libxl -> 'b) ->
+		fd_modify:('a -> Unix.file_descr -> 'b -> event list -> 'b) ->
+		fd_deregister:('a -> Unix.file_descr -> 'b -> unit) ->
+		timeout_register:('a -> int64 -> int64 -> for_libxl -> 'c) ->
+		timeout_fire_now:('a -> 'c -> 'c) ->
 		osevent_hooks
 
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2e2606a..23f253a 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -31,6 +31,7 @@
 #include <libxl_utils.h>
 
 #include <unistd.h>
+#include <assert.h>
 
 #include "caml_xentoollog.h"
 
@@ -1211,14 +1212,20 @@ value Val_poll_events(short events)
 	CAMLreturn(event_list);
 }
 
+/* The process for dealing with the for_app_registration_  values in the
+ * callbacks below (GC registrations etc) is similar to the way for_callback is
+ * handled in the asynchronous operations above. */
+
 int fd_register(void *user, int fd, void **for_app_registration_out,
                      short events, void *for_libxl)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1230,10 +1237,26 @@ int fd_register(void *user, int fd, void **for_app_registration_out,
 	args[2] = Val_poll_events(events);
 	args[3] = (value) for_libxl;
 
-	caml_callbackN(*func, 4, args);
+	for_app = malloc(sizeof(value));
+	if (!for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		free(for_app);
+		goto err;
+	}
+
+	caml_register_global_root(for_app);
+	*for_app_registration_out = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int fd_modify(void *user, int fd, void **for_app_registration_update,
@@ -1241,9 +1264,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 3);
+	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1252,21 +1280,37 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
-	args[2] = Val_poll_events(events);
+	args[2] = *for_app;
+	args[3] = Val_poll_events(events);
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
 
-	caml_callbackN(*func, 3, args);
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void fd_deregister(void *user, int fd, void *for_app_registration)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 2);
+	CAMLlocalN(args, 3);
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = for_app_registration;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1275,12 +1319,26 @@ void fd_deregister(void *user, int fd, void *for_app_registration)
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
+	args[2] = *for_app;
+
+	caml_callbackN_exn(*func, 3, args);
+	/* This hook does not return error codes, so the best thing we can do
+	 * to avoid trouble, if we catch an exception from the app, is abort. */
+	if (Is_exception_result(*for_app))
+		abort();
+
+	caml_remove_global_root(for_app);
+	free(for_app);
 
-	caml_callbackN(*func, 2, args);
 	CAMLdone;
 	caml_enter_blocking_section();
 }
 
+struct timeout_handles {
+	void *for_libxl;
+	value for_app;
+};
+
 int timeout_register(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl)
 {
@@ -1288,8 +1346,10 @@ int timeout_register(void *user, void **for_app_registration_out,
 	CAMLparam0();
 	CAMLlocal2(sec, usec);
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1299,15 +1359,36 @@ int timeout_register(void *user, void **for_app_registration_out,
 	sec = caml_copy_int64(abs.tv_sec);
 	usec = caml_copy_int64(abs.tv_usec);
 
+	/* This struct of "handles" will contain "for_libxl" as well as "for_app".
+	 * We'll give a pointer to the struct to the app, and get it back in
+	 * occurred_timeout, where we can clean it all up. */
+	handles = malloc(sizeof(*handles));
+	if (!handles) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_libxl = for_libxl;
+
 	args[0] = *p;
 	args[1] = sec;
 	args[2] = usec;
-	args[3] = (value) for_libxl;
+	args[3] = (value) handles;
 
-	caml_callbackN(*func, 4, args);
+	handles->for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(handles->for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		free(handles);
+		goto err;
+	}
+
+	caml_register_global_root(&handles->for_app);
+	*for_app_registration_out = handles;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int timeout_modify(void *user, void **for_app_registration_update,
@@ -1315,25 +1396,49 @@ int timeout_modify(void *user, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
+	CAMLlocal1(for_app_update);
+	CAMLlocalN(args, 2);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(handles->for_app);
+
+	/* Libxl currently promises that timeout_modify is only ever called with
+	 * abs={0,0}, meaning "right away". We cannot deal with other values. */
+	assert(abs.tv_sec == 0 && abs.tv_usec == 0);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
-		func = caml_named_value("libxl_timeout_modify");
+		func = caml_named_value("libxl_timeout_fire_now");
+	}
+
+	args[0] = *p;
+	args[1] = handles->for_app;
+
+	for_app_update = caml_callbackN_exn(*func, 2, args);
+	if (Is_exception_result(for_app_update)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
 	}
 
-	caml_callback(*func, *p);
+	handles->for_app = for_app_update;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void timeout_deregister(void *user, void *for_app_registration)
 {
-	caml_leave_blocking_section();
-	failwith_xl(ERROR_FAIL, "timeout_deregister not yet implemented");
-	caml_enter_blocking_section();
+	/* This hook will never be called by libxl. */
+	abort();
 }
 
 value stub_libxl_osevent_register_hooks(value ctx, value user)
@@ -1384,14 +1489,18 @@ value stub_libxl_osevent_occurred_fd(value ctx, value for_libxl, value fd,
 	CAMLreturn(Val_unit);
 }
 
-value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
+value stub_libxl_osevent_occurred_timeout(value ctx, value handles)
 {
-	CAMLparam2(ctx, for_libxl);
+	CAMLparam1(ctx);
+	struct timeout_handles *c_handles = (struct timeout_handles *) handles;
 
 	caml_enter_blocking_section();
-	libxl_osevent_occurred_timeout(CTX, (void *) for_libxl);
+	libxl_osevent_occurred_timeout(CTX, (void *) c_handles->for_libxl);
 	caml_leave_blocking_section();
 
+	caml_remove_global_root(&c_handles->for_app);
+	free(c_handles);
+
 	CAMLreturn(Val_unit);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+		fd_modify:('a -> Unix.file_descr -> 'b -> event list -> 'b) ->
+		fd_deregister:('a -> Unix.file_descr -> 'b -> unit) ->
+		timeout_register:('a -> int64 -> int64 -> for_libxl -> 'c) ->
+		timeout_fire_now:('a -> 'c -> 'c) ->
 		osevent_hooks
 
 	external osevent_occurred_fd : ctx -> for_libxl -> Unix.file_descr -> event list -> event list -> unit = "stub_libxl_osevent_occurred_fd"
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2e2606a..23f253a 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -31,6 +31,7 @@
 #include <libxl_utils.h>
 
 #include <unistd.h>
+#include <assert.h>
 
 #include "caml_xentoollog.h"
 
@@ -1211,14 +1212,20 @@ value Val_poll_events(short events)
 	CAMLreturn(event_list);
 }
 
+/* The process for dealing with the for_app_registration_  values in the
+ * callbacks below (GC registrations etc) is similar to the way for_callback is
+ * handled in the asynchronous operations above. */
+
 int fd_register(void *user, int fd, void **for_app_registration_out,
                      short events, void *for_libxl)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1230,10 +1237,26 @@ int fd_register(void *user, int fd, void **for_app_registration_out,
 	args[2] = Val_poll_events(events);
 	args[3] = (value) for_libxl;
 
-	caml_callbackN(*func, 4, args);
+	for_app = malloc(sizeof(value));
+	if (!for_app) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		free(for_app);
+		goto err;
+	}
+
+	caml_register_global_root(for_app);
+	*for_app_registration_out = for_app;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int fd_modify(void *user, int fd, void **for_app_registration_update,
@@ -1241,9 +1264,14 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 3);
+	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1252,21 +1280,37 @@ int fd_modify(void *user, int fd, void **for_app_registration_update,
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
-	args[2] = Val_poll_events(events);
+	args[2] = *for_app;
+	args[3] = Val_poll_events(events);
+
+	*for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(*for_app)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	*for_app_registration_update = for_app;
 
-	caml_callbackN(*func, 3, args);
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void fd_deregister(void *user, int fd, void *for_app_registration)
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
-	CAMLlocalN(args, 2);
+	CAMLlocalN(args, 3);
 	static value *func = NULL;
 	value *p = (value *) user;
+	value *for_app = for_app_registration;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(for_app);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1275,12 +1319,26 @@ void fd_deregister(void *user, int fd, void *for_app_registration)
 
 	args[0] = *p;
 	args[1] = Val_int(fd);
+	args[2] = *for_app;
+
+	caml_callbackN_exn(*func, 3, args);
+	/* This hook does not return error codes, so the best thing we can do
+	 * to avoid trouble, if we catch an exception from the app, is abort. */
+	if (Is_exception_result(*for_app))
+		abort();
+
+	caml_remove_global_root(for_app);
+	free(for_app);
 
-	caml_callbackN(*func, 2, args);
 	CAMLdone;
 	caml_enter_blocking_section();
 }
 
+struct timeout_handles {
+	void *for_libxl;
+	value for_app;
+};
+
 int timeout_register(void *user, void **for_app_registration_out,
                           struct timeval abs, void *for_libxl)
 {
@@ -1288,8 +1346,10 @@ int timeout_register(void *user, void **for_app_registration_out,
 	CAMLparam0();
 	CAMLlocal2(sec, usec);
 	CAMLlocalN(args, 4);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles;
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
@@ -1299,15 +1359,36 @@ int timeout_register(void *user, void **for_app_registration_out,
 	sec = caml_copy_int64(abs.tv_sec);
 	usec = caml_copy_int64(abs.tv_usec);
 
+	/* This struct of "handles" will contain "for_libxl" as well as "for_app".
+	 * We'll give a pointer to the struct to the app, and get it back in
+	 * occurred_timeout, where we can clean it all up. */
+	handles = malloc(sizeof(*handles));
+	if (!handles) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
+	}
+
+	handles->for_libxl = for_libxl;
+
 	args[0] = *p;
 	args[1] = sec;
 	args[2] = usec;
-	args[3] = (value) for_libxl;
+	args[3] = (value) handles;
 
-	caml_callbackN(*func, 4, args);
+	handles->for_app = caml_callbackN_exn(*func, 4, args);
+	if (Is_exception_result(handles->for_app)) {
+		ret = ERROR_OSEVENT_REG_FAIL;
+		free(handles);
+		goto err;
+	}
+
+	caml_register_global_root(&handles->for_app);
+	*for_app_registration_out = handles;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 int timeout_modify(void *user, void **for_app_registration_update,
@@ -1315,25 +1396,49 @@ int timeout_modify(void *user, void **for_app_registration_update,
 {
 	caml_leave_blocking_section();
 	CAMLparam0();
+	CAMLlocal1(for_app_update);
+	CAMLlocalN(args, 2);
+	int ret = 0;
 	static value *func = NULL;
 	value *p = (value *) user;
+	struct timeout_handles *handles = *for_app_registration_update;
+
+	/* If for_app == NULL, then something is very wrong */
+	assert(handles->for_app);
+
+	/* Libxl currently promises that timeout_modify is only ever called with
+	 * abs={0,0}, meaning "right away". We cannot deal with other values. */
+	assert(abs.tv_sec == 0 && abs.tv_usec == 0);
 
 	if (func == NULL) {
 		/* First time around, lookup by name */
-		func = caml_named_value("libxl_timeout_modify");
+		func = caml_named_value("libxl_timeout_fire_now");
+	}
+
+	args[0] = *p;
+	args[1] = handles->for_app;
+
+	for_app_update = caml_callbackN_exn(*func, 2, args);
+	if (Is_exception_result(for_app_update)) {
+		/* If an exception is caught, *for_app_registration_update is not
+		 * changed. It remains a valid pointer to a value that is registered
+		 * with the GC. */
+		ret = ERROR_OSEVENT_REG_FAIL;
+		goto err;
 	}
 
-	caml_callback(*func, *p);
+	handles->for_app = for_app_update;
+
+err:
 	CAMLdone;
 	caml_enter_blocking_section();
-	return 0;
+	return ret;
 }
 
 void timeout_deregister(void *user, void *for_app_registration)
 {
-	caml_leave_blocking_section();
-	failwith_xl(ERROR_FAIL, "timeout_deregister not yet implemented");
-	caml_enter_blocking_section();
+	/* This hook will never be called by libxl. */
+	abort();
 }
 
 value stub_libxl_osevent_register_hooks(value ctx, value user)
@@ -1384,14 +1489,18 @@ value stub_libxl_osevent_occurred_fd(value ctx, value for_libxl, value fd,
 	CAMLreturn(Val_unit);
 }
 
-value stub_libxl_osevent_occurred_timeout(value ctx, value for_libxl)
+value stub_libxl_osevent_occurred_timeout(value ctx, value handles)
 {
-	CAMLparam2(ctx, for_libxl);
+	CAMLparam1(ctx);
+	struct timeout_handles *c_handles = (struct timeout_handles *) handles;
 
 	caml_enter_blocking_section();
-	libxl_osevent_occurred_timeout(CTX, (void *) for_libxl);
+	libxl_osevent_occurred_timeout(CTX, (void *) c_handles->for_libxl);
 	caml_leave_blocking_section();
 
+	caml_remove_global_root(&c_handles->for_app);
+	free(c_handles);
+
 	CAMLreturn(Val_unit);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cXD-0005yM-Ni; Fri, 10 Jan 2014 13:54:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1cXB-0005yG-5O
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:54:01 +0000
Received: from [193.109.254.147:60424] by server-8.bemta-14.messagelabs.com id
	02/84-30921-87BFFC25; Fri, 10 Jan 2014 13:54:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389362039!10057204!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11606 invoked from network); 10 Jan 2014 13:53:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 13:53:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 13:53:59 +0000
Message-Id: <52D0098402000078001125BD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 13:53:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
	<21199.64025.811465.724716@mariner.uk.xensource.com>
In-Reply-To: <21199.64025.811465.724716@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 14:48, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL"):
>> On 10.01.14 at 14:11, xen.org <ian.jackson@eu.citrix.com> wrote:
>> > Regressions which are regarded as allowable (not blocking):
>> >  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect
>> 
>> Whereas this one (which iirc blocked the previous two test runs
>> from doing a push) is now considered allowable?
> 
> Yes, because 24336 proves it doesn't happen consistently.

Does it? To me it looks like there were only failures since yesterday's
new commits went in - I don't recall having seen a flight report with
this not failing (but then again the names of the tests are similar
enough that I may mix things up).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:56:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:56:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cZ9-00066Q-AJ; Fri, 10 Jan 2014 13:56:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1cZ6-00066K-RP
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:56:00 +0000
Received: from [85.158.143.35:60256] by server-3.bemta-4.messagelabs.com id
	63/A3-32360-0FBFFC25; Fri, 10 Jan 2014 13:56:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389362158!10811377!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18905 invoked from network); 10 Jan 2014 13:55:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:55:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91670017"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:55:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:55:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cZ2-0001zd-S8;
	Fri, 10 Jan 2014 13:55:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cZ2-0005Og-LY;
	Fri, 10 Jan 2014 13:55:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.64492.526217.423210@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 13:55:56 +0000
To: Rob Hoes <rob.hoes@citrix.com>
In-Reply-To: <1389361925-10790-1-git-send-email-rob.hoes@citrix.com>
References: <1389288981-3826-1-git-send-email-rob.hoes@citrix.com>
	<1389361925-10790-1-git-send-email-rob.hoes@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: dave.scott@eu.citrix.com, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v5 3/3] libxl: ocaml: use
	'for_app_registration' in osevent callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rob Hoes writes ("[PATCH v5 3/3] libxl: ocaml: use 'for_app_registration' in osevent callbacks"):
> This allows the application to pass a token to libxl in the fd/timeout
> registration callbacks, which it receives back in modification or
> deregistration callbacks.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks!

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:56:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cZY-00069T-OO; Fri, 10 Jan 2014 13:56:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1cZX-00069D-AM
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:56:27 +0000
Received: from [85.158.139.211:37012] by server-4.bemta-5.messagelabs.com id
	91/6F-26791-A0CFFC25; Fri, 10 Jan 2014 13:56:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389362184!9023772!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8138 invoked from network); 10 Jan 2014 13:56:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:56:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91670139"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:56:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 08:56:23 -0500
Message-ID: <1389362182.19142.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 13:56:22 +0000
In-Reply-To: <21199.63913.652989.23779@mariner.uk.xensource.com>
References: <1389359533-18669-1-git-send-email-ian.campbell@citrix.com>
	<21199.63913.652989.23779@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] make-flight: override the Debian
 guest suite on armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 13:46 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] make-flight: override the Debian guest suite on armhf"):
> > We already override the host Debian suite but we need to override the guest too
> > (since armhf doesn't exist in squeeze).
> > 
> > Consolidate the Debian guest vars in to one place and add
> > debian_suite runvar when appropriate.
> 
> LGTM.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks. This reminded me that I forgot my S-o-b so I added that and
pushed as you suggested.

I also forgot to S-o-b the previous patch I pushed this morning to fix
the fall out from "do not install xend for xl tests". Sigh.

I should also have published that here. Forgetting to do so was bad
form, sorry. It is below.

Ian.

commit 3d62c0e0fa6e71dcb6c5639070849f91b92ce756
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Jan 10 09:55:10 2014 +0000

    make-flight: Adjust buildjob runvar for xend vs non-xend tests
    
    Currently we only do so for xenbuildjob, which contains only the hypervisor and
    not the tools; for those we need buildjob. In principle the xenbuildjob output
    should be identical for both xend and non-xend builds, but we use a matching
    set for consistency.
    
    This adds a xend suffix to buildjob for exactly test-amd64-i386-pv,
    test-amd64-amd64-pv, test-amd64-i386-xend-winxpsp3,
    test-amd64-i386-xend-qemut-winxpsp3 which is the expected set of xend jobs.

diff --git a/make-flight b/make-flight
index 354a104..7ac84b4 100755
--- a/make-flight
+++ b/make-flight
@@ -226,11 +226,13 @@ job_create_test () {
 	local recipe=$1; shift
 	local toolstack=$1; shift
 	local xenarch=$1; shift
+	local dom0arch=$1; shift
 
         local job_md5=`echo "$job" | md5sum`
         job_md5="${job_md5%  -}"
 
 	xenbuildjob="${bfi}build-$xenarch"
+	buildjob="${bfi}build-$dom0arch"
 
         case "$xenbranch:$toolstack" in
         xen-3.*-testing:*) ;;
@@ -238,7 +240,9 @@ job_create_test () {
         xen-4.1-testing:*) ;;
         xen-4.2-testing:*) ;;
         xen-4.3-testing:*) ;;
-        *:xend) xenbuildjob="$xenbuildjob-xend";;
+        *:xend) xenbuildjob="$xenbuildjob-xend"
+		buildjob="${bfi}build-$dom0arch-xend"
+		;;
         esac
 
         if [ "x$JOB_MD5_PATTERN" != x ]; then
@@ -275,8 +279,9 @@ job_create_test () {
 		;;
         esac
 
-	./cs-job-create $flight $job $recipe toolstack=$toolstack \
-		$RUNVARS $TEST_RUNVARS $most_runvars xenbuildjob=$xenbuildjob "$@"
+	./cs-job-create $flight $job $recipe toolstack=$toolstack	\
+		$RUNVARS $TEST_RUNVARS $most_runvars			\
+		xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
@@ -372,13 +377,12 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       most_runvars="
 		arch=$dom0arch			        	\
 		kernbuildjob=${bfi}build-$dom0arch-$kernbuild 	\
-		buildjob=${bfi}build-$dom0arch	        	\
 		kernkind=$kernkind		        	\
 		$arch_runvars $suite_runvars
 		"
       if [ $dom0arch = armhf ]; then
 	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
@@ -386,13 +390,13 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
@@ -402,7 +406,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
-			test-freebsd xl $xenarch  \
+			test-freebsd xl $xenarch $dom0arch \
 			freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
 			all_hostflags=$most_hostflags
@@ -448,7 +452,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test \
                 test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
-                test-win $toolstack $xenarch $qemuu_runvar \
+                test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
 		win_image=winxpsp3.iso $vcpus_runvars	\
 		all_hostflags=$most_hostflags,hvm
 
@@ -458,7 +462,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       if [ $xenarch = amd64 ]; then
 
       job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
-                test-win xl $xenarch $qemuu_runvar \
+                test-win xl $xenarch $dom0arch $qemuu_runvar \
 		win_image=win7-x64.iso \
 		all_hostflags=$most_hostflags,hvm
 
@@ -469,7 +473,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 	for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
-						test-rhelhvm xl $xenarch \
+						test-rhelhvm xl $xenarch $dom0arch \
 		redhat_image=rhel-server-6.1-i386-dvd.iso		\
 		all_hostflags=$most_hostflags,hvm-$cpuvendor \
                 $qemuu_runvar
@@ -481,7 +485,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       done # qemuu_suffix
 
       job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-		$onetoolstack $xenarch \
+		$onetoolstack $xenarch $dom0arch \
                 !host !host_hostflags \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -492,7 +496,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
        for pin in '' -pin; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
-           test-debian xl $xenarch \
+           test-debian xl $xenarch $dom0arch \
 		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -505,13 +509,13 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
-                        test-debian xl $xenarch guests_vcpus=4 \
+                        test-debian xl $xenarch $dom0arch guests_vcpus=4  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-           test-debian xl $xenarch					  \
+           test-debian xl $xenarch $dom0arch				  \
 		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -524,7 +528,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for cpuvendor in intel; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                        test-debian-nomigr xl $xenarch guests_vcpus=4	  \
+                        test-debian-nomigr xl $xenarch $dom0arch	  \
+		guests_vcpus=4						  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		debian_pcipassthrough_nic=host				  \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:56:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cZY-00069T-OO; Fri, 10 Jan 2014 13:56:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1cZX-00069D-AM
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 13:56:27 +0000
Received: from [85.158.139.211:37012] by server-4.bemta-5.messagelabs.com id
	91/6F-26791-A0CFFC25; Fri, 10 Jan 2014 13:56:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389362184!9023772!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8138 invoked from network); 10 Jan 2014 13:56:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:56:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91670139"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 13:56:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 08:56:23 -0500
Message-ID: <1389362182.19142.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 13:56:22 +0000
In-Reply-To: <21199.63913.652989.23779@mariner.uk.xensource.com>
References: <1389359533-18669-1-git-send-email-ian.campbell@citrix.com>
	<21199.63913.652989.23779@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] make-flight: override the Debian
 guest suite on armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 13:46 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] make-flight: override the Debian guest suite on armhf"):
> > We already override the host Debian suite but we need to override the guest too
> > (since armhf doesn't exist in squeeze).
> > 
> > Consolidate the Debian guest vars in to one place and add
> > debian_suite runvar when appropriate.
> 
> LGTM.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks. This reminded me that I forgot my S-o-b so I added that and
pushed as you suggested.

I also forgot to S-o-b the previous patch I pushed this morning to fix
the fallout from "do not install xend for xl tests". Sigh.

I should also have published that here. Forgetting to do so was bad
form, sorry. It is below.

Ian.

commit 3d62c0e0fa6e71dcb6c5639070849f91b92ce756
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Jan 10 09:55:10 2014 +0000

    make-flight: Adjust buildjob runvar for xend vs non-xend tests
    
    Currently we only do so for xenbuildjob, which contains only the hypervisor and
    not the tools; for those we need buildjob. In principle the xenbuildjob output
    should be identical for both xend and non-xend builds, but we use a matching
    set for consistency.
    
    This adds a xend suffix to buildjob for exactly test-amd64-i386-pv,
    test-amd64-amd64-pv, test-amd64-i386-xend-winxpsp3,
    test-amd64-i386-xend-qemut-winxpsp3 which is the expected set of xend jobs.

diff --git a/make-flight b/make-flight
index 354a104..7ac84b4 100755
--- a/make-flight
+++ b/make-flight
@@ -226,11 +226,13 @@ job_create_test () {
 	local recipe=$1; shift
 	local toolstack=$1; shift
 	local xenarch=$1; shift
+	local dom0arch=$1; shift
 
         local job_md5=`echo "$job" | md5sum`
         job_md5="${job_md5%  -}"
 
 	xenbuildjob="${bfi}build-$xenarch"
+	buildjob="${bfi}build-$dom0arch"
 
         case "$xenbranch:$toolstack" in
         xen-3.*-testing:*) ;;
@@ -238,7 +240,9 @@ job_create_test () {
         xen-4.1-testing:*) ;;
         xen-4.2-testing:*) ;;
         xen-4.3-testing:*) ;;
-        *:xend) xenbuildjob="$xenbuildjob-xend";;
+        *:xend) xenbuildjob="$xenbuildjob-xend"
+		buildjob="${bfi}build-$dom0arch-xend"
+		;;
         esac
 
         if [ "x$JOB_MD5_PATTERN" != x ]; then
@@ -275,8 +279,9 @@ job_create_test () {
 		;;
         esac
 
-	./cs-job-create $flight $job $recipe toolstack=$toolstack \
-		$RUNVARS $TEST_RUNVARS $most_runvars xenbuildjob=$xenbuildjob "$@"
+	./cs-job-create $flight $job $recipe toolstack=$toolstack	\
+		$RUNVARS $TEST_RUNVARS $most_runvars			\
+		xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
@@ -372,13 +377,12 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       most_runvars="
 		arch=$dom0arch			        	\
 		kernbuildjob=${bfi}build-$dom0arch-$kernbuild 	\
-		buildjob=${bfi}build-$dom0arch	        	\
 		kernkind=$kernkind		        	\
 		$arch_runvars $suite_runvars
 		"
       if [ $dom0arch = armhf ]; then
 	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
@@ -386,13 +390,13 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch						  \
+		$xenarch $dom0arch					  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
@@ -402,7 +406,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
-			test-freebsd xl $xenarch  \
+			test-freebsd xl $xenarch $dom0arch \
 			freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
 			all_hostflags=$most_hostflags
@@ -448,7 +452,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test \
                 test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
-                test-win $toolstack $xenarch $qemuu_runvar \
+                test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
 		win_image=winxpsp3.iso $vcpus_runvars	\
 		all_hostflags=$most_hostflags,hvm
 
@@ -458,7 +462,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       if [ $xenarch = amd64 ]; then
 
       job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
-                test-win xl $xenarch $qemuu_runvar \
+                test-win xl $xenarch $dom0arch $qemuu_runvar \
 		win_image=win7-x64.iso \
 		all_hostflags=$most_hostflags,hvm
 
@@ -469,7 +473,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 	for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
-						test-rhelhvm xl $xenarch \
+						test-rhelhvm xl $xenarch $dom0arch \
 		redhat_image=rhel-server-6.1-i386-dvd.iso		\
 		all_hostflags=$most_hostflags,hvm-$cpuvendor \
                 $qemuu_runvar
@@ -481,7 +485,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       done # qemuu_suffix
 
       job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-		$onetoolstack $xenarch \
+		$onetoolstack $xenarch $dom0arch \
                 !host !host_hostflags \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -492,7 +496,7 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
        for pin in '' -pin; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
-           test-debian xl $xenarch \
+           test-debian xl $xenarch $dom0arch \
 		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -505,13 +509,13 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
-                        test-debian xl $xenarch guests_vcpus=4 \
+                        test-debian xl $xenarch $dom0arch guests_vcpus=4  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-           test-debian xl $xenarch					  \
+           test-debian xl $xenarch $dom0arch				  \
 		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
@@ -524,7 +528,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for cpuvendor in intel; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                        test-debian-nomigr xl $xenarch guests_vcpus=4	  \
+                        test-debian-nomigr xl $xenarch $dom0arch	  \
+		guests_vcpus=4						  \
 		debian_kernkind=$kernkind				  \
 		debian_arch=$dom0arch   				  \
 		debian_pcipassthrough_nic=host				  \
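
[For illustration only: a minimal, self-contained sketch of the calling convention the patch above introduces, not the real make-flight. The flight prefix "12345." and the echoed output stand in for what cs-job-create would receive.]

```shell
#!/bin/sh
# Sketch of the patched job_create_test: dom0arch is now a positional
# argument, so the xend suffix can be applied to buildjob as well as
# xenbuildjob.  "12345." stands in for the flight prefix $bfi.
bfi="12345."

job_create_test () {
    local job=$1; shift
    local recipe=$1; shift
    local toolstack=$1; shift
    local xenarch=$1; shift
    local dom0arch=$1; shift   # new argument added by this patch

    xenbuildjob="${bfi}build-$xenarch"
    buildjob="${bfi}build-$dom0arch"

    case "$toolstack" in
    xend)
        # xend tests must consume the -xend variants of both builds
        xenbuildjob="$xenbuildjob-xend"
        buildjob="${bfi}build-$dom0arch-xend"
        ;;
    esac

    echo "$job: xenbuildjob=$xenbuildjob buildjob=$buildjob"
}

job_create_test test-amd64-i386-pv test-debian xend amd64 i386
```

Run against one of the expected xend jobs, this shows both runvars picking up the -xend suffix, which is exactly the set of jobs named in the commit message.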



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 13:59:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 13:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cc9-000709-Ds; Fri, 10 Jan 2014 13:59:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1cc7-0006zm-PK
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 13:59:07 +0000
Received: from [85.158.137.68:55471] by server-10.bemta-3.messagelabs.com id
	1E/B1-23989-BACFFC25; Fri, 10 Jan 2014 13:59:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389362344!4752865!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11348 invoked from network); 10 Jan 2014 13:59:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 13:59:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89531903"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 13:59:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 08:59:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cc3-00021T-Eh;
	Fri, 10 Jan 2014 13:59:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W1cc3-0005PF-7n;
	Fri, 10 Jan 2014 13:59:03 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21199.64676.218509.61789@mariner.uk.xensource.com>
Date: Fri, 10 Jan 2014 13:59:00 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52D0098402000078001125BD@nat28.tlf.novell.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
	<21199.64025.811465.724716@mariner.uk.xensource.com>
	<52D0098402000078001125BD@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL"):
> On 10.01.14 at 14:48, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> >> Whereas this one (which iirc blocked the previous two test runs
> >> from doing a push) is now considered allowable?
> > 
> > Yes, because 24336 proves it doesn't happen consistently.
> 
> Does it? To me it looks like there were only failures since yesterday's
> new commits went in - I don't recall having seen a flight report with
> this not failing (but then again the names of the tests are similar
> enough that I may mix things up).

You wouldn't have seen 24336 because it was part of an attempt at
bisection.  The bisection flights' reports aren't emailed to xen-devel
because they're very numerous.

> >> >  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:00:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cdF-0007Co-5R; Fri, 10 Jan 2014 14:00:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W1cdD-0007CV-3B
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:00:15 +0000
Received: from [85.158.143.35:37527] by server-1.bemta-4.messagelabs.com id
	22/07-02132-DECFFC25; Fri, 10 Jan 2014 14:00:13 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389362412!10966493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19404 invoked from network); 10 Jan 2014 14:00:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 14:00:13 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AE09x3028423
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 14:00:10 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AE08cg022350
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 14:00:09 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AE08AI022309; Fri, 10 Jan 2014 14:00:08 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 06:00:07 -0800
Date: Fri, 10 Jan 2014 15:00:03 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140110140003.GE3633@olila.local.net-space.pl>
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
	<20140109195038.GC3633@olila.local.net-space.pl>
	<52CF2D78.4@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CF2D78.4@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jan Beulich <jbeulich@suse.com>, David Vrabel <david.vrabel@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 11:15:04PM +0000, Andrew Cooper wrote:
> On 09/01/2014 19:50, Daniel Kiper wrote:
> > By the way, why map_domain_page() behavior depends on debug option?
> > It is not nice because we could be trapped by this in the future in
> > more serious places. Could map_domain_page() work in the same way
> > with or without debug option?
>
> With a debug build of Xen, map_domain_page() always mutates the
> pagetables and hands out virtual addresses from the mapcache region.
> This is to test map_domain_page() itself, as well as making domain
> mapping leaks more obvious (as the mapcache is under heavier load).
>
> For a non-debug build of Xen, any map_domain_page() calls which can be
> satisfied by returning a virtual address from the direct map region
> (i.e. for pages below the 5TiB boundary which is basically all of them
> unless you have more money than sense) are, which avoids excessive use
> of the mapcache, and avoids a TLB shootdown/flush on unmap.

Makes sense. Thanks for the explanation.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:00:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cdF-0007Co-5R; Fri, 10 Jan 2014 14:00:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1W1cdD-0007CV-3B
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:00:15 +0000
Received: from [85.158.143.35:37527] by server-1.bemta-4.messagelabs.com id
	22/07-02132-DECFFC25; Fri, 10 Jan 2014 14:00:13 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389362412!10966493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19404 invoked from network); 10 Jan 2014 14:00:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 14:00:13 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AE09x3028423
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 14:00:10 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AE08cg022350
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 14:00:09 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AE08AI022309; Fri, 10 Jan 2014 14:00:08 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 06:00:07 -0800
Date: Fri, 10 Jan 2014 15:00:03 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140110140003.GE3633@olila.local.net-space.pl>
References: <1389206119-13527-1-git-send-email-david.vrabel@citrix.com>
	<20140109195038.GC3633@olila.local.net-space.pl>
	<52CF2D78.4@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CF2D78.4@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jan Beulich <jbeulich@suse.com>, David Vrabel <david.vrabel@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86: map portion of kexec crash area that
 is within the direct map area
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 11:15:04PM +0000, Andrew Cooper wrote:
> On 09/01/2014 19:50, Daniel Kiper wrote:
> > By the way, why map_domain_page() behavior depends on debug option?
> > It is not nice because we could be trapped by this in the future in
> > more serious places. Could map_domain_page() work in the same way
> > with or without debug option?
>
> With a debug build of Xen, map_domain_page() always mutates the
> pagetables and hands out virtual addresses from the mapcache region.
> This is to test map_domain_page() itself, as well as making domain
> mapping leaks more obvious (as the mapcache is under heavier load).
>
> For a non-debug build of Xen, any map_domain_page() calls which can be
> satisfied by returning a virtual address from the direct map region
> (i.e. for pages below the 5TiB boundary which is basically all of them
> unless you have more money than sense) are, which avoided excessive use
> of the mapcache, and avoids a TLB shootdown/flush on unmap.

Makes sense. Thanks for the explanation.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:03:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1cgf-0007Qb-9S; Fri, 10 Jan 2014 14:03:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1cge-0007QU-5p
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:03:48 +0000
Received: from [193.109.254.147:53982] by server-12.bemta-14.messagelabs.com
	id 05/3D-13681-3CDFFC25; Fri, 10 Jan 2014 14:03:47 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389362625!10059750!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24214 invoked from network); 10 Jan 2014 14:03:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:03:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89534651"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:03:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:03:44 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1cga-0000YS-I8;
	Fri, 10 Jan 2014 14:03:44 +0000
Date: Fri, 10 Jan 2014 14:03:44 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140110140344.GA30581@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CFF59402000078001124DC@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
> >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > On 10/01/2014 11:46, Wei Liu wrote:
> >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> >>>> +    /* We cannot have PoD and PCI device assignment at the same time
> >>>> +     * for HVM guest. It was reported that VT-d engine cannot
> >>> There's still a "VT-d" left in here...
> >>>
> >> I mentioned AMD as well. Was trying to clarify things a bit more...
> > 
> > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor neutral.
> > 
> > There is no way for for PoD (or Paging for that matter) to work in
> > combination with PCIPassthrough, as you need all the backing RAM for all
> > gfns to exist to receive DMA.
> 
> That's going a little too far: If IOMMU faults were recoverable, dealing
> with non-present pages would become possible (with other caveats
> of course). So this is not a fundamental attribute of IOMMUs, but
> there doesn't seem to be any reason to believe that the current
> model would change any time soon for either of the vendors.
> 

Do you have a suggestion for how this comment should be phrased?

Wei.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:07:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ckD-0007Zq-09; Fri, 10 Jan 2014 14:07:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W1ckB-0007Zk-Kz
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:07:27 +0000
Received: from [85.158.143.35:49184] by server-2.bemta-4.messagelabs.com id
	EA/A3-11386-E9EFFC25; Fri, 10 Jan 2014 14:07:26 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389362835!10957181!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE1OTEgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27377 invoked from network); 10 Jan 2014 14:07:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:07:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; 
	d="asc'?scan'208";a="89535993"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:07:15 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:07:15 -0500
Message-ID: <1389362833.16457.164.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 15:07:13 +0100
In-Reply-To: <21199.56579.619219.693765@mariner.uk.xensource.com>
References: <1389314626.16457.122.camel@Solace>
	<21199.56579.619219.693765@mariner.uk.xensource.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Missing/wrong tag in xenbits'
	qemu-upstream-unstable.git ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5123843689745952329=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5123843689745952329==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-ZnfqSFJMoS4fDVksDzIU"

--=-ZnfqSFJMoS4fDVksDzIU
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-01-10 at 11:44 +0000, Ian Jackson wrote:
> Dario Faggioli writes ("Missing/wrong tag in xenbits' qemu-upstream-unstable.git ?"):
> > which wipes the build box, so I don't think I have stale files, config,
> > etc. The commit responsible for having this looking for a
> > 'qemu-xen-4.4.0-rc1' tag is:
> > 
> > $ git show d84a6e2f
> > commit d84a6e2fa077d07f91ac72c3d8334b75b45fcba2
> > Author: Ian Jackson <ian.jackson@eu.citrix.com>
> > Date:   Thu Dec 19 16:28:29 2013 +0000
> ...
> > However, looking here: http://xenbits.xen.org/gitweb/?p=staging/qemu-upstream-unstable.git;a=tags
> > there does not seem to be any such tag:
> > 
> > staging/qemu-upstream-unstable.git
> > 6 months ago	qemu-xen-4.3.0	qemu-xen-4.3.0	tag	 | commit | shortlog | log
> > 8 months ago	qemu-xen-4.3.0-rc1	Xen 4.3.0 RC1	tag	 | commit | shortlog | log
> > 
> > What am I missing or doing wrong?
> 
> It looks like this tag was in the non-staging tree only, not in
> staging.  I have pushed it to staging too.
> 
I see.

> It looks like this was my fault, sorry.  (Normally Stefano tags this
> tree, but not on this occasion.)
> 
Well, np at all... and thanks for fixing it.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-ZnfqSFJMoS4fDVksDzIU
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLP/pEACgkQk4XaBE3IOsQpCACeNjQUsnLHGIb4y4MJbPCVp/PT
XREAnjtlabeLviTjhBFHGV8Acw4rsq3b
=LBzY
-----END PGP SIGNATURE-----

--=-ZnfqSFJMoS4fDVksDzIU--


--===============5123843689745952329==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5123843689745952329==--


From xen-devel-bounces@lists.xen.org Fri Jan 10 14:07:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ckW-0007bV-Cw; Fri, 10 Jan 2014 14:07:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1ckU-0007bB-B6
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:07:46 +0000
Received: from [193.109.254.147:15827] by server-12.bemta-14.messagelabs.com
	id 6B/D3-13681-1BEFFC25; Fri, 10 Jan 2014 14:07:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389362863!10021155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26913 invoked from network); 10 Jan 2014 14:07:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:07:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89536086"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:07:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:07:42 -0500
Message-ID: <1389362861.19142.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Fri, 10 Jan 2014 14:07:41 +0000
In-Reply-To: <20140110140344.GA30581@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
	<20140110140344.GA30581@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 14:03 +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
> > >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > > On 10/01/2014 11:46, Wei Liu wrote:
> > >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> > >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> > >>>> +    /* We cannot have PoD and PCI device assignment at the same time
> > >>>> +     * for HVM guest. It was reported that VT-d engine cannot
> > >>> There's still a "VT-d" left in here...
> > >>>
> > >> I mentioned AMD as well. Was trying to clarify things a bit more...
> > > 
> > > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor neutral.
> > > 
> > > There is no way for for PoD (or Paging for that matter) to work in
> > > combination with PCIPassthrough, as you need all the backing RAM for all
> > > gfns to exist to receive DMA.
> > 
> > That's going a little too far: If IOMMU faults were recoverable, dealing
> > with non-present pages would become possible (with other caveats
> > of course). So this is not a fundamental attribute of IOMMUs, but
> > there doesn't seem to be any reason to believe that the current
> > model would change any time soon for either of the vendors.
> > 
> 
> Do you have suggestion on how this comment should be phrased?

We cannot have PoD and PCI device assignment at the same time for HVM
guests. This is because the IOMMU support in Xen cannot currently cope
with faults caused by pages which are not present.

?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:07:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ckW-0007bV-Cw; Fri, 10 Jan 2014 14:07:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1ckU-0007bB-B6
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:07:46 +0000
Received: from [193.109.254.147:15827] by server-12.bemta-14.messagelabs.com
	id 6B/D3-13681-1BEFFC25; Fri, 10 Jan 2014 14:07:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389362863!10021155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26913 invoked from network); 10 Jan 2014 14:07:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:07:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89536086"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:07:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:07:42 -0500
Message-ID: <1389362861.19142.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Fri, 10 Jan 2014 14:07:41 +0000
In-Reply-To: <20140110140344.GA30581@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
	<20140110140344.GA30581@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 14:03 +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
> > >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > > On 10/01/2014 11:46, Wei Liu wrote:
> > >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> > >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> > >>>> +    /* We cannot have PoD and PCI device assignment at the same time
> > >>>> +     * for HVM guest. It was reported that VT-d engine cannot
> > >>> There's still a "VT-d" left in here...
> > >>>
> > >> I mentioned AMD as well. Was trying to clarify things a bit more...
> > > 
> > > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor neutral.
> > > 
> > > There is no way for PoD (or Paging for that matter) to work in
> > > combination with PCIPassthrough, as you need all the backing RAM for all
> > > gfns to exist to receive DMA.
> > 
> > That's going a little too far: If IOMMU faults were recoverable, dealing
> > with non-present pages would become possible (with other caveats
> > of course). So this is not a fundamental attribute of IOMMUs, but
> > there doesn't seem to be any reason to believe that the current
> > model would change any time soon for either of the vendors.
> > 
> 
> Do you have suggestion on how this comment should be phrased?

We cannot have PoD and PCI device assignment at the same time for HVM
guests. This is because the IOMMU support in Xen cannot currently cope
with faults due to pages which are not present.

?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:08:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:08:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1clI-0007ti-45; Fri, 10 Jan 2014 14:08:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1clH-0007tR-HV
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:08:35 +0000
Received: from [85.158.139.211:31024] by server-16.bemta-5.messagelabs.com id
	6D/1A-11843-2EEFFC25; Fri, 10 Jan 2014 14:08:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389362914!9026762!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12394 invoked from network); 10 Jan 2014 14:08:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 14:08:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 14:08:33 +0000
Message-Id: <52D00CEE0200007800112600@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 14:08:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
	<21199.64025.811465.724716@mariner.uk.xensource.com>
	<52D0098402000078001125BD@nat28.tlf.novell.com>
	<21199.64676.218509.61789@mariner.uk.xensource.com>
In-Reply-To: <21199.64676.218509.61789@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 14:59, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - 
> FAIL"):
>> On 10.01.14 at 14:48, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
>> >> Whereas this one (which iirc blocked the previous two test runs
>> >> from doing a push) is now considered allowable?
>> > 
>> > Yes, because 24336 proves it doesn't happen consistently.
>> 
>> Does it? To me it looks like there were only failures since yesterday's
>> new commits went in - I don't recall having seen a flight report with
>> this not failing (but then again the names of the tests are similar
>> enough that I may mix things up).
> 
> You wouldn't have seen 24336 because it was part of an attempt at
> bisection.  The bisection flights' reports aren't emailed to xen-devel
> because they're very numerous.

I understand that.

>> >> >  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect

My point was that this says "fail like 24336-bisect", not "failed in
24332, passed in 24336-bisect", i.e. the above only gives
confirmation that there's a problem, not whether some flight had
no problem.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:15:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1crr-0000XE-5h; Fri, 10 Jan 2014 14:15:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1crp-0000X3-6R
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:15:21 +0000
Received: from [85.158.137.68:33769] by server-15.bemta-3.messagelabs.com id
	D9/FA-11556-87000D25; Fri, 10 Jan 2014 14:15:20 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389363318!7247058!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20955 invoked from network); 10 Jan 2014 14:15:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:15:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89538229"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:15:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:15:17 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1crl-0000it-6Y;
	Fri, 10 Jan 2014 14:15:17 +0000
Date: Fri, 10 Jan 2014 14:15:17 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140110141517.GB30581@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
	<20140110140344.GA30581@zion.uk.xensource.com>
	<52D00D190200007800112603@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D00D190200007800112603@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	XiantaoZhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 02:09:12PM +0000, Jan Beulich wrote:
> >>> On 10.01.14 at 15:03, Wei Liu <wei.liu2@citrix.com> wrote:
> > On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
> >> >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >> > On 10/01/2014 11:46, Wei Liu wrote:
> >> >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
> >> >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
> >> >>>> +    /* We cannot have PoD and PCI device assignment at the same time
> >> >>>> +     * for HVM guest. It was reported that VT-d engine cannot
> >> >>> There's still a "VT-d" left in here...
> >> >>>
> >> >> I mentioned AMD as well. Was trying to clarify things a bit more...
> >> > 
> >> > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor neutral.
> >> > 
> >> > There is no way for PoD (or Paging for that matter) to work in
> >> > combination with PCIPassthrough, as you need all the backing RAM for all
> >> > gfns to exist to receive DMA.
> >> 
> >> That's going a little too far: If IOMMU faults were recoverable, dealing
> >> with non-present pages would become possible (with other caveats
> >> of course). So this is not a fundamental attribute of IOMMUs, but
> >> there doesn't seem to be any reason to believe that the current
> >> model would change any time soon for either of the vendors.
> >> 
> > 
> > Do you have suggestion on how this comment should be phrased?
> 
> Just say "IOMMU" or "DMA remapping" instead of "VT-d".
> 

OK, like this:

/* We cannot have PoD and PCI device assignment at the same time for an
 * HVM guest. It was reported that the IOMMU cannot work with PoD enabled
 * because it needs the entire page table to be populated for the guest.
 * To stay on the safe side, we disable PCI device assignment when PoD is
 * enabled.
 */
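[For readers following the thread: a guard of the shape discussed above can
be sketched as below. The struct and function names here are illustrative
only, not the actual libxl API; PoD is taken to be in use whenever the
target memory is below the maximum memory, which is how xl infers it from
the guest configuration.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative configuration fields (not real libxl names): PoD is
 * active when target_memkb < max_memkb; pci_ndevs counts PCI devices
 * assigned to the guest. */
struct guest_cfg {
    uint64_t max_memkb;
    uint64_t target_memkb;
    int pci_ndevs;
};

/* Reject (return false) any configuration that combines PoD with PCI
 * device assignment, per the comment above. */
static bool cfg_is_valid(const struct guest_cfg *cfg)
{
    bool pod_enabled = cfg->target_memkb < cfg->max_memkb;
    return !(pod_enabled && cfg->pci_ndevs > 0);
}
```

[Either condition alone is fine; only the combination is refused, which is
why the check must sit in the domain-creation path where both the memory
settings and the device list are known.]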

Wei.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:17:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ctm-0000hv-0w; Fri, 10 Jan 2014 14:17:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1ctl-0000hm-3j
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:17:21 +0000
Received: from [85.158.139.211:65411] by server-5.bemta-5.messagelabs.com id
	AA/89-14928-0F000D25; Fri, 10 Jan 2014 14:17:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389362957!8995771!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27149 invoked from network); 10 Jan 2014 14:09:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 14:09:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 14:09:16 +0000
Message-Id: <52D00D190200007800112603@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 14:09:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
	<20140110140344.GA30581@zion.uk.xensource.com>
In-Reply-To: <20140110140344.GA30581@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	XiantaoZhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 15:03, Wei Liu <wei.liu2@citrix.com> wrote:
> On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
>> >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> > On 10/01/2014 11:46, Wei Liu wrote:
>> >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
>> >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
>> >>>> +    /* We cannot have PoD and PCI device assignment at the same time
>> >>>> +     * for HVM guest. It was reported that VT-d engine cannot
>> >>> There's still a "VT-d" left in here...
>> >>>
>> >> I mentioned AMD as well. Was trying to clarify things a bit more...
>> > 
>> > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor neutral.
>> > 
>> > There is no way for PoD (or Paging for that matter) to work in
>> > combination with PCIPassthrough, as you need all the backing RAM for all
>> > gfns to exist to receive DMA.
>> 
>> That's going a little too far: If IOMMU faults were recoverable, dealing
>> with non-present pages would become possible (with other caveats
>> of course). So this is not a fundamental attribute of IOMMUs, but
>> there doesn't seem to be any reason to believe that the current
>> model would change any time soon for either of the vendors.
>> 
> 
> Do you have suggestion on how this comment should be phrased?

Just say "IOMMU" or "DMA remapping" instead of "VT-d".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:25:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1d1o-0001fF-Ga; Fri, 10 Jan 2014 14:25:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W1d1m-0001f8-OE
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:25:38 +0000
Received: from [85.158.139.211:15828] by server-9.bemta-5.messagelabs.com id
	23/2D-15098-2E200D25; Fri, 10 Jan 2014 14:25:38 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389363935!7812926!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31735 invoked from network); 10 Jan 2014 14:25:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89542046"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:25:35 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:25:34 -0500
Message-ID: <52D002D6.7090306@citrix.com>
Date: Fri, 10 Jan 2014 09:25:26 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
In-Reply-To: <1389349212.19142.21.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 05:20 AM, Ian Campbell wrote:
> On Thu, 2014-01-09 at 14:00 -0500, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 09, 2014 at 09:46:59AM -0500, Ross Philipson wrote:
>
>>> Your memory is intact; I did provide a helper library. I posted it
>>> as a tarball since I could not figure out where such a thing might
>>> live in the xen tree. I posted it twice - the second time with some
>>> fixes:
>>>
>>> http://lists.xen.org/archives/html/xen-devel/2013-03/msg01850.html
>>
>> Would it make sense to try to have it as part of the Xen tree?
>> It looks in good shape.
>
> I think we considered this before, but I can't remember why we didn't
> (assuming we even explicitly decided that rather than just letting it
> slip through the cracks).

It has been a while, but yes, we did discuss putting it in the tools 
somewhere. I then went off and tried to find a reasonable place for it 
and could not. The last thing that happened was that I sent an email 
asking for suggestions, got no replies, and then got distracted (sorry 
about that).

So I guess that is the question: where to put it? Should it be merged 
into another lib or be brought in as a separate lib or utility?

Thanks
Ross

>
> Ian.
>
>
> -----
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2014.0.4259 / Virus Database: 3658/6989 - Release Date: 01/09/14
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:40:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dGC-0002sA-7I; Fri, 10 Jan 2014 14:40:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1dGA-0002s5-JU
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:40:31 +0000
Received: from [85.158.139.211:16103] by server-13.bemta-5.messagelabs.com id
	0C/14-11357-D5600D25; Fri, 10 Jan 2014 14:40:29 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389364829!9006232!1
X-Originating-IP: [81.169.146.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25019 invoked from network); 10 Jan 2014 14:40:29 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.223)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 14:40:29 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389364829; l=331;
	s=domk; d=aepfle.de;
	h=Content-Type:MIME-Version:Subject:To:From:Date:X-RZG-AUTH:
	X-RZG-CLASS-ID; bh=JDO3h5afcZXu0HwkwumxFHSoq1w=;
	b=bg5TevCv17ksdJ9eVgLbmNfeizWsjUtsJKeUGGnY5Ly5Njoa0Dfa2JjecP2AHN55be3
	GawKICmRQOY42Oc6hfYuq0Tmm7sMXUc+y6K2qc8U9RZkZgc9as+ytxCbjke6cj+O49dD7
	aXxSqBwtnGfLPmdtT5TmMx8y8OBHlvaWKew=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id f03badq0AEeSyRr
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 15:40:28 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 9C56F5024C; Fri, 10 Jan 2014 15:40:24 +0100 (CET)
Date: Fri, 10 Jan 2014 15:40:24 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140110144024.GA19611@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Subject: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


What is the reason the backend 'type' property of a configured disk is
now "qdisk" instead of "file"? Would the guest really care about that
detail?  For example, block-front currently just checks for "phy" and
"file" when deciding whether discard should be enabled.
I wonder if that guest-visible change was well thought out.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:47:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dMo-0003Kj-Jo; Fri, 10 Jan 2014 14:47:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dMn-0003Kd-Hx
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:47:21 +0000
Received: from [85.158.139.211:62202] by server-1.bemta-5.messagelabs.com id
	86/6C-21065-8F700D25; Fri, 10 Jan 2014 14:47:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389365238!9052868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24669 invoked from network); 10 Jan 2014 14:47:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 14:47:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89550611"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 14:47:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 09:47:17 -0500
Message-ID: <1389365236.19142.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 10 Jan 2014 14:47:16 +0000
In-Reply-To: <20140110144024.GA19611@aepfle.de>
References: <20140110144024.GA19611@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> What is the reason the backend 'type' property of a configured disk is
> now "qdisk" instead of "file"?

Because qdisk is the backend instead of loop+blk (==file), I think this
just happens naturally.

>  Would the guest really care about that
> detail?  For example block-front currently just checks for "phy" and
> "file" when deciding if discard should be enabled.

That sounds entirely bogus; it should be checking for some sort of
feature-discard.
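As an illustration of that suggestion, a small probe along these lines could decide discard support from the backend's advertised capability instead of its implementation type (this is a sketch only, not the actual blkfront code; `XS_READ` is parameterized so the real `xenstore-read` tool can be substituted or stubbed, and the backend path layout follows the usual vbd convention):

```shell
# Sketch: decide discard support from the backend's advertised
# feature-discard node rather than from the "type" string ("phy"/"file"),
# which breaks whenever a new backend type such as "qdisk" appears.
# XS_READ defaults to the real xenstore-read binary but can be overridden.
XS_READ="${XS_READ:-xenstore-read}"

discard_supported() {
    backend_path="$1"   # e.g. /local/domain/0/backend/vbd/2/768
    # Treat a missing or unreadable node as "no discard support".
    val=$("$XS_READ" "$backend_path/feature-discard" 2>/dev/null) || val=0
    [ "$val" = "1" ]
}
```

With this approach a "qdisk" backend that advertises feature-discard gets discard enabled, and the guest never needs to care about the backend's implementation type.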

> I wonder if that guest-visible change was well thought out.
> 
> Olaf
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:52:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dRL-00046d-IZ; Fri, 10 Jan 2014 14:52:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1dRK-00046V-G5
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:52:02 +0000
Received: from [193.109.254.147:36540] by server-15.bemta-14.messagelabs.com
	id DD/C5-22186-11900D25; Fri, 10 Jan 2014 14:52:01 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389365520!8592681!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29113 invoked from network); 10 Jan 2014 14:52:01 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 14:52:01 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:52937 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1dGL-0007VK-Df; Fri, 10 Jan 2014 15:40:41 +0100
Date: Fri, 10 Jan 2014 15:51:57 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1447395332.20140110155157@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Normally I never reattach PCI devices to dom0, but at the moment I have a use for it.

It seems, though, that pci-detach isn't completely detaching the device from the guest.

- Say I have an HVM guest with domid=2 and a PCI device passed through with BDF 00:19.0; the device is hidden at boot with xen-pciback.hide=(00:19.0) in grub.

- Now I run "xl pci-assignable-list".
  This returns nothing, which is correct since all hidden devices have already been assigned to guests.

- Then I run "xl -v pci-detach 2 00:19.0",
  which also returns nothing ...

- Now I run "xl pci-assignable-list" again.
  This returns:
  "0000:00:19.0"
  So pci-detach does seem to have done *something* :-)

- But when I then try to hand the device back from pciback to dom0 with "xl pci-assignable-remove 00:19.0", it gives an error
  and later some stack traces ..

  xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
  xen_pciback: ****** driver domain may still access this device's i/o resources!
  xen_pciback: ****** shutdown driver domain before binding device
  xen_pciback: ****** to other drivers of domains


When I shut the guest down instead of using pci-detach, "xl pci-assignable-remove" works fine and I can rebind the device to its driver in dom0.
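For what it's worth, the sequence above can be wrapped in a small guard script (a sketch only, built from the commands in this report; `XL` is parameterized so the real `xl` can be substituted or stubbed):

```shell
# Sketch: run the detach sequence and only attempt pci-assignable-remove
# once the BDF actually shows up in pci-assignable-list, propagating xl's
# exit status instead of silently continuing.
XL="${XL:-xl}"

detach_and_reclaim() {
    domid="$1"
    bdf="$2"
    "$XL" pci-detach "$domid" "$bdf" || return 1
    # pci-detach returning success does not prove the guest released the
    # device; confirm it is listed as assignable before reclaiming it.
    "$XL" pci-assignable-list | grep -q "$bdf" || return 1
    "$XL" pci-assignable-remove "$bdf"
}
```

For example, `detach_and_reclaim 2 0000:00:19.0` — though, as reported above, pci-assignable-remove may still fail while the guest is running.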

So am I misreading the wiki, or is it just not possible to detach a device from a running domain, or ... ?

For reference, this is with xen-unstable and a 3.13-rc7 kernel.

--
Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:52:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:52:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dRZ-00048D-03; Fri, 10 Jan 2014 14:52:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1dRX-00047q-1X
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:52:15 +0000
Received: from [85.158.137.68:12915] by server-8.bemta-3.messagelabs.com id
	AC/95-31081-E1900D25; Fri, 10 Jan 2014 14:52:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389365533!8381726!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23417 invoked from network); 10 Jan 2014 14:52:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 14:52:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 14:52:12 +0000
Message-Id: <52D0172A02000078001126C9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 14:52:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<52CFEA170200007800112465@nat28.tlf.novell.com>
	<20140110114643.GF29180@zion.uk.xensource.com>
	<52CFE496.5090906@citrix.com>
	<52CFF59402000078001124DC@nat28.tlf.novell.com>
	<20140110140344.GA30581@zion.uk.xensource.com>
	<52D00D190200007800112603@nat28.tlf.novell.com>
	<20140110141517.GB30581@zion.uk.xensource.com>
In-Reply-To: <20140110141517.GB30581@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	XiantaoZhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 15:15, Wei Liu <wei.liu2@citrix.com> wrote:
> On Fri, Jan 10, 2014 at 02:09:12PM +0000, Jan Beulich wrote:
>> >>> On 10.01.14 at 15:03, Wei Liu <wei.liu2@citrix.com> wrote:
>> > On Fri, Jan 10, 2014 at 12:28:52PM +0000, Jan Beulich wrote:
>> >> >>> On 10.01.14 at 13:16, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> >> > On 10/01/2014 11:46, Wei Liu wrote:
>> >> >> On Fri, Jan 10, 2014 at 11:39:51AM +0000, Jan Beulich wrote:
>> >> >>>>>> On 10.01.14 at 12:27, Wei Liu <wei.liu2@citrix.com> wrote:
>> >> >>>> +    /* We cannot have PoD and PCI device assignment at the same time
>> >> >>>> +     * for HVM guest. It was reported that VT-d engine cannot
>> >> >>> There's still a "VT-d" left in here...
>> >> >>>
>> >> >> I mentioned AMD as well. Was trying to clarify things a bit more...
>> >> > 
>> >> > Use "IOMMU"/"PCI Passthrough"/etc as appropriate, which are vendor 
> neutral.
>> >> > 
>> >> > There is no way for for PoD (or Paging for that matter) to work in
>> >> > combination with PCIPassthrough, as you need all the backing RAM for all
>> >> > gfns to exist to receive DMA.
>> >> 
>> >> That's going a little too far: If IOMMU faults were recoverable, dealing
>> >> with non-present pages would become possible (with other caveats
>> >> of course). So this is not a fundamental attribute of IOMMUs, but
>> >> there doesn't seem to be any reason to believe that the current
>> >> model would change any time soon for either of the vendors.
>> >> 
>> > 
>> > Do you have suggestion on how this comment should be phrased?
>> 
>> Just say "IOMMU" or "DMA remapping" instead of "VT-d".
>> 
> 
> OK, like this:
> 
> /* We cannot have PoD and PCI device assignment at the same time for an
>  * HVM guest. It was reported that the IOMMU cannot work with PoD enabled
>  * because it needs the entire page table populated for the guest. To stay
>  * on the safe side, we disable PCI device assignment when PoD is enabled.
>  */

Sounds fine to me. As does Ian's suggestion.

Jan
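[Editorial note: the libxl-level guard being discussed can be sketched as pure logic. This is a hedged sketch only, not the actual patch; the function names and the way PoD is inferred from target vs. maximum memory are assumptions for illustration.]

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: in xl terms, PoD is in use for an HVM guest whenever the
 * boot target ("memory=") is below the static maximum ("maxmem="). */
static int pod_enabled(uint64_t target_memkb, uint64_t max_memkb)
{
    return target_memkb < max_memkb;
}

/* Hypothetical guard mirroring the check under discussion: refuse to
 * build a domain that combines PoD with PCI device assignment, since the
 * IOMMU needs every gfn backed by RAM to receive DMA. Returns 0 when the
 * configuration is acceptable, -1 when the combination is rejected. */
static int check_pod_vs_passthrough(uint64_t target_memkb,
                                    uint64_t max_memkb,
                                    int num_pcidevs)
{
    if (pod_enabled(target_memkb, max_memkb) && num_pcidevs > 0)
        return -1;
    return 0;
}
```

With memory=128, maxmem=256 and one assigned device this rejects the configuration; with memory equal to maxmem, assignment is allowed.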


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 14:58:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 14:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dXL-0004PK-Ve; Fri, 10 Jan 2014 14:58:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1dXK-0004PF-J6
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 14:58:14 +0000
Received: from [85.158.143.35:19688] by server-3.bemta-4.messagelabs.com id
	BB/73-32360-58A00D25; Fri, 10 Jan 2014 14:58:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389365892!10903834!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28664 invoked from network); 10 Jan 2014 14:58:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 14:58:13 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AEwA8J003008
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 14:58:10 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AEw9OE003870
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 14:58:09 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AEw9hO003861; Fri, 10 Jan 2014 14:58:09 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 06:58:08 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0265B1C18DC; Fri, 10 Jan 2014 09:58:07 -0500 (EST)
Date: Fri, 10 Jan 2014 09:58:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110145807.GB19124@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389355182.19142.38.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> create ^
> owner Wei Liu <wei.liu2@citrix.com>
> thanks
> 
> On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > When I have following configuration in HVM config file:
> >   memory=128
> >   maxmem=256
> > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > 
> > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > 
> > With claim_mode=0, I can successfully create an HVM guest.
> 
> Is it trying to claim 256M instead of 128M? (although the likelihood

No. 128MB actually.

> that you only have 128-255M free is quite low, or are you
> autoballooning?)

This patch fixes it for me. It basically sets the number of pages
claimed to 'maxmem' instead of 'memory' for PoD.

I don't know PoD very well, and this claim is only valid during the
allocation of the guest's memory - so the 'target_pages' value might be
the wrong one. However, looking at the hypervisor's
'p2m_pod_set_mem_target' I see this comment:

 316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
 317  *   entries.  The balloon driver will deflate the balloon to give back
 318  *   the remainder of the ram to the guest OS.

Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
It is then the responsibility of the balloon driver to give the memory
back (and this is where 'static-max' et al come into play to tell the
balloon driver to balloon out).


diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..65e9577 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
 
     /* try to claim pages for early warning of insufficient memory available */
     if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
+        unsigned long nr = nr_pages - cur_pages;
+
+        if ( pod_mode )
+            nr = target_pages - 0x20;
+
+        rc = xc_domain_claim_pages(xch, dom, nr);
         if ( rc != 0 )
         {
             PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
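[Editorial note: the arithmetic in the patch above can be illustrated standalone. A sketch under the patch's own variable names: nr_pages is the total page count, cur_pages the pages already populated before the claim, target_pages the PoD boot target; the 0x20-page subtraction is taken directly from the patch and is assumed to mirror the VGA hole adjustment made elsewhere in setup_guest().]

```c
#include <assert.h>

/* Sketch of the claim-size computation from the patch above: without
 * PoD, claim whatever remains to be populated; with PoD, claim only up
 * to the boot target, minus the 0x20 pages handled separately. */
static unsigned long claim_pages(unsigned long nr_pages,
                                 unsigned long cur_pages,
                                 unsigned long target_pages,
                                 int pod_mode)
{
    unsigned long nr = nr_pages - cur_pages;

    if (pod_mode)
        nr = target_pages - 0x20;

    return nr;
}
```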

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:00:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dZO-0005Dv-9l; Fri, 10 Jan 2014 15:00:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1dZL-0005Dd-QH
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:00:20 +0000
Received: from [85.158.139.211:11987] by server-13.bemta-5.messagelabs.com id
	41/3C-11357-30B00D25; Fri, 10 Jan 2014 15:00:19 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389366018!9008513!1
X-Originating-IP: [81.169.146.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23829 invoked from network); 10 Jan 2014 15:00:18 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.179)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:00:18 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389366018; l=1108;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=z4jKMEyd4evZbdRQQ2GlKy8vhAo=;
	b=Cxu29Oan5qPZWDr6nlgtfJ6JSR+6RDpNWbIPk4JL7t4j8z3uLd23Js1R1lMfoeXK6Ad
	JloU8aqsBIoGiD/HyPRmZYsMWP6cfWLn2Nsf+cY/s5NhRBduEwWFtGQ32Xvub9rA7r96E
	Pfr/URPFkYmjgacpipBbwtM6DoxNLWxrDFM=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.20 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id q06ed8q0AF0A0QP ; 
	Fri, 10 Jan 2014 16:00:10 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id C15E85024C; Fri, 10 Jan 2014 16:00:00 +0100 (CET)
Date: Fri, 10 Jan 2014 16:00:00 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Li Dongyang <lidongyang@novell.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110150000.GA20287@aepfle.de>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389365236.19142.54.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, Ian Campbell wrote:

> On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> > What is the reason the backend 'type' property of a configured disk is
> > now "qdisk" instead of "file"?
> 
> Because qdisk is the backend instead of loop+blk (==file) I think this
> just happens naturally.
> 
> >  Would the guest really care about that
> > detail?  For example block-front currently just checks for "phy" and
> > "file" when deciding if discard should be enabled.
> 
> That sounds entirely bogus, it should be checking for some sort of
> feature-discard.

It does that, then calls blkfront_setup_discard, which in turn knows just
about phy and file, and I wonder why it does that.
Maybe this function should be simplified to assume that if it is called,
feature_discard can be enabled. And if both
discard-granularity/discard-alignment exist, those properties should be
assigned; similarly for discard-secure.
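[Editorial note: the simplification described above could look roughly like this. The xenstore key names (feature-discard, discard-granularity, discard-alignment, discard-secure) are part of the blkif protocol, but the helper below and its pre-parsed inputs are hypothetical, standing in for the real xenbus reads in blkfront.]

```c
#include <assert.h>

/* Hypothetical, pre-parsed view of the backend's xenstore keys; in the
 * real frontend these would come from xenbus reads. A negative value
 * means the key was absent. */
struct discard_keys {
    long granularity;   /* "discard-granularity", -1 if absent */
    long alignment;     /* "discard-alignment",   -1 if absent */
    int secure;         /* "discard-secure": 0/1, -1 if absent */
};

struct discard_info {
    int enabled;
    int secure;
    long granularity;
    long alignment;
};

/* Sketch of the simplified setup: never look at 'type'; enable discard
 * unconditionally because the caller already saw feature-discard = 1,
 * pick up the optional granularity/alignment pair only when both keys
 * exist, and enable secure discard when advertised. */
static void setup_discard(const struct discard_keys *k,
                          struct discard_info *out)
{
    out->enabled = 1;                 /* caller saw feature-discard */
    out->secure = (k->secure == 1);
    if (k->granularity >= 0 && k->alignment >= 0) {
        out->granularity = k->granularity;
        out->alignment = k->alignment;
    } else {
        out->granularity = 0;
        out->alignment = 0;
    }
}
```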

Now that I look at the history of blkfront_setup_discard:

 Li, Konrad, why does that function care at all about the 'type'?
 Shouldn't that check be removed?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:00:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dZq-0005Hf-1F; Fri, 10 Jan 2014 15:00:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Gortmaker@windriver.com>) id 1W1dPg-000449-4A
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 14:50:20 +0000
Received: from [193.109.254.147:64015] by server-13.bemta-14.messagelabs.com
	id 5F/84-19374-BA800D25; Fri, 10 Jan 2014 14:50:19 +0000
X-Env-Sender: Paul.Gortmaker@windriver.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389365416!10069931!1
X-Originating-IP: [147.11.146.13]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14368 invoked from network); 10 Jan 2014 14:50:18 -0000
Received: from mail1.windriver.com (HELO mail1.windriver.com) (147.11.146.13)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 14:50:18 -0000
Received: from ALA-HCA.corp.ad.wrs.com (ala-hca.corp.ad.wrs.com
	[147.11.189.40])
	by mail1.windriver.com (8.14.5/8.14.5) with ESMTP id s0AEoEdp019652
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Fri, 10 Jan 2014 06:50:14 -0800 (PST)
Received: from yow-asskicker.wrs.com (128.224.146.66) by
	ALA-HCA.corp.ad.wrs.com (147.11.189.40) with Microsoft SMTP Server id
	14.2.347.0; Fri, 10 Jan 2014 06:50:12 -0800
From: Paul Gortmaker <paul.gortmaker@windriver.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 10 Jan 2014 09:50:08 -0500
Message-ID: <1389365408-24034-1-git-send-email-paul.gortmaker@windriver.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 10 Jan 2014 15:00:48 +0000
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH-next] xen: delete new instances of __cpuinit
	usage
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit 1fe565517b57676884349dccfd6ce853ec338636 ("xen/events: use
the FIFO-based ABI if available") added new instances of __cpuinit
macro usage.

We removed __cpuinit a couple of releases ago; now we want to remove
the compat no-op stubs.  Introducing new users is not what
we want to see at this point, as they will break once
the stubs are gone.

Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---

[Patch based on today's linux-next tree, so if xen/tip.git#linux-next
 gets rebased, then the above commit ID will be invalidated; if it
 _does_ get rebased, then feel free to squash this fix into the orig.]

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 5b2c039f16c5..1de2a191b395 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -359,7 +359,7 @@ static const struct evtchn_ops evtchn_ops_fifo = {
 	.resume            = evtchn_fifo_resume,
 };
 
-static int __cpuinit evtchn_fifo_init_control_block(unsigned cpu)
+static int evtchn_fifo_init_control_block(unsigned cpu)
 {
 	struct page *control_block = NULL;
 	struct evtchn_init_control init_control;
@@ -386,7 +386,7 @@ static int __cpuinit evtchn_fifo_init_control_block(unsigned cpu)
 	return ret;
 }
 
-static int __cpuinit evtchn_fifo_cpu_notification(struct notifier_block *self,
+static int evtchn_fifo_cpu_notification(struct notifier_block *self,
 						  unsigned long action,
 						  void *hcpu)
 {
@@ -404,7 +404,7 @@ static int __cpuinit evtchn_fifo_cpu_notification(struct notifier_block *self,
 	return ret < 0 ? NOTIFY_BAD : NOTIFY_OK;
 }
 
-static struct notifier_block evtchn_fifo_cpu_notifier __cpuinitdata = {
+static struct notifier_block evtchn_fifo_cpu_notifier = {
 	.notifier_call	= evtchn_fifo_cpu_notification,
 };
 
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:04:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dcv-0005bc-0L; Fri, 10 Jan 2014 15:04:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dcs-0005bQ-KR
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:03:58 +0000
Received: from [85.158.139.211:6308] by server-15.bemta-5.messagelabs.com id
	30/3B-08490-DDB00D25; Fri, 10 Jan 2014 15:03:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389366235!7822584!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25348 invoked from network); 10 Jan 2014 15:03:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:03:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89557178"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:03:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:03:54 -0500
Message-ID: <1389366233.19142.60.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Date: Fri, 10 Jan 2014 15:03:53 +0000
In-Reply-To: <52D002D6.7090306@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 09:25 -0500, Ross Philipson wrote:
> So I guess that is the question: where to put it? Should it be merged 
> into another lib or be brought in as a separate lib or utility?

I guess the choices are libxc, libxl, libxlutils or an entirely new lib?

At what level would we expect this to be used? Would it be internal to
e.g. libxl (in which case libxc or libxl would be appropriate) or would
toolstacks be expected to use it and pass the result to libxl? If that
is the case then libxc is inappropriate (users of libxl are not supposed
to have to deal with libxc directly).

Fitting in with libxl proper would require the API to look a certain way
(take a context etc), so perhaps libxlu would be more appropriate,
alongside the disk syntax parser etc?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:08:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dh0-0005rj-13; Fri, 10 Jan 2014 15:08:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1dgy-0005rd-8h
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:08:12 +0000
Received: from [85.158.143.35:52180] by server-1.bemta-4.messagelabs.com id
	77/BC-02132-BDC00D25; Fri, 10 Jan 2014 15:08:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389366481!10910351!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27201 invoked from network); 10 Jan 2014 15:08:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:08:02 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AF7g3r012446
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:07:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AF7ecA024218
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:07:41 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AF7euI002515; Fri, 10 Jan 2014 15:07:40 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:07:39 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 51A6D1C18DC; Fri, 10 Jan 2014 10:07:38 -0500 (EST)
Date: Fri, 10 Jan 2014 10:07:38 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>, david.vrabel@citrix.com
Message-ID: <20140110150738.GC19124@phenom.dumpdata.com>
References: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, linux-kernel@vger.kernel.org,
	linux-next@vger.kernel.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] randconfig build error with next-20140108,
 in drivers/xen/platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 03:32:00PM -0700, Jim Davis wrote:
> Building with the attached random configuration file,
> 
> drivers/xen/platform-pci.c: In function ‘platform_pci_init’:
> drivers/xen/platform-pci.c:131:2: error: implicit declaration of
> function ‘pci_request_region’ [-Werror=implicit-function-declaration]
>   ret = pci_request_region(pdev, 1, DRV_NAME);
>   ^
> drivers/xen/platform-pci.c:170:2: error: implicit declaration of
> function ‘pci_release_region’ [-Werror=implicit-function-declaration]
>   pci_release_region(pdev, 0);
>   ^
> cc1: some warnings being treated as errors
> make[2]: *** [drivers/xen/platform-pci.o] Error 1
> 
> These warnings appeared too:
> 
> warning: (XEN_PVH) selects XEN_PVHVM which has unmet direct
> dependencies (HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC)

Hey Jim,

This fix works for me:


diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index d88bfd6..01b9026 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -53,6 +53,5 @@ config XEN_DEBUG_FS
 
 config XEN_PVH
 	bool "Support for running as a PVH guest"
-	depends on X86_64 && XEN
-	select XEN_PVHVM
+	depends on X86_64 && XEN && XEN_PVHVM
 	def_bool n

David, you OK with that? You suggested to use 'select' in the patchset
instead of 'depends' and this throws away your suggestion.

FYI, the reason for this failure is that CONFIG_PCI in the below
config file is not at all defined. But CONFIG_XEN_PVH does end
up being enabled with CONFIG_XEN_PVHVM being enabled (with
its dependencies not being fulfilled).

Another way to fix this is to duplicate the 'depends' that
CONFIG_XEN_PVHVM has, but why bother when CONFIG_XEN_PVHVM
already does that.


> #
> # Automatically generated file; DO NOT EDIT.
> # Linux/x86 3.13.0-rc7 Kernel Configuration
> #
> CONFIG_64BIT=y
> CONFIG_X86_64=y
> CONFIG_X86=y
> CONFIG_INSTRUCTION_DECODER=y
> CONFIG_OUTPUT_FORMAT="elf64-x86-64"
> CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_STACKTRACE_SUPPORT=y
> CONFIG_HAVE_LATENCYTOP_SUPPORT=y
> CONFIG_MMU=y
> CONFIG_NEED_DMA_MAP_STATE=y
> CONFIG_NEED_SG_DMA_LENGTH=y
> CONFIG_GENERIC_ISA_DMA=y
> CONFIG_GENERIC_HWEIGHT=y
> CONFIG_ARCH_MAY_HAVE_PC_FDC=y
> CONFIG_RWSEM_XCHGADD_ALGORITHM=y
> CONFIG_GENERIC_CALIBRATE_DELAY=y
> CONFIG_ARCH_HAS_CPU_RELAX=y
> CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
> CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
> CONFIG_HAVE_SETUP_PER_CPU_AREA=y
> CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
> CONFIG_ARCH_HIBERNATION_POSSIBLE=y
> CONFIG_ARCH_SUSPEND_POSSIBLE=y
> CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
> CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
> CONFIG_ZONE_DMA32=y
> CONFIG_AUDIT_ARCH=y
> CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
> CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
> CONFIG_X86_64_SMP=y
> CONFIG_X86_HT=y
> CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
> CONFIG_ARCH_SUPPORTS_UPROBES=y
> CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
> CONFIG_IRQ_WORK=y
> CONFIG_BUILDTIME_EXTABLE_SORT=y
> 
> #
> # General setup
> #
> CONFIG_INIT_ENV_ARG_LIMIT=32
> CONFIG_CROSS_COMPILE=""
> CONFIG_COMPILE_TEST=y
> CONFIG_LOCALVERSION=""
> # CONFIG_LOCALVERSION_AUTO is not set
> CONFIG_HAVE_KERNEL_GZIP=y
> CONFIG_HAVE_KERNEL_BZIP2=y
> CONFIG_HAVE_KERNEL_LZMA=y
> CONFIG_HAVE_KERNEL_XZ=y
> CONFIG_HAVE_KERNEL_LZO=y
> CONFIG_HAVE_KERNEL_LZ4=y
> CONFIG_KERNEL_GZIP=y
> # CONFIG_KERNEL_BZIP2 is not set
> # CONFIG_KERNEL_LZMA is not set
> # CONFIG_KERNEL_XZ is not set
> # CONFIG_KERNEL_LZO is not set
> # CONFIG_KERNEL_LZ4 is not set
> CONFIG_DEFAULT_HOSTNAME="(none)"
> CONFIG_SWAP=y
> CONFIG_SYSVIPC=y
> CONFIG_POSIX_MQUEUE=y
> # CONFIG_FHANDLE is not set
> CONFIG_AUDIT=y
> CONFIG_AUDITSYSCALL=y
> CONFIG_AUDIT_WATCH=y
> CONFIG_AUDIT_TREE=y
> 
> #
> # IRQ subsystem
> #
> CONFIG_GENERIC_IRQ_PROBE=y
> CONFIG_GENERIC_IRQ_SHOW=y
> CONFIG_GENERIC_PENDING_IRQ=y
> CONFIG_IRQ_DOMAIN=y
> # CONFIG_IRQ_DOMAIN_DEBUG is not set
> CONFIG_IRQ_FORCED_THREADING=y
> CONFIG_SPARSE_IRQ=y
> CONFIG_CLOCKSOURCE_WATCHDOG=y
> CONFIG_ARCH_CLOCKSOURCE_DATA=y
> CONFIG_GENERIC_TIME_VSYSCALL=y
> CONFIG_GENERIC_CLOCKEVENTS=y
> CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
> CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
> CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
> CONFIG_GENERIC_CMOS_UPDATE=y
> 
> #
> # Timers subsystem
> #
> CONFIG_TICK_ONESHOT=y
> CONFIG_NO_HZ_COMMON=y
> # CONFIG_HZ_PERIODIC is not set
> CONFIG_NO_HZ_IDLE=y
> # CONFIG_NO_HZ_FULL is not set
> CONFIG_NO_HZ=y
> CONFIG_HIGH_RES_TIMERS=y
> 
> #
> # CPU/Task time and stats accounting
> #
> CONFIG_TICK_CPU_ACCOUNTING=y
> # CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
> # CONFIG_IRQ_TIME_ACCOUNTING is not set
> CONFIG_BSD_PROCESS_ACCT=y
> CONFIG_BSD_PROCESS_ACCT_V3=y
> CONFIG_TASKSTATS=y
> CONFIG_TASK_DELAY_ACCT=y
> # CONFIG_TASK_XACCT is not set
> 
> #
> # RCU Subsystem
> #
> CONFIG_TREE_RCU=y
> # CONFIG_PREEMPT_RCU is not set
> CONFIG_RCU_STALL_COMMON=y
> CONFIG_CONTEXT_TRACKING=y
> CONFIG_RCU_USER_QS=y
> CONFIG_CONTEXT_TRACKING_FORCE=y
> CONFIG_RCU_FANOUT=64
> CONFIG_RCU_FANOUT_LEAF=16
> CONFIG_RCU_FANOUT_EXACT=y
> # CONFIG_RCU_FAST_NO_HZ is not set
> CONFIG_TREE_RCU_TRACE=y
> # CONFIG_RCU_NOCB_CPU is not set
> CONFIG_IKCONFIG=y
> # CONFIG_IKCONFIG_PROC is not set
> CONFIG_LOG_BUF_SHIFT=17
> CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
> CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> CONFIG_ARCH_SUPPORTS_INT128=y
> CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
> CONFIG_ARCH_USES_NUMA_PROT_NONE=y
> # CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is not set
> CONFIG_NUMA_BALANCING=y
> CONFIG_CGROUPS=y
> # CONFIG_CGROUP_DEBUG is not set
> # CONFIG_CGROUP_FREEZER is not set
> # CONFIG_CGROUP_DEVICE is not set
> # CONFIG_CPUSETS is not set
> CONFIG_CGROUP_CPUACCT=y
> # CONFIG_RESOURCE_COUNTERS is not set
> # CONFIG_CGROUP_PERF is not set
> CONFIG_CGROUP_SCHED=y
> CONFIG_FAIR_GROUP_SCHED=y
> # CONFIG_CFS_BANDWIDTH is not set
> CONFIG_RT_GROUP_SCHED=y
> # CONFIG_BLK_CGROUP is not set
> CONFIG_CHECKPOINT_RESTORE=y
> # CONFIG_NAMESPACES is not set
> # CONFIG_UIDGID_STRICT_TYPE_CHECKS is not set
> CONFIG_SCHED_AUTOGROUP=y
> CONFIG_SYSFS_DEPRECATED=y
> # CONFIG_SYSFS_DEPRECATED_V2 is not set
> CONFIG_RELAY=y
> CONFIG_BLK_DEV_INITRD=y
> CONFIG_INITRAMFS_SOURCE=""
> CONFIG_RD_GZIP=y
> CONFIG_RD_BZIP2=y
> # CONFIG_RD_LZMA is not set
> # CONFIG_RD_XZ is not set
> CONFIG_RD_LZO=y
> CONFIG_RD_LZ4=y
> CONFIG_CC_OPTIMIZE_FOR_SIZE=y
> CONFIG_ANON_INODES=y
> CONFIG_SYSCTL_EXCEPTION_TRACE=y
> CONFIG_HAVE_PCSPKR_PLATFORM=y
> CONFIG_EXPERT=y
> CONFIG_KALLSYMS=y
> CONFIG_KALLSYMS_ALL=y
> # CONFIG_PRINTK is not set
> # CONFIG_BUG is not set
> CONFIG_ELF_CORE=y
> CONFIG_PCSPKR_PLATFORM=y
> CONFIG_BASE_FULL=y
> # CONFIG_FUTEX is not set
> CONFIG_EPOLL=y
> # CONFIG_SIGNALFD is not set
> CONFIG_TIMERFD=y
> CONFIG_EVENTFD=y
> # CONFIG_SHMEM is not set
> # CONFIG_AIO is not set
> # CONFIG_EMBEDDED is not set
> CONFIG_HAVE_PERF_EVENTS=y
> CONFIG_PERF_USE_VMALLOC=y
> 
> #
> # Kernel Performance Events And Counters
> #
> CONFIG_PERF_EVENTS=y
> CONFIG_DEBUG_PERF_USE_VMALLOC=y
> # CONFIG_VM_EVENT_COUNTERS is not set
> # CONFIG_SLUB_DEBUG is not set
> # CONFIG_COMPAT_BRK is not set
> # CONFIG_SLAB is not set
> CONFIG_SLUB=y
> # CONFIG_SLOB is not set
> CONFIG_SLUB_CPU_PARTIAL=y
> CONFIG_PROFILING=y
> CONFIG_TRACEPOINTS=y
> CONFIG_OPROFILE=y
> CONFIG_OPROFILE_EVENT_MULTIPLEX=y
> CONFIG_HAVE_OPROFILE=y
> CONFIG_OPROFILE_NMI_TIMER=y
> # CONFIG_JUMP_LABEL is not set
> # CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
> CONFIG_ARCH_USE_BUILTIN_BSWAP=y
> CONFIG_USER_RETURN_NOTIFIER=y
> CONFIG_HAVE_IOREMAP_PROT=y
> CONFIG_HAVE_KPROBES=y
> CONFIG_HAVE_KRETPROBES=y
> CONFIG_HAVE_OPTPROBES=y
> CONFIG_HAVE_KPROBES_ON_FTRACE=y
> CONFIG_HAVE_ARCH_TRACEHOOK=y
> CONFIG_HAVE_DMA_ATTRS=y
> CONFIG_GENERIC_SMP_IDLE_THREAD=y
> CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
> CONFIG_HAVE_DMA_API_DEBUG=y
> CONFIG_HAVE_HW_BREAKPOINT=y
> CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
> CONFIG_HAVE_USER_RETURN_NOTIFIER=y
> CONFIG_HAVE_PERF_EVENTS_NMI=y
> CONFIG_HAVE_PERF_REGS=y
> CONFIG_HAVE_PERF_USER_STACK_DUMP=y
> CONFIG_HAVE_ARCH_JUMP_LABEL=y
> CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
> CONFIG_HAVE_CMPXCHG_LOCAL=y
> CONFIG_HAVE_CMPXCHG_DOUBLE=y
> CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
> CONFIG_HAVE_CC_STACKPROTECTOR=y
> # CONFIG_CC_STACKPROTECTOR is not set
> CONFIG_CC_STACKPROTECTOR_NONE=y
> # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
> # CONFIG_CC_STACKPROTECTOR_STRONG is not set
> CONFIG_HAVE_CONTEXT_TRACKING=y
> CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
> CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
> CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
> CONFIG_HAVE_ARCH_SOFT_DIRTY=y
> CONFIG_MODULES_USE_ELF_RELA=y
> CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
> 
> #
> # GCOV-based kernel profiling
> #
> # CONFIG_GCOV_KERNEL is not set
> # CONFIG_HAVE_GENERIC_DMA
X0NPSEVSRU5UIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRfTVVURVhFUz15Cj4gQ09ORklHX0JBU0Vf
U01BTEw9MAo+ICMgQ09ORklHX01PRFVMRVMgaXMgbm90IHNldAo+IENPTkZJR19TVE9QX01BQ0hJ
TkU9eQo+IENPTkZJR19CTE9DSz15Cj4gQ09ORklHX0JMS19ERVZfQlNHPXkKPiBDT05GSUdfQkxL
X0RFVl9CU0dMSUI9eQo+ICMgQ09ORklHX0JMS19ERVZfSU5URUdSSVRZIGlzIG5vdCBzZXQKPiBD
T05GSUdfQkxLX0NNRExJTkVfUEFSU0VSPXkKPiAKPiAjCj4gIyBQYXJ0aXRpb24gVHlwZXMKPiAj
Cj4gQ09ORklHX1BBUlRJVElPTl9BRFZBTkNFRD15Cj4gQ09ORklHX0FDT1JOX1BBUlRJVElPTj15
Cj4gQ09ORklHX0FDT1JOX1BBUlRJVElPTl9DVU1BTkE9eQo+ICMgQ09ORklHX0FDT1JOX1BBUlRJ
VElPTl9FRVNPWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUNPUk5fUEFSVElUSU9OX0lDUyBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfQUNPUk5fUEFSVElUSU9OX0FERlMgaXMgbm90IHNldAo+ICMgQ09O
RklHX0FDT1JOX1BBUlRJVElPTl9QT1dFUlRFQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0FDT1JOX1BB
UlRJVElPTl9SSVNDSVg9eQo+ICMgQ09ORklHX0FJWF9QQVJUSVRJT04gaXMgbm90IHNldAo+ICMg
Q09ORklHX09TRl9QQVJUSVRJT04gaXMgbm90IHNldAo+ICMgQ09ORklHX0FNSUdBX1BBUlRJVElP
TiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQVRBUklfUEFSVElUSU9OIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19NQUNfUEFSVElUSU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfTVNET1NfUEFSVElUSU9OPXkK
PiBDT05GSUdfQlNEX0RJU0tMQUJFTD15Cj4gIyBDT05GSUdfTUlOSVhfU1VCUEFSVElUSU9OIGlz
IG5vdCBzZXQKPiBDT05GSUdfU09MQVJJU19YODZfUEFSVElUSU9OPXkKPiBDT05GSUdfVU5JWFdB
UkVfRElTS0xBQkVMPXkKPiBDT05GSUdfTERNX1BBUlRJVElPTj15Cj4gQ09ORklHX0xETV9ERUJV
Rz15Cj4gQ09ORklHX1NHSV9QQVJUSVRJT049eQo+IENPTkZJR19VTFRSSVhfUEFSVElUSU9OPXkK
PiAjIENPTkZJR19TVU5fUEFSVElUSU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfS0FSTUFfUEFSVElU
SU9OPXkKPiAjIENPTkZJR19FRklfUEFSVElUSU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfU1lTVjY4
X1BBUlRJVElPTj15Cj4gQ09ORklHX0NNRExJTkVfUEFSVElUSU9OPXkKPiAKPiAjCj4gIyBJTyBT
Y2hlZHVsZXJzCj4gIwo+IENPTkZJR19JT1NDSEVEX05PT1A9eQo+IENPTkZJR19JT1NDSEVEX0RF
QURMSU5FPXkKPiAjIENPTkZJR19JT1NDSEVEX0NGUSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFRkFV
TFRfREVBRExJTkU9eQo+ICMgQ09ORklHX0RFRkFVTFRfTk9PUCBpcyBub3Qgc2V0Cj4gQ09ORklH
X0RFRkFVTFRfSU9TQ0hFRD0iZGVhZGxpbmUiCj4gQ09ORklHX1BSRUVNUFRfTk9USUZJRVJTPXkK
PiBDT05GSUdfUEFEQVRBPXkKPiBDT05GSUdfVU5JTkxJTkVfU1BJTl9VTkxPQ0s9eQo+IENPTkZJ
R19GUkVFWkVSPXkKPiAKPiAjCj4gIyBQcm9jZXNzb3IgdHlwZSBhbmQgZmVhdHVyZXMKPiAjCj4g
Q09ORklHX1pPTkVfRE1BPXkKPiBDT05GSUdfU01QPXkKPiAjIENPTkZJR19YODZfTVBQQVJTRSBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfWDg2X0VYVEVOREVEX1BMQVRGT1JNIGlzIG5vdCBzZXQKPiBD
T05GSUdfWDg2X1NVUFBPUlRTX01FTU9SWV9GQUlMVVJFPXkKPiAjIENPTkZJR19TQ0hFRF9PTUlU
X0ZSQU1FX1BPSU5URVIgaXMgbm90IHNldAo+IENPTkZJR19IWVBFUlZJU09SX0dVRVNUPXkKPiBD
T05GSUdfUEFSQVZJUlQ9eQo+ICMgQ09ORklHX1BBUkFWSVJUX0RFQlVHIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19QQVJBVklSVF9TUElOTE9DS1MgaXMgbm90IHNldAo+IENPTkZJR19YRU49eQo+ICMg
Q09ORklHX1hFTl9QUklWSUxFR0VEX0dVRVNUIGlzIG5vdCBzZXQKPiBDT05GSUdfWEVOX1BWSFZN
PXkKPiBDT05GSUdfWEVOX01BWF9ET01BSU5fTUVNT1JZPTUwMAo+IENPTkZJR19YRU5fU0FWRV9S
RVNUT1JFPXkKPiBDT05GSUdfWEVOX0RFQlVHX0ZTPXkKPiBDT05GSUdfWEVOX1BWSD15Cj4gIyBD
T05GSUdfS1ZNX0dVRVNUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19QQVJBVklSVF9USU1FX0FDQ09V
TlRJTkcgaXMgbm90IHNldAo+IENPTkZJR19QQVJBVklSVF9DTE9DSz15Cj4gQ09ORklHX05PX0JP
T1RNRU09eQo+ICMgQ09ORklHX01FTVRFU1QgaXMgbm90IHNldAo+ICMgQ09ORklHX01LOCBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfTVBTQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUNPUkUyIGlzIG5v
dCBzZXQKPiAjIENPTkZJR19NQVRPTSBpcyBub3Qgc2V0Cj4gQ09ORklHX0dFTkVSSUNfQ1BVPXkK
PiBDT05GSUdfWDg2X0lOVEVSTk9ERV9DQUNIRV9TSElGVD02Cj4gQ09ORklHX1g4Nl9MMV9DQUNI
RV9TSElGVD02Cj4gQ09ORklHX1g4Nl9UU0M9eQo+IENPTkZJR19YODZfQ01QWENIRzY0PXkKPiBD
T05GSUdfWDg2X0NNT1Y9eQo+IENPTkZJR19YODZfTUlOSU1VTV9DUFVfRkFNSUxZPTY0Cj4gQ09O
RklHX1g4Nl9ERUJVR0NUTE1TUj15Cj4gIyBDT05GSUdfUFJPQ0VTU09SX1NFTEVDVCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0NQVV9TVVBfSU5URUw9eQo+IENPTkZJR19DUFVfU1VQX0FNRD15Cj4gQ09O
RklHX0NQVV9TVVBfQ0VOVEFVUj15Cj4gQ09ORklHX0hQRVRfVElNRVI9eQo+IENPTkZJR19IUEVU
X0VNVUxBVEVfUlRDPXkKPiAjIENPTkZJR19ETUkgaXMgbm90IHNldAo+IENPTkZJR19TV0lPVExC
PXkKPiBDT05GSUdfSU9NTVVfSEVMUEVSPXkKPiBDT05GSUdfTUFYU01QPXkKPiBDT05GSUdfTlJf
Q1BVUz04MTkyCj4gQ09ORklHX1NDSEVEX1NNVD15Cj4gQ09ORklHX1NDSEVEX01DPXkKPiBDT05G
SUdfUFJFRU1QVF9OT05FPXkKPiAjIENPTkZJR19QUkVFTVBUX1ZPTFVOVEFSWSBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfUFJFRU1QVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1BSRUVNUFRfQ09VTlQ9eQo+
IENPTkZJR19YODZfTE9DQUxfQVBJQz15Cj4gQ09ORklHX1g4Nl9JT19BUElDPXkKPiAjIENPTkZJ
R19YODZfUkVST1VURV9GT1JfQlJPS0VOX0JPT1RfSVJRUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1g4
Nl9NQ0U9eQo+ICMgQ09ORklHX1g4Nl9NQ0VfSU5URUwgaXMgbm90IHNldAo+ICMgQ09ORklHX1g4
Nl9NQ0VfQU1EIGlzIG5vdCBzZXQKPiBDT05GSUdfWDg2X01DRV9JTkpFQ1Q9eQo+IENPTkZJR19J
OEs9eQo+ICMgQ09ORklHX01JQ1JPQ09ERSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUlDUk9DT0RF
X0lOVEVMX0VBUkxZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NSUNST0NPREVfQU1EX0VBUkxZIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19YODZfTVNSIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YODZfQ1BV
SUQgaXMgbm90IHNldAo+IENPTkZJR19BUkNIX1BIWVNfQUREUl9UXzY0QklUPXkKPiBDT05GSUdf
QVJDSF9ETUFfQUREUl9UXzY0QklUPXkKPiBDT05GSUdfRElSRUNUX0dCUEFHRVM9eQo+IENPTkZJ
R19OVU1BPXkKPiAjIENPTkZJR19OVU1BX0VNVSBpcyBub3Qgc2V0Cj4gQ09ORklHX05PREVTX1NI
SUZUPTEwCj4gQ09ORklHX0FSQ0hfU1BBUlNFTUVNX0VOQUJMRT15Cj4gQ09ORklHX0FSQ0hfU1BB
UlNFTUVNX0RFRkFVTFQ9eQo+IENPTkZJR19BUkNIX1NFTEVDVF9NRU1PUllfTU9ERUw9eQo+ICMg
Q09ORklHX0FSQ0hfTUVNT1JZX1BST0JFIGlzIG5vdCBzZXQKPiBDT05GSUdfSUxMRUdBTF9QT0lO
VEVSX1ZBTFVFPTB4ZGVhZDAwMDAwMDAwMDAwMAo+IENPTkZJR19TRUxFQ1RfTUVNT1JZX01PREVM
PXkKPiBDT05GSUdfU1BBUlNFTUVNX01BTlVBTD15Cj4gQ09ORklHX1NQQVJTRU1FTT15Cj4gQ09O
RklHX05FRURfTVVMVElQTEVfTk9ERVM9eQo+IENPTkZJR19IQVZFX01FTU9SWV9QUkVTRU5UPXkK
PiBDT05GSUdfU1BBUlNFTUVNX0VYVFJFTUU9eQo+IENPTkZJR19TUEFSU0VNRU1fVk1FTU1BUF9F
TkFCTEU9eQo+IENPTkZJR19TUEFSU0VNRU1fQUxMT0NfTUVNX01BUF9UT0dFVEhFUj15Cj4gQ09O
RklHX1NQQVJTRU1FTV9WTUVNTUFQPXkKPiBDT05GSUdfSEFWRV9NRU1CTE9DSz15Cj4gQ09ORklH
X0hBVkVfTUVNQkxPQ0tfTk9ERV9NQVA9eQo+IENPTkZJR19BUkNIX0RJU0NBUkRfTUVNQkxPQ0s9
eQo+IENPTkZJR19NRU1PUllfSVNPTEFUSU9OPXkKPiAjIENPTkZJR19NT1ZBQkxFX05PREUgaXMg
bm90IHNldAo+IENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RFPXkKPiBDT05GSUdfTUVNT1JZ
X0hPVFBMVUc9eQo+IENPTkZJR19NRU1PUllfSE9UUExVR19TUEFSU0U9eQo+IENPTkZJR19NRU1P
UllfSE9UUkVNT1ZFPXkKPiBDT05GSUdfUEFHRUZMQUdTX0VYVEVOREVEPXkKPiBDT05GSUdfU1BM
SVRfUFRMT0NLX0NQVVM9NAo+IENPTkZJR19BUkNIX0VOQUJMRV9TUExJVF9QTURfUFRMT0NLPXkK
PiAjIENPTkZJR19CQUxMT09OX0NPTVBBQ1RJT04gaXMgbm90IHNldAo+IENPTkZJR19DT01QQUNU
SU9OPXkKPiBDT05GSUdfTUlHUkFUSU9OPXkKPiBDT05GSUdfUEhZU19BRERSX1RfNjRCSVQ9eQo+
IENPTkZJR19aT05FX0RNQV9GTEFHPTEKPiAjIENPTkZJR19CT1VOQ0UgaXMgbm90IHNldAo+IENP
TkZJR19WSVJUX1RPX0JVUz15Cj4gQ09ORklHX01NVV9OT1RJRklFUj15Cj4gIyBDT05GSUdfS1NN
IGlzIG5vdCBzZXQKPiBDT05GSUdfREVGQVVMVF9NTUFQX01JTl9BRERSPTQwOTYKPiBDT05GSUdf
QVJDSF9TVVBQT1JUU19NRU1PUllfRkFJTFVSRT15Cj4gIyBDT05GSUdfTUVNT1JZX0ZBSUxVUkUg
aXMgbm90IHNldAo+IENPTkZJR19UUkFOU1BBUkVOVF9IVUdFUEFHRT15Cj4gQ09ORklHX1RSQU5T
UEFSRU5UX0hVR0VQQUdFX0FMV0FZUz15Cj4gIyBDT05GSUdfVFJBTlNQQVJFTlRfSFVHRVBBR0Vf
TUFEVklTRSBpcyBub3Qgc2V0Cj4gQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0g9eQo+ICMgQ09O
RklHX0NMRUFOQ0FDSEUgaXMgbm90IHNldAo+ICMgQ09ORklHX0ZST05UU1dBUCBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQ01BIGlzIG5vdCBzZXQKPiAjIENPTkZJR19aQlVEIGlzIG5vdCBzZXQKPiBD
T05GSUdfTUVNX1NPRlRfRElSVFk9eQo+IENPTkZJR19aU01BTExPQz15Cj4gQ09ORklHX1BHVEFC
TEVfTUFQUElORz15Cj4gQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJVUFRJT049eQo+ICMgQ09O
RklHX1g4Nl9CT09UUEFSQU1fTUVNT1JZX0NPUlJVUFRJT05fQ0hFQ0sgaXMgbm90IHNldAo+IENP
TkZJR19YODZfUkVTRVJWRV9MT1c9NjQKPiAjIENPTkZJR19NVFJSIGlzIG5vdCBzZXQKPiBDT05G
SUdfQVJDSF9SQU5ET009eQo+IENPTkZJR19YODZfU01BUD15Cj4gIyBDT05GSUdfU0VDQ09NUCBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfSFpfMTAwIGlzIG5vdCBzZXQKPiBDT05GSUdfSFpfMjUwPXkK
PiAjIENPTkZJR19IWl8zMDAgaXMgbm90IHNldAo+ICMgQ09ORklHX0haXzEwMDAgaXMgbm90IHNl
dAo+IENPTkZJR19IWj0yNTAKPiBDT05GSUdfU0NIRURfSFJUSUNLPXkKPiBDT05GSUdfS0VYRUM9
eQo+ICMgQ09ORklHX0NSQVNIX0RVTVAgaXMgbm90IHNldAo+ICMgQ09ORklHX0tFWEVDX0pVTVAg
aXMgbm90IHNldAo+IENPTkZJR19QSFlTSUNBTF9TVEFSVD0weDEwMDAwMDAKPiBDT05GSUdfUkVM
T0NBVEFCTEU9eQo+IENPTkZJR19QSFlTSUNBTF9BTElHTj0weDIwMDAwMAo+IENPTkZJR19IT1RQ
TFVHX0NQVT15Cj4gIyBDT05GSUdfQk9PVFBBUkFNX0hPVFBMVUdfQ1BVMCBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfREVCVUdfSE9UUExVR19DUFUwIGlzIG5vdCBzZXQKPiBDT05GSUdfQ01ETElORV9C
T09MPXkKPiBDT05GSUdfQ01ETElORT0iIgo+IENPTkZJR19DTURMSU5FX09WRVJSSURFPXkKPiBD
T05GSUdfQVJDSF9FTkFCTEVfTUVNT1JZX0hPVFBMVUc9eQo+IENPTkZJR19BUkNIX0VOQUJMRV9N
RU1PUllfSE9UUkVNT1ZFPXkKPiBDT05GSUdfVVNFX1BFUkNQVV9OVU1BX05PREVfSUQ9eQo+IAo+
ICMKPiAjIFBvd2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9ucwo+ICMKPiBDT05GSUdfQVJD
SF9ISUJFUk5BVElPTl9IRUFERVI9eQo+ICMgQ09ORklHX1NVU1BFTkQgaXMgbm90IHNldAo+IENP
TkZJR19ISUJFUk5BVEVfQ0FMTEJBQ0tTPXkKPiBDT05GSUdfSElCRVJOQVRJT049eQo+IENPTkZJ
R19QTV9TVERfUEFSVElUSU9OPSIiCj4gQ09ORklHX1BNX1NMRUVQPXkKPiBDT05GSUdfUE1fU0xF
RVBfU01QPXkKPiBDT05GSUdfUE1fQVVUT1NMRUVQPXkKPiAjIENPTkZJR19QTV9XQUtFTE9DS1Mg
aXMgbm90IHNldAo+IENPTkZJR19QTV9SVU5USU1FPXkKPiBDT05GSUdfUE09eQo+ICMgQ09ORklH
X1BNX0RFQlVHIGlzIG5vdCBzZXQKPiAjIENPTkZJR19XUV9QT1dFUl9FRkZJQ0lFTlRfREVGQVVM
VCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NGST15Cj4gCj4gIwo+ICMgQ1BVIEZyZXF1ZW5jeSBzY2Fs
aW5nCj4gIwo+ICMgQ09ORklHX0NQVV9GUkVRIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBDUFUgSWRs
ZQo+ICMKPiBDT05GSUdfQ1BVX0lETEU9eQo+ICMgQ09ORklHX0NQVV9JRExFX01VTFRJUExFX0RS
SVZFUlMgaXMgbm90IHNldAo+IENPTkZJR19DUFVfSURMRV9HT1ZfTEFEREVSPXkKPiBDT05GSUdf
Q1BVX0lETEVfR09WX01FTlU9eQo+ICMgQ09ORklHX0FSQ0hfTkVFRFNfQ1BVX0lETEVfQ09VUExF
RCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU5URUxfSURMRSBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
TWVtb3J5IHBvd2VyIHNhdmluZ3MKPiAjCj4gIyBDT05GSUdfSTczMDBfSURMRSBpcyBub3Qgc2V0
Cj4gCj4gIwo+ICMgQnVzIG9wdGlvbnMgKFBDSSBldGMuKQo+ICMKPiAjIENPTkZJR19QQ0kgaXMg
bm90IHNldAo+IENPTkZJR19JU0FfRE1BX0FQST15Cj4gQ09ORklHX1BDQ0FSRD15Cj4gQ09ORklH
X1BDTUNJQT15Cj4gQ09ORklHX1BDTUNJQV9MT0FEX0NJUz15Cj4gCj4gIwo+ICMgUEMtY2FyZCBi
cmlkZ2VzCj4gIwo+IENPTkZJR19YODZfU1lTRkI9eQo+IAo+ICMKPiAjIEV4ZWN1dGFibGUgZmls
ZSBmb3JtYXRzIC8gRW11bGF0aW9ucwo+ICMKPiBDT05GSUdfQklORk1UX0VMRj15Cj4gQ09ORklH
X0FSQ0hfQklORk1UX0VMRl9SQU5ET01JWkVfUElFPXkKPiBDT05GSUdfQ09SRV9EVU1QX0RFRkFV
TFRfRUxGX0hFQURFUlM9eQo+ICMgQ09ORklHX0JJTkZNVF9TQ1JJUFQgaXMgbm90IHNldAo+ICMg
Q09ORklHX0hBVkVfQU9VVCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQklORk1UX01JU0MgaXMgbm90
IHNldAo+IENPTkZJR19DT1JFRFVNUD15Cj4gIyBDT05GSUdfSUEzMl9FTVVMQVRJT04gaXMgbm90
IHNldAo+IENPTkZJR19YODZfREVWX0RNQV9PUFM9eQo+IENPTkZJR19ORVQ9eQo+IAo+ICMKPiAj
IE5ldHdvcmtpbmcgb3B0aW9ucwo+ICMKPiBDT05GSUdfUEFDS0VUPXkKPiBDT05GSUdfUEFDS0VU
X0RJQUc9eQo+IENPTkZJR19VTklYPXkKPiBDT05GSUdfVU5JWF9ESUFHPXkKPiAjIENPTkZJR19O
RVRfS0VZIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JTkVUIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVU
V09SS19TRUNNQVJLPXkKPiBDT05GSUdfTkVUV09SS19QSFlfVElNRVNUQU1QSU5HPXkKPiBDT05G
SUdfTkVURklMVEVSPXkKPiAjIENPTkZJR19ORVRGSUxURVJfREVCVUcgaXMgbm90IHNldAo+ICMg
Q09ORklHX05FVEZJTFRFUl9BRFZBTkNFRCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FUTT15Cj4gQ09O
RklHX0FUTV9MQU5FPXkKPiAjIENPTkZJR19CUklER0UgaXMgbm90IHNldAo+IENPTkZJR19IQVZF
X05FVF9EU0E9eQo+IENPTkZJR19ORVRfRFNBPXkKPiBDT05GSUdfTkVUX0RTQV9UQUdfRFNBPXkK
PiBDT05GSUdfTkVUX0RTQV9UQUdfRURTQT15Cj4gQ09ORklHX05FVF9EU0FfVEFHX1RSQUlMRVI9
eQo+IENPTkZJR19WTEFOXzgwMjFRPXkKPiAjIENPTkZJR19WTEFOXzgwMjFRX0dWUlAgaXMgbm90
IHNldAo+ICMgQ09ORklHX1ZMQU5fODAyMVFfTVZSUCBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQ05F
VD15Cj4gQ09ORklHX0RFQ05FVF9ST1VURVI9eQo+IENPTkZJR19MTEM9eQo+IENPTkZJR19MTEMy
PXkKPiBDT05GSUdfSVBYPXkKPiBDT05GSUdfSVBYX0lOVEVSTj15Cj4gIyBDT05GSUdfQVRBTEsg
aXMgbm90IHNldAo+ICMgQ09ORklHX1gyNSBpcyBub3Qgc2V0Cj4gQ09ORklHX0xBUEI9eQo+IENP
TkZJR19QSE9ORVQ9eQo+ICMgQ09ORklHX0lFRUU4MDIxNTQgaXMgbm90IHNldAo+IENPTkZJR19O
RVRfU0NIRUQ9eQo+IAo+ICMKPiAjIFF1ZXVlaW5nL1NjaGVkdWxpbmcKPiAjCj4gIyBDT05GSUdf
TkVUX1NDSF9DQlEgaXMgbm90IHNldAo+ICMgQ09ORklHX05FVF9TQ0hfSFRCIGlzIG5vdCBzZXQK
PiBDT05GSUdfTkVUX1NDSF9IRlNDPXkKPiBDT05GSUdfTkVUX1NDSF9BVE09eQo+ICMgQ09ORklH
X05FVF9TQ0hfUFJJTyBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9TQ0hfTVVMVElRPXkKPiBDT05G
SUdfTkVUX1NDSF9SRUQ9eQo+IENPTkZJR19ORVRfU0NIX1NGQj15Cj4gQ09ORklHX05FVF9TQ0hf
U0ZRPXkKPiBDT05GSUdfTkVUX1NDSF9URVFMPXkKPiAjIENPTkZJR19ORVRfU0NIX1RCRiBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfTkVUX1NDSF9HUkVEIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRf
U0NIX0RTTUFSSyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkVUX1NDSF9ORVRFTSBpcyBub3Qgc2V0
Cj4gQ09ORklHX05FVF9TQ0hfRFJSPXkKPiAjIENPTkZJR19ORVRfU0NIX01RUFJJTyBpcyBub3Qg
c2V0Cj4gQ09ORklHX05FVF9TQ0hfQ0hPS0U9eQo+IENPTkZJR19ORVRfU0NIX1FGUT15Cj4gQ09O
RklHX05FVF9TQ0hfQ09ERUw9eQo+ICMgQ09ORklHX05FVF9TQ0hfRlFfQ09ERUwgaXMgbm90IHNl
dAo+IENPTkZJR19ORVRfU0NIX0ZRPXkKPiBDT05GSUdfTkVUX1NDSF9ISEY9eQo+IENPTkZJR19O
RVRfU0NIX1BJRT15Cj4gQ09ORklHX05FVF9TQ0hfSU5HUkVTUz15Cj4gQ09ORklHX05FVF9TQ0hf
UExVRz15Cj4gCj4gIwo+ICMgQ2xhc3NpZmljYXRpb24KPiAjCj4gQ09ORklHX05FVF9DTFM9eQo+
IENPTkZJR19ORVRfQ0xTX0JBU0lDPXkKPiBDT05GSUdfTkVUX0NMU19UQ0lOREVYPXkKPiBDT05G
SUdfTkVUX0NMU19GVz15Cj4gQ09ORklHX05FVF9DTFNfVTMyPXkKPiAjIENPTkZJR19DTFNfVTMy
X1BFUkYgaXMgbm90IHNldAo+ICMgQ09ORklHX0NMU19VMzJfTUFSSyBpcyBub3Qgc2V0Cj4gQ09O
RklHX05FVF9DTFNfUlNWUD15Cj4gQ09ORklHX05FVF9DTFNfUlNWUDY9eQo+IENPTkZJR19ORVRf
Q0xTX0ZMT1c9eQo+ICMgQ09ORklHX05FVF9DTFNfQ0dST1VQIGlzIG5vdCBzZXQKPiBDT05GSUdf
TkVUX0NMU19CUEY9eQo+IENPTkZJR19ORVRfRU1BVENIPXkKPiBDT05GSUdfTkVUX0VNQVRDSF9T
VEFDSz0zMgo+ICMgQ09ORklHX05FVF9FTUFUQ0hfQ01QIGlzIG5vdCBzZXQKPiAjIENPTkZJR19O
RVRfRU1BVENIX05CWVRFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ORVRfRU1BVENIX1UzMiBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfTkVUX0VNQVRDSF9NRVRBIGlzIG5vdCBzZXQKPiAjIENPTkZJR19O
RVRfRU1BVENIX1RFWFQgaXMgbm90IHNldAo+IENPTkZJR19ORVRfRU1BVENIX0NBTklEPXkKPiBD
T05GSUdfTkVUX0NMU19BQ1Q9eQo+ICMgQ09ORklHX05FVF9BQ1RfUE9MSUNFIGlzIG5vdCBzZXQK
PiBDT05GSUdfTkVUX0FDVF9HQUNUPXkKPiAjIENPTkZJR19HQUNUX1BST0IgaXMgbm90IHNldAo+
IENPTkZJR19ORVRfQUNUX01JUlJFRD15Cj4gQ09ORklHX05FVF9BQ1RfTkFUPXkKPiBDT05GSUdf
TkVUX0FDVF9QRURJVD15Cj4gIyBDT05GSUdfTkVUX0FDVF9TSU1QIGlzIG5vdCBzZXQKPiBDT05G
SUdfTkVUX0FDVF9TS0JFRElUPXkKPiAjIENPTkZJR19ORVRfQ0xTX0lORCBpcyBub3Qgc2V0Cj4g
Q09ORklHX05FVF9TQ0hfRklGTz15Cj4gQ09ORklHX0RDQj15Cj4gQ09ORklHX0JBVE1BTl9BRFY9
eQo+ICMgQ09ORklHX0JBVE1BTl9BRFZfTkMgaXMgbm90IHNldAo+IENPTkZJR19CQVRNQU5fQURW
X0RFQlVHPXkKPiBDT05GSUdfT1BFTlZTV0lUQ0g9eQo+IENPTkZJR19WU09DS0VUUz15Cj4gQ09O
RklHX05FVExJTktfTU1BUD15Cj4gQ09ORklHX05FVExJTktfRElBRz15Cj4gIyBDT05GSUdfTkVU
X01QTFNfR1NPIGlzIG5vdCBzZXQKPiBDT05GSUdfSFNSPXkKPiBDT05GSUdfUlBTPXkKPiBDT05G
SUdfUkZTX0FDQ0VMPXkKPiBDT05GSUdfWFBTPXkKPiAjIENPTkZJR19DR1JPVVBfTkVUX1BSSU8g
aXMgbm90IHNldAo+IENPTkZJR19DR1JPVVBfTkVUX0NMQVNTSUQ9eQo+IENPTkZJR19ORVRfUlhf
QlVTWV9QT0xMPXkKPiBDT05GSUdfQlFMPXkKPiBDT05GSUdfTkVUX0ZMT1dfTElNSVQ9eQo+IAo+
ICMKPiAjIE5ldHdvcmsgdGVzdGluZwo+ICMKPiAjIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0
Cj4gQ09ORklHX0NBTj15Cj4gQ09ORklHX0NBTl9SQVc9eQo+ICMgQ09ORklHX0NBTl9CQ00gaXMg
bm90IHNldAo+IENPTkZJR19DQU5fR1c9eQo+IAo+ICMKPiAjIENBTiBEZXZpY2UgRHJpdmVycwo+
ICMKPiAjIENPTkZJR19DQU5fVkNBTiBpcyBub3Qgc2V0Cj4gQ09ORklHX0NBTl9ERVY9eQo+ICMg
Q09ORklHX0NBTl9DQUxDX0JJVFRJTUlORyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ0FOX0xFRFMg
aXMgbm90IHNldAo+ICMgQ09ORklHX0NBTl9NQ1AyNTFYIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FO
X1NKQTEwMDA9eQo+ICMgQ09ORklHX0NBTl9TSkExMDAwX0lTQSBpcyBub3Qgc2V0Cj4gQ09ORklH
X0NBTl9TSkExMDAwX1BMQVRGT1JNPXkKPiBDT05GSUdfQ0FOX0VNU19QQ01DSUE9eQo+IENPTkZJ
R19DQU5fUEVBS19QQ01DSUE9eQo+IENPTkZJR19DQU5fQ19DQU49eQo+IENPTkZJR19DQU5fQ19D
QU5fUExBVEZPUk09eQo+IENPTkZJR19DQU5fQ0M3NzA9eQo+IENPTkZJR19DQU5fQ0M3NzBfSVNB
PXkKPiBDT05GSUdfQ0FOX0NDNzcwX1BMQVRGT1JNPXkKPiBDT05GSUdfQ0FOX1NPRlRJTkc9eQo+
IENPTkZJR19DQU5fU09GVElOR19DUz15Cj4gQ09ORklHX0NBTl9ERUJVR19ERVZJQ0VTPXkKPiAj
IENPTkZJR19JUkRBIGlzIG5vdCBzZXQKPiBDT05GSUdfQlQ9eQo+IENPTkZJR19CVF9SRkNPTU09
eQo+IENPTkZJR19CVF9CTkVQPXkKPiBDT05GSUdfQlRfQk5FUF9NQ19GSUxURVI9eQo+ICMgQ09O
RklHX0JUX0JORVBfUFJPVE9fRklMVEVSIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBCbHVldG9vdGgg
ZGV2aWNlIGRyaXZlcnMKPiAjCj4gQ09ORklHX0JUX0hDSURUTDE9eQo+IENPTkZJR19CVF9IQ0lC
VDNDPXkKPiBDT05GSUdfQlRfSENJQkxVRUNBUkQ9eQo+IENPTkZJR19CVF9IQ0lCVFVBUlQ9eQo+
ICMgQ09ORklHX0JUX0hDSVZIQ0kgaXMgbm90IHNldAo+IENPTkZJR19CVF9NUlZMPXkKPiBDT05G
SUdfRklCX1JVTEVTPXkKPiAjIENPTkZJR19XSVJFTEVTUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1dJ
TUFYPXkKPiBDT05GSUdfV0lNQVhfREVCVUdfTEVWRUw9OAo+IENPTkZJR19SRktJTEw9eQo+IENP
TkZJR19SRktJTExfUkVHVUxBVE9SPXkKPiBDT05GSUdfTkVUXzlQPXkKPiBDT05GSUdfTkVUXzlQ
X1ZJUlRJTz15Cj4gIyBDT05GSUdfTkVUXzlQX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FJ
Rj15Cj4gQ09ORklHX0NBSUZfREVCVUc9eQo+ICMgQ09ORklHX0NBSUZfTkVUREVWIGlzIG5vdCBz
ZXQKPiBDT05GSUdfQ0FJRl9VU0I9eQo+ICMgQ09ORklHX05GQyBpcyBub3Qgc2V0Cj4gQ09ORklH
X0hBVkVfQlBGX0pJVD15Cj4gCj4gIwo+ICMgRGV2aWNlIERyaXZlcnMKPiAjCj4gCj4gIwo+ICMg
R2VuZXJpYyBEcml2ZXIgT3B0aW9ucwo+ICMKPiBDT05GSUdfVUVWRU5UX0hFTFBFUl9QQVRIPSIi
Cj4gQ09ORklHX0RFVlRNUEZTPXkKPiBDT05GSUdfREVWVE1QRlNfTU9VTlQ9eQo+ICMgQ09ORklH
X1NUQU5EQUxPTkUgaXMgbm90IHNldAo+ICMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJTEQg
aXMgbm90IHNldAo+IENPTkZJR19GV19MT0FERVI9eQo+IENPTkZJR19GSVJNV0FSRV9JTl9LRVJO
RUw9eQo+IENPTkZJR19FWFRSQV9GSVJNV0FSRT0iIgo+IENPTkZJR19GV19MT0FERVJfVVNFUl9I
RUxQRVI9eQo+ICMgQ09ORklHX0RFQlVHX0RSSVZFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVH
X0RFVlJFUz15Cj4gQ09ORklHX1NZU19IWVBFUlZJU09SPXkKPiAjIENPTkZJR19HRU5FUklDX0NQ
VV9ERVZJQ0VTIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHTUFQPXkKPiBDT05GSUdfUkVHTUFQX0ky
Qz15Cj4gQ09ORklHX1JFR01BUF9TUEk9eQo+IENPTkZJR19SRUdNQVBfTU1JTz15Cj4gQ09ORklH
X1JFR01BUF9JUlE9eQo+IENPTkZJR19ETUFfU0hBUkVEX0JVRkZFUj15Cj4gCj4gIwo+ICMgQnVz
IGRldmljZXMKPiAjCj4gQ09ORklHX0NPTk5FQ1RPUj15Cj4gQ09ORklHX1BST0NfRVZFTlRTPXkK
PiAjIENPTkZJR19NVEQgaXMgbm90IHNldAo+IENPTkZJR19QQVJQT1JUPXkKPiBDT05GSUdfQVJD
SF9NSUdIVF9IQVZFX1BDX1BBUlBPUlQ9eQo+ICMgQ09ORklHX1BBUlBPUlRfUEMgaXMgbm90IHNl
dAo+ICMgQ09ORklHX1BBUlBPUlRfR1NDIGlzIG5vdCBzZXQKPiBDT05GSUdfUEFSUE9SVF9BWDg4
Nzk2PXkKPiAjIENPTkZJR19QQVJQT1JUXzEyODQgaXMgbm90IHNldAo+IENPTkZJR19QQVJQT1JU
X05PVF9QQz15Cj4gQ09ORklHX0JMS19ERVY9eQo+IENPTkZJR19CTEtfREVWX05VTExfQkxLPXkK
PiBDT05GSUdfQkxLX0RFVl9GRD15Cj4gIyBDT05GSUdfWlJBTSBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfQkxLX0RFVl9DT1dfQ09NTU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfQkxLX0RFVl9MT09QPXkK
PiBDT05GSUdfQkxLX0RFVl9MT09QX01JTl9DT1VOVD04Cj4gIyBDT05GSUdfQkxLX0RFVl9DUllQ
VE9MT09QIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBEUkJEIGRpc2FibGVkIGJlY2F1c2UgUFJPQ19G
UyBvciBJTkVUIG5vdCBzZWxlY3RlZAo+ICMKPiBDT05GSUdfQkxLX0RFVl9OQkQ9eQo+ICMgQ09O
RklHX0JMS19ERVZfUkFNIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0RST01fUEtUQ0RWRD15Cj4gQ09O
RklHX0NEUk9NX1BLVENEVkRfQlVGRkVSUz04Cj4gIyBDT05GSUdfQ0RST01fUEtUQ0RWRF9XQ0FD
SEUgaXMgbm90IHNldAo+IENPTkZJR19BVEFfT1ZFUl9FVEg9eQo+IENPTkZJR19YRU5fQkxLREVW
X0ZST05URU5EPXkKPiBDT05GSUdfVklSVElPX0JMSz15Cj4gIyBDT05GSUdfQkxLX0RFVl9IRCBp
cyBub3Qgc2V0Cj4gCj4gIwo+ICMgTWlzYyBkZXZpY2VzCj4gIwo+IENPTkZJR19BRDUyNVhfRFBP
VD15Cj4gIyBDT05GSUdfQUQ1MjVYX0RQT1RfSTJDIGlzIG5vdCBzZXQKPiBDT05GSUdfQUQ1MjVY
X0RQT1RfU1BJPXkKPiAjIENPTkZJR19EVU1NWV9JUlEgaXMgbm90IHNldAo+IENPTkZJR19JQ1M5
MzJTNDAxPXkKPiBDT05GSUdfQVRNRUxfU1NDPXkKPiAjIENPTkZJR19FTkNMT1NVUkVfU0VSVklD
RVMgaXMgbm90IHNldAo+IENPTkZJR19BUERTOTgwMkFMUz15Cj4gQ09ORklHX0lTTDI5MDAzPXkK
PiAjIENPTkZJR19JU0wyOTAyMCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU0VOU09SU19UU0wyNTUw
IGlzIG5vdCBzZXQKPiBDT05GSUdfU0VOU09SU19CSDE3ODA9eQo+IENPTkZJR19TRU5TT1JTX0JI
MTc3MD15Cj4gIyBDT05GSUdfU0VOU09SU19BUERTOTkwWCBpcyBub3Qgc2V0Cj4gQ09ORklHX0hN
QzYzNTI9eQo+IENPTkZJR19EUzE2ODI9eQo+IENPTkZJR19USV9EQUM3NTEyPXkKPiBDT05GSUdf
Vk1XQVJFX0JBTExPT049eQo+IENPTkZJR19CTVAwODU9eQo+IENPTkZJR19CTVAwODVfSTJDPXkK
PiBDT05GSUdfQk1QMDg1X1NQST15Cj4gIyBDT05GSUdfVVNCX1NXSVRDSF9GU0E5NDgwIGlzIG5v
dCBzZXQKPiBDT05GSUdfTEFUVElDRV9FQ1AzX0NPTkZJRz15Cj4gQ09ORklHX1NSQU09eQo+IENP
TkZJR19DMlBPUlQ9eQo+IENPTkZJR19DMlBPUlRfRFVSQU1BUl8yMTUwPXkKPiAKPiAjCj4gIyBF
RVBST00gc3VwcG9ydAo+ICMKPiBDT05GSUdfRUVQUk9NX0FUMjQ9eQo+ICMgQ09ORklHX0VFUFJP
TV9BVDI1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19FRVBST01fTEVHQUNZIGlzIG5vdCBzZXQKPiBD
T05GSUdfRUVQUk9NX01BWDY4NzU9eQo+IENPTkZJR19FRVBST01fOTNDWDY9eQo+ICMgQ09ORklH
X0VFUFJPTV85M1hYNDYgaXMgbm90IHNldAo+IAo+ICMKPiAjIFRleGFzIEluc3RydW1lbnRzIHNo
YXJlZCB0cmFuc3BvcnQgbGluZSBkaXNjaXBsaW5lCj4gIwo+IAo+ICMKPiAjIEFsdGVyYSBGUEdB
IGZpcm13YXJlIGRvd25sb2FkIG1vZHVsZQo+ICMKPiAjIENPTkZJR19BTFRFUkFfU1RBUEwgaXMg
bm90IHNldAo+IAo+ICMKPiAjIEludGVsIE1JQyBIb3N0IERyaXZlcgo+ICMKPiAKPiAjCj4gIyBJ
bnRlbCBNSUMgQ2FyZCBEcml2ZXIKPiAjCj4gIyBDT05GSUdfSU5URUxfTUlDX0NBUkQgaXMgbm90
IHNldAo+IENPTkZJR19IQVZFX0lERT15Cj4gQ09ORklHX0lERT15Cj4gCj4gIwo+ICMgUGxlYXNl
IHNlZSBEb2N1bWVudGF0aW9uL2lkZS9pZGUudHh0IGZvciBoZWxwL2luZm8gb24gSURFIGRyaXZl
cwo+ICMKPiBDT05GSUdfSURFX0FUQVBJPXkKPiAjIENPTkZJR19CTEtfREVWX0lERV9TQVRBIGlz
IG5vdCBzZXQKPiBDT05GSUdfSURFX0dEPXkKPiAjIENPTkZJR19JREVfR0RfQVRBIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19JREVfR0RfQVRBUEkgaXMgbm90IHNldAo+IENPTkZJR19CTEtfREVWX0lE
RUNTPXkKPiBDT05GSUdfQkxLX0RFVl9JREVDRD15Cj4gQ09ORklHX0JMS19ERVZfSURFQ0RfVkVS
Qk9TRV9FUlJPUlM9eQo+ICMgQ09ORklHX0JMS19ERVZfSURFVEFQRSBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfSURFX1RBU0tfSU9DVEwgaXMgbm90IHNldAo+IENPTkZJR19JREVfUFJPQ19GUz15Cj4g
Cj4gIwo+ICMgSURFIGNoaXBzZXQgc3VwcG9ydC9idWdmaXhlcwo+ICMKPiBDT05GSUdfSURFX0dF
TkVSSUM9eQo+IENPTkZJR19CTEtfREVWX1BMQVRGT1JNPXkKPiAjIENPTkZJR19CTEtfREVWX0NN
RDY0MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQkxLX0RFVl9JREVETUEgaXMgbm90IHNldAo+IAo+
ICMKPiAjIFNDU0kgZGV2aWNlIHN1cHBvcnQKPiAjCj4gQ09ORklHX1NDU0lfTU9EPXkKPiBDT05G
SUdfUkFJRF9BVFRSUz15Cj4gQ09ORklHX1NDU0k9eQo+IENPTkZJR19TQ1NJX0RNQT15Cj4gQ09O
RklHX1NDU0lfVEdUPXkKPiAjIENPTkZJR19TQ1NJX05FVExJTksgaXMgbm90IHNldAo+ICMgQ09O
RklHX1NDU0lfUFJPQ19GUyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU0NTSSBzdXBwb3J0IHR5cGUg
KGRpc2ssIHRhcGUsIENELVJPTSkKPiAjCj4gQ09ORklHX0JMS19ERVZfU0Q9eQo+ICMgQ09ORklH
X0NIUl9ERVZfU1QgaXMgbm90IHNldAo+ICMgQ09ORklHX0NIUl9ERVZfT1NTVCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0JMS19ERVZfU1I9eQo+IENPTkZJR19CTEtfREVWX1NSX1ZFTkRPUj15Cj4gQ09O
RklHX0NIUl9ERVZfU0c9eQo+IENPTkZJR19DSFJfREVWX1NDSD15Cj4gIyBDT05GSUdfU0NTSV9N
VUxUSV9MVU4gaXMgbm90IHNldAo+ICMgQ09ORklHX1NDU0lfQ09OU1RBTlRTIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19TQ1NJX0xPR0dJTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX1NDU0lfU0NBTl9B
U1lOQyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU0NTSSBUcmFuc3BvcnRzCj4gIwo+ICMgQ09ORklH
X1NDU0lfU1BJX0FUVFJTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQ1NJX0ZDX0FUVFJTIGlzIG5v
dCBzZXQKPiBDT05GSUdfU0NTSV9JU0NTSV9BVFRSUz15Cj4gQ09ORklHX1NDU0lfU0FTX0FUVFJT
PXkKPiBDT05GSUdfU0NTSV9TQVNfTElCU0FTPXkKPiBDT05GSUdfU0NTSV9TQVNfQVRBPXkKPiBD
T05GSUdfU0NTSV9TQVNfSE9TVF9TTVA9eQo+IENPTkZJR19TQ1NJX1NSUF9BVFRSUz15Cj4gIyBD
T05GSUdfU0NTSV9TUlBfVEdUX0FUVFJTIGlzIG5vdCBzZXQKPiBDT05GSUdfU0NTSV9MT1dMRVZF
TD15Cj4gQ09ORklHX0lTQ1NJX0JPT1RfU1lTRlM9eQo+ICMgQ09ORklHX1NDU0lfVUZTSENEIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19MSUJGQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTElCRkNPRSBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfU0NTSV9ERUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NDU0lf
VklSVElPPXkKPiAjIENPTkZJR19TQ1NJX0xPV0xFVkVMX1BDTUNJQSBpcyBub3Qgc2V0Cj4gQ09O
RklHX1NDU0lfREg9eQo+IENPTkZJR19TQ1NJX0RIX1JEQUM9eQo+IENPTkZJR19TQ1NJX0RIX0hQ
X1NXPXkKPiBDT05GSUdfU0NTSV9ESF9FTUM9eQo+IENPTkZJR19TQ1NJX0RIX0FMVUE9eQo+IENP
TkZJR19TQ1NJX09TRF9JTklUSUFUT1I9eQo+ICMgQ09ORklHX1NDU0lfT1NEX1VMRCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1NDU0lfT1NEX0RQUklOVF9TRU5TRT0xCj4gIyBDT05GSUdfU0NTSV9PU0Rf
REVCVUcgaXMgbm90IHNldAo+IENPTkZJR19BVEE9eQo+ICMgQ09ORklHX0FUQV9OT05TVEFOREFS
RCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FUQV9WRVJCT1NFX0VSUk9SPXkKPiBDT05GSUdfU0FUQV9Q
TVA9eQo+IAo+ICMKPiAjIENvbnRyb2xsZXJzIHdpdGggbm9uLVNGRiBuYXRpdmUgaW50ZXJmYWNl
Cj4gIwo+ICMgQ09ORklHX1NBVEFfQUhDSV9QTEFURk9STSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FU
QV9TRkY9eQo+IAo+ICMKPiAjIFNGRiBjb250cm9sbGVycyB3aXRoIGN1c3RvbSBETUEgaW50ZXJm
YWNlCj4gIwo+IENPTkZJR19BVEFfQk1ETUE9eQo+IAo+ICMKPiAjIFNBVEEgU0ZGIGNvbnRyb2xs
ZXJzIHdpdGggQk1ETUEKPiAjCj4gQ09ORklHX1NBVEFfSElHSEJBTks9eQo+IENPTkZJR19TQVRB
X01WPXkKPiBDT05GSUdfU0FUQV9SQ0FSPXkKPiAKPiAjCj4gIyBQQVRBIFNGRiBjb250cm9sbGVy
cyB3aXRoIEJNRE1BCj4gIwo+ICMgQ09ORklHX1BBVEFfQVJBU0FOX0NGIGlzIG5vdCBzZXQKPiAK
PiAjCj4gIyBQSU8tb25seSBTRkYgY29udHJvbGxlcnMKPiAjCj4gQ09ORklHX1BBVEFfUENNQ0lB
PXkKPiAjIENPTkZJR19QQVRBX1BMQVRGT1JNIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBHZW5lcmlj
IGZhbGxiYWNrIC8gbGVnYWN5IGRyaXZlcnMKPiAjCj4gQ09ORklHX01EPXkKPiAjIENPTkZJR19C
TEtfREVWX01EIGlzIG5vdCBzZXQKPiBDT05GSUdfQkNBQ0hFPXkKPiBDT05GSUdfQkNBQ0hFX0RF
QlVHPXkKPiAjIENPTkZJR19CQ0FDSEVfQ0xPU1VSRVNfREVCVUcgaXMgbm90IHNldAo+IENPTkZJ
R19CTEtfREVWX0RNPXkKPiAjIENPTkZJR19ETV9ERUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RN
X0JVRklPPXkKPiBDT05GSUdfRE1fQklPX1BSSVNPTj15Cj4gQ09ORklHX0RNX1BFUlNJU1RFTlRf
REFUQT15Cj4gIyBDT05GSUdfRE1fQ1JZUFQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RNX1NOQVBT
SE9UIGlzIG5vdCBzZXQKPiBDT05GSUdfRE1fVEhJTl9QUk9WSVNJT05JTkc9eQo+IENPTkZJR19E
TV9ERUJVR19CTE9DS19TVEFDS19UUkFDSU5HPXkKPiBDT05GSUdfRE1fQ0FDSEU9eQo+ICMgQ09O
RklHX0RNX0NBQ0hFX01RIGlzIG5vdCBzZXQKPiBDT05GSUdfRE1fQ0FDSEVfQ0xFQU5FUj15Cj4g
Q09ORklHX0RNX01JUlJPUj15Cj4gQ09ORklHX0RNX0xPR19VU0VSU1BBQ0U9eQo+ICMgQ09ORklH
X0RNX1JBSUQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RNX1pFUk8gaXMgbm90IHNldAo+ICMgQ09O
RklHX0RNX01VTFRJUEFUSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRE1fREVMQVkgaXMgbm90IHNl
dAo+IENPTkZJR19ETV9VRVZFTlQ9eQo+IENPTkZJR19ETV9GTEFLRVk9eQo+IENPTkZJR19ETV9W
RVJJVFk9eQo+IENPTkZJR19ETV9TV0lUQ0g9eQo+IENPTkZJR19UQVJHRVRfQ09SRT15Cj4gIyBD
T05GSUdfVENNX0lCTE9DSyBpcyBub3Qgc2V0Cj4gQ09ORklHX1RDTV9GSUxFSU89eQo+ICMgQ09O
RklHX1RDTV9QU0NTSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTE9PUEJBQ0tfVEFSR0VUIGlzIG5v
dCBzZXQKPiBDT05GSUdfSVNDU0lfVEFSR0VUPXkKPiBDT05GSUdfTUFDSU5UT1NIX0RSSVZFUlM9
eQo+IENPTkZJR19ORVRERVZJQ0VTPXkKPiBDT05GSUdfTUlJPXkKPiAjIENPTkZJR19ORVRfQ09S
RSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQVJDTkVUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BVE1f
RFJJVkVSUyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwo+ICMK
PiBDT05GSUdfQ0FJRl9TUElfU0xBVkU9eQo+IENPTkZJR19DQUlGX1NQSV9TWU5DPXkKPiAjIENP
TkZJR19DQUlGX0hTSSBpcyBub3Qgc2V0Cj4gQ09ORklHX0NBSUZfVklSVElPPXkKPiBDT05GSUdf
VkhPU1RfTkVUPXkKPiBDT05GSUdfVkhPU1RfUklORz15Cj4gQ09ORklHX1ZIT1NUPXkKPiAKPiAj
Cj4gIyBEaXN0cmlidXRlZCBTd2l0Y2ggQXJjaGl0ZWN0dXJlIGRyaXZlcnMKPiAjCj4gQ09ORklH
X05FVF9EU0FfTVY4OEU2WFhYPXkKPiBDT05GSUdfTkVUX0RTQV9NVjg4RTYwNjA9eQo+IENPTkZJ
R19ORVRfRFNBX01WODhFNlhYWF9ORUVEX1BQVT15Cj4gQ09ORklHX05FVF9EU0FfTVY4OEU2MTMx
PXkKPiBDT05GSUdfTkVUX0RTQV9NVjg4RTYxMjNfNjFfNjU9eQo+IENPTkZJR19FVEhFUk5FVD15
Cj4gQ09ORklHX05FVF9WRU5ET1JfM0NPTT15Cj4gQ09ORklHX1BDTUNJQV8zQzU3ND15Cj4gQ09O
RklHX1BDTUNJQV8zQzU4OT15Cj4gQ09ORklHX05FVF9WRU5ET1JfQU1EPXkKPiBDT05GSUdfUENN
Q0lBX05NQ0xBTj15Cj4gQ09ORklHX05FVF9WRU5ET1JfQVJDPXkKPiAjIENPTkZJR19ORVRfQ0FE
RU5DRSBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9WRU5ET1JfQlJPQURDT009eQo+ICMgQ09ORklH
X0I0NCBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9DQUxYRURBX1hHTUFDPXkKPiBDT05GSUdfRE5F
VD15Cj4gIyBDT05GSUdfTkVUX1ZFTkRPUl9GVUpJVFNVIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVU
X1ZFTkRPUl9JTlRFTD15Cj4gQ09ORklHX05FVF9WRU5ET1JfSTgyNVhYPXkKPiAjIENPTkZJR19O
RVRfVkVORE9SX01JQ1JFTCBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9WRU5ET1JfTUlDUk9DSElQ
PXkKPiBDT05GSUdfRU5DMjhKNjA9eQo+IENPTkZJR19FTkMyOEo2MF9XUklURVZFUklGWT15Cj4g
IyBDT05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JIGlzIG5vdCBzZXQKPiBDT05GSUdfRVRIT0M9eQo+
IENPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQo+ICMgQ09ORklHX0FUUCBpcyBub3Qgc2V0Cj4g
Q09ORklHX1NIX0VUSD15Cj4gQ09ORklHX05FVF9WRU5ET1JfU0VFUT15Cj4gQ09ORklHX05FVF9W
RU5ET1JfU01TQz15Cj4gQ09ORklHX1BDTUNJQV9TTUM5MUM5Mj15Cj4gQ09ORklHX1NNU0M5MTFY
PXkKPiAjIENPTkZJR19TTVNDOTExWF9BUkNIX0hPT0tTIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVU
X1ZFTkRPUl9TVE1JQ1JPPXkKPiAjIENPTkZJR19TVE1NQUNfRVRIIGlzIG5vdCBzZXQKPiBDT05G
SUdfTkVUX1ZFTkRPUl9WSUE9eQo+ICMgQ09ORklHX05FVF9WRU5ET1JfV0laTkVUIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkVUX1ZFTkRPUl9YSVJDT009eQo+ICMgQ09ORklHX1BDTUNJQV9YSVJDMlBT
IGlzIG5vdCBzZXQKPiBDT05GSUdfUEhZTElCPXkKPiAKPiAjCj4gIyBNSUkgUEhZIGRldmljZSBk
cml2ZXJzCj4gIwo+IENPTkZJR19BVDgwM1hfUEhZPXkKPiAjIENPTkZJR19BTURfUEhZIGlzIG5v
dCBzZXQKPiBDT05GSUdfTUFSVkVMTF9QSFk9eQo+IENPTkZJR19EQVZJQ09NX1BIWT15Cj4gQ09O
RklHX1FTRU1JX1BIWT15Cj4gIyBDT05GSUdfTFhUX1BIWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
Q0lDQURBX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJVEVTU0VfUEhZPXkKPiBDT05GSUdfU01T
Q19QSFk9eQo+IENPTkZJR19CUk9BRENPTV9QSFk9eQo+ICMgQ09ORklHX0JDTTg3WFhfUEhZIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19JQ1BMVVNfUEhZIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVBTFRF
S19QSFk9eQo+ICMgQ09ORklHX05BVElPTkFMX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1NURTEw
WFA9eQo+ICMgQ09ORklHX0xTSV9FVDEwMTFDX1BIWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUlD
UkVMX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZJWEVEX1BIWT15Cj4gQ09ORklHX01ESU9fQklU
QkFORz15Cj4gQ09ORklHX01JQ1JFTF9LUzg5OTVNQT15Cj4gIyBDT05GSUdfUExJUCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1BQUD15Cj4gQ09ORklHX1BQUF9CU0RDT01QPXkKPiAjIENPTkZJR19QUFBf
REVGTEFURSBpcyBub3Qgc2V0Cj4gQ09ORklHX1BQUF9GSUxURVI9eQo+ICMgQ09ORklHX1BQUF9N
UFBFIGlzIG5vdCBzZXQKPiBDT05GSUdfUFBQX01VTFRJTElOSz15Cj4gQ09ORklHX1BQUE9BVE09
eQo+IENPTkZJR19QUFBPRT15Cj4gQ09ORklHX1NMSEM9eQo+ICMgQ09ORklHX1dMQU4gaXMgbm90
IHNldAo+IAo+ICMKPiAjIFdpTUFYIFdpcmVsZXNzIEJyb2FkYmFuZCBkZXZpY2VzCj4gIwo+IAo+
ICMKPiAjIEVuYWJsZSBVU0Igc3VwcG9ydCB0byBzZWUgV2lNQVggVVNCIGRyaXZlcnMKPiAjCj4g
Q09ORklHX1dBTj15Cj4gQ09ORklHX0hETEM9eQo+IENPTkZJR19IRExDX1JBVz15Cj4gIyBDT05G
SUdfSERMQ19SQVdfRVRIIGlzIG5vdCBzZXQKPiBDT05GSUdfSERMQ19DSVNDTz15Cj4gQ09ORklH
X0hETENfRlI9eQo+IENPTkZJR19IRExDX1BQUD15Cj4gQ09ORklHX0hETENfWDI1PXkKPiBDT05G
SUdfRExDST15Cj4gQ09ORklHX0RMQ0lfTUFYPTgKPiBDT05GSUdfU0JOST15Cj4gIyBDT05GSUdf
U0JOSV9NVUxUSUxJTkUgaXMgbm90IHNldAo+IENPTkZJR19YRU5fTkVUREVWX0ZST05URU5EPXkK
PiAjIENPTkZJR19JU0ROIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBJbnB1dCBkZXZpY2Ugc3VwcG9y
[base64-encoded MIME body segment: a ">"-quoted Linux kernel .config fragment from the message above, running from "# CONFIG_INPUT is not set" through "# CONFIG_SW_SYNC is not set" (character devices, IPMI, I2C, SPI, PPS/PTP, 1-wire, power supply, hwmon sensors, thermal, SSB/BCMA, MFD, regulator, media/V4L2, framebuffer/backlight, sound, USB PHY, MemoryStick, LEDs, RTC, DMA, virtio, Xen driver, and staging/IIO/Android options)]
IENPTkZJR19JT049eQo+IENPTkZJR19JT05fVEVTVD15Cj4gIyBDT05GSUdfWDg2X1BMQVRGT1JN
X0RFVklDRVMgaXMgbm90IHNldAo+IENPTkZJR19DSFJPTUVfUExBVEZPUk1TPXkKPiAjIENPTkZJ
R19DSFJPTUVPU19QU1RPUkUgaXMgbm90IHNldAo+IAo+ICMKPiAjIEhhcmR3YXJlIFNwaW5sb2Nr
IGRyaXZlcnMKPiAjCj4gQ09ORklHX0NMS0VWVF9JODI1Mz15Cj4gQ09ORklHX0k4MjUzX0xPQ0s9
eQo+IENPTkZJR19DTEtCTERfSTgyNTM9eQo+ICMgQ09ORklHX01BSUxCT1ggaXMgbm90IHNldAo+
ICMgQ09ORklHX0lPTU1VX1NVUFBPUlQgaXMgbm90IHNldAo+IAo+ICMKPiAjIFJlbW90ZXByb2Mg
ZHJpdmVycwo+ICMKPiAjIENPTkZJR19TVEVfTU9ERU1fUlBST0MgaXMgbm90IHNldAo+IAo+ICMK
PiAjIFJwbXNnIGRyaXZlcnMKPiAjCj4gIyBDT05GSUdfUE1fREVWRlJFUSBpcyBub3Qgc2V0Cj4g
Q09ORklHX0VYVENPTj15Cj4gCj4gIwo+ICMgRXh0Y29uIERldmljZSBEcml2ZXJzCj4gIwo+IENP
TkZJR19FWFRDT05fQURDX0pBQ0s9eQo+IENPTkZJR19FWFRDT05fUEFMTUFTPXkKPiAjIENPTkZJ
R19NRU1PUlkgaXMgbm90IHNldAo+IENPTkZJR19JSU89eQo+IENPTkZJR19JSU9fQlVGRkVSPXkK
PiAjIENPTkZJR19JSU9fQlVGRkVSX0NCIGlzIG5vdCBzZXQKPiBDT05GSUdfSUlPX0tGSUZPX0JV
Rj15Cj4gQ09ORklHX0lJT19UUklHR0VSRURfQlVGRkVSPXkKPiBDT05GSUdfSUlPX1RSSUdHRVI9
eQo+IENPTkZJR19JSU9fQ09OU1VNRVJTX1BFUl9UUklHR0VSPTIKPiAKPiAjCj4gIyBBY2NlbGVy
b21ldGVycwo+ICMKPiBDT05GSUdfQk1BMTgwPXkKPiBDT05GSUdfSUlPX1NUX0FDQ0VMXzNBWElT
PXkKPiBDT05GSUdfSUlPX1NUX0FDQ0VMX0kyQ18zQVhJUz15Cj4gQ09ORklHX0lJT19TVF9BQ0NF
TF9TUElfM0FYSVM9eQo+IENPTkZJR19LWFNEOT15Cj4gCj4gIwo+ICMgQW5hbG9nIHRvIGRpZ2l0
YWwgY29udmVydGVycwo+ICMKPiBDT05GSUdfQURfU0lHTUFfREVMVEE9eQo+ICMgQ09ORklHX0FE
NzI2NiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUQ3Mjk4IGlzIG5vdCBzZXQKPiBDT05GSUdfQUQ3
NDc2PXkKPiAjIENPTkZJR19BRDc3OTEgaXMgbm90IHNldAo+IENPTkZJR19BRDc3OTM9eQo+IENP
TkZJR19BRDc4ODc9eQo+ICMgQ09ORklHX0FENzkyMyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTFA4
Nzg4X0FEQyBpcyBub3Qgc2V0Cj4gQ09ORklHX01BWDEzNjM9eQo+IENPTkZJR19NQ1AzMjBYPXkK
PiBDT05GSUdfTUNQMzQyMj15Cj4gIyBDT05GSUdfTkFVNzgwMiBpcyBub3Qgc2V0Cj4gQ09ORklH
X1RJX0FEQzA4MUM9eQo+ICMgQ09ORklHX1RJX0FNMzM1WF9BREMgaXMgbm90IHNldAo+IENPTkZJ
R19UV0w2MDMwX0dQQURDPXkKPiAKPiAjCj4gIyBBbXBsaWZpZXJzCj4gIwo+IENPTkZJR19BRDgz
NjY9eQo+IAo+ICMKPiAjIEhpZCBTZW5zb3IgSUlPIENvbW1vbgo+ICMKPiBDT05GSUdfSUlPX1NU
X1NFTlNPUlNfSTJDPXkKPiBDT05GSUdfSUlPX1NUX1NFTlNPUlNfU1BJPXkKPiBDT05GSUdfSUlP
X1NUX1NFTlNPUlNfQ09SRT15Cj4gCj4gIwo+ICMgRGlnaXRhbCB0byBhbmFsb2cgY29udmVydGVy
cwo+ICMKPiBDT05GSUdfQUQ1MDY0PXkKPiBDT05GSUdfQUQ1MzYwPXkKPiAjIENPTkZJR19BRDUz
ODAgaXMgbm90IHNldAo+ICMgQ09ORklHX0FENTQyMSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FENTQ0
Nj15Cj4gQ09ORklHX0FENTQ0OT15Cj4gQ09ORklHX0FENTUwND15Cj4gQ09ORklHX0FENTYyNFJf
U1BJPXkKPiBDT05GSUdfQUQ1Njg2PXkKPiBDT05GSUdfQUQ1NzU1PXkKPiBDT05GSUdfQUQ1NzY0
PXkKPiAjIENPTkZJR19BRDU3OTEgaXMgbm90IHNldAo+ICMgQ09ORklHX0FENzMwMyBpcyBub3Qg
c2V0Cj4gQ09ORklHX01BWDUxNz15Cj4gQ09ORklHX01DUDQ3MjU9eQo+IAo+ICMKPiAjIEZyZXF1
ZW5jeSBTeW50aGVzaXplcnMgRERTL1BMTAo+ICMKPiAKPiAjCj4gIyBDbG9jayBHZW5lcmF0b3Iv
RGlzdHJpYnV0aW9uCj4gIwo+IENPTkZJR19BRDk1MjM9eQo+IAo+ICMKPiAjIFBoYXNlLUxvY2tl
ZCBMb29wIChQTEwpIGZyZXF1ZW5jeSBzeW50aGVzaXplcnMKPiAjCj4gIyBDT05GSUdfQURGNDM1
MCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgRGlnaXRhbCBneXJvc2NvcGUgc2Vuc29ycwo+ICMKPiBD
T05GSUdfQURJUzE2MDgwPXkKPiBDT05GSUdfQURJUzE2MTMwPXkKPiBDT05GSUdfQURJUzE2MTM2
PXkKPiAjIENPTkZJR19BRElTMTYyNjAgaXMgbm90IHNldAo+IENPTkZJR19BRFhSUzQ1MD15Cj4g
Q09ORklHX0lJT19TVF9HWVJPXzNBWElTPXkKPiBDT05GSUdfSUlPX1NUX0dZUk9fSTJDXzNBWElT
PXkKPiBDT05GSUdfSUlPX1NUX0dZUk9fU1BJXzNBWElTPXkKPiBDT05GSUdfSVRHMzIwMD15Cj4g
Cj4gIwo+ICMgSHVtaWRpdHkgc2Vuc29ycwo+ICMKPiAKPiAjCj4gIyBJbmVydGlhbCBtZWFzdXJl
bWVudCB1bml0cwo+ICMKPiAjIENPTkZJR19BRElTMTY0MDAgaXMgbm90IHNldAo+ICMgQ09ORklH
X0FESVMxNjQ4MCBpcyBub3Qgc2V0Cj4gQ09ORklHX0lJT19BRElTX0xJQj15Cj4gQ09ORklHX0lJ
T19BRElTX0xJQl9CVUZGRVI9eQo+IENPTkZJR19JTlZfTVBVNjA1MF9JSU89eQo+IAo+ICMKPiAj
IExpZ2h0IHNlbnNvcnMKPiAjCj4gIyBDT05GSUdfQURKRF9TMzExIGlzIG5vdCBzZXQKPiBDT05G
SUdfQVBEUzkzMDA9eQo+IENPTkZJR19DTTM2NjUxPXkKPiBDT05GSUdfR1AyQVAwMjBBMDBGPXkK
PiBDT05GSUdfU0VOU09SU19MTTM1MzM9eQo+IENPTkZJR19UQ1MzNDcyPXkKPiAjIENPTkZJR19T
RU5TT1JTX1RTTDI1NjMgaXMgbm90IHNldAo+ICMgQ09ORklHX1RTTDQ1MzEgaXMgbm90IHNldAo+
IENPTkZJR19WQ05MNDAwMD15Cj4gCj4gIwo+ICMgTWFnbmV0b21ldGVyIHNlbnNvcnMKPiAjCj4g
Q09ORklHX01BRzMxMTA9eQo+IENPTkZJR19JSU9fU1RfTUFHTl8zQVhJUz15Cj4gQ09ORklHX0lJ
T19TVF9NQUdOX0kyQ18zQVhJUz15Cj4gQ09ORklHX0lJT19TVF9NQUdOX1NQSV8zQVhJUz15Cj4g
Cj4gIwo+ICMgSW5jbGlub21ldGVyIHNlbnNvcnMKPiAjCj4gCj4gIwo+ICMgVHJpZ2dlcnMgLSBz
dGFuZGFsb25lCj4gIwo+IENPTkZJR19JSU9fSU5URVJSVVBUX1RSSUdHRVI9eQo+IENPTkZJR19J
SU9fU1lTRlNfVFJJR0dFUj15Cj4gCj4gIwo+ICMgUHJlc3N1cmUgc2Vuc29ycwo+ICMKPiBDT05G
SUdfTVBMMzExNT15Cj4gQ09ORklHX0lJT19TVF9QUkVTUz15Cj4gQ09ORklHX0lJT19TVF9QUkVT
U19JMkM9eQo+IENPTkZJR19JSU9fU1RfUFJFU1NfU1BJPXkKPiAKPiAjCj4gIyBUZW1wZXJhdHVy
ZSBzZW5zb3JzCj4gIwo+IENPTkZJR19UTVAwMDY9eQo+IENPTkZJR19QV009eQo+IENPTkZJR19Q
V01fU1lTRlM9eQo+ICMgQ09ORklHX1BXTV9SRU5FU0FTX1RQVSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1BXTV9UV0w9eQo+IENPTkZJR19QV01fVFdMX0xFRD15Cj4gIyBDT05GSUdfSVBBQ0tfQlVTIGlz
IG5vdCBzZXQKPiBDT05GSUdfUkVTRVRfQ09OVFJPTExFUj15Cj4gQ09ORklHX0ZNQz15Cj4gQ09O
RklHX0ZNQ19GQUtFREVWPXkKPiBDT05GSUdfRk1DX1RSSVZJQUw9eQo+ICMgQ09ORklHX0ZNQ19X
UklURV9FRVBST00gaXMgbm90IHNldAo+IENPTkZJR19GTUNfQ0hBUkRFVj15Cj4gCj4gIwo+ICMg
UEhZIFN1YnN5c3RlbQo+ICMKPiBDT05GSUdfR0VORVJJQ19QSFk9eQo+IENPTkZJR19QSFlfRVhZ
Tk9TX01JUElfVklERU89eQo+IENPTkZJR19CQ01fS09OQV9VU0IyX1BIWT15Cj4gQ09ORklHX1BP
V0VSQ0FQPXkKPiBDT05GSUdfSU5URUxfUkFQTD15Cj4gCj4gIwo+ICMgRmlybXdhcmUgRHJpdmVy
cwo+ICMKPiBDT05GSUdfRUREPXkKPiAjIENPTkZJR19FRERfT0ZGIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19GSVJNV0FSRV9NRU1NQVAgaXMgbm90IHNldAo+IENPTkZJR19ERUxMX1JCVT15Cj4gQ09O
RklHX0RDREJBUz15Cj4gQ09ORklHX0dPT0dMRV9GSVJNV0FSRT15Cj4gCj4gIwo+ICMgR29vZ2xl
IEZpcm13YXJlIERyaXZlcnMKPiAjCj4gCj4gIwo+ICMgRmlsZSBzeXN0ZW1zCj4gIwo+IENPTkZJ
R19EQ0FDSEVfV09SRF9BQ0NFU1M9eQo+ICMgQ09ORklHX0VYVDJfRlMgaXMgbm90IHNldAo+IENP
TkZJR19FWFQzX0ZTPXkKPiAjIENPTkZJR19FWFQzX0RFRkFVTFRTX1RPX09SREVSRUQgaXMgbm90
IHNldAo+IENPTkZJR19FWFQzX0ZTX1hBVFRSPXkKPiBDT05GSUdfRVhUM19GU19QT1NJWF9BQ0w9
eQo+IENPTkZJR19FWFQzX0ZTX1NFQ1VSSVRZPXkKPiAjIENPTkZJR19FWFQ0X0ZTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfSkJEPXkKPiBDT05GSUdfSkJEX0RFQlVHPXkKPiBDT05GSUdfSkJEMj15Cj4g
Q09ORklHX0pCRDJfREVCVUc9eQo+IENPTkZJR19GU19NQkNBQ0hFPXkKPiAjIENPTkZJR19SRUlT
RVJGU19GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0pGU19GUz15Cj4gQ09ORklHX0pGU19QT1NJWF9B
Q0w9eQo+IENPTkZJR19KRlNfU0VDVVJJVFk9eQo+IENPTkZJR19KRlNfREVCVUc9eQo+ICMgQ09O
RklHX0pGU19TVEFUSVNUSUNTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YRlNfRlMgaXMgbm90IHNl
dAo+IENPTkZJR19HRlMyX0ZTPXkKPiBDT05GSUdfT0NGUzJfRlM9eQo+IENPTkZJR19PQ0ZTMl9G
U19PMkNCPXkKPiBDT05GSUdfT0NGUzJfRlNfU1RBVFM9eQo+IENPTkZJR19PQ0ZTMl9ERUJVR19N
QVNLTE9HPXkKPiAjIENPTkZJR19PQ0ZTMl9ERUJVR19GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0JU
UkZTX0ZTPXkKPiAjIENPTkZJR19CVFJGU19GU19QT1NJWF9BQ0wgaXMgbm90IHNldAo+IENPTkZJ
R19CVFJGU19GU19DSEVDS19JTlRFR1JJVFk9eQo+IENPTkZJR19CVFJGU19GU19SVU5fU0FOSVRZ
X1RFU1RTPXkKPiAjIENPTkZJR19CVFJGU19ERUJVRyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQlRS
RlNfQVNTRVJUIGlzIG5vdCBzZXQKPiBDT05GSUdfTklMRlMyX0ZTPXkKPiBDT05GSUdfRlNfUE9T
SVhfQUNMPXkKPiAjIENPTkZJR19GSUxFX0xPQ0tJTkcgaXMgbm90IHNldAo+IENPTkZJR19GU05P
VElGWT15Cj4gIyBDT05GSUdfRE5PVElGWSBpcyBub3Qgc2V0Cj4gQ09ORklHX0lOT1RJRllfVVNF
Uj15Cj4gIyBDT05GSUdfRkFOT1RJRlkgaXMgbm90IHNldAo+IENPTkZJR19RVU9UQT15Cj4gQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkKPiBDT05GSUdfUFJJTlRfUVVPVEFfV0FSTklO
Rz15Cj4gQ09ORklHX1FVT1RBX0RFQlVHPXkKPiBDT05GSUdfUVVPVEFfVFJFRT15Cj4gIyBDT05G
SUdfUUZNVF9WMSBpcyBub3Qgc2V0Cj4gQ09ORklHX1FGTVRfVjI9eQo+IENPTkZJR19RVU9UQUNU
TD15Cj4gIyBDT05GSUdfQVVUT0ZTNF9GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZVU0VfRlM9eQo+
ICMgQ09ORklHX0NVU0UgaXMgbm90IHNldAo+IAo+ICMKPiAjIENhY2hlcwo+ICMKPiBDT05GSUdf
RlNDQUNIRT15Cj4gIyBDT05GSUdfRlNDQUNIRV9TVEFUUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZT
Q0FDSEVfSElTVE9HUkFNPXkKPiBDT05GSUdfRlNDQUNIRV9ERUJVRz15Cj4gQ09ORklHX0ZTQ0FD
SEVfT0JKRUNUX0xJU1Q9eQo+IENPTkZJR19DQUNIRUZJTEVTPXkKPiAjIENPTkZJR19DQUNIRUZJ
TEVTX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FDSEVGSUxFU19ISVNUT0dSQU09eQo+IAo+
ICMKPiAjIENELVJPTS9EVkQgRmlsZXN5c3RlbXMKPiAjCj4gIyBDT05GSUdfSVNPOTY2MF9GUyBp
cyBub3Qgc2V0Cj4gQ09ORklHX1VERl9GUz15Cj4gQ09ORklHX1VERl9OTFM9eQo+IAo+ICMKPiAj
IERPUy9GQVQvTlQgRmlsZXN5c3RlbXMKPiAjCj4gQ09ORklHX0ZBVF9GUz15Cj4gQ09ORklHX01T
RE9TX0ZTPXkKPiBDT05GSUdfVkZBVF9GUz15Cj4gQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQQUdF
PTQzNwo+IENPTkZJR19GQVRfREVGQVVMVF9JT0NIQVJTRVQ9Imlzbzg4NTktMSIKPiBDT05GSUdf
TlRGU19GUz15Cj4gQ09ORklHX05URlNfREVCVUc9eQo+ICMgQ09ORklHX05URlNfUlcgaXMgbm90
IHNldAo+IAo+ICMKPiAjIFBzZXVkbyBmaWxlc3lzdGVtcwo+ICMKPiBDT05GSUdfUFJPQ19GUz15
Cj4gIyBDT05GSUdfUFJPQ19LQ09SRSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUFJPQ19TWVNDVEwg
aXMgbm90IHNldAo+IENPTkZJR19QUk9DX1BBR0VfTU9OSVRPUj15Cj4gQ09ORklHX1NZU0ZTPXkK
PiAjIENPTkZJR19IVUdFVExCRlMgaXMgbm90IHNldAo+ICMgQ09ORklHX0hVR0VUTEJfUEFHRSBp
cyBub3Qgc2V0Cj4gQ09ORklHX0NPTkZJR0ZTX0ZTPXkKPiBDT05GSUdfTUlTQ19GSUxFU1lTVEVN
Uz15Cj4gQ09ORklHX0FERlNfRlM9eQo+ICMgQ09ORklHX0FERlNfRlNfUlcgaXMgbm90IHNldAo+
IENPTkZJR19BRkZTX0ZTPXkKPiAjIENPTkZJR19IRlNfRlMgaXMgbm90IHNldAo+IENPTkZJR19I
RlNQTFVTX0ZTPXkKPiAjIENPTkZJR19IRlNQTFVTX0ZTX1BPU0lYX0FDTCBpcyBub3Qgc2V0Cj4g
Q09ORklHX0JFRlNfRlM9eQo+IENPTkZJR19CRUZTX0RFQlVHPXkKPiBDT05GSUdfQkZTX0ZTPXkK
PiBDT05GSUdfRUZTX0ZTPXkKPiBDT05GSUdfTE9HRlM9eQo+IENPTkZJR19DUkFNRlM9eQo+IENP
TkZJR19TUVVBU0hGUz15Cj4gQ09ORklHX1NRVUFTSEZTX0ZJTEVfQ0FDSEU9eQo+ICMgQ09ORklH
X1NRVUFTSEZTX0ZJTEVfRElSRUNUIGlzIG5vdCBzZXQKPiBDT05GSUdfU1FVQVNIRlNfREVDT01Q
X1NJTkdMRT15Cj4gIyBDT05GSUdfU1FVQVNIRlNfREVDT01QX01VTFRJIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19TUVVBU0hGU19ERUNPTVBfTVVMVElfUEVSQ1BVIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19TUVVBU0hGU19YQVRUUiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU1FVQVNIRlNfWkxJQiBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfU1FVQVNIRlNfTFpPIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TUVVB
U0hGU19YWiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NRVUFTSEZTXzRLX0RFVkJMS19TSVpFPXkKPiBD
T05GSUdfU1FVQVNIRlNfRU1CRURERUQ9eQo+IENPTkZJR19TUVVBU0hGU19GUkFHTUVOVF9DQUNI
RV9TSVpFPTMKPiBDT05GSUdfVlhGU19GUz15Cj4gIyBDT05GSUdfTUlOSVhfRlMgaXMgbm90IHNl
dAo+IENPTkZJR19PTUZTX0ZTPXkKPiBDT05GSUdfSFBGU19GUz15Cj4gIyBDT05GSUdfUU5YNEZT
X0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdfUU5YNkZTX0ZTPXkKPiAjIENPTkZJR19RTlg2RlNfREVC
VUcgaXMgbm90IHNldAo+ICMgQ09ORklHX1JPTUZTX0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdfUFNU
T1JFPXkKPiBDT05GSUdfUFNUT1JFX0NPTlNPTEU9eQo+ICMgQ09ORklHX1BTVE9SRV9GVFJBQ0Ug
aXMgbm90IHNldAo+IENPTkZJR19QU1RPUkVfUkFNPXkKPiBDT05GSUdfU1lTVl9GUz15Cj4gQ09O
RklHX1VGU19GUz15Cj4gIyBDT05GSUdfVUZTX0ZTX1dSSVRFIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19VRlNfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19GMkZTX0ZTPXkKPiBDT05GSUdfRjJGU19T
VEFUX0ZTPXkKPiBDT05GSUdfRjJGU19GU19YQVRUUj15Cj4gIyBDT05GSUdfRjJGU19GU19QT1NJ
WF9BQ0wgaXMgbm90IHNldAo+ICMgQ09ORklHX0YyRlNfRlNfU0VDVVJJVFkgaXMgbm90IHNldAo+
ICMgQ09ORklHX0YyRlNfQ0hFQ0tfRlMgaXMgbm90IHNldAo+IENPTkZJR19ORVRXT1JLX0ZJTEVT
WVNURU1TPXkKPiBDT05GSUdfTkNQX0ZTPXkKPiBDT05GSUdfTkNQRlNfUEFDS0VUX1NJR05JTkc9
eQo+ICMgQ09ORklHX05DUEZTX0lPQ1RMX0xPQ0tJTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX05D
UEZTX1NUUk9ORyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkNQRlNfTkZTX05TIGlzIG5vdCBzZXQK
PiBDT05GSUdfTkNQRlNfT1MyX05TPXkKPiBDT05GSUdfTkNQRlNfU01BTExET1M9eQo+IENPTkZJ
R19OQ1BGU19OTFM9eQo+ICMgQ09ORklHX05DUEZTX0VYVFJBUyBpcyBub3Qgc2V0Cj4gQ09ORklH
X05MUz15Cj4gQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiCj4gIyBDT05GSUdfTkxTX0NP
REVQQUdFXzQzNyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzczNyBpcyBub3Qg
c2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV83NzU9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODUw
PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg1Mj15Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1
NSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfTkxTX0NPREVQQUdFXzg2MCBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV84
NjE9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODYyPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0Vf
ODYzIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15Cj4gQ09ORklHX05MU19D
T0RFUEFHRV84NjU9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODY2PXkKPiBDT05GSUdfTkxTX0NP
REVQQUdFXzg2OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV85MzY9eQo+IENPTkZJR19OTFNfQ09E
RVBBR0VfOTUwPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKPiBDT05G
SUdfTkxTX0NPREVQQUdFXzk0OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NzQ9eQo+IENPTkZJ
R19OTFNfSVNPODg1OV84PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTA9eQo+IENPTkZJR19O
TFNfQ09ERVBBR0VfMTI1MT15Cj4gIyBDT05GSUdfTkxTX0FTQ0lJIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19OTFNfSVNPODg1OV8xIGlzIG5vdCBzZXQKPiAjIENPTkZJR19OTFNfSVNPODg1OV8yIGlz
IG5vdCBzZXQKPiBDT05GSUdfTkxTX0lTTzg4NTlfMz15Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlf
NCBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19JU084ODU5XzU9eQo+ICMgQ09ORklHX05MU19JU084
ODU5XzYgaXMgbm90IHNldAo+IENPTkZJR19OTFNfSVNPODg1OV83PXkKPiAjIENPTkZJR19OTFNf
SVNPODg1OV85IGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0lTTzg4NTlfMTM9eQo+IENPTkZJR19O
TFNfSVNPODg1OV8xND15Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90IHNldAo+ICMg
Q09ORklHX05MU19LT0k4X1IgaXMgbm90IHNldAo+IENPTkZJR19OTFNfS09JOF9VPXkKPiAjIENP
TkZJR19OTFNfTUFDX1JPTUFOIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX01BQ19DRUxUSUM9eQo+
IENPTkZJR19OTFNfTUFDX0NFTlRFVVJPPXkKPiBDT05GSUdfTkxTX01BQ19DUk9BVElBTj15Cj4g
Q09ORklHX05MU19NQUNfQ1lSSUxMSUM9eQo+IENPTkZJR19OTFNfTUFDX0dBRUxJQz15Cj4gQ09O
RklHX05MU19NQUNfR1JFRUs9eQo+ICMgQ09ORklHX05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0
Cj4gQ09ORklHX05MU19NQUNfSU5VSVQ9eQo+IENPTkZJR19OTFNfTUFDX1JPTUFOSUFOPXkKPiBD
T05GSUdfTkxTX01BQ19UVVJLSVNIPXkKPiBDT05GSUdfTkxTX1VURjg9eQo+IAo+ICMKPiAjIEtl
cm5lbCBoYWNraW5nCj4gIwo+IENPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkKPiAKPiAj
Cj4gIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMKPiAjCj4gQ09ORklHX0RFRkFVTFRfTUVTU0FH
RV9MT0dMRVZFTD00Cj4gCj4gIwo+ICMgQ29tcGlsZS10aW1lIGNoZWNrcyBhbmQgY29tcGlsZXIg
b3B0aW9ucwo+ICMKPiBDT05GSUdfREVCVUdfSU5GTz15Cj4gIyBDT05GSUdfREVCVUdfSU5GT19S
RURVQ0VEIGlzIG5vdCBzZXQKPiAjIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5v
dCBzZXQKPiBDT05GSUdfRU5BQkxFX01VU1RfQ0hFQ0s9eQo+IENPTkZJR19GUkFNRV9XQVJOPTIw
NDgKPiBDT05GSUdfU1RSSVBfQVNNX1NZTVM9eQo+ICMgQ09ORklHX1JFQURBQkxFX0FTTSBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldAo+IENPTkZJR19ERUJV
R19GUz15Cj4gIyBDT05GSUdfSEVBREVSU19DSEVDSyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVH
X1NFQ1RJT05fTUlTTUFUQ0g9eQo+IENPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQo+
IENPTkZJR19GUkFNRV9QT0lOVEVSPXkKPiBDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BV
PXkKPiAjIENPTkZJR19NQUdJQ19TWVNSUSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX0tFUk5F
TD15Cj4gCj4gIwo+ICMgTWVtb3J5IERlYnVnZ2luZwo+ICMKPiAjIENPTkZJR19ERUJVR19QQUdF
QUxMT0MgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19PQkpFQ1RTPXkKPiBDT05GSUdfREVCVUdf
T0JKRUNUU19TRUxGVEVTVD15Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15Cj4gIyBDT05G
SUdfREVCVUdfT0JKRUNUU19USU1FUlMgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19PQkpFQ1RT
X1dPUks9eQo+ICMgQ09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldAo+ICMg
Q09ORklHX0RFQlVHX09CSkVDVFNfUEVSQ1BVX0NPVU5URVIgaXMgbm90IHNldAo+IENPTkZJR19E
RUJVR19PQkpFQ1RTX0VOQUJMRV9ERUZBVUxUPTEKPiAjIENPTkZJR19TTFVCX1NUQVRTIGlzIG5v
dCBzZXQKPiBDT05GSUdfSEFWRV9ERUJVR19LTUVNTEVBSz15Cj4gIyBDT05GSUdfREVCVUdfS01F
TUxFQUsgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19TVEFDS19VU0FHRT15Cj4gIyBDT05GSUdf
REVCVUdfVk0gaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX1ZJUlRVQUwgaXMgbm90IHNldAo+
IENPTkZJR19ERUJVR19NRU1PUllfSU5JVD15Cj4gQ09ORklHX01FTU9SWV9OT1RJRklFUl9FUlJP
Ul9JTkpFQ1Q9eQo+ICMgQ09ORklHX0RFQlVHX1BFUl9DUFVfTUFQUyBpcyBub3Qgc2V0Cj4gQ09O
RklHX0hBVkVfREVCVUdfU1RBQ0tPVkVSRkxPVz15Cj4gIyBDT05GSUdfREVCVUdfU1RBQ0tPVkVS
RkxPVyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfQVJDSF9LTUVNQ0hFQ0s9eQo+ICMgQ09ORklH
X0RFQlVHX1NISVJRIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5n
cwo+ICMKPiBDT05GSUdfTE9DS1VQX0RFVEVDVE9SPXkKPiBDT05GSUdfSEFSRExPQ0tVUF9ERVRF
Q1RPUj15Cj4gIyBDT05GSUdfQk9PVFBBUkFNX0hBUkRMT0NLVVBfUEFOSUMgaXMgbm90IHNldAo+
IENPTkZJR19CT09UUEFSQU1fSEFSRExPQ0tVUF9QQU5JQ19WQUxVRT0wCj4gIyBDT05GSUdfQk9P
VFBBUkFNX1NPRlRMT0NLVVBfUEFOSUMgaXMgbm90IHNldAo+IENPTkZJR19CT09UUEFSQU1fU09G
VExPQ0tVUF9QQU5JQ19WQUxVRT0wCj4gIyBDT05GSUdfREVURUNUX0hVTkdfVEFTSyBpcyBub3Qg
c2V0Cj4gIyBDT05GSUdfUEFOSUNfT05fT09QUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1BBTklDX09O
X09PUFNfVkFMVUU9MAo+IENPTkZJR19QQU5JQ19USU1FT1VUPTAKPiAjIENPTkZJR19TQ0hFRF9E
RUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NDSEVEU1RBVFM9eQo+IENPTkZJR19USU1FUl9TVEFU
Uz15Cj4gCj4gIwo+ICMgTG9jayBEZWJ1Z2dpbmcgKHNwaW5sb2NrcywgbXV0ZXhlcywgZXRjLi4u
KQo+ICMKPiBDT05GSUdfREVCVUdfUlRfTVVURVhFUz15Cj4gQ09ORklHX0RFQlVHX1BJX0xJU1Q9
eQo+IENPTkZJR19SVF9NVVRFWF9URVNURVI9eQo+IENPTkZJR19ERUJVR19TUElOTE9DSz15Cj4g
Q09ORklHX0RFQlVHX01VVEVYRVM9eQo+IENPTkZJR19ERUJVR19XV19NVVRFWF9TTE9XUEFUSD15
Cj4gQ09ORklHX0RFQlVHX0xPQ0tfQUxMT0M9eQo+IENPTkZJR19QUk9WRV9MT0NLSU5HPXkKPiBD
T05GSUdfTE9DS0RFUD15Cj4gQ09ORklHX0xPQ0tfU1RBVD15Cj4gQ09ORklHX0RFQlVHX0xPQ0tE
RVA9eQo+IENPTkZJR19ERUJVR19BVE9NSUNfU0xFRVA9eQo+ICMgQ09ORklHX0RFQlVHX0xPQ0tJ
TkdfQVBJX1NFTEZURVNUUyBpcyBub3Qgc2V0Cj4gQ09ORklHX1RSQUNFX0lSUUZMQUdTPXkKPiBD
T05GSUdfU1RBQ0tUUkFDRT15Cj4gQ09ORklHX0RFQlVHX0tPQkpFQ1Q9eQo+ICMgQ09ORklHX0RF
QlVHX1dSSVRFQ09VTlQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX0xJU1QgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0RFQlVHX1NHIGlzIG5vdCBzZXQKPiAjIENPTkZJR19ERUJVR19OT1RJRklF
UlMgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX0NSRURFTlRJQUxTIGlzIG5vdCBzZXQKPiAK
PiAjCj4gIyBSQ1UgRGVidWdnaW5nCj4gIwo+ICMgQ09ORklHX1BST1ZFX1JDVSBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfU1BBUlNFX1JDVV9QT0lOVEVSIGlzIG5vdCBzZXQKPiBDT05GSUdfUkNVX1RP
UlRVUkVfVEVTVD15Cj4gQ09ORklHX1JDVV9UT1JUVVJFX1RFU1RfUlVOTkFCTEU9eQo+IENPTkZJ
R19SQ1VfQ1BVX1NUQUxMX1RJTUVPVVQ9MjEKPiAjIENPTkZJR19SQ1VfQ1BVX1NUQUxMX0lORk8g
aXMgbm90IHNldAo+IENPTkZJR19SQ1VfVFJBQ0U9eQo+ICMgQ09ORklHX0RFQlVHX0JMT0NLX0VY
VF9ERVZUIGlzIG5vdCBzZXQKPiBDT05GSUdfTk9USUZJRVJfRVJST1JfSU5KRUNUSU9OPXkKPiBD
T05GSUdfQ1BVX05PVElGSUVSX0VSUk9SX0lOSkVDVD15Cj4gQ09ORklHX1BNX05PVElGSUVSX0VS
Uk9SX0lOSkVDVD15Cj4gIyBDT05GSUdfRkFVTFRfSU5KRUNUSU9OIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19MQVRFTkNZVE9QIGlzIG5vdCBzZXQKPiBDT05GSUdfQVJDSF9IQVNfREVCVUdfU1RSSUNU
X1VTRVJfQ09QWV9DSEVDS1M9eQo+ICMgQ09ORklHX0RFQlVHX1NUUklDVF9VU0VSX0NPUFlfQ0hF
Q0tTIGlzIG5vdCBzZXQKPiBDT05GSUdfVVNFUl9TVEFDS1RSQUNFX1NVUFBPUlQ9eQo+IENPTkZJ
R19OT1BfVFJBQ0VSPXkKPiBDT05GSUdfSEFWRV9GVU5DVElPTl9UUkFDRVI9eQo+IENPTkZJR19I
QVZFX0ZVTkNUSU9OX0dSQVBIX1RSQUNFUj15Cj4gQ09ORklHX0hBVkVfRlVOQ1RJT05fR1JBUEhf
RlBfVEVTVD15Cj4gQ09ORklHX0hBVkVfRlVOQ1RJT05fVFJBQ0VfTUNPVU5UX1RFU1Q9eQo+IENP
TkZJR19IQVZFX0RZTkFNSUNfRlRSQUNFPXkKPiBDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRV9X
SVRIX1JFR1M9eQo+IENPTkZJR19IQVZFX0ZUUkFDRV9NQ09VTlRfUkVDT1JEPXkKPiBDT05GSUdf
SEFWRV9TWVNDQUxMX1RSQUNFUE9JTlRTPXkKPiBDT05GSUdfSEFWRV9GRU5UUlk9eQo+IENPTkZJ
R19IQVZFX0NfUkVDT1JETUNPVU5UPXkKPiBDT05GSUdfVFJBQ0VSX01BWF9UUkFDRT15Cj4gQ09O
RklHX1RSQUNFX0NMT0NLPXkKPiBDT05GSUdfUklOR19CVUZGRVI9eQo+IENPTkZJR19FVkVOVF9U
UkFDSU5HPXkKPiBDT05GSUdfQ09OVEVYVF9TV0lUQ0hfVFJBQ0VSPXkKPiBDT05GSUdfUklOR19C
VUZGRVJfQUxMT1dfU1dBUD15Cj4gQ09ORklHX1RSQUNJTkc9eQo+IENPTkZJR19HRU5FUklDX1RS
QUNFUj15Cj4gQ09ORklHX1RSQUNJTkdfU1VQUE9SVD15Cj4gQ09ORklHX0ZUUkFDRT15Cj4gQ09O
RklHX0ZVTkNUSU9OX1RSQUNFUj15Cj4gIyBDT05GSUdfRlVOQ1RJT05fR1JBUEhfVFJBQ0VSIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19JUlFTT0ZGX1RSQUNFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1ND
SEVEX1RSQUNFUj15Cj4gQ09ORklHX0ZUUkFDRV9TWVNDQUxMUz15Cj4gQ09ORklHX1RSQUNFUl9T
TkFQU0hPVD15Cj4gQ09ORklHX1RSQUNFUl9TTkFQU0hPVF9QRVJfQ1BVX1NXQVA9eQo+IENPTkZJ
R19CUkFOQ0hfUFJPRklMRV9OT05FPXkKPiAjIENPTkZJR19QUk9GSUxFX0FOTk9UQVRFRF9CUkFO
Q0hFUyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUFJPRklMRV9BTExfQlJBTkNIRVMgaXMgbm90IHNl
dAo+IENPTkZJR19TVEFDS19UUkFDRVI9eQo+IENPTkZJR19CTEtfREVWX0lPX1RSQUNFPXkKPiAj
IENPTkZJR19VUFJPQkVfRVZFTlQgaXMgbm90IHNldAo+ICMgQ09ORklHX1BST0JFX0VWRU5UUyBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfRFlOQU1JQ19GVFJBQ0UgaXMgbm90IHNldAo+IENPTkZJR19G
VU5DVElPTl9QUk9GSUxFUj15Cj4gIyBDT05GSUdfRlRSQUNFX1NUQVJUVVBfVEVTVCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1JJTkdfQlVGRkVSX0JFTkNITUFSSz15Cj4gIyBDT05GSUdfUklOR19CVUZG
RVJfU1RBUlRVUF9URVNUIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBSdW50aW1lIFRlc3RpbmcKPiAj
Cj4gQ09ORklHX0xLRFRNPXkKPiBDT05GSUdfVEVTVF9MSVNUX1NPUlQ9eQo+ICMgQ09ORklHX0JB
Q0tUUkFDRV9TRUxGX1RFU1QgaXMgbm90IHNldAo+ICMgQ09ORklHX1JCVFJFRV9URVNUIGlzIG5v
dCBzZXQKPiBDT05GSUdfQVRPTUlDNjRfU0VMRlRFU1Q9eQo+IENPTkZJR19URVNUX1NUUklOR19I
RUxQRVJTPXkKPiBDT05GSUdfVEVTVF9LU1RSVE9YPXkKPiAjIENPTkZJR19ETUFfQVBJX0RFQlVH
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQU1QTEVTIGlzIG5vdCBzZXQKPiBDT05GSUdfSEFWRV9B
UkNIX0tHREI9eQo+IENPTkZJR19LR0RCPXkKPiBDT05GSUdfS0dEQl9URVNUUz15Cj4gQ09ORklH
X0tHREJfVEVTVFNfT05fQk9PVD15Cj4gQ09ORklHX0tHREJfVEVTVFNfQk9PVF9TVFJJTkc9IlYx
RjEwMCIKPiBDT05GSUdfS0dEQl9MT1dfTEVWRUxfVFJBUD15Cj4gIyBDT05GSUdfS0dEQl9LREIg
aXMgbm90IHNldAo+IENPTkZJR19TVFJJQ1RfREVWTUVNPXkKPiBDT05GSUdfWDg2X1ZFUkJPU0Vf
Qk9PVFVQPXkKPiAjIENPTkZJR19FQVJMWV9QUklOVEsgaXMgbm90IHNldAo+ICMgQ09ORklHX1g4
Nl9QVERVTVAgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX1JPREFUQSBpcyBub3Qgc2V0Cj4g
IyBDT05GSUdfRE9VQkxFRkFVTFQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX1RMQkZMVVNI
IGlzIG5vdCBzZXQKPiBDT05GSUdfSU9NTVVfU1RSRVNTPXkKPiBDT05GSUdfSEFWRV9NTUlPVFJB
Q0VfU1VQUE9SVD15Cj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfMFg4MD0wCj4gQ09ORklHX0lPX0RF
TEFZX1RZUEVfMFhFRD0xCj4gQ09ORklHX0lPX0RFTEFZX1RZUEVfVURFTEFZPTIKPiBDT05GSUdf
SU9fREVMQVlfVFlQRV9OT05FPTMKPiBDT05GSUdfSU9fREVMQVlfMFg4MD15Cj4gIyBDT05GSUdf
SU9fREVMQVlfMFhFRCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU9fREVMQVlfVURFTEFZIGlzIG5v
dCBzZXQKPiAjIENPTkZJR19JT19ERUxBWV9OT05FIGlzIG5vdCBzZXQKPiBDT05GSUdfREVGQVVM
VF9JT19ERUxBWV9UWVBFPTAKPiBDT05GSUdfREVCVUdfQk9PVF9QQVJBTVM9eQo+IENPTkZJR19D
UEFfREVCVUc9eQo+ICMgQ09ORklHX09QVElNSVpFX0lOTElOSU5HIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19ERUJVR19OTUlfU0VMRlRFU1QgaXMgbm90IHNldAo+ICMgQ09ORklHX1g4Nl9ERUJVR19T
VEFUSUNfQ1BVX0hBUyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU2VjdXJpdHkgb3B0aW9ucwo+ICMK
PiAjIENPTkZJR19LRVlTIGlzIG5vdCBzZXQKPiBDT05GSUdfU0VDVVJJVFlfRE1FU0dfUkVTVFJJ
Q1Q9eQo+IENPTkZJR19TRUNVUklUWT15Cj4gQ09ORklHX1NFQ1VSSVRZRlM9eQo+IENPTkZJR19T
RUNVUklUWV9ORVRXT1JLPXkKPiBDT05GSUdfU0VDVVJJVFlfUEFUSD15Cj4gQ09ORklHX1NFQ1VS
SVRZX1RPTU9ZTz15Cj4gQ09ORklHX1NFQ1VSSVRZX1RPTU9ZT19NQVhfQUNDRVBUX0VOVFJZPTIw
NDgKPiBDT05GSUdfU0VDVVJJVFlfVE9NT1lPX01BWF9BVURJVF9MT0c9MTAyNAo+ICMgQ09ORklH
X1NFQ1VSSVRZX1RPTU9ZT19PTUlUX1VTRVJTUEFDRV9MT0FERVIgaXMgbm90IHNldAo+IENPTkZJ
R19TRUNVUklUWV9UT01PWU9fUE9MSUNZX0xPQURFUj0iL3NiaW4vdG9tb3lvLWluaXQiCj4gQ09O
RklHX1NFQ1VSSVRZX1RPTU9ZT19BQ1RJVkFUSU9OX1RSSUdHRVI9Ii9zYmluL2luaXQiCj4gQ09O
RklHX1NFQ1VSSVRZX0FQUEFSTU9SPXkKPiBDT05GSUdfU0VDVVJJVFlfQVBQQVJNT1JfQk9PVFBB
UkFNX1ZBTFVFPTEKPiBDT05GSUdfU0VDVVJJVFlfQVBQQVJNT1JfSEFTSD15Cj4gIyBDT05GSUdf
U0VDVVJJVFlfWUFNQSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfSU1BIGlzIG5vdCBzZXQKPiBDT05G
SUdfREVGQVVMVF9TRUNVUklUWV9UT01PWU89eQo+ICMgQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlf
QVBQQVJNT1IgaXMgbm90IHNldAo+ICMgQ09ORklHX0RFRkFVTFRfU0VDVVJJVFlfREFDIGlzIG5v
dCBzZXQKPiBDT05GSUdfREVGQVVMVF9TRUNVUklUWT0idG9tb3lvIgo+IENPTkZJR19YT1JfQkxP
Q0tTPXkKPiBDT05GSUdfQ1JZUFRPPXkKPiAKPiAjCj4gIyBDcnlwdG8gY29yZSBvciBoZWxwZXIK
PiAjCj4gQ09ORklHX0NSWVBUT19BTEdBUEk9eQo+IENPTkZJR19DUllQVE9fQUxHQVBJMj15Cj4g
Q09ORklHX0NSWVBUT19BRUFEPXkKPiBDT05GSUdfQ1JZUFRPX0FFQUQyPXkKPiBDT05GSUdfQ1JZ
UFRPX0JMS0NJUEhFUj15Cj4gQ09ORklHX0NSWVBUT19CTEtDSVBIRVIyPXkKPiBDT05GSUdfQ1JZ
UFRPX0hBU0g9eQo+IENPTkZJR19DUllQVE9fSEFTSDI9eQo+IENPTkZJR19DUllQVE9fUk5HPXkK
PiBDT05GSUdfQ1JZUFRPX1JORzI9eQo+IENPTkZJR19DUllQVE9fUENPTVAyPXkKPiBDT05GSUdf
Q1JZUFRPX01BTkFHRVI9eQo+IENPTkZJR19DUllQVE9fTUFOQUdFUjI9eQo+IENPTkZJR19DUllQ
VE9fVVNFUj15Cj4gQ09ORklHX0NSWVBUT19NQU5BR0VSX0RJU0FCTEVfVEVTVFM9eQo+IENPTkZJ
R19DUllQVE9fR0YxMjhNVUw9eQo+IENPTkZJR19DUllQVE9fTlVMTD15Cj4gQ09ORklHX0NSWVBU
T19QQ1JZUFQ9eQo+IENPTkZJR19DUllQVE9fV09SS1FVRVVFPXkKPiBDT05GSUdfQ1JZUFRPX0NS
WVBURD15Cj4gIyBDT05GSUdfQ1JZUFRPX0FVVEhFTkMgaXMgbm90IHNldAo+IENPTkZJR19DUllQ
VE9fQUJMS19IRUxQRVI9eQo+IENPTkZJR19DUllQVE9fR0xVRV9IRUxQRVJfWDg2PXkKPiAKPiAj
Cj4gIyBBdXRoZW50aWNhdGVkIEVuY3J5cHRpb24gd2l0aCBBc3NvY2lhdGVkIERhdGEKPiAjCj4g
Q09ORklHX0NSWVBUT19DQ009eQo+IENPTkZJR19DUllQVE9fR0NNPXkKPiBDT05GSUdfQ1JZUFRP
X1NFUUlWPXkKPiAKPiAjCj4gIyBCbG9jayBtb2Rlcwo+ICMKPiBDT05GSUdfQ1JZUFRPX0NCQz15
Cj4gQ09ORklHX0NSWVBUT19DVFI9eQo+ICMgQ09ORklHX0NSWVBUT19DVFMgaXMgbm90IHNldAo+
IENPTkZJR19DUllQVE9fRUNCPXkKPiBDT05GSUdfQ1JZUFRPX0xSVz15Cj4gQ09ORklHX0NSWVBU
T19QQ0JDPXkKPiBDT05GSUdfQ1JZUFRPX1hUUz15Cj4gCj4gIwo+ICMgSGFzaCBtb2Rlcwo+ICMK
PiAjIENPTkZJR19DUllQVE9fQ01BQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JZUFRPX0hNQUMg
aXMgbm90IHNldAo+IENPTkZJR19DUllQVE9fWENCQz15Cj4gQ09ORklHX0NSWVBUT19WTUFDPXkK
PiAKPiAjCj4gIyBEaWdlc3QKPiAjCj4gQ09ORklHX0NSWVBUT19DUkMzMkM9eQo+IENPTkZJR19D
UllQVE9fQ1JDMzJDX0lOVEVMPXkKPiAjIENPTkZJR19DUllQVE9fQ1JDMzIgaXMgbm90IHNldAo+
IENPTkZJR19DUllQVE9fQ1JDMzJfUENMTVVMPXkKPiBDT05GSUdfQ1JZUFRPX0NSQ1QxMERJRj15
Cj4gIyBDT05GSUdfQ1JZUFRPX0NSQ1QxMERJRl9QQ0xNVUwgaXMgbm90IHNldAo+IENPTkZJR19D
UllQVE9fR0hBU0g9eQo+IENPTkZJR19DUllQVE9fTUQ0PXkKPiBDT05GSUdfQ1JZUFRPX01ENT15
Cj4gQ09ORklHX0NSWVBUT19NSUNIQUVMX01JQz15Cj4gIyBDT05GSUdfQ1JZUFRPX1JNRDEyOCBp
cyBub3Qgc2V0Cj4gQ09ORklHX0NSWVBUT19STUQxNjA9eQo+IENPTkZJR19DUllQVE9fUk1EMjU2
PXkKPiBDT05GSUdfQ1JZUFRPX1JNRDMyMD15Cj4gQ09ORklHX0NSWVBUT19TSEExPXkKPiBDT05G
SUdfQ1JZUFRPX1NIQTFfU1NTRTM9eQo+IENPTkZJR19DUllQVE9fU0hBMjU2X1NTU0UzPXkKPiBD
T05GSUdfQ1JZUFRPX1NIQTUxMl9TU1NFMz15Cj4gQ09ORklHX0NSWVBUT19TSEEyNTY9eQo+IENP
TkZJR19DUllQVE9fU0hBNTEyPXkKPiBDT05GSUdfQ1JZUFRPX1RHUjE5Mj15Cj4gQ09ORklHX0NS
WVBUT19XUDUxMj15Cj4gIyBDT05GSUdfQ1JZUFRPX0dIQVNIX0NMTVVMX05JX0lOVEVMIGlzIG5v
dCBzZXQKPiAKPiAjCj4gIyBDaXBoZXJzCj4gIwo+IENPTkZJR19DUllQVE9fQUVTPXkKPiBDT05G
SUdfQ1JZUFRPX0FFU19YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fQUVTX05JX0lOVEVMPXkKPiBD
T05GSUdfQ1JZUFRPX0FOVUJJUz15Cj4gQ09ORklHX0NSWVBUT19BUkM0PXkKPiBDT05GSUdfQ1JZ
UFRPX0JMT1dGSVNIPXkKPiBDT05GSUdfQ1JZUFRPX0JMT1dGSVNIX0NPTU1PTj15Cj4gQ09ORklH
X0NSWVBUT19CTE9XRklTSF9YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fQ0FNRUxMSUE9eQo+IENP
TkZJR19DUllQVE9fQ0FNRUxMSUFfWDg2XzY0PXkKPiBDT05GSUdfQ1JZUFRPX0NBTUVMTElBX0FF
U05JX0FWWF9YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fQ0FNRUxMSUFfQUVTTklfQVZYMl9YODZf
NjQ9eQo+IENPTkZJR19DUllQVE9fQ0FTVF9DT01NT049eQo+IENPTkZJR19DUllQVE9fQ0FTVDU9
eQo+IENPTkZJR19DUllQVE9fQ0FTVDVfQVZYX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19DQVNU
Nj15Cj4gQ09ORklHX0NSWVBUT19DQVNUNl9BVlhfWDg2XzY0PXkKPiBDT05GSUdfQ1JZUFRPX0RF
Uz15Cj4gQ09ORklHX0NSWVBUT19GQ1JZUFQ9eQo+ICMgQ09ORklHX0NSWVBUT19LSEFaQUQgaXMg
bm90IHNldAo+IENPTkZJR19DUllQVE9fU0FMU0EyMD15Cj4gQ09ORklHX0NSWVBUT19TQUxTQTIw
X1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19TRUVEPXkKPiBDT05GSUdfQ1JZUFRPX1NFUlBFTlQ9
eQo+IENPTkZJR19DUllQVE9fU0VSUEVOVF9TU0UyX1g4Nl82ND15Cj4gQ09ORklHX0NSWVBUT19T
RVJQRU5UX0FWWF9YODZfNjQ9eQo+IENPTkZJR19DUllQVE9fU0VSUEVOVF9BVlgyX1g4Nl82ND15
Cj4gQ09ORklHX0NSWVBUT19URUE9eQo+ICMgQ09ORklHX0NSWVBUT19UV09GSVNIIGlzIG5vdCBz
ZXQKPiBDT05GSUdfQ1JZUFRPX1RXT0ZJU0hfQ09NTU9OPXkKPiBDT05GSUdfQ1JZUFRPX1RXT0ZJ
U0hfWDg2XzY0PXkKPiBDT05GSUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0XzNXQVk9eQo+ICMgQ09O
RklHX0NSWVBUT19UV09GSVNIX0FWWF9YODZfNjQgaXMgbm90IHNldAo+IAo+ICMKPiAjIENvbXBy
ZXNzaW9uCj4gIwo+ICMgQ09ORklHX0NSWVBUT19ERUZMQVRFIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19DUllQVE9fWkxJQiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQ1JZUFRPX0xaTyBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfQ1JZUFRPX0xaNCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NSWVBUT19MWjRIQz15
Cj4gCj4gIwo+ICMgUmFuZG9tIE51bWJlciBHZW5lcmF0aW9uCj4gIwo+IENPTkZJR19DUllQVE9f
QU5TSV9DUFJORz15Cj4gQ09ORklHX0NSWVBUT19VU0VSX0FQST15Cj4gQ09ORklHX0NSWVBUT19V
U0VSX0FQSV9IQVNIPXkKPiBDT05GSUdfQ1JZUFRPX1VTRVJfQVBJX1NLQ0lQSEVSPXkKPiAjIENP
TkZJR19DUllQVE9fSFcgaXMgbm90IHNldAo+IENPTkZJR19IQVZFX0tWTT15Cj4gQ09ORklHX0hB
VkVfS1ZNX0lSUUNISVA9eQo+IENPTkZJR19IQVZFX0tWTV9JUlFfUk9VVElORz15Cj4gQ09ORklH
X0hBVkVfS1ZNX0VWRU5URkQ9eQo+IENPTkZJR19LVk1fQVBJQ19BUkNISVRFQ1RVUkU9eQo+IENP
TkZJR19LVk1fTU1JTz15Cj4gQ09ORklHX0tWTV9BU1lOQ19QRj15Cj4gQ09ORklHX0hBVkVfS1ZN
X01TST15Cj4gQ09ORklHX0hBVkVfS1ZNX0NQVV9SRUxBWF9JTlRFUkNFUFQ9eQo+IENPTkZJR19L
Vk1fVkZJTz15Cj4gQ09ORklHX1ZJUlRVQUxJWkFUSU9OPXkKPiBDT05GSUdfS1ZNPXkKPiBDT05G
SUdfS1ZNX0lOVEVMPXkKPiAjIENPTkZJR19LVk1fQU1EIGlzIG5vdCBzZXQKPiAjIENPTkZJR19L
Vk1fTU1VX0FVRElUIGlzIG5vdCBzZXQKPiBDT05GSUdfQklOQVJZX1BSSU5URj15Cj4gCj4gIwo+
ICMgTGlicmFyeSByb3V0aW5lcwo+ICMKPiBDT05GSUdfUkFJRDZfUFE9eQo+IENPTkZJR19CSVRS
RVZFUlNFPXkKPiBDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15Cj4gQ09ORklHX0dF
TkVSSUNfU1RSTkxFTl9VU0VSPXkKPiBDT05GSUdfR0VORVJJQ19ORVRfVVRJTFM9eQo+IENPTkZJ
R19HRU5FUklDX0ZJTkRfRklSU1RfQklUPXkKPiBDT05GSUdfR0VORVJJQ19QQ0lfSU9NQVA9eQo+
IENPTkZJR19HRU5FUklDX0lPTUFQPXkKPiBDT05GSUdfR0VORVJJQ19JTz15Cj4gQ09ORklHX0FS
Q0hfVVNFX0NNUFhDSEdfTE9DS1JFRj15Cj4gQ09ORklHX0NSQ19DQ0lUVD15Cj4gQ09ORklHX0NS
QzE2PXkKPiBDT05GSUdfQ1JDX1QxMERJRj15Cj4gQ09ORklHX0NSQ19JVFVfVD15Cj4gQ09ORklH
X0NSQzMyPXkKPiBDT05GSUdfQ1JDMzJfU0VMRlRFU1Q9eQo+IENPTkZJR19DUkMzMl9TTElDRUJZ
OD15Cj4gIyBDT05GSUdfQ1JDMzJfU0xJQ0VCWTQgaXMgbm90IHNldAo+ICMgQ09ORklHX0NSQzMy
X1NBUldBVEUgaXMgbm90IHNldAo+ICMgQ09ORklHX0NSQzMyX0JJVCBpcyBub3Qgc2V0Cj4gQ09O
RklHX0NSQzc9eQo+IENPTkZJR19MSUJDUkMzMkM9eQo+IENPTkZJR19DUkM4PXkKPiBDT05GSUdf
Q1JDNjRfRUNNQT15Cj4gQ09ORklHX1JBTkRPTTMyX1NFTEZURVNUPXkKPiBDT05GSUdfWkxJQl9J
TkZMQVRFPXkKPiBDT05GSUdfWkxJQl9ERUZMQVRFPXkKPiBDT05GSUdfTFpPX0NPTVBSRVNTPXkK
PiBDT05GSUdfTFpPX0RFQ09NUFJFU1M9eQo+IENPTkZJR19MWjRIQ19DT01QUkVTUz15Cj4gQ09O
RklHX0xaNF9ERUNPTVBSRVNTPXkKPiBDT05GSUdfWFpfREVDPXkKPiAjIENPTkZJR19YWl9ERUNf
WDg2IGlzIG5vdCBzZXQKPiAjIENPTkZJR19YWl9ERUNfUE9XRVJQQyBpcyBub3Qgc2V0Cj4gQ09O
RklHX1haX0RFQ19JQTY0PXkKPiBDT05GSUdfWFpfREVDX0FSTT15Cj4gQ09ORklHX1haX0RFQ19B
Uk1USFVNQj15Cj4gQ09ORklHX1haX0RFQ19TUEFSQz15Cj4gQ09ORklHX1haX0RFQ19CQ0o9eQo+
ICMgQ09ORklHX1haX0RFQ19URVNUIGlzIG5vdCBzZXQKPiBDT05GSUdfREVDT01QUkVTU19HWklQ
PXkKPiBDT05GSUdfREVDT01QUkVTU19CWklQMj15Cj4gQ09ORklHX0RFQ09NUFJFU1NfTFpPPXkK
PiBDT05GSUdfREVDT01QUkVTU19MWjQ9eQo+IENPTkZJR19HRU5FUklDX0FMTE9DQVRPUj15Cj4g
Q09ORklHX1JFRURfU09MT01PTj15Cj4gQ09ORklHX1JFRURfU09MT01PTl9FTkM4PXkKPiBDT05G
SUdfUkVFRF9TT0xPTU9OX0RFQzg9eQo+IENPTkZJR19CVFJFRT15Cj4gQ09ORklHX0hBU19JT01F
TT15Cj4gQ09ORklHX0hBU19JT1BPUlQ9eQo+IENPTkZJR19IQVNfRE1BPXkKPiBDT05GSUdfQ1BV
TUFTS19PRkZTVEFDSz15Cj4gQ09ORklHX0NQVV9STUFQPXkKPiBDT05GSUdfRFFMPXkKPiBDT05G
SUdfTkxBVFRSPXkKPiBDT05GSUdfQVJDSF9IQVNfQVRPTUlDNjRfREVDX0lGX1BPU0lUSVZFPXkK
PiAjIENPTkZJR19BVkVSQUdFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19DT1JESUMgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0REUiBpcyBub3Qgc2V0CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:08:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:08:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dh0-0005rj-13; Fri, 10 Jan 2014 15:08:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1dgy-0005rd-8h
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:08:12 +0000
Received: from [85.158.143.35:52180] by server-1.bemta-4.messagelabs.com id
	77/BC-02132-BDC00D25; Fri, 10 Jan 2014 15:08:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389366481!10910351!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27201 invoked from network); 10 Jan 2014 15:08:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:08:02 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AF7g3r012446
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:07:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AF7ecA024218
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:07:41 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AF7euI002515; Fri, 10 Jan 2014 15:07:40 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:07:39 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 51A6D1C18DC; Fri, 10 Jan 2014 10:07:38 -0500 (EST)
Date: Fri, 10 Jan 2014 10:07:38 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jim Davis <jim.epost@gmail.com>, david.vrabel@citrix.com
Message-ID: <20140110150738.GC19124@phenom.dumpdata.com>
References: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, linux-kernel@vger.kernel.org,
	linux-next@vger.kernel.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] randconfig build error with next-20140108,
 in drivers/xen/platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 08, 2014 at 03:32:00PM -0700, Jim Davis wrote:
> Building with the attached random configuration file,
> 
> drivers/xen/platform-pci.c: In function ‘platform_pci_init’:
> drivers/xen/platform-pci.c:131:2: error: implicit declaration of
> function ‘pci_request_region’ [-Werror=implicit-function-declaration]
>   ret = pci_request_region(pdev, 1, DRV_NAME);
>   ^
> drivers/xen/platform-pci.c:170:2: error: implicit declaration of
> function ‘pci_release_region’ [-Werror=implicit-function-declaration]
>   pci_release_region(pdev, 0);
>   ^
> cc1: some warnings being treated as errors
> make[2]: *** [drivers/xen/platform-pci.o] Error 1
> 
> These warnings appeared too:
> 
> warning: (XEN_PVH) selects XEN_PVHVM which has unmet direct
> dependencies (HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC)

Hey Jim,

This fix works for me:


diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index d88bfd6..01b9026 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -53,6 +53,5 @@ config XEN_DEBUG_FS
 
 config XEN_PVH
	bool "Support for running as a PVH guest"
-	depends on X86_64 && XEN
-	select XEN_PVHVM
+	depends on X86_64 && XEN && XEN_PVHVM
 def_bool n

David, you OK with that? You suggested to use 'select' in the patchset
instead of 'depends' and this throws away your suggestion.

FYI, the reason for this failure is that CONFIG_PCI in the below
config file is not at all defined. But CONFIG_XEN_PVH does end
up being enabled with CONFIG_XEN_PVHVM being enabled (with
its dependencies not being fulfilled).

Another way to fix this is to duplicate the 'depends' that
CONFIG_XEN_PVHVM has, but why bother when CONFIG_XEN_PVHVM
already does that.


> #
> # Automatically generated file; DO NOT EDIT.
> # Linux/x86 3.13.0-rc7 Kernel Configuration
> #
> CONFIG_64BIT=y
> CONFIG_X86_64=y
> CONFIG_X86=y
> CONFIG_INSTRUCTION_DECODER=y
> CONFIG_OUTPUT_FORMAT="elf64-x86-64"
> CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
> CONFIG_LOCKDEP_SUPPORT=y
> CONFIG_STACKTRACE_SUPPORT=y
> CONFIG_HAVE_LATENCYTOP_SUPPORT=y
> CONFIG_MMU=y
> CONFIG_NEED_DMA_MAP_STATE=y
> CONFIG_NEED_SG_DMA_LENGTH=y
> CONFIG_GENERIC_ISA_DMA=y
> CONFIG_GENERIC_HWEIGHT=y
> CONFIG_ARCH_MAY_HAVE_PC_FDC=y
> CONFIG_RWSEM_XCHGADD_ALGORITHM=y
> CONFIG_GENERIC_CALIBRATE_DELAY=y
> CONFIG_ARCH_HAS_CPU_RELAX=y
> CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
> CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
> CONFIG_HAVE_SETUP_PER_CPU_AREA=y
> CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
> CONFIG_ARCH_HIBERNATION_POSSIBLE=y
> CONFIG_ARCH_SUSPEND_POSSIBLE=y
> CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
> CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
> CONFIG_ZONE_DMA32=y
> CONFIG_AUDIT_ARCH=y
> CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
> CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
> CONFIG_X86_64_SMP=y
> CONFIG_X86_HT=y
> CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
> CONFIG_ARCH_SUPPORTS_UPROBES=y
> CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
> CONFIG_IRQ_WORK=y
> CONFIG_BUILDTIME_EXTABLE_SORT=y
> 
> #
> # General setup
> #
> CONFIG_INIT_ENV_ARG_LIMIT=32
> CONFIG_CROSS_COMPILE=""
> CONFIG_COMPILE_TEST=y
> CONFIG_LOCALVERSION=""
> # CONFIG_LOCALVERSION_AUTO is not set
> CONFIG_HAVE_KERNEL_GZIP=y
> CONFIG_HAVE_KERNEL_BZIP2=y
> CONFIG_HAVE_KERNEL_LZMA=y
> CONFIG_HAVE_KERNEL_XZ=y
> CONFIG_HAVE_KERNEL_LZO=y
> CONFIG_HAVE_KERNEL_LZ4=y
> CONFIG_KERNEL_GZIP=y
> # CONFIG_KERNEL_BZIP2 is not set
> # CONFIG_KERNEL_LZMA is not set
> # CONFIG_KERNEL_XZ is not set
> # CONFIG_KERNEL_LZO is not set
> # CONFIG_KERNEL_LZ4 is not set
> CONFIG_DEFAULT_HOSTNAME="(none)"
> CONFIG_SWAP=y
> CONFIG_SYSVIPC=y
> CONFIG_POSIX_MQUEUE=y
> # CONFIG_FHANDLE is not set
> CONFIG_AUDIT=y
> CONFIG_AUDITSYSCALL=y
> CONFIG_AUDIT_WATCH=y
> CONFIG_AUDIT_TREE=y
> 
> #
> # IRQ subsystem
> #
> CONFIG_GENERIC_IRQ_PROBE=y
> CONFIG_GENERIC_IRQ_SHOW=y
> CONFIG_GENERIC_PENDING_IRQ=y
> CONFIG_IRQ_DOMAIN=y
> # CONFIG_IRQ_DOMAIN_DEBUG is not set
> CONFIG_IRQ_FORCED_THREADING=y
> CONFIG_SPARSE_IRQ=y
> CONFIG_CLOCKSOURCE_WATCHDOG=y
> CONFIG_ARCH_CLOCKSOURCE_DATA=y
> CONFIG_GENERIC_TIME_VSYSCALL=y
> CONFIG_GENERIC_CLOCKEVENTS=y
> CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
> CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
> CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
> CONFIG_GENERIC_CMOS_UPDATE=y
> 
> #
> # Timers subsystem
> #
> CONFIG_TICK_ONESHOT=y
> CONFIG_NO_HZ_COMMON=y
> # CONFIG_HZ_PERIODIC is not set
> CONFIG_NO_HZ_IDLE=y
> # CONFIG_NO_HZ_FULL is not set
> CONFIG_NO_HZ=y
> CONFIG_HIGH_RES_TIMERS=y
> 
> #
> # CPU/Task time and stats accounting
> #
> CONFIG_TICK_CPU_ACCOUNTING=y
> # CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
> # CONFIG_IRQ_TIME_ACCOUNTING is not set
> CONFIG_BSD_PROCESS_ACCT=y
> CONFIG_BSD_PROCESS_ACCT_V3=y
> CONFIG_TASKSTATS=y
> CONFIG_TASK_DELAY_ACCT=y
> # CONFIG_TASK_XACCT is not set
> 
> #
> # RCU Subsystem
> #
> CONFIG_TREE_RCU=y
> # CONFIG_PREEMPT_RCU is not set
> CONFIG_RCU_STALL_COMMON=y
> CONFIG_CONTEXT_TRACKING=y
> CONFIG_RCU_USER_QS=y
> CONFIG_CONTEXT_TRACKING_FORCE=y
> CONFIG_RCU_FANOUT=64
> CONFIG_RCU_FANOUT_LEAF=16
> CONFIG_RCU_FANOUT_EXACT=y
> # CONFIG_RCU_FAST_NO_HZ is not set
> CONFIG_TREE_RCU_TRACE=y
> # CONFIG_RCU_NOCB_CPU is not set
> CONFIG_IKCONFIG=y
> # CONFIG_IKCONFIG_PROC is not set
> CONFIG_LOG_BUF_SHIFT=17
> CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
> CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> CONFIG_ARCH_SUPPORTS_INT128=y
> CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
> CONFIG_ARCH_USES_NUMA_PROT_NONE=y
> # CONFIG_NUMA_BALANCING_DEFAULT_ENABLED is not set
> CONFIG_NUMA_BALANCING=y
> CONFIG_CGROUPS=y
> # CONFIG_CGROUP_DEBUG is not set
> # CONFIG_CGROUP_FREEZER is not set
> # CONFIG_CGROUP_DEVICE is not set
> # CONFIG_CPUSETS is not set
> CONFIG_CGROUP_CPUACCT=y
> # CONFIG_RESOURCE_COUNTERS is not set
> # CONFIG_CGROUP_PERF is not set
> CONFIG_CGROUP_SCHED=y
> CONFIG_FAIR_GROUP_SCHED=y
> # CONFIG_CFS_BANDWIDTH is not set
> CONFIG_RT_GROUP_SCHED=y
> # CONFIG_BLK_CGROUP is not set
> CONFIG_CHECKPOINT_RESTORE=y
> # CONFIG_NAMESPACES is not set
> # CONFIG_UIDGID_STRICT_TYPE_CHECKS is not set
> CONFIG_SCHED_AUTOGROUP=y
> CONFIG_SYSFS_DEPRECATED=y
> # CONFIG_SYSFS_DEPRECATED_V2 is not set
> CONFIG_RELAY=y
> CONFIG_BLK_DEV_INITRD=y
> CONFIG_INITRAMFS_SOURCE=""
> CONFIG_RD_GZIP=y
> CONFIG_RD_BZIP2=y
> # CONFIG_RD_LZMA is not set
> # CONFIG_RD_XZ is not set
> CONFIG_RD_LZO=y
> CONFIG_RD_LZ4=y
> CONFIG_CC_OPTIMIZE_FOR_SIZE=y
> CONFIG_ANON_INODES=y
> CONFIG_SYSCTL_EXCEPTION_TRACE=y
> CONFIG_HAVE_PCSPKR_PLATFORM=y
> CONFIG_EXPERT=y
> CONFIG_KALLSYMS=y
> CONFIG_KALLSYMS_ALL=y
> # CONFIG_PRINTK is not set
> # CONFIG_BUG is not set
> CONFIG_ELF_CORE=y
> CONFIG_PCSPKR_PLATFORM=y
> CONFIG_BASE_FULL=y
> # CONFIG_FUTEX is not set
> CONFIG_EPOLL=y
> # CONFIG_SIGNALFD is not set
> CONFIG_TIMERFD=y
> CONFIG_EVENTFD=y
> # CONFIG_SHMEM is not set
> # CONFIG_AIO is not set
> # CONFIG_EMBEDDED is not set
> CONFIG_HAVE_PERF_EVENTS=y
> CONFIG_PERF_USE_VMALLOC=y
> 
> #
> # Kernel Performance Events And Counters
> #
> CONFIG_PERF_EVENTS=y
> CONFIG_DEBUG_PERF_USE_VMALLOC=y
> # CONFIG_VM_EVENT_COUNTERS is not set
> # CONFIG_SLUB_DEBUG is not set
> # CONFIG_COMPAT_BRK is not set
> # CONFIG_SLAB is not set
> CONFIG_SLUB=y
> # CONFIG_SLOB is not set
> CONFIG_SLUB_CPU_PARTIAL=y
> CONFIG_PROFILING=y
> CONFIG_TRACEPOINTS=y
> CONFIG_OPROFILE=y
> CONFIG_OPROFILE_EVENT_MULTIPLEX=y
> CONFIG_HAVE_OPROFILE=y
> CONFIG_OPROFILE_NMI_TIMER=y
> # CONFIG_JUMP_LABEL is not set
> # CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
> CONFIG_ARCH_USE_BUILTIN_BSWAP=y
> CONFIG_USER_RETURN_NOTIFIER=y
> CONFIG_HAVE_IOREMAP_PROT=y
> CONFIG_HAVE_KPROBES=y
> CONFIG_HAVE_KRETPROBES=y
> CONFIG_HAVE_OPTPROBES=y
> CONFIG_HAVE_KPROBES_ON_FTRACE=y
> CONFIG_HAVE_ARCH_TRACEHOOK=y
> CONFIG_HAVE_DMA_ATTRS=y
> CONFIG_GENERIC_SMP_IDLE_THREAD=y
> CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
> CONFIG_HAVE_DMA_API_DEBUG=y
> CONFIG_HAVE_HW_BREAKPOINT=y
> CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
> CONFIG_HAVE_USER_RETURN_NOTIFIER=y
> CONFIG_HAVE_PERF_EVENTS_NMI=y
> CONFIG_HAVE_PERF_REGS=y
> CONFIG_HAVE_PERF_USER_STACK_DUMP=y
> CONFIG_HAVE_ARCH_JUMP_LABEL=y
> CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
> CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
> CONFIG_HAVE_CMPXCHG_LOCAL=y
> CONFIG_HAVE_CMPXCHG_DOUBLE=y
> CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
> CONFIG_HAVE_CC_STACKPROTECTOR=y
> # CONFIG_CC_STACKPROTECTOR is not set
> CONFIG_CC_STACKPROTECTOR_NONE=y
> # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
> # CONFIG_CC_STACKPROTECTOR_STRONG is not set
> CONFIG_HAVE_CONTEXT_TRACKING=y
> CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
> CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
> CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
> CONFIG_HAVE_ARCH_SOFT_DIRTY=y
> CONFIG_MODULES_USE_ELF_RELA=y
> CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
> 
> #
> # GCOV-based kernel profiling
> #
> # CONFIG_GCOV_KERNEL is not set
> # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
> CONFIG_RT_MUTEXES=y
> CONFIG_BASE_SMALL=0
> # CONFIG_MODULES is not set
> CONFIG_STOP_MACHINE=y
> CONFIG_BLOCK=y
> CONFIG_BLK_DEV_BSG=y
> CONFIG_BLK_DEV_BSGLIB=y
> # CONFIG_BLK_DEV_INTEGRITY is not set
> CONFIG_BLK_CMDLINE_PARSER=y
> 
> #
> # Partition Types
> #
> CONFIG_PARTITION_ADVANCED=y
> CONFIG_ACORN_PARTITION=y
> CONFIG_ACORN_PARTITION_CUMANA=y
> # CONFIG_ACORN_PARTITION_EESOX is not set
> # CONFIG_ACORN_PARTITION_ICS is not set
> # CONFIG_ACORN_PARTITION_ADFS is not set
> # CONFIG_ACORN_PARTITION_POWERTEC is not set
> CONFIG_ACORN_PARTITION_RISCIX=y
> # CONFIG_AIX_PARTITION is not set
> # CONFIG_OSF_PARTITION is not set
> # CONFIG_AMIGA_PARTITION is not set
> # CONFIG_ATARI_PARTITION is not set
> # CONFIG_MAC_PARTITION is not set
> CONFIG_MSDOS_PARTITION=y
> CONFIG_BSD_DISKLABEL=y
> # CONFIG_MINIX_SUBPARTITION is not set
> CONFIG_SOLARIS_X86_PARTITION=y
> CONFIG_UNIXWARE_DISKLABEL=y
> CONFIG_LDM_PARTITION=y
> CONFIG_LDM_DEBUG=y
> CONFIG_SGI_PARTITION=y
> CONFIG_ULTRIX_PARTITION=y
> # CONFIG_SUN_PARTITION is not set
> CONFIG_KARMA_PARTITION=y
> # CONFIG_EFI_PARTITION is not set
> CONFIG_SYSV68_PARTITION=y
> CONFIG_CMDLINE_PARTITION=y
> 
> #
> # IO Schedulers
> #
> CONFIG_IOSCHED_NOOP=y
> CONFIG_IOSCHED_DEADLINE=y
> # CONFIG_IOSCHED_CFQ is not set
> CONFIG_DEFAULT_DEADLINE=y
> # CONFIG_DEFAULT_NOOP is not set
> CONFIG_DEFAULT_IOSCHED="deadline"
> CONFIG_PREEMPT_NOTIFIERS=y
> CONFIG_PADATA=y
> CONFIG_UNINLINE_SPIN_UNLOCK=y
> CONFIG_FREEZER=y
> 
> #
> # Processor type and features
> #
> CONFIG_ZONE_DMA=y
> CONFIG_SMP=y
> # CONFIG_X86_MPPARSE is not set
> # CONFIG_X86_EXTENDED_PLATFORM is not set
> CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
> # CONFIG_SCHED_OMIT_FRAME_POINTER is not set
> CONFIG_HYPERVISOR_GUEST=y
> CONFIG_PARAVIRT=y
> # CONFIG_PARAVIRT_DEBUG is not set
> # CONFIG_PARAVIRT_SPINLOCKS is not set
> CONFIG_XEN=y
> # CONFIG_XEN_PRIVILEGED_GUEST is not set
> CONFIG_XEN_PVHVM=y
> CONFIG_XEN_MAX_DOMAIN_MEMORY=500
> CONFIG_XEN_SAVE_RESTORE=y
> CONFIG_XEN_DEBUG_FS=y
> CONFIG_XEN_PVH=y
> # CONFIG_KVM_GUEST is not set
> # CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
> CONFIG_PARAVIRT_CLOCK=y
> CONFIG_NO_BOOTMEM=y
> # CONFIG_MEMTEST is not set
> # CONFIG_MK8 is not set
> # CONFIG_MPSC is not set
> # CONFIG_MCORE2 is not set
> # CONFIG_MATOM is not set
> CONFIG_GENERIC_CPU=y
> CONFIG_X86_INTERNODE_CACHE_SHIFT=6
> CONFIG_X86_L1_CACHE_SHIFT=6
> CONFIG_X86_TSC=y
> CONFIG_X86_CMPXCHG64=y
> CONFIG_X86_CMOV=y
> CONFIG_X86_MINIMUM_CPU_FAMILY=64
> CONFIG_X86_DEBUGCTLMSR=y
> # CONFIG_PROCESSOR_SELECT is not set
> CONFIG_CPU_SUP_INTEL=y
> CONFIG_CPU_SUP_AMD=y
> CONFIG_CPU_SUP_CENTAUR=y
> CONFIG_HPET_TIMER=y
> CONFIG_HPET_EMULATE_RTC=y
> # CONFIG_DMI is not set
> CONFIG_SWIOTLB=y
> CONFIG_IOMMU_HELPER=y
> CONFIG_MAXSMP=y
> CONFIG_NR_CPUS=8192
> CONFIG_SCHED_SMT=y
> CONFIG_SCHED_MC=y
> CONFIG_PREEMPT_NONE=y
> # CONFIG_PREEMPT_VOLUNTARY is not set
> # CONFIG_PREEMPT is not set
> CONFIG_PREEMPT_COUNT=y
> CONFIG_X86_LOCAL_APIC=y
> CONFIG_X86_IO_APIC=y
> # CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
> CONFIG_X86_MCE=y
> # CONFIG_X86_MCE_INTEL is not set
> # CONFIG_X86_MCE_AMD is not set
> CONFIG_X86_MCE_INJECT=y
> CONFIG_I8K=y
> # CONFIG_MICROCODE is not set
> # CONFIG_MICROCODE_INTEL_EARLY is not set
> # CONFIG_MICROCODE_AMD_EARLY is not set
> # CONFIG_X86_MSR is not set
> # CONFIG_X86_CPUID is not set
> CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
> CONFIG_ARCH_DMA_ADDR_T_64BIT=y
> CONFIG_DIRECT_GBPAGES=y
> CONFIG_NUMA=y
> # CONFIG_NUMA_EMU is not set
> CONFIG_NODES_SHIFT=10
> CONFIG_ARCH_SPARSEMEM_ENABLE=y
> CONFIG_ARCH_SPARSEMEM_DEFAULT=y
> CONFIG_ARCH_SELECT_MEMORY_MODEL=y
> # CONFIG_ARCH_MEMORY_PROBE is not set
> CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
> CONFIG_SELECT_MEMORY_MODEL=y
> CONFIG_SPARSEMEM_MANUAL=y
> CONFIG_SPARSEMEM=y
> CONFIG_NEED_MULTIPLE_NODES=y
> CONFIG_HAVE_MEMORY_PRESENT=y
> CONFIG_SPARSEMEM_EXTREME=y
> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
> CONFIG_SPARSEMEM_VMEMMAP=y
> CONFIG_HAVE_MEMBLOCK=y
> CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
> CONFIG_ARCH_DISCARD_MEMBLOCK=y
> CONFIG_MEMORY_ISOLATION=y
> # CONFIG_MOVABLE_NODE is not set
> CONFIG_HAVE_BOOTMEM_INFO_NODE=y
> CONFIG_MEMORY_HOTPLUG=y
> CONFIG_MEMORY_HOTPLUG_SPARSE=y
> CONFIG_MEMORY_HOTREMOVE=y
> CONFIG_PAGEFLAGS_EXTENDED=y
> CONFIG_SPLIT_PTLOCK_CPUS=4
> CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
> # CONFIG_BALLOON_COMPACTION is not set
> CONFIG_COMPACTION=y
> CONFIG_MIGRATION=y
> CONFIG_PHYS_ADDR_T_64BIT=y
> CONFIG_ZONE_DMA_FLAG=1
> # CONFIG_BOUNCE is not set
> CONFIG_VIRT_TO_BUS=y
> CONFIG_MMU_NOTIFIER=y
> # CONFIG_KSM is not set
> CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
> CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
> # CONFIG_MEMORY_FAILURE is not set
> CONFIG_TRANSPARENT_HUGEPAGE=y
> CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
> # CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
> CONFIG_CROSS_MEMORY_ATTACH=y
> # CONFIG_CLEANCACHE is not set
> # CONFIG_FRONTSWAP is not set
> # CONFIG_CMA is not set
> # CONFIG_ZBUD is not set
> CONFIG_MEM_SOFT_DIRTY=y
> CONFIG_ZSMALLOC=y
> CONFIG_PGTABLE_MAPPING=y
> CONFIG_X86_CHECK_BIOS_CORRUPTION=y
> # CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
> CONFIG_X86_RESERVE_LOW=64
> # CONFIG_MTRR is not set
> CONFIG_ARCH_RANDOM=y
> CONFIG_X86_SMAP=y
> # CONFIG_SECCOMP is not set
> # CONFIG_HZ_100 is not set
> CONFIG_HZ_250=y
> # CONFIG_HZ_300 is not set
> # CONFIG_HZ_1000 is not set
> CONFIG_HZ=250
> CONFIG_SCHED_HRTICK=y
> CONFIG_KEXEC=y
> # CONFIG_CRASH_DUMP is not set
> # CONFIG_KEXEC_JUMP is not set
> CONFIG_PHYSICAL_START=0x1000000
> CONFIG_RELOCATABLE=y
> CONFIG_PHYSICAL_ALIGN=0x200000
> CONFIG_HOTPLUG_CPU=y
> # CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
> # CONFIG_DEBUG_HOTPLUG_CPU0 is not set
> CONFIG_CMDLINE_BOOL=y
> CONFIG_CMDLINE=""
> CONFIG_CMDLINE_OVERRIDE=y
> CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
> CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
> CONFIG_USE_PERCPU_NUMA_NODE_ID=y
> 
> #
> # Power management and ACPI options
> #
> CONFIG_ARCH_HIBERNATION_HEADER=y
> # CONFIG_SUSPEND is not set
> CONFIG_HIBERNATE_CALLBACKS=y
> CONFIG_HIBERNATION=y
> CONFIG_PM_STD_PARTITION=""
> CONFIG_PM_SLEEP=y
> CONFIG_PM_SLEEP_SMP=y
> CONFIG_PM_AUTOSLEEP=y
> # CONFIG_PM_WAKELOCKS is not set
> CONFIG_PM_RUNTIME=y
> CONFIG_PM=y
> # CONFIG_PM_DEBUG is not set
> # CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
> CONFIG_SFI=y
> 
> #
> # CPU Frequency scaling
> #
> # CONFIG_CPU_FREQ is not set
> 
> #
> # CPU Idle
> #
> CONFIG_CPU_IDLE=y
> # CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
> CONFIG_CPU_IDLE_GOV_LADDER=y
> CONFIG_CPU_IDLE_GOV_MENU=y
> # CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
> # CONFIG_INTEL_IDLE is not set
> 
> #
> # Memory power savings
> #
> # CONFIG_I7300_IDLE is not set
> 
> #
> # Bus options (PCI etc.)
> #
> # CONFIG_PCI is not set
> CONFIG_ISA_DMA_API=y
> CONFIG_PCCARD=y
> CONFIG_PCMCIA=y
> CONFIG_PCMCIA_LOAD_CIS=y
> 
> #
> # PC-card bridges
> #
> CONFIG_X86_SYSFB=y
> 
> #
> # Executable file formats / Emulations
> #
> CONFIG_BINFMT_ELF=y
> CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
> CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
> # CONFIG_BINFMT_SCRIPT is not set
> # CONFIG_HAVE_AOUT is not set
> # CONFIG_BINFMT_MISC is not set
> CONFIG_COREDUMP=y
> # CONFIG_IA32_EMULATION is not set
> CONFIG_X86_DEV_DMA_OPS=y
> CONFIG_NET=y
> 
> #
> # Networking options
> #
> CONFIG_PACKET=y
> CONFIG_PACKET_DIAG=y
> CONFIG_UNIX=y
> CONFIG_UNIX_DIAG=y
> # CONFIG_NET_KEY is not set
> # CONFIG_INET is not set
> CONFIG_NETWORK_SECMARK=y
> CONFIG_NETWORK_PHY_TIMESTAMPING=y
> CONFIG_NETFILTER=y
> # CONFIG_NETFILTER_DEBUG is not set
> # CONFIG_NETFILTER_ADVANCED is not set
> CONFIG_ATM=y
> CONFIG_ATM_LANE=y
> # CONFIG_BRIDGE is not set
> CONFIG_HAVE_NET_DSA=y
> CONFIG_NET_DSA=y
> CONFIG_NET_DSA_TAG_DSA=y
> CONFIG_NET_DSA_TAG_EDSA=y
> CONFIG_NET_DSA_TAG_TRAILER=y
> CONFIG_VLAN_8021Q=y
> # CONFIG_VLAN_8021Q_GVRP is not set
> # CONFIG_VLAN_8021Q_MVRP is not set
> CONFIG_DECNET=y
> CONFIG_DECNET_ROUTER=y
> CONFIG_LLC=y
> CONFIG_LLC2=y
> CONFIG_IPX=y
> CONFIG_IPX_INTERN=y
> # CONFIG_ATALK is not set
> # CONFIG_X25 is not set
> CONFIG_LAPB=y
> CONFIG_PHONET=y
> # CONFIG_IEEE802154 is not set
> CONFIG_NET_SCHED=y
> 
> #
> # Queueing/Scheduling
> #
> # CONFIG_NET_SCH_CBQ is not set
> # CONFIG_NET_SCH_HTB is not set
> CONFIG_NET_SCH_HFSC=y
> CONFIG_NET_SCH_ATM=y
> # CONFIG_NET_SCH_PRIO is not set
> CONFIG_NET_SCH_MULTIQ=y
> CONFIG_NET_SCH_RED=y
> CONFIG_NET_SCH_SFB=y
> CONFIG_NET_SCH_SFQ=y
> CONFIG_NET_SCH_TEQL=y
> # CONFIG_NET_SCH_TBF is not set
> # CONFIG_NET_SCH_GRED is not set
> # CONFIG_NET_SCH_DSMARK is not set
> # CONFIG_NET_SCH_NETEM is not set
> CONFIG_NET_SCH_DRR=y
> # CONFIG_NET_SCH_MQPRIO is not set
> CONFIG_NET_SCH_CHOKE=y
> CONFIG_NET_SCH_QFQ=y
> CONFIG_NET_SCH_CODEL=y
> # CONFIG_NET_SCH_FQ_CODEL is not set
> CONFIG_NET_SCH_FQ=y
> CONFIG_NET_SCH_HHF=y
> CONFIG_NET_SCH_PIE=y
> CONFIG_NET_SCH_INGRESS=y
> CONFIG_NET_SCH_PLUG=y
> 
> #
> # Classification
> #
> CONFIG_NET_CLS=y
> CONFIG_NET_CLS_BASIC=y
> CONFIG_NET_CLS_TCINDEX=y
> CONFIG_NET_CLS_FW=y
> CONFIG_NET_CLS_U32=y
> # CONFIG_CLS_U32_PERF is not set
> # CONFIG_CLS_U32_MARK is not set
> CONFIG_NET_CLS_RSVP=y
> CONFIG_NET_CLS_RSVP6=y
> CONFIG_NET_CLS_FLOW=y
> # CONFIG_NET_CLS_CGROUP is not set
> CONFIG_NET_CLS_BPF=y
> CONFIG_NET_EMATCH=y
> CONFIG_NET_EMATCH_STACK=32
> # CONFIG_NET_EMATCH_CMP is not set
> # CONFIG_NET_EMATCH_NBYTE is not set
> # CONFIG_NET_EMATCH_U32 is not set
> # CONFIG_NET_EMATCH_META is not set
> # CONFIG_NET_EMATCH_TEXT is not set
> CONFIG_NET_EMATCH_CANID=y
> CONFIG_NET_CLS_ACT=y
> # CONFIG_NET_ACT_POLICE is not set
> CONFIG_NET_ACT_GACT=y
> # CONFIG_GACT_PROB is not set
> CONFIG_NET_ACT_MIRRED=y
> CONFIG_NET_ACT_NAT=y
> CONFIG_NET_ACT_PEDIT=y
> # CONFIG_NET_ACT_SIMP is not set
> CONFIG_NET_ACT_SKBEDIT=y
> # CONFIG_NET_CLS_IND is not set
> CONFIG_NET_SCH_FIFO=y
> CONFIG_DCB=y
> CONFIG_BATMAN_ADV=y
> # CONFIG_BATMAN_ADV_NC is not set
> CONFIG_BATMAN_ADV_DEBUG=y
> CONFIG_OPENVSWITCH=y
> CONFIG_VSOCKETS=y
> CONFIG_NETLINK_MMAP=y
> CONFIG_NETLINK_DIAG=y
> # CONFIG_NET_MPLS_GSO is not set
> CONFIG_HSR=y
> CONFIG_RPS=y
> CONFIG_RFS_ACCEL=y
> CONFIG_XPS=y
> # CONFIG_CGROUP_NET_PRIO is not set
> CONFIG_CGROUP_NET_CLASSID=y
> CONFIG_NET_RX_BUSY_POLL=y
> CONFIG_BQL=y
> CONFIG_NET_FLOW_LIMIT=y
> 
> #
> # Network testing
> #
> # CONFIG_HAMRADIO is not set
> CONFIG_CAN=y
> CONFIG_CAN_RAW=y
> # CONFIG_CAN_BCM is not set
> CONFIG_CAN_GW=y
> 
> #
> # CAN Device Drivers
> #
> # CONFIG_CAN_VCAN is not set
> CONFIG_CAN_DEV=y
> # CONFIG_CAN_CALC_BITTIMING is not set
> # CONFIG_CAN_LEDS is not set
> # CONFIG_CAN_MCP251X is not set
> CONFIG_CAN_SJA1000=y
> # CONFIG_CAN_SJA1000_ISA is not set
> CONFIG_CAN_SJA1000_PLATFORM=y
> CONFIG_CAN_EMS_PCMCIA=y
> CONFIG_CAN_PEAK_PCMCIA=y
> CONFIG_CAN_C_CAN=y
> CONFIG_CAN_C_CAN_PLATFORM=y
> CONFIG_CAN_CC770=y
> CONFIG_CAN_CC770_ISA=y
> CONFIG_CAN_CC770_PLATFORM=y
> CONFIG_CAN_SOFTING=y
> CONFIG_CAN_SOFTING_CS=y
> CONFIG_CAN_DEBUG_DEVICES=y
> # CONFIG_IRDA is not set
> CONFIG_BT=y
> CONFIG_BT_RFCOMM=y
> CONFIG_BT_BNEP=y
> CONFIG_BT_BNEP_MC_FILTER=y
> # CONFIG_BT_BNEP_PROTO_FILTER is not set
> 
> #
> # Bluetooth device drivers
> #
> CONFIG_BT_HCIDTL1=y
> CONFIG_BT_HCIBT3C=y
> CONFIG_BT_HCIBLUECARD=y
> CONFIG_BT_HCIBTUART=y
> # CONFIG_BT_HCIVHCI is not set
> CONFIG_BT_MRVL=y
> CONFIG_FIB_RULES=y
> # CONFIG_WIRELESS is not set
> CONFIG_WIMAX=y
> CONFIG_WIMAX_DEBUG_LEVEL=8
> CONFIG_RFKILL=y
> CONFIG_RFKILL_REGULATOR=y
> CONFIG_NET_9P=y
> CONFIG_NET_9P_VIRTIO=y
> # CONFIG_NET_9P_DEBUG is not set
> CONFIG_CAIF=y
> CONFIG_CAIF_DEBUG=y
> # CONFIG_CAIF_NETDEV is not set
> CONFIG_CAIF_USB=y
> # CONFIG_NFC is not set
> CONFIG_HAVE_BPF_JIT=y
> 
> #
> # Device Drivers
> #
> 
> #
> # Generic Driver Options
> #
> CONFIG_UEVENT_HELPER_PATH=""
> CONFIG_DEVTMPFS=y
> CONFIG_DEVTMPFS_MOUNT=y
> # CONFIG_STANDALONE is not set
> # CONFIG_PREVENT_FIRMWARE_BUILD is not set
> CONFIG_FW_LOADER=y
> CONFIG_FIRMWARE_IN_KERNEL=y
> CONFIG_EXTRA_FIRMWARE=""
> CONFIG_FW_LOADER_USER_HELPER=y
> # CONFIG_DEBUG_DRIVER is not set
> CONFIG_DEBUG_DEVRES=y
> CONFIG_SYS_HYPERVISOR=y
> # CONFIG_GENERIC_CPU_DEVICES is not set
> CONFIG_REGMAP=y
> CONFIG_REGMAP_I2C=y
> CONFIG_REGMAP_SPI=y
> CONFIG_REGMAP_MMIO=y
> CONFIG_REGMAP_IRQ=y
> CONFIG_DMA_SHARED_BUFFER=y
> 
> #
> # Bus devices
> #
> CONFIG_CONNECTOR=y
> CONFIG_PROC_EVENTS=y
> # CONFIG_MTD is not set
> CONFIG_PARPORT=y
> CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
> # CONFIG_PARPORT_PC is not set
> # CONFIG_PARPORT_GSC is not set
> CONFIG_PARPORT_AX88796=y
> # CONFIG_PARPORT_1284 is not set
> CONFIG_PARPORT_NOT_PC=y
> CONFIG_BLK_DEV=y
> CONFIG_BLK_DEV_NULL_BLK=y
> CONFIG_BLK_DEV_FD=y
> # CONFIG_ZRAM is not set
> # CONFIG_BLK_DEV_COW_COMMON is not set
> CONFIG_BLK_DEV_LOOP=y
> CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
> # CONFIG_BLK_DEV_CRYPTOLOOP is not set
> 
> #
> # DRBD disabled because PROC_FS or INET not selected
> #
> CONFIG_BLK_DEV_NBD=y
> # CONFIG_BLK_DEV_RAM is not set
> CONFIG_CDROM_PKTCDVD=y
> CONFIG_CDROM_PKTCDVD_BUFFERS=8
> # CONFIG_CDROM_PKTCDVD_WCACHE is not set
> CONFIG_ATA_OVER_ETH=y
> CONFIG_XEN_BLKDEV_FRONTEND=y
> CONFIG_VIRTIO_BLK=y
> # CONFIG_BLK_DEV_HD is not set
> 
> #
> # Misc devices
> #
> CONFIG_AD525X_DPOT=y
> # CONFIG_AD525X_DPOT_I2C is not set
> CONFIG_AD525X_DPOT_SPI=y
> # CONFIG_DUMMY_IRQ is not set
> CONFIG_ICS932S401=y
> CONFIG_ATMEL_SSC=y
> # CONFIG_ENCLOSURE_SERVICES is not set
> CONFIG_APDS9802ALS=y
> CONFIG_ISL29003=y
> # CONFIG_ISL29020 is not set
> # CONFIG_SENSORS_TSL2550 is not set
> CONFIG_SENSORS_BH1780=y
> CONFIG_SENSORS_BH1770=y
> # CONFIG_SENSORS_APDS990X is not set
> CONFIG_HMC6352=y
> CONFIG_DS1682=y
> CONFIG_TI_DAC7512=y
> CONFIG_VMWARE_BALLOON=y
> CONFIG_BMP085=y
> CONFIG_BMP085_I2C=y
> CONFIG_BMP085_SPI=y
> # CONFIG_USB_SWITCH_FSA9480 is not set
> CONFIG_LATTICE_ECP3_CONFIG=y
> CONFIG_SRAM=y
> CONFIG_C2PORT=y
> CONFIG_C2PORT_DURAMAR_2150=y
> 
> #
> # EEPROM support
> #
> CONFIG_EEPROM_AT24=y
> # CONFIG_EEPROM_AT25 is not set
> # CONFIG_EEPROM_LEGACY is not set
> CONFIG_EEPROM_MAX6875=y
> CONFIG_EEPROM_93CX6=y
> # CONFIG_EEPROM_93XX46 is not set
> 
> #
> # Texas Instruments shared transport line discipline
> #
> 
> #
> # Altera FPGA
IGZpcm13YXJlIGRvd25sb2FkIG1vZHVsZQo+ICMKPiAjIENPTkZJR19BTFRFUkFfU1RBUEwgaXMg
bm90IHNldAo+IAo+ICMKPiAjIEludGVsIE1JQyBIb3N0IERyaXZlcgo+ICMKPiAKPiAjCj4gIyBJ
bnRlbCBNSUMgQ2FyZCBEcml2ZXIKPiAjCj4gIyBDT05GSUdfSU5URUxfTUlDX0NBUkQgaXMgbm90
IHNldAo+IENPTkZJR19IQVZFX0lERT15Cj4gQ09ORklHX0lERT15Cj4gCj4gIwo+ICMgUGxlYXNl
IHNlZSBEb2N1bWVudGF0aW9uL2lkZS9pZGUudHh0IGZvciBoZWxwL2luZm8gb24gSURFIGRyaXZl
cwo+ICMKPiBDT05GSUdfSURFX0FUQVBJPXkKPiAjIENPTkZJR19CTEtfREVWX0lERV9TQVRBIGlz
IG5vdCBzZXQKPiBDT05GSUdfSURFX0dEPXkKPiAjIENPTkZJR19JREVfR0RfQVRBIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19JREVfR0RfQVRBUEkgaXMgbm90IHNldAo+IENPTkZJR19CTEtfREVWX0lE
RUNTPXkKPiBDT05GSUdfQkxLX0RFVl9JREVDRD15Cj4gQ09ORklHX0JMS19ERVZfSURFQ0RfVkVS
Qk9TRV9FUlJPUlM9eQo+ICMgQ09ORklHX0JMS19ERVZfSURFVEFQRSBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfSURFX1RBU0tfSU9DVEwgaXMgbm90IHNldAo+IENPTkZJR19JREVfUFJPQ19GUz15Cj4g
Cj4gIwo+ICMgSURFIGNoaXBzZXQgc3VwcG9ydC9idWdmaXhlcwo+ICMKPiBDT05GSUdfSURFX0dF
TkVSSUM9eQo+IENPTkZJR19CTEtfREVWX1BMQVRGT1JNPXkKPiAjIENPTkZJR19CTEtfREVWX0NN
RDY0MCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQkxLX0RFVl9JREVETUEgaXMgbm90IHNldAo+IAo+
ICMKPiAjIFNDU0kgZGV2aWNlIHN1cHBvcnQKPiAjCj4gQ09ORklHX1NDU0lfTU9EPXkKPiBDT05G
SUdfUkFJRF9BVFRSUz15Cj4gQ09ORklHX1NDU0k9eQo+IENPTkZJR19TQ1NJX0RNQT15Cj4gQ09O
RklHX1NDU0lfVEdUPXkKPiAjIENPTkZJR19TQ1NJX05FVExJTksgaXMgbm90IHNldAo+ICMgQ09O
RklHX1NDU0lfUFJPQ19GUyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU0NTSSBzdXBwb3J0IHR5cGUg
KGRpc2ssIHRhcGUsIENELVJPTSkKPiAjCj4gQ09ORklHX0JMS19ERVZfU0Q9eQo+ICMgQ09ORklH
X0NIUl9ERVZfU1QgaXMgbm90IHNldAo+ICMgQ09ORklHX0NIUl9ERVZfT1NTVCBpcyBub3Qgc2V0
Cj4gQ09ORklHX0JMS19ERVZfU1I9eQo+IENPTkZJR19CTEtfREVWX1NSX1ZFTkRPUj15Cj4gQ09O
RklHX0NIUl9ERVZfU0c9eQo+IENPTkZJR19DSFJfREVWX1NDSD15Cj4gIyBDT05GSUdfU0NTSV9N
VUxUSV9MVU4gaXMgbm90IHNldAo+ICMgQ09ORklHX1NDU0lfQ09OU1RBTlRTIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19TQ1NJX0xPR0dJTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX1NDU0lfU0NBTl9B
U1lOQyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgU0NTSSBUcmFuc3BvcnRzCj4gIwo+ICMgQ09ORklH
X1NDU0lfU1BJX0FUVFJTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TQ1NJX0ZDX0FUVFJTIGlzIG5v
dCBzZXQKPiBDT05GSUdfU0NTSV9JU0NTSV9BVFRSUz15Cj4gQ09ORklHX1NDU0lfU0FTX0FUVFJT
PXkKPiBDT05GSUdfU0NTSV9TQVNfTElCU0FTPXkKPiBDT05GSUdfU0NTSV9TQVNfQVRBPXkKPiBD
T05GSUdfU0NTSV9TQVNfSE9TVF9TTVA9eQo+IENPTkZJR19TQ1NJX1NSUF9BVFRSUz15Cj4gIyBD
T05GSUdfU0NTSV9TUlBfVEdUX0FUVFJTIGlzIG5vdCBzZXQKPiBDT05GSUdfU0NTSV9MT1dMRVZF
TD15Cj4gQ09ORklHX0lTQ1NJX0JPT1RfU1lTRlM9eQo+ICMgQ09ORklHX1NDU0lfVUZTSENEIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19MSUJGQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTElCRkNPRSBp
cyBub3Qgc2V0Cj4gIyBDT05GSUdfU0NTSV9ERUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NDU0lf
VklSVElPPXkKPiAjIENPTkZJR19TQ1NJX0xPV0xFVkVMX1BDTUNJQSBpcyBub3Qgc2V0Cj4gQ09O
RklHX1NDU0lfREg9eQo+IENPTkZJR19TQ1NJX0RIX1JEQUM9eQo+IENPTkZJR19TQ1NJX0RIX0hQ
X1NXPXkKPiBDT05GSUdfU0NTSV9ESF9FTUM9eQo+IENPTkZJR19TQ1NJX0RIX0FMVUE9eQo+IENP
TkZJR19TQ1NJX09TRF9JTklUSUFUT1I9eQo+ICMgQ09ORklHX1NDU0lfT1NEX1VMRCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1NDU0lfT1NEX0RQUklOVF9TRU5TRT0xCj4gIyBDT05GSUdfU0NTSV9PU0Rf
REVCVUcgaXMgbm90IHNldAo+IENPTkZJR19BVEE9eQo+ICMgQ09ORklHX0FUQV9OT05TVEFOREFS
RCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FUQV9WRVJCT1NFX0VSUk9SPXkKPiBDT05GSUdfU0FUQV9Q
TVA9eQo+IAo+ICMKPiAjIENvbnRyb2xsZXJzIHdpdGggbm9uLVNGRiBuYXRpdmUgaW50ZXJmYWNl
Cj4gIwo+ICMgQ09ORklHX1NBVEFfQUhDSV9QTEFURk9STSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FU
QV9TRkY9eQo+IAo+ICMKPiAjIFNGRiBjb250cm9sbGVycyB3aXRoIGN1c3RvbSBETUEgaW50ZXJm
YWNlCj4gIwo+IENPTkZJR19BVEFfQk1ETUE9eQo+IAo+ICMKPiAjIFNBVEEgU0ZGIGNvbnRyb2xs
ZXJzIHdpdGggQk1ETUEKPiAjCj4gQ09ORklHX1NBVEFfSElHSEJBTks9eQo+IENPTkZJR19TQVRB
X01WPXkKPiBDT05GSUdfU0FUQV9SQ0FSPXkKPiAKPiAjCj4gIyBQQVRBIFNGRiBjb250cm9sbGVy
cyB3aXRoIEJNRE1BCj4gIwo+ICMgQ09ORklHX1BBVEFfQVJBU0FOX0NGIGlzIG5vdCBzZXQKPiAK
PiAjCj4gIyBQSU8tb25seSBTRkYgY29udHJvbGxlcnMKPiAjCj4gQ09ORklHX1BBVEFfUENNQ0lB
PXkKPiAjIENPTkZJR19QQVRBX1BMQVRGT1JNIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBHZW5lcmlj
IGZhbGxiYWNrIC8gbGVnYWN5IGRyaXZlcnMKPiAjCj4gQ09ORklHX01EPXkKPiAjIENPTkZJR19C
TEtfREVWX01EIGlzIG5vdCBzZXQKPiBDT05GSUdfQkNBQ0hFPXkKPiBDT05GSUdfQkNBQ0hFX0RF
QlVHPXkKPiAjIENPTkZJR19CQ0FDSEVfQ0xPU1VSRVNfREVCVUcgaXMgbm90IHNldAo+IENPTkZJ
R19CTEtfREVWX0RNPXkKPiAjIENPTkZJR19ETV9ERUJVRyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RN
X0JVRklPPXkKPiBDT05GSUdfRE1fQklPX1BSSVNPTj15Cj4gQ09ORklHX0RNX1BFUlNJU1RFTlRf
REFUQT15Cj4gIyBDT05GSUdfRE1fQ1JZUFQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RNX1NOQVBT
SE9UIGlzIG5vdCBzZXQKPiBDT05GSUdfRE1fVEhJTl9QUk9WSVNJT05JTkc9eQo+IENPTkZJR19E
TV9ERUJVR19CTE9DS19TVEFDS19UUkFDSU5HPXkKPiBDT05GSUdfRE1fQ0FDSEU9eQo+ICMgQ09O
RklHX0RNX0NBQ0hFX01RIGlzIG5vdCBzZXQKPiBDT05GSUdfRE1fQ0FDSEVfQ0xFQU5FUj15Cj4g
Q09ORklHX0RNX01JUlJPUj15Cj4gQ09ORklHX0RNX0xPR19VU0VSU1BBQ0U9eQo+ICMgQ09ORklH
X0RNX1JBSUQgaXMgbm90IHNldAo+ICMgQ09ORklHX0RNX1pFUk8gaXMgbm90IHNldAo+ICMgQ09O
RklHX0RNX01VTFRJUEFUSCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfRE1fREVMQVkgaXMgbm90IHNl
dAo+IENPTkZJR19ETV9VRVZFTlQ9eQo+IENPTkZJR19ETV9GTEFLRVk9eQo+IENPTkZJR19ETV9W
RVJJVFk9eQo+IENPTkZJR19ETV9TV0lUQ0g9eQo+IENPTkZJR19UQVJHRVRfQ09SRT15Cj4gIyBD
T05GSUdfVENNX0lCTE9DSyBpcyBub3Qgc2V0Cj4gQ09ORklHX1RDTV9GSUxFSU89eQo+ICMgQ09O
RklHX1RDTV9QU0NTSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTE9PUEJBQ0tfVEFSR0VUIGlzIG5v
dCBzZXQKPiBDT05GSUdfSVNDU0lfVEFSR0VUPXkKPiBDT05GSUdfTUFDSU5UT1NIX0RSSVZFUlM9
eQo+IENPTkZJR19ORVRERVZJQ0VTPXkKPiBDT05GSUdfTUlJPXkKPiAjIENPTkZJR19ORVRfQ09S
RSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQVJDTkVUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BVE1f
RFJJVkVSUyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwo+ICMK
PiBDT05GSUdfQ0FJRl9TUElfU0xBVkU9eQo+IENPTkZJR19DQUlGX1NQSV9TWU5DPXkKPiAjIENP
TkZJR19DQUlGX0hTSSBpcyBub3Qgc2V0Cj4gQ09ORklHX0NBSUZfVklSVElPPXkKPiBDT05GSUdf
VkhPU1RfTkVUPXkKPiBDT05GSUdfVkhPU1RfUklORz15Cj4gQ09ORklHX1ZIT1NUPXkKPiAKPiAj
Cj4gIyBEaXN0cmlidXRlZCBTd2l0Y2ggQXJjaGl0ZWN0dXJlIGRyaXZlcnMKPiAjCj4gQ09ORklH
X05FVF9EU0FfTVY4OEU2WFhYPXkKPiBDT05GSUdfTkVUX0RTQV9NVjg4RTYwNjA9eQo+IENPTkZJ
R19ORVRfRFNBX01WODhFNlhYWF9ORUVEX1BQVT15Cj4gQ09ORklHX05FVF9EU0FfTVY4OEU2MTMx
PXkKPiBDT05GSUdfTkVUX0RTQV9NVjg4RTYxMjNfNjFfNjU9eQo+IENPTkZJR19FVEhFUk5FVD15
Cj4gQ09ORklHX05FVF9WRU5ET1JfM0NPTT15Cj4gQ09ORklHX1BDTUNJQV8zQzU3ND15Cj4gQ09O
RklHX1BDTUNJQV8zQzU4OT15Cj4gQ09ORklHX05FVF9WRU5ET1JfQU1EPXkKPiBDT05GSUdfUENN
Q0lBX05NQ0xBTj15Cj4gQ09ORklHX05FVF9WRU5ET1JfQVJDPXkKPiAjIENPTkZJR19ORVRfQ0FE
RU5DRSBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9WRU5ET1JfQlJPQURDT009eQo+ICMgQ09ORklH
X0I0NCBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9DQUxYRURBX1hHTUFDPXkKPiBDT05GSUdfRE5F
VD15Cj4gIyBDT05GSUdfTkVUX1ZFTkRPUl9GVUpJVFNVIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVU
X1ZFTkRPUl9JTlRFTD15Cj4gQ09ORklHX05FVF9WRU5ET1JfSTgyNVhYPXkKPiAjIENPTkZJR19O
RVRfVkVORE9SX01JQ1JFTCBpcyBub3Qgc2V0Cj4gQ09ORklHX05FVF9WRU5ET1JfTUlDUk9DSElQ
PXkKPiBDT05GSUdfRU5DMjhKNjA9eQo+IENPTkZJR19FTkMyOEo2MF9XUklURVZFUklGWT15Cj4g
IyBDT05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JIGlzIG5vdCBzZXQKPiBDT05GSUdfRVRIT0M9eQo+
IENPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQo+ICMgQ09ORklHX0FUUCBpcyBub3Qgc2V0Cj4g
Q09ORklHX1NIX0VUSD15Cj4gQ09ORklHX05FVF9WRU5ET1JfU0VFUT15Cj4gQ09ORklHX05FVF9W
RU5ET1JfU01TQz15Cj4gQ09ORklHX1BDTUNJQV9TTUM5MUM5Mj15Cj4gQ09ORklHX1NNU0M5MTFY
PXkKPiAjIENPTkZJR19TTVNDOTExWF9BUkNIX0hPT0tTIGlzIG5vdCBzZXQKPiBDT05GSUdfTkVU
X1ZFTkRPUl9TVE1JQ1JPPXkKPiAjIENPTkZJR19TVE1NQUNfRVRIIGlzIG5vdCBzZXQKPiBDT05G
SUdfTkVUX1ZFTkRPUl9WSUE9eQo+ICMgQ09ORklHX05FVF9WRU5ET1JfV0laTkVUIGlzIG5vdCBz
ZXQKPiBDT05GSUdfTkVUX1ZFTkRPUl9YSVJDT009eQo+ICMgQ09ORklHX1BDTUNJQV9YSVJDMlBT
IGlzIG5vdCBzZXQKPiBDT05GSUdfUEhZTElCPXkKPiAKPiAjCj4gIyBNSUkgUEhZIGRldmljZSBk
cml2ZXJzCj4gIwo+IENPTkZJR19BVDgwM1hfUEhZPXkKPiAjIENPTkZJR19BTURfUEhZIGlzIG5v
dCBzZXQKPiBDT05GSUdfTUFSVkVMTF9QSFk9eQo+IENPTkZJR19EQVZJQ09NX1BIWT15Cj4gQ09O
RklHX1FTRU1JX1BIWT15Cj4gIyBDT05GSUdfTFhUX1BIWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
Q0lDQURBX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJVEVTU0VfUEhZPXkKPiBDT05GSUdfU01T
Q19QSFk9eQo+IENPTkZJR19CUk9BRENPTV9QSFk9eQo+ICMgQ09ORklHX0JDTTg3WFhfUEhZIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19JQ1BMVVNfUEhZIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVBTFRF
S19QSFk9eQo+ICMgQ09ORklHX05BVElPTkFMX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX1NURTEw
WFA9eQo+ICMgQ09ORklHX0xTSV9FVDEwMTFDX1BIWSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTUlD
UkVMX1BIWSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZJWEVEX1BIWT15Cj4gQ09ORklHX01ESU9fQklU
QkFORz15Cj4gQ09ORklHX01JQ1JFTF9LUzg5OTVNQT15Cj4gIyBDT05GSUdfUExJUCBpcyBub3Qg
c2V0Cj4gQ09ORklHX1BQUD15Cj4gQ09ORklHX1BQUF9CU0RDT01QPXkKPiAjIENPTkZJR19QUFBf
REVGTEFURSBpcyBub3Qgc2V0Cj4gQ09ORklHX1BQUF9GSUxURVI9eQo+ICMgQ09ORklHX1BQUF9N
UFBFIGlzIG5vdCBzZXQKPiBDT05GSUdfUFBQX01VTFRJTElOSz15Cj4gQ09ORklHX1BQUE9BVE09
eQo+IENPTkZJR19QUFBPRT15Cj4gQ09ORklHX1NMSEM9eQo+ICMgQ09ORklHX1dMQU4gaXMgbm90
IHNldAo+IAo+ICMKPiAjIFdpTUFYIFdpcmVsZXNzIEJyb2FkYmFuZCBkZXZpY2VzCj4gIwo+IAo+
ICMKPiAjIEVuYWJsZSBVU0Igc3VwcG9ydCB0byBzZWUgV2lNQVggVVNCIGRyaXZlcnMKPiAjCj4g
Q09ORklHX1dBTj15Cj4gQ09ORklHX0hETEM9eQo+IENPTkZJR19IRExDX1JBVz15Cj4gIyBDT05G
SUdfSERMQ19SQVdfRVRIIGlzIG5vdCBzZXQKPiBDT05GSUdfSERMQ19DSVNDTz15Cj4gQ09ORklH
X0hETENfRlI9eQo+IENPTkZJR19IRExDX1BQUD15Cj4gQ09ORklHX0hETENfWDI1PXkKPiBDT05G
SUdfRExDST15Cj4gQ09ORklHX0RMQ0lfTUFYPTgKPiBDT05GSUdfU0JOST15Cj4gIyBDT05GSUdf
U0JOSV9NVUxUSUxJTkUgaXMgbm90IHNldAo+IENPTkZJR19YRU5fTkVUREVWX0ZST05URU5EPXkK
PiAjIENPTkZJR19JU0ROIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBJbnB1dCBkZXZpY2Ugc3VwcG9y
dAo+ICMKPiAjIENPTkZJR19JTlBVVCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgSGFyZHdhcmUgSS9P
IHBvcnRzCj4gIwo+ICMgQ09ORklHX1NFUklPIGlzIG5vdCBzZXQKPiBDT05GSUdfQVJDSF9NSUdI
VF9IQVZFX1BDX1NFUklPPXkKPiBDT05GSUdfR0FNRVBPUlQ9eQo+ICMgQ09ORklHX0dBTUVQT1JU
X05TNTU4IGlzIG5vdCBzZXQKPiAjIENPTkZJR19HQU1FUE9SVF9MNCBpcyBub3Qgc2V0Cj4gCj4g
Iwo+ICMgQ2hhcmFjdGVyIGRldmljZXMKPiAjCj4gIyBDT05GSUdfVFRZIGlzIG5vdCBzZXQKPiBD
T05GSUdfREVWS01FTT15Cj4gIyBDT05GSUdfUFJJTlRFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1BQ
REVWPXkKPiBDT05GSUdfSVBNSV9IQU5ETEVSPXkKPiAjIENPTkZJR19JUE1JX1BBTklDX0VWRU5U
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19JUE1JX0RFVklDRV9JTlRFUkZBQ0UgaXMgbm90IHNldAo+
IENPTkZJR19JUE1JX1NJPXkKPiBDT05GSUdfSVBNSV9XQVRDSERPRz15Cj4gQ09ORklHX0lQTUlf
UE9XRVJPRkY9eQo+IENPTkZJR19IV19SQU5ET009eQo+ICMgQ09ORklHX0hXX1JBTkRPTV9USU1F
UklPTUVNIGlzIG5vdCBzZXQKPiAjIENPTkZJR19IV19SQU5ET01fVklBIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19IV19SQU5ET01fVklSVElPIGlzIG5vdCBzZXQKPiBDT05GSUdfSFdfUkFORE9NX1RQ
TT15Cj4gIyBDT05GSUdfTlZSQU0gaXMgbm90IHNldAo+IAo+ICMKPiAjIFBDTUNJQSBjaGFyYWN0
ZXIgZGV2aWNlcwo+ICMKPiBDT05GSUdfQ0FSRE1BTl80MDAwPXkKPiAjIENPTkZJR19DQVJETUFO
XzQwNDAgaXMgbm90IHNldAo+IENPTkZJR19SQVdfRFJJVkVSPXkKPiBDT05GSUdfTUFYX1JBV19E
RVZTPTI1Ngo+IENPTkZJR19IQU5HQ0hFQ0tfVElNRVI9eQo+IENPTkZJR19UQ0dfVFBNPXkKPiBD
T05GSUdfVENHX1RJUz15Cj4gQ09ORklHX1RDR19USVNfSTJDX0FUTUVMPXkKPiBDT05GSUdfVENH
X1RJU19JMkNfSU5GSU5FT049eQo+IENPTkZJR19UQ0dfVElTX0kyQ19OVVZPVE9OPXkKPiAjIENP
TkZJR19UQ0dfTlNDIGlzIG5vdCBzZXQKPiBDT05GSUdfVENHX0FUTUVMPXkKPiBDT05GSUdfVENH
X1hFTj15Cj4gQ09ORklHX1RFTENMT0NLPXkKPiBDT05GSUdfSTJDPXkKPiBDT05GSUdfSTJDX0JP
QVJESU5GTz15Cj4gIyBDT05GSUdfSTJDX0NPTVBBVCBpcyBub3Qgc2V0Cj4gQ09ORklHX0kyQ19D
SEFSREVWPXkKPiBDT05GSUdfSTJDX01VWD15Cj4gCj4gIwo+ICMgTXVsdGlwbGV4ZXIgSTJDIENo
aXAgc3VwcG9ydAo+ICMKPiBDT05GSUdfSTJDX01VWF9QQ0E5NTQxPXkKPiBDT05GSUdfSTJDX01V
WF9QQ0E5NTR4PXkKPiAjIENPTkZJR19JMkNfSEVMUEVSX0FVVE8gaXMgbm90IHNldAo+IENPTkZJ
R19JMkNfU01CVVM9eQo+IAo+ICMKPiAjIEkyQyBBbGdvcml0aG1zCj4gIwo+IENPTkZJR19JMkNf
QUxHT0JJVD15Cj4gQ09ORklHX0kyQ19BTEdPUENGPXkKPiAjIENPTkZJR19JMkNfQUxHT1BDQSBp
cyBub3Qgc2V0Cj4gCj4gIwo+ICMgSTJDIEhhcmR3YXJlIEJ1cyBzdXBwb3J0Cj4gIwo+IAo+ICMK
PiAjIEkyQyBzeXN0ZW0gYnVzIGRyaXZlcnMgKG1vc3RseSBlbWJlZGRlZCAvIHN5c3RlbS1vbi1j
aGlwKQo+ICMKPiAjIENPTkZJR19JMkNfS0VNUExEIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JMkNf
T0NPUkVTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JMkNfUENBX1BMQVRGT1JNIGlzIG5vdCBzZXQK
PiAjIENPTkZJR19JMkNfUFhBX1BDSSBpcyBub3Qgc2V0Cj4gQ09ORklHX0kyQ19SSUlDPXkKPiAj
IENPTkZJR19JMkNfU0hfTU9CSUxFIGlzIG5vdCBzZXQKPiAjIENPTkZJR19JMkNfU0lNVEVDIGlz
IG5vdCBzZXQKPiBDT05GSUdfSTJDX1hJTElOWD15Cj4gIyBDT05GSUdfSTJDX1JDQVIgaXMgbm90
IHNldAo+IAo+ICMKPiAjIEV4dGVybmFsIEkyQy9TTUJ1cyBhZGFwdGVyIGRyaXZlcnMKPiAjCj4g
Q09ORklHX0kyQ19QQVJQT1JUPXkKPiAjIENPTkZJR19JMkNfUEFSUE9SVF9MSUdIVCBpcyBub3Qg
c2V0Cj4gCj4gIwo+ICMgT3RoZXIgSTJDL1NNQnVzIGJ1cyBkcml2ZXJzCj4gIwo+IENPTkZJR19J
MkNfREVCVUdfQ09SRT15Cj4gQ09ORklHX0kyQ19ERUJVR19BTEdPPXkKPiBDT05GSUdfSTJDX0RF
QlVHX0JVUz15Cj4gQ09ORklHX1NQST15Cj4gIyBDT05GSUdfU1BJX0RFQlVHIGlzIG5vdCBzZXQK
PiBDT05GSUdfU1BJX01BU1RFUj15Cj4gCj4gIwo+ICMgU1BJIE1hc3RlciBDb250cm9sbGVyIERy
aXZlcnMKPiAjCj4gQ09ORklHX1NQSV9BTFRFUkE9eQo+IENPTkZJR19TUElfQVRNRUw9eQo+IENP
TkZJR19TUElfQkNNMjgzNT15Cj4gQ09ORklHX1NQSV9CQ002M1hYX0hTU1BJPXkKPiBDT05GSUdf
U1BJX0JJVEJBTkc9eQo+IENPTkZJR19TUElfQlVUVEVSRkxZPXkKPiBDT05GSUdfU1BJX0VQOTNY
WD15Cj4gQ09ORklHX1NQSV9JTVg9eQo+IENPTkZJR19TUElfTE03MF9MTFA9eQo+ICMgQ09ORklH
X1NQSV9GU0xfRFNQSSBpcyBub3Qgc2V0Cj4gQ09ORklHX1NQSV9USV9RU1BJPXkKPiAjIENPTkZJ
R19TUElfT01BUF8xMDBLIGlzIG5vdCBzZXQKPiBDT05GSUdfU1BJX09SSU9OPXkKPiAjIENPTkZJ
R19TUElfUFhBMlhYX1BDSSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU1BJX1NDMThJUzYwMiBpcyBu
b3Qgc2V0Cj4gQ09ORklHX1NQSV9TSD15Cj4gIyBDT05GSUdfU1BJX1NIX0hTUEkgaXMgbm90IHNl
dAo+ICMgQ09ORklHX1NQSV9URUdSQTExNCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NQSV9URUdSQTIw
X1NGTEFTSD15Cj4gQ09ORklHX1NQSV9URUdSQTIwX1NMSU5LPXkKPiAjIENPTkZJR19TUElfWENP
TU0gaXMgbm90IHNldAo+ICMgQ09ORklHX1NQSV9YSUxJTlggaXMgbm90IHNldAo+IENPTkZJR19T
UElfREVTSUdOV0FSRT15Cj4gCj4gIwo+ICMgU1BJIFByb3RvY29sIE1hc3RlcnMKPiAjCj4gQ09O
RklHX1NQSV9TUElERVY9eQo+IENPTkZJR19TUElfVExFNjJYMD15Cj4gQ09ORklHX0hTST15Cj4g
Q09ORklHX0hTSV9CT0FSRElORk89eQo+IAo+ICMKPiAjIEhTSSBjbGllbnRzCj4gIwo+ICMgQ09O
RklHX0hTSV9DSEFSIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBQUFMgc3VwcG9ydAo+ICMKPiBDT05G
SUdfUFBTPXkKPiBDT05GSUdfUFBTX0RFQlVHPXkKPiAKPiAjCj4gIyBQUFMgY2xpZW50cyBzdXBw
b3J0Cj4gIwo+ICMgQ09ORklHX1BQU19DTElFTlRfS1RJTUVSIGlzIG5vdCBzZXQKPiBDT05GSUdf
UFBTX0NMSUVOVF9QQVJQT1JUPXkKPiBDT05GSUdfUFBTX0NMSUVOVF9HUElPPXkKPiAKPiAjCj4g
IyBQUFMgZ2VuZXJhdG9ycyBzdXBwb3J0Cj4gIwo+IAo+ICMKPiAjIFBUUCBjbG9jayBzdXBwb3J0
Cj4gIwo+IENPTkZJR19QVFBfMTU4OF9DTE9DSz15Cj4gQ09ORklHX0RQODM2NDBfUEhZPXkKPiBD
T05GSUdfUFRQXzE1ODhfQ0xPQ0tfUENIPXkKPiBDT05GSUdfQVJDSF9XQU5UX09QVElPTkFMX0dQ
SU9MSUI9eQo+ICMgQ09ORklHX0dQSU9MSUIgaXMgbm90IHNldAo+IENPTkZJR19XMT15Cj4gQ09O
RklHX1cxX0NPTj15Cj4gCj4gIwo+ICMgMS13aXJlIEJ1cyBNYXN0ZXJzCj4gIwo+IENPTkZJR19X
MV9NQVNURVJfRFMyNDgyPXkKPiAjIENPTkZJR19XMV9NQVNURVJfRFMxV00gaXMgbm90IHNldAo+
IAo+ICMKPiAjIDEtd2lyZSBTbGF2ZXMKPiAjCj4gQ09ORklHX1cxX1NMQVZFX1RIRVJNPXkKPiBD
T05GSUdfVzFfU0xBVkVfU01FTT15Cj4gQ09ORklHX1cxX1NMQVZFX0RTMjQwOD15Cj4gQ09ORklH
X1cxX1NMQVZFX0RTMjQwOF9SRUFEQkFDSz15Cj4gQ09ORklHX1cxX1NMQVZFX0RTMjQxMz15Cj4g
IyBDT05GSUdfVzFfU0xBVkVfRFMyNDIzIGlzIG5vdCBzZXQKPiBDT05GSUdfVzFfU0xBVkVfRFMy
NDMxPXkKPiBDT05GSUdfVzFfU0xBVkVfRFMyNDMzPXkKPiAjIENPTkZJR19XMV9TTEFWRV9EUzI0
MzNfQ1JDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19XMV9TTEFWRV9EUzI3NjAgaXMgbm90IHNldAo+
IENPTkZJR19XMV9TTEFWRV9EUzI3ODA9eQo+IENPTkZJR19XMV9TTEFWRV9EUzI3ODE9eQo+ICMg
Q09ORklHX1cxX1NMQVZFX0RTMjhFMDQgaXMgbm90IHNldAo+IENPTkZJR19XMV9TTEFWRV9CUTI3
MDAwPXkKPiBDT05GSUdfUE9XRVJfU1VQUExZPXkKPiBDT05GSUdfUE9XRVJfU1VQUExZX0RFQlVH
PXkKPiAjIENPTkZJR19QREFfUE9XRVIgaXMgbm90IHNldAo+IENPTkZJR19HRU5FUklDX0FEQ19C
QVRURVJZPXkKPiBDT05GSUdfTUFYODkyNV9QT1dFUj15Cj4gIyBDT05GSUdfV004MzFYX0JBQ0tV
UCBpcyBub3Qgc2V0Cj4gQ09ORklHX1dNODMxWF9QT1dFUj15Cj4gIyBDT05GSUdfV004MzUwX1BP
V0VSIGlzIG5vdCBzZXQKPiBDT05GSUdfVEVTVF9QT1dFUj15Cj4gQ09ORklHX0JBVFRFUllfRFMy
NzgwPXkKPiBDT05GSUdfQkFUVEVSWV9EUzI3ODE9eQo+ICMgQ09ORklHX0JBVFRFUllfRFMyNzgy
IGlzIG5vdCBzZXQKPiBDT05GSUdfQkFUVEVSWV9TQlM9eQo+ICMgQ09ORklHX0JBVFRFUllfQlEy
N3gwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0JBVFRFUllfREE5MDMwPXkKPiAjIENPTkZJR19CQVRU
RVJZX0RBOTA1MiBpcyBub3Qgc2V0Cj4gQ09ORklHX0JBVFRFUllfTUFYMTcwNDA9eQo+IENPTkZJ
R19CQVRURVJZX01BWDE3MDQyPXkKPiBDT05GSUdfQ0hBUkdFUl9QQ0Y1MDYzMz15Cj4gIyBDT05G
SUdfQ0hBUkdFUl9JU1AxNzA0IGlzIG5vdCBzZXQKPiAjIENPTkZJR19DSEFSR0VSX01BWDg5MDMg
aXMgbm90IHNldAo+ICMgQ09ORklHX0NIQVJHRVJfVFdMNDAzMCBpcyBub3Qgc2V0Cj4gQ09ORklH
X0NIQVJHRVJfTFA4NzI3PXkKPiAjIENPTkZJR19DSEFSR0VSX01BTkFHRVIgaXMgbm90IHNldAo+
ICMgQ09ORklHX0NIQVJHRVJfQlEyNDE1WCBpcyBub3Qgc2V0Cj4gQ09ORklHX0NIQVJHRVJfU01C
MzQ3PXkKPiAjIENPTkZJR19DSEFSR0VSX1RQUzY1MDkwIGlzIG5vdCBzZXQKPiBDT05GSUdfQkFU
VEVSWV9HT0xERklTSD15Cj4gQ09ORklHX1BPV0VSX1JFU0VUPXkKPiAjIENPTkZJR19QT1dFUl9B
VlMgaXMgbm90IHNldAo+IENPTkZJR19IV01PTj15Cj4gQ09ORklHX0hXTU9OX1ZJRD15Cj4gIyBD
T05GSUdfSFdNT05fREVCVUdfQ0hJUCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgTmF0aXZlIGRyaXZl
cnMKPiAjCj4gIyBDT05GSUdfU0VOU09SU19BRDczMTQgaXMgbm90IHNldAo+IENPTkZJR19TRU5T
T1JTX0FENzQxND15Cj4gQ09ORklHX1NFTlNPUlNfQUQ3NDE4PXkKPiBDT05GSUdfU0VOU09SU19B
RENYWD15Cj4gQ09ORklHX1NFTlNPUlNfQURNMTAyMT15Cj4gIyBDT05GSUdfU0VOU09SU19BRE0x
MDI1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19TRU5TT1JTX0FETTEwMjYgaXMgbm90IHNldAo+IENP
TkZJR19TRU5TT1JTX0FETTEwMjk9eQo+IENPTkZJR19TRU5TT1JTX0FETTEwMzE9eQo+ICMgQ09O
RklHX1NFTlNPUlNfQURNOTI0MCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfQURUN1gxMD15
Cj4gQ09ORklHX1NFTlNPUlNfQURUNzMxMD15Cj4gIyBDT05GSUdfU0VOU09SU19BRFQ3NDEwIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19TRU5TT1JTX0FEVDc0MTEgaXMgbm90IHNldAo+IENPTkZJR19T
RU5TT1JTX0FEVDc0NjI9eQo+IENPTkZJR19TRU5TT1JTX0FEVDc0NzA9eQo+IENPTkZJR19TRU5T
T1JTX0FEVDc0NzU9eQo+IENPTkZJR19TRU5TT1JTX0FTQzc2MjE9eQo+IENPTkZJR19TRU5TT1JT
X0FTQjEwMD15Cj4gQ09ORklHX1NFTlNPUlNfQVRYUDE9eQo+ICMgQ09ORklHX1NFTlNPUlNfRFM2
MjAgaXMgbm90IHNldAo+IENPTkZJR19TRU5TT1JTX0RTMTYyMT15Cj4gQ09ORklHX1NFTlNPUlNf
REE5MDUyX0FEQz15Cj4gQ09ORklHX1NFTlNPUlNfREE5MDU1PXkKPiBDT05GSUdfU0VOU09SU19G
NzE4MDVGPXkKPiBDT05GSUdfU0VOU09SU19GNzE4ODJGRz15Cj4gQ09ORklHX1NFTlNPUlNfRjc1
Mzc1Uz15Cj4gIyBDT05GSUdfU0VOU09SU19GU0NITUQgaXMgbm90IHNldAo+IENPTkZJR19TRU5T
T1JTX0c3NjBBPXkKPiBDT05GSUdfU0VOU09SU19HNzYyPXkKPiBDT05GSUdfU0VOU09SU19HTDUx
OFNNPXkKPiBDT05GSUdfU0VOU09SU19HTDUyMFNNPXkKPiAjIENPTkZJR19TRU5TT1JTX0hJSDYx
MzAgaXMgbm90IHNldAo+IENPTkZJR19TRU5TT1JTX0hUVTIxPXkKPiBDT05GSUdfU0VOU09SU19D
T1JFVEVNUD15Cj4gQ09ORklHX1NFTlNPUlNfSUJNQUVNPXkKPiAjIENPTkZJR19TRU5TT1JTX0lC
TVBFWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfSUlPX0hXTU9OPXkKPiAjIENPTkZJR19T
RU5TT1JTX0lUODcgaXMgbm90IHNldAo+ICMgQ09ORklHX1NFTlNPUlNfSkM0MiBpcyBub3Qgc2V0
Cj4gQ09ORklHX1NFTlNPUlNfTElORUFHRT15Cj4gIyBDT05GSUdfU0VOU09SU19MTTYzIGlzIG5v
dCBzZXQKPiBDT05GSUdfU0VOU09SU19MTTcwPXkKPiBDT05GSUdfU0VOU09SU19MTTczPXkKPiAj
IENPTkZJR19TRU5TT1JTX0xNNzUgaXMgbm90IHNldAo+IENPTkZJR19TRU5TT1JTX0xNNzc9eQo+
ICMgQ09ORklHX1NFTlNPUlNfTE03OCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfTE04MD15
Cj4gQ09ORklHX1NFTlNPUlNfTE04Mz15Cj4gQ09ORklHX1NFTlNPUlNfTE04NT15Cj4gIyBDT05G
SUdfU0VOU09SU19MTTg3IGlzIG5vdCBzZXQKPiBDT05GSUdfU0VOU09SU19MTTkwPXkKPiBDT05G
SUdfU0VOU09SU19MTTkyPXkKPiBDT05GSUdfU0VOU09SU19MTTkzPXkKPiBDT05GSUdfU0VOU09S
U19MVEM0MTUxPXkKPiBDT05GSUdfU0VOU09SU19MVEM0MjE1PXkKPiBDT05GSUdfU0VOU09SU19M
VEM0MjQ1PXkKPiAjIENPTkZJR19TRU5TT1JTX0xUQzQyNjEgaXMgbm90IHNldAo+IENPTkZJR19T
RU5TT1JTX0xNOTUyMzQ9eQo+IENPTkZJR19TRU5TT1JTX0xNOTUyNDE9eQo+IENPTkZJR19TRU5T
T1JTX0xNOTUyNDU9eQo+IENPTkZJR19TRU5TT1JTX01BWDExMTE9eQo+IENPTkZJR19TRU5TT1JT
X01BWDE2MDY1PXkKPiBDT05GSUdfU0VOU09SU19NQVgxNjE5PXkKPiBDT05GSUdfU0VOU09SU19N
QVgxNjY4PXkKPiBDT05GSUdfU0VOU09SU19NQVgxOTc9eQo+IENPTkZJR19TRU5TT1JTX01BWDY2
Mzk9eQo+IENPTkZJR19TRU5TT1JTX01BWDY2NDI9eQo+IENPTkZJR19TRU5TT1JTX01BWDY2NTA9
eQo+ICMgQ09ORklHX1NFTlNPUlNfTUFYNjY5NyBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNf
TUNQMzAyMT15Cj4gQ09ORklHX1NFTlNPUlNfTkNUNjc3NT15Cj4gQ09ORklHX1NFTlNPUlNfUEM4
NzM2MD15Cj4gQ09ORklHX1NFTlNPUlNfUEM4NzQyNz15Cj4gQ09ORklHX1NFTlNPUlNfUENGODU5
MT15Cj4gQ09ORklHX1BNQlVTPXkKPiBDT05GSUdfU0VOU09SU19QTUJVUz15Cj4gQ09ORklHX1NF
TlNPUlNfQURNMTI3NT15Cj4gQ09ORklHX1NFTlNPUlNfTE0yNTA2Nj15Cj4gQ09ORklHX1NFTlNP
UlNfTFRDMjk3OD15Cj4gQ09ORklHX1NFTlNPUlNfTUFYMTYwNjQ9eQo+IENPTkZJR19TRU5TT1JT
X01BWDM0NDQwPXkKPiAjIENPTkZJR19TRU5TT1JTX01BWDg2ODggaXMgbm90IHNldAo+IENPTkZJ
R19TRU5TT1JTX1VDRDkwMDA9eQo+IENPTkZJR19TRU5TT1JTX1VDRDkyMDA9eQo+IENPTkZJR19T
RU5TT1JTX1pMNjEwMD15Cj4gQ09ORklHX1NFTlNPUlNfU0hUMjE9eQo+IENPTkZJR19TRU5TT1JT
X1NNTTY2NT15Cj4gQ09ORklHX1NFTlNPUlNfRE1FMTczNz15Cj4gQ09ORklHX1NFTlNPUlNfRU1D
MTQwMz15Cj4gQ09ORklHX1NFTlNPUlNfRU1DMjEwMz15Cj4gQ09ORklHX1NFTlNPUlNfRU1DNlcy
MDE9eQo+IENPTkZJR19TRU5TT1JTX1NNU0M0N00xPXkKPiBDT05GSUdfU0VOU09SU19TTVNDNDdN
MTkyPXkKPiBDT05GSUdfU0VOU09SU19TTVNDNDdCMzk3PXkKPiAjIENPTkZJR19TRU5TT1JTX1ND
SDU2WFhfQ09NTU9OIGlzIG5vdCBzZXQKPiBDT05GSUdfU0VOU09SU19BRFMxMDE1PXkKPiAjIENP
TkZJR19TRU5TT1JTX0FEUzc4MjggaXMgbm90IHNldAo+IENPTkZJR19TRU5TT1JTX0FEUzc4NzE9
eQo+IENPTkZJR19TRU5TT1JTX0FNQzY4MjE9eQo+ICMgQ09ORklHX1NFTlNPUlNfSU5BMjA5IGlz
IG5vdCBzZXQKPiAjIENPTkZJR19TRU5TT1JTX0lOQTJYWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NF
TlNPUlNfVEhNQzUwPXkKPiBDT05GSUdfU0VOU09SU19UTVAxMDI9eQo+IENPTkZJR19TRU5TT1JT
X1RNUDQwMT15Cj4gIyBDT05GSUdfU0VOU09SU19UTVA0MjEgaXMgbm90IHNldAo+IENPTkZJR19T
RU5TT1JTX1ZJQV9DUFVURU1QPXkKPiBDT05GSUdfU0VOU09SU19WVDEyMTE9eQo+IENPTkZJR19T
RU5TT1JTX1c4Mzc4MUQ9eQo+IENPTkZJR19TRU5TT1JTX1c4Mzc5MUQ9eQo+ICMgQ09ORklHX1NF
TlNPUlNfVzgzNzkyRCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU0VOU09SU19XODM3OTMgaXMgbm90
IHNldAo+IENPTkZJR19TRU5TT1JTX1c4Mzc5NT15Cj4gIyBDT05GSUdfU0VOU09SU19XODM3OTVf
RkFOQ1RSTCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfVzgzTDc4NVRTPXkKPiBDT05GSUdf
U0VOU09SU19XODNMNzg2Tkc9eQo+ICMgQ09ORklHX1NFTlNPUlNfVzgzNjI3SEYgaXMgbm90IHNl
dAo+ICMgQ09ORklHX1NFTlNPUlNfVzgzNjI3RUhGIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TRU5T
T1JTX1dNODMxWCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfV004MzUwPXkKPiBDT05GSUdf
VEhFUk1BTD15Cj4gIyBDT05GSUdfVEhFUk1BTF9IV01PTiBpcyBub3Qgc2V0Cj4gQ09ORklHX1RI
RVJNQUxfREVGQVVMVF9HT1ZfU1RFUF9XSVNFPXkKPiAjIENPTkZJR19USEVSTUFMX0RFRkFVTFRf
R09WX0ZBSVJfU0hBUkUgaXMgbm90IHNldAo+ICMgQ09ORklHX1RIRVJNQUxfREVGQVVMVF9HT1Zf
VVNFUl9TUEFDRSBpcyBub3Qgc2V0Cj4gQ09ORklHX1RIRVJNQUxfR09WX0ZBSVJfU0hBUkU9eQo+
IENPTkZJR19USEVSTUFMX0dPVl9TVEVQX1dJU0U9eQo+IENPTkZJR19USEVSTUFMX0dPVl9VU0VS
X1NQQUNFPXkKPiBDT05GSUdfVEhFUk1BTF9FTVVMQVRJT049eQo+IENPTkZJR19SQ0FSX1RIRVJN
QUw9eQo+IENPTkZJR19JTlRFTF9QT1dFUkNMQU1QPXkKPiAKPiAjCj4gIyBUZXhhcyBJbnN0cnVt
ZW50cyB0aGVybWFsIGRyaXZlcnMKPiAjCj4gIyBDT05GSUdfV0FUQ0hET0cgaXMgbm90IHNldAo+
IENPTkZJR19TU0JfUE9TU0lCTEU9eQo+IAo+ICMKPiAjIFNvbmljcyBTaWxpY29uIEJhY2twbGFu
ZQo+ICMKPiBDT05GSUdfU1NCPXkKPiBDT05GSUdfU1NCX1BDTUNJQUhPU1RfUE9TU0lCTEU9eQo+
ICMgQ09ORklHX1NTQl9QQ01DSUFIT1NUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TU0JfU0lMRU5U
IGlzIG5vdCBzZXQKPiBDT05GSUdfU1NCX0RFQlVHPXkKPiBDT05GSUdfQkNNQV9QT1NTSUJMRT15
Cj4gCj4gIwo+ICMgQnJvYWRjb20gc3BlY2lmaWMgQU1CQQo+ICMKPiBDT05GSUdfQkNNQT15Cj4g
IyBDT05GSUdfQkNNQV9IT1NUX1NPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQkNNQV9EUklWRVJf
R01BQ19DTU4gaXMgbm90IHNldAo+IENPTkZJR19CQ01BX0RFQlVHPXkKPiAKPiAjCj4gIyBNdWx0
aWZ1bmN0aW9uIGRldmljZSBkcml2ZXJzCj4gIwo+IENPTkZJR19NRkRfQ09SRT15Cj4gQ09ORklH
X01GRF9BUzM3MTE9eQo+IENPTkZJR19QTUlDX0FEUDU1MjA9eQo+IENPTkZJR19NRkRfQ1JPU19F
Qz15Cj4gQ09ORklHX01GRF9DUk9TX0VDX0kyQz15Cj4gQ09ORklHX1BNSUNfREE5MDNYPXkKPiBD
T05GSUdfUE1JQ19EQTkwNTI9eQo+ICMgQ09ORklHX01GRF9EQTkwNTJfU1BJIGlzIG5vdCBzZXQK
PiBDT05GSUdfTUZEX0RBOTA1Ml9JMkM9eQo+IENPTkZJR19NRkRfREE5MDU1PXkKPiBDT05GSUdf
TUZEX0RBOTA2Mz15Cj4gIyBDT05GSUdfTUZEX01DMTNYWFhfU1BJIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19NRkRfTUMxM1hYWF9JMkMgaXMgbm90IHNldAo+ICMgQ09ORklHX0hUQ19QQVNJQzMgaXMg
bm90IHNldAo+IENPTkZJR19NRkRfS0VNUExEPXkKPiBDT05GSUdfTUZEXzg4UE04MDA9eQo+ICMg
Q09ORklHX01GRF84OFBNODA1IGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfODhQTTg2MFggaXMg
bm90IHNldAo+ICMgQ09ORklHX01GRF9NQVgxNDU3NyBpcyBub3Qgc2V0Cj4gQ09ORklHX01GRF9N
QVg3NzY4Nj15Cj4gQ09ORklHX01GRF9NQVg3NzY5Mz15Cj4gIyBDT05GSUdfTUZEX01BWDg5MDcg
aXMgbm90IHNldAo+IENPTkZJR19NRkRfTUFYODkyNT15Cj4gIyBDT05GSUdfTUZEX01BWDg5OTcg
aXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9NQVg4OTk4IGlzIG5vdCBzZXQKPiAjIENPTkZJR19F
WlhfUENBUCBpcyBub3Qgc2V0Cj4gQ09ORklHX01GRF9SRVRVPXkKPiBDT05GSUdfTUZEX1BDRjUw
NjMzPXkKPiAjIENPTkZJR19QQ0Y1MDYzM19BREMgaXMgbm90IHNldAo+ICMgQ09ORklHX1BDRjUw
NjMzX0dQSU8gaXMgbm90IHNldAo+IENPTkZJR19NRkRfUkM1VDU4Mz15Cj4gIyBDT05GSUdfTUZE
X1NFQ19DT1JFIGlzIG5vdCBzZXQKPiBDT05GSUdfTUZEX1NJNDc2WF9DT1JFPXkKPiAjIENPTkZJ
R19NRkRfU001MDEgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9TTVNDIGlzIG5vdCBzZXQKPiBD
T05GSUdfQUJYNTAwX0NPUkU9eQo+ICMgQ09ORklHX0FCMzEwMF9DT1JFIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19NRkRfU1RNUEUgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9TWVNDT04gaXMgbm90
IHNldAo+IENPTkZJR19NRkRfVElfQU0zMzVYX1RTQ0FEQz15Cj4gIyBDT05GSUdfTUZEX0xQMzk0
MyBpcyBub3Qgc2V0Cj4gQ09ORklHX01GRF9MUDg3ODg9eQo+IENPTkZJR19NRkRfUEFMTUFTPXkK
PiBDT05GSUdfVFBTNjEwNVg9eQo+IENPTkZJR19UUFM2NTA3WD15Cj4gQ09ORklHX01GRF9UUFM2
NTA5MD15Cj4gQ09ORklHX01GRF9UUFM2NTIxNz15Cj4gIyBDT05GSUdfTUZEX1RQUzY1ODZYIGlz
IG5vdCBzZXQKPiAjIENPTkZJR19NRkRfVFBTODAwMzEgaXMgbm90IHNldAo+IENPTkZJR19UV0w0
MDMwX0NPUkU9eQo+ICMgQ09ORklHX1RXTDQwMzBfTUFEQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdf
TUZEX1RXTDQwMzBfQVVESU8gaXMgbm90IHNldAo+ICMgQ09ORklHX1RXTDYwNDBfQ09SRSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX01GRF9XTDEyNzNfQ09SRT15Cj4gQ09ORklHX01GRF9MTTM1MzM9eQo+
ICMgQ09ORklHX01GRF9UQzM1ODlYIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NRkRfVE1JTyBpcyBu
b3Qgc2V0Cj4gQ09ORklHX01GRF9BUklaT05BPXkKPiAjIENPTkZJR19NRkRfQVJJWk9OQV9JMkMg
aXMgbm90IHNldAo+IENPTkZJR19NRkRfQVJJWk9OQV9TUEk9eQo+ICMgQ09ORklHX01GRF9XTTUx
MDIgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9XTTUxMTAgaXMgbm90IHNldAo+ICMgQ09ORklH
X01GRF9XTTg5OTcgaXMgbm90IHNldAo+ICMgQ09ORklHX01GRF9XTTg0MDAgaXMgbm90IHNldAo+
IENPTkZJR19NRkRfV004MzFYPXkKPiAjIENPTkZJR19NRkRfV004MzFYX0kyQyBpcyBub3Qgc2V0
Cj4gQ09ORklHX01GRF9XTTgzMVhfU1BJPXkKPiBDT05GSUdfTUZEX1dNODM1MD15Cj4gQ09ORklH
X01GRF9XTTgzNTBfSTJDPXkKPiAjIENPTkZJR19NRkRfV004OTk0IGlzIG5vdCBzZXQKPiBDT05G
SUdfUkVHVUxBVE9SPXkKPiBDT05GSUdfUkVHVUxBVE9SX0RFQlVHPXkKPiBDT05GSUdfUkVHVUxB
VE9SX0ZJWEVEX1ZPTFRBR0U9eQo+IENPTkZJR19SRUdVTEFUT1JfVklSVFVBTF9DT05TVU1FUj15
Cj4gQ09ORklHX1JFR1VMQVRPUl9VU0VSU1BBQ0VfQ09OU1VNRVI9eQo+ICMgQ09ORklHX1JFR1VM
QVRPUl84OFBNODAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SRUdVTEFUT1JfQUNUODg2NSBpcyBu
b3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9BRDUzOTg9eQo+IENPTkZJR19SRUdVTEFUT1JfQVMz
NzExPXkKPiBDT05GSUdfUkVHVUxBVE9SX0RBOTAzWD15Cj4gIyBDT05GSUdfUkVHVUxBVE9SX0RB
OTA1MiBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9EQTkwNTU9eQo+IENPTkZJR19SRUdV
TEFUT1JfREE5MDYzPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfREE5MjEwIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19SRUdVTEFUT1JfRkFONTM1NTUgaXMgbm90IHNldAo+ICMgQ09ORklHX1JFR1VMQVRP
Ul9JU0w2MjcxQSBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9MUDM5NzE9eQo+IENPTkZJ
R19SRUdVTEFUT1JfTFAzOTcyPXkKPiBDT05GSUdfUkVHVUxBVE9SX0xQODcyWD15Cj4gQ09ORklH
X1JFR1VMQVRPUl9MUDg3NTU9eQo+IENPTkZJR19SRUdVTEFUT1JfTFA4Nzg4PXkKPiBDT05GSUdf
UkVHVUxBVE9SX01BWDE1ODY9eQo+IENPTkZJR19SRUdVTEFUT1JfTUFYODY0OT15Cj4gIyBDT05G
SUdfUkVHVUxBVE9SX01BWDg2NjAgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFUT1JfTUFYODky
NT15Cj4gQ09ORklHX1JFR1VMQVRPUl9NQVg4OTUyPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfTUFY
ODk3MyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUkVHVUxBVE9SX01BWDc3Njg2IGlzIG5vdCBzZXQK
PiAjIENPTkZJR19SRUdVTEFUT1JfTUFYNzc2OTMgaXMgbm90IHNldAo+IENPTkZJR19SRUdVTEFU
T1JfUEFMTUFTPXkKPiBDT05GSUdfUkVHVUxBVE9SX1BDRjUwNjMzPXkKPiBDT05GSUdfUkVHVUxB
VE9SX1BGVVpFMTAwPXkKPiBDT05GSUdfUkVHVUxBVE9SX1JDNVQ1ODM9eQo+IENPTkZJR19SRUdV
TEFUT1JfVFBTNTE2MzI9eQo+ICMgQ09ORklHX1JFR1VMQVRPUl9UUFM2MTA1WCBpcyBub3Qgc2V0
Cj4gQ09ORklHX1JFR1VMQVRPUl9UUFM2MjM2MD15Cj4gIyBDT05GSUdfUkVHVUxBVE9SX1RQUzY1
MDIzIGlzIG5vdCBzZXQKPiBDT05GSUdfUkVHVUxBVE9SX1RQUzY1MDdYPXkKPiAjIENPTkZJR19S
RUdVTEFUT1JfVFBTNjUwOTAgaXMgbm90IHNldAo+ICMgQ09ORklHX1JFR1VMQVRPUl9UUFM2NTIx
NyBpcyBub3Qgc2V0Cj4gQ09ORklHX1JFR1VMQVRPUl9UUFM2NTI0WD15Cj4gQ09ORklHX1JFR1VM
QVRPUl9UV0w0MDMwPXkKPiAjIENPTkZJR19SRUdVTEFUT1JfV004MzFYIGlzIG5vdCBzZXQKPiBD
T05GSUdfUkVHVUxBVE9SX1dNODM1MD15Cj4gQ09ORklHX01FRElBX1NVUFBPUlQ9eQo+IAo+ICMK
PiAjIE11bHRpbWVkaWEgY29yZSBzdXBwb3J0Cj4gIwo+IENPTkZJR19NRURJQV9DQU1FUkFfU1VQ
UE9SVD15Cj4gIyBDT05GSUdfTUVESUFfQU5BTE9HX1RWX1NVUFBPUlQgaXMgbm90IHNldAo+ICMg
Q09ORklHX01FRElBX0RJR0lUQUxfVFZfU1VQUE9SVCBpcyBub3Qgc2V0Cj4gQ09ORklHX01FRElB
X1JBRElPX1NVUFBPUlQ9eQo+ICMgQ09ORklHX01FRElBX0NPTlRST0xMRVIgaXMgbm90IHNldAo+
IENPTkZJR19WSURFT19ERVY9eQo+IENPTkZJR19WSURFT19WNEwyPXkKPiAjIENPTkZJR19WSURF
T19BRFZfREVCVUcgaXMgbm90IHNldAo+ICMgQ09ORklHX1ZJREVPX0ZJWEVEX01JTk9SX1JBTkdF
UyBpcyBub3Qgc2V0Cj4gQ09ORklHX1Y0TDJfTUVNMk1FTV9ERVY9eQo+IENPTkZJR19WSURFT0JV
Rl9HRU49eQo+IENPTkZJR19WSURFT0JVRl9ETUFfQ09OVElHPXkKPiBDT05GSUdfVklERU9CVUYy
X0NPUkU9eQo+IENPTkZJR19WSURFT0JVRjJfTUVNT1BTPXkKPiBDT05GSUdfVklERU9CVUYyX0RN
QV9DT05USUc9eQo+ICMgQ09ORklHX1RUUENJX0VFUFJPTSBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMg
TWVkaWEgZHJpdmVycwo+ICMKPiBDT05GSUdfVjRMX1BMQVRGT1JNX0RSSVZFUlM9eQo+ICMgQ09O
RklHX1ZJREVPX1NIX1ZPVSBpcyBub3Qgc2V0Cj4gQ09ORklHX1ZJREVPX1RJTUJFUkRBTEU9eQo+
IENPTkZJR19TT0NfQ0FNRVJBPXkKPiBDT05GSUdfU09DX0NBTUVSQV9QTEFURk9STT15Cj4gIyBD
T05GSUdfVklERU9fUkNBUl9WSU4gaXMgbm90IHNldAo+IENPTkZJR19WNExfTUVNMk1FTV9EUklW
RVJTPXkKPiBDT05GSUdfVklERU9fTUVNMk1FTV9ERUlOVEVSTEFDRT15Cj4gIyBDT05GSUdfVklE
RU9fU0hfVkVVIGlzIG5vdCBzZXQKPiAjIENPTkZJR19WNExfVEVTVF9EUklWRVJTIGlzIG5vdCBz
ZXQKPiAKPiAjCj4gIyBTdXBwb3J0ZWQgTU1DL1NESU8gYWRhcHRlcnMKPiAjCj4gIyBDT05GSUdf
TUVESUFfUEFSUE9SVF9TVVBQT1JUIGlzIG5vdCBzZXQKPiBDT05GSUdfUkFESU9fQURBUFRFUlM9
eQo+IENPTkZJR19SQURJT19TSTQ3MFg9eQo+IENPTkZJR19JMkNfU0k0NzBYPXkKPiAjIENPTkZJ
R19SQURJT19TSTQ3MTMgaXMgbm90IHNldAo+IENPTkZJR19SQURJT19URUE1NzY0PXkKPiBDT05G
SUdfUkFESU9fVEVBNTc2NF9YVEFMPXkKPiBDT05GSUdfUkFESU9fU0FBNzcwNkg9eQo+IENPTkZJ
R19SQURJT19URUY2ODYyPXkKPiBDT05GSUdfUkFESU9fV0wxMjczPXkKPiAKPiAjCj4gIyBUZXhh
cyBJbnN0cnVtZW50cyBXTDEyOHggRk0gZHJpdmVyIChTVCBiYXNlZCkKPiAjCj4gCj4gIwo+ICMg
TWVkaWEgYW5jaWxsYXJ5IGRyaXZlcnMgKHR1bmVycywgc2Vuc29ycywgaTJjLCBmcm9udGVuZHMp
Cj4gIwo+IENPTkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15Cj4gCj4gIwo+ICMgQXVkaW8g
ZGVjb2RlcnMsIHByb2Nlc3NvcnMgYW5kIG1peGVycwo+ICMKPiAKPiAjCj4gIyBSRFMgZGVjb2Rl
cnMKPiAjCj4gCj4gIwo+ICMgVmlkZW8gZGVjb2RlcnMKPiAjCj4gQ09ORklHX1ZJREVPX0FEVjcx
ODA9eQo+IAo+ICMKPiAjIFZpZGVvIGFuZCBhdWRpbyBkZWNvZGVycwo+ICMKPiAKPiAjCj4gIyBW
aWRlbyBlbmNvZGVycwo+ICMKPiAKPiAjCj4gIyBDYW1lcmEgc2Vuc29yIGRldmljZXMKPiAjCj4g
Cj4gIwo+ICMgRmxhc2ggZGV2aWNlcwo+ICMKPiAKPiAjCj4gIyBWaWRlbyBpbXByb3ZlbWVudCBj
aGlwcwo+ICMKPiAKPiAjCj4gIyBNaXNjZWxsYW5lb3VzIGhlbHBlciBjaGlwcwo+ICMKPiAKPiAj
Cj4gIyBTZW5zb3JzIHVzZWQgb24gc29jX2NhbWVyYSBkcml2ZXIKPiAjCj4gCj4gIwo+ICMgc29j
X2NhbWVyYSBzZW5zb3IgZHJpdmVycwo+ICMKPiBDT05GSUdfU09DX0NBTUVSQV9JTVgwNzQ9eQo+
IENPTkZJR19TT0NfQ0FNRVJBX01UOU0wMDE9eQo+IENPTkZJR19TT0NfQ0FNRVJBX01UOU0xMTE9
eQo+IENPTkZJR19TT0NfQ0FNRVJBX01UOVQwMzE9eQo+ICMgQ09ORklHX1NPQ19DQU1FUkFfTVQ5
VDExMiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NPQ19DQU1FUkFfTVQ5VjAyMj15Cj4gQ09ORklHX1NP
Q19DQU1FUkFfT1YyNjQwPXkKPiBDT05GSUdfU09DX0NBTUVSQV9PVjU2NDI9eQo+IENPTkZJR19T
T0NfQ0FNRVJBX09WNjY1MD15Cj4gQ09ORklHX1NPQ19DQU1FUkFfT1Y3NzJYPXkKPiAjIENPTkZJ
R19TT0NfQ0FNRVJBX09WOTY0MCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NPQ19DQU1FUkFfT1Y5NzQw
PXkKPiAjIENPTkZJR19TT0NfQ0FNRVJBX1JKNTROMSBpcyBub3Qgc2V0Cj4gQ09ORklHX1NPQ19D
QU1FUkFfVFc5OTEwPXkKPiBDT05GSUdfTUVESUFfVFVORVI9eQo+IENPTkZJR19NRURJQV9UVU5F
Ul9TSU1QTEU9eQo+IENPTkZJR19NRURJQV9UVU5FUl9UREE4MjkwPXkKPiBDT05GSUdfTUVESUFf
VFVORVJfVERBODI3WD15Cj4gQ09ORklHX01FRElBX1RVTkVSX1REQTE4MjcxPXkKPiBDT05GSUdf
TUVESUFfVFVORVJfVERBOTg4Nz15Cj4gQ09ORklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQo+IENP
TkZJR19NRURJQV9UVU5FUl9URUE1NzY3PXkKPiBDT05GSUdfTUVESUFfVFVORVJfTVQyMFhYPXkK
PiBDT05GSUdfTUVESUFfVFVORVJfWEMyMDI4PXkKPiBDT05GSUdfTUVESUFfVFVORVJfWEM1MDAw
PXkKPiBDT05GSUdfTUVESUFfVFVORVJfWEM0MDAwPXkKPiBDT05GSUdfTUVESUFfVFVORVJfTUM0
NFM4MDM9eQo+IAo+ICMKPiAjIFRvb2xzIHRvIGRldmVsb3AgbmV3IGZyb250ZW5kcwo+ICMKPiAj
IENPTkZJR19EVkJfRFVNTVlfRkUgaXMgbm90IHNldAo+IAo+ICMKPiAjIEdyYXBoaWNzIHN1cHBv
cnQKPiAjCj4gQ09ORklHX0RSTT15Cj4gIyBDT05GSUdfRFJNX1VETCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1ZHQVNUQVRFPXkKPiBDT05GSUdfVklERU9fT1VUUFVUX0NPTlRST0w9eQo+IENPTkZJR19I
RE1JPXkKPiBDT05GSUdfRkI9eQo+ICMgQ09ORklHX0ZJUk1XQVJFX0VESUQgaXMgbm90IHNldAo+
ICMgQ09ORklHX0ZCX0REQyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0JPT1RfVkVTQV9TVVBQT1JU
PXkKPiBDT05GSUdfRkJfQ0ZCX0ZJTExSRUNUPXkKPiBDT05GSUdfRkJfQ0ZCX0NPUFlBUkVBPXkK
PiBDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15Cj4gIyBDT05GSUdfRkJfQ0ZCX1JFVl9QSVhFTFNf
SU5fQllURSBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX1NZU19GSUxMUkVDVD15Cj4gQ09ORklHX0ZC
X1NZU19DT1BZQVJFQT15Cj4gQ09ORklHX0ZCX1NZU19JTUFHRUJMSVQ9eQo+IENPTkZJR19GQl9G
T1JFSUdOX0VORElBTj15Cj4gQ09ORklHX0ZCX0JPVEhfRU5ESUFOPXkKPiAjIENPTkZJR19GQl9C
SUdfRU5ESUFOIGlzIG5vdCBzZXQKPiAjIENPTkZJR19GQl9MSVRUTEVfRU5ESUFOIGlzIG5vdCBz
ZXQKPiBDT05GSUdfRkJfU1lTX0ZPUFM9eQo+IENPTkZJR19GQl9ERUZFUlJFRF9JTz15Cj4gQ09O
RklHX0ZCX0hFQ1VCQT15Cj4gIyBDT05GSUdfRkJfU1ZHQUxJQiBpcyBub3Qgc2V0Cj4gIyBDT05G
SUdfRkJfTUFDTU9ERVMgaXMgbm90IHNldAo+ICMgQ09ORklHX0ZCX0JBQ0tMSUdIVCBpcyBub3Qg
c2V0Cj4gQ09ORklHX0ZCX01PREVfSEVMUEVSUz15Cj4gQ09ORklHX0ZCX1RJTEVCTElUVElORz15
Cj4gCj4gIwo+ICMgRnJhbWUgYnVmZmVyIGhhcmR3YXJlIGRyaXZlcnMKPiAjCj4gQ09ORklHX0ZC
X0FSQz15Cj4gQ09ORklHX0ZCX1ZHQTE2PXkKPiBDT05GSUdfRkJfVVZFU0E9eQo+IENPTkZJR19G
Ql9WRVNBPXkKPiBDT05GSUdfRkJfTjQxMT15Cj4gQ09ORklHX0ZCX0hHQT15Cj4gQ09ORklHX0ZC
X1MxRDEzWFhYPXkKPiAjIENPTkZJR19GQl9UTUlPIGlzIG5vdCBzZXQKPiBDT05GSUdfRkJfR09M
REZJU0g9eQo+ICMgQ09ORklHX0ZCX1ZJUlRVQUwgaXMgbm90IHNldAo+IENPTkZJR19YRU5fRkJE
RVZfRlJPTlRFTkQ9eQo+ICMgQ09ORklHX0ZCX01FVFJPTk9NRSBpcyBub3Qgc2V0Cj4gQ09ORklH
X0ZCX0JST0FEU0hFRVQ9eQo+IENPTkZJR19GQl9BVU9fSzE5MFg9eQo+ICMgQ09ORklHX0ZCX0FV
T19LMTkwMCBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZCX0FVT19LMTkwMT15Cj4gIyBDT05GSUdfRkJf
U0lNUExFIGlzIG5vdCBzZXQKPiBDT05GSUdfRVhZTk9TX1ZJREVPPXkKPiBDT05GSUdfQkFDS0xJ
R0hUX0xDRF9TVVBQT1JUPXkKPiBDT05GSUdfTENEX0NMQVNTX0RFVklDRT15Cj4gQ09ORklHX0xD
RF9MVFYzNTBRVj15Cj4gQ09ORklHX0xDRF9JTEk5MjJYPXkKPiBDT05GSUdfTENEX0lMSTkzMjA9
eQo+IENPTkZJR19MQ0RfVERPMjRNPXkKPiBDT05GSUdfTENEX1ZHRzI0MzJBND15Cj4gIyBDT05G
SUdfTENEX1BMQVRGT1JNIGlzIG5vdCBzZXQKPiBDT05GSUdfTENEX1M2RTYzTTA9eQo+IENPTkZJ
R19MQ0RfTEQ5MDQwPXkKPiAjIENPTkZJR19MQ0RfQU1TMzY5RkcwNiBpcyBub3Qgc2V0Cj4gQ09O
RklHX0xDRF9MTVM1MDFLRjAzPXkKPiAjIENPTkZJR19MQ0RfSFg4MzU3IGlzIG5vdCBzZXQKPiBD
T05GSUdfQkFDS0xJR0hUX0NMQVNTX0RFVklDRT15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX0dFTkVS
SUMgaXMgbm90IHNldAo+IENPTkZJR19CQUNLTElHSFRfTE0zNTMzPXkKPiBDT05GSUdfQkFDS0xJ
R0hUX1BXTT15Cj4gIyBDT05GSUdfQkFDS0xJR0hUX0RBOTAzWCBpcyBub3Qgc2V0Cj4gQ09ORklH
X0JBQ0tMSUdIVF9EQTkwNTI9eQo+IENPTkZJR19CQUNLTElHSFRfTUFYODkyNT15Cj4gQ09ORklH
X0JBQ0tMSUdIVF9TQUhBUkE9eQo+ICMgQ09ORklHX0JBQ0tMSUdIVF9XTTgzMVggaXMgbm90IHNl
dAo+ICMgQ09ORklHX0JBQ0tMSUdIVF9BRFA1NTIwIGlzIG5vdCBzZXQKPiBDT05GSUdfQkFDS0xJ
R0hUX0FEUDg4NjA9eQo+IENPTkZJR19CQUNLTElHSFRfQURQODg3MD15Cj4gQ09ORklHX0JBQ0tM
SUdIVF9QQ0Y1MDYzMz15Cj4gQ09ORklHX0JBQ0tMSUdIVF9MTTM2MzBBPXkKPiBDT05GSUdfQkFD
S0xJR0hUX0xNMzYzOT15Cj4gQ09ORklHX0JBQ0tMSUdIVF9MUDg1NVg9eQo+IENPTkZJR19CQUNL
TElHSFRfTFA4Nzg4PXkKPiBDT05GSUdfQkFDS0xJR0hUX1BBTkRPUkE9eQo+IENPTkZJR19CQUNL
TElHSFRfVFBTNjUyMTc9eQo+ICMgQ09ORklHX0JBQ0tMSUdIVF9BUzM3MTEgaXMgbm90IHNldAo+
ICMgQ09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQkFDS0xJ
R0hUX0JENjEwNyBpcyBub3Qgc2V0Cj4gQ09ORklHX0xPR089eQo+ICMgQ09ORklHX0xPR09fTElO
VVhfTU9OTyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTE9HT19MSU5VWF9WR0ExNiBpcyBub3Qgc2V0
Cj4gQ09ORklHX0xPR09fTElOVVhfQ0xVVDIyND15Cj4gQ09ORklHX1NPVU5EPXkKPiBDT05GSUdf
U09VTkRfT1NTX0NPUkU9eQo+IENPTkZJR19TT1VORF9PU1NfQ09SRV9QUkVDTEFJTT15Cj4gQ09O
RklHX1NORD15Cj4gQ09ORklHX1NORF9USU1FUj15Cj4gQ09ORklHX1NORF9QQ009eQo+IENPTkZJ
R19TTkRfSFdERVA9eQo+IENPTkZJR19TTkRfUkFXTUlEST15Cj4gIyBDT05GSUdfU05EX1NFUVVF
TkNFUiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9PU1NFTVVMPXkKPiBDT05GSUdfU05EX01JWEVS
X09TUz15Cj4gIyBDT05GSUdfU05EX1BDTV9PU1MgaXMgbm90IHNldAo+IENPTkZJR19TTkRfSFJU
SU1FUj15Cj4gQ09ORklHX1NORF9EWU5BTUlDX01JTk9SUz15Cj4gQ09ORklHX1NORF9NQVhfQ0FS
RFM9MzIKPiAjIENPTkZJR19TTkRfU1VQUE9SVF9PTERfQVBJIGlzIG5vdCBzZXQKPiBDT05GSUdf
U05EX1ZFUkJPU0VfUFJPQ0ZTPXkKPiBDT05GSUdfU05EX1ZFUkJPU0VfUFJJTlRLPXkKPiAjIENP
TkZJR19TTkRfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19TTkRfRE1BX1NHQlVGPXkKPiAjIENP
TkZJR19TTkRfUkFXTUlESV9TRVEgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9PUEwzX0xJQl9T
RVEgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9PUEw0X0xJQl9TRVEgaXMgbm90IHNldAo+ICMg
Q09ORklHX1NORF9TQkFXRV9TRVEgaXMgbm90IHNldAo+ICMgQ09ORklHX1NORF9FTVUxMEsxX1NF
USBpcyBub3Qgc2V0Cj4gQ09ORklHX1NORF9NUFU0MDFfVUFSVD15Cj4gQ09ORklHX1NORF9WWF9M
SUI9eQo+IENPTkZJR19TTkRfRFJJVkVSUz15Cj4gQ09ORklHX1NORF9EVU1NWT15Cj4gQ09ORklH
X1NORF9BTE9PUD15Cj4gQ09ORklHX1NORF9NVFBBVj15Cj4gQ09ORklHX1NORF9NVFM2ND15Cj4g
Q09ORklHX1NORF9TRVJJQUxfVTE2NTUwPXkKPiBDT05GSUdfU05EX01QVTQwMT15Cj4gQ09ORklH
X1NORF9QT1JUTUFOMlg0PXkKPiBDT05GSUdfU05EX1NQST15Cj4gQ09ORklHX1NORF9BVDczQzIx
Mz15Cj4gQ09ORklHX1NORF9BVDczQzIxM19UQVJHRVRfQklUUkFURT00ODAwMAo+IENPTkZJR19T
TkRfUENNQ0lBPXkKPiBDT05GSUdfU05EX1ZYUE9DS0VUPXkKPiBDT05GSUdfU05EX1BEQVVESU9D
Rj15Cj4gIyBDT05GSUdfU05EX1NPQyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU09VTkRfUFJJTUUg
aXMgbm90IHNldAo+IENPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5ESUFOPXkKPiBDT05GSUdfVVNC
X1NVUFBPUlQ9eQo+IENPTkZJR19VU0JfQVJDSF9IQVNfSENEPXkKPiAjIENPTkZJR19VU0IgaXMg
bm90IHNldAo+IAo+ICMKPiAjIFVTQiBwb3J0IGRyaXZlcnMKPiAjCj4gCj4gIwo+ICMgVVNCIFBo
eXNpY2FsIExheWVyIGRyaXZlcnMKPiAjCj4gQ09ORklHX1VTQl9QSFk9eQo+ICMgQ09ORklHX0tF
WVNUT05FX1VTQl9QSFkgaXMgbm90IHNldAo+IENPTkZJR19OT1BfVVNCX1hDRUlWPXkKPiBDT05G
SUdfT01BUF9DT05UUk9MX1VTQj15Cj4gQ09ORklHX09NQVBfVVNCMz15Cj4gQ09ORklHX0FNMzM1
WF9DT05UUk9MX1VTQj15Cj4gQ09ORklHX0FNMzM1WF9QSFlfVVNCPXkKPiBDT05GSUdfU0FNU1VO
R19VU0JQSFk9eQo+ICMgQ09ORklHX1NBTVNVTkdfVVNCMlBIWSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1NBTVNVTkdfVVNCM1BIWT15Cj4gQ09ORklHX1RBSFZPX1VTQj15Cj4gIyBDT05GSUdfVEFIVk9f
VVNCX0hPU1RfQllfREVGQVVMVCBpcyBub3Qgc2V0Cj4gQ09ORklHX1VTQl9SQ0FSX0dFTjJfUEhZ
PXkKPiAjIENPTkZJR19VU0JfR0FER0VUIGlzIG5vdCBzZXQKPiAjIENPTkZJR19NTUMgaXMgbm90
IHNldAo+IENPTkZJR19NRU1TVElDSz15Cj4gIyBDT05GSUdfTUVNU1RJQ0tfREVCVUcgaXMgbm90
IHNldAo+IAo+ICMKPiAjIE1lbW9yeVN0aWNrIGRyaXZlcnMKPiAjCj4gQ09ORklHX01FTVNUSUNL
X1VOU0FGRV9SRVNVTUU9eQo+IENPTkZJR19NU1BST19CTE9DSz15Cj4gQ09ORklHX01TX0JMT0NL
PXkKPiAKPiAjCj4gIyBNZW1vcnlTdGljayBIb3N0IENvbnRyb2xsZXIgRHJpdmVycwo+ICMKPiBD
T05GSUdfTkVXX0xFRFM9eQo+IENPTkZJR19MRURTX0NMQVNTPXkKPiAKPiAjCj4gIyBMRUQgZHJp
dmVycwo+ICMKPiBDT05GSUdfTEVEU19MTTM1MzA9eQo+IENPTkZJR19MRURTX0xNMzUzMz15Cj4g
Q09ORklHX0xFRFNfTE0zNjQyPXkKPiBDT05GSUdfTEVEU19MUDM5NDQ9eQo+IENPTkZJR19MRURT
X0xQNTVYWF9DT01NT049eQo+IENPTkZJR19MRURTX0xQNTUyMT15Cj4gIyBDT05GSUdfTEVEU19M
UDU1MjMgaXMgbm90IHNldAo+IENPTkZJR19MRURTX0xQNTU2Mj15Cj4gQ09ORklHX0xFRFNfTFA4
NTAxPXkKPiBDT05GSUdfTEVEU19MUDg3ODg9eQo+IENPTkZJR19MRURTX1BDQTk1NVg9eQo+IENP
TkZJR19MRURTX1BDQTk2M1g9eQo+IENPTkZJR19MRURTX1BDQTk2ODU9eQo+IENPTkZJR19MRURT
X1dNODMxWF9TVEFUVVM9eQo+ICMgQ09ORklHX0xFRFNfV004MzUwIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19MRURTX0RBOTAzWCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTEVEU19EQTkwNTIgaXMgbm90
IHNldAo+IENPTkZJR19MRURTX0RBQzEyNFMwODU9eQo+ICMgQ09ORklHX0xFRFNfUFdNIGlzIG5v
dCBzZXQKPiBDT05GSUdfTEVEU19SRUdVTEFUT1I9eQo+IENPTkZJR19MRURTX0JEMjgwMj15Cj4g
Q09ORklHX0xFRFNfQURQNTUyMD15Cj4gIyBDT05GSUdfTEVEU19UQ0E2NTA3IGlzIG5vdCBzZXQK
PiBDT05GSUdfTEVEU19MTTM1NXg9eQo+IENPTkZJR19MRURTX09UMjAwPXkKPiBDT05GSUdfTEVE
U19CTElOS009eQo+IAo+ICMKPiAjIExFRCBUcmlnZ2Vycwo+ICMKPiAjIENPTkZJR19MRURTX1RS
SUdHRVJTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BQ0NFU1NJQklMSVRZIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19FREFDIGlzIG5vdCBzZXQKPiBDT05GSUdfUlRDX0xJQj15Cj4gQ09ORklHX1JUQ19D
TEFTUz15Cj4gIyBDT05GSUdfUlRDX0hDVE9TWVMgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19T
WVNUT0hDIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfREVCVUcgaXMgbm90IHNldAo+IAo+ICMK
PiAjIFJUQyBpbnRlcmZhY2VzCj4gIwo+ICMgQ09ORklHX1JUQ19JTlRGX1NZU0ZTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfUlRDX0lOVEZfUFJPQz15Cj4gIyBDT05GSUdfUlRDX0lOVEZfREVWIGlzIG5v
dCBzZXQKPiBDT05GSUdfUlRDX0RSVl9URVNUPXkKPiAKPiAjCj4gIyBJMkMgUlRDIGRyaXZlcnMK
PiAjCj4gQ09ORklHX1JUQ19EUlZfODhQTTgwWD15Cj4gIyBDT05GSUdfUlRDX0RSVl9EUzEzMDcg
aXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX0RTMTM3ND15Cj4gQ09ORklHX1JUQ19EUlZfRFMx
NjcyPXkKPiBDT05GSUdfUlRDX0RSVl9EUzMyMzI9eQo+IENPTkZJR19SVENfRFJWX0xQODc4OD15
Cj4gIyBDT05GSUdfUlRDX0RSVl9NQVg2OTAwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19SVENfRFJW
X01BWDg5MjUgaXMgbm90IHNldAo+ICMgQ09ORklHX1JUQ19EUlZfTUFYNzc2ODYgaXMgbm90IHNl
dAo+IENPTkZJR19SVENfRFJWX1JTNUMzNzI9eQo+ICMgQ09ORklHX1JUQ19EUlZfSVNMMTIwOCBp
cyBub3Qgc2V0Cj4gQ09ORklHX1JUQ19EUlZfSVNMMTIwMjI9eQo+IENPTkZJR19SVENfRFJWX0lT
TDEyMDU3PXkKPiAjIENPTkZJR19SVENfRFJWX1gxMjA1IGlzIG5vdCBzZXQKPiBDT05GSUdfUlRD
X0RSVl9QQUxNQVM9eQo+IENPTkZJR19SVENfRFJWX1BDRjIxMjc9eQo+ICMgQ09ORklHX1JUQ19E
UlZfUENGODUyMyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTYzIGlzIG5vdCBz
ZXQKPiAjIENPTkZJR19SVENfRFJWX1BDRjg1ODMgaXMgbm90IHNldAo+IENPTkZJR19SVENfRFJW
X000MVQ4MD15Cj4gIyBDT05GSUdfUlRDX0RSVl9NNDFUODBfV0RUIGlzIG5vdCBzZXQKPiBDT05G
SUdfUlRDX0RSVl9CUTMySz15Cj4gQ09ORklHX1JUQ19EUlZfVFdMNDAzMD15Cj4gQ09ORklHX1JU
Q19EUlZfUkM1VDU4Mz15Cj4gQ09ORklHX1JUQ19EUlZfUzM1MzkwQT15Cj4gQ09ORklHX1JUQ19E
UlZfRk0zMTMwPXkKPiAjIENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1JUQ19EUlZfUlg4MDI1PXkKPiBDT05GSUdfUlRDX0RSVl9FTTMwMjc9eQo+IENPTkZJR19SVENf
RFJWX1JWMzAyOUMyPXkKPiAKPiAjCj4gIyBTUEkgUlRDIGRyaXZlcnMKPiAjCj4gQ09ORklHX1JU
Q19EUlZfTTQxVDkzPXkKPiAjIENPTkZJR19SVENfRFJWX000MVQ5NCBpcyBub3Qgc2V0Cj4gQ09O
RklHX1JUQ19EUlZfRFMxMzA1PXkKPiBDT05GSUdfUlRDX0RSVl9EUzEzOTA9eQo+IENPTkZJR19S
VENfRFJWX01BWDY5MDI9eQo+ICMgQ09ORklHX1JUQ19EUlZfUjk3MDEgaXMgbm90IHNldAo+ICMg
Q09ORklHX1JUQ19EUlZfUlM1QzM0OCBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUlRDX0RSVl9EUzMy
MzQgaXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX1BDRjIxMjM9eQo+IENPTkZJR19SVENfRFJW
X1JYNDU4MT15Cj4gCj4gIwo+ICMgUGxhdGZvcm0gUlRDIGRyaXZlcnMKPiAjCj4gQ09ORklHX1JU
Q19EUlZfQ01PUz15Cj4gQ09ORklHX1JUQ19EUlZfRFMxMjg2PXkKPiAjIENPTkZJR19SVENfRFJW
X0RTMTUxMSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUlRDX0RSVl9EUzE1NTMgaXMgbm90IHNldAo+
IENPTkZJR19SVENfRFJWX0RTMTc0Mj15Cj4gIyBDT05GSUdfUlRDX0RSVl9EQTkwNTIgaXMgbm90
IHNldAo+IENPTkZJR19SVENfRFJWX0RBOTA1NT15Cj4gIyBDT05GSUdfUlRDX0RSVl9TVEsxN1RB
OCBpcyBub3Qgc2V0Cj4gQ09ORklHX1JUQ19EUlZfTTQ4VDg2PXkKPiBDT05GSUdfUlRDX0RSVl9N
NDhUMzU9eQo+ICMgQ09ORklHX1JUQ19EUlZfTTQ4VDU5IGlzIG5vdCBzZXQKPiAjIENPTkZJR19S
VENfRFJWX01TTTYyNDIgaXMgbm90IHNldAo+IENPTkZJR19SVENfRFJWX0JRNDgwMj15Cj4gQ09O
RklHX1JUQ19EUlZfUlA1QzAxPXkKPiAjIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQK
PiBDT05GSUdfUlRDX0RSVl9EUzI0MDQ9eQo+IENPTkZJR19SVENfRFJWX1dNODMxWD15Cj4gQ09O
RklHX1JUQ19EUlZfV004MzUwPXkKPiAjIENPTkZJR19SVENfRFJWX1BDRjUwNjMzIGlzIG5vdCBz
ZXQKPiAKPiAjCj4gIyBvbi1DUFUgUlRDIGRyaXZlcnMKPiAjCj4gQ09ORklHX1JUQ19EUlZfTU9Y
QVJUPXkKPiAKPiAjCj4gIyBISUQgU2Vuc29yIFJUQyBkcml2ZXJzCj4gIwo+IENPTkZJR19ETUFE
RVZJQ0VTPXkKPiBDT05GSUdfRE1BREVWSUNFU19ERUJVRz15Cj4gIyBDT05GSUdfRE1BREVWSUNF
U19WREVCVUcgaXMgbm90IHNldAo+IAo+ICMKPiAjIERNQSBEZXZpY2VzCj4gIwo+IENPTkZJR19E
V19ETUFDX0NPUkU9eQo+ICMgQ09ORklHX0RXX0RNQUMgaXMgbm90IHNldAo+IENPTkZJR19USU1C
X0RNQT15Cj4gQ09ORklHX0RNQV9FTkdJTkU9eQo+IAo+ICMKPiAjIERNQSBDbGllbnRzCj4gIwo+
ICMgQ09ORklHX0FTWU5DX1RYX0RNQSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RNQVRFU1Q9eQo+ICMg
Q09ORklHX0FVWERJU1BMQVkgaXMgbm90IHNldAo+ICMgQ09ORklHX1VJTyBpcyBub3Qgc2V0Cj4g
Q09ORklHX1ZJUlRfRFJJVkVSUz15Cj4gQ09ORklHX1ZJUlRJTz15Cj4gCj4gIwo+ICMgVmlydGlv
IGRyaXZlcnMKPiAjCj4gQ09ORklHX1ZJUlRJT19CQUxMT09OPXkKPiAjIENPTkZJR19WSVJUSU9f
TU1JTyBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgTWljcm9zb2Z0IEh5cGVyLVYgZ3Vlc3Qgc3VwcG9y
dAo+ICMKPiAKPiAjCj4gIyBYZW4gZHJpdmVyIHN1cHBvcnQKPiAjCj4gQ09ORklHX1hFTl9CQUxM
T09OPXkKPiBDT05GSUdfWEVOX0JBTExPT05fTUVNT1JZX0hPVFBMVUc9eQo+IENPTkZJR19YRU5f
U0NSVUJfUEFHRVM9eQo+IENPTkZJR19YRU5fREVWX0VWVENITj15Cj4gIyBDT05GSUdfWEVORlMg
aXMgbm90IHNldAo+IENPTkZJR19YRU5fU1lTX0hZUEVSVklTT1I9eQo+IENPTkZJR19YRU5fWEVO
QlVTX0ZST05URU5EPXkKPiBDT05GSUdfWEVOX0dOVERFVj15Cj4gQ09ORklHX1hFTl9HUkFOVF9E
RVZfQUxMT0M9eQo+IENPTkZJR19TV0lPVExCX1hFTj15Cj4gQ09ORklHX1hFTl9QUklWQ01EPXkK
PiBDT05GSUdfWEVOX0hBVkVfUFZNTVU9eQo+IENPTkZJR19TVEFHSU5HPXkKPiAjIENPTkZJR19F
Q0hPIGlzIG5vdCBzZXQKPiBDT05GSUdfUEFORUw9eQo+IENPTkZJR19QQU5FTF9QQVJQT1JUPTAK
PiBDT05GSUdfUEFORUxfUFJPRklMRT01Cj4gQ09ORklHX1BBTkVMX0NIQU5HRV9NRVNTQUdFPXkK
PiBDT05GSUdfUEFORUxfQk9PVF9NRVNTQUdFPSIiCj4gCj4gIwo+ICMgSUlPIHN0YWdpbmcgZHJp
dmVycwo+ICMKPiAKPiAjCj4gIyBBY2NlbGVyb21ldGVycwo+ICMKPiBDT05GSUdfQURJUzE2MjAx
PXkKPiBDT05GSUdfQURJUzE2MjAzPXkKPiAjIENPTkZJR19BRElTMTYyMDQgaXMgbm90IHNldAo+
ICMgQ09ORklHX0FESVMxNjIwOSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQURJUzE2MjIwIGlzIG5v
dCBzZXQKPiBDT05GSUdfQURJUzE2MjQwPXkKPiBDT05GSUdfU0NBMzAwMD15Cj4gCj4gIwo+ICMg
QW5hbG9nIHRvIGRpZ2l0YWwgY29udmVydGVycwo+ICMKPiAjIENPTkZJR19BRDcyOTEgaXMgbm90
IHNldAo+IENPTkZJR19BRDc5OVg9eQo+IENPTkZJR19BRDc5OVhfUklOR19CVUZGRVI9eQo+ICMg
Q09ORklHX0FENzE5MiBpcyBub3Qgc2V0Cj4gQ09ORklHX0FENzI4MD15Cj4gIyBDT05GSUdfTFBD
MzJYWF9BREMgaXMgbm90IHNldAo+IENPTkZJR19TUEVBUl9BREM9eQo+IAo+ICMKPiAjIEFuYWxv
ZyBkaWdpdGFsIGJpLWRpcmVjdGlvbiBjb252ZXJ0ZXJzCj4gIwo+IAo+ICMKPiAjIENhcGFjaXRh
bmNlIHRvIGRpZ2l0YWwgY29udmVydGVycwo+ICMKPiAjIENPTkZJR19BRDcxNTAgaXMgbm90IHNl
dAo+ICMgQ09ORklHX0FENzE1MiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUQ3NzQ2IGlzIG5vdCBz
ZXQKPiAKPiAjCj4gIyBEaXJlY3QgRGlnaXRhbCBTeW50aGVzaXMKPiAjCj4gIyBDT05GSUdfQUQ1
OTMwIGlzIG5vdCBzZXQKPiAjIENPTkZJR19BRDk4MzIgaXMgbm90IHNldAo+IENPTkZJR19BRDk4
MzQ9eQo+IENPTkZJR19BRDk4NTA9eQo+IENPTkZJR19BRDk4NTI9eQo+IENPTkZJR19BRDk5MTA9
eQo+ICMgQ09ORklHX0FEOTk1MSBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgRGlnaXRhbCBneXJvc2Nv
cGUgc2Vuc29ycwo+ICMKPiAjIENPTkZJR19BRElTMTYwNjAgaXMgbm90IHNldAo+IAo+ICMKPiAj
IE5ldHdvcmsgQW5hbHl6ZXIsIEltcGVkYW5jZSBDb252ZXJ0ZXJzCj4gIwo+IENPTkZJR19BRDU5
MzM9eQo+IAo+ICMKPiAjIExpZ2h0IHNlbnNvcnMKPiAjCj4gIyBDT05GSUdfU0VOU09SU19JU0wy
OTAxOCBpcyBub3Qgc2V0Cj4gQ09ORklHX1NFTlNPUlNfSVNMMjkwMjg9eQo+IENPTkZJR19UU0wy
NTgzPXkKPiBDT05GSUdfVFNMMng3eD15Cj4gCj4gIwo+ICMgTWFnbmV0b21ldGVyIHNlbnNvcnMK
PiAjCj4gQ09ORklHX1NFTlNPUlNfSE1DNTg0Mz15Cj4gCj4gIwo+ICMgQWN0aXZlIGVuZXJneSBt
ZXRlcmluZyBJQwo+ICMKPiAjIENPTkZJR19BREU3NzUzIGlzIG5vdCBzZXQKPiBDT05GSUdfQURF
Nzc1ND15Cj4gIyBDT05GSUdfQURFNzc1OCBpcyBub3Qgc2V0Cj4gQ09ORklHX0FERTc3NTk9eQo+
IENPTkZJR19BREU3ODU0PXkKPiBDT05GSUdfQURFNzg1NF9JMkM9eQo+ICMgQ09ORklHX0FERTc4
NTRfU1BJIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBSZXNvbHZlciB0byBkaWdpdGFsIGNvbnZlcnRl
cnMKPiAjCj4gQ09ORklHX0FEMlM5MD15Cj4gCj4gIwo+ICMgVHJpZ2dlcnMgLSBzdGFuZGFsb25l
Cj4gIwo+ICMgQ09ORklHX0lJT19QRVJJT0RJQ19SVENfVFJJR0dFUiBpcyBub3Qgc2V0Cj4gQ09O
RklHX0lJT19TSU1QTEVfRFVNTVk9eQo+ICMgQ09ORklHX0lJT19TSU1QTEVfRFVNTVlfRVZFTlRT
IGlzIG5vdCBzZXQKPiAjIENPTkZJR19JSU9fU0lNUExFX0RVTU1ZX0JVRkZFUiBpcyBub3Qgc2V0
Cj4gIyBDT05GSUdfRlQxMDAwIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBTcGVha3VwIGNvbnNvbGUg
c3BlZWNoCj4gIwo+IENPTkZJR19TVEFHSU5HX01FRElBPXkKPiBDT05GSUdfSTJDX0JDTTIwNDg9
eQo+ICMgQ09ORklHX1ZJREVPX1RDTTgyNVggaXMgbm90IHNldAo+ICMgQ09ORklHX1VTQl9TTjlD
MTAyIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBBbmRyb2lkCj4gIwo+IENPTkZJR19BTkRST0lEPXkK
PiBDT05GSUdfQU5EUk9JRF9CSU5ERVJfSVBDPXkKPiBDT05GSUdfQU5EUk9JRF9MT0dHRVI9eQo+
IENPTkZJR19BTkRST0lEX1RJTUVEX09VVFBVVD15Cj4gIyBDT05GSUdfQU5EUk9JRF9MT1dfTUVN
T1JZX0tJTExFUiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQU5EUk9JRF9JTlRGX0FMQVJNX0RFViBp
cyBub3Qgc2V0Cj4gQ09ORklHX1NZTkM9eQo+ICMgQ09ORklHX1NXX1NZTkMgaXMgbm90IHNldAo+
IENPTkZJR19JT049eQo+IENPTkZJR19JT05fVEVTVD15Cj4gIyBDT05GSUdfWDg2X1BMQVRGT1JN
X0RFVklDRVMgaXMgbm90IHNldAo+IENPTkZJR19DSFJPTUVfUExBVEZPUk1TPXkKPiAjIENPTkZJ
R19DSFJPTUVPU19QU1RPUkUgaXMgbm90IHNldAo+IAo+ICMKPiAjIEhhcmR3YXJlIFNwaW5sb2Nr
IGRyaXZlcnMKPiAjCj4gQ09ORklHX0NMS0VWVF9JODI1Mz15Cj4gQ09ORklHX0k4MjUzX0xPQ0s9
eQo+IENPTkZJR19DTEtCTERfSTgyNTM9eQo+ICMgQ09ORklHX01BSUxCT1ggaXMgbm90IHNldAo+
ICMgQ09ORklHX0lPTU1VX1NVUFBPUlQgaXMgbm90IHNldAo+IAo+ICMKPiAjIFJlbW90ZXByb2Mg
ZHJpdmVycwo+ICMKPiAjIENPTkZJR19TVEVfTU9ERU1fUlBST0MgaXMgbm90IHNldAo+IAo+ICMK
PiAjIFJwbXNnIGRyaXZlcnMKPiAjCj4gIyBDT05GSUdfUE1fREVWRlJFUSBpcyBub3Qgc2V0Cj4g
Q09ORklHX0VYVENPTj15Cj4gCj4gIwo+ICMgRXh0Y29uIERldmljZSBEcml2ZXJzCj4gIwo+IENP
TkZJR19FWFRDT05fQURDX0pBQ0s9eQo+IENPTkZJR19FWFRDT05fUEFMTUFTPXkKPiAjIENPTkZJ
R19NRU1PUlkgaXMgbm90IHNldAo+IENPTkZJR19JSU89eQo+IENPTkZJR19JSU9fQlVGRkVSPXkK
PiAjIENPTkZJR19JSU9fQlVGRkVSX0NCIGlzIG5vdCBzZXQKPiBDT05GSUdfSUlPX0tGSUZPX0JV
Rj15Cj4gQ09ORklHX0lJT19UUklHR0VSRURfQlVGRkVSPXkKPiBDT05GSUdfSUlPX1RSSUdHRVI9
eQo+IENPTkZJR19JSU9fQ09OU1VNRVJTX1BFUl9UUklHR0VSPTIKPiAKPiAjCj4gIyBBY2NlbGVy
b21ldGVycwo+ICMKPiBDT05GSUdfQk1BMTgwPXkKPiBDT05GSUdfSUlPX1NUX0FDQ0VMXzNBWElT
PXkKPiBDT05GSUdfSUlPX1NUX0FDQ0VMX0kyQ18zQVhJUz15Cj4gQ09ORklHX0lJT19TVF9BQ0NF
TF9TUElfM0FYSVM9eQo+IENPTkZJR19LWFNEOT15Cj4gCj4gIwo+ICMgQW5hbG9nIHRvIGRpZ2l0
YWwgY29udmVydGVycwo+ICMKPiBDT05GSUdfQURfU0lHTUFfREVMVEE9eQo+ICMgQ09ORklHX0FE
NzI2NiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQUQ3Mjk4IGlzIG5vdCBzZXQKPiBDT05GSUdfQUQ3
NDc2PXkKPiAjIENPTkZJR19BRDc3OTEgaXMgbm90IHNldAo+IENPTkZJR19BRDc3OTM9eQo+IENP
TkZJR19BRDc4ODc9eQo+ICMgQ09ORklHX0FENzkyMyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTFA4
Nzg4X0FEQyBpcyBub3Qgc2V0Cj4gQ09ORklHX01BWDEzNjM9eQo+IENPTkZJR19NQ1AzMjBYPXkK
PiBDT05GSUdfTUNQMzQyMj15Cj4gIyBDT05GSUdfTkFVNzgwMiBpcyBub3Qgc2V0Cj4gQ09ORklH
X1RJX0FEQzA4MUM9eQo+ICMgQ09ORklHX1RJX0FNMzM1WF9BREMgaXMgbm90IHNldAo+IENPTkZJ
R19UV0w2MDMwX0dQQURDPXkKPiAKPiAjCj4gIyBBbXBsaWZpZXJzCj4gIwo+IENPTkZJR19BRDgz
NjY9eQo+IAo+ICMKPiAjIEhpZCBTZW5zb3IgSUlPIENvbW1vbgo+ICMKPiBDT05GSUdfSUlPX1NU
X1NFTlNPUlNfSTJDPXkKPiBDT05GSUdfSUlPX1NUX1NFTlNPUlNfU1BJPXkKPiBDT05GSUdfSUlP
X1NUX1NFTlNPUlNfQ09SRT15Cj4gCj4gIwo+ICMgRGlnaXRhbCB0byBhbmFsb2cgY29udmVydGVy
cwo+ICMKPiBDT05GSUdfQUQ1MDY0PXkKPiBDT05GSUdfQUQ1MzYwPXkKPiAjIENPTkZJR19BRDUz
ODAgaXMgbm90IHNldAo+ICMgQ09ORklHX0FENTQyMSBpcyBub3Qgc2V0Cj4gQ09ORklHX0FENTQ0
Nj15Cj4gQ09ORklHX0FENTQ0OT15Cj4gQ09ORklHX0FENTUwND15Cj4gQ09ORklHX0FENTYyNFJf
U1BJPXkKPiBDT05GSUdfQUQ1Njg2PXkKPiBDT05GSUdfQUQ1NzU1PXkKPiBDT05GSUdfQUQ1NzY0
PXkKPiAjIENPTkZJR19BRDU3OTEgaXMgbm90IHNldAo+ICMgQ09ORklHX0FENzMwMyBpcyBub3Qg
c2V0Cj4gQ09ORklHX01BWDUxNz15Cj4gQ09ORklHX01DUDQ3MjU9eQo+IAo+ICMKPiAjIEZyZXF1
ZW5jeSBTeW50aGVzaXplcnMgRERTL1BMTAo+ICMKPiAKPiAjCj4gIyBDbG9jayBHZW5lcmF0b3Iv
RGlzdHJpYnV0aW9uCj4gIwo+IENPTkZJR19BRDk1MjM9eQo+IAo+ICMKPiAjIFBoYXNlLUxvY2tl
ZCBMb29wIChQTEwpIGZyZXF1ZW5jeSBzeW50aGVzaXplcnMKPiAjCj4gIyBDT05GSUdfQURGNDM1
MCBpcyBub3Qgc2V0Cj4gCj4gIwo+ICMgRGlnaXRhbCBneXJvc2NvcGUgc2Vuc29ycwo+ICMKPiBD
T05GSUdfQURJUzE2MDgwPXkKPiBDT05GSUdfQURJUzE2MTMwPXkKPiBDT05GSUdfQURJUzE2MTM2
PXkKPiAjIENPTkZJR19BRElTMTYyNjAgaXMgbm90IHNldAo+IENPTkZJR19BRFhSUzQ1MD15Cj4g
Q09ORklHX0lJT19TVF9HWVJPXzNBWElTPXkKPiBDT05GSUdfSUlPX1NUX0dZUk9fSTJDXzNBWElT
PXkKPiBDT05GSUdfSUlPX1NUX0dZUk9fU1BJXzNBWElTPXkKPiBDT05GSUdfSVRHMzIwMD15Cj4g
Cj4gIwo+ICMgSHVtaWRpdHkgc2Vuc29ycwo+ICMKPiAKPiAjCj4gIyBJbmVydGlhbCBtZWFzdXJl
bWVudCB1bml0cwo+ICMKPiAjIENPTkZJR19BRElTMTY0MDAgaXMgbm90IHNldAo+ICMgQ09ORklH
X0FESVMxNjQ4MCBpcyBub3Qgc2V0Cj4gQ09ORklHX0lJT19BRElTX0xJQj15Cj4gQ09ORklHX0lJ
T19BRElTX0xJQl9CVUZGRVI9eQo+IENPTkZJR19JTlZfTVBVNjA1MF9JSU89eQo+IAo+ICMKPiAj
IExpZ2h0IHNlbnNvcnMKPiAjCj4gIyBDT05GSUdfQURKRF9TMzExIGlzIG5vdCBzZXQKPiBDT05G
SUdfQVBEUzkzMDA9eQo+IENPTkZJR19DTTM2NjUxPXkKPiBDT05GSUdfR1AyQVAwMjBBMDBGPXkK
PiBDT05GSUdfU0VOU09SU19MTTM1MzM9eQo+IENPTkZJR19UQ1MzNDcyPXkKPiAjIENPTkZJR19T
RU5TT1JTX1RTTDI1NjMgaXMgbm90IHNldAo+ICMgQ09ORklHX1RTTDQ1MzEgaXMgbm90IHNldAo+
IENPTkZJR19WQ05MNDAwMD15Cj4gCj4gIwo+ICMgTWFnbmV0b21ldGVyIHNlbnNvcnMKPiAjCj4g
Q09ORklHX01BRzMxMTA9eQo+IENPTkZJR19JSU9fU1RfTUFHTl8zQVhJUz15Cj4gQ09ORklHX0lJ
T19TVF9NQUdOX0kyQ18zQVhJUz15Cj4gQ09ORklHX0lJT19TVF9NQUdOX1NQSV8zQVhJUz15Cj4g
Cj4gIwo+ICMgSW5jbGlub21ldGVyIHNlbnNvcnMKPiAjCj4gCj4gIwo+ICMgVHJpZ2dlcnMgLSBz
dGFuZGFsb25lCj4gIwo+IENPTkZJR19JSU9fSU5URVJSVVBUX1RSSUdHRVI9eQo+IENPTkZJR19J
SU9fU1lTRlNfVFJJR0dFUj15Cj4gCj4gIwo+ICMgUHJlc3N1cmUgc2Vuc29ycwo+ICMKPiBDT05G
SUdfTVBMMzExNT15Cj4gQ09ORklHX0lJT19TVF9QUkVTUz15Cj4gQ09ORklHX0lJT19TVF9QUkVT
U19JMkM9eQo+IENPTkZJR19JSU9fU1RfUFJFU1NfU1BJPXkKPiAKPiAjCj4gIyBUZW1wZXJhdHVy
ZSBzZW5zb3JzCj4gIwo+IENPTkZJR19UTVAwMDY9eQo+IENPTkZJR19QV009eQo+IENPTkZJR19Q
V01fU1lTRlM9eQo+ICMgQ09ORklHX1BXTV9SRU5FU0FTX1RQVSBpcyBub3Qgc2V0Cj4gQ09ORklH
X1BXTV9UV0w9eQo+IENPTkZJR19QV01fVFdMX0xFRD15Cj4gIyBDT05GSUdfSVBBQ0tfQlVTIGlz
IG5vdCBzZXQKPiBDT05GSUdfUkVTRVRfQ09OVFJPTExFUj15Cj4gQ09ORklHX0ZNQz15Cj4gQ09O
RklHX0ZNQ19GQUtFREVWPXkKPiBDT05GSUdfRk1DX1RSSVZJQUw9eQo+ICMgQ09ORklHX0ZNQ19X
UklURV9FRVBST00gaXMgbm90IHNldAo+IENPTkZJR19GTUNfQ0hBUkRFVj15Cj4gCj4gIwo+ICMg
UEhZIFN1YnN5c3RlbQo+ICMKPiBDT05GSUdfR0VORVJJQ19QSFk9eQo+IENPTkZJR19QSFlfRVhZ
Tk9TX01JUElfVklERU89eQo+IENPTkZJR19CQ01fS09OQV9VU0IyX1BIWT15Cj4gQ09ORklHX1BP
V0VSQ0FQPXkKPiBDT05GSUdfSU5URUxfUkFQTD15Cj4gCj4gIwo+ICMgRmlybXdhcmUgRHJpdmVy
cwo+ICMKPiBDT05GSUdfRUREPXkKPiAjIENPTkZJR19FRERfT0ZGIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19GSVJNV0FSRV9NRU1NQVAgaXMgbm90IHNldAo+IENPTkZJR19ERUxMX1JCVT15Cj4gQ09O
RklHX0RDREJBUz15Cj4gQ09ORklHX0dPT0dMRV9GSVJNV0FSRT15Cj4gCj4gIwo+ICMgR29vZ2xl
IEZpcm13YXJlIERyaXZlcnMKPiAjCj4gCj4gIwo+ICMgRmlsZSBzeXN0ZW1zCj4gIwo+IENPTkZJ
R19EQ0FDSEVfV09SRF9BQ0NFU1M9eQo+ICMgQ09ORklHX0VYVDJfRlMgaXMgbm90IHNldAo+IENP
TkZJR19FWFQzX0ZTPXkKPiAjIENPTkZJR19FWFQzX0RFRkFVTFRTX1RPX09SREVSRUQgaXMgbm90
IHNldAo+IENPTkZJR19FWFQzX0ZTX1hBVFRSPXkKPiBDT05GSUdfRVhUM19GU19QT1NJWF9BQ0w9
eQo+IENPTkZJR19FWFQzX0ZTX1NFQ1VSSVRZPXkKPiAjIENPTkZJR19FWFQ0X0ZTIGlzIG5vdCBz
ZXQKPiBDT05GSUdfSkJEPXkKPiBDT05GSUdfSkJEX0RFQlVHPXkKPiBDT05GSUdfSkJEMj15Cj4g
Q09ORklHX0pCRDJfREVCVUc9eQo+IENPTkZJR19GU19NQkNBQ0hFPXkKPiAjIENPTkZJR19SRUlT
RVJGU19GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0pGU19GUz15Cj4gQ09ORklHX0pGU19QT1NJWF9B
Q0w9eQo+IENPTkZJR19KRlNfU0VDVVJJVFk9eQo+IENPTkZJR19KRlNfREVCVUc9eQo+ICMgQ09O
RklHX0pGU19TVEFUSVNUSUNTIGlzIG5vdCBzZXQKPiAjIENPTkZJR19YRlNfRlMgaXMgbm90IHNl
dAo+IENPTkZJR19HRlMyX0ZTPXkKPiBDT05GSUdfT0NGUzJfRlM9eQo+IENPTkZJR19PQ0ZTMl9G
U19PMkNCPXkKPiBDT05GSUdfT0NGUzJfRlNfU1RBVFM9eQo+IENPTkZJR19PQ0ZTMl9ERUJVR19N
QVNLTE9HPXkKPiAjIENPTkZJR19PQ0ZTMl9ERUJVR19GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0JU
UkZTX0ZTPXkKPiAjIENPTkZJR19CVFJGU19GU19QT1NJWF9BQ0wgaXMgbm90IHNldAo+IENPTkZJ
R19CVFJGU19GU19DSEVDS19JTlRFR1JJVFk9eQo+IENPTkZJR19CVFJGU19GU19SVU5fU0FOSVRZ
X1RFU1RTPXkKPiAjIENPTkZJR19CVFJGU19ERUJVRyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfQlRS
RlNfQVNTRVJUIGlzIG5vdCBzZXQKPiBDT05GSUdfTklMRlMyX0ZTPXkKPiBDT05GSUdfRlNfUE9T
SVhfQUNMPXkKPiAjIENPTkZJR19GSUxFX0xPQ0tJTkcgaXMgbm90IHNldAo+IENPTkZJR19GU05P
VElGWT15Cj4gIyBDT05GSUdfRE5PVElGWSBpcyBub3Qgc2V0Cj4gQ09ORklHX0lOT1RJRllfVVNF
Uj15Cj4gIyBDT05GSUdfRkFOT1RJRlkgaXMgbm90IHNldAo+IENPTkZJR19RVU9UQT15Cj4gQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkKPiBDT05GSUdfUFJJTlRfUVVPVEFfV0FSTklO
Rz15Cj4gQ09ORklHX1FVT1RBX0RFQlVHPXkKPiBDT05GSUdfUVVPVEFfVFJFRT15Cj4gIyBDT05G
SUdfUUZNVF9WMSBpcyBub3Qgc2V0Cj4gQ09ORklHX1FGTVRfVjI9eQo+IENPTkZJR19RVU9UQUNU
TD15Cj4gIyBDT05GSUdfQVVUT0ZTNF9GUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZVU0VfRlM9eQo+
ICMgQ09ORklHX0NVU0UgaXMgbm90IHNldAo+IAo+ICMKPiAjIENhY2hlcwo+ICMKPiBDT05GSUdf
RlNDQUNIRT15Cj4gIyBDT05GSUdfRlNDQUNIRV9TVEFUUyBpcyBub3Qgc2V0Cj4gQ09ORklHX0ZT
Q0FDSEVfSElTVE9HUkFNPXkKPiBDT05GSUdfRlNDQUNIRV9ERUJVRz15Cj4gQ09ORklHX0ZTQ0FD
SEVfT0JKRUNUX0xJU1Q9eQo+IENPTkZJR19DQUNIRUZJTEVTPXkKPiAjIENPTkZJR19DQUNIRUZJ
TEVTX0RFQlVHIGlzIG5vdCBzZXQKPiBDT05GSUdfQ0FDSEVGSUxFU19ISVNUT0dSQU09eQo+IAo+
ICMKPiAjIENELVJPTS9EVkQgRmlsZXN5c3RlbXMKPiAjCj4gIyBDT05GSUdfSVNPOTY2MF9GUyBp
cyBub3Qgc2V0Cj4gQ09ORklHX1VERl9GUz15Cj4gQ09ORklHX1VERl9OTFM9eQo+IAo+ICMKPiAj
IERPUy9GQVQvTlQgRmlsZXN5c3RlbXMKPiAjCj4gQ09ORklHX0ZBVF9GUz15Cj4gQ09ORklHX01T
RE9TX0ZTPXkKPiBDT05GSUdfVkZBVF9GUz15Cj4gQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQQUdF
PTQzNwo+IENPTkZJR19GQVRfREVGQVVMVF9JT0NIQVJTRVQ9Imlzbzg4NTktMSIKPiBDT05GSUdf
TlRGU19GUz15Cj4gQ09ORklHX05URlNfREVCVUc9eQo+ICMgQ09ORklHX05URlNfUlcgaXMgbm90
IHNldAo+IAo+ICMKPiAjIFBzZXVkbyBmaWxlc3lzdGVtcwo+ICMKPiBDT05GSUdfUFJPQ19GUz15
Cj4gIyBDT05GSUdfUFJPQ19LQ09SRSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfUFJPQ19TWVNDVEwg
aXMgbm90IHNldAo+IENPTkZJR19QUk9DX1BBR0VfTU9OSVRPUj15Cj4gQ09ORklHX1NZU0ZTPXkK
PiAjIENPTkZJR19IVUdFVExCRlMgaXMgbm90IHNldAo+ICMgQ09ORklHX0hVR0VUTEJfUEFHRSBp
cyBub3Qgc2V0Cj4gQ09ORklHX0NPTkZJR0ZTX0ZTPXkKPiBDT05GSUdfTUlTQ19GSUxFU1lTVEVN
Uz15Cj4gQ09ORklHX0FERlNfRlM9eQo+ICMgQ09ORklHX0FERlNfRlNfUlcgaXMgbm90IHNldAo+
IENPTkZJR19BRkZTX0ZTPXkKPiAjIENPTkZJR19IRlNfRlMgaXMgbm90IHNldAo+IENPTkZJR19I
RlNQTFVTX0ZTPXkKPiAjIENPTkZJR19IRlNQTFVTX0ZTX1BPU0lYX0FDTCBpcyBub3Qgc2V0Cj4g
Q09ORklHX0JFRlNfRlM9eQo+IENPTkZJR19CRUZTX0RFQlVHPXkKPiBDT05GSUdfQkZTX0ZTPXkK
PiBDT05GSUdfRUZTX0ZTPXkKPiBDT05GSUdfTE9HRlM9eQo+IENPTkZJR19DUkFNRlM9eQo+IENP
TkZJR19TUVVBU0hGUz15Cj4gQ09ORklHX1NRVUFTSEZTX0ZJTEVfQ0FDSEU9eQo+ICMgQ09ORklH
X1NRVUFTSEZTX0ZJTEVfRElSRUNUIGlzIG5vdCBzZXQKPiBDT05GSUdfU1FVQVNIRlNfREVDT01Q
X1NJTkdMRT15Cj4gIyBDT05GSUdfU1FVQVNIRlNfREVDT01QX01VTFRJIGlzIG5vdCBzZXQKPiAj
IENPTkZJR19TUVVBU0hGU19ERUNPTVBfTVVMVElfUEVSQ1BVIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19TUVVBU0hGU19YQVRUUiBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfU1FVQVNIRlNfWkxJQiBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfU1FVQVNIRlNfTFpPIGlzIG5vdCBzZXQKPiAjIENPTkZJR19TUVVB
U0hGU19YWiBpcyBub3Qgc2V0Cj4gQ09ORklHX1NRVUFTSEZTXzRLX0RFVkJMS19TSVpFPXkKPiBD
T05GSUdfU1FVQVNIRlNfRU1CRURERUQ9eQo+IENPTkZJR19TUVVBU0hGU19GUkFHTUVOVF9DQUNI
RV9TSVpFPTMKPiBDT05GSUdfVlhGU19GUz15Cj4gIyBDT05GSUdfTUlOSVhfRlMgaXMgbm90IHNl
dAo+IENPTkZJR19PTUZTX0ZTPXkKPiBDT05GSUdfSFBGU19GUz15Cj4gIyBDT05GSUdfUU5YNEZT
X0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdfUU5YNkZTX0ZTPXkKPiAjIENPTkZJR19RTlg2RlNfREVC
VUcgaXMgbm90IHNldAo+ICMgQ09ORklHX1JPTUZTX0ZTIGlzIG5vdCBzZXQKPiBDT05GSUdfUFNU
T1JFPXkKPiBDT05GSUdfUFNUT1JFX0NPTlNPTEU9eQo+ICMgQ09ORklHX1BTVE9SRV9GVFJBQ0Ug
aXMgbm90IHNldAo+IENPTkZJR19QU1RPUkVfUkFNPXkKPiBDT05GSUdfU1lTVl9GUz15Cj4gQ09O
RklHX1VGU19GUz15Cj4gIyBDT05GSUdfVUZTX0ZTX1dSSVRFIGlzIG5vdCBzZXQKPiAjIENPTkZJ
R19VRlNfREVCVUcgaXMgbm90IHNldAo+IENPTkZJR19GMkZTX0ZTPXkKPiBDT05GSUdfRjJGU19T
VEFUX0ZTPXkKPiBDT05GSUdfRjJGU19GU19YQVRUUj15Cj4gIyBDT05GSUdfRjJGU19GU19QT1NJ
WF9BQ0wgaXMgbm90IHNldAo+ICMgQ09ORklHX0YyRlNfRlNfU0VDVVJJVFkgaXMgbm90IHNldAo+
ICMgQ09ORklHX0YyRlNfQ0hFQ0tfRlMgaXMgbm90IHNldAo+IENPTkZJR19ORVRXT1JLX0ZJTEVT
WVNURU1TPXkKPiBDT05GSUdfTkNQX0ZTPXkKPiBDT05GSUdfTkNQRlNfUEFDS0VUX1NJR05JTkc9
eQo+ICMgQ09ORklHX05DUEZTX0lPQ1RMX0xPQ0tJTkcgaXMgbm90IHNldAo+ICMgQ09ORklHX05D
UEZTX1NUUk9ORyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkNQRlNfTkZTX05TIGlzIG5vdCBzZXQK
PiBDT05GSUdfTkNQRlNfT1MyX05TPXkKPiBDT05GSUdfTkNQRlNfU01BTExET1M9eQo+IENPTkZJ
R19OQ1BGU19OTFM9eQo+ICMgQ09ORklHX05DUEZTX0VYVFJBUyBpcyBub3Qgc2V0Cj4gQ09ORklH
X05MUz15Cj4gQ09ORklHX05MU19ERUZBVUxUPSJpc284ODU5LTEiCj4gIyBDT05GSUdfTkxTX0NP
REVQQUdFXzQzNyBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzczNyBpcyBub3Qg
c2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV83NzU9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODUw
PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg1Mj15Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1
NSBpcyBub3Qgc2V0Cj4gIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0Cj4gIyBD
T05GSUdfTkxTX0NPREVQQUdFXzg2MCBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19DT0RFUEFHRV84
NjE9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODYyPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0Vf
ODYzIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0NPREVQQUdFXzg2ND15Cj4gQ09ORklHX05MU19D
T0RFUEFHRV84NjU9eQo+IENPTkZJR19OTFNfQ09ERVBBR0VfODY2PXkKPiBDT05GSUdfTkxTX0NP
REVQQUdFXzg2OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV85MzY9eQo+IENPTkZJR19OTFNfQ09E
RVBBR0VfOTUwPXkKPiAjIENPTkZJR19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKPiBDT05G
SUdfTkxTX0NPREVQQUdFXzk0OT15Cj4gQ09ORklHX05MU19DT0RFUEFHRV84NzQ9eQo+IENPTkZJ
R19OTFNfSVNPODg1OV84PXkKPiBDT05GSUdfTkxTX0NPREVQQUdFXzEyNTA9eQo+IENPTkZJR19O
TFNfQ09ERVBBR0VfMTI1MT15Cj4gIyBDT05GSUdfTkxTX0FTQ0lJIGlzIG5vdCBzZXQKPiAjIENP
TkZJR19OTFNfSVNPODg1OV8xIGlzIG5vdCBzZXQKPiAjIENPTkZJR19OTFNfSVNPODg1OV8yIGlz
IG5vdCBzZXQKPiBDT05GSUdfTkxTX0lTTzg4NTlfMz15Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlf
NCBpcyBub3Qgc2V0Cj4gQ09ORklHX05MU19JU084ODU5XzU9eQo+ICMgQ09ORklHX05MU19JU084
ODU5XzYgaXMgbm90IHNldAo+IENPTkZJR19OTFNfSVNPODg1OV83PXkKPiAjIENPTkZJR19OTFNf
SVNPODg1OV85IGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX0lTTzg4NTlfMTM9eQo+IENPTkZJR19O
TFNfSVNPODg1OV8xND15Cj4gIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90IHNldAo+ICMg
Q09ORklHX05MU19LT0k4X1IgaXMgbm90IHNldAo+IENPTkZJR19OTFNfS09JOF9VPXkKPiAjIENP
TkZJR19OTFNfTUFDX1JPTUFOIGlzIG5vdCBzZXQKPiBDT05GSUdfTkxTX01BQ19DRUxUSUM9eQo+
IENPTkZJR19OTFNfTUFDX0NFTlRFVVJPPXkKPiBDT05GSUdfTkxTX01BQ19DUk9BVElBTj15Cj4g
Q09ORklHX05MU19NQUNfQ1lSSUxMSUM9eQo+IENPTkZJR19OTFNfTUFDX0dBRUxJQz15Cj4gQ09O
RklHX05MU19NQUNfR1JFRUs9eQo+ICMgQ09ORklHX05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0
Cj4gQ09ORklHX05MU19NQUNfSU5VSVQ9eQo+IENPTkZJR19OTFNfTUFDX1JPTUFOSUFOPXkKPiBD
T05GSUdfTkxTX01BQ19UVVJLSVNIPXkKPiBDT05GSUdfTkxTX1VURjg9eQo+IAo+ICMKPiAjIEtl
cm5lbCBoYWNraW5nCj4gIwo+IENPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkKPiAKPiAj
Cj4gIyBwcmludGsgYW5kIGRtZXNnIG9wdGlvbnMKPiAjCj4gQ09ORklHX0RFRkFVTFRfTUVTU0FH
RV9MT0dMRVZFTD00Cj4gCj4gIwo+ICMgQ29tcGlsZS10aW1lIGNoZWNrcyBhbmQgY29tcGlsZXIg
b3B0aW9ucwo+ICMKPiBDT05GSUdfREVCVUdfSU5GTz15Cj4gIyBDT05GSUdfREVCVUdfSU5GT19S
RURVQ0VEIGlzIG5vdCBzZXQKPiAjIENPTkZJR19FTkFCTEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5v
dCBzZXQKPiBDT05GSUdfRU5BQkxFX01VU1RfQ0hFQ0s9eQo+IENPTkZJR19GUkFNRV9XQVJOPTIw
NDgKPiBDT05GSUdfU1RSSVBfQVNNX1NZTVM9eQo+ICMgQ09ORklHX1JFQURBQkxFX0FTTSBpcyBu
b3Qgc2V0Cj4gIyBDT05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldAo+IENPTkZJR19ERUJV
R19GUz15Cj4gIyBDT05GSUdfSEVBREVSU19DSEVDSyBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVH
X1NFQ1RJT05fTUlTTUFUQ0g9eQo+IENPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQo+
IENPTkZJR19GUkFNRV9QT0lOVEVSPXkKPiBDT05GSUdfREVCVUdfRk9SQ0VfV0VBS19QRVJfQ1BV
PXkKPiAjIENPTkZJR19NQUdJQ19TWVNSUSBpcyBub3Qgc2V0Cj4gQ09ORklHX0RFQlVHX0tFUk5F
TD15Cj4gCj4gIwo+ICMgTWVtb3J5IERlYnVnZ2luZwo+ICMKPiAjIENPTkZJR19ERUJVR19QQUdF
QUxMT0MgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19PQkpFQ1RTPXkKPiBDT05GSUdfREVCVUdf
T0JKRUNUU19TRUxGVEVTVD15Cj4gQ09ORklHX0RFQlVHX09CSkVDVFNfRlJFRT15Cj4gIyBDT05G
SUdfREVCVUdfT0JKRUNUU19USU1FUlMgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19PQkpFQ1RT
X1dPUks9eQo+ICMgQ09ORklHX0RFQlVHX09CSkVDVFNfUkNVX0hFQUQgaXMgbm90IHNldAo+ICMg
Q09ORklHX0RFQlVHX09CSkVDVFNfUEVSQ1BVX0NPVU5URVIgaXMgbm90IHNldAo+IENPTkZJR19E
RUJVR19PQkpFQ1RTX0VOQUJMRV9ERUZBVUxUPTEKPiAjIENPTkZJR19TTFVCX1NUQVRTIGlzIG5v
dCBzZXQKPiBDT05GSUdfSEFWRV9ERUJVR19LTUVNTEVBSz15Cj4gIyBDT05GSUdfREVCVUdfS01F
TUxFQUsgaXMgbm90IHNldAo+IENPTkZJR19ERUJVR19TVEFDS19VU0FHRT15Cj4gIyBDT05GSUdf
REVCVUdfVk0gaXMgbm90IHNldAo+ICMgQ09ORklHX0RFQlVHX1ZJUlRVQUwgaXMgbm90IHNldAo+
IENPTkZJR19ERUJVR19NRU1PUllfSU5JVD15Cj4gQ09ORklHX01FTU9SWV9OT1RJRklFUl9FUlJP
Ul9JTkpFQ1Q9eQo+ICMgQ09ORklHX0RFQlVHX1BFUl9DUFVfTUFQUyBpcyBub3Qgc2V0Cj4gQ09O
RklHX0hBVkVfREVCVUdfU1RBQ0tPVkVSRkxPVz15Cj4gIyBDT05GSUdfREVCVUdfU1RBQ0tPVkVS
RkxPVyBpcyBub3Qgc2V0Cj4gQ09ORklHX0hBVkVfQVJDSF9LTUVNQ0hFQ0s9eQo+ICMgQ09ORklH
X0RFQlVHX1NISVJRIGlzIG5vdCBzZXQKPiAKPiAjCj4gIyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5n
cwo+ICMKPiBDT05GSUdfTE9DS1VQX0RFVEVDVE9SPXkKPiBDT05GSUdfSEFSRExPQ0tVUF9ERVRF
Q1RPUj15Cj4gIyBDT05GSUdfQk9PVFBBUkFNX0hBUkRMT0NLVVBfUEFOSUMgaXMgbm90IHNldAo+
IENPTkZJR19CT09UUEFSQU1fSEFSRExPQ0tVUF9QQU5JQ19WQUxVRT0wCj4gIyBDT05GSUdfQk9P
[base64-encoded remainder of the quoted kernel .config: lock/RCU/ftrace debugging, KGDB, runtime testing, security (TOMOYO/AppArmor), crypto, and library options; the decoded text ends with the standard Xen-devel mailing list footer]

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:09:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dhv-0006HN-QQ; Fri, 10 Jan 2014 15:09:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W1dhv-0006Gi-5h
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:09:11 +0000
Received: from [193.109.254.147:56114] by server-1.bemta-14.messagelabs.com id
	5B/16-15600-61D00D25; Fri, 10 Jan 2014 15:09:10 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389366547!10124842!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1754 invoked from network); 10 Jan 2014 15:09:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:09:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89560798"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:09:07 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:09:06 -0500
Message-ID: <52D00D11.9090903@citrix.com>
Date: Fri, 10 Jan 2014 16:09:05 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Olaf Hering <olaf@aepfle.de>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
In-Reply-To: <1389365236.19142.54.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: lidongyang@novell.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 15:47, Ian Campbell wrote:
> On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
>> What is the reason the backend 'type' property of a configured disk is
>> now "qdisk" instead of "file"?
> 
> Because qdisk is the backend instead of loop+blk (==file) I think this
> just happens naturally.
> 
>>  Would the guest really care about that
>> detail?  For example block-front currently just checks for "phy" and
>> "file" when deciding if discard should be enabled.
> 
> That sounds entirely bogus, it should be checking for some sort of
> feature-discard.


Linux blkfront checks for "feature-discard" on xenstore before trying 
to enable discard:

1795         err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
1796                             "feature-discard", "%d", &discard,
1797                             NULL);
1798 
1799         if (!err && discard)
1800                 blkfront_setup_discard(info);

But then in blkfront_setup_discard:

1615         type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
1616         if (IS_ERR(type))
1617                 return;
1618 
1619         info->feature_secdiscard = 0;
1620         if (strncmp(type, "phy", 3) == 0) {
1621                 err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
1622                         "discard-granularity", "%u", &discard_granularity,
1623                         "discard-alignment", "%u", &discard_alignment,
1624                         NULL);
1625                 if (!err) {
1626                         info->feature_discard = 1;
1627                         info->discard_granularity = discard_granularity;
1628                         info->discard_alignment = discard_alignment;
1629                 }
1630                 err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
1631                             "discard-secure", "%d", &discard_secure,
1632                             NULL);
1633                 if (!err)
1634                         info->feature_secdiscard = discard_secure;
1635 
1636         } else if (strncmp(type, "file", 4) == 0)
1637                 info->feature_discard = 1;

I don't understand why blkfront doesn't check directly for "discard-
granularity" and "discard-alignment" and set up discard based on those 
parameters. Also, even if the backend sets "feature-discard", discard 
will not be enabled unless the type is "phy" or "file".

The commit message that introduced those changes gives no rationale for 
enabling discard only for "phy" or "file" backends. 
CCing Konrad and the original author of the change.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:12:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dkl-0006mU-Mp; Fri, 10 Jan 2014 15:12:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dkh-0006mD-6t
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:12:05 +0000
Received: from [85.158.137.68:36571] by server-1.bemta-3.messagelabs.com id
	C0/34-29598-2CD00D25; Fri, 10 Jan 2014 15:12:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389366719!8386936!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1669 invoked from network); 10 Jan 2014 15:12:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:12:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89562277"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:11:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 10:11:47 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W1dkQ-0002Nx-G4;
	Fri, 10 Jan 2014 15:11:46 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 15:11:46 +0000
Message-ID: <1389366706-24854-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] ts-kernel-build: override kernel config
	from runvars
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use this to build EXT4 into the kernel statically for the armhf build, to work
around the current lack of guest initrd support on ARM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
When we branch for xen 4.4 we should make this apply to 4.3 and 4.4 only, so
that when we test unstable from that point on we are reminded of the lack of
initrd support...
---
 make-flight     |  5 ++++-
 ts-kernel-build | 15 +++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index a13b38e..fea642c 100755
--- a/make-flight
+++ b/make-flight
@@ -61,6 +61,9 @@ if [ x$buildflight = x ]; then
 	tree_linux=$TREE_LINUX_ARM
 	revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
       "
+      pvops_kconfig_overrides="
+        kconfig_override_y=CONFIG_EXT4_FS
+      "
       ;;
     *)
       case "$branch" in
@@ -151,7 +154,7 @@ if [ x$buildflight = x ]; then
 		host_hostflags=$build_hostflags    \
 		xen_kernels=linux-2.6-pvops				     \
 		revision_xen=$REVISION_XEN				     \
-		$pvops_kernel						\
+		$pvops_kernel $pvops_kconfig_overrides			     \
 		${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}	\
 		tree_linuxfirmware=$TREE_LINUXFIRMWARE			     \
 		revision_linuxfirmware=$REVISION_LINUXFIRMWARE
diff --git a/ts-kernel-build b/ts-kernel-build
index 478d912..96f6b74 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -203,7 +203,21 @@ END
     return $edscript;
 }
 
+sub kconfig_overrides ($) {
+    my ($to) = @_;
+    return '' unless $r{"kconfig_override_$to"};
+    my $overrides = '';
+    foreach my $override (split /,/, $r{"kconfig_override_$to"}) {
+        $overrides .= "setopt $override $to\n";
+    }
+    return $overrides;
+}
+
 sub config_xen_enable_xen_config () {
+    my $config_runvars = kconfig_overrides('y');
+    $config_runvars .= kconfig_overrides('m');
+    $config_runvars .= kconfig_overrides('n');
+
     my $edscript= stash_config_edscript(<<END);
 
 setopt CONFIG_HIGHMEM64G y
@@ -228,6 +242,7 @@ setopt CONFIG_HYPERVISOR_GUEST y
 
 $config_hardware
 $config_features
+$config_runvars
 END
 
     target_cmd_build($ho, 1000, $builddir, <<END);
-- 
1.8.5.2
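
For readers following the Perl, the runvar expansion done by
kconfig_overrides() can be sketched in C (the function name and fixed-size
buffers are illustrative, not part of osstest):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical C rendering of the kconfig_overrides() Perl sub: split a
 * comma-separated runvar value (e.g. kconfig_override_y) and append one
 * "setopt OPTION STATE" line per entry to the edscript being built. */
static void kconfig_overrides(const char *runvar, const char *to,
                              char *out, size_t outsz)
{
    char buf[256];

    out[0] = '\0';
    if (!runvar || !*runvar)
        return;                         /* runvar unset: no overrides emitted */

    strncpy(buf, runvar, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    /* strtok consumes the comma-separated list in place */
    for (char *opt = strtok(buf, ","); opt; opt = strtok(NULL, ","))
        snprintf(out + strlen(out), outsz - strlen(out),
                 "setopt %s %s\n", opt, to);
}
```

So a flight with kconfig_override_y=CONFIG_EXT4_FS would contribute
"setopt CONFIG_EXT4_FS y" to the generated kernel config edscript.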


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:12:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dkl-0006mU-Mp; Fri, 10 Jan 2014 15:12:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dkh-0006mD-6t
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:12:05 +0000
Received: from [85.158.137.68:36571] by server-1.bemta-3.messagelabs.com id
	C0/34-29598-2CD00D25; Fri, 10 Jan 2014 15:12:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389366719!8386936!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1669 invoked from network); 10 Jan 2014 15:12:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:12:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89562277"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:11:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 10:11:47 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W1dkQ-0002Nx-G4;
	Fri, 10 Jan 2014 15:11:46 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 15:11:46 +0000
Message-ID: <1389366706-24854-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] ts-kernel-build: override kernel config
	from runvars
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use this to build EXT4 into the kernel statically for armhf builds, to work
around the current lack of guest initrd support on ARM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
When we branch for xen 4.4 we should make this apply to 4.3 and 4.4 only, so
that when we test unstable from that point on we are reminded of the lack of
initrd support...
---
 make-flight     |  5 ++++-
 ts-kernel-build | 15 +++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index a13b38e..fea642c 100755
--- a/make-flight
+++ b/make-flight
@@ -61,6 +61,9 @@ if [ x$buildflight = x ]; then
 	tree_linux=$TREE_LINUX_ARM
 	revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
       "
+      pvops_kconfig_overrides="
+        kconfig_override_y=CONFIG_EXT4_FS
+      "
       ;;
     *)
       case "$branch" in
@@ -151,7 +154,7 @@ if [ x$buildflight = x ]; then
 		host_hostflags=$build_hostflags    \
 		xen_kernels=linux-2.6-pvops				     \
 		revision_xen=$REVISION_XEN				     \
-		$pvops_kernel						\
+		$pvops_kernel $pvops_kconfig_overrides			     \
 		${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}	\
 		tree_linuxfirmware=$TREE_LINUXFIRMWARE			     \
 		revision_linuxfirmware=$REVISION_LINUXFIRMWARE
diff --git a/ts-kernel-build b/ts-kernel-build
index 478d912..96f6b74 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -203,7 +203,21 @@ END
     return $edscript;
 }
 
+sub kconfig_overrides ($) {
+    my ($to) = @_;
+    return '' unless $r{"kconfig_override_$to"};
+    my $overrides = '';
+    foreach my $override (split /,/, $r{"kconfig_override_$to"}) {
+        $overrides .= "setopt $override $to\n";
+    }
+    return $overrides;
+}
+
 sub config_xen_enable_xen_config () {
+    my $config_runvars = kconfig_overrides('y');
+    $config_runvars .= kconfig_overrides('m');
+    $config_runvars .= kconfig_overrides('n');
+
     my $edscript= stash_config_edscript(<<END);
 
 setopt CONFIG_HIGHMEM64G y
@@ -228,6 +242,7 @@ setopt CONFIG_HYPERVISOR_GUEST y
 
 $config_hardware
 $config_features
+$config_runvars
 END
 
     target_cmd_build($ho, 1000, $builddir, <<END);
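For illustration, the runvar expansion implemented by the Perl kconfig_overrides sub above behaves like this Python sketch (a hypothetical mirror, not part of osstest; `runvars` stands in for osstest's %r hash):

```python
# Python sketch of the kconfig_overrides expansion above (illustrative only;
# osstest itself is Perl and reads runvars from the %r hash).
def kconfig_overrides(runvars, to):
    """Expand a comma-separated kconfig_override_<to> runvar into setopt lines."""
    value = runvars.get("kconfig_override_%s" % to)
    if not value:
        return ""
    return "".join("setopt %s %s\n" % (opt, to) for opt in value.split(","))

# e.g. the runvar set by make-flight for armhf:
runvars = {"kconfig_override_y": "CONFIG_EXT4_FS"}
print(kconfig_overrides(runvars, "y"), end="")
# setopt CONFIG_EXT4_FS y
```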
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:12:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dl8-0006pm-5Q; Fri, 10 Jan 2014 15:12:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1dl6-0006pS-C4
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:12:28 +0000
Received: from [85.158.143.35:15743] by server-2.bemta-4.messagelabs.com id
	6D/62-11386-BDD00D25; Fri, 10 Jan 2014 15:12:27 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389366745!10907771!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26809 invoked from network); 10 Jan 2014 15:12:26 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:12:26 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFCLA6018083
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:12:22 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFCK6Q024053
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:12:21 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFCKiI017006; Fri, 10 Jan 2014 15:12:20 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:12:20 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D054F1C18DC; Fri, 10 Jan 2014 10:12:18 -0500 (EST)
Date: Fri, 10 Jan 2014 10:12:18 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110151218.GA20152@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1447395332.20140110155157@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
 pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:51:57PM +0100, Sander Eikelenboom wrote:
> Hi Konrad,
> 
> Normally I never reattach PCI devices to dom0, but at the moment I have some use for it.
> 
> But it seems pci-detach isn't completely detaching the device from the guest.
> 
> - Say I have an HVM guest with domid=2 and a PCI device passed through with BDF 00:19.0; the device is hidden at boot with xen-pciback.hide=(00:19.0) in grub.
> 
> - Now I do a "xl pci-assignable-list"
>   This returns nothing, which is correct since all hidden devices have already been assigned to guests.
> 
> - Then I do "xl -v pci-detach 2 00:19.0"
>   Which also returns nothing ...
> 
> - Now I do a "xl pci-assignable-list" again ..
>   This returns:
>   "0000:00:19.0"
>   So the pci-detach does seem to have done *something* :-)

Or it thinks it has :-)

> 
> - But when now trying to remove the device from pciback to dom0 with "xl pci-assignable-remove 00:19.0", it gives an error
>   and later some stack traces:
> 
>   xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>   xen_pciback: ****** driver domain may still access this device's i/o resources!
>   xen_pciback: ****** shutdown driver domain before binding device
>   xen_pciback: ****** to other drivers of domains

What about /var/log/xen/qemu-dm* and the 'lspci' in the guest? Is the PCI device
removed from there?
> 
> 
> When I shut the guest down instead of using pci-detach, "xl pci-assignable-remove" works fine and I can rebind the device to its driver in dom0.
> 
> So am I misreading the wiki, or is it not possible to detach a device from a running domain?
> 
> Oh yes, this is running xen-unstable and a 3.13-rc7 kernel.

Do you see the same issue with 'xend'? 
> 
> --
> Sander
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1doa-0007Hh-2Q; Fri, 10 Jan 2014 15:16:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1doY-0007Ha-Nx
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:16:02 +0000
Received: from [85.158.139.211:40967] by server-4.bemta-5.messagelabs.com id
	D2/08-26791-1BE00D25; Fri, 10 Jan 2014 15:16:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389366959!9056159!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31750 invoked from network); 10 Jan 2014 15:16:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:16:01 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFFsoi022159
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:15:55 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFFrPT004715
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:15:53 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFFqoN010472; Fri, 10 Jan 2014 15:15:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:15:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B4EE21C18DC; Fri, 10 Jan 2014 10:15:51 -0500 (EST)
Date: Fri, 10 Jan 2014 10:15:51 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <20140110151551.GB20152@phenom.dumpdata.com>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110150000.GA20287@aepfle.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Li Dongyang <lidongyang@novell.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:00:00PM +0100, Olaf Hering wrote:
> On Fri, Jan 10, Ian Campbell wrote:
> 
> > On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> > > What is the reason the backend 'type' property of a configured disk is
> > > now "qdisk" instead of "file"?
> > 
> > Because qdisk is the backend instead of loop+blk (==file) I think this
> > just happens naturally.
> > 
> > >  Would the guest really care about that
> > > detail?  For example block-front currently just checks for "phy" and
> > > "file" when deciding if discard should be enabled.
> > 
> > That sounds entirely bogus, it should be checking for some sort of
> > feature-discard.
> 
> It does that, then calls blkfront_setup_discard which in turn knows just
> about phy and file. And I wonder why it does that.
> Maybe this function should be simplified to assume that, if it's called,
> feature_discard can be enabled. And if both
> discard-granularity/discard-alignment exist those properties should be
> assigned, similar for discard-secure.
> 
> Now that I look at the history of blkfront_setup_discard:
> 
>  Li, Konrad, why does that function care at all about the 'type'?
>  Shouldn't that check be removed?

You are looking at:
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)

My recollection is that at the time the patches were developed, loop
was not able to do discard operations. That has since changed and
loop can do it. Hence the force of =1 was added in.

But that assumes that 'file' is going through the 'loop' device.

If that assumption is incorrect then this needs to be fixed and
perhaps the underlying device ('file'?) interrogated as to whether
it can do discard or not.

> 
> Olaf
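Olaf's suggested simplification (trust the backend's feature-discard advertisement rather than the 'type' string) might look roughly like this Python sketch; `setup_discard`, its dict argument, and its keys are illustrative stand-ins for the xenstore reads in blkfront, not real kernel code:

```python
# Illustrative sketch of the simplified discard negotiation proposed above
# (not actual blkfront code): enable discard whenever the backend advertises
# feature-discard, and pick up the optional properties when present.
def setup_discard(backend):
    """backend: dict of xenstore-style string keys, e.g. {"feature-discard": "1"}."""
    info = {"feature_discard": False, "feature_secdiscard": False}
    if not int(backend.get("feature-discard", "0")):
        return info  # backend did not advertise discard; nothing to set up
    info["feature_discard"] = True
    # Use granularity/alignment only when both properties exist.
    if "discard-granularity" in backend and "discard-alignment" in backend:
        info["discard_granularity"] = int(backend["discard-granularity"])
        info["discard_alignment"] = int(backend["discard-alignment"])
    if int(backend.get("discard-secure", "0")):
        info["feature_secdiscard"] = True
    return info
```

Note the sketch never consults 'type', which is the point of the proposed cleanup.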

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dp2-0007LZ-Gq; Fri, 10 Jan 2014 15:16:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dp0-0007LI-KJ
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:16:30 +0000
Received: from [85.158.143.35:8491] by server-1.bemta-4.messagelabs.com id
	46/39-02132-DCE00D25; Fri, 10 Jan 2014 15:16:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389366988!3848062!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25301 invoked from network); 10 Jan 2014 15:16:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:16:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91702441"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:16:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:16:27 -0500
Message-ID: <1389366985.19142.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Jan 2014 15:16:25 +0000
In-Reply-To: <20140110145807.GB19124@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > create ^
> > owner Wei Liu <wei.liu2@citrix.com>
> > thanks
> > 
> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > When I have following configuration in HVM config file:
> > >   memory=128
> > >   maxmem=256
> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > 
> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > 
> > > With claim_mode=0, I can successfully create the HVM guest.
> > 
> > Is it trying to claim 256M instead of 128M? (although the likelihood
> 
> No. 128MB actually.
> 
> > that you only have 128-255M free is quite low, or are you
> > autoballooning?)
> 
> This patch fixes it for me. It basically sets the number of pages
> claimed to be 'maxmem' instead of 'memory' for PoD.
> 
> I don't know PoD very well,

Me neither, this might have to wait for George to get back.

We should also consider flipping the default claim setting to off in xl
for 4.4, since that is likely to be a lower impact change than fixing
the issue (and one which we all understand!).

>  and this claim is only valid during the
> allocation of the guests memory - so the 'target_pages' value might be
> the wrong one. However looking at the hypervisor's
> 'p2m_pod_set_mem_target' I see this comment:
> 
>  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>  317  *   entries.  The balloon driver will deflate the balloon to give back
>  318  *   the remainder of the ram to the guest OS.
> 
> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> And then it is the responsibility of the balloon driver to give the memory
> back (and this is where the 'static-max' et al come in play to tell the
> balloon driver to balloon out).

PoD exists purely so that we don't need the 'maxmem' amount of memory at
boot time. It is basically there in order to let the guest get booted
far enough to load the balloon driver to give the memory back.

It's basically a boot time zero-page sharing mechanism AIUI.

> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..65e9577 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = nr_pages - cur_pages;
> +
> +        if ( pod_mode )
> +            nr = target_pages - 0x20;

0x20?

> +
> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
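To make the numbers concrete: with memory=128 and maxmem=256 as in Wei's config, the two candidate claim sizes differ by a factor of two. A Python back-of-the-envelope (illustrative only; `pages()` is a hypothetical helper assuming 4 KiB pages, and the patch's unexplained 0x20 offset is left aside):

```python
# Back-of-the-envelope for the claim-size discussion above (illustrative
# only, not libxc code; 4 KiB pages assumed).
def pages(mem_mib):
    """Number of 4 KiB pages in mem_mib MiB."""
    return mem_mib * 1024 * 1024 // 4096

# Wei's failing configuration: memory=128, maxmem=256.
target_pages = pages(128)   # pages backing the PoD guest at boot
max_pages = pages(256)      # pages the guest may eventually balloon up to

print(target_pages, max_pages)
# 32768 65536
```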



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > create ^
> > owner Wei Liu <wei.liu2@citrix.com>
> > thanks
> > 
> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > When I have following configuration in HVM config file:
> > >   memory=128
> > >   maxmem=256
> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > 
> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > 
> > > With claim_mode=0, I can successfully create the HVM guest.
> > 
> > Is it trying to claim 256M instead of 128M? (although the likelihood
> 
> No. 128MB actually.
> 
> > that you only have 128-255M free is quite low, or are you
> > autoballooning?)
> 
> This patch fixes it for me. It basically sets the amount of pages
> claimed to be 'maxmem' instead of 'memory' for PoD.
> 
> I don't know PoD very well,

Me neither, this might have to wait for George to get back.

We should also consider flipping the default claim setting to off in xl
for 4.4, since that is likely to be a lower impact change than fixing
the issue (and one which we all understand!).

>  and this claim is only valid during the
> allocation of the guests memory - so the 'target_pages' value might be
> the wrong one. However looking at the hypervisor's
> 'p2m_pod_set_mem_target' I see this comment:
> 
>  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>  317  *   entries.  The balloon driver will deflate the balloon to give back
>  318  *   the remainder of the ram to the guest OS.
> 
> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> And then it is the responsibility of the balloon driver to give the memory
> back (and this is where the 'static-max' et al come in play to tell the
> balloon driver to balloon out).

PoD exists purely so that we don't need the 'maxmem' amount of memory at
boot time. It is basically there in order to let the guest get booted
far enough to load the balloon driver to give the memory back.

It's basically a boot time zero-page sharing mechanism AIUI.
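[The claim-size change being debated below can be sketched as a toy model.
This is not the real libxc code; `claim_size` and `POD_SLACK_PAGES` are
hypothetical names, the latter standing in for the patch's unexplained
bare 0x20 constant that is questioned further down.]

```c
#include <assert.h>

/* Toy model of the claim-size computation in the quoted patch below.
 * POD_SLACK_PAGES is a hypothetical name for the patch's bare 0x20;
 * the thread questions where that constant comes from. */
#define POD_SLACK_PAGES 0x20

static unsigned long claim_size(unsigned long nr_pages,
                                unsigned long cur_pages,
                                unsigned long target_pages,
                                int pod_mode)
{
    /* Non-PoD: claim every page still to be allocated. */
    unsigned long nr = nr_pages - cur_pages;

    /* PoD: only 'target_pages' (the "memory=" amount) is populated at
     * boot; the gap up to "maxmem" is backed by the PoD cache, so the
     * claim is made against the smaller target. */
    if (pod_mode)
        nr = target_pages - POD_SLACK_PAGES;

    return nr;
}
```

[With memory=128 and maxmem=256 this claims roughly the 128M target
rather than 256M, matching Konrad's observation above.]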

> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..65e9577 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = nr_pages - cur_pages;
> +
> +        if ( pod_mode )
> +            nr = target_pages - 0x20;

0x20?

> +
> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:18:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1drH-00086k-CB; Fri, 10 Jan 2014 15:18:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1drG-00085U-3y
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:18:50 +0000
Received: from [85.158.143.35:39640] by server-1.bemta-4.messagelabs.com id
	95/BC-02132-95F00D25; Fri, 10 Jan 2014 15:18:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389367128!3848698!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9756 invoked from network); 10 Jan 2014 15:18:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 15:18:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 15:18:48 +0000
Message-Id: <52D01D640200007800112739@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 15:18:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartA1922E44.0__="
Cc: Tim Deegan <tim@xen.org>, CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>,
	Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 4.2] ix86: fix linear page table construction in
 alloc_l2_table()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartA1922E44.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Slot 0 got updated when slot 3 was meant. The mistake was hidden by
create_pae_xen_mappings() correcting things immediately afterwards
(i.e. before the new entries could get used the first time).

Reported-by: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1488,7 +1488,7 @@ static int alloc_l2_table(struct page_in
             l2e_write(&pl2e[l2_table_offset(PERDOMAIN_VIRT_START) + i],
                       l2e_from_page(perdomain_pt_page(d, i),
                                     __PAGE_HYPERVISOR));
-        pl2e[l2_table_offset(LINEAR_PT_VIRT_START)] =3D
+        pl2e[l2_table_offset(LINEAR_PT_VIRT_START) + 3] =3D
             l2e_from_pfn(pfn, __PAGE_HYPERVISOR);
 #else
         memcpy(&pl2e[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],




--=__PartA1922E44.0__=
Content-Type: text/plain; name="ix86-linear-pt-construction.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="ix86-linear-pt-construction.patch"

ix86: fix linear page table construction in alloc_l2_table()=0A=0ASlot 0 =
got updated when slot 3 was meant. The mistake was hidden by=0Acreate_pae_x=
en_mappings() correcting things immediately afterwards=0A(i.e. before the =
new entries could get used the first time).=0A=0AReported-by: CHENG =
Yueqiang <yqcheng.2008@phdis.smu.edu.sg>=0ASigned-off-by: Jan Beulich =
<jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/mm.c=0A+++ b/xen/arch/x86/mm.c=
=0A@@ -1488,7 +1488,7 @@ static int alloc_l2_table(struct page_in=0A       =
      l2e_write(&pl2e[l2_table_offset(PERDOMAIN_VIRT_START) + i],=0A       =
                l2e_from_page(perdomain_pt_page(d, i),=0A                  =
                   __PAGE_HYPERVISOR));=0A-        pl2e[l2_table_offset(LIN=
EAR_PT_VIRT_START)] =3D=0A+        pl2e[l2_table_offset(LINEAR_PT_VIRT_STAR=
T) + 3] =3D=0A             l2e_from_pfn(pfn, __PAGE_HYPERVISOR);=0A =
#else=0A         memcpy(&pl2e[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],=0A
--=__PartA1922E44.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartA1922E44.0__=--


From xen-devel-bounces@lists.xen.org Fri Jan 10 15:20:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dsU-0008VK-G0; Fri, 10 Jan 2014 15:20:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1dsS-0008Ut-I4
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:20:04 +0000
Received: from [193.109.254.147:33167] by server-9.bemta-14.messagelabs.com id
	DD/0A-13957-3AF00D25; Fri, 10 Jan 2014 15:20:03 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389367203!10038733!1
X-Originating-IP: [81.169.146.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14693 invoked from network); 10 Jan 2014 15:20:03 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.179)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:20:03 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389367202; l=903;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=Tpn5FTh5FnfaZAkZiuhDZLz143k=;
	b=ZBb58Swe75b1sRORS+pN+1XV1eqqMw2EuUq1KllZIqDKb/6sEboAorIYuuGgf7OrbsK
	CRnT/WMidQj5wkCic5n9or3w1+rBMQZZScucaAtmhzKLIyK33IhuoYpbGitaM5RAu98Nt
	R/zB2QvNb8mt2njOh1zfBrPtEeVLV0qSHB8=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id Y0473cq0AFK0LCU ; 
	Fri, 10 Jan 2014 16:20:00 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id EAA495024C; Fri, 10 Jan 2014 16:19:56 +0100 (CET)
Date: Fri, 10 Jan 2014 16:19:56 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110151956.GA22646@aepfle.de>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
	<20140110151551.GB20152@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110151551.GB20152@phenom.dumpdata.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[ removing Li who is no longer with Novell ]

On Fri, Jan 10, Konrad Rzeszutek Wilk wrote:

> You are looking at:
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> 
> My recollection is that at the time the patches were developed, loop
> was not able to do discard operations. That has since changed and
> loop can do it. Hence the force of =1 was added in.

If the backend can't do discard then it should not advertise the feature.

Are you ok with me sending a patch which simplifies this function?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:20:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dtC-0000BA-Um; Fri, 10 Jan 2014 15:20:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dtB-0000As-N9
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:20:49 +0000
Received: from [85.158.143.35:35400] by server-3.bemta-4.messagelabs.com id
	18/BB-32360-1DF00D25; Fri, 10 Jan 2014 15:20:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389367247!8305446!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2402 invoked from network); 10 Jan 2014 15:20:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:20:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91704170"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:20:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:20:46 -0500
Message-ID: <1389367245.19142.68.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Jan 2014 15:20:45 +0000
In-Reply-To: <20140110151551.GB20152@phenom.dumpdata.com>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
	<20140110151551.GB20152@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Li Dongyang <lidongyang@novell.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 10:15 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 04:00:00PM +0100, Olaf Hering wrote:
> > On Fri, Jan 10, Ian Campbell wrote:
> > 
> > > On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> > > > What is the reason the backend 'type' property of a configured disk is
> > > > now "qdisk" instead of "file"?
> > > 
> > > Because qdisk is the backend instead of loop+blk (==file) I think this
> > > just happens naturally.
> > > 
> > > >  Would the guest really care about that
> > > > detail?  For example block-front currently just checks for "phy" and
> > > > "file" when deciding if discard should be enabled.
> > > 
> > > That sounds entirely bogus, it should be checking for some sort of
> > > feature-discard.
> > 
> > It does that, then calls blkfront_setup_discard which in turn knows just
> > about phy and file. And I wonder why it does that.
> > Maybe this function should be simplified to assume that, if it's called,
> > feature_discard can be enabled. And if both
> > discard-granularity/discard-alignment exist those properties should be
> > assigned, similar for discard-secure.
> > 
> > Now that I look at the history of blkfront_setup_discard:
> > 
> >  Li, Konrad, why does that function care at all about the 'type'?
> >  Shouldn't that check be removed?
> 
> You are looking at:
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> 
> My recollection is that at the time the patches were developed, loop
> was not able to do discard operations. That has since changed and
> loop can do it. Hence the force of =1 was added in.
> 
> But that assumes that 'file' is going through the 'loop' device.
> 
> If that assumption is incorrect then this needs to be fixed and
> perhaps the underlying device ('file'?) interrogated as to whether
> it can do discard or not.

Why on earth is this happening in the frontend?

The *backend* should be querying the underlying device and propagating
the result via the feature flag to the frontend. Having the backend
advertise discard and then have the frontend second guess this based on
rumour and hearsay (which is all probing this type field) is just nuts.
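[The split Ian argues for can be modeled in miniature: the backend alone
decides, from the underlying device, whether to publish "feature-discard",
and the frontend trusts only that flag. The array below is a stand-in for
a backend's xenstore directory; the function names are illustrative, not
the real blkfront/blkback API.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A fake xenstore directory: NULL-key entry terminates the list. */
struct xs_entry { const char *key; const char *val; };

static const char *xs_read(const struct xs_entry *dir, const char *key)
{
    for (; dir->key != NULL; dir++)
        if (strcmp(dir->key, key) == 0)
            return dir->val;
    return NULL;
}

static int frontend_discard_enabled(const struct xs_entry *backend_dir)
{
    /* No peeking at "type": only the explicit feature flag counts. */
    const char *v = xs_read(backend_dir, "feature-discard");
    return v != NULL && strcmp(v, "1") == 0;
}
```

[A qdisk backend that advertises the flag gets discard; a "file" backend
that stays silent does not, regardless of what its type string says.]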

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Li Dongyang <lidongyang@novell.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 10:15 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 04:00:00PM +0100, Olaf Hering wrote:
> > On Fri, Jan 10, Ian Campbell wrote:
> > 
> > > On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> > > > What is the reason the backend 'type' property of a configured disk is
> > > > now "qdisk" instead of "file"?
> > > 
> > > Because qdisk is the backend instead of loop+blk (==file) I think this
> > > just happens naturally.
> > > 
> > > >  Would the guest really care about that
> > > > detail?  For example block-front currently just checks for "phy" and
> > > > "file" when deciding if discard should be enabled.
> > > 
> > > That sounds entirely bogus, it should be checking for some sort of
> > > feature-discard.
> > 
> > It does that, then calls blkfront_setup_discard which in turn knows just
> > about phy and file. And I wonder why it does that.
> > Maybe this function should be simplified to assume that, if it's called,
> > feature_discard can be enabled. And if both
> > discard-granularity/discard-alignment exist those properties should be
> > assigned, similar for discard-secure.
> > 
> > Now that I look at the history of blkfront_setup_discard:
> > 
> >  Li, Konrad, why does that function care at all about the 'type'?
> >  Shouldn't that check be removed?
> 
> You are looking at:
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> 
> My recollection is that at the time the patches were developed, loop
> was not able to do discard operations. That has since changed and
> loop can do it. Hence the force of =1 was added in.
> 
> But that assumes that 'file' is going through the 'loop' device.
> 
> If that assumption is incorrect then this needs to be fixed and
> perhaps the underlying device ('file'?) interrogated as to whether
> it can do discard or not.

Why on earth is this happening in the frontend?

The *backend* should be querying the underlying device and propagating
the result via the feature flag to the frontend. Having the backend
advertise discard and then having the frontend second-guess this based on
rumour and hearsay (which is all that probing this type field amounts to)
is just nuts.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:21:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dtS-0000FM-CB; Fri, 10 Jan 2014 15:21:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <f.soltani298@gmail.com>) id 1W1dtQ-0000Eg-Ld
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:21:04 +0000
Received: from [85.158.139.211:46406] by server-10.bemta-5.messagelabs.com id
	40/0B-01405-FDF00D25; Fri, 10 Jan 2014 15:21:03 +0000
X-Env-Sender: f.soltani298@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389367261!9052999!1
X-Originating-IP: [209.85.223.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28093 invoked from network); 10 Jan 2014 15:21:03 -0000
Received: from mail-ie0-f171.google.com (HELO mail-ie0-f171.google.com)
	(209.85.223.171)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:21:03 -0000
Received: by mail-ie0-f171.google.com with SMTP id to1so1128282ieb.2
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 07:21:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=k0p7regAY9SMA60yVHEKBlFIMzXj/aS5PWzaNHILaKc=;
	b=Czw2zjkKXdrv4od5AyAQPqwif7KAj0T6ukXsgyHdFShlFAsl27Mc44k3omSQKDPDCt
	YU2NSshn3gLj7IDYB3CpS3js+Rw5UxqJvshpWdsgDlVc7D43MpMsHy5zVNpWWVLMmuQ3
	F35ZRhNJ6Np+nJ9xdXIZU1pcp7vQrs5BJlmrYUupKnN2M6h8jBEGQHtcUvdc59kyPmdW
	stbJgriY0gevK7rUMWJDnkO+b1YUhRjrkjzbt5Y5DAzgRg1zT+gRCTYKfUJk/DMnTpRK
	9uW7yXJoqHmfrgLbUmwPpHKc3OosXUr9+lYP7niIYtJVTWq8IP0U4IEOwLki3dcw5Y4N
	MyhQ==
MIME-Version: 1.0
X-Received: by 10.42.66.134 with SMTP id p6mr1545824ici.85.1389367261509; Fri,
	10 Jan 2014 07:21:01 -0800 (PST)
Received: by 10.64.69.40 with HTTP; Fri, 10 Jan 2014 07:21:01 -0800 (PST)
Date: Fri, 10 Jan 2014 18:51:01 +0330
Message-ID: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
From: fahimeh soltaninejad <f.soltani298@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Problem with enabling vt(virtualization),
	xen does not boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

hello,
I have a problem: when I enable VT in the BIOS of my system, I cannot boot
into my Fedora system with the Xen hypervisor. In fact, when VT is enabled
and I try to boot into Xen, I get this error:
iommu: mapping reserved region failed
and after that it returns me to the GRUB menu. What's the problem?
I have an HP Pavilion Entertainment PC.
thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:22:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1duL-0000Tb-89; Fri, 10 Jan 2014 15:22:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1duJ-0000Si-5j
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:21:59 +0000
Received: from [85.158.139.211:2241] by server-17.bemta-5.messagelabs.com id
	61/9E-19152-61010D25; Fri, 10 Jan 2014 15:21:58 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389367315!9045407!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8722 invoked from network); 10 Jan 2014 15:21:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:21:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89566991"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:21:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:21:48 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1drf-0001yy-0S;
	Fri, 10 Jan 2014 15:19:15 +0000
Date: Fri, 10 Jan 2014 15:19:15 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110151914.GF1696@perard.uk.xensource.com>
References: <20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110032845.GA3660@konrad-lan.dumpdata.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 10:28:47PM -0500, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 09, 2014 at 02:56:24PM +0000, Anthony PERARD wrote:
> > On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> > > > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> > > > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> > > > > > [...]
> > > > > > > > Those Xen report something like:
> > > > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > > > 131328
> > > > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > > > memflags=0 (62 of 64)
> > > > > > > > 
> > > > > > > > ?
> > > > > > > > 
> > > > > > > > (I tried to reproduce the issue by simply adding many emulated e1000 devices
> > > > > > > > in QEMU :) )
> > > > > > > > 
> > > > 
> > > > > -bash-4.1# lspci -s 01:00.0 -v 
> > > > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > > > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > > > >         Flags: fast devsel, IRQ 16
> > > > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > > > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > > > >         I/O ports at e020 [disabled] [size=32]
> > > > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > > > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > > > 
> > > > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> > > > allocate memory for it. We will maybe have to find another way.
> > > > qemu-trad does not seem to allocate memory, but I haven't gone very
> > > > far in trying to check that.
> > > 
> > > And indeed that is the case. The "Fix" below fixes it.
> > > 
> > > 
> > > Based on that and this guest config:
> > > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> > > memory = 2048
> > > boot="d"
> > > maxvcpus=32
> > > vcpus=1
> > > serial='pty'
> > > vnclisten="0.0.0.0"
> > > name="latest"
> > > vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> > > pci = ["01:00.0"]
> > > 
> > > I can boot the guest.
> > 
> > And can you access the ROM from the guest?
> 
> I hadn't tried it. This is with a NIC and I just wanted to see if it
> could do PCI passthrough without using the Option ROM.
> > 
> > 
> > Also, I have another patch; it initializes the PCI ROM BAR like any
> > other BAR. In this case, if qemu is involved in an access to the ROM, it
> > will print an error, as is the case for the other BARs.
> > 
> > I tried to test it, but it was with an embedded VGA card. When I dumped
> > the ROM, I got the same one as the emulated card's instead of the ROM from
> > the device.
> 
> Oddly enough for me with your patch the NIC's BIOS was invoked and
> it tried to PXE boot:
> 
> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> 
> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> ..
> and I did see the PXE boot menu in my guest - so even
> better!

Perfect, looks like it is the fix for PCI passthrough.

> I have not yet done the GPU - this issue was preventing me from using
> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> 
> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>' please do.

Will do.

> Thank you!
> > 
> > 
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 6dd7a68..2bbdb6d 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >  
> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >  
> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> > -                                      "xen-pci-pt-rom", d->rom.size);
> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> > +                              "xen-pci-pt-rom", d->rom.size);
> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >                           &s->rom);
> >  
> > 
> > -- 
> > Anthony PERARD

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:22:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1duK-0000TN-SI; Fri, 10 Jan 2014 15:22:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1duI-0000Sd-T3
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:21:59 +0000
Received: from [85.158.137.68:54667] by server-6.bemta-3.messagelabs.com id
	49/9C-04868-61010D25; Fri, 10 Jan 2014 15:21:58 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389367315!7265813!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5766 invoked from network); 10 Jan 2014 15:21:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:21:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89566993"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:21:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:21:49 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1dra-0001yv-Pj;
	Fri, 10 Jan 2014 15:19:10 +0000
Date: Fri, 10 Jan 2014 15:19:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110151910.GD30581@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389366985.19142.64.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
 [...]
 
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..65e9577 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >  
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = nr_pages - cur_pages;
> > +
> > +        if ( pod_mode )
> > +            nr = target_pages - 0x20;
> 
> 0x20?
> 

128K VGA hole. :-)

Wei.

> > +
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> >          if ( rc != 0 )
> >          {
> >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > > > > [...]
> > > > > > > > Xen reports something like:
> > > > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
> > > > > > > > 131328
> > > > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
> > > > > > > > memflags=0 (62 of 64)
> > > > > > > > 
> > > > > > > > ?
> > > > > > > > 
> > > > > > > > (I tried to reproduce the issue by simply adding many emulated e1000
> > > > > > > > devices in QEMU :) )
> > > > > > > > 
> > > > 
> > > > > -bash-4.1# lspci -s 01:00.0 -v 
> > > > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> > > > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > > > >         Flags: fast devsel, IRQ 16
> > > > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> > > > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> > > > >         I/O ports at e020 [disabled] [size=32]
> > > > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> > > > >         Expansion ROM at fb400000 [disabled] [size=4M]
> > > > 
> > > > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> > > > allocate memory for it. We may have to find another way.
> > > > qemu-trad does not seem to allocate memory, but I haven't gotten very
> > > > far in trying to check that.
> > > 
> > > And indeed that is the case. The "Fix" below fixes it.
> > > 
> > > 
> > > Based on that and this guest config:
> > > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> > > memory = 2048
> > > boot="d"
> > > maxvcpus=32
> > > vcpus=1
> > > serial='pty'
> > > vnclisten="0.0.0.0"
> > > name="latest"
> > > vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> > > pci = ["01:00.0"]
> > > 
> > > I can boot the guest.
> > 
> > And can you access the ROM from the guest?
> 
> I hadn't tried it. This is with a NIC and I just wanted to see if it
> could do PCI passthrough without using the Option ROM.
> > 
> > 
> > Also, I have another patch that initializes the PCI ROM BAR like any
> > other BAR. In this case, if QEMU is involved in the access to the ROM, it
> > will print an error, as it does for the other BARs.
> > 
> > I tried to test it, but it was with an embedded VGA card. When I dump
> > the ROM, I got the same one as the emulated card instead of the ROM from
> > the device.
> 
> Oddly enough for me with your patch the NIC's BIOS was invoked and
> it tried to PXE boot:
> 
> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> 
> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> ..
> and I did see the PXE boot menu in my guest - so even
> better!

Perfect, it looks like this is the fix for PCI passthrough.

> I have not yet done the GPU - this issue was preventing me from using
> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> 
> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>' please do.

Will do.

> Thank you!
> > 
> > 
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 6dd7a68..2bbdb6d 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >  
> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >  
> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> > -                                      "xen-pci-pt-rom", d->rom.size);
> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> > +                              "xen-pci-pt-rom", d->rom.size);
> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >                           &s->rom);
> >  
> > 
> > -- 
> > Anthony PERARD

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:22:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1duo-0000d4-Np; Fri, 10 Jan 2014 15:22:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1dun-0000cZ-8b
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:22:29 +0000
Received: from [85.158.139.211:5811] by server-16.bemta-5.messagelabs.com id
	7B/F1-11843-43010D25; Fri, 10 Jan 2014 15:22:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389367346!9061360!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19065 invoked from network); 10 Jan 2014 15:22:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:22:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91704825"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:22:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:22:25 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1djU-0001og-3d;
	Fri, 10 Jan 2014 15:10:48 +0000
Date: Fri, 10 Jan 2014 15:10:48 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110151048.GC30581@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110145807.GB19124@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > create ^
> > owner Wei Liu <wei.liu2@citrix.com>
> > thanks
> > 
> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > When I have following configuration in HVM config file:
> > >   memory=128
> > >   maxmem=256
> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > 
> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > 
> > > With claim_mode=0, I can successfully create the HVM guest.
> > 
> > Is it trying to claim 256M instead of 128M? (although the likelihood
> 
> No. 128MB actually.
> 

Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
8MB video ram). Did I misread your message...

On the hypervisor side d->tot_pages = 30688, d->max_pages = 33024 (128MB
+ 1MB slack). So the claim failed.
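
For reference, the numbers above can be reproduced with a quick sketch
(4 KiB pages, so 256 pages per MiB; that the claim is rejected because it
exceeds d->max_pages is an assumption based on the behaviour described here):

```python
PAGES_PER_MIB = 1024 // 4  # 4 KiB pages, so 256 pages per MiB

# Values from the report: memory=128, maxmem=256, 8 MiB video RAM.
max_pages = (128 + 1) * PAGES_PER_MIB  # 128 MiB + 1 MiB slack = 33024 pages
claimed   = (256 - 8) * PAGES_PER_MIB  # 248 MiB claimed = 63488 pages

# Hypothesis: the claim exceeds d->max_pages, so the hypervisor rejects it.
print(max_pages)            # 33024
print(claimed)              # 63488
print(claimed > max_pages)  # True
```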

> > that you only have 128-255M free is quite low, or are you
> > autoballooning?)
> 
> This patch fixes it for me. It basically sets the amount of pages
> claimed to be 'maxmem' instead of 'memory' for PoD.
> 
> I don't know PoD very well, and this claim is only valid during the
> allocation of the guest's memory - so the 'target_pages' value might be
> the wrong one. However looking at the hypervisor's
> 'p2m_pod_set_mem_target' I see this comment:
> 
>  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>  317  *   entries.  The balloon driver will deflate the balloon to give back
>  318  *   the remainder of the ram to the guest OS.
> 
> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> And then it is the responsibility of the balloon driver to give the memory
> back (and this is where the 'static-max' et al come in play to tell the
> balloon driver to balloon out).
> 
> 
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..65e9577 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = nr_pages - cur_pages;
> +
> +        if ( pod_mode )
> +            nr = target_pages - 0x20;
> +

Yes it should work because this makes nr smaller than d->tot_pages and
d->max_pages. But according to the comment you pasted above this looks
like the wrong fix...

Wei.

> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
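
To see why the proposed change makes the claim fit, the before/after amounts
can be compared numerically (a sketch for memory=128/maxmem=256; the
0x20-page subtraction comes from the diff above, while the nr_pages,
cur_pages and target_pages values are illustrative assumptions):

```python
PAGES_PER_MIB = 256  # 4 KiB pages

# Illustrative values for memory=128 / maxmem=256 (assumptions):
nr_pages     = 256 * PAGES_PER_MIB  # build-time total derived from 'maxmem='
cur_pages    = 8 * PAGES_PER_MIB    # pages already populated (e.g. video RAM)
target_pages = 128 * PAGES_PER_MIB  # PoD target derived from 'memory='
max_pages    = 33024                # d->max_pages quoted earlier in the thread

nr_before = nr_pages - cur_pages  # 63488 pages: exceeds max_pages, claim fails
nr_after  = target_pages - 0x20   # 32736 pages: fits under max_pages

print(nr_before > max_pages)  # True
print(nr_after < max_pages)   # True
```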

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:24:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dwR-0000ze-Ab; Fri, 10 Jan 2014 15:24:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W1dwP-0000zJ-Tz
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:24:10 +0000
Received: from [193.109.254.147:25487] by server-1.bemta-14.messagelabs.com id
	61/5B-15600-99010D25; Fri, 10 Jan 2014 15:24:09 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389367447!10091004!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3804 invoked from network); 10 Jan 2014 15:24:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:24:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91705416"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:24:06 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:24:06 -0500
Message-ID: <52D01094.5060102@citrix.com>
Date: Fri, 10 Jan 2014 15:24:04 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
	<20140109153015.GF12164@zion.uk.xensource.com>
	<52CFDAEC.5080708@citrix.com>
	<20140110114534.GE29180@zion.uk.xensource.com>
In-Reply-To: <20140110114534.GE29180@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 11:45, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 11:35:08AM +0000, Zoltan Kiss wrote:
> [...]
>>
>>>> @@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>>>   	err = gop->status;
>>>>   	if (unlikely(err))
>>>>   		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>>>> +	else {
>>>> +		if (vif->grant_tx_handle[pending_idx] !=
>>>> +			NETBACK_INVALID_HANDLE) {
>>>> +			netdev_err(vif->dev,
>>>> +				"Stale mapped handle! pending_idx %x handle %x\n",
>>>> +				pending_idx, vif->grant_tx_handle[pending_idx]);
>>>> +			BUG();
>>>> +		}
>>>> +		set_phys_to_machine(idx_to_pfn(vif, pending_idx),
>>>> +			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
>>>
>>> What happens when you don't have this?
>> Your frags will be filled with garbage. I don't understand exactly
>> what this function does; someone might want to enlighten us. I
>> took its usage from the classic kernel.
>> Also, it might be worthwhile to check the return value and BUG if
>> it's false, but I don't know exactly what that return value means.
>>
>
> This is actually part of gnttab_map_refs. As you're using hypercall
> directly this becomes very fragile.
>
> So the right thing to do is to fix gnttab_map_refs.
I agree; as I mentioned in another email in this thread, I think that 
should be the topic of a separate patch series. In the meantime, I will 
use gnttab_batch_map instead of the direct hypercall, since it handles the 
GNTST_eagain scenario, and I will use set_phys_to_machine the same way 
as m2p_override does:

if (unlikely(!set_phys_to_machine(idx_to_pfn(vif, pending_idx),
			FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT))))
			BUG();

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:24:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1dwd-00013Y-Uo; Fri, 10 Jan 2014 15:24:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1dwd-00013G-2K
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:24:23 +0000
Received: from [85.158.143.35:64947] by server-2.bemta-4.messagelabs.com id
	F3/25-11386-6A010D25; Fri, 10 Jan 2014 15:24:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389367456!10939304!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7691 invoked from network); 10 Jan 2014 15:24:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:24:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89568083"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:24:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:24:14 -0500
Message-ID: <1389367453.19142.70.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: fahimeh soltaninejad <f.soltani298@gmail.com>
Date: Fri, 10 Jan 2014 15:24:13 +0000
In-Reply-To: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
References: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Problem with enabling vt(virtualization),
 xen does not boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 18:51 +0330, fahimeh soltaninejad wrote:
> Hello,
> I have a problem: when I enable VT in the BIOS of my system, I cannot
> boot into my Fedora system with the Xen hypervisor. In fact, when VT is
> enabled and I try to boot into Xen, I get this error:
> iommu: mapping reserved region failed
> and after that it returns me to GRUB. What is the problem?

http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen contains advice on
the information which a bug report should contain; I'm afraid the above
is not sufficient for anyone to be able to advise you.

I suggest you take this to the xen-users list in the first instance.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:30:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:30:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1e22-0001ye-Rl; Fri, 10 Jan 2014 15:29:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1e21-0001xu-Am
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:29:57 +0000
Received: from [85.158.137.68:38162] by server-5.bemta-3.messagelabs.com id
	00/83-25188-4F110D25; Fri, 10 Jan 2014 15:29:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389367794!8477493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18283 invoked from network); 10 Jan 2014 15:29:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:29:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFTocJ009635
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:29:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFTn6C002880
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:29:50 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFTniH011127; Fri, 10 Jan 2014 15:29:49 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:29:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 147D81C18DC; Fri, 10 Jan 2014 10:29:48 -0500 (EST)
Date: Fri, 10 Jan 2014 10:29:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <20140110152948.GA20640@phenom.dumpdata.com>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
	<20140110151551.GB20152@phenom.dumpdata.com>
	<20140110151956.GA22646@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110151956.GA22646@aepfle.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:19:56PM +0100, Olaf Hering wrote:
> [ removing Li who is no longer with Novell ]
> 
> On Fri, Jan 10, Konrad Rzeszutek Wilk wrote:
> 
> > You are looking at:
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> > 
> > My recollection is that at the time the patches were developed, loop
> > was not able to do discard operations. That has since changed and
> > loop can now do it. Hence the forcing of feature_discard = 1 was added.
> 
> If the backend can't do discard then it should not advertise the feature.

<nods>
> 
> Are you ok with me sending a patch which simplifies this function?

Yes of course!
> 
> Olaf
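
For illustration, the simplification under discussion could look
something like the following sketch (hypothetical names, not the actual
patch Olaf ended up sending): instead of special-casing type == "file",
the frontend would set feature_discard only from what the backend
actually advertises.

```c
#include <assert.h>

/* Hypothetical sketch only -- not the real xen-blkfront code.
 * The point: no forcing of feature_discard = 1 for "file" vbds;
 * the flag simply mirrors what the backend advertises. */
struct blkfront_info_sketch {
    int feature_discard;
};

static void parse_discard_sketch(struct blkfront_info_sketch *info,
                                 int backend_advertises_discard)
{
    /* Advertise discard to the block layer only if the backend
     * itself claims support. */
    info->feature_discard = backend_advertises_discard ? 1 : 0;
}
```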

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:30:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:30:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1e1v-0001xA-9z; Fri, 10 Jan 2014 15:29:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1e1u-0001wn-Fd
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:29:50 +0000
Received: from [85.158.143.35:47633] by server-2.bemta-4.messagelabs.com id
	B8/4D-11386-DE110D25; Fri, 10 Jan 2014 15:29:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389367787!10915804!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10476 invoked from network); 10 Jan 2014 15:29:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:29:49 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFShV5005009
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:28:44 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFSfps000476
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:28:43 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFSfM0000471; Fri, 10 Jan 2014 15:28:41 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:28:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 541B21C18DC; Fri, 10 Jan 2014 10:28:40 -0500 (EST)
Date: Fri, 10 Jan 2014 10:28:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110152840.GA20385@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389366985.19142.64.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > create ^
> > > owner Wei Liu <wei.liu2@citrix.com>
> > > thanks
> > > 
> > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > When I have following configuration in HVM config file:
> > > >   memory=128
> > > >   maxmem=256
> > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > 
> > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > 
> > > > With claim_mode=0, I can successfully create an HVM guest.
> > > 
> > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > 
> > No. 128MB actually.
> > 
> > > that you only have 128-255M free is quite low, or are you
> > > autoballooning?)
> > 
> > This patch fixes it for me. It basically sets the amount of pages
> > claimed to be 'maxmem' instead of 'memory' for PoD.
> > 
> > I don't know PoD very well,
> 
> Me neither, this might have to wait for George to get back.

<nods>
> 
> We should also consider flipping the default claim setting to off in xl
> for 4.4, since that is likely to be a lower impact change than fixing
> the issue (and one which we all understand!).

<unwraps the Xen 4.4 duct-tape roll>

> 
> >  and this claim is only valid during the
> > allocation of the guests memory - so the 'target_pages' value might be
> > the wrong one. However looking at the hypervisor's
> > 'p2m_pod_set_mem_target' I see this comment:
> > 
> >  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> >  317  *   entries.  The balloon driver will deflate the balloon to give back
> >  318  *   the remainder of the ram to the guest OS.
> > 
> > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > And then it is the responsibility of the balloon driver to give the memory
> > back (and this is where the 'static-max' et al come in play to tell the
> > balloon driver to balloon out).
> 
> PoD exists purely so that we don't need the 'maxmem' amount of memory at
> boot time. It is basically there in order to let the guest get booted
> far enough to load the balloon driver to give the memory back.
> 
> It's basically a boot time zero-page sharing mechanism AIUI.

But it does appear to gulp up hypervisor memory and return it during the
allocation of the guest's memory.

Digging into the hypervisor, I see the following in
'p2m_pod_set_cache_target' (where pod_target in this case is
maxmem - memory, and pod.count is zero, so for Wei's case it would be
128MB):

 216     /* Increasing the target */
 217     while ( pod_target > p2m->pod.count )
 218     {
 222         if ( (pod_target - p2m->pod.count) >= SUPERPAGE_PAGES )
 223             order = PAGE_ORDER_2M;
 224         else
 225             order = PAGE_ORDER_4K;
 226     retry:
 227         page = alloc_domheap_pages(d, order, PAGE_ORDER_4K);

So allocate 64 2MB pages

 243         p2m_pod_cache_add(p2m, page, order);

Add to a list

251 
 252     /* Decreasing the target */
 253     /* We hold the pod lock here, so we don't need to worry about
 254      * cache disappearing under our feet. */
 255     while ( pod_target < p2m->pod.count )
 256     {
..
 266         page = p2m_pod_cache_get(p2m, order);

Get the page (from the list)
..
 287             put_page(page+i);

And then free it.


From reading the code the patch seems correct - we will _need_ that
extra 128MB 'claim' to allocate/free the 128MB extra pages. They
are temporary as we do free them.
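
The grow/shrink logic quoted above can be modeled as a small,
counter-only sketch (no real page allocation; SUPERPAGE_PAGES taken as
512, i.e. 2MB worth of 4KB pages -- this is a toy model, not the
hypervisor code):

```c
#include <assert.h>

#define SUPERPAGE_PAGES 512  /* 2MB worth of 4KB pages */

/* Counter-only model of p2m_pod_set_cache_target's sizing loops:
 * grow toward the target in 2MB chunks while the remaining gap
 * allows it, otherwise one 4KB page at a time; shrink one page
 * at a time (put_page() in the real code). */
static long pod_cache_resize(long pod_target, long pod_count)
{
    /* Increasing the target */
    while (pod_count < pod_target) {
        long gap = pod_target - pod_count;
        pod_count += (gap >= SUPERPAGE_PAGES) ? SUPERPAGE_PAGES : 1;
    }
    /* Decreasing the target */
    while (pod_count > pod_target)
        pod_count--;
    return pod_count;
}
```

For Wei's configuration the gap is (256MB - 128MB) / 4KB = 32768 pages,
i.e. 64 superpages, which is why the claim has to cover that extra
128MB while the cache is being filled.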

> 
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..65e9577 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >  
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = nr_pages - cur_pages;
> > +
> > +        if ( pod_mode )
> > +            nr = target_pages - 0x20;
> 
> 0x20?

Yup. From earlier:

305     if ( pod_mode )
306     {
307         /*
308          * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
309          * adjust the PoD cache size so that domain tot_pages will be
310          * target_pages - 0x20 after this call.
311          */
312         rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
313                                       NULL, NULL, NULL);
314         if ( rc != 0 )
315         {
316             PERROR("Could not set PoD target for HVM guest.\n");
317             goto error_out;
318         }
319     }

Maybe a nice little 'pod_delta' or 'pod_pages' variable should be used
instead of copying this expression around.
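
Pulling the two pieces together, the claim sizing in the proposed fix
amounts to something like this (an illustrative helper, not the
committed code; 0x20 is the VGA-hole page count quoted above):

```c
#include <assert.h>

#define VGA_HOLE_PAGES 0x20  /* pages Xen reserves for the VGA "hole" */

/* Illustrative helper: with PoD enabled, claim the PoD target minus
 * the VGA hole, matching what xc_domain_set_pod_target() was given;
 * otherwise claim the pages that remain to be allocated. */
static unsigned long claim_pages_sketch(int pod_mode,
                                        unsigned long nr_pages,
                                        unsigned long cur_pages,
                                        unsigned long target_pages)
{
    if (pod_mode)
        return target_pages - VGA_HOLE_PAGES;
    return nr_pages - cur_pages;
}
```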

> 
> > +
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> >          if ( rc != 0 )
> >          {
> >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:30:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:30:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1e22-0001ye-Rl; Fri, 10 Jan 2014 15:29:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1e21-0001xu-Am
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:29:57 +0000
Received: from [85.158.137.68:38162] by server-5.bemta-3.messagelabs.com id
	00/83-25188-4F110D25; Fri, 10 Jan 2014 15:29:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389367794!8477493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18283 invoked from network); 10 Jan 2014 15:29:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:29:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFTocJ009635
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:29:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFTn6C002880
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:29:50 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFTniH011127; Fri, 10 Jan 2014 15:29:49 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:29:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 147D81C18DC; Fri, 10 Jan 2014 10:29:48 -0500 (EST)
Date: Fri, 10 Jan 2014 10:29:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <20140110152948.GA20640@phenom.dumpdata.com>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
	<20140110151551.GB20152@phenom.dumpdata.com>
	<20140110151956.GA22646@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110151956.GA22646@aepfle.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:19:56PM +0100, Olaf Hering wrote:
> [ removing Li who is no longer with Novell ]
> 
> On Fri, Jan 10, Konrad Rzeszutek Wilk wrote:
> 
> > You are looking at:
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> > 
> > My recollection is that at the time the patches were developed, loop
> > was not able to do discard operations. That has since changed and
> > loop can do it. Hence the forced '= 1' was added.
> 
> If the backend can't do discard then it should not advertise the feature.

<nods>
> 
> Are you ok with me sending a patch which simplifies this function?

Yes of course!
> 
> Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:33:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1e58-0002aP-Kr; Fri, 10 Jan 2014 15:33:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1e57-0002aD-5Y
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:33:09 +0000
Received: from [85.158.143.35:16541] by server-3.bemta-4.messagelabs.com id
	47/FF-32360-4B210D25; Fri, 10 Jan 2014 15:33:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389367986!10943721!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7352 invoked from network); 10 Jan 2014 15:33:07 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:33:07 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFX2GG010651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:33:02 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFX0cw012909
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:33:01 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFX0aj024841; Fri, 10 Jan 2014 15:33:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:33:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 985191C18DC; Fri, 10 Jan 2014 10:32:59 -0500 (EST)
Date: Fri, 10 Jan 2014 10:32:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110153259.GB20640@phenom.dumpdata.com>
References: <20140110144024.GA19611@aepfle.de>
	<1389365236.19142.54.camel@kazak.uk.xensource.com>
	<20140110150000.GA20287@aepfle.de>
	<20140110151551.GB20152@phenom.dumpdata.com>
	<1389367245.19142.68.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389367245.19142.68.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Olaf Hering <olaf@aepfle.de>, Li Dongyang <lidongyang@novell.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] qdisk vs. file as vbd type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:20:45PM +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 10:15 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 04:00:00PM +0100, Olaf Hering wrote:
> > > On Fri, Jan 10, Ian Campbell wrote:
> > > 
> > > > On Fri, 2014-01-10 at 15:40 +0100, Olaf Hering wrote:
> > > > > What is the reason the backend 'type' property of a configured disk is
> > > > > now "qdisk" instead of "file"?
> > > > 
> > > > Because qdisk is the backend instead of loop+blk (==file) I think this
> > > > just happens naturally.
> > > > 
> > > > >  Would the guest really care about that
> > > > > detail?  For example block-front currently just checks for "phy" and
> > > > > "file" when deciding if discard should be enabled.
> > > > 
> > > > That sounds entirely bogus; it should be checking for some sort of
> > > > feature-discard.
> > > 
> > > It does that, then calls blkfront_setup_discard which in turn knows just
> > > about phy and file. And I wonder why it does that.
> > > Maybe this function should be simplified to assume that if it's called
> > > feature_discard can be enabled. And if both
> > > discard-granularity/discard-alignment exist those properties should be
> > > assigned, similar for discard-secure.
> > > 
> > > Now that I look at the history of blkfront_setup_discard:
> > > 
> > >  Li, Konrad, why does that function care at all about the 'type'?
> > >  Shouldn't that check be removed?
> > 
> > You are looking at:
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1664)   } else if (strncmp(type, "file", 4) == 0)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1665)           info->feature_discard = 1;
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1666)
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1667)   kfree(type);
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1668)}
> > ed30bf317       (Li Dongyang    2011-09-01 18:39:09 +0800       1669)
> > 
> > My recollection is that at the time the patches were developed, loop
> > was not able to do discard operations. That has since changed and
> > loop can do it. Hence the forced '= 1' was added.
> > 
> > But that assumes that 'file' is going through the 'loop' device.
> > 
> > If that assumption is incorrect then this needs to be fixed and
> > perhaps the underlying device ('file'?) interrogated as to whether
> > it can do discard or not.
> 
> Why on earth is this happening in the frontend?
> 
> The *backend* should be querying the underlying device and propagating
> the result via the feature flag to the frontend. Having the backend
> advertise discard and then have the frontend second guess this based on
> rumour and hearsay (which is all probing this type field) is just nuts.

Madness I say!

Note that the discard operations are OK with errors. That is, if the disk
says 'I can do it' but then returns some -Exx error, the filesystems,
tools, etc. are OK with that.

But it is still incorrect. Worse yet, it is odd that we even check
'type' to figure this out.

> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:33:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1e5n-0002ef-4G; Fri, 10 Jan 2014 15:33:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1e5m-0002eW-6r
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:33:50 +0000
Received: from [85.158.143.35:22108] by server-1.bemta-4.messagelabs.com id
	30/54-02132-DD210D25; Fri, 10 Jan 2014 15:33:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389368027!10992379!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6013 invoked from network); 10 Jan 2014 15:33:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:33:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89571796"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:33:47 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:33:47 -0500
Message-ID: <52D012D9.1010602@citrix.com>
Date: Fri, 10 Jan 2014 15:33:45 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <CA+r1ZhinEQiBzonB_+ev_9hry+-7wscEVWcwqW46ExjGC2SYYg@mail.gmail.com>
	<20140110150738.GC19124@phenom.dumpdata.com>
In-Reply-To: <20140110150738.GC19124@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, linux-kernel@vger.kernel.org,
	linux-next@vger.kernel.org, Jim Davis <jim.epost@gmail.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] randconfig build error with next-20140108,
	in drivers/xen/platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 15:07, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 08, 2014 at 03:32:00PM -0700, Jim Davis wrote:
>> Building with the attached random configuration file,
>>
>> drivers/xen/platform-pci.c: In function ‘platform_pci_init’:
>> drivers/xen/platform-pci.c:131:2: error: implicit declaration of
>> function ‘pci_request_region’ [-Werror=implicit-function-declaration]
>>   ret = pci_request_region(pdev, 1, DRV_NAME);
>>   ^
>> drivers/xen/platform-pci.c:170:2: error: implicit declaration of
>> function ‘pci_release_region’ [-Werror=implicit-function-declaration]
>>   pci_release_region(pdev, 0);
>>   ^
>> cc1: some warnings being treated as errors
>> make[2]: *** [drivers/xen/platform-pci.o] Error 1
>>
>> These warnings appeared too:
>>
>> warning: (XEN_PVH) selects XEN_PVHVM which has unmet direct
>> dependencies (HYPERVISOR_GUEST && XEN && PCI && X86_LOCAL_APIC)
> 
> Hey Jim,
> 
> This fix works for me:
> 
> 
> diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
> index d88bfd6..01b9026 100644
> --- a/arch/x86/xen/Kconfig
> +++ b/arch/x86/xen/Kconfig
> @@ -53,6 +53,5 @@ config XEN_DEBUG_FS
>  
>  config XEN_PVH
>  	bool "Support for running as a PVH guest"
> -	depends on X86_64 && XEN
> -	select XEN_PVHVM
> +	depends on X86_64 && XEN && XEN_PVHVM
>  	def_bool n
> 
> David, you OK with that? You suggested to use 'select' in the patchset
> instead of 'depends' and this throws away your suggestion.

Yes.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:41:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eCx-0003Zv-D9; Fri, 10 Jan 2014 15:41:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1eCv-0003Zq-St
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:41:14 +0000
Received: from [85.158.137.68:48711] by server-15.bemta-3.messagelabs.com id
	B3/92-11556-99410D25; Fri, 10 Jan 2014 15:41:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389368471!8396289!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1149 invoked from network); 10 Jan 2014 15:41:12 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 15:41:12 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AFf8wY023589
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 15:41:09 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AFf7D4005810
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 15:41:08 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AFf7vm029865; Fri, 10 Jan 2014 15:41:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 07:41:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id F108B1C18DC; Fri, 10 Jan 2014 10:41:05 -0500 (EST)
Date: Fri, 10 Jan 2014 10:41:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140110154105.GC20640@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110151048.GC30581@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > create ^
> > > owner Wei Liu <wei.liu2@citrix.com>
> > > thanks
> > > 
> > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > When I have following configuration in HVM config file:
> > > >   memory=128
> > > >   maxmem=256
> > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > 
> > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > 
> > > > With claim_mode=0, I can successfully create an HVM guest.
> > > 
> > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > 
> > No. 128MB actually.
> > 
> 
> Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> 8MB video ram). Did I misread your message...

The 'claim' here is the hypercall that sets the 'clamp' on how much memory
the guest can allocate. This is based on:

242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;

  /* try to claim pages for early warning of insufficient memory available */
337     if ( claim_enabled ) {
343         rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);

Your 'mem_size' is 128MB, cur_pages is 0xc0, so it ends up 'claiming'
that the guest only needs 128MB - 768kB.
> 
> On the hypervisor side d->tot_pages = 30688, d->max_pages = 33024 (128MB
> + 1MB slack). So the claim failed.

Correct.
> 
> > > that you only have 128-255M free is quite low, or are you
> > > autoballooning?)
> > 
> > This patch fixes it for me. It basically sets the amount of pages
> > claimed to be 'maxmem' instead of 'memory' for PoD.
> > 
> > I don't know PoD very well, and this claim is only valid during the
> > allocation of the guests memory - so the 'target_pages' value might be
> > the wrong one. However looking at the hypervisor's
> > 'p2m_pod_set_mem_target' I see this comment:
> > 
> >  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> >  317  *   entries.  The balloon driver will deflate the balloon to give back
> >  318  *   the remainder of the ram to the guest OS.
> > 
> > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > And then it is the responsibility of the balloon driver to give the memory
> > back (and this is where the 'static-max' et al come in play to tell the
> > balloon driver to balloon out).
> > 
> > 
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..65e9577 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >  
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = nr_pages - cur_pages;
> > +
> > +        if ( pod_mode )
> > +            nr = target_pages - 0x20;
> > +
> 
> Yes it should work because this makes nr smaller than d->tot_pages and
> d->max_pages. But according to the comment you pasted above this looks
> like wrong fix...

It should be: 

tot_pages = 128MB
max_pages = 256MB
nr = 256MB - 0x20.

So tot_pages < nr < max_pages, if I got my variables right.
Which means that 'nr' is greater than tot_pages but less than max_pages.

> 
> Wei.
> 
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> >          if ( rc != 0 )
> >          {
> >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:47:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eIq-0003l9-Ik; Fri, 10 Jan 2014 15:47:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1eIp-0003l4-CP
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:47:19 +0000
Received: from [85.158.139.211:13268] by server-13.bemta-5.messagelabs.com id
	B4/E9-11357-60610D25; Fri, 10 Jan 2014 15:47:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389368837!9051052!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29191 invoked from network); 10 Jan 2014 15:47:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 15:47:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 15:47:17 +0000
Message-Id: <52D0241102000078001127C5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 15:47:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
>> __hvm_copy() is probably too low to be thinking about this.  There are
>> many things such as grant_copy() which do not want "hardware like" copy
>> properties, preferring instead to have less overhead.
>> 
> 
> Yeah... I'll rework the patch to do this...

Looking a little more closely, hvm_copy_{to,from}_guest_virt()
are what you want to have the adjusted behavior. That way
you'd in particular not touch the behavior of the more generic
copying routines copy_{to,from}_user_hvm(). And adjusting
the behavior would seem to be doable cleanly by adding e.g.
HVMCOPY_atomic as a new flag, thus informing __hvm_copy()
to not use memcpy().

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:49:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eKK-0003sh-4U; Fri, 10 Jan 2014 15:48:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1eKH-0003sa-VC
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:48:50 +0000
Received: from [85.158.137.68:61499] by server-11.bemta-3.messagelabs.com id
	11/9E-19379-16610D25; Fri, 10 Jan 2014 15:48:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389368926!7266155!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26329 invoked from network); 10 Jan 2014 15:48:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:48:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91715068"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:48:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:48:45 -0500
Message-ID: <1389368924.6423.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Karim Raslan <karim.allah.ahmed@gmail.com>
Date: Fri, 10 Jan 2014 15:48:44 +0000
In-Reply-To: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: tim@xen.org, julien.grall@linaro.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
 mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
> Create multiple banks to hold dom0 memory in case of 1:1 mapping
> if we failed to find 1 large contiguous memory to hold the whole
> thing.

Thanks. While with my ARM maintainer hat on I'd love for this to go in
for 4.4, with my acting release manager hat on I have to be honest:
this is too big a change for 4.4 at this stage, which is a
pity :-(


> 
> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> ---
>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>  2 files changed, 121 insertions(+), 39 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index faff88e..bb44cdd 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>  {
>      paddr_t start;
>      paddr_t size;
> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>      struct page_info *pg = NULL;
> -    unsigned int order = get_order_from_bytes(dom0_mem);
> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>      int res;
>      paddr_t spfn;
>      unsigned int bits;
>  
> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> +#define MIN_BANK_ORDER 10

2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 2^12 is the lowest allocation
size which can be made, but I think in practice the lowest useful bank
size is going to be somewhat larger than that.

NR_MEM_BANKS is 8 so we also need to consider that.

A 64M dom0 would mean 8M per bank, which seems like a reasonable minimum
bank size. That would be a MIN_BANK_ORDER of 23. Please include a
comment explaining where this number comes from.

The other way to look at this would be to calculate it dynamically as
get_order_from_bytes(dom0_mem / NR_MEM_BANKS).

> +
> +    kinfo->mem.nr_banks = 0;
> +
> +    /*
> +     * We always first try to allocate all dom0 memory in 1 bank.
> +     * However, if we failed to allocate physically contiguous memory
> +     * from the allocator, then we try to create more than one bank.
> +     */
> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)

I think this can be just 
	for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
(maybe order >= ?) or
	while (order > MIN_BANK_ORDER )
	{
		...
		order--;
	}
I think the first works better.

This does away with the need for cur_order vs order. I think order is
mostly unused after this patch, also not renaming cur_order would
hopefully reduce the diff and therefore the "scariness" of the patch wrt
4.4 (although that may not be sufficient).

>      {
> -        pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
> -        if ( pg != NULL )
> -            break;
> +        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )

Is cur_bank redundant with index? Also kinfo->mem.nr_banks tells you how
many banks are filled in.

> +        {
> +            for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> +			{

There's something a bit odd going on with the whitespace here and in the
rest of this loop. Perhaps some hard tabs snuck in?

> +				pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
> +				if ( pg != NULL )
> +					break;
> +			}
> +
> +			if ( !pg )
> +				break;
> +
> +			spfn = page_to_mfn(pg);
> +			start = pfn_to_paddr(spfn);
> +			size = pfn_to_paddr((1 << cur_order));
> +
> +		    kinfo->mem.bank[index].start = start;
> +		    kinfo->mem.bank[index].size = size;
> +		    index++;
> +		    kinfo->mem.nr_banks++;
> +    	}
> +
> +    	if( pg )
> +    		break;
> +
> +    	nr_banks = (nr_banks - cur_bank + 1) << 1;

<<1 ? 

Is this not just kinfo->mem.nr_banks?

The basic structure here is:

for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
                for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
                
Shouldn't the iteration over bank be the outer one?

The banks might be different sizes, right?

Also with either approach, depending on where memory is available,
this may result in the memory not being allocated contiguously and/or
the banks not being in increasing order (in fact, because Xen prefers
to allocate higher memory first it seems likely that they will be in
reverse order).

I don't know if either of those things matter. What does ePAPR have to
say on the matter?

I'd expect that the ordering might matter from the point of view of
putting the kernel in the first bank -- since that may no longer be the
lowest address.

You don't seem to reference kinfo->unassigned_mem anywhere after the
initial order calculation -- I think you need to subtract memory from it
on each iteration, or else I'm not sure you will actually get the right
amount allocated in all cases.

>      }
>  
> -    if ( !pg )
> -        panic("Failed to allocate contiguous memory for dom0");
> +	if ( !pg )
> +		panic("Failed to allocate contiguous memory for dom0");
>  
> -    spfn = page_to_mfn(pg);
> -    start = pfn_to_paddr(spfn);
> -    size = pfn_to_paddr((1 << order));
> +	for ( index = 0; index < kinfo->mem.nr_banks; index++ )
> +	{
> +	    start = kinfo->mem.bank[index].start;
> +	    size = kinfo->mem.bank[index].size;
> +	    spfn = paddr_to_pfn(start);
> +	    order = get_order_from_bytes(size);
>  
> -    // 1:1 mapping
> -    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0)\n",
> -           start, start + size);
> -    res = guest_physmap_add_page(d, spfn, spfn, order);
> -
> -    if ( res )
> -        panic("Unable to add pages in DOM0: %d", res);
> +	    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0 - order : %i)\n",
> +	            start, start + size, order);
> +	    res = guest_physmap_add_page(d, spfn, spfn, order);

Can this not be done as it is allocated rather than in a second pass?

>  
> -    kinfo->mem.bank[0].start = start;
> -    kinfo->mem.bank[0].size = size;
> -    kinfo->mem.nr_banks = 1;
> +	    if ( res )
> +	        panic("Unable to add pages in DOM0: %d", res);
>  
> -    kinfo->unassigned_mem -= size;
> +	    kinfo->unassigned_mem -= size;
> +	}
>  }
>  
>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 6a5772b..658c3de 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>      const paddr_t total = initrd_len + dtb_len;
>  
>      /* Convenient */

If you are going to remove all of the following convenience variables
then this comment is no longer correct.

(Convenient here means a shorter local name for something complex)

> -    const paddr_t mem_start = info->mem.bank[0].start;
> -    const paddr_t mem_size = info->mem.bank[0].size;
> -    const paddr_t mem_end = mem_start + mem_size;
> -    const paddr_t kernel_size = kernel_end - kernel_start;
> +    unsigned int i, min_i = -1;
> +    bool_t same_bank = false;
> +
> +    paddr_t mem_start, mem_end, mem_size;
> +    paddr_t kernel_size;
>  
>      paddr_t addr;
>  
> -    if ( total + kernel_size > mem_size )
> -        panic("Not enough memory in the first bank for the dtb+initrd");
> +    kernel_size = kernel_end - kernel_start;
> +
> +    for ( i = 0; i < info->mem.nr_banks; i++ )
> +    {
> +        mem_start = info->mem.bank[i].start;
> +        mem_size = info->mem.bank[i].size;
> +        mem_end = mem_start + mem_size - 1;
> +
> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
> +            same_bank = true;
> +        else
> +            same_bank = false;
> +
> +        if ( same_bank && ((total + kernel_size) < mem_size) )
> +            min_i = i;
> +        else if ( (!same_bank) && (total < mem_size) )
> +            min_i = i;
> +        else
> +            continue;

What is all this doing?

> +
> +        break;
> +    }
> +
> +    if ( min_i == -1 )
> +        panic("Not enough memory for the dtb+initrd");
> +
> +    mem_start = info->mem.bank[min_i].start;
> +    mem_size = info->mem.bank[min_i].size;
> +    mem_end = mem_start + mem_size;
>  
>      /*
>       * DTB must be loaded such that it does not conflict with the
> @@ -104,17 +132,25 @@ static void place_modules(struct kernel_info *info,
>       * just after the kernel, if there is room, otherwise just before.
>       */
>  
> -    if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
> -        addr = MIN(mem_start + MB(128), mem_end - total);
From xen-devel-bounces@lists.xen.org Fri Jan 10 15:49:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eKK-0003sh-4U; Fri, 10 Jan 2014 15:48:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1eKH-0003sa-VC
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:48:50 +0000
Received: from [85.158.137.68:61499] by server-11.bemta-3.messagelabs.com id
	11/9E-19379-16610D25; Fri, 10 Jan 2014 15:48:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389368926!7266155!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26329 invoked from network); 10 Jan 2014 15:48:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:48:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91715068"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:48:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:48:45 -0500
Message-ID: <1389368924.6423.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Karim Raslan <karim.allah.ahmed@gmail.com>
Date: Fri, 10 Jan 2014 15:48:44 +0000
In-Reply-To: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: tim@xen.org, julien.grall@linaro.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
 mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
> Create multiple banks to hold dom0 memory in case of 1:1 mapping
> if we failed to find 1 large contiguous memory to hold the whole
> thing.

Thanks. While with my ARM maintainer hat on I'd love for this to go in
for 4.4, with my acting release manager hat on I have to be honest:
this is too big a change for 4.4 at this stage, which is a
pity :-(


> 
> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> ---
>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>  2 files changed, 121 insertions(+), 39 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index faff88e..bb44cdd 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>  {
>      paddr_t start;
>      paddr_t size;
> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>      struct page_info *pg = NULL;
> -    unsigned int order = get_order_from_bytes(dom0_mem);
> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>      int res;
>      paddr_t spfn;
>      unsigned int bits;
>  
> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> +#define MIN_BANK_ORDER 10

2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 2^12 is the smallest allocation
which can be made, but I think in practice the smallest useful bank size
is going to be somewhat larger than that.

NR_MEM_BANKS is 8 so we also need to consider that.

A 64M dom0 would mean 8M per bank, which seems like a reasonable minimum
bank size. That would be a MIN_BANK_ORDER of 23. Please include a
comment explaining where this number comes from.

The other way to look at this would be to calculate it dynamically as
get_order_from_bytes(dom0_mem / NR_MEM_BANKS).

> +
> +    kinfo->mem.nr_banks = 0;
> +
> +    /*
> +     * We always first try to allocate all dom0 memory in 1 bank.
> +     * However, if we failed to allocate physically contiguous memory
> +     * from the allocator, then we try to create more than one bank.
> +     */
> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)

I think this can be just 
	for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
(maybe order >= ?) or
	while (order > MIN_BANK_ORDER )
	{
		...
		order--;
	}
I think the first works better.

This does away with the need for cur_order vs order. I think order is
mostly unused after this patch anyway; also, sticking with order rather
than renaming it to cur_order would hopefully reduce the diff and
therefore the "scariness" of the patch wrt 4.4 (although that may not
be sufficient).

>      {
> -        pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
> -        if ( pg != NULL )
> -            break;
> +        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )

Is cur_bank redundant with index? Also kinfo->mem.nr_banks tells you how
many banks are filled in.

> +        {
> +            for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> +			{

There's something a bit odd going on with the whitespace here and in the
rest of this loop. Perhaps some hard tabs snuck in?

> +				pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
> +				if ( pg != NULL )
> +					break;
> +			}
> +
> +			if ( !pg )
> +				break;
> +
> +			spfn = page_to_mfn(pg);
> +			start = pfn_to_paddr(spfn);
> +			size = pfn_to_paddr((1 << cur_order));
> +
> +		    kinfo->mem.bank[index].start = start;
> +		    kinfo->mem.bank[index].size = size;
> +		    index++;
> +		    kinfo->mem.nr_banks++;
> +    	}
> +
> +    	if( pg )
> +    		break;
> +
> +    	nr_banks = (nr_banks - cur_bank + 1) << 1;

<<1 ? 

Is this not just kinfo->mem.nr_banks?

The basic structure here is:

for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
                for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
                
Shouldn't the iteration over bank be the outer one?

The banks might be different sizes, right?

Also, with either approach, depending on where memory is available this
may result in the banks not being contiguous and/or not being in
increasing address order (in fact, because Xen prefers to allocate
higher memory first, it seems likely that they will end up in reverse
order).

I don't know if either of those things matter. What does ePAPR have to
say on the matter?

I'd expect that the ordering might matter from the point of view of
putting the kernel in the first bank -- since that may no longer be the
lowest address.

You don't seem to reference kinfo->unassigned_mem anywhere after the
initial order calculation -- I think you need to subtract memory from it
on each iteration, or else I'm not sure you will actually get the right
amount allocated in all cases.

>      }
>  
> -    if ( !pg )
> -        panic("Failed to allocate contiguous memory for dom0");
> +	if ( !pg )
> +		panic("Failed to allocate contiguous memory for dom0");
>  
> -    spfn = page_to_mfn(pg);
> -    start = pfn_to_paddr(spfn);
> -    size = pfn_to_paddr((1 << order));
> +	for ( index = 0; index < kinfo->mem.nr_banks; index++ )
> +	{
> +	    start = kinfo->mem.bank[index].start;
> +	    size = kinfo->mem.bank[index].size;
> +	    spfn = paddr_to_pfn(start);
> +	    order = get_order_from_bytes(size);
>  
> -    // 1:1 mapping
> -    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0)\n",
> -           start, start + size);
> -    res = guest_physmap_add_page(d, spfn, spfn, order);
> -
> -    if ( res )
> -        panic("Unable to add pages in DOM0: %d", res);
> +	    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0 - order : %i)\n",
> +	            start, start + size, order);
> +	    res = guest_physmap_add_page(d, spfn, spfn, order);

Can this not be done as it is allocated rather than in a second pass?

>  
> -    kinfo->mem.bank[0].start = start;
> -    kinfo->mem.bank[0].size = size;
> -    kinfo->mem.nr_banks = 1;
> +	    if ( res )
> +	        panic("Unable to add pages in DOM0: %d", res);
>  
> -    kinfo->unassigned_mem -= size;
> +	    kinfo->unassigned_mem -= size;
> +	}
>  }
>  
>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 6a5772b..658c3de 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>      const paddr_t total = initrd_len + dtb_len;
>  
>      /* Convenient */

If you are going to remove all of the following convenience variables
then this comment is no longer correct.

(Convenient here means a shorter local name for something complex)

> -    const paddr_t mem_start = info->mem.bank[0].start;
> -    const paddr_t mem_size = info->mem.bank[0].size;
> -    const paddr_t mem_end = mem_start + mem_size;
> -    const paddr_t kernel_size = kernel_end - kernel_start;
> +    unsigned int i, min_i = -1;
> +    bool_t same_bank = false;
> +
> +    paddr_t mem_start, mem_end, mem_size;
> +    paddr_t kernel_size;
>  
>      paddr_t addr;
>  
> -    if ( total + kernel_size > mem_size )
> -        panic("Not enough memory in the first bank for the dtb+initrd");
> +    kernel_size = kernel_end - kernel_start;
> +
> +    for ( i = 0; i < info->mem.nr_banks; i++ )
> +    {
> +        mem_start = info->mem.bank[i].start;
> +        mem_size = info->mem.bank[i].size;
> +        mem_end = mem_start + mem_size - 1;
> +
> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
> +            same_bank = true;
> +        else
> +            same_bank = false;
> +
> +        if ( same_bank && ((total + kernel_size) < mem_size) )
> +            min_i = i;
> +        else if ( (!same_bank) && (total < mem_size) )
> +            min_i = i;
> +        else
> +            continue;

What is all this doing?

> +
> +        break;
> +    }
> +
> +    if ( min_i == -1 )
> +        panic("Not enough memory for the dtb+initrd");
> +
> +    mem_start = info->mem.bank[min_i].start;
> +    mem_size = info->mem.bank[min_i].size;
> +    mem_end = mem_start + mem_size;
>  
>      /*
>       * DTB must be loaded such that it does not conflict with the
> @@ -104,17 +132,25 @@ static void place_modules(struct kernel_info *info,
>       * just after the kernel, if there is room, otherwise just before.
>       */
>  
> -    if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
> -        addr = MIN(mem_start + MB(128), mem_end - total);
> -    else if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
> -        addr = ROUNDUP(kernel_end, MB(2));
> -    else if ( kernel_start - mem_start >= total )
> -        addr = kernel_start - total;
> -    else
> +    if ( same_bank )
>      {
> -        panic("Unable to find suitable location for dtb+initrd");
> -        return;
> -    }
> +        if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
> +            addr = MIN(mem_start + MB(128), mem_end - total);
> +        if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
> +            addr = ROUNDUP(kernel_end, MB(2));
> +        else if ( kernel_start - mem_start >= total )
> +            addr = kernel_start - total;
> +        else
> +        {
> +            /*
> +             * We should never hit this condition because we've already
> +             * done the check while choosing the bank.
> +             */
> +            panic("Unable to find suitable location for dtb+initrd");
> +            return;
> +        }
> +    } else
> +        addr = ROUNDUP(mem_end - total, MB(2));
>  
>      info->dtb_paddr = addr;
>      info->initrd_paddr = info->dtb_paddr + dtb_len;
> @@ -264,10 +300,24 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
>       */
>      if (start == 0)
>      {
> +        unsigned int i, min_i = 0, min_start = -1;
>          paddr_t load_end;
>  
> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
> +        /*
> +         * Load kernel at the lowest possible bank
> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )

That commit says nothing about loading in the lowest possible bank,
though. If there is some relevant factor which is worth commenting on
please do so directly.

Actually now that the kernel is fixed upstream we don't need the
behaviour of that commit at all. Although there are still restrictions
based on load address vs start of RAM (See booting.txt in the kernel
docs)

Ian.

> +         */
> +        for ( i = 0; i < info->mem.nr_banks; i++ )
> +        {
> +            if( (unsigned int)info->mem.bank[i].start < min_start )
> +            {
> +                min_start = info->mem.bank[i].start;
> +                min_i = i;
> +            }
> +        }
> +
> +        load_end = info->mem.bank[min_i].start + info->mem.bank[min_i].size;
> +        load_end = MIN(info->mem.bank[min_i].start + MB(128), load_end);
>  
>          info->zimage.load_addr = load_end - end;
>          /* Align to 2MB */




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eNg-0004fr-AW; Fri, 10 Jan 2014 15:52:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1eNf-0004eb-BO
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:52:19 +0000
Received: from [85.158.143.35:46048] by server-1.bemta-4.messagelabs.com id
	8B/6F-02132-23710D25; Fri, 10 Jan 2014 15:52:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389369136!10943089!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23541 invoked from network); 10 Jan 2014 15:52:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:52:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="89579543"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 15:52:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:52:16 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1eJz-0002W2-Pf;
	Fri, 10 Jan 2014 15:48:31 +0000
Date: Fri, 10 Jan 2014 15:48:31 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110154831.GE30581@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110154105.GC20640@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 10:41:05AM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> > On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > create ^
> > > > owner Wei Liu <wei.liu2@citrix.com>
> > > > thanks
> > > > 
> > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > When I have following configuration in HVM config file:
> > > > >   memory=128
> > > > >   maxmem=256
> > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > 
> > > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > > 
> > > > > With claim_mode=0, I can successfully create HVM guest.
> > > > 
> > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > 
> > > No. 128MB actually.
> > > 
> > 
> > Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> > 8MB video ram). Did I misread your message...
> 
> The 'claim' being the hypercall to set the 'clamp' on how much memory
> the guest can allocate. This is based on:
> 
> 242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;
> 
>   /* try to claim pages for early warning of insufficient memory available */
> 337     if ( claim_enabled ) {
> 343         rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> 
> Your 'mem_size' is 128MB, cur_pages is 0xc0, so it ends up 'claiming'
> that the guest only needs 128MB - 768kB.

No, the nr_pages I saw was 63296 (248MB - 768KB) -- I printed it out.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eNc-0004dg-UD; Fri, 10 Jan 2014 15:52:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1eNc-0004c2-6e
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:52:16 +0000
Received: from [85.158.139.211:8270] by server-1.bemta-5.messagelabs.com id
	71/77-21065-F2710D25; Fri, 10 Jan 2014 15:52:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389369133!8875738!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1218 invoked from network); 10 Jan 2014 15:52:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 15:52:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 15:52:13 +0000
Message-Id: <52D0253802000078001127F3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 15:52:08 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
In-Reply-To: <20140110154105.GC20640@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 16:41, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
>> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
>> > --- a/tools/libxc/xc_hvm_build_x86.c
>> > +++ b/tools/libxc/xc_hvm_build_x86.c
>> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>> >  
>> >      /* try to claim pages for early warning of insufficient memory available */
>> >      if ( claim_enabled ) {
>> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> > +        unsigned long nr = nr_pages - cur_pages;
>> > +
>> > +        if ( pod_mode )
>> > +            nr = target_pages - 0x20;
>> > +
>> 
>> Yes it should work because this makes nr smaller than d->tot_pages and
>> d->max_pages. But according to the comment you pasted above this looks
>> like wrong fix...
> 
> It should be: 
> 
> tot_pages = 128MB
> max_pages = 256MB
> nr = 256MB - 0x20.
> 
> So tot_pages < max_pages > nr && nr > tot_pages
> 
> If I got my variables right.
> Which means that 'nr' is greater than tot_pages but less than max_pages.

But that seems conceptually wrong: As was said before, the guest
should only get 128Mb allocated, hence it would be wrong to claim
almost 256Mb for it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> > +        unsigned long nr = nr_pages - cur_pages;
>> > +
>> > +        if ( pod_mode )
>> > +            nr = target_pages - 0x20;
>> > +
>> 
>> Yes it should work because this makes nr smaller than d->tot_pages and
>> d->max_pages. But according to the comment you pasted above this looks
>> like wrong fix...
> 
> It should be: 
> 
> tot_pages = 128MB
> max_pages = 256MB
> nr = 256MB - 0x20.
> 
> So tot_pages < max_pages > nr && nr > tot_pages
> 
> If I got my variables right.
> Which means that 'nr' is greater than tot_pages but less than max_pages.

But that seems conceptually wrong: as was said before, the guest
should only get 128MB allocated, hence it would be wrong to claim
almost 256MB for it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:53:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eON-0004o6-QJ; Fri, 10 Jan 2014 15:53:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1eOM-0004nk-Bc
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:53:02 +0000
Received: from [85.158.143.35:57345] by server-2.bemta-4.messagelabs.com id
	F9/01-11386-D5710D25; Fri, 10 Jan 2014 15:53:01 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389369179!9702675!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8343 invoked from network); 10 Jan 2014 15:53:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:53:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91716630"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:52:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:52:59 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1eOI-0002b6-PO;
	Fri, 10 Jan 2014 15:52:58 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>, QEMU-devel <qemu-devel@nongnu.org>
Date: Fri, 10 Jan 2014 15:52:54 +0000
Message-ID: <1389369174-28096-1-git-send-email-anthony.perard@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: [Xen-devel] [PATCH] xen_pt: Fix debug output.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/xen/xen_pt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index d58cb61..eee4354 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -420,8 +420,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
                               "xen-pci-pt-bar", r->size);
         pci_register_bar(&s->dev, i, type, &s->bar[i]);
 
-        XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%lx"PRIx64
-                   " base_addr=0x%lx"PRIx64" type: %#x)\n",
+        XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%08"PRIx64
+                   " base_addr=0x%08"PRIx64" type: %#x)\n",
                    i, r->size, r->base_addr, type);
     }
 
-- 
Anthony PERARD


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:56:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eRX-00054r-HI; Fri, 10 Jan 2014 15:56:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1eRW-00054m-Av
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:56:18 +0000
Received: from [85.158.139.211:11213] by server-16.bemta-5.messagelabs.com id
	18/11-11843-12810D25; Fri, 10 Jan 2014 15:56:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389369375!9023749!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6007 invoked from network); 10 Jan 2014 15:56:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:56:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91717682"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:56:14 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:56:14 -0500
Message-ID: <1389369373.6423.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Jan 2014 15:56:13 +0000
In-Reply-To: <20140110152840.GA20385@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 10:28 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> > On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > > And then it is the responsibility of the balloon driver to give the memory
> > > back (and this is where the 'static-max' et al come in play to tell the
> > > balloon driver to balloon out).
> > 
> > PoD exists purely so that we don't need the 'maxmem' amount of memory at
> > boot time. It is basically there in order to let the guest get booted
> > far enough to load the balloon driver to give the memory back.
> > 
> > It's basically a boot time zero-page sharing mechanism AIUI.
> 
> But it does look to gulp up hypervisor memory and return it during
> allocation of memory for the guest.

It should be less than the maxmem-memory amount though. Perhaps because
Wei is using relatively small sizes the pod cache ends up being a
similar size to the saving?

Or maybe I just don't understand PoD, since the code you quote does seem
to contradict that.

Or maybe libxl's calculation of pod_target is wrong?

> From reading the code the patch seems correct - we will _need_ that
> extra 128MB 'claim' to allocate/free the 128MB extra pages. They
> are temporary as we do free them.

It does make sense that the PoD cache should be included in the claim,
I just don't get why the cache is so big...

> > > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > > index 77bd365..65e9577 100644
> > > --- a/tools/libxc/xc_hvm_build_x86.c
> > > +++ b/tools/libxc/xc_hvm_build_x86.c
> > > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> > >  
> > >      /* try to claim pages for early warning of insufficient memory available */
> > >      if ( claim_enabled ) {
> > > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > > +        unsigned long nr = nr_pages - cur_pages;
> > > +
> > > +        if ( pod_mode )
> > > +            nr = target_pages - 0x20;
> > 
> > 0x20?
> 
> Yup. From earlier:
> 
> 305     if ( pod_mode )
> 306     {
> 307         /*
> 308          * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> 309          * adjust the PoD cache size so that domain tot_pages will be
> 310          * target_pages - 0x20 after this call.
> 311          */
> 312         rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> 313                                       NULL, NULL, NULL);
> 314         if ( rc != 0 )
> 315         {
> 316             PERROR("Could not set PoD target for HVM guest.\n");
> 317             goto error_out;
> 318         }
> 319     }
> 
> Maybe a nice little 'pod_delta' or 'pod_pages' should be used instead of copying
> this around.

A #define or something might be nice, yes.

> 
> > 
> > > +
> > > +        rc = xc_domain_claim_pages(xch, dom, nr);
> > >          if ( rc != 0 )
> > >          {
> > >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> > 
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:56:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eS2-00059s-5X; Fri, 10 Jan 2014 15:56:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1eS0-00059U-T8
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 15:56:49 +0000
Received: from [85.158.143.35:18634] by server-3.bemta-4.messagelabs.com id
	87/64-32360-04810D25; Fri, 10 Jan 2014 15:56:48 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389369406!10986027!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20823 invoked from network); 10 Jan 2014 15:56:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:56:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91717814"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:56:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 10:56:45 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1eRx-0002e8-7L;
	Fri, 10 Jan 2014 15:56:45 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>, QEMU-devel <qemu-devel@nongnu.org>
Date: Fri, 10 Jan 2014 15:56:33 +0000
Message-ID: <1389369393-28654-1-git-send-email-anthony.perard@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: [Xen-devel] [PATCH] xen_pt: Fix passthrough of device with ROM.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QEMU does not need to, and should not, allocate memory for the ROM of a
passthrough PCI device. This patch therefore initializes the ROM region
like any other PCI BAR of a passthrough device.

When a guest accesses the ROM, Xen takes care of the IO; QEMU is not
involved.

Xen sets a limit on the memory available to each guest, and allocating
memory for a ROM can hit this limit.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 hw/xen/xen_pt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index eee4354..be4220b 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
 
         s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
 
-        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
-                                      "xen-pci-pt-rom", d->rom.size);
+        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
+                              "xen-pci-pt-rom", d->rom.size);
         pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
                          &s->rom);
 
-- 
Anthony PERARD


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:57:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eSn-0005Ha-Kk; Fri, 10 Jan 2014 15:57:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1eSl-0005HJ-Kb
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:57:35 +0000
Received: from [85.158.143.35:24411] by server-2.bemta-4.messagelabs.com id
	D3/B7-11386-E6810D25; Fri, 10 Jan 2014 15:57:34 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389369453!10842290!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16268 invoked from network); 10 Jan 2014 15:57:33 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 15:57:33 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53709 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1eHl-0004OO-Tu; Fri, 10 Jan 2014 16:46:14 +0100
Date: Fri, 10 Jan 2014 16:57:29 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1087166993.20140110165729@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110151218.GA20152@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 4:12:18 PM, you wrote:

> On Fri, Jan 10, 2014 at 03:51:57PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> Normally I never reattach PCI devices to dom0, but at the moment I have a use for it.
>> 
>> But it seems pci-detach isn't completely detaching the device from the guest.
>> 
>> - Say I have an HVM guest with domid=2 and a PCI device passed through with BDF 00:19.0; the device is hidden at boot with xen-pciback.hide=(00:19.0) in grub.
>> 
>> - Now I do an "xl pci-assignable-list".
>>   This returns nothing, which is correct, since all hidden devices have already been assigned to guests.
>> 
>> - Then I do "xl -v pci-detach 2 00:19.0",
>>   which also returns nothing ...
>> 
>> - Now I do an "xl pci-assignable-list" again ..
>>   This returns:
>>   "0000:00:19.0"
>>   So the pci-detach does seem to have done *something* :-)

> Or it thinks it has :-)

Well it has .. but probably not enough ;-)

>> 
>> - But when now trying to remove the device from pciback back to dom0 with "xl pci-assignable-remove 00:19.0", it gives an error,
>>   and later it gives some stack traces ..
>> 
>>   xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>>   xen_pciback: ****** driver domain may still access this device's i/o resources!
>>   xen_pciback: ****** shutdown driver domain before binding device
>>   xen_pciback: ****** to other drivers or domains

> What about /var/log/xen/qemu-dm* and the 'lspci' in the guest? Is the PCI device
> removed from there?

Oeh, I should have thought of that ...

In the guest I get an "e1000e 0000:00:06.0 removed PHC" and it's gone from lspci ..
In /var/log/xen/qemu-dm* I get nothing .. but I was using qemu-xen .. which is totally non-verbose ..

So let's try with qemu-xen-traditional .. which I also forgot to test ...

Which gives exactly the same error/warning as above, but it has some output in /var/log/xen/qemu-dm*:

pt_msgctrl_reg_write: setup msi for dev 30
pt_msi_setup: pt_msi_setup requested pirq = 54
pt_msi_setup: msi mapped with pirq 36
pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3036
pt_msgctrl_reg_write: setup msi for dev 28
pt_msi_setup: pt_msi_setup requested pirq = 53
pt_msi_setup: msi mapped with pirq 35
pt_msi_update: Update msi with pirq 35 gvec 0 gflags 3035
pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3034
dm-command: hot remove pass-through pci dev
generate a sci for PHP.
deassert due to disable GPE bit.
ACPI:debug: write addr=0xb044, val=0x30.
ACPI:debug: write addr=0xb045, val=0x3.
ACPI:debug: write addr=0xb044, val=0x30.
ACPI:debug: write addr=0xb045, val=0x88.
ACPI PCI hotplug: write devfn=0x30.
pci_intx: intx=1
pci_intx: intx=1
pt_msi_disable: Unbind msi with pirq 36, gvec 0
pt_msi_disable: Unmap msi with pirq 36



Also worth mentioning: the console on which the "xl pci-assignable-remove 00:19.0" command is given keeps hanging, and eventually the hung-task stack trace appears.

>> 
>> 
>> When I shut the guest down instead of using pci-detach, the "xl pci-assignable-remove" works fine and I can rebind the device to its driver in dom0.
>> 
>> So am I misreading the wiki .. and is it not possible to detach a device from a running domain, or ... ?
>> 
>> Oh yes, running xen-unstable and a 3.13-rc7 kernel.

> Do you see the same issue with 'xend'?

Erhmmm, haven't used that for what seems to be ages .. :-)

Hmm, I also forgot the hung-task stack trace I get some time after the "xl pci-assignable-remove 00:19.0" ...

It seems to be stuck in pci_reset_function ...

[   52.099144] xen_bridge: port 4(vif2.0-emu) entered forwarding state
[   55.683141] xen_bridge: port 1(vif1.0) entered forwarding state
[   59.861385] xen-blkback:ring-ref 8, event-channel 22, protocol 1 (x86_64-abi) persistent grants
[   66.043965] xen_bridge: port 3(vif2.0) entered forwarding state
[   66.044549] xen_bridge: port 3(vif2.0) entered forwarding state
[   81.091149] xen_bridge: port 3(vif2.0) entered forwarding state
[  227.441191] xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
[  227.443482] xen_pciback: ****** driver domain may still access this device's i/o resources!
[  227.445811] xen_pciback: ****** shutdown driver domain before binding device
[  227.447811] xen_pciback: ****** to other drivers or domains
[  368.859343] INFO: task xl:3675 blocked for more than 120 seconds.
[  368.860447]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
[  368.860990] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  368.861682] xl              D ffff88003fd93f00     0  3675   3489 0x00000000
[  368.862319]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
[  368.863035]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
[  368.863802]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
[  368.864514] Call Trace:
[  368.864744]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
[  368.865273]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
[  368.865851]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
[  368.866409]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
[  368.866892]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
[  368.867430]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
[  368.867996]  [<ffffffff818e7238>] ? down_write+0x9/0x26
[  368.868467]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
[  368.868991]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
[  368.869506]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
[  368.870017]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
[  368.870593]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
[  368.871152]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
[  368.871659]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
[  368.872173]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
[  368.872641]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
[  368.873087]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
[  488.871331] INFO: task xl:3675 blocked for more than 120 seconds.
[  488.913929]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
[  488.937031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  488.960945] xl              D ffff88003fd93f00     0  3675   3489 0x00000004
[  488.986090]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
[  489.010383]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
[  489.034456]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
[  489.058621] Call Trace:
[  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
[  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
[  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
[  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
[  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
[  489.200927]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
[  489.224076]  [<ffffffff818e7238>] ? down_write+0x9/0x26
[  489.246898]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
[  489.270086]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
[  489.293053]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
[  489.316068]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
[  489.338896]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
[  489.362459]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
[  489.385396]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
[  489.408605]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
[  489.431407]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
[  489.454251]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b


>> 
>> --
>> Sander
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 15:57:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 15:57:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eT3-0005Kp-3y; Fri, 10 Jan 2014 15:57:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <simon.graham@citrix.com>) id 1W1eT2-0005KW-E0
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 15:57:52 +0000
Received: from [85.158.143.35:56248] by server-2.bemta-4.messagelabs.com id
	59/18-11386-F7810D25; Fri, 10 Jan 2014 15:57:51 +0000
X-Env-Sender: simon.graham@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389369470!8314584!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31637 invoked from network); 10 Jan 2014 15:57:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 15:57:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,638,1384300800"; d="scan'208";a="91718109"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 15:57:49 +0000
Received: from FTLPEX01CL03.citrite.net ([169.254.1.150]) by
	FTLPEX01CL01.citrite.net ([10.13.107.78]) with mapi id 14.02.0342.004;
	Fri, 10 Jan 2014 10:57:49 -0500
From: Simon Graham <simon.graham@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] Possible issue with x86_emulate when writing
	results back to memory
Thread-Index: Ac711YmNMUOUVnlMTTS29ThyibIwTgXqECOAAApgbkD//7MDgIAAUXVAgAE3N4CAAFH2kA==
Date: Fri, 10 Jan 2014 15:57:49 +0000
Message-ID: <31EF1F85386F3941A65C4C158E12835D195EA773@FTLPEX01CL03.citrite.net>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
	<52D0241102000078001127C5@nat28.tlf.novell.com>
In-Reply-To: <52D0241102000078001127C5@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.204.248.101]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
> >> __hvm_copy() is probably too low to be thinking about this.  There are
> >> many things such as grant_copy() which do not want "hardware like" copy
> >> properties, preferring instead to have less overhead.
> >>
> >
> > Yeah... I'll rework the patch to do this...
> 
> Looking a little more closely, hvm_copy_{to,from}_guest_virt()
> are what you want to have the adjusted behavior. That way
> you'd in particular not touch the behavior of the more generic
> copying routines copy_{to,from}_user_hvm(). And adjusting
> the behavior would seem to be doable cleanly by adding e.g.
> HVMCOPY_atomic as a new flag, thus informing __hvm_copy()
> to not use memcpy().
> 

Thanks -- I was slowly coming to the same conclusion too! Although I think I might call it HVMCOPY_emulate rather than HVMCOPY_atomic, since it's not the case that the entire copy is atomic...

Now I just have to figure out why the shadow code doesn't use the HVM copy_to_ routine...

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:01:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eWm-0006hx-U5; Fri, 10 Jan 2014 16:01:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <f.soltani298@gmail.com>) id 1W1eWk-0006ho-Sj
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:01:43 +0000
Received: from [85.158.143.35:59269] by server-1.bemta-4.messagelabs.com id
	9E/FE-02132-66910D25; Fri, 10 Jan 2014 16:01:42 +0000
X-Env-Sender: f.soltani298@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389369700!11005540!1
X-Originating-IP: [209.85.223.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15706 invoked from network); 10 Jan 2014 16:01:41 -0000
Received: from mail-ie0-f176.google.com (HELO mail-ie0-f176.google.com)
	(209.85.223.176)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:01:41 -0000
Received: by mail-ie0-f176.google.com with SMTP id at1so5342816iec.35
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 08:01:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=NNHkeEaP6T/0Covqw0F395WDgTHcTtdQ2G2kfFP1I4E=;
	b=FAt6DwEcLHO/gcbETR8HkT0F9eIX1pprG6CvkxjqyKnDJIgrN/wjcRi567o0RsiCEW
	KNmqxF4meD1GR0Zptb7OLAjgpz9FozMTzu5QVEnM6IO03s+NZLz0QwaRkdkA+a/LUiy0
	ts106ICwTTC3JEbXK8JwmICVLkHr2TlKSjBUmJOI+vGWNu0KTluG/p+r7H8nEaqDcb+I
	b9i+fFtwsQF4h3oPvOuS9oyP9mOK/PahFFwpSeraHaz12aOiN5BfOtfqUnA46qmYw3u9
	RYjrjolsWNHqAbN5ZOfO+s4sZWYCvsdEPXdHm8hM27ISCzUVlkl7hGsLQQt7Wyr793Ym
	Achw==
MIME-Version: 1.0
X-Received: by 10.42.66.134 with SMTP id p6mr1738754ici.85.1389369699675; Fri,
	10 Jan 2014 08:01:39 -0800 (PST)
Received: by 10.64.69.40 with HTTP; Fri, 10 Jan 2014 08:01:39 -0800 (PST)
In-Reply-To: <1389367453.19142.70.camel@kazak.uk.xensource.com>
References: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
	<1389367453.19142.70.camel@kazak.uk.xensource.com>
Date: Fri, 10 Jan 2014 19:31:39 +0330
Message-ID: <CAKLxbwKkKGyE+OwdScm4X3wLS6PoOnmhHr6kKrnT7wYWbhOAtA@mail.gmail.com>
From: fahimeh soltaninejad <f.soltani298@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Problem with enabling vt(virtualization),
 xen does not boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The problem goes away when I disable VT. What should I do?

On 1/10/14, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-10 at 18:51 +0330, fahimeh soltaninejad wrote:
>> Hello,
>> I have a problem: when I enable VT in the BIOS of my system, I cannot
>> boot into my Fedora system with the Xen hypervisor. In fact, when VT
>> is enabled and I want to boot into Xen, I get this error:
>> iommu: mapping reserved region failed
>> and after that, it returned me to GRUB. What's the problem?
>
> http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen contains advice on
> the information which a report should contain; I'm afraid the above is
> not sufficient for anyone to be able to advise you.
>
> I suggest you take this to the xen-user list in the first instance.
>
> Ian.
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:02:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eXX-0006n4-DB; Fri, 10 Jan 2014 16:02:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1eXV-0006mj-Gw
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:02:29 +0000
Received: from [85.158.137.68:35999] by server-10.bemta-3.messagelabs.com id
	5B/E8-23989-49910D25; Fri, 10 Jan 2014 16:02:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389369746!8445107!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22193 invoked from network); 10 Jan 2014 16:02:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:02:27 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AG2NLM018525
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:02:24 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG2NZi005462
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:02:23 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG2MPI015548; Fri, 10 Jan 2014 16:02:22 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:02:22 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 414F21C18DC; Fri, 10 Jan 2014 11:02:21 -0500 (EST)
Date: Fri, 10 Jan 2014 11:02:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <20140110160221.GA21360@phenom.dumpdata.com>
References: <1389369174-28096-1-git-send-email-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389369174-28096-1-git-send-email-anthony.perard@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	QEMU-devel <qemu-devel@nongnu.org>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] xen_pt: Fix debug output.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:52:54PM +0000, Anthony PERARD wrote:
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

I saw it myself and was going to post a fix, but you beat me to it.
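For reference, the old format string "0x%lx"PRIx64 concatenated a full
%lx conversion with the PRIx64 suffix, leaving a stray conversion and
literal characters in the output (the exact damage depends on the
platform's expansion of PRIx64); the corrected "0x%08"PRIx64 supplies
only the flags and width and lets PRIx64 pick the right length modifier.
A minimal standalone illustration (fmt_region is a hypothetical helper,
not QEMU code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a 64-bit size/base pair the way the corrected XEN_PT_LOG
 * format string does: zero-padded to 8 hex digits, with PRIx64
 * supplying the platform's correct length modifier for uint64_t. */
static int fmt_region(char *buf, size_t len, uint64_t size, uint64_t base)
{
    return snprintf(buf, len,
                    "size=0x%08" PRIx64 " base_addr=0x%08" PRIx64,
                    size, base);
}
```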
> ---
>  hw/xen/xen_pt.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index d58cb61..eee4354 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -420,8 +420,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>                                "xen-pci-pt-bar", r->size);
>          pci_register_bar(&s->dev, i, type, &s->bar[i]);
>  
> -        XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%lx"PRIx64
> -                   " base_addr=0x%lx"PRIx64" type: %#x)\n",
> +        XEN_PT_LOG(&s->dev, "IO region %i registered (size=0x%08"PRIx64
> +                   " base_addr=0x%08"PRIx64" type: %#x)\n",
>                     i, r->size, r->base_addr, type);
>      }
>  
> -- 
> Anthony PERARD
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eZZ-000732-Pq; Fri, 10 Jan 2014 16:04:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W1eZY-00072j-IR
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:04:36 +0000
Received: from [193.109.254.147:60272] by server-9.bemta-14.messagelabs.com id
	43/CD-13957-31A10D25; Fri, 10 Jan 2014 16:04:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389369874!6610972!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11440 invoked from network); 10 Jan 2014 16:04:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:04:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89584296"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:02:21 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:02:21 -0500
Message-ID: <52D0198C.6000600@citrix.com>
Date: Fri, 10 Jan 2014 16:02:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>	<20140109153015.GF12164@zion.uk.xensource.com>	<52CFDAEC.5080708@citrix.com>	<20140110114534.GE29180@zion.uk.xensource.com>
	<52D01094.5060102@citrix.com>
In-Reply-To: <52D01094.5060102@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 15:24, Zoltan Kiss wrote:
> On 10/01/14 11:45, Wei Liu wrote:
>> On Fri, Jan 10, 2014 at 11:35:08AM +0000, Zoltan Kiss wrote:
>> [...]
>>>
>>>>> @@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif
>>>>> *vif,
>>>>>       err = gop->status;
>>>>>       if (unlikely(err))
>>>>>           xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>>>>> +    else {
>>>>> +        if (vif->grant_tx_handle[pending_idx] !=
>>>>> +            NETBACK_INVALID_HANDLE) {
>>>>> +            netdev_err(vif->dev,
>>>>> +                "Stale mapped handle! pending_idx %x handle %x\n",
>>>>> +                pending_idx, vif->grant_tx_handle[pending_idx]);
>>>>> +            BUG();
>>>>> +        }
>>>>> +        set_phys_to_machine(idx_to_pfn(vif, pending_idx),
>>>>> +            FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
>>>>
>>>> What happens when you don't have this?
>>> Your frags will be filled with garbage. I don't understand exactly
>>> what this function does; someone might want to enlighten us? I took
>>> its usage from the classic kernel.
>>> Also, it might be worthwhile to check the return value and BUG if
>>> it's false, but I don't know exactly what that return value means.
>>>
>>
>> This is actually part of gnttab_map_refs. As you're using hypercall
>> directly this becomes very fragile.
>>
>> So the right thing to do is to fix gnttab_map_refs.
> I agree. As I mentioned in another email in this thread, I think that
> should be the topic of a separate patch series. In the meantime, I will
> use gnttab_batch_map instead of the direct hypercall, since it handles
> the GNTST_eagain scenario, and I will use set_phys_to_machine the same
> way as m2p_override does:

If the grant table code doesn't provide the API calls you need you can
either:

a) add the new API as a prerequisite patch.
b) use the existing API calls and live with the performance problem
until you can refactor the API later on.

Adding a netback-specific hack isn't a valid option.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:05:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eah-0007ED-Ek; Fri, 10 Jan 2014 16:05:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1eag-0007E4-Mq
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:05:46 +0000
Received: from [85.158.139.211:11093] by server-15.bemta-5.messagelabs.com id
	19/24-08490-A5A10D25; Fri, 10 Jan 2014 16:05:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389369943!7837080!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8686 invoked from network); 10 Jan 2014 16:05:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:05:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89585724"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:03:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:03:51 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1eYp-0002kU-HS;
	Fri, 10 Jan 2014 16:03:51 +0000
Date: Fri, 10 Jan 2014 16:03:51 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110160351.GF30581@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110154105.GC20640@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 10:41:05AM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> > On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > create ^
> > > > owner Wei Liu <wei.liu2@citrix.com>
> > > > thanks
> > > > 
> > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > When I have following configuration in HVM config file:
> > > > >   memory=128
> > > > >   maxmem=256
> > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > 
> > > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > > 
> > > > > With claim_mode=0, I can successfully create an HVM guest.
> > > > 
> > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > 
> > > No. 128MB actually.
> > > 
> > 
> > Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> > 8MB video ram). Did I misread your message...
> 
> The 'claim' being the hypercall to set the 'clamp' on how much memory
> the guest can allocate. This is based on:
> 
> 242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;

This is in fact initialized from 'maxmem' in the guest's config file, and

243     unsigned long target_pages = args->mem_target >> PAGE_SHIFT;

This is in fact 'memory' in the guest's config file.

So when you try to claim "maxmem" while the current limit is "memory", it
will not work.

So the guest should only claim target_pages, minus 0x20 pages if PoD is
enabled. Oh, this is what your initial patch did. I don't know whether
this is conceptually correct though. :-P

Furthermore, should the guest only be allowed to claim target_pages,
regardless of whether PoD is enabled? When only "memory" is specified,
"maxmem" equals "memory". So conceptually what we really care about is
"memory", not "maxmem".

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:05:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ean-0007Fr-RE; Fri, 10 Jan 2014 16:05:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1eam-0007FM-8E
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:05:52 +0000
Received: from [85.158.139.211:60024] by server-14.bemta-5.messagelabs.com id
	0F/BD-24200-F5A10D25; Fri, 10 Jan 2014 16:05:51 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389369950!9064100!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26617 invoked from network); 10 Jan 2014 16:05:50 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 10 Jan 2014 16:05:50 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53807 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1ePj-00052k-ME; Fri, 10 Jan 2014 16:54:27 +0100
Date: Fri, 10 Jan 2014 17:05:44 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <24225189.20140110170544@eikelenboom.it>
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140110151914.GF1696@perard.uk.xensource.com>
References: <20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 4:19:15 PM, you wrote:

> On Thu, Jan 09, 2014 at 10:28:47PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 09, 2014 at 02:56:24PM +0000, Anthony PERARD wrote:
>> > On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>> > > On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>> > > > On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>> > > > > On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>> > > > > > On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>> > > > > > [...]
>> > > > > > > > Those Xen report something like:
>> > > > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 >
>> > > > > > > > 131328
>> > > > > > > > (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46
>> > > > > > > > memflags=0 (62 of 64)
>> > > > > > > > 
>> > > > > > > > ?
>> > > > > > > > 
>> > > > > > > > (I tryied to reproduce the issue by simply add many emulated e1000 in
>> > > > > > > > QEMU :) )
>> > > > > > > > 
>> > > > 
>> > > > > -bash-4.1# lspci -s 01:00.0 -v 
>> > > > > 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>> > > > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>> > > > >         Flags: fast devsel, IRQ 16
>> > > > >         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>> > > > >         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>> > > > >         I/O ports at e020 [disabled] [size=32]
>> > > > >         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>> > > > >         Expansion ROM at fb400000 [disabled] [size=4M]
>> > > > 
>> > > > BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>> > > > allocate memory for it. We will maybe have to find another way.
>> > > > qemu-trad does not seem to allocate memory, but I haven't gone very
>> > > > far in trying to check that.
>> > > 
>> > > And indeed that is the case. The "Fix" below fixes it.
>> > > 
>> > > 
>> > > Based on that and this guest config:
>> > > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>> > > memory = 2048
>> > > boot="d"
>> > > maxvcpus=32
>> > > vcpus=1
>> > > serial='pty'
>> > > vnclisten="0.0.0.0"
>> > > name="latest"
>> > > vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>> > > pci = ["01:00.0"]
>> > > 
>> > > I can boot the guest.
>> > 
>> > And can you access the ROM from the guest ?
>> 
>> I hadn't tried it. This is with a NIC and I just wanted to see if it
>> could do PCI passthrough without using the Option ROM.
>> > 
>> > 
>> > Also, I have another patch; it will initialize the PCI ROM BAR like any
>> > other BAR. In this case, if qemu is involved in the access to the ROM, it
>> > will print an error, as is the case for the other BARs.
>> > 
>> > I tried to test it, but it was with an embedded VGA card. When I dump
>> > the ROM, I got the same one as the emulated card instead of the ROM from
>> > the device.
>> 
>> Oddly enough for me with your patch the NIC's BIOS was invoked and
>> it tried to PXE boot:
>> 
>> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
>> 
>> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
>> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
>> ..
>> and I did see the PXE boot menu in my guest - so even
>> better!

> Perfect, looks like it is the fix for PCI passthrough.

Hi Konrad,

Are you sure it's the ROM of the NIC, and not the iPXE ROM from the
emulated device that gets run?

With this patch and VGA devices it still points to another ROM for me.
(It looks like it is pointing to the ROM of the device with a BDF just one
lower than the passed-through one.)

Do you by any chance know if there is a difference in how lspci and the
Linux kernel scan/list PCI devices?
(For example, one by reading ACPI tables and the other by real probing?)

--
Sander


>> I have not yet done the GPU - this issue was preventing me from using
>> qemu-xen as it would always blow up before SeaBIOS was in the picture.
>> 
>> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com>' please do.

> Will do.

>> Thank you!
>> > 
>> > 
>> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> > index 6dd7a68..2bbdb6d 100644
>> > --- a/hw/xen/xen_pt.c
>> > +++ b/hw/xen/xen_pt.c
>> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>> >  
>> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>> >  
>> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>> > -                                      "xen-pci-pt-rom", d->rom.size);
>> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>> > +                              "xen-pci-pt-rom", d->rom.size);
>> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>> >                           &s->rom);
>> >  
>> > 
>> > -- 
>> > Anthony PERARD




From xen-devel-bounces@lists.xen.org Fri Jan 10 16:05:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eaq-0007Gz-8g; Fri, 10 Jan 2014 16:05:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1eao-0007G7-Cx
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:05:54 +0000
Received: from [193.109.254.147:13564] by server-14.bemta-14.messagelabs.com
	id BE/58-12628-16A10D25; Fri, 10 Jan 2014 16:05:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389369951!10088380!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3266 invoked from network); 10 Jan 2014 16:05:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:05:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89586142"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:04:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:04:17 -0500
Message-ID: <1389369856.6423.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: fahimeh soltaninejad <f.soltani298@gmail.com>
Date: Fri, 10 Jan 2014 16:04:16 +0000
In-Reply-To: <CAKLxbwKkKGyE+OwdScm4X3wLS6PoOnmhHr6kKrnT7wYWbhOAtA@mail.gmail.com>
References: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
	<1389367453.19142.70.camel@kazak.uk.xensource.com>
	<CAKLxbwKkKGyE+OwdScm4X3wLS6PoOnmhHr6kKrnT7wYWbhOAtA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Problem with enabling vt(virtualization),
 xen does not boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 19:31 +0330, fahimeh soltaninejad wrote:
> the problem will be solved when I disable VT. What should I do?

Read what I wrote last time and act on it.

Ian.




From xen-devel-bounces@lists.xen.org Fri Jan 10 16:05:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1eaq-0007Gz-8g; Fri, 10 Jan 2014 16:05:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1eao-0007G7-Cx
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:05:54 +0000
Received: from [193.109.254.147:13564] by server-14.bemta-14.messagelabs.com
	id BE/58-12628-16A10D25; Fri, 10 Jan 2014 16:05:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389369951!10088380!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3266 invoked from network); 10 Jan 2014 16:05:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:05:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89586142"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:04:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:04:17 -0500
Message-ID: <1389369856.6423.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: fahimeh soltaninejad <f.soltani298@gmail.com>
Date: Fri, 10 Jan 2014 16:04:16 +0000
In-Reply-To: <CAKLxbwKkKGyE+OwdScm4X3wLS6PoOnmhHr6kKrnT7wYWbhOAtA@mail.gmail.com>
References: <CAKLxbwJdzi_120jKSm+1ZdLCDnB95QhTHqimVm1a6bQdTB5YnA@mail.gmail.com>
	<1389367453.19142.70.camel@kazak.uk.xensource.com>
	<CAKLxbwKkKGyE+OwdScm4X3wLS6PoOnmhHr6kKrnT7wYWbhOAtA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Problem with enabling vt(virtualization),
 xen does not boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 19:31 +0330, fahimeh soltaninejad wrote:
> The problem is solved when I disable VT. What should I do?

Read what I wrote last time and act on it.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:06:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:06:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ebg-0007WY-U6; Fri, 10 Jan 2014 16:06:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1ebe-0007Vu-NB
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:06:46 +0000
Received: from [193.109.254.147:21983] by server-9.bemta-14.messagelabs.com id
	E9/21-13957-69A10D25; Fri, 10 Jan 2014 16:06:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389370003!10112342!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7762 invoked from network); 10 Jan 2014 16:06:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:06:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AG5dn3022406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:05:40 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG5cST022343
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:05:39 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG5c3U024073; Fri, 10 Jan 2014 16:05:38 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:05:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CA8851C18DC; Fri, 10 Jan 2014 11:05:36 -0500 (EST)
Date: Fri, 10 Jan 2014 11:05:36 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110160536.GB21360@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
	<1389369373.6423.21.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389369373.6423.21.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:56:13PM +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 10:28 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> > > On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > > > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > > > And then it is the responsibility of the balloon driver to give the memory
> > > > back (and this is where the 'static-max' et al come in play to tell the
> > > > balloon driver to balloon out).
> > > 
> > > PoD exists purely so that we don't need the 'maxmem' amount of memory at
> > > boot time. It is basically there in order to let the guest get booted
> > > far enough to load the balloon driver to give the memory back.
> > > 
> > > It's basically a boot time zero-page sharing mechanism AIUI.
> > 
> > But it does look to gulp up hypervisor memory and return it during
> > allocation of memory for the guest.
> 
> It should be less than the (maxmem - memory) amount though. Perhaps
> because Wei is using relatively small sizes, the PoD cache ends up
> being a similar size to the saving?
> 
> Or maybe I just don't understand PoD, since the code you quote does seem
> to contradict that.
> 
> Or maybe libxl's calculation of pod_target is wrong?
> 
> > From reading the code the patch seems correct - we will _need_ that
> > extra 128MB 'claim' to allocate/free the 128MB extra pages. They
> > are temporary as we do free them.
> 
> It does make sense that the PoD cache should be included in the claim,
> I just don't get why the cache is so big...

I think it expands and shrinks to make sure that the memory is present
in the hypervisor. If there is not enough memory it would return -ENOMEM
and the toolstack would know immediately.

But that seems silly, as that memory might in the future be used
by other guests, and then you won't be able to use said cache. But since
it is a "cache" I guess that is OK.

> 
> > > > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > > > index 77bd365..65e9577 100644
> > > > --- a/tools/libxc/xc_hvm_build_x86.c
> > > > +++ b/tools/libxc/xc_hvm_build_x86.c
> > > > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> > > >  
> > > >      /* try to claim pages for early warning of insufficient memory available */
> > > >      if ( claim_enabled ) {
> > > > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > > > +        unsigned long nr = nr_pages - cur_pages;
> > > > +
> > > > +        if ( pod_mode )
> > > > +            nr = target_pages - 0x20;
> > > 
> > > 0x20?
> > 
> > Yup. From earlier:
> > 
> > 305     if ( pod_mode )
> > 306     {
> > 307         /*
> > 308          * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> > 309          * adjust the PoD cache size so that domain tot_pages will be
> > 310          * target_pages - 0x20 after this call.
> > 311          */
> > 312         rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> > 313                                       NULL, NULL, NULL);
> > 314         if ( rc != 0 )
> > 315         {
> > 316             PERROR("Could not set PoD target for HVM guest.\n");
> > 317             goto error_out;
> > 318         }
> > 319     }
> > 
> > Maybe a nice little 'pod_delta' or 'pod_pages' should be used instead of copying
> > this around.
> 
> A #define or something might be nice, yes.
> 
> > 
> > > 
> > > > +
> > > > +        rc = xc_domain_claim_pages(xch, dom, nr);
> > > >          if ( rc != 0 )
> > > >          {
> > > >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> > > 
> > > 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:07:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ech-0007k3-32; Fri, 10 Jan 2014 16:07:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1ecf-0007jW-63
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:07:49 +0000
Received: from [85.158.137.68:43908] by server-1.bemta-3.messagelabs.com id
	1D/39-29598-4DA10D25; Fri, 10 Jan 2014 16:07:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389370066!8399892!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32227 invoked from network); 10 Jan 2014 16:07:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:07:47 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AG7eak020984
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:07:41 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG7dKI000034
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:07:40 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AG7dir022793; Fri, 10 Jan 2014 16:07:39 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:07:39 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 551131C18DC; Fri, 10 Jan 2014 11:07:38 -0500 (EST)
Date: Fri, 10 Jan 2014 11:07:38 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140110160738.GC21360@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
	<52D0253802000078001127F3@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D0253802000078001127F3@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:52:08PM +0000, Jan Beulich wrote:
> >>> On 10.01.14 at 16:41, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> >> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> >> > --- a/tools/libxc/xc_hvm_build_x86.c
> >> > +++ b/tools/libxc/xc_hvm_build_x86.c
> >> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >> >  
> >> >      /* try to claim pages for early warning of insufficient memory available */
> >> >      if ( claim_enabled ) {
> >> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> >> > +        unsigned long nr = nr_pages - cur_pages;
> >> > +
> >> > +        if ( pod_mode )
> >> > +            nr = target_pages - 0x20;
> >> > +
> >> 
> >> Yes it should work because this makes nr smaller than d->tot_pages and
> >> d->max_pages. But according to the comment you pasted above this looks
> >> like wrong fix...
> > 
> > It should be: 
> > 
> > tot_pages = 128MB
> > max_pages = 256MB
> > nr = 256MB - 0x20.
> > 
> > So tot_pages < max_pages > nr && nr > tot_pages
> > 
> > If I got my variables right.
> > Which means that 'nr' is greater than tot_pages but less than max_pages.
> 
> But that seems conceptually wrong: As was said before, the guest
> should only get 128Mb allocated, hence it would be wrong to claim
> almost 256Mb for it.

At the start of the day (that is when the guest starts) it would only have
128MB allocated to it. But before then it needs 256MB (at least that is
based on looking at the code).

I do agree it does seem odd that the cache allocates memory, then
frees it. But I might also be reading the code wrong.

> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:08:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1edX-000811-JO; Fri, 10 Jan 2014 16:08:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W1edV-0007zt-Vx
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:08:42 +0000
Received: from [85.158.137.68:16414] by server-8.bemta-3.messagelabs.com id
	13/2D-31081-90B10D25; Fri, 10 Jan 2014 16:08:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389370116!8400100!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4386 invoked from network); 10 Jan 2014 16:08:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:08:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91726545"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 16:08:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:08:31 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W1edL-0002pC-12;
	Fri, 10 Jan 2014 16:08:31 +0000
Date: Fri, 10 Jan 2014 16:08:31 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140110160830.GG30581@zion.uk.xensource.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>
	<20140109153015.GF12164@zion.uk.xensource.com>
	<52CFDAEC.5080708@citrix.com>
	<20140110114534.GE29180@zion.uk.xensource.com>
	<52D01094.5060102@citrix.com> <52D0198C.6000600@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D0198C.6000600@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:02:20PM +0000, David Vrabel wrote:
> On 10/01/14 15:24, Zoltan Kiss wrote:
> > On 10/01/14 11:45, Wei Liu wrote:
> >> On Fri, Jan 10, 2014 at 11:35:08AM +0000, Zoltan Kiss wrote:
> >> [...]
> >>>
> >>>>> @@ -920,6 +852,18 @@ static int xenvif_tx_check_gop(struct xenvif
> >>>>> *vif,
> >>>>>       err = gop->status;
> >>>>>       if (unlikely(err))
> >>>>>           xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> >>>>> +    else {
> >>>>> +        if (vif->grant_tx_handle[pending_idx] !=
> >>>>> +            NETBACK_INVALID_HANDLE) {
> >>>>> +            netdev_err(vif->dev,
> >>>>> +                "Stale mapped handle! pending_idx %x handle %x\n",
> >>>>> +                pending_idx, vif->grant_tx_handle[pending_idx]);
> >>>>> +            BUG();
> >>>>> +        }
> >>>>> +        set_phys_to_machine(idx_to_pfn(vif, pending_idx),
> >>>>> +            FOREIGN_FRAME(gop->dev_bus_addr >> PAGE_SHIFT));
> >>>>
> >>>> What happens when you don't have this?
> >>> Your frags will be filled with garbage. I don't understand exactly
> >>> what this function does; perhaps someone can enlighten us? I took
> >>> its usage from the classic kernel.
> >>> Also, it might be worthwhile to check the return value and BUG if
> >>> it's false, but I don't know exactly what that return value means.
> >>>
> >>
> >> This is actually part of gnttab_map_refs. As you're using the
> >> hypercall directly, this becomes very fragile.
> >>
> >> So the right thing to do is to fix gnttab_map_refs.
> > I agree; as I mentioned in another email in this thread, I think that
> > should be the topic of another patch series. In the meantime, I will
> > use gnttab_batch_map instead of the direct hypercall, since it handles
> > the GNTST_eagain scenario, and I will use set_phys_to_machine the same
> > way as m2p_override does:
> 
> If the grant table code doesn't provide the API calls you need you can
> either:
> 
> a) add the new API as a prerequisite patch.
> b) use the existing API calls and live with the performance problem,
> until you can refactor the API later on.
> 
> Adding a netback-specific hack isn't a valid option.
> 

Agreed.

Wei.

> David
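The fragility being discussed, a wrapper that pairs the mapping with the p2m bookkeeping versus a raw hypercall that skips it, can be pictured with a toy model. Everything below is an illustrative stand-in, not the real Linux/Xen grant-table API:

```python
# Toy model of the discussion above: a gnttab_map_refs-style wrapper does
# the mapping AND the p2m bookkeeping; a caller issuing the raw hypercall
# skips the bookkeeping, so later lookups return garbage. All names here
# are illustrative stand-ins, not the real Linux/Xen API.
p2m = {}  # pfn -> bus address; the bookkeeping a raw caller must redo

def raw_map_hypercall(mfn):
    """Stand-in for the bare GNTTABOP_map_grant_ref hypercall."""
    return mfn  # performs only the mapping; never touches p2m

def map_refs(pfn, mfn):
    """Stand-in for gnttab_map_refs: map, then record the translation."""
    dev_bus_addr = raw_map_hypercall(mfn)
    p2m[pfn] = dev_bus_addr  # the set_phys_to_machine step
    return dev_bus_addr

map_refs(7, 0x1234)        # via the wrapper: translation recorded
raw_map_hypercall(0x5678)  # raw call for pfn 8: translation never recorded
print(sorted(p2m))         # [7] -- a frag backed by pfn 8 reads garbage
```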

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:09:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1edu-00088V-BB; Fri, 10 Jan 2014 16:09:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1edp-00087V-Nw
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:09:04 +0000
Received: from [85.158.139.211:29204] by server-13.bemta-5.messagelabs.com id
	E1/63-11357-C1B10D25; Fri, 10 Jan 2014 16:09:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389370138!9024743!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20951 invoked from network); 10 Jan 2014 16:09:00 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 16:09:00 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AG8uRk026235
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:08:56 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AG8tSj019265
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:08:55 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AG8tON019250; Fri, 10 Jan 2014 16:08:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:08:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3812F1C18DC; Fri, 10 Jan 2014 11:08:54 -0500 (EST)
Date: Fri, 10 Jan 2014 11:08:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140110160854.GD21360@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
	<20140110154831.GE30581@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110154831.GE30581@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:48:31PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 10:41:05AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> > > On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > > create ^
> > > > > owner Wei Liu <wei.liu2@citrix.com>
> > > > > thanks
> > > > > 
> > > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > > When I have the following configuration in an HVM config file:
> > > > > >   memory=128
> > > > > >   maxmem=256
> > > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > > 
> > > > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > > > 
> > > > > > With claim_mode=0, I can successfully create the HVM guest.
> > > > > 
> > > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > > 
> > > > No. 128MB actually.
> > > > 
> > > 
> > > Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> > > 8MB video ram). Did I misread your message...
> > 
> > The 'claim' being the hypercall to set the 'clamp' on how much memory
> > the guest can allocate. This is based on:
> > 
> > 242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;
> > 
> >   /* try to claim pages for early warning of insufficient memory available */
> > 337     if ( claim_enabled ) {
> > 343         rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > 
> > Your 'mem_size' is 128MB, cur_pages is 0xc0, so it ends up 'claiming'
> > that the guest only needs 128MB - 768kB.
> 
> No, the nr_pages I saw was 63296 (256MB - 8MB videoram - 768KB) -- I
> printed it out.

Then my patch should have made no difference in your case. Odd.
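The disputed numbers can be reproduced with simple page arithmetic. This is a sketch assuming 4 KiB pages (PAGE_SHIFT = 12) and the 8MB videoram figure quoted earlier in the thread; mb_to_pages is an illustrative helper, not a libxc function:

```python
# Sketch of the claim-size arithmetic from the thread, assuming 4 KiB
# pages (PAGE_SHIFT = 12). mb_to_pages is an illustrative helper.
PAGE_SHIFT = 12

def mb_to_pages(mb):
    """Convert a size in MiB to a count of 4 KiB pages."""
    return (mb << 20) >> PAGE_SHIFT

cur_pages = 0xc0  # 192 pages = 768 KiB already populated, per above

# Konrad's reading: nr_pages derived from mem_size (128MB), so the
# claim would be "128MB - 768kB":
assert mb_to_pages(128) - cur_pages == 32576

# Wei's observed claim of 63296 pages instead matches a maxmem-sized
# (256MB) target minus the 8MB videoram and cur_pages:
assert mb_to_pages(256) - mb_to_pages(8) - cur_pages == 63296
```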

> 
> Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:12:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1egr-0000WC-7P; Fri, 10 Jan 2014 16:12:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1egp-0000W1-Vi
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:12:08 +0000
Received: from [85.158.139.211:7676] by server-10.bemta-5.messagelabs.com id
	09/AE-01405-7DB10D25; Fri, 10 Jan 2014 16:12:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389370324!9055505!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10613 invoked from network); 10 Jan 2014 16:12:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:12:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89591627"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:11:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:11:16 -0500
Message-ID: <1389370275.6423.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 10 Jan 2014 16:11:15 +0000
In-Reply-To: <20140110160536.GB21360@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
	<1389369373.6423.21.camel@kazak.uk.xensource.com>
	<20140110160536.GB21360@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 11:05 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:56:13PM +0000, Ian Campbell wrote:
> > On Fri, 2014-01-10 at 10:28 -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> > > > On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > > > > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > > > > And then it is the responsibility of the balloon driver to give the memory
> > > > > back (and this is where the 'static-max' et al come in play to tell the
> > > > > balloon driver to balloon out).
> > > > 
> > > > PoD exists purely so that we don't need the 'maxmem' amount of memory at
> > > > boot time. It is basically there in order to let the guest get booted
> > > > far enough to load the balloon driver to give the memory back.
> > > > 
> > > > It's basically a boot time zero-page sharing mechanism AIUI.
> > > 
> > > But it does look to gulp up hypervisor memory and return it during
> > > allocation of memory for the guest.
> > 
> > It should be less than the maxmem-memory amount though. Perhaps because
> > Wei is using relatively small sizes the pod cache ends up being a
> > similar size to the saving?
> > 
> > Or maybe I just don't understand PoD, since the code you quote does seem
> > to contradict that.
> > 
> > Or maybe libxl's calculation of pod_target is wrong?
> > 
> > > From reading the code the patch seems correct - we will _need_ that
> > > extra 128MB 'claim' to allocate/free the 128MB extra pages. They
> > > are temporary as we do free them.
> > 
> > It does make sense that the PoD cache should be included in the claim,
> > I just don't get why the cache is so big...
> 
> I think it expands and shrinks to make sure that the memory is present
> in the hypervisor. If there is not enough memory it would -ENOMEM and
> the toolstack would know immediately.
> 
> But that seems silly - as that memory might in the future be used
> by other guests, and then you won't be able to use said cache. But since
> it is a "cache" I guess that is OK.

Wait, isn't the "cache" here just the target memory size?

PoD uses up to that size to populate guest pages, and will try to
reclaim zeroed pages from the guest so that it never grows bigger than
the target size.
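That reclaim-to-stay-under-target behaviour can be pictured with a deliberately tiny model. Nothing below corresponds to real Xen code; it only shows the invariant that backed memory never exceeds the target:

```python
# Tiny model of the PoD invariant described above: backed memory never
# exceeds the target, because a zeroed page is reclaimed whenever a new
# page must be populated at the limit. Purely illustrative, not Xen code.
class PodModel:
    def __init__(self, target_pages):
        self.target = target_pages
        self.backed = 0  # guest pages currently backed by real memory

    def populate(self):
        """Back one more guest page on first touch."""
        if self.backed == self.target:
            self.backed -= 1  # sweep: reclaim a zeroed page first
        self.backed += 1
        assert self.backed <= self.target

# Stand-ins for target=128M and maxmem=256M: the guest touches every
# page up to maxmem, yet never backs more than the target's worth.
pod = PodModel(target_pages=4)
for _ in range(8):
    pod.populate()
print(pod.backed)  # 4
```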

I think the confusion here is that Wei had target=128M and maxmem=256M,
so the difference was 128M, which served as a nice red herring...

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:13:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ehg-0000lx-Ma; Fri, 10 Jan 2014 16:13:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1ehd-0000lN-Ta
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:12:58 +0000
Received: from [193.109.254.147:37272] by server-10.bemta-14.messagelabs.com
	id FB/3F-20752-90C10D25; Fri, 10 Jan 2014 16:12:57 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389370374!10091025!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26184 invoked from network); 10 Jan 2014 16:12:56 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:12:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AGCoog007354
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:12:50 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0AGCn8E006683
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:12:50 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGCnN0020648; Fri, 10 Jan 2014 16:12:49 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:12:49 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 653EA1C18DC; Fri, 10 Jan 2014 11:12:48 -0500 (EST)
Date: Fri, 10 Jan 2014 11:12:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110161248.GE21360@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1087166993.20140110165729@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
 pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:57:29PM +0100, Sander Eikelenboom wrote:
> 
> Friday, January 10, 2014, 4:12:18 PM, you wrote:
> 
> > On Fri, Jan 10, 2014 at 03:51:57PM +0100, Sander Eikelenboom wrote:
> >> Hi Konrad,
> >> 
> >> Normally i'm never reattaching pci devices to dom0, but at the moment i have some use for it.
> >> 
> >> But it seems pci-detach isn't completely detaching the device from the guest.
> >> 
> >> - Say i have a guest (HVM) with domid=2 and a pci device passed through with bdf 00:19.0, the device is hidden on boot with xen-pciback.hide=(00:19.0) in grub.
> >> 
> >> - Now i do a "xl pci-assignable-list"
> >>   This returns nothing, which is correct since all hidden devices have already been assigned to guests.
> >> 
> >> - Then i do "xl -v pci-detach 2 00:19.0"
> >>   Which also returns nothing ...
> >> 
> >> - Now i do a "xl pci-assignable-list" again ..
> >>   This returns:
> >>   "0000:00:19.0"
> >>   So the pci-detach does seem to have done *something* :-)
> 
> > Or it thinks it has :-)
> 
> Well it has .. but probably not enough ;-)
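(Aside: the short BDF "00:19.0" accepted on the command line corresponds to the full "0000:00:19.0" that xl pci-assignable-list prints. A rough sketch of that normalization, with a hypothetical normalize_bdf helper rather than xl's actual parser:)

```python
# Hypothetical sketch of SBDF normalization: accept "bb:dd.f" or
# "dddd:bb:dd.f" and emit the canonical "dddd:bb:dd.f" form, as seen
# in the pci-assignable-list output above. Not xl's real code.
import re

def normalize_bdf(bdf):
    m = re.match(r'^(?:([0-9a-fA-F]{1,4}):)?'      # optional domain
                 r'([0-9a-fA-F]{1,2}):'            # bus
                 r'([0-9a-fA-F]{1,2})\.([0-7])$',  # device.function
                 bdf)
    if not m:
        raise ValueError("bad BDF: %r" % bdf)
    dom, bus, dev, fn = m.groups()
    return "%04x:%02x:%02x.%x" % (
        int(dom or '0', 16), int(bus, 16), int(dev, 16), int(fn, 16))

print(normalize_bdf("00:19.0"))  # prints 0000:00:19.0
```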
> 
> >> 
> >> - But when now trying to remove the device from pciback to dom0 with "xl pci-assignable-remove 00:19.0" it gives an error
> >>   and later it gives some stacktraces ..
> >> 
> >>   xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
> >>   xen_pciback: ****** driver domain may still access this device's i/o resources!
> >>   xen_pciback: ****** shutdown driver domain before binding device
> >>   xen_pciback: ****** to other drivers or domains
> 
> > What about /var/log/xen/qemu-dm* and the 'lspci' in the guest? Is the PCI device
> > removed from there?
> 
> Oeh i should have thought of that ...
> 
> in the guest i get a "e1000e 0000:00:06.0 removed PHC" and it's gone from lspci ..
> in /var/log/xen/qemu-dm* .. i get nothing .. but i was using qemu-xen .. which is totally non verbose ..
> 
> So let's try with qemu-xen-traditional .. which i also forgot to test ...
> 
> Which gives exactly the same error / warning as above, but it has some output in /var/log/xen/qemu-dm*:
> 
> pt_msgctrl_reg_write: setup msi for dev 30
> pt_msi_setup: pt_msi_setup requested pirq = 54
> pt_msi_setup: msi mapped with pirq 36
> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3036
> pt_msgctrl_reg_write: setup msi for dev 28
> pt_msi_setup: pt_msi_setup requested pirq = 53
> pt_msi_setup: msi mapped with pirq 35
> pt_msi_update: Update msi with pirq 35 gvec 0 gflags 3035
> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3034
> dm-command: hot remove pass-through pci dev
> generate a sci for PHP.
> deassert due to disable GPE bit.
> ACPI:debug: write addr=0xb044, val=0x30.
> ACPI:debug: write addr=0xb045, val=0x3.
> ACPI:debug: write addr=0xb044, val=0x30.
> ACPI:debug: write addr=0xb045, val=0x88.
> ACPI PCI hotplug: write devfn=0x30.
> pci_intx: intx=1
> pci_intx: intx=1
> pt_msi_disable: Unbind msi with pirq 36, gvec 0
> pt_msi_disable: Unmap msi with pirq 36

Good, so the device is safely removed from the guest.
QEMU acted on libxl's command to remove it.

> 
> 
> 
> Also worth mentioning is that the console on which the "xl pci-assignable-remove 00:19.0" command is given keeps hanging, and eventually the hungtask stacktrace will appear.
> 
> >> 
> >> 
> >> When i shut the guest down instead of using pci-detach, the "xl pci-assignable-remove" works fine and i can rebind the device to its driver in dom0.
> >> 
> >> So am i misreading the wiki .. and is it not possible to detach a device from a running domain or ... ?
> >> 
> >> Oh yes running xen-unstable and a 3.13-rc7 kernel
> 
> > Do you see the same issue with 'xend'?
> 
> Erhmmm haven't used that for what seems to be ages .. :-)

Heh.
> 
> Hmm i also forgot the hungtask stacktrace i get sometime after the "xl pci-assignable-remove 00:19.0" ...


Wow. You just walked into a pile of bugs, didn't you? And on a Friday
nonetheless.

> 
> It seems to be the pci_reset_function ...
> 
> [   52.099144] xen_bridge: port 4(vif2.0-emu) entered forwarding state
> [   55.683141] xen_bridge: port 1(vif1.0) entered forwarding state
> [   59.861385] xen-blkback:ring-ref 8, event-channel 22, protocol 1 (x86_64-abi) persistent grants
> [   66.043965] xen_bridge: port 3(vif2.0) entered forwarding state
> [   66.044549] xen_bridge: port 3(vif2.0) entered forwarding state
> [   81.091149] xen_bridge: port 3(vif2.0) entered forwarding state
> [  227.441191] xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
> [  227.443482] xen_pciback: ****** driver domain may still access this device's i/o resources!
> [  227.445811] xen_pciback: ****** shutdown driver domain before binding device
> [  227.447811] xen_pciback: ****** to other drivers or domains
> [  368.859343] INFO: task xl:3675 blocked for more than 120 seconds.
> [  368.860447]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
> [  368.860990] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  368.861682] xl              D ffff88003fd93f00     0  3675   3489 0x00000000
> [  368.862319]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
> [  368.863035]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
> [  368.863802]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
> [  368.864514] Call Trace:
> [  368.864744]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
> [  368.865273]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
> [  368.865851]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
> [  368.866409]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
> [  368.866892]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
> [  368.867430]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
> [  368.867996]  [<ffffffff818e7238>] ? down_write+0x9/0x26
> [  368.868467]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
> [  368.868991]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
> [  368.869506]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
> [  368.870017]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
> [  368.870593]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
> [  368.871152]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
> [  368.871659]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
> [  368.872173]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
> [  368.872641]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
> [  368.873087]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
> [  488.871331] INFO: task xl:3675 blocked for more than 120 seconds.
> [  488.913929]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
> [  488.937031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  488.960945] xl              D ffff88003fd93f00     0  3675   3489 0x00000004
> [  488.986090]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
> [  489.010383]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
> [  489.034456]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
> [  489.058621] Call Trace:
> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e

Yeah, my RFC patchset (the one that does the slot/bus reset) should also fix that bug.
I totally forgot about it!


I hope.

> [  489.200927]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
> [  489.224076]  [<ffffffff818e7238>] ? down_write+0x9/0x26
> [  489.246898]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
> [  489.270086]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
> [  489.293053]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
> [  489.316068]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
> [  489.338896]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
> [  489.362459]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
> [  489.385396]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
> [  489.408605]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
> [  489.431407]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
> [  489.454251]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
> 
> 
> >> 
> >> --
> >> Sander
> >> 
> >> 
> 
> 
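One plausible reading of the trace above is a classic lock re-acquisition: the sysfs unbind path already holds the device lock when pcistub_put_pci_dev() calls pci_reset_function(), which tries to take it again. A minimal userspace analogue (illustrative Python, not the kernel's mutex code):

```python
# Userspace analogue of the hang (illustrative only; the kernel uses
# non-recursive mutexes, modelled here by threading.Lock).
import threading

device_lock = threading.Lock()

device_lock.acquire()  # the unbind/remove path holds the lock...
# ...then the reset path tries to take the same non-recursive lock.
# A blocking acquire() here would hang forever, just like the stuck
# `xl` task; a non-blocking attempt instead reports failure.
got = device_lock.acquire(blocking=False)
print("second acquire succeeded:", got)  # prints: second acquire succeeded: False
```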


From xen-devel-bounces@lists.xen.org Fri Jan 10 16:16:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1el5-0001OG-9n; Fri, 10 Jan 2014 16:16:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1el2-0001O2-Vm
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:16:29 +0000
Received: from [193.109.254.147:63210] by server-16.bemta-14.messagelabs.com
	id 38/98-20600-CDC10D25; Fri, 10 Jan 2014 16:16:28 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389370586!10091848!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17729 invoked from network); 10 Jan 2014 16:16:27 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 16:16:27 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53962 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1ea3-0005uj-8w; Fri, 10 Jan 2014 17:05:07 +0100
Date: Fri, 10 Jan 2014 17:16:23 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1010658460.20140110171623@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110161248.GE21360@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 5:12:48 PM, you wrote:

> On Fri, Jan 10, 2014 at 04:57:29PM +0100, Sander Eikelenboom wrote:
>> 
>> Friday, January 10, 2014, 4:12:18 PM, you wrote:
>> 
>> > On Fri, Jan 10, 2014 at 03:51:57PM +0100, Sander Eikelenboom wrote:
>> >> Hi Konrad,
>> >> 
>> >> Normally i'm never reattaching pci devices to dom0, but at the moment i have some use for it.
>> >> 
>> >> But it seems pci-detach isn't completely detaching the device from the guest.
>> >> 
>> >> - Say i have a guest (HVM) with domid=2 and a pci device passed through with bdf 00:19.0, the device is hidden on boot with xen-pciback.hide=(00:19.0) in grub.
>> >> 
>> >> - Now i do a "xl pci-assignable-list"
>> >>   This returns nothing, which is correct since all hidden devices have already been assigned to guests.
>> >> 
>> >> - Then i do "xl -v pci-detach 2 00:19.0"
>> >>   Which also returns nothing ...
>> >> 
>> >> - Now i do a "xl pci-assignable-list" again ..
>> >>   This returns:
>> >>   "0000:00:19.0"
>> >>   So the pci-detach does seem to have done *something* :-)
>> 
>> > Or it thinks it has :-)
>> 
>> Well it has .. but probably not enough ;-)
>> 
>> >> 
>> >> - But when now trying to remove the device from pciback to dom0 with "xl pci-assignable-remove 00:19.0" it gives an error
>> >>   and later it gives some stacktraces ..
>> >> 
>> >>   xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>> >>   xen_pciback: ****** driver domain may still access this device's i/o resources!
>> >>   xen_pciback: ****** shutdown driver domain before binding device
>> >>   xen_pciback: ****** to other drivers or domains
>> 
>> > What about /var/log/xen/qemu-dm* and the 'lspci' in the guest? Is the PCI device
>> > removed from there?
>> 
>> Oeh i should have thought of that ...
>> 
>> in the guest i get a "e1000e 0000:00:06.0 removed PHC" and it's gone from lspci ..
>> in /var/log/xen/qemu-dm* .. i get nothing .. but i was using qemu-xen .. which is totally non verbose ..
>> 
>> So let's try with qemu-xen-traditional .. which i also forgot to test ...
>> 
>> Which gives exactly the same error / warning as above, but it has some output in /var/log/xen/qemu-dm*:
>> 
>> pt_msgctrl_reg_write: setup msi for dev 30
>> pt_msi_setup: pt_msi_setup requested pirq = 54
>> pt_msi_setup: msi mapped with pirq 36
>> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3036
>> pt_msgctrl_reg_write: setup msi for dev 28
>> pt_msi_setup: pt_msi_setup requested pirq = 53
>> pt_msi_setup: msi mapped with pirq 35
>> pt_msi_update: Update msi with pirq 35 gvec 0 gflags 3035
>> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3034
>> dm-command: hot remove pass-through pci dev
>> generate a sci for PHP.
>> deassert due to disable GPE bit.
>> ACPI:debug: write addr=0xb044, val=0x30.
>> ACPI:debug: write addr=0xb045, val=0x3.
>> ACPI:debug: write addr=0xb044, val=0x30.
>> ACPI:debug: write addr=0xb045, val=0x88.
>> ACPI PCI hotplug: write devfn=0x30.
>> pci_intx: intx=1
>> pci_intx: intx=1
>> pt_msi_disable: Unbind msi with pirq 36, gvec 0
>> pt_msi_disable: Unmap msi with pirq 36

> Good, so the device is safely removed from the guest.
> QEMU acted on 'libxl' command to remove it.

>> 
>> 
>> 
>> Also worth mentioning is that the console on which the "xl pci-assignable-remove 00:19.0" command is given keeps hanging, and eventually the hungtask stacktrace will appear.
>> 
>> >> 
>> >> 
>> >> When i shut the guest down instead of using pci-detach, the "xl pci-assignable-remove" works fine and i can rebind the device to its driver in dom0.
>> >> 
>> >> So am i misreading the wiki .. and is it not possible to detach a device from a running domain or ... ?
>> >> 
>> >> Oh yes running xen-unstable and a 3.13-rc7 kernel
>> 
>> > Do you see the same issue with 'xend'?
>> 
>> Erhmmm haven't used that for what seems to be ages .. :-)

> Heh.
>> 
>> Hmm i also forgot the hungtask stacktrace i get sometime after the "xl pci-assignable-remove 00:19.0" ...


> Wow. You just walked into a pile of bugs, didn't you? And on a Friday
> nonetheless.

As usual ;-)

>> 
>> It seems to be the pci_reset_function ...
>> 
>> [   52.099144] xen_bridge: port 4(vif2.0-emu) entered forwarding state
>> [   55.683141] xen_bridge: port 1(vif1.0) entered forwarding state
>> [   59.861385] xen-blkback:ring-ref 8, event-channel 22, protocol 1 (x86_64-abi) persistent grants
>> [   66.043965] xen_bridge: port 3(vif2.0) entered forwarding state
>> [   66.044549] xen_bridge: port 3(vif2.0) entered forwarding state
>> [   81.091149] xen_bridge: port 3(vif2.0) entered forwarding state
>> [  227.441191] xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>> [  227.443482] xen_pciback: ****** driver domain may still access this device's i/o resources!
>> [  227.445811] xen_pciback: ****** shutdown driver domain before binding device
>> [  227.447811] xen_pciback: ****** to other drivers or domains
>> [  368.859343] INFO: task xl:3675 blocked for more than 120 seconds.
>> [  368.860447]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
>> [  368.860990] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [  368.861682] xl              D ffff88003fd93f00     0  3675   3489 0x00000000
>> [  368.862319]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
>> [  368.863035]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
>> [  368.863802]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
>> [  368.864514] Call Trace:
>> [  368.864744]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> [  368.865273]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> [  368.865851]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> [  368.866409]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> [  368.866892]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> [  368.867430]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
>> [  368.867996]  [<ffffffff818e7238>] ? down_write+0x9/0x26
>> [  368.868467]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
>> [  368.868991]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
>> [  368.869506]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
>> [  368.870017]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
>> [  368.870593]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
>> [  368.871152]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
>> [  368.871659]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
>> [  368.872173]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
>> [  368.872641]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
>> [  368.873087]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
>> [  488.871331] INFO: task xl:3675 blocked for more than 120 seconds.
>> [  488.913929]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
>> [  488.937031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [  488.960945] xl              D ffff88003fd93f00     0  3675   3489 0x00000004
>> [  488.986090]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
>> [  489.010383]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
>> [  489.034456]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
>> [  489.058621] Call Trace:
>> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e

> Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
> I totally forgot about it !

Got a link to that patchset ?
I at least could give it a spin .. you never know when fortune is on your side :-)

> I hope.

>> [  489.200927]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
>> [  489.224076]  [<ffffffff818e7238>] ? down_write+0x9/0x26
>> [  489.246898]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
>> [  489.270086]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
>> [  489.293053]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
>> [  489.316068]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
>> [  489.338896]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
>> [  489.362459]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
>> [  489.385396]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
>> [  489.408605]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
>> [  489.431407]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
>> [  489.454251]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
>> 
>> 
>> >> 
>> >> --
>> >> Sander
>> >> 
>> >> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:16:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1el5-0001OG-9n; Fri, 10 Jan 2014 16:16:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1el2-0001O2-Vm
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:16:29 +0000
Received: from [193.109.254.147:63210] by server-16.bemta-14.messagelabs.com
	id 38/98-20600-CDC10D25; Fri, 10 Jan 2014 16:16:28 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389370586!10091848!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17729 invoked from network); 10 Jan 2014 16:16:27 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 16:16:27 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53962 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1ea3-0005uj-8w; Fri, 10 Jan 2014 17:05:07 +0100
Date: Fri, 10 Jan 2014 17:16:23 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1010658460.20140110171623@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110161248.GE21360@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 5:12:48 PM, you wrote:

> On Fri, Jan 10, 2014 at 04:57:29PM +0100, Sander Eikelenboom wrote:
>> 
>> Friday, January 10, 2014, 4:12:18 PM, you wrote:
>> 
>> > On Fri, Jan 10, 2014 at 03:51:57PM +0100, Sander Eikelenboom wrote:
>> >> Hi Konrad,
>> >> 
>> >> Normally i'm never reattaching pci devices to dom0, but at the moment i have some use for it.
>> >> 
>> >> But it seems pci-detach isn't completely detaching the device from the guest.
>> >> 
>> >> - Say i have a guest (HVM) with domid=2 and a pci device passed through with bdf 00:19.0, the device is hidden on boot with xen-pciback.hide=(00:19.0) in grub.
>> >> 
>> >> - Now i do a "xl pci-assignable-list"
>> >>   This returns nothing, which is correct since all hidden devices have already been assigned to guests.
>> >> 
>> >> - Then i do "xl -v pci-detach 2 00:19.0"
>> >>   Which also returns nothing ...
>> >> 
>> >> - Now i do a "xl pci-assignable-list" again ..
>> >>   This returns:
>> >>   "0000:00:19.0"
>> >>   So the pci-detach does seem to have done *something* :-)
>> 
>> > Or it thinks it has :-)
>> 
>> Well it has .. but probably not enough ;-)
>> 
>> >> 
>> >> - But when now trying to remove the device from pciback to dom0 with "xl pci-assignable-remove 00:19.0" it gives an error
>> >>   and later it gives some stacktraces ..
>> >> 
>> >>   xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>> >>   xen_pciback: ****** driver domain may still access this device's i/o resources!
>> >>   xen_pciback: ****** shutdown driver domain before binding device
>> >>   xen_pciback: ****** to other drivers or domains
>> 
>> > What about /var/log/xen/qemu-dm* and the 'lspci' in the guest? Is the PCI device
>> > removed from there?
>> 
>> Oeh i should have thought of that ...
>> 
>> in the guest i get a "e1000e 0000:00:06.0 removed PHC" and it's gone from lspci ..
>> in /var/log/xen/qemu-dm* .. i get nothing .. but i was using qemu-xen .. which is totally non verbose ..
>> 
>> So let's try with qemu-xen-traditional .. which i also forgot to test ...
>> 
>> Which gives exactly the same error / warning as above, but it has some output in /var/log/xen/qemu-dm*:
>> 
>> pt_msgctrl_reg_write: setup msi for dev 30
>> pt_msi_setup: pt_msi_setup requested pirq = 54
>> pt_msi_setup: msi mapped with pirq 36
>> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3036
>> pt_msgctrl_reg_write: setup msi for dev 28
>> pt_msi_setup: pt_msi_setup requested pirq = 53
>> pt_msi_setup: msi mapped with pirq 35
>> pt_msi_update: Update msi with pirq 35 gvec 0 gflags 3035
>> pt_msi_update: Update msi with pirq 36 gvec 0 gflags 3034
>> dm-command: hot remove pass-through pci dev
>> generate a sci for PHP.
>> deassert due to disable GPE bit.
>> ACPI:debug: write addr=0xb044, val=0x30.
>> ACPI:debug: write addr=0xb045, val=0x3.
>> ACPI:debug: write addr=0xb044, val=0x30.
>> ACPI:debug: write addr=0xb045, val=0x88.
>> ACPI PCI hotplug: write devfn=0x30.
>> pci_intx: intx=1
>> pci_intx: intx=1
>> pt_msi_disable: Unbind msi with pirq 36, gvec 0
>> pt_msi_disable: Unmap msi with pirq 36

> Good, so the device is safely removed from the guest.
> QEMU acted on 'libxl' command to remove it.

>> 
>> 
>> 
>> Also worth mentioning is that the console on which the "xl pci-assignable-remove 00:19.0" command is given keeps hanging, and eventually the hung task stacktrace will appear.
>> 
>> >> 
>> >> 
>> >> When i shut the guest down instead of using pci-detach, the "xl pci-assignable-remove" works fine and i can rebind the device to its driver in dom0.
>> >> 
>> >> So am i misreading the wiki .. and is it not possible to detach a device from a running domain or ... ?
>> >> 
>> >> Oh yes running xen-unstable and a 3.13-rc7 kernel
>> 
>> > Do you see the same issue with 'xend'?
>> 
>> Erhmmm haven't used that for what seems to be ages .. :-)

> Heh.
>> 
>> Hmm i also forgot the hungtask stacktrace i get sometime after the "xl pci-assignable-remove 00:19.0" ...


> Wow. You just walked into a pile of bugs, didn't you? And on Friday
> nonetheless.

As usual ;-)

>> 
>> It seems to be the pci_reset_function ...
>> 
>> [   52.099144] xen_bridge: port 4(vif2.0-emu) entered forwarding state
>> [   55.683141] xen_bridge: port 1(vif1.0) entered forwarding state
>> [   59.861385] xen-blkback:ring-ref 8, event-channel 22, protocol 1 (x86_64-abi) persistent grants
>> [   66.043965] xen_bridge: port 3(vif2.0) entered forwarding state
>> [   66.044549] xen_bridge: port 3(vif2.0) entered forwarding state
>> [   81.091149] xen_bridge: port 3(vif2.0) entered forwarding state
>> [  227.441191] xen_pciback: ****** removing device 0000:00:19.0 while still in-use! ******
>> [  227.443482] xen_pciback: ****** driver domain may still access this device's i/o resources!
>> [  227.445811] xen_pciback: ****** shutdown driver domain before binding device
>> [  227.447811] xen_pciback: ****** to other drivers or domains
>> [  368.859343] INFO: task xl:3675 blocked for more than 120 seconds.
>> [  368.860447]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
>> [  368.860990] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [  368.861682] xl              D ffff88003fd93f00     0  3675   3489 0x00000000
>> [  368.862319]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
>> [  368.863035]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
>> [  368.863802]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
>> [  368.864514] Call Trace:
>> [  368.864744]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> [  368.865273]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> [  368.865851]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> [  368.866409]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> [  368.866892]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> [  368.867430]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
>> [  368.867996]  [<ffffffff818e7238>] ? down_write+0x9/0x26
>> [  368.868467]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
>> [  368.868991]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
>> [  368.869506]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
>> [  368.870017]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
>> [  368.870593]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
>> [  368.871152]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
>> [  368.871659]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
>> [  368.872173]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
>> [  368.872641]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
>> [  368.873087]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
>> [  488.871331] INFO: task xl:3675 blocked for more than 120 seconds.
>> [  488.913929]       Not tainted 3.13.0-rc7-20140110-creabox-nuc+ #1
>> [  488.937031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>> [  488.960945] xl              D ffff88003fd93f00     0  3675   3489 0x00000004
>> [  488.986090]  ffff880038c0e880 0000000000000282 0000000000000000 ffff880038fd03d0
>> [  489.010383]  0000000000013f00 0000000000013f00 ffff880038c0e880 ffff880036abffd8
>> [  489.034456]  ffffffff81087ac6 ffff88003a0f00f8 ffff88003a0f00fc ffff880038c0e880
>> [  489.058621] Call Trace:
>> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e

> Yeah, my RFC patchset (the one that does the slot/bus reset) should also fix that bug.
> I totally forgot about it !

Got a link to that patchset?
I could at least give it a spin .. you never know when fortune is on your side :-)

> I hope.

>> [  489.200927]  [<ffffffff818e7dc1>] ? _raw_spin_lock_irqsave+0x14/0x36
>> [  489.224076]  [<ffffffff818e7238>] ? down_write+0x9/0x26
>> [  489.246898]  [<ffffffff813f1863>] ? pcistub_put_pci_dev+0x7b/0xe0
>> [  489.270086]  [<ffffffff813f14a7>] ? pcistub_remove+0xd0/0x127
>> [  489.293053]  [<ffffffff8135b5b8>] ? pci_device_remove+0x38/0x83
>> [  489.316068]  [<ffffffff814cb37f>] ? __device_release_driver+0x82/0xdb
>> [  489.338896]  [<ffffffff814cb602>] ? device_release_driver+0x1a/0x25
>> [  489.362459]  [<ffffffff814ca993>] ? unbind_store+0x59/0x89
>> [  489.385396]  [<ffffffff81178aa0>] ? sysfs_write_file+0x13f/0x18f
>> [  489.408605]  [<ffffffff81122aa6>] ? vfs_write+0x95/0xfb
>> [  489.431407]  [<ffffffff81122d8a>] ? SyS_write+0x51/0x85
>> [  489.454251]  [<ffffffff818ed179>] ? system_call_fastpath+0x16/0x1b
>> 
>> 
>> >> 
>> >> --
>> >> Sander
>> >> 
>> >> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:23:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1erV-0001w0-HX; Fri, 10 Jan 2014 16:23:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1erT-0001vh-Ur
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:23:08 +0000
Received: from [85.158.143.35:48612] by server-1.bemta-4.messagelabs.com id
	EE/CF-02132-B6E10D25; Fri, 10 Jan 2014 16:23:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389370986!10953937!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17283 invoked from network); 10 Jan 2014 16:23:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 16:23:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 16:23:06 +0000
Message-Id: <52D02C7702000078001128DF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 16:23:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
	<52D0253802000078001127F3@nat28.tlf.novell.com>
	<20140110160738.GC21360@phenom.dumpdata.com>
In-Reply-To: <20140110160738.GC21360@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 17:07, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Jan 10, 2014 at 03:52:08PM +0000, Jan Beulich wrote:
>> >>> On 10.01.14 at 16:41, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
>> >> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
>> >> > --- a/tools/libxc/xc_hvm_build_x86.c
>> >> > +++ b/tools/libxc/xc_hvm_build_x86.c
>> >> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>> >> >  
>> >> >      /* try to claim pages for early warning of insufficient memory 
> available */
>> >> >      if ( claim_enabled ) {
>> >> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> >> > +        unsigned long nr = nr_pages - cur_pages;
>> >> > +
>> >> > +        if ( pod_mode )
>> >> > +            nr = target_pages - 0x20;
>> >> > +
>> >> 
>> >> Yes it should work because this makes nr smaller than d->tot_pages and
>> >> d->max_pages. But according to the comment you pasted above this looks
>> >> like wrong fix...
>> > 
>> > It should be: 
>> > 
>> > tot_pages = 128MB
>> > max_pages = 256MB
>> > nr = 256MB - 0x20.
>> > 
>> > So tot_pages < max_pages > nr && nr > tot_pages
>> > 
>> > If I got my variables right.
>> > Which means that 'nr' is greater than tot_pages but less than max_pages.
>> 
>> But that seems conceptually wrong: As was said before, the guest
>> should only get 128Mb allocated, hence it would be wrong to claim
>> almost 256Mb for it.
> 
> At the start of the day (that is when the guest starts) it would only have
> 128MB allocated to it. But before then it needs 256MB (at least that is
> based on looking at the code).
> 
> I do agree it seems odd that the cache allocates memory, then
> frees it. But I might also be reading the code wrong.

I'm afraid you do - Xen would refuse to allocate more than the
current memory target to a domain, i.e. in the example here
more than 128Mb. All of the rest of the guest memory must be
fake (zeroed) pages.

Jan
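
The page arithmetic in the example above can be sanity-checked with a short illustrative sketch. This is not taken from the libxc code; it assumes 4 KiB pages and uses the 128MB/256MB figures and the 0x20-page slack quoted in the thread:

```python
# Illustrative check of the claim-size inequality discussed above.
# Assumes 4 KiB pages; the 128 MiB / 256 MiB figures and the
# 0x20-page slack come from the thread, not the actual libxc code.
PAGE_SHIFT = 12  # 4 KiB pages

def mib_to_pages(mib):
    """Convert a size in MiB to a count of 4 KiB pages."""
    return (mib << 20) >> PAGE_SHIFT

tot_pages = mib_to_pages(128)   # current allocation (memory= target)
max_pages = mib_to_pages(256)   # upper bound (maxmem=)
nr = mib_to_pages(256) - 0x20   # claim size per the proposed patch

# Konrad's statement: nr is greater than tot_pages but less than max_pages
assert tot_pages < nr < max_pages
print(tot_pages, nr, max_pages)  # prints: 32768 65504 65536
```

With these numbers the claim would indeed exceed the 128 MiB the PoD guest ends up with allocated, which is the conceptual concern raised above.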


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:25:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1etw-0002I3-6F; Fri, 10 Jan 2014 16:25:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W1etu-0002Ht-Ou
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:25:38 +0000
Received: from [193.109.254.147:20383] by server-8.bemta-14.messagelabs.com id
	C0/FD-30921-20F10D25; Fri, 10 Jan 2014 16:25:38 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389371136!10050198!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19989 invoked from network); 10 Jan 2014 16:25:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89598303"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 16:25:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 11:25:35 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W1etr-00034A-0Z;
	Fri, 10 Jan 2014 16:25:35 +0000
Date: Fri, 10 Jan 2014 16:25:35 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110162533.GG1696@perard.uk.xensource.com>
References: <20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <24225189.20140110170544@eikelenboom.it>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 05:05:44PM +0100, Sander Eikelenboom wrote:
> With this patch and VGA devices it still points to another ROM for me.
> (it looks like it is pointing to the ROM of the device with a BDF just one lower than the passed through one)

I do think VGA devices are a particular case. hvmloader or SeaBIOS might
"replace" the ROM of the card.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:26:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1euu-0002UP-M0; Fri, 10 Jan 2014 16:26:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1eut-0002UF-3S
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:26:39 +0000
Received: from [85.158.139.211:11261] by server-11.bemta-5.messagelabs.com id
	81/6B-23268-E3F10D25; Fri, 10 Jan 2014 16:26:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389371197!9068049!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10538 invoked from network); 10 Jan 2014 16:26:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 16:26:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 16:26:37 +0000
Message-Id: <52D02D4A02000078001128FA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 16:26:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Simon Graham" <simon.graham@citrix.com>
References: <31EF1F85386F3941A65C4C158E12835D195E4042@FTLPEX01CL03.citrite.net>
	<52CED59D020000780011204A@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195E435A@FTLPEX01CL03.citrite.net>
	<52CECC9B.50100@citrix.com>
	<31EF1F85386F3941A65C4C158E12835D195E4CA3@FTLPEX01CL03.citrite.net>
	<52D0241102000078001127C5@nat28.tlf.novell.com>
	<31EF1F85386F3941A65C4C158E12835D195EA773@FTLPEX01CL03.citrite.net>
In-Reply-To: <31EF1F85386F3941A65C4C158E12835D195EA773@FTLPEX01CL03.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Possible issue with x86_emulate when writing
 results back to memory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 16:57, Simon Graham <simon.graham@citrix.com> wrote:
>> >>> On 09.01.14 at 18:33, Simon Graham <simon.graham@citrix.com> wrote:
>> >> __hvm_copy() is probably too low to be thinking about this.  There are
>> >> many things such as grant_copy() which do not want "hardware like" copy
>> >> properties, preferring instead to have less overhead.
>> >>
>> >
>> > Yeah... I'll rework the patch to do this...
>> 
>> Looking a little more closely, hvm_copy_{to,from}_guest_virt()
>> are what you want to have the adjusted behavior. That way
>> you'd in particular not touch the behavior of the more generic
>> copying routines copy_{to,from}_user_hvm(). And adjusting
>> the behavior would seem to be doable cleanly by adding e.g.
>> HVMCOPY_atomic as a new flag, thus informing __hvm_copy()
>> to not use memcpy().
>> 
> 
> Thanks -- I was coming to the same conclusion slowly too! Although I think I 
> might call it HVMCOPY_emulate rather than atomic since it's not the case that 
> the entire copy is atomic...

I'd read "atomic" here as "as atomic as possible". "emulate" is bad
imo because the function may be used for other purposes too.

> Now I just have to figure out why the shadow code doesn't use the hvm 
> copy_to_ routine...

Perhaps because it doesn't work on virtual addresses (page tables
always hold physical ones)? Maybe it could use
hvm_copy_{to,from}_guest_phys(), but I would assume those
routines didn't exist yet when the shadow code was written. Tim
may know further details...

Jan
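The flag being proposed above — telling the copy routine not to use memcpy(), and instead move each naturally aligned chunk with a single access — can be sketched in plain C. The names COPY_ATOMIC and copy_flags below are illustrative only, not Xen's actual __hvm_copy()/HVMCOPY_atomic interface:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative flag: "make each chunk as atomic as possible". */
#define COPY_ATOMIC 0x1

/*
 * Copy len bytes from src to dst. Without COPY_ATOMIC, defer to
 * memcpy(). With it, move the largest naturally aligned 1/2/4/8-byte
 * chunks with single loads/stores, so an emulated guest write is seen
 * whole (per chunk) by other vCPUs, which memcpy() does not guarantee.
 */
static void copy_flags(void *dst, const void *src, size_t len, int flags)
{
    uint8_t *d = dst;
    const uint8_t *s = src;

    if (!(flags & COPY_ATOMIC)) {
        memcpy(dst, src, len);
        return;
    }

    while (len) {
        /* Combined alignment of both pointers picks the chunk size. */
        uintptr_t align = (uintptr_t)d | (uintptr_t)s;

        if (len >= 8 && !(align & 7)) {
            *(volatile uint64_t *)d = *(const volatile uint64_t *)s;
            d += 8; s += 8; len -= 8;
        } else if (len >= 4 && !(align & 3)) {
            *(volatile uint32_t *)d = *(const volatile uint32_t *)s;
            d += 4; s += 4; len -= 4;
        } else if (len >= 2 && !(align & 1)) {
            *(volatile uint16_t *)d = *(const volatile uint16_t *)s;
            d += 2; s += 2; len -= 2;
        } else {
            *(volatile uint8_t *)d = *s;
            d++; s++; len--;
        }
    }
}
```

Note the copy as a whole is still not atomic — only each aligned chunk is — which matches Jan's reading of "atomic" as "as atomic as possible".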


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:28:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ewt-0002lx-9l; Fri, 10 Jan 2014 16:28:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1ewr-0002kx-UG
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:28:42 +0000
Received: from [85.158.137.68:17242] by server-9.bemta-3.messagelabs.com id
	33/4F-13104-9BF10D25; Fri, 10 Jan 2014 16:28:41 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389371320!7280005!1
X-Originating-IP: [81.169.146.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15538 invoked from network); 10 Jan 2014 16:28:40 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.223)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:28:40 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389371320; l=2488;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=foNUU59Ro1bbyaDsK3RqI0ReeG4=;
	b=HQFPwx33jeHOlVwc50JjIL0NzzsSnEjfpLAwQVcsBCkIVMxDxgyPGbgveSvod8UgI5Y
	34xMlmKc0QY5+rUUSLtdJOfZEZu0X6vcBj90JMRR7Nk67yFfqo9MStS3UyMIUi2+d2sQ6
	x94Pi9y3gmQFAIHNjtjpFIczU1r4AruBcAY=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id D03270q0AGSXGhc ; 
	Fri, 10 Jan 2014 17:28:33 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 4081C5024C; Fri, 10 Jan 2014 17:28:30 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Fri, 10 Jan 2014 17:28:21 +0100
Message-Id: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In its initial implementation a check for "type" was added, but only the
phy and file types are handled. This breaks advertised discard support
for other type values such as qdisk.

Fix and simplify this function: If the backend advertises discard
support it is supposed to implement it properly, so enable
feature_discard unconditionally. If the backend advertises the need for
a certain granularity and alignment then propagate both properties to
the block layer. The discard-secure property is a boolean; update the
code to reflect that.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 drivers/block/xen-blkfront.c | 40 ++++++++++++++--------------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..c9e96b9 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1635,36 +1635,24 @@ blkfront_closing(struct blkfront_info *info)
 static void blkfront_setup_discard(struct blkfront_info *info)
 {
 	int err;
-	char *type;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
 	unsigned int discard_secure;
 
-	type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
-	if (IS_ERR(type))
-		return;
-
-	info->feature_secdiscard = 0;
-	if (strncmp(type, "phy", 3) == 0) {
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			"discard-granularity", "%u", &discard_granularity,
-			"discard-alignment", "%u", &discard_alignment,
-			NULL);
-		if (!err) {
-			info->feature_discard = 1;
-			info->discard_granularity = discard_granularity;
-			info->discard_alignment = discard_alignment;
-		}
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			    "discard-secure", "%d", &discard_secure,
-			    NULL);
-		if (!err)
-			info->feature_secdiscard = discard_secure;
-
-	} else if (strncmp(type, "file", 4) == 0)
-		info->feature_discard = 1;
-
-	kfree(type);
+	info->feature_discard = 1;
+	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+		"discard-granularity", "%u", &discard_granularity,
+		"discard-alignment", "%u", &discard_alignment,
+		NULL);
+	if (!err) {
+		info->discard_granularity = discard_granularity;
+		info->discard_alignment = discard_alignment;
+	}
+	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+		    "discard-secure", "%d", &discard_secure,
+		    NULL);
+	if (!err)
+		info->feature_secdiscard = !!discard_secure;
 }
 
 static int blkfront_setup_indirect(struct blkfront_info *info)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:31:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ezs-0003AH-W1; Fri, 10 Jan 2014 16:31:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W1ezq-0003A0-KX
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:31:46 +0000
Received: from [85.158.143.35:53699] by server-2.bemta-4.messagelabs.com id
	EC/2B-11386-27020D25; Fri, 10 Jan 2014 16:31:46 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389371505!9710765!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25301 invoked from network); 10 Jan 2014 16:31:45 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:31:45 -0000
Received: by mail-wg0-f53.google.com with SMTP id k14so4243354wgh.8
	for <xen-devel@lists.xenproject.org>;
	Fri, 10 Jan 2014 08:31:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=+5Rt9FXijWY+n0yY3r8az3qlGwc8tjrorvLS+R6kl9g=;
	b=pADaP4QzuwIlaSifeGMh0jeYkXYmXtsEHxRFAk99wgPLsYl5UjDabLOpAKR8M4fgJC
	BFjphK6EtntT3sMoKufjuKcAif+Wy419X1HwxFsBXKyChsRjwCLduushO9PCmKRJI8ZO
	3Rpw522k9Rsq2Dm/N02w3UD62MnoXyRPk3T9h6uE4SBljzL+mSi/CmuyTNOSu+eXJtjL
	GBqGHWyQsY0wEVkgNh7HmYTQh2p2FX/F+JOVCB+Qeg6xXk6DCovL7pfiJiFAHIJTweCM
	Y/kkde/TT3CdAM6BZZPbt5wVWB+Nul17WQvPiTaWnJKYmKfQeeRomLkyCJMNvRdyCWpE
	xAjg==
X-Received: by 10.180.106.229 with SMTP id gx5mr3469987wib.55.1389371505117;
	Fri, 10 Jan 2014 08:31:45 -0800 (PST)
Received: from [192.168.1.106]
	(host86-139-172-165.range86-139.btcentralplus.com. [86.139.172.165])
	by mx.google.com with ESMTPSA id ay6sm3216998wjb.23.2014.01.10.08.31.41
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 10 Jan 2014 08:31:43 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Fri, 10 Jan 2014 16:31:37 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <CEF5D0E9.6D61B%keir.xen@gmail.com>
Thread-Topic: [PATCH 4.2] ix86: fix linear page table construction in
	alloc_l2_table()
Thread-Index: Ac8OIW6kw98lDIYUlUegLMgJffTOYQ==
In-Reply-To: <52D01D640200007800112739@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>, Tim Deegan <tim@xen.org>
Subject: Re: [Xen-devel] [PATCH 4.2] ix86: fix linear page table
 construction in alloc_l2_table()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/2014 15:18, "Jan Beulich" <JBeulich@suse.com> wrote:

> Slot 0 got updated when slot 3 was meant. The mistake was hidden by
> create_pae_xen_mappings() correcting things immediately afterwards
> (i.e. before the new entries could get used the first time).
> 
> Reported-by: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1488,7 +1488,7 @@ static int alloc_l2_table(struct page_in
>              l2e_write(&pl2e[l2_table_offset(PERDOMAIN_VIRT_START) + i],
>                        l2e_from_page(perdomain_pt_page(d, i),
>                                      __PAGE_HYPERVISOR));
> -        pl2e[l2_table_offset(LINEAR_PT_VIRT_START)] =
> +        pl2e[l2_table_offset(LINEAR_PT_VIRT_START) + 3] =
>              l2e_from_pfn(pfn, __PAGE_HYPERVISOR);
>  #else
>          memcpy(&pl2e[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:33:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1f1R-0003PB-RY; Fri, 10 Jan 2014 16:33:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1f1Q-0003OP-Ds
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 16:33:24 +0000
Received: from [85.158.137.68:8978] by server-16.bemta-3.messagelabs.com id
	82/9E-26128-EC020D25; Fri, 10 Jan 2014 16:33:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389371589!8451884!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5924 invoked from network); 10 Jan 2014 16:33:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 16:33:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91737675"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 16:33:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 11:33:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1f1A-0002oi-0P;
	Fri, 10 Jan 2014 16:33:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1f19-00084s-QS;
	Fri, 10 Jan 2014 16:33:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24334-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 16:33:07 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24334: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24334 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24334/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 24320
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24320 pass in 24334

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24313

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24320 never pass

version targeted for testing:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9
baseline version:
 xen                  025c1b755afc9a9f42f71ef167c20fdc616b1d2d

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Stefan Bader <stefan.bader@canonical.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=2d03be65d5c50053fec4a5fa1d691972e5d953c9
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 2d03be65d5c50053fec4a5fa1d691972e5d953c9
+ branch=xen-unstable
+ revision=2d03be65d5c50053fec4a5fa1d691972e5d953c9
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 2d03be65d5c50053fec4a5fa1d691972e5d953c9:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   025c1b7..2d03be6  2d03be65d5c50053fec4a5fa1d691972e5d953c9 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:36:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1f3u-0003pe-Uk; Fri, 10 Jan 2014 16:35:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1f3t-0003pU-DD
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:35:57 +0000
Received: from [85.158.137.68:13979] by server-14.bemta-3.messagelabs.com id
	F1/DF-06105-C6120D25; Fri, 10 Jan 2014 16:35:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389371754!7281661!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26522 invoked from network); 10 Jan 2014 16:35:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:35:55 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AGZk0s025672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:35:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGZkqI009743
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:35:46 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGZjAq009737; Fri, 10 Jan 2014 16:35:45 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:35:45 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B837E1C18DC; Fri, 10 Jan 2014 11:35:44 -0500 (EST)
Date: Fri, 10 Jan 2014 11:35:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110163544.GA21692@phenom.dumpdata.com>
References: <20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <24225189.20140110170544@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> >> 
> >> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> >> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> >> ..
> >> and I did see the PXE boot menu in my guest - so even
> >> better!
> 
> > Perfect, look like it is the fix for PCI passthrough.
> 
> Hi Konrad,
> 
> Are you sure it's the rom of the NIC, and not the iPXE rom from the emulated device that
> gets run ?

Yes. I double-checked that the MAC address that was given a DHCP
address was indeed for the physical hardware. And it was.

> 
> With this patch and VGA devices it still points to another rom for me.
> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)

That all sounds to me like a bug in QEMU, which constructs
the 'world'. 'hvmloader' and the kernel then ingest this to
create their view of what the PCI configuration/slots/etc. should look like.
They use 'inb' and 'outb' instructions on the 0xcf8/0xcfc ports.

Perhaps the 'vga=none' case is having a hard time dealing with
'no-VGA-but-wait-there-is-and-PT-VGA'?

Just to make sure I am not forgetting a crucial fact - if you don't
have vga=none, does the lspci output look sane?

> 
> Do you by any chance know if there is a difference in how lspci and the linux kernel scan / list pci devices ?
> (for example one by reading acpi tables .. the other one by real probing .. ?)

The kernel and hvmloader both use 'inb' and 'outb' to figure out
what the PCI space looks like.

'lspci' uses /sys. Though if you use '-xxxx', I think it also does
'inb' and 'outb'.

So it all seems to point to QEMU. 
> 
> --
> Sander
> 
> 
> >> I have not yet done the GPU - this issue was preventing me from using
> >> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> >> 
> >> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>' please do.
> 
> > Will do.
> 
> >> Thank you!
> >> > 
> >> > 
> >> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >> > index 6dd7a68..2bbdb6d 100644
> >> > --- a/hw/xen/xen_pt.c
> >> > +++ b/hw/xen/xen_pt.c
> >> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >> >  
> >> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >> >  
> >> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >> > -                                      "xen-pci-pt-rom", d->rom.size);
> >> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >> > +                              "xen-pci-pt-rom", d->rom.size);
> >> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >> >                           &s->rom);
> >> >  
> >> > 
> >> > -- 
> >> > Anthony PERARD
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Date: Fri, 10 Jan 2014 11:35:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110163544.GA21692@phenom.dumpdata.com>
References: <20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <24225189.20140110170544@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> >> 
> >> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> >> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> >> ..
> >> and I did see the PXE boot menu in my guest - so even
> >> better!
> 
> > Perfect, look like it is the fix for PCI passthrough.
> 
> Hi Konrad,
> 
> Are you sure it's the rom of the NIC, and not the iPXE rom from the emulated device that
> gets run ?

Yes. I double checked that the MAC address that was given a DHCP
address was indeed for the physical hardware. And it was.

> 
> With this patch and VGA devices it still points to another rom for me.
> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)

That all sounds to me like a bug in QEMU, which constructs
the 'world'. 'hvmloader' and the kernel then ingest this to
create their view of what the PCI configuration/slots/etc. should look like.
They use 'inb' and 'outb' instructions on the 0xcf8/0xcfc ports.

Perhaps 'vga=none' is having a hard time dealing with
'no-VGA-but-wait-there-is-a-PT-VGA'?

Just to make sure I am not forgetting a crucial fact - if you don't
have vga=none, does the lspci output look sane?

> 
> Do you by any chance know if there is a difference in how lspci and the linux kernel scan / list pci devices ?
> (for example one by reading acpi tables .. the other one by real probing .. ?)

The kernel and hvmloader both use 'inb' and 'outb' to figure out
what the PCI space looks like.

'lspci' uses sysfs. Though if you use '-xxxx' I think it also does
'inb' and 'outb'. 

So it all seems to point to QEMU. 
> 
> --
> Sander
> 
> 
> >> I have not yet done the GPU - this issue was preventing me from using
> >> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> >> 
> >> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com>' please do.
> 
> > Will do.
> 
> >> Thank you!
> >> > 
> >> > 
> >> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >> > index 6dd7a68..2bbdb6d 100644
> >> > --- a/hw/xen/xen_pt.c
> >> > +++ b/hw/xen/xen_pt.c
> >> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >> >  
> >> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >> >  
> >> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >> > -                                      "xen-pci-pt-rom", d->rom.size);
> >> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >> > +                              "xen-pci-pt-rom", d->rom.size);
> >> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >> >                           &s->rom);
> >> >  
> >> > 
> >> > -- 
> >> > Anthony PERARD
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:38:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1f60-000458-I1; Fri, 10 Jan 2014 16:38:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1f5z-000450-Dd
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 16:38:07 +0000
Received: from [85.158.137.68:31551] by server-9.bemta-3.messagelabs.com id
	82/6E-13104-EE120D25; Fri, 10 Jan 2014 16:38:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389371886!8492394!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13230 invoked from network); 10 Jan 2014 16:38:06 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 16:38:06 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:54204 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1euz-0007dC-8E; Fri, 10 Jan 2014 17:26:45 +0100
Date: Fri, 10 Jan 2014 17:38:01 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1018472239.20140110173801@eikelenboom.it>
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140110162533.GG1696@perard.uk.xensource.com>
References: <20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
	<20140110162533.GG1696@perard.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 5:25:35 PM, you wrote:

> On Fri, Jan 10, 2014 at 05:05:44PM +0100, Sander Eikelenboom wrote:
>> With this patch and VGA devices it still points to another rom for me.
>> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)

> I do think VGA devices are a particular case. hvmloader or SeaBIOS might
> "replace" the ROM of the card.

Is there any way (or patch) to find out?

It would be quite silly if it replaced the VGA ROM with a NIC ROM.

If that only happens for VGA cards, it would need to check the device class;
it would seem very strange to replace it with another ROM without checking the class there.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:40:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1f8g-00051N-BP; Fri, 10 Jan 2014 16:40:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1f8e-00050p-7U
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:40:52 +0000
Received: from [85.158.143.35:7107] by server-3.bemta-4.messagelabs.com id
	EA/9B-32360-39220D25; Fri, 10 Jan 2014 16:40:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389372049!3868497!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17110 invoked from network); 10 Jan 2014 16:40:50 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:40:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AGdkrV030786
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:39:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGdkCW015737
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:39:46 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGdkJn019485; Fri, 10 Jan 2014 16:39:46 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:39:45 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CF7BF1C18DC; Fri, 10 Jan 2014 11:39:44 -0500 (EST)
Date: Fri, 10 Jan 2014 11:39:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140110163944.GB21692@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
	<1389369373.6423.21.camel@kazak.uk.xensource.com>
	<20140110160536.GB21360@phenom.dumpdata.com>
	<1389370275.6423.26.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389370275.6423.26.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:11:15PM +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 11:05 -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 03:56:13PM +0000, Ian Campbell wrote:
> > > On Fri, 2014-01-10 at 10:28 -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> > > > > On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > > > > > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > > > > > And then it is the responsibility of the balloon driver to give the memory
> > > > > > back (and this is where the 'static-max' et al come in play to tell the
> > > > > > balloon driver to balloon out).
> > > > > 
> > > > > PoD exists purely so that we don't need the 'maxmem' amount of memory at
> > > > > boot time. It is basically there in order to let the guest get booted
> > > > > far enough to load the balloon driver to give the memory back.
> > > > > 
> > > > > It's basically a boot time zero-page sharing mechanism AIUI.
> > > > 
> > > > But it does look to gulp up hypervisor memory and return it during
> > > > allocation of memory for the guest.
> > > 
> > > It should be less than the maxmem-memory amount though. Perhaps because
> > > Wei is using relatively small sizes the pod cache ends up being a
> > > similar size to the saving?
> > > 
> > > Or maybe I just don't understand PoD, since the code you quote does seem
> > > to contradict that.
> > > 
> > > Or maybe libxl's calculation of pod_target is wrong?
> > > 
> > > > From reading the code the patch seems correct - we will _need_ that
> > > > extra 128MB 'claim' to allocate/free the 128MB extra pages. They
> > > > are temporary as we do free them.
> > > 
> > > It does make sense that the PoD cache should be included in the claim,
> > > I just don't get why the cache is so big...
> > 
> > I think it expands and shrinks to make sure that the memory is present
> > in the hypervisor. If there is not enough memory it would -ENOMEM and
> > the toolstack would know immediately.
> > 
> > But that seems silly - as that memory might in the future be used
> > by other guests and then you won't be able to use said cache. But since
> > it is a "cache" I guess that is OK.
> 
> Wait, isn't the "cache" here just the target memory size?

The delta of it: maxmem - memory.
> 
> PoD uses up to that size to populate guest pages, and will try to
> reclaim zeroed pages from the guest so that it never grows bigger than
> the target size.

The way I read the code it has a split personality:
 - up to 'memory' is allocated by the toolstack.
 - the delta of 'maxmem - memory' is allocated in the hypervisor and
   then freed.

> 
> I think the confusion here is that Wei had target=128M and maxmem=256M
> so the difference was 128M which served as a nice red-herring...

This is confusing indeed.
> 
> Ian
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 16:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fIN-0005tG-Os; Fri, 10 Jan 2014 16:50:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1fIM-0005t8-MP
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:50:54 +0000
Received: from [193.109.254.147:9206] by server-10.bemta-14.messagelabs.com id
	CA/35-20752-EE420D25; Fri, 10 Jan 2014 16:50:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389372652!7861314!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Fri Jan 10 16:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 16:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fIN-0005tG-Os; Fri, 10 Jan 2014 16:50:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1fIM-0005t8-MP
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 16:50:54 +0000
Received: from [193.109.254.147:9206] by server-10.bemta-14.messagelabs.com id
	CA/35-20752-EE420D25; Fri, 10 Jan 2014 16:50:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389372652!7861314!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32445 invoked from network); 10 Jan 2014 16:50:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 16:50:53 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AGolvj013799
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 16:50:48 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AGojJ8001309
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 16:50:46 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AGoi41013927; Fri, 10 Jan 2014 16:50:44 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 08:50:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6902F1C18DC; Fri, 10 Jan 2014 11:50:43 -0500 (EST)
Date: Fri, 10 Jan 2014 11:50:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140110165043.GC21692@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140110151048.GC30581@zion.uk.xensource.com>
	<20140110154105.GC20640@phenom.dumpdata.com>
	<20140110160351.GF30581@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110160351.GF30581@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 04:03:51PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 10:41:05AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> > > On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > > create ^
> > > > > owner Wei Liu <wei.liu2@citrix.com>
> > > > > thanks
> > > > > 
> > > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > > When I have following configuration in HVM config file:
> > > > > >   memory=128
> > > > > >   maxmem=256
> > > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > > 
> > > > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > > > 
> > > > > > With claim_mode=0, I can successfully create HVM guest.
> > > > > 
> > > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > > 
> > > > No. 128MB actually.
> > > > 
> > > 
> > > Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> > > 8MB video ram). Did I misread your message...
> > 
> > The 'claim' being the hypercall to set the 'clamp' on how much memory
> > the guest can allocate. This is based on:
> > 
> > 242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;
> 
> This is in fact initialized to 'maxmem' in the guest's config file and
> 
> 243     unsigned long target_pages = args->mem_target >> PAGE_SHIFT;
> 
> This is in fact 'memory' in the guest's config file.
> 
> So when you try to claim "maxmem" and the current limit is "memory" it
> would not work.
> 
> So the guest should only claim target_pages sans 0x20 pages if PoD is enabled.
> Oh this is what your initial patch did. I don't know whether this is
> conceptually correct though. :-P

Heh.
> 
> Furthermore, should the guest only be allowed to claim target_pages, regardless

No.
> of whether PoD is enabled? When only "memory" is specified, "maxmem"

That is indeed happening at some point. When you modify the 'target_pages'
(so 'xl mem-set' or 'xl mem-max') you will move the ceiling and allow
the guest (via ballooning) to increase or decrease tot_pages.

You don't need the 'claim' at that point as the hypervisor is the
one that deals with many concurrent guests competing for memory.
And it has the proper locking mechanics to tell guests to buzz
off if there is not enough memory.

But keep in mind that the 'claim' (or outstanding pages) is more
of a reservation. Or a lock. Or a stick in the ground.
It says: "To allocate this guest I need X pages" - and if you cannot
guarantee that amount then -ENOMEM right away. Which it did.

And said 'X' pages is incorrect for PoD guests. The patch I posted
sets the ceiling to the 'maxmem'.

Please also note that the claim hypercall, or reservation, is cancelled
right after the guest's memory has been allocated:

530     /* ensure no unclaimed pages are left unused */                             
531     xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);            

It is a very short lived 'lock' on the memory - all contained
within 'setup_guest' for HVM and 'arch_setup_meminit' for PV.


> equals to "memory". So conceptually what we really care about is
> "memory" not "maxmem".

Uh, at the start of the life of the guest - sure. During its
build-up - well, we seem to have a spike in memory usage as
PoD allocates and frees memory.

The time-flow seems to be:

 memory ... maxmem ... memory.. [start of guest]


That actually seems a bit silly - we could just as well check
how much free memory the hypervisor has and return 'ENOMEM'
if it does not have enough. But I am very likely mis-reading the
early setup of the PoD code or misunderstanding the implications
of PoD allocating its cache and freeing it.

> 
> Wei.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:00:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fRn-0006kI-CV; Fri, 10 Jan 2014 17:00:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <catalin.marinas@arm.com>) id 1W1fRl-0006kD-VU
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 17:00:38 +0000
Received: from [85.158.137.68:11585] by server-9.bemta-3.messagelabs.com id
	DC/E0-13104-53720D25; Fri, 10 Jan 2014 17:00:37 +0000
X-Env-Sender: catalin.marinas@arm.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389373236!8457694!1
X-Originating-IP: [217.140.110.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21751 invoked from network); 10 Jan 2014 17:00:36 -0000
Received: from fw-tnat.austin.arm.com (HELO collaborate-mta1.arm.com)
	(217.140.110.23) by server-3.tower-31.messagelabs.com with SMTP;
	10 Jan 2014 17:00:36 -0000
Received: from arm.com (e102109-lin.cambridge.arm.com [10.1.203.24])
	by collaborate-mta1.arm.com (Postfix) with ESMTPS id 4977113F69F;
	Fri, 10 Jan 2014 11:00:33 -0600 (CST)
Date: Fri, 10 Jan 2014 17:00:06 +0000
From: Catalin Marinas <catalin.marinas@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140110170005.GH925@arm.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"arnd@arndb.de" <arnd@arndb.de>, Marc Zyngier <Marc.Zyngier@arm.com>,
	"nico@linaro.org" <nico@linaro.org>, Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>,
	"olof@lixom.net" <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v9 4/5] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 06:32:15PM +0000, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
> Necessary duplication of paravirt.h and paravirt.c with ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net
> CC: Catalin.Marinas@arm.com

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:01:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fSs-0006vq-JT; Fri, 10 Jan 2014 17:01:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W1fSq-0006vc-HR
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:01:44 +0000
Received: from [85.158.143.35:2399] by server-1.bemta-4.messagelabs.com id
	48/F9-02132-77720D25; Fri, 10 Jan 2014 17:01:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389373303!11019966!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28595 invoked from network); 10 Jan 2014 17:01:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 17:01:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 10 Jan 2014 17:01:42 +0000
Message-Id: <52D03580020000780011298A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 10 Jan 2014 17:01:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 01/16] common/symbols: Export hypervisor
 symbols to PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:

The subject seems odd - rather than PV I would think it should say
privileged.

> --- a/xen/arch/x86/x86_64/platform_hypercall.c
> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>  CHECK_pf_enter_acpi_sleep;
>  #undef xen_pf_enter_acpi_sleep
>  
> +#define xenpf_symdata   compat_pf_symdata
> +

This should be done like other cases (see context above): Add a
CHECK_ invocation, which in turn requires an entry in include/xlat.lst.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:02:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fTN-00070r-20; Fri, 10 Jan 2014 17:02:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W1fTL-00070X-L4
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:02:15 +0000
Received: from [85.158.143.35:42705] by server-2.bemta-4.messagelabs.com id
	65/29-11386-79720D25; Fri, 10 Jan 2014 17:02:15 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389373334!10937263!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12552 invoked from network); 10 Jan 2014 17:02:14 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 17:02:14 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W1fTE-000M21-RB; Fri, 10 Jan 2014 17:02:08 +0000
Date: Fri, 10 Jan 2014 18:02:08 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140110170208.GA48471@deinos.phlegethon.org>
References: <52D01D640200007800112739@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D01D640200007800112739@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 4.2] ix86: fix linear page table
 construction in alloc_l2_table()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:18 +0000 on 10 Jan (1389363524), Jan Beulich wrote:
> Slot 0 got updated when slot 3 was meant. The mistake was hidden by
> create_pae_xen_mappings() correcting things immediately afterwards
> (i.e. before the new entries could get used the first time).
> 
> Reported-by: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1488,7 +1488,7 @@ static int alloc_l2_table(struct page_in
>              l2e_write(&pl2e[l2_table_offset(PERDOMAIN_VIRT_START) + i],
>                        l2e_from_page(perdomain_pt_page(d, i),
>                                      __PAGE_HYPERVISOR));
> -        pl2e[l2_table_offset(LINEAR_PT_VIRT_START)] =
> +        pl2e[l2_table_offset(LINEAR_PT_VIRT_START) + 3] =
>              l2e_from_pfn(pfn, __PAGE_HYPERVISOR);
>  #else
>          memcpy(&pl2e[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],
> 
> 
> 

> ix86: fix linear page table construction in alloc_l2_table()
> 
> Slot 0 got updated when slot 3 was meant. The mistake was hidden by
> create_pae_xen_mappings() correcting things immediately afterwards
> (i.e. before the new entries could get used the first time).
> 
> Reported-by: CHENG Yueqiang <yqcheng.2008@phdis.smu.edu.sg>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1488,7 +1488,7 @@ static int alloc_l2_table(struct page_in
>              l2e_write(&pl2e[l2_table_offset(PERDOMAIN_VIRT_START) + i],
>                        l2e_from_page(perdomain_pt_page(d, i),
>                                      __PAGE_HYPERVISOR));
> -        pl2e[l2_table_offset(LINEAR_PT_VIRT_START)] =
> +        pl2e[l2_table_offset(LINEAR_PT_VIRT_START) + 3] =
>              l2e_from_pfn(pfn, __PAGE_HYPERVISOR);
>  #else
>          memcpy(&pl2e[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:08:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:08:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fZO-0007fB-UN; Fri, 10 Jan 2014 17:08:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1fZO-0007eC-CK
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 17:08:30 +0000
Received: from [193.109.254.147:30752] by server-13.bemta-14.messagelabs.com
	id 0F/AF-19374-D0920D25; Fri, 10 Jan 2014 17:08:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389373707!10102385!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19710 invoked from network); 10 Jan 2014 17:08:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:08:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91751022"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 17:08:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 12:08:26 -0500
Message-ID: <1389373705.6423.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 10 Jan 2014 17:08:25 +0000
In-Reply-To: <52CD7252.4050903@linaro.org>
References: <1389190141-29262-1-git-send-email-ian.campbell@citrix.com>
	<52CD7252.4050903@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: arm: force guest memory accesses to
 cacheable when MMU is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-08 at 15:44 +0000, Julien Grall wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>

Thanks. No one contradicted my RM analysis, so I have applied it.

I also applied "Revert "tools: libxc: flush data cache after loading
images into guest memory"", which I posted in
<1389110780.12612.48.camel@kazak.uk.xensource.com>, on the basis that
although it is harmless to keep that functionality, it might make real
problems harder to spot. It also happens to have been in use when I was
testing this patch, so it seems consistent to take it. I updated the
commit message to refer to this patch, as I suggested.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:09:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1fa4-0007vJ-JT; Fri, 10 Jan 2014 17:09:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1fa2-0007uh-J0
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:09:10 +0000
Received: from [85.158.137.68:40862] by server-1.bemta-3.messagelabs.com id
	30/A0-29598-53920D25; Fri, 10 Jan 2014 17:09:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389373747!8459504!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6069 invoked from network); 10 Jan 2014 17:09:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:09:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91751244"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 17:09:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 12:09:06 -0500
Message-ID: <1389373745.6423.39.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 10 Jan 2014 17:09:05 +0000
In-Reply-To: <52CFFA1C.7000500@linaro.org>
References: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
	<1389347521.19142.9.camel@kazak.uk.xensource.com>
	<52CFFA1C.7000500@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 13:48 +0000, Julien Grall wrote:
> 
> On 01/10/2014 09:52 AM, Ian Campbell wrote:
> > On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
> >> Heap-page scrubbing was disabled because it was slow on the models. Now that
> >> Xen supports real hardware, it's possible to enable scrubbing by default.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> >
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Thanks.

Applied.

> >> ---
> >>      This patch should go to Xen 4.4. It avoids giving a non-cleared page to
> >>      a domain.
> >>
> >>      The downside is it's now slow on models.
> >
> > There is a no-bootscrub command-line option which can be used in that
> > case. Could you update the relevant model wiki pages to mention it
> > please?
> 
> I have updated the wiki page.

Thanks.
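
The no-bootscrub option mentioned above goes on the Xen hypervisor command
line, not the dom0 kernel line. A sketch of a GRUB menu entry carrying it; the
file names and root device below are illustrative assumptions, not taken from
this thread:

```
menuentry "Xen (boot-time scrubbing disabled)" {
    multiboot /boot/xen.gz no-bootscrub
    module    /boot/vmlinuz root=/dev/sda1 ro
    module    /boot/initrd.img
}
```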



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:13:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1feA-0000I6-68; Fri, 10 Jan 2014 17:13:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1fe8-0000Hu-U0
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 17:13:25 +0000
Received: from [85.158.139.211:47111] by server-10.bemta-5.messagelabs.com id
	2F/79-01405-43A20D25; Fri, 10 Jan 2014 17:13:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389374001!8873539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3574 invoked from network); 10 Jan 2014 17:13:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:13:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89616370"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 17:13:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 12:13:20 -0500
Message-ID: <1389373999.6423.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Fri, 10 Jan 2014 17:13:19 +0000
In-Reply-To: <52CEE2E2.2030501@terremark.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
	<52CEE2E2.2030501@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 12:56 -0500, Don Slutz wrote:
> On 01/09/14 11:30, Jan Beulich wrote:
> >>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
> >> Based on Mukesh's statement, attached is the rebased version of this patch
> >> (labeled v3).  I included Mukesh's ack.
> > Unless this is meant just for reviewing purposes (albeit even then
> > it's likely problematic), could you please get used to sending
> > patch revisions with updated mail subjects (i.e. not retaining the
> > prior version indicator), so there is a reasonable chance to reconstruct
> > things by searching just the titles in a mail archive. (It's still fine -
> > at least as far as I'm concerned - to reply to an earlier version,
> > thus tying things into a single thread on the archive.)
> >
> > Jan
> >
> 
> I will try to.  I had not noticed this in the past.

Thanks, as Jan says it is very confusing.

If there are tools changes outstanding in this series which should be in
4.4, then I don't know what is where or what has been acked.

Please can you resend whatever you think is outstanding for 4.4 as a fresh
thread, with a suitable vN larger than any of the versions mentioned in any
of the replies here, and with the acks collected.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:13:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1feA-0000I6-68; Fri, 10 Jan 2014 17:13:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1fe8-0000Hu-U0
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 17:13:25 +0000
Received: from [85.158.139.211:47111] by server-10.bemta-5.messagelabs.com id
	2F/79-01405-43A20D25; Fri, 10 Jan 2014 17:13:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389374001!8873539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3574 invoked from network); 10 Jan 2014 17:13:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:13:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="89616370"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 17:13:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 12:13:20 -0500
Message-ID: <1389373999.6423.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Fri, 10 Jan 2014 17:13:19 +0000
In-Reply-To: <52CEE2E2.2030501@terremark.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
	<52CEE2E2.2030501@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 12:56 -0500, Don Slutz wrote:
> On 01/09/14 11:30, Jan Beulich wrote:
> >>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
> >> Based on Mukesh's statement, attached is the rebased version of this patch
> >> (labeled v3).  I included Mukesh's ack.
> > Unless this is meant just for reviewing purposes (albeit even then
> > it's likely problematic), could you please get used to sending
> > patch revisions with mail subjects (i.e. not retaining the prior
> > version indicator), so there is a reasonable chance to reconstruct
> > things by searching just the titles in a mail archive. (It's still fine -
> > at least as far as I'm concerned - to reply to an earlier version,
> > thus tying things into a single thread on the archive.)
> >
> > Jan
> >
> 
> I will try to.  I had not noticed this in the past.

Thanks, as Jan says it is very confusing.

If there are tools things outstanding in this series which should be for
4.4 then I don't know what is where or what has been acked.

Please can you resend whatever you think is outstanding for 4.4 as a fresh
thread, with a suitable vN larger than any of the ones mentioned in any
of the replies here, and with the acks collected.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1g2N-0002E6-Jv; Fri, 10 Jan 2014 17:38:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1g2L-0002C6-EJ
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:38:25 +0000
Received: from [193.109.254.147:28923] by server-16.bemta-14.messagelabs.com
	id D8/6C-20600-01030D25; Fri, 10 Jan 2014 17:38:24 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389375503!10156227!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11989 invoked from network); 10 Jan 2014 17:38:24 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 17:38:24 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:54927 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1frK-0003y0-7C; Fri, 10 Jan 2014 18:27:02 +0100
Date: Fri, 10 Jan 2014 18:38:18 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1945447520.20140110183818@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110163544.GA21692@phenom.dumpdata.com>
References: <20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
	<20140110163544.GA21692@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 5:35:44 PM, you wrote:

>> >> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
>> >> 
>> >> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
>> >> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
>> >> ..
>> >> and I did see the PXE boot menu in my guest - so even
>> >> better!
>> 
>> > Perfect, looks like it is the fix for PCI passthrough.
>> 
>> Hi Konrad,
>> 
>> Are you sure it's the rom of the NIC, and not the iPXE rom from the emulated device that
>> gets run ?

> Yes. I double checked that the MAC address that was given a DHCP
> address was indeed for the physical hardware. And it was.

OK

>> 
>> With this patch and VGA devices it still points to another rom for me.
>> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)

> That all sounds to me like a bug in QEMU which constructs
> the 'world'. Then 'hvmloader' and the kernel ingest this to
> create their view of what the PCI configuration/slots/etc should look like.
> They use 'inb' and 'outb' instructions on the 0xcf8/0xcfc port.

> Perhaps 'vga=none' is having a hard time dealing with 'no-VGA-but-wait-there
> is-a-PT-VGA'?

> Just to make sure I am not forgetting a crucial fact - if you don't
> have vga=none, does the lspci output look sane?

It does .. except when using a PV NIC: that one doesn't show in lspci, but it does occupy a slot (the numbers are not
consecutive anymore, one is "hidden"). The PV NIC itself is working OK, though.
This doesn't seem to happen for the disk, which seems a bit strange to me ..

Apart from that it looks sane.

But I'm rebuilding everything now from scratch and will try again .. just too damned many parameters :-)

I will see if I can make a complete post again with all the data, although the previous time I tried that,
it was a little bit too intimidating, I guess.

>> 
>> Do you by any chance know if there is a difference in how lspci and the Linux kernel scan / list PCI devices ?
>> (for example one by reading ACPI tables .. the other one by real probing .. ?)

> The kernel and hvmloader both use 'inb' and 'outb' to figure out
> what the PCI space looks like.

> 'lspci' uses sysfs. Though if you use '-xxxx' I think it also does
> 'inb' and 'outb'.

Hmm, OK .. so I would expect that dumping with the:
echo 1 > rom; cat rom > rom.bin; echo 0 > rom;
sequence in /sys/bus/pci/devices/<BDF> would work according to the addresses that lspci gives for the ROM BAR ..
and when I do that, the resulting rom.bin differs depending on which emulated devices I put in.

The devices that have a ROM in the guest are always in this order (by BDF):
NIC
Emulated VGA
Passed-through VGA

When all are enabled, I end up with the ROM of the emulated VGA.
When I disable that with vga="none", I end up with the ROM of the NIC.
When I also disable that .. by not specifying any vif and using xen_platform_pci=0 ..
     the kernel complains when trying to dump it that it's not a valid ROM (and when forced to, it's all zeros, so that's
     consistent: it can't find the start signature of a ROM)

Is there an easy tool to dump the contents of arbitrary memory addresses ?
(other than enabling /dev/kmem, doing some voodoo calculations, and using "dd" ?)

Hmm, when I make a core dump using "xl dump-core", would/should that also contain the contents of the passed-through ROM ?
Then at least I could search that dump for the strings of the real VGA ROM of the passed-through card and see if it is there .. perhaps at some other
address than expected .. or at least know it isn't.

Because at the moment I'm not making much progress with ruling things out (which seems to be the only method),
I only have some symptoms that differ. I wish I had another NIC device with a ROM, to see if that does work.


> So it all seems to point to QEMU.
>> 
>> --
>> Sander
>> 
>> 
>> >> I have not yet done the GPU - this issue was preventing me from using
>> >> qemu-xen as it would always blow up before SeaBIOS was in the picture.
>> >> 
>> >> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
>> >> <konrad.wilk@oracle.com>' please do.
>> 
>> > Will do.
>> 
>> >> Thank you!
>> >> > 
>> >> > 
>> >> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> >> > index 6dd7a68..2bbdb6d 100644
>> >> > --- a/hw/xen/xen_pt.c
>> >> > +++ b/hw/xen/xen_pt.c
>> >> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>> >> >  
>> >> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>> >> >  
>> >> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>> >> > -                                      "xen-pci-pt-rom", d->rom.size);
>> >> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>> >> > +                              "xen-pci-pt-rom", d->rom.size);
>> >> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>> >> >                           &s->rom);
>> >> >  
>> >> > 
>> >> > -- 
>> >> > Anthony PERARD
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1g2L-0002CY-6k; Fri, 10 Jan 2014 17:38:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1g2J-000298-W7
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:38:24 +0000
Received: from [193.109.254.147:47282] by server-14.bemta-14.messagelabs.com
	id 61/3D-12628-F0030D25; Fri, 10 Jan 2014 17:38:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389375499!10127325!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6797 invoked from network); 10 Jan 2014 17:38:21 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 17:38:21 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AHcGkd008646
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 17:38:17 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AHcFaq017972
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 17:38:16 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AHcEYl001355; Fri, 10 Jan 2014 17:38:15 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 09:38:14 -0800
Date: Fri, 10 Jan 2014 12:38:10 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110173809.GA19423@pegasus.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1010658460.20140110171623@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
 pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> > Wow. You just walked into a pile of bugs, didn't you? And on a Friday
> > nonetheless.
> 
> As usual ;-)

Ha!
..snip..
> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
> 
> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
> > I totally forgot about it !
> 
> Got a link to that patchset ?

https://lkml.org/lkml/2013/12/13/315

> I at least could give it a spin .. you never know when fortune is on your side :-)

It is also at this git tree:

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
want to merge it into your current Linus tree.

Thank you!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:51:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gEi-0003qj-Ax; Fri, 10 Jan 2014 17:51:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W1gEh-0003qd-4r
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:51:11 +0000
Received: from [85.158.137.68:30903] by server-5.bemta-3.messagelabs.com id
	57/6A-25188-E0330D25; Fri, 10 Jan 2014 17:51:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389376268!4801692!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6186 invoked from network); 10 Jan 2014 17:51:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:51:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91765004"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 17:51:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 12:51:06 -0500
Message-ID: <1389376265.6423.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 10 Jan 2014 17:51:05 +0000
In-Reply-To: <1389373745.6423.39.camel@kazak.uk.xensource.com>
References: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
	<1389347521.19142.9.camel@kazak.uk.xensource.com>
	<52CFFA1C.7000500@linaro.org>
	<1389373745.6423.39.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 17:09 +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 13:48 +0000, Julien Grall wrote:
> > 
> > On 01/10/2014 09:52 AM, Ian Campbell wrote:
> > > On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
> > >> Scrubbing heap pages was disabled because it was slow on the models. Now that Xen
> > >> supports real hardware, it's possible to enable scrubbing by default.
> > >>
> > >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > >
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Thanks.
> 
> Applied.
> 
> > >> ---
> > >>      This patch should go into Xen 4.4. It avoids giving non-cleared pages to
> > >>      a domain.
> > >>
> > >>      The downside is that it's now slow on models.
> > >
> > > There is a no-bootscrub command-line option which can be used in that
> > > case. Could you update the relevant model wiki pages to mention it
> > > please?
> > 
> > I have updated the wiki page.
> 
> Thanks.

You made it say "comment out the code" rather than advising to use the
command line option like I suggested -- was that on purpose?

Is something broken with the command line option?

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 17:09 +0000, Ian Campbell wrote:
> On Fri, 2014-01-10 at 13:48 +0000, Julien Grall wrote:
> > 
> > On 01/10/2014 09:52 AM, Ian Campbell wrote:
> > > On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
> > >> Scrub heap pages was disabled because it was slow on the models. Now that Xen
> > >> supports real hardware, it's possible to enable by default scrubbing.
> > >>
> > >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > >
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Thanks.
> 
> Applied.
> 
> > >> ---
> > >>      This patch should go to Xen 4.4. It avoid to give non-cleared page to
> > >>      a domain.
> > >>
> > >>      The downside is it's now slow on models.
> > >
> > > There is a no-bootscrub command-line option which can be used in that
> > > case. Could you update the relevant model wiki pages to mention it
> > > please?
> > 
> > I have updated the wiki page.
> 
> Thanks.

You made it say "comment out the code" rather than advising to use the
command line option like I suggested -- was that on purpose?

Is something broken with the command line option?

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 17:58:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 17:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gLF-00049I-0i; Fri, 10 Jan 2014 17:57:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1gLD-00049B-CX
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 17:57:55 +0000
Received: from [85.158.137.68:8039] by server-8.bemta-3.messagelabs.com id
	39/95-31081-2A430D25; Fri, 10 Jan 2014 17:57:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389376673!8421397!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 490 invoked from network); 10 Jan 2014 17:57:53 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 17:57:53 -0000
Received: by mail-ee0-f46.google.com with SMTP id d49so2061331eek.19
	for <xen-devel@lists.xenproject.org>;
	Fri, 10 Jan 2014 09:57:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=etQMEtlGWWkZMufka15nQvKk7UzYcdBjDAuuc98TWVg=;
	b=YpFgCOeOgxHa3F8C7/vlokeRqwRvD3qj1wcDe2oomT8SKhrBcN+dnUXDh5r/3lKyQs
	4deWyf7uj4XwWbByA7CrTRHmmrk4nBsLBJn9Mi54wxEs2IM03b1kIzRuNrfGFOKx6qBv
	caXLe4b0yyWMZiDzxSeXONqz8otxod9H6yQ7o+6cwcZZShPmojbmLK6XYTNQoXDTfBi+
	0OUWwek5Ucy83/Y3jVF81xyhtgRDC8JrCqmxy+dIQ7iXOCSe4mjvStWYNMKSBt6nbBMX
	gqHfxNAFSM5ke+jq7Cv1N8gVBq5VTJTcdxeWCL31/4GI/xSAuuSeJPerXJsYzjx5qKiJ
	SGjA==
X-Gm-Message-State: ALoCoQm8C2HjFiZ1cSdxKCGVVURZWudIaQ/A7oWB3RYjOedjW/R262JFTB5HirlAZhPaXWb4YwfW
X-Received: by 10.15.54.72 with SMTP id s48mr11194521eew.3.1389376673145;
	Fri, 10 Jan 2014 09:57:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o13sm16398017eex.19.2014.01.10.09.57.51 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 10 Jan 2014 09:57:52 -0800 (PST)
Message-ID: <52D0349E.5070504@linaro.org>
Date: Fri, 10 Jan 2014 17:57:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389324476-9158-1-git-send-email-julien.grall@linaro.org>
	<1389347521.19142.9.camel@kazak.uk.xensource.com>
	<52CFFA1C.7000500@linaro.org>
	<1389373745.6423.39.camel@kazak.uk.xensource.com>
	<1389376265.6423.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1389376265.6423.46.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Scrub heap pages during boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 05:51 PM, Ian Campbell wrote:
> On Fri, 2014-01-10 at 17:09 +0000, Ian Campbell wrote:
>> On Fri, 2014-01-10 at 13:48 +0000, Julien Grall wrote:
>>>
>>> On 01/10/2014 09:52 AM, Ian Campbell wrote:
>>>> On Fri, 2014-01-10 at 03:27 +0000, Julien Grall wrote:
>>>>> Scrub heap pages was disabled because it was slow on the models. Now that Xen
>>>>> supports real hardware, it's possible to enable by default scrubbing.
>>>>>
>>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>>
>>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> Thanks.
>>
>> Applied.
>>
>>>>> ---
>>>>>      This patch should go to Xen 4.4. It avoid to give non-cleared page to
>>>>>      a domain.
>>>>>
>>>>>      The downside is it's now slow on models.
>>>>
>>>> There is a no-bootscrub command-line option which can be used in that
>>>> case. Could you update the relevant model wiki pages to mention it
>>>> please?
>>>
>>> I have updated the wiki page.
>>
>> Thanks.
> 
> You made it say "comment out the code" rather than advising to use the
> command line option like I suggested -- was that on purpose?
> 
> Is something broken with the command line option?

No, I thought we would need to create a command line option. I will update
the wiki page.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:01:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gOi-000564-2o; Fri, 10 Jan 2014 18:01:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1gOg-00055s-O9
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 18:01:31 +0000
Received: from [85.158.139.211:36040] by server-16.bemta-5.messagelabs.com id
	10/35-11843-A7530D25; Fri, 10 Jan 2014 18:01:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389376887!8899064!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25517 invoked from network); 10 Jan 2014 18:01:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 18:01:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AI17Qg027427
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 18:01:07 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AI15KV011883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 18:01:05 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AI14KJ011875; Fri, 10 Jan 2014 18:01:04 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 10:01:04 -0800
Date: Fri, 10 Jan 2014 13:01:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110180102.GB19423@pegasus.dumpdata.com>
References: <20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
	<20140110163544.GA21692@phenom.dumpdata.com>
	<1945447520.20140110183818@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1945447520.20140110183818@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 06:38:18PM +0100, Sander Eikelenboom wrote:
> 
> Friday, January 10, 2014, 5:35:44 PM, you wrote:
> 
> >> >> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> >> >> 
> >> >> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> >> >> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> >> >> ..
> >> >> and I did see the PXE boot menu in my guest - so even
> >> >> better!
> >> 
> >> > Perfect, look like it is the fix for PCI passthrough.
> >> 
> >> Hi Konrad,
> >> 
> >> Are you sure it's the rom of the NIC, and not the iPXE rom from the emulated device that
> >> gets run ?
> 
> > Yes. I double checked that the MAC address that was given an DHCP
> > address was indeed for the physical hardware. And it was.
> 
> OK
> 
> >> 
> >> With this patch and VGA devices it still points to another rom for me.
> >> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)
> 
> > That all sounds to me like a bug in QEMU which constructs
> > the 'world'. Then 'hvmloader' and the kernel ingest this to
> > create their view of what the PCI configuration/slots/etc should look like.
> > They use 'inb' and 'outb' instructions on the 0xcf8/0xcfc port.
> 
> > Perhaps the 'vga=none' is having an hard time dealing with 'no-VGA-but-wait-there
> > is-and-PT-VGA'?
> 
> > Just to make sure I am not forgetting a crucial fact - if you don't
> > have vga=none, does the lspci output look sane?
> 
> It does .. except when using an PV NIC, that one doesn't show in lspci, but it does occupy a slot (the numbers are not
> consecutive anymore, one is "hidden"), but the PV nic itself is working ok.
Right, because your emulated Intel NIC gets 'unplugged' (disappears)
when the Xen PV one kicks in.

> This doesn't seem to happen for disk so that seems a bit strange to me ..

That might be due to the CD-ROM option. The disk should
nonetheless disappear (/dev/hda).

> 
> Apart from that it looks sane.
> 
> But I'm rebuilding everything now from scratch and will try again .. just too damned many parameters :-)
> 
> Will see if i can make a complete post again with all data, although the previous time i tried that,
> it was a little bit to intimidating i guess.
> 
> >> 
> >> Do you by any chance know if there is a difference in how lspci and the linux kernel scan / list pci devices ?
> >> (for example one by reading acpi tables .. the other one by real probing .. ?)
> 
> > The kernel and hvmloader all use the 'inb' and 'outb' to figure out
> > what the PCI space looks like.
> 
> > 'lspci' uses /sysfs. Thought if you use '-xxxx' I think it also does
> > 'inb' and 'outb'.
> 
> Hmm ok .. so i would expect that dumping with the:
> echo 1 > rom; cat rom > rom.bin; echo 0 > rom;
> sequence in /sys/bus/pci/devices/<BDF> would do everything according to the addresses that lspci gives for the rombar ..
> and when i do that the resulting rom.bin differs according to the emulated devices i put it.
> 
> The devices that have a rom in the guest are always in the order (with there BDF):
> NIC
> Emulated VGA
> Passthroughed VGA
> 
> When all enabled, i end up with the rom of the emulated VGA
> When i disable that with vga="none", i end up with the rom of the NIC
> When i also disable that .. by not specifying any vif and using xen_platform_pci=0 ..
>      the kernel complains when trying to dump that it's not a valid rom (and when forced too it's all zero's so it's
>      right because it can't find the start signature of a rom)
> 

Ok, so the passthrough ROM is definitely not showing up. Which is
odd, b/c it does show up for the physical NIC.

If you do the 'echo 1' .. rune in dom0, can you extract your
VGA (Radeon) BIOS? Perhaps the ROM extraction part for video
cards is different?
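[For reference, a minimal sketch of the sysfs ROM-dump rune discussed above,
with a quick signature check. The BDF path is a placeholder, and the actual
dump requires root and a matching device, so those lines are shown as
comments; the signature check below runs on a simulated dump.]

```shell
# The dump itself (requires root; 0000:01:00.0 is a placeholder BDF):
#   cd /sys/bus/pci/devices/0000:01:00.0
#   echo 1 > rom && cat rom > /tmp/rom.bin && echo 0 > rom
#
# A valid PCI expansion ROM begins with the bytes 0x55 0xAA.
# Simulate a dumped ROM (octal escapes: \125 = 0x55, \252 = 0xAA)
# and check the first two bytes:
printf '\125\252\0\0' > /tmp/rom.bin
sig=$(od -An -tx1 -N2 /tmp/rom.bin | tr -d ' ')
if [ "$sig" = "55aa" ]; then
    echo "valid option ROM signature"
else
    echo "no ROM signature found"
fi
```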

> Is there an easy tool to dump the content of arbitrary mem addresses ?
> (except enabling the /dev/kmem and trying to do some voodoo calculations and use "dd" ?)

That is the easiest.
> 
> Hmm when i make a core dump using "xl dump-core" would/should that also contain the content of the passed throughed rom ?

I think not. As the ROM is not RAM, the dump should not include it.

> Then at least i could search that dump for the strings of the real vga rom of the passedtroughed card  and see if it is there .. perhaps on some other
> address as expected .. or at least know it isn't.
> 
> Because at the moment i'm not very well in progressing with ruling things out (which seems to be the only method),
> i only have some symptoms that differ. Wish i had another NIC device with a rom to see if that does work.
> 
> 
> > So it all seems to point to QEMU.
> >> 
> >> --
> >> Sander
> >> 
> >> 
> >> >> I have not yet done the GPU - this issue was preventing me from using
> >> >> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> >> >> 
> >> >> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> >> >> <konrad.wilk@oracle.com>' please do.
> >> 
> >> > Will do.
> >> 
> >> >> Thank you!
> >> >> > 
> >> >> > 
> >> >> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >> >> > index 6dd7a68..2bbdb6d 100644
> >> >> > --- a/hw/xen/xen_pt.c
> >> >> > +++ b/hw/xen/xen_pt.c
> >> >> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >> >> >  
> >> >> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >> >> >  
> >> >> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >> >> > -                                      "xen-pci-pt-rom", d->rom.size);
> >> >> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >> >> > +                              "xen-pci-pt-rom", d->rom.size);
> >> >> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >> >> >                           &s->rom);
> >> >> >  
> >> >> > 
> >> >> > -- 
> >> >> > Anthony PERARD
> >> 
> >> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:01:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gOi-000564-2o; Fri, 10 Jan 2014 18:01:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1gOg-00055s-O9
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 18:01:31 +0000
Received: from [85.158.139.211:36040] by server-16.bemta-5.messagelabs.com id
	10/35-11843-A7530D25; Fri, 10 Jan 2014 18:01:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389376887!8899064!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25517 invoked from network); 10 Jan 2014 18:01:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 18:01:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AI17Qg027427
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 18:01:07 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AI15KV011883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 18:01:05 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AI14KJ011875; Fri, 10 Jan 2014 18:01:04 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 10:01:04 -0800
Date: Fri, 10 Jan 2014 13:01:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140110180102.GB19423@pegasus.dumpdata.com>
References: <20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<20140110032845.GA3660@konrad-lan.dumpdata.com>
	<20140110151914.GF1696@perard.uk.xensource.com>
	<24225189.20140110170544@eikelenboom.it>
	<20140110163544.GA21692@phenom.dumpdata.com>
	<1945447520.20140110183818@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1945447520.20140110183818@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
	stefano.stabellini@citrix.com, donald.d.dugger@intel.com
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 06:38:18PM +0100, Sander Eikelenboom wrote:
> 
> Friday, January 10, 2014, 5:35:44 PM, you wrote:
> 
> >> >> (d1) [2014-01-10 03:20:29] Running option rom at ca00:0003
> >> >> 
> >> >> (d1) [2014-01-10 03:20:47] Booting from DVD/CD...
> >> >> (d1) [2014-01-10 03:20:47] Booting from 0000:7c00
> >> >> ..
> >> >> and I did see the PXE boot menu in my guest - so even
> >> >> better!
> >> 
> >> > Perfect, look like it is the fix for PCI passthrough.
> >> 
> >> Hi Konrad,
> >> 
> >> Are you sure it's the rom of the NIC, and not the iPXE rom from the emulated device that
> >> gets run ?
> 
> > Yes. I double checked that the MAC address that was given an DHCP
> > address was indeed for the physical hardware. And it was.
> 
> OK
> 
> >> 
> >> With this patch and VGA devices it still points to another rom for me.
> >> (it looks like it is pointing to the rom of the device with a BDF just one lower than the passed through one)
> 
> > That all sounds to me like a bug in QEMU which constructs
> > the 'world'. Then 'hvmloader' and the kernel ingest this to
> > create their view of what the PCI configuration/slots/etc should look like.
> > They use 'inb' and 'outb' instructions on the 0xcf8/0xcfc port.
> 
> > Perhaps the 'vga=none' is having an hard time dealing with 'no-VGA-but-wait-there
> > is-and-PT-VGA'?
> 
> > Just to make sure I am not forgetting a crucial fact - if you don't
> > have vga=none, does the lspci output look sane?
> 
> It does .. except when using an PV NIC, that one doesn't show in lspci, but it does occupy a slot (the numbers are not
> consecutive anymore, one is "hidden"), but the PV nic itself is working ok.
Right, because your Intel emulated gets 'unplugged' (disappears)
when the Xen PV one kicks in.

> This doesn't seem to happen for disk so that seems a bit strange to me ..

That might be due to the CD-ROM option. The disk should
nonetless disappear (/dev/hda).

> 
> Apart from that it looks sane.
> 
> But I'm rebuilding everything now from scratch and will try again .. just too damned many parameters :-)
> 
> Will see if i can make a complete post again with all data, although the previous time i tried that,
> it was a little bit to intimidating i guess.
> 
> >> 
> >> Do you by any chance know if there is a difference in how lspci and the linux kernel scan / list pci devices ?
> >> (for example one by reading acpi tables .. the other one by real probing .. ?)
> 
> > The kernel and hvmloader all use the 'inb' and 'outb' to figure out
> > what the PCI space looks like.
> 
> > 'lspci' uses /sysfs. Thought if you use '-xxxx' I think it also does
> > 'inb' and 'outb'.
> 
> Hmm ok .. so i would expect that dumping with the:
> echo 1 > rom; cat rom > rom.bin; echo 0 > rom;
> sequence in /sys/bus/pci/devices/<BDF> would do everything according to the addresses that lspci gives for the rombar ..
> and when i do that the resulting rom.bin differs according to the emulated devices i put it.
> 
> The devices that have a rom in the guest are always in the order (with there BDF):
> NIC
> Emulated VGA
> Passthroughed VGA
> 
> When all enabled, i end up with the rom of the emulated VGA
> When i disable that with vga="none", i end up with the rom of the NIC
> When i also disable that .. by not specifying any vif and using xen_platform_pci=0 ..
>      the kernel complains when trying to dump that it's not a valid rom (and when forced to, it's all zeros, which
>      makes sense because it can't find the start signature of a rom)
> 

Ok, so the passthrough ROM is definitely not showing up. Which is
odd, b/c it does show up for the physical NIC.

If you do the 'echo 1' .. rune in dom0, can you extract your
VGA (Radeon) BIOS? Perhaps the ROM extraction part for video
cards is different?
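
For reference, that extraction rune plus a sanity check can be sketched as a small script (the BDF argument and file names here are placeholders, not from the thread; a valid PCI expansion ROM must start with the bytes 0x55 0xAA):

```shell
# Minimal sketch: dump a device's option ROM via sysfs and check the
# PCI expansion-ROM signature. The BDF (e.g. 0000:01:00.0) and output
# file name are placeholders -- adjust for the card being examined.
dump_rom() {
    dev="/sys/bus/pci/devices/$1"
    echo 1 > "$dev/rom"        # enable access to the ROM BAR
    cat "$dev/rom" > "$2"      # copy the ROM contents out
    echo 0 > "$dev/rom"        # disable access again
}

check_rom_signature() {
    # A valid PCI expansion ROM begins with the bytes 0x55 0xAA.
    sig=$(od -An -tx1 -N2 "$1" | tr -d ' ')
    if [ "$sig" = "55aa" ]; then
        echo "valid ROM signature"
    else
        echo "no ROM signature found"
    fi
}

# e.g. (as root, in dom0):
#   dump_rom 0000:01:00.0 radeon-rom.bin && check_rom_signature radeon-rom.bin
```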

> Is there an easy tool to dump the content of arbitrary mem addresses ?
> (except enabling the /dev/kmem and trying to do some voodoo calculations and use "dd" ?)

That is the easiest.
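
A minimal sketch of that voodoo with dd, assuming /dev/mem is readable (needs root, and kernels built with CONFIG_STRICT_DEVMEM will refuse most RAM ranges); the example address and length are placeholders:

```shell
# Read LEN bytes at byte offset ADDR from SRC with dd; with SRC=/dev/mem
# the offset is a physical address. bs=1 keeps skip/count in bytes.
dump_phys() {  # usage: dump_phys <src> <hex-or-dec-addr> <len> <out>
    dd if="$1" of="$4" bs=1 skip=$(($2)) count="$3" 2>/dev/null
}

# e.g. the legacy VGA BIOS shadow at 0xC0000:
#   dump_phys /dev/mem 0xC0000 65536 vgarom.bin
```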
> 
> Hmm when i make a core dump using "xl dump-core" would/should that also contain the content of the passed-through rom ?

I think not. As the ROM is not RAM, the dump should not include it.

> Then at least i could search that dump for the strings of the real vga rom of the passed-through card and see if it is there .. perhaps on some other
> address than expected .. or at least know it isn't.
> 
> Because at the moment i'm not progressing very well with ruling things out (which seems to be the only method),
> i only have some symptoms that differ. Wish i had another NIC device with a rom to see if that does work.
> 
> 
> > So it all seems to point to QEMU.
> >> 
> >> --
> >> Sander
> >> 
> >> 
> >> >> I have not yet done the GPU - this issue was preventing me from using
> >> >> qemu-xen as it would always blow up before SeaBIOS was in the picture.
> >> >> 
> >> >> If you would like to put 'Reported-and-Tested-by: Konrad Rzeszutek Wilk
> >> >> <konrad.wilk@oracle.com>' please do.
> >> 
> >> > Will do.
> >> 
> >> >> Thank you!
> >> >> > 
> >> >> > 
> >> >> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >> >> > index 6dd7a68..2bbdb6d 100644
> >> >> > --- a/hw/xen/xen_pt.c
> >> >> > +++ b/hw/xen/xen_pt.c
> >> >> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >> >> >  
> >> >> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >> >> >  
> >> >> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >> >> > -                                      "xen-pci-pt-rom", d->rom.size);
> >> >> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >> >> > +                              "xen-pci-pt-rom", d->rom.size);
> >> >> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >> >> >                           &s->rom);
> >> >> >  
> >> >> > 
> >> >> > -- 
> >> >> > Anthony PERARD
> >> 
> >> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:07:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gUg-0005J3-7D; Fri, 10 Jan 2014 18:07:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W1gUe-0005Iw-Uv
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 18:07:41 +0000
Received: from [85.158.137.68:39302] by server-7.bemta-3.messagelabs.com id
	7A/86-27599-CE630D25; Fri, 10 Jan 2014 18:07:40 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389377257!4803675!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27556 invoked from network); 10 Jan 2014 18:07:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 18:07:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AI7Vu4009756
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 18:07:32 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AI7UVX008790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 18:07:30 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AI7Tod026244; Fri, 10 Jan 2014 18:07:29 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 10:07:29 -0800
Message-ID: <52D036FC.6000308@oracle.com>
Date: Fri, 10 Jan 2014 13:07:56 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 11:28 AM, Olaf Hering wrote:
> In its initial implementation a check for "type" was added, but only phy
> and file are handled. This breaks advertised discard support for other
> type values such as qdisk.
>
> Fix and simplify this function: If the backend advertises discard
> support it is supposed to implement it properly, so enable
> feature_discard unconditionally. If the backend advertises the need for
> a certain granularity and alignment then propagate both properties to
> the block layer. The discard-secure property is a boolean, update the code
> to reflect that.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   drivers/block/xen-blkfront.c | 40 ++++++++++++++--------------------------
>   1 file changed, 14 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index c4a4c90..c9e96b9 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -1635,36 +1635,24 @@ blkfront_closing(struct blkfront_info *info)
>   static void blkfront_setup_discard(struct blkfront_info *info)
>   {
>   	int err;
> -	char *type;
>   	unsigned int discard_granularity;
>   	unsigned int discard_alignment;
>   	unsigned int discard_secure;
>   
> -	type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
> -	if (IS_ERR(type))
> -		return;
> -
> -	info->feature_secdiscard = 0;
> -	if (strncmp(type, "phy", 3) == 0) {
> -		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
> -			"discard-granularity", "%u", &discard_granularity,
> -			"discard-alignment", "%u", &discard_alignment,
> -			NULL);
> -		if (!err) {
> -			info->feature_discard = 1;
> -			info->discard_granularity = discard_granularity;
> -			info->discard_alignment = discard_alignment;
> -		}
> -		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
> -			    "discard-secure", "%d", &discard_secure,
> -			    NULL);
> -		if (!err)
> -			info->feature_secdiscard = discard_secure;
> -
> -	} else if (strncmp(type, "file", 4) == 0)
> -		info->feature_discard = 1;
> -
> -	kfree(type);
> +	info->feature_discard = 1;

If the call below fails, is it safe to continue using the discard feature?
At the least, are discard_granularity and discard_alignment guaranteed 
to have sane/safe values?

> +	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
> +		"discard-granularity", "%u", &discard_granularity,
> +		"discard-alignment", "%u", &discard_alignment,
> +		NULL);
> +	if (!err) {
> +		info->discard_granularity = discard_granularity;
> +		info->discard_alignment = discard_alignment;
> +	}
> +	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
> +		    "discard-secure", "%d", &discard_secure,
> +		    NULL);
> +	if (!err)
> +		info->feature_secdiscard = !!discard_secure;
>   }

The err variable is not really necessary, so you can drop it.


-boris
>   
>   static int blkfront_setup_indirect(struct blkfront_info *info)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:17:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gda-00068o-LZ; Fri, 10 Jan 2014 18:16:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1gdY-00068j-W1
	for xen-devel@lists.xensource.com; Fri, 10 Jan 2014 18:16:53 +0000
Received: from [85.158.137.68:41002] by server-17.bemta-3.messagelabs.com id
	3C/C5-15965-41930D25; Fri, 10 Jan 2014 18:16:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389377809!8471298!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13905 invoked from network); 10 Jan 2014 18:16:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 18:16:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,639,1384300800"; d="scan'208";a="91774370"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Jan 2014 18:16:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 13:16:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1gdR-0003QU-O1;
	Fri, 10 Jan 2014 18:16:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1gdR-0002LL-G4;
	Fri, 10 Jan 2014 18:16:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24340-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 10 Jan 2014 18:16:45 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24340: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1847292479527879529=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1847292479527879529==
Content-Type: text/plain

flight 24340 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24340/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 22687

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                 fail pass in 24333
 test-amd64-i386-xl-win7-amd64  5 xen-boot                   fail pass in 24333
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24333 pass in 24340

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install   fail like 24344-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24333 like 22652

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============1847292479527879529==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1847292479527879529==--

  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3250 lines long.)


--===============1847292479527879529==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1847292479527879529==--

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:21:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gi3-0006w4-Gl; Fri, 10 Jan 2014 18:21:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1gi1-0006vy-KF
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 18:21:29 +0000
Received: from [85.158.143.35:18671] by server-3.bemta-4.messagelabs.com id
	43/9E-32360-92A30D25; Fri, 10 Jan 2014 18:21:29 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389378088!10975745!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7318 invoked from network); 10 Jan 2014 18:21:28 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 18:21:28 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55114 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1gWo-0007NB-1w; Fri, 10 Jan 2014 19:09:54 +0100
Date: Fri, 10 Jan 2014 19:21:11 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1668392148.20140110192111@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110173809.GA19423@pegasus.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 6:38:10 PM, you wrote:

>> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>> > nonetheless.
>> 
>> As usual ;-)

> Ha!
> ..snip..
>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> 
>> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>> > I totally forgot about it !
>> 
>> Got a link to that patchset ?

> https://lkml.org/lkml/2013/12/13/315

>> I at least could give it a spin .. you never know when fortune is on your side :-)

> It is also at this git tree:

> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
> want to merge it in your current Linus tree.

> Thank you!

Hmm, it was worth a shot, but it didn't cut it: still the same error/warning, and the console hung.
I don't seem to get the hung-task stacktrace I did get before.
However, it is stuck: when switching to another console and trying to run "lspci" there, that also hangs.
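
For anyone else wanting to give that branch a spin, the fetch-and-merge step mentioned above can be sketched roughly as follows; the remote name "konrad" and the local tree path are my own placeholders, not from the thread:

```shell
# Illustrative only: fetch Konrad's tree as a named remote and merge
# the slot/bus reset branch into a local Linus-based kernel tree.
cd ~/linux                                   # your current Linus tree (placeholder path)
git remote add konrad \
    git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
git fetch konrad
git merge konrad/devel/xen-pciback.slot_and_bus.v0
```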


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:22:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:22:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1gj1-00072a-8J; Fri, 10 Jan 2014 18:22:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1giz-00072N-TY
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 18:22:30 +0000
Received: from [85.158.139.211:34761] by server-16.bemta-5.messagelabs.com id
	8C/09-11843-36A30D25; Fri, 10 Jan 2014 18:22:27 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389378147!9078214!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22281 invoked from network); 10 Jan 2014 18:22:27 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 18:22:27 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55115 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1gXz-0007Ts-8N; Fri, 10 Jan 2014 19:11:07 +0100
Date: Fri, 10 Jan 2014 19:22:24 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1233354318.20140110192224@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110173809.GA19423@pegasus.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 6:38:10 PM, you wrote:

>> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>> > nonetheless.
>> 
>> As usual ;-)

> Ha!
> ..snip..
>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> 
>> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>> > I totally forgot about it !
>> 
>> Got a link to that patchset ?

> https://lkml.org/lkml/2013/12/13/315

>> I at least could give it a spin .. you never know when fortune is on your side :-)

> It is also at this git tree:

> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
> want to merge it in your current Linus tree.

> Thank you!


Ah, and there is the same stacktrace as well ..


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 18:43:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 18:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1h2z-0000BN-PK; Fri, 10 Jan 2014 18:43:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W1h2x-0000BF-Me
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 18:43:07 +0000
Received: from [85.158.137.68:36412] by server-9.bemta-3.messagelabs.com id
	02/08-13104-A3F30D25; Fri, 10 Jan 2014 18:43:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389379383!8412366!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11459 invoked from network); 10 Jan 2014 18:43:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 18:43:04 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AIfxNn006601
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 18:42:00 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AIfv5g029002
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 18:41:58 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AIfvI0022964; Fri, 10 Jan 2014 18:41:57 GMT
Received: from pegasus.dumpdata.com (/10.154.165.205)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 10:41:56 -0800
Date: Fri, 10 Jan 2014 13:41:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
	jun.nakajima@intel.com
Message-ID: <20140110184151.GA20232@pegasus.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
	disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

It occurred to me that when booting Linux HVM guests, the 1GB hugepage
CPUID flag is not exposed. One needs to set it manually:

 cpuid= ['0x80000001:edx=xxxxx1xxxxxxxxxxxxxxxxxxxxxxxxxx'] 
or 
 cpuid="host,page1gb=k"

(see http://lists.xenproject.org/archives/html/xen-users/2013-09/msg00083.html
for some discussion about it).
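
(As a side note of my own, not from that thread: the '1' in the edx= mask
above sits at bit 26 of CPUID leaf 0x80000001 EDX, which is the pdpe1gb
flag. A quick sketch of decoding that bit from a made-up EDX value:

```shell
# Check bit 26 (pdpe1gb, 1GB pages) of CPUID leaf 0x80000001 EDX.
# The edx value below is invented for illustration.
edx=$((0x24100800))
if (( (edx >> 26) & 1 )); then
    echo "pdpe1gb set"
else
    echo "pdpe1gb clear"
fi
```
)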

The machine I am using supports it:

 (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB 

and this is the only way to get hugepages working.
(For reference, here is what I have on the guest command line:
"default_hugepagesz=1G hugepagesz=1G hugepages=2 hugepagesz=2M
hugepages=100". Without this cpuid override I get:

 hugepagesz: Unsupported page size 1024 M

because the 1GB flag is not exposed.)

I was wondering: why don't we check the host cpuid and set the 1GB
flag (pdpe1gb) in those cases? It seems that all the pieces are in
place for this to work.

Thanks!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 10:41:56 -0800
Date: Fri, 10 Jan 2014 13:41:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
	jun.nakajima@intel.com
Message-ID: <20140110184151.GA20232@pegasus.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
	disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

It occurred to me that when booting Linux HVM guests, the 1GB hugepage
cpuid flag is not exposed. One needs to manually do:

 cpuid= ['0x80000001:edx=xxxxx1xxxxxxxxxxxxxxxxxxxxxxxxxx'] 
or 
 cpuid="host,page1gb=k"

(see http://lists.xenproject.org/archives/html/xen-users/2013-09/msg00083.html
for some discussion about it).

The machine I am using supports it:

 (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB 

and this is the only way to get 1GB hugepages working.
(For reference, here is what I have on the guest command line:
"default_hugepagesz=1G hugepagesz=1G hugepages=2 hugepagesz=2M
hugepages=100"; without this cpuid I get:
 hugepagesz: Unsupported page size 1024 M 

because the 1GB flag is not exposed.)

I was wondering why we don't check the host cpuid and set the 1GB
flag (pdpe1gb) for those cases? It seems that all the pieces are in
place for this to work?

Thanks!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 19:07:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 19:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1hQJ-0001gP-Hx; Fri, 10 Jan 2014 19:07:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W1hQH-0001gK-I3
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 19:07:13 +0000
Received: from [85.158.143.35:32890] by server-2.bemta-4.messagelabs.com id
	16/F5-11386-0E440D25; Fri, 10 Jan 2014 19:07:12 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389380831!11031364!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14818 invoked from network); 10 Jan 2014 19:07:12 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 19:07:12 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W1hQ8-000O9Y-5A; Fri, 10 Jan 2014 19:07:04 +0000
Date: Fri, 10 Jan 2014 20:07:04 +0100
From: Tim Deegan <tim@xen.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140110190704.GB48471@deinos.phlegethon.org>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110152840.GA20385@phenom.dumpdata.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:28 -0500 on 10 Jan (1389346120), Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
> > On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> > > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > > create ^
> > > > owner Wei Liu <wei.liu2@citrix.com>
> > > > thanks
> > > > 
> > > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > > When I have following configuration in HVM config file:
> > > > >   memory=128
> > > > >   maxmem=256
> > > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > > 
> > > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > > 
> > > > > With claim_mode=0, I can successfully create HVM guest.
> > > > 
> > > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > > 
> > > No. 128MB actually.
> > > 
> > > > that you only have 128-255M free is quite low, or are you
> > > > autoballooning?)
> > > 
> > > This patch fixes it for me. It basically sets the amount of pages
> > > claimed to be 'maxmem' instead of 'memory' for PoD.
> > > 
> > > I don't know PoD very well,
> > 
> > Me neither, this might have to wait for George to get back.
> 
> <nods>
> > 
> > We should also consider flipping the default claim setting to off in xl
> > for 4.4, since that is likely to be a lower impact change than fixing
> > the issue (and one which we all understand!).
> 
> <unwraps the Xen 4.4 duct-tape roll>
> 
> > 
> > >  and this claim is only valid during the
> > > allocation of the guests memory - so the 'target_pages' value might be
> > > the wrong one. However looking at the hypervisor's
> > > 'p2m_pod_set_mem_target' I see this comment:
> > > 
> > >  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> > >  317  *   entries.  The balloon driver will deflate the balloon to give back
> > >  318  *   the remainder of the ram to the guest OS.
> > > 
> > > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > > And then it is the responsibility of the balloon driver to give the memory
> > > back (and this is where the 'static-max' et al come in play to tell the
> > > balloon driver to balloon out).
> > 
> > PoD exists purely so that we don't need the 'maxmem' amount of memory at
> > boot time. It is basically there in order to let the guest get booted
> > far enough to load the balloon driver to give the memory back.
> > 
> > It's basically a boot time zero-page sharing mechanism AIUI.
> 
> But it does look to gulp up hypervisor memory and return it during
> allocation of memory for the guest.
> 
> Digging in the hypervisor I see in 'p2m_pod_set_cache_target' (where
> pod_target is for this case maxmem - memory):
> 
> And pod.count is zero, so for Wei's case it would be 128MB.
> 
>  216     /* Increasing the target */
>  217     while ( pod_target > p2m->pod.count )
>  218     {
>  222         if ( (pod_target - p2m->pod.count) >= SUPERPAGE_PAGES )
>  223             order = PAGE_ORDER_2M;
>  224         else
>  225             order = PAGE_ORDER_4K;
>  226     retry:
>  227         page = alloc_domheap_pages(d, order, PAGE_ORDER_4K);
> 
> So allocate 64 2MB pages
> 
>  243         p2m_pod_cache_add(p2m, page, order);
> 
> Add to a list
> 
> 251 
>  252     /* Decreasing the target */
>  253     /* We hold the pod lock here, so we don't need to worry about
>  254      * cache disappearing under our feet. */
>  255     while ( pod_target < p2m->pod.count )
>  256     {
> ..
>  266         page = p2m_pod_cache_get(p2m, order);
> 
> Get the page (from the list)
> ..
>  287             put_page(page+i);
> 
> And then free it.

Er, it will do only one of those things, right?  If the requested
target is larger than the current pool, it allocates memory.  If the
requested target is smaller than the current pool, it frees memory. 
So starting from an empty pool, it will allocate pod_target pages and
free none of them.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 20:33:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 20:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ilP-0007lz-9g; Fri, 10 Jan 2014 20:33:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W1ilN-0007ls-Sa
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 20:33:06 +0000
Received: from [85.158.137.68:57757] by server-10.bemta-3.messagelabs.com id
	4B/8E-23989-00950D25; Fri, 10 Jan 2014 20:33:04 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389385982!8423099!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32242 invoked from network); 10 Jan 2014 20:33:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 20:33:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="89682861"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Jan 2014 20:33:01 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 10 Jan 2014 15:33:00 -0500
Message-ID: <52D058F3.5080504@citrix.com>
Date: Fri, 10 Jan 2014 15:32:51 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1389366233.19142.60.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 10:03 AM, Ian Campbell wrote:
> On Fri, 2014-01-10 at 09:25 -0500, Ross Philipson wrote:
>> So I guess that is the question: where to put it? Should it be merged
>> into another lib or be brought in as a separate lib or utility?
>
> I guess the choices are libxc, libxl, libxlutils or an entirely new lib?
>
> At what level would we expect this to be used? Would it be internal to
> e.g. libxl (in which case libxc or libxl would be appropriate) or would
> toolstacks be expected to use it and pass the result to libxl? If that
> is the case then libxc is inappropriate (users of libxl are not supposed
> to have to deal with libxc directly).

I don't think it would be internal to libxl. I think it would be used 
the second way you mention; toolstacks would use it and pass the 
results to libxl. The exact way it is used would be implementation-specific: 
files might need to be generated once, on every boot, before VMs are 
started, etc. I did not think it belonged in either libxl or libxc.

>
> Fitting in with libxl proper would require the API to look a certain way
> (take a context etc), so perhaps libxlu would be more appropriate,
> alongside the disk syntax parser etc?

Possibly. I looked at that back then (and today again) and it seemed to 
all be related to parsing things into XLU_Config objects. I guess I did 
not have a good feel for what libxlu was supposed to be. If it is 
supposed to be a generic library of auxiliary toolstack functionality 
then I think it would be a good place.

Thanks
Ross

>
> Ian.
>
>
> -----
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2014.0.4259 / Virus Database: 3658/6991 - Release Date: 01/10/14
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 20:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 20:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1j1e-0000kM-KX; Fri, 10 Jan 2014 20:49:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1j1d-0000kH-Co
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 20:49:53 +0000
Received: from [85.158.139.211:10981] by server-3.bemta-5.messagelabs.com id
	6C/E3-04773-0FC50D25; Fri, 10 Jan 2014 20:49:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389386991!9102486!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6578 invoked from network); 10 Jan 2014 20:49:51 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 20:49:51 -0000
Received: by mail-ea0-f169.google.com with SMTP id l9so1923724eaj.28
	for <xen-devel@lists.xenproject.org>;
	Fri, 10 Jan 2014 12:49:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=OvNKYyA6XQArg9i+Q7sOu2CvSH3WVevG6gl8YYerNu0=;
	b=JvMdzy4jPMeq4BQp4Ugpg07XvNHdGut1d1chmWduu2fgs37sibr1mC8F7prn0Bm42T
	Mb9k/4rRlsqeDIXnF0gf14D8skhPEsMRRovFPZzH5pa2fEKHHHEmukbTc0qXnGeaCuFU
	xKaZ+A9s/4xTLVmi+ZsisTn/TCr2ThPajCEFztczPDw4kYZCMiQwq41lvKfjUQMNAsLy
	itn+bpXZrvUWySgZ0bQXFHDzwprfuQHHTukqPfiNC8BOgi4itDl9P8s6C5E49gxBxjFv
	/pm2peBelFq+HJRCVzXbtQs5TBo1IrN/uLiK4dwadCh1Pcal1nLOo5dZu7paWWwNzpBA
	Hk1A==
X-Gm-Message-State: ALoCoQlvmgZbdVYIJfRfjbViRACYtypzRxjtOyDzWzleuCbFSSExx8LpG7VEjxclVHhTCOu/O5Ok
X-Received: by 10.14.175.131 with SMTP id z3mr11922176eel.65.1389386991433;
	Fri, 10 Jan 2014 12:49:51 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id n1sm17492387eep.20.2014.01.10.12.49.49
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 10 Jan 2014 12:49:50 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 10 Jan 2014 20:49:47 +0000
Message-Id: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ if
	the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For now, __setup_irq can only fail if an action is already set. If the function
is updated in the future, we don't want to enable the IRQ when setup has
failed.

If the function were to fail while action is still NULL, Xen would crash when
it receives the IRQ, because do_IRQ doesn't check whether action is NULL.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..62510e3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -605,8 +605,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     rc = __setup_irq(desc, irq->irq, new);
     spin_unlock_irqrestore(&desc->lock, flags);
 
-    desc->handler->startup(desc);
-
+    if ( !rc )
+        desc->handler->startup(desc);
 
     return rc;
 }
-- 
1.7.10.4
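The guarded-startup pattern this patch introduces can be sketched as a tiny stand-alone C illustration. The struct layouts and function names below are hypothetical stand-ins, not Xen's real irq_desc/irqaction API; they only show why startup must be conditional on the setup result:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for Xen's irq_desc/irqaction. */
struct irqaction { int dev_id; };
struct irq_desc  { struct irqaction *action; int started; };

/* Mirrors __setup_irq's failure mode: refuse to install a second action. */
static int setup_irq_action(struct irq_desc *desc, struct irqaction *new_act)
{
    if (desc->action != NULL)
        return -1;                /* setup failed: IRQ already in use */
    desc->action = new_act;
    return 0;
}

/* The patch's point: only start (unmask) the IRQ when setup succeeded,
 * so a failed setup never leaves an enabled IRQ with no valid action. */
static int setup_and_start(struct irq_desc *desc, struct irqaction *new_act)
{
    int rc = setup_irq_action(desc, new_act);
    if (!rc)
        desc->started = 1;        /* stands in for desc->handler->startup() */
    return rc;
}
```

With the pre-patch behaviour, a failing setup would still have run startup; with the guard, a non-zero rc leaves the descriptor's enabled state untouched.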


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 20:50:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 20:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1j22-0000m0-12; Fri, 10 Jan 2014 20:50:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W1j21-0000lq-19
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 20:50:17 +0000
Received: from [85.158.143.35:54829] by server-1.bemta-4.messagelabs.com id
	22/39-02132-80D50D25; Fri, 10 Jan 2014 20:50:16 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389387015!10961671!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3141 invoked from network); 10 Jan 2014 20:50:15 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 20:50:15 -0000
Received: by mail-ee0-f45.google.com with SMTP id d49so2137224eek.4
	for <xen-devel@lists.xenproject.org>;
	Fri, 10 Jan 2014 12:50:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=EHvg3QKa6QqIeV3RbdypNfoCs7PBSTyw8n9SP5qiIIc=;
	b=KJnBIYBOzEbQOG+doSYfUbvMiQLFaGJ+IGc3j7bTzKA9HokTi5qMfnQHtkNyhD9ZDi
	hMONmhRNUHMg1OxZ3kdOUckK5Q3pXSWguvyNmHOrMVAD4gudvSrDwP0SGIBwLbkTY32d
	3+HNBUiGzRjSP90CJFR/7YJOndkqtP3s19VtMubP2DboMm+PCjwajNSHi8yiq4o496gQ
	NSgnEWJIH3pKiBZVEjCPHO4UNvayge71wFCJc5PoX++/+1d5GkWep2eOOPRIQBLTClLw
	mC3CuxnJGRzYC+Ntzh0k87WpAcotWv8jiOqIh15RSxqPhTmgTuuZaBxFd+enKPD+dOcv
	fg7w==
X-Gm-Message-State: ALoCoQkfkJ0p4WIj967kaGO32Ievq5YZtE6U0MQnOU3Nip55pBBUIlFnnPy6erwY7MQSRzVnkMeC
X-Received: by 10.15.56.9 with SMTP id x9mr5421eew.112.1389387015078;
	Fri, 10 Jan 2014 12:50:15 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id o1sm17484496eea.10.2014.01.10.12.50.13
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 10 Jan 2014 12:50:14 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 10 Jan 2014 20:50:12 +0000
Message-Id: <1389387012-26247-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: IRQ: Protect IRQ to be shared between
	domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current gic_route_irq_to_guest implementation sets IRQ_GUEST regardless of
whether the IRQ was correctly set up.

As an IRQ can be shared between devices, if those devices are not all assigned
to the same domain (or to Xen), this could result in the IRQ being routed to a
domain instead of Xen.

Also avoid relying on this wrong behaviour when Xen routes an IRQ to dom0.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Hopefully, none of the supported platforms share an IRQ with the UART
    (the only device currently used by Xen). It would be nice to have this
    patch in Xen 4.4 to avoid wasting developers' time.

    The downside of this patch is that if someone wants to support such a
    platform (e.g. an IRQ shared between devices assigned to different
    domains or to Xen), it will end up with an error message and a panic.
---
 xen/arch/arm/domain_build.c |    8 ++++++--
 xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..1fc359a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
         }
 
         DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
-        /* Don't check return because the IRQ can be use by multiple device */
-        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
+        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);
+            return res;
+        }
     }
 
     /* Map the address ranges */
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 62510e3..829d767 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -602,6 +602,21 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     desc = irq_to_desc(irq->irq);
 
     spin_lock_irqsave(&desc->lock, flags);
+
+    if ( desc->status & IRQ_GUEST )
+    {
+        struct domain *d;
+
+        ASSERT(desc->action != NULL);
+
+        d = desc->action->dev_id;
+
+        spin_unlock_irqrestore(&desc->lock, flags);
+        printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
+               irq->irq, d->domain_id);
+        return -EADDRINUSE;
+    }
+
     rc = __setup_irq(desc, irq->irq, new);
     spin_unlock_irqrestore(&desc->lock, flags);
 
@@ -756,7 +771,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     struct irqaction *action;
     struct irq_desc *desc = irq_to_desc(irq->irq);
     unsigned long flags;
-    int retval;
+    int retval = 0;
     bool_t level;
     struct pending_irq *p;
 
@@ -771,6 +786,29 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);
 
+    /* If the IRQ is already used by someone
+     *  - If it's the same domain -> Xen doesn't need to update the IRQ desc
+     *  - Otherwise -> For now, don't allow the IRQ to be shared between
+     *  Xen and domains.
+     */
+    if ( desc->action != NULL )
+    {
+        if ( (desc->status & IRQ_GUEST) && d == desc->action->dev_id )
+            goto out;
+
+        if ( desc->status & IRQ_GUEST )
+        {
+            d = desc->action->dev_id;
+            printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
+                   irq->irq, d->domain_id);
+        }
+        else
+            printk(XENLOG_ERR "ERROR: IRQ %u is already used by Xen\n",
+                   irq->irq);
+        retval = -EADDRINUSE;
+        goto out;
+    }
+
     desc->handler = &gic_guest_irq_type;
     desc->status |= IRQ_GUEST;
 
-- 
1.7.10.4
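The routing policy this patch enforces (re-routing to the same domain succeeds as a no-op; anything else is rejected with -EADDRINUSE) can be illustrated with a minimal self-contained C sketch. The ownership struct below is a simplified assumption, not Xen's actual irq_desc state:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical simplified view of an IRQ's ownership state. */
struct irq_owner { int in_use; int is_guest; int domain_id; };

/* Sketch of the policy gic_route_irq_to_guest now enforces:
 *  - re-routing to the same domain is a no-op success;
 *  - an IRQ owned by another domain, or by Xen itself, is refused. */
static int route_irq_to_guest(struct irq_owner *o, int domid)
{
    if (o->in_use)
    {
        if (o->is_guest && o->domain_id == domid)
            return 0;             /* already routed to this very domain */
        return -EADDRINUSE;       /* shared with Xen or another domain */
    }
    o->in_use = 1;                /* first user: claim the IRQ */
    o->is_guest = 1;
    o->domain_id = domid;
    return 0;
}
```

Before the patch, the second conflicting caller would silently have flipped the IRQ's routing; here the conflict surfaces as an explicit error the caller (e.g. map_device) can propagate.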


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:16:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:16:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1jQi-0002Lk-DF; Fri, 10 Jan 2014 21:15:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1jQg-0002Lf-IK
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:15:46 +0000
Received: from [85.158.139.211:49673] by server-4.bemta-5.messagelabs.com id
	B1/15-26791-10360D25; Fri, 10 Jan 2014 21:15:45 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389388544!6384542!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12211 invoked from network); 10 Jan 2014 21:15:45 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:15:45 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:15:43 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629145986"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.131])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:15:42 +0000
Message-ID: <52D062FE.5000803@terremark.com>
Date: Fri, 10 Jan 2014 16:15:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
	<52CEE2E2.2030501@terremark.com>
	<1389373999.6423.42.camel@kazak.uk.xensource.com>
In-Reply-To: <1389373999.6423.42.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/14 12:13, Ian Campbell wrote:
> On Thu, 2014-01-09 at 12:56 -0500, Don Slutz wrote:
>> On 01/09/14 11:30, Jan Beulich wrote:
>>>>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
>>>> Based on Mukesh's statement, attached is the rebased version of this patch
>>>> (labeled v3).  I included Mukesh's ack.
>>> Unless this is meant just for reviewing purposes (albeit even then
>>> it's likely problematic), could you please get used to sending
>>> patch revisions with mail subjects (i.e. not retaining the prior
>>> version indicator), so there is a reasonable chance to reconstruct
>>> things by searching just the titles in a mail archive. (It's still fine -
>>> at least as far as I'm concerned - to reply to an earlier version,
>>> thus tying things into a single thread on the archive.)
>>>
>>> Jan
>>>
>> I will try to.  I had not noticed this in the past.
> Thanks, as Jan says it is very confusing.
>
> If there are tools things outstanding in this series which should be for
> 4.4 then I don't know what is where or what has been acked.
>
> Please can you resend whatever you think is outstanding for 4.4 as a fresh
> thread with a suitable vN larger than any of the ones mentioned in any
> of the replies here and with the acks collected.
>
> Ian.
>
Will do.  I expect ~1 hour to rebase, build and quick test.

    -Don

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:16:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:16:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1jQi-0002Lk-DF; Fri, 10 Jan 2014 21:15:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1jQg-0002Lf-IK
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:15:46 +0000
Received: from [85.158.139.211:49673] by server-4.bemta-5.messagelabs.com id
	B1/15-26791-10360D25; Fri, 10 Jan 2014 21:15:45 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389388544!6384542!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12211 invoked from network); 10 Jan 2014 21:15:45 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:15:45 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:15:43 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629145986"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.131])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:15:42 +0000
Message-ID: <52D062FE.5000803@terremark.com>
Date: Fri, 10 Jan 2014 16:15:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
	<52CEE2E2.2030501@terremark.com>
	<1389373999.6423.42.camel@kazak.uk.xensource.com>
In-Reply-To: <1389373999.6423.42.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/14 12:13, Ian Campbell wrote:
> On Thu, 2014-01-09 at 12:56 -0500, Don Slutz wrote:
>> On 01/09/14 11:30, Jan Beulich wrote:
>>>>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
>>>> Based on Mukesh's statement, attached is the rebased version of this patch
>>>> (labeled v3).  I included Mukesh's ack.
>>> Unless this is meant just for reviewing purposes (albeit even then
>>> it's likely problematic), could you please get used to sending
>>> patch revisions with updated mail subjects (i.e. not retaining the
>>> prior version indicator), so there is a reasonable chance to
>>> reconstruct things by searching just the titles in a mail archive?
>>> (It's still fine - at least as far as I'm concerned - to reply to
>>> an earlier version, thus tying things into a single thread on the
>>> archive.)
>>>
>>> Jan
>>>
>> I will try to.  I had not noticed this in the past.
> Thanks, as Jan says it is very confusing.
>
> If there are tools things outstanding in this series which should be for
> 4.4 then I don't know what is where or what has been acked.
>
> Please can you resend whatever you think is outstanding for 4.4 as a fresh
> thread with a suitable vN larger than any of the ones mentioned in any
> of the replies here and with the acks collected.
>
> Ian.
>
Will do.  I expect ~1 hour to rebase, build and quick-test.

    -Don

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:38:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1jm9-0003o6-Rh; Fri, 10 Jan 2014 21:37:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1jm8-0003o1-Ly
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:37:56 +0000
Received: from [85.158.143.35:15167] by server-1.bemta-4.messagelabs.com id
	72/9B-02132-43860D25; Fri, 10 Jan 2014 21:37:56 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389389875!11052640!1
X-Originating-IP: [81.169.146.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8298 invoked from network); 10 Jan 2014 21:37:55 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.179)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:37:55 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389389875; l=566;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=9ia/QMBDuYSnNTyzsW1WlL/PmcU=;
	b=aZblkeOhcVuvoUShbo+1fHf12XNwqIZNBFN3bi1V4zBZaUirgLlm68wtlJdvsYyxQ4/
	Zk/AtUV6ScG8CWheR2tSBPmsZSZlYsg4hfxnrbXmaPBKd61UOhtm55OIrA3AaF6nVSKwn
	E2OKqVn02YtuacleAlJUvpOGfSGLdpB6tdI=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id D03270q0ALbnIdo ; 
	Fri, 10 Jan 2014 22:37:49 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 927F55024C; Fri, 10 Jan 2014 22:37:46 +0100 (CET)
Date: Fri, 10 Jan 2014 22:37:46 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20140110213746.GA933@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D036FC.6000308@oracle.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, Boris Ostrovsky wrote:

> If the call below fails, is it safe to continue using discard feature? At
> the least, are discard_granularity and discard_alignment guaranteed to have
> sane/safe values?

It's up to the toolstack to provide sane values. In the worst case
discard fails. In this specific case the three values are optional, so
the calls can fail. I do not know what happens if the backend device
actually needs the values but the frontend cannot send proper discard
requests. Hopefully it will not damage the hardware.
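The policy Olaf describes - optional backend keys whose reads may fail, with discard merely degrading rather than breaking - can be sketched as below. This is an illustrative sketch only, not the actual blkfront code; the helper name `read_backend_uint` and the default values are assumptions.

```c
/* Hypothetical stand-in for a xenstore read that may fail for an
 * optional key: returns 0 on success, -1 when the key is absent. */
static int read_backend_uint(const char *key, unsigned int *out)
{
    (void)key;
    (void)out;
    return -1;               /* simulate a backend that omits the key */
}

/* Sketch of the fallback policy: each optional value gets a safe
 * default, so a missing key does not leave garbage behind. */
static void setup_discard(unsigned int *granularity,
                          unsigned int *alignment,
                          unsigned int *secure)
{
    if (read_backend_uint("discard-granularity", granularity) != 0)
        *granularity = 512;  /* assumed sector-size default */
    if (read_backend_uint("discard-alignment", alignment) != 0)
        *alignment = 0;
    if (read_backend_uint("discard-secure", secure) != 0)
        *secure = 0;         /* secure discard off unless advertised */
}
```

With this shape a toolstack that omits the keys still yields defined values, which is what makes "the calls can fail" acceptable.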

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:51:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:51:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1jyy-0005Ia-86; Fri, 10 Jan 2014 21:51:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W1jyx-0005IV-I0
	for xen-devel@lists.xenproject.org; Fri, 10 Jan 2014 21:51:11 +0000
Received: from [193.109.254.147:53592] by server-9.bemta-14.messagelabs.com id
	16/8D-13957-E4B60D25; Fri, 10 Jan 2014 21:51:10 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389390669!10145814!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18658 invoked from network); 10 Jan 2014 21:51:10 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Jan 2014 21:51:10 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55890 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W1jnu-000783-7Q; Fri, 10 Jan 2014 22:39:46 +0100
Date: Fri, 10 Jan 2014 22:51:03 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <694614152.20140110225103@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Sander Eikelenboom <linux@eikelenboom.it>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] xen pci and vga passthrough + option roms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

I'm starting a new thread, since I'm essentially trying to start over from scratch.
(shoot, I forgot to include xen-devel on the first mail ...)

- Xen-unstable the latest and greatest
- Linux 3.13-rc7 as dom0 and guest kernel
- Qemu-xen
- Some patches (vga="none", enabling qemu debug, fixing build errors when enabling qemu debug)

I'm now first trying to replicate your setup :-) (don't worry .. it won't hurt :p).
I'm using an Intel NIC now (found one lying around, and it has a rom;
after it was flashed .. it even has valid rom content :-) )

Now the first experiment is to see if the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works.

- Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
- Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
- Running Xen, in an HVM guest, NIC passed through: dumping the rom works, contents are the same as under dom0

The failure with pciback is perhaps because the device wasn't initialized at boot, having been seized by pciback.
So generally speaking the rombar of pci devices is correctly passed through and can be dumped.

So it indeed appears to be a VGA passthrough specific issue.

This was on my Intel machine, but unfortunately that one has an IGD that shares its PCIe lanes with the only PCIe x16 slot,
so I will put the NIC in my AMD machine and verify whether it works there, and then continue with the VGA card.

Now trying on the AMD machine:

NIC:
- Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
- Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
- Running Xen, in an HVM guest, NIC passed through: dumping the rom works, contents are the same as under dom0

VGA card:
- Running Xen, on dom0, VGA card owned by radeon: dumping the rom works
- Running Xen, on dom0, VGA card owned by nothing (due to setting nomodeset): dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
- Running Xen, on dom0, VGA card owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
- Running Xen, in an HVM guest, VGA card passed through: dumping the rom works, contents are DIFFERENT from dom0, I get the BOCHS VGABIOS

Preliminary conclusions:
- Passing through the rombar from a NIC works
- Passing through the rombar from a VGA card doesn't work out of the box.
- the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works but seems to rely on a loaded and working driver, and is thus not very reliable for testing,
  so I need to dust off my arithmetic and use /dev/kmem and dd.
- So Anthony is probably right: something is playing tricks with the rombar when it's a VGA-class device, even if it's the secondary card in the guest.
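The dump sequence tested above can be sketched in C against a device's sysfs directory. This is a hedged illustration of the "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom" steps; the function names are mine, and on a real `rom` attribute the reads can fail with EIO, which is the "cat: rom: Input/output error" case Sander hits.

```c
#include <stdio.h>

/* Write a short string to a (sysfs) file; returns 0 on success. */
static int write_str(const char *path, const char *s)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int rc = (fputs(s, f) < 0) ? -1 : 0;
    fclose(f);
    return rc;
}

/* Sketch of the rom-dump sequence: enable the ROM BAR, copy its
 * contents out byte by byte, then disable the BAR again. */
static int dump_rom(const char *dev_dir, const char *outfile)
{
    char rom[512];
    snprintf(rom, sizeof rom, "%s/rom", dev_dir);

    if (write_str(rom, "1") != 0)         /* echo 1 > rom */
        return -1;

    FILE *in = fopen(rom, "r");
    FILE *out = in ? fopen(outfile, "w") : NULL;
    int rc = (in && out) ? 0 : -1;
    if (rc == 0) {
        int c;
        while ((c = fgetc(in)) != EOF)
            if (fputc(c, out) == EOF) { rc = -1; break; }
        if (ferror(in))                   /* an EIO would show up here */
            rc = -1;
    }
    if (in) fclose(in);
    if (out) fclose(out);

    write_str(rom, "0");                  /* echo 0 > rom */
    return rc;
}
```

Against a plain file (instead of a real sysfs attribute) the write simply replaces the contents, so this only exercises the I/O path, not the BAR-enable semantics.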

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4s-0005SK-DX; Fri, 10 Jan 2014 21:57:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4q-0005Rg-Pv
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:16 +0000
Received: from [85.158.143.35:17619] by server-2.bemta-4.messagelabs.com id
	5D/70-11386-CBC60D25; Fri, 10 Jan 2014 21:57:16 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389391033!3906289!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28287 invoked from network); 10 Jan 2014 21:57:15 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:15 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 10 Jan 2014 21:57:12 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172757"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:07 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:56:58 -0500
Message-Id: <1389391020-14476-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 1/3] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The first one worked because the p2m.lock is recursive and the PCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}
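The bug class this patch fixes - an error path returning while the lock taken by the lookup is still held - can be illustrated with a toy recursive-lock counter. This is a hypothetical sketch of the control-flow shapes, not Xen's actual p2m code.

```c
/* Toy lock counter standing in for the p2m lock taken by get_gfn()
 * and released by put_gfn(). */
static int lock_count;

static void get_gfn_stub(void) { lock_count++; }
static void put_gfn_stub(void) { lock_count--; }

/* Buggy shape: the read-only check bails out with the lock held,
 * like the old "return INVALID_MFN" error path. */
static int lookup_buggy(int readonly)
{
    get_gfn_stub();
    if (readonly)
        return -1;            /* leaks the lock */
    put_gfn_stub();
    return 0;
}

/* Fixed shape, mirroring the patch: every exit releases the lock. */
static int lookup_fixed(int readonly)
{
    int rc = 0;
    get_gfn_stub();
    if (readonly)
        rc = -1;
    put_gfn_stub();           /* released on success and error alike */
    return rc;
}
```

With the buggy shape the first leaked acquisition goes unnoticed (the lock is recursive), and only a later acquirer on another PCPU hangs - matching the symptoms above.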

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Tested-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/debug.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..435bd40 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
+    }
+    else
+        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     return mfn;
 }
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4q-0005Rh-LQ; Fri, 10 Jan 2014 21:57:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4o-0005RP-TV
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:15 +0000
Received: from [193.109.254.147:20592] by server-15.bemta-14.messagelabs.com
	id 94/1A-22186-ABC60D25; Fri, 10 Jan 2014 21:57:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389391031!6657399!2
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31085 invoked from network); 10 Jan 2014 21:57:13 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:13 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:57:13 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172767"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:08 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:56:59 -0500
Message-Id: <1389391020-14476-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 2/3] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had coded this with XGERR, but gdb will try to read memory without
a direct request from the user.  So the error message can be confusing.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 0622ebd..3b2a285 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: [Xen-devel] [BUGFIX][PATCH v4 1/3] dbg_rw_guest_mem: need to call
	put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

Using a 1G hvm domU (in grub) and gdbsx:

(gdb) set arch i8086
warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
of GDB.  Attempting to continue with the default i8086 settings.

The target architecture is assumed to be i8086
(gdb) target remote localhost:9999
Remote debugging using localhost:9999
Remote debugging from host 127.0.0.1
0x0000d475 in ?? ()
(gdb) x/1xh 0x6ae9168b

Will reproduce this bug.

With a debug=y build you will get:

Assertion '!preempt_count()' failed at preempt.c:37

For a debug=n build you will get a dom0 VCPU hung (at some point) in:

         [ffff82c4c0126eec] _write_lock+0x3c/0x50
          ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
          ffff82c4c0158885  dbg_rw_mem+0x115/0x360
          ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
          ffff82c4c01709ed  get_page+0x2d/0x100
          ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
          ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
          ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
          ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
          ffff82c4c012938b  add_entry+0x4b/0xb0
          ffff82c4c02223f9  syscall_enter+0xa9/0xae

And gdb output:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     0x3024
(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Ignoring packet error, continuing...
Reply contains invalid hex digit 116

The 1st one worked because the p2m.lock is recursive and the PCPU
had not yet changed.

crash reports (for example):

crash> mm_rwlock_t 0xffff83083f913010
struct mm_rwlock_t {
  lock = {
    raw = {
      lock = 2147483647
    },
    debug = {<No data fields>}
  },
  unlock_level = 0,
  recurse_count = 1,
  locker = 1,
  locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
}

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
Tested-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/debug.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index a67a192..435bd40 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -63,10 +63,17 @@ dbg_hvm_va2mfn(dbgva_t vaddr, struct domain *dp, int toaddr,
     if ( p2m_is_readonly(gfntype) && toaddr )
     {
         DBGP2("kdb:p2m_is_readonly: gfntype:%x\n", gfntype);
-        return INVALID_MFN;
+        mfn = INVALID_MFN;
+    }
+    else
+        DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
+
+    if ( mfn == INVALID_MFN )
+    {
+        put_gfn(dp, *gfn);
+        *gfn = INVALID_GFN;
     }
 
-    DBGP2("X: vaddr:%lx domid:%d mfn:%lx\n", vaddr, dp->domain_id, mfn);
     return mfn;
 }
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4q-0005Rh-LQ; Fri, 10 Jan 2014 21:57:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4o-0005RP-TV
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:15 +0000
Received: from [193.109.254.147:20592] by server-15.bemta-14.messagelabs.com
	id 94/1A-22186-ABC60D25; Fri, 10 Jan 2014 21:57:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389391031!6657399!2
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31085 invoked from network); 10 Jan 2014 21:57:13 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:13 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:57:13 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172767"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:08 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:56:59 -0500
Message-Id: <1389391020-14476-3-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 2/3] xg_read_mem: Report on error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I had coded this with XGERR, but gdb will try to read memory without
a direct request from the user, so the error message can be confusing.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 0622ebd..3b2a285 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -775,7 +775,7 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
 {
     struct xen_domctl_gdbsx_memio *iop = &domctl.u.gdbsx_guest_memio;
     union {uint64_t llbuf8; char buf8[8];} u = {0};
-    int i;
+    int i, rc;
 
     XGTRC("E:gva:%llx tobuf:%lx len:%d\n", guestva, tobuf, tobuf_len);
 
@@ -786,7 +786,9 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->len = tobuf_len;
     iop->gwr = 0;       /* not writing to guest */
 
-    _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len);
+    if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
+        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
+              iop->remain, errno, rc);
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4p-0005RV-6l; Fri, 10 Jan 2014 21:57:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4n-0005RK-WD
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:14 +0000
Received: from [193.109.254.147:58069] by server-15.bemta-14.messagelabs.com
	id 83/1A-22186-9BC60D25; Fri, 10 Jan 2014 21:57:13 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389391031!6657399!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30618 invoked from network); 10 Jan 2014 21:57:12 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:12 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:57:10 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172734"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:02 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:56:57 -0500
Message-Id: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 0/3] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes v3 to v4:
    Drop patch 1 -- already committed.
    Drop patch 3 -- does not need to be in 4.4 as far as I know.
    Added "Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>" to all 3.

Changes v2 to v3:
  From Jan Beulich:
    Add else to keep debug log the same.

Changes v1 to v2:
  From Konrad Rzeszutek Wilk and Ian Campbell:

    ??

  Split out the emacs local variables addition into its own new patch (number 1).

  From Andrew Cooper:

    What does matter is that the caller of dbg_hvm_va2mfn() should
    not have to cleanup a reference taken when it returns an error.

  So use his version of the change.

  From Ian Campbell:

    In all three cases what is missing is the "why" and the
    appropriate analysis from
    http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
    i.e. what is the impact of the bug (i.e. what are the advantages
    of the fix) and what are the risks of this change causing
    further breakage? I'm not really in a position to evaluate the
    risk of a change in gdbsx, so someone needs to tell me.

    I think given that gdbsx is a somewhat "peripheral" bit of code
    and that it is targeted at developers (who might be better able
    to tolerate any resulting issues and more able to apply
    subsequent fixups than regular users) we can accept a larger
    risk than we would with a change to the hypervisor itself etc
    (that's assuming that all of these changes cannot potentially
    impact non-debugger usecases which I expect is the case from the
    function names but I would like to see confirmed).
 
  My take on this below.

  From Mukesh Rathor:

    Ooopsy... my thought was that an application should not even
    look at remain if the hcall/syscall failed, but I forgot that
    when writing gdbsx itself :). Think of it this way: if the call
    didn't even make it to xen, and for some reason the ioctl
    returned a non-zero rc, then remain would still be zero. So I
    think we should fix gdbsx instead of here:

  Dropped old patch 4, Added new patch 5.


Freeze:

  The benefit of this series is that the hypervisor stops calling
  panic (debug=y) or hanging (debug=n).  Also, a person using gdbsx
  no longer sees random data from gdbsx's heap in place of guest
  data.

  The risk is that gdbsx does something new wrong.

  My understanding is that all the changes here only affect gdbsx
  and so are very limited in scope.

Release manager requests:
  patch 1 and 3 are optional for 4.4.0.
  patch 2 should be in 4.4.0
  patch 4 and 5 would be good to be in 4.4.0

While tracking down a bug in seabios/grub I found the bug in patch
2.

There are 2 ways that gfn will not be INVALID_GFN and yet mfn will
be INVALID_MFN.

  1) p2m_is_readonly(gfntype) and writing memory.
  2) the requested vaddr does not exist.

This may only be an issue for an HVM guest that is in real mode
(i.e. no page tables).

Patch 3 is debug logging that was used to find the 2nd way.


Andrew Cooper (1):
  dbg_rw_guest_mem: need to call put_gfn in error path.

Don Slutz (2):
  xg_read_mem: Report on error.
  xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.

 tools/debugger/gdbsx/xg/xg_main.c | 15 +++++++++++----
 xen/arch/x86/debug.c              | 11 +++++++++--
 2 files changed, 20 insertions(+), 6 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4r-0005Rt-0z; Fri, 10 Jan 2014 21:57:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4p-0005RQ-EW
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:15 +0000
Received: from [193.109.254.147:20612] by server-10.bemta-14.messagelabs.com
	id FA/E5-20752-ABC60D25; Fri, 10 Jan 2014 21:57:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389391031!6657399!3
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31446 invoked from network); 10 Jan 2014 21:57:14 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:14 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:57:13 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172788"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:11 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:57:00 -0500
Message-Id: <1389391020-14476-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 3/3] xg_main: If
	XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Without this gdb does not report an error.

With this patch and using a 1G hvm domU:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Cannot access memory at address 0x6ae9168b

Drop the output of iop->remain because it will most likely be zero,
which leads to a strange message:

ERROR: failed to read 0 bytes. errno:14 rc:-1

Add the address to the write error message because it may be the
only message displayed.

Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
error and so iop->remain will be zero.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 3b2a285..0fc3f82 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->gwr = 0;       /* not writing to guest */
 
     if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
-        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
-              iop->remain, errno, rc);
+    {
+        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);
+        return tobuf_len;
+    }
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
@@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
     iop->gwr = 1;       /* writing to guest */
 
     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
-        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n", 
-              iop->remain, errno, rc);
+    {
+        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
+              guestva, errno, rc);
+        return buflen;
+    }
     return iop->remain;
 }
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 21:57:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 21:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1k4r-0005Rt-0z; Fri, 10 Jan 2014 21:57:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1k4p-0005RQ-EW
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:57:15 +0000
Received: from [193.109.254.147:20612] by server-10.bemta-14.messagelabs.com
	id FA/E5-20752-ABC60D25; Fri, 10 Jan 2014 21:57:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389391031!6657399!3
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31446 invoked from network); 10 Jan 2014 21:57:14 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:57:14 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 21:57:13 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629172788"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.102])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 21:57:11 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 16:57:00 -0500
Message-Id: <1389391020-14476-4-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [BUGFIX][PATCH v4 3/3] xg_main: If
	XEN_DOMCTL_gdbsx_guestmemio fails then force error.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Without this gdb does not report an error.

With this patch and using a 1G hvm domU:

(gdb) x/1xh 0x6ae9168b
0x6ae9168b:     Cannot access memory at address 0x6ae9168b

Drop the output of iop->remain because it will most likely be zero,
leading to the strange message:

ERROR: failed to read 0 bytes. errno:14 rc:-1

Add the address to the write error message because it may be the only
message displayed.

Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
error and so iop->remain will be zero.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/debugger/gdbsx/xg/xg_main.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 3b2a285..0fc3f82 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -787,8 +787,10 @@ xg_read_mem(uint64_t guestva, char *tobuf, int tobuf_len, uint64_t pgd3val)
     iop->gwr = 0;       /* not writing to guest */
 
     if ( (rc = _domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, tobuf, tobuf_len)) )
-        XGTRC("ERROR: failed to read %d bytes. errno:%d rc:%d\n",
-              iop->remain, errno, rc);
+    {
+        XGTRC("ERROR: failed to read bytes. errno:%d rc:%d\n", errno, rc);
+        return tobuf_len;
+    }
 
     for(i=0; i < XGMIN(8, tobuf_len); u.buf8[i]=tobuf[i], i++);
     XGTRC("X:remain:%d buf8:0x%llx\n", iop->remain, u.llbuf8);
@@ -818,8 +820,11 @@ xg_write_mem(uint64_t guestva, char *frombuf, int buflen, uint64_t pgd3val)
     iop->gwr = 1;       /* writing to guest */
 
     if ((rc=_domctl_hcall(XEN_DOMCTL_gdbsx_guestmemio, frombuf, buflen)))
-        XGERR("ERROR: failed to write %d bytes. errno:%d rc:%d\n", 
-              iop->remain, errno, rc);
+    {
+        XGERR("ERROR: failed to write bytes to %llx. errno:%d rc:%d\n",
+              guestva, errno, rc);
+        return buflen;
+    }
     return iop->remain;
 }
 
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 22:08:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 22:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1kFf-0006pY-VC; Fri, 10 Jan 2014 22:08:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W1kFe-0006lL-3i
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 22:08:26 +0000
Received: from [193.109.254.147:40786] by server-14.bemta-14.messagelabs.com
	id 55/DB-12628-95F60D25; Fri, 10 Jan 2014 22:08:25 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389391703!10159408!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32384 invoked from network); 10 Jan 2014 22:08:24 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 22:08:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Jan 2014 22:08:23 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,640,1384300800"; d="scan'208";a="629181809"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.182])
	by fldsmtpi03.verizon.com with ESMTP; 10 Jan 2014 22:08:21 +0000
Message-ID: <52D06F55.7050002@terremark.com>
Date: Fri, 10 Jan 2014 17:08:21 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <1389140748-26524-1-git-send-email-dslutz@verizon.com>
	<1389140748-26524-4-git-send-email-dslutz@verizon.com>
	<1389177510.4883.11.camel@kazak.uk.xensource.com>
	<52CD60AA.9010607@terremark.com>
	<1389199626.4883.112.camel@kazak.uk.xensource.com>
	<20140108170442.GA75747@deinos.phlegethon.org>
	<1389203062.27473.8.camel@kazak.uk.xensource.com>
	<20140108163822.30b6f87a@mantra.us.oracle.com>
	<1389261548.27473.42.camel@kazak.uk.xensource.com>
	<52CEC978.7040705@terremark.com>
	<52CEDCD30200007800112096@nat28.tlf.novell.com>
	<52CEE2E2.2030501@terremark.com>
	<1389373999.6423.42.camel@kazak.uk.xensource.com>
	<52D062FE.5000803@terremark.com>
In-Reply-To: <52D062FE.5000803@terremark.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v3 3/5] dbg_rw_guest_mem: Conditionally
 enable debug log output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/14 16:15, Don Slutz wrote:
> On 01/10/14 12:13, Ian Campbell wrote:
>> On Thu, 2014-01-09 at 12:56 -0500, Don Slutz wrote:
>>> On 01/09/14 11:30, Jan Beulich wrote:
>>>>>>> On 09.01.14 at 17:08, Don Slutz <dslutz@verizon.com> wrote:
>>>>> Based on Mukesh's statement, attached is the rebased version of 
>>>>> this patch
>>>>> (labeled v3).  I included Mukesh's ack.
>>>> Unless this is meant just for reviewing purposes (albeit even then
>>>> it's likely problematic), could you please get used to sending
>>>> patch revisions with mail subjects (i.e. not retaining the prior
>>>> version indicator), so there is a reasonable chance to reconstruct
>>>> things by searching just the titles in a mail archive. (It's still 
>>>> fine -
>>>> at least as far as I'm concerned - to reply to an earlier version,
>>>> thus tying things into a single thread on the archive.)
>>>>
>>>> Jan
>>>>
>>> I will try to.  I had not noticed this in the past.
>> Thanks, as Jan says it is very confusing.
>>
>> If there are tools things outstanding in this series which should be for
>> 4.4 then I don't know what is where or what has been acked.
>>
>> Please can resend whatever you think is outstanding for 4.4 as a fresh
>> thread with a suitable vN larger than any of the ones mentioned in any
>> of the replies here and with the acks collected.
>>
>> Ian.
>>
> Will do.  I expect ~1 hour to rebase, build and quick test.
>
>    -Don


I have sent out v4 of the rest and adjusted this thread to v3.  I did
not include this one in 4.4 because:

1) It is debug logging.

2) I am not 100% sure about volatile, __read_mostly, __used, __used__,
and whether to drop static.

3) Most developers cannot change the dbg_debug value without a
recompile, in which case volatile is not needed for them.

So I think it is fine to delay this until 4.5 is open.

     -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 22:27:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 22:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1kYJ-0008F7-VG; Fri, 10 Jan 2014 22:27:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W1kYI-0008F1-PJ
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 22:27:42 +0000
Received: from [85.158.139.211:47982] by server-10.bemta-5.messagelabs.com id
	ED/38-01405-ED370D25; Fri, 10 Jan 2014 22:27:42 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389392859!9109234!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32341 invoked from network); 10 Jan 2014 22:27:41 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Jan 2014 22:27:41 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AMRYPh001838
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 22:27:35 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AMRXsv005922
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 22:27:34 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AMRXvs004285; Fri, 10 Jan 2014 22:27:33 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 14:27:33 -0800
Message-ID: <52D073F0.5020400@oracle.com>
Date: Fri, 10 Jan 2014 17:28:00 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
In-Reply-To: <20140110213746.GA933@aepfle.de>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 04:37 PM, Olaf Hering wrote:
> On Fri, Jan 10, Boris Ostrovsky wrote:
>
>> If the call below fails, is it safe to continue using discard feature? At
>> the least, are discard_granularity and discard_alignment guaranteed to have
>> sane/safe values?
> Its up to the toolstack to provide sane values. In the worst case
> discard fails. In this specific case the three values are optional, so
> the calls can fail. I do not know what happens if the backend device
> actually needs the values, but the frontend can not send proper discard
> requests. Hopefully it will not damage the hardware..

I don't know how the discard code works, but it seems to me that if you
pass, for example, zero as discard_granularity (which may happen if
xenbus_gather() fails), then blkdev_issue_discard() in the backend will
set the granularity to 1 and continue with the discard. This may not be
what the guest admin requested, and he won't know about it since no
error message is printed anywhere.

Similarly, if xenbus_gather("discard-secure") fails, I think the code
will assume that secure discard has not been requested. I don't know
what security implications this will have, but it sounds bad to me.

I think we should at least clear feature_discard and print an error in
the log if *either* of the xenbus_gather() calls fails.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 22:50:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 22:50:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1ktX-0001s3-Hn; Fri, 10 Jan 2014 22:49:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W1ktX-0001ry-0X
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 22:49:39 +0000
Received: from [85.158.137.68:10905] by server-9.bemta-3.messagelabs.com id
	AF/CD-13104-20970D25; Fri, 10 Jan 2014 22:49:38 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389394177!4829465!1
X-Originating-IP: [81.169.146.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32643 invoked from network); 10 Jan 2014 22:49:37 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.223)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 22:49:37 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389394177; l=392;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=/xiePCIil0nugz/OevA7WCN3RLk=;
	b=HIBtXvrll89NgBoWtUjKCqhwbXuUoDZhlbjm1D5it8HT8hWBaGssmLP1S4SmDDgP9lJ
	Vm4E5cXnuB4c2BGuzKvRC2vScbQbS5CKGQOLKCq0HA9hjkyVWGvIXSNkp4aeXrexE6Qv4
	ylF/OXqyaDyWe8/JfPpKyT/X6vuIroAwdbI=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7iWQ==
Received: from probook.site (ip-80-226-24-8.vodafone-net.de [80.226.24.8])
	by smtp.strato.de (RZmta 32.20 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 906475q0AMnW51Q ; 
	Fri, 10 Jan 2014 23:49:32 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id A063A5024C; Fri, 10 Jan 2014 23:49:27 +0100 (CET)
Date: Fri, 10 Jan 2014 23:49:27 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20140110224927.GA14824@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D073F0.5020400@oracle.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, Boris Ostrovsky wrote:

> I think we should at clear feature_discard and print an error in the log if
> *either* of xenbus_gather() calls fail.

Are you sure about that? AFAIK many other properties are optional as
well. I don't think there is a formal spec for the discard-related
properties. Should every backend be required to provide all four
properties?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 22:57:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 22:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1l18-00021Z-OJ; Fri, 10 Jan 2014 22:57:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W1l17-00021U-M1
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 22:57:29 +0000
Received: from [85.158.143.35:7479] by server-1.bemta-4.messagelabs.com id
	F0/D5-02132-9DA70D25; Fri, 10 Jan 2014 22:57:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389394646!11000578!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3891 invoked from network); 10 Jan 2014 22:57:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 22:57:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0AMvMcD029556
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 10 Jan 2014 22:57:22 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0AMvLVp026618
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 10 Jan 2014 22:57:21 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0AMvK95010536; Fri, 10 Jan 2014 22:57:21 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 14:57:20 -0800
Message-ID: <52D07AEC.6060209@oracle.com>
Date: Fri, 10 Jan 2014 17:57:48 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140110224927.GA14824@aepfle.de>
In-Reply-To: <20140110224927.GA14824@aepfle.de>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 05:49 PM, Olaf Hering wrote:
> On Fri, Jan 10, Boris Ostrovsky wrote:
>
>> I think we should at least clear feature_discard and print an error in the log if
>> *either* of the xenbus_gather() calls fails.
> Are you sure about that? AFAIK many other properties are optional as
> well. I don't think there is a formal spec for the discard-related
> properties. Should every backend be required to provide all four
> properties?

It's not about whether the properties are required or not. It's that they
may have been set by the admin but we ignored them. I am particularly
concerned about the security settings.

Can you determine from the error whether the call failed or the property
wasn't available?

Alternatively, we may have to require of the toolstack that if
feature-discard is provided then all three of these properties are provided
as well. And then you disable discard on any error.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 23:20:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 23:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1lMx-000478-BA; Fri, 10 Jan 2014 23:20:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W1lMu-000473-N8
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 23:20:00 +0000
Received: from [85.158.143.35:56824] by server-3.bemta-4.messagelabs.com id
	4B/24-32360-02080D25; Fri, 10 Jan 2014 23:20:00 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389395997!11050594!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18370 invoked from network); 10 Jan 2014 23:19:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 23:19:59 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0ANJuGI015643
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 23:19:57 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0ANJtrf004652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 23:19:56 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0ANJtPt021470
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 23:19:55 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 15:19:55 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Fri, 10 Jan 2014 18:20:03 -0500
Message-Id: <1389396003-32647-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH] compat: Allow CHECK_FIELD_COMMON_ macro to deal
	with fields that are handles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The CHECK_FIELD_COMMON_ and CHECK_FIELD_COMMON macros verify that a field in
the 64- and 32-bit versions of a structure is at the same offset by comparing
the address of the field in both structures.

However, if the field itself is a handle, it is either

typedef struct {
    <type> *p;
} __guest_handle_<type>

or

typedef struct {
    compat_ptr_t c;
    <type> *_[0] __attribute__((__packed__));
} __compat_handle_<type>

and the compiler will warn that we are trying to compare addresses of
different structure types.

Casting both addresses to (void *) avoids this warning.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/include/xen/compat.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/compat.h b/xen/include/xen/compat.h
index ca60699..6de2d14 100644
--- a/xen/include/xen/compat.h
+++ b/xen/include/xen/compat.h
@@ -158,14 +158,14 @@ static inline int name(xen_ ## t ## _t *x, compat_ ## t ## _t *c) \
 { \
     BUILD_BUG_ON(offsetof(xen_ ## t ## _t, f) != \
                  offsetof(compat_ ## t ## _t, f)); \
-    return &x->f == &c->f; \
+    return (void *)&x->f == (void *)&c->f; \
 }
 #define CHECK_FIELD_COMMON_(k, name, n, f) \
 static inline int name(k xen_ ## n *x, k compat_ ## n *c) \
 { \
     BUILD_BUG_ON(offsetof(k xen_ ## n, f) != \
                  offsetof(k compat_ ## n, f)); \
-    return &x->f == &c->f; \
+    return (void *)&x->f == (void *)&c->f; \
 }
 
 #define CHECK_FIELD(t, f) \
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 10 23:34:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 23:34:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1laM-0004tz-3R; Fri, 10 Jan 2014 23:33:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W1jCx-0001gp-3t
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:01:35 +0000
Received: from [85.158.143.35:21831] by server-1.bemta-4.messagelabs.com id
	61/DD-02132-EAF50D25; Fri, 10 Jan 2014 21:01:34 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389387692!10962525!1
X-Originating-IP: [72.30.239.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19800 invoked from network); 10 Jan 2014 21:01:33 -0000
Received: from nm32-vm7.bullet.mail.bf1.yahoo.com (HELO
	nm32-vm7.bullet.mail.bf1.yahoo.com) (72.30.239.143)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:01:33 -0000
Received: from [98.139.212.150] by nm32.bullet.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:32 -0000
Received: from [98.139.212.227] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:31 -0000
Received: from [127.0.0.1] by omp1036.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:31 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 980863.35881.bm@omp1036.mail.bf1.yahoo.com
Received: (qmail 10790 invoked by uid 60001); 10 Jan 2014 21:01:31 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389387691; bh=bMjwHPvO1TirDYintrfY/dVuzrxbAKTclUqkbviG+rs=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=xtS5gAu6qo9/VDJn38an/QmgxYVg9uUwG1TaRdoAHshwLMfnVQMKJwQpfMzH7JfexuFPathZI+mRb/0QK5nrMaPsRGSQMCftDb4nWY1Q1MzaFtZGKyhlRq0LtXLQooBHbcpgdvi5mSXE/r7T3K7iouYmBluNu+par+yAiJgykUY=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=b5RA0y/PZgQuJ6EAdpM5lCkcYMV4JuFO5+VZKc3n0AMj/LzxiSnkR05uNi+zCeVznv3wfaLRhviarbjoKXQwJS/PuR4gLxZiAVFkRqs4ux6jbCdJtOaWgAFG3YqrNOWeKBFcQ9+quSMNC8mqlNgSXsinnuX7jDeIAwFOZnTPMnU=;
X-YMail-OSG: tXtlnvsVM1kUXrr25dyA_kcrpk9IB27AtASJNsWwKoozlas
	rf8_nQ7Mb7xeWx0W04xcuBRrln5fbtgf6ZxaUX.MmIJd5xg1Gtn8vdwBs3ev
	vUCZ65JRWTF5kKgGIrIJ8iaGVnxQR6fzAe8M.ygNWBCfVOjIJR9nFLZT4kao
	2NMVQrwlzxr_kpx3glka_zo9iuqWLLKIQLKk6BZDZu4iFhPGUUxmNfbojns6
	k03yXg1LYQjSdlDMRvwFSobNMudMFiqGI2VLFI37.ojozvJjLdOc1toZ6ZCH
	mZhxqTrdBodup5Wm8dTzGJ_hKvYQYEUiao2ulib6j3IICqRbDfGnnfHLa89t
	wLlXFdGLB_QIzjSRVjpgvhwQxKKhBY6d6Q.1sbof26Q.KeeUH7hVYdW_DXwD
	Z.4NBfmXMxhd3Y194wKMv6NnftuiOX.hIxKCuiqFZoy5xkdT15m3SZ8GN5aG
	tAXeF1kjVm3N5V7lf8VhffHsQZ.sV9mtcCy13vhfaHgJEUQwO94QMVGOkHNZ
	A4FxSMHIMvkz2mP89dKaj1rm7GIA8IVZX4O0PM.eQWMh_dxzJvZRsSxgWueT q
Received: from [192.227.225.3] by web160201.mail.bf1.yahoo.com via HTTP;
	Fri, 10 Jan 2014 13:01:31 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8KSSBkbyBvbmUgbWlncmF0aW9uIGFuZCB0cmFjZSBpbiBtaWdyYXRpb24gdG8gY29tbWFuZCAieGVudHJhY2UgLUQgLWUgYWxsIC1TIDI1NiAvdGVzdC50cmFjZSIKSSBjYW4gZ2V0IHRvdGFsIHRpbWUgbWlncmF0aW9uIG9mIGNvbW1hbmQgInRpbWUgbWlncmF0aW9uIC1sIHVidW50dTExIDE5Mi4xNjguMS4xIsKgwqBCdXQgaSBkb24ndCBrbm93IGhvdyBnZXQgIkRvd250aW1lIiBhbmQgImRpcnR5IHBhZ2VzIiBvZiB0ZXN0LnRyYWNlIDotKCDCoG9yIGZyb20gYW5vdGhlciB3YXkuLi4KwqAKCkFkZWwBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
Message-ID: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Date: Fri, 10 Jan 2014 13:01:31 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Xen <xen-devel@lists.xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Matthew Daley <mattjd@gmail.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 10 Jan 2014 23:33:53 +0000
Subject: [Xen-devel] how get Downtime?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0136286502603081148=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0136286502603081148==
Content-Type: multipart/alternative; boundary="958772500-1387471485-1389387691=:14476"

--958772500-1387471485-1389387691=:14476
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hello,
I performed one migration and traced it with the command "xentrace -D -e all
-S 256 /test.trace".
I can get the total migration time from the command "time migration -l
ubuntu11 192.168.1.1", but I don't know how to get the "Downtime" and "dirty
pages" from test.trace :-( or in another way...

Adel Amani
M.Sc. Candidate @ Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir
--958772500-1387471485-1389387691=:14476--


--===============0136286502603081148==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0136286502603081148==--


From xen-devel-bounces@lists.xen.org Fri Jan 10 23:34:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 23:34:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1laM-0004tz-3R; Fri, 10 Jan 2014 23:33:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W1jCx-0001gp-3t
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 21:01:35 +0000
Received: from [85.158.143.35:21831] by server-1.bemta-4.messagelabs.com id
	61/DD-02132-EAF50D25; Fri, 10 Jan 2014 21:01:34 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389387692!10962525!1
X-Originating-IP: [72.30.239.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19800 invoked from network); 10 Jan 2014 21:01:33 -0000
Received: from nm32-vm7.bullet.mail.bf1.yahoo.com (HELO
	nm32-vm7.bullet.mail.bf1.yahoo.com) (72.30.239.143)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Jan 2014 21:01:33 -0000
Received: from [98.139.212.150] by nm32.bullet.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:32 -0000
Received: from [98.139.212.227] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:31 -0000
Received: from [127.0.0.1] by omp1036.mail.bf1.yahoo.com with NNFMP;
	10 Jan 2014 21:01:31 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 980863.35881.bm@omp1036.mail.bf1.yahoo.com
Received: (qmail 10790 invoked by uid 60001); 10 Jan 2014 21:01:31 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389387691; bh=bMjwHPvO1TirDYintrfY/dVuzrxbAKTclUqkbviG+rs=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=xtS5gAu6qo9/VDJn38an/QmgxYVg9uUwG1TaRdoAHshwLMfnVQMKJwQpfMzH7JfexuFPathZI+mRb/0QK5nrMaPsRGSQMCftDb4nWY1Q1MzaFtZGKyhlRq0LtXLQooBHbcpgdvi5mSXE/r7T3K7iouYmBluNu+par+yAiJgykUY=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=b5RA0y/PZgQuJ6EAdpM5lCkcYMV4JuFO5+VZKc3n0AMj/LzxiSnkR05uNi+zCeVznv3wfaLRhviarbjoKXQwJS/PuR4gLxZiAVFkRqs4ux6jbCdJtOaWgAFG3YqrNOWeKBFcQ9+quSMNC8mqlNgSXsinnuX7jDeIAwFOZnTPMnU=;
X-YMail-OSG: tXtlnvsVM1kUXrr25dyA_kcrpk9IB27AtASJNsWwKoozlas
	rf8_nQ7Mb7xeWx0W04xcuBRrln5fbtgf6ZxaUX.MmIJd5xg1Gtn8vdwBs3ev
	vUCZ65JRWTF5kKgGIrIJ8iaGVnxQR6fzAe8M.ygNWBCfVOjIJR9nFLZT4kao
	2NMVQrwlzxr_kpx3glka_zo9iuqWLLKIQLKk6BZDZu4iFhPGUUxmNfbojns6
	k03yXg1LYQjSdlDMRvwFSobNMudMFiqGI2VLFI37.ojozvJjLdOc1toZ6ZCH
	mZhxqTrdBodup5Wm8dTzGJ_hKvYQYEUiao2ulib6j3IICqRbDfGnnfHLa89t
	wLlXFdGLB_QIzjSRVjpgvhwQxKKhBY6d6Q.1sbof26Q.KeeUH7hVYdW_DXwD
	Z.4NBfmXMxhd3Y194wKMv6NnftuiOX.hIxKCuiqFZoy5xkdT15m3SZ8GN5aG
	tAXeF1kjVm3N5V7lf8VhffHsQZ.sV9mtcCy13vhfaHgJEUQwO94QMVGOkHNZ
	A4FxSMHIMvkz2mP89dKaj1rm7GIA8IVZX4O0PM.eQWMh_dxzJvZRsSxgWueT q
Received: from [192.227.225.3] by web160201.mail.bf1.yahoo.com via HTTP;
	Fri, 10 Jan 2014 13:01:31 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8KSSBkbyBvbmUgbWlncmF0aW9uIGFuZCB0cmFjZSBpbiBtaWdyYXRpb24gdG8gY29tbWFuZCAieGVudHJhY2UgLUQgLWUgYWxsIC1TIDI1NiAvdGVzdC50cmFjZSIKSSBjYW4gZ2V0IHRvdGFsIHRpbWUgbWlncmF0aW9uIG9mIGNvbW1hbmQgInRpbWUgbWlncmF0aW9uIC1sIHVidW50dTExIDE5Mi4xNjguMS4xIsKgwqBCdXQgaSBkb24ndCBrbm93IGhvdyBnZXQgIkRvd250aW1lIiBhbmQgImRpcnR5IHBhZ2VzIiBvZiB0ZXN0LnRyYWNlIDotKCDCoG9yIGZyb20gYW5vdGhlciB3YXkuLi4KwqAKCkFkZWwBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
Message-ID: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Date: Fri, 10 Jan 2014 13:01:31 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Xen <xen-devel@lists.xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Matthew Daley <mattjd@gmail.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 10 Jan 2014 23:33:53 +0000
Subject: [Xen-devel] how get Downtime?!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0136286502603081148=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0136286502603081148==
Content-Type: multipart/alternative; boundary="958772500-1387471485-1389387691=:14476"

--958772500-1387471485-1389387691=:14476
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

Hello,
I performed one migration and traced it with the command "xentrace -D -e all -S 256 /test.trace".
I can get the total migration time with the command "time migration -l ubuntu11 192.168.1.1", but I don't know how to get the "Downtime" and "dirty pages" from test.trace :-( or in another way...

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir
--958772500-1387471485-1389387691=:14476
Content-Type: text/html; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

<html><body><div style=3D"color:#000; background-color:#fff; font-family:bo=
okman old style, new york, times, serif;font-size:10pt"><div><span>Hello</s=
pan></div><div style=3D"color: rgb(0, 0, 0); font-size: 13px; font-family: =
'bookman old style', 'new york', times, serif; background-color: transparen=
t; font-style: normal;"><span>I do one migration and trace in migration to =
command "xentrace -D -e all -S 256 /test.trace"</span></div><div style=3D"b=
ackground-color: transparent;"><span>I can get total time migration of comm=
and "time migration -l ubuntu11 192.168.1.1"&nbsp;</span><span style=3D"bac=
kground-color: transparent;">&nbsp;But i don't know how get "Downtime" and =
"dirty pages" of test.trace :-( &nbsp;or from another way...</span></div><d=
iv style=3D"color: rgb(0, 0, 0); font-size: 13px; font-family: 'bookman old=
 style', 'new york', times, serif; background-color: transparent; font-styl=
e: normal;"><span style=3D"font-size: 10pt;">&nbsp;</span><br></div><div><d=
iv
 style=3D"text-align:center;"><span style=3D"font-size: 16px; font-family: =
'times new roman', 'new york', times, serif; line-height: normal;">Adel Ama=
ni</span><br></div><span style=3D"font-family: 'times new roman', 'new york=
', times, serif; line-height: normal;"><div style=3D"font-size:16px;text-al=
ign:center;">M.Sc. Candidate@Computer Engineering Department, University of=
 Isfahan</div><div style=3D"text-align:center;"><span style=3D"font-size:13=
px;">Email: <span style=3D"color:rgb(0, 0, 255);text-decoration:underline;"=
>A.Amani@eng.ui.ac.ir</span></span></div></span></div></div></body></html>
--958772500-1387471485-1389387691=:14476--


--===============0136286502603081148==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0136286502603081148==--


From xen-devel-bounces@lists.xen.org Fri Jan 10 23:50:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jan 2014 23:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1lpt-0006Ki-LR; Fri, 10 Jan 2014 23:49:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jljusten@gmail.com>) id 1W1lpr-0006Kd-Jq
	for xen-devel@lists.xen.org; Fri, 10 Jan 2014 23:49:55 +0000
Received: from [85.158.137.68:6964] by server-11.bemta-3.messagelabs.com id
	34/56-19379-22780D25; Fri, 10 Jan 2014 23:49:54 +0000
X-Env-Sender: jljusten@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389397792!4818715!1
X-Originating-IP: [209.85.223.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23330 invoked from network); 10 Jan 2014 23:49:54 -0000
Received: from mail-ie0-f170.google.com (HELO mail-ie0-f170.google.com)
	(209.85.223.170)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Jan 2014 23:49:54 -0000
Received: by mail-ie0-f170.google.com with SMTP id tq11so5254938ieb.29
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 15:49:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=sS0PdgAn52mXSbB7BwHeR5v4ENgAcmNB3dINWItHJDc=;
	b=Wo1A+6rh+ZKM0v1H1igykBgD7Lgce0Hg5D3f44seEVM4L9v4/yv9dgyqHISTV0tbOR
	QAu0nJAJIe73YFzvKq3clq3d1J86R6u1n5FjrBYrlPYiTGSZbmBA4/ElFa1EtvCczmpQ
	6cYcR6U/MYKHbUIjKZZ1jAw2L3ZzIZp4CHzS6g9inEwwF41MUMWS5kCDyF9vrEJ8/EeW
	aD3WlB/DHUFEAmc5to0Nwa6voCrnoYaMJrwNRnVS7AaeSflKGhV0iYUWDvulJqT/EeGM
	4sO12Mcx9of4ivLCNSkenzFtIvu93KAS39N+ser0SxEMmD1MPN9OWo7yjhJuLBQY3vRj
	yE5g==
MIME-Version: 1.0
X-Received: by 10.43.49.1 with SMTP id uy1mr10592499icb.48.1389397792475; Fri,
	10 Jan 2014 15:49:52 -0800 (PST)
Received: by 10.50.53.233 with HTTP; Fri, 10 Jan 2014 15:49:52 -0800 (PST)
In-Reply-To: <20140110121618.GH29180@zion.uk.xensource.com>
References: <1389228311-2452-1-git-send-email-jordan.l.justen@intel.com>
	<1389228311-2452-17-git-send-email-jordan.l.justen@intel.com>
	<52CF0966.5090404@redhat.com>
	<CAFe8ug__qAuX_4+2inONeCeqY_fU6oAQxYF9h4QCaHNgEpzwFQ@mail.gmail.com>
	<52CF1C02.2030504@redhat.com>
	<CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
	<20140110121618.GH29180@zion.uk.xensource.com>
Date: Fri, 10 Jan 2014 15:49:52 -0800
Message-ID: <CAFe8ug9u3P8u2H46Ws=in0DH4G5hayuU2QFa=1NnbLTfGi5xKA@mail.gmail.com>
From: Jordan Justen <jljusten@gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: "edk2-devel@lists.sourceforge.net" <edk2-devel@lists.sourceforge.net>,
	Laszlo Ersek <lersek@redhat.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [edk2] [PATCH v4 16/26] OvmfPkg: PlatformPei:
 reserve SEC/PEI temp RAM for S3 resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 4:16 AM, Wei Liu <wei.liu2@citrix.com> wrote:
> Thanks for bringing this to Xen-devel!
> On Thu, Jan 09, 2014 at 03:03:06PM -0800, Jordan Justen wrote:
>> On Thu, Jan 9, 2014 at 2:00 PM, Laszlo Ersek <lersek@redhat.com> wrote:
>> > On 01/09/14 22:47, Jordan Justen wrote:
>> >> On Thu, Jan 9, 2014 at 12:41 PM, Laszlo Ersek <lersek@redhat.com> wrote:
>> >>> On 01/09/14 01:45, Jordan Justen wrote:
>> >> I don't think this series claims to enable S3 for Xen, right?
>> >>
>> >> When someone looks at S3 for Xen, I might try to steer them towards
>> >> having Xen call MemDetect again, and branch off for Xen-specific things
>> >> within MemDetect. I was not too excited about that aspect of r14946.
>> >
>> > No, the series doesn't *claim* to do that :), and I didn't test it, but
>> > since I could not see any immediate blocker when running on Xen, I
>> > figured we should add the feature generally, and then Xen users could
>> > happily hunt bugs in the common code. By adding code that doesn't run
>> > specifically on Xen we're making that harder.
>>
>> I'll try to update this to make a best effort of having S3 potentially
>> work for Xen.
>>
>> We should probably see if someone from xen-devel can verify that we
>> haven't managed to break normal OVMF boots on Xen (aside from the S3
>> issue).
>>
>
> Do you have a git tree somewhere? And some critical steps to test this
> series?

git://github.com/jljusten/edk2.git ovmf-s3-v4

For testing S3, I've been doing this:
OvmfPkg/build.sh qemu -enable-kvm \
  -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \
  -cdrom ~/ISO/archlinux-2013.12.01-dual.iso \
  -m 512 -vga qxl

At the bash prompt I run:
echo -n mem > /sys/power/state

Laszlo already pointed out some key issues for Xen with S3, so you
might want to wait until I try to fix those in v5.

-Jordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 01:17:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 01:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1nBf-0007a3-BG; Sat, 11 Jan 2014 01:16:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1nBd-0007Zy-Gz
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 01:16:29 +0000
Received: from [85.158.139.211:63674] by server-12.bemta-5.messagelabs.com id
	CF/CD-30017-C6B90D25; Sat, 11 Jan 2014 01:16:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389402986!7894470!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20835 invoked from network); 11 Jan 2014 01:16:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 01:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,641,1384300800"; d="scan'208";a="91890090"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Jan 2014 01:16:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 20:16:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1nBW-0005b7-AD;
	Sat, 11 Jan 2014 01:16:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1nBV-0000FL-L6;
	Sat, 11 Jan 2014 01:16:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24343-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 01:16:21 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24343: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24343 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24343/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  a01a5595305f7f18ac0477d3f248e8c2b30b051c
baseline version:
 xen                  c5c697651b009a0672faa2a902a0c12d2e975d97

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ branch=xen-4.2-testing
+ revision=a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git a01a5595305f7f18ac0477d3f248e8c2b30b051c:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   c5c6976..a01a559  a01a5595305f7f18ac0477d3f248e8c2b30b051c -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ branch=xen-4.2-testing
+ revision=a01a5595305f7f18ac0477d3f248e8c2b30b051c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git a01a5595305f7f18ac0477d3f248e8c2b30b051c:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   c5c6976..a01a559  a01a5595305f7f18ac0477d3f248e8c2b30b051c -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 01:59:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 01:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1nqf-0002dK-GB; Sat, 11 Jan 2014 01:58:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W1nqd-0002dF-PS
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 01:58:52 +0000
Received: from [85.158.143.35:29723] by server-3.bemta-4.messagelabs.com id
	DD/5E-32360-B55A0D25; Sat, 11 Jan 2014 01:58:51 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389405524!11060935!1
X-Originating-IP: [209.85.128.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2775 invoked from network); 11 Jan 2014 01:58:45 -0000
Received: from mail-qe0-f51.google.com (HELO mail-qe0-f51.google.com)
	(209.85.128.51)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 01:58:45 -0000
Received: by mail-qe0-f51.google.com with SMTP id 1so5268310qee.24
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 17:58:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=+UlokHK2Xt7TNsZ0NtHp04Zjn7jS00VxmD/FOlw45YE=;
	b=QlWuFg17BftVeynSrAMi1s9Ol2b9+WF/cAaTSWhoQQLR0T4xYlPa2OCL9w3/BW+oey
	Xps9/7qcvMmNfDds3g0JnTtwB9D8vHGGHFY5jqPRaMJkubYrIO/RYmvYJyym0IEEbOh5
	ITccJ7G6WSYiXWOpMypsNBib7ylYTVUOHXfiWdy6mOQgcZkCGgoDkq+8I718qSJc5xpk
	yxvWz7GlvMDtXJbFrx8Np2UcvSZ9RODfZpmcRClR3QlCMqBve19CUhiLu1lbp7JdUIrv
	D3qDO3o1OIT8fi1Gjc4oe3UMYrVDYZLUUxYRI3PhuexW4fG5tv6KU9PBebCzXLpYp5JX
	BcpA==
MIME-Version: 1.0
X-Received: by 10.49.70.131 with SMTP id m3mr13939444qeu.59.1389405524401;
	Fri, 10 Jan 2014 17:58:44 -0800 (PST)
Received: by 10.224.114.145 with HTTP; Fri, 10 Jan 2014 17:58:44 -0800 (PST)
In-Reply-To: <1389368924.6423.17.camel@kazak.uk.xensource.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
Date: Sat, 11 Jan 2014 01:58:44 +0000
Message-ID: <CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
	mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
>> Create multiple banks to hold dom0 memory in case of 1:1 mapping
>> if we failed to find 1 large contiguous memory to hold the whole
>> thing.
>
> Thanks. While with my ARM maintainer hat on I'd love for this to go in
> for 4.4 with my acting release manager hat on I think if I have to be
> honest this is too big a change for 4.4 at this stage, which is a
> pity :-(
>
>
>>
>> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>> ---
>>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>>  2 files changed, 121 insertions(+), 39 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index faff88e..bb44cdd 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>>  {
>>      paddr_t start;
>>      paddr_t size;
>> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>>      struct page_info *pg = NULL;
>> -    unsigned int order = get_order_from_bytes(dom0_mem);
>> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>>      int res;
>>      paddr_t spfn;
>>      unsigned int bits;
>>
>> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> +#define MIN_BANK_ORDER 10
>
> 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
> which can be made, but I think in practice the lowest useful bank size
> is going to be somewhat larger than that.

MIN_BANK_ORDER is in pages, so it's 2^10 pages, not 2^10 bytes, which
makes the minimum bank size 4 MByte.

>
> NR_MEM_BANKS is 8 so we also need to consider that.
>
> A 64M dom0 would mean 8M per bank, which seems like a reasonable minimum
> bank size. That would be a MIN_BANK_ORDER of 23. Please include a
> comment explaining where this number comes from.
>
> The other way to look at this would be to calculate it dynamically as
> get_order_from_bytes(dom0_mem / NR_MEM_BANKS).

Yup, you're right. That sounds more reasonable.

>
>> +
>> +    kinfo->mem.nr_banks = 0;
>> +
>> +    /*
>> +     * We always first try to allocate all dom0 memory in 1 bank.
>> +     * However, if we failed to allocate physically contiguous memory
>> +     * from the allocator, then we try to create more than one bank.
>> +     */
>> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>
> I think this can be just
>         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
> (maybe order >= ?) or
>         while (order > MIN_BANK_ORDER )
>         {
>                 ...
>                 order--;
>         }
> I think the first works better.
>
> This does away with the need for cur_order vs order. I think order is
> mostly unused after this patch, also not renaming cur_order would
> hopefully reduce the diff and therefore the "scariness" of the patch wrt
> 4.4 (although that may not be sufficient).

Yes, that's correct. However, I'm intentionally using a different
variable because I thought it would make things more obvious. If you
think it's better to use the same variable, I'll update it.

>
>>      {
>> -        pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
>> -        if ( pg != NULL )
>> -            break;
>> +        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>
> Is cur_bank redundant with index? Also kinfo->mem.nr_banks tells you how
> many banks are filled in.
>
>> +        {
>> +            for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> +                     {
>
> There something a bit odd going on with the whitespace here and in the
> rest of this loop. Perhaps some hard tabs snuck in?

Yes, I noticed that when the patch appeared in the mailing list :)

>
>> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>> +                             if ( pg != NULL )
>> +                                     break;
>> +                     }
>> +
>> +                     if ( !pg )
>> +                             break;
>> +
>> +                     spfn = page_to_mfn(pg);
>> +                     start = pfn_to_paddr(spfn);
>> +                     size = pfn_to_paddr((1 << cur_order));
>> +
>> +                 kinfo->mem.bank[index].start = start;
>> +                 kinfo->mem.bank[index].size = size;
>> +                 index++;
>> +                 kinfo->mem.nr_banks++;
>> +     }
>> +
>> +     if( pg )
>> +             break;
>> +
>> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
>
> <<1 ?

* 2 :)

>
> Is this not just kinfo->mem.nr_banks?

No. In this expression I'm calculating how much more memory is needed
to satisfy dom0's memory size.
At the end of an iteration, I check how much memory has been allocated
(cur_bank banks of order cur_order) against how much was needed
(nr_banks banks of order cur_order), so the number of still-unallocated
banks is (nr_banks - cur_bank + 1).
cur_order is then decremented and the unallocated bank count is doubled
(<< 1), and we do another iteration to satisfy the rest of the
unallocated memory.

>
> The basic structure here is:
>
> for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>
> Shouldn't the iteration over bank be the outer one?
>
> The banks might be different sizes, right?
>

The outer loop will define the order of the allocated bank[s] while
the inner one will define how many banks of that order will be needed.

So you try to allocate one big bank; if that succeeds, you're done. If
not, you double the number of required banks and retry with a smaller
order (order - 1).

The code can indeed allocate banks of different sizes. If we fail to
allocate one big bank, we try to allocate two banks of (order - 1); in
that case we might allocate only the first bank and fail on the second.
So we then try to allocate the memory required for the second one in
two banks.

So the logic is always: at the end of the outer loop, calculate how
many banks we need to allocate for the next iteration, as well as the
required order. All allocations that occur in the same iteration are of
equal size, while each iteration changes the number of banks and the
order depending on how much memory we still need to allocate.

> Also with either approach then depending on where memory is available
> this may result in the memory not being allocated in and/or that they
> are not in increasing order (in fact, because Xen prefer to allocate
> higher memory first it seems likely that it will be in reverse order).

Yes, there's no restriction whatsoever on the ordering of the
addresses. However, each allocation tries to place the bank as early as
possible:

for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
{
    pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
    if ( pg != NULL )
        break;
}

>
> I don't know if either of those things matter. What does ePAPR have to
> say on the matter?

There is no mention of address-range ordering (at least in section 3.4,
"Memory Node").

> I'd expect that the ordering might matter from the point of view of
> putting the kernel in the first bank -- since that may no longer be the
> lowest address.
>
In the patch, when I set the loadaddr of the image, I search through
the banks for the lowest-address bank, not simply the first one.


> You don't seem to reference kinfo->unassigned_mem anywhere after the
> initial order calculation -- I think you need to subtract memory from it
> on each iteration, or else I'm not sure you will actually get the right
> amount allocated in all cases.

It's being properly updated in the final mapping loop.

>
>>      }
>>
>> -    if ( !pg )
>> -        panic("Failed to allocate contiguous memory for dom0");
>> +     if ( !pg )
>> +             panic("Failed to allocate contiguous memory for dom0");
>>
>> -    spfn = page_to_mfn(pg);
>> -    start = pfn_to_paddr(spfn);
>> -    size = pfn_to_paddr((1 << order));
>> +     for ( index = 0; index < kinfo->mem.nr_banks; index++ )
>> +     {
>> +         start = kinfo->mem.bank[index].start;
>> +         size = kinfo->mem.bank[index].size;
>> +         spfn = paddr_to_pfn(start);
>> +         order = get_order_from_bytes(size);
>>
>> -    // 1:1 mapping
>> -    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0)\n",
>> -           start, start + size);
>> -    res = guest_physmap_add_page(d, spfn, spfn, order);
>> -
>> -    if ( res )
>> -        panic("Unable to add pages in DOM0: %d", res);
>> +         printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0 - order : %i)\n",
>> +                 start, start + size, order);
>> +         res = guest_physmap_add_page(d, spfn, spfn, order);
>
> Can this not be done as it is allocated rather than in a second pass?
>
Yes, that's possible.

>>
>> -    kinfo->mem.bank[0].start = start;
>> -    kinfo->mem.bank[0].size = size;
>> -    kinfo->mem.nr_banks = 1;
>> +         if ( res )
>> +             panic("Unable to add pages in DOM0: %d", res);
>>
>> -    kinfo->unassigned_mem -= size;
>> +         kinfo->unassigned_mem -= size;
>> +     }
>>  }
>>
>>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> index 6a5772b..658c3de 100644
>> --- a/xen/arch/arm/kernel.c
>> +++ b/xen/arch/arm/kernel.c
>> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>>      const paddr_t total = initrd_len + dtb_len;
>>
>>      /* Convenient */
>
> If you are going to remove all of the following convenience variables
> then this comment is no longer correct.
>
> (Convenient here means a shorter local name for something complex)

It still applies to the remaining variables, except probably "unsigned
int i, min_i = -1;", so I'll move the comment one line down.

>
>> -    const paddr_t mem_start = info->mem.bank[0].start;
>> -    const paddr_t mem_size = info->mem.bank[0].size;
>> -    const paddr_t mem_end = mem_start + mem_size;
>> -    const paddr_t kernel_size = kernel_end - kernel_start;
>> +    unsigned int i, min_i = -1;
>> +    bool_t same_bank = false;
>> +
>> +    paddr_t mem_start, mem_end, mem_size;
>> +    paddr_t kernel_size;
>>
>>      paddr_t addr;
>>
>> -    if ( total + kernel_size > mem_size )
>> -        panic("Not enough memory in the first bank for the dtb+initrd");
>> +    kernel_size = kernel_end - kernel_start;
>> +
>> +    for ( i = 0; i < info->mem.nr_banks; i++ )
>> +    {
>> +        mem_start = info->mem.bank[i].start;
>> +        mem_size = info->mem.bank[i].size;
>> +        mem_end = mem_start + mem_size - 1;
>> +
>> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
>> +            same_bank = true;
>> +        else
>> +            same_bank = false;
>> +
>> +        if ( same_bank && ((total + kernel_size) < mem_size) )
>> +            min_i = i;
>> +        else if ( (!same_bank) && (total < mem_size) )
>> +            min_i = i;
>> +        else
>> +            continue;
>
> What is all this doing?

It searches through the banks for one that fits the initrd and dtb.
The calculation is slightly different depending on whether that bank is
the same one used by the kernel or not (as mentioned previously, I
don't blindly choose the first bank for the kernel; I search all of the
banks for the lowest one, address-wise). In the same_bank case I just
use the old calculation that was already in the code, whereas in the
!same_bank case I simply start at the end of the bank.

>
>> +
>> +        break;
>> +    }
>> +
>> +    if ( min_i == -1 )
>> +        panic("Not enough memory for the dtb+initrd");
>> +
>> +    mem_start = info->mem.bank[min_i].start;
>> +    mem_size = info->mem.bank[min_i].size;
>> +    mem_end = mem_start + mem_size;
>>
>>      /*
>>       * DTB must be loaded such that it does not conflict with the
>> @@ -104,17 +132,25 @@ static void place_modules(struct kernel_info *info,
>>       * just after the kernel, if there is room, otherwise just before.
>>       */
>>
>> -    if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
>> -        addr = MIN(mem_start + MB(128), mem_end - total);
>> -    else if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
>> -        addr = ROUNDUP(kernel_end, MB(2));
>> -    else if ( kernel_start - mem_start >= total )
>> -        addr = kernel_start - total;
>> -    else
>> +    if ( same_bank )
>>      {
>> -        panic("Unable to find suitable location for dtb+initrd");
>> -        return;
>> -    }
>> +        if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
>> +            addr = MIN(mem_start + MB(128), mem_end - total);
>> +        if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
>> +            addr = ROUNDUP(kernel_end, MB(2));
>> +        else if ( kernel_start - mem_start >= total )
>> +            addr = kernel_start - total;
>> +        else
>> +        {
>> +            /*
>> +             * We should never hit this condition because we've already
>> +             * done the check while choosing the bank.
>> +             */
>> +            panic("Unable to find suitable location for dtb+initrd");
>> +            return;
>> +        }
>> +    } else
>> +        addr = ROUNDUP(mem_end - total, MB(2));
>>
>>      info->dtb_paddr = addr;
>>      info->initrd_paddr = info->dtb_paddr + dtb_len;
>> @@ -264,10 +300,24 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
>>       */
>>      if (start == 0)
>>      {
>> +        unsigned int i, min_i = 0, min_start = -1;
>>          paddr_t load_end;
>>
>> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
>> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
>> +        /*
>> +         * Load kernel at the lowest possible bank
>> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
>
> That commit says nothing about loading in the lowest possible bank,
> though. If there is some relevant factor which is worth commenting on
> please do so directly.

Loading at the lowest bank is safer because of the commit's note that
"The current solution in Linux assuming that the delta physical
address - virtual address is always negative".
This delta is calculated based on where the kernel was loaded (using
"adr" to find the physical address), so loading it as early as possible
is a good way to avoid the problem.

However, I'm not quite sure why we don't just load it at the beginning
of memory then! I'll look at booting.txt for that; maybe it's a
decompressor limitation or something.

>
> Actually now that the kernel is fixed upstream we don't need the
> behaviour of that commit at all. Although there are still restrictions
> based on load address vs start of RAM (See booting.txt in the kernel
> docs)
Ah, I see. I'm using an allwinner branch of the kernel (
https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
the latest thing.

>
> Ian.
>
>> +         */
>> +        for ( i = 0; i < info->mem.nr_banks; i++ )
>> +        {
>> +            if( (unsigned int)info->mem.bank[i].start < min_start )
>> +            {
>> +                min_start = info->mem.bank[i].start;
>> +                min_i = i;
>> +            }
>> +        }
>> +
>> +        load_end = info->mem.bank[min_i].start + info->mem.bank[min_i].size;
>> +        load_end = MIN(info->mem.bank[min_i].start + MB(128), load_end);
>>
>>          info->zimage.load_addr = load_end - end;
>>          /* Align to 2MB */
>
>
>



-- 
Karim Allah Ahmed.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 01:59:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 01:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1nqf-0002dK-GB; Sat, 11 Jan 2014 01:58:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W1nqd-0002dF-PS
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 01:58:52 +0000
Received: from [85.158.143.35:29723] by server-3.bemta-4.messagelabs.com id
	DD/5E-32360-B55A0D25; Sat, 11 Jan 2014 01:58:51 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389405524!11060935!1
X-Originating-IP: [209.85.128.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2775 invoked from network); 11 Jan 2014 01:58:45 -0000
Received: from mail-qe0-f51.google.com (HELO mail-qe0-f51.google.com)
	(209.85.128.51)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 01:58:45 -0000
Received: by mail-qe0-f51.google.com with SMTP id 1so5268310qee.24
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 17:58:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=+UlokHK2Xt7TNsZ0NtHp04Zjn7jS00VxmD/FOlw45YE=;
	b=QlWuFg17BftVeynSrAMi1s9Ol2b9+WF/cAaTSWhoQQLR0T4xYlPa2OCL9w3/BW+oey
	Xps9/7qcvMmNfDds3g0JnTtwB9D8vHGGHFY5jqPRaMJkubYrIO/RYmvYJyym0IEEbOh5
	ITccJ7G6WSYiXWOpMypsNBib7ylYTVUOHXfiWdy6mOQgcZkCGgoDkq+8I718qSJc5xpk
	yxvWz7GlvMDtXJbFrx8Np2UcvSZ9RODfZpmcRClR3QlCMqBve19CUhiLu1lbp7JdUIrv
	D3qDO3o1OIT8fi1Gjc4oe3UMYrVDYZLUUxYRI3PhuexW4fG5tv6KU9PBebCzXLpYp5JX
	BcpA==
MIME-Version: 1.0
X-Received: by 10.49.70.131 with SMTP id m3mr13939444qeu.59.1389405524401;
	Fri, 10 Jan 2014 17:58:44 -0800 (PST)
Received: by 10.224.114.145 with HTTP; Fri, 10 Jan 2014 17:58:44 -0800 (PST)
In-Reply-To: <1389368924.6423.17.camel@kazak.uk.xensource.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
Date: Sat, 11 Jan 2014 01:58:44 +0000
Message-ID: <CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
	mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
>> Create multiple banks to hold dom0 memory in case of 1:1 mapping
>> if we failed to find 1 large contiguous memory to hold the whole
>> thing.
>
> Thanks. While with my ARM maintainer hat on I'd love for this to go in
> for 4.4 with my acting release manager hat on I think if I have to be
> honest this is too big a change for 4.4 at this stage, which is a
> pity :-(
>
>
>>
>> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>> ---
>>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>>  2 files changed, 121 insertions(+), 39 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index faff88e..bb44cdd 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>>  {
>>      paddr_t start;
>>      paddr_t size;
>> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>>      struct page_info *pg = NULL;
>> -    unsigned int order = get_order_from_bytes(dom0_mem);
>> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>>      int res;
>>      paddr_t spfn;
>>      unsigned int bits;
>>
>> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> +#define MIN_BANK_ORDER 10
>
> 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
> which can be made, but I think in practice the lowest useful bank size
> is going to be somewhat larger than that.

MIN_BANK_ORDER is in pages, so it 2^10 pages not 2^10 bytes which
makes the minimum 4 Mbyte

>
> NR_MEM_BANKS is 8 so we also need to consider that.
>
> A 64M dom0 would mean 8M per bank, which seems like a reasonable minimum
> bank size. That would be a MIN_BANK_ORDER of 23. Please include a
> comment explaining where this number comes from.
>
> The other way to look at this would be to calculate it dynamically as
> get_order_from_bytes(dom0_mem / NR_MEM_BANKS).

Yup, you're right. That sounds more reasonable.

>
>> +
>> +    kinfo->mem.nr_banks = 0;
>> +
>> +    /*
>> +     * We always first try to allocate all dom0 memory in 1 bank.
>> +     * However, if we failed to allocate physically contiguous memory
>> +     * from the allocator, then we try to create more than one bank.
>> +     */
>> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>
> I think this can be just
>         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
> (maybe order >= ?) or
>         while (order > MIN_BANK_ORDER )
>         {
>                 ...
>                 order--;
>         }
> I think the first works better.
>
> This does away with the need for cur_order vs order. I think order is
> mostly unused after this patch, also not renaming cur_order would
> hopefully reduce the diff and therefore the "scariness" of the patch wrt
> 4.4 (although that may not be sufficient).

Yes, that's correct. However, I'm intentionally using a different
variable because I thought it would make things more obvious. If you
think it's better to use the same variable, I'll update it.

>
>>      {
>> -        pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
>> -        if ( pg != NULL )
>> -            break;
>> +        for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>
> Is cur_bank redundant with index? Also kinfo->mem.nr_banks tells you how
> many banks are filled in.
>
>> +        {
>> +            for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> +                     {
>
> There something a bit odd going on with the whitespace here and in the
> rest of this loop. Perhaps some hard tabs snuck in?

Yes, I noticed that when the patch appeared on the mailing list :)

>
>> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>> +                             if ( pg != NULL )
>> +                                     break;
>> +                     }
>> +
>> +                     if ( !pg )
>> +                             break;
>> +
>> +                     spfn = page_to_mfn(pg);
>> +                     start = pfn_to_paddr(spfn);
>> +                     size = pfn_to_paddr((1 << cur_order));
>> +
>> +                 kinfo->mem.bank[index].start = start;
>> +                 kinfo->mem.bank[index].size = size;
>> +                 index++;
>> +                 kinfo->mem.nr_banks++;
>> +     }
>> +
>> +     if( pg )
>> +             break;
>> +
>> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
>
> <<1 ?

* 2 :)

>
> Is this not just kinfo->mem.nr_banks?

No. In this expression I'm calculating how much more memory is needed
to satisfy dom0's memory size.
At the end of the iteration, I check how much memory has been
allocated (= cur_bank banks of cur_order) against how much was needed
(= nr_banks banks of cur_order), so nr_unallocated_banks = nr_banks -
cur_bank + 1.
Then cur_order is decremented, the number of unallocated banks is
doubled ( <<1 ), and we do another iteration to satisfy the rest of
the unallocated memory.
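
As an illustration only, the retry step described above can be sketched
as a standalone helper (next_nr_banks() is a hypothetical name, not in
the patch):

```c
/* Sketch of the retry step: cur_bank is the 1-based index of the bank
 * whose allocation failed, so (nr_banks - cur_bank + 1) banks of
 * cur_order are still unallocated.  The order drops by one for the
 * next round, so the bank count doubles to cover the same amount of
 * memory. */
unsigned int next_nr_banks(unsigned int nr_banks, unsigned int cur_bank)
{
    return (nr_banks - cur_bank + 1) << 1;
}
```

E.g. if the single initial bank fails, the next round tries 2 banks of
order - 1; if only the second of those 2 fails, the round after tries 2
banks of order - 2 for the remainder.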

>
> The basic structure here is:
>
> for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>
> Shouldn't the iteration over bank be the outer one?
>
> The banks might be different sizes, right?
>

The outer loop defines the order of the allocated bank[s], while the
inner one defines how many banks of that order are needed.

So you try to allocate one big bank; if that succeeds, you're done. If
not, you double the number of required banks and retry with a smaller
order (order - 1).

The code can indeed allocate banks of different sizes. So, if we fail
to allocate 1 big bank, we try to allocate two banks of (order =
order - 1); in that case we might allocate only the first bank and
fail on the second one. We then try to allocate the memory required
for the second one as two smaller banks.

So the logic is always: at the end of the outer loop, calculate how
many banks we need to allocate in the next iteration as well as the
required order. All allocations that occur in the same iteration are
of equal size, while each iteration changes the number of banks and
the order depending on how much memory we still need to allocate.

> Also with either approach then depending on where memory is available
> this may result in the memory not being allocated in and/or that they
> are not in increasing order (in fact, because Xen prefer to allocate
> higher memory first it seems likely that it will be in reverse order).

Yes, there's no restriction whatsoever on the ordering of the
addresses. However, each allocation tries to place the bank at as low
an address as possible:

for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
{
    pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
    if ( pg != NULL )
        break;
}

>
> I don't know if either of those things matter. What does ePAPR have to
> say on the matter?

There is no mention of address range ordering (at least in section
3.4, "Memory Node").

> I'd expect that the ordering might matter from the point of view of
> putting the kernel in the first bank -- since that may no longer be the
> lowest address.
>
In the patch, when I set the load address of the image, I search
through the banks for the lowest-address bank, not the first one, anyway.


> You don't seem to reference kinfo->unassigned_mem anywhere after the
> initial order calculation -- I think you need to subtract memory from it
> on each iteration, or else I'm not sure you will actually get the right
> amount allocated in all cases.

It's properly accounted for in the final mapping loop.

>
>>      }
>>
>> -    if ( !pg )
>> -        panic("Failed to allocate contiguous memory for dom0");
>> +     if ( !pg )
>> +             panic("Failed to allocate contiguous memory for dom0");
>>
>> -    spfn = page_to_mfn(pg);
>> -    start = pfn_to_paddr(spfn);
>> -    size = pfn_to_paddr((1 << order));
>> +     for ( index = 0; index < kinfo->mem.nr_banks; index++ )
>> +     {
>> +         start = kinfo->mem.bank[index].start;
>> +         size = kinfo->mem.bank[index].size;
>> +         spfn = paddr_to_pfn(start);
>> +         order = get_order_from_bytes(size);
>>
>> -    // 1:1 mapping
>> -    printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0)\n",
>> -           start, start + size);
>> -    res = guest_physmap_add_page(d, spfn, spfn, order);
>> -
>> -    if ( res )
>> -        panic("Unable to add pages in DOM0: %d", res);
>> +         printk("Populate P2M %#"PRIx64"->%#"PRIx64" (1:1 mapping for dom0 - order : %i)\n",
>> +                 start, start + size, order);
>> +         res = guest_physmap_add_page(d, spfn, spfn, order);
>
> Can this not be done as it is allocated rather than in a second pass?
>
Yes, that's possible.

>>
>> -    kinfo->mem.bank[0].start = start;
>> -    kinfo->mem.bank[0].size = size;
>> -    kinfo->mem.nr_banks = 1;
>> +         if ( res )
>> +             panic("Unable to add pages in DOM0: %d", res);
>>
>> -    kinfo->unassigned_mem -= size;
>> +         kinfo->unassigned_mem -= size;
>> +     }
>>  }
>>
>>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> index 6a5772b..658c3de 100644
>> --- a/xen/arch/arm/kernel.c
>> +++ b/xen/arch/arm/kernel.c
>> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>>      const paddr_t total = initrd_len + dtb_len;
>>
>>      /* Convenient */
>
> If you are going to remove all of the following convenience variables
> then this comment is no longer correct.
>
> (Convenient here means a shorter local name for something complex)

It still applies to these variables, except probably to "unsigned int
i, min_i = -1;", so I'll move the comment down one line.

>
>> -    const paddr_t mem_start = info->mem.bank[0].start;
>> -    const paddr_t mem_size = info->mem.bank[0].size;
>> -    const paddr_t mem_end = mem_start + mem_size;
>> -    const paddr_t kernel_size = kernel_end - kernel_start;
>> +    unsigned int i, min_i = -1;
>> +    bool_t same_bank = false;
>> +
>> +    paddr_t mem_start, mem_end, mem_size;
>> +    paddr_t kernel_size;
>>
>>      paddr_t addr;
>>
>> -    if ( total + kernel_size > mem_size )
>> -        panic("Not enough memory in the first bank for the dtb+initrd");
>> +    kernel_size = kernel_end - kernel_start;
>> +
>> +    for ( i = 0; i < info->mem.nr_banks; i++ )
>> +    {
>> +        mem_start = info->mem.bank[i].start;
>> +        mem_size = info->mem.bank[i].size;
>> +        mem_end = mem_start + mem_size - 1;
>> +
>> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
>> +            same_bank = true;
>> +        else
>> +            same_bank = false;
>> +
>> +        if ( same_bank && ((total + kernel_size) < mem_size) )
>> +            min_i = i;
>> +        else if ( (!same_bank) && (total < mem_size) )
>> +            min_i = i;
>> +        else
>> +            continue;
>
> What is all this doing?

Search through the banks for the bank that fits the initrd and dtb.
The calculation is slightly different depending on whether I ended up
using the same bank as the kernel or not (as mentioned previously, I
don't just blindly choose the first bank for the kernel; I search all
of the banks for the lowest bank, address-wise). In the same_bank case
I use the old calculation that was already in the code, while in the
!same_bank case I just start at the end of the bank.
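
A rough standalone sketch of that selection rule (pick_bank(), struct
bank and the sample bank addresses below are hypothetical, for
illustration only):

```c
#include <stdbool.h>

typedef unsigned long long paddr_t;

struct bank { paddr_t start, size; };

/* Sketch of the selection rule described above: pick the first bank
 * that can hold the dtb+initrd (`total` bytes); when the kernel
 * already ends inside the candidate bank, the kernel's size must be
 * accounted for as well.  Returns the bank index, or -1 if no bank
 * fits. */
int pick_bank(const struct bank *banks, unsigned int nr_banks,
              paddr_t kernel_start, paddr_t kernel_end, paddr_t total)
{
    paddr_t kernel_size = kernel_end - kernel_start;
    unsigned int i;

    for ( i = 0; i < nr_banks; i++ )
    {
        paddr_t mem_start = banks[i].start;
        paddr_t mem_end = mem_start + banks[i].size - 1;
        bool same_bank = (kernel_end > mem_start) && (kernel_end <= mem_end);

        if ( same_bank && (total + kernel_size < banks[i].size) )
            return i;
        if ( !same_bank && (total < banks[i].size) )
            return i;
    }

    return -1;
}

/* Two hypothetical 8M banks, used by the examples below. */
const struct bank test_banks[] = {
    { 0x80000000ULL, 0x800000ULL },
    { 0x90000000ULL, 0x800000ULL },
};
```

With a 4M kernel ending at 0x80408000, a 5M dtb+initrd does not fit
alongside it in bank 0, so bank 1 is chosen; a 1M payload fits in bank 0.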

>
>> +
>> +        break;
>> +    }
>> +
>> +    if ( min_i == -1 )
>> +        panic("Not enough memory for the dtb+initrd");
>> +
>> +    mem_start = info->mem.bank[min_i].start;
>> +    mem_size = info->mem.bank[min_i].size;
>> +    mem_end = mem_start + mem_size;
>>
>>      /*
>>       * DTB must be loaded such that it does not conflict with the
>> @@ -104,17 +132,25 @@ static void place_modules(struct kernel_info *info,
>>       * just after the kernel, if there is room, otherwise just before.
>>       */
>>
>> -    if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
>> -        addr = MIN(mem_start + MB(128), mem_end - total);
>> -    else if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
>> -        addr = ROUNDUP(kernel_end, MB(2));
>> -    else if ( kernel_start - mem_start >= total )
>> -        addr = kernel_start - total;
>> -    else
>> +    if ( same_bank )
>>      {
>> -        panic("Unable to find suitable location for dtb+initrd");
>> -        return;
>> -    }
>> +        if ( kernel_end < MIN(mem_start + MB(128), mem_end - total) )
>> +            addr = MIN(mem_start + MB(128), mem_end - total);
>> +        if ( mem_end - ROUNDUP(kernel_end, MB(2)) >= total )
>> +            addr = ROUNDUP(kernel_end, MB(2));
>> +        else if ( kernel_start - mem_start >= total )
>> +            addr = kernel_start - total;
>> +        else
>> +        {
>> +            /*
>> +             * We should never hit this condition because we've already
>> +             * done the check while choosing the bank.
>> +             */
>> +            panic("Unable to find suitable location for dtb+initrd");
>> +            return;
>> +        }
>> +    } else
>> +        addr = ROUNDUP(mem_end - total, MB(2));
>>
>>      info->dtb_paddr = addr;
>>      info->initrd_paddr = info->dtb_paddr + dtb_len;
>> @@ -264,10 +300,24 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
>>       */
>>      if (start == 0)
>>      {
>> +        unsigned int i, min_i = 0, min_start = -1;
>>          paddr_t load_end;
>>
>> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
>> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
>> +        /*
>> +         * Load kernel at the lowest possible bank
>> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
>
> That commit says nothing about loading in the lowest possible bank,
> though. If there is some relevant factor which is worth commenting on
> please do so directly.

Loading in the lowest bank is safer because of this:
"The current solution in Linux assumes that the delta physical
address - virtual address is always negative".
This delta is calculated based on where the kernel was loaded (using
"adr" to find the physical address). So loading it as early as
possible is a good idea to avoid this problem.

However, I'm not quite sure why we don't just load it at the very
beginning of memory then! I'll look at booting.txt for that; maybe
it's a decompressor limitation or something.

>
> Actually now that the kernel is fixed upstream we don't need the
> behaviour of that commit at all. Although there are still restrictions
> based on load address vs start of RAM (See booting.txt in the kernel
> docs)
Ah, I see. I'm using an allwinner branch of the kernel (
https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
the latest upstream kernel.

>
> Ian.
>
>> +         */
>> +        for ( i = 0; i < info->mem.nr_banks; i++ )
>> +        {
>> +            if( (unsigned int)info->mem.bank[i].start < min_start )
>> +            {
>> +                min_start = info->mem.bank[i].start;
>> +                min_i = i;
>> +            }
>> +        }
>> +
>> +        load_end = info->mem.bank[min_i].start + info->mem.bank[min_i].size;
>> +        load_end = MIN(info->mem.bank[min_i].start + MB(128), load_end);
>>
>>          info->zimage.load_addr = load_end - end;
>>          /* Align to 2MB */
>
>
>



-- 
Karim Allah Ahmed.
LinkedIn

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 02:15:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 02:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1o6G-0004M3-1e; Sat, 11 Jan 2014 02:15:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1o6E-0004Ln-MP
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 02:14:59 +0000
Received: from [193.109.254.147:57577] by server-5.bemta-14.messagelabs.com id
	CA/46-03510-229A0D25; Sat, 11 Jan 2014 02:14:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389406495!10155571!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7291 invoked from network); 11 Jan 2014 02:14:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 02:14:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,641,1384300800"; d="scan'208";a="89764059"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Jan 2014 02:14:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 21:14:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1o6A-0005tV-0P;
	Sat, 11 Jan 2014 02:14:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1o69-0000Iz-Vg;
	Sat, 11 Jan 2014 02:14:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24345-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 02:14:53 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24345: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6703884027336408904=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6703884027336408904==
Content-Type: text/plain

flight 24345 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24345/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24342-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7
baseline version:
 xen                  8940a13d6de1295cfdc4a189e0a5610849a9ef59

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Eddie Dong <eddie.dong@intel.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Matthew Daley <mattd@bugfuzz.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ branch=xen-4.3-testing
+ revision=670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 670d64aed01e27d3e8b783fd83dc29bc46a808b7:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   8940a13..670d64a  670d64aed01e27d3e8b783fd83dc29bc46a808b7 -> stable-4.3


--===============6703884027336408904==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6703884027336408904==--

logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ branch=xen-4.3-testing
+ revision=670d64aed01e27d3e8b783fd83dc29bc46a808b7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 670d64aed01e27d3e8b783fd83dc29bc46a808b7:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   8940a13..670d64a  670d64aed01e27d3e8b783fd83dc29bc46a808b7 -> stable-4.3


--===============6703884027336408904==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6703884027336408904==--

From xen-devel-bounces@lists.xen.org Sat Jan 11 02:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 02:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1oIE-0005IP-3u; Sat, 11 Jan 2014 02:27:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W1oIC-0005IK-HP
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 02:27:20 +0000
Received: from [85.158.139.211:15688] by server-16.bemta-5.messagelabs.com id
	89/7E-11843-70CA0D25; Sat, 11 Jan 2014 02:27:19 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389407237!9129502!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6482 invoked from network); 11 Jan 2014 02:27:18 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 02:27:18 -0000
Received: by mail-qc0-f178.google.com with SMTP id i17so4632131qcy.37
	for <xen-devel@lists.xen.org>; Fri, 10 Jan 2014 18:27:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=BN1DBfy2wpMZZZQyBJQ0hmUUS9rvNR8OqcweOSa4aNk=;
	b=xwtQqChOT8oU7nSmUYWYxtYUJ7tJ4uealsFE5shcNw1lSWBdA3YgiHYb+c752tVkIM
	SLvv48OD4p9plfSypPW8K/Xenimzc/Dqn9QBDaU2Y+VCkHd2n38uraVMdeZDHEgxb9BO
	sPRk2NywGXSLHWAcv8Geihj/M5HAklU/akeMkbubBfjb+yWjoEMtOhU6z4uRIMLiPySu
	Q1TRaodIZqRa+8+gWCYQURI5o8uQ0KFCqB6S1zVUXc9o+xc7tXS/IvjdXAOwWHuiSDF4
	FCHmD8xwKmV56YMVpVClPBWLceRjbF3FoZBSucJ/j+eonxEI76zg4qF75u7DP+GFZhkx
	9Eig==
MIME-Version: 1.0
X-Received: by 10.49.72.66 with SMTP id b2mr14133549qev.11.1389407237248; Fri,
	10 Jan 2014 18:27:17 -0800 (PST)
Received: by 10.224.114.145 with HTTP; Fri, 10 Jan 2014 18:27:17 -0800 (PST)
Date: Sat, 11 Jan 2014 02:27:17 +0000
Message-ID: <CAOTdubswPcxOcgbxnJL3C6aYB9T3=f6u+i6PLNK6Lsesv6TuAA@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] kdb support upstream ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Is there any reason why kdb[0] is not supported in the upstream xen ?

[0] http://xenbits.xensource.com/ext/debuggers.hg

-- 
Karim Allah Ahmed.
LinkedIn

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 03:18:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 03:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1p5U-0000Ia-44; Sat, 11 Jan 2014 03:18:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W1p5T-0000IV-9f
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 03:18:15 +0000
Received: from [85.158.137.68:2625] by server-6.bemta-3.messagelabs.com id
	C7/30-04868-6F7B0D25; Sat, 11 Jan 2014 03:18:14 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389410291!8463899!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5771 invoked from network); 11 Jan 2014 03:18:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Jan 2014 03:18:13 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0B3I8aI028433
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 11 Jan 2014 03:18:09 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0B3I7fX004466
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 11 Jan 2014 03:18:08 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0B3I7Tv020381; Sat, 11 Jan 2014 03:18:07 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 10 Jan 2014 19:18:06 -0800
Date: Fri, 10 Jan 2014 19:18:03 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Message-ID: <20140110191803.5dc82d8f@mantra.us.oracle.com>
In-Reply-To: <CAOTdubswPcxOcgbxnJL3C6aYB9T3=f6u+i6PLNK6Lsesv6TuAA@mail.gmail.com>
References: <CAOTdubswPcxOcgbxnJL3C6aYB9T3=f6u+i6PLNK6Lsesv6TuAA@mail.gmail.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] kdb support upstream ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 11 Jan 2014 02:27:17 +0000
"karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com> wrote:

> Is there any reason why kdb[0] is not supported in the upstream xen ?
> 
> [0] http://xenbits.xensource.com/ext/debuggers.hg
> 

It appeared there was not enough demand for it.

Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 04:51:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 04:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1qXU-0006tL-CE; Sat, 11 Jan 2014 04:51:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1qXS-0006tG-WF
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 04:51:15 +0000
Received: from [193.109.254.147:16158] by server-11.bemta-14.messagelabs.com
	id A7/65-20576-2CDC0D25; Sat, 11 Jan 2014 04:51:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389415871!10187143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14683 invoked from network); 11 Jan 2014 04:51:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 04:51:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,642,1384300800"; d="scan'208";a="89780800"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Jan 2014 04:51:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 23:51:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1qXN-0006dl-WB;
	Sat, 11 Jan 2014 04:51:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1qXN-0005P0-LH;
	Sat, 11 Jan 2014 04:51:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24348-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 04:51:09 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24348: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24348 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24348/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24334

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24334
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24334

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  0896bd8bea84526b00e00d2d076f4f953a3d73cb
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Jan Beulich <jbeulich@suse.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0896bd8bea84526b00e00d2d076f4f953a3d73cb
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 10 17:46:33 2014 +0100

    x86: map portion of kexec crash area that is within the direct map area
    
    Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
    crash kernel area) causes fatal page faults when loading a crash
    image.  The attempt to zero the first control page allocated from the
    crash region will fault as the VA returned by map_domain_page() has no
    mapping.
    
    The fault will occur on non-debug builds of Xen when the crash area is
    below 5 TiB (which will be most systems).
    
    The assumption that the crash area mapping was not used is incorrect.
    map_domain_page() is used when loading an image and building the
    image's page tables to temporarily map the crash area, thus the
    mapping is required if the crash area is in the direct map area.
    
    Reintroduce the mapping, but only the portions of the crash area that
    are within the direct map area.
    
    Reported-by: Don Slutz <dslutz@verizon.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Tested-by: Don Slutz <dslutz@verizon.com>
    Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
    Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
    
    This is really just a band aid - kexec shouldn't rely on the crash area
    being always mapped when in the direct mapping range (and it didn't use
    to in its previous form). That's primarily because map_domain_page()
    (needed when the area is outside the direct mapping range) may be
    unusable when wanting to kexec due to a crash, but also because in the
    case of PFN compression the kexec range (if specified on the command
    line) could fall into a hole between used memory ranges (while we're
    currently only ignoring memory at the top of the physical address
    space, it's pretty clear that sooner or later we will want that
    selection to become more sophisticated in order to maximize the memory
    made use of).
    
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 10 17:45:01 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
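
A toy model of why the recursive lock lets that first access through on the same PCPU (illustrative only; `rec_lock` and `rec_lock_acquire` are invented names, not Xen's `mm_rwlock_t`):

```c
#include <assert.h>

/* Minimal sketch of a recursive lock keyed on the holder's CPU id:
 * an acquire only blocks when a *different* CPU already holds it,
 * which is why the first gdb access succeeded before the PCPU changed. */
struct rec_lock {
    int locker;         /* CPU id of current holder, -1 if free */
    int recurse_count;  /* nesting depth on the holding CPU */
};

/* Try to take the lock from CPU `cpu`; returns 1 on success, 0 if the
 * lock is held by another CPU (where a real lock would block/deadlock). */
static int rec_lock_acquire(struct rec_lock *l, int cpu)
{
    if (l->locker != -1 && l->locker != cpu)
        return 0;               /* held elsewhere: would block forever */
    l->locker = cpu;
    l->recurse_count++;         /* same CPU may safely re-enter */
    return 1;
}
```

With the leaked acquisition from the missing put_gfn still counted, a later acquire from a different PCPU never succeeds, matching the hung dom0 VCPU above.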
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 04:51:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 04:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1qXU-0006tL-CE; Sat, 11 Jan 2014 04:51:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1qXS-0006tG-WF
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 04:51:15 +0000
Received: from [193.109.254.147:16158] by server-11.bemta-14.messagelabs.com
	id A7/65-20576-2CDC0D25; Sat, 11 Jan 2014 04:51:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389415871!10187143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14683 invoked from network); 11 Jan 2014 04:51:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 04:51:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,642,1384300800"; d="scan'208";a="89780800"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Jan 2014 04:51:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 10 Jan 2014 23:51:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1qXN-0006dl-WB;
	Sat, 11 Jan 2014 04:51:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1qXN-0005P0-LH;
	Sat, 11 Jan 2014 04:51:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24348-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 04:51:09 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24348: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24348 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24348/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24334

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           7 debian-install               fail   like 24334
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24334

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  0896bd8bea84526b00e00d2d076f4f953a3d73cb
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Jan Beulich <jbeulich@suse.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0896bd8bea84526b00e00d2d076f4f953a3d73cb
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 10 17:46:33 2014 +0100

    x86: map portion of kexec crash area that is within the direct map area
    
    Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
    crash kernel area) causes fatal page faults when loading a crash
    image.  The attempt to zero the first control page allocated from the
    crash region will fault as the VA returned by map_domain_page() has no
    mapping.
    
    The fault will occur on non-debug builds of Xen when the crash area is
    below 5 TiB (which will be most systems).
    
    The assumption that the crash area mapping was not used is incorrect.
    map_domain_page() is used when loading an image and building the
    image's page tables to temporarily map the crash area, thus the
    mapping is required if the crash area is in the direct map area.
    
    Reintroduce the mapping, but only the portions of the crash area that
    are within the direct map area.
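
The clamping described above can be sketched as follows. This is a minimal, hypothetical illustration: `DIRECTMAP_END` and `directmap_portion()` are made-up names rather than Xen's, and the 5 TiB limit is just the figure quoted in this message.

```c
#include <assert.h>
#include <stdint.h>

/* Example direct-map limit (5 TiB, per the commit message above). */
#define DIRECTMAP_END (5ULL << 40)

/* Return the length (in bytes) of [start, start+len) that lies inside
 * the direct map; a caller would map only this leading portion of the
 * crash area and leave the remainder unmapped. */
static uint64_t directmap_portion(uint64_t start, uint64_t len)
{
    if (start >= DIRECTMAP_END)
        return 0;                      /* entirely above the direct map */
    if (start + len <= DIRECTMAP_END)
        return len;                    /* entirely inside: map it all */
    return DIRECTMAP_END - start;      /* straddles the limit: clamp */
}
```

A crash area wholly below the limit is mapped in full, one straddling the limit is truncated, and one wholly above it gets no mapping at all.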
    
    Reported-by: Don Slutz <dslutz@verizon.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Tested-by: Don Slutz <dslutz@verizon.com>
    Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
    Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
    
    This is really just a band-aid - kexec shouldn't rely on the crash area
    always being mapped when it is within the direct mapping range (and it
    didn't use to in its previous form). That's primarily because
    map_domain_page() (needed when the area is outside the direct mapping
    range) may be unusable when wanting to kexec due to a crash. It is also
    because, with PFN compression, the kexec range (if specified on the
    command line) could fall into a hole between used memory ranges: while
    we currently only ignore memory at the top of the physical address
    space, it's pretty clear that sooner or later we will want that
    selection to become more sophisticated in order to maximize the memory
    made use of.
    
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 10 17:45:01 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
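
The fix named in the subject line amounts to balancing the lock taken by the gfn lookup on the error path as well as the success path. A self-contained sketch with invented stand-ins (`fake_get_gfn`/`fake_put_gfn` are not the real Xen API):

```c
#include <assert.h>
#include <stdint.h>

/* Models the p2m lock's recursion depth; a leak here is what left the
 * lock held and the dom0 VCPU hung in the trace above. */
static int lock_depth;

/* The lookup takes the (recursive) lock even when the gfn is invalid,
 * so the caller owes a matching put in every case.  Here gfn 0 models
 * an invalid/unmapped frame. */
static int fake_get_gfn(uint64_t gfn)
{
    lock_depth++;
    return gfn != 0;
}

static void fake_put_gfn(void)
{
    lock_depth--;
}

/* Accessor in the style of dbg_rw_guest_mem: bails out on an invalid
 * frame, but (after the fix) still drops the reference it took. */
static int access_gfn(uint64_t gfn)
{
    int ok = fake_get_gfn(gfn);
    if (!ok) {
        fake_put_gfn();        /* the call the commit adds on error */
        return -1;
    }
    /* ... read/write the guest page here ... */
    fake_put_gfn();
    return 0;
}
```

With the put in place, the depth returns to zero on both paths, so the `!preempt_count()` assertion (debug builds) and the write-lock hang (non-debug builds) no longer trigger.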
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 06:13:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 06:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1roR-0003yi-1O; Sat, 11 Jan 2014 06:12:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1roP-0003yc-Gk
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 06:12:49 +0000
Received: from [85.158.137.68:39938] by server-3.bemta-3.messagelabs.com id
	BA/66-10658-0E0E0D25; Sat, 11 Jan 2014 06:12:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389420765!8474572!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 510 invoked from network); 11 Jan 2014 06:12:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 06:12:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,642,1384300800"; d="scan'208";a="89792360"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Jan 2014 06:12:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 01:12:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1roI-00072R-UZ;
	Sat, 11 Jan 2014 06:12:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1roI-0001nf-GJ;
	Sat, 11 Jan 2014 06:12:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24349-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 06:12:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24349: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3038194024408533545=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3038194024408533545==
Content-Type: text/plain

flight 24349 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24349/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                 fail pass in 24333
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install        fail pass in 24340
 test-amd64-i386-xl-win7-amd64  5 xen-boot                   fail pass in 24333
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail pass in 24340
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24340

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 22652
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install   fail like 24347-bisect
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24351-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24333 like 22652

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24333 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24340 never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ branch=linux-3.10
+ revision=8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 8b4ed85b8404ffe7e10ee410c4df3968b86f0793:tested/linux-3.10
Counting objects: 1051, done.
Compressing objects: 100% (158/158), done.
Writing objects: 100% (815/815), 145.23 KiB, done.
Total 815 (delta 657), reused 815 (delta 657)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   095f493..8b4ed85  8b4ed85b8404ffe7e10ee410c4df3968b86f0793 -> tested/linux-3.10
+ exit 0


--===============3038194024408533545==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3038194024408533545==--

From xen-devel-bounces@lists.xen.org Sat Jan 11 06:13:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 06:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1roR-0003yi-1O; Sat, 11 Jan 2014 06:12:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1roP-0003yc-Gk
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 06:12:49 +0000
Received: from [85.158.137.68:39938] by server-3.bemta-3.messagelabs.com id
	BA/66-10658-0E0E0D25; Sat, 11 Jan 2014 06:12:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389420765!8474572!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 510 invoked from network); 11 Jan 2014 06:12:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 06:12:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,642,1384300800"; d="scan'208";a="89792360"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Jan 2014 06:12:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 01:12:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1roI-00072R-UZ;
	Sat, 11 Jan 2014 06:12:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1roI-0001nf-GJ;
	Sat, 11 Jan 2014 06:12:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24349-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 06:12:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24349: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3038194024408533545=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3038194024408533545==
Content-Type: text/plain

flight 24349 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24349/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                 fail pass in 24333
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install        fail pass in 24340
 test-amd64-i386-xl-win7-amd64  5 xen-boot                   fail pass in 24333
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail pass in 24340
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24340

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 22652
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install   fail like 24347-bisect
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24351-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24333 like 22652

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24333 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24340 never pass

version targeted for testing:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793
baseline version:
 linux                095f493c4d532b0ced3aee22e2d5b2cea02aa773

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Alan Cox <gnomes@lxorguk.ukuu.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@inktank.com>
  Alexander Graf <agraf@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Blanchard <anton@samba.org>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Bjørn Mork <bjorn@mork.no>
  Bo Shen <voice.shen@atmel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chad Hanson <chanson@trustedcs.com>
  Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoffer Dall <cdall@cs.columbia.edu>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Henningsson <david.henningsson@canonical.com>
  David S. Miller <davem@davemloft.net>
  Dinh Nguyen <dinguyen@altera.com>
  Dmitry Kunilov <dmitry.kunilov@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Emil Goode <emilgoode@gmail.com>
  Eric Whitney <enwlinux@gmail.com>
  Eryu Guan <guaneryu@gmail.com>
  Eugene Shatokhin <eugene.shatokhin@rosalab.ru>
  Feng Kan <fkan@apm.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jiang Liu <jiang.liu@huawei.com>
  Jianguo Wu <wujianguo@huawei.com>
  Jianpeng Ma <majianpeng@gmail.com>
  Joel Becker <jlbec@evilplan.org>
  Johan Hovold <jhovold@gmail.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  JongHo Kim <furmuwon@gmail.com>
  Joonsoo Kim <iamjoonsoo.kim@lge.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Josh Durgin <josh.durgin@inktank.com>
  Junho Ryu <jayr@google.com>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Kumar Sankaran <ksankaran@apm.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Len Brown <len.brown@intel.com>
  Li Wang <liwang@ubuntukylin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Ping Fan <pingfank@linux.vnet.ibm.com>
  Lukas Czerner <lczerner@redhat.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <marc.zyngier@arm.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Mark Brown <broonie@linaro.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Mel Gorman <mgorman@suse.de>
  Miao Xie <miaox@cn.fujitsu.com>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <michael@neuling.org>
  Michael Neuling <mikey@neuling.org>
  Michal Hocko <mhocko@suse.cz>
  Michele Baldessari <michele@acksyn.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
  Nicholas <arealityfarbetween@googlemail.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.de>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Paul Walmsley <paul@pwsan.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rik van Riel <riel@redhat.com>
  Rob Herring <robh@kernel.org>
  Robin H. Johnson <robbat2@gentoo.org>
  Roger Quadros <rogerq@ti.com>
  Sage Weil <sage@inktank.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Stefan Richter <stefanr@s5r6.in-berlin.de>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suman Anna <s-anna@ti.com>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tony Lindgren <tony@atomide.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Pelletier <plr.vincent@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Will Deacon <will.deacon@arm.com>
  Witold Bazakbal <865perl@wp.pl>
  Yan, Zheng <zheng.z.yan@intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ branch=linux-3.10
+ revision=8b4ed85b8404ffe7e10ee410c4df3968b86f0793
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 8b4ed85b8404ffe7e10ee410c4df3968b86f0793:tested/linux-3.10
Counting objects: 1051, done.
Compressing objects: 100% (158/158), done.
Writing objects: 100% (815/815), 145.23 KiB, done.
Total 815 (delta 657), reused 815 (delta 657)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   095f493..8b4ed85  8b4ed85b8404ffe7e10ee410c4df3968b86f0793 -> tested/linux-3.10
+ exit 0


--===============3038194024408533545==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3038194024408533545==--

From xen-devel-bounces@lists.xen.org Sat Jan 11 12:35:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 12:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1xm6-0003mi-Mf; Sat, 11 Jan 2014 12:34:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1xm4-0003md-M8
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 12:34:49 +0000
Received: from [193.109.254.147:41582] by server-9.bemta-14.messagelabs.com id
	4C/E6-13957-76A31D25; Sat, 11 Jan 2014 12:34:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389443685!10238494!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8302 invoked from network); 11 Jan 2014 12:34:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 12:34:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,643,1384300800"; d="scan'208";a="91961925"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Jan 2014 12:34:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 07:34:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1xlz-0000VM-FV;
	Sat, 11 Jan 2014 12:34:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1xlz-0001jR-9o;
	Sat, 11 Jan 2014 12:34:43 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24354-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 12:34:43 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24354: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24354 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24354/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 24334
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24334
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10   fail REGR. vs. 24320

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Scott <dave.scott@eu.citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 15:52:29 2014 +0000

    Revert "tools: libxc: flush data cache after loading images into guest memory"
    
    This reverts commit a0035ecc0d82c1d4dcd5e429e2fcc3192d89747a.
    
    Even with this fix there is a period between the flush and the unmap where
    the processor may speculate data into the cache. The solution is to map this
    region uncached or to use the HCR.DC bit to mark all guest accesses cached.
    89eb02c2204a "xen: arm: force guest memory accesses to cacheable when MMU is
    disabled" has arranged to do the latter.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

commit 89eb02c2204a0b42a0aa169f107bc346a3fef802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Jan 8 14:09:01 2014 +0000

    xen: arm: force guest memory accesses to cacheable when MMU is disabled
    
    On ARM, guest OSes are started with the MMU and caches disabled (as they
    are on native hardware); however, caching is enabled in the domain running
    the domain builder, and therefore we must ensure cache consistency.
    
    The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
    cache after loading images into guest memory") is to flush the caches after
    loading the various blobs into guest RAM. However, this approach has two
    shortcomings:
    
     - The cache flush primitives available to userspace on arm32 are not
       sufficient for our needs.
     - There is a race between the cache flush and the unmap of the guest page
       where the processor might speculatively dirty the cache line again.
    
    (Of these, the second is the more fundamental.)
    
    This patch makes use of the hardware functionality to force all accesses
    made from guest mode to be cached (the HCR.DC, "default cacheable", bit).
    This means that we don't need to worry about the domain builder's writes
    being cached, because the guest's "uncached" accesses will actually be
    cached.
    
    Unfortunately the use of HCR.DC is incompatible with the guest enabling its
    MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
    detect when this happens and disable HCR.DC. This is done with the HCR.TVM
    (trap virtual memory controls) bit which also causes various other registers
    to be trapped, all of which can be passed straight through to the underlying
    register. Once the guest has enabled its MMU we no longer need to trap so
    there is no ongoing overhead. In my tests Linux makes about half a dozen
    accesses to these registers before the MMU is enabled; I would expect other
    OSes to behave similarly (the sequence of writes needed to set up the MMU
    is pretty obvious).
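The trap-and-release logic described above can be sketched as a toy model in C. The bit positions, type, and helper names below are illustrative only, not Xen's actual definitions:

```c
#include <stdint.h>

/* Hypothetical sketch of the HCR.DC / HCR.TVM dance: bit values and
 * structure names are assumptions for illustration, not Xen's code. */

#define HCR_DC   (1u << 12)  /* default-cacheable: force guest accesses cached */
#define HCR_TVM  (1u << 26)  /* trap writes to virtual-memory control regs */
#define SCTLR_M  (1u << 0)   /* guest MMU enable bit */

typedef struct {
    uint32_t hcr;    /* per-vcpu copy of the hypervisor config register */
    uint32_t sctlr;  /* guest's system control register */
} vcpu_t;

/* At domain creation: force cacheable guest accesses and trap SCTLR writes. */
static void vcpu_init(vcpu_t *v)
{
    v->hcr = HCR_DC | HCR_TVM;
    v->sctlr = 0;
}

/* Trap handler for a guest write to SCTLR: pass the value straight through
 * to the underlying register, and once the guest turns its MMU on, drop DC
 * (which is incompatible with SCTLR.M) and stop trapping, so there is no
 * ongoing overhead after boot. */
static void handle_sctlr_write(vcpu_t *v, uint32_t val)
{
    v->sctlr = val;
    if (val & SCTLR_M)
        v->hcr &= ~(HCR_DC | HCR_TVM);
}
```

Pre-MMU writes keep both bits set; the first write that sets SCTLR.M clears them, matching the "no ongoing overhead" claim above.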
    
    Apart from this unfortunate need to trap these accesses, this approach is
    incompatible with guests which attempt to do DMA operations with their MMU
    disabled. In practice this means guests with passthrough which we do not yet
    support. Since a typical guest (including dom0) does not access devices which
    require DMA until after it is fully up and running with paging enabled the
    main risk is to in-guest firmware which does DMA i.e. running EFI in a guest,
    with a disk passed through and booting from that disk. Since we know that dom0
    is not using any such firmware and we do not support device passthrough to
    guests yet we can live with this restriction. Once passthrough is implemented
    this will need to be revisited.
    
    The patch includes a couple of seemingly unrelated but necessary changes:
    
     - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
       with the existing set of system registers we handled, but broke with the new
       ones introduced here.
     - The defines used to decode the HSR system register fields were named the
       same as the register. This breaks the accessor macros. This had gone
       unnoticed because the handling of the existing trapped registers did not
       require accessing the underlying hardware register. Rename those constants
       with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
    
    This patch has survived thousands of boot loops on a Midway system.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit ca6bf20d4157b3b0b270e384e47c1e351964be16
Author: Julien Grall <julien.grall@linaro.org>
Date:   Fri Jan 10 03:27:55 2014 +0000

    xen/arm: Scrub heap pages during boot
    
    Heap page scrubbing was disabled because it was slow on the models. Now
    that Xen supports real hardware, it is possible to enable scrubbing by
    default.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 8aba7e1ce9e26cdf9d2b002ed87b4bd75fce4af3
Author: Rob Hoes <rob.hoes@citrix.com>
Date:   Fri Jan 10 13:52:04 2014 +0000

    libxl: ocaml: use 'for_app_registration' in osevent callbacks
    
    This allows the application to pass a token to libxl in the fd/timeout
    registration callbacks, which it receives back in modification or
    deregistration callbacks.
    
    It turns out that this is essential for timeout handling, in order to
    identify which timeout to change on a modify event.
    
    Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
    Acked-by: David Scott <dave.scott@eu.citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 0896bd8bea84526b00e00d2d076f4f953a3d73cb
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 10 17:46:33 2014 +0100

    x86: map portion of kexec crash area that is within the direct map area
    
    Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
    crash kernel area) causes fatal page faults when loading a crash
    image.  The attempt to zero the first control page allocated from the
    crash region will fault, as the VA returned by map_domain_page() has no
    mapping.
    
    The fault will occur on non-debug builds of Xen when the crash area is
    below 5 TiB (which will be most systems).
    
    The assumption that the crash area mapping was not used is incorrect.
    map_domain_page() is used when loading an image and building the
    image's page tables to temporarily map the crash area, thus the
    mapping is required if the crash area is in the direct map area.
    
    Reintroduce the mapping, but only the portions of the crash area that
    are within the direct map area.
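The clamping described above ("only the portions ... within the direct map area") amounts to a simple range intersection. A toy sketch in C, with an illustrative boundary constant rather than Xen's real memory-layout definitions:

```c
#include <stdint.h>

/* Sketch of clamping the crash area to the direct map. The 5 TiB boundary
 * and function name are assumptions for illustration, not Xen's code. */

#define DIRECTMAP_END  (5ULL << 40)  /* illustrative top of the direct map */

/* Return how many bytes of [start, start+size) lie below the direct-map
 * boundary, i.e. the portion that should be (re)mapped. */
static uint64_t crash_area_mappable(uint64_t start, uint64_t size)
{
    if (start >= DIRECTMAP_END)
        return 0;                     /* entirely above: map nothing */
    if (start + size <= DIRECTMAP_END)
        return size;                  /* entirely below: map it all */
    return DIRECTMAP_END - start;     /* straddles: map only the lower part */
}
```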
    
    Reported-by: Don Slutz <dslutz@verizon.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Tested-by: Don Slutz <dslutz@verizon.com>
    Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
    Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
    
    This is really just a band aid - kexec shouldn't rely on the crash area
    being always mapped when in the direct mapping range (and it didn't use
    to in its previous form). That's primarily because map_domain_page()
    (needed when the area is outside the direct mapping range) may be
    unusable when wanting to kexec due to a crash, but also because in the
    case of PFN compression the kexec range (if specified on the command
    line) could fall into a hole between used memory ranges (while we're
    currently only ignoring memory at the top of the physical address
    space, it's pretty clear that sooner or later we will want that
    selection to become more sophisticated in order to maximize the memory
    that is made use of).
    
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 10 17:45:01 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
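The underlying bug, a reference taken by get_gfn() that the error path never released, can be modelled with a toy refcount. The names mirror the commit subject, but the bodies are a stand-in, not Xen's actual p2m code:

```c
/* Minimal model of the get_gfn/put_gfn pairing bug: the error path
 * returned without releasing, leaving the (recursive) p2m lock held. */

static int refcount;                  /* stands in for the p2m lock depth */

static void get_gfn(void) { refcount++; }
static void put_gfn(void) { refcount--; }

/* Buggy shape: bails out on error without calling put_gfn(). */
static int dbg_rw_buggy(int valid_mapping)
{
    get_gfn();
    if (!valid_mapping)
        return -1;                    /* BUG: reference leaked */
    put_gfn();
    return 0;
}

/* Fixed shape: every exit path releases the reference. */
static int dbg_rw_fixed(int valid_mapping)
{
    int rc = 0;

    get_gfn();
    if (!valid_mapping)
        rc = -1;
    put_gfn();                        /* reached on both paths */
    return rc;
}
```

With the buggy shape the first failing access succeeds only because the lock is recursive on the same PCPU, exactly as the crash dump above shows (recurse_count = 1 with the lock still held).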
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 12:35:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 12:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W1xm6-0003mi-Mf; Sat, 11 Jan 2014 12:34:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W1xm4-0003md-M8
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 12:34:49 +0000
Received: from [193.109.254.147:41582] by server-9.bemta-14.messagelabs.com id
	4C/E6-13957-76A31D25; Sat, 11 Jan 2014 12:34:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389443685!10238494!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8302 invoked from network); 11 Jan 2014 12:34:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 12:34:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,643,1384300800"; d="scan'208";a="91961925"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Jan 2014 12:34:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 07:34:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W1xlz-0000VM-FV;
	Sat, 11 Jan 2014 12:34:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W1xlz-0001jR-9o;
	Sat, 11 Jan 2014 12:34:43 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24354-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 12:34:43 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24354: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24354 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24354/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 24334
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24334
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10   fail REGR. vs. 24320

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Scott <dave.scott@eu.citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Tue Jan 7 15:52:29 2014 +0000

    Revert "tools: libxc: flush data cache after loading images into guest memory"
    
    This reverts commit a0035ecc0d82c1d4dcd5e429e2fcc3192d89747a.
    
    Even with this fix there is a period between the flush and the unmap where
    processor may speculate data into the cache. The solution is to map this
    region uncached or to use the HCR.DC bit to mark all guest accesses cached.
    89eb02c2204a "xen: arm: force guest memory accesses to cacheable when MMU is
    disabled" has arranged to do the latter.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

commit 89eb02c2204a0b42a0aa169f107bc346a3fef802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Jan 8 14:09:01 2014 +0000

    xen: arm: force guest memory accesses to cacheable when MMU is disabled
    
    On ARM guest OSes are started with MMU and Caches disables (as they are on
    native) however caching is enabled in the domain running the builder and
    therefore we must ensure cache consistency.
    
    The existing solution to this problem (a0035ecc0d82 "tools: libxc: flush data
    cache after loading images into guest memory") is to flush the caches after
    loading the various blobs into guest RAM. However this approach has two short
    comings:
    
     - The cache flush primitives available to userspace on arm32 are not
       sufficient for our needs.
     - There is a race between the cache flush and the unmap of the guest page
       where the processor might speculatively dirty the cache line again.
    
    (of these the second is the more fundamental)
    
    This patch makes use of the the hardware functionality to force all accesses
    made from guest mode to be cached (the HCR.DC == default cached bit). This
    means that we don't need to worry about the domain builder's writes being
    cached because the guests "uncached" accesses will actually be cached.
    
    Unfortunately the use of HCR.DC is incompatible with the guest enabling its
    MMU (SCTLR.M bit). Therefore we must trap accesses to the SCTLR so that we can
    detect when this happens and disable HCR.DC. This is done with the HCR.TVM
    (trap virtual memory controls) bit which also causes various other registers
    to be trapped, all of which can be passed straight through to the underlying
    register. Once the guest has enabled its MMU we no longer need to trap so
    there is no ongoing overhead. In my tests Linux makes about half a dozen
    accesses to these registers before the MMU is enabled, I would expect other
    OSes to behave similarly (the sequence of writes needed to setup the MMU is
    pretty obvious).
    
    Apart from this unfortunate need to trap these accesses this approach is
    incompatible with guests which attempt to do DMA operations with their MMU
    disabled. In practice this means guests with passthrough which we do not yet
    support. Since a typical guest (including dom0) does not access devices which
    require DMA until after it is fully up and running with paging enabled the
    main risk is to in-guest firmware which does DMA i.e. running EFI in a guest,
    with a disk passed through and booting from that disk. Since we know that dom0
    is not using any such firmware and we do not support device passthrough to
    guests yet we can live with this restriction. Once passthrough is implemented
    this will need to be revisited.
    
    The patch includes a couple of seemingly unrelated but necessary changes:
    
     - HSR_SYSREG_CRN_MASK was incorrectly defined, which happened to be benign
       with the existing set of system register we handled, but broke with the new
       ones introduced here.
     - The defines used to decode the HSR system register fields were named the
       same as the register. This breaks the accessor macros. This had gone
       unnoticed because the handling of the existing trapped registers did not
       require accessing the underlying hardware register. Rename those constants
       with an HSR_SYSREG prefix (in line with HSR_CP32/64 for 32-bit registers).
    
    This patch has survived thousands of boot loops on a Midway system.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>

commit ca6bf20d4157b3b0b270e384e47c1e351964be16
Author: Julien Grall <julien.grall@linaro.org>
Date:   Fri Jan 10 03:27:55 2014 +0000

    xen/arm: Scrub heap pages during boot
    
    Scrub heap pages was disabled because it was slow on the models. Now that Xen
    supports real hardware, it's possible to enable by default scrubbing.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 8aba7e1ce9e26cdf9d2b002ed87b4bd75fce4af3
Author: Rob Hoes <rob.hoes@citrix.com>
Date:   Fri Jan 10 13:52:04 2014 +0000

    libxl: ocaml: use 'for_app_registration' in osevent callbacks
    
    This allows the application to pass a token to libxl in the fd/timeout
    registration callbacks, which it receives back in modification or
    deregistration callbacks.
    
    It turns out that this is essential for timeout handling, in order to
    identify which timeout to change on a modify event.
    
    Signed-off-by: Rob Hoes <rob.hoes@citrix.com>
    Acked-by: David Scott <dave.scott@eu.citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 0896bd8bea84526b00e00d2d076f4f953a3d73cb
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 10 17:46:33 2014 +0100

    x86: map portion of kexec crash area that is within the direct map area
    
    Commit 7113a45451a9f656deeff070e47672043ed83664 (kexec/x86: do not map
    crash kernel area) causes fatal page faults when loading a crash
    image.  The attempt to zero the first control page allocated from the
    crash region will fault as the VA returned by map_domain_page() has no
    mapping.
    
    The fault will occur on non-debug builds of Xen when the crash area is
    below 5 TiB (which will be most systems).
    
    The assumption that the crash area mapping was not used is incorrect.
    map_domain_page() is used when loading an image and building the
    image's page tables to temporarily map the crash area, thus the
    mapping is required if the crash area is in the direct map area.
    
    Reintroduce the mapping, but only for the portions of the crash area
    that lie within the direct map area.
    
    Reported-by: Don Slutz <dslutz@verizon.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Tested-by: Don Slutz <dslutz@verizon.com>
    Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
    Tested-by: Daniel Kiper <daniel.kiper@oracle.com>
    
    This is really just a band-aid - kexec shouldn't rely on the crash area
    always being mapped when it lies in the direct mapping range (and it
    didn't use to in its previous form). That's primarily because
    map_domain_page() (needed when the area is outside the direct mapping
    range) may be unusable when we want to kexec due to a crash, but also
    because with PFN compression the kexec range (if specified on the
    command line) could fall into a hole between used memory ranges (while
    we currently only ignore memory at the top of the physical address
    space, it's pretty clear that sooner or later we will want that
    selection to become more sophisticated in order to maximize the amount
    of memory put to use).
    
    Acked-by: Jan Beulich <jbeulich@suse.com>
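
    The fix described above amounts to clamping the crash region to the
    part that lies within the direct map before mapping it. A standalone
    sketch of that clamping arithmetic (the 5 TiB limit is taken from the
    commit message; the helper name is hypothetical):

    ```c
    #include <assert.h>
    #include <stdint.h>

    /*
     * Clamp [start, end) to the portion below `limit`. Stores the clamped
     * start and returns the clamped length, or 0 if nothing overlaps.
     */
    static uint64_t clamp_to_direct_map(uint64_t start, uint64_t end,
                                        uint64_t limit, uint64_t *out_start)
    {
        if (start >= limit || end <= start)
            return 0;
        *out_start = start;
        return (end < limit ? end : limit) - start;
    }

    int main(void)
    {
        const uint64_t limit = 5ULL << 40;  /* illustrative 5 TiB boundary */
        uint64_t s;

        /* Entirely below the limit: mapped in full. */
        assert(clamp_to_direct_map(0x1000, 0x5000, limit, &s) == 0x4000);
        assert(s == 0x1000);
        /* Straddling the limit: only the lower portion is mapped. */
        assert(clamp_to_direct_map(limit - 0x1000, limit + 0x1000, limit, &s)
               == 0x1000);
        /* Entirely above the limit: nothing to map. */
        assert(clamp_to_direct_map(limit + 0x1000, limit + 0x2000, limit, &s)
               == 0);
        return 0;
    }
    ```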

commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 10 17:45:01 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 18:04:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 18:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W22uY-0007ZR-GT; Sat, 11 Jan 2014 18:03:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W22Xh-0005tp-It
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 17:40:18 +0000
Received: from [193.109.254.147:19689] by server-11.bemta-14.messagelabs.com
	id 58/94-20576-00281D25; Sat, 11 Jan 2014 17:40:16 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389462013!8744860!1
X-Originating-IP: [98.139.213.141]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22606 invoked from network); 11 Jan 2014 17:40:14 -0000
Received: from nm23-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm23-vm1.bullet.mail.bf1.yahoo.com) (98.139.213.141)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Jan 2014 17:40:14 -0000
Received: from [66.196.81.172] by nm23.bullet.mail.bf1.yahoo.com with NNFMP;
	11 Jan 2014 17:40:13 -0000
Received: from [98.139.212.201] by tm18.bullet.mail.bf1.yahoo.com with NNFMP;
	11 Jan 2014 17:40:13 -0000
Received: from [127.0.0.1] by omp1010.mail.bf1.yahoo.com with NNFMP;
	11 Jan 2014 17:40:13 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 304849.53310.bm@omp1010.mail.bf1.yahoo.com
Received: (qmail 99193 invoked by uid 60001); 11 Jan 2014 17:40:13 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389462013; bh=HVHE+U82cew3j81H80xlI2Dm9JfhnlJ80L9n+251x4k=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=bnEzzNk6U0Gk2QA6YLsIjZClBt/i//bjdKcy0HtN8FtOFfVkSv+gLF4jItIZDgRCPUz4IG4rJexg5sb5GWtHRDL896+DDfvd6lRlAuMTlxS1+4YmPWXpnV5QsHVzVF+CB+yRpumWaHX9pGxHebXsLFbQtt4P1yiXjcrxF5JweRw=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=fyoJMGaOyvQAGC33opRGKppzj4QhlEXZ/4NshnojLCPrd022CKbPwhaUzIcJBE0aPBnXetojBMYITrOl2ZvbbyjF6LnZjV3rUc+8e/PJg3zh5HyO0w5RCTptjb9wM3TygjUgf+Qit3KT0BNTh/1yTUhLNppAXMLuTd8dCpql95U=;
X-YMail-OSG: 8lJFMLUVM1l8iVNbYE9FYHyKITTk4Dq3jlLZ2_Y_eq7dWLJ
	xSOxDVQyveTHHR1NzxDFVx4X27JbMhq1C7tR.vk7zw1hTnCkZ4yKSccgLfh5
	URBaN5_GbTftq1Zkk54vCKhz3EvJGRG.iT32N.pqA9TDW.F6kxTkRGJNywnM
	zptBUyjpu_OCTyXdZQNWDpZ3n3ITuGOuXz50lFHb3OCHi2xIomedumIk8WdA
	uzJIHI0nHtSWRmd7VoiKiBigT.RWJrKxabm8rvCvxSW5UaE956ypxqfxVc13
	.kXaKICMIS.xIJphwwcejjk9i7wxOAe.NIXaKwwUJU7DmlRPp4A0OIluh_jS
	qsf0eC5nDmX3L0BfEGoGDdW8hFT2u2CIjfh0eHhufZt3bMwrWvEGlljJEz5o
	uCnwc7uCo2z0zk6Y5JvKQN9yPm4pQ7dr8a1KROzSjtt0C_sBSjVEEEL7XLuc
	QK3LVYmxnkMEQHwdVIZ_o_76cdqUy25TBzNpXvPBJcwm0ogAJO49dXphm.IK
	u1kZknNrHZW3sXAm185dnagZV
Received: from [192.227.225.3] by web160204.mail.bf1.yahoo.com via HTTP;
	Sat, 11 Jan 2014 09:40:13 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8gTXIgRHVubGFwCkkgcmVhbGx5IHdvbmRlciBvZiBhbnlib2R5IGtub3cgYW5zd2VyIDotKApjYW4geW91IGhlbHAgbWU_IQpJIHdhbnQgZm91bmQgdGltZSAiRG93bnRpbWUiIGFuZCBudW1iZXIgImRpcnR5IHBhZ2VzIiBpbiBlbmQgbWlncmF0aW9uIC4uLgpwbGVhc2UgaGVscCBtZS4KVGhhbmtzLgrCoApBZGVsIEFtYW5pCk0uU2MuIENhbmRpZGF0ZUBDb21wdXRlciBFbmdpbmVlcmluZyBEZXBhcnRtZW50LCBVbml2ZXJzaXR5IG9mIElzZmFoYW4KRW1haWw6IEEuQW1hbmlAZW5nLnVpLmFjLmlyCgoBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Message-ID: <1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
Date: Sat, 11 Jan 2014 09:40:13 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Xen <xen-devel@lists.xen.org>, George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Sat, 11 Jan 2014 18:03:54 +0000
Subject: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1703153535643613466=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1703153535643613466==
Content-Type: multipart/alternative; boundary="-2068959492-665862014-1389462013=:51486"

---2068959492-665862014-1389462013=:51486
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

Hello Mr Dunlap=0AI really wonder of anybody know answer :-(=0Acan you help=
 me?!=0AI want found time "Downtime" and number "dirty pages" in end migrat=
ion ...=0Aplease help me.=0AThanks.=0A=A0=0AAdel Amani=0AM.Sc. Candidate@Co=
mputer Engineering Department, University of Isfahan=0AEmail: A.Amani@eng.u=
i.ac.ir=0A=0A=0A=0AOn Saturday, January 11, 2014 12:31 AM, Adel Amani <adel=
.amani66@yahoo.com> wrote:=0A =0AHello=0AI do one migration and trace in mi=
gration to command "xentrace -D -e all -S 256 /test.trace"=0AI can get tota=
l time migration of command "time migration -l ubuntu11 192.168.1.1"=A0=A0B=
ut i don't know how get "Downtime" and "dirty pages" of test.trace :-( =A0o=
r from another way...=0A=A0=0A=0AAdel Amani=0AM.Sc. Candidate@Computer Engi=
neering Department, University of Isfahan=0AEmail: A.Amani@eng.ui.ac.ir
---2068959492-665862014-1389462013=:51486
Content-Type: text/html; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

<html><body><div style=3D"color:#000; background-color:#fff; font-family:bo=
okman old style, new york, times, serif;font-size:10pt"><div><span>Hello Mr=
 Dunlap</span></div><div style=3D"color: rgb(0, 0, 0); font-size: 13px; fon=
t-family: 'bookman old style', 'new york', times, serif; background-color: =
transparent; font-style: normal;"><span>I really wonder of anybody know ans=
wer :-(</span></div><div style=3D"color: rgb(0, 0, 0); font-size: 13px; fon=
t-family: 'bookman old style', 'new york', times, serif; background-color: =
transparent; font-style: normal;"><span>can you help me?!</span></div><div =
style=3D"color: rgb(0, 0, 0); font-size: 13px; font-family: 'bookman old st=
yle', 'new york', times, serif; background-color: transparent; font-style: =
normal;"><span>I want found time "Downtime" and number "dirty pages" in end=
 migration ...</span></div><div style=3D"color: rgb(0, 0, 0); font-size: 13=
px; font-family: 'bookman old style', 'new york', times, serif;
 background-color: transparent; font-style: normal;"><span>please help me.<=
/span></div><div style=3D"color: rgb(0, 0, 0); font-size: 13px; font-family=
: 'bookman old style', 'new york', times, serif; background-color: transpar=
ent; font-style: normal;"><span>Thanks.</span></div><div></div><div>&nbsp;<=
/div><div><div style=3D"text-align:center;"><span style=3D"font-size: 16px;=
 font-family: 'times new roman', 'new york', times, serif; line-height: nor=
mal;">Adel Amani</span><br></div><span style=3D"font-family: 'times new rom=
an', 'new york', times, serif; line-height: normal;"><div style=3D"font-siz=
e:16px;text-align:center;">M.Sc. Candidate@Computer Engineering Department,=
 University of Isfahan</div><div style=3D"text-align:center;"><span style=
=3D"font-size:13px;">Email: <span style=3D"color:rgb(0, 0, 255);text-decora=
tion:underline;">A.Amani@eng.ui.ac.ir</span></span></div></span></div><div =
class=3D"yahoo_quoted" style=3D"display: block;"> <br> <br> <div style=3D"f=
ont-family:
 'bookman old style', 'new york', times, serif; font-size: 10pt;"> <div sty=
le=3D"font-family: HelveticaNeue, 'Helvetica Neue', Helvetica, Arial, 'Luci=
da Grande', sans-serif; font-size: 12pt;"> <div dir=3D"ltr"> <font size=3D"=
2" face=3D"Arial"> On Saturday, January 11, 2014 12:31 AM, Adel Amani &lt;a=
del.amani66@yahoo.com&gt; wrote:<br> </font> </div>  <div class=3D"y_msg_co=
ntainer"><div id=3D"yiv4072929106"><div><div style=3D"color: rgb(0, 0, 0); =
background-color: rgb(255, 255, 255); font-family: 'bookman old style', 'ne=
w york', times, serif; font-size: 10pt;"><div><span>Hello</span></div><div =
style=3D"color: rgb(0, 0, 0); font-size: 13px; font-family: 'bookman old st=
yle', 'new york', times, serif; background-color: transparent; font-style: =
normal;"><span>I do one migration and trace in migration to command "xentra=
ce -D -e all -S 256 /test.trace"</span></div><div style=3D"background-color=
:transparent;"><span>I can get total time migration of command "time migrat=
ion -l
 ubuntu11 192.168.1.1"&nbsp;</span><span style=3D"background-color:transpar=
ent;">&nbsp;But i don't know how get "Downtime" and "dirty pages" of test.t=
race :-( &nbsp;or from another way...</span></div><div style=3D"color: rgb(=
0, 0, 0); font-size: 13px; font-family: 'bookman old style', 'new york', ti=
mes, serif; background-color: transparent; font-style: normal;"><span style=
=3D"font-size:10pt;">&nbsp;</span><br></div><div><div style=3D"text-align:c=
enter;"><span style=3D"font-size: 16px; font-family: 'times new roman', 'ne=
w york', times, serif; line-height: normal;">Adel Amani</span><br></div><sp=
an style=3D"font-family: 'times new roman', 'new york', times, serif; line-=
height: normal;"><div style=3D"font-size:16px;text-align:center;">M.Sc. Can=
didate@Computer Engineering Department, University of Isfahan</div><div sty=
le=3D"text-align:center;"><span style=3D"font-size:13px;">Email: <span styl=
e=3D"color:rgb(0, 0,
 255);text-decoration:underline;">A.Amani@eng.ui.ac.ir</span></span></div><=
/span></div></div></div></div><br><br></div>  </div> </div>  </div> </div><=
/body></html>
---2068959492-665862014-1389462013=:51486--


--===============1703153535643613466==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1703153535643613466==--


From xen-devel-bounces@lists.xen.org Sat Jan 11 21:33:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 21:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W26Ae-0004IU-KP; Sat, 11 Jan 2014 21:32:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W26Ac-0004IN-Up
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 21:32:43 +0000
Received: from [85.158.139.211:34521] by server-4.bemta-5.messagelabs.com id
	DD/16-26791-A78B1D25; Sat, 11 Jan 2014 21:32:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389475959!9207315!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5033 invoked from network); 11 Jan 2014 21:32:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 21:32:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,644,1384300800"; d="scan'208";a="92023347"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Jan 2014 21:32:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 16:32:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W26AX-0005Lh-AZ;
	Sat, 11 Jan 2014 21:32:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W26AX-00073A-2t;
	Sat, 11 Jan 2014 21:32:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24360-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 21:32:37 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24360: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24360 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24360/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24334

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Scott <dave.scott@eu.citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ branch=xen-unstable
+ revision=4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   2d03be6..4fad2dc  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 21:33:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 21:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W26Ae-0004IU-KP; Sat, 11 Jan 2014 21:32:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W26Ac-0004IN-Up
	for xen-devel@lists.xensource.com; Sat, 11 Jan 2014 21:32:43 +0000
Received: from [85.158.139.211:34521] by server-4.bemta-5.messagelabs.com id
	DD/16-26791-A78B1D25; Sat, 11 Jan 2014 21:32:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389475959!9207315!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5033 invoked from network); 11 Jan 2014 21:32:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 21:32:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,644,1384300800"; d="scan'208";a="92023347"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Jan 2014 21:32:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 11 Jan 2014 16:32:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W26AX-0005Lh-AZ;
	Sat, 11 Jan 2014 21:32:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W26AX-00073A-2t;
	Sat, 11 Jan 2014 21:32:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24360-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 11 Jan 2014 21:32:37 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24360: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24360 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24360/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24334

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                     fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  2d03be65d5c50053fec4a5fa1d691972e5d953c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Daniel Kiper <daniel.kiper@oracle.com>
  David Scott <dave.scott@eu.citrix.com>
  David Vrabel <david.vrabel@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Rob Hoes <rob.hoes@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ branch=xen-unstable
+ revision=4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 4fad2dc72a8607f50c3783e1cbcb3fb25e3af932:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   2d03be6..4fad2dc  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 23:00:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 23:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W27Wn-0001IJ-Ch; Sat, 11 Jan 2014 22:59:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W27Wl-0001IE-T7
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 22:59:40 +0000
Received: from [85.158.143.35:63354] by server-2.bemta-4.messagelabs.com id
	77/EF-11386-ADCC1D25; Sat, 11 Jan 2014 22:59:38 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389481177!11092608!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27190 invoked from network); 11 Jan 2014 22:59:38 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Jan 2014 22:59:38 -0000
Received: by mail-lb0-f171.google.com with SMTP id w7so4330833lbi.2
	for <xen-devel@lists.xen.org>; Sat, 11 Jan 2014 14:59:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=ke1B6MU1B27xhyaiJQysUK8GmD1pNtLXC/tDUuGHT8E=;
	b=kt6HpAAGBBIf6HwbPhEzOsa94iWNjYNUZCJ+6o80XJJZdhzH3hvqX8GxuGDnMF5ex0
	/n3jCS9fLmCDNXeGZnAP037x6aQx4XekOKLYT/lK2a81yXrM7Q2w8L/lJXvWyeTIfNVL
	4cy4k/RbFUQHA9dNpCuzLEJswg/KXHZutxxfUlQ7XQUn5pTb/r4rODDdlSjxnOYLpwKD
	2MphjlH45EAdUt8BaA400OEkLJgc8TgaiCaMOz7MR9NgkEguX54B00gEVJHEYVOvRuoi
	HzZZhv47St6gojUhvtrm1u2WLQItCSfZ8VzGOB+cAQjtqMoEcJYmIO6eXhxpeR6MBaX+
	fqTA==
X-Received: by 10.152.28.230 with SMTP id e6mr6989204lah.3.1389481177225;
	Sat, 11 Jan 2014 14:59:37 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id j1sm5289597laj.3.2014.01.11.14.59.36
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 11 Jan 2014 14:59:36 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Sun, 12 Jan 2014 02:59:34 +0400
Message-Id: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I see this comment in physdev.h on the 'pirq' field of 'struct physdev_map_pirq':
/* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */

I have received a 'pirq' value from the hypervisor that is greater than 255.

map_irq.domid = DOMID_SELF;
map_irq.type = MAP_PIRQ_TYPE_MSI;
map_irq.index = -1; /* hypervisor auto-allocates vector */
map_irq.pirq = -1;
map_irq.bus = busnum;
map_irq.devfn = devfn;
map_irq.entry_nr = i;
map_irq.table_base = 0;
rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);
irqno = map_irq.pirq;

I get:
irqno = 279, which is greater than APIC_MAX_VECTOR (255).

My question: what is the correct way to translate a pirq to an irq for the APIC map table?

Everything works fine on xen-3.4, but physdev_map_pirq() is implemented differently in xen-4.2.

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 23:33:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 23:33:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W283V-00039w-DF; Sat, 11 Jan 2014 23:33:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avsm@dark.recoil.org>) id 1W283T-00039r-DK
	for xen-devel@lists.xen.org; Sat, 11 Jan 2014 23:33:27 +0000
Received: from [85.158.137.68:49941] by server-2.bemta-3.messagelabs.com id
	C5/48-17329-6C4D1D25; Sat, 11 Jan 2014 23:33:26 +0000
X-Env-Sender: avsm@dark.recoil.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389483205!8591982!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2869 invoked from network); 11 Jan 2014 23:33:25 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-3.tower-31.messagelabs.com with SMTP;
	11 Jan 2014 23:33:25 -0000
Received: (qmail 25927 invoked by uid 10000); 11 Jan 2014 23:33:25 -0000
Date: Sat, 11 Jan 2014 23:33:25 +0000
From: Anil Madhavapeddy <anil@recoil.org>
To: xen-devel@lists.xen.org
Message-ID: <20140111233325.GA30303@dark.recoil.org>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: David Scott <dave.scott@citrix.com>
Subject: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific functions
	behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The various cpuid functions are not available on ARM, so this
makes them raise an OCaml exception.  Omitting the functions
completely results in a link failure in oxenstored due to the
missing symbols, so this is preferable to the much bigger patch
that would result from adding conditional compilation into the
OCaml interfaces.

With this patch, oxenstored can successfully start a domain on
Xen/ARM.

Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f5cf0ed..ff29b47 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 {
 	CAMLparam4(xch, domid, input, config);
 	CAMLlocal2(array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 			 c_input, (const char **)c_config, out_config);
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	caml_failwith("xc_domain_cpuid_set: not implemented");
+#endif
 	CAMLreturn(array);
 }
 
 CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value domid)
 {
 	CAMLparam2(xch, domid);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 
 	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	caml_failwith("xc_domain_cpuid_apply_policy: not implemented");
+#endif
 	CAMLreturn(Val_unit);
 }
 
@@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 {
 	CAMLparam3(xch, input, config);
 	CAMLlocal3(ret, array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 	Store_field(ret, 0, Val_bool(r));
 	Store_field(ret, 1, array);
 
+#else
+	caml_failwith("xc_domain_cpuid_check: not implemented");
+#endif
 	CAMLreturn(ret);
 }
 
-- 
1.8.1.2


-- 
Anil Madhavapeddy                                 http://anil.recoil.org

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


The various cpuid functions are not available on ARM, so this
makes them raise an OCaml exception.  Omitting the functions
completely results in a link failure in oxenstored due to the
missing symbols, so this is preferable to the much bigger patch
that would result from adding conditional compilation into the
OCaml interfaces.

With this patch, oxenstored can successfully start a domain on
Xen/ARM.

Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f5cf0ed..ff29b47 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 {
 	CAMLparam4(xch, domid, input, config);
 	CAMLlocal2(array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
 			 c_input, (const char **)c_config, out_config);
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	caml_failwith("xc_domain_cpuid_set: not implemented");
+#endif
 	CAMLreturn(array);
 }
 
 CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value domid)
 {
 	CAMLparam2(xch, domid);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 
 	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
 	if (r < 0)
 		failwith_xc(_H(xch));
+#else
+	caml_failwith("xc_domain_cpuid_apply_policy: not implemented");
+#endif
 	CAMLreturn(Val_unit);
 }
 
@@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 {
 	CAMLparam3(xch, input, config);
 	CAMLlocal3(ret, array, tmp);
+#if defined(__i386__) || defined(__x86_64__)
 	int r;
 	unsigned int c_input[2];
 	char *c_config[4], *out_config[4];
@@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
 	Store_field(ret, 0, Val_bool(r));
 	Store_field(ret, 1, array);
 
+#else
+	caml_failwith("xc_domain_cpuid_check: not implemented");
+#endif
 	CAMLreturn(ret);
 }
 
-- 
1.8.1.2


-- 
Anil Madhavapeddy                                 http://anil.recoil.org

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 11 23:35:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jan 2014 23:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W285Q-0003Fp-UL; Sat, 11 Jan 2014 23:35:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avsm@dark.recoil.org>) id 1W285O-0003Fe-N2
	for xen-devel@lists.xenproject.org; Sat, 11 Jan 2014 23:35:26 +0000
Received: from [85.158.137.68:57697] by server-8.bemta-3.messagelabs.com id
	86/74-31081-D35D1D25; Sat, 11 Jan 2014 23:35:25 +0000
X-Env-Sender: avsm@dark.recoil.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389483324!8572502!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5020 invoked from network); 11 Jan 2014 23:35:25 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-2.tower-31.messagelabs.com with SMTP;
	11 Jan 2014 23:35:25 -0000
Received: (qmail 19295 invoked by uid 10000); 11 Jan 2014 23:35:24 -0000
Date: Sat, 11 Jan 2014 23:35:24 +0000
From: Anil Madhavapeddy <anil@recoil.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140111233524.GA9666@dark.recoil.org>
References: <20140109163632.GA27164@dark.recoil.org>
	<52CED1A2.3000708@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CED1A2.3000708@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org, Dave Scott <dave.scott@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: ocaml: guard x86-specific functions
 behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 04:43:14PM +0000, Andrew Cooper wrote:
> On 09/01/14 16:36, Anil Madhavapeddy wrote:
> > The various cpuid functions are not available on ARM, so this
> > makes them raise an OCaml exception.  Omitting the functions
> > completely results in a link failure in oxenstored due to
> > the missing symbols, so this is preferable to the much bigger
> > patch that would result from adding conditional compilation into
> > the OCaml interfaces.
> >
> > Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
> >
> > ---
> >  tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index f5cf0ed..76864cc 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
> >  {
> >  	CAMLparam4(xch, domid, input, config);
> >  	CAMLlocal2(array, tmp);
> > +#if defined(__i386__) || defined(__x86_64__)
> >  	int r;
> >  	unsigned int c_input[2];
> >  	char *c_config[4], *out_config[4];
> > @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
> >  			 c_input, (const char **)c_config, out_config);
> >  	if (r < 0)
> >  		failwith_xc(_H(xch));
> > +#else
> > +	failwith_xc(_H(xch));
> 
> You probably want to set xc's last error so failwith_xc() gives an
> exception with a relevant error message.

After discussion with Dave Scott, we thought it best to just raise a
normal Failure exception rather than the libxc ones.  The functions aren't
used by any of the libraries in-tree at the moment and oxenstored works
for a simple VM start for me with v2 of the patch as well.

-anil

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 00:58:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 00:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W29NL-0008PG-2a; Sun, 12 Jan 2014 00:58:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W29NI-0008PB-C1
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 00:58:00 +0000
Received: from [85.158.139.211:55646] by server-11.bemta-5.messagelabs.com id
	CC/63-23268-798E1D25; Sun, 12 Jan 2014 00:57:59 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389488278!9011696!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2884 invoked from network); 12 Jan 2014 00:57:58 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 12 Jan 2014 00:57:58 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:57652 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W29C7-0002aP-KB; Sun, 12 Jan 2014 01:46:27 +0100
Date: Sun, 12 Jan 2014 01:57:50 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <112976369.20140112015750@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <694614152.20140110225103@eikelenboom.it>
References: <694614152.20140110225103@eikelenboom.it>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen pci and vga passthrough + option roms ...
	finally ... SUCCES
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Finally .. and I was searching in completely the wrong direction the whole time ...
"red herring" is the understatement of the year ... "red whale" would do it ..

It wasn't Xen, Seabios, Qemu ... but the guest kernel.

Which now makes sense with my recollection that secondary passthrough with ATI cards used to work with Xen
in some experiments I had done but laid aside for some time.
And the fact that trying older Xen versions failed to get a similar setup working again
(because I kept trying newer .. not older guest kernels in those tests).

What is getting triggered is pci_fixup_video (in arch/x86/pci/fixup.c), shown below.

>From dmesg:
[    2.545728] pci 0000:00:00.0: calling quirk_natoma+0x0/0x40
[    2.545730] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    2.558998] pci 0000:00:00.0: calling quirk_passive_release+0x0/0x90
[    2.559121] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    2.572412] pci 0000:00:01.0: calling quirk_isa_dma_hangs+0x0/0x40
[    2.572415] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    2.586527] pci 0000:00:03.0: calling pci_fixup_video+0x0/0xd0
[    2.586609] pci 0000:00:03.0: Boot video device
[    2.586696] pci 0000:00:05.0: calling pci_fixup_video+0x0/0xd0
[    2.586827] pci 0000:00:05.0: Boot video device
[    2.586928] pci 0000:00:06.0: calling quirk_e100_interrupt+0x0/0x1c0


So although a boot video device is already recognized (the emulated one at BDF 00:03.0),
the quirk is also applied to the secondary GPU (the passed-through device at BDF 00:05.0),
which is now also marked as a boot video device; that seems wrong.

I will take this further with Dave Airlie for now .. see where that goes; will keep you posted.

--
Sander



/*
 * Fixup to mark boot BIOS video selected by BIOS before it changes
 *
 * From information provided by "Jon Smirl" <jonsmirl@gmail.com>
 *
 * The standard boot ROM sequence for an x86 machine uses the BIOS
 * to select an initial video card for boot display. This boot video
 * card will have it's BIOS copied to C0000 in system RAM.
 * IORESOURCE_ROM_SHADOW is used to associate the boot video
 * card with this copy. On laptops this copy has to be used since
 * the main ROM may be compressed or combined with another image.
 * See pci_map_rom() for use of this flag. IORESOURCE_ROM_SHADOW
 * is marked here since the boot video device will be the only enabled
 * video device at this point.
 */

static void pci_fixup_video(struct pci_dev *pdev)
{
        struct pci_dev *bridge;
        struct pci_bus *bus;
        u16 config;

        return;

        /* Is VGA routed to us? */
        bus = pdev->bus;
        while (bus) {
                bridge = bus->self;

                /*
                 * From information provided by
                 * "David Miller" <davem@davemloft.net>
                 * The bridge control register is valid for PCI header
                 * type BRIDGE, or CARDBUS. Host to PCI controllers use
                 * PCI header type NORMAL.
                 */
                if (bridge
                    && ((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
                       || (bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
                        pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
                                                &config);
                        if (!(config & PCI_BRIDGE_CTL_VGA))
                                return;
                }
                bus = bus->parent;
        }
        pci_read_config_word(pdev, PCI_COMMAND, &config);
        if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
                pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
                dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
                if (!vga_default_device())
                        vga_set_default_device(pdev);
        }
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
                                PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);





Friday, January 10, 2014, 10:51:03 PM, you wrote:

> Hi Konrad,

> I'm starting a new thread .. since i'm essentially trying to start over from scratch.
> (shoot i forgot to include xen-devel on the first mail ...)

> - Xen-unstable the latest and greatest
> - Linux 3.13-rc7 as dom0 and guest kernel
> - Qemu-xen
> - Some patches (vga="none", enabling qemu debug, fixing build errors when enabling qemu debug)

> I'm now first trying to replicate you :-) (don't worry .. it won't hurt :p);
> I'm using an Intel NIC now (found one lying around and it has a rom,
> and after it was flashed .. it even has valid rom content :-) )

> Now the first experiment is to see if the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works.

> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in a HVM guest, NIC passedthrough: dumping the rom works, contents are the same as under dom0

> The failure with pciback is perhaps because the device wasn't initialized on boot because it was seized by pciback.
> So generally speaking the rombar of pci devices is correctly passed through and can be dumped.

> So it indeed appears to be a VGA passthrough specific issue.

> This was on my intel machine, but unfortunately that one has an IGD that shares its PCIe lanes with the only PCIe x16 slot,
> so will put the NIC in my AMD machine .. and verify if it works there and then continue with the VGA card.

> Now trying on the AMD machine:

> NIC:
> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in a HVM guest, NIC passedthrough: dumping the rom works, contents are the same as under dom0

> VGA card:
> - Running Xen, on dom0, VGA card owned by radeon: dumping the rom works
> - Running Xen, on dom0, VGA card owned by nothing (due to setting nomodeset): dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, on dom0, VGA card owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in a HVM guest, VGA card passedthrough: dumping the rom works, contents are DIFFERENT from dom0, i get the BOCHS VGAbios

> Preliminary conclusions:
> - Passing through the rombar from a NIC works
> - Passing through the rombar from a VGA card doesn't work out of the box.
> - the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works but seems to rely on a loaded and working driver and is thus not very reliable for testing
>   so I need to dust off my arithmetic and use /dev/kmem and dd.
> - So Anthony is probably right .. something is playing tricks with the rombar when it's VGA class, even if it's the secondary card in the Guest.

> --
> Sander




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 00:58:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 00:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W29NL-0008PG-2a; Sun, 12 Jan 2014 00:58:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W29NI-0008PB-C1
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 00:58:00 +0000
Received: from [85.158.139.211:55646] by server-11.bemta-5.messagelabs.com id
	CC/63-23268-798E1D25; Sun, 12 Jan 2014 00:57:59 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389488278!9011696!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2884 invoked from network); 12 Jan 2014 00:57:58 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 12 Jan 2014 00:57:58 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:57652 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W29C7-0002aP-KB; Sun, 12 Jan 2014 01:46:27 +0100
Date: Sun, 12 Jan 2014 01:57:50 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <112976369.20140112015750@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <694614152.20140110225103@eikelenboom.it>
References: <694614152.20140110225103@eikelenboom.it>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen pci and vga passthrough + option roms ...
	finally ... SUCCES
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Finally .. and i was searching in the complete wrong direction for the whole time ...
red herring is the understatement of the year ... red whale would do it ..

It wasn't Xen, Seabios, Qemu ... but the guest kernel.

Which now makes sense with my recollection that secondary passthrough with ATI cards used to work with Xen
with some expiriments i had done but laid aside for some time.
And the fact that trying older Xen versions failed to get a similar setup working again
(because i kept trying newer .. not older guest kernels in those tests).

What is getting triggered is the pci_fixup_video (in arch/x86/pci/fixup.c) below.

>From dmesg:
[    2.545728] pci 0000:00:00.0: calling quirk_natoma+0x0/0x40
[    2.545730] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    2.558998] pci 0000:00:00.0: calling quirk_passive_release+0x0/0x90
[    2.559121] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    2.572412] pci 0000:00:01.0: calling quirk_isa_dma_hangs+0x0/0x40
[    2.572415] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    2.586527] pci 0000:00:03.0: calling pci_fixup_video+0x0/0xd0
[    2.586609] pci 0000:00:03.0: Boot video device
[    2.586696] pci 0000:00:05.0: calling pci_fixup_video+0x0/0xd0
[    2.586827] pci 0000:00:05.0: Boot video device
[    2.586928] pci 0000:00:06.0: calling quirk_e100_interrupt+0x0/0x1c0


So although there is a boot video device recognized already (the emulated one with bfd 00:03.0)
it also applies the quirk to the secondary GPU which is now also marked as boot device (passed through device with bfd 00:05.0)
which seems to be wrong.

I will go further with Dave Airlie for now .. see where that goes, will keep you posted.

--
Sander



/*
 * Fixup to mark boot BIOS video selected by BIOS before it changes
 *
 * From information provided by "Jon Smirl" <jonsmirl@gmail.com>
 *
 * The standard boot ROM sequence for an x86 machine uses the BIOS
 * to select an initial video card for boot display. This boot video
 * card will have it's BIOS copied to C0000 in system RAM.
 * IORESOURCE_ROM_SHADOW is used to associate the boot video
 * card with this copy. On laptops this copy has to be used since
 * the main ROM may be compressed or combined with another image.
 * See pci_map_rom() for use of this flag. IORESOURCE_ROM_SHADOW
 * is marked here since the boot video device will be the only enabled
 * video device at this point.
 */

static void pci_fixup_video(struct pci_dev *pdev)
{
        struct pci_dev *bridge;
        struct pci_bus *bus;
        u16 config;

        return;

        /* Is VGA routed to us? */
        bus = pdev->bus;
        while (bus) {
                bridge = bus->self;

                /*
                 * From information provided by
                 * "David Miller" <davem@davemloft.net>
                 * The bridge control register is valid for PCI header
                 * type BRIDGE, or CARDBUS. Host to PCI controllers use
                 * PCI header type NORMAL.
                 */
                if (bridge
                    && ((bridge->hdr_type == PCI_HEADER_TYPE_BRIDGE)
                       || (bridge->hdr_type == PCI_HEADER_TYPE_CARDBUS))) {
                        pci_read_config_word(bridge, PCI_BRIDGE_CONTROL,
                                                &config);
                        if (!(config & PCI_BRIDGE_CTL_VGA))
                                return;
                }
                bus = bus->parent;
        }
        pci_read_config_word(pdev, PCI_COMMAND, &config);
        if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
                pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
                dev_printk(KERN_DEBUG, &pdev->dev, "Boot video device\n");
                if (!vga_default_device())
                        vga_set_default_device(pdev);
        }
}
DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_ANY_ID, PCI_ANY_ID,
                                PCI_CLASS_DISPLAY_VGA, 8, pci_fixup_video);





Friday, January 10, 2014, 10:51:03 PM, you wrote:

> Hi Konrad,

> I'm starting a new thread .. since i'm essentially trying to start over from scratch.
> (shoot i forgot to include xen-devel on the first mail ...)

> - Xen-unstable the latest and greatest
> - Linux 3.13-rc7 as dom0 and guest kernel
> - Qemu-xen
> - Some patches (vga="none", enableing qemu debug, fixing builderrors when enabling qemu debug)

> I'm now first trying to replicate you :-) (don't worry .. it won't hurt :p),
> i'm using an intel NIC now (found one laying around and it has a rom.
> and after it was flashed .. it even has valid rom content :-) )

> Now the first experiment is to see if the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works.

> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in a HVM guest, NIC passedthrough: dumping the rom works, contents are the same as under dom0

> The failure with pciback is perhaps because the device wasn't initialized on boot because it was seized by pciback.
> So generally speaking the rombar of pci devices is correctly passed through and can be dumped.

> So it indeed appears to be a VGA passthrough specific issue.

> This was on my intel machine, but unfortunately that one has an IGD that shares it's pci-e lanes with the only pcie x16 slot,
> so will put the NIC in my AMD machine .. and verify if it works there and then continue with the VGA card.

> Now trying on the AMD machine:

> NIC:
> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in a HVM guest, NIC passedthrough: dumping the rom works, contents are the same as under dom0

> VGA card:
> - Running Xen, on dom0, VGA card owned by radeon: dumping the rom works
> - Running Xen, on dom0, VGA card owned by nothing (due to setting nomodeset): dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, on dom0, VGA card owned by pciback: dumping the rom FAILS, kernel gives "cat: rom: Input/output error"
> - Running Xen, in an HVM guest, VGA card passed through: dumping the rom works, contents are DIFFERENT from dom0, I get the BOCHS VGA BIOS

> Preliminary conclusions:
> - Passing through the rombar from a NIC works
> - Passing through the rombar from a VGA card doesn't work out of the box.
> - The sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;" works, but it seems to rely on a loaded and working driver and is thus not very reliable for testing,
>   so I need to dust off my arithmetic and use /dev/kmem and dd.
> - So Anthony is probably right .. something is playing tricks with the rombar when it's VGA class, even if it's the secondary card in the guest.
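[Editor's note: the echo/cat/echo sequence discussed above can be wrapped in a small helper. This is a sketch, not code from the thread; the function name is made up, and on real hardware the path is the sysfs `rom` attribute of the device (e.g. /sys/bus/pci/devices/0000:01:00.0), where the read fails with EIO exactly as reported above.]

```shell
#!/bin/sh
# Sketch: dump a PCI device's expansion ROM via its sysfs "rom" attribute.
# $1 is a sysfs device directory, e.g. /sys/bus/pci/devices/0000:01:00.0
dump_rom() {
    dev="$1"
    out="$2"
    echo 1 > "$dev/rom"        # make the ROM contents readable
    cat "$dev/rom" > "$out"    # copy them out; an EIO here is the failure seen above
    status=$?
    echo 0 > "$dev/rom"        # always restore the attribute afterwards
    return $status
}
```

Comparing a dump taken in dom0 against one taken in the guest (e.g. with cmp) is what exposes the Bochs VGABIOS substitution.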

> --
> Sander




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 08:58:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 08:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Grg-0005zp-6h; Sun, 12 Jan 2014 08:57:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2Grf-0005zk-0x
	for xen-devel@lists.xensource.com; Sun, 12 Jan 2014 08:57:51 +0000
Received: from [85.158.143.35:6267] by server-1.bemta-4.messagelabs.com id
	64/6E-02132-E0952D25; Sun, 12 Jan 2014 08:57:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389517068!11132860!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19352 invoked from network); 12 Jan 2014 08:57:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 08:57:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,646,1384300800"; d="scan'208";a="92079461"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 08:57:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 12 Jan 2014 03:57:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W2GrZ-0000DT-Ur;
	Sun, 12 Jan 2014 08:57:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W2GrZ-0005Z9-TT;
	Sun, 12 Jan 2014 08:57:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24364-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 12 Jan 2014 08:57:45 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24364: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24364 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24364/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24360

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24360

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sun Jan 12 11:57:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 11:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2JfI-0007K3-BJ; Sun, 12 Jan 2014 11:57:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zir_blazer@hotmail.com>) id 1W2HOY-0008Ir-AQ
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 09:31:50 +0000
Received: from [85.158.139.211:62430] by server-5.bemta-5.messagelabs.com id
	FC/4A-14928-50162D25; Sun, 12 Jan 2014 09:31:49 +0000
X-Env-Sender: zir_blazer@hotmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389519106!9207229!1
X-Originating-IP: [65.54.190.211]
X-SpamReason: No, hits=1.8 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_HOTMAIL_RCVD,HTML_00_10,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_12,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_2,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15120 invoked from network); 12 Jan 2014 09:31:46 -0000
Received: from bay0-omc4-s9.bay0.hotmail.com (HELO
	bay0-omc4-s9.bay0.hotmail.com) (65.54.190.211)
	by server-5.tower-206.messagelabs.com with SMTP;
	12 Jan 2014 09:31:46 -0000
Received: from BAY170-W64 ([65.54.190.199]) by bay0-omc4-s9.bay0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Sun, 12 Jan 2014 01:31:46 -0800
X-TMN: [fFIlkqYeMHXdP8mNhjER6hf+1MEwnnAPQCcQa37lc9c=]
X-Originating-Email: [zir_blazer@hotmail.com]
Message-ID: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>
From: Zir Blazer <zir_blazer@hotmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Sun, 12 Jan 2014 06:31:45 -0300
Importance: Normal
MIME-Version: 1.0
X-OriginalArrivalTime: 12 Jan 2014 09:31:46.0192 (UTC)
	FILETIME=[1C937900:01CF0F79]
X-Mailman-Approved-At: Sun, 12 Jan 2014 11:57:14 +0000
Subject: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4990483044060424573=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4990483044060424573==
Content-Type: multipart/alternative;
	boundary="_97203c59-53bb-475a-a251-16c55fe6d6f5_"

--_97203c59-53bb-475a-a251-16c55fe6d6f5_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Because I couldn't get Xen booting in UEFI mode in any way when asking for help at xen-users, I was advised to mail xen-devel to see if someone here can help me with this issue, as it could be bug-related and not a configuration mistake. Original thread here:
http://lists.xen.org/archives/html/xen-users/2013-12/msg00050.html




I also suggest picking up a cup of coffee: my over-detailing may bore most readers, and since I have very little Linux and Xen experience, getting logs or more data may be complicated. So if you need a log of something, explain it as simply as possible or give me a link with details so I know exactly what to do - that will help me not get stuck if asked to do things like "checking what flags are enabled in the kernel" or "recompiling the kernel with the following options/modules", etc.






SOFTWARE
Arch Linux - archlinux-2013.12.01-dual.iso (there is a newer one already available)
Xen 4.3.1-5 built as an EFI file (I built two versions, one with the Radeon patch applied/uncommented in the PKGBUILD, and one without/commented)


I wrote the ISO to a pendrive with dd, following these instructions to make it boot in UEFI mode:
https://wiki.archlinux.org/index.php/USB_Flash_Installation_Media#Using_dd_.28Recommended_method.29
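[Editor's note: the dd method from that wiki page boils down to a raw copy; a minimal sketch, with the helper name made up and /dev/sdX a placeholder you must verify (e.g. with lsblk) before writing:]

```shell
#!/bin/sh
# Sketch: raw-copy an ISO image onto a target, which may be a block
# device such as /dev/sdX or (for testing) a plain file.
copy_image() {
    dd if="$1" of="$2" bs=4M conv=fsync 2>/dev/null
}

# Typical invocation (device name is a placeholder -- check it first!):
# copy_image archlinux-2013.12.01-dual.iso /dev/sdX
```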


I installed Arch Linux following the most relevant parts of the guides, choosing Gummiboot as the UEFI boot manager:
https://wiki.archlinux.org/index.php/Installation_Guide
https://wiki.archlinux.org/index.php/Beginners%27_Guide


Finally, I built the two xen.efi files with the x86_64-pep option enabled in binutils, as in this guide, and added them to Gummiboot:
https://bbs.archlinux.org/viewtopic.php?pid=1359933




xen.efi's cfg:

[global]
default=xen

[xen]
options=console=vga vga=current loglvl=all noreboot
kernel=vmlinuz-linux root=UUID=7af1797c-00af-4219-a141-27a718f39858 rw ignore_loglevel earlyprintk=vga,keep
ramdisk=initramfs-linux.img




That pretty much sums up the software side of what I have installed and can't get working. I think there were barely a few misc packages installed besides those, so if anyone has a near-identical setup (the most important part being the motherboard), I think it should be reproducible.








HARDWARE
Processor: Xeon E3-1245 V3 (Haswell)
Motherboard: Supermicro X10SAT (C226)
Hard Disk: Seagate Desktop HDD.15 4 TB
Video Cards: Intel HD Graphics 4600P / Haswell integrated GPU, Sapphire Radeon 5770 FLEX (I have another identical Radeon 5770, for when I get everything working)
Monitors: Samsung SyncMaster P2370H (connected to the Haswell IGP via DVI), Samsung SyncMaster 932N+ (connected to the Radeon 5770 with a DVI-to-VGA adapter)




All the common-sense options are enabled in the BIOS (VT-x, VT-d, etc.).
Secure Boot isn't mentioned anywhere, not even in the motherboard manual:
http://www.supermicro.com/products/motherboard/xeon/c220/x10sat.cfm




lspci output:

00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3 Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 Display controller: Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 05)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5)
00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d5)
00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d5)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation C226 Series Chipset Family Server Advanced SKU LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
00:1f.6 Signal processing controller: Intel Corporation 8 Series Chipset Family Thermal Management Controller (rev 05)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper XT [Radeon HD 5770]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series]
02:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
03:00.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:01.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:04.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:05.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:07.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:09.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
08:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)
09:00.0 PCI bridge: Texas Instruments XIO2213A/B/XIO2221 PCI Express to PCI Bridge [Cheetah Express] (rev 01)
0a:00.0 FireWire (IEEE 1394): Texas Instruments XIO2213A/B/XIO2221 IEEE-1394b OHCI Controller [Cheetah Express] (rev 01)
0b:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)






On this same machine, I can get Xen working with pretty much everything else being the same, but using Syslinux as a BIOS-GPT boot loader instead of following the UEFI instructions. I even managed to do VGA passthrough of the Radeon 5770 to a WXP SP3 VM. There are a few differences in the booting process (more on this later).






Basically, I can get Arch Linux to boot in UEFI mode from the Gummiboot menu. However, when I launch the xen.efi executable instead (either the Radeon-patched one or the vanilla one), it either black-screens or hangs. After experimenting A LOT, I discovered the following different behaviors:




If in the BIOS I set the Haswell GPU as the main video output (POST and BIOS setup appear on the first monitor only), it reaches a point in the boot process where it simply black-screens. One of the last lines was:


Dom0 has maximum 8 VCPUs


Additionally, if the CFG file has an assigned RAM max size (dom0_mem=2048M,max=2048M), it does the "Scrubbing Free RAM" step. I think it throws a few more lines, if I recall correctly, but too fast to notice. I tried a variety of options in that file and didn't get anywhere.




Later, I decided to try setting the Radeon 5770 as the video output in the BIOS (the POST and BIOS setup part appears on the second monitor only). Surprisingly, it got past the point where it black-screens with the IGP; instead it hangs on "unpacking initramfs". When I started to use earlyprintk=vga,keep in the Xen CFG file, I actually got a bit more info (I copied all these lines by hand and skipped writing the [1.23456789] parts, which I suppose are seconds; the last line is at around five and a half seconds):




e820: reserve RAM buffer [mem 0x8826e000-0x8bffffff]
e820: reserve RAM buffer [mem 0x88b4e000-0x8bffffff]
e820: reserve RAM buffer [mem 0x992ed000-0x9bffffff]
e820: reserve RAM buffer [mem 0x993eb000-0x9bffffff]
e820: reserve RAM buffer [mem 0x993eb000-0x9bffffff]
e820: reserve RAM buffer [mem 0x85ee00000-0x85fffffff]
NetLabel: Initializing
NetLabel:  domain hash size = 128
NetLabel:  protocols = UNLABELED CIPSOv4
NetLabel: unlabeled traffic allowed by default
Switched to clocksource xen
pnp: PnP ACPI: disabled
pci 0000:00:1c.4: bridge window [io 0x1000-0x0fff] to [bus 0c-44] add_size 1000
pci 0000:00:1c.4: res[13]=[io  0x1000-0x0fff] get_res_add_size add_size 1000
pci 0000:00:1c.4: BAR 13: assigned [io 0x1000-0x1fff]
pci 0000:00:01.0: PCI bridge to [bus 01]
pci 0000:00:01.0:  bridge window [io  0xe000-0xefff]
pci 0000:00:01.0:  bridge window [mem 0xeee00000-0xeeefffff]
pci 0000:00:01.0:  bridge window [mem 0xc40000000-0xc4fffffff 64bit pref]
pci 0000:00:1c.0: PCI bridge to [bus 02]
pci 0000:00:1c.0:  bridge window [io 0xd000-0xdfff]
pci 0000:00:1c.0:  bridge window [mem 0xeed00000-0xeedfffff]
pci 0000:04:01.0: PCI bridge to [bus 05]
pci 0000:04:04.0: PCI bridge to [bus 06]
pci 0000:04:05.0: PCI bridge to [bus 07]
pci 0000:04:07.0: PCI bridge to [bus 08]
pci 0000:04:07.0:  bridge window [mem 0xeea00000-0xeeafffff]
pci 0000:09:00.0: PCI bridge to [bus 0a]
pci 0000:09:00.0:  bridge window [mem 0xee800000-0xee8fffff]
pci 0000:04:09.0: PCI bridge to [bus 09-0a]
pci 0000:04:09.0:  bridge window [mem 0xee800000-0xee9fffff]
pci 0000:03:00.0: PCI bridge to [bus 04-0a]
pci 0000:03:00.0:  bridge window [mem 0xee800000-0xeeafffff]
pci 0000:00:1c.1: PCI bridge to [bus 03-04]
pci 0000:00:1c.1:  bridge window [mem 0xee800000-0xeebfffff]
pci 0000:00:1c.3: PCI bridge to [bus 0b]
pci 0000:00:1c.3:  bridge window [io  0xc000-0xcfff]
pci 0000:00:1c.3:  bridge window [mem 0xeec00000-0xeecfffff]
pci 0000:00:1c.4: PCI bridge to [bus 0c-44]
pci 0000:00:1c.4:  bridge window [io  0x1000-0x1fff]
pci 0000:00:1c.4:  bridge window [mem 0xd8000000-0xee0fffff]
pci 0000:00:1c.4:  bridge window [mem 0xb0000000-0xd1ffffff 64bit pref]
pci_bus 0000:00: resource 4 [io  0x0000-0xffff]
pci_bus 0000:00: resource 5 [mem 0x00000000-0x7fffffffff]




Sometimes it doesn't reach the pci_bus lines; other times it throws 3 pci_bus lines instead of 2, so the hang point isn't very precise. What is also suspicious is that my working Xen installation (Syslinux BIOS-GPT) DOES detect ACPI and throws a few additional acpi-related lines before pci. I don't know why it doesn't under UEFI.


There is nothing else I was able to figure out, as I got stuck here. The only thing I didn't test was KMS (Kernel Mode Setting), as I have heard there is a bug with Intel IGPs that may result in a black screen if I don't load a kernel module early, or something along those lines. That may be responsible for the black screen, but I'm not sure about the hang with the Radeon. And I don't have these issues with my working installation using BIOS.
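[Editor's note: on the KMS idea above, Arch loads KMS drivers early from the initramfs. A sketch of the relevant /etc/mkinitcpio.conf line; the module names are assumptions based on the hardware listed (Intel IGP and Radeon 5770), and the initramfs must be rebuilt afterwards with `mkinitcpio -p linux`:]

```shell
# /etc/mkinitcpio.conf -- load the KMS drivers before the root is mounted,
# so the console mode-sets early instead of going black.
MODULES="i915 radeon"
```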




I welcome anyone who knows what could be wrong, or any ideas about what I could test. I don't know if this is a Xen, Arch Linux, or maybe even BIOS issue, or how often people encounter hangs trying to boot Xen as a UEFI executable; judging from what I could find by googling around, there don't seem to be many people using Xen under UEFI.

--_97203c59-53bb-475a-a251-16c55fe6d6f5_--


--===============4990483044060424573==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4990483044060424573==--


From xen-devel-bounces@lists.xen.org Sun Jan 12 11:57:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 11:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2JfI-0007K3-BJ; Sun, 12 Jan 2014 11:57:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zir_blazer@hotmail.com>) id 1W2HOY-0008Ir-AQ
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 09:31:50 +0000
Received: from [85.158.139.211:62430] by server-5.bemta-5.messagelabs.com id
	FC/4A-14928-50162D25; Sun, 12 Jan 2014 09:31:49 +0000
X-Env-Sender: zir_blazer@hotmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389519106!9207229!1
X-Originating-IP: [65.54.190.211]
X-SpamReason: No, hits=1.8 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_HOTMAIL_RCVD,HTML_00_10,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_12,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_2,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15120 invoked from network); 12 Jan 2014 09:31:46 -0000
Received: from bay0-omc4-s9.bay0.hotmail.com (HELO
	bay0-omc4-s9.bay0.hotmail.com) (65.54.190.211)
	by server-5.tower-206.messagelabs.com with SMTP;
	12 Jan 2014 09:31:46 -0000
Received: from BAY170-W64 ([65.54.190.199]) by bay0-omc4-s9.bay0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Sun, 12 Jan 2014 01:31:46 -0800
X-TMN: [fFIlkqYeMHXdP8mNhjER6hf+1MEwnnAPQCcQa37lc9c=]
X-Originating-Email: [zir_blazer@hotmail.com]
Message-ID: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>
From: Zir Blazer <zir_blazer@hotmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Sun, 12 Jan 2014 06:31:45 -0300
Importance: Normal
MIME-Version: 1.0
X-OriginalArrivalTime: 12 Jan 2014 09:31:46.0192 (UTC)
	FILETIME=[1C937900:01CF0F79]
X-Mailman-Approved-At: Sun, 12 Jan 2014 11:57:14 +0000
Subject: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4990483044060424573=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4990483044060424573==
Content-Type: multipart/alternative;
	boundary="_97203c59-53bb-475a-a251-16c55fe6d6f5_"

--_97203c59-53bb-475a-a251-16c55fe6d6f5_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Because I never managed to get Xen booting in UEFI mode when asking for help at xen-users, I was advised to mail xen-devel to see if someone here can help me with this issue, as it could be bug-related and not a configuration mistake. Original thread here:
http://lists.xen.org/archives/html/xen-users/2013-12/msg00050.html




I also suggest picking up a cup of coffee: my over-detailing may bore most readers, and since I have very little Linux and Xen experience, getting logs or more data may be complicated. So if you need a log of something, try to explain it as simply as possible, or give me a link with details so I know exactly what to do; that will keep me from getting stuck if asked to do things like "check what flags are enabled in the kernel" or "recompile the kernel with the following options/modules", etc.






SOFTWARE
Arch Linux - archlinux-2013.12.01-dual.iso (there is a newer one already available)
Xen 4.3.1-5 built as an EFI file (I built two versions: one with the Radeon patch applied/uncommented in the PKGBUILD, and another without it/commented out)


I wrote the ISO to a pendrive with dd, following these instructions to make it boot in UEFI mode:
https://wiki.archlinux.org/index.php/USB_Flash_Installation_Media#Using_dd_.28Recommended_method.29


Installed Arch Linux following the most relevant parts of the guides, choosing Gummiboot as the UEFI boot manager:
https://wiki.archlinux.org/index.php/Installation_Guide
https://wiki.archlinux.org/index.php/Beginners%27_Guide


Finally, I built the two xen.efi files with the x86_64-pep option enabled in binutils, as in this guide, and added them to Gummiboot:
https://bbs.archlinux.org/viewtopic.php?pid=1359933
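As a sanity check before building: xen.efi can only be linked when binutils was configured with the x86_64-pep target, which (as far as I understand binutils, treat the emulation name as an assumption) registers the PE+ emulation `i386pep` in the installed linker. A quick probe:

```shell
# Check whether the installed ld can emit the PE+ (EFI) format that xen.efi
# needs. Assumption: the x86_64-pep target shows up as the 'i386pep' emulation.
if ld -V 2>/dev/null | grep -q 'i386pep'; then
    pep_status=present
else
    pep_status=missing   # rebuild binutils with --enable-targets=x86_64-pep
fi
echo "PE+ linker emulation: $pep_status"
```

If the emulation is missing, the Xen build silently skips producing xen.efi, which is easy to mistake for a build problem elsewhere.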




xen.efi's cfg:


[global]
default=xen


[xen]
options=console=vga vga=current loglvl=all noreboot
kernel=vmlinuz-linux root=UUID=7af1797c-00af-4219-a141-27a718f39858 rw ignore_loglevel earlyprintk=vga,keep
ramdisk=initramfs-linux.img
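As an aside for anyone reproducing this: xen.efi looks for its configuration file in the same ESP directory as the binary, named after the binary itself (xen.efi → xen.cfg), so the expected layout is roughly the following sketch (the EFI/arch path and esp-demo directory are just examples, not the poster's actual paths):

```shell
# Example ESP layout for booting Xen via its EFI entry point; xen.efi reads
# the .cfg named after the binary from its own directory on the ESP.
ESP=esp-demo                       # stand-in for the mounted EFI system partition
mkdir -p "$ESP/EFI/arch"
cat > "$ESP/EFI/arch/xen.cfg" <<'EOF'
[global]
default=xen

[xen]
options=console=vga vga=current loglvl=all noreboot
kernel=vmlinuz-linux root=UUID=7af1797c-00af-4219-a141-27a718f39858 rw ignore_loglevel earlyprintk=vga,keep
ramdisk=initramfs-linux.img
EOF
ls "$ESP/EFI/arch"   # xen.efi, vmlinuz-linux and initramfs-linux.img go here too
```

The kernel and ramdisk names in the [xen] section are resolved relative to that same directory, so all four files need to sit together.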




That pretty much sums up the software side of what I have installed and can't get working. I think there were barely a few misc packages installed besides those, so if anyone has a near-identical setup (the most important thing would be the motherboard), it should be reproducible.








HARDWARE
Processor: Xeon E3-1245 V3 (Haswell)
Motherboard: Supermicro X10SAT (C226)
Hard Disk: Seagate Desktop HDD.15 4 TB
Video Cards: Intel HD Graphics P4600 / Haswell integrated GPU, Sapphire Radeon 5770 FLEX (I have another identical Radeon 5770 for when I get everything working)
Monitors: Samsung SyncMaster P2370H (connected to the Haswell IGP via DVI), Samsung SyncMaster 932N+ (connected to the Radeon 5770 with a DVI-to-VGA adapter)




All common-sense options are enabled in the BIOS (VT-x, VT-d, etc.)
Secure Boot isn't mentioned anywhere, not even in the motherboard manual:
http://www.supermicro.com/products/motherboard/xeon/c220/x10sat.cfm




lspci output:


00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3 Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 Display controller: Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 05)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5)
00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d5)
00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d5)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation C226 Series Chipset Family Server Advanced SKU LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
00:1f.6 Signal processing controller: Intel Corporation 8 Series Chipset Family Thermal Management Controller (rev 05)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper XT [Radeon HD 5770]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series]
02:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
03:00.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:01.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:04.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:05.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:07.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
04:09.0 PCI bridge: PLX Technology, Inc. PEX 8606 6 Lane, 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
08:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)
09:00.0 PCI bridge: Texas Instruments XIO2213A/B/XIO2221 PCI Express to PCI Bridge [Cheetah Express] (rev 01)
0a:00.0 FireWire (IEEE 1394): Texas Instruments XIO2213A/B/XIO2221 IEEE-1394b OHCI Controller [Cheetah Express] (rev 01)
0b:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)






On this same machine, I can get Xen working with pretty much everything else being the same, but using Syslinux as a BIOS-GPT boot loader instead of following the UEFI instructions. I even managed to do VGA passthrough of the Radeon 5770 to a WXP SP3 VM. There are a few differences in the boot process (more on this later).






Basically, I can get Arch Linux to boot in UEFI mode from the Gummiboot menu. However, when I launch the xen.efi executable instead (either the Radeon-patched one or the vanilla one), it either black-screens or hangs. After experimenting A LOT, I discovered the following different behaviors:




If in the BIOS I set the Haswell GPU as the main video output (POST and BIOS setup appear on the first monitor only), it reaches a point in the boot process where it simply black-screens. One of the last lines was:


Dom0 has maximum 8 VCPUs


Additionally, if the cfg file has an assigned maximum RAM size (dom0_mem=2048M,max=2048M), it does the "Scrubbing Free RAM" step. I think it prints a few more lines if I recall correctly, but too fast to notice. I tried a variety of options in that file and didn't get anywhere.




Later, I decided to try setting the Radeon 5770 as the video output in the BIOS (the POST and BIOS setup parts appear on the second monitor only). Surprisingly, it passed the point where it black-screens with the IGP; instead it hangs on "unpacking initramfs". When I started using earlyprintk=vga,keep in the Xen cfg file, I actually got a bit more info (I copied all these lines by hand and skipped writing the [1.23456789] parts, which I suppose are seconds; the last line is at around five and a half seconds):




e820: reserve RAM buffer [mem 0x8826e000-0x8bffffff]
e820: reserve RAM buffer [mem 0x88b4e000-0x8bffffff]
e820: reserve RAM buffer [mem 0x992ed000-0x9bffffff]
e820: reserve RAM buffer [mem 0x993eb000-0x9bffffff]
e820: reserve RAM buffer [mem 0x993eb000-0x9bffffff]
e820: reserve RAM buffer [mem 0x85ee00000-0x85fffffff]
NetLabel: Initializing
NetLabel:  domain hash size = 128
NetLabel:  protocols = UNLABELED CIPSOv4
NetLabel: unlabeled traffic allowed by default
Switched to clocksource xen
pnp: PnP ACPI: disabled
pci 0000:00:1c.4: bridge window [io 0x1000-0x0fff] to [bus 0c-44] add_size 1000
pci 0000:00:1c.4: res[13]=[io  0x1000-0x0fff] get_res_add_size add_size 1000
pci 0000:00:1c.4: BAR 13: assigned [io 0x1000-0x1fff]
pci 0000:00:01.0: PCI bridge to [bus 01]
pci 0000:00:01.0:  bridge window [io  0xe000-0xefff]
pci 0000:00:01.0:  bridge window [mem 0xeee00000-0xeeefffff]
pci 0000:00:01.0:  bridge window [mem 0xc40000000-0xc4fffffff 64bit pref]
pci 0000:00:1c.0: PCI bridge to [bus 02]
pci 0000:00:1c.0:  bridge window [io 0xd000-0xdfff]
pci 0000:00:1c.0:  bridge window [mem 0xeed00000-0xeedfffff]
pci 0000:04:01.0: PCI bridge to [bus 05]
pci 0000:04:04.0: PCI bridge to [bus 06]
pci 0000:04:05.0: PCI bridge to [bus 07]
pci 0000:04:07.0: PCI bridge to [bus 08]
pci 0000:04:07.0:  bridge window [mem 0xeea00000-0xeeafffff]
pci 0000:09:00.0: PCI bridge to [bus 0a]
pci 0000:09:00.0:  bridge window [mem 0xee800000-0xee8fffff]
pci 0000:04:09.0: PCI bridge to [bus 09-0a]
pci 0000:04:09.0:  bridge window [mem 0xee800000-0xee9fffff]
pci 0000:03:00.0: PCI bridge to [bus 04-0a]
pci 0000:03:00.0:  bridge window [mem 0xee800000-0xeeafffff]
pci 0000:00:1c.1: PCI bridge to [bus 03-04]
pci 0000:00:1c.1:  bridge window [mem 0xee800000-0xeebfffff]
pci 0000:00:1c.3: PCI bridge to [bus 0b]
pci 0000:00:1c.3:  bridge window [io  0xc000-0xcfff]
pci 0000:00:1c.3:  bridge window [mem 0xeec00000-0xeecfffff]
pci 0000:00:1c.4: PCI bridge to [bus 0c-44]
pci 0000:00:1c.4:  bridge window [io  0x1000-0x1fff]
pci 0000:00:1c.4:  bridge window [mem 0xd8000000-0xee0fffff]
pci 0000:00:1c.4:  bridge window [mem 0xb0000000-0xd1ffffff 64bit pref]
pci_bus 0000:00: resource 4 [io  0x0000-0xffff]
pci_bus 0000:00: resource 5 [mem 0x00000000-0x7fffffffff]




Sometimes it doesn't reach the pci_bus lines; other times it prints 3 pci_bus lines instead of 2, so the exact hang point isn't very precise. What is also suspicious is that on my working Xen installation (Syslinux, BIOS-GPT) it DOES detect ACPI and prints a few additional ACPI-related lines before the PCI ones. I don't know why it doesn't under UEFI.


There is nothing else I was able to figure out, as I got stuck here. The only thing I didn't test was KMS (Kernel Mode Setting): I heard there is a bug with Intel IGPs that may result in a black screen if I don't load a kernel module early, or something along those lines. That may be responsible for the black screen, but I'm not sure it explains the hang with the Radeon. In any case, I don't have these issues with my working BIOS installation.
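For anyone reproducing this: on Arch of that era, "loading the module early" for the Intel IGP means early KMS start, i.e. adding i915 to the MODULES line in /etc/mkinitcpio.conf and regenerating the initramfs. A sketch, run here against a scratch copy so nothing real is modified:

```shell
# Early KMS on Arch: add i915 to mkinitcpio's MODULES line and rebuild the
# initramfs. Demonstrated on a scratch file instead of /etc/mkinitcpio.conf.
printf 'MODULES=""\nHOOKS="base udev autodetect block filesystems"\n' > mkinitcpio-demo.conf
sed -i 's/^MODULES="/MODULES="i915/' mkinitcpio-demo.conf
grep '^MODULES=' mkinitcpio-demo.conf   # -> MODULES="i915"
# on the real system: edit /etc/mkinitcpio.conf, then run: mkinitcpio -p linux
```

With i915 in the initramfs, the mode switch happens before the "unpacking initramfs" stage where the hang is observed, which would at least separate a display problem from a real boot hang.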




I welcome anyone who knows what could be wrong, or any ideas about what I could test. I don't know if this is a Xen, Arch Linux, or maybe even BIOS issue, or how often people encounter hangs trying to boot Xen as a UEFI executable; judging from what I could find by googling around, there don't seem to be many people using Xen under UEFI.

deon HD 5770]<BR>01:00.1 Audio device: Advanced Micro Devices=2C Inc. [AMD/=
ATI] Juniper HDMI Audio [Radeon HD 5700 Series]<BR>02:00.0 SATA controller:=
 ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)<BR>03:00.0 =
PCI bridge: PLX Technology=2C Inc. PEX 8606 6 Lane=2C 6 Port PCI Express Ge=
n 2 (5.0 GT/s) Switch (rev ba)<BR>04:01.0 PCI bridge: PLX Technology=2C Inc=
. PEX 8606 6 Lane=2C 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)<BR=
>04:04.0 PCI bridge: PLX Technology=2C Inc. PEX 8606 6 Lane=2C 6 Port PCI E=
xpress Gen 2 (5.0 GT/s) Switch (rev ba)<BR>04:05.0 PCI bridge: PLX Technolo=
gy=2C Inc. PEX 8606 6 Lane=2C 6 Port PCI Express Gen 2 (5.0 GT/s) Switch (r=
ev ba)<BR>04:07.0 PCI bridge: PLX Technology=2C Inc. PEX 8606 6 Lane=2C 6 P=
ort PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)<BR>04:09.0 PCI bridge: PLX=
 Technology=2C Inc. PEX 8606 6 Lane=2C 6 Port PCI Express Gen 2 (5.0 GT/s) =
Switch (rev ba)<BR>08:00.0 USB controller: Renesas Technology Corp. uPD7202=
01 USB 3.0 Host Controller (rev 03)<BR>09:00.0 PCI bridge: Texas Instrument=
s XIO2213A/B/XIO2221 PCI Express to PCI Bridge [Cheetah Express] (rev 01)<B=
R>0a:00.0 FireWire (IEEE 1394): Texas Instruments XIO2213A/B/XIO2221 IEEE-1=
394b OHCI Controller [Cheetah Express] (rev 01)<BR>0b:00.0 Ethernet control=
ler: Intel Corporation I210 Gigabit Network Connection (rev 03)<BR><br><BR>=
<br><BR><br><BR>In this same machine=2C I can get Xen working with pretty m=
uch everything else being the same but using Syslinux as BIOS-GPT Boot Load=
er instead of following the UEFI instruction. I even managed to do VGA Pass=
through of the Radeon 5770 to a WXP SP3 VM. There are a few differences in =
the booting process (More on this later).<BR><br><BR><br><BR><br><BR>Basica=
lly=2C I can get Arch Linux to boot in UEFI mode from the Gummiboot menu. H=
owever=2C when I launch the xen.efi executable instead (Either the Radeon p=
atched or the vanilla one)=2C it either black screens or hangs. After exper=
imenting A LOT=2C I discovered the following different behaviators<span sty=
le=3D"font-size: 12pt=3B">:</span><BR><br><BR><br><BR>If in BIOS I set the =
Haswell GPU as main video output (POST and BIOS setup appears on first Moni=
tor only)=2C it reachs a point in the booting process where it simply black=
 screens. One of the last lines was<BR><br><BR>Dom0 has maximum 8 VCPUs<BR>=
<br><BR>Additionally=2C if the CFG file has an assigned RAM max size (dom0_=
mem=3D2048M=2Cmax=3D2048M)=2C it does the Scrubbing Free RAM. I think it th=
rows a few more lines if I recall correctly=2C but too fast to notice.&nbsp=
=3B<span style=3D"font-size: 12pt=3B">I tried with a variety of options in =
that file and didn't get anywhere.</span><BR><br><BR><br><BR>Later=2C I dec=
ided to try setting the Radeon 5770 as video output in the BIOS (POST and B=
IOS setup part appears on the second Monitor only). Surprisingly=2C it pass=
ed that point where it black screens with the IGP=2C it instead hangs on "u=
npacking initramfs". When I started to use earlyprintk=3Dvga=2Ckeep on the =
Xen CFG file=2C I actually got a bit more info (I copied all these lines by=
 hand=2C I skipped to write the [1.23456789] parts which I suppose are seco=
nds=2C last line is around 5 seconds and half):<BR><br><BR><br><BR>e820: re=
serve RAM buffer [mem 0x8826e000-0x8bffffff]<BR>e820: reserve RAM buffer [m=
em 0x88b4e000-0x8bffffff]<BR>e820: reserve RAM buffer [mem 0x992ed000-0x9bf=
fffff]<BR>e820: reserve RAM buffer [mem 0x993eb000-0x9bffffff]<BR>e820: res=
erve RAM buffer [mem 0x993eb000-0x9bffffff]<BR>e820: reserve RAM buffer [me=
m 0x85ee00000-0x85fffffff]<BR>NetLabel: Initializing<BR>NetLabel: &nbsp=3Bd=
omain hash size =3D 128<BR>NetLabel: &nbsp=3Bprotocols =3D UNLABELED CIPSOv=
4<BR>NetLabel: unlabeled traffic allowed by default<BR>Switched to clocksou=
rce xen<BR>pnp: PnP ACPI: disabled<BR>pci 0000:00:1c.4: bridge window [io 0=
x1000-0x0fff] to [bus 0c-44] add_size 1000<BR>pci 0000:00:1c.4: res[13]=3D[=
io &nbsp=3B0x1000-0x0fff] get_res_add_size add_size 1000<BR>pci 0000:00:1c.=
4: BAR 13: assigned [io 0x1000-0x1fff]<BR>pci 0000:00:01.0: PCI bridge to [=
bus 01]<BR>pci 0000:00:01.0: &nbsp=3Bbridge window [io &nbsp=3B0xe000-0xeff=
f]<BR>pci 0000:00:01.0: &nbsp=3Bbridge window [mem 0xeee00000-0xeeefffff]<B=
R>pci 0000:00:01.0: &nbsp=3Bbridge window [mem 0xc40000000-0xc4fffffff 64bi=
t pref]<BR>pci 0000:00:1c.0: PCI bridge to [bus 02]<BR>pci 0000:00:1c.0: &n=
bsp=3Bbridge window [io 0xd000-0xdfff]<BR>pci 0000:00:1c.0: &nbsp=3Bbridge =
window [mem 0xeed00000-0xeedfffff]<BR>pci 0000:04:01.0: PCI bridge to [bus =
05]<BR>pci 0000:04:04.0: PCI bridge to [bus 06]<BR>pci 0000:04:05.0: PCI br=
idge to [bus 07]<BR>pci 0000:04:07.0: PCI bridge to [bus 08]<BR>pci 0000:04=
:07.0: &nbsp=3Bbridge window [mem 0xeea00000-0xeeafffff]<BR>pci 0000:09:00.=
0: PCI bridge to [bus 0a]<BR>pci 0000:09:00.0: &nbsp=3Bbridge window [mem 0=
xee800000-0xee8fffff]<BR>pci 0000:04:09.0: PCI bridge to [bus 09-0a]<BR>pci=
 0000:04:09.0: &nbsp=3Bbridge window [mem 0xee800000-0xee9fffff]<BR>pci 000=
0:03:00.0: PCI bridge to [bus 04-0a]<BR>pci 0000:03:00.0: &nbsp=3Bbridge wi=
ndow [mem 0xee800000-0xeeafffff]<BR>pci 0000:00:1c.1: PCI bridge to [bus 03=
-04]<BR>pci 0000:00:1c.1: &nbsp=3Bbridge window [mem 0xee800000-0xeebfffff]=
<BR>pci 0000:00:1c.3: PCI bridge to [bus 0b]<BR>pci 0000:00:1c.3: &nbsp=3Bb=
ridge window [io &nbsp=3B0xc000-0xcfff]<BR>pci 0000:00:1c.3: &nbsp=3Bbridge=
 window [mem 0xeec00000-0xeecfffff]<BR>pci 0000:00:1c.4: PCI bridge to [bus=
 0c-44]<BR>pci 0000:00:1c.4: &nbsp=3Bbridge window [io &nbsp=3B0x1000-0x1ff=
f]<BR>pci 0000:00:1c.4: &nbsp=3Bbridge window [mem 0xd8000000-0xee0fffff]<B=
R>pci 0000:00:1c.4: &nbsp=3Bbridge window [mem 0xb0000000-0xd1ffffff 64bit =
pref]<BR>pci_bus 0000:00: resource 4 [io &nbsp=3B0x0000-0xffff]<BR>pci_bus =
0000:00: resource 5 [mem 0x00000000-0x7fffffffff]<BR><br><BR><br><BR>Someti=
mes it doesn't reach the pci_bus lines=2C other times it throws 3 pci_bus i=
nstead of 2=2C so the hang part isn't very precise. What is also suspicious=
 is that on my working Xen installation (Syslinux BIOS-GPT)=2C it DOES dete=
ct ACPI and throws a few additional acpi-related lines before pci. I don't =
know why it doesn't on UEFI.<BR><br><BR>There is nothing else I was able to=
 figure out as I got stuck here. Only thing I didn't tested was using KMS (=
Kernel Mode Settings) as I heared that there is a bug with Intel IGPs that =
may result in a black screen if I don't load a kernel module early=2C or so=
mething around these lines. That may be responsible for the black screen=2C=
 but I'm not sure about the hang with the Radeon. But I don't have these is=
sues with my working installation using BIOS.<BR><br><BR><br><BR>I welcome =
anyone that knows what could be wrong=2C or any ideas about what I could te=
st. I don't know if this is a Xen=2C Arch Linux=2C or maybe even BIOS issue=
=2C or how often people encounter hangs trying to boot Xen as an UEFI execu=
table=2C as there doesn't seems to be a lot of people using Xen in UEFI by =
what I could get googling around.<BR> 		 	   		  </div></body>
</html>=

--_97203c59-53bb-475a-a251-16c55fe6d6f5_--


--===============4990483044060424573==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4990483044060424573==--


From xen-devel-bounces@lists.xen.org Sun Jan 12 12:04:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 12:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2JmJ-000852-Tk; Sun, 12 Jan 2014 12:04:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1W2JmI-00084w-1t
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 12:04:30 +0000
Received: from [85.158.139.211:34397] by server-3.bemta-5.messagelabs.com id
	C4/86-04773-DC482D25; Sun, 12 Jan 2014 12:04:29 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389528267!9219221!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29151 invoked from network); 12 Jan 2014 12:04:28 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-5.tower-206.messagelabs.com with SMTP;
	12 Jan 2014 12:04:28 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 12 Jan 2014 04:00:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,647,1384329600"; d="scan'208";a="465406402"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 12 Jan 2014 04:04:26 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 12 Jan 2014 04:04:25 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sun, 12 Jan 2014 20:04:22 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] xen pci and vga passthrough + option roms
Thread-Index: AQHPDk5TRj9WaqtJbkWAkWiFDbG7mJqA+yrw
Date: Sun, 12 Jan 2014 12:04:21 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923350147C123@SHSMSX101.ccr.corp.intel.com>
References: <694614152.20140110225103@eikelenboom.it>
In-Reply-To: <694614152.20140110225103@eikelenboom.it>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen pci and vga passthrough + option roms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

IIRC, qemu-traditional supports Intel Gfx passthrough but qemu-xen does not.

Unlike generic PCI device passthrough, Gfx passthrough has some tricky points (and different vendors may have different ones).
For example, the qemu-traditional implementation for Intel Gfx has to add some logic:
At the QEMU side:
  do not create an emulated VGA when passing the iGfx through
  use a dummy console, and avoid keyboard and mouse passthrough
  reserve slot 00:02.0 for the iGfx
  emulate the PCI-ISA bridge (the OS installs the proper iGfx generation driver based on the bridge device ID)
In hvmloader:
  reserve 2 pages in the e820 map for the iGfx OpRegion

Some questions embedded below.

Thanks,
Jinsong


Sander Eikelenboom wrote:
> Hi Konrad,
> 
> I'm starting a new thread .. since i'm essentially trying to start
> over from scratch. (shoot i forgot to include xen-devel on the first
> mail ...) 

Would you please paste the earlier mail?

> 
> - Xen-unstable the latest and greatest
> - Linux 3.13-rc7 as dom0 and guest kernel
> - Qemu-xen
> - Some patches (vga="none", enabling qemu debug, fixing build errors
> when enabling qemu debug) 
> 
> I'm now first trying to replicate you :-) (don't worry .. it won't
> hurt :p), 
> i'm using an intel NIC now (found one laying around and it has a rom.
> and after it was flashed .. it even has valid rom content :-) )
> 
> Now the first experiment is to see if the sequence "echo 1 > rom; cat
> rom > romfile.bin; echo 0 > rom;" works. 

Where do you run these commands? In the device's sysfs directory, like ./pci0000:00/0000:00:03.2/0000:xx:xx.x (the NIC and VGA dirs)?
I ran them on my machine but they fail, so would you please elaborate on your test steps?
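
For context, the writable `rom` attribute lives in each PCI device's sysfs directory, e.g. `/sys/bus/pci/devices/0000:01:00.0/rom` (that BDF is illustrative, not from the thread). A minimal sketch of the sequence Sander describes, plus a sanity check of the dump; the hardware-touching step is guarded so the signature check can still be demonstrated on a fabricated image when no readable ROM is present:

```shell
# Illustrative sketch only: the BDF and file paths are assumptions.
dev=/sys/bus/pci/devices/0000:01:00.0
out=/tmp/romfile.bin

if [ -w "$dev/rom" ]; then
    echo 1 > "$dev/rom"        # enable reads of the ROM BAR
    cat "$dev/rom" > "$out"    # copy the option ROM contents
    echo 0 > "$dev/rom"        # disable again
else
    # No such device / not root here: fabricate a 512-byte dummy image
    # so the signature check below can still be demonstrated.
    printf '\125\252' > "$out"           # 0x55 0xAA
    head -c 510 /dev/zero >> "$out"
fi

# A valid PCI expansion ROM starts with the signature bytes 0x55 0xAA.
sig=$(od -An -tx1 -N2 "$out" | tr -d ' \n')
if [ "$sig" = "55aa" ]; then
    echo "valid option ROM signature"
else
    echo "suspect image, first bytes: $sig"
fi
```

Comparing the first bytes this way is a quick test for the "BOCHS VGAbios instead of the real ROM" symptom: both images carry the 55aa signature, so the full contents still need to be diffed.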

> 
> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS,
> kernel gives "cat: rom: Input/output error" 
> - Running Xen, in an HVM guest, NIC passed through: dumping the rom
> works, contents are the same as under dom0 
> 
> The failure with pciback is perhaps because the device wasn't
> initialized at boot, since it was seized by pciback. 
> So generally speaking the rombar of pci devices is correctly passed
> through and can be dumped. 
> 
> So it indeed appears to be a VGA passthrough specific issue.
> 
> This was on my intel machine, but unfortunately that one has an IGD
> that shares its pci-e lanes with the only pcie x16 slot, 
> so will put the NIC in my AMD machine .. and verify if it works there
> and then continue with the VGA card. 
> 
> Now trying on the AMD machine:
> 
> NIC:
> - Running Xen, on dom0, NIC owned by e1000e: dumping the rom works
> - Running Xen, on dom0, NIC owned by pciback: dumping the rom FAILS,
> kernel gives "cat: rom: Input/output error" 
> - Running Xen, in an HVM guest, NIC passed through: dumping the rom
> works, contents are the same as under dom0 
> 
> VGA card:
> - Running Xen, on dom0, VGA card owned by radeon: dumping the rom
> works 
> - Running Xen, on dom0, VGA card owned by nothing (due to setting
> nomodeset): dumping the rom FAILS, kernel gives "cat: rom:
> Input/output error"  
> - Running Xen, on dom0, VGA card owned by pciback: dumping the rom
> FAILS, kernel gives "cat: rom: Input/output error" 
> - Running Xen, in an HVM guest, VGA card passed through: dumping the
> rom works, contents are DIFFERENT from dom0, i get the BOCHS VGAbios 
> 
> Preliminary conclusions:
> - Passing through the rombar from a NIC works
> - Passing through the rombar from a VGA card doesn't work out of the
> box. 
> - the sequence "echo 1 > rom; cat rom > romfile.bin; echo 0 > rom;"
>   works but seems to rely on a loaded and working driver, and is thus
> not very reliable for testing, so i need to dust off my arithmetic and
> use /dev/kmem and dd.
> - So Anthony is probably right .. something is playing tricks with the
> rombar when it's VGA class, even if it's the secondary card in the Guest.
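
The raw-memory fallback mentioned above can be sketched as follows. Note this reads /dev/mem (physical addresses) rather than /dev/kmem, since the legacy VGA BIOS shadow sits at physical 0xC0000, i.e. 768 KiB in; it needs root, and CONFIG_STRICT_DEVMEM can block it even then. Paths and sizes are illustrative:

```shell
# Sketch, not a tested recipe: dump the 64 KiB legacy VGA BIOS shadow
# region starting at physical 0xC0000 (= 768 * 1024) from /dev/mem.
out=/tmp/vgabios-shadow.bin

if dd if=/dev/mem of="$out" bs=1024 skip=768 count=64 2>/dev/null; then
    msg="dumped $(wc -c < "$out") bytes from 0xC0000 to $out"
else
    # Graceful degradation when not root or /dev/mem access is restricted.
    msg="cannot read /dev/mem here (needs root; CONFIG_STRICT_DEVMEM may forbid it)"
fi
echo "$msg"
```

This bypasses the sysfs `rom` attribute entirely, so it does not depend on a bound driver, at the cost of only seeing whatever BIOS image is shadowed at the legacy address.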


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 18:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 18:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Pjq-0001zn-QV; Sun, 12 Jan 2014 18:26:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2Pjp-0001zi-F4
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 18:26:21 +0000
Received: from [85.158.137.68:8106] by server-15.bemta-3.messagelabs.com id
	DF/3D-11556-C4ED2D25; Sun, 12 Jan 2014 18:26:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389551178!4993641!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8673 invoked from network); 12 Jan 2014 18:26:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 18:26:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208,217";a="92133024"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 18:26:17 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 13:26:16 -0500
Received: from [10.197.86.194] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Sun, 12 Jan 2014
	19:26:15 +0100
Message-ID: <52D2DE47.4070506@citrix.com>
Date: Sun, 12 Jan 2014 18:26:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zir Blazer <zir_blazer@hotmail.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
References: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>
In-Reply-To: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Subject: Re: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6748531899135878479=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6748531899135878479==
Content-Type: multipart/alternative;
	boundary="------------050204060106090305030703"

--------------050204060106090305030703
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 12/01/2014 09:31, Zir Blazer wrote:
> Since I didn't manage in any way to get Xen booting in UEFI mode when
> asking for help at xen-users, I was advised to mail xen-devel to see
> if someone here can help me with this issue, as this could be
> bug-related and not a configuration mistake. Original thread here:
> http://lists.xen.org/archives/html/xen-users/2013-12/msg00050.html
>
>

I have skimmed over it, and the lack of boot messages appears to be the
main problem.

Do you have a serial PCI card you could put in (or does your USB port
have debug capabilities), so you can set up a serial console?
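
For reference, a serial console would be configured from the same options= line of the xen.efi config quoted earlier in the thread. A sketch, assuming the card shows up as COM1 at 115200 8n1 (the "..." stands for the root filesystem UUID from the original cfg, and console=ttyS0 gives dom0 the same serial line):

```
[xen]
options=console=com1,vga com1=115200,8n1 loglvl=all noreboot
kernel=vmlinuz-linux root=UUID=... rw console=ttyS0 ignore_loglevel
ramdisk=initramfs-linux.img
```

With console=com1,vga Xen mirrors its output to both the serial port and the screen, so the lines lost to the black screen should be capturable from another machine.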

As for EFI boot itself, there is a lot which is expected not to work at
the moment, because of the split between Xen and dom0 and who gets to
use boot services etc., but it is expected to basically boot and function.

Have you tried xen-unstable (which is currently 4.4-rc1 + some)? It
might be an interesting datapoint, but I can't offhand recall whether
there is much/anything relevant to EFI boot.

~Andrew

--------------050204060106090305030703--


--===============6748531899135878479==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6748531899135878479==--


From xen-devel-bounces@lists.xen.org Sun Jan 12 18:29:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 18:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Pn5-0002XW-Eb; Sun, 12 Jan 2014 18:29:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2Pn4-0002XN-Rz
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 18:29:43 +0000
Received: from [85.158.143.35:40190] by server-1.bemta-4.messagelabs.com id
	DF/73-02132-61FD2D25; Sun, 12 Jan 2014 18:29:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389551380!11224157!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10266 invoked from network); 12 Jan 2014 18:29:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 18:29:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208";a="92133180"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 18:29:39 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 13:29:39 -0500
Received: from [10.197.86.194] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Sun, 12 Jan 2014
	19:29:37 +0100
Message-ID: <52D2DF12.5050801@citrix.com>
Date: Sun, 12 Jan 2014 18:29:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>, <xen-devel@lists.xen.org>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
In-Reply-To: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/01/2014 22:59, Igor Kozhukhov wrote:
> Hello All,
>
> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>
> I have received a 'pirq' from the hypervisor > 255.
>
> map_irq.domid = DOMID_SELF;                                                   
> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
> map_irq.index = -1; /* hypervisor auto allocates vector */                    
> map_irq.pirq = -1;                                                            
> map_irq.bus = busnum;                                                         
> map_irq.devfn = devfn;                                                        
> map_irq.entry_nr = i;                                                         
> map_irq.table_base = 0;                                              
> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
> irqno = map_irq.pirq;
>
> I get:
> irqno = 279 - which is more than APIC_MAX_VECTOR (255)
>
> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>
> All works well on xen-3.4, but physdev_map_pirq() has a different implementation there than in xen-4.2.

Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
handed back will be the event channel on which the notification will
arrive, and has nothing to do with regular IDT vectors.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 18:38:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 18:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2PvO-0002in-Ge; Sun, 12 Jan 2014 18:38:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tipbot@zytor.com>) id 1W2PvM-0002ii-LU
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 18:38:16 +0000
Received: from [85.158.143.35:32587] by server-2.bemta-4.messagelabs.com id
	A7/63-11386-711E2D25; Sun, 12 Jan 2014 18:38:15 +0000
X-Env-Sender: tipbot@zytor.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389551892!11227886!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=1.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	X_MAILER_SPAM,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21741 invoked from network); 12 Jan 2014 18:38:14 -0000
Received: from terminus.zytor.com (HELO terminus.zytor.com) (198.137.202.10)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Jan 2014 18:38:14 -0000
Received: from terminus.zytor.com (localhost [127.0.0.1])
	by terminus.zytor.com (8.14.7/8.14.7) with ESMTP id s0CIbs1t021223;
	Sun, 12 Jan 2014 10:37:59 -0800
Received: (from tipbot@localhost)
	by terminus.zytor.com (8.14.7/8.14.5/Submit) id s0CIbstA021220;
	Sun, 12 Jan 2014 10:37:54 -0800
Date: Sun, 12 Jan 2014 10:37:54 -0800
X-Authentication-Warning: terminus.zytor.com: tipbot set sender to
	tipbot@zytor.com using -f
From: tip-bot for John Stultz <tipbot@zytor.com>
Message-ID: <tip-5258d3f25c76f6ab86e9333abf97a55a877d3870@git.kernel.org>
To: linux-tip-commits@vger.kernel.org
Git-Commit-ID: 5258d3f25c76f6ab86e9333abf97a55a877d3870
X-Mailer: tip-git-log-daemon
Robot-ID: <tip-bot.git.kernel.org>
Robot-Unsubscribe: Contact <mailto:hpa@kernel.org>
	to get blacklisted from these emails
MIME-Version: 1.0
Content-Disposition: inline
Precedence: bulk
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.2
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on terminus.zytor.com
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.1
	(terminus.zytor.com [127.0.0.1]);
	Sun, 12 Jan 2014 10:38:01 -0800 (PST)
Cc: prarit@redhat.com, richardcochran@gmail.com, xen-devel@lists.xen.org,
	john.stultz@linaro.org, david.vrabel@citrix.com, hpa@zytor.com,
	sasha.levin@oracle.com, tglx@linutronix.de, mingo@kernel.org
Subject: [Xen-devel] [tip:timers/core] timekeeping: Fix potential lost pv
 notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Reply-To: linux-kernel@vger.kernel.org, mingo@kernel.org, hpa@zytor.com,
	sasha.levin@oracle.com, konrad.wilk@oracle.com,
	richardcochran@gmail.com, john.stultz@linaro.org,
	xen-devel@lists.xen.org, david.vrabel@citrix.com,
	tglx@linutronix.de, prarit@redhat.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit-ID:  5258d3f25c76f6ab86e9333abf97a55a877d3870
Gitweb:     http://git.kernel.org/tip/5258d3f25c76f6ab86e9333abf97a55a877d3870
Author:     John Stultz <john.stultz@linaro.org>
AuthorDate: Wed, 11 Dec 2013 20:07:49 -0800
Committer:  John Stultz <john.stultz@linaro.org>
CommitDate: Mon, 23 Dec 2013 11:53:26 -0800

timekeeping: Fix potential lost pv notification of time change

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only used the returned value
in one location, not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation() to pass that
action flag down so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Cc: stable <stable@vger.kernel.org> #3.11+
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 7488f0b..051855f 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 19:18:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 19:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2QXk-0004jA-1c; Sun, 12 Jan 2014 19:17:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Dave.Scott@citrix.com>) id 1W2QXh-0004j5-Ou
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 19:17:53 +0000
Received: from [85.158.143.35:49052] by server-2.bemta-4.messagelabs.com id
	10/40-11386-16AE2D25; Sun, 12 Jan 2014 19:17:53 +0000
X-Env-Sender: Dave.Scott@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389554271!11078018!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30880 invoked from network); 12 Jan 2014 19:17:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 19:17:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208";a="92137548"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 19:17:50 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 14:17:50 -0500
Received: from AMSPEX01CL03.citrite.net ([169.254.8.218]) by
	AMSPEX01CL01.citrite.net ([10.69.46.32]) with mapi id 14.02.0342.004;
	Sun, 12 Jan 2014 20:17:49 +0100
From: Dave Scott <Dave.Scott@citrix.com>
To: 'Anil Madhavapeddy' <anil@recoil.org>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH v2] libxl: ocaml: guard x86-specific functions behind
	an ifdef
Thread-Index: AQHPDyWJITpQEuw8a0eO477WORnCJJqBdsDA
Date: Sun, 12 Jan 2014 19:17:48 +0000
Message-ID: <6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
References: <20140111233325.GA30303@dark.recoil.org>
In-Reply-To: <20140111233325.GA30303@dark.recoil.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.69.129.53]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific
 functions behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Anil,

Thanks for getting oxenstored on ARM working!

I'm happy with a simple 'Failure not implemented' exception for the moment. I think that once we're using the libxl bindings everywhere we can probably remove these libxc bindings anyway.

Acked-by: David Scott <dave.scott@eu.citrix.com>

Cheers,
Dave

> -----Original Message-----
> From: Anil Madhavapeddy [mailto:anil@recoil.org]
> Sent: 11 January 2014 11:33 PM
> To: xen-devel@lists.xen.org
> Cc: Dave Scott
> Subject: [PATCH v2] libxl: ocaml: guard x86-specific functions behind an
> ifdef
> 
> The various cpuid functions are not available on ARM, so this makes them
> raise an OCaml exception.  Omitting the functions completely results in a
> link failure in oxenstored due to the missing symbols, so this is preferable
> to the much bigger patch that would result from adding conditional
> compilation into the OCaml interfaces.
> 
> With this patch, oxenstored can successfully start a domain on Xen/ARM.
> 
> Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
> ---
>  tools/ocaml/libs/xc/xenctrl_stubs.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index f5cf0ed..ff29b47 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
>  {
>  	CAMLparam4(xch, domid, input, config);
>  	CAMLlocal2(array, tmp);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
>  	unsigned int c_input[2];
>  	char *c_config[4], *out_config[4];
> @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value xch, value domid,
>  			 c_input, (const char **)c_config, out_config);
>  	if (r < 0)
>  		failwith_xc(_H(xch));
> +#else
> +	caml_failwith("xc_domain_cpuid_set: not implemented");
> +#endif
>  	CAMLreturn(array);
>  }
> 
>  CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value domid)
>  {
>  	CAMLparam2(xch, domid);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
> 
>  	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
>  	if (r < 0)
>  		failwith_xc(_H(xch));
> +#else
> +	caml_failwith("xc_domain_cpuid_apply_policy: not implemented");
> +#endif
>  	CAMLreturn(Val_unit);
>  }
> 
> @@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
>  {
>  	CAMLparam3(xch, input, config);
>  	CAMLlocal3(ret, array, tmp);
> +#if defined(__i386__) || defined(__x86_64__)
>  	int r;
>  	unsigned int c_input[2];
>  	char *c_config[4], *out_config[4];
> @@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch, value input, value config)
>  	Store_field(ret, 0, Val_bool(r));
>  	Store_field(ret, 1, array);
> 
> +#else
> +	caml_failwith("xc_domain_cpuid_check: not implemented");
> +#endif
>  	CAMLreturn(ret);
>  }
> 
> --
> 1.8.1.2
> 
> 
> --
> Anil Madhavapeddy                                 http://anil.recoil.org

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 19:26:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 19:26:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Qfy-0005LJ-He; Sun, 12 Jan 2014 19:26:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W2Qfx-0005LE-5e
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 19:26:25 +0000
Received: from [193.109.254.147:23284] by server-14.bemta-14.messagelabs.com
	id 70/E8-12628-06CE2D25; Sun, 12 Jan 2014 19:26:24 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389554782!10369596!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23675 invoked from network); 12 Jan 2014 19:26:23 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 19:26:23 -0000
Received: by mail-la0-f54.google.com with SMTP id y1so541830lam.27
	for <xen-devel@lists.xen.org>; Sun, 12 Jan 2014 11:26:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=VDFsuyvuacS4OKvWfYPEwNI++YXUp7xjHKivLOCFIAQ=;
	b=U+hUujHkpgDXBFuTuESsnXKlfe5NjQadWKZ8TPNVkdl/xfQlaFS5Q5rAp1mMwoVEmn
	ZIQh0aFsY4VASGt1rA6tsupiXkomr+MhASFNloFbY5mMdWoussDeSErLQLC/O3BnjCm1
	Xw/r/lPpQjwroJfEMVMQYHdy8W1C2KeT+PVhZy1KDXueyacEM7Jj6GGAonj0BJzGURcz
	UdjMLWNm1DuAx54wJkXlyOtJFUBOrJkH/PEvHFjQixvDxDVXaM2IMmt+7FD0l/IFRB4q
	qyo65HhVd20JMQCQPOa0DrQPH1N4e40vOC1+Wl1QEwZeLJXq8of27OUd/p0k+e7YIUP3
	eQ4A==
X-Received: by 10.112.156.228 with SMTP id wh4mr7508779lbb.10.1389554782700;
	Sun, 12 Jan 2014 11:26:22 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id tc8sm8040861lbb.9.2014.01.12.11.26.21
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 12 Jan 2014 11:26:21 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52D2DF12.5050801@citrix.com>
Date: Sun, 12 Jan 2014 23:26:19 +0400
Message-Id: <43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Andrew,

On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:

> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>> Hello All,
>> 
>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>> 
>> I have received a 'pirq' from the hypervisor that is > 255.
>> 
>> map_irq.domid = DOMID_SELF;                                                   
>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>> map_irq.pirq = -1;                                                            
>> map_irq.bus = busnum;                                                         
>> map_irq.devfn = devfn;                                                        
>> map_irq.entry_nr = i;                                                         
>> map_irq.table_base = 0;                                              
>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>> irqno = map_irq.pirq;
>> 
>> I get:
>> irqno = 279, which is greater than APIC_MAX_VECTOR (255)
>> 
>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>> 
>> Everything works well on xen-3.4, but the implementation of physdev_map_pirq() differs from that in xen-4.2.
> 
> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
> handed back will be the event channel on which the notification will
> arrive, and has nothing to do with regular IDT vectors.

It is for dom0.

You can find the full boot log, with Xen debug info and DDI_DEBUG enabled on illumos, here:
http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt


If possible, could you please explain how MSI irq translation to the APIC irq table works in xen-4.2?

I see that in the Xen code the MSI IRQ range runs from 16 to 784 for 4 CPUs (the irq_create() function), but how do I correctly translate that to an APIC IRQ (physical irq)?

> ~Andrew
> 

-Igor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 20:34:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 20:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Rjr-0000GH-3z; Sun, 12 Jan 2014 20:34:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2Rjq-0000GC-A9
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 20:34:30 +0000
Received: from [193.109.254.147:46414] by server-16.bemta-14.messagelabs.com
	id C8/97-20600-55CF2D25; Sun, 12 Jan 2014 20:34:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389558867!10368192!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21708 invoked from network); 12 Jan 2014 20:34:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 20:34:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208";a="90011838"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Jan 2014 20:34:26 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 15:34:26 -0500
Received: from [192.168.50.127] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Sun, 12 Jan 2014
	21:34:23 +0100
Message-ID: <52D2FC4B.2080509@citrix.com>
Date: Sun, 12 Jan 2014 20:34:19 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
In-Reply-To: <43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/01/2014 19:26, Igor Kozhukhov wrote:
> Hi Andrew,
>
> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>
>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>> Hello All,
>>>
>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>
>>> I have received a 'pirq' from the hypervisor that is > 255.
>>>
>>> map_irq.domid = DOMID_SELF;                                                   
>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>> map_irq.pirq = -1;                                                            
>>> map_irq.bus = busnum;                                                         
>>> map_irq.devfn = devfn;                                                        
>>> map_irq.entry_nr = i;                                                         
>>> map_irq.table_base = 0;                                              
>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>> irqno = map_irq.pirq;
>>>
>>> I get:
>>> irqno = 279, which is greater than APIC_MAX_VECTOR (255)
>>>
>>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>>>
>>> Everything works well on xen-3.4, but the implementation of physdev_map_pirq() differs from that in xen-4.2.
>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>> handed back will be the event channel on which the notification will
>> arrive, and has nothing to do with regular IDT vectors.
> It is for dom0.
>
> You can find the full boot log, with Xen debug info and DDI_DEBUG enabled on illumos, here:
> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>
>
> If possible, could you please explain how MSI irq translation to the APIC irq table works in xen-4.2?
>
> I see that in the Xen code the MSI IRQ range runs from 16 to 784 for 4 CPUs (the irq_create() function), but how do I correctly translate that to an APIC IRQ (physical irq)?

Why do you need to know?

Xen controls all interrupts on the system.  Event channels which you
register with Xen have no mapping/relation to local apic vectors.  Your
device drivers should not expect to have an apic vector in their hand.

The reason behind this is that as virtual cpus get scheduled around
physical cpus, Xen needs to move the interrupts from IDT to IDT at which
point their vector will change.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 20:34:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 20:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2Rjr-0000GH-3z; Sun, 12 Jan 2014 20:34:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2Rjq-0000GC-A9
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 20:34:30 +0000
Received: from [193.109.254.147:46414] by server-16.bemta-14.messagelabs.com
	id C8/97-20600-55CF2D25; Sun, 12 Jan 2014 20:34:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389558867!10368192!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21708 invoked from network); 12 Jan 2014 20:34:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 20:34:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208";a="90011838"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Jan 2014 20:34:26 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 15:34:26 -0500
Received: from [192.168.50.127] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Sun, 12 Jan 2014
	21:34:23 +0100
Message-ID: <52D2FC4B.2080509@citrix.com>
Date: Sun, 12 Jan 2014 20:34:19 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
In-Reply-To: <43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/01/2014 19:26, Igor Kozhukhov wrote:
> Hi Andrew,
>
> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>
>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>> Hello All,
>>>
>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>
>>> I have received a 'pirq' from the hypervisor that is > 255.
>>>
>>> map_irq.domid = DOMID_SELF;                                                   
>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>> map_irq.pirq = -1;                                                            
>>> map_irq.bus = busnum;                                                         
>>> map_irq.devfn = devfn;                                                        
>>> map_irq.entry_nr = i;                                                         
>>> map_irq.table_base = 0;                                              
>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>> irqno = map_irq.pirq;
>>>
>>> I get:
>>> irqno = 279, which is more than APIC_MAX_VECTOR (255).
>>>
>>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>>>
>>> Everything works well on xen-3.4, but physdev_map_pirq() has a different implementation in xen-4.2.
>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>> handed back will be the event channel on which the notification will
>> arrive, and has nothing to do with regular IDT vectors.
> It is for dom0.
>
> A full boot log with Xen debug info and DDI_DEBUG on illumos can be found here:
> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>
> If possible, could you please explain how MSI irq translation to the APIC irq table works in xen-4.2?
>
> I see that in the Xen code we have a range from 16 to 784 for 4 CPUs for MSI IRQs (the irq_create() function),
> but how do I correctly translate that to an APIC IRQ (a physical irq)?

Why do you need to know?

Xen controls all interrupts on the system.  Event channels which you
register with Xen have no mapping/relation to local apic vectors.  Your
device drivers should not expect to have an apic vector in their hand.

The reason behind this is that as virtual cpus get scheduled around
physical cpus, Xen needs to move the interrupts from IDT to IDT at which
point their vector will change.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 21:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 21:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2S9j-0001kc-GW; Sun, 12 Jan 2014 21:01:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W2S9i-0001kX-1y
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 21:01:14 +0000
Received: from [85.158.137.68:53296] by server-17.bemta-3.messagelabs.com id
	14/7C-15965-99203D25; Sun, 12 Jan 2014 21:01:13 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389560471!8662046!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29464 invoked from network); 12 Jan 2014 21:01:12 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 21:01:12 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so1852975lan.12
	for <xen-devel@lists.xen.org>; Sun, 12 Jan 2014 13:01:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=LO4+I1dWMACxccAO2llu68jFRLPjCvYVls2Jsx3av8U=;
	b=DcUkc0QCTjdmUlwjvH2OdTX29Ami0TTonVqRJvqDzgOplJXgpTuHPz7F5U10yBlTG/
	q7D29aly7utwyeDFHCqEodeJ8mbseWRmdu6/UcuzjO9THOjPqL+nkX8NGd7XfbJlha0w
	4YmE8qna+Wuvcsrzi8yFWxOOyEYcm/+vAB1chwnsOW1gIdr317aMhsF5eGDFzYndmZjY
	oespLmhrUAdtviiYRYolRxEmdkwLG02knLv8WA2U36ZFXNP4Ra63KiH7yolOqwHQOleS
	Qn7z7MrBuROo0qqhCnd/E5cwup2n/Q8afESXlXxr9BXPkh0N1nSOcD5XmoMUE3897Px4
	QIBQ==
X-Received: by 10.112.52.74 with SMTP id r10mr8636247lbo.36.1389560471478;
	Sun, 12 Jan 2014 13:01:11 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id y11sm8179094lbm.13.2014.01.12.13.01.10
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 12 Jan 2014 13:01:10 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52D2FC4B.2080509@citrix.com>
Date: Mon, 13 Jan 2014 01:01:09 +0400
Message-Id: <BA268314-38BF-4948-B69C-AD5A3916D5B9@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
	<52D2FC4B.2080509@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 13, 2014, at 12:34 AM, Andrew Cooper wrote:

> On 12/01/2014 19:26, Igor Kozhukhov wrote:
>> Hi Andrew,
>> 
>> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>> 
>>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>>> Hello All,
>>>> 
>>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>> 
>>>> I have received a 'pirq' from the hypervisor that is > 255.
>>>> 
>>>> map_irq.domid = DOMID_SELF;                                                   
>>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>>> map_irq.pirq = -1;                                                            
>>>> map_irq.bus = busnum;                                                         
>>>> map_irq.devfn = devfn;                                                        
>>>> map_irq.entry_nr = i;                                                         
>>>> map_irq.table_base = 0;                                              
>>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>>> irqno = map_irq.pirq;
>>>> 
>>>> I get:
>>>> irqno = 279, which is more than APIC_MAX_VECTOR (255).
>>>>
>>>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>>>>
>>>> Everything works well on xen-3.4, but physdev_map_pirq() has a different implementation in xen-4.2.
>>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>>> handed back will be the event channel on which the notification will
>>> arrive, and has nothing to do with regular IDT vectors.
>> It is for dom0.
>>
>> A full boot log with Xen debug info and DDI_DEBUG on illumos can be found here:
>> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>>
>> If possible, could you please explain how MSI irq translation to the APIC irq table works in xen-4.2?
>>
>> I see that in the Xen code we have a range from 16 to 784 for 4 CPUs for MSI IRQs (the irq_create() function),
>> but how do I correctly translate that to an APIC IRQ (a physical irq)?
> 
> Why do you need to know?
> 
> Xen controls all interrupts on the system.  Event channels which you
> register with Xen have no mapping/relation to local apic vectors.  Your
> device drivers should not expect to have an apic vector in their hand.
> 
> The reason behind this is that as virtual cpus get scheduled around
> physical cpus, Xen needs to move the interrupts from IDT to IDT at which
> point their vector will change.
Is it possible to receive the IRQ from Xen as an index into the APIC table?
I need it as an index into the apic_irq_table[] array for the local APIC code.
All the other functions use an index into apic_irq_table[] as the APIC IRQ.

I have the function apic_find_irq() for this.
It is not my implementation - it is the original code.

> ~Andrew

-Igor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 21:11:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 21:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2SJ8-0002KI-MJ; Sun, 12 Jan 2014 21:10:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2SJ7-0002KC-9B
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 21:10:57 +0000
Received: from [193.109.254.147:8910] by server-11.bemta-14.messagelabs.com id
	C3/32-20576-0E403D25; Sun, 12 Jan 2014 21:10:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389561054!10399923!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22475 invoked from network); 12 Jan 2014 21:10:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 21:10:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,648,1384300800"; d="scan'208";a="90015277"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Jan 2014 21:10:54 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Sun, 12 Jan 2014 16:10:53 -0500
Received: from [192.168.50.127] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Sun, 12 Jan 2014
	22:10:49 +0100
Message-ID: <52D304D4.3030704@citrix.com>
Date: Sun, 12 Jan 2014 21:10:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
	<52D2FC4B.2080509@citrix.com>
	<BA268314-38BF-4948-B69C-AD5A3916D5B9@gmail.com>
In-Reply-To: <BA268314-38BF-4948-B69C-AD5A3916D5B9@gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/01/2014 21:01, Igor Kozhukhov wrote:
> On Jan 13, 2014, at 12:34 AM, Andrew Cooper wrote:
>
>> On 12/01/2014 19:26, Igor Kozhukhov wrote:
>>> Hi Andrew,
>>>
>>> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>>>
>>>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>>>> Hello All,
>>>>>
>>>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>>>
>>>>> I have received a 'pirq' from the hypervisor that is > 255.
>>>>>
>>>>> map_irq.domid = DOMID_SELF;                                                   
>>>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>>>> map_irq.pirq = -1;                                                            
>>>>> map_irq.bus = busnum;                                                         
>>>>> map_irq.devfn = devfn;                                                        
>>>>> map_irq.entry_nr = i;                                                         
>>>>> map_irq.table_base = 0;                                              
>>>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>>>> irqno = map_irq.pirq;
>>>>>
>>>>> I get:
>>>>> irqno = 279, which is more than APIC_MAX_VECTOR (255).
>>>>>
>>>>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>>>>>
>>>>> Everything works well on xen-3.4, but physdev_map_pirq() has a different implementation in xen-4.2.
>>>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>>>> handed back will be the event channel on which the notification will
>>>> arrive, and has nothing to do with regular IDT vectors.
>>> It is for dom0.
>>>
>>> A full boot log with Xen debug info and DDI_DEBUG on illumos can be found here:
>>> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>>>
>>> If possible, could you please explain how MSI irq translation to the APIC irq table works in xen-4.2?
>>>
>>> I see that in the Xen code we have a range from 16 to 784 for 4 CPUs for MSI IRQs (the irq_create() function),
>>> but how do I correctly translate that to an APIC IRQ (a physical irq)?
>> Why do you need to know?
>>
>> Xen controls all interrupts on the system.  Event channels which you
>> register with Xen have no mapping/relation to local apic vectors.  Your
>> device drivers should not expect to have an apic vector in their hand.
>>
>> The reason behind this is that as virtual cpus get scheduled around
>> physical cpus, Xen needs to move the interrupts from IDT to IDT at which
>> point their vector will change.
> Is it possible to receive the IRQ from Xen as an index into the APIC table?

No.

> I need it as an index into the apic_irq_table[] array for the local APIC code.
> All the other functions use an index into apic_irq_table[] as the APIC IRQ.
>
> I have the function apic_find_irq() for this.
> It is not my implementation - it is the original code.

Nothing in a dom0 system should know/care about apic vectors.  Dom0
cannot use the IDT, nor can it even write to MSI/MSI-X configuration
registers (they get trapped and fixed-up by Xen).

Even if there were a hypercall to map an event channel back to an
apic-id/vector, it is possible that the data would be stale by the time
the vcpu ran again.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 12 23:16:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 23:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2UG2-0007rn-T5; Sun, 12 Jan 2014 23:15:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2UG1-0007ri-7v
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 23:15:53 +0000
Received: from [193.109.254.147:16666] by server-2.bemta-14.messagelabs.com id
	FC/B0-00361-82223D25; Sun, 12 Jan 2014 23:15:52 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389568550!10382799!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25462 invoked from network); 12 Jan 2014 23:15:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 23:15:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,649,1384300800"; d="scan'208";a="92159783"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 23:15:49 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 12 Jan 2014 18:15:48 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	<xen-devel@lists.xenproject.org>
Date: Sun, 12 Jan 2014 23:15:25 +0000
Message-ID: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- move the m2p_override part from gnttab_[un]map_refs to gntdev, where it is
  needed after mapping operations
- but move set_phys_to_machine from m2p_override to gnttab_[un]map_refs,
  because it is always needed
- update the function prototypes, as kmap_ops are no longer needed

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c                  |    4 --
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   61 ++++++++++++++++++++++++++--
 drivers/xen/grant-table.c           |   76 +++++++++++------------------------
 include/xen/grant_table.h           |    2 -
 5 files changed, 87 insertions(+), 71 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..3e78eb9 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -893,9 +893,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	set_page_private(page, mfn);
 	page->index = pfn_to_mfn(pfn);
 
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
-
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs =
@@ -962,7 +959,6 @@ int m2p_remove_override(struct page *page,
 	WARN_ON(!PagePrivate(page));
 	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..3a97342 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -250,6 +250,9 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token,
 static int map_grant_pages(struct grant_map *map)
 {
 	int i, err = 0;
+	bool lazy = false;
+	pte_t *pte;
+	unsigned long mfn;
 
 	if (!use_ptemod) {
 		/* Note: it could already be mapped */
@@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs(map->map_ops, map->pages, map->count);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < map->count; i++) {
+		/* Do not add to override if the map failed. */
+		if (map->map_ops[i].status)
+			continue;
+
+		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
+				(map->map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
+		}
+		err = m2p_add_override(mfn,
+				       map->pages[i],
+				       use_ptemod ? &map->kmap_ops[i] : NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
@@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
 static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 {
 	int i, err = 0;
+	bool lazy = false;
 
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
@@ -316,8 +349,28 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 	}
 
 	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+				map->pages + offset,
+				pages);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < pages; i++) {
+		err = m2p_remove_override((map->pages + offset)[i],
+					  use_ptemod ?
+					  &(map->kmap_ops + offset)[i] :
+					  NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..59019f2 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -881,11 +881,9 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
@@ -904,49 +902,35 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		for (i = 0; i < count; i++) {
 			if (map_ops[i].status)
 				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
+				return -ENOMEM;
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+	} else {
+		for (i = 0; i < count; i++) {
+			if (map_ops[i].status)
+				continue;
+			if (map_ops[i].flags & GNTMAP_contains_pte) {
+				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+					(map_ops[i].host_addr & ~PAGE_MASK));
+				mfn = pte_mfn(*pte);
+			} else {
+				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+			}
+			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
+							  FOREIGN_FRAME(mfn))))
+				return -ENOMEM;
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kmap_ops,
 		      struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
@@ -958,26 +942,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
+	} else {
+		for (i = 0; i < count; i++) {
+				set_phys_to_machine(page_to_pfn(pages[i]),
+						    pages[i]->index);
+		}
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..93d363a 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,10 +184,8 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot


From xen-devel-bounces@lists.xen.org Sun Jan 12 23:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 23:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2UK4-0008Rp-PC; Sun, 12 Jan 2014 23:20:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2UK3-0008Rk-CW
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 23:20:03 +0000
Received: from [85.158.143.35:49169] by server-1.bemta-4.messagelabs.com id
	EB/44-02132-22323D25; Sun, 12 Jan 2014 23:20:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389568800!4110485!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7879 invoked from network); 12 Jan 2014 23:20:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 23:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,649,1384300800"; d="scan'208";a="92160028"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 23:20:00 +0000
Received: from [10.68.14.27] (10.68.14.27) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 12 Jan 2014 18:20:00 -0500
Message-ID: <52D3230A.3020203@citrix.com>
Date: Sun, 12 Jan 2014 23:19:38 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>	<20140109153015.GF12164@zion.uk.xensource.com>	<52CFDAEC.5080708@citrix.com>	<20140110114534.GE29180@zion.uk.xensource.com>
	<52D01094.5060102@citrix.com> <52D0198C.6000600@citrix.com>
In-Reply-To: <52D0198C.6000600@citrix.com>
X-Originating-IP: [10.68.14.27]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 16:02, David Vrabel wrote:
> On 10/01/14 15:24, Zoltan Kiss wrote:
> If the grant table code doesn't provide the API calls you need you can
> either:
>
> a) add the new API as a prerequisite patch.
> b) use the existing API calls and live with the performance problem,
> until you can refactor the API later on.
>
> Adding a netback-specific hack isn't a valid option.
>
> David

Ok, I've sent in the patch which does a)

Zoli


From xen-devel-bounces@lists.xen.org Sun Jan 12 23:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jan 2014 23:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2UK4-0008Rp-PC; Sun, 12 Jan 2014 23:20:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2UK3-0008Rk-CW
	for xen-devel@lists.xenproject.org; Sun, 12 Jan 2014 23:20:03 +0000
Received: from [85.158.143.35:49169] by server-1.bemta-4.messagelabs.com id
	EB/44-02132-22323D25; Sun, 12 Jan 2014 23:20:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389568800!4110485!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7879 invoked from network); 12 Jan 2014 23:20:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 23:20:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,649,1384300800"; d="scan'208";a="92160028"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Jan 2014 23:20:00 +0000
Received: from [10.68.14.27] (10.68.14.27) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 12 Jan 2014 18:20:00 -0500
Message-ID: <52D3230A.3020203@citrix.com>
Date: Sun, 12 Jan 2014 23:19:38 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>	<1389139818-24458-3-git-send-email-zoltan.kiss@citrix.com>	<20140109153015.GF12164@zion.uk.xensource.com>	<52CFDAEC.5080708@citrix.com>	<20140110114534.GE29180@zion.uk.xensource.com>
	<52D01094.5060102@citrix.com> <52D0198C.6000600@citrix.com>
In-Reply-To: <52D0198C.6000600@citrix.com>
X-Originating-IP: [10.68.14.27]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v3 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/01/14 16:02, David Vrabel wrote:
> On 10/01/14 15:24, Zoltan Kiss wrote:
> If the grant table code doesn't provide the API calls you need you can
> either:
>
> a) add the new API as a prerequisite patch.
> b) use the existing API calls and live with the performance problem,
> until you can refactor the API later on.
>
> Adding a netback-specific hack isn't a valid option.
>
> David

OK, I've sent in the patch which does a).

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 00:20:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 00:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2VGM-0003HN-Ey; Mon, 13 Jan 2014 00:20:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2VGL-0003HI-9s
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 00:20:17 +0000
Received: from [85.158.139.211:28360] by server-7.bemta-5.messagelabs.com id
	75/27-04824-04133D25; Mon, 13 Jan 2014 00:20:16 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389572414!9303881!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11179 invoked from network); 13 Jan 2014 00:20:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 00:20:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,649,1384300800"; d="scan'208";a="92166428"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 00:20:13 +0000
Received: from [10.68.14.27] (10.68.14.27) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 12 Jan 2014 19:20:13 -0500
Message-ID: <52D33138.4090903@citrix.com>
Date: Mon, 13 Jan 2014 00:20:08 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, Jonathan
	Davies <Jonathan.Davies@citrix.com>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
	<52CDC45D.3050509@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0200C2A@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0200C2A@AMSPEX01CL01.citrite.net>
X-Originating-IP: [10.68.14.27]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 09:20, Paul Durrant wrote:
>> We are adding the skb to vif->rx_queue even when
>> xenvif_rx_ring_slots_available(vif, min_slots_needed) said there is no
>> space for that. Or am I missing something? Paul?
>>
> That's correct. Part of the flow control improvement was to get rid of needless packet drops. For your purposes, you basically need to avoid using the queuing discipline and take packets into netback's vif->rx_queue regardless of the state of the shared ring so that you can drop them if they get beyond a certain age. So, perhaps you should never stop the netif queue, place an upper limit on vif->rx_queue (either packet or byte count) and drop when that is exceeded (i.e. mimicking pfifo or bfifo internally).
>
How about this:
- when the timer fires, we first wake up the thread and tell it to drop 
all the packets in rx_queue
- start_xmit can then drain the qdisc queue into the device queue
- additionally, the RX thread should stop that timer when it was able to 
do some work
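
A rough user-space model of the drop policy Paul describes (an internal byte cap mimicking bfifo, plus an age-based purge when the timer fires) might look like the sketch below. All names, constants, and the queue layout are illustrative assumptions for discussion, not the actual xen-netback code:

```c
/* Hypothetical model of a byte-capped vif->rx_queue with age-based drop.
 * Illustrative only; not the real xen-netback data structures. */
#include <stddef.h>

#define QUEUE_MAX   16
#define BYTE_LIMIT  4096   /* upper limit on queued bytes, like bfifo */
#define TIMEOUT_MS  1000   /* drop packets older than this when the timer fires */

struct pkt {
    size_t len;        /* payload bytes */
    long   enq_ms;     /* enqueue timestamp, milliseconds */
};

struct rx_queue {
    struct pkt pkts[QUEUE_MAX];
    int    head, count;
    size_t bytes;
};

/* Take the packet regardless of the shared ring state, but drop it when
 * the internal byte cap would be exceeded (mimicking bfifo), rather than
 * stopping the netif queue. Returns 1 if queued, 0 if dropped. */
static int rxq_enqueue(struct rx_queue *q, size_t len, long now_ms)
{
    if (q->count == QUEUE_MAX || q->bytes + len > BYTE_LIMIT)
        return 0;
    int tail = (q->head + q->count) % QUEUE_MAX;
    q->pkts[tail].len = len;
    q->pkts[tail].enq_ms = now_ms;
    q->count++;
    q->bytes += len;
    return 1;
}

/* Timer handler: drop every queued packet that has aged past TIMEOUT_MS.
 * Returns the number of packets dropped. */
static int rxq_purge_stale(struct rx_queue *q, long now_ms)
{
    int dropped = 0;
    while (q->count && now_ms - q->pkts[q->head].enq_ms > TIMEOUT_MS) {
        q->bytes -= q->pkts[q->head].len;
        q->head = (q->head + 1) % QUEUE_MAX;
        q->count--;
        dropped++;
    }
    return dropped;
}
```

In this model the netif queue is never stopped; the cap and the purge together bound both the memory held and the age of any packet still waiting for ring space.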

Regards,

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 00:34:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 00:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2VTW-0003rR-Dv; Mon, 13 Jan 2014 00:33:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dev.kai.huang@gmail.com>) id 1W2VTV-0003rM-Kh
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 00:33:53 +0000
Received: from [85.158.143.35:51071] by server-2.bemta-4.messagelabs.com id
	40/E5-11386-17433D25; Mon, 13 Jan 2014 00:33:53 +0000
X-Env-Sender: dev.kai.huang@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389573231!8570480!1
X-Originating-IP: [209.85.216.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6930 invoked from network); 13 Jan 2014 00:33:52 -0000
Received: from mail-qc0-f176.google.com (HELO mail-qc0-f176.google.com)
	(209.85.216.176)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 00:33:52 -0000
Received: by mail-qc0-f176.google.com with SMTP id e16so3578236qcx.35
	for <xen-devel@lists.xen.org>; Sun, 12 Jan 2014 16:33:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=aAv2eG4CLZFtySRjdcHZqT0pJrgQb32rFmRqubxpxZg=;
	b=W+11sbkppqZWhPYumAzxPsrE1Mu+8I+pAcoz2/lsWrQ7DH8qbYJjN5Ux5hzpIkRFtZ
	RKUyP1u94dVTHOhj4UJiAdC3cdzckYud78gzBGOPE7nMDxwUgTPMkM9NM+rEpsFMSiOT
	RGpvt2EPCUtmHLLSnD8tSh5cpXqmQvw6d9ltHRjU0hQKPy2pTjlFfP4wntfhV+uB5W6L
	2/Ec2+zevHCAkOtLbVzpnIZpfQR45Nr4Oo/NuznXYDHp+cdrJt5D+sYn55+2Z/lvEdP7
	0YhZpm32a6NDz1frzlkBwoziVHes0jFXNdj6F33AfMIStt0/hDFRxLqkEPOBpUa76nxz
	GBWQ==
MIME-Version: 1.0
X-Received: by 10.49.35.112 with SMTP id g16mr33760237qej.13.1389573231234;
	Sun, 12 Jan 2014 16:33:51 -0800 (PST)
Received: by 10.229.168.137 with HTTP; Sun, 12 Jan 2014 16:33:51 -0800 (PST)
In-Reply-To: <1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
Date: Mon, 13 Jan 2014 08:33:51 +0800
Message-ID: <CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
From: Kai Huang <dev.kai.huang@gmail.com>
To: Adel Amani <adel.amani66@yahoo.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 12, 2014 at 1:40 AM, Adel Amani <adel.amani66@yahoo.com> wrote:
> Hello Mr Dunlap
> I really wonder if anybody knows the answer :-(
> Can you help me?!
> I want to find the "Downtime" and the number of "dirty pages" at the end of migration ...

What do you mean by "Downtime", and "in end migration"? Migration will
take several rounds to transfer dirty pages. If you want to know
exactly how many pages are dirty during each stage, you can get the
dirty page bitmap in xc_domain_save, which will be called for each
live migration stage.
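
Once you have a dirty-page bitmap from a given round, counting the dirty pages is just counting set bits. The sketch below assumes the common packed layout (one bit per pfn, packed into unsigned longs); that layout is an assumption of this example, not a statement about the exact libxc representation:

```c
/* Count dirty pages in a per-round dirty-page bitmap.
 * Assumed layout: bit i set <=> pfn i was dirtied this round. */
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long count_dirty_pages(const unsigned long *bitmap,
                                       unsigned long nr_pages)
{
    unsigned long i, dirty = 0;

    for (i = 0; i < nr_pages; i++)
        if (bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
            dirty++;

    return dirty;
}
```

Logging this count once per iteration inside xc_domain_save would give you the per-round dirty-page numbers you are after.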

Thanks,
-Kai
> please help me.
> Thanks.
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
> On Saturday, January 11, 2014 12:31 AM, Adel Amani <adel.amani66@yahoo.com>
> wrote:
> Hello
> I did one migration and traced it with the command "xentrace -D -e all -S
> 256 /test.trace"
> I can get the total migration time from the command "time migration -l ubuntu11
> 192.168.1.1", but I don't know how to get the "Downtime" and "dirty pages" from
> test.trace :-(  or in another way...
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 08:19:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 08:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2cjn-00054R-Mn; Mon, 13 Jan 2014 08:19:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2cjm-00054M-6I
	for xen-devel@lists.xensource.com; Mon, 13 Jan 2014 08:19:10 +0000
Received: from [193.109.254.147:48658] by server-4.bemta-14.messagelabs.com id
	92/5C-03916-D71A3D25; Mon, 13 Jan 2014 08:19:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389601146!10465003!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9811 invoked from network); 13 Jan 2014 08:19:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 08:19:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="90093251"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 08:19:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 03:19:05 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W2cjg-0006xP-Kg;
	Mon, 13 Jan 2014 08:19:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W2cjg-0005k1-JQ;
	Mon, 13 Jan 2014 08:19:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24366-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 13 Jan 2014 08:19:04 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24366: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24366 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10      fail pass in 24364
 test-amd64-amd64-xl-winxpsp3  7 windows-install             fail pass in 24364

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 24364
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)       broken like 24354
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10       fail  like 24354
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24364 like 24360

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24364 never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24364 never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24364 never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24364 never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 08:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 08:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2dDW-0006X5-Ej; Mon, 13 Jan 2014 08:49:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xisisu@gmail.com>) id 1W2Uue-0001VM-Q6
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 23:57:53 +0000
Received: from [85.158.137.68:45523] by server-2.bemta-3.messagelabs.com id
	93/E0-17329-00C23D25; Sun, 12 Jan 2014 23:57:52 +0000
X-Env-Sender: xisisu@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389571071!7521296!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9538 invoked from network); 12 Jan 2014 23:57:51 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 23:57:51 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so409224wib.4
	for <xen-devel@lists.xen.org>; Sun, 12 Jan 2014 15:57:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=A0rl17UHzkm7fxWC2t2tTXKMKgI30yRRbIkgEGB8S+I=;
	b=DTOq4QZjwgjqisxETaF3DpjwMfqRniL8h7LiED9RwuIXKcbE8ZkOWBiAAKw6wazUTm
	835tNSfobvJ8TbdeTgKtxqjpNZznOLd1ovAzgDrfyHagm0MiM2Wfr78NWKG7vOaT4OHh
	yyxrgiBJHyXKkAbr70IdyTttB4xfZkBGqo7CovyYu+FImkUGBDP+OqQc6lpVfvQZGyr5
	xBIowkzZ5tCAViWXLuIXc/X8ncZrRo09o0LLBghhPTX3BZKEqj90gSYWgM71HnUPEQ5H
	ADBEnFjTJWqDQOPi4t46vYxGbaUXp5odsVArBlii47xEsAx+u4rLHLWGecTxzbLbGbuP
	82Ig==
MIME-Version: 1.0
X-Received: by 10.194.109.68 with SMTP id hq4mr19325084wjb.12.1389571070958;
	Sun, 12 Jan 2014 15:57:50 -0800 (PST)
Received: by 10.227.147.194 with HTTP; Sun, 12 Jan 2014 15:57:50 -0800 (PST)
Date: Sun, 12 Jan 2014 17:57:50 -0600
Message-ID: <CAPqOm-omTnN01FLrO8Lq40ZXCDeiOvRuNvYVCMSo=oV4sqhA5w@mail.gmail.com>
From: Sisu Xi <xisisu@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, 
	Dario Faggioli <dario.faggioli@citrix.com>
X-Mailman-Approved-At: Mon, 13 Jan 2014 08:49:53 +0000
Subject: [Xen-devel] Change VCPU schedulers in XenServer? And recompile Xen
	in XenServer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4974326468031832021=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4974326468031832021==
Content-Type: multipart/alternative; boundary=089e0102e6da1323f104efcebabe

--089e0102e6da1323f104efcebabe
Content-Type: text/plain; charset=UTF-8

Hi, George and Dario:

I recently played with XenServer and want to integrate RT-Xen into it. Do you
know how I can change the VCPU scheduler in XenServer?

Also, is there a tutorial for recompiling Xen in XenServer? Would there be a
conflict between the xl and xe tools?

Thanks very much!

Sisu

-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

--089e0102e6da1323f104efcebabe--



--===============4974326468031832021==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 08:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 08:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2dDW-0006XD-Qs; Mon, 13 Jan 2014 08:49:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W2d0M-0005eK-1f
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 08:36:18 +0000
Received: from [85.158.139.211:58641] by server-14.bemta-5.messagelabs.com id
	11/A9-24200-185A3D25; Mon, 13 Jan 2014 08:36:17 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389602175!9367806!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15740 invoked from network); 13 Jan 2014 08:36:15 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-9.tower-206.messagelabs.com with SMTP;
	13 Jan 2014 08:36:15 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 19F4D1941182;
	Mon, 13 Jan 2014 09:36:13 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 983E41941183;
	Mon, 13 Jan 2014 09:36:12 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id Xd2J9zkSSmGV; Mon, 13 Jan 2014 09:36:11 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id B73B81941182;
	Mon, 13 Jan 2014 09:36:09 +0100 (CET)
Date: Mon, 13 Jan 2014 09:36:11 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: Russell Pavlicek <russell.pavlicek@citrix.com>
Message-ID: <863171993.2852.1389602171164.open-xchange@panna.pingst.univention.de>
In-Reply-To: <55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
References: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
	<55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Mon, 13 Jan 2014 08:49:53 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8063289816586940429=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8063289816586940429==
Content-Type: multipart/alternative; 
	boundary="----=_Part_2851_516175269.1389602170994"

------=_Part_2851_516175269.1389602170994
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello Russell,

ah, I understand, thanks a lot. I had already visited that page once but
didn't realize it is the new one. Fine, I will surely register for an
account and list our product there.

It is good to be able to change the listing on my own when we have updates
or something.

Thanks again,

Maren

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de <mailto:abatielos@univention.de>
Tel.: +49 421 22232-68
Fax: +49 421 22232-99

<https://www.univention.de>
<http://gplus.to/Univention>
<http://www.facebook.com/univention>
<https://twitter.com/univention>
<http://www.youtube.com/univentionvideo>

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax No.: 71-597-02876

> Russell Pavlicek <russell.pavlicek@citrix.com> wrote on 9 January 2014 at
> 15:52:
>
> Maren,
>
> Sorry for the confusion. That's the old system (we will need to mark it as
> such).
>
> To list your project, product, or service, simply go to XenProject.org.
> Register for an account, if you don't have one already. Click on Directory >
> Ecosystem. Press the button labeled "Add your listing here". Fill out the
> form and submit.
>
> Once your listing is approved (generally within a few hours), it will be
> listed in the XenProject.org Ecosystem Directory. What's more, you will have
> control over the listing content, so as your product changes or expands, you
> can modify your listing to keep it up to date.
>
> If you have any questions or difficulty, please contact me. I'd be happy to
> help!
>
> Russ Pavlicek
>
> Xen Project Evangelist, Citrix Systems
>
> Home Office: +1-301-829-5327
>
> Mobile: +1-240-397-0199
>
> UK VoIP: +44 1223 852 894
>
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Maren Abatielos
> Sent: Thursday, January 09, 2014 7:37 AM
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] Xen Management Tools - new project to be listed
>
> Hello,
>
> I would like to have our product UCS Virtual Machine Manager (UVMM) listed
> in the section "Management tools and interfaces" on the following page:
>
> <http://wiki.xen.org/wiki/Xen_Management_Tools>
>
> UVMM is a web-based virtualization management tool for Xen and KVM. It is
> included in our Linux server operating system Univention Corporate Server,
> and so is Xen.
>
> Could you please add it using the following URL for the listing:
>
> <http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/>
>
> Thanks a lot,
>
> Maren Abatielos
>
> ---
> Marketing
>
> Univention GmbH
> be open.
> Mary-Somerville-Str.1
> 28359 Bremen
>
> E-Mail: abatielos@univention.de <mailto:abatielos@univention.de>
> Tel.: +49 421 22232-68
> Fax: +49 421 22232-99
>
> <https://www.univention.de>
> <http://gplus.to/Univention>
> <http://www.facebook.com/univention>
> <https://twitter.com/univention>
> <http://www.youtube.com/univentionvideo>
>
> Managing director: Peter H. Ganten
> HRB 20755 Local Court Bremen
> Tax No.: 71-597-02876


Kind regards
Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel.: +49 421 22232-68
Fax: +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax No.: 71-597-02876
------=_Part_2851_516175269.1389602170994--



--===============8063289816586940429==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 08:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 08:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2dDW-0006X5-Ej; Mon, 13 Jan 2014 08:49:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xisisu@gmail.com>) id 1W2Uue-0001VM-Q6
	for xen-devel@lists.xen.org; Sun, 12 Jan 2014 23:57:53 +0000
Received: from [85.158.137.68:45523] by server-2.bemta-3.messagelabs.com id
	93/E0-17329-00C23D25; Sun, 12 Jan 2014 23:57:52 +0000
X-Env-Sender: xisisu@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389571071!7521296!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9538 invoked from network); 12 Jan 2014 23:57:51 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Jan 2014 23:57:51 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so409224wib.4
	for <xen-devel@lists.xen.org>; Sun, 12 Jan 2014 15:57:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=A0rl17UHzkm7fxWC2t2tTXKMKgI30yRRbIkgEGB8S+I=;
	b=DTOq4QZjwgjqisxETaF3DpjwMfqRniL8h7LiED9RwuIXKcbE8ZkOWBiAAKw6wazUTm
	835tNSfobvJ8TbdeTgKtxqjpNZznOLd1ovAzgDrfyHagm0MiM2Wfr78NWKG7vOaT4OHh
	yyxrgiBJHyXKkAbr70IdyTttB4xfZkBGqo7CovyYu+FImkUGBDP+OqQc6lpVfvQZGyr5
	xBIowkzZ5tCAViWXLuIXc/X8ncZrRo09o0LLBghhPTX3BZKEqj90gSYWgM71HnUPEQ5H
	ADBEnFjTJWqDQOPi4t46vYxGbaUXp5odsVArBlii47xEsAx+u4rLHLWGecTxzbLbGbuP
	82Ig==
MIME-Version: 1.0
X-Received: by 10.194.109.68 with SMTP id hq4mr19325084wjb.12.1389571070958;
	Sun, 12 Jan 2014 15:57:50 -0800 (PST)
Received: by 10.227.147.194 with HTTP; Sun, 12 Jan 2014 15:57:50 -0800 (PST)
Date: Sun, 12 Jan 2014 17:57:50 -0600
Message-ID: <CAPqOm-omTnN01FLrO8Lq40ZXCDeiOvRuNvYVCMSo=oV4sqhA5w@mail.gmail.com>
From: Sisu Xi <xisisu@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, 
	Dario Faggioli <dario.faggioli@citrix.com>
X-Mailman-Approved-At: Mon, 13 Jan 2014 08:49:53 +0000
Subject: [Xen-devel] Change VCPU schedulers in XenServer? And recompile Xen
	in XenServer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4974326468031832021=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4974326468031832021==
Content-Type: multipart/alternative; boundary=089e0102e6da1323f104efcebabe

--089e0102e6da1323f104efcebabe
Content-Type: text/plain; charset=UTF-8

Hi, George and Dario:

I recently played with XenServer and want to integrate RT-Xen into it. Do you
know how I can change the VCPU schedulers in XenServer?

Also, is there a tutorial for recompiling Xen in XenServer? Will there be a
conflict between the xl and xe tools?

Thanks very much!

Sisu

-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

--089e0102e6da1323f104efcebabe--


--===============4974326468031832021==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4974326468031832021==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 08:50:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 08:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2dDW-0006XD-Qs; Mon, 13 Jan 2014 08:49:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abatielos@univention.de>) id 1W2d0M-0005eK-1f
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 08:36:18 +0000
Received: from [85.158.139.211:58641] by server-14.bemta-5.messagelabs.com id
	11/A9-24200-185A3D25; Mon, 13 Jan 2014 08:36:17 +0000
X-Env-Sender: abatielos@univention.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389602175!9367806!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15740 invoked from network); 13 Jan 2014 08:36:15 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-9.tower-206.messagelabs.com with SMTP;
	13 Jan 2014 08:36:15 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 19F4D1941182;
	Mon, 13 Jan 2014 09:36:13 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 983E41941183;
	Mon, 13 Jan 2014 09:36:12 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id Xd2J9zkSSmGV; Mon, 13 Jan 2014 09:36:11 +0100 (CET)
Received: from panna.pingst.univention.de (unknown [192.168.5.28])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id B73B81941182;
	Mon, 13 Jan 2014 09:36:09 +0100 (CET)
Date: Mon, 13 Jan 2014 09:36:11 +0100 (CET)
From: Maren Abatielos <abatielos@univention.de>
To: Russell Pavlicek <russell.pavlicek@citrix.com>
Message-ID: <863171993.2852.1389602171164.open-xchange@panna.pingst.univention.de>
In-Reply-To: <55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
References: <492325672.922.1389270991329.open-xchange@panna.pingst.univention.de>
	<55E78A57290FB64FA0D3CF672F9F3DA214487E@SJCPEX01CL03.citrite.net>
MIME-Version: 1.0
X-Priority: 3
Importance: Medium
X-Mailer: Open-Xchange Mailer v7.4.0-Rev20
X-Mailman-Approved-At: Mon, 13 Jan 2014 08:49:53 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Maren Abatielos <abatielos@univention.de>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8063289816586940429=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8063289816586940429==
Content-Type: multipart/alternative; 
	boundary="----=_Part_2851_516175269.1389602170994"

------=_Part_2851_516175269.1389602170994
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hello Russell,

Ah, I understand, thanks a lot. I had already visited that page once but
didn't realize it was the new one. Fine, I will surely register for an
account and list our product there.

Good to be able to change it on my own when we have updates or something.

Thanks again,

Maren

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de <mailto:abatielos@univention.de>
Tel. : +49 421 22232-68
Fax : +49 421 22232-99

<https://www.univention.de>

<http://gplus.to/Univention>

<http://www.facebook.com/univention>

<https://twitter.com/univention>

<http://www.youtube.com/univentionvideo>


Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax-No.: 71-597-02876

> Russell Pavlicek <russell.pavlicek@citrix.com> wrote on 9 January 2014 at
> 15:52:
>
> Maren,
>
> Sorry for the confusion. That's the old system (we will need to mark it as
> such).
>
> To list your project, product, or service, simply go to XenProject.org.
> Register for an account, if you don't have one already. Click on
> Directory > Ecosystem. Press the button labeled "Add your listing here".
> Fill out the form and submit.
>
> Once your listing is approved (generally within a few hours), it will be
> listed in the XenProject.org Ecosystem Directory. What's more, you will
> have control over the listing content, so as your product changes or
> expands, you can modify your listing to keep it up to date.
>
> If you have any questions or difficulty, please contact me. I'd be happy
> to help!
>
> Russ Pavlicek
> Xen Project Evangelist, Citrix Systems
> Home Office: +1-301-829-5327
> Mobile: +1-240-397-0199
> UK VoIP: +44 1223 852 894
>
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Maren Abatielos
> Sent: Thursday, January 09, 2014 7:37 AM
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] Xen Management Tools - new project to be listed
>
> Hello,
>
> I would like to have our product UCS Virtual Machine Manager (UVMM)
> listed in the section "Management tools and interfaces" on the following
> page:
>
> <http://wiki.xen.org/wiki/Xen_Management_Tools>
>
> UVMM is a web-based virtualization management tool for Xen and KVM. It is
> included in our Linux server operating system Univention Corporate
> Server, as is Xen.
>
> Could you please do that using the following URL to be listed:
>
> <http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/>
>
> Thanks a lot,
>
> Maren Abatielos
>
> ---
> Marketing
>
> Univention GmbH
> be open.
> Mary-Somerville-Str.1
> 28359 Bremen
>
> E-Mail: abatielos@univention.de <mailto:abatielos@univention.de>
> Tel.: +49 421 22232-68
> Fax: +49 421 22232-99
>
> <https://www.univention.de>
> <http://gplus.to/Univention>
> <http://www.facebook.com/univention>
> <https://twitter.com/univention>
> <http://www.youtube.com/univentionvideo>
>
> Managing director: Peter H. Ganten
> HRB 20755 Local Court Bremen
> Tax-No.: 71-597-02876


Best regards
Maren Abatielos

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str.1
28359 Bremen

E-Mail: abatielos@univention.de
Tel. : +49 421 22232-68
Fax : +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax-No.: 71-597-02876
------=_Part_2851_516175269.1389602170994--


--===============8063289816586940429==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8063289816586940429==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 09:31:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 09:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2dqz-0000Cu-Rp; Mon, 13 Jan 2014 09:30:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W2dqy-0000Cp-Bb
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 09:30:40 +0000
Received: from [85.158.143.35:11759] by server-1.bemta-4.messagelabs.com id
	09/50-02132-F32B3D25; Mon, 13 Jan 2014 09:30:39 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389605438!11306612!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28426 invoked from network); 13 Jan 2014 09:30:39 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 09:30:39 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389605438; l=1456;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=VAX97ykEbNyerDViA17we40x7cg=;
	b=sxCtTpdge21If/iz0XsJU7FGhGLDPnWrXAcysbx5sQDRdb00O2tXGwyayINuR6qDhFf
	Rb/fLiWafTIupEQ64CSeawYiKAildJHs/G0MKiJpfbaMjMI2pN7Z8FA2ALQWBOb2Tu2S8
	axatsUIWEHkTTfgz5ohz77RvrSd2q8Wg7pk=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWMTlsoxCss+iCqhAw1vkA==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-094-223-159-122.pools.arcor-ip.net [94.223.159.122])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id z03d9eq0D9UXth3 ; 
	Mon, 13 Jan 2014 10:30:33 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 45BAF5025A; Mon, 13 Jan 2014 10:30:32 +0100 (CET)
Date: Mon, 13 Jan 2014 10:30:32 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20140113093032.GA13919@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D073F0.5020400@oracle.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, Boris Ostrovsky wrote:

> I don't know how the discard code works, but it seems to me that if you
> pass, for example, zero as discard_granularity (which may happen if
> xenbus_gather() fails) then blkdev_issue_discard() in the backend will set
> the granularity to 1 and continue with the discard. This may not be what
> the guest admin requested. And he won't know about this, since no error
> message is printed anywhere.

If I understand the code using granularity/alignment correctly, both are
optional properties. So if the granularity is just 1, it means byte
ranges, which is fine if the backend uses FALLOC_FL_PUNCH_HOLE. Also,
neither property is admin-controlled; for phy the blkback driver just
passes on what it gets from the underlying hardware.

> Similarly, if xenbus_gather("discard-secure") fails, I think the code will
> assume that secure discard has not been requested. I don't know what
> security implications this will have, but it sounds bad to me.

There are no security implications; if the backend does not advertise it,
then it's not present.

After poking around some more, it seems that blkif.h is the spec; it does
not say that the three properties are optional. Also, the backend drivers
in sles11sp2 and mainline create all three properties unconditionally. So
I think a better change is to expect all three properties in the frontend.
I will send another version of the patch.


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 09:48:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 09:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2e8C-0000xH-Lu; Mon, 13 Jan 2014 09:48:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2e8A-0000tf-Rd
	for xen-devel@lists.xensource.com; Mon, 13 Jan 2014 09:48:27 +0000
Received: from [193.109.254.147:7336] by server-15.bemta-14.messagelabs.com id
	D9/E4-22186-A66B3D25; Mon, 13 Jan 2014 09:48:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389606504!8143190!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11019 invoked from network); 13 Jan 2014 09:48:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 09:48:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="90109057"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 09:48:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 04:48:23 -0500
Message-ID: <1389606502.8187.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<stefano.stabellini@citrix.com>, Julien Grall <julien.grall@citrix.com>
Date: Mon, 13 Jan 2014 09:48:22 +0000
In-Reply-To: <osstest-24366-mainreport@xen.org>
References: <osstest-24366-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24366: tolerable trouble:
 broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 08:19 +0000, xen.org wrote:
> flight 24366 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/
> [...]
> Tests which did not succeed, but are not blocking:
>  test-armhf-armhf-xl           9 guest-start                  fail   never pass
AKA
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/info.html

We are getting there (slowly); the new failure after making EXT4
available is:
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5---var-log-xen-console-guest-debian.guest.osstest.log
        [    0.087330] Registering SWP/SWPB emulation handler
        [    0.089780] blkfront: xvda2: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
        [    0.099255] blkfront: xvda1: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
        [    0.179907] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
        [    0.193307] List of all partitions:
        [    0.193325] ca02         4194304 xvda2  driver: vbd
        [    0.193340] ca01         1024000 xvda1  driver: vbd
        [    0.193352] No filesystem could mount root, tried:  ext3 ext2 ext4
        [    0.193376] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(202,2)
        
The disk is on LVM and
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5-output-xenstore-ls_-fp
shows that front and backend are in state 4/connected. I don't see any
smoking guns in the logs.

Julien/Stefano -- do you see anything?

The filesystem has been mounted in dom0 (for debootstrap) which is
running the same kernel binary.

The associated kernel build job, including binaries and .config is at
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/build-amd64-pvops/info.html
and the Xen build is at 
http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/build-armhf/info.html

I also have marilith-n4 in a similar state, let me know if you want to
have a poke at it.

In that environment I tried making root xvda and swap xvdb, but that
didn't help. I also tried various rootflags= debug options, but got no
extra info.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 09:49:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 09:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2e9G-0001IP-3r; Mon, 13 Jan 2014 09:49:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2e9F-0001II-Az
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 09:49:33 +0000
Received: from [193.109.254.147:10241] by server-6.bemta-14.messagelabs.com id
	27/A6-14958-CA6B3D25; Mon, 13 Jan 2014 09:49:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389606570!8960681!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22462 invoked from network); 13 Jan 2014 09:49:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 09:49:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="92239648"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 09:49:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 04:49:29 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W2e9B-0007HY-Gr;
	Mon, 13 Jan 2014 09:49:29 +0000
Message-ID: <52D3B6A9.2090003@citrix.com>
Date: Mon, 13 Jan 2014 09:49:29 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
"x86: map portion of kexec crash area that is within the direct map
area" to staging-4.3 ASAP, as following the backport of
8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
kernel area", kexec loading is broken in exactly the same way as it was
in staging.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 09:53:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 09:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2eCj-0001Ti-08; Mon, 13 Jan 2014 09:53:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W2eCh-0001TY-JA
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 09:53:07 +0000
Received: from [85.158.139.211:48002] by server-2.bemta-5.messagelabs.com id
	71/9D-29392-287B3D25; Mon, 13 Jan 2014 09:53:06 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389606784!9374458!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6912 invoked from network); 13 Jan 2014 09:53:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 09:53:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="90110006"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 09:53:04 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 13 Jan 2014 04:53:03 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Mon, 13 Jan 2014 10:53:01 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "Jonathan
	Davies" <Jonathan.Davies@citrix.com>
Thread-Topic: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX path
Thread-Index: AQHPDLl4Wu4tZBxeyUWvbT3V/hLkQ5p8HC/wgAWkCACAALChMA==
Date: Mon, 13 Jan 2014 09:53:00 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0204455@AMSPEX01CL01.citrite.net>
References: <1389139818-24458-1-git-send-email-zoltan.kiss@citrix.com>
	<1389139818-24458-9-git-send-email-zoltan.kiss@citrix.com>
	<52CDC45D.3050509@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0200C2A@AMSPEX01CL01.citrite.net>
	<52D33138.4090903@citrix.com>
In-Reply-To: <52D33138.4090903@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH net-next v3 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Zoltan Kiss
> Sent: 13 January 2014 00:20
> To: Paul Durrant; Ian Campbell; Wei Liu; xen-devel@lists.xenproject.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Jonathan Davies
> Subject: Re: [PATCH net-next v3 8/9] xen-netback: Timeout packets in RX
> path
> 
> On 09/01/14 09:20, Paul Durrant wrote:
> >> We are adding the skb to vif->rx_queue even when
> >> xenvif_rx_ring_slots_available(vif, min_slots_needed) said there is no
> >> space for that. Or am I missing something? Paul?
> >>
> > That's correct. Part of the flow control improvement was to get rid of
> needless packet drops. For your purposes, you basically need to avoid using
> the queuing discipline and take packets into netback's vif->rx_queue
> regardless of the state of the shared ring so that you can drop them if they
> get beyond a certain age. So, perhaps you should never stop the netif
> queue, place an upper limit on vif->rx_queue (either packet or byte count)
> and drop when that is exceeded (i.e. mimicking pfifo or bfifo internally).
> >
> How about this:
> - when the timer fires, we first wake up the thread and tell it to drop
> all the packets in rx_queue
> - start_xmit can then drain the qdisc queue into the device queue
> - additionally, the RX thread should stop that timer when it was able to
> do some work
> 

Yes, you could do it that way.

  Paul
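Paul's earlier suggestion (never stop the netif queue; put an internal packet- or byte-count limit on vif->rx_queue and drop when it is exceeded, mimicking pfifo/bfifo) can be sketched roughly as below. Names and the byte-count policy are illustrative, not the actual netback code.

```c
#include <stddef.h>

/* Hypothetical internal bound on the guest RX queue, bfifo-style:
 * accept packets regardless of the shared-ring state, but drop the
 * incoming packet once a byte-count limit would be exceeded, instead
 * of stopping the netif queue. */
struct rx_queue {
    size_t bytes;           /* bytes currently queued */
    size_t limit;           /* upper bound on queued bytes */
    unsigned long dropped;  /* packets dropped at the limit */
};

/* Returns 1 if the packet was queued, 0 if it was dropped. */
static int rx_enqueue(struct rx_queue *q, size_t pkt_len)
{
    if (q->bytes + pkt_len > q->limit) {
        q->dropped++;       /* mimic bfifo: drop rather than block */
        return 0;
    }
    q->bytes += pkt_len;
    return 1;
}
```

A pfifo-style variant would count packets instead of bytes; either way the drop happens inside netback, so aged packets can also be purged by the timer without the qdisc ever seeing backpressure.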

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:10:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2eTC-00036W-TB; Mon, 13 Jan 2014 10:10:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2eTB-00036P-8t
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 10:10:09 +0000
Received: from [85.158.137.68:18161] by server-10.bemta-3.messagelabs.com id
	72/66-23989-08BB3D25; Mon, 13 Jan 2014 10:10:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389607806!8789332!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1096 invoked from network); 13 Jan 2014 10:10:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:10:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="92243696"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 10:10:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:10:05 -0500
Message-ID: <1389607803.8187.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 13 Jan 2014 10:10:03 +0000
In-Reply-To: <20140110184151.GA20232@pegasus.dumpdata.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	jun.nakajima@intel.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
	disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 13:41 -0500, Konrad Rzeszutek Wilk wrote:
> Hey,
> 
> It occured to me that when booting Linux HVM guests, the 1GB hugepage
> cpuid is not exposed. One needs to manually do:
> 
>  cpuid= ['0x80000001:edx=xxxxx1xxxxxxxxxxxxxxxxxxxxxxxxxx'] 
> or 
>  cpuid="host,page1gb=k"

> (see http://lists.xenproject.org/archives/html/xen-users/2013-09/msg00083.html
> for some discussion about it).
> 
> The machine I am using is supporting it:
> 
>  (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB 

The HAP page sizes are unrelated to the page sizes supported in "first
stage" paging, although I suppose it is unlikely for EPT to support 1GB
pages without the primary translation supporting them.

> and this is the only way to get hugepages working
> (for reference here is what I have in the guest command line:
> "default_hugepagesz=1G hugepagesz=1G hugepages=2 hugepagesz=2M
> hugepages=100" and without this cpuid I get:
>  hugepagesz: Unsupported page size 1024 M 
> 
> because the 1GB flag is not exposed.
> 
> I was wondering why we don't check the host cpuid and set the 1GB
> flag (pdpe1gb) for those cases? It seems that all the pieces are in
> place for this to work?

The hypervisor currently contains:
        /* Hide 1GB-superpage feature if we can't emulate it. */
        if (!hvm_pse1gb_supported(d))
            *edx &= ~cpufeat_mask(X86_FEATURE_PAGE1GB);

So I think it would be safe to enable this by default in userspace and
let the hypervisor clear it if unsafe. But TBH the whole cpuid policy is
a mystery to me. Can you cc some likely hypervisor-side people on your
patch for confirmation, please?
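
For reference, hiding the flag is a plain bit mask: pdpe1gb is bit 26 of
CPUID leaf 0x80000001 EDX. Below is a minimal sketch of the masking the
hypervisor snippet above performs; the macro names and the helper
function are illustrative stand-ins, not the actual Xen source:

```c
#include <assert.h>
#include <stdint.h>

/* pdpe1gb (1GB pages) lives in bit 26 of CPUID.0x80000001:EDX. */
#define PAGE1GB_BIT  26
#define PAGE1GB_MASK (UINT32_C(1) << PAGE1GB_BIT)

/* Illustrative stand-in for the hypervisor policy quoted above: clear
 * the 1GB-superpage feature bit when it cannot be offered to the guest,
 * leave all other EDX bits untouched. */
static uint32_t filter_page1gb(uint32_t edx, int pse1gb_supported)
{
    if (!pse1gb_supported)
        edx &= ~PAGE1GB_MASK;
    return edx;
}
```

With a filter like this in the hypervisor, userspace could set the bit
unconditionally, as suggested above, and rely on the hypervisor to clear
it when unsafe.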

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:14:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:14:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2eXJ-0003HS-L3; Mon, 13 Jan 2014 10:14:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W2eXH-0003HL-Jo
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 10:14:23 +0000
Received: from [85.158.143.35:9017] by server-3.bemta-4.messagelabs.com id
	D9/AA-32360-E7CB3D25; Mon, 13 Jan 2014 10:14:22 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389608061!8648750!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20608 invoked from network); 13 Jan 2014 10:14:22 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 10:14:22 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389608061; l=2330;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=0J22Sw6IhoqYGDBQ1U3c6GsSTpI=;
	b=kuqL4QmkrUPdu5ZZiV4BGTJegK1Jb+WAuRDw0i3K2zFWBCZG452W5WcPzCZrMEwYGpZ
	7ayyqDK5VIwRfC95qpYhhYnb97YQqkM234mTzVEYNpoZyVFvPeuOLd/PvZgGOB907CNv+
	sRFP/6beQSqexl0ikggVAEuVMugIX14XIg0=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWMTlsoxCss+iCqhAw1vkA==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-094-223-159-122.pools.arcor-ip.net [94.223.159.122])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id z03d9eq0DAEFucP ; 
	Mon, 13 Jan 2014 11:14:15 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 4F6775025A; Mon, 13 Jan 2014 11:14:14 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon, 13 Jan 2014 11:14:12 +0100
Message-Id: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In its initial implementation, a check for "type" was added, but only
the phy and file values are handled. This breaks advertised discard
support for other type values such as qdisk.

Fix and simplify this function: remove the check for "type", as the
frontend is not supposed to care about this backend detail. Expect
backends to provide the discard-alignment, discard-granularity and
discard-secure properties, because the interface specification in
blkif.h does not list these properties as optional.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 drivers/block/xen-blkfront.c | 33 +++++++++------------------------
 1 file changed, 9 insertions(+), 24 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..6ef63eb 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1634,37 +1634,22 @@ blkfront_closing(struct blkfront_info *info)
 
 static void blkfront_setup_discard(struct blkfront_info *info)
 {
-	int err;
-	char *type;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
 	unsigned int discard_secure;
 
-	type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
-	if (IS_ERR(type))
+	if (xenbus_gather(XBT_NIL, info->xbdev->otherend,
+		"discard-granularity", "%u", &discard_granularity,
+		"discard-alignment", "%u", &discard_alignment,
+		"discard-secure", "%u", &discard_secure,
+		NULL))
 		return;
 
-	info->feature_secdiscard = 0;
-	if (strncmp(type, "phy", 3) == 0) {
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			"discard-granularity", "%u", &discard_granularity,
-			"discard-alignment", "%u", &discard_alignment,
-			NULL);
-		if (!err) {
-			info->feature_discard = 1;
-			info->discard_granularity = discard_granularity;
-			info->discard_alignment = discard_alignment;
-		}
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			    "discard-secure", "%d", &discard_secure,
-			    NULL);
-		if (!err)
-			info->feature_secdiscard = discard_secure;
-
-	} else if (strncmp(type, "file", 4) == 0)
-		info->feature_discard = 1;
+	info->discard_granularity = discard_granularity;
+	info->discard_alignment = discard_alignment;
+	info->feature_secdiscard = !!discard_secure;
 
-	kfree(type);
+	info->feature_discard = 1;
 }
 
 static int blkfront_setup_indirect(struct blkfront_info *info)
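
The single-call pattern the patch switches to, a NULL-terminated varargs
list of (key, scanf-style format, result pointer) triples that fails as
a whole if any key is missing, can be sketched against a toy key/value
store. Here toy_lookup and toy_gather are illustrative stand-ins, not
the real xenbus API:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Toy key/value store standing in for the backend's xenstore directory. */
static const char *toy_lookup(const char *key)
{
    if (strcmp(key, "discard-granularity") == 0) return "4096";
    if (strcmp(key, "discard-alignment") == 0)   return "0";
    if (strcmp(key, "discard-secure") == 0)      return "1";
    return NULL; /* property absent */
}

/* Minimal sketch of the xenbus_gather() calling convention: walk the
 * NULL-terminated (key, format, destination) triples, failing the whole
 * gather if any key is missing or does not parse. Returns 0 on success. */
static int toy_gather(const char *key, ...)
{
    va_list ap;
    int err = 0;

    va_start(ap, key);
    while (key != NULL) {
        const char *fmt = va_arg(ap, const char *);
        void *dst = va_arg(ap, void *);
        const char *val = toy_lookup(key);

        if (val == NULL || sscanf(val, fmt, dst) != 1) {
            err = -1;
            break;
        }
        key = va_arg(ap, const char *); /* NULL terminates the list */
    }
    va_end(ap);
    return err;
}
```

Collapsing the three reads into one all-or-nothing gather is what lets
the patch drop the per-type special cases in blkfront_setup_discard.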

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:19:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:19:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ebx-0003s0-DY; Mon, 13 Jan 2014 10:19:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2ebv-0003qG-Ba
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 10:19:11 +0000
Received: from [85.158.143.35:36734] by server-2.bemta-4.messagelabs.com id
	A8/3C-11386-E9DB3D25; Mon, 13 Jan 2014 10:19:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389608348!11280427!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17908 invoked from network); 13 Jan 2014 10:19:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:19:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="90115689"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 10:19:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:19:07 -0500
Message-ID: <1389608346.13654.0.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sisu Xi <xisisu@gmail.com>
Date: Mon, 13 Jan 2014 10:19:06 +0000
In-Reply-To: <CAPqOm-omTnN01FLrO8Lq40ZXCDeiOvRuNvYVCMSo=oV4sqhA5w@mail.gmail.com>
References: <CAPqOm-omTnN01FLrO8Lq40ZXCDeiOvRuNvYVCMSo=oV4sqhA5w@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Change VCPU schedulers in XenServer? And recompile
 Xen in XenServer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-12 at 17:57 -0600, Sisu Xi wrote:
> Hi, George and Dario:
> 
> 
> I recently played with XenServer and want to integrate RT-Xen into it.
> Do you know how I can change the VCPU schedulers in XenServer?
> 
> 
> Also, is there a tutorial for recompiling Xen in XenServer? Will there
> be a conflict between the xl and xe tools?

You should ask these XenServer specific questions on the xenserver list,
see www.xenserver.org.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:19:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ec7-0003tD-QM; Mon, 13 Jan 2014 10:19:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W2ec5-0003sv-S3
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 10:19:22 +0000
Received: from [85.158.137.68:58781] by server-3.bemta-3.messagelabs.com id
	9D/02-10658-8ADB3D25; Mon, 13 Jan 2014 10:19:20 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389608357!5111052!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18620 invoked from network); 13 Jan 2014 10:19:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:19:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="92245163"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 10:19:01 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:19:00 -0500
Message-ID: <52D3BD8F.1000707@citrix.com>
Date: Mon, 13 Jan 2014 11:18:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David
	Vrabel <david.vrabel@citrix.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	<x86@kernel.org>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 00:15, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this patch does the following:
> - move the m2p_override part from grant_[un]map_refs to gntdev, where it is
>   needed after mapping operations
> - but move set_phys_to_machine from m2p_override to grant_[un]map_refs,
>   because it is always needed
> - update the function prototypes as kmap_ops are no longer needed
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/xen/p2m.c                  |    4 --
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   61 ++++++++++++++++++++++++++--
>  drivers/xen/grant-table.c           |   76 +++++++++++------------------------
>  include/xen/grant_table.h           |    2 -
>  5 files changed, 87 insertions(+), 71 deletions(-)
> 
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..3e78eb9 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -893,9 +893,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	set_page_private(page, mfn);
>  	page->index = pfn_to_mfn(pfn);
>  
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
> -
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs =
> @@ -962,7 +959,6 @@ int m2p_remove_override(struct page *page,
>  	WARN_ON(!PagePrivate(page));
>  	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
From xen-devel-bounces@lists.xen.org Mon Jan 13 10:19:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ec7-0003tD-QM; Mon, 13 Jan 2014 10:19:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W2ec5-0003sv-S3
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 10:19:22 +0000
Received: from [85.158.137.68:58781] by server-3.bemta-3.messagelabs.com id
	9D/02-10658-8ADB3D25; Mon, 13 Jan 2014 10:19:20 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389608357!5111052!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18620 invoked from network); 13 Jan 2014 10:19:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:19:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,651,1384300800"; d="scan'208";a="92245163"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 10:19:01 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:19:00 -0500
Message-ID: <52D3BD8F.1000707@citrix.com>
Date: Mon, 13 Jan 2014 11:18:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David
	Vrabel <david.vrabel@citrix.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	<x86@kernel.org>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 00:15, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this patch does the following:
> - move the m2p_override part from grant_[un]map_refs to gntdev, where it is
>   needed after mapping operations
> - but move set_phys_to_machine from m2p_override to grant_[un]map_refs,
>   because it is needed always
> - update the function prototypes as kmap_ops are no longer needed
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/xen/p2m.c                  |    4 --
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   61 ++++++++++++++++++++++++++--
>  drivers/xen/grant-table.c           |   76 +++++++++++------------------------
>  include/xen/grant_table.h           |    2 -
>  5 files changed, 87 insertions(+), 71 deletions(-)
> 
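In isolation, the signature change described above can be sketched like this (mock types and a placeholder error code, not the kernel implementation):

```c
#include <assert.h>

/* Hypothetical stand-ins for the real Xen types -- this sketches only
 * the call-signature change, not the kernel implementation. */
struct gnttab_map_grant_ref { int status; };
struct page { unsigned long index; };

/* After the patch, gnttab_map_refs() loses its kmap_ops parameter:
 * callers such as blkback, whose pages never reach user space, pass
 * just the map ops, the page array and a count. */
int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                    struct page **pages, unsigned int count)
{
    unsigned int i;
    (void)pages;
    for (i = 0; i < count; i++)
        if (map_ops[i].status)  /* any failed slot fails the batch */
            return -1;          /* stand-in for the real error code */
    return 0;
}
```

A blkback-style caller then reads `ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);` with no NULL placeholder for the dropped kmap_ops argument.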
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..3e78eb9 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -893,9 +893,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	set_page_private(page, mfn);
>  	page->index = pfn_to_mfn(pfn);
>  
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
> -
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs =
> @@ -962,7 +959,6 @@ int m2p_remove_override(struct page *page,
>  	WARN_ON(!PagePrivate(page));
>  	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..3a97342 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -250,6 +250,9 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token,
>  static int map_grant_pages(struct grant_map *map)
>  {
>  	int i, err = 0;
> +	bool lazy = false;
> +	pte_t *pte;
> +	unsigned long mfn;
>  
>  	if (!use_ptemod) {
>  		/* Note: it could already be mapped */
> @@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs(map->map_ops, map->pages, map->count);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < map->count; i++) {
> +		/* Do not add to override if the map failed. */
> +		if (map->map_ops[i].status)
> +			continue;
> +
> +		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
> +			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
> +				(map->map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
> +		}
> +		err = m2p_add_override(mfn,
> +				       map->pages[i],
> +				       use_ptemod ? &map->kmap_ops[i] : NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> @@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
>  static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  {
>  	int i, err = 0;
> +	bool lazy = false;
>  
>  	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
>  		int pgno = (map->notify.addr >> PAGE_SHIFT);
> @@ -316,8 +349,28 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  	}
>  
>  	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +				map->pages + offset,
> +				pages);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < pages; i++) {
> +		err = m2p_remove_override((map->pages + offset)[i],
> +					  use_ptemod ?
> +					  &(map->kmap_ops + offset)[i] :
> +					  NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..59019f2 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -881,11 +881,9 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> @@ -904,49 +902,35 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		for (i = 0; i < count; i++) {
>  			if (map_ops[i].status)
>  				continue;
> -			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> -					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> +			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> +							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
> +				return -ENOMEM;
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		/* Do not add to override if the map failed. */
> -		if (map_ops[i].status)
> -			continue;
> -
> -		if (map_ops[i].flags & GNTMAP_contains_pte) {
> -			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> -				(map_ops[i].host_addr & ~PAGE_MASK));
> -			mfn = pte_mfn(*pte);
> -		} else {
> -			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +	} else {
> +		for (i = 0; i < count; i++) {
> +			if (map_ops[i].status)
> +				continue;
> +			if (map_ops[i].flags & GNTMAP_contains_pte) {
> +				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> +					(map_ops[i].host_addr & ~PAGE_MASK));
> +				mfn = pte_mfn(*pte);
> +			} else {
> +				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +			}
> +			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
> +							  FOREIGN_FRAME(mfn))))
> +				return -ENOMEM;
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
>  	}
>  
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> -
> -	return ret;
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kmap_ops,
>  		      struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
> @@ -958,26 +942,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> +	} else {
> +		for (i = 0; i < count; i++) {
> +				set_phys_to_machine(page_to_pfn(pages[i]),
> +						    pages[i]->index);

You seem to rely on page->index containing the old mfn, but I don't see
it being set in gnttab_map_refs (it's only set in m2p_add_override
AFAICT), maybe I'm missing something?

Roger.
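The concern above can be illustrated with a small user-space mock (all names here are hypothetical stand-ins, not the kernel code): if the map path no longer calls m2p_add_override(), something else must save the old p2m entry into page->index before the unmap path restores from it.

```c
#include <assert.h>

/* Minimal user-space model of the bookkeeping in question:
 * page->index is where m2p_add_override() stashes the original mfn
 * so the unmap path can restore the p2m entry later. */
struct page { unsigned long pfn; unsigned long index; };

#define MOCK_NPAGES 8
unsigned long mock_p2m[MOCK_NPAGES];      /* mock phys-to-machine table */
#define FOREIGN_FRAME(m) ((m) | (1UL << 31))

/* What the patch's unmap path does: restore p2m from page->index. */
void unmap_restore(struct page *pg)
{
    mock_p2m[pg->pfn] = pg->index;
}

/* The step the review points out is missing: the map path would also
 * have to save the old p2m entry into page->index before overwriting
 * it, because m2p_add_override() -- which used to set page->index --
 * is no longer called for blkback/netback pages. */
void map_with_save(struct page *pg, unsigned long mfn)
{
    pg->index = mock_p2m[pg->pfn];        /* remember the old entry */
    mock_p2m[pg->pfn] = FOREIGN_FRAME(mfn);
}
```

Without the save step in map_with_save(), unmap_restore() would write back whatever stale value page->index happened to hold.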

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:25:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2eiB-0004AT-Ss; Mon, 13 Jan 2014 10:25:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2ei9-0004AM-PM
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 10:25:38 +0000
Received: from [85.158.137.68:55470] by server-12.bemta-3.messagelabs.com id
	56/29-20055-02FB3D25; Mon, 13 Jan 2014 10:25:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389608734!8718162!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3851 invoked from network); 13 Jan 2014 10:25:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:25:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92246572"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 10:25:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:25:33 -0500
Message-ID: <1389608732.13654.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Date: Mon, 13 Jan 2014 10:25:32 +0000
In-Reply-To: <52D058F3.5080504@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
> On 01/10/2014 10:03 AM, Ian Campbell wrote:
> > Fitting in with libxl proper would require the API to look a certain way
> > (take a context etc), so perhaps libxlu would be more appropriate,
> > alongside the disk syntax parser etc?
> 
> Possibly. I looked at that back then (and today again) and it seemed to 
> all be related to parsing things into XLU_Config objects. I guess I did 
> not have a good feel for what libxlu was supposed to be. If it is 
> supposed to be a generic library of auxiliary toolstack functionality 
> then I think it would be a good place.

I think that (aux functionality) was the intention -- the fact that it
is all parsing stuff right now is just a coincidence.

(Ian J: right?)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2f3l-0005DJ-TC; Mon, 13 Jan 2014 10:47:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W2f3j-0005DE-M9
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 10:47:55 +0000
Received: from [85.158.143.35:24990] by server-2.bemta-4.messagelabs.com id
	07/A6-11386-A54C3D25; Mon, 13 Jan 2014 10:47:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389610073!11354244!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16098 invoked from network); 13 Jan 2014 10:47:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:47:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90121876"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 10:47:52 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:47:52 -0500
Message-ID: <52D3C457.3010903@citrix.com>
Date: Mon, 13 Jan 2014 10:47:51 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <52D3B6A9.2090003@citrix.com>
In-Reply-To: <52D3B6A9.2090003@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jan Beulich <JBeulich@suse.com>, Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 09:49, Andrew Cooper wrote:
> Hello,
> 
> Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
> "x86: map portion of kexec crash area that is within the direct map
> area" to staging-4.3 ASAP, as following the backport of
> 8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
> kernel area", kexec loading is broken in exactly the same way as it was
> in staging.

This shouldn't be necessary for 4.3 as the old kexec implementation
didn't use map_domain_page() but used several fixmap slots instead.

If this is broken in our XenServer 4.3 tree it's because we've
backported the new kexec implementation and we'll have to do the
backport of 0896bd8be ourselves.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 10:54:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 10:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fA9-0005nC-RE; Mon, 13 Jan 2014 10:54:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W2fA8-0005n7-Bd
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 10:54:32 +0000
Received: from [85.158.139.211:7415] by server-7.bemta-5.messagelabs.com id
	99/0D-04824-7E5C3D25; Mon, 13 Jan 2014 10:54:31 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389610468!9392932!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12130 invoked from network); 13 Jan 2014 10:54:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 10:54:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92252297"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 10:54:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 05:54:14 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W2f9q-0008HT-27;
	Mon, 13 Jan 2014 10:54:14 +0000
Date: Mon, 13 Jan 2014 10:54:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140113105413.GB5698@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Any more comments on the code logic and/or the location of the new
snippet? Should I send a new version with the code comment fixed?

Thanks
Wei.

On Fri, Jan 10, 2014 at 11:27:42AM +0000, Wei Liu wrote:
> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> device assignment if PoD is enabled.").
> 
> This change is restricted to HVM guests, as only VT-d was relevant in
> the Xend counterpart. We're late in the release cycle, so the change
> should only do what's necessary. We can revisit this if we need to do
> the same thing for PV guests in the future.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> ---
>  tools/libxl/libxl_create.c |   22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index e03bb55..b7adf34 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      uint32_t domid;
>      int i, ret;
> +    bool pod_enabled = false;
>  
>      /* convenience aliases */
>      libxl_domain_config *const d_config = dcs->guest_config;
> @@ -714,6 +715,27 @@ static void initiate_domain_create(libxl__egc *egc,
>  
>      domid = 0;
>  
> +    /* If target_memkb is smaller than max_memkb, the subsequent call
> +     * to libxc when building HVM domain will enable PoD mode.
> +     */
> +    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
> +        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
> +
> +    /* We cannot have PoD and PCI device assignment at the same time
> +     * for an HVM guest. It was reported that the VT-d engine cannot
> +     * work with PoD enabled because it needs to populate the entire
> +     * page table for the guest. A quick grep through the AMD IOMMU
> +     * code suggests it has not coped with PoD either. To stay on the
> +     * safe side, we disable PCI device assignment with PoD altogether,
> +     * regardless of the underlying IOMMU in use.
> +     */
> +    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
> +        d_config->num_pcidevs && pod_enabled) {
> +        ret = ERROR_INVAL;
> +        LOG(ERROR, "PCI device assignment for HVM guest is not allowed when PoD is enabled");
> +        goto error_out;
> +    }
> +
>      ret = libxl__domain_create_info_setdefault(gc, &d_config->c_info);
>      if (ret) goto error_out;
>  
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:10:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fPM-0006Si-47; Mon, 13 Jan 2014 11:10:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W2fPK-0006ST-GC
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:10:14 +0000
Received: from [85.158.139.211:33146] by server-3.bemta-5.messagelabs.com id
	92/9E-04773-599C3D25; Mon, 13 Jan 2014 11:10:13 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389611411!9397784!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16850 invoked from network); 13 Jan 2014 11:10:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:10:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90126910"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 11:10:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:10:10 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W2fPG-0008Vr-Be;
	Mon, 13 Jan 2014 11:10:10 +0000
Date: Mon, 13 Jan 2014 11:10:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jordan Justen <jljusten@gmail.com>
Message-ID: <20140113111010.GC5698@zion.uk.xensource.com>
References: <1389228311-2452-1-git-send-email-jordan.l.justen@intel.com>
	<1389228311-2452-17-git-send-email-jordan.l.justen@intel.com>
	<52CF0966.5090404@redhat.com>
	<CAFe8ug__qAuX_4+2inONeCeqY_fU6oAQxYF9h4QCaHNgEpzwFQ@mail.gmail.com>
	<52CF1C02.2030504@redhat.com>
	<CAFe8ug9knSAUup2etM6PUbTNXMQKc9S6+b=J74=-CYGwbPyXaA@mail.gmail.com>
	<20140110121618.GH29180@zion.uk.xensource.com>
	<CAFe8ug9u3P8u2H46Ws=in0DH4G5hayuU2QFa=1NnbLTfGi5xKA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFe8ug9u3P8u2H46Ws=in0DH4G5hayuU2QFa=1NnbLTfGi5xKA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Laszlo Ersek <lersek@redhat.com>,
	"edk2-devel@lists.sourceforge.net" <edk2-devel@lists.sourceforge.net>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [edk2] [PATCH v4 16/26] OvmfPkg: PlatformPei:
 reserve SEC/PEI temp RAM for S3 resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 03:49:52PM -0800, Jordan Justen wrote:
> On Fri, Jan 10, 2014 at 4:16 AM, Wei Liu <wei.liu2@citrix.com> wrote:
> > Thanks for bringing this to Xen-devel!
> > On Thu, Jan 09, 2014 at 03:03:06PM -0800, Jordan Justen wrote:
> >> On Thu, Jan 9, 2014 at 2:00 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> >> > On 01/09/14 22:47, Jordan Justen wrote:
> >> >> On Thu, Jan 9, 2014 at 12:41 PM, Laszlo Ersek <lersek@redhat.com> wrote:
> >> >>> On 01/09/14 01:45, Jordan Justen wrote:
> >> >> I don't think this series claims to enable S3 for Xen, right?
> >> >>
> >> >> When someone looks at S3 for Xen, I might try to steer them towards
> >> >> having Xen call MemDetect again, and branch off for Xen-specific things
> >> >> within MemDetect. I was not too excited about that aspect of r14946.
> >> >
> >> > No, the series doesn't *claim* to do that :), and I didn't test it, but
> >> > since I could not see any immediate blocker when running on Xen, I
> >> > figured we should add the feature generally, and then Xen users could
> >> > happily hunt bugs in the common code. By adding code that doesn't run
> >> > specifically on Xen we're making that harder.
> >>
> >> I'll try to update this to make a best effort of having S3 potentially
> >> work for Xen.
> >>
> >> We should probably see if someone from xen-devel can verify that we
> >> haven't managed to break normal OVMF boots on Xen (aside from the S3
> >> issue).
> >>
> >
> > Do you have a git tree somewhere? And some critical steps to test this
> > series?
> 
> git://github.com/jljusten/edk2.git ovmf-s3-v4
> 
> For testing S3, I've been doing this:
> OvmfPkg/build.sh qemu -enable-kvm \
>   -debugcon file:debug.log -global isa-debugcon.iobase=0x402 \
>   -cdrom ~/ISO/archlinux-2013.12.01-dual.iso \
>   -m 512 -vga qxl
> 
> At the bash prompt I run:
> echo -n mem > /sys/power/state
> 
> Laszlo already pointed out some key issues for Xen with S3, so you
> might want to wait until I try to fix those in v5.
> 

Thanks, I will wait.

Wei.

> -Jordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:11:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fQU-0006p5-Ut; Mon, 13 Jan 2014 11:11:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2fQT-0006ot-CL
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:11:25 +0000
Received: from [193.109.254.147:57474] by server-3.bemta-14.messagelabs.com id
	7A/B7-11000-CD9C3D25; Mon, 13 Jan 2014 11:11:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389611483!10490801!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14973 invoked from network); 13 Jan 2014 11:11:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:11:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92257292"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:11:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:11:21 -0500
Message-ID: <1389611480.13654.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 13 Jan 2014 11:11:20 +0000
In-Reply-To: <20140113105413.GB5698@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<20140113105413.GB5698@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 10:54 +0000, Wei Liu wrote:
> Any more comments on code logic and / or the location of the new
> snippet? Should I send a new version with code comment fixed?
> [...] 
> > +    /* If target_memkb is smaller than max_memkb, the subsequent call
> > +     * to libxc when building HVM domain will enable PoD mode.
> > +     */
> > +    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
> > +        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);

I suppose this corresponds to exactly when PoD would be enabled?

I'm happy with the patch, and it would be better to go ahead now than to
wait for George (with his PoD hat in place) to confirm/deny, so please
resend with the improved comment.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:13:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fSW-0007DS-Fz; Mon, 13 Jan 2014 11:13:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2fSU-0007DH-HX
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:13:30 +0000
Received: from [85.158.143.35:41902] by server-3.bemta-4.messagelabs.com id
	7C/3D-32360-95AC3D25; Mon, 13 Jan 2014 11:13:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389611607!11350212!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6854 invoked from network); 13 Jan 2014 11:13:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:13:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90127721"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 11:13:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:13:26 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W2fSR-00007V-0r;
	Mon, 13 Jan 2014 11:13:27 +0000
Message-ID: <52D3CA56.2000002@citrix.com>
Date: Mon, 13 Jan 2014 11:13:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <52CBF78402000078001110E8@nat28.tlf.novell.com>
	<1389095946-11932-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1389095946-11932-1-git-send-email-andrew.cooper3@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch v2 1/4] common/sysctl: Don't leak status in
 SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/14 11:59, Andrew Cooper wrote:
> In addition, 'copyback' should be cleared even in the error case.
>
> Also fix the indentation of the arguments to copy_to_guest() to help clarify
> that the 'ret = -EFAULT' is not part of the condition.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>

Ping?

>
> ---
>
> Changes in v2:
>  * Still clear copyback even in the error case.
> ---
>  xen/common/sysctl.c |    7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 117e095..0cb6ee1 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -230,12 +230,9 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>          }
>  
>          if ( copy_to_guest(
> -            op->u.page_offline.status, status,
> -            op->u.page_offline.end - op->u.page_offline.start + 1) )
> -        {
> +                 op->u.page_offline.status, status,
> +                 op->u.page_offline.end - op->u.page_offline.start + 1) )
>              ret = -EFAULT;
> -            break;
> -        }
>  
>          xfree(status);
>          copyback = 0;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:13:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:13:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fSc-0007Ei-SH; Mon, 13 Jan 2014 11:13:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2fSb-0007ER-NV
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:13:37 +0000
Received: from [85.158.137.68:42941] by server-15.bemta-3.messagelabs.com id
	25/76-11556-06AC3D25; Mon, 13 Jan 2014 11:13:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389611615!7614086!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17337 invoked from network); 13 Jan 2014 11:13:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:13:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:13:35 +0000
Message-Id: <52D3D86C0200007800112F92@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:13:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52D3B6A9.2090003@citrix.com>
In-Reply-To: <52D3B6A9.2090003@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 10:49, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
> "x86: map portion of kexec crash area that is within the direct map
> area" to staging-4.3 ASAP, as following the backport of
> 8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
> kernel area", kexec loading is broken in exactly the same way as it was
> in staging.

Not without explaining how it is broken: According to my own
checking as well as Daniel's there was no need for the kexec area
to be mapped at all in the old implementation.

Furthermore, I'd prefer to revert 8d611a00 (dff90d0c on 4.2)
instead if there really is an issue. That's largely because, as
noted in the extra comment I added to 0896bd8b, the change is
still incomplete (and hence not much better than the code with
Daniel's change reverted).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:16:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fVW-0007YE-KE; Mon, 13 Jan 2014 11:16:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2fVU-0007Y3-Gt
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:16:36 +0000
Received: from [85.158.143.35:10641] by server-1.bemta-4.messagelabs.com id
	D7/EB-02132-31BC3D25; Mon, 13 Jan 2014 11:16:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389611794!11340183!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22341 invoked from network); 13 Jan 2014 11:16:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:16:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92258394"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:16:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 06:16:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2fVQ-0007nc-9Z;
	Mon, 13 Jan 2014 11:16:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2fVQ-00051A-2f;
	Mon, 13 Jan 2014 11:16:32 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21203.51983.949715.371449@mariner.uk.xensource.com>
Date: Mon, 13 Jan 2014 11:16:31 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52D00CEE0200007800112600@nat28.tlf.novell.com>
References: <osstest-24332-mainreport@xen.org>
	<52D00184020000780011255E@nat28.tlf.novell.com>
	<21199.64025.811465.724716@mariner.uk.xensource.com>
	<52D0098402000078001125BD@nat28.tlf.novell.com>
	<21199.64676.218509.61789@mariner.uk.xensource.com>
	<52D00CEE0200007800112600@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.3-testing test] 24332: regressions - FAIL"):
>>>>  test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install   fail like 24336-bisect
> 
> My point was that this says "fail like 24336-bisect", not "failed in
> 24332, passed in 24336-bisect", i.e. the above only gives
> confirmation that there's a problem, not whether some flight had
> no problem.

24336 will have been a double-check test of the baseline version
carried out by the bisector.  So it gives confirmation that the
problem exists also in the baseline and is therefore not a reason to
block the push.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:23:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fbw-0008Um-9z; Mon, 13 Jan 2014 11:23:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1W2fbu-0008Ue-Gv
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:23:14 +0000
Received: from [85.158.139.211:34677] by server-1.bemta-5.messagelabs.com id
	10/8A-21065-1ACC3D25; Mon, 13 Jan 2014 11:23:13 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389612189!8183226!1
X-Originating-IP: [209.85.192.180]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23318 invoked from network); 13 Jan 2014 11:23:10 -0000
Received: from mail-pd0-f180.google.com (HELO mail-pd0-f180.google.com)
	(209.85.192.180)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:23:10 -0000
Received: by mail-pd0-f180.google.com with SMTP id q10so7303888pdj.39
	for <xen-devel@lists.xen.org>; Mon, 13 Jan 2014 03:23:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:subject:message-id:importance:from:to:cc:mime-version
	:content-type; bh=NKx8+RYO8/f+k65nBAGZdzfe/gIgAtRO9Uslvx94u4s=;
	b=O+ame3JKFkTJNnTSTRYQ9Nu8Tb1RJTy8wr4VIWh7gkdnOuGQMyVlnDgnHVnYI+Om8J
	5LwJs42YaLDrzH2jm7PnFxTNxzVXyeB2QG8XTRpXwWaCHxfbjlg+qZXLcN+flxbpULuv
	7rvlkMsV+E/U1VmscBOCgIMvrCEZlTiu1z093liwKdqcH2gezPZVoUUrbI28RuWr5OJT
	M+Q+huhhU/RUs+fQlsmn0oZ+ZlDjSTULpiRkGUgasTuDx9YcReVrvMT9LY10xzK+Wx7S
	Nf09WsP0vpeK3Ll/RrSDg7dDAWwP/YoFEB6D2nnok6SZPY5sffedj9tlkdwR/h8IHiGe
	UjDQ==
X-Received: by 10.66.228.37 with SMTP id sf5mr29257172pac.19.1389612188648;
	Mon, 13 Jan 2014 03:23:08 -0800 (PST)
Received: from [10.10.145.42] (p2559.superclick.com. [210.13.92.125])
	by mx.google.com with ESMTPSA id
	fk4sm47848103pab.23.2014.01.13.03.23.06 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 13 Jan 2014 03:23:08 -0800 (PST)
Date: Mon, 13 Jan 2014 19:23:03 +0800
Message-ID: <phwlyitgpjnbm2j8twjeh586.1389612183927@email.android.com>
Importance: normal
From: "lars.kurth@xen.org" <lars.kurth.xen@gmail.com>
To: Maren Abatielos <abatielos@univention.de>, Russell Pavlicek
	<russell.pavlicek@citrix.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0355030833621922675=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0355030833621922675==
Content-Type: multipart/alternative; boundary="--_com.android.email_37615610558630"

----_com.android.email_37615610558630
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Maren,
there is also a link to instructions on the ecosystem page.
Lars

Sent from Samsung Mobile

-------- Original message --------
From: Maren Abatielos <abatielos@univention.de>
Date: 2014/01/13 16:36 (GMT+08:00)
To: Russell Pavlicek <russell.pavlicek@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen Management Tools - new project to be listed

Hello Russell,

ah, I understand, thanks a lot. I had already been on that page once, but didn't realize it was the new one. Fine, I will certainly register for an account and list our product there.

Good to be able to change it on my own when we have updates or something.

Thanks again,

Maren

---
Marketing

Univention GmbH
be open.
Mary-Somerville-Str. 1
28359 Bremen

E-Mail: abatielos@univention.de
Tel.: +49 421 22232-68
Fax: +49 421 22232-99

https://www.univention.de
http://gplus.to/Univention
http://www.facebook.com/univention
https://twitter.com/univention
http://www.youtube.com/univentionvideo

Managing director: Peter H. Ganten
HRB 20755 Local Court Bremen
Tax No.: 71-597-02876

Russell Pavlicek <russell.pavlicek@citrix.com> wrote on 9 January 2014 at 15:52:

Maren,

Sorry for the confusion.  That's the old system (we will need to mark it as such).

To list your project, product, or service, simply go to XenProject.org.  Register for an account, if you don't have one already.  Click on Directory > Ecosystem.  Press the button labeled "Add your listing here".  Fill out the form and submit.

Once your listing is approved (generally within a few hours), it will be listed in the XenProject.org Ecosystem Directory.  What's more, you will have control over the listing content, so as your product changes or expands, you can modify your listing to keep it up to date.

If you have any questions or difficulty, please contact me.  I'd be happy to help!

Russ Pavlicek
Xen Project Evangelist, Citrix Systems
Home Office: +1-301-829-5327
Mobile: +1-240-397-0199
UK VoIP: +44 1223 852 894

From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Maren Abatielos
Sent: Thursday, January 09, 2014 7:37 AM
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen Management Tools - new project to be listed

Hello,

I would like to have our product UCS Virtual Machine Manager (UVMM) listed in the section "Management tools and interfaces" on the following page:

http://wiki.xen.org/wiki/Xen_Management_Tools

UVMM is a web-based virtualization management tool for Xen and KVM. It is included in our Linux server operating system, Univention Corporate Server, as is Xen.

Could you please add it, using the following URL for the listing:

http://www.univention.de/en/products/ucs/ucs-components/virtualization/ucs-virtual-machine-manager/

Thanks a lot,

Maren Abatielos

Kind regards
Maren Abatielos

----_com.android.email_37615610558630--

--===============0355030833621922675==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0355030833621922675==--




dHlsZT0iZm9udC1zaXplOiAxMS4wcHQ7IGZvbnQtZmFtaWx5OiAnQ2FsaWJyaScsJ3NhbnMtc2Vy
aWYnOyBjb2xvcjogIzFmNDk3ZDsiPklmIHlvdSBoYXZlIGFueSBxdWVzdGlvbnMgb3IgZGlmZmlj
dWx0eSwgcGxlYXNlIGNvbnRhY3QgbWUuJm5ic3A7IEnigJlkIGJlIGhhcHB5IHRvIGhlbHAhPC9z
cGFuPjwvcD4gCiAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXpl
OiAxMS4wcHQ7IGZvbnQtZmFtaWx5OiAnQ2FsaWJyaScsJ3NhbnMtc2VyaWYnOyBjb2xvcjogIzFm
NDk3ZDsiPiZuYnNwOzwvc3Bhbj48L3A+IAogICAgPGRpdj4gCiAgICAgPHAgY2xhc3M9Ik1zb05v
cm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTogMTEuMHB0OyBmb250LWZhbWlseTogJ0NhbGli
cmknLCdzYW5zLXNlcmlmJzsgY29sb3I6ICMxZjQ5N2Q7Ij5SdXNzIFBhdmxpY2VrPC9zcGFuPjwv
cD4gCiAgICAgPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTogMTEu
MHB0OyBmb250LWZhbWlseTogJ0NhbGlicmknLCdzYW5zLXNlcmlmJzsgY29sb3I6ICMxZjQ5N2Q7
Ij5YZW4gUHJvamVjdCBFdmFuZ2VsaXN0LCBDaXRyaXggU3lzdGVtczwvc3Bhbj48L3A+IAogICAg
IDxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6IDExLjBwdDsgZm9u
dC1mYW1pbHk6ICdDYWxpYnJpJywnc2Fucy1zZXJpZic7IGNvbG9yOiAjMWY0OTdkOyI+SG9tZSBP
ZmZpY2U6ICsxLTMwMS04MjktNTMyNzwvc3Bhbj48L3A+IAogICAgIDxwIGNsYXNzPSJNc29Ob3Jt
YWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6IDExLjBwdDsgZm9udC1mYW1pbHk6ICdDYWxpYnJp
Jywnc2Fucy1zZXJpZic7IGNvbG9yOiAjMWY0OTdkOyI+TW9iaWxlOiArMS0yNDAtMzk3LTAxOTk8
L3NwYW4+PC9wPiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1z
aXplOiAxMS4wcHQ7IGZvbnQtZmFtaWx5OiAnQ2FsaWJyaScsJ3NhbnMtc2VyaWYnOyBjb2xvcjog
IzFmNDk3ZDsiPlVLIFZvSVA6ICs0NCAxMjIzIDg1MiA4OTQ8L3NwYW4+PC9wPiAKICAgIDwvZGl2
PiAKICAgIDxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6IDExLjBw
dDsgZm9udC1mYW1pbHk6ICdDYWxpYnJpJywnc2Fucy1zZXJpZic7IGNvbG9yOiAjMWY0OTdkOyI+
Jm5ic3A7PC9zcGFuPjwvcD4gCiAgICA8ZGl2PiAKICAgICA8ZGl2IHN0eWxlPSJib3JkZXI6IG5v
bmU7IGJvcmRlci10b3A6IHNvbGlkICNCNUM0REYgMS4wcHQ7IHBhZGRpbmc6IDMuMHB0IDBpbiAw
aW4gMGluOyI+IAogICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3Ryb25nPjxzcGFuIHN0eWxl
PSJmb250LXNpemU6IDEwLjBwdDsgZm9udC1mYW1pbHk6ICdUYWhvbWEnLCdzYW5zLXNlcmlmJzsi
PkZyb206PC9zcGFuPjwvc3Ryb25nPjxzcGFuIHN0eWxlPSJmb250LXNpemU6IDEwLjBwdDsgZm9u
dC1mYW1pbHk6ICdUYWhvbWEnLCdzYW5zLXNlcmlmJzsiPiB4ZW4tZGV2ZWwtYm91bmNlc0BsaXN0
cy54ZW4ub3JnIFttYWlsdG86eGVuLWRldmVsLWJvdW5jZXNAbGlzdHMueGVuLm9yZ10gPHN0cm9u
Zz5PbiBCZWhhbGYgT2YgPC9zdHJvbmc+TWFyZW4gQWJhdGllbG9zPGJyPiA8c3Ryb25nPlNlbnQ6
PC9zdHJvbmc+IFRodXJzZGF5LCBKYW51YXJ5IDA5LCAyMDE0IDc6MzcgQU08YnI+IDxzdHJvbmc+
VG86PC9zdHJvbmc+IHhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnPGJyPiA8c3Ryb25nPlN1YmplY3Q6
PC9zdHJvbmc+IFtYZW4tZGV2ZWxdIFhlbiBNYW5hZ2VtZW50IFRvb2xzIC0gbmV3IHByb2plY3Qg
dG8gYmUgbGlzdGVkPC9zcGFuPjwvcD4gCiAgICAgPC9kaXY+IAogICAgPC9kaXY+IAogICAgPHAg
Y2xhc3M9Ik1zb05vcm1hbCI+Jm5ic3A7PC9wPiAKICAgIDxkaXY+IAogICAgIDxwIGNsYXNzPSJN
c29Ob3JtYWwiPkhlbGxvLDwvcD4gCiAgICA8L2Rpdj4gCiAgICA8ZGl2PiAKICAgICA8cCBjbGFz
cz0iTXNvTm9ybWFsIj4mbmJzcDs8L3A+IAogICAgPC9kaXY+IAogICAgPGRpdj4gCiAgICAgPHAg
Y2xhc3M9Ik1zb05vcm1hbCI+SSB3b3VsZCBsaWtlIHRvIGhhdmUgb3VyIHByb2R1Y3QgVUNTIFZp
cnR1YWwgTWFjaGluZSBNYW5hZ2VyIChVVk1NKSB0byBiZSBsaXN0ZWQgaW4gdGhlIHNlY3Rpb24g
Ik1hbmFnZW1lbnQgdG9vbHMgYW5kIGludGVyZmFjZXMiIG9uIHRoZSBmb2xsb3dpbmcgcGFnZTo8
L3A+IAogICAgPC9kaXY+IAogICAgPGRpdj4gCiAgICAgPHAgY2xhc3M9Ik1zb05vcm1hbCI+Jm5i
c3A7PC9wPiAKICAgIDwvZGl2PiAKICAgIDxkaXY+IAogICAgIDxwIGNsYXNzPSJNc29Ob3JtYWwi
PjxhIGhyZWY9Imh0dHA6Ly93aWtpLnhlbi5vcmcvd2lraS9YZW5fTWFuYWdlbWVudF9Ub29scyI+
aHR0cDovL3dpa2kueGVuLm9yZy93aWtpL1hlbl9NYW5hZ2VtZW50X1Rvb2xzPC9hPjwvcD4gCiAg
ICA8L2Rpdj4gCiAgICA8ZGl2PiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj4mbmJzcDs8L3A+
IAogICAgPC9kaXY+IAogICAgPGRpdj4gCiAgICAgPHAgY2xhc3M9Ik1zb05vcm1hbCI+VVZNTSBp
cyBhIHdlYi1iYXNlZCB2aXJ0dWFsaXphdGlvbiBtYW5hZ2VtZW50IHRvb2wgZm9yIFhlbiBhbmQg
S1ZNLiBJdCBpcyBpbmNsdWRlZCBpbiBvdXIgTGludXggc2VydmVyIG9wZXJhdGluZyBzeXN0ZW0g
VW5pdmVudGlvbiBDb3Jwb3JhdGUgU2VydmVyLCBzbyBpcyBYZW4uPC9wPiAKICAgIDwvZGl2PiAK
ICAgIDxkaXY+IAogICAgIDxwIGNsYXNzPSJNc29Ob3JtYWwiPiZuYnNwOzwvcD4gCiAgICA8L2Rp
dj4gCiAgICA8ZGl2PiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj5Db3VsZCB5b3UgcGxlYXNl
IGRvIHRoYXQgdXNpbmcgdGhlIGZvbGxvd2luZyBVUkwgdG8gYmUgbGlzdGVkOjwvcD4gCiAgICA8
L2Rpdj4gCiAgICA8ZGl2PiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj4mbmJzcDs8L3A+IAog
ICAgPC9kaXY+IAogICAgPGRpdj4gCiAgICAgPHAgY2xhc3M9Ik1zb05vcm1hbCI+PGEgaHJlZj0i
aHR0cDovL3d3dy51bml2ZW50aW9uLmRlL2VuL3Byb2R1Y3RzL3Vjcy91Y3MtY29tcG9uZW50cy92
aXJ0dWFsaXphdGlvbi91Y3MtdmlydHVhbC1tYWNoaW5lLW1hbmFnZXIvIj5odHRwOi8vd3d3LnVu
aXZlbnRpb24uZGUvZW4vcHJvZHVjdHMvdWNzL3Vjcy1jb21wb25lbnRzL3ZpcnR1YWxpemF0aW9u
L3Vjcy12aXJ0dWFsLW1hY2hpbmUtbWFuYWdlci88L2E+PC9wPiAKICAgIDwvZGl2PiAKICAgIDxk
aXY+IAogICAgIDxwIGNsYXNzPSJNc29Ob3JtYWwiPiZuYnNwOzwvcD4gCiAgICA8L2Rpdj4gCiAg
ICA8ZGl2PiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj5UaGFua3MgYSBsb3QsPC9wPiAKICAg
IDwvZGl2PiAKICAgIDxkaXY+IAogICAgIDxwIGNsYXNzPSJNc29Ob3JtYWwiPiZuYnNwOzwvcD4g
CiAgICA8L2Rpdj4gCiAgICA8ZGl2PiAKICAgICA8cCBjbGFzcz0iTXNvTm9ybWFsIj5NYXJlbiBB
YmF0aWVsb3MgPGJyPiA8YnI+IC0tLSA8YnI+IE1hcmtldGluZyA8YnI+IDxicj4gVW5pdmVudGlv
biBHbWJIIDxicj4gYmUgb3Blbi4gPGJyPiBNYXJ5LVNvbWVydmlsbGUtU3RyLjEgPGJyPiAyODM1
OSBCcmVtZW4gPGJyPiA8YnI+IEUtTWFpbDogPGEgaHJlZj0ibWFpbHRvOmFiYXRpZWxvc0B1bml2
ZW50aW9uLmRlIj5hYmF0aWVsb3NAdW5pdmVudGlvbi5kZTwvYT4gPGJyPiBUZWwuIDogKzQ5IDQy
MSAyMjIzMi02OCA8YnI+IEZheCA6ICs0OSA0MjEgMjIyMzItOTkgPGJyPiA8YnI+IDxhIGhyZWY9
Imh0dHBzOi8vd3d3LnVuaXZlbnRpb24uZGUiPmh0dHBzOi8vd3d3LnVuaXZlbnRpb24uZGU8L2E+
IDxicj4gPGEgaHJlZj0iaHR0cDovL2dwbHVzLnRvL1VuaXZlbnRpb24iPmh0dHA6Ly9ncGx1cy50
by9Vbml2ZW50aW9uPC9hPiA8YnI+IDxhIGhyZWY9Imh0dHA6Ly93d3cuZmFjZWJvb2suY29tL3Vu
aXZlbnRpb24iPmh0dHA6Ly93d3cuZmFjZWJvb2suY29tL3VuaXZlbnRpb248L2E+IDxicj4gPGEg
aHJlZj0iaHR0cHM6Ly90d2l0dGVyLmNvbS91bml2ZW50aW9uIj5odHRwczovL3R3aXR0ZXIuY29t
L3VuaXZlbnRpb248L2E+IDxicj4gPGEgaHJlZj0iaHR0cDovL3d3dy55b3V0dWJlLmNvbS91bml2
ZW50aW9udmlkZW8iPmh0dHA6Ly93d3cueW91dHViZS5jb20vdW5pdmVudGlvbnZpZGVvPC9hPiA8
YnI+IDxicj4gTWFuYWdpbmcgZGlyZWN0b3I6IFBldGVyIEguIEdhbnRlbiA8YnI+IEhSQiAyMDc1
NSBMb2NhbCBDb3VydCBCcmVtZW4gPGJyPiBUYXgtTm8uOiA3MS01OTctMDI4NzY8L3A+IAogICAg
PC9kaXY+IAogICA8L2Rpdj4gCiAgPC9ibG9ja3F1b3RlPiAKICA8ZGl2PgogICA8YnI+Jm5ic3A7
CiAgPC9kaXY+IAogIDxkaXYgaWQ9Im94LXNpZ25hdHVyZSI+CiAgIFNjaMO2bmUgR3LDvMOfZQog
ICA8YnI+TWFyZW4gQWJhdGllbG9zCiAgIDxicj4KICAgPGJyPi0tLQogICA8YnI+TWFya2V0aW5n
CiAgIDxicj4KICAgPGJyPlVuaXZlbnRpb24gR21iSAogICA8YnI+YmUgb3Blbi4KICAgPGJyPk1h
cnktU29tZXJ2aWxsZS1TdHIuMQogICA8YnI+MjgzNTkgQnJlbWVuCiAgIDxicj4KICAgPGJyPkUt
TWFpbDogYWJhdGllbG9zQHVuaXZlbnRpb24uZGUKICAgPGJyPlRlbC4gOiArNDkgNDIxIDIyMjMy
LTY4CiAgIDxicj5GYXggOiArNDkgNDIxIDIyMjMyLTk5CiAgIDxicj4KICAgPGJyPmh0dHBzOi8v
d3d3LnVuaXZlbnRpb24uZGUKICAgPGJyPmh0dHA6Ly9ncGx1cy50by9Vbml2ZW50aW9uCiAgIDxi
cj5odHRwOi8vd3d3LmZhY2Vib29rLmNvbS91bml2ZW50aW9uCiAgIDxicj5odHRwczovL3R3aXR0
ZXIuY29tL3VuaXZlbnRpb24KICAgPGJyPmh0dHA6Ly93d3cueW91dHViZS5jb20vdW5pdmVudGlv
bnZpZGVvCiAgIDxicj4KICAgPGJyPkdlc2Now6RmdHNmw7xocmVyOiBQZXRlciBILiBHYW50ZW4K
ICAgPGJyPkhSQiAyMDc1NSBBbXRzZ2VyaWNodCBCcmVtZW4KICAgPGJyPlN0ZXVlci1Oci46IDcx
LTU5Ny0wMjg3NgogIDwvZGl2PgogCjwvYm9keT4=

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel





From xen-devel-bounces@lists.xen.org Mon Jan 13 11:24:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fcz-00008V-PV; Mon, 13 Jan 2014 11:24:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2fcx-00008J-N5
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:24:19 +0000
Received: from [85.158.143.35:22662] by server-3.bemta-4.messagelabs.com id
	94/E0-32360-3ECC3D25; Mon, 13 Jan 2014 11:24:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389612257!11300210!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6390 invoked from network); 13 Jan 2014 11:24:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:24:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:24:17 +0000
Message-Id: <52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:24:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 11:14, Olaf Hering <olaf@aepfle.de> wrote:
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -1634,37 +1634,22 @@ blkfront_closing(struct blkfront_info *info)
>  
>  static void blkfront_setup_discard(struct blkfront_info *info)
>  {
> -	int err;
> -	char *type;
>  	unsigned int discard_granularity;
>  	unsigned int discard_alignment;
>  	unsigned int discard_secure;
>  
> -	type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
> -	if (IS_ERR(type))
> +	if (xenbus_gather(XBT_NIL, info->xbdev->otherend,
> +		"discard-granularity", "%u", &discard_granularity,
> +		"discard-alignment", "%u", &discard_alignment,
> +		"discard-secure", "%u", &discard_secure,
> +		NULL))
>  		return;

You can't do this in one go - the first two and the last one may be
set independently (and are independent in their meaning), and
hence need to be queried independently (xenbus_gather() fails
on the first absent value).

Jan




From xen-devel-bounces@lists.xen.org Mon Jan 13 11:26:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ffK-0000Ji-D7; Mon, 13 Jan 2014 11:26:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W2ffI-0000JY-U0
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:26:45 +0000
Received: from [85.158.143.35:45668] by server-3.bemta-4.messagelabs.com id
	F1/85-32360-47DC3D25; Mon, 13 Jan 2014 11:26:44 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389612401!11354078!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3266 invoked from network); 13 Jan 2014 11:26:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:26:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90130967"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 11:26:41 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 13 Jan 2014 06:26:41 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Mon, 13 Jan 2014 12:26:39 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH net-next v2 1/3] net: add skb_checksum_setup
Thread-Index: AQHPDSH2fCxP8y7U+0CaBHnzQFP3oZqCij6A
Date: Mon, 13 Jan 2014 11:26:39 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0204A79@AMSPEX01CL01.citrite.net>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 09 January 2014 10:03
> To: netdev@vger.kernel.org; xen-devel@lists.xen.org
> Cc: Paul Durrant; David Miller; Eric Dumazet; Veaceslav Falico; Alexander
> Duyck; Nicolas Dichtel
> Subject: [PATCH net-next v2 1/3] net: add skb_checksum_setup
> 
> This patch adds a function to the core network code that sets up the
> partial checksum offset for IP packets (and optionally re-calculates
> the pseudo-header checksum).
> The implementation was previously private and duplicated between
> xen-netback and xen-netfront; however, it is not xen-specific and is
> potentially useful to any network driver.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: David Miller <davem@davemloft.net>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Veaceslav Falico <vfalico@redhat.com>
> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>

Ping?

  Paul

> ---
>  include/linux/skbuff.h |    2 +
>  net/core/skbuff.c      |  273
> ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 275 insertions(+)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index d97f2d0..48b7605 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -2893,6 +2893,8 @@ static inline void skb_checksum_none_assert(const
> struct sk_buff *skb)
> 
>  bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off);
> 
> +int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
> +
>  u32 __skb_get_poff(const struct sk_buff *skb);
> 
>  /**
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 1d641e7..15057d2 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -65,6 +65,7 @@
>  #include <net/dst.h>
>  #include <net/sock.h>
>  #include <net/checksum.h>
> +#include <net/ip6_checksum.h>
>  #include <net/xfrm.h>
> 
>  #include <asm/uaccess.h>
> @@ -3549,6 +3550,278 @@ bool skb_partial_csum_set(struct sk_buff *skb,
> u16 start, u16 off)
>  }
>  EXPORT_SYMBOL_GPL(skb_partial_csum_set);
> 
> +static int skb_maybe_pull_tail(struct sk_buff *skb, unsigned int len,
> +			       unsigned int max)
> +{
> +	if (skb_headlen(skb) >= len)
> +		return 0;
> +
> +	/* If we need to pullup then pullup to the max, so we
> +	 * won't need to do it again.
> +	 */
> +	if (max > skb->len)
> +		max = skb->len;
> +
> +	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
> +		return -ENOMEM;
> +
> +	if (skb_headlen(skb) < len)
> +		return -EPROTO;
> +
> +	return 0;
> +}
> +
> +/* This value should be large enough to cover a tagged ethernet header
> plus
> + * maximally sized IP and TCP or UDP headers.
> + */
> +#define MAX_IP_HDR_LEN 128
> +
> +static int skb_checksum_setup_ip(struct sk_buff *skb, bool recalculate)
> +{
> +	unsigned int off;
> +	bool fragment;
> +	int err;
> +
> +	fragment = false;
> +
> +	err = skb_maybe_pull_tail(skb,
> +				  sizeof(struct iphdr),
> +				  MAX_IP_HDR_LEN);
> +	if (err < 0)
> +		goto out;
> +
> +	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
> +		fragment = true;
> +
> +	off = ip_hdrlen(skb);
> +
> +	err = -EPROTO;
> +
> +	if (fragment)
> +		goto out;
> +
> +	switch (ip_hdr(skb)->protocol) {
> +	case IPPROTO_TCP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct tcphdr),
> +					  MAX_IP_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct tcphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			tcp_hdr(skb)->check =
> +				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
> +						   ip_hdr(skb)->daddr,
> +						   skb->len - off,
> +						   IPPROTO_TCP, 0);
> +		break;
> +	case IPPROTO_UDP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct udphdr),
> +					  MAX_IP_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct udphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			udp_hdr(skb)->check =
> +				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
> +						   ip_hdr(skb)->daddr,
> +						   skb->len - off,
> +						   IPPROTO_UDP, 0);
> +		break;
> +	default:
> +		goto out;
> +	}
> +
> +	err = 0;
> +
> +out:
> +	return err;
> +}
> +
> +/* This value should be large enough to cover a tagged ethernet header
> plus
> + * an IPv6 header, all options, and a maximal TCP or UDP header.
> + */
> +#define MAX_IPV6_HDR_LEN 256
> +
> +#define OPT_HDR(type, skb, off) \
> +	(type *)(skb_network_header(skb) + (off))
> +
> +static int skb_checksum_setup_ipv6(struct sk_buff *skb, bool recalculate)
> +{
> +	int err;
> +	u8 nexthdr;
> +	unsigned int off;
> +	unsigned int len;
> +	bool fragment;
> +	bool done;
> +
> +	fragment = false;
> +	done = false;
> +
> +	off = sizeof(struct ipv6hdr);
> +
> +	err = skb_maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
> +	if (err < 0)
> +		goto out;
> +
> +	nexthdr = ipv6_hdr(skb)->nexthdr;
> +
> +	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
> +	while (off <= len && !done) {
> +		switch (nexthdr) {
> +		case IPPROTO_DSTOPTS:
> +		case IPPROTO_HOPOPTS:
> +		case IPPROTO_ROUTING: {
> +			struct ipv6_opt_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct ipv6_opt_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
> +			nexthdr = hp->nexthdr;
> +			off += ipv6_optlen(hp);
> +			break;
> +		}
> +		case IPPROTO_AH: {
> +			struct ip_auth_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct ip_auth_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
> +			nexthdr = hp->nexthdr;
> +			off += ipv6_authlen(hp);
> +			break;
> +		}
> +		case IPPROTO_FRAGMENT: {
> +			struct frag_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct frag_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct frag_hdr, skb, off);
> +
> +			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
> +				fragment = true;
> +
> +			nexthdr = hp->nexthdr;
> +			off += sizeof(struct frag_hdr);
> +			break;
> +		}
> +		default:
> +			done = true;
> +			break;
> +		}
> +	}
> +
> +	err = -EPROTO;
> +
> +	if (!done || fragment)
> +		goto out;
> +
> +	switch (nexthdr) {
> +	case IPPROTO_TCP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct tcphdr),
> +					  MAX_IPV6_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct tcphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			tcp_hdr(skb)->check =
> +				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
> +						 &ipv6_hdr(skb)->daddr,
> +						 skb->len - off,
> +						 IPPROTO_TCP, 0);
> +		break;
> +	case IPPROTO_UDP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct udphdr),
> +					  MAX_IPV6_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct udphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			udp_hdr(skb)->check =
> +				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
> +						 &ipv6_hdr(skb)->daddr,
> +						 skb->len - off,
> +						 IPPROTO_UDP, 0);
> +		break;
> +	default:
> +		goto out;
> +	}
> +
> +	err = 0;
> +
> +out:
> +	return err;
> +}
> +
> +/**
> + * skb_checksum_setup - set up partial checksum offset
> + * @skb: the skb to set up
> + * @recalculate: if true the pseudo-header checksum will be recalculated
> + */
> +int skb_checksum_setup(struct sk_buff *skb, bool recalculate)
> +{
> +	int err;
> +
> +	switch (skb->protocol) {
> +	case htons(ETH_P_IP):
> +		err = skb_checksum_setup_ip(skb, recalculate);
> +		break;
> +
> +	case htons(ETH_P_IPV6):
> +		err = skb_checksum_setup_ipv6(skb, recalculate);
> +		break;
> +
> +	default:
> +		err = -EPROTO;
> +		break;
> +	}
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(skb_checksum_setup);
> +
>  void __skb_warn_lro_forwarding(const struct sk_buff *skb)
>  {
>  	net_warn_ratelimited("%s: received packets cannot be forwarded
> while LRO is enabled\n",
> --
> 1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Jan 13 11:26:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ffK-0000Ji-D7; Mon, 13 Jan 2014 11:26:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W2ffI-0000JY-U0
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:26:45 +0000
Received: from [85.158.143.35:45668] by server-3.bemta-4.messagelabs.com id
	F1/85-32360-47DC3D25; Mon, 13 Jan 2014 11:26:44 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389612401!11354078!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3266 invoked from network); 13 Jan 2014 11:26:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:26:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90130967"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 11:26:41 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 13 Jan 2014 06:26:41 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Mon, 13 Jan 2014 12:26:39 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH net-next v2 1/3] net: add skb_checksum_setup
Thread-Index: AQHPDSH2fCxP8y7U+0CaBHnzQFP3oZqCij6A
Date: Mon, 13 Jan 2014 11:26:39 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0204A79@AMSPEX01CL01.citrite.net>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	Nicolas Dichtel <nicolas.dichtel@6wind.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@citrix.com]
> Sent: 09 January 2014 10:03
> To: netdev@vger.kernel.org; xen-devel@lists.xen.org
> Cc: Paul Durrant; David Miller; Eric Dumazet; Veaceslav Falico; Alexander
> Duyck; Nicolas Dichtel
> Subject: [PATCH net-next v2 1/3] net: add skb_checksum_setup
> 
> This patch adds a function to the core network code that sets up the
> partial checksum offset for IP packets (and optionally recalculates
> the pseudo-header checksum).
> The implementation was previously private and duplicated between
> xen-netback and xen-netfront; however, it is not Xen-specific and is
> potentially useful to any network driver.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: David Miller <davem@davemloft.net>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Veaceslav Falico <vfalico@redhat.com>
> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
> Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>

Ping?

  Paul

> ---
>  include/linux/skbuff.h |    2 +
>  net/core/skbuff.c      |  273
> ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 275 insertions(+)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index d97f2d0..48b7605 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -2893,6 +2893,8 @@ static inline void skb_checksum_none_assert(const
> struct sk_buff *skb)
> 
>  bool skb_partial_csum_set(struct sk_buff *skb, u16 start, u16 off);
> 
> +int skb_checksum_setup(struct sk_buff *skb, bool recalculate);
> +
>  u32 __skb_get_poff(const struct sk_buff *skb);
> 
>  /**
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 1d641e7..15057d2 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -65,6 +65,7 @@
>  #include <net/dst.h>
>  #include <net/sock.h>
>  #include <net/checksum.h>
> +#include <net/ip6_checksum.h>
>  #include <net/xfrm.h>
> 
>  #include <asm/uaccess.h>
> @@ -3549,6 +3550,278 @@ bool skb_partial_csum_set(struct sk_buff *skb,
> u16 start, u16 off)
>  }
>  EXPORT_SYMBOL_GPL(skb_partial_csum_set);
> 
> +static int skb_maybe_pull_tail(struct sk_buff *skb, unsigned int len,
> +			       unsigned int max)
> +{
> +	if (skb_headlen(skb) >= len)
> +		return 0;
> +
> +	/* If we need to pullup then pullup to the max, so we
> +	 * won't need to do it again.
> +	 */
> +	if (max > skb->len)
> +		max = skb->len;
> +
> +	if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL)
> +		return -ENOMEM;
> +
> +	if (skb_headlen(skb) < len)
> +		return -EPROTO;
> +
> +	return 0;
> +}
> +
> +/* This value should be large enough to cover a tagged ethernet header
> plus
> + * maximally sized IP and TCP or UDP headers.
> + */
> +#define MAX_IP_HDR_LEN 128
> +
> +static int skb_checksum_setup_ip(struct sk_buff *skb, bool recalculate)
> +{
> +	unsigned int off;
> +	bool fragment;
> +	int err;
> +
> +	fragment = false;
> +
> +	err = skb_maybe_pull_tail(skb,
> +				  sizeof(struct iphdr),
> +				  MAX_IP_HDR_LEN);
> +	if (err < 0)
> +		goto out;
> +
> +	if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF))
> +		fragment = true;
> +
> +	off = ip_hdrlen(skb);
> +
> +	err = -EPROTO;
> +
> +	if (fragment)
> +		goto out;
> +
> +	switch (ip_hdr(skb)->protocol) {
> +	case IPPROTO_TCP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct tcphdr),
> +					  MAX_IP_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct tcphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			tcp_hdr(skb)->check =
> +				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
> +						   ip_hdr(skb)->daddr,
> +						   skb->len - off,
> +						   IPPROTO_TCP, 0);
> +		break;
> +	case IPPROTO_UDP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct udphdr),
> +					  MAX_IP_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct udphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			udp_hdr(skb)->check =
> +				~csum_tcpudp_magic(ip_hdr(skb)->saddr,
> +						   ip_hdr(skb)->daddr,
> +						   skb->len - off,
> +						   IPPROTO_UDP, 0);
> +		break;
> +	default:
> +		goto out;
> +	}
> +
> +	err = 0;
> +
> +out:
> +	return err;
> +}
> +
> +/* This value should be large enough to cover a tagged ethernet header
> plus
> + * an IPv6 header, all options, and a maximal TCP or UDP header.
> + */
> +#define MAX_IPV6_HDR_LEN 256
> +
> +#define OPT_HDR(type, skb, off) \
> +	(type *)(skb_network_header(skb) + (off))
> +
> +static int skb_checksum_setup_ipv6(struct sk_buff *skb, bool recalculate)
> +{
> +	int err;
> +	u8 nexthdr;
> +	unsigned int off;
> +	unsigned int len;
> +	bool fragment;
> +	bool done;
> +
> +	fragment = false;
> +	done = false;
> +
> +	off = sizeof(struct ipv6hdr);
> +
> +	err = skb_maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN);
> +	if (err < 0)
> +		goto out;
> +
> +	nexthdr = ipv6_hdr(skb)->nexthdr;
> +
> +	len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len);
> +	while (off <= len && !done) {
> +		switch (nexthdr) {
> +		case IPPROTO_DSTOPTS:
> +		case IPPROTO_HOPOPTS:
> +		case IPPROTO_ROUTING: {
> +			struct ipv6_opt_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct ipv6_opt_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
> +			nexthdr = hp->nexthdr;
> +			off += ipv6_optlen(hp);
> +			break;
> +		}
> +		case IPPROTO_AH: {
> +			struct ip_auth_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct ip_auth_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct ip_auth_hdr, skb, off);
> +			nexthdr = hp->nexthdr;
> +			off += ipv6_authlen(hp);
> +			break;
> +		}
> +		case IPPROTO_FRAGMENT: {
> +			struct frag_hdr *hp;
> +
> +			err = skb_maybe_pull_tail(skb,
> +						  off +
> +						  sizeof(struct frag_hdr),
> +						  MAX_IPV6_HDR_LEN);
> +			if (err < 0)
> +				goto out;
> +
> +			hp = OPT_HDR(struct frag_hdr, skb, off);
> +
> +			if (hp->frag_off & htons(IP6_OFFSET | IP6_MF))
> +				fragment = true;
> +
> +			nexthdr = hp->nexthdr;
> +			off += sizeof(struct frag_hdr);
> +			break;
> +		}
> +		default:
> +			done = true;
> +			break;
> +		}
> +	}
> +
> +	err = -EPROTO;
> +
> +	if (!done || fragment)
> +		goto out;
> +
> +	switch (nexthdr) {
> +	case IPPROTO_TCP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct tcphdr),
> +					  MAX_IPV6_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct tcphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			tcp_hdr(skb)->check =
> +				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
> +						 &ipv6_hdr(skb)->daddr,
> +						 skb->len - off,
> +						 IPPROTO_TCP, 0);
> +		break;
> +	case IPPROTO_UDP:
> +		err = skb_maybe_pull_tail(skb,
> +					  off + sizeof(struct udphdr),
> +					  MAX_IPV6_HDR_LEN);
> +		if (err < 0)
> +			goto out;
> +
> +		if (!skb_partial_csum_set(skb, off,
> +					  offsetof(struct udphdr, check))) {
> +			err = -EPROTO;
> +			goto out;
> +		}
> +
> +		if (recalculate)
> +			udp_hdr(skb)->check =
> +				~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
> +						 &ipv6_hdr(skb)->daddr,
> +						 skb->len - off,
> +						 IPPROTO_UDP, 0);
> +		break;
> +	default:
> +		goto out;
> +	}
> +
> +	err = 0;
> +
> +out:
> +	return err;
> +}
> +
> +/**
> + * skb_checksum_setup - set up partial checksum offset
> + * @skb: the skb to set up
> + * @recalculate: if true the pseudo-header checksum will be recalculated
> + */
> +int skb_checksum_setup(struct sk_buff *skb, bool recalculate)
> +{
> +	int err;
> +
> +	switch (skb->protocol) {
> +	case htons(ETH_P_IP):
> +		err = skb_checksum_setup_ip(skb, recalculate);
> +		break;
> +
> +	case htons(ETH_P_IPV6):
> +		err = skb_checksum_setup_ipv6(skb, recalculate);
> +		break;
> +
> +	default:
> +		err = -EPROTO;
> +		break;
> +	}
> +
> +	return err;
> +}
> +EXPORT_SYMBOL(skb_checksum_setup);
> +
>  void __skb_warn_lro_forwarding(const struct sk_buff *skb)
>  {
>  	net_warn_ratelimited("%s: received packets cannot be forwarded
> while LRO is enabled\n",
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:30:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fjH-0000zA-Nd; Mon, 13 Jan 2014 11:30:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W2fjF-0000ys-MZ
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:30:49 +0000
Received: from [85.158.137.68:7948] by server-1.bemta-3.messagelabs.com id
	88/7B-29598-86EC3D25; Mon, 13 Jan 2014 11:30:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389612646!5116821!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9204 invoked from network); 13 Jan 2014 11:30:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:30:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92261363"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:30:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:30:46 -0500
Message-ID: <52D3CE64.7060303@citrix.com>
Date: Mon, 13 Jan 2014 11:30:44 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52D3B6A9.2090003@citrix.com>
	<52D3D86C0200007800112F92@nat28.tlf.novell.com>
In-Reply-To: <52D3D86C0200007800112F92@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 11:13, Jan Beulich wrote:
>>>> On 13.01.14 at 10:49, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
>> "x86: map portion of kexec crash area that is within the direct map
>> area" to staging-4.3 ASAP, as following the backport of
>> 8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
>> kernel area", kexec loading is broken in exactly the same way as it was
>> in staging.
> 
> Not without explaining how it is broken: According to my own
> checking as well as Daniel's there was no need for the kexec area
> to be mapped at all in the old implementation.
> 
> Furthermore, I'd prefer to revert 8d611a00 (dff90d0c on 4.2)
> instead if there really is an issue. That's largely because, as
> noted in the extra comment I added to 0896bd8b, the change is
> still incomplete (and hence not much better than the code with
> Daniel's change reverted).

In that comment it says:

  "That's primarily because map_domain_page()
   (needed when the area is outside the direct mapping range) may be
   unusable when wanting to kexec due to a crash,"

But map_domain_page() is only used when loading the crash image, not
during the kexec itself.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:30:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fjF-0000yt-BX; Mon, 13 Jan 2014 11:30:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2fjE-0000yn-08
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:30:48 +0000
Received: from [85.158.139.211:31212] by server-10.bemta-5.messagelabs.com id
	D8/A5-01405-76EC3D25; Mon, 13 Jan 2014 11:30:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389612646!9208234!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13527 invoked from network); 13 Jan 2014 11:30:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:30:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:30:45 +0000
Message-Id: <52D3DC730200007800112FF6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:30:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
In-Reply-To: <1389607803.8187.22.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	jun.nakajima@intel.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 11:10, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-10 at 13:41 -0500, Konrad Rzeszutek Wilk wrote:
>> and this is the only way to get hugepages working
>> (for reference here is what I have in the guest command line:
>> "default_hugepagesz=1G hugepagesz=1G hugepages=2 hugepagesz=2M
>> hugepages=100" and without this cpuid I get:
>>  hugepagesz: Unsupported page size 1024 M 
>> 
>> because the 1GB flag is not exposed.
>> 
>> I was wondering why we don't check the host cpuid and set the 1GB
>> flag (pdpe1gb) for those cases? It seems that all the pieces are in
>> place for this to work?
> 
> The hypervisor currently contains :
>         /* Hide 1GB-superpage feature if we can't emulate it. */
>         if (!hvm_pse1gb_supported(d))
>             *edx &= ~cpufeat_mask(X86_FEATURE_PAGE1GB);
> 
> So I think it would be safe to enable this by default in userspace and
> let the hypervisor clear it if unsafe. But TBH the whole cpuid policy is
> a mystery to me. Can you cc some likely hypervisor side people on your
> patch for confirmation please.

In fact I can't see where this would be forced off: xc_cpuid_x86.c
only does so in the PV case, and all hvm_pse1gb_supported() checks is
that the CPU supports it and that the domain uses HAP.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fkR-000193-77; Mon, 13 Jan 2014 11:32:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2fkP-00018s-Ut
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:32:02 +0000
Received: from [85.158.137.68:23076] by server-1.bemta-3.messagelabs.com id
	12/BD-29598-1BEC3D25; Mon, 13 Jan 2014 11:32:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389612718!8737420!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8994 invoked from network); 13 Jan 2014 11:32:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:32:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92261756"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:31:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 06:31:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2fkK-0007sI-QT;
	Mon, 13 Jan 2014 11:31:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2fkK-00053v-EJ;
	Mon, 13 Jan 2014 11:31:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21203.52907.856721.125370@mariner.uk.xensource.com>
Date: Mon, 13 Jan 2014 11:31:55 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389608732.13654.5.camel@kazak.uk.xensource.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>
	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>, Ross Philipson <ross.philipson@citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
> > On 01/10/2014 10:03 AM, Ian Campbell wrote:
> > > Fitting in with libxl proper would require the API to look a certain way
> > > (take a context etc), so perhaps libxlu would be more appropriate,
> > > alongside the disk syntax parser etc?
> > 
> > Possibly. I looked at that back then (and today again) and it seemed to 
> > all be related to parsing things into XLU_Config objects. I guess I did 
> > not have a good feel for what libxlu was supposed to be. If it is 
> > supposed to be a generic library of auxiliary toolstack functionality 
> > then I think it would be a good place.
> 
> I think that (aux functionality) was the intention -- the fact that it
> is all parsing stuff right now is just a coincidence.
> 
> (Ian J: right?)

Yes, that's right.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>, Ross Philipson <ross.philipson@citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
> > On 01/10/2014 10:03 AM, Ian Campbell wrote:
> > > Fitting in with libxl proper would require the API to look a certain way
> > > (take a context etc), so perhaps libxlu would be more appropriate,
> > > alongside the disk syntax parser etc?
> > 
> > Possibly. I looked at that back then (and today again) and it seemed to 
> > all be related to parsing things into XLU_Config objects. I guess I did 
> > not have a good feel for what libxlu was supposed to be. If it is 
> > supposed to be a generic library of auxiliary toolstack functionality 
> > then I think it would be a good place.
> 
> I think that (aux functionality) was the intention -- the fact that it
> is all parsing stuff right now is just a coincidence.
> 
> (Ian J: right?)

Yes, that's right.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:36:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fou-0001PD-VM; Mon, 13 Jan 2014 11:36:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2fot-0001P8-GA
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:36:39 +0000
Received: from [85.158.139.211:5243] by server-9.bemta-5.messagelabs.com id
	11/EB-15098-6CFC3D25; Mon, 13 Jan 2014 11:36:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389612998!8186790!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25230 invoked from network); 13 Jan 2014 11:36:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:36:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:36:37 +0000
Message-Id: <52D3DDD20200007800113009@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:36:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <52D3B6A9.2090003@citrix.com>
	<52D3D86C0200007800112F92@nat28.tlf.novell.com>
	<52D3CE64.7060303@citrix.com>
In-Reply-To: <52D3CE64.7060303@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 12:30, David Vrabel <david.vrabel@citrix.com> wrote:
> On 13/01/14 11:13, Jan Beulich wrote:
>>>>> On 13.01.14 at 10:49, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
>>> "x86: map portion of kexec crash area that is within the direct map
>>> area" to staging-4.3 ASAP, as following the backport of
>>> 8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
>>> kernel area", kexec loading is broken in exactly the same way as it was
>>> in staging.
>> 
>> Not without explaining how it is broken: According to my own
>> checking as well as Daniel's there was no need for the kexec area
>> to be mapped at all in the old implementation.
>> 
>> Furthermore, I'd prefer to revert 8d611a00 (dff90d0c on 4.2)
>> instead if there really is an issue. That's largely because, as
>> noted in the extra comment I added to 0896bd8b, the change is
>> still incomplete (and hence not much better than the code with
>> Daniel's change reverted).
> 
> In that comment it says:
> 
>   "That's primarily because map_domain_page()
>    (needed when the area is outside the direct mapping range) may be
>    unusable when wanting to kexec due to a crash,"
> 
> But map_domain_page() is only used when loading the crash image, not
> during kexec.

That's good to know, but I don't think it's a good idea either. Seeing
that map_domain_page() is being used in the code _at all_ may
make someone try to use it in a path that is used during kexec. And
it being used by other functions in the same file may also result in
one of those functions suddenly also getting used on a kexec path.

Together with the issue of the area potentially ending up in a
PFN compression hole, to me this is enough reason to still expect
that these uses get dropped again.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fql-0001ap-FV; Mon, 13 Jan 2014 11:38:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2fqj-0001ak-Sa
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:38:34 +0000
Received: from [85.158.143.35:36616] by server-2.bemta-4.messagelabs.com id
	85/D9-11386-930D3D25; Mon, 13 Jan 2014 11:38:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389613111!11305086!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24879 invoked from network); 13 Jan 2014 11:38:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:38:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92263090"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:38:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:38:30 -0500
Message-ID: <1389613109.13654.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 13 Jan 2014 11:38:29 +0000
In-Reply-To: <52D3DC730200007800112FF6@nat28.tlf.novell.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
	<52D3DC730200007800112FF6@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	ian.jackson@eu.citrix.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 11:30 +0000, Jan Beulich wrote:
> In fact I can't see where this would be forced off: xc_cpuid_x86.c
> only does so in the PV case, and all hvm_pse1gb_supported() checks
> is that the CPU supports it and the domain uses HAP.

Took me a while to spot it too:
static void intel_xc_cpuid_policy(
[...]
            case 0x80000001: {
                int is_64bit = hypervisor_is_64bit(xch) && is_pae;
        
                /* Only a few features are advertised in Intel's 0x80000001. */
                regs[2] &= (is_64bit ? bitmaskof(X86_FEATURE_LAHF_LM) : 0) |
                                       bitmaskof(X86_FEATURE_ABM);
                regs[3] &= ((is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
                            (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
                            (is_64bit ? bitmaskof(X86_FEATURE_SYSCALL) : 0) |
                            (is_64bit ? bitmaskof(X86_FEATURE_RDTSCP) : 0));
                break;
            }
        
        
Which masks anything which is not explicitly mentioned. (PAGE1GB is in
regs[3], I think).

The AMD version is more permissive:

        regs[3] &= (0x0183f3ff | /* features shared with 0x00000001:EDX */
                    (is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
                    (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
                    bitmaskof(X86_FEATURE_SYSCALL) |
                    bitmaskof(X86_FEATURE_MP) |
                    bitmaskof(X86_FEATURE_MMXEXT) |
                    bitmaskof(X86_FEATURE_FFXSR) |
                    bitmaskof(X86_FEATURE_3DNOW) |
                    bitmaskof(X86_FEATURE_3DNOWEXT));

(but I didn't check if PAGE1GB is in that magic number...)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:39:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2frR-0001mt-TD; Mon, 13 Jan 2014 11:39:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2frQ-0001kq-W7
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:39:17 +0000
Received: from [85.158.143.35:43006] by server-1.bemta-4.messagelabs.com id
	45/A4-02132-260D3D25; Mon, 13 Jan 2014 11:39:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389613154!11359667!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8709 invoked from network); 13 Jan 2014 11:39:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:39:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:39:13 +0000
Message-Id: <52D3DE6E020000780011301D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:39:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
	<1389391020-14476-2-git-send-email-dslutz@verizon.com>
In-Reply-To: <1389391020-14476-2-git-send-email-dslutz@verizon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v4 1/3] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.01.14 at 22:56, Don Slutz <dslutz@verizon.com> wrote:

I don't see why you included this in the resend - it got applied
earlier on Friday.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:45:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fxc-0002GV-O1; Mon, 13 Jan 2014 11:45:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W2fxb-0002GQ-E2
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:45:39 +0000
Received: from [85.158.143.35:41897] by server-2.bemta-4.messagelabs.com id
	F2/D6-11386-2E1D3D25; Mon, 13 Jan 2014 11:45:38 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389613537!11307244!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20888 invoked from network); 13 Jan 2014 11:45:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:45:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92264443"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:45:35 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W2fxX-0000gb-P6;
	Mon, 13 Jan 2014 11:45:35 +0000
Date: Mon, 13 Jan 2014 11:45:35 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140113114535.GE5698@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<20140113105413.GB5698@zion.uk.xensource.com>
	<1389611480.13654.33.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389611480.13654.33.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, 2014 at 11:11:20AM +0000, Ian Campbell wrote:
> On Mon, 2014-01-13 at 10:54 +0000, Wei Liu wrote:
> > Any more comments on code logic and / or the location of the new
> > snippet? Should I send a new version with code comment fixed?
> > [...] 
> > > +    /* If target_memkb is smaller than max_memkb, the subsequent call
> > > +     * to libxc when building HVM domain will enable PoD mode.
> > > +     */
> > > +    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
> > > +        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
> 
> I suppose this corresponds to exactly when PoD would be enabled?
> 

Yes. In libxc if maxmem > target_mem then pod_mode is set. See
tools/libxc/xc_hvm_build_x86.c:setup_guest.

> I'm happy with the patch, and it would be better to go ahead now than to
> wait for George (with his PoD hat in place) to confirm/deny, so please
> resend with the improved comment.

Will do.

Wei.

> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:47:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fzW-0002MO-8O; Mon, 13 Jan 2014 11:47:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2fzU-0002MH-Q0
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:47:37 +0000
Received: from [85.158.137.68:13192] by server-15.bemta-3.messagelabs.com id
	60/8D-11556-852D3D25; Mon, 13 Jan 2014 11:47:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389613653!7622335!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21506 invoked from network); 13 Jan 2014 11:47:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:47:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="92264676"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 11:47:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:47:32 -0500
Message-ID: <1389613651.13654.45.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 13 Jan 2014 11:47:31 +0000
In-Reply-To: <20140113114535.GE5698@zion.uk.xensource.com>
References: <1389353262-27668-1-git-send-email-wei.liu2@citrix.com>
	<20140113105413.GB5698@zion.uk.xensource.com>
	<1389611480.13654.33.camel@kazak.uk.xensource.com>
	<20140113114535.GE5698@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: disallow PCI device assignment for
 HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 11:45 +0000, Wei Liu wrote:
> On Mon, Jan 13, 2014 at 11:11:20AM +0000, Ian Campbell wrote:
> > On Mon, 2014-01-13 at 10:54 +0000, Wei Liu wrote:
> > > Any more comments on code logic and / or the location of the new
> > > snippet? Should I send a new version with code comment fixed?
> > > [...] 
> > > > +    /* If target_memkb is smaller than max_memkb, the subsequent call
> > > > +     * to libxc when building HVM domain will enable PoD mode.
> > > > +     */
> > > > +    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
> > > > +        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
> > 
> > I suppose this corresponds to exactly when PoD would be enabled?
> > 
> 
> Yes. In libxc if maxmem > target_mem then pod_mode is set. See
> tools/libxc/xc_hvm_build_x86.c:setup_guest.

Putting this into a common place might be a nice future cleanup...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:47:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2fzo-0002Pc-1u; Mon, 13 Jan 2014 11:47:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2fzm-0002PN-Pp
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:47:54 +0000
Received: from [85.158.143.35:61403] by server-3.bemta-4.messagelabs.com id
	C8/1F-32360-A62D3D25; Mon, 13 Jan 2014 11:47:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389613673!11360232!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28093 invoked from network); 13 Jan 2014 11:47:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:47:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:47:53 +0000
Message-Id: <52D3E0770200007800113055@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:47:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <boris.ostrovsky@oracle.com>
References: <1389396003-32647-1-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389396003-32647-1-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] compat: Allow CHECK_FIELD_COMMON_ macro
 deal with fields that are handles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.01.14 at 00:20, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> CHECK_FIELD_COMMON_ and CHECK_FIELD_COMMON macros verify whether the field in
> 64- and 32-bit versions of a structure is at the same offset by comparing
> address of the field in both structures.
> 
> However, if the field itself is a handle, it is either
> 
> typedef struct {
>     <type> *p;
> } __guest_handle_<type>
> 
> or
> 
> typedef struct {
>     compat_ptr_t c;
>     <type> *_[0] __attribute__((__packed__));
> } __compat_handle_<type>
> 
> and compiler will warn that we are trying to compare addresses of different
> structures.
> 
> By casting the addresses to (void *) we will avoid this warning.

Which means you didn't understand the purpose: If a structure
contains a handle, it shouldn't be subject to checking at all, but
would need translation instead.

Jan

> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  xen/include/xen/compat.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/xen/compat.h b/xen/include/xen/compat.h
> index ca60699..6de2d14 100644
> --- a/xen/include/xen/compat.h
> +++ b/xen/include/xen/compat.h
> @@ -158,14 +158,14 @@ static inline int name(xen_ ## t ## _t *x, compat_ ## t 
> ## _t *c) \
>  { \
>      BUILD_BUG_ON(offsetof(xen_ ## t ## _t, f) != \
>                   offsetof(compat_ ## t ## _t, f)); \
> -    return &x->f == &c->f; \
> +    return (void *)&x->f == (void *)&c->f; \
>  }
>  #define CHECK_FIELD_COMMON_(k, name, n, f) \
>  static inline int name(k xen_ ## n *x, k compat_ ## n *c) \
>  { \
>      BUILD_BUG_ON(offsetof(k xen_ ## n, f) != \
>                   offsetof(k compat_ ## n, f)); \
> -    return &x->f == &c->f; \
> +    return (void *)&x->f == (void *)&c->f; \
>  }
>  
>  #define CHECK_FIELD(t, f) \
> -- 
> 1.8.1.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:51:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2g3M-00038d-Nq; Mon, 13 Jan 2014 11:51:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2g3K-00038Y-TG
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:51:35 +0000
Received: from [85.158.143.35:26401] by server-2.bemta-4.messagelabs.com id
	10/F1-11386-643D3D25; Mon, 13 Jan 2014 11:51:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389613893!11363260!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7347 invoked from network); 13 Jan 2014 11:51:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:51:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:51:32 +0000
Message-Id: <52D3E1500200007800113058@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:51:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
	<52D3DC730200007800112FF6@nat28.tlf.novell.com>
	<1389613109.13654.43.camel@kazak.uk.xensource.com>
In-Reply-To: <1389613109.13654.43.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	jun.nakajima@intel.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 12:38, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-13 at 11:30 +0000, Jan Beulich wrote:
>> In fact I can't see where this would be forced off: xc_cpuid_x86.c
>> only does so in the PV case, and all hvm_pse1gb_supported() checks is
>> that the CPU supports it and the domain uses HAP.
> 
> Took me a while to spot it too:
> static void intel_xc_cpuid_policy(
> [...]
>             case 0x80000001: {
>                 int is_64bit = hypervisor_is_64bit(xch) && is_pae;
>         
>                 /* Only a few features are advertised in Intel's 0x80000001. */
>                 regs[2] &= (is_64bit ? bitmaskof(X86_FEATURE_LAHF_LM) : 0) |
>                                        bitmaskof(X86_FEATURE_ABM);
>                 regs[3] &= ((is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_SYSCALL) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_RDTSCP) : 0));
>                 break;
>             }
>         
>         
> Which masks anything which is not explicitly mentioned. (PAGE1GB is in
> regs[3], I think).

Ah, okay. The funs of white listing on HVM vs black listing on PV
again.

> The AMD version is more permissive:
> 
>         regs[3] &= (0x0183f3ff | /* features shared with 0x00000001:EDX */
>                     (is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
>                     (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
>                     bitmaskof(X86_FEATURE_SYSCALL) |
>                     bitmaskof(X86_FEATURE_MP) |
>                     bitmaskof(X86_FEATURE_MMXEXT) |
>                     bitmaskof(X86_FEATURE_FFXSR) |
>                     bitmaskof(X86_FEATURE_3DNOW) |
>                     bitmaskof(X86_FEATURE_3DNOWEXT));
> 
> (but I didn't check if PAGE1GB is in that magic number...)

It's not - it's bit 26.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:51:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2g3M-00038d-Nq; Mon, 13 Jan 2014 11:51:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2g3K-00038Y-TG
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 11:51:35 +0000
Received: from [85.158.143.35:26401] by server-2.bemta-4.messagelabs.com id
	10/F1-11386-643D3D25; Mon, 13 Jan 2014 11:51:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389613893!11363260!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7347 invoked from network); 13 Jan 2014 11:51:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 11:51:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 11:51:32 +0000
Message-Id: <52D3E1500200007800113058@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 11:51:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
	<52D3DC730200007800112FF6@nat28.tlf.novell.com>
	<1389613109.13654.43.camel@kazak.uk.xensource.com>
In-Reply-To: <1389613109.13654.43.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	jun.nakajima@intel.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 12:38, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-13 at 11:30 +0000, Jan Beulich wrote:
>> In fact I can't see where this would be forced off: xc_cpuid_x86.c
>> only does so in the PV case, and all hvm_pse1gb_supported() checks is
>> that the CPU supports it and the domain uses HAP.
> 
> Took me a while to spot it too:
> static void intel_xc_cpuid_policy(
> [...]
>             case 0x80000001: {
>                 int is_64bit = hypervisor_is_64bit(xch) && is_pae;
>         
>                 /* Only a few features are advertised in Intel's 0x80000001. */
>                 regs[2] &= (is_64bit ? bitmaskof(X86_FEATURE_LAHF_LM) : 0) |
>                                        bitmaskof(X86_FEATURE_ABM);
>                 regs[3] &= ((is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_SYSCALL) : 0) |
>                             (is_64bit ? bitmaskof(X86_FEATURE_RDTSCP) : 0));
>                 break;
>             }
>         
>         
> Which masks out anything that is not explicitly mentioned. (PAGE1GB is in
> regs[3], I think).

Ah, okay. The fun of whitelisting on HVM vs. blacklisting on PV,
again.

> The AMD version is more permissive:
> 
>         regs[3] &= (0x0183f3ff | /* features shared with 0x00000001:EDX */
>                     (is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
>                     (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
>                     bitmaskof(X86_FEATURE_SYSCALL) |
>                     bitmaskof(X86_FEATURE_MP) |
>                     bitmaskof(X86_FEATURE_MMXEXT) |
>                     bitmaskof(X86_FEATURE_FFXSR) |
>                     bitmaskof(X86_FEATURE_3DNOW) |
>                     bitmaskof(X86_FEATURE_3DNOWEXT));
> 
> (but I didn't check if PAGE1GB is in that magic number...)

It's not - it's bit 26.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 11:52:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 11:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2g4I-0003Eg-6B; Mon, 13 Jan 2014 11:52:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W2g4G-0003EV-Pw
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:52:33 +0000
Received: from [193.109.254.147:48579] by server-6.bemta-14.messagelabs.com id
	AA/51-14958-083D3D25; Mon, 13 Jan 2014 11:52:32 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389613950!10506602!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14400 invoked from network); 13 Jan 2014 11:52:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 11:52:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,652,1384300800"; d="scan'208";a="90135866"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 11:52:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 06:52:28 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W2g4C-0000nx-SV;
	Mon, 13 Jan 2014 11:52:28 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 13 Jan 2014 11:52:28 +0000
Message-ID: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment for
	HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

This change is restricted to HVM guests, as only HVM is relevant in the
Xend counterpart. We're late in the release cycle, so the change should
only do what's necessary. We can revisit this if we need to do the same
thing for PV guests in the future.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: fix comment
---
 tools/libxl/libxl_create.c |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e03bb55..61437de 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     uint32_t domid;
     int i, ret;
+    bool pod_enabled = false;
 
     /* convenience aliases */
     libxl_domain_config *const d_config = dcs->guest_config;
@@ -714,6 +715,25 @@ static void initiate_domain_create(libxl__egc *egc,
 
     domid = 0;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building HVM domain will enable PoD mode.
+     */
+    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
+
+    /* We cannot have PoD and PCI device assignment at the same time
+     * for an HVM guest. It was reported that the IOMMU cannot work
+     * with PoD enabled, because it needs the entire page table
+     * populated for the guest. To stay on the safe side, we disable
+     * PCI device assignment when PoD is enabled.
+     */
+    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        ret = ERROR_INVAL;
+        LOG(ERROR, "PCI device assignment for HVM guest failed due to PoD enabled");
+        goto error_out;
+    }
+
     ret = libxl__domain_create_info_setdefault(gc, &d_config->c_info);
     if (ret) goto error_out;
 
-- 
1.7.10.4
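For illustration, a guest configuration along these lines (device address and sizes purely hypothetical) is what the new check rejects: memory < maxmem enables PoD for an HVM guest, while pci= requests device assignment:

```
builder = "hvm"
memory  = 512        # target_memkb < max_memkb => PoD enabled
maxmem  = 1024
pci     = [ '0000:01:00.0' ]   # now refused in combination with PoD
```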


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 12:03:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 12:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2gEM-00040t-TA; Mon, 13 Jan 2014 12:02:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W2fYl-0008Nh-V1
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:20:00 +0000
Received: from [85.158.137.68:58133] by server-10.bemta-3.messagelabs.com id
	E8/7D-23989-FDBC3D25; Mon, 13 Jan 2014 11:19:59 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389611996!7619830!1
X-Originating-IP: [216.109.115.31]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_6,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29980 invoked from network); 13 Jan 2014 11:19:57 -0000
Received: from nm44-vm7.bullet.mail.bf1.yahoo.com (HELO
	nm44-vm7.bullet.mail.bf1.yahoo.com) (216.109.115.31)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 11:19:57 -0000
Received: from [66.196.81.170] by nm44.bullet.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
Received: from [98.139.212.197] by tm16.bullet.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
Received: from [127.0.0.1] by omp1006.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 118344.40150.bm@omp1006.mail.bf1.yahoo.com
Received: (qmail 30170 invoked by uid 60001); 13 Jan 2014 11:19:55 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389611995; bh=cnhnqqWBFhtiBueGpGXYiQPtGQdAQqOmpRq/a1BUXXA=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=ZDL46vzbGJ9N8XertubiGOwyQn/GKNRXo3auauALM5n4rnMV8zFzXJZ9EhG5E/o7drGE2aiVA1mE1dfRrlBumgNnOYAl+aMZLerWRzbPT+z6K3Mtzs95rzFFF5PUf/EQQ03EuPwYZ7/0Fj7IJDw9i5LOuwfRnSIPW68C5NDx08A=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=krAADyBNc+xb7ZmUxCE9r6paFVV8RFM9eairXRKjzFL5HzTcDRG+HIYz8tvSv7oMa2CFjmrQUE1JlLz8V/MsGHxyAf6BMc//a0WSYmJPre1NYaHlABUI/0WfDiv/zxFupeY44ZswRsG6qUwhbrQIOd90JCoMOJZXeuWw5L+viOk=;
X-YMail-OSG: nvpi3SAVM1m.T_3mpkiM5pNq7q1bbfBI9DrjzBMtQLhlZ.H
	M.VoZBx7kC9sn3z32fVY2NS3JSLQK5qZ.OEhM.s6.AzLoKVkJdgrCTyLKpYt
	v45zefqN.v4fxPo4Cw8rRoFWJShuH0HJo_.fjNk5r7F4QgkjSm9tRYBk8IVZ
	6LGJRBAbUsSHQ0zRBILVWA7hxFjpa0EQpIm18M.pKiFiMjAshVJyyJka.POJ
	8Nnz9G7G0Xc2FwhjIPvnv1l03VQdFRpqk85d40pZ2wrvOcxHU07SSI9la52P
	l9QOLNu9apKA_RnTUZoQhJOCJxHbT5SpuM.CzjDileI1joRREW.EEWFxv9Um
	vi6YJcouxy.3Gh2HZ5caC5rzUq.gATlt8.Z3v7eGqSIo0BM9PUqWlVj9bvZK
	aMwOFyDy273FP6PtoKF3mL4U7di8PedADH9V4D6e0DDUQ9T2N_6o388N.lGR
	c.6hn6yS3AZEIgh8ft2kae29BPW.2OXPvIUwderARRilbTfP8o30LClGlEZf
	OPBfPR6JcLJ8knUyXRyUTqL4_xQbOHOf0PMopFMc-
Received: from [212.50.248.7] by web160201.mail.bf1.yahoo.com via HTTP;
	Mon, 13 Jan 2014 03:19:55 PST
X-Rocket-MIMEInfo: 002.001,
	RG93bnRpbWUgaXQgbWVhbidzIFRoZSB0b3RhbCBkb3dudGltZSBjb25zaXN0cyBvZiB0aGUgdGltZcKgbmVjZXNzYXJ5IHRvIHF1aWVzY2UgdGhlIFZNIG9uIHRoZSBzb3VyY2UsIHRyYW5zZmVyIHRoZcKgZGV2aWNlIHN0YXRlIHRvIHRoZSBkZXN0aW5hdGlvbiwgbG9hZCB0aGUgZGV2aWNlIHN0YXRlLCBhbmTCoGNvcHkgb3ZlciBhbGwgdGhlIHJlbWFpbmluZyBtZW1vcnkgcGFnZXPCoGNvbmN1cnJlbnRseSB3aXRoIGxvYWRpbmcgdGhlIGRldmljZSBzdGF0ZS4KaG93IGkgY2FuIHByaW50IGJpdG1hcCBpbiABMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
	<CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
Message-ID: <1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Date: Mon, 13 Jan 2014 03:19:55 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Kai Huang <dev.kai.huang@gmail.com>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 13 Jan 2014 12:02:57 +0000
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3258935669156897867=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3258935669156897867==
Content-Type: multipart/alternative; boundary="958772500-511405302-1389611995=:25477"

--958772500-511405302-1389611995=:25477
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

By "Downtime" I mean: the total downtime consists of the time necessary to
quiesce the VM on the source, transfer the device state to the destination,
load the device state, and copy over all the remaining memory pages
concurrently with loading the device state.
How can I print the bitmap in the output? Or how can I see the bitmap in
the output?
Is there a particular name for Downtime in xc_domain_save?

How can I see Downtime in the output?

Adel Amani
M.Sc. Candidate@Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Monday, January 13, 2014 4:03 AM, Kai Huang <dev.kai.huang@gmail.com> wrote:
On Sun, Jan 12, 2014 at 1:40 AM, Adel Amani <adel.amani66@yahoo.com> wrote:
> Hello Mr Dunlap
> I really wonder of anybody know answer :-(
> can you help me?!
> I want found time "Downtime" and number "dirty pages" in end migration ...

What do you mean by "Downtime", and "in end migration"? Migration will
take several rounds to transfer dirty pages. If you want to know
exactly how many pages are dirty during each stage, you can get the
dirty page bitmap in xc_domain_save, which will be called for each
live migration stage.

Thanks,
-Kai

> please help me.
> Thanks.
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
> On Saturday, January 11, 2014 12:31 AM, Adel Amani <adel.amani66@yahoo.com>
> wrote:
> Hello
> I do one migration and trace in migration to command "xentrace -D -e all -S
> 256 /test.trace"
> I can get total time migration of command "time migration -l ubuntu11
> 192.168.1.1"  But i don't know how get "Downtime" and "dirty pages" of
> test.trace :-(  or from another way...
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
--958772500-511405302-1389611995=:25477--


--===============3258935669156897867==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3258935669156897867==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 12:03:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 12:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2gED-00040Q-G6; Mon, 13 Jan 2014 12:02:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W2gEB-00040I-Kq
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 12:02:48 +0000
Received: from [85.158.137.68:33773] by server-14.bemta-3.messagelabs.com id
	E7/53-06105-6E5D3D25; Mon, 13 Jan 2014 12:02:46 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389614566!5139061!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12108 invoked from network); 13 Jan 2014 12:02:46 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 12:02:46 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389614566; l=441;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=ZocsjLNpyZ99qj6c9tVBfFajvzo=;
	b=MqpM+ohm2Zl2Qx2/5cgZhGlzCuQCd5FapMzbFxl5WlwrXAFCX0ecylLi+v8uq062kpW
	XK1oT/TplPJy75CLP7q446ZrmfKJhAjCALNjDo3E9y95Cwz8Es9qnjD0ul6OtS4AmMFde
	iSvKudSAiFysj/trtmG1Nks7ujy9KQP7ZUM=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWMTlsoxCss+iCqhAw1vkA==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-094-223-159-122.pools.arcor-ip.net [94.223.159.122])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 503d37q0DC1Xoxn ; 
	Mon, 13 Jan 2014 13:01:33 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id C9BD15025A; Mon, 13 Jan 2014 13:01:31 +0100 (CET)
Date: Mon, 13 Jan 2014 13:01:31 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140113120131.GA15623@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, Jan Beulich wrote:

> You can't do this in one go - the first two and the last one may be
> set independently (and are independent in their meaning), and
> hence need to be queried independently (xenbus_gather() fails
> on the first absent value).

Yes, that's the purpose. Since the properties are required, it's an
all-or-nothing thing. If they are truly optional, then blkif.h should
be updated to say so.
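
The behaviour at issue can be sketched in a few lines. This is plain
Python, not kernel code: the dict stands in for xenstore, and gather()
models only the fail-on-first-absent behaviour Jan attributes to
xenbus_gather(); the key names are the discard properties from the
patch under discussion.

```python
# Toy model of the two xenstore query styles being debated.
# gather() mimics one property of the real xenbus_gather(): it fails
# as soon as any requested key is absent, so reading several keys in
# one call makes them effectively all-or-nothing.

def gather(store, *keys):
    """All-or-nothing read: return None if any key is missing."""
    values = []
    for key in keys:
        if key not in store:
            return None  # first absent value aborts the whole call
        values.append(store[key])
    return values

def read_optional(store, key, default):
    """Independent read with a fallback, for keys that may be set
    independently of one another."""
    return store.get(key, default)

# A backend that sets only one of the three discard properties:
backend = {"discard-granularity": 4096}

# Reading all three in one go fails outright:
assert gather(backend, "discard-granularity",
              "discard-alignment", "discard-secure") is None

# Querying independently picks up whatever is present:
assert read_optional(backend, "discard-granularity", 0) == 4096
assert read_optional(backend, "discard-secure", 0) == 0
```

If the properties really are required, the all-or-nothing gather is the
right tool; if they are optional, only the independent reads work.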

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 12:03:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 12:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2gEM-00040t-TA; Mon, 13 Jan 2014 12:02:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W2fYl-0008Nh-V1
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 11:20:00 +0000
Received: from [85.158.137.68:58133] by server-10.bemta-3.messagelabs.com id
	E8/7D-23989-FDBC3D25; Mon, 13 Jan 2014 11:19:59 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389611996!7619830!1
X-Originating-IP: [216.109.115.31]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_6,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29980 invoked from network); 13 Jan 2014 11:19:57 -0000
Received: from nm44-vm7.bullet.mail.bf1.yahoo.com (HELO
	nm44-vm7.bullet.mail.bf1.yahoo.com) (216.109.115.31)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 11:19:57 -0000
Received: from [66.196.81.170] by nm44.bullet.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
Received: from [98.139.212.197] by tm16.bullet.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
Received: from [127.0.0.1] by omp1006.mail.bf1.yahoo.com with NNFMP;
	13 Jan 2014 11:19:56 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 118344.40150.bm@omp1006.mail.bf1.yahoo.com
Received: (qmail 30170 invoked by uid 60001); 13 Jan 2014 11:19:55 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389611995; bh=cnhnqqWBFhtiBueGpGXYiQPtGQdAQqOmpRq/a1BUXXA=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=ZDL46vzbGJ9N8XertubiGOwyQn/GKNRXo3auauALM5n4rnMV8zFzXJZ9EhG5E/o7drGE2aiVA1mE1dfRrlBumgNnOYAl+aMZLerWRzbPT+z6K3Mtzs95rzFFF5PUf/EQQ03EuPwYZ7/0Fj7IJDw9i5LOuwfRnSIPW68C5NDx08A=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=krAADyBNc+xb7ZmUxCE9r6paFVV8RFM9eairXRKjzFL5HzTcDRG+HIYz8tvSv7oMa2CFjmrQUE1JlLz8V/MsGHxyAf6BMc//a0WSYmJPre1NYaHlABUI/0WfDiv/zxFupeY44ZswRsG6qUwhbrQIOd90JCoMOJZXeuWw5L+viOk=;
X-YMail-OSG: nvpi3SAVM1m.T_3mpkiM5pNq7q1bbfBI9DrjzBMtQLhlZ.H
	M.VoZBx7kC9sn3z32fVY2NS3JSLQK5qZ.OEhM.s6.AzLoKVkJdgrCTyLKpYt
	v45zefqN.v4fxPo4Cw8rRoFWJShuH0HJo_.fjNk5r7F4QgkjSm9tRYBk8IVZ
	6LGJRBAbUsSHQ0zRBILVWA7hxFjpa0EQpIm18M.pKiFiMjAshVJyyJka.POJ
	8Nnz9G7G0Xc2FwhjIPvnv1l03VQdFRpqk85d40pZ2wrvOcxHU07SSI9la52P
	l9QOLNu9apKA_RnTUZoQhJOCJxHbT5SpuM.CzjDileI1joRREW.EEWFxv9Um
	vi6YJcouxy.3Gh2HZ5caC5rzUq.gATlt8.Z3v7eGqSIo0BM9PUqWlVj9bvZK
	aMwOFyDy273FP6PtoKF3mL4U7di8PedADH9V4D6e0DDUQ9T2N_6o388N.lGR
	c.6hn6yS3AZEIgh8ft2kae29BPW.2OXPvIUwderARRilbTfP8o30LClGlEZf
	OPBfPR6JcLJ8knUyXRyUTqL4_xQbOHOf0PMopFMc-
Received: from [212.50.248.7] by web160201.mail.bf1.yahoo.com via HTTP;
	Mon, 13 Jan 2014 03:19:55 PST
X-Rocket-MIMEInfo: 002.001,
	RG93bnRpbWUgaXQgbWVhbidzIFRoZSB0b3RhbCBkb3dudGltZSBjb25zaXN0cyBvZiB0aGUgdGltZcKgbmVjZXNzYXJ5IHRvIHF1aWVzY2UgdGhlIFZNIG9uIHRoZSBzb3VyY2UsIHRyYW5zZmVyIHRoZcKgZGV2aWNlIHN0YXRlIHRvIHRoZSBkZXN0aW5hdGlvbiwgbG9hZCB0aGUgZGV2aWNlIHN0YXRlLCBhbmTCoGNvcHkgb3ZlciBhbGwgdGhlIHJlbWFpbmluZyBtZW1vcnkgcGFnZXPCoGNvbmN1cnJlbnRseSB3aXRoIGxvYWRpbmcgdGhlIGRldmljZSBzdGF0ZS4KaG93IGkgY2FuIHByaW50IGJpdG1hcCBpbiABMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
	<CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
Message-ID: <1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Date: Mon, 13 Jan 2014 03:19:55 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Kai Huang <dev.kai.huang@gmail.com>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 13 Jan 2014 12:02:57 +0000
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3258935669156897867=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3258935669156897867==
Content-Type: multipart/alternative; boundary="958772500-511405302-1389611995=:25477"

--958772500-511405302-1389611995=:25477
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

By "downtime" I mean: the total downtime consists of the time necessary
to quiesce the VM on the source, transfer the device state to the
destination, load the device state, and copy over all the remaining
memory pages concurrently with loading the device state.
How can I print the bitmap in the output, or how can I see the bitmap
in the output?
Is there a particular name for downtime in xc_domain_save?

How can I see the downtime in the output?

Adel Amani
M.Sc. Candidate@Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir



On Monday, January 13, 2014 4:03 AM, Kai Huang <dev.kai.huang@gmail.com> wrote:

On Sun, Jan 12, 2014 at 1:40 AM, Adel Amani <adel.amani66@yahoo.com> wrote:
> Hello Mr Dunlap
> I really wonder of anybody know answer :-(
> can you help me?!
> I want found time "Downtime" and number "dirty pages" in end migration ...

What do you mean by "Downtime", and "in end migration"? Migration will
take several rounds to transfer dirty pages. If you want to know
exactly how many pages are dirty during each stage, you can get the
dirty page bitmap in xc_domain_save, which will be called for each
live migration stage.

Thanks,
-Kai

> please help me.
> Thanks.
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
> On Saturday, January 11, 2014 12:31 AM, Adel Amani <adel.amani66@yahoo.com>
> wrote:
> Hello
> I do one migration and trace in migration to command "xentrace -D -e all -S
> 256 /test.trace"
> I can get total time migration of command "time migration -l ubuntu11
> 192.168.1.1"  But i don't know how get "Downtime" and "dirty pages" of
> test.trace :-(  or from another way...
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

>
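
Kai's point about per-round dirty-page tracking can be sketched with a
small model. This is plain Python and purely illustrative: the numbers
are invented, and the round structure only loosely mirrors the pre-copy
loop that runs around xc_domain_save; it is not Xen code.

```python
# Illustrative pre-copy live-migration model (not Xen code).
# Each round sends the pages dirtied since the last round while the
# guest keeps running and re-dirties some of them. When the dirty set
# is small enough, the VM is paused and the remainder is sent: that
# final pause is the bulk of the "downtime" being asked about.

def precopy_rounds(total_pages, redirty_fraction, threshold, max_rounds=30):
    """Return (per-round dirty page counts, pages sent while paused)."""
    dirty = total_pages          # round 1 sends all of guest memory
    history = []
    for _ in range(max_rounds):
        history.append(dirty)
        if dirty <= threshold:
            break                # small enough: stop-and-copy
        # pages re-dirtied while this round was being transferred
        dirty = max(1, int(dirty * redirty_fraction))
    return history, history[-1]

rounds, stop_copy_pages = precopy_rounds(
    total_pages=100_000, redirty_fraction=0.1, threshold=200)
# With a 10% re-dirty rate the dirty set shrinks geometrically:
# 100000, 10000, 1000, 100 -> the last 100 pages are sent during the pause.
```

In a real trace the per-round counts correspond to the dirty-page
bitmap observed at each xc_domain_save iteration.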
--958772500-511405302-1389611995=:25477--


--===============3258935669156897867==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3258935669156897867==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 12:34:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 12:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2gis-0005i4-Ru; Mon, 13 Jan 2014 12:34:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2giq-0005hz-MC
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 12:34:28 +0000
Received: from [85.158.139.211:17490] by server-14.bemta-5.messagelabs.com id
	D8/DE-24200-35DD3D25; Mon, 13 Jan 2014 12:34:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389616466!6705974!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18115 invoked from network); 13 Jan 2014 12:34:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 12:34:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 12:34:25 +0000
Message-Id: <52D3EB5F02000078001130B5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 12:34:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
In-Reply-To: <20140113120131.GA15623@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
> On Mon, Jan 13, Jan Beulich wrote:
> 
>> You can't do this in one go - the first two and the last one may be
>> set independently (and are independent in their meaning), and
>> hence need to be queried independently (xenbus_gather() fails
>> on the first absent value).
> 
> Yes, that's the purpose. Since the properties are required, it's an
> all-or-nothing thing. If they are truly optional, then blkif.h should
> be updated to say so.

They _are_ optional.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 12:41:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 12:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2gps-0006Ii-TD; Mon, 13 Jan 2014 12:41:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W2gpr-0006Ib-M1
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 12:41:43 +0000
Received: from [85.158.137.68:63782] by server-9.bemta-3.messagelabs.com id
	6F/BD-13104-60FD3D25; Mon, 13 Jan 2014 12:41:42 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389616901!7636724!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10450 invoked from network); 13 Jan 2014 12:41:42 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.162)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 12:41:42 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389616901; l=667;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=r/HXquTGy/+Bo7dPH3/L11Oynvo=;
	b=NHBiSZPUusqcUebLIijxdfAuHaxRjs485KLfrQibmtyRhMH0VgWXfZzsSVw46yl18kn
	kEWImOUbP3oS6DUcgvCrYKdeywBBmj2AHJ++9S+8EgTn3XtmUO8NLqJXEr1Bho1LLa15K
	evcuUVKWyyEVdO52ZikmLaWhXli7pZ0/va8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWMTlsoxCss+iCqhAw1vkA==
X-RZG-CLASS-ID: mo00
Received: from probook.site
	(dslb-094-223-159-122.pools.arcor-ip.net [94.223.159.122])
	by smtp.strato.de (RZmta 32.17 DYNA|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id e02445q0DCeTyM1 ; 
	Mon, 13 Jan 2014 13:40:29 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 25C8E5025A; Mon, 13 Jan 2014 13:40:29 +0100 (CET)
Date: Mon, 13 Jan 2014 13:40:29 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140113124029.GA19027@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3EB5F02000078001130B5@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, Jan Beulich wrote:

> >>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
> > On Mon, Jan 13, Jan Beulich wrote:
> > 
> >> You can't do this in one go - the first two and the last one may be
> >> set independently (and are independent in their meaning), and
> >> hence need to be queried independently (xenbus_gather() fails
> >> on the first absent value).
> > 
> > Yes, that's the purpose. Since the properties are required, it's an
> > all-or-nothing thing. If they are truly optional, then blkif.h should
> > be updated to say so.
> 
> They _are_ optional.

In this case my first patch is correct and should be used.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, Jan Beulich wrote:

> >>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
> > On Mon, Jan 13, Jan Beulich wrote:
> > 
> >> You can't do this in one go - the first two and the last one may be
> >> set independently (and are independent in their meaning), and
> >> hence need to be queried independently (xenbus_gather() fails
> >> on the first absent value).
> > 
> > Yes, thats the purpose. Since the properties are required its an all or
> > nothing thing. If they are truly optional then blkif.h should be updated
> > to say that.
> 
> They _are_ optional.

In this case my first patch is correct and should be used.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:01:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2h8W-0007N1-OF; Mon, 13 Jan 2014 13:01:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2h8V-0007Mw-PE
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:00:59 +0000
Received: from [193.109.254.147:35125] by server-10.bemta-14.messagelabs.com
	id 00/09-20752-B83E3D25; Mon, 13 Jan 2014 13:00:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389618057!8204720!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7129 invoked from network); 13 Jan 2014 13:00:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 13:00:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92283449"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 13:00:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 08:00:55 -0500
Message-ID: <1389618054.13654.57.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 13 Jan 2014 13:00:54 +0000
In-Reply-To: <52D3EB5F02000078001130B5@nat28.tlf.novell.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: boris.ostrovsky@oracle.com, Olaf Hering <olaf@aepfle.de>,
	xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
> >>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
> > On Mon, Jan 13, Jan Beulich wrote:
> > 
> >> You can't do this in one go - the first two and the last one may be
> >> set independently (and are independent in their meaning), and
> >> hence need to be queried independently (xenbus_gather() fails
> >> on the first absent value).
> > 
> > Yes, thats the purpose. Since the properties are required its an all or
> > nothing thing. If they are truly optional then blkif.h should be updated
> > to say that.
> 
> They _are_ optional.

But is it true that either they are all present or they are all absent?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:16:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:16:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hNU-0007yB-FZ; Mon, 13 Jan 2014 13:16:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2hNS-0007y6-VK
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:16:27 +0000
Received: from [85.158.143.35:18366] by server-3.bemta-4.messagelabs.com id
	A0/EE-32360-A27E3D25; Mon, 13 Jan 2014 13:16:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389618985!11314661!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18399 invoked from network); 13 Jan 2014 13:16:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 13:16:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 13:16:25 +0000
Message-Id: <52D3F535020000780011311B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 13:16:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
In-Reply-To: <1389618054.13654.57.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	Olaf Hering <olaf@aepfle.de>, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 14:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
>> >>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
>> > On Mon, Jan 13, Jan Beulich wrote:
>> > 
>> >> You can't do this in one go - the first two and the last one may be
>> >> set independently (and are independent in their meaning), and
>> >> hence need to be queried independently (xenbus_gather() fails
>> >> on the first absent value).
>> > 
>> > Yes, thats the purpose. Since the properties are required its an all or
>> > nothing thing. If they are truly optional then blkif.h should be updated
>> > to say that.
>> 
>> They _are_ optional.
> 
> But is it true that either they are all present or they are all absent?

No, it's not. discard-secure is independent of the other two (but
those other two are tied together).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:29:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hZg-0000aK-Ue; Mon, 13 Jan 2014 13:29:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2hZg-0000aF-5M
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 13:29:04 +0000
Received: from [193.109.254.147:57547] by server-2.bemta-14.messagelabs.com id
	87/90-00361-F1AE3D25; Mon, 13 Jan 2014 13:29:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389619742!7029491!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29724 invoked from network); 13 Jan 2014 13:29:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 13:29:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 13:29:01 +0000
Message-Id: <52D3F82A020000780011313A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 13:28:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 05/16] x86/VPMU: Handle APIC_LVTPC
	accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Update APIC_LVTPC vector when HVM guest writes to it.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
>  xen/arch/x86/hvm/vlapic.c         |  5 ++++-
>  xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
>  xen/arch/x86/hvm/vpmu.c           | 16 +++++++++++++---
>  xen/include/asm-x86/hvm/vpmu.h    |  1 +
>  5 files changed, 18 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 842bce7..1f7d6b7 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -290,8 +290,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t 
> msr_content)
>          if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
>              return 1;
>          vpmu_set(vpmu, VPMU_RUNNING);
> -        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
> -        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
>  
>          if ( is_hvm_domain(v->domain) &&
>               !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> @@ -302,8 +300,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t 
> msr_content)
>      if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
>          (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) 
> )
>      {
> -        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
> -        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>          vpmu_reset(vpmu, VPMU_RUNNING);
>          if ( is_hvm_domain(v->domain) &&
>               ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index bc06010..d954f4f 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -38,6 +38,7 @@
>  #include <asm/hvm/support.h>
>  #include <asm/hvm/vmx/vmx.h>
>  #include <asm/hvm/nestedhvm.h>
> +#include <asm/hvm/vpmu.h>
>  #include <public/hvm/ioreq.h>
>  #include <public/hvm/params.h>
>  
> @@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
>              vlapic_adjust_i8259_target(v->domain);
>              pt_may_unmask_irq(v->domain, NULL);
>          }
> -        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
> +        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
>              pt_may_unmask_irq(NULL, &vlapic->pt);
> +        else if ( offset == APIC_LVTPC )
> +            vpmu_lvtpc_update(val);
>          break;
>  
>      case APIC_TMICT:
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c 
> b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 89212ec..7fd2420 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -532,19 +532,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, 
> uint64_t msr_content)
>      else
>          vpmu_reset(vpmu, VPMU_RUNNING);
>  
> -    /* Setup LVTPC in local apic */
> -    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
> -         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
> -    {
> -        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
> -        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
> -    }
> -    else
> -    {
> -        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
> -        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
> -    }
> -
>      if ( type != MSR_TYPE_GLOBAL )
>      {
>          u64 mask;
> @@ -710,10 +697,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs 
> *regs)
>              return 0;
>      }
>  
> -    /* HW sets the MASK bit when performance counter interrupt occurs*/
> -    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
> -    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
> -
>      return 1;
>  }
>  
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index d6a9ff6..2c1201b 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
>      }
>  }
>  
> +void vpmu_lvtpc_update(uint32_t val)
> +{
> +    struct vpmu_struct *vpmu = vcpu_vpmu(current);
> +
> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
> +    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
> +}
> +
>  int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
> @@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
>      case X86_VENDOR_AMD:
>          if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>              opt_vpmu_enabled = 0;
> -        break;
> +        return;
>  
>      case X86_VENDOR_INTEL:
>          if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>              opt_vpmu_enabled = 0;
> -        break;
> +        return;
>  
>      default:
>          printk("VPMU: Initialization failed. "
>                 "Unknown CPU vendor %d\n", vendor);
>          opt_vpmu_enabled = 0;
> -        break;
> +        return;
>      }
> +
> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>  }

So what is this good for? All code paths above use "return" now,
hence how would execution get here?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
>      }
>  }
>  
> +void vpmu_lvtpc_update(uint32_t val)
> +{
> +    struct vpmu_struct *vpmu = vcpu_vpmu(current);
> +
> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
> +    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
> +}
> +
>  int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
> @@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
>      case X86_VENDOR_AMD:
>          if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>              opt_vpmu_enabled = 0;
> -        break;
> +        return;
>  
>      case X86_VENDOR_INTEL:
>          if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>              opt_vpmu_enabled = 0;
> -        break;
> +        return;
>  
>      default:
>          printk("VPMU: Initialization failed. "
>                 "Unknown CPU vendor %d\n", vendor);
>          opt_vpmu_enabled = 0;
> -        break;
> +        return;
>      }
> +
> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>  }

So what is this good for? All code paths above use "return" now,
hence how would execution get here?
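To make the concern concrete, here is a minimal sketch (hypothetical vendor
codes and variable names, not the actual Xen sources) of the control flow
being questioned: once every case of the switch returns, a statement placed
after the switch body can never execute.

```c
#include <assert.h>

/* Hypothetical sketch of vpmu_initialise()'s control flow after the
 * patch: every switch case returns, so the assignment placed after
 * the switch is dead code. */
static int init_marker;

static void vpmu_initialise_sketch(int vendor)
{
    switch ( vendor )
    {
    case 0:                 /* AMD-like path */
        init_marker = 1;
        return;
    case 1:                 /* Intel-like path */
        init_marker = 2;
        return;
    default:                /* unknown vendor */
        init_marker = -1;
        return;
    }
    init_marker = 99;       /* unreachable: no case falls through */
}
```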

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:35:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hfP-0000j4-PA; Mon, 13 Jan 2014 13:34:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2hfO-0000iz-JB
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:34:58 +0000
Received: from [85.158.143.35:50859] by server-2.bemta-4.messagelabs.com id
	2D/41-11386-18BE3D25; Mon, 13 Jan 2014 13:34:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389620096!10098330!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23655 invoked from network); 13 Jan 2014 13:34:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 13:34:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90168086"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 13:34:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 08:34:54 -0500
Message-ID: <1389620093.13654.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 13 Jan 2014 13:34:53 +0000
In-Reply-To: <52D3F535020000780011311B@nat28.tlf.novell.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
	<52D3F535020000780011311B@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	Olaf Hering <olaf@aepfle.de>, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 13:16 +0000, Jan Beulich wrote:
> >>> On 13.01.14 at 14:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
> >> >>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
> >> > On Mon, Jan 13, Jan Beulich wrote:
> >> > 
> >> >> You can't do this in one go - the first two and the last one may be
> >> >> set independently (and are independent in their meaning), and
> >> >> hence need to be queried independently (xenbus_gather() fails
> >> >> on the first absent value).
> >> > 
> >> > Yes, thats the purpose. Since the properties are required its an all or
> >> > nothing thing. If they are truly optional then blkif.h should be updated
> >> > to say that.
> >> 
> >> They _are_ optional.
> > 
> > But is it true that either they are all present or they are all absent?
> 
> No, it's not. discard-secure is independent of the other two (but
> those other two are tied together).

Thanks for clarifying.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

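The failure mode under discussion can be illustrated with a small mock (this
is not the real xenbus API; the store contents and helper names are invented):
a gather-style call that fails on the first absent key loses the values of
keys that were present, which is why independently-set keys must be queried
independently.

```c
#include <assert.h>
#include <string.h>

struct kv { const char *key; int val; };

/* Toy xenstore contents: granularity/alignment are present,
 * discard-secure is absent. */
static const struct kv store[] = {
    { "discard-granularity", 4096 },
    { "discard-alignment", 0 },
};

/* Query a single key; returns 0 on success, -1 if absent. */
static int read_one(const char *key, int *out)
{
    for (unsigned i = 0; i < sizeof(store) / sizeof(store[0]); i++)
        if (!strcmp(store[i].key, key)) { *out = store[i].val; return 0; }
    return -1;
}

/* All-in-one query, mimicking xenbus_gather(): bails out on the first
 * absent key, so the presence of the first two keys is not reported. */
static int gather_all(int *gran, int *align, int *secure)
{
    if (read_one("discard-granularity", gran)) return -1;
    if (read_one("discard-alignment", align)) return -1;
    if (read_one("discard-secure", secure)) return -1;
    return 0;
}
```

Because discard-secure may be absent while the other two keys are present, a
single combined query would wrongly treat discard as unsupported; only the
two keys that are tied together can safely be fetched in one call.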
From xen-devel-bounces@lists.xen.org Mon Jan 13 13:38:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hj7-0001HH-Dw; Mon, 13 Jan 2014 13:38:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1W2hj5-0001HB-JP
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:38:47 +0000
Received: from [85.158.143.35:40028] by server-1.bemta-4.messagelabs.com id
	38/61-02132-66CE3D25; Mon, 13 Jan 2014 13:38:46 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389620325!4254308!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2763 invoked from network); 13 Jan 2014 13:38:46 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-6.tower-21.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 13 Jan 2014 13:38:46 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S1705440AbaAMNi3 (ORCPT <rfc822;xen-devel@lists.xen.org>);
	Mon, 13 Jan 2014 14:38:29 +0100
Date: Mon, 13 Jan 2014 14:38:29 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140113133829.GA20951@router-fw-old.local.net-space.pl>
References: <52D3B6A9.2090003@citrix.com>
	<52D3D86C0200007800112F92@nat28.tlf.novell.com>
	<52D3CE64.7060303@citrix.com>
	<52D3DDD20200007800113009@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3DDD20200007800113009@nat28.tlf.novell.com>
User-Agent: Mutt/1.3.28i
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Backport request
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, 2014 at 11:36:34AM +0000, Jan Beulich wrote:
> >>> On 13.01.14 at 12:30, David Vrabel <david.vrabel@citrix.com> wrote:
> > On 13/01/14 11:13, Jan Beulich wrote:
> >>>>> On 13.01.14 at 10:49, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >>> Can you please backport c/s 0896bd8bea84526b00e00d2d076f4f953a3d73cb
> >>> "x86: map portion of kexec crash area that is within the direct map
> >>> area" to staging-4.3 ASAP, as following the backport of
> >>> 8d611a00d3389d9c16506326e24145b94ac6fb86 "kexec/x86: do not map crash
> >>> kernel area", kexec loading is broken in exactly the same way as it was
> >>> in staging.
> >>
> >> Not without explaining how it is broken: According to my own
> >> checking as well as Daniel's there was no need for the kexec area
> >> to be mapped at all in the old implementation.
> >>
> >> Furthermore, I'd prefer to revert 8d611a00 (dff90d0c on 4.2)
> >> instead if there really is an issue. That's largely because, as
> >> noted in the extra comment I added to 0896bd8b, the change is
> >> still incomplete (and hence not much better than the code with
> >> Daniel's change reverted).
> >
> > In that comment it says:
> >
> >   "That's primarily because map_domain_page()
> >    (needed when the area is outside the direct mapping range) may be
> >    unusable when wanting to kexec due to a crash,"
> >
> > But map_domain_page() is only used when loading the crash image, not
> > during exec.
>
> That's good to know, but I don't think it's a good idea either. Seeing
> that map_domain_page() is being used in the code _at all_ may
> make someone try to use it in a path that is used during kexec. And
> it being used by other functions in the same file may also result in
> one of those functions suddenly also getting used on a kexec path.
>
> Together with the issue of the area potentially ending up in a
> PFN compression hole, to me this is enough reason to still expect
> that these uses get dropped again.

map_domain_page() is used when loading the kexec and crash images.
However, it is not used when a given image is executed.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:39:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hk8-0001M3-SY; Mon, 13 Jan 2014 13:39:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2hk8-0001Lw-3Z
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 13:39:52 +0000
Received: from [85.158.139.211:34401] by server-5.bemta-5.messagelabs.com id
	2F/7C-14928-7ACE3D25; Mon, 13 Jan 2014 13:39:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389620390!9448300!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 633 invoked from network); 13 Jan 2014 13:39:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 13:39:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 13:39:50 +0000
Message-Id: <52D3FAB20200007800113162@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 13:39:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 07/16] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> --- /dev/null
> +++ b/xen/include/public/arch-x86/xenpmu.h
> @@ -0,0 +1,74 @@
> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
> +
> +/* x86-specific PMU definitions */
> +#include "xen.h"

The comment looks like it is describing the #include, which it
clearly isn't. Please have a blank line between them.

> +
> +/* Start of PMU register bank */
> +#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
> +                                                 (uintptr_t)ctxt->offset))

This doesn't seem to belong in a public header.

> +
> +/* AMD PMU registers and structures */
> +struct amd_vpmu_context {
> +    uint64_t counters;      /* Offset to counter MSRs */
> +    uint64_t ctrls;         /* Offset to control MSRs */
> +    uint8_t msr_bitmap_set; /* Used by HVM only */
> +};

sizeof() of this structure will differ between 32- and 64-bit guests -
are you intending to do the necessary translation even though it
seems rather easy to avoid having to do so?
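(For illustration only: one common way to sidestep the 32-/64-bit translation
is explicit trailing padding; the padded variant below is a hypothetical
sketch, not the patch author's layout. Without the pad, the i386 ABI aligns
the structure to 4 bytes and yields sizeof 20, while x86-64 yields 24.)

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative padded variant: the explicit trailing pad fixes the
 * structure at 24 bytes on both 32- and 64-bit x86 ABIs, so no
 * compat-mode translation would be needed. */
struct amd_vpmu_context_padded {
    uint64_t counters;       /* Offset to counter MSRs */
    uint64_t ctrls;          /* Offset to control MSRs */
    uint8_t  msr_bitmap_set; /* Used by HVM only */
    uint8_t  _pad[7];        /* pin sizeof at 24 for every ABI */
};
```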

> +
> +/* Intel PMU registers and structures */
> +struct arch_cntr_pair {
> +    uint64_t counter;
> +    uint64_t control;
> +};
> +struct core2_vpmu_context {

Blank line missing between the two structures.

> +    uint64_t global_ctrl;
> +    uint64_t global_ovf_ctrl;
> +    uint64_t global_status;
> +    uint64_t fixed_ctrl;
> +    uint64_t ds_area;
> +    uint64_t pebs_enable;
> +    uint64_t debugctl;
> +    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
> +    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
> +};
> +
> +/* ANSI-C does not support anonymous unions */
> +#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
> +#define __ANON_UNION_NAME(x) x
> +#else
> +#define __ANON_UNION_NAME(x)
> +#endif

Why? And if really needed, why here?

> +
> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct amd_vpmu_context) > \
> +                                    sizeof(struct core2_vpmu_context) ? \
> +                                     sizeof(struct amd_vpmu_context) : \
> +                                     sizeof(struct core2_vpmu_context))
> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
> +struct arch_xenpmu {
> +    union {
> +        struct cpu_user_regs regs;
> +        uint8_t pad1[256];
> +    } __ANON_UNION_NAME(r);
> +    union {
> +        uint32_t lapic_lvtpc;
> +        uint64_t pad2;
> +    } __ANON_UNION_NAME(l);
> +    union {
> +        struct amd_vpmu_context amd;
> +        struct core2_vpmu_context intel;
> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
> +    } __ANON_UNION_NAME(c);

I don't think there'd be a severe problem if you simply always
had names on these unions.

> +};
> +typedef struct arch_xenpmu arch_xenpmu_t;

Overall you should also prefix all types added to global scope with
"xen". I know this wasn't done consistently for older headers, but
we shouldn't be extending this name space cluttering.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
> +    } __ANON_UNION_NAME(c);

I don't think there'd be a severe problem if you simply always
had names on these unions.

> +};
> +typedef struct arch_xenpmu arch_xenpmu_t;

Overall you should also prefix all types added to global scope with
"xen". I know this wasn't done consistently for older headers, but
we shouldn't be extending this name space cluttering.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:46:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2hq7-0001aJ-0K; Mon, 13 Jan 2014 13:46:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W2hq5-0001aE-E9
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:46:01 +0000
Received: from [85.158.137.68:44848] by server-2.bemta-3.messagelabs.com id
	34/61-17329-81EE3D25; Mon, 13 Jan 2014 13:46:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389620758!8829917!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29496 invoked from network); 13 Jan 2014 13:45:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 13:45:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92298949"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 13:45:57 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 08:45:57 -0500
Message-ID: <52D3EE14.3080609@citrix.com>
Date: Mon, 13 Jan 2014 13:45:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
	<52D3F535020000780011311B@nat28.tlf.novell.com>
In-Reply-To: <52D3F535020000780011311B@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	Olaf Hering <olaf@aepfle.de>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 13:16, Jan Beulich wrote:
>>>> On 13.01.14 at 14:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
>>>>>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
>>>> On Mon, Jan 13, Jan Beulich wrote:
>>>>
>>>>> You can't do this in one go - the first two and the last one may be
>>>>> set independently (and are independent in their meaning), and
>>>>> hence need to be queried independently (xenbus_gather() fails
>>>>> on the first absent value).
>>>>
>>>> Yes, that's the purpose. Since the properties are required it's an
>>>> all-or-nothing thing. If they are truly optional then blkif.h should be
>>>> updated to say that.
>>>
>>> They _are_ optional.
>>
>> But is it true that either they are all present or they are all absent?
> 
> No, it's not. discard-secure is independent of the other two (but
> those other two are tied together).

Can we have a patch to blkif.h that clarifies this?

e.g.,

feature-discard

   ...

   discard-granularity and discard-offset must also be present if
   feature-discard is enabled.

   discard-secure may also be present if feature-discard is enabled.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 13:58:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 13:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2i29-0002DV-A8; Mon, 13 Jan 2014 13:58:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2i27-0002Av-L8
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:58:27 +0000
Received: from [193.109.254.147:10929] by server-12.bemta-14.messagelabs.com
	id 1A/19-13681-301F3D25; Mon, 13 Jan 2014 13:58:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389621506!10529683!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14496 invoked from network); 13 Jan 2014 13:58:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 13:58:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 13:58:25 +0000
Message-Id: <52D3FF0E02000078001131A5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 13:58:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
	<52D3F535020000780011311B@nat28.tlf.novell.com>
	<52D3EE14.3080609@citrix.com>
In-Reply-To: <52D3EE14.3080609@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	Olaf Hering <olaf@aepfle.de>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 14:45, David Vrabel <david.vrabel@citrix.com> wrote:
> On 13/01/14 13:16, Jan Beulich wrote:
>>>>> On 13.01.14 at 14:00, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
>>>>>>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
>>>>> On Mon, Jan 13, Jan Beulich wrote:
>>>>>
>>>>>> You can't do this in one go - the first two and the last one may be
>>>>>> set independently (and are independent in their meaning), and
>>>>>> hence need to be queried independently (xenbus_gather() fails
>>>>>> on the first absent value).
>>>>>
>>>>> Yes, that's the purpose. Since the properties are required it's an
>>>>> all-or-nothing thing. If they are truly optional then blkif.h should
>>>>> be updated to say that.
>>>>
>>>> They _are_ optional.
>>>
>>> But is it true that either they are all present or they are all absent?
>> 
>> No, it's not. discard-secure is independent of the other two (but
>> those other two are tied together).
> 
> Can we have a patch to blkif.h that clarifies this?
> 
> e.g.,
> 
> feature-discard
> 
>    ...
> 
>    discard-granularity and discard-offset must also be present if
>    feature-discard is enabled

It would be "may" here too afaict. But I'll defer to Konrad, who
has done more work in this area...

Jan

>    discard-secure may also be present if feature-discard is enabled.
> 
> David




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:02:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2i5x-0002nL-04; Mon, 13 Jan 2014 14:02:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2i5v-0002nG-A6
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:02:23 +0000
Received: from [193.109.254.147:24346] by server-6.bemta-14.messagelabs.com id
	53/68-14958-EE1F3D25; Mon, 13 Jan 2014 14:02:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389621741!10568253!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12249 invoked from network); 13 Jan 2014 14:02:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 14:02:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 14:02:21 +0000
Message-Id: <52D3FFF902000078001131C7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 14:02:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-10-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-10-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 09/16] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/include/public/xenpmu.h
> +++ b/xen/include/public/xenpmu.h
> @@ -13,6 +13,64 @@
>  #define XENPMU_VER_MAJ    0
>  #define XENPMU_VER_MIN    0
>  
> +/*
> + * ` enum neg_errnoval
> + * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
> + *
> + * @cmd  == XENPMU_* (PMU operation)
> + * @args == struct xenpmu_params
> + */
> +/* ` enum xenpmu_op { */
> +#define XENPMU_mode_get        0 /* Also used for getting PMU version */
> +#define XENPMU_mode_set        1
> +#define XENPMU_feature_get     2
> +#define XENPMU_feature_set     3
> +/* ` } */
> +
> +/* ANSI-C does not support anonymous unions */
> +#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
> +#define __ANON_UNION_NAME(x) x
> +#else
> +#define __ANON_UNION_NAME(x)
> +#endif

Same comment as on the earlier incarnation in another patch of
this series: This shouldn't be in a public header, or if it really needs
to be, then properly prefixed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:04:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2i8G-0002w0-Ni; Mon, 13 Jan 2014 14:04:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2i8F-0002vu-Lv
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:04:47 +0000
Received: from [193.109.254.147:63214] by server-8.bemta-14.messagelabs.com id
	9C/6E-30921-E72F3D25; Mon, 13 Jan 2014 14:04:46 +0000
X-Env-Sender: JBeulich@suse.com
From xen-devel-bounces@lists.xen.org Mon Jan 13 14:04:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2i8G-0002w0-Ni; Mon, 13 Jan 2014 14:04:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2i8F-0002vu-Lv
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:04:47 +0000
Received: from [193.109.254.147:63214] by server-8.bemta-14.messagelabs.com id
	9C/6E-30921-E72F3D25; Mon, 13 Jan 2014 14:04:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389621886!7041270!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25056 invoked from network); 13 Jan 2014 14:04:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 14:04:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 14:04:46 +0000
Message-Id: <52D4008A02000078001131CA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 14:04:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 10/16] x86/VPMU: Initialize PMU for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
>  	 }
>      }
>  
> -    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) + 
> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
> -    if ( !ctxt )
> +    if ( is_hvm_domain(v->domain) )

Here and elsewhere - did you consider whether is_hvm_domain() is
still the correct thing here now that PVH code is in?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:05:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2i93-000310-5S; Mon, 13 Jan 2014 14:05:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2i91-00030m-Vs
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 14:05:36 +0000
Received: from [193.109.254.147:12988] by server-5.bemta-14.messagelabs.com id
	4D/A0-03510-FA2F3D25; Mon, 13 Jan 2014 14:05:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389621934!10476416!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28327 invoked from network); 13 Jan 2014 14:05:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 14:05:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 14:05:34 +0000
Message-Id: <52D400B902000078001131E7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 14:05:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <xen-devel@lists.xen.org>, "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v3 10/16] x86/VPMU: Initialize PMU for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> @@ -750,6 +758,11 @@ func_out:
>      fixed_pmc_cnt = core2_get_fixed_pmc_count();
>      check_pmc_quirk();
>  
> +    /* PV domains can allocate resources immediately */
> +    if ( !is_hvm_domain(v->domain) )
> +        if ( !core2_vpmu_alloc_resource(v) )

Please use a single if() for conditions like this.

Jan

> +            return 1;
> +
>      return 0;
>  }
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:12:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2iFR-0003iB-1n; Mon, 13 Jan 2014 14:12:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2iFP-0003i6-NE
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:12:12 +0000
Received: from [85.158.137.68:14779] by server-3.bemta-3.messagelabs.com id
	69/AA-10658-A34F3D25; Mon, 13 Jan 2014 14:12:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389622329!8840351!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24475 invoked from network); 13 Jan 2014 14:12:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 14:12:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 14:12:09 +0000
Message-Id: <52D402420200007800113210@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 14:12:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> @@ -866,7 +867,6 @@ void pv_cpuid(struct cpu_user_regs *regs)
>          break;
>  
>      case 0x00000005: /* MONITOR/MWAIT */
> -    case 0x0000000a: /* Architectural Performance Monitor Features */
>      case 0x0000000b: /* Extended Topology Enumeration */
>      case 0x8000000a: /* SVM revision and features */
>      case 0x8000001b: /* Instruction Based Sampling */
> @@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
>      unsupported:
>          a = b = c = d = 0;
>          break;
> -
> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
> +        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
> +        break;
>      default:

Rather than removing a blank line here, you ought to insert a
second one so that there's one before _and_ after the added
code block.

Furthermore the need to pass 0xa as the first argument suggests
that you're not in line with the intentions of vpmu_do_cpuid():
Either you drop the first parameter from the function, or you get
your code in line with the existing caller.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:30:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2iWg-0004l6-Qj; Mon, 13 Jan 2014 14:30:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W2iWe-0004l1-N5
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:30:00 +0000
Received: from [193.109.254.147:15264] by server-14.bemta-14.messagelabs.com
	id 84/91-12628-768F3D25; Mon, 13 Jan 2014 14:29:59 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389623396!10577195!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14875 invoked from network); 13 Jan 2014 14:29:57 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 14:29:57 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 13 Jan 2014 14:29:55 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="629474395"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.4.230])
	by fldsmtpi02.verizon.com with ESMTP; 13 Jan 2014 14:29:53 +0000
Message-ID: <52D3F861.9090706@terremark.com>
Date: Mon, 13 Jan 2014 09:29:53 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
	<1389391020-14476-2-git-send-email-dslutz@verizon.com>
	<52D3DE6E020000780011301D@nat28.tlf.novell.com>
In-Reply-To: <52D3DE6E020000780011301D@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v4 1/3] dbg_rw_guest_mem: need to
 call put_gfn in error path.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/14 06:39, Jan Beulich wrote:
>>>> On 10.01.14 at 22:56, Don Slutz <dslutz@verizon.com> wrote:
> I don't see why you included this in the resend - it got applied
> earlier on Friday.
>
> Jan
>


Oops, my mistake.  In the rush to get v4 out to share I forgot to check it
against staging as well.

Sorry for the mistake.

     -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:32:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2iZU-0004rf-09; Mon, 13 Jan 2014 14:32:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2iZS-0004ra-9W
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 14:32:54 +0000
Received: from [85.158.143.35:26050] by server-2.bemta-4.messagelabs.com id
	87/7E-11386-519F3D25; Mon, 13 Jan 2014 14:32:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389623571!11356974!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8839 invoked from network); 13 Jan 2014 14:32:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 14:32:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90189129"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 14:32:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 09:32:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W2iZO-0003X5-JU;
	Mon, 13 Jan 2014 14:32:50 +0000
Message-ID: <52D3F912.7080004@citrix.com>
Date: Mon, 13 Jan 2014 14:32:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] xl errors if /var/log/xen doesn't exist
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

As part of running XenRT on a XenServer experimental build on top of
Xen-4.4.0-rc1, I encountered the following error which is far from helpful:

$ xl create /root/minios.cfg
xl: fatal error: xl_cmdimpl.c:457: No such file or directory: logfile =
open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644)
Parsing config from /root/minios.cfg
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
daemonizing child [32476] exited with error status 3
Parsing config from /root/minios.cfg

The root cause of the issue is that the directory /var/log/xen didn't
exist (because I messed up the packaging).

Because of the CHK_SYSCALL() macro which prints a string-ized version of
its argument, the error message omits the key detail (i.e. which file
doesn't exist).  Furthermore, the string 'fullname' is a complete path
which has been returned from "libxl_create_logfile()" without error.

If the failure to create a logfile is a fatal error (which itself is
probably not correct - what about a RO dom0 which strictly logs to
rsyslog?), then either xl or libxl should assume the responsibility of
ensuring that its logging directory actually exists, and
"libxl_create_logfile()" seems like a prime candidate to assume this
responsibility.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] xl errors if /var/log/xen doesn't exist
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

As part of running XenRT on a XenServer experimental build on top of
Xen-4.4.0-rc1, I encountered the following error which is far from helpful:

$ xl create /root/minios.cfg
xl: fatal error: xl_cmdimpl.c:457: No such file or directory: logfile =
open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644)
Parsing config from /root/minios.cfg
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
daemonizing child [32476] exited with error status 3
Parsing config from /root/minios.cfg

The root cause of the issue is that the directory /var/log/xen didn't
exist (because I messed up the packaging).

Because the CHK_SYSCALL() macro prints a stringized version of its
argument, the error message omits the key detail (i.e. which file
doesn't exist).  Furthermore, the string 'fullname' is a complete path
which has been returned from libxl_create_logfile() without error.

If the failure to create a logfile is a fatal error (which itself is
probably not correct - what about a RO dom0 which strictly logs to
rsyslog?), then either xl or libxl should assume the responsibility of
ensuring that its logging directory actually exists, and
"libxl_create_logfile()" seems like a prime candidate to assume this
responsibility.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 14:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 14:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2iqy-0005xI-Pj; Mon, 13 Jan 2014 14:51:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2iqx-0005xD-UI
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 14:51:00 +0000
Received: from [85.158.139.211:62722] by server-5.bemta-5.messagelabs.com id
	10/E6-14928-35DF3D25; Mon, 13 Jan 2014 14:50:59 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389624656!9467903!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1968 invoked from network); 13 Jan 2014 14:50:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 14:50:58 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0DEomL7010809
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 14:50:49 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DEolQJ006403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 14:50:47 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0DEokjI011562; Mon, 13 Jan 2014 14:50:46 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 06:50:46 -0800
Message-ID: <52D3FD67.2060708@oracle.com>
Date: Mon, 13 Jan 2014 09:51:19 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140113093032.GA13919@aepfle.de>
In-Reply-To: <20140113093032.GA13919@aepfle.de>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 04:30 AM, Olaf Hering wrote:
> On Fri, Jan 10, Boris Ostrovsky wrote:
>
>> I don't know how the discard code works, but it seems to me that if you
>> pass, for example, zero as discard_granularity (which may happen if
>> xenbus_gather() fails) then blkdev_issue_discard() in the backend will set
>> granularity to 1 and continue with discard. This may not be what the guest
>> admin requested, and he won't know about it since no error message is
>> printed anywhere.
> If I understand the code using granularity/alignment correctly, both are
> optional properties. So if the granularity is just 1 it means byte
> ranges, which is fine if the backend uses FALLOC_FL_PUNCH_HOLE. Also
> both properties are not admin-controlled; for phy the blkbk driver just
> passes on what it gets from the underlying hardware.
>
>> Similarly, if xenbus_gather("discard-secure") fails, I think the code will
>> assume that secure discard has not been requested. I don't know what
>> security implications this will have, but it sounds bad to me.
> There are no security implications; if the backend does not advertise it
> then it's not present.

Right. But my question was: what if the backend does advertise it and
wants the frontend to use it, but xenbus_gather() in the frontend fails?
Do we want to silently continue without discard-secure? Is this safe?


-boris

>
> After poking around some more it seems that blkif.h is the spec; it does
> not say that the three properties are optional. Also the
> backend drivers in sles11sp2 and mainline create all three properties
> unconditionally. So I think a better change is to expect all three
> properties in the frontend. I will send another version of the patch.
>
>
> Olaf


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:00:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2izc-0006YD-Rj; Mon, 13 Jan 2014 14:59:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2izb-0006Y8-Je
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 14:59:55 +0000
Received: from [193.109.254.147:29815] by server-11.bemta-14.messagelabs.com
	id 07/F6-20576-A6FF3D25; Mon, 13 Jan 2014 14:59:54 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389625193!8240488!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9601 invoked from network); 13 Jan 2014 14:59:54 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 14:59:54 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0DExlXq030680
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 14:59:48 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0DExkPj005953
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 14:59:46 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DExjLu019852; Mon, 13 Jan 2014 14:59:46 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 06:59:45 -0800
Message-ID: <52D3FF82.7020005@oracle.com>
Date: Mon, 13 Jan 2014 10:00:18 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
	<52D03580020000780011298A@nat28.tlf.novell.com>
In-Reply-To: <52D03580020000780011298A@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 01/16] common/symbols: Export hypervisor
 symbols to PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 12:01 PM, Jan Beulich wrote:
>
>> --- a/xen/arch/x86/x86_64/platform_hypercall.c
>> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
>> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>>   CHECK_pf_enter_acpi_sleep;
>>   #undef xen_pf_enter_acpi_sleep
>>   
>> +#define xenpf_symdata   compat_pf_symdata
>> +
> This should be done like other cases (see context above): Add a
> CHECK_ invocation, which in turn requires an entry in include/xlat.lst.

Based on your comment on my attempt to patch compat.h, is this really
needed? Since xenpf_symdata contains a handle, we shouldn't subject it to
checking, should we?

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:04:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:04:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2j3q-0006hx-IC; Mon, 13 Jan 2014 15:04:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2j3o-0006hn-Cg
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:04:16 +0000
Received: from [85.158.139.211:51441] by server-9.bemta-5.messagelabs.com id
	09/F2-15098-F6004D25; Mon, 13 Jan 2014 15:04:15 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389625453!9464679!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23338 invoked from network); 13 Jan 2014 15:04:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:04:14 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0DF489h004263
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 15:04:08 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DF47Dg009961
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 15:04:08 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DF47N5009956; Mon, 13 Jan 2014 15:04:07 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 07:04:07 -0800
Message-ID: <52D40088.6080401@oracle.com>
Date: Mon, 13 Jan 2014 10:04:40 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
	<52D3F82A020000780011313A@nat28.tlf.novell.com>
In-Reply-To: <52D3F82A020000780011313A@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 05/16] x86/VPMU: Handle APIC_LVTPC
	accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 08:28 AM, Jan Beulich wrote:
>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> Update APIC_LVTPC vector when HVM guest writes to it.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

...

>> +void vpmu_lvtpc_update(uint32_t val)
>> +{
>> +    struct vpmu_struct *vpmu = vcpu_vpmu(current);
>> +
>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
>> +    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
>> +}
>> +
>>   int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>>   {
>>       struct vpmu_struct *vpmu = vcpu_vpmu(current);
>> @@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
>>       case X86_VENDOR_AMD:
>>           if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>               opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>   
>>       case X86_VENDOR_INTEL:
>>           if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>               opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>   
>>       default:
>>           printk("VPMU: Initialization failed. "
>>                  "Unknown CPU vendor %d\n", vendor);
>>           opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>       }
>> +
>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>>   }
> So what is this good for? All code paths above use "return" now,
> hence how would execution get here?

Yes, it shouldn't be here. I could move it up, but since we are now
updating the LVTPC explicitly when a guest writes to the APIC, I think I
can just drop it altogether.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On 01/13/2014 08:28 AM, Jan Beulich wrote:
>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> Update APIC_LVTPC vector when HVM guest writes to it.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

...

>> +void vpmu_lvtpc_update(uint32_t val)
>> +{
>> +    struct vpmu_struct *vpmu = vcpu_vpmu(current);
>> +
>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
>> +    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
>> +}
>> +
>>   int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>>   {
>>       struct vpmu_struct *vpmu = vcpu_vpmu(current);
>> @@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
>>       case X86_VENDOR_AMD:
>>           if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>               opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>   
>>       case X86_VENDOR_INTEL:
>>           if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>               opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>   
>>       default:
>>           printk("VPMU: Initialization failed. "
>>                  "Unknown CPU vendor %d\n", vendor);
>>           opt_vpmu_enabled = 0;
>> -        break;
>> +        return;
>>       }
>> +
>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>>   }
> So what is this good for? All code paths above use "return" now,
> hence how would execution get here?

Yes, it shouldn't be here. I could move it up but since we are now 
updating LVTPC explicitly when a guest is writing APIC I think I can 
just drop it altogether.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:07:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2j6f-0006qw-AA; Mon, 13 Jan 2014 15:07:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2j6e-0006qq-6J
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:07:12 +0000
Received: from [85.158.137.68:12208] by server-7.bemta-3.messagelabs.com id
	88/07-27599-F1104D25; Mon, 13 Jan 2014 15:07:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389625630!8851927!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30103 invoked from network); 13 Jan 2014 15:07:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:07:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 15:07:09 +0000
Message-Id: <52D40F2902000078001132B4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 15:07:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-2-git-send-email-boris.ostrovsky@oracle.com>
	<52D03580020000780011298A@nat28.tlf.novell.com>
	<52D3FF82.7020005@oracle.com>
In-Reply-To: <52D3FF82.7020005@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 01/16] common/symbols: Export hypervisor
 symbols to PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 16:00, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/10/2014 12:01 PM, Jan Beulich wrote:
>>
>>> --- a/xen/arch/x86/x86_64/platform_hypercall.c
>>> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
>>> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>>>   CHECK_pf_enter_acpi_sleep;
>>>   #undef xen_pf_enter_acpi_sleep
>>>   
>>> +#define xenpf_symdata   compat_pf_symdata
>>> +
>> This should be done like other cases (see context above): Add a
>> CHECK_ invocation, which in turn requires an entry in include/xlat.lst.
> 
> Based on your comment to my attempt to patch compat.h, is this really 
> needed? Since xenpf_symdata contains a handle we shouldn't subject it to 
> checking, should we?

Correct. You nevertheless should add the new type to include/xlat.lst.

Jan
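For reference, include/xlat.lst entries are one line each: a leading marker (`?` emits a CHECK_ invocation, `!` marks a type needing translation instead), the type name with its `xen` prefix dropped, and the public header it is declared in. A sketch of what the addition discussed above might look like (marker choice and exact spelling are assumptions inferred from the discussion, since xenpf_symdata contains a handle and is not to be CHECK'd):

```
!	pf_symdata			platform.h
```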


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:07:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:07:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2j7B-0006u0-Oa; Mon, 13 Jan 2014 15:07:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2j7A-0006tq-Ph
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 15:07:44 +0000
Received: from [85.158.143.35:42564] by server-3.bemta-4.messagelabs.com id
	8C/86-32360-04104D25; Mon, 13 Jan 2014 15:07:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389625662!11372512!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23120 invoked from network); 13 Jan 2014 15:07:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 15:07:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92329787"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 15:07:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 10:07:40 -0500
Message-ID: <1389625659.13654.77.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 13 Jan 2014 15:07:39 +0000
In-Reply-To: <52D3F912.7080004@citrix.com>
References: <52D3F912.7080004@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xl errors if /var/log/xen doesn't exist
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
thanks

Thanks for the report, I've created a bug for tracking purposes.

On Mon, 2014-01-13 at 14:32 +0000, Andrew Cooper wrote:
> Hello,
> 
> As part of running XenRT on a XenServer experimental build on top of
> Xen-4.4.0-rc1, I encountered the following error which is far from helpful:
> 
> $ xl create /root/minios.cfg
> xl: fatal error: xl_cmdimpl.c:457: No such file or directory: logfile =
> open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644)
> Parsing config from /root/minios.cfg
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
> daemonizing child [32476] exited with error status 3
> Parsing config from /root/minios.cfg
> 
> The root cause of the issue is that the directory /var/log/xen didn't
> exist

Right, I do think we ought to cope with this a bit more gracefully.

>  (because I messed up the packaging).
> 
> Because of the CHK_SYSCALL() macro which prints a string-ized version of
> its argument, the error message omits the key detail (i.e. which file
> doesn't exist).

Oops!

> Furthermore, the string 'fullname' is a complete path
> which has been returned from "libxl_create_logfile()" without error.
> 
> If the failure to create a logfile is a fatal error (which itself is
> probably not correct - what about a RO dom0 which strictly logs to
> rsyslog?),

Even a R/O dom0 would need a writeable /var I think -- otherwise xl's
behaviour is the least of your worries. A tmpfs for example.

In any case presumably that use case would warrant making further
changes in order to have the option to direct these logs to syslog too.

>  then either xl or libxl should assume the responsibility of
> ensuring that its logging directory actually exists, and
> "libxl_create_logfile()" seems like a prime candidate to assume this
> responsibility.

It sounds like it, yes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:08:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2j7i-0006z4-9v; Mon, 13 Jan 2014 15:08:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2j7g-0006yb-1S
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:08:16 +0000
Received: from [85.158.137.68:58925] by server-16.bemta-3.messagelabs.com id
	EA/1A-26128-F5104D25; Mon, 13 Jan 2014 15:08:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389625694!8871036!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30654 invoked from network); 13 Jan 2014 15:08:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:08:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 15:08:14 +0000
Message-Id: <52D40F6A02000078001132B7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 15:08:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-6-git-send-email-boris.ostrovsky@oracle.com>
	<52D3F82A020000780011313A@nat28.tlf.novell.com>
	<52D40088.6080401@oracle.com>
In-Reply-To: <52D40088.6080401@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 05/16] x86/VPMU: Handle APIC_LVTPC
	accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 16:04, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/13/2014 08:28 AM, Jan Beulich wrote:
>>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> Update APIC_LVTPC vector when HVM guest writes to it.
>>>
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> ...
> 
>>> +void vpmu_lvtpc_update(uint32_t val)
>>> +{
>>> +    struct vpmu_struct *vpmu = vcpu_vpmu(current);
>>> +
>>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
>>> +    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
>>> +}
>>> +
>>>   int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>>>   {
>>>       struct vpmu_struct *vpmu = vcpu_vpmu(current);
>>> @@ -227,19 +235,21 @@ void vpmu_initialise(struct vcpu *v)
>>>       case X86_VENDOR_AMD:
>>>           if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>>               opt_vpmu_enabled = 0;
>>> -        break;
>>> +        return;
>>>   
>>>       case X86_VENDOR_INTEL:
>>>           if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
>>>               opt_vpmu_enabled = 0;
>>> -        break;
>>> +        return;
>>>   
>>>       default:
>>>           printk("VPMU: Initialization failed. "
>>>                  "Unknown CPU vendor %d\n", vendor);
>>>           opt_vpmu_enabled = 0;
>>> -        break;
>>> +        return;
>>>       }
>>> +
>>> +    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>>>   }
>> So what is this good for? All code paths above use "return" now,
>> hence how would execution get here?
> 
> Yes, it shouldn't be here. I could move it up but since we are now 
> updating LVTPC explicitly when a guest is writing APIC I think I can 
> just drop it altogether.

As long as the mask bit gets properly set somewhere, not adding
it here or anywhere else seems fine to me.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:11:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jAU-0007gc-Tn; Mon, 13 Jan 2014 15:11:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W2jAU-0007gK-8h
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 15:11:10 +0000
Received: from [85.158.143.35:13812] by server-2.bemta-4.messagelabs.com id
	18/B7-11386-B0204D25; Mon, 13 Jan 2014 15:11:07 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389625865!8736993!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20574 invoked from network); 13 Jan 2014 15:11:06 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 13 Jan 2014 15:11:06 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W2jEE-0006A6-VX; Mon, 13 Jan 2014 15:15:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389626102.23688@bugs.xenproject.org>
References: <52D3F912.7080004@citrix.com>
	<1389625659.13654.77.camel@kazak.uk.xensource.com>
In-Reply-To: <1389625659.13654.77.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 15:15:02 +0000
Subject: [Xen-devel] Processed: Re: xl errors if /var/log/xen doesn't exist
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #33 rooted at `<52D3F912.7080004@citrix.com>'
Title: `Re: xl errors if /var/log/xen doesn't exist'
> thanks
Finished processing.

Modified/created Bugs:
 - 33: http://bugs.xenproject.org/xen/bug/33 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:14:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jDk-0007sc-Nb; Mon, 13 Jan 2014 15:14:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2jDj-0007sW-Iv
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:14:31 +0000
Received: from [85.158.139.211:31671] by server-6.bemta-5.messagelabs.com id
	82/E7-16310-6D204D25; Mon, 13 Jan 2014 15:14:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389626069!9431414!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30200 invoked from network); 13 Jan 2014 15:14:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:14:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 15:14:29 +0000
Message-Id: <52D410E102000078001132F8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 15:14:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Aiming at a release in late January or early February, I'd like to cut
RC1s later this or early next week.

Please point out any bug fixes that may so far have been missed
in the backports already done.

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:23:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:23:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jMD-0008V0-Q1; Mon, 13 Jan 2014 15:23:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2jMB-0008Uv-KN
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:23:15 +0000
Received: from [85.158.143.35:64672] by server-2.bemta-4.messagelabs.com id
	5D/9E-11386-3E404D25; Mon, 13 Jan 2014 15:23:15 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389626591!8740506!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21709 invoked from network); 13 Jan 2014 15:23:13 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 15:23:13 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0DFN5Px029434
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 15:23:06 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFN49r026611
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 15:23:04 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFN34Q002917; Mon, 13 Jan 2014 15:23:03 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 07:23:03 -0800
Message-ID: <52D404F7.9090906@oracle.com>
Date: Mon, 13 Jan 2014 10:23:35 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
	<52D3FAB20200007800113162@nat28.tlf.novell.com>
In-Reply-To: <52D3FAB20200007800113162@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 07/16] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 08:39 AM, Jan Beulich wrote:
>> --- /dev/null
>> +++ b/xen/include/public/arch-x86/xenpmu.h
>> @@ -0,0 +1,74 @@
....

>> +
>> +/* AMD PMU registers and structures */
>> +struct amd_vpmu_context {
>> +    uint64_t counters;      /* Offset to counter MSRs */
>> +    uint64_t ctrls;         /* Offset to control MSRs */
>> +    uint8_t msr_bitmap_set; /* Used by HVM only */
>> +};
> sizeof() of this structure will differ between 32- and 64-bit guests -
> are you intending to do the necessary translation even though it
> seems rather easy to avoid having to do so?

The 'msr_bitmap_set' field is never actually used by PV, and it's the last 
one in the structure, which is why I didn't bother to make it bigger. But 
you are right, I should fix this to avoid problems in the future.
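The layout concern can be illustrated with a small sketch (struct names
are illustrative, not the actual header's): with a trailing uint8_t the
compiler adds tail padding up to the struct's alignment, which is 7 bytes
on a 64-bit ABI but typically only 3 on 32-bit x86, where uint64_t is
4-byte aligned, so sizeof() differs between 32- and 64-bit guests sharing
this layout. Widening the last field removes the padding entirely.

```c
#include <stdint.h>

/* ABI-dependent size: 24 bytes on x86-64, typically 20 on 32-bit x86. */
struct ctx_narrow {
    uint64_t counters;       /* offset to counter MSRs */
    uint64_t ctrls;          /* offset to control MSRs */
    uint8_t  msr_bitmap_set; /* 1 byte + ABI-dependent tail padding */
};

/* Same size (3 * 8 = 24 bytes) on both ABIs: no padding needed. */
struct ctx_wide {
    uint64_t counters;
    uint64_t ctrls;
    uint64_t msr_bitmap_set; /* widened so the layouts match */
};
```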

>
>> +
>> +/* Intel PMU registers and structures */
>> +struct arch_cntr_pair {
>> +    uint64_t counter;
>> +    uint64_t control;
>> +};
>> +struct core2_vpmu_context {
> Blank line missing between the two structures.
>
>> +    uint64_t global_ctrl;
>> +    uint64_t global_ovf_ctrl;
>> +    uint64_t global_status;
>> +    uint64_t fixed_ctrl;
>> +    uint64_t ds_area;
>> +    uint64_t pebs_enable;
>> +    uint64_t debugctl;
>> +    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
>> +    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
>> +};
>> +
>> +/* ANSI-C does not support anonymous unions */
>> +#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
>> +#define __ANON_UNION_NAME(x) x
>> +#else
>> +#define __ANON_UNION_NAME(x)
>> +#endif
> Why? And if really needed, why here?

I'll drop this. I thought anonymous unions looked better but now that I 
look at it again I think the ifdefs are rather ugly too.
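The trade-off being dropped here can be sketched as follows (names are
illustrative, not the real xenpmu.h): a named union member costs one extra
path component at each access, but it is plain ANSI C, so no
__ANON_UNION_NAME-style ifdefs are needed at all.

```c
#include <stdint.h>

/* Always-named union member, as suggested in the review. */
struct pmu_sketch {
    union {
        uint32_t lapic_lvtpc;
        uint64_t pad;         /* keeps the slot 8 bytes wide */
    } l;                      /* named: portable, slightly longer access */
};

static uint32_t read_lvtpc(const struct pmu_sketch *p)
{
    return p->l.lapic_lvtpc;  /* would be p->lapic_lvtpc if anonymous */
}
```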

>
>> +
>> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct amd_vpmu_context) > \
>> +                                    sizeof(struct core2_vpmu_context) ? \
>> +                                     sizeof(struct amd_vpmu_context) : \
>> +                                     sizeof(struct core2_vpmu_context))
>> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
>> +struct arch_xenpmu {
>> +    union {
>> +        struct cpu_user_regs regs;
>> +        uint8_t pad1[256];
>> +    } __ANON_UNION_NAME(r);
>> +    union {
>> +        uint32_t lapic_lvtpc;
>> +        uint64_t pad2;
>> +    } __ANON_UNION_NAME(l);
>> +    union {
>> +        struct amd_vpmu_context amd;
>> +        struct core2_vpmu_context intel;
>> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
>> +    } __ANON_UNION_NAME(c);
> I don't think there'd be a severe problem if you simply always
> had names on these unions.
>
>> +};
>> +typedef struct arch_xenpmu arch_xenpmu_t;
> Overall you should also prefix all types added to global scope with
> "xen". I know this wasn't done consistently for older headers, but
> we shouldn't be extending this name space cluttering.

You mean something like arch_xenpmu ==> xen_arch_pmu?

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:25:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jNz-00008L-AP; Mon, 13 Jan 2014 15:25:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2jNy-00008F-IP
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:25:06 +0000
Received: from [85.158.143.35:15202] by server-3.bemta-4.messagelabs.com id
	40/3A-32360-15504D25; Mon, 13 Jan 2014 15:25:05 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389626704!11431020!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31667 invoked from network); 13 Jan 2014 15:25:05 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 15:25:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0DFOv6C031569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 15:24:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFOuoa001572
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 15:24:57 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFOu85007745; Mon, 13 Jan 2014 15:24:56 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 07:24:56 -0800
Message-ID: <52D40569.2020904@oracle.com>
Date: Mon, 13 Jan 2014 10:25:29 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-11-git-send-email-boris.ostrovsky@oracle.com>
	<52D4008A02000078001131CA@nat28.tlf.novell.com>
In-Reply-To: <52D4008A02000078001131CA@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 10/16] x86/VPMU: Initialize PMU for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 09:04 AM, Jan Beulich wrote:
>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/svm/vpmu.c
>> +++ b/xen/arch/x86/hvm/svm/vpmu.c
>> @@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
>>   	 }
>>       }
>>   
>> -    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) +
>> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS +
>> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
>> -    if ( !ctxt )
>> +    if ( is_hvm_domain(v->domain) )
> Here and elsewhere - did you consider whether is_hvm_domain() is
> still the correct thing here now that PVH code is in?
>

I actually know that this is broken --- I ran a PVH guest a couple of 
days ago and it exploded because of this check.
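The failure mode can be sketched abstractly (all names here are
hypothetical, not Xen's): a two-way "is HVM?" predicate mis-classifies a
third guest type, so a PVH guest ends up on a code path set up only for
full HVM. In this sketch PVH is assumed to need the PV-style path; the
point is only that the check must distinguish all three types.

```c
/* Hypothetical three-way guest classification. */
enum guest_type { GUEST_PV, GUEST_HVM, GUEST_PVH };

/* Buggy predicate: "not PV" silently lumps PVH in with HVM. */
static int uses_hvm_only_setup_buggy(enum guest_type t)
{
    return t != GUEST_PV;
}

/* Fixed predicate: only a full HVM guest gets the HVM-only setup. */
static int uses_hvm_only_setup_fixed(enum guest_type t)
{
    return t == GUEST_HVM;
}
```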

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 09:04 AM, Jan Beulich wrote:
>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/svm/vpmu.c
>> +++ b/xen/arch/x86/hvm/svm/vpmu.c
>> @@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
>>   	 }
>>       }
>>   
>> -    ctxt = xzalloc_bytes(sizeof(struct amd_vpmu_context) +
>> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS +
>> -			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
>> -    if ( !ctxt )
>> +    if ( is_hvm_domain(v->domain) )
> Here and elsewhere - did you consider whether is_hvm_domain() is
> still the correct thing here now that PVH code is in?
>

I actually know that this is broken --- I ran a PVH guest a couple of 
days ago and it exploded because of this check.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jUN-0000m7-Af; Mon, 13 Jan 2014 15:31:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2jUL-0000m2-Hc
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:31:41 +0000
Received: from [85.158.143.35:50710] by server-3.bemta-4.messagelabs.com id
	43/66-32360-CD604D25; Mon, 13 Jan 2014 15:31:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389627098!11415610!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25366 invoked from network); 13 Jan 2014 15:31:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 15:31:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92340166"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 15:31:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 10:31:37 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W2jUH-0004UI-Jt;
	Mon, 13 Jan 2014 15:31:37 +0000
Message-ID: <52D406D9.7060407@citrix.com>
Date: Mon, 13 Jan 2014 15:31:37 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
In-Reply-To: <52D410E102000078001132F8@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 15:14, Jan Beulich wrote:
> Aiming at a release in late January or early February, I'd like to cut
> RC1s later this or early next week.
>
> Please indicate any bug fixes that so far may have been missed
> in the backports already done.
>
> Thanks, Jan

Looking through the XenServer patch queue on top of stable-4.3:

All of these are suggestions for "might be useful to take" rather than
"strictly bugfixes".

Hypervisor:

* x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
xen-unstable.hg-27635.45bf542dd584
git: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704

* x86/ats: Fix parsing of 'ats' command line option
xen-unstable.hg-27831.bd84d8277c21
git: 7b5af1df122092243a3697409d5a5ad3b9944da4

* x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
xen-unstable.hg-27960.48cd67917186
git: a82e98d473fd212316ea5aa078a7588324b020e5

* kexec: prevent deadlock on reentry to the crash path
xen-unstable.hg-28067.065befe6d07e
git: 470f58c159410b280627c2ea7798ea12ad93bd7c

Tools:

* libxl: Allow 4 MB of video RAM for Cirrus graphics on traditional QEMU
xen-unstable.hg-27834.02b33d7e56f6
git: 13d13a45d0591fc195666ea20ddf8781a0367e88


~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:38:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jat-0000yO-Do; Mon, 13 Jan 2014 15:38:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2jar-0000wu-Ht
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:38:25 +0000
Received: from [85.158.137.68:47234] by server-5.bemta-3.messagelabs.com id
	4A/0B-25188-07804D25; Mon, 13 Jan 2014 15:38:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389627501!8816264!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13414 invoked from network); 13 Jan 2014 15:38:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:38:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 15:38:20 +0000
Message-Id: <52D416780200007800113360@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 15:38:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-8-git-send-email-boris.ostrovsky@oracle.com>
	<52D3FAB20200007800113162@nat28.tlf.novell.com>
	<52D404F7.9090906@oracle.com>
In-Reply-To: <52D404F7.9090906@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 07/16] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 16:23, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/13/2014 08:39 AM, Jan Beulich wrote:
>> Overall you should also prefix all types added to global scope with
>> "xen". I know this wasn't done consistently for older headers, but
>> we shouldn't be extending this name space cluttering.
> 
> You mean something like arch_xenpmu ==> xen_arch_pmu ?

Yes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:44:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2jgp-0001Xl-Dv; Mon, 13 Jan 2014 15:44:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2jgn-0001Xg-UR
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:44:34 +0000
Received: from [85.158.137.68:28880] by server-13.bemta-3.messagelabs.com id
	0B/F9-28603-1E904D25; Mon, 13 Jan 2014 15:44:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389627869!8861063!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10061 invoked from network); 13 Jan 2014 15:44:32 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 15:44:32 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0DFiKlo015779
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 15:44:21 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFiJkD005349
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 15:44:19 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DFiIih027270; Mon, 13 Jan 2014 15:44:18 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 07:44:18 -0800
Message-ID: <52D409F3.20808@oracle.com>
Date: Mon, 13 Jan 2014 10:44:51 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
	<52D402420200007800113210@nat28.tlf.novell.com>
In-Reply-To: <52D402420200007800113210@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 09:12 AM, Jan Beulich wrote:
>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> @@ -866,7 +867,6 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>           break;
>>   
>>       case 0x00000005: /* MONITOR/MWAIT */
>> -    case 0x0000000a: /* Architectural Performance Monitor Features */
>>       case 0x0000000b: /* Extended Topology Enumeration */
>>       case 0x8000000a: /* SVM revision and features */
>>       case 0x8000001b: /* Instruction Based Sampling */
>> @@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>       unsupported:
>>           a = b = c = d = 0;
>>           break;
>> -
>> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
>> +        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
>> +        break;
>>       default:
> Rather than removing a blank line here, you ought to insert a
> second one so that there's one before _and_ after the added
> code block.
>
> Furthermore the need to pass 0xa as the first argument suggests
> that you're not in line with the intentions of vpmu_do_cpuid():
> Either you drop the first parameter from the function, or you get
> your code in line with the existing caller.

Not sure I understand the problem. We fill a, b, c and d with HW values 
for dom0 and then call vpmu_do_cpuid() to adjust them if needed; whether 
the adjustment is needed is decided based on the first argument.

(There is a bug in this routine in that it looks like I am not calling 
vpmu_do_cpuid() for non-dom0 PV guests, but that's a different issue.)

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 15:58:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 15:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2juA-00028y-RB; Mon, 13 Jan 2014 15:58:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2ju9-00028q-4L
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 15:58:21 +0000
Received: from [85.158.137.68:26049] by server-11.bemta-3.messagelabs.com id
	FB/90-19379-B1D04D25; Mon, 13 Jan 2014 15:58:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389628698!8864584!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12849 invoked from network); 13 Jan 2014 15:58:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Jan 2014 15:58:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 13 Jan 2014 15:58:17 +0000
Message-Id: <52D41B2602000078001133C0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 13 Jan 2014 15:58:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
	<52D402420200007800113210@nat28.tlf.novell.com>
	<52D409F3.20808@oracle.com>
In-Reply-To: <52D409F3.20808@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 16:44, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/13/2014 09:12 AM, Jan Beulich wrote:
>>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> @@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>>       unsupported:
>>>           a = b = c = d = 0;
>>>           break;
>>> -
>>> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
>>> +        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
>>> +        break;
>>>       default:
>> Rather than removing a blank line here, you ought to insert a
>> second one so that there's one before _and_ after the added
>> code block.
>>
>> Furthermore the need to pass 0xa as the first argument suggests
>> that you're not in line with the intentions of vpmu_do_cpuid():
>> Either you drop the first parameter from the function, or you get
>> your code in line with the existing caller.
> 
> Not sure I understand the problem. We fill a, b, c and d with HW values 
> for dom0 and then call vpmu_do_cpuid() to adjust them if needed. And 
> whether or not this is needed is based on the first argument.

Did you look at the other call site? Iirc it calls the function for all
input values (and hence the need for the current first parameter
of the function). I take it that the expectation was that further
perfctr related leaves might be added later and/or perfctr code
may need to override more than just leaf 0xa values.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:17:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kCT-0003bh-S7; Mon, 13 Jan 2014 16:17:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W2kCS-0003bc-OI
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 16:17:16 +0000
Received: from [193.109.254.147:28923] by server-4.bemta-14.messagelabs.com id
	09/C8-03916-C8114D25; Mon, 13 Jan 2014 16:17:16 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389629835!9075841!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16972 invoked from network); 13 Jan 2014 16:17:15 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 13 Jan 2014 16:17:15 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53881 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W2k1D-0008HJ-MQ; Mon, 13 Jan 2014 17:05:39 +0100
Date: Mon, 13 Jan 2014 17:17:12 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1981506151.20140113171712@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] Hungtask on dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Got a hung task on dom0 after starting a guest with PCI passthrough; this kernel has that PCI branch from your tree.
(It's not always happening, so reproducibility is probably low.)

[ 5169.950009] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 5169.959236] xenwatch        D ffff88005908c240     0    46      2 0x00000000
[ 5169.969066]  ffff88005917f828 0000000000000216 ffff88005908c240 0000000000013500
[ 5169.978392]  ffff88005917ffd8 0000000000013500 ffff880059bbd2d0 ffff88005908c240
[ 5169.987680]  ffffffff00000004 ffff880057c89b00 ffff88005908c9b0 0000000000000002
[ 5169.997103] Call Trace:
[ 5170.006363]  [<ffffffff810e4f38>] ? __lock_acquire+0x418/0x2220
[ 5170.015924]  [<ffffffff810e4f38>] ? __lock_acquire+0x418/0x2220
[ 5170.025559]  [<ffffffff81a9a674>] schedule+0x24/0x70
[ 5170.034841]  [<ffffffff81a998ec>] schedule_timeout+0x1dc/0x240
[ 5170.044258]  [<ffffffff810e3034>] ? mark_held_locks+0x94/0x130
[ 5170.053321]  [<ffffffff81a9fa9b>] ? _raw_spin_unlock_irq+0x2b/0x50
[ 5170.062429]  [<ffffffff810e31cb>] ? trace_hardirqs_on_caller+0xfb/0x240
[ 5170.071479]  [<ffffffff81a9b06f>] wait_for_common+0xff/0x150
[ 5170.080617]  [<ffffffff81a9f8f5>] ? _raw_spin_unlock_irqrestore+0x65/0x90
[ 5170.090064]  [<ffffffff810d0e70>] ? try_to_wake_up+0x300/0x300
[ 5170.099121]  [<ffffffff81a9b0d8>] wait_for_completion+0x18/0x20
[ 5170.108189]  [<ffffffff814d9e6a>] pcistub_get_pci_dev_by_slot+0xda/0x110
[ 5170.117341]  [<ffffffff814db1b4>] xen_pcibk_export_device+0x34/0x120
[ 5170.126406]  [<ffffffff814db3cd>] xen_pcibk_setup_backend+0x12d/0x320
[ 5170.135205]  [<ffffffff814cf590>] ? xenbus_gather+0x150/0x170
[ 5170.143741]  [<ffffffff814db5ea>] xen_pcibk_be_watch+0x2a/0x40
[ 5170.152106]  [<ffffffff814dbc98>] xen_pcibk_xenbus_probe+0x168/0x180
[ 5170.160471]  [<ffffffff814d0926>] xenbus_dev_probe+0x66/0x110
[ 5170.168892]  [<ffffffff81628075>] driver_probe_device+0x75/0x210
[ 5170.176443]  [<ffffffff816282c0>] ? __driver_attach+0xb0/0xb0
[ 5170.184074]  [<ffffffff8162830b>] __device_attach+0x4b/0x60
[ 5170.191277]  [<ffffffff816262be>] bus_for_each_drv+0x4e/0xa0
[ 5170.198475]  [<ffffffff81627fc8>] device_attach+0x98/0xb0
[ 5170.205675]  [<ffffffff81627520>] bus_probe_device+0xb0/0xe0
[ 5170.212841]  [<ffffffff8162551b>] device_add+0x3bb/0x5b0
[ 5170.219943]  [<ffffffff8162f9cd>] ? device_pm_sleep_init+0x4d/0x80
[ 5170.227050]  [<ffffffff81625729>] device_register+0x19/0x20
[ 5170.234258]  [<ffffffff814d0491>] xenbus_probe_node+0x141/0x170
[ 5170.241277]  [<ffffffff81626386>] ? bus_for_each_dev+0x76/0xa0
[ 5170.248280]  [<ffffffff814d0690>] xenbus_dev_changed+0x1d0/0x1e0
[ 5170.255316]  [<ffffffff814cea00>] ? xs_watch+0x60/0x60
[ 5170.262238]  [<ffffffff814d0a06>] backend_changed+0x16/0x20
[ 5170.269271]  [<ffffffff814cea3d>] xenwatch_thread+0x3d/0x120
[ 5170.276671]  [<ffffffff810ddcf0>] ? __init_waitqueue_head+0x60/0x60
[ 5170.283670]  [<ffffffff810c58df>] kthread+0xdf/0x100
[ 5170.290776]  [<ffffffff81a9fa9b>] ? _raw_spin_unlock_irq+0x2b/0x50
[ 5170.297911]  [<ffffffff810c5800>] ? __init_kthread_worker+0x70/0x70
[ 5170.305132]  [<ffffffff81aa098c>] ret_from_fork+0x7c/0xb0
[ 5170.312352]  [<ffffffff810c5800>] ? __init_kthread_worker+0x70/0x70
[ 5170.319611] 3 locks held by xenwatch/46:
[ 5170.326974]  #0:  (xenwatch_mutex){+.+.+.}, at: [<ffffffff814cea7e>] xenwatch_thread+0x7e/0x120
[ 5170.334363]  #1:  (&__lockdep_no_validate__){......}, at: [<ffffffff81627f48>] device_attach+0x18/0xb0
[ 5170.341976]  #2:  (&pdev->dev_lock){+.+.+.}, at: [<ffffffff814db2d0>] xen_pcibk_setup_backend+0x30/0x320

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:20:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kF6-000495-FI; Mon, 13 Jan 2014 16:20:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2kF5-000490-1I
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 16:19:59 +0000
Received: from [85.158.137.68:22367] by server-12.bemta-3.messagelabs.com id
	F1/B4-20055-D2214D25; Mon, 13 Jan 2014 16:19:57 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389629994!8888560!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3191 invoked from network); 13 Jan 2014 16:19:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 16:19:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0DGJlSE010150
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 13 Jan 2014 16:19:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0DGJjRe005203
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 13 Jan 2014 16:19:46 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0DGJjnT019411; Mon, 13 Jan 2014 16:19:45 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 08:19:44 -0800
Message-ID: <52D41241.8010406@oracle.com>
Date: Mon, 13 Jan 2014 11:20:17 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-12-git-send-email-boris.ostrovsky@oracle.com>
	<52D402420200007800113210@nat28.tlf.novell.com>
	<52D409F3.20808@oracle.com>
	<52D41B2602000078001133C0@nat28.tlf.novell.com>
In-Reply-To: <52D41B2602000078001133C0@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 11/16] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 10:58 AM, Jan Beulich wrote:
>>>> On 13.01.14 at 16:44, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 01/13/2014 09:12 AM, Jan Beulich wrote:
>>>>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> @@ -875,7 +875,9 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>>>        unsupported:
>>>>            a = b = c = d = 0;
>>>>            break;
>>>> -
>>>> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
>>>> +        vpmu_do_cpuid(0xa, &a, &b, &c, &d);
>>>> +        break;
>>>>        default:
>>> Rather than removing a blank line here, you ought to insert a
>>> second one so that there's one before _and_ after the added
>>> code block.
>>>
>>> Furthermore the need to pass 0xa as the first argument suggests
>>> that you're not in line with the intentions of vpmu_do_cpuid():
>>> Either you drop the first parameter from the function, or you get
>>> your code in line with the existing caller.
>> Not sure I understand the problem. We fill a, b, c and d with HW values
>> for dom0 and then call vpmu_do_cpuid() to adjust them if needed. And
>> whether or not this is needed is based on the first argument.
> Did you look at the other call site? Iirc it calls the function for all
> input values (and hence the need for the current first parameter
> of the function). I take it that the expectation was that further
> perfctr related leaves might be added later and/or perfctr code
> may need to override more than just leaf 0xa values.
>


Oh, you meant the problem is that I don't cover leaves *other* than 0xa.

I can move vpmu_do_cpuid() down to the exit path ("out" label) so that 
all cpuid calls are checked. This will add latency to every cpuid 
invocation in the guest but I guess it should be negligible (and this is 
what HVM does already).

Incidentally, this will also solve my non-dom0 CPUID bug that I mentioned.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kcL-0005Ef-KT; Mon, 13 Jan 2014 16:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2kcK-0005Ea-Eg
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 16:44:00 +0000
Received: from [85.158.139.211:59964] by server-13.bemta-5.messagelabs.com id
	B5/8E-11357-FC714D25; Mon, 13 Jan 2014 16:43:59 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389631435!9490547!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7574 invoked from network); 13 Jan 2014 16:43:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 16:43:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90243984"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 16:43:55 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 11:43:55 -0500
Message-ID: <52D417C8.4090003@citrix.com>
Date: Mon, 13 Jan 2014 16:43:52 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>, Jan Beulich <jbeulich@suse.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1389568525-1948-1-git-send-email-zoltan.kiss@citrix.com>
	<52D3BD8F.1000707@citrix.com>
In-Reply-To: <52D3BD8F.1000707@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 10:18, Roger Pau Monné wrote:
>> @@ -958,26 +942,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>>   					INVALID_P2M_ENTRY);
>>   		}
>> -		return ret;
>> -	}
>> -
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> -		arch_enter_lazy_mmu_mode();
>> -		lazy = true;
>> -	}
>> -
>> -	for (i = 0; i < count; i++) {
>> -		ret = m2p_remove_override(pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> -		if (ret)
>> -			goto out;
>> +	} else {
>> +		for (i = 0; i < count; i++) {
>> +				set_phys_to_machine(page_to_pfn(pages[i]),
>> +						    pages[i]->index);
>
> You seem to rely on page->index containing the old mfn, but I don't see
> it being set on gnttab_map_refs (it's only set on m2p_add_override
> AFAICT), maybe I'm missing something?
>
> Roger.
>

Indeed, I'll fix that, and I will also cut the function prototype update out into a separate patch for better readability.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:50:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:50:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kiA-0005oO-M7; Mon, 13 Jan 2014 16:50:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2ki8-0005oJ-TF
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 16:50:01 +0000
Received: from [85.158.137.68:15208] by server-4.bemta-3.messagelabs.com id
	A0/B7-10414-83914D25; Mon, 13 Jan 2014 16:50:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389631797!8895323!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7698 invoked from network); 13 Jan 2014 16:49:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 16:49:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90246709"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 16:49:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 11:49:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2ki4-00015E-8o;
	Mon, 13 Jan 2014 16:49:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2ki4-0005cX-2B;
	Mon, 13 Jan 2014 16:49:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21204.6451.927124.472470@mariner.uk.xensource.com>
Date: Mon, 13 Jan 2014 16:49:55 +0000
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <52D410E102000078001132F8@nat28.tlf.novell.com>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("preparing for 4.3.2 and 4.2.4"):
> Please indicate any bug fixes that so far may have been missed
> in the backports already done.

All I'm aware of is

  9.1.14

(that's the date they went into staging)

  2d03be65d5c50053fec4a5fa1d691972e5d953c9
  libxl: Auto-assign NIC devids in initiate_domain_create

  1671cdeac7da663fb2963f3e587fa279dcd0238b
  tools/libxc: Correct read_exact() error messages

But Ian C probably has a nearly-disjoint view of bugfixes which have
gone into staging and could do with backporting.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:53:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2klh-0005wk-5h; Mon, 13 Jan 2014 16:53:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W2klf-0005we-Nk
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 16:53:39 +0000
Received: from [85.158.143.35:9080] by server-1.bemta-4.messagelabs.com id
	C3/3B-02132-31A14D25; Mon, 13 Jan 2014 16:53:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389632017!11373631!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15307 invoked from network); 13 Jan 2014 16:53:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 16:53:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92375088"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 16:53:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 11:53:36 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W2klb-0005kU-Up;
	Mon, 13 Jan 2014 16:53:35 +0000
Message-ID: <52D41A0F.5030707@citrix.com>
Date: Mon, 13 Jan 2014 16:53:35 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>, Ian Campbell
	<Ian.Campbell@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] libxc linux privcmd bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

More bugs for tracking:

1) PERROR("Could not obtain handle on privileged command interface");
doesn't indicate which path(s) were tried.

2) xc_linux_osdep.c:linux_privcmd_open() only tries to open
"/proc/xen/privcmd" which is the classic-xen location, whereas the
expected location under PVops is "/dev/xen/privcmd".  The other open
functions (evtchn, gnttab, gntshr) exclusively use "/dev/xen/$FOO".

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 16:57:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 16:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kp5-000673-1X; Mon, 13 Jan 2014 16:57:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2kp3-00066w-1Z
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 16:57:09 +0000
Received: from [85.158.143.35:56570] by server-3.bemta-4.messagelabs.com id
	3D/1A-32360-4EA14D25; Mon, 13 Jan 2014 16:57:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389632226!11394419!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25477 invoked from network); 13 Jan 2014 16:57:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 16:57:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="92376357"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 16:57:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 11:57:05 -0500
Message-ID: <1389632224.13654.101.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 13 Jan 2014 16:57:04 +0000
In-Reply-To: <21204.6451.927124.472470@mariner.uk.xensource.com>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
	<21204.6451.927124.472470@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 16:49 +0000, Ian Jackson wrote:
> Jan Beulich writes ("preparing for 4.3.2 and 4.2.4"):
> > Please indicate any bug fixes that so far may have been missed
> > in the backports already done.
> 
> All I'm aware of is
> 
>   9.1.14
> 
> (that's the date they went into staging)
> 
>   2d03be65d5c50053fec4a5fa1d691972e5d953c9
>   libxl: Auto-assign NIC devids in initiate_domain_create
> 
>   1671cdeac7da663fb2963f3e587fa279dcd0238b
>   tools/libxc: Correct read_exact() error messages
> 
> But Ian C probably has a nearly-disjoint view of bugfixes which have
> gone into staging and could do with backporting.

I'm afraid I've not really been paying attention to things at that
level, sorry :-/

Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:06:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:06:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2kxi-0006kT-4d; Mon, 13 Jan 2014 17:06:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2kxg-0006kO-OX
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 17:06:04 +0000
Received: from [85.158.143.35:46880] by server-2.bemta-4.messagelabs.com id
	39/49-11386-CFC14D25; Mon, 13 Jan 2014 17:06:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389632762!11456995!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7620 invoked from network); 13 Jan 2014 17:06:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:06:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90253483"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 17:06:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 12:06:00 -0500
Message-ID: <1389632760.13654.107.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 13 Jan 2014 17:06:00 +0000
In-Reply-To: <52D41A0F.5030707@citrix.com>
References: <52D41A0F.5030707@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libxc linux privcmd bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it libxc should use /dev/xen/privcmd on Linux
severity it wishlist
thanks

On Mon, 2014-01-13 at 16:53 +0000, Andrew Cooper wrote:
> Hello,
> 
> More bugs for tracking:
> 
> 1) PERROR("Could not obtain handle on privileged command interface");
> doesn't indicate which path(s) were tried.
> 
> 2) xc_linux_osdep.c:linux_privcmd_open() only tries to open
> "/proc/xen/privcmd" which is the classic-xen location, whereas the
> expected location under PVops is "/dev/xen/privcmd".

Nothing has ever used /dev/xen/privcmd since it was reimplemented to
replace xenfs, so describing it as "expected" is a bit strong. At best
there is a wishlist issue here for someone to finish implementing this
transition, which I've now created.

http://lists.xen.org/archives/html/xen-devel/2011-12/msg01344.html
http://lists.xen.org/archives/html/xen-devel/2011-12/msg01345.html
look to be related patches; not sure why they didn't go in back then.
Lack of an S-o-b, perhaps.

>   The other open
> functions (evtchn, gnttab, gntshr) exclusively use "/dev/xen/$FOO".

These have only ever been driver-based rather than proc-based, so there
was never a transition.
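For reference, the fallback the wishlist item asks for could look roughly like the sketch below. This is a hypothetical illustration only: `open_privcmd` and `privcmd_paths` are invented names, and the real code in xc_linux_osdep.c differs. It also records every path tried, addressing bug 1 (the uninformative PERROR).

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Candidate device nodes, tried in order: the PVops driver node first,
 * then the classic-xen / xenfs location libxc uses today. */
static const char *privcmd_paths[] = {
    "/dev/xen/privcmd",      /* PVops kernels (driver-based) */
    "/proc/xen/privcmd",     /* classic-xen location */
};

/* Try each path in turn; on total failure, 'tried' holds a space-
 * separated list of the paths attempted, so the caller's error
 * message can name them instead of a bare "could not obtain handle". */
static int open_privcmd(char *tried, size_t len)
{
    tried[0] = '\0';
    for (size_t i = 0;
         i < sizeof(privcmd_paths) / sizeof(privcmd_paths[0]); i++)
    {
        int fd = open(privcmd_paths[i], O_RDWR);
        if (fd >= 0)
            return fd;
        /* Record the failed path for the eventual error message. */
        strncat(tried, privcmd_paths[i], len - strlen(tried) - 1);
        strncat(tried, " ", len - strlen(tried) - 1);
    }
    return -1;
}
```

On a host without Xen both opens fail and the caller gets the full list of attempted paths back.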

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:10:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2l1r-0007RI-H4; Mon, 13 Jan 2014 17:10:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2l1o-0007RB-TA
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:10:21 +0000
Received: from [85.158.143.35:17421] by server-1.bemta-4.messagelabs.com id
	8C/25-02132-CFD14D25; Mon, 13 Jan 2014 17:10:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389633018!11402125!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11581 invoked from network); 13 Jan 2014 17:10:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:10:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90255138"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 17:10:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 12:10:16 -0500
Message-ID: <1389633015.13654.109.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 13 Jan 2014 17:10:15 +0000
In-Reply-To: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hrm, our TLB flush discipline is horribly confused, isn't it...

On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> The p2m is shared between VCPUs for each domain. Currently Xen only flush
> TLB on the local PCPU. This could result to mismatch between the mapping in the
> p2m and TLBs.
> 
> Flush TLB entries used by this domain on every PCPU. The flush can also be
> moved out of the loop because:
>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called

OK.

An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
worthwhile if that is the case.

(I'm not sure why ALLOCATE can't be replaced by allocation followed by
an INSERT; it seems a very special case.)

>     - INSERT: if valid = 1 that would means with have replaced a
>     page that already belongs to the domain. A VCPU can write on the wrong page.
>     This can append for dom0 with the 1:1 mapping because the mapping is not
>     removed from the p2m.

"append"? Do you mean "happen"?

In the non-dom0 1:1 case eventually the page will be freed, I guess by a
subsequent put_page elsewhere -- do they all contain the correct
flushing? Or do we just leak?

What happens if the page being replaced is foreign? Do we leak a
reference to another domain's page? (This is probably a separate issue
though, I suspect the put_page needs pulling out of the switch(op)
statement).

>     - REMOVE: except for grant-table (replace_grant_host_mapping),

What about this case? What makes it safe?

>  each
>     call to guest_physmap_remove_page are protected by the callers via a
>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
>     the page can't be allocated for another domain until the last put_page.

I have confirmed this is the case for guest_remove_page, memory_exchange
and XENMEM_remove_from_physmap.

There is one case I saw where this isn't true, which is gnttab_transfer;
AFAICT that will fail because steal_page unconditionally returns an
error on ARM. There is also a flush_tlb_mask there, FWIW.

>     - RELINQUISH : the domain is not running anymore so we don't care...

At some point this page will be reused, as will the VMID; where/how is
it ensured that a flush will happen before that point?

So I think the main outstanding question is what makes
replace_grant_host_mapping safe.

The other big issue is the leak of a foreign mapping on INSERT, but I
think that is separate.
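Abstracting away the ARM specifics, the change the patch makes is the classic deferred-flush pattern: accumulate a "did we replace a live entry" flag across the loop and flush once at the end, instead of flushing per entry. A minimal stand-alone sketch of just that pattern (pte_t, write_entries and the counting flush_tlb stub are invented for illustration; they are not Xen's types):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins: an entry with a valid bit, and a counter in
 * place of a real TLB flush so the behaviour is observable. */
typedef struct { bool valid; } pte_t;

static int tlb_flushes;                 /* counts flush_tlb() calls */
static void flush_tlb(void) { tlb_flushes++; }

/* Populate 'n' entries. The flag is OR-accumulated (the patch's
 * "flush |= ...valid"), so one flush covers the whole range, and no
 * flush happens if no previously-valid entry was replaced. */
static void write_entries(pte_t *tab, int n)
{
    bool flush = false;
    for (int i = 0; i < n; i++) {
        flush |= tab[i].valid;          /* was: per-entry flush */
        tab[i].valid = true;
    }
    if (flush)
        flush_tlb();                    /* single flush after the loop */
}
```

The correctness question Ian raises above is exactly the premise this pattern relies on: nothing may act on a stale translation between the entry update and the single deferred flush.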

> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     Changes in v2:
>         - Switch to the domain for only flush its TLBs entries
>         - Move the flush out of the loop
> 
> This is a possible bug fix (found by reading the code) for Xen 4.4, I moved the
> flush out of the loop which should be safe (see why in the commit message).
> Without this patch, the guest can have stale TLB entries when the VCPU is moved
> to another PCPU.
> 
> Except grant-table (I can't find {get,put}_page for grant-table code???),
> all the callers are protected by a get_page before removing the page. So if the
> another VCPU is trying to access to this page before the flush, it will just
> read/write the wrong page.
> 
> The downside of this patch is Xen flushes less TLBs. Instead of flushing all TLBs
> on the current PCPU, Xen flushes TLBs for a specific VMID on every CPUs. This
> should be safe because create_p2m_entries only deal with a specific domain.
> 
> I don't think I forget case in this function. Let me know if it's the case.
> ---
>  xen/arch/arm/p2m.c |   24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 11f4714..ad6f76e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
>                       int mattr,
>                       p2m_type_t t)
>  {
> -    int rc, flush;
> +    int rc;
>      struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t *first = NULL, *second = NULL, *third = NULL;
>      paddr_t addr;
> @@ -246,10 +246,14 @@ static int create_p2m_entries(struct domain *d,
>                    cur_first_offset = ~0,
>                    cur_second_offset = ~0;
>      unsigned long count = 0;
> +    unsigned int flush = 0;
>      bool_t populate = (op == INSERT || op == ALLOCATE);
>  
>      spin_lock(&p2m->lock);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(d);
> +
>      addr = start_gpaddr;
>      while ( addr < end_gpaddr )
>      {
> @@ -316,7 +320,7 @@ static int create_p2m_entries(struct domain *d,
>              cur_second_offset = second_table_offset(addr);
>          }
>  
> -        flush = third[third_table_offset(addr)].p2m.valid;
> +        flush |= third[third_table_offset(addr)].p2m.valid;
>  
>          /* Allocate a new RAM page and attach */
>          switch (op) {
> @@ -373,9 +377,6 @@ static int create_p2m_entries(struct domain *d,
>                  break;
>          }
>  
> -        if ( flush )
> -            flush_tlb_all_local();
> -
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
>          if ( op == RELINQUISH && count >= 0x2000 )
>          {
> @@ -392,6 +393,16 @@ static int create_p2m_entries(struct domain *d,
>          addr += PAGE_SIZE;
>      }
>  
> +    if ( flush )
> +    {
> +        /* At the beginning of the function, Xen is updating VTTBR
> +         * with the domain where the mappings are created. In this
> +         * case it's only necessary to flush TLBs on every CPUs with
> +         * the current VMID (our domain).
> +         */
> +        flush_tlb();
> +    }
> +
>      if ( op == ALLOCATE || op == INSERT )
>      {
>          unsigned long sgfn = paddr_to_pfn(start_gpaddr);
> @@ -409,6 +420,9 @@ out:
>      if (second) unmap_domain_page(second);
>      if (first) unmap_domain_page(first);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(current->domain);
> +
>      spin_unlock(&p2m->lock);
>  
>      return rc;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:10:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2l1r-0007RI-H4; Mon, 13 Jan 2014 17:10:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2l1o-0007RB-TA
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:10:21 +0000
Received: from [85.158.143.35:17421] by server-1.bemta-4.messagelabs.com id
	8C/25-02132-CFD14D25; Mon, 13 Jan 2014 17:10:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389633018!11402125!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11581 invoked from network); 13 Jan 2014 17:10:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:10:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90255138"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 17:10:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 12:10:16 -0500
Message-ID: <1389633015.13654.109.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 13 Jan 2014 17:10:15 +0000
In-Reply-To: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hrm, our TLB flush discipline is horribly confused isn't it...

On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> The p2m is shared between VCPUs for each domain. Currently Xen only flush
> TLB on the local PCPU. This could result to mismatch between the mapping in the
> p2m and TLBs.
> 
> Flush TLB entries used by this domain on every PCPU. The flush can also be
> moved out of the loop because:
>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called

OK.

An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
worthwhile if that is the case.

(I'm not sure why ALLOCATE can't be replaced by allocation followed by
an INSERT, it's seems very special case)

>     - INSERT: if valid = 1 that would means with have replaced a
>     page that already belongs to the domain. A VCPU can write on the wrong page.
>     This can append for dom0 with the 1:1 mapping because the mapping is not
>     removed from the p2m.

"append"? Do you mean "happen"?

In the non-dom0 1:1 case eventually the page will be freed, I guess by a
subsequent put_page elsewhere -- do they all contain the correct
flushing? Or do we just leak?

What happens if the page being replaced is foreign? Do we leak a
reference to another domain's page? (This is probably a separate issue
though, I suspect the put_page needs pulling out of the switch(op)
statement).

>     - REMOVE: except for grant-table (replace_grant_host_mapping),

What about this case? What makes it safe?

>  each
>     call to guest_physmap_remove_page are protected by the callers via a
>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
>     the page can't be allocated for another domain until the last put_page.

I have confirmed this is the case for guest_remove_page, memory_exchange
and XENMEM_remove_from_physmap.

There is one case I saw where this isn't true which is gnttab_transfer,
AFAICT that will fail because steal_page unconditionally returns an
error on arm. There is also a flush_tlb_mask there, FWIW.

>     - RELINQUISH : the domain is not running anymore so we don't care...

At some point this page will be reused, as will the VMID, where/how is
it ensured that a flush will happen before that point? 

So I think the main outstanding question is what makes
replace_grant_host_mapping safe.

The other big issue is the leak of a foreign mapping on INSERT, but I
think that is separate.

> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     Changes in v2:
>         - Switch to the domain for only flush its TLBs entries
>         - Move the flush out of the loop
> 
> This is a possible bug fix (found by reading the code) for Xen 4.4; I moved the
> flush out of the loop, which should be safe (see why in the commit message).
> Without this patch, the guest can have stale TLB entries when the VCPU is moved
> to another PCPU.
> 
> Except grant-table (I can't find {get,put}_page for grant-table code???),
> all the callers are protected by a get_page before removing the page. So if
> another VCPU is trying to access this page before the flush, it will just
> read/write the wrong page.
> 
> The downside of this patch is that Xen flushes fewer TLB entries: instead of
> flushing all TLBs on the current PCPU, Xen flushes the TLBs for a specific VMID
> on every PCPU. This should be safe because create_p2m_entries only deals with a
> specific domain.
> 
> I don't think I have forgotten any case in this function. Let me know if I have.
> ---
>  xen/arch/arm/p2m.c |   24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 11f4714..ad6f76e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
>                       int mattr,
>                       p2m_type_t t)
>  {
> -    int rc, flush;
> +    int rc;
>      struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t *first = NULL, *second = NULL, *third = NULL;
>      paddr_t addr;
> @@ -246,10 +246,14 @@ static int create_p2m_entries(struct domain *d,
>                    cur_first_offset = ~0,
>                    cur_second_offset = ~0;
>      unsigned long count = 0;
> +    unsigned int flush = 0;
>      bool_t populate = (op == INSERT || op == ALLOCATE);
>  
>      spin_lock(&p2m->lock);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(d);
> +
>      addr = start_gpaddr;
>      while ( addr < end_gpaddr )
>      {
> @@ -316,7 +320,7 @@ static int create_p2m_entries(struct domain *d,
>              cur_second_offset = second_table_offset(addr);
>          }
>  
> -        flush = third[third_table_offset(addr)].p2m.valid;
> +        flush |= third[third_table_offset(addr)].p2m.valid;
>  
>          /* Allocate a new RAM page and attach */
>          switch (op) {
> @@ -373,9 +377,6 @@ static int create_p2m_entries(struct domain *d,
>                  break;
>          }
>  
> -        if ( flush )
> -            flush_tlb_all_local();
> -
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
>          if ( op == RELINQUISH && count >= 0x2000 )
>          {
> @@ -392,6 +393,16 @@ static int create_p2m_entries(struct domain *d,
>          addr += PAGE_SIZE;
>      }
>  
> +    if ( flush )
> +    {
> +        /* At the beginning of the function, Xen is updating VTTBR
> +         * with the domain where the mappings are created. In this
> +         * case it's only necessary to flush TLBs on every CPUs with
> +         * the current VMID (our domain).
> +         */
> +        flush_tlb();
> +    }
> +
>      if ( op == ALLOCATE || op == INSERT )
>      {
>          unsigned long sgfn = paddr_to_pfn(start_gpaddr);
> @@ -409,6 +420,9 @@ out:
>      if (second) unmap_domain_page(second);
>      if (first) unmap_domain_page(first);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(current->domain);
> +
>      spin_unlock(&p2m->lock);
>  
>      return rc;
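The change the hunks above make can be reduced to a small standalone sketch (hypothetical names, not the actual Xen code): rather than flushing inside the loop every time a valid entry is replaced, accumulate whether any replacement happened and issue a single flush after the loop.

```c
/* Standalone sketch of the accumulate-then-flush-once pattern the patch
 * introduces in create_p2m_entries. */
#include <assert.h>
#include <stdbool.h>

struct pte { bool valid; };

static int flush_count;                 /* how often the expensive flush ran */

static void flush_tlb(void) { flush_count++; }

static void update_entries(struct pte *table, int n)
{
    bool flush = false;
    int i;

    for (i = 0; i < n; i++) {
        flush |= table[i].valid;        /* accumulate; do not flush here */
        table[i].valid = true;          /* install the new mapping */
    }

    if (flush)
        flush_tlb();                    /* one flush covers every replacement */
}
```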



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:11:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2l2b-0007Vf-05; Mon, 13 Jan 2014 17:11:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W2l2a-0007VS-3I
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 17:11:08 +0000
Received: from [85.158.139.211:34845] by server-11.bemta-5.messagelabs.com id
	44/7F-23268-B2E14D25; Mon, 13 Jan 2014 17:11:07 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389633065!6777408!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13608 invoked from network); 13 Jan 2014 17:11:06 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 13 Jan 2014 17:11:06 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W2l6M-0007Oy-Jh; Mon, 13 Jan 2014 17:15:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1389633302.28454@bugs.xenproject.org>
References: <52D41A0F.5030707@citrix.com>
	<1389632776.13654.108.camel@kazak.uk.xensource.com>
In-Reply-To: <1389632776.13654.108.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 17:15:02 +0000
Subject: [Xen-devel] Processed: Re: libxc linux privcmd bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #34 rooted at `<52D41A0F.5030707@citrix.com>'
Title: `Re: libxc linux privcmd bugs'
> title it libxc should use /dev/xen/privcmd on Linux
Set title for #34 to `libxc should use /dev/xen/privcmd on Linux'
> severity it wishlist
Change severity for #34 to `wishlist'
> thanks
Finished processing.

Modified/created Bugs:
 - 34: http://bugs.xenproject.org/xen/bug/34 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2l7P-0007jI-OE; Mon, 13 Jan 2014 17:16:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2l7O-0007jD-Q2
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:16:07 +0000
Received: from [85.158.139.211:4979] by server-6.bemta-5.messagelabs.com id
	12/E8-16310-65F14D25; Mon, 13 Jan 2014 17:16:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389633363!6778514!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6801 invoked from network); 13 Jan 2014 17:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,653,1384300800"; d="scan'208";a="90257722"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 17:16:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 12:16:02 -0500
Message-ID: <1389633361.13654.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 13 Jan 2014 17:16:01 +0000
In-Reply-To: <1389286683-11656-1-git-send-email-julien.grall@linaro.org>
References: <1389286683-11656-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: correct flush_tlb_mask behaviour
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 16:58 +0000, Julien Grall wrote:
> On ARM, flush_tlb_mask is used in the common code:
>     - alloc_heap_pages: the flush is only called if the newly allocated
>     page was used by a domain before, so we only need to flush non-secure
> non-hyp TLBs inner-shareable.
>     - common/grant-table.c: every call to flush_tlb_mask is made with
>     the current domain; a TLB flush by current VMID inner-shareable is enough.
> 
> The current code only flushes the hypervisor TLB on the current PCPU. For now,
> flush non-secure non-hyp TLBs on every PCPU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>     This patch is a bug fix for Xen 4.4. We were safe because there is already
> a flush in create_p2m_entries if the previous mapping was valid.
> 
> For Xen 4.5, we should optimize the function to avoid a flush for every VMID
> each time we allocate a new page.

Right, this whole area is ripe for rationalisation and optimisation.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:38:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:38:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2lSW-0000Lo-QP; Mon, 13 Jan 2014 17:37:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W2lSV-0000Lj-0w
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:37:55 +0000
Received: from [85.158.139.211:22744] by server-7.bemta-5.messagelabs.com id
	DC/84-04824-27424D25; Mon, 13 Jan 2014 17:37:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389634673!9505775!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10495 invoked from network); 13 Jan 2014 17:37:53 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:37:53 -0000
Received: by mail-wi0-f182.google.com with SMTP id ex4so692086wid.3
	for <xen-devel@lists.xenproject.org>;
	Mon, 13 Jan 2014 09:37:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=t1pX/Qh9WWCqXgUzjS4ixT+/dzxV+ZUtiFa4Kr/rLg0=;
	b=Gyv+CSwFgw1b1jEAVFxE/chTegS415iF1GbdMdgZXVaFJu2GT+C9P8U2nYJyIvL3Aa
	6fP06NwvEvRhcMQKiu4Gq+tv9u+1SxXhFeIKg3d+8uucUQuZzKmpcBs0OebWQ2KxZz+o
	/j7lcm/kxh4XzV93D/WlSPRr4a7blXZtXwr3qCCFZadOAwnEfHjwLiFdcjtqbnStAyoH
	VSYAbW0HMlxeDGjOw1XZLgT9uiKRpLbdHKhc8KChJf2UjDZmgxi1ZCkZiyirJ6FpBxXp
	UPaTxynsj7aLrDPFPGKhqkkbLr7q85wwhG6sQevOrhKZDB+URTtCJfU8C44KMBM9AWxF
	YgAw==
X-Gm-Message-State: ALoCoQlO967TxYnFGepfsnKKTafyIvzkoq6B01XO9K9OzKBlomABIatqeUQxogUbrTHmTvgHkMQt
X-Received: by 10.180.12.83 with SMTP id w19mr16462304wib.16.1389634673056;
	Mon, 13 Jan 2014 09:37:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id bc5sm18920492wib.4.2014.01.13.09.37.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 13 Jan 2014 09:37:52 -0800 (PST)
Message-ID: <52D4246D.1050400@linaro.org>
Date: Mon, 13 Jan 2014 17:37:49 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
In-Reply-To: <1389633015.13654.109.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 05:10 PM, Ian Campbell wrote:
> Hrm, our TLB flush discipline is horribly confused isn't it...
> 
> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
>> The p2m is shared between VCPUs for each domain. Currently Xen only flushes
>> the TLB on the local PCPU. This could result in a mismatch between the
>> mappings in the p2m and the TLBs.
>>
>> Flush TLB entries used by this domain on every PCPU. The flush can also be
>> moved out of the loop because:
>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> 
> OK.
> 
> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
> worthwhile if that is the case.

Will add it.

> 
> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
> an INSERT, it's seems very special case)
> 
>>     - INSERT: if valid = 1 that would means with have replaced a
>>     page that already belongs to the domain. A VCPU can write on the wrong page.
>>     This can append for dom0 with the 1:1 mapping because the mapping is not
>>     removed from the p2m.
> 
> "append"? Do you mean "happen"?

I meant "happen".

> 
> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
> subsequent put_page elsewhere -- do they all contain the correct
> flushing? Or do we just leak?

As for foreign mappings, the INSERT path should be hardened. We have no
protection against the guest replacing a currently valid mapping.

> What happens if the page being replaced is foreign? Do we leak a
> reference to another domain's page? (This is probably a separate issue
> though, I suspect the put_page needs pulling out of the switch(op)
> statement).

Right, we leak a reference to another domain's page.

> 
>>     - REMOVE: except for grant-table (replace_grant_host_mapping),
> 
> What about this case? What makes it safe?

As mentioned a bit later, I can't say whether it's safe or not; I was unable
to find a proof in the code.

>>  each
>>     call to guest_physmap_remove_page are protected by the callers via a
>>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
>>     the page can't be allocated for another domain until the last put_page.
> 
> I have confirmed this is the case for guest_remove_page, memory_exchange
> and XENMEM_remove_from_physmap.
> 
> There is one case I saw where this isn't true which is gnttab_transfer,
> AFAICT that will fail because steal_page unconditionally returns an
> error on arm. There is also a flush_tlb_mask there, FWIW.

Hmmm... I forgot this one. Why don't we have a {get,put}_page in this
function?

> 
>>     - RELINQUISH : the domain is not running anymore so we don't care...
> 
> At some point this page will be reused, as will the VMID, where/how is
> it ensured that a flush will happen before that point? 

When the VMID is reused, Xen will flush all TLB entries associated with
this VMID (see p2m_alloc_table).
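The safety argument for RELINQUISH can be sketched as follows (hypothetical names; the real flush lives in p2m_alloc_table): before a reused VMID is assigned to a new domain's p2m, every TLB entry tagged with that VMID is flushed, so stale entries left behind at relinquish time can never be observed by the VMID's next owner.

```c
/* Hypothetical sketch of flushing a reused VMID before it is handed to a
 * new domain's p2m. */
#include <assert.h>
#include <stdbool.h>

#define MAX_VMID 256

static bool vmid_has_stale_entries[MAX_VMID];

static void flush_tlb_by_vmid(int vmid)
{
    vmid_has_stale_entries[vmid] = false;
}

/* Stand-in for the flush done when a p2m table is (re)allocated. */
static void p2m_assign_vmid(int vmid)
{
    flush_tlb_by_vmid(vmid);               /* flush before first use */
    assert(!vmid_has_stale_entries[vmid]); /* new owner sees a clean TLB */
}
```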

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:38:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:38:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2lSW-0000Lo-QP; Mon, 13 Jan 2014 17:37:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W2lSV-0000Lj-0w
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:37:55 +0000
Received: from [85.158.139.211:22744] by server-7.bemta-5.messagelabs.com id
	DC/84-04824-27424D25; Mon, 13 Jan 2014 17:37:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389634673!9505775!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10495 invoked from network); 13 Jan 2014 17:37:53 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:37:53 -0000
Received: by mail-wi0-f182.google.com with SMTP id ex4so692086wid.3
	for <xen-devel@lists.xenproject.org>;
	Mon, 13 Jan 2014 09:37:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=t1pX/Qh9WWCqXgUzjS4ixT+/dzxV+ZUtiFa4Kr/rLg0=;
	b=Gyv+CSwFgw1b1jEAVFxE/chTegS415iF1GbdMdgZXVaFJu2GT+C9P8U2nYJyIvL3Aa
	6fP06NwvEvRhcMQKiu4Gq+tv9u+1SxXhFeIKg3d+8uucUQuZzKmpcBs0OebWQ2KxZz+o
	/j7lcm/kxh4XzV93D/WlSPRr4a7blXZtXwr3qCCFZadOAwnEfHjwLiFdcjtqbnStAyoH
	VSYAbW0HMlxeDGjOw1XZLgT9uiKRpLbdHKhc8KChJf2UjDZmgxi1ZCkZiyirJ6FpBxXp
	UPaTxynsj7aLrDPFPGKhqkkbLr7q85wwhG6sQevOrhKZDB+URTtCJfU8C44KMBM9AWxF
	YgAw==
X-Gm-Message-State: ALoCoQlO967TxYnFGepfsnKKTafyIvzkoq6B01XO9K9OzKBlomABIatqeUQxogUbrTHmTvgHkMQt
X-Received: by 10.180.12.83 with SMTP id w19mr16462304wib.16.1389634673056;
	Mon, 13 Jan 2014 09:37:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id bc5sm18920492wib.4.2014.01.13.09.37.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 13 Jan 2014 09:37:52 -0800 (PST)
Message-ID: <52D4246D.1050400@linaro.org>
Date: Mon, 13 Jan 2014 17:37:49 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
In-Reply-To: <1389633015.13654.109.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 05:10 PM, Ian Campbell wrote:
> Hrm, our TLB flush discipline is horribly confused isn't it...
> 
> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>> p2m and TLBs.
>>
>> Flush TLB entries used by this domain on every PCPU. The flush can also be
>> moved out of the loop because:
>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> 
> OK.
> 
> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
> worthwhile if that is the case.

Will add it.

> 
> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
> an INSERT, it's seems very special case)
> 
>>     - INSERT: if valid = 1 that would means with have replaced a
>>     page that already belongs to the domain. A VCPU can write on the wrong page.
>>     This can append for dom0 with the 1:1 mapping because the mapping is not
>>     removed from the p2m.
> 
> "append"? Do you mean "happen"?

I meant "happen".

> 
> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
> subsequent put_page elsewhere -- do they all contain the correct
> flushing? Or do we just leak?

As for foreign mapping the INSERT function should be hardened. We don't
have a protection against the guest is replacing a current valid mapping.

> What happens if the page being replaced is foreign? Do we leak a
> reference to another domain's page? (This is probably a separate issue
> though, I suspect the put_page needs pulling out of the switch(op)
> statement).

Right we leak a reference to another domain.

> 
>>     - REMOVE: except for grant-table (replace_grant_host_mapping),
> 
> What about this case? What makes it safe?

As specified a bit later, I can't say if it's safe or not. I was unabled
to find a proof in the code.

>>  each
>>     call to guest_physmap_remove_page are protected by the callers via a
>>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
>>     the page can't be allocated for another domain until the last put_page.
> 
> I have confirmed this is the case for guest_remove_page, memory_exchange
> and XENMEM_remove_from_physmap.
> 
> There is one case I saw where this isn't true which is gnttab_transfer,
> AFAICT that will fail because steal_page unconditionally returns an
> error on arm. There is also a flush_tlb_mask there, FWIW.

hmmm... I forgot this one... why don't we have a {get,put}_page in this
function?

> 
>>     - RELINQUISH : the domain is not running anymore so we don't care...
> 
> At some point this page will be reused, as will the VMID, where/how is
> it ensured that a flush will happen before that point? 

When the VMID is reused, Xen will flush all TLB entries associated with
this VMID (see p2m_alloc_table).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 17:57:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 17:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2llU-0001Pm-2o; Mon, 13 Jan 2014 17:57:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W2llS-0001Ph-R3
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 17:57:31 +0000
Received: from [85.158.143.35:56758] by server-1.bemta-4.messagelabs.com id
	F0/59-02132-A0924D25; Mon, 13 Jan 2014 17:57:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389635848!11466393!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 319 invoked from network); 13 Jan 2014 17:57:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 17:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="92398169"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 17:57:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 13 Jan 2014 12:57:26 -0500
Message-ID: <1389635845.13654.117.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 13 Jan 2014 17:57:25 +0000
In-Reply-To: <52D4246D.1050400@linaro.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
	<52D4246D.1050400@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 17:37 +0000, Julien Grall wrote:
> On 01/13/2014 05:10 PM, Ian Campbell wrote:
> > Hrm, our TLB flush discipline is horribly confused isn't it...
> > 
> > On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> >> The p2m is shared between VCPUs for each domain. Currently Xen only flush
> >> TLB on the local PCPU. This could result to mismatch between the mapping in the
> >> p2m and TLBs.
> >>
> >> Flush TLB entries used by this domain on every PCPU. The flush can also be
> >> moved out of the loop because:
> >>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> > 
> > OK.
> > 
> > An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
> > worthwhile if that is the case.
> 
> Will add it.

Thanks.

> > (I'm not sure why ALLOCATE can't be replaced by allocation followed by
> > an INSERT, it's seems very special case)
> > 
> >>     - INSERT: if valid = 1 that would means with have replaced a
> >>     page that already belongs to the domain. A VCPU can write on the wrong page.
> >>     This can append for dom0 with the 1:1 mapping because the mapping is not
> >>     removed from the p2m.
> > 
> > "append"? Do you mean "happen"?
> 
> I meant "happen".
> 
> > 
> > In the non-dom0 1:1 case eventually the page will be freed, I guess by a
> > subsequent put_page elsewhere -- do they all contain the correct
> > flushing? Or do we just leak?
> 
> As for foreign mapping the INSERT function should be hardened. We don't

Did you mean "handled"?

> have a protection against the guest is replacing a current valid mapping.
> 
> > What happens if the page being replaced is foreign? Do we leak a
> > reference to another domain's page? (This is probably a separate issue
> > though, I suspect the put_page needs pulling out of the switch(op)
> > statement).
> 
> Right we leak a reference to another domain.
> 
> > 
> >>     - REMOVE: except for grant-table (replace_grant_host_mapping),
> > 
> > What about this case? What makes it safe?
> 
> As specified a bit later, I can't say if it's safe or not. I was unabled
> to find a proof in the code.

Sorry, I seem to have forgotten to read the blurb after the cut. I'll
have to have a think about that aspect tomorrow.

> >>  each
> >>     call to guest_physmap_remove_page are protected by the callers via a
> >>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
> >>     the page can't be allocated for another domain until the last put_page.
> > 
> > I have confirmed this is the case for guest_remove_page, memory_exchange
> > and XENMEM_remove_from_physmap.
> > 
> > There is one case I saw where this isn't true which is gnttab_transfer,
> > AFAICT that will fail because steal_page unconditionally returns an
> > error on arm. There is also a flush_tlb_mask there, FWIW.
> 
> hmmm... I forgot this one... why don't we have a {get,put}_page in this
> function?

On x86 they seem to be in steal_page, which ARM doesn't implement.

> 
> > 
> >>     - RELINQUISH : the domain is not running anymore so we don't care...
> > 
> > At some point this page will be reused, as will the VMID, where/how is
> > it ensured that a flush will happen before that point? 
> 
> When the VMID is reused, Xen will flush everything TLBs associated to
> this VMID (see p2m_alloc_table);

So it does, I missed that when I looked.

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 18:16:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 18:16:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2m3D-0002WA-U4; Mon, 13 Jan 2014 18:15:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2m3C-0002W5-2C
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 18:15:50 +0000
Received: from [85.158.143.35:27881] by server-3.bemta-4.messagelabs.com id
	71/D5-32360-55D24D25; Mon, 13 Jan 2014 18:15:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389636947!10163646!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16532 invoked from network); 13 Jan 2014 18:15:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 18:15:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="92404640"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Jan 2014 18:15:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 13:15:46 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2m32-0001Ue-AL;
	Mon, 13 Jan 2014 18:15:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W2m32-0002TV-2J;
	Mon, 13 Jan 2014 18:15:40 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 13 Jan 2014 18:15:37 +0000
Message-ID: <1389636937-9477-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xl: Always use "fast" migration resume protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As Ian Campbell writes:

  There are two mechanisms by which a suspend can be aborted and the
  original domain resumed.

  The older method is that the toolstack resets a bunch of state (see
  tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
  restarts the domain. The domain will see HYPERVISOR_suspend return 0
  and will continue without any realisation that it is actually
  running in the original domain and not in a new one. This method is
  supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
  but it is not.

  The other method is newer and in this case the toolstack arranges
  that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
  domain will observe this and realise that it has been restarted in
  the same domain and will behave accordingly. This method is
  implemented, correctly AFAIK, by
  libxl_domain_resume(suspend_cancel=1).

Attempting to use the old method without doing all of the work simply
causes the guest to crash.  Implementing the work required for the old
method, or checking that domains actually support the new method,
is not feasible at this stage of the 4.4 release.

So, always use the new method, without regard to the declarations of
support by the guest.  This is a strict improvement: guests which do
in fact support the new method will work, whereas ones which don't are
no worse off.

There are two call sites of libxl_domain_resume that need fixing, both
in the migration error path.

With this change I observe a correct and successful resumption of a
Debian wheezy guest with a Linux 3.4.70 kernel after a migration
attempt which I arranged to fail by nobbling the block hotplug script.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: konrad.wilk@oracle.com
CC: David Vrabel <david.vrabel@citrix.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 tools/libxl/xl_cmdimpl.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..d93e01b 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3734,7 +3734,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
         if (common_domname) {
             libxl_domain_rename(ctx, domid, away_domname, common_domname);
         }
-        rc = libxl_domain_resume(ctx, domid, 0, 0);
+        rc = libxl_domain_resume(ctx, domid, 1, 0);
         if (!rc) fprintf(stderr, "migration sender: Resumed OK.\n");
 
         fprintf(stderr, "Migration failed due to problems at target.\n");
@@ -3756,7 +3756,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
     close(send_fd);
     migration_child_report(recv_fd);
     fprintf(stderr, "Migration failed, resuming at sender.\n");
-    libxl_domain_resume(ctx, domid, 0, 0);
+    libxl_domain_resume(ctx, domid, 1, 0);
     exit(-ERROR_FAIL);
 
  failed_badly:
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 18:39:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 18:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2mPP-0003p9-2a; Mon, 13 Jan 2014 18:38:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W2mPM-0003p4-Rm
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 18:38:44 +0000
Received: from [85.158.137.68:44428] by server-3.bemta-3.messagelabs.com id
	0B/EF-10658-3B234D25; Mon, 13 Jan 2014 18:38:43 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389638322!7719178!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27069 invoked from network); 13 Jan 2014 18:38:43 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 18:38:43 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 13 Jan 2014 18:38:41 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="630458486"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.2.86])
	by fldsmtpi03.verizon.com with ESMTP; 13 Jan 2014 18:38:40 +0000
Message-ID: <52D432AF.5030205@terremark.com>
Date: Mon, 13 Jan 2014 13:38:39 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, 
	xen-devel <xen-devel@lists.xenproject.org>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
In-Reply-To: <52D410E102000078001132F8@nat28.tlf.novell.com>
Cc: "lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/14 10:14, Jan Beulich wrote:
> Aiming at a release in late January or early February, I'd like to cut
> RC1s later this or early next week.
>
> Please indicate any bug fixes that so far may have been missed
> in the backports already done.
>
> Thanks, Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


I do not see:


commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3


in any 4.2 or 4.3 tree.  As Andrew Cooper said:


 > This is a hypervisor reference counting error on a toolstack hypercall
 > path.  Irrespective of any of the other patches in this series, I think
 > this should be included ASAP (although probably subject to review from a
 > third person), which will fix the hypervisor crashes from gdbsx usage.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/14 10:14, Jan Beulich wrote:
> Aiming at a release in late January or early February, I'd like to cut
> RC1s later this or early next week.
>
> Please indicate any bug fixes that so far may have been missed
> in the backports already done.
>
> Thanks, Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


I do not see:


commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3


in any 4.2 or 4.3 tree.  As Andrew Cooper said:


 > This is a hypervisor reference counting error on a toolstack hypercall
 > path.  Irrespective of any of the other patches in this series, I think
 > this should be included ASAP (although probably subject to review from a
 > third person), which will fix the hypervisor crashes from gdbsx usage.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:07:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2mr7-00053Z-GH; Mon, 13 Jan 2014 19:07:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W2mr6-00053U-35
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 19:07:24 +0000
Received: from [85.158.143.35:11422] by server-1.bemta-4.messagelabs.com id
	DA/1F-02132-B6934D25; Mon, 13 Jan 2014 19:07:23 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389640040!11390691!1
X-Originating-IP: [140.108.26.140]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDAgPT4gMzMxNTk1\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6121 invoked from network); 13 Jan 2014 19:07:21 -0000
Received: from fldsmtpe01.verizon.com (HELO fldsmtpe01.verizon.com)
	(140.108.26.140)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 19:07:21 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe01.verizon.com with ESMTP; 13 Jan 2014 19:07:19 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="630482415"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by fldsmtpi03.verizon.com with ESMTP; 13 Jan 2014 19:07:18 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Mon, 13 Jan 2014 13:35:07 -0500
Message-ID: <52D43175.3030904@terremark.com>
Date: Mon, 13 Jan 2014 13:33:25 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
In-Reply-To: <52D410E102000078001132F8@nat28.tlf.novell.com>
Cc: "lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/14 10:14, Jan Beulich wrote:
> Aiming at a release in late January or early February, I'd like to cut
> RC1s later this or early next week.
>
> Please indicate any bug fixes that so far may have been missed
> in the backports already done.
>
> Thanks, Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


I do not see:


commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3


in any 4.2 or 4.3 tree.  As Andrew Cooper said:


 > This is a hypervisor reference counting error on a toolstack hypercall
 > path.  Irrespective of any of the other patches in this series, I think
 > this should be included ASAP (although probably subject to review from a
 > third person), which will fix the hypervisor crashes from gdbsx usage.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:08:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2msW-0005Yi-Vz; Mon, 13 Jan 2014 19:08:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2msV-0005YZ-M5
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 19:08:51 +0000
Received: from [85.158.139.211:11701] by server-8.bemta-5.messagelabs.com id
	D4/CE-29838-3C934D25; Mon, 13 Jan 2014 19:08:51 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389640128!9507780!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28790 invoked from network); 13 Jan 2014 19:08:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 19:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="90294988"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 19:08:48 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 14:08:47 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 19:08:37 +0000
Message-ID: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v2 0/2] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- move the m2p_override part from gnttab_[un]map_refs to gntdev, where it is
  needed after mapping operations
- but move set_phys_to_machine from m2p_override to gnttab_[un]map_refs,
  because it is always needed
- update the function prototypes, as kmap_ops are no longer needed

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
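[Editorial note: the series above moves the p2m bookkeeping (remember the old
mfn, install the foreign one, restore it on unmap) out of m2p_override and into
gnttab_[un]map_refs. The following is a toy, userspace-only sketch of that
bookkeeping, not kernel code: the array-backed p2m table, `saved_mfn` (a
stand-in for page->index), and the function names are all illustrative.]

```c
#include <assert.h>

#define NPFN 8

static unsigned long p2m[NPFN];       /* toy pfn -> mfn translation table */
static unsigned long saved_mfn[NPFN]; /* stand-in for the kernel's page->index */

/* Map a foreign frame: remember the old translation, then override it
 * (the kernel's set_phys_to_machine() step). */
static int map_foreign(unsigned long pfn, unsigned long foreign_mfn)
{
	if (pfn >= NPFN)
		return -1;              /* kernel code would return -ENOMEM */
	saved_mfn[pfn] = p2m[pfn];      /* save the old mfn before overriding */
	p2m[pfn] = foreign_mfn;
	return 0;
}

/* Unmap: restore the translation that was saved at map time. */
static void unmap_foreign(unsigned long pfn)
{
	if (pfn < NPFN)
		p2m[pfn] = saved_mfn[pfn];
}
```

The point of the sketch is only the ordering: the old mfn must be captured
before the override so unmap can undo it, which is why the series stores it
in page->index inside gnttab_map_refs itself.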


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:09:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2mt5-0005ch-Dr; Mon, 13 Jan 2014 19:09:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2mt3-0005cB-Kk
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 19:09:25 +0000
Received: from [85.158.143.35:19153] by server-3.bemta-4.messagelabs.com id
	55/7C-32360-4E934D25; Mon, 13 Jan 2014 19:09:24 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389640162!4329337!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19646 invoked from network); 13 Jan 2014 19:09:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 19:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="90295181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 19:09:22 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 14:09:21 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 19:08:38 +0000
Message-ID: <1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/2] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch does the following:
- move the m2p_override part from gnttab_[un]map_refs to gntdev, where it is
  needed after mapping operations
- but move set_phys_to_machine from m2p_override to gnttab_[un]map_refs,
  because it is always needed

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c        |    5 ---
 drivers/xen/gntdev.c      |   62 ++++++++++++++++++++++++++++++++++---
 drivers/xen/grant-table.c |   75 +++++++++++++++------------------------------
 3 files changed, 83 insertions(+), 59 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..b1e9407 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -891,10 +891,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	WARN_ON(PagePrivate(page));
 	SetPagePrivate(page);
 	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -962,7 +958,6 @@ int m2p_remove_override(struct page *page,
 	WARN_ON(!PagePrivate(page));
 	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..b89aaa2 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -250,6 +250,9 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token,
 static int map_grant_pages(struct grant_map *map)
 {
 	int i, err = 0;
+	bool lazy = false;
+	pte_t *pte;
+	unsigned long mfn;
 
 	if (!use_ptemod) {
 		/* Note: it could already be mapped */
@@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs(map->map_ops, NULL, map->pages, map->count);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < map->count; i++) {
+		/* Do not add to override if the map failed. */
+		if (map->map_ops[i].status)
+			continue;
+
+		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
+				(map->map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
+		}
+		err = m2p_add_override(mfn,
+				       map->pages[i],
+				       use_ptemod ? &map->kmap_ops[i] : NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
@@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
 static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 {
 	int i, err = 0;
+	bool lazy = false;
 
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
@@ -316,8 +349,29 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 	}
 
 	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+				NULL,
+				map->pages + offset,
+				pages);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < pages; i++) {
+		err = m2p_remove_override((map->pages + offset)[i],
+					  use_ptemod ?
+					  &(map->kmap_ops + offset)[i] :
+					  NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..ad281e4 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -885,7 +885,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
@@ -904,40 +903,29 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		for (i = 0; i < count; i++) {
 			if (map_ops[i].status)
 				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
+				return -ENOMEM;
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+	} else {
+		for (i = 0; i < count; i++) {
+			if (map_ops[i].status)
+				continue;
+			if (map_ops[i].flags & GNTMAP_contains_pte) {
+				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+					(map_ops[i].host_addr & ~PAGE_MASK));
+				mfn = pte_mfn(*pte);
+			} else {
+				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+			}
+			pages[i]->index = pfn_to_mfn(page_to_pfn(pages[i]));
+			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
+							  FOREIGN_FRAME(mfn))))
+				return -ENOMEM;
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
@@ -946,7 +934,6 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
@@ -958,26 +945,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
+	} else {
+		for (i = 0; i < count; i++) {
+				set_phys_to_machine(page_to_pfn(pages[i]),
+						    pages[i]->index);
+		}
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:09:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2mt5-0005ch-Dr; Mon, 13 Jan 2014 19:09:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2mt3-0005cB-Kk
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 19:09:25 +0000
Received: from [85.158.143.35:19153] by server-3.bemta-4.messagelabs.com id
	55/7C-32360-4E934D25; Mon, 13 Jan 2014 19:09:24 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389640162!4329337!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19646 invoked from network); 13 Jan 2014 19:09:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 19:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="90295181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 19:09:22 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 14:09:21 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 19:08:38 +0000
Message-ID: <1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/2] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch does the following:
- moves the m2p_override calls from gnttab_[un]map_refs to gntdev, the only
  caller that needs them after mapping operations
- moves set_phys_to_machine from m2p_override into gnttab_[un]map_refs,
  because it is always needed

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c        |    5 ---
 drivers/xen/gntdev.c      |   62 ++++++++++++++++++++++++++++++++++---
 drivers/xen/grant-table.c |   75 +++++++++++++++------------------------------
 3 files changed, 83 insertions(+), 59 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..b1e9407 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -891,10 +891,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	WARN_ON(PagePrivate(page));
 	SetPagePrivate(page);
 	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -962,7 +958,6 @@ int m2p_remove_override(struct page *page,
 	WARN_ON(!PagePrivate(page));
 	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..b89aaa2 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -250,6 +250,9 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token,
 static int map_grant_pages(struct grant_map *map)
 {
 	int i, err = 0;
+	bool lazy = false;
+	pte_t *pte;
+	unsigned long mfn;
 
 	if (!use_ptemod) {
 		/* Note: it could already be mapped */
@@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs(map->map_ops, NULL, map->pages, map->count);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < map->count; i++) {
+		/* Do not add to override if the map failed. */
+		if (map->map_ops[i].status)
+			continue;
+
+		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
+				(map->map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
+		}
+		err = m2p_add_override(mfn,
+				       map->pages[i],
+				       use_ptemod ? &map->kmap_ops[i] : NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
@@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
 static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 {
 	int i, err = 0;
+	bool lazy = false;
 
 	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
 		int pgno = (map->notify.addr >> PAGE_SHIFT);
@@ -316,8 +349,29 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 	}
 
 	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+				NULL,
+				map->pages + offset,
+				pages);
+	if (err)
+		return err;
+
+	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < pages; i++) {
+		err = m2p_remove_override((map->pages + offset)[i],
+					  use_ptemod ?
+					  &(map->kmap_ops + offset)[i] :
+					  NULL);
+		if (err)
+			break;
+	}
+
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..ad281e4 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -885,7 +885,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
@@ -904,40 +903,29 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		for (i = 0; i < count; i++) {
 			if (map_ops[i].status)
 				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
+				return -ENOMEM;
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+	} else {
+		for (i = 0; i < count; i++) {
+			if (map_ops[i].status)
+				continue;
+			if (map_ops[i].flags & GNTMAP_contains_pte) {
+				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+					(map_ops[i].host_addr & ~PAGE_MASK));
+				mfn = pte_mfn(*pte);
+			} else {
+				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+			}
+			pages[i]->index = pfn_to_mfn(page_to_pfn(pages[i]));
+			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
+							  FOREIGN_FRAME(mfn))))
+				return -ENOMEM;
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
@@ -946,7 +934,6 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
@@ -958,26 +945,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
+	} else {
+		for (i = 0; i < count; i++) {
+				set_phys_to_machine(page_to_pfn(pages[i]),
+						    pages[i]->index);
+		}
 	}
 
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:09:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2mtN-0005gp-2Y; Mon, 13 Jan 2014 19:09:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W2mtL-0005gF-DH
	for xen-devel@lists.xenproject.org; Mon, 13 Jan 2014 19:09:43 +0000
Received: from [85.158.139.211:55595] by server-17.bemta-5.messagelabs.com id
	CC/12-19152-6F934D25; Mon, 13 Jan 2014 19:09:42 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389640180!9512776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19199 invoked from network); 13 Jan 2014 19:09:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 19:09:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="90295300"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 19:09:40 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 14:09:39 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
Date: Mon, 13 Jan 2014 19:08:39 +0000
Message-ID: <1389640119-7936-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/2] xen/grant-table: Remove kmap_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch updates the function prototypes as kmap_ops is no longer needed.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   15 ++++++---------
 drivers/xen/gntdev.c                |    3 +--
 drivers/xen/grant-table.c           |    2 --
 include/xen/grant_table.h           |    2 --
 4 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index b89aaa2..3a97342 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -287,7 +287,7 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, NULL, map->pages, map->count);
+	err = gnttab_map_refs(map->map_ops, map->pages, map->count);
 	if (err)
 		return err;
 
@@ -349,7 +349,6 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 	}
 
 	err = gnttab_unmap_refs(map->unmap_ops + offset,
-				NULL,
 				map->pages + offset,
 				pages);
 	if (err)
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index ad281e4..1471b5f 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -881,7 +881,6 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count)
 {
 	int i, ret;
@@ -930,7 +929,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kmap_ops,
 		      struct page **pages, unsigned int count)
 {
 	int i, ret;
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..93d363a 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,10 +184,8 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 19:27:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 19:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2nAT-0006Wh-Sd; Mon, 13 Jan 2014 19:27:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W2nAS-0006Wc-1o
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 19:27:24 +0000
Received: from [85.158.139.211:51699] by server-6.bemta-5.messagelabs.com id
	DB/BE-16310-B1E34D25; Mon, 13 Jan 2014 19:27:23 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389641242!9511254!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19032 invoked from network); 13 Jan 2014 19:27:22 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-2.tower-206.messagelabs.com with SMTP;
	13 Jan 2014 19:27:22 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 15E9D58351F;
	Mon, 13 Jan 2014 11:27:21 -0800 (PST)
Date: Mon, 13 Jan 2014 11:27:20 -0800 (PST)
Message-Id: <20140113.112720.2236966361404094251.davem@davemloft.net>
To: paul.durrant@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389261768-30606-4-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: netdev@vger.kernel.org, boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2 3/3] xen-netfront: use new
 skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Paul Durrant <paul.durrant@citrix.com>
Date: Thu, 9 Jan 2014 10:02:48 +0000

> Use skb_checksum_setup to set up partial checksum offsets rather
> than a private implementation.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

This patch really needs review by a netfront expert before I can apply
this series.

Thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 20:02:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 20:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2nhu-0000Vf-4r; Mon, 13 Jan 2014 20:01:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2nht-0000VW-1C
	for xen-devel@lists.xensource.com; Mon, 13 Jan 2014 20:01:57 +0000
Received: from [193.109.254.147:48879] by server-6.bemta-14.messagelabs.com id
	E7/4E-14958-43644D25; Mon, 13 Jan 2014 20:01:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389643314!9113994!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20579 invoked from network); 13 Jan 2014 20:01:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 20:01:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,654,1384300800"; d="scan'208";a="90314695"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Jan 2014 20:01:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 13 Jan 2014 15:01:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W2nho-00020j-0x;
	Mon, 13 Jan 2014 20:01:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W2nhn-0003Rb-WF;
	Mon, 13 Jan 2014 20:01:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24368-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 13 Jan 2014 20:01:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24368: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24368 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24368/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  e131045033e7235d17a0d4be88e3a550cfcaf375
baseline version:
 xen                  a01a5595305f7f18ac0477d3f248e8c2b30b051c

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tim Deegan <tim@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=e131045033e7235d17a0d4be88e3a550cfcaf375
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing e131045033e7235d17a0d4be88e3a550cfcaf375
+ branch=xen-4.2-testing
+ revision=e131045033e7235d17a0d4be88e3a550cfcaf375
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git e131045033e7235d17a0d4be88e3a550cfcaf375:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   a01a559..e131045  e131045033e7235d17a0d4be88e3a550cfcaf375 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 22:09:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 22:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2phG-0006Uu-8N; Mon, 13 Jan 2014 22:09:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W2phE-0006Up-N9
	for xen-devel@lists.xensource.com; Mon, 13 Jan 2014 22:09:24 +0000
Received: from [85.158.143.35:64443] by server-2.bemta-4.messagelabs.com id
	D7/CA-11386-41464D25; Mon, 13 Jan 2014 22:09:24 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389650957!11412612!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30724 invoked from network); 13 Jan 2014 22:09:23 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 22:09:23 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 13 Jan 2014 22:09:10 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,655,1384300800"; d="scan'208";a="647294894"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.31])
	by fldsmtpi01.verizon.com with ESMTP; 13 Jan 2014 22:09:09 +0000
Message-ID: <52D46405.50102@terremark.com>
Date: Mon, 13 Jan 2014 17:09:09 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, qemu-devel@nongnu.org, 
	1257099@bugs.launchpad.net, xen-devel@lists.xensource.com
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/14 21:12, Don Slutz wrote:

Ping.

> Adjust TMPO and add TMPB, TMPL, and TMPA.  libtool needs the names
> to be fixed (TMPB).
>
> Add new functions do_libtool and libtool_prog.
>
> Add check for broken gcc and libtool.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
> Was posted as an attachment.
>
> https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html
>
>   configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 62 insertions(+), 1 deletion(-)
>
> diff --git a/configure b/configure
> index edfea95..852d021 100755
> --- a/configure
> +++ b/configure
> @@ -12,7 +12,10 @@ else
>   fi
>   
>   TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
> -TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
> +TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
> +TMPO="${TMPDIR1}/${TMPB}.o"
> +TMPL="${TMPDIR1}/${TMPB}.lo"
> +TMPA="${TMPDIR1}/lib${TMPB}.la"
>   TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
>   
>   # NB: do not call "exit" in the trap handler; this is buggy with some shells;
> @@ -86,6 +89,38 @@ compile_prog() {
>     do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
>   }
>   
> +do_libtool() {
> +    local mode=$1
> +    shift
> +    # Run the compiler, capturing its output to the log.
> +    echo $libtool $mode --tag=CC $cc "$@" >> config.log
> +    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
> +    # Test passed. If this is an --enable-werror build, rerun
> +    # the test with -Werror and bail out if it fails. This
> +    # makes warning-generating-errors in configure test code
> +    # obvious to developers.
> +    if test "$werror" != "yes"; then
> +        return 0
> +    fi
> +    # Don't bother rerunning the compile if we were already using -Werror
> +    case "$*" in
> +        *-Werror*)
> +           return 0
> +        ;;
> +    esac
> +    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
> +    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
> +    error_exit "configure test passed without -Werror but failed with -Werror." \
> +        "This is probably a bug in the configure script. The failing command" \
> +        "will be at the bottom of config.log." \
> +        "You can run configure with --disable-werror to bypass this check."
> +}
> +
> +libtool_prog() {
> +    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
> +    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
> +}
> +
>   # symbolically link $1 to $2.  Portable version of "ln -sf".
>   symlink() {
>     rm -rf "$2"
> @@ -1367,6 +1402,32 @@ EOF
>     fi
>   fi
>   
> +# check for broken gcc and libtool in RHEL5
> +if test -n "$libtool" -a "$pie" != "no" ; then
> +  cat > $TMPC <<EOF
> +
> +void *f(unsigned char *buf, int len);
> +void *g(unsigned char *buf, int len);
> +
> +void *
> +f(unsigned char *buf, int len)
> +{
> +    return (void*)0L;
> +}
> +
> +void *
> +g(unsigned char *buf, int len)
> +{
> +    return f(buf, len);
> +}
> +
> +EOF
> +  if ! libtool_prog; then
> +    echo "Disabling libtool due to broken toolchain support"
> +    libtool=
> +  fi
> +fi
> +
>   ##########################################
>   # __sync_fetch_and_and requires at least -march=i486. Many toolchains
>   # use i686 as default anyway, but for those that don't, an explicit


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 22:09:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 22:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2phG-0006Uu-8N; Mon, 13 Jan 2014 22:09:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W2phE-0006Up-N9
	for xen-devel@lists.xensource.com; Mon, 13 Jan 2014 22:09:24 +0000
Received: from [85.158.143.35:64443] by server-2.bemta-4.messagelabs.com id
	D7/CA-11386-41464D25; Mon, 13 Jan 2014 22:09:24 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389650957!11412612!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30724 invoked from network); 13 Jan 2014 22:09:23 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 22:09:23 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 13 Jan 2014 22:09:10 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,655,1384300800"; d="scan'208";a="647294894"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.31])
	by fldsmtpi01.verizon.com with ESMTP; 13 Jan 2014 22:09:09 +0000
Message-ID: <52D46405.50102@terremark.com>
Date: Mon, 13 Jan 2014 17:09:09 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, qemu-devel@nongnu.org, 
	1257099@bugs.launchpad.net, xen-devel@lists.xensource.com
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/14 21:12, Don Slutz wrote:

Ping.

> Adjust TMPO and add TMPB, TMPL, and TMPA.  libtool needs the names
> to be fixed (TMPB).
>
> Add new functions do_libtool and libtool_prog.
>
> Add check for broken gcc and libtool.
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
> Was posted as an attachment.
>
> https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html
>
>   configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 62 insertions(+), 1 deletion(-)
>
> diff --git a/configure b/configure
> index edfea95..852d021 100755
> --- a/configure
> +++ b/configure
> @@ -12,7 +12,10 @@ else
>   fi
>   
>   TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
> -TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
> +TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
> +TMPO="${TMPDIR1}/${TMPB}.o"
> +TMPL="${TMPDIR1}/${TMPB}.lo"
> +TMPA="${TMPDIR1}/lib${TMPB}.la"
>   TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
>   
>   # NB: do not call "exit" in the trap handler; this is buggy with some shells;
> @@ -86,6 +89,38 @@ compile_prog() {
>     do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
>   }
>   
> +do_libtool() {
> +    local mode=$1
> +    shift
> +    # Run the compiler, capturing its output to the log.
> +    echo $libtool $mode --tag=CC $cc "$@" >> config.log
> +    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
> +    # Test passed. If this is an --enable-werror build, rerun
> +    # the test with -Werror and bail out if it fails. This
> +    # makes warning-generating-errors in configure test code
> +    # obvious to developers.
> +    if test "$werror" != "yes"; then
> +        return 0
> +    fi
> +    # Don't bother rerunning the compile if we were already using -Werror
> +    case "$*" in
> +        *-Werror*)
> +           return 0
> +        ;;
> +    esac
> +    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
> +    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
> +    error_exit "configure test passed without -Werror but failed with -Werror." \
> +        "This is probably a bug in the configure script. The failing command" \
> +        "will be at the bottom of config.log." \
> +        "You can run configure with --disable-werror to bypass this check."
> +}
> +
> +libtool_prog() {
> +    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
> +    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
> +}
> +
>   # symbolically link $1 to $2.  Portable version of "ln -sf".
>   symlink() {
>     rm -rf "$2"
> @@ -1367,6 +1402,32 @@ EOF
>     fi
>   fi
>   
> +# check for broken gcc and libtool in RHEL5
> +if test -n "$libtool" -a "$pie" != "no" ; then
> +  cat > $TMPC <<EOF
> +
> +void *f(unsigned char *buf, int len);
> +void *g(unsigned char *buf, int len);
> +
> +void *
> +f(unsigned char *buf, int len)
> +{
> +    return (void*)0L;
> +}
> +
> +void *
> +g(unsigned char *buf, int len)
> +{
> +    return f(buf, len);
> +}
> +
> +EOF
> +  if ! libtool_prog; then
> +    echo "Disabling libtool due to broken toolchain support"
> +    libtool=
> +  fi
> +fi
> +
>   ##########################################
>   # __sync_fetch_and_and requires at least -march=i486. Many toolchains
>   # use i686 as default anyway, but for those that don't, an explicit


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 22:56:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 22:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2qQI-0008Qf-5k; Mon, 13 Jan 2014 22:55:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.dichtel@6wind.com>) id 1W2hkX-0001Oy-6i
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 13:40:17 +0000
Received: from [193.109.254.147:50789] by server-15.bemta-14.messagelabs.com
	id 8E/4C-22186-0CCE3D25; Mon, 13 Jan 2014 13:40:16 +0000
X-Env-Sender: nicolas.dichtel@6wind.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389620415!10531030!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5387 invoked from network); 13 Jan 2014 13:40:15 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 13:40:15 -0000
Received: by mail-we0-f175.google.com with SMTP id p61so1358003wes.6
	for <xen-devel@lists.xen.org>; Mon, 13 Jan 2014 05:40:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:reply-to:organization
	:user-agent:mime-version:to:cc:subject:references:in-reply-to
	:content-type:content-transfer-encoding;
	bh=Q/A3EsUWmRfTHDg1775g5rjTxBZF7mm/TtjZtyvWuB0=;
	b=CW3svMzLUQ4IpXTnpD4E57ONp5IAigIGXYyczXmEIC3VlPl+FGz1OtOU0kWo5Heb4B
	aIQ9S1SfdYPIe5CxJEmjvToxfV51jG3yICNAfk/Cn7U28OySZu8IjoLVzaSheGQq+oRo
	z6E12qKYAgPpde1cNLY1+pSayDBvCDpiPPBd6lM7+TfhUfy1r4WYF4fDArkslAXcZi3W
	fdakr83dsjkGOyD4BDlEocbfY4TVBuqn6nPvYFwbcw/M0VFRbpT5mOLN7mLx6rfup8mm
	W7CmuzDZq8R1qFjX0F/KFTUs8aZ56UgfGsnpxnolgPcY6bftRdsp8OXL4QVbeM4c8rUL
	B5sQ==
X-Gm-Message-State: ALoCoQmUppd63DQ2M+TMQIjQHp+7vDuKCMh5THOGHyMb58JCWrDRt7ejxA400L/zvp2xN4Cdmtc3
X-Received: by 10.194.60.73 with SMTP id f9mr3529483wjr.65.1389620415669;
	Mon, 13 Jan 2014 05:40:15 -0800 (PST)
Received: from ?IPv6:2a01:e35:8b63:dc30:d4f1:6e89:89d:21b0?
	([2a01:e35:8b63:dc30:d4f1:6e89:89d:21b0])
	by mx.google.com with ESMTPSA id md9sm17917118wic.1.2014.01.13.05.40.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 13 Jan 2014 05:40:14 -0800 (PST)
Message-ID: <52D3ECBD.7050203@6wind.com>
Date: Mon, 13 Jan 2014 14:40:13 +0100
From: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Organization: 6WIND
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, 
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-2-git-send-email-paul.durrant@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0204A79@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0204A79@AMSPEX01CL01.citrite.net>
X-Mailman-Approved-At: Mon, 13 Jan 2014 22:55:56 +0000
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	Veaceslav Falico <vfalico@redhat.com>, Eric Dumazet <edumazet@google.com>,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2 1/3] net: add skb_checksum_setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: nicolas.dichtel@6wind.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/2014 12:26, Paul Durrant wrote:
>> -----Original Message-----
>> From: Paul Durrant [mailto:paul.durrant@citrix.com]
>> Sent: 09 January 2014 10:03
>> To: netdev@vger.kernel.org; xen-devel@lists.xen.org
>> Cc: Paul Durrant; David Miller; Eric Dumazet; Veaceslav Falico; Alexander
>> Duyck; Nicolas Dichtel
>> Subject: [PATCH net-next v2 1/3] net: add skb_checksum_setup
>>
>> This patch adds a function to set up the partial checksum offset for IP
>> packets (and optionally re-calculate the pseudo-header checksum) into the
>> core network code.
>> The implementation was previously private and duplicated between xen-
>> netback
>> and xen-netfront, however it is not xen-specific and is potentially useful
>> to any network driver.
>>
>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>> Cc: David Miller <davem@davemloft.net>
>> Cc: Eric Dumazet <edumazet@google.com>
>> Cc: Veaceslav Falico <vfalico@redhat.com>
>> Cc: Alexander Duyck <alexander.h.duyck@intel.com>
>> Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
>
> Ping?
Your patch is under review by David (see
http://patchwork.ozlabs.org/patch/308539/), please be patient ;-)


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 13 22:56:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 22:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2qQP-0008RS-On; Mon, 13 Jan 2014 22:56:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xisisu@gmail.com>) id 1W2nPP-0007bh-Pk
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 19:42:52 +0000
Received: from [85.158.137.68:24043] by server-9.bemta-3.messagelabs.com id
	E6/20-13104-AB144D25; Mon, 13 Jan 2014 19:42:50 +0000
X-Env-Sender: xisisu@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389642169!8077669!1
X-Originating-IP: [74.125.82.43]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11015 invoked from network); 13 Jan 2014 19:42:49 -0000
Received: from mail-wg0-f43.google.com (HELO mail-wg0-f43.google.com)
	(74.125.82.43)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Jan 2014 19:42:49 -0000
Received: by mail-wg0-f43.google.com with SMTP id y10so6764wgg.34
	for <xen-devel@lists.xen.org>; Mon, 13 Jan 2014 11:42:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=jh00b1WCdREphbdxc6O8I55DZSW1EP3GOjJt10QId2g=;
	b=uySao0v/akuxi3mW7H3pC3HTPv780mimro6Ul20chjv3BUzNQCUUsdxnMSr6uqcmIB
	kPxHaas8hKd1ZrGADBp0k22NXzSVFk7vVNewWATOI6WRmdmYoz+muAFWGasa4P3NFKPO
	KeDClSKEYBSjkzG5QrKZe+Dzg+goS+pEOc11denHpU7naA0gSSDIWouytSB4uy9CgnT3
	J96tddsc/bsMwJPeVesnV9VdyF+on/zvgQ4HrjeeYMFYC3cAnLUU5LfYIqSA3DgApbo5
	I5bEntfVaBUtOoj5wUy+LQEQXrDtM7SqpsModVqYA+MnaI27uPnjruBY/MjVLM94b+H8
	B5Sw==
MIME-Version: 1.0
X-Received: by 10.194.8.229 with SMTP id u5mr4354146wja.80.1389642169419; Mon,
	13 Jan 2014 11:42:49 -0800 (PST)
Received: by 10.227.147.194 with HTTP; Mon, 13 Jan 2014 11:42:49 -0800 (PST)
In-Reply-To: <1389608346.13654.0.camel@kazak.uk.xensource.com>
References: <CAPqOm-omTnN01FLrO8Lq40ZXCDeiOvRuNvYVCMSo=oV4sqhA5w@mail.gmail.com>
	<1389608346.13654.0.camel@kazak.uk.xensource.com>
Date: Mon, 13 Jan 2014 13:42:49 -0600
Message-ID: <CAPqOm-pHNdcprZxJ_y3i-e_HfmztJ6NE3Dx_Lcs9aK_SkfoH_A@mail.gmail.com>
From: Sisu Xi <xisisu@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Mon, 13 Jan 2014 22:56:03 +0000
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Change VCPU schedulers in XenServer? And recompile
 Xen in XenServer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3124826451691610553=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3124826451691610553==
Content-Type: multipart/alternative; boundary=047d7b5d35dedf8c5f04efdf478f

--047d7b5d35dedf8c5f04efdf478f
Content-Type: text/plain; charset=UTF-8

Hi, Ian:

Sorry about this, will re-post it on xenserver mailing list.

Thanks!

Sisu


On Mon, Jan 13, 2014 at 4:19 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Sun, 2014-01-12 at 17:57 -0600, Sisu Xi wrote:
> > Hi, George and Dario:
> >
> >
> > I recently played with XenServer and want to integrate RT-Xen to it.
> > Do you know how I can change the VCPU schedulers in XenServer?
> >
> >
> > Also, is there a tutorial for recompiling xen in XenServer? Will there
> > be a conflict between the xl and xe tools?
>
> You should ask these XenServer specific questions on the xenserver list,
> see www.xenserver.org.
>
> Ian.
>
>
>


-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

--047d7b5d35dedf8c5f04efdf478f--


--===============3124826451691610553==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3124826451691610553==--


From xen-devel-bounces@lists.xen.org Mon Jan 13 23:08:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jan 2014 23:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2qbr-0000lR-7h; Mon, 13 Jan 2014 23:07:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W2qbp-0000lM-K9
	for xen-devel@lists.xen.org; Mon, 13 Jan 2014 23:07:53 +0000
Received: from [85.158.137.68:62830] by server-13.bemta-3.messagelabs.com id
	43/B0-28603-8C174D25; Mon, 13 Jan 2014 23:07:52 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389654472!8926009!1
X-Originating-IP: [81.169.146.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8848 invoked from network); 13 Jan 2014 23:07:52 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.223)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Jan 2014 23:07:52 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389654472; l=822;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=54H9QpDBtyhCcdZYMbJZt9vK5cE=;
	b=iIR9/hc4EZ1MFV+CceOCfT6a5jvGBIp6c9XfT3w36mh1ygfD6brA2cgIMRJaiPvwvmI
	MGrJzrYBDqBCKG2+CDRuyWQ0yPXl/bneRYii+UDF8C8ZlSZVSBQUQraj7G5WPohHc+O9S
	ALA1ESEnKCVPOp41xqgnd3X8pX7p+yJvNMw=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7rNLk=
Received: from probook.site (ip-80-226-24-10.vodafone-net.de [80.226.24.10])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id e036c6q0DN7i4Kc ; 
	Tue, 14 Jan 2014 00:07:44 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 8B3D75025A; Tue, 14 Jan 2014 00:07:40 +0100 (CET)
Date: Tue, 14 Jan 2014 00:07:40 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20140113230740.GA23544@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140113093032.GA13919@aepfle.de>
	<52D3FD67.2060708@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3FD67.2060708@oracle.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, Boris Ostrovsky wrote:

> On 01/13/2014 04:30 AM, Olaf Hering wrote:
> >>Similarly, if xenbus_gather("discard-secure") fails, I think the code will
> >>assume that secure discard has not been requested. I don't know what
> >>security implications this will have, but it sounds bad to me.
> >There are no security implications: if the backend does not advertise it,
> >then it's not present.
> 
> Right. But my question was: what if the backend does advertise it and wants
> the frontend to use it, but xenbus_gather() in the frontend fails? Do we want
> to silently continue without discard-secure? Is this safe?

The frontend cannot know that the backend advertised discard-secure,
because the frontend has just failed to read the very property that
indicates secure discard should be enabled.
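The fallback pattern under discussion can be modelled as follows. This is only an illustrative sketch, not the actual drivers/block/xen-blkfront.c code: read_discard_secure() is a hypothetical stand-in for the xenbus_gather() call in blkfront_setup_discard, showing why a failed read is indistinguishable from an unadvertised property:

```c
/* Hypothetical stand-in for the xenbus_gather() read of the backend's
 * "discard-secure" property: returns 0 on success, non-zero on failure
 * (e.g. a transient xenstore error). */
int read_discard_secure(int store_ok, int advertised, unsigned int *val)
{
    if (!store_ok)
        return -1;              /* the read itself failed */
    *val = advertised ? 1 : 0;  /* what the backend wrote, if anything */
    return 0;
}

/* Mirrors the frontend pattern discussed above: when the read fails, the
 * frontend cannot tell "not advertised" from "advertised but unreadable",
 * so it silently falls back to plain (non-secure) discard. */
int setup_discard_secure(int store_ok, int advertised)
{
    unsigned int secure = 0;

    if (read_discard_secure(store_ok, advertised, &secure))
        secure = 0;             /* silent fallback -- the point of the thread */
    return secure ? 1 : 0;
}
```

In this model the (0, 1) case — backend advertised, read failed — produces the same result as (1, 0), which is exactly the ambiguity Boris is asking about.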

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 02:11:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 02:11:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2tTG-0005dn-HA; Tue, 14 Jan 2014 02:11:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W2tTE-0005di-Qr
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 02:11:13 +0000
Received: from [85.158.143.35:9951] by server-2.bemta-4.messagelabs.com id
	FF/A6-11386-0CC94D25; Tue, 14 Jan 2014 02:11:12 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389665469!11429986!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4684 invoked from network); 14 Jan 2014 02:11:11 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Jan 2014 02:11:11 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.1/Sentrion-MTA-4.3.1) with
	ESMTP id s0E2B4Mn027741
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 14 Jan 2014 02:11:05 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0E2B2pF006156
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 14 Jan 2014 02:11:03 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0E2B1gP021798; Tue, 14 Jan 2014 02:11:01 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 13 Jan 2014 18:11:01 -0800
Message-ID: <52D49CD5.20805@oracle.com>
Date: Mon, 13 Jan 2014 21:11:33 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140113093032.GA13919@aepfle.de>
	<52D3FD67.2060708@oracle.com> <20140113230740.GA23544@aepfle.de>
In-Reply-To: <20140113230740.GA23544@aepfle.de>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 06:07 PM, Olaf Hering wrote:
> On Mon, Jan 13, Boris Ostrovsky wrote:
>
>> On 01/13/2014 04:30 AM, Olaf Hering wrote:
>>>> Similarly, if xenbus_gather("discard-secure") fails, I think the code will
>>>> assume that secure discard has not been requested. I don't know what
>>>> security implications this will have, but it sounds bad to me.
>>> There are no security implications: if the backend does not advertise it,
>>> then it's not present.
>> Right. But my question was: what if the backend does advertise it and wants
>> the frontend to use it, but xenbus_gather() in the frontend fails? Do we want
>> to silently continue without discard-secure? Is this safe?
> The frontend cannot know that the backend advertised discard-secure,
> because the frontend has just failed to read the very property that
> indicates secure discard should be enabled.

And is it OK for the frontend not to know about this?

I don't understand what the use model for this feature is. Is it just
that the backend advertises its capability and it is up to the frontend
to use it or not, or is it that the user/admin created the storage with
the expectation that it will be used in a "secure" manner?

I think if it's the former, then losing information about storage
features is OK; but if it's the latter, I am not so sure.

Or perhaps it's neither of these two and I am completely missing the 
point of this feature.

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 02:17:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 02:17:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2tZY-0005lq-Du; Tue, 14 Jan 2014 02:17:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dev.kai.huang@gmail.com>) id 1W2tZW-0005li-Sq
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 02:17:43 +0000
Received: from [193.109.254.147:33282] by server-9.bemta-14.messagelabs.com id
	EF/44-13957-64E94D25; Tue, 14 Jan 2014 02:17:42 +0000
X-Env-Sender: dev.kai.huang@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389665860!10628770!1
X-Originating-IP: [209.85.128.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18472 invoked from network); 14 Jan 2014 02:17:41 -0000
Received: from mail-qe0-f43.google.com (HELO mail-qe0-f43.google.com)
	(209.85.128.43)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 02:17:41 -0000
Received: by mail-qe0-f43.google.com with SMTP id nc12so1104735qeb.2
	for <xen-devel@lists.xen.org>; Mon, 13 Jan 2014 18:17:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=9gPxk45lKePuQFulxijQvRuomE51y8cOjmK01Y9EhPc=;
	b=y9DVESezEBJD0rWsA2BK2fhPqlzTELEiB4GuLpbCX6Ud2geINDz5r38OD7XDlmr57R
	IDh0GJgPX/Uu9AR+NjszxQEzc/xsACoDbLo203o8rHosrECJn31Bzlh71sDydFAws7gA
	N9sCXbV1ACZiJ28c7JD+eMuEkHdY/nW+k1PG0Ogn99J5/Kvpd3qcZw1Oylc9YozKavyM
	stdt/tnGTppWP2GOuRx5LwVHph5D+JD25fgfhw4eCAr0TWPvOgGZwLXgc2NCINEmZxF5
	32zM80kiCcdpxnrk7J1tHH7MfGxRmOSkz91fetaW3QD8liVXYFplR8/0czV63o+xYX2S
	JjSg==
MIME-Version: 1.0
X-Received: by 10.224.55.197 with SMTP id v5mr46249773qag.9.1389665860014;
	Mon, 13 Jan 2014 18:17:40 -0800 (PST)
Received: by 10.229.168.137 with HTTP; Mon, 13 Jan 2014 18:17:39 -0800 (PST)
In-Reply-To: <1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
	<CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
	<1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>
Date: Tue, 14 Jan 2014 10:17:39 +0800
Message-ID: <CAOtp4Kqwye7tHAzj0-JLyeZ1-vvgp9ji76yVZ+Wti89Y8RUFrQ@mail.gmail.com>
From: Kai Huang <dev.kai.huang@gmail.com>
To: Adel Amani <adel.amani66@yahoo.com>
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, 2014 at 7:19 PM, Adel Amani <adel.amani66@yahoo.com> wrote:
> By "downtime" I mean that the total downtime consists of the time necessary
> to quiesce the VM on the source, transfer the device state to the
> destination, load the device state, and copy over all the remaining memory
> pages concurrently with loading the device state.
> How can I print the bitmap in the output, or how can I see the bitmap in
> the output?

I believe you need to add code yourself to print out the bitmap.
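For reference, a popcount over the dirty bitmap is roughly the kind of helper one would add. This is only a sketch under the assumption of the usual one-bit-per-pfn bitmap layout used by the libxc save code; count_dirty_pages() is a made-up name for illustration, not an existing libxc function:

```c
#include <limits.h>

/* Count the set bits in a one-bit-per-pfn dirty bitmap -- the kind of
 * debug helper one could place next to the bitmap handling in
 * xc_domain_save() and call once per live-migration round. */
unsigned long count_dirty_pages(const unsigned long *bitmap,
                                unsigned long nr_pfns)
{
    const unsigned long bits = sizeof(unsigned long) * CHAR_BIT;
    unsigned long count = 0, i;

    for (i = 0; i < nr_pfns; i++)
        if (bitmap[i / bits] & (1UL << (i % bits)))
            count++;
    return count;
}
```

Printing that count (e.g. with a fprintf to stderr) at each iteration would give the per-round dirty-page numbers asked about above.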

> Is there a particular name for the downtime in xc_domain_save?
> How can I see the downtime in the output?

I am not sure there is existing code that meets your requirement; you
will probably also need some hack to print out the downtime.

Thanks,
-Kai
>
>
> Adel Amani
> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
> Email: A.Amani@eng.ui.ac.ir
>
>
> On Monday, January 13, 2014 4:03 AM, Kai Huang <dev.kai.huang@gmail.com>
> wrote:
> On Sun, Jan 12, 2014 at 1:40 AM, Adel Amani <adel.amani66@yahoo.com> wrote:
>> Hello Mr Dunlap
>> I really wonder if anybody knows the answer :-(
>> Can you help me?!
>> I want to find the "downtime" and the number of "dirty pages" at the end
>> of the migration ...
>
> What do you mean by "Downtime", and "in end migration"? Migration will
> take several rounds to transfer dirty pages. If you want to know
> exactly how many pages are dirty during each stage, you can get the
> dirty page bitmap in xc_domain_save, which will be called for each
> live migration stage.
>
> Thanks,
> -Kai
>
>> please help me.
>> Thanks.
>>
>> Adel Amani
>> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
>> Email: A.Amani@eng.ui.ac.ir
>>
>>
>> On Saturday, January 11, 2014 12:31 AM, Adel Amani
>> <adel.amani66@yahoo.com>
>> wrote:
>> Hello
>> I performed one migration and traced it with the command "xentrace -D -e
>> all -S 256 /test.trace".
>> I can get the total migration time from the command "time migration -l
>> ubuntu11 192.168.1.1", but I don't know how to get the "downtime" and the
>> "dirty pages" from test.trace :-( or in another way...
>>
>> Adel Amani
>> M.Sc. Candidate@Computer Engineering Department, University of Isfahan
>> Email: A.Amani@eng.ui.ac.ir
>
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>>
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 02:33:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 02:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2toc-0006oW-Ro; Tue, 14 Jan 2014 02:33:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W2tob-0006oR-CI
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 02:33:17 +0000
Received: from [85.158.143.35:44601] by server-1.bemta-4.messagelabs.com id
	45/35-02132-CE1A4D25; Tue, 14 Jan 2014 02:33:16 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389666794!11509950!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
From xen-devel-bounces@lists.xen.org Tue Jan 14 02:33:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 02:33:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2toc-0006oW-Ro; Tue, 14 Jan 2014 02:33:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W2tob-0006oR-CI
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 02:33:17 +0000
Received: from [85.158.143.35:44601] by server-1.bemta-4.messagelabs.com id
	45/35-02132-CE1A4D25; Tue, 14 Jan 2014 02:33:16 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389666794!11509950!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13784 invoked from network); 14 Jan 2014 02:33:15 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-21.messagelabs.com with SMTP;
	14 Jan 2014 02:33:15 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 13 Jan 2014 18:33:13 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,656,1384329600"; d="scan'208";a="458273868"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga001.fm.intel.com with ESMTP; 13 Jan 2014 18:33:12 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 18:33:12 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 18:33:12 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Tue, 14 Jan 2014 10:33:10 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, Jan Beulich <JBeulich@suse.com>,
	"Dong, Eddie" <eddie.dong@intel.com>
Thread-Topic: [PATCH 1/3] Nested VMX: update nested paging mode when
	vmswitch is in progress
Thread-Index: AQHO+9kU0FlGvzaCh0WxrQUf7BQwfJpZu8qw//+ZSoCAAw7kcIAG236QgCBqm9A=
Date: Tue, 14 Jan 2014 02:33:10 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>  
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2013-12-24:
> Zhang, Yang Z wrote on 2013-12-23:
>> Egger, Christoph wrote on 2013-12-18:
>>> On 18.12.13 11:24, Zhang, Yang Z wrote:
>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>> On 18.12.13 at 09:58, "Dong, Eddie" <eddie.dong@intel.com> wrote:
>>>>>> Acked by Eddie Dong <eddie.dong@intel.com>
>>>>> 
>>>>> As long as Christoph's reservations wrt SVM aren't being
>>>>> addressed/ eliminated, I don't think we can apply this patch.
>>>>> 
>>>>> Furthermore, while you ack-ed this patch (which isn't really VMX
>>>>> specific) and patch 3, you didn't ack patch 2, but you also
>>>>> didn't indicate anything that's possibly wrong with it.
>>>> 
>>>> Actually, I asked him to help review the first patch, since
>>>> Christoph thought
>>> the first patch may break AMD. So I hope he can help review the
>>> first patch to see whether I am wrong.
>>>> 
>>>>> 
>>>>> And finally, with patch 1 needing to be left out for the moment,
>>>>> I'd like to have confirmation that all three patches can be
>>>>> applied independently (i.e. with the current state of things only
>>>>> patch 3 is ready to
>>> go in).
>>>> 
>>>> Yes, the three patches are independent.
>>> 
>>> I have looked through code.
>>> 
>>> vcpu is in guestmode till the vmentry/vmexit emulation is done.
>>> In SVM the vcpu guestmode changes right before setting
>>> nv_vmswitch_in_progress to 0 when the vmentry/vmexit emulation was
>>> successful (there is a bunch of error-checking).
>>> 
>> 
>> After checking the SVM logic, I find that the basic usage of
>> vcpu_in/exit_guestmode on the VMX side is different from that on the SVM side:
>> VMX side:
>>     virtual vmentry: set vcpu in guestmode after the vmcs is switched to vmcs02.
>> This happens at the beginning of vmentry, which means the whole
>> vmentry emulation code is executed while the vcpu is in guest mode.
>>     virtual vmexit: set vcpu out of guestmode after the vmcs is switched to vmcs01.
>> This happens just after vmcs02 is synced to vmcs12 and before
>> vmcs01's state is restored (e.g. via set_cr), which means
>> vmcs01's state is restored while the vcpu is not in guest mode.
>> SVM side:
>>     virtual vmentry: set vcpu in guest mode after vmentry emulation is
>> done, which means the emulation code is executed while the vcpu is not in
>> guest mode.
>>     virtual vmexit: set vcpu out of guest mode after vmexit emulation is
>> done, which means the emulation code is executed while the vcpu is in guest
>> mode.
>> 
>> Ok, now let us take a look at the current implementation, taking
>> hvm_set_cr0 as an example: update nested mode when (vcpu_in_guestmode
>> && !vmswitch_in_progress); otherwise, update L1's paging mode:
>>         if ( !nestedhvm_vmswitch_in_progress(v) &&
>>              nestedhvm_vcpu_in_guestmode(v) )
>>             paging_update_nestedmode(v);
>>         else
>>             paging_update_paging_modes(v);
>> 
>> Virtual vmentry:
>>     Expected result: nested mode is being updated.
>>     Current result in SVM:
>>           !vcpu_in_guestmode and vmswitch_in_progress:  L1's paging
>> mode is updated.  Wrong.
>>     Current result in VMX:
>>           vcpu_in_guestmode and vmswitch_in_progress:  L1's paging
>> mode is updated.  Wrong.
>> 
>> Virtual vmexit:
>> 	Expected result: L1's paging mode is updated.
>> 	Current result in SVM:
>>           vcpu_in_guestmode and vmswitch_in_progress:  L1's paging
>> mode is updated.   Correct.
>>     Current result in VMX:
>>           !vcpu_in_guestmode and vmswitch_in_progress:  L1's paging
>> mode is updated.  Correct.
>> 
>> From the above results, we can see that it is vmswitch_in_progress
>> that actually takes effect, not vcpu_in_guestmode. The original code
>> doesn't consider that the paging mode may change during vmentry/vmexit
>> emulation. This seems wrong to me, because paging mode changes do happen in the real world.
>> 
>> Here is the result with my patch: update nested mode when
>> vcpu_in_guestmode; otherwise, update L1's paging mode:
>>         if ( nestedhvm_vcpu_in_guestmode(v) )
>>             paging_update_nestedmode(v);
>>         else
>>             paging_update_paging_modes(v);
>> 
>> Virtual vmentry:
>>     Expected result: nested mode is being updated.
>>     Current result in SVM:
>>           !vcpu_in_guestmode:  L1's paging mode is updated.  Wrong.
>>     Current result in VMX:
>>           vcpu_in_guestmode:  Nested paging mode is updated.  Correct.
>> Virtual vmexit:
>> 	Expected result: L1's paging mode is updated.
>>     Current result in SVM:
>>           vcpu_in_guestmode:  Nested paging mode is updated.  Wrong.
>>     Current result in VMX:
>>           !vcpu_in_guestmode:  L1's paging mode is updated.  Correct.
>> From the above, we can see that the problem is that the SVM and VMX
>> sides distinguish vmentry from vmexit differently. Since I am not
>> familiar with SVM, I am not sure whether SVM's usage of
>> vcpu_in_guestmode is right or wrong. But on the VMX side, once the
>> vmcs is switched, vcpu_in_guestmode changes too, which we think
>> follows the hardware's behavior.
>> 
>> Also, I think I found another issue: the paging mode cannot be
>> tracked correctly either with the current approach or with my patch.
>> Need more time to
> investigate.
> 
> Ok. The issue is that the paging state may be changed via vmwrite (L1
> writes L2's vmcs to change the paging mode directly), which L0 is
> unaware of, so the hypervisor cannot set the right nested paging mode
> during virtual vmentry. So we need to update the nested paging mode unconditionally on each virtual vmentry.
> 
>> 
>> 
>>> This patch breaks both vmentry and vmexit emulation for SVM.
>>> The vmentry breakage comes with l1-hypervisor using shadow-paging.
>>> 
>>> During vmexit emulation, hvm_set_cr0 and hvm_set_cr4 are called to
>>> restore cr0 and cr4 for the l1 guest. With this patch, the paging
>>> mode for the l2 guest is updated rather than for the l1 guest.
>>> 
>>> I think this patch also breaks the case where the l2 guest wants to set
>>> cr0 or cr4, the l1-hypervisor does not intercept cr0/cr4, and the
>>> l1-hypervisor uses shadow-paging. This may also apply to VMX.
>> 
>> For this case, am I missing something? If yes, please correct me.
>> 
>>> 
>>> This is just from reading the code. As I said, I do not have a
>>> setup to verify this, unfortunately.
>>> 
>> 
>> 
>> Best regards,
>> Yang
>> 
> 
> 
> Best regards,
> Yang
>

Any comments?

Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 03:24:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 03:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ubZ-0000lB-3W; Tue, 14 Jan 2014 03:23:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W2ubX-0000l6-DR
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 03:23:51 +0000
Received: from [85.158.139.211:63787] by server-14.bemta-5.messagelabs.com id
	6B/3B-24200-6CDA4D25; Tue, 14 Jan 2014 03:23:50 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389669827!9561500!1
X-Originating-IP: [209.85.192.179]
X-SpamReason: No, hits=0.8 required=7.0 tests=DATE_IN_PAST_06_12,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15382 invoked from network); 14 Jan 2014 03:23:49 -0000
Received: from mail-pd0-f179.google.com (HELO mail-pd0-f179.google.com)
	(209.85.192.179)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 03:23:49 -0000
Received: by mail-pd0-f179.google.com with SMTP id y10so1273346pdj.10
	for <xen-devel@lists.xenproject.org>;
	Mon, 13 Jan 2014 19:23:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=Yoj9m7R0xLOodeVcTd8RSvyXrAUHLXS5qdAMGXHmwCk=;
	b=mIfLTNAvotff8hPqJGl0N1JQjmJUxrzAysVkTWCmMJoLCnlI5ROZNlpi0nwuQT7Ibq
	gFhgrfSvxnTP37qvP4fM9g6k1dJqNU9MWtZ1fWedAHkhsIg98Z5HC09i1PoPDfqns1Sl
	aSgqryDducp2vQy/eqh9qKmC29pzAmRLdn6YTTkd5FCRivXbdxitMU2aEw6NWaVBG/+d
	PMKXEarJuxos1DxjgTMunP9Rnr1Pi5XhYJh1uppdL0/3FckW6Nr0mmxi/k9A7WbWy5iQ
	FwCR1H8wMyKwEbt1Y79BWnDY9K4XELCEXumocAbgP+/zitY7sLlfCOfDt2intz3B/ypX
	2KXw==
X-Received: by 10.66.194.2 with SMTP id hs2mr33685516pac.79.1389669827593;
	Mon, 13 Jan 2014 19:23:47 -0800 (PST)
Received: from localhost ([220.202.152.15])
	by mx.google.com with ESMTPSA id cz3sm38476574pbc.9.2014.01.13.19.23.42
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 13 Jan 2014 19:23:46 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 14 Jan 2014 01:06:47 +0800
Message-Id: <1389632807-13005-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: Chen Baozi <baozich@gmail.com>
Subject: [Xen-devel] [PATCH] xen/arm64: fix section shift when mapping 2MB
	block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Section shift for level-2 page table should be #21 rather than #20.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/arm64/head.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index bebddf0..ad35f60 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -292,8 +292,8 @@ skip_bss:
         ldr   x4, =boot_second
         add   x4, x4, x20            /* x4 := paddr (boot_second) */
 
-        lsr   x2, x19, #20           /* Base address for 2MB mapping */
-        lsl   x2, x2, #20
+        lsr   x2, x19, #21           /* Base address for 2MB mapping */
+        lsl   x2, x2, #21
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
 
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 04:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 04:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2vz5-0004yK-Co; Tue, 14 Jan 2014 04:52:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W2vz4-0004yF-4h
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 04:52:14 +0000
Received: from [85.158.139.211:58981] by server-8.bemta-5.messagelabs.com id
	AD/4F-29838-D72C4D25; Tue, 14 Jan 2014 04:52:13 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389675131!9378155!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15770 invoked from network); 14 Jan 2014 04:52:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-206.messagelabs.com with SMTP;
	14 Jan 2014 04:52:12 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 13 Jan 2014 20:52:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,656,1384329600"; d="scan'208";a="438394559"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 13 Jan 2014 20:52:10 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 20:52:10 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 20:52:10 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 14 Jan 2014 12:52:08 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v3] VMX: Eliminate cr3 save/loading exiting when UG
	enabled
Thread-Index: AQHO/GRdiJc1D0vPN0aA1OgXIfYI9Zp6JG9g//+MuYCAAIZlUP//5MoAgAmy4nA=
Date: Tue, 14 Jan 2014 04:52:07 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BBA25@SHSMSX104.ccr.corp.intel.com>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
	<52CCB5A3.2010308@citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A44CE@SHSMSX104.ccr.corp.intel.com>
	<52CD1D9C02000078001116CB@nat28.tlf.novell.com>
In-Reply-To: <52CD1D9C02000078001116CB@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
 when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-08:
>>>> On 08.01.14 at 03:35, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Andrew Cooper wrote on 2014-01-08:
>>> On 08/01/2014 01:13, Zhang, Yang Z wrote:
>>>> Can you help to review this patch?
>>> 
>>> This got committed earlier today
>>> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f4fed540e78ac8a2bd3b1dee53a5206dde25f613)
>>> as you are a maintainer, and I (as an independent party as far as
>>> the patch goes) reviewed it.
>> 
>> I remember that in the old days, after a patch got committed, the
>> maintainer would reply to the mail to tell the author it was applied.
>> Otherwise, it's hard for the author to know in time. Should we still follow this rule?
> 
> As it requires extra work, and it's easy to check the tree (there
> aren't that many commits during a day), and there are generally no intermediate trees (i.e.
> just a single canonical one to look at), I never reply with commit
> notifications. If anything like that is being wanted, then this should
> be via an automatic commit notification mechanism (and ISTR that there
> is a respective list you could subscribe to).
> 

Yes, I should subscribe to the Xen changelog list to track committed patches. That addresses my concern.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 04:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 04:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2vz5-0004yK-Co; Tue, 14 Jan 2014 04:52:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W2vz4-0004yF-4h
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 04:52:14 +0000
Received: from [85.158.139.211:58981] by server-8.bemta-5.messagelabs.com id
	AD/4F-29838-D72C4D25; Tue, 14 Jan 2014 04:52:13 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389675131!9378155!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15770 invoked from network); 14 Jan 2014 04:52:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-206.messagelabs.com with SMTP;
	14 Jan 2014 04:52:12 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 13 Jan 2014 20:52:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,656,1384329600"; d="scan'208";a="438394559"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 13 Jan 2014 20:52:10 -0800
Received: from fmsmsx153.amr.corp.intel.com (10.19.17.7) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 20:52:10 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX153.amr.corp.intel.com (10.19.17.7) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 20:52:10 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 14 Jan 2014 12:52:08 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v3] VMX: Eliminate cr3 save/loading exiting when UG
	enabled
Thread-Index: AQHO/GRdiJc1D0vPN0aA1OgXIfYI9Zp6JG9g//+MuYCAAIZlUP//5MoAgAmy4nA=
Date: Tue, 14 Jan 2014 04:52:07 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BBA25@SHSMSX104.ccr.corp.intel.com>
References: <1387420802-590-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4384@SHSMSX104.ccr.corp.intel.com>
	<52CCB5A3.2010308@citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A44CE@SHSMSX104.ccr.corp.intel.com>
	<52CD1D9C02000078001116CB@nat28.tlf.novell.com>
In-Reply-To: <52CD1D9C02000078001116CB@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3] VMX: Eliminate cr3 save/loading exiting
 when UG enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-08:
>>>> On 08.01.14 at 03:35, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Andrew Cooper wrote on 2014-01-08:
>>> On 08/01/2014 01:13, Zhang, Yang Z wrote:
>>>> Can you help to review this patch?
>>> 
>>> This got committed earlier today
>>> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=f4fed540e78ac8a2bd3b1dee53a5206dde25f613)
>>> as you are a maintainer, and I (as an independent party as far as
>>> the patch goes) reviewed it.
>> 
>> I remember that in the old days, after a patch got committed, the
>> maintainer would reply to the mail to tell the author it was applied.
>> Otherwise, it's hard for the author to know in time. Should we still
>> follow this practice?
> 
> As it requires extra work, and it's easy to check the tree (there
> aren't that many commits during a day), and there are generally no intermediate trees (i.e.
> just a single canonical one to look at), I never reply with commit
> notifications. If anything like that is being wanted, then this should
> be via an automatic commit notification mechanism (and ISTR that there
> is a respective list you could subscribe to).
> 

Yes, I should subscribe to the Xen changelog list to track committed patches. That addresses my concern.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 07:30:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 07:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2yRh-0004Bq-DS; Tue, 14 Jan 2014 07:29:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2yRf-0004Bl-UH
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 07:29:56 +0000
Received: from [85.158.139.211:46831] by server-8.bemta-5.messagelabs.com id
	5F/20-29838-377E4D25; Tue, 14 Jan 2014 07:29:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389684594!9544050!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3233 invoked from network); 14 Jan 2014 07:29:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Jan 2014 07:29:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Jan 2014 07:29:53 +0000
Message-Id: <52D4F57F020000780011362C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 14 Jan 2014 07:29:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <chegger@amazon.de>, "Eddie Dong" <eddie.dong@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Zhang, Yang Z wrote on 2013-12-24:
>> Zhang, Yang Z wrote on 2013-12-23:
>>> Egger, Christoph wrote on 2013-12-18:
>>>> On 18.12.13 11:24, Zhang, Yang Z wrote:
>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>> On 18.12.13 at 09:58, "Dong, Eddie" <eddie.dong@intel.com> wrote:
>>>>>>> Acked by Eddie Dong <eddie.dong@intel.com>
>>>>>> 
>>>>>> As long as Christoph's reservations wrt SVM aren't being
>>>>>> addressed/ eliminated, I don't think we can apply this patch.
>>>>>> 
>>>>>> Furthermore, while you ack-ed this patch (which isn't really VMX
>>>>>> specific) and patch 3, you didn't ack patch 2, but you also
>>>>>> didn't indicate anything that's possibly wrong with it.
>>>>> 
>>>>> Actually, I asked him to help review the first patch, since
>>>>> Christoph thought the first patch might break AMD. So I hope he can
>>>>> help review it to see whether I am wrong.
>>>>> 
>>>>>> 
>>>>>> And finally, with patch 1 needing to be left out for the moment,
>>>>>> I'd like to have confirmation that all three patches can be
>>>>>> applied independently (i.e. with the current state of things only
>>>>>> patch 3 is ready to
>>>> go in).
>>>>> 
>>>>> Yes, the three patches are independent.
>>>> 
>>>> I have looked through code.
>>>> 
>>>> The vcpu stays in guest mode until the vmentry/vmexit emulation is done.
>>>> In SVM, the vcpu guest mode changes right before
>>>> nv_vmswitch_in_progress is set to 0, once the vmentry/vmexit
>>>> emulation was successful (there is a bunch of error checking).
>>>> 
>>> 
>>> After checking the SVM logic, I find that the basic usage of entering
>>> and exiting vcpu guest mode on the VMX side differs from the SVM side:
>>> VMX side:
>>>     virtual vmentry: the vcpu is set in guest mode after the vmcs is
>>> switched to vmcs02. This happens at the beginning of vmentry, which
>>> means the whole vmentry emulation code executes while the vcpu is in
>>> guest mode.
>>>     virtual vmexit: the vcpu is set out of guest mode after the vmcs
>>> is switched to vmcs01. This happens just after vmcs02 is synced to
>>> vmcs12 and before vmcs01's state is restored (e.g. set_cr), which
>>> means vmcs01's state is restored while the vcpu is not in guest mode.
>>> SVM side:
>>>     virtual vmentry: the vcpu is set in guest mode after vmentry
>>> emulation is done, which means the emulation code executes while the
>>> vcpu is not in guest mode.
>>>     virtual vmexit: the vcpu is set out of guest mode after vmexit
>>> emulation is done, which means the emulation code executes while the
>>> vcpu is in guest mode.
>>> 
>>> Ok, now let us take a look at the current implementation, taking
>>> hvm_set_cr0 as an example. It updates the nested mode when
>>> (vcpu_in_guestmode && !vmswitch_in_progress), and otherwise updates
>>> L1's paging mode:
>>>         if ( !nestedhvm_vmswitch_in_progress(v) &&
>>>              nestedhvm_vcpu_in_guestmode(v) )
>>>             paging_update_nestedmode(v);
>>>         else
>>>             paging_update_paging_modes(v);
>>> Virtual vmentry:
>>>     Expected result: nested mode is updated.
>>>     Current result in SVM:
>>>           !vcpu_in_guestmode and vmswitch_in_progress: L1's paging
>>> mode is updated.  Wrong.
>>>     Current result in VMX:
>>>           vcpu_in_guestmode and vmswitch_in_progress: L1's paging
>>> mode is updated.  Wrong.
>>> 
>>> Virtual vmexit:
>>>     Expected result: L1's paging mode is updated.
>>>     Current result in SVM:
>>>           vcpu_in_guestmode and vmswitch_in_progress: L1's paging
>>> mode is updated.  Correct.
>>>     Current result in VMX:
>>>           !vcpu_in_guestmode and vmswitch_in_progress: L1's paging
>>> mode is updated.  Correct.
>>> 
>>> From the above results, we can see that it is actually
>>> vmswitch_in_progress that takes effect, not vcpu_in_guestmode. The
>>> original code doesn't consider the paging mode changing during
>>> vmentry/vmexit emulation. This seems wrong to me, because paging mode
>>> changes do happen in the real world.
>>> 
>>> Here is the result with my patch, which updates the nested mode when
>>> vcpu_in_guestmode and otherwise updates L1's paging mode:
>>>         if ( nestedhvm_vcpu_in_guestmode(v) )
>>>             paging_update_nestedmode(v);
>>>         else
>>>             paging_update_paging_modes(v);
>>> Virtual vmentry:
>>>     Expected result: nested mode is updated.
>>>     Current result in SVM:
>>>           !vcpu_in_guestmode: L1's paging mode is updated.  Wrong.
>>>     Current result in VMX:
>>>           vcpu_in_guestmode: nested paging mode is updated.  Correct.
>>> Virtual vmexit:
>>>     Expected result: L1's paging mode is updated.
>>>     Current result in SVM:
>>>           vcpu_in_guestmode: nested paging mode is updated.  Wrong.
>>>     Current result in VMX:
>>>           !vcpu_in_guestmode: L1's paging mode is updated.  Correct.
>>> From the above, we can see the problem is that SVM and VMX distinguish
>>> vmentry from vmexit in different ways. Since I am not familiar with
>>> SVM, I am not sure whether the usage of vcpu_in_guestmode on the SVM
>>> side is right or wrong. But on the VMX side, once the vmcs is changed,
>>> vcpu_in_guestmode changes too, which we think follows the hardware's
>>> behavior.
>>> 
>>> Also, I think I found another issue: the paging mode cannot be
>>> tracked correctly either with the current approach or with my patch.
>>> I need more time to investigate.
>> 
>> Ok. The issue is that the paging state can be changed via vmwrite (L1
>> writes L2's vmcs to change the paging mode directly), which L0 is
>> unaware of, so the hypervisor cannot set the right nested paging mode
>> during virtual vmentry. We therefore need to update the nested mode
>> unconditionally on each virtual vmentry.
>> 
>>> 
>>> 
>>>> This patch breaks both vmentry and vmexit emulation for SVM.
>>>> The vmentry breakage occurs when the l1 hypervisor uses shadow paging.
>>>> 
>>>> During vmexit emulation, hvm_set_cr0 and hvm_set_cr4 are called to
>>>> restore cr0 and cr4 for the l1 guest. With this patch, the paging
>>>> mode is updated for the l2 guest rather than for the l1 guest.
>>>> 
>>>> I think this patch also breaks the case where the l2 guest wants to
>>>> set cr0 or cr4, the l1 hypervisor does not intercept cr0/cr4, and
>>>> the l1 hypervisor uses shadow paging. This may also apply to VMX.
>>> 
>>> For this case, am I missing something? If yes, please correct me.
>>> 
>>>> 
>>>> This is just from reading the code. As I said, I do not have a
>>>> setup to verify this, unfortunately.
> 
> Any comments ?

Considering Christoph's comments and reservations, if you can't deal
with this alone, I think you should work with the AMD people to
eliminate or address his concerns.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 07:38:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 07:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ya2-0004lK-L4; Tue, 14 Jan 2014 07:38:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W2ya1-0004iB-6R
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 07:38:33 +0000
Received: from [193.109.254.147:26445] by server-12.bemta-14.messagelabs.com
	id 3E/A2-13681-879E4D25; Tue, 14 Jan 2014 07:38:32 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389685111!10685207!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8852 invoked from network); 14 Jan 2014 07:38:32 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-27.messagelabs.com with SMTP;
	14 Jan 2014 07:38:32 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 13 Jan 2014 23:34:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,657,1384329600"; d="scan'208";a="438451233"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 13 Jan 2014 23:38:29 -0800
Received: from fmsmsx117.amr.corp.intel.com (10.18.116.17) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 23:38:29 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx117.amr.corp.intel.com (10.18.116.17) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 13 Jan 2014 23:38:28 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Tue, 14 Jan 2014 15:38:27 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Christoph Egger <chegger@amazon.de>,
	"Dong, Eddie" <eddie.dong@intel.com>
Thread-Topic: [PATCH 1/3] Nested VMX: update nested paging mode when
	vmswitch is in progress
Thread-Index: AQHO+9kU0FlGvzaCh0WxrQUf7BQwfJpZu8qw//+ZSoCAAw7kcIAG236QgCBqm9D//8zygIAAhlgA
Date: Tue, 14 Jan 2014 07:38:27 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BC3CB@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
	<52D4F57F020000780011362C@nat28.tlf.novell.com>
In-Reply-To: <52D4F57F020000780011362C@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-14:
>>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Zhang, Yang Z wrote on 2013-12-24:
>> 
>> Any comments ?
> 
> Considering Christoph's comments and reservations, if you can't alone
> deal with this I think you should work with the AMD people to
> eliminate or address his concerns.
> 

Yes. But what puzzles me is that Christoph said nested SVM works well without my patch, which I cannot understand: according to my analysis in the previous thread, the AMD side is buggy as well. But if they really have solved the issue on their side, I wonder how they fixed it. Perhaps I can use the same solution on the VMX side without touching the common code.

Christoph, could you help check this? Thanks.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-14:
>>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Zhang, Yang Z wrote on 2013-12-24:
>> 
>> Any comments ?
> 
> Considering Christoph's comments and reservations, if you can't deal
> with this alone, I think you should work with the AMD people to
> eliminate or address his concerns.
> 

Yes. But what puzzles me is that Christoph said nested SVM works well without my patch, which I cannot understand: according to my analysis in the previous thread, the AMD side is buggy as well. But if they really have solved the issue on their side, I wonder how they fixed it. Perhaps I can use the same solution on the VMX side without touching the common code.

Christoph, can you help to check this? Thanks.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 07:49:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 07:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2ykl-0005LM-6S; Tue, 14 Jan 2014 07:49:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W2ykk-0005LH-86
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 07:49:38 +0000
Received: from [193.109.254.147:39278] by server-12.bemta-14.messagelabs.com
	id AC/AC-13681-11CE4D25; Tue, 14 Jan 2014 07:49:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389685776!9183568!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28466 invoked from network); 14 Jan 2014 07:49:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Jan 2014 07:49:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Jan 2014 07:49:36 +0000
Message-Id: <52D4FA1E0200007800113643@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 14 Jan 2014 07:49:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
	<52D432AF.5030205@terremark.com>
In-Reply-To: <52D432AF.5030205@terremark.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 19:38, Don Slutz <dslutz@verizon.com> wrote:
> On 01/13/14 10:14, Jan Beulich wrote:
>> Aiming at a release in late January or early February, I'd like to cut
>> RC1s later this or early next week.
>>
>> Please indicate any bug fixes that so far may have been missed
>> in the backports already done.
>>
>> Thanks, Jan
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 
> 
> 
> I do not see:
> 
> 
> commit 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
> 
> 
> in any 4.2 or 4.3 tree.  As Andrew Cooper said:
> 
> 
>  > This is a hypervisor reference counting error on a toolstack hypercall
>  > path.  Irrespective of any of the other patches in this series, I think
>  > this should be included ASAP (although probably subject to review from a
>  > third person), which will fix the hypervisor crashes from gdbsx usage.

Considering that this is only debugging code, I did not view this as
important enough to backport without explicit request.
Furthermore, it hadn't passed the push gate yet when I collected
the most recent batch of backports.

I just now added it to my list of pending backports.

As an aside: It would have helped (read: saved me some time) if you
had also reproduced the title of the commit instead of just the commit
ID. See Andrew's response to the original mail of this thread for an
example (the precise form doesn't matter, the conveyed information
does).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 08:28:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 08:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W2zM7-0007Ix-Px; Tue, 14 Jan 2014 08:28:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W2zM6-0007Is-Gr
	for xen-devel@lists.xensource.com; Tue, 14 Jan 2014 08:28:14 +0000
Received: from [85.158.137.68:54110] by server-7.bemta-3.messagelabs.com id
	22/C4-27599-D15F4D25; Tue, 14 Jan 2014 08:28:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389688089!5300981!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6585 invoked from network); 14 Jan 2014 08:28:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 08:28:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="90470276"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 08:28:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 03:28:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W2zLz-0005iw-RV;
	Tue, 14 Jan 2014 08:28:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W2zLz-0004MR-PZ;
	Tue, 14 Jan 2014 08:28:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24369-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Jan 2014 08:28:07 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24369: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24369 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24369/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install       fail pass in 24366
 test-amd64-amd64-xl-qemut-win7-amd64 11 guest-localmigrate.2 fail pass in 24366
 test-amd64-i386-xl-win7-amd64 11 guest-localmigrate.2       fail pass in 24366
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24366 pass in 24369

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-winxpsp3  7 windows-install              fail   like 24366
 test-amd64-amd64-xl-sedf    14 guest-localmigrate/x10 fail in 24366 like 24364
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 24366 like 24354
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10 fail in 24366 like 24354

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24366 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24366 never pass

version targeted for testing:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 09:31:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 09:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30Kh-00029M-Rs; Tue, 14 Jan 2014 09:30:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30Kg-00029H-0r
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 09:30:50 +0000
Received: from [85.158.143.35:63129] by server-1.bemta-4.messagelabs.com id
	34/32-02132-7C305D25; Tue, 14 Jan 2014 09:30:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389691846!8884170!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1711 invoked from network); 14 Jan 2014 09:30:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 09:30:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="90483355"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 09:30:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 04:30:45 -0500
Message-ID: <1389691844.13654.119.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 14 Jan 2014 09:30:44 +0000
In-Reply-To: <1389636937-9477-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389636937-9477-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: Always use "fast" migration resume
	protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 18:15 +0000, Ian Jackson wrote:
> As Ian Campbell writes:

"...in http://bugs.xenproject.org/xen/bug/30" would be useful here (can
add on commit, no need to resend just for this IMHO)

>   There are two mechanisms by which a suspend can be aborted and the
>   original domain resumed.
> 
>   The older method is that the toolstack resets a bunch of state (see
>   tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
>   restarts the domain. The domain will see HYPERVISOR_suspend return 0
>   and will continue without any realisation that it is actually
>   running in the original domain and not in a new one. This method is
>   supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
>   but it is not.
> 
>   The other method is newer and in this case the toolstack arranges
>   that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
>   domain will observe this and realise that it has been restarted in
>   the same domain and will behave accordingly. This method is
>   implemented, correctly AFAIK, by
>   libxl_domain_resume(suspend_cancel=1).
> 
> Attempting to use the old method without doing all of the work simply
> causes the guest to crash.  Implementing the work required for the old
> method, or for checking that domains actually support the new method,
> is not feasible at this stage of the 4.4 release.
> 
> So, always use the new method, without regard to the declarations of
> support by the guest.  This is a strict improvement: guests which do
> in fact support the new method will work, whereas ones which don't are
> no worse off.

I agree with this rationale.

> There are two call sites of libxl_domain_resume that need fixing, both
> in the migration error path.
> 
> With this change I observe a correct and successful resumption of a
> Debian wheezy guest with a Linux 3.4.70 kernel after a migration
> attempt which I arranged to fail by nobbling the block hotplug script.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

I think you have at least a partial patch ready for 4.5?

> CC: konrad.wilk@oracle.com
> CC: David Vrabel <david.vrabel@citrix.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  tools/libxl/xl_cmdimpl.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index c30f495..d93e01b 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -3734,7 +3734,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
>          if (common_domname) {
>              libxl_domain_rename(ctx, domid, away_domname, common_domname);
>          }
> -        rc = libxl_domain_resume(ctx, domid, 0, 0);
> +        rc = libxl_domain_resume(ctx, domid, 1, 0);
>          if (!rc) fprintf(stderr, "migration sender: Resumed OK.\n");
>  
>          fprintf(stderr, "Migration failed due to problems at target.\n");
> @@ -3756,7 +3756,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
>      close(send_fd);
>      migration_child_report(recv_fd);
>      fprintf(stderr, "Migration failed, resuming at sender.\n");
> -    libxl_domain_resume(ctx, domid, 0, 0);
> +    libxl_domain_resume(ctx, domid, 1, 0);
>      exit(-ERROR_FAIL);
>  
>   failed_badly:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 09:33:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 09:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30NW-0002L5-K0; Tue, 14 Jan 2014 09:33:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30NV-0002Ky-1W
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 09:33:45 +0000
Received: from [193.109.254.147:20860] by server-2.bemta-14.messagelabs.com id
	F3/81-00361-87405D25; Tue, 14 Jan 2014 09:33:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389692022!10711593!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29395 invoked from network); 14 Jan 2014 09:33:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 09:33:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="92608023"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 09:33:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 04:33:20 -0500
Message-ID: <1389691999.13654.120.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Miller <davem@davemloft.net>
Date: Tue, 14 Jan 2014 09:33:19 +0000
In-Reply-To: <20140113.112720.2236966361404094251.davem@davemloft.net>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
	<1389261768-30606-4-git-send-email-paul.durrant@citrix.com>
	<20140113.112720.2236966361404094251.davem@davemloft.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: netdev@vger.kernel.org, boris.ostrovsky@oracle.com, paul.durrant@citrix.com,
	david.vrabel@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2 3/3] xen-netfront: use new
 skb_checksum_setup function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 11:27 -0800, David Miller wrote:
> From: Paul Durrant <paul.durrant@citrix.com>
> Date: Thu, 9 Jan 2014 10:02:48 +0000
> 
> > Use skb_checksum_setup to set up partial checksum offsets rather
> > than a private implementation.
> > 
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> 
> This patch really needs review by a netfront expert before I can apply
> this series.

I think in combination with 1/3 it is basically code motion. So although
I'm not the netfront maintainer:

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 09:34:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 09:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30OK-0002QJ-3j; Tue, 14 Jan 2014 09:34:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30OI-0002Q3-7k
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 09:34:34 +0000
Received: from [193.109.254.147:49218] by server-16.bemta-14.messagelabs.com
	id CB/BC-20600-9A405D25; Tue, 14 Jan 2014 09:34:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389692071!8397052!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13898 invoked from network); 14 Jan 2014 09:34:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 09:34:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="92608173"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 09:34:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 04:34:30 -0500
Message-ID: <1389692069.13654.121.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 14 Jan 2014 09:34:29 +0000
In-Reply-To: <1389691844.13654.119.camel@kazak.uk.xensource.com>
References: <1389636937-9477-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389691844.13654.119.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: Always use "fast" migration resume
 protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 09:30 +0000, Ian Campbell wrote:
> On Mon, 2014-01-13 at 18:15 +0000, Ian Jackson wrote:
> > As Ian Campbell writes:
> 
> "...in http://bugs.xenproject.org/xen/bug/30" would be useful here (can
> add on commit, no need to resend just for this IMHO)
> 
> >   There are two mechanisms by which a suspend can be aborted and the
> >   original domain resumed.
> > 
> >   The older method is that the toolstack resets a bunch of state (see
> >   tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
> >   restarts the domain. The domain will see HYPERVISOR_suspend return 0
> >   and will continue without any realisation that it is actually
> >   running in the original domain and not in a new one. This method is
> >   supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
> >   but it is not.
> > 
> >   The other method is newer and in this case the toolstack arranges
> >   that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
> >   domain will observe this and realise that it has been restarted in
> >   the same domain and will behave accordingly. This method is
> >   implemented, correctly AFAIK, by
> >   libxl_domain_resume(suspend_cancel=1).
> > 
> > Attempting to use the old method without doing all of the work simply
> > causes the guest to crash.  Implementing the work required for the old
> > method, or for checking that domains actually support the new method,
> > is not feasible at this stage of the 4.4 release.
> > 
> > So, always use the new method, without regard to the declarations of
> > support by the guest.  This is a strict improvement: guests which do
> > in fact support the new method will work, whereas ones which don't are
> > no worse off.
> 
> I agree with this rationale.

To be clear, I meant this as a release ack. Things are strictly better
with this change and nothing which wasn't already broken will be broken
afterwards.

> 
> > There are two call sites of libxl_domain_resume that need fixing, both
> > in the migration error path.
> > 
> > With this change I observe a correct and successful resumption of a
> > Debian wheezy guest with a Linux 3.4.70 kernel after a migration
> > attempt which I arranged to fail by nobbling the block hotplug script.

Are you going to add something to osstest?

> > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> 
> I think you have at least a partial patch ready for 4.5?
> 
> > CC: konrad.wilk@oracle.com
> > CC: David Vrabel <david.vrabel@citrix.com>
> > CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > ---
> >  tools/libxl/xl_cmdimpl.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index c30f495..d93e01b 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -3734,7 +3734,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
> >          if (common_domname) {
> >              libxl_domain_rename(ctx, domid, away_domname, common_domname);
> >          }
> > -        rc = libxl_domain_resume(ctx, domid, 0, 0);
> > +        rc = libxl_domain_resume(ctx, domid, 1, 0);
> >          if (!rc) fprintf(stderr, "migration sender: Resumed OK.\n");
> >  
> >          fprintf(stderr, "Migration failed due to problems at target.\n");
> > @@ -3756,7 +3756,7 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
> >      close(send_fd);
> >      migration_child_report(recv_fd);
> >      fprintf(stderr, "Migration failed, resuming at sender.\n");
> > -    libxl_domain_resume(ctx, domid, 0, 0);
> > +    libxl_domain_resume(ctx, domid, 1, 0);
> >      exit(-ERROR_FAIL);
> >  
> >   failed_badly:
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 09:42:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 09:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30WE-00034o-CV; Tue, 14 Jan 2014 09:42:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30WC-00034j-Sr
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 09:42:45 +0000
Received: from [85.158.137.68:42305] by server-15.bemta-3.messagelabs.com id
	7F/0B-11556-39605D25; Tue, 14 Jan 2014 09:42:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389692561!8951496!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2799 invoked from network); 14 Jan 2014 09:42:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 09:42:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="92609294"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 09:42:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 04:42:41 -0500
Message-ID: <1389692560.9887.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Kai Huang <dev.kai.huang@gmail.com>
Date: Tue, 14 Jan 2014 09:42:40 +0000
In-Reply-To: <CAOtp4Kqwye7tHAzj0-JLyeZ1-vvgp9ji76yVZ+Wti89Y8RUFrQ@mail.gmail.com>
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>
	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>
	<CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>
	<1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>
	<CAOtp4Kqwye7tHAzj0-JLyeZ1-vvgp9ji76yVZ+Wti89Y8RUFrQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Adel Amani <adel.amani66@yahoo.com>, Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 10:17 +0800, Kai Huang wrote:
> On Mon, Jan 13, 2014 at 7:19 PM, Adel Amani <adel.amani66@yahoo.com> wrote:
> > By downtime I mean the total downtime: the time necessary to
> > quiesce the VM on the source, transfer the device state to the destination,
> > load the device state, and copy over all the remaining memory pages
> > concurrently with loading the device state.
> > How can I print the bitmap in the output, or otherwise see it?
> 
> I believe you need to add code by yourself to print out the bitmap.

Yes.
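To make "add code by yourself" concrete, a walk over the dirty bitmap could look like the sketch below. This is illustrative only: the bitmap layout (one bit per pfn in an array of unsigned long) matches the usual convention, but `to_send` and `p2m_size` are hypothetical names, not necessarily the ones in the actual save code.

```c
#include <stdio.h>

/* Illustrative only: print which pfns are marked dirty in a bitmap
   (one bit per pfn), as one might add inside the save loop.
   Returns the number of dirty pfns found. */
static int print_dirty_pfns(const unsigned long *to_send,
                            unsigned long p2m_size, FILE *out)
{
    unsigned long pfn, ndirty = 0;
    const unsigned int bits = 8 * sizeof(unsigned long);

    for (pfn = 0; pfn < p2m_size; pfn++)
        if (to_send[pfn / bits] & (1UL << (pfn % bits))) {
            fprintf(out, "dirty pfn %lu\n", pfn);
            ndirty++;
        }
    return (int)ndirty;
}
```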

> > Is there a particular name for the downtime in xen_domain_save?
> > How can I see the downtime in the output?
> 
> Not sure if there's already code that meets your requirement; probably
> you will also need to hack in some code to print out the downtime.

I don't think there is any such code, but this would be trivial for Adel
to check by reading the source.
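Since there is no built-in downtime counter, the minimal hack amounts to timestamping the final suspend and the resumption acknowledgement and printing the difference. The helper below is a sketch only; `suspend_start` and `resume_end` are hypothetical names for points one would have to identify in the actual save loop.

```c
#include <sys/time.h>

/* Illustrative helper for measuring downtime: returns b - a in seconds.
   One would record `suspend_start` just before the final suspend and
   `resume_end` once the destination confirms resumption, then print
   tv_delta(&suspend_start, &resume_end) as the observed downtime. */
static double tv_delta(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) + (b->tv_usec - a->tv_usec) / 1e6;
}
```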

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 09:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 09:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30ke-0003tm-6D; Tue, 14 Jan 2014 09:57:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30kd-0003th-GC
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 09:57:39 +0000
Received: from [85.158.139.211:10797] by server-11.bemta-5.messagelabs.com id
	DF/57-23268-21A05D25; Tue, 14 Jan 2014 09:57:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389693456!9581674!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32349 invoked from network); 14 Jan 2014 09:57:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 09:57:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="90488049"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 09:57:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 04:57:35 -0500
Message-ID: <1389693454.9887.14.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 14 Jan 2014 09:57:34 +0000
In-Reply-To: <1389632807-13005-1-git-send-email-baozich@gmail.com>
References: <1389632807-13005-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm64: fix section shift when mapping
 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 01:06 +0800, Chen Baozi wrote:
> Section shift for level-2 page table should be #21 rather than #20.

Thanks, are you finding these issues on actual hardware or just by
inspection?

I think this change is correct, also arm32 has:

        1:      /* Setup boot_second: */
                ldr   r4, =boot_second
                add   r4, r4, r10            /* r4 := paddr (boot_second) */
        
                lsr   r2, r9, #20            /* Base address for 2MB mapping */
                lsl   r2, r2, #20
                orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
                orr   r2, r2, #PT_LOWER(MEM)
        
Either they are both wrong or both right (I think both wrong).

The effect of this error is that bit #20 of paddr(boot_second) is
preserved as bit #20 of the PTE, rather than being cleared. Bit #20 of
an entry at this level is UNK/SBZP -- so we survive this mistake even if
paddr(boot_second) happens to have bit #20 set.

Really this code should use {FIRST,SECOND,THIRD}_SHIFT, and this file
already includes asm/page.h so they should be available...
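For reference, the alignment that the lsr/lsl pair performs can be modelled in C. This is only a sketch of the arithmetic; `SECOND_SHIFT` here mirrors the 2MB block shift the patch corrects, not necessarily the literal asm/page.h definition:

```c
#include <stdint.h>

#define SECOND_SHIFT 21  /* a level-2 (2MB) block covers address bits [20:0] */

/* Align paddr down to a 2MB block boundary, as `lsr x2, x19, #21`
   followed by `lsl x2, x2, #21` does. With the old #20 shift, bit #20
   of the address would survive into the entry instead of being cleared. */
static uint64_t block_base_2mb(uint64_t paddr)
{
    return (paddr >> SECOND_SHIFT) << SECOND_SHIFT;
}
```

For example, 0x40100000 (bit #20 set) maps to block base 0x40000000 with the corrected shift, whereas the old `#20` pair would leave it at 0x40100000.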

> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>
> ---
>  xen/arch/arm/arm64/head.S | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index bebddf0..ad35f60 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -292,8 +292,8 @@ skip_bss:
>          ldr   x4, =boot_second
>          add   x4, x4, x20            /* x4 := paddr (boot_second) */
>  
> -        lsr   x2, x19, #20           /* Base address for 2MB mapping */
> -        lsl   x2, x2, #20
> +        lsr   x2, x19, #21           /* Base address for 2MB mapping */
> +        lsl   x2, x2, #21
>          mov   x3, #PT_MEM            /* x2 := Section map */
>          orr   x2, x2, x3
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 10:12:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 10:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W30yw-000517-U9; Tue, 14 Jan 2014 10:12:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W30yv-000512-8S
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 10:12:25 +0000
Received: from [85.158.137.68:50369] by server-16.bemta-3.messagelabs.com id
	96/F9-26128-88D05D25; Tue, 14 Jan 2014 10:12:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389694341!9013496!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10767 invoked from network); 14 Jan 2014 10:12:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 10:12:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,657,1384300800"; d="scan'208";a="90491511"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 10:12:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 05:12:20 -0500
Message-ID: <1389694339.9887.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: John Johnson <lausgans@gmail.com>
Date: Tue, 14 Jan 2014 10:12:19 +0000
In-Reply-To: <CANoehN-HmJyv3VY-s-00jhEp-SPbOQGJhRDpOuUdoUtQB3RsVg@mail.gmail.com>
References: <0d0b4c75-5a62-4d7c-9b4b-f4998257e398@chromium.org>
	<CALiw-2Efb41+=+iv3Q6oTKS-g7FsdCnxq3zLpvx_PX787UXdcg@mail.gmail.com>
	<alpine.DEB.2.02.1304102022330.5353@kaball.uk.xensource.com>
	<51668C9B.8030607@citrix.com> <5169062B.5080909@gmail.com>
	<516998D0.5090805@gmail.com>
	<CAAbOSc=iCd1Lrv5tGrwjp2g-Zd84n6kGAvUFFuDiUYQun3z_6A@mail.gmail.com>
	<CAJJyHj+h832_fjg0FUtDH6qzkXwmhvJCBMJJumpZhO-LkgoLuw@mail.gmail.com>
	<516A37B1.7070902@gmail.com>
	<e0fc2703-6eb7-4667-b13b-4ccaf69502d5@chromium.org>
	<CANoehN-HmJyv3VY-s-00jhEp-SPbOQGJhRDpOuUdoUtQB3RsVg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	chromium-os-discuss@chromium.org, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Fwd: [cros-discuss] Boot in Hyp mode on Samsung ARM
 Chromebook
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 11:36 +0400, John Johnson wrote:
> Meanwhile, I've tried to boot just the bare linux-chromebook kernel from
> the xen repo, to no avail.

Are you trying to use the March 2013 branch referred to by the wiki page
at [0] by any chance? That page gave the misleading impression that Xen
on Chromebook was working, when in reality it was very much a WIP. See
also the thread on the Debian lists at [1].

I asked Anthony to update the wiki page to better reflect reality, and
it looks like he did so yesterday. Thanks, Anthony.

As I said in the debian-arm thread:
         as you are probably gathering you are going to have to do some
        development work if you want to run Xen on your chromebook at
        this stage. We are happy to provide guidance (probably best to
        take this upstream) but it is going to need a certain amount of
        low-level hacking skills -- especially given the lack of a serial
        console on the chromebook (unless you have soldered one on).

Sorry.

Ian.

[0] http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook
[1] https://lists.debian.org/debian-arm/2014/01/msg00037.html



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 10:47:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 10:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W31W1-0006U1-28; Tue, 14 Jan 2014 10:46:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W31Vz-0006Tw-0W
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 10:46:35 +0000
Received: from [193.109.254.147:35217] by server-10.bemta-14.messagelabs.com
	id A4/B5-20752-A8515D25; Tue, 14 Jan 2014 10:46:34 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389696392!10716556!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23342 invoked from network); 14 Jan 2014 10:46:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 10:46:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90497470"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 10:46:31 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 05:46:31 -0500
Message-ID: <52D51586.3020508@citrix.com>
Date: Tue, 14 Jan 2014 10:46:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140113093032.GA13919@aepfle.de>
	<52D3FD67.2060708@oracle.com>
	<20140113230740.GA23544@aepfle.de> <52D49CD5.20805@oracle.com>
In-Reply-To: <52D49CD5.20805@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 02:11, Boris Ostrovsky wrote:
> On 01/13/2014 06:07 PM, Olaf Hering wrote:
>> On Mon, Jan 13, Boris Ostrovsky wrote:
>>
>>> On 01/13/2014 04:30 AM, Olaf Hering wrote:
>>>>> Similarly, if xenbus_gather("discard-secure") fails, I think the
>>>>> code will
>>>>> assume that secure discard has not been requested. I don't know what
>>>>> security implications this will have but it sounds bad to me.
>>>> There are no security implications; if the backend does not
>>>> advertise it
>>>> then it's not present.
>>> Right. But my question was: what if the backend does advertise it and
>>> wants
>>> the frontend to use it but xenbus_gather() in the frontend fails. Do
>>> we want
>>> to silently continue without discard-secure? Is this safe?
>> The frontend can not know that the backend advertised discard-secure
>> because the frontend just failed to read the property which indicates
>> discard-secure should be enabled.
> 
> And is it OK for the frontend not to know about this?

Yes.

> I don't understand what the use model for this feature is. Is it just
> that the backend advertises its capability and it's up to the frontend
> to use it or not -or- is it that the user/admin created the storage with
> expectations that it will be used in "secure" manner.

The creator of the data (i.e., the guest) is the one who knows whether
the data is security sensitive.  The backend is simply providing the
facility which the guest may or may not make use of.
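The fail-safe behaviour being discussed can be sketched like this (a minimal stand-in, not the actual blkfront code; secure_discard_enabled() and read_result are hypothetical, with read_result standing in for xenbus_gather()'s return value, negative on failure):

```c
#include <assert.h>

/*
 * Hedged sketch of the behaviour discussed above -- not the actual
 * blkfront code.  If the frontend fails to read the backend's
 * "discard-secure" property for any reason, it behaves exactly as if
 * the feature had never been advertised.
 */
static int secure_discard_enabled(int read_result, int advertised_value)
{
    if (read_result < 0)
        return 0;            /* read failed: treat as not advertised */
    return advertised_value; /* otherwise trust the backend's value */
}
```

Because the feature is opt-in from the guest's side, defaulting to "off" on a failed read is safe by construction.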

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 10:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 10:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W31b6-00073e-7K; Tue, 14 Jan 2014 10:51:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W31b5-00073Y-4q
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 10:51:51 +0000
Received: from [85.158.143.35:43419] by server-3.bemta-4.messagelabs.com id
	32/1C-32360-6C615D25; Tue, 14 Jan 2014 10:51:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389696708!11548050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2545 invoked from network); 14 Jan 2014 10:51:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 10:51:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92622322"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 10:51:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 05:51:47 -0500
Message-ID: <1389696705.9887.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Tue, 14 Jan 2014 10:51:45 +0000
In-Reply-To: <CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
 mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-11 at 01:58 +0000, karim.allah.ahmed@gmail.com wrote:
> On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
> >> Create multiple banks to hold dom0 memory in case of 1:1 mapping
> >> if we failed to find 1 large contiguous memory to hold the whole
> >> thing.
> >
> > Thanks. While with my ARM maintainer hat on I'd love for this to go in
> > for 4.4, with my acting release manager hat on I think, if I have to be
> > honest, this is too big a change for 4.4 at this stage, which is a
> > pity :-(
> >
> >
> >>
> >> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> >> ---
> >>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
> >>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
> >>  2 files changed, 121 insertions(+), 39 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >> index faff88e..bb44cdd 100644
> >> --- a/xen/arch/arm/domain_build.c
> >> +++ b/xen/arch/arm/domain_build.c
> >> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
> >>  {
> >>      paddr_t start;
> >>      paddr_t size;
> >> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
> >>      struct page_info *pg = NULL;
> >> -    unsigned int order = get_order_from_bytes(dom0_mem);
> >> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
> >>      int res;
> >>      paddr_t spfn;
> >>      unsigned int bits;
> >>
> >> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> >> +#define MIN_BANK_ORDER 10
> >
> > 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
> > which can be made, but I think in practice the lowest useful bank size
> > is going to be somewhat larger than that.
> 
> MIN_BANK_ORDER is in pages, so it's 2^10 pages, not 2^10 bytes, which
> makes the minimum 4 MByte

My mistake. Please add a comment though.

> >> +
> >> +    kinfo->mem.nr_banks = 0;
> >> +
> >> +    /*
> >> +     * We always first try to allocate all dom0 memory in 1 bank.
> >> +     * However, if we failed to allocate physically contiguous memory
> >> +     * from the allocator, then we try to create more than one bank.
> >> +     */
> >> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
> >
> > I think this can be just
> >         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
> > (maybe order >= ?) or
> >         while (order > MIN_BANK_ORDER )
> >         {
> >                 ...
> >                 order--;
> >         }
> > I think the first works better.
> >
> > This does away with the need for cur_order vs order. I think order is
> > mostly unused after this patch, also not renaming cur_order would
> > hopefully reduce the diff and therefore the "scariness" of the patch wrt
> > 4.4 (although that may not be sufficient).
> 
> Yes, that's correct, however I'm intentionally using a different
> variable because I thought that this is going to make things more
> obvious. If you think it's better to use the same variable, then I'll
> update it.

Personally I found it confusing to read as it is. Although reading
further down I'm still confused and I'm not sure that my suggestion
would actually help.

> >
> >> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
> >> +                             if ( pg != NULL )
> >> +                                     break;
> >> +                     }
> >> +
> >> +                     if ( !pg )
> >> +                             break;
> >> +
> >> +                     spfn = page_to_mfn(pg);
> >> +                     start = pfn_to_paddr(spfn);
> >> +                     size = pfn_to_paddr((1 << cur_order));
> >> +
> >> +                 kinfo->mem.bank[index].start = start;
> >> +                 kinfo->mem.bank[index].size = size;
> >> +                 index++;
> >> +                 kinfo->mem.nr_banks++;
> >> +     }
> >> +
> >> +     if( pg )
> >> +             break;
> >> +
> >> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
> >
> > <<1 ?
> 
> * 2 :)

I know that ;-) But why are you doubling the number of banks at all?

> > Is this not just kinfo->mem.nr_banks?
> 
> No, in this equation I'm calculating how much more memory will be
> needed to satisfy the memory size of dom0.
> So at the end of the iteration, I check how much memory has been
> allocated (=cur_bank * cur_order) and how much memory was needed
> (=nr_banks * cur_order), so nr_unallocated_banks = (nr_banks -
> cur_bank + 1) * cur_order
> So cur_order is decremented and nr_unallocated_banks is doubled ( <<1
> ) and we do another iteration to satisfy the rest of unallocated
> memory.

Hrm, I'm still a bit confused. I think perhaps choosing better names for
the variables (e.g. it seems like you are saying nr_banks is actually
nr_unallocated_banks?) and adding some comments might help.

But is this algorithm not more complex than it needs to be? Why not just
allocate memory, subtracting it from kinfo->unassigned_mem as you go and
adding a new bank each time? The result would be the same, wouldn't it?
e.g.:

	while ( kinfo->unassigned_mem )
	{
		order = get_order_of_bytes(kinfo->unassigned_mem)
		do {
			pg = alloc_domheap_pages(... order ...)
		} while ( !pg && (order >>= 1) > MIN_BANK_ORDER )

		if ( !pg )
			urk!

		kinfo->mem.bank[kinfo->mem.nr_banks].{start,size} = (stuff based on pg, order etc)
		kinfo->mem.nr_banks++
		kinfo->unassigned_mem -= /*whatever it is*/

		/* maybe do guest_physmap_add_page here too */
	}

I think that will produce the same result, won't it?

In either algorithm there needs to be a check for NR_MEM_BANKS somewhere
or else it will run off the end of kinfo->mem.bank[].
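The loop sketched above, with the NR_MEM_BANKS bound included, can be fleshed out as a self-contained sketch (all names and sizes here are illustrative stand-ins, not the Xen ones: fake_alloc() simulates alloc_domheap_pages() with an upper bound on the order it can satisfy, sizes are in pages, and the order is stepped down one at a time on failure):

```c
#include <assert.h>
#include <stdint.h>

#define NR_MEM_BANKS   8
#define MIN_BANK_ORDER 2   /* illustrative; much smaller than the patch's 10 */

struct bank    { uint64_t start, size; };
struct meminfo { unsigned int nr_banks; struct bank bank[NR_MEM_BANKS]; };

/* Stand-in for alloc_domheap_pages(): succeeds only up to max_order. */
static int fake_alloc(unsigned int order, unsigned int max_order,
                      uint64_t *start)
{
    static uint64_t next = 0x1000;

    if (order > max_order)
        return -1;
    *start = next;
    next += (uint64_t)1 << order;
    return 0;
}

/* Largest order such that (1 << order) pages still fit in pages_left. */
static unsigned int order_floor(uint64_t pages_left)
{
    unsigned int order = 0;

    while (((uint64_t)2 << order) <= pages_left)
        order++;
    return order;
}

/*
 * Allocate the largest block that fits the remaining memory, stepping
 * the order down on failure, and record each success as a new bank.
 * Returns 0 on success, -1 if we run out of banks or cannot allocate
 * even a minimum-order block.
 */
static int allocate_11(struct meminfo *mem, uint64_t pages_left,
                       unsigned int max_order)
{
    while (pages_left) {
        unsigned int order = order_floor(pages_left);
        uint64_t start;
        int ok;

        while ((ok = fake_alloc(order, max_order, &start)) < 0 &&
               order > MIN_BANK_ORDER)
            order--;
        if (ok < 0 || mem->nr_banks >= NR_MEM_BANKS)
            return -1;

        mem->bank[mem->nr_banks].start = start;
        mem->bank[mem->nr_banks].size  = (uint64_t)1 << order;
        mem->nr_banks++;
        pages_left -= (uint64_t)1 << order;
    }
    return 0;
}

/* Run one scenario; returns total pages placed, 0 on failure. */
static uint64_t demo_total(uint64_t pages, unsigned int max_order,
                           unsigned int *nr)
{
    struct meminfo m = { 0, { { 0, 0 } } };
    uint64_t total = 0;
    unsigned int i;

    if (allocate_11(&m, pages, max_order) != 0)
        return 0;
    for (i = 0; i < m.nr_banks; i++)
        total += m.bank[i].size;
    *nr = m.nr_banks;
    return total;
}
```

The per-iteration bookkeeping (subtract from the remaining amount, append one bank) is what makes the nr_banks doubling arithmetic unnecessary.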

> 
> >
> > The basic structure here is:
> >
> > for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
> >         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
> >                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> >
> > Shouldn't the iteration over bank be the outer one?
> >
> > The banks might be different sizes, right?
> >
> 
> The outer loop will define the order of the allocated bank[s] while
> the inner one will define how many banks of that order will be needed.
> 
> So you try to allocate one big bank, if it succeeds you're done. If
> not, you double the number of required banks and retry with smaller
> order (order - 1).
> 
> The code can indeed allocate banks of different sizes. So, if we
> failed to allocate 1 big bank, we will try to allocate two banks of
> (order = order -1) in that case we might only allocate the first bank
> and fail to allocate the second one. So, we try to allocate the
> required memory for the second one in two banks.
> 
> So the logic is always: In the end of the outer loop calculate how
> many banks we need to allocate for next iteration as well as the
> required order. So all allocation that occur in the same iteration is
> equal, while each iteration changes the nr banks and order depending
> on how much memory we still need to allocate

I think I get it now, but it still seems complicated. Am I missing
something with the simpler loop I suggested?

> > Also with either approach then depending on where memory is available
> > this may result in the memory not being allocated in and/or that they
> > are not in increasing order (in fact, because Xen prefer to allocate
> > higher memory first it seems likely that it will be in reverse order).
> 
> Yes, there's no restriction whatsoever on the order of the
> addresses. However each allocation is trying to allocate the bank as
> early as possible
> 
> for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> {
>     pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>     if ( pg != NULL )
>         break;
> }
> 
> >
> > I don't know if either of those things matter. What does ePAPR have to
> > say on the matter?
> 
> There is no mention of address range ordering ( at least in section
> 3.4 "Memory Node" )

OK. It'll be interesting to see whether the implementations match up to
that!

Since max banks is necessarily small I do wonder if it might be easier
to just do a quick insertion sort as we go rather than risking it. I
suppose there will be plenty of time in the 4.5 cycle (which this work
is now almost certainly targeting) to hit any problem cases.
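The insertion-sort idea can be sketched as follows (struct bank and the helper are simplified stand-ins, not Xen's actual types; each freshly allocated bank is placed so the array stays ordered by start address):

```c
#include <assert.h>
#include <stdint.h>

#define NR_MEM_BANKS 8

struct bank { uint64_t start, size; };

/*
 * Keep the bank array sorted by start address by inserting each new
 * bank in place; with at most NR_MEM_BANKS entries an insertion sort
 * is plenty.  The caller must ensure *nr_banks < NR_MEM_BANKS.
 */
static void insert_bank_sorted(struct bank *bank, unsigned int *nr_banks,
                               uint64_t start, uint64_t size)
{
    unsigned int i = *nr_banks;

    while (i > 0 && bank[i - 1].start > start) {
        bank[i] = bank[i - 1];  /* shift higher-addressed banks up */
        i--;
    }
    bank[i].start = start;
    bank[i].size  = size;
    (*nr_banks)++;
}
```

Sorting as banks are recorded would also remove the reverse-order artefact of Xen's high-first allocator.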

> > I'd expect that the ordering might matter from the point of view of
> > putting the kernel in the first bank -- since that may no longer be the
> > lowest address.
> >
> In the patch, when I set the loadaddr of the image I search through
> the banks for the lowest address bank not the first one anyway.

OK, I think that makes sense since the requirement is wrt position
relative to the start of RAM, not the first bank.

> > You don't seem to reference kinfo->unassigned_mem anywhere after the
> > initial order calculation -- I think you need to subtract memory from it
> > on each iteration, or else I'm not sure you will actually get the right
> > amount allocated in all cases.
> 
> It's being properly calculated in the external mapping loop.

Hrm yes with all that doubling etc.

> >>
> >> -    kinfo->mem.bank[0].start = start;
> >> -    kinfo->mem.bank[0].size = size;
> >> -    kinfo->mem.nr_banks = 1;
> >> +         if ( res )
> >> +             panic("Unable to add pages in DOM0: %d", res);
> >>
> >> -    kinfo->unassigned_mem -= size;
> >> +         kinfo->unassigned_mem -= size;
> >> +     }
> >>  }
> >>
> >>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
> >> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> >> index 6a5772b..658c3de 100644
> >> --- a/xen/arch/arm/kernel.c
> >> +++ b/xen/arch/arm/kernel.c
> >> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
> >>      const paddr_t total = initrd_len + dtb_len;
> >>
> >>      /* Convenient */
> >
> > If you are going to remove all of the following convenience variables
> > then this comment is no longer correct.
> >
> > (Convenient here means a shorter local name for something complex)
> 
> It still applies to these variables except probably "unsigned int i,
> min_i = -1;", so I'll push the comment one line down.

No it doesn't AFAICT. "Convenient" here means "these are shorthand for
something longer and inconvenient to type", if the variables aren't
const then they almost certainly are not a convenience in the intended
sense. same_bank, mem_* and kernel_size are all just regular variables I
think.

> >> -    const paddr_t mem_start = info->mem.bank[0].start;
> >> -    const paddr_t mem_size = info->mem.bank[0].size;
> >> -    const paddr_t mem_end = mem_start + mem_size;
> >> -    const paddr_t kernel_size = kernel_end - kernel_start;
> >> +    unsigned int i, min_i = -1;
> >> +    bool_t same_bank = false;
> >> +
> >> +    paddr_t mem_start, mem_end, mem_size;
> >> +    paddr_t kernel_size;
> >>
> >>      paddr_t addr;
> >>
> >> -    if ( total + kernel_size > mem_size )
> >> -        panic("Not enough memory in the first bank for the dtb+initrd");
> >> +    kernel_size = kernel_end - kernel_start;
> >> +
> >> +    for ( i = 0; i < info->mem.nr_banks; i++ )
> >> +    {
> >> +        mem_start = info->mem.bank[i].start;
> >> +        mem_size = info->mem.bank[i].size;
> >> +        mem_end = mem_start + mem_size - 1;
> >> +
> >> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
> >> +            same_bank = true;
> >> +        else
> >> +            same_bank = false;
> >> +
> >> +        if ( same_bank && ((total + kernel_size) < mem_size) )
> >> +            min_i = i;
> >> +        else if ( (!same_bank) && (total < mem_size) )
> >> +            min_i = i;
> >> +        else
> >> +            continue;
> >
> > What is all this doing?
> 
> Search through the banks for the bank that fits the initrd and dtb.
> The calculation is slightly different depending on whether I ended up
> using the same bank as the one used by the kernel or not. ( As
> mentioned previously, I don't just blindly choose the first bank for
> the kernel. I search through all of the banks for the lowest bank -
> address-wise - ).

Where is the address comparison before assigning min_i?

>  In the same_bank case, I just use the old
> calculations that was already there in the code, however in the
> !same_bank case I just start at the end of the bank.

Would it be clearer to do
	if ( dtb+initrd fits in kernel's bank )
		ok
	else
		search
?
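That structure could look something like the following (a hedged, self-contained sketch with simplified types; pick_modules_bank() is a hypothetical helper, not the patch's code, and sizes are plain byte counts):

```c
#include <assert.h>
#include <stdint.h>

struct bank { uint64_t start, size; };

/*
 * Prefer the bank that already holds the kernel if dtb+initrd fit
 * alongside it; otherwise fall back to the lowest-addressed bank that
 * can hold dtb+initrd on their own.  Returns the chosen bank index,
 * or -1 if nothing fits.
 */
static int pick_modules_bank(const struct bank *bank, unsigned int nr_banks,
                             uint64_t kernel_end, uint64_t kernel_size,
                             uint64_t total /* dtb + initrd */)
{
    int best = -1;
    unsigned int i;

    for (i = 0; i < nr_banks; i++) {
        uint64_t end = bank[i].start + bank[i].size;
        int kernel_here = kernel_end > bank[i].start && kernel_end <= end;
        uint64_t needed = kernel_here ? total + kernel_size : total;

        if (needed > bank[i].size)
            continue;
        if (kernel_here)
            return (int)i;  /* the kernel's own bank wins outright */
        if (best < 0 || bank[i].start < bank[best].start)
            best = (int)i;  /* otherwise remember the lowest fit */
    }
    return best;
}
```

Note this sketch does not enforce the kernel's dtb/initrd distance-from-start-of-RAM constraints either; those would need an extra check per candidate bank.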

There is also a requirement that the dtb and initrd are within certain
ranges of the start of RAM (see booting.txt and Booting in
Documentation/arm*/); does this patch implement that? It doesn't look to be
considered during the search or when placing in the !same_bank case.

This would all be simpler if we sorted the banks too. Which is another
argument for doing so IMHO.

> >> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
> >> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
> >> +        /*
> >> +         * Load kernel at the lowest possible bank
> >> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
> >
> > That commit says nothing about loading in the lowest possible bank,
> > though. If there is some relevant factor which is worth commenting on
> > please do so directly.
> 
> Loading at the lowest bank is safer because of the:
> "The current solution in Linux assuming that the delta physical
> address - virtual address is always negative".
> This delta is being calculated based on where the kernel was loaded (
> using "adr" to find physical address ).
> So loading it as early as possible is a good idea to avoid this problem.

I think this problem is now fixed upstream, the intention was to
eventually revert the workaround (Julien was going to tell me when it
had gone into stable etc, but this is now a 4.5 era revert candidate).

> However, I'm not quite sure why not just load it at the beginning of
> the memory then! I think I'll look at booting.txt for that, maybe it's
> a decompresser limitation or something!

Yes, it's all a bit complicated.

> > Actually now that the kernel is fixed upstream we don't need the
> > behaviour of that commit at all. Although there are still restrictions
> > based on load address vs start of RAM (See booting.txt in the kernel
> > docs)
> Ah, I see. I'm using an allwinner branch of the kernel (
> https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
> the latest thing.

git://github.com/linux-sunxi/linux-sunxi.git either sunxi-next or
sunxi-devel branches are pretty good baselines nowadays. I've got an
outstanding TODO to update the wiki...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 10:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 10:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W31b6-00073e-7K; Tue, 14 Jan 2014 10:51:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W31b5-00073Y-4q
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 10:51:51 +0000
Received: from [85.158.143.35:43419] by server-3.bemta-4.messagelabs.com id
	32/1C-32360-6C615D25; Tue, 14 Jan 2014 10:51:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389696708!11548050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2545 invoked from network); 14 Jan 2014 10:51:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 10:51:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92622322"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 10:51:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 05:51:47 -0500
Message-ID: <1389696705.9887.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
Date: Tue, 14 Jan 2014 10:51:45 +0000
In-Reply-To: <CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
 mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-11 at 01:58 +0000, karim.allah.ahmed@gmail.com wrote:
> On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
> >> Create multiple banks to hold dom0 memory in case of 1:1 mapping
> >> if we failed to find 1 large contiguous memory to hold the whole
> >> thing.
> >
> > Thanks. While with my ARM maintainer hat on I'd love for this to go in
> > for 4.4 with my acting release manager hat on I think if I have to be
> > honest this is too big a change for 4.4 at this stage, which is a
> > pity :-(
> >
> >
> >>
> >> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
> >> ---
> >>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
> >>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
> >>  2 files changed, 121 insertions(+), 39 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >> index faff88e..bb44cdd 100644
> >> --- a/xen/arch/arm/domain_build.c
> >> +++ b/xen/arch/arm/domain_build.c
> >> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
> >>  {
> >>      paddr_t start;
> >>      paddr_t size;
> >> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
> >>      struct page_info *pg = NULL;
> >> -    unsigned int order = get_order_from_bytes(dom0_mem);
> >> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
> >>      int res;
> >>      paddr_t spfn;
> >>      unsigned int bits;
> >>
> >> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> >> +#define MIN_BANK_ORDER 10
> >
> > 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
> > which can be made, but I think in practice the lowest useful bank size
> > is going to be somewhat larger than that.
> 
> MIN_BANK_ORDER is in pages, so it 2^10 pages not 2^10 bytes which
> makes the minimum 4 Mbyte

My mistake. Please add a comment though.

> >> +
> >> +    kinfo->mem.nr_banks = 0;
> >> +
> >> +    /*
> >> +     * We always first try to allocate all dom0 memory in 1 bank.
> >> +     * However, if we failed to allocate physically contiguous memory
> >> +     * from the allocator, then we try to create more than one bank.
> >> +     */
> >> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
> >
> > I think this can be just
> >         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
> > (maybe order >= ?) or
> >         while (order > MIN_BANK_ORDER )
> >         {
> >                 ...
> >                 order--;
> >         }
> > I think the first works better.
> >
> > This does away with the need for cur_order vs order. I think order is
> > mostly unused after this patch, also not renaming cur_order would
> > hopefully reduce the diff and therefore the "scariness" of the patch wrt
> > 4.4 (although that may not be sufficient).
> 
> Yes, that's correct, however I'm intentionally using a different
> variable because I thought that this is going to make things more
> obvious. If you think it's better to use the same variable, then I'll
> update it.

Personally I found it confusing to read as it is. Although reading
further down I'm still confused and I'm not sure that my suggestion
would actually help.

> >
> >> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
> >> +                             if ( pg != NULL )
> >> +                                     break;
> >> +                     }
> >> +
> >> +                     if ( !pg )
> >> +                             break;
> >> +
> >> +                     spfn = page_to_mfn(pg);
> >> +                     start = pfn_to_paddr(spfn);
> >> +                     size = pfn_to_paddr((1 << cur_order));
> >> +
> >> +                 kinfo->mem.bank[index].start = start;
> >> +                 kinfo->mem.bank[index].size = size;
> >> +                 index++;
> >> +                 kinfo->mem.nr_banks++;
> >> +     }
> >> +
> >> +     if( pg )
> >> +             break;
> >> +
> >> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
> >
> > <<1 ?
> 
> * 2 :)

I know that ;-) But why are you doubling the number of banks at all?

> > Is this not just kinfo->mem.nr_banks?
> 
> No, in this equation I'm calculating how much more memory will be
> needed to satisfy the memory size of dom0.
> So at the end of the iteration, I check how much memory has been
> allocated (=cur_bank * cur_order) and how much memory was needed
> (=nr_banks * cur_order), so nr_unallocated_banks = (nr_banks -
> cur_bank + 1) * cur_order
> So cur_order is decremented and nr_unallocated_banks is doubled ( <<1
> ) and we do another iteration to satisfy the rest of unallocated
> memory.

Hrm, I'm still a bit confused. I think perhaps choosing better names for
the variables (e.g. it seems like you are saying nr_banks is actually
nr_unallocated_banks?) and adding some comments might help.

But is this algorithm not more complex than it needs to be? Why not just
allocate memory, subtracting it from kinfo->unassigned_mem as you go and
adding a new bank each time? The result would be the same, wouldn't it?
e.g.:

	while ( kinfo->unassigned_mem )
	{
		order = get_order_from_bytes(kinfo->unassigned_mem)
		do {
			pg = alloc_domheap_pages(... order ...)
		} while ( !pg && --order > MIN_BANK_ORDER )

		if ( !pg )
			urk!

		kinfo->mem.bank[kinfo->mem.nr_banks].{start,size} = (stuff based on pg, order etc)
		kinfo->mem.nr_banks++
		kinfo->unassigned_mem -= /*whatever it is*/

		/* maybe do guest_physmap_add_page here too */
	}

I think that will produce the same result, won't it?

In either algorithm there needs to be a check for NR_MEM_BANKS somewhere
or else it will run off the end of kinfo->mem.bank[].
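The simpler loop sketched above, with the NR_MEM_BANKS bound added, might look like the following. This is a standalone simulation, not the real Xen code: the allocator is a stub standing in for alloc_domheap_pages(), and NR_MEM_BANKS, MIN_BANK_ORDER and the helper names are illustrative values chosen for the example.

```c
#include <assert.h>

#define NR_MEM_BANKS   8   /* illustrative; the real value lives in Xen headers */
#define MIN_BANK_ORDER 3   /* illustrative, order in pages */

struct membank { unsigned long start, size; };

/* Stub allocator standing in for alloc_domheap_pages(): pretend the
 * heap is fragmented so runs larger than 2^stub_max_order pages fail. */
static unsigned int stub_max_order = 5;
static unsigned long stub_next = 0x1000;

static unsigned long stub_alloc(unsigned int order)
{
    unsigned long a;
    if ( order > stub_max_order )
        return 0;                         /* allocation failed */
    a = stub_next;
    stub_next += 1UL << order;
    return a;
}

/* Largest order such that 2^order <= pages. */
static unsigned int order_of(unsigned long pages)
{
    unsigned int o = 0;
    while ( (1UL << (o + 1)) <= pages )
        o++;
    return o;
}

/* The simpler loop: take the biggest chunk that fits the remaining
 * unassigned memory, record it as a bank, subtract, repeat.
 * Returns the number of banks, or -1 on failure (allocator exhausted,
 * or we would run off the end of bank[]). */
static int allocate_banks(unsigned long unassigned, struct membank *bank)
{
    int nr_banks = 0;

    while ( unassigned )
    {
        unsigned int order = order_of(unassigned);
        unsigned long pg;

        while ( (pg = stub_alloc(order)) == 0 && order > MIN_BANK_ORDER )
            order--;

        if ( !pg || nr_banks >= NR_MEM_BANKS )
            return -1;                    /* urk! */

        bank[nr_banks].start = pg;
        bank[nr_banks].size  = 1UL << order;
        nr_banks++;
        unassigned -= 1UL << order;
    }

    return nr_banks;
}
```

Each pass subtracts what was actually allocated, so no separate "doubling" bookkeeping is needed; the order simply shrinks whenever the allocator cannot satisfy the current size.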

> 
> >
> > The basic structure here is:
> >
> > for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
> >         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
> >                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> >
> > Shouldn't the iteration over bank be the outer one?
> >
> > The banks might be different sizes, right?
> >
> 
> The outer loop will define the order of the allocated bank[s] while
> the inner one will define how many banks of that order will be needed.
> 
> So you try to allocate one big bank, if it succeeds you're done. If
> not, you double the number of required banks and retry with smaller
> order (order - 1).
> 
> The code can indeed allocate banks of different sizes. So, if we
> failed to allocate 1 big bank, we will try to allocate two banks of
> (order = order -1) in that case we might only allocate the first bank
> and fail to allocate the second one. So, we try to allocate the
> required memory for the second one in two banks.
> 
> So the logic is always: at the end of the outer loop, calculate how
> many banks we need to allocate for the next iteration as well as the
> required order. All allocations that occur in the same iteration are
> of equal size, while each iteration changes the number of banks and
> the order depending on how much memory we still need to allocate.

I think I get it now, but it still seems complicated. Am I missing
something with the simpler loop I suggested?

> > Also with either approach then depending on where memory is available
> > this may result in the memory not being allocated in and/or that they
> > are not in increasing order (in fact, because Xen prefer to allocate
> > higher memory first it seems likely that it will be in reverse order).
> 
> Yes, there's no restriction whatsoever on the order of the
> addresses. However, each allocation tries to place the bank at as
> low an address as possible:
> 
> for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
> {
>     pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>     if ( pg != NULL )
>         break;
> }
> 
> >
> > I don't know if either of those things matter. What does ePAPR have to
> > say on the matter?
> 
> There is no mention of address range ordering ( at least in section
> 3.4 "Memory Node" )

OK. It'll be interesting to see whether the implementations match up to
that!

Since max banks is necessarily small I do wonder if it might be easier
to just do a quick insertion sort as we go rather than risking it. I
suppose there will be plenty of time in the 4.5 cycle (which this work
is now almost certainly targeting) to hit any problem cases.
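Since the bank count is small, the insertion sort suggested above is cheap; a sketch (the struct is a simplified stand-in for the Xen meminfo types):

```c
#include <assert.h>

struct membank { unsigned long start, size; };

/* Keep banks ordered by ascending start address. With at most a
 * handful of banks, O(n^2) insertion sort is perfectly adequate. */
static void sort_banks(struct membank *bank, unsigned int nr)
{
    unsigned int i, j;

    for ( i = 1; i < nr; i++ )
    {
        struct membank tmp = bank[i];
        /* Shift larger-start banks up until tmp's slot is found. */
        for ( j = i; j > 0 && bank[j - 1].start > tmp.start; j-- )
            bank[j] = bank[j - 1];
        bank[j] = tmp;
    }
}
```

Sorting as banks are added would also make the "lowest bank" searches later in the thread trivial: bank[0] is always the lowest.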

> > I'd expect that the ordering might matter from the point of view of
> > putting the kernel in the first bank -- since that may no longer be the
> > lowest address.
> >
> In the patch, when I set the loadaddr of the image I search through
> the banks for the lowest address bank not the first one anyway.

OK, I think that makes sense since the requirement is wrt position
relative to the start of RAM, not the first bank.

> > You don't seem to reference kinfo->unassigned_mem anywhere after the
> > initial order calculation -- I think you need to subtract memory from it
> > on each iteration, or else I'm not sure you will actually get the right
> > amount allocated in all cases.
> 
> It's being properly calculated in the outer mapping loop.

Hrm yes with all that doubling etc.

> >>
> >> -    kinfo->mem.bank[0].start = start;
> >> -    kinfo->mem.bank[0].size = size;
> >> -    kinfo->mem.nr_banks = 1;
> >> +         if ( res )
> >> +             panic("Unable to add pages in DOM0: %d", res);
> >>
> >> -    kinfo->unassigned_mem -= size;
> >> +         kinfo->unassigned_mem -= size;
> >> +     }
> >>  }
> >>
> >>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
> >> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> >> index 6a5772b..658c3de 100644
> >> --- a/xen/arch/arm/kernel.c
> >> +++ b/xen/arch/arm/kernel.c
> >> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
> >>      const paddr_t total = initrd_len + dtb_len;
> >>
> >>      /* Convenient */
> >
> > If you are going to remove all of the following convenience variables
> > then this comment is no longer correct.
> >
> > (Convenient here means a shorter local name for something complex)
> 
> It still applies to these variables except probably "unsigned int i,
> min_i = -1;", so I'll push the comment one line down.

No it doesn't AFAICT. "Convenient" here means "these are shorthand for
something longer and inconvenient to type", if the variables aren't
const then they almost certainly are not a convenience in the intended
sense. same_bank, mem_* and kernel_size are all just regular variables I
think.

> >> -    const paddr_t mem_start = info->mem.bank[0].start;
> >> -    const paddr_t mem_size = info->mem.bank[0].size;
> >> -    const paddr_t mem_end = mem_start + mem_size;
> >> -    const paddr_t kernel_size = kernel_end - kernel_start;
> >> +    unsigned int i, min_i = -1;
> >> +    bool_t same_bank = false;
> >> +
> >> +    paddr_t mem_start, mem_end, mem_size;
> >> +    paddr_t kernel_size;
> >>
> >>      paddr_t addr;
> >>
> >> -    if ( total + kernel_size > mem_size )
> >> -        panic("Not enough memory in the first bank for the dtb+initrd");
> >> +    kernel_size = kernel_end - kernel_start;
> >> +
> >> +    for ( i = 0; i < info->mem.nr_banks; i++ )
> >> +    {
> >> +        mem_start = info->mem.bank[i].start;
> >> +        mem_size = info->mem.bank[i].size;
> >> +        mem_end = mem_start + mem_size - 1;
> >> +
> >> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
> >> +            same_bank = true;
> >> +        else
> >> +            same_bank = false;
> >> +
> >> +        if ( same_bank && ((total + kernel_size) < mem_size) )
> >> +            min_i = i;
> >> +        else if ( (!same_bank) && (total < mem_size) )
> >> +            min_i = i;
> >> +        else
> >> +            continue;
> >
> > What is all this doing?
> 
> Search through the banks for the bank that fits the initrd and dtb.
> Calculation is slightly different depending on whether I ended up
> using the same bank as the one used by the kernel or not. ( as
> mentioned previously, I don't just blindly choose the first bank for
> the kernel. I search through all of the banks for the lowest banks -
> address-wise - ).

Where is the address comparison before assigning min_i?

>  In the same_bank case, I just use the old
> calculation that was already there in the code; in the
> !same_bank case I just start at the end of the bank.

Would it be clearer to do
	if ( dtb+initrd fits in kernel's bank )
		ok
	else
		search
?

There is also a requirement that the dtb and initrd are within certain
ranges of the start of RAM (see the booting.txt and Booting in
Documentation/arm*/), does this implement this? It doesn't look to be
considered during the search or when placing in the !same_bank case.

This would all be simpler if we sorted the banks too. Which is another
argument for doing so IMHO.

> >> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
> >> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
> >> +        /*
> >> +         * Load kernel at the lowest possible bank
> >> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
> >
> > That commit says nothing about loading in the lowest possible bank,
> > though. If there is some relevant factor which is worth commenting on
> > please do so directly.
> 
> Loading at the lowest bank is safer because of this:
> "The current solution in Linux assuming that the delta physical
> address - virtual address is always negative".
> This delta is being calculated based on where the kernel was loaded (
> using "adr" to find physical address ).
> So loading it as early as possible is a good idea to avoid this problem.

I think this problem is now fixed upstream, the intention was to
eventually revert the workaround (Julien was going to tell me when it
had gone into stable etc, but this is now a 4.5 era revert candidate).

> However, I'm not quite sure why not just load it at the beginning of
> the memory then! I think I'll look at booting.txt for that, maybe it's
> a decompressor limitation or something!

Yes, it's all a bit complicated.

> > Actually now that the kernel is fixed upstream we don't need the
> > behaviour of that commit at all. Although there are still restrictions
> > based on load address vs start of RAM (See booting.txt in the kernel
> > docs)
> Ah, I see. I'm using an allwinner branch of the kernel (
> https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
> the latest thing.

git://github.com/linux-sunxi/linux-sunxi.git either sunxi-next or
sunxi-devel branches are pretty good baselines nowadays. I've got an
outstanding TODO to update the wiki...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 11:52:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 11:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W32XR-0001Q4-Jt; Tue, 14 Jan 2014 11:52:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W32XQ-0001Pz-T8
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 11:52:09 +0000
Received: from [193.109.254.147:37372] by server-8.bemta-14.messagelabs.com id
	BA/F0-30921-8E425D25; Tue, 14 Jan 2014 11:52:08 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389700325!10736585!1
X-Originating-IP: [209.85.160.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10450 invoked from network); 14 Jan 2014 11:52:06 -0000
Received: from mail-pb0-f51.google.com (HELO mail-pb0-f51.google.com)
	(209.85.160.51)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 11:52:06 -0000
Received: by mail-pb0-f51.google.com with SMTP id rp16so2318925pbb.24
	for <xen-devel@lists.xenproject.org>;
	Tue, 14 Jan 2014 03:52:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=45+GsnSH+T/WOVkAWFsksH+UpJ2/0QaR6kvw++UMqr4=;
	b=kDHcJSCpUg/o4favyF0yFlrLCqBPl9nryXh2IrcwqB6sl5JH/tfuAQwLzGXXV3M9f4
	27gmNT/XmXw7EfPEvJ5Ng6BysnLreL4a905p0IuyTqH8+TpG0wTeTeVcqEkelIbqtC8V
	O1FP5+aWKag4HhUD7AC8Em/LxPKZ0yREedQeIT7zAUfrONUpi7mSdgy/80491HCpNRx5
	ztWb/Cyj0qjBxdkYVsh5XrQxhoK6zTeAX6lePOkbT2fiheypwdvKO06/LQw28EM9jK6s
	anretvoJnFjFRD+Zu8fe3fg9eXBjz5xsXeEhY8frooSTf5iQ13xALjHk6AKh4nZy2odI
	lVyw==
X-Received: by 10.66.168.12 with SMTP id zs12mr1247406pab.122.1389700324976;
	Tue, 14 Jan 2014 03:52:04 -0800 (PST)
Received: from [192.168.1.100] ([220.202.153.106])
	by mx.google.com with ESMTPSA id ns7sm725868pbc.32.2014.01.14.03.51.56
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 03:52:04 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1389693454.9887.14.camel@kazak.uk.xensource.com>
Date: Tue, 14 Jan 2014 19:51:13 +0800
Message-Id: <02A565FA-04B3-4A1D-9D81-59C948369D36@gmail.com>
References: <1389632807-13005-1-git-send-email-baozich@gmail.com>
	<1389693454.9887.14.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm64: fix section shift when mapping
	2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 14, 2014, at 17:57, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Tue, 2014-01-14 at 01:06 +0800, Chen Baozi wrote:
>> Section shift for level-2 page table should be #21 rather than #20.
> 
> Thanks, are you finding these issues on actual hardware or just by
> inspection?

Oh, I noticed this because I was inspecting my Mini-OS's code by
comparing with this. :-)

> 
> I think this change is correct, also arm32 has:
> 
>        1:      /* Setup boot_second: */
>                ldr   r4, =boot_second
>                add   r4, r4, r10            /* r1 := paddr (boot_second) */
> 
>                lsr   r2, r9, #20            /* Base address for 2MB mapping */
>                lsl   r2, r2, #20
>                orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
>                orr   r2, r2, #PT_LOWER(MEM)
> 
> Either they are both wrong or both right (I think both wrong).
> 
> The effect of this error is that bit #20 of paddr(boot_second) is
> preserved as bit #20 of the PTE, rather than being cleared. bit #20 of
> an entry at this level is UNK/SBZP -- so we survive this mistake even if
> paddr(boot_second) happens to have bit #20 set.
> 
> Really this code should use {FIRST,SECOND,THIRD}_SHIFT, and this file
> already includes asm/page.h so they should be available…

Ok, I'll write the V2 to fix them all.

Cheers,

Baozi
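The effect of the shift fix can be illustrated numerically; a small sketch (the address value below is just an example) showing why shifting right then left by #20 fails to clear bit #20, while #21 yields a 2MB-aligned base:

```c
#include <assert.h>

/* Round an address down by shifting right then left, exactly as the
 * lsr/lsl pair in head.S does. */
static unsigned long long section_base(unsigned long long addr,
                                       unsigned int shift)
{
    return (addr >> shift) << shift;
}
```

For an address with bit #20 set, e.g. 0x40100000, shift 20 returns the address unchanged (bit #20 survives into the PTE), whereas shift 21 returns 0x40000000, the 2MB section base.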

> 
>> 
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
>> ---
>> xen/arch/arm/arm64/head.S | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index bebddf0..ad35f60 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -292,8 +292,8 @@ skip_bss:
>>         ldr   x4, =boot_second
>>         add   x4, x4, x20            /* x4 := paddr (boot_second) */
>> 
>> -        lsr   x2, x19, #20           /* Base address for 2MB mapping */
>> -        lsl   x2, x2, #20
>> +        lsr   x2, x19, #21           /* Base address for 2MB mapping */
>> +        lsl   x2, x2, #21
>>         mov   x3, #PT_MEM            /* x2 := Section map */
>>         orr   x2, x2, x3
>> 
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 11:58:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 11:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W32dE-0001aa-IP; Tue, 14 Jan 2014 11:58:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W32dD-0001aR-81
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 11:58:07 +0000
Received: from [193.109.254.147:39878] by server-9.bemta-14.messagelabs.com id
	58/84-13957-E4625D25; Tue, 14 Jan 2014 11:58:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389700684!10777024!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28115 invoked from network); 14 Jan 2014 11:58:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 11:58:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90511909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 11:58:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 06:58:03 -0500
Message-ID: <1389700682.9887.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 14 Jan 2014 11:58:02 +0000
In-Reply-To: <02A565FA-04B3-4A1D-9D81-59C948369D36@gmail.com>
References: <1389632807-13005-1-git-send-email-baozich@gmail.com>
	<1389693454.9887.14.camel@kazak.uk.xensource.com>
	<02A565FA-04B3-4A1D-9D81-59C948369D36@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm64: fix section shift when mapping
 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 19:51 +0800, Chen Baozi wrote:
> On Jan 14, 2014, at 17:57, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > On Tue, 2014-01-14 at 01:06 +0800, Chen Baozi wrote:
> >> Section shift for level-2 page table should be #21 rather than #20.
> > 
> > Thanks, are you finding these issues on actual hardware or just by
> > inspection?
> 
> Oh, I noticed this because I was inspecting my Mini-OS's code by
> comparing with this. :-)

Ah, I see!

> > I think this change is correct, also arm32 has:
> > 
> >        1:      /* Setup boot_second: */
> >                ldr   r4, =boot_second
> >                add   r4, r4, r10            /* r1 := paddr (boot_second) */
> > 
> >                lsr   r2, r9, #20            /* Base address for 2MB mapping */
> >                lsl   r2, r2, #20
> >                orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
> >                orr   r2, r2, #PT_LOWER(MEM)
> > 
> > Either they are both wrong or both right (I think both wrong).
> > 
> > The effect of this error is that bit #20 of paddr(boot_second) is
> > preserved as bit #20 of the PTE, rather than being cleared. bit #20 of
> > an entry at this level is UNK/SBZP -- so we survive this mistake even if
> > paddr(boot_second) happens to have bit #20 set.
> > 
> > Really this code should use {FIRST,SECOND,THIRD}_SHIFT, and this file
> > already includes asm/page.h so they should be available…
> 
> Ok, I'll write the V2 to fix them all.

Thanks!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 11:58:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 11:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W32dE-0001aa-IP; Tue, 14 Jan 2014 11:58:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W32dD-0001aR-81
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 11:58:07 +0000
Received: from [193.109.254.147:39878] by server-9.bemta-14.messagelabs.com id
	58/84-13957-E4625D25; Tue, 14 Jan 2014 11:58:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389700684!10777024!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28115 invoked from network); 14 Jan 2014 11:58:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 11:58:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90511909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 11:58:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 06:58:03 -0500
Message-ID: <1389700682.9887.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 14 Jan 2014 11:58:02 +0000
In-Reply-To: <02A565FA-04B3-4A1D-9D81-59C948369D36@gmail.com>
References: <1389632807-13005-1-git-send-email-baozich@gmail.com>
	<1389693454.9887.14.camel@kazak.uk.xensource.com>
	<02A565FA-04B3-4A1D-9D81-59C948369D36@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm64: fix section shift when mapping
 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 19:51 +0800, Chen Baozi wrote:
> On Jan 14, 2014, at 17:57, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > On Tue, 2014-01-14 at 01:06 +0800, Chen Baozi wrote:
> >> Section shift for level-2 page table should be #21 rather than #20.
> > 
> > Thanks, are you finding these issues on actual hardware or just by
> > inspection?
> 
> Oh, I noticed this because I was inspecting my Mini-OS’s code by
> comparing with this. :-)

Ah, I see!

> > I think this change is correct, also arm32 has:
> > 
> >        1:      /* Setup boot_second: */
> >                ldr   r4, =boot_second
> >                add   r4, r4, r10            /* r1 := paddr (boot_second) */
> > 
> >                lsr   r2, r9, #20            /* Base address for 2MB mapping */
> >                lsl   r2, r2, #20
> >                orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
> >                orr   r2, r2, #PT_LOWER(MEM)
> > 
> > Either they are both wrong or both right (I think both wrong).
> > 
> > The effect of this error is that bit #20 of paddr(boot_second) is
> > preserved as bit #20 of the PTE, rather than being cleared. bit #20 of
> > an entry at this level is UNK/SBZP -- so we survive this mistake even if
> > paddr(boot_second) happens to have bit #20 set.
> > 
> > Really this code should use {FIRST,SECOND,THIRD}_SHIFT, and this file
> > already includes asm/page.h so they should be available…
> 
> Ok, I’ll write the V2 to fix them all.

Thanks!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 12:28:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 12:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W336G-0003DF-Bp; Tue, 14 Jan 2014 12:28:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1W336F-0003D7-2O
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 12:28:07 +0000
Received: from [193.109.254.147:20864] by server-13.bemta-14.messagelabs.com
	id 39/81-19374-65D25D25; Tue, 14 Jan 2014 12:28:06 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389702483!10794931!1
X-Originating-IP: [209.85.192.173]
X-SpamReason: No, hits=1.0 required=7.0 tests=DATE_IN_PAST_12_24,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21738 invoked from network); 14 Jan 2014 12:28:05 -0000
Received: from mail-pd0-f173.google.com (HELO mail-pd0-f173.google.com)
	(209.85.192.173)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 12:28:05 -0000
Received: by mail-pd0-f173.google.com with SMTP id p10so8738417pdj.4
	for <xen-devel@lists.xenproject.org>;
	Tue, 14 Jan 2014 04:28:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=NRlwRMOzxahFjvBJyGWdz+qrewQYdEl2vrqLLPscQKE=;
	b=lHiE3pbjrltOlIi0gqbWYCOVtBFDXUNfQeNnp7Z4WrV/f9e8oC2TCTr+aTBVwRLe42
	MIVFV/bkookrtJY1wX1WjNHyHyHei0Dc8ooWX7rS0Gi1eIR4Ww7cSMdzuuFsA5NBySby
	/Dan9DEpoTdTPm3fWAShFyfAaG60AL8wUuYKgkU/2C3xwKzR1H3t4ej7JBGn39ft/VDL
	ZCiNS1sMyPpRvZ469WjI1Yha23aOVnmzUHzE4aRTmNTD15cih2+lFCqAfkgIXxezH54z
	P7nN1gG7+SCa+waZl/zDg63A+6jWgQT9uimoxjNetnD64nJ+XRkTDTgtYI62V1dS/l0B
	GSxw==
X-Received: by 10.66.146.229 with SMTP id tf5mr1493134pab.50.1389702483005;
	Tue, 14 Jan 2014 04:28:03 -0800 (PST)
Received: from localhost ([220.202.153.106])
	by mx.google.com with ESMTPSA id qw8sm924153pbb.27.2014.01.14.04.27.23
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 04:28:02 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 14 Jan 2014 06:19:59 +0800
Message-Id: <1389651599-26562-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: Chen Baozi <baozich@gmail.com>
Subject: [Xen-devel] [PATCH v2] xen/arm{32,
	64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Section shift for level-2 page table should be #21 rather than #20. Besides,
since the {FIRST,SECOND,THIRD}_SHIFT macros are already defined in asm/page.h,
use these macros instead of hard-coded shift values.
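
[Editorial note: the following sketch is an illustration added for this archive, not part of the patch. It shows, in Python, the shift arithmetic the patch corrects; the SECOND_SHIFT value mirrors the asm/page.h definition for a 4K granule, and the addresses are arbitrary examples.]

```python
# Illustration only: why the level-2 ("section") shift must be 21, not 20,
# and where the "- 3" in the slot computations comes from.
SECOND_SHIFT = 21   # a level-2 block entry covers 2MB = 1 << 21

def align_2mb(paddr):
    # Equivalent of: lsr rX, rX, #SECOND_SHIFT ; lsl rX, rX, #SECOND_SHIFT
    # i.e. clear the low 21 bits of the address.
    return (paddr >> SECOND_SHIFT) << SECOND_SHIFT

addr = 0x40123456
assert align_2mb(addr) == 0x40000000          # proper 2MB-aligned base
assert (addr >> 20) << 20 == 0x40100000       # old #20 shift leaves bit 20 set

# Each page-table entry is 8 bytes, so the byte offset of a slot is
# (addr >> SECOND_SHIFT) * 8, which for a 2MB-aligned address equals
# addr >> (SECOND_SHIFT - 3) -- hence the "(SECOND_SHIFT - 3)" slot shifts.
aligned = 0x40600000
assert (aligned >> SECOND_SHIFT) << 3 == aligned >> (SECOND_SHIFT - 3)
```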

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/arm32/head.S | 20 ++++++++++----------
 xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..f3eab89 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -291,14 +291,14 @@ cpu_init_done:
         ldr   r4, =boot_second
         add   r4, r4, r10            /* r1 := paddr (boot_second) */
 
-        lsr   r2, r9, #20            /* Base address for 2MB mapping */
-        lsl   r2, r2, #20
+        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
+        lsl   r2, r2, #SECOND_SHIFT
         orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
         orr   r2, r2, #PT_LOWER(MEM)
 
         /* ... map of vaddr(start) in boot_second */
         ldr   r1, =start
-        lsr   r1, #18                /* Slot for vaddr(start) */
+        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
@@ -307,7 +307,7 @@ cpu_init_done:
                                       * then the mapping was done in
                                       * boot_pgtable above */
 
-        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
+        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
         strd  r2, r3, [r4, r1]       /* Map Xen there */
 1:
 
@@ -339,8 +339,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
         mov   r3, #0
-        lsr   r2, r11, #12
-        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
+        lsr   r2, r11, #THIRD_SHIFT
+        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         orr   r2, r2, #PT_UPPER(DEV_L3)
         orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -353,7 +353,7 @@ paging:
         orr   r2, r2, #PT_UPPER(PT)
         orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
         ldr   r4, =FIXMAP_ADDR(0)
-        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
         strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -365,12 +365,12 @@ paging:
 
         ldr   r1, =boot_second
         mov   r3, #0x0
-        lsr   r2, r8, #21
-        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
+        lsr   r2, r8, #SECOND_SHIFT
+        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
         orr   r2, r2, #PT_UPPER(MEM)
         orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
         ldr   r4, =BOOT_FDT_VIRT_START
-        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
         strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
         dsb
 1:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index bebddf0..5b164e9 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -278,11 +278,11 @@ skip_bss:
         str   x2, [x4, #0]           /* Map it in slot 0 */
 
         /* ... map of paddr(start) in boot_first */
-        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
+        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
         and   x1, x2, 0x1ff          /* x1 := Slot to use */
         cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
 
-        lsl   x2, x2, #30            /* Base address for 1GB mapping */
+        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
         lsl   x1, x1, #3             /* x1 := Slot offset */
@@ -292,23 +292,23 @@ skip_bss:
         ldr   x4, =boot_second
         add   x4, x4, x20            /* x4 := paddr (boot_second) */
 
-        lsr   x2, x19, #20           /* Base address for 2MB mapping */
-        lsl   x2, x2, #20
+        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
+        lsl   x2, x2, #SECOND_SHIFT
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
 
         /* ... map of vaddr(start) in boot_second */
         ldr   x1, =start
-        lsr   x1, x1, #18            /* Slot for vaddr(start) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         str   x2, [x4, x1]           /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
-        lsr   x1, x19, #30           /* Base paddr */
+        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
         cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
                                       * then the mapping was done in
                                       * boot_pgtable or boot_first above */
 
-        lsr   x1, x19, #18           /* Slot for paddr(start) */
+        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
         str   x2, [x4, x1]           /* Map Xen there */
 1:
 
@@ -340,8 +340,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap
         add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
-        lsr   x2, x23, #12
-        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
+        lsr   x2, x23, #THIRD_SHIFT
+        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         mov   x3, #PT_DEV_L3
         orr   x2, x2, x3             /* x2 := 4K dev map including UART */
         str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -354,7 +354,7 @@ paging:
         mov   x3, #PT_PT
         orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
         ldr   x1, =FIXMAP_ADDR(0)
-        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]           /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -364,12 +364,12 @@ paging:
         /* Map the DTB in the boot misc slot */
         cbnz  x22, 1f                /* Only on boot CPU */
 
-        lsr   x2, x21, #21
-        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
+        lsr   x2, x21, #SECOND_SHIFT
+        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
         mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
         orr   x2, x2, x3
         ldr   x1, =BOOT_FDT_VIRT_START
-        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x4 := Slot for BOOT_FDT_VIRT_START */
         str   x2, [x4, x1]           /* Map it in the early fdt slot */
         dsb   sy
 1:
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 12:42:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 12:42:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W33Ja-0004C5-1i; Tue, 14 Jan 2014 12:41:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W33JY-0004C0-LH
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 12:41:52 +0000
Received: from [193.109.254.147:58733] by server-2.bemta-14.messagelabs.com id
	D5/52-00361-F8035D25; Tue, 14 Jan 2014 12:41:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389703309!10770763!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31963 invoked from network); 14 Jan 2014 12:41:50 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 12:41:50 -0000
Received: by mail-wg0-f47.google.com with SMTP id m15so315372wgh.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 14 Jan 2014 04:41:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=h12T8xeCaNCGrn+obZW2gX8YhOVn1itmEi9G/i4aGNY=;
	b=OpXOEtjG71scyFWAkUOYZJTRBnovFqku4KmAOjgcgF9wh09zVNIZkmc3WorbKoKbcx
	9ectRcdlgEz/whqHhtulf7KmSWvLoq5uBUYmOP3b8IVesK7dawMZhRLtCF5nPG4GnB2H
	nfwDqjyuQbkAOrXcMhPws/QG2E1iO4NTzabJmoNcUJ/II3YKpkFDzLQwD552L32f4YO7
	6w0IuaTNmPXEbKcndd6kiKC4MlfE2IeIJVOEwCtNhhWTbAuhXApNWPS090L7ii+IfwY5
	eSwm4/mrc8hiDQkA3IuTIAsYx/218MZwWv1C5j7RW55VsRkC+zpegDSedBUnxXnw95hI
	U5rg==
X-Gm-Message-State: ALoCoQl0z+hal4/Vq5goicdp3Xi1fyv8IFOHg6LIHW5NT1U97Dv0XpvDyVA+lls1Jso2xuzh9bby
X-Received: by 10.194.62.70 with SMTP id w6mr8876496wjr.55.1389703309680;
	Tue, 14 Jan 2014 04:41:49 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x4sm1309245wif.0.2014.01.14.04.41.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 04:41:48 -0800 (PST)
Message-ID: <52D5308B.6020000@linaro.org>
Date: Tue, 14 Jan 2014 12:41:47 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
	<52D4246D.1050400@linaro.org>
	<1389635845.13654.117.camel@kazak.uk.xensource.com>
In-Reply-To: <1389635845.13654.117.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 05:57 PM, Ian Campbell wrote:
> On Mon, 2014-01-13 at 17:37 +0000, Julien Grall wrote:
>> On 01/13/2014 05:10 PM, Ian Campbell wrote:
>>> Hrm, our TLB flush discipline is horribly confused isn't it...
>>>
>>> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
>>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>>>> p2m and TLBs.
>>>>
>>>> Flush TLB entries used by this domain on every PCPU. The flush can also be
>>>> moved out of the loop because:
>>>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
>>>
>>> OK.
>>>
>>> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
>>> worthwhile if that is the case.
>>
>> Will add it.
> 
> Thanks.
> 
>>> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
>>> an INSERT, it's seems very special case)
>>>
>>>>     - INSERT: if valid = 1 that would means with have replaced a
>>>>     page that already belongs to the domain. A VCPU can write on the wrong page.
>>>>     This can append for dom0 with the 1:1 mapping because the mapping is not
>>>>     removed from the p2m.
>>>
>>> "append"? Do you mean "happen"?
>>
>> I meant "happen".
>>
>>>
>>> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
>>> subsequent put_page elsewhere -- do they all contain the correct
>>> flushing? Or do we just leak?
>>
>> As for foreign mapping the INSERT function should be hardened. We don't
> 
> Did you mean "handled"?

I meant both :). Actually we don't have any check in this function, as in
the REMOVE case.

I don't think it's possible to do it for 4.4; we take a reference on the
mapping every time a new entry is added in the p2m.


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> On 01/13/2014 05:10 PM, Ian Campbell wrote:
>>> Hrm, our TLB flush discipline is horribly confused isn't it...
>>>
>>> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
>>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>>>> p2m and TLBs.
>>>>
>>>> Flush TLB entries used by this domain on every PCPU. The flush can also be
>>>> moved out of the loop because:
>>>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
>>>
>>> OK.
>>>
>>> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
>>> worthwhile if that is the case.
>>
>> Will add it.
> 
> Thanks.
> 
>>> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
>>> an INSERT, it's seems very special case)
>>>
>>>>     - INSERT: if valid = 1 that would means with have replaced a
>>>>     page that already belongs to the domain. A VCPU can write on the wrong page.
>>>>     This can append for dom0 with the 1:1 mapping because the mapping is not
>>>>     removed from the p2m.
>>>
>>> "append"? Do you mean "happen"?
>>
>> I meant "happen".
>>
>>>
>>> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
>>> subsequent put_page elsewhere -- do they all contain the correct
>>> flushing? Or do we just leak?
>>
>> As for foreign mapping the INSERT function should be hardened. We don't
> 
> Did you mean "handled"?

I meant both :). Actually, we don't have any check in this function, just
as for the REMOVE case.

I don't think it's possible to do it for 4.4; we take a reference on the
mapping every time a new entry is added in the p2m.


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 12:55:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 12:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W33WL-0004kl-Mq; Tue, 14 Jan 2014 12:55:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W33WK-0004kg-0x
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 12:55:04 +0000
Received: from [85.158.139.211:31492] by server-15.bemta-5.messagelabs.com id
	B5/59-08490-7A335D25; Tue, 14 Jan 2014 12:55:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389704100!9663776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31073 invoked from network); 14 Jan 2014 12:55:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 12:55:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90527310"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 12:54:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 07:54:41 -0500
Message-ID: <1389704080.12434.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 14 Jan 2014 12:54:40 +0000
In-Reply-To: <52D5308B.6020000@linaro.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
	<52D4246D.1050400@linaro.org>
	<1389635845.13654.117.camel@kazak.uk.xensource.com>
	<52D5308B.6020000@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 12:41 +0000, Julien Grall wrote:
> On 01/13/2014 05:57 PM, Ian Campbell wrote:
> > On Mon, 2014-01-13 at 17:37 +0000, Julien Grall wrote:
> >> On 01/13/2014 05:10 PM, Ian Campbell wrote:
> >>> Hrm, our TLB flush discipline is horribly confused isn't it...
> >>>
> >>> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> >>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
> >>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
> >>>> p2m and TLBs.
> >>>>
> >>>> Flush TLB entries used by this domain on every PCPU. The flush can also be
> >>>> moved out of the loop because:
> >>>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> >>>
> >>> OK.
> >>>
> >>> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
> >>> worthwhile if that is the case.
> >>
> >> Will add it.
> > 
> > Thanks.
> > 
> >>> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
> >>> an INSERT, it's seems very special case)
> >>>
> >>>>     - INSERT: if valid = 1 that would means with have replaced a
> >>>>     page that already belongs to the domain. A VCPU can write on the wrong page.
> >>>>     This can append for dom0 with the 1:1 mapping because the mapping is not
> >>>>     removed from the p2m.
> >>>
> >>> "append"? Do you mean "happen"?
> >>
> >> I meant "happen".
> >>
> >>>
> >>> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
> >>> subsequent put_page elsewhere -- do they all contain the correct
> >>> flushing? Or do we just leak?
> >>
> >> As for foreign mapping the INSERT function should be hardened. We don't
> > 
> > Did you mean "handled"?
> 
> I meant both :). Actually we don't have any check in this function as
> for REMOVE case.
> 
> I don't think it's possible to do it for 4.4, we take a reference on the
> mapping every time a new entrie is added in the p2m.

Can we pull the:
                /* TODO: Handle other p2m type */
                if ( p2m_is_foreign(pte.p2m.type) )
                {
                    ASSERT(mfn_valid(mfn));
                    put_page(mfn_to_page(mfn));
                }
out to above the switch and have it be:
                pte = third[third_table_offset(addr)];
                flush = pte.p2m.valid;

                /* TODO: Handle other p2m type */
                if ( pte.p2m.valid && p2m_is_foreign(pte.p2m.type) )
                {
                    ASSERT(mfn_valid(mfn));
                    put_page(mfn_to_page(mfn));
                }
I think that would be acceptable for 4.4, unless there is some
complication I'm not foreseeing?

Ian.	



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 13:20:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 13:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W33v2-00063Y-1L; Tue, 14 Jan 2014 13:20:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W33v0-00063E-8b
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 13:20:34 +0000
Received: from [85.158.139.211:50245] by server-8.bemta-5.messagelabs.com id
	69/0D-29838-1A935D25; Tue, 14 Jan 2014 13:20:33 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389705632!9479149!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5754 invoked from network); 14 Jan 2014 13:20:32 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 13:20:32 -0000
Received: by mail-we0-f169.google.com with SMTP id u57so370468wes.28
	for <xen-devel@lists.xenproject.org>;
	Tue, 14 Jan 2014 05:20:32 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=3NEPi71Fi49xf4pslfSvLfM/J+xdayeIpSmKp8U6pUM=;
	b=NoPyZ1mgBa+CBeDI1HukLaLXrkQefYrpEGAMWJ2UwAqD2Bbw9COvnpfIiQ3AOup9W6
	DlK7lSwqpQcgNulJ4Nu/sboJ95FYUifKnfjd7TE1jjvY9AT/e+5tRSUwjEUmdxXwv7QT
	pIkae8/H//9UQEEbF5HnbfcG7m0le2M3dXWFiVZWfT2MfwgVaXZKb3LL6yA447CYcz2F
	eRbOrdkcJPSuiI5PV68TJ8lPX3/Fd2MoqYCTYdl6l5ymBPe4bi0fNvYLtelfDcO5wawY
	dwiMhqRuTOcwozVTE5r2YtRT6KAYP7NC8GE25GL9iFAMVsbz4IyU9LKH0HGlSIyQqE5l
	5PTA==
X-Gm-Message-State: ALoCoQn962lZj8GycY4kbadrMsc58hHEv1I/+UZvDiMYZ7c69h2urazZN7Lpf7/5TGTt/qvher9O
X-Received: by 10.180.38.11 with SMTP id c11mr20181271wik.60.1389705632457;
	Tue, 14 Jan 2014 05:20:32 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id ea4sm4286490wib.7.2014.01.14.05.20.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 05:20:31 -0800 (PST)
Message-ID: <52D5399D.1030402@linaro.org>
Date: Tue, 14 Jan 2014 13:20:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389633015.13654.109.camel@kazak.uk.xensource.com>
	<52D4246D.1050400@linaro.org>
	<1389635845.13654.117.camel@kazak.uk.xensource.com>
	<52D5308B.6020000@linaro.org>
	<1389704080.12434.18.camel@kazak.uk.xensource.com>
In-Reply-To: <1389704080.12434.18.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 12:54 PM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 12:41 +0000, Julien Grall wrote:
>> On 01/13/2014 05:57 PM, Ian Campbell wrote:
>>> On Mon, 2014-01-13 at 17:37 +0000, Julien Grall wrote:
>>>> On 01/13/2014 05:10 PM, Ian Campbell wrote:
>>>>> Hrm, our TLB flush discipline is horribly confused isn't it...
>>>>>
>>>>> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
>>>>>> The p2m is shared between VCPUs for each domain. Currently Xen only flush
>>>>>> TLB on the local PCPU. This could result to mismatch between the mapping in the
>>>>>> p2m and TLBs.
>>>>>>
>>>>>> Flush TLB entries used by this domain on every PCPU. The flush can also be
>>>>>> moved out of the loop because:
>>>>>>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
>>>>>
>>>>> OK.
>>>>>
>>>>> An ASSERT(!third[third_table_offset(addr)].p2m.valid) might be
>>>>> worthwhile if that is the case.
>>>>
>>>> Will add it.
>>>
>>> Thanks.
>>>
>>>>> (I'm not sure why ALLOCATE can't be replaced by allocation followed by
>>>>> an INSERT, it's seems very special case)
>>>>>
>>>>>>     - INSERT: if valid = 1 that would means with have replaced a
>>>>>>     page that already belongs to the domain. A VCPU can write on the wrong page.
>>>>>>     This can append for dom0 with the 1:1 mapping because the mapping is not
>>>>>>     removed from the p2m.
>>>>>
>>>>> "append"? Do you mean "happen"?
>>>>
>>>> I meant "happen".
>>>>
>>>>>
>>>>> In the non-dom0 1:1 case eventually the page will be freed, I guess by a
>>>>> subsequent put_page elsewhere -- do they all contain the correct
>>>>> flushing? Or do we just leak?
>>>>
>>>> As for foreign mapping the INSERT function should be hardened. We don't
>>>
>>> Did you mean "handled"?
>>
>> I meant both :). Actually we don't have any check in this function as
>> for REMOVE case.
>>
>> I don't think it's possible to do it for 4.4, we take a reference on the
>> mapping every time a new entrie is added in the p2m.
> 
> Can we pull the:
>                 /* TODO: Handle other p2m type */
>                     if ( p2m_is_foreign(pte.p2m.type) )
>                     {
>                         ASSERT(mfn_valid(mfn));
>                         put_page(mfn_to_page(mfn));
>                     }
> out to above the switch and have it be:
>                                 pte = third[third_table_offset(addr)]
>                                 flsuh = pte.p2m.valid
>                                 
>                 /* TODO: Handle other p2m type */
>                                 if (pte.p2m.valid &&
>                                 p2m_is_foreign(pte.p2m.type)
>                                 {
>                                     ASSERT(mfn_valid(mfn));
>                                     put_page(mfn_to_page(mfn));
>                                 }
> I think that would be acceptable for 4.4, unless there is some
> complication I'm not foreseeing?

As discussed IRL, this is the only {get,put}_page for the foreign page in
this domain. So once the foreign domain has released all its references to
this page, the put_page will free it.

In the tiny timeslice before the flush (at the end of the loop), the
page could be reallocated to another domain.
But hopefully we are safe (assuming my patch "xen/arm: correct
flush_tlb_mask behaviour" is also in the tree), because page_alloc will
flush the TLB.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 13:37:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 13:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W34Az-0006mH-DB; Tue, 14 Jan 2014 13:37:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W34Ax-0006mC-AC
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 13:37:04 +0000
Received: from [85.158.139.211:32490] by server-13.bemta-5.messagelabs.com id
	FD/34-11357-E7D35D25; Tue, 14 Jan 2014 13:37:02 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389706621!9688847!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14188 invoked from network); 14 Jan 2014 13:37:01 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 13:37:01 -0000
Received: by mail-we0-f181.google.com with SMTP id u56so381644wes.40
	for <xen-devel@lists.xenproject.org>;
	Tue, 14 Jan 2014 05:37:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=nKKmq/svaz7UlosztBZlsYPhXtVVoxeE6ZACL53ybqU=;
	b=WSoCJIQEzDRumxlZ1iSFpoYgWj6nZY8xAY4yy/UvJCs/akfQW4p7ORjJprKpCoIan+
	i617bpBexSwzTPQIequAryA7TfIjesshLYLnsaaLGuoMwmMACFPxNLoKAjqiJ27PqWto
	t2Uro51ZQVIuMyY7ODjixZ5vpswDsewQW6WDkepRBMS+GuLiKVXHiAzggy+ainbivwK5
	4ksjK9sjCoH3S9XLWwZKFiYhcmzn+MCRldIwxZQzyDF5PpewBNCXyep6JmI/AtSIvrwE
	ranWnDTK1jJ94yxQY7aGmKi/YxM8Y7lC08Rh1RULIy2SysbULdXxZoCfcXquAkKl8O6h
	i/tg==
X-Gm-Message-State: ALoCoQn5cSC9hGy2x/SIrZFRSEZxth8dliIxfOI6Ou9CzI20vQL8hir7AENNyF2rjO2+Xh0x/m9q
X-Received: by 10.180.36.51 with SMTP id n19mr3000196wij.48.1389706621014;
	Tue, 14 Jan 2014 05:37:01 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k10sm552023wjf.11.2014.01.14.05.36.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 14 Jan 2014 05:36:59 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 14 Jan 2014 13:36:55 +0000
Message-Id: <1389706615-9578-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH v3] xen/arm: p2m: Correctly flush TLB in
	create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The p2m is shared between the VCPUs of each domain. Currently Xen only
flushes the TLB on the local PCPU. This could result in a mismatch between
the mappings in the p2m and the TLBs.

Flush the TLB entries used by this domain on every PCPU. The flush can also be
moved out of the loop because:
    - ALLOCATE: only called for dom0 RAM allocation, so the flush is never needed
    - INSERT: if valid = 1 that would mean we have replaced a
    page that already belongs to the domain. A VCPU could write to the wrong page.
    This can happen for dom0 with the 1:1 mapping because the mapping is not
    removed from the p2m.
    - REMOVE: except for grant-table (replace_grant_host_mapping), each
    call to guest_physmap_remove_page is protected by the callers via a
    get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
    the page can't be allocated to another domain until the last put_page.
    - RELINQUISH: the domain is not running anymore so we don't care...

Also avoid leaking a foreign page when the function INSERTs a new mapping
on top of a foreign mapping.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v3:
        - Add an ASSERT in ALLOCATE
        - Fix typo in commit message
        - Move put_page above the switch to avoid leaking foreign page
        when a page is replaced.
    Changes in v2:
        - Switch to the domain for only flush its TLBs entries
        - Move the flush out of the loop

This is a possible bug fix (found by reading the code) for Xen 4.4. I moved the
flush out of the loop, which should be safe (see why in the commit message).
Without this patch, the guest can have stale TLB entries when a VCPU is moved
to another PCPU.

Except for grant-table (I can't find the {get,put}_page in the grant-table
code???), all the callers are protected by a get_page before removing the
page. So if another VCPU tries to access this page before the flush, it will
just read/write the wrong page.

This patch also changes the scope of the flush: instead of flushing all TLB
entries on the current PCPU, Xen flushes the TLB entries for a specific VMID
on every CPU. This should be safe because create_p2m_entries only deals with
a specific domain.

I don't think I've forgotten any case in this function. Let me know if I have.
---
 xen/arch/arm/p2m.c |   56 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 38 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 11f4714..85ca330 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
                      int mattr,
                      p2m_type_t t)
 {
-    int rc, flush;
+    int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *first = NULL, *second = NULL, *third = NULL;
     paddr_t addr;
@@ -246,10 +246,15 @@ static int create_p2m_entries(struct domain *d,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
     unsigned long count = 0;
+    unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
+    lpae_t pte;
 
     spin_lock(&p2m->lock);
 
+    if ( d != current->domain )
+        p2m_load_VTTBR(d);
+
     addr = start_gpaddr;
     while ( addr < end_gpaddr )
     {
@@ -316,15 +321,31 @@ static int create_p2m_entries(struct domain *d,
             cur_second_offset = second_table_offset(addr);
         }
 
-        flush = third[third_table_offset(addr)].p2m.valid;
+        pte = third[third_table_offset(addr)];
+
+        flush |= pte.p2m.valid;
+
+        /* TODO: Handle other p2m type
+         *
+         * It's safe to do the put_page here because page_alloc will
+         * flush the TLBs if the page is reallocated before the end of
+         * this loop.
+         */
+        if ( pte.p2m.valid && p2m_is_foreign(pte.p2m.type) )
+        {
+            unsigned long mfn = pte.p2m.base;
+
+            ASSERT(mfn_valid(mfn));
+            put_page(mfn_to_page(mfn));
+        }
 
         /* Allocate a new RAM page and attach */
         switch (op) {
             case ALLOCATE:
                 {
                     struct page_info *page;
-                    lpae_t pte;
 
+                    ASSERT(!pte.p2m.valid);
                     rc = -ENOMEM;
                     page = alloc_domheap_page(d, 0);
                     if ( page == NULL ) {
@@ -339,8 +360,7 @@ static int create_p2m_entries(struct domain *d,
                 break;
             case INSERT:
                 {
-                    lpae_t pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT,
-                                                  mattr, t);
+                    pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
                     write_pte(&third[third_table_offset(addr)], pte);
                     maddr += PAGE_SIZE;
                 }
@@ -348,9 +368,6 @@ static int create_p2m_entries(struct domain *d,
             case RELINQUISH:
             case REMOVE:
                 {
-                    lpae_t pte = third[third_table_offset(addr)];
-                    unsigned long mfn = pte.p2m.base;
-
                     if ( !pte.p2m.valid )
                     {
                         count++;
@@ -359,13 +376,6 @@ static int create_p2m_entries(struct domain *d,
 
                     count += 0x10;
 
-                    /* TODO: Handle other p2m type */
-                    if ( p2m_is_foreign(pte.p2m.type) )
-                    {
-                        ASSERT(mfn_valid(mfn));
-                        put_page(mfn_to_page(mfn));
-                    }
-
                     memset(&pte, 0x00, sizeof(pte));
                     write_pte(&third[third_table_offset(addr)], pte);
                     count++;
@@ -373,9 +383,6 @@ static int create_p2m_entries(struct domain *d,
                 break;
         }
 
-        if ( flush )
-            flush_tlb_all_local();
-
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
         if ( op == RELINQUISH && count >= 0x2000 )
         {
@@ -392,6 +399,16 @@ static int create_p2m_entries(struct domain *d,
         addr += PAGE_SIZE;
     }
 
+    if ( flush )
+    {
+        /* At the beginning of the function, Xen is updating VTTBR
+         * with the domain where the mappings are created. In this
+         * case it's only necessary to flush TLBs on every CPUs with
+         * the current VMID (our domain).
+         */
+        flush_tlb();
+    }
+
     if ( op == ALLOCATE || op == INSERT )
     {
         unsigned long sgfn = paddr_to_pfn(start_gpaddr);
@@ -409,6 +426,9 @@ out:
     if (second) unmap_domain_page(second);
     if (first) unmap_domain_page(first);
 
+    if ( d != current->domain )
+        p2m_load_VTTBR(current->domain);
+
     spin_unlock(&p2m->lock);
 
     return rc;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:33:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W353O-00015x-NW; Tue, 14 Jan 2014 14:33:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W321o-00007N-1l
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 11:19:28 +0000
Received: from [193.109.254.147:48203] by server-11.bemta-14.messagelabs.com
	id E7/01-20576-F3D15D25; Tue, 14 Jan 2014 11:19:27 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389698363!9247733!1
X-Originating-IP: [98.139.213.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 634 invoked from network); 14 Jan 2014 11:19:24 -0000
Received: from nm27-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm27-vm1.bullet.mail.bf1.yahoo.com) (98.139.213.148)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Jan 2014 11:19:24 -0000
Received: from [66.196.81.171] by nm27.bullet.mail.bf1.yahoo.com with NNFMP;
	14 Jan 2014 11:19:23 -0000
Received: from [98.139.212.246] by tm17.bullet.mail.bf1.yahoo.com with NNFMP;
	14 Jan 2014 11:19:23 -0000
Received: from [127.0.0.1] by omp1055.mail.bf1.yahoo.com with NNFMP;
	14 Jan 2014 11:19:23 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 601044.53087.bm@omp1055.mail.bf1.yahoo.com
Received: (qmail 19640 invoked by uid 60001); 14 Jan 2014 11:19:23 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1389698363; bh=+aTDlq6E51Af1s5rQmbpUtNQUde1P8MJY5T4VSxvahM=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=Roa+Z3qaiEcXHnA0goTuoHxhHvMk0lcPb3kt9S4VJ/hVZlyDsnfmrJwSLZwTDhjRUtOj+BX6ZvcGqp9RsUL0dBVi3KsH7ezEjICqnvAjj/KSKFg2Ij9gBfvUcPJ8GHO6J/shCC6QmqGpcWMsb3KEnZPCLrdMt1N8drb4ceLDw8k=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=Mrf5NcHfwYI7sm4LhX39lK9cKQB23pPz9xeFtyTsBEU4HUTD0VbTZjSH2MKyanZp1SpcH8FWBzrzEtqLuZebtU0oC51vqqsGq7MtuVmY3GeIr9bAfBOYIPGxPUiOlMKVOsKndheB+g3cTNaGq3byS40D4vqqB1Y12E2l3hPYYMQ=;
X-YMail-OSG: ZYsOX80VM1kjEPWc4mHrRG9bEG6DjM80nfBufiCCN3D7XIh
	4cYwMONrOFJnNI0ycMBhhECTkw2CBsuvJgtTPXYczOdly3WKgyVMp06qq1m9
	3n5DEvCT6LuhLFzhx7DnDXG_5w8EXp_GsrUx1LyHMfnNip9JwwsCFmGVbusc
	rdeYvAEr5gU1dWQHLQtmKv95LEe_cAGiHBnErheWDXvNCVgZ1Qb2uYWb3.d3
	6jaoaWYJQDKamM__UcEKRCPdHPZ_0vFIiayHF5wL42mqor.gFCLJeJBJC6Uk
	E8RfNpfvyo3a5ZV9yD1qE6CceI9oWkfrtPN7xtqQOOpAwsZt7pTq2Rl0uqoG
	1uebQDz0TENhPh3LVlYs_SUOuMrvGDXfiICdOSkE_U3NGuq8Z91UK68q7iIy
	S2RR.KS2Q4R0I8o4CWcJatfv9INM5i5SLHN6YlgKUPC.BrHy0tAs3qOwMM03
	FDn.Y9klABeVlmibVhYO21tJ.y1GzSd2AJYkR2NSga3bxxgd7Xh7qxXSjobo
	JI.LCStwBHwt45jvOkv_p_xyr
Received: from [192.227.225.3] by web160206.mail.bf1.yahoo.com via HTTP;
	Tue, 14 Jan 2014 03:19:23 PST
X-Rocket-MIMEInfo: 002.001,
	Y2FuIGkgd3JpdGUgZGlydHkgcGFnZXMgaW50byBmaWxlIHhlbmQubG9nPwpEb2VzIGFueW9uZSBoZXJlIGtub3cgaG93IHRoaXMgY2hhbmdlPwpEb3dudGltZSBpcyB0aW1lIGVuZCByb3VuZC4gaSBwcm9ibGVtIGZvciBwcmludCBpbiB0aGlzIGluZm9ybWF0aW9uLiBpIHVzZWQgcHJpbnRrIGZvciBvdXRwdXQsIGJ1dCBpIGRvbid0IGtub3cgd2hlcmUgaXPCoGluZm9ybWF0aW9uwqBzdG9yZWQ_IDotKAoKQWRlbCBBbWFuaQpNLlNjLiBDYW5kaWRhdGVAQ29tcHV0ZXIgRW5naW5lZXJpbmcgRGVwYXJ0bWVudCwBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.172.614
References: <1389387691.14476.YahooMailNeo@web160201.mail.bf1.yahoo.com>	
	<1389462013.51486.YahooMailNeo@web160204.mail.bf1.yahoo.com>	
	<CAOtp4Kof=6kLsyDqVN1yaei213bFUmLGYvf_iqvYcvdwpZunPQ@mail.gmail.com>	
	<1389611995.25477.YahooMailNeo@web160201.mail.bf1.yahoo.com>	
	<CAOtp4Kqwye7tHAzj0-JLyeZ1-vvgp9ji76yVZ+Wti89Y8RUFrQ@mail.gmail.com>
	<1389692560.9887.5.camel@kazak.uk.xensource.com>
Message-ID: <1389698363.98542.YahooMailNeo@web160206.mail.bf1.yahoo.com>
Date: Tue, 14 Jan 2014 03:19:23 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	Kai Huang <dev.kai.huang@gmail.com>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <1389692560.9887.5.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Tue, 14 Jan 2014 14:33:17 +0000
Subject: Re: [Xen-devel] Help about live migration VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7678704925951391585=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7678704925951391585==
Content-Type: multipart/alternative; boundary="-554184862-772353409-1389698363=:98542"

---554184862-772353409-1389698363=:98542
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

Can I write dirty pages into the file xend.log?
Does anyone here know how this changes?
Downtime is the time at the end of a round. I have a problem printing this
information. I used printk for output, but I don't know where the
information is stored? :-(

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir
---554184862-772353409-1389698363=:98542--


--===============7678704925951391585==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7678704925951391585==--


X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7678704925951391585=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7678704925951391585==
Content-Type: multipart/alternative; boundary="-554184862-772353409-1389698363=:98542"

---554184862-772353409-1389698363=:98542
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

Can I write dirty pages into the file xend.log?
Does anyone here know how this changes?
Downtime is the time of the last round. I have a problem printing this information. I used printk for output, but I don't know where this information is stored? :-(

Adel Amani
M.Sc. Candidate @ Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir
---554184862-772353409-1389698363=:98542--


--===============7678704925951391585==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7678704925951391585==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 14:44:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Dj-0001dT-3i; Tue, 14 Jan 2014 14:43:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35Di-0001dO-Bb
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:43:58 +0000
Received: from [85.158.139.211:64054] by server-13.bemta-5.messagelabs.com id
	22/1C-11357-D2D45D25; Tue, 14 Jan 2014 14:43:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389710635!9702647!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14678 invoked from network); 14 Jan 2014 14:43:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:43:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92687591"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:43:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:43:50 -0500
Message-ID: <1389710629.12434.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dave Scott <Dave.Scott@citrix.com>
Date: Tue, 14 Jan 2014 14:43:49 +0000
In-Reply-To: <6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
References: <20140111233325.GA30303@dark.recoil.org>
	<6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	'Anil Madhavapeddy' <anil@recoil.org>
Subject: Re: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific
 functions behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-12 at 19:17 +0000, Dave Scott wrote:
> Hi Anil,
> 
> Thanks for getting oxenstored on ARM working!
> 
> I'm happy with a simple 'Failure not implemented' exception for the moment. I think that once we're using the libxl bindings everywhere we can probably remove these libxc bindings anyway.
> 
> Acked-by: David Scott <dave.scott@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Release hat: This is pretty tightly targeted, and the worst that could
happen is a build failure, which is generally pretty easy to detect
before a release, although perhaps less so in the case of ocaml which
isn't enabled by everyone.

Although I have two misgivings in that regard:

> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index f5cf0ed..ff29b47 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value
> > xch, value domid,  {
> >  	CAMLparam4(xch, domid, input, config);
> >  	CAMLlocal2(array, tmp);

Aren't these variables now unused? Does the compiler not complain about
this (with -Werror => build failure)?

Perhaps CAMLlocal2 both defines and references the variables keeping
this issue at bay?

> > +#if defined(__i386__) || defined(__x86_64__)
> >  	int r;
> >  	unsigned int c_input[2];
> >  	char *c_config[4], *out_config[4];
> > @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value
> > xch, value domid,
> >  			 c_input, (const char **)c_config, out_config);
> >  	if (r < 0)
> >  		failwith_xc(_H(xch));
> > +#else
> > +	caml_failwith("xc_domain_cpuid_set: not implemented");
> > #endif
> >  	CAMLreturn(array);

In the !__i386__ && !__x86_64__ case this code is unreachable, right,
because caml_failwith is marked Noreturn?

Is there any chance that some compiler version might pick up on this and
complain about the dead code? Or perhaps complain about the
uninitialised use of array?

I suppose putting the CAMLreturn inside the x86 case runs the opposite
risk, of a compiler which doesn't pay proper attention to Noreturn and
therefore thinks we are reaching the end of a non-void function? Would
CAMLreturn(unit) be appropriate in that case?

> >  }
> > 
> >  CAMLprim value stub_xc_domain_cpuid_apply_policy(value xch, value
> > domid)  {
> >  	CAMLparam2(xch, domid);
> > +#if defined(__i386__) || defined(__x86_64__)
> >  	int r;
> > 
> >  	r = xc_cpuid_apply_policy(_H(xch), _D(domid));
> >  	if (r < 0)
> >  		failwith_xc(_H(xch));
> > +#else
> > +	caml_failwith("xc_domain_cpuid_apply_policy: not implemented");
> > #endif
> >  	CAMLreturn(Val_unit);

Unreached?

> >  }
> > 
> > @@ -760,6 +768,7 @@ CAMLprim value stub_xc_cpuid_check(value xch,
> > value input, value config)  {
> >  	CAMLparam3(xch, input, config);
> >  	CAMLlocal3(ret, array, tmp);

Unused?

> > +#if defined(__i386__) || defined(__x86_64__)
> >  	int r;
> >  	unsigned int c_input[2];
> >  	char *c_config[4], *out_config[4];
> > @@ -792,6 +801,9 @@ CAMLprim value stub_xc_cpuid_check(value xch,
> > value input, value config)
> >  	Store_field(ret, 0, Val_bool(r));
> >  	Store_field(ret, 1, array);
> > 
> > +#else
> > +	caml_failwith("xc_domain_cpuid_check: not implemented");
> > #endif
> >  	CAMLreturn(ret);

Unreached?

> >  }
> > 
> > --
> > 1.8.1.2
> > 
> > 
> > --
> > Anil Madhavapeddy                                 http://anil.recoil.org
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:44:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:44:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Dx-0001ea-KV; Tue, 14 Jan 2014 14:44:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35Dv-0001eI-Nb
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 14:44:11 +0000
Received: from [85.158.143.35:49221] by server-3.bemta-4.messagelabs.com id
	B8/0B-32360-B3D45D25; Tue, 14 Jan 2014 14:44:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389710649!11676657!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20697 invoked from network); 14 Jan 2014 14:44:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:44:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90567901"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:44:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:44:07 -0500
Message-ID: <1389710646.12434.66.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 14 Jan 2014 14:44:06 +0000
In-Reply-To: <1389706615-9578-1-git-send-email-julien.grall@linaro.org>
References: <1389706615-9578-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v3] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 13:36 +0000, Julien Grall wrote:
> The p2m is shared between the VCPUs of each domain. Currently Xen only flushes
> the TLB on the local PCPU. This could result in a mismatch between the mappings
> in the p2m and the TLBs.
> 
> Flush TLB entries used by this domain on every PCPU. The flush can also be
> moved out of the loop because:
>     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
>     - INSERT: if valid = 1 that would mean we have replaced a
>     page that already belongs to the domain. A VCPU can write on the wrong page.
>     This can happen for dom0 with the 1:1 mapping because the mapping is not
>     removed from the p2m.
>     - REMOVE: except for grant-table (replace_grant_host_mapping), each
>     call to guest_physmap_remove_page are protected by the callers via a
>         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
>     the page can't be allocated for another domain until the last put_page.
>     - RELINQUISH : the domain is not running anymore so we don't care...
> 
> Also avoid leaking a foreign page if the function INSERTs a new mapping
> on top of a foreign mapping.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Release hat: There are two major issues here, one is not broadcasting
the TLB flush, which is a potential security issue (another VCPU can
keep accessing a page after it is freed). The other is a potential DoS
by leaking a reference on a foreign page, which would stop that domain
from ever being destroyed.

Either of these two issues would be enough to justify taking this change
for 4.4.

We are cutting rc2 at the moment, I will apply after that is out the
way.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:50:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35KD-0002H8-Fl; Tue, 14 Jan 2014 14:50:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35KA-0002Fi-6Q
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:50:38 +0000
Received: from [85.158.143.35:52055] by server-2.bemta-4.messagelabs.com id
	A5/44-11386-6BE45D25; Tue, 14 Jan 2014 14:50:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389711028!11670414!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21499 invoked from network); 14 Jan 2014 14:50:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:50:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90570887"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:50:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:50:15 -0500
Message-ID: <1389711014.12434.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Tue, 14 Jan 2014 14:50:14 +0000
In-Reply-To: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> device assignment if PoD is enabled.").
> 
> This change is restricted to HVM guests, as only HVM was relevant in
> the Xend counterpart. We're late in the release cycle, so the change
> should only do what's necessary. We can revisit this if we need to do
> the same thing for PV guests in the future.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Release hat: The risk here is of a false positive detecting whether PoD
would be used and therefore refusing to start a domain. However Wei
directed me earlier on to the code in setup_guest which sets
XENMEMF_populate_on_demand and I believe it is using the same logic.

The benefit of this is that it will stop people starting a domain in an
invalid configuration -- but what is the downside here? Is it an
unhandled IOMMU fault or another host-fatal error? That would make the
argument for taking this patch pretty strong. On the other hand if the
failure were simply to kill this domain, that would be a less serious
issue and I'd be in two minds, mainly due to George not being here to
confirm that the pod_enabled logic is correct (although if he were here
I wouldn't be wrestling with this question at all ;-)).

I'm leaning towards taking this fix, but I'd really like to know what
the current failure case looks like.

Ian.

> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> v2: fix comment
> ---
>  tools/libxl/libxl_create.c |   20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index e03bb55..61437de 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -706,6 +706,7 @@ static void initiate_domain_create(libxl__egc *egc,
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      uint32_t domid;
>      int i, ret;
> +    bool pod_enabled = false;
>  
>      /* convenience aliases */
>      libxl_domain_config *const d_config = dcs->guest_config;
> @@ -714,6 +715,25 @@ static void initiate_domain_create(libxl__egc *egc,
>  
>      domid = 0;
>  
> +    /* If target_memkb is smaller than max_memkb, the subsequent call
> +     * to libxc when building HVM domain will enable PoD mode.
> +     */
> +    pod_enabled = (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM) &&
> +        (d_config->b_info.target_memkb < d_config->b_info.max_memkb);
> +
> +    /* We cannot have PoD and PCI device assignment at the same time
> +     * for HVM guests. It was reported that the IOMMU cannot work with
> +     * PoD enabled because it needs the entire page table populated for
> +     * the guest. To stay on the safe side, we disable PCI device
> +     * assignment when PoD is enabled.
> +     */
> +    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
> +        d_config->num_pcidevs && pod_enabled) {
> +        ret = ERROR_INVAL;
> +        LOG(ERROR, "PCI device assignment for HVM guest is not allowed when PoD is enabled");
> +        goto error_out;
> +    }
> +
>      ret = libxl__domain_create_info_setdefault(gc, &d_config->c_info);
>      if (ret) goto error_out;
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:53:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35ND-0002Pp-1B; Tue, 14 Jan 2014 14:53:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W35NB-0002Pg-6i
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:53:45 +0000
Received: from [85.158.143.35:23140] by server-3.bemta-4.messagelabs.com id
	F3/6D-32360-87F45D25; Tue, 14 Jan 2014 14:53:44 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389711223!11665657!1
X-Originating-IP: [81.169.146.223]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13299 invoked from network); 14 Jan 2014 14:53:43 -0000
Received: from mo4-p04-ob.smtp.rzone.de (HELO mo4-p04-ob.smtp.rzone.de)
	(81.169.146.223)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Jan 2014 14:53:43 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389711223; l=1763;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-AUTH:X-RZG-CLASS-ID;
	bh=s7DY3zo/7pbNmKk4QY+IujiJgoI=;
	b=rvkv8IJW8qDQ6MjsgYbYDgwJhK1MkSoh6klIBI0Gdwvo3TjtqNWLUQXZwL0r4vxJylq
	FARvO+vH7hcmtKyuIG22pTTRoKm0zFC9wx+zcdBNw5heJNSU9MgEYGC8JQ3p5/ORRFH1J
	P0UMotolPZoTHZfMFwsfQtORXMh+uzJYMSE=
X-RZG-CLASS-ID: mo04
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWJ4Kkzc/qnW2/7rNbk=
Received: from probook.site (ip-80-226-24-11.vodafone-net.de [80.226.24.11])
	by smtp.strato.de (RZmta 32.17 SBL|AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 403b1bq0EErW5xy ; 
	Tue, 14 Jan 2014 15:53:32 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 1DD735025A; Tue, 14 Jan 2014 15:53:28 +0100 (CET)
Date: Tue, 14 Jan 2014 15:53:28 +0100
From: Olaf Hering <olaf@aepfle.de>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140114145328.GA12888@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
	<52D3F535020000780011311B@nat28.tlf.novell.com>
	<52D3EE14.3080609@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3EE14.3080609@citrix.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
 blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, David Vrabel wrote:

> Can we have a patch to blkif.h that clarifies this?

What about this change?

diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 84eb7fd..56e2faa 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -194,6 +194,7 @@
  * discard-secure
  *      Values:         0/1 (boolean)
  *      Default Value:  0
+ *      Notes:          10
  *
  *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
  *      requests with the BLKIF_DISCARD_SECURE flag set.
@@ -323,9 +324,10 @@
  *     For full interoperability, block front and backends should publish
  *     identical ring parameters, adjusted for unit differences, to the
  *     XenStore nodes used in both schemes.
- * (4) Devices that support discard functionality may internally allocate
- *     space (discardable extents) in units that are larger than the
- *     exported logical block size.
+ * (4) Devices that support discard functionality may internally allocate space
+ *     (discardable extents) in units that are larger than the exported logical
+ *     block size. The properties discard-granularity and discard-alignment may
+ *     be present if the backing device has such requirements.
  * (5) The discard-alignment parameter allows a physical device to be
  *     partitioned into virtual devices that do not necessarily begin or
  *     end on a discardable extent boundary.
@@ -344,6 +346,8 @@
  *     grants that can be persistently mapped in the frontend driver, but
  *     due to the frontent driver implementation it should never be bigger
  *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
+ *(10) The discard-secure property may be present and will be set to 1 if the
+ *     backing device supports secure discard.
  */
 
 /*


Olaf


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:54:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Nz-0002U2-KI; Tue, 14 Jan 2014 14:54:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W35Nx-0002Tr-6z
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:54:33 +0000
Received: from [85.158.143.35:33232] by server-1.bemta-4.messagelabs.com id
	67/5D-02132-8AF45D25; Tue, 14 Jan 2014 14:54:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389711270!11679186!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13354 invoked from network); 14 Jan 2014 14:54:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:54:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92692360"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:54:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:54:14 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W35Ne-0006D0-Kd;
	Tue, 14 Jan 2014 14:54:14 +0000
Message-ID: <52D54F96.2060607@citrix.com>
Date: Tue, 14 Jan 2014 14:54:14 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
	<1389711014.12434.71.camel@kazak.uk.xensource.com>
In-Reply-To: <1389711014.12434.71.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 14:50, Ian Campbell wrote:
> On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
>> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
>> device assignment if PoD is enabled.").
>>
>> This change is restricted to HVM guests, as only HVM was relevant in
>> the Xend counterpart. We're late in the release cycle, so the change
>> should only do what's necessary. We can revisit this if we need to do
>> the same thing for PV guests in the future.
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Release hat: The risk here is of a false positive detecting whether PoD
> would be used and therefore refusing to start a domain. However Wei
> directed me earlier on to the code in setup_guest which sets
> XENMEMF_populate_on_demand and I believe it is using the same logic.
>
> The benefit of this is that it will stop people starting a domain in an
> invalid configuration -- but what is the downside here? Is it an
> unhandled IOMMU fault or another host-fatal error? That would make the
> argument for taking this patch pretty strong. On the other hand if the
> failure were simply to kill this domain, that would be a less serious
> issue and I'd be in two minds, mainly due to George not being here to
> confirm that the pod_enabled logic is correct (although if he were here
> I wouldn't be wrestling with this question at all ;-)).
>
> I'm leaning towards taking this fix, but I'd really like to know what
> the current failure case looks like.
>
> Ian.

The answer is likely hardware specific.

An IOMMU fault (however handled by Xen) will result in a master abort on
the DMA transaction for the PCI device which has suffered the fault.
That device can then do anything from continuing blindly to raising an
NMI via IOCHK/SERR, which would likely be fatal to the entire server.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:54:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Nz-0002U2-KI; Tue, 14 Jan 2014 14:54:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W35Nx-0002Tr-6z
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:54:33 +0000
Received: from [85.158.143.35:33232] by server-1.bemta-4.messagelabs.com id
	67/5D-02132-8AF45D25; Tue, 14 Jan 2014 14:54:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389711270!11679186!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13354 invoked from network); 14 Jan 2014 14:54:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:54:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92692360"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:54:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:54:14 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W35Ne-0006D0-Kd;
	Tue, 14 Jan 2014 14:54:14 +0000
Message-ID: <52D54F96.2060607@citrix.com>
Date: Tue, 14 Jan 2014 14:54:14 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
	<1389711014.12434.71.camel@kazak.uk.xensource.com>
In-Reply-To: <1389711014.12434.71.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 14:50, Ian Campbell wrote:
> On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
>> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
>> device assignment if PoD is enabled.").
>>
>> This change is restricted to HVM guest, as only HVM is relevant in the
>> counterpart in Xend. We're late in release cycle so the change should
>> only do what's necessary. Probably we can revisit it if we need to do
>> the same thing for PV guest in the future.
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Release hat: The risk here is of a false positive detecting whether PoD
> would be used and therefore refusing to start a domain. However Wei
> directed me earlier on to the code in setup_guest which sets
> XENMEMF_populate_on_demand and I believe it is using the same logic.
>
> The benefit of this is that it will stop people starting a domain in an
> invalid configuration -- but what is the downside here? Is it an
> unhandled IOMMU fault or another host-fatal error? That would make the
> argument for taking this patch pretty strong. On the other hand if the
> failure were simply to kill this domain, that would be a less serious
> issue and I'd be in two minds, mainly due to George not being here to
> confirm that the pod_enabled logic is correct (although if he were here
> I wouldn't be wrestling with this question at all ;-)).
>
> I'm leaning towards taking this fix, but I'd really like to know what
> the current failure case looks like.
>
> Ian.

The answer is likely hardware specific.

An IOMMU fault (however handled by Xen) will result in a master abort on
the DMA transaction for the PCI device which suffered the fault.
That device can then do anything from continuing blindly to issuing an
NMI via IOCHK/SERR, which will likely be fatal to the entire server.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 14:57:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 14:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35R0-0002i1-Nb; Tue, 14 Jan 2014 14:57:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W35Qy-0002hm-Hj
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 14:57:40 +0000
Received: from [193.109.254.147:12019] by server-10.bemta-14.messagelabs.com
	id 53/D8-20752-36055D25; Tue, 14 Jan 2014 14:57:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389711458!10739435!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11176 invoked from network); 14 Jan 2014 14:57:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 14:57:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92693851"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:57:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:57:37 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W35Qv-0006GN-8W;
	Tue, 14 Jan 2014 14:57:37 +0000
Message-ID: <52D55061.2020900@citrix.com>
Date: Tue, 14 Jan 2014 14:57:37 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>, Ian Campbell
	<Ian.Campbell@citrix.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

As part of XenServer's attempt to move to a 64bit dom0, we have
encountered a sizeable flaw in xc_domain_{save,restore}().

Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:

xc: detail: xc_domain_restore: starting restore of new domid 1
xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
xc: error: Couldn't allocate p2m_frame_list array: Internal error
xc: detail: Restore exit of domid 1 with rc=1

This is caused by

RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))

where sizeof(unsigned long) is different between the source and destination.
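For context, a width-independent encoding would sidestep the problem entirely. The sketch below is illustrative only (write_u64le/read_u64le are invented helper names, not libxc API): it moves exactly 8 bytes in a defined byte order, so a 32bit writer and a 64bit reader agree on the layout regardless of their native sizeof(unsigned long).

```c
#include <stdint.h>

/* Hypothetical fixed-width replacement for the bitness-dependent
 * RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long)): always
 * encode the field as exactly 8 little-endian bytes, independent of
 * the native word size on either end of the stream. */
static void write_u64le(uint8_t buf[8], uint64_t v)
{
    for (int i = 0; i < 8; i++)
        buf[i] = (uint8_t)(v >> (8 * i));
}

static uint64_t read_u64le(const uint8_t buf[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v |= (uint64_t)buf[i] << (8 * i);
    return v;
}
```

Any field wider than 32 bits on a 64bit toolstack would need the same treatment.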


It is unreasonable for the format of the migration stream to rely on the
bitness of the toolstack, which should be completely transparent as far
as "motion of a VM" is concerned.  Furthermore, the same issue affects
suspend/resume, where the stream is written to a file in between.

A quick grep across the code shows several other items in the migration
stream which depend on toolstack bitness.

There is no way to divine whether the far side of the migration stream
is 32 or 64 bit, which is now vital information required to read the
stream correctly.

As a result, it is not obvious how best to fix this with backwards
compatibility in mind.
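One conceivable backwards-compatible heuristic (illustrative only, not an existing or proposed libxc fix, and guess_p2m_size is an invented name): when a 64bit restorer reads 8 bytes from a 32bit sender, the high word contains the first 4 bytes of the *next* stream field, so an implausible top half betrays a 32bit sender. The p2m_size = ffffffff00010000 from the log above decodes this way to a real p2m_size of 0x10000.

```c
#include <stdint.h>

/* Illustrative heuristic only: a sane p2m_size is a frame count and
 * fits easily in 32 bits, so a non-zero top half of the 8 bytes read
 * means the sender was a 32bit toolstack and the high 4 bytes really
 * belong to the following field in the stream. */
static uint64_t guess_p2m_size(uint64_t raw, int *sender_is_32bit)
{
    *sender_is_32bit = (raw >> 32) != 0;
    return *sender_is_32bit ? (raw & 0xffffffffu) : raw;
}
```

A real fix would still have to rewind or re-split the over-read bytes, which is exactly why this is awkward to retrofit.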

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:00:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35TX-000331-6w; Tue, 14 Jan 2014 15:00:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TL-00032J-29
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:07 +0000
Received: from [85.158.143.35:25480] by server-1.bemta-4.messagelabs.com id
	46/77-02132-6F055D25; Tue, 14 Jan 2014 15:00:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389711604!11680901!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8412 invoked from network); 14 Jan 2014 15:00:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694647"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:46 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T0-0006J6-BW;
	Tue, 14 Jan 2014 14:59:46 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:24 +0100
Message-ID: <1389711582-66908-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 02/20] xen: add macro to detect if running
	as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/xen-os.h |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index c7474d8..e8a5a99 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -82,6 +82,13 @@ xen_hvm_domain(void)
 	return (xen_domain_type == XEN_HVM_DOMAIN);
 }
 
+static inline int
+xen_initial_domain(void)
+{
+	return (xen_domain() && HYPERVISOR_start_info &&
+	        HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
+}
+
 #ifndef xen_mb
 #define xen_mb() mb()
 #endif
-- 
1.7.7.5 (Apple Git-26)

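For readers outside the FreeBSD tree, the predicate this patch adds can be exercised standalone. In the sketch below the kernel-side symbols are stubbed so the logic runs in userspace; the stub bodies are invented for the demo, while the SIF_INITDOMAIN value matches the Xen public interface headers.

```c
/* Standalone sketch of the xen_initial_domain() predicate from the
 * patch above.  xen_domain() and HYPERVISOR_start_info are stubs here;
 * in the real kernel they come from the Xen support code. */
#define SIF_INITDOMAIN (1 << 1)  /* from xen/interface/xen.h */

struct start_info { unsigned long flags; };

static struct start_info stub_si = { .flags = SIF_INITDOMAIN };
static struct start_info *HYPERVISOR_start_info = &stub_si;

static int xen_domain(void) { return 1; }  /* stub: pretend we run on Xen */

static int
xen_initial_domain(void)
{
    return (xen_domain() && HYPERVISOR_start_info &&
            HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
}
```

The HYPERVISOR_start_info check matters: on HVM guests the start_info page may be absent, so the pointer guard keeps the predicate safe to call from shared code paths.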

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:00:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35TO-00032W-Kx; Tue, 14 Jan 2014 15:00:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TJ-00031o-S0
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:06 +0000
Received: from [193.109.254.147:41118] by server-10.bemta-14.messagelabs.com
	id 48/CC-20752-5F055D25; Tue, 14 Jan 2014 15:00:05 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389711603!10793030!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13110 invoked from network); 14 Jan 2014 15:00:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694623"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:45 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35Sz-0006J6-7G;
	Tue, 14 Jan 2014 14:59:45 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:22 +0100
Message-ID: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v10 00/20] FreeBSD PVH DomU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series is a split of the previous patch "Xen x86 DomU PVH 
support", with the aim to make the review of the code easier.

The series can also be found on my git repo:

git://xenbits.xen.org/people/royger/freebsd.git pvh_v10

or

http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/pvh_v10

PVH mode is basically a PV guest inside an HVM container, and shares
a great amount of code with PVHVM. The main difference is the way the
guest is started: PVH uses the PV start sequence, jumping directly
into the kernel entry point in long mode and with page tables already
set up. The main work of this patch consists of setting up the
environment as similarly as possible to what native FreeBSD expects,
and then adding hooks to the PV ops where necessary.

This new version of the series (v10) addresses the comments on the
previously posted version (v9). Major changes between v9 and v10:

 * Add an identify routine to xenpv instead of attaching it manually 
   from the Xen nexus.
 * Remove bus routines from xenpci (devices are now attached to xenpv 
   instead).
 * Add __printflike modifier to xc_printf.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:00:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Tl-00033d-Jf; Tue, 14 Jan 2014 15:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TK-00032I-Q2
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:07 +0000
Received: from [193.109.254.147:41262] by server-11.bemta-14.messagelabs.com
	id B0/29-20576-6F055D25; Tue, 14 Jan 2014 15:00:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389711603!10793030!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13208 invoked from network); 14 Jan 2014 15:00:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694644"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:45 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35Sz-0006J6-OP;
	Tue, 14 Jan 2014 14:59:45 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:23 +0100
Message-ID: <1389711582-66908-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_01/20=5D_xen=3A_add_PV/PVH_ker?=
	=?utf-8?q?nel_entry_point?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHRoZSBQVi9QVkggZW50cnkgcG9pbnQgYW5kIHRoZSBsb3cgbGV2ZWwgZnVuY3Rpb25zIGZv
ciBQVkgKaW5pdGlhbGl6YXRpb24uCi0tLQogc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TICAgICB8
ICAgIDEgKwogc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUyB8ICAgODMgKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysKIHN5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmggfCAgIDI2ICsr
KysrKysrKwogc3lzL2NvbmYvZmlsZXMuYW1kNjQgICAgICAgICB8ICAgIDIgKwogc3lzL2kzODYv
eGVuL3hlbl9tYWNoZGVwLmMgICB8ICAgIDIgKwogc3lzL3g4Ni94ZW4vaHZtLmMgICAgICAgICAg
ICB8ICAgIDEgKwogc3lzL3g4Ni94ZW4vcHYuYyAgICAgICAgICAgICB8ICAxMTkgKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrCiBzeXMveGVuL3hlbi1vcy5oICAgICAg
ICAgICAgIHwgICAgNCArKwogOCBmaWxlcyBjaGFuZ2VkLCAyMzggaW5zZXJ0aW9ucygrKSwgMCBk
ZWxldGlvbnMoLSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMvYW1kNjQvYW1kNjQveGVuLWxvY29y
ZS5TCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94ZW4vcHYuYwoKZGlmZiAtLWdpdCBhL3N5
cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUyBiL3N5cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUwppbmRleCA1
NWNkYTNhLi40YWNlZjk3IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbG9jb3JlLlMKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TCkBAIC04NCw1ICs4NCw2IEBAIE5PTl9HUFJPRl9F
TlRSWShidGV4dCkKIAogCS5ic3MKIAlBTElHTl9EQVRBCQkJLyoganVzdCB0byBiZSBzdXJlICov
CisJLmdsb2JsCWJvb3RzdGFjawogCS5zcGFjZQkweDEwMDAJCQkvKiBzcGFjZSBmb3IgYm9vdHN0
YWNrIC0gdGVtcG9yYXJ5IHN0YWNrICovCiBib290c3RhY2s6CmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQveGVuLWxvY29yZS5TIGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpuZXcg
ZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi44NDI4N2M0Ci0tLSAvZGV2L251bGwKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpAQCAtMCwwICsxLDgzIEBACisvKi0KKyAq
IENvcHlyaWdodCAoYykgMjAwMyBQZXRlciBXZW1tIDxwZXRlckBGcmVlQlNELm9yZz4KKyAqIENv
cHlyaWdodCAoYykgMjAxMyBSb2dlciBQYXUgTW9ubmUgPHJveWdlckBGcmVlQlNELm9yZz4KKyAq
IEFsbCByaWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBz
b3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24s
IGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAq
IGFyZSBtZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRh
aW4gdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0
aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25z
IGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAg
IG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xh
aW1lciBpbiB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBw
cm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQ
Uk9WSURFRCBCWSBUSEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgYGBBUyBJUycnIEFORAorICog
QU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElN
SVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFO
RCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJ
TiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAq
IEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZ
LCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRF
RCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExP
U1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisg
KiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIg
SU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVH
TElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBV
U0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBP
RgorICogU1VDSCBEQU1BR0UuCisgKgorICogJEZyZWVCU0QkCisgKi8KKworI2luY2x1ZGUgPG1h
Y2hpbmUvYXNtYWNyb3MuaD4KKyNpbmNsdWRlIDxtYWNoaW5lL3BzbC5oPgorI2luY2x1ZGUgPG1h
Y2hpbmUvcG1hcC5oPgorI2luY2x1ZGUgPG1hY2hpbmUvc3BlY2lhbHJlZy5oPgorCisjaW5jbHVk
ZSA8eGVuL3hlbi1vcy5oPgorI2RlZmluZSBfX0FTU0VNQkxZX18KKyNpbmNsdWRlIDx4ZW4vaW50
ZXJmYWNlL2VsZm5vdGUuaD4KKworI2luY2x1ZGUgImFzc3ltLnMiCisKKy5zZWN0aW9uIF9feGVu
X2d1ZXN0CisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0dVRVNUX09TLCAgICAgICAuYXNjaXos
ICJGcmVlQlNEIikKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfR1VFU1RfVkVSU0lPTiwgIC5h
c2NpeiwgIkhFQUQiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9YRU5fVkVSU0lPTiwgICAg
LmFzY2l6LCAieGVuLTMuMCIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX1ZJUlRfQkFTRSwg
ICAgICAucXVhZCwgIEtFUk5CQVNFKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9QQUREUl9P
RkZTRVQsICAgLnF1YWQsICBLRVJOQkFTRSkgLyogWGVuIGhvbm91cnMgZWxmLT5wX3BhZGRyOyBj
b21wZW5zYXRlIGZvciB0aGlzICovCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0VOVFJZLCAg
ICAgICAgICAucXVhZCwgIHhlbl9zdGFydCkKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfSFlQ
RVJDQUxMX1BBR0UsIC5xdWFkLAkgaHlwZXJjYWxsX3BhZ2UpCisJRUxGTk9URShYZW4sIFhFTl9F
TEZOT1RFX0hWX1NUQVJUX0xPVywgICAucXVhZCwgIEhZUEVSVklTT1JfVklSVF9TVEFSVCkKKwlF
TEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfRkVBVFVSRVMsICAgICAgIC5hc2NpeiwgIndyaXRhYmxl
X2Rlc2NyaXB0b3JfdGFibGVzfGF1dG9fdHJhbnNsYXRlZF9waHlzbWFwfHN1cGVydmlzb3JfbW9k
ZV9rZXJuZWx8aHZtX2NhbGxiYWNrX3ZlY3RvciIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RF
X1BBRV9NT0RFLCAgICAgICAuYXNjaXosICJ5ZXMiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9U
RV9MMV9NRk5fVkFMSUQsICAgLmxvbmcsICBQR19WLCBQR19WKQorCUVMRk5PVEUoWGVuLCBYRU5f
RUxGTk9URV9MT0FERVIsICAgICAgICAgLmFzY2l6LCAiZ2VuZXJpYyIpCisJRUxGTk9URShYZW4s
IFhFTl9FTEZOT1RFX1NVU1BFTkRfQ0FOQ0VMLCAubG9uZywgIDApCisJRUxGTk9URShYZW4sIFhF
Tl9FTEZOT1RFX0JTRF9TWU1UQUIsCSAuYXNjaXosICJ5ZXMiKQorCisJLnRleHQKKy5wMmFsaWdu
IFBBR0VfU0hJRlQsIDB4OTAJLyogSHlwZXJjYWxsX3BhZ2UgbmVlZHMgdG8gYmUgUEFHRSBhbGln
bmVkICovCisKK05PTl9HUFJPRl9FTlRSWShoeXBlcmNhbGxfcGFnZSkKKwkuc2tpcAkweDEwMDAs
IDB4OTAJLyogRmlsbCB3aXRoICJub3AicyAqLworCitOT05fR1BST0ZfRU5UUlkoeGVuX3N0YXJ0
KQorCS8qIERvbid0IHRydXN0IHdoYXQgdGhlIGxvYWRlciBnaXZlcyBmb3IgcmZsYWdzLiAqLwor
CXB1c2hxCSRQU0xfS0VSTkVMCisJcG9wZnEKKworCS8qIFBhcmFtZXRlcnMgZm9yIHRoZSB4ZW4g
aW5pdCBmdW5jdGlvbiAqLworCW1vdnEJJXJzaSwgJXJkaQkJLyogc2hhcmVkX2luZm8gKGFyZyAx
KSAqLworCW1vdnEJJXJzcCwgJXJzaQkJLyogeGVuc3RhY2sgICAgKGFyZyAyKSAqLworCisJLyog
VXNlIG91ciBvd24gc3RhY2sgKi8KKwltb3ZxCSRib290c3RhY2ssJXJzcAorCXhvcmwJJWVicCwg
JWVicAorCisJLyogdV9pbnQ2NF90IGhhbW1lcl90aW1lX3hlbihzdGFydF9pbmZvX3QgKnNpLCB1
X2ludDY0X3QgeGVuc3RhY2spOyAqLworCWNhbGwJaGFtbWVyX3RpbWVfeGVuCisJbW92cQklcmF4
LCAlcnNwCQkvKiBzZXQgdXAga3N0YWNrIGZvciBtaV9zdGFydHVwKCkgKi8KKwljYWxsCW1pX3N0
YXJ0dXAJCS8qIGF1dG9jb25maWd1cmF0aW9uLCBtb3VudHJvb3QgZXRjICovCisKKwkvKiBOT1RS
RUFDSEVEICovCiswOglobHQKKwlqbXAgCTBiCmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVk
ZS9hc21hY3Jvcy5oIGIvc3lzL2FtZDY0L2luY2x1ZGUvYXNtYWNyb3MuaAppbmRleCAxZmI1OTJh
Li5jZThkY2U0IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvaW5jbHVkZS9hc21hY3Jvcy5oCisrKyBi
L3N5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmgKQEAgLTIwMSw0ICsyMDEsMzAgQEAKIAogI2Vu
ZGlmIC8qIExPQ09SRSAqLwogCisjaWZkZWYgX19TVERDX18KKyNkZWZpbmUgRUxGTk9URShuYW1l
LCB0eXBlLCBkZXNjdHlwZSwgZGVzY2RhdGEuLi4pIFwKKy5wdXNoc2VjdGlvbiAubm90ZS5uYW1l
ICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKyAgLmFsaWduIDQgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICA7ICAgICAgIFwKKyAgLmxvbmcgMmYgLSAxZiAgICAgICAgIC8qIG5hbWVzeiAq
LyAgICA7ICAgICAgIFwKKyAgLmxvbmcgNGYgLSAzZiAgICAgICAgIC8qIGRlc2NzeiAqLyAgICA7
ICAgICAgIFwKKyAgLmxvbmcgdHlwZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAg
IFwKKzE6LmFzY2l6ICNuYW1lICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzI6
LmFsaWduIDQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzM6ZGVzY3R5
cGUgZGVzY2RhdGEgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzQ6LmFsaWduIDQgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKy5wb3BzZWN0aW9uCisjZWxzZSAv
KiAhX19TVERDX18sIGkuZS4gLXRyYWRpdGlvbmFsICovCisjZGVmaW5lIEVMRk5PVEUobmFtZSwg
dHlwZSwgZGVzY3R5cGUsIGRlc2NkYXRhKSBcCisucHVzaHNlY3Rpb24gLm5vdGUubmFtZSAgICAg
ICAgICAgICAgICAgOyAgICAgICBcCisgIC5hbGlnbiA0ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgOyAgICAgICBcCisgIC5sb25nIDJmIC0gMWYgICAgICAgICAvKiBuYW1lc3ogKi8gICAg
OyAgICAgICBcCisgIC5sb25nIDRmIC0gM2YgICAgICAgICAvKiBkZXNjc3ogKi8gICAgOyAgICAg
ICBcCisgIC5sb25nIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisx
Oi5hc2NpeiAibmFtZSIgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisyOi5hbGln
biA0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCiszOmRlc2N0eXBlIGRl
c2NkYXRhICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCis0Oi5hbGlnbiA0ICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisucG9wc2VjdGlvbgorI2VuZGlmIC8qIF9f
U1REQ19fICovCisKICNlbmRpZiAvKiAhX01BQ0hJTkVfQVNNQUNST1NfSF8gKi8KZGlmZiAtLWdp
dCBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0IGIvc3lzL2NvbmYvZmlsZXMuYW1kNjQKaW5kZXggZDFi
ZGNkOS4uMTYwMjlkOCAxMDA2NDQKLS0tIGEvc3lzL2NvbmYvZmlsZXMuYW1kNjQKKysrIGIvc3lz
L2NvbmYvZmlsZXMuYW1kNjQKQEAgLTExOSw2ICsxMTksNyBAQCBhbWQ2NC9hbWQ2NC9pbl9ja3N1
bS5jCQlvcHRpb25hbAlpbmV0IHwgaW5ldDYKIGFtZDY0L2FtZDY0L2luaXRjcHUuYwkJc3RhbmRh
cmQKIGFtZDY0L2FtZDY0L2lvLmMJCW9wdGlvbmFsCWlvCiBhbWQ2NC9hbWQ2NC9sb2NvcmUuUwkJ
c3RhbmRhcmQJbm8tb2JqCithbWQ2NC9hbWQ2NC94ZW4tbG9jb3JlLlMJb3B0aW9uYWwJeGVuaHZt
CiBhbWQ2NC9hbWQ2NC9tYWNoZGVwLmMJCXN0YW5kYXJkCiBhbWQ2NC9hbWQ2NC9tZW0uYwkJb3B0
aW9uYWwJbWVtCiBhbWQ2NC9hbWQ2NC9taW5pZHVtcF9tYWNoZGVwLmMJc3RhbmRhcmQKQEAgLTU2
NiwzICs1NjcsNCBAQCB4ODYveDg2L25leHVzLmMJCQlzdGFuZGFyZAogeDg2L3g4Ni90c2MuYwkJ
CXN0YW5kYXJkCiB4ODYveGVuL2h2bS5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYveGVuL3hlbl9p
bnRyLmMJCW9wdGlvbmFsCXhlbiB8IHhlbmh2bQoreDg2L3hlbi9wdi5jCQkJb3B0aW9uYWwJeGVu
aHZtCmRpZmYgLS1naXQgYS9zeXMvaTM4Ni94ZW4veGVuX21hY2hkZXAuYyBiL3N5cy9pMzg2L3hl
bi94ZW5fbWFjaGRlcC5jCmluZGV4IDcwNDliZTYuLmZkNTc1ZWUgMTAwNjQ0Ci0tLSBhL3N5cy9p
Mzg2L3hlbi94ZW5fbWFjaGRlcC5jCisrKyBiL3N5cy9pMzg2L3hlbi94ZW5fbWFjaGRlcC5jCkBA
IC04OSw2ICs4OSw3IEBAIElEVFZFQyhkaXYpLCBJRFRWRUMoZGJnKSwgSURUVkVDKG5taSksIElE
VFZFQyhicHQpLCBJRFRWRUMob2ZsKSwKIAogaW50IHhlbmRlYnVnX2ZsYWdzOyAKIHN0YXJ0X2lu
Zm9fdCAqeGVuX3N0YXJ0X2luZm87CitzdGFydF9pbmZvX3QgKkhZUEVSVklTT1Jfc3RhcnRfaW5m
bzsKIHNoYXJlZF9pbmZvX3QgKkhZUEVSVklTT1Jfc2hhcmVkX2luZm87CiB4ZW5fcGZuX3QgKnhl
bl9tYWNoaW5lX3BoeXMgPSBtYWNoaW5lX3RvX3BoeXNfbWFwcGluZzsKIHhlbl9wZm5fdCAqeGVu
X3BoeXNfbWFjaGluZTsKQEAgLTkyNyw2ICs5MjgsNyBAQCBpbml0dmFsdWVzKHN0YXJ0X2luZm9f
dCAqc3RhcnRpbmZvKQogCUhZUEVSVklTT1Jfdm1fYXNzaXN0KFZNQVNTVF9DTURfZW5hYmxlLCBW
TUFTU1RfVFlQRV80Z2Jfc2VnbWVudHNfbm90aWZ5KTsJCiAjZW5kaWYJCiAJeGVuX3N0YXJ0X2lu
Zm8gPSBzdGFydGluZm87CisJSFlQRVJWSVNPUl9zdGFydF9pbmZvID0gc3RhcnRpbmZvOwogCXhl
bl9waHlzX21hY2hpbmUgPSAoeGVuX3Bmbl90ICopc3RhcnRpbmZvLT5tZm5fbGlzdDsKIAogCUlk
bGVQVEQgPSAocGRfZW50cnlfdCAqKSgodWludDhfdCAqKXN0YXJ0aW5mby0+cHRfYmFzZSArIFBB
R0VfU0laRSk7CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9odm0uYyBiL3N5cy94ODYveGVuL2h2
bS5jCmluZGV4IDcyODExZGMuLmIzOTc3MjEgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVuL2h2bS5j
CisrKyBiL3N5cy94ODYveGVuL2h2bS5jCkBAIC0xNTksNiArMTU5LDcgQEAgRFBDUFVfREVGSU5F
KHhlbl9pbnRyX2hhbmRsZV90LCBpcGlfaGFuZGxlW25pdGVtcyh4ZW5faXBpcyldKTsKIC8qKiBI
eXBlcmNhbGwgdGFibGUgYWNjZXNzZWQgdmlhIEhZUEVSVklTT1JfKl9vcCgpIG1ldGhvZHMuICov
CiBjaGFyICpoeXBlcmNhbGxfc3R1YnM7CiBzaGFyZWRfaW5mb190ICpIWVBFUlZJU09SX3NoYXJl
ZF9pbmZvOworc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0YXJ0X2luZm87CiAKICNpZmRlZiBT
TVAKIC8qLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBYRU4gUFYgSVBJIEhhbmRsZXJzIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSovCmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9wdi5j
IGIvc3lzL3g4Ni94ZW4vcHYuYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi41
NTcxZWNmCi0tLSAvZGV2L251bGwKKysrIGIvc3lzL3g4Ni94ZW4vcHYuYwpAQCAtMCwwICsxLDEx
OSBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAwNCBDaHJpc3RpYW4gTGltcGFjaC4KKyAqIENv
cHlyaWdodCAoYykgMjAwNC0yMDA2LDIwMDggS2lwIE1hY3kKKyAqIENvcHlyaWdodCAoYykgMjAx
MyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMg
cmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJp
bmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0
ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6Cisg
KiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3Zl
IGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhl
IGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBm
b3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhp
cyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUK
KyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRo
IHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBU
SEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9S
IElNUExJRUQgV0FSUkFOVElFUywgSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQor
ICogSU1QTElFRCBXQVJSQU5USUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1Ig
QSBQQVJUSUNVTEFSIFBVUlBPU0UKKyAqIEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hB
TEwgVEhFIEFVVEhPUiBPUiBDT05UUklCVVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVD
VCwgSU5ESVJFQ1QsIElOQ0lERU5UQUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVO
VElBTAorICogREFNQUdFUyAoSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVN
RU5UIE9GIFNVQlNUSVRVVEUgR09PRFMKKyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFU
QSwgT1IgUFJPRklUUzsgT1IgQlVTSU5FU1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVT
RUQgQU5EIE9OIEFOWSBUSEVPUlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBT
VFJJQ1QKKyAqIExJQUJJTElUWSwgT1IgVE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RI
RVJXSVNFKSBBUklTSU5HIElOIEFOWSBXQVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09G
VFdBUkUsIEVWRU4gSUYgQURWSVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFN
QUdFLgorICovCisKKyNpbmNsdWRlIDxzeXMvY2RlZnMuaD4KK19fRkJTRElEKCIkRnJlZUJTRCQi
KTsKKworI2luY2x1ZGUgPHN5cy9wYXJhbS5oPgorI2luY2x1ZGUgPHN5cy9idXMuaD4KKyNpbmNs
dWRlIDxzeXMva2VybmVsLmg+CisjaW5jbHVkZSA8c3lzL3JlYm9vdC5oPgorI2luY2x1ZGUgPHN5
cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9sb2NrLmg+CisjaW5jbHVkZSA8c3lzL3J3bG9jay5o
PgorCisjaW5jbHVkZSA8dm0vdm0uaD4KKyNpbmNsdWRlIDx2bS92bV9leHRlcm4uaD4KKyNpbmNs
dWRlIDx2bS92bV9rZXJuLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFnZS5oPgorI2luY2x1ZGUgPHZt
L3ZtX21hcC5oPgorI2luY2x1ZGUgPHZtL3ZtX29iamVjdC5oPgorI2luY2x1ZGUgPHZtL3ZtX3Bh
Z2VyLmg+CisjaW5jbHVkZSA8dm0vdm1fcGFyYW0uaD4KKworI2luY2x1ZGUgPHhlbi94ZW4tb3Mu
aD4KKyNpbmNsdWRlIDx4ZW4vaHlwZXJ2aXNvci5oPgorCisvKiBOYXRpdmUgaW5pdGlhbCBmdW5j
dGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJfdGltZSh1X2ludDY0X3QsIHVfaW50NjRf
dCk7CisvKiBYZW4gaW5pdGlhbCBmdW5jdGlvbiAqLworZXh0ZXJuIHVfaW50NjRfdCBoYW1tZXJf
dGltZV94ZW4oc3RhcnRfaW5mb190ICosIHVfaW50NjRfdCk7CisKKy8qCisgKiBGaXJzdCBmdW5j
dGlvbiBjYWxsZWQgYnkgdGhlIFhlbiBQVkggYm9vdCBzZXF1ZW5jZS4KKyAqCisgKiBTZXQgc29t
ZSBYZW4gZ2xvYmFsIHZhcmlhYmxlcyBhbmQgcHJlcGFyZSB0aGUgZW52aXJvbm1lbnQgc28gaXQg
aXMKKyAqIGFzIHNpbWlsYXIgYXMgcG9zc2libGUgdG8gd2hhdCBuYXRpdmUgRnJlZUJTRCBpbml0
IGZ1bmN0aW9uIGV4cGVjdHMuCisgKi8KK3VfaW50NjRfdAoraGFtbWVyX3RpbWVfeGVuKHN0YXJ0
X2luZm9fdCAqc2ksIHVfaW50NjRfdCB4ZW5zdGFjaykKK3sKKwl1X2ludDY0X3QgcGh5c2ZyZWU7
CisJdV9pbnQ2NF90ICpQVDQgPSAodV9pbnQ2NF90ICopeGVuc3RhY2s7CisJdV9pbnQ2NF90ICpQ
VDMgPSAodV9pbnQ2NF90ICopKHhlbnN0YWNrICsgUEFHRV9TSVpFKTsKKwl1X2ludDY0X3QgKlBU
MiA9ICh1X2ludDY0X3QgKikoeGVuc3RhY2sgKyAyICogUEFHRV9TSVpFKTsKKwlpbnQgaTsKKwor
CWlmICgoc2kgPT0gTlVMTCkgfHwgKHhlbnN0YWNrID09IDApKSB7CisJCUhZUEVSVklTT1Jfc2h1
dGRvd24oU0hVVERPV05fY3Jhc2gpOworCX0KKworCS8qIFdlIHVzZSAzIHBhZ2VzIG9mIHhlbiBz
dGFjayBmb3IgdGhlIGJvb3QgcGFnZXRhYmxlcyAqLworCXBoeXNmcmVlID0geGVuc3RhY2sgKyAz
ICogUEFHRV9TSVpFIC0gS0VSTkJBU0U7CisKKwkvKiBTZXR1cCBYZW4gZ2xvYmFsIHZhcmlhYmxl
cyAqLworCUhZUEVSVklTT1Jfc3RhcnRfaW5mbyA9IHNpOworCUhZUEVSVklTT1Jfc2hhcmVkX2lu
Zm8gPQorCQkoc2hhcmVkX2luZm9fdCAqKShzaS0+c2hhcmVkX2luZm8gKyBLRVJOQkFTRSk7CisK
KwkvKgorCSAqIFNldHVwIHNvbWUgbWlzYyBnbG9iYWwgdmFyaWFibGVzIGZvciBYZW4gZGV2aWNl
cworCSAqCisJICogWFhYOiBkZXZpY2VzIHRoYXQgbmVlZCB0aGlzIHNwZWNpZmljIHZhcmlhYmxl
cyBzaG91bGQKKwkgKiAgICAgIGJlIHJld3JpdHRlbiB0byBmZXRjaCB0aGlzIGluZm8gYnkgdGhl
bXNlbHZlcyBmcm9tIHRoZQorCSAqICAgICAgc3RhcnRfaW5mbyBwYWdlLgorCSAqLworCXhlbl9z
dG9yZSA9IChzdHJ1Y3QgeGVuc3RvcmVfZG9tYWluX2ludGVyZmFjZSAqKQorCSAgICAgICAgICAg
IChwdG9hKHNpLT5zdG9yZV9tZm4pICsgS0VSTkJBU0UpOworCisJeGVuX2RvbWFpbl90eXBlID0g
WEVOX1BWX0RPTUFJTjsKKwl2bV9ndWVzdCA9IFZNX0dVRVNUX1hFTjsKKworCS8qCisJICogVXNl
IHRoZSBzdGFjayBYZW4gZ2l2ZXMgdXMgdG8gYnVpbGQgdGhlIHBhZ2UgdGFibGVzCisJICogYXMg
bmF0aXZlIEZyZWVCU0QgZXhwZWN0cyB0byBmaW5kIHRoZW0gKGNyZWF0ZWQKKwkgKiBieSB0aGUg
Ym9vdCB0cmFtcG9saW5lKS4KKwkgKi8KKwlmb3IgKGkgPSAwOyBpIDwgNTEyOyBpKyspIHsKKwkJ
LyogRWFjaCBzbG90IG9mIHRoZSBsZXZlbCA0IHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZl
bCAzIHBhZ2UgKi8KKwkJUFQ0W2ldID0gKCh1X2ludDY0X3QpJlBUM1swXSkgLSBLRVJOQkFTRTsK
KwkJUFQ0W2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1U7CisKKwkJLyogRWFjaCBzbG90IG9mIHRo
ZSBsZXZlbCAzIHBhZ2VzIHBvaW50cyB0byB0aGUgc2FtZSBsZXZlbCAyIHBhZ2UgKi8KKwkJUFQz
W2ldID0gKCh1X2ludDY0X3QpJlBUMlswXSkgLSBLRVJOQkFTRTsKKwkJUFQzW2ldIHw9IFBHX1Yg
fCBQR19SVyB8IFBHX1U7CisKKwkJLyogVGhlIGxldmVsIDIgcGFnZSBzbG90cyBhcmUgbWFwcGVk
IHdpdGggMk1CIHBhZ2VzIGZvciAxR0IuICovCisJCVBUMltpXSA9IGkgKiAoMiAqIDEwMjQgKiAx
MDI0KTsKKwkJUFQyW2ldIHw9IFBHX1YgfCBQR19SVyB8IFBHX1BTIHwgUEdfVTsKKwl9CisJbG9h
ZF9jcjMoKCh1X2ludDY0X3QpJlBUNFswXSkgLSBLRVJOQkFTRSk7CisKKwkvKiBOb3cgd2UgY2Fu
IGp1bXAgaW50byB0aGUgbmF0aXZlIGluaXQgZnVuY3Rpb24gKi8KKwlyZXR1cm4gKGhhbW1lcl90
aW1lKDAsIHBoeXNmcmVlKSk7Cit9CmRpZmYgLS1naXQgYS9zeXMveGVuL3hlbi1vcy5oIGIvc3lz
L3hlbi94ZW4tb3MuaAppbmRleCA4NzY0NGU5Li5jNzQ3NGQ4IDEwMDY0NAotLS0gYS9zeXMveGVu
L3hlbi1vcy5oCisrKyBiL3N5cy94ZW4veGVuLW9zLmgKQEAgLTUxLDYgKzUxLDEwIEBACiB2b2lk
IGZvcmNlX2V2dGNobl9jYWxsYmFjayh2b2lkKTsKIAogZXh0ZXJuIHNoYXJlZF9pbmZvX3QgKkhZ
UEVSVklTT1Jfc2hhcmVkX2luZm87CitleHRlcm4gc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0
YXJ0X2luZm87CisKKy8qIFhYWDogd2UgbmVlZCB0byBnZXQgcmlkIG9mIHRoaXMgYW5kIHVzZSBI
WVBFUlZJU09SX3N0YXJ0X2luZm8gZGlyZWN0bHkgKi8KK2V4dGVybiBzdHJ1Y3QgeGVuc3RvcmVf
ZG9tYWluX2ludGVyZmFjZSAqeGVuX3N0b3JlOwogCiBlbnVtIHhlbl9kb21haW5fdHlwZSB7CiAJ
WEVOX05BVElWRSwgICAgICAgICAgICAgLyogcnVubmluZyBvbiBiYXJlIGhhcmR3YXJlICAgICov
Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:00:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Tl-00033d-Jf; Tue, 14 Jan 2014 15:00:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TK-00032I-Q2
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:07 +0000
Received: from [193.109.254.147:41262] by server-11.bemta-14.messagelabs.com
	id B0/29-20576-6F055D25; Tue, 14 Jan 2014 15:00:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389711603!10793030!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13208 invoked from network); 14 Jan 2014 15:00:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694644"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:45 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35Sz-0006J6-OP;
	Tue, 14 Jan 2014 14:59:45 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:23 +0100
Message-ID: <1389711582-66908-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_01/20=5D_xen=3A_add_PV/PVH_ker?=
	=?utf-8?q?nel_entry_point?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHRoZSBQVi9QVkggZW50cnkgcG9pbnQgYW5kIHRoZSBsb3cgbGV2ZWwgZnVuY3Rpb25zIGZv
ciBQVkgKaW5pdGlhbGl6YXRpb24uCi0tLQogc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TICAgICB8
ICAgIDEgKwogc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUyB8ICAgODMgKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysKIHN5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmggfCAgIDI2ICsr
KysrKysrKwogc3lzL2NvbmYvZmlsZXMuYW1kNjQgICAgICAgICB8ICAgIDIgKwogc3lzL2kzODYv
eGVuL3hlbl9tYWNoZGVwLmMgICB8ICAgIDIgKwogc3lzL3g4Ni94ZW4vaHZtLmMgICAgICAgICAg
ICB8ICAgIDEgKwogc3lzL3g4Ni94ZW4vcHYuYyAgICAgICAgICAgICB8ICAxMTkgKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrCiBzeXMveGVuL3hlbi1vcy5oICAgICAg
ICAgICAgIHwgICAgNCArKwogOCBmaWxlcyBjaGFuZ2VkLCAyMzggaW5zZXJ0aW9ucygrKSwgMCBk
ZWxldGlvbnMoLSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMvYW1kNjQvYW1kNjQveGVuLWxvY29y
ZS5TCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94ZW4vcHYuYwoKZGlmZiAtLWdpdCBhL3N5
cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUyBiL3N5cy9hbWQ2NC9hbWQ2NC9sb2NvcmUuUwppbmRleCA1
NWNkYTNhLi40YWNlZjk3IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbG9jb3JlLlMKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L2xvY29yZS5TCkBAIC04NCw1ICs4NCw2IEBAIE5PTl9HUFJPRl9F
TlRSWShidGV4dCkKIAogCS5ic3MKIAlBTElHTl9EQVRBCQkJLyoganVzdCB0byBiZSBzdXJlICov
CisJLmdsb2JsCWJvb3RzdGFjawogCS5zcGFjZQkweDEwMDAJCQkvKiBzcGFjZSBmb3IgYm9vdHN0
YWNrIC0gdGVtcG9yYXJ5IHN0YWNrICovCiBib290c3RhY2s6CmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQveGVuLWxvY29yZS5TIGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpuZXcg
ZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi44NDI4N2M0Ci0tLSAvZGV2L251bGwKKysr
IGIvc3lzL2FtZDY0L2FtZDY0L3hlbi1sb2NvcmUuUwpAQCAtMCwwICsxLDgzIEBACisvKi0KKyAq
IENvcHlyaWdodCAoYykgMjAwMyBQZXRlciBXZW1tIDxwZXRlckBGcmVlQlNELm9yZz4KKyAqIENv
cHlyaWdodCAoYykgMjAxMyBSb2dlciBQYXUgTW9ubmUgPHJveWdlckBGcmVlQlNELm9yZz4KKyAq
IEFsbCByaWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBz
b3VyY2UgYW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24s
IGFyZSBwZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAq
IGFyZSBtZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRh
aW4gdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0
aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25z
IGluIGJpbmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAg
IG5vdGljZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xh
aW1lciBpbiB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBw
cm92aWRlZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQ
Uk9WSURFRCBCWSBUSEUgQVVUSE9SIEFORCBDT05UUklCVVRPUlMgYGBBUyBJUycnIEFORAorICog
QU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElN
SVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFO
RCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJ
TiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAq
IEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZ
LCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRF
RCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExP
U1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisg
KiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIg
SU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVH
TElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBV
U0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBP
RgorICogU1VDSCBEQU1BR0UuCisgKgorICogJEZyZWVCU0QkCisgKi8KKworI2luY2x1ZGUgPG1h
Y2hpbmUvYXNtYWNyb3MuaD4KKyNpbmNsdWRlIDxtYWNoaW5lL3BzbC5oPgorI2luY2x1ZGUgPG1h
Y2hpbmUvcG1hcC5oPgorI2luY2x1ZGUgPG1hY2hpbmUvc3BlY2lhbHJlZy5oPgorCisjaW5jbHVk
ZSA8eGVuL3hlbi1vcy5oPgorI2RlZmluZSBfX0FTU0VNQkxZX18KKyNpbmNsdWRlIDx4ZW4vaW50
ZXJmYWNlL2VsZm5vdGUuaD4KKworI2luY2x1ZGUgImFzc3ltLnMiCisKKy5zZWN0aW9uIF9feGVu
X2d1ZXN0CisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0dVRVNUX09TLCAgICAgICAuYXNjaXos
ICJGcmVlQlNEIikKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfR1VFU1RfVkVSU0lPTiwgIC5h
c2NpeiwgIkhFQUQiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9YRU5fVkVSU0lPTiwgICAg
LmFzY2l6LCAieGVuLTMuMCIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX1ZJUlRfQkFTRSwg
ICAgICAucXVhZCwgIEtFUk5CQVNFKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9URV9QQUREUl9P
RkZTRVQsICAgLnF1YWQsICBLRVJOQkFTRSkgLyogWGVuIGhvbm91cnMgZWxmLT5wX3BhZGRyOyBj
b21wZW5zYXRlIGZvciB0aGlzICovCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RFX0VOVFJZLCAg
ICAgICAgICAucXVhZCwgIHhlbl9zdGFydCkKKwlFTEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfSFlQ
RVJDQUxMX1BBR0UsIC5xdWFkLAkgaHlwZXJjYWxsX3BhZ2UpCisJRUxGTk9URShYZW4sIFhFTl9F
TEZOT1RFX0hWX1NUQVJUX0xPVywgICAucXVhZCwgIEhZUEVSVklTT1JfVklSVF9TVEFSVCkKKwlF
TEZOT1RFKFhlbiwgWEVOX0VMRk5PVEVfRkVBVFVSRVMsICAgICAgIC5hc2NpeiwgIndyaXRhYmxl
X2Rlc2NyaXB0b3JfdGFibGVzfGF1dG9fdHJhbnNsYXRlZF9waHlzbWFwfHN1cGVydmlzb3JfbW9k
ZV9rZXJuZWx8aHZtX2NhbGxiYWNrX3ZlY3RvciIpCisJRUxGTk9URShYZW4sIFhFTl9FTEZOT1RF
X1BBRV9NT0RFLCAgICAgICAuYXNjaXosICJ5ZXMiKQorCUVMRk5PVEUoWGVuLCBYRU5fRUxGTk9U
RV9MMV9NRk5fVkFMSUQsICAgLmxvbmcsICBQR19WLCBQR19WKQorCUVMRk5PVEUoWGVuLCBYRU5f
RUxGTk9URV9MT0FERVIsICAgICAgICAgLmFzY2l6LCAiZ2VuZXJpYyIpCisJRUxGTk9URShYZW4s
IFhFTl9FTEZOT1RFX1NVU1BFTkRfQ0FOQ0VMLCAubG9uZywgIDApCisJRUxGTk9URShYZW4sIFhF
Tl9FTEZOT1RFX0JTRF9TWU1UQUIsCSAuYXNjaXosICJ5ZXMiKQorCisJLnRleHQKKy5wMmFsaWdu
IFBBR0VfU0hJRlQsIDB4OTAJLyogSHlwZXJjYWxsX3BhZ2UgbmVlZHMgdG8gYmUgUEFHRSBhbGln
bmVkICovCisKK05PTl9HUFJPRl9FTlRSWShoeXBlcmNhbGxfcGFnZSkKKwkuc2tpcAkweDEwMDAs
IDB4OTAJLyogRmlsbCB3aXRoICJub3AicyAqLworCitOT05fR1BST0ZfRU5UUlkoeGVuX3N0YXJ0
KQorCS8qIERvbid0IHRydXN0IHdoYXQgdGhlIGxvYWRlciBnaXZlcyBmb3IgcmZsYWdzLiAqLwor
CXB1c2hxCSRQU0xfS0VSTkVMCisJcG9wZnEKKworCS8qIFBhcmFtZXRlcnMgZm9yIHRoZSB4ZW4g
aW5pdCBmdW5jdGlvbiAqLworCW1vdnEJJXJzaSwgJXJkaQkJLyogc2hhcmVkX2luZm8gKGFyZyAx
KSAqLworCW1vdnEJJXJzcCwgJXJzaQkJLyogeGVuc3RhY2sgICAgKGFyZyAyKSAqLworCisJLyog
VXNlIG91ciBvd24gc3RhY2sgKi8KKwltb3ZxCSRib290c3RhY2ssJXJzcAorCXhvcmwJJWVicCwg
JWVicAorCisJLyogdV9pbnQ2NF90IGhhbW1lcl90aW1lX3hlbihzdGFydF9pbmZvX3QgKnNpLCB1
X2ludDY0X3QgeGVuc3RhY2spOyAqLworCWNhbGwJaGFtbWVyX3RpbWVfeGVuCisJbW92cQklcmF4
LCAlcnNwCQkvKiBzZXQgdXAga3N0YWNrIGZvciBtaV9zdGFydHVwKCkgKi8KKwljYWxsCW1pX3N0
YXJ0dXAJCS8qIGF1dG9jb25maWd1cmF0aW9uLCBtb3VudHJvb3QgZXRjICovCisKKwkvKiBOT1RS
RUFDSEVEICovCiswOglobHQKKwlqbXAgCTBiCmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVk
ZS9hc21hY3Jvcy5oIGIvc3lzL2FtZDY0L2luY2x1ZGUvYXNtYWNyb3MuaAppbmRleCAxZmI1OTJh
Li5jZThkY2U0IDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvaW5jbHVkZS9hc21hY3Jvcy5oCisrKyBi
L3N5cy9hbWQ2NC9pbmNsdWRlL2FzbWFjcm9zLmgKQEAgLTIwMSw0ICsyMDEsMzAgQEAKIAogI2Vu
ZGlmIC8qIExPQ09SRSAqLwogCisjaWZkZWYgX19TVERDX18KKyNkZWZpbmUgRUxGTk9URShuYW1l
LCB0eXBlLCBkZXNjdHlwZSwgZGVzY2RhdGEuLi4pIFwKKy5wdXNoc2VjdGlvbiAubm90ZS5uYW1l
ICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKyAgLmFsaWduIDQgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICA7ICAgICAgIFwKKyAgLmxvbmcgMmYgLSAxZiAgICAgICAgIC8qIG5hbWVzeiAq
LyAgICA7ICAgICAgIFwKKyAgLmxvbmcgNGYgLSAzZiAgICAgICAgIC8qIGRlc2NzeiAqLyAgICA7
ICAgICAgIFwKKyAgLmxvbmcgdHlwZSAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAg
IFwKKzE6LmFzY2l6ICNuYW1lICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzI6
LmFsaWduIDQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzM6ZGVzY3R5
cGUgZGVzY2RhdGEgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKzQ6LmFsaWduIDQgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICA7ICAgICAgIFwKKy5wb3BzZWN0aW9uCisjZWxzZSAv
KiAhX19TVERDX18sIGkuZS4gLXRyYWRpdGlvbmFsICovCisjZGVmaW5lIEVMRk5PVEUobmFtZSwg
dHlwZSwgZGVzY3R5cGUsIGRlc2NkYXRhKSBcCisucHVzaHNlY3Rpb24gLm5vdGUubmFtZSAgICAg
ICAgICAgICAgICAgOyAgICAgICBcCisgIC5hbGlnbiA0ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgOyAgICAgICBcCisgIC5sb25nIDJmIC0gMWYgICAgICAgICAvKiBuYW1lc3ogKi8gICAg
OyAgICAgICBcCisgIC5sb25nIDRmIC0gM2YgICAgICAgICAvKiBkZXNjc3ogKi8gICAgOyAgICAg
ICBcCisgIC5sb25nIHR5cGUgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisx
Oi5hc2NpeiAibmFtZSIgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisyOi5hbGln
biA0ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCiszOmRlc2N0eXBlIGRl
c2NkYXRhICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCis0Oi5hbGlnbiA0ICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgOyAgICAgICBcCisucG9wc2VjdGlvbgorI2VuZGlmIC8qIF9f
U1REQ19fICovCisKICNlbmRpZiAvKiAhX01BQ0hJTkVfQVNNQUNST1NfSF8gKi8KZGlmZiAtLWdp
dCBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0IGIvc3lzL2NvbmYvZmlsZXMuYW1kNjQKaW5kZXggZDFi
ZGNkOS4uMTYwMjlkOCAxMDA2NDQKLS0tIGEvc3lzL2NvbmYvZmlsZXMuYW1kNjQKKysrIGIvc3lz
L2NvbmYvZmlsZXMuYW1kNjQKQEAgLTExOSw2ICsxMTksNyBAQCBhbWQ2NC9hbWQ2NC9pbl9ja3N1
bS5jCQlvcHRpb25hbAlpbmV0IHwgaW5ldDYKIGFtZDY0L2FtZDY0L2luaXRjcHUuYwkJc3RhbmRh
cmQKIGFtZDY0L2FtZDY0L2lvLmMJCW9wdGlvbmFsCWlvCiBhbWQ2NC9hbWQ2NC9sb2NvcmUuUwkJ
c3RhbmRhcmQJbm8tb2JqCithbWQ2NC9hbWQ2NC94ZW4tbG9jb3JlLlMJb3B0aW9uYWwJeGVuaHZt
CiBhbWQ2NC9hbWQ2NC9tYWNoZGVwLmMJCXN0YW5kYXJkCiBhbWQ2NC9hbWQ2NC9tZW0uYwkJb3B0
aW9uYWwJbWVtCiBhbWQ2NC9hbWQ2NC9taW5pZHVtcF9tYWNoZGVwLmMJc3RhbmRhcmQKQEAgLTU2
NiwzICs1NjcsNCBAQCB4ODYveDg2L25leHVzLmMJCQlzdGFuZGFyZAogeDg2L3g4Ni90c2MuYwkJ
CXN0YW5kYXJkCiB4ODYveGVuL2h2bS5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYveGVuL3hlbl9p
bnRyLmMJCW9wdGlvbmFsCXhlbiB8IHhlbmh2bQoreDg2L3hlbi9wdi5jCQkJb3B0aW9uYWwJeGVu
aHZtCmRpZmYgLS1naXQgYS9zeXMvaTM4Ni94ZW4veGVuX21hY2hkZXAuYyBiL3N5cy9pMzg2L3hl
bi94ZW5fbWFjaGRlcC5jCmluZGV4IDcwNDliZTYuLmZkNTc1ZWUgMTAwNjQ0Ci0tLSBhL3N5cy9p
Mzg2L3hlbi94ZW5fbWFjaGRlcC5jCisrKyBiL3N5cy9pMzg2L3hlbi94ZW5fbWFjaGRlcC5jCkBA
IC04OSw2ICs4OSw3IEBAIElEVFZFQyhkaXYpLCBJRFRWRUMoZGJnKSwgSURUVkVDKG5taSksIElE
VFZFQyhicHQpLCBJRFRWRUMob2ZsKSwKIAogaW50IHhlbmRlYnVnX2ZsYWdzOyAKIHN0YXJ0X2lu
Zm9fdCAqeGVuX3N0YXJ0X2luZm87CitzdGFydF9pbmZvX3QgKkhZUEVSVklTT1Jfc3RhcnRfaW5m
bzsKIHNoYXJlZF9pbmZvX3QgKkhZUEVSVklTT1Jfc2hhcmVkX2luZm87CiB4ZW5fcGZuX3QgKnhl
bl9tYWNoaW5lX3BoeXMgPSBtYWNoaW5lX3RvX3BoeXNfbWFwcGluZzsKIHhlbl9wZm5fdCAqeGVu
X3BoeXNfbWFjaGluZTsKQEAgLTkyNyw2ICs5MjgsNyBAQCBpbml0dmFsdWVzKHN0YXJ0X2luZm9f
dCAqc3RhcnRpbmZvKQogCUhZUEVSVklTT1Jfdm1fYXNzaXN0KFZNQVNTVF9DTURfZW5hYmxlLCBW
TUFTU1RfVFlQRV80Z2Jfc2VnbWVudHNfbm90aWZ5KTsJCiAjZW5kaWYJCiAJeGVuX3N0YXJ0X2lu
Zm8gPSBzdGFydGluZm87CisJSFlQRVJWSVNPUl9zdGFydF9pbmZvID0gc3RhcnRpbmZvOwogCXhl
bl9waHlzX21hY2hpbmUgPSAoeGVuX3Bmbl90ICopc3RhcnRpbmZvLT5tZm5fbGlzdDsKIAogCUlk
bGVQVEQgPSAocGRfZW50cnlfdCAqKSgodWludDhfdCAqKXN0YXJ0aW5mby0+cHRfYmFzZSArIFBB
R0VfU0laRSk7CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9odm0uYyBiL3N5cy94ODYveGVuL2h2
bS5jCmluZGV4IDcyODExZGMuLmIzOTc3MjEgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVuL2h2bS5j
CisrKyBiL3N5cy94ODYveGVuL2h2bS5jCkBAIC0xNTksNiArMTU5LDcgQEAgRFBDUFVfREVGSU5F
KHhlbl9pbnRyX2hhbmRsZV90LCBpcGlfaGFuZGxlW25pdGVtcyh4ZW5faXBpcyldKTsKIC8qKiBI
eXBlcmNhbGwgdGFibGUgYWNjZXNzZWQgdmlhIEhZUEVSVklTT1JfKl9vcCgpIG1ldGhvZHMuICov
CiBjaGFyICpoeXBlcmNhbGxfc3R1YnM7CiBzaGFyZWRfaW5mb190ICpIWVBFUlZJU09SX3NoYXJl
ZF9pbmZvOworc3RhcnRfaW5mb190ICpIWVBFUlZJU09SX3N0YXJ0X2luZm87CiAKICNpZmRlZiBT
TVAKIC8qLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSBYRU4gUFYgSVBJIEhhbmRsZXJzIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLSovCmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9wdi5j
IGIvc3lzL3g4Ni94ZW4vcHYuYwpuZXcgZmlsZSBtb2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi41
NTcxZWNmCi0tLSAvZGV2L251bGwKKysrIGIvc3lzL3g4Ni94ZW4vcHYuYwpAQCAtMCwwICsxLDEx
OSBAQAorLyoKKyAqIENvcHlyaWdodCAoYykgMjAwNCBDaHJpc3RpYW4gTGltcGFjaC4KKyAqIENv
cHlyaWdodCAoYykgMjAwNC0yMDA2LDIwMDggS2lwIE1hY3kKKyAqIENvcHlyaWdodCAoYykgMjAx
MyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCByaWdodHMg
cmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2UgYW5kIGJp
bmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBwZXJtaXR0
ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBtZXQ6Cisg
KiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhlIGFib3Zl
IGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhl
IGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJpbmFyeSBm
b3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGljZSwgdGhp
cyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBpbiB0aGUK
KyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRlZCB3aXRo
IHRoZSBkaXN0cmlidXRpb24uCisgKgorICogVEhJUyBTT0ZUV0FSRSBJUyBQUk9WSURFRCBCWSBU
HE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/reboot.h>
+#include <sys/systm.h>
+#include <sys/lock.h>
+#include <sys/rwlock.h>
+
+#include <vm/vm.h>
+#include <vm/vm_extern.h>
+#include <vm/vm_kern.h>
+#include <vm/vm_page.h>
+#include <vm/vm_map.h>
+#include <vm/vm_object.h>
+#include <vm/vm_pager.h>
+#include <vm/vm_param.h>
+
+#include <xen/xen-os.h>
+#include <xen/hypervisor.h>
+
+/* Native initial function */
+extern u_int64_t hammer_time(u_int64_t, u_int64_t);
+/* Xen initial function */
+extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
+
+/*
+ * First function called by the Xen PVH boot sequence.
+ *
+ * Set some Xen global variables and prepare the environment so it is
+ * as similar as possible to what native FreeBSD init function expects.
+ */
+u_int64_t
+hammer_time_xen(start_info_t *si, u_int64_t xenstack)
+{
+	u_int64_t physfree;
+	u_int64_t *PT4 = (u_int64_t *)xenstack;
+	u_int64_t *PT3 = (u_int64_t *)(xenstack + PAGE_SIZE);
+	u_int64_t *PT2 = (u_int64_t *)(xenstack + 2 * PAGE_SIZE);
+	int i;
+
+	if ((si == NULL) || (xenstack == 0)) {
+		HYPERVISOR_shutdown(SHUTDOWN_crash);
+	}
+
+	/* We use 3 pages of xen stack for the boot pagetables */
+	physfree = xenstack + 3 * PAGE_SIZE - KERNBASE;
+
+	/* Setup Xen global variables */
+	HYPERVISOR_start_info = si;
+	HYPERVISOR_shared_info =
+		(shared_info_t *)(si->shared_info + KERNBASE);
+
+	/*
+	 * Setup some misc global variables for Xen devices
+	 *
+	 * XXX: devices that need this specific variables should
+	 *      be rewritten to fetch this info by themselves from the
+	 *      start_info page.
+	 */
+	xen_store = (struct xenstore_domain_interface *)
+	            (ptoa(si->store_mfn) + KERNBASE);
+
+	xen_domain_type = XEN_PV_DOMAIN;
+	vm_guest = VM_GUEST_XEN;
+
+	/*
+	 * Use the stack Xen gives us to build the page tables
+	 * as native FreeBSD expects to find them (created
+	 * by the boot trampoline).
+	 */
+	for (i = 0; i < 512; i++) {
+		/* Each slot of the level 4 pages points to the same level 3 page */
+		PT4[i] = ((u_int64_t)&PT3[0]) - KERNBASE;
+		PT4[i] |= PG_V | PG_RW | PG_U;
+
+		/* Each slot of the level 3 pages points to the same level 2 page */
+		PT3[i] = ((u_int64_t)&PT2[0]) - KERNBASE;
+		PT3[i] |= PG_V | PG_RW | PG_U;
+
+		/* The level 2 page slots are mapped with 2MB pages for 1GB. */
+		PT2[i] = i * (2 * 1024 * 1024);
+		PT2[i] |= PG_V | PG_RW | PG_PS | PG_U;
+	}
+	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
+
+	/* Now we can jump into the native init function */
+	return (hammer_time(0, physfree));
+}
diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index 87644e9..c7474d8 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -51,6 +51,10 @@
 void force_evtchn_callback(void);
 
 extern shared_info_t *HYPERVISOR_shared_info;
+extern start_info_t *HYPERVISOR_start_info;
+
+/* XXX: we need to get rid of this and use HYPERVISOR_start_info directly */
+extern struct xenstore_domain_interface *xen_store;
 
 enum xen_domain_type {
 	XEN_NATIVE,             /* running on bare hardware    */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:01:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35UD-00035m-6z; Tue, 14 Jan 2014 15:01:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TL-00032M-Gb
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:07 +0000
Received: from [193.109.254.147:57585] by server-15.bemta-14.messagelabs.com
	id 4B/0C-22186-6F055D25; Tue, 14 Jan 2014 15:00:06 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389711603!10793030!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13453 invoked from network); 14 Jan 2014 15:00:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694666"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:47 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T1-0006J6-TZ;
	Tue, 14 Jan 2014 14:59:47 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:27 +0100
Message-ID: <1389711582-66908-6-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 05/20] xen: rework xen timer so it can be
	used early in boot process
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This should not introduce any functional change; it makes these functions
suitable for being called before the vcpu_info struct has actually been
mapped on a per-CPU basis.
---
 sys/dev/xen/timer/timer.c |   29 ++++++++++++++++++++---------
 1 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/sys/dev/xen/timer/timer.c b/sys/dev/xen/timer/timer.c
index 354085b..b2f6bcd 100644
--- a/sys/dev/xen/timer/timer.c
+++ b/sys/dev/xen/timer/timer.c
@@ -230,22 +230,22 @@ xen_fetch_vcpu_tinfo(struct vcpu_time_info *dst, struct vcpu_time_info *src)
 /**
  * \brief Get the current time, in nanoseconds, since the hypervisor booted.
  *
+ * \param vcpu		vcpu_info structure to fetch the time from.
+ *
  * \note This function returns the current CPU's idea of this value, unless
  *       it happens to be less than another CPU's previously determined value.
  */
 static uint64_t
-xen_fetch_vcpu_time(void)
+xen_fetch_vcpu_time(struct vcpu_info *vcpu)
 {
 	struct vcpu_time_info dst;
 	struct vcpu_time_info *src;
 	uint32_t pre_version;
 	uint64_t now;
 	volatile uint64_t last;
-	struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);
 
 	src = &vcpu->time;
 
-	critical_enter();
 	do {
 		pre_version = xen_fetch_vcpu_tinfo(&dst, src);
 		barrier();
@@ -266,16 +266,19 @@ xen_fetch_vcpu_time(void)
 		}
 	} while (!atomic_cmpset_64(&xen_timer_last_time, last, now));
 
-	critical_exit();
-
 	return (now);
 }
 
 static uint32_t
 xentimer_get_timecount(struct timecounter *tc)
 {
+	uint32_t xen_time;
 
-	return ((uint32_t)xen_fetch_vcpu_time() & UINT_MAX);
+	critical_enter();
+	xen_time = (uint32_t)xen_fetch_vcpu_time(DPCPU_GET(vcpu_info)) & UINT_MAX;
+	critical_exit();
+
+	return (xen_time);
 }
 
 /**
@@ -305,7 +308,12 @@ xen_fetch_wallclock(struct timespec *ts)
 static void
 xen_fetch_uptime(struct timespec *ts)
 {
-	uint64_t uptime = xen_fetch_vcpu_time();
+	uint64_t uptime;
+
+	critical_enter();
+	uptime = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info));
+	critical_exit();
+
 	ts->tv_sec = uptime / NSEC_IN_SEC;
 	ts->tv_nsec = uptime % NSEC_IN_SEC;
 }
@@ -354,7 +362,7 @@ xentimer_intr(void *arg)
 	struct xentimer_softc *sc = (struct xentimer_softc *)arg;
 	struct xentimer_pcpu_data *pcpu = DPCPU_PTR(xentimer_pcpu);
 
-	pcpu->last_processed = xen_fetch_vcpu_time();
+	pcpu->last_processed = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info));
 	if (pcpu->timer != 0 && sc->et.et_active)
 		sc->et.et_event_cb(&sc->et, sc->et.et_arg);
 
@@ -415,7 +423,10 @@ xentimer_et_start(struct eventtimer *et,
 	do {
 		if (++i == 60)
 			panic("can't schedule timer");
-		next_time = xen_fetch_vcpu_time() + first_in_ns;
+		critical_enter();
+		next_time = xen_fetch_vcpu_time(DPCPU_GET(vcpu_info)) +
+		            first_in_ns;
+		critical_exit();
 		error = xentimer_vcpu_start_timer(cpu, next_time);
 	} while (error == -ETIME);
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:01:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35UX-00038Q-6j; Tue, 14 Jan 2014 15:01:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TN-00032S-UR
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:10 +0000
Received: from [85.158.143.35:2485] by server-1.bemta-4.messagelabs.com id
	BB/77-02132-7F055D25; Tue, 14 Jan 2014 15:00:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389711604!11680901!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8518 invoked from network); 14 Jan 2014 15:00:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694686"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:48 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T2-0006J6-V2;
	Tue, 14 Jan 2014 14:59:49 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:29 +0100
Message-ID: <1389711582-66908-8-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 07/20] xen: implement hook to fetch e820
	memory map
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/amd64/machdep.c   |   50 ++++++++++++++++++++++++++----------------
 sys/amd64/include/pc/bios.h |    2 +
 sys/amd64/include/sysarch.h |    1 +
 sys/x86/xen/pv.c            |   25 +++++++++++++++++++++
 4 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index b8d6dc2..64df89a 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -169,11 +169,15 @@ SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
 /* Preload data parse function */
 static caddr_t native_parse_preload_data(u_int64_t);
 
+/* Native function to fetch and parse the e820 map */
+static void native_parse_memmap(caddr_t, vm_paddr_t *, int *);
+
 /* Default init_ops implementation. */
 struct init_ops init_ops = {
 	.parse_preload_data =	native_parse_preload_data,
 	.early_delay_init =	i8254_init,
 	.early_delay =		i8254_delay,
+	.parse_memmap =		native_parse_memmap,
 };
 
 /*
@@ -1403,21 +1407,12 @@ add_physmap_entry(uint64_t base, uint64_t length, vm_paddr_t *physmap,
 	return (1);
 }
 
-static void
-add_smap_entries(struct bios_smap *smapbase, vm_paddr_t *physmap,
-    int *physmap_idx)
+void
+bios_add_smap_entries(struct bios_smap *smapbase, u_int32_t smapsize,
+                      vm_paddr_t *physmap, int *physmap_idx)
 {
 	struct bios_smap *smap, *smapend;
-	u_int32_t smapsize;
 
-	/*
-	 * Memory map from INT 15:E820.
-	 *
-	 * subr_module.c says:
-	 * "Consumer may safely assume that size value precedes data."
-	 * ie: an int32_t immediately precedes smap.
-	 */
-	smapsize = *((u_int32_t *)smapbase - 1);
 	smapend = (struct bios_smap *)((uintptr_t)smapbase + smapsize);
 
 	for (smap = smapbase; smap < smapend; smap++) {
@@ -1434,6 +1429,29 @@ add_smap_entries(struct bios_smap *smapbase, vm_paddr_t *physmap,
 	}
 }
 
+static void
+native_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct bios_smap *smap;
+	u_int32_t size;
+
+	/*
+	 * Memory map from INT 15:E820.
+	 *
+	 * subr_module.c says:
+	 * "Consumer may safely assume that size value precedes data."
+	 * ie: an int32_t immediately precedes smap.
+	 */
+
+	smap = (struct bios_smap *)preload_search_info(kmdp,
+	    MODINFO_METADATA | MODINFOMD_SMAP);
+	if (smap == NULL)
+		panic("No BIOS smap info from loader!");
+	size = *((u_int32_t *)smap - 1);
+
+	bios_add_smap_entries(smap, size, physmap, physmap_idx);
+}
+
 /*
  * Populate the (physmap) array with base/bound pairs describing the
  * available physical memory in the system, then test this memory and
@@ -1451,19 +1469,13 @@ getmemsize(caddr_t kmdp, u_int64_t first)
 	vm_paddr_t pa, physmap[PHYSMAP_SIZE];
 	u_long physmem_start, physmem_tunable, memtest;
 	pt_entry_t *pte;
-	struct bios_smap *smapbase;
 	quad_t dcons_addr, dcons_size;
 
 	bzero(physmap, sizeof(physmap));
 	basemem = 0;
 	physmap_idx = 0;
 
-	smapbase = (struct bios_smap *)preload_search_info(kmdp,
-	    MODINFO_METADATA | MODINFOMD_SMAP);
-	if (smapbase == NULL)
-		panic("No BIOS smap info from loader!");
-
-	add_smap_entries(smapbase, physmap, &physmap_idx);
+	init_ops.parse_memmap(kmdp, physmap, &physmap_idx);
 
 	/*
 	 * Find the 'base memory' segment for SMP
diff --git a/sys/amd64/include/pc/bios.h b/sys/amd64/include/pc/bios.h
index e7d568e..95ef703 100644
--- a/sys/amd64/include/pc/bios.h
+++ b/sys/amd64/include/pc/bios.h
@@ -106,6 +106,8 @@ struct bios_oem {
 int	bios_oem_strings(struct bios_oem *oem, u_char *buffer, size_t maxlen);
 uint32_t bios_sigsearch(uint32_t start, u_char *sig, int siglen, int paralen,
 	    int sigofs);
+void bios_add_smap_entries(struct bios_smap *smapbase, u_int32_t smapsize,
+	    vm_paddr_t *physmap, int *physmap_idx);
 #endif
 
 #endif /* _MACHINE_PC_BIOS_H_ */
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 60fa635..084223e 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -15,6 +15,7 @@ struct init_ops {
 	caddr_t	(*parse_preload_data)(u_int64_t);
 	void	(*early_delay_init)(void);
 	void	(*early_delay)(int);
+	void	(*parse_memmap)(caddr_t, vm_paddr_t *, int *);
 };
 
 extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 0ec4b54..d11bc1a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -48,6 +48,7 @@ __FBSDID("$FreeBSD$");
 
 #include <machine/sysarch.h>
 #include <machine/clock.h>
+#include <machine/pc/bios.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -57,8 +58,11 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+#define MAX_E820_ENTRIES	128
+
 /*--------------------------- Forward Declarations ---------------------------*/
 static caddr_t xen_pv_parse_preload_data(u_int64_t);
+static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
 
 static void xen_pv_set_init_ops(void);
 
@@ -68,6 +72,7 @@ struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
 	.early_delay_init =	xen_delay_init,
 	.early_delay =		xen_delay,
+	.parse_memmap =		xen_pv_parse_memmap,
 };
 
 static struct
@@ -88,6 +93,8 @@ static struct
 	{NULL,	0}
 };
 
+static struct bios_smap xen_smap[MAX_E820_ENTRIES];
+
 /*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
@@ -201,6 +208,24 @@ xen_pv_parse_preload_data(u_int64_t modulep)
 }
 
 static void
+xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct xen_memory_map memmap;
+	u_int32_t size;
+	int rc;
+
+	/* Fetch the E820 map from Xen */
+	memmap.nr_entries = MAX_E820_ENTRIES;
+	set_xen_guest_handle(memmap.buffer, xen_smap);
+	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+	if (rc)
+		panic("unable to fetch Xen E820 memory map");
+	size = memmap.nr_entries * sizeof(xen_smap[0]);
+
+	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
+}
+
+static void
 xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 	{NULL,	0}
 };
 
+static struct bios_smap xen_smap[MAX_E820_ENTRIES];
+
 /*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
@@ -201,6 +208,24 @@ xen_pv_parse_preload_data(u_int64_t modulep)
 }
 
 static void
+xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
+{
+	struct xen_memory_map memmap;
+	u_int32_t size;
+	int rc;
+
+	/* Fetch the E820 map from Xen */
+	memmap.nr_entries = MAX_E820_ENTRIES;
+	set_xen_guest_handle(memmap.buffer, xen_smap);
+	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+	if (rc)
+		panic("unable to fetch Xen E820 memory map");
+	size = memmap.nr_entries * sizeof(xen_smap[0]);
+
+	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
+}
+
+static void
 xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Un-0003BK-2n; Tue, 14 Jan 2014 15:01:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TO-00032X-Pg
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:11 +0000
Received: from [193.109.254.147:41426] by server-2.bemta-14.messagelabs.com id
	DC/77-00361-7F055D25; Tue, 14 Jan 2014 15:00:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389711603!10793030!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13638 invoked from network); 14 Jan 2014 15:00:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694676"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:48 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T2-0006J6-Dt;
	Tue, 14 Jan 2014 14:59:48 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:28 +0100
Message-ID: <1389711582-66908-7-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 06/20] xen: implement an early timer for Xen
	PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When running as a PVH guest, there's no emulated i8254, so we need to
use the Xen PV timer as the early source for DELAY. This change allows
different implementations of the early DELAY function and adds a Xen
variant for it.
---
 sys/amd64/amd64/machdep.c   |    6 ++-
 sys/amd64/include/clock.h   |    5 ++
 sys/amd64/include/sysarch.h |    2 +
 sys/conf/files.amd64        |    1 +
 sys/conf/files.i386         |    1 +
 sys/dev/xen/timer/timer.c   |   33 +++++++++++++
 sys/i386/include/clock.h    |    5 ++
 sys/x86/isa/clock.c         |   53 +--------------------
 sys/x86/x86/delay.c         |  112 +++++++++++++++++++++++++++++++++++++++++++
 sys/x86/xen/pv.c            |    3 +
 10 files changed, 167 insertions(+), 54 deletions(-)
 create mode 100644 sys/x86/x86/delay.c

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index 343f9b8..b8d6dc2 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -172,6 +172,8 @@ static caddr_t native_parse_preload_data(u_int64_t);
 /* Default init_ops implementation. */
 struct init_ops init_ops = {
 	.parse_preload_data =	native_parse_preload_data,
+	.early_delay_init =	i8254_init,
+	.early_delay =		i8254_delay,
 };
 
 /*
@@ -1822,10 +1824,10 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	lidt(&r_idt);
 
 	/*
-	 * Initialize the i8254 before the console so that console
+	 * Initialize the early delay before the console so that console
 	 * initialization can use DELAY().
 	 */
-	i8254_init();
+	init_ops.early_delay_init();
 
 	/*
 	 * Initialize the console before we print anything out.
diff --git a/sys/amd64/include/clock.h b/sys/amd64/include/clock.h
index d7f7d82..ac8818f 100644
--- a/sys/amd64/include/clock.h
+++ b/sys/amd64/include/clock.h
@@ -25,6 +25,11 @@ extern int	smp_tsc;
 #endif
 
 void	i8254_init(void);
+void	i8254_delay(int);
+#ifdef XENHVM
+void	xen_delay_init(void);
+void	xen_delay(int);
+#endif
 
 /*
  * Driver to clock driver interface.
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 58ac8cd..60fa635 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -13,6 +13,8 @@
  */
 struct init_ops {
 	caddr_t	(*parse_preload_data)(u_int64_t);
+	void	(*early_delay_init)(void);
+	void	(*early_delay)(int);
 };
 
 extern struct init_ops init_ops;
diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index 16029d8..109a796 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -565,6 +565,7 @@ x86/x86/mptable_pci.c		optional	mptable pci
 x86/x86/msi.c			optional	pci
 x86/x86/nexus.c			standard
 x86/x86/tsc.c			standard
+x86/x86/delay.c			standard
 x86/xen/hvm.c			optional	xenhvm
 x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
diff --git a/sys/conf/files.i386 b/sys/conf/files.i386
index eb8697c..790296d 100644
--- a/sys/conf/files.i386
+++ b/sys/conf/files.i386
@@ -600,5 +600,6 @@ x86/x86/mptable_pci.c		optional apic native pci
 x86/x86/msi.c			optional apic pci
 x86/x86/nexus.c			standard
 x86/x86/tsc.c			standard
+x86/x86/delay.c			standard
 x86/xen/hvm.c			optional xenhvm
 x86/xen/xen_intr.c		optional xen | xenhvm
diff --git a/sys/dev/xen/timer/timer.c b/sys/dev/xen/timer/timer.c
index b2f6bcd..96372ab 100644
--- a/sys/dev/xen/timer/timer.c
+++ b/sys/dev/xen/timer/timer.c
@@ -59,6 +59,9 @@ __FBSDID("$FreeBSD$");
 #include <machine/_inttypes.h>
 #include <machine/smp.h>
 
+/* For the declaration of clock_lock */
+#include <isa/rtc.h>
+
 #include "clock_if.h"
 
 static devclass_t xentimer_devclass;
@@ -584,6 +587,36 @@ xentimer_suspend(device_t dev)
 	return (0);
 }
 
+/*
+ * Xen delay early init
+ */
+void xen_delay_init(void)
+{
+	/* Init the clock lock */
+	mtx_init(&clock_lock, "clk", NULL, MTX_SPIN | MTX_NOPROFILE);
+}
+/*
+ * Xen PV DELAY function
+ *
+ * When running in PVH mode we don't have an emulated i8254, so
+ * make use of the Xen time info to implement a simple DELAY
+ * function that can be used during early boot.
+ */
+void xen_delay(int n)
+{
+	uint64_t end_ns;
+	uint64_t current;
+
+	end_ns = xen_fetch_vcpu_time(&HYPERVISOR_shared_info->vcpu_info[0]);
+	end_ns += n * NSEC_IN_USEC;
+
+	for (;;) {
+		current = xen_fetch_vcpu_time(&HYPERVISOR_shared_info->vcpu_info[0]);
+		if (current >= end_ns)
+			break;
+	}
+}
+
 static device_method_t xentimer_methods[] = {
 	DEVMETHOD(device_identify, xentimer_identify),
 	DEVMETHOD(device_probe, xentimer_probe),
diff --git a/sys/i386/include/clock.h b/sys/i386/include/clock.h
index d980ec7..b831445 100644
--- a/sys/i386/include/clock.h
+++ b/sys/i386/include/clock.h
@@ -22,6 +22,11 @@ extern int	tsc_is_invariant;
 extern int	tsc_perf_stat;
 
 void	i8254_init(void);
+void	i8254_delay(int);
+#ifdef XENHVM
+void	xen_delay_init(void);
+void	xen_delay(int);
+#endif
 
 /*
  * Driver to clock driver interface.
diff --git a/sys/x86/isa/clock.c b/sys/x86/isa/clock.c
index a12e175..a5aed1c 100644
--- a/sys/x86/isa/clock.c
+++ b/sys/x86/isa/clock.c
@@ -247,61 +247,13 @@ getit(void)
 	return ((high << 8) | low);
 }
 
-#ifndef DELAYDEBUG
-static u_int
-get_tsc(__unused struct timecounter *tc)
-{
-
-	return (rdtsc32());
-}
-
-static __inline int
-delay_tc(int n)
-{
-	struct timecounter *tc;
-	timecounter_get_t *func;
-	uint64_t end, freq, now;
-	u_int last, mask, u;
-
-	tc = timecounter;
-	freq = atomic_load_acq_64(&tsc_freq);
-	if (tsc_is_invariant && freq != 0) {
-		func = get_tsc;
-		mask = ~0u;
-	} else {
-		if (tc->tc_quality <= 0)
-			return (0);
-		func = tc->tc_get_timecount;
-		mask = tc->tc_counter_mask;
-		freq = tc->tc_frequency;
-	}
-	now = 0;
-	end = freq * n / 1000000;
-	if (func == get_tsc)
-		sched_pin();
-	last = func(tc) & mask;
-	do {
-		cpu_spinwait();
-		u = func(tc) & mask;
-		if (u < last)
-			now += mask - last + u + 1;
-		else
-			now += u - last;
-		last = u;
-	} while (now < end);
-	if (func == get_tsc)
-		sched_unpin();
-	return (1);
-}
-#endif
-
 /*
  * Wait "n" microseconds.
  * Relies on timer 1 counting down from (i8254_freq / hz)
  * Note: timer had better have been programmed before this is first used!
  */
 void
-DELAY(int n)
+i8254_delay(int n)
 {
 	int delta, prev_tick, tick, ticks_left;
 #ifdef DELAYDEBUG
@@ -317,9 +269,6 @@ DELAY(int n)
 	}
 	if (state == 1)
 		printf("DELAY(%d)...", n);
-#else
-	if (delay_tc(n))
-		return;
 #endif
 	/*
 	 * Read the counter first, so that the rest of the setup overhead is
diff --git a/sys/x86/x86/delay.c b/sys/x86/x86/delay.c
new file mode 100644
index 0000000..d13c727
--- /dev/null
+++ b/sys/x86/x86/delay.c
@@ -0,0 +1,112 @@
+/*-
+ * Copyright (c) 1990 The Regents of the University of California.
+ * Copyright (c) 2010 Alexander Motin <mav@FreeBSD.org>
+ * All rights reserved.
+ *
+ * This code is derived from software contributed to Berkeley by
+ * William Jolitz and Don Ahn.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *	from: @(#)clock.c	7.2 (Berkeley) 5/12/91
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+/* Generic x86 routines to handle delay */
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/timetc.h>
+#include <sys/proc.h>
+#include <sys/kernel.h>
+#include <sys/sched.h>
+
+#include <machine/clock.h>
+#include <machine/cpu.h>
+#include <machine/sysarch.h>
+
+static u_int
+get_tsc(__unused struct timecounter *tc)
+{
+
+	return (rdtsc32());
+}
+
+static int
+delay_tc(int n)
+{
+	struct timecounter *tc;
+	timecounter_get_t *func;
+	uint64_t end, freq, now;
+	u_int last, mask, u;
+
+	tc = timecounter;
+	freq = atomic_load_acq_64(&tsc_freq);
+	if (tsc_is_invariant && freq != 0) {
+		func = get_tsc;
+		mask = ~0u;
+	} else {
+		if (tc->tc_quality <= 0)
+			return (0);
+		func = tc->tc_get_timecount;
+		mask = tc->tc_counter_mask;
+		freq = tc->tc_frequency;
+	}
+	now = 0;
+	end = freq * n / 1000000;
+	if (func == get_tsc)
+		sched_pin();
+	last = func(tc) & mask;
+	do {
+		cpu_spinwait();
+		u = func(tc) & mask;
+		if (u < last)
+			now += mask - last + u + 1;
+		else
+			now += u - last;
+		last = u;
+	} while (now < end);
+	if (func == get_tsc)
+		sched_unpin();
+	return (1);
+}
+
+#ifndef XEN
+void
+DELAY(int n)
+{
+
+	if (delay_tc(n))
+		return;
+
+#ifdef __amd64__
+	init_ops.early_delay(n);
+#else
+	i8254_delay(n);
+#endif
+}
+#endif
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 908b50b..0ec4b54 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -47,6 +47,7 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_param.h>
 
 #include <machine/sysarch.h>
+#include <machine/clock.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -65,6 +66,8 @@ static void xen_pv_set_init_ops(void);
 /* Xen init_ops implementation. */
 struct init_ops xen_init_ops = {
 	.parse_preload_data =	xen_pv_parse_preload_data,
+	.early_delay_init =	xen_delay_init,
+	.early_delay =		xen_delay,
 };
 
 static struct
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:01:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Us-0003DG-6F; Tue, 14 Jan 2014 15:01:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35TS-00032m-E9
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:15 +0000
Received: from [85.158.143.35:25691] by server-1.bemta-4.messagelabs.com id
	06/87-02132-7F055D25; Tue, 14 Jan 2014 15:00:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389711605!11680913!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8634 invoked from network); 14 Jan 2014 15:00:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92694698"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:49 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T3-0006J6-Fw;
	Tue, 14 Jan 2014 14:59:49 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:30 +0100
Message-ID: <1389711582-66908-9-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 08/20] xen: use the same hypercall mechanism
	for XEN and XENHVM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/include/xen/hypercall.h |    7 -------
 sys/i386/i386/locore.s            |    9 +++++++++
 sys/i386/include/xen/hypercall.h  |    8 --------
 sys/x86/xen/hvm.c                 |   24 ++++++++++--------------
 4 files changed, 19 insertions(+), 29 deletions(-)

diff --git a/sys/amd64/include/xen/hypercall.h b/sys/amd64/include/xen/hypercall.h
index a1b2a5c..499fb4d 100644
--- a/sys/amd64/include/xen/hypercall.h
+++ b/sys/amd64/include/xen/hypercall.h
@@ -51,15 +51,8 @@
 #define CONFIG_XEN_COMPAT	0x030002
 #define __must_check
 
-#ifdef XEN
 #define HYPERCALL_STR(name)					\
 	"call hypercall_page + ("STR(__HYPERVISOR_##name)" * 32)"
-#else
-#define HYPERCALL_STR(name)					\
-	"mov $("STR(__HYPERVISOR_##name)" * 32),%%eax; "\
-	"add hypercall_stubs(%%rip),%%rax; "			\
-	"call *%%rax"
-#endif
 
 #define _hypercall0(type, name)			\
 ({						\
diff --git a/sys/i386/i386/locore.s b/sys/i386/i386/locore.s
index 68cb430..bd136b1 100644
--- a/sys/i386/i386/locore.s
+++ b/sys/i386/i386/locore.s
@@ -898,3 +898,12 @@ done_pde:
 #endif
 
 	ret
+
+#ifdef XENHVM
+/* Xen Hypercall page */
+	.text
+.p2align PAGE_SHIFT, 0x90	/* Hypercall_page needs to be PAGE aligned */
+
+NON_GPROF_ENTRY(hypercall_page)
+	.skip	0x1000, 0x90	/* Fill with "nop"s */
+#endif
diff --git a/sys/i386/include/xen/hypercall.h b/sys/i386/include/xen/hypercall.h
index edc13f4..16b5ee2 100644
--- a/sys/i386/include/xen/hypercall.h
+++ b/sys/i386/include/xen/hypercall.h
@@ -39,16 +39,8 @@
 #define	ENOXENSYS	38
 #define CONFIG_XEN_COMPAT	0x030002
 
-
-#if defined(XEN)
 #define HYPERCALL_STR(name)                                     \
         "call hypercall_page + ("STR(__HYPERVISOR_##name)" * 32)"
-#else
-#define HYPERCALL_STR(name)                                     \
-        "mov hypercall_stubs,%%eax; "                           \
-        "add $("STR(__HYPERVISOR_##name)" * 32),%%eax; "        \
-        "call *%%eax"
-#endif
 
 #define _hypercall0(type, name)                 \
 ({                                              \
diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index b397721..9a0411e 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -157,7 +157,7 @@ DPCPU_DEFINE(xen_intr_handle_t, ipi_handle[nitems(xen_ipis)]);
 
 /*------------------ Hypervisor Access Shared Memory Regions -----------------*/
 /** Hypercall table accessed via HYPERVISOR_*_op() methods. */
-char *hypercall_stubs;
+extern char *hypercall_page;
 shared_info_t *HYPERVISOR_shared_info;
 start_info_t *HYPERVISOR_start_info;
 
@@ -559,7 +559,7 @@ xen_hvm_cpuid_base(void)
  * Allocate and fill in the hypcall page.
  */
 static int
-xen_hvm_init_hypercall_stubs(void)
+xen_hvm_init_hypercall_stubs(enum xen_hvm_init_type init_type)
 {
 	uint32_t base, regs[4];
 	int i;
@@ -568,7 +568,7 @@ xen_hvm_init_hypercall_stubs(void)
 	if (base == 0)
 		return (ENXIO);
 
-	if (hypercall_stubs == NULL) {
+	if (init_type == XEN_HVM_INIT_COLD) {
 		do_cpuid(base + 1, regs);
 		printf("XEN: Hypervisor version %d.%d detected.\n",
 		    regs[0] >> 16, regs[0] & 0xffff);
@@ -578,18 +578,9 @@ xen_hvm_init_hypercall_stubs(void)
 	 * Find the hypercall pages.
 	 */
 	do_cpuid(base + 2, regs);
-	
-	if (hypercall_stubs == NULL) {
-		size_t call_region_size;
-
-		call_region_size = regs[0] * PAGE_SIZE;
-		hypercall_stubs = malloc(call_region_size, M_XENHVM, M_NOWAIT);
-		if (hypercall_stubs == NULL)
-			panic("Unable to allocate Xen hypercall region");
-	}
 
 	for (i = 0; i < regs[0]; i++)
-		wrmsr(regs[1], vtophys(hypercall_stubs + i * PAGE_SIZE) + i);
+		wrmsr(regs[1], vtophys(&hypercall_page + i * PAGE_SIZE) + i);
 
 	return (0);
 }
@@ -692,7 +683,12 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 	if (init_type == XEN_HVM_INIT_CANCELLED_SUSPEND)
 		return;
 
-	error = xen_hvm_init_hypercall_stubs();
+	if (xen_pv_domain()) {
+		/* hypercall page is already set in the PV case */
+		error = 0;
+	} else {
+		error = xen_hvm_init_hypercall_stubs(init_type);
+	}
 
 	switch (init_type) {
 	case XEN_HVM_INIT_COLD:
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:01:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35Uw-0003Ep-ER; Tue, 14 Jan 2014 15:01:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35Tj-00033L-HB
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:32 +0000
Received: from [193.109.254.147:53516] by server-15.bemta-14.messagelabs.com
	id 76/9C-22186-70155D25; Tue, 14 Jan 2014 15:00:23 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389711620!10812538!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14653 invoked from network); 14 Jan 2014 15:00:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90575051"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:47 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T1-0006J6-Ck;
	Tue, 14 Jan 2014 14:59:47 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:26 +0100
Message-ID: <1389711582-66908-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 04/20] amd64: introduce hook for custom
	preload metadata parsers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/amd64/amd64/machdep.c   |   41 ++++++++++++++++------
 sys/amd64/include/sysarch.h |   12 ++++++
 sys/x86/xen/pv.c            |   82 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 124 insertions(+), 11 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index f0d4ea8..343f9b8 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -126,6 +126,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/reg.h>
 #include <machine/sigframe.h>
 #include <machine/specialreg.h>
+#include <machine/sysarch.h>
 #ifdef PERFMON
 #include <machine/perfmon.h>
 #endif
@@ -165,6 +166,14 @@ static int  set_fpcontext(struct thread *td, const mcontext_t *mcp,
     char *xfpustate, size_t xfpustate_len);
 SYSINIT(cpu, SI_SUB_CPU, SI_ORDER_FIRST, cpu_startup, NULL);
 
+/* Preload data parse function */
+static caddr_t native_parse_preload_data(u_int64_t);
+
+/* Default init_ops implementation. */
+struct init_ops init_ops = {
+	.parse_preload_data =	native_parse_preload_data,
+};
+
 /*
  * The file "conf/ldscript.amd64" defines the symbol "kernphys".  Its value is
  * the physical address at which the kernel is loaded.
@@ -1685,6 +1694,26 @@ do_next:
 	msgbufp = (struct msgbuf *)PHYS_TO_DMAP(phys_avail[pa_indx]);
 }
 
+static caddr_t
+native_parse_preload_data(u_int64_t modulep)
+{
+	caddr_t kmdp;
+
+	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
+	preload_bootstrap_relocate(KERNBASE);
+	kmdp = preload_search_by_type("elf kernel");
+	if (kmdp == NULL)
+		kmdp = preload_search_by_type("elf64 kernel");
+	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
+	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
+#ifdef DDB
+	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
+	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
+#endif
+
+	return (kmdp);
+}
+
 u_int64_t
 hammer_time(u_int64_t modulep, u_int64_t physfree)
 {
@@ -1709,17 +1738,7 @@ hammer_time(u_int64_t modulep, u_int64_t physfree)
 	 */
 	proc_linkup0(&proc0, &thread0);
 
-	preload_metadata = (caddr_t)(uintptr_t)(modulep + KERNBASE);
-	preload_bootstrap_relocate(KERNBASE);
-	kmdp = preload_search_by_type("elf kernel");
-	if (kmdp == NULL)
-		kmdp = preload_search_by_type("elf64 kernel");
-	boothowto = MD_FETCH(kmdp, MODINFOMD_HOWTO, int);
-	kern_envp = MD_FETCH(kmdp, MODINFOMD_ENVP, char *) + KERNBASE;
-#ifdef DDB
-	ksym_start = MD_FETCH(kmdp, MODINFOMD_SSYM, uintptr_t);
-	ksym_end = MD_FETCH(kmdp, MODINFOMD_ESYM, uintptr_t);
-#endif
+	kmdp = init_ops.parse_preload_data(modulep);
 
 	/* Init basic tunables, hz etc */
 	init_param1();
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index cd380d4..58ac8cd 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -4,3 +4,15 @@
 /* $FreeBSD$ */
 
 #include <x86/sysarch.h>
+
+/*
+ * Struct containing pointers to init functions whose
+ * implementation is run time selectable.  Selection can be made,
+ * for example, based on detection of a BIOS variant or
+ * hypervisor environment.
+ */
+struct init_ops {
+	caddr_t	(*parse_preload_data)(u_int64_t);
+};
+
+extern struct init_ops init_ops;
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index db3b7a3..908b50b 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -46,6 +46,8 @@ __FBSDID("$FreeBSD$");
 #include <vm/vm_pager.h>
 #include <vm/vm_param.h>
 
+#include <machine/sysarch.h>
+
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
 
@@ -54,6 +56,36 @@ extern u_int64_t hammer_time(u_int64_t, u_int64_t);
 /* Xen initial function */
 extern u_int64_t hammer_time_xen(start_info_t *, u_int64_t);
 
+/*--------------------------- Forward Declarations ---------------------------*/
+static caddr_t xen_pv_parse_preload_data(u_int64_t);
+
+static void xen_pv_set_init_ops(void);
+
+/*-------------------------------- Global Data -------------------------------*/
+/* Xen init_ops implementation. */
+struct init_ops xen_init_ops = {
+	.parse_preload_data =	xen_pv_parse_preload_data,
+};
+
+static struct
+{
+	const char	*ev;
+	int		mask;
+} howto_names[] = {
+	{"boot_askname",	RB_ASKNAME},
+	{"boot_single",		RB_SINGLE},
+	{"boot_nosync",		RB_NOSYNC},
+	{"boot_halt",		RB_ASKNAME},
+	{"boot_serial",		RB_SERIAL},
+	{"boot_cdrom",		RB_CDROM},
+	{"boot_gdb",		RB_GDB},
+	{"boot_gdb_pause",	RB_RESERVED1},
+	{"boot_verbose",	RB_VERBOSE},
+	{"boot_multicons",	RB_MULTIPLE},
+	{NULL,	0}
+};
+
+/*-------------------------------- Xen PV init -------------------------------*/
 /*
  * First function called by the Xen PVH boot sequence.
  *
@@ -118,6 +150,56 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	}
 	load_cr3(((u_int64_t)&PT4[0]) - KERNBASE);
 
+	/* Set the hooks for early functions that diverge from bare metal */
+	xen_pv_set_init_ops();
+
 	/* Now we can jump into the native init function */
 	return (hammer_time(0, physfree));
 }
+
+/*-------------------------------- PV specific -------------------------------*/
+/*
+ * Functions to convert the "extra" parameters passed by Xen
+ * into FreeBSD boot options (from the i386 Xen port).
+ */
+static char *
+xen_setbootenv(char *cmd_line)
+{
+	char *cmd_line_next;
+
+        /* Skip leading spaces */
+        for (; *cmd_line == ' '; cmd_line++);
+
+	for (cmd_line_next = cmd_line; strsep(&cmd_line_next, ",") != NULL;);
+	return (cmd_line);
+}
+
+static int
+xen_boothowto(char *envp)
+{
+	int i, howto = 0;
+
+	/* get equivalents from the environment */
+	for (i = 0; howto_names[i].ev != NULL; i++)
+		if (getenv(howto_names[i].ev) != NULL)
+			howto |= howto_names[i].mask;
+	return (howto);
+}
+
+static caddr_t
+xen_pv_parse_preload_data(u_int64_t modulep)
+{
+	/* Parse the extra boot information given by Xen */
+	if (HYPERVISOR_start_info->cmd_line)
+		kern_envp = xen_setbootenv(HYPERVISOR_start_info->cmd_line);
+	boothowto |= xen_boothowto(kern_envp);
+
+	return (NULL);
+}
+
+static void
+xen_pv_set_init_ops(void)
+{
+	/* Init ops for Xen PV */
+	init_ops = xen_init_ops;
+}
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:02:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:02:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35VF-0003Ju-2O; Tue, 14 Jan 2014 15:02:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35Tf-00033F-T5
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:37 +0000
Received: from [85.158.139.211:11281] by server-5.bemta-5.messagelabs.com id
	5B/1D-14928-50155D25; Tue, 14 Jan 2014 15:00:21 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389711618!9704937!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20013 invoked from network); 14 Jan 2014 15:00:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90575047"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:46 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T0-0006J6-SE;
	Tue, 14 Jan 2014 14:59:46 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:25 +0100
Message-ID: <1389711582-66908-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 03/20] xen: add and enable Xen console for
	PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds and enables the Xen console for PVH guests.
---
 sys/conf/files                     |    4 +-
 sys/dev/xen/console/console.c      |   37 +++++++++++++++++++++++++++++------
 sys/dev/xen/console/xencons_ring.c |   15 +++++++++----
 sys/i386/include/xen/xen-os.h      |    1 -
 sys/i386/xen/xen_machdep.c         |   17 ----------------
 sys/x86/xen/pv.c                   |    4 +++
 sys/xen/xen-os.h                   |    4 +++
 7 files changed, 50 insertions(+), 32 deletions(-)

diff --git a/sys/conf/files b/sys/conf/files
index 33fc75d..bddf021 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -2493,8 +2493,8 @@ dev/xe/if_xe_pccard.c		optional xe pccard
 dev/xen/balloon/balloon.c	optional xen | xenhvm
 dev/xen/blkfront/blkfront.c	optional xen | xenhvm
 dev/xen/blkback/blkback.c	optional xen | xenhvm
-dev/xen/console/console.c	optional xen
-dev/xen/console/xencons_ring.c	optional xen
+dev/xen/console/console.c	optional xen | xenhvm
+dev/xen/console/xencons_ring.c	optional xen | xenhvm
 dev/xen/control/control.c	optional xen | xenhvm
 dev/xen/netback/netback.c	optional xen | xenhvm
 dev/xen/netfront/netfront.c	optional xen | xenhvm
diff --git a/sys/dev/xen/console/console.c b/sys/dev/xen/console/console.c
index 23eaee2..899dffc 100644
--- a/sys/dev/xen/console/console.c
+++ b/sys/dev/xen/console/console.c
@@ -69,11 +69,14 @@ struct mtx              cn_mtx;
 static char wbuf[WBUF_SIZE];
 static char rbuf[RBUF_SIZE];
 static int rc, rp;
-static unsigned int cnsl_evt_reg;
+unsigned int cnsl_evt_reg;
 static unsigned int wc, wp; /* write_cons, write_prod */
 xen_intr_handle_t xen_intr_handle;
 device_t xencons_dev;
 
+/* Virtual address of the shared console page */
+char *console_page;
+
 #ifdef KDB
 static int	xc_altbrk;
 #endif
@@ -110,9 +113,26 @@ static struct ttydevsw xc_ttydevsw = {
         .tsw_outwakeup	= xcoutwakeup,
 };
 
+/*----------------------------- Debug function -------------------------------*/
+#define XC_PRINTF_BUFSIZE 1024
+void
+xc_printf(const char *fmt, ...)
+{
+	static char buf[XC_PRINTF_BUFSIZE];
+	__va_list ap;
+
+	va_start(ap, fmt);
+	vsnprintf(buf, sizeof(buf), fmt, ap);
+	va_end(ap);
+	HYPERVISOR_console_write(buf, strlen(buf));
+}
+
 static void
 xc_cnprobe(struct consdev *cp)
 {
+	if (!xen_pv_domain())
+		return;
+
 	cp->cn_pri = CN_REMOTE;
 	sprintf(cp->cn_name, "%s0", driver_name);
 }
@@ -175,7 +195,7 @@ static void
 xc_cnputc(struct consdev *dev, int c)
 {
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		xc_cnputc_dom0(dev, c);
 	else
 		xc_cnputc_domu(dev, c);
@@ -206,8 +226,7 @@ xcons_putc(int c)
 		xcons_force_flush();
 #endif	    	
 	}
-	if (cnsl_evt_reg)
-		__xencons_tx_flush();
+	__xencons_tx_flush();
 	
 	/* inform start path that we're pretty full */
 	return ((wp - wc) >= WBUF_SIZE - 100) ? TRUE : FALSE;
@@ -217,6 +236,10 @@ static void
 xc_identify(driver_t *driver, device_t parent)
 {
 	device_t child;
+
+	if (!xen_pv_domain())
+		return;
+
 	child = BUS_ADD_CHILD(parent, 0, driver_name, 0);
 	device_set_driver(child, driver);
 	device_set_desc(child, "Xen Console");
@@ -245,7 +268,7 @@ xc_attach(device_t dev)
 	cnsl_evt_reg = 1;
 	callout_reset(&xc_callout, XC_POLLTIME, xc_timeout, xccons);
     
-	if (xen_start_info->flags & SIF_INITDOMAIN) {
+	if (xen_initial_domain()) {
 		error = xen_intr_bind_virq(dev, VIRQ_CONSOLE, 0, NULL,
 		                           xencons_priv_interrupt, NULL,
 		                           INTR_TYPE_TTY, &xen_intr_handle);
@@ -309,7 +332,7 @@ __xencons_tx_flush(void)
 		sz = wp - wc;
 		if (sz > (WBUF_SIZE - WBUF_MASK(wc)))
 			sz = WBUF_SIZE - WBUF_MASK(wc);
-		if (xen_start_info->flags & SIF_INITDOMAIN) {
+		if (xen_initial_domain()) {
 			HYPERVISOR_console_io(CONSOLEIO_write, sz, &wbuf[WBUF_MASK(wc)]);
 			wc += sz;
 		} else {
@@ -424,7 +447,7 @@ xcons_force_flush(void)
 {
 	int        sz;
 
-	if (xen_start_info->flags & SIF_INITDOMAIN)
+	if (xen_initial_domain())
 		return;
 
 	/* Spin until console data is flushed through to the domain controller. */
diff --git a/sys/dev/xen/console/xencons_ring.c b/sys/dev/xen/console/xencons_ring.c
index 3701551..d826363 100644
--- a/sys/dev/xen/console/xencons_ring.c
+++ b/sys/dev/xen/console/xencons_ring.c
@@ -32,9 +32,9 @@ __FBSDID("$FreeBSD$");
 
 #define console_evtchn	console.domU.evtchn
 xen_intr_handle_t console_handle;
-extern char *console_page;
 extern struct mtx              cn_mtx;
 extern device_t xencons_dev;
+extern unsigned int cnsl_evt_reg;
 
 static inline struct xencons_interface *
 xencons_interface(void)
@@ -60,6 +60,8 @@ xencons_ring_send(const char *data, unsigned len)
 	struct xencons_interface *intf; 
 	XENCONS_RING_IDX cons, prod;
 	int sent;
+	struct evtchn_send send = { .port =
+	                            HYPERVISOR_start_info->console_evtchn };
 
 	intf = xencons_interface();
 	cons = intf->out_cons;
@@ -76,7 +78,10 @@ xencons_ring_send(const char *data, unsigned len)
 	wmb();
 	intf->out_prod = prod;
 
-	xen_intr_signal(console_handle);
+	if (cnsl_evt_reg)
+		xen_intr_signal(console_handle);
+	else
+		HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
 
 	return sent;
 
@@ -125,11 +130,11 @@ xencons_ring_init(void)
 {
 	int err;
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return 0;
 
 	err = xen_intr_bind_local_port(xencons_dev,
-	    xen_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
+	    HYPERVISOR_start_info->console_evtchn, NULL, xencons_handle_input, NULL,
 	    INTR_TYPE_MISC | INTR_MPSAFE, &console_handle);
 	if (err) {
 		return err;
@@ -145,7 +150,7 @@ void
 xencons_suspend(void)
 {
 
-	if (!xen_start_info->console_evtchn)
+	if (!HYPERVISOR_start_info->console_evtchn)
 		return;
 
 	xen_intr_unbind(&console_handle);
diff --git a/sys/i386/include/xen/xen-os.h b/sys/i386/include/xen/xen-os.h
index a8fba61..3d1ef04 100644
--- a/sys/i386/include/xen/xen-os.h
+++ b/sys/i386/include/xen/xen-os.h
@@ -45,7 +45,6 @@ static inline void rep_nop(void)
 #define cpu_relax() rep_nop()
 
 #ifndef XENHVM
-void xc_printf(const char *fmt, ...);
 
 #ifdef SMP
 extern int gdtset;
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index fd575ee..09c01f1 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -186,21 +186,6 @@ xen_boothowto(char *envp)
 	return howto;
 }
 
-#define XC_PRINTF_BUFSIZE 1024
-void
-xc_printf(const char *fmt, ...)
-{
-        __va_list ap;
-        int retval;
-        static char buf[XC_PRINTF_BUFSIZE];
-
-        va_start(ap, fmt);
-        retval = vsnprintf(buf, XC_PRINTF_BUFSIZE - 1, fmt, ap);
-        va_end(ap);
-        buf[retval] = 0;
-        (void)HYPERVISOR_console_write(buf, retval);
-}
-
 
 #define XPQUEUE_SIZE 128
 
@@ -745,8 +730,6 @@ void initvalues(start_info_t *startinfo);
 struct xenstore_domain_interface;
 extern struct xenstore_domain_interface *xen_store;
 
-char *console_page;
-
 void *
 bootmem_alloc(unsigned int size) 
 {
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 5571ecf..db3b7a3 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -70,9 +70,12 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	int i;
 
 	if ((si == NULL) || (xenstack == 0)) {
+		xc_printf("ERROR: invalid start_info or xen stack, halting\n");
 		HYPERVISOR_shutdown(SHUTDOWN_crash);
 	}
 
+	xc_printf("FreeBSD PVH running on %s\n", si->magic);
+
 	/* We use 3 pages of xen stack for the boot pagetables */
 	physfree = xenstack + 3 * PAGE_SIZE - KERNBASE;
 
@@ -90,6 +93,7 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 	 */
 	xen_store = (struct xenstore_domain_interface *)
 	            (ptoa(si->store_mfn) + KERNBASE);
+	console_page = (char *)(ptoa(si->console.domU.mfn) + KERNBASE);
 
 	xen_domain_type = XEN_PV_DOMAIN;
 	vm_guest = VM_GUEST_XEN;
diff --git a/sys/xen/xen-os.h b/sys/xen/xen-os.h
index e8a5a99..b1aa0a9 100644
--- a/sys/xen/xen-os.h
+++ b/sys/xen/xen-os.h
@@ -55,6 +55,7 @@ extern start_info_t *HYPERVISOR_start_info;
 
 /* XXX: we need to get rid of this and use HYPERVISOR_start_info directly */
 extern struct xenstore_domain_interface *xen_store;
+extern char *console_page;
 
 enum xen_domain_type {
 	XEN_NATIVE,             /* running on bare hardware    */
@@ -89,6 +90,9 @@ xen_initial_domain(void)
 	        HYPERVISOR_start_info->flags & SIF_INITDOMAIN);
 }
 
+/* Debug function, prints directly to hypervisor console */
+void xc_printf(const char *, ...) __printflike(1, 2);
+
 #ifndef xen_mb
 #define xen_mb() mb()
 #endif
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:02:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35VG-0003Lk-KO; Tue, 14 Jan 2014 15:02:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35To-00033J-KO
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:37 +0000
Received: from [85.158.139.211:22739] by server-14.bemta-5.messagelabs.com id
	1B/47-24200-60155D25; Tue, 14 Jan 2014 15:00:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389711618!9704937!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20335 invoked from network); 14 Jan 2014 15:00:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90575064"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:49 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T4-0006J6-0S;
	Tue, 14 Jan 2014 14:59:50 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:31 +0100
Message-ID: <1389711582-66908-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 09/20] xen: add a apic_enumerator for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/conf/files.amd64     |    1 +
 sys/x86/xen/pvcpu_enum.c |  136 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 137 insertions(+), 0 deletions(-)
 create mode 100644 sys/x86/xen/pvcpu_enum.c

diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index 109a796..a3491da 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -569,3 +569,4 @@ x86/x86/delay.c			standard
 x86/xen/hvm.c			optional	xenhvm
 x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
+x86/xen/pvcpu_enum.c		optional	xenhvm
diff --git a/sys/x86/xen/pvcpu_enum.c b/sys/x86/xen/pvcpu_enum.c
new file mode 100644
index 0000000..0384886
--- /dev/null
+++ b/sys/x86/xen/pvcpu_enum.c
@@ -0,0 +1,136 @@
+/*-
+ * Copyright (c) 2003 John Baldwin <jhb@FreeBSD.org>
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the author nor the names of any co-contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/smp.h>
+#include <sys/pcpu.h>
+#include <vm/vm.h>
+#include <vm/pmap.h>
+
+#include <machine/intr_machdep.h>
+#include <machine/apicvar.h>
+
+#include <machine/cpu.h>
+#include <machine/smp.h>
+
+#include <xen/xen-os.h>
+#include <xen/hypervisor.h>
+
+#include <xen/interface/vcpu.h>
+
+static int xenpv_probe(void);
+static int xenpv_probe_cpus(void);
+static int xenpv_setup_local(void);
+static int xenpv_setup_io(void);
+
+static struct apic_enumerator xenpv_enumerator = {
+	"Xen PV",
+	xenpv_probe,
+	xenpv_probe_cpus,
+	xenpv_setup_local,
+	xenpv_setup_io
+};
+
+/*
+ * This enumerator will only be registered on PVH
+ */
+static int
+xenpv_probe(void)
+{
+	return (-100);
+}
+
+/*
+ * Test each possible vCPU in order to find the number of vCPUs
+ */
+static int
+xenpv_probe_cpus(void)
+{
+#ifdef SMP
+	int i, ret;
+
+	for (i = 0; i < MAXCPU; i++) {
+		ret = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
+		if (ret >= 0)
+			cpu_add((i * 2), (i == 0));
+	}
+#endif
+	return (0);
+}
+
+/*
+ * Initialize the vCPU id of the BSP
+ */
+static int
+xenpv_setup_local(void)
+{
+	PCPU_SET(vcpu_id, 0);
+	return (0);
+}
+
+/*
+ * On PVH guests there's no IO APIC
+ */
+static int
+xenpv_setup_io(void)
+{
+	return (0);
+}
+
+static void
+xenpv_register(void *dummy __unused)
+{
+	if (xen_pv_domain()) {
+		apic_register_enumerator(&xenpv_enumerator);
+	}
+}
+SYSINIT(xenpv_register, SI_SUB_TUNABLES - 1, SI_ORDER_FIRST, xenpv_register, NULL);
+
+/*
+ * Setup per-CPU vCPU IDs
+ */
+static void
+xenpv_set_ids(void *dummy)
+{
+	struct pcpu *pc;
+	int i;
+
+	CPU_FOREACH(i) {
+		pc = pcpu_find(i);
+		pc->pc_vcpu_id = i;
+	}
+}
+SYSINIT(xenpv_set_ids, SI_SUB_CPU, SI_ORDER_MIDDLE, xenpv_set_ids, NULL);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:02:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35VG-0003Lk-KO; Tue, 14 Jan 2014 15:02:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35To-00033J-KO
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:00:37 +0000
Received: from [85.158.139.211:22739] by server-14.bemta-5.messagelabs.com id
	1B/47-24200-60155D25; Tue, 14 Jan 2014 15:00:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389711618!9704937!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20335 invoked from network); 14 Jan 2014 15:00:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:00:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90575064"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 14:59:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 09:59:49 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T4-0006J6-0S;
	Tue, 14 Jan 2014 14:59:50 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:31 +0100
Message-ID: <1389711582-66908-10-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_09/20=5D_xen=3A_add_a_apic=5Fe?=
	=?utf-8?q?numerator_for_PVH?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

LS0tCiBzeXMvY29uZi9maWxlcy5hbWQ2NCAgICAgfCAgICAxICsKIHN5cy94ODYveGVuL3B2Y3B1
X2VudW0uYyB8ICAxMzYgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKwogMiBmaWxlcyBjaGFuZ2VkLCAxMzcgaW5zZXJ0aW9ucygrKSwgMCBkZWxldGlvbnMoLSkK
IGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMveDg2L3hlbi9wdmNwdV9lbnVtLmMKCmRpZmYgLS1naXQg
YS9zeXMvY29uZi9maWxlcy5hbWQ2NCBiL3N5cy9jb25mL2ZpbGVzLmFtZDY0CmluZGV4IDEwOWE3
OTYuLmEzNDkxZGEgMTAwNjQ0Ci0tLSBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0CisrKyBiL3N5cy9j
b25mL2ZpbGVzLmFtZDY0CkBAIC01NjksMyArNTY5LDQgQEAgeDg2L3g4Ni9kZWxheS5jCQkJc3Rh
bmRhcmQKIHg4Ni94ZW4vaHZtLmMJCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIu
YwkJb3B0aW9uYWwJeGVuIHwgeGVuaHZtCiB4ODYveGVuL3B2LmMJCQlvcHRpb25hbAl4ZW5odm0K
K3g4Ni94ZW4vcHZjcHVfZW51bS5jCQlvcHRpb25hbAl4ZW5odm0KZGlmZiAtLWdpdCBhL3N5cy94
ODYveGVuL3B2Y3B1X2VudW0uYyBiL3N5cy94ODYveGVuL3B2Y3B1X2VudW0uYwpuZXcgZmlsZSBt
b2RlIDEwMDY0NAppbmRleCAwMDAwMDAwLi4wMzg0ODg2Ci0tLSAvZGV2L251bGwKKysrIGIvc3lz
L3g4Ni94ZW4vcHZjcHVfZW51bS5jCkBAIC0wLDAgKzEsMTM2IEBACisvKi0KKyAqIENvcHlyaWdo
dCAoYykgMjAwMyBKb2huIEJhbGR3aW4gPGpoYkBGcmVlQlNELm9yZz4KKyAqIENvcHlyaWdodCAo
YykgMjAxMyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KKyAqIEFsbCBy
aWdodHMgcmVzZXJ2ZWQuCisgKgorICogUmVkaXN0cmlidXRpb24gYW5kIHVzZSBpbiBzb3VyY2Ug
YW5kIGJpbmFyeSBmb3Jtcywgd2l0aCBvciB3aXRob3V0CisgKiBtb2RpZmljYXRpb24sIGFyZSBw
ZXJtaXR0ZWQgcHJvdmlkZWQgdGhhdCB0aGUgZm9sbG93aW5nIGNvbmRpdGlvbnMKKyAqIGFyZSBt
ZXQ6CisgKiAxLiBSZWRpc3RyaWJ1dGlvbnMgb2Ygc291cmNlIGNvZGUgbXVzdCByZXRhaW4gdGhl
IGFib3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBh
bmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyLgorICogMi4gUmVkaXN0cmlidXRpb25zIGluIGJp
bmFyeSBmb3JtIG11c3QgcmVwcm9kdWNlIHRoZSBhYm92ZSBjb3B5cmlnaHQKKyAqICAgIG5vdGlj
ZSwgdGhpcyBsaXN0IG9mIGNvbmRpdGlvbnMgYW5kIHRoZSBmb2xsb3dpbmcgZGlzY2xhaW1lciBp
biB0aGUKKyAqICAgIGRvY3VtZW50YXRpb24gYW5kL29yIG90aGVyIG1hdGVyaWFscyBwcm92aWRl
ZCB3aXRoIHRoZSBkaXN0cmlidXRpb24uCisgKiAzLiBOZWl0aGVyIHRoZSBuYW1lIG9mIHRoZSBh
dXRob3Igbm9yIHRoZSBuYW1lcyBvZiBhbnkgY28tY29udHJpYnV0b3JzCisgKiAgICBtYXkgYmUg
dXNlZCB0byBlbmRvcnNlIG9yIHByb21vdGUgcHJvZHVjdHMgZGVyaXZlZCBmcm9tIHRoaXMgc29m
dHdhcmUKKyAqICAgIHdpdGhvdXQgc3BlY2lmaWMgcHJpb3Igd3JpdHRlbiBwZXJtaXNzaW9uLgor
ICoKKyAqIFRISVMgU09GVFdBUkUgSVMgUFJPVklERUQgQlkgVEhFIEFVVEhPUiBBTkQgQ09OVFJJ
QlVUT1JTIGBgQVMgSVMnJyBBTkQKKyAqIEFOWSBFWFBSRVNTIE9SIElNUExJRUQgV0FSUkFOVElF
UywgSU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFRIRQorICogSU1QTElFRCBXQVJSQU5U
SUVTIE9GIE1FUkNIQU5UQUJJTElUWSBBTkQgRklUTkVTUyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBP
U0UKKyAqIEFSRSBESVNDTEFJTUVELiAgSU4gTk8gRVZFTlQgU0hBTEwgVEhFIEFVVEhPUiBPUiBD
T05UUklCVVRPUlMgQkUgTElBQkxFCisgKiBGT1IgQU5ZIERJUkVDVCwgSU5ESVJFQ1QsIElOQ0lE
RU5UQUwsIFNQRUNJQUwsIEVYRU1QTEFSWSwgT1IgQ09OU0VRVUVOVElBTAorICogREFNQUdFUyAo
SU5DTFVESU5HLCBCVVQgTk9UIExJTUlURUQgVE8sIFBST0NVUkVNRU5UIE9GIFNVQlNUSVRVVEUg
R09PRFMKKyAqIE9SIFNFUlZJQ0VTOyBMT1NTIE9GIFVTRSwgREFUQSwgT1IgUFJPRklUUzsgT1Ig
QlVTSU5FU1MgSU5URVJSVVBUSU9OKQorICogSE9XRVZFUiBDQVVTRUQgQU5EIE9OIEFOWSBUSEVP
UlkgT0YgTElBQklMSVRZLCBXSEVUSEVSIElOIENPTlRSQUNULCBTVFJJQ1QKKyAqIExJQUJJTElU
WSwgT1IgVE9SVCAoSU5DTFVESU5HIE5FR0xJR0VOQ0UgT1IgT1RIRVJXSVNFKSBBUklTSU5HIElO
IEFOWSBXQVkKKyAqIE9VVCBPRiBUSEUgVVNFIE9GIFRISVMgU09GVFdBUkUsIEVWRU4gSUYgQURW
SVNFRCBPRiBUSEUgUE9TU0lCSUxJVFkgT0YKKyAqIFNVQ0ggREFNQUdFLgorICovCisKKyNpbmNs
dWRlIDxzeXMvY2RlZnMuaD4KK19fRkJTRElEKCIkRnJlZUJTRCQiKTsKKworI2luY2x1ZGUgPHN5
cy9wYXJhbS5oPgorI2luY2x1ZGUgPHN5cy9zeXN0bS5oPgorI2luY2x1ZGUgPHN5cy9idXMuaD4K
KyNpbmNsdWRlIDxzeXMva2VybmVsLmg+CisjaW5jbHVkZSA8c3lzL3NtcC5oPgorI2luY2x1ZGUg
PHN5cy9wY3B1Lmg+CisjaW5jbHVkZSA8dm0vdm0uaD4KKyNpbmNsdWRlIDx2bS9wbWFwLmg+CisK
KyNpbmNsdWRlIDxtYWNoaW5lL2ludHJfbWFjaGRlcC5oPgorI2luY2x1ZGUgPG1hY2hpbmUvYXBp
Y3Zhci5oPgorCisjaW5jbHVkZSA8bWFjaGluZS9jcHUuaD4KKyNpbmNsdWRlIDxtYWNoaW5lL3Nt
cC5oPgorCisjaW5jbHVkZSA8eGVuL3hlbi1vcy5oPgorI2luY2x1ZGUgPHhlbi9oeXBlcnZpc29y
Lmg+CisKKyNpbmNsdWRlIDx4ZW4vaW50ZXJmYWNlL3ZjcHUuaD4KKworc3RhdGljIGludCB4ZW5w
dl9wcm9iZSh2b2lkKTsKK3N0YXRpYyBpbnQgeGVucHZfcHJvYmVfY3B1cyh2b2lkKTsKK3N0YXRp
YyBpbnQgeGVucHZfc2V0dXBfbG9jYWwodm9pZCk7CitzdGF0aWMgaW50IHhlbnB2X3NldHVwX2lv
KHZvaWQpOworCitzdGF0aWMgc3RydWN0IGFwaWNfZW51bWVyYXRvciB4ZW5wdl9lbnVtZXJhdG9y
ID0geworCSJYZW4gUFYiLAorCXhlbnB2X3Byb2JlLAorCXhlbnB2X3Byb2JlX2NwdXMsCisJeGVu
cHZfc2V0dXBfbG9jYWwsCisJeGVucHZfc2V0dXBfaW8KK307CisKKy8qCisgKiBUaGlzIGVudW1l
cmF0b3Igd2lsbCBvbmx5IGJlIHJlZ2lzdGVyZWQgb24gUFZICisgKi8KK3N0YXRpYyBpbnQKK3hl
bnB2X3Byb2JlKHZvaWQpCit7CisJcmV0dXJuICgtMTAwKTsKK30KKworLyoKKyAqIFRlc3QgZWFj
aCBwb3NzaWJsZSB2Q1BVIGluIG9yZGVyIHRvIGZpbmQgdGhlIG51bWJlciBvZiB2Q1BVcworICov
CitzdGF0aWMgaW50Cit4ZW5wdl9wcm9iZV9jcHVzKHZvaWQpCit7CisjaWZkZWYgU01QCisJaW50
IGksIHJldDsKKworCWZvciAoaSA9IDA7IGkgPCBNQVhDUFU7IGkrKykgeworCQlyZXQgPSBIWVBF
UlZJU09SX3ZjcHVfb3AoVkNQVU9QX2lzX3VwLCBpLCBOVUxMKTsKKwkJaWYgKHJldCA+PSAwKQor
CQkJY3B1X2FkZCgoaSAqIDIpLCAoaSA9PSAwKSk7CisJfQorI2VuZGlmCisJcmV0dXJuICgwKTsK
K30KKworLyoKKyAqIEluaXRpYWxpemUgdGhlIHZDUFUgaWQgb2YgdGhlIEJTUAorICovCitzdGF0
aWMgaW50Cit4ZW5wdl9zZXR1cF9sb2NhbCh2b2lkKQoreworCVBDUFVfU0VUKHZjcHVfaWQsIDAp
OworCXJldHVybiAoMCk7Cit9CisKKy8qCisgKiBPbiBQVkggZ3Vlc3RzIHRoZXJlJ3Mgbm8gSU8g
QVBJQworICovCitzdGF0aWMgaW50Cit4ZW5wdl9zZXR1cF9pbyh2b2lkKQoreworCXJldHVybiAo
MCk7Cit9CisKK3N0YXRpYyB2b2lkCit4ZW5wdl9yZWdpc3Rlcih2b2lkICpkdW1teSBfX3VudXNl
ZCkKK3sKKwlpZiAoeGVuX3B2X2RvbWFpbigpKSB7CisJCWFwaWNfcmVnaXN0ZXJfZW51bWVyYXRv
cigmeGVucHZfZW51bWVyYXRvcik7CisJfQorfQorU1lTSU5JVCh4ZW5wdl9yZWdpc3RlciwgU0lf
U1VCX1RVTkFCTEVTIC0gMSwgU0lfT1JERVJfRklSU1QsIHhlbnB2X3JlZ2lzdGVyLCBOVUxMKTsK
KworLyoKKyAqIFNldHVwIHBlci1DUFUgdkNQVSBJRHMKKyAqLworc3RhdGljIHZvaWQKK3hlbnB2
X3NldF9pZHModm9pZCAqZHVtbXkpCit7CisJc3RydWN0IHBjcHUgKnBjOworCWludCBpOworCisJ
Q1BVX0ZPUkVBQ0goaSkgeworCQlwYyA9IHBjcHVfZmluZChpKTsKKwkJcGMtPnBjX3ZjcHVfaWQg
PSBpOworCX0KK30KK1NZU0lOSVQoeGVucHZfc2V0X2lkcywgU0lfU1VCX0NQVSwgU0lfT1JERVJf
TUlERExFLCB4ZW5wdl9zZXRfaWRzLCBOVUxMKTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikK
CgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4u
b3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:17:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35jr-00056s-Ik; Tue, 14 Jan 2014 15:17:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W35jq-00056n-3W
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:17:10 +0000
Received: from [85.158.139.211:39943] by server-17.bemta-5.messagelabs.com id
	57/A4-19152-5F455D25; Tue, 14 Jan 2014 15:17:09 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389712626!8490204!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25156 invoked from network); 14 Jan 2014 15:17:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:17:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90585596"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:17:06 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:17:05 -0500
Message-ID: <52D554F0.9000409@citrix.com>
Date: Tue, 14 Jan 2014 15:17:04 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>	<20140113120131.GA15623@aepfle.de>	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>	<1389618054.13654.57.camel@kazak.uk.xensource.com>	<52D3F535020000780011311B@nat28.tlf.novell.com>	<52D3EE14.3080609@citrix.com>
	<20140114145328.GA12888@aepfle.de>
In-Reply-To: <20140114145328.GA12888@aepfle.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 14:53, Olaf Hering wrote:
> On Mon, Jan 13, David Vrabel wrote:
> 
>> Can we have a patch to blkif.h that clarifies this?
> 
> What about this change?
> 
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 84eb7fd..56e2faa 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -194,6 +194,7 @@
>   * discard-secure
>   *      Values:         0/1 (boolean)
>   *      Default Value:  0
> + *      Notes:          10
>   *
>   *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
>   *      requests with the BLKIF_DISCARD_SECURE flag set.
> @@ -323,9 +324,10 @@
>   *     For full interoperability, block front and backends should publish
>   *     identical ring parameters, adjusted for unit differences, to the
>   *     XenStore nodes used in both schemes.
> - * (4) Devices that support discard functionality may internally allocate
> - *     space (discardable extents) in units that are larger than the
> - *     exported logical block size.
> + * (4) Devices that support discard functionality may internally allocate space
> + *     (discardable extents) in units that are larger than the exported logical
> + *     block size. The properties discard-granularity and discard-alignment may
> + *     be present if the backing device has such requirements.

Clarify that both discard-granularity and discard-alignment must be
present if non-sector-sized granularity is required. e.g.,

"If the backing device has such discardable extents the backend must
provide both discard-granularity and discard-alignment."

You may find it useful to add these recommendations:

"Backends supporting discard should include discard-granularity and
discard-alignment even if they support discarding individual sectors.
Frontends should assume discard-alignment == 0 and discard-granularity ==
sector size if these keys are missing."

>   * (5) The discard-alignment parameter allows a physical device to be
>   *     partitioned into virtual devices that do not necessarily begin or
>   *     end on a discardable extent boundary.
> @@ -344,6 +346,8 @@
>   *     grants that can be persistently mapped in the frontend driver, but
>   *     due to the frontend driver implementation it should never be bigger
>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if the
> + *     backing device supports secure discard.
>   */

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:22:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35oe-0005PJ-MI; Tue, 14 Jan 2014 15:22:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W35od-0005P6-9c
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:22:07 +0000
Received: from [85.158.137.68:57127] by server-13.bemta-3.messagelabs.com id
	C7/53-28603-E1655D25; Tue, 14 Jan 2014 15:22:06 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389712924!9100192!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17287 invoked from network); 14 Jan 2014 15:22:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:22:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92707468"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:22:03 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:22:02 -0500
Message-ID: <52D55612.8060501@citrix.com>
Date: Tue, 14 Jan 2014 10:21:54 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
	<21203.52907.856721.125370@mariner.uk.xensource.com>
In-Reply-To: <21203.52907.856721.125370@mariner.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 06:31 AM, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
>> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
>>> On 01/10/2014 10:03 AM, Ian Campbell wrote:
>>>> Fitting in with libxl proper would require the API to look a certain way
>>>> (take a context etc), so perhaps libxlu would be more appropriate,
>>>> alongside the disk syntax parser etc?
>>>
>>> Possibly. I looked at that back then (and today again) and it seemed to
>>> all be related to parsing things into XLU_Config objects. I guess I did
>>> not have a good feel for what libxlu was supposed to be. If it is
>>> supposed to be a generic library of auxiliary toolstack functionality
>>> then I think it would be a good place.
>>
>> I think that (aux functionality) was the intention -- the fact that it
>> is all parsing stuff right now is just a coincidence.

Ok then libxlu sounds like a reasonable place for this. I will submit a 
patch.

>>
>> (Ian J: right?)
>
> Yes, that's right.
>
> Thanks,
> Ian.
>
> -----
> No virus found in this message.
> Checked by AVG - www.avg.com
> Version: 2014.0.4259 / Virus Database: 3658/6999 - Release Date: 01/13/14
>


-- 
Ross Philipson

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:22:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35oe-0005PJ-MI; Tue, 14 Jan 2014 15:22:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W35od-0005P6-9c
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:22:07 +0000
Received: from [85.158.137.68:57127] by server-13.bemta-3.messagelabs.com id
	C7/53-28603-E1655D25; Tue, 14 Jan 2014 15:22:06 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389712924!9100192!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17287 invoked from network); 14 Jan 2014 15:22:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:22:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92707468"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:22:03 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:22:02 -0500
Message-ID: <52D55612.8060501@citrix.com>
Date: Tue, 14 Jan 2014 10:21:54 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
	<21203.52907.856721.125370@mariner.uk.xensource.com>
In-Reply-To: <21203.52907.856721.125370@mariner.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/13/2014 06:31 AM, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
>> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
>>> On 01/10/2014 10:03 AM, Ian Campbell wrote:
>>>> Fitting in with libxl proper would require the API to look a certain way
>>>> (take a context etc), so perhaps libxlu would be more appropriate,
>>>> alongside the disk syntax parser etc?
>>>
>>> Possibly. I looked at that back then (and today again) and it seemed to
>>> all be related to parsing things into XLU_Config objects. I guess I did
>>> not have a good feel for what libxlu was supposed to be. If it is
>>> supposed to be a generic library of auxiliary toolstack functionality
>>> then I think it would be a good place.
>>
>> I think that (aux functionality) was the intention -- the fact that it
>> is all parsing stuff right now is just a coincidence.

Ok then libxlu sounds like a reasonable place for this. I will submit a 
patch.

>>
>> (Ian J: right?)
>
> Yes, that's right.
>
> Thanks,
> Ian.
>


-- 
Ross Philipson

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:23:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35pc-0005Wv-92; Tue, 14 Jan 2014 15:23:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avsm@dark.recoil.org>) id 1W35pa-0005We-3n
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:23:06 +0000
Received: from [85.158.137.68:9434] by server-1.bemta-3.messagelabs.com id
	9A/1C-29598-95655D25; Tue, 14 Jan 2014 15:23:05 +0000
X-Env-Sender: avsm@dark.recoil.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389712984!9102505!1
X-Originating-IP: [89.16.177.154]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10634 invoked from network); 14 Jan 2014 15:23:04 -0000
Received: from recoil.dh.bytemark.co.uk (HELO dark.recoil.org) (89.16.177.154)
	by server-11.tower-31.messagelabs.com with SMTP;
	14 Jan 2014 15:23:04 -0000
Received: (qmail 24103 invoked by uid 10000); 14 Jan 2014 15:23:04 -0000
Date: Tue, 14 Jan 2014 15:23:03 +0000
From: Anil Madhavapeddy <anil@recoil.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140114152303.GA19990@dark.recoil.org>
References: <20140111233325.GA30303@dark.recoil.org>
	<6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
	<1389710629.12434.64.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389710629.12434.64.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific
 functions behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 02:43:49PM +0000, Ian Campbell wrote:
> On Sun, 2014-01-12 at 19:17 +0000, Dave Scott wrote:
> > Hi Anil,
> > 
> > Thanks for getting oxenstored on ARM working!
> > 
> > I'm happy with a simple 'Failure not implemented' exception for the
> > moment. I think that once we're using the libxl bindings everywhere we
> > can probably remove these libxc bindings anyway.
> > 
> > Acked-by: David Scott <dave.scott@eu.citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Release hat: This is pretty tightly targeted, and the worst that could
> happen is a build failure, which is generally pretty easy to detect
> before a release, although perhaps less so in the case of ocaml which
> isn't enabled by everyone.
> 
> Although I have two misgivings in that regard:
> 
> > > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > > b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > > index f5cf0ed..ff29b47 100644
> > > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > > @@ -714,6 +714,7 @@ CAMLprim value stub_xc_domain_cpuid_set(value
> > > xch, value domid,  {
> > >  	CAMLparam4(xch, domid, input, config);
> > >  	CAMLlocal2(array, tmp);
> 
> Aren't these variables now unused? Does the compiler not complain about
> this (with -Werror => build failure)?
> 
> Perhaps CAMLlocal2 both defines and references the variables keeping
> this issue at bay?

That's right.  CAMLlocal2 creates a stack variable and registers it with
the garbage collector as a root (to ensure that it's not collected during
the lifetime of the function).  This keeps it live and always used from
the perspective of the C compiler.
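
For readers unfamiliar with the OCaml C interface, the effect Anil
describes can be sketched in plain C. This is a deliberately simplified
model (the real macros and the roots-block layout live in OCaml's
caml/memory.h and differ in detail); it only illustrates why variables
declared with CAMLlocal2 always count as "used":

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for OCaml's 'value' type. */
typedef long value;

/* Simplified version of the block CAMLlocal* pushes onto the GC's
 * local-roots chain. */
struct caml__roots_block {
	struct caml__roots_block *next;
	int ntables;
	value *tables[5];
};

/* Chain of roots blocks the GC would scan; NULL when no stub is live. */
static struct caml__roots_block *caml_local_roots = NULL;

/* Rough equivalent of a stub body that does CAMLlocal2(array, tmp):
 * the expansion takes the addresses of the locals and links them into
 * caml_local_roots, so the C compiler always sees the variables as
 * used even if nothing else in the function touches them. */
static int count_registered_roots(void)
{
	value array = 0, tmp = 0;
	struct caml__roots_block block = { caml_local_roots, 2, { &array, &tmp } };
	int n = 0;

	caml_local_roots = &block;	/* what CAMLlocal2 effectively does */

	/* Walk the chain the way a GC root scan would. */
	for (struct caml__roots_block *b = caml_local_roots; b != NULL; b = b->next)
		n += b->ntables;

	caml_local_roots = block.next;	/* what CAMLreturn undoes */
	return n;
}
```

Because registration happens by taking the variables' addresses on
entry, no -Wunused warning can fire for them, which matches the
behaviour observed with -Wall -Werror.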

I confirmed this with a 'make V=1' build using '-Wall -Werror', and it
compiles fine with gcc 4.8.1.  There is an unrelated signed/unsigned
warning that appears when -Wextra is added; I'll look at that separately,
since it isn't caused by this patch.

> > > +#if defined(__i386__) || defined(__x86_64__)
> > >  	int r;
> > >  	unsigned int c_input[2];
> > >  	char *c_config[4], *out_config[4];
> > > @@ -742,17 +743,24 @@ CAMLprim value stub_xc_domain_cpuid_set(value
> > > xch, value domid,
> > >  			 c_input, (const char **)c_config, out_config);
> > >  	if (r < 0)
> > >  		failwith_xc(_H(xch));
> > > +#else
> > > +	caml_failwith("xc_domain_cpuid_set: not implemented");
> > > #endif
> > >  	CAMLreturn(array);
> 
> In the !__i386__ && !__x86_64__ case this code is unreachable, right,
> because caml_failwith is marked Noreturn?
> 
> Is there any chance that some compiler version might pick up on this and
> complain about the dead code? Or perhaps complain about the
> uninitialised use of array?
> 
> I suppose putting the CAMLreturn inside the x86 case runs the opposite
> risk: a compiler which doesn't pay proper attention to Noreturn would
> think we are reaching the end of a non-void function. Would
> CAMLreturn(unit) be appropriate in that case?

Yeah, I'm not aware of any compiler that ignores the noreturn attribute
while still emitting dead-code or unused-variable warnings.  I left the
CAMLreturn where it is to minimise the x86/ARM differences, but you could
turn the #endif into an #else/#endif so that the return only happens on
x86.  I'd prefer to keep these bindings as straight-line as possible for
the 4.4 release though, and to refactor oxenstored in the future so that
it doesn't depend on them at all (it only uses a small part of libxc, and
these cpuid functions aren't used at all).
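
The trade-off being discussed -- one shared CAMLreturn after the #endif
versus an x86-only return -- can be seen in a standalone sketch.  The
names here are hypothetical, and caml_failwith is modelled by a
longjmp-based helper, since the real function raises an OCaml exception
and never returns:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf on_failure;

/* Stand-in for caml_failwith: declared _Noreturn, and it really never
 * returns (it unwinds via longjmp the way the real function raises an
 * OCaml exception). */
static _Noreturn void fail_not_implemented(const char *msg)
{
	(void)msg;
	longjmp(on_failure, 1);
}

/* The pattern from the patch: on non-x86 the call to the noreturn
 * helper makes the shared return below dead code, but a compiler that
 * honours _Noreturn will not warn about it, and keeping a single
 * return after the #endif minimises the x86/ARM difference. */
static int guarded_stub(void)
{
	int result = 0;
#if defined(__i386__) || defined(__x86_64__)
	result = 42;	/* the architecture-specific work would go here */
#else
	fail_not_implemented("guarded_stub: not implemented");
#endif
	return result;
}
```

Moving the return inside an #else/#endif, as suggested above, avoids
relying on _Noreturn at the cost of two return paths that must be kept
in sync.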

(snip)
> > > 
> > > +#else
> > > +	caml_failwith("xc_domain_cpuid_check: not implemented");
> > > #endif
> > >  	CAMLreturn(ret);
> 
> Unreached?

See above.

cheers,
Anil

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35qO-0005gP-VV; Tue, 14 Jan 2014 15:23:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35qN-0005fe-Gq
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:23:55 +0000
Received: from [85.158.137.68:15136] by server-1.bemta-3.messagelabs.com id
	6B/5D-29598-A8655D25; Tue, 14 Jan 2014 15:23:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389713032!5418230!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5326 invoked from network); 14 Jan 2014 15:23:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:23:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90589673"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:23:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:23:51 -0500
Message-ID: <1389713030.12434.86.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ross Philipson <ross.philipson@citrix.com>
Date: Tue, 14 Jan 2014 15:23:50 +0000
In-Reply-To: <52D55612.8060501@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
	<21203.52907.856721.125370@mariner.uk.xensource.com>
	<52D55612.8060501@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 10:21 -0500, Ross Philipson wrote:
> On 01/13/2014 06:31 AM, Ian Jackson wrote:
> > Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
> >> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
> >>> On 01/10/2014 10:03 AM, Ian Campbell wrote:
> >>>> Fitting in with libxl proper would require the API to look a certain way
> >>>> (take a context etc), so perhaps libxlu would be more appropriate,
> >>>> alongside the disk syntax parser etc?
> >>>
> >>> Possibly. I looked at that back then (and today again) and it seemed to
> >>> all be related to parsing things into XLU_Config objects. I guess I did
> >>> not have a good feel for what libxlu was supposed to be. If it is
> >>> supposed to be a generic library of auxiliary toolstack functionality
> >>> then I think it would be a good place.
> >>
> >> I think that (aux functionality) was the intention -- the fact that it
> >> is all parsing stuff right now is just a coincidence.
> 
> Ok then libxlu sounds like a reasonable place for this. I will submit a 
> patch.

Thanks. It's probably a bit late to squeeze this in for 4.4, so you've
got a little while until 4.5 development opens.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35rr-0005se-E6; Tue, 14 Jan 2014 15:25:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35rp-0005sR-Bp
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:25 +0000
Received: from [85.158.139.211:19697] by server-11.bemta-5.messagelabs.com id
	80/B4-23268-4E655D25; Tue, 14 Jan 2014 15:25:24 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713122!9663816!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24535 invoked from network); 14 Jan 2014 15:25:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709101"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T6-0006J6-L4;
	Tue, 14 Jan 2014 14:59:52 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:36 +0100
Message-ID: <1389711582-66908-15-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since Xen PVH guests doesn't have ACPI, we need to create a dummy
bus so top level Xen devices can attach to it (instead of
attaching directly to the nexus) and a pvcpu device that will be used
to fill the pcpu->pc_device field.
---
 sys/conf/files.amd64 |    1 +
 sys/conf/files.i386  |    1 +
 sys/x86/xen/xenpv.c  |  128 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 130 insertions(+), 0 deletions(-)
 create mode 100644 sys/x86/xen/xenpv.c

diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index a3491da..d7c98cc 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -570,3 +570,4 @@ x86/xen/hvm.c			optional	xenhvm
 x86/xen/xen_intr.c		optional	xen | xenhvm
 x86/xen/pv.c			optional	xenhvm
 x86/xen/pvcpu_enum.c		optional	xenhvm
+x86/xen/xenpv.c			optional	xenhvm
diff --git a/sys/conf/files.i386 b/sys/conf/files.i386
index 790296d..81142e3 100644
--- a/sys/conf/files.i386
+++ b/sys/conf/files.i386
@@ -603,3 +603,4 @@ x86/x86/tsc.c			standard
 x86/x86/delay.c			standard
 x86/xen/hvm.c			optional xenhvm
 x86/xen/xen_intr.c		optional xen | xenhvm
+x86/xen/xenpv.c			optional xen | xenhvm
diff --git a/sys/x86/xen/xenpv.c b/sys/x86/xen/xenpv.c
new file mode 100644
index 0000000..e1282cf
--- /dev/null
+++ b/sys/x86/xen/xenpv.c
@@ -0,0 +1,128 @@
+/*
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/module.h>
+#include <sys/pcpu.h>
+#include <sys/smp.h>
+
+#include <xen/xen-os.h>
+
+static devclass_t xenpv_devclass;
+
+static void
+xenpv_identify(driver_t *driver, device_t parent)
+{
+	if (!xen_domain())
+		return;
+
+	/* Make sure there's only one xenpv device. */
+	if (devclass_get_device(xenpv_devclass, 0))
+		return;
+
+	/*
+	 * Use a high order number so xenpv is attached after
+	 * xenpci on HVM guests.
+	 */
+	BUS_ADD_CHILD(parent, 200, "xenpv", 0);
+}
+
+static int
+xenpv_probe(device_t dev)
+{
+
+	device_set_desc(dev, "Xen PV bus");
+	device_quiet(dev);
+	return (BUS_PROBE_NOWILDCARD);
+}
+
+static int
+xenpv_attach(device_t dev)
+{
+	device_t child;
+
+	if (xen_hvm_domain()) {
+		device_t xenpci;
+		devclass_t dc;
+
+		/* Make sure xenpci has been attached */
+		dc = devclass_find("xenpci");
+		if (dc == NULL)
+			panic("unable to find xenpci devclass");
+
+		xenpci = devclass_get_device(dc, 0);
+		if (xenpci == NULL)
+			panic("unable to find xenpci device");
+
+		if (!device_is_attached(xenpci))
+			panic("trying to attach xenpv before xenpci");
+	}
+
+	/*
+	 * Let our child drivers identify any child devices that they
+	 * can find.  Once that is done attach any devices that we
+	 * found.
+	 */
+	bus_generic_probe(dev);
+	bus_generic_attach(dev);
+
+	if (!devclass_get_device(devclass_find("isa"), 0)) {
+		child = BUS_ADD_CHILD(dev, 0, "isa", 0);
+		if (child == NULL)
+			panic("xenpv_attach isa");
+		device_probe_and_attach(child);
+	}
+
+	return 0;
+}
+
+static device_method_t xenpv_methods[] = {
+	/* Device interface */
+	DEVMETHOD(device_identify,	xenpv_identify),
+	DEVMETHOD(device_probe,		xenpv_probe),
+	DEVMETHOD(device_attach,	xenpv_attach),
+	DEVMETHOD(device_suspend,	bus_generic_suspend),
+	DEVMETHOD(device_resume,	bus_generic_resume),
+
+	/* Bus interface */
+	DEVMETHOD(bus_add_child,	bus_generic_add_child),
+
+	DEVMETHOD_END
+};
+
+static driver_t xenpv_driver = {
+	"xenpv",
+	xenpv_methods,
+	1,			/* no softc */
+};
+
+DRIVER_MODULE(xenpv, nexus, xenpv_driver, xenpv_devclass, 0, 0);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-deve
bEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35rr-0005se-E6; Tue, 14 Jan 2014 15:25:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35rp-0005sR-Bp
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:25 +0000
Received: from [85.158.139.211:19697] by server-11.bemta-5.messagelabs.com id
	80/B4-23268-4E655D25; Tue, 14 Jan 2014 15:25:24 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713122!9663816!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24535 invoked from network); 14 Jan 2014 15:25:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709101"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T6-0006J6-L4;
	Tue, 14 Jan 2014 14:59:52 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:36 +0100
Message-ID: <1389711582-66908-15-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_14/20=5D_xen=3A_introduce_xenp?=
	=?utf-8?q?v_bus_and_a_dummy_pvcpu_device?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

U2luY2UgWGVuIFBWSCBndWVzdHMgZG9lc24ndCBoYXZlIEFDUEksIHdlIG5lZWQgdG8gY3JlYXRl
IGEgZHVtbXkKYnVzIHNvIHRvcCBsZXZlbCBYZW4gZGV2aWNlcyBjYW4gYXR0YWNoIHRvIGl0IChp
bnN0ZWFkIG9mCmF0dGFjaGluZyBkaXJlY3RseSB0byB0aGUgbmV4dXMpIGFuZCBhIHB2Y3B1IGRl
dmljZSB0aGF0IHdpbGwgYmUgdXNlZAp0byBmaWxsIHRoZSBwY3B1LT5wY19kZXZpY2UgZmllbGQu
Ci0tLQogc3lzL2NvbmYvZmlsZXMuYW1kNjQgfCAgICAxICsKIHN5cy9jb25mL2ZpbGVzLmkzODYg
IHwgICAgMSArCiBzeXMveDg2L3hlbi94ZW5wdi5jICB8ICAxMjggKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIDMgZmlsZXMgY2hhbmdlZCwgMTMwIGlu
c2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgc3lzL3g4Ni94
ZW4veGVucHYuYwoKZGlmZiAtLWdpdCBhL3N5cy9jb25mL2ZpbGVzLmFtZDY0IGIvc3lzL2NvbmYv
ZmlsZXMuYW1kNjQKaW5kZXggYTM0OTFkYS4uZDdjOThjYyAxMDA2NDQKLS0tIGEvc3lzL2NvbmYv
ZmlsZXMuYW1kNjQKKysrIGIvc3lzL2NvbmYvZmlsZXMuYW1kNjQKQEAgLTU3MCwzICs1NzAsNCBA
QCB4ODYveGVuL2h2bS5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYveGVuL3hlbl9pbnRyLmMJCW9w
dGlvbmFsCXhlbiB8IHhlbmh2bQogeDg2L3hlbi9wdi5jCQkJb3B0aW9uYWwJeGVuaHZtCiB4ODYv
eGVuL3B2Y3B1X2VudW0uYwkJb3B0aW9uYWwJeGVuaHZtCit4ODYveGVuL3hlbnB2LmMJCQlvcHRp
b25hbAl4ZW5odm0KZGlmZiAtLWdpdCBhL3N5cy9jb25mL2ZpbGVzLmkzODYgYi9zeXMvY29uZi9m
aWxlcy5pMzg2CmluZGV4IDc5MDI5NmQuLjgxMTQyZTMgMTAwNjQ0Ci0tLSBhL3N5cy9jb25mL2Zp
bGVzLmkzODYKKysrIGIvc3lzL2NvbmYvZmlsZXMuaTM4NgpAQCAtNjAzLDMgKzYwMyw0IEBAIHg4
Ni94ODYvdHNjLmMJCQlzdGFuZGFyZAogeDg2L3g4Ni9kZWxheS5jCQkJc3RhbmRhcmQKIHg4Ni94
ZW4vaHZtLmMJCQlvcHRpb25hbCB4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIuYwkJb3B0aW9uYWwg
eGVuIHwgeGVuaHZtCit4ODYveGVuL3hlbnB2LmMJCQlvcHRpb25hbCB4ZW4gfCB4ZW5odm0KZGlm
ZiAtLWdpdCBhL3N5cy94ODYveGVuL3hlbnB2LmMgYi9zeXMveDg2L3hlbi94ZW5wdi5jCm5ldyBm
aWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAuLmUxMjgyY2YKLS0tIC9kZXYvbnVsbAorKysg
Yi9zeXMveDg2L3hlbi94ZW5wdi5jCkBAIC0wLDAgKzEsMTI4IEBACisvKgorICogQ29weXJpZ2h0
IChjKSAyMDEzIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgorICogQWxs
IHJpZ2h0cyByZXNlcnZlZC4KKyAqCisgKiBSZWRpc3RyaWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJj
ZSBhbmQgYmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQKKyAqIG1vZGlmaWNhdGlvbiwgYXJl
IHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29uZGl0aW9ucworICogYXJl
IG1ldDoKKyAqIDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBzb3VyY2UgY29kZSBtdXN0IHJldGFpbiB0
aGUgYWJvdmUgY29weXJpZ2h0CisgKiAgICBub3RpY2UsIHRoaXMgbGlzdCBvZiBjb25kaXRpb25z
IGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuCisgKiAyLiBSZWRpc3RyaWJ1dGlvbnMgaW4g
YmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlIGNvcHlyaWdodAorICogICAgbm90
aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxvd2luZyBkaXNjbGFpbWVy
IGluIHRoZQorICogICAgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIgbWF0ZXJpYWxzIHByb3Zp
ZGVkIHdpdGggdGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisgKiBUSElTIFNPRlRXQVJFIElTIFBST1ZJ
REVEIEJZIFRIRSBBVVRIT1IgQU5EIENPTlRSSUJVVE9SUyBBUyBJUycnIEFORAorICogQU5ZIEVY
UFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBU
TywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRBQklMSVRZIEFORCBGSVRO
RVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NMQUlNRUQuICBJTiBOTyBF
VkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBMSUFCTEUKKyAqIEZPUiBB
TlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBD
T05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywg
UFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VSVklDRVM7IExPU1MgT0Yg
VVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJVUFRJT04pCisgKiBIT1dF
VkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFksIFdIRVRIRVIgSU4gQ09O
VFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNMVURJTkcgTkVHTElHRU5D
RSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VUIE9GIFRIRSBVU0UgT0Yg
VEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NTSUJJTElUWSBPRgorICog
U1VDSCBEQU1BR0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5oPgorX19GQlNESUQoIiRG
cmVlQlNEJCIpOworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5jbHVkZSA8c3lzL3N5c3Rt
Lmg+CisjaW5jbHVkZSA8c3lzL2J1cy5oPgorI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNs
dWRlIDxzeXMvbW9kdWxlLmg+CisjaW5jbHVkZSA8c3lzL3BjcHUuaD4KKyNpbmNsdWRlIDxzeXMv
c21wLmg+CisKKyNpbmNsdWRlIDx4ZW4veGVuLW9zLmg+CisKK3N0YXRpYyBkZXZjbGFzc190IHhl
bnB2X2RldmNsYXNzOworCitzdGF0aWMgdm9pZAoreGVucHZfaWRlbnRpZnkoZHJpdmVyX3QgKmRy
aXZlciwgZGV2aWNlX3QgcGFyZW50KQoreworCWlmICgheGVuX2RvbWFpbigpKQorCQlyZXR1cm47
CisKKwkvKiBNYWtlIHN1cmUgdGhlcmUncyBvbmx5IG9uZSB4ZW5wdiBkZXZpY2UuICovCisJaWYg
KGRldmNsYXNzX2dldF9kZXZpY2UoeGVucHZfZGV2Y2xhc3MsIDApKQorCQlyZXR1cm47CisKKwkv
KgorCSAqIFVzZSBhIGhpZ2ggb3JkZXIgbnVtYmVyIHNvIHhlbnB2IGlzIGF0dGFjaGVkIGFmdGVy
CisJICogeGVucGNpIG9uIEhWTSBndWVzdHMuCisJICovCisJQlVTX0FERF9DSElMRChwYXJlbnQs
IDIwMCwgInhlbnB2IiwgMCk7Cit9CisKK3N0YXRpYyBpbnQKK3hlbnB2X3Byb2JlKGRldmljZV90
IGRldikKK3sKKworCWRldmljZV9zZXRfZGVzYyhkZXYsICJYZW4gUFYgYnVzIik7CisJZGV2aWNl
X3F1aWV0KGRldik7CisJcmV0dXJuIChCVVNfUFJPQkVfTk9XSUxEQ0FSRCk7Cit9CisKK3N0YXRp
YyBpbnQKK3hlbnB2X2F0dGFjaChkZXZpY2VfdCBkZXYpCit7CisJZGV2aWNlX3QgY2hpbGQ7CisK
KwlpZiAoeGVuX2h2bV9kb21haW4oKSkgeworCQlkZXZpY2VfdCB4ZW5wY2k7CisJCWRldmNsYXNz
X3QgZGM7CisKKwkJLyogTWFrZSBzdXJlIHhlbnBjaSBoYXMgYmVlbiBhdHRhY2hlZCAqLworCQlk
YyA9IGRldmNsYXNzX2ZpbmQoInhlbnBjaSIpOworCQlpZiAoZGMgPT0gTlVMTCkKKwkJCXBhbmlj
KCJ1bmFibGUgdG8gZmluZCB4ZW5wY2kgZGV2Y2xhc3MiKTsKKworCQl4ZW5wY2kgPSBkZXZjbGFz
c19nZXRfZGV2aWNlKGRjLCAwKTsKKwkJaWYgKHhlbnBjaSA9PSBOVUxMKQorCQkJcGFuaWMoInVu
YWJsZSB0byBmaW5kIHhlbnBjaSBkZXZpY2UiKTsKKworCQlpZiAoIWRldmljZV9pc19hdHRhY2hl
ZCh4ZW5wY2kpKQorCQkJcGFuaWMoInRyeWluZyB0byBhdHRhY2ggeGVucHYgYmVmb3JlIHhlbnBj
aSIpOworCX0KKworCS8qCisJICogTGV0IG91ciBjaGlsZCBkcml2ZXJzIGlkZW50aWZ5IGFueSBj
aGlsZCBkZXZpY2VzIHRoYXQgdGhleQorCSAqIGNhbiBmaW5kLiAgT25jZSB0aGF0IGlzIGRvbmUg
YXR0YWNoIGFueSBkZXZpY2VzIHRoYXQgd2UKKwkgKiBmb3VuZC4KKwkgKi8KKwlidXNfZ2VuZXJp
Y19wcm9iZShkZXYpOworCWJ1c19nZW5lcmljX2F0dGFjaChkZXYpOworCisJaWYgKCFkZXZjbGFz
c19nZXRfZGV2aWNlKGRldmNsYXNzX2ZpbmQoImlzYSIpLCAwKSkgeworCQljaGlsZCA9IEJVU19B
RERfQ0hJTEQoZGV2LCAwLCAiaXNhIiwgMCk7CisJCWlmIChjaGlsZCA9PSBOVUxMKQorCQkJcGFu
aWMoInhlbnB2X2F0dGFjaCBpc2EiKTsKKwkJZGV2aWNlX3Byb2JlX2FuZF9hdHRhY2goY2hpbGQp
OworCX0KKworCXJldHVybiAwOworfQorCitzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnB2X21l
dGhvZHNbXSA9IHsKKwkvKiBEZXZpY2UgaW50ZXJmYWNlICovCisJREVWTUVUSE9EKGRldmljZV9p
ZGVudGlmeSwJeGVucHZfaWRlbnRpZnkpLAorCURFVk1FVEhPRChkZXZpY2VfcHJvYmUsCQl4ZW5w
dl9wcm9iZSksCisJREVWTUVUSE9EKGRldmljZV9hdHRhY2gsCXhlbnB2X2F0dGFjaCksCisJREVW
TUVUSE9EKGRldmljZV9zdXNwZW5kLAlidXNfZ2VuZXJpY19zdXNwZW5kKSwKKwlERVZNRVRIT0Qo
ZGV2aWNlX3Jlc3VtZSwJYnVzX2dlbmVyaWNfcmVzdW1lKSwKKworCS8qIEJ1cyBpbnRlcmZhY2Ug
Ki8KKwlERVZNRVRIT0QoYnVzX2FkZF9jaGlsZCwJYnVzX2dlbmVyaWNfYWRkX2NoaWxkKSwKKwor
CURFVk1FVEhPRF9FTkQKK307CisKK3N0YXRpYyBkcml2ZXJfdCB4ZW5wdl9kcml2ZXIgPSB7CisJ
InhlbnB2IiwKKwl4ZW5wdl9tZXRob2RzLAorCTEsCQkJLyogbm8gc29mdGMgKi8KK307CisKK0RS
SVZFUl9NT0RVTEUoeGVucHYsIG5leHVzLCB4ZW5wdl9kcml2ZXIsIHhlbnB2X2RldmNsYXNzLCAw
LCAwKTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZl
bEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35rx-0005tR-0G; Tue, 14 Jan 2014 15:25:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35rv-0005sy-FB
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:31 +0000
Received: from [85.158.139.211:40935] by server-5.bemta-5.messagelabs.com id
	4E/4A-14928-AE655D25; Tue, 14 Jan 2014 15:25:30 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713122!9663816!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25177 invoked from network); 14 Jan 2014 15:25:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709156"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:28 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T9-0006J6-Cw;
	Tue, 14 Jan 2014 14:59:55 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:41 +0100
Message-ID: <1389711582-66908-20-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 19/20] xen: changes to gnttab for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/gnttab.c |   26 +++++++++++++++++++++-----
 1 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/sys/xen/gnttab.c b/sys/xen/gnttab.c
index 03c32b7..6949be5 100644
--- a/sys/xen/gnttab.c
+++ b/sys/xen/gnttab.c
@@ -25,6 +25,7 @@ __FBSDID("$FreeBSD$");
 #include <sys/lock.h>
 #include <sys/malloc.h>
 #include <sys/mman.h>
+#include <sys/limits.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -607,6 +608,7 @@ gnttab_resume(void)
 {
 	int error;
 	unsigned int max_nr_gframes, nr_gframes;
+	void *alloc_mem;
 
 	nr_gframes = nr_grant_frames;
 	max_nr_gframes = max_nr_grant_frames();
@@ -614,11 +616,25 @@ gnttab_resume(void)
 		return (ENOSYS);
 
 	if (!resume_frames) {
-		error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
-		    &resume_frames);
-		if (error) {
-			printf("error mapping gnttab share frames\n");
-			return (error);
+		if (xen_pv_domain()) {
+			/*
+			 * This is a waste of physical memory,
+			 * we should use ballooned pages instead,
+			 * but it will do for now.
+			 */
+			alloc_mem = contigmalloc(max_nr_gframes * PAGE_SIZE,
+			                         M_DEVBUF, M_NOWAIT, 0,
+			                         ULONG_MAX, PAGE_SIZE, 0);
+			KASSERT((alloc_mem != NULL),
+				("unable to alloc memory for gnttab"));
+			resume_frames = vtophys(alloc_mem);
+		} else {
+			error = xenpci_alloc_space(PAGE_SIZE * max_nr_gframes,
+			    &resume_frames);
+			if (error) {
+				printf("error mapping gnttab share frames\n");
+				return (error);
+			}
 		}
 	}
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35s3-0005w1-Pv; Tue, 14 Jan 2014 15:25:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35s2-0005up-4Q
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:38 +0000
Received: from [85.158.139.211:41634] by server-4.bemta-5.messagelabs.com id
	C4/2A-26791-1F655D25; Tue, 14 Jan 2014 15:25:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713135!9663875!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26394 invoked from network); 14 Jan 2014 15:25:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90590385"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:19 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T9-0006J6-Ts;
	Tue, 14 Jan 2014 14:59:55 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:42 +0100
Message-ID: <1389711582-66908-21-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 20/20] isa: allow ISA bus to attach to xenpv
	device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/x86/isa/isa.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/sys/x86/isa/isa.c b/sys/x86/isa/isa.c
index 1a57137..9287ff2 100644
--- a/sys/x86/isa/isa.c
+++ b/sys/x86/isa/isa.c
@@ -241,3 +241,6 @@ isa_release_resource(device_t bus, device_t child, int type, int rid,
  * On this platform, isa can also attach to the legacy bus.
  */
 DRIVER_MODULE(isa, legacy, isa_driver, isa_devclass, 0, 0);
+#ifdef XENHVM
+DRIVER_MODULE(isa, xenpv, isa_driver, isa_devclass, 0, 0);
+#endif
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35s4-0005wR-9u; Tue, 14 Jan 2014 15:25:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35s3-0005ve-6D
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:39 +0000
Received: from [85.158.137.68:37311] by server-14.bemta-3.messagelabs.com id
	F5/AA-06105-2F655D25; Tue, 14 Jan 2014 15:25:38 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389713135!9047758!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26974 invoked from network); 14 Jan 2014 15:25:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709209"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:34 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T7-0006J6-6J;
	Tue, 14 Jan 2014 14:59:53 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:37 +0100
Message-ID: <1389711582-66908-16-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_15/20=5D_xen=3A_create_a_PV_CP?=
	=?utf-8?q?U_device_for_PVH_guests?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since there's no ACPI on PVH guests, we need to create a dummy CPU
device in order to fill the pcpu->pc_device field.
---
 sys/conf/files            |    1 +
 sys/dev/xen/pvcpu/pvcpu.c |  101 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+), 0 deletions(-)
 create mode 100644 sys/dev/xen/pvcpu/pvcpu.c

diff --git a/sys/conf/files b/sys/conf/files
index bddf021..178d5e2 100644
--- a/sys/conf/files
+++ b/sys/conf/files
@@ -2500,6 +2500,7 @@ dev/xen/netback/netback.c	optional xen | xenhvm
 dev/xen/netfront/netfront.c	optional xen | xenhvm
 dev/xen/xenpci/xenpci.c		optional xenpci
 dev/xen/timer/timer.c		optional xen | xenhvm
+dev/xen/pvcpu/pvcpu.c		optional xen | xenhvm
 dev/xl/if_xl.c			optional xl pci
 dev/xl/xlphy.c			optional xl pci
 fs/deadfs/dead_vnops.c		standard
diff --git a/sys/dev/xen/pvcpu/pvcpu.c b/sys/dev/xen/pvcpu/pvcpu.c
new file mode 100644
index 0000000..7f3697c
--- /dev/null
+++ b/sys/dev/xen/pvcpu/pvcpu.c
@@ -0,0 +1,101 @@
+/*
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__FBSDID("$FreeBSD$");
+
+#include <sys/param.h>
+#include <sys/systm.h>
+#include <sys/bus.h>
+#include <sys/kernel.h>
+#include <sys/module.h>
+#include <sys/pcpu.h>
+#include <sys/smp.h>
+
+#include <xen/xen-os.h>
+
+/*
+ * Dummy Xen cpu device
+ *
+ * Since there's no ACPI on PVH guests, we need to create a dummy
+ * CPU device in order to fill the pcpu->pc_device field.
+ */
+
+static void
+xenpvcpu_identify(driver_t *driver, device_t parent)
+{
+	device_t child;
+	int i;
+
+	/* Only attach to PV guests, HVM guests use the ACPI CPU devices */
+	if (!xen_pv_domain())
+		return;
+
+	CPU_FOREACH(i) {
+		child = BUS_ADD_CHILD(parent, 0, "pvcpu", i);
+		if (child == NULL)
+			panic("xenpvcpu_identify add pvcpu");
+	}
+}
+
+static int
+xenpvcpu_probe(device_t dev)
+{
+
+	device_set_desc(dev, "Xen PV CPU");
+	return (BUS_PROBE_NOWILDCARD);
+}
+
+static int
+xenpvcpu_attach(device_t dev)
+{
+	struct pcpu *pc;
+	int cpu;
+
+	cpu = device_get_unit(dev);
+	pc = pcpu_find(cpu);
+	pc->pc_device = dev;
+	return (0);
+}
+
+static device_method_t xenpvcpu_methods[] = {
+	DEVMETHOD(device_identify, xenpvcpu_identify),
+	DEVMETHOD(device_probe, xenpvcpu_probe),
+	DEVMETHOD(device_attach, xenpvcpu_attach),
+
+	DEVMETHOD_END
+};
+
+static driver_t xenpvcpu_driver = {
+	"pvcpu",
+	xenpvcpu_methods,
+	0,
+};
+
+devclass_t xenpvcpu_devclass;
+
+DRIVER_MODULE(xenpvcpu, xenpv, xenpvcpu_driver, xenpvcpu_devclass, 0, 0);
+MODULE_DEPEND(xenpvcpu, xenpv, 1, 1, 1);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35s9-0005zY-QF; Tue, 14 Jan 2014 15:25:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35s8-0005ye-Fa
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:44 +0000
Received: from [85.158.139.211:46166] by server-14.bemta-5.messagelabs.com id
	DC/52-24200-7F655D25; Tue, 14 Jan 2014 15:25:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713135!9663875!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27102 invoked from network); 14 Jan 2014 15:25:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90590647"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:36 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T5-0006J6-It;
	Tue, 14 Jan 2014 14:59:51 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:34 +0100
Message-ID: <1389711582-66908-13-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_12/20=5D_xen=3A_add_a_hook_to_?=
	=?utf-8?q?perform_AP_startup?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

AP startup on PVH follows the PV method, so we need to add a hook in
order to diverge from bare metal.
---
 sys/amd64/amd64/mp_machdep.c |   14 +++---
 sys/amd64/include/cpu.h      |    1 +
 sys/amd64/include/smp.h      |    1 +
 sys/x86/xen/hvm.c            |   12 +++++-
 sys/x86/xen/pv.c             |   85 +++++++++++++++++++++++++++++++++++++++++++++
 sys/xen/pv.h                 |   32 ++++++++++++++++++
 6 files changed, 137 insertions(+), 8 deletions(-)
 create mode 100644 sys/xen/pv.h

diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
index 4af4f8f..17e957d 100644
--- a/sys/amd64/amd64/mp_machdep.c
+++ b/sys/amd64/amd64/mp_machdep.c
@@ -90,7 +90,7 @@ extern  struct pcpu __pcpu[];
 
 /* AP uses this during bootstrap.  Do not staticize.  */
 char *bootSTK;
-static int bootAP;
+int bootAP;
 
 /* Free these after use */
 void *bootstacks[MAXCPU];
@@ -124,7 +124,8 @@ static u_long *ipi_hardclock_counts[MAXCPU];
 
 /* Default cpu_ops implementation. */
 struct cpu_ops cpu_ops = {
-	.ipi_vectored = lapic_ipi_vectored
+	.ipi_vectored = lapic_ipi_vectored,
+	.start_all_aps = native_start_all_aps,
 };
 
 extern inthand_t IDTVEC(fast_syscall), IDTVEC(fast_syscall32);
@@ -138,7 +139,7 @@ extern int pmap_pcid_enabled;
 static volatile cpuset_t ipi_nmi_pending;
 
 /* used to hold the AP's until we are ready to release them */
-static struct mtx ap_boot_mtx;
+struct mtx ap_boot_mtx;
 
 /* Set to 1 once we're ready to let the APs out of the pen. */
 static volatile int aps_ready = 0;
@@ -165,7 +166,6 @@ static int cpu_cores;			/* cores per package */
 
 static void	assign_cpu_ids(void);
 static void	set_interrupt_apic_ids(void);
-static int	start_all_aps(void);
 static int	start_ap(int apic_id);
 static void	release_aps(void *dummy);
 
@@ -569,7 +569,7 @@ cpu_mp_start(void)
 	assign_cpu_ids();
 
 	/* Start each Application Processor */
-	start_all_aps();
+	cpu_ops.start_all_aps();
 
 	set_interrupt_apic_ids();
 }
@@ -908,8 +908,8 @@ assign_cpu_ids(void)
 /*
  * start each AP in our list
  */
-static int
-start_all_aps(void)
+int
+native_start_all_aps(void)
 {
 	vm_offset_t va = boot_address + KERNBASE;
 	u_int64_t *pt4, *pt3, *pt2;
diff --git a/sys/amd64/include/cpu.h b/sys/amd64/include/cpu.h
index 3c5d5df..98dc3e0 100644
--- a/sys/amd64/include/cpu.h
+++ b/sys/amd64/include/cpu.h
@@ -64,6 +64,7 @@ struct cpu_ops {
 	void (*cpu_init)(void);
 	void (*cpu_resume)(void);
 	void (*ipi_vectored)(u_int, int);
+	int  (*start_all_aps)(void);
 };
 
 extern struct	cpu_ops cpu_ops;
diff --git a/sys/amd64/include/smp.h b/sys/amd64/include/smp.h
index d1b366b..15bc823 100644
--- a/sys/amd64/include/smp.h
+++ b/sys/amd64/include/smp.h
@@ -79,6 +79,7 @@ void	smp_masked_invlpg_range(cpuset_t mask, struct pmap *pmap,
 	    vm_offset_t startva, vm_offset_t endva);
 void	smp_invltlb(struct pmap *pmap);
 void	smp_masked_invltlb(cpuset_t mask, struct pmap *pmap);
+int	native_start_all_aps(void);
 
 #endif /* !LOCORE */
 #endif /* SMP */
diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index fb1ed79..49caacf 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -53,6 +53,9 @@ __FBSDID("$FreeBSD$");
 #include <xen/hypervisor.h>
 #include <xen/hvm.h>
 #include <xen/xen_intr.h>
+#ifdef __amd64__
+#include <xen/pv.h>
+#endif
 
 #include <xen/interface/hvm/params.h>
 #include <xen/interface/vcpu.h>
@@ -119,7 +122,10 @@ enum xen_domain_type xen_domain_type = XEN_NATIVE;
 struct cpu_ops xen_hvm_cpu_ops = {
 	.ipi_vectored	= lapic_ipi_vectored,
 	.cpu_init	= xen_hvm_cpu_init,
-	.cpu_resume	= xen_hvm_cpu_resume
+	.cpu_resume	= xen_hvm_cpu_resume,
+#ifdef __amd64__
+	.start_all_aps = native_start_all_aps,
+#endif
 };
 
 static MALLOC_DEFINE(M_XENHVM, "xen_hvm", "Xen HVM PV Support");
@@ -698,6 +704,10 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 		setup_xen_features();
 		cpu_ops = xen_hvm_cpu_ops;
  		vm_guest = VM_GUEST_XEN;
+#ifdef __amd64__
+		if (xen_pv_domain())
+			cpu_ops.start_all_aps = xen_pv_start_all_aps;
+#endif
 		break;
 	case XEN_HVM_INIT_RESUME:
 		if (error != 0)
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index d11bc1a..22fd6a6 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -34,8 +34,11 @@ __FBSDID("$FreeBSD$");
 #include <sys/kernel.h>
 #include <sys/reboot.h>
 #include <sys/systm.h>
+#include <sys/malloc.h>
 #include <sys/lock.h>
 #include <sys/rwlock.h>
+#include <sys/mutex.h>
+#include <sys/smp.h>
 
 #include <vm/vm.h>
 #include <vm/vm_extern.h>
@@ -49,9 +52,13 @@ __FBSDID("$FreeBSD$");
 #include <machine/sysarch.h>
 #include <machine/clock.h>
 #include <machine/pc/bios.h>
+#include <machine/smp.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
+#include <xen/pv.h>
+
+#include <xen/interface/vcpu.h>
 
 /* Native initial function */
 extern u_int64_t hammer_time(u_int64_t, u_int64_t);
@@ -65,6 +72,15 @@ static caddr_t xen_pv_parse_preload_data(u_int64_t);
 static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
 
 static void xen_pv_set_init_ops(void);
+/*---------------------------- Extern Declarations ---------------------------*/
+/* Variables used by amd64 mp_machdep to start APs */
+extern struct mtx ap_boot_mtx;
+extern void *bootstacks[];
+extern char *doublefault_stack;
+extern char *nmi_stack;
+extern void *dpcpu;
+extern int bootAP;
+extern char *bootSTK;
 
 /*-------------------------------- Global Data --------------------------------*/
 /* Xen init_ops implementation. */
@@ -168,6 +184,75 @@ hammer_time_xen(start_info_t *si, u_int64_t xenstack)
 }
 
 /*-------------------------------- PV specific --------------------------------*/
+
+static int
+start_xen_ap(int cpu)
+{
+	struct vcpu_guest_context *ctxt;
+	int ms, cpus = mp_naps;
+
+	ctxt = malloc(sizeof(*ctxt), M_TEMP, M_NOWAIT | M_ZERO);
+	if (ctxt == NULL)
+		panic("unable to allocate memory");
+
+	ctxt->flags = VGCF_IN_KERNEL;
+	ctxt->user_regs.rip = (unsigned long) init_secondary;
+	ctxt->user_regs.rsp = (unsigned long) bootSTK;
+
+	/* Set the AP to use the same page tables */
+	ctxt->ctrlreg[3] = KPML4phys;
+
+	if (HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt))
+		panic("unable to initialize AP#%d\n", cpu);
+
+	free(ctxt, M_TEMP);
+
+	/* Launch the vCPU */
+	if (HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
+		panic("unable to start AP#%d\n", cpu);
+
+	/* Wait up to 5 seconds for it to start. */
+	for (ms = 0; ms < 5000; ms++) {
+		if (mp_naps > cpus)
+			return (1);	/* return SUCCESS */
+		DELAY(1000);
+	}
+
+	return (0);
+}
+
+int
+xen_pv_start_all_aps(void)
+{
+	int cpu;
+
+	mtx_init(&ap_boot_mtx, "ap boot", NULL, MTX_SPIN);
+
+	for (cpu = 1; cpu < mp_ncpus; cpu++) {
+
+		/* allocate and set up an idle stack data page */
+		bootstacks[cpu] = (void *)kmem_malloc(kernel_arena,
+		    KSTACK_PAGES * PAGE_SIZE, M_WAITOK | M_ZERO);
+		doublefault_stack = (char *)kmem_malloc(kernel_arena,
+		    PAGE_SIZE, M_WAITOK | M_ZERO);
+		nmi_stack = (char *)kmem_malloc(kernel_arena, PAGE_SIZE,
+		    M_WAITOK | M_ZERO);
+		dpcpu = (void *)kmem_malloc(kernel_arena, DPCPU_SIZE,
+		    M_WAITOK | M_ZERO);
+
+		bootSTK = (char *)bootstacks[cpu] + KSTACK_PAGES * PAGE_SIZE - 8;
+		bootAP = cpu;
+
+		/* attempt to start the Application Processor */
+		if (!start_xen_ap(cpu))
+			panic("AP #%d failed to start!", cpu);
+
+		CPU_SET(cpu, &all_cpus);	/* record AP in CPU map */
+	}
+
+	return (mp_naps);
+}
+
 /*
  * Functions to convert the "extra" parameters passed by Xen
  * into FreeBSD boot options (from the i386 Xen port).
diff --git a/sys/xen/pv.h b/sys/xen/pv.h
new file mode 100644
index 0000000..45b7473
--- /dev/null
+++ b/sys/xen/pv.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) 2013 Roger Pau Monné <roger.pau@citrix.com>
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#ifndef	__XEN_PV_H__
+#define	__XEN_PV_H__
+
+int	xen_pv_start_all_aps(void);
+
+#endif	/* __XEN_PV_H__ */
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_12/20=5D_xen=3A_add_a_hook_to_?=
	=?utf-8?q?perform_AP_startup?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QVAgc3RhcnR1cCBvbiBQVkggZm9sbG93cyB0aGUgUFYgbWV0aG9kLCBzbyB3ZSBuZWVkIHRvIGFk
ZCBhIGhvb2sgaW4Kb3JkZXIgdG8gZGl2ZXJnZSBmcm9tIGJhcmUgbWV0YWwuCi0tLQogc3lzL2Ft
ZDY0L2FtZDY0L21wX21hY2hkZXAuYyB8ICAgMTQgKysrLS0tCiBzeXMvYW1kNjQvaW5jbHVkZS9j
cHUuaCAgICAgIHwgICAgMSArCiBzeXMvYW1kNjQvaW5jbHVkZS9zbXAuaCAgICAgIHwgICAgMSAr
CiBzeXMveDg2L3hlbi9odm0uYyAgICAgICAgICAgIHwgICAxMiArKysrKy0KIHN5cy94ODYveGVu
L3B2LmMgICAgICAgICAgICAgfCAgIDg1ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKwogc3lzL3hlbi9wdi5oICAgICAgICAgICAgICAgICB8ICAgMzIgKysrKysrKysr
KysrKysrKwogNiBmaWxlcyBjaGFuZ2VkLCAxMzcgaW5zZXJ0aW9ucygrKSwgOCBkZWxldGlvbnMo
LSkKIGNyZWF0ZSBtb2RlIDEwMDY0NCBzeXMveGVuL3B2LmgKCmRpZmYgLS1naXQgYS9zeXMvYW1k
NjQvYW1kNjQvbXBfbWFjaGRlcC5jIGIvc3lzL2FtZDY0L2FtZDY0L21wX21hY2hkZXAuYwppbmRl
eCA0YWY0ZjhmLi4xN2U5NTdkIDEwMDY0NAotLS0gYS9zeXMvYW1kNjQvYW1kNjQvbXBfbWFjaGRl
cC5jCisrKyBiL3N5cy9hbWQ2NC9hbWQ2NC9tcF9tYWNoZGVwLmMKQEAgLTkwLDcgKzkwLDcgQEAg
ZXh0ZXJuICBzdHJ1Y3QgcGNwdSBfX3BjcHVbXTsKIAogLyogQVAgdXNlcyB0aGlzIGR1cmluZyBi
b290c3RyYXAuICBEbyBub3Qgc3RhdGljaXplLiAgKi8KIGNoYXIgKmJvb3RTVEs7Ci1zdGF0aWMg
aW50IGJvb3RBUDsKK2ludCBib290QVA7CiAKIC8qIEZyZWUgdGhlc2UgYWZ0ZXIgdXNlICovCiB2
b2lkICpib290c3RhY2tzW01BWENQVV07CkBAIC0xMjQsNyArMTI0LDggQEAgc3RhdGljIHVfbG9u
ZyAqaXBpX2hhcmRjbG9ja19jb3VudHNbTUFYQ1BVXTsKIAogLyogRGVmYXVsdCBjcHVfb3BzIGlt
cGxlbWVudGF0aW9uLiAqLwogc3RydWN0IGNwdV9vcHMgY3B1X29wcyA9IHsKLQkuaXBpX3ZlY3Rv
cmVkID0gbGFwaWNfaXBpX3ZlY3RvcmVkCisJLmlwaV92ZWN0b3JlZCA9IGxhcGljX2lwaV92ZWN0
b3JlZCwKKwkuc3RhcnRfYWxsX2FwcyA9IG5hdGl2ZV9zdGFydF9hbGxfYXBzLAogfTsKIAogZXh0
ZXJuIGludGhhbmRfdCBJRFRWRUMoZmFzdF9zeXNjYWxsKSwgSURUVkVDKGZhc3Rfc3lzY2FsbDMy
KTsKQEAgLTEzOCw3ICsxMzksNyBAQCBleHRlcm4gaW50IHBtYXBfcGNpZF9lbmFibGVkOwogc3Rh
dGljIHZvbGF0aWxlIGNwdXNldF90IGlwaV9ubWlfcGVuZGluZzsKIAogLyogdXNlZCB0byBob2xk
IHRoZSBBUCdzIHVudGlsIHdlIGFyZSByZWFkeSB0byByZWxlYXNlIHRoZW0gKi8KLXN0YXRpYyBz
dHJ1Y3QgbXR4IGFwX2Jvb3RfbXR4Oworc3RydWN0IG10eCBhcF9ib290X210eDsKIAogLyogU2V0
IHRvIDEgb25jZSB3ZSdyZSByZWFkeSB0byBsZXQgdGhlIEFQcyBvdXQgb2YgdGhlIHBlbi4gKi8K
IHN0YXRpYyB2b2xhdGlsZSBpbnQgYXBzX3JlYWR5ID0gMDsKQEAgLTE2NSw3ICsxNjYsNiBAQCBz
dGF0aWMgaW50IGNwdV9jb3JlczsJCQkvKiBjb3JlcyBwZXIgcGFja2FnZSAqLwogCiBzdGF0aWMg
dm9pZAlhc3NpZ25fY3B1X2lkcyh2b2lkKTsKIHN0YXRpYyB2b2lkCXNldF9pbnRlcnJ1cHRfYXBp
Y19pZHModm9pZCk7Ci1zdGF0aWMgaW50CXN0YXJ0X2FsbF9hcHModm9pZCk7CiBzdGF0aWMgaW50
CXN0YXJ0X2FwKGludCBhcGljX2lkKTsKIHN0YXRpYyB2b2lkCXJlbGVhc2VfYXBzKHZvaWQgKmR1
bW15KTsKIApAQCAtNTY5LDcgKzU2OSw3IEBAIGNwdV9tcF9zdGFydCh2b2lkKQogCWFzc2lnbl9j
cHVfaWRzKCk7CiAKIAkvKiBTdGFydCBlYWNoIEFwcGxpY2F0aW9uIFByb2Nlc3NvciAqLwotCXN0
YXJ0X2FsbF9hcHMoKTsKKwljcHVfb3BzLnN0YXJ0X2FsbF9hcHMoKTsKIAogCXNldF9pbnRlcnJ1
cHRfYXBpY19pZHMoKTsKIH0KQEAgLTkwOCw4ICs5MDgsOCBAQCBhc3NpZ25fY3B1X2lkcyh2b2lk
KQogLyoKICAqIHN0YXJ0IGVhY2ggQVAgaW4gb3VyIGxpc3QKICAqLwotc3RhdGljIGludAotc3Rh
cnRfYWxsX2Fwcyh2b2lkKQoraW50CituYXRpdmVfc3RhcnRfYWxsX2Fwcyh2b2lkKQogewogCXZt
X29mZnNldF90IHZhID0gYm9vdF9hZGRyZXNzICsgS0VSTkJBU0U7CiAJdV9pbnQ2NF90ICpwdDQs
ICpwdDMsICpwdDI7CmRpZmYgLS1naXQgYS9zeXMvYW1kNjQvaW5jbHVkZS9jcHUuaCBiL3N5cy9h
bWQ2NC9pbmNsdWRlL2NwdS5oCmluZGV4IDNjNWQ1ZGYuLjk4ZGMzZTAgMTAwNjQ0Ci0tLSBhL3N5
cy9hbWQ2NC9pbmNsdWRlL2NwdS5oCisrKyBiL3N5cy9hbWQ2NC9pbmNsdWRlL2NwdS5oCkBAIC02
NCw2ICs2NCw3IEBAIHN0cnVjdCBjcHVfb3BzIHsKIAl2b2lkICgqY3B1X2luaXQpKHZvaWQpOwog
CXZvaWQgKCpjcHVfcmVzdW1lKSh2b2lkKTsKIAl2b2lkICgqaXBpX3ZlY3RvcmVkKSh1X2ludCwg
aW50KTsKKwlpbnQgICgqc3RhcnRfYWxsX2Fwcykodm9pZCk7CiB9OwogCiBleHRlcm4gc3RydWN0
CWNwdV9vcHMgY3B1X29wczsKZGlmZiAtLWdpdCBhL3N5cy9hbWQ2NC9pbmNsdWRlL3NtcC5oIGIv
c3lzL2FtZDY0L2luY2x1ZGUvc21wLmgKaW5kZXggZDFiMzY2Yi4uMTViYzgyMyAxMDA2NDQKLS0t
IGEvc3lzL2FtZDY0L2luY2x1ZGUvc21wLmgKKysrIGIvc3lzL2FtZDY0L2luY2x1ZGUvc21wLmgK
QEAgLTc5LDYgKzc5LDcgQEAgdm9pZAlzbXBfbWFza2VkX2ludmxwZ19yYW5nZShjcHVzZXRfdCBt
YXNrLCBzdHJ1Y3QgcG1hcCAqcG1hcCwKIAkgICAgdm1fb2Zmc2V0X3Qgc3RhcnR2YSwgdm1fb2Zm
c2V0X3QgZW5kdmEpOwogdm9pZAlzbXBfaW52bHRsYihzdHJ1Y3QgcG1hcCAqcG1hcCk7CiB2b2lk
CXNtcF9tYXNrZWRfaW52bHRsYihjcHVzZXRfdCBtYXNrLCBzdHJ1Y3QgcG1hcCAqcG1hcCk7Citp
bnQJbmF0aXZlX3N0YXJ0X2FsbF9hcHModm9pZCk7CiAKICNlbmRpZiAvKiAhTE9DT1JFICovCiAj
ZW5kaWYgLyogU01QICovCmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi9odm0uYyBiL3N5cy94ODYv
eGVuL2h2bS5jCmluZGV4IGZiMWVkNzkuLjQ5Y2FhY2YgMTAwNjQ0Ci0tLSBhL3N5cy94ODYveGVu
L2h2bS5jCisrKyBiL3N5cy94ODYveGVuL2h2bS5jCkBAIC01Myw2ICs1Myw5IEBAIF9fRkJTRElE
KCIkRnJlZUJTRCQiKTsKICNpbmNsdWRlIDx4ZW4vaHlwZXJ2aXNvci5oPgogI2luY2x1ZGUgPHhl
bi9odm0uaD4KICNpbmNsdWRlIDx4ZW4veGVuX2ludHIuaD4KKyNpZmRlZiBfX2FtZDY0X18KKyNp
bmNsdWRlIDx4ZW4vcHYuaD4KKyNlbmRpZgogCiAjaW5jbHVkZSA8eGVuL2ludGVyZmFjZS9odm0v
cGFyYW1zLmg+CiAjaW5jbHVkZSA8eGVuL2ludGVyZmFjZS92Y3B1Lmg+CkBAIC0xMTksNyArMTIy
LDEwIEBAIGVudW0geGVuX2RvbWFpbl90eXBlIHhlbl9kb21haW5fdHlwZSA9IFhFTl9OQVRJVkU7
CiBzdHJ1Y3QgY3B1X29wcyB4ZW5faHZtX2NwdV9vcHMgPSB7CiAJLmlwaV92ZWN0b3JlZAk9IGxh
cGljX2lwaV92ZWN0b3JlZCwKIAkuY3B1X2luaXQJPSB4ZW5faHZtX2NwdV9pbml0LAotCS5jcHVf
cmVzdW1lCT0geGVuX2h2bV9jcHVfcmVzdW1lCisJLmNwdV9yZXN1bWUJPSB4ZW5faHZtX2NwdV9y
ZXN1bWUsCisjaWZkZWYgX19hbWQ2NF9fCisJLnN0YXJ0X2FsbF9hcHMgPSBuYXRpdmVfc3RhcnRf
YWxsX2FwcywKKyNlbmRpZgogfTsKIAogc3RhdGljIE1BTExPQ19ERUZJTkUoTV9YRU5IVk0sICJ4
ZW5faHZtIiwgIlhlbiBIVk0gUFYgU3VwcG9ydCIpOwpAQCAtNjk4LDYgKzcwNCwxMCBAQCB4ZW5f
aHZtX2luaXQoZW51bSB4ZW5faHZtX2luaXRfdHlwZSBpbml0X3R5cGUpCiAJCXNldHVwX3hlbl9m
ZWF0dXJlcygpOwogCQljcHVfb3BzID0geGVuX2h2bV9jcHVfb3BzOwogIAkJdm1fZ3Vlc3QgPSBW
TV9HVUVTVF9YRU47CisjaWZkZWYgX19hbWQ2NF9fCisJCWlmICh4ZW5fcHZfZG9tYWluKCkpCisJ
CQljcHVfb3BzLnN0YXJ0X2FsbF9hcHMgPSB4ZW5fcHZfc3RhcnRfYWxsX2FwczsKKyNlbmRpZgog
CQlicmVhazsKIAljYXNlIFhFTl9IVk1fSU5JVF9SRVNVTUU6CiAJCWlmIChlcnJvciAhPSAwKQpk
aWZmIC0tZ2l0IGEvc3lzL3g4Ni94ZW4vcHYuYyBiL3N5cy94ODYveGVuL3B2LmMKaW5kZXggZDEx
YmMxYS4uMjJmZDZhNiAxMDA2NDQKLS0tIGEvc3lzL3g4Ni94ZW4vcHYuYworKysgYi9zeXMveDg2
L3hlbi9wdi5jCkBAIC0zNCw4ICszNCwxMSBAQCBfX0ZCU0RJRCgiJEZyZWVCU0QkIik7CiAjaW5j
bHVkZSA8c3lzL2tlcm5lbC5oPgogI2luY2x1ZGUgPHN5cy9yZWJvb3QuaD4KICNpbmNsdWRlIDxz
eXMvc3lzdG0uaD4KKyNpbmNsdWRlIDxzeXMvbWFsbG9jLmg+CiAjaW5jbHVkZSA8c3lzL2xvY2su
aD4KICNpbmNsdWRlIDxzeXMvcndsb2NrLmg+CisjaW5jbHVkZSA8c3lzL211dGV4Lmg+CisjaW5j
bHVkZSA8c3lzL3NtcC5oPgogCiAjaW5jbHVkZSA8dm0vdm0uaD4KICNpbmNsdWRlIDx2bS92bV9l
eHRlcm4uaD4KQEAgLTQ5LDkgKzUyLDEzIEBAIF9fRkJTRElEKCIkRnJlZUJTRCQiKTsKICNpbmNs
dWRlIDxtYWNoaW5lL3N5c2FyY2guaD4KICNpbmNsdWRlIDxtYWNoaW5lL2Nsb2NrLmg+CiAjaW5j
bHVkZSA8bWFjaGluZS9wYy9iaW9zLmg+CisjaW5jbHVkZSA8bWFjaGluZS9zbXAuaD4KIAogI2lu
Y2x1ZGUgPHhlbi94ZW4tb3MuaD4KICNpbmNsdWRlIDx4ZW4vaHlwZXJ2aXNvci5oPgorI2luY2x1
ZGUgPHhlbi9wdi5oPgorCisjaW5jbHVkZSA8eGVuL2ludGVyZmFjZS92Y3B1Lmg+CiAKIC8qIE5h
dGl2ZSBpbml0aWFsIGZ1bmN0aW9uICovCiBleHRlcm4gdV9pbnQ2NF90IGhhbW1lcl90aW1lKHVf
aW50NjRfdCwgdV9pbnQ2NF90KTsKQEAgLTY1LDYgKzcyLDE1IEBAIHN0YXRpYyBjYWRkcl90IHhl
bl9wdl9wYXJzZV9wcmVsb2FkX2RhdGEodV9pbnQ2NF90KTsKIHN0YXRpYyB2b2lkIHhlbl9wdl9w
YXJzZV9tZW1tYXAoY2FkZHJfdCwgdm1fcGFkZHJfdCAqLCBpbnQgKik7CiAKIHN0YXRpYyB2b2lk
IHhlbl9wdl9zZXRfaW5pdF9vcHModm9pZCk7CisvKi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0gRXh0ZXJuIERlY2xhcmF0aW9ucyAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0qLworLyog
VmFyaWFibGVzIHVzZWQgYnkgYW1kNjQgbXBfbWFjaGRlcCB0byBzdGFydCBBUHMgKi8KK2V4dGVy
biBzdHJ1Y3QgbXR4IGFwX2Jvb3RfbXR4OworZXh0ZXJuIHZvaWQgKmJvb3RzdGFja3NbXTsKK2V4
dGVybiBjaGFyICpkb3VibGVmYXVsdF9zdGFjazsKK2V4dGVybiBjaGFyICpubWlfc3RhY2s7Citl
eHRlcm4gdm9pZCAqZHBjcHU7CitleHRlcm4gaW50IGJvb3RBUDsKK2V4dGVybiBjaGFyICpib290
U1RLOwogCiAvKi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tIEdsb2JhbCBEYXRhIC0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0qLwogLyogWGVuIGluaXRfb3BzIGltcGxlbWVu
dGF0aW9uLiAqLwpAQCAtMTY4LDYgKzE4NCw3NSBAQCBoYW1tZXJfdGltZV94ZW4oc3RhcnRfaW5m
b190ICpzaSwgdV9pbnQ2NF90IHhlbnN0YWNrKQogfQogCiAvKi0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tIFBWIHNwZWNpZmljIC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0q
LworCitzdGF0aWMgaW50CitzdGFydF94ZW5fYXAoaW50IGNwdSkKK3sKKwlzdHJ1Y3QgdmNwdV9n
dWVzdF9jb250ZXh0ICpjdHh0OworCWludCBtcywgY3B1cyA9IG1wX25hcHM7CisKKwljdHh0ID0g
bWFsbG9jKHNpemVvZigqY3R4dCksIE1fVEVNUCwgTV9OT1dBSVQgfCBNX1pFUk8pOworCWlmIChj
dHh0ID09IE5VTEwpCisJCXBhbmljKCJ1bmFibGUgdG8gYWxsb2NhdGUgbWVtb3J5Iik7CisKKwlj
dHh0LT5mbGFncyA9IFZHQ0ZfSU5fS0VSTkVMOworCWN0eHQtPnVzZXJfcmVncy5yaXAgPSAodW5z
aWduZWQgbG9uZykgaW5pdF9zZWNvbmRhcnk7CisJY3R4dC0+dXNlcl9yZWdzLnJzcCA9ICh1bnNp
Z25lZCBsb25nKSBib290U1RLOworCisJLyogU2V0IHRoZSBBUCB0byB1c2UgdGhlIHNhbWUgcGFn
ZSB0YWJsZXMgKi8KKwljdHh0LT5jdHJscmVnWzNdID0gS1BNTDRwaHlzOworCisJaWYgKEhZUEVS
VklTT1JfdmNwdV9vcChWQ1BVT1BfaW5pdGlhbGlzZSwgY3B1LCBjdHh0KSkKKwkJcGFuaWMoInVu
YWJsZSB0byBpbml0aWFsaXplIEFQIyVkXG4iLCBjcHUpOworCisJZnJlZShjdHh0LCBNX1RFTVAp
OworCisJLyogTGF1bmNoIHRoZSB2Q1BVICovCisJaWYgKEhZUEVSVklTT1JfdmNwdV9vcChWQ1BV
T1BfdXAsIGNwdSwgTlVMTCkpCisJCXBhbmljKCJ1bmFibGUgdG8gc3RhcnQgQVAjJWRcbiIsIGNw
dSk7CisKKwkvKiBXYWl0IHVwIHRvIDUgc2Vjb25kcyBmb3IgaXQgdG8gc3RhcnQuICovCisJZm9y
IChtcyA9IDA7IG1zIDwgNTAwMDsgbXMrKykgeworCQlpZiAobXBfbmFwcyA+IGNwdXMpCisJCQly
ZXR1cm4gKDEpOwkvKiByZXR1cm4gU1VDQ0VTUyAqLworCQlERUxBWSgxMDAwKTsKKwl9CisKKwly
ZXR1cm4gKDApOworfQorCitpbnQKK3hlbl9wdl9zdGFydF9hbGxfYXBzKHZvaWQpCit7CisJaW50
IGNwdTsKKworCW10eF9pbml0KCZhcF9ib290X210eCwgImFwIGJvb3QiLCBOVUxMLCBNVFhfU1BJ
Tik7CisKKwlmb3IgKGNwdSA9IDE7IGNwdSA8IG1wX25jcHVzOyBjcHUrKykgeworCisJCS8qIGFs
bG9jYXRlIGFuZCBzZXQgdXAgYW4gaWRsZSBzdGFjayBkYXRhIHBhZ2UgKi8KKwkJYm9vdHN0YWNr
c1tjcHVdID0gKHZvaWQgKilrbWVtX21hbGxvYyhrZXJuZWxfYXJlbmEsCisJCSAgICBLU1RBQ0tf
UEFHRVMgKiBQQUdFX1NJWkUsIE1fV0FJVE9LIHwgTV9aRVJPKTsKKwkJZG91YmxlZmF1bHRfc3Rh
Y2sgPSAoY2hhciAqKWttZW1fbWFsbG9jKGtlcm5lbF9hcmVuYSwKKwkJICAgIFBBR0VfU0laRSwg
TV9XQUlUT0sgfCBNX1pFUk8pOworCQlubWlfc3RhY2sgPSAoY2hhciAqKWttZW1fbWFsbG9jKGtl
cm5lbF9hcmVuYSwgUEFHRV9TSVpFLAorCQkgICAgTV9XQUlUT0sgfCBNX1pFUk8pOworCQlkcGNw
dSA9ICh2b2lkICopa21lbV9tYWxsb2Moa2VybmVsX2FyZW5hLCBEUENQVV9TSVpFLAorCQkgICAg
TV9XQUlUT0sgfCBNX1pFUk8pOworCisJCWJvb3RTVEsgPSAoY2hhciAqKWJvb3RzdGFja3NbY3B1
XSArIEtTVEFDS19QQUdFUyAqIFBBR0VfU0laRSAtIDg7CisJCWJvb3RBUCA9IGNwdTsKKworCQkv
KiBhdHRlbXB0IHRvIHN0YXJ0IHRoZSBBcHBsaWNhdGlvbiBQcm9jZXNzb3IgKi8KKwkJaWYgKCFz
dGFydF94ZW5fYXAoY3B1KSkKKwkJCXBhbmljKCJBUCAjJWQgZmFpbGVkIHRvIHN0YXJ0ISIsIGNw
dSk7CisKKwkJQ1BVX1NFVChjcHUsICZhbGxfY3B1cyk7CS8qIHJlY29yZCBBUCBpbiBDUFUgbWFw
ICovCisJfQorCisJcmV0dXJuIChtcF9uYXBzKTsKK30KKwogLyoKICAqIEZ1bmN0aW9ucyB0byBj
b252ZXJ0IHRoZSAiZXh0cmEiIHBhcmFtZXRlcnMgcGFzc2VkIGJ5IFhlbgogICogaW50byBGcmVl
QlNEIGJvb3Qgb3B0aW9ucyAoZnJvbSB0aGUgaTM4NiBYZW4gcG9ydCkuCmRpZmYgLS1naXQgYS9z
eXMveGVuL3B2LmggYi9zeXMveGVuL3B2LmgKbmV3IGZpbGUgbW9kZSAxMDA2NDQKaW5kZXggMDAw
MDAwMC4uNDViNzQ3MwotLS0gL2Rldi9udWxsCisrKyBiL3N5cy94ZW4vcHYuaApAQCAtMCwwICsx
LDMyIEBACisvKgorICogQ29weXJpZ2h0IChjKSAyMDEzIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2Vy
LnBhdUBjaXRyaXguY29tPgorICogQWxsIHJpZ2h0cyByZXNlcnZlZC4KKyAqCisgKiBSZWRpc3Ry
aWJ1dGlvbiBhbmQgdXNlIGluIHNvdXJjZSBhbmQgYmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhv
dXQKKyAqIG1vZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xs
b3dpbmcgY29uZGl0aW9ucworICogYXJlIG1ldDoKKyAqIDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBz
b3VyY2UgY29kZSBtdXN0IHJldGFpbiB0aGUgYWJvdmUgY29weXJpZ2h0CisgKiAgICBub3RpY2Us
IHRoaXMgbGlzdCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuCisg
KiAyLiBSZWRpc3RyaWJ1dGlvbnMgaW4gYmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFi
b3ZlIGNvcHlyaWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQg
dGhlIGZvbGxvd2luZyBkaXNjbGFpbWVyIGluIHRoZQorICogICAgZG9jdW1lbnRhdGlvbiBhbmQv
b3Igb3RoZXIgbWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGggdGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisg
KiBUSElTIFNPRlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBBVVRIT1IgQU5EIENPTlRSSUJVVE9S
UyBBUyBJUycnIEFORAorICogQU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNM
VURJTkcsIEJVVCBOT1QgTElNSVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0Yg
TUVSQ0hBTlRBQklMSVRZIEFORCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICog
QVJFIERJU0NMQUlNRUQuICBJTiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJV
VE9SUyBCRSBMSUFCTEUKKyAqIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwg
U1BFQ0lBTCwgRVhFTVBMQVJZLCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJ
TkcsIEJVVCBOT1QgTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUwor
ICogT1IgU0VSVklDRVM7IExPU1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVT
UyBJTlRFUlJVUFRJT04pCisgKiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBM
SUFCSUxJVFksIFdIRVRIRVIgSU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBU
T1JUIChJTkNMVURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdB
WQorICogT1VUIE9GIFRIRSBVU0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9G
IFRIRSBQT1NTSUJJTElUWSBPRgorICogU1VDSCBEQU1BR0UuCisgKi8KKworI2lmbmRlZglfX1hF
Tl9QVl9IX18KKyNkZWZpbmUJX19YRU5fUFZfSF9fCisKK2ludAl4ZW5fcHZfc3RhcnRfYWxsX2Fw
cyh2b2lkKTsKKworI2VuZGlmCS8qIF9fWEVOX1BWX0hfXyAqLwotLSAKMS43LjcuNSAoQXBwbGUg
R2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
Clhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xp
c3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sF-00063A-2S; Tue, 14 Jan 2014 15:25:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sC-00061X-VL
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:49 +0000
Received: from [85.158.139.211:27725] by server-12.bemta-5.messagelabs.com id
	22/8A-30017-CF655D25; Tue, 14 Jan 2014 15:25:48 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713135!9663875!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27610 invoked from network); 14 Jan 2014 15:25:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90590737"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:39 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T7-0006J6-Ph;
	Tue, 14 Jan 2014 14:59:53 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:38 +0100
Message-ID: <1389711582-66908-17-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] =?utf-8?q?=5BPATCH_v10_16/20=5D_xen=3A_create_a_Xen_n?=
	=?utf-8?q?exus_to_use_in_PV/PVH?=
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SW50cm9kdWNlIGEgWGVuIHNwZWNpZmljIG5leHVzIHRoYXQgaXMgZ29pbmcgdG8gYmUgaW4gY2hh
cmdlIGZvcgphdHRhY2hpbmcgWGVuIHNwZWNpZmljIGRldmljZXMuCi0tLQogc3lzL2NvbmYvZmls
ZXMuYW1kNjQgICAgICAgICAgfCAgICAxICsKIHN5cy9jb25mL2ZpbGVzLmkzODYgICAgICAgICAg
IHwgICAgMSArCiBzeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUuYyB8ICAgIDIgKy0KIHN5cy9k
ZXYveGVuL3RpbWVyL3RpbWVyLmMgICAgIHwgICAgNCArLQogc3lzL2Rldi94ZW4veGVucGNpL3hl
bnBjaS5jICAgfCAgIDY0ICsrLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiBzeXMv
eDg2L3hlbi94ZW5fbmV4dXMuYyAgICAgICB8ICAgNzYgKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysKIHN5cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYyAgIHwgICAgNiAr
LS0tCiA3IGZpbGVzIGNoYW5nZWQsIDg1IGluc2VydGlvbnMoKyksIDY5IGRlbGV0aW9ucygtKQog
Y3JlYXRlIG1vZGUgMTAwNjQ0IHN5cy94ODYveGVuL3hlbl9uZXh1cy5jCgpkaWZmIC0tZ2l0IGEv
c3lzL2NvbmYvZmlsZXMuYW1kNjQgYi9zeXMvY29uZi9maWxlcy5hbWQ2NAppbmRleCBkN2M5OGNj
Li5mMzc4OTgzIDEwMDY0NAotLS0gYS9zeXMvY29uZi9maWxlcy5hbWQ2NAorKysgYi9zeXMvY29u
Zi9maWxlcy5hbWQ2NApAQCAtNTcxLDMgKzU3MSw0IEBAIHg4Ni94ZW4veGVuX2ludHIuYwkJb3B0
aW9uYWwJeGVuIHwgeGVuaHZtCiB4ODYveGVuL3B2LmMJCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94
ZW4vcHZjcHVfZW51bS5jCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94ZW4veGVucHYuYwkJCW9wdGlv
bmFsCXhlbmh2bQoreDg2L3hlbi94ZW5fbmV4dXMuYwkJb3B0aW9uYWwJeGVuaHZtCmRpZmYgLS1n
aXQgYS9zeXMvY29uZi9maWxlcy5pMzg2IGIvc3lzL2NvbmYvZmlsZXMuaTM4NgppbmRleCA4MTE0
MmUzLi4wMjg4N2EzMyAxMDA2NDQKLS0tIGEvc3lzL2NvbmYvZmlsZXMuaTM4NgorKysgYi9zeXMv
Y29uZi9maWxlcy5pMzg2CkBAIC02MDQsMyArNjA0LDQgQEAgeDg2L3g4Ni9kZWxheS5jCQkJc3Rh
bmRhcmQKIHg4Ni94ZW4vaHZtLmMJCQlvcHRpb25hbCB4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIu
YwkJb3B0aW9uYWwgeGVuIHwgeGVuaHZtCiB4ODYveGVuL3hlbnB2LmMJCQlvcHRpb25hbCB4ZW4g
fCB4ZW5odm0KK3g4Ni94ZW4veGVuX25leHVzLmMJCW9wdGlvbmFsIHhlbiB8IHhlbmh2bQpkaWZm
IC0tZ2l0IGEvc3lzL2Rldi94ZW4vY29uc29sZS9jb25zb2xlLmMgYi9zeXMvZGV2L3hlbi9jb25z
b2xlL2NvbnNvbGUuYwppbmRleCA4OTlkZmZjLi45MTUzOGZlIDEwMDY0NAotLS0gYS9zeXMvZGV2
L3hlbi9jb25zb2xlL2NvbnNvbGUuYworKysgYi9zeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUu
YwpAQCAtNDYyLDQgKzQ2Miw0IEBAIHhjb25zX2ZvcmNlX2ZsdXNoKHZvaWQpCiAJfQogfQogCi1E
UklWRVJfTU9EVUxFKHhjLCBuZXh1cywgeGNfZHJpdmVyLCB4Y19kZXZjbGFzcywgMCwgMCk7CitE
UklWRVJfTU9EVUxFKHhjLCB4ZW5wdiwgeGNfZHJpdmVyLCB4Y19kZXZjbGFzcywgMCwgMCk7CmRp
ZmYgLS1naXQgYS9zeXMvZGV2L3hlbi90aW1lci90aW1lci5jIGIvc3lzL2Rldi94ZW4vdGltZXIv
dGltZXIuYwppbmRleCA5NjM3MmFiLi5mMTZmNWE1IDEwMDY0NAotLS0gYS9zeXMvZGV2L3hlbi90
aW1lci90aW1lci5jCisrKyBiL3N5cy9kZXYveGVuL3RpbWVyL3RpbWVyLmMKQEAgLTYzNiw1ICs2
MzYsNSBAQCBzdGF0aWMgZHJpdmVyX3QgeGVudGltZXJfZHJpdmVyID0gewogCXNpemVvZihzdHJ1
Y3QgeGVudGltZXJfc29mdGMpLAogfTsKIAotRFJJVkVSX01PRFVMRSh4ZW50aW1lciwgbmV4dXMs
IHhlbnRpbWVyX2RyaXZlciwgeGVudGltZXJfZGV2Y2xhc3MsIDAsIDApOwotTU9EVUxFX0RFUEVO
RCh4ZW50aW1lciwgbmV4dXMsIDEsIDEsIDEpOworRFJJVkVSX01PRFVMRSh4ZW50aW1lciwgeGVu
cHYsIHhlbnRpbWVyX2RyaXZlciwgeGVudGltZXJfZGV2Y2xhc3MsIDAsIDApOworTU9EVUxFX0RF
UEVORCh4ZW50aW1lciwgeGVucHYsIDEsIDEsIDEpOwpkaWZmIC0tZ2l0IGEvc3lzL2Rldi94ZW4v
eGVucGNpL3hlbnBjaS5jIGIvc3lzL2Rldi94ZW4veGVucGNpL3hlbnBjaS5jCmluZGV4IGRkMmFk
OTIuLmUzMzQwNTEgMTAwNjQ0Ci0tLSBhL3N5cy9kZXYveGVuL3hlbnBjaS94ZW5wY2kuYworKysg
Yi9zeXMvZGV2L3hlbi94ZW5wY2kveGVucGNpLmMKQEAgLTUxLDggKzUxLDYgQEAgX19GQlNESUQo
IiRGcmVlQlNEJCIpOwogCiBleHRlcm4gdm9pZCB4ZW5faW50cl9oYW5kbGVfdXBjYWxsKHN0cnVj
dCB0cmFwZnJhbWUgKnRyYXBfZnJhbWUpOwogCi1zdGF0aWMgZGV2aWNlX3QgbmV4dXM7Ci0KIC8q
CiAgKiBUaGlzIGlzIHVzZWQgdG8gZmluZCBvdXIgcGxhdGZvcm0gZGV2aWNlIGluc3RhbmNlLgog
ICovCkBAIC0xODgsMzYgKzE4Niw2IEBAIHhlbnBjaV9hbGxvY19zcGFjZShzaXplX3Qgc3osIHZt
X3BhZGRyX3QgKnBhKQogCX0KIH0KIAotc3RhdGljIHN0cnVjdCByZXNvdXJjZSAqCi14ZW5wY2lf
YWxsb2NfcmVzb3VyY2UoZGV2aWNlX3QgZGV2LCBkZXZpY2VfdCBjaGlsZCwgaW50IHR5cGUsIGlu
dCAqcmlkLAotICAgIHVfbG9uZyBzdGFydCwgdV9sb25nIGVuZCwgdV9sb25nIGNvdW50LCB1X2lu
dCBmbGFncykKLXsKLQlyZXR1cm4gKEJVU19BTExPQ19SRVNPVVJDRShuZXh1cywgY2hpbGQsIHR5
cGUsIHJpZCwgc3RhcnQsCi0JICAgIGVuZCwgY291bnQsIGZsYWdzKSk7Ci19Ci0KLQotc3RhdGlj
IGludAoteGVucGNpX3JlbGVhc2VfcmVzb3VyY2UoZGV2aWNlX3QgZGV2LCBkZXZpY2VfdCBjaGls
ZCwgaW50IHR5cGUsIGludCByaWQsCi0gICAgc3RydWN0IHJlc291cmNlICpyKQotewotCXJldHVy
biAoQlVTX1JFTEVBU0VfUkVTT1VSQ0UobmV4dXMsIGNoaWxkLCB0eXBlLCByaWQsIHIpKTsKLX0K
LQotc3RhdGljIGludAoteGVucGNpX2FjdGl2YXRlX3Jlc291cmNlKGRldmljZV90IGRldiwgZGV2
aWNlX3QgY2hpbGQsIGludCB0eXBlLCBpbnQgcmlkLAotICAgIHN0cnVjdCByZXNvdXJjZSAqcikK
LXsKLQlyZXR1cm4gKEJVU19BQ1RJVkFURV9SRVNPVVJDRShuZXh1cywgY2hpbGQsIHR5cGUsIHJp
ZCwgcikpOwotfQotCi1zdGF0aWMgaW50Ci14ZW5wY2lfZGVhY3RpdmF0ZV9yZXNvdXJjZShkZXZp
Y2VfdCBkZXYsIGRldmljZV90IGNoaWxkLCBpbnQgdHlwZSwKLSAgICBpbnQgcmlkLCBzdHJ1Y3Qg
cmVzb3VyY2UgKnIpCi17Ci0JcmV0dXJuIChCVVNfREVBQ1RJVkFURV9SRVNPVVJDRShuZXh1cywg
Y2hpbGQsIHR5cGUsIHJpZCwgcikpOwotfQotCiAvKgogICogUHJvYmUgLSBqdXN0IGNoZWNrIGRl
dmljZSBJRC4KICAqLwpAQCAtMjI5LDcgKzE5Nyw3IEBAIHhlbnBjaV9wcm9iZShkZXZpY2VfdCBk
ZXYpCiAJCXJldHVybiAoRU5YSU8pOwogCiAJZGV2aWNlX3NldF9kZXNjKGRldiwgIlhlbiBQbGF0
Zm9ybSBEZXZpY2UiKTsKLQlyZXR1cm4gKGJ1c19nZW5lcmljX3Byb2JlKGRldikpOworCXJldHVy
biAoQlVTX1BST0JFX0RFRkFVTFQpOwogfQogCiAvKgpAQCAtMjM5LDIwICsyMDcsOCBAQCBzdGF0
aWMgaW50CiB4ZW5wY2lfYXR0YWNoKGRldmljZV90IGRldikKIHsKIAlzdHJ1Y3QgeGVucGNpX3Nv
ZnRjICpzY3AgPSBkZXZpY2VfZ2V0X3NvZnRjKGRldik7Ci0JZGV2Y2xhc3NfdCBkYzsKIAlpbnQg
ZXJyb3I7CiAKLQkvKgotCSAqIEZpbmQgYW5kIHJlY29yZCBuZXh1czAuICBTaW5jZSB3ZSBhcmUg
bm90IHJlYWxseSBvbiB0aGUKLQkgKiBQQ0kgYnVzLCBhbGwgcmVzb3VyY2Ugb3BlcmF0aW9ucyBh
cmUgZGlyZWN0ZWQgdG8gbmV4dXMKLQkgKiBpbnN0ZWFkIG9mIHRocm91Z2ggb3VyIHBhcmVudC4K
LQkgKi8KLQlpZiAoKGRjID0gZGV2Y2xhc3NfZmluZCgibmV4dXMiKSkgID09IDAKLQkgfHwgKG5l
eHVzID0gZGV2Y2xhc3NfZ2V0X2RldmljZShkYywgMCkpID09IDApIHsKLQkJZGV2aWNlX3ByaW50
ZihkZXYsICJ1bmFibGUgdG8gZmluZCBuZXh1cy4iKTsKLQkJcmV0dXJuIChFTk9FTlQpOwotCX0K
LQogCWVycm9yID0geGVucGNpX2FsbG9jYXRlX3Jlc291cmNlcyhkZXYpOwogCWlmIChlcnJvcikg
ewogCQlkZXZpY2VfcHJpbnRmKGRldiwgInhlbnBjaV9hbGxvY2F0ZV9yZXNvdXJjZXMgZmFpbGVk
KCVkKS5cbiIsCkBAIC0yNzAsNyArMjI2LDcgQEAgeGVucGNpX2F0dGFjaChkZXZpY2VfdCBkZXYp
CiAJCWdvdG8gZXJyZXhpdDsKIAl9CiAKLQlyZXR1cm4gKGJ1c19nZW5lcmljX2F0dGFjaChkZXYp
KTsKKwlyZXR1cm4gKDApOwogCiBlcnJleGl0OgogCS8qCkBAIC0zMDksMTYgKzI2NSwxMCBAQCB4
ZW5wY2lfZGV0YWNoKGRldmljZV90IGRldikKIH0KIAogc3RhdGljIGludAoteGVucGNpX3N1c3Bl
bmQoZGV2aWNlX3QgZGV2KQotewotCXJldHVybiAoYnVzX2dlbmVyaWNfc3VzcGVuZChkZXYpKTsK
LX0KLQotc3RhdGljIGludAogeGVucGNpX3Jlc3VtZShkZXZpY2VfdCBkZXYpCiB7CiAJeGVuX2h2
bV9zZXRfY2FsbGJhY2soZGV2KTsKLQlyZXR1cm4gKGJ1c19nZW5lcmljX3Jlc3VtZShkZXYpKTsK
KwlyZXR1cm4gKDApOwogfQogCiBzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnBjaV9tZXRob2Rz
W10gPSB7CkBAIC0zMjYsMTYgKzI3Niw4IEBAIHN0YXRpYyBkZXZpY2VfbWV0aG9kX3QgeGVucGNp
X21ldGhvZHNbXSA9IHsKIAlERVZNRVRIT0QoZGV2aWNlX3Byb2JlLAkJeGVucGNpX3Byb2JlKSwK
IAlERVZNRVRIT0QoZGV2aWNlX2F0dGFjaCwJeGVucGNpX2F0dGFjaCksCiAJREVWTUVUSE9EKGRl
dmljZV9kZXRhY2gsCXhlbnBjaV9kZXRhY2gpLAotCURFVk1FVEhPRChkZXZpY2Vfc3VzcGVuZCwJ
eGVucGNpX3N1c3BlbmQpLAogCURFVk1FVEhPRChkZXZpY2VfcmVzdW1lLAl4ZW5wY2lfcmVzdW1l
KSwKIAotCS8qIEJ1cyBpbnRlcmZhY2UgKi8KLQlERVZNRVRIT0QoYnVzX2FkZF9jaGlsZCwJYnVz
X2dlbmVyaWNfYWRkX2NoaWxkKSwKLQlERVZNRVRIT0QoYnVzX2FsbG9jX3Jlc291cmNlLCAgIHhl
bnBjaV9hbGxvY19yZXNvdXJjZSksCi0JREVWTUVUSE9EKGJ1c19yZWxlYXNlX3Jlc291cmNlLCB4
ZW5wY2lfcmVsZWFzZV9yZXNvdXJjZSksCi0JREVWTUVUSE9EKGJ1c19hY3RpdmF0ZV9yZXNvdXJj
ZSwgeGVucGNpX2FjdGl2YXRlX3Jlc291cmNlKSwKLQlERVZNRVRIT0QoYnVzX2RlYWN0aXZhdGVf
cmVzb3VyY2UsIHhlbnBjaV9kZWFjdGl2YXRlX3Jlc291cmNlKSwKLQogCXsgMCwgMCB9CiB9Owog
CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi94ZW5fbmV4dXMuYyBiL3N5cy94ODYveGVuL3hlbl9u
ZXh1cy5jCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAuLmM1YjdjNDMKLS0tIC9k
ZXYvbnVsbAorKysgYi9zeXMveDg2L3hlbi94ZW5fbmV4dXMuYwpAQCAtMCwwICsxLDc2IEBACisv
KgorICogQ29weXJpZ2h0IChjKSAyMDEzIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRy
aXguY29tPgorICogQWxsIHJpZ2h0cyByZXNlcnZlZC4KKyAqCisgKiBSZWRpc3RyaWJ1dGlvbiBh
bmQgdXNlIGluIHNvdXJjZSBhbmQgYmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQKKyAqIG1v
ZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29u
ZGl0aW9ucworICogYXJlIG1ldDoKKyAqIDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBzb3VyY2UgY29k
ZSBtdXN0IHJldGFpbiB0aGUgYWJvdmUgY29weXJpZ2h0CisgKiAgICBub3RpY2UsIHRoaXMgbGlz
dCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuCisgKiAyLiBSZWRp
c3RyaWJ1dGlvbnMgaW4gYmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlIGNvcHly
aWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxv
d2luZyBkaXNjbGFpbWVyIGluIHRoZQorICogICAgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIg
bWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGggdGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisgKiBUSElTIFNP
RlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBBVVRIT1IgQU5EIENPTlRSSUJVVE9SUyBBUyBJUycn
IEFORAorICogQU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJV
VCBOT1QgTElNSVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRB
QklMSVRZIEFORCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NM
QUlNRUQuICBJTiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBM
SUFCTEUKKyAqIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwg
RVhFTVBMQVJZLCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBO
T1QgTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VS
VklDRVM7IExPU1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJV
UFRJT04pCisgKiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFks
IFdIRVRIRVIgSU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNM
VURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VU
IE9GIFRIRSBVU0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NT
SUJJTElUWSBPRgorICogU1VDSCBEQU1BR0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5o
PgorX19GQlNESUQoIiRGcmVlQlNEJCIpOworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5j
bHVkZSA8c3lzL2J1cy5oPgorI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNsdWRlIDxzeXMv
bW9kdWxlLmg+CisjaW5jbHVkZSA8c3lzL3N5c2N0bC5oPgorI2luY2x1ZGUgPHN5cy9zeXN0bS5o
PgorI2luY2x1ZGUgPHN5cy9zbXAuaD4KKworI2luY2x1ZGUgPG1hY2hpbmUvbmV4dXN2YXIuaD4K
KworI2luY2x1ZGUgPHhlbi94ZW4tb3MuaD4KKworLyoKKyAqIFhlbiBuZXh1cyg0KSBkcml2ZXIu
CisgKi8KK3N0YXRpYyBpbnQKK25leHVzX3hlbl9wcm9iZShkZXZpY2VfdCBkZXYpCit7CisJaWYg
KCF4ZW5fcHZfZG9tYWluKCkpCisJCXJldHVybiAoRU5YSU8pOworCisJcmV0dXJuIChCVVNfUFJP
QkVfREVGQVVMVCk7Cit9CisKK3N0YXRpYyBpbnQKK25leHVzX3hlbl9hdHRhY2goZGV2aWNlX3Qg
ZGV2KQoreworCisJbmV4dXNfaW5pdF9yZXNvdXJjZXMoKTsKKwlidXNfZ2VuZXJpY19wcm9iZShk
ZXYpOworCWJ1c19nZW5lcmljX2F0dGFjaChkZXYpOworCisJcmV0dXJuIDA7Cit9CisKK3N0YXRp
YyBkZXZpY2VfbWV0aG9kX3QgbmV4dXNfeGVuX21ldGhvZHNbXSA9IHsKKwkvKiBEZXZpY2UgaW50
ZXJmYWNlICovCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwJCW5leHVzX3hlbl9wcm9iZSksCisJ
REVWTUVUSE9EKGRldmljZV9hdHRhY2gsCW5leHVzX3hlbl9hdHRhY2gpLAorCisJeyAwLCAwIH0K
K307CisKK0RFRklORV9DTEFTU18xKG5leHVzLCBuZXh1c194ZW5fZHJpdmVyLCBuZXh1c194ZW5f
bWV0aG9kcywgMSwgbmV4dXNfZHJpdmVyKTsKK3N0YXRpYyBkZXZjbGFzc190IG5leHVzX2RldmNs
YXNzOworCitEUklWRVJfTU9EVUxFKG5leHVzX3hlbiwgcm9vdCwgbmV4dXNfeGVuX2RyaXZlciwg
bmV4dXNfZGV2Y2xhc3MsIDAsIDApOwpkaWZmIC0tZ2l0IGEvc3lzL3hlbi94ZW5zdG9yZS94ZW5z
dG9yZS5jIGIvc3lzL3hlbi94ZW5zdG9yZS94ZW5zdG9yZS5jCmluZGV4IGQ0MDQ4NjIuLmI1Y2Y0
MTMgMTAwNjQ0Ci0tLSBhL3N5cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYworKysgYi9zeXMveGVu
L3hlbnN0b3JlL3hlbnN0b3JlLmMKQEAgLTEyNjEsMTEgKzEyNjEsNyBAQCBzdGF0aWMgZGV2aWNl
X21ldGhvZF90IHhlbnN0b3JlX21ldGhvZHNbXSA9IHsKIERFRklORV9DTEFTU18wKHhlbnN0b3Jl
LCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX21ldGhvZHMsIDApOwogc3RhdGljIGRldmNsYXNz
X3QgeGVuc3RvcmVfZGV2Y2xhc3M7IAogIAotI2lmZGVmIFhFTkhWTQotRFJJVkVSX01PRFVMRSh4
ZW5zdG9yZSwgeGVucGNpLCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX2RldmNsYXNzLCAwLCAw
KTsKLSNlbHNlCi1EUklWRVJfTU9EVUxFKHhlbnN0b3JlLCBuZXh1cywgeGVuc3RvcmVfZHJpdmVy
LCB4ZW5zdG9yZV9kZXZjbGFzcywgMCwgMCk7Ci0jZW5kaWYKK0RSSVZFUl9NT0RVTEUoeGVuc3Rv
cmUsIHhlbnB2LCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX2RldmNsYXNzLCAwLCAwKTsKIAog
LyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tIFN5c2N0bCBEYXRhIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KIC8qIFhYWCBTaG91bGRuJ3QgdGhlIG5vZGUgYmUgc29t
ZXdoZXJlIGVsc2U/ICovCi0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlz
dApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

ZXMuYW1kNjQgICAgICAgICAgfCAgICAxICsKIHN5cy9jb25mL2ZpbGVzLmkzODYgICAgICAgICAg
IHwgICAgMSArCiBzeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUuYyB8ICAgIDIgKy0KIHN5cy9k
ZXYveGVuL3RpbWVyL3RpbWVyLmMgICAgIHwgICAgNCArLQogc3lzL2Rldi94ZW4veGVucGNpL3hl
bnBjaS5jICAgfCAgIDY0ICsrLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCiBzeXMv
eDg2L3hlbi94ZW5fbmV4dXMuYyAgICAgICB8ICAgNzYgKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysKIHN5cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYyAgIHwgICAgNiAr
LS0tCiA3IGZpbGVzIGNoYW5nZWQsIDg1IGluc2VydGlvbnMoKyksIDY5IGRlbGV0aW9ucygtKQog
Y3JlYXRlIG1vZGUgMTAwNjQ0IHN5cy94ODYveGVuL3hlbl9uZXh1cy5jCgpkaWZmIC0tZ2l0IGEv
c3lzL2NvbmYvZmlsZXMuYW1kNjQgYi9zeXMvY29uZi9maWxlcy5hbWQ2NAppbmRleCBkN2M5OGNj
Li5mMzc4OTgzIDEwMDY0NAotLS0gYS9zeXMvY29uZi9maWxlcy5hbWQ2NAorKysgYi9zeXMvY29u
Zi9maWxlcy5hbWQ2NApAQCAtNTcxLDMgKzU3MSw0IEBAIHg4Ni94ZW4veGVuX2ludHIuYwkJb3B0
aW9uYWwJeGVuIHwgeGVuaHZtCiB4ODYveGVuL3B2LmMJCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94
ZW4vcHZjcHVfZW51bS5jCQlvcHRpb25hbAl4ZW5odm0KIHg4Ni94ZW4veGVucHYuYwkJCW9wdGlv
bmFsCXhlbmh2bQoreDg2L3hlbi94ZW5fbmV4dXMuYwkJb3B0aW9uYWwJeGVuaHZtCmRpZmYgLS1n
aXQgYS9zeXMvY29uZi9maWxlcy5pMzg2IGIvc3lzL2NvbmYvZmlsZXMuaTM4NgppbmRleCA4MTE0
MmUzLi4wMjg4N2EzMyAxMDA2NDQKLS0tIGEvc3lzL2NvbmYvZmlsZXMuaTM4NgorKysgYi9zeXMv
Y29uZi9maWxlcy5pMzg2CkBAIC02MDQsMyArNjA0LDQgQEAgeDg2L3g4Ni9kZWxheS5jCQkJc3Rh
bmRhcmQKIHg4Ni94ZW4vaHZtLmMJCQlvcHRpb25hbCB4ZW5odm0KIHg4Ni94ZW4veGVuX2ludHIu
YwkJb3B0aW9uYWwgeGVuIHwgeGVuaHZtCiB4ODYveGVuL3hlbnB2LmMJCQlvcHRpb25hbCB4ZW4g
fCB4ZW5odm0KK3g4Ni94ZW4veGVuX25leHVzLmMJCW9wdGlvbmFsIHhlbiB8IHhlbmh2bQpkaWZm
IC0tZ2l0IGEvc3lzL2Rldi94ZW4vY29uc29sZS9jb25zb2xlLmMgYi9zeXMvZGV2L3hlbi9jb25z
b2xlL2NvbnNvbGUuYwppbmRleCA4OTlkZmZjLi45MTUzOGZlIDEwMDY0NAotLS0gYS9zeXMvZGV2
L3hlbi9jb25zb2xlL2NvbnNvbGUuYworKysgYi9zeXMvZGV2L3hlbi9jb25zb2xlL2NvbnNvbGUu
YwpAQCAtNDYyLDQgKzQ2Miw0IEBAIHhjb25zX2ZvcmNlX2ZsdXNoKHZvaWQpCiAJfQogfQogCi1E
UklWRVJfTU9EVUxFKHhjLCBuZXh1cywgeGNfZHJpdmVyLCB4Y19kZXZjbGFzcywgMCwgMCk7CitE
UklWRVJfTU9EVUxFKHhjLCB4ZW5wdiwgeGNfZHJpdmVyLCB4Y19kZXZjbGFzcywgMCwgMCk7CmRp
ZmYgLS1naXQgYS9zeXMvZGV2L3hlbi90aW1lci90aW1lci5jIGIvc3lzL2Rldi94ZW4vdGltZXIv
dGltZXIuYwppbmRleCA5NjM3MmFiLi5mMTZmNWE1IDEwMDY0NAotLS0gYS9zeXMvZGV2L3hlbi90
aW1lci90aW1lci5jCisrKyBiL3N5cy9kZXYveGVuL3RpbWVyL3RpbWVyLmMKQEAgLTYzNiw1ICs2
MzYsNSBAQCBzdGF0aWMgZHJpdmVyX3QgeGVudGltZXJfZHJpdmVyID0gewogCXNpemVvZihzdHJ1
Y3QgeGVudGltZXJfc29mdGMpLAogfTsKIAotRFJJVkVSX01PRFVMRSh4ZW50aW1lciwgbmV4dXMs
IHhlbnRpbWVyX2RyaXZlciwgeGVudGltZXJfZGV2Y2xhc3MsIDAsIDApOwotTU9EVUxFX0RFUEVO
RCh4ZW50aW1lciwgbmV4dXMsIDEsIDEsIDEpOworRFJJVkVSX01PRFVMRSh4ZW50aW1lciwgeGVu
cHYsIHhlbnRpbWVyX2RyaXZlciwgeGVudGltZXJfZGV2Y2xhc3MsIDAsIDApOworTU9EVUxFX0RF
UEVORCh4ZW50aW1lciwgeGVucHYsIDEsIDEsIDEpOwpkaWZmIC0tZ2l0IGEvc3lzL2Rldi94ZW4v
eGVucGNpL3hlbnBjaS5jIGIvc3lzL2Rldi94ZW4veGVucGNpL3hlbnBjaS5jCmluZGV4IGRkMmFk
OTIuLmUzMzQwNTEgMTAwNjQ0Ci0tLSBhL3N5cy9kZXYveGVuL3hlbnBjaS94ZW5wY2kuYworKysg
Yi9zeXMvZGV2L3hlbi94ZW5wY2kveGVucGNpLmMKQEAgLTUxLDggKzUxLDYgQEAgX19GQlNESUQo
IiRGcmVlQlNEJCIpOwogCiBleHRlcm4gdm9pZCB4ZW5faW50cl9oYW5kbGVfdXBjYWxsKHN0cnVj
dCB0cmFwZnJhbWUgKnRyYXBfZnJhbWUpOwogCi1zdGF0aWMgZGV2aWNlX3QgbmV4dXM7Ci0KIC8q
CiAgKiBUaGlzIGlzIHVzZWQgdG8gZmluZCBvdXIgcGxhdGZvcm0gZGV2aWNlIGluc3RhbmNlLgog
ICovCkBAIC0xODgsMzYgKzE4Niw2IEBAIHhlbnBjaV9hbGxvY19zcGFjZShzaXplX3Qgc3osIHZt
X3BhZGRyX3QgKnBhKQogCX0KIH0KIAotc3RhdGljIHN0cnVjdCByZXNvdXJjZSAqCi14ZW5wY2lf
YWxsb2NfcmVzb3VyY2UoZGV2aWNlX3QgZGV2LCBkZXZpY2VfdCBjaGlsZCwgaW50IHR5cGUsIGlu
dCAqcmlkLAotICAgIHVfbG9uZyBzdGFydCwgdV9sb25nIGVuZCwgdV9sb25nIGNvdW50LCB1X2lu
dCBmbGFncykKLXsKLQlyZXR1cm4gKEJVU19BTExPQ19SRVNPVVJDRShuZXh1cywgY2hpbGQsIHR5
cGUsIHJpZCwgc3RhcnQsCi0JICAgIGVuZCwgY291bnQsIGZsYWdzKSk7Ci19Ci0KLQotc3RhdGlj
IGludAoteGVucGNpX3JlbGVhc2VfcmVzb3VyY2UoZGV2aWNlX3QgZGV2LCBkZXZpY2VfdCBjaGls
ZCwgaW50IHR5cGUsIGludCByaWQsCi0gICAgc3RydWN0IHJlc291cmNlICpyKQotewotCXJldHVy
biAoQlVTX1JFTEVBU0VfUkVTT1VSQ0UobmV4dXMsIGNoaWxkLCB0eXBlLCByaWQsIHIpKTsKLX0K
LQotc3RhdGljIGludAoteGVucGNpX2FjdGl2YXRlX3Jlc291cmNlKGRldmljZV90IGRldiwgZGV2
aWNlX3QgY2hpbGQsIGludCB0eXBlLCBpbnQgcmlkLAotICAgIHN0cnVjdCByZXNvdXJjZSAqcikK
LXsKLQlyZXR1cm4gKEJVU19BQ1RJVkFURV9SRVNPVVJDRShuZXh1cywgY2hpbGQsIHR5cGUsIHJp
ZCwgcikpOwotfQotCi1zdGF0aWMgaW50Ci14ZW5wY2lfZGVhY3RpdmF0ZV9yZXNvdXJjZShkZXZp
Y2VfdCBkZXYsIGRldmljZV90IGNoaWxkLCBpbnQgdHlwZSwKLSAgICBpbnQgcmlkLCBzdHJ1Y3Qg
cmVzb3VyY2UgKnIpCi17Ci0JcmV0dXJuIChCVVNfREVBQ1RJVkFURV9SRVNPVVJDRShuZXh1cywg
Y2hpbGQsIHR5cGUsIHJpZCwgcikpOwotfQotCiAvKgogICogUHJvYmUgLSBqdXN0IGNoZWNrIGRl
dmljZSBJRC4KICAqLwpAQCAtMjI5LDcgKzE5Nyw3IEBAIHhlbnBjaV9wcm9iZShkZXZpY2VfdCBk
ZXYpCiAJCXJldHVybiAoRU5YSU8pOwogCiAJZGV2aWNlX3NldF9kZXNjKGRldiwgIlhlbiBQbGF0
Zm9ybSBEZXZpY2UiKTsKLQlyZXR1cm4gKGJ1c19nZW5lcmljX3Byb2JlKGRldikpOworCXJldHVy
biAoQlVTX1BST0JFX0RFRkFVTFQpOwogfQogCiAvKgpAQCAtMjM5LDIwICsyMDcsOCBAQCBzdGF0
aWMgaW50CiB4ZW5wY2lfYXR0YWNoKGRldmljZV90IGRldikKIHsKIAlzdHJ1Y3QgeGVucGNpX3Nv
ZnRjICpzY3AgPSBkZXZpY2VfZ2V0X3NvZnRjKGRldik7Ci0JZGV2Y2xhc3NfdCBkYzsKIAlpbnQg
ZXJyb3I7CiAKLQkvKgotCSAqIEZpbmQgYW5kIHJlY29yZCBuZXh1czAuICBTaW5jZSB3ZSBhcmUg
bm90IHJlYWxseSBvbiB0aGUKLQkgKiBQQ0kgYnVzLCBhbGwgcmVzb3VyY2Ugb3BlcmF0aW9ucyBh
cmUgZGlyZWN0ZWQgdG8gbmV4dXMKLQkgKiBpbnN0ZWFkIG9mIHRocm91Z2ggb3VyIHBhcmVudC4K
LQkgKi8KLQlpZiAoKGRjID0gZGV2Y2xhc3NfZmluZCgibmV4dXMiKSkgID09IDAKLQkgfHwgKG5l
eHVzID0gZGV2Y2xhc3NfZ2V0X2RldmljZShkYywgMCkpID09IDApIHsKLQkJZGV2aWNlX3ByaW50
ZihkZXYsICJ1bmFibGUgdG8gZmluZCBuZXh1cy4iKTsKLQkJcmV0dXJuIChFTk9FTlQpOwotCX0K
LQogCWVycm9yID0geGVucGNpX2FsbG9jYXRlX3Jlc291cmNlcyhkZXYpOwogCWlmIChlcnJvcikg
ewogCQlkZXZpY2VfcHJpbnRmKGRldiwgInhlbnBjaV9hbGxvY2F0ZV9yZXNvdXJjZXMgZmFpbGVk
KCVkKS5cbiIsCkBAIC0yNzAsNyArMjI2LDcgQEAgeGVucGNpX2F0dGFjaChkZXZpY2VfdCBkZXYp
CiAJCWdvdG8gZXJyZXhpdDsKIAl9CiAKLQlyZXR1cm4gKGJ1c19nZW5lcmljX2F0dGFjaChkZXYp
KTsKKwlyZXR1cm4gKDApOwogCiBlcnJleGl0OgogCS8qCkBAIC0zMDksMTYgKzI2NSwxMCBAQCB4
ZW5wY2lfZGV0YWNoKGRldmljZV90IGRldikKIH0KIAogc3RhdGljIGludAoteGVucGNpX3N1c3Bl
bmQoZGV2aWNlX3QgZGV2KQotewotCXJldHVybiAoYnVzX2dlbmVyaWNfc3VzcGVuZChkZXYpKTsK
LX0KLQotc3RhdGljIGludAogeGVucGNpX3Jlc3VtZShkZXZpY2VfdCBkZXYpCiB7CiAJeGVuX2h2
bV9zZXRfY2FsbGJhY2soZGV2KTsKLQlyZXR1cm4gKGJ1c19nZW5lcmljX3Jlc3VtZShkZXYpKTsK
KwlyZXR1cm4gKDApOwogfQogCiBzdGF0aWMgZGV2aWNlX21ldGhvZF90IHhlbnBjaV9tZXRob2Rz
W10gPSB7CkBAIC0zMjYsMTYgKzI3Niw4IEBAIHN0YXRpYyBkZXZpY2VfbWV0aG9kX3QgeGVucGNp
X21ldGhvZHNbXSA9IHsKIAlERVZNRVRIT0QoZGV2aWNlX3Byb2JlLAkJeGVucGNpX3Byb2JlKSwK
IAlERVZNRVRIT0QoZGV2aWNlX2F0dGFjaCwJeGVucGNpX2F0dGFjaCksCiAJREVWTUVUSE9EKGRl
dmljZV9kZXRhY2gsCXhlbnBjaV9kZXRhY2gpLAotCURFVk1FVEhPRChkZXZpY2Vfc3VzcGVuZCwJ
eGVucGNpX3N1c3BlbmQpLAogCURFVk1FVEhPRChkZXZpY2VfcmVzdW1lLAl4ZW5wY2lfcmVzdW1l
KSwKIAotCS8qIEJ1cyBpbnRlcmZhY2UgKi8KLQlERVZNRVRIT0QoYnVzX2FkZF9jaGlsZCwJYnVz
X2dlbmVyaWNfYWRkX2NoaWxkKSwKLQlERVZNRVRIT0QoYnVzX2FsbG9jX3Jlc291cmNlLCAgIHhl
bnBjaV9hbGxvY19yZXNvdXJjZSksCi0JREVWTUVUSE9EKGJ1c19yZWxlYXNlX3Jlc291cmNlLCB4
ZW5wY2lfcmVsZWFzZV9yZXNvdXJjZSksCi0JREVWTUVUSE9EKGJ1c19hY3RpdmF0ZV9yZXNvdXJj
ZSwgeGVucGNpX2FjdGl2YXRlX3Jlc291cmNlKSwKLQlERVZNRVRIT0QoYnVzX2RlYWN0aXZhdGVf
cmVzb3VyY2UsIHhlbnBjaV9kZWFjdGl2YXRlX3Jlc291cmNlKSwKLQogCXsgMCwgMCB9CiB9Owog
CmRpZmYgLS1naXQgYS9zeXMveDg2L3hlbi94ZW5fbmV4dXMuYyBiL3N5cy94ODYveGVuL3hlbl9u
ZXh1cy5jCm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAwMDAuLmM1YjdjNDMKLS0tIC9k
ZXYvbnVsbAorKysgYi9zeXMveDg2L3hlbi94ZW5fbmV4dXMuYwpAQCAtMCwwICsxLDc2IEBACisv
KgorICogQ29weXJpZ2h0IChjKSAyMDEzIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRy
aXguY29tPgorICogQWxsIHJpZ2h0cyByZXNlcnZlZC4KKyAqCisgKiBSZWRpc3RyaWJ1dGlvbiBh
bmQgdXNlIGluIHNvdXJjZSBhbmQgYmluYXJ5IGZvcm1zLCB3aXRoIG9yIHdpdGhvdXQKKyAqIG1v
ZGlmaWNhdGlvbiwgYXJlIHBlcm1pdHRlZCBwcm92aWRlZCB0aGF0IHRoZSBmb2xsb3dpbmcgY29u
ZGl0aW9ucworICogYXJlIG1ldDoKKyAqIDEuIFJlZGlzdHJpYnV0aW9ucyBvZiBzb3VyY2UgY29k
ZSBtdXN0IHJldGFpbiB0aGUgYWJvdmUgY29weXJpZ2h0CisgKiAgICBub3RpY2UsIHRoaXMgbGlz
dCBvZiBjb25kaXRpb25zIGFuZCB0aGUgZm9sbG93aW5nIGRpc2NsYWltZXIuCisgKiAyLiBSZWRp
c3RyaWJ1dGlvbnMgaW4gYmluYXJ5IGZvcm0gbXVzdCByZXByb2R1Y2UgdGhlIGFib3ZlIGNvcHly
aWdodAorICogICAgbm90aWNlLCB0aGlzIGxpc3Qgb2YgY29uZGl0aW9ucyBhbmQgdGhlIGZvbGxv
d2luZyBkaXNjbGFpbWVyIGluIHRoZQorICogICAgZG9jdW1lbnRhdGlvbiBhbmQvb3Igb3RoZXIg
bWF0ZXJpYWxzIHByb3ZpZGVkIHdpdGggdGhlIGRpc3RyaWJ1dGlvbi4KKyAqCisgKiBUSElTIFNP
RlRXQVJFIElTIFBST1ZJREVEIEJZIFRIRSBBVVRIT1IgQU5EIENPTlRSSUJVVE9SUyBBUyBJUycn
IEFORAorICogQU5ZIEVYUFJFU1MgT1IgSU1QTElFRCBXQVJSQU5USUVTLCBJTkNMVURJTkcsIEJV
VCBOT1QgTElNSVRFRCBUTywgVEhFCisgKiBJTVBMSUVEIFdBUlJBTlRJRVMgT0YgTUVSQ0hBTlRB
QklMSVRZIEFORCBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRQorICogQVJFIERJU0NM
QUlNRUQuICBJTiBOTyBFVkVOVCBTSEFMTCBUSEUgQVVUSE9SIE9SIENPTlRSSUJVVE9SUyBCRSBM
SUFCTEUKKyAqIEZPUiBBTlkgRElSRUNULCBJTkRJUkVDVCwgSU5DSURFTlRBTCwgU1BFQ0lBTCwg
RVhFTVBMQVJZLCBPUiBDT05TRVFVRU5USUFMCisgKiBEQU1BR0VTIChJTkNMVURJTkcsIEJVVCBO
T1QgTElNSVRFRCBUTywgUFJPQ1VSRU1FTlQgT0YgU1VCU1RJVFVURSBHT09EUworICogT1IgU0VS
VklDRVM7IExPU1MgT0YgVVNFLCBEQVRBLCBPUiBQUk9GSVRTOyBPUiBCVVNJTkVTUyBJTlRFUlJV
UFRJT04pCisgKiBIT1dFVkVSIENBVVNFRCBBTkQgT04gQU5ZIFRIRU9SWSBPRiBMSUFCSUxJVFks
IFdIRVRIRVIgSU4gQ09OVFJBQ1QsIFNUUklDVAorICogTElBQklMSVRZLCBPUiBUT1JUIChJTkNM
VURJTkcgTkVHTElHRU5DRSBPUiBPVEhFUldJU0UpIEFSSVNJTkcgSU4gQU5ZIFdBWQorICogT1VU
IE9GIFRIRSBVU0UgT0YgVEhJUyBTT0ZUV0FSRSwgRVZFTiBJRiBBRFZJU0VEIE9GIFRIRSBQT1NT
SUJJTElUWSBPRgorICogU1VDSCBEQU1BR0UuCisgKi8KKworI2luY2x1ZGUgPHN5cy9jZGVmcy5o
PgorX19GQlNESUQoIiRGcmVlQlNEJCIpOworCisjaW5jbHVkZSA8c3lzL3BhcmFtLmg+CisjaW5j
bHVkZSA8c3lzL2J1cy5oPgorI2luY2x1ZGUgPHN5cy9rZXJuZWwuaD4KKyNpbmNsdWRlIDxzeXMv
bW9kdWxlLmg+CisjaW5jbHVkZSA8c3lzL3N5c2N0bC5oPgorI2luY2x1ZGUgPHN5cy9zeXN0bS5o
PgorI2luY2x1ZGUgPHN5cy9zbXAuaD4KKworI2luY2x1ZGUgPG1hY2hpbmUvbmV4dXN2YXIuaD4K
KworI2luY2x1ZGUgPHhlbi94ZW4tb3MuaD4KKworLyoKKyAqIFhlbiBuZXh1cyg0KSBkcml2ZXIu
CisgKi8KK3N0YXRpYyBpbnQKK25leHVzX3hlbl9wcm9iZShkZXZpY2VfdCBkZXYpCit7CisJaWYg
KCF4ZW5fcHZfZG9tYWluKCkpCisJCXJldHVybiAoRU5YSU8pOworCisJcmV0dXJuIChCVVNfUFJP
QkVfREVGQVVMVCk7Cit9CisKK3N0YXRpYyBpbnQKK25leHVzX3hlbl9hdHRhY2goZGV2aWNlX3Qg
ZGV2KQoreworCisJbmV4dXNfaW5pdF9yZXNvdXJjZXMoKTsKKwlidXNfZ2VuZXJpY19wcm9iZShk
ZXYpOworCWJ1c19nZW5lcmljX2F0dGFjaChkZXYpOworCisJcmV0dXJuIDA7Cit9CisKK3N0YXRp
YyBkZXZpY2VfbWV0aG9kX3QgbmV4dXNfeGVuX21ldGhvZHNbXSA9IHsKKwkvKiBEZXZpY2UgaW50
ZXJmYWNlICovCisJREVWTUVUSE9EKGRldmljZV9wcm9iZSwJCW5leHVzX3hlbl9wcm9iZSksCisJ
REVWTUVUSE9EKGRldmljZV9hdHRhY2gsCW5leHVzX3hlbl9hdHRhY2gpLAorCisJeyAwLCAwIH0K
K307CisKK0RFRklORV9DTEFTU18xKG5leHVzLCBuZXh1c194ZW5fZHJpdmVyLCBuZXh1c194ZW5f
bWV0aG9kcywgMSwgbmV4dXNfZHJpdmVyKTsKK3N0YXRpYyBkZXZjbGFzc190IG5leHVzX2RldmNs
YXNzOworCitEUklWRVJfTU9EVUxFKG5leHVzX3hlbiwgcm9vdCwgbmV4dXNfeGVuX2RyaXZlciwg
bmV4dXNfZGV2Y2xhc3MsIDAsIDApOwpkaWZmIC0tZ2l0IGEvc3lzL3hlbi94ZW5zdG9yZS94ZW5z
dG9yZS5jIGIvc3lzL3hlbi94ZW5zdG9yZS94ZW5zdG9yZS5jCmluZGV4IGQ0MDQ4NjIuLmI1Y2Y0
MTMgMTAwNjQ0Ci0tLSBhL3N5cy94ZW4veGVuc3RvcmUveGVuc3RvcmUuYworKysgYi9zeXMveGVu
L3hlbnN0b3JlL3hlbnN0b3JlLmMKQEAgLTEyNjEsMTEgKzEyNjEsNyBAQCBzdGF0aWMgZGV2aWNl
X21ldGhvZF90IHhlbnN0b3JlX21ldGhvZHNbXSA9IHsKIERFRklORV9DTEFTU18wKHhlbnN0b3Jl
LCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX21ldGhvZHMsIDApOwogc3RhdGljIGRldmNsYXNz
X3QgeGVuc3RvcmVfZGV2Y2xhc3M7IAogIAotI2lmZGVmIFhFTkhWTQotRFJJVkVSX01PRFVMRSh4
ZW5zdG9yZSwgeGVucGNpLCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX2RldmNsYXNzLCAwLCAw
KTsKLSNlbHNlCi1EUklWRVJfTU9EVUxFKHhlbnN0b3JlLCBuZXh1cywgeGVuc3RvcmVfZHJpdmVy
LCB4ZW5zdG9yZV9kZXZjbGFzcywgMCwgMCk7Ci0jZW5kaWYKK0RSSVZFUl9NT0RVTEUoeGVuc3Rv
cmUsIHhlbnB2LCB4ZW5zdG9yZV9kcml2ZXIsIHhlbnN0b3JlX2RldmNsYXNzLCAwLCAwKTsKIAog
LyotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tIFN5c2N0bCBEYXRhIC0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tKi8KIC8qIFhYWCBTaG91bGRuJ3QgdGhlIG5vZGUgYmUgc29t
ZXdoZXJlIGVsc2U/ICovCi0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlz
dApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sH-00064m-Bm; Tue, 14 Jan 2014 15:25:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sE-00062o-Ve
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:51 +0000
Received: from [85.158.139.211:46778] by server-2.bemta-5.messagelabs.com id
	6F/3A-29392-EF655D25; Tue, 14 Jan 2014 15:25:50 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389713135!9663875!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27853 invoked from network); 14 Jan 2014 15:25:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90590768"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:45 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T5-0006J6-21;
	Tue, 14 Jan 2014 14:59:51 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:33 +0100
Message-ID: <1389711582-66908-12-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 11/20] xen: changes to hvm code in order to
	support PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On PVH we don't need to init the shared info page, or disable emulated
devices. Also, make sure PV IPIs are set before starting the APs.
---
 sys/x86/xen/hvm.c |   17 ++++++++++++-----
 1 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/sys/x86/xen/hvm.c b/sys/x86/xen/hvm.c
index 9a0411e..fb1ed79 100644
--- a/sys/x86/xen/hvm.c
+++ b/sys/x86/xen/hvm.c
@@ -523,7 +523,7 @@ xen_setup_cpus(void)
 {
 	int i;
 
-	if (!xen_hvm_domain() || !xen_vector_callback_enabled)
+	if (!xen_vector_callback_enabled)
 		return;
 
 #ifdef __amd64__
@@ -712,10 +712,13 @@ xen_hvm_init(enum xen_hvm_init_type init_type)
 	}
 
 	xen_vector_callback_enabled = 0;
-	xen_domain_type = XEN_HVM_DOMAIN;
-	xen_hvm_init_shared_info_page();
 	xen_hvm_set_callback(NULL);
-	xen_hvm_disable_emulated_devices();
+
+	if (!xen_pv_domain()) {
+		xen_domain_type = XEN_HVM_DOMAIN;
+		xen_hvm_init_shared_info_page();
+		xen_hvm_disable_emulated_devices();
+	}
 } 
 
 void
@@ -746,6 +749,9 @@ xen_set_vcpu_id(void)
 	struct pcpu *pc;
 	int i;
 
+	if (!xen_hvm_domain())
+		return;
+
 	/* Set vcpu_id to acpi_id */
 	CPU_FOREACH(i) {
 		pc = pcpu_find(i);
@@ -789,7 +795,8 @@ xen_hvm_cpu_init(void)
 
 SYSINIT(xen_hvm_init, SI_SUB_HYPERVISOR, SI_ORDER_FIRST, xen_hvm_sysinit, NULL);
 #ifdef SMP
-SYSINIT(xen_setup_cpus, SI_SUB_SMP, SI_ORDER_FIRST, xen_setup_cpus, NULL);
+/* We need to setup IPIs before APs are started */
+SYSINIT(xen_setup_cpus, SI_SUB_SMP-1, SI_ORDER_FIRST, xen_setup_cpus, NULL);
 #endif
 SYSINIT(xen_hvm_cpu_init, SI_SUB_INTR, SI_ORDER_FIRST, xen_hvm_cpu_init, NULL);
 SYSINIT(xen_set_vcpu_id, SI_SUB_CPU, SI_ORDER_ANY, xen_set_vcpu_id, NULL);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:25:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sI-000669-Hh; Tue, 14 Jan 2014 15:25:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sG-00063l-89
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:52 +0000
Received: from [85.158.143.35:55238] by server-1.bemta-4.messagelabs.com id
	95/58-02132-FF655D25; Tue, 14 Jan 2014 15:25:51 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389713149!10386953!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10015 invoked from network); 14 Jan 2014 15:25:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709294"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:48 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T6-0006J6-3l;
	Tue, 14 Jan 2014 14:59:52 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:35 +0100
Message-ID: <1389711582-66908-14-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 13/20] xen: introduce flag to disable the
	local apic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

PVH guests don't have an emulated lapic.
---
 sys/amd64/amd64/mp_machdep.c |   10 ++++++----
 sys/amd64/include/apicvar.h  |    1 +
 sys/i386/include/apicvar.h   |    1 +
 sys/i386/xen/xen_machdep.c   |    2 ++
 sys/x86/x86/local_apic.c     |    8 +++++---
 sys/x86/xen/pv.c             |    3 +++
 6 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
index 17e957d..cea0306 100644
--- a/sys/amd64/amd64/mp_machdep.c
+++ b/sys/amd64/amd64/mp_machdep.c
@@ -707,7 +707,8 @@ init_secondary(void)
 	wrmsr(MSR_SF_MASK, PSL_NT|PSL_T|PSL_I|PSL_C|PSL_D);
 
 	/* Disable local APIC just to be sure. */
-	lapic_disable();
+	if (lapic_valid)
+		lapic_disable();
 
 	/* signal our startup to the BSP. */
 	mp_naps++;
@@ -733,7 +734,7 @@ init_secondary(void)
 
 	/* A quick check from sanity claus */
 	cpuid = PCPU_GET(cpuid);
-	if (PCPU_GET(apic_id) != lapic_id()) {
+	if (lapic_valid && (PCPU_GET(apic_id) != lapic_id())) {
 		printf("SMP: cpuid = %d\n", cpuid);
 		printf("SMP: actual apic_id = %d\n", lapic_id());
 		printf("SMP: correct apic_id = %d\n", PCPU_GET(apic_id));
@@ -749,7 +750,8 @@ init_secondary(void)
 	mtx_lock_spin(&ap_boot_mtx);
 
 	/* Init local apic for irq's */
-	lapic_setup(1);
+	if (lapic_valid)
+		lapic_setup(1);
 
 	/* Set memory range attributes for this CPU to match the BSP */
 	mem_range_AP_init();
@@ -764,7 +766,7 @@ init_secondary(void)
 	if (cpu_logical > 1 && PCPU_GET(apic_id) % cpu_logical != 0)
 		CPU_SET(cpuid, &logical_cpus_mask);
 
-	if (bootverbose)
+	if (lapic_valid && bootverbose)
 		lapic_dump("AP");
 
 	if (smp_cpus == mp_ncpus) {
diff --git a/sys/amd64/include/apicvar.h b/sys/amd64/include/apicvar.h
index e7423a3..c04a238 100644
--- a/sys/amd64/include/apicvar.h
+++ b/sys/amd64/include/apicvar.h
@@ -169,6 +169,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/include/apicvar.h b/sys/i386/include/apicvar.h
index df99ebe..ea8a3c3 100644
--- a/sys/i386/include/apicvar.h
+++ b/sys/i386/include/apicvar.h
@@ -168,6 +168,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index 09c01f1..25b9cfc 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -59,6 +59,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/intr_machdep.h>
 #include <machine/md_var.h>
 #include <machine/asmacros.h>
+#include <machine/apicvar.h>
 
 
 
@@ -912,6 +913,7 @@ initvalues(start_info_t *startinfo)
 #endif	
 	xen_start_info = startinfo;
 	HYPERVISOR_start_info = startinfo;
+	lapic_valid = false;
 	xen_phys_machine = (xen_pfn_t *)startinfo->mfn_list;
 
 	IdlePTD = (pd_entry_t *)((uint8_t *)startinfo->pt_base + PAGE_SIZE);
diff --git a/sys/x86/x86/local_apic.c b/sys/x86/x86/local_apic.c
index 41bd602..fddf1fb 100644
--- a/sys/x86/x86/local_apic.c
+++ b/sys/x86/x86/local_apic.c
@@ -156,6 +156,7 @@ extern inthand_t IDTVEC(rsvd);
 
 volatile lapic_t *lapic;
 vm_paddr_t lapic_paddr;
+bool lapic_valid = true;
 static u_long lapic_timer_divisor;
 static struct eventtimer lapic_et;
 
@@ -1367,9 +1368,10 @@ apic_setup_io(void *dummy __unused)
 	if (retval != 0)
 		printf("%s: Failed to setup I/O APICs: returned %d\n",
 		    best_enum->apic_name, retval);
-#ifdef XEN
-	return;
-#endif
+
+	if (!lapic_valid)
+		return;
+
 	/*
 	 * Finish setting up the local APIC on the BSP once we know how to
 	 * properly program the LINT pins.
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 22fd6a6..6ea1e2a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -53,6 +53,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/clock.h>
 #include <machine/pc/bios.h>
 #include <machine/smp.h>
+#include <machine/apicvar.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -315,4 +316,6 @@ xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
 	init_ops = xen_init_ops;
+	/* Disable lapic */
+	lapic_valid = false;
 }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 13/20] xen: introduce flag to disable the
	local apic
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

PVH guests don't have an emulated local APIC, so introduce a flag that
allows disabling it at run time.
---
 sys/amd64/amd64/mp_machdep.c |   10 ++++++----
 sys/amd64/include/apicvar.h  |    1 +
 sys/i386/include/apicvar.h   |    1 +
 sys/i386/xen/xen_machdep.c   |    2 ++
 sys/x86/x86/local_apic.c     |    8 +++++---
 sys/x86/xen/pv.c             |    3 +++
 6 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/sys/amd64/amd64/mp_machdep.c b/sys/amd64/amd64/mp_machdep.c
index 17e957d..cea0306 100644
--- a/sys/amd64/amd64/mp_machdep.c
+++ b/sys/amd64/amd64/mp_machdep.c
@@ -707,7 +707,8 @@ init_secondary(void)
 	wrmsr(MSR_SF_MASK, PSL_NT|PSL_T|PSL_I|PSL_C|PSL_D);
 
 	/* Disable local APIC just to be sure. */
-	lapic_disable();
+	if (lapic_valid)
+		lapic_disable();
 
 	/* signal our startup to the BSP. */
 	mp_naps++;
@@ -733,7 +734,7 @@ init_secondary(void)
 
 	/* A quick check from sanity claus */
 	cpuid = PCPU_GET(cpuid);
-	if (PCPU_GET(apic_id) != lapic_id()) {
+	if (lapic_valid && (PCPU_GET(apic_id) != lapic_id())) {
 		printf("SMP: cpuid = %d\n", cpuid);
 		printf("SMP: actual apic_id = %d\n", lapic_id());
 		printf("SMP: correct apic_id = %d\n", PCPU_GET(apic_id));
@@ -749,7 +750,8 @@ init_secondary(void)
 	mtx_lock_spin(&ap_boot_mtx);
 
 	/* Init local apic for irq's */
-	lapic_setup(1);
+	if (lapic_valid)
+		lapic_setup(1);
 
 	/* Set memory range attributes for this CPU to match the BSP */
 	mem_range_AP_init();
@@ -764,7 +766,7 @@ init_secondary(void)
 	if (cpu_logical > 1 && PCPU_GET(apic_id) % cpu_logical != 0)
 		CPU_SET(cpuid, &logical_cpus_mask);
 
-	if (bootverbose)
+	if (lapic_valid && bootverbose)
 		lapic_dump("AP");
 
 	if (smp_cpus == mp_ncpus) {
diff --git a/sys/amd64/include/apicvar.h b/sys/amd64/include/apicvar.h
index e7423a3..c04a238 100644
--- a/sys/amd64/include/apicvar.h
+++ b/sys/amd64/include/apicvar.h
@@ -169,6 +169,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/include/apicvar.h b/sys/i386/include/apicvar.h
index df99ebe..ea8a3c3 100644
--- a/sys/i386/include/apicvar.h
+++ b/sys/i386/include/apicvar.h
@@ -168,6 +168,7 @@ inthand_t
 
 extern vm_paddr_t lapic_paddr;
 extern int apic_cpuids[];
+extern bool lapic_valid;
 
 u_int	apic_alloc_vector(u_int apic_id, u_int irq);
 u_int	apic_alloc_vectors(u_int apic_id, u_int *irqs, u_int count,
diff --git a/sys/i386/xen/xen_machdep.c b/sys/i386/xen/xen_machdep.c
index 09c01f1..25b9cfc 100644
--- a/sys/i386/xen/xen_machdep.c
+++ b/sys/i386/xen/xen_machdep.c
@@ -59,6 +59,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/intr_machdep.h>
 #include <machine/md_var.h>
 #include <machine/asmacros.h>
+#include <machine/apicvar.h>
 
 
 
@@ -912,6 +913,7 @@ initvalues(start_info_t *startinfo)
 #endif	
 	xen_start_info = startinfo;
 	HYPERVISOR_start_info = startinfo;
+	lapic_valid = false;
 	xen_phys_machine = (xen_pfn_t *)startinfo->mfn_list;
 
 	IdlePTD = (pd_entry_t *)((uint8_t *)startinfo->pt_base + PAGE_SIZE);
diff --git a/sys/x86/x86/local_apic.c b/sys/x86/x86/local_apic.c
index 41bd602..fddf1fb 100644
--- a/sys/x86/x86/local_apic.c
+++ b/sys/x86/x86/local_apic.c
@@ -156,6 +156,7 @@ extern inthand_t IDTVEC(rsvd);
 
 volatile lapic_t *lapic;
 vm_paddr_t lapic_paddr;
+bool lapic_valid = true;
 static u_long lapic_timer_divisor;
 static struct eventtimer lapic_et;
 
@@ -1367,9 +1368,10 @@ apic_setup_io(void *dummy __unused)
 	if (retval != 0)
 		printf("%s: Failed to setup I/O APICs: returned %d\n",
 		    best_enum->apic_name, retval);
-#ifdef XEN
-	return;
-#endif
+
+	if (!lapic_valid)
+		return;
+
 	/*
 	 * Finish setting up the local APIC on the BSP once we know how to
 	 * properly program the LINT pins.
diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
index 22fd6a6..6ea1e2a 100644
--- a/sys/x86/xen/pv.c
+++ b/sys/x86/xen/pv.c
@@ -53,6 +53,7 @@ __FBSDID("$FreeBSD$");
 #include <machine/clock.h>
 #include <machine/pc/bios.h>
 #include <machine/smp.h>
+#include <machine/apicvar.h>
 
 #include <xen/xen-os.h>
 #include <xen/hypervisor.h>
@@ -315,4 +316,6 @@ xen_pv_set_init_ops(void)
 {
 	/* Init ops for Xen PV */
 	init_ops = xen_init_ops;
+	/* Disable lapic */
+	lapic_valid = false;
 }
-- 
1.7.7.5 (Apple Git-26)



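The pattern in the patch above, replacing a compile-time `#ifdef XEN` guard with the runtime `lapic_valid` flag, can be sketched in a self-contained way. This is a hypothetical simplified model (stub names and counters are invented for illustration), not the actual FreeBSD kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Runtime flag replacing the compile-time #ifdef XEN guard: true on
 * bare metal, cleared by the Xen PV/PVH init path. */
static bool lapic_valid = true;
static int lapic_setup_calls = 0;           /* counts real lapic setups */

static void lapic_setup(void) { lapic_setup_calls++; }

/* Analogue of xen_pv_set_init_ops(): a PV guest disables the lapic. */
static void xen_pv_set_init_ops(void) { lapic_valid = false; }

/* Analogue of apic_setup_io(): bail out early when no lapic exists.
 * Returns 1 if the lapic was programmed, 0 if it was skipped. */
static int apic_setup_io(void)
{
    if (!lapic_valid)
        return 0;       /* PV/PVH guest: nothing to program */
    lapic_setup();
    return 1;
}
```

The same flag gates every lapic touch point (`lapic_disable()`, `lapic_setup()`, `lapic_dump()`) in the real patch, so a single boolean decided at early boot replaces scattered preprocessor conditionals.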
From xen-devel-bounces@lists.xen.org Tue Jan 14 15:26:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sR-0006Bq-Kx; Tue, 14 Jan 2014 15:26:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sN-0006AA-PC
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:25:59 +0000
Received: from [85.158.143.35:8607] by server-1.bemta-4.messagelabs.com id
	EE/88-02132-70755D25; Tue, 14 Jan 2014 15:25:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389713157!11531267!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8666 invoked from network); 14 Jan 2014 15:25:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709345"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:56 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T8-0006J6-Bu;
	Tue, 14 Jan 2014 14:59:54 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:39 +0100
Message-ID: <1389711582-66908-18-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 17/20] xen: add shutdown hook for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the PV shutdown hook to PVH guests, replacing the compile-time
XENHVM check with a runtime domain-type check.
---
 sys/dev/xen/control/control.c |   37 ++++++++++++++++++-------------------
 1 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/sys/dev/xen/control/control.c b/sys/dev/xen/control/control.c
index bc0609d..78894ba 100644
--- a/sys/dev/xen/control/control.c
+++ b/sys/dev/xen/control/control.c
@@ -316,21 +316,6 @@ xctrl_suspend()
 	EVENTHANDLER_INVOKE(power_resume);
 }
 
-static void
-xen_pv_shutdown_final(void *arg, int howto)
-{
-	/*
-	 * Inform the hypervisor that shutdown is complete.
-	 * This is not necessary in HVM domains since Xen
-	 * emulates ACPI in that mode and FreeBSD's ACPI
-	 * support will request this transition.
-	 */
-	if (howto & (RB_HALT | RB_POWEROFF))
-		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
-	else
-		HYPERVISOR_shutdown(SHUTDOWN_reboot);
-}
-
 #else
 
 /* HVM mode suspension. */
@@ -440,6 +425,21 @@ xctrl_crash()
 	panic("Xen directed crash");
 }
 
+static void
+xen_pv_shutdown_final(void *arg, int howto)
+{
+	/*
+	 * Inform the hypervisor that shutdown is complete.
+	 * This is not necessary in HVM domains since Xen
+	 * emulates ACPI in that mode and FreeBSD's ACPI
+	 * support will request this transition.
+	 */
+	if (howto & (RB_HALT | RB_POWEROFF))
+		HYPERVISOR_shutdown(SHUTDOWN_poweroff);
+	else
+		HYPERVISOR_shutdown(SHUTDOWN_reboot);
+}
+
 /*------------------------------ Event Reception -----------------------------*/
 static void
 xctrl_on_watch_event(struct xs_watch *watch, const char **vec, unsigned int len)
@@ -522,10 +522,9 @@ xctrl_attach(device_t dev)
 	xctrl->xctrl_watch.callback_data = (uintptr_t)xctrl;
 	xs_register_watch(&xctrl->xctrl_watch);
 
-#ifndef XENHVM
-	EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
-			      SHUTDOWN_PRI_LAST);
-#endif
+	if (xen_pv_domain())
+		EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
+		                      SHUTDOWN_PRI_LAST);
 
 	return (0);
 }
-- 
1.7.7.5 (Apple Git-26)



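The control-driver change above moves handler registration from an `#ifndef XENHVM` block to a runtime `xen_pv_domain()` check. A minimal sketch of that shape, with hypothetical stand-ins for the domain-type query and `EVENTHANDLER_REGISTER`:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified domain-type model; the real kernel exposes
 * xen_pv_domain()/xen_hvm_domain() predicates. */
enum domain_type { DOM_NATIVE, DOM_XEN_PV, DOM_XEN_HVM };
static enum domain_type dom = DOM_NATIVE;

static bool xen_pv_domain(void) { return dom == DOM_XEN_PV; }

static int shutdown_hooks = 0;  /* stands in for EVENTHANDLER_REGISTER */
static void eventhandler_register_shutdown(void) { shutdown_hooks++; }

/* Analogue of xctrl_attach(): only PV(H) domains register the
 * xen_pv_shutdown_final hook; HVM relies on emulated ACPI instead. */
static void xctrl_attach(void)
{
    if (xen_pv_domain())
        eventhandler_register_shutdown();
}
```

The design point is that one kernel binary can now serve HVM and PV/PVH guests, deciding at attach time rather than at build time whether the hypervisor must be told about shutdown explicitly.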
From xen-devel-bounces@lists.xen.org Tue Jan 14 15:26:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sS-0006D6-4x; Tue, 14 Jan 2014 15:26:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sO-0006B4-VH
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:26:01 +0000
Received: from [85.158.143.35:8730] by server-1.bemta-4.messagelabs.com id
	88/98-02132-80755D25; Tue, 14 Jan 2014 15:26:00 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389713157!11531267!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9265 invoked from network); 14 Jan 2014 15:25:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:25:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92709361"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:25:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:25:58 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T4-0006J6-HA;
	Tue, 14 Jan 2014 14:59:50 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:32 +0100
Message-ID: <1389711582-66908-11-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 10/20] xen: add hook for AP bootstrap memory
	reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This hook will only be implemented for bare metal; Xen doesn't require
any AP bootstrap code, since APs are started in long mode with paging
enabled.
---
 sys/amd64/amd64/machdep.c   |    6 +++++-
 sys/amd64/include/sysarch.h |    1 +
 2 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/sys/amd64/amd64/machdep.c b/sys/amd64/amd64/machdep.c
index 64df89a..3a2db30 100644
--- a/sys/amd64/amd64/machdep.c
+++ b/sys/amd64/amd64/machdep.c
@@ -178,6 +178,9 @@ struct init_ops init_ops = {
 	.early_delay_init =	i8254_init,
 	.early_delay =		i8254_delay,
 	.parse_memmap =		native_parse_memmap,
+#ifdef SMP
+	.mp_bootaddress =	mp_bootaddress,
+#endif
 };
 
 /*
@@ -1492,7 +1495,8 @@ getmemsize(caddr_t kmdp, u_int64_t first)
 
 #ifdef SMP
 	/* make hole for AP bootstrap code */
-	physmap[1] = mp_bootaddress(physmap[1] / 1024);
+	if (init_ops.mp_bootaddress)
+		physmap[1] = init_ops.mp_bootaddress(physmap[1] / 1024);
 #endif
 
 	/*
diff --git a/sys/amd64/include/sysarch.h b/sys/amd64/include/sysarch.h
index 084223e..7696064 100644
--- a/sys/amd64/include/sysarch.h
+++ b/sys/amd64/include/sysarch.h
@@ -16,6 +16,7 @@ struct init_ops {
 	void	(*early_delay_init)(void);
 	void	(*early_delay)(int);
 	void	(*parse_memmap)(caddr_t, vm_paddr_t *, int *);
+	u_int	(*mp_bootaddress)(u_int);
 };
 
 extern struct init_ops init_ops;
-- 
1.7.7.5 (Apple Git-26)



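The `init_ops.mp_bootaddress` hook above is an optional function pointer: bare metal fills it in, Xen leaves it NULL, and the caller tests before dispatching. A self-contained sketch of that pattern (simplified struct and invented reservation size, not the real `mp_bootaddress` logic):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u_int;

/* Hypothetical trimmed-down init_ops with only the new optional hook. */
struct init_ops {
    u_int (*mp_bootaddress)(u_int);
};

/* Bare-metal implementation: pretend to carve 4 KB off the top of base
 * memory for the AP bootstrap trampoline. */
static u_int mp_bootaddress(u_int basemem_kb) { return basemem_kb - 4; }

static struct init_ops native_ops = { .mp_bootaddress = mp_bootaddress };
static struct init_ops xen_ops    = { .mp_bootaddress = NULL };

/* Analogue of the getmemsize() hunk: only call the hook when set. */
static u_int reserve_ap_bootstrap(struct init_ops *ops, u_int basemem_kb)
{
    if (ops->mp_bootaddress)
        return ops->mp_bootaddress(basemem_kb);
    return basemem_kb;  /* Xen: APs start in long mode, no trampoline */
}
```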
From xen-devel-bounces@lists.xen.org Tue Jan 14 15:26:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35sU-0006Gs-Ba; Tue, 14 Jan 2014 15:26:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W35sT-0006ES-6U
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:26:05 +0000
Received: from [193.109.254.147:64142] by server-3.bemta-14.messagelabs.com id
	81/3F-11000-B0755D25; Tue, 14 Jan 2014 15:26:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389713162!10837737!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30033 invoked from network); 14 Jan 2014 15:26:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90590893"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:26:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:26:00 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W35T8-0006J6-SQ;
	Tue, 14 Jan 2014 14:59:54 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <freebsd-xen@freebsd.org>, <freebsd-current@freebsd.org>,
	<xen-devel@lists.xen.org>, <gibbs@freebsd.org>, <jhb@freebsd.org>,
	<kib@freebsd.org>, <julien.grall@citrix.com>
Date: Tue, 14 Jan 2014 15:59:40 +0100
Message-ID: <1389711582-66908-19-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v10 18/20] xen: xenstore changes to support PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 sys/xen/xenstore/xenstore.c |   18 +++++++++---------
 1 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/sys/xen/xenstore/xenstore.c b/sys/xen/xenstore/xenstore.c
index b5cf413..7fa08cc 100644
--- a/sys/xen/xenstore/xenstore.c
+++ b/sys/xen/xenstore/xenstore.c
@@ -229,13 +229,11 @@ struct xs_softc {
 	 */
 	struct sx xenwatch_mutex;
 
-#ifdef XENHVM
 	/**
 	 * The HVM guest pseudo-physical frame number.  This is Xen's mapping
 	 * of the true machine frame number into our "physical address space".
 	 */
 	unsigned long gpfn;
-#endif
 
 	/**
 	 * The event channel for communicating with the
@@ -1147,13 +1145,15 @@ xs_attach(device_t dev)
 	/* Initialize the interface to xenstore. */
 	struct proc *p;
 
-#ifdef XENHVM
-	xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
-	xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
-	xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
-#else
-	xs.evtchn = xen_start_info->store_evtchn;
-#endif
+	if (xen_hvm_domain()) {
+		xs.evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
+		xs.gpfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
+		xen_store = pmap_mapdev(xs.gpfn * PAGE_SIZE, PAGE_SIZE);
+	} else if (xen_pv_domain()) {
+		xs.evtchn = HYPERVISOR_start_info->store_evtchn;
+	} else {
+		panic("Unknown domain type, cannot initialize xenstore\n");
+	}
 
 	TAILQ_INIT(&xs.reply_list);
 	TAILQ_INIT(&xs.watch_events);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:27:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35u5-00074F-EA; Tue, 14 Jan 2014 15:27:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35u2-00072n-NS
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:27:42 +0000
Received: from [193.109.254.147:58413] by server-8.bemta-14.messagelabs.com id
	26/11-30921-A6755D25; Tue, 14 Jan 2014 15:27:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389713256!10757673!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21082 invoked from network); 14 Jan 2014 15:27:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:27:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92710136"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:27:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:27:34 -0500
Message-ID: <1389713254.12434.90.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 14 Jan 2014 15:27:34 +0000
In-Reply-To: <52D55061.2020900@citrix.com>
References: <52D55061.2020900@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 14:57 +0000, Andrew Cooper wrote:
> As a result,  it is not obvious how best to fix this with backwards
> compatibility in mind. 

You clearly aren't devious enough ;-)

It's pretty nasty, and it might turn out to take us more than one
release to resolve, meaning we might have to insert something into the
save stream in one release but not be able to fully use it until the
release after, but I think it can be done in one release...

Given that the very first thing in the migration stream is "unsigned
long p2m_size" we pretty much have to play some horrible trick with it.

For example, what if we say that if the first 4 bytes of the stream are
0xffffffff then the stream is in "64-bit clean" mode -- which means that
all subsequent unsigned longs are actually 64-bit, including p2m_size
which follows immediately after the 0xffffffff magic number. If the
first 4 bytes are not 0xffffffff then this is a normal native word size
stream and those are 4 bytes of the p2m size.

On a 64-bit restorer you would have to read 4 bytes and if it is not
0xffffffff read another 4 and combine them to get the actual p2m size,
if the first 4 are 0xffffffff then you continue as normal with an 8 byte
read to get the p2m size.

On a 32-bit restorer, well, I guess you get the idea.

It's gross, but backwards compat can be like that...

Slightly more flexible would be to take 4-bytes = 0xffffffff to indicate
that an "extended-info" section follows, containing a non-optional
non-PV specific "stream info" block, which initially would simply
indicate that the stream was either 64-bit clean or would indicate what
sizeof(unsigned long) it uses. Since you would naturally design this
"stream info" block to be extensible it could be used in the future to
dig out of other holes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:28:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35v4-0007Yz-18; Tue, 14 Jan 2014 15:28:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35v0-0007XY-Dm
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 15:28:42 +0000
Received: from [85.158.139.211:27477] by server-16.bemta-5.messagelabs.com id
	6D/DE-11843-9A755D25; Tue, 14 Jan 2014 15:28:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389713319!9531080!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10315 invoked from network); 14 Jan 2014 15:28:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:28:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92710554"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:28:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:28:37 -0500
Message-ID: <1389713316.12434.91.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 14 Jan 2014 15:28:36 +0000
In-Reply-To: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> Except grant-table (I can't find {get,put}_page for grant-table code???),

I think they are in __gnttab_map_grant_ref, within __get_paged_frame or
through page_get_owner_and_reference.

and on unmap it is in __gnttab_unmap_common_complete.

It's a bit of a complex maze though so I'm not entirely sure, perhaps
Tim, Keir or Jan can confirm that a grant mapping always takes a
reference on the mapped page (it seems like PV x86 ought to be relying
on this for safety anyhow).

I think the flush in alloc_heap_pages would also serve as a backstop,
wouldn't it?

> all the callers are protected by a get_page before removing the page. So if the
> another VCPU is trying to access to this page before the flush, it will just
> read/write the wrong page.
> 
> The downside of this patch is Xen flushes less TLBs. Instead of flushing all TLBs
> on the current PCPU, Xen flushes TLBs for a specific VMID on every CPUs. This
> should be safe because create_p2m_entries only deal with a specific domain.
> 
> I don't think I forget case in this function. Let me know if it's the case.
> ---
>  xen/arch/arm/p2m.c |   24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 11f4714..ad6f76e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -238,7 +238,7 @@ static int create_p2m_entries(struct domain *d,
>                       int mattr,
>                       p2m_type_t t)
>  {
> -    int rc, flush;
> +    int rc;
>      struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t *first = NULL, *second = NULL, *third = NULL;
>      paddr_t addr;
> @@ -246,10 +246,14 @@ static int create_p2m_entries(struct domain *d,
>                    cur_first_offset = ~0,
>                    cur_second_offset = ~0;
>      unsigned long count = 0;
> +    unsigned int flush = 0;
>      bool_t populate = (op == INSERT || op == ALLOCATE);
>  
>      spin_lock(&p2m->lock);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(d);
> +
>      addr = start_gpaddr;
>      while ( addr < end_gpaddr )
>      {
> @@ -316,7 +320,7 @@ static int create_p2m_entries(struct domain *d,
>              cur_second_offset = second_table_offset(addr);
>          }
>  
> -        flush = third[third_table_offset(addr)].p2m.valid;
> +        flush |= third[third_table_offset(addr)].p2m.valid;
>  
>          /* Allocate a new RAM page and attach */
>          switch (op) {
> @@ -373,9 +377,6 @@ static int create_p2m_entries(struct domain *d,
>                  break;
>          }
>  
> -        if ( flush )
> -            flush_tlb_all_local();
> -
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
>          if ( op == RELINQUISH && count >= 0x2000 )
>          {
> @@ -392,6 +393,16 @@ static int create_p2m_entries(struct domain *d,
>          addr += PAGE_SIZE;
>      }
>  
> +    if ( flush )
> +    {
> +        /* At the beginning of the function, Xen is updating VTTBR
> +         * with the domain where the mappings are created. In this
> +         * case it's only necessary to flush TLBs on every CPUs with
> +         * the current VMID (our domain).
> +         */
> +        flush_tlb();
> +    }
> +
>      if ( op == ALLOCATE || op == INSERT )
>      {
>          unsigned long sgfn = paddr_to_pfn(start_gpaddr);
> @@ -409,6 +420,9 @@ out:
>      if (second) unmap_domain_page(second);
>      if (first) unmap_domain_page(first);
>  
> +    if ( d != current->domain )
> +        p2m_load_VTTBR(current->domain);
> +
>      spin_unlock(&p2m->lock);
>  
>      return rc;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W35ww-00084Y-9N; Tue, 14 Jan 2014 15:30:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W35wt-00084I-P8
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:30:39 +0000
Received: from [85.158.143.35:41901] by server-1.bemta-4.messagelabs.com id
	69/21-02132-F1855D25; Tue, 14 Jan 2014 15:30:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389713437!11633115!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25080 invoked from network); 14 Jan 2014 15:30:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:30:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92711504"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 15:30:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:30:36 -0500
Message-ID: <1389713435.12434.93.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anil Madhavapeddy <anil@recoil.org>
Date: Tue, 14 Jan 2014 15:30:35 +0000
In-Reply-To: <20140114152303.GA19990@dark.recoil.org>
References: <20140111233325.GA30303@dark.recoil.org>
	<6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
	<1389710629.12434.64.camel@kazak.uk.xensource.com>
	<20140114152303.GA19990@dark.recoil.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific
 functions behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 15:23 +0000, Anil Madhavapeddy wrote:

> > Perhaps CAMLlocal2 both defines and references the variables keeping
> > this issue at bay?
> 
> That's right.  CAMLlocal2 creates a stack variable and registers it with
> the garbage collector as a root (to ensure that it's not collected during
> the lifetime of the function).  This keeps it live and always used from
> the perspective of the C compiler.

Thanks.

> Yeah, I'm not aware of any compiler that doesn't respect the noreturn
> attribute and also emits unused variable warnings. I didn't modify the
> CAMLreturn in favour of minimising the x86/ARM differences, but you could
> modify the #endif to be an #else/#endif to only return on x86.  I'd prefer
> to keep these bindings as straight-line as possible for the 4.4 release
> though, and to refactor oxenstored to not depend on them at all in the
> future (it only uses a small part of libxc and these cpuid functions
> aren't used at all).

Thanks, I'm convinced by that argument; this can go in after rc2 is cut.

Release-ack: Ian Campbell

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:39:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W365Q-0000C3-JY; Tue, 14 Jan 2014 15:39:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ross.Philipson@citrix.com>) id 1W365O-0000BQ-1a
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:39:26 +0000
Received: from [85.158.143.35:64567] by server-1.bemta-4.messagelabs.com id
	C9/90-02132-D2A55D25; Tue, 14 Jan 2014 15:39:25 +0000
X-Env-Sender: Ross.Philipson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389713963!11535131!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20656 invoked from network); 14 Jan 2014 15:39:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:39:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90597649"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 15:39:22 +0000
Received: from [IPv6:::1] (10.204.206.105) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 10:39:22 -0500
Message-ID: <52D55A27.4090103@citrix.com>
Date: Tue, 14 Jan 2014 10:39:19 -0500
From: Ross Philipson <ross.philipson@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D45075EA57@G4W3221.americas.hpqcorp.net>
	<1389102962.12612.19.camel@kazak.uk.xensource.com>
	<92B37F2487AE0841841737618F25AC1A1918AABE@FTLPEX01CL03.citrite.net>
	<1389261984.27473.46.camel@kazak.uk.xensource.com>
	<52CEB663.4070609@citrix.com>	<20140109190049.GB17806@pegasus.dumpdata.com>
	<1389349212.19142.21.camel@kazak.uk.xensource.com>
	<52D002D6.7090306@citrix.com>
	<1389366233.19142.60.camel@kazak.uk.xensource.com>
	<52D058F3.5080504@citrix.com>
	<1389608732.13654.5.camel@kazak.uk.xensource.com>
	<21203.52907.856721.125370@mariner.uk.xensource.com>
	<52D55612.8060501@citrix.com>
	<1389713030.12434.86.camel@kazak.uk.xensource.com>
In-Reply-To: <1389713030.12434.86.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.204.206.105]
X-DLP: MIA1
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] passing smbios table from qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 10:23 AM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 10:21 -0500, Ross Philipson wrote:
>> On 01/13/2014 06:31 AM, Ian Jackson wrote:
>>> Ian Campbell writes ("Re: [Xen-devel] passing smbios table from qemu"):
>>>> On Fri, 2014-01-10 at 15:32 -0500, Ross Philipson wrote:
>>>>> On 01/10/2014 10:03 AM, Ian Campbell wrote:
>>>>>> Fitting in with libxl proper would require the API to look a certain way
>>>>>> (take a context etc), so perhaps libxlu would be more appropriate,
>>>>>> alongside the disk syntax parser etc?
>>>>>
>>>>> Possibly. I looked at that back then (and today again) and it seemed to
>>>>> all be related to parsing things into XLU_Config objects. I guess I did
>>>>> not have a good feel for what libxlu was supposed to be. If it is
>>>>> supposed to be a generic library of auxiliary toolstack functionality
>>>>> then I think it would be a good place.
>>>>
>>>> I think that (aux functionality) was the intention -- the fact that it
>>>> is all parsing stuff right now is just a coincidence.
>>
>> Ok then libxlu sounds like a reasonable place for this. I will submit a
>> patch.
>
> Thanks. It's probably a bit late to squeeze this in for 4.4, so you've
> got a little while until 4.5 development opens.

Right, I will shoot for 4.5 then - thanks.

>
> Ian.
>
>


-- 
Ross Philipson

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 15:41:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 15:41:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W367f-0000Lr-3e; Tue, 14 Jan 2014 15:41:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W367d-0000LY-CI
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 15:41:45 +0000
Received: from [85.158.137.68:55338] by server-9.bemta-3.messagelabs.com id
	47/CF-13104-8BA55D25; Tue, 14 Jan 2014 15:41:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389714103!9047508!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23264 invoked from network); 14 Jan 2014 15:41:43 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 15:41:43 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so2747026wib.4
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 07:41:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=eZAf2Jsq8X2YtooXl7ODA3aJHw4fEtu9rY90GKEFVoY=;
	b=h7qLKydLbhOFRZfMAfhWXAnNKKgEOvNGaH2OQWBQrTD7Patv9h5hRPQ6CzBFv6RbOr
	D3aQUMGxwg2lCUC6j24WueS7kvg1XnzRl2CtjaOEw4FbbYQz/1oWU1n10ZboXLL59KbD
	ZNRGEQyxzQHSIhlk5AXXhJs/Pwgovv3s+grR2BlGQVYAEoj7ZueZ/m1of4f8kQztcQMj
	vDHFgqLTEkMOKgqI/RE+QSA5NSiZV/Sf6wNARqmPiMIw2T9QLkz60HwXr2F5flzRtzMH
	GoN0/OWUOitRwuWrLnQXjvgIBXY6nNLg+pGRPtK42WrxRxwJNkJPhlFxmQq1fiPk7Ced
	++Hw==
X-Gm-Message-State: ALoCoQk+BQ5iU5NSiveXj5Oga9T9JCBLQNZ5x41w9kRrjNPNPyG99KxnmBHjllh/724GKS08bymg
X-Received: by 10.180.8.194 with SMTP id t2mr3606396wia.41.1389714103256;
	Tue, 14 Jan 2014 07:41:43 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f7sm839362wjb.7.2014.01.14.07.41.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 07:41:42 -0800 (PST)
Message-ID: <52D55AB4.4010504@linaro.org>
Date: Tue, 14 Jan 2014 15:41:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
	<1389711582-66908-15-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1389711582-66908-15-git-send-email-roger.pau@citrix.com>
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 02:59 PM, Roger Pau Monne wrote:
> +static int
> +xenpv_attach(device_t dev)
> +{
> +	device_t child;
> +
> +	if (xen_hvm_domain()) {
> +		device_t xenpci;
> +		devclass_t dc;
> +
> +		/* Make sure xenpci has been attached */
> +		dc = devclass_find("xenpci");
> +		if (dc == NULL)
> +			panic("unable to find xenpci devclass");
> +
> +		xenpci = devclass_get_device(dc, 0);
> +		if (xenpci == NULL)
> +			panic("unable to find xenpci device");
> +
> +		if (!device_is_attached(xenpci))
> +			panic("trying to attach xenpv before xenpci");
> +	}

Can you use the identify method to add the xenpci device?

As I said earlier, I will reuse this code for ARM guest and this device
is not used on this architecture.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:06:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36V2-0002C0-FD; Tue, 14 Jan 2014 16:05:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W36V0-0002Bv-Rb
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:05:55 +0000
Received: from [85.158.139.211:27871] by server-16.bemta-5.messagelabs.com id
	91/1F-11843-26065D25; Tue, 14 Jan 2014 16:05:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389715553!9540292!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1222 invoked from network); 14 Jan 2014 16:05:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Jan 2014 16:05:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Jan 2014 16:05:52 +0000
Message-Id: <52D56E6E02000078001138D2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 14 Jan 2014 16:05:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <52D55061.2020900@citrix.com>
In-Reply-To: <52D55061.2020900@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.01.14 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> As part of XenServer's attempt to move to a 64bit dom0, we have
> encountered a sizeable flaw in xc_domain_{save,restore}().
> 
> Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:
> 
> xc: detail: xc_domain_restore: starting restore of new domid 1
> xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
> xc: error: Couldn't allocate p2m_frame_list array: Internal error
> xc: detail: Restore exit of domid 1 with rc=1
> 
> This is caused because of
> 
> RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))
> 
> where sizeof(unsigned long) is different between the source and destination.
> 
> 
> It is unreasonable for the format of the migration stream to rely on the
> bitness of the toolstack, which should be completely transparent as far
> as "motion of a VM" is concerned.  Furthermore, the same issue occurs
> with suspend/resume where the stream gets written to a file in the meantime.
> 
> A quick grep across the code shows several other items in the migration
> stream which depend on toolstack bitness.
> 
> There is no way to divine whether the far side of the migration stream
> is 32 or 64 bit, which is now vital information required to read the
> stream correctly.

And I think, even if x86 doesn't care, differing endianness should
be dealt with at the same time.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:06:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36V2-0002C0-FD; Tue, 14 Jan 2014 16:05:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W36V0-0002Bv-Rb
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:05:55 +0000
Received: from [85.158.139.211:27871] by server-16.bemta-5.messagelabs.com id
	91/1F-11843-26065D25; Tue, 14 Jan 2014 16:05:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389715553!9540292!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1222 invoked from network); 14 Jan 2014 16:05:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Jan 2014 16:05:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 14 Jan 2014 16:05:52 +0000
Message-Id: <52D56E6E02000078001138D2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 14 Jan 2014 16:05:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <52D55061.2020900@citrix.com>
In-Reply-To: <52D55061.2020900@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.01.14 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> As part of XenServer's attempt to move to a 64bit dom0, we have
> encountered a sizeable flaw in xc_domain_{save,restore}().
> 
> Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:
> 
> xc: detail: xc_domain_restore: starting restore of new domid 1
> xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
> xc: error: Couldn't allocate p2m_frame_list array: Internal error
> xc: detail: Restore exit of domid 1 with rc=1
> 
> This is caused by
> 
> RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))
> 
> where sizeof(unsigned long) is different between the source and destination.
> 
> 
> It is unreasonable for the format of the migration stream to rely on the
> bitness of the toolstack, which should be completely transparent as far
> as "motion of a VM" is concerned.  Furthermore, the same issue occurs
> with suspend/resume where the stream gets written to a file in the meantime.
> 
> A quick grep across the code shows several other items in the migration
> stream which depend on toolstack bitness.
> 
> There is no way to divine whether the far side of the migration stream
> is 32 or 64 bit, which is now vital information required to read the
> stream correctly.

And I think, even if x86 doesn't care, differing endianness should
be dealt with at the same time.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:08:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36XK-0002Jp-CC; Tue, 14 Jan 2014 16:08:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W36XJ-0002Jj-FX
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:08:17 +0000
Received: from [193.109.254.147:32550] by server-11.bemta-14.messagelabs.com
	id 4D/E1-20576-0F065D25; Tue, 14 Jan 2014 16:08:16 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389715695!8499743!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14939 invoked from network); 14 Jan 2014 16:08:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:08:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90613276"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 16:08:13 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:08:12 -0500
Message-ID: <52D560EB.8040108@citrix.com>
Date: Tue, 14 Jan 2014 17:08:11 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
	<1389711582-66908-15-git-send-email-roger.pau@citrix.com>
	<52D55AB4.4010504@linaro.org>
In-Reply-To: <52D55AB4.4010504@linaro.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: jhb@freebsd.org, xen-devel@lists.xen.org, julien.grall@citrix.com,
	freebsd-xen@freebsd.org, freebsd-current@freebsd.org,
	kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 16:41, Julien Grall wrote:
> On 01/14/2014 02:59 PM, Roger Pau Monne wrote:
>> +static int
>> +xenpv_attach(device_t dev)
>> +{
>> +	device_t child;
>> +
>> +	if (xen_hvm_domain()) {
>> +		device_t xenpci;
>> +		devclass_t dc;
>> +
>> +		/* Make sure xenpci has been attached */
>> +		dc = devclass_find("xenpci");
>> +		if (dc == NULL)
>> +			panic("unable to find xenpci devclass");
>> +
>> +		xenpci = devclass_get_device(dc, 0);
>> +		if (xenpci == NULL)
>> +			panic("unable to find xenpci device");
>> +
>> +		if (!device_is_attached(xenpci))
>> +			panic("trying to attach xenpv before xenpci");
>> +	}
> 
> Can you use the identify method to add the xenpci device?

I don't think so; xenpci is a PCI device and is detected and plugged in
by the PCI bus code.

> As I said earlier, I will reuse this code for ARM guest and this device
> is not used on this architecture.

You could move this chunk of code (the check for xenpci) to a static
inline function and make it a noop for ARM.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:14:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36da-0002oO-ST; Tue, 14 Jan 2014 16:14:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1W36dZ-0002oJ-IV
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:14:45 +0000
Received: from [193.109.254.147:27481] by server-5.bemta-14.messagelabs.com id
	93/CB-03510-47265D25; Tue, 14 Jan 2014 16:14:44 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389716082!10835662!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31409 invoked from network); 14 Jan 2014 16:14:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:14:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92734889"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:14:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:14:21 -0500
Received: from chilopoda.uk.xensource.com ([10.80.2.139])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1W36dB-0007ja-Qb;
	Tue, 14 Jan 2014 16:14:21 +0000
Message-ID: <52D5625D.7030702@citrix.com>
Date: Tue, 14 Jan 2014 16:14:21 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
	<1389711582-66908-15-git-send-email-roger.pau@citrix.com>
	<52D55AB4.4010504@linaro.org> <52D560EB.8040108@citrix.com>
In-Reply-To: <52D560EB.8040108@citrix.com>
X-DLP: MIA1
Cc: jhb@freebsd.org, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDEvMTQvMjAxNCAwNDowOCBQTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToKPiBPbiAxNC8w
MS8xNCAxNjo0MSwgSnVsaWVuIEdyYWxsIHdyb3RlOgo+PiBPbiAwMS8xNC8yMDE0IDAyOjU5IFBN
LCBSb2dlciBQYXUgTW9ubmUgd3JvdGU6Cj4+PiArc3RhdGljIGludAo+Pj4gK3hlbnB2X2F0dGFj
aChkZXZpY2VfdCBkZXYpCj4+PiArewo+Pj4gKwlkZXZpY2VfdCBjaGlsZDsKPj4+ICsKPj4+ICsJ
aWYgKHhlbl9odm1fZG9tYWluKCkpIHsKPj4+ICsJCWRldmljZV90IHhlbnBjaTsKPj4+ICsJCWRl
dmNsYXNzX3QgZGM7Cj4+PiArCj4+PiArCQkvKiBNYWtlIHN1cmUgeGVucGNpIGhhcyBiZWVuIGF0
dGFjaGVkICovCj4+PiArCQlkYyA9IGRldmNsYXNzX2ZpbmQoInhlbnBjaSIpOwo+Pj4gKwkJaWYg
KGRjID09IE5VTEwpCj4+PiArCQkJcGFuaWMoInVuYWJsZSB0byBmaW5kIHhlbnBjaSBkZXZjbGFz
cyIpOwo+Pj4gKwo+Pj4gKwkJeGVucGNpID0gZGV2Y2xhc3NfZ2V0X2RldmljZShkYywgMCk7Cj4+
PiArCQlpZiAoeGVucGNpID09IE5VTEwpCj4+PiArCQkJcGFuaWMoInVuYWJsZSB0byBmaW5kIHhl
bnBjaSBkZXZpY2UiKTsKPj4+ICsKPj4+ICsJCWlmICghZGV2aWNlX2lzX2F0dGFjaGVkKHhlbnBj
aSkpCj4+PiArCQkJcGFuaWMoInRyeWluZyB0byBhdHRhY2ggeGVucHYgYmVmb3JlIHhlbnBjaSIp
Owo+Pj4gKwl9Cj4+Cj4+IENhbiB5b3UgdXNlIHRoZSBpZGVudGlmeSBtZXRob2QgdG8gYWRkIHRo
ZSB4ZW5wY2kgZGV2aWNlPwo+IAo+IEkgZG9uJ3QgdGhpbmsgc28sIHhlbnBjaSBpcyBhIHBjaSBk
ZXZpY2UsIGl0IGlzIGRldGVjdGVkIGFuZCBwbHVnZ2VkIGJ5Cj4gdGhlIHBjaSBidXMgY29kZS4K
Ck91cHMsIEkgdGhvdWdoIHlvdSBhcmUgdHJ5aW5nIHRvIGFkZCB0aGUgZGV2aWNlLiBJbiB0aGlz
IGNhc2UsIHRoZSBjaGVjawpzZWVtcyBwb2ludGxlc3MuIEluIHdoaWNoIGNhc2UgdGhlIHhlbnBj
aSBjb3VsZG4ndCBleGlzdD8KCi0tIApKdWxpZW4gR3JhbGwKCl9fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRl
dmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:18:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36gp-000328-Fl; Tue, 14 Jan 2014 16:18:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W36go-000320-6j
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:18:06 +0000
Received: from [85.158.143.35:55195] by server-1.bemta-4.messagelabs.com id
	CB/92-02132-D3365D25; Tue, 14 Jan 2014 16:18:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389716283!11696945!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27333 invoked from network); 14 Jan 2014 16:18:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:18:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90618566"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 16:18:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:18:02 -0500
Message-ID: <1389716281.12434.100.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 14 Jan 2014 16:18:01 +0000
In-Reply-To: <52D56E6E02000078001138D2@nat28.tlf.novell.com>
References: <52D55061.2020900@citrix.com>
	<52D56E6E02000078001138D2@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 16:05 +0000, Jan Beulich wrote:
> >>> On 14.01.14 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > As part of XenServer's attempt to move to a 64bit dom0, we have
> > encountered a sizeable flaw in xc_domain_{save,restore}().
> > 
> > Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:
> > 
> > xc: detail: xc_domain_restore: starting restore of new domid 1
> > xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
> > xc: error: Couldn't allocate p2m_frame_list array: Internal error
> > xc: detail: Restore exit of domid 1 with rc=1
> > 
> > This is caused by
> > 
> > RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))
> > 
> > where sizeof(unsigned long) is different between the source and destination.
> > 
> > 
> > It is unreasonable for the format of the migration stream to rely on the
> > bitness of the toolstack, which should be completely transparent as far
> > as "motion of a VM" is concerned.  Furthermore, the same issue occurs
> > with suspend/resume where the stream gets written to a file in the meantime.
> > 
> > A quick grep across the code shows several other items in the migration
> > stream which depend on toolstack bitness.
> > 
> > There is no way to divine whether the far side of the migration stream
> > is 32 or 64 bit, which is now vital information required to read the
> > stream correctly.
> 
> And I think, even if x86 doesn't care, differing endianness should
> be dealt with at the same time.

FWIW I'm not currently expecting ARM to reuse
tools/libxc/xc_domain_{save,restore}.c.

It might be worth putting the effort into making the ARM code cleaner
and supportable with a sensible protocol so that other future ports can
reuse it. Potentially even x86 could one day switch, although the old
code would have to remain for compat purposes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:21:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36jW-0003TR-Eo; Tue, 14 Jan 2014 16:20:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W36jU-0003Rm-6s
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:20:52 +0000
Received: from [85.158.139.211:57366] by server-9.bemta-5.messagelabs.com id
	71/34-15098-3E365D25; Tue, 14 Jan 2014 16:20:51 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389716448!9731118!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5495 invoked from network); 14 Jan 2014 16:20:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:20:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92738398"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:20:48 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:20:47 -0500
Message-ID: <52D563DE.1020207@citrix.com>
Date: Tue, 14 Jan 2014 17:20:46 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
	<1389711582-66908-15-git-send-email-roger.pau@citrix.com>
	<52D55AB4.4010504@linaro.org> <52D560EB.8040108@citrix.com>
	<52D5625D.7030702@citrix.com>
In-Reply-To: <52D5625D.7030702@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: jhb@freebsd.org, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 17:14, Julien Grall wrote:
> On 01/14/2014 04:08 PM, Roger Pau Monné wrote:
>> On 14/01/14 16:41, Julien Grall wrote:
>>> On 01/14/2014 02:59 PM, Roger Pau Monne wrote:
>>>> +static int
>>>> +xenpv_attach(device_t dev)
>>>> +{
>>>> +	device_t child;
>>>> +
>>>> +	if (xen_hvm_domain()) {
>>>> +		device_t xenpci;
>>>> +		devclass_t dc;
>>>> +
>>>> +		/* Make sure xenpci has been attached */
>>>> +		dc = devclass_find("xenpci");
>>>> +		if (dc == NULL)
>>>> +			panic("unable to find xenpci devclass");
>>>> +
>>>> +		xenpci = devclass_get_device(dc, 0);
>>>> +		if (xenpci == NULL)
>>>> +			panic("unable to find xenpci device");
>>>> +
>>>> +		if (!device_is_attached(xenpci))
>>>> +			panic("trying to attach xenpv before xenpci");
>>>> +	}
>>>
>>> Can you use the identify method to add the xenpci device?
>>
>> I don't think so, xenpci is a pci device, it is detected and plugged by
>> the pci bus code.
>
> Oups, I though you are trying to add the device. In this case, the check
> seems pointless. In which case the xenpci couldn't exist?

It's just a "belt and suspenders" check: if we attach the xenpv bus
without xenpci being attached first, a bunch of things are going to
fail, and I thought it might be best to print a clear error message
about what went wrong in order to help debug it.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:30:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36sR-00044k-6O; Tue, 14 Jan 2014 16:30:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W36sM-00043H-Ei
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:30:04 +0000
Received: from [85.158.137.68:49238] by server-10.bemta-3.messagelabs.com id
	56/85-23989-90665D25; Tue, 14 Jan 2014 16:30:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389716998!7938342!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12951 invoked from network); 14 Jan 2014 16:30:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:30:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90624486"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 16:29:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:29:57 -0500
Message-ID: <1389716996.12434.105.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Tue, 14 Jan 2014 16:29:56 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Subject: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We've just tagged 4.4.0-rc2, please test and report bugs.

The tarball can be downloaded here:

http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:31:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36tI-00049B-IV; Tue, 14 Jan 2014 16:31:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W36tG-00048t-HU
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:30:58 +0000
Received: from [85.158.143.35:41111] by server-2.bemta-4.messagelabs.com id
	13/15-11386-14665D25; Tue, 14 Jan 2014 16:30:57 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389717055!11693896!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3017 invoked from network); 14 Jan 2014 16:30:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:30:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92743040"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:30:55 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:30:55 -0500
Message-ID: <52D5663D.5050200@citrix.com>
Date: Tue, 14 Jan 2014 17:30:53 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <52D55061.2020900@citrix.com>	<52D56E6E02000078001138D2@nat28.tlf.novell.com>
	<1389716281.12434.100.camel@kazak.uk.xensource.com>
In-Reply-To: <1389716281.12434.100.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 17:18, Ian Campbell wrote:
> On Tue, 2014-01-14 at 16:05 +0000, Jan Beulich wrote:
>>>>> On 14.01.14 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> As part of XenServer's attempt to move to a 64bit dom0, we have
>>> encountered a sizeable flaw in xc_domain_{save,restore}().
>>>
>>> Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:
>>>
>>> xc: detail: xc_domain_restore: starting restore of new domid 1
>>> xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
>>> xc: error: Couldn't allocate p2m_frame_list array: Internal error
>>> xc: detail: Restore exit of domid 1 with rc=1
>>>
>>> This is caused because of
>>>
>>> RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))
>>>
>>> where sizeof(unsigned long) is different between the source and destination.
>>>
>>>
>>> It is unreasonable for the format of the migration stream to rely on the
>>> bitness of the toolstack, which should be completely transparent as far
>>> as "motion of a VM" is concerned.  Furthermore, the same issue occurs
>>> with suspend/resume where the stream gets written to a file in the meantime.
>>>
>>> A quick grep across the code shows several other items in the migration
>>> stream which depend on toolstack bitness.
>>>
>>> There is no way to divine whether the far side of the migration stream
>>> is 32 or 64 bit, which is now vital information required to read the
>>> stream correctly.
>>
>> And I think, even if x86 doesn't care, differing endianness should
>> be dealt with at the same time.
> 
> FWIW I'm not currently expecting ARM to reuse
> tools/libxc/xc_domain_{save,restore}.c.
> 
> It might be worth putting the effort into making the ARM code be cleaner
> and supportable with a sensible protocol so that other future ports can
> reuse it. Potentially even x86 could one day switch, although the old
> code would have to remain for compat purposes.

If we only guarantee migration support between n and n+1 (so for example
4.2 to 4.3, but not 4.2 to 4.4), the old code could go away at some point.

http://wiki.xen.org/wiki/Xen_Version_Compatibility
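The bogus p2m_size in the quoted log is just the width mismatch at work: a 32-bit writer emits 4 bytes for `sizeof(unsigned long)`, and a 64-bit reader consumes 8, pulling the next stream field into the high half. A small self-contained illustration (assuming a little-endian host; `misread_p2m_size` is a made-up helper for demonstration, not libxc code):

```c
#include <stdint.h>
#include <string.h>

/* Model the width mismatch: the 32-bit side writes p2m_size as a 4-byte
 * unsigned long followed by the next stream field; the 64-bit side reads
 * 8 bytes for the same logical field, swallowing that next field. */
static uint64_t misread_p2m_size(uint32_t written, uint32_t next_field)
{
    uint8_t stream[8];
    uint64_t v;

    memcpy(stream, &written, 4);        /* writer: sizeof(unsigned long) == 4 */
    memcpy(stream + 4, &next_field, 4); /* whatever follows in the stream */
    memcpy(&v, stream, 8);              /* reader: sizeof(unsigned long) == 8 */
    return v;
}
```

On a little-endian x86 host, `misread_p2m_size(0x00010000, 0xffffffff)` yields `0xffffffff00010000`, the exact value in the quoted restore log, which is why the p2m_frame_list allocation then fails.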


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:35:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36xK-0004OV-Rn; Tue, 14 Jan 2014 16:35:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W36xI-0004ON-Sd
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:35:09 +0000
Received: from [85.158.143.35:44341] by server-2.bemta-4.messagelabs.com id
	5B/8B-11386-C3765D25; Tue, 14 Jan 2014 16:35:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389717306!11621702!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10357 invoked from network); 14 Jan 2014 16:35:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:35:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92745563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:34:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:34:58 -0500
Message-ID: <1389717297.12434.106.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Tue, 14 Jan 2014 16:34:57 +0000
In-Reply-To: <52D5663D.5050200@citrix.com>
References: <52D55061.2020900@citrix.com>
	<52D56E6E02000078001138D2@nat28.tlf.novell.com>
	<1389716281.12434.100.camel@kazak.uk.xensource.com>
	<52D5663D.5050200@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 17:30 +0100, Roger Pau Monné wrote:
> On 14/01/14 17:18, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 16:05 +0000, Jan Beulich wrote:
> >>>>> On 14.01.14 at 15:57, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >>> As part of XenServer's attempt to move to a 64bit dom0, we have
> >>> encountered a sizeable flaw in xc_domain_{save,restore}().
> >>>
> >>> Migration of a VM from a 32bit toolstack to a 64bit toolstack fails with:
> >>>
> >>> xc: detail: xc_domain_restore: starting restore of new domid 1
> >>> xc: detail: xc_domain_restore: p2m_size = ffffffff00010000
> >>> xc: error: Couldn't allocate p2m_frame_list array: Internal error
> >>> xc: detail: Restore exit of domid 1 with rc=1
> >>>
> >>> This is caused because of
> >>>
> >>> RDEXACT(io_fd, &dinfo->p2m_size, sizeof(unsigned long))
> >>>
> >>> where sizeof(unsigned long) is different between the source and destination.
> >>>
> >>>
> >>> It is unreasonable for the format of the migration stream to rely on the
> >>> bitness of the toolstack, which should be completely transparent as far
> >>> as "motion of a VM" is concerned.  Furthermore, the same issue occurs
> >>> with suspend/resume where the stream gets written to a file in the meantime.
> >>>
> >>> A quick grep across the code shows several other items in the migration
> >>> stream which depend on toolstack bitness.
> >>>
> >>> There is no way to divine whether the far side of the migration stream
> >>> is 32 or 64 bit, which is now vital information required to read the
> >>> stream correctly.
> >>
> >> And I think, even if x86 doesn't care, differing endianness should
> >> be dealt with at the same time.
> > 
> > FWIW I'm not currently expecting ARM to reuse
> > tools/libxc/xc_domain_{save,restore}.c.
> > 
> > It might be worth putting the effort into making the ARM code be cleaner
> > and supportable with a sensible protocol so that other future ports can
> > reuse it. Potentially even x86 could one day switch, although the old
> > code would have to remain for compat purposes.
> 
> If we only guarantee migration support between n and n+1 (so for example
> 4.2 to 4.3, but not 4.2 to 4.4), the old code could go away at some point.
> 
> http://wiki.xen.org/wiki/Xen_Version_Compatibility

We've historically not deliberately broken it though, but given a clean
break we could perhaps remove the old code eventually.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:35:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W36xK-0004OV-Rn; Tue, 14 Jan 2014 16:35:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W36xI-0004ON-Sd
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:35:09 +0000
Received: from [85.158.143.35:44341] by server-2.bemta-4.messagelabs.com id
	5B/8B-11386-C3765D25; Tue, 14 Jan 2014 16:35:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389717306!11621702!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10357 invoked from network); 14 Jan 2014 16:35:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:35:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92745563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:34:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 11:34:58 -0500
Message-ID: <1389717297.12434.106.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Tue, 14 Jan 2014 16:34:57 +0000
In-Reply-To: <52D5663D.5050200@citrix.com>
References: <52D55061.2020900@citrix.com>
	<52D56E6E02000078001138D2@nat28.tlf.novell.com>
	<1389716281.12434.100.camel@kazak.uk.xensource.com>
	<52D5663D.5050200@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Migration between different bitness toolstacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCAyMDE0LTAxLTE0IGF0IDE3OjMwICswMTAwLCBSb2dlciBQYXUgTW9ubsOpIHdyb3Rl
Ogo+IE9uIDE0LzAxLzE0IDE3OjE4LCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4gPiBPbiBUdWUsIDIw
MTQtMDEtMTQgYXQgMTY6MDUgKzAwMDAsIEphbiBCZXVsaWNoIHdyb3RlOgo+ID4+Pj4+IE9uIDE0
LjAxLjE0IGF0IDE1OjU3LCBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29t
PiB3cm90ZToKPiA+Pj4gQXMgcGFydCBvZiBYZW5TZXJ2ZXIncyBhdHRlbXB0IHRvIG1vdmUgdG8g
YSA2NGJpdCBkb20wLCB3ZSBoYXZlCj4gPj4+IGVuY291bnRlcmVkIGEgc2l6ZWFibGUgZmxhdyBp
biB4Y19kb21haW5fe3NhdmUscmVzdG9yZX0oKS4KPiA+Pj4KPiA+Pj4gTWlncmF0aW9uIG9mIGEg
Vk0gZnJvbSBhIDMyYml0IHRvb2xzdGFjayB0byBhIDY0Yml0IHRvb2xzdGFja2ZhaWxzIHdpdGg6
Cj4gPj4+Cj4gPj4+IHhjOiBkZXRhaWw6IHhjX2RvbWFpbl9yZXN0b3JlOiBzdGFydGluZyByZXN0
b3JlIG9mIG5ldyBkb21pZCAxCj4gPj4+IHhjOiBkZXRhaWw6IHhjX2RvbWFpbl9yZXN0b3JlOiBw
Mm1fc2l6ZSA9IGZmZmZmZmZmMDAwMTAwMDAKPiA+Pj4geGM6IGVycm9yOiBDb3VsZG4ndCBhbGxv
Y2F0ZSBwMm1fZnJhbWVfbGlzdCBhcnJheTogSW50ZXJuYWwgZXJyb3IKPiA+Pj4geGM6IGRldGFp
bDogUmVzdG9yZSBleGl0IG9mIGRvbWlkIDEgd2l0aCByYz0xCj4gPj4+Cj4gPj4+IFRoaXMgaXMg
Y2F1c2VkIGJlY2F1c2Ugb2YKPiA+Pj4KPiA+Pj4gUkRFWEFDVChpb19mZCwgJmRpbmZvLT5wMm1f
c2l6ZSwgc2l6ZW9mKHVuc2lnbmVkIGxvbmcpKQo+ID4+Pgo+ID4+PiB3aGVyZSBzaXplb2YodW5z
aWduZWQgbG9uZykgaXMgZGlmZmVyZW50IGJldHdlZW4gdGhlIHNvdXJjZSBhbmQgZGVzdGluYXRp
b24uCj4gPj4+Cj4gPj4+Cj4gPj4+IEl0IGlzIHVucmVhc29uYWJsZSBmb3IgdGhlIGZvcm1hdCBv
ZiB0aGUgbWlncmF0aW9uIHN0cmVhbSB0byByZWx5IG9uIHRoZQo+ID4+PiBiaXRuZXNzIG9mIHRo
ZSB0b29sc3RhY2ssIHdoaWNoIHNob3VsZCBiZSBjb21wbGV0ZWx5IHRyYW5zcGFyZW50IGFzIGZh
cgo+ID4+PiBhcyAibW90aW9uIG9mIGEgVk0iIGlzIGNvbmNlcm5lZC4gIEZ1cnRoZXJtb3JlLCB0
aGUgc2FtZSBpc3N1ZSBvY2N1cnMKPiA+Pj4gd2l0aCBzdXNwZW5kL3Jlc3VtZSB3aGVyZSB0aGUg
c3RyZWFtIGdldHMgd3JpdHRlbiB0byBhIGZpbGUgaW4gdGhlIG1lYW50aW1lLgo+ID4+Pgo+ID4+
PiBBIHF1aWNrIGdyZXAgYWNyb3NzIHRoZSBjb2RlIHNob3dzIHNldmVyYWwgb3RoZXIgaXRlbXMg
aW4gdGhlIG1pZ3JhdGlvbgo+ID4+PiBzdHJlYW0gd2hpY2ggZGVwZW5kIG9uIHRvb2xzdGFjayBi
aXRuZXNzLgo+ID4+Pgo+ID4+PiBUaGVyZSBpcyBubyB3YXkgdG8gZGl2aW5lIHdoZXRoZXIgdGhl
IGZhciBzaWRlIG9mIHRoZSBtaWdyYXRpb24gc3RyZWFtCj4gPj4+IGlzIDMyIG9yIDY0IGJpdCwg
d2hpY2ggaXMgbm93IHZpdGFsIGluZm9ybWF0aW9uIHJlcXVpcmVkIHRvIHJlYWQgdGhlCj4gPj4+
IHN0cmVhbSBjb3JyZWN0bHkuCj4gPj4KPiA+PiBBbmQgSSB0aGluaywgZXZlbiBpZiB4ODYgZG9l
c24ndCBjYXJlLCBkaWZmZXJpbmcgZW5kaWFubmVzcyBzaG91bGQKPiA+PiBiZSBkZWFsdCB3aXRo
IGF0IHRoZSBzYW1lIHRpbWUuCj4gPiAKPiA+IEZXSVcgSSdtIG5vdCBjdXJyZW50bHkgZXhwZWN0
aW5nIEFSTSB0byByZXVzZQo+ID4gdG9vbHMvbGlieGMveGNfZG9tYWluX3tzYXZlLHJlc3RvcmV9
LmMuCj4gPiAKPiA+IEl0IG1pZ2h0IGJlIHdvcnRoIHB1dHRpbmcgdGhlIGVmZm9ydCBpbnRvIG1h
a2luZyB0aGUgQVJNIGNvZGUgYmUgY2xlYW5lcgo+ID4gYW5kIHN1cHBvcnRhYmxlIHdpdGggYSBz
ZW5zaWJsZSBwcm90b2NvbCBzbyB0aGF0IG90aGVyIGZ1dHVyZSBwb3J0cyBjYW4KPiA+IHJldXNl
IGl0LiBQb3RlbnRpYWxseSBldmVuIHg4NiBjb3VsZCBvbmUgZGF5IHN3aXRjaCwgYWx0aG91Z2gg
dGhlIG9sZAo+ID4gY29kZSB3b3VsZCBoYXZlIHRvIHJlbWFpbiBmb3IgY29tcGF0IHB1cnBvc2Vz
Lgo+IAo+IElmIHdlIG9ubHkgZ3VhcmFudGVlIG1pZ3JhdGlvbiBzdXBwb3J0IGJldHdlZW4gbiBh
bmQgbisxIChzbyBmb3IgZXhhbXBsZQo+IDQuMiB0byA0LjMsIGJ1dCBub3QgNC4yIHRvIDQuNCks
IHRoZSBvbGQgY29kZSBjb3VsZCBnbyBhd2F5IGF0IHNvbWUgcG9pbnQuCj4gCj4gaHR0cDovL3dp
a2kueGVuLm9yZy93aWtpL1hlbl9WZXJzaW9uX0NvbXBhdGliaWxpdHkKCldlJ3ZlIGhpc3Rvcmlj
YWxseSBub3QgZGVsaWJlcmF0ZWx5IGJyb2tlbiBpdCB0aG91Z2gsIGJ1dCBnaXZlbiBhIGNsZWFu
CmJyZWFrIHdlIGNvdWxkIHBlcmhhcHMgcmVtb3ZlIHRoZSBvbGQgY29kZSBldmVudHVhbGx5LgoK
SWFuLgoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhl
bi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3Rz
Lnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:49:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W37BD-00059u-A2; Tue, 14 Jan 2014 16:49:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W37BB-00058z-5B
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:49:29 +0000
Received: from [193.109.254.147:45512] by server-7.bemta-14.messagelabs.com id
	7D/87-15500-89A65D25; Tue, 14 Jan 2014 16:49:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389718165!10821045!1
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25480 invoked from network); 14 Jan 2014 16:49:25 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:49:25 -0000
Received: by mail-wi0-f181.google.com with SMTP id hq4so1002082wib.2
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 08:49:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=IrgsaE+czSp8wPiWmzGaD5r9aoqXBgo1RcPgxl/q6nc=;
	b=PKRfrv6rczYi3OccZEyJY/Bb22NCJTdt4sSvM1o5ZE9KsWTayMcPNwmub8JaCG6z12
	hFhtnqcY9PxHHwOQEYfUFzcfF2AUvmAi6y2o3E3G4eXhivQ7SIapciop2u4jK0Vbh/f9
	pMdj+IsjqUd605pK2Xu/jYM0WKDRg1GrEE6iCmhywFPvKRoQLn/5lo4OsnPv2yUgs01I
	u64eCneUbjLE0bIM3HOoqKCkrYB9t/CFW1U+TNjXXEMpv2boPbKpKLHYKePr3OTKg1D2
	k2WU4ncybmkTQ4YHXpIgcpRNgI7O1YSwq3MNCdFPoNJpXxPSBIreVPrF1l/xVbR74c/T
	Pa3A==
X-Gm-Message-State: ALoCoQlZ2sXH+HLies/AicED6lXaqqKacv0OGpRHf2q8w/hLXefUnYFthO2/5NlpA6+hyr60SXYk
X-Received: by 10.180.80.103 with SMTP id q7mr11155351wix.14.1389718164890;
	Tue, 14 Jan 2014 08:49:24 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id hy8sm1003957wjb.2.2014.01.14.08.49.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 08:49:24 -0800 (PST)
Message-ID: <52D56A92.4070100@linaro.org>
Date: Tue, 14 Jan 2014 16:49:22 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1389711582-66908-1-git-send-email-roger.pau@citrix.com>
	<1389711582-66908-15-git-send-email-roger.pau@citrix.com>
	<52D55AB4.4010504@linaro.org> <52D560EB.8040108@citrix.com>
	<52D5625D.7030702@citrix.com> <52D563DE.1020207@citrix.com>
In-Reply-To: <52D563DE.1020207@citrix.com>
Cc: jhb@freebsd.org, xen-devel@lists.xen.org,
	Julien Grall <julien.grall@citrix.com>, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH v10 14/20] xen: introduce xenpv bus and a
 dummy pvcpu device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDEvMTQvMjAxNCAwNDoyMCBQTSwgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToKPiBPbiAxNC8w
MS8xNCAxNzoxNCwgSnVsaWVuIEdyYWxsIHdyb3RlOgo+PiBPbiAwMS8xNC8yMDE0IDA0OjA4IFBN
LCBSb2dlciBQYXUgTW9ubsOpIHdyb3RlOgo+Pj4gT24gMTQvMDEvMTQgMTY6NDEsIEp1bGllbiBH
cmFsbCB3cm90ZToKPj4+PiBPbiAwMS8xNC8yMDE0IDAyOjU5IFBNLCBSb2dlciBQYXUgTW9ubmUg
d3JvdGU6Cj4+Pj4+ICtzdGF0aWMgaW50Cj4+Pj4+ICt4ZW5wdl9hdHRhY2goZGV2aWNlX3QgZGV2
KQo+Pj4+PiArewo+Pj4+PiArCWRldmljZV90IGNoaWxkOwo+Pj4+PiArCj4+Pj4+ICsJaWYgKHhl
bl9odm1fZG9tYWluKCkpIHsKPj4+Pj4gKwkJZGV2aWNlX3QgeGVucGNpOwo+Pj4+PiArCQlkZXZj
bGFzc190IGRjOwo+Pj4+PiArCj4+Pj4+ICsJCS8qIE1ha2Ugc3VyZSB4ZW5wY2kgaGFzIGJlZW4g
YXR0YWNoZWQgKi8KPj4+Pj4gKwkJZGMgPSBkZXZjbGFzc19maW5kKCJ4ZW5wY2kiKTsKPj4+Pj4g
KwkJaWYgKGRjID09IE5VTEwpCj4+Pj4+ICsJCQlwYW5pYygidW5hYmxlIHRvIGZpbmQgeGVucGNp
IGRldmNsYXNzIik7Cj4+Pj4+ICsKPj4+Pj4gKwkJeGVucGNpID0gZGV2Y2xhc3NfZ2V0X2Rldmlj
ZShkYywgMCk7Cj4+Pj4+ICsJCWlmICh4ZW5wY2kgPT0gTlVMTCkKPj4+Pj4gKwkJCXBhbmljKCJ1
bmFibGUgdG8gZmluZCB4ZW5wY2kgZGV2aWNlIik7Cj4+Pj4+ICsKPj4+Pj4gKwkJaWYgKCFkZXZp
Y2VfaXNfYXR0YWNoZWQoeGVucGNpKSkKPj4+Pj4gKwkJCXBhbmljKCJ0cnlpbmcgdG8gYXR0YWNo
IHhlbnB2IGJlZm9yZSB4ZW5wY2kiKTsKPj4+Pj4gKwl9Cj4+Pj4KPj4+PiBDYW4geW91IHVzZSB0
aGUgaWRlbnRpZnkgbWV0aG9kIHRvIGFkZCB0aGUgeGVucGNpIGRldmljZT8KPj4+Cj4+PiBJIGRv
bid0IHRoaW5rIHNvLCB4ZW5wY2kgaXMgYSBwY2kgZGV2aWNlLCBpdCBpcyBkZXRlY3RlZCBhbmQg
cGx1Z2dlZCBieQo+Pj4gdGhlIHBjaSBidXMgY29kZS4KPj4KPj4gT3VwcywgSSB0aG91Z2ggeW91
IGFyZSB0cnlpbmcgdG8gYWRkIHRoZSBkZXZpY2UuIEluIHRoaXMgY2FzZSwgdGhlIGNoZWNrCj4+
IHNlZW1zIHBvaW50bGVzcy4gSW4gd2hpY2ggY2FzZSB0aGUgeGVucGNpIGNvdWxkbid0IGV4aXN0
Pwo+IAo+IEl0J3MganVzdCBhICJiZWx0IGFuZCBzdXNwZW5kZXJzIiwgaWYgd2UgYXR0YWNoIHRo
ZSB4ZW5wdiBidXMgd2l0aG91dAo+IHhlbnBjaSBiZWluZyBhdHRhY2hlZCBmaXJzdCBhIGJ1bmNo
IG9mIHRoaW5ncyBhcmUgZ29pbmcgdG8gZmFpbCwgSQo+IHRob3VnaCBpdCBtaWdodCBiZSBiZXN0
IHRvIHByaW50IGEgY2xlYXIgZXJyb3IgbWVzc2FnZSBhYm91dCB3aGF0IHdlbnQKPiB3cm9uZyBp
biBvcmRlciB0byBoZWxwIGRlYnVnIGl0LgoKSSBvbmx5IHNlZSBvbmUgcGxhY2Ugd2hpY2ggY291
bGQgZmFpbGVkLCBhbmQgd2UgYXJlIGFscmVhZHkgcHJvdGVjdGVkLgpJdCdzIHdoZW4gd2UgYXJl
IHRyeWluZyB0byBhbGxvY2F0ZSBzcGFjZSBmcm9tIGdyYW50LXRhYmxlIHZpYQp4ZW5wY2lfYWxs
b2Nfc3BhY2UuCgpJIHRoaW5rIHRoaXMgZXJyb3Igc2hvdWxkIGJlIGVub3VnaCB0byB1bmRlcnN0
YW5kIHRoZSBwcm9ibGVtLiBBdCB0aGUKc2FtZSB0aW1lLCBpdCdzIHRoZSBzYW1lIHRoaW5ncyB3
aXRoIHhlbnN0b3JlLiBJZiBncmFudC10YWJsZQppbml0aWFsaXphdGlvbiBoYXMgZmFpbGVkLCBh
biBlcnJvciBtZXNzYWdlIGlzIGp1c3QgcHJpbnRlZCBhbmQgRnJlZUJTRAp3aWxsIGxpa2VseSBm
YWlsZWQgbGF0ZXIgd2hlbiBpdCB3aWxsIHRyeSB0byBpbml0aWFsaXplZCB0aGUgUFYgZGlzay4K
Ci0tIApKdWxpZW4gR3JhbGwKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcK
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:55:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W37Gh-0005Ve-Uo; Tue, 14 Jan 2014 16:55:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W37Gg-0005VY-LD
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:55:10 +0000
Received: from [85.158.143.35:48557] by server-2.bemta-4.messagelabs.com id
	78/8A-11386-EEB65D25; Tue, 14 Jan 2014 16:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389718507!11646740!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23938 invoked from network); 14 Jan 2014 16:55:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92755539"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 16:55:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 11:55:05 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W37Ga-0008DS-HP;
	Tue, 14 Jan 2014 16:55:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Jan 2014 16:55:04 +0000
Message-ID: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: correctly write release target in
	smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flush_xen_data_tlb_range_va() is clearly bogus since it flushes the tlb, not
the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
was mapped with ioremap_nocache and hence isn't cached in the first place.
Accesses should be via writeq though, so do that.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/arm64/smpboot.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 1287c72..9146476 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -32,8 +32,7 @@ static int __init smp_spin_table_cpu_up(int cpu)
         return -EFAULT;
     }
 
-    release[0] = __pa(init_secondary);
-    flush_xen_data_tlb_range_va((vaddr_t)release, sizeof(*release));
+    writeq(__pa(init_secondary), release);
 
     iounmap(release);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 16:55:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 16:55:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W37HE-0005YS-P4; Tue, 14 Jan 2014 16:55:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W37HC-0005YD-Nb
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 16:55:43 +0000
Received: from [193.109.254.147:4755] by server-8.bemta-14.messagelabs.com id
	1E/CB-30921-E0C65D25; Tue, 14 Jan 2014 16:55:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389718539!10757759!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15625 invoked from network); 14 Jan 2014 16:55:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 16:55:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90637776"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 16:55:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 11:55:14 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W37Gj-0008Db-LP;
	Tue, 14 Jan 2014 16:55:13 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Jan 2014 16:55:13 +0000
Message-ID: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when setting or
	clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These mappings are global and therefore need flushing on all processors. Add
flush_all_xen_data_tlb_range_va which accomplishes this.

Also update the comments in the other flush_xen_*_tlb functions to mention
that they operate on the local processor only.
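
The per-page loop the patch introduces can be sketched host-side, with the
inner-shareable invalidate instruction (TLBIMVAHIS on arm32, "tlbi vae2is"
on arm64) replaced by a counter; the PAGE_SIZE value and helper names here
are illustrative, not Xen's:

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* Stand-in for the broadcast TLB invalidate; here we just count calls. */
static unsigned long flushes;
static void tlb_inval_va_is(unsigned long va) { (void)va; flushes++; }

/* Mirrors the loop shape of flush_all_xen_data_tlb_range_va: one
 * inner-shareable invalidate per page in [va, va + size). */
static unsigned long flush_range(unsigned long va, unsigned long size)
{
    unsigned long end = va + size;
    flushes = 0;
    /* dsb(); -- ensure the preceding PTE write is visible */
    while ( va < end ) {
        tlb_inval_va_is(va);
        va += PAGE_SIZE;
    }
    /* dsb(); isb(); -- wait for completion of the flush */
    return flushes;
}
```

For a fixmap slot the range is a single page, so exactly one invalidate
is issued.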

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c                |    4 ++--
 xen/include/asm-arm/arm32/page.h |   36 ++++++++++++++++++++++++++++++------
 xen/include/asm-arm/arm64/page.h |   35 +++++++++++++++++++++++++++++------
 3 files changed, 61 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 35af1ad..cddb174 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -234,7 +234,7 @@ void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
     pte.pt.ai = attributes;
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 /* Remove a mapping from a fixmap entry */
@@ -242,7 +242,7 @@ void clear_fixmap(unsigned map)
 {
     lpae_t pte = {0};
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
-    flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
+    flush_all_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..533b253 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -23,7 +23,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /*
- * Flush all hypervisor mappings from the TLB and branch predictor.
+ * Flush all hypervisor mappings from the TLB and branch predictor of
+ * the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -43,8 +45,9 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -57,10 +60,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -73,6 +78,25 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB on all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile(STORE_CP32(0, TLBIMVAHIS)
+                     : : "r" (va) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..42023cc 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -18,7 +18,8 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /*
- * Flush all hypervisor mappings from the TLB
+ * Flush all hypervisor mappings from the TLB of the local processor.
+ *
  * This is needed after changing Xen code mappings.
  *
  * The caller needs to issue the necessary DSB and D-cache flushes
@@ -36,8 +37,9 @@ static inline void flush_xen_text_tlb(void)
 }
 
 /*
- * Flush all hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush all hypervisor mappings from the data TLB of the local
+ * processor. This is not sufficient when changing code mappings or
+ * for self modifying code.
  */
 static inline void flush_xen_data_tlb(void)
 {
@@ -50,10 +52,12 @@ static inline void flush_xen_data_tlb(void)
 }
 
 /*
- * Flush a range of VA's hypervisor mappings from the data TLB. This is not
- * sufficient when changing code mappings or for self modifying code.
+ * Flush a range of VA's hypervisor mappings from the data TLB of the
+ * local processor. This is not sufficient when changing code mappings
+ * or for self modifying code.
  */
-static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
+static inline void flush_xen_data_tlb_range_va(unsigned long va,
+                                               unsigned long size)
 {
     unsigned long end = va + size;
     dsb(); /* Ensure preceding are visible */
@@ -66,6 +70,25 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
     isb();
 }
 
+/*
+ * Flush a range of VA's hypervisor mappings from the data TLB of all
+ * processors in the inner-shareable domain. This is not sufficient
+ * when changing code mappings or for self modifying code.
+ */
+static inline void flush_all_xen_data_tlb_range_va(unsigned long va,
+                                                   unsigned long size)
+{
+    unsigned long end = va + size;
+    dsb(); /* Ensure preceding are visible */
+    while ( va < end ) {
+        asm volatile("tlbi vae2is, %0;"
+                     : : "r" (va>>PAGE_SHIFT) : "memory");
+        va += PAGE_SIZE;
+    }
+    dsb(); /* Ensure completion of the TLB flush */
+    isb();
+}
+
 /* Ask the MMU to translate a VA for us */
 static inline uint64_t __va_to_par(vaddr_t va)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 17:33:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 17:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W37rL-0007Qp-Kc; Tue, 14 Jan 2014 17:33:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W37rH-0007Qi-GD
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 17:32:59 +0000
Received: from [193.109.254.147:58241] by server-13.bemta-14.messagelabs.com
	id 21/DE-19374-AC475D25; Tue, 14 Jan 2014 17:32:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389720776!9354898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14051 invoked from network); 14 Jan 2014 17:32:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 17:32:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90656417"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 17:32:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 12:32:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W37rC-0008Op-Np;
	Tue, 14 Jan 2014 17:32:54 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 14 Jan 2014 17:32:54 +0000
Message-ID: <1389720774-27931-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: correct guest PSCI handling on 64-bit
	hypervisor.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using ->rN truncates the 64-bit registers to 32 bits, which on X-Gene chops
off the top bit of the entry address for PSCI_UP.

Follow the pattern established in do_trap_hypercall.
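
The truncation is easy to demonstrate host-side; the helper names below
are illustrative, standing in for the regs->rN and regs->xN accesses:

```c
#include <stdint.h>

/* What reading the entry point through the 32-bit ->rN alias yields:
 * the value is narrowed to 32 bits, so any address above 4GB loses
 * its top bits. */
static uint64_t entry_via_r_alias(uint64_t x)
{
    uint32_t r = (uint32_t)x;   /* regs->rN view */
    return r;
}

/* Reading through the 64-bit ->xN register keeps the full value. */
static uint64_t entry_via_x_reg(uint64_t x)
{
    return x;
}
```

An entry point such as 0x1_0000_8000 comes back as 0x8000 through the
32-bit alias, which is exactly the breakage seen on X-Gene.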

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---

Release argument: Only supporting single vcpu guests on arm64 would be
unfortunate. There is no risk to arm32 since the ifdef ensures the code
remains the same.
---
 xen/arch/arm/traps.c |   17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index fdf9440..62c9df2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1065,23 +1065,34 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
 }
 #endif
 
+#ifdef CONFIG_ARM_64
+#define PSCI_OP_REG(r) (r)->x0
+#define PSCI_RESULT_REG(r) (r)->x0
+#define PSCI_ARGS(r) (r)->x1, (r)->x2
+#else
+#define PSCI_OP_REG(r) (r)->r0
+#define PSCI_RESULT_REG(r) (r)->r0
+#define PSCI_ARGS(r) (r)->r1, (r)->r2
+#endif
+
 static void do_trap_psci(struct cpu_user_regs *regs)
 {
     arm_psci_fn_t psci_call = NULL;
 
-    if ( regs->r0 >= ARRAY_SIZE(arm_psci_table) )
+    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
     {
         domain_crash_synchronous();
         return;
     }
 
-    psci_call = arm_psci_table[regs->r0].fn;
+    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
     if ( psci_call == NULL )
     {
         domain_crash_synchronous();
         return;
     }
-    regs->r0 = psci_call(regs->r1, regs->r2);
+
+    PSCI_RESULT_REG(regs) = psci_call(PSCI_ARGS(regs));
 }
 
 #ifdef CONFIG_ARM_64
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 18:49:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 18:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W392m-0002Ao-A3; Tue, 14 Jan 2014 18:48:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W392k-0002Aj-VP
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 18:48:55 +0000
Received: from [85.158.143.35:8203] by server-2.bemta-4.messagelabs.com id
	F2/4F-11386-69685D25; Tue, 14 Jan 2014 18:48:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389725333!11673309!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15482 invoked from network); 14 Jan 2014 18:48:53 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 18:48:53 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so786440wgg.21
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 10:48:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=vdqiJqGAYulFd+KcJWOyAtJQhSmg0iv4ih1Jkzu9vDc=;
	b=GEu90kG+oUfqZA2yk4lCp7sp0vzuL37aHp25SSfWYKENfhWR+jp+IJaBCB17RPbHBt
	10vONOI1dILOiTSfYG/lBGyzf3eVMu0bDRRTh2dV7CiOvKkwDBmUGo+W1RkneF0fdfCg
	qR5DciwtuXL7OV1CXylQgLvpQu2Bnnz4zAkqLdcI2oqSGhbTURw8ykZhkQNda1at5/f9
	K5HkEJBr3OkRaANM+VRxyTd7vp15kA4Z6rjgiQfCaTmoYck+0GNLBip807qAniKBmYBr
	0VecX5hL57QLWO8gu/I5lAj5zEbLmy/N7LACrNcOlnfAjw/r30LlBaX/73TG7ar9Ft4r
	jUyQ==
X-Gm-Message-State: ALoCoQkoWoBzB1F0I7bzzCVKqCZLu2B36cAUADvBxdoImWoX9zWo1wIa8qH+FoJttvM9Rkg7vkGl
X-Received: by 10.194.176.163 with SMTP id cj3mr30958wjc.8.1389725333130;
	Tue, 14 Jan 2014 10:48:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id hv3sm2775349wib.5.2014.01.14.10.48.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 10:48:51 -0800 (PST)
Message-ID: <52D58692.8000305@linaro.org>
Date: Tue, 14 Jan 2014 18:48:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correctly write release target in
	smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 04:55 PM, Ian Campbell wrote:
> flush_xen_data_tlb_range_va() is clearly bogus since it flushes the TLB, not
> the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
> was mapped with ioremap_nocache and hence isn't cached in the first place.
> Accesses should be via writeq though, so do that.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/arm64/smpboot.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
> index 1287c72..9146476 100644
> --- a/xen/arch/arm/arm64/smpboot.c
> +++ b/xen/arch/arm/arm64/smpboot.c
> @@ -32,8 +32,7 @@ static int __init smp_spin_table_cpu_up(int cpu)
>          return -EFAULT;
>      }
>  
> -    release[0] = __pa(init_secondary);
> -    flush_xen_data_tlb_range_va((vaddr_t)release, sizeof(*release));
> +    writeq(__pa(init_secondary), release);
>  
>      iounmap(release);
>  
> 
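
For reference, the writeq() relied on above amounts to a single untorn
64-bit store; a minimal host-side sketch (writeq_sketch is an illustrative
name, not Xen's implementation, and the real accessor is
architecture-specific and may add ordering barriers):

```c
#include <stdint.h>

/* One volatile 64-bit store: the compiler can neither tear, elide,
 * nor cache the write to the release slot, which is what the
 * open-coded "release[0] = ..." did not guarantee on an ioremapped
 * (device) mapping. */
static inline void writeq_sketch(uint64_t val, volatile uint64_t *addr)
{
    *addr = val;
}
```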


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 18:52:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 18:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W396U-0002QB-P4; Tue, 14 Jan 2014 18:52:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W396S-0002Q5-Rn
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 18:52:44 +0000
Received: from [85.158.139.211:42065] by server-3.bemta-5.messagelabs.com id
	A9/4F-04773-C7785D25; Tue, 14 Jan 2014 18:52:44 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389725561!9717864!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15322 invoked from network); 14 Jan 2014 18:52:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 18:52:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="92805821"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 18:52:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 13:52:40 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W396N-0001vw-Sc;
	Tue, 14 Jan 2014 18:52:39 +0000
Date: Tue, 14 Jan 2014 18:52:39 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>
Message-ID: <20140114185238.GJ1696@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

There is a stable update of QEMU 1.6 (1.6.2); I have done a merge and put the result in a tree:
git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2

Thanks,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 18:55:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 18:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W398r-0002Xq-Bp; Tue, 14 Jan 2014 18:55:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W398o-0002Xe-Bn
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 18:55:11 +0000
Received: from [85.158.137.68:49514] by server-11.bemta-3.messagelabs.com id
	7E/7F-19379-D0885D25; Tue, 14 Jan 2014 18:55:09 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389725708!9119862!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24144 invoked from network); 14 Jan 2014 18:55:09 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 18:55:09 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so807818wgh.7
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 10:55:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=guBvw1q+UQzastfqfBU8Y03hxwiha/SbIg6Z9+RTsa8=;
	b=VepsbQe9bYyb3o1WmZevjEbGuHpdJEEHwXp31OBbgZh7YeqtOILXzrGs3+e/XxO9J8
	R/VeHkszZg5U5+9YX2j+HT2YIFM60e0RTgCfC2ae51xur1D8C7InUueyA6Q5r4Rufx8Q
	aNy63p1CxWurSCYtVRAfABOvg5UlBeAxsHnB/8CKAFUkUcbTvPUOpzp3ZFHrEIt/Yo0e
	w/5ha3J4uIQYr2H7ZPx7OcgKIdsVEHiHZOYkSDcGGADbbdqaNot2rwhqdFHba7f+kawv
	wrAGWrxZA5oETzeBLgeZhN6izrAtimnxefJ/iKWNpX8lr8v03kRZ6H1mJ0ZIrcGtXC3v
	6Yhg==
X-Gm-Message-State: ALoCoQmcL55sySoBkvoOd4Y3hjHxYwK3WrbJY4zBwGEYNfhCS7kxAmf66GmM0W2dg7iRZFTfhSAo
X-Received: by 10.194.108.100 with SMTP id hj4mr3723076wjb.83.1389725708859;
	Tue, 14 Jan 2014 10:55:08 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id ay6sm1278129wjb.23.2014.01.14.10.55.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 10:55:08 -0800 (PST)
Message-ID: <52D5880B.30506@linaro.org>
Date: Tue, 14 Jan 2014 18:55:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 04:55 PM, Ian Campbell wrote:
> These mappings are global and therefore need flushing on all processors. Add
> flush_all_xen_data_tlb_range_va which accomplishes this.

Can we make the naming consistent across every *tlb* function? In
flushtlb.h we use the *_local suffix for maintenance on the current
processor only; if the suffix is absent, the maintenance is done on
every processor.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 18:57:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 18:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W39B5-0002gR-84; Tue, 14 Jan 2014 18:57:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W39B3-0002gM-GL
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 18:57:29 +0000
Received: from [85.158.137.68:58489] by server-1.bemta-3.messagelabs.com id
	B9/AE-29598-89885D25; Tue, 14 Jan 2014 18:57:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389725847!5476565!1
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4429 invoked from network); 14 Jan 2014 18:57:27 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 18:57:27 -0000
Received: by mail-wi0-f181.google.com with SMTP id hq4so1176172wib.14
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 10:57:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=5z28vt1hlNj7A9Dy0SaY5EJZADLC2CQqr3FnkL3zhco=;
	b=c92mG2eSnZPpuBpn43IWsLUys/We7z+2RrPPxOiXgBo2iwIRfJG24ipORUeynt+Pzs
	lAL8hJAhTR2SwiUtUa8Gl1oDDwRAHXCkpiaC24WsO9mo4HhWppHySsmjOWcigSRLybd0
	iixRUlGUnKlqt7YAAsaXN924TW2+wHiHfNehpE8Y+dWhnrDRKU7leSY/ZWVdkdera10R
	XKk01M0pxKX5OwZlPtf73TIERUaH+7xFs69eSSY1rTck0dlxiOC2qsr+KDoMINzLnd1y
	V5mXFG+sOdIz+zVjJu+3XwcN1i2lXVK8NLK6q7VCcLO29I5Wnyx0JpcPpRKgErE/J0aZ
	P3iQ==
X-Gm-Message-State: ALoCoQl3f00HuM5Nukr6Fl61DmwdzkdGL1b15w/HnmPP1jkkMRFv6HRMFpnoickiro7AlbWn+UM3
X-Received: by 10.194.59.210 with SMTP id b18mr2321wjr.60.1389725847504;
	Tue, 14 Jan 2014 10:57:27 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id ux5sm1300950wjc.6.2014.01.14.10.57.26
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 10:57:26 -0800 (PST)
Message-ID: <52D58895.6030900@linaro.org>
Date: Tue, 14 Jan 2014 18:57:25 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1389720774-27931-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1389720774-27931-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct guest PSCI handling on
	64-bit hypervisor.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 05:32 PM, Ian Campbell wrote:
> Using ->rN truncates the 64-bit registers to 32-bits, which on X-gene chops
> off the top bit of the entry address for PSCI_UP.
> 
> Follow the pattern established in do_trap_hypercall.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
> 
> Release argument: Only supporting single vcpu guests on arm64 would be
> unfortunate. There is no risk to arm32 since the ifdef ensures the code
> remains the same.
> ---
>  xen/arch/arm/traps.c |   17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index fdf9440..62c9df2 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1065,23 +1065,34 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
>  }
>  #endif
>  
> +#ifdef CONFIG_ARM_64
> +#define PSCI_OP_REG(r) (r)->x0
> +#define PSCI_RESULT_REG(r) (r)->x0
> +#define PSCI_ARGS(r) (r)->x1, (r)->x2
> +#else
> +#define PSCI_OP_REG(r) (r)->r0
> +#define PSCI_RESULT_REG(r) (r)->r0
> +#define PSCI_ARGS(r) (r)->r1, (r)->r2
> +#endif
> +
>  static void do_trap_psci(struct cpu_user_regs *regs)
>  {
>      arm_psci_fn_t psci_call = NULL;
>  
> -    if ( regs->r0 >= ARRAY_SIZE(arm_psci_table) )
> +    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
>      {
>          domain_crash_synchronous();
>          return;
>      }
>  
> -    psci_call = arm_psci_table[regs->r0].fn;
> +    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
>      if ( psci_call == NULL )
>      {
>          domain_crash_synchronous();
>          return;
>      }
> -    regs->r0 = psci_call(regs->r1, regs->r2);
> +
> +    PSCI_RESULT_REG(regs) = psci_call(PSCI_ARGS(regs));
>  }
>  
>  #ifdef CONFIG_ARM_64
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 19:29:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 19:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W39fR-0004cY-Eg; Tue, 14 Jan 2014 19:28:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W39fQ-0004cT-D6
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 19:28:52 +0000
Received: from [85.158.139.211:52000] by server-17.bemta-5.messagelabs.com id
	AC/9E-19152-3FF85D25; Tue, 14 Jan 2014 19:28:51 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389727729!9754819!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1933 invoked from network); 14 Jan 2014 19:28:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 19:28:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,658,1384300800"; d="scan'208";a="90708278"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 19:28:48 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 14:28:48 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 19:28:39 +0000
Message-ID: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The recent patch to fix receive-side flow control (11b57f) solved the spinning
thread problem but caused another one. The receive side can stall if:
- xenvif_rx_action sets rx_queue_stopped to false
- an interrupt happens and sets rx_event to true
- then xenvif_kthread sets rx_event to false

Also, through rx_event a malicious guest can force the RX thread to spin. This
patch ditches those two variables and reworks rx_work_todo. If the thread finds
it can't fit more skbs into the ring, it saves the last slot estimate into
rx_last_skb_slots; otherwise it is kept at 0. Then rx_work_todo checks whether:
- there is something to send to the ring
- there is space for the topmost packet in the queue

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    6 +-----
 drivers/net/xen-netback/interface.c |    1 -
 drivers/net/xen-netback/netback.c   |   16 ++++++----------
 3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4c76bcb..ae413a2 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -143,11 +143,7 @@ struct xenvif {
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
-	bool rx_queue_stopped;
-	/* Set when the RX interrupt is triggered by the frontend.
-	 * The worker thread may need to wake the queue.
-	 */
-	bool rx_event;
+	RING_IDX rx_last_skb_slots;
 
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index b9de31e..7669d49 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;
 
-	vif->rx_event = true;
 	xenvif_kick_thread(vif);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 2738563..bb241d0 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
 	bool need_to_notify = false;
-	bool ring_full = false;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 	skb_queue_head_init(&rxq);
 
 	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
-		int max_slots_needed;
+		RING_IDX max_slots_needed;
 		int i;
 
 		/* We need a cheap worse case estimate for the number of
@@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
 			skb_queue_head(&vif->rx_queue, skb);
 			need_to_notify = true;
-			ring_full = true;
+			vif->rx_last_skb_slots = max_slots_needed;
 			break;
-		}
+		} else
+			vif->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
 		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
@@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
-	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
-
 	if (!npo.copy_prod)
 		goto done;
 
@@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
-		vif->rx_event;
+	return !skb_queue_empty(&vif->rx_queue) &&
+	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
@@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
 		if (!skb_queue_empty(&vif->rx_queue))
 			xenvif_rx_action(vif);
 
-		vif->rx_event = false;
-
 		if (skb_queue_empty(&vif->rx_queue) &&
 		    netif_queue_stopped(vif->dev))
 			xenvif_start_queue(vif);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:21:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AUY-0006vB-4J; Tue, 14 Jan 2014 20:21:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3AUX-0006v4-9C
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:21:41 +0000
Received: from [193.109.254.147:17538] by server-9.bemta-14.messagelabs.com id
	47/89-13957-45C95D25; Tue, 14 Jan 2014 20:21:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389730898!10802431!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27744 invoked from network); 14 Jan 2014 20:21:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:21:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92848916"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:21:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 15:21:37 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W3AUT-0003MD-Gt; Tue, 14 Jan 2014 20:21:37 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 14 Jan 2014 20:21:36 +0000
Message-ID: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
	XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

  caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a

In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
but struct xen_add_to_physmap_batch has 'space' as a uint16_t.

By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
unsigned int to uint16_t, which changes the space switch()'d upon.

This wouldn't be noticed with any upstream code (of which I am aware), but was
discovered because of the XenServer support for legacy Windows PV drivers,
which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
The current Windows PV drivers don't do this any more, but we 'fix' Xen to
support running VMs with out-of-date tools.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>

---

As this breakage was caused between 4.4-rc1 and -rc2, I request a release ack
for the fix.

This was caught by a compile failure rather than a functional test.  I have
encountered a different compile error which turns out to be a bug in the cross
compiler we are currently using, so I need to fix that before I can
functionally test a 4.4-rc2 based XenServer.  (Which will be a rather better
test of whether the functionality of XENMEM_add_to_physmap is actually still
the same.  If anyone dares look,
https://github.com/xenserver/xen-4.3.pg/blob/master/xen-legacy-win-xenmapspace-quirks.patch
are the hacks required to make the legacy drivers work on modern Xen.)
---
 xen/arch/arm/mm.c    |    2 +-
 xen/arch/x86/mm.c    |    2 +-
 xen/include/xen/mm.h |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 293b6e2..127cce0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -970,7 +970,7 @@ void share_xen_page_with_privileged_guests(
 
 int xenmem_add_to_physmap_one(
     struct domain *d,
-    uint16_t space,
+    unsigned int space,
     domid_t foreign_domid,
     unsigned long idx,
     xen_pfn_t gpfn)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 32c0473..172c68c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4522,7 +4522,7 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p)
 
 int xenmem_add_to_physmap_one(
     struct domain *d,
-    uint16_t space,
+    unsigned int space,
     domid_t foreign_domid,
     unsigned long idx,
     xen_pfn_t gpfn)
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index f90ed74..b183189 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -356,7 +356,7 @@ static inline unsigned int get_order_from_pages(unsigned long nr_pages)
 
 void scrub_one_page(struct page_info *);
 
-int xenmem_add_to_physmap_one(struct domain *d, uint16_t space,
+int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
                               domid_t foreign_domid,
                               unsigned long idx, xen_pfn_t gpfn);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:38:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Akt-0007ps-JJ; Tue, 14 Jan 2014 20:38:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3Akr-0007ni-V6
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:38:34 +0000
Received: from [85.158.143.35:49485] by server-1.bemta-4.messagelabs.com id
	B1/C5-02132-940A5D25; Tue, 14 Jan 2014 20:38:33 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389731911!11687164!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15261 invoked from network); 14 Jan 2014 20:38:32 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:38:32 -0000
Received: by mail-lb0-f177.google.com with SMTP id z5so103159lbh.22
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 12:38:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:date:message-id:to:mime-version;
	bh=rbPCbjyGUunq8SaQeQnHLjF8dl5jaezTam8KAq5KowM=;
	b=D8opPayew4Ej/z+wpFUaK66YVxLXGWnPRIwIqlR7ahKypS0e09ioWT8HpqZlm2BOPq
	uV3+ghGCgIvSBLaWPSLWk9QfjaCsdcNHcumzfgYlOlW/RbZAoupgMdDaux9cbqVX30C2
	xFGQbyNBAlZ3IugAVDn0r8j2BuuqqWjR31Rt6SaO69YbzP/YsLe5vJUV6AsryUY2xlr8
	sD7VNlQAZcHMoBhr5dch6Vo7Z6Q17YADdn5bI5x7qNkbtGWXbCfYaY/mN3UungsKe3+V
	DXYN8iDWGCY3yz6Hf3WFM2feB50syxSG4l73CrO671E2PMEOEM0NoEfH+7ibfouHEdBv
	qz5A==
X-Received: by 10.153.10.33 with SMTP id dx1mr1969404lad.23.1389731911391;
	Tue, 14 Jan 2014 12:38:31 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id c15sm1026663lbq.11.2014.01.14.12.38.30
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 12:38:30 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Wed, 15 Jan 2014 00:38:28 +0400
Message-Id: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] ctfconvert problems with build on illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4985585721010273978=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4985585721010273978==
Content-Type: multipart/alternative; boundary="Apple-Mail=_9DCFE949-4AC9-4F5D-A9A4-CCE1F7C978FB"


--Apple-Mail=_9DCFE949-4AC9-4F5D-A9A4-CCE1F7C978FB
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Hello All,

I have problems with ctfconvert when building the xen-4.2 sources on an
illumos-based platform:
https://www.illumos.org/issues/3205

Could you please let me know the reason for having a zero-sized member in
'struct hvm_hw_cpu_xsave':
struct { char x[0]; } ymm;    /* YMM */
--
Best regards,
Igor Kozhukhov





--Apple-Mail=_9DCFE949-4AC9-4F5D-A9A4-CCE1F7C978FB--


--===============4985585721010273978==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4985585721010273978==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:40:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Ami-00089V-W8; Tue, 14 Jan 2014 20:40:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amg-00089C-8w
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:26 +0000
Received: from [85.158.139.211:25877] by server-10.bemta-5.messagelabs.com id
	C5/48-01405-9B0A5D25; Tue, 14 Jan 2014 20:40:25 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389732023!9762771!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4879 invoked from network); 14 Jan 2014 20:40:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90738294"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:14 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:13 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:50 +0000
Message-ID: <1389731995-9887-5-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 4/9] xen-netback: Change RX path for
	mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The RX path needs to know whether the SKB fragments are stored on pages that
belong to another domain.

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/netback.c |   46 +++++++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f74fa92..d43444d 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -226,7 +226,9 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
 static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
-				 unsigned long offset, int *head)
+				 unsigned long offset, int *head,
+				 struct xenvif *foreign_vif,
+				 grant_ref_t foreign_gref)
 {
 	struct gnttab_copy *copy_gop;
 	struct xenvif_rx_meta *meta;
@@ -268,8 +270,15 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->flags = GNTCOPY_dest_gref;
 		copy_gop->len = bytes;
 
-		copy_gop->source.domid = DOMID_SELF;
-		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
+		if (foreign_vif) {
+			copy_gop->source.domid = foreign_vif->domid;
+			copy_gop->source.u.ref = foreign_gref;
+			copy_gop->flags |= GNTCOPY_source_gref;
+		} else {
+			copy_gop->source.domid = DOMID_SELF;
+			copy_gop->source.u.gmfn =
+				virt_to_mfn(page_address(page));
+		}
 		copy_gop->source.offset = offset;
 
 		copy_gop->dest.domid = vif->domid;
@@ -330,6 +339,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	int old_meta_prod;
 	int gso_type;
 	int gso_size;
+	struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
+	grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
+	struct xenvif *foreign_vif = NULL;
 
 	old_meta_prod = npo->meta_prod;
 
@@ -370,6 +382,26 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	npo->copy_off = 0;
 	npo->copy_gref = req->gref;
 
+	if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
+		 (ubuf->callback == &xenvif_zerocopy_callback)) {
+		u16 pending_idx = ubuf->desc;
+		int i = 0;
+		struct pending_tx_info *temp =
+			container_of(ubuf,
+				     struct pending_tx_info,
+				     callback_struct);
+		foreign_vif =
+			container_of(temp - pending_idx,
+				     struct xenvif,
+				     pending_tx_info[0]);
+		do {
+			pending_idx = ubuf->desc;
+			foreign_grefs[i++] =
+				foreign_vif->pending_tx_info[pending_idx].req.gref;
+			ubuf = (struct ubuf_info *) ubuf->ctx;
+		} while (ubuf);
+	}
+
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
 		unsigned int offset = offset_in_page(data);
@@ -379,7 +411,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 			len = skb_tail_pointer(skb) - data;
 
 		xenvif_gop_frag_copy(vif, skb, npo,
-				     virt_to_page(data), len, offset, &head);
+				     virt_to_page(data), len, offset, &head,
+				     NULL,
+				     0);
 		data += len;
 	}
 
@@ -388,7 +422,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
-				     &head);
+				     &head,
+				     foreign_vif,
+				     foreign_grefs[i]);
 	}
 
 	return npo->meta_prod - old_meta_prod;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:40:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Amp-0008AB-I0; Tue, 14 Jan 2014 20:40:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amg-00089E-T1
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:27 +0000
Received: from [85.158.139.211:5783] by server-5.bemta-5.messagelabs.com id
	E5/11-14928-AB0A5D25; Tue, 14 Jan 2014 20:40:26 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389732023!9769819!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23509 invoked from network); 14 Jan 2014 20:40:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858946"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:01 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:01 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:46 +0000
Message-ID: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 0/9] xen-netback: TX grant mapping
	with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A long-known problem of the upstream netback implementation is that on the TX
path (from guest to Dom0) it copies the whole packet from guest memory into
Dom0. That became a bottleneck with 10Gb NICs, and in general it is a huge
performance penalty. The classic kernel version of netback used grant
mapping, and to get notified when a page could be unmapped it used page
destructors. Unfortunately that destructor is not an upstreamable solution.
Ian Campbell's skb fragment destructor patch series [1] tried to solve this
problem, but it proved very invasive to the network stack's code and
therefore hasn't progressed very well.
This patch series uses the SKBTX_DEV_ZEROCOPY flag to tell the stack that it
needs to know when the skb is freed. That is how KVM solved the same problem,
and based on my initial tests it can do the same for us. Avoiding the extra
copy boosted TX throughput from 6.8 Gbps to 7.9 Gbps (I used a slower
Interlagos box, both Dom0 and guest on an upstream kernel, on the same NUMA
node, running iperf 2.0.5; the remote end was a bare-metal box on the same
10Gb switch).
Based on my investigation the packet only gets copied if it is delivered to
the Dom0 stack, which is due to this [2] patch. That's a bit unfortunate, but
luckily it doesn't cause a major regression for this use case. In the future
we should try to eliminate that copy somehow.
There are a few spinoff tasks which will be addressed in separate patches:
- grant copy the header directly instead of map and memcpy; this should help
  us avoid TLB flushing
- use something other than ballooned pages
- fix grant map to use page->index properly
I will run some more extensive tests, but some basic XenRT tests have already
passed with good results.
I've tried to break it down into smaller patches, with mixed results, so I
welcome suggestions on that part as well:
1: Introduce TX grant map definitions
2: Change TX path from grant copy to mapping
3: Remove old TX grant copy definitions and fix indentations
4: Change RX path for mapped SKB fragments
5: Add stat counters for zerocopy
6: Handle guests with too many frags
7: Add stat counters for frag_list skbs
8: Timeout packets in RX path
9: Aggregate TX unmap operations

v2: I've fixed some smaller things, see the individual patches. I've added a
few new stat counters, and handling for the important use case when an older
guest sends lots of slots. Instead of the delayed copy we now time out packets
on the RX path, on the assumption that otherwise they should not get stuck
anywhere else. Finally, some unmap batching to avoid too much TLB flushing.

v3: Apart from fixing a few things mentioned in responses, the important
change is using the hypercall directly for grant [un]mapping, so we can
avoid the m2p override.

v4: We now use a new grant mapping API to avoid m2p_override. The RX queue
timeout logic has also changed.

[1] http://lwn.net/Articles/491522/
[2] https://lkml.org/lkml/2012/7/20/363

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnJ-0008Bi-Ep; Tue, 14 Jan 2014 20:41:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amh-00089H-1F
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:27 +0000
Received: from [85.158.139.211:5791] by server-4.bemta-5.messagelabs.com id
	BE/F1-26791-AB0A5D25; Tue, 14 Jan 2014 20:40:26 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389732023!9762771!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4922 invoked from network); 14 Jan 2014 20:40:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90738351"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:22 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:22 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:53 +0000
Message-ID: <1389731995-9887-8-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 7/9] xen-netback: Add stat counters
	for frag_list skbs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the guest sends a packet with more
than MAX_SKB_FRAGS frags.
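The counting rule itself is simple; as a plain C sketch (not the driver code, and the MAX_SKB_FRAGS value of 17 is only an assumed example, since it depends on page size and kernel version):

```c
#include <assert.h>

#define MAX_SKB_FRAGS 17  /* assumed example value; build dependent */

struct vif_stats {
	unsigned long tx_frag_overflow;
};

/* Bump the counter when a guest packet needs more slots than fit in a
 * single skb's frag array (such packets take the slower frag_list path).
 */
static void account_tx(struct vif_stats *s, unsigned int nr_slots)
{
	if (nr_slots > MAX_SKB_FRAGS)
		s->tx_frag_overflow++;
}
```

A counter that stays at zero suggests the guest never exceeds the backend's frag limit, which is the situation the comment in interface.c recommends.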

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    1 +
 drivers/net/xen-netback/interface.c |    7 +++++++
 drivers/net/xen-netback/netback.c   |    1 +
 3 files changed, 9 insertions(+)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index e3c28ff..c037efb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -158,6 +158,7 @@ struct xenvif {
 	unsigned long tx_zerocopy_sent;
 	unsigned long tx_zerocopy_success;
 	unsigned long tx_zerocopy_fail;
+	unsigned long tx_frag_overflow;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index ac27af3..b7daf8d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -254,6 +254,13 @@ static const struct xenvif_stat {
 		"tx_zerocopy_fail",
 		offsetof(struct xenvif, tx_zerocopy_fail)
 	},
+	/* Number of packets exceeding MAX_SKB_FRAGS slots. You should use
+	 * a guest with the same MAX_SKB_FRAGS
+	 */
+	{
+		"tx_frag_overflow",
+		offsetof(struct xenvif, tx_frag_overflow)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9841429..4305965 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1656,6 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			vif->tx_zerocopy_sent += 2;
+			vif->tx_frag_overflow++;
 			nskb = skb;
 
 			skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC | __GFP_NOWARN);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnP-0008Dh-2I; Tue, 14 Jan 2014 20:41:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amj-00089P-AJ
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:29 +0000
Received: from [85.158.139.211:63930] by server-11.bemta-5.messagelabs.com id
	96/28-23268-BB0A5D25; Tue, 14 Jan 2014 20:40:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389732023!9762771!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4957 invoked from network); 14 Jan 2014 20:40:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90738373"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:25 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:24 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:54 +0000
Message-ID: <1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout packets in
	RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A malicious or buggy guest can leave its queue filled indefinitely, in which
case the qdisc starts to queue packets for that VIF. If those packets came
from another guest, they can block its slots and prevent shutdown. To avoid
that, we make sure the queue is drained every 10 seconds.
In the worst case, the qdisc queue usually takes 3 rounds to flush.

v3:
- remove stale debug log
- tie unmap timeout in xenvif_free to this timeout

v4:
- due to RX flow control changes, start_xmit doesn't drop the packets anymore
  but places them on the internal queue. So the timer sets rx_queue_purge and
  kicks the thread to drop the packets there
- we shoot down the timer if a previously filled internal queue drains
- adjust the teardown timeout, as in the worst case it can take more time now
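The adjusted teardown timeout can be sanity-checked with a standalone C sketch of the arithmetic used in xenvif_free(); the constants here are assumed example values (queue length, ring size and MAX_SKB_FRAGS all depend on the build):

```c
#include <assert.h>

/* Assumed example values; in the driver these come from the headers. */
#define XENVIF_QUEUE_LENGTH    32
#define XEN_NETIF_RX_RING_SIZE 256
#define MAX_SKB_FRAGS          17

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Worst-case wait (in seconds) for outstanding grants in xenvif_free():
 * each drain cycle lasts rx_drain_timeout_msecs, and the internal queue
 * may need several ring-fulls of slots to empty completely.
 */
static unsigned int unmap_timeout_secs(unsigned int rx_drain_timeout_msecs)
{
	return (rx_drain_timeout_msecs / 1000) *
	       DIV_ROUND_UP(XENVIF_QUEUE_LENGTH,
			    XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS);
}
```

With the default 10000 ms drain timeout and these example values the result is 30 seconds: the "3 rounds to flush" worst case times the 10 second drain period.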

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    6 ++++++
 drivers/net/xen-netback/interface.c |   24 ++++++++++++++++++++++--
 drivers/net/xen-netback/netback.c   |   23 ++++++++++++++++++++---
 3 files changed, 48 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 109c29f..d1cd8ce 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -129,6 +129,9 @@ struct xenvif {
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
+	bool rx_queue_purge;
+
+	struct timer_list wake_queue;
 
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
@@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int rx_drain_timeout_msecs;
+extern unsigned int rx_drain_timeout_jiffies;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index c531f6c..2616d51 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -114,6 +114,18 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void xenvif_wake_queue(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	if (netif_queue_stopped(vif->dev)) {
+		netdev_err(vif->dev, "draining TX queue\n");
+		vif->rx_queue_purge = true;
+		xenvif_kick_thread(vif);
+		netif_wake_queue(vif->dev);
+	}
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
@@ -143,8 +155,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
+	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
+		vif->wake_queue.function = xenvif_wake_queue;
+		vif->wake_queue.data = (unsigned long)vif;
 		xenvif_stop_queue(vif);
+		mod_timer(&vif->wake_queue,
+			jiffies + rx_drain_timeout_jiffies);
+	}
 
 	skb_queue_tail(&vif->rx_queue, skb);
 	xenvif_kick_thread(vif);
@@ -352,6 +369,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	init_timer(&vif->credit_timeout);
 	vif->credit_window_start = get_jiffies_64();
 
+	init_timer(&vif->wake_queue);
+
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
@@ -529,6 +548,7 @@ void xenvif_disconnect(struct xenvif *vif)
 		xenvif_carrier_off(vif);
 
 	if (vif->task) {
+		del_timer_sync(&vif->wake_queue);
 		kthread_stop(vif->task);
 		vif->task = NULL;
 	}
@@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
 		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
 			unmap_timeout++;
 			schedule_timeout(msecs_to_jiffies(1000));
-			if (unmap_timeout > 9 &&
+			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
 			    net_ratelimit())
 				netdev_err(vif->dev,
 					   "Page still granted! Index: %x\n",
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index d2ccb55..1378abd 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -63,6 +63,13 @@ module_param(separate_tx_rx_irq, bool, 0644);
 static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
 module_param(fatal_skb_slots, uint, 0444);
 
+/* When guest ring is filled up, qdisc queues the packets for us, but we have
+ * to timeout them, otherwise other guests' packets can get stuck there
+ */
+unsigned int rx_drain_timeout_msecs = 10000;
+module_param(rx_drain_timeout_msecs, uint, 0444);
+unsigned int rx_drain_timeout_jiffies;
+
 /*
  * To avoid confusion, we define XEN_NETBK_LEGACY_SLOTS_MAX indicating
  * the maximum slots a valid packet can use. Now this value is defined
@@ -1919,8 +1926,9 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return (!skb_queue_empty(&vif->rx_queue) &&
+	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots)) ||
+	       vif->rx_queue_purge;
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
@@ -2011,12 +2019,19 @@ int xenvif_kthread(void *data)
 		if (kthread_should_stop())
 			break;
 
+		if (vif->rx_queue_purge) {
+			skb_queue_purge(&vif->rx_queue);
+			vif->rx_queue_purge = false;
+		}
+
 		if (!skb_queue_empty(&vif->rx_queue))
 			xenvif_rx_action(vif);
 
 		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
+		    netif_queue_stopped(vif->dev)) {
+			del_timer_sync(&vif->wake_queue);
 			xenvif_start_queue(vif);
+		}
 
 		cond_resched();
 	}
@@ -2067,6 +2082,8 @@ static int __init netback_init(void)
 	if (rc)
 		goto failed_init;
 
+	rx_drain_timeout_jiffies = msecs_to_jiffies(rx_drain_timeout_msecs);
+
 	return 0;
 
 failed_init:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnV-0008F8-1S; Tue, 14 Jan 2014 20:41:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amj-00089O-AJ
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:30 +0000
Received: from [85.158.139.211:63922] by server-8.bemta-5.messagelabs.com id
	33/46-29838-BB0A5D25; Tue, 14 Jan 2014 20:40:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389732023!9769819!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23579 invoked from network); 14 Jan 2014 20:40:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858960"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:04 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:04 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:47 +0000
Message-ID: <1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 1/9] xen-netback: Introduce TX grant
	map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the new definitions necessary for grant mapping.

v2:
- move unmapping to a separate thread. The NAPI instance has to be scheduled
  even from thread context, which can cause huge delays
- unfortunately this enlarges struct xenvif
- store the grant handle only after checking its validity

v3:
- fix comment in xenvif_tx_dealloc_action()
- call unmap hypercall directly instead of gnttab_unmap_refs(), which does
  unnecessary m2p_override. Also remove pages_to_[un]map members
- BUG() if grant_tx_handle corrupted

v4:
- fix indentations and comments
- use bool for tx_dealloc_work_todo
- BUG() if grant_tx_handle corrupted - now really :)
- go back to gnttab_unmap_refs, now we rely on API changes

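As an illustration of the dealloc scheme these definitions set up, here is a
minimal userspace sketch (not part of the patch; the names, the ring size and
the `__atomic` fences standing in for smp_wmb()/smp_rmb() are our own) of the
single-producer/single-consumer index ring: the zerocopy callback publishes a
pending_idx before bumping the producer, and the dealloc thread reads the
producer before the slots it covers.

```c
/*
 * Userspace model of the dealloc ring: one producer (the zerocopy
 * callback) and one consumer (the dealloc thread) share free-running
 * prod/cons indices.  The fences mirror the kernel's smp_wmb()/smp_rmb()
 * pairing; all identifiers here are illustrative.
 */
#include <assert.h>
#include <stdint.h>

#define DEMO_PENDING_REQS 8                    /* power of two, as in netback */
#define demo_pending_index(i) ((i) % DEMO_PENDING_REQS)

struct demo_ring {
	uint16_t ring[DEMO_PENDING_REQS];
	unsigned int prod, cons;               /* free-running, wrap naturally */
};

void demo_ring_push(struct demo_ring *r, uint16_t pending_idx)
{
	r->ring[demo_pending_index(r->prod)] = pending_idx;
	/* like smp_wmb(): insert idx first, then increment the producer */
	__atomic_thread_fence(__ATOMIC_RELEASE);
	r->prod++;
}

int demo_ring_pop(struct demo_ring *r, uint16_t *pending_idx)
{
	if (r->cons == r->prod)
		return 0;                      /* nothing queued for unmap */
	/* like smp_rmb(): see every slot the producer published */
	__atomic_thread_fence(__ATOMIC_ACQUIRE);
	*pending_idx = r->ring[demo_pending_index(r->cons++)];
	return 1;
}
```

The consumer side then drains everything between cons and prod into one batch,
which is what xenvif_tx_dealloc_action does before a single gnttab_unmap_refs
call.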
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/common.h    |   30 +++++-
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  171 +++++++++++++++++++++++++++++++++++
 3 files changed, 201 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c955fc3..3e5ca11 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -79,6 +79,11 @@ struct pending_tx_info {
 				  * if it is head of one or more tx
 				  * reqs
 				  */
+	/* callback data for released SKBs. The	callback is always
+	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
+	 * contains the pending_idx
+	 */
+	struct ubuf_info callback_struct;
 };
 
 #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
@@ -108,6 +113,8 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
+#define NETBACK_INVALID_HANDLE -1
+
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
@@ -126,13 +133,26 @@ struct xenvif {
 	pending_ring_idx_t pending_cons;
 	u16 pending_ring[MAX_PENDING_REQS];
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
 	/* Coalescing tx requests before copying makes number of grant
 	 * copy ops greater or equal to number of slots required. In
 	 * worst case a tx request consumes 2 gnttab_copy.
 	 */
 	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
-
+	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
+	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
+	struct page *pages_to_map[MAX_PENDING_REQS];
+	struct page *pages_to_unmap[MAX_PENDING_REQS];
+
+	spinlock_t dealloc_lock;
+	spinlock_t response_lock;
+	pending_ring_idx_t dealloc_prod;
+	pending_ring_idx_t dealloc_cons;
+	u16 dealloc_ring[MAX_PENDING_REQS];
+	struct task_struct *dealloc_task;
+	wait_queue_head_t dealloc_wq;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
@@ -222,6 +242,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
 int xenvif_kthread(void *data);
 void xenvif_kick_thread(struct xenvif *vif);
 
+int xenvif_dealloc_kthread(void *data);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
@@ -229,6 +251,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
 
 void xenvif_stop_queue(struct xenvif *vif);
 
+/* Callback from stack when TX packet can be released */
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+
+/* Unmap a pending page, usually has to be called before xenvif_idx_release */
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index b9de31e..a7855b3 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -38,6 +38,7 @@
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
+#include <xen/balloon.h>
 
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 4f81ac0..b84d2b8 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -772,6 +772,21 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	return page;
 }
 
+static inline void xenvif_tx_create_gop(struct xenvif *vif,
+					u16 pending_idx,
+					struct xen_netif_tx_request *txp,
+					struct gnttab_map_grant_ref *gop)
+{
+	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
+	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  txp->gref, vif->domid);
+
+	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
+	       sizeof(*txp));
+
+}
+
 static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -1611,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	return work_done;
 }
 
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
+{
+	unsigned long flags;
+	pending_ring_idx_t index;
+	u16 pending_idx = ubuf->desc;
+	struct pending_tx_info *temp =
+		container_of(ubuf, struct pending_tx_info, callback_struct);
+	struct xenvif *vif = container_of(temp - pending_idx,
+					  struct xenvif,
+					  pending_tx_info[0]);
+
+	spin_lock_irqsave(&vif->dealloc_lock, flags);
+	do {
+		pending_idx = ubuf->desc;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+		index = pending_index(vif->dealloc_prod);
+		vif->dealloc_ring[index] = pending_idx;
+		/* Sync with xenvif_tx_dealloc_action:
+		 * insert idx then incr producer.
+		 */
+		smp_wmb();
+		vif->dealloc_prod++;
+	} while (ubuf);
+	wake_up(&vif->dealloc_wq);
+	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+}
+
+static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
+{
+	struct gnttab_unmap_grant_ref *gop;
+	pending_ring_idx_t dc, dp;
+	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
+	unsigned int i = 0;
+
+	dc = vif->dealloc_cons;
+	gop = vif->tx_unmap_ops;
+
+	/* Free up any grants we have finished using */
+	do {
+		dp = vif->dealloc_prod;
+
+		/* Ensure we see all indices enqueued by all
+		 * xenvif_zerocopy_callback().
+		 */
+		smp_rmb();
+
+		while (dc != dp) {
+			pending_idx =
+				vif->dealloc_ring[pending_index(dc++)];
+
+			/* Already unmapped? */
+			if (vif->grant_tx_handle[pending_idx] ==
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Trying to unmap invalid handle! "
+					   "pending_idx: %x\n", pending_idx);
+				BUG();
+			}
+
+			pending_idx_release[gop-vif->tx_unmap_ops] =
+				pending_idx;
+			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
+				vif->mmap_pages[pending_idx];
+			gnttab_set_unmap_op(gop,
+					    idx_to_kaddr(vif, pending_idx),
+					    GNTMAP_host_map,
+					    vif->grant_tx_handle[pending_idx]);
+			vif->grant_tx_handle[pending_idx] =
+				NETBACK_INVALID_HANDLE;
+			++gop;
+		}
+
+	} while (dp != vif->dealloc_prod);
+
+	vif->dealloc_cons = dc;
+
+	if (gop - vif->tx_unmap_ops > 0) {
+		int ret;
+		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
+					vif->pages_to_unmap,
+					gop - vif->tx_unmap_ops);
+		if (ret) {
+			netdev_err(vif->dev, "Unmap fail: nr_ops %tx ret %d\n",
+				   gop - vif->tx_unmap_ops, ret);
+			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
+				netdev_err(vif->dev,
+					   " host_addr: %llx handle: %x status: %d\n",
+					   gop[i].host_addr,
+					   gop[i].handle,
+					   gop[i].status);
+			}
+			BUG();
+		}
+	}
+
+	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
+		xenvif_idx_release(vif, pending_idx_release[i],
+				   XEN_NETIF_RSP_OKAY);
+}
+
+
 /* Called after netfront has transmitted */
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
@@ -1677,6 +1793,27 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 	vif->mmap_pages[pending_idx] = NULL;
 }
 
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
+{
+	int ret;
+	struct gnttab_unmap_grant_ref tx_unmap_op;
+
+	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
+		netdev_err(vif->dev,
+			   "Trying to unmap invalid handle! pending_idx: %x\n",
+			   pending_idx);
+		return;
+	}
+	gnttab_set_unmap_op(&tx_unmap_op,
+			    idx_to_kaddr(vif, pending_idx),
+			    GNTMAP_host_map,
+			    vif->grant_tx_handle[pending_idx]);
+	ret = gnttab_unmap_refs(&tx_unmap_op,
+				&vif->mmap_pages[pending_idx],
+				1);
+	BUG_ON(ret);
+	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
+}
 
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
@@ -1738,6 +1879,14 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static inline bool tx_dealloc_work_todo(struct xenvif *vif)
+{
+	if (vif->dealloc_cons != vif->dealloc_prod)
+		return true;
+
+	return false;
+}
+
 void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1826,6 +1975,28 @@ int xenvif_kthread(void *data)
 	return 0;
 }
 
+int xenvif_dealloc_kthread(void *data)
+{
+	struct xenvif *vif = data;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->dealloc_wq,
+					 tx_dealloc_work_todo(vif) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			break;
+
+		xenvif_tx_dealloc_action(vif);
+		cond_resched();
+	}
+
+	/* Unmap anything remaining */
+	if (tx_dealloc_work_todo(vif))
+		xenvif_tx_dealloc_action(vif);
+
+	return 0;
+}
+
 static int __init netback_init(void)
 {
 	int rc = 0;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnX-0008Gv-08; Tue, 14 Jan 2014 20:41:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amk-00089a-SC
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:31 +0000
Received: from [85.158.139.211:64034] by server-3.bemta-5.messagelabs.com id
	E0/D3-04773-DB0A5D25; Tue, 14 Jan 2014 20:40:29 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389732023!9762771!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5034 invoked from network); 14 Jan 2014 20:40:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90738381"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:27 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:27 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:55 +0000
Message-ID: <1389731995-9887-10-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 9/9] xen-netback: Aggregate TX unmap
	operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Unmapping causes TLB flushing, so we should do it in the largest possible
batches. However, we shouldn't starve the guest for too long either. So if the
guest has space for at least two big packets and we don't have at least a
quarter of the ring to unmap, delay it for at most 1 millisecond.
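
The heuristic above can be modelled as a pure predicate (a sketch with our own
function name and illustrative constants, not code from the patch; the real
logic lives in tx_dealloc_work_todo with a timer):

```c
/*
 * Returns true when unmapping should be postponed, i.e. when
 * tx_dealloc_work_todo() would arm the 1 ms timer and report no work:
 * there is pending dealloc work, but the frontend still has room for
 * two worst-case packets, the batch is under a quarter of the pending
 * ring, and the delay timer has not fired yet.
 */
#include <assert.h>
#include <stdbool.h>

#define DEMO_MAX_PENDING_REQS	256	/* illustrative ring size */
#define DEMO_LEGACY_SLOTS_MAX	 18	/* worst-case slots of one packet */

bool demo_should_delay_unmap(unsigned int free_tx_slots,
			     unsigned int dealloc_prod,
			     unsigned int dealloc_cons,
			     bool timer_fired)
{
	if (dealloc_prod == dealloc_cons)
		return false;		/* nothing to unmap anyway */
	return free_tx_slots > 2 * DEMO_LEGACY_SLOTS_MAX &&
	       dealloc_prod - dealloc_cons < DEMO_MAX_PENDING_REQS / 4 &&
	       !timer_fired;
}
```

Note the subtraction on the free-running indices gives the batch size even
across wraparound, the same trick the dealloc ring itself relies on.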

v4:
- use bool for tx_dealloc_work_todo

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 ++
 drivers/net/xen-netback/interface.c |    2 ++
 drivers/net/xen-netback/netback.c   |   31 ++++++++++++++++++++++++++++++-
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 1594109..ce6b5b5 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -115,6 +115,8 @@ struct xenvif {
 	u16 dealloc_ring[MAX_PENDING_REQS];
 	struct task_struct *dealloc_task;
 	wait_queue_head_t dealloc_wq;
+	struct timer_list dealloc_delay;
+	bool dealloc_delay_timed_out;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 669bd55..1c34f56 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -406,6 +406,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 			  .desc = i };
 		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
 	}
+	init_timer(&vif->dealloc_delay);
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -553,6 +554,7 @@ void xenvif_disconnect(struct xenvif *vif)
 	}
 
 	if (vif->dealloc_task) {
+		del_timer_sync(&vif->dealloc_delay);
 		kthread_stop(vif->dealloc_task);
 		vif->dealloc_task = NULL;
 	}
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9e7ba04..b1d1d4c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -135,6 +135,11 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 		vif->pending_prod + vif->pending_cons;
 }
 
+static inline pending_ring_idx_t nr_free_slots(struct xen_netif_tx_back_ring *ring)
+{
+	return ring->nr_ents -	(ring->sring->req_prod - ring->rsp_prod_pvt);
+}
+
 bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
 {
 	RING_IDX prod, cons;
@@ -1936,10 +1941,34 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static void xenvif_dealloc_delay(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	vif->dealloc_delay_timed_out = true;
+	wake_up(&vif->dealloc_wq);
+}
+
 static inline bool tx_dealloc_work_todo(struct xenvif *vif)
 {
-	if (vif->dealloc_cons != vif->dealloc_prod)
+	if (vif->dealloc_cons != vif->dealloc_prod) {
+		if ((nr_free_slots(&vif->tx) > 2 * XEN_NETBK_LEGACY_SLOTS_MAX) &&
+		    (vif->dealloc_prod - vif->dealloc_cons < MAX_PENDING_REQS / 4) &&
+		    !vif->dealloc_delay_timed_out) {
+			if (!timer_pending(&vif->dealloc_delay)) {
+				vif->dealloc_delay.function =
+					xenvif_dealloc_delay;
+				vif->dealloc_delay.data = (unsigned long)vif;
+				mod_timer(&vif->dealloc_delay,
+					  jiffies + msecs_to_jiffies(1));
+
+			}
+			return false;
+		}
+		del_timer_sync(&vif->dealloc_delay);
+		vif->dealloc_delay_timed_out = false;
 		return true;
+	}
 
 	return false;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnX-0008Gv-08; Tue, 14 Jan 2014 20:41:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amk-00089a-SC
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:31 +0000
Received: from [85.158.139.211:64034] by server-3.bemta-5.messagelabs.com id
	E0/D3-04773-DB0A5D25; Tue, 14 Jan 2014 20:40:29 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389732023!9762771!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5034 invoked from network); 14 Jan 2014 20:40:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90738381"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:27 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:27 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:55 +0000
Message-ID: <1389731995-9887-10-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 9/9] xen-netback: Aggregate TX unmap
	operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Unmapping causes TLB flushing, therefore we should do it in the largest
possible batches. However, we shouldn't starve the guest for too long. So if
the guest has space for at least two big packets and we don't have at least a
quarter of the ring to unmap, delay it for at most 1 millisecond.
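
The delay heuristic described above can be sketched as a small user-space
model. This is only an illustration of the decision logic in
tx_dealloc_work_todo(); the function and constant values here are assumptions
for the sketch, not the driver's real code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative constants mirroring the patch (values assumed). */
#define MAX_PENDING_REQS            256
#define XEN_NETBK_LEGACY_SLOTS_MAX  18   /* slots of one maximal packet */

/* Decide whether pending grant unmaps should run now or be deferred.
 * Mirrors the condition added to tx_dealloc_work_todo(): defer only while
 * the guest still has room for two maximal packets, less than a quarter of
 * the pending ring awaits unmapping, and the 1 ms delay timer has not
 * fired yet. */
static bool should_unmap_now(unsigned free_slots, unsigned pending_unmaps,
                             bool timer_fired)
{
    if (pending_unmaps == 0)
        return false;                 /* nothing to deallocate */
    if (free_slots > 2 * XEN_NETBK_LEGACY_SLOTS_MAX &&
        pending_unmaps < MAX_PENDING_REQS / 4 &&
        !timer_fired)
        return false;                 /* defer: batch up more work */
    return true;                      /* unmap now */
}
```

Once any of the three conditions flips (guest getting short on slots, a
quarter of the ring pending, or the timer expiring), the dealloc thread is
woken and the whole batch is unmapped in one go.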

v4:
- use bool for tx_dealloc_work_todo

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 ++
 drivers/net/xen-netback/interface.c |    2 ++
 drivers/net/xen-netback/netback.c   |   31 ++++++++++++++++++++++++++++++-
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 1594109..ce6b5b5 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -115,6 +115,8 @@ struct xenvif {
 	u16 dealloc_ring[MAX_PENDING_REQS];
 	struct task_struct *dealloc_task;
 	wait_queue_head_t dealloc_wq;
+	struct timer_list dealloc_delay;
+	bool dealloc_delay_timed_out;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 669bd55..1c34f56 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -406,6 +406,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 			  .desc = i };
 		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
 	}
+	init_timer(&vif->dealloc_delay);
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -553,6 +554,7 @@ void xenvif_disconnect(struct xenvif *vif)
 	}
 
 	if (vif->dealloc_task) {
+		del_timer_sync(&vif->dealloc_delay);
 		kthread_stop(vif->dealloc_task);
 		vif->dealloc_task = NULL;
 	}
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9e7ba04..b1d1d4c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -135,6 +135,11 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 		vif->pending_prod + vif->pending_cons;
 }
 
+static inline pending_ring_idx_t nr_free_slots(struct xen_netif_tx_back_ring *ring)
+{
+	return ring->nr_ents - (ring->sring->req_prod - ring->rsp_prod_pvt);
+}
+
 bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
 {
 	RING_IDX prod, cons;
@@ -1936,10 +1941,34 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static void xenvif_dealloc_delay(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	vif->dealloc_delay_timed_out = true;
+	wake_up(&vif->dealloc_wq);
+}
+
 static inline bool tx_dealloc_work_todo(struct xenvif *vif)
 {
-	if (vif->dealloc_cons != vif->dealloc_prod)
+	if (vif->dealloc_cons != vif->dealloc_prod) {
+		if ((nr_free_slots(&vif->tx) > 2 * XEN_NETBK_LEGACY_SLOTS_MAX) &&
+		    (vif->dealloc_prod - vif->dealloc_cons < MAX_PENDING_REQS / 4) &&
+		    !vif->dealloc_delay_timed_out) {
+			if (!timer_pending(&vif->dealloc_delay)) {
+				vif->dealloc_delay.function =
+					xenvif_dealloc_delay;
+				vif->dealloc_delay.data = (unsigned long)vif;
+				mod_timer(&vif->dealloc_delay,
+					  jiffies + msecs_to_jiffies(1));
+
+			}
+			return false;
+		}
+		del_timer_sync(&vif->dealloc_delay);
+		vif->dealloc_delay_timed_out = false;
 		return true;
+	}
 
 	return false;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnY-0008Hc-7n; Tue, 14 Jan 2014 20:41:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Aml-00089b-0O
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:32 +0000
Received: from [85.158.139.211:64042] by server-14.bemta-5.messagelabs.com id
	77/15-24200-DB0A5D25; Tue, 14 Jan 2014 20:40:29 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389732023!9769819!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23633 invoked from network); 14 Jan 2014 20:40:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858965"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:07 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:07 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:48 +0000
Message-ID: <1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 2/9] xen-netback: Change TX path
	from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch changes the grant copy operations on the TX path to grant mapping.

v2:
- delete the branch for handling fragmented packets that fit in the
  PKT_PROT_LEN sized first request
- mark the effect of using ballooned pages in a comment
- place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
  before netif_receive_skb, and mark the importance of it
- grab dealloc_lock before __napi_complete to avoid contention with the
  callback's napi_schedule
- handle fragmented packets where first request < PKT_PROT_LEN
- fix up error path when checksum_setup failed
- check for pending grants before teardown, and start complaining if they are
  still there after 10 seconds

v3:
- delete a surplus check from tx_action
- remove stray line
- squash xenvif_idx_unmap changes into the first patch
- init spinlocks
- call map hypercall directly instead of gnttab_map_refs()
- fix unmapping timeout in xenvif_free()

v4:
- fix indentations and comments
- handle errors of set_phys_to_machine
- go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
  modified API
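
The per-frag callback chaining this patch introduces in xenvif_fill_frags()
(each pending slot's ubuf_info linked through its ctx pointer, with the head
published via skb destructor_arg) can be sketched in isolation. The struct
and function names below are illustrative stand-ins, not the driver's real
layout:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct ubuf_info (illustrative). */
struct ubuf_info_sketch {
    void *ctx;   /* next callback struct in the chain, or NULL at the tail */
    int desc;    /* pending ring index this slot represents */
};

#define NSLOTS 4

/* Chain the per-frag callback structs the way xenvif_fill_frags() does:
 * the first frag becomes the list head (what the skb's destructor_arg
 * would point at), each later frag is appended by pointing the previous
 * entry's ctx at it, and the tail's ctx stays NULL. */
static struct ubuf_info_sketch *
chain_frags(struct ubuf_info_sketch slots[], const int frag_idx[], int n)
{
    struct ubuf_info_sketch *head = NULL, *prev = NULL;

    for (int i = 0; i < n; i++) {
        struct ubuf_info_sketch *cur = &slots[frag_idx[i]];
        cur->ctx = NULL;          /* current frag is the tail for now */
        if (!prev)
            head = cur;           /* first frag: becomes destructor_arg */
        else
            prev->ctx = cur;      /* link previous frag to this one */
        prev = cur;
    }
    return head;
}
```

When the zerocopy callback fires, walking this chain from destructor_arg
lets the dealloc thread return every pending slot of the skb in one pass.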

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   60 +++++++-
 drivers/net/xen-netback/netback.c   |  256 ++++++++++++++---------------------
 2 files changed, 159 insertions(+), 157 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index a7855b3..1e0bf71 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -123,7 +123,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	BUG_ON(skb->dev != dev);
 
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (vif->task == NULL ||
+	    vif->dealloc_task == NULL ||
+	    !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->pending_prod = MAX_PENDING_REQS;
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
+	spin_lock_init(&vif->dealloc_lock);
+	spin_lock_init(&vif->response_lock);
+	/* If ballooning is disabled, this will consume real memory, so you
+	 * better enable it. The long term solution would be to use just a
+	 * bunch of valid page descriptors, without dependency on ballooning
+	 */
+	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
+				       vif->mmap_pages,
+				       false);
+	if (err) {
+		netdev_err(dev, "Could not reserve mmap_pages\n");
+		return NULL;
+	}
+	for (i = 0; i < MAX_PENDING_REQS; i++) {
+		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
+			{ .callback = xenvif_zerocopy_callback,
+			  .ctx = NULL,
+			  .desc = i };
+		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
+	}
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err;
 
 	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&vif->dealloc_wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
@@ -431,6 +452,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 		goto err_rx_unbind;
 	}
 
+	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
+					   (void *)vif,
+					   "%s-dealloc",
+					   vif->dev->name);
+	if (IS_ERR(vif->dealloc_task)) {
+		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		err = PTR_ERR(vif->dealloc_task);
+		goto err_rx_unbind;
+	}
+
 	vif->task = task;
 
 	rtnl_lock();
@@ -443,6 +474,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	rtnl_unlock();
 
 	wake_up_process(vif->task);
+	wake_up_process(vif->dealloc_task);
 
 	return 0;
 
@@ -480,6 +512,11 @@ void xenvif_disconnect(struct xenvif *vif)
 		vif->task = NULL;
 	}
 
+	if (vif->dealloc_task) {
+		kthread_stop(vif->dealloc_task);
+		vif->dealloc_task = NULL;
+	}
+
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -495,6 +532,23 @@ void xenvif_disconnect(struct xenvif *vif)
 
 void xenvif_free(struct xenvif *vif)
 {
+	int i, unmap_timeout = 0;
+
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
+			unmap_timeout++;
+			schedule_timeout(msecs_to_jiffies(1000));
+			if (unmap_timeout > 9 &&
+			    net_ratelimit())
+				netdev_err(vif->dev,
+					   "Page still granted! Index: %x\n",
+					   i);
+			i = -1;
+		}
+	}
+
+	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7050f63..e73af87 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -645,9 +645,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
-					       struct sk_buff *skb,
-					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
+							struct sk_buff *skb,
+							struct xen_netif_tx_request *txp,
+							struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -811,83 +814,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -909,9 +841,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+		    NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				   "Stale mapped handle! pending_idx %x handle %x\n",
+				   pending_idx,
+				   vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+			    NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Stale mapped handle! pending_idx %x handle %x\n",
+					   pending_idx,
+					   vif->grant_tx_handle[pending_idx]);
+				xenvif_fatal_tx_err(vif);
+			}
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -975,7 +926,9 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif,
+			      struct sk_buff *skb,
+			      u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -989,6 +942,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous */
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -996,10 +960,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1480,30 +1449,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1532,17 +1481,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1566,12 +1515,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+				  skb,
+				  skb_shinfo(skb)->destructor_arg ?
+				  pending_idx :
+				  INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			/* We have to set this flag so the dealloc thread can
+			 * send the slots back
+			 */
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1731,7 +1702,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1741,7 +1712,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = gnttab_map_refs(vif->tx_map_ops,
+			      vif->pages_to_map,
+			      nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1752,45 +1726,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
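
The publish pattern in the rewritten xenvif_idx_release() (write the ring
entry, issue a barrier, then bump the producer) is the classic
single-producer publish. A user-space sketch with C11 atomics, where the
release store plays the role of the patch's mb(); names and sizes here are
hypothetical, and concurrent callers are assumed to be serialized (as
response_lock does in the patch):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define RING_SLOTS 256
#define RING_MASK  (RING_SLOTS - 1)

static uint16_t pending_ring[RING_SLOTS];
static atomic_uint pending_prod = 0;

/* Return a pending index to the ring. The slot write must be globally
 * visible before a consumer can observe the incremented producer, which
 * the release store guarantees -- the same ordering the mb() in the
 * patch enforces before pending_prod++. */
static void idx_release(uint16_t pending_idx)
{
    unsigned prod = atomic_load_explicit(&pending_prod,
                                         memory_order_relaxed);
    pending_ring[prod & RING_MASK] = pending_idx;
    atomic_store_explicit(&pending_prod, prod + 1, memory_order_release);
}
```

A consumer pairs this with an acquire load of pending_prod before reading
the slot, so it never sees a producer value ahead of the slot contents.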

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7050f63..e73af87 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -645,9 +645,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
-					       struct sk_buff *skb,
-					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
+							struct sk_buff *skb,
+							struct xen_netif_tx_request *txp,
+							struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -811,83 +814,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -909,9 +841,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+		    NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				   "Stale mapped handle! pending_idx %x handle %x\n",
+				   pending_idx,
+				   vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+			    NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Stale mapped handle! pending_idx %x handle %x\n",
+					   pending_idx,
+					   vif->grant_tx_handle[pending_idx]);
+				xenvif_fatal_tx_err(vif);
+			}
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -975,7 +926,9 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif,
+			      struct sk_buff *skb,
+			      u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -989,6 +942,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous*/
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -996,10 +960,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1480,30 +1449,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1532,17 +1481,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1566,12 +1515,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+				  skb,
+				  skb_shinfo(skb)->destructor_arg ?
+				  pending_idx :
+				  INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			/* We have to set this flag so the dealloc thread can
+			 * send the slots back
+			 */
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1731,7 +1702,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1741,7 +1712,10 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = gnttab_map_refs(vif->tx_map_ops,
+			      vif->pages_to_map,
+			      nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1752,45 +1726,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AnY-0008Iu-RF; Tue, 14 Jan 2014 20:41:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amm-00089i-1M
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:33 +0000
Received: from [85.158.139.211:26161] by server-16.bemta-5.messagelabs.com id
	B8/AA-11843-EB0A5D25; Tue, 14 Jan 2014 20:40:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389732023!9769819!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23666 invoked from network); 14 Jan 2014 20:40:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858976"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:16 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:16 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:51 +0000
Message-ID: <1389731995-9887-6-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 5/9] xen-netback: Add stat counters
	for zerocopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the buffers had to be copied. They
also help detect leaked packets: if "sent != success + fail", some packets
were probably never freed up properly.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    3 +++
 drivers/net/xen-netback/interface.c |   15 +++++++++++++++
 drivers/net/xen-netback/netback.c   |    9 ++++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 419e63c..e3c28ff 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -155,6 +155,9 @@ struct xenvif {
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
+	unsigned long tx_zerocopy_sent;
+	unsigned long tx_zerocopy_success;
+	unsigned long tx_zerocopy_fail;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index af5216f..75fe683 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -239,6 +239,21 @@ static const struct xenvif_stat {
 		"rx_gso_checksum_fixup",
 		offsetof(struct xenvif, rx_gso_checksum_fixup)
 	},
+	/* If (sent != success + fail), there are probably packets never
+	 * freed up properly!
+	 */
+	{
+		"tx_zerocopy_sent",
+		offsetof(struct xenvif, tx_zerocopy_sent),
+	},
+	{
+		"tx_zerocopy_success",
+		offsetof(struct xenvif, tx_zerocopy_success),
+	},
+	{
+		"tx_zerocopy_fail",
+		offsetof(struct xenvif, tx_zerocopy_fail)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a1b03e4..e2dd565 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1611,8 +1611,10 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 		 * skb_copy_ubufs while we are still in control of the skb. E.g.
 		 * the __pskb_pull_tail earlier can do such thing.
 		 */
-		if (skb_shinfo(skb)->destructor_arg)
+		if (skb_shinfo(skb)->destructor_arg) {
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent++;
+		}
 
 		netif_receive_skb(skb);
 	}
@@ -1645,6 +1647,11 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
 		napi_schedule(&vif->napi);
 	} while (ubuf);
 	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+
+	if (likely(zerocopy_success))
+		vif->tx_zerocopy_success++;
+	else
+		vif->tx_zerocopy_fail++;
 }
 
 static inline void xenvif_tx_action_dealloc(struct xenvif *vif)


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Ana-0008Jj-CY; Tue, 14 Jan 2014 20:41:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amo-00089q-89
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:35 +0000
Received: from [85.158.137.68:57063] by server-2.bemta-3.messagelabs.com id
	90/B8-17329-FB0A5D25; Tue, 14 Jan 2014 20:40:31 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389732026!9194144!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13774 invoked from network); 14 Jan 2014 20:40:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858980"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:19 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:19 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:52 +0000
Message-ID: <1389731995-9887-7-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 6/9] xen-netback: Handle guests with
	too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The Xen network protocol had an implicit dependency on MAX_SKB_FRAGS. Netback
has to handle guests sending up to XEN_NETBK_LEGACY_SLOTS_MAX slots. To
achieve that:
- create a new skb
- map the leftover slots to its frags (it has no linear buffer!)
- chain it to the previous skb through skb_shinfo(skb)->frag_list
- map them
- copy the whole thing into a brand new skb and send it to the stack
- unmap the pages of the two old skbs

v3:
- adding extra check for frag number
- consolidate alloc_skb's into xenvif_alloc_skb()
- BUG_ON(frag_overflow > MAX_SKB_FRAGS)

v4:
- handle error of skb_copy_expand()

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/netback.c |  125 ++++++++++++++++++++++++++++++++++---
 1 file changed, 115 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index c2b2597..345c6a2 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -802,6 +802,20 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 
 }
 
+static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
+{
+	struct sk_buff *skb =
+		alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
+			  GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(skb == NULL))
+		return NULL;
+
+	/* Packets passed to netif_rx() must have some headroom. */
+	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+
+	return skb;
+}
+
 static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 							struct sk_buff *skb,
 							struct xen_netif_tx_request *txp,
@@ -812,11 +826,16 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	u16 pending_idx = *((u16 *)skb->data);
 	int start;
 	pending_ring_idx_t index;
-	unsigned int nr_slots;
+	unsigned int nr_slots, frag_overflow = 0;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
 	 */
+	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
+		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
+		BUG_ON(frag_overflow > MAX_SKB_FRAGS);
+		shinfo->nr_frags = MAX_SKB_FRAGS;
+	}
 	nr_slots = shinfo->nr_frags;
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -832,6 +851,29 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
+	if (frag_overflow) {
+		struct sk_buff *nskb = xenvif_alloc_skb(0);
+		if (unlikely(nskb == NULL)) {
+			netdev_err(vif->dev,
+				   "Can't allocate the frag_list skb.\n");
+			return NULL;
+		}
+
+		shinfo = skb_shinfo(nskb);
+		frags = shinfo->frags;
+
+		for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
+		     shinfo->nr_frags++, txp++, gop++) {
+			index = pending_index(vif->pending_cons++);
+			pending_idx = vif->pending_ring[index];
+			xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+			frag_set_pending_idx(&frags[shinfo->nr_frags],
+					     pending_idx);
+		}
+
+		skb_shinfo(skb)->frag_list = nskb;
+	}
+
 	return gop;
 }
 
@@ -845,6 +887,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
+	struct sk_buff *first_skb = NULL;
 
 	/* Check status of header. */
 	err = gop->status;
@@ -867,6 +910,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
+check_frags:
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
 
@@ -903,11 +947,20 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
 			continue;
-
 		/* First error: invalidate header and preceding fragments. */
-		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_unmap(vif, pending_idx);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		if (!first_skb) {
+			pending_idx = *((u16 *)skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+					   pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		} else {
+			pending_idx = *((u16 *)first_skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+					   pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		}
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
 			xenvif_idx_unmap(vif, pending_idx);
@@ -919,6 +972,32 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		err = newerr;
 	}
 
+	if (shinfo->frag_list) {
+		first_skb = skb;
+		skb = shinfo->frag_list;
+		shinfo = skb_shinfo(skb);
+		nr_frags = shinfo->nr_frags;
+		start = 0;
+
+		goto check_frags;
+	}
+
+	/* There was a mapping error in the frag_list skb. We have to unmap
+	 * the first skb's frags
+	 */
+	if (first_skb && err) {
+		int j;
+		shinfo = skb_shinfo(first_skb);
+		pending_idx = *((u16 *)first_skb->data);
+		start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
+		for (j = start; j < shinfo->nr_frags; j++) {
+			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif, pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		}
+	}
+
 	*gopp = gop + 1;
 	return err;
 }
@@ -1422,8 +1501,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
 			PKT_PROT_LEN : txreq.size;
 
-		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
-				GFP_ATOMIC | __GFP_NOWARN);
+		skb = xenvif_alloc_skb(data_len);
 		if (unlikely(skb == NULL)) {
 			netdev_dbg(vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
@@ -1431,9 +1509,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			break;
 		}
 
-		/* Packets passed to netif_rx() must have some headroom. */
-		skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
-
 		if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
@@ -1495,6 +1570,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
+		struct sk_buff *nskb = NULL;
 
 		pending_idx = *((u16 *)skb->data);
 		txp = &vif->pending_tx_info[pending_idx].req;
@@ -1537,6 +1613,32 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				  pending_idx :
 				  INVALID_PENDING_IDX);
 
+		if (skb_shinfo(skb)->frag_list) {
+			nskb = skb_shinfo(skb)->frag_list;
+			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
+			skb->len += nskb->len;
+			skb->data_len += nskb->len;
+			skb->truesize += nskb->truesize;
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent += 2;
+			nskb = skb;
+
+			skb = skb_copy_expand(skb,
+					      0,
+					      0,
+					      GFP_ATOMIC | __GFP_NOWARN);
+			if (!skb) {
+				netdev_dbg(vif->dev,
+					   "Can't consolidate skb with too many fragments\n");
+				if (skb_shinfo(nskb)->destructor_arg)
+					skb_shinfo(nskb)->tx_flags |=
+						SKBTX_DEV_ZEROCOPY;
+				kfree_skb(nskb);
+				continue;
+			}
+			skb_shinfo(skb)->destructor_arg = NULL;
+		}
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
@@ -1590,6 +1692,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		}
 
 		netif_receive_skb(skb);
+
+		if (nskb)
+			kfree_skb(nskb);
 	}
 
 	return work_done;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Ang-0008N2-AC; Tue, 14 Jan 2014 20:41:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amo-00089g-89
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:35 +0000
Received: from [85.158.137.68:45504] by server-7.bemta-3.messagelabs.com id
	9E/31-27599-EB0A5D25; Tue, 14 Jan 2014 20:40:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389732026!9194144!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13702 invoked from network); 14 Jan 2014 20:40:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858969"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:11 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:10 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:49 +0000
Message-ID: <1389731995-9887-4-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 3/9] xen-netback: Remove old TX
	grant copy definitions and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These definitions became obsolete with grant mapping. The indentation was
intentionally left unchanged in the previous patches to keep their diffs
readable; this patch fixes it up.

v2:
- move the indentation fixup patch here

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h  |   37 +------------------
 drivers/net/xen-netback/netback.c |   72 ++++++++-----------------------------
 2 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index f35a3ce..2b1cd83 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -46,39 +46,9 @@
 #include <xen/xenbus.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
-/* For the head field in pending_tx_info: it is used to indicate
- * whether this tx info is the head of one or more coalesced requests.
- *
- * When head != INVALID_PENDING_RING_IDX, it means the start of a new
- * tx requests queue and the end of previous queue.
- *
- * An example sequence of head fields (I = INVALID_PENDING_RING_IDX):
- *
- * ...|0 I I I|5 I|9 I I I|...
- * -->|<-INUSE----------------
- *
- * After consuming the first slot(s) we have:
- *
- * ...|V V V V|5 I|9 I I I|...
- * -----FREE->|<-INUSE--------
- *
- * where V stands for "valid pending ring index". Any number other
- * than INVALID_PENDING_RING_IDX is OK. These entries are considered
- * free and can contain any number other than
- * INVALID_PENDING_RING_IDX. In practice we use 0.
- *
- * The in use non-INVALID_PENDING_RING_IDX (say 0, 5 and 9 in the
- * above example) number is the index into pending_tx_info and
- * mmap_pages arrays.
- */
 struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
+	struct xen_netif_tx_request req; /* tx request */
 	/* callback data for released SKBs. The	callback is always
 	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
 	 * contains the pending_idx
@@ -135,11 +105,6 @@ struct xenvif {
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5724468..f74fa92 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,16 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-/*
- * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
- * one or more merged tx requests, otherwise it is the continuation of
- * previous tx request.
- */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
-{
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
-}
-
 static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status);
 
@@ -762,19 +752,6 @@ static int xenvif_count_requests(struct xenvif *vif,
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
-				      u16 pending_idx)
-{
-	struct page *page;
-
-	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-	if (!page)
-		return NULL;
-	vif->mmap_pages[pending_idx] = page;
-
-	return page;
-}
-
 static inline void xenvif_tx_create_gop(struct xenvif *vif,
 					u16 pending_idx,
 					struct xen_netif_tx_request *txp,
@@ -797,13 +774,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
 	u16 pending_idx = *((u16 *)skb->data);
-	u16 head_idx = 0;
-	int slot, start;
-	struct page *page;
-	pending_ring_idx_t index, start_idx = 0;
-	uint16_t dst_offset;
+	int start;
+	pending_ring_idx_t index;
 	unsigned int nr_slots;
-	struct pending_tx_info *first = NULL;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
@@ -815,8 +788,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
 	     shinfo->nr_frags++, txp++, gop++) {
-				index = pending_index(vif->pending_cons++);
-				pending_idx = vif->pending_ring[index];
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
 		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
@@ -824,18 +797,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
 	return gop;
-err:
-	/* Unwind, freeing all pages and sending error responses. */
-	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
-				frag_get_pending_idx(&frags[shinfo->nr_frags]),
-				XEN_NETIF_RSP_ERROR);
-	}
-	/* The head too, if necessary. */
-	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
-
-	return NULL;
 }
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
@@ -848,7 +809,6 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
-	u16 peek; /* peek into next tx request */
 
 	/* Check status of header. */
 	err = gop->status;
@@ -873,14 +833,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
 		tx_info = &vif->pending_tx_info[pending_idx];
-		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-			newerr = (++gop)->status;
+		newerr = (++gop)->status;
 
 		if (likely(!newerr)) {
 			if (vif->grant_tx_handle[pending_idx] !=
@@ -1353,7 +1311,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 	       (skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
-		struct page *page;
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
 		u16 pending_idx;
 		RING_IDX idx;
@@ -1728,18 +1685,17 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
-	u16 peek; /* peek into next tx request */
 	unsigned long flags;
 
-		pending_tx_info = &vif->pending_tx_info[pending_idx];
-		spin_lock_irqsave(&vif->response_lock, flags);
-		make_tx_response(vif, &pending_tx_info->req, status);
-		index = pending_index(vif->pending_prod);
-		vif->pending_ring[index] = pending_idx;
-		/* TX shouldn't use the index before we give it back here */
-		mb();
-		vif->pending_prod++;
-		spin_unlock_irqrestore(&vif->response_lock, flags);
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	spin_lock_irqsave(&vif->response_lock, flags);
+	make_tx_response(vif, &pending_tx_info->req, status);
+	index = pending_index(vif->pending_prod);
+	vif->pending_ring[index] = pending_idx;
+	/* TX shouldn't use the index before we give it back here */
+	mb();
+	vif->pending_prod++;
+	spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:41:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:41:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Ang-0008N2-AC; Tue, 14 Jan 2014 20:41:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Amo-00089g-89
	for xen-devel@lists.xenproject.org; Tue, 14 Jan 2014 20:40:35 +0000
Received: from [85.158.137.68:45504] by server-7.bemta-3.messagelabs.com id
	9E/31-27599-EB0A5D25; Tue, 14 Jan 2014 20:40:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389732026!9194144!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13702 invoked from network); 14 Jan 2014 20:40:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:40:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92858969"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Jan 2014 20:40:11 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 15:40:10 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 14 Jan 2014 20:39:49 +0000
Message-ID: <1389731995-9887-4-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v4 3/9] xen-netback: Remove old TX
	grant copy definitons and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These became obsolete with grant mapping. I have intentionally left the
indentation this way until now, to improve the readability of the previous patches.

v2:
- move the indentation fixup patch here

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h  |   37 +------------------
 drivers/net/xen-netback/netback.c |   72 ++++++++-----------------------------
 2 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index f35a3ce..2b1cd83 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -46,39 +46,9 @@
 #include <xen/xenbus.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
-/* For the head field in pending_tx_info: it is used to indicate
- * whether this tx info is the head of one or more coalesced requests.
- *
- * When head != INVALID_PENDING_RING_IDX, it means the start of a new
- * tx requests queue and the end of previous queue.
- *
- * An example sequence of head fields (I = INVALID_PENDING_RING_IDX):
- *
- * ...|0 I I I|5 I|9 I I I|...
- * -->|<-INUSE----------------
- *
- * After consuming the first slot(s) we have:
- *
- * ...|V V V V|5 I|9 I I I|...
- * -----FREE->|<-INUSE--------
- *
- * where V stands for "valid pending ring index". Any number other
- * than INVALID_PENDING_RING_IDX is OK. These entries are considered
- * free and can contain any number other than
- * INVALID_PENDING_RING_IDX. In practice we use 0.
- *
- * The in use non-INVALID_PENDING_RING_IDX (say 0, 5 and 9 in the
- * above example) number is the index into pending_tx_info and
- * mmap_pages arrays.
- */
 struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
+	struct xen_netif_tx_request req; /* tx request */
 	/* callback data for released SKBs. The	callback is always
 	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
 	 * contains the pending_idx
@@ -135,11 +105,6 @@ struct xenvif {
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5724468..f74fa92 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,16 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-/*
- * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
- * one or more merged tx requests, otherwise it is the continuation of
- * previous tx request.
- */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
-{
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
-}
-
 static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status);
 
@@ -762,19 +752,6 @@ static int xenvif_count_requests(struct xenvif *vif,
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
-				      u16 pending_idx)
-{
-	struct page *page;
-
-	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-	if (!page)
-		return NULL;
-	vif->mmap_pages[pending_idx] = page;
-
-	return page;
-}
-
 static inline void xenvif_tx_create_gop(struct xenvif *vif,
 					u16 pending_idx,
 					struct xen_netif_tx_request *txp,
@@ -797,13 +774,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
 	u16 pending_idx = *((u16 *)skb->data);
-	u16 head_idx = 0;
-	int slot, start;
-	struct page *page;
-	pending_ring_idx_t index, start_idx = 0;
-	uint16_t dst_offset;
+	int start;
+	pending_ring_idx_t index;
 	unsigned int nr_slots;
-	struct pending_tx_info *first = NULL;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
@@ -815,8 +788,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
 	     shinfo->nr_frags++, txp++, gop++) {
-				index = pending_index(vif->pending_cons++);
-				pending_idx = vif->pending_ring[index];
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
 		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
@@ -824,18 +797,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
 	return gop;
-err:
-	/* Unwind, freeing all pages and sending error responses. */
-	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
-				frag_get_pending_idx(&frags[shinfo->nr_frags]),
-				XEN_NETIF_RSP_ERROR);
-	}
-	/* The head too, if necessary. */
-	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
-
-	return NULL;
 }
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
@@ -848,7 +809,6 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
-	u16 peek; /* peek into next tx request */
 
 	/* Check status of header. */
 	err = gop->status;
@@ -873,14 +833,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
 		tx_info = &vif->pending_tx_info[pending_idx];
-		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-			newerr = (++gop)->status;
+		newerr = (++gop)->status;
 
 		if (likely(!newerr)) {
 			if (vif->grant_tx_handle[pending_idx] !=
@@ -1353,7 +1311,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 	       (skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
-		struct page *page;
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
 		u16 pending_idx;
 		RING_IDX idx;
@@ -1728,18 +1685,17 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
-	u16 peek; /* peek into next tx request */
 	unsigned long flags;
 
-		pending_tx_info = &vif->pending_tx_info[pending_idx];
-		spin_lock_irqsave(&vif->response_lock, flags);
-		make_tx_response(vif, &pending_tx_info->req, status);
-		index = pending_index(vif->pending_prod);
-		vif->pending_ring[index] = pending_idx;
-		/* TX shouldn't use the index before we give it back here */
-		mb();
-		vif->pending_prod++;
-		spin_unlock_irqrestore(&vif->response_lock, flags);
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	spin_lock_irqsave(&vif->response_lock, flags);
+	make_tx_response(vif, &pending_tx_info->req, status);
+	index = pending_index(vif->pending_prod);
+	vif->pending_ring[index] = pending_idx;
+	/* TX shouldn't use the index before we give it back here */
+	mb();
+	vif->pending_prod++;
+	spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:46:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:46:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Aso-00016B-Rr; Tue, 14 Jan 2014 20:46:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3Asm-00015q-A4
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:46:44 +0000
Received: from [85.158.139.211:18705] by server-8.bemta-5.messagelabs.com id
	B6/4A-29838-332A5D25; Tue, 14 Jan 2014 20:46:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389732401!8547054!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26763 invoked from network); 14 Jan 2014 20:46:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:46:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208,217";a="90741210"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:46:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 15:46:40 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3Asi-0003iO-2v;
	Tue, 14 Jan 2014 20:46:40 +0000
Message-ID: <52D5A230.3010005@citrix.com>
Date: Tue, 14 Jan 2014 20:46:40 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
In-Reply-To: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ctfconvert problems with build on illumos based
	platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3787406655883536074=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3787406655883536074==
Content-Type: multipart/alternative;
	boundary="------------060201020105070703050609"

--------------060201020105070703050609
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 14/01/14 20:38, Igor Kozhukhov wrote:
> Hello All,
>
> I have problems with ctfconvert when building the xen-4.2 sources on an
> illumos based platform:
> https://www.illumos.org/issues/3205
>
> could you please let me know the reason for having a zero-sized member
> in 'struct hvm_hw_cpu_xsave':
> struct { char x[0]; } ymm;    /* YMM */
> --
> Best regards,
> Igor Kozhukhov
>
>

The structure is variable length depending on whether the VM has enabled
AVX support.

It is rather unfortunate that we have non-compliant C used to specify
the ABI.

~Andrew
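
The zero-length-array idiom in question can be sketched as follows; the struct names and helper below are hypothetical, not from the Xen headers. `char x[0]` is a GNU C extension, while the C99 flexible array member `char x[]` is the conforming spelling that tools such as ctfconvert are more likely to accept:

```c
#include <stdlib.h>
#include <stddef.h>

/* GNU-style zero-length array, like the one in hvm_hw_cpu_xsave
 * (a GNU C extension, not ISO C). */
struct ymm_gnu {
    size_t len;
    char data[0];    /* contributes no size; payload follows the struct */
};

/* ISO C99 equivalent: a flexible array member. */
struct ymm_c99 {
    size_t len;
    char data[];     /* standard since C99 */
};

/* Allocate a variable-length record with 'payload' bytes after the header. */
struct ymm_c99 *ymm_alloc(size_t payload)
{
    struct ymm_c99 *p = malloc(sizeof(*p) + payload);
    if (p)
        p->len = payload;
    return p;
}
```

In both spellings the array itself adds nothing to `sizeof` the struct; the actual length is only known at allocation time, which is exactly why the saved-state record is "variable length depending on whether the VM has enabled AVX".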

--------------060201020105070703050609
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 14/01/14 20:38, Igor Kozhukhov
      wrote:<br>
    </div>
    <blockquote
      cite="mid:A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      Hello All,
      <div><br>
      </div>
      <div>I have problems with ctfconvert with build sources of xen-4.2
        on illumos based platform:</div>
      <div><a moz-do-not-send="true"
          href="https://www.illumos.org/issues/3205">https://www.illumos.org/issues/3205</a></div>
      <div><br>
      </div>
      <div>could you please let me know - what the reason to have a
        variable in structure '<span class="Apple-style-span"
          style="color: rgb(26, 26, 26); font-family: monospace;
          font-size: 13px; white-space: pre; ">struct hvm_hw_cpu_xsave'
        </span>&nbsp;with zero size:</div>
      <div>
        <pre style="margin: 1em 1em 1em 1.6em; padding: 2px 2px 2px 0px; background-color: rgb(250, 250, 250); border: 1px solid rgb(218, 218, 218); width: auto; overflow-x: auto; overflow-y: hidden; color: rgb(26, 26, 26); font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;">struct { char x[0]; } ymm;    /* YMM */</pre>
        <div>--</div>
      </div>
      <div>
        <div apple-content-edited="true">
          <div style="word-wrap: break-word; -webkit-nbsp-mode: space;
            -webkit-line-break: after-white-space; ">Best regards,<br>
            Igor Kozhukhov<br>
            <br>
            <br>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    The structure is variable length depending on whether the VM has
    enabled AVX support.<br>
    <br>
    It is rather unfortunate that we have non-compliant C used to
    specify the ABI.<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------060201020105070703050609--


--===============3787406655883536074==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3787406655883536074==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:50:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AwA-0001Qg-PO; Tue, 14 Jan 2014 20:50:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3Aw9-0001Qa-38
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:50:13 +0000
Received: from [85.158.139.211:50029] by server-10.bemta-5.messagelabs.com id
	B5/DE-01405-403A5D25; Tue, 14 Jan 2014 20:50:12 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389732610!9718672!1
X-Originating-IP: [209.85.217.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9649 invoked from network); 14 Jan 2014 20:50:11 -0000
Received: from mail-lb0-f179.google.com (HELO mail-lb0-f179.google.com)
	(209.85.217.179)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:50:11 -0000
Received: by mail-lb0-f179.google.com with SMTP id p9so112195lbv.24
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 12:50:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=cfNJYSbk4Pl8JBa1Zg3TUrm3cTviKf/AnCz/t5yhYpE=;
	b=Upe9hhrEGKO5PR76M6JmLbR7gDI0FcaXj20cmzSHQFnNxXZZPBjz4fQS/o6swenX7m
	6AXQtuIF9eFmbHWDQfeiEu+p1V6a/bkv+NWLOiSUJp5tNx3jvgEdCIxNES5dld49Ha31
	Bz9culXDlzlvaMtCzScNJttokkqWkGZrbzMt082GrHV1z59vQSkC45q/kN/jLReQTzoY
	R0fML5KzfQT9fKSBm3lZfQj7H/c3WF4ONv60fdD2qyI2T5Ky5cMOyrGkZ/5JE9lVw4xT
	CRKod6/DHcSEzYep0P5mgQEVU3RRro//kbG8OYmZK2mpe2cF766QE5NISCY3ZQNU7fzy
	whxg==
X-Received: by 10.152.143.101 with SMTP id sd5mr2040834lab.26.1389732610512;
	Tue, 14 Jan 2014 12:50:10 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44]) by mx.google.com with ESMTPSA id
	bo10sm1035937lbb.16.2014.01.14.12.50.09 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 12:50:09 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52D304D4.3030704@citrix.com>
Date: Wed, 15 Jan 2014 00:50:08 +0400
Message-Id: <3F54960E-F347-4A93-ADFB-9C7E1CAA8E0D@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
	<52D2FC4B.2080509@citrix.com>
	<BA268314-38BF-4948-B69C-AD5A3916D5B9@gmail.com>
	<52D304D4.3030704@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

hi Andrew,

On Jan 13, 2014, at 1:10 AM, Andrew Cooper wrote:

> On 12/01/2014 21:01, Igor Kozhukhov wrote:
>> On Jan 13, 2014, at 12:34 AM, Andrew Cooper wrote:
>> 
>>> On 12/01/2014 19:26, Igor Kozhukhov wrote:
>>>> Hi Andrew,
>>>> 
>>>> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>>>> 
>>>>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>>>>> Hello All,
>>>>>> 
>>>>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>>>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>>>> 
>>>>>> i have received 'pirq' from hypervisor > 255.
>>>>>> 
>>>>>> map_irq.domid = DOMID_SELF;                                                   
>>>>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>>>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>>>>> map_irq.pirq = -1;                                                            
>>>>>> map_irq.bus = busnum;                                                         
>>>>>> map_irq.devfn = devfn;                                                        
>>>>>> map_irq.entry_nr = i;                                                         
>>>>>> map_irq.table_base = 0;                                              
>>>>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>>>>> irqno = map_irq.pirq;
>>>>>> 
>>>>>> i get:
>>>>>> irqno = 279, which is more than APIC_MAX_VECTOR (255)
>>>>>> 
>>>>>> my question: how do i correctly translate a pirq to an irq for the APIC map table?
>>>>>> 
>>>>>> everything worked well on xen-3.4, but physdev_map_pirq() has a different implementation in xen-4.2.
>>>>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>>>>> handed back will be the event channel on which the notification will
>>>>> arrive, and has nothing to do with regular IDT vectors.
>>>> it is for dom0.
>>>> 
>>>> full boot log with xen debug info and DDI_DEBUG on illumos you can find here :
>>>> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>>>> 
>>>> 
>>>> if possible, could you please let me know how MSI irq translation to the APIC irq table works for xen-4.2?
>>>> 
>>>> i see that in the xen code we have a range from 16 to 784 for 4 CPUs for MSI IRQs (the irq_create() function),
>>>> but how do i correctly translate that to an APIC IRQ (physical irq)?
>>> Why do you need to know?
>>> 
>>> Xen controls all interrupts on the system.  Event channels which you
>>> register with Xen have no mapping/relation to local apic vectors.  Your
>>> device drivers should not expect to have an apic vector in their hand.
>>> 
>>> The reason behind this is that as virtual cpus get scheduled around
>>> physical cpus, Xen needs to move the interrupts from IDT to IDT at which
>>> point their vector will change.
>> is it possible to receive the IRQ from the APIC table from Xen as an index?
> 
> No.
> 
>> i need it as an index into the local APIC table array.
>> all other functions use the index into apic_irq_table[] as the APIC IRQ.
>> 
>> i have the function apic_find_irq() for this.
>> it is not my implementation - it is the original code.
> 
> Nothing in a dom0 system should know/care about apic vectors.  Dom0
> cannot use the IDT, nor can it even write to MSI/MSI-X configuration
> registers (they get trapped and fixed-up by Xen).
> 
> Even if there were a hypercall to map an event channel back to an
> apic-id/vector, it is possible that the data would be stale by the time
> the vcpu ran again.

thanks for your details. I have found and fixed the problem.
now I have finished the illumos side and need to fork the xen-4.2 sources.

it was an interesting problem.
details:
i have an array:
#define APIC_MAX_VECTOR 255
uchar_t msi_vector_to_pirq[APIC_MAX_VECTOR+1]

i received map_irq.pirq = 279 (0x117),
which is more than 255 (0xff).

i have the operation:
msi_vector_to_pirq[vector] = (uchar_t)pirq;

after this operation i had msi_vector_to_pirq[vector] = 0x17;
:)

it was a mistake, because i tried to use APIC IRQs from the reserved space for the additional PIRQs.
i replaced it: now i get a free APIC IRQ from the range 0x10 - 0xff and update the IRQ info with an additional field holding the PIRQ, for mapping by event.

now i have correct APIC IRQs, kept in sync with the PIRQ map.
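
The truncation Igor describes can be reproduced with a minimal C sketch; the `uchar_t` typedef and the function name here are illustrative, not taken from the illumos source:

```c
/* illumos-style typedef, redeclared here for the sketch */
typedef unsigned char uchar_t;

/* Storing a pirq wider than 8 bits into a uchar_t keeps only the low
 * byte: 279 (0x117) silently becomes 0x17 (23), which is why the table
 * ended up holding the wrong vector. */
uchar_t store_pirq(int pirq)
{
    return (uchar_t)pirq;
}
```

Any pirq handed back by PHYSDEVOP_map_pirq that exceeds 255 is corrupted by such a cast, so the table entries must be wide enough for the full pirq range.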

> ~Andrew

-Igor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:50:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3AwA-0001Qg-PO; Tue, 14 Jan 2014 20:50:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3Aw9-0001Qa-38
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:50:13 +0000
Received: from [85.158.139.211:50029] by server-10.bemta-5.messagelabs.com id
	B5/DE-01405-403A5D25; Tue, 14 Jan 2014 20:50:12 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389732610!9718672!1
X-Originating-IP: [209.85.217.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9649 invoked from network); 14 Jan 2014 20:50:11 -0000
Received: from mail-lb0-f179.google.com (HELO mail-lb0-f179.google.com)
	(209.85.217.179)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:50:11 -0000
Received: by mail-lb0-f179.google.com with SMTP id p9so112195lbv.24
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 12:50:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=cfNJYSbk4Pl8JBa1Zg3TUrm3cTviKf/AnCz/t5yhYpE=;
	b=Upe9hhrEGKO5PR76M6JmLbR7gDI0FcaXj20cmzSHQFnNxXZZPBjz4fQS/o6swenX7m
	6AXQtuIF9eFmbHWDQfeiEu+p1V6a/bkv+NWLOiSUJp5tNx3jvgEdCIxNES5dld49Ha31
	Bz9culXDlzlvaMtCzScNJttokkqWkGZrbzMt082GrHV1z59vQSkC45q/kN/jLReQTzoY
	R0fML5KzfQT9fKSBm3lZfQj7H/c3WF4ONv60fdD2qyI2T5Ky5cMOyrGkZ/5JE9lVw4xT
	CRKod6/DHcSEzYep0P5mgQEVU3RRro//kbG8OYmZK2mpe2cF766QE5NISCY3ZQNU7fzy
	whxg==
X-Received: by 10.152.143.101 with SMTP id sd5mr2040834lab.26.1389732610512;
	Tue, 14 Jan 2014 12:50:10 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44]) by mx.google.com with ESMTPSA id
	bo10sm1035937lbb.16.2014.01.14.12.50.09 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 12:50:09 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52D304D4.3030704@citrix.com>
Date: Wed, 15 Jan 2014 00:50:08 +0400
Message-Id: <3F54960E-F347-4A93-ADFB-9C7E1CAA8E0D@gmail.com>
References: <F2E7BD27-3DF5-44D8-B0CC-ACFE12A1614B@gmail.com>
	<52D2DF12.5050801@citrix.com>
	<43CC936F-4037-49A4-B69B-D6ED14FD9EDE@gmail.com>
	<52D2FC4B.2080509@citrix.com>
	<BA268314-38BF-4948-B69C-AD5A3916D5B9@gmail.com>
	<52D304D4.3030704@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] translate pirq to irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

hi Andrew,

On Jan 13, 2014, at 1:10 AM, Andrew Cooper wrote:

> On 12/01/2014 21:01, Igor Kozhukhov wrote:
>> On Jan 13, 2014, at 12:34 AM, Andrew Cooper wrote:
>> 
>>> On 12/01/2014 19:26, Igor Kozhukhov wrote:
>>>> Hi Andrew,
>>>> 
>>>> On Jan 12, 2014, at 10:29 PM, Andrew Cooper wrote:
>>>> 
>>>>> On 11/01/2014 22:59, Igor Kozhukhov wrote:
>>>>>> Hello All,
>>>>>> 
>>>>>> I see a comment in physdev.h for 'struct physdev_map_pirq', var 'pirq':
>>>>>> /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
>>>>>> 
>>>>>> I have received a 'pirq' from the hypervisor that is > 255.
>>>>>> 
>>>>>> map_irq.domid = DOMID_SELF;                                                   
>>>>>> map_irq.type = MAP_PIRQ_TYPE_MSI;                                             
>>>>>> map_irq.index = -1; /* hypervisor auto allocates vector */                    
>>>>>> map_irq.pirq = -1;                                                            
>>>>>> map_irq.bus = busnum;                                                         
>>>>>> map_irq.devfn = devfn;                                                        
>>>>>> map_irq.entry_nr = i;                                                         
>>>>>> map_irq.table_base = 0;                                              
>>>>>> rc = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);                     
>>>>>> irqno = map_irq.pirq;
>>>>>> 
>>>>>> I get:
>>>>>> irqno = 279, which is greater than APIC_MAX_VECTOR (255).
>>>>>> 
>>>>>> My question: how do I correctly translate a pirq to an irq for the APIC map table?
>>>>>> 
>>>>>> Everything works well on xen-3.4, but physdev_map_pirq() has a different implementation there than in xen-4.2.
>>>>> Is this for a PV or HVM guest?  I suspect PV, in which case the irqno
>>>>> handed back will be the event channel on which the notification will
>>>>> arrive, and has nothing to do with regular IDT vectors.
>>>> It is for dom0.
>>>> 
>>>> A full boot log with Xen debug info and DDI_DEBUG on illumos can be found here:
>>>> http://apt2.dilos.org/dilos/logs/putty.log.dom0.txt
>>>> 
>>>> 
>>>> If possible, could you please explain how MSI irq translation to the APIC irq table works for xen-4.2?
>>>> 
>>>> I see that in the Xen code we have a range from 16 to 784 for 4 CPUs for MSI IRQs (the irq_create() function),
>>>> but how do I correctly translate that to an APIC IRQ (physical irq)?
>>> Why do you need to know?
>>> 
>>> Xen controls all interrupts on the system.  Event channels which you
>>> register with Xen have no mapping/relation to local apic vectors.  Your
>>> device drivers should not expect to have an apic vector in their hand.
>>> 
>>> The reason behind this is that as virtual cpus get scheduled around
>>> physical cpus, Xen needs to move the interrupts from IDT to IDT at which
>>> point their vector will change.
>> Is it possible to receive an IRQ from Xen as an index into the APIC table?
> 
> No.
> 
>> I need it as an index into the local APIC table array.
>> All the other functions use an index into apic_irq_table[] as the APIC IRQ.
>> 
>> I have the function apic_find_irq() for this.
>> It is not my implementation; it is the original code.
> 
> Nothing in a dom0 system should know/care about apic vectors.  Dom0
> cannot use the IDT, nor can it even write to MSI/MSI-X configuration
> registers (they get trapped and fixed-up by Xen).
> 
> Even if there were a hypercall to map an event channel back to an
> apic-id/vector, it is possible that the data would be stale by the time
> the vcpu ran again.

Thanks for the details. I have found and fixed the problem.
The illumos side is now finished, and I still have to fork the xen-4.2 sources.

It was an interesting problem. Details:

I have the array:
#define APIC_MAX_VECTOR 255
uchar_t msi_vector_to_pirq[APIC_MAX_VECTOR+1]

I received map_irq.pirq = 279 (0x117), which is greater than 255 (0xff).

I then have the operation:
msi_vector_to_pirq[vector] = (uchar_t)pirq;

which truncates the value, leaving msi_vector_to_pirq[vector] = 0x17.
:)

That was the mistake: I was trying to use APIC IRQs from the reserved space for the additional PIRQs.
I replaced it with: get a free APIC IRQ from the range 0x10 - 0xff and extend the IRQ info with an additional field holding the PIRQ for event-based mapping.

Now I have correct APIC IRQs, kept in sync with the PIRQ map.

> ~Andrew

-Igor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 20:53:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Ayt-0001nN-HS; Tue, 14 Jan 2014 20:53:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3Ayq-0001nG-Tn
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:53:02 +0000
Received: from [85.158.137.68:8166] by server-5.bemta-3.messagelabs.com id
	6D/A9-25188-CA3A5D25; Tue, 14 Jan 2014 20:53:00 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389732778!9153548!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29766 invoked from network); 14 Jan 2014 20:52:59 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:52:59 -0000
Received: by mail-lb0-f177.google.com with SMTP id z5so115940lbh.36
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 12:52:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:message-id:references:to;
	bh=KatuVjEZYxIwEsa9/+CkU5pdN4FHVcKe0HDg0NnXwAw=;
	b=Ol1wsyidPajKmh16D6AIY2vMSn9BcX1NiJU2Y1juGbEvU1G+0sQxFs2dUIwSf270wl
	oJ+Gbt8wc54XFkJPaXYNi0B5Gv4pIIwn5QT0w3+sOIpQyyQquWWY6r6+GDMd1qMhfsIg
	P9iJAOA2we8nzDw+a4x2Lg9fyIHbjtfoCj4yz3CsmC5zLYJSET4VPgfIyQdyo27rI9Gx
	vloIY3VXLNhjQV4R8DwAOKX/moPNn0FhyLv6V4irq7SwRNyR7xkecd9E9MSMlGRfilJC
	0UmamJLn4WYWH5L4eKgSL6tkUFW7bKMybtYSup961QREqPmbrNMr3eO4sxGaEcYw4DbD
	4mFw==
X-Received: by 10.112.164.40 with SMTP id yn8mr67420lbb.88.1389732778550;
	Tue, 14 Jan 2014 12:52:58 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id j1sm1049109lbl.10.2014.01.14.12.52.57
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 12:52:57 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <52D5A230.3010005@citrix.com>
Date: Wed, 15 Jan 2014 00:52:55 +0400
Message-Id: <EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
References: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
	<52D5A230.3010005@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ctfconvert problems with build on illumos based
	platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6806707874873248734=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6806707874873248734==
Content-Type: multipart/alternative; boundary="Apple-Mail=_068B81DA-61B8-4217-A07A-E09E804DA542"


--Apple-Mail=_068B81DA-61B8-4217-A07A-E09E804DA542
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=iso-8859-1


On Jan 15, 2014, at 12:46 AM, Andrew Cooper wrote:

> On 14/01/14 20:38, Igor Kozhukhov wrote:
>> Hello All,
>>=20
>> I have problems with ctfconvert with build sources of xen-4.2 on =
illumos based platform:
>> https://www.illumos.org/issues/3205
>>=20
>> could you please let me know - what the reason to have a variable in =
structure 'struct hvm_hw_cpu_xsave'
>>          with zero size:
>> struct { char x[0]; } ymm;    /* YMM */
>> --
>> Best regards,
>> Igor Kozhukhov
>>=20
>>=20
>=20
> The structure is variable length depending on whether the VM has =
enabled AVX support.
>=20
> It is rather unfortunate that we have non-compliant C used to specify =
the ABI.

Can we use this here instead?
struct { char x[1]; } ymm;

That would fix my problem.

-Igor


--Apple-Mail=_068B81DA-61B8-4217-A07A-E09E804DA542--


--===============6806707874873248734==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6806707874873248734==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:58:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3B3w-0001zX-L8; Tue, 14 Jan 2014 20:58:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3B3v-0001zS-Dj
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:58:15 +0000
Received: from [85.158.143.35:65485] by server-1.bemta-4.messagelabs.com id
	27/30-02132-6E4A5D25; Tue, 14 Jan 2014 20:58:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389733092!11661240!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21032 invoked from network); 14 Jan 2014 20:58:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:58:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208,217";a="90745026"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:58:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 15:58:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3B3r-000461-9w;
	Tue, 14 Jan 2014 20:58:11 +0000
Message-ID: <52D5A4E3.1080405@citrix.com>
Date: Tue, 14 Jan 2014 20:58:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
	<52D5A230.3010005@citrix.com>
	<EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
In-Reply-To: <EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ctfconvert problems with build on illumos based
	platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3114380352825263795=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3114380352825263795==
Content-Type: multipart/alternative;
	boundary="------------050807050109070802020105"

--------------050807050109070802020105
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 14/01/14 20:52, Igor Kozhukhov wrote:
>
> On Jan 15, 2014, at 12:46 AM, Andrew Cooper wrote:
>
>> On 14/01/14 20:38, Igor Kozhukhov wrote:
>>> Hello All,
>>>
>>> I have problems with ctfconvert with build sources of xen-4.2 on
>>> illumos based platform:
>>> https://www.illumos.org/issues/3205
>>>
>>> could you please let me know the reason for having a variable in
>>> structure 'struct hvm_hw_cpu_xsave' with zero size:
>>> struct { char x[0]; } ymm;    /* YMM */
>>> --
>>> Best regards,
>>> Igor Kozhukhov
>>>
>>>
>>
>> The structure is variable length depending on whether the VM has
>> enabled AVX support.
>>
>> It is rather unfortunate that we have non-compliant C used to specify
>> the ABI.
>
> can we use here ?
> struct { char x[1]; } ymm;
>
> it will fix my problem.
>
> -Igor
>

No, it won't.

The 'ymm' member is either absent, with a size of 0 as far as save_area
is concerned, or present, with a size of 16 * 256-bit registers.

The content of this structure is only relevant as far as
hvm_{save,load}_cpu_xsave_states() goes, which resorts to some pointer
trickery to move the data.  Changing that char x[0] to char x[1] would
break the pointer trickery, and migration would suffer an ABI breakage.

~Andrew

--------------050807050109070802020105
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 14/01/14 20:52, Igor Kozhukhov
      wrote:<br>
    </div>
    <blockquote
      cite="mid:EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <div apple-content-edited="true"><span class="Apple-style-span"
          style="border-collapse: separate; color: rgb(0, 0, 0);
          font-family: 'Lucida Grande'; font-style: normal;
          font-variant: normal; font-weight: normal; letter-spacing:
          normal; line-height: normal; orphans: 2; text-align:
          -webkit-auto; text-indent: 0px; text-transform: none;
          white-space: normal; widows: 2; word-spacing: 0px;
          -webkit-border-horizontal-spacing: 0px;
          -webkit-border-vertical-spacing: 0px;
          -webkit-text-decorations-in-effect: none;
          -webkit-text-size-adjust: auto; -webkit-text-stroke-width:
          0px; font-size: medium; "><span class="Apple-style-span"
            style="border-collapse: separate; color: rgb(0, 0, 0);
            font-family: 'Lucida Grande'; font-style: normal;
            font-variant: normal; font-weight: normal; letter-spacing:
            normal; line-height: normal; orphans: 2; text-align:
            -webkit-auto; text-indent: 0px; text-transform: none;
            white-space: normal; widows: 2; word-spacing: 0px;
            -webkit-border-horizontal-spacing: 0px;
            -webkit-border-vertical-spacing: 0px;
            -webkit-text-decorations-in-effect: none;
            -webkit-text-size-adjust: auto; -webkit-text-stroke-width:
            0px; font-size: medium; ">
            <div style="word-wrap: break-word; -webkit-nbsp-mode: space;
              -webkit-line-break: after-white-space; "><br>
            </div>
          </span></span></div>
      <div>
        <div>On Jan 15, 2014, at 12:46 AM, Andrew Cooper wrote:</div>
        <br class="Apple-interchange-newline">
        <blockquote type="cite">
          <div text="#000000" bgcolor="#FFFFFF">
            <div class="moz-cite-prefix">On 14/01/14 20:38, Igor
              Kozhukhov wrote:<br>
            </div>
            <blockquote
              cite="mid:A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com"
              type="cite"> Hello All,
              <div><br>
              </div>
              <div>I have problems with ctfconvert with build sources of
                xen-4.2 on illumos based platform:</div>
              <div><a moz-do-not-send="true"
                  href="https://www.illumos.org/issues/3205">https://www.illumos.org/issues/3205</a></div>
              <div><br>
              </div>
              <div>could you please let me know what the reason is to
                have a member in structure '<span
                  class="Apple-style-span" style="color: rgb(26, 26,
                  26); font-family: monospace; font-size: 13px;
                  white-space: pre; ">struct hvm_hw_cpu_xsave' </span>&nbsp;with
                zero size:</div>
              <div>
                <pre style="margin: 1em 1em 1em 1.6em; padding: 2px 2px 2px 0px; background-color: rgb(250, 250, 250); border: 1px solid rgb(218, 218, 218); width: auto; overflow-x: auto; overflow-y: hidden; color: rgb(26, 26, 26); font-size: 13px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;">struct { char x[0]; } ymm;    /* YMM */</pre>
                <div>--</div>
              </div>
              <div>
                <div apple-content-edited="true">
                  <div style="word-wrap: break-word; -webkit-nbsp-mode:
                    space; -webkit-line-break: after-white-space; ">Best
                    regards,<br>
                    Igor Kozhukhov<br>
                    <br>
                    <br>
                  </div>
                </div>
              </div>
            </blockquote>
            <br>
            The structure is variable-length, depending on whether the
            VM has enabled AVX support.<br>
            <br>
            It is rather unfortunate that we have non-compliant C used
            to specify the ABI.<br>
          </div>
        </blockquote>
      </div>
      <br>
      <div>can we use this here?</div>
      <div><span class="Apple-style-span" style="color: rgb(26, 26, 26);
          font-family: monospace; font-size: 13px; white-space: pre; ">struct
          { char x[1]; } ymm;</span></div>
      <div>
        <div><br>
        </div>
      </div>
      <div>that would fix my problem.</div>
      <div><br>
      </div>
      <div>-Igor</div>
      <div><br>
      </div>
    </blockquote>
    <br>
    No, it won't.<br>
    <br>
    The 'ymm' member is either not present, with a size of 0 as far as
    the save_area is concerned, or is present, with a size of 16 *
    256-bit registers.<br>
    <br>
    The content of this structure is only relevant to
    hvm_{save,load}_cpu_xsave_states(), which resorts to some pointer
    trickery to move the data.&nbsp; Changing that char x[0] to char
    x[1] will break the pointer trickery, and migration will suffer an
    ABI breakage.<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------050807050109070802020105--


--===============3114380352825263795==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3114380352825263795==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 20:58:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 20:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3B3w-0001zX-L8; Tue, 14 Jan 2014 20:58:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3B3v-0001zS-Dj
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 20:58:15 +0000
Received: from [85.158.143.35:65485] by server-1.bemta-4.messagelabs.com id
	27/30-02132-6E4A5D25; Tue, 14 Jan 2014 20:58:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389733092!11661240!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21032 invoked from network); 14 Jan 2014 20:58:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 20:58:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208,217";a="90745026"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 20:58:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 14 Jan 2014 15:58:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3B3r-000461-9w;
	Tue, 14 Jan 2014 20:58:11 +0000
Message-ID: <52D5A4E3.1080405@citrix.com>
Date: Tue, 14 Jan 2014 20:58:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>
References: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
	<52D5A230.3010005@citrix.com>
	<EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
In-Reply-To: <EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ctfconvert problems with build on illumos based
	platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3114380352825263795=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3114380352825263795==
Content-Type: multipart/alternative;
	boundary="------------050807050109070802020105"

--------------050807050109070802020105
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 14/01/14 20:52, Igor Kozhukhov wrote:
>
> On Jan 15, 2014, at 12:46 AM, Andrew Cooper wrote:
>
>> On 14/01/14 20:38, Igor Kozhukhov wrote:
>>> Hello All,
>>>
>>> I have problems with ctfconvert with build sources of xen-4.2 on
>>> illumos based platform:
>>> https://www.illumos.org/issues/3205
>>>
>>> could you please let me know what the reason is to have a member in
>>> structure 'struct hvm_hw_cpu_xsave' with zero size:
>>> struct { char x[0]; } ymm;    /* YMM */
>>> --
>>> Best regards,
>>> Igor Kozhukhov
>>>
>>>
>>
>> The structure is variable-length, depending on whether the VM has
>> enabled AVX support.
>>
>> It is rather unfortunate that we have non-compliant C used to specify
>> the ABI.
>
> can we use this here?
> struct { char x[1]; } ymm;
>
> that would fix my problem.
>
> -Igor
>

No, it won't.

The 'ymm' member is either not present, with a size of 0 as far as the
save_area is concerned, or is present, with a size of 16 * 256-bit
registers.

The content of this structure is only relevant to
hvm_{save,load}_cpu_xsave_states(), which resorts to some pointer
trickery to move the data.  Changing that char x[0] to char x[1] will
break the pointer trickery, and migration will suffer an ABI breakage.

~Andrew

--------------050807050109070802020105--


--===============3114380352825263795==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3114380352825263795==--


From xen-devel-bounces@lists.xen.org Tue Jan 14 21:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 21:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3B6r-0002Xs-7P; Tue, 14 Jan 2014 21:01:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3B6p-0002Xk-Pf
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 21:01:16 +0000
Received: from [85.158.143.35:18893] by server-1.bemta-4.messagelabs.com id
	29/12-02132-B95A5D25; Tue, 14 Jan 2014 21:01:15 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389733273!11747665!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1089 invoked from network); 14 Jan 2014 21:01:13 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 21:01:13 -0000
Received: by mail-we0-f176.google.com with SMTP id q58so961172wes.35
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 13:01:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=1TbpAhb4lxt76CWwhJDZBmQx877iqtHxRw5FiCJ8RR4=;
	b=Ya6Qt679oDiNKdfj4zevVlbIBX8YXvTIW9a9BiSGg5FhENXjuXDI/OLbkkQuUR4jhZ
	dUI/7gI1JW5V5ajmqGb+t9Q8Nw2a7TIhrdMo4k/Lx3sup8yum4MEX89c8DBIZ2D/d9rh
	zKbo7ItpOgQBf3mlyUXRIDK00Jl6mP1H6rxNf+WLlXs4SyrBFj9QYp7ZzYPYB6dLjjRN
	n4Yzb1kKbap9zqqaavJIV7BFC/OEK502ONFOSzWePE/QruELM6ppeQ6idd2hTPgwH9/L
	gi+2xZUU8PuC+WOWafToOOBQTtXcVO/WgX0bDUo4AqZxrIkY5736E978XHu+iNV/lTAw
	Hetg==
X-Gm-Message-State: ALoCoQmQTZMpJNo8LuputgfkkZP+Ruew5hb6xvsN88pTlwvPAgBmAopqEPCzDdng+fCCcXuxlAZb
X-Received: by 10.180.79.38 with SMTP id g6mr4885327wix.60.1389733273106;
	Tue, 14 Jan 2014 13:01:13 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id md9sm25006951wic.1.2014.01.14.13.01.11
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 14 Jan 2014 13:01:12 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: freebsd-xen@freebsd.org, xen-devel@lists.xen.org, gibbs@freebsd.org,
	freebsd-arm@FreeBSD.org
Date: Tue, 14 Jan 2014 21:01:07 +0000
Message-Id: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, roger.pau@citrix.com
Subject: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

During the last couple of months I have been working on porting FreeBSD to
Xen on ARM.

It's my first big project on FreeBSD and I'm not sure whether I have CC'd
the right mailing lists and people. If not, could somebody point me to the
right email addresses?

With this patch series I'm able to boot FreeBSD with a single VCPU on every
32-bit platform supported by Xen.

The patch series is divided into 5 parts:
	* #1-#12: Clean-ups and bug fixes for some options
	* #13-#17: Update the Xen interface to Xen 4.4 and fix compilation
	* #18-#32: Update the Xen code to support ARM guests
	* #33-#38: Update the ARM code to support running as a Xen guest
	* #39-#40: Add platform code to support ARM guests

I have pushed a branch to my git tree, based on version 10 of Royger's PVH
work:
	git clone git://xenbits.xen.org/people/julieng/freebsd.git -b xen-arm
	http://xenbits.xen.org/gitweb/?p=people/julieng/freebsd.git;a=shortlog;h=refs/heads/xen-arm

If needed, I can send the patches one by one to the different mailing lists.

This new support raises 2 open questions (for both the Xen and FreeBSD
communities). When a new guest is created, the toolstack will generate a
device tree which will contain:
	- The amount of memory
	- The description of the platform (gic, timer, hypervisor node)
	- A PSCI node for SMP bringup

Until now, Xen on ARM supported only Linux-based OSes. When device tree
support for guests was added to Xen, we chose to use the Linux device tree
bindings (gic, timer, ...). It seems that FreeBSD chose a different way to
implement device tree support:
	- it strictly respects ePAPR (for the interrupt-parent property)
	- its gic bindings are different (only 1 interrupt cell)

I would like to come up with a common device tree specification (bindings,
...) across every operating system. I know the Linux community is working on
moving the device tree bindings out of the kernel tree. Does the FreeBSD
community plan to work with the Linux community for this purpose?

As mentioned earlier, the device tree can vary across Xen versions and user
input (e.g. the amount of memory). I have noticed a few places (mainly the
timer) where the IRQ numbers are hardcoded.

In the long term, it would be nice to see FreeBSD booting out of the box
(e.g. with the device tree provided directly by the board firmware). I plan
to add support for device tree loading via the Linux boot ABI, which is the
way Xen boots a new guest.

The second question is related to the memory attributes used for the page
tables. The early page tables set up by FreeBSD use the Write-Through memory
attribute, which results in a seemingly random crash (not in the same place
every time) before the real page tables are initialized.
Replacing Write-Through with Write-Back made FreeBSD boot correctly. Even
today, I have no idea whether this is a requirement from Xen or a bug
(either in Xen or FreeBSD).

The code is taking its first faltering steps, so the TODO list is quite big:
	- Try the code on the x86 architecture. I did lots of rework on the
	event channel code and didn't even try to compile it there
	- Clean up the event channel code
	- Clean up the xen control code
	- Add support for device tree loading via the Linux boot ABI
	- Fix the crash in userspace. Could be related to the patch
	series "arm SMP on Cortex-A15"
	- Add guest SMP support
	- DOM0 support?

Any help, comments, questions are welcomed.

Sincerely yours,

============= Instructions to test FreeBSD on Xen on ARM ===========

The upstream Xen tree doesn't fully support ELF loading for ARM. You will
need to recompile the tools with the following 2 patches:
	- https://patches.linaro.org/22228/
	- https://patches.linaro.org/22227/

To compile and boot Xen on your board, you can refer to the wiki page:
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions

The instructions to compile FreeBSD for Xen on ARM:
$ truncate -s 512M xenvm.img
$ sudo mdconfig -f xenvm.img -u0
$ sudo newfs /dev/md0
$ sudo mount /dev/md0 /mnt

$ sudo make TARGET_ARCH=armv6 kernel-toolchain
$ sudo make TARGET_ARCH=armv6 KERNCONF=XENHVM buildkernel
$ sudo make TARGET_ARCH=armv6 buildworld
$ sudo make TARGET_ARCH=armv6 DESTDIR=/mnt installworld distribution

$ echo "/dev/xbd0	/	ufs	rw	1	1" > /mnt/etc/fstab
$ vi /mnt/etc/ttys (add the line 'xc0 "/usr/libexec/getty Pc" xterm on secure')

$ sudo umount /mnt
$ sudo mdconfig -d -u 0

Then you can copy the rootfs and the kernel to DOM 0 on your board.

To boot the FreeBSD guest you will need the following configuration file:
$ cat freebsd.xl
kernel="kernel"
memory=64
name="freebsd"
vcpus=1
autoballoon="off"
disk=[ 'phy:/dev/loop0,xvda,w' ]
$ losetup /dev/loop0 xenvm.img
$ xl create freebsd.xl
$ xl console freebsd

Julien Grall (40):
  xen/netfront: Don't need to include machine/intr_machdep.h
  xen/blkfront: Don't need to include machine/intr_machdep.h
  xen/control: Remove include machine/intr_machdep.h
  xen: Define xen_intr_handle_upcall in common headers
  xen: Remove duplicate features.h header in i386 arch
  xen/hypercall: Allow HYPERVISOR_console_write to take a const string
  xen/timer: Make xen timer optional
  xen/console: Fix build when DDB is enabled
  xen/console: clean up identify callback
  xen/xenstore: xs_probe should return BUS_PROBE_NOWILDCARD
  xen/control: xctlr_probe shoud return BUS_PROBE_NOWILDCARD
  xen/hypervisor: Be sure to set __XEN_INTERFACE_VERSION__
  xen/interface: Update interface to Xen 4.4 headers
  xen: Bump __XEN_INTERFACE_VERSION__ to 4.3
  xen/baloon: Fix compilation with Xen 4.4 headers
  xen/netfront: Fix compilation with Xen 4.4 headers
  xen/gnttab: Fix compilation with Xen 4.4 headers
  xen/ballon: Use correct type for frame list
  xen/netback: Fix printf format for xen_pfn_t
  xen/netfront: Use the correct type for rx_pfn_array
  xen/gnttab: Use the right type for the frames
  xen/gnttab: Export resume_frames
  xen/gnttab: Add a guard for xenpci functions
  xen/netfront: Define PTE flags per architecture
  xen/control: Implement suspend has panic for ARM
  xen: Introduce xen_pmap
  xen: move x86/xen/xenpv.c in dev/xen/xenpv.c
  xen/xenpv: Only add isa for x86 architecture
  xen/xenpv: Protect xenpci code by DEV_XENPCI
  xen: move xen_intr.c to common code
  xen/evtchn: Rework the event channels handler to make it generic
  xen/console: handle console for HVM
  arm/timer: Handle timer with no clock-frequency property
  arm/timer: WORKAROUND The timer should support DT...
  arm: Detect new revision for cortex A15
  arm: add ELFNOTE macro
  arm: Implement disable_intr
  arm: Use Write-Back as memory attribute for early page table
  dts: Add xenvm.dts
  arm: Add xenhvm platform

 sys/amd64/conf/GENERIC                  |    4 +-
 sys/amd64/include/apicvar.h             |    1 -
 sys/amd64/include/xen/hypercall.h       |    2 +-
 sys/amd64/include/xen/xen-os.h          |    2 +
 sys/amd64/include/xen/xenpmap.h         |    2 +-
 sys/arm/arm/cpufunc.c                   |    2 +
 sys/arm/arm/generic_timer.c             |   12 +-
 sys/arm/arm/locore.S                    |    4 +-
 sys/arm/conf/XENHVM                     |   97 +++
 sys/arm/include/armreg.h                |    2 +
 sys/arm/include/asmacros.h              |   26 +
 sys/arm/include/cpufunc.h               |    6 +
 sys/arm/include/xen/hypercall.h         |   50 ++
 sys/arm/include/xen/synch_bitops.h      |   52 ++
 sys/arm/include/xen/xen-os.h            |   29 +
 sys/arm/include/xen/xenfunc.h           |    4 +
 sys/arm/include/xen/xenvar.h            |   14 +
 sys/arm/xenhvm/elf.S                    |   12 +
 sys/arm/xenhvm/files.xenhvm             |   18 +
 sys/arm/xenhvm/hypercall.S              |  107 +++
 sys/arm/xenhvm/xen-dt.c                 |  148 ++++
 sys/arm/xenhvm/xenhvm_machdep.c         |  132 ++++
 sys/boot/fdt/dts/xenvm.dts              |   77 ++
 sys/conf/files                          |    4 +-
 sys/conf/options                        |    3 +
 sys/conf/options.arm                    |    1 +
 sys/dev/xen/balloon/balloon.c           |    6 +-
 sys/dev/xen/blkfront/blkfront.c         |    1 -
 sys/dev/xen/console/console.c           |   28 +-
 sys/dev/xen/console/xencons_ring.c      |   41 +-
 sys/dev/xen/control/control.c           |   19 +-
 sys/dev/xen/netback/netback.c           |    6 +-
 sys/dev/xen/netfront/netfront.c         |   15 +-
 sys/dev/xen/xenpci/xenpci.c             |    2 +-
 sys/dev/xen/xenpv.c                     |  133 ++++
 sys/i386/conf/GENERIC                   |    4 +-
 sys/i386/conf/XEN                       |    1 +
 sys/i386/include/apicvar.h              |    1 -
 sys/i386/include/xen/features.h         |   22 -
 sys/i386/include/xen/hypercall.h        |    2 +-
 sys/i386/include/xen/xen-os.h           |    2 +
 sys/i386/include/xen/xenvar.h           |    2 +-
 sys/i386/xen/xen_machdep.c              |    2 +-
 sys/x86/xen/xen_intr.c                  | 1241 -------------------------------
 sys/x86/xen/xenpv.c                     |  128 ----
 sys/xen/gnttab.c                        |   23 +-
 sys/xen/hypervisor.h                    |    3 +-
 sys/xen/interface/arch-arm.h            |  259 +++++--
 sys/xen/interface/arch-arm/hvm/save.h   |    2 +-
 sys/xen/interface/arch-x86/hvm/save.h   |   25 +-
 sys/xen/interface/arch-x86/xen-mca.h    |    2 +-
 sys/xen/interface/arch-x86/xen-x86_32.h |    2 +-
 sys/xen/interface/arch-x86/xen-x86_64.h |    2 +-
 sys/xen/interface/arch-x86/xen.h        |   31 +-
 sys/xen/interface/callback.h            |    4 +-
 sys/xen/interface/dom0_ops.h            |    2 +-
 sys/xen/interface/domctl.h              |   65 +-
 sys/xen/interface/elfnote.h             |   10 +-
 sys/xen/interface/event_channel.h       |    5 +-
 sys/xen/interface/features.h            |   16 +-
 sys/xen/interface/gcov.h                |  115 +++
 sys/xen/interface/grant_table.h         |    8 +-
 sys/xen/interface/hvm/hvm_xs_strings.h  |   80 ++
 sys/xen/interface/hvm/ioreq.h           |   20 +-
 sys/xen/interface/hvm/params.h          |   14 +-
 sys/xen/interface/hvm/pvdrivers.h       |   47 ++
 sys/xen/interface/hvm/save.h            |    4 +-
 sys/xen/interface/io/blkif.h            |  104 ++-
 sys/xen/interface/io/console.h          |    2 +-
 sys/xen/interface/io/fbif.h             |    2 +-
 sys/xen/interface/io/kbdif.h            |    2 +-
 sys/xen/interface/io/netif.h            |   47 +-
 sys/xen/interface/io/pciif.h            |    3 +-
 sys/xen/interface/io/protocols.h        |    5 +-
 sys/xen/interface/io/tpmif.h            |   68 +-
 sys/xen/interface/io/usbif.h            |    1 -
 sys/xen/interface/io/vscsiif.h          |   42 +-
 sys/xen/interface/io/xenbus.h           |   20 +-
 sys/xen/interface/io/xs_wire.h          |    7 +-
 sys/xen/interface/kexec.h               |    7 +-
 sys/xen/interface/mem_event.h           |    4 +-
 sys/xen/interface/memory.h              |   80 +-
 sys/xen/interface/nmi.h                 |    7 +-
 sys/xen/interface/physdev.h             |   33 +-
 sys/xen/interface/platform.h            |   27 +-
 sys/xen/interface/sched.h               |   87 ++-
 sys/xen/interface/sysctl.h              |   47 +-
 sys/xen/interface/tmem.h                |    2 +-
 sys/xen/interface/trace.h               |   93 ++-
 sys/xen/interface/vcpu.h                |    2 +-
 sys/xen/interface/version.h             |    6 +-
 sys/xen/interface/xen-compat.h          |    2 +-
 sys/xen/interface/xen.h                 |  128 +++-
 sys/xen/interface/xenoprof.h            |    2 +-
 sys/xen/interface/xsm/flask_op.h        |    8 +
 sys/xen/xen-os.h                        |    2 +-
 sys/xen/xen_intr.c                      | 1072 ++++++++++++++++++++++++++
 sys/xen/xen_intr.h                      |    8 +-
 sys/xen/xenstore/xenstore.c             |    4 +-
 99 files changed, 3339 insertions(+), 1791 deletions(-)
 create mode 100644 sys/arm/conf/XENHVM
 create mode 100644 sys/arm/include/xen/hypercall.h
 create mode 100644 sys/arm/include/xen/synch_bitops.h
 create mode 100644 sys/arm/include/xen/xen-os.h
 create mode 100644 sys/arm/include/xen/xenfunc.h
 create mode 100644 sys/arm/include/xen/xenvar.h
 create mode 100644 sys/arm/xenhvm/elf.S
 create mode 100644 sys/arm/xenhvm/files.xenhvm
 create mode 100644 sys/arm/xenhvm/hypercall.S
 create mode 100644 sys/arm/xenhvm/xen-dt.c
 create mode 100644 sys/arm/xenhvm/xenhvm_machdep.c
 create mode 100644 sys/boot/fdt/dts/xenvm.dts
 create mode 100644 sys/dev/xen/xenpv.c
 delete mode 100644 sys/i386/include/xen/features.h
 delete mode 100644 sys/x86/xen/xen_intr.c
 delete mode 100644 sys/x86/xen/xenpv.c
 create mode 100644 sys/xen/interface/gcov.h
 create mode 100644 sys/xen/interface/hvm/hvm_xs_strings.h
 create mode 100644 sys/xen/interface/hvm/pvdrivers.h
 create mode 100644 sys/xen/xen_intr.c

-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 21:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 21:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3B6r-0002Xs-7P; Tue, 14 Jan 2014 21:01:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3B6p-0002Xk-Pf
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 21:01:16 +0000
Received: from [85.158.143.35:18893] by server-1.bemta-4.messagelabs.com id
	29/12-02132-B95A5D25; Tue, 14 Jan 2014 21:01:15 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389733273!11747665!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1089 invoked from network); 14 Jan 2014 21:01:13 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 21:01:13 -0000
Received: by mail-we0-f176.google.com with SMTP id q58so961172wes.35
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 13:01:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=1TbpAhb4lxt76CWwhJDZBmQx877iqtHxRw5FiCJ8RR4=;
	b=Ya6Qt679oDiNKdfj4zevVlbIBX8YXvTIW9a9BiSGg5FhENXjuXDI/OLbkkQuUR4jhZ
	dUI/7gI1JW5V5ajmqGb+t9Q8Nw2a7TIhrdMo4k/Lx3sup8yum4MEX89c8DBIZ2D/d9rh
	zKbo7ItpOgQBf3mlyUXRIDK00Jl6mP1H6rxNf+WLlXs4SyrBFj9QYp7ZzYPYB6dLjjRN
	n4Yzb1kKbap9zqqaavJIV7BFC/OEK502ONFOSzWePE/QruELM6ppeQ6idd2hTPgwH9/L
	gi+2xZUU8PuC+WOWafToOOBQTtXcVO/WgX0bDUo4AqZxrIkY5736E978XHu+iNV/lTAw
	Hetg==
X-Gm-Message-State: ALoCoQmQTZMpJNo8LuputgfkkZP+Ruew5hb6xvsN88pTlwvPAgBmAopqEPCzDdng+fCCcXuxlAZb
X-Received: by 10.180.79.38 with SMTP id g6mr4885327wix.60.1389733273106;
	Tue, 14 Jan 2014 13:01:13 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id md9sm25006951wic.1.2014.01.14.13.01.11
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 14 Jan 2014 13:01:12 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: freebsd-xen@freebsd.org, xen-devel@lists.xen.org, gibbs@freebsd.org,
	freebsd-arm@FreeBSD.org
Date: Tue, 14 Jan 2014 21:01:07 +0000
Message-Id: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, roger.pau@citrix.com
Subject: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

During the last couple of months I have been working on porting FreeBSD to
Xen on ARM.

It's my first big project on FreeBSD and I'm not sure if I have CC'd the right
mailing lists and people. If not, could somebody point me to the right email address?

With this patch series I'm able to boot FreeBSD with a single VCPU on every
32-bit platform supported by Xen.

The patch series is divided into 5 parts:
	* #1-#12: Clean-ups and bug fixes, with some new options
	* #13-#17: Update the Xen interface to Xen 4.4 and fix compilation
	* #18-#32: Update Xen code to support ARM guest
	* #33-#38: Update ARM code to support Xen guest
	* #39-#40: Add platform code to support ARM guest

I have pushed a branch to my git repository, based on version 10 of Royger's PVH work:
	git clone git://xenbits.xen.org/people/julieng/freebsd.git -b xen-arm
	http://xenbits.xen.org/gitweb/?p=people/julieng/freebsd.git;a=shortlog;h=refs/heads/xen-arm

If needed, I can send the patches one by one to the different mailing lists.

This new support raises 2 open questions (for both the Xen and FreeBSD
communities). When a new guest is created, the toolstack will generate a
device tree which will contain:
	- The amount of memory
	- The description of the platform (gic, timer, hypervisor node)
	- PSCI node for SMP bringup
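
For illustration, a minimal sketch of what such a generated guest device tree
might look like (the node names, addresses and property values here are
assumptions for the example, not the actual toolstack output):

```dts
/dts-v1/;

/ {
	#address-cells = <1>;
	#size-cells = <1>;

	memory@80000000 {
		device_type = "memory";
		reg = <0x80000000 0x04000000>;	/* the amount of guest RAM */
	};

	psci {
		compatible = "arm,psci";	/* PSCI node for SMP bringup */
		method = "hvc";
	};

	hypervisor {
		compatible = "xen,xen";		/* hypervisor node */
	};
};
```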

Until now, Xen on ARM only supported Linux-based OSes. When device tree
support for guests was added to Xen, we chose to use the Linux device tree
bindings (gic, timer, ...). It seems that FreeBSD chose a different way to
implement device tree support:
	- strictly respect ePAPR (for the interrupt-parent property)
	- different gic bindings (only 1 interrupt cell)

I would like to come up with a common device tree specification (bindings, ...)
across every operating system. I know the Linux community is working on moving
the device tree bindings out of the kernel tree. Does the FreeBSD community
plan to work with the Linux community for this purpose?

As mentioned earlier, the device tree can vary across Xen versions and user
input (e.g. the amount of memory). I have noticed a few places (mainly the
timer) where the IRQ numbers are hardcoded.

In the long term, it would be nice to see FreeBSD booting out of the box (i.e.
with the device tree directly provided by the board firmware). I plan to add
support for device tree loading via the Linux boot ABI, which is the way Xen
boots a new guest.

The second question is related to the memory attributes used for the page
tables. The early page tables set up by FreeBSD use the Write-Through memory
attribute, which results in a seemingly random (not always in the same place)
crash before the real page tables are initialized.
Replacing Write-Through with Write-Back made FreeBSD boot correctly. Even
today, I have no idea whether this is a requirement of Xen or a bug (either in
Xen or FreeBSD).

The code is taking its first faltering steps, therefore the TODO list is quite big:
	- Try the code on the x86 architecture. I did lots of rework on the event
	channel code and didn't even try to compile it
	- Clean up the event channel code
	- Clean up the Xen control code
	- Add support for device tree loading via the Linux boot ABI
	- Fix crashes in userspace. Could be related to the patch
	series "arm SMP on Cortex-A15"
	- Add guest SMP support
	- DOM0 support?

Any help, comments or questions are welcome.

Sincerely yours,

============= Instructions to test FreeBSD on Xen on ARM ===========

The Xen upstream tree doesn't fully support ELF loading for ARM. You will need
to recompile the tools with the following 2 patches:
	- https://patches.linaro.org/22228/
	- https://patches.linaro.org/22227/

To compile and boot Xen on your board, you can refer to the wiki page:
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions

The instructions to compile FreeBSD for Xen on ARM:
$ truncate -s 512M xenvm.img
$ sudo mdconfig -f xenvm.img -u0
$ sudo newfs /dev/md0
$ sudo mount /dev/md0 /mnt

$ sudo make TARGET_ARCH=armv6 kernel-toolchain
$ sudo make TARGET_ARCH=armv6 KERNCONF=XENHVM buildkernel
$ sudo make TARGET_ARCH=armv6 buildworld
$ sudo make TARGET_ARCH=armv6 DESTDIR=/mnt installworld distribution

$ echo "/dev/xbd0	/	ufs	rw	1	1" > /mnt/etc/fstab
$ vi /mnt/etc/ttys (add the line 'xc0 "/usr/libexec/getty Pc" xterm on secure')

$ sudo umount /mnt
$ sudo mdconfig -d -u0

Then you can copy the rootfs and the kernel to DOM 0 on your board.

To boot FreeBSD you will need the following configuration file:
$ cat freebsd.xl
kernel="kernel"
memory=64
name="freebsd"
vcpus=1
autoballoon="off"
disk=[ 'phy:/dev/loop0,xvda,w' ]
$ losetup /dev/loop0 xenvm.img
$ xl create freebsd.xl
$ xl console freebsd

Julien Grall (40):
  xen/netfront: Don't need to include machine/intr_machdep.h
  xen/blkfront: Don't need to include machine/intr_machdep.h
  xen/control: Remove include machine/intr_machdep.h
  xen: Define xen_intr_handle_upcall in common headers
  xen: Remove duplicate features.h header in i386 arch
  xen/hypercall: Allow HYPERVISOR_console_write to take a const string
  xen/timer: Make xen timer optional
  xen/console: Fix build when DDB is enabled
  xen/console: clean up identify callback
  xen/xenstore: xs_probe should return BUS_PROBE_NOWILDCARD
  xen/control: xctlr_probe shoud return BUS_PROBE_NOWILDCARD
  xen/hypervisor: Be sure to set __XEN_INTERFACE_VERSION__
  xen/interface: Update interface to Xen 4.4 headers
  xen: Bump __XEN_INTERFACE_VERSION__ to 4.3
  xen/baloon: Fix compilation with Xen 4.4 headers
  xen/netfront: Fix compilation with Xen 4.4 headers
  xen/gnttab: Fix compilation with Xen 4.4 headers
  xen/ballon: Use correct type for frame list
  xen/netback: Fix printf format for xen_pfn_t
  xen/netfront: Use the correct type for rx_pfn_array
  xen/gnttab: Use the right type for the frames
  xen/gnttab: Export resume_frames
  xen/gnttab: Add a guard for xenpci functions
  xen/netfront: Define PTE flags per architecture
  xen/control: Implement suspend has panic for ARM
  xen: Introduce xen_pmap
  xen: move x86/xen/xenpv.c in dev/xen/xenpv.c
  xen/xenpv: Only add isa for x86 architecture
  xen/xenpv: Protect xenpci code by DEV_XENPCI
  xen: move xen_intr.c to common code
  xen/evtchn: Rework the event channels handler to make it generic
  xen/console: handle console for HVM
  arm/timer: Handle timer with no clock-frequency property
  arm/timer: WORKAROUND The timer should support DT...
  arm: Detect new revision for cortex A15
  arm: add ELFNOTE macro
  arm: Implement disable_intr
  arm: Use Write-Back as memory attribute for early page table
  dts: Add xenvm.dts
  arm: Add xenhvm platform

 sys/amd64/conf/GENERIC                  |    4 +-
 sys/amd64/include/apicvar.h             |    1 -
 sys/amd64/include/xen/hypercall.h       |    2 +-
 sys/amd64/include/xen/xen-os.h          |    2 +
 sys/amd64/include/xen/xenpmap.h         |    2 +-
 sys/arm/arm/cpufunc.c                   |    2 +
 sys/arm/arm/generic_timer.c             |   12 +-
 sys/arm/arm/locore.S                    |    4 +-
 sys/arm/conf/XENHVM                     |   97 +++
 sys/arm/include/armreg.h                |    2 +
 sys/arm/include/asmacros.h              |   26 +
 sys/arm/include/cpufunc.h               |    6 +
 sys/arm/include/xen/hypercall.h         |   50 ++
 sys/arm/include/xen/synch_bitops.h      |   52 ++
 sys/arm/include/xen/xen-os.h            |   29 +
 sys/arm/include/xen/xenfunc.h           |    4 +
 sys/arm/include/xen/xenvar.h            |   14 +
 sys/arm/xenhvm/elf.S                    |   12 +
 sys/arm/xenhvm/files.xenhvm             |   18 +
 sys/arm/xenhvm/hypercall.S              |  107 +++
 sys/arm/xenhvm/xen-dt.c                 |  148 ++++
 sys/arm/xenhvm/xenhvm_machdep.c         |  132 ++++
 sys/boot/fdt/dts/xenvm.dts              |   77 ++
 sys/conf/files                          |    4 +-
 sys/conf/options                        |    3 +
 sys/conf/options.arm                    |    1 +
 sys/dev/xen/balloon/balloon.c           |    6 +-
 sys/dev/xen/blkfront/blkfront.c         |    1 -
 sys/dev/xen/console/console.c           |   28 +-
 sys/dev/xen/console/xencons_ring.c      |   41 +-
 sys/dev/xen/control/control.c           |   19 +-
 sys/dev/xen/netback/netback.c           |    6 +-
 sys/dev/xen/netfront/netfront.c         |   15 +-
 sys/dev/xen/xenpci/xenpci.c             |    2 +-
 sys/dev/xen/xenpv.c                     |  133 ++++
 sys/i386/conf/GENERIC                   |    4 +-
 sys/i386/conf/XEN                       |    1 +
 sys/i386/include/apicvar.h              |    1 -
 sys/i386/include/xen/features.h         |   22 -
 sys/i386/include/xen/hypercall.h        |    2 +-
 sys/i386/include/xen/xen-os.h           |    2 +
 sys/i386/include/xen/xenvar.h           |    2 +-
 sys/i386/xen/xen_machdep.c              |    2 +-
 sys/x86/xen/xen_intr.c                  | 1241 -------------------------------
 sys/x86/xen/xenpv.c                     |  128 ----
 sys/xen/gnttab.c                        |   23 +-
 sys/xen/hypervisor.h                    |    3 +-
 sys/xen/interface/arch-arm.h            |  259 +++++--
 sys/xen/interface/arch-arm/hvm/save.h   |    2 +-
 sys/xen/interface/arch-x86/hvm/save.h   |   25 +-
 sys/xen/interface/arch-x86/xen-mca.h    |    2 +-
 sys/xen/interface/arch-x86/xen-x86_32.h |    2 +-
 sys/xen/interface/arch-x86/xen-x86_64.h |    2 +-
 sys/xen/interface/arch-x86/xen.h        |   31 +-
 sys/xen/interface/callback.h            |    4 +-
 sys/xen/interface/dom0_ops.h            |    2 +-
 sys/xen/interface/domctl.h              |   65 +-
 sys/xen/interface/elfnote.h             |   10 +-
 sys/xen/interface/event_channel.h       |    5 +-
 sys/xen/interface/features.h            |   16 +-
 sys/xen/interface/gcov.h                |  115 +++
 sys/xen/interface/grant_table.h         |    8 +-
 sys/xen/interface/hvm/hvm_xs_strings.h  |   80 ++
 sys/xen/interface/hvm/ioreq.h           |   20 +-
 sys/xen/interface/hvm/params.h          |   14 +-
 sys/xen/interface/hvm/pvdrivers.h       |   47 ++
 sys/xen/interface/hvm/save.h            |    4 +-
 sys/xen/interface/io/blkif.h            |  104 ++-
 sys/xen/interface/io/console.h          |    2 +-
 sys/xen/interface/io/fbif.h             |    2 +-
 sys/xen/interface/io/kbdif.h            |    2 +-
 sys/xen/interface/io/netif.h            |   47 +-
 sys/xen/interface/io/pciif.h            |    3 +-
 sys/xen/interface/io/protocols.h        |    5 +-
 sys/xen/interface/io/tpmif.h            |   68 +-
 sys/xen/interface/io/usbif.h            |    1 -
 sys/xen/interface/io/vscsiif.h          |   42 +-
 sys/xen/interface/io/xenbus.h           |   20 +-
 sys/xen/interface/io/xs_wire.h          |    7 +-
 sys/xen/interface/kexec.h               |    7 +-
 sys/xen/interface/mem_event.h           |    4 +-
 sys/xen/interface/memory.h              |   80 +-
 sys/xen/interface/nmi.h                 |    7 +-
 sys/xen/interface/physdev.h             |   33 +-
 sys/xen/interface/platform.h            |   27 +-
 sys/xen/interface/sched.h               |   87 ++-
 sys/xen/interface/sysctl.h              |   47 +-
 sys/xen/interface/tmem.h                |    2 +-
 sys/xen/interface/trace.h               |   93 ++-
 sys/xen/interface/vcpu.h                |    2 +-
 sys/xen/interface/version.h             |    6 +-
 sys/xen/interface/xen-compat.h          |    2 +-
 sys/xen/interface/xen.h                 |  128 +++-
 sys/xen/interface/xenoprof.h            |    2 +-
 sys/xen/interface/xsm/flask_op.h        |    8 +
 sys/xen/xen-os.h                        |    2 +-
 sys/xen/xen_intr.c                      | 1072 ++++++++++++++++++++++++++
 sys/xen/xen_intr.h                      |    8 +-
 sys/xen/xenstore/xenstore.c             |    4 +-
 99 files changed, 3339 insertions(+), 1791 deletions(-)
 create mode 100644 sys/arm/conf/XENHVM
 create mode 100644 sys/arm/include/xen/hypercall.h
 create mode 100644 sys/arm/include/xen/synch_bitops.h
 create mode 100644 sys/arm/include/xen/xen-os.h
 create mode 100644 sys/arm/include/xen/xenfunc.h
 create mode 100644 sys/arm/include/xen/xenvar.h
 create mode 100644 sys/arm/xenhvm/elf.S
 create mode 100644 sys/arm/xenhvm/files.xenhvm
 create mode 100644 sys/arm/xenhvm/hypercall.S
 create mode 100644 sys/arm/xenhvm/xen-dt.c
 create mode 100644 sys/arm/xenhvm/xenhvm_machdep.c
 create mode 100644 sys/boot/fdt/dts/xenvm.dts
 create mode 100644 sys/dev/xen/xenpv.c
 delete mode 100644 sys/i386/include/xen/features.h
 delete mode 100644 sys/x86/xen/xen_intr.c
 delete mode 100644 sys/x86/xen/xenpv.c
 create mode 100644 sys/xen/interface/gcov.h
 create mode 100644 sys/xen/interface/hvm/hvm_xs_strings.h
 create mode 100644 sys/xen/interface/hvm/pvdrivers.h
 create mode 100644 sys/xen/xen_intr.c

-- 
1.8.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 21:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 21:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Bzw-00055B-6i; Tue, 14 Jan 2014 21:58:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3Bzt-000556-W4
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 21:58:10 +0000
Received: from [85.158.137.68:28287] by server-17.bemta-3.messagelabs.com id
	B4/67-15965-1F2B5D25; Tue, 14 Jan 2014 21:58:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389736688!7981753!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16171 invoked from network); 14 Jan 2014 21:58:08 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Jan 2014 21:58:08 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389736688; l=2356;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=K9nOdcF5UtUgzwozKsl6TXkeNOw=;
	b=vKTtqKuYlGjYBK98Jsqa4Ybb3MRNWLaJiOiPSNXd5AoKSTQnBGRHZVqSEoe2GCZ2aDH
	oIWZGhde9Wiphr36ixrCBBQKtI8Y1K4U787W1igI6DrshTq+zyi9uHMKkwQGH4Z6Ba9xd
	TdcbtkZP120mDyMOJij0VYX1AXiZKyUqN3Y=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVYxSFJSlSFHc//WORzCfkz/AXknJsPmneTL2bA==
X-RZG-CLASS-ID: mo00
Received: from probook.site ([2001:a60:10a2:bf01:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 503486q0ELw79O7 ; 
	Tue, 14 Jan 2014 22:58:07 +0100 (CET)
Received: by probook.site (Postfix, from userid 1000)
	id 20ABA5025A; Tue, 14 Jan 2014 22:58:06 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: Keir Fraser <keir@xen.org>,
	xen-devel@lists.xen.org
Date: Tue, 14 Jan 2014 22:57:59 +0100
Message-Id: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
	discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Also fix the name of the discard-alignment property, add the missing 'n'.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 xen/include/public/io/blkif.h | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 84eb7fd..515ea90 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,7 +175,7 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
- * discard-aligment
+ * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0
  *      Notes:          4, 5
@@ -194,6 +194,7 @@
  * discard-secure
  *      Values:         0/1 (boolean)
  *      Default Value:  0
+ *      Notes:          10
  *
  *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
  *      requests with the BLKIF_DISCARD_SECURE flag set.
@@ -323,9 +324,14 @@
  *     For full interoperability, block front and backends should publish
  *     identical ring parameters, adjusted for unit differences, to the
  *     XenStore nodes used in both schemes.
- * (4) Devices that support discard functionality may internally allocate
- *     space (discardable extents) in units that are larger than the
- *     exported logical block size.
+ * (4) Devices that support discard functionality may internally allocate space
+ *     (discardable extents) in units that are larger than the exported logical
+ *     block size. If the backing device has such discardable extents the
+ *     backend must provide both discard-granularity and discard-alignment.
+ *     Backends supporting discard should include discard-granularity and
+ *     discard-alignment even if it supports discarding individual sectors.
+ *     Frontends should assume discard-alignment == 0 and discard-granularity ==
+ *     sector size if these keys are missing.
  * (5) The discard-alignment parameter allows a physical device to be
  *     partitioned into virtual devices that do not necessarily begin or
  *     end on a discardable extent boundary.
@@ -344,6 +350,8 @@
  *     grants that can be persistently mapped in the frontend driver, but
  *     due to the frontent driver implementation it should never be bigger
  *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
+ *(10) The discard-secure property may be present and will be set to 1 if the
+ *     backing device supports secure discard.
  */
 
 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 22:25:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 22:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3CPo-0006Rh-RV; Tue, 14 Jan 2014 22:24:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W3CPn-0006Rc-Qj
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 22:24:55 +0000
Received: from [85.158.143.35:13133] by server-1.bemta-4.messagelabs.com id
	F2/2B-02132-739B5D25; Tue, 14 Jan 2014 22:24:55 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389738293!11697406!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14302 invoked from network); 14 Jan 2014 22:24:54 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-9.tower-21.messagelabs.com with SMTP;
	14 Jan 2014 22:24:54 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id AEFBD584271;
	Tue, 14 Jan 2014 14:24:52 -0800 (PST)
Date: Tue, 14 Jan 2014 14:24:50 -0800 (PST)
Message-Id: <20140114.142450.1533356315215529641.davem@davemloft.net>
To: paul.durrant@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
References: <1389261768-30606-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: netdev@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2 0/3] make skb_checksum_setup
 generally available
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Paul Durrant <paul.durrant@citrix.com>
Date: Thu, 9 Jan 2014 10:02:45 +0000

> Both xen-netfront and xen-netback need to be able to set up the partial
> checksum offset of an skb and may also need to recalculate the pseudo-
> header checksum in the process. This functionality is currently private
> and duplicated between the two drivers.
> 
> Patch #1 of this series moves the implementation into the core network code
> as there is nothing xen-specific about it and it is potentially useful to
> any network driver.
> Patch #2 removes the private implementation from netback.
> Patch #3 removes the private implementation from netfront.
> 
> v2:
> - Put skb_checksum_setup in skbuff.c rather than dev.c
> - remove inline

Series applied, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 22:49:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 22:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3CnX-0007kX-23; Tue, 14 Jan 2014 22:49:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3CnV-0007kS-HO
	for xen-devel@lists.xensource.com; Tue, 14 Jan 2014 22:49:25 +0000
Received: from [193.109.254.147:54378] by server-11.bemta-14.messagelabs.com
	id 92/31-20576-4FEB5D25; Tue, 14 Jan 2014 22:49:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389739762!10869626!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23646 invoked from network); 14 Jan 2014 22:49:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Jan 2014 22:49:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="90784276"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Jan 2014 22:49:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 14 Jan 2014 17:49:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3CnQ-0001XI-K6;
	Tue, 14 Jan 2014 22:49:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3CnQ-0005rG-Ei;
	Tue, 14 Jan 2014 22:49:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24371-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 14 Jan 2014 22:49:20 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24371: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24371 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24371/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24369
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore        fail REGR. vs. 24369

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)       broken like 24366
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install        fail like 24354

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 14 16:19:08 2014 +0100

    update Xen version to 4.4-rc2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 22:50:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 22:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3CoC-0007n8-FF; Tue, 14 Jan 2014 22:50:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W3CoB-0007mx-KA
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 22:50:07 +0000
Received: from [85.158.139.211:64711] by server-8.bemta-5.messagelabs.com id
	60/58-29838-E1FB5D25; Tue, 14 Jan 2014 22:50:06 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389739804!9781575!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26436 invoked from network); 14 Jan 2014 22:50:06 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Jan 2014 22:50:06 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 14 Jan 2014 22:49:54 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="631509973"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.39])
	by fldsmtpi03.verizon.com with ESMTP; 14 Jan 2014 22:49:52 +0000
Message-ID: <52D5BF10.2050702@terremark.com>
Date: Tue, 14 Jan 2014 17:49:52 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
In-Reply-To: <1389716996.12434.105.camel@kazak.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/14 11:29, Ian Campbell wrote:
> We've just tagged 4.4.0-rc2, please test and report bugs.
>
> The tarball can be downloaded here:
>
> http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

This tarball does not build on CentOS 5.10:


gcc -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 
-Wall -Wstrict-prototypes -Wdeclaration-after-statement 
-I/home/don/xen-4.4.0-rc2/xen/include 
-I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-generic 
-I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-default -msoft-float 
-fno-stack-protector -fno-exceptions -Wnested-externs -DHAVE_GAS_VMX 
-mno-red-zone -mno-sse -fpic -fno-asynchronous-unwind-tables 
-DGCC_HAS_VISIBILITY_ATTRIBUTE -fno-builtin -fno-common -Werror 
-Wredundant-decls -Wno-pointer-arith -pipe -g -D__XEN__ -include 
/home/don/xen-4.4.0-rc2/xen/include/xen/config.h -nostdinc -iwithprefix 
include -fno-optimize-sibling-calls -DVERBOSE -DHAS_ACPI -DHAS_GDBSX 
-DHAS_PASSTHROUGH -DHAS_PCI -DHAS_IOPORTS -fno-omit-frame-pointer 
-DCONFIG_FRAME_POINTER -MMD -MF .memory.o.d -c memory.c -o memory.o
cc1: warnings being treated as errors
memory.c: In function 'compat_memory_op':
memory.c:213: warning: comparison is always true due to limited range of 
data type
memory.c:214: warning: comparison is always true due to limited range of 
data type
memory.c:215: warning: comparison is always true due to limited range of 
data type
make[5]: *** [memory.o] Error 1
make[5]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common/compat'
make[4]: *** [compat/built_in.o] Error 2
make[4]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common'
make[3]: *** [/home/don/xen-4.4.0-rc2/xen/common/built_in.o] Error 2
make[3]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/arch/x86'
make[2]: *** [/home/don/xen-4.4.0-rc2/xen/xen] Error 2
make[2]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
make: *** [install-xen] Error 2
dcs-xen-53:~/xen-4.4.0-rc2>uname -a
Linux dcs-xen-53 2.6.18-371.el5xen #1 SMP Tue Oct 1 09:15:30 EDT 2013 
x86_64 x86_64 x86_64 GNU/Linux
dcs-xen-53:~/xen-4.4.0-rc2>cat /etc/redhat-release
CentOS release 5.10 (Final)
dcs-xen-53:~/xen-4.4.0-rc2>gcc -v
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --enable-shared --enable-threads=posix 
--enable-checking=release --with-system-zlib --enable-__cxa_atexit 
--disable-libunwind-exceptions --enable-libgcj-multifile 
--enable-languages=c,c++,objc,obj-c++,java,fortran,ada 
--enable-java-awt=gtk --disable-dssi --disable-plugin 
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre 
--with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)


git blame points to:


dcs-xen-53:~/xen>git show 244ce6f4
commit 244ce6f42d1843c02be36ed808452df570378cb1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 8 09:06:07 2014 +0100

     compat wrapper for XENMEM_add_to_physmap_batch

     Signed-off-by: Jan Beulich <jbeulich@suse.com>
     Acked-by: Keir Fraser <keir@xen.org>
...


     -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 14 23:29:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jan 2014 23:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3DPM-0000qD-6W; Tue, 14 Jan 2014 23:28:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W3DPK-0000q8-8m
	for xen-devel@lists.xen.org; Tue, 14 Jan 2014 23:28:30 +0000
Received: from [193.109.254.147:33534] by server-5.bemta-14.messagelabs.com id
	DB/42-03510-D18C5D25; Tue, 14 Jan 2014 23:28:29 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389742108!7397378!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30029 invoked from network); 14 Jan 2014 23:28:28 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-4.tower-27.messagelabs.com with SMTP;
	14 Jan 2014 23:28:28 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 9D45458434D;
	Tue, 14 Jan 2014 15:28:26 -0800 (PST)
Date: Tue, 14 Jan 2014 15:28:25 -0800 (PST)
Message-Id: <20140114.152825.2110618560908841742.davem@davemloft.net>
To: Annie.li@oracle.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <Annie.li@oracle.com>
Date: Fri, 10 Jan 2014 06:48:38 +0800

> The current netfront only grants pages for grant copy, not for grant transfer, so
> remove the corresponding transfer code and add code to release the copy-granted
> receive buffers in xennet_release_rx_bufs.
> 
> Signed-off-by: Annie Li <Annie.li@oracle.com>

If a Xen networking driver maintainer would review this, I'd appreciate it.

Thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 00:10:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 00:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3E3H-0003Ow-Sg; Wed, 15 Jan 2014 00:09:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3E3G-0003Or-K8
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 00:09:46 +0000
Received: from [85.158.139.211:55066] by server-14.bemta-5.messagelabs.com id
	26/90-24200-9C1D5D25; Wed, 15 Jan 2014 00:09:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389744583!9586559!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8944 invoked from network); 15 Jan 2014 00:09:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 00:09:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,659,1384300800"; d="scan'208";a="92927079"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 00:09:42 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 14 Jan 2014 19:09:42 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 15 Jan 2014
	01:09:40 +0100
Message-ID: <52D5D1C3.3020601@citrix.com>
Date: Wed, 15 Jan 2014 00:09:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, Ian Campbell <Ian.Campbell@citrix.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
	<52D5BF10.2050702@terremark.com>
In-Reply-To: <52D5BF10.2050702@terremark.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/2014 22:49, Don Slutz wrote:
> On 01/14/14 11:29, Ian Campbell wrote:
>> We've just tagged 4.4.0-rc2, please test and report bugs.
>>
>> The tarball can be downloaded here:
>>
>> http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz
>>
>> Ian.
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
> This tarball does not build on CentOS 5.10:
>
>
> gcc -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing
> -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement
> -I/home/don/xen-4.4.0-rc2/xen/include
> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-generic
> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-default
> -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs
> -DHAVE_GAS_VMX -mno-red-zone -mno-sse -fpic
> -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE
> -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith
> -pipe -g -D__XEN__ -include
> /home/don/xen-4.4.0-rc2/xen/include/xen/config.h -nostdinc
> -iwithprefix include -fno-optimize-sibling-calls -DVERBOSE -DHAS_ACPI
> -DHAS_GDBSX -DHAS_PASSTHROUGH -DHAS_PCI -DHAS_IOPORTS
> -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER -MMD -MF .memory.o.d -c
> memory.c -o memory.o
> cc1: warnings being treated as errors
> memory.c: In function 'compat_memory_op':
> memory.c:213: warning: comparison is always true due to limited range
> of data type
> memory.c:214: warning: comparison is always true due to limited range
> of data type
> memory.c:215: warning: comparison is always true due to limited range
> of data type
> make[5]: *** [memory.o] Error 1
> make[5]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common/compat'
> make[4]: *** [compat/built_in.o] Error 2
> make[4]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common'
> make[3]: *** [/home/don/xen-4.4.0-rc2/xen/common/built_in.o] Error 2
> make[3]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/arch/x86'
> make[2]: *** [/home/don/xen-4.4.0-rc2/xen/xen] Error 2
> make[2]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
> make[1]: *** [install] Error 2
> make[1]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
> make: *** [install-xen] Error 2
> dcs-xen-53:~/xen-4.4.0-rc2>uname -a
> Linux dcs-xen-53 2.6.18-371.el5xen #1 SMP Tue Oct 1 09:15:30 EDT 2013
> x86_64 x86_64 x86_64 GNU/Linux
> dcs-xen-53:~/xen-4.4.0-rc2>cat /etc/redhat-release
> CentOS release 5.10 (Final)
> dcs-xen-53:~/xen-4.4.0-rc2>gcc -v
> Using built-in specs.
> Target: x86_64-redhat-linux
> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
> --infodir=/usr/share/info --enable-shared --enable-threads=posix
> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
> --disable-libunwind-exceptions --enable-libgcj-multifile
> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
> --enable-java-awt=gtk --disable-dssi --disable-plugin
> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
> --with-cpu=generic --host=x86_64-redhat-linux
> Thread model: posix
> gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)

I have also just encountered this build error, but am currently on the
fence as to whether it is a compiler bug in gcc 4.1.2 or bad code.  Using
newer compilers makes the complaint go away.  I would certainly like to
hope that compat_handle_okay() is correct, and it does appear to be
correct from code inspection.

The if statement becomes gigantic after preprocessing, and I ran out of
effort today to sanitise the preprocessed output and check it for
correctness.  (At the very least, it would be kind to the compiler to
factor out the paging_mode_external(current->domain) check and degrade
the compat_handle_okay()s to compat_array_access_ok().)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 01:38:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 01:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3FQW-0002Kg-3z; Wed, 15 Jan 2014 01:37:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W3FFX-0001vc-AT
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 01:26:31 +0000
Received: from [193.109.254.147:30072] by server-15.bemta-14.messagelabs.com
	id E3/56-22186-6C3E5D25; Wed, 15 Jan 2014 01:26:30 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389749188!10915472!1
X-Originating-IP: [209.85.223.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15416 invoked from network); 15 Jan 2014 01:26:29 -0000
Received: from mail-ie0-f169.google.com (HELO mail-ie0-f169.google.com)
	(209.85.223.169)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 01:26:29 -0000
Received: by mail-ie0-f169.google.com with SMTP id at1so1438440iec.28
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 17:26:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=+4JJGpGzvrdmRsYJC2le9nkdPo2M0GbBqZE4IUfdtLA=;
	b=WzvzbY6uPAzkeJLG3MwimVbfBhuzMeJvHQC96lTJCHF6xVkyhaH038iU5m3YCnHFuY
	5/BOMGB2pNXdyrZR13sPszflbASmBFd04FanuskIGu6thERfWGpyRHt2jvrvGwqyeXpP
	xuNgcjuj6naT24V7fRyx3TMIJU+dCy9bfKr09haCXisIcGUF+U7hnurtUhUDhzATBnxv
	GxZnSkq4mkbVxKDMHW8joShvNbUXHj1phDWtCc+Wsqfz2fnIxCpNDginoIBNLgiXDO/M
	GlmUt2gywGFPov8gYujrOPb1FFs6nS01cMCkv3CQ9oI+DEhOaiTjB0ZSusalf10f3G5k
	sTmg==
X-Gm-Message-State: ALoCoQm22gmKWUoxnE4I0siOa+2DRisJLkdi/3cD0hQkIb5881Od5Hirt3DRWJFFG1t2npoVHKIj
X-Received: by 10.50.50.70 with SMTP id a6mr6024319igo.1.1389749188092;
	Tue, 14 Jan 2014 17:26:28 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id gc2sm26851224igd.6.2014.01.14.17.26.27
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 17:26:27 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
Date: Tue, 14 Jan 2014 18:26:25 -0700
Message-Id: <24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Wed, 15 Jan 2014 01:37:50 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
> This new support raises 2 open questions (for both the Xen and FreeBSD communities).
> When a new guest is created, the toolstack will generate a device tree which
> will contain:
> 	- The amount of memory
> 	- The description of the platform (gic, timer, hypervisor node)
> 	- PSCI node for SMP bringup
> 
> Until now, Xen on ARM supported only Linux-based OSes. When support for
> device trees was added in Xen for guests, we chose to use the Linux device tree
> bindings (gic, timer,...). It seems that FreeBSD chose a different way to
> implement device tree support:
> 	- strictly respect ePAPR (for the interrupt-parent property)
> 	- different gic bindings (only 1 interrupt cell)
> 
> I would like to come up with a common device tree specification (bindings, ...)
> across every operating system. I know the Linux community is working on moving
> the device tree bindings out of the kernel tree. Does the FreeBSD community plan
> to work with the Linux community for this purpose?

We generally try to follow the common definitions for the FDT stuff. There are a few cases where we either lack the feature set of Linux, or where the Linux folks are moving quickly and changing the underlying definitions, in which case we wait for the standards to mature before we implement. In some cases, where that maturity hasn't happened, or where the bindings are overly Linux-centric (which in theory they aren't supposed to be, but sometimes wind up that way), we've not implemented things.
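To make the binding difference concrete, here is an illustrative device-tree fragment in the style of the Linux GIC binding that Xen's generated guest DT follows. The node names, addresses, and interrupt values are illustrative examples taken from the shape of the Linux binding, not from Xen's actual generated tree; the point is the three interrupt cells (type, number, flags), versus FreeBSD's one-cell scheme.

```dts
gic: interrupt-controller@2c001000 {
        compatible = "arm,cortex-a15-gic";
        interrupt-controller;
        /* Linux-style binding: each interrupt specifier has 3 cells */
        #interrupt-cells = <3>;
        reg = <0x2c001000 0x1000>,   /* distributor */
              <0x2c002000 0x100>;    /* cpu interface */
};

timer {
        compatible = "arm,armv7-timer";
        /* <type number flags>: type 1 = PPI, interrupt 13, flags encode
         * trigger/CPU mask; a one-cell consumer would carry only the
         * interrupt number and lose the other two fields */
        interrupts = <1 13 0xf08>;
};
```

A tree written against one convention cannot be consumed by a kernel expecting the other, which is why agreeing on a common specification matters for booting an unaltered FDT.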

> As mentioned earlier, the device tree can vary across Xen versions and user input
> (eg the memory). I have noticed a few places (mainly the timer) where the IRQ
> numbers are hardcoded.

These cases should be viewed as bugs in FreeBSD, I believe. One of the goals that has wide support, at least in theory, is that we can boot with an unaltered Linux FDT. That goal is still some time in the future.

> In the long term, it would be nice to see FreeBSD booting out of the box (eg with
> the device tree directly provided by the board firmware). I plan to add support
> for Device Tree loading via the Linux Boot ABI, which is the way that Xen boots a
> new guest.

That would be most welcome.

> The second question is related to the memory attributes used for the page tables.
> The early page tables set up by FreeBSD use the Write-Through memory attribute,
> which results in a seemingly random (not every time in the same place) crash
> before the real page tables are initialized.
> Replacing Write-Through with Write-Back makes FreeBSD boot correctly. Even today,
> I have no idea whether it's a requirement from Xen or a bug (either in Xen or
> FreeBSD).

There were some problems with pages being set up improperly for the mutexes to operate correctly. Have you confirmed this on a recent version of FreeBSD?

> The code is taking its first faltering steps, so the TODO list is quite long:
> 	- Try the code on the x86 architecture. I did lots of rework of the event
> 	channel code and didn't even try to compile it
> 	- Clean up the event channel code
> 	- Clean up the xen control code
> 	- Add support for Device Tree loading via the Linux Boot ABI
> 	- Fix crashing in userspace. Could be related to the patch
> 	series "arm SMP on Cortex-A15"
> 	- Add guest SMP support
> 	- DOM0 support?
> 
> Any help, comments, questions are welcomed.

I think this is great! We'd love to work with you to make this happen.

Warner

> Sincerely yours,
> 
> ============= Instruction to test FreeBSD on Xen on ARM ===========
[trimmed]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 01:38:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 01:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3FQW-0002Kg-3z; Wed, 15 Jan 2014 01:37:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W3FFX-0001vc-AT
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 01:26:31 +0000
Received: from [193.109.254.147:30072] by server-15.bemta-14.messagelabs.com
	id E3/56-22186-6C3E5D25; Wed, 15 Jan 2014 01:26:30 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389749188!10915472!1
X-Originating-IP: [209.85.223.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15416 invoked from network); 15 Jan 2014 01:26:29 -0000
Received: from mail-ie0-f169.google.com (HELO mail-ie0-f169.google.com)
	(209.85.223.169)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 01:26:29 -0000
Received: by mail-ie0-f169.google.com with SMTP id at1so1438440iec.28
	for <xen-devel@lists.xen.org>; Tue, 14 Jan 2014 17:26:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=+4JJGpGzvrdmRsYJC2le9nkdPo2M0GbBqZE4IUfdtLA=;
	b=WzvzbY6uPAzkeJLG3MwimVbfBhuzMeJvHQC96lTJCHF6xVkyhaH038iU5m3YCnHFuY
	5/BOMGB2pNXdyrZR13sPszflbASmBFd04FanuskIGu6thERfWGpyRHt2jvrvGwqyeXpP
	xuNgcjuj6naT24V7fRyx3TMIJU+dCy9bfKr09haCXisIcGUF+U7hnurtUhUDhzATBnxv
	GxZnSkq4mkbVxKDMHW8joShvNbUXHj1phDWtCc+Wsqfz2fnIxCpNDginoIBNLgiXDO/M
	GlmUt2gywGFPov8gYujrOPb1FFs6nS01cMCkv3CQ9oI+DEhOaiTjB0ZSusalf10f3G5k
	sTmg==
X-Gm-Message-State: ALoCoQm22gmKWUoxnE4I0siOa+2DRisJLkdi/3cD0hQkIb5881Od5Hirt3DRWJFFG1t2npoVHKIj
X-Received: by 10.50.50.70 with SMTP id a6mr6024319igo.1.1389749188092;
	Tue, 14 Jan 2014 17:26:28 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id gc2sm26851224igd.6.2014.01.14.17.26.27
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 14 Jan 2014 17:26:27 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
Date: Tue, 14 Jan 2014 18:26:25 -0700
Message-Id: <24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Wed, 15 Jan 2014 01:37:50 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
> This new support brings up two open questions (for both the Xen and FreeBSD
> communities). When a new guest is created, the toolstack will generate a
> device tree which will contain:
> 	- The amount of memory
> 	- The description of the platform (gic, timer, hypervisor node)
> 	- A PSCI node for SMP bringup
> 
> Until now, Xen on ARM supported only Linux-based OSes. When device tree
> support for guests was added to Xen, we chose to use the Linux device tree
> bindings (gic, timer, ...). It seems that FreeBSD chose a different way to
> implement device tree support:
> 	- it strictly respects ePAPR (for the interrupt-parent property)
> 	- its gic bindings differ (only 1 interrupt cell)
> 
> I would like to come up with a common device tree specification (bindings,
> ...) across every operating system. I know the Linux community is working on
> moving the device tree bindings out of the kernel tree. Does the FreeBSD
> community plan to work with the Linux community on this?

We generally try to follow the common definitions for the FDT stuff. There are a few cases where we either lack Linux's feature set, or where the Linux folks are moving quickly and changing the underlying definitions, so we wait for the standards to mature before we implement. And in some cases we've not implemented things at all, either because that maturity hasn't happened or because the bindings are overly Linux-centric (which in theory they aren't supposed to be, but sometimes wind up that way).

> As specified earlier, the device tree can vary across Xen versions and user
> input (e.g. the memory). I have noticed a few places (mainly the timer) where
> the IRQ numbers are hardcoded.

These cases should be viewed as bugs in FreeBSD, I believe. One of the goals that has wide support, at least in theory, is that we can boot with an unaltered Linux FDT. That goal is still some time in the future.

> In the long term, it would be nice to see FreeBSD booting out of the box
> (e.g. with the device tree provided directly by the board firmware). I plan
> to add support for Device Tree loading via the Linux Boot ABI; it's the way
> that Xen uses to boot a new guest.

That would be most welcome.

> The second question is related to the memory attributes for the page tables.
> The early page tables set up by FreeBSD use the Write-Through memory
> attribute, which results in a seemingly random (not in the same place every
> time) crash before the real page tables are initialized.
> Replacing Write-Through with Write-Back made FreeBSD boot correctly. Even
> today, I have no idea whether it's a requirement from Xen or a bug (either
> in Xen or FreeBSD).

There were some problems with pages being set up improperly for the mutexes to operate correctly. Have you confirmed this on a recent version of FreeBSD?
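For reference, the Write-Through vs. Write-Back distinction under discussion maps onto the ARMv7 short-descriptor attribute bits used in early, section-mapped page tables. A minimal sketch, assuming TEX remap is disabled; the macro names here are ours for illustration, not FreeBSD's:

```c
#include <stdint.h>

/* ARMv7 short-descriptor L1 section entry, TEX remap disabled.
 * Macro names are illustrative only. */
#define L1_SECTION   (2u << 0)            /* bits[1:0] = 0b10: section map */
#define L1_B         (1u << 2)            /* bufferable                    */
#define L1_C         (1u << 3)            /* cacheable                     */
#define L1_TEX(x)    ((uint32_t)(x) << 12)

/* C=1, B=0: inner/outer Write-Through, no write-allocate */
#define ATTR_WT      L1_C
/* TEX=0b001, C=1, B=1: inner/outer Write-Back, write-allocate */
#define ATTR_WB_WA   (L1_TEX(1) | L1_C | L1_B)

/* Build a 1MiB section entry mapping physical address 'pa'. */
static uint32_t l1_section(uint32_t pa, uint32_t attrs)
{
    return (pa & 0xfff00000u) | attrs | L1_SECTION;
}
```

Swapping ATTR_WT for ATTR_WB_WA in the early mappings is the kind of one-attribute change Julien describes; whether Xen actually requires Write-Back for guest RAM is exactly the open question.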

> The code is taking its first faltering steps, so the TODO list is quite big:
> 	- Try the code on the x86 architecture. I did lots of rework on the
> 	event channel code and didn't even try to compile it
> 	- Clean up the event channel code
> 	- Clean up the Xen control code
> 	- Add support for Device Tree loading via the Linux Boot ABI
> 	- Fix the crash in userspace. Could be related to the patch
> 	series "arm SMP on Cortex-A15"
> 	- Add guest SMP support
> 	- Dom0 support?
> 
> Any help, comments, questions are welcomed.

I think this is great! We'd love to work with you to make this happen.

Warner

> Sincerely yours,
> 
> ============= Instruction to test FreeBSD on Xen on ARM ===========
[trimmed]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 02:30:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 02:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3GF2-0005Am-J3; Wed, 15 Jan 2014 02:30:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W3GF1-0005Ah-1y
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 02:30:03 +0000
Received: from [193.109.254.147:14803] by server-13.bemta-14.messagelabs.com
	id 74/01-19374-AA2F5D25; Wed, 15 Jan 2014 02:30:02 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389753000!10910095!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2284 invoked from network); 15 Jan 2014 02:30:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 02:30:01 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0F2TuiB028984
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 02:29:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0F2Tn3T015022
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 02:29:54 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0F2Tm1d003507; Wed, 15 Jan 2014 02:29:49 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 14 Jan 2014 18:29:48 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org
Date: Wed, 15 Jan 2014 02:33:48 +0800
Message-Id: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: ian.jackson@eu.citrix.com, annie.li@oracle.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH] tools/xl: correctly shows split eventchannel
	for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

After the split event channel feature was added to netback/netfront,
"xl network-list" no longer shows the event channels correctly. Add
tx-/rx-evt-ch columns to show the tx/rx event channels correctly.
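The fallback this patch implements can be sketched standalone (hypothetical helper names; the real logic lives in libxl_device_nic_getinfo): if the legacy single "event-channel" xenstore key holds a valid channel, both directions share it; otherwise the split "event-channel-tx"/"event-channel-rx" keys are consulted, with -1 meaning absent.

```c
#include <stdlib.h>

/* Stand-ins for the xenstore values read by libxl;
 * a NULL pointer means the key is absent. */
struct nic_evtch { int tx, rx; };

static int parse_or_minus1(const char *val)
{
    return val ? (int)strtoul(val, NULL, 10) : -1;
}

static void fill_evtch(struct nic_evtch *out, const char *shared,
                       const char *tx, const char *rx)
{
    int evtch = parse_or_minus1(shared);

    if (evtch > 0) {
        /* Old-style single channel: used for both directions. */
        out->tx = out->rx = evtch;
    } else {
        /* Split channels negotiated by netback/netfront. */
        out->tx = parse_or_minus1(tx);
        out->rx = parse_or_minus1(rx);
    }
}
```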

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 tools/libxl/libxl.c         |   11 ++++++++++-
 tools/libxl/libxl_types.idl |    3 ++-
 tools/libxl/xl_cmdimpl.c    |   12 ++++++------
 3 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4222687 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3125,6 +3125,7 @@ int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
     GC_INIT(ctx);
     char *dompath, *nicpath;
     char *val;
+    int  evtch;
 
     dompath = libxl__xs_get_dompath(gc, domid);
     nicinfo->devid = nic->devid;
@@ -3141,7 +3142,15 @@ int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/state", nicpath));
     nicinfo->state = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel", nicpath));
-    nicinfo->evtch = val ? strtoul(val, NULL, 10) : -1;
+    evtch = val ? strtoul(val, NULL, 10) : -1;
+    if(evtch > 0)
+        nicinfo->evtch_tx = nicinfo->evtch_rx = evtch;
+    else {
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-tx", nicpath));
+        nicinfo->evtch_tx = val ? strtoul(val, NULL, 10) : -1;
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-rx", nicpath));
+        nicinfo->evtch_rx = val ? strtoul(val, NULL, 10) : -1;
+    }
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/tx-ring-ref", nicpath));
     nicinfo->rref_tx = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/rx-ring-ref", nicpath));
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..e6368c7 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -488,7 +488,8 @@ libxl_nicinfo = Struct("nicinfo", [
     ("frontend_id", uint32),
     ("devid", libxl_devid),
     ("state", integer),
-    ("evtch", integer),
+    ("evtch_tx", integer),
+    ("evtch_rx", integer),
     ("rref_tx", integer),
     ("rref_rx", integer),
     ], dir=DIR_OUT)
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..7353187 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5842,9 +5842,9 @@ int main_networklist(int argc, char **argv)
         /* No options */
     }
 
-    /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
-    printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
-           "Idx", "BE", "Mac Addr.", "handle", "state", "evt-ch", "tx-", "rx-ring-ref", "BE-path");
+    /*      Idx  BE   MAC   Hdl  Sta  txev/rxev txr/rxr  BE-path */
+    printf("%-3s %-2s %-17s %-6s %-5s %6s/%-6s %5s/%-5s %-30s\n",
+           "Idx", "BE", "Mac Addr.", "handle", "state", "tx-", "rx-evt-ch", "tx-", "rx-ring-ref", "BE-path");
     for (argv += optind, argc -= optind; argc > 0; --argc, ++argv) {
         uint32_t domid = find_domain(*argv);
         nics = libxl_device_nic_list(ctx, domid, &nb);
@@ -5857,9 +5857,9 @@ int main_networklist(int argc, char **argv)
                 printf("%-3d %-2d ", nicinfo.devid, nicinfo.backend_id);
                 /* MAC */
                 printf(LIBXL_MAC_FMT, LIBXL_MAC_BYTES(nics[i].mac));
-                /* Hdl  Sta  evch txr/rxr  BE-path */
-                printf("%6d %5d %6d %5d/%-11d %-30s\n",
-                       nicinfo.devid, nicinfo.state, nicinfo.evtch,
+                /* Hdl  Sta  txev/rxev txr/rxr  BE-path */
+                printf(" %6d %5d %6d/%-9d %5d/%-11d %-30s\n",
+                       nicinfo.devid, nicinfo.state, nicinfo.evtch_tx, nicinfo.evtch_rx,
                        nicinfo.rref_tx, nicinfo.rref_rx, nicinfo.backend);
                 libxl_nicinfo_dispose(&nicinfo);
             }
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 05:07:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 05:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3IgX-0002nk-SX; Wed, 15 Jan 2014 05:06:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3IgX-0002nf-3v
	for xen-devel@lists.xensource.com; Wed, 15 Jan 2014 05:06:37 +0000
Received: from [85.158.143.35:57237] by server-2.bemta-4.messagelabs.com id
	12/D1-11386-C5716D25; Wed, 15 Jan 2014 05:06:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389762393!9094641!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29030 invoked from network); 15 Jan 2014 05:06:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 05:06:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,660,1384300800"; d="scan'208";a="92971987"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 05:06:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 00:06:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3Ig4-0003Ob-72;
	Wed, 15 Jan 2014 05:06:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3Ig4-0001Qk-3g;
	Wed, 15 Jan 2014 05:06:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24373-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Jan 2014 05:06:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24373: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24373 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24373/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24369

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 14 16:19:08 2014 +0100

    update Xen version to 4.4-rc2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 14 16:19:08 2014 +0100

    update Xen version to 4.4-rc2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 07:33:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 07:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Kxu-0008B0-6E; Wed, 15 Jan 2014 07:32:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hahn@univention.de>) id 1W3Kxt-0008Av-7L
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 07:32:41 +0000
Received: from [85.158.137.68:4700] by server-2.bemta-3.messagelabs.com id
	8E/D5-17329-79936D25; Wed, 15 Jan 2014 07:32:39 +0000
X-Env-Sender: hahn@univention.de
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389771158!9235620!1
X-Originating-IP: [82.198.197.8]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5493 invoked from network); 15 Jan 2014 07:32:38 -0000
Received: from mail.univention.de (HELO mail.univention.de) (82.198.197.8)
	by server-13.tower-31.messagelabs.com with SMTP;
	15 Jan 2014 07:32:38 -0000
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 5BA7A1939290;
	Wed, 15 Jan 2014 08:32:36 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by slugis.knut.univention.de (Postfix) with ESMTP id 322D21939285;
	Wed, 15 Jan 2014 08:29:58 +0100 (CET)
X-Virus-Scanned: by amavisd-new-2.6.1 (20080629) (Debian) at knut.univention.de
Received: from mail.univention.de ([127.0.0.1])
	by localhost (slugis.knut.univention.de [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id tNfVIrFZH8xs; Wed, 15 Jan 2014 08:29:57 +0100 (CET)
Received: from [192.168.0.191] (mail.univention.de [82.198.197.8])
	by slugis.knut.univention.de (Postfix) with ESMTPSA id E81031939281;
	Wed, 15 Jan 2014 08:28:55 +0100 (CET)
Message-ID: <52D638B5.5090202@univention.de>
Date: Wed, 15 Jan 2014 08:28:53 +0100
From: Philipp Hahn <hahn@univention.de>
Organization: Univention GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel@lists.xen.org, Keir Fraser <keir.fraser@citrix.com>
X-Enigmail-Version: 1.5.1
Content-Type: multipart/mixed; boundary="------------050309040106000909080202"
Subject: [Xen-devel] [BUG,PATCH] race in xen-block-hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050309040106000909080202
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

Hello,

we encountered a strange race condition in tools/hotplug/Linux/block,
which only shows very rarely and only once for the first domain started
after the host server is started. The complete details are in our
Bugzilla at <https://forge.univention.org/bugzilla/show_bug.cgi?id=20481>.

In our case it's Xen-4.1.3 (yes I know it's ancient, but the code is
still the same in current Xen-git) and it only happens for file-backed
devices (in our case an ISO image as CD-ROM).

>        # Avoid a race with the remove if the path has been deleted, or
>        # otherwise changed from "InitWait" state e.g. due to a timeout
>         xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
>         if [ "$xenbus_state" != '2' ]
>         then
>           release_lock "block"
>           fatal "Path closed or removed during hotplug add: $XENBUS_PATH state: $xenbus_state"
>         fi

The problem is that sometimes the other end (kernel/qemu?) is too slow
and the device is still in state 1=Initializing. If that happens, the
domU start is aborted and the domain destroyed.
If the same VM is then started again, it works flawlessly.

That code block was added in
<http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=92e6cb5673b37bd883bdef0d0e83faf000edf61d>
by Keir Fraser.

My workaround is to delay for one more second and retry if the
state is 1=Initializing. The added log message confirmed that this case
actually occurs.

Signed-off-by: Philipp Hahn <hahn@univention.de>

BYtE
Philipp Hahn
-- 
Philipp Hahn
Open Source Software Engineer

Univention GmbH
be open.
Mary-Somerville-Str. 1
D-28359 Bremen
Tel.: +49 421 22232-0
Fax : +49 421 22232-99
hahn@univention.de

http://www.univention.de/
Geschäftsführer: Peter H. Ganten
HRB 20755 Amtsgericht Bremen
Steuer-Nr.: 71-597-02876

--------------050309040106000909080202
Content-Type: text/x-patch;
 name="fix_hotplug_race.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="fix_hotplug_race.patch"

# Bug #20481: Fix race during device hotplug
# Sometimes qemu/the kernel takes too long for the first domain to start. The
# hotplug script then finds the device still in state 1=Initialising and aborts.
# Add an artificial delay of 1 second and try again.
diff --git a/tools/hotplug/Linux/block b/tools/hotplug/Linux/block
index 06de5c9..cbf2af3 100644
--- a/tools/hotplug/Linux/block
+++ b/tools/hotplug/Linux/block
@@ -255,12 +255,16 @@ case "$command" in
 
         # Avoid a race with the remove if the path has been deleted, or
 	# otherwise changed from "InitWait" state e.g. due to a timeout
-        xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
-        if [ "$xenbus_state" != '2' ]
-        then
+        while true
+        do
+          xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
+          case "$xenbus_state" in
+          1) log notice "Path still initializing: $XENBUS_PATH" ; sleep 1 ; continue ;;
+          2) break ;;
+          esac
           release_lock "block"
           fatal "Path closed or removed during hotplug add: $XENBUS_PATH state: $xenbus_state"
-        fi
+        done
 
         if [ "$mode" = 'w' ] && ! stat "$file" -c %A | grep -q w
         then

--------------050309040106000909080202
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050309040106000909080202--


From xen-devel-bounces@lists.xen.org Wed Jan 15 08:08:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 08:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3LW1-0001Kd-OV; Wed, 15 Jan 2014 08:07:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W3LW0-0001KY-Gk
	for Xen-devel@lists.xen.org; Wed, 15 Jan 2014 08:07:56 +0000
Received: from [85.158.137.68:32730] by server-2.bemta-3.messagelabs.com id
	B8/DD-17329-BD146D25; Wed, 15 Jan 2014 08:07:55 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389773273!9198369!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32297 invoked from network); 15 Jan 2014 08:07:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 08:07:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="93000221"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 08:07:52 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 03:07:51 -0500
Message-ID: <52D641D7.8000904@citrix.com>
Date: Wed, 15 Jan 2014 09:07:51 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Philipp Hahn <hahn@univention.de>, <Xen-devel@lists.xen.org>, Keir Fraser
	<keir.fraser@citrix.com>
References: <52D638B5.5090202@univention.de>
In-Reply-To: <52D638B5.5090202@univention.de>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: Re: [Xen-devel] [BUG,PATCH] race in xen-block-hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 08:28, Philipp Hahn wrote:
> Hello,
>
> we encountered a strange race condition in tools/hotplug/Linux/block,
> which only shows very rarely and only once for the first domain started
> after the host server is started. The complete details are in our
> Bugzilla at <https://forge.univention.org/bugzilla/show_bug.cgi?id=20481>.
>
> In our case it's Xen-4.1.3 (yes I know it's ancient, but the code is
> still the same in current Xen-git) and it only happens for file-backed
> devices (in our case an ISO image as CD-ROM).
>>        # Avoid a race with the remove if the path has been deleted, or
>>        # otherwise changed from "InitWait" state e.g. due to a timeout
>>         xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
>>         if [ "$xenbus_state" != '2' ]
>>         then
>>           release_lock "block"
>>           fatal "Path closed or removed during hotplug add: $XENBUS_PATH state: $xenbus_state"
>>         fi
>
> The problem is that sometimes the other end (kernel/qemu?) is too slow
> and the device is still in state 1=Initializing. If that happens, the
> domU start is aborted and destroyed.
> If the same VM is then started again, it works flawlessly.

This problem only manifests itself with xend and xl from Xen versions <
4.2; from 4.2 onwards libxl waits for the backend to switch to state 2
before launching hotplug scripts.

I would like to avoid having this kind of infinite loop in the block
hotplug script; is there any way this can be fixed in xend? (which is the
only toolstack that has this problem in upstream versions)

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
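One way to avoid an unbounded loop in the script is to give up after a deadline. This is a stand-alone sketch of that suggestion, not anything from the thread's patch; the helpers are stubs, and the stub backend never leaves state 1 so the timeout path is the one exercised:

```shell
#!/bin/sh
# Bounded-wait variant: poll while the device is Initialising, but give
# up after a deadline instead of looping forever. All helpers are stubs.
XENBUS_PATH="backend/vbd/2/51712"            # hypothetical device path
xenstore_read_default() { echo 1; }          # stub: always Initialising
fatal() { failed=1; echo "FATAL: $*"; }      # stub: record instead of exiting

deadline=$(( $(date +%s) + 2 ))              # allow at most ~2 seconds
failed=0
while :; do
    xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
    [ "$xenbus_state" = "2" ] && break       # InitWait: proceed with add
    if [ "$xenbus_state" != "1" ] || [ "$(date +%s)" -ge "$deadline" ]; then
        fatal "Path not ready: $XENBUS_PATH state: $xenbus_state"
        break
    fi
    sleep 1                                  # still Initialising: wait a bit
done
```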

From xen-devel-bounces@lists.xen.org Wed Jan 15 08:49:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 08:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3MA3-00039v-Js; Wed, 15 Jan 2014 08:49:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W3MA2-00039q-7w
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 08:49:18 +0000
Received: from [85.158.139.211:14688] by server-6.bemta-5.messagelabs.com id
	24/B0-16310-D8B46D25; Wed, 15 Jan 2014 08:49:17 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389775756!9828775!1
X-Originating-IP: [213.199.154.83]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16499 invoked from network); 15 Jan 2014 08:49:16 -0000
Received: from mail-db3lp0083.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.83)
	by server-12.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Jan 2014 08:49:16 -0000
Received: from AMSPRD0111HT002.eurprd01.prod.exchangelabs.com (157.56.250.229)
	by DB4PR03MB411.eurprd03.prod.outlook.com (10.141.235.149) with
	Microsoft
	SMTP Server (TLS) id 15.0.851.11; Wed, 15 Jan 2014 08:49:15 +0000
Message-ID: <52D64B87.6000400@zynstra.com>
Date: Wed, 15 Jan 2014 08:49:11 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com>
In-Reply-To: <52CE7E67.5080708@oracle.com>
X-Originating-IP: [157.56.250.229]
X-ClientProxiedBy: DB3PR03CA004.eurprd03.prod.outlook.com (10.242.134.14) To
	DB4PR03MB411.eurprd03.prod.outlook.com (10.141.235.149)
X-Forefront-PRVS: 00922518D8
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(779001)(679001)(689001)(51704005)(189002)(199002)(479174003)(377454003)(51444003)(24454002)(61514002)(49866001)(54356001)(50986001)(80316001)(79102001)(53806001)(42186004)(76796001)(54316002)(66066001)(47776003)(47976001)(51856001)(63696002)(46102001)(77096001)(83322001)(47736001)(56776001)(80976001)(92566001)(93136001)(36756003)(93516001)(81816001)(64126003)(76786001)(59896001)(33656001)(76482001)(92726001)(77982001)(81342001)(74876001)(87976001)(31966008)(4396001)(81686001)(69226001)(56816005)(74662001)(80022001)(59766001)(85306002)(23756003)(83072002)(50466002)(74706001)(81542001)(74502001)(83506001)(47446002)(90146001)(74366001)(85852003);
	DIR:OUT; SFP:1102; SCL:1; SRVR:DB4PR03MB411;
	H:AMSPRD0111HT002.eurprd01.prod.exchangelabs.com;
	CLIP:157.56.250.229; FPR:; RD:InfoNoRecords; MX:1; A:1; LANG:en;
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/07/2014 05:21 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> Could you confirm that this problem doesn't exist when tmem is loaded
>>> with selfshrinking=0 while compiling gcc? It seems that you are
>>> compiling different packages during your testing.
>>> This will help figure out whether selfshrinking is the root cause.
>> Got an oom with selfshrinking=0, again during a gcc compile.
>> Unfortunately I don't have a single test case which demonstrates the
>> problem but as I mentioned before it will generally show up under
>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>
> So the root cause is not the enabled selfshrinking.
> Then what I can think of is that the xen-selfballoon driver was too
> aggressive: too many pages were ballooned out, which caused heavy memory
> pressure in the guest OS.
> kswapd then started to reclaim pages until most pages were
> unreclaimable (all_unreclaimable=yes for all zones), and the OOM killer
> was triggered.
> In theory the balloon driver should give ballooned-out pages back to
> the guest OS, but I'm afraid this procedure is not fast enough.
>
> My suggestion is to reserve a minimum amount of memory for your guest OS
> so that xen-selfballoon won't be so aggressive.
> You can do this through the parameters selfballoon_reserved_mb or
> selfballoon_min_usable_mb.
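(For reference, the knobs mentioned above can be adjusted at runtime via
sysfs. This is only a sketch under assumptions: the exact sysfs path can
vary by kernel version, and the values are illustrative, not
recommendations.)

```shell
# Illustrative only: raise the selfballoon reserve so the driver leaves
# more memory in the guest. On 3.11/3.12 kernels the xen-selfballoon
# knobs are typically exposed on the xen_memory sysfs device; verify the
# path on your own system before relying on it.
cd /sys/devices/system/xen_memory/xen_memory0
echo 64 > selfballoon_reserved_mb
echo 64 > selfballoon_min_usable_mb
# Read the values back to confirm they took effect.
cat selfballoon_reserved_mb selfballoon_min_usable_mb
```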
I am still getting OOM errors with both of these set to 32, so I'll try 
another bump to 64.  I think that even if I do find values which prevent 
it, that is more of a workaround than a fix, because it still suggests 
that swap is not being used once ballooning is no longer capable of 
satisfying the request.  I've also got an Ubuntu Saucy (3.11 kernel) 
guest running on the dom0 with tmem activated, so I'm going to see if I 
can find a comparable workload to check whether the same issue appears 
with a different kernel configuration.

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:12:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:12:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3MVp-00040y-5c; Wed, 15 Jan 2014 09:11:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3MVn-00040t-B0
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:11:47 +0000
Received: from [85.158.143.35:24136] by server-2.bemta-4.messagelabs.com id
	EC/66-11386-2D056D25; Wed, 15 Jan 2014 09:11:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389777105!9139340!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32224 invoked from network); 15 Jan 2014 09:11:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 09:11:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 09:11:45 +0000
Message-Id: <52D65EDE0200007800113B47@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 09:11:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Xen-devel" <xen-devel@lists.xen.org>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

  >>> On 14.01.14 at 21:21, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
> 
> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned 
> int,
> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
> 
> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
> unsigned int to uint16_t, which changes the space switch()'d upon.
> 
> This wouldn't be noticed with any upstream code (of which I am aware), but 
> was
> discovered because of the XenServer support for legacy Windows PV drivers,
> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit 
> set.
> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
> support running VMs with out-of-date tools.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
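
(The truncation described in the quoted commit message can be reproduced
in isolation. A minimal sketch, not Xen code: the function names and
space values below are made up for illustration; only the uint16_t vs
unsigned int behaviour mirrors the bug.)

```c
#include <stdint.h>

/* Minimal illustration (not Xen code) of the bug the patch fixes: the
 * same 32-bit 'space' value takes a different switch() branch depending
 * on whether the parameter is uint16_t or unsigned int, because passing
 * 0x80000001 through a uint16_t silently truncates it to 0x0001. */
static const char *classify_u16(uint16_t space)
{
    switch (space) {
    case 0x0001: return "case 1";
    default:     return "other";
    }
}

static const char *classify_u32(unsigned int space)
{
    switch (space) {
    case 0x0001:      return "case 1";
    case 0x80000001u: return "legacy quirk space";  /* top bit set */
    default:          return "other";
    }
}
```

With the uint16_t parameter, a legacy value like 0x80000001 lands in the
"case 1" branch instead of being recognised as its own space, which is
exactly the silent behaviour change the patch reverts.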

> ---
> 
> As this breakage was caused between 4.4-rc1 and -rc2, I request a release ack
> for the fix.

Seconded.

Jan

> This was caught by a compile failure rather than a functional test.  I have
> encountered a different compile error which turns out to be a bug in the 
> cross
> compiler we are currently using, so I need to fix that before I can
> functionally test a 4.4-rc2 based XenServer.  (Which will be a rather better
> test of whether the functionality of XENMEM_add_to_physmap is actually still
> the same.  If anyone dares look,
> https://github.com/xenserver/xen-4.3.pg/blob/master/xen-legacy-win-xenmapspace-qu 
> irks.patch
> are the hacks required to make the legacy drivers work on modern Xen.)
> ---
>  xen/arch/arm/mm.c    |    2 +-
>  xen/arch/x86/mm.c    |    2 +-
>  xen/include/xen/mm.h |    2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 293b6e2..127cce0 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -970,7 +970,7 @@ void share_xen_page_with_privileged_guests(
>  
>  int xenmem_add_to_physmap_one(
>      struct domain *d,
> -    uint16_t space,
> +    unsigned int space,
>      domid_t foreign_domid,
>      unsigned long idx,
>      xen_pfn_t gpfn)
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 32c0473..172c68c 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4522,7 +4522,7 @@ static int handle_iomem_range(unsigned long s, unsigned 
> long e, void *p)
>  
>  int xenmem_add_to_physmap_one(
>      struct domain *d,
> -    uint16_t space,
> +    unsigned int space,
>      domid_t foreign_domid,
>      unsigned long idx,
>      xen_pfn_t gpfn)
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index f90ed74..b183189 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -356,7 +356,7 @@ static inline unsigned int get_order_from_pages(unsigned 
> long nr_pages)
>  
>  void scrub_one_page(struct page_info *);
>  
> -int xenmem_add_to_physmap_one(struct domain *d, uint16_t space,
> +int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
>                                domid_t foreign_domid,
>                                unsigned long idx, xen_pfn_t gpfn);
>  
> -- 
> 1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:27:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Mkf-0004WY-94; Wed, 15 Jan 2014 09:27:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3Mkd-0004WI-OE
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 09:27:08 +0000
Received: from [85.158.139.211:28819] by server-14.bemta-5.messagelabs.com id
	EF/A5-24200-A6456D25; Wed, 15 Jan 2014 09:27:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389778025!9844145!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11790 invoked from network); 15 Jan 2014 09:27:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 09:27:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 09:27:04 +0000
Message-Id: <52D662760200007800113B5A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 09:27:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <anthony.perard@citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

According to our own documentation, even Python 2.3 is supported
for building, yet the qemu-upstream version update results in that
part of the build no longer working even with Python 2.4, due to a
number of conditional expressions like this

    full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)

in scripts/qapi-visit.py.

Converting these instances to proper conditionals fixes the issue
for me.
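
(To illustrate the rewrite pattern the patch applies: the `x if c else y`
conditional expression only exists from Python 2.5 onwards, while the
plain if/else form runs on 2.3/2.4 as well. The wrapper function name
here is made up for the sketch.)

```python
# Conditional expressions (PEP 308) appeared in Python 2.5:
#     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
# The equivalent plain if/else also parses on Python 2.3/2.4:
def make_full_name(name, fn_prefix):
    if not fn_prefix:
        full_name = name
    else:
        full_name = "%s_%s" % (name, fn_prefix)
    return full_name
```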

I further found that with some gcc versions trace/control-internal.h
causes a huge number of warnings to be generated. While this doesn't
keep the build from succeeding, it's still rather annoying.
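
(The trace/control-internal.h hunk in the patch simply swaps the two
static inline definitions so the callee is defined before its caller. A
standalone sketch of the fixed ordering follows; the typedefs and the
count value are stand-ins for illustration, not the real QEMU
definitions.)

```c
#include <assert.h>

#define TRACE_EVENT_COUNT 4  /* illustrative stand-in value */

typedef unsigned int TraceEventID;
typedef struct { int dummy; } TraceEvent;

static TraceEvent trace_events[TRACE_EVENT_COUNT];

/* Defining the callee first means trace_event_id() never references an
 * as-yet-undeclared function in the header, which is what some gcc
 * versions warned about per including translation unit. */
static inline TraceEventID trace_event_count(void)
{
    return TRACE_EVENT_COUNT;
}

static inline TraceEvent *trace_event_id(TraceEventID id)
{
    assert(id < trace_event_count());
    return &trace_events[id];
}
```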

The combined full fix for both issues is below. But I'm uncertain
whether sending these to qemu-devel would make any sense, as
I have no idea whether such old build environments are being
cared about there.

Jan

--- a/scripts/qapi-visit.py
+++ b/scripts/qapi-visit.py
@@ -20,7 +20,10 @@ import errno
 def generate_visit_struct_fields(name, field_prefix, fn_prefix, members):
     substructs = []
     ret = ''
-    full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
+    if not fn_prefix:
+        full_name = name
+    else:
+        full_name = "%s_%s" % (name, fn_prefix)
 
     for argname, argentry, optional, structured in parse_args(members):
         if structured:
@@ -84,7 +87,10 @@ if (!error_is_set(errp)) {
 ''')
     push_indent()
 
-    full_name = name if not field_prefix else "%s_%s" % (field_prefix, name)
+    if not field_prefix:
+        full_name = name
+    else:
+        full_name = "%s_%s" % (field_prefix, name)
 
     if len(field_prefix):
         ret += mcgen('''
@@ -226,6 +232,10 @@ def generate_visit_union(expr):
 
     base = expr.get('base')
     discriminator = expr.get('discriminator')
+    if not discriminator:
+        type="type"
+    else:
+        type=discriminator
 
     if discriminator == {}:
         assert not base
@@ -270,7 +280,7 @@ void visit_type_%(name)s(Visitor *m, %(n
         if (!err) {
             switch ((*obj)->kind) {
 ''',
-                 name=name, type="type" if not discriminator else discriminator)
+                 name=name, type=type)
 
     for key in members:
         if not discriminator:
--- a/trace/control-internal.h
+++ b/trace/control-internal.h
@@ -16,15 +16,15 @@
 extern TraceEvent trace_events[];
 
 
-static inline TraceEvent *trace_event_id(TraceEventID id)
+static inline TraceEventID trace_event_count(void)
 {
-    assert(id < trace_event_count());
-    return &trace_events[id];
+    return TRACE_EVENT_COUNT;
 }
 
-static inline TraceEventID trace_event_count(void)
+static inline TraceEvent *trace_event_id(TraceEventID id)
 {
-    return TRACE_EVENT_COUNT;
+    assert(id < trace_event_count());
+    return &trace_events[id];
 }
 
 static inline bool trace_event_is_pattern(const char *str)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:35:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3MsW-0005AU-Dx; Wed, 15 Jan 2014 09:35:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3MsV-0005AP-Gy
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:35:15 +0000
Received: from [193.109.254.147:27737] by server-8.bemta-14.messagelabs.com id
	2C/EF-30921-25656D25; Wed, 15 Jan 2014 09:35:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389778512!10980152!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9471 invoked from network); 15 Jan 2014 09:35:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:35:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="93017579"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 09:35:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:35:10 -0500
Message-ID: <1389778510.12434.118.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 15 Jan 2014 09:35:10 +0000
In-Reply-To: <20140114185238.GJ1696@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2

Based on the above I have no idea whether a freeze exception should be
granted for this, so my default answer is no. I'm not sure what else you
could have expected.

If you think there are changes here which should be in 4.4.0 then please
enumerate all changes included in this merge which have any relation to
Xen and their potential impact on the release. See
http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
for guidance on the sorts of considerations to make.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:38:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3MvR-0005IQ-11; Wed, 15 Jan 2014 09:38:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3MvC-0005I0-9l
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:38:12 +0000
Received: from [85.158.137.68:45211] by server-15.bemta-3.messagelabs.com id
	30/D5-11556-8F656D25; Wed, 15 Jan 2014 09:38:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389778678!9203632!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13909 invoked from network); 15 Jan 2014 09:38:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:38:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="90896329"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 09:37:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:37:38 -0500
Message-ID: <1389778658.12434.120.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 09:37:38 +0000
In-Reply-To: <52D5880B.30506@linaro.org>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
	<52D5880B.30506@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> > These mappings are global and therefore need flushing on all processors. Add
> > flush_all_xen_data_tlb_range_va which accomplishes this.
> 
> Can we make the name consistent across every *tlb* function call? In
> flushtlb.h we use *_local for maintenance on the current processor only.
> If the suffix is not present, the maintenance will be done on every
> processor.

I was trying to avoid a massive renaming of the existing flush_xen_*. I
suppose I should just go ahead and do it.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:44:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3N1f-0005p7-1l; Wed, 15 Jan 2014 09:44:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3N1d-0005p2-Hw
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:44:41 +0000
Received: from [85.158.143.35:58340] by server-3.bemta-4.messagelabs.com id
	1F/3C-32360-88856D25; Wed, 15 Jan 2014 09:44:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389779080!11759577!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26419 invoked from network); 15 Jan 2014 09:44:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 09:44:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 09:44:39 +0000
Message-Id: <52D666950200007800113B71@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 09:44:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>, "Don Slutz" <dslutz@verizon.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
	<52D5BF10.2050702@terremark.com> <52D5D1C3.3020601@citrix.com>
In-Reply-To: <52D5D1C3.3020601@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 01:09, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 14/01/2014 22:49, Don Slutz wrote:
>> On 01/14/14 11:29, Ian Campbell wrote:
>>> We've just tagged 4.4.0-rc2, please test and report bugs.
>>>
>>> The tarball can be downloaded here:
>>>
>>> http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz 
>>>
>>> Ian.
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org 
>>> http://lists.xen.org/xen-devel 
>>
>> This tarball does not build on CentOS 5.10:
>>
>>
>> gcc -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing
>> -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement
>> -I/home/don/xen-4.4.0-rc2/xen/include
>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-generic
>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-default
>> -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs
>> -DHAVE_GAS_VMX -mno-red-zone -mno-sse -fpic
>> -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE
>> -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith
>> -pipe -g -D__XEN__ -include
>> /home/don/xen-4.4.0-rc2/xen/include/xen/config.h -nostdinc
>> -iwithprefix include -fno-optimize-sibling-calls -DVERBOSE -DHAS_ACPI
>> -DHAS_GDBSX -DHAS_PASSTHROUGH -DHAS_PCI -DHAS_IOPORTS
>> -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER -MMD -MF .memory.o.d -c
>> memory.c -o memory.o
>> cc1: warnings being treated as errors
>> memory.c: In function 'compat_memory_op':
>> memory.c:213: warning: comparison is always true due to limited range
>> of data type
>> memory.c:214: warning: comparison is always true due to limited range
>> of data type
>> memory.c:215: warning: comparison is always true due to limited range
>> of data type
>> make[5]: *** [memory.o] Error 1
>> make[5]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common/compat'
>> make[4]: *** [compat/built_in.o] Error 2
>> make[4]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common'
>> make[3]: *** [/home/don/xen-4.4.0-rc2/xen/common/built_in.o] Error 2
>> make[3]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/arch/x86'
>> make[2]: *** [/home/don/xen-4.4.0-rc2/xen/xen] Error 2
>> make[2]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>> make[1]: *** [install] Error 2
>> make[1]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>> make: *** [install-xen] Error 2
>> dcs-xen-53:~/xen-4.4.0-rc2>uname -a
>> Linux dcs-xen-53 2.6.18-371.el5xen #1 SMP Tue Oct 1 09:15:30 EDT 2013
>> x86_64 x86_64 x86_64 GNU/Linux
>> dcs-xen-53:~/xen-4.4.0-rc2>cat /etc/redhat-release
>> CentOS release 5.10 (Final)
>> dcs-xen-53:~/xen-4.4.0-rc2>gcc -v
>> Using built-in specs.
>> Target: x86_64-redhat-linux
>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
>> --infodir=/usr/share/info --enable-shared --enable-threads=posix
>> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
>> --disable-libunwind-exceptions --enable-libgcj-multifile
>> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
>> --enable-java-awt=gtk --disable-dssi --disable-plugin
>> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
>> --with-cpu=generic --host=x86_64-redhat-linux
>> Thread model: posix
>> gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)
> 
> I have also just encountered this build error, but am currently on the
> fence as to whether it is a compiler bug in 4.1.2 or bad code.  Using
> newer compilers causes the complaint to go away.  I would certainly like
> to hope that compat_handle_ok() is correct, and does appear to be
> correct from code inspection.
> 
> The if statement becomes gigantic after preprocessing, and I ran out of
> effort today to sanitise the preprocessed output and check it for
> correctness.  (At the very least, it would be kind to the compiler to
> factor out the paging_mode_external(current->domain) check and degrade
> the compat_handle_okay()s to compat_array_access_ok())

It's not that bad; breaking the if() up a little got me to quickly
see that this is due to struct xen_add_to_physmap_batch's
size field being uint16_t. Using a separate local variable to
latch the structure value makes the problem go away. The
warning isn't really a compiler bug, but also not very useful.

The question now is: should we replace the checks with
BUILD_BUG_ON()s (I wouldn't want to drop them altogether),
or suppress the warning via an intermediate variable?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:44:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3N1p-0005py-Jn; Wed, 15 Jan 2014 09:44:53 +0000
From xen-devel-bounces@lists.xen.org Wed Jan 15 09:44:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3N1p-0005py-Jn; Wed, 15 Jan 2014 09:44:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3N1n-0005p1-Sa
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:44:52 +0000
Received: from [85.158.139.211:12272] by server-6.bemta-5.messagelabs.com id
	F0/92-16310-17856D25; Wed, 15 Jan 2014 09:44:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389779056!9861974!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28030 invoked from network); 15 Jan 2014 09:44:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:44:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="93019160"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 09:44:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:44:15 -0500
Message-ID: <1389779054.12434.126.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 09:44:14 +0000
In-Reply-To: <52D5A4E3.1080405@citrix.com>
References: <A73456DB-64C7-4E4B-B681-9DFA12281638@gmail.com>
	<52D5A230.3010005@citrix.com>
	<EE36B3F3-39A2-484E-8869-AF51C72DFBD9@gmail.com>
	<52D5A4E3.1080405@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Igor Kozhukhov <ikozhukhov@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ctfconvert problems with build on illumos based
 platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 20:58 +0000, Andrew Cooper wrote:

> The content of this structure is only relevant as far as
> hvm_{save,load}_cpu_xsave_states() goes, which resorts to some
> pointer-trickery to move the data.  Changing that char x[0] to char
> x[1] will break the pointer trickery, and migration will suffer an ABI
> breakage.

The pointer-trickery will need updating at the same time then; that's
not rocket science, and a single-element char array is the normal
portable way to do these sorts of variable-length suffixes to structs.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:45:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:45:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3N2M-0005tH-Ut; Wed, 15 Jan 2014 09:45:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W3N20-0005sB-Mc
	for xen-devel@lists.xensource.com; Wed, 15 Jan 2014 09:45:19 +0000
Received: from [193.109.254.147:20444] by server-9.bemta-14.messagelabs.com id
	95/EA-13957-0A856D25; Wed, 15 Jan 2014 09:45:04 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389779102!10985968!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27379 invoked from network); 15 Jan 2014 09:45:03 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-27.messagelabs.com with SMTP;
	15 Jan 2014 09:45:03 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0F9htXT024427
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 04:43:56 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-63.ams2.redhat.com
	[10.36.112.63])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0F9hoeD005940; Wed, 15 Jan 2014 04:43:51 -0500
Message-ID: <52D65856.6050901@redhat.com>
Date: Wed, 15 Jan 2014 10:43:50 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130923 Thunderbird/17.0.9
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, 1257099@bugs.launchpad.net,
	Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 03/01/2014 03:12, Don Slutz ha scritto:
> Adjust TMPO and add TMPB, TMPL, and TMPA.  libtool needs the names
> to be fixed (TMPB).
> 
> Add new functions do_libtool and libtool_prog.
> 
> Add check for broken gcc and libtool.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
> Was posted as an attachment.
> 
> https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html
> 
>  configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 62 insertions(+), 1 deletion(-)
> 
> diff --git a/configure b/configure
> index edfea95..852d021 100755
> --- a/configure
> +++ b/configure
> @@ -12,7 +12,10 @@ else
>  fi
>  
>  TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
> -TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
> +TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
> +TMPO="${TMPDIR1}/${TMPB}.o"
> +TMPL="${TMPDIR1}/${TMPB}.lo"
> +TMPA="${TMPDIR1}/lib${TMPB}.la"
>  TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
>  
>  # NB: do not call "exit" in the trap handler; this is buggy with some shells;
> @@ -86,6 +89,38 @@ compile_prog() {
>    do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
>  }
>  
> +do_libtool() {
> +    local mode=$1
> +    shift
> +    # Run the compiler, capturing its output to the log.
> +    echo $libtool $mode --tag=CC $cc "$@" >> config.log
> +    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
> +    # Test passed. If this is an --enable-werror build, rerun
> +    # the test with -Werror and bail out if it fails. This
> +    # makes warning-generating-errors in configure test code
> +    # obvious to developers.
> +    if test "$werror" != "yes"; then
> +        return 0
> +    fi
> +    # Don't bother rerunning the compile if we were already using -Werror
> +    case "$*" in
> +        *-Werror*)
> +           return 0
> +        ;;
> +    esac
> +    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
> +    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
> +    error_exit "configure test passed without -Werror but failed with -Werror." \
> +        "This is probably a bug in the configure script. The failing command" \
> +        "will be at the bottom of config.log." \
> +        "You can run configure with --disable-werror to bypass this check."
> +}
> +
> +libtool_prog() {
> +    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
> +    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
> +}
> +
>  # symbolically link $1 to $2.  Portable version of "ln -sf".
>  symlink() {
>    rm -rf "$2"
> @@ -1367,6 +1402,32 @@ EOF
>    fi
>  fi
>  
> +# check for broken gcc and libtool in RHEL5
> +if test -n "$libtool" -a "$pie" != "no" ; then
> +  cat > $TMPC <<EOF
> +
> +void *f(unsigned char *buf, int len);
> +void *g(unsigned char *buf, int len);
> +
> +void *
> +f(unsigned char *buf, int len)
> +{
> +    return (void*)0L;
> +}
> +
> +void *
> +g(unsigned char *buf, int len)
> +{
> +    return f(buf, len);
> +}
> +
> +EOF
> +  if ! libtool_prog; then
> +    echo "Disabling libtool due to broken toolchain support"
> +    libtool=
> +  fi
> +fi
> +
>  ##########################################
>  # __sync_fetch_and_and requires at least -march=i486. Many toolchains
>  # use i686 as default anyway, but for those that don't, an explicit
> 

I'm applying this to a "configure" branch on my github repository.  Thanks!

Paolo
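The check the patch adds follows a common configure idiom: run each toolchain step, append everything to a log, and quietly disable the feature when a step fails. Here is a minimal standalone sketch of that shape; the function and variable names are stand-ins for the patch's do_libtool/libtool_prog helpers, and the compile/link stages are stubbed with true/false so the fall-back path runs deterministically:

```shell
# Sketch of the probe-and-fall-back idiom used by the patch above.
# In configure the stages invoke $libtool --mode=compile / --mode=link
# on a real test program; here they are stubbed.

probe_log="probe.log"

# Run a command, logging the command line and its output, and
# propagate its exit status (the role do_libtool plays in the patch).
run_logged() {
    echo "$@" >> "$probe_log"
    "$@" >> "$probe_log" 2>&1
}

# Two-stage probe, stopping at the first failure (the role of
# libtool_prog).  The link stage is stubbed to fail.
toolchain_ok() {
    run_logged true || return $?    # stand-in for the compile stage
    run_logged false                # stand-in for the link stage
}

libtool="libtool"
if ! toolchain_ok; then
    echo "Disabling libtool due to broken toolchain support"
    libtool=
fi
rm -f "$probe_log"
```

The logging mirrors the patch's design choice: every attempted command line and its diagnostics end up in config.log, so a user whose toolchain silently loses libtool support can see exactly which invocation failed.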

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:53:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3N9x-0006cW-AP; Wed, 15 Jan 2014 09:53:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3N9v-0006cQ-PW
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:53:16 +0000
Received: from [85.158.137.68:29832] by server-10.bemta-3.messagelabs.com id
	42/C2-23989-A8A56D25; Wed, 15 Jan 2014 09:53:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389779592!8068920!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7830 invoked from network); 15 Jan 2014 09:53:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:53:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="93020678"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 09:53:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:53:12 -0500
Message-ID: <1389779590.12434.131.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 09:53:10 +0000
In-Reply-To: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Jan
	Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
> 
> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
> 
> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
> unsigned int to uint16_t, which changes the space switch()'d upon.
> 
> This wouldn't be noticed with any upstream code (of which I am aware), but was
> discovered because of the XenServer support for legacy Windows PV drivers,
> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
> support running VMs with out-of-date tools.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> 
> ---
> 
> As this breakage was caused between 4.4-rc1 and -rc2,

That's certainly a good indicator, but you've not covered the actual
risks and rewards of making this change now:
http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze

Please can you do so.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 09:57:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NDl-0006je-Vq; Wed, 15 Jan 2014 09:57:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3NDk-0006jZ-9v
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:57:12 +0000
Received: from [85.158.139.211:27999] by server-3.bemta-5.messagelabs.com id
	94/72-04773-77B56D25; Wed, 15 Jan 2014 09:57:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389779829!7146118!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7421 invoked from network); 15 Jan 2014 09:57:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:57:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="90900358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 09:57:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:57:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3NDg-0006By-6J;
	Wed, 15 Jan 2014 09:57:08 +0000
Message-ID: <52D65B74.9070604@citrix.com>
Date: Wed, 15 Jan 2014 09:57:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
From xen-devel-bounces@lists.xen.org Wed Jan 15 09:57:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 09:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NDl-0006je-Vq; Wed, 15 Jan 2014 09:57:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3NDk-0006jZ-9v
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 09:57:12 +0000
Received: from [85.158.139.211:27999] by server-3.bemta-5.messagelabs.com id
	94/72-04773-77B56D25; Wed, 15 Jan 2014 09:57:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389779829!7146118!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7421 invoked from network); 15 Jan 2014 09:57:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 09:57:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="90900358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 09:57:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 04:57:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3NDg-0006By-6J;
	Wed, 15 Jan 2014 09:57:08 +0000
Message-ID: <52D65B74.9070604@citrix.com>
Date: Wed, 15 Jan 2014 09:57:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
	<1389779590.12434.131.camel@kazak.uk.xensource.com>
In-Reply-To: <1389779590.12434.131.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
	XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 09:53, Ian Campbell wrote:
> On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
>>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
>>
>> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
>> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
>>
>> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
>> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
>> unsigned int to uint16_t, which changes the space switch()'d upon.
>>
>> This wouldn't be noticed with any upstream code (of which I am aware), but was
>> discovered because of the XenServer support for legacy Windows PV drivers,
>> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
>> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
>> support running VMs with out-of-date tools.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>
>> ---
>>
>> As this breakage was caused between 4.4-rc1 and -rc2,
> That's certainly a good indicator, but you've not covered the actual
> risks and rewards of making this change now:
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
>
> Please can you do so.
>
>

Contributes towards #1 "Bug-free release"

Risks:
 * We now know we have an ABI regression
 * It is a fairly obvious fix which is unlikely to have hidden issues
itself.

Rewards:
 * We keep the hypervisor ABI compatible with Xen 4.3

Alternatives:
 * Revert the patch which introduced the regression, but that is very
undesirable as it was fixing another long-running Xen operation, and
common-ifying some code between x86 and arm

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:04:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:04:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NKN-0007J6-22; Wed, 15 Jan 2014 10:04:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3NKJ-0007J1-QL
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:03:59 +0000
Received: from [85.158.143.35:19347] by server-1.bemta-4.messagelabs.com id
	69/6C-02132-F0D56D25; Wed, 15 Jan 2014 10:03:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389780237!11854698!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10816 invoked from network); 15 Jan 2014 10:03:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:03:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="90901887"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:03:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:03:55 -0500
Message-ID: <1389780234.12434.139.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Jan 2014 10:03:54 +0000
In-Reply-To: <52D662760200007800113B5A@nat28.tlf.novell.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
> According to our own documentation, even Python 2.3 is supported
> for building,

It might be time to consider revving that. Current state of some distros:

Debian Squeeze (oldstable): 2.6
Debian Wheezy (stable): 2.7
RHEL 5.9: 2.4
RHEL 4.8: 2.3
SLES 11SP3: 2.6
SLES 10SP4: 2.4

So by bumping to a minimum version of 2.4 we would be dropping support
for RHEL4 dom0 userspace. Personally I think that would be acceptable;
it's now RHEL's old-old-stable (unless RHEL7 happened while I wasn't
paying attention, which is possible).

Bumping further than 2.4 would mean dropping RHEL5 and SLES10, neither
of which seem desirable to drop unnecessarily. (If there were known
issues with 2.4 then I would be inclined to re-evaluate that though)

>  yet the qemu-upstream version update results in that
> part of the build no longer working with Python 2.4 due to a number
> of conditional assignments like this
> 
>     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
> 
> in scripts/qapi-visit.py.

This is consistent with their configure script which checks for:
        # Note that if the Python conditional here evaluates True we will exit
        # with status 1 which is a shell 'false' value.
        if ! $python -c 'import sys; sys.exit(sys.version_info < (2,4) or sys.version_info >= (3,))'; then
          error_exit "Cannot use '$python', Python 2.4 or later is required." \
              "Note that Python 3 or later is not yet supported." \
              "Use --python=/path/to/python to specify a supported Python."
        fi

Did you hit that too?
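For reference, the incompatibility is purely syntactic: conditional expressions (`x if cond else y`) were only added in Python 2.5, so the 2.4 parser rejects them outright. A minimal sketch of the equivalent 2.4-safe rewrite (the function wrapper is illustrative, not from the qemu script itself):

```python
def full_name_compat(name, fn_prefix):
    # Python 2.5+ one-liner, a SyntaxError on 2.4:
    #     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
    # 2.4-compatible equivalent:
    if not fn_prefix:
        full_name = name
    else:
        full_name = "%s_%s" % (name, fn_prefix)
    return full_name

print(full_name_compat("visit", ""))       # -> visit
print(full_name_compat("visit", "Union"))  # -> visit_Union
```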

> Converting these instances to proper conditionals fixes the issue
> for me.
> 
> I further found that with some gcc versions trace/control-internal.h
> causes a huge amount of warnings to be generated. While this isn't
> keeping the build from succeeding, it's still rather annoying.

Ancient versions of gcc I presume?

> The combined full fix for both issues is below. But I'm uncertain
> whether sending these to qemu-devel would make any sense, as
> I have no idea whether such old build environments are being
> cared about there.

I don't know about that, but at least the Python ones do not look like
improvements in their own right.

> 
> Jan
> 
> --- a/scripts/qapi-visit.py
> +++ b/scripts/qapi-visit.py
> @@ -20,7 +20,10 @@ import errno
>  def generate_visit_struct_fields(name, field_prefix, fn_prefix, members):
>      substructs = []
>      ret = ''
> -    full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
> +    if not fn_prefix:
> +        full_name = name
> +    else:
> +        full_name = "%s_%s" % (name, fn_prefix)
>  
>      for argname, argentry, optional, structured in parse_args(members):
>          if structured:
> @@ -84,7 +87,10 @@ if (!error_is_set(errp)) {
>  ''')
>      push_indent()
>  
> -    full_name = name if not field_prefix else "%s_%s" % (field_prefix, name)
> +    if not field_prefix:
> +        full_name = name
> +    else:
> +        full_name = "%s_%s" % (field_prefix, name)
>  
>      if len(field_prefix):
>          ret += mcgen('''
> @@ -226,6 +232,10 @@ def generate_visit_union(expr):
>  
>      base = expr.get('base')
>      discriminator = expr.get('discriminator')
> +    if not discriminator:
> +        type="type"
> +    else:
> +        type=discriminator
>  
>      if discriminator == {}:
>          assert not base
> @@ -270,7 +280,7 @@ void visit_type_%(name)s(Visitor *m, %(n
>          if (!err) {
>              switch ((*obj)->kind) {
>  ''',
> -                 name=name, type="type" if not discriminator else discriminator)
> +                 name=name, type=type)
>  
>      for key in members:
>          if not discriminator:
> --- a/trace/control-internal.h
> +++ b/trace/control-internal.h
> @@ -16,15 +16,15 @@
>  extern TraceEvent trace_events[];
>  
> 
> -static inline TraceEvent *trace_event_id(TraceEventID id)
> +static inline TraceEventID trace_event_count(void)
>  {
> -    assert(id < trace_event_count());
> -    return &trace_events[id];
> +    return TRACE_EVENT_COUNT;
>  }
>  
> -static inline TraceEventID trace_event_count(void)
> +static inline TraceEvent *trace_event_id(TraceEventID id)
>  {
> -    return TRACE_EVENT_COUNT;
> +    assert(id < trace_event_count());
> +    return &trace_events[id];
>  }
>  
>  static inline bool trace_event_is_pattern(const char *str)
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:07:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NO0-0007Qk-Nf; Wed, 15 Jan 2014 10:07:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3NNz-0007Qc-CB
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:07:47 +0000
Received: from [193.109.254.147:11645] by server-3.bemta-14.messagelabs.com id
	F8/92-11000-2FD56D25; Wed, 15 Jan 2014 10:07:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389780464!10917534!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18322 invoked from network); 15 Jan 2014 10:07:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:07:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,661,1384300800"; d="scan'208";a="93023536"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 10:07:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:07:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3NNv-0006LH-LL;
	Wed, 15 Jan 2014 10:07:43 +0000
Date: Wed, 15 Jan 2014 10:07:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Annie Li <Annie.li@oracle.com>
Message-ID: <20140115100743.GG5698@zion.uk.xensource.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
> Current netfront only grants pages for grant copy, not for grant transfer, so
> remove corresponding transfer code and add receiving copy code in
> xennet_release_rx_bufs.
> 

This path seldom gets called -- not that many people unload the
xen-netfront driver. If Annie has tested this patch and it works as
expected, I think it's fine.

I'm not the netfront maintainer, but I'm happy to add
Acked-by: Wei Liu <wei.liu2@citrix.com>
if Annie confirms she's tested this patch.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:12:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NSp-0007vr-G8; Wed, 15 Jan 2014 10:12:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3NSo-0007vk-3T
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:12:46 +0000
Received: from [85.158.137.68:2669] by server-15.bemta-3.messagelabs.com id
	57/80-11556-D1F56D25; Wed, 15 Jan 2014 10:12:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389780764!9199365!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25308 invoked from network); 15 Jan 2014 10:12:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 10:12:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 10:12:43 +0000
Message-Id: <52D66D2B0200007800113BC9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 10:12:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>,
	"Jan Beulich" <JBeulich@suse.com>,"Don Slutz" <dslutz@verizon.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
	<52D5BF10.2050702@terremark.com> <52D5D1C3.3020601@citrix.com>
	<52D666950200007800113B71@nat28.tlf.novell.com>
In-Reply-To: <52D666950200007800113B71@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 10:44, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 15.01.14 at 01:09, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 14/01/2014 22:49, Don Slutz wrote:
>>> On 01/14/14 11:29, Ian Campbell wrote:
>>>> We've just tagged 4.4.0-rc2, please test and report bugs.
>>>>
>>>> The tarball can be downloaded here:
>>>>
>>>> http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz 
>>>>
>>>> Ian.
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org 
>>>> http://lists.xen.org/xen-devel 
>>>
>>> This tarball does not build on CentOS 5.10:
>>>
>>>
>>> gcc -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing
>>> -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement
>>> -I/home/don/xen-4.4.0-rc2/xen/include
>>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-generic
>>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-default
>>> -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs
>>> -DHAVE_GAS_VMX -mno-red-zone -mno-sse -fpic
>>> -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE
>>> -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith
>>> -pipe -g -D__XEN__ -include
>>> /home/don/xen-4.4.0-rc2/xen/include/xen/config.h -nostdinc
>>> -iwithprefix include -fno-optimize-sibling-calls -DVERBOSE -DHAS_ACPI
>>> -DHAS_GDBSX -DHAS_PASSTHROUGH -DHAS_PCI -DHAS_IOPORTS
>>> -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER -MMD -MF .memory.o.d -c
>>> memory.c -o memory.o
>>> cc1: warnings being treated as errors
>>> memory.c: In function 'compat_memory_op':
>>> memory.c:213: warning: comparison is always true due to limited range
>>> of data type
>>> memory.c:214: warning: comparison is always true due to limited range
>>> of data type
>>> memory.c:215: warning: comparison is always true due to limited range
>>> of data type
>>> make[5]: *** [memory.o] Error 1
>>> make[5]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common/compat'
>>> make[4]: *** [compat/built_in.o] Error 2
>>> make[4]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common'
>>> make[3]: *** [/home/don/xen-4.4.0-rc2/xen/common/built_in.o] Error 2
>>> make[3]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/arch/x86'
>>> make[2]: *** [/home/don/xen-4.4.0-rc2/xen/xen] Error 2
>>> make[2]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>>> make[1]: *** [install] Error 2
>>> make[1]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>>> make: *** [install-xen] Error 2
>>> dcs-xen-53:~/xen-4.4.0-rc2>uname -a
>>> Linux dcs-xen-53 2.6.18-371.el5xen #1 SMP Tue Oct 1 09:15:30 EDT 2013
>>> x86_64 x86_64 x86_64 GNU/Linux
>>> dcs-xen-53:~/xen-4.4.0-rc2>cat /etc/redhat-release
>>> CentOS release 5.10 (Final)
>>> dcs-xen-53:~/xen-4.4.0-rc2>gcc -v
>>> Using built-in specs.
>>> Target: x86_64-redhat-linux
>>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
>>> --infodir=/usr/share/info --enable-shared --enable-threads=posix
>>> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
>>> --disable-libunwind-exceptions --enable-libgcj-multifile
>>> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
>>> --enable-java-awt=gtk --disable-dssi --disable-plugin
>>> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
>>> --with-cpu=generic --host=x86_64-redhat-linux
>>> Thread model: posix
>>> gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)
>> 
>> I have also just encountered this build error, but am currently on the
>> fence as to whether it is a compiler bug in 4.1.2 or bad code.  Using
>> newer compilers causes the complaint to go away.  I would certainly like
>> to hope that compat_handle_okay() is correct, and does appear to be
>> correct from code inspection.
>> 
>> The if statement becomes gigantic after preprocessing, and I ran out of
>> effort today to sanitise the preprocessed output and check it for
>> correctness.  (At the very least, it would be kind to the compiler to
>> factor out the paging_mode_external(current->domain) check and degrade
>> the compat_handle_okay()s to compat_array_access_ok())
> 
> It's not that bad; breaking the if() up a little got me to quickly
> see that this is due to struct xen_add_to_physmap_batch's
> size field being uint16_t. Using a separate local variable to
> latch the structure value makes the problem go away. The
> warning isn't really a compiler bug, but also not very useful.
> 
> Question now is: Should we replace the checks with
> BUILD_BUG_ON()s (I wouldn't want to drop them altogether)
> or suppress the warning via intermediate variable?

And looking at the implications I'm much in favor of using
an intermediate variable (see below).

Jan

compat/memory: fix build with old gcc

struct xen_add_to_physmap_batch's size field being uint16_t causes old
compiler versions to warn about the pointless range check done inside
compat_handle_okay().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
         {
             unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
                                  / (sizeof(nat.atpb->idxs.p) + sizeof(nat.atpb->gpfns.p));
+            /* Use an intermediate variable to suppress warnings on old gcc: */
+            unsigned int size = cmp.atpb.size;
             xen_ulong_t *idxs = (void *)(nat.atpb + 1);
             xen_pfn_t *gpfns = (void *)(idxs + limit);
 
             if ( copy_from_guest(&cmp.atpb, compat, 1) ||
-                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
+                 !compat_handle_okay(cmp.atpb.idxs, size) ||
+                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
+                 !compat_handle_okay(cmp.atpb.errs, size) )
                 return -EFAULT;
 
             end_extent = start_extent + limit;
-            if ( end_extent > cmp.atpb.size )
-                end_extent = cmp.atpb.size;
+            if ( end_extent > size )
+                end_extent = size;
 
             idxs -= start_extent;
             gpfns -= start_extent;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:22:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nbb-0008T3-S2; Wed, 15 Jan 2014 10:21:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3Nba-0008Sy-62
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:21:50 +0000
Received: from [85.158.137.68:43101] by server-16.bemta-3.messagelabs.com id
	F9/23-26128-C3166D25; Wed, 15 Jan 2014 10:21:48 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389781307!9202110!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7613 invoked from network); 15 Jan 2014 10:21:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 10:21:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 10:21:47 +0000
Message-Id: <52D66F4C0200007800113BDB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 10:21:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
In-Reply-To: <1389780234.12434.139.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
>> According to our own documentation, even Python 2.3 is supported
>> for building,
> 
> It might be time to consider revving that.

Post 4.4 perhaps, but I don't think now is the right time to consider
anything like that. IMO we ought to fix the build instead.

> Current state of some distros:
> 
> Debian Squeeze (oldstable): 2.6
> Debian Wheezy (stable): 2.7
> RHEL 5.9: 2.4
> RHEL 4.8: 2.3
> SLES 11SP3: 2.6
> SLES 10SP4: 2.4
> 
> So by bumping to a minimum version of 2.4 we would be dropping support
> for RHEL4 dom0 userspace -- Personally I think that would be acceptable,
> it's now RHEL's old-old-stable (unless RHEL7 happened while I wasn't
> paying attention, which is possible).
> 
> Bumping further than 2.4 would mean dropping RHEL5 and SLES10, neither
> of which seem desirable to drop unnecessarily. (If there were known
> issues with 2.4 then I would be inclined to re-evaluate that though)

As said - I encountered the issue here with 2.4 (on SLE10).

And yes, bumping the requirement beyond 2.4 would require
me to find solutions for how to build on SLE10 (which continues
to be the lowest common denominator here to allow building
and testing of all code streams I currently need to support, i.e.
I'm keeping one system around for 32- and 64-bit each, and it
so happens that one of those is where I do most of my work).

>>  yet the qemu-upstream version update results in that
>> part of the build no longer working with Python 2.4 due to a number
>> of conditional assignments like this
>> 
>>     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
>> 
>> in scripts/qapi-visit.py.
> 
> This is consistent with their configure script which checks for:
>         # Note that if the Python conditional here evaluates True we will 
> exit
>         # with status 1 which is a shell 'false' value.
>         if ! $python -c 'import sys; sys.exit(sys.version_info < (2,4) or 
> sys.version_info >= (3,))'; then
>           error_exit "Cannot use '$python', Python 2.4 or later is 
> required." \
>               "Note that Python 3 or later is not yet supported." \
>               "Use --python=/path/to/python to specify a supported Python."
>         fi
> 
> Did you hit that too?

No, I didn't (or didn't notice, which would mean the failure here
doesn't get propagated correctly) - it was the actual build that
was failing.
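
The quoted configure check works by mapping a Python boolean onto the process
exit status (True exits 1, i.e. shell false). A minimal standalone sketch of
the same version gate, as a plain function rather than qemu's actual configure
code (the function name is mine, for illustration only):

```python
def version_supported(version_info):
    # Mirrors the quoted gate: accept 2.4 or later, reject Python 3+.
    # Tuple comparison is element-wise, so (2, 4, 6) >= (2, 4) holds,
    # and (3, 0, 0) >= (3,) holds because the longer tuple wins on a tie.
    return not (version_info < (2, 4) or version_info >= (3,))

assert version_supported((2, 4, 6))      # e.g. RHEL 5 / SLES 10
assert version_supported((2, 7, 0))      # e.g. Debian Wheezy
assert not version_supported((2, 3, 4))  # e.g. RHEL 4
assert not version_supported((3, 0, 0))  # Python 3 rejected by the gate
```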

>> Converting these instances to proper conditionals fixes the issue
>> for me.
>> 
>> I further found that with some gcc versions trace/control-internal.h
>> causes a huge amount of warnings to be generated. While this isn't
>> keeping the build from succeeding, it's still rather annoying.
> 
> Ancient versions of gcc I presume?

Yes, 4.1.x.

Jan
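
For illustration, the kind of conversion described above: the Python 2.5+
conditional expression from scripts/qapi-visit.py rewritten as a plain
if/else statement that Python 2.4 also accepts. This is a hypothetical
sketch wrapped in a helper function of my own naming, not the actual patch:

```python
def full_name_of(name, fn_prefix):
    # Python 2.5+ only (fails to parse under 2.4):
    #   full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
    # Equivalent statement form accepted by Python 2.4 as well:
    if not fn_prefix:
        full_name = name
    else:
        full_name = "%s_%s" % (name, fn_prefix)
    return full_name

assert full_name_of("visit", "") == "visit"
assert full_name_of("visit", "qapi") == "visit_qapi"
```

Both forms compute the same value; only the newer syntax trips the 2.4 parser.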


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:31:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nku-0000Vt-Uc; Wed, 15 Jan 2014 10:31:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3Nkt-0000Vo-7V
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:31:27 +0000
Received: from [85.158.143.35:62859] by server-1.bemta-4.messagelabs.com id
	EE/D2-02132-E7366D25; Wed, 15 Jan 2014 10:31:26 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389781878!11848465!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13324 invoked from network); 15 Jan 2014 10:31:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:31:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90909204"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:31:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:31:17 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3Nkj-0006fD-9N;
	Wed, 15 Jan 2014 10:31:17 +0000
Message-ID: <52D66375.9010508@citrix.com>
Date: Wed, 15 Jan 2014 10:31:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
	<52D5BF10.2050702@terremark.com> <52D5D1C3.3020601@citrix.com>
From xen-devel-bounces@lists.xen.org Wed Jan 15 10:31:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nku-0000Vt-Uc; Wed, 15 Jan 2014 10:31:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3Nkt-0000Vo-7V
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:31:27 +0000
Received: from [85.158.143.35:62859] by server-1.bemta-4.messagelabs.com id
	EE/D2-02132-E7366D25; Wed, 15 Jan 2014 10:31:26 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389781878!11848465!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13324 invoked from network); 15 Jan 2014 10:31:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:31:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90909204"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:31:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:31:17 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3Nkj-0006fD-9N;
	Wed, 15 Jan 2014 10:31:17 +0000
Message-ID: <52D66375.9010508@citrix.com>
Date: Wed, 15 Jan 2014 10:31:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389716996.12434.105.camel@kazak.uk.xensource.com>
	<52D5BF10.2050702@terremark.com> <52D5D1C3.3020601@citrix.com>
	<52D666950200007800113B71@nat28.tlf.novell.com>
	<52D66D2B0200007800113BC9@nat28.tlf.novell.com>
In-Reply-To: <52D66D2B0200007800113BC9@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] 4.4.0-rc2 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 10:12, Jan Beulich wrote:
>>>> On 15.01.14 at 10:44, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> On 15.01.14 at 01:09, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 14/01/2014 22:49, Don Slutz wrote:
>>>> On 01/14/14 11:29, Ian Campbell wrote:
>>>>> We've just tagged 4.4.0-rc2, please test and report bugs.
>>>>>
>>>>> The tarball can be downloaded here:
>>>>>
>>>>> http://bits.xensource.com/oss-xen/release/4.4.0-rc2/xen-4.4.0-rc2.tar.gz 
>>>>>
>>>>> Ian.
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Xen-devel mailing list
>>>>> Xen-devel@lists.xen.org 
>>>>> http://lists.xen.org/xen-devel 
>>>> This tarball does not build on CentOS 5.10:
>>>>
>>>>
>>>> gcc -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing
>>>> -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement
>>>> -I/home/don/xen-4.4.0-rc2/xen/include
>>>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-generic
>>>> -I/home/don/xen-4.4.0-rc2/xen/include/asm-x86/mach-default
>>>> -msoft-float -fno-stack-protector -fno-exceptions -Wnested-externs
>>>> -DHAVE_GAS_VMX -mno-red-zone -mno-sse -fpic
>>>> -fno-asynchronous-unwind-tables -DGCC_HAS_VISIBILITY_ATTRIBUTE
>>>> -fno-builtin -fno-common -Werror -Wredundant-decls -Wno-pointer-arith
>>>> -pipe -g -D__XEN__ -include
>>>> /home/don/xen-4.4.0-rc2/xen/include/xen/config.h -nostdinc
>>>> -iwithprefix include -fno-optimize-sibling-calls -DVERBOSE -DHAS_ACPI
>>>> -DHAS_GDBSX -DHAS_PASSTHROUGH -DHAS_PCI -DHAS_IOPORTS
>>>> -fno-omit-frame-pointer -DCONFIG_FRAME_POINTER -MMD -MF .memory.o.d -c
>>>> memory.c -o memory.o
>>>> cc1: warnings being treated as errors
>>>> memory.c: In function 'compat_memory_op':
>>>> memory.c:213: warning: comparison is always true due to limited range
>>>> of data type
>>>> memory.c:214: warning: comparison is always true due to limited range
>>>> of data type
>>>> memory.c:215: warning: comparison is always true due to limited range
>>>> of data type
>>>> make[5]: *** [memory.o] Error 1
>>>> make[5]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common/compat'
>>>> make[4]: *** [compat/built_in.o] Error 2
>>>> make[4]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/common'
>>>> make[3]: *** [/home/don/xen-4.4.0-rc2/xen/common/built_in.o] Error 2
>>>> make[3]: Leaving directory `/home/don/xen-4.4.0-rc2/xen/arch/x86'
>>>> make[2]: *** [/home/don/xen-4.4.0-rc2/xen/xen] Error 2
>>>> make[2]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>>>> make[1]: *** [install] Error 2
>>>> make[1]: Leaving directory `/home/don/xen-4.4.0-rc2/xen'
>>>> make: *** [install-xen] Error 2
>>>> dcs-xen-53:~/xen-4.4.0-rc2>uname -a
>>>> Linux dcs-xen-53 2.6.18-371.el5xen #1 SMP Tue Oct 1 09:15:30 EDT 2013
>>>> x86_64 x86_64 x86_64 GNU/Linux
>>>> dcs-xen-53:~/xen-4.4.0-rc2>cat /etc/redhat-release
>>>> CentOS release 5.10 (Final)
>>>> dcs-xen-53:~/xen-4.4.0-rc2>gcc -v
>>>> Using built-in specs.
>>>> Target: x86_64-redhat-linux
>>>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
>>>> --infodir=/usr/share/info --enable-shared --enable-threads=posix
>>>> --enable-checking=release --with-system-zlib --enable-__cxa_atexit
>>>> --disable-libunwind-exceptions --enable-libgcj-multifile
>>>> --enable-languages=c,c++,objc,obj-c++,java,fortran,ada
>>>> --enable-java-awt=gtk --disable-dssi --disable-plugin
>>>> --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
>>>> --with-cpu=generic --host=x86_64-redhat-linux
>>>> Thread model: posix
>>>> gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)
>>> I have also just encountered this build error, but am currently on the
>>> fence as to whether it is a compiler bug in 4.1.2 or bad code.  Using
>>> newer compilers causes the complaint to go away.  I would certainly like
>>> to hope that compat_handle_okay() is correct, and does appear to be
>>> correct from code inspection.
>>>
>>> The if statement becomes gigantic after preprocessing, and I ran out of
>>> effort today to sanitise the preprocessed output and check it for
>>> correctness.  (At the very least, it would be kind to the compiler to
>>> factor out the paging_mode_external(current->domain) check and degrade
>>> the compat_handle_okay()s to compat_array_access_ok())
>> It's not that bad; breaking the if() up a little got me to quickly
>> see that this is due to struct xen_add_to_physmap_batch's
>> size field being uint16_t. Using a separate local variable to
>> latch the structure value makes the problem go away. The
>> warning isn't really a compiler bug, but also not very useful.
>>
>> Question now is: Should we replace the checks with
>> BUILD_BUG_ON()s (I wouldn't want to drop them altogether)
>> or suppress the warning via intermediate variable?
> And looking at the implications I'm much in favor of using
> an intermediate variable (see below).
>
> Jan
>
> compat/memory: fix build with old gcc
>
> struct xen_add_to_physmap_batch's size field being uint16_t causes old
> compiler versions to warn about the pointless range check done inside
> compat_handle_okay().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
>          {
>              unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
>                                   / (sizeof(nat.atpb->idxs.p) + sizeof(nat.atpb->gpfns.p));
> +            /* Use an intermediate variable to suppress warnings on old gcc: */
> +            unsigned int size = cmp.atpb.size;
>              xen_ulong_t *idxs = (void *)(nat.atpb + 1);
>              xen_pfn_t *gpfns = (void *)(idxs + limit);
>  
>              if ( copy_from_guest(&cmp.atpb, compat, 1) ||
> -                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
> +                 !compat_handle_okay(cmp.atpb.idxs, size) ||
> +                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
> +                 !compat_handle_okay(cmp.atpb.errs, size) )
>                  return -EFAULT;
>  
>              end_extent = start_extent + limit;
> -            if ( end_extent > cmp.atpb.size )
> -                end_extent = cmp.atpb.size;
> +            if ( end_extent > size )
> +                end_extent = size;
>  
>              idxs -= start_extent;
>              gpfns -= start_extent;
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:35:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3NoS-0000dh-Lw; Wed, 15 Jan 2014 10:35:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3NoR-0000dU-Bm
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:35:07 +0000
Received: from [85.158.143.35:4688] by server-1.bemta-4.messagelabs.com id
	4A/BA-02132-A5466D25; Wed, 15 Jan 2014 10:35:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389782104!14150!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24414 invoked from network); 15 Jan 2014 10:35:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:35:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93031061"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 10:35:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:35:03 -0500
Message-ID: <1389782102.12434.163.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 10:35:02 +0000
In-Reply-To: <52D65B74.9070604@citrix.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
	<1389779590.12434.131.camel@kazak.uk.xensource.com>
	<52D65B74.9070604@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan
	Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 09:57 +0000, Andrew Cooper wrote:
> On 15/01/14 09:53, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
> >>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
> >>
> >> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
> >> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
> >>
> >> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
> >> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
> >> unsigned int to uint16_t, which changes the space switch()'d upon.
> >>
> >> This wouldn't be noticed with any upstream code (of which I am aware), but was
> >> discovered because of the XenServer support for legacy Windows PV drivers,
> >> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
> >> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
> >> support running VMs with out-of-date tools.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> CC: Keir Fraser <keir@xen.org>
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Ian Campbell <Ian.Campbell@citrix.com>
> >>
> >> ---
> >>
> >> As this breakage was caused between 4.4-rc1 and -rc2,
> > That's certainly a good indicator, but you've not covered the actual
> > risks and rewards of making this change now:
> > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> >
> > Please can you do so.
> >
> >
> 
> Contributes towards #1 "Bug-free release"
> 
> Risks:
>  * We now know we have an ABI regression
>  * It is a fairly obvious fix which is unlikely to have hidden issues
> itself.
> 
> Rewards:
>  * We keep the hypervisor ABI compatible with Xen 4.3

IMHO it already is -- the 4.4 ABI is not broken because the truncated
bits are not used in the Xen ABI, 4.4 accepts everything which 4.3 does.
We still very much have the option of deferring this change to 4.5
and/or when the bits become used, with no risk to the Xen 4.4 release.

Please try and consider the guidelines exceptions with an unbiased eye,
rather than just as a mechanism for reconfirming your existing belief
that the patch should go in.

I was tempted to reject this patch just to make a point, but I think if
I'm being honest it probably should go in, so IFF Jan concurs with
"fairly obvious fix which is unlikely to have hidden issues":

Release-Ack: Ian Campbell

Next time I might be in a worse mood.

Ian.

> Alternatives:
>  * Revert the patch which introduced the regression, but that is very
> undesirable as it was fixing another long-running Xen operation, and
> common-ifying some code between x86 and arm
> 
> ~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:37:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nqh-0000lE-A2; Wed, 15 Jan 2014 10:37:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3Nqf-0000l6-LW
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:37:25 +0000
Received: from [85.158.137.68:53288] by server-7.bemta-3.messagelabs.com id
	08/B1-27599-4E466D25; Wed, 15 Jan 2014 10:37:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389782242!9258432!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1750 invoked from network); 15 Jan 2014 10:37:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:37:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93031440"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 10:37:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:37:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3NqN-0006jp-Qj;
	Wed, 15 Jan 2014 10:37:07 +0000
Date: Wed, 15 Jan 2014 10:37:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140115103707.GI5698@zion.uk.xensource.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 07:28:39PM +0000, Zoltan Kiss wrote:
> The recent patch to fix receive side flow control (11b57f) solved the spinning
> thread problem, but caused another one. The receive side can stall if:
> - xenvif_rx_action sets rx_queue_stopped to false
> - interrupt happens, and sets rx_event to true
> - then xenvif_kthread sets rx_event to false
> 

If you mean "rx_work_todo" returns false.

In this case

(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;

can still be true, can't it?

> Also, through rx_event a malicious guest can force the RX thread to spin. This
> patch ditches those two variables and reworks rx_work_todo. If the thread finds it

This seems to be a bigger problem. Can you elaborate?

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:37:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nqh-0000lE-A2; Wed, 15 Jan 2014 10:37:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3Nqf-0000l6-LW
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:37:25 +0000
Received: from [85.158.137.68:53288] by server-7.bemta-3.messagelabs.com id
	08/B1-27599-4E466D25; Wed, 15 Jan 2014 10:37:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389782242!9258432!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1750 invoked from network); 15 Jan 2014 10:37:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:37:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93031440"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 10:37:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:37:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3NqN-0006jp-Qj;
	Wed, 15 Jan 2014 10:37:07 +0000
Date: Wed, 15 Jan 2014 10:37:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140115103707.GI5698@zion.uk.xensource.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 07:28:39PM +0000, Zoltan Kiss wrote:
> The recent patch to fix receive side flow control (11b57f) solved the spinning
> thread problem, but caused another one. The receive side can stall if:
> - xenvif_rx_action sets rx_queue_stopped to false
> - interrupt happens, and sets rx_event to true
> - then xenvif_kthread sets rx_event to false
> 

I assume you mean that "rx_work_todo" returns false.

In this case

(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;

can still be true, can't it?

> Also, through rx_event a malicious guest can force the RX thread to spin. This
> patch ditches those two variables and reworks rx_work_todo. If the thread finds it

This seems to be a bigger problem. Can you elaborate?

Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:41:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nua-0001FY-W4; Wed, 15 Jan 2014 10:41:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3NuZ-0001FS-Ek
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:41:27 +0000
Received: from [85.158.139.211:3069] by server-1.bemta-5.messagelabs.com id
	00/4E-21065-6D566D25; Wed, 15 Jan 2014 10:41:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389782485!9836127!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3779 invoked from network); 15 Jan 2014 10:41:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 10:41:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 10:41:25 +0000
Message-Id: <52D673E60200007800113C11@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 10:41:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part4F7CC6C6.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] compat/memory: fix build with old gcc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part4F7CC6C6.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

struct xen_add_to_physmap_batch's size field being uint16_t causes old
compiler versions to warn about the pointless range check done inside
compat_handle_okay().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
         {
             unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
                                  / (sizeof(nat.atpb->idxs.p) + sizeof(nat.atpb->gpfns.p));
+            /* Use an intermediate variable to suppress warnings on old gcc: */
+            unsigned int size = cmp.atpb.size;
             xen_ulong_t *idxs = (void *)(nat.atpb + 1);
             xen_pfn_t *gpfns = (void *)(idxs + limit);
 
             if ( copy_from_guest(&cmp.atpb, compat, 1) ||
-                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
+                 !compat_handle_okay(cmp.atpb.idxs, size) ||
+                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
+                 !compat_handle_okay(cmp.atpb.errs, size) )
                 return -EFAULT;
 
             end_extent = start_extent + limit;
-            if ( end_extent > cmp.atpb.size )
-                end_extent = cmp.atpb.size;
+            if ( end_extent > size )
+                end_extent = size;
 
             idxs -= start_extent;
             gpfns -= start_extent;




--=__Part4F7CC6C6.0__=
Content-Type: text/plain; name="xatpb-old-gcc.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="xatpb-old-gcc.patch"

compat/memory: fix build with old gcc

struct xen_add_to_physmap_batch's size field being uint16_t causes old
compiler versions to warn about the pointless range check done inside
compat_handle_okay().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
         {
             unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
                                  / (sizeof(nat.atpb->idxs.p) + sizeof(nat.atpb->gpfns.p));
+            /* Use an intermediate variable to suppress warnings on old gcc: */
+            unsigned int size = cmp.atpb.size;
             xen_ulong_t *idxs = (void *)(nat.atpb + 1);
             xen_pfn_t *gpfns = (void *)(idxs + limit);
 
             if ( copy_from_guest(&cmp.atpb, compat, 1) ||
-                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
-                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
+                 !compat_handle_okay(cmp.atpb.idxs, size) ||
+                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
+                 !compat_handle_okay(cmp.atpb.errs, size) )
                 return -EFAULT;
 
             end_extent = start_extent + limit;
-            if ( end_extent > cmp.atpb.size )
-                end_extent = cmp.atpb.size;
+            if ( end_extent > size )
+                end_extent = size;
 
             idxs -= start_extent;
             gpfns -= start_extent;
--=__Part4F7CC6C6.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part4F7CC6C6.0__=--


From xen-devel-bounces@lists.xen.org Wed Jan 15 10:44:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Nxa-0001Nu-JF; Wed, 15 Jan 2014 10:44:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3NxY-0001Nk-9Y
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:44:32 +0000
Received: from [85.158.137.68:60307] by server-2.bemta-3.messagelabs.com id
	67/07-17329-F8666D25; Wed, 15 Jan 2014 10:44:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389782670!8437530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10681 invoked from network); 15 Jan 2014 10:44:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 10:44:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 10:44:30 +0000
Message-Id: <52D6749D0200007800113C14@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 10:44:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
	<1389779590.12434.131.camel@kazak.uk.xensource.com>
	<52D65B74.9070604@citrix.com>
	<1389782102.12434.163.camel@kazak.uk.xensource.com>
In-Reply-To: <1389782102.12434.163.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 11:35, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-15 at 09:57 +0000, Andrew Cooper wrote:
>> On 15/01/14 09:53, Ian Campbell wrote:
>> > On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
>> >>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
>> >>
>> >> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
>> >> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
>> >>
>> >> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
>> >> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
>> >> unsigned int to uint16_t, which changes the space switch()'d upon.
>> >>
>> >> This wouldn't be noticed with any upstream code (of which I am aware), but was
>> >> discovered because of the XenServer support for legacy Windows PV drivers,
>> >> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
>> >> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
>> >> support running VMs with out-of-date tools.
>> >>
>> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> >> CC: Keir Fraser <keir@xen.org>
>> >> CC: Jan Beulich <JBeulich@suse.com>
>> >> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> >>
>> >> ---
>> >>
>> >> As this breakage was caused between 4.4-rc1 and -rc2,
>> > That's certainly a good indicator, but you've not covered the actual
>> > risks and rewards of making this change now:
>> > 
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
>> >
>> > Please can you do so.
>> >
>> >
>> 
>> Contributes towards #1 "Bug-free release"
>> 
>> Risks:
>>  * We now know we have an ABI regression
>>  * It is a fairly obvious fix which is unlikely to have hidden issues
>> itself.
>> 
>> Rewards:
>>  * We keep the hypervisor ABI compatible with Xen 4.3
> 
> IMHO it already is -- the 4.4 ABI is not broken because the truncated
> bits are not used in the Xen ABI, 4.4 accepts everything which 4.3 does.

Not exactly: 4.4 now also accepts what 4.3 would reject.

> We still very much have the option of deferring this change to 4.5
> and/or when the bits become used, with no risk to the Xen 4.4 release.
> 
> Please try and consider the guidelines exceptions with an unbiased eye,
> rather than just as a mechanism for reconfirming your existing belief
> that the patch should go in.
> 
> I was tempted to reject this patch just to make a point, but I think if
> I'm being honest it probably should go in, so IFF Jan concurs with
> "fairly obvious fix which is unlikely to have hidden issues":

I already did in an earlier reply (or at least it was meant to be that
way).

> Release-Ack: Ian Campbell

Thanks.

> Next time I might be in a worse mood.

Hopefully not too much worse ;-)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3O19-0001Yp-F2; Wed, 15 Jan 2014 10:48:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3O18-0001Yj-4S
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:48:14 +0000
From xen-devel-bounces@lists.xen.org Wed Jan 15 10:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3O19-0001Yp-F2; Wed, 15 Jan 2014 10:48:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3O18-0001Yj-4S
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:48:14 +0000
Received: from [85.158.143.35:18894] by server-1.bemta-4.messagelabs.com id
	AE/95-02132-D6766D25; Wed, 15 Jan 2014 10:48:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389782891!11862241!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13639 invoked from network); 15 Jan 2014 10:48:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:48:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90912828"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:48:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:48:10 -0500
Message-ID: <1389782889.12434.175.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Jan 2014 10:48:09 +0000
In-Reply-To: <52D66F4C0200007800113BDB@nat28.tlf.novell.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 10:21 +0000, Jan Beulich wrote:
> >>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
> >> According to our own documentation, even Python 2.3 is supported
> >> for building,
> > 
> > It might be time to consider revving that.
> 
> Post 4.4 perhaps, but I don't think now is the right time to consider
> anything like that. IMO we ought to fix the build instead.

I think revving the minimum requirement to Python 2.4 would be a
perfectly acceptable thing to do at this stage in the release; it's
nothing more than a change to the README.

From the rest of your mail I agree that going further than that and
requiring 2.5+ would be wrong for 4.4.

> > Current state of some distros:
> > 
> > Debian Squeeze (oldstable): 2.6
> > Debian Wheezy (stable): 2.7
> > RHEL 5.9: 2.4
> > RHEL 4.8: 2.3
> > SLES 11SP3: 2.6
> > SLES 10SP4: 2.4
> > 
> > So by bumping to a minimum version of 2.4 we would be dropping support
> > for RHEL4 dom0 userspace -- Personally I think that would be acceptable,
> > it's now RHEL's old-old-stable (unless RHEL7 happened while I wasn't
> > paying attention, which is possible).
> > 
> > Bumping further than 2.4 would mean dropping RHEL5 and SLES10, neither
> > of which seem desirable to drop unnecessarily. (If there were known
> > issues with 2.4 then I would be inclined to re-evaluate that though)
> 
> As said - I encountered the issue here with 2.4 (on SLE10).

Sorry, I misunderstood and thought this was an issue with 2.3
somewhere. 

Given that it is 2.4 and Qemu's configure script explicitly says that
2.4 is what they want I think it would be worth sending the fix to qemu
upstream -- at worst they say no and bump their requirement (at which
point we'd have to have a think about what to do).
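
(For reference, the incompatibility is that conditional expressions,
`x if cond else y`, only appeared in Python 2.5 via PEP 308, which is
why 2.4 chokes on the qapi-visit.py line quoted below. A sketch of a
2.4-safe rewrite -- the helper name here is illustrative, not qemu's
actual code:)

```python
# Conditional expressions (PEP 308) were added in Python 2.5, so the
# one-liner
#     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
# is a syntax error on 2.4. A plain if/else is equivalent and works on
# both. (Function name is illustrative only.)
def full_name_for(name, fn_prefix):
    if not fn_prefix:
        return name
    return "%s_%s" % (name, fn_prefix)

print(full_name_for("visit_type", None))     # visit_type
print(full_name_for("visit_type", "q_obj"))  # visit_type_q_obj
```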

> And yes, bumping the requirement beyond 2.4 would require
> me to find solutions for how to build on SLE10 (which continues
> to be the lowest common denominator here to allow building
> and testing of all code streams I currently need to support, i.e.
> I'm keeping one system around for 32- and 64-bit each, and it
> so happens that one of those is where I do most of my work on).

This is due to your workflow, not a SLE10 requirement to support latest
Xen or anything like that, right?

(I'm not suggesting your workflow isn't important, I think if a distro
is still in use by a developer then that is good reason to keep
supporting it)

> 
> >>  yet the qemu-upstream version update results in that
> >> part of the build no longer working with Python 2.4 due to a number
> >> of conditional assignments like this
> >> 
> >>     full_name = name if not fn_prefix else "%s_%s" % (name, fn_prefix)
> >> 
> >> in scripts/qapi-visit.py.
> > 
> > This is consistent with their configure script which checks for:
> >         # Note that if the Python conditional here evaluates True we will 
> > exit
> >         # with status 1 which is a shell 'false' value.
> >         if ! $python -c 'import sys; sys.exit(sys.version_info < (2,4) or 
> > sys.version_info >= (3,))'; then
> >           error_exit "Cannot use '$python', Python 2.4 or later is 
> > required." \
> >               "Note that Python 3 or later is not yet supported." \
> >               "Use --python=/path/to/python to specify a supported Python."
> >         fi
> > 
> > Did you hit that too?
> 
> No, I didn't (or didn't notice, which would mean the failure here
> doesn't get propagated correctly) - it was the actual build that
> was failing.

OK -- now I'm confused, because you seem to say you would expect to have
hit this, while above you say you are using Python 2.4.
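
(A possible resolution: for an interpreter that is exactly 2.4 the
probe quoted above passes, so configure succeeds and the failure only
surfaces later, when the 2.5-only syntax is parsed. A sketch of the
probe's logic:)

```python
import sys

def probe_rejects(version_info):
    # Mirrors the shell check quoted above: reject interpreters older
    # than 2.4, and reject Python 3.
    return version_info < (2, 4) or version_info >= (3,)

print(probe_rejects((2, 3, 5)))  # True  - 2.3 is rejected by configure
print(probe_rejects((2, 4, 6)))  # False - 2.4 passes configure...
# ...yet 2.4 still cannot parse "x if c else y", hence a build failure
# with no complaint from configure.
```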

> 
> >> Converting these instances to proper conditionals fixes the issue
> >> for me.
> >> 
> >> I further found that with some gcc versions trace/control-internal.h
> >> causes a huge amount of warnings to be generated. While this isn't
> >> keeping the build from succeeding, it's still rather annoying.
> > 
> > Ancient versions of gcc I presume?
> 
> Yes, 4.1.x.

Also SLE10 era I see, also RHEL5 -- so I think this one is still
important.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:49:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:49:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3O2J-0001zG-3Q; Wed, 15 Jan 2014 10:49:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3O2H-0001z1-Jr
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:49:25 +0000
Received: from [85.158.137.68:15632] by server-12.bemta-3.messagelabs.com id
	46/27-20055-3B766D25; Wed, 15 Jan 2014 10:49:23 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389782960!9261752!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22828 invoked from network); 15 Jan 2014 10:49:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:49:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90913021"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:49:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:49:20 -0500
Message-ID: <52D667AE.3040506@citrix.com>
Date: Wed, 15 Jan 2014 10:49:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>	<1389779590.12434.131.camel@kazak.uk.xensource.com>	<52D65B74.9070604@citrix.com>
	<1389782102.12434.163.camel@kazak.uk.xensource.com>
In-Reply-To: <1389782102.12434.163.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
	XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 10:35, Ian Campbell wrote:
> On Wed, 2014-01-15 at 09:57 +0000, Andrew Cooper wrote:
>> On 15/01/14 09:53, Ian Campbell wrote:
>>> On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
>>>>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
>>>>
>>>> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
>>>> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
>>>>
>>>> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
>>>> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
>>>> unsigned int to uint16_t, which changes the space switch()'d upon.
>>>>
>>>> This wouldn't be noticed with any upstream code (of which I am aware), but was
>>>> discovered because of the XenServer support for legacy Windows PV drivers,
>>>> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
>>>> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
>>>> support running VMs with out-of-date tools.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> CC: Keir Fraser <keir@xen.org>
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>
>>>> ---
>>>>
>>>> As this breakage was caused between 4.4-rc1 and -rc2,
>>> That's certainly a good indicator, but you've not covered the actual
>>> risks and rewards of making this change now:
>>> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
>>>
>>> Please can you do so.
>>>
>>>
>>
>> Contributes towards #1 "Bug-free release"
>>
>> Risks:
>>  * We now know we have an ABI regression
>>  * It is a fairly obvious fix which is unlikely to have hidden issues
>> itself.
>>
>> Rewards:
>>  * We keep the hypervisor ABI compatible with Xen 4.3
> 
> IMHO it already is -- the 4.4 ABI is not broken because the truncated
> bits are not used in the Xen ABI, 4.4 accepts everything which 4.3 does.
> We still very much have the option of deferring this change to 4.5
> and/or when the bits become used, with no risk to the Xen 4.4 release.

It is a guest-visible change as it changes the behaviour if the guest
supplies space >= 0x10000.  e.g., space == 0x10000 would be truncated and
it would operate on space == 0x0000 and (potentially) return success
instead of an -EINVAL error.
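
(A quick model of the narrowing in question -- the implicit C
conversion of 'space' from unsigned int to uint16_t keeps only the low
16 bits; values here are illustrative:)

```python
def narrow_to_uint16(space):
    # Models the implicit C conversion from unsigned int to uint16_t:
    # the value is reduced modulo 2**16, i.e. only the low 16 bits
    # survive.
    return space & 0xFFFF

print(hex(narrow_to_uint16(0x1234)))   # 0x1234 - fits in 16 bits, unchanged
print(hex(narrow_to_uint16(0x10000)))  # 0x0    - wraps to a valid-looking space
```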

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:51:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3O4D-0002B9-L1; Wed, 15 Jan 2014 10:51:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3O4C-0002Az-Cr
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 10:51:24 +0000
Received: from [85.158.139.211:15492] by server-9.bemta-5.messagelabs.com id
	91/2C-15098-B2866D25; Wed, 15 Jan 2014 10:51:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389783081!9871747!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12746 invoked from network); 15 Jan 2014 10:51:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:51:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90913341"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:51:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:51:20 -0500
Message-ID: <1389783079.12434.178.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Jan 2014 10:51:19 +0000
In-Reply-To: <52D6749D0200007800113C14@nat28.tlf.novell.com>
References: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
	<1389779590.12434.131.camel@kazak.uk.xensource.com>
	<52D65B74.9070604@citrix.com>
	<1389782102.12434.163.camel@kazak.uk.xensource.com>
	<52D6749D0200007800113C14@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 10:44 +0000, Jan Beulich wrote:
> >>> On 15.01.14 at 11:35, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-15 at 09:57 +0000, Andrew Cooper wrote:
> >> On 15/01/14 09:53, Ian Campbell wrote:
> >> > On Tue, 2014-01-14 at 20:21 +0000, Andrew Cooper wrote:
> >> >>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
> >> >>
> >> >> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned 
> > int,
> >> >> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
> >> >>
> >> >> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
> >> >> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
> >> >> unsigned int to uint16_t, which changes the space switch()'d upon.
> >> >>
> >> >> This wouldn't be noticed with any upstream code (of which I am aware), but 
> > was
> >> >> discovered because of the XenServer support for legacy Windows PV drivers,
> >> >> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit 
> > set.
> >> >> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
> >> >> support running VMs with out-of-date tools.
> >> >>
> >> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> >> CC: Keir Fraser <keir@xen.org>
> >> >> CC: Jan Beulich <JBeulich@suse.com>
> >> >> CC: Ian Campbell <Ian.Campbell@citrix.com>
> >> >>
> >> >> ---
> >> >>
> >> >> As this breakage was caused between 4.4-rc1 and -rc2,
> >> > That's certainly a good indicator, but you've not covered the actual
> >> > risks and rewards of making this change now:
> >> > 
> > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_c 
> > ode_freeze
> >> >
> >> > Please can you do so.
> >> >
> >> >
> >> 
> >> Contributes towards #1 "Bug-free release"
> >> 
> >> Risks:
> >>  * We now know we have an ABI regression
> >>  * It is a fairly obvious fix which is unlikely to have hidden issues
> >> itself.
> >> 
> >> Rewards:
> >>  * We keep the hypervisor ABI compatible with Xen 4.3
> > 
> > IMHO it already is -- the 4.4 ABI is not broken because the truncated
> > bits are not used in the Xen ABI, 4.4 accepts everything which 4.3 does.
> 
> Not exactly: 4.4 now also accepts what 4.3 would reject.

That is a valid point, thanks. With that having been pointed out I think
it is pretty obvious that this should go in.

> I already did in an earlier reply (or at least it was meant to be that
> way).

I saw your reviewed-by but didn't know if it applied to 4.4 or 4.5, so I
wanted to check.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >> >> CC: Keir Fraser <keir@xen.org>
> >> >> CC: Jan Beulich <JBeulich@suse.com>
> >> >> CC: Ian Campbell <Ian.Campbell@citrix.com>
> >> >>
> >> >> ---
> >> >>
> >> >> As this breakage was caused between 4.4-rc1 and -rc2,
> >> > That's certainly a good indicator, but you've not covered the actual
> >> > risks and rewards of making this change now:
> >> > 
> > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_c 
> > ode_freeze
> >> >
> >> > Please can you do so.
> >> >
> >> >
> >> 
> >> Contributes towards #1 "Bug-free release"
> >> 
> >> Risks:
> >>  * We now know we have an ABI regression
> >>  * It is a fairly obvious fix which is unlikely to have hidden issues
> >> itself.
> >> 
> >> Rewards:
> >>  * We keep the hypervisor ABI compatible with Xen 4.3
> > 
> > IMHO it already is -- the 4.4 ABI is not broken because the truncated
> > bits are not used in the Xen ABI, 4.4 accepts everything which 4.3 does.
> 
> Not exactly: 4.4 now also accepts what 4.3 would reject.

That is a valid point, thanks. With that having been pointed out I think
it is pretty obvious that this should go in.

> I already did in an earlier reply (or at least it was meant to be that
> way).

I saw your Reviewed-by but didn't know if it applied to 4.4 or 4.5 so I
wanted to check.

Ian.
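[Archive note: the implicit truncation described in the quoted patch can be sketched as below. The values and function names are illustrative only; the real XENMAPSPACE_* constants and the switch() live in Xen's public/memory.h and common/memory.c.]

```c
#include <assert.h>
#include <stdint.h>

/* Two handlers differing only in the width of 'space'.  The legacy
 * Windows PV drivers passed values with the top bit set; once the
 * parameter narrows to uint16_t those high bits vanish, so a value
 * the switch() should reject collides with a valid low value. */
static int handled(unsigned int space)
{
    switch ( space )
    {
    case 2:            /* stand-in for a valid XENMAPSPACE_* value */
        return 1;
    default:
        return 0;
    }
}

static int handled_truncated(uint16_t space)
{
    /* Same switch, but the caller's unsigned int has already been
     * cut down to 16 bits at the call boundary. */
    return handled(space);
}
```

Calling handled(0x80000002u) returns 0, while handled_truncated(0x80000002u) returns 1: the implicit conversion to uint16_t discards the top bit, changing which case is taken.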


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 10:52:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 10:52:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3O5U-0002IT-52; Wed, 15 Jan 2014 10:52:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3O5S-0002I9-94
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 10:52:43 +0000
Received: from [85.158.139.211:37859] by server-3.bemta-5.messagelabs.com id
	BF/71-04773-57866D25; Wed, 15 Jan 2014 10:52:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389783155!9878933!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8511 invoked from network); 15 Jan 2014 10:52:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 10:52:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90913626"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 10:52:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 05:52:34 -0500
Message-ID: <1389783153.12434.179.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Jan 2014 10:52:33 +0000
In-Reply-To: <52D673E60200007800113C11@nat28.tlf.novell.com>
References: <52D673E60200007800113C11@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] compat/memory: fix build with old gcc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 10:41 +0000, Jan Beulich wrote:
> struct xen_add_to_physmap_batch's size field being uint16_t causes old
> compiler versions to warn about the pointless range check done inside
> compat_handle_okay().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

As far as the release goes this is fine IMHO.

> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
>          {
>              unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
>                                   / (sizeof(nat.atpb->idxs.p) + sizeof(nat.atpb->gpfns.p));
> +            /* Use an intermediate variable to suppress warnings on old gcc: */
> +            unsigned int size = cmp.atpb.size;
>              xen_ulong_t *idxs = (void *)(nat.atpb + 1);
>              xen_pfn_t *gpfns = (void *)(idxs + limit);
>  
>              if ( copy_from_guest(&cmp.atpb, compat, 1) ||
> -                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
> +                 !compat_handle_okay(cmp.atpb.idxs, size) ||
> +                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
> +                 !compat_handle_okay(cmp.atpb.errs, size) )
>                  return -EFAULT;
>  
>              end_extent = start_extent + limit;
> -            if ( end_extent > cmp.atpb.size )
> -                end_extent = cmp.atpb.size;
> +            if ( end_extent > size )
> +                end_extent = size;
>  
>              idxs -= start_extent;
>              gpfns -= start_extent;
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
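[Archive note: the class of warning the quoted patch silences can be sketched as below. The bound and function names are hypothetical, not the real compat_handle_okay() internals.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bound for illustration; the real check lives inside
 * Xen's compat_handle_okay() macro. */
#define LIMIT 0x100000u

/* If the uint16_t field were compared against LIMIT directly, old gcc
 * would warn that the comparison is always true, since a 16-bit value
 * can never exceed 0x100000.  Widening through an unsigned int keeps
 * the same semantics while giving the compiler nothing to prove. */
static int range_ok(unsigned int nr)
{
    return nr <= LIMIT;
}

/* Mimic the patch: copy the uint16_t field into a full-width
 * variable once, then use that everywhere. */
static int check_batch(uint16_t raw_size)
{
    unsigned int size = raw_size;
    return range_ok(size);
}
```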



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:03:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:03:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OFQ-0002us-9S; Wed, 15 Jan 2014 11:03:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3OFO-0002un-W8
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:02:59 +0000
Received: from [193.109.254.147:42641] by server-3.bemta-14.messagelabs.com id
	2F/AA-11000-2EA66D25; Wed, 15 Jan 2014 11:02:58 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389783776!11008230!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16883 invoked from network); 15 Jan 2014 11:02:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:02:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93037213"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:02:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:02:55 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3OFL-00075l-8X; Wed, 15 Jan 2014 11:02:55 +0000
Message-ID: <52D66ADF.9070401@citrix.com>
Date: Wed, 15 Jan 2014 11:02:55 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Annie Li <Annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140115100743.GG5698@zion.uk.xensource.com>
In-Reply-To: <20140115100743.GG5698@zion.uk.xensource.com>
X-DLP: MIA2
Cc: netdev@vger.kernel.org, ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 10:07, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
>> Current netfront only grants pages for grant copy, not for grant transfer, so
>> remove corresponding transfer code and add receiving copy code in
>> xennet_release_rx_bufs.
>>
>
> This path seldom gets called -- not that many people unload the
> xen-netfront driver. If Annie has tested this patch and it works as expected I think
> it's fine.
>
In XenServer we have seen a number of cases where unplugging and 
replugging VIFs results in leakage of grant references, eventually 
leading to a case where you cannot plug a VIF (after ~ 400 such cycles)...

It's worth pointing out, as far as this patch is concerned, that 
gnttab_end_foreign_access() can fail, which is not taken into account here.

Andrew.

> I'm not netfront maintainer but I'm happy to add
> Acked-by: Wei Liu <wei.liu2@citrix.com>
> if Annie confirms she's tested this patch.
>
> Wei.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OFz-0002xB-MQ; Wed, 15 Jan 2014 11:03:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3OFz-0002x3-5P
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 11:03:35 +0000
Received: from [85.158.137.68:13619] by server-1.bemta-3.messagelabs.com id
	21/03-29598-60B66D25; Wed, 15 Jan 2014 11:03:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389783813!9246065!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18867 invoked from network); 15 Jan 2014 11:03:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 11:03:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 11:03:31 +0000
Message-Id: <52D679140200007800113C55@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 11:03:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
In-Reply-To: <1389782889.12434.175.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 11:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-15 at 10:21 +0000, Jan Beulich wrote:
>> >>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
>> >> According to our own documentation, even Python 2.3 is supported
>> >> for building,
>> > 
>> > It might be time to consider revving that.
>> 
>> Post 4.4 perhaps, but I don't think now is the right time to consider
>> anything like that. IMO we ought to fix the build instead.
> 
> I think revving the minimum requirement to Python 2.4 would be a
> perfectly acceptable thing to do at this stage in the release, it's
> nothing more than a change to the README.

And tools/configure*.


>> And yes, bumping the requirement beyond 2.4 would require
>> me to find solutions for how to build on SLE10 (which continues
>> to be the lowest common denominator here to allow building
>> and testing of all code streams I currently need to support, i.e.
>> I'm keeping one system around for 32- and 64-bit each, and it
>> so happens that one of those is where I do most of my work).
> 
> This is due to your workflow, not a SLE10 requirement to support latest
> Xen or anything like that, right?

Right.

>> > This is consistent with their configure script which checks for:
>> >         # Note that if the Python conditional here evaluates True we will 
>> > exit
>> >         # with status 1 which is a shell 'false' value.
>> >         if ! $python -c 'import sys; sys.exit(sys.version_info < (2,4) or 
>> > sys.version_info >= (3,))'; then
>> >           error_exit "Cannot use '$python', Python 2.4 or later is 
>> > required." \
>> >               "Note that Python 3 or later is not yet supported." \
>> >               "Use --python=/path/to/python to specify a supported Python."
>> >         fi
>> > 
>> > Did you hit that too?
>> 
>> No, I didn't (or didn't notice, which would mean the failure here
>> doesn't get propagated correctly) - it was the actual build that
>> was failing.
> 
> OK -- now I'm confused because you seem to say you expect to have hit
> this while above you say you are using Python 2.4.

I didn't look closely enough: I took the excerpt to mean > 2.4
whereas it means >=. Sorry for that.
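[Archive note: the tuple comparison in the quoted configure check can be sketched as below; python_ok() mirrors the logic of the quoted `python -c` test but is not the literal script.]

```python
def python_ok(version_info):
    """Accept any Python from 2.4 (inclusive) up to, but not
    including, 3.0 -- the same predicate as the configure check."""
    return not (version_info < (2, 4) or version_info >= (3,))

# Tuple comparison is elementwise and lexicographic, so 2.4.x itself
# passes: (2, 4, 0) < (2, 4) is False, because (2, 4) is a proper
# prefix of (2, 4, 0) and a prefix compares as smaller.
```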

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OFz-0002xB-MQ; Wed, 15 Jan 2014 11:03:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3OFz-0002x3-5P
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 11:03:35 +0000
Received: from [85.158.137.68:13619] by server-1.bemta-3.messagelabs.com id
	21/03-29598-60B66D25; Wed, 15 Jan 2014 11:03:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389783813!9246065!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18867 invoked from network); 15 Jan 2014 11:03:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 11:03:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 11:03:31 +0000
Message-Id: <52D679140200007800113C55@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 11:03:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
In-Reply-To: <1389782889.12434.175.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 11:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-15 at 10:21 +0000, Jan Beulich wrote:
>> >>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
>> >> According to our own documentation, even Python 2.3 is supported
>> >> for building,
>> > 
>> > It might be time to consider reving that.
>> 
>> Post 4.4 perhaps, but I don't think now is the right time to consider
>> anything like that. IMO we ought to fix the build instead.
> 
> I think revving the minimum requirement to Python 2.4 would be a
> perfectly acceptable thing to do at this stage in the release, it's
> nothing more than a change to the README.

And tools/configure*.


>> And yes, bumping the requirement beyond 2.4 would require
>> me to find solutions for how to build on SLE10 (which continues
>> to be the lowest common denominator here to allow building
>> and testing of all code streams I currently need to support, i.e.
>> I'm keeping one system around for 32- and 64-bit each, and it
>> so happens that one of those is where I do most of my work on).
> 
> This is due to your workflow, not a SLE10 requirement to support latest
> Xen or anything like that, right?

Right.

>> > This is consistent with their configure script which checks for:
>> >         # Note that if the Python conditional here evaluates True we will exit
>> >         # with status 1 which is a shell 'false' value.
>> >         if ! $python -c 'import sys; sys.exit(sys.version_info < (2,4) or sys.version_info >= (3,))'; then
>> >           error_exit "Cannot use '$python', Python 2.4 or later is required." \
>> >               "Note that Python 3 or later is not yet supported." \
>> >               "Use --python=/path/to/python to specify a supported Python."
>> >         fi
>> > 
>> > Did you hit that too?
>> 
>> No, I didn't (or didn't notice, which would mean the failure here
>> doesn't get propagated correctly) - it was the actual build that
>> was failing.
> 
> OK -- now I'm confused because you seem to say you expect to have hit
> this while above you say you are using Python 2.4.

I didn't look closely enough: I took the excerpt to mean > 2.4,
whereas it actually means >= 2.4. Sorry for that.
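[Editorial note: the ">=" vs ">" mix-up above comes down to Python's
lexicographic tuple comparison. A minimal sketch of the configure
check's semantics; python_ok is a hypothetical stand-in for the inline
`$python -c` one-liner, not anything in qemu's actual configure:]

```python
def python_ok(version_info):
    # Same predicate as the configure one-liner: accept >= 2.4 and < 3.
    # (python_ok is an illustrative helper name, not qemu's code.)
    return not (version_info < (2, 4) or version_info >= (3,))

# Tuple comparison is lexicographic, so (2, 4) itself is NOT less than
# (2, 4) -- i.e. the check means ">= 2.4", not "> 2.4":
assert python_ok((2, 3)) is False    # too old
assert python_ok((2, 4)) is True     # 2.4 itself passes
assert python_ok((2, 7, 6)) is True
assert python_ok((3, 0)) is False    # Python 3 not yet supported
```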

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:05:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OHw-00036g-78; Wed, 15 Jan 2014 11:05:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3OHu-00036b-II
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 11:05:34 +0000
Received: from [85.158.137.68:8334] by server-5.bemta-3.messagelabs.com id
	00/27-25188-D7B66D25; Wed, 15 Jan 2014 11:05:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389783931!9213217!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19014 invoked from network); 15 Jan 2014 11:05:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:05:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93037791"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:05:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:05:30 -0500
Message-ID: <1389783928.12434.180.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 15 Jan 2014 11:05:28 +0000
In-Reply-To: <52D679140200007800113C55@nat28.tlf.novell.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
	<52D679140200007800113C55@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 11:03 +0000, Jan Beulich wrote:
> >>> On 15.01.14 at 11:48, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-15 at 10:21 +0000, Jan Beulich wrote:
> >> >>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
> >> >> According to our own documentation, even Python 2.3 is supported
> >> >> for building,
> >> > 
> >> > It might be time to consider revving that.
> >> 
> >> Post 4.4 perhaps, but I don't think now is the right time to consider
> >> anything like that. IMO we ought to fix the build instead.
> > 
> > I think revving the minimum requirement to Python 2.4 would be a
> > perfectly acceptable thing to do at this stage in the release, it's
> > nothing more than a change to the README.
> 
> And tools/configure*.

True, I'd forgotten about that. I still think it would be an acceptable
risk, but given that the actual issues are with 2.4 I'm not going to be
bothered to push for it at this stage...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:06:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OIM-0003AV-Jz; Wed, 15 Jan 2014 11:06:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3OII-0003A3-ND
	for xen-devel@lists.xensource.com; Wed, 15 Jan 2014 11:06:00 +0000
Received: from [85.158.137.68:10757] by server-13.bemta-3.messagelabs.com id
	AC/BC-28603-59B66D25; Wed, 15 Jan 2014 11:05:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389783954!9246895!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5144 invoked from network); 15 Jan 2014 11:05:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:05:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93037850"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:05:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 06:05:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3OIC-0005BL-UG;
	Wed, 15 Jan 2014 11:05:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3OIC-0008AZ-MD;
	Wed, 15 Jan 2014 11:05:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24375-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Jan 2014 11:05:52 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24375: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24375 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24375/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24373 REGR. vs. 24369

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24373
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24373

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24373 never pass

version targeted for testing:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 14 16:19:08 2014 +0100

    update Xen version to 4.4-rc2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:14:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OQY-0003t6-4l; Wed, 15 Jan 2014 11:14:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3OQX-0003t1-E5
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:14:29 +0000
Received: from [193.109.254.147:34101] by server-10.bemta-14.messagelabs.com
	id FF/59-20752-49D66D25; Wed, 15 Jan 2014 11:14:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389784463!11029980!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23146 invoked from network); 15 Jan 2014 11:14:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:14:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93039732"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:14:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:14:23 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3OQQ-0007Ef-PC;
	Wed, 15 Jan 2014 11:14:22 +0000
Date: Wed, 15 Jan 2014 11:14:22 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140115111422.GJ5698@zion.uk.xensource.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140115100743.GG5698@zion.uk.xensource.com>
	<52D66ADF.9070401@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D66ADF.9070401@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: netdev@vger.kernel.org, Annie Li <Annie.li@oracle.com>,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 11:02:55AM +0000, Andrew Bennieston wrote:
> On 15/01/14 10:07, Wei Liu wrote:
> >On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
> >>Current netfront only grants pages for grant copy, not for grant transfer, so
> >>remove corresponding transfer code and add receiving copy code in
> >>xennet_release_rx_bufs.
> >>
> >
> >This path seldom gets called -- not that many people unload the
> >xen-netfront driver. If Annie has tested this patch and it works as
> >expected I think it's fine.
> >
> In XenServer we have seen a number of cases where unplugging and
> replugging VIFs results in leakage of grant references, eventually
> leading to a case where you cannot plug a VIF (after ~ 400 such
> cycles)...
> 

OK, this makes sense.

> It's worth pointing out, as far as this patch is concerned, that
> gnttab_end_foreign_access() can fail, which is not taken into
> account here.
> 

How? gnttab_end_foreign_access doesn't return any error. A gref which
cannot be freed right away is added to a deferred list and handled
later.
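[Editorial note: a rough sketch of the deferred-free contract described
above. All names here are illustrative stand-ins, not the real Linux
gnttab API; the point is that the release call returns nothing, and a
grant the remote side still maps is queued and reclaimed by a later
pass rather than reported as a failure:]

```python
# List of grants that could not be released on the first attempt.
deferred = []

def end_foreign_access(grant):
    """Free the grant now if possible, otherwise queue it (no error)."""
    if grant["in_use"]:
        deferred.append(grant)      # remote domain still maps it
    else:
        grant["freed"] = True

def process_deferred():
    """Worker pass: retry every deferred grant, free the released ones."""
    remaining, freed = [], 0
    for grant in deferred:
        if grant["in_use"]:
            remaining.append(grant)
        else:
            grant["freed"] = True
            freed += 1
    deferred[:] = remaining
    return freed

a = {"ref": 1, "in_use": False, "freed": False}
b = {"ref": 2, "in_use": True, "freed": False}
end_foreign_access(a)            # freed immediately
end_foreign_access(b)            # still in use: deferred, no failure seen
b["in_use"] = False              # remote side later releases the page
freed_now = process_deferred()   # reclaims b
```

Under this model the caller indeed has no error to check: a gref that
cannot be released yet is not lost, just reclaimed on a later pass.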

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:14:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OQY-0003t6-4l; Wed, 15 Jan 2014 11:14:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3OQX-0003t1-E5
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:14:29 +0000
Received: from [193.109.254.147:34101] by server-10.bemta-14.messagelabs.com
	id FF/59-20752-49D66D25; Wed, 15 Jan 2014 11:14:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389784463!11029980!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23146 invoked from network); 15 Jan 2014 11:14:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:14:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93039732"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:14:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:14:23 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3OQQ-0007Ef-PC;
	Wed, 15 Jan 2014 11:14:22 +0000
Date: Wed, 15 Jan 2014 11:14:22 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140115111422.GJ5698@zion.uk.xensource.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140115100743.GG5698@zion.uk.xensource.com>
	<52D66ADF.9070401@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D66ADF.9070401@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: netdev@vger.kernel.org, Annie Li <Annie.li@oracle.com>,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 11:02:55AM +0000, Andrew Bennieston wrote:
> On 15/01/14 10:07, Wei Liu wrote:
> >On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
> >>Current netfront only grants pages for grant copy, not for grant transfer, so
> >>remove corresponding transfer code and add receiving copy code in
> >>xennet_release_rx_bufs.
> >>
> >
> >This path seldom gets called -- not that many people unload the xen-netfront
> >driver. If Annie has tested this patch and it works as expected I think
> >it's fine.
> >
> In XenServer we have seen a number of cases where unplugging and
> replugging VIFs results in leakage of grant references, eventually
> leading to a case where you cannot plug a VIF (after ~ 400 such
> cycles)...
> 

OK, this makes sense.

> It's worth pointing out, as far as this patch is concerned, that
> gnttab_end_foreign_access() can fail, which is not taken into
> account here.
> 

How? gnttab_end_foreign_access doesn't return any error. A gref that
cannot be freed right away is added to a deferred list and handled
later.
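To make the deferred-list behaviour Wei describes concrete, here is a minimal userspace sketch of the idea: a gref that is still in use is queued and retried later instead of being freed immediately. All names here (gref_entry, try_end_access, end_access_or_defer, process_deferred) are illustrative stand-ins, not the kernel's grant-table API.

```c
/* Sketch of deferred grant release: a busy gref is parked on a list
 * and retried later, so its page is never freed while still granted. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct gref_entry {
	int ref;
	bool busy;               /* still mapped by the other end? */
	struct gref_entry *next;
};

static struct gref_entry *deferred_list;

/* Stand-in for gnttab_end_foreign_access_ref(): fails while busy. */
static bool try_end_access(struct gref_entry *e)
{
	return !e->busy;
}

/* Stand-in for gnttab_end_foreign_access(): free now if possible,
 * otherwise defer until the gref quiesces. */
static void end_access_or_defer(struct gref_entry *e)
{
	if (try_end_access(e)) {
		free(e);
		return;
	}
	e->next = deferred_list;
	deferred_list = e;
}

/* Periodic retry pass; returns how many entries were released. */
static int process_deferred(void)
{
	struct gref_entry **pp = &deferred_list;
	int freed = 0;

	while (*pp) {
		struct gref_entry *e = *pp;
		if (try_end_access(e)) {
			*pp = e->next;
			free(e);
			freed++;
		} else {
			pp = &e->next;
		}
	}
	return freed;
}
```

The key property is that the page backing a busy gref is never handed back to the allocator until a later retry finds the gref quiescent.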

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:21:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OWm-0004Nm-1R; Wed, 15 Jan 2014 11:20:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3OWl-0004Nh-1v
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:20:55 +0000
Received: from [85.158.143.35:21377] by server-3.bemta-4.messagelabs.com id
	47/73-32360-61F66D25; Wed, 15 Jan 2014 11:20:54 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389784852!11717017!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2318 invoked from network); 15 Jan 2014 11:20:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:20:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90919581"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 11:20:52 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:20:51 -0500
Message-ID: <52D66F11.204@citrix.com>
Date: Wed, 15 Jan 2014 11:20:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code
	in	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/01/14 22:48, Annie Li wrote:
> Current netfront only grants pages for grant copy, not for grant transfer, so
> remove corresponding transfer code and add receiving copy code in
> xennet_release_rx_bufs.

While netfront only supports a copying backend, I don't see anything
preventing the backend from retaining mappings to netfront's Rx buffers...

> Signed-off-by: Annie Li <Annie.li@oracle.com>
> ---
>  drivers/net/xen-netfront.c |   60 ++-----------------------------------------
>  1 files changed, 3 insertions(+), 57 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index e59acb1..692589e 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>  
>  static void xennet_release_rx_bufs(struct netfront_info *np)
>  {
[...]
> -		mfn = gnttab_end_foreign_transfer_ref(ref);
> +		gnttab_end_foreign_access_ref(ref, 0);

... the gnttab_end_foreign_access_ref() may then fail and...

>  		gnttab_release_grant_reference(&np->gref_rx_head, ref);
>  		np->grant_rx_ref[id] = GRANT_INVALID_REF;
[...]
> +		kfree_skb(skb);

... this could then potentially free pages that the backend still has
mapped.  If the pages are then reused, this would leak information to
the backend.

Since only a buggy backend could cause this, leaking the skbs and
grant refs would be acceptable here.  I would also print an error.
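The teardown policy suggested above can be sketched in a few lines: if revoking the grant fails because the backend still maps the page, deliberately leak the buffer and warn, rather than free a page the backend can still write to. Every name here (rx_slot, end_access, release_rx_slots) is an illustrative stand-in, not the actual netfront code.

```c
/* Sketch: leak (and warn about) buffers whose grants cannot be revoked,
 * instead of freeing pages a buggy backend may still have mapped. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct rx_slot {
	bool in_use;          /* slot holds a granted rx buffer */
	bool backend_mapped;  /* buggy backend still maps the page */
};

/* Returns true when the grant was successfully revoked. */
static bool end_access(struct rx_slot *s)
{
	return !s->backend_mapped;
}

/* Release all rx buffers; returns the number of slots leaked. */
static int release_rx_slots(struct rx_slot *slots, int n)
{
	int leaked = 0;

	for (int i = 0; i < n; i++) {
		if (!slots[i].in_use)
			continue;
		if (!end_access(&slots[i])) {
			/* Freeing here could hand a reused page to the
			 * backend; leaking is the safe failure mode. */
			fprintf(stderr,
				"rx slot %d: grant still in use, leaking\n", i);
			leaked++;
			continue;
		}
		slots[i].in_use = false; /* stands in for kfree_skb() etc. */
	}
	return leaked;
}
```

The design choice is that information-leak safety beats memory accounting: a handful of leaked skbs against a misbehaving backend is preferable to exposing reused page contents.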

While checking how blkfront handles this, I noticed it doesn't appear
to do the right thing either.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:43:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Orl-0005Dr-34; Wed, 15 Jan 2014 11:42:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3Orj-0005Dm-25
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:42:35 +0000
Received: from [193.109.254.147:54390] by server-10.bemta-14.messagelabs.com
	id 4D/17-20752-A2476D25; Wed, 15 Jan 2014 11:42:34 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389786151!10921147!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4871 invoked from network); 15 Jan 2014 11:42:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:42:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93044749"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 11:42:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:42:09 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3OrJ-0007gq-29;
	Wed, 15 Jan 2014 11:42:09 +0000
Date: Wed, 15 Jan 2014 11:42:09 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140115114208.GK5698@zion.uk.xensource.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D66F11.204@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: netdev@vger.kernel.org, Annie Li <Annie.li@oracle.com>, wei.liu2@citrix.com,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 11:20:49AM +0000, David Vrabel wrote:
> On 09/01/14 22:48, Annie Li wrote:
> > Current netfront only grants pages for grant copy, not for grant transfer, so
> > remove corresponding transfer code and add receiving copy code in
> > xennet_release_rx_bufs.
> 
> While netfront only supports a copying backend, I don't see anything
> preventing the backend from retaining mappings to netfront's Rx buffers...
> 

Correct.

> > Signed-off-by: Annie Li <Annie.li@oracle.com>
> > ---
> >  drivers/net/xen-netfront.c |   60 ++-----------------------------------------
> >  1 files changed, 3 insertions(+), 57 deletions(-)
> > 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index e59acb1..692589e 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
> >  
> >  static void xennet_release_rx_bufs(struct netfront_info *np)
> >  {
> [...]
> > -		mfn = gnttab_end_foreign_transfer_ref(ref);
> > +		gnttab_end_foreign_access_ref(ref, 0);
> 
> ... the gnttab_end_foreign_access_ref() may then fail and...
> 

Oh, I see. Andrew was actually referencing this function. Yes, it can
fail. Since he omitted "_ref" I looked at the other function when I
replied to him...

> >  		gnttab_release_grant_reference(&np->gref_rx_head, ref);
> >  		np->grant_rx_ref[id] = GRANT_INVALID_REF;
> [...]
> > +		kfree_skb(skb);
> 
> ... this could then potentially free pages that the backend still has
> mapped.  If the pages are then reused, this would leak information to
> the backend.
> 
> Since only a buggy backend would result in this, leaking the skbs and
> grant refs would be acceptable here.  I would also print an error.
> 

How about using gnttab_end_foreign_access? The deferred queue looks like
the right solution -- the pending page won't get freed until the gref is
quiescent.

Wei.

> While checking blkfront for how it handles this, it also doesn't appear
> to do the right thing either.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:47:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OwA-0005LS-PM; Wed, 15 Jan 2014 11:47:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Ow9-0005LJ-Cu
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 11:47:10 +0000
Received: from [85.158.139.211:26481] by server-16.bemta-5.messagelabs.com id
	03/B4-11843-B3576D25; Wed, 15 Jan 2014 11:47:07 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389786425!9882985!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29879 invoked from network); 15 Jan 2014 11:47:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:47:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90925050"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 11:47:04 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:47:04 -0500
Message-ID: <52D67536.4030106@citrix.com>
Date: Wed, 15 Jan 2014 11:47:02 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
In-Reply-To: <20140115103707.GI5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 10:37, Wei Liu wrote:
> On Tue, Jan 14, 2014 at 07:28:39PM +0000, Zoltan Kiss wrote:
>> The recent patch to fix receive side flow control (11b57f) solved the spinning
>> thread problem, but caused another one. The receive side can stall if:
>> - xenvif_rx_action sets rx_queue_stopped to false
>> - interrupt happens, and sets rx_event to true
>> - then xenvif_kthread sets rx_event to false
>>
>
> If you mean "rx_work_todo" returns false.
>
> In this case
>
> (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
>
> can still be true, can't it?
Sorry, I should have written rx_queue_stopped to true

>
>> Also, through rx_event a malicious guest can force the RX thread to spin. This
>> patch ditches those two variables and reworks rx_work_todo. If the thread finds it
>
> This seems to be a bigger problem. Can you elaborate?
My mistake too. I forgot that rx_action sets it to false, so it's not
really spinning. However, the thread still has to run xenvif_rx_action
to figure out there is no space in the ring before it sets rx_event to
false. With my patch we can quit earlier.
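The stall scenario being debated comes down to a truth table over the predicate Wei quoted earlier, `(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event`. A minimal model of it, with the struct and field names as illustrative stand-ins for the real vif state rather than the netback code itself:

```c
/* Truth-table model of the rx_work_todo predicate under discussion. */
#include <assert.h>
#include <stdbool.h>

struct vif_state {
	bool rx_queue_empty;   /* models skb_queue_empty(&vif->rx_queue) */
	bool rx_queue_stopped; /* models vif->rx_queue_stopped */
	bool rx_event;         /* models vif->rx_event */
};

static bool rx_work_todo(const struct vif_state *v)
{
	return (!v->rx_queue_empty && !v->rx_queue_stopped) || v->rx_event;
}
```

The stall case is then visible directly: with packets queued (`rx_queue_empty == false`) but the queue stopped and the interrupt's `rx_event` already consumed, the predicate returns false and the kthread sleeps even though work remains.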

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:47:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3OwA-0005LS-PM; Wed, 15 Jan 2014 11:47:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Ow9-0005LJ-Cu
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 11:47:10 +0000
Received: from [85.158.139.211:26481] by server-16.bemta-5.messagelabs.com id
	03/B4-11843-B3576D25; Wed, 15 Jan 2014 11:47:07 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389786425!9882985!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29879 invoked from network); 15 Jan 2014 11:47:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:47:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90925050"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 11:47:04 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:47:04 -0500
Message-ID: <52D67536.4030106@citrix.com>
Date: Wed, 15 Jan 2014 11:47:02 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
In-Reply-To: <20140115103707.GI5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 10:37, Wei Liu wrote:
> On Tue, Jan 14, 2014 at 07:28:39PM +0000, Zoltan Kiss wrote:
>> The recent patch to fix receive side flow control (11b57f) solved the spinning
>> thread problem, however caused an another one. The receive side can stall, if:
>> - xenvif_rx_action sets rx_queue_stopped to false
>> - interrupt happens, and sets rx_event to true
>> - then xenvif_kthread sets rx_event to false
>>
>
> If you mean "rx_work_todo" returns false.
>
> In this case
>
> (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
>
> can still be true, can't it?
Sorry, I should have written: sets rx_queue_stopped to true

>
>> Also, through rx_event a malicious guest can force the RX thread to spin. This
>> patch ditch that two variable, and rework rx_work_todo. If the thread finds it
>
> This seems to be a bigger problem. Can you elaborate?
My mistake too. I forgot that rx_action sets it to false, so it's not 
really spinning. However, the thread still has to run xenvif_rx_action 
to figure out that there is no space in the ring before it sets 
rx_event to false. In my patch we can quit earlier.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 11:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 11:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3P1K-0005q3-IO; Wed, 15 Jan 2014 11:52:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3P1J-0005py-IR
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 11:52:29 +0000
Received: from [85.158.137.68:43848] by server-7.bemta-3.messagelabs.com id
	4F/D0-27599-B7676D25; Wed, 15 Jan 2014 11:52:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389786745!9224617!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1874 invoked from network); 15 Jan 2014 11:52:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 11:52:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90925798"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 11:52:24 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 06:52:24 -0500
Message-ID: <52D67677.4050407@citrix.com>
Date: Wed, 15 Jan 2014 11:52:23 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
In-Reply-To: <20140115114208.GK5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, Annie Li <Annie.li@oracle.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 11:42, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 11:20:49AM +0000, David Vrabel wrote:
>> On 09/01/14 22:48, Annie Li wrote:
>>> Current netfront only grants pages for grant copy, not for grant transfer, so
>>> remove corresponding transfer code and add receiving copy code in
>>> xennet_release_rx_bufs.
>>
>> While netfront only supports a copying backend, I don't see anything
>> preventing the backend from retaining mappings to netfront's Rx buffers...
>>
> 
> Correct.
> 
>>> Signed-off-by: Annie Li <Annie.li@oracle.com>
>>> ---
>>>  drivers/net/xen-netfront.c |   60 ++-----------------------------------------
>>>  1 files changed, 3 insertions(+), 57 deletions(-)
>>>
>>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>>> index e59acb1..692589e 100644
>>> --- a/drivers/net/xen-netfront.c
>>> +++ b/drivers/net/xen-netfront.c
>>> @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>>>  
>>>  static void xennet_release_rx_bufs(struct netfront_info *np)
>>>  {
>> [...]
>>> -		mfn = gnttab_end_foreign_transfer_ref(ref);
>>> +		gnttab_end_foreign_access_ref(ref, 0);
>>
>> ... the gnttab_end_foreign_access_ref() may then fail and...
>>
> 
> Oh, I see. Andrew was actually referencing this function. Yes, it can
> fail. Since he omitted "_ref" I looked at the other function when I
> replied to him...
> 
>>>  		gnttab_release_grant_reference(&np->gref_rx_head, ref);
>>>  		np->grant_rx_ref[id] = GRANT_INVALID_REF;
>> [...]
>>> +		kfree_skb(skb);
>>
>> ... this could then potentially free pages that the backend still has
>> mapped.  If the pages are then reused, this would leak information to
>> the backend.
>>
>> Since only a buggy backend would result in this, leaking the skbs and
>> grant refs would be acceptable here.  I would also print an error.
>>
> 
> How about using gnttab_end_foreign_access. The deferred queue looks like
> a right solution -- pending page won't get freed until gref is
> quiescent.

This is closer to the correct approach, but I don't think it's quite 
right yet. The skb owns the pages, so we don't want 
gnttab_end_foreign_access() to free them: freeing the skb would then 
attempt to free them again.

Having gnttab_end_foreign_access() do a free just looks odd to me; 
the free isn't paired with any alloc in the grant table code.

It seems more logical to me that granting access takes an additional
page ref, and then ending access releases that ref.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 12:31:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 12:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3PcR-0007YE-2M; Wed, 15 Jan 2014 12:30:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3PcP-0007Y9-NV
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 12:30:50 +0000
Received: from [85.158.137.68:5171] by server-16.bemta-3.messagelabs.com id
	86/54-26128-87F76D25; Wed, 15 Jan 2014 12:30:48 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389789046!9313926!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6505 invoked from network); 15 Jan 2014 12:30:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 12:30:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="93057660"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 12:30:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 07:30:45 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3PcL-0008QO-44;
	Wed, 15 Jan 2014 12:30:45 +0000
Date: Wed, 15 Jan 2014 12:30:45 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140115123045.GL5698@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com
Subject: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen: master branch
Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
               linux-next

When I tried to start an HVM domain running squeeze with 2.6.32, I got

(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
(XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
[... more of this ...]
[67196.736733] device vif5.0 entered promiscuous mode
[67196.746221] IPv6: ADDRCONF(NETDEV_UP): vif5.0: link is not ready
[67196.911973] device vif5.0-emu entered promiscuous mode
[67196.921890] xenbr0: port 3(vif5.0-emu) entered forwarding state
[67196.927833] xenbr0: port 3(vif5.0-emu) entered forwarding state
(d5) HVM Loader
(d5) Detected Xen v4.4-unstable
(d5) Xenbus rings @0xfeffc000, event channel 3
(d5) System requested SeaBIOS
(d5) CPU speed is 2660 MHz
(d5) Relocating guest memory for lowmem MMIO space disabled
(XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
(d5) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
(d5) PCI-ISA link 1 routed to IRQ10

The guest was eventually up and running and seemed to be working
fine, so the failure in the log was not that harmful after all. But
it would be nice to figure out what is going on here. I suspect the
toolstack was trying to set something up, failed, then retried and
eventually succeeded. Sadly the error log wasn't detailed enough to
provide direct insight into the root cause, and I couldn't tell which
side (toolstack, kernel, or Xen) to blame.

The failing snippet is in Xen's common/event_channel.c:

269     if ( (rchn->state != ECS_UNBOUND) ||                                        
270          (rchn->u.unbound.remote_domid != ld->domain_id) )                      
271         ERROR_EXIT_DOM(-EINVAL, rd);     

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 12:38:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 12:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Pjw-0007mV-7d; Wed, 15 Jan 2014 12:38:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Pjs-0007hf-Px
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 12:38:34 +0000
Received: from [85.158.139.211:30500] by server-7.bemta-5.messagelabs.com id
	AA/DE-04824-24186D25; Wed, 15 Jan 2014 12:38:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389789504!9898149!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16822 invoked from network); 15 Jan 2014 12:38:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 12:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90937853"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 12:37:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 07:37:58 -0500
Message-ID: <1389789477.12434.196.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 15 Jan 2014 12:37:57 +0000
In-Reply-To: <20140115123045.GL5698@zion.uk.xensource.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 12:30 +0000, Wei Liu wrote:
> Xen: master branch
> Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>                linux-next
> 
> When I tried to start an HVM domain running squeeze with 2.6.32, I got
> 
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22

Julien mentioned having seen something very similar with BSD on ARM
yesterday...

> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> [... more of this ...]
> [67196.736733] device vif5.0 entered promiscuous mode
> [67196.746221] IPv6: ADDRCONF(NETDEV_UP): vif5.0: link is not ready
> [67196.911973] device vif5.0-emu entered promiscuous mode
> [67196.921890] xenbr0: port 3(vif5.0-emu) entered forwarding state
> [67196.927833] xenbr0: port 3(vif5.0-emu) entered forwarding state
> (d5) HVM Loader
> (d5) Detected Xen v4.4-unstable
> (d5) Xenbus rings @0xfeffc000, event channel 3
> (d5) System requested SeaBIOS
> (d5) CPU speed is 2660 MHz
> (d5) Relocating guest memory for lowmem MMIO space disabled
> (XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
> (d5) PCI-ISA link 0 routed to IRQ5
> (XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
> (d5) PCI-ISA link 1 routed to IRQ10
> 
> The guest was eventually up and running and seemed to be working
> fine, so the failure in the log was not that harmful after all. But
> it would be nice to figure out what is going on here. I suspect the
> toolstack was trying to set something up, failed, then retried and
> eventually succeeded. Sadly the error log wasn't detailed enough to
> provide direct insight into the root cause, and I couldn't tell which
> side (toolstack, kernel, or Xen) to blame.
> 
> The failing snippet in Xen common/event_channel.c
> 
> 269     if ( (rchn->state != ECS_UNBOUND) ||                                        
> 270          (rchn->u.unbound.remote_domid != ld->domain_id) )                      
> 271         ERROR_EXIT_DOM(-EINVAL, rd);     
> 
> Wei.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 12:38:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 12:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Pjw-0007mV-7d; Wed, 15 Jan 2014 12:38:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Pjs-0007hf-Px
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 12:38:34 +0000
Received: from [85.158.139.211:30500] by server-7.bemta-5.messagelabs.com id
	AA/DE-04824-24186D25; Wed, 15 Jan 2014 12:38:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389789504!9898149!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16822 invoked from network); 15 Jan 2014 12:38:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 12:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,662,1384300800"; d="scan'208";a="90937853"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 12:37:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 07:37:58 -0500
Message-ID: <1389789477.12434.196.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 15 Jan 2014 12:37:57 +0000
In-Reply-To: <20140115123045.GL5698@zion.uk.xensource.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 12:30 +0000, Wei Liu wrote:
> Xen: master branch
> Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>                linux-next
> 
> When I tried to start a HVM domain running squeeze with 2.6.32, I got
> 
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22

Julien mentioned having seen something very similar with BSD on ARM
yesterday...

> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> [... more of this ...]
> [67196.736733] device vif5.0 entered promiscuous mode
> [67196.746221] IPv6: ADDRCONF(NETDEV_UP): vif5.0: link is not ready
> [67196.911973] device vif5.0-emu entered promiscuous mode
> [67196.921890] xenbr0: port 3(vif5.0-emu) entered forwarding state
> [67196.927833] xenbr0: port 3(vif5.0-emu) entered forwarding state
> (d5) HVM Loader
> (d5) Detected Xen v4.4-unstable
> (d5) Xenbus rings @0xfeffc000, event channel 3
> (d5) System requested SeaBIOS
> (d5) CPU speed is 2660 MHz
> (d5) Relocating guest memory for lowmem MMIO space disabled
> (XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
> (d5) PCI-ISA link 0 routed to IRQ5
> (XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
> (d5) PCI-ISA link 1 routed to IRQ10
> 
> The guest was eventually up and running and seemed to be working fine.
> The failure in the log was not that harmful after all... But it would be
> nice to figure out what's going on here. I suspect the toolstack was
> trying to set up something, failed, then retried and eventually succeeded.
> Sadly the error log wasn't detailed enough to provide direct insight
> into the root cause and I couldn't tell which side (toolstack, kernel,
> Xen) to blame.
> 
> The failing snippet in Xen common/event_channel.c
> 
> 269     if ( (rchn->state != ECS_UNBOUND) ||
> 270          (rchn->u.unbound.remote_domid != ld->domain_id) )
> 271         ERROR_EXIT_DOM(-EINVAL, rd);
> 
> Wei.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:14:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3QHv-000195-ID; Wed, 15 Jan 2014 13:13:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3QHu-00018t-1X
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 13:13:42 +0000
Received: from [85.158.137.68:25763] by server-6.bemta-3.messagelabs.com id
	2D/CB-04868-58986D25; Wed, 15 Jan 2014 13:13:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389791619!5619647!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4387 invoked from network); 15 Jan 2014 13:13:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90947623"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 13:13:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 08:13:18 -0500
Message-ID: <1389791597.12434.216.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Annie Li <Annie.li@oracle.com>
Date: Wed, 15 Jan 2014 13:13:17 +0000
In-Reply-To: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
References: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 02:33 +0800, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> After the split event channel feature was supported by netback/netfront,
> "xl network-list" does not show the event channels correctly. Add tx-/rx-evt-ch
> columns to show the tx/rx event channels correctly.
> 
> Signed-off-by: Annie Li <annie.li@oracle.com>

How critical is this for 4.4?

Please consider
http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
and make a case for it if you think it should go in.

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..e6368c7 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -488,7 +488,8 @@ libxl_nicinfo = Struct("nicinfo", [
>      ("frontend_id", uint32),
>      ("devid", libxl_devid),
>      ("state", integer),
> -    ("evtch", integer),
> +    ("evtch_tx", integer),
> +    ("evtch_rx", integer),

This needs backwards compatibility handling, see the big comment at the
head of libxl.h and the other examples in that file. I'm doubtful that
you will be able to remove the evtch field without breaking the API, so
it probably needs to stay even if it is explicitly invalid under some
circumstances.

It also needs a suitable LIBXL_HAVE_ #define, again see libxl.h.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:33:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:33:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Qaf-0002FD-I4; Wed, 15 Jan 2014 13:33:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Qad-0002F8-QD
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 13:33:04 +0000
Received: from [85.158.139.211:48174] by server-8.bemta-5.messagelabs.com id
	08/8A-29838-E0E86D25; Wed, 15 Jan 2014 13:33:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389792780!9919937!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29849 invoked from network); 15 Jan 2014 13:33:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:33:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93074217"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 13:32:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 08:32:43 -0500
Message-ID: <1389792762.3793.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Wed, 15 Jan 2014 13:32:42 +0000
In-Reply-To: <1389651599-26562-1-git-send-email-baozich@gmail.com>
References: <1389651599-26562-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xenproject.org,
	Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/arm{32,
 64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 06:19 +0800, Chen Baozi wrote:
> Section shift for level-2 page table should be #21 rather than #20. Besides,
> since there are {FIRST,SECOND,THIRD}_SHIFT macros defined in asm/page.h, use
> these macros instead of hard-coded shift values.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

WRT a 4.4 freeze exception the main bit is the use of #21 instead of #20
as the shift for the L2 entry, which can result in an UNK/SBZP bit being
set. ARM ARM says:

        Hardware must implement the bit as Read-As-Zero, and must ignore
        writes to the field.
        
        Software must not rely on the field reading as all 0s, and
        except for writing back to the register must treat the value
        as if it is UNKNOWN. Software must use an SBZP policy to write
        to the field.

The danger is that some future version of the architecture assigns
meaning to that bit. All in all this seems like a pretty benign issue,
but on the flip side the fix is reasonably low risk; the only real
danger is that one of the replacements is wrong, and most of them are
pretty trivial, although s/#18/#(SECOND_SHIFT - 3)/ is a bit less so.

I was initially leaning towards putting this into the queue for 4.5, but
on reflection I'm now starting to lean the other way.

Does anyone feel strongly that this shouldn't go into 4.4?

> ---
>  xen/arch/arm/arm32/head.S | 20 ++++++++++----------
>  xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
>  2 files changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 96230ac..f3eab89 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -291,14 +291,14 @@ cpu_init_done:
>          ldr   r4, =boot_second
>          add   r4, r4, r10            /* r1 := paddr (boot_second) */
>  
> -        lsr   r2, r9, #20            /* Base address for 2MB mapping */
> -        lsl   r2, r2, #20
> +        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
> +        lsl   r2, r2, #SECOND_SHIFT
>          orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
>          orr   r2, r2, #PT_LOWER(MEM)
>  
>          /* ... map of vaddr(start) in boot_second */
>          ldr   r1, =start
> -        lsr   r1, #18                /* Slot for vaddr(start) */
> +        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>          strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
>  
>          /* ... map of paddr(start) in boot_second */
> @@ -307,7 +307,7 @@ cpu_init_done:
>                                        * then the mapping was done in
>                                        * boot_pgtable above */
>  
> -        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
> +        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
>          strd  r2, r3, [r4, r1]       /* Map Xen there */
>  1:
>  
> @@ -339,8 +339,8 @@ paging:
>          /* Add UART to the fixmap table */
>          ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
>          mov   r3, #0
> -        lsr   r2, r11, #12
> -        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
> +        lsr   r2, r11, #THIRD_SHIFT
> +        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>          orr   r2, r2, #PT_UPPER(DEV_L3)
>          orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
>          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> @@ -353,7 +353,7 @@ paging:
>          orr   r2, r2, #PT_UPPER(PT)
>          orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
>          ldr   r4, =FIXMAP_ADDR(0)
> -        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
>          strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
>  
>          /* Use a virtual address to access the UART. */
> @@ -365,12 +365,12 @@ paging:
>  
>          ldr   r1, =boot_second
>          mov   r3, #0x0
> -        lsr   r2, r8, #21
> -        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
> +        lsr   r2, r8, #SECOND_SHIFT
> +        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
>          orr   r2, r2, #PT_UPPER(MEM)
>          orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
>          ldr   r4, =BOOT_FDT_VIRT_START
> -        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
>          strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
>          dsb
>  1:
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index bebddf0..5b164e9 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -278,11 +278,11 @@ skip_bss:
>          str   x2, [x4, #0]           /* Map it in slot 0 */
>  
>          /* ... map of paddr(start) in boot_first */
> -        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
> +        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
>          and   x1, x2, 0x1ff          /* x1 := Slot to use */
>          cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
>  
> -        lsl   x2, x2, #30            /* Base address for 1GB mapping */
> +        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
>          mov   x3, #PT_MEM            /* x2 := Section map */
>          orr   x2, x2, x3
>          lsl   x1, x1, #3             /* x1 := Slot offset */
> @@ -292,23 +292,23 @@ skip_bss:
>          ldr   x4, =boot_second
>          add   x4, x4, x20            /* x4 := paddr (boot_second) */
>  
> -        lsr   x2, x19, #20           /* Base address for 2MB mapping */
> -        lsl   x2, x2, #20
> +        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
> +        lsl   x2, x2, #SECOND_SHIFT
>          mov   x3, #PT_MEM            /* x2 := Section map */
>          orr   x2, x2, x3
>  
>          /* ... map of vaddr(start) in boot_second */
>          ldr   x1, =start
> -        lsr   x1, x1, #18            /* Slot for vaddr(start) */
> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>          str   x2, [x4, x1]           /* Map vaddr(start) */
>  
>          /* ... map of paddr(start) in boot_second */
> -        lsr   x1, x19, #30           /* Base paddr */
> +        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
>          cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
>                                        * then the mapping was done in
>                                        * boot_pgtable or boot_first above */
>  
> -        lsr   x1, x19, #18           /* Slot for paddr(start) */
> +        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
>          str   x2, [x4, x1]           /* Map Xen there */
>  1:
>  
> @@ -340,8 +340,8 @@ paging:
>          /* Add UART to the fixmap table */
>          ldr   x1, =xen_fixmap
>          add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
> -        lsr   x2, x23, #12
> -        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
> +        lsr   x2, x23, #THIRD_SHIFT
> +        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>          mov   x3, #PT_DEV_L3
>          orr   x2, x2, x3             /* x2 := 4K dev map including UART */
>          str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> @@ -354,7 +354,7 @@ paging:
>          mov   x3, #PT_PT
>          orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
>          ldr   x1, =FIXMAP_ADDR(0)
> -        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
>  
>          /* Use a virtual address to access the UART. */
> @@ -364,12 +364,12 @@ paging:
>          /* Map the DTB in the boot misc slot */
>          cbnz  x22, 1f                /* Only on boot CPU */
>  
> -        lsr   x2, x21, #21
> -        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
> +        lsr   x2, x21, #SECOND_SHIFT
> +        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
>          mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
>          orr   x2, x2, x3
>          ldr   x1, =BOOT_FDT_VIRT_START
> -        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x4 := Slot for BOOT_FDT_VIRT_START */
>          str   x2, [x4, x1]           /* Map it in the early fdt slot */
>          dsb   sy
>  1:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>          add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
> -        lsr   x2, x23, #12
> -        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
> +        lsr   x2, x23, #THIRD_SHIFT
> +        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>          mov   x3, #PT_DEV_L3
>          orr   x2, x2, x3             /* x2 := 4K dev map including UART */
>          str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
> @@ -354,7 +354,7 @@ paging:
>          mov   x3, #PT_PT
>          orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
>          ldr   x1, =FIXMAP_ADDR(0)
> -        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
>  
>          /* Use a virtual address to access the UART. */
> @@ -364,12 +364,12 @@ paging:
>          /* Map the DTB in the boot misc slot */
>          cbnz  x22, 1f                /* Only on boot CPU */
>  
> -        lsr   x2, x21, #21
> -        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
> +        lsr   x2, x21, #SECOND_SHIFT
> +        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
>          mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
>          orr   x2, x2, x3
>          ldr   x1, =BOOT_FDT_VIRT_START
> -        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for BOOT_FDT_VIRT_START */
>          str   x2, [x4, x1]           /* Map it in the early fdt slot */
>          dsb   sy
>  1:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:38:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Qfq-0002Qc-CI; Wed, 15 Jan 2014 13:38:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zir_blazer@hotmail.com>) id 1W3QZT-0002Ej-Ow
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 13:31:52 +0000
Received: from [85.158.143.35:18213] by server-2.bemta-4.messagelabs.com id
	09/57-11386-7CD86D25; Wed, 15 Jan 2014 13:31:51 +0000
X-Env-Sender: zir_blazer@hotmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389792708!11909739!1
X-Originating-IP: [65.54.190.91]
X-SpamReason: No, hits=0.7 required=7.0 tests=FORGED_HOTMAIL_RCVD,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_2,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22742 invoked from network); 15 Jan 2014 13:31:49 -0000
Received: from bay0-omc2-s16.bay0.hotmail.com (HELO
	bay0-omc2-s16.bay0.hotmail.com) (65.54.190.91)
	by server-4.tower-21.messagelabs.com with SMTP;
	15 Jan 2014 13:31:49 -0000
Received: from BAY170-W96 ([65.54.190.123]) by bay0-omc2-s16.bay0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Wed, 15 Jan 2014 05:31:48 -0800
X-TMN: [AD6p9bBbpBCLSLjQgcSLDC8RvHUmnjdwF4nf8jITcoY=]
X-Originating-Email: [zir_blazer@hotmail.com]
Message-ID: <BAY170-W968F25835DE78ECD78357FF3BE0@phx.gbl>
From: Zir Blazer <zir_blazer@hotmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 10:31:48 -0300
Importance: Normal
In-Reply-To: <52D2DE47.4070506@citrix.com>
References: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>,
	<52D2DE47.4070506@citrix.com>
MIME-Version: 1.0
X-OriginalArrivalTime: 15 Jan 2014 13:31:48.0842 (UTC)
	FILETIME=[2476B4A0:01CF11F6]
X-Mailman-Approved-At: Wed, 15 Jan 2014 13:38:24 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8187801456345481387=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8187801456345481387==
Content-Type: multipart/alternative;
	boundary="_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_"

--_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

>> I have skimmed over, and lack of boot messages appears to be the
>> main problem.
>> Do you have a serial PCI card you could put in (or does your USB
>> port have debug capabilities), and can you set up a serial console?
>> As for EFI boot itself, there is a lot which is expected not to work
>> at the moment, because of the split between Xen and dom0 and who
>> gets to use boot services etc., but it is expected to boot and
>> function basically.
>> Have you tried xen unstable (which is currently 4.4-rc1 + some)? It
>> might be an interesting datapoint, but I can't offhand recall whether
>> there is much/anything relevant to EFI boot.
>> ~Andrew

I recall having tried xen-git from the Arch Linux User Repository, here:
https://aur.archlinux.org/packages/xen-git/?comments=all
This was during late December, but I assumed it was a 4.4 unstable
version. I think I managed to build it as an EFI executable by removing
the oxen.patch that was causing compile issues (check the comments). The
behavior was the same.

I spent some time reading how to use the serial port from the Xen wiki
to figure out what I have to do:
http://wiki.xen.org/wiki/Xen_Serial_Console
No, I don't have a PCI card to provide a serial port, nor am I aware
that any of the USB ports have anything special like debug capabilities.
Also, as I only have PCIe slots, I suppose it will be even harder to get
a PCIe card with a serial port. What my motherboard does have is a
serial port header. As the BIOS actually has an option to enable a
serial port, I suppose it should be ready to use if I plug in a serial
bracket like this one:
http://www.irblaster.info/asus_bracket.jpg
I *SHOULD* have one of those in a box somewhere, because they were more
common in an era when motherboards usually came with one as an
accessory. What I'm missing is the serial-to-USB cable. Not sure where I
can get one locally, either.

Also, I don't get a clear picture from the wiki article regarding the
serial-to-USB cable. Basically, what I understand is that the serial
side must be connected to the computer that I want to get output from,
and the USB side goes to the one that will be running the terminal
application to read it; it is impossible to do it the other way around
because, at boot, the computer with Xen can't use a USB port as a COM
port the way the other machine can. Assuming it works like that, it is
doable.
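[Editor's note, not part of the original mail: once a serial link is in place, the wiki's setup boils down to boot options like the following. This is a sketch under assumptions — that the header enumerates as COM1 at 115200 baud 8n1, and the paths/device names are illustrative:]

```shell
# GRUB entry sketch for a Xen serial console (values are assumptions):
# com1=<baud>,<data bits><parity><stop bits> configures the UART,
# console=com1 directs Xen's boot output to it.
multiboot /boot/xen.gz com1=115200,8n1 console=com1
# console=hvc0 on the dom0 kernel line routes dom0's console through Xen.
module    /boot/vmlinuz-linux console=hvc0 root=/dev/sda2 ro
```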
Additionally, I have been reading about doing it via Serial over LAN
(SoL), because that way I don't need to figure out where to purchase, or
how to build, the cable; but getting info about how to use SoL has been
hard. I have vPro support with Intel AMT available, and I read that it
can be used for SoL. The BIOS also has an option to provide EMS console
redirection for out-of-band management, but that seems to be useful only
for some Windows installations that support it; I don't see anything
about redirecting serial output. Most of the instructions I read about
SoL seem to be based around using IPMI (which my motherboard doesn't
have) rather than AMT. Is it possible to get SoL working with only AMT?
The instructions aren't clear either on whether the other computer must
support AMT too, or what software I must run so it can listen on the
network (keep in mind that that computer is running Windows XP).
I will try to get my hands on the required bracket and cable to do it
the serial-to-USB way. Otherwise it must be via SoL, and I can't figure
out how to make that work.

BTW, I found a thread from someone who was having a similar issue:
http://lists.xen.org/archives/html/xen-devel/2012-12/msg01791.html
Xen didn't detect ACPI when booting in UEFI mode, and a workaround was
required. However, that was for Xen 4.2.

--_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_--


--===============8187801456345481387==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8187801456345481387==--


From xen-devel-bounces@lists.xen.org Wed Jan 15 13:38:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Qfq-0002Qc-CI; Wed, 15 Jan 2014 13:38:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zir_blazer@hotmail.com>) id 1W3QZT-0002Ej-Ow
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 13:31:52 +0000
Received: from [85.158.143.35:18213] by server-2.bemta-4.messagelabs.com id
	09/57-11386-7CD86D25; Wed, 15 Jan 2014 13:31:51 +0000
X-Env-Sender: zir_blazer@hotmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389792708!11909739!1
X-Originating-IP: [65.54.190.91]
X-SpamReason: No, hits=0.7 required=7.0 tests=FORGED_HOTMAIL_RCVD,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_2,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22742 invoked from network); 15 Jan 2014 13:31:49 -0000
Received: from bay0-omc2-s16.bay0.hotmail.com (HELO
	bay0-omc2-s16.bay0.hotmail.com) (65.54.190.91)
	by server-4.tower-21.messagelabs.com with SMTP;
	15 Jan 2014 13:31:49 -0000
Received: from BAY170-W96 ([65.54.190.123]) by bay0-omc2-s16.bay0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Wed, 15 Jan 2014 05:31:48 -0800
X-TMN: [AD6p9bBbpBCLSLjQgcSLDC8RvHUmnjdwF4nf8jITcoY=]
X-Originating-Email: [zir_blazer@hotmail.com]
Message-ID: <BAY170-W968F25835DE78ECD78357FF3BE0@phx.gbl>
From: Zir Blazer <zir_blazer@hotmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 10:31:48 -0300
Importance: Normal
In-Reply-To: <52D2DE47.4070506@citrix.com>
References: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>,
	<52D2DE47.4070506@citrix.com>
MIME-Version: 1.0
X-OriginalArrivalTime: 15 Jan 2014 13:31:48.0842 (UTC)
	FILETIME=[2476B4A0:01CF11F6]
X-Mailman-Approved-At: Wed, 15 Jan 2014 13:38:24 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8187801456345481387=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8187801456345481387==
Content-Type: multipart/alternative;
	boundary="_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_"

--_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

>> I have skimmed over=2C and lack of boot messages appears to be the=0A=
    main problem.
>> Do you have a serial PCI card you could put it=2C (or does your USB=0A=
    port have debug capabilities) and can you set up a serial console?>> As=
 for EFI boot itself=2C there is a lot which is expected not to work=0A=
    at the moment=2C because of the split between Xen and dom0 and who=0A=
    gets to use bootservices etc=2C but it is expected to boot and=0A=
    function basically.
>> Have you tried xen unstable (which is currently 4.4-rc1 + some).  It=0A=
    might be an interesting datapoint=2C but I cant offhand recall whether=
=0A=
    there is much/anything relevant to EFI boot.
>> ~Andrew

I recall having tried with xen-git from Arch Linux User Repository=2C here:=
https://aur.archlinux.org/packages/xen-git/?comments=3Dall
This was during late-December=2C but I assumed it was a 4.4 unstable versio=
n. I think I managed to build it as a EFI executable by removing the oxen.p=
atch that were causing compiling issues (Check comments). Behaviator was th=
e same.

I spend some time reading how to use the Serial Port from the Xen Wiki to f=
igure out what I have to do:http://wiki.xen.org/wiki/Xen_Serial_Console
No=2C I don't have a PCI Card to provide a Serial Port=2C neither I'm aware=
 that any of the USB Ports have anything special like debug capabilities. A=
lso=2C as I only have PCIe Slots=2C I suppose it will be even more hard to =
get a PCIe Card with a Serial Port. What my Motherboard got is a Serial Por=
t Header. As the BIOS actually has an option to enable a Serial Port=2C I s=
uppose it should be ready to use if I plug there a Serial bracket like this=
:http://www.irblaster.info/asus_bracket.jpgI *SHOULD* have one of those in =
a box somewhere=2C because they were more common in another era where Mothe=
rboards usually came with one as accessory. What I'm missing is the Serial-=
to-USB cable. Not sure where I can get one locally=2C either.
Also=2C I don't get a clear message out of the wiki article regarding the S=
erial-to-USB cable. Basically=2C what I understand is that the Serial side =
must be connected to the computer that I want to get output from=2C and the=
 USB side goes to the one that will be running the terminal application to =
read it=2C as it is impossible to do it the other way around because on boo=
t the computer with Xen can't use an USB as a COM port like on the other ma=
chine. Assuming it works like that=2C it is doable.
Additionally=2C I have been reading about doing it via Serial over LAN=2C b=
ecause with that I don't need to look around where to purchase or how to bu=
ild the cable=2C but getting info about how to use SoL has been hard. I hav=
e vPro support with Intel AMT available=2C and I read that it can be used f=
or SoL. Also=2C the BIOS has an option to provide EMS Console Redirection f=
or out-of-band management=2C but that seems to be useful only for some Wind=
ows installations that support it=2C I don't see anything regarding redirec=
ting serial output. Most of the instructions I read about SoL seems to be b=
ased around using IPMI (Which my Motherboard doesn't have) instead of AMT. =
It is possible to get SoL working with only AMT?Instructions aren't clear e=
ither regarding the fact that the other computer must support AMT too=2C or=
 what Software I must run so it can listen to the Network (Keep in mind tha=
n that computer is running Windows XP).
I will try to get my hands on the requiered bracket and cable to do it the =
Serial-to-USB way. Else=2C it must be via SoL=2C and I can't figure out how=
 to make it work.

BTW=2C I found a Thread of someone which was having a similar issue:http://=
lists.xen.org/archives/html/xen-devel/2012-12/msg01791.htmlXen didn't detec=
ted ACPI when booting on UEFI=2C and a workaround was required. However=2C =
that was for Xen 4.2. 		 	   		  =

--_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

<html>
<head>
<style><!--
.hmmessage P
{
margin:0px=3B
padding:0px
}
body.hmmessage
{
font-size: 12pt=3B
font-family:Calibri
}
--></style></head>
<body class=3D'hmmessage'><div dir=3D'ltr'><div><span style=3D"background-c=
olor: rgb(250=2C 250=2C 250)=3B color: rgb(51=2C 51=2C 51)=3B font-family: =
'Courier New'=2C Courier=2C monospace=3B font-size: 12px=3B line-height: 18=
px=3B white-space: pre-wrap=3B">&gt=3B</span><i style=3D"color: rgb(51=2C 5=
1=2C 51)=3B font-family: 'Courier New'=2C Courier=2C monospace=3B font-size=
: 12px=3B line-height: 18px=3B white-space: pre-wrap=3B">&gt=3B </i><span s=
tyle=3D"font-size: 12pt=3B">I have skimmed over=2C and lack of boot message=
s appears to be the=0A=
    main problem.</span><BR></div><div><span style=3D"background-color: rgb=
(250=2C 250=2C 250)=3B color: rgb(51=2C 51=2C 51)=3B font-family: 'Courier =
New'=2C Courier=2C monospace=3B font-size: 12px=3B line-height: 18px=3B whi=
te-space: pre-wrap=3B">&gt=3B</span><i style=3D"color: rgb(51=2C 51=2C 51)=
=3B font-family: 'Courier New'=2C Courier=2C monospace=3B font-size: 12px=
=3B line-height: 18px=3B white-space: pre-wrap=3B">&gt=3B </i>Do you have a=
 serial PCI card you could put it=2C (or does your USB=0A=
    port have debug capabilities) and can you set up a serial console?</div=
><div><span style=3D"background-color: rgb(250=2C 250=2C 250)=3B color: rgb=
(51=2C 51=2C 51)=3B font-family: 'Courier New'=2C Courier=2C monospace=3B f=
ont-size: 12px=3B line-height: 18px=3B white-space: pre-wrap=3B">&gt=3B</sp=
an><i style=3D"color: rgb(51=2C 51=2C 51)=3B font-family: 'Courier New'=2C =
Courier=2C monospace=3B font-size: 12px=3B line-height: 18px=3B white-space=
: pre-wrap=3B">&gt=3B </i>As for EFI boot itself=2C there is a lot which is=
 expected not to work=0A=
    at the moment=2C because of the split between Xen and dom0 and who=0A=
    gets to use bootservices etc=2C but it is expected to boot and=0A=
    function basically.<br><span style=3D"background-color: rgb(250=2C 250=
=2C 250)=3B color: rgb(51=2C 51=2C 51)=3B font-family: 'Courier New'=2C Cou=
rier=2C monospace=3B font-size: 12px=3B line-height: 18px=3B white-space: p=
re-wrap=3B">&gt=3B</span><i style=3D"color: rgb(51=2C 51=2C 51)=3B font-fam=
ily: 'Courier New'=2C Courier=2C monospace=3B font-size: 12px=3B line-heigh=
t: 18px=3B white-space: pre-wrap=3B">&gt=3B </i>Have you tried xen unstable=
 (which is currently 4.4-rc1 + some).&nbsp=3B It=0A=
    might be an interesting datapoint=2C but I cant offhand recall whether=
=0A=
    there is much/anything relevant to EFI boot.<br><span style=3D"backgrou=
nd-color: rgb(250=2C 250=2C 250)=3B color: rgb(51=2C 51=2C 51)=3B font-fami=
ly: 'Courier New'=2C Courier=2C monospace=3B font-size: 12px=3B line-height=
: 18px=3B white-space: pre-wrap=3B">&gt=3B</span><i style=3D"color: rgb(51=
=2C 51=2C 51)=3B font-family: 'Courier New'=2C Courier=2C monospace=3B font=
-size: 12px=3B line-height: 18px=3B white-space: pre-wrap=3B">&gt=3B </i>~A=
ndrew<br></div><div><br></div><div>I recall having tried with xen-git from =
Arch Linux User Repository=2C here:</div><div><a href=3D"https://aur.archli=
nux.org/packages/xen-git/?comments=3Dall" target=3D"_blank">https://aur.arc=
hlinux.org/packages/xen-git/?comments=3Dall</a></div><div><br></div><div>Th=
is was during late-December=2C but I assumed it was a 4.4 unstable version.=
 I think I managed to build it as a EFI executable by removing the oxen.pat=
ch that were causing compiling issues (Check comments). Behaviator was the =
same.</div><div><br></div><div><br></div><div><span style=3D"font-size: 12p=
t=3B">I spend some time reading how to use the Serial Port from the Xen Wik=
i to figure out what I have to do:</span></div><div><div><a href=3D"http://=
wiki.xen.org/wiki/Xen_Serial_Console" target=3D"_blank">http://wiki.xen.org=
/wiki/Xen_Serial_Console</a></div></div><div><br></div><div>No=2C I don't h=
ave a PCI Card to provide a Serial Port=2C neither I'm aware that any of th=
e USB Ports have anything special like debug capabilities. Also, as I only have PCIe slots, I suppose it will be even harder to get a PCIe card with a Serial Port. What my Motherboard has is a Serial Port header. As the BIOS actually has an option to enable a Serial Port, I suppose it should be ready to use if I plug in a Serial bracket there, like this:</div><div><a href="http://www.irblaster.info/asus_bracket.jpg" target="_blank">http://www.irblaster.info/asus_bracket.jpg</a></div><div>I *SHOULD* have one of those in a box somewhere, because they were more common in an era when Motherboards usually came with one as an accessory. What I'm missing is the Serial-to-USB cable. I'm not sure where I can get one locally, either.</div><div><br></div><div>Also, I don't get a clear answer out of the wiki article regarding the Serial-to-USB cable. Basically, what I understand is that the Serial side must be connected to the computer that I want to get output from, and the USB side goes to the one that will be running the terminal application to read it, as it is impossible to do it the other way around because at boot the computer with Xen can't use a USB port as a COM port the way the other machine can. Assuming it works like that, it is doable.</div><div><br></div><div>Additionally, I have been reading about doing it via Serial over LAN, because with that I wouldn't need to find out where to purchase, or how to build, the cable; but getting info about how to use SoL has been hard. I have vPro support with Intel AMT available, and I read that it can be used for SoL. Also,&nbsp;<span style="font-size: 12pt;">the BIOS has an option to provide EMS Console Redirection for out-of-band management, but that seems to be useful only for some Windows installations that support it; I don't see anything about redirecting serial output. Most of the instructions I read about SoL seem to be based around using IPMI (which my Motherboard doesn't have) instead of AMT. Is it possible to get SoL working with only AMT?</span></div><div><span style="font-size: 12pt;">The instructions aren't clear either about whether the other computer must support AMT too, or what software I must run so it can listen on the network (keep in mind that that computer is running Windows XP).</span></div><div><br></div><div><span style="font-size: 12pt;">I will try to get my hands on the required bracket and cable to do it the Serial-to-USB way. Otherwise, it must be via SoL, and I can't figure out how to make that work.</span></div><div><br></div><div><br></div><div>BTW, I found a thread from someone who was having a similar issue:</div><div><a href="http://lists.xen.org/archives/html/xen-devel/2012-12/msg01791.html" target="_blank">http://lists.xen.org/archives/html/xen-devel/2012-12/msg01791.html</a></div><div>Xen didn't detect ACPI when booting on UEFI, and a workaround was required. However, that was for Xen 4.2.</div></div></body>
</html>

--_9520a9a8-f9bc-4a27-92ea-6a1d95a9d21b_--
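The setup described above boils down to two pieces of boot configuration plus a terminal program on the viewing machine. A minimal sketch, assuming a Linux machine on the receiving end where the USB-to-serial adapter enumerates as /dev/ttyUSB0 (device name and baud rate are assumptions; adjust to your hardware):

```shell
# Xen command line (in grub.cfg), per the Xen wiki's serial-console setup:
#   com1=115200,8n1 console=com1
# Dom0 Linux kernel command line:
#   console=hvc0
# On the viewing machine, attach a terminal program to the adapter.
# /dev/ttyUSB0 is an assumption; check dmesg for the actual device name.
SERIAL_DEV=/dev/ttyUSB0
BAUD=115200
ATTACH_CMD="screen $SERIAL_DEV $BAUD"
echo "attach with: $ATTACH_CMD"
```

For the SoL route, the `amtterm` client can attach to an AMT machine's serial-over-LAN session from another networked machine; the viewing side does not itself need AMT, only network access to the AMT host.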


--===============8187801456345481387==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8187801456345481387==--


From xen-devel-bounces@lists.xen.org Wed Jan 15 13:40:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3QiC-0002qt-6Z; Wed, 15 Jan 2014 13:40:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3QiA-0002qo-Mh
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 13:40:51 +0000
Received: from [85.158.137.68:8198] by server-9.bemta-3.messagelabs.com id
	9A/52-13104-0EF86D25; Wed, 15 Jan 2014 13:40:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389793246!9331840!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23888 invoked from network); 15 Jan 2014 13:40:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:40:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93076378"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 13:40:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 08:40:45 -0500
Message-ID: <1389793244.3793.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 13:40:44 +0000
In-Reply-To: <1389387012-26247-1-git-send-email-julien.grall@linaro.org>
References: <1389387012-26247-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: IRQ: Protect IRQ to be shared
 between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 20:50 +0000, Julien Grall wrote:
> The current dt_route_irq_to_guest implementation sets IRQ_GUEST whether or not
> the IRQ is correctly set up.
> 
> As an IRQ can be shared between devices, if the devices are not all assigned to
> the same domain (or to Xen), this could result in the IRQ being routed to a
> domain instead of Xen ...
> 
> Also avoid relying on wrong behaviour when Xen is routing an IRQ to DOM0.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Does this patch relate to or rely on " setup_dt_irq: don't enable the
IRQ if the creation has failed" at all?

> 
> ---
>     Hopefully, none of the supported platforms have shared UART IRQs (the UART
>     being the only device currently used by Xen). It would be nice to have this
>     patch for Xen 4.4 to avoid waste of time for developer.

Hrm, at some point I think we have to say no and I think post-rc "nice
to avoid waste of time for developer" might be it. After all in a little
over a month developers will be using 4.5-pre with this patch applied.

What actually happens without this patch? The Xen console UART stops
working because the IRQ is delivered to the guest and not to Xen?

How did you discover this? Does this happen in practice on any of the
platforms which Xen supports? I think in general shared interrupts are
reasonably rare on ARM, especially for on-SoC peripherals which the UART
very often will be.

>     The downside of this patch is that if someone wants to support such a
>     platform (e.g. an IRQ shared between devices assigned to different
>     domains/Xen), it will end up with an error message and a panic.

I suppose that at least serves as an indication that some actual
development work would be required.

> ---
>  xen/arch/arm/domain_build.c |    8 ++++++--
>  xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 45 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..1fc359a 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
>          }
>  
>          DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
> -        /* Don't check return because the IRQ can be use by multiple device */
> -        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
> +        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);
> +            return res;
> +        }
>      }
>  
>      /* Map the address ranges */
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 62510e3..829d767 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -602,6 +602,21 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      desc = irq_to_desc(irq->irq);
>  
>      spin_lock_irqsave(&desc->lock, flags);
> +
> +    if ( desc->status & IRQ_GUEST )
> +    {
> +        struct domain *d;
> +
> +        ASSERT(desc->action != NULL);
> +
> +        d = desc->action->dev_id;
> +
> +        spin_unlock_irqrestore(&desc->lock, flags);
> +        printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
> +               irq->irq, d->domain_id);
> +        return -EADDRINUSE;
> +    }
> +
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  
> @@ -756,7 +771,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      struct irqaction *action;
>      struct irq_desc *desc = irq_to_desc(irq->irq);
>      unsigned long flags;
> -    int retval;
> +    int retval = 0;
>      bool_t level;
>      struct pending_irq *p;
>  
> @@ -771,6 +786,29 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      spin_lock_irqsave(&desc->lock, flags);
>      spin_lock(&gic.lock);
>  
> +    /* If the IRQ is already used by someone
> +     *  - If it's the same domain -> Xen doesn't need to update the IRQ desc
> +     *  - Otherwise -> For now, don't allow the IRQ to be shared between
> +     *  Xen and domains.
> +     */
> +    if ( desc->action != NULL )
> +    {
> +        if ( (desc->status & IRQ_GUEST) && d == desc->action->dev_id )
> +            goto out;
> +
> +        if ( desc->status & IRQ_GUEST )
> +        {
> +            d = desc->action->dev_id;
> +            printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
> +                   irq->irq, d->domain_id);
> +        }
> +        else
> +            printk(XENLOG_ERR "ERROR: IRQ %u is already used by Xen\n",
> +                   irq->irq);
> +        retval = -EADDRINUSE;
> +        goto out;
> +    }
> +
>      desc->handler = &gic_guest_irq_type;
>      desc->status |= IRQ_GUEST;
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:44:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3QlL-00030K-47; Wed, 15 Jan 2014 13:44:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1W3QlJ-000303-LS; Wed, 15 Jan 2014 13:44:05 +0000
Received: from [193.109.254.147:44535] by server-5.bemta-14.messagelabs.com id
	0D/44-03510-4A096D25; Wed, 15 Jan 2014 13:44:04 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389793444!11043042!1
X-Originating-IP: [209.85.212.172]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7116 invoked from network); 15 Jan 2014 13:44:04 -0000
Received: from mail-wi0-f172.google.com (HELO mail-wi0-f172.google.com)
	(209.85.212.172)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:44:04 -0000
Received: by mail-wi0-f172.google.com with SMTP id ex4so3355679wid.11
	for <multiple recipients>; Wed, 15 Jan 2014 05:44:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8whw3+xWZC/Yq08lhyIWI0l8xIKifE2c2xjfyCk2l5c=;
	b=UhnqoJuCvmaefP9sLNV/bB/702BBcHsuUIalsVO/q4gxEVHNLV7AtV1F95zOIFMVCR
	6kOs7WmbR3IShuiTjvNZ7GcjkCyZw+6V2Ptu1a8n5GadNRuRs4wMxGij/Umk9FxTOkXx
	H4K2elXNK+R7KKwONAub91+kcWgEiDPSl5tBH5NegQRksfZcGplkwkSrslva6wZSHt2O
	ISj0HTnq/zM2/UcJUBIVJ9Ww61i9fFzLcgq9imVgFDiv9+TjrePVdnqWiElGCVz0TwtC
	wrB/oYCgouDp5jU0/jUQSn+2YwVn4w4p6URAFj3ozRPs/02EF2etjTlO22piDPNJojWg
	LIPA==
X-Received: by 10.194.185.205 with SMTP id fe13mr2475541wjc.23.1389793443723; 
	Wed, 15 Jan 2014 05:44:03 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id ot9sm3327688wjc.0.2014.01.15.05.44.02
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 05:44:02 -0800 (PST)
Message-ID: <52D690A1.5060204@xen.org>
Date: Wed, 15 Jan 2014 13:44:01 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	mirageos-devel@lists.xenproject.org
References: <52B44230.20209@xen.org>
In-Reply-To: <52B44230.20209@xen.org>
Subject: Re: [Xen-devel] Pre FOSDEM Hackathon [Urgent, please vote]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
I just talked to KB from CentOS about this and we agreed that we don't 
have enough time to pull this off this time. Both KB and I have too much 
on our plates right now to make this happen. Sorry for any 
inconvenience caused.
Regards
Lars

On 20/12/2013 13:12, Lars Kurth wrote:
> Hi all,
> I was just approached by a FOSS project which we have ties with, and 
> it is possible that we can have some space (including sponsored food) 
> on the 31st of Jan in Brussels to collaborate with other projects. 
> If we get enough people who would be interested in attending, I will help 
> co-organize it. If you would go, please reply +1.
> Regards
> Lars
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:44:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:44:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Qm8-00036e-RQ; Wed, 15 Jan 2014 13:44:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Qm7-00036G-D6
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 13:44:55 +0000
Received: from [193.109.254.147:61363] by server-16.bemta-14.messagelabs.com
	id 89/07-20600-6D096D25; Wed, 15 Jan 2014 13:44:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389793492!11043323!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13408 invoked from network); 15 Jan 2014 13:44:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:44:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90956811"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 13:44:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 08:44:51 -0500
Message-ID: <1389793490.3793.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 13:44:50 +0000
In-Reply-To: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
> For now __setup_dt_irq can only fail if the action is already set.

Can this ever happen with the current code base?

> If the function is updated in the future, we don't want to enable the IRQ.

Such an update is likely to be post-4.4 (unless there is a relationship
with "IRQ: Protect IRQ to be shared between domains and XEN" AND the RM
becomes convinced to grant a freeze exception for that patch).

On that basis this patch could also easily be post-4.4.

> Assuming the function can fail with action = NULL, when Xen receives the
> IRQ it will segfault because do_IRQ doesn't check whether action is NULL.

It seems unlikely that the system would be fully functional after such
an error even with this patch -- it would have failed to register either
timer, maintenance or the console interrupt.

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/gic.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..62510e3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -605,8 +605,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  
> -    desc->handler->startup(desc);
> -
> +    if ( !rc )
> +        desc->handler->startup(desc);
>  
>      return rc;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
> For now __setup_dt_irq can only fail if the action is already set.

Can this ever happen with the current code base?

> If, in the future, the function is updated, we don't want to enable the IRQ.

Such an update is likely to be post-4.4 (unless there is a relationship
with "IRQ: Protect IRQ to be shared between domains and XEN" AND the RM
becomes convinced to grant a freeze exception for that patch).

On that basis this patch could also easily be post-4.4.

> Assuming the function can fail with action = NULL, when Xen receives the
> IRQ it will segfault because do_IRQ doesn't check whether action is NULL.

It seems unlikely that the system would be fully functional after such
an error even with this patch -- it would have failed to register the
timer, maintenance, or console interrupt.

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/gic.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..62510e3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -605,8 +605,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  
> -    desc->handler->startup(desc);
> -
> +    if ( !rc )
> +        desc->handler->startup(desc);
>  
>      return rc;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 13:50:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 13:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3QrZ-0003lF-Mc; Wed, 15 Jan 2014 13:50:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3QrY-0003l8-7p
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 13:50:32 +0000
Received: from [85.158.137.68:34885] by server-10.bemta-3.messagelabs.com id
	B8/5A-23989-42296D25; Wed, 15 Jan 2014 13:50:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389793827!9262079!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18417 invoked from network); 15 Jan 2014 13:50:27 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 13:50:27 -0000
Received: by mail-we0-f177.google.com with SMTP id x55so1729760wes.8
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 05:50:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=p3+/8Hx44n4JBZaUOdPeikSjVoP+V4eaaqgGf2Ua5p0=;
	b=DU+7TZaunW4jCoCg2ViIHCA7/9rVFZ7jtgnRLUTS8O35AUSfFrPhEDONIFgRpbL4/u
	hhGJ/Wl3D6ix5ylyoO/nEyc9d+4bp1GhslJ9KPmiEHBuVC2mapxR1Oq5GXdTGq8jaoh2
	Bke9KpYpxC4UKDguWjQIPsjUbQOSwrSfZvthRUhkoBobX4s609vvRTwNj4LkbatMsZFc
	L85gE3BlJy/jRFJSC2FoJI+XVtvMhegMOgm//hCdZgsHpNBQUV8FtViN04b5YkZkYyw0
	wLbjJy2tiAQKFvNVdGYuUJcHKB6wYGuYshrjzHIB9H3lMwyA+XDx7kH3K4ETUq5Pcu8P
	Ff9w==
X-Gm-Message-State: ALoCoQm7BX9miymgU9G6uisEGcWckQj/en1ZxSLst3MAVuP/l57Y53qt/5NBPedKlFr7uyXtx/sk
X-Received: by 10.194.174.197 with SMTP id bu5mr2109483wjc.71.1389793826888;
	Wed, 15 Jan 2014 05:50:26 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id dh8sm6755823wib.4.2014.01.15.05.50.25
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 05:50:26 -0800 (PST)
Message-ID: <52D6921F.1030307@linaro.org>
Date: Wed, 15 Jan 2014 13:50:23 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
	<52D5880B.30506@linaro.org>
	<1389778658.12434.120.camel@kazak.uk.xensource.com>
In-Reply-To: <1389778658.12434.120.camel@kazak.uk.xensource.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 09:37 AM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
>>> These mappings are global and therefore need flushing on all processors. Add
>>> flush_all_xen_data_tlb_range_va which accomplishes this.
>>
>> Can we make the names consistent across every *tlb* function call? In
>> flushtlb.h we use *_local for maintenance on the current processor only.
>> If the suffix is not present, the maintenance will be done on every
>> processor.
> 
> I was trying to avoid a massive renaming of the existing flush_xen_*. I
> suppose I should just go ahead and do it.

If it's too big for 4.4, the modification could be post release. It
would be nice to have a common convention at some point.
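The convention Julien is asking for could be sketched as below. This is a
toy model only: the argument-free functions and the flag array stand in for
real per-CPU TLB maintenance, and the names here are illustrative -- the
real Xen functions take a virtual-address range and issue actual TLB
maintenance instructions.

```c
#include <assert.h>

/* Toy model: each "CPU" is just a flag recording whether it was flushed. */
#define NR_CPUS 4
static int flushed[NR_CPUS];
static int current_cpu;

/* No suffix: maintenance is performed on every processor. */
static void flush_xen_data_tlb_range_va(void)
{
    for ( int cpu = 0; cpu < NR_CPUS; cpu++ )
        flushed[cpu] = 1;
}

/* _local suffix: maintenance on the current processor only. */
static void flush_xen_data_tlb_range_va_local(void)
{
    flushed[current_cpu] = 1;
}
```

Under this convention a caller flushing a global mapping (such as a fixmap)
uses the unsuffixed variant, and the _local variant is reserved for
mappings known to be visible only on the current CPU.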

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3R29-0004Kg-Ti; Wed, 15 Jan 2014 14:01:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3R27-0004Kb-Kt
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:01:27 +0000
Received: from [85.158.143.35:45324] by server-3.bemta-4.messagelabs.com id
	FB/CC-32360-6B496D25; Wed, 15 Jan 2014 14:01:26 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389794485!11918784!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11444 invoked from network); 15 Jan 2014 14:01:26 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:01:26 -0000
Received: by mail-wg0-f51.google.com with SMTP id z12so1772880wgg.6
	for <xen-devel@lists.xenproject.org>;
	Wed, 15 Jan 2014 06:01:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ecNlTwXvsB9dA1ahzLgm/b9nKjXQgueI+61aTgWUPf4=;
	b=ZrdtIls/YCStgU5QJzH77YxGjNMhUovL0uPghC93x/DWNPUr4yT4E9n8odYhteLJPp
	au7HKMUVX51wD1CIC4npiAou5ehAbNYvzE2SFUA9vLK51Sj2p9kiBU8SYNH/+WKKh377
	vNJpYTvJtzrph0+ZIqt9q2q/xYN2WpVJgf6SS8BbeiX0a/c2xWfiwsPJzwODlrPcPwD0
	c1ZFoYAcOhEFZp5ybDk3XEqiR278d91iKLfqBhrdP5ytOnOZQ4Y8Wt5lsB6swNOFPOM5
	6hGrmqAUopOXe0R4EaJuI7V3N5HmUvYvouqsOy+5cdX4kiSUXcA6CwYbCaAXTE0mR0S0
	omVQ==
X-Gm-Message-State: ALoCoQly/5EOzH8s/xecZ9u6hcs5RqkLwbOxhP5BjZfbaXUQoSxEivKwmakbkkKoViwoetHgL+lN
X-Received: by 10.194.6.42 with SMTP id x10mr2535579wjx.17.1389794485170;
	Wed, 15 Jan 2014 06:01:25 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id hy8sm3365350wjb.2.2014.01.15.06.01.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:01:24 -0800 (PST)
Message-ID: <52D694B2.3040308@linaro.org>
Date: Wed, 15 Jan 2014 14:01:22 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
	<1389793490.3793.29.camel@kazak.uk.xensource.com>
In-Reply-To: <1389793490.3793.29.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:44 PM, Ian Campbell wrote:
> On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
>> For now __setup_dt_irq can only fail if the action is already set.
> 
> Can this ever happen with the current code base?

No, that's why I didn't ask for a freeze exception for Xen 4.4.

> 
>> If, in the future, the function is updated, we don't want to enable the IRQ.
> 
> Such an update is likely to be post-4.4 (unless there is a relationship
> with "IRQ: Protect IRQ to be shared between domains and XEN" AND the RM
> becomes convinced to grant a freeze exception for that patch).
> 
> On that basis this patch could also easily be post-4.4.
>
>> Assuming the function can fail with action = NULL, when Xen receives the
>> IRQ it will segfault because do_IRQ doesn't check whether action is NULL.
> 
> It seems unlikely that the system would be fully functional after such
> an error even with this patch -- it would have failed to register the
> timer, maintenance, or console interrupt.

The timer and maintenance code don't check the return value of
request_dt_irq (which calls setup_dt_irq).

For the console interrupt, the callback which initializes the interrupt
doesn't return an error... In every serial driver, only an error message
is printed.

In any case, it's wrong to enable this IRQ if the descriptor is not
correctly set up.
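The failure mode under discussion can be sketched with simplified
stand-ins. These structs and functions are illustrative only -- they are
far simpler than the real definitions in xen/arch/arm/gic.c -- but they
show why startup must be gated on the return code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not the real Xen types. */
struct irqaction { const char *name; };
struct irq_desc  { struct irqaction *action; int started; };

/* Mimics __setup_irq: fails if an action is already installed. */
static int setup_irq(struct irq_desc *desc, struct irqaction *new)
{
    if ( desc->action != NULL )
        return -1;              /* IRQ already in use */
    desc->action = new;
    return 0;
}

static void startup(struct irq_desc *desc)
{
    desc->started = 1;          /* stands in for desc->handler->startup() */
}

/* The fix under discussion: only start the IRQ when setup succeeded,
 * so the IRQ is never enabled on a descriptor whose action was not
 * installed. */
static int setup_dt_irq(struct irq_desc *desc, struct irqaction *new)
{
    int rc = setup_irq(desc, new);
    if ( !rc )
        startup(desc);
    return rc;
}
```

With the old unconditional startup(), a failing setup_dt_irq call would
still enable the IRQ even though no new action had been installed on the
descriptor.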

> 
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> ---
>>  xen/arch/arm/gic.c |    4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index e6257a7..62510e3 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -605,8 +605,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>>      rc = __setup_irq(desc, irq->irq, new);
>>      spin_unlock_irqrestore(&desc->lock, flags);
>>  
>> -    desc->handler->startup(desc);
>> -
>> +    if ( !rc )
>> +        desc->handler->startup(desc);
>>  
>>      return rc;
>>  }
> 
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:06:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3R6Q-0004SB-UV; Wed, 15 Jan 2014 14:05:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3R6P-0004S6-RU
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:05:54 +0000
Received: from [85.158.137.68:54174] by server-10.bemta-3.messagelabs.com id
	55/9A-23989-1C596D25; Wed, 15 Jan 2014 14:05:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389794750!9337488!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15341 invoked from network); 15 Jan 2014 14:05:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:05:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90964305"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:05:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:05:49 -0500
Message-ID: <1389794748.3793.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:05:48 +0000
In-Reply-To: <52D6921F.1030307@linaro.org>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
	<52D5880B.30506@linaro.org>
	<1389778658.12434.120.camel@kazak.uk.xensource.com>
	<52D6921F.1030307@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
> On 01/15/2014 09:37 AM, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> >> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> >>> These mappings are global and therefore need flushing on all processors. Add
> >>> flush_all_xen_data_tlb_range_va which accomplishes this.
> >>
> >> Can we make the names consistent across every *tlb* function call? In
> >> flushtlb.h we use *_local for maintenance on the current processor only.
> >> If the suffix is not present, the maintenance will be done on every
> >> processor.
> > 
> > I was trying to avoid a massive renaming of the existing flush_xen_*. I
> > suppose I should just go ahead and do it.
> 
> If it's too big for 4.4,

With my temporary-RM hat on, I've struggled with this a few times this
week -- that is, with larger, mostly mechanical, textual changes which
come about because they are the correct/cleanest thing to do as part of
a smaller change that would, on its own, be a pretty clear candidate for
an exception. Chen's change "xen/arm{32, 64}: fix section shift when
mapping 2MB block in boot page table" is in a similar boat.

I'm not sure where the balance should lie really.

>  the modification could be post release. It
> would be nice to have a common convention at some point.

Yes, this is certainly the case.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > suppose I should just go ahead and do it.
> 
> If it's too big for 4.4,

With my temporary-RM hat on I've struggled with this a few times this
week -- that is, with larger, mostly mechanical textual changes which
come about because they are the correct/cleanest thing to do as part of
a smaller change that would on its own be a pretty clear candidate for
an exception. Chen's change "xen/arm{32, 64}: fix section shift when
mapping 2MB block in boot page table" is in a similar boat.

I'm not sure where the balance should lie really.

>  the modification could be post release. It
> would be nice to have a common convention at some point.

Yes, this is certainly the case.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:07:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3R7z-0004YK-Dw; Wed, 15 Jan 2014 14:07:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3R7x-0004Y8-LK
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:07:29 +0000
Received: from [193.109.254.147:35637] by server-7.bemta-14.messagelabs.com id
	1A/4B-15500-12696D25; Wed, 15 Jan 2014 14:07:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389794847!10981003!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20269 invoked from network); 15 Jan 2014 14:07:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:07:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90965040"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:07:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:07:26 -0500
Message-ID: <1389794845.3793.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:07:25 +0000
In-Reply-To: <52D694B2.3040308@linaro.org>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
	<1389793490.3793.29.camel@kazak.uk.xensource.com>
	<52D694B2.3040308@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 14:01 +0000, Julien Grall wrote:
> On 01/15/2014 01:44 PM, Ian Campbell wrote:
> > On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
> >> For now __setup_dt_irq can only fail if the action is already set.
> > 
> > Can this ever happen with the current code base?
> 
> No, that's why I didn't ask for a freeze exception for Xen 4.4.

Ah ok. It is useful to tag patches which aren't for consideration (and
those which are for that matter, although the exception request suffices
there).

I'll put this in my 4.5 queue and consider it again later. Please ping
me a little while after 4.4 branches if I haven't done so.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3R9D-0004i4-2m; Wed, 15 Jan 2014 14:08:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3R9C-0004hv-3O
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:08:46 +0000
Received: from [85.158.137.68:30024] by server-16.bemta-3.messagelabs.com id
	44/5B-26128-D6696D25; Wed, 15 Jan 2014 14:08:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1389794924!9267619!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17371 invoked from network); 15 Jan 2014 14:08:44 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:08:44 -0000
Received: by mail-wg0-f42.google.com with SMTP id l18so5172106wgh.3
	for <xen-devel@lists.xenproject.org>;
	Wed, 15 Jan 2014 06:08:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=qSV1AIwbcXCLwh5FQ983g/VglXu/qjzNqOOjWQ3vM2E=;
	b=N20cJis7rfEkFpwMfyxgqcM63LusjnvZtkUAccMFfWryuxVohmUpEOe0YSyoMM1j6E
	Zq9zRMTD8rXxG4JIY2XY/xhOA304ZAo5CoiVCCKcFA5C4/9HnXcoEe+0XaeoNstA8IAi
	2/t2fucb7lP5CgneW0PDQ43jtNMk5wrGQ58DAWZCFb0c+Dn/88RPfjLo2hv2wSw0NxtC
	XVWXdcezSQC/Siw2Gc4CaZUytifWJLKwks1I3YwiqvuVNGzLg7sml+vRg8Su6ADHJUq8
	T4qV91sHp32ypB/lxj+B+UG7pGaHdcNCCnv9GddoBB/MCaI06bki5qP5cRaws3WHpCHP
	HAfw==
X-Gm-Message-State: ALoCoQnrhgYD23lDf5tI9ZoWC3ORSrIDijwqRXbpAgFN6PH2aTlpZXwqQPY9C3EkPd0hp2OzOB0o
X-Received: by 10.180.20.15 with SMTP id j15mr2713524wie.4.1389794921380;
	Wed, 15 Jan 2014 06:08:41 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id pk8sm6822331wic.6.2014.01.15.06.08.39
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:08:40 -0800 (PST)
Message-ID: <52D69665.9080203@linaro.org>
Date: Wed, 15 Jan 2014 14:08:37 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
	<1389793490.3793.29.camel@kazak.uk.xensource.com>
	<52D694B2.3040308@linaro.org>
	<1389794845.3793.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1389794845.3793.50.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:07 PM, Ian Campbell wrote:
> On Wed, 2014-01-15 at 14:01 +0000, Julien Grall wrote:
>> On 01/15/2014 01:44 PM, Ian Campbell wrote:
>>> On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
>>>> For now __setup_dt_irq can only fail if the action is already set.
>>>
>>> Can this ever happen with the current code base?
>>
>> No, that's why I didn't ask for a freeze exception for Xen 4.4.
> 
> Ah ok. It is useful to tag patches which aren't for consideration (and
> those which are for that matter, although the exception request suffices
> there).

Do you have a specific tag in mind for this purpose?

> I'll put this in my 4.5 queue and consider it again later. Please ping
> me a little while after 4.4 branches if I haven't done so.

Thanks.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:11:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RC7-0005Db-SB; Wed, 15 Jan 2014 14:11:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RC6-0005DR-02
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:11:46 +0000
Received: from [85.158.139.211:61998] by server-11.bemta-5.messagelabs.com id
	D3/36-23268-12796D25; Wed, 15 Jan 2014 14:11:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389795103!9728780!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19281 invoked from network); 15 Jan 2014 14:11:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:11:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90966706"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:11:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:11:29 -0500
Message-ID: <1389795087.3793.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:11:27 +0000
In-Reply-To: <52D69665.9080203@linaro.org>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
	<1389793490.3793.29.camel@kazak.uk.xensource.com>
	<52D694B2.3040308@linaro.org>
	<1389794845.3793.50.camel@kazak.uk.xensource.com>
	<52D69665.9080203@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 14:08 +0000, Julien Grall wrote:
> On 01/15/2014 02:07 PM, Ian Campbell wrote:
> > On Wed, 2014-01-15 at 14:01 +0000, Julien Grall wrote:
> >> On 01/15/2014 01:44 PM, Ian Campbell wrote:
> >>> On Fri, 2014-01-10 at 20:49 +0000, Julien Grall wrote:
> >>>> For now __setup_dt_irq can only fail if the action is already set.
> >>>
> >>> Can this ever happen with the current code base?
> >>
> >> No, that's why I didn't ask for a freeze exception for Xen 4.4.
> > 
> > Ah ok. It is useful to tag patches which aren't for consideration (and
> > those which are for that matter, although the exception request suffices
> > there).
> 
> Do you have a specific tag in mind for this purpose?

Anything would do, either [PATCH for-4.5] in $subject or:
        ---
        This patch is for 4.5
after the commit message etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:13:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:13:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RDM-0005Ju-BL; Wed, 15 Jan 2014 14:13:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RDK-0005Ji-Ns
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:13:02 +0000
Received: from [85.158.139.211:24130] by server-6.bemta-5.messagelabs.com id
	39/E1-16310-D6796D25; Wed, 15 Jan 2014 14:13:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389795179!9893119!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10604 invoked from network); 15 Jan 2014 14:13:01 -0000
From xen-devel-bounces@lists.xen.org Wed Jan 15 14:13:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:13:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RDM-0005Ju-BL; Wed, 15 Jan 2014 14:13:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RDK-0005Ji-Ns
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:13:02 +0000
Received: from [85.158.139.211:24130] by server-6.bemta-5.messagelabs.com id
	39/E1-16310-D6796D25; Wed, 15 Jan 2014 14:13:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389795179!9893119!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10604 invoked from network); 15 Jan 2014 14:13:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93087087"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:12:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:12:00 -0500
Message-ID: <1389795119.3793.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Wed, 15 Jan 2014 14:11:59 +0000
In-Reply-To: <1389691844.13654.119.camel@kazak.uk.xensource.com>
References: <1389636937-9477-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389691844.13654.119.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: Always use "fast" migration resume
 protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 09:30 +0000, Ian Campbell wrote:
> On Mon, 2014-01-13 at 18:15 +0000, Ian Jackson wrote:
> > As Ian Campbell writes:
> 
> "...in http://bugs.xenproject.org/xen/bug/30" would be useful here (can
> add on commit, no need to resend just for this IMHO)
> 
> >   There are two mechanisms by which a suspend can be aborted and the
> >   original domain resumed.
> > 
> >   The older method is that the toolstack resets a bunch of state (see
> >   tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
> >   restarts the domain. The domain will see HYPERVISOR_suspend return 0
> >   and will continue without any realisation that it is actually
> >   running in the original domain and not in a new one. This method is
> >   supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
> >   but it is not.
> > 
> >   The other method is newer and in this case the toolstack arranges
> >   that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
> >   domain will observe this and realise that it has been restarted in
> >   the same domain and will behave accordingly. This method is
> >   implemented, correctly AFAIK, by
> >   libxl_domain_resume(suspend_cancel=1).
> > 
> > Attempting to use the old method without doing all of the work simply
> > causes the guest to crash.  Implementing the work required for the old
> > method, or checking that domains actually support the new method,
> > is not feasible at this stage of the 4.4 release.
> > 
> > So, always use the new method, without regard to the declarations of
> > support by the guest.  This is a strict improvement: guests which do
> > in fact support the new method will work, whereas ones which don't are
> > no worse off.
> 
> I agree with this rationale.
> 
> > There are two call sites of libxl_domain_resume that need fixing, both
> > in the migration error path.
> > 
> > With this change I observe a correct and successful resumption of a
> > Debian wheezy guest with a Linux 3.4.70 kernel after a migration
> > attempt which I arranged to fail by nobbling the block hotplug script.
> > 
> > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

Applied. Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:13:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:13:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RDU-0005Ku-OC; Wed, 15 Jan 2014 14:13:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RDU-0005Ki-3B
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:13:12 +0000
Received: from [193.109.254.147:48943] by server-9.bemta-14.messagelabs.com id
	07/2D-13957-77796D25; Wed, 15 Jan 2014 14:13:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389795189!10980508!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 15 Jan 2014 14:13:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:13:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93087287"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:12:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:12:24 -0500
Message-ID: <1389795143.3793.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 15 Jan 2014 14:12:23 +0000
In-Reply-To: <52D54F96.2060607@citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
	<1389711014.12434.71.camel@kazak.uk.xensource.com>
	<52D54F96.2060607@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 14:54 +0000, Andrew Cooper wrote:
> On 14/01/14 14:50, Ian Campbell wrote:
> > On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
> >> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
> >> device assignment if PoD is enabled.").
> >>
> >> This change is restricted to HVM guests, as only HVM is relevant in the
> >> Xend counterpart. We're late in the release cycle, so the change should
> >> only do what's necessary. We can revisit this if we need to do
> >> the same thing for PV guests in the future.
> >>
> >> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > Release hat: The risk here is of a false positive detecting whether PoD
> > would be used and therefore refusing to start a domain. However Wei
> > directed me earlier on to the code in setup_guest which sets
> > XENMEMF_populate_on_demand and I believe it is using the same logic.
> >
> > The benefit of this is that it will stop people starting a domain in an
> > invalid configuration -- but what is the downside here? Is it an
> > unhandled IOMMU fault or another host-fatal error? That would make the
> > argument for taking this patch pretty strong. On the other hand if the
> > failure were simply to kill this domain, that would be a less serious
> > issue and I'd be in two minds, mainly due to George not being here to
> > confirm that the pod_enabled logic is correct (although if he were here
> > I wouldn't be wrestling with this question at all ;-)).
> >
> > I'm leaning towards taking this fix, but I'd really like to know what
> > the current failure case looks like.
> >
> > Ian.
> 
> The answer is likely hardware specific.
> 
> An IOMMU fault (however handled by Xen) will result in a master abort on
> the DMA transaction for the PCI device which has suffered the fault.
> That device can then do anything from continuing blindly to issuing an NMI
> IOCK/SERR, which will likely be fatal to the entire server.

Thanks, that tips me over into:
Release-ack: Ian Campbell

I applied it; there was a reject against "libxl: Auto-assign NIC devids in
initiate_domain_create", which I fixed up.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:13:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:13:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RDy-0005Qt-65; Wed, 15 Jan 2014 14:13:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RDw-0005Po-EN
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:13:40 +0000
Received: from [85.158.137.68:53304] by server-13.bemta-3.messagelabs.com id
	65/E7-28603-09796D25; Wed, 15 Jan 2014 14:13:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389795214!5635432!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26206 invoked from network); 15 Jan 2014 14:13:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:13:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93087716"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:13:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:13:16 -0500
Message-ID: <1389795194.3793.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:13:14 +0000
In-Reply-To: <1389633361.13654.112.camel@kazak.uk.xensource.com>
References: <1389286683-11656-1-git-send-email-julien.grall@linaro.org>
	<1389633361.13654.112.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: correct flush_tlb_mask behaviour
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-13 at 17:16 +0000, Ian Campbell wrote:
> On Thu, 2014-01-09 at 16:58 +0000, Julien Grall wrote:
> > On ARM, flush_tlb_mask is used in the common code:
> >     - alloc_heap_pages: the flush is only called if the newly allocated
> >     page was previously used by a domain, so we only need to flush
> >     non-secure, non-hyp, inner-shareable TLB entries.
> >     - common/grant-table.c: every call to flush_tlb_mask is made with
> >     the current domain, so an inner-shareable TLB flush by the current
> >     VMID is enough.
> > 
> > The current code only flushes the hypervisor TLB on the current PCPU. For
> > now, flush non-secure, non-hyp TLBs on every PCPU.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:13:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RE2-0005SM-Jc; Wed, 15 Jan 2014 14:13:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RE1-0005Rn-6m
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:13:45 +0000
Received: from [85.158.143.35:9238] by server-3.bemta-4.messagelabs.com id
	AA/F7-32360-89796D25; Wed, 15 Jan 2014 14:13:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389795222!11842983!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14202 invoked from network); 15 Jan 2014 14:13:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:13:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93087876"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:13:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:13:26 -0500
Message-ID: <1389795204.3793.57.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:13:24 +0000
In-Reply-To: <1389710646.12434.66.camel@kazak.uk.xensource.com>
References: <1389706615-9578-1-git-send-email-julien.grall@linaro.org>
	<1389710646.12434.66.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v3] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 14:44 +0000, Ian Campbell wrote:
> On Tue, 2014-01-14 at 13:36 +0000, Julien Grall wrote:
> > The p2m is shared between the VCPUs of each domain. Currently Xen only
> > flushes the TLB on the local PCPU. This could result in a mismatch between
> > the mappings in the p2m and the TLBs.
> > 
> > Flush TLB entries used by this domain on every PCPU. The flush can also be
> > moved out of the loop because:
> >     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> >     - INSERT: if valid = 1, that would mean we have replaced a
> >     page that already belongs to the domain, and a VCPU could write to the
> >     wrong page. This can happen for dom0 with the 1:1 mapping because the
> >     mapping is not removed from the p2m.
> >     - REMOVE: except for grant-table (replace_grant_host_mapping), each
> >     call to guest_physmap_remove_page is protected by the callers via a
> >         get_page -> .... -> guest_physmap_remove_page -> ... -> put_page, so
> >     the page can't be allocated for another domain until the last put_page.
> >     - RELINQUISH: the domain is not running anymore, so we don't care...
> > 
> > Also avoid leaking a foreign page if the function INSERTs a new mapping
> > on top of a foreign mapping.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Release hat: There are two major issues here, one is not broadcasting
> the TLB flush, which is a potential security issue (another VCPU can
> keep accessing a page after it is freed). The other is a potential DoS
> by leaking a reference on a foreign page, which would stop that domain
> from ever being destroyed.
> 
> Either of these two issues would be enough to justify taking this change
> for 4.4.
> 
> We are cutting rc2 at the moment, I will apply after that is out the
> way.

Done, on top of "xen/arm: correct flush_tlb_mask behaviour".


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH v3] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 14:44 +0000, Ian Campbell wrote:
> On Tue, 2014-01-14 at 13:36 +0000, Julien Grall wrote:
> > The p2m is shared between the VCPUs of each domain. Currently Xen only flushes the
> > TLB on the local PCPU. This could result in a mismatch between the mappings in the
> > p2m and the TLBs.
> > 
> > Flush TLB entries used by this domain on every PCPU. The flush can also be
> > moved out of the loop because:
> >     - ALLOCATE: only called for dom0 RAM allocation, so the flush is never called
> >     - INSERT: if valid = 1, that would mean we have replaced a
> >     page that already belongs to the domain. A VCPU could write to the wrong page.
> >     This can happen for dom0 with the 1:1 mapping because the mapping is not
> >     removed from the p2m.
> >     - REMOVE: except for grant-table (replace_grant_host_mapping), each
> >     call to guest_physmap_remove_page is protected by its callers via a
> >     get_page -> .... -> guest_physmap_remove_page -> ... -> put_page. So
> >     the page can't be allocated for another domain until the last put_page.
> >     - RELINQUISH: the domain is not running anymore, so we don't care...
> > 
> > Also avoid leaking a foreign page when INSERT places a new mapping
> > on top of a foreign mapping.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Release hat: There are two major issues here, one is not broadcasting
> the TLB flush, which is a potential security issue (another VCPU can
> keep accessing a page after it is freed). The other is a potential DoS
> by leaking a reference on a foreign page, which would stop that domain
> from ever being destroyed.
> 
> Either of these two issues would be enough to justify taking this change
> for 4.4.
> 
> We are cutting rc2 at the moment; I will apply after that is out of the
> way.

Done, on top of "xen/arm: correct flush_tlb_mask behaviour".
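The REMOVE argument above hinges on ordinary page reference counting: a caller that took a reference with get_page keeps the page out of the allocator until its final put_page, so the broadcast TLB flush can safely be deferred to after the loop. A minimal sketch of that invariant, using hypothetical structures rather than Xen's real page_info and API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of page refcounting -- not Xen's real structures.
 * It shows why a deferred TLB flush is safe in the REMOVE case: the
 * page cannot be reallocated until the caller's final put_page(). */
struct page_info {
    int count;   /* references held by callers */
    bool freed;  /* true once returned to the allocator */
};

static void get_page(struct page_info *pg)
{
    pg->count++;
}

static void put_page(struct page_info *pg)
{
    if (--pg->count == 0)
        pg->freed = true;  /* only the last reference frees the page */
}

/* Clearing the p2m entry frees nothing by itself: the caller still
 * holds its reference, so a stale TLB entry cannot yet point at a
 * page owned by another domain. */
static void guest_physmap_remove_page(struct page_info *pg)
{
    (void)pg;  /* p2m entry cleared; flush may still be pending */
}
```

Between guest_physmap_remove_page and put_page the page is unmapped but still owned, which is exactly the window in which the out-of-loop flush runs.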


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:14:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:14:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3REb-0005b2-1a; Wed, 15 Jan 2014 14:14:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3REZ-0005aa-F7
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:14:19 +0000
Received: from [85.158.139.211:44219] by server-9.bemta-5.messagelabs.com id
	11/D0-15098-AB796D25; Wed, 15 Jan 2014 14:14:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389795256!8709312!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12721 invoked from network); 15 Jan 2014 14:14:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:14:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90968134"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:14:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:14:09 -0500
Message-ID: <1389795248.3793.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Wed, 15 Jan 2014 14:14:08 +0000
In-Reply-To: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
References: <1389391020-14476-1-git-send-email-dslutz@verizon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org, Jan
	Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v4 0/3] gdbsx: fix 3 bugs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-10 at 16:56 -0500, Don Slutz wrote:
> Changes v3 to v4:
>     Drop patch 1 -- already committed
>     Drop patch 3 -- Does not need to be in 4.4 as far as I know
>     Added "Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>" to all 3.

> Release manager requests:
>   patch 1 and 3 are optional for 4.4.0.
>   patch 2 should be in 4.4.0
>   patch 4 and 5 would be good to be in 4.4.0

This is all totally inconsistent.

The first quoted para talks about patch numbers without names, without
realising that the reader has no context for them (especially given that
the request for this resend was due to the confusion surrounding what
was what in the previous iteration).

The second quoted para hasn't been true for at least one iteration.

So, I was just about to apply:
>   xg_read_mem: Report on error.
>   xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.

but now I've no idea whether I really should or not.

I'm going to apply anyway, since I suspect that was what was intended,
but in the future please work on the clarity of your communications; 100
lines of quotations and analysis are no good if they don't actually make
sense.

If I wasn't supposed to apply please shout and I will revert.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:15:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:15:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RFs-0005op-MO; Wed, 15 Jan 2014 14:15:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3RFq-0005oW-M0
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:15:39 +0000
Received: from [85.158.137.68:38982] by server-9.bemta-3.messagelabs.com id
	F9/59-13104-90896D25; Wed, 15 Jan 2014 14:15:37 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389795333!9281278!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20677 invoked from network); 15 Jan 2014 14:15:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 14:15:36 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FEFTJT025120
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 14:15:29 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FEFSTK008570
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 14:15:28 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FEFSmS005336; Wed, 15 Jan 2014 14:15:28 GMT
Received: from [192.168.1.102] (/222.130.142.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 06:15:28 -0800
Message-ID: <52D697FB.3000304@oracle.com>
Date: Wed, 15 Jan 2014 22:15:23 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: Andrew Bennieston <andrew.bennieston@citrix.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140115100743.GG5698@zion.uk.xensource.com>
	<52D66ADF.9070401@citrix.com>
In-Reply-To: <52D66ADF.9070401@citrix.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-15 19:02, Andrew Bennieston wrote:
> On 15/01/14 10:07, Wei Liu wrote:
>> On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
>>> The current netfront only grants pages for grant copy, not for grant
>>> transfer, so remove the corresponding transfer code and add the
>>> receive-side copy code in xennet_release_rx_bufs.
>>>
>>
>> This path seldom gets called -- not that many people unload the xen-netfront
>> driver. If Annie has tested this patch and it works as expected I think
>> it's fine.
>>
> In XenServer we have seen a number of cases where unplugging and 
> replugging VIFs results in leakage of grant references, eventually 
> leading to a case where you cannot plug a VIF (after ~ 400 such 
> cycles)...
>
> It's worth pointing out, as far as this patch is concerned, that 
> gnttab_end_foreign_access() can fail, 

As Wei mentioned, it is gnttab_end_foreign_access_ref here,
right?

> which is not taken into account here.

Good point; gnttab_end_foreign_access_ref fails for a grant that is still in use.
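For context, the failure mode under discussion: an end-foreign-access call fails while the remote end still has the grant mapped, and the caller must not recycle the page in that case. A toy model of that contract, with stub types rather than the real grant-table API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a grant-table entry -- not the real Linux/Xen API.
 * It only models the contract discussed in the thread. */
struct toy_grant {
    bool mapped_by_backend;  /* backend still holds a mapping */
    bool revoked;            /* access successfully ended */
};

/* Mirrors the failure case of gnttab_end_foreign_access_ref(): the
 * call fails (returns false) while the grant is still in use, and the
 * page behind it must not be reused by the caller. */
static bool toy_end_foreign_access_ref(struct toy_grant *g)
{
    if (g->mapped_by_backend)
        return false;  /* cannot revoke a grant that is in use */
    g->revoked = true;
    return true;
}
```

A caller that ignores the false return and frees the page anyway is exactly the hazard raised later in the thread.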

Thanks
Annie
>
> Andrew.
>
>> I'm not netfront maintainer but I'm happy to add
>> Acked-by: Wei Liu <wei.liu2@citrix.com>
>> if Annie confirms she's tested this patch.
>>
>> Wei.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:16:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:16:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RGm-0005zR-AV; Wed, 15 Jan 2014 14:16:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3RGk-0005yx-Sn
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:16:35 +0000
Received: from [85.158.143.35:46660] by server-3.bemta-4.messagelabs.com id
	FA/9D-32360-04896D25; Wed, 15 Jan 2014 14:16:32 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389795390!11843994!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8006 invoked from network); 15 Jan 2014 14:16:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 14:16:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FEGSYm023295
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 14:16:29 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0FEGSMe004373
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 14:16:28 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0FEGR86004340; Wed, 15 Jan 2014 14:16:27 GMT
Received: from [192.168.1.102] (/222.130.142.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 06:16:26 -0800
Message-ID: <52D69837.9090609@oracle.com>
Date: Wed, 15 Jan 2014 22:16:23 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
In-Reply-To: <52D66F11.204@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-15 19:20, David Vrabel wrote:
> On 09/01/14 22:48, Annie Li wrote:
>> The current netfront only grants pages for grant copy, not for grant transfer, so
>> remove the corresponding transfer code and add the receive-side copy code in
>> xennet_release_rx_bufs.
> While netfront only supports a copying backend, I don't see anything
> preventing the backend from retaining mappings to netfront's Rx buffers...

Right. This does not prevent the backend from retaining mappings.
Maybe my description is not clear. What I mean here is based on the old 
2.6.18 netfront, which used "copying_receiver" to tell netback whether rx 
required grant copy. Changing "grant copy" above into "grant access" vs. 
"grant transfer" would probably be more precise. And the 
"gnttab_end_foreign_transfer_ref" call is unnecessary code kept from the 
old netfront.

>
>> Signed-off-by: Annie Li <Annie.li@oracle.com>
>> ---
>>   drivers/net/xen-netfront.c |   60 ++-----------------------------------------
>>   1 files changed, 3 insertions(+), 57 deletions(-)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index e59acb1..692589e 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>>   
>>   static void xennet_release_rx_bufs(struct netfront_info *np)
>>   {
> [...]
>> -		mfn = gnttab_end_foreign_transfer_ref(ref);
>> +		gnttab_end_foreign_access_ref(ref, 0);
> ... the gnttab_end_foreign_access_ref() may then fail and...
>
>>   		gnttab_release_grant_reference(&np->gref_rx_head, ref);
>>   		np->grant_rx_ref[id] = GRANT_INVALID_REF;
> [...]
>> +		kfree_skb(skb);
> ... this could then potentially free pages that the backend still has
> mapped.  If the pages are then reused, this would leak information to
> the backend.

Yes, it is possible. But calling kfree_skb is the right thing from the 
netfront point of view.

>
> Since only a buggy backend would result in this, leaking the skbs and
> grant refs would be acceptable here.

The same applies to the tx side, which uses a similar process.

>    I would also print an error.

You mean add a log message here? Is that necessary?
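David's suggestion above -- when revocation fails against a buggy backend, leak the skb and its grant ref rather than free a page the backend can still access, and report it -- could look roughly like this. The names and structure here are illustrative, not the real netfront code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the release path being discussed: if ending
 * foreign access fails (a buggy backend still maps the page), leak the
 * buffer and its grant ref instead of freeing memory the backend can
 * still read or write. Illustrative names, not the real driver API. */
struct rx_buf {
    bool backend_still_maps;  /* simulates a buggy backend */
};

static bool end_access(struct rx_buf *b)
{
    return !b->backend_still_maps;  /* fails while still mapped */
}

/* Returns true if the buffer was freed, false if it was leaked. */
static bool release_rx_buf(struct rx_buf *b, int id)
{
    if (!end_access(b)) {
        /* Deliberate leak: reusing the page would expose its new
         * contents to the backend. Log it, as suggested. */
        fprintf(stderr, "rx buffer %d still granted; leaking it\n", id);
        return false;
    }
    /* kfree_skb(skb) would go here in the real driver. */
    return true;
}
```

The deliberate leak trades a small, bounded loss against a correctness bug: only a misbehaving backend ever triggers it.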

Thanks
Annie
>
> While checking blkfront for how it handles this, it also doesn't appear
> to do the right thing either.
>
> David
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-15 19:20, David Vrabel wrote:
> On 09/01/14 22:48, Annie Li wrote:
>> Current netfront only grants pages for grant copy, not for grant transfer, so
>> remove corresponding transfer code and add receiving copy code in
>> xennet_release_rx_bufs.
> While netfront only supports a copying backend, I don't see anything
> preventing the backend from retaining mappings to netfront's Rx buffers...

Right. This does not prevent the backend from keeping mappings.
Maybe my description was not clear. What I meant here is based on the old 
2.6.18 netfront, which uses "copying_receiver" to tell netback whether rx 
requires grant copy. Changing "grant copy" above to "grant 
access" vs "grant transfer" would probably be more precise. And 
"gnttab_end_foreign_transfer_ref" is unnecessary code kept from the old 
netfront.

>
>> Signed-off-by: Annie Li <Annie.li@oracle.com>
>> ---
>>   drivers/net/xen-netfront.c |   60 ++-----------------------------------------
>>   1 files changed, 3 insertions(+), 57 deletions(-)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index e59acb1..692589e 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>>   
>>   static void xennet_release_rx_bufs(struct netfront_info *np)
>>   {
> [...]
>> -		mfn = gnttab_end_foreign_transfer_ref(ref);
>> +		gnttab_end_foreign_access_ref(ref, 0);
> ... the gnttab_end_foreign_access_ref() may then fail and...
>
>>   		gnttab_release_grant_reference(&np->gref_rx_head, ref);
>>   		np->grant_rx_ref[id] = GRANT_INVALID_REF;
> [...]
>> +		kfree_skb(skb);
> ... this could then potentially free pages that the backend still has
> mapped.  If the pages are then reused, this would leak information to
> the backend.

Yes, it is possible. But calling kfree_skb is the right thing to do from 
netfront's point of view.

>
> Since only a buggy backend would result in this, leaking the skbs and
> grant refs would be acceptable here.

The same applies to the tx side, which uses a similar process.

>    I would also print an error.

You mean adding a log message here? Is it necessary?

Thanks
Annie
>
> While checking blkfront for how it handles this, it also doesn't appear
> to do the right thing either.
>
> David
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RH3-00063n-8s; Wed, 15 Jan 2014 14:16:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3RH1-00063D-H0
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:16:51 +0000
Received: from [85.158.137.68:44756] by server-2.bemta-3.messagelabs.com id
	74/A7-17329-25896D25; Wed, 15 Jan 2014 14:16:50 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389795409!8140215!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23091 invoked from network); 15 Jan 2014 14:16:50 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:16:50 -0000
Received: by mail-we0-f171.google.com with SMTP id w61so1834998wes.16
	for <xen-devel@lists.xenproject.org>;
	Wed, 15 Jan 2014 06:16:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=nIfgvdPH2eqo1NGUWGUd0vw1EEAn+Kmxni44oOQKhbA=;
	b=Qcks3Z00yUg0nnWi71tAhkQekik40ahOEQSMVpyDxxwRga+Bh30UHLLj2Vi70LiFSE
	NVXew5qo1htDqlrXMznr4WZH/RUfuCqMoodsOd8bE7pQUCodCxhx+AlWMzNFmo7YZDrt
	oxNsnD+2iaU5FwGoVcldLDvrrcMYhl7k1C2pXFlwrfBhKRW2uCvAtrQ+O/zhREvirEGb
	chSwFAH4WyPn8W1L2iejfi42bbhkjgTZPlrmEcVhH0KX0wbmZHzzcMZ+bIjYmRW4gdXh
	giKbg2FPFe7elpmLswvQXWc3Zf/4JOPnsM5xRs5L/Hcc7WwaO4h553B6xmFVGRWy/Q7/
	3EsA==
X-Gm-Message-State: ALoCoQm0Aah7FyZd19eqTekg5XhsZsHbYEKhYh5EgpQX5d6K85CSiksehYvXvEUZnQVVj7tnaaJV
X-Received: by 10.180.106.165 with SMTP id gv5mr2610908wib.32.1389795409418;
	Wed, 15 Jan 2014 06:16:49 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id w1sm29345342wix.1.2014.01.15.06.16.47
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:16:48 -0800 (PST)
Message-ID: <52D6984E.9070909@linaro.org>
Date: Wed, 15 Jan 2014 14:16:46 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389387012-26247-1-git-send-email-julien.grall@linaro.org>
	<1389793244.3793.25.camel@kazak.uk.xensource.com>
In-Reply-To: <1389793244.3793.25.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: IRQ: Protect IRQ to be shared
 between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:40 PM, Ian Campbell wrote:
> On Fri, 2014-01-10 at 20:50 +0000, Julien Grall wrote:
>> The current dt_route_irq_to_guest implementation set IRQ_GUEST no matter if the
>> IRQ is correctly setup.
>>
>> As IRQ can be shared between devices, if the devices are not assigned to the
>> same domain or Xen, this could result to IRQ route to the domain instead of
>> Xen ...
>>
>> Also avoid to rely on wrong behaviour when Xen is routing an IRQ to DOM0.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Does this patch relate to or rely on " setup_dt_irq: don't enable the
> IRQ if the creation has failed" at all?

There is no relation between the two patches. Each one fixes a different bug.

>>
>> ---
>>     Hopefully, none of the supported platforms have UARTs (the only device
>>     currently used by Xen). It would be nice to have this patch for Xen 4.4 to
>>     avoid waste of time for developer.
> 
> Hrm, at some point I think we have to say no and I think post-rc "nice
> to avoid waste of time for developer" might be it. After all in a little
> over a month developers will be using 4.5-pre with this patch applied.

I'm fine with waiting until after the Xen 4.4 release.

> What actually happens without this patch? The Xen console UART stops
> working because the IRQ is delivered to the guest and not to Xen?

Right.

> How did you discover this? Does this happen in practice on any of the
> platforms which Xen supports? I think in general shared interrupts are
> reasonably rare on ARM, especially for on-SoC peripherals which the UART
> very often will be.

By reading the code: IRQ_GUEST is set unconditionally in
dt_route_irq_to_guest.

All the currently supported platforms are safe.

---
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:17:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RHY-0006Cp-SM; Wed, 15 Jan 2014 14:17:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3RHX-0006CI-AO
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:17:23 +0000
Received: from [85.158.139.211:30812] by server-11.bemta-5.messagelabs.com id
	DC/24-23268-27896D25; Wed, 15 Jan 2014 14:17:22 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389795440!9937271!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16639 invoked from network); 15 Jan 2014 14:17:21 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 14:17:21 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FEHIG4024453
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 14:17:19 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0FEHCFh006223
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 14:17:17 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FEHCht009787; Wed, 15 Jan 2014 14:17:12 GMT
Received: from [192.168.1.102] (/222.130.142.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 06:17:11 -0800
Message-ID: <52D69864.9030207@oracle.com>
Date: Wed, 15 Jan 2014 22:17:08 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
	<52D67677.4050407@citrix.com>
In-Reply-To: <52D67677.4050407@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-15 19:52, David Vrabel wrote:
> On 15/01/14 11:42, Wei Liu wrote:
>> On Wed, Jan 15, 2014 at 11:20:49AM +0000, David Vrabel wrote:
>>> On 09/01/14 22:48, Annie Li wrote:
>>>> Current netfront only grants pages for grant copy, not for grant transfer, so
>>>> remove corresponding transfer code and add receiving copy code in
>>>> xennet_release_rx_bufs.
>>> While netfront only supports a copying backend, I don't see anything
>>> preventing the backend from retaining mappings to netfront's Rx buffers...
>>>
>> Correct.
>>
>>>> Signed-off-by: Annie Li <Annie.li@oracle.com>
>>>> ---
>>>>   drivers/net/xen-netfront.c |   60 ++-----------------------------------------
>>>>   1 files changed, 3 insertions(+), 57 deletions(-)
>>>>
>>>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>>>> index e59acb1..692589e 100644
>>>> --- a/drivers/net/xen-netfront.c
>>>> +++ b/drivers/net/xen-netfront.c
>>>> @@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>>>>   
>>>>   static void xennet_release_rx_bufs(struct netfront_info *np)
>>>>   {
>>> [...]
>>>> -		mfn = gnttab_end_foreign_transfer_ref(ref);
>>>> +		gnttab_end_foreign_access_ref(ref, 0);
>>> ... the gnttab_end_foreign_access_ref() may then fail and...
>>>
>> Oh, I see. Andrew was actually referencing this function. Yes, it can
>> fail. Since he omitted "_ref" I looked at the other function when I
>> replied to him...
>>
>>>>   		gnttab_release_grant_reference(&np->gref_rx_head, ref);
>>>>   		np->grant_rx_ref[id] = GRANT_INVALID_REF;
>>> [...]
>>>> +		kfree_skb(skb);
>>> ... this could then potentially free pages that the backend still has
>>> mapped.  If the pages are then reused, this would leak information to
>>> the backend.
>>>
>>> Since only a buggy backend would result in this, leaking the skbs and
>>> grant refs would be acceptable here.  I would also print an error.
>>>
>> How about using gnttab_end_foreign_access. The deferred queue looks like
>> a right solution -- pending page won't get freed until gref is
>> quiescent.
> This is more like the correct approach but I don't think it still quite
> right.  The skb owns the pages so we don't want
> gnttab_end_foreign_access() to free them as freeing the skb will attempt
> to free them again.
>
> Having gnttab_end_foreign_access() do a free just looks odd to me, the
> free isn't paired with any alloc in the grant table code.
>
> It seems more logical to me that granting access takes an additional
> page ref, and then ending access releases that ref.

I am thinking of two ways; they could be implemented in new patches.
1. If gnttab_end_foreign_access_ref succeeds, call kfree_skb to free the 
skb. Otherwise, use gnttab_end_foreign_access to release the ref and pages.
2. Add a deferred mechanism, similar to the one in 
gnttab_end_foreign_access, to gnttab_end_foreign_access_ref.

Thanks
Annie



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:18:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RIN-0006OO-CG; Wed, 15 Jan 2014 14:18:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3RIL-0006Nm-Ro
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:18:14 +0000
Received: from [85.158.137.68:7960] by server-6.bemta-3.messagelabs.com id
	28/46-04868-4A896D25; Wed, 15 Jan 2014 14:18:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389795490!9282122!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7672 invoked from network); 15 Jan 2014 14:18:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:18:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93090295"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:18:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:18:08 -0500
Message-ID: <1389795487.3793.59.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anil Madhavapeddy <anil@recoil.org>
Date: Wed, 15 Jan 2014 14:18:07 +0000
In-Reply-To: <1389713435.12434.93.camel@kazak.uk.xensource.com>
References: <20140111233325.GA30303@dark.recoil.org>
	<6FB4516F0E9B0F43B54F88D855ABB790DE7744@AMSPEX01CL03.citrite.net>
	<1389710629.12434.64.camel@kazak.uk.xensource.com>
	<20140114152303.GA19990@dark.recoil.org>
	<1389713435.12434.93.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dave Scott <Dave.Scott@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] libxl: ocaml: guard x86-specific
 functions behind an ifdef
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 15:30 +0000, Ian Campbell wrote:
> On Tue, 2014-01-14 at 15:23 +0000, Anil Madhavapeddy wrote:
> 
> > > Perhaps CAMLlocal2 both defines and references the variables, keeping
> > > this issue at bay?
> > 
> > That's right.  CAMLlocal2 creates a stack variable and registers it with
> > the garbage collector as a root (to ensure that it's not collected during
> > the lifetime of the function).  This keeps it live and always used from
> > the perspective of the C compiler.
> 
> Thanks.
> 
> > Yeah, I'm not aware of any compiler that doesn't respect the noreturn
> > attribute and also emits unused variable warnings. I left the CAMLreturn
> > alone in favour of minimising the x86/ARM differences, but you could
> > change the #endif to an #else/#endif to only return on x86.  I'd prefer
> > to keep these bindings as straight-line as possible for the 4.4 release,
> > though, and to refactor oxenstored to not depend on them at all in the
> > future (it only uses a small part of libxc, and these cpuid functions
> > aren't used at all).
> 
> Thanks, I'm convinced by that argument; this can go in after rc2 is cut.
> 
> Release-ack: Ian Campbell

Applied



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RON-0007Cv-OW; Wed, 15 Jan 2014 14:24:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3ROM-0007Cj-AJ
	for Xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:24:26 +0000
Received: from [193.109.254.147:55950] by server-14.bemta-14.messagelabs.com
	id 54/37-12628-91A96D25; Wed, 15 Jan 2014 14:24:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389795863!11030935!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4990 invoked from network); 15 Jan 2014 14:24:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:24:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90972330"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:24:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:24:22 -0500
Message-ID: <1389795861.3793.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Wed, 15 Jan 2014 14:24:21 +0000
In-Reply-To: <52D641D7.8000904@citrix.com>
References: <52D638B5.5090202@univention.de> <52D641D7.8000904@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir.fraser@citrix.com>, Xen-devel@lists.xen.org,
	Philipp Hahn <hahn@univention.de>
Subject: Re: [Xen-devel] [BUG,PATCH] race in xen-block-hotplug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 09:07 +0100, Roger Pau Monné wrote:
> On 15/01/14 08:28, Philipp Hahn wrote:
> > Hello,
> > 
> > we encountered a strange race condition in tools/hotplug/Linux/block,
> > which only shows very rarely and only once for the first domain started
> > after the host server is started. The complete details are in our
> > Bugzilla at <https://forge.univention.org/bugzilla/show_bug.cgi?id=20481>.
> > 
> > In our case it's Xen-4.1.3 (yes I know it's ancient, but the code is
> > still the same in current Xen-git) and it only happens for file-backed
> > files (in our case an ISO-image as CD-ROM).
> > 
> >>        # Avoid a race with the remove if the path has been deleted, or
> >>        # otherwise changed from "InitWait" state e.g. due to a timeout
> >>         xenbus_state=$(xenstore_read_default "$XENBUS_PATH/state" 'unknown')
> >>         if [ "$xenbus_state" != '2' ]
> >>         then
> >>           release_lock "block"
> >>           fatal "Path closed or removed during hotplug add: $XENBUS_PATH state: $xenbus_state"
> >>         fi
> > 
> > The problem is that sometimes the other end (kernel/qemu?) is too slow
> > and the device is still in state 1=Initializing. If that happens, the
> > domU start is aborted and destroyed.
> > If the same VM is then started again, it works flawlessly.
> 
> This problem only manifests itself with xend and xl from Xen versions <
> 4.2, since 4.2 onwards libxl waits for the backend to switch to state 2
> before launching hotplug scripts.
> 
> I would like to avoid having this kind of infinite loop in the block
> hotplug script; is there any way this can be fixed in xend? (which is the
> only toolstack that has this problem in upstream versions).

xend relies on udev running the scripts based on the kernel firing the
events, so I don't think xend can fix it, at least not without quite a
big change; I don't know how the xend maintainers would view that.

Does blkback fire the uevent before it has moved to state 2 though --
sounds a bit iffy to me. Perhaps there is an underlying kernel bug here?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ROM-0007Ck-8E; Wed, 15 Jan 2014 14:24:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W3ROK-0007Ce-HR
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:24:24 +0000
Received: from [85.158.139.211:25366] by server-2.bemta-5.messagelabs.com id
	78/ED-29392-71A96D25; Wed, 15 Jan 2014 14:24:23 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389795861!9920277!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19250 invoked from network); 15 Jan 2014 14:24:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:24:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90972320"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:24:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:24:20 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W3ROG-00021f-AW;
	Wed, 15 Jan 2014 14:24:20 +0000
Date: Wed, 15 Jan 2014 14:24:20 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140115142419.GK1696@perard.uk.xensource.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389782889.12434.175.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 10:48:09AM +0000, Ian Campbell wrote:
> On Wed, 2014-01-15 at 10:21 +0000, Jan Beulich wrote:
> > >>> On 15.01.14 at 11:03, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Wed, 2014-01-15 at 09:27 +0000, Jan Beulich wrote:
> > >> According to our own documentation, even Python 2.3 is supported
> > >> for building,
> > > 
> > > It might be time to consider revving that.
> > 
> > Post 4.4 perhaps, but I don't think now is the right time to consider
> > anything like that. IMO we ought to fix the build instead.
> 
> I think revving the minimum requirement to Python 2.4 would be a
> perfectly acceptable thing to do at this stage in the release; it's
> nothing more than a change to the README.
> 
> From the rest of your mail I agree that going further than that and
> requiring 2.5+ would be wrong for 4.4.
> 
> > > Current state of some distros:
> > > 
> > > Debian Squeeze (oldstable): 2.6
> > > Debian Wheezy (stable): 2.7
> > > RHEL 5.9: 2.4
> > > RHEL 4.8: 2.3
> > > SLES 11SP3: 2.6
> > > SLES 10SP4: 2.4
> > > 
> > > So by bumping to a minimum version of 2.4 we would be dropping support
> > > for RHEL4 dom0 userspace -- Personally I think that would be acceptable,
> > > it's now RHEL's old-old-stable (unless RHEL7 happened while I wasn't
> > > paying attention, which is possible).
> > > 
> > > Bumping further than 2.4 would mean dropping RHEL5 and SLES10, neither
> > > of which seem desirable to drop unnecessarily. (If there were known
> > > issues with 2.4 then I would be inclined to re-evaluate that though)
> > 
> > As said - I encountered the issue here with 2.4 (on SLE10).
> 
> Sorry, I misunderstood and thought this was an issue with 2.3
> somewhere. 
> 
> Given that it is 2.4 and Qemu's configure script explicitly says that
> 2.4 is what they want I think it would be worth sending the fix to qemu
> upstream -- at worst they say no and bump their requirement (at which
> point we'd have to have a think about what to do).

It looks like the fix is already upstream, here in the 1.6 stable
branch:
http://git.qemu.org/?p=qemu.git;a=commit;h=0ca1774b619dc53db5eb1419d12efcd433f9fe3d

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:29:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RT3-0007at-Rf; Wed, 15 Jan 2014 14:29:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3RT2-0007aU-HM
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:29:16 +0000
Received: from [85.158.137.68:9730] by server-9.bemta-3.messagelabs.com id
	36/F4-13104-B3B96D25; Wed, 15 Jan 2014 14:29:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389796153!9267796!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1923 invoked from network); 15 Jan 2014 14:29:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:29:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93093824"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 14:29:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:29:12 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3RSy-00025p-6G;
	Wed, 15 Jan 2014 14:29:12 +0000
Date: Wed, 15 Jan 2014 14:29:12 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140115142912.GM5698@zion.uk.xensource.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
	<1389789477.12434.196.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389789477.12434.196.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 12:37:57PM +0000, Ian Campbell wrote:
> On Wed, 2014-01-15 at 12:30 +0000, Wei Liu wrote:
> > Xen: master branch
> > Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
> >                linux-next
> > 
> > When I tried to start a HVM domain running squeeze with 2.6.32, I got
> > 
> > (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> 
> Julien mentioned having seen something very similar with BSD on ARM
> yesterday...
> 

Weird, this error message went away after I rebooted my server. I
wouldn't bother chasing this heisenbug at this stage. :-)

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RUS-0007y9-DW; Wed, 15 Jan 2014 14:30:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3RUR-0007xu-4K
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:30:43 +0000
Received: from [193.109.254.147:42285] by server-10.bemta-14.messagelabs.com
	id 1B/83-20752-29B96D25; Wed, 15 Jan 2014 14:30:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389796241!7559058!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32600 invoked from network); 15 Jan 2014 14:30:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 14:30:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 14:30:41 +0000
Message-Id: <52D6A9A10200007800113E08@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 14:30:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Anthony PERARD" <anthony.perard@citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
	<20140115142419.GK1696@perard.uk.xensource.com>
In-Reply-To: <20140115142419.GK1696@perard.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 15:24, Anthony PERARD <anthony.perard@citrix.com> wrote:
> On Wed, Jan 15, 2014 at 10:48:09AM +0000, Ian Campbell wrote:
>> Given that it is 2.4 and Qemu's configure script explicitly says that
>> 2.4 is what they want I think it would be worth sending the fix to qemu
>> upstream -- at worst they say no and bump their requirement (at which
>> point we'd have to have a think about what to do).
> 
> It looks like the fix is already upstream. Here in the 1.6 stable
> branch:
> http://git.qemu.org/?p=qemu.git;a=commit;h=0ca1774b619dc53db5eb1419d12efcd433f9fe3d

Oh, great. Stefano - can you pull this in then, please?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:33:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RWb-00087C-Vb; Wed, 15 Jan 2014 14:32:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3RWa-000873-Vv
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:32:57 +0000
Received: from [193.109.254.147:44616] by server-11.bemta-14.messagelabs.com
	id 56/50-20576-81C96D25; Wed, 15 Jan 2014 14:32:56 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389796374!11071161!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25928 invoked from network); 15 Jan 2014 14:32:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:32:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90975309"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:32:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:32:53 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3RWX-00029G-GB;
	Wed, 15 Jan 2014 14:32:53 +0000
Date: Wed, 15 Jan 2014 14:32:53 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: annie li <annie.li@oracle.com>
Message-ID: <20140115143253.GN5698@zion.uk.xensource.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
	<52D67677.4050407@citrix.com> <52D69864.9030207@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D69864.9030207@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: netdev@vger.kernel.org, ian.campbell@citrix.com,
	Wei Liu <wei.liu2@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 10:17:08PM +0800, annie li wrote:
[...]
> >Having gnttab_end_foreign_access() do a free just looks odd to me, the
> >free isn't paired with any alloc in the grant table code.
> >
> >It seems more logical to me that granting access takes an additional
> >page ref, and then ending access releases that ref.
> 
> I am thinking of two ways, and they can be implemented in new patches.
> 1. If gnttab_end_foreign_access_ref succeeds, then kfree_skb is
> called to free skb. Otherwise, using gnttab_end_foreign_access to
> release ref and pages.

This is probably not a very good idea, as kfree_skb does a lot more
than simply freeing pages. You'd still be leaking something (skb
state, head, etc.) with this approach.
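The leak being described can be sketched with a hypothetical user-space
analogue (the struct and function names below are invented for
illustration; the real sk_buff and grant-table code paths are far
richer than this):

```c
#include <assert.h>
#include <stdlib.h>

/* Counters so we can observe what actually got released. */
static int pages_freed;
static int metadata_freed;

/* Toy stand-in for an skb: it owns metadata (state, head buffer)
 * in addition to the granted data page. */
struct fake_skb {
    void *head;   /* metadata buffer (skb state, head, etc.) */
    void *page;   /* granted data page */
};

static struct fake_skb *alloc_skb_like(void)
{
    struct fake_skb *skb = malloc(sizeof(*skb));
    skb->head = malloc(64);
    skb->page = malloc(4096);
    return skb;
}

/* Analogue of ending foreign access by freeing only the page:
 * the skb metadata stays allocated -- this is the leak. */
static void end_access_free_page(struct fake_skb *skb)
{
    free(skb->page);
    skb->page = NULL;
    pages_freed++;
}

/* Analogue of kfree_skb(): releases everything the skb owns. */
static void free_skb_like(struct fake_skb *skb)
{
    if (skb->page) {
        free(skb->page);
        pages_freed++;
    }
    free(skb->head);
    metadata_freed++;
    free(skb);
}
```

After end_access_free_page() alone, pages_freed is 1 but
metadata_freed is still 0: the page is gone, the skb state is not,
which is why the page-freeing path cannot substitute for kfree_skb.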

> 2. Add a similar deferred way of gnttab_end_foreign_access in
> gnttab_end_foreign_access_ref.
> 
> Thanks
> Annie
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:35:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RYX-0008FV-Gr; Wed, 15 Jan 2014 14:34:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3RYW-0008FQ-61
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:34:56 +0000
Received: from [193.109.254.147:15059] by server-4.bemta-14.messagelabs.com id
	7B/A4-03916-F8C96D25; Wed, 15 Jan 2014 14:34:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389796494!11025811!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21321 invoked from network); 15 Jan 2014 14:34:54 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:34:54 -0000
Received: by mail-we0-f170.google.com with SMTP id u57so1870534wes.1
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 06:34:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=MGJDQBAbISOClbHvWgGEu51EBQMncaoaK4rlwORWS1Q=;
	b=FkWOevrOMnECNlkNCFoezjWcvjNQN97tUgdAEGLo4tSer599M11uEsFW0QkN0kWiR6
	bYaLIWM2CKRkk4z1ScvrL4y0tbQbXZZLdGQzhO3nlRMh4KArSNsn8iR6kbjtBjC0C6v2
	zop7z19QiXqYvUBcbXcnfzkK/vYmdNuvEA+1RSd8i3falxcOiUho/gfrCN5SdoyXGQMB
	riEbUgVTVmYxa3Af9fXKWuWGGLZJlUCV1ImIc1SsNTKZBvBJcQ42iJeeWZ2k4K4mxCHW
	BBzDmjn0s4XyOCU6gXXTROq3WBi5qMIsgzmGMj3pPzO73HRvOlvl1W3VKLhgEzmz59zt
	flZw==
X-Gm-Message-State: ALoCoQlEy4mcnF0gNlcl72XsIGidB01Ekv9eBbXXVZvkGS8sp3FatKJlFrXup3OrGl6FCVNgPfnF
X-Received: by 10.180.39.2 with SMTP id l2mr2689293wik.47.1389796493380;
	Wed, 15 Jan 2014 06:34:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	fp4sm28626722wic.11.2014.01.15.06.34.51 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:34:52 -0800 (PST)
Message-ID: <52D69C88.70609@linaro.org>
Date: Wed, 15 Jan 2014 14:34:48 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
	<1389789477.12434.196.camel@kazak.uk.xensource.com>
	<20140115142912.GM5698@zion.uk.xensource.com>
In-Reply-To: <20140115142912.GM5698@zion.uk.xensource.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:29 PM, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 12:37:57PM +0000, Ian Campbell wrote:
>> On Wed, 2014-01-15 at 12:30 +0000, Wei Liu wrote:
>>> Xen: master branch
>>> Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>>>                linux-next
>>>
>>> When I tried to start a HVM domain running squeeze with 2.6.32, I got
>>>
>>> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
>>
>> Julien mentioned having seen something very similar with BSD on ARM
>> yesterday...
>>
> 
> Weird, this error message went away after I rebooted my server. I
> wouldn't bother chasing this heisenbug at this stage. :-)
>

I had the same bug on an ARM platform with FreeBSD and Linux guests. It
happens once in a while, but I'm unable to reproduce it reliably.

As for you, the bug disappeared after a reboot.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:35:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RYX-0008FV-Gr; Wed, 15 Jan 2014 14:34:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3RYW-0008FQ-61
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:34:56 +0000
Received: from [193.109.254.147:15059] by server-4.bemta-14.messagelabs.com id
	7B/A4-03916-F8C96D25; Wed, 15 Jan 2014 14:34:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389796494!11025811!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21321 invoked from network); 15 Jan 2014 14:34:54 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:34:54 -0000
Received: by mail-we0-f170.google.com with SMTP id u57so1870534wes.1
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 06:34:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=MGJDQBAbISOClbHvWgGEu51EBQMncaoaK4rlwORWS1Q=;
	b=FkWOevrOMnECNlkNCFoezjWcvjNQN97tUgdAEGLo4tSer599M11uEsFW0QkN0kWiR6
	bYaLIWM2CKRkk4z1ScvrL4y0tbQbXZZLdGQzhO3nlRMh4KArSNsn8iR6kbjtBjC0C6v2
	zop7z19QiXqYvUBcbXcnfzkK/vYmdNuvEA+1RSd8i3falxcOiUho/gfrCN5SdoyXGQMB
	riEbUgVTVmYxa3Af9fXKWuWGGLZJlUCV1ImIc1SsNTKZBvBJcQ42iJeeWZ2k4K4mxCHW
	BBzDmjn0s4XyOCU6gXXTROq3WBi5qMIsgzmGMj3pPzO73HRvOlvl1W3VKLhgEzmz59zt
	flZw==
X-Gm-Message-State: ALoCoQlEy4mcnF0gNlcl72XsIGidB01Ekv9eBbXXVZvkGS8sp3FatKJlFrXup3OrGl6FCVNgPfnF
X-Received: by 10.180.39.2 with SMTP id l2mr2689293wik.47.1389796493380;
	Wed, 15 Jan 2014 06:34:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	fp4sm28626722wic.11.2014.01.15.06.34.51 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:34:52 -0800 (PST)
Message-ID: <52D69C88.70609@linaro.org>
Date: Wed, 15 Jan 2014 14:34:48 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
	<1389789477.12434.196.camel@kazak.uk.xensource.com>
	<20140115142912.GM5698@zion.uk.xensource.com>
In-Reply-To: <20140115142912.GM5698@zion.uk.xensource.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:29 PM, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 12:37:57PM +0000, Ian Campbell wrote:
>> On Wed, 2014-01-15 at 12:30 +0000, Wei Liu wrote:
>>> Xen: master branch
>>> Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>>>                linux-next
>>>
>>> When I tried to start a HVM domain running squeeze with 2.6.32, I got
>>>
>>> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
>>
>> Julien mentioned having seen something very similar with BSD on ARM
>> yesterday...
>>
> 
> Weird, this error message was gone after I rebooted my server. I
> wouldn't bother with heisenbug at this stage. :-)
>

I have seen the same bug on an ARM platform with FreeBSD and Linux guests. It
happens once in a while, but I'm unable to reproduce it reliably.

As with you, the bug disappeared after a reboot.
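
For what it's worth, the "error -22" in the quoted EVTCHNOP failure is a
negative errno value; assuming the usual convention that hypercall failures
are returned as -errno (which event_channel.c follows), it decodes like this:

```python
import errno

# The hypervisor log reports "error -22"; negated, that is errno 22,
# which is EINVAL ("Invalid argument").
err = -22
print(errno.errorcode[-err])  # EINVAL
```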

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:35:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RYk-0008GX-UE; Wed, 15 Jan 2014 14:35:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W3RYi-0008GH-O4
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:35:08 +0000
Received: from [85.158.143.35:47715] by server-2.bemta-4.messagelabs.com id
	3C/4F-11386-C9C96D25; Wed, 15 Jan 2014 14:35:08 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389796506!11771827!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7384 invoked from network); 15 Jan 2014 14:35:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:35:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90976019"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:35:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:35:05 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W3RYf-0002Bj-87;
	Wed, 15 Jan 2014 14:35:05 +0000
Date: Wed, 15 Jan 2014 14:35:05 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140115143504.GL1696@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389778510.12434.118.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> 
> Based on the above I have no idea whether a freeze exception should be
> granted for this, so my default answer is no. I'm not sure what else you
> could have expected.
> 
> If you think there are changes here which should be in 4.4.0 then please
> enumerate all changes included in this merge which have any relation to
> Xen and their potential impact on the release.

Here is a list of the changes that have a potential impact on Xen, with the
ones I think are most important at the beginning. Either the commit title
speaks for itself or I have added a short description of what is affected.

Fix pc migration from qemu <= 1.5

- Potential compilation issue:
Adjust qapi-visit for python-2.4.3
configure: Explicitly set ARFLAGS so we can build with GNU Make 4.0

- Memory leak:
qapi: fix memleak by adding implict struct functions in dealloc visitor
  qapi is used by QMP, so there are potential leaks when making QMP calls
qom: Fix memory leak in object_property_set_link()
  same for qom

qdev-monitor: Fix crash when device_add is called with abstract driver
qdev-monitor: Unref device when device_add fails
  these could be issues triggered through the device_add QMP command

qemu-char: Fix potential out of bounds access to local arrays
  for serial="vc:WxH"
  vc stands for virtual console

audio: honor QEMU_AUDIO_TIMER_PERIOD instead of waking up every *nano* second
  looks like it reduces CPU load when playing audio

scsi_target_send_command(): amend stable-1.6 port of the CVE-2013-4344 fix
  if someone uses a SCSI disk

vmdk: Fix vmdk_parse_extents
vmdk: Fix creating big description file
  if someone uses a vmdk disk image

qcow2: count_contiguous_clusters and compression
qcow2: fix possible corruption when reading multiple clusters
qcow2: Zero-initialise first cluster for new images
  several fixes for qcow2, a disk image file format

virtio-net: only delete bh that existed
virtio-net: fix the memory leak in rxfilter_notify()
  a few virtio fixes

rng-egd: offset the point when repeatedly read from the buffer
  looks like this can be used by virtio


I did not list the commits that do not look like something a Xen guest can
use. So do these look like patches to take into our tree? At least the first
7 would be good to take, I think (migration fix, memory leaks, qdev fixes).
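
As an illustration of the "vc:WxH" spec mentioned above: the bug class fixed
there is reading past the end of the geometry string while parsing it. This
is a hedged sketch of tolerant parsing, not QEMU's actual code; the function
name is made up for illustration:

```python
import re

def parse_vc_geometry(spec):
    """Parse a chardev spec like "vc:80x25" into (width, height).

    Malformed input yields None instead of an out-of-bounds read,
    the failure mode the qemu-char commit guards against.
    """
    m = re.fullmatch(r"vc:(\d+)x(\d+)", spec)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_vc_geometry("vc:80x25"))  # (80, 25)
print(parse_vc_geometry("vc:80x"))    # None
```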

Regards,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rbx-00005u-OR; Wed, 15 Jan 2014 14:38:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3Rbw-00005i-Cs
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:38:28 +0000
Received: from [193.109.254.147:42097] by server-12.bemta-14.messagelabs.com
	id 34/A3-13681-36D96D25; Wed, 15 Jan 2014 14:38:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389796706!11085422!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8051 invoked from network); 15 Jan 2014 14:38:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 14:38:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 14:38:26 +0000
Message-Id: <52D6AB730200007800113E1F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 14:38:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Zir Blazer" <zir_blazer@hotmail.com>
References: <BAY170-W646961885B339AB1673393F3BD0@phx.gbl>,
	<52D2DE47.4070506@citrix.com>
	<BAY170-W968F25835DE78ECD78357FF3BE0@phx.gbl>
In-Reply-To: <BAY170-W968F25835DE78ECD78357FF3BE0@phx.gbl>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen hangs when trying to boot in UEFI mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 14:31, Zir Blazer <zir_blazer@hotmail.com> wrote:
> BTW, I found a thread of someone who was having a similar issue:
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg01791.html
> Xen didn't detect ACPI when booting on UEFI, and a workaround was
> required. However, that was for Xen 4.2.

And obviously - because the problem is with the Dom0 kernel,
provided you boot via xen.efi - you'd still have the same issue as
long as you don't have suitable code in place in the kernel to deal
with EFI on top of Xen. Upstream doesn't have such code yet.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:41:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rf6-0000Sx-Ha; Wed, 15 Jan 2014 14:41:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W3Rf5-0000SK-2a
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:41:43 +0000
Received: from [85.158.139.211:36800] by server-3.bemta-5.messagelabs.com id
	90/6F-04773-62E96D25; Wed, 15 Jan 2014 14:41:42 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389796900!8716955!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3400 invoked from network); 15 Jan 2014 14:41:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 14:41:41 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FEfcUG023783
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 14:41:39 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FEfbOF007870
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 14:41:37 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FEfarb007849; Wed, 15 Jan 2014 14:41:36 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 06:41:36 -0800
Message-ID: <52D69E0B.5020006@oracle.com>
Date: Wed, 15 Jan 2014 22:41:15 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
In-Reply-To: <52D64B87.6000400@zynstra.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/15/2014 04:49 PM, James Dingwall wrote:
> Bob Liu wrote:
>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>> Bob Liu wrote:
>>>> Could you confirm that this problem doesn't exist if loading tmem with
>>>> selfshrinking=0 during compile gcc? It seems that you are compiling
>>>> difference packages during your testing.
>>>> This will help to figure out whether selfshrinking is the root cause.
>>> Got an oom with selfshrinking=0, again during a gcc compile.
>>> Unfortunately I don't have a single test case which demonstrates the
>>> problem but as I mentioned before it will generally show up under
>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>
>> So the root cause is not because enabled selfshrinking.
>> Then what I can think of is that the xen-selfballoon driver was too
>> aggressive, too many pages were ballooned out which causeed heavy memory
>> pressure to guest OS.
>> And kswapd started to reclaim page until most of pages were
>> unreclaimable(all_unreclaimable=yes for all zones), then OOM Killer was
>> triggered.
>> In theory the balloon driver should give back ballooned out pages to
>> guest OS, but I'm afraid this procedure is not fast enough.
>>
>> My suggestion is reserve a min memory for your guest OS so that the
>> xen-selfballoon won't be so aggressive.
>> You can do it through parameters selfballoon_reserved_mb or
>> selfballoon_min_usable_mb.
> I am still getting OOM errors with both of these set to 32 so I'll try
> another bump to 64.  I think that if I do find values which prevent it
> though then it is more of a work around than a fix because it still
> suggests that swap is not being used when ballooning is no longer

Yes, it's more of a workaround, but I don't think there is a better way.

From the recent OOM logs you reported:
[ 8212.940769] Free swap  = 1925576kB
[ 8212.940770] Total swap = 2097148kB

[504638.442136] Free swap  = 1868108kB
[504638.442137] Total swap = 2097148kB

171572KB and 229040KB of data were swapped out to the swap disk; I think
those are already significant amounts for a guest OS that has only ~300M of
usable memory. The guest OS can't find pages suitable for swapping any more
after so many pages have been swapped out, even though at that point the
swap device still has plenty of free space.

The OOM might not be triggered if the balloon driver could give memory back
to the guest OS fast enough, but I think that's unrealistic. So the best
approach is to reserve more memory for the guest OS.
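
A minimal sketch of bumping those selfballoon knobs at runtime; the sysfs
path is an assumption based on where the 3.x xen-selfballoon driver
registers its attributes, so verify it exists on your kernel first:

```python
from pathlib import Path

# Assumed attribute location (kernel-version dependent); verify before use.
SELFBALLOON_DIR = Path("/sys/devices/system/xen_memory/xen_memory0")

def set_selfballoon_mb(attr, mb):
    """Write an MB value to a selfballoon sysfs attribute, if present."""
    path = SELFBALLOON_DIR / attr
    if path.exists():
        path.write_text("%d\n" % mb)
        return True
    return False  # attribute not present on this kernel

# e.g. reserve 64 MB and require 64 MB usable before ballooning out more:
set_selfballoon_mb("selfballoon_reserved_mb", 64)
set_selfballoon_mb("selfballoon_min_usable_mb", 64)
```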

> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
> kernel) guest running on the dom0 with tmem activated so I'm going to
> see if I can find a comparable workload to see if I get the same issue
> with a different kernel configuration.
> 
> James

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rif-0000jy-89; Wed, 15 Jan 2014 14:45:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3Rid-0000jp-0V
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:45:23 +0000
Received: from [85.158.143.35:28329] by server-3.bemta-4.messagelabs.com id
	D3/A6-32360-20F96D25; Wed, 15 Jan 2014 14:45:22 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389797120!11924985!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20938 invoked from network); 15 Jan 2014 14:45:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:45:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90979851"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:45:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:45:19 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3RiZ-0002LZ-4F;
	Wed, 15 Jan 2014 14:45:19 +0000
Date: Wed, 15 Jan 2014 14:45:19 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140115144519.GO5698@zion.uk.xensource.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D67536.4030106@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 11:47:02AM +0000, Zoltan Kiss wrote:
> On 15/01/14 10:37, Wei Liu wrote:
> >On Tue, Jan 14, 2014 at 07:28:39PM +0000, Zoltan Kiss wrote:
> >>The recent patch to fix receive side flow control (11b57f) solved the spinning
> >>thread problem, but caused another one. The receive side can stall if:
> >>- xenvif_rx_action sets rx_queue_stopped to false
> >>- interrupt happens, and sets rx_event to true
> >>- then xenvif_kthread sets rx_event to false
> >>
> >
> >If you mean "rx_work_todo" returns false.
> >
> >In this case
> >
> >(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
> >
> >can still be true, can't it?
> Sorry, I should have written: sets rx_queue_stopped to true
> 

In this case, if rx_queue_stopped is true, then we're expecting the
frontend to notify us, right?

rx_queue_stopped is set to true if we cannot make any progress queueing
packets into the ring. In that situation we can expect the frontend to
send a notification to the backend after it has gone through the backlog
in the ring. That means rx_event is set to true, and rx_work_todo is
true again. So the ring is actually not stalled in this case either. Did
I miss something?

> >
> >>Also, through rx_event a malicious guest can force the RX thread to spin. This
> >>patch ditches those two variables and reworks rx_work_todo. If the thread finds it
> >
> >This seems to be a bigger problem. Can you elaborate?
> My mistake too. I forgot that rx_action sets it to false, so it's not
> really spinning. However, the thread still has to run
> xenvif_rx_action to figure out that there is no space in the ring before
> it sets rx_event to false. With my patch we can quit earlier.
> 
> Zoli


From xen-devel-bounces@lists.xen.org Wed Jan 15 14:52:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rpp-0001Mm-0v; Wed, 15 Jan 2014 14:52:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Rpn-0001Mh-HP
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:52:47 +0000
Received: from [85.158.139.211:43325] by server-4.bemta-5.messagelabs.com id
	5A/BA-26791-EB0A6D25; Wed, 15 Jan 2014 14:52:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389797564!9902400!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17509 invoked from network); 15 Jan 2014 14:52:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:52:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90982437"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:52:44 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:52:43 -0500
Message-ID: <52D6A0B9.2030204@citrix.com>
Date: Wed, 15 Jan 2014 14:52:41 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
	<20140115144519.GO5698@zion.uk.xensource.com>
In-Reply-To: <20140115144519.GO5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 14:45, Wei Liu wrote:
>>>> The recent patch to fix receive side flow control (11b57f) solved the spinning
>>>> thread problem, but caused another one. The receive side can stall if:
>>>> - xenvif_rx_action sets rx_queue_stopped to false
>>>> - interrupt happens, and sets rx_event to true
>>>> - then xenvif_kthread sets rx_event to false
>>>>
>>>
>>> If you mean "rx_work_todo" returns false.
>>>
>>> In this case
>>>
>>> (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
>>>
>>> can still be true, can't it?
>> Sorry, I should have written: sets rx_queue_stopped to true
>>
> In this case, if rx_queue_stopped is true, then we're expecting the
> frontend to notify us, right?
>
> rx_queue_stopped is set to true if we cannot make any progress queueing
> packets into the ring. In that situation we can expect the frontend to
> send a notification to the backend after it has gone through the backlog
> in the ring. That means rx_event is set to true, and rx_work_todo is
> true again. So the ring is actually not stalled in this case either. Did
> I miss something?
>

Yes, we expect the guest to notify us, and it does, and we set rx_event
to true (see the second point), but then the thread sets it to false (see
the third point). Talking with Paul, another solution could be to set
rx_event to false before calling xenvif_rx_action. But using
rx_last_skb_slots makes it quicker for the thread to see that it doesn't
have to do anything.

Zoli


From xen-devel-bounces@lists.xen.org Wed Jan 15 14:56:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RtV-0001UF-N5; Wed, 15 Jan 2014 14:56:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3RtU-0001U7-OB
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:56:37 +0000
Received: from [193.109.254.147:33853] by server-1.bemta-14.messagelabs.com id
	84/78-15600-3A1A6D25; Wed, 15 Jan 2014 14:56:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389797794!11090775!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1876 invoked from network); 15 Jan 2014 14:56:34 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:56:34 -0000
Received: by mail-we0-f180.google.com with SMTP id q59so1869328wes.39
	for <xen-devel@lists.xenproject.org>;
	Wed, 15 Jan 2014 06:56:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=G54RprpkzLyAaQyBS/8vquehWb59zYIieONd4geXHVU=;
	b=YIQxDOA8S5Y0l9IZUy0rJrYatNchffkvkjmvo8izPCNQZNr/Z/7j8OwezzF+7CG0ZZ
	Nxx3lnpGbqm82nECGqvhhAeh7+8fGdYBe1ihv1HUg2lDnW9tvgO5r8pCphkVH8qyREW0
	Hl6XUDLNotJ9TG2j3iGASWW7I0ZDTfqw+S4YT+zlwU8bYO/xoZ+cLrrY8sf1UtT0Viy5
	uQu9FbIr/8ZdvGgYlxaFgnuAoqqab5yUibOF/zMFXvryWf5PNK/1/D1HEmMZZvYIgbAg
	DgHjt0ktp+yy1w8O8SMPjcJ4bTH/DBaP1ATgxvbIvfEGNCNreImsPDqFf4px0q5y6B3l
	p18g==
X-Gm-Message-State: ALoCoQmiVtGCuq0QzIzSsR9qNzj28dRHTCNckr57rshXxK6WPAnGuDmgeNBg87btM2/4Ck7J1U9q
X-Received: by 10.180.98.199 with SMTP id ek7mr2726501wib.21.1389797794242;
	Wed, 15 Jan 2014 06:56:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id w1sm29510822wix.1.2014.01.15.06.56.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:56:33 -0800 (PST)
Message-ID: <52D6A19F.4070803@linaro.org>
Date: Wed, 15 Jan 2014 14:56:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389651599-26562-1-git-send-email-baozich@gmail.com>
	<1389792762.3793.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1389792762.3793.19.camel@kazak.uk.xensource.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xenproject.org, Chen Baozi <baozich@gmail.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/arm{32,
 64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:32 PM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 06:19 +0800, Chen Baozi wrote:
>> The section shift for the level-2 page table should be #21 rather than #20.
>> Besides, since {FIRST,SECOND,THIRD}_SHIFT macros are defined in asm/page.h, use
>> these macros instead of hard-coded shift values.
>>
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> WRT a 4.4 freeze exception the main bit is the use of #21 instead of #20
> as the shift for the L2 entry, which can result in an UNK/SBZP bit being
> set. ARM ARM says:
> 
>         Hardware must implement the bit as Read-As-Zero, and must ignore
>         writes to the field.
>         
>         Software must not rely on the field reading as all 0s, and
>         except for writing back to the register must treat the value
>         as if it is UNKNOWN. Software must use an SBZP policy to write
>         to the field.
> 
> The danger is that some future version of the architecture assigns
> meaning to that bit. All in all this seems like a pretty benign issue,
> but on the flip side the fix is reasonably low risk; the only real
> danger is that one of the replacements is wrong, and most of them are
> pretty trivial, although s/#18/#(SECOND_SHIFT - 3)/ is a bit less so.
> 
> I was initially leaning towards putting this into the queue for 4.5, but
> on reflection I'm now starting to lean the other way.
> 
> Does anyone feel strongly that this shouldn't go into 4.4?

This sounds like a good fix for Xen 4.4.

Acked-by: Julien Grall <julien.grall@linaro.org>

>> ---
>>  xen/arch/arm/arm32/head.S | 20 ++++++++++----------
>>  xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
>>  2 files changed, 23 insertions(+), 23 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index 96230ac..f3eab89 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -291,14 +291,14 @@ cpu_init_done:
>>          ldr   r4, =boot_second
>>          add   r4, r4, r10            /* r1 := paddr (boot_second) */
>>  
>> -        lsr   r2, r9, #20            /* Base address for 2MB mapping */
>> -        lsl   r2, r2, #20
>> +        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
>> +        lsl   r2, r2, #SECOND_SHIFT
>>          orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
>>          orr   r2, r2, #PT_LOWER(MEM)
>>  
>>          /* ... map of vaddr(start) in boot_second */
>>          ldr   r1, =start
>> -        lsr   r1, #18                /* Slot for vaddr(start) */
>> +        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>>          strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
>>  
>>          /* ... map of paddr(start) in boot_second */
>> @@ -307,7 +307,7 @@ cpu_init_done:
>>                                        * then the mapping was done in
>>                                        * boot_pgtable above */
>>  
>> -        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
>> +        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
>>          strd  r2, r3, [r4, r1]       /* Map Xen there */
>>  1:
>>  
>> @@ -339,8 +339,8 @@ paging:
>>          /* Add UART to the fixmap table */
>>          ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
>>          mov   r3, #0
>> -        lsr   r2, r11, #12
>> -        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
>> +        lsr   r2, r11, #THIRD_SHIFT
>> +        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>>          orr   r2, r2, #PT_UPPER(DEV_L3)
>>          orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
>>          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>> @@ -353,7 +353,7 @@ paging:
>>          orr   r2, r2, #PT_UPPER(PT)
>>          orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
>>          ldr   r4, =FIXMAP_ADDR(0)
>> -        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
>> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
>>          strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
>>  
>>          /* Use a virtual address to access the UART. */
>> @@ -365,12 +365,12 @@ paging:
>>  
>>          ldr   r1, =boot_second
>>          mov   r3, #0x0
>> -        lsr   r2, r8, #21
>> -        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
>> +        lsr   r2, r8, #SECOND_SHIFT
>> +        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
>>          orr   r2, r2, #PT_UPPER(MEM)
>>          orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
>>          ldr   r4, =BOOT_FDT_VIRT_START
>> -        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
>> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
>>          strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
>>          dsb
>>  1:
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index bebddf0..5b164e9 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -278,11 +278,11 @@ skip_bss:
>>          str   x2, [x4, #0]           /* Map it in slot 0 */
>>  
>>          /* ... map of paddr(start) in boot_first */
>> -        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
>> +        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
>>          and   x1, x2, 0x1ff          /* x1 := Slot to use */
>>          cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
>>  
>> -        lsl   x2, x2, #30            /* Base address for 1GB mapping */
>> +        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
>>          mov   x3, #PT_MEM            /* x2 := Section map */
>>          orr   x2, x2, x3
>>          lsl   x1, x1, #3             /* x1 := Slot offset */
>> @@ -292,23 +292,23 @@ skip_bss:
>>          ldr   x4, =boot_second
>>          add   x4, x4, x20            /* x4 := paddr (boot_second) */
>>  
>> -        lsr   x2, x19, #20           /* Base address for 2MB mapping */
>> -        lsl   x2, x2, #20
>> +        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
>> +        lsl   x2, x2, #SECOND_SHIFT
>>          mov   x3, #PT_MEM            /* x2 := Section map */
>>          orr   x2, x2, x3
>>  
>>          /* ... map of vaddr(start) in boot_second */
>>          ldr   x1, =start
>> -        lsr   x1, x1, #18            /* Slot for vaddr(start) */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>>          str   x2, [x4, x1]           /* Map vaddr(start) */
>>  
>>          /* ... map of paddr(start) in boot_second */
>> -        lsr   x1, x19, #30           /* Base paddr */
>> +        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
>>          cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
>>                                        * then the mapping was done in
>>                                        * boot_pgtable or boot_first above */
>>  
>> -        lsr   x1, x19, #18           /* Slot for paddr(start) */
>> +        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
>>          str   x2, [x4, x1]           /* Map Xen there */
>>  1:
>>  
>> @@ -340,8 +340,8 @@ paging:
>>          /* Add UART to the fixmap table */
>>          ldr   x1, =xen_fixmap
>>          add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
>> -        lsr   x2, x23, #12
>> -        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
>> +        lsr   x2, x23, #THIRD_SHIFT
>> +        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>>          mov   x3, #PT_DEV_L3
>>          orr   x2, x2, x3             /* x2 := 4K dev map including UART */
>>          str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>> @@ -354,7 +354,7 @@ paging:
>>          mov   x3, #PT_PT
>>          orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
>>          ldr   x1, =FIXMAP_ADDR(0)
>> -        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
>>  
>>          /* Use a virtual address to access the UART. */
>> @@ -364,12 +364,12 @@ paging:
>>          /* Map the DTB in the boot misc slot */
>>          cbnz  x22, 1f                /* Only on boot CPU */
>>  
>> -        lsr   x2, x21, #21
>> -        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
>> +        lsr   x2, x21, #SECOND_SHIFT
>> +        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
>>          mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
>>          orr   x2, x2, x3
>>          ldr   x1, =BOOT_FDT_VIRT_START
>> -        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x4 := Slot for BOOT_FDT_VIRT_START */
>>          str   x2, [x4, x1]           /* Map it in the early fdt slot */
>>          dsb   sy
>>  1:
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:56:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3RtV-0001UF-N5; Wed, 15 Jan 2014 14:56:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3RtU-0001U7-OB
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:56:37 +0000
Received: from [193.109.254.147:33853] by server-1.bemta-14.messagelabs.com id
	84/78-15600-3A1A6D25; Wed, 15 Jan 2014 14:56:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389797794!11090775!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1876 invoked from network); 15 Jan 2014 14:56:34 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:56:34 -0000
Received: by mail-we0-f180.google.com with SMTP id q59so1869328wes.39
	for <xen-devel@lists.xenproject.org>;
	Wed, 15 Jan 2014 06:56:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=G54RprpkzLyAaQyBS/8vquehWb59zYIieONd4geXHVU=;
	b=YIQxDOA8S5Y0l9IZUy0rJrYatNchffkvkjmvo8izPCNQZNr/Z/7j8OwezzF+7CG0ZZ
	Nxx3lnpGbqm82nECGqvhhAeh7+8fGdYBe1ihv1HUg2lDnW9tvgO5r8pCphkVH8qyREW0
	Hl6XUDLNotJ9TG2j3iGASWW7I0ZDTfqw+S4YT+zlwU8bYO/xoZ+cLrrY8sf1UtT0Viy5
	uQu9FbIr/8ZdvGgYlxaFgnuAoqqab5yUibOF/zMFXvryWf5PNK/1/D1HEmMZZvYIgbAg
	DgHjt0ktp+yy1w8O8SMPjcJ4bTH/DBaP1ATgxvbIvfEGNCNreImsPDqFf4px0q5y6B3l
	p18g==
X-Gm-Message-State: ALoCoQmiVtGCuq0QzIzSsR9qNzj28dRHTCNckr57rshXxK6WPAnGuDmgeNBg87btM2/4Ck7J1U9q
X-Received: by 10.180.98.199 with SMTP id ek7mr2726501wib.21.1389797794242;
	Wed, 15 Jan 2014 06:56:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id w1sm29510822wix.1.2014.01.15.06.56.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:56:33 -0800 (PST)
Message-ID: <52D6A19F.4070803@linaro.org>
Date: Wed, 15 Jan 2014 14:56:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389651599-26562-1-git-send-email-baozich@gmail.com>
	<1389792762.3793.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1389792762.3793.19.camel@kazak.uk.xensource.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xenproject.org, Chen Baozi <baozich@gmail.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/arm{32,
 64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:32 PM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 06:19 +0800, Chen Baozi wrote:
>> Section shift for level-2 page table should be #21 rather than #20. Besides,
>> since there are {FIRST,SECOND,THIRD}_SHIFT macros defined in asm/page.h, use
>> these macros instead of hard-coded shift value.
>>
>> Signed-off-by: Chen Baozi <baozich@gmail.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> WRT a 4.4 freeze exception the main bit is the use of #21 instead of #20
> as the shift for the L2 entry, which can result in an UNK/SBZP bit being
> set. ARM ARM says:
> 
>         Hardware must implement the bit as Read-As-Zero, and must ignore
>         writes to the field.
>         
>         Software must not rely on the field reading as all 0s, and
>         except for writing back to the register must treat the value
>         as if it is UNKNOWN. Software must use an SBZP policy to write
>         to the field.
> 
> The danger is that some future version of the architecture assigns
> meaning to that bit. All in all this seems like a pretty benign issue,
> but on the flip side the fix is reasonably low risk; the only real
> danger is that one of the replacements is wrong and most of them are
> pretty trivial, although s/#18/#(SECOND_SHIFT - 3)/ is a bit less so.
> 
> I was initially leaning towards putting this into the queue for 4.5, but
> on reflection I'm now starting to lean the other way.
> 
> Does anyone feel strongly that this shouldn't go into 4.4?

This sounds like a good fix for Xen 4.4.

Acked-by: Julien Grall <julien.grall@linaro.org>

>> ---
>>  xen/arch/arm/arm32/head.S | 20 ++++++++++----------
>>  xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
>>  2 files changed, 23 insertions(+), 23 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
>> index 96230ac..f3eab89 100644
>> --- a/xen/arch/arm/arm32/head.S
>> +++ b/xen/arch/arm/arm32/head.S
>> @@ -291,14 +291,14 @@ cpu_init_done:
>>          ldr   r4, =boot_second
>>          add   r4, r4, r10            /* r1 := paddr (boot_second) */
>>  
>> -        lsr   r2, r9, #20            /* Base address for 2MB mapping */
>> -        lsl   r2, r2, #20
>> +        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
>> +        lsl   r2, r2, #SECOND_SHIFT
>>          orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
>>          orr   r2, r2, #PT_LOWER(MEM)
>>  
>>          /* ... map of vaddr(start) in boot_second */
>>          ldr   r1, =start
>> -        lsr   r1, #18                /* Slot for vaddr(start) */
>> +        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>>          strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
>>  
>>          /* ... map of paddr(start) in boot_second */
>> @@ -307,7 +307,7 @@ cpu_init_done:
>>                                        * then the mapping was done in
>>                                        * boot_pgtable above */
>>  
>> -        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
>> +        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
>>          strd  r2, r3, [r4, r1]       /* Map Xen there */
>>  1:
>>  
>> @@ -339,8 +339,8 @@ paging:
>>          /* Add UART to the fixmap table */
>>          ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
>>          mov   r3, #0
>> -        lsr   r2, r11, #12
>> -        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
>> +        lsr   r2, r11, #THIRD_SHIFT
>> +        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>>          orr   r2, r2, #PT_UPPER(DEV_L3)
>>          orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
>>          strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>> @@ -353,7 +353,7 @@ paging:
>>          orr   r2, r2, #PT_UPPER(PT)
>>          orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
>>          ldr   r4, =FIXMAP_ADDR(0)
>> -        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
>> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
>>          strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
>>  
>>          /* Use a virtual address to access the UART. */
>> @@ -365,12 +365,12 @@ paging:
>>  
>>          ldr   r1, =boot_second
>>          mov   r3, #0x0
>> -        lsr   r2, r8, #21
>> -        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
>> +        lsr   r2, r8, #SECOND_SHIFT
>> +        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
>>          orr   r2, r2, #PT_UPPER(MEM)
>>          orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
>>          ldr   r4, =BOOT_FDT_VIRT_START
>> -        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
>> +        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
>>          strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
>>          dsb
>>  1:
>> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
>> index bebddf0..5b164e9 100644
>> --- a/xen/arch/arm/arm64/head.S
>> +++ b/xen/arch/arm/arm64/head.S
>> @@ -278,11 +278,11 @@ skip_bss:
>>          str   x2, [x4, #0]           /* Map it in slot 0 */
>>  
>>          /* ... map of paddr(start) in boot_first */
>> -        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
>> +        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
>>          and   x1, x2, 0x1ff          /* x1 := Slot to use */
>>          cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
>>  
>> -        lsl   x2, x2, #30            /* Base address for 1GB mapping */
>> +        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
>>          mov   x3, #PT_MEM            /* x2 := Section map */
>>          orr   x2, x2, x3
>>          lsl   x1, x1, #3             /* x1 := Slot offset */
>> @@ -292,23 +292,23 @@ skip_bss:
>>          ldr   x4, =boot_second
>>          add   x4, x4, x20            /* x4 := paddr (boot_second) */
>>  
>> -        lsr   x2, x19, #20           /* Base address for 2MB mapping */
>> -        lsl   x2, x2, #20
>> +        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
>> +        lsl   x2, x2, #SECOND_SHIFT
>>          mov   x3, #PT_MEM            /* x2 := Section map */
>>          orr   x2, x2, x3
>>  
>>          /* ... map of vaddr(start) in boot_second */
>>          ldr   x1, =start
>> -        lsr   x1, x1, #18            /* Slot for vaddr(start) */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
>>          str   x2, [x4, x1]           /* Map vaddr(start) */
>>  
>>          /* ... map of paddr(start) in boot_second */
>> -        lsr   x1, x19, #30           /* Base paddr */
>> +        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
>>          cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
>>                                        * then the mapping was done in
>>                                        * boot_pgtable or boot_first above */
>>  
>> -        lsr   x1, x19, #18           /* Slot for paddr(start) */
>> +        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
>>          str   x2, [x4, x1]           /* Map Xen there */
>>  1:
>>  
>> @@ -340,8 +340,8 @@ paging:
>>          /* Add UART to the fixmap table */
>>          ldr   x1, =xen_fixmap
>>          add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
>> -        lsr   x2, x23, #12
>> -        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
>> +        lsr   x2, x23, #THIRD_SHIFT
>> +        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
>>          mov   x3, #PT_DEV_L3
>>          orr   x2, x2, x3             /* x2 := 4K dev map including UART */
>>          str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
>> @@ -354,7 +354,7 @@ paging:
>>          mov   x3, #PT_PT
>>          orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
>>          ldr   x1, =FIXMAP_ADDR(0)
>> -        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
>>          str   x2, [x4, x1]           /* Map it in the fixmap's slot */
>>  
>>          /* Use a virtual address to access the UART. */
>> @@ -364,12 +364,12 @@ paging:
>>          /* Map the DTB in the boot misc slot */
>>          cbnz  x22, 1f                /* Only on boot CPU */
>>  
>> -        lsr   x2, x21, #21
>> -        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
>> +        lsr   x2, x21, #SECOND_SHIFT
>> +        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
>>          mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
>>          orr   x2, x2, x3
>>          ldr   x1, =BOOT_FDT_VIRT_START
>> -        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
>> +        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x4 := Slot for BOOT_FDT_VIRT_START */
>>          str   x2, [x4, x1]           /* Map it in the early fdt slot */
>>          dsb   sy
>>  1:
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:58:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rv6-0001ae-7D; Wed, 15 Jan 2014 14:58:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3Rv4-0001aT-NP
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:58:14 +0000
Received: from [85.158.143.35:33798] by server-1.bemta-4.messagelabs.com id
	6A/FA-02132-602A6D25; Wed, 15 Jan 2014 14:58:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389797893!11933163!1
X-Originating-IP: [209.85.212.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26905 invoked from network); 15 Jan 2014 14:58:13 -0000
Received: from mail-wi0-f176.google.com (HELO mail-wi0-f176.google.com)
	(209.85.212.176)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:58:13 -0000
Received: by mail-wi0-f176.google.com with SMTP id hi8so840779wib.9
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 06:58:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=r+TmjeXEaKhXdz8v78JTkMXeDbliV+1BuIV8Y3Td0bs=;
	b=KaDWQiXTDJgoVqp76414/6izIV1N3l4Te4uQWU0slibqCj1QWjR0BR9BZ2V6VFcB9N
	q7R03wFe4KdfAWFPd90DcfO3PBltbr5KQ0kEXupg0bw+g7gpukZxcuTqk7jbVibOFfcb
	ElIv9DUEszud1pKzpuOLb5U9+2t14bbGLeURoW4NoT3xMJMtWVd8qf4+s95Dpi/oZrpr
	c5TFyPd1avoG/EVTIK8bKNjF15oMwtKFhXJzkEKEjDHkF5sWs+AvBvsAaczzgqpZ8DVH
	8F+xLvRERe5kkgdU9K2RG1r86IsJoBj7U22FrCWefKF4XdgsupE7GkmwKue/SjjUDnzB
	QMPg==
X-Gm-Message-State: ALoCoQkM1Y3njMK3TDY/L0SLblmGZqYONrPeJ1UZHn321Wmeuza/ts4Bxy3zwsiExjmqd97K7Hbe
X-Received: by 10.195.13.113 with SMTP id ex17mr2985258wjd.0.1389797893087;
	Wed, 15 Jan 2014 06:58:13 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id ua8sm3493151wjc.4.2014.01.15.06.58.11
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 06:58:12 -0800 (PST)
Message-ID: <52D6A201.6030708@linaro.org>
Date: Wed, 15 Jan 2014 14:58:09 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
	<52D5880B.30506@linaro.org>
	<1389778658.12434.120.camel@kazak.uk.xensource.com>
	<52D6921F.1030307@linaro.org>
	<1389794748.3793.48.camel@kazak.uk.xensource.com>
In-Reply-To: <1389794748.3793.48.camel@kazak.uk.xensource.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:05 PM, Ian Campbell wrote:
> On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
>> On 01/15/2014 09:37 AM, Ian Campbell wrote:
>>> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
>>>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
>>>>> These mappings are global and therefore need flushing on all processors. Add
>>>>> flush_all_xen_data_tlb_range_va which accomplishes this.
>>>>
>>>> Can we make name consistent across every *tlb* function call? On
>>>> flushtlb.h we use *_local for maintenance on the current processor only.
>>>> If the suffix is not present then the maintenance will be done on every
>>>> processor.
>>>
>>> I was trying to avoid a massive renaming of the existing flush_xen_*. I
>>> suppose I should just go ahead and do it.
>>
>> If it's too big for 4.4,
> 
> With my temporary-RM hat on I've struggled with this a few times this
> week -- that is, larger, mostly mechanical, textual changes which come
> about because it is the correct/cleanest thing to do as part of a
> smaller change which on their own would be pretty clear candidates for
> an exception. Chen's change "xen/arm{32, 64}: fix section shift when
> mapping 2MB block in boot page table" is in a similar boat.
> 
> I'm not sure where the balance should lie really.

The "issue" I see is that backporting patches from Xen 4.5 to Xen 4.4 will
be less trivial: we will have to think about the function names.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:59:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rvs-0001ga-LI; Wed, 15 Jan 2014 14:59:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Rvq-0001gJ-Oi
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:59:02 +0000
Received: from [193.109.254.147:26345] by server-13.bemta-14.messagelabs.com
	id 4C/BF-19374-632A6D25; Wed, 15 Jan 2014 14:59:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389797939!11040461!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8426 invoked from network); 15 Jan 2014 14:59:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:59:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90984847"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:58:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:58:22 -0500
Message-ID: <1389797901.3793.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:58:21 +0000
In-Reply-To: <52D58692.8000305@linaro.org>
References: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
	<52D58692.8000305@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correctly write release target in
 smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 18:48 +0000, Julien Grall wrote:
> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> > flush_xen_data_tlb_range_va() is clearly bogus since it flushes the tlb, not
> > the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
> > was mapped with ioremap_nocache and hence isn't cached in the first place.
> > Accesses should be via writeq though, so do that.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>

Thanks. I think release wise this can wait for 4.5, the flush is
unnecessary but not harmful.

While writeq is used for the additional barriers it adds, they are not
strictly needed here because there is no prior write to sequence against
(and ioremap_nocache has a barrier in it).

Unless removing this pointless flush makes some analysis of the use of
the functions less confusing perhaps?

However, thinking about the writeq barriers led me to notice the lack
of a recommended dsb() before the subsequent use of sev(), which is
needed to ensure that the write has occurred before we wake the other
processor. We get away with this because the iounmap in the middle does
a write_pte which has a dsb in it. Phew! Still, for 4.5:

8<-----

>From aab391b98760cc8417e06512848cf83dd5d71b5c Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 15 Jan 2014 14:49:04 +0000
Subject: [PATCH] xen: arm: add barrier before sev in smp_spin_table_cpu_up

This ensures that the writeq to the release address has occurred.

In reality there is a dsb() in the iounmap() (in the eventual write_pte()) but
make it explicit.

The ARMv8 ARM recommends that sev() is usually accompanied by a dsb(), the
only other uses are in the v7 spinlock code which includes a dsb() already.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/arm64/smpboot.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 9146476..9c7bf29 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -36,6 +36,7 @@ static int __init smp_spin_table_cpu_up(int cpu)
 
     iounmap(release);
 
+    dsb();
     sev();
 
     return cpu_up_send_sgi(cpu);
-- 
1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:59:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rvs-0001ga-LI; Wed, 15 Jan 2014 14:59:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Rvq-0001gJ-Oi
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 14:59:02 +0000
Received: from [193.109.254.147:26345] by server-13.bemta-14.messagelabs.com
	id 4C/BF-19374-632A6D25; Wed, 15 Jan 2014 14:59:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389797939!11040461!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8426 invoked from network); 15 Jan 2014 14:59:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:59:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90984847"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:58:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:58:22 -0500
Message-ID: <1389797901.3793.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 14:58:21 +0000
In-Reply-To: <52D58692.8000305@linaro.org>
References: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
	<52D58692.8000305@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correctly write release target in
 smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 18:48 +0000, Julien Grall wrote:
> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> > flush_xen_data_tlb_range_va() is clearly bogus since it flushes the tlb, not
> > the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
> > was mapped with ioremap_nocache and hence isn't cached in the first place.
> > Accesses should be via writeq though, so do that.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>

Thanks. I think, release-wise, this can wait for 4.5; the flush is
unnecessary but not harmful.

While writeq is used for the additional barriers which it adds, it is
not strictly needed here because there is no prior write to sequence
against (and ioremap_nocache has a barrier in it).

Unless, perhaps, removing this pointless flush makes some analysis of
the use of these functions less confusing?

However, thinking about the writeq barriers led me to notice the lack
of the recommended dsb() before the subsequent use of sev(), which is
needed to ensure that the write has occurred before we wake the other
processor. We get away with this because the iounmap() in the middle
does a write_pte() which has a dsb() in it. Phew! Still, for 4.5:

8<-----

>From aab391b98760cc8417e06512848cf83dd5d71b5c Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 15 Jan 2014 14:49:04 +0000
Subject: [PATCH] xen: arm: add barrier before sev in smp_spin_table_cpu_up

This ensures that the writeq to the release address has occurred.

In reality there is a dsb() in the iounmap() (in the eventual
write_pte()), but make it explicit.

The ARMv8 ARM recommends that sev() usually be accompanied by a dsb();
the only other uses are in the v7 spinlock code, which includes a dsb()
already.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/arm64/smpboot.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
index 9146476..9c7bf29 100644
--- a/xen/arch/arm/arm64/smpboot.c
+++ b/xen/arch/arm/arm64/smpboot.c
@@ -36,6 +36,7 @@ static int __init smp_spin_table_cpu_up(int cpu)
 
     iounmap(release);
 
+    dsb();
     sev();
 
     return cpu_up_send_sgi(cpu);
-- 
1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 14:59:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 14:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rwk-0001nI-3p; Wed, 15 Jan 2014 14:59:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3Rwh-0001mz-Rg
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 14:59:56 +0000
Received: from [85.158.137.68:9422] by server-5.bemta-3.messagelabs.com id
	56/0B-25188-A62A6D25; Wed, 15 Jan 2014 14:59:54 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389797992!9328670!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2843 invoked from network); 15 Jan 2014 14:59:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 14:59:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90985818"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 14:59:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 09:59:51 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3Rwd-0002Zb-RL;
	Wed, 15 Jan 2014 14:59:51 +0000
Date: Wed, 15 Jan 2014 14:59:51 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140115145951.GP5698@zion.uk.xensource.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
	<20140115144519.GO5698@zion.uk.xensource.com>
	<52D6A0B9.2030204@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D6A0B9.2030204@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 02:52:41PM +0000, Zoltan Kiss wrote:
> On 15/01/14 14:45, Wei Liu wrote:
> >>>>The recent patch to fix receive side flow control (11b57f) solved the spinning
> >>>>> >>thread problem, however caused an another one. The receive side can stall, if:
> >>>>> >>- xenvif_rx_action sets rx_queue_stopped to false
> >>>>> >>- interrupt happens, and sets rx_event to true
> >>>>> >>- then xenvif_kthread sets rx_event to false
> >>>>> >>
> >>>> >
> >>>> >If you mean "rx_work_todo" returns false.
> >>>> >
> >>>> >In this case
> >>>> >
> >>>> >(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
> >>>> >
> >>>> >can still be true, can't it?
> >>>Sorry, I should wrote rx_queue_stopped to true
> >>>
> >In this case, if rx_queue_stopped is true, then we're expecting frontend
> >to notify us, right?
> >
> >rx_queue_stopped is set to true if we cannot make any progress to queue
> >packet into the ring. In that situation we can expect frontend will send
> >notification to backend after it goes through the backlog in the ring.
> >That means rx_event is set to true, and rx_work_todo is true again. So
> >the ring is actually not stalled in this case as well. Did I miss
> >something?
> >
> 
> Yes, we expect the guest to notify us, and it does, and we set
> rx_event to true (see second point), but then the thread set it to
> false (see third point). Talking with Paul, another solution could
> be to set rx_event false before calling xenvif_rx_action. But using
> rx_last_skb_slots makes it quicker for the thread to see if it
> doesn't have to do anything.
> 

OK, this is a better explanation. So actually there's no bug in the
original implementation and your patch is sort of an improvement.

Could you send a new version of this patch with the relevant
information in the commit message? Talking to people offline is
faster, but I would like to
have public discussion and relevant information archived in a searchable
form. Thanks.

Wei.

> Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:02:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Rz9-00026h-Ve; Wed, 15 Jan 2014 15:02:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Rz8-00026c-W3
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:02:27 +0000
Received: from [85.158.143.35:2315] by server-2.bemta-4.messagelabs.com id
	50/A3-11386-203A6D25; Wed, 15 Jan 2014 15:02:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389798144!11884551!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23635 invoked from network); 15 Jan 2014 15:02:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:02:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90987400"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:02:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:02:23 -0500
Message-ID: <1389798142.3793.73.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 15:02:22 +0000
In-Reply-To: <52D6A201.6030708@linaro.org>
References: <1389718513-1638-1-git-send-email-ian.campbell@citrix.com>
	<52D5880B.30506@linaro.org>
	<1389778658.12434.120.camel@kazak.uk.xensource.com>
	<52D6921F.1030307@linaro.org>
	<1389794748.3793.48.camel@kazak.uk.xensource.com>
	<52D6A201.6030708@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: flush TLB on all CPUs when
 setting or clearing fixmaps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 14:58 +0000, Julien Grall wrote:
> On 01/15/2014 02:05 PM, Ian Campbell wrote:
> > On Wed, 2014-01-15 at 13:50 +0000, Julien Grall wrote:
> >> On 01/15/2014 09:37 AM, Ian Campbell wrote:
> >>> On Tue, 2014-01-14 at 18:55 +0000, Julien Grall wrote:
> >>>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> >>>>> These mappings are global and therefore need flushing on all processors. Add
> >>>>> flush_all_xen_data_tlb_range_va which accomplishes this.
> >>>>
> >>>> Can we make name consistent across every *tlb* function call? On
> >>>> flushtlb.h we use *_local for maintenance on the current processor only.
> >>>> If the suffix is not present then the maintenance will be done on every
> >>>> processor.
> >>>
> >>> I was trying to avoid a massive renaming of the existing flush_xen_*. I
> >>> suppose I should just go ahead and do it.
> >>
> >> If it's too big for 4.4,
> > 
> > With my temporary-RM hat on I've struggled with this a few times this
> > week -- that is, larger, mostly mechanical, textual changes which come
> > about because it is the correct/cleanest thing to do as part of a
> > smaller change which on their own would be pretty clear candidates for
> > an exception. Chen's change "xen/arm{32, 64}: fix section shift when
> > mapping 2MB block in boot page table" is in a similar boat.
> > 
> > I'm not sure where the balance should lie really.
> 
> The "issue" I see is backporting patch from Xen 4.5 to Xen 4.4 will be
> less trivial. We will have to think about the function name.

Yes, especially where the old function name continues to exist but with
different semantics (at least in this case it would be a wider, and
therefore safe, flush, but still).

That does seem to be an argument for doing the rename sooner rather
than later.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:09:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3S5Q-0002S9-RD; Wed, 15 Jan 2014 15:08:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3S5P-0002S2-Jn
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 15:08:55 +0000
Received: from [85.158.137.68:18425] by server-10.bemta-3.messagelabs.com id
	22/0D-23989-684A6D25; Wed, 15 Jan 2014 15:08:54 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389798531!9279208!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25362 invoked from network); 15 Jan 2014 15:08:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:08:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90991094"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:08:14 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:08:13 -0500
Message-ID: <52D6A45C.1060705@citrix.com>
Date: Wed, 15 Jan 2014 15:08:12 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
	<20140115144519.GO5698@zion.uk.xensource.com>
	<52D6A0B9.2030204@citrix.com>
	<20140115145951.GP5698@zion.uk.xensource.com>
In-Reply-To: <20140115145951.GP5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:09:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3S5Q-0002S9-RD; Wed, 15 Jan 2014 15:08:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3S5P-0002S2-Jn
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 15:08:55 +0000
Received: from [85.158.137.68:18425] by server-10.bemta-3.messagelabs.com id
	22/0D-23989-684A6D25; Wed, 15 Jan 2014 15:08:54 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389798531!9279208!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25362 invoked from network); 15 Jan 2014 15:08:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:08:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90991094"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:08:14 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:08:13 -0500
Message-ID: <52D6A45C.1060705@citrix.com>
Date: Wed, 15 Jan 2014 15:08:12 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
	<20140115144519.GO5698@zion.uk.xensource.com>
	<52D6A0B9.2030204@citrix.com>
	<20140115145951.GP5698@zion.uk.xensource.com>
In-Reply-To: <20140115145951.GP5698@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 14:59, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 02:52:41PM +0000, Zoltan Kiss wrote:
>> On 15/01/14 14:45, Wei Liu wrote:
>>>>>> The recent patch to fix receive side flow control (11b57f) solved the spinning
>>>>>>>>> thread problem, but caused another one. The receive side can stall if:
>>>>>>>>> - xenvif_rx_action sets rx_queue_stopped to false
>>>>>>>>> - interrupt happens, and sets rx_event to true
>>>>>>>>> - then xenvif_kthread sets rx_event to false
>>>>>>>>>
>>>>>>>
>>>>>>> If you mean "rx_work_todo" returns false.
>>>>>>>
>>>>>>> In this case
>>>>>>>
>>>>>>> (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
>>>>>>>
>>>>>>> can still be true, can't it?
>>>>> Sorry, I should have written rx_queue_stopped to true
>>>>>
>>> In this case, if rx_queue_stopped is true, then we're expecting frontend
>>> to notify us, right?
>>>
>>> rx_queue_stopped is set to true if we cannot make any progress to queue
>>> packet into the ring. In that situation we can expect frontend will send
>>> notification to backend after it goes through the backlog in the ring.
>>> That means rx_event is set to true, and rx_work_todo is true again. So
>>> the ring is actually not stalled in this case as well. Did I miss
>>> something?
>>>
>>
>> Yes, we expect the guest to notify us, and it does, and we set
>> rx_event to true (see the second point), but then the thread sets it
>> to false (see the third point). Talking with Paul, another solution
>> could be to set rx_event to false before calling xenvif_rx_action.
>> But using rx_last_skb_slots makes it quicker for the thread to see
>> that it has nothing to do.
>>
>
> OK, this is a better explanation. So actually there's no bug in the
> original implementation and your patch is sort of an improvement.
>
> Could you send a new version of this patch with relevant information in
> commit message? Talking to people offline is faster, but I would like to
> have public discussion and relevant information archived in a searchable
> form. Thanks.

No, there is a bug in the original implementation:
- [THREAD] xenvif_rx_action sets rx_queue_stopped to true
- [INTERRUPT] interrupt happens, and sets rx_event to true
- [THREAD] then xenvif_kthread sets rx_event to false
- [THREAD] rx_work_todo never returns true anymore

I will update the explanation and send in the patch again.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:16:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SCd-0002xQ-Qn; Wed, 15 Jan 2014 15:16:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3SCc-0002xK-Da
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 15:16:22 +0000
Received: from [85.158.139.211:20144] by server-7.bemta-5.messagelabs.com id
	76/D6-04824-546A6D25; Wed, 15 Jan 2014 15:16:21 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389798979!7233776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19102 invoked from network); 15 Jan 2014 15:16:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:16:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90996549"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:16:19 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:16:18 -0500
Message-ID: <52D6A640.8000805@citrix.com>
Date: Wed, 15 Jan 2014 15:16:16 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v4 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 20:39, Zoltan Kiss wrote:
> @@ -1677,6 +1793,31 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>   	vif->mmap_pages[pending_idx] = NULL;
>   }
>
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +			   "Trying to unmap invalid handle! pending_idx: %x\n",
> +			   pending_idx);
> +		return;
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			    idx_to_kaddr(vif, pending_idx),
> +			    GNTMAP_host_map,
> +			    vif->grant_tx_handle[pending_idx]);
> +	ret = gnttab_unmap_refs(&tx_unmap_op,
> +				&vif->mmap_pages[pending_idx],
> +				1);
> +
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +					&tx_unmap_op,
> +					1);
> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}

Awkward mistake: I forgot to delete the hypercall ... Even more 
interestingly, it caused trouble only very rarely ...

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:18:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SEs-0003d4-ET; Wed, 15 Jan 2014 15:18:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3SEq-0003cz-St
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:18:41 +0000
Received: from [85.158.137.68:42373] by server-8.bemta-3.messagelabs.com id
	28/43-31081-CC6A6D25; Wed, 15 Jan 2014 15:18:36 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389799113!8512561!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25504 invoked from network); 15 Jan 2014 15:18:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 15:18:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FFIT1O010909
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 15:18:30 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FFISAv007828
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 15:18:29 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FFISm2008462; Wed, 15 Jan 2014 15:18:28 GMT
Received: from [192.168.1.102] (/222.130.142.192)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 07:18:28 -0800
Message-ID: <52D6A6BC.4060603@oracle.com>
Date: Wed, 15 Jan 2014 23:18:20 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
	<1389791597.12434.216.camel@kazak.uk.xensource.com>
In-Reply-To: <1389791597.12434.216.camel@kazak.uk.xensource.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: ian.jackson@eu.citrx.com, wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-15 21:13, Ian Campbell wrote:
> On Wed, 2014-01-15 at 02:33 +0800, Annie Li wrote:
>> From: Annie Li <annie.li@oracle.com>
>>
>> After split eventchannel feature was supported by netback/netfront,
>> "xl network-list" does not show eventchannel correctly. Add tx-/rx-evt-ch
>> to show tx/rx eventchannel correctly.
>>
>> Signed-off-by: Annie Li <annie.li@oracle.com>
> How critical is this for 4.4?

I think it can wait. This issue only happens with the split event channel 
feature implemented in the latest netback/netfront; "xl network-list" works 
fine with the old netback/netfront.

>
> Please consider
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze and make a case for it if you think it should go in.
>
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index 649ce50..e6368c7 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -488,7 +488,8 @@ libxl_nicinfo = Struct("nicinfo", [
>>       ("frontend_id", uint32),
>>       ("devid", libxl_devid),
>>       ("state", integer),
>> -    ("evtch", integer),
>> +    ("evtch_tx", integer),
>> +    ("evtch_rx", integer),
> This needs backwards compatibility handling, see the big comment at the
> head of libxl.h and the other examples in that file. I'm doubtful that
> you will be able to remove the evtch field without breaking the API, so
> it probably needs to stay even if it is explicitly invalid under some
> circumstances.
>
> It also needs a suitable LIBXL_HAVE_ #define, again see libxl.h.

Yes, this patch does not handle backwards compatibility, and it probably 
breaks the API. Let me fix that, thanks!

Thanks
Annie
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:19:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SF8-0003fH-Rl; Wed, 15 Jan 2014 15:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3SF7-0003f3-Oj
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:18:58 +0000
Received: from [85.158.137.68:44575] by server-1.bemta-3.messagelabs.com id
	7E/7E-29598-DD6A6D25; Wed, 15 Jan 2014 15:18:53 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389799130!5671049!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22033 invoked from network); 15 Jan 2014 15:18:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:18:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90997395"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:18:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:18:36 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W3SEm-0002r6-C3;
	Wed, 15 Jan 2014 15:18:36 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 15 Jan 2014 15:18:31 +0000
Message-ID: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
	offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for IPv6 checksum offload and GSO when those
features are available in the backend.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c41537b..321759f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -617,7 +617,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		tx->flags |= XEN_NETTXF_extra_info;
 
 		gso->u.gso.size = skb_shinfo(skb)->gso_size;
-		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
+		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
+			XEN_NETIF_GSO_TYPE_TCPV6 :
+			XEN_NETIF_GSO_TYPE_TCPV4;
 		gso->u.gso.pad = 0;
 		gso->u.gso.features = 0;
 
@@ -809,15 +811,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 		return -EINVAL;
 	}
 
-	/* Currently only TCPv4 S.O. is supported. */
-	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
+	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
+	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
 		if (net_ratelimit())
 			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
 		return -EINVAL;
 	}
 
 	skb_shinfo(skb)->gso_size = gso->u.gso.size;
-	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+	skb_shinfo(skb)->gso_type =
+		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
+		SKB_GSO_TCPV4 :
+		SKB_GSO_TCPV6;
 
 	/* Header must be checked, and gso_segs computed. */
 	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
@@ -1191,6 +1196,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_SG;
 	}
 
+	if (features & NETIF_F_IPV6_CSUM) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-ipv6-csum-offload", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_IPV6_CSUM;
+	}
+
 	if (features & NETIF_F_TSO) {
 		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 				 "feature-gso-tcpv4", "%d", &val) < 0)
@@ -1200,6 +1214,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_TSO;
 	}
 
+	if (features & NETIF_F_TSO6) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-gso-tcpv6", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_TSO6;
+	}
+
 	return features;
 }
 
@@ -1338,7 +1361,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
-	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
+	netdev->hw_features	= NETIF_F_SG |
+				  NETIF_F_IPV6_CSUM |
+				  NETIF_F_TSO | NETIF_F_TSO6;
 
 	/*
          * Assume that all hw features are available for now. This set
@@ -1716,6 +1741,19 @@ again:
 		goto abort_transaction;
 	}
 
+	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6", "%d", 1);
+	if (err) {
+		message = "writing feature-gso-tcpv6";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-offload",
+			    "%d", 1);
+	if (err) {
+		message = "writing feature-ipv6-csum-offload";
+		goto abort_transaction;
+	}
+
 	err = xenbus_transaction_end(xbt, 0);
 	if (err) {
 		if (err == -EAGAIN)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:19:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SF8-0003fH-Rl; Wed, 15 Jan 2014 15:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3SF7-0003f3-Oj
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:18:58 +0000
Received: from [85.158.137.68:44575] by server-1.bemta-3.messagelabs.com id
	7E/7E-29598-DD6A6D25; Wed, 15 Jan 2014 15:18:53 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389799130!5671049!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22033 invoked from network); 15 Jan 2014 15:18:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:18:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90997395"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:18:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:18:36 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W3SEm-0002r6-C3;
	Wed, 15 Jan 2014 15:18:36 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 15 Jan 2014 15:18:31 +0000
Message-ID: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
	offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for IPv6 checksum offload and GSO when those
features are available in the backend.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
 drivers/net/xen-netfront.c |   48 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c41537b..321759f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -617,7 +617,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		tx->flags |= XEN_NETTXF_extra_info;
 
 		gso->u.gso.size = skb_shinfo(skb)->gso_size;
-		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
+		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
+			XEN_NETIF_GSO_TYPE_TCPV6 :
+			XEN_NETIF_GSO_TYPE_TCPV4;
 		gso->u.gso.pad = 0;
 		gso->u.gso.features = 0;
 
@@ -809,15 +811,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 		return -EINVAL;
 	}
 
-	/* Currently only TCPv4 S.O. is supported. */
-	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
+	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
+	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
 		if (net_ratelimit())
 			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
 		return -EINVAL;
 	}
 
 	skb_shinfo(skb)->gso_size = gso->u.gso.size;
-	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+	skb_shinfo(skb)->gso_type =
+		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
+		SKB_GSO_TCPV4 :
+		SKB_GSO_TCPV6;
 
 	/* Header must be checked, and gso_segs computed. */
 	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
@@ -1191,6 +1196,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_SG;
 	}
 
+	if (features & NETIF_F_IPV6_CSUM) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-ipv6-csum-offload", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_IPV6_CSUM;
+	}
+
 	if (features & NETIF_F_TSO) {
 		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 				 "feature-gso-tcpv4", "%d", &val) < 0)
@@ -1200,6 +1214,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_TSO;
 	}
 
+	if (features & NETIF_F_TSO6) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-gso-tcpv6", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_TSO6;
+	}
+
 	return features;
 }
 
@@ -1338,7 +1361,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
-	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
+	netdev->hw_features	= NETIF_F_SG |
+				  NETIF_F_IPV6_CSUM |
+				  NETIF_F_TSO | NETIF_F_TSO6;
 
 	/*
          * Assume that all hw features are available for now. This set
@@ -1716,6 +1741,19 @@ again:
 		goto abort_transaction;
 	}
 
+	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6", "%d", 1);
+	if (err) {
+		message = "writing feature-gso-tcpv6";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-offload",
+			    "%d", 1);
+	if (err) {
+		message = "writing feature-ipv6-csum-offload";
+		goto abort_transaction;
+	}
+
 	err = xenbus_transaction_end(xbt, 0);
 	if (err) {
 		if (err == -EAGAIN)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:25:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SLV-0003wr-NZ; Wed, 15 Jan 2014 15:25:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3SLU-0003wm-Is
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:25:32 +0000
Received: from [193.109.254.147:37680] by server-13.bemta-14.messagelabs.com
	id DD/54-19374-B68A6D25; Wed, 15 Jan 2014 15:25:31 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389799529!8759607!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16563 invoked from network); 15 Jan 2014 15:25:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:25:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91000779"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:25:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3SLQ-0002zE-MT;
	Wed, 15 Jan 2014 15:25:28 +0000
Message-ID: <52D6A868.6040707@citrix.com>
Date: Wed, 15 Jan 2014 15:25:28 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: netdev@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 15:18, Paul Durrant wrote:
> This patch adds support for IPv6 checksum offload and GSO when those
> features are available in the backend.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/net/xen-netfront.c |   48 +++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 43 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index c41537b..321759f 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -617,7 +617,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  		tx->flags |= XEN_NETTXF_extra_info;
>  
>  		gso->u.gso.size = skb_shinfo(skb)->gso_size;
> -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
> +		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
> +			XEN_NETIF_GSO_TYPE_TCPV6 :
> +			XEN_NETIF_GSO_TYPE_TCPV4;
>  		gso->u.gso.pad = 0;
>  		gso->u.gso.features = 0;
>  
> @@ -809,15 +811,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
>  		return -EINVAL;
>  	}
>  
> -	/* Currently only TCPv4 S.O. is supported. */
> -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
> +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
> +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
>  		if (net_ratelimit())
>  			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
>  		return -EINVAL;
>  	}
>  
>  	skb_shinfo(skb)->gso_size = gso->u.gso.size;
> -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> +	skb_shinfo(skb)->gso_type =
> +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
> +		SKB_GSO_TCPV4 :
> +		SKB_GSO_TCPV6;
>  
>  	/* Header must be checked, and gso_segs computed. */
>  	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
> @@ -1191,6 +1196,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
>  			features &= ~NETIF_F_SG;
>  	}
>  
> +	if (features & NETIF_F_IPV6_CSUM) {
> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> +				 "feature-ipv6-csum-offload", "%d", &val) < 0)
> +			val = 0;
> +
> +		if (!val)
> +			features &= ~NETIF_F_IPV6_CSUM;
> +	}
> +
>  	if (features & NETIF_F_TSO) {
>  		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
>  				 "feature-gso-tcpv4", "%d", &val) < 0)
> @@ -1200,6 +1214,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
>  			features &= ~NETIF_F_TSO;
>  	}
>  
> +	if (features & NETIF_F_TSO6) {
> +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> +				 "feature-gso-tcpv6", "%d", &val) < 0)
> +			val = 0;
> +
> +		if (!val)
> +			features &= ~NETIF_F_TSO6;
> +	}
> +
>  	return features;
>  }
>  
> @@ -1338,7 +1361,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>  				  NETIF_F_GSO_ROBUST;
> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_SG |
> +				  NETIF_F_IPV6_CSUM |
> +				  NETIF_F_TSO | NETIF_F_TSO6;
>  
>  	/*
>           * Assume that all hw features are available for now. This set
> @@ -1716,6 +1741,19 @@ again:
>  		goto abort_transaction;
>  	}
>  
> +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6", "%d", 1);

"%d", 1 results in a constant string.  xenbus_write() would avoid a
transitory memory allocation.
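
The suggested alternative could be sketched as below (an untested
fragment, mirroring the error handling already used in this patch;
xenbus_write() takes the value as a plain string):

```c
	/* Sketch only: write the literal "1" instead of formatting it. */
	err = xenbus_write(xbt, dev->nodename, "feature-gso-tcpv6", "1");
	if (err) {
		message = "writing feature-gso-tcpv6";
		goto abort_transaction;
	}
```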

~Andrew

> +	if (err) {
> +		message = "writing feature-gso-tcpv6";
> +		goto abort_transaction;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-offload",
> +			    "%d", 1);
> +	if (err) {
> +		message = "writing feature-ipv6-csum-offload";
> +		goto abort_transaction;
> +	}
> +
>  	err = xenbus_transaction_end(xbt, 0);
>  	if (err) {
>  		if (err == -EAGAIN)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:27:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SNj-00044o-BH; Wed, 15 Jan 2014 15:27:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3SNh-00044e-LM
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:27:49 +0000
Received: from [85.158.139.211:42353] by server-8.bemta-5.messagelabs.com id
	F3/D7-29838-4F8A6D25; Wed, 15 Jan 2014 15:27:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389798846!9889087!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15134 invoked from network); 15 Jan 2014 15:14:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:14:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90995245"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:14:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:14:00 -0500
Message-ID: <52D6A5B7.40503@citrix.com>
Date: Wed, 15 Jan 2014 15:13:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: annie li <annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
	<52D67677.4050407@citrix.com> <52D69864.9030207@oracle.com>
In-Reply-To: <52D69864.9030207@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 14:17, annie li wrote:
> 
> I am thinking of two ways, and they can be implemented in new patches.
> 1. If gnttab_end_foreign_access_ref succeeds, then kfree_skb is called
> to free the skb. Otherwise, use gnttab_end_foreign_access to release the
> ref and pages.
> 2. Add a similar deferred way of gnttab_end_foreign_access in
> gnttab_end_foreign_access_ref.

Something like the following (untested!) patch is what I'm thinking of.

Fixups to existing drivers (except netfront) are included but may not be
quite correct.

8<----------
>From 76c254c8e020f4427e8f37c867622f0bfd5ac85f Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 15 Jan 2014 15:04:52 +0000
Subject: [PATCH] HACK! xen/gnttab: make gnttab_end_foreign_access() more useful

This is UNTESTED and is an example of the sort of change I'm looking
for.

Freeing the page in gnttab_end_foreign_access() means it cannot be
used where the pages are freed in some other way (e.g., as part of a
kfree_skb()).

Leave the freeing of the page to the caller.  If the page still has
foreign users, take an additional reference before adding it to the
deferred list.

Hack all the users of the call to do something resembling the right
thing.  Not quite sure on the blkfront changes.
---
 drivers/block/xen-blkfront.c    |   22 +++++++++++++---------
 drivers/char/tpm/xen-tpmfront.c |    3 +--
 drivers/pci/xen-pcifront.c      |    3 +--
 drivers/xen/grant-table.c       |   19 +++++++++++--------
 include/xen/grant_table.h       |   11 ++++++-----
 5 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..a586496 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -504,7 +504,7 @@ static void gnttab_handle_deferred(unsigned long unused)
 			if (entry->page) {
 				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
 					 entry->ref, page_to_pfn(entry->page));
-				__free_page(entry->page);
+				put_page(entry->page);
 			} else
 				pr_info("freeing g.e. %#x\n", entry->ref);
 			kfree(entry);
@@ -555,15 +555,18 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
 }

 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
-			       unsigned long page)
+			       struct page *page)
 {
-	if (gnttab_end_foreign_access_ref(ref, readonly)) {
+	if (gnttab_end_foreign_access_ref(ref, readonly))
 		put_free_entry(ref);
-		if (page != 0)
-			free_page(page);
-	} else
-		gnttab_add_deferred(ref, readonly,
-				    page ? virt_to_page(page) : NULL);
+	else {
+		/*
+		 * Take a reference to the page so it's not freed
+		 * while the foreign domain still has access to it.
+		 */
+		get_page(page);
+		gnttab_add_deferred(ref, readonly, page);
+	}
 }
 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access);

diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..ffa3ce6 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -91,13 +91,14 @@ bool gnttab_trans_grants_available(void);
 int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);

 /*
- * Eventually end access through the given grant reference, and once that
- * access has been ended, free the given page too.  Access will be ended
- * immediately iff the grant entry is not in use, otherwise it will happen
- * some time later.  page may be 0, in which case no freeing will occur.
+ * Eventually end access through the given grant reference.  If the
+ * page is still in use, an additional reference is taken and released
+ * when access has ended.  Access will be ended immediately iff the
+ * grant entry is not in use; otherwise it will happen some time
+ * later.
  */
 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
-			       unsigned long page);
+			       struct page *page);

 int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..45a2a01 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -931,14 +931,16 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (!list_empty(&info->grants)) {
 		list_for_each_entry_safe(persistent_gnt, n,
 		                         &info->grants, node) {
+			struct page *page = pfn_to_page(persistent_gnt->pfn);
+
 			list_del(&persistent_gnt->node);
 			if (persistent_gnt->gref != GRANT_INVALID_REF) {
 				gnttab_end_foreign_access(persistent_gnt->gref,
-				                          0, 0UL);
+				                          0, page);
 				info->persistent_gnts_c--;
 			}
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}
 	}
@@ -970,10 +972,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		       info->shadow[i].req.u.indirect.nr_segments :
 		       info->shadow[i].req.u.rw.nr_segments;
 		for (j = 0; j < segs; j++) {
+			struct page *page;
+
 			persistent_gnt = info->shadow[i].grants_used[j];
-			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+			page = pfn_to_page(persistent_gnt->pfn);
+			gnttab_end_foreign_access(persistent_gnt->gref, 0, page);
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}

@@ -1010,10 +1015,11 @@ free_shadow:
 	/* Free resources associated with old device channel. */
 	if (info->ring_ref != GRANT_INVALID_REF) {
 		gnttab_end_foreign_access(info->ring_ref, 0,
-					  (unsigned long)info->ring.sring);
+					  virt_to_page(info->ring.sring));
 		info->ring_ref = GRANT_INVALID_REF;
-		info->ring.sring = NULL;
 	}
+	free_page((unsigned long)info->ring.sring);
+	info->ring.sring = NULL;
 	if (info->irq)
 		unbind_from_irqhandler(info->irq, info);
 	info->evtchn = info->irq = 0;
@@ -1053,7 +1059,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 	}
 	/* Add the persistent grant into the list of free grants */
 	for (i = 0; i < nseg; i++) {
-		if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
+		if (gnttab_end_foreign_access_ref(s->grants_used[i]->gref, 0)) {
 			/*
 			 * If the grant is still mapped by the backend (the
 			 * backend has chosen to make this grant persistent)
@@ -1072,14 +1078,13 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 			 * so it will not be picked again unless we run out of
 			 * persistent grants.
 			 */
-			gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
 			s->grants_used[i]->gref = GRANT_INVALID_REF;
 			list_add_tail(&s->grants_used[i]->node, &info->grants);
 		}
 	}
 	if (s->req.operation == BLKIF_OP_INDIRECT) {
 		for (i = 0; i < INDIRECT_GREFS(nseg); i++) {
-			if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
+			if (gnttab_end_foreign_access_ref(s->indirect_grants[i]->gref, 0)) {
 				if (!info->feature_persistent)
 					pr_alert_ratelimited("backed has not unmapped grant: %u\n",
 							     s->indirect_grants[i]->gref);
@@ -1088,7 +1093,6 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 			} else {
 				struct page *indirect_page;

-				gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
 				/*
 				 * Add the used indirect page back to the list of
 				 * available pages for indirect grefs.
diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
index c8ff4df..00d1132 100644
--- a/drivers/char/tpm/xen-tpmfront.c
+++ b/drivers/char/tpm/xen-tpmfront.c
@@ -315,8 +315,7 @@ static void ring_free(struct tpm_private *priv)
 	if (priv->ring_ref)
 		gnttab_end_foreign_access(priv->ring_ref, 0,
 				(unsigned long)priv->shr);
-	else
-		free_page((unsigned long)priv->shr);
+	free_page((unsigned long)priv->shr);

 	if (priv->chip && priv->chip->vendor.irq)
 		unbind_from_irqhandler(priv->chip->vendor.irq, priv);
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index f7197a7..253a129 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -759,8 +759,7 @@ static void free_pdev(struct pcifront_device *pdev)
 	if (pdev->gnt_ref != INVALID_GRANT_REF)
 		gnttab_end_foreign_access(pdev->gnt_ref, 0 /* r/w page */,
 					  (unsigned long)pdev->sh_info);
-	else
-		free_page((unsigned long)pdev->sh_info);
+	free_page((unsigned long)pdev->sh_info);

 	dev_set_drvdata(&pdev->xdev->dev, NULL);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:27:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SNj-00044o-BH; Wed, 15 Jan 2014 15:27:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3SNh-00044e-LM
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:27:49 +0000
Received: from [85.158.139.211:42353] by server-8.bemta-5.messagelabs.com id
	F3/D7-29838-4F8A6D25; Wed, 15 Jan 2014 15:27:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389798846!9889087!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15134 invoked from network); 15 Jan 2014 15:14:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:14:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="90995245"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:14:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:14:00 -0500
Message-ID: <52D6A5B7.40503@citrix.com>
Date: Wed, 15 Jan 2014 15:13:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: annie li <annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
	<52D67677.4050407@citrix.com> <52D69864.9030207@oracle.com>
In-Reply-To: <52D69864.9030207@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 14:17, annie li wrote:
> 
> I am thinking of two ways, and they can be implemented in new patches.
> 1. If gnttab_end_foreign_access_ref succeeds, call kfree_skb to free
> the skb. Otherwise, use gnttab_end_foreign_access to release the ref
> and pages.
> 2. Add a deferred-release mechanism, like the one in
> gnttab_end_foreign_access, to gnttab_end_foreign_access_ref.
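The first option above can be sketched as a small user-space model. This is toy code, not kernel API: `toy_end_access_ref` and `toy_release` are hypothetical stand-ins for gnttab_end_foreign_access_ref()/gnttab_end_foreign_access() and the kfree_skb() path.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of option 1.  toy_end_access_ref() stands in for
 * gnttab_end_foreign_access_ref(): it succeeds only when the backend
 * no longer maps the grant.
 */
struct toy_grant {
	bool mapped_by_backend;	/* backend still maps the page */
	bool freed;		/* skb freed immediately (kfree_skb) */
	bool deferred;		/* handed to the deferred-release path */
};

static bool toy_end_access_ref(const struct toy_grant *g)
{
	return !g->mapped_by_backend;
}

/* Free at once when the ref could be ended, otherwise defer. */
static void toy_release(struct toy_grant *g)
{
	if (toy_end_access_ref(g))
		g->freed = true;	/* kfree_skb() path */
	else
		g->deferred = true;	/* gnttab_end_foreign_access() path */
}
```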

Something like the following (untested!) patch is what I'm thinking of.

Fixups to existing drivers (except netfront) are included but may not be
quite correct.

8<----------
>From 76c254c8e020f4427e8f37c867622f0bfd5ac85f Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 15 Jan 2014 15:04:52 +0000
Subject: [PATCH] HACK! xen/gnttab: make gnttab_end_foreign_access() more useful

This is UNTESTED and is an example of the sort of change I'm looking
for.

Freeing the page in gnttab_end_foreign_access() means it cannot be
used where the pages are freed in some other way (e.g., as part of a
kfree_skb()).

Leave the freeing of the page to the caller.  If the page still has
foreign users, take an additional reference before adding it to the
deferred list.

Hack all the users of the call to do something resembling the right
thing.  Not quite sure on the blkfront changes.
---
 drivers/block/xen-blkfront.c    |   22 +++++++++++++---------
 drivers/char/tpm/xen-tpmfront.c |    3 +--
 drivers/pci/xen-pcifront.c      |    3 +--
 drivers/xen/grant-table.c       |   19 +++++++++++--------
 include/xen/grant_table.h       |   11 ++++++-----
 5 files changed, 32 insertions(+), 26 deletions(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..a586496 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -504,7 +504,7 @@ static void gnttab_handle_deferred(unsigned long unused)
 			if (entry->page) {
 				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
 					 entry->ref, page_to_pfn(entry->page));
-				__free_page(entry->page);
+				put_page(entry->page);
 			} else
 				pr_info("freeing g.e. %#x\n", entry->ref);
 			kfree(entry);
@@ -555,15 +555,18 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
 }

 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
-			       unsigned long page)
+			       struct page *page)
 {
-	if (gnttab_end_foreign_access_ref(ref, readonly)) {
+	if (gnttab_end_foreign_access_ref(ref, readonly))
 		put_free_entry(ref);
-		if (page != 0)
-			free_page(page);
-	} else
-		gnttab_add_deferred(ref, readonly,
-				    page ? virt_to_page(page) : NULL);
+	else {
+		/*
+		 * Take a reference to the page so it's not freed
+		 * while the foreign domain still has access to it.
+		 */
+		get_page(page);
+		gnttab_add_deferred(ref, readonly, page);
+	}
 }
 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access);

diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..ffa3ce6 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -91,13 +91,14 @@ bool gnttab_trans_grants_available(void);
 int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);

 /*
- * Eventually end access through the given grant reference, and once that
- * access has been ended, free the given page too.  Access will be ended
- * immediately iff the grant entry is not in use, otherwise it will happen
- * some time later.  page may be 0, in which case no freeing will occur.
+ * Eventually end access through the given grant reference; if the
+ * page is still in use, an additional reference is taken and released
+ * when access has ended.  Access will be ended immediately iff the
+ * grant entry is not in use, otherwise it will happen some time
+ * later.
  */
 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
-			       unsigned long page);
+			       struct page *page);

 int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..45a2a01 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -931,14 +931,16 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (!list_empty(&info->grants)) {
 		list_for_each_entry_safe(persistent_gnt, n,
 		                         &info->grants, node) {
+			struct page *page = pfn_to_page(persistent_gnt->pfn);
+
 			list_del(&persistent_gnt->node);
 			if (persistent_gnt->gref != GRANT_INVALID_REF) {
 				gnttab_end_foreign_access(persistent_gnt->gref,
-				                          0, 0UL);
+				                          0, page);
 				info->persistent_gnts_c--;
 			}
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}
 	}
@@ -970,10 +972,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		       info->shadow[i].req.u.indirect.nr_segments :
 		       info->shadow[i].req.u.rw.nr_segments;
 		for (j = 0; j < segs; j++) {
+			struct page *page;
+
 			persistent_gnt = info->shadow[i].grants_used[j];
-			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+			page = pfn_to_page(persistent_gnt->pfn);
+			gnttab_end_foreign_access(persistent_gnt->gref, 0, page);
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}

@@ -1010,10 +1015,11 @@ free_shadow:
 	/* Free resources associated with old device channel. */
 	if (info->ring_ref != GRANT_INVALID_REF) {
 		gnttab_end_foreign_access(info->ring_ref, 0,
-					  (unsigned long)info->ring.sring);
+					  virt_to_page(info->ring.sring));
 		info->ring_ref = GRANT_INVALID_REF;
 		info->ring.sring = NULL;
 	}
+	free_page((unsigned long)info->ring.sring);
 	if (info->irq)
 		unbind_from_irqhandler(info->irq, info);
 	info->evtchn = info->irq = 0;
@@ -1053,7 +1059,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 	}
 	/* Add the persistent grant into the list of free grants */
 	for (i = 0; i < nseg; i++) {
-		if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
+		if (gnttab_end_foreign_access_ref(s->grants_used[i]->gref, 0)) {
 			/*
 			 * If the grant is still mapped by the backend (the
 			 * backend has chosen to make this grant persistent)
@@ -1072,14 +1078,13 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 			 * so it will not be picked again unless we run out of
 			 * persistent grants.
 			 */
-			gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
 			s->grants_used[i]->gref = GRANT_INVALID_REF;
 			list_add_tail(&s->grants_used[i]->node, &info->grants);
 		}
 	}
 	if (s->req.operation == BLKIF_OP_INDIRECT) {
 		for (i = 0; i < INDIRECT_GREFS(nseg); i++) {
-			if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
+			if (gnttab_end_foreign_access_ref(s->indirect_grants[i]->gref, 0)) {
 				if (!info->feature_persistent)
 					pr_alert_ratelimited("backed has not unmapped grant: %u\n",
 							     s->indirect_grants[i]->gref);
@@ -1088,7 +1093,6 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
 			} else {
 				struct page *indirect_page;

-				gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
 				/*
 				 * Add the used indirect page back to the list of
 				 * available pages for indirect grefs.
diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
index c8ff4df..00d1132 100644
--- a/drivers/char/tpm/xen-tpmfront.c
+++ b/drivers/char/tpm/xen-tpmfront.c
@@ -315,8 +315,7 @@ static void ring_free(struct tpm_private *priv)
 	if (priv->ring_ref)
 		gnttab_end_foreign_access(priv->ring_ref, 0,
 				(unsigned long)priv->shr);
-	else
-		free_page((unsigned long)priv->shr);
+	free_page((unsigned long)priv->shr);

 	if (priv->chip && priv->chip->vendor.irq)
 		unbind_from_irqhandler(priv->chip->vendor.irq, priv);
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index f7197a7..253a129 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -759,8 +759,7 @@ static void free_pdev(struct pcifront_device *pdev)
 	if (pdev->gnt_ref != INVALID_GRANT_REF)
 		gnttab_end_foreign_access(pdev->gnt_ref, 0 /* r/w page */,
 					  (unsigned long)pdev->sh_info);
-	else
-		free_page((unsigned long)pdev->sh_info);
+	free_page((unsigned long)pdev->sh_info);

 	dev_set_drvdata(&pdev->xdev->dev, NULL);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SRN-0004bU-3P; Wed, 15 Jan 2014 15:31:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3SRL-0004bO-Pb
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:31:35 +0000
Received: from [85.158.137.68:64015] by server-5.bemta-3.messagelabs.com id
	40/80-25188-6D9A6D25; Wed, 15 Jan 2014 15:31:34 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389799889!9377617!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16171 invoked from network); 15 Jan 2014 15:31:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:31:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93122353"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 15:31:28 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:31:28 -0500
Message-ID: <52D6A9CF.8020202@citrix.com>
Date: Wed, 15 Jan 2014 15:31:27 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: ian.jackson@eu.citrx.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/14 18:33, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> After the split eventchannel feature was supported by netback/netfront,
> "xl network-list" does not show the event channels correctly. Add
> tx-/rx-evt-ch to show the tx/rx event channels correctly.
[...]
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -488,7 +488,8 @@ libxl_nicinfo = Struct("nicinfo", [
>      ("frontend_id", uint32),
>      ("devid", libxl_devid),
>      ("state", integer),
> -    ("evtch", integer),
> +    ("evtch_tx", integer),
> +    ("evtch_rx", integer),

Does this break libxl's API or ABI?

David
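For context, replacing one integer field with two is the classic ABI break: it grows the struct and shifts the offset of every later member, so a consumer compiled against the old layout reads garbage. A minimal illustration with hypothetical before/after layouts (not the real libxl_nicinfo definition):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical old layout: a single event-channel field. */
struct nicinfo_old {
	int state;
	int evtch;
	int rref_tx;	/* illustrative trailing member */
};

/* Hypothetical new layout: the single field becomes two, growing the
 * struct and moving rref_tx to a different offset. */
struct nicinfo_new {
	int state;
	int evtch_tx;
	int evtch_rx;
	int rref_tx;
};
```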

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SUj-0004kV-Np; Wed, 15 Jan 2014 15:35:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3SUi-0004kM-Cs
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:35:04 +0000
Received: from [85.158.139.211:11489] by server-4.bemta-5.messagelabs.com id
	EA/48-26791-7AAA6D25; Wed, 15 Jan 2014 15:35:03 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389800101!9949256!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15354 invoked from network); 15 Jan 2014 15:35:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:35:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93123492"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 15:35:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:35:00 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3SUe-0003Fb-Ij; Wed, 15 Jan 2014 15:35:00 +0000
Message-ID: <52D6AAA4.7090108@citrix.com>
Date: Wed, 15 Jan 2014 15:35:00 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: annie li <annie.li@oracle.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140115100743.GG5698@zion.uk.xensource.com>
	<52D66ADF.9070401@citrix.com> <52D697FB.3000304@oracle.com>
In-Reply-To: <52D697FB.3000304@oracle.com>
X-DLP: MIA2
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 14:15, annie li wrote:
>
> On 2014-1-15 19:02, Andrew Bennieston wrote:
>> On 15/01/14 10:07, Wei Liu wrote:
>>> On Fri, Jan 10, 2014 at 06:48:38AM +0800, Annie Li wrote:
>>>> Current netfront only grants pages for grant copy, not for grant
>>>> transfer, so
>>>> remove corresponding transfer code and add receiving copy code in
>>>> xennet_release_rx_bufs.
>>>>
>>>
>>> This path seldom gets called -- not that many people unload xen-netfront
>>> driver. If Annie has tested this patch and it works as expected I think
>>> it's fine.
>>>
>> In XenServer we have seen a number of cases where unplugging and
>> replugging VIFs results in leakage of grant references, eventually
>> leading to a case where you cannot plug a VIF (after ~ 400 such
>> cycles)...
>>
>> It's worth pointing out, as far as this patch is concerned, that
>> gnttab_end_foreign_access() can fail,
>
> Just like what Wei mentioned, it is gnttab_end_foreign_access_ref here,
> right?
Yes, sorry - I forgot to type the _ref part!
>
>> which is not taken into account here.
>
> Good point, gnttab_end_foreign_access_ref fails for a grant which is still in use.
>
> Thanks
> Annie
>>
>> Andrew.
>>
>>> I'm not netfront maintainer but I'm happy to add
>>> Acked-by: Wei Liu <wei.liu2@citrix.com>
>>> if Annie confirms she's tested this patch.
>>>
>>> Wei.
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>>
>>
>
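The leak described above follows directly from ignoring that return value: each grant still mapped by the backend, whose ref is neither deferred nor retried, is gone for good. A toy user-space model (illustrative names only, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

static int leaked_refs;		/* refs never returned to the pool */

/* Stand-in for gnttab_end_foreign_access_ref(): fails while the
 * backend still maps the grant. */
static bool toy_end_access_ref(bool backend_still_maps)
{
	return !backend_still_maps;
}

/*
 * Buggy caller: when ending access fails it neither defers the ref
 * nor retries, so the ref leaks.  Enough unplug/replug cycles like
 * this exhaust the pool, matching the plug failures seen after many
 * VIF cycles.
 */
static void release_ignoring_failure(bool backend_still_maps)
{
	if (!toy_end_access_ref(backend_still_maps))
		leaked_refs++;
}
```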


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:51:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Sk8-0005h1-9U; Wed, 15 Jan 2014 15:51:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3Sk7-0005gw-4s
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:50:59 +0000
Received: from [85.158.137.68:40890] by server-9.bemta-3.messagelabs.com id
	31/AB-13104-16EA6D25; Wed, 15 Jan 2014 15:50:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389801050!9286448!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 631 invoked from network); 15 Jan 2014 15:50:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:50:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91010922"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 15:50:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 10:50:49 -0500
Message-ID: <1389801048.3793.74.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 15 Jan 2014 15:50:48 +0000
In-Reply-To: <52D6A9CF.8020202@citrix.com>
References: <1389724428-3228-1-git-send-email-Annie.li@oracle.com>
	<52D6A9CF.8020202@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, Annie Li <Annie.li@oracle.com>,
	wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 15:31 +0000, David Vrabel wrote:
> On 14/01/14 18:33, Annie Li wrote:
> > From: Annie Li <annie.li@oracle.com>
> > 
> > After split eventchannel feature was supported by netback/netfront,
> > "xl network-list" does not show eventchannel correctly. Add tx-/rx-evt-ch
> > to show tx/rx eventchannel correctly.
> [...]
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -488,7 +488,8 @@ libxl_nicinfo = Struct("nicinfo", [
> >      ("frontend_id", uint32),
> >      ("devid", libxl_devid),
> >      ("state", integer),
> > -    ("evtch", integer),
> > +    ("evtch_tx", integer),
> > +    ("evtch_rx", integer),
> 
> Does this break libxl's API or ABI?

Both, but we only guarantee API stability, not ABI stability (IOW we'd
just bump the SONAME).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 15:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 15:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Snb-0005n5-Tz; Wed, 15 Jan 2014 15:54:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3Sna-0005mx-Bf
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 15:54:34 +0000
Received: from [85.158.139.211:39198] by server-3.bemta-5.messagelabs.com id
	4B/C4-04773-93FA6D25; Wed, 15 Jan 2014 15:54:33 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389801271!7242762!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4837 invoked from network); 15 Jan 2014 15:54:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 15:54:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93130551"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 15:54:24 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 15 Jan 2014 10:54:23 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Wed, 15 Jan 2014 16:54:05 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [PATCH net-next] xen-netfront: add support for
	IPv6 offloads
Thread-Index: AQHPEgUQuhiFQyjN8k2YNGkK6y3A05qF1zwAgAAYY+A=
Date: Wed, 15 Jan 2014 15:54:05 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208071@AMSPEX01CL01.citrite.net>
References: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
	<52D6A868.6040707@citrix.com>
In-Reply-To: <52D6A868.6040707@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 15 January 2014 15:25
> To: Paul Durrant
> Cc: netdev@vger.kernel.org; xen-devel@lists.xen.org; Boris Ostrovsky; David
> Vrabel
> Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for
> IPv6 offloads
> 
> On 15/01/14 15:18, Paul Durrant wrote:
> > This patch adds support for IPv6 checksum offload and GSO when those
> > features are available in the backend.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> > ---
> >  drivers/net/xen-netfront.c |   48
> +++++++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 43 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index c41537b..321759f 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -617,7 +617,9 @@ static int xennet_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
> >  		tx->flags |= XEN_NETTXF_extra_info;
> >
> >  		gso->u.gso.size = skb_shinfo(skb)->gso_size;
> > -		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
> > +		gso->u.gso.type = (skb_shinfo(skb)->gso_type &
> SKB_GSO_TCPV6) ?
> > +			XEN_NETIF_GSO_TYPE_TCPV6 :
> > +			XEN_NETIF_GSO_TYPE_TCPV4;
> >  		gso->u.gso.pad = 0;
> >  		gso->u.gso.features = 0;
> >
> > @@ -809,15 +811,18 @@ static int xennet_set_skb_gso(struct sk_buff
> *skb,
> >  		return -EINVAL;
> >  	}
> >
> > -	/* Currently only TCPv4 S.O. is supported. */
> > -	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
> > +	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
> > +	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
> >  		if (net_ratelimit())
> >  			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
> >  		return -EINVAL;
> >  	}
> >
> >  	skb_shinfo(skb)->gso_size = gso->u.gso.size;
> > -	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
> > +	skb_shinfo(skb)->gso_type =
> > +		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
> > +		SKB_GSO_TCPV4 :
> > +		SKB_GSO_TCPV6;
> >
> >  	/* Header must be checked, and gso_segs computed. */
> >  	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
> > @@ -1191,6 +1196,15 @@ static netdev_features_t
> xennet_fix_features(struct net_device *dev,
> >  			features &= ~NETIF_F_SG;
> >  	}
> >
> > +	if (features & NETIF_F_IPV6_CSUM) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-ipv6-csum-offload", "%d", &val) <
> 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_IPV6_CSUM;
> > +	}
> > +
> >  	if (features & NETIF_F_TSO) {
> >  		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> >  				 "feature-gso-tcpv4", "%d", &val) < 0)
> > @@ -1200,6 +1214,15 @@ static netdev_features_t
> xennet_fix_features(struct net_device *dev,
> >  			features &= ~NETIF_F_TSO;
> >  	}
> >
> > +	if (features & NETIF_F_TSO6) {
> > +		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
> > +				 "feature-gso-tcpv6", "%d", &val) < 0)
> > +			val = 0;
> > +
> > +		if (!val)
> > +			features &= ~NETIF_F_TSO6;
> > +	}
> > +
> >  	return features;
> >  }
> >
> > @@ -1338,7 +1361,9 @@ static struct net_device
> *xennet_create_dev(struct xenbus_device *dev)
> >  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >  				  NETIF_F_GSO_ROBUST;
> > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG |
> NETIF_F_TSO;
> > +	netdev->hw_features	= NETIF_F_SG |
> > +				  NETIF_F_IPV6_CSUM |
> > +				  NETIF_F_TSO | NETIF_F_TSO6;
> >
> >  	/*
> >           * Assume that all hw features are available for now. This set
> > @@ -1716,6 +1741,19 @@ again:
> >  		goto abort_transaction;
> >  	}
> >
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
> "%d", 1);
> 
> "%d", 1 results in a constant string.  xenbus_write() would avoid a
> transitory memory allocation.
> 
> ~Andrew
> 

This code is consistent with all the other xenbus_printf()s in the neighbourhood. Does it really matter?

  Paul

> > +	if (err) {
> > +		message = "writing feature-gso-tcpv6";
> > +		goto abort_transaction;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "feature-ipv6-csum-
> offload",
> > +			    "%d", 1);
> > +	if (err) {
> > +		message = "writing feature-ipv6-csum-offload";
> > +		goto abort_transaction;
> > +	}
> > +
> >  	err = xenbus_transaction_end(xbt, 0);
> >  	if (err) {
> >  		if (err == -EAGAIN)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:03:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3SwI-0006qH-9M; Wed, 15 Jan 2014 16:03:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3SwH-0006qB-Cq
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:03:33 +0000
Received: from [85.158.143.35:46994] by server-2.bemta-4.messagelabs.com id
	A4/10-11386-451B6D25; Wed, 15 Jan 2014 16:03:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389801812!9265280!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28330 invoked from network); 15 Jan 2014 16:03:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 16:03:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 15 Jan 2014 16:03:31 +0000
Message-Id: <52D6BF630200007800113EFB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 15 Jan 2014 16:03:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <Andrew.Cooper3@citrix.com>,
	"Paul Durrant" <Paul.Durrant@citrix.com>
References: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
	<52D6A868.6040707@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208071@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208071@AMSPEX01CL01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 16:54, Paul Durrant <Paul.Durrant@citrix.com> wrote:
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> On 15/01/14 15:18, Paul Durrant wrote:
>> > +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6", "%d", 1);
>> 
>> "%d", 1 results in a constant string.  xenbus_write() would avoid a
>> transitory memory allocation.
> 
> This code is consistent with all the other xenbus_printf()s in the 
> neighbourhood and does it really matter?

I think we should always strive to have the simplest possible code
that fulfills the purpose. And hence we shouldn't be setting further
bad precedents. (In fact I have a patch queued to replace all the
unnecessary xenbus_printf()s with xenbus_write()s on
linux-2.6.18-xen.hg, and may look into porting this to the
respective upstream components.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:10:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3T2n-0007PR-AP; Wed, 15 Jan 2014 16:10:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3T2l-0007PL-Oa
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:10:16 +0000
Received: from [85.158.139.211:16926] by server-2.bemta-5.messagelabs.com id
	0E/AE-29392-7E2B6D25; Wed, 15 Jan 2014 16:10:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389802212!9777184!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27208 invoked from network); 15 Jan 2014 16:10:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:10:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91020649"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:10:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:10:11 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3T2h-0003r7-Lc;
	Wed, 15 Jan 2014 16:10:11 +0000
Date: Wed, 15 Jan 2014 16:10:11 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140115161011.GQ5698@zion.uk.xensource.com>
References: <1389727719-21439-1-git-send-email-zoltan.kiss@citrix.com>
	<20140115103707.GI5698@zion.uk.xensource.com>
	<52D67536.4030106@citrix.com>
	<20140115144519.GO5698@zion.uk.xensource.com>
	<52D6A0B9.2030204@citrix.com>
	<20140115145951.GP5698@zion.uk.xensource.com>
	<52D6A45C.1060705@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D6A45C.1060705@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 03:08:12PM +0000, Zoltan Kiss wrote:
> On 15/01/14 14:59, Wei Liu wrote:
> >On Wed, Jan 15, 2014 at 02:52:41PM +0000, Zoltan Kiss wrote:
> >>On 15/01/14 14:45, Wei Liu wrote:
> >>>>>>>>The recent patch to fix receive side flow control (11b57f) solved
> >>>>>>>>the spinning thread problem; however, it caused another one. The
> >>>>>>>>receive side can stall if:
> >>>>>>>>- xenvif_rx_action sets rx_queue_stopped to false
> >>>>>>>>- interrupt happens, and sets rx_event to true
> >>>>>>>>- then xenvif_kthread sets rx_event to false
> >>>>>>>>
> >>>>>>>
> >>>>>>>If you mean "rx_work_todo" returns false.
> >>>>>>>
> >>>>>>>In this case
> >>>>>>>
> >>>>>>>(!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) || vif->rx_event;
> >>>>>>>
> >>>>>>>can still be true, can't it?
> >>>>>Sorry, I should have written rx_queue_stopped to true
> >>>>>
> >>>In this case, if rx_queue_stopped is true, then we're expecting frontend
> >>>to notify us, right?
> >>>
> >>>rx_queue_stopped is set to true if we cannot make any progress to queue
> >>>packet into the ring. In that situation we can expect frontend will send
> >>>notification to backend after it goes through the backlog in the ring.
> >>>That means rx_event is set to true, and rx_work_todo is true again. So
> >>>the ring is actually not stalled in this case as well. Did I miss
> >>>something?
> >>>
> >>
> >>Yes, we expect the guest to notify us, and it does, and we set
> >>rx_event to true (see the second point), but then the thread sets it
> >>to false (see the third point). Talking with Paul, another solution
> >>could be to set rx_event to false before calling xenvif_rx_action,
> >>but using rx_last_skb_slots makes it quicker for the thread to see
> >>that it doesn't have to do anything.
> >>
> >
> >OK, this is a better explanation. So actually there's no bug in the
> >original implementation and your patch is sort of an improvement.
> >
> >Could you send a new version of this patch with relevant information in
> >commit message? Talking to people offline is faster, but I would like to
> >have public discussion and relevant information archived in a searchable
> >form. Thanks.
> 
> No, there is a bug in the original implementation:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] interrupt happens, and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo never returns true anymore
> 

I see what you mean. The interrupt is "lost", that's why it's stalled.

> I will update the explanation and send in the patch again.
> 

Thanks.

Wei.

> Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:11:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3T49-0007XU-KP; Wed, 15 Jan 2014 16:11:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W3T47-0007X5-Il; Wed, 15 Jan 2014 16:11:39 +0000
Received: from [193.109.254.147:49997] by server-8.bemta-14.messagelabs.com id
	D7/4F-30921-A33B6D25; Wed, 15 Jan 2014 16:11:38 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389802297!11013005!1
X-Originating-IP: [209.85.215.41]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14088 invoked from network); 15 Jan 2014 16:11:37 -0000
Received: from mail-la0-f41.google.com (HELO mail-la0-f41.google.com)
	(209.85.215.41)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:11:37 -0000
Received: by mail-la0-f41.google.com with SMTP id mc6so1572050lab.14
	for <multiple recipients>; Wed, 15 Jan 2014 08:11:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=l1njGk/LszfPhbnS+kcUlN+EoCsuOfdEK52e/qTjze8=;
	b=nJy5y0PRviKm3yt+d32kCo7/310hyOev4eP9pX1WagCe3rxTs+4AsHxQM5eshdcdFG
	a5fL4mXD4AMyVx4spbKwT9ZwqnX75Y7FJtlsfnhCwtXBwCqqHD/sF8S5pRPpZRL3E1RY
	h6Si/2EX1LHl+bbuwu1tdHvpFoKxquCWWxHb70buE90/ZGgBEN1e4jWMcHIpYTMjdA8Y
	tnvusffVt4AHc3yqxpyOiUHWeMkutIR/mU6nY4Ac0qVAHpdWEhAhsKCgRn3T+f55yPoN
	w/L67hS3R70nVTBPKQr68uYLE16S65z7adqzJnl2MGtCdZc7Q9dwj5lV+A+P3H/BtyZ1
	pzdQ==
MIME-Version: 1.0
X-Received: by 10.112.139.72 with SMTP id qw8mr1965072lbb.16.1389802292853;
	Wed, 15 Jan 2014 08:11:32 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Wed, 15 Jan 2014 08:11:32 -0800 (PST)
Date: Wed, 15 Jan 2014 11:11:32 -0500
X-Google-Sender-Auth: 8TgF-0cOYRoyEzJ5HwQzJv2uR-Q
Message-ID: <CAHehzX2MTXozFjEf7stj5m660ybojuPGB4DfSju32HjLjM4=CQ@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-devel@lists.xen.org, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] Reminder: Next Xen Project Document Day is Monday,
	January 27
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please mark your calendars: the next Xen Project Document Day will be
held on Monday, January 27 (the final Monday of the month).

Xen Project Document Day is a day to help improve overall Xen
documentation, with emphasis on the Xen Project Wiki.

Never participated in a Document Day before?  All the info you'll need is here:

http://wiki.xenproject.org/wiki/Xen_Document_Days

If you get a few moments in the next week and a half, please take a
look at the current TODO list:

http://wiki.xen.org/wiki/Xen_Document_Days/TODO

Add any documentation deficiencies you have come across while working
with Xen.  Is there a subject you wrestled with?  That's a perfect
opportunity for you to help shape the documentation into something
more useful for the next person who needs it!

So please think about how you can join in the action.  If you haven't
requested to be made a Wiki editor, save time and do it now so you are
ready to go on Document Day.  Just fill out the form below:

http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html

See you in #xendocs on January 27!

Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGD-0000Ep-Ll; Wed, 15 Jan 2014 16:24:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TGB-0000Dw-Bt
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:07 +0000
Received: from [193.109.254.147:17722] by server-9.bemta-14.messagelabs.com id
	9E/7D-13957-626B6D25; Wed, 15 Jan 2014 16:24:06 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389803044!11093443!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10278 invoked from network); 15 Jan 2014 16:24:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93145200"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 16:24:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:24:00 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG4-00048I-Fk;
	Wed, 15 Jan 2014 16:24:00 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG4-0008IA-7x; Wed, 15 Jan 2014 16:24:00 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:24 +0000
Message-ID: <1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for multiple
	queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.
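[Editorial note: a sketch of the two XenStore layouts this patch writes. The key names and the queue-%u prefix are taken from the diff below; <nodename>, <ref> and <evtchn> are placeholders.]

```
# Single queue (flat, as before):
<nodename>/tx-ring-ref = "<ref>"
<nodename>/rx-ring-ref = "<ref>"
<nodename>/event-channel = "<evtchn>"          (shared channel)
  ...or event-channel-tx / event-channel-rx    (split channels)

# Multiple queues (hierarchical):
<nodename>/multi-queue-num-queues = "2"
<nodename>/queue-0/tx-ring-ref = "<ref>"
<nodename>/queue-0/rx-ring-ref = "<ref>"
<nodename>/queue-0/event-channel-tx = "<evtchn>"
<nodename>/queue-0/event-channel-rx = "<evtchn>"
<nodename>/queue-1/...
```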

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb rxhash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  167 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 130 insertions(+), 37 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 508ea96..9b08da5 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,10 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues = 16;
+module_param(xennet_max_queues, uint, 0644);
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -556,8 +560,19 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 
 static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1)
+		queue_idx = 0;
+	else {
+		hash = skb_get_rxhash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1361,7 +1376,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1731,6 +1746,89 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or under queue-specific subkeys for
+	 * multiple queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->number);
+	}
+	else
+		path = (char *)dev->nodename;
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	}
+	else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0)
+		max_queues = 1;
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1759,12 +1864,12 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1800,49 +1905,36 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
+	else {
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
-		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1879,8 +1971,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGD-0000Ep-Ll; Wed, 15 Jan 2014 16:24:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TGB-0000Dw-Bt
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:07 +0000
Received: from [193.109.254.147:17722] by server-9.bemta-14.messagelabs.com id
	9E/7D-13957-626B6D25; Wed, 15 Jan 2014 16:24:06 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389803044!11093443!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10278 invoked from network); 15 Jan 2014 16:24:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93145200"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 16:24:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:24:00 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG4-00048I-Fk;
	Wed, 15 Jan 2014 16:24:00 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG4-0008IA-7x; Wed, 15 Jan 2014 16:24:00 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:24 +0000
Message-ID: <1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for multiple
	queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb rxhash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  167 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 130 insertions(+), 37 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 508ea96..9b08da5 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,10 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues = 16;
+module_param(xennet_max_queues, uint, 0644);
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -556,8 +560,19 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 
 static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1)
+		queue_idx = 0;
+	else {
+		hash = skb_get_rxhash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1361,7 +1376,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1731,6 +1746,89 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or under per-queue subkeys for multiple
+	 * queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->number);
+	}
+	else
+		path = (char *)dev->nodename;
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	}
+	else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0)
+		max_queues = 1;
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1759,12 +1864,12 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1800,49 +1905,36 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
+	else {
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
-		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1879,8 +1971,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TG8-0000DK-U2; Wed, 15 Jan 2014 16:24:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TG7-0000DD-5u
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:03 +0000
Received: from [85.158.137.68:31423] by server-14.bemta-3.messagelabs.com id
	79/BC-06105-226B6D25; Wed, 15 Jan 2014 16:24:02 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389803038!9350268!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12809 invoked from network); 15 Jan 2014 16:23:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:23:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93145179"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 16:23:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:56 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG0-00047r-A0;
	Wed, 15 Jan 2014 16:23:56 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG0-0008Hv-32; Wed, 15 Jan 2014 16:23:56 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:21 +0000
Message-ID: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific data
	into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_rxhash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   66 +++--
 drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
 drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
 drivers/net/xen-netback/xenbus.c    |   89 ++++--
 4 files changed, 556 insertions(+), 423 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c47794b..54d2eeb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,19 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+struct xenvif;
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int number; /* Queue number, 0-based */
+	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,7 +142,7 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 
@@ -150,14 +152,27 @@ struct xenvif {
 	 */
 	RING_IDX rx_req_cons_peek;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -171,12 +186,9 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
@@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_rx_ring_full(struct xenvif *vif);
+int xenvif_rx_ring_full(struct xenvif_queue *queue);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Queue an SKB for transmission to the frontend */
-void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
+void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb);
 /* Notify xenvif that ring now has space to send an skb to the frontend */
-void xenvif_notify_tx_completion(struct xenvif *vif);
+void xenvif_notify_tx_completion(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
@@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
 /* Returns number of ring slots required to send an skb to the frontend */
 unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
-void xenvif_rx_action(struct xenvif *vif);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
+void xenvif_rx_action(struct xenvif_queue *queue);
 
 int xenvif_kthread(void *data);
 
+int xenvif_poll(struct napi_struct *napi, int budget);
+
+void xenvif_carrier_on(struct xenvif *vif);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index fff8cdd..0113324 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,32 +41,50 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
+{
+	netif_tx_wake_queue(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	netif_tx_stop_queue(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
+static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
+{
+	return netif_tx_queue_stopped(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
 }
 
-static int xenvif_rx_schedulable(struct xenvif *vif)
+static int xenvif_rx_schedulable(struct xenvif_queue *queue)
 {
-	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
+	return xenvif_schedulable(queue->vif) && !xenvif_rx_ring_full(queue);
 }
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (xenvif_rx_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	if (xenvif_rx_schedulable(queue))
+		xenvif_wake_queue(queue);
 
 	return IRQ_HANDLED;
 }
@@ -119,27 +136,56 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	}
+	else {
+		/* Use skb_get_rxhash to obtain an L4 hash if available */
+		hash = skb_get_rxhash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	u16 queue_index = 0;
+	struct xenvif_queue *queue = NULL;
 
 	BUG_ON(skb->dev != dev);
 
-	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL)
+	/* Drop the packet if the queues are not set up */
+	if (vif->num_queues < 1 || vif->queues == NULL)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &vif->queues[queue_index];
+
+	/* Drop the packet if queue is not ready */
+	if (queue->task == NULL)
 		goto drop;
 
 	/* Drop the packet if the target domain has no receive buffers. */
-	if (!xenvif_rx_schedulable(vif))
+	if (!xenvif_rx_schedulable(queue))
 		goto drop;
 
 	/* Reserve ring slots for the worst-case number of fragments. */
-	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
+	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
 
-	if (vif->can_queue && xenvif_must_stop_queue(vif))
-		netif_stop_queue(dev);
+	if (vif->can_queue && xenvif_must_stop_queue(queue))
+		xenvif_stop_queue(queue);
 
-	xenvif_queue_tx_skb(vif, skb);
+	xenvif_queue_tx_skb(queue, skb);
 
 	return NETDEV_TX_OK;
 
@@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
-void xenvif_notify_tx_completion(struct xenvif *vif)
+void xenvif_notify_tx_completion(struct xenvif_queue *queue)
 {
-	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	if (xenvif_queue_stopped(queue) && xenvif_rx_schedulable(queue))
+		xenvif_wake_queue(queue);
 }
 
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
@@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -287,6 +343,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7842555..586e741 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a minimum size for the linear area to avoid lots of
@@ -132,10 +132,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
 static int max_required_rx_slots(struct xenvif *vif)
@@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
 	return max;
 }
 
-int xenvif_rx_ring_full(struct xenvif *vif)
+int xenvif_rx_ring_full(struct xenvif_queue *queue)
 {
-	RING_IDX peek   = vif->rx_req_cons_peek;
-	RING_IDX needed = max_required_rx_slots(vif);
+	RING_IDX peek   = queue->rx_req_cons_peek;
+	RING_IDX needed = max_required_rx_slots(queue->vif);
 
-	return ((vif->rx.sring->req_prod - peek) < needed) ||
-	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
+	return ((queue->rx.sring->req_prod - peek) < needed) ||
+	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
 }
 
-int xenvif_must_stop_queue(struct xenvif *vif)
+int xenvif_must_stop_queue(struct xenvif_queue *queue)
 {
-	if (!xenvif_rx_ring_full(vif))
+	if (!xenvif_rx_ring_full(queue))
 		return 0;
 
-	vif->rx.sring->req_event = vif->rx_req_cons_peek +
-		max_required_rx_slots(vif);
+	queue->rx.sring->req_event = queue->rx_req_cons_peek +
+		max_required_rx_slots(queue->vif);
 	mb(); /* request notification /then/ check the queue */
 
-	return xenvif_rx_ring_full(vif);
+	return xenvif_rx_ring_full(queue);
 }
 
 /*
@@ -306,13 +306,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -330,7 +330,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -557,12 +558,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-static void xenvif_kick_thread(struct xenvif *vif)
+static void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-void xenvif_rx_action(struct xenvif *vif)
+void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
 	int need_to_notify = 0;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
 	count = 0;
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
-		vif = netdev_priv(skb->dev);
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		nr_frags = skb_shinfo(skb)->nr_frags;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 
 		count += nr_frags + 1;
 
@@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
 			break;
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		return;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		vif = netdev_priv(skb->dev);
-
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->vif->dev->stats.tx_bytes += skb->len;
+		queue->vif->dev->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		if (ret)
 			need_to_notify = 1;
 
-		xenvif_notify_tx_completion(vif);
+		xenvif_notify_tx_completion(queue);
 
 		npo.meta_cons += sco->meta_slots_used;
 		dev_kfree_skb(skb);
 	}
 
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 
 	/* More work to do? */
-	if (!skb_queue_empty(&vif->rx_queue))
-		xenvif_kick_thread(vif);
+	if (!skb_queue_empty(&queue->rx_queue))
+		xenvif_kick_thread(queue);
 }
 
-void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
+void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
 {
-	skb_queue_tail(&vif->rx_queue, skb);
+	skb_queue_tail(&queue->rx_queue, skb);
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return err;
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		skb_probe_transport_header(skb, 0);
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->vif->dev->stats.rx_bytes += skb->len;
+		queue->vif->dev->stats.rx_packets++;
 
 		work_done++;
 
@@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue);
+	return !skb_queue_empty(&queue->rx_queue);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
-	vif->rx_req_cons_peek = 0;
+	queue->rx_req_cons_peek = 0;
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (rx_work_todo(vif))
-			xenvif_rx_action(vif);
+		if (rx_work_todo(queue))
+			xenvif_rx_action(queue);
 
 		cond_resched();
 	}
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f035899..c3332e2 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -35,8 +36,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+	if (!be->vif->queues) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating queue structures");
+		be->vif->num_queues = 0;
+		return;
+	}
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->number = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->number);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-
 /* ** Driver Registration ** */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:21 +0000
Message-ID: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific data
	into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one
queue is configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in
preparation for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue(), which returns 0 when a single queue is configured
and otherwise uses skb_get_rxhash() to compute the queue index.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   66 +++--
 drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
 drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
 drivers/net/xen-netback/xenbus.c    |   89 ++++--
 4 files changed, 556 insertions(+), 423 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c47794b..54d2eeb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,19 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+struct xenvif;
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int number; /* Queue number, 0-based */
+	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,7 +142,7 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 
@@ -150,14 +152,27 @@ struct xenvif {
 	 */
 	RING_IDX rx_req_cons_peek;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -171,12 +186,9 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
@@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_rx_ring_full(struct xenvif *vif);
+int xenvif_rx_ring_full(struct xenvif_queue *queue);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Queue an SKB for transmission to the frontend */
-void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
+void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb);
 /* Notify xenvif that ring now has space to send an skb to the frontend */
-void xenvif_notify_tx_completion(struct xenvif *vif);
+void xenvif_notify_tx_completion(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
@@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
 /* Returns number of ring slots required to send an skb to the frontend */
 unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
-void xenvif_rx_action(struct xenvif *vif);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
+void xenvif_rx_action(struct xenvif_queue *queue);
 
 int xenvif_kthread(void *data);
 
+int xenvif_poll(struct napi_struct *napi, int budget);
+
+void xenvif_carrier_on(struct xenvif *vif);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index fff8cdd..0113324 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,32 +41,50 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
+{
+	netif_tx_wake_queue(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	netif_tx_stop_queue(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
+static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
+{
+	return netif_tx_queue_stopped(
+			netdev_get_tx_queue(queue->vif->dev, queue->number));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
 }
 
-static int xenvif_rx_schedulable(struct xenvif *vif)
+static int xenvif_rx_schedulable(struct xenvif_queue *queue)
 {
-	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
+	return xenvif_schedulable(queue->vif) && !xenvif_rx_ring_full(queue);
 }
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (xenvif_rx_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	if (xenvif_rx_schedulable(queue))
+		xenvif_wake_queue(queue);
 
 	return IRQ_HANDLED;
 }
@@ -119,27 +136,55 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_rxhash to obtain an L4 hash if available */
+		hash = skb_get_rxhash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	u16 queue_index = 0;
+	struct xenvif_queue *queue = NULL;
 
 	BUG_ON(skb->dev != dev);
 
-	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL)
+	/* Drop the packet if the queues are not set up */
+	if (vif->num_queues < 1 || vif->queues == NULL)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &vif->queues[queue_index];
+
+	/* Drop the packet if queue is not ready */
+	if (queue->task == NULL)
 		goto drop;
 
 	/* Drop the packet if the target domain has no receive buffers. */
-	if (!xenvif_rx_schedulable(vif))
+	if (!xenvif_rx_schedulable(queue))
 		goto drop;
 
 	/* Reserve ring slots for the worst-case number of fragments. */
-	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
+	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
 
-	if (vif->can_queue && xenvif_must_stop_queue(vif))
-		netif_stop_queue(dev);
+	if (vif->can_queue && xenvif_must_stop_queue(queue))
+		xenvif_stop_queue(queue);
 
-	xenvif_queue_tx_skb(vif, skb);
+	xenvif_queue_tx_skb(queue, skb);
 
 	return NETDEV_TX_OK;
 
@@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
-void xenvif_notify_tx_completion(struct xenvif *vif)
+void xenvif_notify_tx_completion(struct xenvif_queue *queue)
 {
-	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	if (xenvif_queue_stopped(queue) && xenvif_rx_schedulable(queue))
+		xenvif_wake_queue(queue);
 }
 
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
@@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -287,6 +343,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 7842555..586e741 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -132,10 +132,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
 static int max_required_rx_slots(struct xenvif *vif)
@@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
 	return max;
 }
 
-int xenvif_rx_ring_full(struct xenvif *vif)
+int xenvif_rx_ring_full(struct xenvif_queue *queue)
 {
-	RING_IDX peek   = vif->rx_req_cons_peek;
-	RING_IDX needed = max_required_rx_slots(vif);
+	RING_IDX peek   = queue->rx_req_cons_peek;
+	RING_IDX needed = max_required_rx_slots(queue->vif);
 
-	return ((vif->rx.sring->req_prod - peek) < needed) ||
-	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
+	return ((queue->rx.sring->req_prod - peek) < needed) ||
+	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
 }
 
-int xenvif_must_stop_queue(struct xenvif *vif)
+int xenvif_must_stop_queue(struct xenvif_queue *queue)
 {
-	if (!xenvif_rx_ring_full(vif))
+	if (!xenvif_rx_ring_full(queue))
 		return 0;
 
-	vif->rx.sring->req_event = vif->rx_req_cons_peek +
-		max_required_rx_slots(vif);
+	queue->rx.sring->req_event = queue->rx_req_cons_peek +
+		max_required_rx_slots(queue->vif);
 	mb(); /* request notification /then/ check the queue */
 
-	return xenvif_rx_ring_full(vif);
+	return xenvif_rx_ring_full(queue);
 }
 
 /*
@@ -306,13 +306,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -330,7 +330,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -557,12 +558,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-static void xenvif_kick_thread(struct xenvif *vif)
+static void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-void xenvif_rx_action(struct xenvif *vif)
+void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
 	int need_to_notify = 0;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
 	count = 0;
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
-		vif = netdev_priv(skb->dev);
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		nr_frags = skb_shinfo(skb)->nr_frags;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 
 		count += nr_frags + 1;
 
@@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
 			break;
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		return;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		vif = netdev_priv(skb->dev);
-
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->vif->dev->stats.tx_bytes += skb->len;
+		queue->vif->dev->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		if (ret)
 			need_to_notify = 1;
 
-		xenvif_notify_tx_completion(vif);
+		xenvif_notify_tx_completion(queue);
 
 		npo.meta_cons += sco->meta_slots_used;
 		dev_kfree_skb(skb);
 	}
 
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 
 	/* More work to do? */
-	if (!skb_queue_empty(&vif->rx_queue))
-		xenvif_kick_thread(vif);
+	if (!skb_queue_empty(&queue->rx_queue))
+		xenvif_kick_thread(queue);
 }
 
-void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
+void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
 {
-	skb_queue_tail(&vif->rx_queue, skb);
+	skb_queue_tail(&queue->rx_queue, skb);
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return err;
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		skb_probe_transport_header(skb, 0);
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->vif->dev->stats.rx_bytes += skb->len;
+		queue->vif->dev->stats.rx_packets++;
 
 		work_done++;
 
@@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue);
+	return !skb_queue_empty(&queue->rx_queue);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
-	vif->rx_req_cons_peek = 0;
+	queue->rx_req_cons_peek = 0;
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (rx_work_todo(vif))
-			xenvif_rx_action(vif);
+		if (rx_work_todo(queue))
+			xenvif_rx_action(queue);
 
 		cond_resched();
 	}
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f035899..c3332e2 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -35,8 +36,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
+	{
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->number = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->number);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-
 /* ** Driver Registration ** */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:22 +0000
Message-ID: <1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for multiple
	queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the value written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    8 +++-
 drivers/net/xen-netback/netback.c   |    3 ++
 drivers/net/xen-netback/xenbus.c    |   70 ++++++++++++++++++++++++++++++-----
 4 files changed, 72 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 54d2eeb..97efd09 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -254,4 +254,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 0113324..0234ff0 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/*
+	 * Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 586e741..5d717d7 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -55,6 +55,9 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues = 4;
+module_param(xenvif_max_queues, uint, 0644);
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index c3332e2..ce7ca9a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -21,6 +21,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/*
+	 * Multi-queue support: This is an optional feature.
+	 */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0)
+		requested_num_queues = 1; /* Fall back to single queue */
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
 	{
@@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1)
+		xspath = (char *)dev->otherend;
+	else {
+		xspathsize = strlen(dev->otherend) + 10;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				queue->number);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -584,10 +629,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGE-0000F4-6X; Wed, 15 Jan 2014 16:24:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TGC-0000E6-4T
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:09 +0000
Received: from [85.158.137.68:9062] by server-8.bemta-3.messagelabs.com id
	D0/35-31081-726B6D25; Wed, 15 Jan 2014 16:24:07 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389803038!9350268!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13966 invoked from network); 15 Jan 2014 16:24:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93145191"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 16:24:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:59 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG3-00048F-I3;
	Wed, 15 Jan 2014 16:23:59 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG3-0008I5-Ay; Wed, 15 Jan 2014 16:23:59 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:23 +0000
Message-ID: <1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 3/4] xen-netfront: Factor queue-specific
	data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one is
configured at this point, and uses alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  931 +++++++++++++++++++++++++-------------------
 1 file changed, 541 insertions(+), 390 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..508ea96 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -81,9 +81,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int number; /* Queue number, 0-based */
+	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +96,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -139,6 +140,17 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
@@ -186,21 +198,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -220,41 +232,39 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
 
 	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->number));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -263,9 +273,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -278,7 +289,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -288,44 +299,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -336,71 +347,75 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -410,21 +425,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -441,18 +455,18 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		tx->gref = np->grant_tx_ref[id] = ref;
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -484,20 +498,20 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			tx->gref = np->grant_tx_ref[id] = ref;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -514,7 +528,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -540,6 +554,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -555,6 +575,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1 || np->queues == NULL)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -574,29 +603,29 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -612,7 +641,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -625,14 +654,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -640,12 +669,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->number));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -658,32 +687,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -698,7 +732,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -711,33 +745,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -746,7 +780,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -767,7 +801,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -782,9 +816,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -795,7 +829,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -826,17 +860,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -923,11 +957,10 @@ out:
 	return err;
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -938,12 +971,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -953,7 +986,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -961,8 +994,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -975,29 +1008,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -1009,7 +1042,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -1023,7 +1056,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1032,22 +1065,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1056,14 +1089,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1111,56 +1144,56 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		gnttab_end_foreign_access_ref(queue->grant_tx_ref[i],
 					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+		gnttab_release_grant_reference(&queue->gref_tx_head,
+					       queue->grant_tx_ref[i]);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
+	struct mmu_update      *mmu = queue->rx_mmu;
+	struct multicall_entry *mcl = queue->rx_mcl;
 	struct sk_buff_head free_list;
 	struct sk_buff *skb;
 	unsigned long mfn;
 	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
+	dev_warn(&queue->info->netdev->dev, "%s: fix me for copying receiver.\n",
 			 __func__);
 	return;
 
 	skb_queue_head_init(&free_list);
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF) {
 			unused++;
 			continue;
 		}
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		if (0 == mfn) {
 			skb_shinfo(skb)->nr_frags = 0;
@@ -1191,31 +1224,37 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		xfer++;
 	}
 
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
+	dev_info(&queue->info->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
 		 __func__, xfer, noxfer, unused);
 
 	if (xfer) {
 		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
+			MULTI_mmu_update(mcl, queue->rx_mmu, mmu - queue->rx_mmu,
 					 NULL, DOMID_SELF);
 			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
+			HYPERVISOR_multicall(queue->rx_mcl, mcl - queue->rx_mcl);
 		}
 	}
 
 	__skb_queue_purge(&free_list);
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1258,25 +1297,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1291,7 +1329,12 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+
+	/* Poll each queue */
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1306,6 +1349,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1317,24 +1361,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1347,37 +1382,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
@@ -1401,10 +1407,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1462,30 +1464,36 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1526,100 +1533,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1628,13 +1621,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1642,21 +1635,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1667,17 +1660,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1685,13 +1738,70 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->number = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			}
+			goto out;
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			}
+			goto out;
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1699,34 +1809,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1773,6 +1883,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1785,6 +1898,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1805,36 +1920,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1843,14 +1962,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1952,7 +2074,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1963,6 +2088,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2103,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2125,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -2006,6 +2139,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -2019,16 +2154,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2038,7 +2176,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2085,17 +2226,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGA-0000Dp-Mo; Wed, 15 Jan 2014 16:24:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TG9-0000DJ-Eg
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:05 +0000
Received: from [85.158.139.211:15464] by server-13.bemta-5.messagelabs.com id
	2C/A5-11357-426B6D25; Wed, 15 Jan 2014 16:24:04 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389803041!9959372!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1957 invoked from network); 15 Jan 2014 16:24:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91026376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:23:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TFt-00047o-Tz;
	Wed, 15 Jan 2014 16:23:49 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TFt-0008Hq-NW; Wed, 15 Jan 2014 16:23:49 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:20 +0000
Message-ID: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

One open issue is how to deal with the tx_credit data for rate limiting.
This used to exist on a per-VIF basis, and these patches move it to
per-queue to avoid contention on concurrent access to the tx_credit
data from multiple threads. This has the side effect of breaking the
tx_credit accounting across the VIF as a whole. I cannot see a situation
in which people would want to use both rate limiting and a
high-performance multi-queue mode, but if this is problematic then it
can be brought back to the VIF level, with appropriate protection.
Obviously, it continues to work identically in the case where there is
only one queue.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add some
capability to negotiate not only the hash algorithm selection, but also
allow the frontend to specify some parameters to this.
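The L4 hash selection described above can be sketched as follows. This is an illustrative userspace mock-up only: the mixing function and names here are assumptions for the sketch, not code from these patches, which would use the kernel's own hashing helpers.

```c
#include <stdint.h>

/* Illustrative stand-in: mix the TCP/IP 4-tuple into a hash value.
 * Not the algorithm used by the actual patches. */
static uint32_t l4_hash(uint32_t saddr, uint32_t daddr,
			uint16_t sport, uint16_t dport)
{
	uint32_t h = saddr ^ daddr;

	h ^= ((uint32_t)sport << 16) | dport;
	h ^= h >> 16;
	h *= 0x45d9f3bu;
	h ^= h >> 16;
	return h;
}

static unsigned int select_queue(uint32_t saddr, uint32_t daddr,
				 uint16_t sport, uint16_t dport,
				 unsigned int num_queues)
{
	/* All packets of one stream share a 4-tuple, so a stream
	 * always lands on the same queue, preserving per-stream
	 * ordering while distinct streams spread across queues. */
	return l4_hash(saddr, daddr, sport, dport) % num_queues;
}
```

The key property is stability: a given TCP stream always maps to the same
queue, which is why no negotiation is needed as long as only one algorithm
exists.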

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to the requested number of queues minus one. If
only one queue is requested, it falls back to the flat structure where
the ring references and event channels are written at the same level as
other vif information.
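As an illustration (the exact frontend directory depends on the toolstack,
and the values shown are placeholders), a two-queue frontend area would
contain entries along these lines:

```
multi-queue-num-queues   = "2"
queue-0/tx-ring-ref      = "<grant-ref>"
queue-0/rx-ring-ref      = "<grant-ref>"
queue-0/event-channel-tx = "<port>"    (or a single "event-channel")
queue-0/event-channel-rx = "<port>"
queue-1/tx-ring-ref      = "<grant-ref>"
...
```

With a single queue, the same keys sit at the top level of the vif
directory instead.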

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGC-0000EJ-3d; Wed, 15 Jan 2014 16:24:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TGA-0000Dg-Ak
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:07 +0000
Received: from [85.158.139.211:18909] by server-17.bemta-5.messagelabs.com id
	AA/FD-19152-526B6D25; Wed, 15 Jan 2014 16:24:05 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389803041!9959372!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2061 invoked from network); 15 Jan 2014 16:24:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91026442"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:24:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:58 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG2-00047u-Iq;
	Wed, 15 Jan 2014 16:23:58 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG2-0008I0-Bq; Wed, 15 Jan 2014 16:23:58 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:22 +0000
Message-ID: <1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for multiple
	queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    8 +++-
 drivers/net/xen-netback/netback.c   |    3 ++
 drivers/net/xen-netback/xenbus.c    |   70 ++++++++++++++++++++++++++++++-----
 4 files changed, 72 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 54d2eeb..97efd09 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -254,4 +254,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 0113324..0234ff0 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/*
+	 * Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 586e741..5d717d7 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -55,6 +55,9 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues = 4;
+module_param(xenvif_max_queues, uint, 0644);
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index c3332e2..ce7ca9a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -21,6 +21,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/*
+	 * Multi-queue support: This is an optional feature.
+	 */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0)
+		requested_num_queues = 1; /* Fall back to single queue */
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
 	{
@@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1)
+		xspath = (char *)dev->otherend;
+	else {
+		xspathsize = strlen(dev->otherend) + 10;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				queue->number);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -584,10 +629,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGE-0000F4-6X; Wed, 15 Jan 2014 16:24:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TGC-0000E6-4T
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:09 +0000
Received: from [85.158.137.68:9062] by server-8.bemta-3.messagelabs.com id
	D0/35-31081-726B6D25; Wed, 15 Jan 2014 16:24:07 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389803038!9350268!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13966 invoked from network); 15 Jan 2014 16:24:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="93145191"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 16:24:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:59 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TG3-00048F-I3;
	Wed, 15 Jan 2014 16:23:59 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TG3-0008I5-Ay; Wed, 15 Jan 2014 16:23:59 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:23 +0000
Message-ID: <1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 3/4] xen-netfront: Factor queue-specific
	data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one is
configured at this point, and uses alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.
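A minimal userspace sketch of that trivial selector follows; the struct
types here are empty stand-ins for the kernel's, and the exact
ndo_select_queue prototype varies between kernel versions.

```c
#include <stdint.h>
#include <stddef.h>

struct net_device;	/* stand-in for the kernel type */
struct sk_buff;		/* stand-in for the kernel type */

/* With only one queue configured, every packet goes to queue 0. */
static uint16_t xennet_select_queue(struct net_device *dev,
				    struct sk_buff *skb)
{
	(void)dev;
	(void)skb;
	return 0;
}
```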

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  931 +++++++++++++++++++++++++-------------------
 1 file changed, 541 insertions(+), 390 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..508ea96 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -81,9 +81,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int number; /* Queue number, 0-based */
+	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +96,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -139,6 +140,17 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
@@ -186,21 +198,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -220,41 +232,39 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
 
 	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->number));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -263,9 +273,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -278,7 +289,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -288,44 +299,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -336,71 +347,75 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -410,21 +425,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -441,18 +455,18 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		tx->gref = np->grant_tx_ref[id] = ref;
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -484,20 +498,20 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			tx->gref = np->grant_tx_ref[id] = ref;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -514,7 +528,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -540,6 +554,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -555,6 +575,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1 || np->queues == NULL)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -574,29 +603,29 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -612,7 +641,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -625,14 +654,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -640,12 +669,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->number));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -658,32 +687,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -698,7 +732,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -711,33 +745,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -746,7 +780,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -767,7 +801,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -782,9 +816,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -795,7 +829,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -826,17 +860,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -923,11 +957,10 @@ out:
 	return err;
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -938,12 +971,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -953,7 +986,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -961,8 +994,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -975,29 +1008,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -1009,7 +1042,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -1023,7 +1056,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1032,22 +1065,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1056,14 +1089,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1111,56 +1144,56 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		gnttab_end_foreign_access_ref(queue->grant_tx_ref[i],
 					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+		gnttab_release_grant_reference(&queue->gref_tx_head,
+					       queue->grant_tx_ref[i]);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
+	struct mmu_update      *mmu = queue->rx_mmu;
+	struct multicall_entry *mcl = queue->rx_mcl;
 	struct sk_buff_head free_list;
 	struct sk_buff *skb;
 	unsigned long mfn;
 	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
+	dev_warn(&queue->info->netdev->dev, "%s: fix me for copying receiver.\n",
 			 __func__);
 	return;
 
 	skb_queue_head_init(&free_list);
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF) {
 			unused++;
 			continue;
 		}
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		if (0 == mfn) {
 			skb_shinfo(skb)->nr_frags = 0;
@@ -1191,31 +1224,37 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		xfer++;
 	}
 
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
+	dev_info(&queue->info->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
 		 __func__, xfer, noxfer, unused);
 
 	if (xfer) {
 		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
+			MULTI_mmu_update(mcl, queue->rx_mmu, mmu - queue->rx_mmu,
 					 NULL, DOMID_SELF);
 			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
+			HYPERVISOR_multicall(queue->rx_mcl, mcl - queue->rx_mcl);
 		}
 	}
 
 	__skb_queue_purge(&free_list);
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1258,25 +1297,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1291,7 +1329,12 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1306,6 +1349,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1317,24 +1361,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1347,37 +1382,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
@@ -1401,10 +1407,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1462,30 +1464,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1526,100 +1533,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1628,13 +1621,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1642,21 +1635,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1667,17 +1660,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1685,13 +1738,70 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->number = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			}
+			else goto out;
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			}
+			else goto out;
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1699,34 +1809,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1773,6 +1883,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1785,6 +1898,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1805,36 +1920,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1843,14 +1962,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1952,7 +2074,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1963,6 +2088,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2103,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2125,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -2006,6 +2139,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -2019,16 +2154,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2038,7 +2176,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2085,17 +2226,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGA-0000Dp-Mo; Wed, 15 Jan 2014 16:24:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1W3TG9-0000DJ-Eg
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 16:24:05 +0000
Received: from [85.158.139.211:15464] by server-13.bemta-5.messagelabs.com id
	2C/A5-11357-426B6D25; Wed, 15 Jan 2014 16:24:04 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389803041!9959372!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1957 invoked from network); 15 Jan 2014 16:24:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91026376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:23:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:23:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1W3TFt-00047o-Tz;
	Wed, 15 Jan 2014 16:23:49 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1W3TFt-0008Hq-NW; Wed, 15 Jan 2014 16:23:49 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Wed, 15 Jan 2014 16:23:20 +0000
Message-ID: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

One open issue is how to deal with the tx_credit data for rate limiting.
This used to exist on a per-VIF basis, and these patches move it to
per-queue to avoid contention on concurrent access to the tx_credit
data from multiple threads. This has the side effect of breaking the
tx_credit accounting across the VIF as a whole. I cannot see a situation
in which people would want to use both rate limiting and a
high-performance multi-queue mode, but if this is problematic then it
can be brought back to the VIF level, with appropriate protection.
Obviously, it continues to work identically in the case where there is
only one queue.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability to negotiate not only the hash algorithm, but also to let the
frontend specify some parameters for it.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/... where N runs from
0 to the requested number of queues minus one. If only one queue is
requested, the driver falls back to the flat structure, where the ring
references and event channels are written at the same level as the
other vif information.

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Wed Jan 15 16:24:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TGP-0000JB-W0; Wed, 15 Jan 2014 16:24:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3TGO-0000IX-9m
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:24:20 +0000
Received: from [193.109.254.147:24842] by server-8.bemta-14.messagelabs.com id
	CE/DF-30921-336B6D25; Wed, 15 Jan 2014 16:24:19 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389803058!11093512!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11901 invoked from network); 15 Jan 2014 16:24:18 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:24:18 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so2029042wgg.9
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 08:24:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=oFqgP2bYNTmnHFB2wz8dF2DyDmx5LNBdt4iOM+2CgZ0=;
	b=nK/XFZdUtD8hd+IP91NboQIYrSOxsfODsNHBNEcsrZyi0DSoe+FogS2S4PnH6uCMmm
	jTEo/E3ZxaN8DvyknJbrr9aAO8ErPDIWWasztYIUlm0/XqP+SBPoVSEOYUSUNr1qPy8s
	n3J+hS7cGUgrdyNpMRL8UxMXQIj6YYN0GFddDyhW4McGSL+xybF2KOJZIZO5K5H8nY9l
	kWYJTDiYzpsW33FgsSjPUpQHSNhRmnLwop0yQEc+U7Jeh+JN+WHgZ1BHLdbH/Ni4M2en
	tkFtLT24jVkuils6hwJGlg8h1nCFVXOFLDpaFOX9tCF1p3i7s9DNPAZT0Q5HynknvC8l
	9hIw==
X-Gm-Message-State: ALoCoQkOavSPBanOiyAlNutvSOLaH9fyJ7JKPuxVU7UvNfmxincQKDgVoBj5mbpFoDIwNfvOBWUA
X-Received: by 10.194.63.134 with SMTP id g6mr3273698wjs.46.1389803057418;
	Wed, 15 Jan 2014 08:24:17 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id y8sm3685838wje.12.2014.01.15.08.24.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 08:24:16 -0800 (PST)
Message-ID: <52D6B62A.9000208@linaro.org>
Date: Wed, 15 Jan 2014 16:24:10 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
In-Reply-To: <24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, nwhitehorn@freebsd.org,
	gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:26 AM, Warner Losh wrote:
> 
> On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
>> This new support brings 2 open questions (for both Xen and FreeBSD community).
>> When a new guest is created, the toolstack will generated a device tree which
>> will contains:
>> 	- The amount of memory
>> 	- The description of the platform (gic, timer, hypervisor node)
>> 	- PSCI node for SMP bringup
>>
>> Until now, Xen on ARM supported only Linux-based OS. When the support of
>> device tree was added in Xen for guest, we chose to use Linux device tree
>> bindings (gic, timer,...). It seems that FreeBSD choose a different way to
>> implement device tree:
>> 	- strictly respect ePAR (for interrupt-parent property)
>> 	- gic bindings different (only 1 interrupt cell)
>>
>> I would like to come with a common device tree specification (bindings, ...)
>> across every operating system. I know the Linux community is working on having
>> a device tree bindings out of the kernel tree. Does FreeBSD community plan to
>> work with Linux community for this purpose?
> 
> We generally try to follow the common definitions for the FDT stuff.
> There are a few cases where we either lack the feature set of Linux, or
> where the Linux folks are moving quickly and changing the underlying
> definitions, where we wait for the standards to mature before we
> implement. In some cases, where maturity hasn't happened, or where the
> bindings are overly Linux centric (which in theory they aren't supposed
> to be, but sometimes wind up that way), we've not implemented things.

As I understand it, the main bindings (gic, timer) are set in stone and
won't change. Ian Campbell has a repository with all the ARM bindings here:
http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=tree;f=Bindings/arm

To compare, here are the DT provided by Xen and the one correctly
parsed by FreeBSD:
	- Xen: http://xenbits.xen.org/people/julieng/xenvm-4.2.dts
	- FreeBSD: http://xenbits.xen.org/people/julieng/xenvm-bsd.dts

On the Xen side:
	- Every device should move under a simple-bus. I think it's harmless
	  for the Linux side.
	- What about the hypervisor node? IMHO this node should also live
	  under the simple-bus.

On the FreeBSD side:
	- The GIC and timer bindings need to be handled correctly (see below
	  for the GIC problem)
	- Be less strict about the interrupt-parent property, e.g. look at
	  the parent node if there is no interrupt-parent property
	- If the hypervisor doesn't live under a simple-bus, should the
	  interrupt/memory retrieval be moved from simple-bus to nexus?

Before revision r260282 (I have added Nathan on CC), I was able to
use the Linux GIC bindings (see
http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=blob;f=Bindings/arm/gic.txt)
without any issue.

It seems the fdt bus now assumes the number of interrupt cells is
always 1.

>> As specified earlier, the device tree can vary across Xen versions and
>> user input (e.g. the memory). I have noticed a few places (mainly the
>> timer) where the IRQ numbers are hardcoded.
> 
> These cases should be viewed as bugs in FreeBSD, I believe. One of the
> goals that has wide support, at least in theory, is that we can boot
> with an unaltered Linux FDT. This goal is some time in the future.

The timer interrupt used by FreeBSD doesn't match the one used by Xen
(secure vs non-secure interrupt). This should be easily resolved by
retrieving the interrupt from the device tree.

Is there a particular reason to use the secure interrupt for the timer?

> 
>> The second question is related to memory attribute for page table. The early
>> page table setup by FreeBSD are using Write-Through memory attribute which
>> result to a likely random (not every time the same place) crash before the
>> real page table are initialized.
>> Replacing Write-Through by Write-Back made FreeBSD boot correctly. Even today,
>> I have no idea if it's a requirement from Xen or a bug (either in Xen or
>> FreeBSD).
> 
> There were some problems with pages being set up improperly for the
> mutexes to operate properly. Have you confirmed this on a recent
> version of FreeBSD?

I checked out the FreeBSD branch last week.

I'm not sure I understand the relation with mutexes. The symptoms I
see are mostly data/prefetch aborts delivered to FreeBSD. Could they be
related?

>> The code is taking its first faltering steps, therefore the TODO is quite big:
>> 	- Try the code on x86 architecture. I did lots of rework for the event
>> 	channels code and didn't even try to compile
>> 	- Clean up event channel code
>> 	- Clean up xen control code
>> 	- Add support for Device Tree loading via Linux Boot ABI
>> 	- Fixing crashing on userspace. Could be related to the patch
>> 	series "arm SMP on Cortex-A15"
>> 	- Add guest SMP support
>> 	- DOM0 support?
>>
>> Any help, comments, questions are welcomed.
> 
> I think this is great! We'd love to work with you to make this happen.
> 
> Warner
> 
>> Sincerely yours,
>>
>> ============= Instruction to test FreeBSD on Xen on ARM ===========
> [trimmed]
> 


-- 
Julien Grall


	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 01:26 AM, Warner Losh wrote:
> 
> On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
>> This new support raises 2 open questions (for both the Xen and FreeBSD
>> communities). When a new guest is created, the toolstack will generate a
>> device tree which will contain:
>> 	- The amount of memory
>> 	- The description of the platform (gic, timer, hypervisor node)
>> 	- A PSCI node for SMP bringup
>>
>> Until now, Xen on ARM supported only Linux-based OSes. When support for
>> device trees was added to Xen for guests, we chose to use the Linux
>> device tree bindings (gic, timer, ...). It seems that FreeBSD chose a
>> different way to implement device tree support:
>> 	- strictly respect ePAPR (for the interrupt-parent property)
>> 	- different gic bindings (only 1 interrupt cell)
>>
>> I would like to come up with a common device tree specification
>> (bindings, ...) across every operating system. I know the Linux
>> community is working on moving the device tree bindings out of the
>> kernel tree. Does the FreeBSD community plan to work with the Linux
>> community on this?
> 
> We generally try to follow the common definitions for the FDT stuff.
> There are a few cases where we either lack the feature set of Linux, or
> where the Linux folks are moving quickly and changing the underlying
definitions
> where we wait for the standards to mature before we implement. In some
cases, where
> maturity hasn't happened, or where the bindings are overly Linux
centric (which in
> theory they aren't supposed to be, but sometimes wind up that way).
where we've not
> implemented things.

As I understand it, the main bindings (gic, timer) are set in stone and
won't change. Ian Campbell has a repository with all the ARM bindings here:
http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=tree;f=Bindings/arm

To compare the device tree provided by Xen with the one FreeBSD currently
parses correctly:
	- Xen: http://xenbits.xen.org/people/julieng/xenvm-4.2.dts
	- FreeBSD: http://xenbits.xen.org/people/julieng/xenvm-bsd.dts

From the Xen side:
	- Every device should move under a simple-bus. I think it's harmless
for the Linux side.
	- What about the hypervisor node? IMHO this node should also live
under the simple-bus.

From the FreeBSD side:
	- The GIC and timer bindings need to be handled correctly (see below
for the problem with the GIC).
	- Be less strict about the interrupt-parent property, e.g. fall back
to the parent node when a node has no interrupt-parent property.
	- If the hypervisor node doesn't live under the simple-bus, should the
interrupt/memory retrieval be moved from simple-bus to nexus?
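The ePAPR fallback mentioned above can be sketched in C on a toy tree. This is purely illustrative (the `dt_node` struct and helper are hypothetical, not FreeBSD's actual OFW/newbus code): when a node lacks an interrupt-parent property, the interrupt parent is inherited from the nearest ancestor that carries one.

```c
#include <assert.h>
#include <stddef.h>

/* Toy representation of a device tree node. */
struct dt_node {
    const char     *name;
    struct dt_node *parent;           /* structural parent in the tree  */
    struct dt_node *interrupt_parent; /* NULL if the property is absent */
};

/*
 * ePAPR-style resolution: walk up the tree until a node carrying an
 * explicit interrupt-parent property is found.
 */
static struct dt_node *
dt_find_interrupt_parent(struct dt_node *node)
{
    for (; node != NULL; node = node->parent) {
        if (node->interrupt_parent != NULL)
            return (node->interrupt_parent);
    }
    return (NULL);
}
```

With this rule, a device under a simple-bus without its own interrupt-parent still resolves to the gic declared at the root.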

Before revision r260282 (I have added Nathan on cc), I was able to use
the Linux GIC bindings (see
http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=blob;f=Bindings/arm/gic.txt)
without any issue.

It seems the FDT bus now assumes the number of interrupt cells is
always 1.
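For comparison, the Linux GIC binding linked above uses three interrupt cells {type, number, flags}, so a bus that hardwires one cell per interrupt cannot parse it. A minimal decoder sketch (hypothetical helper, not the FreeBSD fdt bus code):

```c
#include <assert.h>

/* Cell 0 of the Linux GIC binding: interrupt type. */
#define GIC_SPI 0 /* shared peripheral interrupt,  INTIDs start at 32 */
#define GIC_PPI 1 /* private peripheral interrupt, INTIDs start at 16 */

/*
 * Decode a 3-cell GIC specifier {type, number, flags} into a GIC INTID.
 * A consumer assuming #interrupt-cells == 1 would read only cells[0],
 * i.e. the type, not the interrupt number.
 */
static unsigned int
gic_cells_to_intid(const unsigned int cells[3])
{
    return (cells[1] + (cells[0] == GIC_SPI ? 32u : 16u));
}
```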

>> As specified earlier, the device tree can vary across Xen versions and
>> user input (eg the memory). I have noticed a few places (mainly the
>> timer) where the IRQ numbers are hardcoded.
> 
> These cases should be viewed as bugs in FreeBSD, I believe. One of the
> goals that has wide support, at least in theory, is that we can boot
> with an unaltered Linux FDT. This goal is some time in the future.

The timer interrupt used by FreeBSD doesn't match the one used by Xen
(secure vs non-secure interrupt). This should be easily resolved by
retrieving the interrupt from the device tree.

Is there a particular reason to use the secure interrupt for the timer?
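For reference, the ARM generic timer exposes four PPIs. The numbers below are the conventional ones from the Linux binding (treat them as an assumption and read the actual values from the DT rather than hardcoding them):

```c
#include <assert.h>

/*
 * Conventional ARM generic timer PPI numbers (per-CPU interrupts).
 * A Xen guest is given the non-secure physical and virtual timers; a
 * kernel that hardcodes the secure PPI will wait on an interrupt the
 * hypervisor never delivers.
 */
#define TIMER_PPI_SEC_PHYS    13u /* secure physical timer     */
#define TIMER_PPI_NONSEC_PHYS 14u /* non-secure physical timer */
#define TIMER_PPI_VIRT        11u /* virtual timer             */
#define TIMER_PPI_HYP         10u /* hypervisor timer          */

/* PPIs occupy GIC INTIDs 16-31. */
static unsigned int
ppi_to_intid(unsigned int ppi)
{
    return (ppi + 16u);
}
```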

> 
>> The second question is related to memory attributes for page tables. The
>> early page tables set up by FreeBSD use the Write-Through memory
>> attribute, which results in a seemingly random (not every time in the
>> same place) crash before the real page tables are initialized.
>> Replacing Write-Through with Write-Back made FreeBSD boot correctly.
>> Even today, I have no idea if it's a requirement from Xen or a bug
>> (either in Xen or FreeBSD).
> 
> There were some problems with pages being set up improperly for the
> mutexes to operate properly. Have you confirmed this on a recent
> version of FreeBSD?

I checked out the FreeBSD branch last week.

I'm not sure I understand the relation with mutexes. The symptoms I see
are mostly data/prefetch aborts delivered to FreeBSD. Could that be
related?
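On the Write-Through vs Write-Back point above: assuming ARMv7 short-descriptor L1 section mappings with TEX remap disabled, the two attributes differ only in the B bit of the entry. The constants below illustrate that encoding; they are a sketch, not FreeBSD's actual pmap definitions:

```c
#include <assert.h>

/* ARMv7 short-descriptor L1 section entry, TEX remap disabled. */
#define L1_TYPE_SECTION (2u << 0) /* descriptor type bits [1:0] = 0b10 */
#define L1_B            (1u << 2) /* bufferable */
#define L1_C            (1u << 3) /* cacheable  */

/* C=1, B=0: Write-Through, no write-allocate. */
#define L1_ATTR_WT (L1_TYPE_SECTION | L1_C)
/* C=1, B=1: Write-Back, no write-allocate. */
#define L1_ATTR_WB (L1_TYPE_SECTION | L1_C | L1_B)
```

So flipping a single bit in the early page tables is the whole difference between the crashing and the working configuration described above.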

>> The code is taking its first faltering steps, therefore the TODO list is
>> quite big:
>> 	- Try the code on the x86 architecture. I did lots of rework on the
>> 	event channel code and didn't even try to compile it
>> 	- Clean up the event channel code
>> 	- Clean up the Xen control code
>> 	- Add support for Device Tree loading via the Linux boot ABI
>> 	- Fix crashes in userspace. Could be related to the patch
>> 	series "arm SMP on Cortex-A15"
>> 	- Add guest SMP support
>> 	- Dom0 support?
>>
>> Any help, comments, questions are welcomed.
> 
> I think this is great! We'd love to work with you to make this happen.
> 
> Warner
> 
>> Sincerely yours,
>>
>> ============= Instruction to test FreeBSD on Xen on ARM ===========
> [trimmed]
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:27:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TJe-0000uG-PJ; Wed, 15 Jan 2014 16:27:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3TJd-0000u0-3u
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:27:41 +0000
Received: from [85.158.143.35:51019] by server-3.bemta-4.messagelabs.com id
	10/57-32360-CF6B6D25; Wed, 15 Jan 2014 16:27:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389803258!4818176!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20308 invoked from network); 15 Jan 2014 16:27:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:27:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91028207"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:27:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:27:36 -0500
Message-ID: <1389803256.3793.95.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 15 Jan 2014 16:27:36 +0000
In-Reply-To: <20140115143504.GL1696@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > 
> > Based on the above I have no idea whether a freeze exception should be
> > granted for this, so my default answer is no. I'm not sure what else you
> > could have expected.
> > 
> > If you think there are changes here which should be in 4.4.0 then please
> > enumerate all changes included in this merge which have any relation to
> > Xen and their potential impact on the release.
> 
> I have listed the changes here that have a potential impact on Xen, with
> the ones that I think are quite important at the beginning. Either the
> commit title speaks for itself or I added a small description of what is
> affected.

Thanks, but there's not a lot here for me to go on WRT making a decision
on a freeze exception. Did you refer to
http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
like I said? A freeze exception needs an analysis of benefits and risks,
not the briefest words you can possibly manage.

Anyway, it appears this is a grab bag of things we might want and misc
fixes which are perhaps nice to have. I'm nowhere near comfortable
giving it a blanket exemption based on what you've presented here, or
even cherry-picking what might be the important ones. If you think any
or all of it is actually important for 4.4, please make a proper case
for inclusion, either for the aggregate or for the individual changes.

Ian.

> 
> Fix pc migration from qemu <= 1.5
> 
> - Potential compilation issue:
> Adjust qapi-visit for python-2.4.3
> configure: Explicitly set ARFLAGS so we can build with GNU Make 4.0
> 
> - Memory leak:
> qapi: fix memleak by adding implict struct functions in dealloc visitor
>   qapi is used by qmp, so there are potential leaks when doing qmp calls
> qom: Fix memory leak in object_property_set_link()
>   same for qom
> 
> qdev-monitor: Fix crash when device_add is called with abstract driver
> qdev-monitor: Unref device when device_add fails
>   these could be potential issues triggered through the device-add qmp
>   command
> 
> qemu-char: Fix potential out of bounds access to local arrays
>   for serial="vc:WxH"
>   vc stands for virtual console
> 
> audio: honor QEMU_AUDIO_TIMER_PERIOD instead of waking up every *nano* second
>   looks like it improves cpu load when playing audio
> 
> scsi_target_send_command(): amend stable-1.6 port of the CVE-2013-4344 fix
>   if someone uses a scsi disk
> 
> vmdk: Fix vmdk_parse_extents
> vmdk: Fix creating big description file
>   if someone uses a vmdk disk image
> 
> qcow2: count_contiguous_clusters and compression
> qcow2: fix possible corruption when reading multiple clusters
> qcow2: Zero-initialise first cluster for new images
>   several qcow2 fixes (a disk image format)
> 
> virtio-net: only delete bh that existed
> virtio-net: fix the memory leak in rxfilter_notify()
>   a few virtio fixes
> 
> rng-egd: offset the point when repeatedly read from the buffer
>   looks like this can be used by virtio
> 
> 
> I did not list the commits that do not look like something a Xen guest
> can use. So do these look like patches to take into our tree? At least
> the first 7 would be good to take, I think (migration fix, memory
> leaks, qdev fixes).
> 
> Regards,
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TQv-0001ej-UM; Wed, 15 Jan 2014 16:35:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W3TQu-0001ee-1B
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:35:12 +0000
Received: from [85.158.143.35:32016] by server-2.bemta-4.messagelabs.com id
	2A/05-11386-EB8B6D25; Wed, 15 Jan 2014 16:35:10 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389803709!11883113!1
X-Originating-IP: [213.199.154.75]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13575 invoked from network); 15 Jan 2014 16:35:09 -0000
Received: from mail-db3lp0075.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.75)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Jan 2014 16:35:09 -0000
Received: from DB3PRD0611HT001.eurprd06.prod.outlook.com (157.56.254.229) by
	AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22) with Microsoft
	SMTP Server (TLS) id 15.0.851.15; Wed, 15 Jan 2014 16:35:07 +0000
Message-ID: <52D6B8B6.5070302@zynstra.com>
Date: Wed, 15 Jan 2014 16:35:02 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com>
In-Reply-To: <52D69E0B.5020006@oracle.com>
X-Originating-IP: [157.56.254.229]
X-ClientProxiedBy: AMSPR03CA017.eurprd03.prod.outlook.com (10.242.225.145)
	To AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22)
X-Forefront-PRVS: 00922518D8
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(689001)(679001)(779001)(51704005)(24454002)(51444003)(199002)(189002)(377454003)(479174003)(79102001)(59896001)(66066001)(74662001)(74366001)(36756003)(47776003)(74876001)(77982001)(76786001)(76796001)(74502001)(31966008)(47446002)(50466002)(63696002)(59766001)(80022001)(56776001)(92566001)(92726001)(53806001)(64126003)(85852003)(69226001)(54356001)(42186004)(85306002)(76482001)(56816005)(90146001)(87976001)(77096001)(83072002)(54316002)(47976001)(46102001)(83322001)(47736001)(23756003)(81342001)(50986001)(74706001)(93136001)(51856001)(81816001)(33656001)(80316001)(81686001)(83506001)(49866001)(4396001)(80976001)(81542001)(93516002);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AM3PR03MB404;
	H:DB3PRD0611HT001.eurprd06.prod.outlook.com; CLIP:157.56.254.229;
	FPR:; RD:InfoNoRecords; A:1; MX:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/15/2014 04:49 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> Could you confirm that this problem doesn't exist if loading tmem with
>>>>> selfshrinking=0 while compiling gcc? It seems that you are compiling
>>>>> different packages during your testing.
>>>>> This will help to figure out whether selfshrinking is the root cause.
>>>> Got an oom with selfshrinking=0, again during a gcc compile.
>>>> Unfortunately I don't have a single test case which demonstrates the
>>>> problem but as I mentioned before it will generally show up under
>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>
>>> So the root cause is not the enabled selfshrinking.
>>> Then what I can think of is that the xen-selfballoon driver was too
>>> aggressive: too many pages were ballooned out, which caused heavy memory
>>> pressure on the guest OS.
>>> And kswapd started to reclaim pages until most pages were
>>> unreclaimable (all_unreclaimable=yes for all zones), then the OOM killer
>>> was triggered.
>>> In theory the balloon driver should give ballooned-out pages back to the
>>> guest OS, but I'm afraid this procedure is not fast enough.
>>>
>>> My suggestion is to reserve a minimum amount of memory for your guest OS
>>> so that xen-selfballoon won't be so aggressive.
>>> You can do that through the parameters selfballoon_reserved_mb or
>>> selfballoon_min_usable_mb.
>> I am still getting OOM errors with both of these set to 32, so I'll try
>> another bump to 64.  I think that if I do find values which prevent it,
>> though, then it is more of a workaround than a fix, because it still
>> suggests that swap is not being used when ballooning is no longer
> Yes, it's like a workaround, but I don't think there is a better way.
>
>  From the recent OOM log your reported:
> [ 8212.940769] Free swap  = 1925576kB
> [ 8212.940770] Total swap = 2097148kB
>
> [504638.442136] Free swap  = 1868108kB
> [504638.442137] Total swap = 2097148kB
>
> 171572KB and 229040KB of data were swapped out to the swap disk; I think
> those are already significant amounts given that the guest OS has only
> ~300M of usable memory.
> The guest OS can't find pages suitable for swap any more after so many
> pages have been swapped out, although at that time the swap device still
> has enough space.
>
> The OOM may not be triggered if the balloon driver could give memory
> back to the guest OS fast enough, but I think that's unrealistic.
> So the best way is to reserve more memory for the guest OS.
>
>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>> kernel) guest running on the dom0 with tmem activated so I'm going to
>> see if I can find a comparable workload to see if I get the same issue
>> with a different kernel configuration.
>>
I've done a bit more testing and seem to have found an extra condition 
which is affecting the OOM behaviour in my guests.  All my Gentoo guests 
have swap space backed by a dm-crypt volume and if I remove this layer 
then things seem to be behaving much more reliably.  In my Ubuntu guests 
I have plain swap space and so far I haven't been able to trigger an OOM 
condition.  Is it possible that it is the dm-crypt layer failing to get 
working memory when swapping something in/out and causing the error?

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TQv-0001ej-UM; Wed, 15 Jan 2014 16:35:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W3TQu-0001ee-1B
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:35:12 +0000
Received: from [85.158.143.35:32016] by server-2.bemta-4.messagelabs.com id
	2A/05-11386-EB8B6D25; Wed, 15 Jan 2014 16:35:10 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389803709!11883113!1
X-Originating-IP: [213.199.154.75]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13575 invoked from network); 15 Jan 2014 16:35:09 -0000
Received: from mail-db3lp0075.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.75)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	15 Jan 2014 16:35:09 -0000
Received: from DB3PRD0611HT001.eurprd06.prod.outlook.com (157.56.254.229) by
	AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22) with Microsoft
	SMTP Server (TLS) id 15.0.851.15; Wed, 15 Jan 2014 16:35:07 +0000
Message-ID: <52D6B8B6.5070302@zynstra.com>
Date: Wed, 15 Jan 2014 16:35:02 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com>
In-Reply-To: <52D69E0B.5020006@oracle.com>
X-Originating-IP: [157.56.254.229]
X-ClientProxiedBy: AMSPR03CA017.eurprd03.prod.outlook.com (10.242.225.145)
	To AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22)
X-Forefront-PRVS: 00922518D8
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(689001)(679001)(779001)(51704005)(24454002)(51444003)(199002)(189002)(377454003)(479174003)(79102001)(59896001)(66066001)(74662001)(74366001)(36756003)(47776003)(74876001)(77982001)(76786001)(76796001)(74502001)(31966008)(47446002)(50466002)(63696002)(59766001)(80022001)(56776001)(92566001)(92726001)(53806001)(64126003)(85852003)(69226001)(54356001)(42186004)(85306002)(76482001)(56816005)(90146001)(87976001)(77096001)(83072002)(54316002)(47976001)(46102001)(83322001)(47736001)(23756003)(81342001)(50986001)(74706001)(93136001)(51856001)(81816001)(33656001)(80316001)(81686001)(83506001)(49866001)(4396001)(80976001)(81542001)(93516002);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AM3PR03MB404;
	H:DB3PRD0611HT001.eurprd06.prod.outlook.com; CLIP:157.56.254.229;
	FPR:; RD:InfoNoRecords; A:1; MX:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/15/2014 04:49 PM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> Could you confirm that this problem doesn't exist when tmem is
>>>>> loaded with selfshrinking=0 while compiling gcc? It seems that you
>>>>> were compiling different packages during your testing.
>>>>> This will help figure out whether selfshrinking is the root cause.
>>>> Got an oom with selfshrinking=0, again during a gcc compile.
>>>> Unfortunately I don't have a single test case which demonstrates the
>>>> problem but as I mentioned before it will generally show up under
>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>
>>> So the root cause is not the enabled selfshrinking.
>>> Then what I can think of is that the xen-selfballoon driver was too
>>> aggressive: too many pages were ballooned out, which caused heavy
>>> memory pressure on the guest OS.
>>> kswapd then started to reclaim pages until most pages were
>>> unreclaimable (all_unreclaimable=yes for all zones), at which point
>>> the OOM killer was triggered.
>>> In theory the balloon driver should give ballooned-out pages back to
>>> the guest OS, but I'm afraid this process is not fast enough.
>>>
>>> My suggestion is to reserve a minimum amount of memory for your guest
>>> OS so that xen-selfballoon won't be so aggressive.
>>> You can do this through the selfballoon_reserved_mb or
>>> selfballoon_min_usable_mb parameters.
>> I am still getting OOM errors with both of these set to 32, so I'll try
>> another bump to 64.  I think that even if I do find values which prevent
>> it, it is more of a workaround than a fix, because it still
>> suggests that swap is not being used when ballooning is no longer
> Yes, it's more of a workaround, but I don't think there is a better way.
>
>  From the recent OOM logs you reported:
> [ 8212.940769] Free swap  = 1925576kB
> [ 8212.940770] Total swap = 2097148kB
>
> [504638.442136] Free swap  = 1868108kB
> [504638.442137] Total swap = 2097148kB
>
> 171572kB and 229040kB of data were swapped out to the swap disk; I
> think those are already significant amounts for a guest OS with only
> ~300M of usable memory.
> The guest OS can't find any more pages suitable for swapping out after
> so many pages have been swapped, although at that time the swap device
> still had enough space.
>
> The OOM might not be triggered if the balloon driver could give memory
> back to the guest OS fast enough, but I think that's unrealistic.
> So the best approach is to reserve more memory for the guest OS.
>
>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>> kernel) guest running on the dom0 with tmem activated so I'm going to
>> see if I can find a comparable workload to see if I get the same issue
>> with a different kernel configuration.
>>
I've done a bit more testing and seem to have found an extra condition 
which is affecting the OOM behaviour in my guests.  All my Gentoo guests 
have swap space backed by a dm-crypt volume and if I remove this layer 
then things seem to be behaving much more reliably.  In my Ubuntu guests 
I have plain swap space and so far I haven't been able to trigger an OOM 
condition.  Is it possible that it is the dm-crypt layer failing to get 
working memory when swapping something in/out and causing the error?

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:36:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TSK-0001jW-E0; Wed, 15 Jan 2014 16:36:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3TSI-0001jN-9b
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:36:38 +0000
Received: from [193.109.254.147:59257] by server-10.bemta-14.messagelabs.com
	id A0/1B-20752-519B6D25; Wed, 15 Jan 2014 16:36:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389803796!9593392!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27535 invoked from network); 15 Jan 2014 16:36:36 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:36:36 -0000
Received: by mail-we0-f173.google.com with SMTP id t60so2042983wes.32
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 08:36:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ghl/lRLHAQiSsE+0SSwdDh4UCWoHM4SBWVYTD6c9ssk=;
	b=fryDLY8Q0o433WJNoEs/3yzx4MoiMdOJ8KRUuY4ko8LgVmmn0qX2uDDjMMK/l2FQ4o
	YyzY+ubsQlZhE31lO52xIiuTWzKfO/SAQtkXce2l6Di/i+CXvkEGPVaQjgNUp5BPZWD8
	18gSdHZIqSJws6YVYUOD0NqqDYanLmAABkEKl7b5/Qepit+4fnk9VLhZVUwThDslH/AN
	dcuhgeouJMVKzA9WBQl2IuLKoLxSiVuJkTQKRp+sDgutT/PskY/tlfLBM9Yg6sVUXnQu
	1gL1xaYEKdlTuW2CAKhMvAWzcoQhoQpdYUT3JzpR3inyBvg2PO9nCupu+3gtwEsDTj+x
	fNHg==
X-Gm-Message-State: ALoCoQn9zOqLUzueXDWJNoMBcvpI76XUhmf/IyQp+LrTl6nGbv6PMywPWH4Li3FKHoNYmmuMTi9P
X-Received: by 10.194.175.133 with SMTP id ca5mr3375401wjc.19.1389803794856;
	Wed, 15 Jan 2014 08:36:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id q15sm3709398wjw.18.2014.01.15.08.36.33
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 08:36:34 -0800 (PST)
Message-ID: <52D6B90E.8050908@linaro.org>
Date: Wed, 15 Jan 2014 16:36:30 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
	<52D58692.8000305@linaro.org>
	<1389797901.3793.71.camel@kazak.uk.xensource.com>
In-Reply-To: <1389797901.3793.71.camel@kazak.uk.xensource.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correctly write release target in
	smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:58 PM, Ian Campbell wrote:
> On Tue, 2014-01-14 at 18:48 +0000, Julien Grall wrote:
>> On 01/14/2014 04:55 PM, Ian Campbell wrote:
>>> flush_xen_data_tlb_range_va() is clearly bogus since it flushes the tlb, not
>>> the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
>>> was mapped with ioremap_nocache and hence isn't cached in the first place.
>>> Accesses should be via writeq though, so do that.
>>>
>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> Thanks. I think release wise this can wait for 4.5, the flush is
> unnecessary but not harmful.
> 
> While the use of writeq is needed for the additional barriers which it
> adds it's not strictly needed here because there is no prior write to
> sequence against (and ioremap_nocache has a barrier in it).
> 
> Unless removing this pointless flush makes some analysis of the use of
> the functions less confusing perhaps?

For code comprehension, it's better. I think this patch is like "arm:
flush TLB on all CPUs when setting and ...": it doesn't change Xen's
behaviour, but it helps readability.

> However, thinking about the writeq barriers led me to notice the lack
> of a recommended dsb() before the subsequent use of sev(), which is
> needed to ensure that the write has occurred before we wake the other
> processor. We get away with this because the iounmap in the middle does
> a write_pte which has a dsb in it. Phew! Still, for 4.5:
> 
> 8<-----
> 
> From aab391b98760cc8417e06512848cf83dd5d71b5c Mon Sep 17 00:00:00 2001
> From: Ian Campbell <ian.campbell@citrix.com>
> Date: Wed, 15 Jan 2014 14:49:04 +0000
> Subject: [PATCH] xen: arm: add barrier before sev in smp_spin_table_cpu_up
> 
> This ensures that the writeq to the release address has occurred.
> 
> In reality there is a dsb() in the iounmap() (in the e entual write_pte()) but

e entual? Did you mean the function write_pte()?

> make it explicit.
> 
> The ARMv8 ARM recommends that sev() is usually accompanied by a dsb(), the
> only other uses are in the v7 spinlock code which includes a dsb() already.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/arm64/smpboot.c |    1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/arm/arm64/smpboot.c b/xen/arch/arm/arm64/smpboot.c
> index 9146476..9c7bf29 100644
> --- a/xen/arch/arm/arm64/smpboot.c
> +++ b/xen/arch/arm/arm64/smpboot.c
> @@ -36,6 +36,7 @@ static int __init smp_spin_table_cpu_up(int cpu)
>  
>      iounmap(release);
>  
> +    dsb();
>      sev();
>  
>      return cpu_up_send_sgi(cpu);
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 16:45:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 16:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3TaY-00025D-Ek; Wed, 15 Jan 2014 16:45:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3TaS-000252-7E
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 16:45:08 +0000
Received: from [85.158.139.211:30044] by server-15.bemta-5.messagelabs.com id
	F5/0D-08490-F0BB6D25; Wed, 15 Jan 2014 16:45:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389804301!9972235!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12866 invoked from network); 15 Jan 2014 16:45:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 16:45:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,663,1384300800"; d="scan'208";a="91036689"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 16:45:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 11:45:00 -0500
Message-ID: <1389804299.3793.100.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 15 Jan 2014 16:44:59 +0000
In-Reply-To: <52D6B90E.8050908@linaro.org>
References: <1389718504-1514-1-git-send-email-ian.campbell@citrix.com>
	<52D58692.8000305@linaro.org>
	<1389797901.3793.71.camel@kazak.uk.xensource.com>
	<52D6B90E.8050908@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correctly write release target in
 smp_spin_table_cpu_up
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 16:36 +0000, Julien Grall wrote:
> On 01/15/2014 02:58 PM, Ian Campbell wrote:
> > On Tue, 2014-01-14 at 18:48 +0000, Julien Grall wrote:
> >> On 01/14/2014 04:55 PM, Ian Campbell wrote:
> >>> flush_xen_data_tlb_range_va() is clearly bogus since it flushes the tlb, not
> >>> the data cache. Perhaps what was meant was flush_xen_dcache(), but the address
> >>> was mapped with ioremap_nocache and hence isn't cached in the first place.
> >>> Accesses should be via writeq though, so do that.
> >>>
> >>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> >> Acked-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Thanks. I think release wise this can wait for 4.5, the flush is
> > unnecessary but not harmful.
> > 
> > While the use of writeq is needed for the additional barriers which it
> > adds it's not strictly needed here because there is no prior write to
> > sequence against (and ioremap_nocache has a barrier in it).
> > 
> > Unless removing this pointless flush makes some analysis of the use of
> > the functions less confusing perhaps?
> 
> For code comprehension, it's better. I think this patch is like "arm:
> flush TLB on all CPUs when setting and ...": it doesn't change Xen's
> behaviour, but it helps readability.

True, I'm not sure I'm comfortable hanging a freeze exception on that
though.

> 
> > However, thinking about the writeq barriers led me to notice the lack
> > of a recommended dsb() before the subsequent use of sev(), which is
> > needed to ensure that the write has occurred before we wake the other
> > processor. We get away with this because the iounmap in the middle does
> > a write_pte which has a dsb in it. Phew! Still, for 4.5:
> > 
> > 8<-----
> > 
> > From aab391b98760cc8417e06512848cf83dd5d71b5c Mon Sep 17 00:00:00 2001
> > From: Ian Campbell <ian.campbell@citrix.com>
> > Date: Wed, 15 Jan 2014 14:49:04 +0000
> > Subject: [PATCH] xen: arm: add barrier before sev in smp_spin_table_cpu_up
> > 
> > This ensures that the writeq to the release address has occurred.
> > 
> > In reality there is a dsb() in the iounmap() (in the e entual write_pte()) but
> 
> e entual? Did you mean the function write_pte()?

s/ /v/ => eventual. Oops.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:09:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Txk-0003lo-D5; Wed, 15 Jan 2014 17:09:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3Txi-0003lj-4c
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 17:09:06 +0000
Received: from [85.158.137.68:50029] by server-17.bemta-3.messagelabs.com id
	54/23-15965-1B0C6D25; Wed, 15 Jan 2014 17:09:05 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389805742!9308303!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21257 invoked from network); 15 Jan 2014 17:09:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 17:09:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,664,1384300800"; d="scan'208";a="91048702"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Jan 2014 17:08:35 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 15 Jan 2014 12:08:34 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Wed, 15 Jan 2014 18:08:26 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [PATCH net-next] xen-netfront: add support for
	IPv6 offloads
Thread-Index: AQHPEgUQuhiFQyjN8k2YNGkK6y3A05qF1zwAgAAYY+D///I+gIAAIprg
Date: Wed, 15 Jan 2014 17:08:25 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02081B7@AMSPEX01CL01.citrite.net>
References: <1389799111-8372-1-git-send-email-paul.durrant@citrix.com>
	<52D6A868.6040707@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208071@AMSPEX01CL01.citrite.net>
	<52D6BF630200007800113EFB@nat28.tlf.novell.com>
In-Reply-To: <52D6BF630200007800113EFB@nat28.tlf.novell.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for IPv6
 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 15 January 2014 16:04
> To: Andrew Cooper; Paul Durrant
> Cc: David Vrabel; xen-devel@lists.xen.org; Boris Ostrovsky;
> netdev@vger.kernel.org
> Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: add support for
> IPv6 offloads
> 
> >>> On 15.01.14 at 16:54, Paul Durrant <Paul.Durrant@citrix.com> wrote:
> >> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> >> On 15/01/14 15:18, Paul Durrant wrote:
> >> > +	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv6",
> "%d", 1);
> >>
> >> "%d", 1 results in a constant string.  xenbus_write() would avoid a
> >> transitory memory allocation.
> >
> > This code is consistent with all the other xenbus_printf()s in the
> > neighbourhood and does it really matter?
> 
> I think we should always strive to have the simplest possible code
> that fulfills the purpose. And hence we shouldn't be setting further
> bad precedents. (In fact I have a patch queued to replace all the
> unnecessary xenbus_printf()s with xenbus_write()s on
> linux-2.6.18-xen.hg, and may look into porting this to the
> respective upstream components.)
> 

Ok. Personally I'd go for code consistency with this patch and then a full replacement... but I'll re-work it.

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:11:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3U00-0003uq-0f; Wed, 15 Jan 2014 17:11:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W3Tzz-0003uk-0o
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 17:11:27 +0000
Received: from [85.158.143.35:54462] by server-2.bemta-4.messagelabs.com id
	9B/0A-11386-E31C6D25; Wed, 15 Jan 2014 17:11:26 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389805882!11966253!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 759 invoked from network); 15 Jan 2014 17:11:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 17:11:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,664,1384300800"; d="scan'208";a="93168209"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 17:11:21 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 12:11:20 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Wed, 15 Jan 2014 17:11:07 +0000
Message-ID: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The recent patch to fix receive side flow control (11b57f) solved the spinning
thread problem, but it caused another one. The receive side can stall if:
- [THREAD] xenvif_rx_action sets rx_queue_stopped to true
- [INTERRUPT] interrupt happens, and sets rx_event to true
- [THREAD] then xenvif_kthread sets rx_event to false
- [THREAD] rx_work_todo doesn't return true anymore

Also, if an interrupt is sent but there is still no room in the ring, it takes
quite a long time until xenvif_rx_action realizes it. This patch ditches those
two variables and reworks rx_work_todo. If the thread finds it can't fit more
skbs into the ring, it saves the last slot estimate into rx_last_skb_slots;
otherwise that stays 0. Then rx_work_todo will check whether:
- there is something to send to the ring (like before)
- there is space for the topmost packet in the queue

I think that's a more natural and optimal thing to test than two bools which
are set somewhere else.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    6 +-----
 drivers/net/xen-netback/interface.c |    1 -
 drivers/net/xen-netback/netback.c   |   16 ++++++----------
 3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4c76bcb..ae413a2 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -143,11 +143,7 @@ struct xenvif {
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
-	bool rx_queue_stopped;
-	/* Set when the RX interrupt is triggered by the frontend.
-	 * The worker thread may need to wake the queue.
-	 */
-	bool rx_event;
+	RING_IDX rx_last_skb_slots;
 
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index b9de31e..7669d49 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;
 
-	vif->rx_event = true;
 	xenvif_kick_thread(vif);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 2738563..bb241d0 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
 	bool need_to_notify = false;
-	bool ring_full = false;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 	skb_queue_head_init(&rxq);
 
 	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
-		int max_slots_needed;
+		RING_IDX max_slots_needed;
 		int i;
 
 		/* We need a cheap worse case estimate for the number of
@@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
 			skb_queue_head(&vif->rx_queue, skb);
 			need_to_notify = true;
-			ring_full = true;
+			vif->rx_last_skb_slots = max_slots_needed;
 			break;
-		}
+		} else
+			vif->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
 		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
@@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
-	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
-
 	if (!npo.copy_prod)
 		goto done;
 
@@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
-		vif->rx_event;
+	return !skb_queue_empty(&vif->rx_queue) &&
+	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
@@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
 		if (!skb_queue_empty(&vif->rx_queue))
 			xenvif_rx_action(vif);
 
-		vif->rx_event = false;
-
 		if (skb_queue_empty(&vif->rx_queue) &&
 		    netif_queue_stopped(vif->dev))
 			xenvif_start_queue(vif);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:12:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3U1H-00040X-Fy; Wed, 15 Jan 2014 17:12:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3U1G-00040N-4H
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 17:12:46 +0000
Received: from [85.158.143.35:62198] by server-2.bemta-4.messagelabs.com id
	2F/AB-11386-D81C6D25; Wed, 15 Jan 2014 17:12:45 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389805964!11811281!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25446 invoked from network); 15 Jan 2014 17:12:45 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 17:12:45 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389805964; l=832;
	s=domk; d=aepfle.de;
	h=Content-Type:MIME-Version:Subject:Cc:To:From:Date:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=j40g8zhvlcaAa3qw0s3HWtKMU48=;
	b=s2bVONFlFUaLIVIJ0Nm8CZe1cD1mGSf0mTZtAjAfDycvQf2Iaxff+3+r5jcpH5tBn/C
	OxI/t2KEQZuZ1ZThNcFf1/q4cvzrXo2WkbQXYZQChsGckHmwfig1vPW+A984TYUkMF+nb
	Nodju0JabgQDkznUhgkwtUTKMeUXQdrWBHw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVY9SQsClBrtp5eFYXU7YfNKMAfocpEeot7SjdQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10a1:4201:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.20 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id 2075c4q0FHCiKY7 ; 
	Wed, 15 Jan 2014 18:12:44 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 2E7D150267; Wed, 15 Jan 2014 18:12:44 +0100 (CET)
Date: Wed, 15 Jan 2014 18:12:44 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140115171244.GA2596@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


It seems qemu does not enumerate the configured disks correctly with a
config like this:

disk=[  
        'raw:/some.iso,hda:cdrom,r',
        'raw:/some.raw,xvda,w',
]

With a PV guest it works fine: the guest has both an hda and an xvda device.
But an HVM guest fails to start:
qemu-system-i386: -drive file=/some.raw,if=ide,index=0,media=disk,format=raw,cache=writeback: drive with bus=0, unit=0 (index=0) exists

I think that kind of config used to work with xend. Should the code which
enumerates the disks be smarter?

Olaf

-- 
name="enum"
memory=1024
disk=[  
        'file:/some.iso,hda:cdrom,r',
        'raw:/some.raw,xvda,r',
]
vif=[ 'mac=00:16:3e:14:85:06,bridge=br0', ]
vfb=['type=vnc,vncunused=1,keymap=de']
builder="linux"
extra="ignore_loglevel xencons=hvc0 console=hvc0"
kernel="/vmlinuz"
ramdisk="/initrd"

builder="hvm"

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:22:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3UAP-0004dY-LD; Wed, 15 Jan 2014 17:22:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3UAP-0004dT-7e
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 17:22:13 +0000
Received: from [193.109.254.147:57693] by server-15.bemta-14.messagelabs.com
	id CB/CF-22186-3C3C6D25; Wed, 15 Jan 2014 17:22:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389806529!8788673!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17651 invoked from network); 15 Jan 2014 17:22:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 17:22:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,664,1384300800"; d="scan'208";a="93173077"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 17:22:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 12:22:08 -0500
Message-ID: <1389806527.3793.106.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 15 Jan 2014 17:22:07 +0000
In-Reply-To: <20140115171244.GA2596@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 18:12 +0100, Olaf Hering wrote:
> It seems qemu does not enumerate the configured disks correctly with a
> config like this:
> 
> disk=[  
>         'raw:/some.iso,hda:cdrom,r',
>         'raw:/some.raw,xvda,w',
> ]
> 
> With a PV guest it works fine, the guest has a hda and xvda device. 
> But a HVM guest fails to start:
> qemu-system-i386: -drive file=/some.raw,if=ide,index=0,media=disk,format=raw,cache=writeback: drive with bus=0, unit=0 (index=0) exists
> 
> I think that kind of config used to work with xend.

Did it? I thought xvda and hda were effectively considered two faces of
the same device, so I'm not so sure. I'd be particularly surprised if
this worked by design rather than coincidence.

Based on docs/misc/vbd-interface.txt I wouldn't be all that surprised if
some guest kernels had a problem with this even if you could convince
the toolstack it was ok.
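[Editorial note: per the vbd-interface.txt numbering Ian cites, hda and xvda both map to disk index 0, which matches the "bus=0, unit=0 (index=0) exists" error above. A hedged sketch of a config variant that should avoid the collision — not taken from the thread, the xvdb name is illustrative:

```
# Hypothetical variant: keep the emulated CD on hda (index 0) and move the
# writable disk to xvdb so it lands on a distinct index for the HVM case.
disk = [
        'raw:/some.iso,hda:cdrom,r',
        'raw:/some.raw,xvdb,w',
]
```

Whether a given guest kernel then exposes the PV disk as xvdb is still subject to the caveats in vbd-interface.txt.]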

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:30:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3UIZ-0005BV-Np; Wed, 15 Jan 2014 17:30:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3UIY-0005BQ-QB
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 17:30:39 +0000
Received: from [85.158.143.35:52366] by server-2.bemta-4.messagelabs.com id
	E8/61-11386-EB5C6D25; Wed, 15 Jan 2014 17:30:38 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389807036!11970351!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12317 invoked from network); 15 Jan 2014 17:30:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 17:30:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,664,1384300800"; d="scan'208";a="93176544"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 17:30:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 12:30:35 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W3UIV-000553-B9;
	Wed, 15 Jan 2014 17:30:35 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xen.org>
Date: Wed, 15 Jan 2014 17:30:33 +0000
Message-ID: <1389807033-11105-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH net-next v2] xen-netfront: add support for IPv6
	offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for IPv6 checksum offload and GSO when those
features are available in the backend.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
---
v2:
- Use xenbus_write rather than xenbus_printf

 drivers/net/xen-netfront.c |   48 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 43 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index c41537b..d7bee8a 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -617,7 +617,9 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		tx->flags |= XEN_NETTXF_extra_info;
 
 		gso->u.gso.size = skb_shinfo(skb)->gso_size;
-		gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
+		gso->u.gso.type = (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ?
+			XEN_NETIF_GSO_TYPE_TCPV6 :
+			XEN_NETIF_GSO_TYPE_TCPV4;
 		gso->u.gso.pad = 0;
 		gso->u.gso.features = 0;
 
@@ -809,15 +811,18 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 		return -EINVAL;
 	}
 
-	/* Currently only TCPv4 S.O. is supported. */
-	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
+	if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4 &&
+	    gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV6) {
 		if (net_ratelimit())
 			pr_warn("Bad GSO type %d\n", gso->u.gso.type);
 		return -EINVAL;
 	}
 
 	skb_shinfo(skb)->gso_size = gso->u.gso.size;
-	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+	skb_shinfo(skb)->gso_type =
+		(gso->u.gso.type == XEN_NETIF_GSO_TYPE_TCPV4) ?
+		SKB_GSO_TCPV4 :
+		SKB_GSO_TCPV6;
 
 	/* Header must be checked, and gso_segs computed. */
 	skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
@@ -1191,6 +1196,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_SG;
 	}
 
+	if (features & NETIF_F_IPV6_CSUM) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-ipv6-csum-offload", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_IPV6_CSUM;
+	}
+
 	if (features & NETIF_F_TSO) {
 		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 				 "feature-gso-tcpv4", "%d", &val) < 0)
@@ -1200,6 +1214,15 @@ static netdev_features_t xennet_fix_features(struct net_device *dev,
 			features &= ~NETIF_F_TSO;
 	}
 
+	if (features & NETIF_F_TSO6) {
+		if (xenbus_scanf(XBT_NIL, np->xbdev->otherend,
+				 "feature-gso-tcpv6", "%d", &val) < 0)
+			val = 0;
+
+		if (!val)
+			features &= ~NETIF_F_TSO6;
+	}
+
 	return features;
 }
 
@@ -1338,7 +1361,9 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
-	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
+	netdev->hw_features	= NETIF_F_SG |
+				  NETIF_F_IPV6_CSUM |
+				  NETIF_F_TSO | NETIF_F_TSO6;
 
 	/*
          * Assume that all hw features are available for now. This set
@@ -1716,6 +1741,19 @@ again:
 		goto abort_transaction;
 	}
 
+	err = xenbus_write(xbt, dev->nodename, "feature-gso-tcpv6", "1");
+	if (err) {
+		message = "writing feature-gso-tcpv6";
+		goto abort_transaction;
+	}
+
+	err = xenbus_write(xbt, dev->nodename, "feature-ipv6-csum-offload",
+			   "1");
+	if (err) {
+		message = "writing feature-ipv6-csum-offload";
+		goto abort_transaction;
+	}
+
 	err = xenbus_transaction_end(xbt, 0);
 	if (err) {
 		if (err == -EAGAIN)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 17:45:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 17:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3UWh-0005kF-Cb; Wed, 15 Jan 2014 17:45:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3UWf-0005kA-SJ
	for xen-devel@lists.xen.org; Wed, 15 Jan 2014 17:45:14 +0000
Received: from [85.158.139.211:36891] by server-6.bemta-5.messagelabs.com id
	FD/1D-16310-929C6D25; Wed, 15 Jan 2014 17:45:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389807911!9968993!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30600 invoked from network); 15 Jan 2014 17:45:12 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 17:45:12 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389807911; l=1521;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=u6VXgYa69mV7TyVRFWIvJF0QwCc=;
	b=XbkPXFndlXcXtdUE5SoewUHr2+3wJ9jr9kDct9tfslTVeohEm0ZcFnP+5qPjNu1ESgi
	gI8RUq/B/f2BYOY9HyrxPb8z2bh9jHoR7vPg5P6mfCK0JKOsx8BDOVr+2xMFOoymaAQKd
	iC8pFIX4u7CBQH81b4XH55orJDaA4GIL5AY=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVY9SQsClBrtp5eFYXU7YfNKMAfocpEeot7SjdQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10a1:4201:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id Z004f3q0FHjBNNM ; 
	Wed, 15 Jan 2014 18:45:11 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 0C9D350267; Wed, 15 Jan 2014 18:45:10 +0100 (CET)
Date: Wed, 15 Jan 2014 18:45:10 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140115174510.GA5171@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389806527.3793.106.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, Ian Campbell wrote:

> On Wed, 2014-01-15 at 18:12 +0100, Olaf Hering wrote:
> > It seems qemu does not enumerate the configured disks correctly with a
> > config like this:
> > 
> > disk=[  
> >         'raw:/some.iso,hda:cdrom,r',
> >         'raw:/some.raw,xvda,w',
> > ]
> > 
> > With a PV guest it works fine, the guest has a hda and xvda device. 
> > But a HVM guest fails to start:
> > qemu-system-i386: -drive file=/some.raw,if=ide,index=0,media=disk,format=raw,cache=writeback: drive with bus=0, unit=0 (index=0) exists
> > 
> > I think that kind of config used to work with xend.
> 
> Did it? I thought xvda and hda were effectively considered two faces of
> the same device, so I'm not so sure. I'd be particularly surprised if
> this worked by design rather than coincidence.

Putting a 'device_model_version="qemu-xen-traditional"' into the config
fixes it for me.

So the question is how it is supposed to work. My understanding is that
for HVM some sort of IDE is (or was?) required to let it boot from a
block device. That's why I have hd[abcd] as the device names. In addition to
that, one could have as many disks named xvd[abc..], which are PV only.

After some testing it seems that today the guest will boot from xvda,
even with qemu-xen-traditional. So either that got fixed with libxl, or
xend from 4.2 got it all wrong.

So what should be done with such configs, if they really exist in the
wild? The obvious workaround is device_model_version="qemu-xen-traditional".
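[Editorial note: spelled out as an xl config fragment, the workaround Olaf describes would look like the following — a sketch combining his original disk list with the option he reports as fixing it:

```
# Workaround reported in this thread: fall back to the traditional device
# model so the hda/xvda pair is accepted as under xend.
device_model_version = "qemu-xen-traditional"
disk = [
        'raw:/some.iso,hda:cdrom,r',
        'raw:/some.raw,xvda,w',
]
```

This only sidesteps the qemu-xen index collision; it does not answer whether such hda/xvda mixes were ever supported by design.]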

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 20:09:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 20:09:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Wle-0003ps-8j; Wed, 15 Jan 2014 20:08:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W3Wlb-0003pn-L5
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 20:08:47 +0000
Received: from [85.158.137.68:46466] by server-17.bemta-3.messagelabs.com id
	1E/15-15965-ECAE6D25; Wed, 15 Jan 2014 20:08:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389816523!9383604!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5215 invoked from network); 15 Jan 2014 20:08:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Jan 2014 20:08:45 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0FK7dwh015819
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 15 Jan 2014 20:07:39 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0FK7c7S013823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 15 Jan 2014 20:07:38 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0FK7bEE008344; Wed, 15 Jan 2014 20:07:37 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 12:07:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7F72B1BFB4A; Wed, 15 Jan 2014 15:07:36 -0500 (EST)
Date: Wed, 15 Jan 2014 15:07:36 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140115200736.GB5201@phenom.dumpdata.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
	<52D3DC730200007800112FF6@nat28.tlf.novell.com>
	<1389613109.13654.43.camel@kazak.uk.xensource.com>
	<52D3E1500200007800113058@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D3E1500200007800113058@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, jun.nakajima@intel.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 13, 2014 at 11:51:28AM +0000, Jan Beulich wrote:
> >>> On 13.01.14 at 12:38, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-13 at 11:30 +0000, Jan Beulich wrote:
> >> In fact I can't see where this would be forced off: xc_cpuid_x86.c
> >> only does so in the PV case, and all hvm_pse1gb_supported() checks is
> >> that the CPU supports it and the domain uses HAP.
> > 
> > Took me a while to spot it too:
> > static void intel_xc_cpuid_policy(
> > [...]
> >             case 0x80000001: {
> >                 int is_64bit = hypervisor_is_64bit(xch) && is_pae;
> >         
> >                 /* Only a few features are advertised in Intel's 0x80000001. */
> >                 regs[2] &= (is_64bit ? bitmaskof(X86_FEATURE_LAHF_LM) : 0) |
> >                                        bitmaskof(X86_FEATURE_ABM);
> >                 regs[3] &= ((is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
> >                             (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
> >                             (is_64bit ? bitmaskof(X86_FEATURE_SYSCALL) : 0) |
> >                             (is_64bit ? bitmaskof(X86_FEATURE_RDTSCP) : 0));
> >                 break;
> >             }
> >         
> >         
> > Which masks out anything that is not explicitly mentioned. (PAGE1GB is in
> > regs[3], I think).
> 
> Ah, okay. The fun of whitelisting on HVM vs blacklisting on PV
> again.
> 
> > The AMD version is more permissive:
> > 
> >         regs[3] &= (0x0183f3ff | /* features shared with 0x00000001:EDX */
> >                     (is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
> >                     (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
> >                     bitmaskof(X86_FEATURE_SYSCALL) |
> >                     bitmaskof(X86_FEATURE_MP) |
> >                     bitmaskof(X86_FEATURE_MMXEXT) |
> >                     bitmaskof(X86_FEATURE_FFXSR) |
> >                     bitmaskof(X86_FEATURE_3DNOW) |
> >                     bitmaskof(X86_FEATURE_3DNOWEXT));
> > 
> > (but I didn't check if PAGE1GB is in that magic number...)
> 
> It's not - it's bit 26.

So.. it sounds to me like everybody is in agreement that this is the
right thing to do (enable it if the hypervisor has it enabled)?

And the next step is to actually come up with a patch to do some of this
plumbing - naturally for Xen 4.5?
> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 21:00:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 21:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3XZ2-000645-5b; Wed, 15 Jan 2014 20:59:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3XZ1-000640-5c
	for xen-devel@lists.xensource.com; Wed, 15 Jan 2014 20:59:51 +0000
Received: from [85.158.139.211:35197] by server-17.bemta-5.messagelabs.com id
	9F/6A-19152-6C6F6D25; Wed, 15 Jan 2014 20:59:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389819587!9993054!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30131 invoked from network); 15 Jan 2014 20:59:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Jan 2014 20:59:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93252652"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Jan 2014 20:59:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 15:59:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3XYv-00086Q-OF;
	Wed, 15 Jan 2014 20:59:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3XYv-00035A-KX;
	Wed, 15 Jan 2014 20:59:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24380-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 15 Jan 2014 20:59:45 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24380: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24380 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24380/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  3 host-install(3)        broken pass in 24375
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail pass in 24381-bisect
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24375 pass in 24380
 test-amd64-i386-xl-winxpsp3-vcpus1 8 guest-saverestore fail in 24375 pass in 24380

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24364

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24375 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24381 never pass

version targeted for testing:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
baseline version:
 xen                  4fad2dc72a8607f50c3783e1cbcb3fb25e3af932

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
+ branch=xen-unstable
+ revision=cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   4fad2dc..cfad12f  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   4fad2dc..cfad12f  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 15 21:36:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jan 2014 21:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3Y7z-0007VV-Eu; Wed, 15 Jan 2014 21:35:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Gortmaker@windriver.com>) id 1W3Y7y-0007VQ-9g
	for xen-devel@lists.xenproject.org; Wed, 15 Jan 2014 21:35:58 +0000
Received: from [193.109.254.147:52223] by server-12.bemta-14.messagelabs.com
	id 26/4B-13681-D3FF6D25; Wed, 15 Jan 2014 21:35:57 +0000
X-Env-Sender: Paul.Gortmaker@windriver.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389821754!11148235!1
X-Originating-IP: [147.11.146.13]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26993 invoked from network); 15 Jan 2014 21:35:56 -0000
Received: from mail1.windriver.com (HELO mail1.windriver.com) (147.11.146.13)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 15 Jan 2014 21:35:56 -0000
Received: from ALA-HCA.corp.ad.wrs.com (ala-hca.corp.ad.wrs.com
	[147.11.189.40])
	by mail1.windriver.com (8.14.5/8.14.5) with ESMTP id s0FLZkQZ015994
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
	Wed, 15 Jan 2014 13:35:46 -0800 (PST)
Received: from yow-asskicker.wrs.com (128.224.146.66) by
	ALA-HCA.corp.ad.wrs.com (147.11.189.40) with Microsoft SMTP Server id
	14.2.347.0; Wed, 15 Jan 2014 13:35:47 -0800
From: Paul Gortmaker <paul.gortmaker@windriver.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date: Wed, 15 Jan 2014 16:35:43 -0500
Message-ID: <1389821743-25081-1-git-send-email-paul.gortmaker@windriver.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
Cc: Richard Weinberger <richard@nod.at>, linux-kernel@vger.kernel.org,
	Paul Gortmaker <paul.gortmaker@windriver.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, linuxppc-dev@lists.ozlabs.org
Subject: [Xen-devel] [PATCH v2] drivers/tty/hvc: don't use module_init in
	non-modular hyp. console code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The HVC_OPAL/RTAS/UDBG/XEN options are all bool, and hence their support
is either present or absent.  It will never be modular, so using
module_init as an alias for __initcall is rather misleading.

Fix this up now, so that we can relocate module_init from
init.h into module.h in the future.  If we don't do this, we'd
have to add module.h to obviously non-modular code, and that
would be a worse thing.

Note that direct use of __initcall is discouraged, vs. one
of the priority categorized subgroups.  As __initcall gets
mapped onto device_initcall, our use of device_initcall
directly in this change means that the runtime impact is
zero -- it will remain at level 6 in initcall ordering.

Also the __exitcall functions have been outright deleted since
they are only ever of interest to UML, and UML will never be
using any of this code.

Cc: Richard Weinberger <richard@nod.at>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---

[v2: unchanged; just added xen guys to Cc list, as hvc_xen isn't
 hooked into the MAINTAINERS file yet, so I forgot them.]

diff --git a/drivers/tty/hvc/hvc_opal.c b/drivers/tty/hvc/hvc_opal.c
index 6496872e2e47..b01659bd4f7c 100644
--- a/drivers/tty/hvc/hvc_opal.c
+++ b/drivers/tty/hvc/hvc_opal.c
@@ -255,13 +255,7 @@ static int __init hvc_opal_init(void)
 	/* Register as a vio device to receive callbacks */
 	return platform_driver_register(&hvc_opal_driver);
 }
-module_init(hvc_opal_init);
-
-static void __exit hvc_opal_exit(void)
-{
-	platform_driver_unregister(&hvc_opal_driver);
-}
-module_exit(hvc_opal_exit);
+device_initcall(hvc_opal_init);
 
 static void udbg_opal_putc(char c)
 {
diff --git a/drivers/tty/hvc/hvc_rtas.c b/drivers/tty/hvc/hvc_rtas.c
index 0069bb86ba49..08c87920b74a 100644
--- a/drivers/tty/hvc/hvc_rtas.c
+++ b/drivers/tty/hvc/hvc_rtas.c
@@ -102,17 +102,7 @@ static int __init hvc_rtas_init(void)
 
 	return 0;
 }
-module_init(hvc_rtas_init);
-
-/* This will tear down the tty portion of the driver */
-static void __exit hvc_rtas_exit(void)
-{
-	/* Really the fun isn't over until the worker thread breaks down and
-	 * the tty cleans up */
-	if (hvc_rtas_dev)
-		hvc_remove(hvc_rtas_dev);
-}
-module_exit(hvc_rtas_exit);
+device_initcall(hvc_rtas_init);
 
 /* This will happen prior to module init.  There is no tty at this time? */
 static int __init hvc_rtas_console_init(void)
diff --git a/drivers/tty/hvc/hvc_udbg.c b/drivers/tty/hvc/hvc_udbg.c
index 72228276fe31..9cf573d06a29 100644
--- a/drivers/tty/hvc/hvc_udbg.c
+++ b/drivers/tty/hvc/hvc_udbg.c
@@ -80,14 +80,7 @@ static int __init hvc_udbg_init(void)
 
 	return 0;
 }
-module_init(hvc_udbg_init);
-
-static void __exit hvc_udbg_exit(void)
-{
-	if (hvc_udbg_dev)
-		hvc_remove(hvc_udbg_dev);
-}
-module_exit(hvc_udbg_exit);
+device_initcall(hvc_udbg_init);
 
 static int __init hvc_udbg_console_init(void)
 {
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 636c9baad7a5..2dc2831840ca 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -561,18 +561,7 @@ static int __init xen_hvc_init(void)
 #endif
 	return r;
 }
-
-static void __exit xen_hvc_fini(void)
-{
-	struct xencons_info *entry, *next;
-
-	if (list_empty(&xenconsoles))
-			return;
-
-	list_for_each_entry_safe(entry, next, &xenconsoles, list) {
-		xen_console_remove(entry);
-	}
-}
+device_initcall(xen_hvc_init);
 
 static int xen_cons_init(void)
 {
@@ -598,10 +587,6 @@ static int xen_cons_init(void)
 	hvc_instantiate(HVC_COOKIE, 0, ops);
 	return 0;
 }
-
-
-module_init(xen_hvc_init);
-module_exit(xen_hvc_fini);
 console_initcall(xen_cons_init);
 
 #ifdef CONFIG_EARLY_PRINTK
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:01:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aO7-0005oH-C2; Thu, 16 Jan 2014 00:00:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aO5-0005oC-C2
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:00:45 +0000
Received: from [85.158.139.211:38182] by server-12.bemta-5.messagelabs.com id
	AB/70-30017-C2127D25; Thu, 16 Jan 2014 00:00:44 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389830442!8801526!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7104 invoked from network); 16 Jan 2014 00:00:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:00:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93307376"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:00:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:00:40 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aO0-000289-PH;
	Thu, 16 Jan 2014 00:00:40 +0000
Date: Thu, 16 Jan 2014 00:00:40 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000040.GA5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is a stray blank line change in xenvif_tx_create_gop. (I removed
that part too early and didn't bother to paste it back...)

On Tue, Jan 14, 2014 at 08:39:47PM +0000, Zoltan Kiss wrote:
[...]
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +			   "Trying to unmap invalid handle! pending_idx: %x\n",
> +			   pending_idx);
> +		return;
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			    idx_to_kaddr(vif, pending_idx),
> +			    GNTMAP_host_map,
> +			    vif->grant_tx_handle[pending_idx]);
> +	ret = gnttab_unmap_refs(&tx_unmap_op,
> +				&vif->mmap_pages[pending_idx],
> +				1);
> +
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +					&tx_unmap_op,
> +					1);

As you said in your other email, this should be removed. :-)

> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1738,6 +1879,14 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline bool tx_dealloc_work_todo(struct xenvif *vif)
> +{
> +	if (vif->dealloc_cons != vif->dealloc_prod)
> +		return true;
> +
> +	return false;

This can be simplified as
  return vif->dealloc_cons != vif->dealloc_prod;

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:01:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aOX-0005p9-3v; Thu, 16 Jan 2014 00:01:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aOV-0005oz-Hu
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:01:11 +0000
Received: from [85.158.139.211:37015] by server-4.bemta-5.messagelabs.com id
	E5/1C-26791-64127D25; Thu, 16 Jan 2014 00:01:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389830468!8801587!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8721 invoked from network); 16 Jan 2014 00:01:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:01:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93307484"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:01:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:01:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aOR-00028q-85;
	Thu, 16 Jan 2014 00:01:07 +0000
Date: Thu, 16 Jan 2014 00:01:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000107.GB5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 08:39:48PM +0000, Zoltan Kiss wrote:
> This patch changes the grant copy on the TX path to grant mapping
> 
> v2:
> - delete branch for handling fragmented packets fit PKT_PROT_LEN sized first
>   request
> - mark the effect of using ballooned pages in a comment
> - place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
>   before netif_receive_skb, and mark the importance of it
> - grab dealloc_lock before __napi_complete to avoid contention with the
>   callback's napi_schedule
> - handle fragmented packets where first request < PKT_PROT_LEN
> - fix up error path when checksum_setup failed
> - check before teardown for pending grants, and start complaining if they are
>   still there after 10 seconds
> 
> v3:
> - delete a surplus checking from tx_action
> - remove stray line
> - squash xenvif_idx_unmap changes into the first patch
> - init spinlocks
> - call map hypercall directly instead of gnttab_map_refs()
> - fix unmapping timeout in xenvif_free()
> 
> v4:
> - fix indentations and comments
> - handle errors of set_phys_to_machine

There's no call to set_phys_to_machine in this patch. Did I miss
something?

> - go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
>   modified API
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   60 +++++++-
>  drivers/net/xen-netback/netback.c   |  256 ++++++++++++++---------------------
>  2 files changed, 159 insertions(+), 157 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index a7855b3..1e0bf71 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -123,7 +123,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	BUG_ON(skb->dev != dev);
>  
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (vif->task == NULL ||
> +	    vif->dealloc_task == NULL ||
> +	    !xenvif_schedulable(vif))
>  		goto drop;
>  
>  	/* At best we'll need one slot for the header and one for each
> @@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,

At the beginning of the function there are BUG_ON checks for vif->task. I
would suggest you do the same for vif->dealloc_task, just to be
consistent.

>  	vif->pending_prod = MAX_PENDING_REQS;
>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>  		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> +	spin_lock_init(&vif->dealloc_lock);
> +	spin_lock_init(&vif->response_lock);
> +	/* If ballooning is disabled, this will consume real memory, so you
> +	 * better enable it. The long term solution would be to use just a
> +	 * bunch of valid page descriptors, without dependency on ballooning
> +	 */
> +	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +				       vif->mmap_pages,
> +				       false);
> +	if (err) {
> +		netdev_err(dev, "Could not reserve mmap_pages\n");
> +		return NULL;
> +	}
> +	for (i = 0; i < MAX_PENDING_REQS; i++) {
> +		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
> +			{ .callback = xenvif_zerocopy_callback,
> +			  .ctx = NULL,
> +			  .desc = i };
> +		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
> +	}
>  
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
> @@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err;
>  
>  	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&vif->dealloc_wq);
>  
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
> @@ -431,6 +452,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err_rx_unbind;
>  	}
>  
> +	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
> +					   (void *)vif,
> +					   "%s-dealloc",
> +					   vif->dev->name);
> +	if (IS_ERR(vif->dealloc_task)) {
> +		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		err = PTR_ERR(vif->dealloc_task);
> +		goto err_rx_unbind;
> +	}
> +
>  	vif->task = task;

Please move this line before the above hunk. Don't separate it from
corresponding kthread_create.

Last but not least, though I've looked at this patch for several rounds
and the basic logic looks correct to me, I would like it to go
through XenRT tests if possible -- eye inspection is error-prone for such
a complicated change. (If I'm not mistaken you once told me you've done
regression tests already. That would be neat!)

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:01:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aOX-0005p9-3v; Thu, 16 Jan 2014 00:01:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aOV-0005oz-Hu
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:01:11 +0000
Received: from [85.158.139.211:37015] by server-4.bemta-5.messagelabs.com id
	E5/1C-26791-64127D25; Thu, 16 Jan 2014 00:01:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389830468!8801587!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8721 invoked from network); 16 Jan 2014 00:01:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:01:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93307484"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:01:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:01:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aOR-00028q-85;
	Thu, 16 Jan 2014 00:01:07 +0000
Date: Thu, 16 Jan 2014 00:01:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000107.GB5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 08:39:48PM +0000, Zoltan Kiss wrote:
> This patch changes the grant copy on the TX path to grant mapping
> 
> v2:
> - delete branch for handling fragmented packets fit PKT_PROT_LEN sized first
>   request
> - mark the effect of using ballooned pages in a comment
> - place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
>   before netif_receive_skb, and mark the importance of it
> - grab dealloc_lock before __napi_complete to avoid contention with the
>   callback's napi_schedule
> - handle fragmented packets where first request < PKT_PROT_LEN
> - fix up error path when checksum_setup failed
> - check before teardown for pending grants, and start complaining if they are
>   still there after 10 seconds
> 
> v3:
> - delete a surplus checking from tx_action
> - remove stray line
> - squash xenvif_idx_unmap changes into the first patch
> - init spinlocks
> - call map hypercall directly instead of gnttab_map_refs()
> - fix unmapping timeout in xenvif_free()
> 
> v4:
> - fix indentations and comments
> - handle errors of set_phys_to_machine

There's no call to set_phys_to_machine in this patch. Did I miss
something?

> - go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
>   modified API
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   60 +++++++-
>  drivers/net/xen-netback/netback.c   |  256 ++++++++++++++---------------------
>  2 files changed, 159 insertions(+), 157 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index a7855b3..1e0bf71 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -123,7 +123,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	BUG_ON(skb->dev != dev);
>  
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (vif->task == NULL ||
> +	    vif->dealloc_task == NULL ||
> +	    !xenvif_schedulable(vif))
>  		goto drop;
>  
>  	/* At best we'll need one slot for the header and one for each
> @@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,

At the beginning of the function there are BUG_ON checks for vif->task. I
would suggest you do the same for vif->dealloc_task, just to be
consistent.

>  	vif->pending_prod = MAX_PENDING_REQS;
>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>  		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> +	spin_lock_init(&vif->dealloc_lock);
> +	spin_lock_init(&vif->response_lock);
> +	/* If ballooning is disabled, this will consume real memory, so you
> +	 * better enable it. The long term solution would be to use just a
> +	 * bunch of valid page descriptors, without dependency on ballooning
> +	 */
> +	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +				       vif->mmap_pages,
> +				       false);
> +	if (err) {
> +		netdev_err(dev, "Could not reserve mmap_pages\n");
> +		return NULL;
> +	}
> +	for (i = 0; i < MAX_PENDING_REQS; i++) {
> +		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
> +			{ .callback = xenvif_zerocopy_callback,
> +			  .ctx = NULL,
> +			  .desc = i };
> +		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
> +	}
>  
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
> @@ -390,6 +410,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err;
>  
>  	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&vif->dealloc_wq);
>  
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
> @@ -431,6 +452,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  		goto err_rx_unbind;
>  	}
>  
> +	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
> +					   (void *)vif,
> +					   "%s-dealloc",
> +					   vif->dev->name);
> +	if (IS_ERR(vif->dealloc_task)) {
> +		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		err = PTR_ERR(vif->dealloc_task);
> +		goto err_rx_unbind;
> +	}
> +
>  	vif->task = task;

Please move this line before the above hunk. Don't separate it from
corresponding kthread_create.

Last but not least, though I've looked at this patch for several rounds
and the basic logic looks correct to me, I would like it to go through
XenRT tests if possible -- eye inspection is error-prone for such a
complicated change. (If I'm not mistaken you once told me you've done
regression tests already. That would be neat!)

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:01:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aO7-0005oH-C2; Thu, 16 Jan 2014 00:00:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aO5-0005oC-C2
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:00:45 +0000
Received: from [85.158.139.211:38182] by server-12.bemta-5.messagelabs.com id
	AB/70-30017-C2127D25; Thu, 16 Jan 2014 00:00:44 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389830442!8801526!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7104 invoked from network); 16 Jan 2014 00:00:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:00:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93307376"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:00:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:00:40 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aO0-000289-PH;
	Thu, 16 Jan 2014 00:00:40 +0000
Date: Thu, 16 Jan 2014 00:00:40 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000040.GA5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is a stray blank line change in xenvif_tx_create_gop. (I removed
that part too early and didn't bother to paste it back...)

On Tue, Jan 14, 2014 at 08:39:47PM +0000, Zoltan Kiss wrote:
[...]
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +			   "Trying to unmap invalid handle! pending_idx: %x\n",
> +			   pending_idx);
> +		return;
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			    idx_to_kaddr(vif, pending_idx),
> +			    GNTMAP_host_map,
> +			    vif->grant_tx_handle[pending_idx]);
> +	ret = gnttab_unmap_refs(&tx_unmap_op,
> +				&vif->mmap_pages[pending_idx],
> +				1);
> +
> +	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +					&tx_unmap_op,
> +					1);

As you said in your other email, this should be removed. :-)

> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1738,6 +1879,14 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline bool tx_dealloc_work_todo(struct xenvif *vif)
> +{
> +	if (vif->dealloc_cons != vif->dealloc_prod)
> +		return true;
> +
> +	return false;

This can be simplified as
  return vif->dealloc_cons != vif->dealloc_prod;

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:02:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aQ4-000694-PD; Thu, 16 Jan 2014 00:02:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aQ3-00068x-3v
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:02:47 +0000
Received: from [85.158.143.35:26988] by server-1.bemta-4.messagelabs.com id
	54/C4-02132-6A127D25; Thu, 16 Jan 2014 00:02:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389830564!177727!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13975 invoked from network); 16 Jan 2014 00:02:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:02:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93308217"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:02:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:02:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aPz-0002AL-HY;
	Thu, 16 Jan 2014 00:02:43 +0000
Date: Thu, 16 Jan 2014 00:02:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000243.GC5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-4-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-4-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 3/9] xen-netback: Remove old TX
 grant copy definitons and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 08:39:49PM +0000, Zoltan Kiss wrote:
> These became obsolate with grant mapping. I've left intentionally the
               ^ obsolete
> indentations in this way, to improve readability of previous patches.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:03:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:03:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aQZ-0006ER-Nc; Thu, 16 Jan 2014 00:03:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aQY-0006EC-QN
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:03:19 +0000
Received: from [85.158.137.68:25398] by server-1.bemta-3.messagelabs.com id
	FC/6B-29598-6C127D25; Thu, 16 Jan 2014 00:03:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389830596!8232795!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14161 invoked from network); 16 Jan 2014 00:03:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:03:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93308318"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:03:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:03:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aQV-0002AW-1J;
	Thu, 16 Jan 2014 00:03:15 +0000
Date: Thu, 16 Jan 2014 00:03:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000314.GD5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-7-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-7-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 08:39:52PM +0000, Zoltan Kiss wrote:
[...]
>  	/* Skip first skb fragment if it is on same page as header fragment. */
> @@ -832,6 +851,29 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
>  
>  	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
>  
> +	if (frag_overflow) {
> +		struct sk_buff *nskb = xenvif_alloc_skb(0);
> +		if (unlikely(nskb == NULL)) {
> +			netdev_err(vif->dev,
> +				   "Can't allocate the frag_list skb.\n");

This, and other occurrences of netdev_* logs, need to be rate limited.
Otherwise you risk flooding the kernel log when the system is under memory
pressure.

> +			return NULL;
> +		}
> +
> +		shinfo = skb_shinfo(nskb);
> +		frags = shinfo->frags;
> +
> +		for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
> +		     shinfo->nr_frags++, txp++, gop++) {
> +			index = pending_index(vif->pending_cons++);
> +			pending_idx = vif->pending_ring[index];
> +			xenvif_tx_create_gop(vif, pending_idx, txp, gop);
> +			frag_set_pending_idx(&frags[shinfo->nr_frags],
> +					     pending_idx);
> +		}
> +
> +		skb_shinfo(skb)->frag_list = nskb;
> +	}
> +
>  	return gop;
>  }
>  
[...]
> @@ -1537,6 +1613,32 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  				  pending_idx :
>  				  INVALID_PENDING_IDX);
>  
> +		if (skb_shinfo(skb)->frag_list) {
> +			nskb = skb_shinfo(skb)->frag_list;
> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
> +			skb->len += nskb->len;
> +			skb->data_len += nskb->len;
> +			skb->truesize += nskb->truesize;
> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> +			vif->tx_zerocopy_sent += 2;
> +			nskb = skb;
> +
> +			skb = skb_copy_expand(skb,
> +					      0,
> +					      0,
> +					      GFP_ATOMIC | __GFP_NOWARN);
> +			if (!skb) {
> +				netdev_dbg(vif->dev,
> +					   "Can't consolidate skb with too many fragments\n");

Rate limit.

> +				if (skb_shinfo(nskb)->destructor_arg)
> +					skb_shinfo(nskb)->tx_flags |=
> +						SKBTX_DEV_ZEROCOPY;

Why is this needed? nskb is the saved pointer to the original skb, which
already has SKBTX_DEV_ZEROCOPY set in tx_flags. Did I miss something?

Wei.

> +				kfree_skb(nskb);
> +				continue;
> +			}
> +			skb_shinfo(skb)->destructor_arg = NULL;
> +		}
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
> @@ -1590,6 +1692,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		}
>  
>  		netif_receive_skb(skb);
> +
> +		if (nskb)
> +			kfree_skb(nskb);
>  	}
>  
>  	return work_done;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:03:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aQt-0006J3-Bb; Thu, 16 Jan 2014 00:03:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aQr-0006Ic-N3
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:03:37 +0000
Received: from [85.158.143.35:35648] by server-1.bemta-4.messagelabs.com id
	33/25-02132-9D127D25; Thu, 16 Jan 2014 00:03:37 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389830615!4875872!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15086 invoked from network); 16 Jan 2014 00:03:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:03:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="91199870"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 00:03:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:03:34 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aQo-0002AZ-A2;
	Thu, 16 Jan 2014 00:03:34 +0000
Date: Thu, 16 Jan 2014 00:03:34 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140116000334.GE5331@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 08:39:54PM +0000, Zoltan Kiss wrote:
[...]
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 109c29f..d1cd8ce 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -129,6 +129,9 @@ struct xenvif {
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;

Hmm... You seem to have mixed your other patch into this series. :-)

> +	bool rx_queue_purge;
> +
> +	struct timer_list wake_queue;
>  
>  	/* This array is allocated seperately as it is large */
>  	struct gnttab_copy *grant_copy_op;
> @@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
>  
>  extern bool separate_tx_rx_irq;
>  
[...]
> @@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
>  		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>  			unmap_timeout++;
>  			schedule_timeout(msecs_to_jiffies(1000));
> -			if (unmap_timeout > 9 &&
> +			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&

This line is really too long. And what's the rationale behind this long
expression? It deserves a named helper or at least a comment.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:13:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aad-00076S-NE; Thu, 16 Jan 2014 00:13:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aac-00076N-MH
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:13:42 +0000
Received: from [85.158.139.211:3361] by server-16.bemta-5.messagelabs.com id
	B5/02-11843-53427D25; Thu, 16 Jan 2014 00:13:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389831219!9988433!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8714 invoked from network); 16 Jan 2014 00:13:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:13:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="91201844"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 00:13:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:13:38 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aaY-0002KG-Jl;
	Thu, 16 Jan 2014 00:13:38 +0000
Date: Thu, 16 Jan 2014 00:13:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116001338.GF5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Cool! Finally!

On Wed, Jan 15, 2014 at 04:23:20PM +0000, Andrew J. Bennieston wrote:
> This patch series implements multiple transmit and receive queues (i.e.
> multiple shared rings) for the xen virtual network interfaces.
> 
> The series is split up as follows:
>  - Patches 1 and 3 factor out the queue-specific data for netback and
>     netfront respectively, and modify the rest of the code to use these
>     as appropriate.
>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>    multiple shared rings and event channels, and code to connect these
>    as appropriate.
> 
> All other transmit and receive processing remains unchanged, i.e. there
> is a kthread per queue and a NAPI context per queue.
> 
> The performance of these patches has been analysed in detail, with
> results available at:
> 
> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
> 
> To summarise:
>   * Using multiple queues allows a VM to transmit at line rate on a 10
>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>     with a single queue.
>   * For intra-host VM--VM traffic, eight queues provide 171% of the
>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>   * There is a corresponding increase in total CPU usage, i.e. this is a
>     scaling out over available resources, not an efficiency improvement.
>   * Results depend on the availability of sufficient CPUs, as well as the
>     distribution of interrupts and the distribution of TCP streams across
>     the queues.
> 
> One open issue is how to deal with the tx_credit data for rate limiting.
> This used to exist on a per-VIF basis, and these patches move it to
> per-queue to avoid contention on concurrent access to the tx_credit
> data from multiple threads. This has the side effect of breaking the
> tx_credit accounting across the VIF as a whole. I cannot see a situation
> in which people would want to use both rate limiting and a
> high-performance multi-queue mode, but if this is problematic then it
> can be brought back to the VIF level, with appropriate protection.
> Obviously, it continues to work identically in the case where there is
> only one queue.
> 

I would go for a per-queue limit at this stage, as it simplifies things and
keeps the focus on core functionality.

Wei.

> Queue selection is currently achieved via an L4 hash on the packet (i.e.
> TCP src/dst port, IP src/dst address) and is not negotiated between the
> frontend and backend, since only one option exists. Future patches to
> support other frontends (particularly Windows) will need to add some
> capability to negotiate not only the hash algorithm selection, but also
> allow the frontend to specify some parameters to this.
> 
> Queue-specific XenStore entries for ring references and event channels
> are stored hierarchically, i.e. under .../queue-N/... where N varies
> from 0 to one less than the requested number of queues (inclusive). If
> only one queue is requested, it falls back to the flat structure where
> the ring references and event channels are written at the same level as
> other vif information.
> 
> --
> Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:17:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ae1-0007Dm-Dd; Thu, 16 Jan 2014 00:17:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3ae0-0007Df-4s
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:17:12 +0000
Received: from [193.109.254.147:20631] by server-10.bemta-14.messagelabs.com
	id 18/94-20752-70527D25; Thu, 16 Jan 2014 00:17:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389831429!11081711!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2795 invoked from network); 16 Jan 2014 00:17:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:17:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93310722"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:17:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:17:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3adv-0002Ni-2s;
	Thu, 16 Jan 2014 00:17:07 +0000
Date: Thu, 16 Jan 2014 00:17:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116001706.GG5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:21PM +0000, Andrew J. Bennieston wrote:
[...]
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int number; /* Queue number, 0-based */

Use "id" instead?

> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> +	struct xenvif *vif; /* Parent VIF */
>  
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,7 +142,7 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  
> @@ -150,14 +152,27 @@ struct xenvif {
>  	 */
>  	RING_IDX rx_req_cons_peek;
>  
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];

Any reason to switch back to an array inside the structure? This array is
really large.

>  
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>  
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +};
> +
[...]
>  
> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)

Suggest adding a xenvif_ prefix.

> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	}
> +	else {

Coding style.

> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
> +		hash = skb_get_rxhash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
> +	}

Actually, why do you special-case num_queues == 1? If it is an
optimization for old frontends then please add a comment.

> +
> +	return queue_index;
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	u16 queue_index = 0;
> +	struct xenvif_queue *queue = NULL;
>  
>  	BUG_ON(skb->dev != dev);
>  
> -	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL)
> +	/* Drop the packet if the queues are not set up */
> +	if (vif->num_queues < 1 || vif->queues == NULL)

You don't need both, do you? They should be strictly synchronized.

> +		goto drop;
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	queue_index = skb_get_queue_mapping(skb);
> +	queue = &vif->queues[queue_index];
> +
> +	/* Drop the packet if queue is not ready */
> +	if (queue->task == NULL)
>  		goto drop;
>  
[...]
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
> @@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;

Better insert empty line here.

> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
>  
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;

Ditto.

> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
>  
[...]
> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
>  
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
>  
> -

Blank line change, not necessary.

Wei.

>  /* ** Driver Registration ** */
>  
>  
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:17:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ae1-0007Dm-Dd; Thu, 16 Jan 2014 00:17:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3ae0-0007Df-4s
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:17:12 +0000
Received: from [193.109.254.147:20631] by server-10.bemta-14.messagelabs.com
	id 18/94-20752-70527D25; Thu, 16 Jan 2014 00:17:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389831429!11081711!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2795 invoked from network); 16 Jan 2014 00:17:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:17:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93310722"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:17:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:17:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3adv-0002Ni-2s;
	Thu, 16 Jan 2014 00:17:07 +0000
Date: Thu, 16 Jan 2014 00:17:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116001706.GG5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:21PM +0000, Andrew J. Bennieston wrote:
[...]
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int number; /* Queue number, 0-based */

Use "id" instead?

> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> +	struct xenvif *vif; /* Parent VIF */
>  
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,7 +142,7 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  
> @@ -150,14 +152,27 @@ struct xenvif {
>  	 */
>  	RING_IDX rx_req_cons_peek;
>  
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];

Any reason to switch back to an array inside the structure? This array is
really large.

>  
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>  
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +};
> +
[...]
>  
> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)

Suggest adding a xenvif_ prefix.

> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	}
> +	else {

Coding style.

> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
> +		hash = skb_get_rxhash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
> +	}

Actually, why do you special-case num_queues == 1? If it is an
optimization for old frontends then please add a comment.

> +
> +	return queue_index;
> +}
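The multiply-shift in the quoted select_queue() maps a 32-bit hash onto
[0, num_queues) without a modulo: the 64-bit product scales the hash by
num_queues / 2^32. A standalone sketch of the same trick (the function
name is illustrative, not the driver's):

```c
#include <stdint.h>

/* Scale a 32-bit hash into [0, num_queues): the 64-bit product
 * hash * num_queues lies in [0, num_queues * 2^32), so its top 32
 * bits form an evenly distributed queue index. */
static uint16_t hash_to_queue(uint32_t hash, unsigned int num_queues)
{
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```

Unlike `hash % num_queues`, this avoids a division and has no bias for
queue counts that do not evenly divide 2^32.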
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	u16 queue_index = 0;
> +	struct xenvif_queue *queue = NULL;
>  
>  	BUG_ON(skb->dev != dev);
>  
> -	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL)
> +	/* Drop the packet if the queues are not set up */
> +	if (vif->num_queues < 1 || vif->queues == NULL)

You don't need both, do you? They should be strictly synchronized.

> +		goto drop;
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	queue_index = skb_get_queue_mapping(skb);
> +	queue = &vif->queues[queue_index];
> +
> +	/* Drop the packet if queue is not ready */
> +	if (queue->task == NULL)
>  		goto drop;
>  
[...]
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
> @@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;

Better to insert an empty line here.

> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
>  
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;

Ditto.

> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
>  
[...]
> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
>  
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
>  
> -

Blank line change, not necessary.

Wei.

>  /* ** Driver Registration ** */
>  
>  
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:18:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3aez-0007J7-Tg; Thu, 16 Jan 2014 00:18:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3aey-0007Ix-4B
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:18:12 +0000
Received: from [85.158.137.68:39418] by server-7.bemta-3.messagelabs.com id
	1D/C9-27599-34527D25; Thu, 16 Jan 2014 00:18:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389831488!9389183!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25273 invoked from network); 16 Jan 2014 00:18:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:18:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="91203223"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 00:18:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:18:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3aet-0002OV-Kt;
	Thu, 16 Jan 2014 00:18:07 +0000
Date: Thu, 16 Jan 2014 00:18:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116001807.GH5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:22PM +0000, Andrew J. Bennieston wrote:
[...]
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	char name[IFNAMSIZ] = {};
>  
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> +	/*
> +	 * Allocate a netdev with the max. supported number of queues.
> +	 * When the guest selects the desired number, it will be updated
> +	 * via netif_set_real_num_tx_queues().
> +	 */
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> +			xenvif_max_queues);

Indentation.


>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 586e741..5d717d7 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -55,6 +55,9 @@
>  bool separate_tx_rx_irq = 1;
>  module_param(separate_tx_rx_irq, bool, 0644);
>  
> +unsigned int xenvif_max_queues = 4;
> +module_param(xenvif_max_queues, uint, 0644);
> +

This looks a bit arbitrary. I guess it is better to set the default
value to the number of CPUs in Dom0?

>  /*
>   * This is the maximum slots a skb can have. If a guest sends a skb
>   * which exceeds this limit it is considered malicious.
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index c3332e2..ce7ca9a 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -21,6 +21,7 @@
>  
>  #include "common.h"
>  #include <linux/vmalloc.h>
> +#include <linux/rtnetlink.h>
>  
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
>  	if (err)
>  		pr_debug("Error writing feature-split-event-channels\n");
>  
> +	/*
> +	 * Multi-queue support: This is an optional feature.
> +	 */
> +	err = xenbus_printf(XBT_NIL, dev->nodename,
> +			"multi-queue-max-queues", "%u", xenvif_max_queues);

Should prefix this with "feature-".

> +	if (err)
> +		pr_debug("Error writing multi-queue-max-queues\n");
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> @@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
>  	unsigned long credit_bytes, credit_usec;
>  	unsigned int queue_index;
>  	struct xenvif_queue *queue;
> +	unsigned int requested_num_queues;
> +
> +	/* Check whether the frontend requested multiple queues
> +	 * and read the number requested.
> +	 */
> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
> +			"multi-queue-num-queues",
> +			"%u", &requested_num_queues);
> +	if (err < 0)
> +		requested_num_queues = 1; /* Fall back to single queue */

You also need to check whether it gets back something larger than
permitted -- guest can be buggy or malicious.
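One way to sketch that check (names here are illustrative, not the
actual driver code): clamp whatever came back from XenStore before
using it.

```c
/* Illustrative clamp of a frontend-requested queue count. scanf_ret
 * stands in for the xenbus_scanf() return value; max_queues mirrors
 * xenvif_max_queues. A missing key, a zero, or an oversized value
 * from a buggy or malicious guest all collapse to something safe. */
static unsigned int clamp_num_queues(int scanf_ret, unsigned int requested,
				     unsigned int max_queues)
{
	if (scanf_ret < 0 || requested == 0)
		return 1;		/* fall back to a single queue */
	if (requested > max_queues)
		return max_queues;	/* cap untrusted input */
	return requested;
}
```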

>  
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
>  	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>  	read_xenbus_vif_flags(be);
>  
> -	be->vif->num_queues = 1;
> +	/* Use the number of queues requested by the frontend */
> +	be->vif->num_queues = requested_num_queues;
>  	be->vif->queues = vzalloc(be->vif->num_queues *
>  			sizeof(struct xenvif_queue));
> +	rtnl_lock();
> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
> +	rtnl_unlock();
>  
>  	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
>  	{
> @@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  	unsigned long tx_ring_ref, rx_ring_ref;
>  	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> +	char *xspath = NULL;
> +	size_t xspathsize;
> +
> +	/* If the frontend requested 1 queue, or we have fallen back
> +	 * to single queue due to lack of frontend support for multi-
> +	 * queue, expect the remaining XenStore keys in the toplevel
> +	 * directory. Otherwise, expect them in a subdirectory called
> +	 * queue-N.
> +	 */

I think even if the frontend only requests 1 queue you can still put it
under a subdirectory? I don't have a strong preference here though...

After the protocol is settled it needs to be documented in netif.h.

> +	if (queue->vif->num_queues == 1)
> +		xspath = (char *)dev->otherend;
> +	else {
> +		xspathsize = strlen(dev->otherend) + 10;

Please avoid using a magic number.
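For instance (a sketch under my own naming, not the driver's), the
extra length can be derived from the format string itself, so the two
can never drift apart:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* "/queue-" plus up to 10 decimal digits for a 32-bit queue number;
 * sizeof of the string literal already counts the terminating NUL. */
#define QUEUE_SUFFIX_MAX (sizeof("/queue-") + 10)

static char *make_xspath(const char *otherend, unsigned int number)
{
	size_t len = strlen(otherend) + QUEUE_SUFFIX_MAX;
	char *xspath = malloc(len);

	if (xspath)
		snprintf(xspath, len, "%s/queue-%u", otherend, number);
	return xspath;
}
```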

> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
> +		if (!xspath) {
> +			xenbus_dev_fatal(dev, -ENOMEM,
> +					"reading ring references");

"ring reference"?

> +			return -ENOMEM;
> +		}
> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
> +				queue->number);

Indentation.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:25:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:25:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ale-0007wx-Qa; Thu, 16 Jan 2014 00:25:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3alc-0007ws-Hb
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:25:04 +0000
Received: from [85.158.137.68:19213] by server-7.bemta-3.messagelabs.com id
	45/DC-27599-FD627D25; Thu, 16 Jan 2014 00:25:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389831901!9451256!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26757 invoked from network); 16 Jan 2014 00:25:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:25:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93312125"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:25:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:25:00 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3alY-0002VS-Jj;
	Thu, 16 Jan 2014 00:25:00 +0000
Date: Thu, 16 Jan 2014 00:25:00 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116002500.GI5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/4] xen-netfront: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:23PM +0000, Andrew J. Bennieston wrote:
[...]
> +
> +struct netfront_queue {
> +	unsigned int number; /* Queue number, 0-based */
> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> +	struct netfront_info *info;
>  
>  	struct napi_struct napi;
>  
> @@ -93,10 +96,8 @@ struct netfront_info {
>  	unsigned int tx_evtchn, rx_evtchn;
>  	unsigned int tx_irq, rx_irq;
>  	/* Only used when split event channels support is enabled */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> -
> -	struct xenbus_device *xbdev;
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */

Basically you're anticipating that the maximum number of queues is below
10 here. I think leaving one more byte won't hurt, just in case you
have more than 10 queues. The same goes for netback as well.

In your next patch you have max_queues as 16 by default, which has
already broken your assumption here. :-)
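snprintf() reports the length it would have written, so a fixed-size
name buffer like these can be checked mechanically. A hedged sketch
(the IFNAMSIZ value and function name are mine, for illustration only):

```c
#include <stdio.h>

#define IFNAMSIZ 16	/* as in <linux/if.h> */

/* Return 1 if "DEVNAME-qN-tx" fits in an IFNAMSIZ+7 buffer, 0 if
 * snprintf() had to truncate it. */
static int irq_name_fits(const char *devname, unsigned int qnum)
{
	char buf[IFNAMSIZ + 7];
	int n = snprintf(buf, sizeof(buf), "%s-q%u-tx", devname, qnum);

	return n >= 0 && n < (int)sizeof(buf);
}
```

With a maximal 15-character device name, single- and double-digit queue
numbers squeeze in, but a three-digit queue number would be silently
truncated; an extra byte or two of headroom is cheap insurance.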

>  
>  	spinlock_t   tx_lock;
>  	struct xen_netif_tx_front_ring tx;
> @@ -139,6 +140,17 @@ struct netfront_info {
>  	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
>  	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
>  	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
> +};
> +
[...]
>  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	unsigned short id;
> @@ -555,6 +575,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	unsigned int offset = offset_in_page(data);
>  	unsigned int len = skb_headlen(skb);
>  	unsigned long flags;
> +	struct netfront_queue *queue = NULL;
> +	u16 queue_index;
> +
> +	/* Drop the packet if no queues are set up */
> +	if (np->num_queues < 1 || np->queues == NULL)

Same as the comment in netback, you won't need both.

And the rest of the patch is basically replacing np with queue and
putting things in loops. So I stopped here...

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:25:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:25:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ale-0007wx-Qa; Thu, 16 Jan 2014 00:25:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3alc-0007ws-Hb
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:25:04 +0000
Received: from [85.158.137.68:19213] by server-7.bemta-3.messagelabs.com id
	45/DC-27599-FD627D25; Thu, 16 Jan 2014 00:25:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389831901!9451256!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26757 invoked from network); 16 Jan 2014 00:25:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:25:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93312125"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:25:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:25:00 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3alY-0002VS-Jj;
	Thu, 16 Jan 2014 00:25:00 +0000
Date: Thu, 16 Jan 2014 00:25:00 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116002500.GI5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/4] xen-netfront: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:23PM +0000, Andrew J. Bennieston wrote:
[...]
> +
> +struct netfront_queue {
> +	unsigned int number; /* Queue number, 0-based */
> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> +	struct netfront_info *info;
>  
>  	struct napi_struct napi;
>  
> @@ -93,10 +96,8 @@ struct netfront_info {
>  	unsigned int tx_evtchn, rx_evtchn;
>  	unsigned int tx_irq, rx_irq;
>  	/* Only used when split event channels support is enabled */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> -
> -	struct xenbus_device *xbdev;
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */

Basically you're assuming here that the maximum number of queues is below
10. I think leaving one more byte here won't hurt, just in case you ever
have more than 10 queues. The same goes for netback as well.

In your next patch you have max_queues as 16 by default, which already
breaks your assumption here. :-)

>  
>  	spinlock_t   tx_lock;
>  	struct xen_netif_tx_front_ring tx;
> @@ -139,6 +140,17 @@ struct netfront_info {
>  	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
>  	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
>  	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
> +};
> +
[...]
>  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	unsigned short id;
> @@ -555,6 +575,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	unsigned int offset = offset_in_page(data);
>  	unsigned int len = skb_headlen(skb);
>  	unsigned long flags;
> +	struct netfront_queue *queue = NULL;
> +	u16 queue_index;
> +
> +	/* Drop the packet if no queues are set up */
> +	if (np->num_queues < 1 || np->queues == NULL)

Same as my comment on netback: you won't need both checks.

And the rest of the patch is basically replacing np with queue and
putting things in loops. So I stopped here...

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 00:27:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 00:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ang-000843-K0; Thu, 16 Jan 2014 00:27:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3ane-00083u-W4
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 00:27:11 +0000
Received: from [193.109.254.147:12172] by server-14.bemta-14.messagelabs.com
	id BB/61-12628-E5727D25; Thu, 16 Jan 2014 00:27:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389832027!8839364!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6952 invoked from network); 16 Jan 2014 00:27:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 00:27:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,665,1384300800"; d="scan'208";a="93312449"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 00:27:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 15 Jan 2014 19:27:06 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3ana-0002XH-Jb;
	Thu, 16 Jan 2014 00:27:06 +0000
Date: Thu, 16 Jan 2014 00:27:06 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140116002706.GJ5331@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
[...]
> +/* Module parameters */
> +unsigned int xennet_max_queues = 16;
> +module_param(xennet_max_queues, uint, 0644);
> +

This looks quite arbitrary as well.

>  static const struct ethtool_ops xennet_ethtool_ops;
>  
[...]
> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
> +			   struct xenbus_transaction *xbt, int write_hierarchical)
> +{
> +	/* Write the queue-specific keys into XenStore in the traditional
> +	 * way for a single queue, or in a queue subkeys for multiple
> +	 * queues.
> +	 */
> +	struct xenbus_device *dev = queue->info->xbdev;
> +	int err;
> +	const char *message;
> +	char *path;
> +	size_t pathsize;
> +
> +	/* Choose the correct place to write the keys */
> +	if (write_hierarchical) {
> +		pathsize = strlen(dev->nodename) + 10;
> +		path = kzalloc(pathsize, GFP_KERNEL);
> +		if (!path) {
> +			err = -ENOMEM;
> +			message = "writing ring references";

This error message doesn't sound right.

> +			goto error;
> +		}
> +		snprintf(path, pathsize, "%s/queue-%u",
> +				dev->nodename, queue->number);
> +	}
> +	else
> +		path = (char *)dev->nodename;

Coding style. The else branch should be surrounded by braces.

> +
[...]
> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	int err;
>  	unsigned int feature_split_evtchn;
>  	unsigned int i = 0;
> +	unsigned int max_queues = 0;
>  	struct netfront_queue *queue = NULL;
>  
>  	info->netdev->irq = 0;
>  
> +	/* Check if backend supports multiple queues */
> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
> +			"multi-queue-max-queues", "%u", &max_queues);
> +	if (err < 0)
> +		max_queues = 1;
> +

You need to check whether the backend provides too big a number for the
frontend to handle.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 01:23:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 01:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3bfp-0006GR-7H; Thu, 16 Jan 2014 01:23:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W3bfo-0006GJ-2n
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 01:23:08 +0000
Received: from [85.158.143.35:50261] by server-1.bemta-4.messagelabs.com id
	78/54-02132-B7437D25; Thu, 16 Jan 2014 01:23:07 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389835385!11945882!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17223 invoked from network); 16 Jan 2014 01:23:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 01:23:06 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0G1N2sJ008183
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 01:23:04 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G1N1Br022045
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 01:23:02 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G1N1r7016263; Thu, 16 Jan 2014 01:23:01 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 17:23:00 -0800
Message-ID: <52D7346A.3000300@oracle.com>
Date: Thu, 16 Jan 2014 09:22:50 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
In-Reply-To: <52D6B8B6.5070302@zynstra.com>
Content-Type: multipart/mixed; boundary="------------040804010606070709040505"
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------040804010606070709040505
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

On 01/16/2014 12:35 AM, James Dingwall wrote:
> Bob Liu wrote:
>> On 01/15/2014 04:49 PM, James Dingwall wrote:
>>> Bob Liu wrote:
>>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>>> Bob Liu wrote:
>>>>>> Could you confirm that this problem doesn't exist when loading tmem
>>>>>> with selfshrinking=0 during a gcc compile? It seems that you are
>>>>>> compiling different packages during your testing.
>>>>>> This will help figure out whether selfshrinking is the root cause.
>>>>> Got an oom with selfshrinking=0, again during a gcc compile.
>>>>> Unfortunately I don't have a single test case which demonstrates the
>>>>> problem but as I mentioned before it will generally show up under
>>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>>
>>>> So the root cause is not the enabled selfshrinking.
>>>> Then what I can think of is that the xen-selfballoon driver was too
>>>> aggressive: too many pages were ballooned out, which caused heavy
>>>> memory pressure on the guest OS.
>>>> And kswapd started to reclaim pages until most pages were
>>>> unreclaimable (all_unreclaimable=yes for all zones), then the OOM
>>>> killer was triggered.
>>>> In theory the balloon driver should give ballooned-out pages back to
>>>> the guest OS, but I'm afraid this procedure is not fast enough.
>>>>
>>>> My suggestion is to reserve a minimum amount of memory for your guest
>>>> OS so that xen-selfballoon won't be so aggressive.
>>>> You can do it through the parameters selfballoon_reserved_mb or
>>>> selfballoon_min_usable_mb.
>>> I am still getting OOM errors with both of these set to 32 so I'll try
>>> another bump to 64.  I think that if I do find values which prevent it
>>> though then it is more of a work around than a fix because it still
>>> suggests that swap is not being used when ballooning is no longer
>> Yes, it's like a work around. But I don't think there is a better way.
>>
>>  From the recent OOM log your reported:
>> [ 8212.940769] Free swap  = 1925576kB
>> [ 8212.940770] Total swap = 2097148kB
>>
>> [504638.442136] Free swap  = 1868108kB
>> [504638.442137] Total swap = 2097148kB
>>
>> 171572kB and 229040kB of data were swapped out to the swap disk; I
>> think those are already significant amounts for a guest OS with only
>> ~300M of usable memory.
>> The guest OS can't find any more pages suitable for swapping after so
>> many pages have been swapped out, even though at that point the swap
>> device still has enough space.
>>
>> The OOM might not be triggered if the balloon driver could give memory
>> back to the guest OS fast enough, but I think that's unrealistic.
>> So the best way is to reserve more memory for the guest OS.
>>
>>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>>> kernel) guest running on the dom0 with tmem activated so I'm going to
>>> see if I can find a comparable workload to see if I get the same issue
>>> with a different kernel configuration.
>>>
> I've done a bit more testing and seem to have found an extra condition
> which is affecting the OOM behaviour in my guests.  All my Gentoo guests
> have swap space backed by a dm-crypt volume and if I remove this layer
> then things seem to be behaving much more reliably.  In my Ubuntu guests
> I have plain swap space and so far I haven't been able to trigger an OOM
> condition.  Is it possible that it is the dm-crypt layer failing to get
> working memory when swapping something in/out and causing the error?
> 

One possible reason is that the dm layer and the related dm target driver
occupy a significant amount of memory, and there is no way for
xen-selfballoon to know this. So the selfballoon driver ballooned out more
memory than the system really requires.

I have made a patch that reserves an extra 10% of the original total
memory; this way I think we can make the system much more reliable in all
cases. Could you please give it a test? You won't need to set
selfballoon_reserved_mb by yourself any more.

-- 
Regards,
-Bob

--------------040804010606070709040505
Content-Type: text/x-patch;
 name="xen_selfballoon_deaggressive.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xen_selfballoon_deaggressive.patch"

diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
index 21e18c1..8f33254 100644
--- a/drivers/xen/xen-selfballoon.c
+++ b/drivers/xen/xen-selfballoon.c
@@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
 #endif /* CONFIG_FRONTSWAP */
 
 #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
+#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
 
 /*
  * Use current balloon size, the goal (vm_committed_as), and hysteresis
@@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
 int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
 {
 	bool enable = false;
+	unsigned long reserve_pages;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
 	if (!enable)
 		return -ENODEV;
 
+	/*
+	 * Give selfballoon_reserved_mb a default value (10% of total ram
+	 * pages) to make selfballoon less aggressive.
+	 *
+	 * There are two reasons:
+	 * 1) The goal_page doesn't include some pages used by kernel space,
+	 *    like slab caches and pages used by device drivers.
+	 *
+	 * 2) The balloon driver may not give memory back to the guest OS
+	 *    fast enough when the workload suddenly acquires a lot of memory.
+	 *
+	 * In both cases, the guest OS will suffer from memory pressure and
+	 * the OOM killer may be triggered.
+	 * By reserving an extra 10% of total ram pages, we can keep the
+	 * system much more reliable and responsive in some cases.
+	 */
+	if (!selfballoon_reserved_mb) {
+		reserve_pages = totalram_pages / 10;
+		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
+	}
 	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
 
 	return 0;

--------------040804010606070709040505
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------040804010606070709040505--


From xen-devel-bounces@lists.xen.org Thu Jan 16 01:56:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 01:56:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3cCF-0007be-7F; Thu, 16 Jan 2014 01:56:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W3cCD-0007bZ-Kp
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 01:56:37 +0000
Received: from [193.109.254.147:24783] by server-14.bemta-14.messagelabs.com
	id 97/62-12628-55C37D25; Thu, 16 Jan 2014 01:56:37 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389837394!11068054!1
X-Originating-IP: [144.92.197.226]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17821 invoked from network); 16 Jan 2014 01:56:35 -0000
Received: from wmauth3.doit.wisc.edu (HELO smtpauth3.wiscmail.wisc.edu)
	(144.92.197.226)
	by server-5.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	16 Jan 2014 01:56:35 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth3.wiscmail.wisc.edu by
	smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZH00D001DU6600@smtpauth3.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Wed, 15 Jan 2014 19:56:34 -0600 (CST)
X-Spam-PmxInfo: Server=avs-3, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.16.15115,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from comporellon.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZH00C1I1E6XC00@smtpauth3.wiscmail.wisc.edu>; Wed,
	15 Jan 2014 19:56:32 -0600 (CST)
Message-id: <52D73C4E.2080306@freebsd.org>
Date: Wed, 15 Jan 2014 19:56:30 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.0
To: Julien Grall <julien.grall@linaro.org>, Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org>
In-reply-to: <52D6B62A.9000208@linaro.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/14 10:24, Julien Grall wrote:
> On 01/15/2014 01:26 AM, Warner Losh wrote:
>> On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
>>> This new support brings 2 open questions (for both the Xen and FreeBSD communities).
>>> When a new guest is created, the toolstack will generate a device tree which
>>> will contain:
>>> 	- The amount of memory
>>> 	- The description of the platform (gic, timer, hypervisor node)
>>> 	- PSCI node for SMP bringup
>>>
>>> Until now, Xen on ARM supported only Linux-based OSes. When support for
>>> device trees was added in Xen for guests, we chose to use the Linux device
>>> tree bindings (gic, timer,...). It seems that FreeBSD chose a different way
>>> to implement device tree support:
>>> 	- strictly respect ePAPR (for the interrupt-parent property)
>>> 	- different gic bindings (only 1 interrupt cell)
>>>
>>> I would like to come up with a common device tree specification (bindings, ...)
>>> across every operating system. I know the Linux community is working on
>>> moving the device tree bindings out of the kernel tree. Does the FreeBSD
>>> community plan to work with the Linux community for this purpose?
>> We generally try to follow the common definitions for the FDT stuff.
>> There are a few cases where we either lack the feature set of Linux, or
>> where the Linux folks are moving quickly and changing the underlying
>> definitions, where we wait for the standards to mature before we
>> implement. In some cases, where maturity hasn't happened, or where the
>> bindings are overly Linux centric (which in theory they aren't supposed
>> to be, but sometimes wind up that way), we've not implemented things.
> As I understand it, the main bindings (gic, timer) are set in stone and
> won't change. Ian Campbell has a repository with all the ARM bindings here:
> http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;
> a=tree;f=Bindings/arm
>
> To compare the differences, here are the DT provided by Xen and the one
> as parsed by FreeBSD:
> 	- Xen: http://xenbits.xen.org/people/julieng/xenvm-4.2.dts
> 	- FreeBSD: http://xenbits.xen.org/people/julieng/xenvm-bsd.dts
>
> From the Xen side:
> 	- Every device should move under a simple-bus. I think it's harmless
> for the Linux side.
> 	- How about the hypervisor node? IMHO this node should also live under
> the simple-bus.
>
> From the FreeBSD side:
> 	- GIC and Timer bindings need to be handled correctly (see below for
> the problem with the GIC)
> 	- Be less strict about the interrupt-parent property. E.g., look at the
> grant-parent if there is no interrupt-parent property
> 	- If the hypervisor doesn't live under simple-bus, should the
> interrupt/memory retrieval be moved from simple-bus to nexus?
>
> Before the revision r260282 (I have added Nathan on cc), I was able to
> use the Linux GIC bindings (see
> http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=blob;f=Bindings/arm/gic.txt)
> without any issue.
>
> It seems the fdt bus now considers the number of interrupt cells to
> always be 1.
>
>

Thanks for the CC. Could you explain what you mean by "grant-parent" 
etc? "interrupt-parent" is a fundamental part of the way PAPR (and 
ePAPR) work, so I'm very very hesitant to start guessing. I think things 
have broken for you because some (a lot, actually) of OF code does not 
expect #interrupt-cells to be more than 2. Some APIs might need 
updating, which I'll try to take care of. Could you tell me what the 
difference between SPI and PPI is, by the way?

On the subject of simple-bus, such nodes usually aren't necessary. For 
example, all hypervisor devices on IBM hardware live under /vdevice, 
which is attached to the device tree root. They don't use MMIO, so 
simple-bus doesn't really make sense. How does Xen communicate with the 
OS in these devices?
-Nathan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 02:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 02:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3clq-0001B1-8Q; Thu, 16 Jan 2014 02:33:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3clo-0001Aw-Me
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 02:33:24 +0000
Received: from [85.158.137.68:28979] by server-5.bemta-3.messagelabs.com id
	8D/08-25188-3F447D25; Thu, 16 Jan 2014 02:33:23 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389839602!8599888!1
X-Originating-IP: [209.85.215.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7652 invoked from network); 16 Jan 2014 02:33:22 -0000
Received: from mail-la0-f54.google.com (HELO mail-la0-f54.google.com)
	(209.85.215.54)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 02:33:22 -0000
Received: by mail-la0-f54.google.com with SMTP id y1so2025680lam.41
	for <xen-devel@lists.xen.org>; Wed, 15 Jan 2014 18:33:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=eWZdHm8VHCS05Px4wsOG4GkP/qOr5dlWlyDW2low/4k=;
	b=ZWqVDPA8rHSsIjL1YgspBheDUVOXc5pg9XSRTDbHRCbsM8zNuu8gU+dCU9f9Thiomh
	RW1VXRKBY5VtwzLeREVSPVl9DfqVLY5F1OqUCtw4qIGcehiAtKM9gtecXYKkG2UeNg5z
	XKPguT2PPRULCoM/L5T3zIKwcD6cKZtrOo6YYu2VZZzegLxSuse47oAZmN1Lk5zpM673
	N6WO7JLfTc/AiSqK3TkX96py6Wuo4j2bTSNNB46r32iNm6Cho1jj8fYY7rRi4P0Yk4a6
	qdnOySQR99YXoug4SNRcc2YXeAwre4N7q5fpA5QT1mMvBccvCAqFQgwy0v3sgqxrkJ4x
	4yQQ==
X-Received: by 10.112.160.196 with SMTP id xm4mr3410688lbb.34.1389839601804;
	Wed, 15 Jan 2014 18:33:21 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id rb4sm3531829lbb.1.2014.01.15.18.33.20
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 15 Jan 2014 18:33:21 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 16 Jan 2014 06:33:17 +0400
Message-Id: <374E62F7-3C1F-4679-B635-3F07FF0F77A3@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] different QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

could you please let me know why we have different QEMUs for Xen builds?
I see:
qemu-xen-dir-remote - git://xenbits.xen.org/qemu-upstream-4.2-testing.git
qemu-xen-traditional-dir-remote - git://xenbits.xen.org/qemu-xen-4.2-testing.git

Can we use just ONE?
Or do we need different binaries?

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 04:43:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 04:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3en3-0006p1-Ni; Thu, 16 Jan 2014 04:42:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3en1-0006ow-Sw
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 04:42:48 +0000
Received: from [85.158.143.35:36702] by server-3.bemta-4.messagelabs.com id
	49/6A-32360-74367D25; Thu, 16 Jan 2014 04:42:47 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389847365!12037545!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9863 invoked from network); 16 Jan 2014 04:42:45 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-21.messagelabs.com with SMTP;
	16 Jan 2014 04:42:45 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 15 Jan 2014 20:38:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,666,1384329600"; d="scan'208";a="439614122"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 15 Jan 2014 20:42:43 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 20:42:43 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Thu, 16 Jan 2014 12:42:40 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w//+OMoAAOoMiMAAvg9oAAWCpcJA=
Date: Thu, 16 Jan 2014 04:42:40 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE0A4@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
	<52CE93D8.4080201@amazon.de>
In-Reply-To: <52CE93D8.4080201@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-01-09:
> On 08.01.14 06:50, Zhang, Yang Z wrote:
>> Egger, Christoph wrote on 2014-01-07:
>>> On 07.01.14 09:54, Zhang, Yang Z wrote:
>>>> Jan Beulich wrote on 2014-01-07:
>>>>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>> wrote:
>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z"
>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>> wrote:
>>>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z"
>>>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>>>> wrote:
>>>>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>>>>> processing, stash away and re-use what we already read.
>>>>>>>>>>> That way we can be certain that the retry won't do
>>>>>>>>>>> something different from what requested the retry, getting
>>>>>>>>>>> once again closer to real hardware behavior (where what we
>>>>>>>>>>> use retries for is simply a bus operation, not involving
>>>>>>>>>>> redundant decoding of instructions).
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> This patch doesn't consider the nested case.
>>>>>>>>>> For example, if the buffer saved the L2's instruction, then
>>>>>>>>>> vmexit to
>>>>>>>>>> L1 and
>>>>>>>>>> L1 may use the wrong instruction.
>>>>>>>>> 
>>>>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>>>>> There should be, at any given point in time, at most one
>>>>>>>>> instruction being emulated. Can you please give a more
>>>>>>>>> elaborate explanation of the situation where you see a (theoretical?
>>>>>>>>> practical?)
>>>>>> problem?
>>>>>>>> 
>>>>>>>> I saw this issue when booting L1 hyper-v. I added some debug
>>>>>>>> info and saw the strange phenomenon:
>>>>>>>> 
>>>>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>> 
>>>>>>>> From the log, we can see different eips but the same buffer.
>>>>>>>> Since I don't know how Hyper-V works, I cannot give more
>>>>>>>> information on why this happens. And I only saw it with L1
>>>>>>>> Hyper-V (Xen on Xen and KVM on Xen don't have this issue).
>>>>>>> 
>>>>>>> But in order to validate the fix is (a) needed and (b) correct,
>>>>>>> a proper understanding of the issue is necessary as the very first step.
>>>>>>> Which doesn't require knowing internals of Hyper-V, all you
>>>>>>> need is tracking of emulator state changes in Xen (which I
>>>>>>> realize is easier said than done, since you want to make sure
>>>>>>> you don't generate huge amounts of output before actually
>>>>>>> hitting the issue, making it close to
>>>>>> impossible to analyze).
>>>>>> 
>>>>>> Ok. It's an old issue that is just exposed by your patch:
>>>>>> Sometimes, L0 needs to decode an L2 instruction to handle IO
>>>>>> access directly. For example, if L1 passes through (not VT-d) a
>>>>>> device to L2 directly. And L0 may get X86EMUL_RETRY when handling
>>>>>> this IO request. At the same time, if there is a virtual vmexit
>>>>>> pending (for example, an interrupt pending to inject to L1), the
>>>>>> hypervisor will switch the VCPU context from L2 to L1. Now we are
>>>>>> already in L1's context, but since we got an X86EMUL_RETRY just
>>>>>> now, the hypervisor will retry the IO request later and,
>>>>>> unfortunately, the retry will happen in L1's context. Without your
>>>>>> patch, L0 just emulates an L1 instruction and everything goes on.
>>>>>> With your patch, L0 will get the instruction from the buffer which
>>>>>> belongs to L2, and the problem arises.
>>>>>> 
>>>>>> So the fix is that if there is a pending IO request, no virtual
>>>>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>>>> 
>>>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>>>>      struct vcpu *v = current;
>>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>> 
>>>>>> +    if ( p->state != STATE_IOREQ_NONE )
>>>>>> +        return;
>>>>>>      /*
>>>>>>       * a softirq may interrupt us between a virtual vmentry is
>>>>>>       * just handled and the true vmentry. If during this
>>>>>> window,
>>>>> 
>>>>> This change looks much more sensible; question is whether simply
>>>>> ignoring the switch request is acceptable (and I don't know the
>>>>> nested HVM code well enough to tell). Furthermore I wonder whether
>>>>> that's really a VMX-only issue.
>>>> 
>>>> From hardware's point of view, an IO operation is handled
>>>> atomically, so the problem can never happen there. But in Xen, an IO
>>>> operation is divided into several steps. Without nested
>>>> virtualization, the VCPU is paused or spins until the IO emulation
>>>> is finished. In a nested environment, however, the VCPU keeps
>>>> running even though the IO emulation is not finished. My patch
>>>> checks for this and allows the VCPU to continue running only when
>>>> there is no pending IO request, which matches hardware's behavior.
>>>> 
>>>> I guess SVM has this problem too, but since I don't know nested SVM
>>>> well, perhaps Christoph can help double-check.
>>> 
>>> For SVM this issue was fixed with commit
>>> d740d811925385c09553cbe6dee8e77c1d43b198
>>> 
>>> And there is a followup cleanup commit
>>> ac97fa6a21ccd395cca43890bbd0bf32e3255ebb
>>> 
>>> The change in nestedsvm.c in commit
>>> d740d811925385c09553cbe6dee8e77c1d43b198 is actually not SVM specific.
>>> 
>>> Move that into nhvm_interrupt_blocked() in hvm.c right before
>>> 
>>>     return hvm_funcs.nhvm_intr_blocked(v); and the fix applies for
>>> both SVM and VMX.
>>> 
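[Editor's note] Christoph's suggested placement can be sketched as a self-contained simulation. All names here (`fake_vcpu`, `nhvm_interrupt_blocked_sim`, `vendor_not_blocked`) are invented for illustration; in real Xen the check would sit in nhvm_interrupt_blocked() in xen/arch/x86/hvm/hvm.c just before the `hvm_funcs.nhvm_intr_blocked(v)` call, with the state taken from the vcpu's ioreq slot.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified ioreq states, modelled on Xen's STATE_IOREQ_* values. */
enum ioreq_state {
    STATE_IOREQ_NONE = 0,
    STATE_IOREQ_READY,
    STATE_IOREQ_INPROCESS
};

enum intr_blocked { NHVM_INTR_NOT_BLOCKED, NHVM_INTR_BLOCKED };

/* Hypothetical stand-in for struct vcpu plus its pending ioreq. */
struct fake_vcpu {
    enum ioreq_state ioreq_state;
    enum intr_blocked (*vendor_intr_blocked)(struct fake_vcpu *);
};

/* Vendor hook (hvm_funcs.nhvm_intr_blocked in real Xen). */
static enum intr_blocked vendor_not_blocked(struct fake_vcpu *v)
{
    (void)v;
    return NHVM_INTR_NOT_BLOCKED;
}

/*
 * Generic check: refuse to deliver an interrupt (and hence to trigger
 * a virtual vmexit) while an IO request is in flight, then fall
 * through to the vendor-specific hook.
 */
static enum intr_blocked nhvm_interrupt_blocked_sim(struct fake_vcpu *v)
{
    if (v->ioreq_state != STATE_IOREQ_NONE)
        return NHVM_INTR_BLOCKED;
    return v->vendor_intr_blocked(v);
}
```

The point of this placement is that the ioreq check runs once in common HVM code, so both the SVM and VMX nested implementations inherit it.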
>> 
>> I suspect this is not enough. L2 may vmexit to L1 during IO emulation
>> for reasons other than an interrupt. I cannot give an example right
>> now, but Hyper-V fails to boot with your suggestion, so considering
>> only external interrupts appears insufficient. We should prohibit the
>> vmswitch whenever there is a pending IO emulation from either L1 or L2
>> (this may never happen for L1, but to match hardware's behavior we
>> should add the check).
> 
> I compared nsvm_vcpu_switch() with nvmx_switch_guest() (both are
> called from entry.S) and came up with one question:
> 
> How do you handle the case of a virtual TLB flush (which shoots down
> the EPT tables) from another virtual CPU while you are in the middle
> of a vmentry/vmexit emulation? This happens when one vCPU sends an IPI
> to another vCPU.
> 
> If you do not handle this case you launch an L2 guest with an empty
> EPT table (you set it up correctly, but another CPU shot it down right
> after you set it up), and you end up in an endless loop of EPT faults.
> 

Yes, that should be a separate issue; we may need another patch to solve it.

Back to the IO emulation issue: your suggestion didn't solve the problem. So how about my solution of not allowing a virtual vmentry/vmexit while there is an unfinished pending IO request?
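[Editor's note] To make the race concrete, here is a toy model of the failure mode and the proposed guard. It is a sketch with invented names (`fake_vcpu`, `nvmx_switch_guest_sim`, and so on), not Xen code; it only mirrors the logic of the vvmx.c diff quoted above.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum ioreq_state { STATE_IOREQ_NONE = 0, STATE_IOREQ_INPROCESS };

/* One cached-instruction buffer per vcpu, as in the patch under discussion. */
struct fake_vcpu {
    enum ioreq_state ioreq_state;
    char insn_cache[16];   /* instruction bytes stashed for retry */
    int  in_l1_context;    /* 0 = running L2, 1 = running L1      */
};

/* First emulation attempt: stash L2's bytes, then hit X86EMUL_RETRY. */
static void start_l2_io_emulation(struct fake_vcpu *v, const char *l2_bytes)
{
    strcpy(v->insn_cache, l2_bytes);
    v->ioreq_state = STATE_IOREQ_INPROCESS;   /* retry is pending */
}

/*
 * The proposed guard: refuse the virtual vmexit (L2 to L1 switch)
 * while an IO request is still pending. Returns true if the switch
 * actually happened.
 */
static bool nvmx_switch_guest_sim(struct fake_vcpu *v, bool guarded)
{
    if (guarded && v->ioreq_state != STATE_IOREQ_NONE)
        return false;          /* defer the switch, as hardware would */
    v->in_l1_context = 1;
    return true;
}

/* The retry completes the IO using whatever is in the cache. */
static const char *retry_io_emulation(struct fake_vcpu *v)
{
    v->ioreq_state = STATE_IOREQ_NONE;
    return v->insn_cache;   /* stale L2 bytes if we already switched to L1 */
}
```

Without the guard, the switch goes through and the retry consumes L2's cached bytes while in L1's context, which is exactly the mixed-eip symptom seen in the debug log.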

> Christoph


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 04:43:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 04:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3en3-0006p1-Ni; Thu, 16 Jan 2014 04:42:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3en1-0006ow-Sw
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 04:42:48 +0000
Received: from [85.158.143.35:36702] by server-3.bemta-4.messagelabs.com id
	49/6A-32360-74367D25; Thu, 16 Jan 2014 04:42:47 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389847365!12037545!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9863 invoked from network); 16 Jan 2014 04:42:45 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-21.messagelabs.com with SMTP;
	16 Jan 2014 04:42:45 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 15 Jan 2014 20:38:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,666,1384329600"; d="scan'208";a="439614122"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 15 Jan 2014 20:42:43 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 20:42:43 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Thu, 16 Jan 2014 12:42:40 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w//+OMoAAOoMiMAAvg9oAAWCpcJA=
Date: Thu, 16 Jan 2014 04:42:40 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE0A4@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
	<52CE93D8.4080201@amazon.de>
In-Reply-To: <52CE93D8.4080201@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-01-09:
> On 08.01.14 06:50, Zhang, Yang Z wrote:
>> Egger, Christoph wrote on 2014-01-07:
>>> On 07.01.14 09:54, Zhang, Yang Z wrote:
>>>> Jan Beulich wrote on 2014-01-07:
>>>>>>>> On 24.12.13 at 12:29, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>>> wrote:
>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>> On 18.12.13 at 10:40, "Zhang, Yang Z"
>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>> wrote:
>>>>>>>> Jan Beulich wrote on 2013-12-18:
>>>>>>>>>>>> On 18.12.13 at 09:36, "Zhang, Yang Z"
>>>>>>>>>>>> <yang.z.zhang@intel.com>
>>>>>>> wrote:
>>>>>>>>>> Jan Beulich wrote on 2013-09-30:
>>>>>>>>>>> Rather than re-reading the instruction bytes upon retry
>>>>>>>>>>> processing, stash away and re-use what we already read.
>>>>>>>>>>> That way we can be certain that the retry won't do
>>>>>>>>>>> something different from what requested the retry, getting
>>>>>>>>>>> once again closer to real hardware behavior (where what we
>>>>>>>>>>> use retries for is simply a bus operation, not involving
>>>>>>>>>>> redundant decoding of instructions).
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> This patch doesn't consider the nested case. For example, if
>>>>>>>>>> the buffer saved an L2 instruction and we then vmexit to L1,
>>>>>>>>>> L1 may use the wrong instruction.
>>>>>>>>> 
>>>>>>>>> I'm having difficulty seeing how the two could get intermixed:
>>>>>>>>> There should be, at any given point in time, at most one
>>>>>>>>> instruction being emulated. Can you please give a more
>>>>>>>>> elaborate explanation of the situation where you see a (theoretical?
>>>>>>>>> practical?)
>>>>>> problem?
>>>>>>>> 
>>>>>>>> I saw this issue when booting an L1 Hyper-V. I added some debug
>>>>>>>> output and saw this strange phenomenon:
>>>>>>>> 
>>>>>>>> (XEN) write to buffer: eip 0xfffff8800430bc80, size 16,
>>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>> (XEN) read from buffer: eip 0xfffff800002f6138, size 16,
>>>>>>>> content:f7420c1f608488b 44000000011442c7
>>>>>>>> 
>>>>>>>> From the log we can see two different eip values using the same
>>>>>>>> buffer. Since I don't know how Hyper-V works, I cannot give more
>>>>>>>> information on why this happens, and I have only seen it with L1
>>>>>>>> Hyper-V (Xen on Xen and KVM on Xen don't have this issue).
>>>>>>> 
>>>>>>> But in order to validate that the fix is (a) needed and (b)
>>>>>>> correct, a proper understanding of the issue is necessary as the
>>>>>>> very first step. That doesn't require knowing Hyper-V internals;
>>>>>>> all you need is tracking of emulator state changes in Xen (which
>>>>>>> I realize is easier said than done, since you want to make sure
>>>>>>> you don't generate huge amounts of output before actually
>>>>>>> hitting the issue, making it close to
>>>>>> impossible to analyze).
>>>>>> 
>>>>>> OK, it appears to be an old issue that is merely exposed by your patch:
>>>>>> sometimes L0 needs to decode an L2 instruction in order to handle
>>>>>> an IO access directly, for example when L1 passes a device through
>>>>>> to L2 directly (without VT-d). L0 may get X86EMUL_RETRY while
>>>>>> handling this IO request. If at the same time a virtual vmexit is
>>>>>> pending (for example, an interrupt to be injected into L1), the
>>>>>> hypervisor will switch the VCPU context from L2 to L1. We are then
>>>>>> already in L1's context, but because we just got X86EMUL_RETRY the
>>>>>> hypervisor will retry the IO request later, and unfortunately that
>>>>>> retry will happen in L1's context. Without your patch, L0 simply
>>>>>> emulates an L1 instruction and everything carries on. With your
>>>>>> patch, L0 fetches the instruction from the buffer belonging to L2,
>>>>>> and the problem arises.
>>>>>> 
>>>>>> So the fix is: if there is a pending IO request, no virtual
>>>>>> vmexit/vmentry is allowed, which follows hardware's behavior.
>>>>>> 
>>>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> b/xen/arch/x86/hvm/vmx/vvmx.c index 41db52b..c5446a9 100644
>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>> @@ -1394,7 +1394,10 @@ void nvmx_switch_guest(void)
>>>>>>      struct vcpu *v = current;
>>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>> 
>>>>>> +    if ( p->state != STATE_IOREQ_NONE )
>>>>>> +        return;
>>>>>>      /*
>>>>>>       * a softirq may interrupt us between a virtual vmentry is
>>>>>>       * just handled and the true vmentry. If during this
>>>>>> window,
>>>>> 
>>>>> This change looks much more sensible; question is whether simply
>>>>> ignoring the switch request is acceptable (and I don't know the
>>>>> nested HVM code well enough to tell). Furthermore I wonder
>>>>> whether that's
>>> really a VMX-only issue.
>>>> 
>>>> From hardware's point of view, an IO operation is handled
>>>> atomically, so the problem can never happen there. But in Xen, an IO
>>>> operation is divided into several steps. Without nested
>>>> virtualization, the VCPU is paused or spins until the IO emulation
>>>> is finished. In a nested environment, however, the VCPU keeps
>>>> running even though the IO emulation is not finished. My patch
>>>> checks for this and allows the VCPU to continue running only when
>>>> there is no pending IO request, which matches hardware's behavior.
>>>> 
>>>> I guess SVM has this problem too, but since I don't know nested SVM
>>>> well, perhaps Christoph can help double-check.
>>> 
>>> For SVM this issue was fixed with commit
>>> d740d811925385c09553cbe6dee8e77c1d43b198
>>> 
>>> And there is a followup cleanup commit
>>> ac97fa6a21ccd395cca43890bbd0bf32e3255ebb
>>> 
>>> The change in nestedsvm.c in commit
>>> d740d811925385c09553cbe6dee8e77c1d43b198 is actually not SVM specific.
>>> 
>>> Move that into nhvm_interrupt_blocked() in hvm.c right before
>>> 
>>>     return hvm_funcs.nhvm_intr_blocked(v); and the fix applies for
>>> both SVM and VMX.
>>> 
>> 
>> I suspect this is not enough. L2 may vmexit to L1 during IO emulation
>> for reasons other than an interrupt. I cannot give an example right
>> now, but Hyper-V fails to boot with your suggestion, so considering
>> only external interrupts appears insufficient. We should prohibit the
>> vmswitch whenever there is a pending IO emulation from either L1 or L2
>> (this may never happen for L1, but to match hardware's behavior we
>> should add the check).
> 
> I compared nsvm_vcpu_switch() with nvmx_switch_guest() (both are
> called from entry.S) and came up with one question:
> 
> How do you handle the case of a virtual TLB flush (which shoots down
> the EPT tables) from another virtual CPU while you are in the middle
> of a vmentry/vmexit emulation? This happens when one vCPU sends an IPI
> to another vCPU.
> 
> If you do not handle this case you launch an L2 guest with an empty
> EPT table (you set it up correctly, but another CPU shot it down right
> after you set it up), and you end up in an endless loop of EPT faults.
> 

Yes, that should be a separate issue; we may need another patch to solve it.

Back to the IO emulation issue: your suggestion didn't solve the problem. So how about my solution of not allowing a virtual vmentry/vmexit while there is an unfinished pending IO request?
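[Editor's note] The rule proposed here reduces to a single predicate over the ioreq lifecycle. The sketch below uses state names modelled on Xen's STATE_IOREQ_* values, but `vmswitch_allowed` itself is an invented illustration, not a real Xen function.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified ioreq lifecycle, modelled on Xen's STATE_IOREQ_* values. */
enum ioreq_state {
    STATE_IOREQ_NONE = 0,    /* slot idle: emulation finished       */
    STATE_IOREQ_READY,       /* request handed to the device model  */
    STATE_IOREQ_INPROCESS,   /* device model is working on it       */
    STATE_IORESP_READY       /* response ready, retry still to run  */
};

/*
 * A virtual vmentry/vmexit may proceed only when no IO emulation is
 * in flight: any non-NONE state means a retry is still owed to the
 * current (L1 or L2) context.
 */
static bool vmswitch_allowed(enum ioreq_state s)
{
    return s == STATE_IOREQ_NONE;
}
```

Note that the predicate is context-agnostic: it blocks the switch regardless of whether the pending request came from L1 or L2, matching the "both from L1 or L2" requirement stated earlier in the thread.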

> Christoph


Best regards,
Yang




From xen-devel-bounces@lists.xen.org Thu Jan 16 04:51:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 04:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3evL-0007FV-KQ; Thu, 16 Jan 2014 04:51:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3evJ-0007FQ-UX
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 04:51:22 +0000
Received: from [193.109.254.147:27557] by server-11.bemta-14.messagelabs.com
	id C3/08-20576-94567D25; Thu, 16 Jan 2014 04:51:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389847877!8861909!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32578 invoked from network); 16 Jan 2014 04:51:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 04:51:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,666,1384300800"; d="scan'208";a="91247386"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 04:51:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 23:51:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3evD-00020m-60;
	Thu, 16 Jan 2014 04:51:15 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3evC-0002bk-7X;
	Thu, 16 Jan 2014 04:51:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24382-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 04:51:14 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24382: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24382 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24382/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate/x10 fail REGR. vs. 24375

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24380

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  c04c825bdf1e946260cba325eeed993004051050
baseline version:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2

------------------------------------------------------------
People who touched revisions under test:
  Anil Madhavapeddy <anil@recoil.org>
  David Scott <dave.scott@eu.citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c04c825bdf1e946260cba325eeed993004051050
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Jan 13 18:15:37 2014 +0000

    xl: Always use "fast" migration resume protocol
    
    As Ian Campbell writes in http://bugs.xenproject.org/xen/bug/30:
    
      There are two mechanisms by which a suspend can be aborted and the
      original domain resumed.
    
      The older method is that the toolstack resets a bunch of state (see
      tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
      restarts the domain. The domain will see HYPERVISOR_suspend return 0
      and will continue without any realisation that it is actually
      running in the original domain and not in a new one. This method is
      supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
      but it is not.
    
      The other method is newer and in this case the toolstack arranges
      that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
      domain will observe this and realise that it has been restarted in
      the same domain and will behave accordingly. This method is
      implemented, correctly AFAIK, by
      libxl_domain_resume(suspend_cancel=1).
    
    Attempting to use the old method without doing all of the work simply
    causes the guest to crash.  Implementing the work required for old
    method, or for checking that domains actually support the new method,
    is not feasible at this stage of the 4.4 release.
    
    So, always use the new method, without regard to the declarations of
    support by the guest.  This is a strict improvement: guests which do
    in fact support the new method will work, whereas ones which don't are
    no worse off.
    
    There are two call sites of libxl_domain_resume that need fixing, both
    in the migration error path.
    
    With this change I observe a correct and successful resumption of a
    Debian wheezy guest with a Linux 3.4.70 kernel after a migration
    attempt which I arranged to fail by nobbling the block hotplug script.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    CC: konrad.wilk@oracle.com
    CC: David Vrabel <david.vrabel@citrix.com>
    CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit c63868ff0164fa1927b0d39f9aef5c7074ee04e2
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Mon Jan 13 11:52:28 2014 +0000

    libxl: disallow PCI device assignment for HVM guest when PoD is enabled
    
    This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
    device assignment if PoD is enabled.").
    
    This change is restricted to HVM guest, as only HVM is relevant in the
    counterpart in Xend. We're late in release cycle so the change should
    only do what's necessary. Probably we can revisit it if we need to do
    the same thing for PV guest in the future.
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Cc: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <ian.jackson@eu.citrix.com>

commit c938f5414b2b7ffa5b6f65daa9f4522db360987b
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Jan 14 13:36:55 2014 +0000

    xen/arm: p2m: Correctly flush TLB in create_p2m_entries
    
    The p2m is shared between the VCPUs of each domain. Currently Xen only
    flushes the TLB on the local PCPU. This could result in a mismatch
    between the mappings in the p2m and the TLBs.
    
    Flush the TLB entries used by this domain on every PCPU. The flush can
    also be moved out of the loop because:
        - ALLOCATE: only called for dom0 RAM allocation, so the flush is
        never called.
        - INSERT: if valid == 1, we have replaced a page that already
        belongs to the domain, and a VCPU could write to the wrong page.
        This can happen for dom0 with the 1:1 mapping because the mapping
        is not removed from the p2m.
        - REMOVE: except for grant tables (replace_grant_host_mapping),
        each call to guest_physmap_remove_page is protected by the callers
        via get_page -> ... -> guest_physmap_remove_page -> ... ->
        put_page, so the page can't be allocated for another domain until
        the last put_page.
        - RELINQUISH: the domain is not running anymore, so we don't care.
    
    Also avoid leaking a foreign page if the function INSERTs a new
    mapping on top of a foreign mapping.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 8a9ab685bef64a50f58d99a4e8728b770289e9ef
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Jan 9 16:58:03 2014 +0000

    xen/arm: correct flush_tlb_mask behaviour
    
    On ARM, flush_tlb_mask is used in the common code:
        - alloc_heap_pages: the flush is only called if the newly
        allocated page was used by a domain before, so we only need to
        flush non-secure, non-hyp, inner-shareable TLB entries.
        - common/grant-table.c: every call to flush_tlb_mask is made with
        the current domain, so an inner-shareable TLB flush by the current
        VMID is enough.
    
    The current code only flushes the hypervisor TLB on the current PCPU.
    For now, flush non-secure, non-hyp TLBs on every PCPU.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 9b2691bdb499a3c2a136596658571056df1d42c8
Author: Anil Madhavapeddy <anil@recoil.org>
Date:   Sat Jan 11 23:33:25 2014 +0000

    libxl: ocaml: guard x86-specific functions behind an ifdef
    
    The various cpuid functions are not available on ARM, so this
    makes them raise an OCaml exception.  Omitting the functions
    completely results in a link failure in oxenstored due to the
    missing symbols, so this is preferable to the much bigger patch
    that would result from adding conditional compilation into the
    OCaml interfaces.
    
    With this patch, oxenstored can successfully start a domain on
    Xen/ARM.
    
    Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
    Acked-by: David Scott <dave.scott@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit f91dc2000a86d0af00f52b527fd28df00070f9cb
Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Jan 10 16:57:00 2014 -0500

    xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.
    
    Without this gdb does not report an error.
    
    With this patch and using a 1G hvm domU:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Cannot access memory at address 0x6ae9168b
    
    Drop output of iop->remain because it most likely will be zero.
    This leads to a strange message:
    
    ERROR: failed to read 0 bytes. errno:14 rc:-1
    
    Add address to write error because it may be the only message
    displayed.
    
    Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
    error and so iop->remain will be zero.
    
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>

commit dfbd3d62cd4a8e6e51be7896300e59c9ee0b8f82
Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Jan 10 16:56:59 2014 -0500

    xg_read_mem: Report on error.
    
    I had coded this with XGERR, but gdb will try to read memory without
    a direct request from the user.  So the error message can be confusing.
    
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)



Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate/x10 fail REGR. vs. 24375

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24380

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  c04c825bdf1e946260cba325eeed993004051050
baseline version:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2

------------------------------------------------------------
People who touched revisions under test:
  Anil Madhavapeddy <anil@recoil.org>
  David Scott <dave.scott@eu.citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c04c825bdf1e946260cba325eeed993004051050
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Jan 13 18:15:37 2014 +0000

    xl: Always use "fast" migration resume protocol
    
    As Ian Campbell writes in http://bugs.xenproject.org/xen/bug/30:
    
      There are two mechanisms by which a suspend can be aborted and the
      original domain resumed.
    
      The older method is that the toolstack resets a bunch of state (see
      tools/python/xen/xend/XendDomainInfo.py resumeDomain) and then
      restarts the domain. The domain will see HYPERVISOR_suspend return 0
      and will continue without any realisation that it is actually
      running in the original domain and not in a new one. This method is
      supposed to be implemented by libxl_domain_resume(suspend_cancel=0)
      but it is not.
    
      The other method is newer and in this case the toolstack arranges
      that HYPERVISOR_suspend returns SUSPEND_CANCEL and restarts it. The
      domain will observe this and realise that it has been restarted in
      the same domain and will behave accordingly. This method is
      implemented, correctly AFAIK, by
      libxl_domain_resume(suspend_cancel=1).
    
    Attempting to use the old method without doing all of the work simply
    causes the guest to crash.  Implementing the work required for the old
    method, or checking that domains actually support the new method, is
    not feasible at this stage of the 4.4 release.
    
    So, always use the new method, without regard to the declarations of
    support by the guest.  This is a strict improvement: guests which do
    in fact support the new method will work, whereas ones which don't are
    no worse off.
    
    There are two call sites of libxl_domain_resume that need fixing, both
    in the migration error path.
    
    With this change I observe a correct and successful resumption of a
    Debian wheezy guest with a Linux 3.4.70 kernel after a migration
    attempt which I arranged to fail by nobbling the block hotplug script.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    CC: konrad.wilk@oracle.com
    CC: David Vrabel <david.vrabel@citrix.com>
    CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
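
    The difference between the two resume protocols can be illustrated with
    a small simulation; the names here (hypervisor_suspend, SUSPEND_CANCEL)
    are hypothetical stand-ins for the real hypercall interface, not Xen
    source:

    ```python
    # Illustrative simulation of the two suspend/resume protocols described
    # above.  All names are hypothetical stand-ins, not Xen source.

    SUSPEND_CANCEL = 1  # stand-in for the cancelled-suspend return value

    def hypervisor_suspend(toolstack_cancels):
        """Stand-in for the suspend hypercall.

        Old protocol: returns 0, and the guest cannot tell that it was
        resumed in the original domain rather than restored into a new one.
        New protocol: returns SUSPEND_CANCEL, so the guest knows it is
        still running in the original domain.
        """
        return SUSPEND_CANCEL if toolstack_cancels else 0

    def guest_resume(rc):
        if rc == SUSPEND_CANCEL:
            return "resumed-in-place"   # guest keeps its existing state
        return "resumed-as-new"         # guest must re-initialise state

    # The "fast" protocol always used by the patched xl:
    print(guest_resume(hypervisor_suspend(True)))   # resumed-in-place
    ```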

commit c63868ff0164fa1927b0d39f9aef5c7074ee04e2
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Mon Jan 13 11:52:28 2014 +0000

    libxl: disallow PCI device assignment for HVM guest when PoD is enabled
    
    This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
    device assignment if PoD is enabled.").
    
    This change is restricted to HVM guests, as only HVM is relevant in the
    Xend counterpart. We're late in the release cycle, so the change should
    only do what's necessary. We can revisit this if we need to do the same
    thing for PV guests in the future.
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Cc: Ian Campbell <ian.campbell@citrix.com>
    Cc: Ian Jackson <ian.jackson@eu.citrix.com>

commit c938f5414b2b7ffa5b6f65daa9f4522db360987b
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Jan 14 13:36:55 2014 +0000

    xen/arm: p2m: Correctly flush TLB in create_p2m_entries
    
    The p2m is shared between VCPUs for each domain. Currently Xen only
    flushes the TLB on the local PCPU. This could result in a mismatch
    between the mappings in the p2m and the TLBs.
    
    Flush TLB entries used by this domain on every PCPU. The flush can also be
    moved out of the loop because:
        - ALLOCATE: only called for dom0 RAM allocation, so the flush is never
        called
        - INSERT: if valid == 1, that would mean we have replaced a page that
        already belongs to the domain, and a VCPU could write to the wrong
        page. This can happen for dom0 with the 1:1 mapping, because the
        mapping is not removed from the p2m.
        - REMOVE: except for grant-table (replace_grant_host_mapping), each
        call to guest_physmap_remove_page is protected by the callers via a
        get_page -> ... -> guest_physmap_remove_page -> ... -> put_page, so
        the page can't be allocated for another domain until the last
        put_page.
        - RELINQUISH: the domain is not running anymore, so we don't care.
    
    Also avoid leaking a foreign page if the function INSERTs a new mapping
    on top of a foreign mapping.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
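
    The hoisting argument above can be sketched as follows; the function and
    callback names are hypothetical stand-ins, not the actual
    create_p2m_entries code:

    ```python
    # Illustrative sketch of hoisting the TLB flush out of the update loop.
    # write_p2m_entry / flush_domain_tlb are hypothetical stand-ins.

    def create_entries(entries, write_p2m_entry, flush_domain_tlb):
        """Apply all p2m updates, then flush once.

        Flushing after every entry would be correct but wasteful; per the
        case analysis above, no VCPU can legitimately rely on the stale
        mappings while the loop runs, so one flush at the end suffices.
        """
        for e in entries:
            write_p2m_entry(e)
        flush_domain_tlb()  # one flush, on every PCPU, for the whole batch

    calls = []
    create_entries([1, 2, 3], calls.append, lambda: calls.append("flush"))
    print(calls)  # [1, 2, 3, 'flush']
    ```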

commit 8a9ab685bef64a50f58d99a4e8728b770289e9ef
Author: Julien Grall <julien.grall@linaro.org>
Date:   Thu Jan 9 16:58:03 2014 +0000

    xen/arm: correct flush_tlb_mask behaviour
    
    On ARM, flush_tlb_mask is used in the common code:
        - alloc_heap_pages: the flush is only called if the newly allocated
        page was used by a domain before, so we only need to flush non-secure
        non-hyp inner-shareable TLB entries.
        - common/grant-table.c: every call to flush_tlb_mask is made with
        the current domain, so a TLB flush by current VMID inner-shareable
        is enough.
    
    The current code only flushes the hypervisor TLB on the current PCPU. For
    now, flush non-secure non-hyp TLBs on every PCPU.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 9b2691bdb499a3c2a136596658571056df1d42c8
Author: Anil Madhavapeddy <anil@recoil.org>
Date:   Sat Jan 11 23:33:25 2014 +0000

    libxl: ocaml: guard x86-specific functions behind an ifdef
    
    The various cpuid functions are not available on ARM, so this
    makes them raise an OCaml exception.  Omitting the functions
    completely results in a link failure in oxenstored due to the
    missing symbols, so this is preferable to the much bigger patch
    that would result from adding conditional compilation into the
    OCaml interfaces.
    
    With this patch, oxenstored can successfully start a domain on
    Xen/ARM.
    
    Signed-off-by: Anil Madhavapeddy <anil@recoil.org>
    Acked-by: David Scott <dave.scott@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
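
    The design choice here (keep the symbol but make it fail at call time,
    rather than omit it and break the link) can be sketched in Python terms;
    this is an analogy with invented names, not the actual OCaml binding
    code:

    ```python
    # Analogy for the ifdef-guarded stubs: the binding always exists, so
    # callers link/import cleanly, but it raises on unsupported platforms.
    # make_cpuid_binding and its return values are hypothetical.

    def make_cpuid_binding(arch):
        """Return a cpuid function for the given architecture.

        On non-x86 architectures the function still exists but raises when
        invoked, mirroring the guarded stubs added to the OCaml bindings,
        where omitting the symbols entirely broke the oxenstored link.
        """
        def cpuid_set(domid, config):
            if arch not in ("x86_32", "x86_64"):
                raise RuntimeError("cpuid: unavailable on %s" % arch)
            return ("cpuid-set", domid, config)
        return cpuid_set

    x86 = make_cpuid_binding("x86_64")
    arm = make_cpuid_binding("arm32")
    print(x86(1, "host"))       # ('cpuid-set', 1, 'host')
    try:
        arm(1, "host")
    except RuntimeError as e:
        print(e)                # cpuid: unavailable on arm32
    ```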

commit f91dc2000a86d0af00f52b527fd28df00070f9cb
Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Jan 10 16:57:00 2014 -0500

    xg_main: If XEN_DOMCTL_gdbsx_guestmemio fails then force error.
    
    Without this gdb does not report an error.
    
    With this patch and using a 1G hvm domU:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Cannot access memory at address 0x6ae9168b
    
    Drop the output of iop->remain, because it will most likely be zero,
    which leads to a strange message:
    
    ERROR: failed to read 0 bytes. errno:14 rc:-1
    
    Add the address to the write error, because it may be the only message
    displayed.
    
    Note: currently XEN_DOMCTL_gdbsx_guestmemio does not change 'iop' on
    error and so iop->remain will be zero.
    
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
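
    The error-forcing behaviour described above can be sketched as follows;
    read_guest_mem, guestmemio and the message text are illustrative
    stand-ins, not the actual xg_main.c code:

    ```python
    # Illustrative sketch: if the memory-io op fails, force a hard error so
    # the debugger reports it.  All names here are hypothetical stand-ins.

    def read_guest_mem(guestmemio, addr, length):
        """Return the number of bytes read, or raise on failure.

        On error the op does not update its state, so 'remain' stays zero
        and is useless in the message; report the address instead, since it
        may be the only message displayed.
        """
        rc, remain = guestmemio(addr, length)
        if rc != 0:
            raise IOError("failed to read %d bytes at 0x%x rc:%d"
                          % (length, addr, rc))
        return length - remain

    ok = lambda addr, length: (0, 0)          # success: nothing remaining
    bad = lambda addr, length: (-1, 0)        # failure: rc < 0, remain == 0
    print(read_guest_mem(ok, 0x6ae9168b, 2))  # 2
    try:
        read_guest_mem(bad, 0x6ae9168b, 2)
    except IOError as e:
        print(e)  # failed to read 2 bytes at 0x6ae9168b rc:-1
    ```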

commit dfbd3d62cd4a8e6e51be7896300e59c9ee0b8f82
Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Jan 10 16:56:59 2014 -0500

    xg_read_mem: Report on error.
    
    I had originally coded this with XGERR, but gdb will try to read memory
    without a direct request from the user, so the error message could be
    confusing.
    
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 04:56:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 04:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3f0C-0007WX-55; Thu, 16 Jan 2014 04:56:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3f0A-0007WQ-6T
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 04:56:22 +0000
Received: from [85.158.137.68:32262] by server-7.bemta-3.messagelabs.com id
	CE/6B-27599-57667D25; Thu, 16 Jan 2014 04:56:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389848178!8256824!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8615 invoked from network); 16 Jan 2014 04:56:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 04:56:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,666,1384300800"; d="scan'208";a="93354525"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 04:56:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 15 Jan 2014 23:56:17 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3f04-000223-Ti;
	Thu, 16 Jan 2014 04:56:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3f03-0001Ei-Up;
	Thu, 16 Jan 2014 04:56:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24384-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 04:56:15 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24384: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5099914858673814752=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5099914858673814752==
Content-Type: text/plain

flight 24384 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24384/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24311
 test-amd64-i386-freebsd10-i386  7 freebsd-install        running [st=running!]

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24315
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24315

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
baseline version:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937

------------------------------------------------------------
People who touched revisions under test:
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Honggang Li <honli@redhat.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Michael Chan <mchan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nat Gurumoorthy <natg@google.com>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Salva Peiró <speiro@ai2.upv.es>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Simon Horman <horms+renesas@verge.net.au>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 776 lines long.)


--===============5099914858673814752==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5099914858673814752==--

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5099914858673814752==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 06:00:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:00:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3fzv-0002Kg-Bh; Thu, 16 Jan 2014 06:00:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3fzt-0002JT-B5
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 06:00:09 +0000
Received: from [85.158.137.68:63594] by server-1.bemta-3.messagelabs.com id
	E7/BB-29598-86577D25; Thu, 16 Jan 2014 06:00:08 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1389852006!9436493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14173 invoked from network); 16 Jan 2014 06:00:07 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 06:00:07 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0G5xxGN030742
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 06:00:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G5xwNo010577
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 16 Jan 2014 05:59:59 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G5xv8S026625; Thu, 16 Jan 2014 05:59:58 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 21:59:57 -0800
Message-ID: <52D77559.5010106@oracle.com>
Date: Thu, 16 Jan 2014 13:59:53 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<52D66F11.204@citrix.com>
	<20140115114208.GK5698@zion.uk.xensource.com>
	<52D67677.4050407@citrix.com> <52D69864.9030207@oracle.com>
	<52D6A5B7.40503@citrix.com>
In-Reply-To: <52D6A5B7.40503@citrix.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: netdev@vger.kernel.org, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/15 23:13, David Vrabel wrote:
> On 15/01/14 14:17, annie li wrote:
>> I am thinking of two ways, and they can be implemented in new patches.
>> 1. If gnttab_end_foreign_access_ref succeeds, then kfree_skb is called
>> to free skb. Otherwise, using gnttab_end_foreign_access to release ref
>> and pages.
>> 2. Add a similar deferred way of gnttab_end_foreign_access in
>> gnttab_end_foreign_access_ref.
> Something like the following (untested!) patch is what I'm thinking of.
>
> Fixups to existing drivers (except netfront) are included but may not be
> quite correct.

Part of it implements option 1 above: if gnttab_end_foreign_access_ref 
fails, gnttab_end_foreign_access is used to end the grant. Another part 
splits __free_page out of gnttab_end_foreign_access. This is an 
improvement to the current grant-end-access path, and every driver that 
calls gnttab_end_foreign_access needs a corresponding change.
I think it can be a separate patch from my cleanup patch, which fixes 
the grant leak in netfront.
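The ownership rule being discussed (the caller frees the page; the grant layer only pins it while the backend still has access) can be modelled outside the kernel. This is a hypothetical userspace sketch: the names mirror the patch, but get_page/put_page and the gnttab-style helpers here are simplified stand-ins, not the real kernel functions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int pages_freed;                 /* counts completed frees */

struct page { int refcount; };

static void get_page(struct page *p)
{
        p->refcount++;
}

static bool put_page(struct page *p)    /* true once the page is freed */
{
        if (--p->refcount == 0) {
                free(p);
                pages_freed++;
                return true;
        }
        return false;
}

/* Stand-in for gnttab_end_foreign_access_ref(): fails while the
 * backend still has the grant mapped. */
static bool end_foreign_access_ref(bool backend_still_mapped)
{
        return !backend_still_mapped;
}

static struct page *deferred;           /* one-slot "deferred list" */

/* Proposed contract: never free the page here -- the caller owns it.
 * If the grant is still in use, take an extra reference so the page
 * survives until the deferred worker drops it. */
static void end_foreign_access(struct page *p, bool backend_still_mapped)
{
        if (end_foreign_access_ref(backend_still_mapped))
                return;
        get_page(p);
        deferred = p;
}

/* Models the deferred worker: runs later, once the grant is idle. */
static void handle_deferred(void)
{
        if (deferred) {
                put_page(deferred);
                deferred = NULL;
        }
}
```

With this split the caller can always release its own reference immediately (e.g. via kfree_skb in netfront); the page is only actually freed once both the caller's reference and any deferred reference are gone.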

Thanks
Annie
>
> 8<----------
>  From 76c254c8e020f4427e8f37c867622f0bfd5ac85f Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Wed, 15 Jan 2014 15:04:52 +0000
> Subject: [PATCH] HACK! xen/gnttab: make gnttab_end_foreign_access() more useful
>
> This is UNTESTED and is an example of the sort of change I'm looking
> for.
>
> Freeing the page in gnttab_end_foreign_access() means it cannot be
> used where the pages are freed in some other way (e.g., as part of a
> kfree_skb()).
>
> Leave the freeing of the page to the caller.  If the page still has
> foreign users, take an additional reference before adding it to the
> deferred list.
>
> Hack all the users of the call to do something resembling the right
> thing.  Not quite sure on the blkfront changes.
> ---
>   drivers/block/xen-blkfront.c    |   22 +++++++++++++---------
>   drivers/char/tpm/xen-tpmfront.c |    3 +--
>   drivers/pci/xen-pcifront.c      |    3 +--
>   drivers/xen/grant-table.c       |   19 +++++++++++--------
>   include/xen/grant_table.h       |   11 ++++++-----
>   5 files changed, 32 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..a586496 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -504,7 +504,7 @@ static void gnttab_handle_deferred(unsigned long unused)
>   			if (entry->page) {
>   				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
>   					 entry->ref, page_to_pfn(entry->page));
> -				__free_page(entry->page);
> +				put_page(entry->page);
>   			} else
>   				pr_info("freeing g.e. %#x\n", entry->ref);
>   			kfree(entry);
> @@ -555,15 +555,18 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
>   }
>
>   void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
> -			       unsigned long page)
> +			       struct page *page)
>   {
> -	if (gnttab_end_foreign_access_ref(ref, readonly)) {
> +	if (gnttab_end_foreign_access_ref(ref, readonly))
>   		put_free_entry(ref);
> -		if (page != 0)
> -			free_page(page);
> -	} else
> -		gnttab_add_deferred(ref, readonly,
> -				    page ? virt_to_page(page) : NULL);
> +	else {
> +		/*
> +		 * Take a reference to the page so it's not freed
> +		 * while the foreign domain still has access to it.
> +		 */
> +		get_page(page);
> +		gnttab_add_deferred(ref, readonly, page);
> +	}
>   }
>   EXPORT_SYMBOL_GPL(gnttab_end_foreign_access);
>
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..ffa3ce6 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -91,13 +91,14 @@ bool gnttab_trans_grants_available(void);
>   int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);
>
>   /*
> - * Eventually end access through the given grant reference, and once that
> - * access has been ended, free the given page too.  Access will be ended
> - * immediately iff the grant entry is not in use, otherwise it will happen
> - * some time later.  page may be 0, in which case no freeing will occur.
> + * Eventually end access through the given grant reference, if the
> + * page is still in use an additional reference is taken and released
> + * when access has ended.  Access will be ended immediately iff the
> + * grant entry is not in use, otherwise it will happen some time
> + * later.
>    */
>   void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
> -			       unsigned long page);
> +			       struct page *page);
>
>   int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);
>
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index c4a4c90..45a2a01 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -931,14 +931,16 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>   	if (!list_empty(&info->grants)) {
>   		list_for_each_entry_safe(persistent_gnt, n,
>   		                         &info->grants, node) {
> +			struct page *page = pfn_to_page(persistent_gnt->pfn);
> +
>   			list_del(&persistent_gnt->node);
>   			if (persistent_gnt->gref != GRANT_INVALID_REF) {
>   				gnttab_end_foreign_access(persistent_gnt->gref,
> -				                          0, 0UL);
> +				                          0, page);
>   				info->persistent_gnts_c--;
>   			}
>   			if (info->feature_persistent)
> -				__free_page(pfn_to_page(persistent_gnt->pfn));
> +				__free_page(page);
>   			kfree(persistent_gnt);
>   		}
>   	}
> @@ -970,10 +972,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>   		       info->shadow[i].req.u.indirect.nr_segments :
>   		       info->shadow[i].req.u.rw.nr_segments;
>   		for (j = 0; j < segs; j++) {
> +			struct page *page;
> +
>   			persistent_gnt = info->shadow[i].grants_used[j];
> -			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
> +			page = pfn_to_page(persistent_gnt->pfn);
> +			gnttab_end_foreign_access(persistent_gnt->gref, 0, page);
>   			if (info->feature_persistent)
> -				__free_page(pfn_to_page(persistent_gnt->pfn));
> +				__free_page(page);
>   			kfree(persistent_gnt);
>   		}
>
> @@ -1010,10 +1015,11 @@ free_shadow:
>   	/* Free resources associated with old device channel. */
>   	if (info->ring_ref != GRANT_INVALID_REF) {
>   		gnttab_end_foreign_access(info->ring_ref, 0,
> -					  (unsigned long)info->ring.sring);
> +					  virt_to_page(info->ring.sring));
>   		info->ring_ref = GRANT_INVALID_REF;
>   		info->ring.sring = NULL;
>   	}
> +	free_page((unsigned long)info->ring.sring);
>   	if (info->irq)
>   		unbind_from_irqhandler(info->irq, info);
>   	info->evtchn = info->irq = 0;
> @@ -1053,7 +1059,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
>   	}
>   	/* Add the persistent grant into the list of free grants */
>   	for (i = 0; i < nseg; i++) {
> -		if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
> +		if (gnttab_end_foreign_access_ref(s->grants_used[i]->gref, 0)) {
>   			/*
>   			 * If the grant is still mapped by the backend (the
>   			 * backend has chosen to make this grant persistent)
> @@ -1072,14 +1078,13 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
>   			 * so it will not be picked again unless we run out of
>   			 * persistent grants.
>   			 */
> -			gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
>   			s->grants_used[i]->gref = GRANT_INVALID_REF;
>   			list_add_tail(&s->grants_used[i]->node, &info->grants);
>   		}
>   	}
>   	if (s->req.operation == BLKIF_OP_INDIRECT) {
>   		for (i = 0; i < INDIRECT_GREFS(nseg); i++) {
> -			if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
> +			if (gnttab_end_foreign_access_ref(s->indirect_grants[i]->gref, 0)) {
>   				if (!info->feature_persistent)
>   					pr_alert_ratelimited("backed has not unmapped grant: %u\n",
>   							     s->indirect_grants[i]->gref);
> @@ -1088,7 +1093,6 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
>   			} else {
>   				struct page *indirect_page;
>
> -				gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
>   				/*
>   				 * Add the used indirect page back to the list of
>   				 * available pages for indirect grefs.
> diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
> index c8ff4df..00d1132 100644
> --- a/drivers/char/tpm/xen-tpmfront.c
> +++ b/drivers/char/tpm/xen-tpmfront.c
> @@ -315,8 +315,7 @@ static void ring_free(struct tpm_private *priv)
>   	if (priv->ring_ref)
>   		gnttab_end_foreign_access(priv->ring_ref, 0,
>   				(unsigned long)priv->shr);
> -	else
> -		free_page((unsigned long)priv->shr);
> +	free_page((unsigned long)priv->shr);
>
>   	if (priv->chip && priv->chip->vendor.irq)
>   		unbind_from_irqhandler(priv->chip->vendor.irq, priv);
> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index f7197a7..253a129 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -759,8 +759,7 @@ static void free_pdev(struct pcifront_device *pdev)
>   	if (pdev->gnt_ref != INVALID_GRANT_REF)
>   		gnttab_end_foreign_access(pdev->gnt_ref, 0 /* r/w page */,
>   					  (unsigned long)pdev->sh_info);
> -	else
> -		free_page((unsigned long)pdev->sh_info);
> +	free_page((unsigned long)pdev->sh_info);
>
>   	dev_set_drvdata(&pdev->xdev->dev, NULL);
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 06:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3g4l-0002Tw-33; Thu, 16 Jan 2014 06:05:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3g4i-0002Tq-Rf
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 06:05:09 +0000
Received: from [85.158.143.35:53704] by server-1.bemta-4.messagelabs.com id
From xen-devel-bounces@lists.xen.org Thu Jan 16 06:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3g4l-0002Tw-33; Thu, 16 Jan 2014 06:05:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3g4i-0002Tq-Rf
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 06:05:09 +0000
Received: from [85.158.143.35:53704] by server-1.bemta-4.messagelabs.com id
	B0/74-02132-49677D25; Thu, 16 Jan 2014 06:05:08 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389852306!12045227!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22185 invoked from network); 16 Jan 2014 06:05:07 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 06:05:07 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0G652X7003038
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 06:05:03 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G651R5021645
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 06:05:01 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0G6507E014965; Thu, 16 Jan 2014 06:05:01 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 22:05:00 -0800
Message-ID: <52D77684.20808@oracle.com>
Date: Thu, 16 Jan 2014 14:04:52 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Miller <davem@davemloft.net>
References: <1389307718-2845-1-git-send-email-Annie.li@oracle.com>
	<20140114.152825.2110618560908841742.davem@davemloft.net>
In-Reply-To: <20140114.152825.2110618560908841742.davem@davemloft.net>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/15 7:28, David Miller wrote:
> From: Annie Li <Annie.li@oracle.com>
> Date: Fri, 10 Jan 2014 06:48:38 +0800
>
>> Current netfront only grants pages for grant copy, not for grant transfer, so
>> remove corresponding transfer code and add receiving copy code in
>> xennet_release_rx_bufs.
>>
>> Signed-off-by: Annie Li <Annie.li@oracle.com>
> If a Xen networking driver maintainer would review this, I'd appreciate it.
>

I will repost a new patch with improved comments. This patch mainly 
fixes a grant-leak issue in netfront; improvements to 
gnttab_end_foreign_access_ref and gnttab_end_foreign_access can be 
made in a separate patch.

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 06:41:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3gdX-0004GH-86; Thu, 16 Jan 2014 06:41:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3gdW-0004GC-31
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 06:41:06 +0000
Received: from [193.109.254.147:30332] by server-6.bemta-14.messagelabs.com id
	B4/31-14958-10F77D25; Thu, 16 Jan 2014 06:41:05 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389854463!8873024!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28435 invoked from network); 16 Jan 2014 06:41:04 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	16 Jan 2014 06:41:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 15 Jan 2014 22:37:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,666,1384329600"; d="scan'208";a="467501425"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 15 Jan 2014 22:41:02 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:41:01 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:41:01 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Thu, 16 Jan 2014 14:41:00 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHO8SqkaA8yi+K0sk+fDGrKfVQCipqHKAWA
Date: Thu, 16 Jan 2014 06:40:59 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE168@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
In-Reply-To: <20131204195147.GA3833@pegasus.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote on 2013-12-05:
> Hey,
> 
> I just started noticing it today - with qemu-xen (tip is commit
> b97307ecaad98360f41ea36cd9674ef810c4f8cf
>     xen_disk: mark ioreq as mapped before unmapping in error case)
> when I try to pass in a PCI device at bootup it blows up with:
> 
Hi, Konrad,

I cannot reproduce this issue with the same qemu-xen. Is there any special step needed?

Here is the NIC info:
08:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet Controller (Copper) (rev 06)
  Subsystem: Intel Corporation PRO/1000 PT Desktop Adapter
  Flags: bus master, fast devsel, latency 0, IRQ 34
  Memory at d2440000 (32-bit, non-prefetchable) [size=128K]
  Memory at d2420000 (32-bit, non-prefetchable) [size=128K]
  I/O ports at 5000 [size=32]
  Expansion ROM at d2400000 [disabled] [size=128K]
  Capabilities: [c8] Power Management version 2
  Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
  Capabilities: [e0] Express Endpoint, MSI 00
  Capabilities: [100] Advanced Error Reporting
  Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-1e-8e-97
  Kernel driver in use: pciback
  Kernel modules: e1000e


BTW, is the fix already in QEMU upstream?

> char device redirected to /dev/pts/2 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40050000
> CPU #0:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> CPU #1:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> 
> 
> ...
> -bash-4.1# xl dmesg | tail
> [ 2788.038463] xen_pciback: vpci(d3) [2013-12-04 19:43:40] System
> requested SeaBIOS
> : 0000:01:00.1: (d3) [2013-12-04 19:43:40] CPU speed is 3093 MHz
> assign to virtua(d3) [2013-12-04 19:43:40] Relocating guest memory for
> lowmem MMIO space disabled l slot 0 [ 2788.076396] device vif3.0
> entered promiscuous mode
> 
> If I don't have a 'pci=' stanza in my guest config it boots just fine.
> 
> -bash-4.1# more /vm.cfg
> builder='hvm'
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r' ]
> memory=1024
> boot="d"
> vcpus=2
> serial="pty"
> vnclisten="0.0.0.0"
> name="latest"
> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> on_crash="preserve"
> pci=["01:00.0"]
> 
> Any ideas?
> 
> This is with today's Xen and 3.13-rc2. The device in question is
> -bash-4.1# lspci -s 01:00.0 -v
> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>         Flags: fast devsel, IRQ 16
>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>         I/O ports at e020 [disabled] [size=32]
>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>         Expansion ROM at fb400000 [disabled] [size=4M]
>         Capabilities: [40] Power Management version 3
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
>         Capabilities: [a0] Express Endpoint, MSI 00
>         Capabilities: [100] Advanced Error Reporting
>         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
>         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
>         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
>         Kernel driver in use: pciback
>         Kernel modules: igb
> 
> Oh and of course it boots with the traditional QEMU.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 06:54:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3gqk-0004ns-GB; Thu, 16 Jan 2014 06:54:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3gqi-0004nl-IS
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 06:54:44 +0000
Received: from [193.109.254.147:26253] by server-8.bemta-14.messagelabs.com id
	B9/D9-30921-33287D25; Thu, 16 Jan 2014 06:54:43 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389855281!7689177!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23394 invoked from network); 16 Jan 2014 06:54:42 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-27.messagelabs.com with SMTP;
	16 Jan 2014 06:54:42 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 15 Jan 2014 22:54:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,666,1384329600"; d="scan'208";a="439656525"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 15 Jan 2014 22:54:16 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:54:16 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:54:16 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Thu, 16 Jan 2014 14:54:12 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel
	<xen-devel@lists.xenproject.org>
Thread-Topic: [Xen-devel] Xen 4.4 development update
Thread-Index: AQHPDHQAeg19CS8yYECwAi2GfY+HYpqG9b0A
Date: Thu, 16 Jan 2014 06:54:12 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote on 2014-01-08:
> I'm filling in for George while he is on vacation and travelling to a conference.
> I'm still coming up to speed wrt what is going on with this release so
> please do correct me when I'm wrong. George will be back on 20 January.
> 
> This information will be mirrored on the Xen 4.4 Roadmap wiki page:
>  http://wiki.xen.org/wiki/Xen_Roadmap/4.4
> We tagged 4.4.0-rc1 on 19 December. Based on the conversation we had last
> time and on George's final comments in [1] I think this means that PVH
> dom0 support has not made the cut for 4.4, which is a shame but there
> is plenty of good functionality (including PVH domU support) in there.
> 
> [1]
> http://bugs.xenproject.org/xen/mid/%3C52B05C0A.4040404@eu.citrix.com%3E
> 
> = Timeline =
> 
> Here is our current timeline based on a 6-month release:
> 
> * Feature freeze: 18 October 2013
> * Code freezing point: 18 November 2013
> * First RCs: 6 December 2013  <== WE ARE HERE
> * Release: 21 January 2014
> 
> Last updated: 8 January 2014
> 
> == Completed ==
> 
> * Event channel scalability (FIFO event channels)
> 
> * Non-udev scripts for driver domains (non-Linux driver domains)
> 
> * Multi-vector PCI MSI (Hypervisor side)
> 
> * Improved Spice support on libxl
>  - Added Spice vdagent support
>  - Added Spice clipboard sharing support
>  - Spice usbredirection support for upstream qemu
> * PVH domU (experimental only)
> 
> * pvgrub2 checked into grub upstream
> 
> * ARM64 guest
> 
> * Guest EFI booting (tianocore)
> 
> * kexec
> 
> * Testing: Xen on ARM
> 
> * Update to SeaBIOS 1.7.3.1
> 
> * Update to qemu 1.6
> 
> * SWIOTLB (in Linux 3.13)
> 
> * Disk: indirect descriptors (in 3.11)
> 
> * Reworked ocaml bindings

Can I say nested virtualization is also well supported in Xen 4.4?

> 
> == Resolved since last update ==
> 
> == Open ==
> 
> * xl support for vnc and vnclisten options with PV guests
>   http://bugs.xenproject.org/xen/bug/25
>  status: V4 patch posted. Should go in.
>  Blocker?
> * libxl / xl does not handle failure of remote qemu gracefully
>> Related to http://bugs.xenproject.org/xen/bug/29
>> Easiest way to reproduce:
>>  - set "vncunused=0" and do a local migrate
>>  - The "remote" qemu will fail because the vnc port is in use
>> The failure isn't the problem, but everything being stuck
> afterwards is  Ian J investigating
> 
> * xl needs to disallow PoD with PCI passthrough
>> see
> http://xen.1045712.n5.nabble.com/PATCH-VT-d-Dis-allow-PCI-device-assignment-if-PoD-is-enabled-td2547788.html
> 
> * qemu-upstream not freeing pirq
>> http://www.gossamer-threads.com/lists/xen/devel/281498
>> http://marc.info/?l=xen-devel&m=137265766424502
>> status: patches posted; latest patches need testing  Not a blocker.
> * Race in PV shutdown between tool detection and shutdown watch
>   http://www.gossamer-threads.com/lists/xen/devel/282467
>> Nothing to do with ACPI
>> status: Patches posted
>> Not a blocker.
> * xl does not support specifying virtual function for passthrough device
>   http://bugs.xenproject.org/xen/bug/22
>  Too much work to be a blocker.
> * xl does not handle migrate interruption gracefully
>> If you start a localhost migrate, and press "Ctrl-C" in the middle,
>> you get two hung domains
>> Ian J investigated -- can of worms, too big to be a blocker for 4.4
> * Win2k3 SP2 RTC infinite loops
>> Regression introduced late in Xen-4.3 development
>    owner: andrew.cooper@citrix
>    status: patches posted, undergoing review. ( v2 ID
> 1386241748-9617-1-git-send-email-andrew.cooper3@citrix.com )
> 
>> andyhhp: my proposed RTC fixes break migrate from older versions of
>> Xen, so I have to redesign it from scratch. no way it is going to
>> be ready for 4.4
> 
> * HPET interrupt stack overflow (when using hpet_broadcast mode and
> MSI capable HPETs)
>   owner: andyh@citrix
>   status: patches posted, undergoing review iteration.
>> andyhhp: I have more work to do on the HPET series
>> andyhhp: no way it is going to be ready or safe for 4.4
> 
> * PCI hole resize support hvmloader/qemu-traditional/qemu-upstream
> with PCI/GPU passthrough
>> http://bugs.xenproject.org/xen/bug/28
>> http://lists.xen.org/archives/html/xen-devel/2013-05/msg02813.html
>> Where Stefano writes:
>> 2) for Xen 4.4 rework the two patches above and improve
>> i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not
>> enough, it also needs to be able to resize the system memory region
>> (xen.ram) to make room for the bigger pci_hole
> 
>   status: not going to be fixed for 4.4 either. Created bug #28.
> * qemu memory leak?
>> http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html
> 
> * qemu-* parses "008" as octal in USB bus.addr format
>> http://bugs.xenproject.org/xen/bug/15
>> just needs documenting
>   Anthony Perard to patch docs
> * osstest windows-install failures
>> http://bugs.xenproject.org/xen/bug/29
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/
>   Anthony and/or Jan investigating
> === Big ticket items ===
> 
> * PVH dom0 (w/ Linux)
>   blocker
>   owner: mukesh@oracle, george@citrix
>   status (Linux): Acked, waiting for ABI to be nailed down
>   status (Xen): v6 posted; no longer considered a blocker
> * libvirt/libxl integration (external)
>  - owner: jfehlig@suse, dario@citrix
>  - patches posted (should be released before 4.4)
>   - migration
>   - PCI pass-through
>  - In progress
>   - integration w/ libvirt's lock manager
>   - improved concurrency
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 06:54:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 06:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3gqk-0004ns-GB; Thu, 16 Jan 2014 06:54:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3gqi-0004nl-IS
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 06:54:44 +0000
Received: from [193.109.254.147:26253] by server-8.bemta-14.messagelabs.com id
	B9/D9-30921-33287D25; Thu, 16 Jan 2014 06:54:43 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389855281!7689177!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23394 invoked from network); 16 Jan 2014 06:54:42 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-27.messagelabs.com with SMTP;
	16 Jan 2014 06:54:42 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 15 Jan 2014 22:54:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,666,1384329600"; d="scan'208";a="439656525"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 15 Jan 2014 22:54:16 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:54:16 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 15 Jan 2014 22:54:16 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Thu, 16 Jan 2014 14:54:12 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel
	<xen-devel@lists.xenproject.org>
Thread-Topic: [Xen-devel] Xen 4.4 development update
Thread-Index: AQHPDHQAeg19CS8yYECwAi2GfY+HYpqG9b0A
Date: Thu, 16 Jan 2014 06:54:12 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
In-Reply-To: <1389186984.4883.67.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote on 2014-01-08:
> I'm filling in for George while he is on vacation and travelling to a conference etc.
> I'm still coming up to speed wrt what is going on with this release so
> please do correct me when I'm wrong. George will be back on 20 January.
> 
> This information will be mirrored on the Xen 4.4 Roadmap wiki page:
>  http://wiki.xen.org/wiki/Xen_Roadmap/4.4
> We tagged 4.4.0-rc1 on 19 December. Based on the conversation we had last
> time and on George's final comments in [1] I think this means that PVH
> dom0 support has not made the cut for 4.4, which is a shame but there
> is plenty of good functionality (including PVH domU support) in there.
> 
> [1]
> http://bugs.xenproject.org/xen/mid/%3C52B05C0A.4040404@eu.citrix.com%3E
> 
> = Timeline =
> 
> Here is our current timeline based on a 6-month release:
> 
> * Feature freeze: 18 October 2013
> * Code freezing point: 18 November 2013
> * First RCs: 6 December 2013  <== WE ARE HERE
> * Release: 21 January 2014
> 
> Last updated: 8 January 2014
> 
> == Completed ==
> 
> * Event channel scalability (FIFO event channels)
> 
> * Non-udev scripts for driver domains (non-Linux driver domains)
> 
> * Multi-vector PCI MSI (Hypervisor side)
> 
> * Improved Spice support on libxl
>  - Added Spice vdagent support
>  - Added Spice clipboard sharing support
>  - Spice usbredirection support for upstream qemu
> * PVH domU (experimental only)
> 
> * pvgrub2 checked into grub upstream
> 
> * ARM64 guest
> 
> * Guest EFI booting (tianocore)
> 
> * kexec
> 
> * Testing: Xen on ARM
> 
> * Update to SeaBIOS 1.7.3.1
> 
> * Update to qemu 1.6
> 
> * SWIOTLB (in Linux 3.13)
> 
> * Disk: indirect descriptors (in 3.11)
> 
> * Reworked ocaml bindings

Can I say that nested virtualization is also well supported in Xen 4.4?

> 
> == Resolved since last update ==
> 
> == Open ==
> 
> * xl support for vnc and vnclisten options with PV guests
>> http://bugs.xenproject.org/xen/bug/25
>  status: V4 patch posted. Should go in.
>  Blocker?
> * libxl / xl does not handle failure of remote qemu gracefully
>> Related to http://bugs.xenproject.org/xen/bug/29
>> Easiest way to reproduce:
>>  - set "vncunused=0" and do a local migrate
>>  - The "remote" qemu will fail because the vnc port is in use
>> The failure isn't the problem, but everything being stuck afterwards is
>  Ian J investigating
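[For reference, the repro steps above can be sketched as a guest config fragment; option names are per xl.cfg, and "guest1" below is a placeholder domain name:]

```
# xl.cfg fragment: pin the VNC port so the incoming qemu cannot bind it
vnc = 1
vncunused = 0    # do not fall back to the next free port
vncdisplay = 0   # always display 0, i.e. TCP port 5900
```

With that config, `xl migrate guest1 localhost` should make the "remote" qemu fail to bind port 5900, which is already held by the outgoing qemu.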
> 
> * xl needs to disallow PoD with PCI passthrough
>> see
> http://xen.1045712.n5.nabble.com/PATCH-VT-d-Dis-allow-PCI-device-assignment-if-PoD-is-enabled-td2547788.html
> 
> * qemu-upstream not freeing pirq
>> http://www.gossamer-threads.com/lists/xen/devel/281498
>> http://marc.info/?l=xen-devel&m=137265766424502
>> status: patches posted; latest patches need testing
>> Not a blocker.
> * Race in PV shutdown between tool detection and shutdown watch
>> http://www.gossamer-threads.com/lists/xen/devel/282467
>> Nothing to do with ACPI
>> status: Patches posted
>> Not a blocker.
> * xl does not support specifying virtual function for passthrough device
>> http://bugs.xenproject.org/xen/bug/22
>  Too much work to be a blocker.
> * xl does not handle migrate interruption gracefully
>> If you start a localhost migrate, and press "Ctrl-C" in the middle,
>> you get two hung domains
>> Ian J investigated -- can of worms, too big to be a blocker for 4.4
> * Win2k3 SP2 RTC infinite loops
>> Regression introduced late in Xen-4.3 development
>    owner: andrew.cooper@citrix
>    status: patches posted, undergoing review. ( v2 ID
> 1386241748-9617-1-git-send-email-andrew.cooper3@citrix.com )
> 
>> andyhhp: my proposed RTC fixes break migrate from older versions of
>> Xen, so I have to redesign it from scratch. no way it is going to
>> be ready for 4.4
> 
> * HPET interrupt stack overflow (when using hpet_broadcast mode and
> MSI capable HPETs)
>   owner: andyh@citrix
>   status: patches posted, undergoing review iteration.
>> andyhhp: I have more work to do on the HPET series
>> andyhhp: no way it is going to be ready or safe for 4.4
> 
> * PCI hole resize support hvmloader/qemu-traditional/qemu-upstream
> with PCI/GPU passthrough
>> http://bugs.xenproject.org/xen/bug/28
>> http://lists.xen.org/archives/html/xen-devel/2013-05/msg02813.html
>> Where Stefano writes:
>> 2) for Xen 4.4 rework the two patches above and improve
>> i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not
>> enough, it also needs to be able to resize the system memory region
>> (xen.ram) to make room for the bigger pci_hole
> 
>   status: not going to be fixed for 4.4 either. Created bug #28.
> * qemu memory leak?
>> http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html
> 
> * qemu-* parses "008" as octal in USB bus.addr format
>> http://bugs.xenproject.org/xen/bug/15
>> just needs documenting
>   Anthony Perard to patch docs
> * osstest windows-install failures
>> http://bugs.xenproject.org/xen/bug/29
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/
>   Anthony and/or Jan investigating
> === Big ticket items ===
> 
> * PVH dom0 (w/ Linux)
>   blocker
>   owner: mukesh@oracle, george@citrix
>   status (Linux): Acked, waiting for ABI to be nailed down
>   status (Xen): v6 posted; no longer considered a blocker
> * libvirt/libxl integration (external)
>  - owner: jfehlig@suse, dario@citrix
>  - patches posted (should be released before 4.4)
>   - migration
>   - PCI pass-through
>  - In progress
>   - integration w/ libvirt's lock manager
>   - improved concurrency
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 07:19:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 07:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3hE2-0006Kd-7H; Thu, 16 Jan 2014 07:18:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3hE0-0006KY-Hw
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 07:18:48 +0000
Received: from [85.158.137.68:50593] by server-7.bemta-3.messagelabs.com id
	14/F2-27599-7D787D25; Thu, 16 Jan 2014 07:18:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389856725!8626373!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31619 invoked from network); 16 Jan 2014 07:18:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 07:18:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,666,1384300800"; d="scan'208";a="93377478"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 07:18:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 02:18:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3hDv-0002m2-2u;
	Thu, 16 Jan 2014 07:18:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3hDu-0001KY-3g;
	Thu, 16 Jan 2014 07:18:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24383-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 07:18:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24383: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2512929643686777218=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2512929643686777218==
Content-Type: text/plain

flight 24383 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24383/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 redhat-install    fail REGR. vs. 24349
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot            fail REGR. vs. 24349

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 24349
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============2512929643686777218==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2512929643686777218==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 07:49:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 07:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3hhO-0007U6-BR; Thu, 16 Jan 2014 07:49:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3hhM-0007U1-KH
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 07:49:08 +0000
Received: from [193.109.254.147:56355] by server-10.bemta-14.messagelabs.com
	id CD/A2-20752-3FE87D25; Thu, 16 Jan 2014 07:49:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389858547!11193194!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30843 invoked from network); 16 Jan 2014 07:49:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 07:49:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 07:49:06 +0000
Message-Id: <52D79CFF02000078001141F5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 07:49:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140110184151.GA20232@pegasus.dumpdata.com>
	<1389607803.8187.22.camel@kazak.uk.xensource.com>
	<52D3DC730200007800112FF6@nat28.tlf.novell.com>
	<1389613109.13654.43.camel@kazak.uk.xensource.com>
	<52D3E1500200007800113058@nat28.tlf.novell.com>
	<20140115200736.GB5201@phenom.dumpdata.com>
In-Reply-To: <20140115200736.GB5201@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, jun.nakajima@intel.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] 1GB hugepages and intel_xc_cpuid_policy by default
 disables it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 15.01.14 at 21:07, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> So.. it sounds to me like everybody is in the agreement that this is the
> right thing to do (enable it if the hypervisor has it enabled)?
> 
> And the next thing is actually come up with a patch to do some of this
> plumbing - naturally for Xen 4.5?

Yes, so would I think.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 07:52:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 07:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3hkt-0007th-B3; Thu, 16 Jan 2014 07:52:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W3hkr-0007tU-Qx
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 07:52:46 +0000
Received: from [193.109.254.147:6775] by server-1.bemta-14.messagelabs.com id
	73/BE-15600-DCF87D25; Thu, 16 Jan 2014 07:52:45 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389858763!11210545!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27598 invoked from network); 16 Jan 2014 07:52:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 07:52:44 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0G7qbBn016063
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 07:52:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G7qbxv005051
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 07:52:37 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0G7qbbD011899; Thu, 16 Jan 2014 07:52:37 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 15 Jan 2014 23:52:36 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Thu, 16 Jan 2014 07:57:08 +0800
Message-Id: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com,
	Annie Li <Annie.li@oracle.com>, david.vrabel@citrix.com,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch does two things:

* release the grant reference and skb for the rx path; this fixes a resource leak.
* clean up grant transfer code kept from the old netfront (2.6.18), which granted
pages for access/map and for transfer. Grant transfer is deprecated in the current
netfront, so remove the corresponding release code for transfer.

gnttab_end_foreign_access_ref may fail when the grant entry is currently in use
for reading or writing. This patch does not handle that case; handling for this
failure may be implemented in a separate patch.

Tests have been run with this patch.

V2: improve patch comments.

Signed-off-by: Annie Li <Annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   60 ++-----------------------------------------
 1 files changed, 3 insertions(+), 57 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index e59acb1..692589e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1134,78 +1134,24 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
 	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
+		if (ref == GRANT_INVALID_REF)
 			continue;
-		}
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
+		gnttab_end_foreign_access_ref(ref, 0);
 		gnttab_release_grant_reference(&np->gref_rx_head, ref);
 		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
-			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
-
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
-
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
-
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
-- 
1.7.6.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 08:24:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 08:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3iF3-00019u-Df; Thu, 16 Jan 2014 08:23:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3iF2-00019U-0J
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 08:23:56 +0000
Received: from [193.109.254.147:43309] by server-7.bemta-14.messagelabs.com id
	F2/FC-15500-B1797D25; Thu, 16 Jan 2014 08:23:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389860634!11218317!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22040 invoked from network); 16 Jan 2014 08:23:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 08:23:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 08:23:54 +0000
Message-Id: <52D7A5260200007800114253@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 08:23:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <chegger@amazon.de>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
	<52CE93D8.4080201@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE0A4@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE0A4@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 05:42, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Back to the IO emulation issue, your suggestion didn't solve the problem. So 
> how about my solution that not allow doing virtual vmentry/vmexit when there 
> is pending IO request not finished.

Sounds pretty reasonable to me (i.e. in line with striving towards
emulation being as close to real hardware behavior as possible).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 08:33:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 08:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3iNt-0001ta-F0; Thu, 16 Jan 2014 08:33:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W3iNs-0001tV-Rs
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 08:33:04 +0000
Received: from [193.109.254.147:49626] by server-7.bemta-14.messagelabs.com id
	8B/09-15500-04997D25; Thu, 16 Jan 2014 08:33:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389861182!11183256!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31964 invoked from network); 16 Jan 2014 08:33:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 08:33:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,666,1384300800"; d="scan'208";a="91283398"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 08:33:02 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 03:33:01 -0500
Message-ID: <52D7993D.3090708@citrix.com>
Date: Thu, 16 Jan 2014 09:33:01 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen.org <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <osstest-24382-mainreport@xen.org>
In-Reply-To: <osstest-24382-mainreport@xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [xen-unstable test] 24382: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 05:51, xen.org wrote:
> flight 24382 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24382/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-freebsd10-amd64 13 guest-localmigrate/x10 fail REGR. vs. 24375

The only relevant message I can find regarding this failure is from Qemu:

10000 when runstate is INMIGRATE

Which seems quite similar to the winxp failure in 24384:

xen_ram_alloc: do not alloc 1f800000 bytes of ram at 0 when runstate is
INMIGRATE
xen_ram_alloc: do not alloc 800000 bytes of ram at 1f800000 when
runstate is INMIGRATE
xen_ram_alloc: do not alloc 10000 bytes of ram at 20000000 when runstate
is INMIGRATE
xen_ram_alloc: do not alloc 40000 bytes of ram at 20010000 when runstate
is INMIGRATE


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:16:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3j3P-00040r-MG; Thu, 16 Jan 2014 09:15:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3j3O-00040m-C8
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 09:15:58 +0000
Received: from [193.109.254.147:53614] by server-12.bemta-14.messagelabs.com
	id 63/FB-13681-D43A7D25; Thu, 16 Jan 2014 09:15:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389863757!11215110!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3459 invoked from network); 16 Jan 2014 09:15:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 09:15:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 09:15:56 +0000
Message-Id: <52D7B158020000780011427E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 09:15:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 12/16] x86/VPMU: Handle PMU interrupts
 for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> @@ -82,7 +87,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
> -        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
> +    {
> +        int val = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);

"val" is a misleading name here.

> +
> +        /*
> +         * We may have received a PMU interrupt during WRMSR handling
> +         * and since do_wrmsr may load VPMU context we should save
> +         * (and unload) it again.
> +         */
> +        if ( !is_hvm_domain(current->domain) &&
> +             current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )

I'd suggest parenthesizing the operands of &.

> +        {
> +            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
> +            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);

What's this cast good for?

> +            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> +        }
> +        return val;
> +    }
>      return 0;
>  }
>  
> @@ -91,16 +112,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
> -        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> +    {
> +        int val = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> +
> +        if ( !is_hvm_domain(current->domain) &&
> +             current->arch.vpmu.xenpmu_data->pmu_flags)

Coding style (here and elsewhere).

> +    /* dom0 will handle this interrupt */
> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
> +    {
> +        if ( smp_processor_id() >= dom0->max_vcpus )
> +            return 0;
> +        v = dom0->vcpu[smp_processor_id()];
> +    }

Ugly new uses of "dom0". And the correlation between
smp_processor_id() and dom0->max_vcpus doesn't look sane
either.

> +
> +    vpmu = vcpu_vpmu(v);
> +    if ( !is_hvm_domain(v->domain) )
> +    {
> +        /* PV guest or dom0 is doing system profiling */
> +        void *p;
> +        struct cpu_user_regs *gregs;

const.

> +        int err;
> +
> +        if (v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED)
> +            return 1;
> +
> +        /* PV guest will be reading PMU MSRs from xenpmu_data */
> +        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> +        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
> +        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> +
> +        /* Store appropriate registers in xenpmu_data */
> +        p = &v->arch.vpmu.xenpmu_data->pmu.regs;

This and the code below should be done in a type safe manner if
at all possible. I.e. p should not be void *, but a union of pointers
or a pointer to a union.

> +        if ( is_pv_32bit_domain(current->domain) )
> +        {
> +            /*
> +             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> +             * and therefore we treat it the same way as a non-priviledged
> +             * PV 32-bit domain.
> +             */
> +            struct compat_cpu_user_regs cmp;
> +
> +            gregs = guest_cpu_user_regs();
> +            XLAT_cpu_user_regs(&cmp, gregs);
> +            memcpy(p, &cmp, sizeof(struct compat_cpu_user_regs));

And then there would be no point in using an intermediate variable
here.

> +        }
> +        else if ( (current->domain != dom0) && !is_idle_vcpu(current) )

Did you perhaps mean !is_control_domain() or
!is_hardware_domain()?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jEM-0004Z2-3G; Thu, 16 Jan 2014 09:27:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3jEK-0004Yp-3I
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 09:27:16 +0000
Received: from [85.158.143.35:33223] by server-2.bemta-4.messagelabs.com id
	27/FC-11386-3F5A7D25; Thu, 16 Jan 2014 09:27:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389864434!12082862!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11278 invoked from network); 16 Jan 2014 09:27:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 09:27:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 09:27:14 +0000
Message-Id: <52D7B3FE020000780011428E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 09:27:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-15-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-15-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v3 14/16] x86/VPMU: Save VPMU state for PV
 guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Save VPMU state during context switch for both HVM and PV guests unless we
> are in PMU privileged mode (i.e. dom0 is doing all profiling).

This description doesn't seem to be in line with ...

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1444,17 +1444,15 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>      }
>  
>      if (prev != next)
> -        update_runstate_area(prev);
> -
> -    if ( is_hvm_vcpu(prev) )
>      {
> -        if (prev != next)
> +        update_runstate_area(prev);
> +        if ( !(vpmu_mode & XENPMU_MODE_PRIV) || prev->domain != dom0 )

... this condition: vpmu_save() is being called when in privileged
mode and the switched out domain is other than Dom0 (yet above
you say all profiling is done by Dom0 in that mode).

Apart from that - the latter condition likely wants to become
!is_control_domain() to be in line with earlier patches (although,
as can be seen by an earlier similar comment of mine, you aren't
really consistent throughout your patches in this regard, which
needs to be fixed).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:36:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jMa-0005Jq-77; Thu, 16 Jan 2014 09:35:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3jMY-0005Jj-VZ
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 09:35:47 +0000
Received: from [193.109.254.147:51711] by server-11.bemta-14.messagelabs.com
	id 37/B5-20576-2F7A7D25; Thu, 16 Jan 2014 09:35:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389864945!11151768!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31879 invoked from network); 16 Jan 2014 09:35:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 09:35:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 09:35:45 +0000
Message-Id: <52D7B60002000078001142A2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 09:35:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-16-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1389036295-3877-16-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v3 15/16] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.01.14 at 20:24, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Add support for using NMIs as PMU interrupts

This needs significant extension, detailing how all the handling
involved is safe in NMI context.

> @@ -202,10 +227,14 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>              struct segment_register cs;
>  
>              gregs = guest_cpu_user_regs();
> -            hvm_get_segment_register(current, x86_seg_cs, &cs);
>  
>              memcpy(p, gregs, sizeof(struct cpu_user_regs));
> -            ((struct cpu_user_regs *)p)->cs = cs.attr.fields.dpl;
> +            /* This is unsafe in NMI context, we'll do it in softint handler */
> +            if ( vpmu_apic_vector != APIC_DM_NMI )

Even if correct at present, I'd strongly suggest against != here (and
below), in favor of &.

> +/* Process the softirq set by PMU NMI handler */
> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
> +
> +    if ( vpmu_mode & XENPMU_MODE_PRIV ||
> +         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
> +        v = dom0->vcpu[smp_processor_id()];
> +    else
> +        v = sampled;
> +
> +    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
> +    if ( is_hvm_domain(sampled->domain) )
> +    {
> +        struct segment_register cs;
> +
> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
> +        regs->cs = cs.attr.fields.dpl;
> +    }
> +
> +    send_guest_vcpu_virq(v, VIRQ_XENPMU);
> +}

Perhaps I should have asked this on an earlier patch already:
How is this supposed to work for a 32-bit HVM guest?
struct cpu_user_regs is clearly different for it than what the
hypervisor or a 64-bit HVM guest would use.

Jan
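The deferral pattern the patch relies on (and that Jan asks to see justified) can be sketched in isolation: the NMI handler only records that a sample is pending, and a later softirq-context handler performs the work that is unsafe under NMI (in the real patch, the hvm_get_segment_register() call). All names are illustrative and this simulates only the control flow, not real interrupt context:

```c
#include <assert.h>
#include <stdatomic.h>

atomic_int pmu_sample_pending;
int samples_processed;

/* "NMI" context: just flag the work; touch nothing unsafe. */
void pmu_nmi_handler(void)
{
    atomic_store(&pmu_sample_pending, 1);
}

/* "Softirq" context: safe to do the heavy lifting now; the exchange
 * consumes the flag so a second run is a no-op. */
void pmu_softirq_handler(void)
{
    if (atomic_exchange(&pmu_sample_pending, 0))
        samples_processed++;
}
```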


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:36:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jN3-0005MI-GC; Thu, 16 Jan 2014 09:36:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3jN1-0005M2-ND
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 09:36:15 +0000
Received: from [85.158.143.35:16627] by server-2.bemta-4.messagelabs.com id
	7E/0F-11386-F08A7D25; Thu, 16 Jan 2014 09:36:15 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389864973!12097135!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7496 invoked from network); 16 Jan 2014 09:36:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 09:36:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93400578"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 09:36:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 04:36:12 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3jMx-0001I2-LY; Thu, 16 Jan 2014 09:36:11 +0000
Message-ID: <52D7A80B.60605@citrix.com>
Date: Thu, 16 Jan 2014 09:36:11 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<20140116001338.GF5331@zion.uk.xensource.com>
In-Reply-To: <20140116001338.GF5331@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:13, Wei Liu wrote:
> Cool! Finally!
>
> On Wed, Jan 15, 2014 at 04:23:20PM +0000, Andrew J. Bennieston wrote:
>> This patch series implements multiple transmit and receive queues (i.e.
>> multiple shared rings) for the xen virtual network interfaces.
>>
>> The series is split up as follows:
>>   - Patches 1 and 3 factor out the queue-specific data for netback and
>>      netfront respectively, and modify the rest of the code to use these
>>      as appropriate.
>>   - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>>     multiple shared rings and event channels, and code to connect these
>>     as appropriate.
>>
>> All other transmit and receive processing remains unchanged, i.e. there
>> is a kthread per queue and a NAPI context per queue.
>>
>> The performance of these patches has been analysed in detail, with
>> results available at:
>>
>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
>>
>> To summarise:
>>    * Using multiple queues allows a VM to transmit at line rate on a 10
>>      Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>>      with a single queue.
>>    * For intra-host VM--VM traffic, eight queues provide 171% of the
>>      throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>>    * There is a corresponding increase in total CPU usage, i.e. this is a
>>      scaling out over available resources, not an efficiency improvement.
>>    * Results depend on the availability of sufficient CPUs, as well as the
>>      distribution of interrupts and the distribution of TCP streams across
>>      the queues.
>>
>> One open issue is how to deal with the tx_credit data for rate limiting.
>> This used to exist on a per-VIF basis, and these patches move it to
>> per-queue to avoid contention on concurrent access to the tx_credit
>> data from multiple threads. This has the side effect of breaking the
>> tx_credit accounting across the VIF as a whole. I cannot see a situation
>> in which people would want to use both rate limiting and a
>> high-performance multi-queue mode, but if this is problematic then it
>> can be brought back to the VIF level, with appropriate protection.
>> Obviously, it continues to work identically in the case where there is
>> only one queue.
>>
>
> I would go for a per-queue limit at this stage, as it simplifies things and
> keeps you focused on core functionality.
Agreed - that's how I saw it at the time; it seems unlikely that people 
would want to use both features, anyway.

Andrew.
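The tx_credit scheme being discussed is a simple credit window: a queue (or, previously, a VIF) may send up to credit_bytes in each credit_usec interval. A standalone sketch of that accounting, with a simulated clock and illustrative names rather than the driver's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Per-queue transmit-shaping state, mirroring the fields the series
 * moves from the VIF into the queue struct. */
struct tx_credit {
    uint64_t credit_bytes;     /* replenish amount per window */
    uint64_t credit_usec;      /* window length */
    uint64_t remaining_credit; /* bytes left in the current window */
    uint64_t window_start;     /* when the current window began */
};

/* Returns 1 if a packet of 'len' bytes may be sent at time 'now',
 * 0 if it must be deferred until the window is replenished. */
int credit_send(struct tx_credit *c, uint64_t now, uint64_t len)
{
    if (now - c->window_start >= c->credit_usec) {
        c->window_start = now;
        c->remaining_credit = c->credit_bytes;
    }
    if (len > c->remaining_credit)
        return 0;
    c->remaining_credit -= len;
    return 1;
}
```

Moving this state per-queue, as the series does, means each queue enforces its own window independently, which is exactly why the aggregate per-VIF limit no longer holds.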
>
> Wei.
>
>> Queue selection is currently achieved via an L4 hash on the packet (i.e.
>> TCP src/dst port, IP src/dst address) and is not negotiated between the
>> frontend and backend, since only one option exists. Future patches to
>> support other frontends (particularly Windows) will need to add some
>> capability to negotiate not only the hash algorithm selection, but also
>> allow the frontend to specify some parameters to this.
>>
>> Queue-specific XenStore entries for ring references and event channels
>> are stored hierarchically, i.e. under .../queue-N/... where N varies
>> from 0 to one less than the requested number of queues (inclusive). If
>> only one queue is requested, it falls back to the flat structure where
>> the ring references and event channels are written at the same level as
>> other vif information.
>>
>> --
>> Andrew J. Bennieston


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:45:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:45:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jVu-00067v-DO; Thu, 16 Jan 2014 09:45:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3jVs-00067q-5y
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 09:45:24 +0000
Received: from [193.109.254.147:22363] by server-12.bemta-14.messagelabs.com
	id C3/31-13681-33AA7D25; Thu, 16 Jan 2014 09:45:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389865521!11159816!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29098 invoked from network); 16 Jan 2014 09:45:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 09:45:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93402160"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 09:45:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 04:45:20 -0500
Message-ID: <1389865519.5190.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Thu, 16 Jan 2014 09:45:19 +0000
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 06:54 +0000, Zhang, Yang Z wrote:
> Can I say nested virtualization is also well supported in Xen 4.4?

Can you enumerate the scenarios which have been tested and which you
consider are "supported"?

What do the hypervisor side maintainers think?

ISTR that not so long ago there were some quirks wrt not exposing the
feature to guests and crashing if they used it, and another one a while
back relating to a guest being able to enable nested virt on itself
regardless of the host administrator's settings. I suppose they are both
fixed but I wonder if the "command and control" side of nested virt has
had the same level of consideration and testing as the actual
functionality.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:54:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jeM-0006j2-Gk; Thu, 16 Jan 2014 09:54:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3jeL-0006ix-4l
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 09:54:09 +0000
Received: from [193.109.254.147:45308] by server-16.bemta-14.messagelabs.com
	id F7/6C-20600-04CA7D25; Thu, 16 Jan 2014 09:54:08 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389866046!11156643!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17201 invoked from network); 16 Jan 2014 09:54:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 09:54:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91301803"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 09:54:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 04:54:05 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3jeG-0001eW-KP; Thu, 16 Jan 2014 09:54:04 +0000
Message-ID: <52D7AC3C.6080101@citrix.com>
Date: Thu, 16 Jan 2014 09:54:04 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<20140116001706.GG5331@zion.uk.xensource.com>
In-Reply-To: <20140116001706.GG5331@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:17, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 04:23:21PM +0000, Andrew J. Bennieston wrote:
> [...]
>> +
>> +struct xenvif_queue { /* Per-queue data for xenvif */
>> +	unsigned int number; /* Queue number, 0-based */
>
> Use "id" instead?

Ok; I suppose number implies "the number of queues", not "which queue is
this?"

>
>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>> +	struct xenvif *vif; /* Parent VIF */
>>
>>   	/* Use NAPI for guest TX */
>>   	struct napi_struct napi;
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int tx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>>   	struct xen_netif_tx_back_ring tx;
>>   	struct sk_buff_head tx_queue;
>>   	struct page *mmap_pages[MAX_PENDING_REQS];
>> @@ -140,7 +142,7 @@ struct xenvif {
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int rx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>>   	struct xen_netif_rx_back_ring rx;
>>   	struct sk_buff_head rx_queue;
>>
>> @@ -150,14 +152,27 @@ struct xenvif {
>>   	 */
>>   	RING_IDX rx_req_cons_peek;
>>
>> -	/* This array is allocated seperately as it is large */
>> -	struct gnttab_copy *grant_copy_op;
>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>
> Any reason to switch back to an array inside the structure? This array is
> really large.
>

It was moved to a separate vmalloc because it was large, but now the
array of queues is allocated through vmalloc anyway. I preferred to
bring this back into the structure rather than have more allocations to
track and remember to free at all relevant points. If there is any
significant reason to split this out I'm happy to do so...

>>
>>   	/* We create one meta structure per ring request we consume, so
>>   	 * the maximum number is the same as the ring size.
>>   	 */
>>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>
>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> +	unsigned long   credit_bytes;
>> +	unsigned long   credit_usec;
>> +	unsigned long   remaining_credit;
>> +	struct timer_list credit_timeout;
>> +	u64 credit_window_start;
>> +
>> +};
>> +
> [...]
>>
>> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
>
> Suggest add xenvif_ prefix.
>

Ok.

>> +{
>> +	struct xenvif *vif = netdev_priv(dev);
>> +	u32 hash;
>> +	u16 queue_index;
>> +
>> +	/* First, check if there is only one queue */
>> +	if (vif->num_queues == 1) {
>> +		queue_index = 0;
>> +	}
>> +	else {
>
> Coding style.
>
>> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
>> +		hash = skb_get_rxhash(skb);
>> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
>> +	}
>
> Actually, why do you special-case num_queues == 1? If it is an
> optimization for the old frontend then please add a comment.
>

That was the intention. I'll add a comment to that effect.

>> +
>> +	return queue_index;
>> +}
>> +
>>   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   {
>>   	struct xenvif *vif = netdev_priv(dev);
>> +	u16 queue_index = 0;
>> +	struct xenvif_queue *queue = NULL;
>>
>>   	BUG_ON(skb->dev != dev);
>>
>> -	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL)
>> +	/* Drop the packet if the queues are not set up */
>> +	if (vif->num_queues < 1 || vif->queues == NULL)
>
> You don't need both, do you? They should be strictly synchronized.
>

Hmm... true. I'll change to just if (vif->num_queues < 1), since it
states the intent more clearly than the pointer check.

>> +		goto drop;
>> +
>> +	/* Obtain the queue to be used to transmit this packet */
>> +	queue_index = skb_get_queue_mapping(skb);
>> +	queue = &vif->queues[queue_index];
>> +
>> +	/* Drop the packet if queue is not ready */
>> +	if (queue->task == NULL)
>>   		goto drop;
>>
> [...]
>>   static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>> @@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>>
>>   static void xenvif_up(struct xenvif *vif)
>>   {
>> -	napi_enable(&vif->napi);
>> -	enable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		enable_irq(vif->rx_irq);
>> -	xenvif_check_rx_xenvif(vif);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>
> Better insert empty line here.
>
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_enable(&queue->napi);
>> +		enable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			enable_irq(queue->rx_irq);
>> +		xenvif_check_rx_xenvif(queue);
>> +	}
>>   }
>>
>>   static void xenvif_down(struct xenvif *vif)
>>   {
>> -	napi_disable(&vif->napi);
>> -	disable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		disable_irq(vif->rx_irq);
>> -	del_timer_sync(&vif->credit_timeout);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>
> Ditto.
>
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_disable(&queue->napi);
>> +		disable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			disable_irq(queue->rx_irq);
>> +		del_timer_sync(&queue->credit_timeout);
>> +	}
>>   }
>>
> [...]
>> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>>   		val = 0;
>>   	vif->ipv6_csum = !!val;
>>
>> -	/* Map the shared frame, irq etc. */
>> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
>> -			     tx_evtchn, rx_evtchn);
>> -	if (err) {
>> -		xenbus_dev_fatal(dev, err,
>> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>> -				 tx_ring_ref, rx_ring_ref,
>> -				 tx_evtchn, rx_evtchn);
>> -		return err;
>> -	}
>>   	return 0;
>>   }
>>
>> -
>
> Blank line change, not necessary.
>

I thought I'd caught all of those; must have missed one!

Andrew.

> Wei.
>
>>   /* ** Driver Registration ** */
>>
>>
>> --
>> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 09:56:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 09:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jgV-0006o6-PZ; Thu, 16 Jan 2014 09:56:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3jgU-0006o0-T3
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 09:56:23 +0000
Received: from [85.158.137.68:23752] by server-11.bemta-3.messagelabs.com id
	2C/ED-19379-6CCA7D25; Thu, 16 Jan 2014 09:56:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389866179!8310888!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12089 invoked from network); 16 Jan 2014 09:56:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 09:56:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93404552"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 09:56:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 04:56:18 -0500
Message-ID: <1389866177.5190.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 16 Jan 2014 09:56:17 +0000
In-Reply-To: <374E62F7-3C1F-4679-B635-3F07FF0F77A3@gmail.com>
References: <374E62F7-3C1F-4679-B635-3F07FF0F77A3@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] different QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 06:33 +0400, Igor Kozhukhov wrote:
> Could you please let me know why we have different QEMUs for Xen builds?
> i see :
> qemu-xen-dir-remote - git://xenbits.xen.org/qemu-upstream-4.2-testing.git
> qemu-xen-traditional-dir-remote - git://xenbits.xen.org/qemu-xen-4.2-testing.git
> 
> Can we use just ONE?
> Or do we need different binaries?

qemu-xen-traditional is the old Xen fork of Qemu. It was the only qemu
for a long time, and was the default until 4.3 (I think).

Obviously the fork was a bad thing, so from 4.2 we have also had the
"qemu-xen" version of qemu, which is upstream qemu with Xen support.
This was "tech preview" in 4.2 and became the default (in most cases) in
4.3. (The exception is stubdomains, which currently only work with
traditional.) It is also intended that distros can just use their
existing qemu packaging instead of packaging a special Xen version of
qemu (they like this from a security support PoV etc.).

The reason why the traditional fork lives on despite the default having
been changed is that VMs which were installed on that platform may not
take kindly to being switched to the newer one (in particular Windows
VMs might require reactivation). So the upstream project intends to keep
this code base alive, in a heavily frozen/maintenance state for the
foreseeable future.

New VM deployments from 4.3 onwards should use the qemu-xen fork where
possible.

Your use of 4.2 makes it hard for me to make a recommendation, since
qemu-xen in 4.2 was a tech preview and was missing some features, but it
is the future, while 4.2 still used the old frozen qemu as its default.

My recommendation would be to be more concerned about pulling forward to
a newer Xen (like 4.3 or even 4.4-rc) and on getting your Xen patches
upstream before worrying about Qemu too much, and then having done that
to focus mainly on upstream qemu-xen.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:03:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jnX-0007Wr-RR; Thu, 16 Jan 2014 10:03:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3jnX-0007Wm-5R
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:03:39 +0000
Received: from [85.158.143.35:64243] by server-3.bemta-4.messagelabs.com id
	AD/C6-32360-A7EA7D25; Thu, 16 Jan 2014 10:03:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389866616!9413829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6055 invoked from network); 16 Jan 2014 10:03:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93405871"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:03:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:03:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3jnT-0001nH-FD;
	Thu, 16 Jan 2014 10:03:35 +0000
Message-ID: <52D7AE77.7000306@citrix.com>
Date: Thu, 16 Jan 2014 10:03:35 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1389865519.5190.9.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 09:45, Ian Campbell wrote:
> On Thu, 2014-01-16 at 06:54 +0000, Zhang, Yang Z wrote:
>> Can I say nested virtualization is also well supported in Xen 4.4?
> Can you enumerate the scenarios which have been tested and which you
> consider are "supported"?
>
> What do the hypervisor side maintainers think?

Absolutely still experimental.

>
> ISTR that not so long ago there were some quirks wrt not exposing the
> feature to guests and crashing if they used it, and another one a while
> back relating to a guest being able to enable nested virt on itself
> regardless of the host administrator's settings. I suppose they are both
> fixed but I wonder if the "command and control" side of nested virt has
> had the same level of consideration and testing as the actual
> functionality.
>
> Ian.

To the best of my knowledge, there are still deadlock cases in the mm
code for interesting combinations of L1 and L2 memory styles such as
PoD/Paging.

See some of the concerns in the thread including Message-ID:
4A59A236-A272-471D-A061-A960E0CEFAAD@gridcentric.ca

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:03:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jnX-0007Wr-RR; Thu, 16 Jan 2014 10:03:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3jnX-0007Wm-5R
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:03:39 +0000
Received: from [85.158.143.35:64243] by server-3.bemta-4.messagelabs.com id
	AD/C6-32360-A7EA7D25; Thu, 16 Jan 2014 10:03:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1389866616!9413829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6055 invoked from network); 16 Jan 2014 10:03:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93405871"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:03:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:03:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W3jnT-0001nH-FD;
	Thu, 16 Jan 2014 10:03:35 +0000
Message-ID: <52D7AE77.7000306@citrix.com>
Date: Thu, 16 Jan 2014 10:03:35 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1389865519.5190.9.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 09:45, Ian Campbell wrote:
> On Thu, 2014-01-16 at 06:54 +0000, Zhang, Yang Z wrote:
>> Can I say that nested virtualization is also well supported in Xen 4.4?
> Can you enumerate the scenarios which have been tested and which you
> consider are "supported"?
>
> What do the hypervisor side maintainers think?

Absolutely still experimental.

>
> ISTR that not so long ago there were some quirks wrt not exposing the
> feature to guests and crashing if they used it, and another one a while
> back relating to a guest being able to enable nested virt on itself
> regardless of the host administrator's settings. I suppose they are both
> fixed but I wonder if the "command and control" side of nested virt has
> had the same level of consideration and testing as the actual
> functionality.
>
> Ian.

To the best of my knowledge, there are still deadlock cases in the mm
code for interesting combinations of L1 and L2 memory styles such as
PoD/Paging.

See some of the concerns in the thread, including Message-ID:
4A59A236-A272-471D-A061-A960E0CEFAAD@gridcentric.ca

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:04:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jnx-0007Z6-DV; Thu, 16 Jan 2014 10:04:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3jnu-0007Yp-Gy
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:04:03 +0000
Received: from [85.158.139.211:45686] by server-4.bemta-5.messagelabs.com id
	91/42-26791-19EA7D25; Thu, 16 Jan 2014 10:04:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389866641!9900331!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24329 invoked from network); 16 Jan 2014 10:04:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 10:04:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 10:04:00 +0000
Message-Id: <52D7BC9E02000078001142D4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 10:03:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1389865519.5190.9.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 10:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-16 at 06:54 +0000, Zhang, Yang Z wrote:
>> Can I say that nested virtualization is also well supported in Xen 4.4?
> 
> Can you enumerate the scenarios which have been tested and which you
> consider are "supported"?
> 
> What do the hypervisor side maintainers think?

Indeed I'm not sure we're there yet, not least considering
the recent discussion between SVM and VMX folks about how
to deal with a certain problem, where a regression on the SVM
side was expected had the code been committed as is.

But in the end I think the VMX and SVM maintainers should
have the final say on nVMX and nSVM support state.

Jan

> ISTR that not so long ago there were some quirks wrt not exposing the
> feature to guests and crashing if they used it, and another one a while
> back relating to a guest being able to enable nested virt on itself
> regardless of the host administrator's settings. I suppose they are both
> fixed but I wonder if the "command and control" side of nested virt has
> had the same level of consideration and testing as the actual
> functionality.
> 
> Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:04:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3joF-0007c3-RG; Thu, 16 Jan 2014 10:04:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3joD-0007bf-2c
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:04:21 +0000
Received: from [85.158.139.211:35177] by server-13.bemta-5.messagelabs.com id
	60/A2-11357-4AEA7D25; Thu, 16 Jan 2014 10:04:20 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389866658!9919367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4036 invoked from network); 16 Jan 2014 10:04:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:04:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91303790"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:04:17 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 16 Jan 2014 05:04:17 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 11:04:15 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 0/4]: xen-net{back,front}: Multiple transmit and
	receive queues
Thread-Index: AQHPEg4tkrLMu+HMMUmr4/hNIQfB6JqHH4Hw
Date: Thu, 16 Jan 2014 10:04:14 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208D6E@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 15 January 2014 16:23
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant
> Subject: [PATCH RFC 0/4]: xen-net{back,front}: Multiple transmit and receive
> queues
> 
> This patch series implements multiple transmit and receive queues (i.e.
> multiple shared rings) for the xen virtual network interfaces.
> 
> The series is split up as follows:
>  - Patches 1 and 3 factor out the queue-specific data for netback and
>     netfront respectively, and modify the rest of the code to use these
>     as appropriate.
>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>    multiple shared rings and event channels, and code to connect these
>    as appropriate.
> 
> All other transmit and receive processing remains unchanged, i.e. there
> is a kthread per queue and a NAPI context per queue.
> 
> The performance of these patches has been analysed in detail, with
> results available at:
> 
> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-
> queue_performance_testing
> 

Nice numbers!

> To summarise:
>   * Using multiple queues allows a VM to transmit at line rate on a 10
>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>     with a single queue.
>   * For intra-host VM--VM traffic, eight queues provide 171% of the
>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>   * There is a corresponding increase in total CPU usage, i.e. this is a
>     scaling out over available resources, not an efficiency improvement.
>   * Results depend on the availability of sufficient CPUs, as well as the
>     distribution of interrupts and the distribution of TCP streams across
>     the queues.
> 
> One open issue is how to deal with the tx_credit data for rate limiting.
> This used to exist on a per-VIF basis, and these patches move it to
> per-queue to avoid contention on concurrent access to the tx_credit
> data from multiple threads. This has the side effect of breaking the
> tx_credit accounting across the VIF as a whole. I cannot see a situation
> in which people would want to use both rate limiting and a
> high-performance multi-queue mode, but if this is problematic then it
> can be brought back to the VIF level, with appropriate protection.
> Obviously, it continues to work identically in the case where there is
> only one queue.
> 
> Queue selection is currently achieved via an L4 hash on the packet (i.e.
> TCP src/dst port, IP src/dst address) and is not negotiated between the
> frontend and backend, since only one option exists. Future patches to
> support other frontends (particularly Windows) will need to add some
> capability to negotiate not only the hash algorithm selection, but also
> allow the frontend to specify some parameters to this.
> 

Yes, Windows RSS stipulates a Toeplitz hash and specifies a hash key and mapping table. There's further awkwardness in the need to pass the actual hash value to the frontend too - but we could use an 'extra' seg for that, analogous to passing the GSO mss value through.

   Paul
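
[Editorial sketch: the Toeplitz hash Paul refers to is the RSS hash
Windows stipulates. As a rough, hedged illustration (a userspace
sketch, not the Windows or Xen implementation; `toeplitz_hash` is a
hypothetical helper name), it XORs into the result a sliding 32-bit
window of a secret key for every set bit of the input tuple:]

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a Toeplitz hash: for each set bit of the input, XOR in the
 * 32-bit window of the key aligned at that bit position.
 * `key` must be at least len + 4 bytes long. */
static uint32_t toeplitz_hash(const uint8_t *key, const uint8_t *input,
                              size_t len)
{
    /* Window over the first 32 key bits, MSB first. */
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                      ((uint32_t)key[2] << 8)  |  (uint32_t)key[3];
    uint32_t hash = 0;
    for (size_t i = 0; i < len; i++) {
        for (int bit = 7; bit >= 0; bit--) {
            if (input[i] & (1u << bit))
                hash ^= window;
            /* Slide one bit; the incoming bit is the correspondingly
             * positioned bit of key[i + 4]. */
            window = (window << 1) | ((key[i + 4] >> bit) & 1u);
        }
    }
    return hash;
}
```

[For RSS, the input would be the src/dst address and port tuple and the
key/result would be the ones the frontend negotiates.]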

> Queue-specific XenStore entries for ring references and event channels
> are stored hierarchically, i.e. under .../queue-N/... where N varies
> from 0 to one less than the requested number of queues (inclusive). If
> only one queue is requested, it falls back to the flat structure where
> the ring references and event channels are written at the same level as
> other vif information.
> 
> --
> Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jom-0007jR-8q; Thu, 16 Jan 2014 10:04:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3jok-0007iu-2H
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:04:54 +0000
Received: from [193.109.254.147:6200] by server-10.bemta-14.messagelabs.com id
	0B/32-20752-5CEA7D25; Thu, 16 Jan 2014 10:04:53 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389866691!9733294!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29066 invoked from network); 16 Jan 2014 10:04:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:04:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91303894"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:04:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:04:50 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3jof-0001o6-Mh; Thu, 16 Jan 2014 10:04:49 +0000
Message-ID: <52D7AEC1.2070302@citrix.com>
Date: Thu, 16 Jan 2014 10:04:49 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
	<20140116001807.GH5331@zion.uk.xensource.com>
In-Reply-To: <20140116001807.GH5331@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:18, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 04:23:22PM +0000, Andrew J. Bennieston wrote:
> [...]
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/*
>> +	 * Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +			xenvif_max_queues);
>
> Indentation.
>
>
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 586e741..5d717d7 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -55,6 +55,9 @@
>>   bool separate_tx_rx_irq = 1;
>>   module_param(separate_tx_rx_irq, bool, 0644);
>>
>> +unsigned int xenvif_max_queues = 4;
>> +module_param(xenvif_max_queues, uint, 0644);
>> +
>
> This looks a bit arbitrary. I guess it is better to set the default
> value to number of CPUs in Dom0?
>

It is quite arbitrary. A default of the number of dom0 CPUs makes sense.
I'll change it.
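
[Editorial sketch of the suggested default, as a hedged userspace
analogue (the kernel code would use num_online_cpus() rather than
sysconf, and `default_max_queues` is a hypothetical name): size the
default queue limit to the number of online CPUs instead of a fixed
constant like 4.]

```c
#include <unistd.h>

/* Userspace analogue: default the queue limit to the number of online
 * CPUs, falling back to 1 if the count is unavailable. */
static unsigned int default_max_queues(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    return n > 0 ? (unsigned int)n : 1;
}
```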

>>   /*
>>    * This is the maximum slots a skb can have. If a guest sends a skb
>>    * which exceeds this limit it is considered malicious.
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index c3332e2..ce7ca9a 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -21,6 +21,7 @@
>>
>>   #include "common.h"
>>   #include <linux/vmalloc.h>
>> +#include <linux/rtnetlink.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
>>   	if (err)
From xen-devel-bounces@lists.xen.org Thu Jan 16 10:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jom-0007jR-8q; Thu, 16 Jan 2014 10:04:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3jok-0007iu-2H
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:04:54 +0000
Received: from [193.109.254.147:6200] by server-10.bemta-14.messagelabs.com id
	0B/32-20752-5CEA7D25; Thu, 16 Jan 2014 10:04:53 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389866691!9733294!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29066 invoked from network); 16 Jan 2014 10:04:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:04:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91303894"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:04:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:04:50 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3jof-0001o6-Mh; Thu, 16 Jan 2014 10:04:49 +0000
Message-ID: <52D7AEC1.2070302@citrix.com>
Date: Thu, 16 Jan 2014 10:04:49 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
	<20140116001807.GH5331@zion.uk.xensource.com>
In-Reply-To: <20140116001807.GH5331@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:18, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 04:23:22PM +0000, Andrew J. Bennieston wrote:
> [...]
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/*
>> +	 * Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +			xenvif_max_queues);
>
> Indentation.
>
>
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 586e741..5d717d7 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -55,6 +55,9 @@
>>   bool separate_tx_rx_irq = 1;
>>   module_param(separate_tx_rx_irq, bool, 0644);
>>
>> +unsigned int xenvif_max_queues = 4;
>> +module_param(xenvif_max_queues, uint, 0644);
>> +
>
> This looks a bit arbitrary. I guess it is better to set the default
> value to number of CPUs in Dom0?
>

It is quite arbitrary. A default of the number of dom0 CPUs makes sense.
I'll change it.

>>   /*
>>    * This is the maximum slots a skb can have. If a guest sends a skb
>>    * which exceeds this limit it is considered malicious.
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index c3332e2..ce7ca9a 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -21,6 +21,7 @@
>>
>>   #include "common.h"
>>   #include <linux/vmalloc.h>
>> +#include <linux/rtnetlink.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
>>   	if (err)
>>   		pr_debug("Error writing feature-split-event-channels\n");
>>
>> +	/*
>> +	 * Multi-queue support: This is an optional feature.
>> +	 */
>> +	err = xenbus_printf(XBT_NIL, dev->nodename,
>> +			"multi-queue-max-queues", "%u", xenvif_max_queues);
>
> Should prefix this with "feature-".
>

This isn't a feature flag; it is a parameter specifying the maximum
number of queues the backend will allow. It implies feature-multi-queue,
which is not written. Way back in 2013 I posted an RFC for the XenStore
keys to negotiate this and there was some disagreement on how to
approach this particular bit...

One argument went that having a feature-multi-queue was redundant if you
were also writing multi-queue-max-queues (regardless of the value that
was specified). Another argument was that having feature-multi-queue
allowed you to grep the xenstore-ls output for all supported features.

I decided to take this approach, but I'm not committed to it. Prefixing
this with feature- probably isn't right, though; the feature-* keys all
appear to be boolean flags indicating whether a feature is supported,
whereas this key communicates an additional value. A simple change would
be to add the (redundant) feature-multi-queue = 1 flag...
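Concretely, the negotiation under discussion amounts to two keys,
roughly like this (illustrative paths; the values are examples, not
output from a real system):

```
# Backend writes its limit; the key's presence implies multi-queue support:
backend/vif/<domid>/<handle>/multi-queue-max-queues = "4"

# Frontend replies with the number of queues it wants (<= the limit):
device/vif/<handle>/multi-queue-num-queues = "2"
```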

>> +	if (err)
>> +		pr_debug("Error writing multi-queue-max-queues\n");
>> +
>>   	err = xenbus_switch_state(dev, XenbusStateInitWait);
>>   	if (err)
>>   		goto fail;
>> @@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
>>   	unsigned long credit_bytes, credit_usec;
>>   	unsigned int queue_index;
>>   	struct xenvif_queue *queue;
>> +	unsigned int requested_num_queues;
>> +
>> +	/* Check whether the frontend requested multiple queues
>> +	 * and read the number requested.
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +			"multi-queue-num-queues",
>> +			"%u", &requested_num_queues);
>> +	if (err < 0)
>> +		requested_num_queues = 1; /* Fall back to single queue */
>
> You also need to check whether it gets back something larger than
> permitted -- guest can be buggy or malicious.
>

Ah, yes. I missed that.

>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
>>   	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>>   	read_xenbus_vif_flags(be);
>>
>> -	be->vif->num_queues = 1;
>> +	/* Use the number of queues requested by the frontend */
>> +	be->vif->num_queues = requested_num_queues;
>>   	be->vif->queues = vzalloc(be->vif->num_queues *
>>   			sizeof(struct xenvif_queue));
>> +	rtnl_lock();
>> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
>> +	rtnl_unlock();
>>
>>   	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
>>   	{
>> @@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> +	char *xspath = NULL;
>> +	size_t xspathsize;
>> +
>> +	/* If the frontend requested 1 queue, or we have fallen back
>> +	 * to single queue due to lack of frontend support for multi-
>> +	 * queue, expect the remaining XenStore keys in the toplevel
>> +	 * directory. Otherwise, expect them in a subdirectory called
>> +	 * queue-N.
>> +	 */
>
> I think even if the frontend only requests 1 queue you can still put it
> under subdirectory? I don't have strong preference here though...

There would still be some logic necessary to determine where to read
these values for old frontends vs. new frontends, so having the 1-queue
case common to both made the most sense to me. We'll always have to
support new-backend/old-frontend and old-backend/new-frontend
combinations, so both sides will always have to check and do the
appropriate thing here, whatever choice is made.
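For illustration, the two layouts being compared look roughly like this
in the frontend's XenStore area (paths are examples):

```
# 1 queue (or legacy frontend): keys at the toplevel
device/vif/0/tx-ring-ref
device/vif/0/rx-ring-ref

# n > 1 queues: keys under per-queue subdirectories
device/vif/0/queue-0/tx-ring-ref
device/vif/0/queue-0/rx-ring-ref
device/vif/0/queue-1/tx-ring-ref
...
```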

>
> After the protocol is settled it needs to be documented in netif.h.
>

Indeed. I'll wait a couple of iterations of these patches so the
protocol can stabilise before doing that.

>> +	if (queue->vif->num_queues == 1)
>> +		xspath = (char *)dev->otherend;
>> +	else {
>> +		xspathsize = strlen(dev->otherend) + 10;
>
> Please avoid using magic number.
>
>> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
>> +		if (!xspath) {
>> +			xenbus_dev_fatal(dev, -ENOMEM,
>> +					"reading ring references");
>
> "ring reference"?
>

Hmm, I probably need to reword that.

Andrew.

>> +			return -ENOMEM;
>> +		}
>> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
>> +				queue->number);
>
> Indentation.
>
> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:08:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jsX-00084P-Ic; Thu, 16 Jan 2014 10:08:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3jsW-00084I-Kx
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:08:48 +0000
Received: from [85.158.137.68:47962] by server-14.bemta-3.messagelabs.com id
	3F/6E-06105-FAFA7D25; Thu, 16 Jan 2014 10:08:47 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1389866925!9450877!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23125 invoked from network); 16 Jan 2014 10:08:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:08:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93407178"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:08:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:08:44 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3jsS-0001sW-5G; Thu, 16 Jan 2014 10:08:44 +0000
Message-ID: <52D7AFAB.50404@citrix.com>
Date: Thu, 16 Jan 2014 10:08:43 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-4-git-send-email-andrew.bennieston@citrix.com>
	<20140116002500.GI5331@zion.uk.xensource.com>
In-Reply-To: <20140116002500.GI5331@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/4] xen-netfront: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:25, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 04:23:23PM +0000, Andrew J. Bennieston wrote:
> [...]
>> +
>> +struct netfront_queue {
>> +	unsigned int number; /* Queue number, 0-based */
>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>> +	struct netfront_info *info;
>>
>>   	struct napi_struct napi;
>>
>> @@ -93,10 +96,8 @@ struct netfront_info {
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	unsigned int tx_irq, rx_irq;
>>   	/* Only used when split event channels support is enabled */
>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>> -
>> -	struct xenbus_device *xbdev;
>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>
> Basically you're anticipating the maximum number of queues is below 10
> here. I think leaving one more byte here won't hurt, just in case you
> will have more than 10 queues. The same goes to netback as well.
>
> In your next patch you have max_queue as 16 by default, which has
> already broken your assumption here. :-)
>

Yes, this should be fixed. I wouldn't expect more than 100 queues, so
adding another byte here should be sufficient. In any case, I don't
think the names are used for anything other than human-readable output,
so there won't be any functional impact from truncating them.

>>
>>   	spinlock_t   tx_lock;
>>   	struct xen_netif_tx_front_ring tx;
>> @@ -139,6 +140,17 @@ struct netfront_info {
>>   	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
>>   	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
>>   	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
>> +};
>> +
> [...]
>>   static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   {
>>   	unsigned short id;
>> @@ -555,6 +575,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	unsigned int offset = offset_in_page(data);
>>   	unsigned int len = skb_headlen(skb);
>>   	unsigned long flags;
>> +	struct netfront_queue *queue = NULL;
>> +	u16 queue_index;
>> +
>> +	/* Drop the packet if no queues are set up */
>> +	if (np->num_queues < 1 || np->queues == NULL)
>
> Same as the comment in netback, you won't need both.

Agreed. I will switch to if (np->num_queues < 1) for the same reason I
gave in the netback mail.

Andrew.
>
> And the rest of the patch is basically replacing np with queue and
> putting things in loops. So I stopped here...
>
> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:10:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jtq-0008UP-TV; Thu, 16 Jan 2014 10:10:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W3jto-0008Th-8u
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:10:08 +0000
Received: from [85.158.139.211:33647] by server-15.bemta-5.messagelabs.com id
	B3/DE-08490-FFFA7D25; Thu, 16 Jan 2014 10:10:07 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389867006!10096507!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9315 invoked from network); 16 Jan 2014 10:10:06 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 10:10:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W3jth-000F80-V1; Thu, 16 Jan 2014 10:10:02 +0000
Date: Thu, 16 Jan 2014 11:10:01 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140116101001.GA56618@deinos.phlegethon.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389713316.12434.91.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389713316.12434.91.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:28 +0000 on 14 Jan (1389709716), Ian Campbell wrote:
> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> > Except grant-table (I can't find {get,put}_page for grant-table code???),
> 
> I think they are in __gnttab_map_grant_ref, within __get_paged_frame or
> through page_get_owner_and_reference.
> 
> and on unmap it is in __gnttab_unmap_common_complete.
> 
> It's a bit of a complex maze though so I'm not entirely sure, perhaps
> Tim, Keir or Jan can confirm that a grant mapping always takes a
> reference on the mapped page (it seems like PV x86 ought to be relying
> on this for safety anyhow).

Not claiming to understand it completely, but I agree with your analysis.

> I think the flush in alloc_heap_pages would also serve as a backstop,
> wouldn't it?

Not entirely -- if the grant mapping didn't take a ref, then the page
could be freed and reassigned with the grant mapping still in place --
the TLB flush doesn't help if the PTE is still there. :)

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:10:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3jtq-0008UP-TV; Thu, 16 Jan 2014 10:10:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W3jto-0008Th-8u
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:10:08 +0000
Received: from [85.158.139.211:33647] by server-15.bemta-5.messagelabs.com id
	B3/DE-08490-FFFA7D25; Thu, 16 Jan 2014 10:10:07 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389867006!10096507!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9315 invoked from network); 16 Jan 2014 10:10:06 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 10:10:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W3jth-000F80-V1; Thu, 16 Jan 2014 10:10:02 +0000
Date: Thu, 16 Jan 2014 11:10:01 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140116101001.GA56618@deinos.phlegethon.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389713316.12434.91.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389713316.12434.91.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:28 +0000 on 14 Jan (1389709716), Ian Campbell wrote:
> On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> > Except grant-table (I can't find {get,put}_page for grant-table code???),
> 
> I think they are in __gnttab_map_grant_ref, within __get_paged_frame or
> through page_get_owner_and_reference.
> 
> And on unmap it is in __gnttab_unmap_common_complete.
> 
> It's a bit of a complex maze though so I'm not entirely sure, perhaps
> Tim, Keir or Jan can confirm that a grant mapping always takes a
> reference on the mapped page (it seems like PV x86 ought to be relying
> on this for safety anyhow).

Not claiming to understand it completely, but I agree with your analysis.

> I think the flush in alloc_heap_pages would also serve as a backstop,
> wouldn't it?

Not entirely -- if the grant mapping didn't take a ref, then the page
could be freed and reassigned with the grant mapping still in place --
the TLB flush doesn't help if the PTE is still there. :)

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:20:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:20:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3k48-00013j-S5; Thu, 16 Jan 2014 10:20:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W3k47-00013e-0b
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 10:20:47 +0000
Received: from [85.158.139.211:34717] by server-5.bemta-5.messagelabs.com id
	82/CF-14928-E72B7D25; Thu, 16 Jan 2014 10:20:46 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389867644!10099469!1
X-Originating-IP: [209.85.215.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21868 invoked from network); 16 Jan 2014 10:20:44 -0000
Received: from mail-la0-f50.google.com (HELO mail-la0-f50.google.com)
	(209.85.215.50)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:20:44 -0000
Received: by mail-la0-f50.google.com with SMTP id ec20so2399046lab.9
	for <xen-devel@lists.xen.org>; Thu, 16 Jan 2014 02:20:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=KqWFlBBfkIt65dZkxxlzQsoSOFCIt1BfQFvQA59E4qo=;
	b=Ak5J0Ng8yWO3uxME28ttTvr1uo8DrlhMQXY2tQY5KqZYbt7sOUZub6U++94spfB5qT
	VKhWAnP3px+LHgY+g/Nyl5MIwsU/ggthORQb4EFZU1FeJQIskLZiGqEeL4zl6sG+QBA3
	vWd0yCRIHPiudl8Pg7EMyPBSkXTDa9fRa7WxE1R5pqegUSlrhor17JOhwJcsx+7jehZP
	WjSVO+rHNr082hhHi1Ht2p+HQQYfD83MQNWSOQvyxKXFIqzTG6zg2zp7yiBND0TLrLZb
	2MvMcKnjHqiOv8u33Iw+5EUqmdTggSgKaRyuwV99Abcii6kruo/S0GTEUnCQRacu1rPf
	V3Yg==
X-Received: by 10.112.160.196 with SMTP id xm4mr4471387lbb.34.1389867641391;
	Thu, 16 Jan 2014 02:20:41 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id n13sm4162358lbl.17.2014.01.16.02.20.39
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 16 Jan 2014 02:20:39 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1389866177.5190.18.camel@kazak.uk.xensource.com>
Date: Thu, 16 Jan 2014 14:20:36 +0400
Message-Id: <B20A2430-759A-45E6-802E-EDA020226604@gmail.com>
References: <374E62F7-3C1F-4679-B635-3F07FF0F77A3@gmail.com>
	<1389866177.5190.18.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] different QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On Jan 16, 2014, at 1:56 PM, Ian Campbell wrote:

> On Thu, 2014-01-16 at 06:33 +0400, Igor Kozhukhov wrote:
>> could you please let me know - why we have different QEMU for xen builds ?
>> i see :
>> qemu-xen-dir-remote - git://xenbits.xen.org/qemu-upstream-4.2-testing.git
>> qemu-xen-traditional-dir-remote - git://xenbits.xen.org/qemu-xen-4.2-testing.git
>> 
>> can we use ONE ?
>> or we need different binaries ?
> 
> qemu-xen-traditional is the old Xen fork of Qemu. It was the only qemu
> for a long time, and was the default until 4.3 (I think).

Thanks for the details about QEMU.
I have additional patches/changes for using vdiskadm; it is a tool for
using different image formats as storage for VMs.
I'll look at porting to xen-4.3 instead of xen-4.2.

At the moment I have fixed/updated the changes on the illumos (kernel)
side, and still need to work on the Xen side of the sources - review
them and try to apply the patches from xen-3.4.x.

I'll try to send my changes to the Xen sources upstream.
I have also updated libfsimage with ZFS from illumos; that will be ready
to merge into xen-unstable.

-Igor

> Obviously the fork was a bad thing so from 4.2 we have also had the
> "qemu-xen" version of qemu which is the upstream qemu with Xen support.
> This was "tech preview" in 4.2 and became the default (in most cases) in
> 4.3 (the exception is stubdomains, which currently only work with
> traditional). It is also intended that distros can just use their
> existing qemu packaging instead of packaging a special Xen version of
> qemu (they like this from a security support PoV etc).
> 
> The reason why the traditional fork lives on despite the default having
> been changed is that VMs which were installed on that platform may not
> take kindly to being switched to the newer one (in particular Windows
> VMs might require reactivation). So the upstream project intends to keep
> this code base alive, in a heavily frozen/maintenance state for the
> foreseeable future.
> 
> New VM deployments from 4.3 onwards should use the qemu-xen fork where
> possible.
> 
> Your use of 4.2 makes it hard for me to make a recommendation to you,
> since qemu-xen in 4.2 was tech preview and was missing some features,
> but it is the future, while 4.2 still used the old frozen qemu as its
> default.
> 
> My recommendation would be to be more concerned about pulling forward to
> a newer Xen (like 4.3 or even 4.4-rc) and on getting your Xen patches
> upstream before worrying about Qemu too much, and then having done that
> to focus mainly on upstream qemu-xen.
> 
> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3k7A-0001Am-Np; Thu, 16 Jan 2014 10:23:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3k77-0001AZ-UX
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:23:54 +0000
Received: from [193.109.254.147:41126] by server-1.bemta-14.messagelabs.com id
	93/4D-15600-933B7D25; Thu, 16 Jan 2014 10:23:53 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389867829!11265816!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11893 invoked from network); 16 Jan 2014 10:23:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:23:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93410017"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:23:48 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 16 Jan 2014 05:23:47 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 11:23:46 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
	queue struct.
Thread-Index: AQHPEg4x4WFe49MrpEWenZ63DvyEMpqHIP5g
Date: Thu, 16 Jan 2014 10:23:46 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 15 January 2014 16:23
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> queue struct.
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_rxhash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |   66 +++--
>  drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>  drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++-------------
> -----
>  drivers/net/xen-netback/xenbus.c    |   89 ++++--
>  4 files changed, 556 insertions(+), 423 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index c47794b..54d2eeb 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
> XEN_NETIF_RX_RING_SIZE)
> 
> -struct xenvif {
> -	/* Unique identifier for this interface. */
> -	domid_t          domid;
> -	unsigned int     handle;
> +struct xenvif;
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int number; /* Queue number, 0-based */
> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */

I wonder whether it would be neater to #define the name size here...

> +	struct xenvif *vif; /* Parent VIF */
> 
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */

...and the IRQ name size here. It's kind of ugly to have + some_magic_value in array definitions.
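A minimal sketch of that suggestion (the XENVIF_*_SIZE names are hypothetical, and IFNAMSIZ is redefined locally so the snippet stands alone outside the kernel tree):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define IFNAMSIZ 16				/* local stand-in for <linux/if.h> */

/* Named sizes instead of bare +4 / +7 in the array definitions */
#define XENVIF_QUEUE_NAME_SIZE	(IFNAMSIZ + 4)	/* "DEVNAME-qN" */
#define XENVIF_IRQ_NAME_SIZE	(IFNAMSIZ + 7)	/* "DEVNAME-qN-tx" */

/* Format an IRQ name in the "DEVNAME-qN-tx" shape the patch describes */
static void format_irq_name(char *buf, const char *devname,
			    unsigned int qnum, const char *dir)
{
	snprintf(buf, XENVIF_IRQ_NAME_SIZE, "%s-q%u-%s", devname, qnum, dir);
}
```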

>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,7 +142,7 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
> 
> @@ -150,14 +152,27 @@ struct xenvif {
>  	 */
>  	RING_IDX rx_req_cons_peek;
> 
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];

I see you brought this back in line, which is reasonable as the queue is now a separately allocated struct.

> 
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> 
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +};
> +
> +struct xenvif {
> +	/* Unique identifier for this interface. */
> +	domid_t          domid;
> +	unsigned int     handle;
> +
>  	u8               fe_dev_addr[6];
> 
>  	/* Frontend feature information. */
> @@ -171,12 +186,9 @@ struct xenvif {
>  	/* Internal feature information. */
>  	u8 can_queue:1;	    /* can queue packets for receiver? */
> 
> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> -	unsigned long   credit_bytes;
> -	unsigned long   credit_usec;
> -	unsigned long   remaining_credit;
> -	struct timer_list credit_timeout;
> -	u64 credit_window_start;
> +	/* Queues */
> +	unsigned int num_queues;
> +	struct xenvif_queue *queues;
> 
>  	/* Statistics */

Do stats need to be per-queue (and then possibly aggregated at query time)?
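One way to answer that question - a sketch (struct and counter names hypothetical, not the real driver's) of keeping a counter per queue and summing at query time:

```c
#include <assert.h>

/* Hypothetical per-queue counters, aggregated when stats are queried */
struct queue_stats {
	unsigned long rx_gso_checksum_fixup;
};

static unsigned long sum_gso_fixups(const struct queue_stats *stats,
				    unsigned int num_queues)
{
	unsigned long total = 0;
	unsigned int i;

	/* Walk every queue's counter and accumulate the total */
	for (i = 0; i < num_queues; i++)
		total += stats[i].rx_gso_checksum_fixup;
	return total;
}
```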

>  	unsigned long rx_gso_checksum_fixup;
> @@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>  			    domid_t domid,
>  			    unsigned int handle);
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue);
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn);
>  void xenvif_disconnect(struct xenvif *vif);
> @@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
> 
>  int xenvif_schedulable(struct xenvif *vif);
> 
> -int xenvif_rx_ring_full(struct xenvif *vif);
> +int xenvif_rx_ring_full(struct xenvif_queue *queue);
> 
> -int xenvif_must_stop_queue(struct xenvif *vif);
> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
> 
>  /* (Un)Map communication rings. */
> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref);
> 
>  /* Check for SKBs from frontend and schedule backend processing */
> -void xenvif_check_rx_xenvif(struct xenvif *vif);
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
> 
>  /* Queue an SKB for transmission to the frontend */
> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff
> *skb);
>  /* Notify xenvif that ring now has space to send an skb to the frontend */
> -void xenvif_notify_tx_completion(struct xenvif *vif);
> +void xenvif_notify_tx_completion(struct xenvif_queue *queue);
> 
>  /* Prevent the device from generating any further traffic. */
>  void xenvif_carrier_off(struct xenvif *vif);
> @@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
>  /* Returns number of ring slots required to send an skb to the frontend */
>  unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
> 
> -int xenvif_tx_action(struct xenvif *vif, int budget);
> -void xenvif_rx_action(struct xenvif *vif);
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
> +void xenvif_rx_action(struct xenvif_queue *queue);
> 
>  int xenvif_kthread(void *data);
> 
> +int xenvif_poll(struct napi_struct *napi, int budget);
> +
> +void xenvif_carrier_on(struct xenvif *vif);
> +
>  extern bool separate_tx_rx_irq;
> 
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> index fff8cdd..0113324 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -34,7 +34,6 @@
>  #include <linux/ethtool.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/if_vlan.h>
> -#include <linux/vmalloc.h>
> 
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> @@ -42,32 +41,50 @@
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> 
> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
> +{
> +	netif_tx_wake_queue(
> +			netdev_get_tx_queue(queue->vif->dev, queue-
> >number));

Might be neater to declare some stack variables for dev and number to avoid the long line.
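The shape of that refactor, using stand-in types rather than the real kernel structs so the snippet compiles on its own (netif_tx_wake_queue() and netdev_get_tx_queue() are mocked):

```c
#include <assert.h>

/* Stand-in types to show the shape of the suggestion: hoist dev and
 * the queue number into locals so the real netdev_get_tx_queue() call
 * fits on one line. */
struct net_device { int woken[8]; };
struct xenvif { struct net_device *dev; };
struct xenvif_queue { struct xenvif *vif; unsigned int number; };

static int *fake_get_tx_queue(struct net_device *dev, unsigned int n)
{
	return &dev->woken[n];
}

static void wake_queue(struct xenvif_queue *queue)
{
	struct net_device *dev = queue->vif->dev;
	unsigned int id = queue->number;

	*fake_get_tx_queue(dev, id) = 1;	/* netif_tx_wake_queue() stand-in */
}
```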

> +}
> +
> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
> +{
> +	netif_tx_stop_queue(
> +			netdev_get_tx_queue(queue->vif->dev, queue-
> >number));

Ditto.

> +}
> +
> +static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
> +{
> +	return netif_tx_queue_stopped(
> +			netdev_get_tx_queue(queue->vif->dev, queue-
> >number));

Ditto.

> +}
> +
>  int xenvif_schedulable(struct xenvif *vif)
>  {
>  	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
>  }
> 
> -static int xenvif_rx_schedulable(struct xenvif *vif)
> +static int xenvif_rx_schedulable(struct xenvif_queue *queue)
>  {
> -	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
> +	return xenvif_schedulable(queue->vif) &&
> !xenvif_rx_ring_full(queue);

I guess your patches have not been re-based onto net-next? xenvif_ring_full() and xenvif_rx_schedulable() went away in c/s ca2f09f2b2c6c25047cfc545d057c4edfcfe561c (xen-netback: improve guest-receive-side flow control).

Can you rebase? Eventual patches will need to go into net-next.

  Paul

>  }
> 
>  static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
> -		napi_schedule(&vif->napi);
> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
> +		napi_schedule(&queue->napi);
> 
>  	return IRQ_HANDLED;
>  }
> 
> -static int xenvif_poll(struct napi_struct *napi, int budget)
> +int xenvif_poll(struct napi_struct *napi, int budget)
>  {
> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
> +	struct xenvif_queue *queue = container_of(napi, struct
> xenvif_queue, napi);
>  	int work_done;
> 
> -	work_done = xenvif_tx_action(vif, budget);
> +	work_done = xenvif_tx_action(queue, budget);
> 
>  	if (work_done < budget) {
>  		int more_to_do = 0;
> @@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  		local_irq_save(flags);
> 
> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx,
> more_to_do);
> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx,
> more_to_do);
>  		if (!more_to_do)
>  			__napi_complete(napi);
> 
> @@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (xenvif_rx_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	if (xenvif_rx_schedulable(queue))
> +		xenvif_wake_queue(queue);
> 
>  	return IRQ_HANDLED;
>  }
> @@ -119,27 +136,56 @@ static irqreturn_t xenvif_interrupt(int irq, void
> *dev_id)
>  	return IRQ_HANDLED;
>  }
> 
> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	}

Style: kernel coding style puts "} else {" on one line.

> +	else {
> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
> +		hash = skb_get_rxhash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >>
> 32);
> +	}
> +
> +	return queue_index;
> +}
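For reference, the multiply-shift in select_queue() above maps a 32-bit hash onto [0, num_queues) without a modulo: multiply by the queue count and keep the high 32 bits of the 64-bit product. A standalone sketch of the same arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Map a 32-bit hash uniformly onto [0, num_queues): multiply by the
 * queue count and keep the high 32 bits of the 64-bit product. */
static uint16_t hash_to_queue(uint32_t hash, unsigned int num_queues)
{
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```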
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	u16 queue_index = 0;
> +	struct xenvif_queue *queue = NULL;
> 
>  	BUG_ON(skb->dev != dev);
> 
> -	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL)
> +	/* Drop the packet if the queues are not set up */
> +	if (vif->num_queues < 1 || vif->queues == NULL)
> +		goto drop;

Just do the former test and ASSERT the second.
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	queue_index = skb_get_queue_mapping(skb);

Personally, I'd stick a range check here.
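Such a check could look like this hypothetical guard (falling back to queue 0 rather than indexing past the end of vif->queues; whether to drop instead is a policy choice):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical guard for the value from skb_get_queue_mapping():
 * clamp an out-of-range index to queue 0 instead of overrunning
 * the queues array. */
static uint16_t clamped_queue_index(uint16_t index, unsigned int num_queues)
{
	return (index < num_queues) ? index : 0;
}
```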

> +	queue = &vif->queues[queue_index];
> +
> +	/* Drop the packet if queue is not ready */
> +	if (queue->task == NULL)
>  		goto drop;
> 
>  	/* Drop the packet if the target domain has no receive buffers. */
> -	if (!xenvif_rx_schedulable(vif))
> +	if (!xenvif_rx_schedulable(queue))
>  		goto drop;
> 
>  	/* Reserve ring slots for the worst-case number of fragments. */
> -	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
> +	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
> 
> -	if (vif->can_queue && xenvif_must_stop_queue(vif))
> -		netif_stop_queue(dev);
> +	if (vif->can_queue && xenvif_must_stop_queue(queue))
> +		xenvif_stop_queue(queue);
> 
> -	xenvif_queue_tx_skb(vif, skb);
> +	xenvif_queue_tx_skb(queue, skb);
> 
>  	return NETDEV_TX_OK;
> 
> @@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
>  	return NETDEV_TX_OK;
>  }
> 
> -void xenvif_notify_tx_completion(struct xenvif *vif)
> +void xenvif_notify_tx_completion(struct xenvif_queue *queue)
>  {
> -	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	if (xenvif_queue_stopped(queue) &&
> xenvif_rx_schedulable(queue))
> +		xenvif_wake_queue(queue);
>  }
> 
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
> @@ -163,20 +209,30 @@ static struct net_device_stats
> *xenvif_get_stats(struct net_device *dev)
> 
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +	for (queue_index = 0; queue_index < vif->num_queues;
> ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
> 
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +	for (queue_index = 0; queue_index < vif->num_queues;
> ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
> 
>  static int xenvif_open(struct net_device *dev)
> @@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_up(vif);
> -	netif_start_queue(dev);
> +	netif_tx_start_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_down(vif);
> -	netif_stop_queue(dev);
> +	netif_tx_stop_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -287,6 +343,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
>  	.ndo_fix_features = xenvif_fix_features,
>  	.ndo_set_mac_address = eth_mac_addr,
>  	.ndo_validate_addr   = eth_validate_addr,
> +	.ndo_select_queue = select_queue,
>  };
> 
>  struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> @@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	struct net_device *dev;
>  	struct xenvif *vif;
>  	char name[IFNAMSIZ] = {};
> -	int i;
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> @@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> 
>  	vif = netdev_priv(dev);
> 
> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
> -				     MAX_GRANT_COPY_OPS);
> -	if (vif->grant_copy_op == NULL) {
> -		pr_warn("Could not allocate grant copy space for %s\n", name);
> -		free_netdev(dev);
> -		return ERR_PTR(-ENOMEM);
> -	}
> -
>  	vif->domid  = domid;
>  	vif->handle = handle;
>  	vif->can_sg = 1;
>  	vif->ip_csum = 1;
>  	vif->dev = dev;
> 
> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
> -	vif->credit_usec  = 0UL;
> -	init_timer(&vif->credit_timeout);
> -	vif->credit_window_start = get_jiffies_64();
> +	/* Start out with no queues */
> +	vif->num_queues = 0;
> +	vif->queues = NULL;
> 
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
> @@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> 
>  	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
> 
> -	skb_queue_head_init(&vif->rx_queue);
> -	skb_queue_head_init(&vif->tx_queue);
> -
> -	vif->pending_cons = 0;
> -	vif->pending_prod = MAX_PENDING_REQS;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> -
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
>  	 * largest non-broadcast address to prevent the address getting
> @@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>  	dev->dev_addr[0] &= ~0x01;
> 
> -	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
> -
>  	netif_carrier_off(dev);
> 
>  	err = register_netdev(dev);
> @@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	return vif;
>  }
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue)
> +{
> +	int i;
> +
> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
> +	queue->credit_usec  = 0UL;
> +	init_timer(&queue->credit_timeout);
> +	queue->credit_window_start = get_jiffies_64();
> +
> +	skb_queue_head_init(&queue->rx_queue);
> +	skb_queue_head_init(&queue->tx_queue);
> +
> +	queue->pending_cons = 0;
> +	queue->pending_prod = MAX_PENDING_REQS;
> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
> +		queue->pending_ring[i] = i;
> +		queue->mmap_pages[i] = NULL;
> +	}
> +
> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> +			XENVIF_NAPI_WEIGHT);
> +}
> +
> +void xenvif_carrier_on(struct xenvif *vif)
> +{
> +	rtnl_lock();
> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> +	netdev_update_features(vif->dev);
> +	netif_carrier_on(vif->dev);
> +	if (netif_running(vif->dev))
> +		xenvif_up(vif);
> +	rtnl_unlock();
> +}
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn)
>  {
>  	struct task_struct *task;
>  	int err = -ENOMEM;
> 
> -	BUG_ON(vif->tx_irq);
> -	BUG_ON(vif->task);
> +	BUG_ON(queue->tx_irq);
> +	BUG_ON(queue->task);
> 
> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
> 
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
> -			vif->dev->name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
> +			queue->name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = vif->rx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = queue->rx_irq = err;
> +		disable_irq(queue->tx_irq);
>  	} else {
>  		/* feature-split-event-channels == 1 */
> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
> -			 "%s-tx", vif->dev->name);
> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
> +			 "%s-tx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> -			vif->tx_irq_name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> +			queue->tx_irq_name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = err;
> +		disable_irq(queue->tx_irq);
> 
> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
> -			 "%s-rx", vif->dev->name);
> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
> +			 "%s-rx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> -			vif->rx_irq_name, vif);
> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> +			queue->rx_irq_name, queue);
>  		if (err < 0)
>  			goto err_tx_unbind;
> -		vif->rx_irq = err;
> -		disable_irq(vif->rx_irq);
> +		queue->rx_irq = err;
> +		disable_irq(queue->rx_irq);
>  	}
> 
> -	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&queue->wq);
>  	task = kthread_create(xenvif_kthread,
> -			      (void *)vif, "%s", vif->dev->name);
> +			      (void *)queue, "%s", queue->name);
>  	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>  		err = PTR_ERR(task);
>  		goto err_rx_unbind;
>  	}
> 
> -	vif->task = task;
> -
> -	rtnl_lock();
> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> -	netdev_update_features(vif->dev);
> -	netif_carrier_on(vif->dev);
> -	if (netif_running(vif->dev))
> -		xenvif_up(vif);
> -	rtnl_unlock();
> +	queue->task = task;
> 
> -	wake_up_process(vif->task);
> +	wake_up_process(queue->task);
> 
>  	return 0;
> 
>  err_rx_unbind:
> -	unbind_from_irqhandler(vif->rx_irq, vif);
> -	vif->rx_irq = 0;
> +	unbind_from_irqhandler(queue->rx_irq, queue);
> +	queue->rx_irq = 0;
>  err_tx_unbind:
> -	unbind_from_irqhandler(vif->tx_irq, vif);
> -	vif->tx_irq = 0;
> +	unbind_from_irqhandler(queue->tx_irq, queue);
> +	queue->tx_irq = 0;
>  err_unmap:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  err:
>  	module_put(THIS_MODULE);
>  	return err;
> @@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
> 
>  void xenvif_disconnect(struct xenvif *vif)
>  {
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
>  	if (netif_carrier_ok(vif->dev))
>  		xenvif_carrier_off(vif);
> 
> -	if (vif->task) {
> -		kthread_stop(vif->task);
> -		vif->task = NULL;
> -	}
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> 
> -	if (vif->tx_irq) {
> -		if (vif->tx_irq == vif->rx_irq)
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -		else {
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -			unbind_from_irqhandler(vif->rx_irq, vif);
> +		if (queue->task) {
> +			kthread_stop(queue->task);
> +			queue->task = NULL;
>  		}
> -		vif->tx_irq = 0;
> +
> +		if (queue->tx_irq) {
> +			if (queue->tx_irq == queue->rx_irq)
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +			else {
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +				unbind_from_irqhandler(queue->rx_irq, queue);
> +			}
> +			queue->tx_irq = 0;
> +		}
> +
> +		xenvif_unmap_frontend_rings(queue);
>  	}
> 
> -	xenvif_unmap_frontend_rings(vif);
> +
>  }
> 
>  void xenvif_free(struct xenvif *vif)
>  {
> -	netif_napi_del(&vif->napi);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> 
> -	unregister_netdev(vif->dev);
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		netif_napi_del(&queue->napi);
> +	}
> 
> -	vfree(vif->grant_copy_op);
> +	/* Free the array of queues */
> +	vfree(vif->queues);
> +	vif->num_queues = 0;
> +	vif->queues = NULL;
> +
> +	unregister_netdev(vif->dev);
>  	free_netdev(vif->dev);
> 
>  	module_put(THIS_MODULE);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 7842555..586e741 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
>   * one or more merged tx requests, otherwise it is the continuation of
>   * previous tx request.
>   */
> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>  {
> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status);
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st);
> 
> -static inline int tx_work_todo(struct xenvif *vif);
> -static inline int rx_work_todo(struct xenvif *vif);
> +static inline int tx_work_todo(struct xenvif_queue *queue);
> +static inline int rx_work_todo(struct xenvif_queue *queue);
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags);
> 
> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>  				       u16 idx)
>  {
> -	return page_to_pfn(vif->mmap_pages[idx]);
> +	return page_to_pfn(queue->mmap_pages[idx]);
>  }
> 
> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>  					 u16 idx)
>  {
> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>  }
> 
>  /* This is a miniumum size for the linear area to avoid lots of
> @@ -132,10 +132,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
>  	return i & (MAX_PENDING_REQS-1);
>  }
> 
> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>  {
>  	return MAX_PENDING_REQS -
> -		vif->pending_prod + vif->pending_cons;
> +		queue->pending_prod + queue->pending_cons;
>  }
> 
>  static int max_required_rx_slots(struct xenvif *vif)
> @@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
>  	return max;
>  }
> 
> -int xenvif_rx_ring_full(struct xenvif *vif)
> +int xenvif_rx_ring_full(struct xenvif_queue *queue)
>  {
> -	RING_IDX peek   = vif->rx_req_cons_peek;
> -	RING_IDX needed = max_required_rx_slots(vif);
> +	RING_IDX peek   = queue->rx_req_cons_peek;
> +	RING_IDX needed = max_required_rx_slots(queue->vif);
> 
> -	return ((vif->rx.sring->req_prod - peek) < needed) ||
> -	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
> +	return ((queue->rx.sring->req_prod - peek) < needed) ||
> +	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>  }
> 
> -int xenvif_must_stop_queue(struct xenvif *vif)
> +int xenvif_must_stop_queue(struct xenvif_queue *queue)
>  {
> -	if (!xenvif_rx_ring_full(vif))
> +	if (!xenvif_rx_ring_full(queue))
>  		return 0;
> 
> -	vif->rx.sring->req_event = vif->rx_req_cons_peek +
> -		max_required_rx_slots(vif);
> +	queue->rx.sring->req_event = queue->rx_req_cons_peek +
> +		max_required_rx_slots(queue->vif);
>  	mb(); /* request notification /then/ check the queue */
> 
> -	return xenvif_rx_ring_full(vif);
> +	return xenvif_rx_ring_full(queue);
>  }
> 
>  /*
> @@ -306,13 +306,13 @@ struct netrx_pending_operations {
>  	grant_ref_t copy_gref;
>  };
> 
> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>  						 struct netrx_pending_operations *npo)
>  {
>  	struct xenvif_rx_meta *meta;
>  	struct xen_netif_rx_request *req;
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
> 
>  	meta = npo->meta + npo->meta_prod++;
>  	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
> @@ -330,7 +330,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>   * Set up the grant operations for this fragment. If it's a flipping
>   * interface, we also set up the unmap request from here.
>   */
> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>  				 struct netrx_pending_operations *npo,
>  				 struct page *page, unsigned long size,
>  				 unsigned long offset, int *head)
> @@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  			 */
>  			BUG_ON(*head);
> 
> -			meta = get_next_rx_buffer(vif, npo);
> +			meta = get_next_rx_buffer(queue, npo);
>  		}
> 
>  		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
> @@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>  		copy_gop->source.offset = offset;
> 
> -		copy_gop->dest.domid = vif->domid;
> +		copy_gop->dest.domid = queue->vif->domid;
>  		copy_gop->dest.offset = npo->copy_off;
>  		copy_gop->dest.u.ref = npo->copy_gref;
> 
> @@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		else
>  			gso_type = XEN_NETIF_GSO_TYPE_NONE;
> 
> -		if (*head && ((1 << gso_type) & vif->gso_mask))
> -			vif->rx.req_cons++;
> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
> +			queue->rx.req_cons++;
> 
>  		*head = 0; /* There must be something in this buffer now. */
> 
> @@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>   * frontend-side LRO).
>   */
>  static int xenvif_gop_skb(struct sk_buff *skb,
> -			  struct netrx_pending_operations *npo)
> +			  struct netrx_pending_operations *npo,
> +			  struct xenvif_queue *queue)
>  {
>  	struct xenvif *vif = netdev_priv(skb->dev);
>  	int nr_frags = skb_shinfo(skb)->nr_frags;
> @@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
> 
>  	/* Set up a GSO prefix descriptor, if necessary */
>  	if ((1 << gso_type) & vif->gso_prefix_mask) {
> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  		meta = npo->meta + npo->meta_prod++;
>  		meta->gso_type = gso_type;
>  		meta->gso_size = gso_size;
> @@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		meta->id = req->id;
>  	}
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  	meta = npo->meta + npo->meta_prod++;
> 
>  	if ((1 << gso_type) & vif->gso_mask) {
> @@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		if (data + len > skb_tail_pointer(skb))
>  			len = skb_tail_pointer(skb) - data;
> 
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     virt_to_page(data), len, offset, &head);
>  		data += len;
>  	}
> 
>  	for (i = 0; i < nr_frags; i++) {
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>  				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>  				     skb_shinfo(skb)->frags[i].page_offset,
> @@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
>  	return status;
>  }
> 
> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>  				      struct xenvif_rx_meta *meta,
>  				      int nr_meta_slots)
>  {
> @@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>  			flags = XEN_NETRXF_more_data;
> 
>  		offset = 0;
> -		make_rx_response(vif, meta[i].id, status, offset,
> +		make_rx_response(queue, meta[i].id, status, offset,
>  				 meta[i].size, flags);
>  	}
>  }
> @@ -557,12 +558,12 @@ struct skb_cb_overlay {
>  	int meta_slots_used;
>  };
> 
> -static void xenvif_kick_thread(struct xenvif *vif)
> +static void xenvif_kick_thread(struct xenvif_queue *queue)
>  {
> -	wake_up(&vif->wq);
> +	wake_up(&queue->wq);
>  }
> 
> -void xenvif_rx_action(struct xenvif *vif)
> +void xenvif_rx_action(struct xenvif_queue *queue)
>  {
>  	s8 status;
>  	u16 flags;
> @@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
>  	int need_to_notify = 0;
> 
>  	struct netrx_pending_operations npo = {
> -		.copy  = vif->grant_copy_op,
> -		.meta  = vif->meta,
> +		.copy  = queue->grant_copy_op,
> +		.meta  = queue->meta,
>  	};
> 
>  	skb_queue_head_init(&rxq);
> 
>  	count = 0;
> 
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> -		vif = netdev_priv(skb->dev);
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>  		nr_frags = skb_shinfo(skb)->nr_frags;
> 
>  		sco = (struct skb_cb_overlay *)skb->cb;
> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
> 
>  		count += nr_frags + 1;
> 
> @@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
>  			break;
>  	}
> 
> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
> 
>  	if (!npo.copy_prod)
>  		return;
> 
>  	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
> 
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> 
> -		vif = netdev_priv(skb->dev);
> -
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_prefix_mask) {
> -			resp = RING_GET_RESPONSE(&vif->rx,
> -						 vif->rx.rsp_prod_pvt++);
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_prefix_mask) {
> +			resp = RING_GET_RESPONSE(&queue->rx,
> +						 queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
> 
> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
> -			resp->id = vif->meta[npo.meta_cons].id;
> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
> +			resp->id = queue->meta[npo.meta_cons].id;
>  			resp->status = sco->meta_slots_used;
> 
>  			npo.meta_cons++;
> @@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
>  		}
> 
> 
> -		vif->dev->stats.tx_bytes += skb->len;
> -		vif->dev->stats.tx_packets++;
> +		queue->vif->dev->stats.tx_bytes += skb->len;
> +		queue->vif->dev->stats.tx_packets++;
> 
> -		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
> 
>  		if (sco->meta_slots_used == 1)
>  			flags = 0;
> @@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
>  			flags |= XEN_NETRXF_data_validated;
> 
>  		offset = 0;
> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>  					status, offset,
> -					vif->meta[npo.meta_cons].size,
> +					queue->meta[npo.meta_cons].size,
>  					flags);
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_mask) {
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_mask) {
>  			struct xen_netif_extra_info *gso =
>  				(struct xen_netif_extra_info *)
> -				RING_GET_RESPONSE(&vif->rx,
> -						  vif->rx.rsp_prod_pvt++);
> +				RING_GET_RESPONSE(&queue->rx,
> +						  queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags |= XEN_NETRXF_extra_info;
> 
> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>  			gso->u.gso.pad = 0;
>  			gso->u.gso.features = 0;
> 
> @@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
>  			gso->flags = 0;
>  		}
> 
> -		xenvif_add_frag_responses(vif, status,
> -					  vif->meta + npo.meta_cons + 1,
> +		xenvif_add_frag_responses(queue, status,
> +					  queue->meta + npo.meta_cons + 1,
>  					  sco->meta_slots_used);
> 
> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
> 
>  		if (ret)
>  			need_to_notify = 1;
> 
> -		xenvif_notify_tx_completion(vif);
> +		xenvif_notify_tx_completion(queue);
> 
>  		npo.meta_cons += sco->meta_slots_used;
>  		dev_kfree_skb(skb);
>  	}
> 
>  	if (need_to_notify)
> -		notify_remote_via_irq(vif->rx_irq);
> +		notify_remote_via_irq(queue->rx_irq);
> 
>  	/* More work to do? */
> -	if (!skb_queue_empty(&vif->rx_queue))
> -		xenvif_kick_thread(vif);
> +	if (!skb_queue_empty(&queue->rx_queue))
> +		xenvif_kick_thread(queue);
>  }
> 
> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
> -	skb_queue_tail(&vif->rx_queue, skb);
> +	skb_queue_tail(&queue->rx_queue, skb);
> 
> -	xenvif_kick_thread(vif);
> +	xenvif_kick_thread(queue);
>  }
> 
> -void xenvif_check_rx_xenvif(struct xenvif *vif)
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>  {
>  	int more_to_do;
> 
> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
> 
>  	if (more_to_do)
> -		napi_schedule(&vif->napi);
> +		napi_schedule(&queue->napi);
>  }
> 
> -static void tx_add_credit(struct xenvif *vif)
> +static void tx_add_credit(struct xenvif_queue *queue)
>  {
>  	unsigned long max_burst, max_credit;
> 
> @@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
>  	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>  	 * Otherwise the interface can seize up due to insufficient credit.
>  	 */
> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>  	max_burst = min(max_burst, 131072UL);
> -	max_burst = max(max_burst, vif->credit_bytes);
> +	max_burst = max(max_burst, queue->credit_bytes);
> 
>  	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
> -	max_credit = vif->remaining_credit + vif->credit_bytes;
> -	if (max_credit < vif->remaining_credit)
> +	max_credit = queue->remaining_credit + queue->credit_bytes;
> +	if (max_credit < queue->remaining_credit)
>  		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
> 
> -	vif->remaining_credit = min(max_credit, max_burst);
> +	queue->remaining_credit = min(max_credit, max_burst);
>  }
> 
>  static void tx_credit_callback(unsigned long data)
>  {
> -	struct xenvif *vif = (struct xenvif *)data;
> -	tx_add_credit(vif);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
> +	tx_add_credit(queue);
> +	xenvif_check_rx_xenvif(queue);
>  }
> 
> -static void xenvif_tx_err(struct xenvif *vif,
> +static void xenvif_tx_err(struct xenvif_queue *queue,
>  			  struct xen_netif_tx_request *txp, RING_IDX end)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>  		if (cons == end)
>  			break;
> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>  	} while (1);
> -	vif->tx.req_cons = cons;
> +	queue->tx.req_cons = cons;
>  }
> 
>  static void xenvif_fatal_tx_err(struct xenvif *vif)
> @@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>  	xenvif_carrier_off(vif);
>  }
> 
> -static int xenvif_count_requests(struct xenvif *vif,
> +static int xenvif_count_requests(struct xenvif_queue *queue,
>  				 struct xen_netif_tx_request *first,
>  				 struct xen_netif_tx_request *txp,
>  				 int work_to_do)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
>  	int slots = 0;
>  	int drop_err = 0;
>  	int more_data;
> @@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		struct xen_netif_tx_request dropped_tx = { 0 };
> 
>  		if (slots >= work_to_do) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Asked for %d slots but exceeds this limit\n",
>  				   work_to_do);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -ENODATA;
>  		}
> 
> @@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 * considered malicious.
>  		 */
>  		if (unlikely(slots >= fatal_skb_slots)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Malicious frontend using %d slots, threshold %u\n",
>  				   slots, fatal_skb_slots);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -E2BIG;
>  		}
> 
> @@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>  					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>  			drop_err = -E2BIG;
> @@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		if (drop_err)
>  			txp = &dropped_tx;
> 
> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
> +		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>  		       sizeof(*txp));
> 
>  		/* If the guest submitted a frame >= 64 KiB then
> @@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && txp->size > first->size) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Invalid tx request, slot size %u > remaining size %u\n",
>  					   txp->size, first->size);
>  			drop_err = -EIO;
> @@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		slots++;
> 
>  		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>  				 txp->offset, txp->size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
> @@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>  	} while (more_data);
> 
>  	if (drop_err) {
> -		xenvif_tx_err(vif, first, cons + slots);
> +		xenvif_tx_err(queue, first, cons + slots);
>  		return drop_err;
>  	}
> 
>  	return slots;
>  }
> 
> -static struct page *xenvif_alloc_page(struct xenvif *vif,
> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>  				      u16 pending_idx)
>  {
>  	struct page *page;
> @@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  	if (!page)
>  		return NULL;
> -	vif->mmap_pages[pending_idx] = page;
> +	queue->mmap_pages[pending_idx] = page;
> 
>  	return page;
>  }
> 
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
>  					       struct gnttab_copy *gop)
> @@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>  	     shinfo->nr_frags++) {
>  		struct pending_tx_info *pending_tx_info =
> -			vif->pending_tx_info;
> +			queue->pending_tx_info;
> 
>  		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  		if (!page)
> @@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  			gop->flags = GNTCOPY_source_gref;
> 
>  			gop->source.u.ref = txp->gref;
> -			gop->source.domid = vif->domid;
> +			gop->source.domid = queue->vif->domid;
>  			gop->source.offset = txp->offset;
> 
>  			gop->dest.domid = DOMID_SELF;
> @@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				gop->len = txp->size;
>  				dst_offset += gop->len;
> 
> -				index = pending_index(vif->pending_cons++);
> +				index = pending_index(queue->pending_cons++);
> 
> -				pending_idx = vif->pending_ring[index];
> +				pending_idx = queue->pending_ring[index];
> 
>  				memcpy(&pending_tx_info[pending_idx].req, txp,
>  				       sizeof(*txp));
> @@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				 * fields for head tx req will be set
>  				 * to correct values after the loop.
>  				 */
> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>  				pending_tx_info[pending_idx].head =
>  					INVALID_PENDING_RING_IDX;
> 
> @@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  		first->req.offset = 0;
>  		first->req.size = dst_offset;
>  		first->head = start_idx;
> -		vif->mmap_pages[head_idx] = page;
> +		queue->mmap_pages[head_idx] = page;
>  		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>  	}
> 
> @@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  err:
>  	/* Unwind, freeing all pages and sending error responses. */
>  	while (shinfo->nr_frags-- > start) {
> -		xenvif_idx_release(vif,
> +		xenvif_idx_release(queue,
>  				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>  				XEN_NETIF_RSP_ERROR);
>  	}
>  	/* The head too, if necessary. */
>  	if (start)
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	return NULL;
>  }
> 
> -static int xenvif_tx_check_gop(struct xenvif *vif,
> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>  			       struct sk_buff *skb,
>  			       struct gnttab_copy **gopp)
>  {
> @@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	/* Check status of header. */
>  	err = gop->status;
>  	if (unlikely(err))
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		pending_ring_idx_t head;
> 
>  		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
> -		tx_info = &vif->pending_tx_info[pending_idx];
> +		tx_info = &queue->pending_tx_info[pending_idx];
>  		head = tx_info->head;
> 
>  		/* Check error status: if okay then remember grant handle.
> */
> @@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  			newerr = (++gop)->status;
>  			if (newerr)
>  				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
> +			peek = queue->pending_ring[pending_index(++head)];
> +		} while (!pending_tx_is_head(queue, peek));
> 
>  		if (likely(!newerr)) {
>  			/* Had a previous error? Invalidate this fragment. */
>  			if (unlikely(err))
> -				xenvif_idx_release(vif, pending_idx,
> +				xenvif_idx_release(queue, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);
>  			continue;
>  		}
> 
>  		/* Error on this fragment: respond to client with an error. */
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  		/* Not the first error? Preceding frags already invalidated. */
>  		if (err)
> @@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> 
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	return err;
>  }
> 
> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	int nr_frags = shinfo->nr_frags;
> @@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> 
>  		pending_idx = frag_get_pending_idx(frag);
> 
> -		txp = &vif->pending_tx_info[pending_idx].req;
> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
> +		txp = &queue->pending_tx_info[pending_idx].req;
> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>  		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>  		skb->len += txp->size;
>  		skb->data_len += txp->size;
>  		skb->truesize += txp->size;
> 
>  		/* Take an extra reference to offset xenvif_idx_release */
> -		get_page(vif->mmap_pages[pending_idx]);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		get_page(queue->mmap_pages[pending_idx]);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  	}
>  }
> 
> -static int xenvif_get_extras(struct xenvif *vif,
> +static int xenvif_get_extras(struct xenvif_queue *queue,
>  				struct xen_netif_extra_info *extras,
>  				int work_to_do)
>  {
>  	struct xen_netif_extra_info extra;
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
>  		if (unlikely(work_to_do-- <= 0)) {
> -			netdev_err(vif->dev, "Missing extra info\n");
> -			xenvif_fatal_tx_err(vif);
> +			netdev_err(queue->vif->dev, "Missing extra info\n");
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EBADR;
>  		}
> 
> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>  		       sizeof(extra));
>  		if (unlikely(!extra.type ||
>  			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> -			vif->tx.req_cons = ++cons;
> -			netdev_err(vif->dev,
> +			queue->tx.req_cons = ++cons;
> +			netdev_err(queue->vif->dev,
>  				   "Invalid extra type: %d\n", extra.type);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
>  		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
> -		vif->tx.req_cons = ++cons;
> +		queue->tx.req_cons = ++cons;
>  	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
> 
>  	return work_to_do;
> @@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	return err;
>  }
> 
> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>  {
>  	u64 now = get_jiffies_64();
> -	u64 next_credit = vif->credit_window_start +
> -		msecs_to_jiffies(vif->credit_usec / 1000);
> +	u64 next_credit = queue->credit_window_start +
> +		msecs_to_jiffies(queue->credit_usec / 1000);
> 
>  	/* Timer could already be pending in rare cases. */
> -	if (timer_pending(&vif->credit_timeout))
> +	if (timer_pending(&queue->credit_timeout))
>  		return true;
> 
>  	/* Passed the point where we can replenish credit? */
>  	if (time_after_eq64(now, next_credit)) {
> -		vif->credit_window_start = now;
> -		tx_add_credit(vif);
> +		queue->credit_window_start = now;
> +		tx_add_credit(queue);
>  	}
> 
>  	/* Still too big to send right now? Set a callback. */
> -	if (size > vif->remaining_credit) {
> -		vif->credit_timeout.data     =
> -			(unsigned long)vif;
> -		vif->credit_timeout.function =
> +	if (size > queue->remaining_credit) {
> +		queue->credit_timeout.data     =
> +			(unsigned long)queue;
> +		queue->credit_timeout.function =
>  			tx_credit_callback;
> -		mod_timer(&vif->credit_timeout,
> +		mod_timer(&queue->credit_timeout,
>  			  next_credit);
> -		vif->credit_window_start = next_credit;
> +		queue->credit_window_start = next_credit;
> 
>  		return true;
>  	}
> @@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>  	return false;
>  }
> 
> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>  	struct sk_buff *skb;
>  	int ret;
> 
> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  		< MAX_PENDING_REQS) &&
> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>  		struct xen_netif_tx_request txreq;
>  		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>  		struct page *page;
> @@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		unsigned int data_len;
>  		pending_ring_idx_t index;
> 
> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>  		    XEN_NETIF_TX_RING_SIZE) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Impossible number of requests. "
>  				   "req_prod %d, req_cons %d, size %ld\n",
> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>  				   XEN_NETIF_TX_RING_SIZE);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			continue;
>  		}
> 
> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>  		if (!work_to_do)
>  			break;
> 
> -		idx = vif->tx.req_cons;
> +		idx = queue->tx.req_cons;
>  		rmb(); /* Ensure that we see the request before we copy it. */
> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
> 
>  		/* Credit-based scheduling. */
> -		if (txreq.size > vif->remaining_credit &&
> -		    tx_credit_exceeded(vif, txreq.size))
> +		if (txreq.size > queue->remaining_credit &&
> +		    tx_credit_exceeded(queue, txreq.size))
>  			break;
> 
> -		vif->remaining_credit -= txreq.size;
> +		queue->remaining_credit -= txreq.size;
> 
>  		work_to_do--;
> -		vif->tx.req_cons = ++idx;
> +		queue->tx.req_cons = ++idx;
> 
>  		memset(extras, 0, sizeof(extras));
>  		if (txreq.flags & XEN_NETTXF_extra_info) {
> -			work_to_do = xenvif_get_extras(vif, extras,
> +			work_to_do = xenvif_get_extras(queue, extras,
>  						       work_to_do);
> -			idx = vif->tx.req_cons;
> +			idx = queue->tx.req_cons;
>  			if (unlikely(work_to_do < 0))
>  				break;
>  		}
> 
> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>  		if (unlikely(ret < 0))
>  			break;
> 
>  		idx += ret;
> 
>  		if (unlikely(txreq.size < ETH_HLEN)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Bad packet size: %d\n", txreq.size);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		/* No crossing a page as the payload mustn't fragment. */
>  		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "txreq.offset: %x, size: %u, end: %lu\n",
>  				   txreq.offset, txreq.size,
>  				   (txreq.offset&~PAGE_MASK) + txreq.size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			break;
>  		}
> 
> -		index = pending_index(vif->pending_cons);
> -		pending_idx = vif->pending_ring[index];
> +		index = pending_index(queue->pending_cons);
> +		pending_idx = queue->pending_ring[index];
> 
>  		data_len = (txreq.size > PKT_PROT_LEN &&
>  			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
> @@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>  				GFP_ATOMIC | __GFP_NOWARN);
>  		if (unlikely(skb == NULL)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't allocate a skb in start_xmit.\n");
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
> @@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  			struct xen_netif_extra_info *gso;
>  			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
> 
> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>  				/* Failure in xenvif_set_skb_gso is fatal. */
>  				kfree_skb(skb);
>  				break;
> @@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		}
> 
>  		/* XXX could copy straight to head */
> -		page = xenvif_alloc_page(vif, pending_idx);
> +		page = xenvif_alloc_page(queue, pending_idx);
>  		if (!page) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		gop->source.u.ref = txreq.gref;
> -		gop->source.domid = vif->domid;
> +		gop->source.domid = queue->vif->domid;
>  		gop->source.offset = txreq.offset;
> 
>  		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
> @@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> 
>  		gop++;
> 
> -		memcpy(&vif->pending_tx_info[pending_idx].req,
> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>  		       &txreq, sizeof(txreq));
> -		vif->pending_tx_info[pending_idx].head = index;
> +		queue->pending_tx_info[pending_idx].head = index;
>  		*((u16 *)skb->data) = pending_idx;
> 
>  		__skb_put(skb, data_len);
> @@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  					     INVALID_PENDING_IDX);
>  		}
> 
> -		vif->pending_cons++;
> +		queue->pending_cons++;
> 
> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>  		if (request_gop == NULL) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
>  		gop = request_gop;
> 
> -		__skb_queue_tail(&vif->tx_queue, skb);
> +		__skb_queue_tail(&queue->tx_queue, skb);
> 
> -		vif->tx.req_cons = idx;
> +		queue->tx.req_cons = idx;
> 
> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>  			break;
>  	}
> 
> -	return gop - vif->tx_copy_ops;
> +	return gop - queue->tx_copy_ops;
>  }
> 
> 
> -static int xenvif_tx_submit(struct xenvif *vif)
> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops;
> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>  	struct sk_buff *skb;
>  	int work_done = 0;
> 
> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>  		struct xen_netif_tx_request *txp;
>  		u16 pending_idx;
>  		unsigned data_len;
> 
>  		pending_idx = *((u16 *)skb->data);
> -		txp = &vif->pending_tx_info[pending_idx].req;
> +		txp = &queue->pending_tx_info[pending_idx].req;
> 
>  		/* Check the remap error code. */
> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
> -			netdev_dbg(vif->dev, "netback grant failed.\n");
> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>  			skb_shinfo(skb)->nr_frags = 0;
>  			kfree_skb(skb);
>  			continue;
> @@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		data_len = skb->len;
>  		memcpy(skb->data,
> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>  		       data_len);
>  		if (data_len < txp->size) {
>  			/* Append the packet payload as a fragment. */
> @@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  			txp->size -= data_len;
>  		} else {
>  			/* Schedule a response immediately. */
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
> 
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(queue, skb);
> 
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
>  		}
> 
> -		skb->dev      = vif->dev;
> +		skb->dev      = queue->vif->dev;
>  		skb->protocol = eth_type_trans(skb, skb->dev);
>  		skb_reset_network_header(skb);
> 
> -		if (checksum_setup(vif, skb)) {
> -			netdev_dbg(vif->dev,
> +		if (checksum_setup(queue->vif, skb)) {
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
>  			kfree_skb(skb);
>  			continue;
> @@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		skb_probe_transport_header(skb, 0);
> 
> -		vif->dev->stats.rx_bytes += skb->len;
> -		vif->dev->stats.rx_packets++;
> +		queue->vif->dev->stats.rx_bytes += skb->len;
> +		queue->vif->dev->stats.rx_packets++;
> 
>  		work_done++;
> 
> @@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  }
> 
>  /* Called after netfront has transmitted */
> -int xenvif_tx_action(struct xenvif *vif, int budget)
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>  {
>  	unsigned nr_gops;
>  	int work_done;
> 
> -	if (unlikely(!tx_work_todo(vif)))
> +	if (unlikely(!tx_work_todo(queue)))
>  		return 0;
> 
> -	nr_gops = xenvif_tx_build_gops(vif, budget);
> +	nr_gops = xenvif_tx_build_gops(queue, budget);
> 
>  	if (nr_gops == 0)
>  		return 0;
> 
> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
> 
> -	work_done = xenvif_tx_submit(vif);
> +	work_done = xenvif_tx_submit(queue);
> 
>  	return work_done;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status)
>  {
>  	struct pending_tx_info *pending_tx_info;
>  	pending_ring_idx_t head;
>  	u16 peek; /* peek into next tx request */
> 
> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
> 
>  	/* Already complete? */
> -	if (vif->mmap_pages[pending_idx] == NULL)
> +	if (queue->mmap_pages[pending_idx] == NULL)
>  		return;
> 
> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
> 
>  	head = pending_tx_info->head;
> 
> -	BUG_ON(!pending_tx_is_head(vif, head));
> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
> +	BUG_ON(!pending_tx_is_head(queue, head));
> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
> 
>  	do {
>  		pending_ring_idx_t index;
>  		pending_ring_idx_t idx = pending_index(head);
> -		u16 info_idx = vif->pending_ring[idx];
> +		u16 info_idx = queue->pending_ring[idx];
> 
> -		pending_tx_info = &vif->pending_tx_info[info_idx];
> -		make_tx_response(vif, &pending_tx_info->req, status);
> +		pending_tx_info = &queue->pending_tx_info[info_idx];
> +		make_tx_response(queue, &pending_tx_info->req, status);
> 
>  		/* Setting any number other than
>  		 * INVALID_PENDING_RING_IDX indicates this slot is
> @@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  		 */
>  		pending_tx_info->head = 0;
> 
> -		index = pending_index(vif->pending_prod++);
> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
> +		index = pending_index(queue->pending_prod++);
> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
> 
> -		peek = vif->pending_ring[pending_index(++head)];
> +		peek = queue->pending_ring[pending_index(++head)];
> 
> -	} while (!pending_tx_is_head(vif, peek));
> +	} while (!pending_tx_is_head(queue, peek));
> 
> -	put_page(vif->mmap_pages[pending_idx]);
> -	vif->mmap_pages[pending_idx] = NULL;
> +	put_page(queue->mmap_pages[pending_idx]);
> +	queue->mmap_pages[pending_idx] = NULL;
>  }
> 
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st)
>  {
> -	RING_IDX i = vif->tx.rsp_prod_pvt;
> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>  	struct xen_netif_tx_response *resp;
>  	int notify;
> 
> -	resp = RING_GET_RESPONSE(&vif->tx, i);
> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>  	resp->id     = txp->id;
>  	resp->status = st;
> 
>  	if (txp->flags & XEN_NETTXF_extra_info)
> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> 
> -	vif->tx.rsp_prod_pvt = ++i;
> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
> +	queue->tx.rsp_prod_pvt = ++i;
> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>  	if (notify)
> -		notify_remote_via_irq(vif->tx_irq);
> +		notify_remote_via_irq(queue->tx_irq);
>  }
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags)
>  {
> -	RING_IDX i = vif->rx.rsp_prod_pvt;
> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>  	struct xen_netif_rx_response *resp;
> 
> -	resp = RING_GET_RESPONSE(&vif->rx, i);
> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>  	resp->offset     = offset;
>  	resp->flags      = flags;
>  	resp->id         = id;
> @@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>  	if (st < 0)
>  		resp->status = (s16)st;
> 
> -	vif->rx.rsp_prod_pvt = ++i;
> +	queue->rx.rsp_prod_pvt = ++i;
> 
>  	return resp;
>  }
> 
> -static inline int rx_work_todo(struct xenvif *vif)
> +static inline int rx_work_todo(struct xenvif_queue *queue)
>  {
> -	return !skb_queue_empty(&vif->rx_queue);
> +	return !skb_queue_empty(&queue->rx_queue);
>  }
> 
> -static inline int tx_work_todo(struct xenvif *vif)
> +static inline int tx_work_todo(struct xenvif_queue *queue)
>  {
> 
> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  	     < MAX_PENDING_REQS))
>  		return 1;
> 
>  	return 0;
>  }
> 
> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>  {
> -	if (vif->tx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->tx.sring);
> -	if (vif->rx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->rx.sring);
> +	if (queue->tx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->tx.sring);
> +	if (queue->rx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->rx.sring);
>  }
> 
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref)
>  {
> @@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
> 
>  	int err = -ENOMEM;
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     tx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	txs = (struct xen_netif_tx_sring *)addr;
> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     rx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	rxs = (struct xen_netif_rx_sring *)addr;
> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
> 
> -	vif->rx_req_cons_peek = 0;
> +	queue->rx_req_cons_peek = 0;
> 
>  	return 0;
> 
>  err:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  	return err;
>  }
> 
>  int xenvif_kthread(void *data)
>  {
> -	struct xenvif *vif = data;
> +	struct xenvif_queue *queue = data;
> 
>  	while (!kthread_should_stop()) {
> -		wait_event_interruptible(vif->wq,
> -					 rx_work_todo(vif) ||
> +		wait_event_interruptible(queue->wq,
> +					 rx_work_todo(queue) ||
>  					 kthread_should_stop());
>  		if (kthread_should_stop())
>  			break;
> 
> -		if (rx_work_todo(vif))
> -			xenvif_rx_action(vif);
> +		if (rx_work_todo(queue))
> +			xenvif_rx_action(queue);
> 
>  		cond_resched();
>  	}
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index f035899..c3332e2 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -20,6 +20,7 @@
>  */
> 
>  #include "common.h"
> +#include <linux/vmalloc.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -35,8 +36,9 @@ struct backend_info {
>  	u8 have_hotplug_status_watch:1;
>  };
> 
> -static int connect_rings(struct backend_info *);
> -static void connect(struct backend_info *);
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
> +static void connect(struct backend_info *be);
> +static int read_xenbus_vif_flags(struct backend_info *be);
>  static void backend_create_xenvif(struct backend_info *be);
>  static void unregister_hotplug_status_watch(struct backend_info *be);
>  static void set_backend_state(struct backend_info *be,
> @@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
>  {
>  	int err;
>  	struct xenbus_device *dev = be->dev;
> -
> -	err = connect_rings(be);
> -	if (err)
> -		return;
> +	unsigned long credit_bytes, credit_usec;
> +	unsigned int queue_index;
> +	struct xenvif_queue *queue;
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
>  		return;
>  	}
> 
> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
> -			  &be->vif->credit_usec);
> -	be->vif->remaining_credit = be->vif->credit_bytes;
> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
> +	read_xenbus_vif_flags(be);
> +
> +	be->vif->num_queues = 1;
> +	be->vif->queues = vzalloc(be->vif->num_queues *
> +			sizeof(struct xenvif_queue));
> +
> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
> +	{
> +		queue = &be->vif->queues[queue_index];
> +		queue->vif = be->vif;
> +		queue->number = queue_index;
> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
> +				be->vif->dev->name, queue->number);
> +
> +		xenvif_init_queue(queue);
> +
> +		queue->remaining_credit = credit_bytes;
> +
> +		err = connect_rings(be, queue);
> +		if (err)
> +			goto err;
> +	}
> +
> +	xenvif_carrier_on(be->vif);
> 
>  	unregister_hotplug_status_watch(be);
>  	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
> @@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
>  	if (!err)
>  		be->have_hotplug_status_watch = 1;
> 
> -	netif_wake_queue(be->vif->dev);
> +	netif_tx_wake_all_queues(be->vif->dev);
> +
> +	return;
> +
> +err:
> +	vfree(be->vif->queues);
> +	be->vif->queues = NULL;
> +	be->vif->num_queues = 0;
> +	return;
>  }
> 
> 
> -static int connect_rings(struct backend_info *be)
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  {
> -	struct xenvif *vif = be->vif;
>  	struct xenbus_device *dev = be->dev;
>  	unsigned long tx_ring_ref, rx_ring_ref;
> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
> +	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> -	int val;
> 
>  	err = xenbus_gather(XBT_NIL, dev->otherend,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
> @@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
>  		rx_evtchn = tx_evtchn;
>  	}
> 
> +	/* Map the shared frame, irq etc. */
> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
> +			     tx_evtchn, rx_evtchn);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err,
> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> +				 tx_ring_ref, rx_ring_ref,
> +				 tx_evtchn, rx_evtchn);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static int read_xenbus_vif_flags(struct backend_info *be)
> +{
> +	struct xenvif *vif = be->vif;
> +	struct xenbus_device *dev = be->dev;
> +	unsigned int rx_copy;
> +	int err, val;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>  			   &rx_copy);
>  	if (err == -ENOENT) {
> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
> 
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
> 
> -
>  /* ** Driver Registration ** */
> 
> 
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:24:01 2014
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
	queue struct.
Thread-Index: AQHPEg4x4WFe49MrpEWenZ63DvyEMpqHIP5g
Date: Thu, 16 Jan 2014 10:23:46 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 15 January 2014 16:23
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> queue struct.
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_rxhash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |   66 +++--
>  drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>  drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++-------------
> -----
>  drivers/net/xen-netback/xenbus.c    |   89 ++++--
>  4 files changed, 556 insertions(+), 423 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index c47794b..54d2eeb 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
> 
> -struct xenvif {
> -	/* Unique identifier for this interface. */
> -	domid_t          domid;
> -	unsigned int     handle;
> +struct xenvif;
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int number; /* Queue number, 0-based */
> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */

I wonder whether it would be neater to #define the name size here...

> +	struct xenvif *vif; /* Parent VIF */
> 
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */

...and the IRQ name size here. It's kind of ugly to have + some_magic_value in array definitions.

>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,7 +142,7 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
> 
> @@ -150,14 +152,27 @@ struct xenvif {
>  	 */
>  	RING_IDX rx_req_cons_peek;
> 
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];

I see you brought this back in line, which is reasonable as the queue is now a separately allocated struct.

> 
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> 
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +};
> +
> +struct xenvif {
> +	/* Unique identifier for this interface. */
> +	domid_t          domid;
> +	unsigned int     handle;
> +
>  	u8               fe_dev_addr[6];
> 
>  	/* Frontend feature information. */
> @@ -171,12 +186,9 @@ struct xenvif {
>  	/* Internal feature information. */
>  	u8 can_queue:1;	    /* can queue packets for receiver? */
> 
> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> -	unsigned long   credit_bytes;
> -	unsigned long   credit_usec;
> -	unsigned long   remaining_credit;
> -	struct timer_list credit_timeout;
> -	u64 credit_window_start;
> +	/* Queues */
> +	unsigned int num_queues;
> +	struct xenvif_queue *queues;
> 
>  	/* Statistics */

Do stats need to be per-queue (and then possibly aggregated at query time)?

>  	unsigned long rx_gso_checksum_fixup;
> @@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>  			    domid_t domid,
>  			    unsigned int handle);
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue);
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn);
>  void xenvif_disconnect(struct xenvif *vif);
> @@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
> 
>  int xenvif_schedulable(struct xenvif *vif);
> 
> -int xenvif_rx_ring_full(struct xenvif *vif);
> +int xenvif_rx_ring_full(struct xenvif_queue *queue);
> 
> -int xenvif_must_stop_queue(struct xenvif *vif);
> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
> 
>  /* (Un)Map communication rings. */
> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref);
> 
>  /* Check for SKBs from frontend and schedule backend processing */
> -void xenvif_check_rx_xenvif(struct xenvif *vif);
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
> 
>  /* Queue an SKB for transmission to the frontend */
> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb);
>  /* Notify xenvif that ring now has space to send an skb to the frontend */
> -void xenvif_notify_tx_completion(struct xenvif *vif);
> +void xenvif_notify_tx_completion(struct xenvif_queue *queue);
> 
>  /* Prevent the device from generating any further traffic. */
>  void xenvif_carrier_off(struct xenvif *vif);
> @@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
>  /* Returns number of ring slots required to send an skb to the frontend */
>  unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
> 
> -int xenvif_tx_action(struct xenvif *vif, int budget);
> -void xenvif_rx_action(struct xenvif *vif);
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
> +void xenvif_rx_action(struct xenvif_queue *queue);
> 
>  int xenvif_kthread(void *data);
> 
> +int xenvif_poll(struct napi_struct *napi, int budget);
> +
> +void xenvif_carrier_on(struct xenvif *vif);
> +
>  extern bool separate_tx_rx_irq;
> 
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> index fff8cdd..0113324 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -34,7 +34,6 @@
>  #include <linux/ethtool.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/if_vlan.h>
> -#include <linux/vmalloc.h>
> 
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> @@ -42,32 +41,50 @@
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> 
> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
> +{
> +	netif_tx_wake_queue(
> +			netdev_get_tx_queue(queue->vif->dev, queue->number));

Might be neater to declare some stack variables for dev and number to avoid the long line.

> +}
> +
> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
> +{
> +	netif_tx_stop_queue(
> +			netdev_get_tx_queue(queue->vif->dev, queue->number));

Ditto.

> +}
> +
> +static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
> +{
> +	return netif_tx_queue_stopped(
> +			netdev_get_tx_queue(queue->vif->dev, queue->number));

Ditto.

> +}
> +
>  int xenvif_schedulable(struct xenvif *vif)
>  {
>  	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
>  }
> 
> -static int xenvif_rx_schedulable(struct xenvif *vif)
> +static int xenvif_rx_schedulable(struct xenvif_queue *queue)
>  {
> -	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
> +	return xenvif_schedulable(queue->vif) && !xenvif_rx_ring_full(queue);

I guess your patches have not been rebased onto net-next? xenvif_rx_ring_full() and xenvif_rx_schedulable() went away in c/s ca2f09f2b2c6c25047cfc545d057c4edfcfe561c (xen-netback: improve guest-receive-side flow control).

Can you rebase? Eventual patches will need to go into net-next.

  Paul

>  }
> 
>  static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
> -		napi_schedule(&vif->napi);
> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
> +		napi_schedule(&queue->napi);
> 
>  	return IRQ_HANDLED;
>  }
> 
> -static int xenvif_poll(struct napi_struct *napi, int budget)
> +int xenvif_poll(struct napi_struct *napi, int budget)
>  {
> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
> +	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
>  	int work_done;
> 
> -	work_done = xenvif_tx_action(vif, budget);
> +	work_done = xenvif_tx_action(queue, budget);
> 
>  	if (work_done < budget) {
>  		int more_to_do = 0;
> @@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  		local_irq_save(flags);
> 
> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>  		if (!more_to_do)
>  			__napi_complete(napi);
> 
> @@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (xenvif_rx_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	if (xenvif_rx_schedulable(queue))
> +		xenvif_wake_queue(queue);
> 
>  	return IRQ_HANDLED;
>  }
> @@ -119,27 +136,56 @@ static irqreturn_t xenvif_interrupt(int irq, void
> *dev_id)
>  	return IRQ_HANDLED;
>  }
> 
> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	}

Style: kernel coding style wants the else cuddled on the closing brace, i.e. "} else {".

> +	else {
> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
> +		hash = skb_get_rxhash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
> +	}
> +
> +	return queue_index;
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	u16 queue_index = 0;
> +	struct xenvif_queue *queue = NULL;
> 
>  	BUG_ON(skb->dev != dev);
> 
> -	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL)
> +	/* Drop the packet if the queues are not set up */
> +	if (vif->num_queues < 1 || vif->queues == NULL)
> +		goto drop;

Just do the former test and ASSERT the second.
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	queue_index = skb_get_queue_mapping(skb);

Personally, I'd stick a range check here.

> +	queue = &vif->queues[queue_index];
> +
> +	/* Drop the packet if queue is not ready */
> +	if (queue->task == NULL)
>  		goto drop;
> 
>  	/* Drop the packet if the target domain has no receive buffers. */
> -	if (!xenvif_rx_schedulable(vif))
> +	if (!xenvif_rx_schedulable(queue))
>  		goto drop;
> 
>  	/* Reserve ring slots for the worst-case number of fragments. */
> -	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
> +	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
> 
> -	if (vif->can_queue && xenvif_must_stop_queue(vif))
> -		netif_stop_queue(dev);
> +	if (vif->can_queue && xenvif_must_stop_queue(queue))
> +		xenvif_stop_queue(queue);
> 
> -	xenvif_queue_tx_skb(vif, skb);
> +	xenvif_queue_tx_skb(queue, skb);
> 
>  	return NETDEV_TX_OK;
> 
> @@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
>  	return NETDEV_TX_OK;
>  }
> 
> -void xenvif_notify_tx_completion(struct xenvif *vif)
> +void xenvif_notify_tx_completion(struct xenvif_queue *queue)
>  {
> -	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	if (xenvif_queue_stopped(queue) &&
> xenvif_rx_schedulable(queue))
> +		xenvif_wake_queue(queue);
>  }
> 
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
> @@ -163,20 +209,30 @@ static struct net_device_stats
> *xenvif_get_stats(struct net_device *dev)
> 
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
> 
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
> 
>  static int xenvif_open(struct net_device *dev)
> @@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_up(vif);
> -	netif_start_queue(dev);
> +	netif_tx_start_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_down(vif);
> -	netif_stop_queue(dev);
> +	netif_tx_stop_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -287,6 +343,7 @@ static const struct net_device_ops
> xenvif_netdev_ops = {
>  	.ndo_fix_features = xenvif_fix_features,
>  	.ndo_set_mac_address = eth_mac_addr,
>  	.ndo_validate_addr   = eth_validate_addr,
> +	.ndo_select_queue = select_queue,
>  };
> 
>  struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> @@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	struct net_device *dev;
>  	struct xenvif *vif;
>  	char name[IFNAMSIZ] = {};
> -	int i;
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> @@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
> 
>  	vif = netdev_priv(dev);
> 
> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
> -				     MAX_GRANT_COPY_OPS);
> -	if (vif->grant_copy_op == NULL) {
> -		pr_warn("Could not allocate grant copy space for %s\n", name);
> -		free_netdev(dev);
> -		return ERR_PTR(-ENOMEM);
> -	}
> -
>  	vif->domid  = domid;
>  	vif->handle = handle;
>  	vif->can_sg = 1;
>  	vif->ip_csum = 1;
>  	vif->dev = dev;
> 
> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
> -	vif->credit_usec  = 0UL;
> -	init_timer(&vif->credit_timeout);
> -	vif->credit_window_start = get_jiffies_64();
> +	/* Start out with no queues */
> +	vif->num_queues = 0;
> +	vif->queues = NULL;
> 
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
> @@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
> 
>  	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
> 
> -	skb_queue_head_init(&vif->rx_queue);
> -	skb_queue_head_init(&vif->tx_queue);
> -
> -	vif->pending_cons = 0;
> -	vif->pending_prod = MAX_PENDING_REQS;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> -
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
>  	 * largest non-broadcast address to prevent the address getting
> @@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>  	dev->dev_addr[0] &= ~0x01;
> 
> -	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
> -
>  	netif_carrier_off(dev);
> 
>  	err = register_netdev(dev);
> @@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	return vif;
>  }
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue)
> +{
> +	int i;
> +
> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
> +	queue->credit_usec  = 0UL;
> +	init_timer(&queue->credit_timeout);
> +	queue->credit_window_start = get_jiffies_64();
> +
> +	skb_queue_head_init(&queue->rx_queue);
> +	skb_queue_head_init(&queue->tx_queue);
> +
> +	queue->pending_cons = 0;
> +	queue->pending_prod = MAX_PENDING_REQS;
> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
> +		queue->pending_ring[i] = i;
> +		queue->mmap_pages[i] = NULL;
> +	}
> +
> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> +			XENVIF_NAPI_WEIGHT);
> +}
> +
> +void xenvif_carrier_on(struct xenvif *vif)
> +{
> +	rtnl_lock();
> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> +	netdev_update_features(vif->dev);
> +	netif_carrier_on(vif->dev);
> +	if (netif_running(vif->dev))
> +		xenvif_up(vif);
> +	rtnl_unlock();
> +}
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn)
>  {
>  	struct task_struct *task;
>  	int err = -ENOMEM;
> 
> -	BUG_ON(vif->tx_irq);
> -	BUG_ON(vif->task);
> +	BUG_ON(queue->tx_irq);
> +	BUG_ON(queue->task);
> 
> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
> 
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
> -			vif->dev->name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
> +			queue->name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = vif->rx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = queue->rx_irq = err;
> +		disable_irq(queue->tx_irq);
>  	} else {
>  		/* feature-split-event-channels == 1 */
> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
> -			 "%s-tx", vif->dev->name);
> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
> +			 "%s-tx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> -			vif->tx_irq_name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> +			queue->tx_irq_name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = err;
> +		disable_irq(queue->tx_irq);
> 
> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
> -			 "%s-rx", vif->dev->name);
> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
> +			 "%s-rx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> -			vif->rx_irq_name, vif);
> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> +			queue->rx_irq_name, queue);
>  		if (err < 0)
>  			goto err_tx_unbind;
> -		vif->rx_irq = err;
> -		disable_irq(vif->rx_irq);
> +		queue->rx_irq = err;
> +		disable_irq(queue->rx_irq);
>  	}
> 
> -	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&queue->wq);
>  	task = kthread_create(xenvif_kthread,
> -			      (void *)vif, "%s", vif->dev->name);
> +			      (void *)queue, "%s", queue->name);
>  	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>  		err = PTR_ERR(task);
>  		goto err_rx_unbind;
>  	}
> 
> -	vif->task = task;
> -
> -	rtnl_lock();
> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> -	netdev_update_features(vif->dev);
> -	netif_carrier_on(vif->dev);
> -	if (netif_running(vif->dev))
> -		xenvif_up(vif);
> -	rtnl_unlock();
> +	queue->task = task;
> 
> -	wake_up_process(vif->task);
> +	wake_up_process(queue->task);
> 
>  	return 0;
> 
>  err_rx_unbind:
> -	unbind_from_irqhandler(vif->rx_irq, vif);
> -	vif->rx_irq = 0;
> +	unbind_from_irqhandler(queue->rx_irq, queue);
> +	queue->rx_irq = 0;
>  err_tx_unbind:
> -	unbind_from_irqhandler(vif->tx_irq, vif);
> -	vif->tx_irq = 0;
> +	unbind_from_irqhandler(queue->tx_irq, queue);
> +	queue->tx_irq = 0;
>  err_unmap:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  err:
>  	module_put(THIS_MODULE);
>  	return err;
> @@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
> 
>  void xenvif_disconnect(struct xenvif *vif)
>  {
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
>  	if (netif_carrier_ok(vif->dev))
>  		xenvif_carrier_off(vif);
> 
> -	if (vif->task) {
> -		kthread_stop(vif->task);
> -		vif->task = NULL;
> -	}
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> 
> -	if (vif->tx_irq) {
> -		if (vif->tx_irq == vif->rx_irq)
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -		else {
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -			unbind_from_irqhandler(vif->rx_irq, vif);
> +		if (queue->task) {
> +			kthread_stop(queue->task);
> +			queue->task = NULL;
>  		}
> -		vif->tx_irq = 0;
> +
> +		if (queue->tx_irq) {
> +			if (queue->tx_irq == queue->rx_irq)
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +			else {
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +				unbind_from_irqhandler(queue->rx_irq, queue);
> +			}
> +			queue->tx_irq = 0;
> +		}
> +
> +		xenvif_unmap_frontend_rings(queue);
>  	}
> 
> -	xenvif_unmap_frontend_rings(vif);
> +
>  }
> 
>  void xenvif_free(struct xenvif *vif)
>  {
> -	netif_napi_del(&vif->napi);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> 
> -	unregister_netdev(vif->dev);
> +	for(queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		netif_napi_del(&queue->napi);
> +	}
> 
> -	vfree(vif->grant_copy_op);
> +	/* Free the array of queues */
> +	vfree(vif->queues);
> +	vif->num_queues = 0;
> +	vif->queues = 0;
> +
> +	unregister_netdev(vif->dev);
>  	free_netdev(vif->dev);
> 
>  	module_put(THIS_MODULE);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-
> netback/netback.c
> index 7842555..586e741 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
>   * one or more merged tx requests, otherwise it is the continuation of
>   * previous tx request.
>   */
> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>  {
> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status);
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st);
> 
> -static inline int tx_work_todo(struct xenvif *vif);
> -static inline int rx_work_todo(struct xenvif *vif);
> +static inline int tx_work_todo(struct xenvif_queue *queue);
> +static inline int rx_work_todo(struct xenvif_queue *queue);
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags);
> 
> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>  				       u16 idx)
>  {
> -	return page_to_pfn(vif->mmap_pages[idx]);
> +	return page_to_pfn(queue->mmap_pages[idx]);
>  }
> 
> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>  					 u16 idx)
>  {
> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>  }
> 
>  /* This is a miniumum size for the linear area to avoid lots of
> @@ -132,10 +132,10 @@ static inline pending_ring_idx_t
> pending_index(unsigned i)
>  	return i & (MAX_PENDING_REQS-1);
>  }
> 
> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>  {
>  	return MAX_PENDING_REQS -
> -		vif->pending_prod + vif->pending_cons;
> +		queue->pending_prod + queue->pending_cons;
>  }
> 
>  static int max_required_rx_slots(struct xenvif *vif)
> @@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
>  	return max;
>  }
> 
> -int xenvif_rx_ring_full(struct xenvif *vif)
> +int xenvif_rx_ring_full(struct xenvif_queue *queue)
>  {
> -	RING_IDX peek   = vif->rx_req_cons_peek;
> -	RING_IDX needed = max_required_rx_slots(vif);
> +	RING_IDX peek   = queue->rx_req_cons_peek;
> +	RING_IDX needed = max_required_rx_slots(queue->vif);
> 
> -	return ((vif->rx.sring->req_prod - peek) < needed) ||
> -	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
> +	return ((queue->rx.sring->req_prod - peek) < needed) ||
> +	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>  }
> 
> -int xenvif_must_stop_queue(struct xenvif *vif)
> +int xenvif_must_stop_queue(struct xenvif_queue *queue)
>  {
> -	if (!xenvif_rx_ring_full(vif))
> +	if (!xenvif_rx_ring_full(queue))
>  		return 0;
> 
> -	vif->rx.sring->req_event = vif->rx_req_cons_peek +
> -		max_required_rx_slots(vif);
> +	queue->rx.sring->req_event = queue->rx_req_cons_peek +
> +		max_required_rx_slots(queue->vif);
>  	mb(); /* request notification /then/ check the queue */
> 
> -	return xenvif_rx_ring_full(vif);
> +	return xenvif_rx_ring_full(queue);
>  }
> 
>  /*
> @@ -306,13 +306,13 @@ struct netrx_pending_operations {
>  	grant_ref_t copy_gref;
>  };
> 
> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>  						 struct netrx_pending_operations *npo)
>  {
>  	struct xenvif_rx_meta *meta;
>  	struct xen_netif_rx_request *req;
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
> 
>  	meta = npo->meta + npo->meta_prod++;
>  	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
> @@ -330,7 +330,7 @@ static struct xenvif_rx_meta
> *get_next_rx_buffer(struct xenvif *vif,
>   * Set up the grant operations for this fragment. If it's a flipping
>   * interface, we also set up the unmap request from here.
>   */
> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>  				 struct netrx_pending_operations *npo,
>  				 struct page *page, unsigned long size,
>  				 unsigned long offset, int *head)
> @@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif,
> struct sk_buff *skb,
>  			 */
>  			BUG_ON(*head);
> 
> -			meta = get_next_rx_buffer(vif, npo);
> +			meta = get_next_rx_buffer(queue, npo);
>  		}
> 
>  		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
> @@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif,
> struct sk_buff *skb,
>  		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>  		copy_gop->source.offset = offset;
> 
> -		copy_gop->dest.domid = vif->domid;
> +		copy_gop->dest.domid = queue->vif->domid;
>  		copy_gop->dest.offset = npo->copy_off;
>  		copy_gop->dest.u.ref = npo->copy_gref;
> 
> @@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif,
> struct sk_buff *skb,
>  		else
>  			gso_type = XEN_NETIF_GSO_TYPE_NONE;
> 
> -		if (*head && ((1 << gso_type) & vif->gso_mask))
> -			vif->rx.req_cons++;
> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
> +			queue->rx.req_cons++;
> 
>  		*head = 0; /* There must be something in this buffer now. */
> 
> @@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif,
> struct sk_buff *skb,
>   * frontend-side LRO).
>   */
>  static int xenvif_gop_skb(struct sk_buff *skb,
> -			  struct netrx_pending_operations *npo)
> +			  struct netrx_pending_operations *npo,
> +			  struct xenvif_queue *queue)
>  {
>  	struct xenvif *vif = netdev_priv(skb->dev);
>  	int nr_frags = skb_shinfo(skb)->nr_frags;
> @@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
> 
>  	/* Set up a GSO prefix descriptor, if necessary */
>  	if ((1 << gso_type) & vif->gso_prefix_mask) {
> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  		meta = npo->meta + npo->meta_prod++;
>  		meta->gso_type = gso_type;
>  		meta->gso_size = gso_size;
> @@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		meta->id = req->id;
>  	}
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  	meta = npo->meta + npo->meta_prod++;
> 
>  	if ((1 << gso_type) & vif->gso_mask) {
> @@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		if (data + len > skb_tail_pointer(skb))
>  			len = skb_tail_pointer(skb) - data;
> 
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     virt_to_page(data), len, offset, &head);
>  		data += len;
>  	}
> 
>  	for (i = 0; i < nr_frags; i++) {
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>  				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>  				     skb_shinfo(skb)->frags[i].page_offset,
> @@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int
> nr_meta_slots,
>  	return status;
>  }
> 
> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>  				      struct xenvif_rx_meta *meta,
>  				      int nr_meta_slots)
>  {
> @@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif
> *vif, int status,
>  			flags = XEN_NETRXF_more_data;
> 
>  		offset = 0;
> -		make_rx_response(vif, meta[i].id, status, offset,
> +		make_rx_response(queue, meta[i].id, status, offset,
>  				 meta[i].size, flags);
>  	}
>  }
> @@ -557,12 +558,12 @@ struct skb_cb_overlay {
>  	int meta_slots_used;
>  };
> 
> -static void xenvif_kick_thread(struct xenvif *vif)
> +static void xenvif_kick_thread(struct xenvif_queue *queue)
>  {
> -	wake_up(&vif->wq);
> +	wake_up(&queue->wq);
>  }
> 
> -void xenvif_rx_action(struct xenvif *vif)
> +void xenvif_rx_action(struct xenvif_queue *queue)
>  {
>  	s8 status;
>  	u16 flags;
> @@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
>  	int need_to_notify = 0;
> 
>  	struct netrx_pending_operations npo = {
> -		.copy  = vif->grant_copy_op,
> -		.meta  = vif->meta,
> +		.copy  = queue->grant_copy_op,
> +		.meta  = queue->meta,
>  	};
> 
>  	skb_queue_head_init(&rxq);
> 
>  	count = 0;
> 
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> -		vif = netdev_priv(skb->dev);
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>  		nr_frags = skb_shinfo(skb)->nr_frags;
> 
>  		sco = (struct skb_cb_overlay *)skb->cb;
> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
> 
>  		count += nr_frags + 1;
> 
> @@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
>  			break;
>  	}
> 
> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
> 
>  	if (!npo.copy_prod)
>  		return;
> 
>  	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
> 
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> 
> -		vif = netdev_priv(skb->dev);
> -
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_prefix_mask) {
> -			resp = RING_GET_RESPONSE(&vif->rx,
> -						 vif->rx.rsp_prod_pvt++);
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_prefix_mask) {
> +			resp = RING_GET_RESPONSE(&queue->rx,
> +						 queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
> 
> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
> -			resp->id = vif->meta[npo.meta_cons].id;
> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
> +			resp->id = queue->meta[npo.meta_cons].id;
>  			resp->status = sco->meta_slots_used;
> 
>  			npo.meta_cons++;
> @@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
>  		}
> 
> 
> -		vif->dev->stats.tx_bytes += skb->len;
> -		vif->dev->stats.tx_packets++;
> +		queue->vif->dev->stats.tx_bytes += skb->len;
> +		queue->vif->dev->stats.tx_packets++;
> 
> -		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
> 
>  		if (sco->meta_slots_used == 1)
>  			flags = 0;
> @@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
>  			flags |= XEN_NETRXF_data_validated;
> 
>  		offset = 0;
> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>  					status, offset,
> -					vif->meta[npo.meta_cons].size,
> +					queue->meta[npo.meta_cons].size,
>  					flags);
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_mask) {
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_mask) {
>  			struct xen_netif_extra_info *gso =
>  				(struct xen_netif_extra_info *)
> -				RING_GET_RESPONSE(&vif->rx,
> -						  vif->rx.rsp_prod_pvt++);
> +				RING_GET_RESPONSE(&queue->rx,
> +						  queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags |= XEN_NETRXF_extra_info;
> 
> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>  			gso->u.gso.pad = 0;
>  			gso->u.gso.features = 0;
> 
> @@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
>  			gso->flags = 0;
>  		}
> 
> -		xenvif_add_frag_responses(vif, status,
> -					  vif->meta + npo.meta_cons + 1,
> +		xenvif_add_frag_responses(queue, status,
> +					  queue->meta + npo.meta_cons + 1,
>  					  sco->meta_slots_used);
> 
> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
> 
>  		if (ret)
>  			need_to_notify = 1;
> 
> -		xenvif_notify_tx_completion(vif);
> +		xenvif_notify_tx_completion(queue);
> 
>  		npo.meta_cons += sco->meta_slots_used;
>  		dev_kfree_skb(skb);
>  	}
> 
>  	if (need_to_notify)
> -		notify_remote_via_irq(vif->rx_irq);
> +		notify_remote_via_irq(queue->rx_irq);
> 
>  	/* More work to do? */
> -	if (!skb_queue_empty(&vif->rx_queue))
> -		xenvif_kick_thread(vif);
> +	if (!skb_queue_empty(&queue->rx_queue))
> +		xenvif_kick_thread(queue);
>  }
> 
> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
> -	skb_queue_tail(&vif->rx_queue, skb);
> +	skb_queue_tail(&queue->rx_queue, skb);
> 
> -	xenvif_kick_thread(vif);
> +	xenvif_kick_thread(queue);
>  }
> 
> -void xenvif_check_rx_xenvif(struct xenvif *vif)
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>  {
>  	int more_to_do;
> 
> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
> 
>  	if (more_to_do)
> -		napi_schedule(&vif->napi);
> +		napi_schedule(&queue->napi);
>  }
> 
> -static void tx_add_credit(struct xenvif *vif)
> +static void tx_add_credit(struct xenvif_queue *queue)
>  {
>  	unsigned long max_burst, max_credit;
> 
> @@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
>  	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>  	 * Otherwise the interface can seize up due to insufficient credit.
>  	 */
> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>  	max_burst = min(max_burst, 131072UL);
> -	max_burst = max(max_burst, vif->credit_bytes);
> +	max_burst = max(max_burst, queue->credit_bytes);
> 
>  	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
> -	max_credit = vif->remaining_credit + vif->credit_bytes;
> -	if (max_credit < vif->remaining_credit)
> +	max_credit = queue->remaining_credit + queue->credit_bytes;
> +	if (max_credit < queue->remaining_credit)
>  		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
> 
> -	vif->remaining_credit = min(max_credit, max_burst);
> +	queue->remaining_credit = min(max_credit, max_burst);
>  }
> 
>  static void tx_credit_callback(unsigned long data)
>  {
> -	struct xenvif *vif = (struct xenvif *)data;
> -	tx_add_credit(vif);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
> +	tx_add_credit(queue);
> +	xenvif_check_rx_xenvif(queue);
>  }
> 
> -static void xenvif_tx_err(struct xenvif *vif,
> +static void xenvif_tx_err(struct xenvif_queue *queue,
>  			  struct xen_netif_tx_request *txp, RING_IDX end)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>  		if (cons == end)
>  			break;
> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>  	} while (1);
> -	vif->tx.req_cons = cons;
> +	queue->tx.req_cons = cons;
>  }
> 
>  static void xenvif_fatal_tx_err(struct xenvif *vif)
> @@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>  	xenvif_carrier_off(vif);
>  }
> 
> -static int xenvif_count_requests(struct xenvif *vif,
> +static int xenvif_count_requests(struct xenvif_queue *queue,
>  				 struct xen_netif_tx_request *first,
>  				 struct xen_netif_tx_request *txp,
>  				 int work_to_do)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
>  	int slots = 0;
>  	int drop_err = 0;
>  	int more_data;
> @@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		struct xen_netif_tx_request dropped_tx = { 0 };
> 
>  		if (slots >= work_to_do) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Asked for %d slots but exceeds this limit\n",
>  				   work_to_do);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -ENODATA;
>  		}
> 
> @@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 * considered malicious.
>  		 */
>  		if (unlikely(slots >= fatal_skb_slots)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Malicious frontend using %d slots, threshold %u\n",
>  				   slots, fatal_skb_slots);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -E2BIG;
>  		}
> 
> @@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>  					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>  			drop_err = -E2BIG;
> @@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		if (drop_err)
>  			txp = &dropped_tx;
> 
> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
> +		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>  		       sizeof(*txp));
> 
>  		/* If the guest submitted a frame >= 64 KiB then
> @@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && txp->size > first->size) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Invalid tx request, slot size %u > remaining size %u\n",
>  					   txp->size, first->size);
>  			drop_err = -EIO;
> @@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		slots++;
> 
>  		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>  				 txp->offset, txp->size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
> @@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>  	} while (more_data);
> 
>  	if (drop_err) {
> -		xenvif_tx_err(vif, first, cons + slots);
> +		xenvif_tx_err(queue, first, cons + slots);
>  		return drop_err;
>  	}
> 
>  	return slots;
>  }
> 
> -static struct page *xenvif_alloc_page(struct xenvif *vif,
> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>  				      u16 pending_idx)
>  {
>  	struct page *page;
> @@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  	if (!page)
>  		return NULL;
> -	vif->mmap_pages[pending_idx] = page;
> +	queue->mmap_pages[pending_idx] = page;
> 
>  	return page;
>  }
> 
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
>  					       struct gnttab_copy *gop)
> @@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>  	     shinfo->nr_frags++) {
>  		struct pending_tx_info *pending_tx_info =
> -			vif->pending_tx_info;
> +			queue->pending_tx_info;
> 
>  		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  		if (!page)
> @@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  			gop->flags = GNTCOPY_source_gref;
> 
>  			gop->source.u.ref = txp->gref;
> -			gop->source.domid = vif->domid;
> +			gop->source.domid = queue->vif->domid;
>  			gop->source.offset = txp->offset;
> 
>  			gop->dest.domid = DOMID_SELF;
> @@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				gop->len = txp->size;
>  				dst_offset += gop->len;
> 
> -				index = pending_index(vif->pending_cons++);
> +				index = pending_index(queue->pending_cons++);
> 
> -				pending_idx = vif->pending_ring[index];
> +				pending_idx = queue->pending_ring[index];
> 
>  				memcpy(&pending_tx_info[pending_idx].req, txp,
>  				       sizeof(*txp));
> @@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				 * fields for head tx req will be set
>  				 * to correct values after the loop.
>  				 */
> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>  				pending_tx_info[pending_idx].head =
>  					INVALID_PENDING_RING_IDX;
> 
> @@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  		first->req.offset = 0;
>  		first->req.size = dst_offset;
>  		first->head = start_idx;
> -		vif->mmap_pages[head_idx] = page;
> +		queue->mmap_pages[head_idx] = page;
>  		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>  	}
> 
> @@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  err:
>  	/* Unwind, freeing all pages and sending error responses. */
>  	while (shinfo->nr_frags-- > start) {
> -		xenvif_idx_release(vif,
> +		xenvif_idx_release(queue,
>  				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>  				XEN_NETIF_RSP_ERROR);
>  	}
>  	/* The head too, if necessary. */
>  	if (start)
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	return NULL;
>  }
> 
> -static int xenvif_tx_check_gop(struct xenvif *vif,
> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>  			       struct sk_buff *skb,
>  			       struct gnttab_copy **gopp)
>  {
> @@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	/* Check status of header. */
>  	err = gop->status;
>  	if (unlikely(err))
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		pending_ring_idx_t head;
> 
>  		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
> -		tx_info = &vif->pending_tx_info[pending_idx];
> +		tx_info = &queue->pending_tx_info[pending_idx];
>  		head = tx_info->head;
> 
>  		/* Check error status: if okay then remember grant handle. */
> @@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  			newerr = (++gop)->status;
>  			if (newerr)
>  				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
> +			peek = queue->pending_ring[pending_index(++head)];
> +		} while (!pending_tx_is_head(queue, peek));
> 
>  		if (likely(!newerr)) {
>  			/* Had a previous error? Invalidate this fragment. */
>  			if (unlikely(err))
> -				xenvif_idx_release(vif, pending_idx,
> +				xenvif_idx_release(queue, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);
>  			continue;
>  		}
> 
>  		/* Error on this fragment: respond to client with an error. */
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  		/* Not the first error? Preceding frags already invalidated. */
>  		if (err)
> @@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> 
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	return err;
>  }
> 
> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	int nr_frags = shinfo->nr_frags;
> @@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> 
>  		pending_idx = frag_get_pending_idx(frag);
> 
> -		txp = &vif->pending_tx_info[pending_idx].req;
> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
> +		txp = &queue->pending_tx_info[pending_idx].req;
> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>  		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>  		skb->len += txp->size;
>  		skb->data_len += txp->size;
>  		skb->truesize += txp->size;
> 
>  		/* Take an extra reference to offset xenvif_idx_release */
> -		get_page(vif->mmap_pages[pending_idx]);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		get_page(queue->mmap_pages[pending_idx]);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  	}
>  }
> 
> -static int xenvif_get_extras(struct xenvif *vif,
> +static int xenvif_get_extras(struct xenvif_queue *queue,
>  				struct xen_netif_extra_info *extras,
>  				int work_to_do)
>  {
>  	struct xen_netif_extra_info extra;
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
>  		if (unlikely(work_to_do-- <= 0)) {
> -			netdev_err(vif->dev, "Missing extra info\n");
> -			xenvif_fatal_tx_err(vif);
> +			netdev_err(queue->vif->dev, "Missing extra info\n");
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EBADR;
>  		}
> 
> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>  		       sizeof(extra));
>  		if (unlikely(!extra.type ||
>  			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> -			vif->tx.req_cons = ++cons;
> -			netdev_err(vif->dev,
> +			queue->tx.req_cons = ++cons;
> +			netdev_err(queue->vif->dev,
>  				   "Invalid extra type: %d\n", extra.type);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
>  		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
> -		vif->tx.req_cons = ++cons;
> +		queue->tx.req_cons = ++cons;
>  	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
> 
>  	return work_to_do;
> @@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	return err;
>  }
> 
> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>  {
>  	u64 now = get_jiffies_64();
> -	u64 next_credit = vif->credit_window_start +
> -		msecs_to_jiffies(vif->credit_usec / 1000);
> +	u64 next_credit = queue->credit_window_start +
> +		msecs_to_jiffies(queue->credit_usec / 1000);
> 
>  	/* Timer could already be pending in rare cases. */
> -	if (timer_pending(&vif->credit_timeout))
> +	if (timer_pending(&queue->credit_timeout))
>  		return true;
> 
>  	/* Passed the point where we can replenish credit? */
>  	if (time_after_eq64(now, next_credit)) {
> -		vif->credit_window_start = now;
> -		tx_add_credit(vif);
> +		queue->credit_window_start = now;
> +		tx_add_credit(queue);
>  	}
> 
>  	/* Still too big to send right now? Set a callback. */
> -	if (size > vif->remaining_credit) {
> -		vif->credit_timeout.data     =
> -			(unsigned long)vif;
> -		vif->credit_timeout.function =
> +	if (size > queue->remaining_credit) {
> +		queue->credit_timeout.data     =
> +			(unsigned long)queue;
> +		queue->credit_timeout.function =
>  			tx_credit_callback;
> -		mod_timer(&vif->credit_timeout,
> +		mod_timer(&queue->credit_timeout,
>  			  next_credit);
> -		vif->credit_window_start = next_credit;
> +		queue->credit_window_start = next_credit;
> 
>  		return true;
>  	}
> @@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>  	return false;
>  }
> 
> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>  	struct sk_buff *skb;
>  	int ret;
> 
> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  		< MAX_PENDING_REQS) &&
> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>  		struct xen_netif_tx_request txreq;
>  		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>  		struct page *page;
> @@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		unsigned int data_len;
>  		pending_ring_idx_t index;
> 
> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>  		    XEN_NETIF_TX_RING_SIZE) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Impossible number of requests. "
>  				   "req_prod %d, req_cons %d, size %ld\n",
> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>  				   XEN_NETIF_TX_RING_SIZE);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			continue;
>  		}
> 
> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>  		if (!work_to_do)
>  			break;
> 
> -		idx = vif->tx.req_cons;
> +		idx = queue->tx.req_cons;
>  		rmb(); /* Ensure that we see the request before we copy it. */
> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
> 
>  		/* Credit-based scheduling. */
> -		if (txreq.size > vif->remaining_credit &&
> -		    tx_credit_exceeded(vif, txreq.size))
> +		if (txreq.size > queue->remaining_credit &&
> +		    tx_credit_exceeded(queue, txreq.size))
>  			break;
> 
> -		vif->remaining_credit -= txreq.size;
> +		queue->remaining_credit -= txreq.size;
> 
>  		work_to_do--;
> -		vif->tx.req_cons = ++idx;
> +		queue->tx.req_cons = ++idx;
> 
>  		memset(extras, 0, sizeof(extras));
>  		if (txreq.flags & XEN_NETTXF_extra_info) {
> -			work_to_do = xenvif_get_extras(vif, extras,
> +			work_to_do = xenvif_get_extras(queue, extras,
>  						       work_to_do);
> -			idx = vif->tx.req_cons;
> +			idx = queue->tx.req_cons;
>  			if (unlikely(work_to_do < 0))
>  				break;
>  		}
> 
> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>  		if (unlikely(ret < 0))
>  			break;
> 
>  		idx += ret;
> 
>  		if (unlikely(txreq.size < ETH_HLEN)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Bad packet size: %d\n", txreq.size);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		/* No crossing a page as the payload mustn't fragment. */
>  		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "txreq.offset: %x, size: %u, end: %lu\n",
>  				   txreq.offset, txreq.size,
>  				   (txreq.offset&~PAGE_MASK) + txreq.size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			break;
>  		}
> 
> -		index = pending_index(vif->pending_cons);
> -		pending_idx = vif->pending_ring[index];
> +		index = pending_index(queue->pending_cons);
> +		pending_idx = queue->pending_ring[index];
> 
>  		data_len = (txreq.size > PKT_PROT_LEN &&
>  			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
> @@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>  				GFP_ATOMIC | __GFP_NOWARN);
>  		if (unlikely(skb == NULL)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't allocate a skb in start_xmit.\n");
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
> @@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  			struct xen_netif_extra_info *gso;
>  			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
> 
> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>  				/* Failure in xenvif_set_skb_gso is fatal. */
>  				kfree_skb(skb);
>  				break;
> @@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		}
> 
>  		/* XXX could copy straight to head */
> -		page = xenvif_alloc_page(vif, pending_idx);
> +		page = xenvif_alloc_page(queue, pending_idx);
>  		if (!page) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		gop->source.u.ref = txreq.gref;
> -		gop->source.domid = vif->domid;
> +		gop->source.domid = queue->vif->domid;
>  		gop->source.offset = txreq.offset;
> 
>  		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
> @@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> 
>  		gop++;
> 
> -		memcpy(&vif->pending_tx_info[pending_idx].req,
> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>  		       &txreq, sizeof(txreq));
> -		vif->pending_tx_info[pending_idx].head = index;
> +		queue->pending_tx_info[pending_idx].head = index;
>  		*((u16 *)skb->data) = pending_idx;
> 
>  		__skb_put(skb, data_len);
> @@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  					     INVALID_PENDING_IDX);
>  		}
> 
> -		vif->pending_cons++;
> +		queue->pending_cons++;
> 
> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>  		if (request_gop == NULL) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
>  		gop = request_gop;
> 
> -		__skb_queue_tail(&vif->tx_queue, skb);
> +		__skb_queue_tail(&queue->tx_queue, skb);
> 
> -		vif->tx.req_cons = idx;
> +		queue->tx.req_cons = idx;
> 
> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>  			break;
>  	}
> 
> -	return gop - vif->tx_copy_ops;
> +	return gop - queue->tx_copy_ops;
>  }
> 
> 
> -static int xenvif_tx_submit(struct xenvif *vif)
> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops;
> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>  	struct sk_buff *skb;
>  	int work_done = 0;
> 
> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>  		struct xen_netif_tx_request *txp;
>  		u16 pending_idx;
>  		unsigned data_len;
> 
>  		pending_idx = *((u16 *)skb->data);
> -		txp = &vif->pending_tx_info[pending_idx].req;
> +		txp = &queue->pending_tx_info[pending_idx].req;
> 
>  		/* Check the remap error code. */
> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
> -			netdev_dbg(vif->dev, "netback grant failed.\n");
> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>  			skb_shinfo(skb)->nr_frags = 0;
>  			kfree_skb(skb);
>  			continue;
> @@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		data_len = skb->len;
>  		memcpy(skb->data,
> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>  		       data_len);
>  		if (data_len < txp->size) {
>  			/* Append the packet payload as a fragment. */
> @@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  			txp->size -= data_len;
>  		} else {
>  			/* Schedule a response immediately. */
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
> 
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(queue, skb);
> 
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
>  		}
> 
> -		skb->dev      = vif->dev;
> +		skb->dev      = queue->vif->dev;
>  		skb->protocol = eth_type_trans(skb, skb->dev);
>  		skb_reset_network_header(skb);
> 
> -		if (checksum_setup(vif, skb)) {
> -			netdev_dbg(vif->dev,
> +		if (checksum_setup(queue->vif, skb)) {
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
>  			kfree_skb(skb);
>  			continue;
> @@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		skb_probe_transport_header(skb, 0);
> 
> -		vif->dev->stats.rx_bytes += skb->len;
> -		vif->dev->stats.rx_packets++;
> +		queue->vif->dev->stats.rx_bytes += skb->len;
> +		queue->vif->dev->stats.rx_packets++;
> 
>  		work_done++;
> 
> @@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  }
> 
>  /* Called after netfront has transmitted */
> -int xenvif_tx_action(struct xenvif *vif, int budget)
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>  {
>  	unsigned nr_gops;
>  	int work_done;
> 
> -	if (unlikely(!tx_work_todo(vif)))
> +	if (unlikely(!tx_work_todo(queue)))
>  		return 0;
> 
> -	nr_gops = xenvif_tx_build_gops(vif, budget);
> +	nr_gops = xenvif_tx_build_gops(queue, budget);
> 
>  	if (nr_gops == 0)
>  		return 0;
> 
> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
> 
> -	work_done = xenvif_tx_submit(vif);
> +	work_done = xenvif_tx_submit(queue);
> 
>  	return work_done;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status)
>  {
>  	struct pending_tx_info *pending_tx_info;
>  	pending_ring_idx_t head;
>  	u16 peek; /* peek into next tx request */
> 
> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
> 
>  	/* Already complete? */
> -	if (vif->mmap_pages[pending_idx] == NULL)
> +	if (queue->mmap_pages[pending_idx] == NULL)
>  		return;
> 
> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
> 
>  	head = pending_tx_info->head;
> 
> -	BUG_ON(!pending_tx_is_head(vif, head));
> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
> +	BUG_ON(!pending_tx_is_head(queue, head));
> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
> 
>  	do {
>  		pending_ring_idx_t index;
>  		pending_ring_idx_t idx = pending_index(head);
> -		u16 info_idx = vif->pending_ring[idx];
> +		u16 info_idx = queue->pending_ring[idx];
> 
> -		pending_tx_info = &vif->pending_tx_info[info_idx];
> -		make_tx_response(vif, &pending_tx_info->req, status);
> +		pending_tx_info = &queue->pending_tx_info[info_idx];
> +		make_tx_response(queue, &pending_tx_info->req, status);
> 
>  		/* Setting any number other than
>  		 * INVALID_PENDING_RING_IDX indicates this slot is
> @@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  		 */
>  		pending_tx_info->head = 0;
> 
> -		index = pending_index(vif->pending_prod++);
> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
> +		index = pending_index(queue->pending_prod++);
> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
> 
> -		peek = vif->pending_ring[pending_index(++head)];
> +		peek = queue->pending_ring[pending_index(++head)];
> 
> -	} while (!pending_tx_is_head(vif, peek));
> +	} while (!pending_tx_is_head(queue, peek));
> 
> -	put_page(vif->mmap_pages[pending_idx]);
> -	vif->mmap_pages[pending_idx] = NULL;
> +	put_page(queue->mmap_pages[pending_idx]);
> +	queue->mmap_pages[pending_idx] = NULL;
>  }
> 
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st)
>  {
> -	RING_IDX i = vif->tx.rsp_prod_pvt;
> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>  	struct xen_netif_tx_response *resp;
>  	int notify;
> 
> -	resp = RING_GET_RESPONSE(&vif->tx, i);
> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>  	resp->id     = txp->id;
>  	resp->status = st;
> 
>  	if (txp->flags & XEN_NETTXF_extra_info)
> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> 
> -	vif->tx.rsp_prod_pvt = ++i;
> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
> +	queue->tx.rsp_prod_pvt = ++i;
> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>  	if (notify)
> -		notify_remote_via_irq(vif->tx_irq);
> +		notify_remote_via_irq(queue->tx_irq);
>  }
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags)
>  {
> -	RING_IDX i = vif->rx.rsp_prod_pvt;
> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>  	struct xen_netif_rx_response *resp;
> 
> -	resp = RING_GET_RESPONSE(&vif->rx, i);
> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>  	resp->offset     = offset;
>  	resp->flags      = flags;
>  	resp->id         = id;
> @@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>  	if (st < 0)
>  		resp->status = (s16)st;
> 
> -	vif->rx.rsp_prod_pvt = ++i;
> +	queue->rx.rsp_prod_pvt = ++i;
> 
>  	return resp;
>  }
> 
> -static inline int rx_work_todo(struct xenvif *vif)
> +static inline int rx_work_todo(struct xenvif_queue *queue)
>  {
> -	return !skb_queue_empty(&vif->rx_queue);
> +	return !skb_queue_empty(&queue->rx_queue);
>  }
> 
> -static inline int tx_work_todo(struct xenvif *vif)
> +static inline int tx_work_todo(struct xenvif_queue *queue)
>  {
> 
> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  	     < MAX_PENDING_REQS))
>  		return 1;
> 
>  	return 0;
>  }
> 
> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>  {
> -	if (vif->tx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->tx.sring);
> -	if (vif->rx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->rx.sring);
> +	if (queue->tx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->tx.sring);
> +	if (queue->rx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->rx.sring);
>  }
> 
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref)
>  {
> @@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
> 
>  	int err = -ENOMEM;
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     tx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	txs = (struct xen_netif_tx_sring *)addr;
> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     rx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	rxs = (struct xen_netif_rx_sring *)addr;
> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
> 
> -	vif->rx_req_cons_peek = 0;
> +	queue->rx_req_cons_peek = 0;
> 
>  	return 0;
> 
>  err:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  	return err;
>  }
> 
>  int xenvif_kthread(void *data)
>  {
> -	struct xenvif *vif = data;
> +	struct xenvif_queue *queue = data;
> 
>  	while (!kthread_should_stop()) {
> -		wait_event_interruptible(vif->wq,
> -					 rx_work_todo(vif) ||
> +		wait_event_interruptible(queue->wq,
> +					 rx_work_todo(queue) ||
>  					 kthread_should_stop());
>  		if (kthread_should_stop())
>  			break;
> 
> -		if (rx_work_todo(vif))
> -			xenvif_rx_action(vif);
> +		if (rx_work_todo(queue))
> +			xenvif_rx_action(queue);
> 
>  		cond_resched();
>  	}
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index f035899..c3332e2 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -20,6 +20,7 @@
>  */
> 
>  #include "common.h"
> +#include <linux/vmalloc.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -35,8 +36,9 @@ struct backend_info {
>  	u8 have_hotplug_status_watch:1;
>  };
> 
> -static int connect_rings(struct backend_info *);
> -static void connect(struct backend_info *);
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
> +static void connect(struct backend_info *be);
> +static int read_xenbus_vif_flags(struct backend_info *be);
>  static void backend_create_xenvif(struct backend_info *be);
>  static void unregister_hotplug_status_watch(struct backend_info *be);
>  static void set_backend_state(struct backend_info *be,
> @@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
>  {
>  	int err;
>  	struct xenbus_device *dev = be->dev;
> -
> -	err = connect_rings(be);
> -	if (err)
> -		return;
> +	unsigned long credit_bytes, credit_usec;
> +	unsigned int queue_index;
> +	struct xenvif_queue *queue;
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
>  		return;
>  	}
> 
> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
> -			  &be->vif->credit_usec);
> -	be->vif->remaining_credit = be->vif->credit_bytes;
> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
> +	read_xenbus_vif_flags(be);
> +
> +	be->vif->num_queues = 1;
> +	be->vif->queues = vzalloc(be->vif->num_queues *
> +			sizeof(struct xenvif_queue));
> +
> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
> +	{
> +		queue = &be->vif->queues[queue_index];
> +		queue->vif = be->vif;
> +		queue->number = queue_index;
> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
> +				be->vif->dev->name, queue->number);
> +
> +		xenvif_init_queue(queue);
> +
> +		queue->remaining_credit = credit_bytes;
> +
> +		err = connect_rings(be, queue);
> +		if (err)
> +			goto err;
> +	}
> +
> +	xenvif_carrier_on(be->vif);
> 
>  	unregister_hotplug_status_watch(be);
>  	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
> @@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
>  	if (!err)
>  		be->have_hotplug_status_watch = 1;
> 
> -	netif_wake_queue(be->vif->dev);
> +	netif_tx_wake_all_queues(be->vif->dev);
> +
> +	return;
> +
> +err:
> +	vfree(be->vif->queues);
> +	be->vif->queues = NULL;
> +	be->vif->num_queues = 0;
> +	return;
>  }
> 
> 
> -static int connect_rings(struct backend_info *be)
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  {
> -	struct xenvif *vif = be->vif;
>  	struct xenbus_device *dev = be->dev;
>  	unsigned long tx_ring_ref, rx_ring_ref;
> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
> +	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> -	int val;
> 
>  	err = xenbus_gather(XBT_NIL, dev->otherend,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
> @@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
>  		rx_evtchn = tx_evtchn;
>  	}
> 
> +	/* Map the shared frame, irq etc. */
> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
> +			     tx_evtchn, rx_evtchn);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err,
> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> +				 tx_ring_ref, rx_ring_ref,
> +				 tx_evtchn, rx_evtchn);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static int read_xenbus_vif_flags(struct backend_info *be)
> +{
> +	struct xenvif *vif = be->vif;
> +	struct xenbus_device *dev = be->dev;
> +	unsigned int rx_copy;
> +	int err, val;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>  			   &rx_copy);
>  	if (err == -ENOENT) {
> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
> 
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
> 
> -
>  /* ** Driver Registration ** */
> 
> 
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:25:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3k8H-0001Gv-TR; Thu, 16 Jan 2014 10:25:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3k8D-0001Gb-Uj
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:25:03 +0000
Received: from [193.109.254.147:58717] by server-11.bemta-14.messagelabs.com
	id C3/79-20576-D73B7D25; Thu, 16 Jan 2014 10:25:01 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389867899!11167738!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12008 invoked from network); 16 Jan 2014 10:25:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:25:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93410214"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:24:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:24:58 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3k8A-00026m-0w; Thu, 16 Jan 2014 10:24:58 +0000
Message-ID: <52D7B379.201@citrix.com>
Date: Thu, 16 Jan 2014 10:24:57 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
	<20140116002706.GJ5331@zion.uk.xensource.com>
In-Reply-To: <20140116002706.GJ5331@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:27, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
> [...]
>> +/* Module parameters */
>> +unsigned int xennet_max_queues = 16;
>> +module_param(xennet_max_queues, uint, 0644);
>> +
>
> This looks quite arbitrary as well.

Arbitrary, but less arbitrary than the netback parameter. I wanted to
pick a value that was large enough to accommodate most sensible
backend-provided limits, but not so large that it wasted resources. It
is necessary only because I have to call alloc_etherdev_mq() long before
I can read the maximum number of queues from XenStore. This value gets
used there, so I wanted it to be at least as big as anything sensible
that a backend might provide.

>>   static const struct ethtool_ops xennet_ethtool_ops;
>>
> [...]
>> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
>> +			   struct xenbus_transaction *xbt, int write_hierarchical)
>> +{
>> +	/* Write the queue-specific keys into XenStore in the traditional
>> +	 * way for a single queue, or in a queue subkeys for multiple
>> +	 * queues.
>> +	 */
>> +	struct xenbus_device *dev = queue->info->xbdev;
>> +	int err;
>> +	const char *message;
>> +	char *path;
>> +	size_t pathsize;
>> +
>> +	/* Choose the correct place to write the keys */
>> +	if (write_hierarchical) {
>> +		pathsize = strlen(dev->nodename) + 10;
>> +		path = kzalloc(pathsize, GFP_KERNEL);
>> +		if (!path) {
>> +			err = -ENOMEM;
>> +			message = "writing ring references";
>
> This error message doesn't sound right.
>

I'll reword it.

>> +			goto error;
>> +		}
>> +		snprintf(path, pathsize, "%s/queue-%u",
>> +				dev->nodename, queue->number);
>> +	}
>> +	else
>> +		path = (char *)dev->nodename;
>
> Coding style. Should be surrounded by {};
>

OK.

>> +
> [...]
>> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
>>   	int err;
>>   	unsigned int feature_split_evtchn;
>>   	unsigned int i = 0;
>> +	unsigned int max_queues = 0;
>>   	struct netfront_queue *queue = NULL;
>>
>>   	info->netdev->irq = 0;
>>
>> +	/* Check if backend supports multiple queues */
>> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> +			"multi-queue-max-queues", "%u", &max_queues);
>> +	if (err < 0)
>> +		max_queues = 1;
>> +
>
> Need to check if the backend provides too big a number for the frontend.

Yes, I'll add that check.

Andrew.

>
> Wei.
>


From xen-devel-bounces@lists.xen.org Thu Jan 16 10:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kAV-0001Rl-0x; Thu, 16 Jan 2014 10:27:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kAS-0001Rb-U2
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:27:21 +0000
Received: from [85.158.143.35:55883] by server-1.bemta-4.messagelabs.com id
	2A/1D-02132-804B7D25; Thu, 16 Jan 2014 10:27:20 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389868037!12107003!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11735 invoked from network); 16 Jan 2014 10:27:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:27:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93410556"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:27:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:27:16 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kAO-00029f-MX; Thu, 16 Jan 2014 10:27:16 +0000
Message-ID: <52D7B404.9080007@citrix.com>
Date: Thu, 16 Jan 2014 10:27:16 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208D6E@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208D6E@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:04, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant
>> Subject: [PATCH RFC 0/4]: xen-net{back,front}: Multiple transmit and
>> receive queues
>>
>> This patch series implements multiple transmit and receive queues
>> (i.e.  multiple shared rings) for the xen virtual network interfaces.
>>
>> The series is split up as follows:
>> - Patches 1 and 3 factor out the queue-specific data for netback and
>> netfront respectively, and modify the rest of the code to use these
>> as appropriate.
>> - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>> multiple shared rings and event channels, and code to connect these
>> as appropriate.
>>
>> All other transmit and receive processing remains unchanged, i.e.
>> there is a kthread per queue and a NAPI context per queue.
>>
>> The performance of these patches has been analysed in detail, with
>> results available at:
>>
>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
>>
>
> Nice numbers!
>
>> To summarise:
>> * Using multiple queues allows a VM to transmit at line rate on a
>> 10 Gbit/s NIC, compared with a maximum aggregate throughput of
>> 6 Gbit/s with a single queue.
>> * For intra-host VM--VM traffic, eight queues provide 171% of the
>> throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>> * There is a corresponding increase in total CPU usage, i.e. this is
>> a scaling out over available resources, not an efficiency improvement.
>> * Results depend on the availability of sufficient CPUs, as well as
>> the distribution of interrupts and the distribution of TCP streams
>> across the queues.
>>
>> One open issue is how to deal with the tx_credit data for rate
>> limiting.  This used to exist on a per-VIF basis, and these patches
>> move it to per-queue to avoid contention on concurrent access to the
>> tx_credit data from multiple threads. This has the side effect of
>> breaking the tx_credit accounting across the VIF as a whole. I cannot
>> see a situation in which people would want to use both rate limiting
>> and a high-performance multi-queue mode, but if this is problematic
>> then it can be brought back to the VIF level, with appropriate
>> protection.  Obviously, it continues to work identically in the case
>> where there is only one queue.
>>
>> Queue selection is currently achieved via an L4 hash on the packet
>> (i.e.  TCP src/dst port, IP src/dst address) and is not negotiated
>> between the frontend and backend, since only one option exists.
>> Future patches to support other frontends (particularly Windows) will
>> need to add some capability to negotiate not only the hash algorithm
>> selection, but also allow the frontend to specify some parameters to
>> this.
>>
>
> Yes, Windows RSS stipulates a Toeplitz hash and specifies a hash key
> and mapping table. There's further awkwardness in the need to pass the
> actual hash value to the frontend too - but we could use an 'extra'
> seg for that, analogous to passing the GSO mss value through.

Yes, I was hoping we might be able to play tricks like that when it came
to implementing Toeplitz support.

Andrew

>
>     Paul
>
>> Queue-specific XenStore entries for ring references and event
>> channels are stored hierarchically, i.e. under .../queue-N/... where
>> N varies from 0 to one less than the requested number of queues
>> (inclusive). If only one queue is requested, it falls back to the
>> flat structure where the ring references and event channels are
>> written at the same level as other vif information.
>>
>> -- Andrew J. Bennieston
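Concretely, the hierarchical scheme described in the cover letter gives per-queue XenStore entries along the following lines (domain/device IDs and values are illustrative only, and key names beyond those quoted in the series are assumptions):

```
# two queues negotiated: ring references and event channels per queue
/local/domain/1/device/vif/0/queue-0/tx-ring-ref = "768"
/local/domain/1/device/vif/0/queue-0/rx-ring-ref = "769"
/local/domain/1/device/vif/0/queue-0/event-channel-tx = "21"
/local/domain/1/device/vif/0/queue-0/event-channel-rx = "22"
/local/domain/1/device/vif/0/queue-1/tx-ring-ref = "770"
/local/domain/1/device/vif/0/queue-1/rx-ring-ref = "771"
/local/domain/1/device/vif/0/queue-1/event-channel-tx = "23"
/local/domain/1/device/vif/0/queue-1/event-channel-rx = "24"

# one queue negotiated: flat layout, unchanged from existing frontends
/local/domain/1/device/vif/0/tx-ring-ref = "768"
/local/domain/1/device/vif/0/rx-ring-ref = "769"
```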



From xen-devel-bounces@lists.xen.org Thu Jan 16 10:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kAV-0001Rl-0x; Thu, 16 Jan 2014 10:27:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kAS-0001Rb-U2
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:27:21 +0000
Received: from [85.158.143.35:55883] by server-1.bemta-4.messagelabs.com id
	2A/1D-02132-804B7D25; Thu, 16 Jan 2014 10:27:20 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389868037!12107003!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11735 invoked from network); 16 Jan 2014 10:27:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:27:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93410556"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:27:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:27:16 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kAO-00029f-MX; Thu, 16 Jan 2014 10:27:16 +0000
Message-ID: <52D7B404.9080007@citrix.com>
Date: Thu, 16 Jan 2014 10:27:16 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208D6E@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208D6E@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/4]: xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:04, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant
>> Subject: [PATCH RFC 0/4]: xen-net{back,front}: Multiple transmit and
>> receive queues
>>
>> This patch series implements multiple transmit and receive queues
>> (i.e.  multiple shared rings) for the xen virtual network interfaces.
>>
>> The series is split up as follows:
>> - Patches 1 and 3 factor out the queue-specific data for netback and
>>   netfront respectively, and modify the rest of the code to use these
>>   as appropriate.
>> - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>>   multiple shared rings and event channels, and code to connect these
>>   as appropriate.
>>
>> All other transmit and receive processing remains unchanged, i.e.
>> there is a kthread per queue and a NAPI context per queue.
>>
>> The performance of these patches has been analysed in detail, with
>> results available at:
>>
>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-
>> queue_performance_testing
>>
>
> Nice numbers!
>
>> To summarise:
>> * Using multiple queues allows a VM to transmit at line rate on a
>>   10 Gbit/s NIC, compared with a maximum aggregate throughput of
>>   6 Gbit/s with a single queue.
>> * For intra-host VM-to-VM traffic, eight queues provide 171% of the
>>   throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>> * There is a corresponding increase in total CPU usage, i.e. this is
>>   scaling out over available resources, not an efficiency improvement.
>> * Results depend on the availability of sufficient CPUs, as well as
>>   the distribution of interrupts and the distribution of TCP streams
>>   across the queues.
>>
>> One open issue is how to deal with the tx_credit data for rate
>> limiting.  This used to exist on a per-VIF basis, and these patches
>> move it to per-queue to avoid contention on concurrent access to the
>> tx_credit data from multiple threads. This has the side effect of
>> breaking the tx_credit accounting across the VIF as a whole. I cannot
>> see a situation in which people would want to use both rate limiting
>> and a high-performance multi-queue mode, but if this is problematic
>> then it can be brought back to the VIF level, with appropriate
>> protection.  Obviously, it continues to work identically in the case
>> where there is only one queue.
>>
>> Queue selection is currently achieved via an L4 hash on the packet
>> (i.e.  TCP src/dst port, IP src/dst address) and is not negotiated
>> between the frontend and backend, since only one option exists.
>> Future patches to support other frontends (particularly Windows) will
>> need to add some capability to negotiate not only the hash algorithm
>> selection, but also allow the frontend to specify some parameters to
>> this.
>>
>
> Yes, Windows RSS stipulates a Toeplitz hash and specifies a hash key
> and mapping table. There's further awkwardness in the need to pass the
> actual hash value to the frontend too - but we could use an 'extra'
> seg for that, analogous to passing the GSO mss value through.

Yes, I was hoping we might be able to play tricks like that when it came
to implementing Toeplitz support.

Andrew

>
>     Paul
>
>> Queue-specific XenStore entries for ring references and event
>> channels are stored hierarchically, i.e. under .../queue-N/... where
>> N varies from 0 to one less than the requested number of queues
>> (inclusive). If only one queue is requested, it falls back to the
>> flat structure where the ring references and event channels are
>> written at the same level as other vif information.
>>
>> -- Andrew J. Bennieston
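[Editor's note: the L4 hash queue selection discussed in this thread is
not spelled out in the cover letter. Purely as an illustration, a queue
could be chosen from the TCP/IP 4-tuple along these lines; the
`flow_tuple` struct, the FNV-1a constants and `select_queue()` are
hypothetical stand-ins, not code from the patches:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 4-tuple; the real code would hash fields from the skb. */
struct flow_tuple {
	uint32_t saddr, daddr;	/* IP source/destination address */
	uint16_t sport, dport;	/* TCP source/destination port */
};

/* FNV-1a mix over the tuple bytes; any reasonable hash works here. */
uint32_t tuple_hash(const struct flow_tuple *t)
{
	const uint8_t *p = (const uint8_t *)t;
	uint32_t h = 2166136261u;
	unsigned int i;

	for (i = 0; i < sizeof(*t); i++) {
		h ^= p[i];
		h *= 16777619u;
	}
	return h;
}

/*
 * Map a flow to one of num_queues queues. All packets of a given TCP
 * stream share the same tuple, so the whole stream lands on one queue,
 * which is what keeps per-queue processing contention-free.
 */
unsigned int select_queue(const struct flow_tuple *t, unsigned int num_queues)
{
	return tuple_hash(t) % num_queues;
}
```

[This also makes the cover letter's caveat concrete: throughput scaling
depends on the TCP streams actually hashing onto different queues.]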


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:28:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kBS-0001XD-CW; Thu, 16 Jan 2014 10:28:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3kBP-0001Wv-UN
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:28:20 +0000
Received: from [193.109.254.147:12315] by server-14.bemta-14.messagelabs.com
	id 88/A6-12628-344B7D25; Thu, 16 Jan 2014 10:28:19 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389868097!11237212!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10075 invoked from network); 16 Jan 2014 10:28:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:28:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93410719"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:28:16 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 16 Jan 2014 05:28:16 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 11:28:15 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 2/4] xen-netback: Add support for multiple queues
Thread-Index: AQHPEg4yhOcJLcRL7k2lFB+dDpatWJqHH3DQ
Date: Thu, 16 Jan 2014 10:28:14 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DFE@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 15 January 2014 16:23
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
> Subject: [PATCH RFC 2/4] xen-netback: Add support for multiple queues
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Builds on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Writes the maximum supported number of queues into XenStore, and reads
> the values written by the frontend to determine how many queues to use.
> 
> Ring references and event channels are read from XenStore on a per-queue
> basis and rings are connected accordingly.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |    2 +
>  drivers/net/xen-netback/interface.c |    8 +++-
>  drivers/net/xen-netback/netback.c   |    3 ++
>  drivers/net/xen-netback/xenbus.c    |   70
> ++++++++++++++++++++++++++++++-----
>  4 files changed, 72 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index 54d2eeb..97efd09 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -254,4 +254,6 @@ void xenvif_carrier_on(struct xenvif *vif);
> 
>  extern bool separate_tx_rx_irq;
> 
> +extern unsigned int xenvif_max_queues;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> index 0113324..0234ff0 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	char name[IFNAMSIZ] = {};
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> +	/*
> +	 * Allocate a netdev with the max. supported number of queues.
> +	 * When the guest selects the desired number, it will be updated
> +	 * via netif_set_real_num_tx_queues().
> +	 */
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> +			xenvif_max_queues);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-
> netback/netback.c
> index 586e741..5d717d7 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -55,6 +55,9 @@
>  bool separate_tx_rx_irq = 1;
>  module_param(separate_tx_rx_irq, bool, 0644);
> 
> +unsigned int xenvif_max_queues = 4;
> +module_param(xenvif_max_queues, uint, 0644);
> +
>  /*
>   * This is the maximum slots a skb can have. If a guest sends a skb
>   * which exceeds this limit it is considered malicious.
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-
> netback/xenbus.c
> index c3332e2..ce7ca9a 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -21,6 +21,7 @@
> 
>  #include "common.h"
>  #include <linux/vmalloc.h>
> +#include <linux/rtnetlink.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
>  	if (err)
>  		pr_debug("Error writing feature-split-event-channels\n");
> 
> +	/*
> +	 * Multi-queue support: This is an optional feature.
> +	 */

Comment style. Did you run checkpatch?

> +	err = xenbus_printf(XBT_NIL, dev->nodename,
> +			"multi-queue-max-queues", "%u",
> xenvif_max_queues);
> +	if (err)
> +		pr_debug("Error writing multi-queue-max-queues\n");
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> @@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
>  	unsigned long credit_bytes, credit_usec;
>  	unsigned int queue_index;
>  	struct xenvif_queue *queue;
> +	unsigned int requested_num_queues;
> +
> +	/* Check whether the frontend requested multiple queues
> +	 * and read the number requested.
> +	 */
> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
> +			"multi-queue-num-queues",
> +			"%u", &requested_num_queues);
> +	if (err < 0)
> +		requested_num_queues = 1; /* Fall back to single queue */
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
>  	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>  	read_xenbus_vif_flags(be);
> 
> -	be->vif->num_queues = 1;
> +	/* Use the number of queues requested by the frontend */
> +	be->vif->num_queues = requested_num_queues;
>  	be->vif->queues = vzalloc(be->vif->num_queues *
>  			sizeof(struct xenvif_queue));
> +	rtnl_lock();
> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif-
> >num_queues);
> +	rtnl_unlock();
> 
>  	for (queue_index = 0; queue_index < be->vif->num_queues;
> ++queue_index)
>  	{
> @@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be,
> struct xenvif_queue *queue)
>  	unsigned long tx_ring_ref, rx_ring_ref;
>  	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> +	char *xspath = NULL;
> +	size_t xspathsize;
> +
> +	/* If the frontend requested 1 queue, or we have fallen back
> +	 * to single queue due to lack of frontend support for multi-
> +	 * queue, expect the remaining XenStore keys in the toplevel
> +	 * directory. Otherwise, expect them in a subdirectory called
> +	 * queue-N.
> +	 */
> +	if (queue->vif->num_queues == 1)
> +		xspath = (char *)dev->otherend;
> +	else {
> +		xspathsize = strlen(dev->otherend) + 10;

Magic number.

  Paul

> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
> +		if (!xspath) {
> +			xenbus_dev_fatal(dev, -ENOMEM,
> +					"reading ring references");
> +			return -ENOMEM;
> +		}
> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev-
> >otherend,
> +				queue->number);
> +	}
> 
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
>  			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>  	if (err) {
>  		xenbus_dev_fatal(dev, err,
>  				 "reading %s/ring-ref",
> -				 dev->otherend);
> -		return err;
> +				 xspath);
> +		goto err;
>  	}
> 
>  	/* Try split event channels first, then single event channel. */
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "event-channel-tx", "%u", &tx_evtchn,
>  			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>  	if (err < 0) {
> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
> +		err = xenbus_scanf(XBT_NIL, xspath,
>  				   "event-channel", "%u", &tx_evtchn);
>  		if (err < 0) {
>  			xenbus_dev_fatal(dev, err,
>  					 "reading %s/event-channel(-tx/rx)",
> -					 dev->otherend);
> -			return err;
> +					 xspath);
> +			goto err;
>  		}
>  		rx_evtchn = tx_evtchn;
>  	}
> @@ -584,10 +629,15 @@ static int connect_rings(struct backend_info *be,
> struct xenvif_queue *queue)
>  				 "mapping shared-frames %lu/%lu port tx %u
> rx %u",
>  				 tx_ring_ref, rx_ring_ref,
>  				 tx_evtchn, rx_evtchn);
> -		return err;
> +		goto err;
>  	}
> 
> -	return 0;
> +	err = 0;
> +err: /* Regular return falls through with err == 0 */
> +	if (xspath != dev->otherend)
> +		kfree(xspath);
> +
> +	return err;
>  }
> 
>  static int read_xenbus_vif_flags(struct backend_info *be)
> --
> 1.7.10.4
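[Editor's note: the `+ 10` that Paul flags as a magic number covers
"/queue-" plus the queue digits and the NUL terminator. One way to size
the buffer without a bare constant is to let snprintf() measure the
string first; `build_queue_path()` below is a hypothetical user-space
sketch, not the driver code:]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Build "<otherend>/queue-<n>". snprintf(NULL, 0, ...) returns the
 * exact formatted length (excluding the NUL), so no hand-counted
 * constant is needed.
 */
char *build_queue_path(const char *otherend, unsigned int queue)
{
	int len = snprintf(NULL, 0, "%s/queue-%u", otherend, queue);
	char *path;

	if (len < 0)
		return NULL;
	path = malloc((size_t)len + 1);
	if (!path)
		return NULL;
	snprintf(path, (size_t)len + 1, "%s/queue-%u", otherend, queue);
	return path;
}
```

[In the kernel itself, kasprintf(GFP_KERNEL, "%s/queue-%u", ...) does
the measure-allocate-format sequence in a single call.]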


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:40:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kMa-0002Dn-9F; Thu, 16 Jan 2014 10:39:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3kMY-0002DV-Oy
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:39:50 +0000
Received: from [85.158.139.211:4016] by server-8.bemta-5.messagelabs.com id
	CE/18-29838-5F6B7D25; Thu, 16 Jan 2014 10:39:49 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389868787!10114646!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19319 invoked from network); 16 Jan 2014 10:39:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:39:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93413265"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:39:47 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:39:46 -0500
Message-ID: <52D7B6F1.1040604@citrix.com>
Date: Thu, 16 Jan 2014 10:39:45 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Bennieston <andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>	<20140116002706.GJ5331@zion.uk.xensource.com>
	<52D7B379.201@citrix.com>
In-Reply-To: <52D7B379.201@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support
 for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:24, Andrew Bennieston wrote:
> On 16/01/14 00:27, Wei Liu wrote:
>> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>> 
>>> +            goto error;
>>> +        }
>>> +        snprintf(path, pathsize, "%s/queue-%u",
>>> +                dev->nodename, queue->number);
>>> +    }
>>> +    else
>>> +        path = (char *)dev->nodename;
>>
>> Coding style. Should be surounded by {};
> 
> OK.

Linux style is that single-line blocks are not surrounded by braces. You
should have the else on the same line as the preceding } though.

i.e.,

if (...) {
   one_line()
   two_line()
   red_line()
   blue_line()
} else
   a_line()
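[Editor's note: following that suggestion, the fragment under review
could be reflowed as below. This is a sketch with stand-in types and an
illustrative buffer size, not the actual netfront code:]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Brace style per the review: braces around the multi-statement
 * branch, "} else" joined on one line, and no braces on the
 * single-statement else arm.
 */
char *queue_path(const char *nodename, unsigned int num_queues,
		 unsigned int number)
{
	char *path;

	if (num_queues > 1) {
		/* "/queue-" plus the widest possible %u plus the NUL. */
		size_t pathsize = strlen(nodename) +
				  sizeof("/queue-4294967295");

		path = malloc(pathsize);
		if (!path)
			return NULL;
		snprintf(path, pathsize, "%s/queue-%u", nodename, number);
	} else
		path = (char *)nodename;

	return path;
}
```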

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:24, Andrew Bennieston wrote:
> On 16/01/14 00:27, Wei Liu wrote:
>> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>> 
>>> +            goto error;
>>> +        }
>>> +        snprintf(path, pathsize, "%s/queue-%u",
>>> +                dev->nodename, queue->number);
>>> +    }
>>> +    else
>>> +        path = (char *)dev->nodename;
>>
>> Coding style. Should be surrounded by {};
> 
> OK.

Linux style is that single-line blocks are not surrounded by braces.  You
should have the else on the same line as the preceding } though.

i.e.,

if (...) {
   one_line()
   two_line()
   red_line()
   blue_line()
} else
   a_line()

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:47:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kTQ-00032V-Mu; Thu, 16 Jan 2014 10:46:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3kTO-00032P-Q8
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:46:54 +0000
Received: from [85.158.143.35:10249] by server-2.bemta-4.messagelabs.com id
	27/03-11386-E98B7D25; Thu, 16 Jan 2014 10:46:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1389869212!11957319!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30809 invoked from network); 16 Jan 2014 10:46:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:46:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93414449"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:46:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:46:51 -0500
Message-ID: <1389869209.6697.0.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Thu, 16 Jan 2014 10:46:49 +0000
In-Reply-To: <20140116101001.GA56618@deinos.phlegethon.org>
References: <1389285240-7116-1-git-send-email-julien.grall@linaro.org>
	<1389713316.12434.91.camel@kazak.uk.xensource.com>
	<20140116101001.GA56618@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: p2m: Correctly flush TLB in
 create_p2m_entries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 11:10 +0100, Tim Deegan wrote:
> At 15:28 +0000 on 14 Jan (1389709716), Ian Campbell wrote:
> > On Thu, 2014-01-09 at 16:34 +0000, Julien Grall wrote:
> > > Except grant-table (I can't find {get,put}_page for grant-table code???),
> > 
> > I think they are in __gnttab_map_grant_ref, within __get_paged_frame or
> > through page_get_owner_and_reference.
> > 
> > and on unmap it is in __gnttab_unmap_common_complete.
> > 
> > It's a bit of a complex maze though so I'm not entirely sure, perhaps
> > Tim, Keir or Jan can confirm that a grant mapping always takes a
> > reference on the mapped page (it seems like PV x86 ought to be relying
> > on this for safety anyhow).
> 
> Not claiming to understand it completely, but I agree with your analysis.

Thanks.

> > I think the flush in alloc_heap_pages would also serve as a backstop,
> > wouldn't it?
> 
> Not entirely -- if the grant mapping didn't take a ref, then the page
> could be freed and reassigned with the grant mapping still in place --
> the TLB flush doesn't help if the PTE is still there. :)

True, but I think we agree that grant mappings do (and must) take refs.
Phew!

Likewise on ARM we reference count foreign mappings so this is ok wrt
this sort of thing too.

I actually forgot I'd asked this question and was waiting for feedback
-- so the patch is already in, good thing it is all fine!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:47:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kUB-00037T-Pk; Thu, 16 Jan 2014 10:47:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3kU9-00037I-Hx
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:47:41 +0000
Received: from [85.158.139.211:65443] by server-1.bemta-5.messagelabs.com id
	D2/83-21065-CC8B7D25; Thu, 16 Jan 2014 10:47:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389869258!9933944!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9017 invoked from network); 16 Jan 2014 10:47:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:47:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93414668"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:47:38 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:47:37 -0500
Message-ID: <52D7B8C8.9010301@citrix.com>
Date: Thu, 16 Jan 2014 10:47:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Paul Gortmaker <paul.gortmaker@windriver.com>
References: <1389821743-25081-1-git-send-email-paul.gortmaker@windriver.com>
In-Reply-To: <1389821743-25081-1-git-send-email-paul.gortmaker@windriver.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Richard Weinberger <richard@nod.at>, linux-kernel@vger.kernel.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, linuxppc-dev@lists.ozlabs.org
Subject: Re: [Xen-devel] [PATCH v2] drivers/tty/hvc: don't use module_init
 in non-modular hyp. console code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 21:35, Paul Gortmaker wrote:
> The HVC_OPAL/RTAS/UDBG/XEN options are all bool, and hence their support
> is either present or absent.  It will never be modular, so using
> module_init as an alias for __initcall is rather misleading.
> 
> Fix this up now, so that we can relocate module_init from
> init.h into module.h in the future.  If we don't do this, we'd
> have to add module.h to obviously non-modular code, and that
> would be a worse thing.
> 
> Note that direct use of __initcall is discouraged, vs. one
> of the priority categorized subgroups.  As __initcall gets
> mapped onto device_initcall, our use of device_initcall
> directly in this change means that the runtime impact is
> zero -- it will remain at level 6 in initcall ordering.
> 
> Also the __exitcall functions have been outright deleted since
> they are only ever of interest to UML, and UML will never be
> using any of this code.

For the hvc_xen changes

Acked-by: David Vrabel <david.vrabel@citrix.com>

Thanks

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kXz-0003Z8-DC; Thu, 16 Jan 2014 10:51:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kXx-0003Z2-9Z
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:37 +0000
Received: from [193.109.254.147:35337] by server-3.bemta-14.messagelabs.com id
	D8/CA-11000-8B9B7D25; Thu, 16 Jan 2014 10:51:36 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389869494!8917830!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27829 invoked from network); 16 Jan 2014 10:51:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93415638"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:33 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kOU-0002No-9O; Thu, 16 Jan 2014 10:41:50 +0000
Message-ID: <52D7B76E.2080001@citrix.com>
Date: Thu, 16 Jan 2014 10:41:50 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>	<20140116002706.GJ5331@zion.uk.xensource.com>
	<52D7B379.201@citrix.com> <52D7B6F1.1040604@citrix.com>
In-Reply-To: <52D7B6F1.1040604@citrix.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support
 for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:39, David Vrabel wrote:
> On 16/01/14 10:24, Andrew Bennieston wrote:
>> On 16/01/14 00:27, Wei Liu wrote:
>>> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>>>
>>>> +            goto error;
>>>> +        }
>>>> +        snprintf(path, pathsize, "%s/queue-%u",
>>>> +                dev->nodename, queue->number);
>>>> +    }
>>>> +    else
>>>> +        path = (char *)dev->nodename;
>>>
>>> Coding style. Should be surrounded by {};
>>
>> OK.
>
> Linux style is that single-line blocks are not surrounded by braces.  You
> should have the else on the same line as the preceding } though.
>
> i.e.,
>
> if (...) {
>     one_line()
>     two_line()
>     red_line()
>     blue_line()
> } else
>     a_line()
>
> David
>
Right; I'll make sure this is consistently done throughout.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kY2-0003ZZ-Ru; Thu, 16 Jan 2014 10:51:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kY2-0003ZP-0r
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:42 +0000
Received: from [85.158.143.35:14197] by server-1.bemta-4.messagelabs.com id
	B1/10-02132-DB9B7D25; Thu, 16 Jan 2014 10:51:41 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389869499!12124733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26105 invoked from network); 16 Jan 2014 10:51:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91314450"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:38 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kN8-0002LK-3Z; Thu, 16 Jan 2014 10:40:26 +0000
Message-ID: <52D7B719.3010501@citrix.com>
Date: Thu, 16 Jan 2014 10:40:25 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DFE@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DFE@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:28, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
>> Subject: [PATCH RFC 2/4] xen-netback: Add support for multiple queues
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Builds on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Writes the maximum supported number of queues into XenStore, and reads
>> the values written by the frontend to determine how many queues to use.
>>
>> Ring references and event channels are read from XenStore on a per-queue
>> basis and rings are connected accordingly.
>>
From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kXz-0003Z8-DC; Thu, 16 Jan 2014 10:51:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kXx-0003Z2-9Z
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:37 +0000
Received: from [193.109.254.147:35337] by server-3.bemta-14.messagelabs.com id
	D8/CA-11000-8B9B7D25; Thu, 16 Jan 2014 10:51:36 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389869494!8917830!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27829 invoked from network); 16 Jan 2014 10:51:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93415638"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:33 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kOU-0002No-9O; Thu, 16 Jan 2014 10:41:50 +0000
Message-ID: <52D7B76E.2080001@citrix.com>
Date: Thu, 16 Jan 2014 10:41:50 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>	<20140116002706.GJ5331@zion.uk.xensource.com>
	<52D7B379.201@citrix.com> <52D7B6F1.1040604@citrix.com>
In-Reply-To: <52D7B6F1.1040604@citrix.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support
 for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:39, David Vrabel wrote:
> On 16/01/14 10:24, Andrew Bennieston wrote:
>> On 16/01/14 00:27, Wei Liu wrote:
>>> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>>>
>>>> +            goto error;
>>>> +        }
>>>> +        snprintf(path, pathsize, "%s/queue-%u",
>>>> +                dev->nodename, queue->number);
>>>> +    }
>>>> +    else
>>>> +        path = (char *)dev->nodename;
>>>
>>> Coding style. Should be surrounded by {};
>>
>> OK.
>
> Linux style is that single-line blocks are not surrounded by braces.  You
> should have the else on the same line as the preceding } though.
>
> i.e.,
>
> if (...) {
>     one_line()
>     two_line()
>     red_line()
>     blue_line()
> } else
>     a_line()
>
> David
>
Right; I'll make sure this is done consistently throughout.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kY2-0003ZZ-Ru; Thu, 16 Jan 2014 10:51:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kY2-0003ZP-0r
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:42 +0000
Received: from [85.158.143.35:14197] by server-1.bemta-4.messagelabs.com id
	B1/10-02132-DB9B7D25; Thu, 16 Jan 2014 10:51:41 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389869499!12124733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26105 invoked from network); 16 Jan 2014 10:51:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91314450"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:38 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kN8-0002LK-3Z; Thu, 16 Jan 2014 10:40:26 +0000
Message-ID: <52D7B719.3010501@citrix.com>
Date: Thu, 16 Jan 2014 10:40:25 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DFE@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DFE@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 2/4] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:28, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
>> Subject: [PATCH RFC 2/4] xen-netback: Add support for multiple queues
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Builds on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Writes the maximum supported number of queues into XenStore, and reads
>> the values written by the frontend to determine how many queues to use.
>>
>> Ring references and event channels are read from XenStore on a per-queue
>> basis and rings are connected accordingly.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |    2 +
>>   drivers/net/xen-netback/interface.c |    8 +++-
>>   drivers/net/xen-netback/netback.c   |    3 ++
>>   drivers/net/xen-netback/xenbus.c    |   70
>> ++++++++++++++++++++++++++++++-----
>>   4 files changed, 72 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
>> netback/common.h
>> index 54d2eeb..97efd09 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -254,4 +254,6 @@ void xenvif_carrier_on(struct xenvif *vif);
>>
>>   extern bool separate_tx_rx_irq;
>>
>> +extern unsigned int xenvif_max_queues;
>> +
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
>> netback/interface.c
>> index 0113324..0234ff0 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -355,7 +355,13 @@ struct xenvif *xenvif_alloc(struct device *parent,
>> domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/*
>> +	 * Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +			xenvif_max_queues);
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-
>> netback/netback.c
>> index 586e741..5d717d7 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -55,6 +55,9 @@
>>   bool separate_tx_rx_irq = 1;
>>   module_param(separate_tx_rx_irq, bool, 0644);
>>
>> +unsigned int xenvif_max_queues = 4;
>> +module_param(xenvif_max_queues, uint, 0644);
>> +
>>   /*
>>    * This is the maximum slots a skb can have. If a guest sends a skb
>>    * which exceeds this limit it is considered malicious.
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-
>> netback/xenbus.c
>> index c3332e2..ce7ca9a 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -21,6 +21,7 @@
>>
>>   #include "common.h"
>>   #include <linux/vmalloc.h>
>> +#include <linux/rtnetlink.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -160,6 +161,14 @@ static int netback_probe(struct xenbus_device *dev,
>>   	if (err)
>>   		pr_debug("Error writing feature-split-event-channels\n");
>>
>> +	/*
>> +	 * Multi-queue support: This is an optional feature.
>> +	 */
>
> Comment style. Did you run checkpatch?
>
>> +	err = xenbus_printf(XBT_NIL, dev->nodename,
>> +			"multi-queue-max-queues", "%u",
>> xenvif_max_queues);
>> +	if (err)
>> +		pr_debug("Error writing multi-queue-max-queues\n");
>> +
>>   	err = xenbus_switch_state(dev, XenbusStateInitWait);
>>   	if (err)
>>   		goto fail;
>> @@ -491,6 +500,16 @@ static void connect(struct backend_info *be)
>>   	unsigned long credit_bytes, credit_usec;
>>   	unsigned int queue_index;
>>   	struct xenvif_queue *queue;
>> +	unsigned int requested_num_queues;
>> +
>> +	/* Check whether the frontend requested multiple queues
>> +	 * and read the number requested.
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +			"multi-queue-num-queues",
>> +			"%u", &requested_num_queues);
>> +	if (err < 0)
>> +		requested_num_queues = 1; /* Fall back to single queue */
>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -501,9 +520,13 @@ static void connect(struct backend_info *be)
>>   	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>>   	read_xenbus_vif_flags(be);
>>
>> -	be->vif->num_queues = 1;
>> +	/* Use the number of queues requested by the frontend */
>> +	be->vif->num_queues = requested_num_queues;
>>   	be->vif->queues = vzalloc(be->vif->num_queues *
>>   			sizeof(struct xenvif_queue));
>> +	rtnl_lock();
>> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif-
>>> num_queues);
>> +	rtnl_unlock();
>>
>>   	for (queue_index = 0; queue_index < be->vif->num_queues;
>> ++queue_index)
>>   	{
>> @@ -549,29 +572,51 @@ static int connect_rings(struct backend_info *be,
>> struct xenvif_queue *queue)
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> +	char *xspath = NULL;
>> +	size_t xspathsize;
>> +
>> +	/* If the frontend requested 1 queue, or we have fallen back
>> +	 * to single queue due to lack of frontend support for multi-
>> +	 * queue, expect the remaining XenStore keys in the toplevel
>> +	 * directory. Otherwise, expect them in a subdirectory called
>> +	 * queue-N.
>> +	 */
>> +	if (queue->vif->num_queues == 1)
>> +		xspath = (char *)dev->otherend;
>> +	else {
>> +		xspathsize = strlen(dev->otherend) + 10;
>
> Magic number.
>
>    Paul
>
Yes, I'll change this to a defined constant.

Andrew

>> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
>> +		if (!xspath) {
>> +			xenbus_dev_fatal(dev, -ENOMEM,
>> +					"reading ring references");
>> +			return -ENOMEM;
>> +		}
>> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev-
>>> otherend,
>> +				queue->number);
>> +	}
>>
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "tx-ring-ref", "%lu", &tx_ring_ref,
>>   			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>>   	if (err) {
>>   		xenbus_dev_fatal(dev, err,
>>   				 "reading %s/ring-ref",
>> -				 dev->otherend);
>> -		return err;
>> +				 xspath);
>> +		goto err;
>>   	}
>>
>>   	/* Try split event channels first, then single event channel. */
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "event-channel-tx", "%u", &tx_evtchn,
>>   			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>>   	if (err < 0) {
>> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +		err = xenbus_scanf(XBT_NIL, xspath,
>>   				   "event-channel", "%u", &tx_evtchn);
>>   		if (err < 0) {
>>   			xenbus_dev_fatal(dev, err,
>>   					 "reading %s/event-channel(-tx/rx)",
>> -					 dev->otherend);
>> -			return err;
>> +					 xspath);
>> +			goto err;
>>   		}
>>   		rx_evtchn = tx_evtchn;
>>   	}
>> @@ -584,10 +629,15 @@ static int connect_rings(struct backend_info *be,
>> struct xenvif_queue *queue)
>>   				 "mapping shared-frames %lu/%lu port tx %u
>> rx %u",
>>   				 tx_ring_ref, rx_ring_ref,
>>   				 tx_evtchn, rx_evtchn);
>> -		return err;
>> +		goto err;
>>   	}
>>
>> -	return 0;
>> +	err = 0;
>> +err: /* Regular return falls through with err == 0 */
>> +	if (xspath != dev->otherend)
>> +		kfree(xspath);
>> +
>> +	return err;
>>   }
>>
>>   static int read_xenbus_vif_flags(struct backend_info *be)
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kYC-0003cA-9x; Thu, 16 Jan 2014 10:51:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kY9-0003bK-Rj
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:50 +0000
Received: from [85.158.143.35:15066] by server-1.bemta-4.messagelabs.com id
	46/50-02132-5C9B7D25; Thu, 16 Jan 2014 10:51:49 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389869504!274456!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10370 invoked from network); 16 Jan 2014 10:51:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93415684"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:42 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kLT-0002Jh-FU; Thu, 16 Jan 2014 10:38:43 +0000
Message-ID: <52D7B6B3.5060303@citrix.com>
Date: Thu, 16 Jan 2014 10:38:43 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:23, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
>> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
>> queue struct.
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netback, move the
>> queue-specific data from struct xenvif into struct xenvif_queue, and
>> update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_netdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0 for a single queue and uses
>> skb_get_rxhash() to compute the queue index otherwise.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |   66 +++--
>>   drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>>   drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++-------------
>> -----
>>   drivers/net/xen-netback/xenbus.c    |   89 ++++--
>>   4 files changed, 556 insertions(+), 423 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
>> netback/common.h
>> index c47794b..54d2eeb 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>>    */
>>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
>> XEN_NETIF_RX_RING_SIZE)
>>
>> -struct xenvif {
>> -	/* Unique identifier for this interface. */
>> -	domid_t          domid;
>> -	unsigned int     handle;
>> +struct xenvif;
>> +
>> +struct xenvif_queue { /* Per-queue data for xenvif */
>> +	unsigned int number; /* Queue number, 0-based */
>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>
> I wonder whether it would be neater to #define the name size here...
>

Absolutely. I'll do this in V2.

>> +	struct xenvif *vif; /* Parent VIF */
>>
>>   	/* Use NAPI for guest TX */
>>   	struct napi_struct napi;
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int tx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>
> ...and the IRQ name size here. It's kind of ugly to have + some_magic_value in array definitions.
>

As above.

>>   	struct xen_netif_tx_back_ring tx;
>>   	struct sk_buff_head tx_queue;
>>   	struct page *mmap_pages[MAX_PENDING_REQS];
>> @@ -140,7 +142,7 @@ struct xenvif {
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int rx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>>   	struct xen_netif_rx_back_ring rx;
>>   	struct sk_buff_head rx_queue;
>>
>> @@ -150,14 +152,27 @@ struct xenvif {
>>   	 */
>>   	RING_IDX rx_req_cons_peek;
>>
>> -	/* This array is allocated seperately as it is large */
>> -	struct gnttab_copy *grant_copy_op;
>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>
> I see you brought this back in line, which is reasonable as the queue is now a separately allocated struct.
>

Indeed; trying to keep the number of separate allocs/frees to a minimum,
for everybody's sanity!

>>
>>   	/* We create one meta structure per ring request we consume, so
>>   	 * the maximum number is the same as the ring size.
>>   	 */
>>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>
>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> +	unsigned long   credit_bytes;
>> +	unsigned long   credit_usec;
>> +	unsigned long   remaining_credit;
>> +	struct timer_list credit_timeout;
>> +	u64 credit_window_start;
>> +
>> +};
>> +
>> +struct xenvif {
>> +	/* Unique identifier for this interface. */
>> +	domid_t          domid;
>> +	unsigned int     handle;
>> +
>>   	u8               fe_dev_addr[6];
>>
>>   	/* Frontend feature information. */
>> @@ -171,12 +186,9 @@ struct xenvif {
>>   	/* Internal feature information. */
>>   	u8 can_queue:1;	    /* can queue packets for receiver? */
>>
>> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> -	unsigned long   credit_bytes;
>> -	unsigned long   credit_usec;
>> -	unsigned long   remaining_credit;
>> -	struct timer_list credit_timeout;
>> -	u64 credit_window_start;
>> +	/* Queues */
>> +	unsigned int num_queues;
>> +	struct xenvif_queue *queues;
>>
>>   	/* Statistics */
>
> Do stats need to be per-queue (and then possibly aggregated at query time)?
>

Aside from the potential to see the stats for each queue, which may be
useful in some limited circumstances for performance testing or
debugging, I don't see what this buys us...

>>   	unsigned long rx_gso_checksum_fixup;
>> @@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>>   			    domid_t domid,
>>   			    unsigned int handle);
>>
>> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>> +void xenvif_init_queue(struct xenvif_queue *queue);
>> +
>> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>>   		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>>   		   unsigned int rx_evtchn);
>>   void xenvif_disconnect(struct xenvif *vif);
>> @@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
>>
>>   int xenvif_schedulable(struct xenvif *vif);
>>
>> -int xenvif_rx_ring_full(struct xenvif *vif);
>> +int xenvif_rx_ring_full(struct xenvif_queue *queue);
>>
>> -int xenvif_must_stop_queue(struct xenvif *vif);
>> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
>>
>>   /* (Un)Map communication rings. */
>> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
>> -int xenvif_map_frontend_rings(struct xenvif *vif,
>> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
>> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>>   			      grant_ref_t tx_ring_ref,
>>   			      grant_ref_t rx_ring_ref);
>>
>>   /* Check for SKBs from frontend and schedule backend processing */
>> -void xenvif_check_rx_xenvif(struct xenvif *vif);
>> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
>>
>>   /* Queue an SKB for transmission to the frontend */
>> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
>> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff
>> *skb);
>>   /* Notify xenvif that ring now has space to send an skb to the frontend */
>> -void xenvif_notify_tx_completion(struct xenvif *vif);
>> +void xenvif_notify_tx_completion(struct xenvif_queue *queue);
>>
>>   /* Prevent the device from generating any further traffic. */
>>   void xenvif_carrier_off(struct xenvif *vif);
>> @@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
>>   /* Returns number of ring slots required to send an skb to the frontend */
>>   unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
>>
>> -int xenvif_tx_action(struct xenvif *vif, int budget);
>> -void xenvif_rx_action(struct xenvif *vif);
>> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
>> +void xenvif_rx_action(struct xenvif_queue *queue);
>>
>>   int xenvif_kthread(void *data);
>>
>> +int xenvif_poll(struct napi_struct *napi, int budget);
>> +
>> +void xenvif_carrier_on(struct xenvif *vif);
>> +
>>   extern bool separate_tx_rx_irq;
>>
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index fff8cdd..0113324 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -34,7 +34,6 @@
>>   #include <linux/ethtool.h>
>>   #include <linux/rtnetlink.h>
>>   #include <linux/if_vlan.h>
>> -#include <linux/vmalloc.h>
>>
>>   #include <xen/events.h>
>>   #include <asm/xen/hypercall.h>
>> @@ -42,32 +41,50 @@
>>   #define XENVIF_QUEUE_LENGTH 32
>>   #define XENVIF_NAPI_WEIGHT  64
>>
>> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
>> +{
>> +	netif_tx_wake_queue(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Might be neater to declare some stack variables for dev and number to avoid the long line.
>
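For illustration, the tidy-up suggested above might look something like this (untested sketch; the stub types and the `*_stub` helper below stand in for the real kernel structures and netdev API, so this shows the shape only, not a drop-in patch):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel types (illustrative only). */
struct net_device { int tx_queue_started[8]; };
struct xenvif { struct net_device *dev; };
struct xenvif_queue { struct xenvif *vif; unsigned int number; };

/* Stand-in for netdev_get_tx_queue() + netif_tx_wake_queue(). */
static void netif_tx_wake_queue_stub(struct net_device *dev, unsigned int id)
{
	dev->tx_queue_started[id] = 1;
}

static void xenvif_wake_queue(struct xenvif_queue *queue)
{
	/* Local variables keep the call on one short line. */
	struct net_device *dev = queue->vif->dev;
	unsigned int id = queue->number;

	netif_tx_wake_queue_stub(dev, id);
}
```

The same pattern would apply to xenvif_stop_queue() and xenvif_queue_stopped().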
>> +}
>> +
>> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
>> +{
>> +	netif_tx_stop_queue(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Ditto.
>
>> +}
>> +
>> +static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
>> +{
>> +	return netif_tx_queue_stopped(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Ditto.
>
>> +}
>> +
>>   int xenvif_schedulable(struct xenvif *vif)
>>   {
>>   	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
>>   }
>>
>> -static int xenvif_rx_schedulable(struct xenvif *vif)
>> +static int xenvif_rx_schedulable(struct xenvif_queue *queue)
>>   {
>> -	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
>> +	return xenvif_schedulable(queue->vif) && !xenvif_rx_ring_full(queue);
>
> I guess your patches have not been re-based onto net-next? xenvif_ring_full() and xenvif_rx_schedulable() went away in c/s ca2f09f2b2c6c25047cfc545d057c4edfcfe561c (xen-netback: improve guest-receive-side flow control).
>
> Can you rebase? Eventual patches will need to go into net-next.

They haven't yet; I wanted to get some comments on these, but I will
definitely rebase onto net-next in the near future.

>
>    Paul
>
>>   }
>>
>>   static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>>   {
>> -	struct xenvif *vif = dev_id;
>> +	struct xenvif_queue *queue = dev_id;
>>
>> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
>> -		napi_schedule(&vif->napi);
>> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
>> +		napi_schedule(&queue->napi);
>>
>>   	return IRQ_HANDLED;
>>   }
>>
>> -static int xenvif_poll(struct napi_struct *napi, int budget)
>> +int xenvif_poll(struct napi_struct *napi, int budget)
>>   {
>> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
>> +	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
>>   	int work_done;
>>
>> -	work_done = xenvif_tx_action(vif, budget);
>> +	work_done = xenvif_tx_action(queue, budget);
>>
>>   	if (work_done < budget) {
>>   		int more_to_do = 0;
>> @@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
>>
>>   		local_irq_save(flags);
>>
>> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
>> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>>   		if (!more_to_do)
>>   			__napi_complete(napi);
>>
>> @@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
>>
>>   static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>>   {
>> -	struct xenvif *vif = dev_id;
>> +	struct xenvif_queue *queue = dev_id;
>>
>> -	if (xenvif_rx_schedulable(vif))
>> -		netif_wake_queue(vif->dev);
>> +	if (xenvif_rx_schedulable(queue))
>> +		xenvif_wake_queue(queue);
>>
>>   	return IRQ_HANDLED;
>>   }
>> @@ -119,27 +136,56 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>>   	return IRQ_HANDLED;
>>   }
>>
>> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
>> +{
>> +	struct xenvif *vif = netdev_priv(dev);
>> +	u32 hash;
>> +	u16 queue_index;
>> +
>> +	/* First, check if there is only one queue */
>> +	if (vif->num_queues == 1) {
>> +		queue_index = 0;
>> +	}
>
> Style.
>
>> +	else {
>> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
>> +		hash = skb_get_rxhash(skb);
>> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
>> +	}
>> +
>> +	return queue_index;
>> +}
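As an aside, the `((u64)hash * vif->num_queues) >> 32` expression above is a fixed-point scaling that maps a uniform 32-bit hash onto [0, num_queues) without a modulo. A standalone sketch of just that arithmetic (names are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Map a 32-bit hash onto [0, num_queues) by fixed-point scaling,
 * i.e. (hash * n) / 2^32, as select_queue() does above. */
static uint16_t hash_to_queue(uint32_t hash, uint16_t num_queues)
{
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```

Unlike `hash % n`, this needs no division and uses the high bits of the hash, which are typically better mixed.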
>> +
>>   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   {
>>   	struct xenvif *vif = netdev_priv(dev);
>> +	u16 queue_index = 0;
>> +	struct xenvif_queue *queue = NULL;
>>
>>   	BUG_ON(skb->dev != dev);
>>
>> -	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL)
>> +	/* Drop the packet if the queues are not set up */
>> +	if (vif->num_queues < 1 || vif->queues == NULL)
>> +		goto drop;
>
> Just do the former test and ASSERT the second.
>> +
>> +	/* Obtain the queue to be used to transmit this packet */
>> +	queue_index = skb_get_queue_mapping(skb);
>
> Personally, I'd stick a range check here.
>

OK.
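For instance, something as simple as the following (sketch only; the helper name and fallback-to-queue-0 policy are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the suggested range check: fall back to queue 0 if the
 * skb's queue mapping is outside the configured number of queues. */
static uint16_t checked_queue_index(uint16_t mapped, uint16_t num_queues)
{
	return (mapped < num_queues) ? mapped : 0;
}
```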

Andrew

>> +	queue = &vif->queues[queue_index];
>> +
>> +	/* Drop the packet if queue is not ready */
>> +	if (queue->task == NULL)
>>   		goto drop;
>>
>>   	/* Drop the packet if the target domain has no receive buffers. */
>> -	if (!xenvif_rx_schedulable(vif))
>> +	if (!xenvif_rx_schedulable(queue))
>>   		goto drop;
>>
>>   	/* Reserve ring slots for the worst-case number of fragments. */
>> -	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
>> +	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
>>
>> -	if (vif->can_queue && xenvif_must_stop_queue(vif))
>> -		netif_stop_queue(dev);
>> +	if (vif->can_queue && xenvif_must_stop_queue(queue))
>> +		xenvif_stop_queue(queue);
>>
>> -	xenvif_queue_tx_skb(vif, skb);
>> +	xenvif_queue_tx_skb(queue, skb);
>>
>>   	return NETDEV_TX_OK;
>>
>> @@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	return NETDEV_TX_OK;
>>   }
>>
>> -void xenvif_notify_tx_completion(struct xenvif *vif)
>> +void xenvif_notify_tx_completion(struct xenvif_queue *queue)
>>   {
>> -	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
>> -		netif_wake_queue(vif->dev);
>> +	if (xenvif_queue_stopped(queue) && xenvif_rx_schedulable(queue))
>> +		xenvif_wake_queue(queue);
>>   }
>>
>>   static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>> @@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>>
>>   static void xenvif_up(struct xenvif *vif)
>>   {
>> -	napi_enable(&vif->napi);
>> -	enable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		enable_irq(vif->rx_irq);
>> -	xenvif_check_rx_xenvif(vif);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_enable(&queue->napi);
>> +		enable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			enable_irq(queue->rx_irq);
>> +		xenvif_check_rx_xenvif(queue);
>> +	}
>>   }
>>
>>   static void xenvif_down(struct xenvif *vif)
>>   {
>> -	napi_disable(&vif->napi);
>> -	disable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		disable_irq(vif->rx_irq);
>> -	del_timer_sync(&vif->credit_timeout);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_disable(&queue->napi);
>> +		disable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			disable_irq(queue->rx_irq);
>> +		del_timer_sync(&queue->credit_timeout);
>> +	}
>>   }
>>
>>   static int xenvif_open(struct net_device *dev)
>> @@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
>>   	struct xenvif *vif = netdev_priv(dev);
>>   	if (netif_carrier_ok(dev))
>>   		xenvif_up(vif);
>> -	netif_start_queue(dev);
>> +	netif_tx_start_all_queues(dev);
>>   	return 0;
>>   }
>>
>> @@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
>>   	struct xenvif *vif = netdev_priv(dev);
>>   	if (netif_carrier_ok(dev))
>>   		xenvif_down(vif);
>> -	netif_stop_queue(dev);
>> +	netif_tx_stop_all_queues(dev);
>>   	return 0;
>>   }
>>
>> @@ -287,6 +343,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
>>   	.ndo_fix_features = xenvif_fix_features,
>>   	.ndo_set_mac_address = eth_mac_addr,
>>   	.ndo_validate_addr   = eth_validate_addr,
>> +	.ndo_select_queue = select_queue,
>>   };
>>
>>   struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>> @@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	struct net_device *dev;
>>   	struct xenvif *vif;
>>   	char name[IFNAMSIZ] = {};
>> -	int i;
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> @@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>
>>   	vif = netdev_priv(dev);
>>
>> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
>> -				     MAX_GRANT_COPY_OPS);
>> -	if (vif->grant_copy_op == NULL) {
>> -		pr_warn("Could not allocate grant copy space for %s\n", name);
>> -		free_netdev(dev);
>> -		return ERR_PTR(-ENOMEM);
>> -	}
>> -
>>   	vif->domid  = domid;
>>   	vif->handle = handle;
>>   	vif->can_sg = 1;
>>   	vif->ip_csum = 1;
>>   	vif->dev = dev;
>>
>> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
>> -	vif->credit_usec  = 0UL;
>> -	init_timer(&vif->credit_timeout);
>> -	vif->credit_window_start = get_jiffies_64();
>> +	/* Start out with no queues */
>> +	vif->num_queues = 0;
>> +	vif->queues = NULL;
>>
>>   	dev->netdev_ops	= &xenvif_netdev_ops;
>>   	dev->hw_features = NETIF_F_SG |
>> @@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>
>>   	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
>>
>> -	skb_queue_head_init(&vif->rx_queue);
>> -	skb_queue_head_init(&vif->tx_queue);
>> -
>> -	vif->pending_cons = 0;
>> -	vif->pending_prod = MAX_PENDING_REQS;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->pending_ring[i] = i;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->mmap_pages[i] = NULL;
>> -
>>   	/*
>>   	 * Initialise a dummy MAC address. We choose the numerically
>>   	 * largest non-broadcast address to prevent the address getting
>> @@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>>   	dev->dev_addr[0] &= ~0x01;
>>
>> -	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
>> -
>>   	netif_carrier_off(dev);
>>
>>   	err = register_netdev(dev);
>> @@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	return vif;
>>   }
>>
>> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>> +void xenvif_init_queue(struct xenvif_queue *queue)
>> +{
>> +	int i;
>> +
>> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
>> +	queue->credit_usec  = 0UL;
>> +	init_timer(&queue->credit_timeout);
>> +	queue->credit_window_start = get_jiffies_64();
>> +
>> +	skb_queue_head_init(&queue->rx_queue);
>> +	skb_queue_head_init(&queue->tx_queue);
>> +
>> +	queue->pending_cons = 0;
>> +	queue->pending_prod = MAX_PENDING_REQS;
>> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
>> +		queue->pending_ring[i] = i;
>> +		queue->mmap_pages[i] = NULL;
>> +	}
>> +
>> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> +			XENVIF_NAPI_WEIGHT);
>> +}
>> +
>> +void xenvif_carrier_on(struct xenvif *vif)
>> +{
>> +	rtnl_lock();
>> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
>> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
>> +	netdev_update_features(vif->dev);
>> +	netif_carrier_on(vif->dev);
>> +	if (netif_running(vif->dev))
>> +		xenvif_up(vif);
>> +	rtnl_unlock();
>> +}
>> +
>> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>>   		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>>   		   unsigned int rx_evtchn)
>>   {
>>   	struct task_struct *task;
>>   	int err = -ENOMEM;
>>
>> -	BUG_ON(vif->tx_irq);
>> -	BUG_ON(vif->task);
>> +	BUG_ON(queue->tx_irq);
>> +	BUG_ON(queue->task);
>>
>> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
>> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>>   	if (err < 0)
>>   		goto err;
>>
>>   	if (tx_evtchn == rx_evtchn) {
>>   		/* feature-split-event-channels == 0 */
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
>> -			vif->dev->name, vif);
>> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
>> +			queue->name, queue);
>>   		if (err < 0)
>>   			goto err_unmap;
>> -		vif->tx_irq = vif->rx_irq = err;
>> -		disable_irq(vif->tx_irq);
>> +		queue->tx_irq = queue->rx_irq = err;
>> +		disable_irq(queue->tx_irq);
>>   	} else {
>>   		/* feature-split-event-channels == 1 */
>> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
>> -			 "%s-tx", vif->dev->name);
>> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
>> +			 "%s-tx", queue->name);
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
>> -			vif->tx_irq_name, vif);
>> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
>> +			queue->tx_irq_name, queue);
>>   		if (err < 0)
>>   			goto err_unmap;
>> -		vif->tx_irq = err;
>> -		disable_irq(vif->tx_irq);
>> +		queue->tx_irq = err;
>> +		disable_irq(queue->tx_irq);
>>
>> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
>> -			 "%s-rx", vif->dev->name);
>> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
>> +			 "%s-rx", queue->name);
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
>> -			vif->rx_irq_name, vif);
>> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
>> +			queue->rx_irq_name, queue);
>>   		if (err < 0)
>>   			goto err_tx_unbind;
>> -		vif->rx_irq = err;
>> -		disable_irq(vif->rx_irq);
>> +		queue->rx_irq = err;
>> +		disable_irq(queue->rx_irq);
>>   	}
>>
>> -	init_waitqueue_head(&vif->wq);
>> +	init_waitqueue_head(&queue->wq);
>>   	task = kthread_create(xenvif_kthread,
>> -			      (void *)vif, "%s", vif->dev->name);
>> +			      (void *)queue, "%s", queue->name);
>>   	if (IS_ERR(task)) {
>> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
>> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>>   		err = PTR_ERR(task);
>>   		goto err_rx_unbind;
>>   	}
>>
>> -	vif->task = task;
>> -
>> -	rtnl_lock();
>> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
>> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
>> -	netdev_update_features(vif->dev);
>> -	netif_carrier_on(vif->dev);
>> -	if (netif_running(vif->dev))
>> -		xenvif_up(vif);
>> -	rtnl_unlock();
>> +	queue->task = task;
>>
>> -	wake_up_process(vif->task);
>> +	wake_up_process(queue->task);
>>
>>   	return 0;
>>
>>   err_rx_unbind:
>> -	unbind_from_irqhandler(vif->rx_irq, vif);
>> -	vif->rx_irq = 0;
>> +	unbind_from_irqhandler(queue->rx_irq, queue);
>> +	queue->rx_irq = 0;
>>   err_tx_unbind:
>> -	unbind_from_irqhandler(vif->tx_irq, vif);
>> -	vif->tx_irq = 0;
>> +	unbind_from_irqhandler(queue->tx_irq, queue);
>> +	queue->tx_irq = 0;
>>   err_unmap:
>> -	xenvif_unmap_frontend_rings(vif);
>> +	xenvif_unmap_frontend_rings(queue);
>>   err:
>>   	module_put(THIS_MODULE);
>>   	return err;
>> @@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
>>
>>   void xenvif_disconnect(struct xenvif *vif)
>>   {
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +
>>   	if (netif_carrier_ok(vif->dev))
>>   		xenvif_carrier_off(vif);
>>
>> -	if (vif->task) {
>> -		kthread_stop(vif->task);
>> -		vif->task = NULL;
>> -	}
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>>
>> -	if (vif->tx_irq) {
>> -		if (vif->tx_irq == vif->rx_irq)
>> -			unbind_from_irqhandler(vif->tx_irq, vif);
>> -		else {
>> -			unbind_from_irqhandler(vif->tx_irq, vif);
>> -			unbind_from_irqhandler(vif->rx_irq, vif);
>> +		if (queue->task) {
>> +			kthread_stop(queue->task);
>> +			queue->task = NULL;
>>   		}
>> -		vif->tx_irq = 0;
>> +
>> +		if (queue->tx_irq) {
>> +			if (queue->tx_irq == queue->rx_irq)
>> +				unbind_from_irqhandler(queue->tx_irq, queue);
>> +			else {
>> +				unbind_from_irqhandler(queue->tx_irq, queue);
>> +				unbind_from_irqhandler(queue->rx_irq, queue);
>> +			}
>> +			queue->tx_irq = 0;
>> +		}
>> +
>> +		xenvif_unmap_frontend_rings(queue);
>>   	}
>>
>> -	xenvif_unmap_frontend_rings(vif);
>> +
>>   }
>>
>>   void xenvif_free(struct xenvif *vif)
>>   {
>> -	netif_napi_del(&vif->napi);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>>
>> -	unregister_netdev(vif->dev);
>> +	for(queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		netif_napi_del(&queue->napi);
>> +	}
>>
>> -	vfree(vif->grant_copy_op);
>> +	/* Free the array of queues */
>> +	vfree(vif->queues);
>> +	vif->num_queues = 0;
>> +	vif->queues = 0;
>> +
>> +	unregister_netdev(vif->dev);
>>   	free_netdev(vif->dev);
>>
>>   	module_put(THIS_MODULE);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 7842555..586e741 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
>>    * one or more merged tx requests, otherwise it is the continuation of
>>    * previous tx request.
>>    */
>> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
>> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>>   {
>> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>>   }
>>
>> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>>   			       u8 status);
>>
>> -static void make_tx_response(struct xenvif *vif,
>> +static void make_tx_response(struct xenvif_queue *queue,
>>   			     struct xen_netif_tx_request *txp,
>>   			     s8       st);
>>
>> -static inline int tx_work_todo(struct xenvif *vif);
>> -static inline int rx_work_todo(struct xenvif *vif);
>> +static inline int tx_work_todo(struct xenvif_queue *queue);
>> +static inline int rx_work_todo(struct xenvif_queue *queue);
>>
>> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>>   					     u16      id,
>>   					     s8       st,
>>   					     u16      offset,
>>   					     u16      size,
>>   					     u16      flags);
>>
>> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
>> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>>   				       u16 idx)
>>   {
>> -	return page_to_pfn(vif->mmap_pages[idx]);
>> +	return page_to_pfn(queue->mmap_pages[idx]);
>>   }
>>
>> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
>> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>>   					 u16 idx)
>>   {
>> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
>> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>>   }
>>
>>   /* This is a miniumum size for the linear area to avoid lots of
>> @@ -132,10 +132,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
>>   	return i & (MAX_PENDING_REQS-1);
>>   }
>>
>> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
>> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>>   {
>>   	return MAX_PENDING_REQS -
>> -		vif->pending_prod + vif->pending_cons;
>> +		queue->pending_prod + queue->pending_cons;
>>   }
>>
>>   static int max_required_rx_slots(struct xenvif *vif)
>> @@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
>>   	return max;
>>   }
>>
>> -int xenvif_rx_ring_full(struct xenvif *vif)
>> +int xenvif_rx_ring_full(struct xenvif_queue *queue)
>>   {
>> -	RING_IDX peek   = vif->rx_req_cons_peek;
>> -	RING_IDX needed = max_required_rx_slots(vif);
>> +	RING_IDX peek   = queue->rx_req_cons_peek;
>> +	RING_IDX needed = max_required_rx_slots(queue->vif);
>>
>> -	return ((vif->rx.sring->req_prod - peek) < needed) ||
>> -	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>> +	return ((queue->rx.sring->req_prod - peek) < needed) ||
>> +	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>>   }
>>
>> -int xenvif_must_stop_queue(struct xenvif *vif)
>> +int xenvif_must_stop_queue(struct xenvif_queue *queue)
>>   {
>> -	if (!xenvif_rx_ring_full(vif))
>> +	if (!xenvif_rx_ring_full(queue))
>>   		return 0;
>>
>> -	vif->rx.sring->req_event = vif->rx_req_cons_peek +
>> -		max_required_rx_slots(vif);
>> +	queue->rx.sring->req_event = queue->rx_req_cons_peek +
>> +		max_required_rx_slots(queue->vif);
>>   	mb(); /* request notification /then/ check the queue */
>>
>> -	return xenvif_rx_ring_full(vif);
>> +	return xenvif_rx_ring_full(queue);
>>   }
>>
>>   /*
>> @@ -306,13 +306,13 @@ struct netrx_pending_operations {
>>   	grant_ref_t copy_gref;
>>   };
>>
>> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>>   						 struct netrx_pending_operations *npo)
>>   {
>>   	struct xenvif_rx_meta *meta;
>>   	struct xen_netif_rx_request *req;
>>
>> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>
>>   	meta = npo->meta + npo->meta_prod++;
>>   	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
>> @@ -330,7 +330,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>>    * Set up the grant operations for this fragment. If it's a flipping
>>    * interface, we also set up the unmap request from here.
>>    */
>> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>>   				 struct netrx_pending_operations *npo,
>>   				 struct page *page, unsigned long size,
>>   				 unsigned long offset, int *head)
>> @@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   			 */
>>   			BUG_ON(*head);
>>
>> -			meta = get_next_rx_buffer(vif, npo);
>> +			meta = get_next_rx_buffer(queue, npo);
>>   		}
>>
>>   		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
>> @@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>>   		copy_gop->source.offset = offset;
>>
>> -		copy_gop->dest.domid = vif->domid;
>> +		copy_gop->dest.domid = queue->vif->domid;
>>   		copy_gop->dest.offset = npo->copy_off;
>>   		copy_gop->dest.u.ref = npo->copy_gref;
>>
>> @@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   		else
>>   			gso_type = XEN_NETIF_GSO_TYPE_NONE;
>>
>> -		if (*head && ((1 << gso_type) & vif->gso_mask))
>> -			vif->rx.req_cons++;
>> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
>> +			queue->rx.req_cons++;
>>
>>   		*head = 0; /* There must be something in this buffer now. */
>>
>> @@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>    * frontend-side LRO).
>>    */
>>   static int xenvif_gop_skb(struct sk_buff *skb,
>> -			  struct netrx_pending_operations *npo)
>> +			  struct netrx_pending_operations *npo,
>> +			  struct xenvif_queue *queue)
>>   {
>>   	struct xenvif *vif = netdev_priv(skb->dev);
>>   	int nr_frags = skb_shinfo(skb)->nr_frags;
>> @@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>
>>   	/* Set up a GSO prefix descriptor, if necessary */
>>   	if ((1 << gso_type) & vif->gso_prefix_mask) {
>> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>   		meta = npo->meta + npo->meta_prod++;
>>   		meta->gso_type = gso_type;
>>   		meta->gso_size = gso_size;
>> @@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>   		meta->id = req->id;
>>   	}
>>
>> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>   	meta = npo->meta + npo->meta_prod++;
>>
>>   	if ((1 << gso_type) & vif->gso_mask) {
>> @@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>   		if (data + len > skb_tail_pointer(skb))
>>   			len = skb_tail_pointer(skb) - data;
>>
>> -		xenvif_gop_frag_copy(vif, skb, npo,
>> +		xenvif_gop_frag_copy(queue, skb, npo,
>>   				     virt_to_page(data), len, offset, &head);
>>   		data += len;
>>   	}
>>
>>   	for (i = 0; i < nr_frags; i++) {
>> -		xenvif_gop_frag_copy(vif, skb, npo,
>> +		xenvif_gop_frag_copy(queue, skb, npo,
>>   				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>>   				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>>   				     skb_shinfo(skb)->frags[i].page_offset,
>> @@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
>>   	return status;
>>   }
>>
>> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>>   				      struct xenvif_rx_meta *meta,
>>   				      int nr_meta_slots)
>>   {
>> @@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>>   			flags = XEN_NETRXF_more_data;
>>
>>   		offset = 0;
>> -		make_rx_response(vif, meta[i].id, status, offset,
>> +		make_rx_response(queue, meta[i].id, status, offset,
>>   				 meta[i].size, flags);
>>   	}
>>   }
>> @@ -557,12 +558,12 @@ struct skb_cb_overlay {
>>   	int meta_slots_used;
>>   };
>>
>> -static void xenvif_kick_thread(struct xenvif *vif)
>> +static void xenvif_kick_thread(struct xenvif_queue *queue)
>>   {
>> -	wake_up(&vif->wq);
>> +	wake_up(&queue->wq);
>>   }
>>
>> -void xenvif_rx_action(struct xenvif *vif)
>> +void xenvif_rx_action(struct xenvif_queue *queue)
>>   {
>>   	s8 status;
>>   	u16 flags;
>> @@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
>>   	int need_to_notify = 0;
>>
>>   	struct netrx_pending_operations npo = {
>> -		.copy  = vif->grant_copy_op,
>> -		.meta  = vif->meta,
>> +		.copy  = queue->grant_copy_op,
>> +		.meta  = queue->meta,
>>   	};
>>
>>   	skb_queue_head_init(&rxq);
>>
>>   	count = 0;
>>
>> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
>> -		vif = netdev_priv(skb->dev);
>> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>>   		nr_frags = skb_shinfo(skb)->nr_frags;
>>
>>   		sco = (struct skb_cb_overlay *)skb->cb;
>> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
>> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
>>
>>   		count += nr_frags + 1;
>>
>> @@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			break;
>>   	}
>>
>> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
>> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
>>
>>   	if (!npo.copy_prod)
>>   		return;
>>
>>   	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
>> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
>> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
>>
>>   	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>>   		sco = (struct skb_cb_overlay *)skb->cb;
>>
>> -		vif = netdev_priv(skb->dev);
>> -
>> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
>> -		    vif->gso_prefix_mask) {
>> -			resp = RING_GET_RESPONSE(&vif->rx,
>> -						 vif->rx.rsp_prod_pvt++);
>> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
>> +		    queue->vif->gso_prefix_mask) {
>> +			resp = RING_GET_RESPONSE(&queue->rx,
>> +						 queue->rx.rsp_prod_pvt++);
>>
>>   			resp->flags = XEN_NETRXF_gso_prefix |
>> XEN_NETRXF_more_data;
>>
>> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
>> -			resp->id = vif->meta[npo.meta_cons].id;
>> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
>> +			resp->id = queue->meta[npo.meta_cons].id;
>>   			resp->status = sco->meta_slots_used;
>>
>>   			npo.meta_cons++;
>> @@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
>>   		}
>>
>>
>> -		vif->dev->stats.tx_bytes += skb->len;
>> -		vif->dev->stats.tx_packets++;
>> +		queue->vif->dev->stats.tx_bytes += skb->len;
>> +		queue->vif->dev->stats.tx_packets++;
>>
>> -		status = xenvif_check_gop(vif, sco->meta_slots_used,
>> &npo);
>> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
>>
>>   		if (sco->meta_slots_used == 1)
>>   			flags = 0;
>> @@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			flags |= XEN_NETRXF_data_validated;
>>
>>   		offset = 0;
>> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
>> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>>   					status, offset,
>> -					vif->meta[npo.meta_cons].size,
>> +					queue->meta[npo.meta_cons].size,
>>   					flags);
>>
>> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
>> -		    vif->gso_mask) {
>> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
>> +		    queue->vif->gso_mask) {
>>   			struct xen_netif_extra_info *gso =
>>   				(struct xen_netif_extra_info *)
>> -				RING_GET_RESPONSE(&vif->rx,
>> -						  vif->rx.rsp_prod_pvt++);
>> +				RING_GET_RESPONSE(&queue->rx,
>> +						  queue->rx.rsp_prod_pvt++);
>>
>>   			resp->flags |= XEN_NETRXF_extra_info;
>>
>> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
>> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
>> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
>> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>>   			gso->u.gso.pad = 0;
>>   			gso->u.gso.features = 0;
>>
>> @@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			gso->flags = 0;
>>   		}
>>
>> -		xenvif_add_frag_responses(vif, status,
>> -					  vif->meta + npo.meta_cons + 1,
>> +		xenvif_add_frag_responses(queue, status,
>> +					  queue->meta + npo.meta_cons + 1,
>>   					  sco->meta_slots_used);
>>
>> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
>> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
>>
>>   		if (ret)
>>   			need_to_notify = 1;
>>
>> -		xenvif_notify_tx_completion(vif);
>> +		xenvif_notify_tx_completion(queue);
>>
>>   		npo.meta_cons += sco->meta_slots_used;
>>   		dev_kfree_skb(skb);
>>   	}
>>
>>   	if (need_to_notify)
>> -		notify_remote_via_irq(vif->rx_irq);
>> +		notify_remote_via_irq(queue->rx_irq);
>>
>>   	/* More work to do? */
>> -	if (!skb_queue_empty(&vif->rx_queue))
>> -		xenvif_kick_thread(vif);
>> +	if (!skb_queue_empty(&queue->rx_queue))
>> +		xenvif_kick_thread(queue);
>>   }
>>
>> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
>> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
>>   {
>> -	skb_queue_tail(&vif->rx_queue, skb);
>> +	skb_queue_tail(&queue->rx_queue, skb);
>>
>> -	xenvif_kick_thread(vif);
>> +	xenvif_kick_thread(queue);
>>   }
>>
>> -void xenvif_check_rx_xenvif(struct xenvif *vif)
>> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>>   {
>>   	int more_to_do;
>>
>> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
>> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>>
>>   	if (more_to_do)
>> -		napi_schedule(&vif->napi);
>> +		napi_schedule(&queue->napi);
>>   }
>>
>> -static void tx_add_credit(struct xenvif *vif)
>> +static void tx_add_credit(struct xenvif_queue *queue)
>>   {
>>   	unsigned long max_burst, max_credit;
>>
>> @@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
>>   	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>>   	 * Otherwise the interface can seize up due to insufficient credit.
>>   	 */
>> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
>> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>>   	max_burst = min(max_burst, 131072UL);
>> -	max_burst = max(max_burst, vif->credit_bytes);
>> +	max_burst = max(max_burst, queue->credit_bytes);
>>
>>   	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
>> -	max_credit = vif->remaining_credit + vif->credit_bytes;
>> -	if (max_credit < vif->remaining_credit)
>> +	max_credit = queue->remaining_credit + queue->credit_bytes;
>> +	if (max_credit < queue->remaining_credit)
>>   		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
>>
>> -	vif->remaining_credit = min(max_credit, max_burst);
>> +	queue->remaining_credit = min(max_credit, max_burst);
>>   }
>>
>>   static void tx_credit_callback(unsigned long data)
>>   {
>> -	struct xenvif *vif = (struct xenvif *)data;
>> -	tx_add_credit(vif);
>> -	xenvif_check_rx_xenvif(vif);
>> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
>> +	tx_add_credit(queue);
>> +	xenvif_check_rx_xenvif(queue);
>>   }
>>
>> -static void xenvif_tx_err(struct xenvif *vif,
>> +static void xenvif_tx_err(struct xenvif_queue *queue,
>>   			  struct xen_netif_tx_request *txp, RING_IDX end)
>>   {
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>
>>   	do {
>> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
>> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>>   		if (cons == end)
>>   			break;
>> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
>> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>>   	} while (1);
>> -	vif->tx.req_cons = cons;
>> +	queue->tx.req_cons = cons;
>>   }
>>
>>   static void xenvif_fatal_tx_err(struct xenvif *vif)
>> @@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>>   	xenvif_carrier_off(vif);
>>   }
>>
>> -static int xenvif_count_requests(struct xenvif *vif,
>> +static int xenvif_count_requests(struct xenvif_queue *queue,
>>   				 struct xen_netif_tx_request *first,
>>   				 struct xen_netif_tx_request *txp,
>>   				 int work_to_do)
>>   {
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>   	int slots = 0;
>>   	int drop_err = 0;
>>   	int more_data;
>> @@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		struct xen_netif_tx_request dropped_tx = { 0 };
>>
>>   		if (slots >= work_to_do) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Asked for %d slots but exceeds this limit\n",
>>   				   work_to_do);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -ENODATA;
>>   		}
>>
>> @@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 * considered malicious.
>>   		 */
>>   		if (unlikely(slots >= fatal_skb_slots)) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Malicious frontend using %d slots, threshold %u\n",
>>   				   slots, fatal_skb_slots);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -E2BIG;
>>   		}
>>
>> @@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 */
>>   		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>>   			if (net_ratelimit())
>> -				netdev_dbg(vif->dev,
>> +				netdev_dbg(queue->vif->dev,
>>   					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>>   					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>>   			drop_err = -E2BIG;
>> @@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		if (drop_err)
>>   			txp = &dropped_tx;
>>
>> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
>> +		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>>   		       sizeof(*txp));
>>
>>   		/* If the guest submitted a frame >= 64 KiB then
>> @@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 */
>>   		if (!drop_err && txp->size > first->size) {
>>   			if (net_ratelimit())
>> -				netdev_dbg(vif->dev,
>> +				netdev_dbg(queue->vif->dev,
>>   					   "Invalid tx request, slot size %u > remaining size %u\n",
>>   					   txp->size, first->size);
>>   			drop_err = -EIO;
>> @@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		slots++;
>>
>>   		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
>> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>>   				 txp->offset, txp->size);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EINVAL;
>>   		}
>>
>> @@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   	} while (more_data);
>>
>>   	if (drop_err) {
>> -		xenvif_tx_err(vif, first, cons + slots);
>> +		xenvif_tx_err(queue, first, cons + slots);
>>   		return drop_err;
>>   	}
>>
>>   	return slots;
>>   }
>>
>> -static struct page *xenvif_alloc_page(struct xenvif *vif,
>> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>>   				      u16 pending_idx)
>>   {
>>   	struct page *page;
>> @@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>>   	if (!page)
>>   		return NULL;
>> -	vif->mmap_pages[pending_idx] = page;
>> +	queue->mmap_pages[pending_idx] = page;
>>
>>   	return page;
>>   }
>>
>> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>>   					       struct sk_buff *skb,
>>   					       struct xen_netif_tx_request *txp,
>>   					       struct gnttab_copy *gop)
>> @@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>>   	     shinfo->nr_frags++) {
>>   		struct pending_tx_info *pending_tx_info =
>> -			vif->pending_tx_info;
>> +			queue->pending_tx_info;
>>
>>   		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>>   		if (!page)
>> @@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   			gop->flags = GNTCOPY_source_gref;
>>
>>   			gop->source.u.ref = txp->gref;
>> -			gop->source.domid = vif->domid;
>> +			gop->source.domid = queue->vif->domid;
>>   			gop->source.offset = txp->offset;
>>
>>   			gop->dest.domid = DOMID_SELF;
>> @@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   				gop->len = txp->size;
>>   				dst_offset += gop->len;
>>
>> -				index = pending_index(vif->pending_cons++);
>> +				index = pending_index(queue->pending_cons++);
>>
>> -				pending_idx = vif->pending_ring[index];
>> +				pending_idx = queue->pending_ring[index];
>>
>>   				memcpy(&pending_tx_info[pending_idx].req, txp,
>>   				       sizeof(*txp));
>> @@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   				 * fields for head tx req will be set
>>   				 * to correct values after the loop.
>>   				 */
>> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
>> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>>   				pending_tx_info[pending_idx].head =
>>   					INVALID_PENDING_RING_IDX;
>>
>> @@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   		first->req.offset = 0;
>>   		first->req.size = dst_offset;
>>   		first->head = start_idx;
>> -		vif->mmap_pages[head_idx] = page;
>> +		queue->mmap_pages[head_idx] = page;
>>   		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>>   	}
>>
>> @@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   err:
>>   	/* Unwind, freeing all pages and sending error responses. */
>>   	while (shinfo->nr_frags-- > start) {
>> -		xenvif_idx_release(vif,
>> +		xenvif_idx_release(queue,
>>   				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>>   				XEN_NETIF_RSP_ERROR);
>>   	}
>>   	/* The head too, if necessary. */
>>   	if (start)
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   	return NULL;
>>   }
>>
>> -static int xenvif_tx_check_gop(struct xenvif *vif,
>> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>>   			       struct sk_buff *skb,
>>   			       struct gnttab_copy **gopp)
>>   {
>> @@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   	/* Check status of header. */
>>   	err = gop->status;
>>   	if (unlikely(err))
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   	/* Skip first skb fragment if it is on same page as header fragment. */
>>   	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
>> @@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   		pending_ring_idx_t head;
>>
>>   		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
>> -		tx_info = &vif->pending_tx_info[pending_idx];
>> +		tx_info = &queue->pending_tx_info[pending_idx];
>>   		head = tx_info->head;
>>
>>   		/* Check error status: if okay then remember grant handle.
>> */
>> @@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   			newerr = (++gop)->status;
>>   			if (newerr)
>>   				break;
>> -			peek = vif->pending_ring[pending_index(++head)];
>> -		} while (!pending_tx_is_head(vif, peek));
>> +			peek = queue->pending_ring[pending_index(++head)];
>> +		} while (!pending_tx_is_head(queue, peek));
>>
>>   		if (likely(!newerr)) {
>>   			/* Had a previous error? Invalidate this fragment. */
>>   			if (unlikely(err))
>> -				xenvif_idx_release(vif, pending_idx,
>> +				xenvif_idx_release(queue, pending_idx,
>>   						   XEN_NETIF_RSP_OKAY);
>>   			continue;
>>   		}
>>
>>   		/* Error on this fragment: respond to client with an error. */
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   		/* Not the first error? Preceding frags already invalidated. */
>>   		if (err)
>> @@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>
>>   		/* First error: invalidate header and preceding fragments. */
>>   		pending_idx = *((u16 *)skb->data);
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>>   		for (j = start; j < i; j++) {
>>   			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
>> -			xenvif_idx_release(vif, pending_idx,
>> +			xenvif_idx_release(queue, pending_idx,
>>   					   XEN_NETIF_RSP_OKAY);
>>   		}
>>
>> @@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   	return err;
>>   }
>>
>> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
>> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>>   {
>>   	struct skb_shared_info *shinfo = skb_shinfo(skb);
>>   	int nr_frags = shinfo->nr_frags;
>> @@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
>>
>>   		pending_idx = frag_get_pending_idx(frag);
>>
>> -		txp = &vif->pending_tx_info[pending_idx].req;
>> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
>> +		txp = &queue->pending_tx_info[pending_idx].req;
>> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>>   		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>>   		skb->len += txp->size;
>>   		skb->data_len += txp->size;
>>   		skb->truesize += txp->size;
>>
>>   		/* Take an extra reference to offset xenvif_idx_release */
>> -		get_page(vif->mmap_pages[pending_idx]);
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>> +		get_page(queue->mmap_pages[pending_idx]);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>>   	}
>>   }
>>
>> -static int xenvif_get_extras(struct xenvif *vif,
>> +static int xenvif_get_extras(struct xenvif_queue *queue,
>>   				struct xen_netif_extra_info *extras,
>>   				int work_to_do)
>>   {
>>   	struct xen_netif_extra_info extra;
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>
>>   	do {
>>   		if (unlikely(work_to_do-- <= 0)) {
>> -			netdev_err(vif->dev, "Missing extra info\n");
>> -			xenvif_fatal_tx_err(vif);
>> +			netdev_err(queue->vif->dev, "Missing extra info\n");
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EBADR;
>>   		}
>>
>> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
>> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>>   		       sizeof(extra));
>>   		if (unlikely(!extra.type ||
>>   			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>> -			vif->tx.req_cons = ++cons;
>> -			netdev_err(vif->dev,
>> +			queue->tx.req_cons = ++cons;
>> +			netdev_err(queue->vif->dev,
>>   				   "Invalid extra type: %d\n", extra.type);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EINVAL;
>>   		}
>>
>>   		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
>> -		vif->tx.req_cons = ++cons;
>> +		queue->tx.req_cons = ++cons;
>>   	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
>>
>>   	return work_to_do;
>> @@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>>   	return err;
>>   }
>>
>> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>>   {
>>   	u64 now = get_jiffies_64();
>> -	u64 next_credit = vif->credit_window_start +
>> -		msecs_to_jiffies(vif->credit_usec / 1000);
>> +	u64 next_credit = queue->credit_window_start +
>> +		msecs_to_jiffies(queue->credit_usec / 1000);
>>
>>   	/* Timer could already be pending in rare cases. */
>> -	if (timer_pending(&vif->credit_timeout))
>> +	if (timer_pending(&queue->credit_timeout))
>>   		return true;
>>
>>   	/* Passed the point where we can replenish credit? */
>>   	if (time_after_eq64(now, next_credit)) {
>> -		vif->credit_window_start = now;
>> -		tx_add_credit(vif);
>> +		queue->credit_window_start = now;
>> +		tx_add_credit(queue);
>>   	}
>>
>>   	/* Still too big to send right now? Set a callback. */
>> -	if (size > vif->remaining_credit) {
>> -		vif->credit_timeout.data     =
>> -			(unsigned long)vif;
>> -		vif->credit_timeout.function =
>> +	if (size > queue->remaining_credit) {
>> +		queue->credit_timeout.data     =
>> +			(unsigned long)queue;
>> +		queue->credit_timeout.function =
>>   			tx_credit_callback;
>> -		mod_timer(&vif->credit_timeout,
>> +		mod_timer(&queue->credit_timeout,
>>   			  next_credit);
>> -		vif->credit_window_start = next_credit;
>> +		queue->credit_window_start = next_credit;
>>
>>   		return true;
>>   	}
>> @@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>>   	return false;
>>   }
>>
>> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>>   {
>> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
>> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>>   	struct sk_buff *skb;
>>   	int ret;
>>
>> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
>> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>>   		< MAX_PENDING_REQS) &&
>> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
>> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>>   		struct xen_netif_tx_request txreq;
>>   		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>>   		struct page *page;
>> @@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>   		unsigned int data_len;
>>   		pending_ring_idx_t index;
>>
>> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
>> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>>   		    XEN_NETIF_TX_RING_SIZE) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Impossible number of requests. "
>>   				   "req_prod %d, req_cons %d, size %ld\n",
>> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
>> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>>   				   XEN_NETIF_TX_RING_SIZE);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			continue;
>>   		}
>>
>> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
>> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>>   		if (!work_to_do)
>>   			break;
>>
>> -		idx = vif->tx.req_cons;
>> +		idx = queue->tx.req_cons;
>>   		rmb(); /* Ensure that we see the request before we copy it. */
>> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
>> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
>>
>>   		/* Credit-based scheduling. */
>> -		if (txreq.size > vif->remaining_credit &&
>> -		    tx_credit_exceeded(vif, txreq.size))
>> +		if (txreq.size > queue->remaining_credit &&
>> +		    tx_credit_exceeded(queue, txreq.size))
>>   			break;
>>
>> -		vif->remaining_credit -= txreq.size;
>> +		queue->remaining_credit -= txreq.size;
>>
>>   		work_to_do--;
>> -		vif->tx.req_cons = ++idx;
>> +		queue->tx.req_cons = ++idx;
>>
>>   		memset(extras, 0, sizeof(extras));
>>   		if (txreq.flags & XEN_NETTXF_extra_info) {
>> -			work_to_do = xenvif_get_extras(vif, extras,
>> +			work_to_do = xenvif_get_extras(queue, extras,
>>   						       work_to_do);
>> -			idx = vif->tx.req_cons;
>> +			idx = queue->tx.req_cons;
>>   			if (unlikely(work_to_do < 0))
>>   				break;
>>   		}
>>
>> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
>> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>>   		if (unlikely(ret < 0))
>>   			break;
>>
>>   		idx += ret;
>>
>>   		if (unlikely(txreq.size < ETH_HLEN)) {
>> -			netdev_dbg(vif->dev,
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Bad packet size: %d\n", txreq.size);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>>   		/* No crossing a page as the payload mustn't fragment. */
>>   		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "txreq.offset: %x, size: %u, end: %lu\n",
>>   				   txreq.offset, txreq.size,
>>   				   (txreq.offset&~PAGE_MASK) + txreq.size);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			break;
>>   		}
>>
>> -		index = pending_index(vif->pending_cons);
>> -		pending_idx = vif->pending_ring[index];
>> +		index = pending_index(queue->pending_cons);
>> +		pending_idx = queue->pending_ring[index];
>>
>>   		data_len = (txreq.size > PKT_PROT_LEN &&
>>   			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
>> @@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>   		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>>   				GFP_ATOMIC | __GFP_NOWARN);
>>   		if (unlikely(skb == NULL)) {
>> -			netdev_dbg(vif->dev,
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Can't allocate a skb in start_xmit.\n");
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>> @@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>   			struct xen_netif_extra_info *gso;
>>   			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
>>
>> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
>> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>>   				/* Failure in xenvif_set_skb_gso is fatal. */
>>   				kfree_skb(skb);
>>   				break;
>> @@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>   		}
>>
>>   		/* XXX could copy straight to head */
>> -		page = xenvif_alloc_page(vif, pending_idx);
>> +		page = xenvif_alloc_page(queue, pending_idx);
>>   		if (!page) {
>>   			kfree_skb(skb);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>>   		gop->source.u.ref = txreq.gref;
>> -		gop->source.domid = vif->domid;
>> +		gop->source.domid = queue->vif->domid;
>>   		gop->source.offset = txreq.offset;
>>
>>   		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
>> @@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>
>>   		gop++;
>>
>> -		memcpy(&vif->pending_tx_info[pending_idx].req,
>> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>>   		       &txreq, sizeof(txreq));
>> -		vif->pending_tx_info[pending_idx].head = index;
>> +		queue->pending_tx_info[pending_idx].head = index;
>>   		*((u16 *)skb->data) = pending_idx;
>>
>>   		__skb_put(skb, data_len);
>> @@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>>   					     INVALID_PENDING_IDX);
>>   		}
>>
>> -		vif->pending_cons++;
>> +		queue->pending_cons++;
>>
>> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
>> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>>   		if (request_gop == NULL) {
>>   			kfree_skb(skb);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>   		gop = request_gop;
>>
>> -		__skb_queue_tail(&vif->tx_queue, skb);
>> +		__skb_queue_tail(&queue->tx_queue, skb);
>>
>> -		vif->tx.req_cons = idx;
>> +		queue->tx.req_cons = idx;
>>
>> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
>> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>>   			break;
>>   	}
>>
>> -	return gop - vif->tx_copy_ops;
>> +	return gop - queue->tx_copy_ops;
>>   }
>>
>>
>> -static int xenvif_tx_submit(struct xenvif *vif)
>> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>>   {
>> -	struct gnttab_copy *gop = vif->tx_copy_ops;
>> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>>   	struct sk_buff *skb;
>>   	int work_done = 0;
>>
>> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
>> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>>   		struct xen_netif_tx_request *txp;
>>   		u16 pending_idx;
>>   		unsigned data_len;
>>
>>   		pending_idx = *((u16 *)skb->data);
>> -		txp = &vif->pending_tx_info[pending_idx].req;
>> +		txp = &queue->pending_tx_info[pending_idx].req;
>>
>>   		/* Check the remap error code. */
>> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
>> -			netdev_dbg(vif->dev, "netback grant failed.\n");
>> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
>> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>>   			skb_shinfo(skb)->nr_frags = 0;
>>   			kfree_skb(skb);
>>   			continue;
>> @@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>
>>   		data_len = skb->len;
>>   		memcpy(skb->data,
>> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
>> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>>   		       data_len);
>>   		if (data_len < txp->size) {
>>   			/* Append the packet payload as a fragment. */
>> @@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   			txp->size -= data_len;
>>   		} else {
>>   			/* Schedule a response immediately. */
>> -			xenvif_idx_release(vif, pending_idx,
>> +			xenvif_idx_release(queue, pending_idx,
>>   					   XEN_NETIF_RSP_OKAY);
>>   		}
>>
>> @@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   		else if (txp->flags & XEN_NETTXF_data_validated)
>>   			skb->ip_summed = CHECKSUM_UNNECESSARY;
>>
>> -		xenvif_fill_frags(vif, skb);
>> +		xenvif_fill_frags(queue, skb);
>>
>>   		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>>   			int target = min_t(int, skb->len, PKT_PROT_LEN);
>>   			__pskb_pull_tail(skb, target - skb_headlen(skb));
>>   		}
>>
>> -		skb->dev      = vif->dev;
>> +		skb->dev      = queue->vif->dev;
>>   		skb->protocol = eth_type_trans(skb, skb->dev);
>>   		skb_reset_network_header(skb);
>>
>> -		if (checksum_setup(vif, skb)) {
>> -			netdev_dbg(vif->dev,
>> +		if (checksum_setup(queue->vif, skb)) {
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Can't setup checksum in net_tx_action\n");
>>   			kfree_skb(skb);
>>   			continue;
>> @@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>
>>   		skb_probe_transport_header(skb, 0);
>>
>> -		vif->dev->stats.rx_bytes += skb->len;
>> -		vif->dev->stats.rx_packets++;
>> +		queue->vif->dev->stats.rx_bytes += skb->len;
>> +		queue->vif->dev->stats.rx_packets++;
>>
>>   		work_done++;
>>
>> @@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   }
>>
>>   /* Called after netfront has transmitted */
>> -int xenvif_tx_action(struct xenvif *vif, int budget)
>> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>>   {
>>   	unsigned nr_gops;
>>   	int work_done;
>>
>> -	if (unlikely(!tx_work_todo(vif)))
>> +	if (unlikely(!tx_work_todo(queue)))
>>   		return 0;
>>
>> -	nr_gops = xenvif_tx_build_gops(vif, budget);
>> +	nr_gops = xenvif_tx_build_gops(queue, budget);
>>
>>   	if (nr_gops == 0)
>>   		return 0;
>>
>> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
>> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
>>
>> -	work_done = xenvif_tx_submit(vif);
>> +	work_done = xenvif_tx_submit(queue);
>>
>>   	return work_done;
>>   }
>>
>> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>>   			       u8 status)
>>   {
>>   	struct pending_tx_info *pending_tx_info;
>>   	pending_ring_idx_t head;
>>   	u16 peek; /* peek into next tx request */
>>
>> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
>> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
>>
>>   	/* Already complete? */
>> -	if (vif->mmap_pages[pending_idx] == NULL)
>> +	if (queue->mmap_pages[pending_idx] == NULL)
>>   		return;
>>
>> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
>> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
>>
>>   	head = pending_tx_info->head;
>>
>> -	BUG_ON(!pending_tx_is_head(vif, head));
>> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
>> +	BUG_ON(!pending_tx_is_head(queue, head));
>> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
>>
>>   	do {
>>   		pending_ring_idx_t index;
>>   		pending_ring_idx_t idx = pending_index(head);
>> -		u16 info_idx = vif->pending_ring[idx];
>> +		u16 info_idx = queue->pending_ring[idx];
>>
>> -		pending_tx_info = &vif->pending_tx_info[info_idx];
>> -		make_tx_response(vif, &pending_tx_info->req, status);
>> +		pending_tx_info = &queue->pending_tx_info[info_idx];
>> +		make_tx_response(queue, &pending_tx_info->req, status);
>>
>>   		/* Setting any number other than
>>   		 * INVALID_PENDING_RING_IDX indicates this slot is
>> @@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>   		 */
>>   		pending_tx_info->head = 0;
>>
>> -		index = pending_index(vif->pending_prod++);
>> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
>> +		index = pending_index(queue->pending_prod++);
>> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
>>
>> -		peek = vif->pending_ring[pending_index(++head)];
>> +		peek = queue->pending_ring[pending_index(++head)];
>>
>> -	} while (!pending_tx_is_head(vif, peek));
>> +	} while (!pending_tx_is_head(queue, peek));
>>
>> -	put_page(vif->mmap_pages[pending_idx]);
>> -	vif->mmap_pages[pending_idx] = NULL;
>> +	put_page(queue->mmap_pages[pending_idx]);
>> +	queue->mmap_pages[pending_idx] = NULL;
>>   }
>>
>>
>> -static void make_tx_response(struct xenvif *vif,
>> +static void make_tx_response(struct xenvif_queue *queue,
>>   			     struct xen_netif_tx_request *txp,
>>   			     s8       st)
>>   {
>> -	RING_IDX i = vif->tx.rsp_prod_pvt;
>> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>>   	struct xen_netif_tx_response *resp;
>>   	int notify;
>>
>> -	resp = RING_GET_RESPONSE(&vif->tx, i);
>> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>>   	resp->id     = txp->id;
>>   	resp->status = st;
>>
>>   	if (txp->flags & XEN_NETTXF_extra_info)
>> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
>> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
>>
>> -	vif->tx.rsp_prod_pvt = ++i;
>> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
>> +	queue->tx.rsp_prod_pvt = ++i;
>> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>>   	if (notify)
>> -		notify_remote_via_irq(vif->tx_irq);
>> +		notify_remote_via_irq(queue->tx_irq);
>>   }
>>
>> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>>   					     u16      id,
>>   					     s8       st,
>>   					     u16      offset,
>>   					     u16      size,
>>   					     u16      flags)
>>   {
>> -	RING_IDX i = vif->rx.rsp_prod_pvt;
>> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>>   	struct xen_netif_rx_response *resp;
>>
>> -	resp = RING_GET_RESPONSE(&vif->rx, i);
>> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>>   	resp->offset     = offset;
>>   	resp->flags      = flags;
>>   	resp->id         = id;
>> @@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>>   	if (st < 0)
>>   		resp->status = (s16)st;
>>
>> -	vif->rx.rsp_prod_pvt = ++i;
>> +	queue->rx.rsp_prod_pvt = ++i;
>>
>>   	return resp;
>>   }
>>
>> -static inline int rx_work_todo(struct xenvif *vif)
>> +static inline int rx_work_todo(struct xenvif_queue *queue)
>>   {
>> -	return !skb_queue_empty(&vif->rx_queue);
>> +	return !skb_queue_empty(&queue->rx_queue);
>>   }
>>
>> -static inline int tx_work_todo(struct xenvif *vif)
>> +static inline int tx_work_todo(struct xenvif_queue *queue)
>>   {
>>
>> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
>> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
>> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
>> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>>   	     < MAX_PENDING_REQS))
>>   		return 1;
>>
>>   	return 0;
>>   }
>>
>> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
>> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>>   {
>> -	if (vif->tx.sring)
>> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
>> -					vif->tx.sring);
>> -	if (vif->rx.sring)
>> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
>> -					vif->rx.sring);
>> +	if (queue->tx.sring)
>> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
>> +					queue->tx.sring);
>> +	if (queue->rx.sring)
>> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
>> +					queue->rx.sring);
>>   }
>>
>> -int xenvif_map_frontend_rings(struct xenvif *vif,
>> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>>   			      grant_ref_t tx_ring_ref,
>>   			      grant_ref_t rx_ring_ref)
>>   {
>> @@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
>>
>>   	int err = -ENOMEM;
>>
>> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
>> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>>   				     tx_ring_ref, &addr);
>>   	if (err)
>>   		goto err;
>>
>>   	txs = (struct xen_netif_tx_sring *)addr;
>> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
>> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
>>
>> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
>> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>>   				     rx_ring_ref, &addr);
>>   	if (err)
>>   		goto err;
>>
>>   	rxs = (struct xen_netif_rx_sring *)addr;
>> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
>> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
>>
>> -	vif->rx_req_cons_peek = 0;
>> +	queue->rx_req_cons_peek = 0;
>>
>>   	return 0;
>>
>>   err:
>> -	xenvif_unmap_frontend_rings(vif);
>> +	xenvif_unmap_frontend_rings(queue);
>>   	return err;
>>   }
>>
>>   int xenvif_kthread(void *data)
>>   {
>> -	struct xenvif *vif = data;
>> +	struct xenvif_queue *queue = data;
>>
>>   	while (!kthread_should_stop()) {
>> -		wait_event_interruptible(vif->wq,
>> -					 rx_work_todo(vif) ||
>> +		wait_event_interruptible(queue->wq,
>> +					 rx_work_todo(queue) ||
>>   					 kthread_should_stop());
>>   		if (kthread_should_stop())
>>   			break;
>>
>> -		if (rx_work_todo(vif))
>> -			xenvif_rx_action(vif);
>> +		if (rx_work_todo(queue))
>> +			xenvif_rx_action(queue);
>>
>>   		cond_resched();
>>   	}
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index f035899..c3332e2 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -20,6 +20,7 @@
>>   */
>>
>>   #include "common.h"
>> +#include <linux/vmalloc.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -35,8 +36,9 @@ struct backend_info {
>>   	u8 have_hotplug_status_watch:1;
>>   };
>>
>> -static int connect_rings(struct backend_info *);
>> -static void connect(struct backend_info *);
>> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
>> +static void connect(struct backend_info *be);
>> +static int read_xenbus_vif_flags(struct backend_info *be);
>>   static void backend_create_xenvif(struct backend_info *be);
>>   static void unregister_hotplug_status_watch(struct backend_info *be);
>>   static void set_backend_state(struct backend_info *be,
>> @@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
>>   {
>>   	int err;
>>   	struct xenbus_device *dev = be->dev;
>> -
>> -	err = connect_rings(be);
>> -	if (err)
>> -		return;
>> +	unsigned long credit_bytes, credit_usec;
>> +	unsigned int queue_index;
>> +	struct xenvif_queue *queue;
>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
>>   		return;
>>   	}
>>
>> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
>> -			  &be->vif->credit_usec);
>> -	be->vif->remaining_credit = be->vif->credit_bytes;
>> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>> +	read_xenbus_vif_flags(be);
>> +
>> +	be->vif->num_queues = 1;
>> +	be->vif->queues = vzalloc(be->vif->num_queues *
>> +			sizeof(struct xenvif_queue));
>> +
>> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
>> +	{
>> +		queue = &be->vif->queues[queue_index];
>> +		queue->vif = be->vif;
>> +		queue->number = queue_index;
>> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
>> +				be->vif->dev->name, queue->number);
>> +
>> +		xenvif_init_queue(queue);
>> +
>> +		queue->remaining_credit = credit_bytes;
>> +
>> +		err = connect_rings(be, queue);
>> +		if (err)
>> +			goto err;
>> +	}
>> +
>> +	xenvif_carrier_on(be->vif);
>>
>>   	unregister_hotplug_status_watch(be);
>>   	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
>> @@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
>>   	if (!err)
>>   		be->have_hotplug_status_watch = 1;
>>
>> -	netif_wake_queue(be->vif->dev);
>> +	netif_tx_wake_all_queues(be->vif->dev);
>> +
>> +	return;
>> +
>> +err:
>> +	vfree(be->vif->queues);
>> +	be->vif->queues = NULL;
>> +	be->vif->num_queues = 0;
>> +	return;
>>   }
>>
>>
>> -static int connect_rings(struct backend_info *be)
>> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   {
>> -	struct xenvif *vif = be->vif;
>>   	struct xenbus_device *dev = be->dev;
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
>> +	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> -	int val;
>>
>>   	err = xenbus_gather(XBT_NIL, dev->otherend,
>>   			    "tx-ring-ref", "%lu", &tx_ring_ref,
>> @@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
>>   		rx_evtchn = tx_evtchn;
>>   	}
>>
>> +	/* Map the shared frame, irq etc. */
>> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
>> +			     tx_evtchn, rx_evtchn);
>> +	if (err) {
>> +		xenbus_dev_fatal(dev, err,
>> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>> +				 tx_ring_ref, rx_ring_ref,
>> +				 tx_evtchn, rx_evtchn);
>> +		return err;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int read_xenbus_vif_flags(struct backend_info *be)
>> +{
>> +	struct xenvif *vif = be->vif;
>> +	struct xenbus_device *dev = be->dev;
>> +	unsigned int rx_copy;
>> +	int err, val;
>> +
>>   	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>>   			   &rx_copy);
>>   	if (err == -ENOENT) {
>> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>>   		val = 0;
>>   	vif->ipv6_csum = !!val;
>>
>> -	/* Map the shared frame, irq etc. */
>> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
>> -			     tx_evtchn, rx_evtchn);
>> -	if (err) {
>> -		xenbus_dev_fatal(dev, err,
>> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>> -				 tx_ring_ref, rx_ring_ref,
>> -				 tx_evtchn, rx_evtchn);
>> -		return err;
>> -	}
>>   	return 0;
>>   }
>>
>> -
>>   /* ** Driver Registration ** */
>>
>>
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kYC-0003cA-9x; Thu, 16 Jan 2014 10:51:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kY9-0003bK-Rj
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 10:51:50 +0000
Received: from [85.158.143.35:15066] by server-1.bemta-4.messagelabs.com id
	46/50-02132-5C9B7D25; Thu, 16 Jan 2014 10:51:49 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389869504!274456!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10370 invoked from network); 16 Jan 2014 10:51:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93415684"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 10:51:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:51:42 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kLT-0002Jh-FU; Thu, 16 Jan 2014 10:38:43 +0000
Message-ID: <52D7B6B3.5060303@citrix.com>
Date: Thu, 16 Jan 2014 10:38:43 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:23, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 15 January 2014 16:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
>> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
>> queue struct.
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netback, move the
>> queue-specific data from struct xenvif into struct xenvif_queue, and
>> update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_netdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0 for a single queue and uses
>> skb_get_rxhash() to compute the queue index otherwise.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |   66 +++--
>>   drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>>   drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
>>   drivers/net/xen-netback/xenbus.c    |   89 ++++--
>>   4 files changed, 556 insertions(+), 423 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index c47794b..54d2eeb 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>>    */
>>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
>>
>> -struct xenvif {
>> -	/* Unique identifier for this interface. */
>> -	domid_t          domid;
>> -	unsigned int     handle;
>> +struct xenvif;
>> +
>> +struct xenvif_queue { /* Per-queue data for xenvif */
>> +	unsigned int number; /* Queue number, 0-based */
>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>
> I wonder whether it would be neater to #define the name size here...
>

Absolutely. I'll do this in V2.

>> +	struct xenvif *vif; /* Parent VIF */
>>
>>   	/* Use NAPI for guest TX */
>>   	struct napi_struct napi;
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int tx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>
> ...and the IRQ name size here. It's kind of ugly to have + some_magic_value in array definitions.
>

As above.

>>   	struct xen_netif_tx_back_ring tx;
>>   	struct sk_buff_head tx_queue;
>>   	struct page *mmap_pages[MAX_PENDING_REQS];
>> @@ -140,7 +142,7 @@ struct xenvif {
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int rx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>>   	struct xen_netif_rx_back_ring rx;
>>   	struct sk_buff_head rx_queue;
>>
>> @@ -150,14 +152,27 @@ struct xenvif {
>>   	 */
>>   	RING_IDX rx_req_cons_peek;
>>
>> -	/* This array is allocated seperately as it is large */
>> -	struct gnttab_copy *grant_copy_op;
>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>
> I see you brought this back in line, which is reasonable as the queue is now a separately allocated struct.
>

Indeed; trying to keep the number of separate allocs/frees to a minimum,
for everybody's sanity!

>>
>>   	/* We create one meta structure per ring request we consume, so
>>   	 * the maximum number is the same as the ring size.
>>   	 */
>>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>
>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> +	unsigned long   credit_bytes;
>> +	unsigned long   credit_usec;
>> +	unsigned long   remaining_credit;
>> +	struct timer_list credit_timeout;
>> +	u64 credit_window_start;
>> +
>> +};
>> +
>> +struct xenvif {
>> +	/* Unique identifier for this interface. */
>> +	domid_t          domid;
>> +	unsigned int     handle;
>> +
>>   	u8               fe_dev_addr[6];
>>
>>   	/* Frontend feature information. */
>> @@ -171,12 +186,9 @@ struct xenvif {
>>   	/* Internal feature information. */
>>   	u8 can_queue:1;	    /* can queue packets for receiver? */
>>
>> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> -	unsigned long   credit_bytes;
>> -	unsigned long   credit_usec;
>> -	unsigned long   remaining_credit;
>> -	struct timer_list credit_timeout;
>> -	u64 credit_window_start;
>> +	/* Queues */
>> +	unsigned int num_queues;
>> +	struct xenvif_queue *queues;
>>
>>   	/* Statistics */
>
> Do stats need to be per-queue (and then possibly aggregated at query time)?
>

Aside from the potential to see the stats for each queue, which may be
useful in some limited circumstances for performance testing or
debugging, I don't see what this buys us...

>>   	unsigned long rx_gso_checksum_fixup;
>> @@ -194,7 +206,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>>   			    domid_t domid,
>>   			    unsigned int handle);
>>
>> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>> +void xenvif_init_queue(struct xenvif_queue *queue);
>> +
>> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>>   		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>>   		   unsigned int rx_evtchn);
>>   void xenvif_disconnect(struct xenvif *vif);
>> @@ -205,23 +219,23 @@ void xenvif_xenbus_fini(void);
>>
>>   int xenvif_schedulable(struct xenvif *vif);
>>
>> -int xenvif_rx_ring_full(struct xenvif *vif);
>> +int xenvif_rx_ring_full(struct xenvif_queue *queue);
>>
>> -int xenvif_must_stop_queue(struct xenvif *vif);
>> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
>>
>>   /* (Un)Map communication rings. */
>> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
>> -int xenvif_map_frontend_rings(struct xenvif *vif,
>> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
>> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>>   			      grant_ref_t tx_ring_ref,
>>   			      grant_ref_t rx_ring_ref);
>>
>>   /* Check for SKBs from frontend and schedule backend processing */
>> -void xenvif_check_rx_xenvif(struct xenvif *vif);
>> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
>>
>>   /* Queue an SKB for transmission to the frontend */
>> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb);
>> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb);
>>   /* Notify xenvif that ring now has space to send an skb to the frontend */
>> -void xenvif_notify_tx_completion(struct xenvif *vif);
>> +void xenvif_notify_tx_completion(struct xenvif_queue *queue);
>>
>>   /* Prevent the device from generating any further traffic. */
>>   void xenvif_carrier_off(struct xenvif *vif);
>> @@ -229,11 +243,15 @@ void xenvif_carrier_off(struct xenvif *vif);
>>   /* Returns number of ring slots required to send an skb to the frontend */
>>   unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
>>
>> -int xenvif_tx_action(struct xenvif *vif, int budget);
>> -void xenvif_rx_action(struct xenvif *vif);
>> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
>> +void xenvif_rx_action(struct xenvif_queue *queue);
>>
>>   int xenvif_kthread(void *data);
>>
>> +int xenvif_poll(struct napi_struct *napi, int budget);
>> +
>> +void xenvif_carrier_on(struct xenvif *vif);
>> +
>>   extern bool separate_tx_rx_irq;
>>
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index fff8cdd..0113324 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -34,7 +34,6 @@
>>   #include <linux/ethtool.h>
>>   #include <linux/rtnetlink.h>
>>   #include <linux/if_vlan.h>
>> -#include <linux/vmalloc.h>
>>
>>   #include <xen/events.h>
>>   #include <asm/xen/hypercall.h>
>> @@ -42,32 +41,50 @@
>>   #define XENVIF_QUEUE_LENGTH 32
>>   #define XENVIF_NAPI_WEIGHT  64
>>
>> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
>> +{
>> +	netif_tx_wake_queue(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Might be neater to declare some stack variables for dev and number to avoid the long line.
>
>> +}
>> +
>> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
>> +{
>> +	netif_tx_stop_queue(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Ditto.
>
>> +}
>> +
>> +static inline int xenvif_queue_stopped(struct xenvif_queue *queue)
>> +{
>> +	return netif_tx_queue_stopped(
>> +			netdev_get_tx_queue(queue->vif->dev, queue->number));
>
> Ditto.
>
>> +}
>> +
>>   int xenvif_schedulable(struct xenvif *vif)
>>   {
>>   	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
>>   }
>>
>> -static int xenvif_rx_schedulable(struct xenvif *vif)
>> +static int xenvif_rx_schedulable(struct xenvif_queue *queue)
>>   {
>> -	return xenvif_schedulable(vif) && !xenvif_rx_ring_full(vif);
>> +	return xenvif_schedulable(queue->vif) && !xenvif_rx_ring_full(queue);
>
> I guess your patches have not been re-based onto net-next? xenvif_ring_full() and xenvif_rx_schedulable() went away in c/s ca2f09f2b2c6c25047cfc545d057c4edfcfe561c (xen-netback: improve guest-receive-side flow control).
>
> Can you rebase? Eventual patches will need to go into net-next.

They haven't yet; I wanted to get some comments on these, but I will
definitely rebase onto net-next in the near future.

>
>    Paul
>
>>   }
>>
>>   static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>>   {
>> -	struct xenvif *vif = dev_id;
>> +	struct xenvif_queue *queue = dev_id;
>>
>> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
>> -		napi_schedule(&vif->napi);
>> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
>> +		napi_schedule(&queue->napi);
>>
>>   	return IRQ_HANDLED;
>>   }
>>
>> -static int xenvif_poll(struct napi_struct *napi, int budget)
>> +int xenvif_poll(struct napi_struct *napi, int budget)
>>   {
>> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
>> +	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
>>   	int work_done;
>>
>> -	work_done = xenvif_tx_action(vif, budget);
>> +	work_done = xenvif_tx_action(queue, budget);
>>
>>   	if (work_done < budget) {
>>   		int more_to_do = 0;
>> @@ -91,7 +108,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
>>
>>   		local_irq_save(flags);
>>
>> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
>> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>>   		if (!more_to_do)
>>   			__napi_complete(napi);
>>
>> @@ -103,10 +120,10 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
>>
>>   static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>>   {
>> -	struct xenvif *vif = dev_id;
>> +	struct xenvif_queue *queue = dev_id;
>>
>> -	if (xenvif_rx_schedulable(vif))
>> -		netif_wake_queue(vif->dev);
>> +	if (xenvif_rx_schedulable(queue))
>> +		xenvif_wake_queue(queue);
>>
>>   	return IRQ_HANDLED;
>>   }
>> @@ -119,27 +136,56 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>>   	return IRQ_HANDLED;
>>   }
>>
>> +static u16 select_queue(struct net_device *dev, struct sk_buff *skb)
>> +{
>> +	struct xenvif *vif = netdev_priv(dev);
>> +	u32 hash;
>> +	u16 queue_index;
>> +
>> +	/* First, check if there is only one queue */
>> +	if (vif->num_queues == 1) {
>> +		queue_index = 0;
>> +	}
>
> Style.
>
>> +	else {
>> +		/* Use skb_get_rxhash to obtain an L4 hash if available */
>> +		hash = skb_get_rxhash(skb);
>> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
>> +	}
>> +
>> +	return queue_index;
>> +}
>> +
>>   static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   {
>>   	struct xenvif *vif = netdev_priv(dev);
>> +	u16 queue_index = 0;
>> +	struct xenvif_queue *queue = NULL;
>>
>>   	BUG_ON(skb->dev != dev);
>>
>> -	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL)
>> +	/* Drop the packet if the queues are not set up */
>> +	if (vif->num_queues < 1 || vif->queues == NULL)
>> +		goto drop;
>
> Just do the former test and ASSERT the second.
>> +
>> +	/* Obtain the queue to be used to transmit this packet */
>> +	queue_index = skb_get_queue_mapping(skb);
>
> Personally, I'd stick a range check here.
>

OK.

Andrew

>> +	queue = &vif->queues[queue_index];
>> +
>> +	/* Drop the packet if queue is not ready */
>> +	if (queue->task == NULL)
>>   		goto drop;
>>
>>   	/* Drop the packet if the target domain has no receive buffers. */
>> -	if (!xenvif_rx_schedulable(vif))
>> +	if (!xenvif_rx_schedulable(queue))
>>   		goto drop;
>>
>>   	/* Reserve ring slots for the worst-case number of fragments. */
>> -	vif->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
>> +	queue->rx_req_cons_peek += xenvif_count_skb_slots(vif, skb);
>>
>> -	if (vif->can_queue && xenvif_must_stop_queue(vif))
>> -		netif_stop_queue(dev);
>> +	if (vif->can_queue && xenvif_must_stop_queue(queue))
>> +		xenvif_stop_queue(queue);
>>
>> -	xenvif_queue_tx_skb(vif, skb);
>> +	xenvif_queue_tx_skb(queue, skb);
>>
>>   	return NETDEV_TX_OK;
>>
>> @@ -149,10 +195,10 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	return NETDEV_TX_OK;
>>   }
>>
>> -void xenvif_notify_tx_completion(struct xenvif *vif)
>> +void xenvif_notify_tx_completion(struct xenvif_queue *queue)
>>   {
>> -	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif))
>> -		netif_wake_queue(vif->dev);
>> +	if (xenvif_queue_stopped(queue) && xenvif_rx_schedulable(queue))
>> +		xenvif_wake_queue(queue);
>>   }
>>
>>   static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>> @@ -163,20 +209,30 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>>
>>   static void xenvif_up(struct xenvif *vif)
>>   {
>> -	napi_enable(&vif->napi);
>> -	enable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		enable_irq(vif->rx_irq);
>> -	xenvif_check_rx_xenvif(vif);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_enable(&queue->napi);
>> +		enable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			enable_irq(queue->rx_irq);
>> +		xenvif_check_rx_xenvif(queue);
>> +	}
>>   }
>>
>>   static void xenvif_down(struct xenvif *vif)
>>   {
>> -	napi_disable(&vif->napi);
>> -	disable_irq(vif->tx_irq);
>> -	if (vif->tx_irq != vif->rx_irq)
>> -		disable_irq(vif->rx_irq);
>> -	del_timer_sync(&vif->credit_timeout);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>> +		napi_disable(&queue->napi);
>> +		disable_irq(queue->tx_irq);
>> +		if (queue->tx_irq != queue->rx_irq)
>> +			disable_irq(queue->rx_irq);
>> +		del_timer_sync(&queue->credit_timeout);
>> +	}
>>   }
>>
>>   static int xenvif_open(struct net_device *dev)
>> @@ -184,7 +240,7 @@ static int xenvif_open(struct net_device *dev)
>>   	struct xenvif *vif = netdev_priv(dev);
>>   	if (netif_carrier_ok(dev))
>>   		xenvif_up(vif);
>> -	netif_start_queue(dev);
>> +	netif_tx_start_all_queues(dev);
>>   	return 0;
>>   }
>>
>> @@ -193,7 +249,7 @@ static int xenvif_close(struct net_device *dev)
>>   	struct xenvif *vif = netdev_priv(dev);
>>   	if (netif_carrier_ok(dev))
>>   		xenvif_down(vif);
>> -	netif_stop_queue(dev);
>> +	netif_tx_stop_all_queues(dev);
>>   	return 0;
>>   }
>>
>> @@ -287,6 +343,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
>>   	.ndo_fix_features = xenvif_fix_features,
>>   	.ndo_set_mac_address = eth_mac_addr,
>>   	.ndo_validate_addr   = eth_validate_addr,
>> +	.ndo_select_queue = select_queue,
>>   };
>>
>>   struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>> @@ -296,10 +353,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	struct net_device *dev;
>>   	struct xenvif *vif;
>>   	char name[IFNAMSIZ] = {};
>> -	int i;
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> @@ -309,24 +365,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>
>>   	vif = netdev_priv(dev);
>>
>> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
>> -				     MAX_GRANT_COPY_OPS);
>> -	if (vif->grant_copy_op == NULL) {
>> -		pr_warn("Could not allocate grant copy space for %s\n", name);
>> -		free_netdev(dev);
>> -		return ERR_PTR(-ENOMEM);
>> -	}
>> -
>>   	vif->domid  = domid;
>>   	vif->handle = handle;
>>   	vif->can_sg = 1;
>>   	vif->ip_csum = 1;
>>   	vif->dev = dev;
>>
>> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
>> -	vif->credit_usec  = 0UL;
>> -	init_timer(&vif->credit_timeout);
>> -	vif->credit_window_start = get_jiffies_64();
>> +	/* Start out with no queues */
>> +	vif->num_queues = 0;
>> +	vif->queues = NULL;
>>
>>   	dev->netdev_ops	= &xenvif_netdev_ops;
>>   	dev->hw_features = NETIF_F_SG |
>> @@ -337,16 +384,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>
>>   	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
>>
>> -	skb_queue_head_init(&vif->rx_queue);
>> -	skb_queue_head_init(&vif->tx_queue);
>> -
>> -	vif->pending_cons = 0;
>> -	vif->pending_prod = MAX_PENDING_REQS;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->pending_ring[i] = i;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->mmap_pages[i] = NULL;
>> -
>>   	/*
>>   	 * Initialise a dummy MAC address. We choose the numerically
>>   	 * largest non-broadcast address to prevent the address getting
>> @@ -356,8 +393,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>>   	dev->dev_addr[0] &= ~0x01;
>>
>> -	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
>> -
>>   	netif_carrier_off(dev);
>>
>>   	err = register_netdev(dev);
>> @@ -374,84 +409,110 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	return vif;
>>   }
>>
>> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>> +void xenvif_init_queue(struct xenvif_queue *queue)
>> +{
>> +	int i;
>> +
>> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
>> +	queue->credit_usec  = 0UL;
>> +	init_timer(&queue->credit_timeout);
>> +	queue->credit_window_start = get_jiffies_64();
>> +
>> +	skb_queue_head_init(&queue->rx_queue);
>> +	skb_queue_head_init(&queue->tx_queue);
>> +
>> +	queue->pending_cons = 0;
>> +	queue->pending_prod = MAX_PENDING_REQS;
>> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
>> +		queue->pending_ring[i] = i;
>> +		queue->mmap_pages[i] = NULL;
>> +	}
>> +
>> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> +			XENVIF_NAPI_WEIGHT);
>> +}
>> +
>> +void xenvif_carrier_on(struct xenvif *vif)
>> +{
>> +	rtnl_lock();
>> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
>> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
>> +	netdev_update_features(vif->dev);
>> +	netif_carrier_on(vif->dev);
>> +	if (netif_running(vif->dev))
>> +		xenvif_up(vif);
>> +	rtnl_unlock();
>> +}
>> +
>> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>>   		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>>   		   unsigned int rx_evtchn)
>>   {
>>   	struct task_struct *task;
>>   	int err = -ENOMEM;
>>
>> -	BUG_ON(vif->tx_irq);
>> -	BUG_ON(vif->task);
>> +	BUG_ON(queue->tx_irq);
>> +	BUG_ON(queue->task);
>>
>> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
>> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>>   	if (err < 0)
>>   		goto err;
>>
>>   	if (tx_evtchn == rx_evtchn) {
>>   		/* feature-split-event-channels == 0 */
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
>> -			vif->dev->name, vif);
>> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
>> +			queue->name, queue);
>>   		if (err < 0)
>>   			goto err_unmap;
>> -		vif->tx_irq = vif->rx_irq = err;
>> -		disable_irq(vif->tx_irq);
>> +		queue->tx_irq = queue->rx_irq = err;
>> +		disable_irq(queue->tx_irq);
>>   	} else {
>>   		/* feature-split-event-channels == 1 */
>> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
>> -			 "%s-tx", vif->dev->name);
>> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
>> +			 "%s-tx", queue->name);
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
>> -			vif->tx_irq_name, vif);
>> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
>> +			queue->tx_irq_name, queue);
>>   		if (err < 0)
>>   			goto err_unmap;
>> -		vif->tx_irq = err;
>> -		disable_irq(vif->tx_irq);
>> +		queue->tx_irq = err;
>> +		disable_irq(queue->tx_irq);
>>
>> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
>> -			 "%s-rx", vif->dev->name);
>> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
>> +			 "%s-rx", queue->name);
>>   		err = bind_interdomain_evtchn_to_irqhandler(
>> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
>> -			vif->rx_irq_name, vif);
>> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
>> +			queue->rx_irq_name, queue);
>>   		if (err < 0)
>>   			goto err_tx_unbind;
>> -		vif->rx_irq = err;
>> -		disable_irq(vif->rx_irq);
>> +		queue->rx_irq = err;
>> +		disable_irq(queue->rx_irq);
>>   	}
>>
>> -	init_waitqueue_head(&vif->wq);
>> +	init_waitqueue_head(&queue->wq);
>>   	task = kthread_create(xenvif_kthread,
>> -			      (void *)vif, "%s", vif->dev->name);
>> +			      (void *)queue, "%s", queue->name);
>>   	if (IS_ERR(task)) {
>> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
>> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>>   		err = PTR_ERR(task);
>>   		goto err_rx_unbind;
>>   	}
>>
>> -	vif->task = task;
>> -
>> -	rtnl_lock();
>> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
>> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
>> -	netdev_update_features(vif->dev);
>> -	netif_carrier_on(vif->dev);
>> -	if (netif_running(vif->dev))
>> -		xenvif_up(vif);
>> -	rtnl_unlock();
>> +	queue->task = task;
>>
>> -	wake_up_process(vif->task);
>> +	wake_up_process(queue->task);
>>
>>   	return 0;
>>
>>   err_rx_unbind:
>> -	unbind_from_irqhandler(vif->rx_irq, vif);
>> -	vif->rx_irq = 0;
>> +	unbind_from_irqhandler(queue->rx_irq, queue);
>> +	queue->rx_irq = 0;
>>   err_tx_unbind:
>> -	unbind_from_irqhandler(vif->tx_irq, vif);
>> -	vif->tx_irq = 0;
>> +	unbind_from_irqhandler(queue->tx_irq, queue);
>> +	queue->tx_irq = 0;
>>   err_unmap:
>> -	xenvif_unmap_frontend_rings(vif);
>> +	xenvif_unmap_frontend_rings(queue);
>>   err:
>>   	module_put(THIS_MODULE);
>>   	return err;
>> @@ -470,34 +531,51 @@ void xenvif_carrier_off(struct xenvif *vif)
>>
>>   void xenvif_disconnect(struct xenvif *vif)
>>   {
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>> +
>>   	if (netif_carrier_ok(vif->dev))
>>   		xenvif_carrier_off(vif);
>>
>> -	if (vif->task) {
>> -		kthread_stop(vif->task);
>> -		vif->task = NULL;
>> -	}
>> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		queue = &vif->queues[queue_index];
>>
>> -	if (vif->tx_irq) {
>> -		if (vif->tx_irq == vif->rx_irq)
>> -			unbind_from_irqhandler(vif->tx_irq, vif);
>> -		else {
>> -			unbind_from_irqhandler(vif->tx_irq, vif);
>> -			unbind_from_irqhandler(vif->rx_irq, vif);
>> +		if (queue->task) {
>> +			kthread_stop(queue->task);
>> +			queue->task = NULL;
>>   		}
>> -		vif->tx_irq = 0;
>> +
>> +		if (queue->tx_irq) {
>> +			if (queue->tx_irq == queue->rx_irq)
>> +				unbind_from_irqhandler(queue->tx_irq, queue);
>> +			else {
>> +				unbind_from_irqhandler(queue->tx_irq, queue);
>> +				unbind_from_irqhandler(queue->rx_irq, queue);
>> +			}
>> +			queue->tx_irq = 0;
>> +		}
>> +
>> +		xenvif_unmap_frontend_rings(queue);
>>   	}
>>
>> -	xenvif_unmap_frontend_rings(vif);
>> +
>>   }
>>
>>   void xenvif_free(struct xenvif *vif)
>>   {
>> -	netif_napi_del(&vif->napi);
>> +	struct xenvif_queue *queue = NULL;
>> +	unsigned int queue_index;
>>
>> -	unregister_netdev(vif->dev);
>> +	for(queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
>> +		netif_napi_del(&queue->napi);
>> +	}
>>
>> -	vfree(vif->grant_copy_op);
>> +	/* Free the array of queues */
>> +	vfree(vif->queues);
>> +	vif->num_queues = 0;
>> +	vif->queues = 0;
>> +
>> +	unregister_netdev(vif->dev);
>>   	free_netdev(vif->dev);
>>
>>   	module_put(THIS_MODULE);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 7842555..586e741 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -76,38 +76,38 @@ module_param(fatal_skb_slots, uint, 0444);
>>    * one or more merged tx requests, otherwise it is the continuation of
>>    * previous tx request.
>>    */
>> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
>> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>>   {
>> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>>   }
>>
>> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>>   			       u8 status);
>>
>> -static void make_tx_response(struct xenvif *vif,
>> +static void make_tx_response(struct xenvif_queue *queue,
>>   			     struct xen_netif_tx_request *txp,
>>   			     s8       st);
>>
>> -static inline int tx_work_todo(struct xenvif *vif);
>> -static inline int rx_work_todo(struct xenvif *vif);
>> +static inline int tx_work_todo(struct xenvif_queue *queue);
>> +static inline int rx_work_todo(struct xenvif_queue *queue);
>>
>> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>>   					     u16      id,
>>   					     s8       st,
>>   					     u16      offset,
>>   					     u16      size,
>>   					     u16      flags);
>>
>> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
>> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>>   				       u16 idx)
>>   {
>> -	return page_to_pfn(vif->mmap_pages[idx]);
>> +	return page_to_pfn(queue->mmap_pages[idx]);
>>   }
>>
>> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
>> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>>   					 u16 idx)
>>   {
>> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
>> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>>   }
>>
>>   /* This is a miniumum size for the linear area to avoid lots of
>> @@ -132,10 +132,10 @@ static inline pending_ring_idx_t pending_index(unsigned i)
>>   	return i & (MAX_PENDING_REQS-1);
>>   }
>>
>> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
>> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>>   {
>>   	return MAX_PENDING_REQS -
>> -		vif->pending_prod + vif->pending_cons;
>> +		queue->pending_prod + queue->pending_cons;
>>   }
>>
>>   static int max_required_rx_slots(struct xenvif *vif)
>> @@ -149,25 +149,25 @@ static int max_required_rx_slots(struct xenvif *vif)
>>   	return max;
>>   }
>>
>> -int xenvif_rx_ring_full(struct xenvif *vif)
>> +int xenvif_rx_ring_full(struct xenvif_queue *queue)
>>   {
>> -	RING_IDX peek   = vif->rx_req_cons_peek;
>> -	RING_IDX needed = max_required_rx_slots(vif);
>> +	RING_IDX peek   = queue->rx_req_cons_peek;
>> +	RING_IDX needed = max_required_rx_slots(queue->vif);
>>
>> -	return ((vif->rx.sring->req_prod - peek) < needed) ||
>> -	       ((vif->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>> +	return ((queue->rx.sring->req_prod - peek) < needed) ||
>> +	       ((queue->rx.rsp_prod_pvt + XEN_NETIF_RX_RING_SIZE - peek) < needed);
>>   }
>>
>> -int xenvif_must_stop_queue(struct xenvif *vif)
>> +int xenvif_must_stop_queue(struct xenvif_queue *queue)
>>   {
>> -	if (!xenvif_rx_ring_full(vif))
>> +	if (!xenvif_rx_ring_full(queue))
>>   		return 0;
>>
>> -	vif->rx.sring->req_event = vif->rx_req_cons_peek +
>> -		max_required_rx_slots(vif);
>> +	queue->rx.sring->req_event = queue->rx_req_cons_peek +
>> +		max_required_rx_slots(queue->vif);
>>   	mb(); /* request notification /then/ check the queue */
>>
>> -	return xenvif_rx_ring_full(vif);
>> +	return xenvif_rx_ring_full(queue);
>>   }
>>
>>   /*
>> @@ -306,13 +306,13 @@ struct netrx_pending_operations {
>>   	grant_ref_t copy_gref;
>>   };
>>
>> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>>   						 struct netrx_pending_operations *npo)
>>   {
>>   	struct xenvif_rx_meta *meta;
>>   	struct xen_netif_rx_request *req;
>>
>> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>
>>   	meta = npo->meta + npo->meta_prod++;
>>   	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
>> @@ -330,7 +330,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>>    * Set up the grant operations for this fragment. If it's a flipping
>>    * interface, we also set up the unmap request from here.
>>    */
>> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>>   				 struct netrx_pending_operations *npo,
>>   				 struct page *page, unsigned long size,
>>   				 unsigned long offset, int *head)
>> @@ -365,7 +365,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   			 */
>>   			BUG_ON(*head);
>>
>> -			meta = get_next_rx_buffer(vif, npo);
>> +			meta = get_next_rx_buffer(queue, npo);
>>   		}
>>
>>   		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
>> @@ -379,7 +379,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>>   		copy_gop->source.offset = offset;
>>
>> -		copy_gop->dest.domid = vif->domid;
>> +		copy_gop->dest.domid = queue->vif->domid;
>>   		copy_gop->dest.offset = npo->copy_off;
>>   		copy_gop->dest.u.ref = npo->copy_gref;
>>
>> @@ -404,8 +404,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>   		else
>>   			gso_type = XEN_NETIF_GSO_TYPE_NONE;
>>
>> -		if (*head && ((1 << gso_type) & vif->gso_mask))
>> -			vif->rx.req_cons++;
>> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
>> +			queue->rx.req_cons++;
>>
>>   		*head = 0; /* There must be something in this buffer now. */
>>
>> @@ -425,7 +425,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>>    * frontend-side LRO).
>>    */
>>   static int xenvif_gop_skb(struct sk_buff *skb,
>> -			  struct netrx_pending_operations *npo)
>> +			  struct netrx_pending_operations *npo,
>> +			  struct xenvif_queue *queue)
>>   {
>>   	struct xenvif *vif = netdev_priv(skb->dev);
>>   	int nr_frags = skb_shinfo(skb)->nr_frags;
>> @@ -453,7 +454,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>
>>   	/* Set up a GSO prefix descriptor, if necessary */
>>   	if ((1 << gso_type) & vif->gso_prefix_mask) {
>> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>   		meta = npo->meta + npo->meta_prod++;
>>   		meta->gso_type = gso_type;
>>   		meta->gso_size = gso_size;
>> @@ -461,7 +462,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>   		meta->id = req->id;
>>   	}
>>
>> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
>> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>>   	meta = npo->meta + npo->meta_prod++;
>>
>>   	if ((1 << gso_type) & vif->gso_mask) {
>> @@ -485,13 +486,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>>   		if (data + len > skb_tail_pointer(skb))
>>   			len = skb_tail_pointer(skb) - data;
>>
>> -		xenvif_gop_frag_copy(vif, skb, npo,
>> +		xenvif_gop_frag_copy(queue, skb, npo,
>>   				     virt_to_page(data), len, offset, &head);
>>   		data += len;
>>   	}
>>
>>   	for (i = 0; i < nr_frags; i++) {
>> -		xenvif_gop_frag_copy(vif, skb, npo,
>> +		xenvif_gop_frag_copy(queue, skb, npo,
>>   				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>>   				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>>   				     skb_shinfo(skb)->frags[i].page_offset,
>> @@ -527,7 +528,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
>>   	return status;
>>   }
>>
>> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>>   				      struct xenvif_rx_meta *meta,
>>   				      int nr_meta_slots)
>>   {
>> @@ -548,7 +549,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>>   			flags = XEN_NETRXF_more_data;
>>
>>   		offset = 0;
>> -		make_rx_response(vif, meta[i].id, status, offset,
>> +		make_rx_response(queue, meta[i].id, status, offset,
>>   				 meta[i].size, flags);
>>   	}
>>   }
>> @@ -557,12 +558,12 @@ struct skb_cb_overlay {
>>   	int meta_slots_used;
>>   };
>>
>> -static void xenvif_kick_thread(struct xenvif *vif)
>> +static void xenvif_kick_thread(struct xenvif_queue *queue)
>>   {
>> -	wake_up(&vif->wq);
>> +	wake_up(&queue->wq);
>>   }
>>
>> -void xenvif_rx_action(struct xenvif *vif)
>> +void xenvif_rx_action(struct xenvif_queue *queue)
>>   {
>>   	s8 status;
>>   	u16 flags;
>> @@ -578,20 +579,19 @@ void xenvif_rx_action(struct xenvif *vif)
>>   	int need_to_notify = 0;
>>
>>   	struct netrx_pending_operations npo = {
>> -		.copy  = vif->grant_copy_op,
>> -		.meta  = vif->meta,
>> +		.copy  = queue->grant_copy_op,
>> +		.meta  = queue->meta,
>>   	};
>>
>>   	skb_queue_head_init(&rxq);
>>
>>   	count = 0;
>>
>> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
>> -		vif = netdev_priv(skb->dev);
>> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>>   		nr_frags = skb_shinfo(skb)->nr_frags;
>>
>>   		sco = (struct skb_cb_overlay *)skb->cb;
>> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
>> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
>>
>>   		count += nr_frags + 1;
>>
>> @@ -603,28 +603,26 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			break;
>>   	}
>>
>> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
>> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
>>
>>   	if (!npo.copy_prod)
>>   		return;
>>
>>   	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
>> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
>> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
>>
>>   	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>>   		sco = (struct skb_cb_overlay *)skb->cb;
>>
>> -		vif = netdev_priv(skb->dev);
>> -
>> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
>> -		    vif->gso_prefix_mask) {
>> -			resp = RING_GET_RESPONSE(&vif->rx,
>> -						 vif->rx.rsp_prod_pvt++);
>> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
>> +		    queue->vif->gso_prefix_mask) {
>> +			resp = RING_GET_RESPONSE(&queue->rx,
>> +						 queue->rx.rsp_prod_pvt++);
>>
>>   			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
>>
>> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
>> -			resp->id = vif->meta[npo.meta_cons].id;
>> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
>> +			resp->id = queue->meta[npo.meta_cons].id;
>>   			resp->status = sco->meta_slots_used;
>>
>>   			npo.meta_cons++;
>> @@ -632,10 +630,10 @@ void xenvif_rx_action(struct xenvif *vif)
>>   		}
>>
>>
>> -		vif->dev->stats.tx_bytes += skb->len;
>> -		vif->dev->stats.tx_packets++;
>> +		queue->vif->dev->stats.tx_bytes += skb->len;
>> +		queue->vif->dev->stats.tx_packets++;
>>
>> -		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
>> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
>>
>>   		if (sco->meta_slots_used == 1)
>>   			flags = 0;
>> @@ -649,22 +647,22 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			flags |= XEN_NETRXF_data_validated;
>>
>>   		offset = 0;
>> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
>> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>>   					status, offset,
>> -					vif->meta[npo.meta_cons].size,
>> +					queue->meta[npo.meta_cons].size,
>>   					flags);
>>
>> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
>> -		    vif->gso_mask) {
>> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
>> +		    queue->vif->gso_mask) {
>>   			struct xen_netif_extra_info *gso =
>>   				(struct xen_netif_extra_info *)
>> -				RING_GET_RESPONSE(&vif->rx,
>> -						  vif->rx.rsp_prod_pvt++);
>> +				RING_GET_RESPONSE(&queue->rx,
>> +						  queue->rx.rsp_prod_pvt++);
>>
>>   			resp->flags |= XEN_NETRXF_extra_info;
>>
>> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
>> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
>> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
>> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>>   			gso->u.gso.pad = 0;
>>   			gso->u.gso.features = 0;
>>
>> @@ -672,47 +670,47 @@ void xenvif_rx_action(struct xenvif *vif)
>>   			gso->flags = 0;
>>   		}
>>
>> -		xenvif_add_frag_responses(vif, status,
>> -					  vif->meta + npo.meta_cons + 1,
>> +		xenvif_add_frag_responses(queue, status,
>> +					  queue->meta + npo.meta_cons + 1,
>>   					  sco->meta_slots_used);
>>
>> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
>> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
>>
>>   		if (ret)
>>   			need_to_notify = 1;
>>
>> -		xenvif_notify_tx_completion(vif);
>> +		xenvif_notify_tx_completion(queue);
>>
>>   		npo.meta_cons += sco->meta_slots_used;
>>   		dev_kfree_skb(skb);
>>   	}
>>
>>   	if (need_to_notify)
>> -		notify_remote_via_irq(vif->rx_irq);
>> +		notify_remote_via_irq(queue->rx_irq);
>>
>>   	/* More work to do? */
>> -	if (!skb_queue_empty(&vif->rx_queue))
>> -		xenvif_kick_thread(vif);
>> +	if (!skb_queue_empty(&queue->rx_queue))
>> +		xenvif_kick_thread(queue);
>>   }
>>
>> -void xenvif_queue_tx_skb(struct xenvif *vif, struct sk_buff *skb)
>> +void xenvif_queue_tx_skb(struct xenvif_queue *queue, struct sk_buff *skb)
>>   {
>> -	skb_queue_tail(&vif->rx_queue, skb);
>> +	skb_queue_tail(&queue->rx_queue, skb);
>>
>> -	xenvif_kick_thread(vif);
>> +	xenvif_kick_thread(queue);
>>   }
>>
>> -void xenvif_check_rx_xenvif(struct xenvif *vif)
>> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>>   {
>>   	int more_to_do;
>>
>> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
>> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>>
>>   	if (more_to_do)
>> -		napi_schedule(&vif->napi);
>> +		napi_schedule(&queue->napi);
>>   }
>>
>> -static void tx_add_credit(struct xenvif *vif)
>> +static void tx_add_credit(struct xenvif_queue *queue)
>>   {
>>   	unsigned long max_burst, max_credit;
>>
>> @@ -720,37 +718,37 @@ static void tx_add_credit(struct xenvif *vif)
>>   	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>>   	 * Otherwise the interface can seize up due to insufficient credit.
>>   	 */
>> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
>> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>>   	max_burst = min(max_burst, 131072UL);
>> -	max_burst = max(max_burst, vif->credit_bytes);
>> +	max_burst = max(max_burst, queue->credit_bytes);
>>
>>   	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
>> -	max_credit = vif->remaining_credit + vif->credit_bytes;
>> -	if (max_credit < vif->remaining_credit)
>> +	max_credit = queue->remaining_credit + queue->credit_bytes;
>> +	if (max_credit < queue->remaining_credit)
>>   		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
>>
>> -	vif->remaining_credit = min(max_credit, max_burst);
>> +	queue->remaining_credit = min(max_credit, max_burst);
>>   }
>>
>>   static void tx_credit_callback(unsigned long data)
>>   {
>> -	struct xenvif *vif = (struct xenvif *)data;
>> -	tx_add_credit(vif);
>> -	xenvif_check_rx_xenvif(vif);
>> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
>> +	tx_add_credit(queue);
>> +	xenvif_check_rx_xenvif(queue);
>>   }
>>
>> -static void xenvif_tx_err(struct xenvif *vif,
>> +static void xenvif_tx_err(struct xenvif_queue *queue,
>>   			  struct xen_netif_tx_request *txp, RING_IDX end)
>>   {
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>
>>   	do {
>> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
>> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>>   		if (cons == end)
>>   			break;
>> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
>> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>>   	} while (1);
>> -	vif->tx.req_cons = cons;
>> +	queue->tx.req_cons = cons;
>>   }
>>
>>   static void xenvif_fatal_tx_err(struct xenvif *vif)
>> @@ -759,12 +757,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>>   	xenvif_carrier_off(vif);
>>   }
>>
>> -static int xenvif_count_requests(struct xenvif *vif,
>> +static int xenvif_count_requests(struct xenvif_queue *queue,
>>   				 struct xen_netif_tx_request *first,
>>   				 struct xen_netif_tx_request *txp,
>>   				 int work_to_do)
>>   {
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>   	int slots = 0;
>>   	int drop_err = 0;
>>   	int more_data;
>> @@ -776,10 +774,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		struct xen_netif_tx_request dropped_tx = { 0 };
>>
>>   		if (slots >= work_to_do) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Asked for %d slots but exceeds this limit\n",
>>   				   work_to_do);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -ENODATA;
>>   		}
>>
>> @@ -787,10 +785,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 * considered malicious.
>>   		 */
>>   		if (unlikely(slots >= fatal_skb_slots)) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Malicious frontend using %d slots, threshold %u\n",
>>   				   slots, fatal_skb_slots);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -E2BIG;
>>   		}
>>
>> @@ -803,7 +801,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 */
>>   		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>>   			if (net_ratelimit())
>> -				netdev_dbg(vif->dev,
>> +				netdev_dbg(queue->vif->dev,
>>   					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>>   					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>>   			drop_err = -E2BIG;
>> @@ -812,7 +810,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		if (drop_err)
>>   			txp = &dropped_tx;
>>
>> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
>> +		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>>   		       sizeof(*txp));
>>
>>   		/* If the guest submitted a frame >= 64 KiB then
>> @@ -826,7 +824,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		 */
>>   		if (!drop_err && txp->size > first->size) {
>>   			if (net_ratelimit())
>> -				netdev_dbg(vif->dev,
>> +				netdev_dbg(queue->vif->dev,
>>   					   "Invalid tx request, slot size %u > remaining size %u\n",
>>   					   txp->size, first->size);
>>   			drop_err = -EIO;
>> @@ -836,9 +834,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   		slots++;
>>
>>   		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
>> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>>   				 txp->offset, txp->size);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EINVAL;
>>   		}
>>
>> @@ -850,14 +848,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>>   	} while (more_data);
>>
>>   	if (drop_err) {
>> -		xenvif_tx_err(vif, first, cons + slots);
>> +		xenvif_tx_err(queue, first, cons + slots);
>>   		return drop_err;
>>   	}
>>
>>   	return slots;
>>   }
>>
>> -static struct page *xenvif_alloc_page(struct xenvif *vif,
>> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>>   				      u16 pending_idx)
>>   {
>>   	struct page *page;
>> @@ -865,12 +863,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>>   	if (!page)
>>   		return NULL;
>> -	vif->mmap_pages[pending_idx] = page;
>> +	queue->mmap_pages[pending_idx] = page;
>>
>>   	return page;
>>   }
>>
>> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>>   					       struct sk_buff *skb,
>>   					       struct xen_netif_tx_request *txp,
>>   					       struct gnttab_copy *gop)
>> @@ -901,7 +899,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>>   	     shinfo->nr_frags++) {
>>   		struct pending_tx_info *pending_tx_info =
>> -			vif->pending_tx_info;
>> +			queue->pending_tx_info;
>>
>>   		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>>   		if (!page)
>> @@ -913,7 +911,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   			gop->flags = GNTCOPY_source_gref;
>>
>>   			gop->source.u.ref = txp->gref;
>> -			gop->source.domid = vif->domid;
>> +			gop->source.domid = queue->vif->domid;
>>   			gop->source.offset = txp->offset;
>>
>>   			gop->dest.domid = DOMID_SELF;
>> @@ -938,9 +936,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   				gop->len = txp->size;
>>   				dst_offset += gop->len;
>>
>> -				index = pending_index(vif->pending_cons++);
>> +				index = pending_index(queue->pending_cons++);
>>
>> -				pending_idx = vif->pending_ring[index];
>> +				pending_idx = queue->pending_ring[index];
>>
>>   				memcpy(&pending_tx_info[pending_idx].req, txp,
>>   				       sizeof(*txp));
>> @@ -949,7 +947,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   				 * fields for head tx req will be set
>>   				 * to correct values after the loop.
>>   				 */
>> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
>> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>>   				pending_tx_info[pending_idx].head =
>>   					INVALID_PENDING_RING_IDX;
>>
>> @@ -969,7 +967,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   		first->req.offset = 0;
>>   		first->req.size = dst_offset;
>>   		first->head = start_idx;
>> -		vif->mmap_pages[head_idx] = page;
>> +		queue->mmap_pages[head_idx] = page;
>>   		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>>   	}
>>
>> @@ -979,18 +977,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   err:
>>   	/* Unwind, freeing all pages and sending error responses. */
>>   	while (shinfo->nr_frags-- > start) {
>> -		xenvif_idx_release(vif,
>> +		xenvif_idx_release(queue,
>>   				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>>   				XEN_NETIF_RSP_ERROR);
>>   	}
>>   	/* The head too, if necessary. */
>>   	if (start)
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   	return NULL;
>>   }
>>
>> -static int xenvif_tx_check_gop(struct xenvif *vif,
>> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>>   			       struct sk_buff *skb,
>>   			       struct gnttab_copy **gopp)
>>   {
>> @@ -1005,7 +1003,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   	/* Check status of header. */
>>   	err = gop->status;
>>   	if (unlikely(err))
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   	/* Skip first skb fragment if it is on same page as header fragment. */
>>   	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
>> @@ -1015,7 +1013,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   		pending_ring_idx_t head;
>>
>>   		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
>> -		tx_info = &vif->pending_tx_info[pending_idx];
>> +		tx_info = &queue->pending_tx_info[pending_idx];
>>   		head = tx_info->head;
>>
>>   		/* Check error status: if okay then remember grant handle. */
>> @@ -1023,19 +1021,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   			newerr = (++gop)->status;
>>   			if (newerr)
>>   				break;
>> -			peek = vif->pending_ring[pending_index(++head)];
>> -		} while (!pending_tx_is_head(vif, peek));
>> +			peek = queue->pending_ring[pending_index(++head)];
>> +		} while (!pending_tx_is_head(queue, peek));
>>
>>   		if (likely(!newerr)) {
>>   			/* Had a previous error? Invalidate this fragment. */
>>   			if (unlikely(err))
>> -				xenvif_idx_release(vif, pending_idx,
>> +				xenvif_idx_release(queue, pending_idx,
>>   						   XEN_NETIF_RSP_OKAY);
>>   			continue;
>>   		}
>>
>>   		/* Error on this fragment: respond to client with an error. */
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
>>
>>   		/* Not the first error? Preceding frags already invalidated. */
>>   		if (err)
>> @@ -1043,10 +1041,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>
>>   		/* First error: invalidate header and preceding fragments. */
>>   		pending_idx = *((u16 *)skb->data);
>> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>>   		for (j = start; j < i; j++) {
>> 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
>> -			xenvif_idx_release(vif, pending_idx,
>> +			xenvif_idx_release(queue, pending_idx,
>>   					   XEN_NETIF_RSP_OKAY);
>>   		}
>>
>> @@ -1058,7 +1056,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   	return err;
>>   }
>>
>> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
>> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>>   {
>>   	struct skb_shared_info *shinfo = skb_shinfo(skb);
>>   	int nr_frags = shinfo->nr_frags;
>> @@ -1072,46 +1070,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
>>
>>   		pending_idx = frag_get_pending_idx(frag);
>>
>> -		txp = &vif->pending_tx_info[pending_idx].req;
>> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
>> +		txp = &queue->pending_tx_info[pending_idx].req;
>> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>>   		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>>   		skb->len += txp->size;
>>   		skb->data_len += txp->size;
>>   		skb->truesize += txp->size;
>>
>>   		/* Take an extra reference to offset xenvif_idx_release */
>> -		get_page(vif->mmap_pages[pending_idx]);
>> -		xenvif_idx_release(vif, pending_idx,
>> XEN_NETIF_RSP_OKAY);
>> +		get_page(queue->mmap_pages[pending_idx]);
>> +		xenvif_idx_release(queue, pending_idx,
>> XEN_NETIF_RSP_OKAY);
>>   	}
>>   }
>>
>> -static int xenvif_get_extras(struct xenvif *vif,
>> +static int xenvif_get_extras(struct xenvif_queue *queue,
>>   				struct xen_netif_extra_info *extras,
>>   				int work_to_do)
>>   {
>>   	struct xen_netif_extra_info extra;
>> -	RING_IDX cons = vif->tx.req_cons;
>> +	RING_IDX cons = queue->tx.req_cons;
>>
>>   	do {
>>   		if (unlikely(work_to_do-- <= 0)) {
>> -			netdev_err(vif->dev, "Missing extra info\n");
>> -			xenvif_fatal_tx_err(vif);
>> +			netdev_err(queue->vif->dev, "Missing extra info\n");
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EBADR;
>>   		}
>>
>> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
>> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>>   		       sizeof(extra));
>>   		if (unlikely(!extra.type ||
>>   			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>> -			vif->tx.req_cons = ++cons;
>> -			netdev_err(vif->dev,
>> +			queue->tx.req_cons = ++cons;
>> +			netdev_err(queue->vif->dev,
>>   				   "Invalid extra type: %d\n", extra.type);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			return -EINVAL;
>>   		}
>>
>>   		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
>> -		vif->tx.req_cons = ++cons;
>> +		queue->tx.req_cons = ++cons;
>>   	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
>>
>>   	return work_to_do;
>> @@ -1424,31 +1422,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>>   	return err;
>>   }
>>
>> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>>   {
>>   	u64 now = get_jiffies_64();
>> -	u64 next_credit = vif->credit_window_start +
>> -		msecs_to_jiffies(vif->credit_usec / 1000);
>> +	u64 next_credit = queue->credit_window_start +
>> +		msecs_to_jiffies(queue->credit_usec / 1000);
>>
>>   	/* Timer could already be pending in rare cases. */
>> -	if (timer_pending(&vif->credit_timeout))
>> +	if (timer_pending(&queue->credit_timeout))
>>   		return true;
>>
>>   	/* Passed the point where we can replenish credit? */
>>   	if (time_after_eq64(now, next_credit)) {
>> -		vif->credit_window_start = now;
>> -		tx_add_credit(vif);
>> +		queue->credit_window_start = now;
>> +		tx_add_credit(queue);
>>   	}
>>
>>   	/* Still too big to send right now? Set a callback. */
>> -	if (size > vif->remaining_credit) {
>> -		vif->credit_timeout.data     =
>> -			(unsigned long)vif;
>> -		vif->credit_timeout.function =
>> +	if (size > queue->remaining_credit) {
>> +		queue->credit_timeout.data     =
>> +			(unsigned long)queue;
>> +		queue->credit_timeout.function =
>>   			tx_credit_callback;
>> -		mod_timer(&vif->credit_timeout,
>> +		mod_timer(&queue->credit_timeout,
>>   			  next_credit);
>> -		vif->credit_window_start = next_credit;
>> +		queue->credit_window_start = next_credit;
>>
>>   		return true;
>>   	}
>> @@ -1456,15 +1454,15 @@ static bool tx_credit_exceeded(struct xenvif *vif,
>> unsigned size)
>>   	return false;
>>   }
>>
>> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>>   {
>> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
>> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>>   	struct sk_buff *skb;
>>   	int ret;
>>
>> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
>> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>>   		< MAX_PENDING_REQS) &&
>> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
>> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>>   		struct xen_netif_tx_request txreq;
>>   		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>>   		struct page *page;
>> @@ -1475,69 +1473,69 @@ static unsigned xenvif_tx_build_gops(struct
>> xenvif *vif, int budget)
>>   		unsigned int data_len;
>>   		pending_ring_idx_t index;
>>
>> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
>> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>>   		    XEN_NETIF_TX_RING_SIZE) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "Impossible number of requests. "
>>   				   "req_prod %d, req_cons %d, size %ld\n",
>> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
>> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>>   				   XEN_NETIF_TX_RING_SIZE);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			continue;
>>   		}
>>
>> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
>> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>>   		if (!work_to_do)
>>   			break;
>>
>> -		idx = vif->tx.req_cons;
>> +		idx = queue->tx.req_cons;
>>   		rmb(); /* Ensure that we see the request before we copy it.
>> */
>> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
>> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
>>
>>   		/* Credit-based scheduling. */
>> -		if (txreq.size > vif->remaining_credit &&
>> -		    tx_credit_exceeded(vif, txreq.size))
>> +		if (txreq.size > queue->remaining_credit &&
>> +		    tx_credit_exceeded(queue, txreq.size))
>>   			break;
>>
>> -		vif->remaining_credit -= txreq.size;
>> +		queue->remaining_credit -= txreq.size;
>>
>>   		work_to_do--;
>> -		vif->tx.req_cons = ++idx;
>> +		queue->tx.req_cons = ++idx;
>>
>>   		memset(extras, 0, sizeof(extras));
>>   		if (txreq.flags & XEN_NETTXF_extra_info) {
>> -			work_to_do = xenvif_get_extras(vif, extras,
>> +			work_to_do = xenvif_get_extras(queue, extras,
>>   						       work_to_do);
>> -			idx = vif->tx.req_cons;
>> +			idx = queue->tx.req_cons;
>>   			if (unlikely(work_to_do < 0))
>>   				break;
>>   		}
>>
>> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
>> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>>   		if (unlikely(ret < 0))
>>   			break;
>>
>>   		idx += ret;
>>
>>   		if (unlikely(txreq.size < ETH_HLEN)) {
>> -			netdev_dbg(vif->dev,
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Bad packet size: %d\n", txreq.size);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>>   		/* No crossing a page as the payload mustn't fragment. */
>>   		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
>> -			netdev_err(vif->dev,
>> +			netdev_err(queue->vif->dev,
>>   				   "txreq.offset: %x, size: %u, end: %lu\n",
>>   				   txreq.offset, txreq.size,
>>   				   (txreq.offset&~PAGE_MASK) + txreq.size);
>> -			xenvif_fatal_tx_err(vif);
>> +			xenvif_fatal_tx_err(queue->vif);
>>   			break;
>>   		}
>>
>> -		index = pending_index(vif->pending_cons);
>> -		pending_idx = vif->pending_ring[index];
>> +		index = pending_index(queue->pending_cons);
>> +		pending_idx = queue->pending_ring[index];
>>
>>   		data_len = (txreq.size > PKT_PROT_LEN &&
>>   			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
>> @@ -1546,9 +1544,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif
>> *vif, int budget)
>>   		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>>   				GFP_ATOMIC | __GFP_NOWARN);
>>   		if (unlikely(skb == NULL)) {
>> -			netdev_dbg(vif->dev,
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Can't allocate a skb in start_xmit.\n");
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>> @@ -1559,7 +1557,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif
>> *vif, int budget)
>>   			struct xen_netif_extra_info *gso;
>>   			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
>>
>> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
>> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>>   				/* Failure in xenvif_set_skb_gso is fatal. */
>>   				kfree_skb(skb);
>>   				break;
>> @@ -1567,15 +1565,15 @@ static unsigned xenvif_tx_build_gops(struct
>> xenvif *vif, int budget)
>>   		}
>>
>>   		/* XXX could copy straight to head */
>> -		page = xenvif_alloc_page(vif, pending_idx);
>> +		page = xenvif_alloc_page(queue, pending_idx);
>>   		if (!page) {
>>   			kfree_skb(skb);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>
>>   		gop->source.u.ref = txreq.gref;
>> -		gop->source.domid = vif->domid;
>> +		gop->source.domid = queue->vif->domid;
>>   		gop->source.offset = txreq.offset;
>>
>>   		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
>> @@ -1587,9 +1585,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif
>> *vif, int budget)
>>
>>   		gop++;
>>
>> -		memcpy(&vif->pending_tx_info[pending_idx].req,
>> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>>   		       &txreq, sizeof(txreq));
>> -		vif->pending_tx_info[pending_idx].head = index;
>> +		queue->pending_tx_info[pending_idx].head = index;
>>   		*((u16 *)skb->data) = pending_idx;
>>
>>   		__skb_put(skb, data_len);
>> @@ -1604,45 +1602,45 @@ static unsigned xenvif_tx_build_gops(struct
>> xenvif *vif, int budget)
>>   					     INVALID_PENDING_IDX);
>>   		}
>>
>> -		vif->pending_cons++;
>> +		queue->pending_cons++;
>>
>> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
>> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>>   		if (request_gop == NULL) {
>>   			kfree_skb(skb);
>> -			xenvif_tx_err(vif, &txreq, idx);
>> +			xenvif_tx_err(queue, &txreq, idx);
>>   			break;
>>   		}
>>   		gop = request_gop;
>>
>> -		__skb_queue_tail(&vif->tx_queue, skb);
>> +		__skb_queue_tail(&queue->tx_queue, skb);
>>
>> -		vif->tx.req_cons = idx;
>> +		queue->tx.req_cons = idx;
>>
>> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
>> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>>   			break;
>>   	}
>>
>> -	return gop - vif->tx_copy_ops;
>> +	return gop - queue->tx_copy_ops;
>>   }
>>
>>
>> -static int xenvif_tx_submit(struct xenvif *vif)
>> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>>   {
>> -	struct gnttab_copy *gop = vif->tx_copy_ops;
>> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>>   	struct sk_buff *skb;
>>   	int work_done = 0;
>>
>> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
>> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>>   		struct xen_netif_tx_request *txp;
>>   		u16 pending_idx;
>>   		unsigned data_len;
>>
>>   		pending_idx = *((u16 *)skb->data);
>> -		txp = &vif->pending_tx_info[pending_idx].req;
>> +		txp = &queue->pending_tx_info[pending_idx].req;
>>
>>   		/* Check the remap error code. */
>> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
>> -			netdev_dbg(vif->dev, "netback grant failed.\n");
>> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
>> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>>   			skb_shinfo(skb)->nr_frags = 0;
>>   			kfree_skb(skb);
>>   			continue;
>> @@ -1650,7 +1648,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>
>>   		data_len = skb->len;
>>   		memcpy(skb->data,
>> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
>> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>>   		       data_len);
>>   		if (data_len < txp->size) {
>>   			/* Append the packet payload as a fragment. */
>> @@ -1658,7 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   			txp->size -= data_len;
>>   		} else {
>>   			/* Schedule a response immediately. */
>> -			xenvif_idx_release(vif, pending_idx,
>> +			xenvif_idx_release(queue, pending_idx,
>>   					   XEN_NETIF_RSP_OKAY);
>>   		}
>>
>> @@ -1667,19 +1665,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   		else if (txp->flags & XEN_NETTXF_data_validated)
>>   			skb->ip_summed = CHECKSUM_UNNECESSARY;
>>
>> -		xenvif_fill_frags(vif, skb);
>> +		xenvif_fill_frags(queue, skb);
>>
>>   		if (skb_is_nonlinear(skb) && skb_headlen(skb) <
>> PKT_PROT_LEN) {
>>   			int target = min_t(int, skb->len, PKT_PROT_LEN);
>>   			__pskb_pull_tail(skb, target - skb_headlen(skb));
>>   		}
>>
>> -		skb->dev      = vif->dev;
>> +		skb->dev      = queue->vif->dev;
>>   		skb->protocol = eth_type_trans(skb, skb->dev);
>>   		skb_reset_network_header(skb);
>>
>> -		if (checksum_setup(vif, skb)) {
>> -			netdev_dbg(vif->dev,
>> +		if (checksum_setup(queue->vif, skb)) {
>> +			netdev_dbg(queue->vif->dev,
>>   				   "Can't setup checksum in net_tx_action\n");
>>   			kfree_skb(skb);
>>   			continue;
>> @@ -1687,8 +1685,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>
>>   		skb_probe_transport_header(skb, 0);
>>
>> -		vif->dev->stats.rx_bytes += skb->len;
>> -		vif->dev->stats.rx_packets++;
>> +		queue->vif->dev->stats.rx_bytes += skb->len;
>> +		queue->vif->dev->stats.rx_packets++;
>
>>
>>   		work_done++;
>>
>> @@ -1699,53 +1697,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   }
>>
>>   /* Called after netfront has transmitted */
>> -int xenvif_tx_action(struct xenvif *vif, int budget)
>> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>>   {
>>   	unsigned nr_gops;
>>   	int work_done;
>>
>> -	if (unlikely(!tx_work_todo(vif)))
>> +	if (unlikely(!tx_work_todo(queue)))
>>   		return 0;
>>
>> -	nr_gops = xenvif_tx_build_gops(vif, budget);
>> +	nr_gops = xenvif_tx_build_gops(queue, budget);
>>
>>   	if (nr_gops == 0)
>>   		return 0;
>>
>> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
>> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
>>
>> -	work_done = xenvif_tx_submit(vif);
>> +	work_done = xenvif_tx_submit(queue);
>>
>>   	return work_done;
>>   }
>>
>> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>>   			       u8 status)
>>   {
>>   	struct pending_tx_info *pending_tx_info;
>>   	pending_ring_idx_t head;
>>   	u16 peek; /* peek into next tx request */
>>
>> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
>> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
>>
>>   	/* Already complete? */
>> -	if (vif->mmap_pages[pending_idx] == NULL)
>> +	if (queue->mmap_pages[pending_idx] == NULL)
>>   		return;
>>
>> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
>> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
>>
>>   	head = pending_tx_info->head;
>>
>> -	BUG_ON(!pending_tx_is_head(vif, head));
>> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
>> +	BUG_ON(!pending_tx_is_head(queue, head));
>> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
>>
>>   	do {
>>   		pending_ring_idx_t index;
>>   		pending_ring_idx_t idx = pending_index(head);
>> -		u16 info_idx = vif->pending_ring[idx];
>> +		u16 info_idx = queue->pending_ring[idx];
>>
>> -		pending_tx_info = &vif->pending_tx_info[info_idx];
>> -		make_tx_response(vif, &pending_tx_info->req, status);
>> +		pending_tx_info = &queue->pending_tx_info[info_idx];
>> +		make_tx_response(queue, &pending_tx_info->req, status);
>>
>>   		/* Setting any number other than
>>   		 * INVALID_PENDING_RING_IDX indicates this slot is
>> @@ -1753,50 +1751,50 @@ static void xenvif_idx_release(struct xenvif *vif,
>> u16 pending_idx,
>>   		 */
>>   		pending_tx_info->head = 0;
>>
>> -		index = pending_index(vif->pending_prod++);
>> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
>> +		index = pending_index(queue->pending_prod++);
>> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
>>
>> -		peek = vif->pending_ring[pending_index(++head)];
>> +		peek = queue->pending_ring[pending_index(++head)];
>>
>> -	} while (!pending_tx_is_head(vif, peek));
>> +	} while (!pending_tx_is_head(queue, peek));
>>
>> -	put_page(vif->mmap_pages[pending_idx]);
>> -	vif->mmap_pages[pending_idx] = NULL;
>> +	put_page(queue->mmap_pages[pending_idx]);
>> +	queue->mmap_pages[pending_idx] = NULL;
>>   }
>>
>>
>> -static void make_tx_response(struct xenvif *vif,
>> +static void make_tx_response(struct xenvif_queue *queue,
>>   			     struct xen_netif_tx_request *txp,
>>   			     s8       st)
>>   {
>> -	RING_IDX i = vif->tx.rsp_prod_pvt;
>> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>>   	struct xen_netif_tx_response *resp;
>>   	int notify;
>>
>> -	resp = RING_GET_RESPONSE(&vif->tx, i);
>> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>>   	resp->id     = txp->id;
>>   	resp->status = st;
>>
>>   	if (txp->flags & XEN_NETTXF_extra_info)
>> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
>> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
>>
>> -	vif->tx.rsp_prod_pvt = ++i;
>> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
>> +	queue->tx.rsp_prod_pvt = ++i;
>> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>>   	if (notify)
>> -		notify_remote_via_irq(vif->tx_irq);
>> +		notify_remote_via_irq(queue->tx_irq);
>>   }
>>
>> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>>   					     u16      id,
>>   					     s8       st,
>>   					     u16      offset,
>>   					     u16      size,
>>   					     u16      flags)
>>   {
>> -	RING_IDX i = vif->rx.rsp_prod_pvt;
>> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>>   	struct xen_netif_rx_response *resp;
>>
>> -	resp = RING_GET_RESPONSE(&vif->rx, i);
>> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>>   	resp->offset     = offset;
>>   	resp->flags      = flags;
>>   	resp->id         = id;
>> @@ -1804,38 +1802,38 @@ static struct xen_netif_rx_response
>> *make_rx_response(struct xenvif *vif,
>>   	if (st < 0)
>>   		resp->status = (s16)st;
>>
>> -	vif->rx.rsp_prod_pvt = ++i;
>> +	queue->rx.rsp_prod_pvt = ++i;
>>
>>   	return resp;
>>   }
>>
>> -static inline int rx_work_todo(struct xenvif *vif)
>> +static inline int rx_work_todo(struct xenvif_queue *queue)
>>   {
>> -	return !skb_queue_empty(&vif->rx_queue);
>> +	return !skb_queue_empty(&queue->rx_queue);
>>   }
>>
>> -static inline int tx_work_todo(struct xenvif *vif)
>> +static inline int tx_work_todo(struct xenvif_queue *queue)
>>   {
>>
>> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
>> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
>> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
>> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>>   	     < MAX_PENDING_REQS))
>>   		return 1;
>>
>>   	return 0;
>>   }
>>
>> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
>> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>>   {
>> -	if (vif->tx.sring)
>> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
>> -					vif->tx.sring);
>> -	if (vif->rx.sring)
>> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
>> -					vif->rx.sring);
>> +	if (queue->tx.sring)
>> +	xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
>> +					queue->tx.sring);
>> +	if (queue->rx.sring)
>> +	xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
>> +					queue->rx.sring);
>>   }
>>
>> -int xenvif_map_frontend_rings(struct xenvif *vif,
>> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>>   			      grant_ref_t tx_ring_ref,
>>   			      grant_ref_t rx_ring_ref)
>>   {
>> @@ -1845,44 +1843,44 @@ int xenvif_map_frontend_rings(struct xenvif
>> *vif,
>>
>>   	int err = -ENOMEM;
>>
>> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
>> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>>   				     tx_ring_ref, &addr);
>>   	if (err)
>>   		goto err;
>>
>>   	txs = (struct xen_netif_tx_sring *)addr;
>> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
>> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
>>
>> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
>> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>>   				     rx_ring_ref, &addr);
>>   	if (err)
>>   		goto err;
>>
>>   	rxs = (struct xen_netif_rx_sring *)addr;
>> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
>> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
>>
>> -	vif->rx_req_cons_peek = 0;
>> +	queue->rx_req_cons_peek = 0;
>>
>>   	return 0;
>>
>>   err:
>> -	xenvif_unmap_frontend_rings(vif);
>> +	xenvif_unmap_frontend_rings(queue);
>>   	return err;
>>   }
>>
>>   int xenvif_kthread(void *data)
>>   {
>> -	struct xenvif *vif = data;
>> +	struct xenvif_queue *queue = data;
>>
>>   	while (!kthread_should_stop()) {
>> -		wait_event_interruptible(vif->wq,
>> -					 rx_work_todo(vif) ||
>> +		wait_event_interruptible(queue->wq,
>> +					 rx_work_todo(queue) ||
>>   					 kthread_should_stop());
>>   		if (kthread_should_stop())
>>   			break;
>>
>> -		if (rx_work_todo(vif))
>> -			xenvif_rx_action(vif);
>> +		if (rx_work_todo(queue))
>> +			xenvif_rx_action(queue);
>>
>>   		cond_resched();
>>   	}
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index f035899..c3332e2 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -20,6 +20,7 @@
>>   */
>>
>>   #include "common.h"
>> +#include <linux/vmalloc.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -35,8 +36,9 @@ struct backend_info {
>>   	u8 have_hotplug_status_watch:1;
>>   };
>>
>> -static int connect_rings(struct backend_info *);
>> -static void connect(struct backend_info *);
>> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
>> +static void connect(struct backend_info *be);
>> +static int read_xenbus_vif_flags(struct backend_info *be);
>>   static void backend_create_xenvif(struct backend_info *be);
>>   static void unregister_hotplug_status_watch(struct backend_info *be);
>>   static void set_backend_state(struct backend_info *be,
>> @@ -486,10 +488,9 @@ static void connect(struct backend_info *be)
>>   {
>>   	int err;
>>   	struct xenbus_device *dev = be->dev;
>> -
>> -	err = connect_rings(be);
>> -	if (err)
>> -		return;
>> +	unsigned long credit_bytes, credit_usec;
>> +	unsigned int queue_index;
>> +	struct xenvif_queue *queue;
>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -497,9 +498,31 @@ static void connect(struct backend_info *be)
>>   		return;
>>   	}
>>
>> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
>> -			  &be->vif->credit_usec);
>> -	be->vif->remaining_credit = be->vif->credit_bytes;
>> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>> +	read_xenbus_vif_flags(be);
>> +
>> +	be->vif->num_queues = 1;
>> +	be->vif->queues = vzalloc(be->vif->num_queues *
>> +			sizeof(struct xenvif_queue));
>> +
>> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index)
>> +	{
>> +		queue = &be->vif->queues[queue_index];
>> +		queue->vif = be->vif;
>> +		queue->number = queue_index;
>> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
>> +				be->vif->dev->name, queue->number);
>> +
>> +		xenvif_init_queue(queue);
>> +
>> +		queue->remaining_credit = credit_bytes;
>> +
>> +		err = connect_rings(be, queue);
>> +		if (err)
>> +			goto err;
>> +	}
>> +
>> +	xenvif_carrier_on(be->vif);
>>
>>   	unregister_hotplug_status_watch(be);
>>   	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
>> @@ -508,18 +531,24 @@ static void connect(struct backend_info *be)
>>   	if (!err)
>>   		be->have_hotplug_status_watch = 1;
>>
>> -	netif_wake_queue(be->vif->dev);
>> +	netif_tx_wake_all_queues(be->vif->dev);
>> +
>> +	return;
>> +
>> +err:
>> +	vfree(be->vif->queues);
>> +	be->vif->queues = NULL;
>> +	be->vif->num_queues = 0;
>> +	return;
>>   }
>>
>>
>> -static int connect_rings(struct backend_info *be)
>> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   {
>> -	struct xenvif *vif = be->vif;
>>   	struct xenbus_device *dev = be->dev;
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
>> +	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> -	int val;
>>
>>   	err = xenbus_gather(XBT_NIL, dev->otherend,
>>   			    "tx-ring-ref", "%lu", &tx_ring_ref,
>> @@ -547,6 +576,27 @@ static int connect_rings(struct backend_info *be)
>>   		rx_evtchn = tx_evtchn;
>>   	}
>>
>> +	/* Map the shared frame, irq etc. */
>> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
>> +			     tx_evtchn, rx_evtchn);
>> +	if (err) {
>> +		xenbus_dev_fatal(dev, err,
>> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>> +				 tx_ring_ref, rx_ring_ref,
>> +				 tx_evtchn, rx_evtchn);
>> +		return err;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int read_xenbus_vif_flags(struct backend_info *be)
>> +{
>> +	struct xenvif *vif = be->vif;
>> +	struct xenbus_device *dev = be->dev;
>> +	unsigned int rx_copy;
>> +	int err, val;
>> +
>>   	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>>   			   &rx_copy);
>>   	if (err == -ENOENT) {
>> @@ -622,20 +672,9 @@ static int connect_rings(struct backend_info *be)
>>   		val = 0;
>>   	vif->ipv6_csum = !!val;
>>
>> -	/* Map the shared frame, irq etc. */
>> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
>> -			     tx_evtchn, rx_evtchn);
>> -	if (err) {
>> -		xenbus_dev_fatal(dev, err,
>> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>> -				 tx_ring_ref, rx_ring_ref,
>> -				 tx_evtchn, rx_evtchn);
>> -		return err;
>> -	}
>>   	return 0;
>>   }
>>
>> -
>>   /* ** Driver Registration ** */
>>
>>
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:52:16 2014
Message-ID: <52D7B9D1.5080907@zynstra.com>
Date: Thu, 16 Jan 2014 10:52:01 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com>
In-Reply-To: <52D7346A.3000300@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning

Bob Liu wrote:
> On 01/16/2014 12:35 AM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 01/15/2014 04:49 PM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>>>> Bob Liu wrote:
>>>>>>> Could you confirm that this problem doesn't exist when tmem is loaded
>>>>>>> with selfshrinking=0 while compiling gcc? It seems that you are
>>>>>>> compiling different packages during your testing.
>>>>>>> This will help to figure out whether selfshrinking is the root cause.
>>>>>> Got an OOM with selfshrinking=0, again during a gcc compile.
>>>>>> Unfortunately I don't have a single test case which demonstrates the
>>>>>> problem but as I mentioned before it will generally show up under
>>>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>>>
>>>>> So the root cause is not selfshrinking being enabled.
>>>>> Then what I can think of is that the xen-selfballoon driver was too
>>>>> aggressive: too many pages were ballooned out, which caused heavy
>>>>> memory pressure on the guest OS.
>>>>> And kswapd started to reclaim pages until most pages were
>>>>> unreclaimable (all_unreclaimable=yes for all zones), at which point the
>>>>> OOM killer was triggered.
>>>>> In theory the balloon driver should give ballooned-out pages back to
>>>>> the guest OS, but I'm afraid this procedure is not fast enough.
>>>>>
>>>>> My suggestion is to reserve a minimum amount of memory for your guest
>>>>> OS so that xen-selfballoon won't be so aggressive.
>>>>> You can do it through parameters selfballoon_reserved_mb or
>>>>> selfballoon_min_usable_mb.
>>>> I am still getting OOM errors with both of these set to 32 so I'll try
>>>> another bump to 64.  I think that if I do find values which prevent it
>>>> though then it is more of a workaround than a fix, because it still
>>>> suggests that swap is not being used when ballooning is no longer
>>> Yes, it's a workaround. But I don't think there is a better way.
>>>
>>>  From the recent OOM log you reported:
>>> [ 8212.940769] Free swap  = 1925576kB
>>> [ 8212.940770] Total swap = 2097148kB
>>>
>>> [504638.442136] Free swap  = 1868108kB
>>> [504638.442137] Total swap = 2097148kB
>>>
>>> 171572KB and 229040KB of data were swapped out to the swap disk; I think
>>> those are already significant amounts for a guest OS with only ~300M of
>>> usable memory.
>>> The guest OS can't find any more pages suitable for swapping out after so
>>> many pages have been swapped, even though at that point the swap device
>>> still has enough space.
>>>
>>> The OOM might not be triggered if the balloon driver could give memory
>>> back to the guest OS fast enough, but I think that's unrealistic.
>>> So the best way is to reserve more memory for the guest OS.
>>>
>>>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>>>> kernel) guest running on the dom0 with tmem activated so I'm going to
>>>> see if I can find a comparable workload to see if I get the same issue
>>>> with a different kernel configuration.
>>>>
>> I've done a bit more testing and seem to have found an extra condition
>> which is affecting the OOM behaviour in my guests.  All my Gentoo guests
>> have swap space backed by a dm-crypt volume and if I remove this layer
>> then things seem to be behaving much more reliably.  In my Ubuntu guests
>> I have plain swap space and so far I haven't been able to trigger an OOM
>> condition.  Is it possible that it is the dm-crypt layer failing to get
>> working memory when swapping something in/out and causing the error?
>>
> One possible reason is that the dm layer and the related dm target driver
> occupy a significant amount of memory and there is no way for
> xen-selfballoon to know this. So the selfballoon driver ballooned out more
> memory than the system could really spare.
>
> I have made a patch that reserves an extra 10% of the original total
> memory; this way I think we can make the system much more reliable in all
> cases.
> Could you please give it a test? You don't need to set
> selfballoon_reserved_mb yourself any more.
I'm running a 3.12.7 kernel with all 3 patches and the original swap 
configuration; so far so good.  I shall keep testing and let you know 
how things go in a few days.

Thanks,
James

/sys/module/tmem/parameters/cleancache Y
/sys/module/tmem/parameters/frontswap Y
/sys/module/tmem/parameters/selfballooning Y
/sys/module/tmem/parameters/selfshrinking Y

/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_hysteresis 
20
/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_inertia 6
/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_selfshrinking 
1
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_downhysteresis 
8
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballooning 1
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_interval 
5
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_min_usable_mb 
0
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_reserved_mb 
74
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_uphysteresis 
1
>
> xen_selfballoon_deaggressive.patch
>
>
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..8f33254 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>   #endif /* CONFIG_FRONTSWAP */
>   
>   #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>   
>   /*
>    * Use current balloon size, the goal (vm_committed_as), and hysteresis
> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>   int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   {
>   	bool enable = false;
> +	unsigned long reserve_pages;
>   
>   	if (!xen_domain())
>   		return -ENODEV;
> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   	if (!enable)
>   		return -ENODEV;
>   
> +	/*
> +	 * Give selfballoon_reserved_mb a default value(10% of total ram pages)
> +	 * to make selfballoon not so aggressive.
> +	 *
> +	 * There are two reasons:
> +	 * 1) The goal_page doesn't contain some pages used by kernel space,
> +	 *    like slab cache and pages used by device drivers.
> +	 *
> +	 * 2) The balloon driver may not give back memory to guest OS fast
> +	 *    enough when the workload suddenly aquries a lot of memory.
> +	 *
> +	 * In both cases, the guest OS will suffer from memory pressure and
> +	 * OOM killer may be triggered.
> +	 * By reserving extra 10% of total ram pages, we can keep the system
> +	 * much more reliably and response faster in some cases.
> +	 */
> +	if (!selfballoon_reserved_mb) {
> +		reserve_pages = totalram_pages / 10;
> +		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
> +	}
>   	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
>   
>   	return 0;
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:52:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kYU-0003i0-Vy; Thu, 16 Jan 2014 10:52:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W3kYT-0003hX-MG
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 10:52:10 +0000
Received: from [85.158.139.211:24950] by server-10.bemta-5.messagelabs.com id
	1B/F0-01405-8D9B7D25; Thu, 16 Jan 2014 10:52:08 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389869527!8895883!1
X-Originating-IP: [213.199.154.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10755 invoked from network); 16 Jan 2014 10:52:08 -0000
Received: from mail-db3lp0084.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.84)
	by server-16.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	16 Jan 2014 10:52:08 -0000
Received: from AMXPRD0410HT003.eurprd04.prod.outlook.com (157.56.248.165) by
	AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22) with Microsoft
	SMTP Server (TLS) id 15.0.851.15; Thu, 16 Jan 2014 10:52:06 +0000
Message-ID: <52D7B9D1.5080907@zynstra.com>
Date: Thu, 16 Jan 2014 10:52:01 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com>
In-Reply-To: <52D7346A.3000300@oracle.com>
X-Originating-IP: [157.56.248.165]
X-ClientProxiedBy: AMSPR03CA007.eurprd03.prod.outlook.com (10.242.77.145) To
	AM3PR03MB404.eurprd03.prod.outlook.com (10.242.110.22)
X-Forefront-PRVS: 0093C80C01
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(979002)(669001)(779001)(679001)(689001)(199002)(164054003)(51444003)(189002)(24454002)(479174003)(377454003)(51704005)(87976001)(56816005)(90146001)(47976001)(54316002)(80976001)(46102001)(83322001)(77096001)(83072002)(85852003)(64126003)(53806001)(76482001)(85306002)(54356001)(69226001)(42186004)(83506001)(81686001)(80316001)(4396001)(49866001)(93516002)(81542001)(81342001)(50986001)(47736001)(23756003)(51856001)(81816001)(33656001)(74706001)(93136001)(74502001)(31966008)(50466002)(47446002)(63696002)(76786001)(76796001)(59766001)(80022001)(77982001)(74366001)(79102001)(59896001)(74662001)(66066001)(74876001)(47776003)(36756003)(92726001)(56776001)(92566001)(969003)(989001)(999001)(1009001)(1019001);
	DIR:OUT; SFP:1102; SCL:1; SRVR:AM3PR03MB404;
	H:AMXPRD0410HT003.eurprd04.prod.outlook.com; CLIP:157.56.248.165;
	FPR:; RD:InfoNoRecords; A:1; MX:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/16/2014 12:35 AM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 01/15/2014 04:49 PM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>>>> Bob Liu wrote:
> >>>>>>> Could you confirm that this problem doesn't exist when tmem is
> >>>>>>> loaded with selfshrinking=0 while compiling gcc? It seems that you
> >>>>>>> are compiling different packages during your testing.
>>>>>>> This will help to figure out whether selfshrinking is the root cause.
>>>>>> Got an oom with selfshrinking=0, again during a gcc compile.
>>>>>> Unfortunately I don't have a single test case which demonstrates the
>>>>>> problem but as I mentioned before it will generally show up under
>>>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>>>
> >>>>> So the root cause is not that selfshrinking was enabled.
> >>>>> Then what I can think of is that the xen-selfballoon driver was too
> >>>>> aggressive: too many pages were ballooned out, which caused heavy
> >>>>> memory pressure in the guest OS.
> >>>>> kswapd then started to reclaim pages until most pages were
> >>>>> unreclaimable (all_unreclaimable=yes for all zones), at which point
> >>>>> the OOM killer was triggered.
> >>>>> In theory the balloon driver should give ballooned-out pages back to
> >>>>> the guest OS, but I'm afraid this procedure is not fast enough.
>>>>>
> >>>>> My suggestion is to reserve a minimum amount of memory for your guest
> >>>>> OS so that xen-selfballoon won't be so aggressive.
> >>>>> You can do this through the selfballoon_reserved_mb or
> >>>>> selfballoon_min_usable_mb parameters.
>>>> I am still getting OOM errors with both of these set to 32 so I'll try
>>>> another bump to 64.  I think that if I do find values which prevent it
> >>>> though then it is more of a workaround than a fix because it still
>>>> suggests that swap is not being used when ballooning is no longer
> >>> Yes, it's a workaround, but I don't think there is a better way.
>>>
> >>>   From the recent OOM logs you reported:
>>> [ 8212.940769] Free swap  = 1925576kB
>>> [ 8212.940770] Total swap = 2097148kB
>>>
>>> [504638.442136] Free swap  = 1868108kB
>>> [504638.442137] Total swap = 2097148kB
>>>
> >>> 171572 KB and 229040 KB of data were swapped out to the swap disk; I
> >>> think those are already significant amounts for a guest OS that has
> >>> only ~300M of usable memory.
> >>> The guest OS can't find any more pages suitable for swapping after so
> >>> many pages have been swapped out, even though at that point the swap
> >>> device still has enough space.
>>>
> >>> The OOM might not be triggered if the balloon driver could give memory
> >>> back to the guest OS fast enough, but I think that's unrealistic.
> >>> So the best approach is to reserve more memory for the guest OS.
>>>
>>>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>>>> kernel) guest running on the dom0 with tmem activated so I'm going to
>>>> see if I can find a comparable workload to see if I get the same issue
>>>> with a different kernel configuration.
>>>>
>> I've done a bit more testing and seem to have found an extra condition
>> which is affecting the OOM behaviour in my guests.  All my Gentoo guests
>> have swap space backed by a dm-crypt volume and if I remove this layer
>> then things seem to be behaving much more reliably.  In my Ubuntu guests
>> I have plain swap space and so far I haven't been able to trigger an OOM
>> condition.  Is it possible that it is the dm-crypt layer failing to get
>> working memory when swapping something in/out and causing the error?
>>
> One possible reason is that the dm layer and the related dm target driver
> occupy a significant amount of memory and there is no way for
> xen-selfballoon to know this, so the selfballoon driver ballooned out more
> memory than the system really requires.
>
> I have made a patch that reserves an extra 10% of the original total
> memory; this way I think we can make the system much more reliable in
> all cases.
> Could you please test it? You no longer need to set
> selfballoon_reserved_mb yourself.
I'm running a 3.12.7 kernel with all 3 patches and the original swap
configuration; so far, so good.  I shall keep testing and let you know
how things go in a few days.

Thanks,
James

/sys/module/tmem/parameters/cleancache Y
/sys/module/tmem/parameters/frontswap Y
/sys/module/tmem/parameters/selfballooning Y
/sys/module/tmem/parameters/selfshrinking Y

/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_hysteresis 20
/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_inertia 6
/sys/devices/system/xen_memory/xen_memory0/selfballoon/frontswap_selfshrinking 1
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_downhysteresis 8
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballooning 1
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_interval 5
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_min_usable_mb 0
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_reserved_mb 74
/sys/devices/system/xen_memory/xen_memory0/selfballoon/selfballoon_uphysteresis 1
>
> xen_selfballoon_deaggressive.patch
>
>
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..8f33254 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>   #endif /* CONFIG_FRONTSWAP */
>   
>   #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>   
>   /*
>    * Use current balloon size, the goal (vm_committed_as), and hysteresis
> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>   int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   {
>   	bool enable = false;
> +	unsigned long reserve_pages;
>   
>   	if (!xen_domain())
>   		return -ENODEV;
> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   	if (!enable)
>   		return -ENODEV;
>   
> +	/*
> +	 * Give selfballoon_reserved_mb a default value (10% of total RAM
> +	 * pages) to make selfballoon less aggressive.
> +	 *
> +	 * There are two reasons:
> +	 * 1) The goal_page doesn't account for some pages used by kernel
> +	 *    space, like the slab cache and pages used by device drivers.
> +	 *
> +	 * 2) The balloon driver may not give memory back to the guest OS
> +	 *    fast enough when the workload suddenly acquires a lot of memory.
> +	 *
> +	 * In both cases, the guest OS will suffer from memory pressure and
> +	 * the OOM killer may be triggered.
> +	 * By reserving an extra 10% of total RAM pages, we can keep the
> +	 * system much more reliable and responsive in some cases.
> +	 */
> +	if (!selfballoon_reserved_mb) {
> +		reserve_pages = totalram_pages / 10;
> +		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
> +	}
>   	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
>   
>   	return 0;
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 10:58:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 10:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3keO-0004VI-EH; Thu, 16 Jan 2014 10:58:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3keM-0004VD-TO
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 10:58:15 +0000
Received: from [85.158.143.35:10865] by server-3.bemta-4.messagelabs.com id
	7D/B0-32360-64BB7D25; Thu, 16 Jan 2014 10:58:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389869892!12073777!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12460 invoked from network); 16 Jan 2014 10:58:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 10:58:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91315681"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 10:58:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 05:58:10 -0500
Message-ID: <1389869890.6697.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 16 Jan 2014 10:58:10 +0000
In-Reply-To: <20140115174510.GA5171@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 18:45 +0100, Olaf Hering wrote:
> On Wed, Jan 15, Ian Campbell wrote:
> 
> > On Wed, 2014-01-15 at 18:12 +0100, Olaf Hering wrote:
> > > It seems qemu does not enumerate the configured disks correctly with a
> > > config like this:
> > > 
> > > disk=[  
> > >         'raw:/some.iso,hda:cdrom,r',
> > >         'raw:/some.raw,xvda,w',
> > > ]
> > > 
> > > With a PV guest it works fine, the guest has a hda and xvda device. 
> > > But a HVM guest fails to start:
> > > qemu-system-i386: -drive file=/some.raw,if=ide,index=0,media=disk,format=raw,cache=writeback: drive with bus=0, unit=0 (index=0) exists
> > > 
> > > I think that kind of config used to work with xend.
> > 
> > Did it? I thought xvda and hda were effectively considered two faces of
> > the same device, so I'm not so sure. I'd be particularly surprised if
> > this worked by design rather than coincidence.
> 
> Putting a 'device_model_version="qemu-xen-traditional"' into the config
> fixes it for me.
> 
> So the question is how it is supposed to work. My understanding is that
> for HVM some sort of IDE is (or was?) required to let it boot from a
> block device. That's why I have hd[abcd] as device names. In addition to
> that one could have as many disks named xvd[abc..], which are PV only.

Not quite. Each xvd[a-d] creates both a PV and an emulated IDE device
hd[a-d], which refer to the same underlying volume.

This allows you to boot from hda, do an unplug and then switch to using
xvda.

If you want a pure PV disk you should label it xvde onwards as
docs/misc/vbd-interface.txt suggests. I don't think there is a way to
force xvd[a-d] to be pure PV.
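One possible rearrangement along these lines (a sketch only; the image paths are reused from Olaf's example, and moving the CD to hdc is an assumption, not something confirmed in this thread):

```
disk = [
        'raw:/some.raw,xvda,w',      # xvda also provides the emulated hda twin
        'raw:/some.iso,hdc:cdrom,r', # cdrom moved off hda to avoid the clash
]
```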

Obviously this precludes you explicitly asking for both xvda and hda,
since there will be a clash. I don't know how/why qemu-trad lets this
work.

> After some testing it seems that today the guest will boot from xvda,
> even with qemu-xen-traditional.

So basically the bogus second definition of hda as a cdrom is ignored?

>  So either that got fixed with libxl, or
> xend from 4.2 got it all wrong.
> 
> So what should be done with such configs, if they really exist in the
> wild?

>  The obvious workaround is device_model_version="qemu-xen-traditional".

Yes, either that or switch the disk stuff around / delete the hda.

General advice would be to stick with qemu-xen-traditional for VMs which
were created/installed with it, unless you know the in-guest OS to be
fine; I suppose that could extend to this case too.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 16 11:03:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kjT-0004ts-QB; Thu, 16 Jan 2014 11:03:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3kjS-0004tn-FO
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:03:30 +0000
Received: from [85.158.139.211:47006] by server-13.bemta-5.messagelabs.com id
	27/16-11357-18CB7D25; Thu, 16 Jan 2014 11:03:29 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389870206!10112169!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24910 invoked from network); 16 Jan 2014 11:03:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:03:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93418184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:03:26 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 16 Jan 2014 06:03:25 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 12:03:24 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
	queue struct.
Thread-Index: AQHPEg4x4WFe49MrpEWenZ63DvyEMpqHIP5g///4YoCAABYywA==
Date: Thu, 16 Jan 2014 11:03:23 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
	<52D7B6B3.5060303@citrix.com>
In-Reply-To: <52D7B6B3.5060303@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 16 January 2014 10:39
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu
> Subject: Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> queue struct.
> 
> On 16/01/14 10:23, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> >> Sent: 15 January 2014 16:23
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
> >> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> >> queue struct.
> >>
> >> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >>
> >> In preparation for multi-queue support in xen-netback, move the
> >> queue-specific data from struct xenvif into struct xenvif_queue, and
> >> update the rest of the code to use this.
> >>
> >> Also adds loops over queues where appropriate, even though only one is
> >> configured at this point, and uses alloc_netdev_mq() and the
> >> corresponding multi-queue netif wake/start/stop functions in preparation
> >> for multiple active queues.
> >>
> >> Finally, implements a trivial queue selection function suitable for
> >> ndo_select_queue, which simply returns 0 for a single queue and uses
> >> skb_get_rxhash() to compute the queue index otherwise.
> >>
> >> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> >> ---
> >>   drivers/net/xen-netback/common.h    |   66 +++--
> >>   drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
> >>   drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++---------
> ----
> >> -----
> >>   drivers/net/xen-netback/xenbus.c    |   89 ++++--
> >>   4 files changed, 556 insertions(+), 423 deletions(-)
> >>
> >> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> >> netback/common.h
> >> index c47794b..54d2eeb 100644
> >> --- a/drivers/net/xen-netback/common.h
> >> +++ b/drivers/net/xen-netback/common.h
> >> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
> >>    */
> >>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
> >> XEN_NETIF_RX_RING_SIZE)
> >>
> >> -struct xenvif {
> >> -	/* Unique identifier for this interface. */
> >> -	domid_t          domid;
> >> -	unsigned int     handle;
> >> +struct xenvif;
> >> +
> >> +struct xenvif_queue { /* Per-queue data for xenvif */
> >> +	unsigned int number; /* Queue number, 0-based */
> >> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> >
> > I wonder whether it would be neater to #define the name size here...
> >
> 
> Absolutely. I'll do this in V2.
> 
> >> +	struct xenvif *vif; /* Parent VIF */
> >>
> >>   	/* Use NAPI for guest TX */
> >>   	struct napi_struct napi;
> >>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
> >>   	unsigned int tx_irq;
> >>   	/* Only used when feature-split-event-channels = 1 */
> >> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> >> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
> >
> > ...and the IRQ name size here. It's kind of ugly to have + some_magic_value
> in array definitions.
> >
> 
> As above.
> 
> >>   	struct xen_netif_tx_back_ring tx;
> >>   	struct sk_buff_head tx_queue;
> >>   	struct page *mmap_pages[MAX_PENDING_REQS];
> >> @@ -140,7 +142,7 @@ struct xenvif {
> >>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
> >>   	unsigned int rx_irq;
> >>   	/* Only used when feature-split-event-channels = 1 */
> >> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> >> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
> >>   	struct xen_netif_rx_back_ring rx;
> >>   	struct sk_buff_head rx_queue;
> >>
> >> @@ -150,14 +152,27 @@ struct xenvif {
> >>   	 */
> >>   	RING_IDX rx_req_cons_peek;
> >>
> >> -	/* This array is allocated seperately as it is large */
> >> -	struct gnttab_copy *grant_copy_op;
> >> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> >
> > I see you brought this back in line, which is reasonable as the queue is now
> a separately allocated struct.
> >
> 
> Indeed; trying to keep the number of separate allocs/frees to a minimum,
> for everybody's sanity!
> 
> >>
> >>   	/* We create one meta structure per ring request we consume, so
> >>   	 * the maximum number is the same as the ring size.
> >>   	 */
> >>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> >>
> >> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> >> +	unsigned long   credit_bytes;
> >> +	unsigned long   credit_usec;
> >> +	unsigned long   remaining_credit;
> >> +	struct timer_list credit_timeout;
> >> +	u64 credit_window_start;
> >> +
> >> +};
> >> +
> >> +struct xenvif {
> >> +	/* Unique identifier for this interface. */
> >> +	domid_t          domid;
> >> +	unsigned int     handle;
> >> +
> >>   	u8               fe_dev_addr[6];
> >>
> >>   	/* Frontend feature information. */
> >> @@ -171,12 +186,9 @@ struct xenvif {
> >>   	/* Internal feature information. */
> >>   	u8 can_queue:1;	    /* can queue packets for receiver? */
> >>
> >> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> >> -	unsigned long   credit_bytes;
> >> -	unsigned long   credit_usec;
> >> -	unsigned long   remaining_credit;
> >> -	struct timer_list credit_timeout;
> >> -	u64 credit_window_start;
> >> +	/* Queues */
> >> +	unsigned int num_queues;
> >> +	struct xenvif_queue *queues;
> >>
> >>   	/* Statistics */
> >
> > Do stats need to be per-queue (and then possibly aggregated at query
> time)?
> >
> 
> Aside from the potential to see the stats for each queue, which may be
> useful in some limited circumstances for performance testing or
> debugging, I don't see what this buys us...
> 

Well, if you have multiple queues running simultaneously, do you make sure global stats are atomically adjusted? I didn't see any code to that effect, and since atomic ops are expensive it's usually better to keep per queue stats and aggregate at the point of query.

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:03:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kjT-0004ts-QB; Thu, 16 Jan 2014 11:03:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W3kjS-0004tn-FO
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:03:30 +0000
Received: from [85.158.139.211:47006] by server-13.bemta-5.messagelabs.com id
	27/16-11357-18CB7D25; Thu, 16 Jan 2014 11:03:29 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389870206!10112169!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24910 invoked from network); 16 Jan 2014 11:03:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:03:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93418184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:03:26 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 16 Jan 2014 06:03:25 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 12:03:24 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
	queue struct.
Thread-Index: AQHPEg4x4WFe49MrpEWenZ63DvyEMpqHIP5g///4YoCAABYywA==
Date: Thu, 16 Jan 2014 11:03:23 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
	<52D7B6B3.5060303@citrix.com>
In-Reply-To: <52D7B6B3.5060303@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 16 January 2014 10:39
> To: Paul Durrant; xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu
> Subject: Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> queue struct.
> 
> On 16/01/14 10:23, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> >> Sent: 15 January 2014 16:23
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
> >> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
> >> queue struct.
> >>
> >> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >>
> >> In preparation for multi-queue support in xen-netback, move the
> >> queue-specific data from struct xenvif into struct xenvif_queue, and
> >> update the rest of the code to use this.
> >>
> >> Also adds loops over queues where appropriate, even though only one is
> >> configured at this point, and uses alloc_netdev_mq() and the
> >> corresponding multi-queue netif wake/start/stop functions in preparation
> >> for multiple active queues.
> >>
> >> Finally, implements a trivial queue selection function suitable for
> >> ndo_select_queue, which simply returns 0 for a single queue and uses
> >> skb_get_rxhash() to compute the queue index otherwise.
> >>
> >> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> >> ---
> >>   drivers/net/xen-netback/common.h    |   66 +++--
> >>   drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
> >>   drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
> >>   drivers/net/xen-netback/xenbus.c    |   89 ++++--
> >>   4 files changed, 556 insertions(+), 423 deletions(-)
> >>
> >> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> >> netback/common.h
> >> index c47794b..54d2eeb 100644
> >> --- a/drivers/net/xen-netback/common.h
> >> +++ b/drivers/net/xen-netback/common.h
> >> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
> >>    */
> >>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
> >> XEN_NETIF_RX_RING_SIZE)
> >>
> >> -struct xenvif {
> >> -	/* Unique identifier for this interface. */
> >> -	domid_t          domid;
> >> -	unsigned int     handle;
> >> +struct xenvif;
> >> +
> >> +struct xenvif_queue { /* Per-queue data for xenvif */
> >> +	unsigned int number; /* Queue number, 0-based */
> >> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> >
> > I wonder whether it would be neater to #define the name size here...
> >
> 
> Absolutely. I'll do this in V2.
> 
> >> +	struct xenvif *vif; /* Parent VIF */
> >>
> >>   	/* Use NAPI for guest TX */
> >>   	struct napi_struct napi;
> >>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
> >>   	unsigned int tx_irq;
> >>   	/* Only used when feature-split-event-channels = 1 */
> >> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> >> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
> >
> > ...and the IRQ name size here. It's kind of ugly to have + some_magic_value
> in array definitions.
> >
> 
> As above.
> 
> >>   	struct xen_netif_tx_back_ring tx;
> >>   	struct sk_buff_head tx_queue;
> >>   	struct page *mmap_pages[MAX_PENDING_REQS];
> >> @@ -140,7 +142,7 @@ struct xenvif {
> >>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
> >>   	unsigned int rx_irq;
> >>   	/* Only used when feature-split-event-channels = 1 */
> >> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> >> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
> >>   	struct xen_netif_rx_back_ring rx;
> >>   	struct sk_buff_head rx_queue;
> >>
> >> @@ -150,14 +152,27 @@ struct xenvif {
> >>   	 */
> >>   	RING_IDX rx_req_cons_peek;
> >>
> >> -	/* This array is allocated seperately as it is large */
> >> -	struct gnttab_copy *grant_copy_op;
> >> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> >
> > I see you brought this back in line, which is reasonable as the queue is now
> a separately allocated struct.
> >
> 
> Indeed; trying to keep the number of separate allocs/frees to a minimum,
> for everybody's sanity!
> 
> >>
> >>   	/* We create one meta structure per ring request we consume, so
> >>   	 * the maximum number is the same as the ring size.
> >>   	 */
> >>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> >>
> >> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> >> +	unsigned long   credit_bytes;
> >> +	unsigned long   credit_usec;
> >> +	unsigned long   remaining_credit;
> >> +	struct timer_list credit_timeout;
> >> +	u64 credit_window_start;
> >> +
> >> +};
> >> +
> >> +struct xenvif {
> >> +	/* Unique identifier for this interface. */
> >> +	domid_t          domid;
> >> +	unsigned int     handle;
> >> +
> >>   	u8               fe_dev_addr[6];
> >>
> >>   	/* Frontend feature information. */
> >> @@ -171,12 +186,9 @@ struct xenvif {
> >>   	/* Internal feature information. */
> >>   	u8 can_queue:1;	    /* can queue packets for receiver? */
> >>
> >> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> >> -	unsigned long   credit_bytes;
> >> -	unsigned long   credit_usec;
> >> -	unsigned long   remaining_credit;
> >> -	struct timer_list credit_timeout;
> >> -	u64 credit_window_start;
> >> +	/* Queues */
> >> +	unsigned int num_queues;
> >> +	struct xenvif_queue *queues;
> >>
> >>   	/* Statistics */
> >
> > Do stats need to be per-queue (and then possibly aggregated at query
> time)?
> >
> 
> Aside from the potential to see the stats for each queue, which may be
> useful in some limited circumstances for performance testing or
> debugging, I don't see what this buys us...
> 

Well, if you have multiple queues running simultaneously, do you make sure global stats are atomically adjusted? I didn't see any code to that effect, and since atomic ops are expensive it's usually better to keep per queue stats and aggregate at the point of query.

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
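[For reference, the change Paul asks for above (and Andrew agrees to make in V2) could look like the following userspace sketch: the constant names, the struct, and the format_names() helper are illustrative assumptions, not code from the patch, and IFNAMSIZ is defined locally rather than taken from <linux/if.h>.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define IFNAMSIZ 16                      /* illustrative; kernel value from <linux/if.h> */
#define QUEUE_NAME_SIZE (IFNAMSIZ + 4)   /* DEVNAME-qN */
#define IRQ_NAME_SIZE   (IFNAMSIZ + 7)   /* DEVNAME-qN-tx / DEVNAME-qN-rx */

/* Hypothetical holder for the three per-queue name buffers discussed
 * in the review; named constants replace the bare "+4" / "+7". */
struct xenvif_queue_names {
	char name[QUEUE_NAME_SIZE];
	char tx_irq_name[IRQ_NAME_SIZE];
	char rx_irq_name[IRQ_NAME_SIZE];
};

/* Fill in the queue and split-event-channel IRQ names for queue N. */
static void format_names(struct xenvif_queue_names *q,
			 const char *devname, unsigned int number)
{
	snprintf(q->name, sizeof(q->name), "%s-q%u", devname, number);
	snprintf(q->tx_irq_name, sizeof(q->tx_irq_name), "%s-q%u-tx",
		 devname, number);
	snprintf(q->rx_irq_name, sizeof(q->rx_irq_name), "%s-q%u-rx",
		 devname, number);
}
```

[The point of the #define is that the buffer size and the format string that justifies it sit next to each other, instead of a magic "+7" in the struct definition far from the snprintf that depends on it.]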

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:04:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kkf-00055J-AD; Thu, 16 Jan 2014 11:04:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3kkd-000559-EG
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:04:43 +0000
Received: from [85.158.139.211:62638] by server-3.bemta-5.messagelabs.com id
	76/8F-04773-ACCB7D25; Thu, 16 Jan 2014 11:04:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389870270!9939847!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20731 invoked from network); 16 Jan 2014 11:04:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:04:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91317186"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 11:04:27 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:04:26 -0500
Message-ID: <52D7BCB9.2020807@citrix.com>
Date: Thu, 16 Jan 2014 11:04:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Bennieston <andrew.bennieston@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>	<20140116002706.GJ5331@zion.uk.xensource.com>
	<52D7B379.201@citrix.com> <52D7B6F1.1040604@citrix.com>
	<52D7B76E.2080001@citrix.com>
In-Reply-To: <52D7B76E.2080001@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support
 for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 10:41, Andrew Bennieston wrote:
> On 16/01/14 10:39, David Vrabel wrote:
>> On 16/01/14 10:24, Andrew Bennieston wrote:
>>> On 16/01/14 00:27, Wei Liu wrote:
>>>> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>>>>
>>>>> +            goto error;
>>>>> +        }
>>>>> +        snprintf(path, pathsize, "%s/queue-%u",
>>>>> +                dev->nodename, queue->number);
>>>>> +    }
>>>>> +    else
>>>>> +        path = (char *)dev->nodename;
>>>>
> >>>> Coding style. Should be surrounded by {};
>>>
>>> OK.
>>
> >> Linux style is that single-line blocks are not surrounded by braces. You
> >> should have the else on the same line as the preceding } though.
>>
>> i.e.,
>>
>> if (...) {
>>     one_line()
>>     two_line()
>>     red_line()
>>     blue_line()
>> } else
>>     a_line()
>>
>> David
>>
> Right; I'll make sure this is done consistently throughout.

Nope. Turns out I'm wrong.

In my defense, the last time I read CodingStyle (which was a
considerable time ago!) it didn't say this.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
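[The rule the thread converges on, after David's correction, is the one Wei points at: when any branch of an if/else needs braces, brace every branch. A userspace sketch of the xenbus path construction under discussion; queue_path() and its arguments are hypothetical, not the patch's actual code.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the xenstore path for a queue: "<nodename>/queue-<N>" when
 * multiple queues are in use, or just "<nodename>" otherwise.
 * Returns 0 on success, -1 on truncation or error. */
static int queue_path(char *buf, size_t len, const char *nodename,
		      unsigned int num_queues, unsigned int number)
{
	if (num_queues > 1) {
		/* multi-statement branch: braces required... */
		int n = snprintf(buf, len, "%s/queue-%u", nodename, number);

		if (n < 0 || (size_t)n >= len)
			return -1;
	} else {
		/* ...so the single-statement else is braced as well */
		snprintf(buf, len, "%s", nodename);
	}
	return 0;
}
```

[Note the inner single-statement `if` stays unbraced; only branches that share a conditional with a braced branch pick up braces.]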

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:06:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kmc-0005Er-65; Thu, 16 Jan 2014 11:06:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kmb-0005El-FE
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:06:45 +0000
Received: from [85.158.143.35:49443] by server-2.bemta-4.messagelabs.com id
	11/AC-11386-44DB7D25; Thu, 16 Jan 2014 11:06:44 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389870402!279618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7158 invoked from network); 16 Jan 2014 11:06:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:06:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93419116"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:06:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:06:41 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kmX-0002sz-D5; Thu, 16 Jan 2014 11:06:41 +0000
Message-ID: <52D7BD41.70408@citrix.com>
Date: Thu, 16 Jan 2014 11:06:41 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
	<52D7B6B3.5060303@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 11:03, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 16 January 2014 10:39
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu
>> Subject: Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
>> queue struct.
>>
>> On 16/01/14 10:23, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>>>> Sent: 15 January 2014 16:23
>>>> To: xen-devel@lists.xenproject.org
>>>> Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
>>>> Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into
>>>> queue struct.
>>>>
>>>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>>>
>>>> In preparation for multi-queue support in xen-netback, move the
>>>> queue-specific data from struct xenvif into struct xenvif_queue, and
>>>> update the rest of the code to use this.
>>>>
>>>> Also adds loops over queues where appropriate, even though only one is
>>>> configured at this point, and uses alloc_netdev_mq() and the
>>>> corresponding multi-queue netif wake/start/stop functions in preparation
>>>> for multiple active queues.
>>>>
>>>> Finally, implements a trivial queue selection function suitable for
>>>> ndo_select_queue, which simply returns 0 for a single queue and uses
>>>> skb_get_rxhash() to compute the queue index otherwise.
>>>>
>>>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>>>> ---
>>>>   drivers/net/xen-netback/common.h    |   66 +++--
>>>>   drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>>>>   drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
>>>>   drivers/net/xen-netback/xenbus.c    |   89 ++++--
>>>>   4 files changed, 556 insertions(+), 423 deletions(-)
>>>>
>>>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
>>>> netback/common.h
>>>> index c47794b..54d2eeb 100644
>>>> --- a/drivers/net/xen-netback/common.h
>>>> +++ b/drivers/net/xen-netback/common.h
>>>> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>>>>    */
>>>>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
>>>> XEN_NETIF_RX_RING_SIZE)
>>>>
>>>> -struct xenvif {
>>>> -	/* Unique identifier for this interface. */
>>>> -	domid_t          domid;
>>>> -	unsigned int     handle;
>>>> +struct xenvif;
>>>> +
>>>> +struct xenvif_queue { /* Per-queue data for xenvif */
>>>> +	unsigned int number; /* Queue number, 0-based */
>>>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>>>
>>> I wonder whether it would be neater to #define the name size here...
>>>
>>
>> Absolutely. I'll do this in V2.
>>
>>>> +	struct xenvif *vif; /* Parent VIF */
>>>>
>>>>   	/* Use NAPI for guest TX */
>>>>   	struct napi_struct napi;
>>>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>>>   	unsigned int tx_irq;
>>>>   	/* Only used when feature-split-event-channels = 1 */
>>>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>>>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>>>
>>> ...and the IRQ name size here. It's kind of ugly to have + some_magic_value
>>> in array definitions.
>>>
>>
>> As above.
>>
>>>>   	struct xen_netif_tx_back_ring tx;
>>>>   	struct sk_buff_head tx_queue;
>>>>   	struct page *mmap_pages[MAX_PENDING_REQS];
>>>> @@ -140,7 +142,7 @@ struct xenvif {
>>>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>>>   	unsigned int rx_irq;
>>>>   	/* Only used when feature-split-event-channels = 1 */
>>>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>>>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>>>>   	struct xen_netif_rx_back_ring rx;
>>>>   	struct sk_buff_head rx_queue;
>>>>
>>>> @@ -150,14 +152,27 @@ struct xenvif {
>>>>   	 */
>>>>   	RING_IDX rx_req_cons_peek;
>>>>
>>>> -	/* This array is allocated seperately as it is large */
>>>> -	struct gnttab_copy *grant_copy_op;
>>>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>>>
>>> I see you brought this back in line, which is reasonable as the queue is now
>>> a separately allocated struct.
>>>
>>
>> Indeed; trying to keep the number of separate allocs/frees to a minimum,
>> for everybody's sanity!
>>
>>>>
>>>>   	/* We create one meta structure per ring request we consume, so
>>>>   	 * the maximum number is the same as the ring size.
>>>>   	 */
>>>>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>>>
>>>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>>>> +	unsigned long   credit_bytes;
>>>> +	unsigned long   credit_usec;
>>>> +	unsigned long   remaining_credit;
>>>> +	struct timer_list credit_timeout;
>>>> +	u64 credit_window_start;
>>>> +
>>>> +};
>>>> +
>>>> +struct xenvif {
>>>> +	/* Unique identifier for this interface. */
>>>> +	domid_t          domid;
>>>> +	unsigned int     handle;
>>>> +
>>>>   	u8               fe_dev_addr[6];
>>>>
>>>>   	/* Frontend feature information. */
>>>> @@ -171,12 +186,9 @@ struct xenvif {
>>>>   	/* Internal feature information. */
>>>>   	u8 can_queue:1;	    /* can queue packets for receiver? */
>>>>
>>>> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>>>> -	unsigned long   credit_bytes;
>>>> -	unsigned long   credit_usec;
>>>> -	unsigned long   remaining_credit;
>>>> -	struct timer_list credit_timeout;
>>>> -	u64 credit_window_start;
>>>> +	/* Queues */
>>>> +	unsigned int num_queues;
>>>> +	struct xenvif_queue *queues;
>>>>
>>>>   	/* Statistics */
>>>
>>> Do stats need to be per-queue (and then possibly aggregated at query
>>> time)?
>>>
>>
>> Aside from the potential to see the stats for each queue, which may be
>> useful in some limited circumstances for performance testing or
>> debugging, I don't see what this buys us...
>>
>
> Well, if you have multiple queues running simultaneously, do you make
> sure global stats are atomically adjusted? I didn't see any code to
> that effect, and since atomic ops are expensive it's usually better to
> keep per queue stats and aggregate at the point of query.
Good point. I'll do per-queue stats in V2.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:06:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kmc-0005Er-65; Thu, 16 Jan 2014 11:06:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3kmb-0005El-FE
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:06:45 +0000
Received: from [85.158.143.35:49443] by server-2.bemta-4.messagelabs.com id
	11/AC-11386-44DB7D25; Thu, 16 Jan 2014 11:06:44 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389870402!279618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7158 invoked from network); 16 Jan 2014 11:06:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:06:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93419116"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:06:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:06:41 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3kmX-0002sz-D5; Thu, 16 Jan 2014 11:06:41 +0000
Message-ID: <52D7BD41.70408@citrix.com>
Date: Thu, 16 Jan 2014 11:06:41 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208DDA@AMSPEX01CL01.citrite.net>
	<52D7B6B3.5060303@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0208FAE@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 11:03, Paul Durrant wrote:
>> -----Original Message----- From: Andrew Bennieston
>> [mailto:andrew.bennieston@citrix.com] Sent: 16 January 2014 10:39 To:
>> Paul Durrant; xen-devel@lists.xenproject.org Cc: Ian Campbell; Wei
>> Liu Subject: Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific
>> data into queue struct.
>>
>> On 16/01/14 10:23, Paul Durrant wrote:
>>>> -----Original Message----- From: Andrew J. Bennieston
>>>> [mailto:andrew.bennieston@citrix.com] Sent: 15 January 2014 16:23
>>>> To: xen-devel@lists.xenproject.org Cc: Ian Campbell; Wei Liu; Paul
>>>> Durrant; Andrew Bennieston Subject: [PATCH RFC 1/4] xen-netback:
>>>> Factor queue-specific data into queue struct.
>>>>
>>>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>>>
>>>> In preparation for multi-queue support in xen-netback, move the
>>>> queue-specific data from struct xenvif into struct xenvif_queue,
>>>> and update the rest of the code to use this.
>>>>
>>>> Also adds loops over queues where appropriate, even though only one
>>>> is configured at this point, and uses alloc_netdev_mq() and the
>>>> corresponding multi-queue netif wake/start/stop functions in
>>>> preparation for multiple active queues.
>>>>
>>>> Finally, implements a trivial queue selection function suitable for
>>>> ndo_select_queue, which simply returns 0 for a single queue and
>>>> uses skb_get_rxhash() to compute the queue index otherwise.
>>>>
>>>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>>>> ---
>>>>  drivers/net/xen-netback/common.h    |   66 +++--
>>>>  drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
>>>>  drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
>>>>  drivers/net/xen-netback/xenbus.c    |   89 ++++--
>>>>  4 files changed, 556 insertions(+), 423 deletions(-)
>>>>
>>>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>>>> index c47794b..54d2eeb 100644
>>>> --- a/drivers/net/xen-netback/common.h
>>>> +++ b/drivers/net/xen-netback/common.h
>>>> @@ -108,17 +108,19 @@ struct xenvif_rx_meta {
>>>>  */
>>>>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
>>>>
>>>> -struct xenvif {
>>>> -	/* Unique identifier for this interface. */
>>>> -	domid_t          domid;
>>>> -	unsigned int     handle;
>>>> +struct xenvif;
>>>> +
>>>> +struct xenvif_queue { /* Per-queue data for xenvif */
>>>> +	unsigned int number; /* Queue number, 0-based */
>>>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>>>
>>> I wonder whether it would be neater to #define the name size here...
>>>
>>
>> Absolutely. I'll do this in V2.
>>
>>>> +	struct xenvif *vif; /* Parent VIF */
>>>>
>>>>    	/* Use NAPI for guest TX */
>>>>    	struct napi_struct napi;
>>>>    	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>>>    	unsigned int tx_irq;
>>>>    	/* Only used when feature-split-event-channels = 1 */
>>>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>>>> +	char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */
>>>
>>> ...and the IRQ name size here. It's kind of ugly to have +
>>> some_magic_value
>> in array definitions.
>>>
>>
>> As above.
>>
>>>>    	struct xen_netif_tx_back_ring tx;
>>>>    	struct sk_buff_head tx_queue;
>>>>    	struct page *mmap_pages[MAX_PENDING_REQS];
>>>> @@ -140,7 +142,7 @@ struct xenvif {
>>>>    	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>>>    	unsigned int rx_irq;
>>>>    	/* Only used when feature-split-event-channels = 1 */
>>>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>>>> +	char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
>>>>    	struct xen_netif_rx_back_ring rx;
>>>>    	struct sk_buff_head rx_queue;
>>>>
>>>> @@ -150,14 +152,27 @@ struct xenvif {
>>>>    	 */
>>>>    	RING_IDX rx_req_cons_peek;
>>>>
>>>> -	/* This array is allocated seperately as it is large */
>>>> -	struct gnttab_copy *grant_copy_op;
>>>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>>>
>>> I see you brought this back in line, which is reasonable as the
>>> queue is now
>> a separately allocated struct.
>>>
>>
>> Indeed; trying to keep the number of separate allocs/frees to a
>> minimum, for everybody's sanity!
>>
>>>>
>>>>    	/* We create one meta structure per ring request we consume, so
>>>>    	 * the maximum number is the same as the ring size.  */
>>>>    	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>>>
>>>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>>>> +	unsigned long   credit_bytes;
>>>> +	unsigned long   credit_usec;
>>>> +	unsigned long   remaining_credit;
>>>> +	struct timer_list credit_timeout;
>>>> +	u64 credit_window_start;
>>>> +
>>>> +};
>>>> +
>>>> +struct xenvif {
>>>> +	/* Unique identifier for this interface. */
>>>> +	domid_t          domid;
>>>> +	unsigned int     handle;
>>>> +	u8               fe_dev_addr[6];
>>>>
>>>>    	/* Frontend feature information. */
>>>> @@ -171,12 +186,9 @@ struct xenvif {
>>>>    	/* Internal feature information. */
>>>>    	u8 can_queue:1;	    /* can queue packets for receiver? */
>>>>
>>>> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>>>> -	unsigned long   credit_bytes;
>>>> -	unsigned long   credit_usec;
>>>> -	unsigned long   remaining_credit;
>>>> -	struct timer_list credit_timeout;
>>>> -	u64 credit_window_start;
>>>> +	/* Queues */
>>>> +	unsigned int num_queues;
>>>> +	struct xenvif_queue *queues;
>>>>
>>>>    	/* Statistics */
>>>
>>> Do stats need to be per-queue (and then possibly aggregated at query
>> time)?
>>>
>>
>> Aside from the potential to see the stats for each queue, which may
>> be useful in some limited circumstances for performance testing or
>> debugging, I don't see what this buys us...
>>
>
> Well, if you have multiple queues running simultaneously, do you make
> sure global stats are atomically adjusted? I didn't see any code to
> that effect, and since atomic ops are expensive it's usually better to
> keep per queue stats and aggregate at the point of query.
Good point. I'll do per-queue stats in V2.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:10:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kq7-0005th-RZ; Thu, 16 Jan 2014 11:10:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W3kq6-0005tU-4a
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 11:10:22 +0000
Received: from [85.158.143.35:51116] by server-1.bemta-4.messagelabs.com id
	D4/96-02132-D1EB7D25; Thu, 16 Jan 2014 11:10:21 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389870619!12117627!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14125 invoked from network); 16 Jan 2014 11:10:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:10:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93419865"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:10:19 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:10:18 -0500
Message-ID: <52D7BE19.2010009@citrix.com>
Date: Thu, 16 Jan 2014 11:10:17 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/14 23:57, Annie Li wrote:
> This patch implements two things:
> 
> * release grant references and skbs for the rx path; this fixes a resource leak.
> * clean up grant transfer code kept from the old netfront (2.6.18), which granted
> pages for access/map and transfer. Grant transfer is deprecated in the current
> netfront, so remove the corresponding release code for transfer.
> 
> gnttab_end_foreign_access_ref may fail when the grant entry is currently in use
> for reading or writing. This patch does not cover that case; handling the
> failure may be implemented in a separate patch.

I don't think replacing a resource leak with a security bug is a good idea.

If you would prefer not to fix the gnttab_end_foreign_access() call, I
think you can fix this in netfront by taking a reference to the page
before calling gnttab_end_foreign_access().  This will ensure the page
isn't freed until the subsequent kfree_skb(), or the gref is released by
the foreign domain (whichever is later).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:10:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kqU-00063D-7R; Thu, 16 Jan 2014 11:10:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3kqS-00061f-Fb
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:10:44 +0000
Received: from [85.158.139.211:29444] by server-2.bemta-5.messagelabs.com id
	60/0F-29392-33EB7D25; Thu, 16 Jan 2014 11:10:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389870642!10114503!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14875 invoked from network); 16 Jan 2014 11:10:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 11:10:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 11:10:42 +0000
Message-Id: <52D7CC3E020000780011435C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 11:10:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dietmar Hahn" <dietmar.hahn@ts.fujitsu.com>
References: <1538524.5AKIkpF9LB@amur>
In-Reply-To: <1538524.5AKIkpF9LB@amur>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6D5EE53E.0__="
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part6D5EE53E.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get
> softlockups from time to time.
> 
> kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
> 
> I tracked this down to the call of xc_domain_set_pod_target() and further
> p2m_pod_set_mem_target().
> 
> Unfortunately I can check this only with xen-4.2.2, as I don't have a machine
> with enough memory for current hypervisors. But it seems the code is nearly
> the same.

While I haven't yet seen a formal report of this against SLE11,
attached is a draft patch against the SP3 code base adding manual
preemption to the hypercall path of privcmd. It is only lightly
tested, and therefore still has a little debugging code left in it.
Mind giving it a try (perhaps together with the patch David sent for
the other issue; there may still be a need for further preemption
points in the IOCTL_PRIVCMD_MMAP* handling, but without knowing for
sure whether that matters to you, I didn't want to add that right
away)?

Jan


--=__Part6D5EE53E.0__=
Content-Type: text/plain; name="xen-privcmd-hcall-preemption.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xen-privcmd-hcall-preemption.patch"

--- sle11sp3.orig/arch/x86/include/mach-xen/asm/hypervisor.h	2012-10-19 16:11:54.000000000 +0200
+++ sle11sp3/arch/x86/include/mach-xen/asm/hypervisor.h	2014-01-15 13:02:20.000000000 +0100
@@ -235,6 +235,9 @@ static inline int gnttab_post_map_adjust
 #ifdef CONFIG_XEN
 #define is_running_on_xen() 1
 extern char hypercall_page[PAGE_SIZE];
+#define in_hypercall(regs) (!user_mode_vm(regs) && \
+	(regs)->ip >= (unsigned long)hypercall_page && \
+	(regs)->ip < (unsigned long)hypercall_page + PAGE_SIZE)
 #else
 extern char *hypercall_stubs;
 #define is_running_on_xen() (!!hypercall_stubs)
--- sle11sp3.orig/arch/x86/kernel/entry_32-xen.S	2012-10-19 16:10:09.000000000 +0200
+++ sle11sp3/arch/x86/kernel/entry_32-xen.S	2014-01-16 10:49:07.000000000 +0100
@@ -980,6 +980,20 @@ ENTRY(hypervisor_callback)
 	call evtchn_do_upcall
 	add  $4,%esp
 	CFI_ADJUST_CFA_OFFSET -4
+#ifndef CONFIG_PREEMPT
+	test %al,%al
+	jz   ret_from_intr
+	GET_THREAD_INFO(%edx)
+	cmpl $0,TI_preempt_count(%edx)
+	jnz  ret_from_intr
+	testl $_TIF_NEED_RESCHED,TI_flags(%edx)
+	jz   ret_from_intr
+	testl $X86_EFLAGS_IF,PT_EFLAGS(%esp)
+	jz   ret_from_intr
+	movb $0,PER_CPU_VAR(privcmd_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(privcmd_hcall)
+#endif
 	jmp  ret_from_intr
 	CFI_ENDPROC
 
--- sle11sp3.orig/arch/x86/kernel/entry_64-xen.S	2011-10-06 13:06:38.000000000 +0200
+++ sle11sp3/arch/x86/kernel/entry_64-xen.S	2014-01-16 10:52:27.000000000 +0100
@@ -982,6 +982,20 @@ ENTRY(do_hypervisor_callback)   # do_hyp
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifndef CONFIG_PREEMPT
+	test %al,%al
+	jz   error_exit
+	GET_THREAD_INFO(%rdx)
+	cmpl $0,TI_preempt_count(%rdx)
+	jnz  error_exit
+	bt   $TIF_NEED_RESCHED,TI_flags(%rdx)
+	jnc  error_exit
+	bt   $9,EFLAGS-ARGOFFSET(%rsp)
+	jnc  error_exit
+	movb $0,PER_CPU_VAR(privcmd_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(privcmd_hcall)
+#endif
 	jmp  error_exit
 	CFI_ENDPROC
 END(do_hypervisor_callback)
--- sle11sp3.orig/drivers/xen/core/evtchn.c	2013-02-05 17:47:43.000000000 +0100
+++ sle11sp3/drivers/xen/core/evtchn.c	2014-01-15 13:42:02.000000000 +0100
@@ -379,7 +379,14 @@ static DEFINE_PER_CPU(unsigned int, curr
 #endif
 
 /* NB. Interrupts are disabled on entry. */
-asmlinkage void __irq_entry evtchn_do_upcall(struct pt_regs *regs)
+asmlinkage
+#ifdef CONFIG_PREEMPT
+void
+#define return(x) return
+#else
+bool
+#endif
+__irq_entry evtchn_do_upcall(struct pt_regs *regs)
 {
 	unsigned long       l1, l2;
 	unsigned long       masked_l1, masked_l2;
@@ -393,7 +400,7 @@ asmlinkage void __irq_entry evtchn_do_up
 		__this_cpu_or(upcall_state, UPC_NESTED_LATCH);
 		/* Avoid a callback storm when we reenable delivery. */
 		vcpu_info_write(evtchn_upcall_pending, 0);
-		return;
+		return(false);
 	}
 
 	old_regs = set_irq_regs(regs);
@@ -511,6 +518,9 @@ asmlinkage void __irq_entry evtchn_do_up
 	irq_exit();
 	xen_spin_irq_exit();
 	set_irq_regs(old_regs);
+
+	return(__this_cpu_read(privcmd_hcall) && in_hypercall(regs));
+#undef return
 }
 
 static int find_unbound_irq(unsigned int node, struct irq_cfg **pcfg,
--- sle11sp3.orig/drivers/xen/privcmd/privcmd.c	2012-12-12 12:05:51.000000000 +0100
+++ sle11sp3/drivers/xen/privcmd/privcmd.c	2014-01-16 10:01:23.000000000 +0100
@@ -23,6 +23,18 @@
 #include <xen/interface/xen.h>
 #include <xen/xen_proc.h>
 #include <xen/features.h>
+#include <xen/evtchn.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, privcmd_hcall);
+#endif
+
+static inline void _privcmd_hcall(bool state)
+{
+#ifndef CONFIG_PREEMPT
+	this_cpu_write(privcmd_hcall, state);
+#endif
+}
 
 static struct proc_dir_entry *privcmd_intf;
 static struct proc_dir_entry *capabilities_intf;
@@ -97,6 +109,7 @@ static long privcmd_ioctl(struct file *f
 		ret = -ENOSYS;
 		if (hypercall.op >= (PAGE_SIZE >> 5))
 			break;
+		_privcmd_hcall(true);
 		ret = _hypercall(long, (unsigned int)hypercall.op,
 				 (unsigned long)hypercall.arg[0],
 				 (unsigned long)hypercall.arg[1],
@@ -104,8 +117,10 @@ static long privcmd_ioctl(struct file *f
 				 (unsigned long)hypercall.arg[3],
 				 (unsigned long)hypercall.arg[4]);
 #else
+		_privcmd_hcall(true);
 		ret = privcmd_hypercall(&hypercall);
 #endif
+		_privcmd_hcall(false);
 	}
 	break;
 
--- sle11sp3.orig/include/xen/evtchn.h	2011-12-09 15:38:45.000000000 +0100
+++ sle11sp3/include/xen/evtchn.h	2014-01-15 14:32:14.000000000 +0100
@@ -143,7 +143,13 @@ void irq_resume(void);
 #endif
 
 /* Entry point for notifications into Linux subsystems. */
-asmlinkage void evtchn_do_upcall(struct pt_regs *regs);
+asmlinkage
+#ifdef CONFIG_PREEMPT
+void
+#else
+bool
+#endif
+evtchn_do_upcall(struct pt_regs *regs);
 
 /* Mark a PIRQ as unavailable for dynamic allocation. */
 void evtchn_register_pirq(int irq);
@@ -221,6 +227,8 @@ void notify_remote_via_ipi(unsigned int 
 void clear_ipi_evtchn(void);
 #endif
 
+DECLARE_PER_CPU(bool, privcmd_hcall);
+
 #if defined(CONFIG_XEN_SPINLOCK_ACQUIRE_NESTING) \
     && CONFIG_XEN_SPINLOCK_ACQUIRE_NESTING
 void xen_spin_irq_enter(void);
--- sle11sp3.orig/kernel/sched.c	2014-01-10 14:11:39.000000000 +0100
+++ sle11sp3/kernel/sched.c	2014-01-16 11:05:05.000000000 +0100
@@ -4690,6 +4690,9 @@ asmlinkage void __sched notrace preempt_
 }
 EXPORT_SYMBOL(preempt_schedule);
 
+#endif
+#if defined(CONFIG_PREEMPT) || defined(CONFIG_XEN)
+
 /*
  * this is the entry point to schedule() from kernel preemption
  * off of irq context.
@@ -4699,6 +4702,14 @@ EXPORT_SYMBOL(preempt_schedule);
 asmlinkage void __sched preempt_schedule_irq(void)
 {
 	struct thread_info *ti = current_thread_info();
+#ifdef CONFIG_XEN//temp
+static DEFINE_PER_CPU(unsigned long, cnt);
+static DEFINE_PER_CPU(unsigned long, thr);
+if(__this_cpu_inc_return(cnt) > __this_cpu_read(thr)) {
+ __this_cpu_or(thr, __this_cpu_read(cnt));
+ printk("psi[%02u] %08x:%d #%lx\n", raw_smp_processor_id(), ti->preempt_count, need_resched(), __this_cpu_read(cnt));
+}
+#endif
 
 	/* Catch callers which need to be fixed */
 	BUG_ON(ti->preempt_count || !irqs_disabled());
--=__Part6D5EE53E.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6D5EE53E.0__=--


From xen-devel-bounces@lists.xen.org Thu Jan 16 11:10:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3kqU-00063D-7R; Thu, 16 Jan 2014 11:10:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3kqS-00061f-Fb
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:10:44 +0000
Received: from [85.158.139.211:29444] by server-2.bemta-5.messagelabs.com id
	60/0F-29392-33EB7D25; Thu, 16 Jan 2014 11:10:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389870642!10114503!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14875 invoked from network); 16 Jan 2014 11:10:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 11:10:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 11:10:42 +0000
Message-Id: <52D7CC3E020000780011435C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 11:10:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dietmar Hahn" <dietmar.hahn@ts.fujitsu.com>
References: <1538524.5AKIkpF9LB@amur>
In-Reply-To: <1538524.5AKIkpF9LB@amur>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6D5EE53E.0__="
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part6D5EE53E.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get
> softlockups from time to time.
> 
> kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
> 
> I tracked this down to the call of xc_domain_set_pod_target() and further
> p2m_pod_set_mem_target().
> 
> Unfortunately I can check this only with xen-4.2.2 as I don't have a machine
> with enough memory for current hypervisors. But it seems the code is nearly
> the same.

While I haven't seen a formal report of this against SLE11 yet,
attached is a draft patch against the SP3 code base adding manual
preemption to the hypercall path of privcmd. It is only lightly
tested, and therefore still has a little debugging code left in
it. Mind giving it a try (perhaps together with the patch David
had sent for the other issue)? There may still be a need for
further preemption points in the IOCTL_PRIVCMD_MMAP* handling,
but without knowing for sure whether that matters to you I didn't
want to add that right away.

Jan


--=__Part6D5EE53E.0__=
Content-Type: text/plain; name="xen-privcmd-hcall-preemption.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xen-privcmd-hcall-preemption.patch"

--- sle11sp3.orig/arch/x86/include/mach-xen/asm/hypervisor.h	2012-10-19 16:11:54.000000000 +0200
+++ sle11sp3/arch/x86/include/mach-xen/asm/hypervisor.h	2014-01-15 13:02:20.000000000 +0100
@@ -235,6 +235,9 @@ static inline int gnttab_post_map_adjust
 #ifdef CONFIG_XEN
 #define is_running_on_xen() 1
 extern char hypercall_page[PAGE_SIZE];
+#define in_hypercall(regs) (!user_mode_vm(regs) && \
+	(regs)->ip >= (unsigned long)hypercall_page && \
+	(regs)->ip < (unsigned long)hypercall_page + PAGE_SIZE)
 #else
 extern char *hypercall_stubs;
 #define is_running_on_xen() (!!hypercall_stubs)
--- sle11sp3.orig/arch/x86/kernel/entry_32-xen.S	2012-10-19 16:10:09.000000000 +0200
+++ sle11sp3/arch/x86/kernel/entry_32-xen.S	2014-01-16 10:49:07.000000000 +0100
@@ -980,6 +980,20 @@ ENTRY(hypervisor_callback)
 	call evtchn_do_upcall
 	add  $4,%esp
 	CFI_ADJUST_CFA_OFFSET -4
+#ifndef CONFIG_PREEMPT
+	test %al,%al
+	jz   ret_from_intr
+	GET_THREAD_INFO(%edx)
+	cmpl $0,TI_preempt_count(%edx)
+	jnz  ret_from_intr
+	testl $_TIF_NEED_RESCHED,TI_flags(%edx)
+	jz   ret_from_intr
+	testl $X86_EFLAGS_IF,PT_EFLAGS(%esp)
+	jz   ret_from_intr
+	movb $0,PER_CPU_VAR(privcmd_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(privcmd_hcall)
+#endif
 	jmp  ret_from_intr
 	CFI_ENDPROC
 
--- sle11sp3.orig/arch/x86/kernel/entry_64-xen.S	2011-10-06 13:06:38.000000000 +0200
+++ sle11sp3/arch/x86/kernel/entry_64-xen.S	2014-01-16 10:52:27.000000000 +0100
@@ -982,6 +982,20 @@ ENTRY(do_hypervisor_callback)   # do_hyp
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifndef CONFIG_PREEMPT
+	test %al,%al
+	jz   error_exit
+	GET_THREAD_INFO(%rdx)
+	cmpl $0,TI_preempt_count(%rdx)
+	jnz  error_exit
+	bt   $TIF_NEED_RESCHED,TI_flags(%rdx)
+	jnc  error_exit
+	bt   $9,EFLAGS-ARGOFFSET(%rsp)
+	jnc  error_exit
+	movb $0,PER_CPU_VAR(privcmd_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(privcmd_hcall)
+#endif
 	jmp  error_exit
 	CFI_ENDPROC
 END(do_hypervisor_callback)
--- sle11sp3.orig/drivers/xen/core/evtchn.c	2013-02-05 17:47:43.000000000 +0100
+++ sle11sp3/drivers/xen/core/evtchn.c	2014-01-15 13:42:02.000000000 +0100
@@ -379,7 +379,14 @@ static DEFINE_PER_CPU(unsigned int, curr
 #endif
 
 /* NB. Interrupts are disabled on entry. */
-asmlinkage void __irq_entry evtchn_do_upcall(struct pt_regs *regs)
+asmlinkage
+#ifdef CONFIG_PREEMPT
+void
+#define return(x) return
+#else
+bool
+#endif
+__irq_entry evtchn_do_upcall(struct pt_regs *regs)
 {
 	unsigned long       l1, l2;
 	unsigned long       masked_l1, masked_l2;
@@ -393,7 +400,7 @@ asmlinkage void __irq_entry evtchn_do_up
 		__this_cpu_or(upcall_state, UPC_NESTED_LATCH);
 		/* Avoid a callback storm when we reenable delivery. */
 		vcpu_info_write(evtchn_upcall_pending, 0);
-		return;
+		return(false);
 	}
 
 	old_regs = set_irq_regs(regs);
@@ -511,6 +518,9 @@ asmlinkage void __irq_entry evtchn_do_up
 	irq_exit();
 	xen_spin_irq_exit();
 	set_irq_regs(old_regs);
+
+	return(__this_cpu_read(privcmd_hcall) && in_hypercall(regs));
+#undef return
 }
 
 static int find_unbound_irq(unsigned int node, struct irq_cfg **pcfg,
--- sle11sp3.orig/drivers/xen/privcmd/privcmd.c	2012-12-12 12:05:51.000000000 +0100
+++ sle11sp3/drivers/xen/privcmd/privcmd.c	2014-01-16 10:01:23.000000000 +0100
@@ -23,6 +23,18 @@
 #include <xen/interface/xen.h>
 #include <xen/xen_proc.h>
 #include <xen/features.h>
+#include <xen/evtchn.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, privcmd_hcall);
+#endif
+
+static inline void _privcmd_hcall(bool state)
+{
+#ifndef CONFIG_PREEMPT
+	this_cpu_write(privcmd_hcall, state);
+#endif
+}
 
 static struct proc_dir_entry *privcmd_intf;
 static struct proc_dir_entry *capabilities_intf;
@@ -97,6 +109,7 @@ static long privcmd_ioctl(struct file *f
 		ret = -ENOSYS;
 		if (hypercall.op >= (PAGE_SIZE >> 5))
 			break;
+		_privcmd_hcall(true);
 		ret = _hypercall(long, (unsigned int)hypercall.op,
 				 (unsigned long)hypercall.arg[0],
 				 (unsigned long)hypercall.arg[1],
@@ -104,8 +117,10 @@ static long privcmd_ioctl(struct file *f
 				 (unsigned long)hypercall.arg[3],
 				 (unsigned long)hypercall.arg[4]);
 #else
+		_privcmd_hcall(true);
 		ret = privcmd_hypercall(&hypercall);
 #endif
+		_privcmd_hcall(false);
 	}
 	break;
 
--- sle11sp3.orig/include/xen/evtchn.h	2011-12-09 15:38:45.000000000 +0100
+++ sle11sp3/include/xen/evtchn.h	2014-01-15 14:32:14.000000000 +0100
@@ -143,7 +143,13 @@ void irq_resume(void);
 #endif
 
 /* Entry point for notifications into Linux subsystems. */
-asmlinkage void evtchn_do_upcall(struct pt_regs *regs);
+asmlinkage
+#ifdef CONFIG_PREEMPT
+void
+#else
+bool
+#endif
+evtchn_do_upcall(struct pt_regs *regs);
 
 /* Mark a PIRQ as unavailable for dynamic allocation. */
 void evtchn_register_pirq(int irq);
@@ -221,6 +227,8 @@ void notify_remote_via_ipi(unsigned int 
 void clear_ipi_evtchn(void);
 #endif
 
+DECLARE_PER_CPU(bool, privcmd_hcall);
+
 #if defined(CONFIG_XEN_SPINLOCK_ACQUIRE_NESTING) \
     && CONFIG_XEN_SPINLOCK_ACQUIRE_NESTING
 void xen_spin_irq_enter(void);
--- sle11sp3.orig/kernel/sched.c	2014-01-10 14:11:39.000000000 +0100
+++ sle11sp3/kernel/sched.c	2014-01-16 11:05:05.000000000 +0100
@@ -4690,6 +4690,9 @@ asmlinkage void __sched notrace preempt_
 }
 EXPORT_SYMBOL(preempt_schedule);
 
+#endif
+#if defined(CONFIG_PREEMPT) || defined(CONFIG_XEN)
+
 /*
  * this is the entry point to schedule() from kernel preemption
  * off of irq context.
@@ -4699,6 +4702,14 @@ EXPORT_SYMBOL(preempt_schedule);
 asmlinkage void __sched preempt_schedule_irq(void)
 {
 	struct thread_info *ti = current_thread_info();
+#ifdef CONFIG_XEN//temp
+static DEFINE_PER_CPU(unsigned long, cnt);
+static DEFINE_PER_CPU(unsigned long, thr);
+if(__this_cpu_inc_return(cnt) > __this_cpu_read(thr)) {
+ __this_cpu_or(thr, __this_cpu_read(cnt));
+ printk("psi[%02u] %08x:%d #%lx\n", raw_smp_processor_id(), ti->preempt_count, need_resched(), __this_cpu_read(cnt));
+}
+#endif
 
 	/* Catch callers which need to be fixed */
 	BUG_ON(ti->preempt_count || !irqs_disabled());
--=__Part6D5EE53E.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6D5EE53E.0__=--


From xen-devel-bounces@lists.xen.org Thu Jan 16 11:29:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3l7q-0007L1-R0; Thu, 16 Jan 2014 11:28:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W3l7o-0007KL-Bv; Thu, 16 Jan 2014 11:28:40 +0000
Received: from [85.158.139.211:50125] by server-15.bemta-5.messagelabs.com id
	94/EA-08490-762C7D25; Thu, 16 Jan 2014 11:28:39 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389871718!10119819!1
X-Originating-IP: [209.85.217.180]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22880 invoked from network); 16 Jan 2014 11:28:38 -0000
Received: from mail-lb0-f180.google.com (HELO mail-lb0-f180.google.com)
	(209.85.217.180)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:28:38 -0000
Received: by mail-lb0-f180.google.com with SMTP id n15so1746666lbi.25
	for <multiple recipients>; Thu, 16 Jan 2014 03:28:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=/3NBddMFWKbty/jJvL2z6kbrEszi1Zvngj6QJLhggEc=;
	b=UA6jdcgbQ1/+BgYRD9nObxkSXGxuvNDrEIwNUiDb/IfzieTEABcVNc8mchcQKVDDQ5
	P39tFkYV49/icakDgrJEsAEHN/uwJQu/bb8R5isxG241ZUYYZe5o+6l0UMthxD6SF8w3
	8W7zRJ44rKRtcLlpNMQeAMnRp/yFZuG3KHGiq3MJHlWMFwDUX5j6C1lMmZWNtI33XHn+
	rjJSo3VW+UfTJj5kShx2PWwADXZc7JZmtR6gmfjWcfdOHSSeGYEv0sZb2XkKm9qb0MM/
	pRcA8wpAwc6Jz3XwjIsYniowLT4+Z13H9Xh6g6OYfxFvz/ZZuINKctjgIts4ZiBDdzjW
	CkEw==
MIME-Version: 1.0
X-Received: by 10.152.42.230 with SMTP id r6mr4983104lal.18.1389871717815;
	Thu, 16 Jan 2014 03:28:37 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Thu, 16 Jan 2014 03:28:37 -0800 (PST)
Date: Thu, 16 Jan 2014 06:28:37 -0500
X-Google-Sender-Auth: Hlbaitrbqxkc9fOt8bRdiaoWCCk
Message-ID: <CAHehzX34qmd0LQzQuX6xg92V+4n7Y5GTpyKNySmpHyA2BBh2ww@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-devel@lists.xen.org, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk, 
	publicity@lists.xenproject.org
Subject: [Xen-devel] Xen Project Test Day: Test 4.4 RC2 on January 20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Release time is approaching, so Test Days have arrived!

On Monday, January 20, we are holding a Test Day for Xen 4.4 Release
Candidate 2.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions

XEN 4.4 FEATURE DEVELOPERS:

If you have a new feature which is cooked and ready for testing in
RC2, we need to know about it and how to test it.  Either edit the
instructions page or send me a few lines describing the feature and
how it should be tested.

Right now, RC2 is labelled a general test (e.g., "Does Xen compile,
install, and do the things Xen normally does?").  We don't have any
specific tests of new functionality identified.  If you have something
new which needs testing in RC2, we need to know about it.

EVERYONE:

Please join us on Monday, January 20, and help make sure the next
release of Xen is the best one yet!

Russ Pavlicek
Xen Project Evangelist

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:33:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lCh-0007fy-UW; Thu, 16 Jan 2014 11:33:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3lCg-0007fi-B0
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:33:42 +0000
Received: from [85.158.139.211:57697] by server-11.bemta-5.messagelabs.com id
	99/94-23268-593C7D25; Thu, 16 Jan 2014 11:33:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389872019!10093460!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6619 invoked from network); 16 Jan 2014 11:33:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:33:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93424847"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:33:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:33:38 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3lCc-0003Ka-CI;
	Thu, 16 Jan 2014 11:33:38 +0000
Date: Thu, 16 Jan 2014 11:33:38 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140116113338.GS5698@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<20140116001706.GG5331@zion.uk.xensource.com>
	<52D7AC3C.6080101@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D7AC3C.6080101@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, 2014 at 09:54:04AM +0000, Andrew Bennieston wrote:
> On 16/01/14 00:17, Wei Liu wrote:
> >On Wed, Jan 15, 2014 at 04:23:21PM +0000, Andrew J. Bennieston wrote:
> >[...]
> >>+
> >>+struct xenvif_queue { /* Per-queue data for xenvif */
> >>+	unsigned int number; /* Queue number, 0-based */
> >
> >Use "id" instead?
> 
> Ok; I suppose number implies "the number of queues", not "which queue is
> this?"
> 

Right. "id" sounds more intuitive to me. And it saves you from some
typing, big win! ;-)

> >
> >>+	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
> >>+	struct xenvif *vif; /* Parent VIF */
[...]
> >>
> >>@@ -150,14 +152,27 @@ struct xenvif {
> >>  	 */
> >>  	RING_IDX rx_req_cons_peek;
> >>
> >>-	/* This array is allocated seperately as it is large */
> >>-	struct gnttab_copy *grant_copy_op;
> >>+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> >
> >Any reason to switch back to array inside structure? This array is
> >really large.
> >
> 
> It was moved to a separate vmalloc because it was large, but now the
> array of queues is allocated through vmalloc anyway. I preferred to
> bring this back into the structure rather than have more allocations to
> track and remember to free at all relevant points. If there is any
> significant reason to split this out I'm happy to do so...
> 

OK, if everything is allocated via vmalloc then I think it's fine.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:41:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:41:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lKP-00085p-OY; Thu, 16 Jan 2014 11:41:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3lKO-00085b-5S
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 11:41:40 +0000
Received: from [193.109.254.147:22120] by server-3.bemta-14.messagelabs.com id
	E5/3C-11000-375C7D25; Thu, 16 Jan 2014 11:41:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389872497!11197234!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24694 invoked from network); 16 Jan 2014 11:41:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:41:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91325861"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 11:41:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 06:41:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3lK3-000466-DC;
	Thu, 16 Jan 2014 11:41:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3lK2-0004oX-O9;
	Thu, 16 Jan 2014 11:41:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24387-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 11:41:18 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24387: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1889251731056662637=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1889251731056662637==
Content-Type: text/plain

flight 24387 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24387/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24311
 test-amd64-i386-freebsd10-i386 7 freebsd-install running in 24384 [st=running!]

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 24384

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail like 24315
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24315
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24384 like 24315

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24384 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24384 never pass

version targeted for testing:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
baseline version:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937

------------------------------------------------------------
People who touched revisions under test:
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Honggang Li <honli@redhat.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Michael Chan <mchan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nat Gurumoorthy <natg@google.com>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Salva Peiró <speiro@ai2.upv.es>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Simon Horman <horms+renesas@verge.net.au>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 776 lines long.)


--===============1889251731056662637==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1889251731056662637==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:44:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lMs-0008IG-W5; Thu, 16 Jan 2014 11:44:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3lMr-0008I6-Qq
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 11:44:14 +0000
Received: from [85.158.137.68:50881] by server-9.bemta-3.messagelabs.com id
	B7/ED-13104-D06C7D25; Thu, 16 Jan 2014 11:44:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389872652!9517815!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23045 invoked from network); 16 Jan 2014 11:44:12 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 11:44:12 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389872652; l=945;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=DT1LLRGWMpI2lnqhfj/lkxtvPLg=;
	b=l58mkmUN/UBT7ExGKkR5aCt6YMSeYkG9YhMx89dEZ7zJTVyBTQatmjklK1ocE08esCW
	ByfXisQL0J0TdwBZTC4HCHKf766ddEKw/fi1HRzQdJIG6eNHv09LsbEY/zEzEB1+gLFvC
	08WezfqwU6taKTPHfQpH/bhCQ4FWS03uAMc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVY9SQsClBrtp5eFYXU7YfNKMAfocpEeot7SjdQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10a1:4201:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id L012e7q0GBiCWTs ; 
	Thu, 16 Jan 2014 12:44:12 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D92F150267; Thu, 16 Jan 2014 12:44:11 +0100 (CET)
Date: Thu, 16 Jan 2014 12:44:11 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140116114411.GA29530@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389869890.6697.6.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, Ian Campbell wrote:

> Not quite. Each xvd[a-d] creates both a PV and an emulated IDE device
> hd[a-d], which refer to the same underlying volume.
> 
> This allows you to boot from hda, do an unplug and then switch to using
> xvda.

Not really: in the end hda is connected to the emulated IDE, so today
it's really "sda" in domU because pata_piix will drive it. sda was
connected to the emulated LSI SCSI controller. But xvda was not connected
to any emulated controller; it's PV only. That's how it is done with
qemu-trad. So having hda and xvda in the same config was working, and
maybe even supported?

With qemu-upstream this apparently changed. I'm not saying this change
in behaviour is good or bad, just that something changed. Some people
still use the kernel device names instead of UUID or LABEL, so they have
to adjust their config in domU and also in domU.cfg before they switch
from qemu-trad to qemu-upstream.
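
For illustration, the kind of configuration under discussion looks
roughly like this (a hypothetical minimal domU.cfg sketch; the volume
path and device names are placeholders, not from this thread):

```
# Hypothetical domU.cfg fragment: the same underlying volume exposed
# both as an emulated IDE disk (hda) and as a PV disk (xvda).
# With qemu-trad, hda is emulated-only (the guest typically sees it as
# /dev/sda via pata_piix) while xvda is PV-only; qemu-upstream changes
# how these vdev names map to devices in the guest.
disk = [ 'phy:/dev/vg0/guest-disk,hda,w',
         'phy:/dev/vg0/guest-disk,xvda,w' ]
```

Inside the guest, mounting by UUID or LABEL in /etc/fstab instead of by
kernel name sidesteps the whole question of whether the disk shows up
as sda, hda or xvda.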


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:44:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lMp-0008Ho-F3; Thu, 16 Jan 2014 11:44:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3lMn-0008Hj-OX
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 11:44:09 +0000
Received: from [85.158.137.68:21947] by server-8.bemta-3.messagelabs.com id
	17/44-31081-806C7D25; Thu, 16 Jan 2014 11:44:08 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389872646!9516433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 584 invoked from network); 16 Jan 2014 11:44:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:44:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93426940"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:44:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:44:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W3lMi-0003UH-NQ; Thu, 16 Jan 2014 11:44:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 16 Jan 2014 11:44:03 +0000
Message-ID: <1389872643-24987-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/cpu: Reduce boot-time logspam
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These messages are printed per-cpu with no cpu-identifying information.  There
is no reason to print them for each cpu, so restrict them to cpu 0 only.
Furthermore, "MCE support disabled by bootparam" really doesn't warrant having
a file:line reference.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>

---

Discovered when testing Xen-4.4-rc2 on a Haswell SDP platform with some MCE
and C-state quirks.  We have still not debugged whether it is the SDP or Xen
which is at fault, but Xen 4.3 and 4.4 need "max_cstate=1 mce=off" to
boot successfully on it.

I am on the fence as to whether this should be proposed for inclusion into
4.4.  Boot-time logspam is not a massive problem, but on the other hand it
counts against "#2 An awesome release", and the fixes are quite obvious
with a very low risk of introducing further bugs.
---
 xen/arch/x86/cpu/mcheck/mce.c |    4 ++--
 xen/arch/x86/cpu/mwait-idle.c |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index b375ef7..6726ec2 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -729,8 +729,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool_t bsp)
 {
     enum mcheck_type inited = mcheck_none;
 
-    if (mce_disabled == 1) {
-        dprintk(XENLOG_INFO, "MCE support disabled by bootparam\n");
+    if (bsp && mce_disabled == 1) {
+        printk(XENLOG_INFO "MCE support disabled by bootparam\n");
         return;
     }
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 85179f2..a145289 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -540,7 +540,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
 		state = MWAIT_HINT2CSTATE(hint) + 1;
 		substate = MWAIT_HINT2SUBSTATE(hint);
 
-		if (state > max_cstate) {
+		if (cpu == 0 && state > max_cstate) {
 			printk(PREFIX "max C-state %u reached\n", max_cstate);
 			break;
 		}
@@ -552,7 +552,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
 		if (substate >= num_substates)
 			continue;
 
-		if (dev->count >= ACPI_PROCESSOR_MAX_POWER) {
+		if (cpu == 0 && dev->count >= ACPI_PROCESSOR_MAX_POWER) {
 			printk(PREFIX "max C-state count of %u reached\n",
 			       ACPI_PROCESSOR_MAX_POWER);
 			break;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:44:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lMp-0008Ho-F3; Thu, 16 Jan 2014 11:44:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W3lMn-0008Hj-OX
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 11:44:09 +0000
Received: from [85.158.137.68:21947] by server-8.bemta-3.messagelabs.com id
	17/44-31081-806C7D25; Thu, 16 Jan 2014 11:44:08 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389872646!9516433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 584 invoked from network); 16 Jan 2014 11:44:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:44:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93426940"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:44:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:44:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W3lMi-0003UH-NQ; Thu, 16 Jan 2014 11:44:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 16 Jan 2014 11:44:03 +0000
Message-ID: <1389872643-24987-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/cpu: Reduce boot-time logspam
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These messages are printed per-cpu with no cpu identifying information.  There
is no reason to print them for each cpu, so restrict them to cpu 0 only.
Furthermore, "MCE support disabled by bootparam" really doesn't warrant having
a file:line reference.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>

---

Discovered when testing Xen-4.4-rc2 on a Haswell SDP platform with some MCE
and Cstate quirks.  We have not yet debugged whether the SDP or Xen is at
fault, but Xen 4.3 and 4.4 need "max_cstate=1 mce=off" to
successfully boot on it.

I am on the fence as to whether this should be proposed for inclusion into
4.4.  Boot-time logspam is not a massive problem, but on the other hand, it
works against "#2 An awesome release", and the fixes are quite obvious
with a very low risk of hidden further bugs.
---
 xen/arch/x86/cpu/mcheck/mce.c |    4 ++--
 xen/arch/x86/cpu/mwait-idle.c |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index b375ef7..6726ec2 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -729,8 +729,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool_t bsp)
 {
     enum mcheck_type inited = mcheck_none;
 
-    if (mce_disabled == 1) {
-        dprintk(XENLOG_INFO, "MCE support disabled by bootparam\n");
+    if (bsp && mce_disabled == 1) {
+        printk(XENLOG_INFO "MCE support disabled by bootparam\n");
         return;
     }
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 85179f2..a145289 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -540,7 +540,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
 		state = MWAIT_HINT2CSTATE(hint) + 1;
 		substate = MWAIT_HINT2SUBSTATE(hint);
 
-		if (state > max_cstate) {
+		if (cpu == 0 && state > max_cstate) {
 			printk(PREFIX "max C-state %u reached\n", max_cstate);
 			break;
 		}
@@ -552,7 +552,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
 		if (substate >= num_substates)
 			continue;
 
-		if (dev->count >= ACPI_PROCESSOR_MAX_POWER) {
+		if (cpu == 0 && dev->count >= ACPI_PROCESSOR_MAX_POWER) {
 			printk(PREFIX "max C-state count of %u reached\n",
 			       ACPI_PROCESSOR_MAX_POWER);
 			break;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:44:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:44:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lNY-0008Pn-Mu; Thu, 16 Jan 2014 11:44:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W3lNW-0008PP-Jm
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:44:54 +0000
Received: from [193.109.254.147:46030] by server-1.bemta-14.messagelabs.com id
	FF/10-15600-536C7D25; Thu, 16 Jan 2014 11:44:53 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1389872691!7766065!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22970 invoked from network); 16 Jan 2014 11:44:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:44:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93427060"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:44:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:44:50 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W3lNS-0003V3-C2;
	Thu, 16 Jan 2014 11:44:50 +0000
Date: Thu, 16 Jan 2014 11:44:50 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140116114450.GT5698@zion.uk.xensource.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
	<20140116002706.GJ5331@zion.uk.xensource.com>
	<52D7B379.201@citrix.com> <52D7B6F1.1040604@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D7B6F1.1040604@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, paul.durrant@citrix.com,
	Andrew Bennieston <andrew.bennieston@citrix.com>,
	ian.campbell@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support
	for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, 2014 at 10:39:45AM +0000, David Vrabel wrote:
> On 16/01/14 10:24, Andrew Bennieston wrote:
> > On 16/01/14 00:27, Wei Liu wrote:
> >> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
> >> 
> >>> +            goto error;
> >>> +        }
> >>> +        snprintf(path, pathsize, "%s/queue-%u",
> >>> +                dev->nodename, queue->number);
> >>> +    }
> >>> +    else
> >>> +        path = (char *)dev->nodename;
> >>
> >> Coding style. Should be surrounded by {};
> > 
> > OK.
> 
> Linux style is single line blocks are not surrounded by braces.  You

Not if the other branch contains multiple statements and is surrounded by
braces. See Documentation/CodingStyle line 169. ;-)

Wei.

> should have the else on the same line as the preceding } though.
> 
> i.e.,
> 
> if (...) {
>    one_line()
>    two_line()
>    red_line()
>    blue_line()
> } else
>    a_line()
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:48:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lQu-0000IK-HE; Thu, 16 Jan 2014 11:48:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3lQs-0000Hm-7H
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 11:48:22 +0000
Received: from [85.158.139.211:46210] by server-14.bemta-5.messagelabs.com id
	CC/81-24200-507C7D25; Thu, 16 Jan 2014 11:48:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389872897!10125426!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5132 invoked from network); 16 Jan 2014 11:48:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:48:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91327306"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 11:48:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 06:48:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3lQm-00048B-1j;
	Thu, 16 Jan 2014 11:48:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3lQl-00006h-UQ;
	Thu, 16 Jan 2014 11:48:15 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24386-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 11:48:15 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24386: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24386 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24386/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv          16 guest-start.2               fail pass in 24382
 test-amd64-i386-freebsd10-amd64 13 guest-localmigrate/x10 fail in 24382 pass in 24386

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail like 24375
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24382 like 24380

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24382 never pass

version targeted for testing:
 xen                  c04c825bdf1e946260cba325eeed993004051050
baseline version:
 xen                  cfad12fd29637e4e5ede4a9ff15e2777e7a7d7b2

------------------------------------------------------------
People who touched revisions under test:
  Anil Madhavapeddy <anil@recoil.org>
  David Scott <dave.scott@eu.citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=c04c825bdf1e946260cba325eeed993004051050
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable c04c825bdf1e946260cba325eeed993004051050
+ branch=xen-unstable
+ revision=c04c825bdf1e946260cba325eeed993004051050
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git c04c825bdf1e946260cba325eeed993004051050:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   cfad12f..c04c825  c04c825bdf1e946260cba325eeed993004051050 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable c04c825bdf1e946260cba325eeed993004051050
+ branch=xen-unstable
+ revision=c04c825bdf1e946260cba325eeed993004051050
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git c04c825bdf1e946260cba325eeed993004051050:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   cfad12f..c04c825  c04c825bdf1e946260cba325eeed993004051050 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:50:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:50:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lSQ-0000r5-RK; Thu, 16 Jan 2014 11:49:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3lSP-0000o9-Jm
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 11:49:57 +0000
Received: from [85.158.137.68:48015] by server-3.bemta-3.messagelabs.com id
	0A/8E-10658-467C7D25; Thu, 16 Jan 2014 11:49:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1389872994!5840050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11364 invoked from network); 16 Jan 2014 11:49:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:49:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93428094"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 11:49:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:49:53 -0500
Message-ID: <1389872992.6697.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 16 Jan 2014 11:49:52 +0000
In-Reply-To: <20140116114411.GA29530@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
	<20140116114411.GA29530@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 12:44 +0100, Olaf Hering wrote:
> On Thu, Jan 16, Ian Campbell wrote:
> 
> > Not quite. Each xvd[a-d] creates both a PV and an emulated IDE device
> > hd[a-d], which refer to the same underlying volume.
> > 
> > This allows you to boot from hda, do an unplug and then switch to using
> > xvda.
> 
> Not really: In the end hda is connected to the emulated IDE, so today
> it's really "sda" in domU because pata_piix will drive it.

true.

> sda was connected to emulated LSI SCSI.

Where did this come from? Did you request it somehow? I don't believe
there was ever an emulated SCSI controller by default.

Are you sure that this sda wasn't referring to the same backing volume
as xvda?

> But xvda was not connected to any
> emulated controller, it's PV only.

Are you sure it wasn't showing up with a different name? Or perhaps by
enabling LSI SCSI you have suppressed the default IDE controllers.

>  That's how it is done with qemu-trad.

It was not *the* way though.

> So having hda and xvda in the same config was working, and maybe even
> supported?

Maybe working, but not supported IMHO.

> With qemu-upstream this apparently changed. I'm not saying this change
> in behaviour is good or bad, just that something changed. Some people
> still use the kernel names instead of UUID or LABEL. So they have to
> adjust their config in domU and also in domU.cfg before they switch from
> qemu-trad to qemu-upstream.
> 
> 
> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 11:55:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 11:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lXi-00024D-Mh; Thu, 16 Jan 2014 11:55:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W3lXi-000248-5E
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 11:55:26 +0000
Received: from [85.158.139.211:32634] by server-9.bemta-5.messagelabs.com id
	68/2C-15098-DA8C7D25; Thu, 16 Jan 2014 11:55:25 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389873323!9934126!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12727 invoked from network); 16 Jan 2014 11:55:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 11:55:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91329065"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 11:55:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 06:55:22 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W3lXd-0003dM-Ti; Thu, 16 Jan 2014 11:55:21 +0000
Message-ID: <52D7C8A9.5030609@citrix.com>
Date: Thu, 16 Jan 2014 11:55:21 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-2-git-send-email-andrew.bennieston@citrix.com>
	<20140116001706.GG5331@zion.uk.xensource.com>
	<52D7AC3C.6080101@citrix.com>
	<20140116113338.GS5698@zion.uk.xensource.com>
In-Reply-To: <20140116113338.GS5698@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific
 data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 11:33, Wei Liu wrote:
> On Thu, Jan 16, 2014 at 09:54:04AM +0000, Andrew Bennieston wrote:
>> On 16/01/14 00:17, Wei Liu wrote:
>>> On Wed, Jan 15, 2014 at 04:23:21PM +0000, Andrew J. Bennieston wrote:
>>> [...]
>>>> +
>>>> +struct xenvif_queue { /* Per-queue data for xenvif */
>>>> +	unsigned int number; /* Queue number, 0-based */
>>>
>>> Use "id" instead?
>>
>> Ok; I suppose number implies "the number of queues", not "which queue is
>> this?"
>>
>
> Right. "id" sounds more intuitive to me. And it saves you from some
> typing, big win! ;-)

Actually it's more typing, since I've already typed "number" (and I 
don't think :%s/number/id/g is a safe thing to do!)... but I'll change 
it anyway. :)

Andrew

>
>>>
>>>> +	char name[IFNAMSIZ+4]; /* DEVNAME-qN */
>>>> +	struct xenvif *vif; /* Parent VIF */
> [...]
>>>>
>>>> @@ -150,14 +152,27 @@ struct xenvif {
>>>>   	 */
>>>>   	RING_IDX rx_req_cons_peek;
>>>>
>>>> -	/* This array is allocated seperately as it is large */
>>>> -	struct gnttab_copy *grant_copy_op;
>>>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>>>
>>> Any reason to switch back to array inside structure? This array is
>>> really large.
>>>
>>
>> It was moved to a separate vmalloc because it was large, but now the
>> array of queues is allocated through vmalloc anyway. I preferred to
>> bring this back into the structure rather than have more allocations to
>> track and remember to free at all relevant points. If there is any
>> significant reason to split this out I'm happy to do so...
>>
>
> OK, if everything is allocated via vmalloc then I think it's fine.
>
> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 12:01:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 12:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ldB-0002Tb-Kl; Thu, 16 Jan 2014 12:01:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3ld8-0002TI-JZ
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 12:01:02 +0000
Received: from [193.109.254.147:29616] by server-15.bemta-14.messagelabs.com
	id 36/EF-22186-DF9C7D25; Thu, 16 Jan 2014 12:01:01 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389873660!11245093!1
X-Originating-IP: [81.169.146.219]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13600 invoked from network); 16 Jan 2014 12:01:01 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.219)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 12:01:01 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389873660; l=1015;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=AsF0WubK91yPCaW+qQs5OJyUiXM=;
	b=c5B3OZeRMAglP8zq0Z4hcJSIYHlrtTW+uEQpm50ZaCuSQTRK/NLhQvkuQv5ZfvAcJZ5
	djbJbi6b/Q4Cd/RRSPpiAqwAuEiJPurQ7F+ibNNR0LRZxROtm0/r50n1KkpR/Ww50kqF6
	sNDr2FlfzT3UhEmkMESeobU5jxxm7Q3Ld6I=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVY9SQsClBrtp5eFYXU7YfNKMAfocpEeot7SjdQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10a1:4201:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id z0067cq0GC10XR3 ; 
	Thu, 16 Jan 2014 13:01:00 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5A01C50267; Thu, 16 Jan 2014 13:01:00 +0100 (CET)
Date: Thu, 16 Jan 2014 13:01:00 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140116120100.GA31437@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
	<20140116114411.GA29530@aepfle.de>
	<1389872992.6697.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389872992.6697.9.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, Ian Campbell wrote:

> > sda was connected to emulated LSI SCSI.
> 
> Where did this come from? Did you request it somehow? I don't believe
> there was ever an emulated SCSI controller by default.
> 
> Are you sure that this sda wasn't referring to the same backing volume
> as xvda?
 
Giving sda as the name will instruct qemu-trad to create an LSI controller.
At least with SLES it cannot be used, because the unplug code shuts down
the controller and that also removes the disks. And the PV driver does
not claim them. Something like that; I don't remember the details.

> > But xvda was not connected to any
> > emulated controller, its PV only.
> 
> Are you sure it wasn't showing up with a different name? Or perhaps by
> enabling LSI SCSI you have suppressed the default IDE controllers.

No, having hda, sda and xvda would (in theory) give three disks (IDE,
SCSI and PV). And with just xvda, qemu-trad will not boot because it's not
connected to an emulated controller.
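For illustration, a domU config fragment of the kind under discussion might look like this (the backing paths are hypothetical; each vdev name selects the device model seen by the guest):

```
disk = [ 'phy:/dev/vg0/disk0,hda,w',
         'phy:/dev/vg0/disk1,sda,w',
         'phy:/dev/vg0/disk2,xvda,w' ]
```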


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 12:01:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 12:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ldB-0002Tb-Kl; Thu, 16 Jan 2014 12:01:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W3ld8-0002TI-JZ
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 12:01:02 +0000
Received: from [193.109.254.147:29616] by server-15.bemta-14.messagelabs.com
	id 36/EF-22186-DF9C7D25; Thu, 16 Jan 2014 12:01:01 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389873660!11245093!1
X-Originating-IP: [81.169.146.219]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13600 invoked from network); 16 Jan 2014 12:01:01 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.219)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 12:01:01 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389873660; l=1015;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=AsF0WubK91yPCaW+qQs5OJyUiXM=;
	b=c5B3OZeRMAglP8zq0Z4hcJSIYHlrtTW+uEQpm50ZaCuSQTRK/NLhQvkuQv5ZfvAcJZ5
	djbJbi6b/Q4Cd/RRSPpiAqwAuEiJPurQ7F+ibNNR0LRZxROtm0/r50n1KkpR/Ww50kqF6
	sNDr2FlfzT3UhEmkMESeobU5jxxm7Q3Ld6I=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssVY9SQsClBrtp5eFYXU7YfNKMAfocpEeot7SjdQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10a1:4201:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id z0067cq0GC10XR3 ; 
	Thu, 16 Jan 2014 13:01:00 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5A01C50267; Thu, 16 Jan 2014 13:01:00 +0100 (CET)
Date: Thu, 16 Jan 2014 13:01:00 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140116120100.GA31437@aepfle.de>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
	<20140116114411.GA29530@aepfle.de>
	<1389872992.6697.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389872992.6697.9.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, Ian Campbell wrote:

> > sda was connected to emulated LSI SCSI.
> 
> Where did this come from? Did you request it somehow? I don't believe
> there was ever an emulated SCSI controller by default.
> 
> Are you sure that this sda wasn't referring to the same backing volume
> as xvda?
 
Giving sda as the name instructs qemu-trad to create an LSI controller.
At least with SLES it cannot be used, because the unplug code shuts down
the controller, which also removes the disks, and the PV driver does
not claim them. Something like that; I don't remember the details.
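For readers following along, the naming convention described above can be sketched in an xl/xm-style disk stanza; the paths are illustrative, not taken from this thread:

```
# vdev selects the in-guest disk flavour (paths illustrative):
disk = [ "file:/images/guest.img,hda,w" ]   # emulated IDE disk
#disk = [ "file:/images/guest.img,sda,w" ]  # qemu-trad: emulated LSI SCSI
#disk = [ "file:/images/guest.img,xvda,w" ] # PV only, no emulated controller
```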

> > But xvda was not connected to any
> > emulated controller, it's PV only.
> 
> Are you sure it wasn't showing up with a different name? Or perhaps by
> enabling LSI SCSI you have suppressed the default IDE controllers.

No, having hda, sda and xvda would (in theory) give three disks (IDE,
SCSI and PV). And with just xvda, qemu-trad will not boot because it's
not connected to an emulated controller.


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 12:19:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 12:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3lue-0003sf-5r; Thu, 16 Jan 2014 12:19:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3lua-0003sa-2g
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 12:19:06 +0000
Received: from [193.109.254.147:23428] by server-6.bemta-14.messagelabs.com id
	3C/A5-14958-73EC7D25; Thu, 16 Jan 2014 12:19:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389874741!11281413!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21775 invoked from network); 16 Jan 2014 12:19:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 12:19:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93436834"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 12:19:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 07:19:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3luW-0003zQ-1p;
	Thu, 16 Jan 2014 12:19:00 +0000
Date: Thu, 16 Jan 2014 12:17:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Qin Li <qin.l.li@oracle.com>
In-Reply-To: <52D6566C.5050302@oracle.com>
Message-ID: <alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
References: <52D6566C.5050302@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

CC'ing xen-devel and Roger.

On Wed, 15 Jan 2014, Qin Li wrote:
> Hi Stefano,
> 
> How do you do?
> 
> Currently, Solaris only works as a PV-on-HVM guest on Xen, but recently,

Actually, there is now a better way of running Solaris on Xen: PVH.

http://wiki.xen.org/wiki/Xen_Overview#PV_in_an_HVM_Container_.28PVH.29_-_New_in_Xen_4.4

Roger already ported FreeBSD to Xen as a PVH guest:

http://marc.info/?l=freebsd-current&m=138971161228874&w=2


> we decided to change this situation by working out the 2 features below
> for the Solaris guest OS:
> 
> /* x86: Does this Xen host support the HVM callback vector type? */
> #define XENFEAT_hvm_callback_vector 8
> 
> /* x86: pvclock algorithm is safe to use on HVM */
> #define XENFEAT_hvm_safe_pvclock

FYI both these features are available and used by PVH guests.


> For XENFEAT_hvm_callback_vector, it's straightforward: the original
> apic interrupt handler needs to be registered in each vCPU's IDT.
> But for XENFEAT_hvm_safe_pvclock, I have some confusion, as follows:
> . Why does the pvclock implementation within the guest OS have to
> depend on XENFEAT_hvm_callback_vector?

Because you need to be able to receive timer interrupts on multiple
vcpus, and without XENFEAT_hvm_callback_vector you would only receive
interrupts from the Xen Platform PCI device.


> . For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
> is already visible. Does the guest OS still need to take any action to
> ask the hypervisor to update this piece of memory periodically?

I don't think you need to ask the hypervisor to update vcpu_time_info
periodically; what gave you that idea?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 12:57:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 12:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3mVS-0006K4-II; Thu, 16 Jan 2014 12:57:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3mVQ-0006Jw-1q
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 12:57:08 +0000
Received: from [193.109.254.147:8003] by server-10.bemta-14.messagelabs.com id
	D8/8F-20752-327D7D25; Thu, 16 Jan 2014 12:57:07 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389877025!11311164!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20540 invoked from network); 16 Jan 2014 12:57:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 12:57:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91345743"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 12:57:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 07:57:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3mVL-0004Ut-BY;
	Thu, 16 Jan 2014 12:57:03 +0000
Date: Thu, 16 Jan 2014 12:56:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Nathan Whitehorn <nwhitehorn@freebsd.org>
In-Reply-To: <52D73C4E.2080306@freebsd.org>
Message-ID: <alpine.DEB.2.02.1401161245250.21510@kaball.uk.xensource.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	gibbs@freebsd.org, Warner Losh <imp@bsdimp.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Jan 2014, Nathan Whitehorn wrote:
> On 01/15/14 10:24, Julien Grall wrote:
> > On 01/15/2014 01:26 AM, Warner Losh wrote:
> > > On Jan 14, 2014, at 2:01 PM, Julien Grall wrote:
> > > > This new support brings 2 open questions (for both the Xen and
> > > > FreeBSD communities).
> > > > When a new guest is created, the toolstack will generate a device
> > > > tree which will contain:
> > > > 	- The amount of memory
> > > > 	- The description of the platform (gic, timer, hypervisor node)
> > > > 	- A PSCI node for SMP bringup
> > > > 
> > > > Until now, Xen on ARM supported only Linux-based OSes. When device
> > > > tree support was added in Xen for guests, we chose to use the Linux
> > > > device tree bindings (gic, timer, ...). It seems that FreeBSD chose
> > > > a different way to implement device tree support:
> > > > 	- strictly respect ePAPR (for the interrupt-parent property)
> > > > 	- different gic bindings (only 1 interrupt cell)
> > > > 
> > > > I would like to come up with a common device tree specification
> > > > (bindings, ...) across every operating system. I know the Linux
> > > > community is working on moving the device tree bindings out of the
> > > > kernel tree. Does the FreeBSD community plan to work with the Linux
> > > > community for this purpose?
> > > We generally try to follow the common definitions for the FDT stuff.
> > > There are a few cases where we either lack the feature set of Linux, or
> > > where the Linux folks are moving quickly and changing the underlying
> > > definitions, where we wait for the standards to mature before we
> > > implement. In some cases, where maturity hasn't happened, or where the
> > > bindings are overly Linux-centric (which in theory they aren't supposed
> > > to be, but sometimes wind up that way), we've not implemented things.
> > As I understand it, the main bindings (gic, timer) are set in stone and
> > won't change. Ian Campbell has a repository with all the ARM bindings here:
> > http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=tree;f=Bindings/arm
> > 
> > To compare the difference between the DT provided by Xen and the one
> > correctly parsed by FreeBSD:
> > 	- Xen: http://xenbits.xen.org/people/julieng/xenvm-4.2.dts
> > 	- FreeBSD: http://xenbits.xen.org/people/julieng/xenvm-bsd.dts
> > 
> > From the Xen side:
> > 	- Every device should move under a simple-bus. I think that's harmless
> > for the Linux side.
> > 	- How about the hypervisor node? IMHO this node should also live under
> > the simple-bus.
> > 
> > From the FreeBSD side:
> > 	- The GIC and timer bindings need to be handled correctly (see below
> > for the GIC problem)
> > 	- Be less strict about the interrupt-parent property, e.g. look at the
> > grant-parent if there is no interrupt-parent property
> > 	- If the hypervisor doesn't live under simple-bus, should the
> > interrupt/memory retrieval be moved from simple-bus to nexus?
> > 
> > Before revision r260282 (I have added Nathan on cc), I was able to
> > use the Linux GIC bindings (see
> > http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=blob;f=Bindings/arm/gic.txt)
> > without any issue.
> > 
> > It seems the fdt bus now considers the number of interrupt cells to
> > always be 1.
> > 
> > 
> 
> Thanks for the CC. Could you explain what you mean by "grant-parent" etc?
> "interrupt-parent" is a fundamental part of the way PAPR (and ePAPR) work, so
> I'm very very hesitant to start guessing. I think things have broken for you
> because some (a lot, actually) of OF code does not expect #interrupt-cells to
> be more than 2. Some APIs might need updating, which I'll try to take care of.
> Could you tell me what the difference between SPI and PPI is, by the way?

SPI: Shared Peripheral Interrupt
PPI: Private Peripheral Interrupt

PPIs are per-processor interrupts: each CPU has its own private instance
of the interrupt, while an SPI is shared and can be routed to any CPU.


> On the subject of simple-bus, such nodes usually aren't necessary. For example, all
> hypervisor devices on IBM hardware live under /vdevice, which is attached to
> the device tree root. They don't use MMIO, so simple-bus doesn't really make
> sense. How does Xen communicate with the OS in these devices?

The PPI and the memory region advertised in the device tree under the
Xen hypervisor node are used to set up the basic infrastructure (grant
table, event channels) needed by Xen paravirtualized devices.
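For concreteness, a hypervisor node of the kind being discussed looks roughly like this; the addresses, version string and interrupt flags here are illustrative, not taken from this thread:

```
hypervisor {
	compatible = "xen,xen-4.4", "xen,xen";
	/* region reserved for the grant table */
	reg = <0xb0000000 0x20000>;
	/* event-channel upcall PPI, Linux GIC 3-cell form: <type nr flags> */
	interrupts = <1 15 0xf08>;
};
```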

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 13:05:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3md3-00074z-2p; Thu, 16 Jan 2014 13:05:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3md1-00074u-EE
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 13:04:59 +0000
Received: from [193.109.254.147:42673] by server-16.bemta-14.messagelabs.com
	id 53/B8-20600-AF8D7D25; Thu, 16 Jan 2014 13:04:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389877496!11302588!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19229 invoked from network); 16 Jan 2014 13:04:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 13:04:57 -0000
From xen-devel-bounces@lists.xen.org Thu Jan 16 13:05:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3md3-00074z-2p; Thu, 16 Jan 2014 13:05:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3md1-00074u-EE
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 13:04:59 +0000
Received: from [193.109.254.147:42673] by server-16.bemta-14.messagelabs.com
	id 53/B8-20600-AF8D7D25; Thu, 16 Jan 2014 13:04:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389877496!11302588!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19229 invoked from network); 16 Jan 2014 13:04:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 13:04:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="91349161"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 13:04:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 08:04:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3mcw-0004Wd-MC;
	Thu, 16 Jan 2014 13:04:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3mcw-0005eY-IT;
	Thu, 16 Jan 2014 13:04:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24393-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 13:04:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24393: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3823162927040390151=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3823162927040390151==
Content-Type: text/plain

flight 24393 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install      fail REGR. vs. 24340

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 24349
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============3823162927040390151==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3823162927040390151==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 13:27:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3myQ-0008Dp-5R; Thu, 16 Jan 2014 13:27:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W3myN-0008Dk-Qw
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 13:27:05 +0000
Received: from [85.158.143.35:31261] by server-3.bemta-4.messagelabs.com id
	BF/72-32360-72ED7D25; Thu, 16 Jan 2014 13:27:03 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389878821!12088642!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2299 invoked from network); 16 Jan 2014 13:27:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 13:27:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,667,1384300800"; d="scan'208";a="93459355"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 13:27:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 08:27:00 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W3myJ-0004x5-R9;
	Thu, 16 Jan 2014 13:26:59 +0000
Message-ID: <1389878814.1061.1.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Christoph Egger <chegger@amazon.de>, Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 16 Jan 2014 13:26:54 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function that exchanges the head is racy: the cmpxchg and the
subsequent comparison are not atomic.

Assume two threads, one (T1) inserting an element on the committed
list and another (T2) trying to consume it.
T1 starts inserting element A, storing the old head into its prev
pointer (the committed list links through mcte_prev), and is preempted
right after the cmpxchg succeeded.
T2 takes the list, modifying element A and updating the committed
list head.
T1 resumes and re-reads the prev pointer to compare it against the
result of the cmpxchg. The cmpxchg succeeded, but in the meantime
prev changed in memory, so the comparison fails: T1 wrongly assumes
the element was not inserted and tries to insert it again. By now,
however, A is in another state and must not be modified by T1.

To close the race, read the old head into a temporary variable and
use that for both the cmpxchg and the comparison. Note that the
compiler cannot cache the memory fetch, as cmpxchg implies a full
barrier.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/cpu/mcheck/mctelem.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index 37d830f..5821278 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -127,13 +127,15 @@ static DEFINE_PER_CPU(struct mc_telem_cpu_ctl, mctctl);
 static DEFINE_SPINLOCK(processing_lock);
 
 static void mctelem_xchg_head(struct mctelem_ent **headp,
-				struct mctelem_ent **old,
+				struct mctelem_ent **oldp,
 				struct mctelem_ent *new)
 {
+	struct mctelem_ent *old;
+
 	for (;;) {
-		*old = *headp;
+		*oldp = old = *headp;
 		wmb();
-		if (cmpxchgptr(headp, *old, new) == *old)
+		if (cmpxchgptr(headp, old, new) == old)
 			break;
 	}
 }
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 13:36:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3n7I-0000OT-6E; Thu, 16 Jan 2014 13:36:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3n7H-0000OO-Ao
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 13:36:15 +0000
Received: from [85.158.143.35:6225] by server-1.bemta-4.messagelabs.com id
	1D/A1-02132-E40E7D25; Thu, 16 Jan 2014 13:36:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389879055!12164548!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23021 invoked from network); 16 Jan 2014 13:30:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 13:30:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 13:30:55 +0000
Message-Id: <52D7ED1C0200007800114406@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 13:30:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1389872643-24987-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1389872643-24987-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] x86/cpu: Reduce boot-time logspam
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 12:44, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -729,8 +729,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool_t bsp)
>  {
>      enum mcheck_type inited = mcheck_none;
>  
> -    if (mce_disabled == 1) {
> -        dprintk(XENLOG_INFO, "MCE support disabled by bootparam\n");
> +    if (bsp && mce_disabled == 1) {
> +        printk(XENLOG_INFO "MCE support disabled by bootparam\n");

While I'm fine with this, ...

> --- a/xen/arch/x86/cpu/mwait-idle.c
> +++ b/xen/arch/x86/cpu/mwait-idle.c
> @@ -540,7 +540,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
>  		state = MWAIT_HINT2CSTATE(hint) + 1;
>  		substate = MWAIT_HINT2SUBSTATE(hint);
>  
> -		if (state > max_cstate) {
> +		if (cpu == 0 && state > max_cstate) {
>  			printk(PREFIX "max C-state %u reached\n", max_cstate);
>  			break;
>  		}
> @@ -552,7 +552,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
>  		if (substate >= num_substates)
>  			continue;
>  
> -		if (dev->count >= ACPI_PROCESSOR_MAX_POWER) {
> +		if (cpu == 0 && dev->count >= ACPI_PROCESSOR_MAX_POWER) {
>  			printk(PREFIX "max C-state count of %u reached\n",
>  			       ACPI_PROCESSOR_MAX_POWER);
>  			break;

... I object to both of these: There's no reason why the C-state
count could differ between CPUs. Hence I'd accept these being
guarded so they each get printed just once, but tying this to
CPU0 is wrong. And then, adding the CPU number would be a
natural thing to do.
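
A minimal sketch of that suggestion (a hypothetical userspace analogue; Xen's actual code uses printk() in per-CPU init paths, and the names here are made up for illustration):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical once-only guard: the warning fires for the first CPU that
 * hits the condition (whichever one that is), not just for CPU0, and the
 * CPU number is included in the message. Returns whether it printed. */
static bool max_cstate_warned;

static bool warn_max_cstate(unsigned int cpu, unsigned int max_cstate)
{
    if (max_cstate_warned)
        return false;
    max_cstate_warned = true;
    printf("cpu%u: max C-state %u reached\n", cpu, max_cstate);
    return true;
}
```

The guard is per-message rather than per-CPU, so the log still shows which CPU first tripped the limit without repeating for every other CPU.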

Also, with this being a clone of Linux code (with which I sync from
time to time), I'd really expect such changes to go through there.
Of course, if you see the messages with Xen but not with a suitable
Linux equivalent, then we'd have to look into why you see them...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 16 13:42:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nDR-00011I-6s; Thu, 16 Jan 2014 13:42:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W3nDP-00011D-Qp
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 13:42:36 +0000
Received: from [85.158.137.68:63446] by server-4.bemta-3.messagelabs.com id
	2E/4C-10414-AC1E7D25; Thu, 16 Jan 2014 13:42:34 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389879751!9567685!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24930 invoked from network); 16 Jan 2014 13:42:33 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 13:42:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0GDgQu4012916
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 13:42:27 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GDgP7T003172
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 16 Jan 2014 13:42:25 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GDgPIv003152; Thu, 16 Jan 2014 13:42:25 GMT
Received: from [192.168.1.102] (/123.114.39.40)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 05:42:24 -0800
Message-ID: <52D7E1BB.8000706@oracle.com>
Date: Thu, 16 Jan 2014 21:42:19 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com>
In-Reply-To: <52D7BE19.2010009@citrix.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-16 19:10, David Vrabel wrote:
> On 15/01/14 23:57, Annie Li wrote:
>> This patch implements two things:
>>
>> * release grant reference and skb for rx path, this fixes resource leaking.
>> * clean up grant transfer code kept from old netfront(2.6.18) which grants
>> pages for access/map and transfer. But grant transfer is deprecated in current
>> netfront, so remove corresponding release code for transfer.
>>
>> gnttab_end_foreign_access_ref may fail when the grant entry is currently used
>> for reading or writing. But this patch does not cover this and improvement for
>> this failure may be implemented in a separate patch.
> I don't think replacing a resource leak with a security bug is a good idea.
>
> If you would prefer not to fix the gnttab_end_foreign_access() call, I
> think you can fix this in netfront by taking a reference to the page
> before calling gnttab_end_foreign_access().  This will ensure the page
> isn't freed until the subsequent kfree_skb(), or the gref is released by
> the foreign domain (whichever is later).

What I thought was to split the implementation into two patches: this 
patch fixes the rx path resource leak (just like the tx path does), and 
a separate patch fixes the gnttab_end_foreign_access_ref failure issue 
for both tx/rx by taking a reference to the page before 
gnttab_end_foreign_access.
If you'd like them posted together, I will create a new patch for the 
latter and then post both. :-)
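
David's suggestion above boils down to plain reference counting; here is a minimal userspace sketch of the lifetime rule (the `grant_buf` type and `buf_get`/`buf_put` names are invented for illustration, not netfront or grant-table APIs):

```c
#include <stdbool.h>

/* Illustrative stand-in for a granted rx page: it must stay allocated
 * until both the local skb and the (possibly still mapped) grant have
 * dropped their references, whichever happens later. */
struct grant_buf {
    int refs;
    bool freed;
};

static void buf_get(struct grant_buf *b) { b->refs++; }

static void buf_put(struct grant_buf *b)
{
    if (--b->refs == 0)
        b->freed = true;        /* stand-in for freeing the page */
}
```

Taking the extra reference before ending foreign access means the relative order of the local free and the remote release no longer matters.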

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 13:56:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 13:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nQx-0001fj-2L; Thu, 16 Jan 2014 13:56:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3nQw-0001fe-BM
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 13:56:34 +0000
Received: from [85.158.139.211:10569] by server-5.bemta-5.messagelabs.com id
	E0/25-14928-115E7D25; Thu, 16 Jan 2014 13:56:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389880592!10175726!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23309 invoked from network); 16 Jan 2014 13:56:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 13:56:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 13:56:32 +0000
Message-Id: <52D7F31E0200007800114419@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 13:56:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1389878814.1061.1.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 14:26, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> The function that exchange the head is racy.
> The cmpxchg and the compare are not atomic.
> Assume two thread one (T1) inserting on committed list and another
> trying to comsume it (T2).
> T1 start inserting the element (A), set prev pointer (commit list use
> mcte_prev) then is stop after the cmpxchg succeeded.
> T2 get the list and change elements (A) and update the commit list
> head.
> T1 resume, read pointer to prev again and compare with result from
> cmpxchg which succeeded but in the meantime prev changed in memory.
> Not T1 assume the element was not inserted on the list and try to
> insert again. Now A however is in another state and should not be
> modified by T1.
> To solve the race use temporary variable for prev pointer.
> Note that compiler should not optimize the memory fetch as cmpxhg
> do a full barrier.

This last sentence is pretty pointless, since there's an explicit
wmb() between setting old and doing the cmpxchg. (Question is
whether that wmb() actually is necessary; without a comment I
can't easily see what it is supposed to fence - surely it's not the
write to *oldp. And that's regardless of it being redundant with
the barrier embedded with cmpxchgptr().)

But overall, while I think your analysis is correct, the description
could do with some cleanup, also spelling-wise.

>  static void mctelem_xchg_head(struct mctelem_ent **headp,
> -				struct mctelem_ent **old,
> +				struct mctelem_ent **oldp,
>  				struct mctelem_ent *new)
>  {
> +	struct mctelem_ent *old;
> +
>  	for (;;) {
> -		*old = *headp;
> +		*oldp = old = *headp;
>  		wmb();
> -		if (cmpxchgptr(headp, *old, new) == *old)
> +		if (cmpxchgptr(headp, old, new) == old)
>  			break;
>  	}
>  }

Now that you use a temporary, it would make sense (and make
the code easier to read) if you set *oldp to old just once, after
the loop.
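
For reference, the patched loop can be exercised in userspace with C11 atomics standing in for cmpxchgptr() (the names mirror the patch; the struct layout and everything else here are assumptions for illustration):

```c
#include <stdatomic.h>
#include <stddef.h>

struct mctelem_ent {
    struct mctelem_ent *mcte_prev;      /* as used by the commit list */
};

/* Userspace analogue of the patched mctelem_xchg_head(): the success
 * check compares against the local 'old', never re-reading *oldp. */
static void xchg_head(struct mctelem_ent *_Atomic *headp,
                      struct mctelem_ent **oldp,
                      struct mctelem_ent *new)
{
    struct mctelem_ent *old;

    for (;;) {
        *oldp = old = atomic_load(headp);
        if (atomic_compare_exchange_strong(headp, &old, new))
            break;
    }
}
```
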

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:08:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nbu-0002Mg-Ou; Thu, 16 Jan 2014 14:07:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W3nbt-0002Mb-2O
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 14:07:53 +0000
Received: from [85.158.139.211:57625] by server-16.bemta-5.messagelabs.com id
	5B/FE-11843-8B7E7D25; Thu, 16 Jan 2014 14:07:52 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389881269!10164051!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1073 invoked from network); 16 Jan 2014 14:07:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 14:07:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91371940"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 14:07:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 09:07:48 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W3nbo-0005X9-A4;
	Thu, 16 Jan 2014 14:07:48 +0000
Message-ID: <1389881263.1061.7.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Jan 2014 14:07:43 +0000
In-Reply-To: <52D7F31E0200007800114419@nat28.tlf.novell.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 13:56 +0000, Jan Beulich wrote:
> >>> On 16.01.14 at 14:26, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> > The function that exchange the head is racy.
> > The cmpxchg and the compare are not atomic.
> > Assume two thread one (T1) inserting on committed list and another
> > trying to comsume it (T2).
> > T1 start inserting the element (A), set prev pointer (commit list use
> > mcte_prev) then is stop after the cmpxchg succeeded.
> > T2 get the list and change elements (A) and update the commit list
> > head.
> > T1 resume, read pointer to prev again and compare with result from
> > cmpxchg which succeeded but in the meantime prev changed in memory.
> > Not T1 assume the element was not inserted on the list and try to
> > insert again. Now A however is in another state and should not be
> > modified by T1.
> > To solve the race use temporary variable for prev pointer.
> > Note that compiler should not optimize the memory fetch as cmpxhg
> > do a full barrier.
> 
> This last sentence is pretty pointless, since there's an explicit
> wmb() between setting old and doing the cmpxchg. (Question is
> whether that wmb() actually is necessary; without a comment I
> can't easily see what it is supposed to fence - surely it's not the
> write to *oldp. And that's regardless of it being redundant with
> the barrier embedded with cmpxchgptr().)
> 
> But overall, while I think your analysis is correct, the description
> could do with some cleanup, also spelling-wise.
> 

Yes, sorry, I'm not a native English speaker. Suggestion accepted.

The race is here:

if (cmpxchgptr(headp, *old, new) == *old)

You do the cmpxchg (which is atomic), then you read the old pointer
(*old) again, and then you compare the cmpxchg result with the current
content of that pointer.

Yes, the wmb() is quite useless; the cmpxchg should be enough.
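
The interleaving can be reproduced deterministically in a single-threaded sketch (the xchg() helper and buggy_check_misfires() are hypothetical stand-ins; the point is only the double read of *old):

```c
#include <stddef.h>

/* Stand-in for an atomic compare-and-swap on a pointer: returns the
 * value that was actually compared, like Xen's cmpxchgptr(). */
static void *xchg(void **headp, void *expected, void *newp)
{
    void *cur = *headp;
    if (cur == expected)
        *headp = newp;
    return cur;
}

/* Simulate T2 modifying the shared *old slot between the exchange and
 * the comparison: the buggy check then reports a successful insert as
 * failed. Returns 1 when the misfire is observed. */
static int buggy_check_misfires(void)
{
    void *head = NULL;
    void *old_slot;                              /* shared *old slot  */
    int elem;

    old_slot = head;                             /* *old = *headp     */
    void *ret = xchg(&head, old_slot, &elem);    /* CAS succeeds      */
    old_slot = &head;                            /* T2 changes *old   */
    /* original check: cmpxchgptr(...) == *old re-reads old_slot */
    return ret != old_slot && head == &elem;
}
```
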

> >  static void mctelem_xchg_head(struct mctelem_ent **headp,
> > -				struct mctelem_ent **old,
> > +				struct mctelem_ent **oldp,
> >  				struct mctelem_ent *new)
> >  {
> > +	struct mctelem_ent *old;
> > +
> >  	for (;;) {
> > -		*old = *headp;
> > +		*oldp = old = *headp;
> >  		wmb();
> > -		if (cmpxchgptr(headp, *old, new) == *old)
> > +		if (cmpxchgptr(headp, old, new) == old)
> >  			break;
> >  	}
> >  }
> 
> Now that you use a temporary, it would make sense (and make
> the code easier to read) if you set *oldp to old just once, after
> the loop.
> 
> Jan
> 

No, that would create another race. After the cmpxchg succeeds you must
assume that the list element can be owned by somebody else, so *oldp has
to be set before the cmpxchg, not after the loop.

Frediano




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:09:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ndp-0002fJ-FW; Thu, 16 Jan 2014 14:09:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3ndo-0002dz-7n
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 14:09:52 +0000
Received: from [85.158.143.35:53870] by server-1.bemta-4.messagelabs.com id
	D0/EF-02132-F28E7D25; Thu, 16 Jan 2014 14:09:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389881389!12099882!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29984 invoked from network); 16 Jan 2014 14:09:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 14:09:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93476515"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 14:09:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 09:09:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3ndk-0005ZC-BO;
	Thu, 16 Jan 2014 14:09:48 +0000
Date: Thu, 16 Jan 2014 14:08:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zhenzhong Duan <zhenzhong.duan@oracle.com>
In-Reply-To: <52CFC1CD.6060605@oracle.com>
Message-ID: <alpine.DEB.2.02.1401161408200.21510@kaball.uk.xensource.com>
References: <52CE575F.9050303@oracle.com>
	<1389260454.27473.27.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401091256440.21510@kaball.uk.xensource.com>
	<1389278302.27473.132.camel@kazak.uk.xensource.com>
	<52CFC1CD.6060605@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xudong Hao <xudong.hao@intel.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] Ask about status about 64 bit pci hotplug support
 on qemu-xen-traditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 10 Jan 2014, Zhenzhong Duan wrote:
> On 2014-1-9 22:38, Ian Campbell wrote:
> > On Thu, 2014-01-09 at 12:57 +0000, Stefano Stabellini wrote:
> > > On Thu, 9 Jan 2014, Ian Campbell wrote:
> > > > On Thu, 2014-01-09 at 16:01 +0800, Zhenzhong Duan wrote:
> > > > > Hi Maintainer
> > > > > 
> > > > > I added 64-bit PCI hotplug support for HVM guests recently.
> > > > > Then I found Xudong Hao had already sent a similar patch, but it wasn't
> > > > > merged in qemu-xen-traditional.
> > > > Stefano is not the maintainer of this tree, Ian Jackson is. On the other
> > > > hand the patch you link to is against qemu-xen, which Stefano does
> > > > maintain, so I'm a bit confused.
> > > That is not the case: the patch is against qemu-xen-traditional
> > > (hw/pass-through.c doesn't exist on QEMU upstream based trees).
> So does anyone know if it could be accepted?
> I think it could be used to fix an Oracle internal bug.
> We found that when we hotplug many PCI VFs to an HVM guest, some regions may
> need to be mapped above 4G.
> That patch adds the support.

Ian Jackson, could you please weigh in on this discussion?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:24:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nrK-0003d9-Ka; Thu, 16 Jan 2014 14:23:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3nrJ-0003d4-Hl
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 14:23:49 +0000
Received: from [193.109.254.147:18633] by server-9.bemta-14.messagelabs.com id
	70/C3-13957-47BE7D25; Thu, 16 Jan 2014 14:23:48 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389882226!11306036!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7406 invoked from network); 16 Jan 2014 14:23:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 14:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93481287"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 14:23:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 09:23:15 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3nql-0005kY-IV;
	Thu, 16 Jan 2014 14:23:15 +0000
Date: Thu, 16 Jan 2014 14:22:15 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389606502.8187.10.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401161414420.21510@kaball.uk.xensource.com>
References: <osstest-24366-mainreport@xen.org>
	<1389606502.8187.10.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24366: tolerable trouble:
 broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 13 Jan 2014, Ian Campbell wrote:
> On Mon, 2014-01-13 at 08:19 +0000, xen.org wrote:
> > flight 24366 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/
> > [...]
> > Tests which did not succeed, but are not blocking:
> >  test-armhf-armhf-xl           9 guest-start                  fail   never pass
> AKA
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/info.html
> 
> We are getting there (slowly); the new failure after making EXT4
> available is:
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5---var-log-xen-console-guest-debian.guest.osstest.log
>         [    0.087330] Registering SWP/SWPB emulation handler
>         [    0.089780] blkfront: xvda2: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
>         [    0.099255] blkfront: xvda1: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
>         [    0.179907] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
>         [    0.193307] List of all partitions:
>         [    0.193325] ca02         4194304 xvda2  driver: vbd
>         [    0.193340] ca01         1024000 xvda1  driver: vbd
>         [    0.193352] No filesystem could mount root, tried:  ext3 ext2 ext4
>         [    0.193376] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(202,2)
>         
> The disk is on LVM and
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5-output-xenstore-ls_-fp
> shows that front and backend are in state 4/connected. I don't see any
> smoking guns in the logs.
> 
> Julien/Stefano -- do you see anything?

LVM works for me, but I am not using udev at the moment.


> The filesystem has been mounted in dom0 (for debootstrap) which is
> running the same kernel binary.
> 
> The associated kernel build job, including binaries and .config is at
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/build-amd64-pvops/info.html
> and the Xen build is at 
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/build-armhf/info.html
> 
> I also have marilith-n4 in a similar state, let me know if you want to
> have a poke at it.
> 
> In that environment I tried making root be xvda and swap xvdb, but that
> didn't help. I also tried various rootflags= debug options, but got no
> extra info.
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:28:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nvL-0003kZ-I4; Thu, 16 Jan 2014 14:27:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W3nhw-00036y-Id
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 14:14:08 +0000
Received: from [85.158.143.35:26710] by server-3.bemta-4.messagelabs.com id
	B5/BD-32360-F29E7D25; Thu, 16 Jan 2014 14:14:07 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389881646!12184612!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19258 invoked from network); 16 Jan 2014 14:14:07 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 14:14:07 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328] (helo=localhost)
	by os.inf.tu-dresden.de with esmtpsa (TLSv1.2:AES128-GCM-SHA256:128)
	(Exim 4.82) id 1W3nhu-0007oF-GT; Thu, 16 Jan 2014 15:14:06 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
To: kvm@vger.kernel.org
Date: Thu, 16 Jan 2014 15:13:44 +0100
Message-Id: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
X-Mailer: git-send-email 1.8.4.2
X-Mailman-Approved-At: Thu, 16 Jan 2014 14:27:58 +0000
Cc: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The paravirtualized clock used in KVM and Xen uses a version field to
allow the guest to see when the shared data structure is inconsistent.
The code reads the version field twice (before and after the data
structure is copied) and checks that the value hasn't changed and that
its lowest bit is not set. As the second access is not synchronized,
the compiler is allowed to generate code that accesses version more
than twice, and you can end up with inconsistent data.

An example using pvclock_get_time_values:

host starts updating data, sets src->version to 1
guest reads src->version (1) and stores it into dst->version.
guest copies inconsistent data
guest reads src->version (1) and computes xor with dst->version.
host finishes updating data and sets src->version to 2
guest reads src->version (2) and checks whether lower bit is not set.
while loop exits with inconsistent data!

AFAICS the compiler is allowed to optimize the given code this way.
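
The fixed protocol can be illustrated with a minimal, single-threaded
sketch (hypothetical names, not the kernel code: `volatile` plays the
role of ACCESS_ONCE, and the rmb() barriers of the real code are only
marked as comments):

```c
#include <stdint.h>

struct shared_time {
    volatile uint32_t version;  /* odd while the writer is mid-update */
    volatile uint64_t sec;      /* payload guarded by the version */
};

/* Seqlock-style reader: take exactly ONE snapshot of the version after
 * copying the payload, and use that single snapshot for both checks.
 * The volatile qualifier prevents the compiler from re-reading the
 * shared field between the two tests of the while condition. */
static uint64_t read_time(const struct shared_time *src)
{
    uint32_t before, after;
    uint64_t sec;

    do {
        before = src->version;   /* snapshot #1 */
        /* rmb(): fetch version before data, in the real code */
        sec = src->sec;          /* copy the payload */
        /* rmb(): test version after fetching data, in the real code */
        after = src->version;    /* snapshot #2: one load, reused below */
    } while ((after & 1) || (before != after));

    return sec;
}
```

With the original formulation, the `while` condition contained two
separate reads of the shared version, which is exactly the window the
host-update interleaving above exploits.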

Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
---
 arch/x86/kernel/pvclock.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 42eb330..f62b41c 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
 static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
 					struct pvclock_vcpu_time_info *src)
 {
+	u32 nversion;
+
 	do {
 		dst->version = src->version;
 		rmb();		/* fetch version before data */
@@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
 		dst->tsc_shift         = src->tsc_shift;
 		dst->flags             = src->flags;
 		rmb();		/* test version after fetching data */
-	} while ((src->version & 1) || (dst->version != src->version));
+		nversion = ACCESS_ONCE(src->version);
+	} while ((nversion & 1) || (dst->version != nversion));
 
 	return dst->version;
 }
@@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
 			    struct pvclock_vcpu_time_info *vcpu_time,
 			    struct timespec *ts)
 {
-	u32 version;
+	u32 version, nversion;
 	u64 delta;
 	struct timespec now;
 
@@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
 		now.tv_sec  = wall_clock->sec;
 		now.tv_nsec = wall_clock->nsec;
 		rmb();		/* fetch time before checking version */
-	} while ((wall_clock->version & 1) || (version != wall_clock->version));
+		nversion = ACCESS_ONCE(wall_clock->version);
+	} while ((nversion & 1) || (version != nversion));
 
 	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
 	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:28:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nvL-0003kZ-I4; Thu, 16 Jan 2014 14:27:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W3nhw-00036y-Id
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 14:14:08 +0000
Received: from [85.158.143.35:26710] by server-3.bemta-4.messagelabs.com id
	B5/BD-32360-F29E7D25; Thu, 16 Jan 2014 14:14:07 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389881646!12184612!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19258 invoked from network); 16 Jan 2014 14:14:07 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 14:14:07 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328] (helo=localhost)
	by os.inf.tu-dresden.de with esmtpsa (TLSv1.2:AES128-GCM-SHA256:128)
	(Exim 4.82) id 1W3nhu-0007oF-GT; Thu, 16 Jan 2014 15:14:06 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
To: kvm@vger.kernel.org
Date: Thu, 16 Jan 2014 15:13:44 +0100
Message-Id: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
X-Mailer: git-send-email 1.8.4.2
X-Mailman-Approved-At: Thu, 16 Jan 2014 14:27:58 +0000
Cc: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The paravirtualized clock used in KVM and Xen uses a version field to
allow the guest to see when the shared data structure is inconsistent.
The code reads the version field twice (before and after the data
structure is copied) and checks that it has not changed and that its
lowest bit is not set. Because the second access is not synchronized,
the compiler may generate code that reads the version field more than
twice, leaving the guest with inconsistent data.

An example using pvclock_get_time_values:

1. host starts updating the data, sets src->version to 1
2. guest reads src->version (1) and stores it into dst->version
3. guest copies the (inconsistent) data
4. guest reads src->version (1) and computes the xor with dst->version
5. host finishes updating the data and sets src->version to 2
6. guest reads src->version (2) and checks that the lowest bit is not set
7. the while loop exits with inconsistent data!

AFAICS the compiler is allowed to optimize the given code this way.

Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
---
 arch/x86/kernel/pvclock.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 42eb330..f62b41c 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
 static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
 					struct pvclock_vcpu_time_info *src)
 {
+	u32 nversion;
+
 	do {
 		dst->version = src->version;
 		rmb();		/* fetch version before data */
@@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
 		dst->tsc_shift         = src->tsc_shift;
 		dst->flags             = src->flags;
 		rmb();		/* test version after fetching data */
-	} while ((src->version & 1) || (dst->version != src->version));
+		nversion = ACCESS_ONCE(src->version);
+	} while ((nversion & 1) || (dst->version != nversion));
 
 	return dst->version;
 }
@@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
 			    struct pvclock_vcpu_time_info *vcpu_time,
 			    struct timespec *ts)
 {
-	u32 version;
+	u32 version, nversion;
 	u64 delta;
 	struct timespec now;
 
@@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
 		now.tv_sec  = wall_clock->sec;
 		now.tv_nsec = wall_clock->nsec;
 		rmb();		/* fetch time before checking version */
-	} while ((wall_clock->version & 1) || (version != wall_clock->version));
+		nversion = ACCESS_ONCE(wall_clock->version);
+	} while ((nversion & 1) || (version != nversion));
 
 	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
 	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:29:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:29:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nwb-0003vG-AM; Thu, 16 Jan 2014 14:29:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3nwZ-0003v4-Jr
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 14:29:15 +0000
Received: from [85.158.139.211:28892] by server-11.bemta-5.messagelabs.com id
	74/32-23268-ABCE7D25; Thu, 16 Jan 2014 14:29:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389882552!10175602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32293 invoked from network); 16 Jan 2014 14:29:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 14:29:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93483464"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 14:29:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 09:29:11 -0500
Message-ID: <1389882550.6697.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 16 Jan 2014 14:29:10 +0000
In-Reply-To: <alpine.DEB.2.02.1401161414420.21510@kaball.uk.xensource.com>
References: <osstest-24366-mainreport@xen.org>
	<1389606502.8187.10.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161414420.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24366: tolerable trouble:
 broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 14:22 +0000, Stefano Stabellini wrote:
> On Mon, 13 Jan 2014, Ian Campbell wrote:
> > On Mon, 2014-01-13 at 08:19 +0000, xen.org wrote:
> > > flight 24366 xen-unstable real [real]
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/
> > > [...]
> > > Tests which did not succeed, but are not blocking:
> > >  test-armhf-armhf-xl           9 guest-start                  fail   never pass
> > AKA
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/info.html
> > 
> > We are getting there (slowly), the new failure after making EXT4
> > available is:
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5---var-log-xen-console-guest-debian.guest.osstest.log
> >         [    0.087330] Registering SWP/SWPB emulation handler
> >         [    0.089780] blkfront: xvda2: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
> >         [    0.099255] blkfront: xvda1: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
> >         [    0.179907] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
> >         [    0.193307] List of all partitions:
> >         [    0.193325] ca02         4194304 xvda2  driver: vbd
> >         [    0.193340] ca01         1024000 xvda1  driver: vbd
> >         [    0.193352] No filesystem could mount root, tried:  ext3 ext2 ext4
> >         [    0.193376] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(202,2)
> >         
> > The disk is on LVM and
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24366/test-armhf-armhf-xl/marilith-n5-output-xenstore-ls_-fp
> > shows that front and backend are in state 4/connected. I don't see any
> > smoking guns in the logs.
> > 
> > Julien/Stefano -- do you see anything?
> 
> LVM works for me, but I am not using udev at the moment.

I instrumented the guest f/s and blkfront and it seems like reads are
returning buffers full of 0xc2c2c2c2, which is the pattern that Xen
scrubs pages with in a debug build.

So either there is a cache coherency issue or perhaps something to do
with the dom0 swiotlb doing direct i/o to guest pages and sending them
to the wrong place.
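The kind of instrumentation described above might use a check like the
following (a hypothetical helper, not the code actually used; 0xc2 is the
debug-build scrub byte mentioned):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns true if the buffer consists entirely of the 0xc2 scrub byte
 * that a debug-build Xen writes into freed pages -- a strong hint that
 * blkfront handed back a hypervisor-scrubbed page instead of real disk
 * data. */
static bool looks_scrubbed(const uint8_t *buf, size_t len)
{
	if (len == 0)
		return false;
	for (size_t i = 0; i < len; i++)
		if (buf[i] != 0xc2)
			return false;
	return true;
}
```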

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:31:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3nyf-0004FI-EQ; Thu, 16 Jan 2014 14:31:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3nyd-0004F3-9d
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 14:31:23 +0000
Received: from [85.158.139.211:11525] by server-16.bemta-5.messagelabs.com id
	BF/04-11843-A3DE7D25; Thu, 16 Jan 2014 14:31:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389882681!9997332!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18344 invoked from network); 16 Jan 2014 14:31:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 14:31:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 14:31:22 +0000
Message-Id: <52D7FB460200007800114449@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 14:31:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
	<1389881263.1061.7.camel@hamster.uk.xensource.com>
In-Reply-To: <1389881263.1061.7.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 15:07, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> On Thu, 2014-01-16 at 13:56 +0000, Jan Beulich wrote:
>> >>> On 16.01.14 at 14:26, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
>> > The function that exchange the head is racy.
>> > The cmpxchg and the compare are not atomic.
>> > Assume two thread one (T1) inserting on committed list and another
>> > trying to comsume it (T2).
>> > T1 start inserting the element (A), set prev pointer (commit list use
>> > mcte_prev) then is stop after the cmpxchg succeeded.
>> > T2 get the list and change elements (A) and update the commit list
>> > head.
>> > T1 resume, read pointer to prev again and compare with result from
>> > cmpxchg which succeeded but in the meantime prev changed in memory.
>> > Not T1 assume the element was not inserted on the list and try to
>> > insert again. Now A however is in another state and should not be
>> > modified by T1.
>> > To solve the race use temporary variable for prev pointer.
>> > Note that compiler should not optimize the memory fetch as cmpxhg
>> > do a full barrier.
>> 
>> This last sentence is pretty pointless, since there's an explicit
>> wmb() between setting old and doing the cmpxchg. (Question is
>> whether that wmb() actually is necessary; without a comment I
>> can't easily see what it is supposed to fence - surely it's not the
>> write to *oldp. And that's regardless of it being redundant with
>> the barrier embedded with cmpxchgptr().)
>> 
>> But overall, while I think your analysis is correct, the description
>> could do with some cleanup, also spelling-wise.
>> 
> 
> Yes, sorry, I'm not native English.

Neither am I.

> Suggestion accepted.
> 
> The race is here
> 
> if (cmpxchgptr(headp, *old, new) == *old)
> 
> You do the cmpxchg (which is atomic), then you read the old pointer
> (*old), then you compare the cmpxchg result with the contents of the
> pointer.

All understood.

> Yes, wmb is quite useless, cmpxchg should be enough.
> 
>> >  static void mctelem_xchg_head(struct mctelem_ent **headp,
>> > -				struct mctelem_ent **old,
>> > +				struct mctelem_ent **oldp,
>> >  				struct mctelem_ent *new)
>> >  {
>> > +	struct mctelem_ent *old;
>> > +
>> >  	for (;;) {
>> > -		*old = *headp;
>> > +		*oldp = old = *headp;
>> >  		wmb();
>> > -		if (cmpxchgptr(headp, *old, new) == *old)
>> > +		if (cmpxchgptr(headp, old, new) == old)
>> >  			break;
>> >  	}
>> >  }
>> 
>> Now that you use a temporary, it would make sense (and make
>> the code easier to read) if you set *oldp to old just once, after
>> the loop.
> 
> No, this will create another race. After cmpxchg you must assume that
> the list element can be owned by somebody else.

"The list element" being which? I simply don't see how storing to
*oldp non-atomically is race free when done before the cmpxchg,
but is a problem when done afterwards. Are there perhaps
additional requirements put on this by some (but not all) of the
callers? I ask because the function by itself doesn't seem to have
a race other than the one you try to fix, no matter where *oldp
is being stored to. If indeed there are further requirements, then
adding a comment here would seem necessary (since I'm sure
there'd be me or someone else stumbling across that inefficiency
again in the future, and hence wanting to clean it up in the same
way I suggested).

Jan
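For readers following the thread, the pattern under discussion can be
sketched with C11 atomics standing in for Xen's cmpxchgptr() (types and
names are simplified here; this is a sketch of the fixed idiom, not the
Xen code itself):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct ent { struct ent *next; };

/* Head exchange with a stable local 'old': the CAS compares against
 * the local copy, never against *oldp, which a concurrent consumer may
 * rewrite between the CAS and the comparison in the original (racy)
 * version that re-read *old. */
static void xchg_head(_Atomic(struct ent *) *headp,
		      struct ent **oldp, struct ent *new)
{
	struct ent *old;

	for (;;) {
		*oldp = old = atomic_load(headp);
		/* Succeeds iff *headp still equals 'old'; on failure it
		 * writes the current head into 'old', and the loop
		 * retries from a fresh snapshot anyway. */
		if (atomic_compare_exchange_strong(headp, &old, new))
			break;
	}
}
```

Note the CAS supplies its own full barrier, which is why the explicit
wmb() in the original is questioned above.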


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 14:58:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 14:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3oOT-0005iQ-FN; Thu, 16 Jan 2014 14:58:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W3oOS-0005iL-3s
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 14:58:04 +0000
Received: from [85.158.137.68:8551] by server-10.bemta-3.messagelabs.com id
	1C/95-23989-A73F7D25; Thu, 16 Jan 2014 14:58:02 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389884279!9572974!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3714 invoked from network); 16 Jan 2014 14:58:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 14:58:01 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0GEvno1002290
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 14:57:50 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GEvmGP027333
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 14:57:48 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GEvlr5010612; Thu, 16 Jan 2014 14:57:47 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 06:57:47 -0800
Message-ID: <52D7F391.60500@oracle.com>
Date: Thu, 16 Jan 2014 09:58:25 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
	<52D7B158020000780011427E@nat28.tlf.novell.com>
In-Reply-To: <52D7B158020000780011427E@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 12/16] x86/VPMU: Handle PMU interrupts
	for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/16/2014 04:15 AM, Jan Beulich wrote:
>> @@ -82,7 +87,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>> ...
>> +        {
>> +            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
>> +            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
> What's this cast good for?

These routines return an int indicating whether this was a full 
context save or merely a stop of the counters. Depending on where 
the routines are called from, we may ignore the return value 
(having VPMU_CONTEXT_SAVE set forces a full save).

I added the cast so that the compiler won't complain about the ignored 
return value (I don't think any compiler does now, but I imagine some 
version still may).

>> +            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>> +        }
>> +        return val;
>> +    }
>>       return 0;
>>   }
>>   
...
>> +    /* dom0 will handle this interrupt */
>> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
>> +    {
>> +        if ( smp_processor_id() >= dom0->max_vcpus )
>> +            return 0;
>> +        v = dom0->vcpu[smp_processor_id()];
>> +    }
> Ugly new uses of "dom0". And the correlation between
> smp_processor_id() and dom0->max_vcpus doesn't look sane
> either.

Yes, this needs to be fixed. Maybe

     v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];

And drop the check. (And the same change in pmu_softnmi() in a later patch).

I'm not sure what to do about the direct use of dom0, though. What 
else could I use instead? I need to pick one of dom0's vcpus.


Thanks.
-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:05:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3oV8-0006NF-K9; Thu, 16 Jan 2014 15:04:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3oV7-0006NA-3h
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:04:57 +0000
Received: from [85.158.137.68:8014] by server-11.bemta-3.messagelabs.com id
	75/9A-19379-815F7D25; Thu, 16 Jan 2014 15:04:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389884695!9573042!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11600 invoked from network); 16 Jan 2014 15:04:55 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 15:04:55 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 15:04:56 +0000
Message-Id: <52D80324020000780011447A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 15:04:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julian Stecklina" <jsteckli@os.inf.tu-dresden.de>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
In-Reply-To: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 15:13, Julian Stecklina <jsteckli@os.inf.tu-dresden.de> wrote:
> The paravirtualized clock used in KVM and Xen uses a version field to
> allow the guest to see when the shared data structure is inconsistent.
> The code reads the version field twice (before and after the data
> structure is copied) and checks whether they haven't
> changed and that the lowest bit is not set. As the second access is not
> synchronized, the compiler could generate code that accesses version
> more than two times and you end up with inconsistent data.
> 
> An example using pvclock_get_time_values:
> 
> host starts updating data, sets src->version to 1
> guest reads src->version (1) and stores it into dst->version.
> guest copies inconsistent data
> guest reads src->version (1) and computes xor with dst->version.
> host finishes updating data and sets src->version to 2
> guest reads src->version (2) and checks whether lower bit is not set.
> while loop exits with inconsistent data!
> 
> AFAICS the compiler is allowed to optimize the given code this way.

I don't think so - this would only be an issue if the conditions used
| instead of ||. || implies a sequence point between evaluating the
left and right sides, and the standard says: "The presence of a
sequence point between the evaluation of expressions A and B
implies that every value computation and side effect associated
with A is sequenced before every value computation and side
effect associated with B."

And even if there was a problem (i.e. my interpretation of the
above being incorrect), I don't think you'd need ACCESS_ONCE()
here: The same local variable can't have two different values in
two different use sites when there was no intermediate
assignment to it.

Interestingly the old XenoLinux code uses | instead of || in
both cases, yet only one of the two also entertains the use
of a local variable. I shall fix this (read: Thanks for pointing
out even if in the upstream incarnation this is not a problem).

Jan

> Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
> ---
>  arch/x86/kernel/pvclock.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> index 42eb330..f62b41c 100644
> --- a/arch/x86/kernel/pvclock.c
> +++ b/arch/x86/kernel/pvclock.c
> @@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct 
> pvclock_shadow_time *shadow)
>  static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
>  					struct pvclock_vcpu_time_info *src)
>  {
> +	u32 nversion;
> +
>  	do {
>  		dst->version = src->version;
>  		rmb();		/* fetch version before data */
> @@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct 
> pvclock_shadow_time *dst,
>  		dst->tsc_shift         = src->tsc_shift;
>  		dst->flags             = src->flags;
>  		rmb();		/* test version after fetching data */
> -	} while ((src->version & 1) || (dst->version != src->version));
> +		nversion = ACCESS_ONCE(src->version);
> +	} while ((nversion & 1) || (dst->version != nversion));
>  
>  	return dst->version;
>  }
> @@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock 
> *wall_clock,
>  			    struct pvclock_vcpu_time_info *vcpu_time,
>  			    struct timespec *ts)
>  {
> -	u32 version;
> +	u32 version, nversion;
>  	u64 delta;
>  	struct timespec now;
>  
> @@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock 
> *wall_clock,
>  		now.tv_sec  = wall_clock->sec;
>  		now.tv_nsec = wall_clock->nsec;
>  		rmb();		/* fetch time before checking version */
> -	} while ((wall_clock->version & 1) || (version != wall_clock->version));
> +		nversion = ACCESS_ONCE(wall_clock->version);
> +	} while ((nversion & 1) || (version != nversion));
>  
>  	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
>  	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
> -- 
> 1.8.4.2
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:13:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3od7-0006WF-Ls; Thu, 16 Jan 2014 15:13:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W3od6-0006WA-2e
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:13:12 +0000
Received: from [193.109.254.147:42909] by server-10.bemta-14.messagelabs.com
	id 99/D3-20752-707F7D25; Thu, 16 Jan 2014 15:13:11 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389885182!11332548!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3137 invoked from network); 16 Jan 2014 15:13:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 15:13:10 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0GFCu4E030790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 15:12:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GFCtiq026656
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 15:12:55 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GFCsNR026647; Thu, 16 Jan 2014 15:12:55 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 07:12:54 -0800
Message-ID: <52D7F71C.4010303@oracle.com>
Date: Thu, 16 Jan 2014 10:13:32 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-16-git-send-email-boris.ostrovsky@oracle.com>
	<52D7B60002000078001142A2@nat28.tlf.novell.com>
In-Reply-To: <52D7B60002000078001142A2@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v3 15/16] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/16/2014 04:35 AM, Jan Beulich wrote:
>
>> +/* Process the softirq set by PMU NMI handler */
>> +static void pmu_softnmi(void)
>> +{
>> +    struct cpu_user_regs *regs;
>> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
>> +
>> +    if ( vpmu_mode & XENPMU_MODE_PRIV ||
>> +         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
>> +        v = dom0->vcpu[smp_processor_id()];
>> +    else
>> +        v = sampled;
>> +
>> +    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
>> +    if ( is_hvm_domain(sampled->domain) )
>> +    {
>> +        struct segment_register cs;
>> +
>> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
>> +        regs->cs = cs.attr.fields.dpl;
>> +    }
>> +
>> +    send_guest_vcpu_virq(v, VIRQ_XENPMU);
>> +}
> Perhaps I should have asked this on an earlier patch already:
> How is this supposed to work for a 32-bit HVM guest?
> struct cpu_user_regs is clearly different for it than what the
> hypervisor or a 64-bit HVM guest would use.

Right, I need to apply XLAT_cpu_user_regs() for a 32-bit HVM guest in 
vpmu_interrupt() (I already do this for PV guests).

Thanks.
-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:13:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3od7-0006WF-Ls; Thu, 16 Jan 2014 15:13:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W3od6-0006WA-2e
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:13:12 +0000
Received: from [193.109.254.147:42909] by server-10.bemta-14.messagelabs.com
	id 99/D3-20752-707F7D25; Thu, 16 Jan 2014 15:13:11 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389885182!11332548!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3137 invoked from network); 16 Jan 2014 15:13:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 16 Jan 2014 15:13:10 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0GFCu4E030790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 16 Jan 2014 15:12:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GFCtiq026656
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 16 Jan 2014 15:12:55 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0GFCsNR026647; Thu, 16 Jan 2014 15:12:55 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 07:12:54 -0800
Message-ID: <52D7F71C.4010303@oracle.com>
Date: Thu, 16 Jan 2014 10:13:32 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-16-git-send-email-boris.ostrovsky@oracle.com>
	<52D7B60002000078001142A2@nat28.tlf.novell.com>
In-Reply-To: <52D7B60002000078001142A2@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v3 15/16] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/16/2014 04:35 AM, Jan Beulich wrote:
>
>> +/* Process the softirq set by PMU NMI handler */
>> +static void pmu_softnmi(void)
>> +{
>> +    struct cpu_user_regs *regs;
>> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
>> +
>> +    if ( vpmu_mode & XENPMU_MODE_PRIV ||
>> +         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
>> +        v = dom0->vcpu[smp_processor_id()];
>> +    else
>> +        v = sampled;
>> +
>> +    regs = &v->arch.vpmu.xenpmu_data->pmu.regs;
>> +    if ( is_hvm_domain(sampled->domain) )
>> +    {
>> +        struct segment_register cs;
>> +
>> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
>> +        regs->cs = cs.attr.fields.dpl;
>> +    }
>> +
>> +    send_guest_vcpu_virq(v, VIRQ_XENPMU);
>> +}
> Perhaps I should have asked this on an earlier patch already:
> How is this supposed to work for a 32-bit HVM guest?
> struct cpu_user_regs is clearly different for it than what the
> hypervisor or a 64-bit HVM guest would use.

Right, I need to call XLAT_cpu_user_regs() for a 32-bit HVM guest in 
vpmu_interrupt() (I already do that for a PV guest).
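A miniature illustration of the translation problem being discussed. The
structs below are hand-written stand-ins, not Xen's real cpu_user_regs
definitions, and the real XLAT_cpu_user_regs() is auto-generated from the
public compat headers; this only sketches the shape of the fixup:

```c
#include <assert.h>
#include <stdint.h>

/* Hand-written miniature stand-ins for the 64-bit hypervisor view of
 * the register frame versus the layout a 32-bit guest expects.  Not
 * the real Xen structures. */
struct regs64 {
    uint64_t rip;
    uint64_t rsp;
    uint16_t cs;
};

struct regs32 {
    uint32_t eip;
    uint32_t esp;
    uint16_t cs;
};

/* Copy the 64-bit snapshot into the 32-bit layout, truncating each
 * register to its low 32 bits, as the 32-bit guest expects. */
static void xlat_regs(struct regs32 *dst, const struct regs64 *src)
{
    dst->eip = (uint32_t)src->rip;
    dst->esp = (uint32_t)src->rsp;
    dst->cs  = src->cs;
}
```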

Thanks.
-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:15:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:15:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ofF-0006wk-TM; Thu, 16 Jan 2014 15:15:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W3ofD-0006wY-QG
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:15:24 +0000
Received: from [85.158.143.35:18852] by server-3.bemta-4.messagelabs.com id
	71/DD-32360-B87F7D25; Thu, 16 Jan 2014 15:15:23 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389885321!12208325!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6413 invoked from network); 16 Jan 2014 15:15:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:15:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91401614"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 15:15:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:15:20 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W3obM-0006ve-Rm;
	Thu, 16 Jan 2014 15:11:24 +0000
Message-ID: <1389885079.1061.15.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Jan 2014 15:11:19 +0000
In-Reply-To: <52D7FB460200007800114449@nat28.tlf.novell.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
	<1389881263.1061.7.camel@hamster.uk.xensource.com>
	<52D7FB460200007800114449@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 14:31 +0000, Jan Beulich wrote:
> >>> On 16.01.14 at 15:07, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> > On Thu, 2014-01-16 at 13:56 +0000, Jan Beulich wrote:
> >> >>> On 16.01.14 at 14:26, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> >> > The function that exchanges the head is racy:
> >> > the cmpxchg and the comparison are not atomic.
> >> > Assume two threads, one (T1) inserting on the committed list and
> >> > another (T2) trying to consume it.
> >> > T1 starts inserting the element (A) and sets the prev pointer (the
> >> > commit list uses mcte_prev), then is stopped after the cmpxchg
> >> > succeeded.
> >> > T2 gets the list, changes element (A) and updates the commit list
> >> > head.
> >> > T1 resumes, reads the prev pointer again and compares it with the
> >> > result of the cmpxchg, which succeeded, but in the meantime prev
> >> > changed in memory.
> >> > Now T1 assumes the element was not inserted on the list and tries
> >> > to insert it again. However, A is now in another state and should
> >> > not be modified by T1.
> >> > To solve the race, use a temporary variable for the prev pointer.
> >> > Note that the compiler should not optimize away the memory fetch,
> >> > as cmpxchg does a full barrier.
> >> 
> >> This last sentence is pretty pointless, since there's an explicit
> >> wmb() between setting old and doing the cmpxchg. (Question is
> >> whether that wmb() actually is necessary; without a comment I
> >> can't easily see what it is supposed to fence - surely it's not the
> >> write to *oldp. And that's regardless of it being redundant with
> >> the barrier embedded with cmpxchgptr().)
> >> 
> >> But overall, while I think your analysis is correct, the description
> >> could do with some cleanup, also spelling-wise.
> >> 
> > 
> > Yes, sorry, I'm not a native English speaker.
> 
> Neither am I.
> 
> > Suggestion accepted.
> > 
> > The race is here
> > 
> > if (cmpxchgptr(headp, *old, new) == *old)
> > 
> > You do the cmpxchg (which is atomic), then you read the old pointer
> > (*old), and then you compare the cmpxchg result with the contents of
> > that pointer.
> 
> All understood.
> 
> > Yes, wmb is quite useless, cmpxchg should be enough.
> > 
> >> >  static void mctelem_xchg_head(struct mctelem_ent **headp,
> >> > -				struct mctelem_ent **old,
> >> > +				struct mctelem_ent **oldp,
> >> >  				struct mctelem_ent *new)
> >> >  {
> >> > +	struct mctelem_ent *old;
> >> > +
> >> >  	for (;;) {
> >> > -		*old = *headp;
> >> > +		*oldp = old = *headp;
> >> >  		wmb();
> >> > -		if (cmpxchgptr(headp, *old, new) == *old)
> >> > +		if (cmpxchgptr(headp, old, new) == old)
> >> >  			break;
> >> >  	}
> >> >  }
> >> 
> >> Now that you use a temporary, it would make sense (and make
> >> the code easier to read) if you set *oldp to old just once, after
> >> the loop.
> > 
> > No, this will create another race. After the cmpxchg you must assume
> > that the list element can be owned by somebody else.
> 
> "The list element" being which? I simply don't see how storing to
> *oldp non-atomically is race free when done before the cmpxchg,
> but is a problem when done afterwards. Are there perhaps
> additional requirement put on this by some (but not all) of the
> callers? I ask because the function by itself doesn't seem to have
> a race other than the one you try to fix, no matter where *oldp
> is being stored to. If indeed there are further requirements, then
> adding a comment here would seem necessary (since I'm sure
> there'd be me or someone else stumbling across that inefficiency
> again in the future, and hence wanting to clean it up in the same
> way I suggested).
> 
> Jan
> 

This function is used to safely add an element to a list. You pass:
- a pointer to the pointer to the head (headp);
- a pointer to the element's next/prev pointer (old/oldp);
- a pointer to the list element itself (new).
You own the element (nobody else is allowed to touch it) until it gets
inserted into the list.
Now assume we set the pointer after the loop. The head would then point
to an element whose next/prev is not yet set, so you break the list:
another thread would see a corrupted or shortened list.
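A standalone model of the fixed helper makes this concrete. It is a
sketch only: names are simplified, a GCC __sync builtin stands in for
Xen's cmpxchgptr(), and the surrounding mctelem machinery is omitted:

```c
#include <assert.h>
#include <stddef.h>

struct elem {
    struct elem *next;   /* stands in for mcte_prev/mcte_next */
};

/* Push 'item' onto the lock-free list at *headp, recording the
 * displaced head through *oldp (which points at the element's own
 * link field, so the element gets threaded onto the list).  The fix
 * under discussion: keep the observed head in a local ('old') so the
 * comparison uses exactly the value passed to the CAS, rather than
 * re-reading *oldp, which another thread may already have changed
 * once the element is visible on the list. */
static void xchg_head(struct elem **headp, struct elem **oldp,
                      struct elem *item)
{
    struct elem *old;

    for (;;) {
        *oldp = old = *headp;
        if (__sync_val_compare_and_swap(headp, old, item) == old)
            break;
    }
}
```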

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:16:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:16:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ogN-0007Dh-9D; Thu, 16 Jan 2014 15:16:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3ogK-0007DR-Qc
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:16:33 +0000
Received: from [193.109.254.147:60460] by server-2.bemta-14.messagelabs.com id
	E5/2E-00361-0D7F7D25; Thu, 16 Jan 2014 15:16:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389885390!11322834!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18884 invoked from network); 16 Jan 2014 15:16:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:16:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93506096"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:16:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:16:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3ogG-00077u-Rv;
	Thu, 16 Jan 2014 15:16:28 +0000
Date: Thu, 16 Jan 2014 15:15:28 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140116114411.GA29530@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401161509270.21510@kaball.uk.xensource.com>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
	<20140116114411.GA29530@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xen.org,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Jan 2014, Olaf Hering wrote:
> On Thu, Jan 16, Ian Campbell wrote:
> 
> > Not quite. Each xvd[a-d] creates both a PV and an emulated IDE device
> > hd[a-d], which refer to the same underlying volume.
> > 
> > This allows you to boot from hda, do an unplug and then switch to using
> > xvda.
> 
> Not really: in the end hda is connected to the emulated IDE, so today
> it's really "sda" in domU because pata_piix will drive it. sda was
> connected to the emulated LSI SCSI. But xvda was not connected to any
> emulated controller; it's PV only. That's how it is done with
> qemu-trad. So having hda and xvda in the same config was working, and
> maybe even supported?
> 
> With qemu-upstream this apparently changed. I'm not saying this change
> in behaviour is good or bad, just that something changed. Some people
> still use the kernel names instead of UUID or LABEL, so they have to
> adjust their config in domU and also in domU.cfg before they switch
> from qemu-trad to qemu-upstream.

This behaviour was chosen on purpose 3 years ago, knowing that it might
cause problems for people who try to use xvda and hda in the same
config.

Why? Because all the alternatives that we considered were far worse.
You can have fun digging through the archives, you might discover things
like xvdHD that might have even made it into Linux for a short while.

Like Ian said, SCSI devices get a name from xvde onward.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:28:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ora-0007up-9f; Thu, 16 Jan 2014 15:28:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3orY-0007uk-UA
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:28:09 +0000
Received: from [85.158.137.68:57530] by server-10.bemta-3.messagelabs.com id
	22/D5-23989-48AF7D25; Thu, 16 Jan 2014 15:28:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389886082!8761771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26178 invoked from network); 16 Jan 2014 15:28:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:28:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93511761"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:28:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 10:28:01 -0500
Received: from marilith-n13-p0.uk.xensource.com ([10.80.229.115]
	helo=marilith-n13.uk.xensource.com.)	by norwich.cam.xci-test.com with
	esmtp (Exim 4.72)	(envelope-from <ian.campbell@citrix.com>)	id
	1W3orQ-0005I0-4d; Thu, 16 Jan 2014 15:28:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 16 Jan 2014 15:27:59 +0000
Message-ID: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	tim@xen.org, julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com
Subject: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.

The correct solution here would be to check for ENOSYS in libxl, unfortunately
xc_domain_set_pod_target suffers from the same broken error reporting as the
rest of libxc and throws away the errno.

So for now, conditionally define xc_domain_set_pod_target to return
success (which is what PoD does if nothing needs doing).
xc_domain_get_pod_target sets errno==-1 and returns -1, which matches the
broken error reporting of the existing function. It appears to have no
in-tree callers in any case.

The conditional should be removed once libxc has been fixed.
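For illustration, a hypothetical sketch of that eventual caller-side
check (set_pod_target() below is a stand-in, not the real
xc_domain_set_pod_target(), which today does not preserve errno):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for xc_domain_set_pod_target() on a hypervisor without
 * PoD: fail with ENOSYS, as XENMEM_set_pod_target does on ARM. */
static int set_pod_target(int implemented)
{
    if (!implemented) {
        errno = ENOSYS;
        return -1;
    }
    return 0;
}

/* The "correct solution" shape: once errno survives the libxc call,
 * the caller can treat ENOSYS as "nothing to do" instead of stubbing
 * the function out per-architecture. */
static int set_pod_target_checked(int implemented)
{
    int rc = set_pod_target(implemented);

    if (rc < 0 && errno == ENOSYS)
        return 0;
    return rc;
}
```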

This makes ballooning (xl mem-set) work for ARM domains.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: george.dunlap@citrix.com
---
I'd be generally wary of modifying the error handling in a piecemeal
way, but certainly doing so for 4.4 now would be inappropriate.

IIRC Ian J was planning a thorough sweep of the libxc error paths in the
4.5 time frame, at which point this conditional stuff could be dropped.

In terms of the 4.4 release, ballooning would obviously be very nice to
have for ARM guests; on the other hand, while the patch is fairly
small/contained and safe, it is also pretty skanky and likely wouldn't
be accepted outside of the rc period.
---
 tools/libxc/xc_domain.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..e1d1bec 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -986,6 +986,12 @@ out:
     return rc;
 }
 
+/* Currently only implemented on x86. This cannot be handled in the
+ * caller, e.g. by looking for errno==ENOSYS because of the broken
+ * error reporting style. Once this is fixed then this condition can
+ * be removed.
+ */
+#if defined(__i386__)||defined(__x86_64__)
 static int xc_domain_pod_target(xc_interface *xch,
                                 int op,
                                 uint32_t domid,
@@ -1055,6 +1061,28 @@ int xc_domain_get_pod_target(xc_interface *xch,
                                 pod_cache_pages,
                                 pod_entries);
 }
+#else
+int xc_domain_set_pod_target(xc_interface *xch,
+                             uint32_t domid,
+                             uint64_t target_pages,
+                             uint64_t *tot_pages,
+                             uint64_t *pod_cache_pages,
+                             uint64_t *pod_entries)
+{
+    return 0;
+}
+int xc_domain_get_pod_target(xc_interface *xch,
+                             uint32_t domid,
+                             uint64_t *tot_pages,
+                             uint64_t *pod_cache_pages,
+                             uint64_t *pod_entries)
+{
+    /* On x86 (above) xc_domain_pod_target will incorrectly return -1
+     * with errno==-1 on error. Do the same for least surprise. */
+    errno = -1;
+    return -1;
+}
+#endif
 
 int xc_domain_max_vcpus(xc_interface *xch, uint32_t domid, unsigned int max)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:28:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ora-0007up-9f; Thu, 16 Jan 2014 15:28:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3orY-0007uk-UA
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:28:09 +0000
Received: from [85.158.137.68:57530] by server-10.bemta-3.messagelabs.com id
	22/D5-23989-48AF7D25; Thu, 16 Jan 2014 15:28:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389886082!8761771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26178 invoked from network); 16 Jan 2014 15:28:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:28:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93511761"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:28:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 10:28:01 -0500
Received: from marilith-n13-p0.uk.xensource.com ([10.80.229.115]
	helo=marilith-n13.uk.xensource.com.)	by norwich.cam.xci-test.com with
	esmtp (Exim 4.72)	(envelope-from <ian.campbell@citrix.com>)	id
	1W3orQ-0005I0-4d; Thu, 16 Jan 2014 15:28:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 16 Jan 2014 15:27:59 +0000
Message-ID: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	tim@xen.org, julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com
Subject: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.

The correct solution here would be to check for ENOSYS in libxl; unfortunately,
xc_domain_set_pod_target suffers from the same broken error reporting as the
rest of libxc and throws away the errno.

So for now, conditionally define xc_domain_set_pod_target to return success
(which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
errno to -1 and returns -1, which matches the broken error reporting of the
existing function. It appears to have no in-tree callers in any case.

The conditional should be removed once libxc has been fixed.

This makes ballooning (xl mem-set) work for ARM domains.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: george.dunlap@citrix.com
---
I'd be generally wary of modifying the error handling in a piecemeal way, but
certainly doing so for 4.4 now would be inappropriate.

IIRC Ian J was planning a thorough sweep of the libxc error paths in the 4.5
time frame, at which point this conditional stuff could be dropped.

In terms of the 4.4 release, ballooning would obviously be very nice to have
for ARM guests; on the other hand, I'm aware that while the patch is fairly
small/contained and safe, it is also pretty skanky and likely wouldn't be
accepted outside of the rc period.
---
 tools/libxc/xc_domain.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..e1d1bec 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -986,6 +986,12 @@ out:
     return rc;
 }
 
+/* Currently only implemented on x86. This cannot be handled in the
+ * caller, e.g. by looking for errno==ENOSYS because of the broken
+ * error reporting style. Once this is fixed then this condition can
+ * be removed.
+ */
+#if defined(__i386__)||defined(__x86_64__)
 static int xc_domain_pod_target(xc_interface *xch,
                                 int op,
                                 uint32_t domid,
@@ -1055,6 +1061,28 @@ int xc_domain_get_pod_target(xc_interface *xch,
                                 pod_cache_pages,
                                 pod_entries);
 }
+#else
+int xc_domain_set_pod_target(xc_interface *xch,
+                             uint32_t domid,
+                             uint64_t target_pages,
+                             uint64_t *tot_pages,
+                             uint64_t *pod_cache_pages,
+                             uint64_t *pod_entries)
+{
+    return 0;
+}
+int xc_domain_get_pod_target(xc_interface *xch,
+                             uint32_t domid,
+                             uint64_t *tot_pages,
+                             uint64_t *pod_cache_pages,
+                             uint64_t *pod_entries)
+{
+    /* On x86 (above) xc_domain_pod_target will incorrectly return -1
+     * with errno==-1 on error. Do the same for least surprise. */
+    errno = -1;
+    return -1;
+}
+#endif
 
 int xc_domain_max_vcpus(xc_interface *xch, uint32_t domid, unsigned int max)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:31:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3ouY-0008Vm-9y; Thu, 16 Jan 2014 15:31:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3ouW-0008Vf-Og
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:31:12 +0000
Received: from [85.158.137.68:48294] by server-13.bemta-3.messagelabs.com id
	11/F1-28603-04BF7D25; Thu, 16 Jan 2014 15:31:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389886271!8762683!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15344 invoked from network); 16 Jan 2014 15:31:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 15:31:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 15:31:17 +0000
Message-Id: <52D8094D02000078001144AD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 15:31:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <20140115171244.GA2596@aepfle.de>
	<1389806527.3793.106.camel@kazak.uk.xensource.com>
	<20140115174510.GA5171@aepfle.de>
	<1389869890.6697.6.camel@kazak.uk.xensource.com>
	<20140116114411.GA29530@aepfle.de>
	<alpine.DEB.2.02.1401161509270.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401161509270.21510@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] incorrect disk numbering with qemu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 16:15, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> This behaviour was chosen on purpose 3 years ago, knowing that it might
> cause problems to people that try to use xvda and hda in the same
> config.
> 
> Why? Because all the alternatives that we considered were far worse.
> You can have fun digging through the archives, you might discover things
> like xvdHD that might have even made it into Linux for a short while.

And IIRC all this was done just because the pv-ops incarnation
of blkfront was stripped of the respective device name mapping
(i.e. if all of what XenoLinux's blkfront did had been properly
ported over, no such behavioral change would have been needed).

(Whether what the old blkfront did was nice is a completely
separate question.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 16 15:42:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3p5K-0000gp-A5; Thu, 16 Jan 2014 15:42:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3p5I-0000gf-LF
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:42:20 +0000
Received: from [193.109.254.147:2109] by server-10.bemta-14.messagelabs.com id
	A4/B1-20752-BDDF7D25; Thu, 16 Jan 2014 15:42:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1389886939!11266871!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2314 invoked from network); 16 Jan 2014 15:42:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 15:42:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 15:42:31 +0000
Message-Id: <52D80BE802000078001144D2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 15:42:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
	<1389881263.1061.7.camel@hamster.uk.xensource.com>
	<52D7FB460200007800114449@nat28.tlf.novell.com>
	<1389885079.1061.15.camel@hamster.uk.xensource.com>
In-Reply-To: <1389885079.1061.15.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 16:11, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> This function is used to safely add an element to a list. You pass:
> - a pointer to a pointer to the head (headp);
> - a pointer to a pointer to the next/prev pointer (old/oldp);
> - a pointer to the list element (new).
> You own the element (nobody else should be allowed to touch it) until it
> gets inserted into the list.
> Now assume we set the pointer after the loop. The head would point to an
> element which has no next/prev set, so the list is broken. Another thread
> would see a corrupted or shortened list.

Ah, okay. So "old" (or "oldp" after your patch) is really misleading.
This should rather become "linkp" or some such then.

And in that case I'd then also ask for the declaration of the new
"old" to be moved inside the for() body.

With that and with at least the spelling mistakes fixed in the
description, I'll be fine with the patch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 16 15:43:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3p6K-0000pV-Li; Thu, 16 Jan 2014 15:43:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3p6I-0000pH-Pl
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:43:23 +0000
Received: from [85.158.137.68:3033] by server-7.bemta-3.messagelabs.com id
	61/AD-27599-91EF7D25; Thu, 16 Jan 2014 15:43:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389886999!9533092!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5681 invoked from network); 16 Jan 2014 15:43:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:43:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93518542"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:43:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:43:18 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3p6D-0007Ym-Nd;
	Thu, 16 Jan 2014 15:43:17 +0000
Date: Thu, 16 Jan 2014 15:42:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389803256.3793.95.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > 
> > > Based on the above I have no idea whether a freeze exception should be
> > > granted for this, so my default answer is no. I'm not sure what else you
> > > could have expected.
> > > 
> > > If you think there are changes here which should be in 4.4.0 then please
> > > enumerate all changes included in this merge which have any relation to
> > > Xen and their potential impact on the release.
> > 
> > I have listed the changes here that have a potential impact on Xen, with
> > the ones that I think are quite important at the beginning. Either the
> > commit title speaks for itself or I added a small description of what is
> > affected.
> 
> Thanks, but there's not a lot here for me to go on WRT making a decision
> on a freeze exception. Did you refer to
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> like I said? A freeze exception needs an analysis of benefits and risks,
> not the very briefest words you can possibly manage.
> 
> Anyway, it appears this is a grab bag of things we might want and misc
> fixes which are perhaps nice to have; I'm nowhere near comfortable
> giving it a blanket exemption based on what you've presented here, or
> even cherry-picking what might be the important ones. If you think
> any or all of it is actually important for 4.4, please make a proper case
> for inclusion, either of the aggregate or of the individual changes.

Anthony, did you simply update the tree by pulling from the upstream 1.6
stable tree? I also assume that you tested at the very least the basic
PV and HVM configurations?

If so, I think we should take everything they have there. If we don't,
I'll propose to do the same for 4.4.1.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:43:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3p6K-0000pV-Li; Thu, 16 Jan 2014 15:43:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3p6I-0000pH-Pl
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:43:23 +0000
Received: from [85.158.137.68:3033] by server-7.bemta-3.messagelabs.com id
	61/AD-27599-91EF7D25; Thu, 16 Jan 2014 15:43:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389886999!9533092!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5681 invoked from network); 16 Jan 2014 15:43:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:43:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93518542"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:43:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:43:18 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3p6D-0007Ym-Nd;
	Thu, 16 Jan 2014 15:43:17 +0000
Date: Thu, 16 Jan 2014 15:42:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389803256.3793.95.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > 
> > > Based on the above I have no idea whether a freeze exception should be
> > > granted for this, so my default answer is no. I'm not sure what else you
> > > could have expected.
> > > 
> > > If you think there are changes here which should be in 4.4.0 then please
> > > enumerate all changes included in this merge which have any relation to
> > > Xen and their potential impact on the release.
> > 
> > I have a list of the changes here that have a potential impact on Xen, with
> > the ones that I think are quite important at the beginning. Either the
> > commit title speaks for itself or I added a small description of what is
> > affected.
> 
> Thanks but there's not a lot here for me to go on WRT making a decision
> on a freeze exception. Did you refer to 
> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> like I said? A freeze exception needs an analysis of benefits and risks,
> not the very briefest words you can possibly manage.
> 
> Anyway it appears this is a grab bag of things we might want and misc
> fixes which are perhaps nice to have, I'm nowhere near comfortable
> giving it a blanket exemption based on what you've presented here, or
> even of cherry picking what might be the important ones. If you think
> any or all of it is actually important for 4.4 please make a proper case
> for inclusion, either of the aggregate or of the individual changes.

Anthony, did you simply update the tree by pulling from the upstream 1.6
stable tree? I also assume that you tested at the very least the basic
PV and HVM configurations?

If so, I think we should take everything they have there. If we don't,
I'll propose to do the same for 4.4.1.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:46:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3p93-0000zw-3m; Thu, 16 Jan 2014 15:46:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3p91-0000zl-5B
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:46:11 +0000
Received: from [85.158.143.35:26655] by server-2.bemta-4.messagelabs.com id
	7A/55-11386-2CEF7D25; Thu, 16 Jan 2014 15:46:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389887168!5050260!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4671 invoked from network); 16 Jan 2014 15:46:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:46:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93519708"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:46:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:46:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3p8w-0007ba-Vo;
	Thu, 16 Jan 2014 15:46:06 +0000
Date: Thu, 16 Jan 2014 15:45:06 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401161542580.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Jan 2014, Stefano Stabellini wrote:
> On Wed, 15 Jan 2014, Ian Campbell wrote:
> > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > 
> > > > Based on the above I have no idea whether a freeze exception should be
> > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > could have expected.
> > > > 
> > > > If you think there are changes here which should be in 4.4.0 then please
> > > > enumerate all changes included in this merge which have any relation to
> > > > Xen and their potential impact on the release.
> > > 
> > > I have a list of the changes here that have a potential impact on Xen, with
> > > the ones that I think are quite important at the beginning. Either the
> > > commit title speaks for itself or I added a small description of what is
> > > affected.
> > 
> > Thanks but there's not a lot here for me to go on WRT making a decision
> > on a freeze exception. Did you refer to 
> > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > like I said? A freeze exception needs an analysis of benefits and risks,
> > not the very briefest words you can possibly manage.
> > 
> > Anyway it appears this is a grab bag of things we might want and misc
> > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > giving it a blanket exemption based on what you've presented here, or
> > even of cherry picking what might be the important ones. If you think
> > any or all of it is actually important for 4.4 please make a proper case
> > for inclusion, either of the aggregate or of the individual changes.
> 
> Anthony, did you simply update the tree by pulling from the upstream 1.6
> stable tree? I also assume that you tested at the very least the basic
> PV and HVM configurations?
> 
> If so, I think we should take everything they have there. If we don't,
> I'll propose to do the same for 4.4.1.

I realize I have been a bit terse there: the reason is that I think we
should be pulling from QEMU stable trees for the corresponding Xen
stable releases. Their stable backporting policy seems reasonable and
not laxer than ours.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:48:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:48:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pBO-00018m-46; Thu, 16 Jan 2014 15:48:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W3pBN-00018h-4i
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 15:48:37 +0000
Received: from [85.158.139.211:46610] by server-1.bemta-5.messagelabs.com id
	AF/E5-21065-45FF7D25; Thu, 16 Jan 2014 15:48:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389887315!10195504!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15497 invoked from network); 16 Jan 2014 15:48:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 15:48:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 16 Jan 2014 15:48:54 +0000
Message-Id: <52D80D6102000078001144E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 16 Jan 2014 15:48:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1389036295-3877-1-git-send-email-boris.ostrovsky@oracle.com>
	<1389036295-3877-13-git-send-email-boris.ostrovsky@oracle.com>
	<52D7B158020000780011427E@nat28.tlf.novell.com>
	<52D7F391.60500@oracle.com>
In-Reply-To: <52D7F391.60500@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH v3 12/16] x86/VPMU: Handle PMU interrupts
 for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 15:58, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/16/2014 04:15 AM, Jan Beulich wrote:
>>> @@ -82,7 +87,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>>> ...
>>> +        {
>>> +            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
>>> +            (void)vpmu->arch_vpmu_ops->arch_vpmu_save(current);
>> What's this cast good for?
> 
> These routines return an int that indicates whether this was full 
> context save or just that the counters have been stopped. Depending on 
> where the routines are called from we may ignore the return value 
> (having VPMU_CONTEXT_SAVE forces "full save").
> 
> I did it so that the compiler won't complain about ignoring the return
> value (I don't think it does now, but I imagine some version still may).

If any were, we'd have many more places where such casts to void
would be needed. The only place where such casts make sense is in
macros that produce a value which is to be ignored.

>>> +    /* dom0 will handle this interrupt */
>>> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
>>> +    {
>>> +        if ( smp_processor_id() >= dom0->max_vcpus )
>>> +            return 0;
>>> +        v = dom0->vcpu[smp_processor_id()];
>>> +    }
>> Ugly new uses of "dom0". And the correlation between
>> smp_processor_id() and dom0->max_vcpus doesn't look sane
>> either.
> 
> Yes, this needs to be fixed. Maybe
> 
>      v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
> 
> And drop the check. (And the same change in pmu_softnmi() in a later patch).

Looks odd, but as long as the kernel side can cope...

> Not sure what to do about dom0. What else can I use instead? I need to 
> choose dom0's vcpu.

Just think the disaggregation way - ultimately it may not be Dom0
that handles this. But I said "ugly", not "unacceptable", knowing
that for the time being you likely have no reasonable alternative.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:51:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pDi-0001Zi-PR; Thu, 16 Jan 2014 15:51:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W3pDi-0001Zd-9L
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:51:02 +0000
Received: from [85.158.139.211:16810] by server-7.bemta-5.messagelabs.com id
	4F/D6-04824-5EFF7D25; Thu, 16 Jan 2014 15:51:01 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389887459!8980760!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5855 invoked from network); 16 Jan 2014 15:51:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:51:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91418380"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 15:50:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:50:58 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W3pDd-0007hX-1F;
	Thu, 16 Jan 2014 15:50:57 +0000
Date: Thu, 16 Jan 2014 15:50:56 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140116155055.GN1696@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> On Wed, 15 Jan 2014, Ian Campbell wrote:
> > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > 
> > > > Based on the above I have no idea whether a freeze exception should be
> > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > could have expected.
> > > > 
> > > > If you think there are changes here which should be in 4.4.0 then please
> > > > enumerate all changes included in this merge which have any relation to
> > > > Xen and their potential impact on the release.
> > > 
> > > I have a list of the changes here that have a potential impact on Xen, with
> > > the ones that I think are quite important at the beginning. Either the
> > > commit title speaks for itself or I added a small description of what is
> > > affected.
> > 
> > Thanks but there's not a lot here for me to go on WRT making a decision
> > on a freeze exception. Did you refer to 
> > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > like I said? A freeze exception needs an analysis of benefits and risks,
> > not the very briefest words you can possibly manage.
> > 
> > Anyway it appears this is a grab bag of things we might want and misc
> > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > giving it a blanket exemption based on what you've presented here, or
> > even of cherry picking what might be the important ones. If you think
> > any or all of it is actually important for 4.4 please make a proper case
> > for inclusion, either of the aggregate or of the individual changes.
> 
> Anthony, did you simply update the tree by pulling from the upstream 1.6
> stable tree?

Yes, a simple merge.

> I also assume that you tested at the very least the basic
> PV and HVM configurations?

:(, no, I haven't tried PV. But I did try HVM.

There is one thing that I may still want to try: migration from the
previous version of Xen. There is one patch that changes (fixes?) that.

> If so, I think we should take everything they have there. If we don't,
> I'll propose to do the same for 4.4.1.

Ok.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:52:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pF3-0001i9-W0; Thu, 16 Jan 2014 15:52:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W3pF2-0001i1-Pq
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:52:25 +0000
Received: from [85.158.139.211:12864] by server-4.bemta-5.messagelabs.com id
	B3/C2-26791-83008D25; Thu, 16 Jan 2014 15:52:24 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389887541!10206893!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2151 invoked from network); 16 Jan 2014 15:52:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:52:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93522449"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 15:52:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:52:21 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W3pEy-0007iL-18;
	Thu, 16 Jan 2014 15:52:20 +0000
Message-ID: <1389887534.1061.17.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 16 Jan 2014 15:52:14 +0000
In-Reply-To: <52D80BE802000078001144D2@nat28.tlf.novell.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
	<1389881263.1061.7.camel@hamster.uk.xensource.com>
	<52D7FB460200007800114449@nat28.tlf.novell.com>
	<1389885079.1061.15.camel@hamster.uk.xensource.com>
	<52D80BE802000078001144D2@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] mce: Fix race condition in mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function (mctelem_xchg_head()) used to exchange mce telemetry
list heads is racy.  It may write to the head twice, with the second
write linking to an element in the wrong state.

Consider two threads: T1 inserting on the committed list, and T2
trying to consume it.

1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
2. T1 is interrupted after the cmpxchg succeeded.
3. T2 gets the list and changes element A and updates the commit list
   head.
4. T1 resumes, reads the prev pointer again and compares it with the
   result of the cmpxchg. The cmpxchg succeeded, but in the meantime
   prev has changed in memory.
5. T1 thinks the cmpxchg failed and goes around the loop again,
   linking head to A again.

To solve the race, use a temporary variable for the prev pointer.

*linkp (which points to a field in the element) must be updated before
the cmpxchg(), as after a successful cmpxchg the element might be
immediately removed and reinitialized.

The wmb() prior to the cmpxchgptr() call is not necessary, since
cmpxchgptr() already implies a full memory barrier.  This wmb() is
thus removed.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/cpu/mcheck/mctelem.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

Version 2:
- changed commit message;
- remove useless wmb().

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index 37d830f..895ce1a 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -127,13 +127,14 @@ static DEFINE_PER_CPU(struct mc_telem_cpu_ctl, mctctl);
 static DEFINE_SPINLOCK(processing_lock);
 
 static void mctelem_xchg_head(struct mctelem_ent **headp,
-				struct mctelem_ent **old,
+				struct mctelem_ent **linkp,
 				struct mctelem_ent *new)
 {
 	for (;;) {
-		*old = *headp;
-		wmb();
-		if (cmpxchgptr(headp, *old, new) == *old)
+		struct mctelem_ent *old;
+
+		*linkp = old = *headp;
+		if (cmpxchgptr(headp, old, new) == old)
 			break;
 	}
 }
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 15:52:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 15:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pFQ-0001lY-Dw; Thu, 16 Jan 2014 15:52:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3pFP-0001lP-NK
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 15:52:48 +0000
Received: from [85.158.143.35:19325] by server-2.bemta-4.messagelabs.com id
	58/70-11386-F4008D25; Thu, 16 Jan 2014 15:52:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389887564!371335!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2925 invoked from network); 16 Jan 2014 15:52:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 15:52:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91419197"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 15:52:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 10:52:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3pFK-0007iO-Oy;
	Thu, 16 Jan 2014 15:52:42 +0000
Date: Thu, 16 Jan 2014 15:51:42 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140116155055.GN1696@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Jan 2014, Anthony PERARD wrote:
> On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > 
> > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > could have expected.
> > > > > 
> > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > enumerate all changes included in this merge which have any relation to
> > > > > Xen and their potential impact on the release.
> > > > 
> > > > I have a list of the changes here that have a potential impact on Xen, with
> > > > the ones that I think are quite important at the beginning. Either the
> > > > commit title speaks for itself or I added a small description of what is
> > > > affected.
> > > 
> > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > on a freeze exception. Did you refer to 
> > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > not the very briefest words you can possibly manage.
> > > 
> > > Anyway it appears this is a grab bag of things we might want and misc
> > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > giving it a blanket exemption based on what you've presented here, or
> > > even of cherry picking what might be the important ones. If you think
> > > any or all of it is actually important for 4.4 please make a proper case
> > > for inclusion, either of the aggregate or of the individual changes.
> > 
> > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > stable tree?
> 
> Yes, a simple merge.
> 
> > I also assume that you tested at the very least the basic
> > PV and HVM configurations?
> 
> :(, no, I haven't tried PV. But I did try HVM.
> 
> There is one thing I may want to try: migration from the previous
> version of Xen. There is one patch that changes (fixes?) that.

Please do and let me know if it works as expected.



> > If so, I think we should take everything they have there. If we don't,
> > I'll propose to do the same for 4.4.1.
> 
> Ok.
> 
> -- 
> Anthony PERARD
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 16:04:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 16:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pQq-0002uZ-KC; Thu, 16 Jan 2014 16:04:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W3pQo-0002uM-PW
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 16:04:35 +0000
Received: from [85.158.137.68:13672] by server-5.bemta-3.messagelabs.com id
	E8/8A-25188-21308D25; Thu, 16 Jan 2014 16:04:34 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389888273!8772089!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25919 invoked from network); 16 Jan 2014 16:04:33 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
From xen-devel-bounces@lists.xen.org Thu Jan 16 16:04:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 16:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pQq-0002uZ-KC; Thu, 16 Jan 2014 16:04:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W3pQo-0002uM-PW
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 16:04:35 +0000
Received: from [85.158.137.68:13672] by server-5.bemta-3.messagelabs.com id
	E8/8A-25188-21308D25; Thu, 16 Jan 2014 16:04:34 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389888273!8772089!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25919 invoked from network); 16 Jan 2014 16:04:33 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 16 Jan 2014 16:04:33 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328]
	by os.inf.tu-dresden.de with esmtpsa
	(TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.82)
	id 1W3pQn-0002jt-2A; Thu, 16 Jan 2014 17:04:33 +0100
Message-ID: <52D8030E.1050501@os.inf.tu-dresden.de>
Date: Thu, 16 Jan 2014 17:04:30 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
In-Reply-To: <52D80324020000780011447A@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5638682330891921928=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============5638682330891921928==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="gqvpxSGfhJUaAPh2v1tSMKFuW7qO5jXaO"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--gqvpxSGfhJUaAPh2v1tSMKFuW7qO5jXaO
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 01/16/2014 04:04 PM, Jan Beulich wrote:
> I don't think so - this would only be an issue if the conditions used
> | instead of ||. || implies a sequence point between evaluating the
> left and right sides, and the standard says: "The presence of a
> sequence point between the evaluation of expressions A and B
> implies that every value computation and side effect associated
> with A is sequenced before every value computation and side
> effect associated with B."

This only applies to single-threaded code. Multithreaded code must be
data-race free for that to be true. See

https://lwn.net/Articles/508991/

> And even if there was a problem (i.e. my interpretation of the
> above being incorrect), I don't think you'd need ACCESS_ONCE()
> here: The same local variable can't have two different values in
> two different use sites when there was no intermediate
> assignment to it.

Same comment as above.

Julian


--gqvpxSGfhJUaAPh2v1tSMKFuW7qO5jXaO
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlLYAw4ACgkQ2EtjUdW3H9mg3wCfU44CY1EtPolzH94/ZTCD9ieS
UssAn0EbzbTnn9KyRddcduoAqNaETCJq
=rOdL
-----END PGP SIGNATURE-----

--gqvpxSGfhJUaAPh2v1tSMKFuW7qO5jXaO--


--===============5638682330891921928==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5638682330891921928==--


From xen-devel-bounces@lists.xen.org Thu Jan 16 16:11:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 16:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pWw-00034P-I7; Thu, 16 Jan 2014 16:10:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W3pWv-00034K-Ms
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 16:10:53 +0000
Received: from [85.158.137.68:2933] by server-6.bemta-3.messagelabs.com id
	B6/AC-04868-C8408D25; Thu, 16 Jan 2014 16:10:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389888650!8419154!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16986 invoked from network); 16 Jan 2014 16:10:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 16:10:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91428093"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 16:10:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 11:10:49 -0500
Message-ID: <1389888647.6697.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 16 Jan 2014 16:10:47 +0000
In-Reply-To: <52D6B62A.9000208@linaro.org>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	nwhitehorn@freebsd.org, gibbs@freebsd.org,
	Warner Losh <imp@bsdimp.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-15 at 16:24 +0000, Julien Grall wrote:
> As I understand main bindings (gic, timer) are set in stone and won't
> change. Ian Campbell has a repository with all the ARM bindings here:
> http://xenbits.xen.org/gitweb/?p=people/ianc/device-tree-rebasing.git;a=tree;f=Bindings/arm 

FYI, this tree is automatically extracted from the bindings in the Linux
source. Long term we would like to split them out of Linux and try to
standardize some (if not all) of them, precisely because these bindings
should not be OS specific...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 16:20:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 16:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3pfe-0004V0-0O; Thu, 16 Jan 2014 16:19:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3pfc-0004Uv-Fy
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 16:19:52 +0000
Received: from [193.109.254.147:12859] by server-5.bemta-14.messagelabs.com id
	1F/79-03510-7A608D25; Thu, 16 Jan 2014 16:19:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389889189!11322708!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1745 invoked from network); 16 Jan 2014 16:19:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 16:19:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93535657"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 16:19:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 11:19:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3pfX-00087Q-IV;
	Thu, 16 Jan 2014 16:19:47 +0000
Date: Thu, 16 Jan 2014 16:18:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401161553560.21510@kaball.uk.xensource.com>
References: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
	<1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 13 Jan 2014, Zoltan Kiss wrote:
> This patch does the following:
> - move the m2p_override part from grant_[un]map_refs to gntdev, where it is
>   needed after mapping operations

As I wrote earlier, I am not against the idea of moving the m2p_override
calls in principle, but I would like to see either the calls being moved
to x86 specific areas or left in grant-table.c.

The reason is simple: from an architectural perspective if we wanted to
introduce an m2p on arm (actually we already have it, but at the moment
we are not setting it up using m2p_override calls), we would want to do it
by calls from grant-table.c because unfortunately we need it for things
other than gntdev. In fact for devices that are not behind an IOMMU, we
do need to make sure that DMA operations involving granted pages are
handled correctly (by translating the pfns to mfns and vice versa). This
would have to happen for netback and blkback too.
So if we say that m2p_override functions are arch-independent, then they
need to stay where they are.

On the other hand, if we say that the m2p_override is an x86 specific
hack that is going away, then I don't want it in common code.
If you insist on calling it from common code, then please remove the
m2p_override stubs from arch/arm and ifdef x86 the whole thing in
gntdev.c.


> - but move set_phys_to_machine from m2p_override to grant_[un]map_refs,
>   because it is needed always
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/xen/p2m.c        |    5 ---
>  drivers/xen/gntdev.c      |   62 ++++++++++++++++++++++++++++++++++---
>  drivers/xen/grant-table.c |   75 +++++++++++++++------------------------------
>  3 files changed, 83 insertions(+), 59 deletions(-)
> 
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..b1e9407 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -891,10 +891,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	WARN_ON(PagePrivate(page));
>  	SetPagePrivate(page);
>  	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -962,7 +958,6 @@ int m2p_remove_override(struct page *page,
>  	WARN_ON(!PagePrivate(page));
>  	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
>
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..b89aaa2 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -250,6 +250,9 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token,
>  static int map_grant_pages(struct grant_map *map)
>  {
>  	int i, err = 0;
> +	bool lazy = false;
> +	pte_t *pte;
> +	unsigned long mfn;
>  
>  	if (!use_ptemod) {
>  		/* Note: it could already be mapped */
> @@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs(map->map_ops, NULL, map->pages, map->count);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < map->count; i++) {
> +		/* Do not add to override if the map failed. */
> +		if (map->map_ops[i].status)
> +			continue;
> +
> +		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
> +			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
> +				(map->map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
> +		}
> +		err = m2p_add_override(mfn,
> +				       map->pages[i],
> +				       use_ptemod ? &map->kmap_ops[i] : NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> @@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
>  static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  {
>  	int i, err = 0;
> +	bool lazy = false;
>  
>  	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
>  		int pgno = (map->notify.addr >> PAGE_SHIFT);
> @@ -316,8 +349,29 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  	}
>  
>  	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +				NULL,
> +				map->pages + offset,
> +				pages);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < pages; i++) {
> +		err = m2p_remove_override((map->pages + offset)[i],
> +					  use_ptemod ?
> +					  &(map->kmap_ops + offset)[i] :
> +					  NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..ad281e4 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -885,7 +885,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> @@ -904,40 +903,29 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		for (i = 0; i < count; i++) {
>  			if (map_ops[i].status)
>  				continue;
> -			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> -					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> +			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> +							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
> +				return -ENOMEM;
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		/* Do not add to override if the map failed. */
> -		if (map_ops[i].status)
> -			continue;
> -
> -		if (map_ops[i].flags & GNTMAP_contains_pte) {
> -			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> -				(map_ops[i].host_addr & ~PAGE_MASK));
> -			mfn = pte_mfn(*pte);
> -		} else {
> -			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +	} else {
> +		for (i = 0; i < count; i++) {
> +			if (map_ops[i].status)
> +				continue;
> +			if (map_ops[i].flags & GNTMAP_contains_pte) {
> +				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> +					(map_ops[i].host_addr & ~PAGE_MASK));
> +				mfn = pte_mfn(*pte);
> +			} else {
> +				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +			}
> +			pages[i]->index = pfn_to_mfn(page_to_pfn(pages[i]));
> +			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
> +							  FOREIGN_FRAME(mfn))))
> +				return -ENOMEM;
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
>  	}
>  
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> -
> -	return ret;
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> @@ -946,7 +934,6 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
> @@ -958,26 +945,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> +	} else {
> +		for (i = 0; i < count; i++) {
> +				set_phys_to_machine(page_to_pfn(pages[i]),
> +						    pages[i]->index);
> +		}
>  	}
>  
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> -
> -	return ret;
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +				(map->map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
> +		}
> +		err = m2p_add_override(mfn,
> +				       map->pages[i],
> +				       use_ptemod ? &map->kmap_ops[i] : NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> @@ -304,6 +336,7 @@ static int map_grant_pages(struct grant_map *map)
>  static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  {
>  	int i, err = 0;
> +	bool lazy = false;
>  
>  	if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
>  		int pgno = (map->notify.addr >> PAGE_SHIFT);
> @@ -316,8 +349,29 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  	}
>  
>  	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +				NULL,
> +				map->pages + offset,
> +				pages);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < pages; i++) {
> +		err = m2p_remove_override((map->pages + offset)[i],
> +					  use_ptemod ?
> +					  &(map->kmap_ops + offset)[i] :
> +					  NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..ad281e4 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -885,7 +885,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> @@ -904,40 +903,29 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		for (i = 0; i < count; i++) {
>  			if (map_ops[i].status)
>  				continue;
> -			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> -					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> +			if (unlikely(!set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> +							  map_ops[i].dev_bus_addr >> PAGE_SHIFT)))
> +				return -ENOMEM;
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		/* Do not add to override if the map failed. */
> -		if (map_ops[i].status)
> -			continue;
> -
> -		if (map_ops[i].flags & GNTMAP_contains_pte) {
> -			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> -				(map_ops[i].host_addr & ~PAGE_MASK));
> -			mfn = pte_mfn(*pte);
> -		} else {
> -			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +	} else {
> +		for (i = 0; i < count; i++) {
> +			if (map_ops[i].status)
> +				continue;
> +			if (map_ops[i].flags & GNTMAP_contains_pte) {
> +				pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> +					(map_ops[i].host_addr & ~PAGE_MASK));
> +				mfn = pte_mfn(*pte);
> +			} else {
> +				mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +			}
> +			pages[i]->index = pfn_to_mfn(page_to_pfn(pages[i]));
> +			if (unlikely(!set_phys_to_machine(page_to_pfn(pages[i]),
> +							  FOREIGN_FRAME(mfn))))
> +				return -ENOMEM;
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
>  	}
>  
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> -
> -	return ret;
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> @@ -946,7 +934,6 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct page **pages, unsigned int count)
>  {
>  	int i, ret;
> -	bool lazy = false;
>  
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
> @@ -958,26 +945,14 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> +	} else {
> +		for (i = 0; i < count; i++) {
> +			set_phys_to_machine(page_to_pfn(pages[i]),
> +					    pages[i]->index);
> +		}
>  	}
>  
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> -
> -	return ret;
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 16:58:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 16:58:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qGk-000659-JQ; Thu, 16 Jan 2014 16:58:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W3qGg-00063s-Vt
	for xen-devel@lists.xenproject.org; Thu, 16 Jan 2014 16:58:11 +0000
Received: from [85.158.137.68:47026] by server-3.bemta-3.messagelabs.com id
	C6/33-10658-1AF08D25; Thu, 16 Jan 2014 16:58:09 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389891487!9547355!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14942 invoked from network); 16 Jan 2014 16:58:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 16:58:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93552055"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 16:58:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 16 Jan 2014 11:58:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W3pzN-0008QB-MH;
	Thu, 16 Jan 2014 16:40:17 +0000
Date: Thu, 16 Jan 2014 16:39:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52D6A9A10200007800113E08@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1401161638550.21510@kaball.uk.xensource.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
	<20140115142419.GK1696@perard.uk.xensource.com>
	<52D6A9A10200007800113E08@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Jan 2014, Jan Beulich wrote:
> >>> On 15.01.14 at 15:24, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > On Wed, Jan 15, 2014 at 10:48:09AM +0000, Ian Campbell wrote:
> >> Given that it is 2.4 and Qemu's configure script explicitly says that
> >> 2.4 is what they want I think it would be worth sending the fix to qemu
> >> upstream -- at worst they say no and bump their requirement (at which
> >> point we'd have to have a think about what to do).
> > 
> > It looks like the fix is already upstream. Here in the 1.6 stable
> > branch:
> > http://git.qemu.org/?p=qemu.git;a=commit;h=0ca1774b619dc53db5eb1419d12efcd433f9fe3d
> 
> Oh, great. Stefano - can you pull this in then, please?

I would rather pull it as part of the update to 1.6.2.


From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qen-0007cc-04; Thu, 16 Jan 2014 17:23:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qed-0007YV-V0
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:56 +0000
Received: from [85.158.143.35:3480] by server-1.bemta-4.messagelabs.com id
	55/24-02132-D6518D25; Thu, 16 Jan 2014 17:22:53 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389892970!10935079!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23928 invoked from network); 16 Jan 2014 17:22:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93562729"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:50 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeN-0005uR-Ou;
	Thu, 16 Jan 2014 17:22:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeM-0002Dm-Ak;
	Thu, 16 Jan 2014 17:22:38 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:22 +0000
Message-ID: <1389892942-8452-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 7/7] libxl: fork: Provide
	LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the feature test macro for libxl_childproc_sigchld_occurred
and libxl_sigchld_owner_libxl_always_selective_reap.

It is split out into this separate patch because a single feature test
is sensible: we do not intend anyone to release or ship libxl versions
with one of these features but not the other. The two features
themselves remain in separate patches for clarity, and keeping the
macro separate simply makes the actual code easier to read.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.h |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..1ac34c3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP
+ *
+ * If this is defined:
+ *
+ * Firstly, the enum libxl_sigchld_owner (in libxl_event.h) has the
+ * value libxl_sigchld_owner_libxl_always_selective_reap which may be
+ * passed to libxl_childproc_setmode in hooks->chldmode.
+ *
+ * Secondly, the function libxl_childproc_sigchld_occurred exists.
+ */
+#define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeb-0007Xl-1K; Thu, 16 Jan 2014 17:22:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeW-0007WI-RL
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:49 +0000
Received: from [85.158.139.211:35221] by server-16.bemta-5.messagelabs.com id
	C3/C5-11843-76518D25; Thu, 16 Jan 2014 17:22:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389892964!10220110!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24302 invoked from network); 16 Jan 2014 17:22:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459271"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:44 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeH-0005u9-L5;
	Thu, 16 Jan 2014 17:22:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeG-0002DL-17;
	Thu, 16 Jan 2014 17:22:32 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:17 +0000
Message-ID: <1389892942-8452-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/7] libxl: fork: Break out childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to do this again at a new call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,15 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                  int status)
+{
+    pid_t pid = ch->pid;
+    LIBXL_LIST_REMOVE(ch, entry);
+    ch->pid = -1;
+    ch->callback(egc, ch, pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +311,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeL-0007UB-MC; Thu, 16 Jan 2014 17:22:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeJ-0007Tz-ST
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:35 +0000
Received: from [85.158.143.35:52744] by server-2.bemta-4.messagelabs.com id
	13/CF-11386-B5518D25; Thu, 16 Jan 2014 17:22:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389892953!394815!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18118 invoked from network); 16 Jan 2014 17:22:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459177"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeF-0005u3-F5;
	Thu, 16 Jan 2014 17:22:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeD-0002DE-IF;
	Thu, 16 Jan 2014 17:22:29 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:15 +0000
Message-ID: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [RFC PATCH 0/7] libxl: fork: Selective reaping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libvirt reaps its children synchronously and has no central pid
registry and no dispatch mechanism.  libxl does have a pid registry, so
it can provide a selective reaping facility, but that facility is not
currently exposed.

NB that I have compiled this series but I have NOT EXECUTED IT.
The most plausible test environment is a suitably modified libvirt.

 1/7 libxl: fork: Break out checked_waitpid
 2/7 libxl: fork: Break out childproc_reaped_ours
 3/7 libxl: fork: Clarify docs for libxl_sigchld_owner
 4/7 libxl: fork: assert that chldmode is right
 5/7 libxl: fork: Provide libxl_childproc_sigchld_occurred
 6/7 libxl: fork: Provide ..._always_selective_reap
 7/7 libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeZ-0007XH-63; Thu, 16 Jan 2014 17:22:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeV-0007W7-6K
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:47 +0000
Received: from [85.158.139.211:15352] by server-12.bemta-5.messagelabs.com id
	41/D2-30017-66518D25; Thu, 16 Jan 2014 17:22:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389892964!10220110!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24246 invoked from network); 16 Jan 2014 17:22:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459257"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:43 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeG-0005u6-EQ;
	Thu, 16 Jan 2014 17:22:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeE-0002DH-Ri;
	Thu, 16 Jan 2014 17:22:30 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:16 +0000
Message-ID: <1389892942-8452-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/7] libxl: fork: Break out checked_waitpid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a simple error-handling wrapper for waitpid.  We're going to
want to call waitpid somewhere else and this avoids some of the
duplication.

No functional change in this patch.  (Technically, we used to check
chldmode_ours again in the EINTR case, and don't now, but that can't
have changed because we continuously hold the libxl ctx lock.)
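The wrapper's retry logic can be exercised standalone against plain POSIX
waitpid.  The sketch below mirrors the patch's control flow; the function
name and the reduced error reporting (return value instead of
LIBXL__EVENT_DISASTER) are illustrative, not libxl's:

```c
#include <assert.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of the checked_waitpid pattern: like waitpid(,,WNOHANG) but
 * retries on EINTR and folds unexpected errors into "nothing reaped".
 * Returns >0 (pid reaped), 0 (nothing ready, or other error), or
 * -1 (ECHILD: no such child to wait for). */
static pid_t waitpid_nohang_retry(pid_t want, int *status)
{
    for (;;) {
        pid_t got = waitpid(want, status, WNOHANG);
        if (got != -1)
            return got;
        if (errno == ECHILD)
            return -1;
        if (errno == EINTR)
            continue;            /* interrupted by a signal: retry */
        return 0;                /* the patch reports a DISASTER here */
    }
}
```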

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 4ae9f94..2252370 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -155,6 +155,22 @@ int libxl__carefd_fd(const libxl__carefd *cf)
  * Actual child process handling
  */
 
+/* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
+static pid_t checked_waitpid(libxl__egc *egc, pid_t want, int *status)
+{
+    for (;;) {
+        pid_t got = waitpid(want, status, WNOHANG);
+        if (got != -1)
+            return got;
+        if (errno == ECHILD)
+            return got;
+        if (errno == EINTR)
+            continue;
+        LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        return 0;
+    }
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents);
 
@@ -331,16 +347,10 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
-        pid_t pid = waitpid(-1, &status, WNOHANG);
-
-        if (pid == 0) return;
+        pid_t pid = checked_waitpid(egc, -1, &status);
 
-        if (pid == -1) {
-            if (errno == ECHILD) return;
-            if (errno == EINTR) continue;
-            LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        if (pid == 0 || pid == -1 /* ECHILD */)
             return;
-        }
 
         int rc = childproc_reaped(egc, pid, status);
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeW-0007WF-14; Thu, 16 Jan 2014 17:22:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeU-0007W0-1O
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:46 +0000
Received: from [85.158.137.68:17710] by server-5.bemta-3.messagelabs.com id
	FD/AE-25188-56518D25; Thu, 16 Jan 2014 17:22:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389892963!9608504!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6407 invoked from network); 16 Jan 2014 17:22:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93562657"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:42 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeK-0005uG-97;
	Thu, 16 Jan 2014 17:22:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeI-0002DV-LD;
	Thu, 16 Jan 2014 17:22:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:19 +0000
Message-ID: <1389892942-8452-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qef-0007ZU-QW; Thu, 16 Jan 2014 17:22:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qea-0007Xc-Q4
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:53 +0000
Received: from [85.158.137.68:15704] by server-3.bemta-3.messagelabs.com id
	50/36-10658-B6518D25; Thu, 16 Jan 2014 17:22:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389892969!9608527!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6887 invoked from network); 16 Jan 2014 17:22:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459308"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeL-0005uL-Bx;
	Thu, 16 Jan 2014 17:22:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeJ-0002Da-RO;
	Thu, 16 Jan 2014 17:22:35 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:20 +0000
Message-ID: <1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
	libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which don't keep track of all their child processes
in a manner suitable for coherent dispatch of their termination.  In
such a situation, nothing in the whole process may call wait, or
waitpid(-1,,).  Doing so reaps processes belonging to other parts of
the application and there is then no way to deliver the exit status to
the right place.

To facilitate this, provide a facility for such an application to ask
libxl to call waitpid on each of its children individually.
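For an application in this position, the plumbing might look like the
following self-contained sketch: a SIGCHLD handler that only writes to a
self-pipe, and a main loop that drains the pipe and notifies libxl from
normal context.  The libxl entry point is stubbed here as
fake_libxl_childproc_sigchld_occurred so the sketch compiles standalone;
a real caller would invoke libxl_childproc_sigchld_occurred(ctx) instead.

```c
#include <assert.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

/* Self-pipe: the handler does nothing but an async-signal-safe write;
 * all real work happens later, outside signal context, as the API
 * comment requires. */
static int selfpipe[2];
static int sigchld_notifications;

static void app_sigchld_handler(int sig)
{
    (void)sig;
    (void)write(selfpipe[1], "", 1);   /* async-signal-safe */
}

/* Stand-in for libxl_childproc_sigchld_occurred(ctx), which would make
 * libxl check and reap only its own children. */
static void fake_libxl_childproc_sigchld_occurred(void)
{
    sigchld_notifications++;
}

static void app_setup(void)
{
    pipe(selfpipe);
    fcntl(selfpipe[0], F_SETFL, O_NONBLOCK);  /* drain must not block */
    signal(SIGCHLD, app_sigchld_handler);
}

static void app_mainloop_iteration(void)
{
    char c;
    while (read(selfpipe[0], &c, 1) == 1)   /* one notification per byte */
        fake_libxl_childproc_sigchld_occurred();
}
```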

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_fork.c  |   45 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 4221d5a..12e3d1f 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -471,9 +471,10 @@ typedef enum {
      * all children. */
     libxl_sigchld_owner_libxl,
 
-    /* Application promises to call libxl_childproc_exited but NOT
-     * from within a signal handler.  libxl will not itself arrange to
-     * (un)block or catch SIGCHLD. */
+    /* Application promises to discover when SIGCHLD occurs and call
+     * libxl_childproc_exited or libxl_childproc_sigchld_occurred (but
+     * NOT from within a signal handler).  libxl will not itself
+     * arrange to (un)block or catch SIGCHLD. */
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
@@ -531,7 +532,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 
 /*
  * This function is for an application which owns SIGCHLD and which
- * therefore reaps all of the process's children.
+ * reaps all of the process's children, and dispatches the exit status
+ * to the correct place inside the application.
  *
  * May be called only by an application which has called setmode with
  * chldowner == libxl_sigchld_owner_mainloop.  If pid was a process started
@@ -547,6 +549,25 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 int libxl_childproc_reaped(libxl_ctx *ctx, pid_t, int status)
                            LIBXL_EXTERNAL_CALLERS_ONLY;
 
+/*
+ * This function is for an application which owns SIGCHLD but which
+ * doesn't keep track of all of its own children in a manner suitable
+ * for reaping all of them and then dispatching them.
+ *
+ * Such an application must notify libxl, by calling this
+ * function, that a SIGCHLD occurred.  libxl will then check all its
+ * children, reap any that are ready, and take any action necessary -
+ * but it will not reap anything else.
+ *
+ * May be called only by an application which has called setmode with
+ * chldowner == libxl_sigchld_owner_mainloop.
+ *
+ * May NOT be called from within a signal handler which might
+ * interrupt any libxl operation (just like libxl_childproc_reaped).
+ */
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+                           LIBXL_EXTERNAL_CALLERS_ONLY;
+
 
 /*
  * An application which initialises a libxl_ctx in a parent process
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 85db2fb..b2325e0 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -330,6 +330,51 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
     return rc;
 }
 
+static void childproc_checkall(libxl__egc *egc)
+{
+    EGC_GC;
+    libxl__ev_child *ch;
+
+    for (;;) {
+        int status;
+        pid_t got;
+
+        LIBXL_LIST_FOREACH(ch, &CTX->children, entry) {
+            got = checked_waitpid(egc, ch->pid, &status);
+            if (got)
+                goto found;
+        }
+        /* not found */
+        return;
+
+    found:
+        if (got == -1) {
+            LIBXL__EVENT_DISASTER
+                (egc, "waitpid() gave ECHILD but we have a child",
+                 ECHILD, 0);
+            /* it must have finished but we don't know its status */
+            status = 255<<8; /* no wait.h macro for this! */
+            assert(WIFEXITED(status));
+            assert(WEXITSTATUS(status)==255);
+            assert(!WIFSIGNALED(status));
+            assert(!WIFSTOPPED(status));
+        }
+        childproc_reaped_ours(egc, ch, status);
+        /* we need to restart the loop, as children may have been edited */
+    }
+}
+
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+{
+    EGC_INIT(ctx);
+    CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
+    childproc_checkall(egc);
+    CTX_UNLOCK;
+    EGC_FREE;
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents)
 {
-- 
1.7.10.4
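A side note on the synthesised status in childproc_checkall above:
255<<8 relies on the conventional wait-status layout with the exit code
in bits 8..15.  A minimal standalone check, assuming a glibc/BSD-style
<sys/wait.h> encoding:

```c
#include <assert.h>
#include <sys/wait.h>

/* Build the same synthetic "exited with 255" status the patch uses and
 * decode it with the standard macros. */
static int synthetic_exit_status(void)
{
    int status = 255 << 8;   /* no wait.h macro constructs this */
    if (!WIFEXITED(status) || WIFSIGNALED(status) || WIFSTOPPED(status))
        return -1;
    return WEXITSTATUS(status);
}
```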



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qem-0007bb-HQ; Thu, 16 Jan 2014 17:23:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeb-0007YA-Op
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:54 +0000
Received: from [85.158.143.35:3436] by server-3.bemta-4.messagelabs.com id
	F6/3A-32360-C6518D25; Thu, 16 Jan 2014 17:22:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389892970!10935079!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23795 invoked from network); 16 Jan 2014 17:22:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93562714"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:49 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeM-0005uO-KQ;
	Thu, 16 Jan 2014 17:22:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeL-0002Dg-1f;
	Thu, 16 Jan 2014 17:22:37 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:21 +0000
Message-ID: <1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which want to use libxl in an event-driven mode but
which do not integrate child termination into their event system;
instead, they reap all their own children synchronously.

In such an application libxl must own SIGCHLD but avoid reaping any
children that don't belong to libxl.

Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
behaviour.
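
The guarantee this mode builds on - waiting on an explicit pid reaps
only that child and leaves the application's other children alone - is
plain waitpid behaviour.  A minimal sketch (the helper name is
illustrative):

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reap exactly the named child (blocking) and return its exit code, or
 * -1 on error.  Children with other pids are left unreaped, which is
 * what lets libxl and the application wait for disjoint sets of
 * children without treading on each other. */
static int reap_one(pid_t pid)
{
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```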

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++++
 tools/libxl/libxl_fork.c  |    7 +++++++
 2 files changed, 12 insertions(+)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 12e3d1f..c09e3ed 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -480,6 +480,11 @@ typedef enum {
     /* libxl owns SIGCHLD all the time, and the application is
      * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
+
+    /* libxl owns SIGCHLD all the time, but it must only reap its own
+     * children.  The application will reap its own children
+     * synchronously with waitpid, without the assistance of SIGCHLD. */
+    libxl_sigchld_owner_libxl_always_selective_reap,
 } libxl_sigchld_owner;
 
 typedef struct {
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b2325e0..16e17f6 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
@@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
     int e = libxl__self_pipe_eatall(selfpipe);
     if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
 
+    if (CTX->childproc_hooks->chldowner
+        == libxl_sigchld_owner_libxl_always_selective_reap) {
+        childproc_checkall(egc);
+        return;
+    }
+
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
         pid_t pid = checked_waitpid(egc, -1, &status);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeL-0007UB-MC; Thu, 16 Jan 2014 17:22:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeJ-0007Tz-ST
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:35 +0000
Received: from [85.158.143.35:52744] by server-2.bemta-4.messagelabs.com id
	13/CF-11386-B5518D25; Thu, 16 Jan 2014 17:22:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389892953!394815!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18118 invoked from network); 16 Jan 2014 17:22:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459177"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeF-0005u3-F5;
	Thu, 16 Jan 2014 17:22:31 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeD-0002DE-IF;
	Thu, 16 Jan 2014 17:22:29 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:15 +0000
Message-ID: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [RFC PATCH 0/7] libxl: fork: Selective reaping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libvirt reaps its children synchronously and has no central pid
registry and no dispatch mechanism.  libxl does have a pid registry, so
it can provide a selective reaping facility, but that facility is not
currently exposed.

NB that I have compiled this series but I have NOT EXECUTED IT.
The most plausible test environment is a suitably modified libvirt.

 1/7 libxl: fork: Break out checked_waitpid
 2/7 libxl: fork: Break out childproc_reaped_ours
 3/7 libxl: fork: Clarify docs for libxl_sigchld_owner
 4/7 libxl: fork: assert that chldmode is right
 5/7 libxl: fork: Provide libxl_childproc_sigchld_occurred
 6/7 libxl: fork: Provide ..._always_selective_reap
 7/7 libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeb-0007Xl-1K; Thu, 16 Jan 2014 17:22:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeW-0007WI-RL
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:49 +0000
Received: from [85.158.139.211:35221] by server-16.bemta-5.messagelabs.com id
	C3/C5-11843-76518D25; Thu, 16 Jan 2014 17:22:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389892964!10220110!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24302 invoked from network); 16 Jan 2014 17:22:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459271"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:44 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeH-0005u9-L5;
	Thu, 16 Jan 2014 17:22:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeG-0002DL-17;
	Thu, 16 Jan 2014 17:22:32 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:17 +0000
Message-ID: <1389892942-8452-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/7] libxl: fork: Break out childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to do this again at a new call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,14 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                 int status)
+{
+    LIBXL_LIST_REMOVE(ch, entry);
+    ch->pid = -1;
+    ch->callback(egc, ch, ch->pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +311,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qef-0007ZU-QW; Thu, 16 Jan 2014 17:22:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qea-0007Xc-Q4
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:53 +0000
Received: from [85.158.137.68:15704] by server-3.bemta-3.messagelabs.com id
	50/36-10658-B6518D25; Thu, 16 Jan 2014 17:22:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389892969!9608527!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6887 invoked from network); 16 Jan 2014 17:22:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459308"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeL-0005uL-Bx;
	Thu, 16 Jan 2014 17:22:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeJ-0002Da-RO;
	Thu, 16 Jan 2014 17:22:35 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:20 +0000
Message-ID: <1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
	libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which don't keep track of all their child processes
in a manner suitable for coherent dispatch of their termination.  In
such a situation, nothing in the whole process may call wait, or
waitpid(-1,,).  Doing so reaps processes belonging to other parts of
the application and there is then no way to deliver the exit status to
the right place.

To facilitate this, provide a facility for such an application to ask
libxl to call waitpid on each of its children individually.
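
The "waitpid on each child individually" idea can be sketched outside
libxl with plain POSIX calls (an illustrative sketch only; the function
name reap_one_of_ours and the pid-array interface are invented here,
standing in for libxl's internal child list):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Selective reaping: given the pids that belong to "us", poll each one
 * with waitpid(pid, ..., WNOHANG) rather than calling waitpid(-1, ...),
 * so children owned by other parts of the process are never stolen.
 * Returns the pid reaped, or 0 if none of ours has exited yet. */
static pid_t reap_one_of_ours(const pid_t *ours, int nours, int *status)
{
    for (int i = 0; i < nours; i++) {
        pid_t got;
        do {
            got = waitpid(ours[i], status, WNOHANG);
        } while (got == -1 && errno == EINTR);
        if (got > 0)
            return got;   /* one of our children has exited */
    }
    return 0;             /* nothing of ours is ready */
}
```

Because waitpid is only ever given specific pids, a child created by
another component of the application stays reapable by its real owner.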

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_fork.c  |   45 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 4221d5a..12e3d1f 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -471,9 +471,10 @@ typedef enum {
      * all children. */
     libxl_sigchld_owner_libxl,
 
-    /* Application promises to call libxl_childproc_exited but NOT
-     * from within a signal handler.  libxl will not itself arrange to
-     * (un)block or catch SIGCHLD. */
+    /* Application promises to discover when SIGCHLD occurs and call
+     * libxl_childproc_exited or libxl_childproc_sigchld_occurred (but
+     * NOT from within a signal handler).  libxl will not itself
+     * arrange to (un)block or catch SIGCHLD. */
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
@@ -531,7 +532,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 
 /*
  * This function is for an application which owns SIGCHLD and which
- * therefore reaps all of the process's children.
+ * reaps all of the process's children, and dispatches the exit status
+ * to the correct place inside the application.
  *
  * May be called only by an application which has called setmode with
  * chldowner == libxl_sigchld_owner_mainloop.  If pid was a process started
@@ -547,6 +549,25 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 int libxl_childproc_reaped(libxl_ctx *ctx, pid_t, int status)
                            LIBXL_EXTERNAL_CALLERS_ONLY;
 
+/*
+ * This function is for an application which owns SIGCHLD but which
+ * doesn't keep track of all of its own children in a manner suitable
+ * for reaping all of them and then dispatching them.
+ *
+ * Such an application must notify libxl, by calling this
+ * function, that a SIGCHLD occurred.  libxl will then check all its
+ * children, reap any that are ready, and take any action necessary -
+ * but it will not reap anything else.
+ *
+ * May be called only by an application which has called setmode with
+ * chldowner == libxl_sigchld_owner_mainloop.
+ *
+ * May NOT be called from within a signal handler which might
+ * interrupt any libxl operation (just like libxl_childproc_reaped).
+ */
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+                           LIBXL_EXTERNAL_CALLERS_ONLY;
+
 
 /*
  * An application which initialises a libxl_ctx in a parent process
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 85db2fb..b2325e0 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -330,6 +330,51 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
     return rc;
 }
 
+static void childproc_checkall(libxl__egc *egc)
+{
+    EGC_GC;
+    libxl__ev_child *ch;
+
+    for (;;) {
+        int status;
+        pid_t got;
+
+        LIBXL_LIST_FOREACH(ch, &CTX->children, entry) {
+            got = checked_waitpid(egc, ch->pid, &status);
+            if (got)
+                goto found;
+        }
+        /* not found */
+        return;
+
+    found:
+        if (got == -1) {
+            LIBXL__EVENT_DISASTER
+                (egc, "waitpid() gave ECHILD but we have a child",
+                 ECHILD, 0);
+            /* it must have finished but we don't know its status */
+            status = 255<<8; /* no wait.h macro for this! */
+            assert(WIFEXITED(status));
+            assert(WEXITSTATUS(status)==255);
+            assert(!WIFSIGNALED(status));
+            assert(!WIFSTOPPED(status));
+        }
+        childproc_reaped_ours(egc, ch, status);
+        /* we need to restart the loop, as children may have been edited */
+    }
+}
+
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+{
+    EGC_INIT(ctx);
+    CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
+    childproc_checkall(egc);
+    CTX_UNLOCK;
+    EGC_FREE;
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qem-0007bb-HQ; Thu, 16 Jan 2014 17:23:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeb-0007YA-Op
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:54 +0000
Received: from [85.158.143.35:3436] by server-3.bemta-4.messagelabs.com id
	F6/3A-32360-C6518D25; Thu, 16 Jan 2014 17:22:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389892970!10935079!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23795 invoked from network); 16 Jan 2014 17:22:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93562714"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:49 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeM-0005uO-KQ;
	Thu, 16 Jan 2014 17:22:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeL-0002Dg-1f;
	Thu, 16 Jan 2014 17:22:37 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:21 +0000
Message-ID: <1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which want to use libxl in an event-driven mode but
which do not integrate child termination into their event system,
instead reaping all their own children synchronously.

In such an application libxl must own SIGCHLD but avoid reaping any
children that don't belong to libxl.

Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
behaviour.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++++
 tools/libxl/libxl_fork.c  |    7 +++++++
 2 files changed, 12 insertions(+)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 12e3d1f..c09e3ed 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -480,6 +480,11 @@ typedef enum {
     /* libxl owns SIGCHLD all the time, and the application is
      * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
+
+    /* libxl owns SIGCHLD all the time, but it must only reap its own
+     * children.  The application will reap its own children
+     * synchronously with waitpid, without the assistance of SIGCHLD. */
+    libxl_sigchld_owner_libxl_always_selective_reap,
 } libxl_sigchld_owner;
 
 typedef struct {
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b2325e0..16e17f6 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
@@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
     int e = libxl__self_pipe_eatall(selfpipe);
     if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
 
+    if (CTX->childproc_hooks->chldowner
+        == libxl_sigchld_owner_libxl_always_selective_reap) {
+        childproc_checkall(egc);
+        return;
+    }
+
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
         pid_t pid = checked_waitpid(egc, -1, &status);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qeW-0007WF-14; Thu, 16 Jan 2014 17:22:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeU-0007W0-1O
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:46 +0000
Received: from [85.158.137.68:17710] by server-5.bemta-3.messagelabs.com id
	FD/AE-25188-56518D25; Thu, 16 Jan 2014 17:22:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389892963!9608504!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6407 invoked from network); 16 Jan 2014 17:22:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="93562657"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:42 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeK-0005uG-97;
	Thu, 16 Jan 2014 17:22:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeI-0002DV-LD;
	Thu, 16 Jan 2014 17:22:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:19 +0000
Message-ID: <1389892942-8452-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 17:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3qec-0007YZ-GK; Thu, 16 Jan 2014 17:22:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3qeW-0007WQ-Uo
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 17:22:49 +0000
Received: from [85.158.139.211:15432] by server-1.bemta-5.messagelabs.com id
	BD/C4-21065-86518D25; Thu, 16 Jan 2014 17:22:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389892965!7510663!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28388 invoked from network); 16 Jan 2014 17:22:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 17:22:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,668,1384300800"; d="scan'208";a="91459285"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Jan 2014 17:22:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 12:22:46 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeJ-0005uD-2e;
	Thu, 16 Jan 2014 17:22:35 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3qeH-0002DQ-9V;
	Thu, 16 Jan 2014 17:22:33 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 16 Jan 2014 17:22:18 +0000
Message-ID: <1389892942-8452-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/7] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Clarify that libxl_sigchld_owner_libxl causes libxl to reap all the
process's children, and clarify the wording of the description of
libxl_sigchld_owner_libxl_always.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 6261f99..4221d5a 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -467,7 +467,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 
 
 typedef enum {
-    /* libxl owns SIGCHLD whenever it has a child. */
+    /* libxl owns SIGCHLD whenever it has a child, and reaps
+     * all children. */
     libxl_sigchld_owner_libxl,
 
     /* Application promises to call libxl_childproc_exited but NOT
@@ -476,7 +477,7 @@ typedef enum {
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
-     * relying on libxl's event loop for reaping its own children. */
+     * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
 } libxl_sigchld_owner;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 19:26:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 19:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3sZN-0006FD-I4; Thu, 16 Jan 2014 19:25:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3sYx-0006F3-5U
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 19:25:32 +0000
Received: from [85.158.139.211:58289] by server-7.bemta-5.messagelabs.com id
	7A/F0-04824-61238D25; Thu, 16 Jan 2014 19:25:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389900307!10041886!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2056 invoked from network); 16 Jan 2014 19:25:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 19:25:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,669,1384300800"; d="scan'208";a="93615170"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 19:25:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 14:25:06 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3sYr-0006Vl-GZ;
	Thu, 16 Jan 2014 19:25:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W3sYp-0002NP-UX;
	Thu, 16 Jan 2014 19:25:03 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21208.12814.360247.15168@mariner.uk.xensource.com>
Date: Thu, 16 Jan 2014 19:25:02 +0000
To: <xen-devel@lists.xensource.com>, Ian Campbell <ian.campbell@citrix.com>,
	Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [RFC PATCH 0/7] libxl: fork: Selective reaping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[RFC PATCH 0/7] libxl: fork: Selective reaping"):
> libvirt reaps its children synchronously and has no central pid
> registry and no dispatch mechanism.  libxl does have a pid registry so
> can provide a selective reaping facility, but that is not currently exposed.
> 
> NB that I have compiled this series but I have NOT EXECUTED IT.
> The most plausible test environment is a suitably modified libvirt.
> 
>  1/7 libxl: fork: Break out checked_waitpid
>  2/7 libxl: fork: Break out childproc_reaped_ours
>  3/7 libxl: fork: Clarify docs for libxl_sigchld_owner
>  4/7 libxl: fork: assert that chldmode is right
>  5/7 libxl: fork: Provide libxl_childproc_sigchld_occurred
>  6/7 libxl: fork: Provide ..._always_selective_reap
>  7/7 libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP

I should say, to Jim: I think that with this series applied, simply
having libvirt pass libxl_sigchld_owner_libxl_always_selective_reap
should be sufficient for everything to work.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 19:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 19:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3skL-0006sK-9U; Thu, 16 Jan 2014 19:36:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3skJ-0006sF-Kj
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 19:36:56 +0000
Received: from [193.109.254.147:64702] by server-4.bemta-14.messagelabs.com id
	B3/6E-03916-6D438D25; Thu, 16 Jan 2014 19:36:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389901011!11403980!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19344 invoked from network); 16 Jan 2014 19:36:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 19:36:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,669,1384300800"; d="scan'208";a="93620555"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 19:36:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 14:36:50 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3skD-0006Za-Sh;
	Thu, 16 Jan 2014 19:36:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3sk4-0002HD-KW;
	Thu, 16 Jan 2014 19:36:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24397-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 19:36:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24397: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8251524696010513739=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8251524696010513739==
Content-Type: text/plain

flight 24397 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24397/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     10 guest-saverestore           fail pass in 24387
 test-amd64-amd64-pair        16 guest-start                 fail pass in 24387
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24387 pass in 24397
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 24387 pass in 24397

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24311
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24315
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24315
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24387 like 24315

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24387 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24387 never pass

version targeted for testing:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
baseline version:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937

------------------------------------------------------------
People who touched revisions under test:
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Honggang Li <honli@redhat.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Michael Chan <mchan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nat Gurumoorthy <natg@google.com>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Salva Peiró <speiro@ai2.upv.es>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Simon Horman <horms+renesas@verge.net.au>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ branch=linux-3.4
+ revision=4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922:tested/linux-3.4
Counting objects: 231, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (167/167), 28.67 KiB, done.
Total 167 (delta 139), reused 167 (delta 139)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   94f578e..4b9c8e9  4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922 -> tested/linux-3.4
+ exit 0


--===============8251524696010513739==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8251524696010513739==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 19:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 19:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3skL-0006sK-9U; Thu, 16 Jan 2014 19:36:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3skJ-0006sF-Kj
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 19:36:56 +0000
Received: from [193.109.254.147:64702] by server-4.bemta-14.messagelabs.com id
	B3/6E-03916-6D438D25; Thu, 16 Jan 2014 19:36:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389901011!11403980!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19344 invoked from network); 16 Jan 2014 19:36:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 19:36:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,669,1384300800"; d="scan'208";a="93620555"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 19:36:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 14:36:50 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3skD-0006Za-Sh;
	Thu, 16 Jan 2014 19:36:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3sk4-0002HD-KW;
	Thu, 16 Jan 2014 19:36:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24397-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 19:36:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24397: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8251524696010513739=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8251524696010513739==
Content-Type: text/plain

flight 24397 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24397/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     10 guest-saverestore           fail pass in 24387
 test-amd64-amd64-pair        16 guest-start                 fail pass in 24387
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24387 pass in 24397
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 24387 pass in 24397

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24311
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24315
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24315
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24387 like 24315

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24387 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24387 never pass

version targeted for testing:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
baseline version:
 linux                94f578e6aba14bb2aeb00db2e7f6e5f704fee937

------------------------------------------------------------
People who touched revisions under test:
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  David S. Miller <davem@davemloft.net>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Honggang Li <honli@redhat.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Michael Chan <mchan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nat Gurumoorthy <natg@google.com>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Salva Peiró <speiro@ai2.upv.es>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Simon Horman <horms+renesas@verge.net.au>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ branch=linux-3.4
+ revision=4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922:tested/linux-3.4
Counting objects: 231, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (167/167), 28.67 KiB, done.
Total 167 (delta 139), reused 167 (delta 139)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   94f578e..4b9c8e9  4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922 -> tested/linux-3.4
+ exit 0


--===============8251524696010513739==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8251524696010513739==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 20:24:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 20:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3tTd-0000sk-Nl; Thu, 16 Jan 2014 20:23:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1W3tTc-0000sd-2u
	for xen-devel@lists.xen.org; Thu, 16 Jan 2014 20:23:44 +0000
Received: from [85.158.137.68:41226] by server-4.bemta-3.messagelabs.com id
	12/4C-10414-FCF38D25; Thu, 16 Jan 2014 20:23:43 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-9.tower-31.messagelabs.com!1389903812!8459345!1
X-Originating-IP: [209.85.214.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	RCVD_BY_IP,spamassassin: ,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNjkxODYgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13768 invoked from network); 16 Jan 2014 20:23:33 -0000
Received: from mail-ob0-f172.google.com (HELO mail-ob0-f172.google.com)
	(209.85.214.172)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 20:23:33 -0000
Received: by mail-ob0-f172.google.com with SMTP id vb8so3364013obc.31
	for <xen-devel@lists.xen.org>; Thu, 16 Jan 2014 12:23:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=/ao97U3HB+el3aGxoP8zPhCLgLjZdCBbPa4H6UgLTSM=;
	b=ZIwTV2F6aVeFGFJ15BXSSgJdVG3liFaIdtmJApQmetqLoh0ZVOhf3HYFhtX3LBgLoB
	6CrLSfkxVwbwGfUNO+j9gBYPuFd9dnJmiALYuNrfyF9tiDkkrKv11FSia53CttdMTSCv
	tOqWoxNx2+9WPOmzo9p0BRM0ioi9bePGf1pdiumlyzQg6Faz+Wx7unMIwbK23EMc+G5o
	kC16Spn2KyuqG4/PU5X4bOf2naqtxMTNMcw6Vkj3uiXYvyCbRlPvsgoy+7ayMsga9oxb
	yZvJyotUzlhrPgRJZWj2RpSduP6RLcff5u4mSPWEZjyJ6ZL+6G1UaMZVWbWeIWW5KGZy
	PJOA==
X-Gm-Message-State: ALoCoQm67Orf6FAm0myM9h5J2kQrdhQ78MAGU+D3Zx/TV824nL4xQsVgKv51CUPD6ar9JS6+9DTm
MIME-Version: 1.0
X-Received: by 10.60.165.2 with SMTP id yu2mr2428581oeb.78.1389903811723; Thu,
	16 Jan 2014 12:23:31 -0800 (PST)
Received: by 10.182.120.10 with HTTP; Thu, 16 Jan 2014 12:23:31 -0800 (PST)
In-Reply-To: <1387334265.3880.87.camel@Solace>
References: <1387044943-5325-1-git-send-email-jtweaver@hawaii.edu>
	<1387334265.3880.87.camel@Solace>
Date: Thu, 16 Jan 2014 10:23:31 -1000
Message-ID: <CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
From: Justin Weaver <jtweaver@hawaii.edu>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	Henri Casanova <henric@hawaii.edu>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario,

Sorry for disappearing for so long ... I'm back and ready to continue working.

> A quick note about timing, which is probably pretty bad. :-( This is
> absolutely not your fault, but we are working on releasing Xen 4.4 at
> the beginning of 2014, so, until then, most of the focus will be on
> bugfixing rather than on implementing and reviewing new features.

Should be OK. I just hope I can get some of my code committed before
the end of my school semester.

> That being said, about the code...
>
> On sab, 2013-12-14 at 08:15 -1000, Justin Weaver wrote:
>> Modified function runq_candidate in the credit 2 scheduler to
>> have it consider hard and soft affinity when choosing the next
>> vCPU from the run queue to run on the given pCPU.
>>
> Ok, and the question is then, is that enough for implementing hard and
> soft affinities? By 'that' I mean, 'modifying runq_candidate'. Or do we
> need to do something else, in some other places?
>
> Notice that I'm not saying things actually are in one way or the other
> (although, I do think that this is not enough: e.g., what about
> choose_cpu() ?). I'm rather saying that I think this information should
> be present in the changelog. :-)

Other functions will need to change, but currently with only one run
queue, only runq_candidate needed to change. I'll look through the
others again with the mindset that we (or maybe I) will fix the issue
that is causing only one run queue to be created despite having
multiple cores/sockets available.

>> Function now chooses the vCPU with the most credit that has hard affinity
>> and maybe soft affinity for the given pCPU. If it does not have soft affinity
>> and there is another vCPU that prefers to run on the given pCPU, then as long
>> as it has at least a certain amount of credit (currently defined as half of
>> CSCHED_CREDIT_INIT, but more testing is needed to determine the best value)
>> then it is chosen instead.
>>
> Ok, so, why this 'certain amount of credit' thing? I got the technical
> details of it from the code below, but can you spend a few words on why
> and how you think something like this would be required and/or useful?

Without a solution like this, I believe soft affinity would be ignored:
the next vCPU picked to run on the given pCPU would always be the one
on the run queue with the most credit (as it is now), whether or not it
prefers to run on that pCPU. My thinking (for example) is that it might
be better to pick the vCPU with the second-most credit that actually
prefers to run on the given pCPU over the one with more credit but no
soft affinity preference (or a preference for a different pCPU).
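
To make the intended behaviour concrete, here is a standalone sketch of
the selection logic (plain C, with bitmasks standing in for cpumasks and
a credit-sorted array standing in for the run queue; the names and the
structure are illustrative stand-ins, not the actual patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the Xen structures: affinity masks become
 * plain bitmasks, and the credit-ordered runqueue becomes an array
 * sorted by credit, descending. */
struct vcpu {
    int credit;
    unsigned long hard_affinity;  /* bit N set => may run on pCPU N */
    unsigned long soft_affinity;  /* bit N set => prefers pCPU N */
};

/* Stand-in for CSCHED_MIN_CREDIT_PREFER_SA. */
#define MIN_CREDIT_PREFER_SA 5

/* Pick the next vcpu for pCPU 'cpu': the highest-credit vcpu whose hard
 * affinity allows 'cpu', unless a lower-credit vcpu that prefers 'cpu'
 * still has at least MIN_CREDIT_PREFER_SA credit, in which case the
 * latter is chosen instead. */
static struct vcpu *pick(struct vcpu *runq, size_t n, int cpu)
{
    struct vcpu *snext = NULL;
    bool found_hard = false;

    for (size_t i = 0; i < n; i++) {
        struct vcpu *svc = &runq[i];

        /* Hard affinity forbids this pCPU: skip entirely. */
        if (!(svc->hard_affinity & (1UL << cpu)))
            continue;

        /* First hard-affine entry = highest credit (queue is sorted). */
        if (!found_hard) {
            snext = svc;
            found_hard = true;
        }

        /* Too little credit left to keep considering soft affinity. */
        if (svc->credit < MIN_CREDIT_PREFER_SA)
            break;

        /* This vcpu prefers the given pCPU: take it. */
        if (svc->soft_affinity & (1UL << cpu))
            return svc;
    }
    return snext;
}
```

So a soft-affine vCPU can displace a higher-credit one, but only while
its credit stays above the threshold; below it, the scan stops and the
plain highest-credit hard-affine vCPU wins.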

> Oh, and still about the process, no matter how simple it is or will turn
> out to be, I'd send at least two patches, one for hard affinity and the
> other one for soft affinity. That would make the whole thing a lot
> easier to both review (right now) and understand (in future, when
> looking at git log).

Understood, will do.

>> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
>> index 4e68375..d337cdd 100644
>> --- a/xen/common/sched_credit2.c
>> +++ b/xen/common/sched_credit2.c
>> @@ -116,6 +116,10 @@
>>   * ATM, set so that highest-weight VMs can only run for 10ms
>>   * before a reset event. */
>>  #define CSCHED_CREDIT_INIT          MILLISECS(10)
>> +/* Minimum amount of credit needed for a vcpu with soft
>> +   affinity for a given cpu to be picked from the run queue
>> +   over a vcpu with more credit but only hard affinity. */
>> +#define CSCHED_MIN_CREDIT_PREFER_SA MILLISECS(5)
>>
> As said above, what is this buying us? What's the big idea behind it?

I believe I answered above, but I can clarify further if necessary.

>
>>  /* Carryover: How much "extra" credit may be carried over after
>>   * a reset. */
>>  #define CSCHED_CARRYOVER_MAX        CSCHED_MIN_TIMER
>> @@ -1615,6 +1619,7 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>  {
>>      struct list_head *iter;
>>      struct csched_vcpu *snext = NULL;
>> +    bool_t found_snext_w_hard_affinity = 0;
>>
>>      /* Default to current if runnable, idle otherwise */
>>      if ( vcpu_runnable(scurr->vcpu) )
>> @@ -1626,6 +1631,11 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>      {
>>          struct csched_vcpu * svc = list_entry(iter, struct csched_vcpu, runq_elem);
>>
>> +        /* If this is not allowed to run on this processor based on its
>> +         * hard affinity mask, continue to the next vcpu on the run queue */
>> +        if ( !cpumask_test_cpu(cpu, &svc->cpu_hard_affinity) )
>> +            continue;
>> +
> And, as mentioned above already too, if we don't have hard affinity with
> this pCPU, how did we get on this runqueue? Obviously, I know how we got
> here in the present situation... Actually, that's exactly what I meant
> when saying that there is probably more effort needed somewhere else, to
> avoid, as much as possible, a vCPU landing in the runqueue of a pCPU
> that is outside of its hard affinity (and soft too, of course).

Right, and I'll further examine runq_assign knowing that the single
run queue issue will eventually be fixed.

>
>>          /* If this is on a different processor, don't pull it unless
>>           * its credit is at least CSCHED_MIGRATE_RESIST higher. */
>>          if ( svc->vcpu->processor != cpu
>> @@ -1633,13 +1643,29 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>              continue;
>>
>>          /* If the next one on the list has more credit than current
>> -         * (or idle, if current is not runnable), choose it. */
>> -        if ( svc->credit > snext->credit )
>> +         * (or idle, if current is not runnable), choose it. Only need
>> +         * to do this once since run queue is in credit order. */
>> +        if ( !found_snext_w_hard_affinity
>> +             && svc->credit > snext->credit )
>> +        {
>> +            snext = svc;
>> +            found_snext_w_hard_affinity = 1;
>> +        }
>> +
> Ok, this is probably the right thing for hard affinity. However...
>
>> +        /* Is there enough credit left in this vcpu to continue
>> +         * considering soft affinity? */
>> +        if ( svc->credit < CSCHED_MIN_CREDIT_PREFER_SA )
>> +            break;
>> +
>> +        /* Does this vcpu prefer to run on this cpu? */
>> +        if ( !cpumask_full(&svc->cpu_soft_affinity)
>> +             && cpumask_test_cpu(cpu, &svc->cpu_soft_affinity) )
>>              snext = svc;
>> +        else
>> +            continue;
>>
> ... No matter the effect of CSCHED_MIN_CREDIT_PREFER_SA, I wonder
> whether we're interfering too much with the credit2 algorithm.
>
> Consider for example the situation where all pCPUs but one are busy, and
> assume we have a bunch of vCPUs, at the head of the free pCPU's
> runqueue, with a great amount of credit, but without soft affinity for
> that pCPU. OTOH, there might be vCPUs with far fewer credits, but with
> soft affinity for it, and we'd be letting the latter run when it is
> the former that should have, wouldn't we?
>
> Of course, it depends on their credits being greater than
> CSCHED_MIN_CREDIT_PREFER_SA, but still, this does not look like the
> right approach to me, at least not at this stage.
>
> What I think I'd try to do is as follows:
>  1) try as hard as possible to make sure that each vCPU is in a runqueue
>     belonging to at least one of the pCPUs it has hard affinity with
>  2) try hard (a bit less hard than for 1) is fine) to make sure that
>     each vCPU is in a runqueue belonging to at least one of the pCPUs it
>     has soft affinity with
>  3) when scheduling (in runq_candidate), scan the runqueue in credit
>     order and pick up the first vCPU that has hard affinity with the
>     pCPU being considered (as you're also doing), but forgetting about
>     soft affinity.
>
> Once that is done, we could look at introducing something like
> CSCHED_MIN_CREDIT_PREFER_SA as an optimization, and see how it
> performs.
>
> Still as optimizations, we can try to do something clever wrt 1) and 2):
> e.g., instead of making sure a vCPU lands in a runqueue belonging to at
> least one pCPU in its affinity mask, we could try to put the vCPU in the
> runqueue with the biggest intersection between its pCPUs and the domain's
> affinity, to maximize the probability of the scheduling being quick
> enough... But again, this can come later.
>
> So, does all this make sense?

Yes, that all makes sense. Thank you for the feedback. I think my work
will be more useful if I can also get the system to start creating
multiple run queues, as it should, despite the note in the credit2
comments that there is currently only one queue.
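
For concreteness, the "biggest intersection" placement heuristic
suggested above could look roughly like this (a standalone sketch with
plain bitmasks standing in for cpumasks and an array of runqueue pCPU
sets; the names are hypothetical, none of this is actual Xen code):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the suggested placement heuristic: given each runqueue's
 * set of pCPUs (as a plain bitmask) and a vCPU's affinity mask, choose
 * the runqueue whose pCPU set overlaps the affinity the most.
 * __builtin_popcountl (GCC/Clang) counts the set bits. */
static int pick_runq(const unsigned long *runq_cpus, size_t nr_runqs,
                     unsigned long affinity)
{
    int best = -1;
    int best_overlap = -1;

    for (size_t i = 0; i < nr_runqs; i++) {
        /* Size of the intersection between this runqueue and the affinity. */
        int overlap = __builtin_popcountl(runq_cpus[i] & affinity);

        if (overlap > best_overlap) {
            best_overlap = overlap;
            best = (int)i;
        }
    }
    return best;  /* index of the runqueue with the biggest intersection */
}
```

Picking the runqueue with the largest overlap maximizes the chance that
runq_candidate finds a hard-affine vCPU near the head of the queue, so
the per-schedule scan stays short.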

> Thanks again for your work and Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>

Justin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Date: Thu, 16 Jan 2014 10:23:31 -1000
Message-ID: <CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
From: Justin Weaver <jtweaver@hawaii.edu>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	Henri Casanova <henric@hawaii.edu>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario,

Sorry for disappearing for so long ... I'm back and ready to continue working.

> A quick note about timing, which is probably pretty bad. :-( This is
> absolutely not your fault, but we are working on releasing Xen 4.4 at
> the beginning of 2014, so, until then, most of the focus would be on
> bugfixing, rather than implementing and reviewing new features.

Should be OK. I just hope I can get some of my code committed before
the end of my school semester.

> That being said, about the code...
>
> On sab, 2013-12-14 at 08:15 -1000, Justin Weaver wrote:
>> Modified function runq_candidate in the credit 2 scheduler to
>> have it consider hard and soft affinity when choosing the next
>> vCPU from the run queue to run on the given pCPU.
>>
> Ok, and the question is then, is that enough for implementing hard and
> soft affinities? By 'that' I mean, 'modifying runq_candidate'. Or do we
> need to do something else, in some other places?
>
> Notice that I'm not saying things actually are in one way or the other
> (although I do think that this is not enough: e.g., what about
> choose_cpu()?). I'm rather saying that I think this information should
> be present in the changelog. :-)

Other functions will need to change, but currently, with only one run
queue, only runq_candidate needed to change. I'll look through the
others again with the mindset that we (or maybe I) will fix the issue
that is causing only one run queue to be created despite multiple
cores/sockets being available.

>> Function now chooses the vCPU with the most credit that has hard affinity
>> and maybe soft affinity for the given pCPU. If it does not have soft affinity
>> and there is another vCPU that prefers to run on the given pCPU, then as long
>> as it has at least a certain amount of credit (currently defined as half of
>> CSCHED_CREDIT_INIT, but more testing is needed to determine the best value)
>> it is chosen instead.
>>
> Ok, so, why this 'certain amount of credit' thing? I got the technical
> details of it from the code below, but can you spend a few words on why
> and how you think something like this would be required and/or useful?

Without a solution like this, I believe soft affinity would be ignored.
The next vCPU picked to run on the given pCPU would always be the one
on the run queue with the most credit (as it is now), whether it
prefers to run on that pCPU or not. My thinking (for example) is that
it might be better to pick the vCPU with the second-most credit that
actually prefers to run on the given pCPU over the one with more
credit but no soft affinity preference (or a preference for a
different pCPU).
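To illustrate, here is a simplified model of that selection logic. All names here are hypothetical, plain bitmasks stand in for Xen's cpumask API, and the run queue is assumed to be sorted by credit, highest first; the real patch of course operates on the credit2 run queue structures.

```c
#include <assert.h>

#define MIN_CREDIT_PREFER_SA 5   /* stand-in for CSCHED_MIN_CREDIT_PREFER_SA */

struct vcpu_model {
    int credit;
    unsigned hard_mask;   /* bit i set => vCPU may run on pCPU i */
    unsigned soft_mask;   /* bit i set => vCPU prefers pCPU i    */
};

/* Return the index of the vCPU to run on 'cpu', or -1 if none may. */
static int pick_next(const struct vcpu_model *q, int n, int cpu)
{
    int snext = -1;

    for (int i = 0; i < n; i++) {
        /* Hard affinity: skip vCPUs not allowed on this pCPU. */
        if (!(q[i].hard_mask & (1u << cpu)))
            continue;

        /* The first eligible vCPU has the most credit (queue order). */
        if (snext == -1)
            snext = i;

        /* Below the threshold, stop considering soft affinity. */
        if (q[i].credit < MIN_CREDIT_PREFER_SA)
            break;

        /* Prefer a sufficiently-credited vCPU with soft affinity here. */
        if (q[i].soft_mask & (1u << cpu)) {
            snext = i;
            break;
        }
    }
    return snext;
}

/* Example queue, sorted by credit. */
static const struct vcpu_model demo_q[4] = {
    { 10, 0x2, 0x0 },  /* hard affinity for pCPU 1 only           */
    {  8, 0x3, 0x0 },  /* no soft preference                      */
    {  6, 0x3, 0x1 },  /* prefers pCPU 0                          */
    {  3, 0x1, 0x1 },  /* prefers pCPU 0, but below the threshold */
};
```

With this queue, pCPU 0 picks the third entry (credit 6, soft affinity for pCPU 0) over the higher-credit second entry, while the below-threshold last entry could never displace a higher-credit candidate.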

> Oh, and still about the process, no matter how simple it is or will turn
> out to be, I'd send at least two patches, one for hard affinity and the
> other one for soft affinity. That would make the whole thing a lot
> easier to both review (right now) and understand (in future, when
> looking at git log).

Understood, will do.

>> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
>> index 4e68375..d337cdd 100644
>> --- a/xen/common/sched_credit2.c
>> +++ b/xen/common/sched_credit2.c
>> @@ -116,6 +116,10 @@
>>   * ATM, set so that highest-weight VMs can only run for 10ms
>>   * before a reset event. */
>>  #define CSCHED_CREDIT_INIT          MILLISECS(10)
>> +/* Minimum amount of credit needed for a vcpu with soft
>> +   affinity for a given cpu to be picked from the run queue
>> +   over a vcpu with more credit but only hard affinity. */
>> +#define CSCHED_MIN_CREDIT_PREFER_SA MILLISECS(5)
>>
> As said above, what is this buying us? What's the big idea behind it?

I believe I answered above, but I can clarify further if necessary.

>
>>  /* Carryover: How much "extra" credit may be carried over after
>>   * a reset. */
>>  #define CSCHED_CARRYOVER_MAX        CSCHED_MIN_TIMER
>> @@ -1615,6 +1619,7 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>  {
>>      struct list_head *iter;
>>      struct csched_vcpu *snext = NULL;
>> +    bool_t found_snext_w_hard_affinity = 0;
>>
>>      /* Default to current if runnable, idle otherwise */
>>      if ( vcpu_runnable(scurr->vcpu) )
>> @@ -1626,6 +1631,11 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>      {
>>          struct csched_vcpu * svc = list_entry(iter, struct csched_vcpu, runq_elem);
>>
>> +        /* If this is not allowed to run on this processor based on its
>> +         * hard affinity mask, continue to the next vcpu on the run queue */
>> +        if ( !cpumask_test_cpu(cpu, &svc->cpu_hard_affinity) )
>> +            continue;
>> +
> And, as mentioned above already too, if we don't have hard affinity with
> this pCPU, how did we get on this runqueue? Obviously, I know how we got
> here in the present situation... Actually, that's exactly what I meant
> when saying that there is probably more effort needed somewhere else, to
> avoid, as much as possible, a vCPU landing in the runqueue of a pCPU
> which is outside of its hard affinity (and soft too, of course).

Right, and I'll further examine runq_assign knowing that the single
run queue issue will eventually be fixed.

>
>>          /* If this is on a different processor, don't pull it unless
>>           * its credit is at least CSCHED_MIGRATE_RESIST higher. */
>>          if ( svc->vcpu->processor != cpu
>> @@ -1633,13 +1643,29 @@ runq_candidate(struct csched_runqueue_data *rqd,
>>              continue;
>>
>>          /* If the next one on the list has more credit than current
>> -         * (or idle, if current is not runnable), choose it. */
>> -        if ( svc->credit > snext->credit )
>> +         * (or idle, if current is not runnable), choose it. Only need
>> +         * to do this once since run queue is in credit order. */
>> +        if ( !found_snext_w_hard_affinity
>> +             && svc->credit > snext->credit )
>> +        {
>> +            snext = svc;
>> +            found_snext_w_hard_affinity = 1;
>> +        }
>> +
> Ok, this is probably the right thing for hard affinity. However...
>
>> +        /* Is there enough credit left in this vcpu to continue
>> +         * considering soft affinity? */
>> +        if ( svc->credit < CSCHED_MIN_CREDIT_PREFER_SA )
>> +            break;
>> +
>> +        /* Does this vcpu prefer to run on this cpu? */
>> +        if ( !cpumask_full(svc->cpu_soft_affinity)
>> +             && cpumask_test_cpu(cpu, &svc->cpu_soft_affinity) )
>>              snext = svc;
>> +        else
>> +            continue;
>>
> ... No matter the effect of CSCHED_MIN_CREDIT_PREFER_SA, I wonder
> whether we're interfering too much with the credit2 algorithm.
>
> Consider for example the situation where all but one pCPUs are busy, and
> assume we have a bunch of vCPUs, at the head of the free pCPU's
> runqueue, with a great amount of credit, but without soft affinity for
> the pCPU. OTOH, there might be vCPUs with way fewer credits, but with
> soft affinity there, and we'd be letting the latter run while it was
> the former that should have, wouldn't we?
>
> Of course, it depends on their credits being greater than
> CSCHED_MIN_CREDIT_PREFER_SA, but still, this does not look like the
> right approach to me, at least not at this stage.
>
> What I think I'd try to do is as follows:
>  1) try as hard as possible to make sure that each vCPU is in a runqueue
>     belonging to at least one of the pCPUs it has hard affinity with
>  2) try hard (being a bit less strict than in 1) is fine) to make sure
>     that each vCPU is in a runqueue belonging to at least one of the
>     pCPUs it has soft affinity with
>  3) when scheduling (in runq_candidate), scan the runqueue in credit
>     order and pick up the first vCPU that has hard affinity with the
>     pCPU being considered (as you're also doing), but forgetting about
>     soft affinity.
>
> Once that is done, we could look at introducing something like
> CSCHED_MIN_CREDIT_PREFER_SA, as an optimization, and see how it
> performs.
>
> Still as optimizations, we can try to do something clever wrt 1) and 2),
> e.g., instead of making sure a vCPU lands in a runqueue belonging to at
> least one pCPU in its affinity mask, we could try to put the vCPU in the
> runqueue with the biggest intersection between its pCPUs and the domain's
> affinity, to maximize the probability of the scheduling being quick
> enough... But again, this can come later.
>
> So, does all this make sense?

Yes, that all makes sense. Thank you for the feedback. I think my work
will be more useful if I can also get the system to create multiple
run queues, as it should, despite the note in the credit2 code
comments that there is currently only one queue.
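For what it's worth, the intersection-based placement Dario suggests above could be sketched roughly like this (hypothetical helper names, with plain bitmasks rather than Xen's cpumask API):

```c
#include <assert.h>

/* Count the set bits in a mask (clearing the lowest set bit each
 * iteration). */
static int popcount(unsigned x)
{
    int n = 0;
    for (; x; x &= x - 1)
        n++;
    return n;
}

/* Return the index of the runqueue with the largest overlap between
 * its pCPUs and the given affinity mask, or -1 if no runqueue shares
 * any pCPU with it. */
static int best_runq(const unsigned *runq_cpus, int nr_runqs, unsigned affinity)
{
    int best = -1, best_overlap = 0;

    for (int i = 0; i < nr_runqs; i++) {
        int overlap = popcount(runq_cpus[i] & affinity);
        if (overlap > best_overlap) {
            best_overlap = overlap;
            best = i;
        }
    }
    return best;
}

/* Two runqueues: one covering pCPUs 0-3, one covering pCPUs 4-7. */
static const unsigned demo_runqs[2] = { 0x0F, 0xF0 };
```

An affinity mask covering pCPUs 4-6 lands on the second runqueue, while a mask with no overlap at all returns -1, signalling that some fallback placement would be needed.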

> Thanks again for your work and Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>

Justin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 16 22:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 22:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3vjC-0008CN-31; Thu, 16 Jan 2014 22:47:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3vjB-0008CI-2H
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 22:47:57 +0000
Received: from [193.109.254.147:8943] by server-6.bemta-14.messagelabs.com id
	ED/0F-14958-C9168D25; Thu, 16 Jan 2014 22:47:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389912473!11331434!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8087 invoked from network); 16 Jan 2014 22:47:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 22:47:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93681457"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 22:47:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 17:47:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3vj5-0007VK-Gl;
	Thu, 16 Jan 2014 22:47:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3vj5-0003nT-GL;
	Thu, 16 Jan 2014 22:47:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24401-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 22:47:51 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24401: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1979429017597643445=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1979429017597643445==
Content-Type: text/plain

flight 24401 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24401/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             3 host-build-prep           fail REGR. vs. 24349

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail like 24403-bisect
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============1979429017597643445==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1979429017597643445==--

From xen-devel-bounces@lists.xen.org Thu Jan 16 22:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jan 2014 22:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3vjC-0008CN-31; Thu, 16 Jan 2014 22:47:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W3vjB-0008CI-2H
	for xen-devel@lists.xensource.com; Thu, 16 Jan 2014 22:47:57 +0000
Received: from [193.109.254.147:8943] by server-6.bemta-14.messagelabs.com id
	ED/0F-14958-C9168D25; Thu, 16 Jan 2014 22:47:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389912473!11331434!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8087 invoked from network); 16 Jan 2014 22:47:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 22:47:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93681457"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 22:47:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 17:47:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W3vj5-0007VK-Gl;
	Thu, 16 Jan 2014 22:47:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W3vj5-0003nT-GL;
	Thu, 16 Jan 2014 22:47:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24401-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 16 Jan 2014 22:47:51 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24401: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1979429017597643445=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1979429017597643445==
Content-Type: text/plain

flight 24401 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24401/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             3 host-build-prep           fail REGR. vs. 24349

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail like 24403-bisect
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============1979429017597643445==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1979429017597643445==--

From xen-devel-bounces@lists.xen.org Fri Jan 17 00:24:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 00:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3xE9-0004gF-4M; Fri, 17 Jan 2014 00:24:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W3xE7-0004gA-JM
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 00:23:59 +0000
Received: from [85.158.143.35:56056] by server-1.bemta-4.messagelabs.com id
	E9/82-02132-D1878D25; Fri, 17 Jan 2014 00:23:57 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389918236!442358!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7043 invoked from network); 17 Jan 2014 00:23:57 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-16.tower-21.messagelabs.com with SMTP;
	17 Jan 2014 00:23:57 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id E7E9A581D7E;
	Thu, 16 Jan 2014 16:23:55 -0800 (PST)
Date: Thu, 16 Jan 2014 16:23:55 -0800 (PST)
Message-Id: <20140116.162355.2015643029877352943.davem@davemloft.net>
To: paul.durrant@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1389807033-11105-1-git-send-email-paul.durrant@citrix.com>
References: <1389807033-11105-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: netdev@vger.kernel.org, boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: add support for
	IPv6 offloads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Paul Durrant <paul.durrant@citrix.com>
Date: Wed, 15 Jan 2014 17:30:33 +0000

> This patch adds support for IPv6 checksum offload and GSO when those
> features are available in the backend.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Applied.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 00:37:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 00:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3xQR-0005FI-VJ; Fri, 17 Jan 2014 00:36:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W3xQP-0005FD-WE
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 00:36:42 +0000
Received: from [85.158.139.211:23788] by server-12.bemta-5.messagelabs.com id
	75/60-30017-91B78D25; Fri, 17 Jan 2014 00:36:41 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389919000!10087697!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20355 invoked from network); 17 Jan 2014 00:36:40 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 00:36:40 -0000
Received: by mail-we0-f170.google.com with SMTP id u57so3968153wes.1
	for <xen-devel@lists.xen.org>; Thu, 16 Jan 2014 16:36:40 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=4wXhBVEK9bu6kQV4qJr65d7sAYxtH2FVJr6eGR3jehM=;
	b=GAhXSPJVF2TtGKuoE1BXVZSPTm3VfYTiIGB/7XNtebF7SRRPB6oW39of8UOWyvXHx3
	6/r1xLCzQ8bnOMyZKR7LmNeeFSghm1qvAf2IWSy7S2S6Qv36L+qaP/8DBjFjHx/aa5ii
	/5cWYqQjWGOS9sP2A8GJEHlCeZ2QIxGTUgjNpV0YYfldl2C0fg1rZABWDS/YcM7rHdcI
	6XsmnMCwCYEZBoyArnp0V8QO0h6iwF/hjma4NXU+ijnPpoZfHDxs3x0Lt1cBvGVtEbh0
	Lz3GIABAl4PMpf9EgrVg+/cp4/GS9nXbJJfTocDjV0X2x9K8nFkMou41MAMplAKX/DtJ
	EzkA==
X-Gm-Message-State: ALoCoQlX0DFq3rpwd2dQDxYEf8gY2510n34n28YY5epzfjs3bNWgBURqNEGXaofzqZh62MQWzJoY
X-Received: by 10.180.80.103 with SMTP id q7mr30118wix.14.1389919000069;
	Thu, 16 Jan 2014 16:36:40 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id ea4sm402202wib.7.2014.01.16.16.36.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 16 Jan 2014 16:36:39 -0800 (PST)
Message-ID: <52D87B15.5090208@linaro.org>
Date: Fri, 17 Jan 2014 00:36:37 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Nathan Whitehorn <nwhitehorn@freebsd.org>, 
 Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
In-Reply-To: <52D73C4E.2080306@freebsd.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>
> Thanks for the CC. Could you explain what you mean by "grant-parent"
> etc? "interrupt-parent" is a fundamental part of the way PAPR (and
> ePAPR) work, so I'm very very hesitant to start guessing. I think things
> have broken for you because some (a lot, actually) of OF code does not
> expect #interrupt-cells to be more than 2. Some APIs might need
> updating, which I'll try to take care of. Could you tell me what the
> difference between SPI and PPI is, by the way?

Sorry, I also made some typos in my explanation, so it was not clear.

interrupt-parent is a property in a device node which links this node to 
an interrupt controller (in our case the GIC controller).

Linux and the ePAPR handle it differently:
    - ePAPR (chapter 2.4) says:
The physical wiring of an interrupt source to an interrupt controller is 
represented in the device tree with the interrupt-parent property. Nodes 
that represent interrupt-generating devices contain an
interrupt-parent property which has a phandle value that points to the 
device to which the device's interrupts are routed, typically an 
interrupt controller. If an interrupt-generating device does not have
an interrupt-parent property, its interrupt parent is assumed to be its 
device tree parent.
 From my understanding, an interrupt-parent property is required on each 
device node that uses interrupts; if it doesn't exist, the device tree 
parent is assumed to be the interrupt parent.
Unless I'm mistaken, FreeBSD at least handles the interrupt-parent 
property this way.
    - The Linux implementation will look at the node; if the property 
doesn't exist, it will check whether an ancestor has it, and so on.

So the device tree below is valid on Linux, but not on FreeBSD:

/ {
   interrupt-parent = <&gic>;

   gic: gic@10
   {
     ...
   };

   timer@1
   {
     interrupts = <...>;
   };
};

Most shipped device trees use this trick.
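
The same tree can also be written so that both OSes accept it, by 
repeating the property on each interrupt-generating node. A hypothetical 
sketch (node names carried over from the example above):

/ {
   gic: gic@10
   {
     ...
   };

   timer@1
   {
     interrupt-parent = <&gic>;
     interrupts = <...>;
   };
};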

IanC: I was reading the Linux binding documentation 
(devicetree/booting-without-of.txt, VII.2) and it seems that the 
explanation differs from the implementation, right?

For the #interrupt-cells property, the problem starts in fdt_intr_to_rl 
(sys/dev/fdt/fdt_common.c:476): ofw_bus_map_intr is always called with 
only the first cell of the interrupt specifier, no matter how many cells 
#interrupt-cells declares.

> On the subject of simple-bus, they usually aren't necessary. For
> example, all hypervisor devices on IBM hardware live under /vdevice,
> which is attached to the device tree root. They don't use MMIO, so
> simple-bus doesn't really make sense. How does Xen communicate with the
> OS in these devices?
> -Nathan

As I understand it, only the simple-bus code (see simplebus_attach) 
translates a device's interrupts into a resource.
So if you have a node with interrupts and MMIO attached directly to the 
root node, the driver won't be able to retrieve and translate the 
interrupts via bus_alloc_resources.

In the Xen device tree, we have a hypervisor node attached directly to 
the root, which contains both the MMIO region and the interrupt that Xen 
uses to communicate with the guest.
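
For illustration, such a hypervisor node might look like the sketch 
below (the compatible string follows the Xen binding, but the reg and 
interrupts values here are made up):

hypervisor {
   compatible = "xen,xen";
   /* illustrative grant-table MMIO region */
   reg = <0xb0000000 0x20000>;
   /* illustrative event-channel PPI */
   interrupts = <1 15 0xf08>;
};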

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 01:26:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 01:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3yBy-00030u-N8; Fri, 17 Jan 2014 01:25:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W3yBx-00030p-9E
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 01:25:49 +0000
Received: from [85.158.137.68:35992] by server-3.bemta-3.messagelabs.com id
	45/FF-10658-C9688D25; Fri, 17 Jan 2014 01:25:48 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389921947!9661727!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30806 invoked from network); 17 Jan 2014 01:25:47 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-11.tower-31.messagelabs.com with SMTP;
	17 Jan 2014 01:25:47 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 1D8E3582459;
	Thu, 16 Jan 2014 17:25:46 -0800 (PST)
Date: Thu, 16 Jan 2014 17:25:45 -0800 (PST)
Message-Id: <20140116.172545.698957643976308902.davem@davemloft.net>
To: annie.li@oracle.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <52D7E1BB.8000706@oracle.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D7E1BB.8000706@oracle.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, david.vrabel@citrix.com,
	andrew.bennieston@citrix.com
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: annie li <annie.li@oracle.com>
Date: Thu, 16 Jan 2014 21:42:19 +0800

> What I thought is to split the implementation into two patches, this
> patch fixes the rx path resource leak(just like what tx path does),
> then a separate patch fixes gnttab_end_foreign_access_ref failure
> issue for both tx/rx through taking reference to the page before
> gnttab_end_foreign_access.
> If you'd like they are posted together, I will create new patch for
> the latter and then post them.:-)

That would probably work best.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 01:26:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 01:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3yBy-00030u-N8; Fri, 17 Jan 2014 01:25:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W3yBx-00030p-9E
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 01:25:49 +0000
Received: from [85.158.137.68:35992] by server-3.bemta-3.messagelabs.com id
	45/FF-10658-C9688D25; Fri, 17 Jan 2014 01:25:48 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389921947!9661727!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30806 invoked from network); 17 Jan 2014 01:25:47 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-11.tower-31.messagelabs.com with SMTP;
	17 Jan 2014 01:25:47 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 1D8E3582459;
	Thu, 16 Jan 2014 17:25:46 -0800 (PST)
Date: Thu, 16 Jan 2014 17:25:45 -0800 (PST)
Message-Id: <20140116.172545.698957643976308902.davem@davemloft.net>
To: annie.li@oracle.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <52D7E1BB.8000706@oracle.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D7E1BB.8000706@oracle.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, david.vrabel@citrix.com,
	andrew.bennieston@citrix.com
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: annie li <annie.li@oracle.com>
Date: Thu, 16 Jan 2014 21:42:19 +0800

> What I thought is to split the implementation into two patches: this
> patch fixes the rx path resource leak (just like the tx path does),
> and a separate patch fixes the gnttab_end_foreign_access_ref failure
> for both tx/rx by taking a reference to the page before calling
> gnttab_end_foreign_access.
> If you'd like them posted together, I will create a new patch for
> the latter and then post them. :-)

That would probably work best.
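The ordering annie li describes (pin the page, then revoke the grant) can be sketched with a simplified refcount model. The types below are illustrative stand-ins, not the real Linux get_page()/gnttab_end_foreign_access() API:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in types for illustration; the real fix uses struct page with
 * get_page()/put_page() and gnttab_end_foreign_access(). */
struct page { int refcount; };          /* -1 marks "freed" in this model */
struct grant { bool active; struct page *page; };

static void get_page(struct page *p) { p->refcount++; }

static void put_page(struct page *p)
{
    if (--p->refcount == 0)
        p->refcount = -1;               /* page is freed */
}

/* Ending foreign access revokes the grant and drops its page reference. */
static void end_foreign_access(struct grant *g)
{
    g->active = false;
    put_page(g->page);
}

/* The proposed fix: take our own reference *before* revoking the grant,
 * so the page cannot be freed while we still need it. */
static struct page *release_rx_buf(struct grant *g)
{
    get_page(g->page);                  /* pin the page first */
    end_foreign_access(g);              /* now safe to revoke the grant */
    return g->page;                     /* refcount >= 1, still usable */
}
```

Reversing the two calls in release_rx_buf is exactly the bug the second patch would address: the grant's reference could be the last one, freeing the page out from under the driver.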

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 02:16:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 02:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3yyp-0005eH-Gc; Fri, 17 Jan 2014 02:16:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3yyn-0005eC-Oh
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 02:16:17 +0000
Received: from [85.158.143.35:41546] by server-1.bemta-4.messagelabs.com id
	1B/D0-02132-17298D25; Fri, 17 Jan 2014 02:16:17 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389924975!12244695!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25278 invoked from network); 17 Jan 2014 02:16:16 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-3.tower-21.messagelabs.com with SMTP;
	17 Jan 2014 02:16:16 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga102.jf.intel.com with ESMTP; 16 Jan 2014 18:12:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="460214151"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 16 Jan 2014 18:16:14 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 16 Jan 2014 18:16:14 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 16 Jan 2014 18:16:13 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Fri, 17 Jan 2014 10:16:12 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: Status of Nested Virt in 4.4 (Was: Re: [Xen-devel] Xen 4.4
	development update)
Thread-Index: AQHPEyoXMu6lo7nxOkWoQ/Ns0loHnA==
Date: Fri, 17 Jan 2014 02:16:11 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
In-Reply-To: <52D7BC9E02000078001142D4@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-16:
>>>> On 16.01.14 at 10:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Thu, 2014-01-16 at 06:54 +0000, Zhang, Yang Z wrote:
>>> Can I say nested virtualization also is good supported in Xen 4.4?
>> 
>> Can you enumerate the scenarios which have been tested and which you
>> consider are "supported"?

As you know, it is hard to cover every nested scenario in testing. So currently I only cover the following L1 hypervisors: Xen, KVM, VMware Workstation, VMware ESX, Win7 XP Mode, and Hyper-V; the L2 guests are: xp_x86, xp_x64, win7_x86, win7_x64, win8_x86, win8_x64, rhel6_x86, rhel6_x64.
For L1 Xen, KVM, VMware Workstation, VMware ESX, and Win7 XP Mode, I can boot up all L2 guests.
For Hyper-V there are some known issues, but I can boot all L2 guests with my workaround patch.
Here are two topics being discussed on the mailing list which will block Hyper-V:
http://www.gossamer-threads.com/lists/xen/devel/310569?do=post_view_threaded
http://www.gossamer-threads.com/lists/xen/devel/299986
There are also two other issues for which I am working on patches.

BTW, all my testing is based on L2 EPT. There seem to be some issues with L2 shadow page tables that need deeper investigation.

As Andrew said, nested virtualization is still at an experimental stage, because there are still many scenarios I have not covered in my testing. So it may not be accurate to say it is well supported. But I hope people know that nested virtualization is ready to try now, and I encourage them to try it and report bugs to us to push it forward.

>> 
>> What do the hypervisor side maintainers think?
> 
> Indeed I'm not sure we're there yet, not the least considering the
> recent discussion between SVM and VMX folks about how to deal with a
> certain problem, where a regression on the SVM side was expected if
> the code would have got committed as is.
> 
> But in the end I think the VMX and SVM maintainers should have the
> final say on nVMX and nSVM support state.
> 
> Jan
> 
>> ISTR that not so long ago there were some quirks wrt not exposing
>> the feature to guests and crashing if they used it, and another one
>> a while back relating to a guest being able to enable nested virt on
>> itself regardless of the host administrator's settings. I suppose
>> they are both fixed but I wonder if the "command and control" side
>> of nested virt has had the same level of consideration and testing
>> as the actual functionality.
>> 
>> Ian.
> 
>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 02:53:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 02:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3zYj-0007X6-Ab; Fri, 17 Jan 2014 02:53:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W3zYi-0007X1-3S
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 02:53:24 +0000
Received: from [85.158.139.211:5019] by server-13.bemta-5.messagelabs.com id
	80/D0-11357-32B98D25; Fri, 17 Jan 2014 02:53:23 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389927201!7569439!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22594 invoked from network); 17 Jan 2014 02:53:22 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-206.messagelabs.com with SMTP;
	17 Jan 2014 02:53:22 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Jan 2014 18:53:20 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="440179175"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 16 Jan 2014 18:53:20 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 16 Jan 2014 18:53:20 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Fri, 17 Jan 2014 10:53:17 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Christoph Egger <chegger@amazon.de>
Thread-Topic: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction
	for retry processing
Thread-Index: AQHOvd1TT7LqR1t2qUWUIfiMLEuwe5paFDAQ//+FyoCAAIpBsP//mLYAgAn4BVCAFU4agIAAhs5w//+OMoAAOoMiMAAvg9oAAWCpcJD//7kmAP/+RL/w
Date: Fri, 17 Jan 2014 02:53:17 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BED16@SHSMSX104.ccr.corp.intel.com>
References: <52498FFC02000078000F8068@nat28.tlf.novell.com>
	<524991D902000078000F8087@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99612E@SHSMSX104.ccr.corp.intel.com>
	<52B16F67020000780010E8D8@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996202@SHSMSX104.ccr.corp.intel.com>
	<52B18CBC020000780010EA26@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A99BCF2@SHSMSX104.ccr.corp.intel.com>
	<52CBC8C10200007800110EFA@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A34C2@SHSMSX104.ccr.corp.intel.com>
	<52CBCC4F.8080500@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A4680@SHSMSX104.ccr.corp.intel.com>
	<52CE93D8.4080201@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE0A4@SHSMSX104.ccr.corp.intel.com>
	<52D7A5260200007800114253@nat28.tlf.novell.com>
In-Reply-To: <52D7A5260200007800114253@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	"Dong, Eddie" <eddie.dong@intel.com>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 5/5] x86/HVM: cache emulated instruction for
 retry processing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-16:
>>>> On 16.01.14 at 05:42, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Back to the IO emulation issue, your suggestion didn't solve the
>> problem. So how about my solution of not allowing a virtual
>> vmentry/vmexit while there is a pending, unfinished IO request?
> 
> Sounds pretty reasonable to me (i.e. in line with striving towards
> emulation being as close to real hardware behavior as possible).
> 

Ok. I will send it out later.
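The rule being agreed on here, deferring a virtual vmentry/vmexit while an emulated IO request is still in flight, amounts to a simple gate. The sketch below uses hypothetical stand-in types and is not the actual Xen nested-VMX code:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the per-vCPU emulated IO request state. */
enum ioreq_state { IOREQ_NONE, IOREQ_IN_FLIGHT, IOREQ_COMPLETED };

struct vcpu { enum ioreq_state ioreq; };

/* Gate a virtual vmentry/vmexit: permit the transition only when no
 * emulated IO request is still in flight; otherwise the caller should
 * defer and retry after IO emulation finishes. */
static bool nested_transition_allowed(const struct vcpu *v)
{
    return v->ioreq != IOREQ_IN_FLIGHT;
}
```

This mirrors real hardware, where an instruction's memory and port accesses complete before any VMX transition can occur.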

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 03:05:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 03:05:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W3zjk-0008A8-6H; Fri, 17 Jan 2014 03:04:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W3zji-0008A3-Ec
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 03:04:46 +0000
Received: from [193.109.254.147:46530] by server-6.bemta-14.messagelabs.com id
	4C/77-14958-DCD98D25; Fri, 17 Jan 2014 03:04:45 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389927883!11331571!1
X-Originating-IP: [144.92.197.222]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5335 invoked from network); 17 Jan 2014 03:04:44 -0000
Received: from wmauth2.doit.wisc.edu (HELO smtpauth2.wiscmail.wisc.edu)
	(144.92.197.222)
	by server-5.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	17 Jan 2014 03:04:44 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth2.wiscmail.wisc.edu by
	smtpauth2.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZI00E00YYQG100@smtpauth2.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Thu, 16 Jan 2014 21:04:43 -0600 (CST)
X-Spam-PmxInfo: Server=avs-2, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.17.25115,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from comporellon.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth2.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZI00CCNZ7TJI00@smtpauth2.wiscmail.wisc.edu>; Thu,
	16 Jan 2014 21:04:43 -0600 (CST)
Message-id: <52D89DC9.7050303@freebsd.org>
Date: Thu, 16 Jan 2014 21:04:41 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.0
To: Julien Grall <julien.grall@linaro.org>, Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org>
In-reply-to: <52D87B15.5090208@linaro.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/16/14 18:36, Julien Grall wrote:
>
>
> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>
>> Thanks for the CC. Could you explain what you mean by "grant-parent"
>> etc? "interrupt-parent" is a fundamental part of the way PAPR (and
>> ePAPR) work, so I'm very very hesitant to start guessing. I think things
>> have broken for you because some (a lot, actually) of OF code does not
>> expect #interrupt-cells to be more than 2. Some APIs might need
>> updating, which I'll try to take care of. Could you tell me what the
>> difference between SPI and PPI is, by the way?
>
> Sorry, I also made some typos in my explanation, so it was not clear.
>
> interrupt-parent is a property in a device node which links this node 
> to an interrupt controller (in our case the GIC controller).
>
> The way it is handled on Linux and in the ePAPR differs:
>    - ePAPR (chapter 2.4) says:
> The physical wiring of an interrupt source to an interrupt controller 
> is represented in the device tree with the interrupt-parent property. 
> Nodes that represent interrupt-generating devices contain an
> interrupt-parent property which has a phandle value that points to the 
> device to which the device's interrupts are routed, typically an 
> interrupt controller. If an interrupt-generating device does not have
> an interrupt-parent property, its interrupt parent is assumed to be 
> its device tree parent.
> From my understanding, it's mandatory to have an interrupt-parent 
> property on each device node which is using interrupts; if it doesn't 
> exist, the node's device tree parent is assumed to be the interrupt 
> parent. At least, FreeBSD handles the interrupt-parent property in 
> this way.
>    - The Linux implementation will look at the node; if the property 
> doesn't exist, it will check whether an ancestor has it ...



> So the device tree below is valid on Linux, but not on FreeBSD:
>
> / {
>   interrupt-parent = <&gic>;
>
>   gic: gic@10
>   {
>     ...
>   };
>
>   timer@1
>   {
>     interrupts = <...>;
>   };
> };
>
> Most shipped device trees use this trick.
>
> IanC: I was reading the linux binding documentation 
> (devicetree/booting-without-of.txt VII.2) and it seems that the 
> explanation differs from the implementation, right?
>
> For the #interrupt-cells property, the problem starts in 
> fdt_intr_to_rl (sys/dev/fdt/fdt_common.c:476). ofw_bus_map_intr is 
> always called with only the first cell of the interrupt, no matter 
> the number of cells specified by #interrupt-cells.

The specification is actually a little unclear on this point, but 
FreeBSD follows the same rules as Linux in any case. Most, if not all, 
FreeBSD code should check any ancestor at this point as well. In 
particular fdt_intr_to_rl does this. What it *doesn't* do is allow 
#interrupt-cells to be larger than 2. I'll fix this this weekend.
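The ancestor search both implementations converge on can be modeled with a small sketch (hypothetical node structure, not the actual OF/FDT code in either kernel):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for a device tree node. */
struct dt_node {
    struct dt_node *parent;             /* device tree parent, NULL at root */
    struct dt_node *interrupt_parent;   /* NULL when the property is absent */
};

/* Walk toward the root until a node with an explicit interrupt-parent
 * property is found, mirroring the ancestor search described above. */
static struct dt_node *resolve_interrupt_parent(struct dt_node *n)
{
    for (; n != NULL; n = n->parent)
        if (n->interrupt_parent != NULL)
            return n->interrupt_parent;
    return NULL;                        /* no controller found up the tree */
}
```

Under this lookup, the timer node in the earlier device tree inherits the gic controller from the interrupt-parent property on the root node.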

>> On the subject of simple-bus, they usually aren't necessary. For
>> example, all hypervisor devices on IBM hardware live under /vdevice,
>> which is attached to the device tree root. They don't use MMIO, so
>> simple-bus doesn't really make sense. How does Xen communicate with the
>> OS in these devices?
>> -Nathan
>
> As I understand it, only the simple-bus code (see simplebus_attach) 
> translates a device's interrupts into a resource.
> So if you have a node with interrupts and MMIO attached directly to 
> the root node, the driver won't be able to retrieve and translate 
> the interrupts via bus_alloc_resources.

Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.

> In the Xen device tree, we have a hypervisor node directly attached 
> to the root, which contains both the MMIO region and the interrupt 
> used by Xen to communicate with the guest.
>

OK. This should be fine, though simplebus would also work if you use MMIO.
-Nathan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/16/14 18:36, Julien Grall wrote:
>
>
> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>
>> Thanks for the CC. Could you explain what you mean by "grant-parent"
>> etc? "interrupt-parent" is a fundamental part of the way PAPR (and
>> ePAPR) work, so I'm very very hesitant to start guessing. I think things
>> have broken for you because some (a lot, actually) of OF code does not
>> expect #interrupt-cells to be more than 2. Some APIs might need
>> updating, which I'll try to take care of. Could you tell me what the
>> difference between SPI and PPI is, by the way?
>
> Sorry, I also made some typos in my explanation, so it was not clear.
>
> interrupt-parent is a property in a device node which links this node 
> to an interrupt controller (in our case the GIC controller).
>
> The way Linux and the ePAPR handle it differs:
>    - ePAPR (chapter 2.4) says:
> The physical wiring of an interrupt source to an interrupt controller 
> is represented in the device tree with the interrupt-parent property. 
> Nodes that represent interrupt-generating devices contain an
> interrupt-parent property which has a phandle value that points to the 
> device to which the device's interrupts are routed, typically an 
> interrupt controller. If an interrupt-generating device does not have
> an interrupt-parent property, its interrupt parent is assumed to be 
> its device tree parent.
> From my understanding, it's effectively mandatory to have an 
> interrupt-parent property on each device node that uses interrupts; 
> if it doesn't exist, the device tree parent is assumed to be the 
> interrupt parent.
> Unless I'm mistaken, FreeBSD at least handles the interrupt-parent 
> property this way.
>    - The Linux implementation looks at the node itself; if the 
> property doesn't exist there, it checks whether an ancestor has it, 
> and so on.

> So the device tree below is valid on Linux, but not on FreeBSD:
>
> / {
>   interrupt-parent = &gic
>
>   gic: gic@10
>   {
>     ...
>   }
>
>   timer@1
>   {
>     interrupts = <...>
>   }
> }
>
> Most shipped device trees use this trick.
>
> IanC: I was reading the linux binding documentation 
> (devicetree/booting-without-of.txt VII.2) and it seems that the 
> explanation differs from the implementation, right?
>
> For the #interrupt-cells property, the problem starts in 
> fdt_intr_to_rl (sys/dev/fdt/fdt_common.c:476). ofw_bus_map_intr is 
> always called with only the first cell of the interrupt specifier, 
> no matter the number of cells specified by #interrupt-cells.

The specification is actually a little unclear on this point, but 
FreeBSD follows the same rules as Linux in any case. Most, if not all, 
FreeBSD code should check any ancestor at this point as well. In 
particular fdt_intr_to_rl does this. What it *doesn't* do is allow 
#interrupt-cells to be larger than 2. I'll fix this this weekend.
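
The ancestor-walking rule discussed here can be sketched as follows; the structures and helper name are illustrative, not the actual FreeBSD or Linux API:

```c
#include <stddef.h>

/* Toy device-tree node, for illustration only.  "interrupt_parent"
 * mirrors an explicit interrupt-parent property (NULL when absent);
 * "parent" is the node's structural parent in the tree. */
struct dt_node {
    struct dt_node *parent;
    struct dt_node *interrupt_parent;
};

/* Use the node's own interrupt-parent if present, otherwise walk up
 * structural parents until an ancestor supplies one.  This is what
 * lets a single interrupt-parent on the root node cover every child,
 * as in the device tree quoted in this thread. */
struct dt_node *
find_interrupt_parent(struct dt_node *node)
{
    for (; node != NULL; node = node->parent)
        if (node->interrupt_parent != NULL)
            return node->interrupt_parent;
    return NULL;
}
```

Under the stricter reading described for FreeBSD, the loop would stop at the node itself instead of walking up.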

>> On the subject of simple-bus, they usually aren't necessary. For
>> example, all hypervisor devices on IBM hardware live under /vdevice,
>> which is attached to the device tree root. They don't use MMIO, so
>> simple-bus doesn't really make sense. How does Xen communicate with the
>> OS in these devices?
>> -Nathan
>
> As I understand it, only the simple-bus code (see simplebus_attach) 
> translates a device's interrupts into a resource.
> So if you have a node with interrupts and MMIO attached directly to 
> the root node, the driver won't be able to retrieve and translate 
> the interrupts via bus_alloc_resources.

Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.

> In the Xen device tree, we have a hypervisor node directly attached 
> to the root, which contains both the MMIO region and the interrupt 
> used by Xen to communicate with the guest.
>

OK. This should be fine, though simplebus would also work if you use MMIO.
-Nathan
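
As background on the SPI/PPI question raised earlier in the thread: both come from the three-cell GIC interrupt specifier, which is why #interrupt-cells exceeds 2 here. A minimal decoder, assuming the standard GIC binding layout (struct and function names are illustrative, not an existing kernel API):

```c
#include <stdint.h>

/* A GIC interrupt specifier is <type number flags>: type 0 is an SPI
 * (shared peripheral interrupt), type 1 a PPI (private, per-processor
 * interrupt).  The number cell is relative: in the GIC's own numbering
 * SPIs start at 32 and PPIs at 16, so the decoder adds the base back. */
struct gic_irq {
    int      is_ppi;  /* 1 for PPI, 0 for SPI */
    uint32_t hwirq;   /* absolute interrupt number in GIC space */
    uint32_t flags;   /* trigger type / CPU mask cell */
};

struct gic_irq
gic_decode_specifier(const uint32_t cells[3])
{
    struct gic_irq irq;

    irq.is_ppi = (cells[0] == 1);
    irq.hwirq  = cells[1] + (irq.is_ppi ? 16u : 32u);
    irq.flags  = cells[2];
    return irq;
}
```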

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 04:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 04:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W40du-0002W2-Nx; Fri, 17 Jan 2014 04:02:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W40ds-0002Vx-TW
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 04:02:49 +0000
Received: from [85.158.143.35:55979] by server-1.bemta-4.messagelabs.com id
	DB/A8-02132-86BA8D25; Fri, 17 Jan 2014 04:02:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389931365!12303068!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32465 invoked from network); 17 Jan 2014 04:02:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 04:02:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91635729"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 04:02:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 16 Jan 2014 23:02:44 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W40dn-0000ch-Ao;
	Fri, 17 Jan 2014 04:02:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W40dm-0003Ew-K2;
	Fri, 17 Jan 2014 04:02:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24407-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 04:02:42 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24407: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4267447016360144853=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4267447016360144853==
Content-Type: text/plain

flight 24407 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24407/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             3 host-build-prep  fail in 24401 REGR. vs. 24349

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-win7-amd64  6 leak-check/basis(6)        fail pass in 24401
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 24401
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install       fail pass in 24401

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd  7 redhat-install    fail like 24406-bisect
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24401 like 24333
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail in 24401 like 24403-bisect
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24401 like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 24401 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 24401 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 24401 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24401 never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 24401 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 24401 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 24401 n/a

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                broken  
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============4267447016360144853==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4267447016360144853==--

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:26:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W42sO-0000TO-I1; Fri, 17 Jan 2014 06:25:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W42sM-0000TJ-TP
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:25:55 +0000
Received: from [85.158.143.35:53824] by server-2.bemta-4.messagelabs.com id
	3F/DE-11386-2FCC8D25; Fri, 17 Jan 2014 06:25:54 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389939950!12306041!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11447 invoked from network); 17 Jan 2014 06:25:51 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 06:25:51 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0H6PkaG012545
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 06:25:47 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0H6Pj4d004570
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 06:25:46 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0H6Pi6O024869; Fri, 17 Jan 2014 06:25:44 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 22:25:44 -0800
Message-ID: <52D8CCE4.9010804@oracle.com>
Date: Fri, 17 Jan 2014 14:25:40 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com>
In-Reply-To: <52D7BE19.2010009@citrix.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/16 19:10, David Vrabel wrote:
> On 15/01/14 23:57, Annie Li wrote:
>> This patch implements two things:
>>
>> * release grant reference and skb for rx path; this fixes a resource leak.
>> * clean up grant transfer code kept from the old netfront (2.6.18), which
>> granted pages for access/map and transfer. Grant transfer is deprecated in
>> the current netfront, so remove the corresponding release code for transfer.
>>
>> gnttab_end_foreign_access_ref may fail when the grant entry is currently used
>> for reading or writing. This patch does not cover that case; handling the
>> failure may be addressed in a separate patch.
> I don't think replacing a resource leak with a security bug is a good idea.
>
> If you would prefer not to fix the gnttab_end_foreign_access() call, I
> think you can fix this in netfront by taking a reference to the page
> before calling gnttab_end_foreign_access().  This will ensure the page
> isn't freed until the subsequent kfree_skb(), or the gref is released by
> the foreign domain (whichever is later).

Taking a reference to the page before calling
gnttab_end_foreign_access() delays freeing the page until kfree_skb().
But simply adding put_page() before kfree_skb() is no different from
using gnttab_end_foreign_access_ref(): the pages are still freed by
kfree_skb(), and gnttab_handle_deferred() then hits a problem when it
tries to free pages that have already been freed.

So the put_page() needs to be done in gnttab_end_foreign_access(); that
ensures the final free happens in either kfree_skb() or
gnttab_handle_deferred(), whichever drops the last reference. This
involves changes in blkfront/pcifront/tpmfront (just like your patch),
and ensures the page is released when the grant reference is ended.

Another solution I am considering is calling gnttab_end_foreign_access()
with a NULL page parameter; then gnttab_end_foreign_access() only ends
the grant reference, and releasing the page is left to kfree_skb().
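To make the refcount ordering concrete, here is a minimal stand-alone sketch in plain C. The struct and the get_page/put_page stubs are local mocks named after the kernel APIs under discussion, not kernel code; it only illustrates why an extra reference taken before ending foreign access lets the page survive until whichever of the two release paths runs last.

```c
#include <assert.h>

/* Local mock of page reference counting; get_page/put_page are stubs
 * named after the kernel APIs discussed above. */
struct page { int refcount; };

static void get_page(struct page *p)
{
    p->refcount++;
}

/* Returns 1 when the last reference is dropped and the page is freed.
 * Dropping a reference on an already-freed page is exactly the bug
 * gnttab_handle_deferred() would hit. */
static int put_page(struct page *p)
{
    assert(p->refcount > 0);
    return --p->refcount == 0;
}
```

With the skb holding one reference and an extra one taken before ending foreign access, neither the deferred grant release nor kfree_skb() frees the page prematurely; only the second drop does.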

Thanks
Annie


From xen-devel-bounces@lists.xen.org Fri Jan 17 06:36:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W432q-00012U-0b; Fri, 17 Jan 2014 06:36:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W432n-00012P-Ow
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 06:36:41 +0000
Received: from [85.158.143.35:36677] by server-2.bemta-4.messagelabs.com id
	E3/A4-11386-97FC8D25; Fri, 17 Jan 2014 06:36:41 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1389940598!12236821!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16345 invoked from network); 17 Jan 2014 06:36:40 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 06:36:40 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 16 Jan 2014 23:36:28 -0700
Message-ID: <52D8CF6A.7050609@suse.com>
Date: Thu, 16 Jan 2014 23:36:26 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Applications exist which want to use libxl in an event-driven mode but
> which do not integrate child termination into their event system, but
> instead reap all their own children synchronously.
>
> In such an application libxl must own SIGCHLD but avoid reaping any
> children that don't belong to libxl.
>
> Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
> behaviour.
>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/libxl/libxl_event.h |    5 +++++
>  tools/libxl/libxl_fork.c  |    7 +++++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 12e3d1f..c09e3ed 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -480,6 +480,11 @@ typedef enum {
>      /* libxl owns SIGCHLD all the time, and the application is
>       * relying on libxl's event loop for reaping its children too. */
>      libxl_sigchld_owner_libxl_always,
> +
> +    /* libxl owns SIGCHLD all the time, but it must only reap its own
> +     * children.  The application will reap its own children
> +     * synchronously with waitpid, without the assistance of SIGCHLD. */
> +    libxl_sigchld_owner_libxl_always_selective_reap,
>   

Should there be some documentation in the opening comments of
"Subprocess handling"?  E.g. an entry under "For programs which run
their own children alongside libxl's:"?

BTW, it is not clear to me how to use libxl_childproc_setmode() wrt
different libxl_ctx.  Currently in the libvirt libxl driver there's a
driver-wide ctx for more host-centric operations like
libxl_get_version_info(), libxl_get_free_memory(), etc., and a
per-domain ctx for domain-specific operations.  The current doc for
libxl_childproc_setmode() says:

 * May not be called when libxl might have any child processes, or the
 * behaviour is undefined.  So it is best to call this at
 * initialisation.


which makes me believe setmode() should be called during initialization
of the driver, using the driver-wide ctx.  But looking at the code in
libxl__ev_child_fork(), it seems a domain-specific ctx would be used,
since a domain-specific operation results in calling libxl__ev_child_fork().

>  } libxl_sigchld_owner;
>  
>  typedef struct {
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index b2325e0..16e17f6 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
>      case libxl_sigchld_owner_mainloop:
>          return 0;
>      case libxl_sigchld_owner_libxl_always:
> +    case libxl_sigchld_owner_libxl_always_selective_reap:
>   

When calling setmode() on either the driver-wide ctx or on each
domain-specific ctx, I get an assertion failure with this hunk:

libvirtd: libxl_fork.c:241: libxl__sigchld_installhandler: Assertion
`!sigchld_owner' failed.

Calling setmode() on the driver-wide ctx, I hit the assert when starting
a domain, which has its own ctx.  When calling setmode() on the
domain-specific ctx's, I don't see any problems with only one domain
(and hence one ctx), but hit the assert on a second domain, which has
its own ctx.  So e.g. libxl_domain_create_new(dom1_ctx, ...) works, but
I hit the assert when calling libxl_domain_create_new(dom2_ctx, ...).

I don't see the assert, regardless of how I call setmode(), when
changing this hunk to

@@ -264,11 +264,11 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
 {
     switch (ctx->childproc_hooks->chldowner) {
     case libxl_sigchld_owner_libxl:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return creating || !LIBXL_LIST_EMPTY(&ctx->children);
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
-    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
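
The behavioural difference Jim's alternative hunk makes can be checked in isolation with a small stand-alone mock. The enum values and function below are local stubs modelled on libxl's libxl_sigchld_owner and chldmode_ours(); this is illustrative, not libxl code:

```c
#include <assert.h>
#include <stdbool.h>

/* Local stubs modelled on libxl_sigchld_owner. */
enum chldowner {
    owner_libxl,
    owner_mainloop,
    owner_libxl_always,
    owner_libxl_always_selective_reap,
};

/* chldmode_ours() as in the alternative hunk: selective_reap only
 * claims SIGCHLD while libxl is creating or already has children,
 * instead of unconditionally. */
static bool chldmode_ours_proposed(enum chldowner o,
                                   bool creating, bool have_children)
{
    switch (o) {
    case owner_libxl:
    case owner_libxl_always_selective_reap:
        return creating || have_children;
    case owner_mainloop:
        return false;
    case owner_libxl_always:
        return true;
    }
    return false;
}
```

Under this dispatch an idle ctx never owns the handler, which is consistent with Jim not seeing the `!sigchld_owner` assertion across multiple ctxs.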

I'm not familiar with this aspect of libxl, so this might be complete
nonsense.  But improving the setmode() docs wrt apps that have multiple
libxl_ctx's would be helpful.

Regards,
Jim

>          return 1;
>      }
>      abort();
> @@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
>      int e = libxl__self_pipe_eatall(selfpipe);
>      if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
>  
> +    if (CTX->childproc_hooks->chldowner
> +        == libxl_sigchld_owner_libxl_always_selective_reap) {
> +        childproc_checkall(egc);
> +        return;
> +    }
> +
>      while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
>          int status;
>          pid_t pid = checked_waitpid(egc, -1, &status);
>   


From xen-devel-bounces@lists.xen.org Fri Jan 17 06:39:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W435e-0001SY-4v; Fri, 17 Jan 2014 06:39:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W435c-0001SS-QM
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:39:37 +0000
Received: from [85.158.137.68:14284] by server-16.bemta-3.messagelabs.com id
	9D/C2-26128-720D8D25; Fri, 17 Jan 2014 06:39:35 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389940773!6021847!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12304 invoked from network); 17 Jan 2014 06:39:33 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-31.messagelabs.com with SMTP;
	17 Jan 2014 06:39:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 16 Jan 2014 22:39:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="460316162"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by fmsmga001.fm.intel.com with ESMTP; 16 Jan 2014 22:39:31 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 17 Jan 2014 14:35:08 +0800
Message-Id: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: JBeulich@suse.com, andrew.cooper3@citrix.com, eddie.dong@intel.com,
	jun.nakajima@intel.com, Yang Zhang <yang.z.zhang@Intel.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] nested EPT: fixing wrong handling for L2
	guest's direct mmio access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

An L2 guest can access a physical device directly (nested VT-d). For such
accesses, the shadow EPT table should point at the device's MMIO. But in the
current logic, L0 does not distinguish whether MMIO comes from qemu or from a
physical device when building the shadow EPT table, which is wrong. This patch
sets up the correct shadow EPT entries for such MMIO ranges.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/mm/hap/nested_hap.c    |   10 ++++++++--
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index c2ef1d1..38e2327 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -170,8 +170,11 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
     mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
                               0, page_order);
 
+    rc = NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
+    if ( *p2mt == p2m_mmio_direct )
+        goto direct_mmio_out;
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( p2m_is_mmio(*p2mt) )
+    if ( *p2mt == p2m_mmio_dm )
         goto out;
 
     rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
@@ -184,8 +187,9 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
     if ( !mfn_valid(mfn) )
         goto out;
 
-    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
     rc = NESTEDHVM_PAGEFAULT_DONE;
+direct_mmio_out:
+    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
 out:
     __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
     return rc;
@@ -245,6 +249,8 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
         break;
     case NESTEDHVM_PAGEFAULT_MMIO:
         return rv;
+    case NESTEDHVM_PAGEFAULT_DIRECT_MMIO:
+        break;
     default:
         BUG();
         break;
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index d8124cf..cca41b3 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -53,6 +53,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
 #define NESTEDHVM_PAGEFAULT_RETRY      5
+#define NESTEDHVM_PAGEFAULT_DIRECT_MMIO 6
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:39:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W435e-0001SY-4v; Fri, 17 Jan 2014 06:39:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W435c-0001SS-QM
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:39:37 +0000
Received: from [85.158.137.68:14284] by server-16.bemta-3.messagelabs.com id
	9D/C2-26128-720D8D25; Fri, 17 Jan 2014 06:39:35 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389940773!6021847!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12304 invoked from network); 17 Jan 2014 06:39:33 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-31.messagelabs.com with SMTP;
	17 Jan 2014 06:39:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 16 Jan 2014 22:39:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="460316162"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by fmsmga001.fm.intel.com with ESMTP; 16 Jan 2014 22:39:31 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 17 Jan 2014 14:35:08 +0800
Message-Id: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: JBeulich@suse.com, andrew.cooper3@citrix.com, eddie.dong@intel.com,
	jun.nakajima@intel.com, Yang Zhang <yang.z.zhang@Intel.com>,
	xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] nested EPT: fixing wrong handling for L2
	guest's direct mmio access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

With nested VT-d, an L2 guest accesses a physical device directly. For such
accesses the shadow EPT table should point at the device's MMIO range. However,
the current logic does not distinguish MMIO emulated by qemu from MMIO backed by
a physical device when building the shadow EPT table, which is wrong. This patch
sets up the correct shadow EPT entries for such MMIO ranges.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/mm/hap/nested_hap.c    |   10 ++++++++--
 xen/include/asm-x86/hvm/nestedhvm.h |    1 +
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index c2ef1d1..38e2327 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -170,8 +170,11 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
     mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
                               0, page_order);
 
+    rc = NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
+    if ( *p2mt == p2m_mmio_direct )
+        goto direct_mmio_out;
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( p2m_is_mmio(*p2mt) )
+    if ( *p2mt == p2m_mmio_dm )
         goto out;
 
     rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
@@ -184,8 +187,9 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
     if ( !mfn_valid(mfn) )
         goto out;
 
-    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
     rc = NESTEDHVM_PAGEFAULT_DONE;
+direct_mmio_out:
+    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
 out:
     __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
     return rc;
@@ -245,6 +249,8 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
         break;
     case NESTEDHVM_PAGEFAULT_MMIO:
         return rv;
+    case NESTEDHVM_PAGEFAULT_DIRECT_MMIO:
+        break;
     default:
         BUG();
         break;
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index d8124cf..cca41b3 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -53,6 +53,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
 #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
 #define NESTEDHVM_PAGEFAULT_MMIO       4
 #define NESTEDHVM_PAGEFAULT_RETRY      5
+#define NESTEDHVM_PAGEFAULT_DIRECT_MMIO 6
 int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:41:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W437V-0001Zu-7M; Fri, 17 Jan 2014 06:41:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W437T-0001Zo-KP
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 06:41:31 +0000
Received: from [85.158.143.35:55383] by server-1.bemta-4.messagelabs.com id
	94/9B-02132-A90D8D25; Fri, 17 Jan 2014 06:41:30 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389940888!11013234!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9836 invoked from network); 17 Jan 2014 06:41:30 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 06:41:30 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 16 Jan 2014 23:41:18 -0700
Message-ID: <52D8D087.3000603@suse.com>
Date: Thu, 16 Jan 2014 23:41:11 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<21208.12814.360247.15168@mariner.uk.xensource.com>
In-Reply-To: <21208.12814.360247.15168@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 0/7] libxl: fork: Selective reaping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Ian Jackson writes ("[RFC PATCH 0/7] libxl: fork: Selective reaping"):
>   
>> libvirt reaps its children synchronously and has no central pid
>> registry and no dispatch mechanism.  libxl does have a pid registry so
>> can provide a selective reaping facility, but that is not currently exposed.
>>
>> NB that I have compiled this series but I have NOT EXECUTED IT.
>> The most plausible test environment is a suitably modified libvirt.
>>
>>  1/7 libxl: fork: Break out checked_waitpid
>>  2/7 libxl: fork: Break out childproc_reaped_ours
>>  3/7 libxl: fork: Clarify docs for libxl_sigchld_owner
>>  4/7 libxl: fork: assert that chldmode is right
>>  5/7 libxl: fork: Provide libxl_childproc_sigchld_occurred
>>  6/7 libxl: fork: Provide ..._always_selective_reap
>>  7/7 libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
>>     
>
> I should say, to Jim: I think that with this series applied, simply
> having libvirt pass libxl_sigchld_owner_libxl_always_selective_reap
> should be sufficient for everything to work.
>   

Ian, thanks a lot for these patches!  They certainly make life much
easier in the libvirt libxl driver :).  See 6/7 for a few questions.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:43:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:43:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W439O-0001i0-V4; Fri, 17 Jan 2014 06:43:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W439N-0001hu-Ph
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:43:29 +0000
Received: from [85.158.143.35:38412] by server-2.bemta-4.messagelabs.com id
	CE/18-11386-111D8D25; Fri, 17 Jan 2014 06:43:29 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389941007!12233642!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32144 invoked from network); 17 Jan 2014 06:43:28 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-21.messagelabs.com with SMTP;
	17 Jan 2014 06:43:28 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 16 Jan 2014 22:43:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="466544319"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by fmsmga002.fm.intel.com with ESMTP; 16 Jan 2014 22:43:25 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Fri, 17 Jan 2014 14:39:02 +0800
Message-Id: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: JBeulich@suse.com, andrew.cooper3@citrix.com, chegger@amazon.de,
	eddie.dong@intel.com, jun.nakajima@intel.com,
	Yang Zhang <yang.z.zhang@Intel.com>
Subject: [Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit
	during IO emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Sometimes L0 needs to decode an L2 instruction to handle an IO access directly,
and it may get X86EMUL_RETRY while handling that IO request. If a virtual vmexit
is pending at the same time (for example, an interrupt to be injected into L1),
the hypervisor will switch the VCPU context from L2 to L1. We are then already
in L1's context, but because of the X86EMUL_RETRY the hypervisor will retry the
IO request later, and unfortunately that retry will happen in L1's context,
which causes the problem.
The fix is to allow no virtual vmexit/vmentry while an IO request is still
pending.
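
The guard this patch adds can be modeled with a short self-contained sketch
(the state values below are illustrative stand-ins, not the real ioreq
definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the real ioreq state values. */
enum ioreq_state { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };

/* Model of the early exit added to nvmx_switch_guest(): a virtual
 * vmswitch must not happen while an IO request is still being emulated,
 * or the retried emulation would run in the wrong (L1) VCPU context. */
static bool may_switch_guest(enum ioreq_state s)
{
    return s == STATE_IOREQ_NONE;
}
```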

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 41db52b..27119d5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,8 +1394,16 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *p = get_ioreq(v);
 
     /*
+     * A pending IO emulation may not have finished yet. In that
+     * case no virtual vmswitch is allowed, or else the subsequent
+     * IO emulation would be handled in the wrong VCPU context.
+     */
+    if ( p->state != STATE_IOREQ_NONE )
+        return;
+    /*
      * a softirq may interrupt us between a virtual vmentry is
      * just handled and the true vmentry. If during this window,
      * a L1 virtual interrupt causes another virtual vmexit, we
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:59:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W43O6-0002mI-Ma; Fri, 17 Jan 2014 06:58:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W43O5-0002mD-5s
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:58:41 +0000
Received: from [85.158.143.35:64426] by server-2.bemta-4.messagelabs.com id
	8D/80-11386-0A4D8D25; Fri, 17 Jan 2014 06:58:40 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389941918!11015203!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19025 invoked from network); 17 Jan 2014 06:58:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 06:58:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0H6wYM6010569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 06:58:35 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0H6wXuS018255
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 06:58:34 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0H6wXLu020039; Fri, 17 Jan 2014 06:58:33 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 22:58:33 -0800
Message-ID: <52D8D496.6070609@oracle.com>
Date: Fri, 17 Jan 2014 14:58:30 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
In-Reply-To: <52D8CCE4.9010804@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/17 14:25, annie li wrote:
>
> On 2014/1/16 19:10, David Vrabel wrote:
>> On 15/01/14 23:57, Annie Li wrote:
>>> This patch implements two things:
>>>
>>> * release grant reference and skb for rx path, this fixes resource 
>>> leaks.
>>> * clean up grant transfer code kept from old netfront(2.6.18) which 
>>> grants
>>> pages for access/map and transfer. But grant transfer is deprecated 
>>> in current
>>> netfront, so remove corresponding release code for transfer.
>>>
>>> gnttab_end_foreign_access_ref may fail when the grant entry is 
>>> currently used
>>> for reading or writing. But this patch does not cover this and 
>>> improvement for
>>> this failure may be implemented in a separate patch.
>> I don't think replacing a resource leak with a security bug is a good 
>> idea.
>>
>> If you would prefer not to fix the gnttab_end_foreign_access() call, I
>> think you can fix this in netfront by taking a reference to the page
>> before calling gnttab_end_foreign_access().  This will ensure the page
>> isn't freed until the subsequent kfree_skb(), or the gref is released by
>> the foreign domain (whichever is later).
>
> Taking a reference to the page before calling
> gnttab_end_foreign_access() only delays the actual freeing until
> kfree_skb(). Simply adding put_page() before kfree_skb() is no
> different from using gnttab_end_foreign_access_ref(): the pages would
> still be freed by kfree_skb(), and gnttab_handle_deferred() would then
> hit a problem when it tries to free pages that have already been freed.
>
> So the put_page() is required inside gnttab_end_foreign_access(); that
> ensures the final free happens in either kfree_skb() or
> gnttab_handle_deferred(), whichever drops the last reference. This
> involves changes in blkfront/pcifront/tpmfront (just like your patch),
> and guarantees the page is released only once the grant reference has
> ended.
>
> Another solution I am considering is calling
> gnttab_end_foreign_access() with the page parameter as NULL: then
> gnttab_end_foreign_access() would only end the grant reference, and
> releasing the page would be left to kfree_skb().

Not NULL above, it should be 0UL.
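
The ordering being discussed can be illustrated with a toy reference-count
model (entirely hypothetical helpers, not the real grant-table or skb API):
the page must be freed only after both the skb and the grant reference have
dropped their counts, in whichever order those two events happen:

```c
#include <assert.h>

/* Toy page with a reference count (hypothetical; not struct page). */
struct toy_page { int refcount; int freed; };

static void toy_put(struct toy_page *p)
{
    if (--p->refcount == 0)
        p->freed = 1;  /* freed only when the last reference drops */
}

/* Model: netfront holds one reference per outstanding user, so the
 * grant-release path and kfree_skb() each drop one reference and the
 * page survives until whichever of the two runs last. */
static void end_foreign_access_model(struct toy_page *p) { toy_put(p); }
static void kfree_skb_model(struct toy_page *p)          { toy_put(p); }
```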

Thanks
Annie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 06:59:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 06:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W43O6-0002mI-Ma; Fri, 17 Jan 2014 06:58:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W43O5-0002mD-5s
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 06:58:41 +0000
Received: from [85.158.143.35:64426] by server-2.bemta-4.messagelabs.com id
	8D/80-11386-0A4D8D25; Fri, 17 Jan 2014 06:58:40 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389941918!11015203!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19025 invoked from network); 17 Jan 2014 06:58:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 06:58:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0H6wYM6010569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 06:58:35 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0H6wXuS018255
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 06:58:34 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0H6wXLu020039; Fri, 17 Jan 2014 06:58:33 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 16 Jan 2014 22:58:33 -0800
Message-ID: <52D8D496.6070609@oracle.com>
Date: Fri, 17 Jan 2014 14:58:30 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
In-Reply-To: <52D8CCE4.9010804@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/17 14:25, annie li wrote:
>
> On 2014/1/16 19:10, David Vrabel wrote:
>> On 15/01/14 23:57, Annie Li wrote:
>>> This patch implements two things:
>>>
>>> * release grant references and skbs for the rx path; this fixes a
>>> resource leak.
>>> * clean up grant transfer code kept from the old (2.6.18) netfront,
>>> which grants pages for access/map and transfer. Grant transfer is
>>> deprecated in the current netfront, so remove the corresponding
>>> release code for transfer.
>>>
>>> gnttab_end_foreign_access_ref may fail when the grant entry is
>>> currently in use for reading or writing. This patch does not cover
>>> that case; handling this failure may be implemented in a separate
>>> patch.
>> I don't think replacing a resource leak with a security bug is a good 
>> idea.
>>
>> If you would prefer not to fix the gnttab_end_foreign_access() call, I
>> think you can fix this in netfront by taking a reference to the page
>> before calling gnttab_end_foreign_access().  This will ensure the page
>> isn't freed until the subsequent kfree_skb(), or the gref is released by
>> the foreign domain (whichever is later).
>
> Taking a reference to the page before calling
> gnttab_end_foreign_access() delays freeing the page until kfree_skb().
> But simply adding put_page() before kfree_skb() is no different from
> gnttab_end_foreign_access_ref(): the pages will be freed by
> kfree_skb(), and a problem will then be hit in gnttab_handle_deferred()
> when it tries to free pages which have already been freed.
>
> So the put_page() needs to happen in gnttab_end_foreign_access(); this
> ensures the free is done either by kfree_skb() or by
> gnttab_handle_deferred(). That involves changes in
> blkfront/pcifront/tpmfront (just like your patch), and ensures the
> page is released only once the grant reference has ended.
>
> Another solution I am considering is calling gnttab_end_foreign_access()
> with the page parameter as NULL; then gnttab_end_foreign_access() will
> only end the grant reference, and releasing the page is left to
> kfree_skb().

Not NULL above; it should be 0UL.
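[Editor's note: for illustration only, the second approach discussed above might look roughly like the following kernel-style sketch. This is not the actual patch and is not standalone compilable code; names such as NET_RX_RING_SIZE, rx_skbs, grant_rx_ref and GRANT_INVALID_REF follow common netfront conventions but are assumptions here.]

/* Hypothetical sketch: end the foreign access with a page argument of
 * 0UL, so the grant layer only releases the grant reference and never
 * frees the page itself; the page stays owned by the skb and is freed
 * by kfree_skb() below. */
static void xennet_release_rx_bufs(struct netfront_info *np)
{
	int i;

	for (i = 0; i < NET_RX_RING_SIZE; i++) {
		struct sk_buff *skb = np->rx_skbs[i];
		grant_ref_t ref = np->grant_rx_ref[i];

		if (!skb || ref == GRANT_INVALID_REF)
			continue;

		/* Page argument is 0UL: only the grant ref is ended. */
		gnttab_end_foreign_access(ref, 0 /* read-only */, 0UL);
		np->grant_rx_ref[i] = GRANT_INVALID_REF;

		kfree_skb(skb);
	}
}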

Thanks
Annie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 08:11:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 08:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W44W0-0006lf-CQ; Fri, 17 Jan 2014 08:10:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W44Vz-0006la-4f
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 08:10:55 +0000
Received: from [193.109.254.147:62754] by server-16.bemta-14.messagelabs.com
	id 06/CE-20600-E85E8D25; Fri, 17 Jan 2014 08:10:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1389946252!11483404!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23176 invoked from network); 17 Jan 2014 08:10:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 08:10:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93777071"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 08:10:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 03:10:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W44Vu-0001sf-OP;
	Fri, 17 Jan 2014 08:10:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W44Vu-0006PL-NP;
	Fri, 17 Jan 2014 08:10:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24410-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 08:10:50 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24410: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24410 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24410/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  7 debian-install              fail pass in 24386
 test-armhf-armhf-xl           4 xen-install                 fail pass in 24386
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10     fail pass in 24386
 test-amd64-amd64-pv          16 guest-start.2      fail in 24386 pass in 24410
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24386 pass in 24410

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24386 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24386 never pass

version targeted for testing:
 xen                  c04c825bdf1e946260cba325eeed993004051050
baseline version:
 xen                  c04c825bdf1e946260cba325eeed993004051050

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 08:22:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 08:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W44h6-0007LJ-NS; Fri, 17 Jan 2014 08:22:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W44h4-0007LE-VV
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 08:22:23 +0000
Received: from [85.158.143.35:12124] by server-1.bemta-4.messagelabs.com id
	58/CC-02132-E38E8D25; Fri, 17 Jan 2014 08:22:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389946941!11030692!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29294 invoked from network); 17 Jan 2014 08:22:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 08:22:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 08:22:21 +0000
Message-Id: <52D8F64A020000780011470D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 08:22:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <52D662760200007800113B5A@nat28.tlf.novell.com>
	<1389780234.12434.139.camel@kazak.uk.xensource.com>
	<52D66F4C0200007800113BDB@nat28.tlf.novell.com>
	<1389782889.12434.175.camel@kazak.uk.xensource.com>
	<20140115142419.GK1696@perard.uk.xensource.com>
	<52D6A9A10200007800113E08@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1401161638550.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401161638550.21510@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Python version requirements vs qemu-upstream
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 17:39, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 15 Jan 2014, Jan Beulich wrote:
>> >>> On 15.01.14 at 15:24, Anthony PERARD <anthony.perard@citrix.com> wrote:
>> > On Wed, Jan 15, 2014 at 10:48:09AM +0000, Ian Campbell wrote:
>> >> Given that it is 2.4 and Qemu's configure script explicitly says that
>> >> 2.4 is what they want I think it would be worth sending the fix to qemu
>> >> upstream -- at worst they say no and bump their requirement (at which
>> >> point we'd have to have a think about what to do).
>> > 
>> > It looks like the fix is already upstream. Here in the 1.6 stable
>> > branch:
>> > 
>> > http://git.qemu.org/?p=qemu.git;a=commit;h=0ca1774b619dc53db5eb1419d12efcd433f9fe3d
>> 
>> Oh, great. Stefano - can you pull this in then, please?
> 
> I would rather pull it as part of the update to 1.6.2.

Fine with me if that's going to happen for 4.4. If, however, it's
going to be postponed to 4.4.1, I'd still request that this build fix
be cherry-picked.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 08:49:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 08:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W456h-0000gi-BU; Fri, 17 Jan 2014 08:48:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W456f-0000gd-AE
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 08:48:49 +0000
Received: from [85.158.143.35:8703] by server-3.bemta-4.messagelabs.com id
	60/24-32360-07EE8D25; Fri, 17 Jan 2014 08:48:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1389948526!11036728!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14059 invoked from network); 17 Jan 2014 08:48:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 08:48:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93785056"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 08:48:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 03:48:45 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W456a-000246-Hv;
	Fri, 17 Jan 2014 08:48:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W456a-00011t-HR;
	Fri, 17 Jan 2014 08:48:44 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24411-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 08:48:44 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24411: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3887665316512793933=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3887665316512793933==
Content-Type: text/plain

flight 24411 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24411/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  7 freebsd-install        fail REGR. vs. 24333
 build-amd64-pvops             3 host-build-prep  fail in 24401 REGR. vs. 24349

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                    fail pass in 24407
 test-amd64-amd64-xl           8 debian-fixup                fail pass in 24407
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot              fail pass in 24407
 test-amd64-i386-xl-win7-amd64 6 leak-check/basis(6) fail in 24407 pass in 24401
 test-amd64-i386-xl-qemut-win7-amd64 7 windows-install fail in 24407 pass in 24411
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24407 pass in 24411

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  5 xen-boot                     fail  like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail like 24403-bisect
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24349
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10   fail like 24349
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-i386-freebsd10-amd64  5 xen-boot           fail in 24407 like 24349
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install  fail in 24407 like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 24407 like 24406-bisect
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install fail in 24407 like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24401 like 24333

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24407 never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 24401 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 24401 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 24401 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 24401 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 24401 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 24401 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 24401 n/a

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1603 lines long.)


--===============3887665316512793933==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3887665316512793933==--

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:00:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45Hg-0000v2-2d; Fri, 17 Jan 2014 09:00:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0876ab4dd=chegger@amazon.de>)
	id 1W45He-0000ux-G0
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:00:10 +0000
Received: from [85.158.139.211:64099] by server-6.bemta-5.messagelabs.com id
	91/29-16310-911F8D25; Fri, 17 Jan 2014 09:00:09 +0000
X-Env-Sender: prvs=0876ab4dd=chegger@amazon.de
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389949207!10323861!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4508 invoked from network); 17 Jan 2014 09:00:08 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:00:08 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1389949208; x=1421485208;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=eIbUD0bXJR7bnCqXG9cHxTNQ43usejhLVdsOcZXM9UQ=;
	b=DpCMKqLnegRChMWkEBXJoBoeDU16ZSHD5WQs+2jBPoaZNsScQM4jHqfQ
	nLH8AFkCUghtkx8dg8VmJ5BDX3izM3CIKeVwAY0n1W7MGxREq0NL5AaY3
	PiApvhnlwl3/bEVm9mIs6lKdMpw2iigYq774m0Rueg99idzj7dEJPEX00 Y=;
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="62709220"
Received: from email-inbound-relay-62002.pdx2.amazon.com ([10.241.21.79])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 17 Jan 2014 09:00:02 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by email-inbound-relay-62002.pdx2.amazon.com (8.14.7/8.14.7) with ESMTP
	id s0H902gn021713
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Fri, 17 Jan 2014 09:00:02 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.342.3; Fri, 17 Jan 2014 00:59:57 -0800
Message-ID: <52D8F10A.7080501@amazon.de>
Date: Fri, 17 Jan 2014 09:59:54 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Yang Zhang <yang.z.zhang@intel.com>, <xen-devel@lists.xen.org>
References: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: andrew.cooper3@citrix.com, xiantao.zhang@intel.com, eddie.dong@intel.com,
	jun.nakajima@intel.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH] nested EPT: fixing wrong handling for L2
 guest's direct mmio access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17.01.14 07:35, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> An L2 guest will access a passed-through physical device directly (nested VT-d).
> For such accesses, the shadow EPT table should point at the device's MMIO. But in
> the current logic, L0 does not distinguish whether MMIO belongs to qemu or to a
> physical device when building the shadow EPT table, which is wrong. This patch
> sets up the correct shadow EPT table entries for such MMIO ranges.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/mm/hap/nested_hap.c    |   10 ++++++++--
>  xen/include/asm-x86/hvm/nestedhvm.h |    1 +
>  2 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
> index c2ef1d1..38e2327 100644
> --- a/xen/arch/x86/mm/hap/nested_hap.c
> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> @@ -170,8 +170,11 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
>      mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
>                                0, page_order);
>  
> +    rc = NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
> +    if ( *p2mt == p2m_mmio_direct )
> +        goto direct_mmio_out;
>      rc = NESTEDHVM_PAGEFAULT_MMIO;
> -    if ( p2m_is_mmio(*p2mt) )
> +    if ( *p2mt == p2m_mmio_dm )
>          goto out;

Why does p2m_is_mmio() not cover p2m_mmio_direct ?

Christoph

>      rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
> @@ -184,8 +187,9 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
>      if ( !mfn_valid(mfn) )
>          goto out;
>  
> -    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
>      rc = NESTEDHVM_PAGEFAULT_DONE;
> +direct_mmio_out:
> +    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
>  out:
>      __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
>      return rc;
> @@ -245,6 +249,8 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>          break;
>      case NESTEDHVM_PAGEFAULT_MMIO:
>          return rv;
> +    case NESTEDHVM_PAGEFAULT_DIRECT_MMIO:
> +        break;
>      default:
>          BUG();
>          break;
> diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
> index d8124cf..cca41b3 100644
> --- a/xen/include/asm-x86/hvm/nestedhvm.h
> +++ b/xen/include/asm-x86/hvm/nestedhvm.h
> @@ -53,6 +53,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
>  #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
>  #define NESTEDHVM_PAGEFAULT_MMIO       4
>  #define NESTEDHVM_PAGEFAULT_RETRY      5
> +#define NESTEDHVM_PAGEFAULT_DIRECT_MMIO 6
>  int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      bool_t access_r, bool_t access_w, bool_t access_x);
>  
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
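[Archive note: the classification the patch introduces — qemu-emulated MMIO (p2m_mmio_dm) versus passed-through device MMIO (p2m_mmio_direct) — can be sketched in isolation. This is a simplified, hypothetical model for illustration only; the enum values and the classify() helper below are stand-ins, not the real Xen definitions or code paths.]

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the p2m types named in the patch. */
typedef enum {
    p2m_ram_rw,       /* ordinary guest RAM */
    p2m_mmio_dm,      /* MMIO emulated by the qemu device model */
    p2m_mmio_direct,  /* MMIO of a physical device passed through to the guest */
} p2m_type_t;

/* Return codes mirroring the NESTEDHVM_PAGEFAULT_* constants in the patch. */
enum {
    NESTEDHVM_PAGEFAULT_DONE,        /* translate and fill the shadow EPT */
    NESTEDHVM_PAGEFAULT_MMIO,        /* hand off to the device model */
    NESTEDHVM_PAGEFAULT_DIRECT_MMIO, /* map the device pages in shadow EPT */
};

/* Before the patch both MMIO types took the emulation path; after it,
 * direct MMIO is translated like RAM so the shadow EPT can map the
 * device's machine frames directly. */
static int classify(p2m_type_t t)
{
    if (t == p2m_mmio_direct)
        return NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
    if (t == p2m_mmio_dm)
        return NESTEDHVM_PAGEFAULT_MMIO;
    return NESTEDHVM_PAGEFAULT_DONE;
}
```

Christoph's question then amounts to asking why the original code used the broader p2m_is_mmio() predicate, which covers both MMIO types, where only the device-model case was intended.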

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17.01.14 07:35, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> With nested VT-d, an L2 guest accesses the physical device directly. For such
> accesses, the shadow EPT table should point at the device's MMIO. But in the
> current logic, L0 doesn't distinguish MMIO emulated by qemu from the physical
> device's MMIO when building the shadow EPT table. This is wrong. This patch
> sets up the correct shadow EPT table entries for such MMIO ranges.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/mm/hap/nested_hap.c    |   10 ++++++++--
>  xen/include/asm-x86/hvm/nestedhvm.h |    1 +
>  2 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
> index c2ef1d1..38e2327 100644
> --- a/xen/arch/x86/mm/hap/nested_hap.c
> +++ b/xen/arch/x86/mm/hap/nested_hap.c
> @@ -170,8 +170,11 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
>      mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
>                                0, page_order);
>  
> +    rc = NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
> +    if ( *p2mt == p2m_mmio_direct )
> +        goto direct_mmio_out;
>      rc = NESTEDHVM_PAGEFAULT_MMIO;
> -    if ( p2m_is_mmio(*p2mt) )
> +    if ( *p2mt == p2m_mmio_dm )
>          goto out;

Why does p2m_is_mmio() not cover p2m_mmio_direct?

Christoph

>      rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
> @@ -184,8 +187,9 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
>      if ( !mfn_valid(mfn) )
>          goto out;
>  
> -    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
>      rc = NESTEDHVM_PAGEFAULT_DONE;
> +direct_mmio_out:
> +    *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
>  out:
>      __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
>      return rc;
> @@ -245,6 +249,8 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>          break;
>      case NESTEDHVM_PAGEFAULT_MMIO:
>          return rv;
> +    case NESTEDHVM_PAGEFAULT_DIRECT_MMIO:
> +        break;
>      default:
>          BUG();
>          break;
> diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
> index d8124cf..cca41b3 100644
> --- a/xen/include/asm-x86/hvm/nestedhvm.h
> +++ b/xen/include/asm-x86/hvm/nestedhvm.h
> @@ -53,6 +53,7 @@ bool_t nestedhvm_vcpu_in_guestmode(struct vcpu *v);
>  #define NESTEDHVM_PAGEFAULT_L0_ERROR   3
>  #define NESTEDHVM_PAGEFAULT_MMIO       4
>  #define NESTEDHVM_PAGEFAULT_RETRY      5
> +#define NESTEDHVM_PAGEFAULT_DIRECT_MMIO 6
>  int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
>      bool_t access_r, bool_t access_w, bool_t access_x);
>  
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:29:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45k2-0002yh-6o; Fri, 17 Jan 2014 09:29:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W45k0-0002yZ-5h
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:29:28 +0000
Received: from [85.158.137.68:31580] by server-17.bemta-3.messagelabs.com id
	10/E4-15965-7F7F8D25; Fri, 17 Jan 2014 09:29:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389950965!9757823!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10764 invoked from network); 17 Jan 2014 09:29:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:29:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93794219"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 09:29:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 04:29:24 -0500
Message-ID: <1389950962.6697.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 17 Jan 2014 09:29:22 +0000
In-Reply-To: <52D87B15.5090208@linaro.org>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	Nathan Whitehorn <nwhitehorn@freebsd.org>, gibbs@freebsd.org,
	Warner Losh <imp@bsdimp.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 00:36 +0000, Julien Grall wrote:
> 
> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
> >
> > Thanks for the CC. Could you explain what you mean by "grant-parent"
> > etc? "interrupt-parent" is a fundamental part of the way PAPR (and
> > ePAPR) work, so I'm very very hesitant to start guessing. I think things
> > have broken for you because some (a lot, actually) of OF code does not
> > expect #interrupt-cells to be more than 2. Some APIs might need
> > updating, which I'll try to take care of. Could you tell me what the
> > difference between SPI and PPI is, by the way?
> 
> Sorry, I also made some typos in my explanation, so it was not clear.
> 
> interrupt-parent is a property in a device node which links this node to 
> an interrupt controller (in our case the GIC controller).
> 
> The way Linux and ePAPR handle it is different:
>     - ePAPR (chapter 2.4) says:
> The physical wiring of an interrupt source to an interrupt controller is 
> represented in the device tree with the interrupt-parent property. Nodes 
> that represent interrupt-generating devices contain an
> interrupt-parent property which has a phandle value that points to the 
> device to which the device's interrupts are routed, typically an 
> interrupt controller. If an interrupt-generating device does not have
> an interrupt-parent property, its interrupt parent is assumed to be its 
> device tree parent.
>  From my understanding, it's mandatory to have an interrupt-parent 
> property on each device node which uses interrupts. If it doesn't 
> exist, the device-tree parent is assumed to be the interrupt parent.
> Even if I'm mistaken about ePAPR, FreeBSD at least handles the 
> interrupt-parent property this way.
>     - The Linux implementation looks at the node and, if the property 
> doesn't exist, checks whether an ancestor has the property ...
> 
> So the device tree below is valid on Linux, but not on FreeBSD:
> 
> / {
>    interrupt-parent = &gic
> 
>    gic: gic@10
>    {
>      ...
>    }
> 
>    timer@1
>    {
>      interrupts = <...>
>    }
> }
> 
> Most shipped device trees use this trick.
> 
> IanC: I was reading the linux binding documentation 
> (devicetree/booting-without-of.txt VII.2) and it seems that the 
> explanation differs from the implementation, right?

I vaguely recall someone saying that the Linux behaviour was a quirk of
some real PPC system which supplied a DTB requiring this behaviour, and
that it has since leaked into the other platforms. It does also sound
like a useful extension to the spec, since it makes the dtb easier to write.

The right place to discuss this would probably be either #devicetree on
freenode or devicetree@vger.kernel.org.

> > On the subject of simple-bus, they usually aren't necessary. For
> > example, all hypervisor devices on IBM hardware live under /vdevice,
> > which is attached to the device tree root. They don't use MMIO, so
> > simple-bus doesn't really make sense. How does Xen communicate with the
> > OS in these devices?
> > -Nathan
> 
> As I understand it, only the simple-bus code (see simplebus_attach) 
> translates a device's interrupts into a resource.
> So if you have a node with interrupts and MMIO attached directly to the 
> root node, the driver won't be able to retrieve and translate the 
> interrupts via bus_alloc_resources.

Is the root node not considered to be a "top-level simple-bus" with a
1:1 mapping of MMIO and interrupts? (Linux seems to treat it this way,
but I haven't trawled the docs for a spec reference to back that
behaviour up). I take it BSD doesn't do this?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:33:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45nb-00039I-3e; Fri, 17 Jan 2014 09:33:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W45nZ-00039B-BV
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:33:09 +0000
Received: from [85.158.143.35:8775] by server-1.bemta-4.messagelabs.com id
	66/5E-02132-4D8F8D25; Fri, 17 Jan 2014 09:33:08 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389951186!12345214!1
X-Originating-IP: [209.85.214.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 782 invoked from network); 17 Jan 2014 09:33:07 -0000
Received: from mail-ob0-f182.google.com (HELO mail-ob0-f182.google.com)
	(209.85.214.182)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:33:07 -0000
Received: by mail-ob0-f182.google.com with SMTP id wn1so4067627obc.13
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 01:33:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=awtEyvDDUAtwv9FA9UgonNq2Qh1gnis9R6krPywTGGI=;
	b=zfS9Rwmn0lH2FoPh+E0dnp00Ns6cJLPAMLayvf/NHHZish7cZw/kbWhwW+r0XGM+Rs
	GPxDhQIwQPc/H6RyXIn+KzpXtVdDSvr/3aRdVEKRHs7ML+v5VYb9OsaCTF2gtzsnPnO0
	h9fw9gqsEY96F5ZivWsL9ap4Tcy3Qhp7VmkadCrTebQy7fJ6/kcfGSQSLuniBIB0FsWt
	i/J7DUfytffWqupYzWdC/jxXXzFjSlXK9D1DpiPYiU5HXXRxPTGuoveIpipr4RVdzkbF
	GS3hE8S9WPKM86FaBx5ZfsSTeVO7lU4IolAyxs91gkFC0bU8W1tixBIISuxvyP8fkfjT
	so6w==
MIME-Version: 1.0
X-Received: by 10.182.146.104 with SMTP id tb8mr704462obb.54.1389951185666;
	Fri, 17 Jan 2014 01:33:05 -0800 (PST)
Received: by 10.60.68.6 with HTTP; Fri, 17 Jan 2014 01:33:05 -0800 (PST)
In-Reply-To: <52B1B616.4010402@samsung.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
Date: Fri, 17 Jan 2014 17:33:05 +0800
Message-ID: <CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Eugene Fedotov <e.fedotov@samsung.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 18, 2013 at 10:49 PM, Eugene Fedotov <e.fedotov@samsung.com> wrote:
>> On Wed, 2013-12-18 at 17:23 +0400, Eugene Fedotov wrote:
>>
>>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1b37078
>>> wpath=/local/domain/0/backend/vif/12/0/state token=3/0: event
>>> epath=/local/domain/0/backend/vif/12/0/state
>>> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend
>>> /local/domain/0/backend/vif/12/0/state wanted state 2 still waiting state
>>> 1
>>
>> Do you have netback enabled in your dom0 kernel?
>>
>>
> Yes, a 3.9.1 kernel.
> Could it be outdated?
>
> Best regards,
> Eugene.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Hi all,

I also hit this problem with the xen-4.4.0-rc2 release on an x86_64 machine,
but I've tested the same settings with xen-4.3.1 and had no problem.

If I remove "vif    = ['mac=00:16:3E:8E:84:00']", the domU boots up
successfully, but the network is not usable.

kernel settings: 3.12.6
# zcat /proc/config.gz |grep XEN
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_PCI_XEN=y
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_NETXEN_NIC=m
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_XEN_WDT=y
CONFIG_XEN_FBDEV_FRONTEND=y
CONFIG_XEN_BALLOON=y
# CONFIG_XEN_SELFBALLOONING is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=y
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y

# cat test1_stable
kernel = "kernel/kernel-genkernel-x86_64-3.12.6"
ramdisk = "kernel/initramfs-genkernel-x86_64-3.12.6"
extra = "root=/dev/xvda1 ro rootfstype=ext4 console=hvc0"
memory = 2048
vcpus = 2
name = "test1_stable"
vif    = ['mac=00:16:3E:8E:84:00']
disk   = ['file:/mnt/proj/xen/gen1_stable_ext4.img,xvda1,w']

# xl -vvvv create -c test1_stable
Parsing config from test1_stable
libxl: debug: libxl_create.c:1322:do_domain_create: ao 0x62a640:
create: how=(nil) callback=(nil) poller=0x629f90
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=xvda1 spec.backend=unknown
libxl: debug: libxl_device.c:197:disk_try_backend: Disk vdev=xvda1,
backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
vdev=xvda1, using backend qdisk
libxl: debug: libxl_create.c:777:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no
bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x628858: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=6,
free_memkb=2288
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
candidate with 1 nodes, 4 cpus and 2288 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 ro
rootfstype=ext4 console=hvc0", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path
kernel/kernel-genkernel-x86_64-3.12.6
domainbuilder: detail: xc_dom_kernel_file:
filename="kernel/kernel-genkernel-x86_64-3.12.6"
domainbuilder: detail: xc_dom_malloc_filemap    : 8185 kB
domainbuilder: detail: xc_dom_ramdisk_file:
filename="kernel/initramfs-genkernel-x86_64-3.12.6"
domainbuilder: detail: xc_dom_malloc_filemap    : 4624 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps
xen-3.0-x86_64 xen-3.0-x86_32p
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_malloc            : 25202 kB
domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x7f557c -> 0x189ca00
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x1141000
xc: detail: elf_parse_binary: phdr: paddr=0x2200000 memsz=0x11c0f0
xc: detail: elf_parse_binary: phdr: paddr=0x231d000 memsz=0x14c80
xc: detail: elf_parse_binary: phdr: paddr=0x2332000 memsz=0x700000
xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x2a32000
xc: detail: elf_xen_parse_note: GUEST_OS = "linux"
xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6"
xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0"
xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff823321e0
xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
xc: detail: elf_xen_parse_note: FEATURES =
"!writable_page_tables|pae_pgdir_above_4gb"
xc: detail: elf_xen_parse_note: PAE_MODE = "yes"
xc: detail: elf_xen_parse_note: LOADER = "generic"
xc: detail: elf_xen_parse_note: unknown xen elf note (0xd)
xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1
xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0
xc: detail: elf_xen_addr_calc_check: addresses:
xc: detail:     virt_base        = 0xffffffff80000000
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0xffffffff80000000
xc: detail:     virt_kstart      = 0xffffffff81000000
xc: detail:     virt_kend        = 0xffffffff82a32000
xc: detail:     virt_entry       = 0xffffffff823321e0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64:
0xffffffff81000000 -> 0xffffffff82a32000
domainbuilder: detail: xc_dom_mem_init: mem 2048 MB, pages 0x80000
pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x80000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
domainbuilder: detail: xc_dom_malloc            : 4096 kB
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
0xffffffff81000000 -> 0xffffffff82a32000  (pfn 0x1000 + 0x1a32 pages)
domainbuilder: detail: xc_dom_malloc            : 157 kB
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x1000+0x1a32 at 0x7f8e1573a000
xc: detail: elf_load_binary: phdr 0 at 0x7f8e1573a000 -> 0x7f8e1687b000
xc: detail: elf_load_binary: phdr 1 at 0x7f8e1693a000 -> 0x7f8e16a560f0
From xen-devel-bounces@lists.xen.org Fri Jan 17 09:33:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:33:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45nb-00039I-3e; Fri, 17 Jan 2014 09:33:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W45nZ-00039B-BV
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:33:09 +0000
Received: from [85.158.143.35:8775] by server-1.bemta-4.messagelabs.com id
	66/5E-02132-4D8F8D25; Fri, 17 Jan 2014 09:33:08 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389951186!12345214!1
X-Originating-IP: [209.85.214.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 782 invoked from network); 17 Jan 2014 09:33:07 -0000
Received: from mail-ob0-f182.google.com (HELO mail-ob0-f182.google.com)
	(209.85.214.182)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:33:07 -0000
Received: by mail-ob0-f182.google.com with SMTP id wn1so4067627obc.13
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 01:33:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=awtEyvDDUAtwv9FA9UgonNq2Qh1gnis9R6krPywTGGI=;
	b=zfS9Rwmn0lH2FoPh+E0dnp00Ns6cJLPAMLayvf/NHHZish7cZw/kbWhwW+r0XGM+Rs
	GPxDhQIwQPc/H6RyXIn+KzpXtVdDSvr/3aRdVEKRHs7ML+v5VYb9OsaCTF2gtzsnPnO0
	h9fw9gqsEY96F5ZivWsL9ap4Tcy3Qhp7VmkadCrTebQy7fJ6/kcfGSQSLuniBIB0FsWt
	i/J7DUfytffWqupYzWdC/jxXXzFjSlXK9D1DpiPYiU5HXXRxPTGuoveIpipr4RVdzkbF
	GS3hE8S9WPKM86FaBx5ZfsSTeVO7lU4IolAyxs91gkFC0bU8W1tixBIISuxvyP8fkfjT
	so6w==
MIME-Version: 1.0
X-Received: by 10.182.146.104 with SMTP id tb8mr704462obb.54.1389951185666;
	Fri, 17 Jan 2014 01:33:05 -0800 (PST)
Received: by 10.60.68.6 with HTTP; Fri, 17 Jan 2014 01:33:05 -0800 (PST)
In-Reply-To: <52B1B616.4010402@samsung.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
Date: Fri, 17 Jan 2014 17:33:05 +0800
Message-ID: <CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Eugene Fedotov <e.fedotov@samsung.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Dec 18, 2013 at 10:49 PM, Eugene Fedotov <e.fedotov@samsung.com> wrote:
>> On Wed, 2013-12-18 at 17:23 +0400, Eugene Fedotov wrote:
>>
>>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1b37078
>>> wpath=/local/domain/0/backend/vif/12/0/state token=3/0: event
>>> epath=/local/domain/0/backend/vif/12/0/state
>>> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend
>>> /local/domain/0/backend/vif/12/0/state wanted state 2 still waiting state
>>> 1
>>
>> Do you have netback enabled in your dom0 kernel?
>>
>>
> Yes, the 3.9.1 kernel.
> Could it be outdated?
>
> Best regards,
> Eugene.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Hi all,

I'm also hitting this problem with the xen-4.4.0-rc2 release, on an x86_64
machine, but the same settings work fine with xen-4.3.1.

If I remove "vif    = ['mac=00:16:3E:8E:84:00']", the domU boots up
successfully, but the network is then unusable.

kernel version: 3.12.6
# zcat /proc/config.gz |grep XEN
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_PCI_XEN=y
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_NETXEN_NIC=m
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_XEN_WDT=y
CONFIG_XEN_FBDEV_FRONTEND=y
CONFIG_XEN_BALLOON=y
# CONFIG_XEN_SELFBALLOONING is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=y
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
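(Since Ian's first question in this thread was whether netback is enabled in
the dom0 kernel, here is a small illustrative check — the helper name and the
sample string are mine, not from the thread — that scans `zcat /proc/config.gz`
output for the Xen network frontend/backend options:)

```python
# Hypothetical helper: given kernel config text, report which of the Xen
# network options discussed in this thread are not built in ('y') or
# modular ('m').

REQUIRED = ["CONFIG_XEN_NETDEV_BACKEND", "CONFIG_XEN_NETDEV_FRONTEND"]

def missing_xen_net_options(config_text):
    """Return the required options that are not set to 'y' or 'm'."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        # Skip "# CONFIG_FOO is not set" lines and anything without '='.
        if "=" in line and not line.startswith("#"):
            name, value = line.split("=", 1)
            if value in ("y", "m"):
                enabled.add(name)
    return [opt for opt in REQUIRED if opt not in enabled]

# With a config like the one shown above, nothing is missing:
sample = "CONFIG_XEN_NETDEV_FRONTEND=y\nCONFIG_XEN_NETDEV_BACKEND=y\n"
print(missing_xen_net_options(sample))  # []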

# cat test1_stable
kernel = "kernel/kernel-genkernel-x86_64-3.12.6"
ramdisk = "kernel/initramfs-genkernel-x86_64-3.12.6"
extra = "root=/dev/xvda1 ro rootfstype=ext4 console=hvc0"
memory = 2048
vcpus = 2
name = "test1_stable"
vif    = ['mac=00:16:3E:8E:84:00']
disk   = ['file:/mnt/proj/xen/gen1_stable_ext4.img,xvda1,w']

# xl -vvvv create -c test1_stable
Parsing config from test1_stable
libxl: debug: libxl_create.c:1322:do_domain_create: ao 0x62a640:
create: how=(nil) callback=(nil) poller=0x629f90
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=xvda1 spec.backend=unknown
libxl: debug: libxl_device.c:197:disk_try_backend: Disk vdev=xvda1,
backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
vdev=xvda1, using backend qdisk
libxl: debug: libxl_create.c:777:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no
bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x628858: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=6,
free_memkb=2288
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
candidate with 1 nodes, 4 cpus and 2288 KB free selected
domainbuilder: detail: xc_dom_allocate: cmdline="root=/dev/xvda1 ro
rootfstype=ext4 console=hvc0", features="(null)"
libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path
kernel/kernel-genkernel-x86_64-3.12.6
domainbuilder: detail: xc_dom_kernel_file:
filename="kernel/kernel-genkernel-x86_64-3.12.6"
domainbuilder: detail: xc_dom_malloc_filemap    : 8185 kB
domainbuilder: detail: xc_dom_ramdisk_file:
filename="kernel/initramfs-genkernel-x86_64-3.12.6"
domainbuilder: detail: xc_dom_malloc_filemap    : 4624 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps
xen-3.0-x86_64 xen-3.0-x86_32p
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_malloc            : 25202 kB
domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x7f557c -> 0x189ca00
domainbuilder: detail: loader probe OK
xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0x1141000
xc: detail: elf_parse_binary: phdr: paddr=0x2200000 memsz=0x11c0f0
xc: detail: elf_parse_binary: phdr: paddr=0x231d000 memsz=0x14c80
xc: detail: elf_parse_binary: phdr: paddr=0x2332000 memsz=0x700000
xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x2a32000
xc: detail: elf_xen_parse_note: GUEST_OS = "linux"
xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6"
xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0"
xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff823321e0
xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
xc: detail: elf_xen_parse_note: FEATURES =
"!writable_page_tables|pae_pgdir_above_4gb"
xc: detail: elf_xen_parse_note: PAE_MODE = "yes"
xc: detail: elf_xen_parse_note: LOADER = "generic"
xc: detail: elf_xen_parse_note: unknown xen elf note (0xd)
xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1
xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0
xc: detail: elf_xen_addr_calc_check: addresses:
xc: detail:     virt_base        = 0xffffffff80000000
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0xffffffff80000000
xc: detail:     virt_kstart      = 0xffffffff81000000
xc: detail:     virt_kend        = 0xffffffff82a32000
xc: detail:     virt_entry       = 0xffffffff823321e0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64:
0xffffffff81000000 -> 0xffffffff82a32000
domainbuilder: detail: xc_dom_mem_init: mem 2048 MB, pages 0x80000
pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x80000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
domainbuilder: detail: xc_dom_malloc            : 4096 kB
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
0xffffffff81000000 -> 0xffffffff82a32000  (pfn 0x1000 + 0x1a32 pages)
domainbuilder: detail: xc_dom_malloc            : 157 kB
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x1000+0x1a32 at 0x7f8e1573a000
xc: detail: elf_load_binary: phdr 0 at 0x7f8e1573a000 -> 0x7f8e1687b000
xc: detail: elf_load_binary: phdr 1 at 0x7f8e1693a000 -> 0x7f8e16a560f0
xc: detail: elf_load_binary: phdr 2 at 0x7f8e16a57000 -> 0x7f8e16a6bc80
xc: detail: elf_load_binary: phdr 3 at 0x7f8e16a6c000 -> 0x7f8e16bd6000
domainbuilder: detail: xc_dom_alloc_segment:   ramdisk      :
0xffffffff82a32000 -> 0xffffffff82eb7000  (pfn 0x2a32 + 0x485 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x2a32+0x485 at 0x7f8e152b5000
domainbuilder: detail: xc_dom_alloc_segment:   phys2mach    :
0xffffffff82eb7000 -> 0xffffffff832b7000  (pfn 0x2eb7 + 0x400 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x2eb7+0x400 at 0x7f8e14eb5000
domainbuilder: detail: xc_dom_alloc_page   :   start info   :
0xffffffff832b7000 (pfn 0x32b7)
domainbuilder: detail: xc_dom_alloc_page   :   xenstore     :
0xffffffff832b8000 (pfn 0x32b8)
domainbuilder: detail: xc_dom_alloc_page   :   console      :
0xffffffff832b9000 (pfn 0x32b9)
domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48:
0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39:
0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30:
0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21:
0xffffffff80000000 -> 0xffffffff833fffff, 26 table(s)
domainbuilder: detail: xc_dom_alloc_segment:   page tables  :
0xffffffff832ba000 -> 0xffffffff832d7000  (pfn 0x32ba + 0x1d pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x32ba+0x1d at 0x7f8e1bf5c000
domainbuilder: detail: xc_dom_alloc_page   :   boot stack   :
0xffffffff832d7000 (pfn 0x32d7)
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0xffffffff832d8000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0xffffffff83400000
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: arch_setup_bootearly: doing nothing
domainbuilder: detail: xc_dom_compat_check: supported guest type:
xen-3.0-x86_64 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type:
xen-3.0-x86_32p
domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x80000
domainbuilder: detail: clear_page: pfn 0x32b9, mfn 0x141305
domainbuilder: detail: clear_page: pfn 0x32b8, mfn 0x1bcc66
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn
0x32b7+0x1 at 0x7f8e1bfc5000
domainbuilder: detail: start_info_x86_64: called
domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 pfn=0x1001
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 29510 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 12809 kB
domainbuilder: detail:       domU mmap          : 34 MB
domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xba9eb
domainbuilder: detail: shared_info_x86_64: called
domainbuilder: detail: vcpu_x86_64: called
domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x32ba mfn 0x141304
domainbuilder: detail: launch_vm: called, ctxt=0x7f8e1bfc6004
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=xvda1 spec.backend=qdisk
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x629a80: deregister unregistered
libxl: debug: libxl_dm.c:1303:libxl__spawn_local_dm: Spawning
device-model /usr/lib/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
/usr/lib/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   5
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-5,server,nowait
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -xen-attach
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   test1_stable
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -nographic
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   xenpv
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   2049
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x628a90 wpath=/local/domain/0/device-model/5/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1336:do_domain_create: ao 0x62a640:
inprogress: poller=0x629f90, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x628a90
wpath=/local/domain/0/device-model/5/state token=3/0: event
epath=/local/domain/0/device-model/5/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x628a90
wpath=/local/domain/0/device-model/5/state token=3/0: event
epath=/local/domain/0/device-model/5/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
w=0x628a90 wpath=/local/domain/0/device-model/5/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x628a90: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-5
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1:
register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8
wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event
epath=/local/domain/0/backend/vif/5/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback: backend
/local/domain/0/backend/vif/5/0/state wanted state 2 still waiting
state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8
wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event
epath=/local/domain/0/backend/vif/5/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend
/local/domain/0/backend/vif/5/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1:
deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x62acd8: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x62ad60: deregister unregistered
libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to
add nic devices
libxl: debug: libxl_dm.c:1478:kill_device_model: Device Model signaled
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable
to get my domid
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x62b800: deregister unregistered
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable
to get my domid
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x62bb10: deregister unregistered
libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy
failed for 5
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x62a640:
complete, rc=-3
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x62a640: destroy
xc: debug: hypercall buffer: total allocations:1850 total releases:1850
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:1839 misses:4 toobig:7
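(A decoding aid for the numeric codes in the log above: the "wanted state 2
still waiting state 1" messages use XenbusState numbering, and the final
"complete, rc=-3" is libxl's ERROR_FAIL. The values below are taken from
Xen's public io/xenbus.h and the libxl.h error enum of that era; this is an
illustrative sketch, not code from the thread.)

```python
# XenbusState values as defined in xen/include/public/io/xenbus.h.
XENBUS_STATES = {
    0: "Unknown", 1: "Initialising", 2: "InitWait", 3: "Initialised",
    4: "Connected", 5: "Closing", 6: "Closed",
    7: "Reconfiguring", 8: "Reconfigured",
}

# libxl error codes from libxl.h (negative values).
LIBXL_ERRORS = {
    -1: "ERROR_NONSPECIFIC", -2: "ERROR_VERSION", -3: "ERROR_FAIL",
    -4: "ERROR_NI", -5: "ERROR_NOMEM", -6: "ERROR_INVAL",
}

def describe_wait(wanted, current):
    """Render a devstate_watch_callback message in human-readable form."""
    return "backend wants %s, currently %s" % (
        XENBUS_STATES[wanted], XENBUS_STATES[current])

# "wanted state 2 still waiting state 1" from the vif watch:
print(describe_wait(2, 1))  # backend wants InitWait, currently Initialising
# "complete, rc=-3" at the end of the log:
print(LIBXL_ERRORS[-3])     # ERROR_FAIL
```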

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:40:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45us-0003nG-RV; Fri, 17 Jan 2014 09:40:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W45uq-0003mr-HE
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 09:40:40 +0000
Received: from [85.158.139.211:8530] by server-2.bemta-5.messagelabs.com id
	DB/AC-29392-79AF8D25; Fri, 17 Jan 2014 09:40:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389951636!27934!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24650 invoked from network); 17 Jan 2014 09:40:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:40:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93796837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 09:40:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 04:40:35 -0500
Message-ID: <1389951634.6697.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Fri, 17 Jan 2014 09:40:34 +0000
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
> As Andrew said, nested virt is still in the experimental stage, because
> there are still lots of scenarios not covered in my testing. So it may not
> be accurate to say it is well supported. But I hope people know that
> nested virt is ready to use now, and I encourage them to try it and report
> bugs to us to push nested virt forward.

Perhaps we could say it is "tech preview" rather than "experimental"?

What would really help is some clear guidance (i.e. docs, wiki page, a
blog post or more than one of these) on:
      * what configurations (L1 hyp/L2 guest) are supposed to work, are
        tested and are considered Supported by you guys.
      * what scenarios are expected to work, but have not been tested
        (bug/success reports welcome etc)
      * what scenarios are expected not to work but which we would like
        to support at some point or are actively working on adding
        support for
      * what scenarios are not expected to work and no one is working on

I think documenting the things toward the top end of that list would be
most valuable; the last one could be quite a long list ("everything
else") and is maybe a bit silly to try to enumerate.

Some docs and a blog post to raise awareness of the feature would likely
help push things forwards; do you think you could put something
together? For the docs, either send a docs patch or write something on
the wiki (email me your wiki userid if you need to be granted write
access); for a blog post, contact the publicity@ list.

You could also consider writing some instructions for test days so that
the community can do some testing; it's also a good way to gain visibility
for a feature. It's probably too late to get that done for the one on
Monday[0], but you should coordinate with Russ & Dario about a future one
(a test day dedicated to nested virt is even a possibility).

Ian.

[0] http://wiki.xen.org/wiki/Xen_Test_Days
    http://www.xenproject.org/about/events/viewevent/90-xen-test-day-for-4-4-rc2.html
    http://wiki.xen.org/wiki/Xen_4.4_RC2_test_instructions


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
> As Andrew said, nested virt is still in an experimental stage, because
> there are still lots of scenarios not covered by my testing. So it may
> not be accurate to say it is well supported. But I hope people know
> that nested virt is ready to use now, and I encourage them to try it
> and report bugs to us to push nested virt forward.

Perhaps we could say it is "tech preview" rather than "experimental"?

What would really help is some clear guidance (i.e. docs, wiki page, a
blog post or more than one of these) on:
      * what configurations (L1 hyp/L2 guest) are supposed to work, are
        tested and are considered Supported by you guys.
      * what scenarios are expected to work, but have not been tested
        (bug/success reports welcome etc)
      * what scenarios are expected not to work but which we would like
        to support at some point or are actively working on adding
        support for
      * what scenarios are not expected to work and no one is working on

I think documenting the things toward the top end of that list would be
most valuable, the last one could be quite a long list ("everything
else") and is maybe a bit silly to try and enumerate.

Some docs and a blog post to raise awareness of the feature would likely
help push things forwards; do you think you could put something
together? For the docs, either send a docs patch or write something on
the wiki (email me your wiki userid if you need to be granted write
access); for a blog post, contact the publicity@ list.

You could also consider writing some instructions for test days so that
the community can do some testing; it's also a good way to gain visibility
for a feature. It's probably too late to get that done for the one on
Monday[0] but you should coordinate with Russ & Dario about a future one
(a test day dedicated to nested virt is even a possibility).

Ian.

[0] http://wiki.xen.org/wiki/Xen_Test_Days
    http://www.xenproject.org/about/events/viewevent/90-xen-test-day-for-4-4-rc2.html
    http://wiki.xen.org/wiki/Xen_4.4_RC2_test_instructions


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:41:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W45vL-0003uO-LS; Fri, 17 Jan 2014 09:41:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W45vF-0003tc-C7
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:41:09 +0000
Received: from [193.109.254.147:27863] by server-15.bemta-14.messagelabs.com
	id 9A/19-22186-0BAF8D25; Fri, 17 Jan 2014 09:41:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389951664!11476268!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16262 invoked from network); 17 Jan 2014 09:41:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 09:41:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 09:41:03 +0000
Message-Id: <52D908BF0200007800114782@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 09:41:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julian Stecklina" <jsteckli@os.inf.tu-dresden.de>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
	<52D8030E.1050501@os.inf.tu-dresden.de>
In-Reply-To: <52D8030E.1050501@os.inf.tu-dresden.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 16.01.14 at 17:04, Julian Stecklina <jsteckli@os.inf.tu-dresden.de> wrote:
> On 01/16/2014 04:04 PM, Jan Beulich wrote:
>> I don't think so - this would only be an issue if the conditions used
>> | instead of ||. || implies a sequence point between evaluating the
>> left and right sides, and the standard says: "The presence of a
>> sequence point between the evaluation of expressions A and B
>> implies that every value computation and side effect associated
>> with A is sequenced before every value computation and side
>> effect associated with B."
> 
> This only applies to single-threaded code. Multithreaded code must be
> data-race free for that to be true. See
> 
> https://lwn.net/Articles/508991/ 
> 
>> And even if there was a problem (i.e. my interpretation of the
>> above being incorrect), I don't think you'd need ACCESS_ONCE()
>> here: The same local variable can't have two different values in
>> two different use sites when there was no intermediate
>> assignment to it.
> 
> Same comment as above.

One half of this doesn't apply here, due to the explicit barriers
that are there. The half about converting local variable accesses
back to memory reads (i.e. eliding the local variable), however,
is only a theoretical issue afaict: If a compiler really did this, I
think there'd be far more places where this would hurt.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:46:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W460M-0004JR-N0; Fri, 17 Jan 2014 09:46:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W460K-0004JK-Ot
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:46:20 +0000
Received: from [85.158.139.211:59545] by server-14.bemta-5.messagelabs.com id
	EB/EF-24200-CEBF8D25; Fri, 17 Jan 2014 09:46:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1389951976!7624273!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14661 invoked from network); 17 Jan 2014 09:46:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 09:46:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93798060"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 09:46:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 04:46:14 -0500
Message-ID: <1389951973.6697.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
Date: Fri, 17 Jan 2014 09:46:13 +0000
In-Reply-To: <CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
> I've also hit this problem, using the xen-4.4.0-rc2 release, on an x86_64 machine,
> but I've tested xen-4.3.1 with the same settings and had no problem.

Do you have a bridge configured? What is the output of "brctl show" and
"ifconfig -a" while the guest is running?

> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices

It seems to have waited for the backend to hit state 2, which happens,
and then it has failed for some reason, which I can't see here. It might
be worth adding some debug to the vif hotplug script to see if it is
running at all.

Are you using the vif scripts from Xen 4.4? Perhaps you have some
stale 4.3.1 scripts sitting around?

Ian.
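The debugging suggested above can be as simple as turning on shell
tracing at the top of the hotplug script (commonly
/etc/xen/scripts/vif-bridge; that path and the log location below are
assumptions, adjust for your install):

```shell
# Hedged sketch: paste near the top of the vif hotplug script so that every
# command it runs (or the fact that it never runs at all) lands in a log.
LOG=${VIF_DEBUG_LOG:-/tmp/vif-hotplug-debug.log}
exec 2>>"$LOG"                 # send stderr, including the trace, to the log
set -x                         # echo each command with arguments expanded
echo "vif hotplug invoked: $0 $* XENBUS_PATH=$XENBUS_PATH" >&2
```

If the log stays empty while the guest is created, the script is not
being invoked at all; if it fills up, the last traced command shows
where it stops.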


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 09:50:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W464S-0004y9-EJ; Fri, 17 Jan 2014 09:50:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W464Q-0004xy-QU
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:50:35 +0000
Received: from [85.158.139.211:37891] by server-14.bemta-5.messagelabs.com id
	84/18-24200-AECF8D25; Fri, 17 Jan 2014 09:50:34 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389952233!10329288!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18446 invoked from network); 17 Jan 2014 09:50:33 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 09:50:33 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328]
	by os.inf.tu-dresden.de with esmtpsa
	(TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.82)
	id 1W464O-00074H-QQ; Fri, 17 Jan 2014 10:50:32 +0100
Message-ID: <52D8FCE4.3090105@os.inf.tu-dresden.de>
Date: Fri, 17 Jan 2014 10:50:28 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
	<52D8030E.1050501@os.inf.tu-dresden.de>
	<52D908BF0200007800114782@nat28.tlf.novell.com>
In-Reply-To: <52D908BF0200007800114782@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7959369230906595684=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============7959369230906595684==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="d54HPT8faa1SGSaavDR6CoSlovG5jOIbw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 01/17/2014 10:41 AM, Jan Beulich wrote:
> The half about converting local variable accesses
> back to memory reads (i.e. eliding the local variable), however,
> is only a theoretical issue afaict: If a compiler really did this, I
> think there'd be far more places where this would hurt.

It happens rarely, but it does happen. Not fixing those issues is
inviting trouble with new compiler generations. And these issues are
terribly hard to debug.

Julian


--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlLY/OYACgkQ2EtjUdW3H9mjSACfQFnCXIHmAzIiE4LTVSFwzsSR
eTkAnR1dXaTxteaQsslSDd1pVuLmt/Jc
=u+wy
-----END PGP SIGNATURE-----

--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw--


--===============7959369230906595684==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7959369230906595684==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 09:50:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 09:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W464S-0004y9-EJ; Fri, 17 Jan 2014 09:50:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W464Q-0004xy-QU
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:50:35 +0000
Received: from [85.158.139.211:37891] by server-14.bemta-5.messagelabs.com id
	84/18-24200-AECF8D25; Fri, 17 Jan 2014 09:50:34 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389952233!10329288!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18446 invoked from network); 17 Jan 2014 09:50:33 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 09:50:33 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328]
	by os.inf.tu-dresden.de with esmtpsa
	(TLSv1:DHE-RSA-CAMELLIA256-SHA:256) (Exim 4.82)
	id 1W464O-00074H-QQ; Fri, 17 Jan 2014 10:50:32 +0100
Message-ID: <52D8FCE4.3090105@os.inf.tu-dresden.de>
Date: Fri, 17 Jan 2014 10:50:28 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
	<52D8030E.1050501@os.inf.tu-dresden.de>
	<52D908BF0200007800114782@nat28.tlf.novell.com>
In-Reply-To: <52D908BF0200007800114782@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7959369230906595684=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============7959369230906595684==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="d54HPT8faa1SGSaavDR6CoSlovG5jOIbw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 01/17/2014 10:41 AM, Jan Beulich wrote:
> The half about converting local variable accesses
> back to memory reads (i.e. eliding the local variable), however,
> is only a theoretical issue afaict: If a compiler really did this, I
> think there'd be far more places where this would hurt.

It happens rarely, but it does happen. Not fixing those issues is
inviting trouble with new compiler generations. And these issues are
terribly hard to debug.
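
The hazard described here can be sketched in isolation. Below is a minimal, hedged model of a pvclock-style seqlock read (simplified field layout, not the real `pvclock_vcpu_time_info` struct): the version values must stay in locals/registers, because if the compiler re-read them from memory, a concurrent hypervisor update could slip through the check.

```c
#include <stdint.h>

/* Simplified stand-in for the shared pvclock page; the real
 * structure has more fields. "version" is odd while the
 * hypervisor is mid-update. */
struct pvclock_info {
    uint32_t version;
    uint64_t tsc;
};

/* Force a single, non-elidable load, akin to Linux's READ_ONCE(). */
#define READ_ONCE_U32(x) (*(volatile uint32_t *)&(x))

static uint64_t read_tsc(struct pvclock_info *info)
{
    uint32_t v0, v1;
    uint64_t tsc;

    do {
        v0  = READ_ONCE_U32(info->version);
        tsc = info->tsc;
        v1  = READ_ONCE_U32(info->version);
        /* If the compiler elided v0/v1 back into fresh memory
         * reads here, the consistency check below could pass
         * even though the data changed in between. */
    } while (v0 != v1 || (v0 & 1));

    return tsc;
}
```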

Julian


--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlLY/OYACgkQ2EtjUdW3H9mjSACfQFnCXIHmAzIiE4LTVSFwzsSR
eTkAnR1dXaTxteaQsslSDd1pVuLmt/Jc
=u+wy
-----END PGP SIGNATURE-----

--d54HPT8faa1SGSaavDR6CoSlovG5jOIbw--


--===============7959369230906595684==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7959369230906595684==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 10:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W46SI-0006Gh-OR; Fri, 17 Jan 2014 10:15:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W46SH-0006E1-AR
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:15:13 +0000
Received: from [85.158.143.35:14082] by server-1.bemta-4.messagelabs.com id
	C2/3E-02132-0B209D25; Fri, 17 Jan 2014 10:15:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1389953711!12356039!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31757 invoked from network); 17 Jan 2014 10:15:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 10:15:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 10:15:11 +0000
Message-Id: <52D910BE02000078001147DC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 10:15:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Zhang" <yang.z.zhang@intel.com>
References: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: andrew.cooper3@citrix.com, chegger@amazon.de, eddie.dong@intel.com,
	jun.nakajima@intel.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit
 during IO emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.01.14 at 07:39, Yang Zhang <yang.z.zhang@intel.com> wrote:
> Sometimes L0 needs to decode an L2 instruction to handle an IO access
> directly, and it may get X86EMUL_RETRY while handling that IO request.
> If a virtual vmexit is pending at the same time (for example, an
> interrupt to inject into L1), the hypervisor will switch the VCPU
> context from L2 to L1. We are then in L1's context, but the earlier
> X86EMUL_RETRY means the hypervisor will retry the IO request later;
> unfortunately, that retry now happens in L1's context, which causes
> the problem.
> The fix: while an IO request is pending, no virtual vmexit/vmentry
> is allowed.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++

Didn't we agree earlier on to do this in common code?

Jan
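
The invariant the patch enforces can be modeled as a single predicate. This is a hedged sketch with illustrative names (not Xen's actual code or placement, which is the open question above):

```c
#include <stdbool.h>

/* Hypothetical model of the rule from the patch description:
 * while an I/O request from L2 instruction emulation is still
 * pending (the emulator returned X86EMUL_RETRY), no virtual
 * vmexit/vmentry may switch the vCPU between L1 and L2, or the
 * eventual retry would run in the wrong guest context. */
enum ioreq_state { IOREQ_IDLE, IOREQ_PENDING };

static bool nested_switch_allowed(enum ioreq_state io)
{
    return io == IOREQ_IDLE;
}
```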


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 10:26:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:26:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W46ci-00079C-3W; Fri, 17 Jan 2014 10:26:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W46ch-000797-44
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:25:59 +0000
Received: from [193.109.254.147:27653] by server-4.bemta-14.messagelabs.com id
	DF/BE-03916-63509D25; Fri, 17 Jan 2014 10:25:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389954356!11499093!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21410 invoked from network); 17 Jan 2014 10:25:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 10:25:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91706625"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 10:25:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 05:25:55 -0500
Message-ID: <1389954353.6697.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 17 Jan 2014 10:25:53 +0000
In-Reply-To: <52D58895.6030900@linaro.org>
References: <1389720774-27931-1-git-send-email-ian.campbell@citrix.com>
	<52D58895.6030900@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <apatel@apm.com>, stefano.stabellini@eu.citrix.com, tim@xen.org,
	Pranavkumar Sawargaonkar <psawargaonkar@apm.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct guest PSCI handling on
 64-bit hypervisor.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 18:57 +0000, Julien Grall wrote:
> On 01/14/2014 05:32 PM, Ian Campbell wrote:
> > Using ->rN truncates the 64-bit registers to 32-bits, which on X-gene chops
> > off the top bit of the entry address for PSCI_UP.
> > 
> > Follow the pattern established in do_trap_hypercall.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Acked-by: Julien Grall <julien.grall@linaro.org>

Applied thanks.

> > ---
> > 
> > Release argument: Only supporting single vcpu guests on arm64 would be
> > unfortunate. There is no risk to arm32 since the ifdef ensures the code
> > remains the same.
> > ---
> >  xen/arch/arm/traps.c |   17 ++++++++++++++---
> >  1 file changed, 14 insertions(+), 3 deletions(-)
> > 
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index fdf9440..62c9df2 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1065,23 +1065,34 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
> >  }
> >  #endif
> >  
> > +#ifdef CONFIG_ARM_64
> > +#define PSCI_OP_REG(r) (r)->x0
> > +#define PSCI_RESULT_REG(r) (r)->x0
> > +#define PSCI_ARGS(r) (r)->x1, (r)->x2
> > +#else
> > +#define PSCI_OP_REG(r) (r)->r0
> > +#define PSCI_RESULT_REG(r) (r)->r0
> > +#define PSCI_ARGS(r) (r)->r1, (r)->r2
> > +#endif
> > +
> >  static void do_trap_psci(struct cpu_user_regs *regs)
> >  {
> >      arm_psci_fn_t psci_call = NULL;
> >  
> > -    if ( regs->r0 >= ARRAY_SIZE(arm_psci_table) )
> > +    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
> >      {
> >          domain_crash_synchronous();
> >          return;
> >      }
> >  
> > -    psci_call = arm_psci_table[regs->r0].fn;
> > +    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
> >      if ( psci_call == NULL )
> >      {
> >          domain_crash_synchronous();
> >          return;
> >      }
> > -    regs->r0 = psci_call(regs->r1, regs->r2);
> > +
> > +    PSCI_RESULT_REG(regs) = psci_call(PSCI_ARGS(regs));
> >  }
> >  
> >  #ifdef CONFIG_ARM_64
> > 
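
The truncation the commit message describes can be demonstrated in isolation (the entry address below is hypothetical; any value above 4GiB shows the effect):

```c
#include <stdint.h>

/* Passing a 64-bit PSCI entry address through a 32-bit register
 * field (the old regs->rN path) silently drops the high bits;
 * reading the full 64-bit field (regs->xN) does not. */
static uint32_t via_32bit_reg(uint64_t entry)
{
    return (uint32_t)entry;   /* what reading ->r1 would yield */
}

static uint64_t via_64bit_reg(uint64_t entry)
{
    return entry;             /* what reading ->x1 yields */
}
```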
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 10:37:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W46nS-0007mZ-Td; Fri, 17 Jan 2014 10:37:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W46nR-0007mU-4l
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:37:05 +0000
Received: from [193.109.254.147:15351] by server-11.bemta-14.messagelabs.com
	id F7/87-20576-0D709D25; Fri, 17 Jan 2014 10:37:04 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389955022!11513140!1
X-Originating-IP: [209.85.216.170]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9015 invoked from network); 17 Jan 2014 10:37:03 -0000
Received: from mail-qc0-f170.google.com (HELO mail-qc0-f170.google.com)
	(209.85.216.170)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 10:37:03 -0000
Received: by mail-qc0-f170.google.com with SMTP id e9so3467669qcy.29
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 02:37:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=HGUMG5+r67dC0MHv1MIIhmfgajltJZ62m89VA+unN+c=;
	b=sBG97Eh9ytzLXvyJnNJsl0QpZls8Y6IGkZ8ma1R1FpnCb7Hq3fZkvsMJWFhzkAmNjo
	UjtUIZPcAk08dkEgZct5kQ7CtpgZaZH/AlVqAWL7RK77YdrkAHfWn9eatmjCOWULHZ1K
	TZJiAZfi7MVIAZHrCMg6bLeR0zluMQJnY4wcsF4AKv8H2q2vGaxz1ZB9lcEilcgD/utK
	HYucdrcEngUHcv4S1d7cccXmK2TYbrpcBGrbZS+DcpSztuX6K+NbMOqzQlMSelT/LSmE
	UpGUZdRYl8e+nCkfiiTCDvr73vxrLsutqzjXqRirccES7hG3S/5Go6Hzy6RmDfU4Zn5b
	OQFA==
MIME-Version: 1.0
X-Received: by 10.224.97.67 with SMTP id k3mr1712427qan.17.1389955022302; Fri,
	17 Jan 2014 02:37:02 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 02:37:02 -0800 (PST)
In-Reply-To: <1389951973.6697.47.camel@kazak.uk.xensource.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
Date: Fri, 17 Jan 2014 18:37:02 +0800
Message-ID: <CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 5:46 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
>> I've also hit this problem with the xen-4.4.0-rc2 release on an x86_64 machine,
>> but I've tested xen-4.3.1 with the same settings and have no problem.
>
> Do you have a bridge configured? What is the output of "brctl show" and
> "ifconfig -a" while the guest is running?
>
>> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
>> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
>> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
>> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
>> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices
>
> It seems to have waited for the backend to hit state 2, which happens,
> and then it has failed for some reason, which I can't see here. It might
> be worth adding some debug to the vif hotplug script to see if it is
> running at all.
>
> Are you using the vif scripts from Xen 4.4? Perhaps you have got some
> stale 4.3.1 scripts sitting around?
>
> Ian.
>
I think there is no problem with the bridge or the scripts (exactly the same as 4.4).
By the way, if I install xen-4.3.1, then everything works fine.


ofire xen-4.4.0-rc2 # ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 379790  bytes 215154226 (205.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 379790  bytes 215154226 (205.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

net0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::221:9bff:fe74:2cd1  prefixlen 64  scopeid 0x20<link>
        ether 00:21:9b:74:2c:d1  txqueuelen 1000  (Ethernet)
        RX packets 1017893  bytes 174572838 (166.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 577828  bytes 627464455 (598.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 21  memory 0xf7ae0000-f7b00000

xenbr0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet 192.168.90.122  netmask 255.255.254.0  broadcast 192.168.90.255
        inet6 fe80::221:9bff:fe74:2cd1  prefixlen 64  scopeid 0x20<link>
        ether 00:21:9b:74:2c:d1  txqueuelen 0  (Ethernet)
        RX packets 1015312  bytes 156011479 (148.7 MiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 216345  bytes 600980890 (573.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ofire xen-4.4.0-rc2 # br
brctl       break       bridge      brushtopbm
ofire xen-4.4.0-rc2 # brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.00219b742cd1       no              net0


ofire xen-4.4.0-rc2 # find . -name "vif-*" -and -type f | sort -u | xargs md5sum
f144e27878d656226202a2ec3f797167  ./tools/hotplug/Linux/vif-bridge
6dbe122d5bb8cc82cea01c3928f6361d  ./tools/hotplug/Linux/vif-common.sh
02a2614f1451b4207c82305a93bfa060  ./tools/hotplug/Linux/vif-nat
478061920ef927f570d6f8970be34926  ./tools/hotplug/Linux/vif-openvswitch
b8417ff8aa76ea6aef8fdd57fffe7677  ./tools/hotplug/Linux/vif-route
96d12dbf85c3823b2a644e58bf3fbc73  ./tools/hotplug/Linux/vif-setup
295971747eb386c8cc42b9f6907a6e8e  ./tools/hotplug/NetBSD/vif-bridge
b9ea5011dc0f72a69c8fa572de997954  ./tools/hotplug/NetBSD/vif-ip
ofire xen-4.4.0-rc2 # ls /etc/xen/scripts/vif-* |sort -u |xargs md5sum
f144e27878d656226202a2ec3f797167  /etc/xen/scripts/vif-bridge
6dbe122d5bb8cc82cea01c3928f6361d  /etc/xen/scripts/vif-common.sh
02a2614f1451b4207c82305a93bfa060  /etc/xen/scripts/vif-nat
478061920ef927f570d6f8970be34926  /etc/xen/scripts/vif-openvswitch
b8417ff8aa76ea6aef8fdd57fffe7677  /etc/xen/scripts/vif-route
96d12dbf85c3823b2a644e58bf3fbc73  /etc/xen/scripts/vif-setup
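
Following up on the suggestion to add debug to the vif hotplug script: one option is a small trace function near the top of the script, so you can tell whether it runs at all. This is a hedged sketch; the log path and message format are arbitrary choices, not Xen conventions.

```shell
#!/bin/sh
# Hypothetical tracing snippet for a Xen hotplug script such as
# /etc/xen/scripts/vif-bridge: logs every invocation with its
# arguments and the $vif environment variable the toolstack sets.
LOG=${LOG:-/tmp/vif-debug.log}   # arbitrary log location

log_invocation() {
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) vif-hotplug args=[$*] vif=${vif:-unset}" >> "$LOG"
}

# Simulated invocation for demonstration; in the real script you
# would call log_invocation "$@" before the existing logic.
vif=vif5.0
log_invocation online
```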

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 10:37:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W46nS-0007mZ-Td; Fri, 17 Jan 2014 10:37:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W46nR-0007mU-4l
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:37:05 +0000
Received: from [193.109.254.147:15351] by server-11.bemta-14.messagelabs.com
	id F7/87-20576-0D709D25; Fri, 17 Jan 2014 10:37:04 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389955022!11513140!1
X-Originating-IP: [209.85.216.170]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9015 invoked from network); 17 Jan 2014 10:37:03 -0000
Received: from mail-qc0-f170.google.com (HELO mail-qc0-f170.google.com)
	(209.85.216.170)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 10:37:03 -0000
Received: by mail-qc0-f170.google.com with SMTP id e9so3467669qcy.29
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 02:37:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=HGUMG5+r67dC0MHv1MIIhmfgajltJZ62m89VA+unN+c=;
	b=sBG97Eh9ytzLXvyJnNJsl0QpZls8Y6IGkZ8ma1R1FpnCb7Hq3fZkvsMJWFhzkAmNjo
	UjtUIZPcAk08dkEgZct5kQ7CtpgZaZH/AlVqAWL7RK77YdrkAHfWn9eatmjCOWULHZ1K
	TZJiAZfi7MVIAZHrCMg6bLeR0zluMQJnY4wcsF4AKv8H2q2vGaxz1ZB9lcEilcgD/utK
	HYucdrcEngUHcv4S1d7cccXmK2TYbrpcBGrbZS+DcpSztuX6K+NbMOqzQlMSelT/LSmE
	UpGUZdRYl8e+nCkfiiTCDvr73vxrLsutqzjXqRirccES7hG3S/5Go6Hzy6RmDfU4Zn5b
	OQFA==
MIME-Version: 1.0
X-Received: by 10.224.97.67 with SMTP id k3mr1712427qan.17.1389955022302; Fri,
	17 Jan 2014 02:37:02 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 02:37:02 -0800 (PST)
In-Reply-To: <1389951973.6697.47.camel@kazak.uk.xensource.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
Date: Fri, 17 Jan 2014 18:37:02 +0800
Message-ID: <CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 5:46 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
>> I'm also hit this problem, using xen-4.4.0-rc2 release,  on x86_64 machine.
>> but I've tested with xen-4.3.1 the same settings, and have no problem
>
> Do you have a bridge configured? What is the output of "brctl show" and
> "ifconfig -a" while the guest is running?
>
>> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
>> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
>> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
>> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
>> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices
>
> It seems to have waited for the backend to hit state 2, which happens,
> and then it has failed for some reason, which I can't see here. It might
> be worth adding some debug to the vif hotplug script to see if it is
> running at all.
>
> Are you using the vif script's from Xen 4.4? Perhaps you have got some
> stale 4.3.1 scripts sitting around?
>
> Ian.
>
I don't think there is a problem with the bridge or the scripts (they are exactly the same as in 4.4).
By the way, if I install xen-4.3.1, everything works fine.


ofire xen-4.4.0-rc2 # ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 379790  bytes 215154226 (205.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 379790  bytes 215154226 (205.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

net0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::221:9bff:fe74:2cd1  prefixlen 64  scopeid 0x20<link>
        ether 00:21:9b:74:2c:d1  txqueuelen 1000  (Ethernet)
        RX packets 1017893  bytes 174572838 (166.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 577828  bytes 627464455 (598.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 21  memory 0xf7ae0000-f7b00000

xenbr0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet 192.168.90.122  netmask 255.255.254.0  broadcast 192.168.90.255
        inet6 fe80::221:9bff:fe74:2cd1  prefixlen 64  scopeid 0x20<link>
        ether 00:21:9b:74:2c:d1  txqueuelen 0  (Ethernet)
        RX packets 1015312  bytes 156011479 (148.7 MiB)
        RX errors 0  dropped 10  overruns 0  frame 0
        TX packets 216345  bytes 600980890 (573.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ofire xen-4.4.0-rc2 # brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.00219b742cd1       no              net0


ofire xen-4.4.0-rc2 # find . -name "vif-*" -and -type f | sort -u | xargs md5sum
f144e27878d656226202a2ec3f797167  ./tools/hotplug/Linux/vif-bridge
6dbe122d5bb8cc82cea01c3928f6361d  ./tools/hotplug/Linux/vif-common.sh
02a2614f1451b4207c82305a93bfa060  ./tools/hotplug/Linux/vif-nat
478061920ef927f570d6f8970be34926  ./tools/hotplug/Linux/vif-openvswitch
b8417ff8aa76ea6aef8fdd57fffe7677  ./tools/hotplug/Linux/vif-route
96d12dbf85c3823b2a644e58bf3fbc73  ./tools/hotplug/Linux/vif-setup
295971747eb386c8cc42b9f6907a6e8e  ./tools/hotplug/NetBSD/vif-bridge
b9ea5011dc0f72a69c8fa572de997954  ./tools/hotplug/NetBSD/vif-ip
ofire xen-4.4.0-rc2 # ls /etc/xen/scripts/vif-* |sort -u |xargs md5sum
f144e27878d656226202a2ec3f797167  /etc/xen/scripts/vif-bridge
6dbe122d5bb8cc82cea01c3928f6361d  /etc/xen/scripts/vif-common.sh
02a2614f1451b4207c82305a93bfa060  /etc/xen/scripts/vif-nat
478061920ef927f570d6f8970be34926  /etc/xen/scripts/vif-openvswitch
b8417ff8aa76ea6aef8fdd57fffe7677  /etc/xen/scripts/vif-route
96d12dbf85c3823b2a644e58bf3fbc73  /etc/xen/scripts/vif-setup

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 10:44:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:44:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W46un-0008Rs-Jz; Fri, 17 Jan 2014 10:44:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W46um-0008Rn-Ox
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:44:40 +0000
Received: from [193.109.254.147:40643] by server-1.bemta-14.messagelabs.com id
	5B/FA-15600-89909D25; Fri, 17 Jan 2014 10:44:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389955478!11496674!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20128 invoked from network); 17 Jan 2014 10:44:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 10:44:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91711455"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 10:44:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 05:44:37 -0500
Message-ID: <1389955476.6697.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
Date: Fri, 17 Jan 2014 10:44:36 +0000
In-Reply-To: <CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 18:37 +0800, Dennis Lan (dlan) wrote:
> On Fri, Jan 17, 2014 at 5:46 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
> >> I'm also hit this problem, using xen-4.4.0-rc2 release,  on x86_64 machine.
> >> but I've tested with xen-4.3.1 the same settings, and have no problem
> >
> > Do you have a bridge configured? What is the output of "brctl show" and
> > "ifconfig -a" while the guest is running?
> >
> >> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> >> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> >> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
> >> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
> >> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices
> >
> > It seems to have waited for the backend to hit state 2, which happens,
> > and then it has failed for some reason, which I can't see here. It might
> > be worth adding some debug to the vif hotplug script to see if it is
> > running at all.
> >
> > Are you using the vif script's from Xen 4.4? Perhaps you have got some
> > stale 4.3.1 scripts sitting around?
> >
> > Ian.
> >
> I think there is no problem with bridge and script(exactly same as 4.4)

I still think the next step is to instrument the scripts and see what is
going on.
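For reference, one common way to instrument a hotplug script (a sketch only; the log path is arbitrary, and the `XENBUS_PATH` echo assumes the usual environment the toolstack passes to these scripts) is to add shell tracing near the top, e.g. of /etc/xen/scripts/vif-bridge:

```shell
#!/bin/sh
# Sketch: instrumentation lines for the top of a vif hotplug script.
# The log path below is a placeholder; pick any writable location.
LOG=/tmp/vif-hotplug-debug.log
exec 2>>"$LOG"        # shell trace output (set -x) goes to the log file
set -x                # echo each command as it executes
echo "script called: args=$* XENBUS_PATH=${XENBUS_PATH:-unset}" >&2
set +x                # stop tracing once past the section of interest
echo "trace written to $LOG"
```

If the log file never appears after a failed "xl create", the script was most likely never invoked at all.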

> btw, If I install xen-4.3.1, then everyting works fine.

Have you looked into the differences?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 10:55:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 10:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W474z-0000cc-FI; Fri, 17 Jan 2014 10:55:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W474x-0000cX-Hw
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 10:55:11 +0000
Received: from [85.158.139.211:3669] by server-10.bemta-5.messagelabs.com id
	98/4C-01405-E0C09D25; Fri, 17 Jan 2014 10:55:10 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389956108!10351445!1
X-Originating-IP: [209.85.128.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25940 invoked from network); 17 Jan 2014 10:55:10 -0000
Received: from mail-qe0-f47.google.com (HELO mail-qe0-f47.google.com)
	(209.85.128.47)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 10:55:10 -0000
Received: by mail-qe0-f47.google.com with SMTP id 5so3684134qeb.6
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 02:55:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=/81FXkW9ScTScbxtEOdhOvWDckR/VsKpGluirgpUlb4=;
	b=HSJdOuZLaC1tzBR6PrOADzsmE4+1doGMZtZC4gFBj+zWAwL9ltu00pDnvoqlHjht9B
	7boezuKnTjzX59I6gN5bPx9bjuHXPuu7186ojfOmHjH7xteVxDhDlWmOZRfLS42tRLNs
	DRlc5YzYbnxIL+U87bzqhPCmYxLOl3EbrKKCRDqi9Ktl4lUDK0H2gFTJJyoY1Ll/GTv3
	iExdir9Ht3957jUVk1ng2tDvDptCPEC0f+ta9ESx9HgQOvWZdbkaOqG/InFGOgzrSdgD
	PIm+SotopWOMYrQbvALTna7FESCmJToaqmMFTdLwZkH4q5URBskKFMaeuG9VkgtTKNzI
	uKlQ==
MIME-Version: 1.0
X-Received: by 10.224.124.133 with SMTP id u5mr1716222qar.79.1389956108631;
	Fri, 17 Jan 2014 02:55:08 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 02:55:08 -0800 (PST)
In-Reply-To: <1389955476.6697.58.camel@kazak.uk.xensource.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
Date: Fri, 17 Jan 2014 18:55:08 +0800
Message-ID: <CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 6:44 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 18:37 +0800, Dennis Lan (dlan) wrote:
>> On Fri, Jan 17, 2014 at 5:46 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
>> >> I'm also hit this problem, using xen-4.4.0-rc2 release,  on x86_64 machine.
>> >> but I've tested with xen-4.3.1 the same settings, and have no problem
>> >
>> > Do you have a bridge configured? What is the output of "brctl show" and
>> > "ifconfig -a" while the guest is running?
>> >
>> >> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
>> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> >> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
>> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
>> >> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
>> >> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
>> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
>> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
>> >> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices
>> >
>> > It seems to have waited for the backend to hit state 2, which happens,
>> > and then it has failed for some reason, which I can't see here. It might
>> > be worth adding some debug to the vif hotplug script to see if it is
>> > running at all.
>> >
>> > Are you using the vif script's from Xen 4.4? Perhaps you have got some
>> > stale 4.3.1 scripts sitting around?
>> >
>> > Ian.
>> >
>> I think there is no problem with bridge and script(exactly same as 4.4)
>
> I still think the next step is to instrument the scripts and see what is
> going on.
Do you have any suggestion for how I should locate this problem?
Which script should I look at, and where can I add debug info?
I'm rather at a loss here.

>
>> btw, If I install xen-4.3.1, then everyting works fine.
>
> Have you looked into the differences?
>
Not really. I copied the vif-* scripts from 4.3.1 to /etc/xen/scripts/
and still got the same problem.
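One thing that might narrow it down (a sketch; domid 5 and vif index 0 are taken from the libxl log earlier in the thread, and vif-common.sh writes a "hotplug-status" node on success, so its absence suggests the script never ran or died early) is to dump the vif backend directory from xenstore on dom0 while the failing create is in flight:

```shell
#!/bin/sh
# Sketch: inspect the vif backend nodes in xenstore during a failing "xl create".
# Adjust the domid (5) and vif index (0) to match your libxl log.
BACKEND=/local/domain/0/backend/vif/5/0
if command -v xenstore-ls >/dev/null 2>&1; then
    xenstore-ls "$BACKEND"    # look for a "hotplug-status" node here
else
    echo "xenstore-ls not found on this host; on the Xen dom0 run: xenstore-ls $BACKEND"
fi
```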

> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:01:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47BC-0001GL-Dt; Fri, 17 Jan 2014 11:01:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47BA-0001GG-6p
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 11:01:36 +0000
Received: from [85.158.137.68:16971] by server-10.bemta-3.messagelabs.com id
	02/A9-23989-F8D09D25; Fri, 17 Jan 2014 11:01:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389956493!9746462!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20876 invoked from network); 17 Jan 2014 11:01:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91715093"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:01:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:01:31 -0500
Message-ID: <1389956491.6697.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
Date: Fri, 17 Jan 2014 11:01:31 +0000
In-Reply-To: <CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 18:55 +0800, Dennis Lan (dlan) wrote:
> On Fri, Jan 17, 2014 at 6:44 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-17 at 18:37 +0800, Dennis Lan (dlan) wrote:
> >> On Fri, Jan 17, 2014 at 5:46 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Fri, 2014-01-17 at 17:33 +0800, Dennis Lan (dlan) wrote:
> >> >> I've also hit this problem, using the xen-4.4.0-rc2 release, on an x86_64 machine,
> >> >> but I've tested xen-4.3.1 with the same settings and have no problem.
> >> >
> >> > Do you have a bridge configured? What is the output of "brctl show" and
> >> > "ifconfig -a" while the guest is running?
> >> >
> >> >> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: register slotnum=3
> >> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> >> >> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 still waiting state 1
> >> >> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: event epath=/local/domain/0/backend/vif/5/0/state
> >> >> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/5/0/state wanted state 2 ok
> >> >> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x62acd8 wpath=/local/domain/0/backend/vif/5/0/state token=3/1: deregister slotnum=3
> >> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62acd8: deregister unregistered
> >> >> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x62ad60: deregister unregistered
> >> >> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to add nic devices
> >> >
> >> > It seems to have waited for the backend to hit state 2, which happens,
> >> > and then it has failed for some reason, which I can't see here. It might
> >> > be worth adding some debug to the vif hotplug script to see if it is
> >> > running at all.
> >> >
> >> > Are you using the vif scripts from Xen 4.4? Perhaps you have got some
> >> > stale 4.3.1 scripts sitting around?
> >> >
> >> > Ian.
> >> >
> >> I think there is no problem with the bridge and script (exactly the same as 4.4)
> >
> > I still think the next step is to instrument the scripts and see what is
> > going on.
> Do you have any suggestion on how I should locate this problem?
> Which script, or where, should I add debug info?

vif-bridge and the common scripts which it includes would be a good
start. Just an echo at the top to confirm that the script is running
would be useful.

I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
when these scripts were launched by udev, but now that libxl runs them
you may find that the debug from the script comes out on stdout/err of
the xl create command so perhaps that isn't needed any more.
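The suggestion above can be sketched standalone; the log path, the subshell wrapper (used here only to keep the redirection contained for the demo), and the message text are illustrative assumptions. In the real script the `exec` and `echo` lines would simply go at the top of vif-bridge:

```shell
# Sketch of instrumenting a hotplug script (illustrative, not the real
# vif-bridge). The exec line captures everything the script prints,
# stdout and stderr alike, into a log file for post-mortem inspection.
LOG=$(mktemp)
(
  exec 1>>"$LOG" 2>&1
  echo "vif-bridge invoked: action=online vif=vif5.0"
)
cat "$LOG"
```

As noted, with libxl-run scripts the same output may also show up directly on the stdout/stderr of the `xl create` command, so the redirection may be unnecessary.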

> headless here.

That shouldn't matter; you are looking for output from userspace
scripts, not kernel or hypervisor logs.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:10:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47JB-0001qf-Fi; Fri, 17 Jan 2014 11:09:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47JA-0001qa-3V
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:09:52 +0000
Received: from [85.158.143.35:41222] by server-1.bemta-4.messagelabs.com id
	04/98-02132-F7F09D25; Fri, 17 Jan 2014 11:09:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389956989!12382726!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11721 invoked from network); 17 Jan 2014 11:09:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:09:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93813977"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:09:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:09:48 -0500
Message-ID: <1389956987.6697.67.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:09:47 +0000
In-Reply-To: <1389892942-8452-2-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-2-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 1/7] libxl: fork: Break out checked_waitpid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> This is a simple error-handling wrapper for waitpid.  We're going to
> want to call waitpid somewhere else and this avoids some of the
> duplication.
> 
> No functional change in this patch.  (Technically, we used to check
> chldmode_ours again in the EINTR case, and don't now, but that can't
> have changed because we continuously hold the libxl ctx lock.)

I was going to ask whether that outer while condition is a bit pointless
then, but I see that outside the quoted context we drop and reacquire
the lock, so it makes sense.

> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_fork.c |   26 ++++++++++++++++++--------
>  1 file changed, 18 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index 4ae9f94..2252370 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -155,6 +155,22 @@ int libxl__carefd_fd(const libxl__carefd *cf)
>   * Actual child process handling
>   */
>  
> +/* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
> +static pid_t checked_waitpid(libxl__egc *egc, pid_t want, int *status)
> +{
> +    for (;;) {
> +        pid_t got = waitpid(want, status, WNOHANG);
> +        if (got != -1)
> +            return got;
> +        if (errno == ECHILD)
> +            return got;
> +        if (errno == EINTR)
> +            continue;
> +        LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
> +        return 0;
> +    }
> +}
> +
>  static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
>                                       int fd, short events, short revents);
>  
> @@ -331,16 +347,10 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
>  
>      while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
>          int status;
> -        pid_t pid = waitpid(-1, &status, WNOHANG);
> -
> -        if (pid == 0) return;
> +        pid_t pid = checked_waitpid(egc, -1, &status);
>  
> -        if (pid == -1) {
> -            if (errno == ECHILD) return;
> -            if (errno == EINTR) continue;
> -            LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
> +        if (pid == 0 || pid == -1 /* ECHILD */)
>              return;
> -        }
>  
>          int rc = childproc_reaped(egc, pid, status);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47Ja-0001st-9Z; Fri, 17 Jan 2014 11:10:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47JY-0001sa-69
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:10:16 +0000
Received: from [85.158.143.35:3509] by server-2.bemta-4.messagelabs.com id
	AD/21-11386-79F09D25; Fri, 17 Jan 2014 11:10:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389957013!12331167!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5074 invoked from network); 17 Jan 2014 11:10:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:10:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93814056"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:10:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:10:12 -0500
Message-ID: <1389957011.6697.68.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:10:11 +0000
In-Reply-To: <1389892942-8452-3-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-3-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 2/7] libxl: fork: Break out
	childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> We're going to want to do this again at a new call site.
> 
> No functional change.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:13:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47Ly-00024E-FB; Fri, 17 Jan 2014 11:12:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47Lx-000246-7Q
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:12:45 +0000
Received: from [85.158.137.68:58605] by server-13.bemta-3.messagelabs.com id
	12/E3-28603-C2019D25; Fri, 17 Jan 2014 11:12:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389957162!9785883!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30063 invoked from network); 17 Jan 2014 11:12:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:12:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91717148"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:12:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:12:41 -0500
Message-ID: <1389957160.6697.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:12:40 +0000
In-Reply-To: <1389892942-8452-4-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-4-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 3/7] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> Clarify that libxl_sigchld_owner_libxl causes libxl to reap all the
> process's children, and clarify the wording of the description of
> libxl_sigchld_owner_libxl_always.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/libxl/libxl_event.h |    5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 6261f99..4221d5a 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -467,7 +467,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>  
> 
>  typedef enum {
> -    /* libxl owns SIGCHLD whenever it has a child. */
> +    /* libxl owns SIGCHLD whenever it has a child, and reaps
> +     * all children. */

"all children, including those which were not spawned by libxl" might be
extra clear. It might also be worth saying explicitly "When libxl has no
children it does not own SIGCHLD and will not reap anything". I know
it's implied by the existing text but it is a bit of a subtle point.

With or without those mods:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

>      libxl_sigchld_owner_libxl,
>  
>      /* Application promises to call libxl_childproc_exited but NOT
> @@ -476,7 +477,7 @@ typedef enum {
>      libxl_sigchld_owner_mainloop,
>  
>      /* libxl owns SIGCHLD all the time, and the application is
> -     * relying on libxl's event loop for reaping its own children. */
> +     * relying on libxl's event loop for reaping its children too. */
>      libxl_sigchld_owner_libxl_always,
>  } libxl_sigchld_owner;
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:14:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47NG-0002Bf-WE; Fri, 17 Jan 2014 11:14:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47NG-0002BR-1K
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:14:06 +0000
Received: from [85.158.139.211:34969] by server-4.bemta-5.messagelabs.com id
	F5/1F-26791-B7019D25; Fri, 17 Jan 2014 11:14:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1389957241!10352381!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12559 invoked from network); 17 Jan 2014 11:14:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:14:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91717513"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:14:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:14:00 -0500
Message-ID: <1389957239.6697.72.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:13:59 +0000
In-Reply-To: <1389892942-8452-5-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-5-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 4/7] libxl: fork: assert that chldmode is
	right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> In libxl_childproc_reaped, check that the chldmode is as expected.

The doc comment on libxl_childproc_reaped says:
 * May be called only by an application which has called setmode with
 * chldowner == libxl_sigchld_owner_mainloop.  If pid was a process started
so this is obviously correct.

> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_fork.c |    2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index 7b84765..85db2fb 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
>  {
>      EGC_INIT(ctx);
>      CTX_LOCK;
> +    assert(CTX->childproc_hooks->chldowner
> +           == libxl_sigchld_owner_mainloop);
>      int rc = childproc_reaped(egc, pid, status);
>      CTX_UNLOCK;
>      EGC_FREE;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:23:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47Vs-0002bZ-JU; Fri, 17 Jan 2014 11:23:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47Vq-0002bQ-Ii
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:22:58 +0000
Received: from [85.158.139.211:43699] by server-15.bemta-5.messagelabs.com id
	04/5E-08490-19219D25; Fri, 17 Jan 2014 11:22:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389957775!10361340!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26932 invoked from network); 17 Jan 2014 11:22:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:22:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93816673"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:22:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:22:55 -0500
Message-ID: <1389957773.6697.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:22:53 +0000
In-Reply-To: <1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
 libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> +        if (got == -1) {
> +            LIBXL__EVENT_DISASTER
> +                (egc, "waitpid() gave ECHILD but we have a child",
> +                 ECHILD, 0);
> +            /* it must have finished but we don't know its status */
> +            status = 255<<8; /* no wait.h macro for this! */
> +            assert(WIFEXITED(status));
> +            assert(WEXITSTATUS(status)==255);
> +            assert(!WIFSIGNALED(status));
> +            assert(!WIFSTOPPED(status));

This is quite exciting! How can this happen? Kernel bug or similar?

Either way, I suppose this is the best we can do under the circumstances
so:
Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:27:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47aA-00036N-Il; Fri, 17 Jan 2014 11:27:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W47a8-000369-Il
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:27:24 +0000
Received: from [85.158.139.211:45232] by server-15.bemta-5.messagelabs.com id
	7D/D7-08490-B9319D25; Fri, 17 Jan 2014 11:27:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389958041!10334136!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17935 invoked from network); 17 Jan 2014 11:27:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:27:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93817285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:27:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 06:27:21 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47a4-0002q7-Dc;
	Fri, 17 Jan 2014 11:27:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47a2-0003Rt-UR;
	Fri, 17 Jan 2014 11:27:18 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.5013.712955.967233@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 11:27:17 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389957773.6697.76.camel@kazak.uk.xensource.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
	<1389957773.6697.76.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
 libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 5/7] libxl: fork: Provide libxl_childproc_sigchld_occurred"):
> On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> > +            /* it must have finished but we don't know its status */
> > +            status = 255<<8; /* no wait.h macro for this! */
> > +            assert(WIFEXITED(status));
> > +            assert(WEXITSTATUS(status)==255);
> > +            assert(!WIFSIGNALED(status));
> > +            assert(!WIFSTOPPED(status));
> 
> This is quite exciting! How can this happen? Kernel bug or similar?

In principle it would be possible (and standards-compliant) for a
system to encode its wait status in a different way to usual.  I don't
think any such system really exists - at least, not one we'll be
running libxl on.

Of course this code is only reached if DISASTER returns...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:27:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47Zz-00035b-PO; Fri, 17 Jan 2014 11:27:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47Zx-00035U-UJ
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:27:14 +0000
Received: from [193.109.254.147:3653] by server-11.bemta-14.messagelabs.com id
	FE/4A-20576-19319D25; Fri, 17 Jan 2014 11:27:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389958031!11483510!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Fri Jan 17 11:27:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47Zz-00035b-PO; Fri, 17 Jan 2014 11:27:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47Zx-00035U-UJ
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:27:14 +0000
Received: from [193.109.254.147:3653] by server-11.bemta-14.messagelabs.com id
	FE/4A-20576-19319D25; Fri, 17 Jan 2014 11:27:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389958031!11483510!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24763 invoked from network); 17 Jan 2014 11:27:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:27:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93817274"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:27:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:27:10 -0500
Message-ID: <1389958029.6697.77.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:27:09 +0000
In-Reply-To: <1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> Applications exist which want to use libxl in an event-driven mode but
> which do not integrate child termination into their event system, but
> instead reap all their own children synchronously.
> 
> In such an application libxl must own SIGCHLD but avoid reaping any
> children that don't belong to libxl.
> 
> Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
> behaviour.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

So far as this patch goes:

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

But Jim's queries about multiple ctx's etc are interesting.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:27:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47aA-00036N-Il; Fri, 17 Jan 2014 11:27:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W47a8-000369-Il
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:27:24 +0000
Received: from [85.158.139.211:45232] by server-15.bemta-5.messagelabs.com id
	7D/D7-08490-B9319D25; Fri, 17 Jan 2014 11:27:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389958041!10334136!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17935 invoked from network); 17 Jan 2014 11:27:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:27:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93817285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:27:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 06:27:21 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47a4-0002q7-Dc;
	Fri, 17 Jan 2014 11:27:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47a2-0003Rt-UR;
	Fri, 17 Jan 2014 11:27:18 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.5013.712955.967233@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 11:27:17 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389957773.6697.76.camel@kazak.uk.xensource.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
	<1389957773.6697.76.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
 libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 5/7] libxl: fork: Provide libxl_childproc_sigchld_occurred"):
> On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> > +            /* it must have finished but we don't know its status */
> > +            status = 255<<8; /* no wait.h macro for this! */
> > +            assert(WIFEXITED(status));
> > +            assert(WEXITSTATUS(status)==255);
> > +            assert(!WIFSIGNALED(status));
> > +            assert(!WIFSTOPPED(status));
> 
> This is quite exciting! How can this happen? Kernel bug or similar?

In principle it would be possible (and standards-compliant) for a
system to encode its wait status in a different way to usual.  I don't
think any such system really exists - at least, not one we'll be
running libxl on.

Of course this code is only reached if DISASTER returns...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:28:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47ag-0003C6-OA; Fri, 17 Jan 2014 11:27:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47ae-0003Bb-RZ
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:27:57 +0000
Received: from [85.158.139.211:34569] by server-3.bemta-5.messagelabs.com id
	F2/27-04773-CB319D25; Fri, 17 Jan 2014 11:27:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389958073!10364387!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28688 invoked from network); 17 Jan 2014 11:27:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:27:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91720275"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:27:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:27:52 -0500
Message-ID: <1389958072.6697.78.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:27:52 +0000
In-Reply-To: <1389892942-8452-8-git-send-email-ian.jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-8-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 7/7] libxl: fork: Provide
 LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> This is the feature test macro for libxl_childproc_sigchld_occurred
> and libxl_sigchld_owner_libxl_always_selective_reap.
> 
> It is split out into this separate patch because: a single feature
> test is sensible because we do not intend anyone to release or ship
> libxl versions with one of these but not the other; but, the two
> features are in separate patches for clarity; and, this just makes
> reading the actual code easier.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/libxl/libxl.h |   13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..1ac34c3 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -409,6 +409,19 @@
>   */
>  #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
>  
> +/*
> + * LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
> + *
> + * If this is defined:
> + *
> + * Firstly, the enum libxl_sigchld_owner (in libxl_event.h) has the
> + * value libxl_sigchld_owner_libxl_always_selective_reap which may be
> + * passed to libxl_childproc_setmode in hooks->chldmode.
> + *
> + * Secondly, the function libxl_childproc_sigchld_occurred exists.
> + */
> +#define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
> +
>  /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
>   * called from within libxl itself. Callers outside libxl, who
>   * do not #include libxl_internal.h, are fine. */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:30:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47ci-0003cf-Eg; Fri, 17 Jan 2014 11:30:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W47ch-0003cU-8Z
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:30:03 +0000
Received: from [85.158.143.35:51811] by server-1.bemta-4.messagelabs.com id
	1A/5B-02132-A3419D25; Fri, 17 Jan 2014 11:30:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389958200!12388181!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23968 invoked from network); 17 Jan 2014 11:30:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:30:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91721010"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:30:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:30:00 -0500
Message-ID: <1389958199.6697.80.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:29:59 +0000
In-Reply-To: <21209.5013.712955.967233@mariner.uk.xensource.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
	<1389957773.6697.76.camel@kazak.uk.xensource.com>
	<21209.5013.712955.967233@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
 libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 11:27 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH 5/7] libxl: fork: Provide libxl_childproc_sigchld_occurred"):
> > On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> > > +            /* it must have finished but we don't know its status */
> > > +            status = 255<<8; /* no wait.h macro for this! */
> > > +            assert(WIFEXITED(status));
> > > +            assert(WEXITSTATUS(status)==255);
> > > +            assert(!WIFSIGNALED(status));
> > > +            assert(!WIFSTOPPED(status));
> > 
> > This is quite exciting! How can this happen? Kernel bug or similar?
> 
> In principle it would be possible (and standards-compliant) for a
> system to encode its wait status in a different way to usual.  I don't
> think any such system really exists - at least, not one we'll be
> running libxl on.
> 
> Of course this code is only reached if DISASTER returns...

Sorry, I meant how can we get here at all, i.e. with a waitpid returning
-1/ECHILD when we think there is actually a child.

Thinking about it this way I now realise this could be a bug in libxl or
the application which caused the process to be reaped elsewhere.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:36:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:36:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47iL-00046l-2Z; Fri, 17 Jan 2014 11:35:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W47iJ-00046g-LQ
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 11:35:51 +0000
Received: from [193.109.254.147:57071] by server-8.bemta-14.messagelabs.com id
	2E/A5-30921-69519D25; Fri, 17 Jan 2014 11:35:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389958548!11486118!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16976 invoked from network); 17 Jan 2014 11:35:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:35:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91722305"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:35:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 06:35:47 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47iE-0002t7-Fk;
	Fri, 17 Jan 2014 11:35:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W47iC-0003T6-Ky;
	Fri, 17 Jan 2014 11:35:44 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.5519.308053.311118@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 11:35:43 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52D8CF6A.7050609@suse.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
	<52D8CF6A.7050609@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
> Ian Jackson wrote:
> > +    /* libxl owns SIGCHLD all the time, but it must only reap its own
> > +     * children.  The application will reap its own children
> > +     * synchronously with waitpid, without the assistance of SIGCHLD. */
> > +    libxl_sigchld_owner_libxl_always_selective_reap,
> 
> Should there be some documentation in the opening comments of
> "Subprocess handling"?  E.g. an entry under "For programs which run
> their own children alongside libxl's:"?

Yes.

> BTW, it is not clear to me how to use libxl_childproc_setmode() wrt
> different libxl_ctx.  Currently in the libvirt libxl driver there's a
> driver-wide ctx for more host-centric operations like
> libxl_get_version_info(), libxl_get_free_memory(), etc., and a
> per-domain ctx for domain-specific operations.  The current doc for
> libxl_childproc_setmode() says:

Oh dear.  Can you change it to use the same ctx everywhere ?

If not, I'm afraid, someone needs to
 - keep a record of all the libxl ctxs
 - when the self-pipe triggers, iterate over all the ctxs calling
   libxl_childproc_sigchld_occurred

If that "someone" is libvirt, you'll have to do the self-pipe trick.

It would probably be easier to have libxl maintain a global list of
libxl ctxs for this purpose.

> When calling setmode() on the driver-wide or on each domain-specific
> ctx, I get an assert with this hunk
> 
> libvirtd: libxl_fork.c:241: libxl__sigchld_installhandler: Assertion
> `!sigchld_owner' failed.

The problem is that the SIGCHLD ownership is attached to the
individual ctx.  This could in principle be changed, I think.

I will try to see if I can do that.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:43:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47pJ-0004mP-6k; Fri, 17 Jan 2014 11:43:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W47pH-0004mK-S6
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 11:43:04 +0000
Received: from [85.158.143.35:53962] by server-3.bemta-4.messagelabs.com id
	C6/61-32360-74719D25; Fri, 17 Jan 2014 11:43:03 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389958981!12396817!1
X-Originating-IP: [209.85.128.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13902 invoked from network); 17 Jan 2014 11:43:02 -0000
Received: from mail-qe0-f45.google.com (HELO mail-qe0-f45.google.com)
	(209.85.128.45)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:43:02 -0000
Received: by mail-qe0-f45.google.com with SMTP id x7so1255272qeu.32
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 03:43:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=7diGGv5XeGSQaZZLjlhQHjzOuKMlMK+Q9SWhgsjGNO8=;
	b=KdEmiYTaZVgZE6MZb1zDTCGmT+gJxSZ6e/uNAOE+Mtb5YmUR2BtxsWPbMz4PkvOiZ/
	XFfL1PY4rXsQjXKhjXj839fi3BOInyljYXIT0e47p4myvC0BSojx6NZngY8ancL03/kL
	cvLGCHyp6rHcu3R9d6rhUBeQaa774HtgdQFsMAZUaNHYZbcqGnuSD9eG37Nt4tUxZ1vL
	4dqmJwg985DaX8kSL/llNn03kRJtMuCCy94R+KeGUO1RPZWQxSWwh0qs2T0rmFkXU6XA
	AL3BXolE3zDRzs9+VuKffi89rKh1o99vY6Z0W9NywIXOMwtwnaBrdMtrwHN9uzGuQSyF
	JFHg==
MIME-Version: 1.0
X-Received: by 10.140.102.242 with SMTP id w105mr2133750qge.74.1389958981224; 
	Fri, 17 Jan 2014 03:43:01 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 03:43:01 -0800 (PST)
In-Reply-To: <1389956491.6697.64.camel@kazak.uk.xensource.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
	<1389956491.6697.64.camel@kazak.uk.xensource.com>
Date: Fri, 17 Jan 2014 19:43:01 +0800
Message-ID: <CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:


> vif-bridge and the common scripts which it includes would be a good
> start. Just an echo at the top to confirm that the script is running
> would be useful.
>
> I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
> when these scripts were launched by udev, but now that libxl runs them
> you may find that the debug from the script comes out on stdout/err of
> the xl create command so perhaps that isn't needed any more.
>
>> headless here.
>
> That shouldn't matter, you are looking for output from userspace
> scripts, not kernel or hypervisor logs.
>
> Ian.
>

Hi Ian,
I suspect that with 4.4.0 the network devices were not even detected.
This is the output from 4.3.1; my notes follow the relevant lines.

libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
dlan: vif-bridge start
dlan: vif-common start

dlan: vif-bridge start -> output from vif-bridge script
dlan: vif-common start -> output from vif-common.sh script


-------- logs -----
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x62ab58 wpath=/local/domain/0/backend/vif/7/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62ab58 wpath=/local/domain/0/backend/vif/7/0/state token=3/1: event epath=/local/domain/0/backend/vif/7/0/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/7/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62ab58 wpath=/local/domain/0/backend/vif/7/0/state token=3/1: event epath=/local/domain/0/backend/vif/7/0/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/7/0/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x62ab58 wpath=/local/domain/0/backend/vif/7/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x62ab58: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
dlan: vif-bridge start
dlan: vif-common start
libxl: debug: libxl_event.c:1747:libxl__ao_progress_report: ao 0x62a570: progress report: callback queued aop=0x62b560
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x62a570: complete, rc=0
libxl: debug: libxl_event.c:1160:egc_run_callbacks: ao 0x62a570: progress report: callback aop=0x62b560

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:51:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47xV-000582-L3; Fri, 17 Jan 2014 11:51:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W47xT-00057x-Da
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 11:51:31 +0000
Received: from [85.158.139.211:63406] by server-7.bemta-5.messagelabs.com id
	B3/02-04824-24919D25; Fri, 17 Jan 2014 11:51:30 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389959488!10379100!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21263 invoked from network); 17 Jan 2014 11:51:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:51:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91725523"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 11:51:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:51:26 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W47xO-0007Mt-8O;
	Fri, 17 Jan 2014 11:51:26 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 17 Jan 2014 11:51:14 +0000
Message-ID: <1389959474-7069-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH] MAINTAINERS: remove Linux sections
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The LINUX (PV_OPS) section was out-dated and it's better to only have
this information in one place (the Linux MAINTAINERS file).

The LINUX (XCP) section was for an external project that hasn't been
maintained for years.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 MAINTAINERS |   10 ----------
 1 files changed, 0 insertions(+), 10 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 4d9648f..7757cdd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -209,16 +209,6 @@ F:      xen/include/{kexec,kimage}.h
 F:      xen/arch/x86/machine_kexec.c
 F:      xen/arch/x86/x86_64/kexec_reloc.S
 
-LINUX (PV_OPS)
-M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-S:	Supported
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
-
-LINUX (XCP)
-M:	Ian Campbell <ian.campbell@citrix.com>
-S:	Supported
-T:	hg http://xenbits.xen.org/XCP/linux-2.6.*.pq.hg
-
 MACHINE CHECK (MCA) & RAS
 M:	Christoph Egger <chegger@amazon.de>
 M:	Liu Jinsong <jinsong.liu@intel.com>
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W47y4-0005Hl-78; Fri, 17 Jan 2014 11:52:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W47y3-0005HC-9c
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 11:52:07 +0000
Received: from [85.158.137.68:54795] by server-3.bemta-3.messagelabs.com id
	FF/9E-10658-66919D25; Fri, 17 Jan 2014 11:52:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1389959524!9740276!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14907 invoked from network); 17 Jan 2014 11:52:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:52:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93822358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:52:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:52:03 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1W47xy-0007NY-Ol;
	Fri, 17 Jan 2014 11:52:02 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 17 Jan 2014 11:51:51 +0000
Message-ID: <1389959511-7113-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH] MAINTAINERS: add git repository for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 MAINTAINERS |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 31a0462..c4d90df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9549,6 +9549,7 @@ M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
 M:	Boris Ostrovsky <boris.ostrovsky@oracle.com>
 M:	David Vrabel <david.vrabel@citrix.com>
 L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
 S:	Supported
 F:	arch/x86/xen/
 F:	drivers/*/xen-*front.c
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 11:59:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 11:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W484q-0005lW-4L; Fri, 17 Jan 2014 11:59:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W484p-0005lO-0h
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 11:59:07 +0000
Received: from [193.109.254.147:5156] by server-2.bemta-14.messagelabs.com id
	A0/88-00361-A0B19D25; Fri, 17 Jan 2014 11:59:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389959944!11498637!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11126 invoked from network); 17 Jan 2014 11:59:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 11:59:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93823458"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 11:59:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 06:59:03 -0500
Message-ID: <1389959942.6697.87.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
Date: Fri, 17 Jan 2014 11:59:02 +0000
In-Reply-To: <CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
	<1389956491.6697.64.camel@kazak.uk.xensource.com>
	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Roger
	Pau Monne <roger.pau@citrix.com>, Eugene Fedotov <e.fedotov@samsung.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 19:43 +0800, Dennis Lan (dlan) wrote:
> On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> 
> > vif-bridge and the common scripts which it includes would be a good
> > start. Just an echo at the top to confirm that the script is running
> > would be useful.
> >
> > I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
> > when these scripts were launched by udev, but now that libxl runs them
> > you may find that the debug from the script comes out on stdout/err of
> > the xl create command so perhaps that isn't needed any more.
> >
> >> headless here.
> >
> > That shouldn't matter, you are looking for output from userspace
> > scripts, not kernel or hypervisor logs.
> >
> > Ian.
> >
> 
> Hi Ian
> I suspect that with 4.4.0 the network devices were not even detected.
> This is the output from 4.3.1; my notes follow the lines.
> 
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge online
> dlan: vif-bridge start
> dlan: vif-common start
> 
> dlan: vif-bridge start -> output from vif-bridge script
> dlan: vif-common start -> output from vif-common.sh script

So these are the 4.3 logs? Have you tried 4.4 and found that it doesn't
produce the same output?

(please can you try and set the text type to "preformatted" for the logs
-- having them wrapped makes them very hard to read).

The lack of 
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
in your original logs is a bit concerning.

Roger -- any ideas?
> 
> 
> -------- logs -----
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x62ab58 wpath=/local/domain/0/backend/vif/7/0/
> state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62ab58
> wpath=/local/domain/0/backend/vif/7/0/state toke
> n=3/1: event epath=/local/domain/0/backend/vif/7/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vif/7/0/state wanted state
>  2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x62ab58
> wpath=/local/domain/0/backend/vif/7/0/state toke
> n=3/1: event epath=/local/domain/0/backend/vif/7/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vif/7/0/state wanted state
>  2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x62ab58 wpath=/local/domain/0/backend/vif/7/
> 0/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x62ab58: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge online
> dlan: vif-bridge start
> dlan: vif-common start
> libxl: debug: libxl_event.c:1747:libxl__ao_progress_report: ao
> 0x62a570: progress report: callback queued aop=0x62b
> 560
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x62a570: complete, rc=0
> libxl: debug: libxl_event.c:1160:egc_run_callbacks: ao 0x62a570:
> progress report: callback aop=0x62b560
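
The debugging trick quoted above (an echo at the top of the script, plus an
exec redirect so the output survives even when udev launches the script)
would sit at the top of /etc/xen/scripts/vif-bridge roughly as follows; the
log path and the message text here are illustrative, not part of the shipped
script:

```shell
#!/bin/sh
# Illustrative debugging header for a Xen hotplug script such as
# /etc/xen/scripts/vif-bridge (log path and message are examples only).
# Redirect all further stdout/stderr to a file so the output is captured
# even when the script is launched by udev rather than by libxl.
exec 1>/tmp/hotplug.log 2>&1

# Confirm the script is actually being run, and with which arguments.
echo "vif-bridge start: $0 $*"
```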



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:08:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48Dh-0006Vz-9p; Fri, 17 Jan 2014 12:08:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W48Df-0006Vu-IL
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 12:08:15 +0000
Received: from [85.158.137.68:13788] by server-8.bemta-3.messagelabs.com id
	9D/77-31081-E2D19D25; Fri, 17 Jan 2014 12:08:14 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1389960492!9709556!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17655 invoked from network); 17 Jan 2014 12:08:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 12:08:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93825598"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 12:08:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 07:08:11 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W48Da-0007bt-Rn;
	Fri, 17 Jan 2014 12:08:10 +0000
Date: Fri, 17 Jan 2014 12:08:10 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: annie li <annie.li@oracle.com>
Message-ID: <20140117120810.GA11681@zion.uk.xensource.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D8CCE4.9010804@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, David Vrabel <david.vrabel@citrix.com>,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 02:25:40PM +0800, annie li wrote:
> 
> On 2014/1/16 19:10, David Vrabel wrote:
> >On 15/01/14 23:57, Annie Li wrote:
> >>This patch implements two things:
> >>
> >>* release grant reference and skb for rx path; this fixes resource leaking.
> >>* clean up grant transfer code kept from old netfront(2.6.18) which grants
> >>pages for access/map and transfer. But grant transfer is deprecated in current
> >>netfront, so remove corresponding release code for transfer.
> >>
> >>gnttab_end_foreign_access_ref may fail when the grant entry is currently used
> >>for reading or writing. But this patch does not cover this and improvement for
> >>this failure may be implemented in a separate patch.
> >I don't think replacing a resource leak with a security bug is a good idea.
> >
> >If you would prefer not to fix the gnttab_end_foreign_access() call, I
> >think you can fix this in netfront by taking a reference to the page
> >before calling gnttab_end_foreign_access().  This will ensure the page
> >isn't freed until the subsequent kfree_skb(), or the gref is released by
> >the foreign domain (whichever is later).
> 
> Taking a reference to the page before calling
> gnttab_end_foreign_access() delays the free work until kfree_skb().
> Simply adding put_page before kfree_skb() does not make things
> different from gnttab_end_foreign_access_ref(), and the pages will
> be freed by kfree_skb(), problem will be hit in
> gnttab_handle_deferred() when freeing pages which already be freed.
> 

I think David's idea is:

	get_page
	gnttab_end_foreign_access
	kfree_skb

The get_page is to offset put_page in gnttab_end_foreign_access. You
don't need to put page before kfree_skb.

Wei.

> So put_page is required in gnttab_end_foreign_access(); this will
> ensure the free is done either by kfree_skb or by gnttab_handle_deferred.
> This involves changes in blkfront/pcifront/tpmfront (just like your
> patch), and this way ensures the page is released when the grant ref is ended.
> 
> Another solution I am thinking of is calling
> gnttab_end_foreign_access() with the page parameter as NULL; then
> gnttab_end_foreign_access will only end the grant reference, and
> the page-releasing work is done by kfree_skb().
> 
> Thanks
> Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:14:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48JA-000725-Ed; Fri, 17 Jan 2014 12:13:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W48J9-00071J-AC
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 12:13:55 +0000
Received: from [85.158.143.35:61172] by server-1.bemta-4.messagelabs.com id
	E9/0B-02132-28E19D25; Fri, 17 Jan 2014 12:13:54 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1389960832!12317960!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11094 invoked from network); 17 Jan 2014 12:13:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 12:13:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91732262"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 12:13:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 07:13:52 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W48J5-0007hz-J2;
	Fri, 17 Jan 2014 12:13:51 +0000
Date: Fri, 17 Jan 2014 12:13:51 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117121350.GA16586@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > 
> > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > could have expected.
> > > > > > 
> > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > Xen and their potential impact on the release.
> > > > > 
> > > > > I have a list the change here that have a potential impact on Xen, with
> > > > > the ones that I think are quite important at the beginning. Either the
> > > > > commit title speak for itself or I added a small description on what is
> > > > > affected.
> > > > 
> > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > on a freeze exception. Did you refer to 
> > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > not the very briefest words you can possibly manage.
> > > > 
> > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > giving it a blanket exemption based on what you've presented here, or
> > > > even of cherry picking what might be the important ones. If you think
> > > > any or all of it is actually important for 4.4 please make a proper case
> > > > for inclusion, either of the aggregate or of the individual changes.
> > > 
> > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > stable tree?
> > 
> > Yes, a simple merge.
> > 
> > > I also assume that you tested at the very least the basic
> > > PV and HVM configurations?
> > 
> > :(, no, I haven't try PV. But I did try HVM.
> > 
> > There is one thing that I may want to try, it's migration from the
> > previous version of Xen. There is one patch that change (fix?) that.
> 
> Please do and let me know if it works as expected.

I have tried a PV guest; it works fine.

I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
both the current qemu-xen tree and with the merge of 1.6.2, but the
migration fails in both cases with the same error reported by qemu
("Unknown savevm section or instance '0000:02.0/cirrus_vga' 0"). I have not
investigated that yet. It might just be an issue with my compile script ...
(using the wrong qemu-xen tree).

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:33:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48bO-0008HZ-HA; Fri, 17 Jan 2014 12:32:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W48bN-0008HS-3Z
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 12:32:45 +0000
Received: from [85.158.139.211:59201] by server-10.bemta-5.messagelabs.com id
	0B/5A-01405-CE229D25; Fri, 17 Jan 2014 12:32:44 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389961962!10384933!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3245 invoked from network); 17 Jan 2014 12:32:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 12:32:43 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0HCWaMH003831
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 12:32:37 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0HCWY2v020480
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 12:32:35 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HCWYAu006732; Fri, 17 Jan 2014 12:32:34 GMT
Received: from [192.168.1.102] (/123.123.250.195)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 04:32:34 -0800
Message-ID: <52D922DD.2060407@oracle.com>
Date: Fri, 17 Jan 2014 20:32:29 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
In-Reply-To: <20140117120810.GA11681@zion.uk.xensource.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: ian.campbell@citrix.com, netdev@vger.kernel.org, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-17 20:08, Wei Liu wrote:
> On Fri, Jan 17, 2014 at 02:25:40PM +0800, annie li wrote:
>> On 2014/1/16 19:10, David Vrabel wrote:
>>> On 15/01/14 23:57, Annie Li wrote:
>>>> This patch implements two things:
>>>>
>>>> * release grant reference and skb for rx path, this fixex resource leaking.
>>>> * clean up grant transfer code kept from old netfront(2.6.18) which grants
>>>> pages for access/map and transfer. But grant transfer is deprecated in current
>>>> netfront, so remove corresponding release code for transfer.
>>>>
>>>> gnttab_end_foreign_access_ref may fail when the grant entry is currently used
>>>> for reading or writing. But this patch does not cover this and improvement for
>>>> this failure may be implemented in a separate patch.
>>> I don't think replacing a resource leak with a security bug is a good idea.
>>>
>>> If you would prefer not to fix the gnttab_end_foreign_access() call, I
>>> think you can fix this in netfront by taking a reference to the page
>>> before calling gnttab_end_foreign_access().  This will ensure the page
>>> isn't freed until the subsequent kfree_skb(), or the gref is released by
>>> the foreign domain (whichever is later).
>> Taking a reference to the page before calling
>> gnttab_end_foreign_access() delays the free work until kfree_skb().
>> Simply adding put_page before kfree_skb() does not make things
>> different from gnttab_end_foreign_access_ref(), and the pages will
>> be freed by kfree_skb(), problem will be hit in
>> gnttab_handle_deferred() when freeing pages which already be freed.
>>
> I think David's idea is:
>
> 	get_page
> 	gnttab_end_foreign_access
> 	kfree_skb
>
> The get_page is to offset put_page in gnttab_end_foreign_access. You
> don't need to put page before kfree_skb.

Yes, this is what I described below regarding David's patch.

>> So put_page is required in gnttab_end_foreign_access(), this will
>> ensure either free is taken by kfree_skb or gnttab_handle_deferred.
>> This involves changes in blkfront/pcifront/tpmfront(just like your
>> patch), this way ensure page is released when ref is end.

But this would have some issues in the netfront tx path. Netfront ends all 
grant references of one skb first and then releases the skb. If 
gnttab_end_foreign_access_ref() fails inside gnttab_end_foreign_access(), the 
frag page and its corresponding grant reference are put in a deferred entry 
and the release work is done in the timer routine. If some frag pages of 
one skb are freed in this timer routine, then dev_kfree_skb_irq() will free 
pages which have already been freed.
So I prefer the following way I mentioned; suggestions?

>> Another solution I am thinking is calling
>> gnttab_end_foreign_access() with page parameter as NULL, then
>> gnttab_end_foreign_access will only do ending grant reference work
>> and releasing page work is done by kfree_skb().

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48gh-0008Pm-Dv; Fri, 17 Jan 2014 12:38:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W48gf-0008Pe-Af
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 12:38:13 +0000
Received: from [193.109.254.147:24542] by server-1.bemta-14.messagelabs.com id
	A6/A2-15600-43429D25; Fri, 17 Jan 2014 12:38:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389962286!11509538!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3651 invoked from network); 17 Jan 2014 12:38:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 12:38:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91739413"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 12:38:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 07:38:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48gW-0003CI-8o;
	Fri, 17 Jan 2014 12:38:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48gU-0004ag-5L;
	Fri, 17 Jan 2014 12:38:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.9256.615318.923918@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 12:38:00 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52D8CF6A.7050609@suse.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
	<52D8CF6A.7050609@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
> I don't see the assert, regardless of how I call setmode(), when
> changing this hunk to
> 
> @@ -264,11 +264,11 @@ static bool chldmode_ours(libxl_ctx *ctx, bool
> creating)
>  {
>      switch (ctx->childproc_hooks->chldowner) {
>      case libxl_sigchld_owner_libxl:
> +    case libxl_sigchld_owner_libxl_always_selective_reap:
>          return creating || !LIBXL_LIST_EMPTY(&ctx->children);
>      case libxl_sigchld_owner_mainloop:
>          return 0;
>      case libxl_sigchld_owner_libxl_always:
> -    case libxl_sigchld_owner_libxl_always_selective_reap:
>          return 1;
>      }
>      abort();

I should say: the fact that that works is just luck, I think.  I have a
better fix.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48gh-0008Pm-Dv; Fri, 17 Jan 2014 12:38:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W48gf-0008Pe-Af
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 12:38:13 +0000
Received: from [193.109.254.147:24542] by server-1.bemta-14.messagelabs.com id
	A6/A2-15600-43429D25; Fri, 17 Jan 2014 12:38:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389962286!11509538!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3651 invoked from network); 17 Jan 2014 12:38:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 12:38:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91739413"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 12:38:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 07:38:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48gW-0003CI-8o;
	Fri, 17 Jan 2014 12:38:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48gU-0004ag-5L;
	Fri, 17 Jan 2014 12:38:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.9256.615318.923918@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 12:38:00 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52D8CF6A.7050609@suse.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
	<52D8CF6A.7050609@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
> I don't see the assert, regardless of how I call setmode(), when
> changing this hunk to
> 
> @@ -264,11 +264,11 @@ static bool chldmode_ours(libxl_ctx *ctx, bool
> creating)
>  {
>      switch (ctx->childproc_hooks->chldowner) {
>      case libxl_sigchld_owner_libxl:
> +    case libxl_sigchld_owner_libxl_always_selective_reap:
>          return creating || !LIBXL_LIST_EMPTY(&ctx->children);
>      case libxl_sigchld_owner_mainloop:
>          return 0;
>      case libxl_sigchld_owner_libxl_always:
> -    case libxl_sigchld_owner_libxl_always_selective_reap:
>          return 1;
>      }
>      abort();

I should say: that that works is just luck, I think.  I have a better
fix.
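For context, here is a minimal compilable sketch (not the real libxl source; the
function signature and the have_children flag are stand-ins for the ctx state)
of the dispatch that Jim's rearranged hunk produces:

```c
/* Self-contained sketch of chldmode_ours() after the hunk above:
 * which chldowner modes claim SIGCHLD.  Enum member names mirror
 * libxl's; the creating/have_children flags replace the real
 * LIBXL_LIST_EMPTY(&ctx->children) check. */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

typedef enum {
    sigchld_owner_libxl,                       /* libxl owns SIGCHLD while it has children */
    sigchld_owner_mainloop,                    /* the application owns SIGCHLD */
    sigchld_owner_libxl_always,                /* libxl always owns SIGCHLD */
    sigchld_owner_libxl_always_selective_reap, /* grouped with plain libxl by the hunk */
} sigchld_owner;

static bool chldmode_ours(sigchld_owner mode, bool creating, bool have_children)
{
    switch (mode) {
    case sigchld_owner_libxl:
    case sigchld_owner_libxl_always_selective_reap:
        return creating || have_children;      /* claimed only while children exist */
    case sigchld_owner_mainloop:
        return false;
    case sigchld_owner_libxl_always:
        return true;                           /* claimed unconditionally */
    }
    abort();                                   /* unreachable for valid modes */
}
```

With this grouping the selective-reap mode claims SIGCHLD only while libxl
actually has (or is creating) children, like the plain libxl mode, instead of
unconditionally; Ian's point is that relying on this happening to avoid the
assert is luck rather than a designed fix.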

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 12:41:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 12:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W48jz-0000YT-9d; Fri, 17 Jan 2014 12:41:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W48jy-0000YN-Eg
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 12:41:38 +0000
Received: from [85.158.143.35:32226] by server-1.bemta-4.messagelabs.com id
	FA/2A-02132-10529D25; Fri, 17 Jan 2014 12:41:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1389962496!12409112!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29899 invoked from network); 17 Jan 2014 12:41:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 12:41:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93836025"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 12:41:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 07:41:29 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48jo-0003D9-B9;
	Fri, 17 Jan 2014 12:41:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W48jm-0004dl-Jt;
	Fri, 17 Jan 2014 12:41:26 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.9461.507821.973648@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 12:41:25 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389957160.6697.71.camel@kazak.uk.xensource.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-4-git-send-email-ian.jackson@eu.citrix.com>
	<1389957160.6697.71.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 3/7] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 3/7] libxl: fork: Clarify docs for libxl_sigchld_owner"):
> On Thu, 2014-01-16 at 17:22 +0000, Ian Jackson wrote:
> >  typedef enum {
> > -    /* libxl owns SIGCHLD whenever it has a child. */
> > +    /* libxl owns SIGCHLD whenever it has a child, and reaps
> > +     * all children. */
> 
> "all children, including those which were not spawned by libxl" might be
> extra clear. It might also be worth saying explicitly "When libxl has no
> children it does not own SIGCHLD and will not reap anything". I know
> it's implied by the existing text but it is a bit of a subtle point.

Good idea, done.
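For reference, the clarified comment might read roughly as follows; this is a
sketch only, folding Ian Campbell's two suggested sentences into the quoted
hunk, and the exact committed wording is not shown in this thread:

```c
/* Sketch of the clarified doc comment; the member name mirrors
 * libxl's public enum, the type name here is hypothetical. */
typedef enum {
    /* libxl owns SIGCHLD whenever it has a child, and reaps all
     * children, including those which were not spawned by libxl.
     * When libxl has no children it does not own SIGCHLD and will
     * not reap anything. */
    libxl_sigchld_owner_libxl = 0,
} libxl_sigchld_owner_sketch;
```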

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:06:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W497E-0001fd-G9; Fri, 17 Jan 2014 13:05:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W497C-0001fY-TV
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:05:39 +0000
Received: from [193.109.254.147:4108] by server-10.bemta-14.messagelabs.com id
	07/EB-20752-1AA29D25; Fri, 17 Jan 2014 13:05:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389963935!10034048!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22244 invoked from network); 17 Jan 2014 13:05:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:05:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93842338"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 13:05:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:05:34 -0500
Message-ID: <1389963933.6697.88.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 17 Jan 2014 13:05:33 +0000
In-Reply-To: <1389959474-7069-1-git-send-email-david.vrabel@citrix.com>
References: <1389959474-7069-1-git-send-email-david.vrabel@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: remove Linux sections
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 11:51 +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The LINUX (PV_OPS) section was outdated and it's better to only have
> this information in one place (the Linux MAINTAINERS file).
> 
> The LINUX (XCP) section was an external project that hasn't been
> maintained for years.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  MAINTAINERS |   10 ----------
>  1 files changed, 0 insertions(+), 10 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 4d9648f..7757cdd 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -209,16 +209,6 @@ F:      xen/include/{kexec,kimage}.h
>  F:      xen/arch/x86/machine_kexec.c
>  F:      xen/arch/x86/x86_64/kexec_reloc.S
>  
> -LINUX (PV_OPS)
> -M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> -S:	Supported
> -T:	git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> -
> -LINUX (XCP)
> -M:	Ian Campbell <ian.campbell@citrix.com>
> -S:	Supported
> -T:	hg http://xenbits.xen.org/XCP/linux-2.6.*.pq.hg
> -
>  MACHINE CHECK (MCA) & RAS
>  M:	Christoph Egger <chegger@amazon.de>
>  M:	Liu Jinsong <jinsong.liu@intel.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:10:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49CE-0002HD-7h; Fri, 17 Jan 2014 13:10:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W49CC-0002H7-Aa
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:10:48 +0000
Received: from [193.109.254.147:14192] by server-15.bemta-14.messagelabs.com
	id 55/16-22186-7DB29D25; Fri, 17 Jan 2014 13:10:47 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389964246!11471981!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13940 invoked from network); 17 Jan 2014 13:10:47 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 13:10:47 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W49C2-000Fm5-18; Fri, 17 Jan 2014 13:10:38 +0000
Date: Fri, 17 Jan 2014 14:10:38 +0100
From: Tim Deegan <tim@xen.org>
To: "Egger, Christoph" <chegger@amazon.de>
Message-ID: <20140117131038.GA59786@deinos.phlegethon.org>
References: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
	<52D8F10A.7080501@amazon.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D8F10A.7080501@amazon.de>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: JBeulich@suse.com, andrew.cooper3@citrix.com, eddie.dong@intel.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com,
	Yang Zhang <yang.z.zhang@intel.com>, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] nested EPT: fixing wrong handling for L2
 guest's direct mmio access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:59 +0100 on 17 Jan (1389949194), Egger, Christoph wrote:
> On 17.01.14 07:35, Yang Zhang wrote:
> > From: Yang Zhang <yang.z.zhang@Intel.com>
> > 
> > An L2 guest will access the physical device directly (nested VT-d). For such accesses,
> > the shadow EPT table should point to the device's MMIO. But in the current logic, L0 doesn't
> > distinguish whether the MMIO belongs to qemu or to a physical device when building the shadow EPT table.
> > This is wrong. This patch sets up the correct shadow EPT table for such MMIO ranges.
> > 
> > Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> > ---
> >  xen/arch/x86/mm/hap/nested_hap.c    |   10 ++++++++--
> >  xen/include/asm-x86/hvm/nestedhvm.h |    1 +
> >  2 files changed, 9 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
> > index c2ef1d1..38e2327 100644
> > --- a/xen/arch/x86/mm/hap/nested_hap.c
> > +++ b/xen/arch/x86/mm/hap/nested_hap.c
> > @@ -170,8 +170,11 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
> >      mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, p2ma,
> >                                0, page_order);
> >  
> > +    rc = NESTEDHVM_PAGEFAULT_DIRECT_MMIO;
> > +    if ( *p2mt == p2m_mmio_direct )
> > +        goto direct_mmio_out;
> >      rc = NESTEDHVM_PAGEFAULT_MMIO;
> > -    if ( p2m_is_mmio(*p2mt) )
> > +    if ( *p2mt == p2m_mmio_dm )
> >          goto out;
> 
> Why does p2m_is_mmio() not cover p2m_mmio_direct ?

It does.  This code is changing to cover the two kinds of mmio
separately.
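The distinction Tim describes can be reduced to a small sketch; the type,
macro, and return-code names below are modeled on the quoted patch, not taken
from the real Xen headers:

```c
/* Sketch of the patched classification in nestedhap_walk_L0_p2m():
 * p2m_is_mmio() covers BOTH mmio types, which is why the old
 * "if ( p2m_is_mmio(*p2mt) )" test could not tell them apart. */
#include <assert.h>

typedef enum {
    p2m_ram_rw,       /* ordinary RAM */
    p2m_mmio_dm,      /* MMIO emulated by the device model (qemu) */
    p2m_mmio_direct,  /* passed-through physical device MMIO */
} p2m_type_sketch;

#define p2m_is_mmio(t) ((t) == p2m_mmio_dm || (t) == p2m_mmio_direct)

enum { PAGEFAULT_DONE, PAGEFAULT_MMIO, PAGEFAULT_DIRECT_MMIO };

/* Hypothetical reduction of the patched logic: test the two kinds
 * separately instead of with the combined p2m_is_mmio() predicate. */
static int classify(p2m_type_sketch t)
{
    if (t == p2m_mmio_direct)
        return PAGEFAULT_DIRECT_MMIO;  /* point the shadow EPT at the device */
    if (t == p2m_mmio_dm)
        return PAGEFAULT_MMIO;         /* hand the access to the device model */
    return PAGEFAULT_DONE;
}
```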

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:16:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:16:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49HO-0002SA-5H; Fri, 17 Jan 2014 13:16:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W49HN-0002S5-ID
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:16:09 +0000
Received: from [85.158.139.211:58197] by server-9.bemta-5.messagelabs.com id
	E8/A5-15098-81D29D25; Fri, 17 Jan 2014 13:16:08 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389964568!10335159!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6048 invoked from network); 17 Jan 2014 13:16:08 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 13:16:08 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W49HL-000Ftx-LI; Fri, 17 Jan 2014 13:16:07 +0000
Date: Fri, 17 Jan 2014 14:16:07 +0100
From: Tim Deegan <tim@xen.org>
To: Yang Zhang <yang.z.zhang@intel.com>
Message-ID: <20140117131607.GB59786@deinos.phlegethon.org>
References: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389940508-2239-1-git-send-email-yang.z.zhang@intel.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: JBeulich@suse.com, andrew.cooper3@citrix.com, eddie.dong@intel.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] nested EPT: fixing wrong handling for L2
 guest's direct mmio access
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:35 +0800 on 17 Jan (1389965708), Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> > An L2 guest will access the physical device directly (nested VT-d). For such accesses,
> > the shadow EPT table should point to the device's MMIO. But in the current logic, L0 doesn't
> > distinguish whether the MMIO belongs to qemu or to a physical device when building the shadow EPT table.
> > This is wrong. This patch sets up the correct shadow EPT table for such MMIO ranges.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>

Acked-by: Tim Deegan <tim@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:19:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49KC-0002ss-Oi; Fri, 17 Jan 2014 13:19:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W49K9-0002rO-Dq
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:19:03 +0000
Received: from [85.158.139.211:14524] by server-8.bemta-5.messagelabs.com id
	7D/6E-29838-4CD29D25; Fri, 17 Jan 2014 13:19:00 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389964738!90480!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10491 invoked from network); 17 Jan 2014 13:18:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:18:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93846001"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 13:18:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:18:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W49K4-00009X-ST;
	Fri, 17 Jan 2014 13:18:56 +0000
Date: Fri, 17 Jan 2014 13:17:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140117121350.GA16586@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Anthony PERARD wrote:
> On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> > On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > > 
> > > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > > could have expected.
> > > > > > > 
> > > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > > Xen and their potential impact on the release.
> > > > > > 
> > > > > > I have listed the changes here that have a potential impact on Xen,
> > > > > > with the ones I think are quite important at the beginning. Either
> > > > > > the commit title speaks for itself or I added a small description
> > > > > > of what is affected.
> > > > > 
> > > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > > on a freeze exception. Did you refer to 
> > > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > > not the very briefest words you can possibly manage.
> > > > > 
> > > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > > giving it a blanket exemption based on what you've presented here, or
> > > > > even of cherry picking what might be the important ones. If you think
> > > > > any or all of it is actually important for 4.4 please make a proper case
> > > > > for inclusion, either of the aggregate or of the individual changes.
> > > > 
> > > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > > stable tree?
> > > 
> > > Yes, a simple merge.
> > > 
> > > > I also assume that you tested at the very least the basic
> > > > PV and HVM configurations?
> > > 
> > > :(, no, I haven't tried PV. But I did try HVM.
> > > 
> > > There is one thing that I may still want to try: migration from the
> > > previous version of Xen. There is one patch that changes (fixes?) that.
> > 
> > Please do and let me know if it works as expected.
> 
> I have tried a PV guest; it works fine.
> 
> I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
> both the current qemu-xen tree and with the merge of 1.6.2, but the
> migration fails in both cases with the same error reported by qemu
> (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0). I have not
> investigated that yet. It might just be an issue with my compile script ...
> (using the wrong qemu-xen tree).

It is important that we identify the cause of the problem, especially if
you think that it could be a "compile script" issue: if it is, it might
invalidate your previous positive tests too.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:23:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:23:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49O6-0003D8-FE; Fri, 17 Jan 2014 13:23:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ferdinand.brasser@trust.cased.de>)
	id 1W49O4-0003D2-EP
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:23:04 +0000
Received: from [85.158.137.68:3565] by server-3.bemta-3.messagelabs.com id
	06/5F-10658-7BE29D25; Fri, 17 Jan 2014 13:23:03 +0000
X-Env-Sender: ferdinand.brasser@trust.cased.de
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389964978!9783153!1
X-Originating-IP: [130.83.156.225]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25121 invoked from network); 17 Jan 2014 13:22:58 -0000
Received: from lnx500.hrz.tu-darmstadt.de (HELO lnx500.hrz.tu-darmstadt.de)
	(130.83.156.225)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 13:22:58 -0000
Received: from mail.cased.de (mail.cased.de [130.83.33.42])
	by lnx500.hrz.tu-darmstadt.de (8.14.4/8.14.4/HRZ/PMX) with ESMTP id
	s0HDMvLM010529
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 14:22:57 +0100
	(envelope-from ferdinand.brasser@trust.cased.de)
Received: from [130.83.40.173] (swn173.trust.informatik.tu-darmstadt.de
	[130.83.40.173])
	(using SSLv3 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mail.cased.de (Postfix) with ESMTPSA id 906D444E0EA;
	Fri, 17 Jan 2014 14:22:57 +0100 (CET)
Message-ID: <1389964977.2099.3.camel@64bitDom0>
From: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>
To: xen-devel@lists.xen.org
Date: Fri, 17 Jan 2014 14:22:57 +0100
X-Mailer: Evolution 3.2.3-0ubuntu6 
Mime-Version: 1.0
X-PMX-TU: seen v1.2 by 5.6.1.2065439, Antispam-Engine: 2.7.2.376379,
	Antispam-Data: 2014.1.17.131515
X-PMX-RELAY: outgoing
Cc: mihai.bucicoiu@trust.cased.de
Subject: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dear all,

My name is Ferdinand Brasser; I am a research assistant at CASED/TU Darmstadt.

Here at CASED, we have developed a live update mechanism for Xen, which
we call HotSwap. We currently have a prototype for Xen 4.2 and would
like to know whether there is any interest from the community in
integrating our approach into Xen. If so, any advice on how to proceed
is welcome.

Our approach to updating Xen is, at a very high level, to load a
complete new version of Xen at runtime and then transfer the state of
the old version to the new one. Afterwards, execution continues in the
new version. We use Xen functions to disable all but one CPU, and to
disable interrupts, during the update process, so that the state stays
consistent while it is transferred. We have evaluated our prototype:
the update process takes about 45 ms on our test system.

We hope you find this work interesting and would be happy to work with
you to turn our prototype into a usable and reliable feature of Xen.

Regards,
Ferdinand


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:28:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49T1-0003MZ-6u; Fri, 17 Jan 2014 13:28:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W49Sz-0003MU-R4
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:28:10 +0000
Received: from [85.158.139.211:22444] by server-10.bemta-5.messagelabs.com id
	CB/64-01405-9EF29D25; Fri, 17 Jan 2014 13:28:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389965287!10392010!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22783 invoked from network); 17 Jan 2014 13:28:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:28:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91752298"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 13:28:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:28:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W49Sv-0000GW-It;
	Fri, 17 Jan 2014 13:28:05 +0000
Message-ID: <52D92FE4.9080504@citrix.com>
Date: Fri, 17 Jan 2014 13:28:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>
References: <1389964977.2099.3.camel@64bitDom0>
In-Reply-To: <1389964977.2099.3.camel@64bitDom0>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: mihai.bucicoiu@trust.cased.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 13:22, Ferdinand Brasser wrote:
> Dear all,
>
> My name is Ferdinand Brasser; I am a research assistant at CASED/TU Darmstadt.
>
> Here at CASED, we have developed a live update mechanism for Xen, which
> we call HotSwap. We currently have a prototype for Xen 4.2 and would
> like to know whether there is any interest from the community in
> integrating our approach into Xen. If so, any advice on how to proceed
> is welcome.
>
> Our approach to updating Xen is, at a very high level, to load a
> complete new version of Xen at runtime and then transfer the state of
> the old version to the new one. Afterwards, execution continues in the
> new version. We use Xen functions to disable all but one CPU, and to
> disable interrupts, during the update process, so that the state stays
> consistent while it is transferred. We have evaluated our prototype:
> the update process takes about 45 ms on our test system.
>
> We hope you find this work interesting and would be happy to work with
> you to turn our prototype into a usable and reliable feature of Xen.
>
> Regards,
> Ferdinand

Hello,

This looks like a fantastic area to be working in, and I am sure people
would be interested in playing with it.

As for how to proceed, your best bet would be to provide some code and
instructions, perhaps with a step-by-step guide for a demo?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 13:22, Ferdinand Brasser wrote:
> Dear all,
>
> My name is Ferdinand Brasser; I am a research assistant at CASED/TU Darmstadt.
>
> Here at CASED, we have developed a live-update mechanism for Xen,
> which we call HotSwap. We currently have a prototype for Xen 4.2 and
> would like to know whether there is any interest from the community in
> integrating our approach into Xen. If so, any advice on how to proceed
> would be welcome.
>
> Our approach to updating Xen is, at a very high level, to load a
> complete new version of Xen at runtime and then transfer the state of
> the old version to the new one. Afterwards, execution continues in the
> new version. We use Xen functions to disable interrupts and all but
> one CPU during the update process, to keep the state consistent while
> it is being transferred. We have evaluated our prototype; the update
> process takes about 45 ms on our test system.
>
> We hope you find this work interesting, and we would be happy to work
> together with you to make our prototype a usable and reliable feature
> of Xen.
>
> Regards,
> Ferdinand

Hello,

This looks like a fantastic area to be working in, and I am sure people
would be interested in playing with it.

As far as how to proceed goes, your best bet would be to provide some
code and instructions, perhaps with a step-by-step guide for a demo.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:31:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:31:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49Vz-0003z6-Qv; Fri, 17 Jan 2014 13:31:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W49Vy-0003z0-5k
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:31:14 +0000
Received: from [85.158.143.35:61325] by server-2.bemta-4.messagelabs.com id
	36/71-11386-1A039D25; Fri, 17 Jan 2014 13:31:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389965471!12372773!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20564 invoked from network); 17 Jan 2014 13:31:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:31:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93849312"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 13:31:11 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:31:10 -0500
Message-ID: <52D9309D.1030808@citrix.com>
Date: Fri, 17 Jan 2014 14:31:09 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, "Dennis Lan (dlan)"
	<dennis.yxun@gmail.com>
References: <52B181EB.6080303@samsung.com>	
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>	
	<21169.39199.902511.563980@mariner.uk.xensource.com>	
	<52B1A1B9.70307@samsung.com>	
	<1387373404.28680.19.camel@kazak.uk.xensource.com>	
	<52B1B616.4010402@samsung.com>	
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>	
	<1389951973.6697.47.camel@kazak.uk.xensource.com>	
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>	
	<1389955476.6697.58.camel@kazak.uk.xensource.com>	
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>	
	<1389956491.6697.64.camel@kazak.uk.xensource.com>	
	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
	<1389959942.6697.87.camel@kazak.uk.xensource.com>
In-Reply-To: <1389959942.6697.87.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Eugene Fedotov <e.fedotov@samsung.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 12:59, Ian Campbell wrote:
> On Fri, 2014-01-17 at 19:43 +0800, Dennis Lan (dlan) wrote:
>> On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>
>>
>>> vif-bridge and the common scripts which it includes would be a good
>>> start. Just an echo at the top to confirm that the script is running
>>> would be useful.
>>>
>>> I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
>>> when these scripts were launched by udev, but now that libxl runs them
>>> you may find that the debug from the script comes out on stdout/err of
>>> the xl create command so perhaps that isn't needed any more.
>>>
>>>> headless here.
>>>
>>> That shouldn't matter, you are looking for output from userspace
>>> scripts, not kernel or hypervisor logs.
>>>
>>> Ian.
>>>
>>
>> Hi Ian
>> I suspect that with 4.4.0 the network devices were not even detected.
>> This is the output from 4.3.1; my notes follow the relevant lines.
>>
>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script: /etc/xen/scripts/vif-bridge online
>> dlan: vif-bridge start
>> dlan: vif-common start
>>
>> dlan: vif-bridge start -> output from vif-bridge script
>> dlan: vif-common start -> output from vif-common.sh script
> 
> So these are the 4.3 logs? Have you tried 4.4 and found that it doesn't
> produce the same output?
> 
> (please can you try and set the text type to "preformatted" for the logs
> -- having them wrapped makes them very hard to read).
> 
> The lack of 
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> in your original logs is a bit concerning.
> 
> Roger -- any ideas?
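
For reference, the logging trick Ian describes in the quoted mail above
(an echo at the top plus redirecting all output) can be sketched as a
minimal standalone script. This is only a sketch, not the real
vif-bridge script: the marker text and the action argument handling are
placeholders; only the `exec 1>/tmp/hotplug.log 2>&1` line and the log
path come from the mail itself.

```shell
#!/bin/sh
# Sketch of the debugging pattern from Ian's mail: capture everything
# the hotplug script prints (stdout and stderr) into a log file, then
# emit a marker line so we can confirm the script actually ran.
exec 1>/tmp/hotplug.log 2>&1
echo "vif-bridge: invoked with action '$1'"
# ... the real vif-bridge logic would follow here ...
```

Running the instrumented script once and then checking the log (for
example with `grep invoked /tmp/hotplug.log`) answers exactly the
question in this thread: whether the hotplug script was executed at all.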

My first guess would be that libxl__get_domid failed; however, I'm not
able to reproduce this. I'm attaching a patch that adds an error message
if libxl__get_domid fails and also prevents the removal of xenstore
entries, so we can see what's going on. Dennis/Eugene, could you try the
attached patch and send the output of xl -vvv create <...> and
xenstore-ls -fp after the failed creation?

---
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..03f9fe9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1296,6 +1296,9 @@ static void domcreate_complete(libxl__egc *egc,
         rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
 
     if (rc) {
+        LOG(ERROR, "domain creation failed, not doing removal of xs entries");
+        dcs->callback(egc, dcs, rc, -1);
+        return;
         if (dcs->guest_domid) {
             dcs->dds.ao = ao;
             dcs->dds.domid = dcs->guest_domid;
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index ba7d100..56d8162 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -965,7 +965,10 @@ static void device_hotplug(libxl__egc *egc, libxl__ao_device *aodev)
      * hotplug scripts
      */
     rc = libxl__get_domid(gc, &domid);
-    if (rc) goto out;
+    if (rc) {
+        LOG(ERROR, "unable to get domain id, error: %d", rc);
+        goto out;
+    }
     if (aodev->dev->backend_domid != domid) {
         if (aodev->action != LIBXL__DEVICE_ACTION_REMOVE)
             goto out;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49hi-0004gj-5h; Fri, 17 Jan 2014 13:43:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W49hg-0004ge-Cw
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:43:20 +0000
Received: from [193.109.254.147:20543] by server-13.bemta-14.messagelabs.com
	id 2F/DE-19374-77339D25; Fri, 17 Jan 2014 13:43:19 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389966197!11564139!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5638 invoked from network); 17 Jan 2014 13:43:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:43:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91756358"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 13:43:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:43:17 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W49hc-0000Sh-C4;
	Fri, 17 Jan 2014 13:43:16 +0000
Date: Fri, 17 Jan 2014 13:43:16 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117134315.GB16586@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 01:17:55PM +0000, Stefano Stabellini wrote:
> On Fri, 17 Jan 2014, Anthony PERARD wrote:
> > On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> > > On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > > > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > > > There is an update to QEMU 1.6; I have done a merge and put it in a tree:
> > > > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > > > 
> > > > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > > > could have expected.
> > > > > > > > 
> > > > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > > > Xen and their potential impact on the release.
> > > > > > > 
> > > > > > > I have listed the changes here that have a potential impact on Xen,
> > > > > > > with the ones that I think are quite important at the beginning. Either
> > > > > > > the commit title speaks for itself or I added a small description of
> > > > > > > what is affected.
> > > > > > 
> > > > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > > > on a freeze exception. Did you refer to 
> > > > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > > > not the very briefest words you can possibly manage.
> > > > > > 
> > > > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > > > giving it a blanket exemption based on what you've presented here, or
> > > > > > even of cherry picking what might be the important ones. If you think
> > > > > > any or all of it is actually important for 4.4 please make a proper case
> > > > > > for inclusion, either of the aggregate or of the individual changes.
> > > > > 
> > > > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > > > stable tree?
> > > > 
> > > > Yes, a simple merge.
> > > > 
> > > > > I also assume that you tested at the very least the basic
> > > > > PV and HVM configurations?
> > > > 
> > > > :(, no, I haven't tried PV. But I did try HVM.
> > > > 
> > > > There is one thing that I may want to try: migration from the
> > > > previous version of Xen. There is one patch that changes (fixes?) that.
> > > 
> > > Please do and let me know if it works as expected.
> > 
> > I have tried a PV guest; it works fine.
> > 
> > I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
> > both the current qemu-xen tree and with the merge of 1.6.2, but the
> > migration fails in both cases with the same error reported by qemu
> > (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0). I did not
> > investigate that; it might just be an issue with my compile script ...
> > (using the wrong qemu-xen tree).
> 
> It is important that we identify what is the cause of the problem.
> Especially if you think that it could be a "compile script" issue,
> because I imagine that if it is, it might invalidate your previous
> positive tests too.

I was always compiling with the master branch of qemu-xen, so I had
Xen 4.3 with QEMU 1.6 instead of 1.3. So it only invalidates the
migration test.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49hi-0004gj-5h; Fri, 17 Jan 2014 13:43:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W49hg-0004ge-Cw
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:43:20 +0000
Received: from [193.109.254.147:20543] by server-13.bemta-14.messagelabs.com
	id 2F/DE-19374-77339D25; Fri, 17 Jan 2014 13:43:19 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389966197!11564139!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5638 invoked from network); 17 Jan 2014 13:43:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:43:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91756358"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 13:43:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:43:17 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W49hc-0000Sh-C4;
	Fri, 17 Jan 2014 13:43:16 +0000
Date: Fri, 17 Jan 2014 13:43:16 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117134315.GB16586@perard.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 01:17:55PM +0000, Stefano Stabellini wrote:
> On Fri, 17 Jan 2014, Anthony PERARD wrote:
> > On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> > > On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > > > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > > > 
> > > > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > > > could have expected.
> > > > > > > > 
> > > > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > > > Xen and their potential impact on the release.
> > > > > > > 
> > > > > > > I have a list the change here that have a potential impact on Xen, with
> > > > > > > the ones that I think are quite important at the beginning. Either the
> > > > > > > commit title speak for itself or I added a small description on what is
> > > > > > > affected.
> > > > > > 
> > > > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > > > on a freeze exception. Did you refer to 
> > > > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > > > not the very briefest words you can possibly manage.
> > > > > > 
> > > > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > > > giving it a blanket exemption based on what you've presented here, or
> > > > > > even of cherry picking what might be the important ones. If you think
> > > > > > any or all of it is actually important for 4.4 please make a proper case
> > > > > > for inclusion, either of the aggregate or of the individual changes.
> > > > > 
> > > > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > > > stable tree?
> > > > 
> > > > Yes, a simple merge.
> > > > 
> > > > > I also assume that you tested at the very least the basic
> > > > > PV and HVM configurations?
> > > > 
> > > > :(, no, I haven't tried PV. But I did try HVM.
> > > > 
> > > > There is one thing that I may want to try: migration from the
> > > > previous version of Xen. There is one patch that changes (fixes?) that.
> > > 
> > > Please do and let me know if it works as expected.
> > 
> > I have tried a PV guest; it works fine.
> > 
> > I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
> > both the current qemu-xen tree and with the merge of 1.6.2, but the
> > migration fails in both cases because of the same error reported by qemu
> > (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0). I did not
> > investigate that. It might just be an issue with my compile script ...
> > (using the wrong qemu-xen tree).
> 
> It is important that we identify what is the cause of the problem.
> Especially if you think that it could be a "compile script" issue,
> because I imagine that if it is, it might invalidate your previous
> positive tests too.

I was always compiling with the master branch of qemu-xen, so I had
Xen 4.3 with QEMU 1.6 instead of 1.3. So it only invalidates the
migration test.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:45:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49jj-0004oS-UJ; Fri, 17 Jan 2014 13:45:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W49ji-0004oJ-MP
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:45:26 +0000
Received: from [85.158.137.68:59013] by server-9.bemta-3.messagelabs.com id
	24/7D-13104-5F339D25; Fri, 17 Jan 2014 13:45:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389966324!9731089!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31649 invoked from network); 17 Jan 2014 13:45:25 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:45:25 -0000
Received: by mail-wi0-f175.google.com with SMTP id hr1so708411wib.2
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 05:45:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=o20v3u1W/AFMqlLmKwxjEwoJnXnNjVeiKZVFU838MHA=;
	b=INniB7d1bCMg/pyhm5tZ9UbW9S8EYJV8iX8Lba4VyLeZN7/wVRsIDtINRYMezT1UsM
	AQsiUHq+KQmUwJrhGqZIZBhPx/MrM0klXyFn5E9FHD3NMSmW0xU3L8B3ngXVb5P0yDHH
	8hfHR/vV4jR9GHPYt89Vp4YvUeGOCsgQgmarPyU2+osg+NG7cjtIXwCkXyLmFaR0LdiO
	cLJattH30aMLyUdxTz91BoPqpDi9v9XT8Ghk/dGbFb4jrF845RGdTWeWluuseg+7btXX
	3wZP0Fe4EZ3hbSiJcuf14jdpgmqNqy9xf/xSUxO+17FR0dJnHB8v1ZxzZotjp/I9Kdya
	/otg==
X-Gm-Message-State: ALoCoQn/C+neDerjGuZ0wimQ7oa3l5alGgqfvz5JG+KpRYT8mvUO0mj4O01iR34aoQoxz5ctFsIK
X-Received: by 10.180.160.166 with SMTP id xl6mr2472944wib.43.1389966324694;
	Fri, 17 Jan 2014 05:45:24 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id dm2sm3286200wib.8.2014.01.17.05.45.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 05:45:23 -0800 (PST)
Message-ID: <52D933F2.8000101@linaro.org>
Date: Fri, 17 Jan 2014 13:45:22 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Nathan Whitehorn <nwhitehorn@freebsd.org>, 
 Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
In-Reply-To: <52D89DC9.7050303@freebsd.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
> On 01/16/14 18:36, Julien Grall wrote:
>
> The specification is actually a little unclear on this point, but
> FreeBSD follows the same rules as Linux in any case. Most, if not all,
> FreeBSD code should check any ancestor at this point as well. In
> particular fdt_intr_to_rl does this. What it *doesn't* do is allow
> #interrupt-cells to be larger than 2. I'll fix this this weekend.

Thanks for working on this part.

Another thing to take into account: the first cell doesn't always
contain the interrupt.
With the Linux binding (#interrupt-cells == 3):
   - cell 1: 1 or 0 (PPI vs SPI)
   - cell 2: IRQ number relative to the start of the PPI/SPI range
   - cell 3: cpu mask + interrupt flags (edge/level...)
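
To make the cell layout above concrete, here is a small sketch of how such
a 3-cell specifier is decoded. `decode_gic_irq` is a hypothetical helper,
not code from this thread; the base numbers (SPIs starting at hardware IRQ
32, PPIs at 16) and the flag/cpu-mask bit positions follow the Linux GIC
binding and are assumptions for illustration only.

```python
# Sketch only: decode a 3-cell GIC interrupt specifier as laid out above.
# cell 1: 0 = SPI, 1 = PPI
# cell 2: interrupt number relative to the start of the SPI/PPI range
# cell 3: bits[3:0] trigger flags (edge/level), bits[15:8] PPI cpu mask

SPI_BASE = 32   # assumed: SPIs start at hardware IRQ 32 on the GIC
PPI_BASE = 16   # assumed: PPIs start at hardware IRQ 16

def decode_gic_irq(cells):
    kind, rel_irq, flags = cells
    if kind == 0:                      # SPI
        hwirq = SPI_BASE + rel_irq
    elif kind == 1:                    # PPI
        hwirq = PPI_BASE + rel_irq
    else:
        raise ValueError("first cell must be 0 (SPI) or 1 (PPI)")
    trigger = flags & 0xf              # edge/level encoding
    cpu_mask = (flags >> 8) & 0xff     # meaningful for PPIs only
    return hwirq, trigger, cpu_mask

# e.g. <0 29 4> is SPI 29, level high -> hardware IRQ 61
```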

>>> On the subject of simple-bus, they usually aren't necessary. For
>>> example, all hypervisor devices on IBM hardware live under /vdevice,
>>> which is attached to the device tree root. They don't use MMIO, so
>>> simple-bus doesn't really make sense. How does Xen communicate with the
>>> OS in these devices?
>>> -Nathan
>>
>> As I understand, only the simple bus code (see simplebus_attach) is
>> translating the interrupts in the device on a resource.
>> So if you have a node directly attached to the root node with
>> interrupts and MMIO, the driver won't be able to retrieve and
>> translate the interrupts via bus_alloc_resources.
>
> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.

I have noticed at least one issue (which is not related to my problem):
   - When the OFW nexus translates an IRQ (with #interrupt-cells > 1),
the rid will be equal to 0, 0 + #interrupt-cells, ... so the numbering
will be discontinuous. Whereas on simple-bus, for the same device, the
rids will be 0, 1, 2...
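
The rid numbering difference above can be sketched like this (illustrative
Python, not FreeBSD code; assumes a device with three interrupts and
#interrupt-cells == 3):

```python
# Illustrative only: rid assignment for a device with three interrupts.

INTERRUPT_CELLS = 3
NUM_IRQS = 3

# OFW nexus: the rid tracks the offset into the "interrupts" property,
# so it advances by #interrupt-cells for each interrupt.
nexus_rids = [i * INTERRUPT_CELLS for i in range(NUM_IRQS)]

# simple-bus: interrupts are counted one by one, so rids are consecutive.
simplebus_rids = list(range(NUM_IRQS))

print(nexus_rids)      # [0, 3, 6]
print(simplebus_rids)  # [0, 1, 2]
```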

For my issue, I will look at it again this weekend.

BTW, when I look at the FDT (sys/dev/fdt_common.c) and the OFW
(sys/dev/ofw_nexus.c) code, I notice that a lot of code is duplicated.

It would be nice to have common helpers to avoid duplicated code and
future issues :).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:46:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49kb-0004tb-Lp; Fri, 17 Jan 2014 13:46:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W49ka-0004tU-Be
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:46:20 +0000
Received: from [193.109.254.147:54279] by server-3.bemta-14.messagelabs.com id
	DF/A2-11000-B2439D25; Fri, 17 Jan 2014 13:46:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389966377!11521516!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14454 invoked from network); 17 Jan 2014 13:46:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:46:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91757809"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 13:46:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 08:46:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W49kV-0000V7-K1;
	Fri, 17 Jan 2014 13:46:15 +0000
Date: Fri, 17 Jan 2014 13:45:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony PERARD <anthony.perard@citrix.com>
In-Reply-To: <20140117134315.GB16586@perard.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401171344400.21510@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
	<20140117134315.GB16586@perard.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Anthony PERARD wrote:
> On Fri, Jan 17, 2014 at 01:17:55PM +0000, Stefano Stabellini wrote:
> > On Fri, 17 Jan 2014, Anthony PERARD wrote:
> > > On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> > > > On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > > > > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > > > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > > > > 
> > > > > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > > > > could have expected.
> > > > > > > > > 
> > > > > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > > > > Xen and their potential impact on the release.
> > > > > > > > 
> > > > > > > > I have a list of the changes here that have a potential impact on Xen,
> > > > > > > > with the ones that I think are quite important at the beginning. Either
> > > > > > > > the commit title speaks for itself or I added a small description of
> > > > > > > > what is affected.
> > > > > > > 
> > > > > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > > > > on a freeze exception. Did you refer to 
> > > > > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > > > > not the very briefest words you can possibly manage.
> > > > > > > 
> > > > > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > > > > giving it a blanket exemption based on what you've presented here, or
> > > > > > > even of cherry picking what might be the important ones. If you think
> > > > > > > any or all of it is actually important for 4.4 please make a proper case
> > > > > > > for inclusion, either of the aggregate or of the individual changes.
> > > > > > 
> > > > > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > > > > stable tree?
> > > > > 
> > > > > Yes, a simple merge.
> > > > > 
> > > > > > I also assume that you tested at the very least the basic
> > > > > > PV and HVM configurations?
> > > > > 
> > > > > :(, no, I haven't tried PV. But I did try HVM.
> > > > > 
> > > > > There is one thing that I may want to try: migration from the
> > > > > previous version of Xen. There is one patch that changes (fixes?) that.
> > > > 
> > > > Please do and let me know if it works as expected.
> > > 
> > > I have tried a PV guest; it works fine.
> > > 
> > > I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
> > > both the current qemu-xen tree and with the merge of 1.6.2, but the
> > > migration fails in both cases because of the same error reported by qemu
> > > (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0). I did not
> > > investigate that. It might just be an issue with my compile script ...
> > > (using the wrong qemu-xen tree).
> > 
> > It is important that we identify what is the cause of the problem.
> > Especially if you think that it could be a "compile script" issue,
> > because I imagine that if it is, it might invalidate your previous
> > positive tests too.
> 
> I was always compiling with the master branch of qemu-xen, so I had
> Xen 4.3 with QEMU 1.6 instead of 1.3. So it only invalidates the
> migration test.

OK, good. Can you double check how migration from Xen 4.3 with QEMU 1.3
works?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 13:49:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 13:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W49nu-0005Df-FZ; Fri, 17 Jan 2014 13:49:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W49ns-0005DV-1W
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 13:49:44 +0000
Received: from [85.158.139.211:24874] by server-8.bemta-5.messagelabs.com id
	B5/01-29838-7F439D25; Fri, 17 Jan 2014 13:49:43 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1389966582!10397427!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32704 invoked from network); 17 Jan 2014 13:49:42 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 13:49:42 -0000
Received: by mail-we0-f169.google.com with SMTP id u57so4595056wes.28
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 05:49:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=gptWh6SCeZgU4ox3TMfLGNjiaIT4/FIrU9bsOrmbQ3o=;
	b=Z2pbuINs4udnu7QiW8X5UqJNokgnqyxvhgud7LQxUFiIA5Ubi4kNnxPPSTO9jA20KB
	+J2Ln4IkkPhRc3kx5QfJ3emj7To/qUOxTvffTWfXVlHKvFLYER7zFs4T7N703Q4F9V1F
	WatmFnOKtj0QjlQolXuNIa7BpR7wOPXqEPWVKrBG5eiz88BTSp0wNhjeD0qPAgC8K05K
	5mVGq4KvleSP0C4BjYDnAdswHVU953o6GB0STzaqOqSrtd+/l3H7OXwRSJAH1GevJlLU
	ygZ6Ocvaca68kPUIallA+4/6X53Lelb0Oe7iyYQrfNodsO3vnVntHapHqMEzVkO3HUqD
	1kuA==
X-Gm-Message-State: ALoCoQmmMv2SiqXvV2fiHD4OUMuN0ABBYAY65PjsMr4Drp9QZO3HY+jZ4NstUpeacHaVcxzLri2e
X-Received: by 10.194.92.7 with SMTP id ci7mr2157335wjb.58.1389966582419;
	Fri, 17 Jan 2014 05:49:42 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id x4sm3327855wif.0.2014.01.17.05.49.40
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 05:49:41 -0800 (PST)
Message-ID: <52D934F4.709@linaro.org>
Date: Fri, 17 Jan 2014 13:49:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>	
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>	
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>	
	<52D87B15.5090208@linaro.org>
	<1389950962.6697.33.camel@kazak.uk.xensource.com>
In-Reply-To: <1389950962.6697.33.camel@kazak.uk.xensource.com>
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	Nathan Whitehorn <nwhitehorn@freebsd.org>, gibbs@freebsd.org,
	Warner Losh <imp@bsdimp.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/17/2014 09:29 AM, Ian Campbell wrote:
> On Fri, 2014-01-17 at 00:36 +0000, Julien Grall wrote:
>>
>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>> On the subject of simple-bus, they usually aren't necessary. For
>>> example, all hypervisor devices on IBM hardware live under /vdevice,
>>> which is attached to the device tree root. They don't use MMIO, so
>>> simple-bus doesn't really make sense. How does Xen communicate with the
>>> OS in these devices?
>>> -Nathan
>>
>> As I understand it, only the simple-bus code (see simplebus_attach)
>> translates a device's interrupts into a resource.
>> So if you have a node directly attached to the root node with interrupts
>> and MMIO, the driver won't be able to retrieve and translate the
>> interrupts via bus_alloc_resources.
>
> Is the root node not considered to be a "top-level simple-bus" with a
> 1:1 mapping of MMIO and interrupts? (Linux seems to treat it this way,
> but I haven't trawled the docs for a spec reference to back that
> behaviour up). I take it BSD doesn't do this?

There are two different paths on FreeBSD to decode interrupts/MMIO
(depending on whether you are under the root node or under a simple-bus
node). Most of the code is duplicated, but some parts differ (for
instance interrupt decoding, see my answer to Nathan).

I will take a closer look at the code this weekend and see if I can fix it.
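
As a rough illustration of the asymmetry described above (purely a toy
model with invented names, not FreeBSD's actual simplebus_attach or
bus_alloc_resources code): only nodes hanging off a simple-bus parent get
their interrupts translated into a driver-usable resource, while
root-attached nodes do not.

```python
# Toy model of the two decode paths: interrupt properties are only
# translated when the device sits under a simple-bus node, not when it
# is attached directly to the device tree root. Names are illustrative,
# not real FreeBSD APIs.

def decode_resources(node):
    """Return the resources a driver would see via the bus layer."""
    resources = {"mmio": node.get("reg")}
    if node.get("parent") == "simple-bus":
        # simplebus_attach-style path: interrupts get translated too.
        resources["irq"] = node.get("interrupts")
    else:
        # Root-attached path: interrupts are never translated.
        resources["irq"] = None
    return resources

under_simple_bus = {"parent": "simple-bus", "reg": 0x1000, "interrupts": [31]}
under_root       = {"parent": "root",       "reg": 0x1000, "interrupts": [31]}

print(decode_resources(under_simple_bus)["irq"])  # [31]
print(decode_resources(under_root)["irq"])        # None
```

The second lookup returning nothing is the failure mode being discussed:
identical properties, different result, purely because of the parent node.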

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:03:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4A0b-0006JF-80; Fri, 17 Jan 2014 14:02:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W4A0a-0006JA-5P
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 14:02:52 +0000
Received: from [193.109.254.147:33765] by server-6.bemta-14.messagelabs.com id
	87/05-14958-B0839D25; Fri, 17 Jan 2014 14:02:51 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389967369!11461403!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32009 invoked from network); 17 Jan 2014 14:02:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 14:02:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="91762637"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 14:02:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 09:02:48 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W4A0V-0000of-23;
	Fri, 17 Jan 2014 14:02:47 +0000
Date: Fri, 17 Jan 2014 14:02:46 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: annie li <annie.li@oracle.com>
Message-ID: <20140117140246.GB11681@zion.uk.xensource.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
	<52D922DD.2060407@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D922DD.2060407@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 08:32:29PM +0800, annie li wrote:
> 
> On 2014-1-17 20:08, Wei Liu wrote:
> >On Fri, Jan 17, 2014 at 02:25:40PM +0800, annie li wrote:
> >>On 2014/1/16 19:10, David Vrabel wrote:
> >>>On 15/01/14 23:57, Annie Li wrote:
> >>>>This patch implements two things:
> >>>>
> >>>>* release the grant reference and skb for the rx path; this fixes a resource leak.
> >>>>* clean up grant transfer code kept from the old netfront (2.6.18), which granted
> >>>>pages for access/map and transfer. Grant transfer is deprecated in the current
> >>>>netfront, so remove the corresponding release code for transfer.
> >>>>
> >>>>gnttab_end_foreign_access_ref may fail when the grant entry is currently in use
> >>>>for reading or writing. This patch does not cover that case; an improvement for
> >>>>this failure may be implemented in a separate patch.
> >>>I don't think replacing a resource leak with a security bug is a good idea.
> >>>
> >>>If you would prefer not to fix the gnttab_end_foreign_access() call, I
> >>>think you can fix this in netfront by taking a reference to the page
> >>>before calling gnttab_end_foreign_access().  This will ensure the page
> >>>isn't freed until the subsequent kfree_skb(), or the gref is released by
> >>>the foreign domain (whichever is later).
> >>Taking a reference to the page before calling
> >>gnttab_end_foreign_access() delays the free work until kfree_skb().
> >>Simply adding put_page before kfree_skb() does not make things
> >>different from gnttab_end_foreign_access_ref(): the pages will
> >>be freed by kfree_skb(), and the problem will then be hit in
> >>gnttab_handle_deferred() when it frees pages that have already been freed.
> >>
> >I think David's idea is:
> >
> >	get_page
> >	gnttab_end_foreign_access
> >	kfree_skb
> >
> >The get_page is to offset put_page in gnttab_end_foreign_access. You
> >don't need to put page before kfree_skb.
> 
> Yes, this is what I described below regarding David's patch.
> 
> >>So put_page is required in gnttab_end_foreign_access(), this will
> >>ensure either free is taken by kfree_skb or gnttab_handle_deferred.
> >>This involves changes in blkfront/pcifront/tpmfront(just like your
> >>patch), this way ensure page is released when ref is end.
> 
> But this would have some issues in the netfront tx path. Netfront ends all

What issue with tx path? Your patch only touches rx skbs, doesn't it?

> grant references of one skb first and then releases the skb. If
> gnttab_end_foreign_access_ref fails in gnttab_end_foreign_access(),
> the frag page and its corresponding grant reference will be put in an
> entry, and the release work will be done in the timer routine. If some

I understand up to this point.

> frag pages of one skb are freed in this timer routine, then
> dev_kfree_skb_irq will free pages which have already been freed.

Why is dev_kfree_skb_irq involved? It is used in the tx path, not the rx
path. Even if we look at dev_kfree_skb_irq, it eventually calls __kfree_skb
for a dropped packet, which should do the right thing if we don't mess up
the ref counts.
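
The ordering quoted above (get_page, then gnttab_end_foreign_access, then
kfree_skb) can be illustrated with a toy reference count. The helpers here
only mimic the kernel's get_page/put_page and the deferred-release path of
gnttab_handle_deferred; this is a sketch of the refcount reasoning, not the
Linux implementation:

```python
# Toy refcount model of the proposed ordering:
#   get_page(); gnttab_end_foreign_access(); kfree_skb()
# The extra get_page() offsets the put_page() inside
# gnttab_end_foreign_access(), so each page is freed exactly once,
# whichever of kfree_skb() or the deferred worker drops the last ref.

freed = []     # names of pages that have been freed
deferred = []  # pages whose grant ref could not be ended yet

class Page:
    def __init__(self, name):
        self.name = name
        self.refcount = 1  # the skb's own reference

def get_page(p):
    p.refcount += 1

def put_page(p):
    p.refcount -= 1
    if p.refcount == 0:
        freed.append(p.name)

def gnttab_end_foreign_access(p, ref_still_in_use):
    if ref_still_in_use:
        # Deferred path: the worker drops the reference later.
        deferred.append(p)
    else:
        put_page(p)

def gnttab_handle_deferred():
    while deferred:
        put_page(deferred.pop())

def kfree_skb(p):
    put_page(p)  # the skb drops its own reference

# Immediate case: freed once, by kfree_skb().
a = Page("a")
get_page(a)
gnttab_end_foreign_access(a, ref_still_in_use=False)
kfree_skb(a)

# Deferred case: freed once, by the deferred worker.
b = Page("b")
get_page(b)
gnttab_end_foreign_access(b, ref_still_in_use=True)
kfree_skb(b)            # skb gone, page still pinned by the grant ref
gnttab_handle_deferred()

print(freed)  # ['a', 'b'] -- each page freed exactly once
```

Without the initial get_page, the deferred case would drop the refcount to
zero twice (once via kfree_skb, once via the worker), which is the
double-free being debated in this thread.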

Wei.

> So I prefer the following way I mentioned. Suggestions?
> 
> >>Another solution I am considering is calling
> >>gnttab_end_foreign_access() with the page parameter as NULL; then
> >>gnttab_end_foreign_access will only end the grant reference, while
> >>releasing the page is left to kfree_skb().
> 
> Thanks
> Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:16:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4ADw-0006z5-NG; Fri, 17 Jan 2014 14:16:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4ADv-0006z0-7x
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 14:16:39 +0000
Received: from [85.158.143.35:64710] by server-1.bemta-4.messagelabs.com id
	27/15-02132-64B39D25; Fri, 17 Jan 2014 14:16:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389968196!12426464!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3170 invoked from network); 17 Jan 2014 14:16:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 14:16:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93862321"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 14:16:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 09:16:35 -0500
Message-ID: <1389968194.6697.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>
Date: Fri, 17 Jan 2014 14:16:34 +0000
In-Reply-To: <1389964977.2099.3.camel@64bitDom0>
References: <1389964977.2099.3.camel@64bitDom0>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: mihai.bucicoiu@trust.cased.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 14:22 +0100, Ferdinand Brasser wrote:
> Dear all,
> 
> My name is Ferdinand Brasser, research assistant at CASED/TU Darmstadt. 
> 
> Here at CASED, we have developed a live updating mechanism for Xen,
> which we call HotSwap. Currently we have a prototype for Xen 4.2 and
> would like to know if there is any interest from the community in
> integrating our approach into Xen. If so, any advice on how to proceed
> would be welcome.
> 
> Our approach to updating Xen is, at a very high level, to load a complete
> new version of Xen at runtime and then transfer the state of the old
> version to the new one. Afterwards, execution continues in the new
> version. We make use of Xen functions to disable all but one CPU, and to
> disable interrupts, during the update process in order to keep the state
> consistent while transferring. We have evaluated our prototype; the
> update process takes about 45ms on our test system.

This sounds pretty cool. I think everyone would be interested in hearing
a bit more about it and in seeing the code.

What is the scope? Does/will it work between arbitrary versions of Xen,
or is it more aimed at hotfixing a particular version of Xen, IOW when
the before and after versions are very similar modulo a security update
or something similarly contained?

How much code is it to make this work and how intrusive is it?

Ultimately, for it to go upstream it would need to be ported to work with
the development version of Xen and then reviewed, etc. Our guidance
for patch submission is at
http://wiki.xen.org/wiki/Submitting_Xen_Patches but it might be a bit
premature -- just throwing the code into a git tree somewhere would
allow interested folks to take a look.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:16:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4ADw-0006z5-NG; Fri, 17 Jan 2014 14:16:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4ADv-0006z0-7x
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 14:16:39 +0000
Received: from [85.158.143.35:64710] by server-1.bemta-4.messagelabs.com id
	27/15-02132-64B39D25; Fri, 17 Jan 2014 14:16:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389968196!12426464!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3170 invoked from network); 17 Jan 2014 14:16:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 14:16:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,670,1384300800"; d="scan'208";a="93862321"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 14:16:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 09:16:35 -0500
Message-ID: <1389968194.6697.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>
Date: Fri, 17 Jan 2014 14:16:34 +0000
In-Reply-To: <1389964977.2099.3.camel@64bitDom0>
References: <1389964977.2099.3.camel@64bitDom0>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: mihai.bucicoiu@trust.cased.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 14:22 +0100, Ferdinand Brasser wrote:
> Dear all,
> 
> My name is Ferdinand Brasser, research assistant at CASED/TU Darmstadt. 
> 
> Here at CASED, we have developed a live updating mechanism for Xen,
> which we call it HotSwap. Currently we have a prototype for Xen 4.2 and
> would like to know if there is any interest from the community to
> integrate our approach into Xen. If so, some advice on how to proceed is
> welcomed.
> 
> Our approach to updating Xen is - at a very high level - to load a
> complete new version of Xen at runtime and then transfer the state of
> the old version to the new one. Afterwards, execution is continued by
> the new version. We make use of Xen functions to disable all but one
> CPU, and to disable interrupts, during the update process, to keep the
> state consistent while transferring. We have evaluated our prototype;
> the update process takes about 45ms on our test system.

This sounds pretty cool. I think everyone would be interested in hearing
a bit more about it and in seeing the code.

What is the scope? Does/will it work between arbitrary versions of Xen,
or is it more aimed at hotfixing a particular version of Xen, IOW when
the before and after versions are very similar modulo a security update
or something similarly contained?

How much code is it to make this work and how intrusive is it?

Ultimately for it to go upstream it would need to be ported to work with
the development version of Xen and then reviewed, etc. Our guidance
for patch submission is at
http://wiki.xen.org/wiki/Submitting_Xen_Patches but it might be a bit
premature -- just throwing the code into a git tree somewhere would
allow interested folks to take a look.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:33:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:33:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4AU2-000846-UU; Fri, 17 Jan 2014 14:33:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4AU2-000841-0H
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 14:33:18 +0000
Received: from [85.158.137.68:29140] by server-14.bemta-3.messagelabs.com id
	B1/D9-06105-D2F39D25; Fri, 17 Jan 2014 14:33:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389969196!9742921!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32720 invoked from network); 17 Jan 2014 14:33:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 14:33:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 14:33:15 +0000
Message-Id: <52D94D3A0200007800114957@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 14:33:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While looking into Jürgen's issue with PoD setup causing soft lockups
in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
HVM-with-PoD case") just doesn't work - the BUG_ON() added there
triggers as soon as there's a reasonable amount of excess memory.
And that is despite me knowing that I spent significant amounts of time
in testing that change - I must have tested something else than what
finally got checked in, or must have screwed up in some other way.
Extremely embarrassing...

In the course of finding a proper solution I soon stumbled across
upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
number of existing RAM pages"), and hence went ahead and
compared three different calculations for initial bs.current_pages:

(a) upstream's (open coding get_num_physpages(), as I did this on
    an older kernel)
(b) plain old num_physpages (equaling the maximum RAM PFN)
(c) XENMEM_get_pod_target output (with the hypervisor altered
    to not refuse this for a domain doing it on itself)

The fourth (original) method, using totalram_pages, was already
known to result in the driver not ballooning down enough, and
hence setting up the domain for an eventual crash when the PoD
cache runs empty.

Interestingly, (a) too results in the driver not ballooning down
enough - there's a gap of exactly as many pages as are marked
reserved below the 1Mb boundary. Therefore the aforementioned
upstream commit is presumably broken.

Short of a reliable (and ideally architecture independent) way of
knowing the necessary adjustment value, the next best solution
(not ballooning down too little, but also not ballooning down much
more than necessary) turns out to be using the minimum of (b)
and (c): When the domain only has memory below 4Gb, (b) is
more precise, whereas in the other cases (c) gets closest.

Question now is: Considering that (a) is broken (and hard to fix)
and (b) presumably leads, in a large part of practical cases, to
too much ballooning down, shouldn't we open up
XENMEM_get_pod_target for domains to query on themselves?
Alternatively, can anyone see another way to calculate a
reasonably precise value?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4AhD-0000W9-M8; Fri, 17 Jan 2014 14:46:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1W4AhC-0000W3-E4
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 14:46:54 +0000
Received: from [193.109.254.147:12059] by server-12.bemta-14.messagelabs.com
	id D8/E4-13681-D5249D25; Fri, 17 Jan 2014 14:46:53 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389970012!11498173!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7686 invoked from network); 17 Jan 2014 14:46:52 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-2.tower-27.messagelabs.com with SMTP;
	17 Jan 2014 14:46:52 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 17 Jan 2014 06:46:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,670,1384329600"; d="scan'208";a="460523397"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga001.fm.intel.com with ESMTP; 17 Jan 2014 06:41:51 -0800
Received: from fmsmsx154.amr.corp.intel.com (10.18.116.70) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 17 Jan 2014 06:41:50 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX154.amr.corp.intel.com (10.18.116.70) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 17 Jan 2014 06:41:50 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Fri, 17 Jan 2014 22:41:47 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>, Jan Beulich
	<JBeulich@suse.com>
Thread-Topic: [PATCH v2] mce: Fix race condition in mctelem_xchg_head
Thread-Index: AQHPEtL22bAwsogD10i+nSd2T/2RUpqI/jfg
Date: Fri, 17 Jan 2014 14:41:46 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923350149346E@SHSMSX101.ccr.corp.intel.com>
References: <1389878814.1061.1.camel@hamster.uk.xensource.com>
	<52D7F31E0200007800114419@nat28.tlf.novell.com>
	<1389881263.1061.7.camel@hamster.uk.xensource.com>
	<52D7FB460200007800114449@nat28.tlf.novell.com>
	<1389885079.1061.15.camel@hamster.uk.xensource.com>
	<52D80BE802000078001144D2@nat28.tlf.novell.com>
	<1389887534.1061.17.camel@hamster.uk.xensource.com>
In-Reply-To: <1389887534.1061.17.camel@hamster.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Christoph Egger <chegger@amazon.de>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] mce: Fix race condition in
	mctelem_xchg_head
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Frediano Ziglio wrote:
> The function (mctelem_xchg_head()) used to exchange mce telemetry
> list heads is racy.  It may write to the head twice, with the second
> write linking to an element in the wrong state.
> 
> Suppose there are two threads: T1 inserting on the committed list, and
> T2 trying to consume it.
> 
> 1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
> 2. T1 is interrupted after the cmpxchg succeeded.
> 3. T2 gets the list and changes element A and updates the commit list
>    head.
> 4. T1 resumes, reads the prev pointer again, and compares it with the
>    result from the cmpxchg, which succeeded, but in the meantime prev
>    changed in memory.
> 5. T1 thinks the cmpxchg failed and goes around the loop again,
>    linking head to A again.
> 
> To solve the race, use a temporary variable for the prev pointer.
> 
> *linkp (which points to a field in the element) must be updated before
> the cmpxchg(), as after a successful cmpxchg the element might be
> immediately removed and reinitialized.
> 
> The wmb() prior to the cmpxchgptr() call is not necessary since
> cmpxchgptr() is already a full memory barrier.  This wmb() is thus
> removed.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>


Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 14:55:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 14:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Ap9-0001IU-81; Fri, 17 Jan 2014 14:55:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W4Ap7-0001IJ-Mz
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 14:55:06 +0000
Received: from [85.158.137.68:54474] by server-7.bemta-3.messagelabs.com id
	A3/C3-27599-84449D25; Fri, 17 Jan 2014 14:55:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389970502!9805743!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7859 invoked from network); 17 Jan 2014 14:55:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 14:55:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93875551"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 14:55:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 09:55:00 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W4Ap2-0001gS-2N; Fri, 17 Jan 2014 14:55:00 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 17 Jan 2014 14:54:58 +0000
Message-ID: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] [RFC] tools/hvmloader: Control ACPI debugging
	using a platform flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since Qemu-trad
  c/s 147f83f9b7d87a698c200c4f3eb2d36a0e4fe54b
  "hw/piix4acpi: Make writes to ACPI_DBG_IO_ADDR actually work."

there has been quite a lot of extra logging appearing in the VM logs.

The hotplug debugging contributes 2 vmexits per slot per hotplug event, which
are simply a waste of time unless a developer is trying to debug VM
hotplugging problems.

Introduce a platform flag, "acpi-debug", to indicate in the AML whether
debugging writes are wanted or not.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 docs/misc/xenstore-paths.markdown       |    1 +
 tools/firmware/hvmloader/acpi/build.c   |    5 +++++
 tools/firmware/hvmloader/acpi/dsdt.asl  |    1 +
 tools/firmware/hvmloader/acpi/mk_dsdt.c |    6 ++++++
 4 files changed, 13 insertions(+)

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..593a7b1 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -190,6 +190,7 @@ Various platform properties.
 * acpi -- is ACPI enabled for this domain
 * acpi_s3 -- is ACPI S3 support enabled for this domain
 * acpi_s4 -- is ACPI S4 support enabled for this domain
+* acpi-debug -- whether the AML code should write debugging messages to qemu
 
 #### ~/platform/generation-id = INTEGER ":" INTEGER [HVM,INTERNAL]
 
diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
index f1dd3f0..59716bb 100644
--- a/tools/firmware/hvmloader/acpi/build.c
+++ b/tools/firmware/hvmloader/acpi/build.c
@@ -47,6 +47,7 @@ struct acpi_info {
     uint8_t  com2_present:1;    /* 0[1] - System has COM2? */
     uint8_t  lpt1_present:1;    /* 0[2] - System has LPT1? */
     uint8_t  hpet_present:1;    /* 0[3] - System has HPET? */
+    uint8_t  acpi_debugging:1;  /* 0[4] - ACPI debugging enabled ? */
     uint32_t pci_min, pci_len;  /* 4, 8 - PCI I/O hole boundaries */
     uint32_t madt_csum_addr;    /* 12   - Address of MADT checksum */
     uint32_t madt_lapic0_addr;  /* 16   - Address of first MADT LAPIC struct */
@@ -404,6 +405,7 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
     unsigned char       *dsdt;
     unsigned long        secondary_tables[ACPI_MAX_SECONDARY_TABLES];
     int                  nr_secondaries, i;
+    const char          *xs_str;
 
     /* Allocate and initialise the acpi info area. */
     mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
@@ -519,10 +521,13 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
     if ( !new_vm_gid(acpi_info) )
         goto oom;
 
+    xs_str = xenstore_read("platform/acpi-debug", "0");
+
     acpi_info->com1_present = uart_exists(0x3f8);
     acpi_info->com2_present = uart_exists(0x2f8);
     acpi_info->lpt1_present = lpt_exists(0x378);
     acpi_info->hpet_present = hpet_exists(ACPI_HPET_ADDRESS);
+    acpi_info->acpi_debugging = (xs_str[0] == '1');
     acpi_info->pci_min = pci_mem_start;
     acpi_info->pci_len = pci_mem_end - pci_mem_start;
 
diff --git a/tools/firmware/hvmloader/acpi/dsdt.asl b/tools/firmware/hvmloader/acpi/dsdt.asl
index 247a8ad..e753286 100644
--- a/tools/firmware/hvmloader/acpi/dsdt.asl
+++ b/tools/firmware/hvmloader/acpi/dsdt.asl
@@ -51,6 +51,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
            UAR2, 1,
            LTP1, 1,
            HPET, 1,
+           ADBG, 1,
            Offset(4),
            PMIN, 32,
            PLEN, 32,
diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index a4b693b..3f0ca74 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -347,14 +347,18 @@ int main(int argc, char **argv)
             /* _SUN == dev */
             stmt("Name", "_SUN, 0x%08x", slot >> 3);
             push_block("Method", "_EJ0, 1");
+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
             stmt("Store", "0x88, \\_GPE.DPT2");
+            pop_block();
             stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
                  (slot & 1) ? 0x10 : 0x01, slot & ~1);
             pop_block();
             push_block("Method", "_STA, 0");
+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
             stmt("Store", "0x89, \\_GPE.DPT2");
+            pop_block();
             if ( slot & 1 )
                 stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
             else
@@ -426,8 +430,10 @@ int main(int argc, char **argv)
         stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
         stmt("And", "Local1, 0xff, SLT");
         /* Debug */
+        push_block("If", "LEqual( \\_SB.ADBG, ONE )");
         stmt("Store", "SLT, DPT1");
         stmt("Store", "EVT, DPT2");
+        pop_block();
         /* Decision tree */
         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
         pop_block();
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] [RFC] tools/hvmloader: Control ACPI debugging
	using a platform flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since Qemu-trad
  c/s 147f83f9b7d87a698c200c4f3eb2d36a0e4fe54b
  "hw/piix4acpi: Make writes to ACPI_DBG_IO_ADDR actually work."

there has been quite a lot of extra logging appearing in the VM logs.

The hotplug debugging contributes 2 vmexits per slot per hotplug event, which
are simply a waste of time unless a developer is trying to debug VM
hotplugging problems.

Introduce a platform flag, "acpi-debug", to indicate in the AML whether
debugging writes are wanted or not.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 docs/misc/xenstore-paths.markdown       |    1 +
 tools/firmware/hvmloader/acpi/build.c   |    5 +++++
 tools/firmware/hvmloader/acpi/dsdt.asl  |    1 +
 tools/firmware/hvmloader/acpi/mk_dsdt.c |    6 ++++++
 4 files changed, 13 insertions(+)

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..593a7b1 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -190,6 +190,7 @@ Various platform properties.
 * acpi -- is ACPI enabled for this domain
 * acpi_s3 -- is ACPI S3 support enabled for this domain
 * acpi_s4 -- is ACPI S4 support enabled for this domain
+* acpi-debug -- whether the AML code should write debugging messages to qemu
 
 #### ~/platform/generation-id = INTEGER ":" INTEGER [HVM,INTERNAL]
 
diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
index f1dd3f0..59716bb 100644
--- a/tools/firmware/hvmloader/acpi/build.c
+++ b/tools/firmware/hvmloader/acpi/build.c
@@ -47,6 +47,7 @@ struct acpi_info {
     uint8_t  com2_present:1;    /* 0[1] - System has COM2? */
     uint8_t  lpt1_present:1;    /* 0[2] - System has LPT1? */
     uint8_t  hpet_present:1;    /* 0[3] - System has HPET? */
+    uint8_t  acpi_debugging:1;  /* 0[4] - ACPI debugging enabled ? */
     uint32_t pci_min, pci_len;  /* 4, 8 - PCI I/O hole boundaries */
     uint32_t madt_csum_addr;    /* 12   - Address of MADT checksum */
     uint32_t madt_lapic0_addr;  /* 16   - Address of first MADT LAPIC struct */
@@ -404,6 +405,7 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
     unsigned char       *dsdt;
     unsigned long        secondary_tables[ACPI_MAX_SECONDARY_TABLES];
     int                  nr_secondaries, i;
+    const char          *xs_str;
 
     /* Allocate and initialise the acpi info area. */
     mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
@@ -519,10 +521,13 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
     if ( !new_vm_gid(acpi_info) )
         goto oom;
 
+    xs_str = xenstore_read("platform/acpi-debug", "0");
+
     acpi_info->com1_present = uart_exists(0x3f8);
     acpi_info->com2_present = uart_exists(0x2f8);
     acpi_info->lpt1_present = lpt_exists(0x378);
     acpi_info->hpet_present = hpet_exists(ACPI_HPET_ADDRESS);
+    acpi_info->acpi_debugging = (xs_str[0] == '1');
     acpi_info->pci_min = pci_mem_start;
     acpi_info->pci_len = pci_mem_end - pci_mem_start;
 
diff --git a/tools/firmware/hvmloader/acpi/dsdt.asl b/tools/firmware/hvmloader/acpi/dsdt.asl
index 247a8ad..e753286 100644
--- a/tools/firmware/hvmloader/acpi/dsdt.asl
+++ b/tools/firmware/hvmloader/acpi/dsdt.asl
@@ -51,6 +51,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
            UAR2, 1,
            LTP1, 1,
            HPET, 1,
+           ADBG, 1,
            Offset(4),
            PMIN, 32,
            PLEN, 32,
diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index a4b693b..3f0ca74 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -347,14 +347,18 @@ int main(int argc, char **argv)
             /* _SUN == dev */
             stmt("Name", "_SUN, 0x%08x", slot >> 3);
             push_block("Method", "_EJ0, 1");
+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
             stmt("Store", "0x88, \\_GPE.DPT2");
+            pop_block();
             stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
                  (slot & 1) ? 0x10 : 0x01, slot & ~1);
             pop_block();
             push_block("Method", "_STA, 0");
+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
             stmt("Store", "0x89, \\_GPE.DPT2");
+            pop_block();
             if ( slot & 1 )
                 stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
             else
@@ -426,8 +430,10 @@ int main(int argc, char **argv)
         stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
         stmt("And", "Local1, 0xff, SLT");
         /* Debug */
+        push_block("If", "LEqual( \\_SB.ADBG, ONE )");
         stmt("Store", "SLT, DPT1");
         stmt("Store", "EVT, DPT2");
+        pop_block();
         /* Decision tree */
         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
         pop_block();
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
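The push_block()/stmt()/pop_block() calls in the patch above are mk_dsdt.c's
AML emitters. As a rough model of the shape the hunks generate, here is a
minimal self-contained sketch; the buffer-based say() helper and the
emit_ej0() wrapper are invented for illustration (the real helpers live in
tools/firmware/hvmloader/acpi/mk_dsdt.c and print directly to stdout):

```c
/* Simplified model of mk_dsdt.c-style AML emission: each push_block()
 * opens a brace and deepens the indent, stmt() prints one operator at
 * the current depth, pop_block() closes the brace. */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static char aml[4096];   /* emitted pseudo-AML accumulates here */
static int indent;

static void say(const char *fmt, ...)
{
    va_list ap;
    char line[256];

    va_start(ap, fmt);
    vsnprintf(line, sizeof(line), fmt, ap);
    va_end(ap);
    snprintf(aml + strlen(aml), sizeof(aml) - strlen(aml),
             "%*s%s\n", indent * 4, "", line);
}

static void push_block(const char *op, const char *args)
{
    say("%s ( %s )", op, args);
    say("{");
    indent++;
}

static void pop_block(void)
{
    indent--;
    say("}");
}

static void stmt(const char *op, const char *args)
{
    say("%s ( %s )", op, args);
}

/* Emit a _EJ0 method for slot 0 in the shape the patch produces: the
 * DPT1/DPT2 debug writes are wrapped in If (LEqual (\_SB.ADBG, ONE)),
 * while the actual eject write stays unconditional. */
const char *emit_ej0(void)
{
    aml[0] = '\0';
    indent = 0;
    push_block("Method", "_EJ0, 1");
    push_block("If", "LEqual( \\_SB.ADBG, ONE )");
    stmt("Store", "0x00, \\_GPE.DPT1");
    stmt("Store", "0x88, \\_GPE.DPT2");
    pop_block();
    stmt("Store", "0x01, \\_GPE.PH00");
    pop_block();
    return aml;
}
```

With the ADBG bit clear in the acpi_info flags, the guarded stores are still
present in the AML but the If body is skipped at runtime, which is what
removes the two debug-port vmexits per slot per hotplug event.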

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:09:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4B2s-0002I4-2U; Fri, 17 Jan 2014 15:09:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4B2q-0002HJ-Hd
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 15:09:16 +0000
Received: from [85.158.139.211:43959] by server-10.bemta-5.messagelabs.com id
	18/D1-01405-B9749D25; Fri, 17 Jan 2014 15:09:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1389971354!10220333!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3447 invoked from network); 17 Jan 2014 15:09:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 15:09:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 15:09:14 +0000
Message-Id: <52D955A902000078001149E2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 15:09:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
In-Reply-To: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
> Also fix the name of the discard-alignment property, add the missing 'n'.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Konrad,

you have been working on the discard stuff quite a bit iirc - any
chance you could take a look and send an ack/review?

Jan

> ---
>  xen/include/public/io/blkif.h | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 84eb7fd..515ea90 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,7 +175,7 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> - * discard-aligment
> + * discard-alignment
>   *      Values:         <uint32_t>
>   *      Default Value:  0
>   *      Notes:          4, 5
> @@ -194,6 +194,7 @@
>   * discard-secure
>   *      Values:         0/1 (boolean)
>   *      Default Value:  0
> + *      Notes:          10
>   *
>   *      A value of "1" indicates that the backend can process 
> BLKIF_OP_DISCARD
>   *      requests with the BLKIF_DISCARD_SECURE flag set.
> @@ -323,9 +324,14 @@
>   *     For full interoperability, block front and backends should publish
>   *     identical ring parameters, adjusted for unit differences, to the
>   *     XenStore nodes used in both schemes.
> - * (4) Devices that support discard functionality may internally allocate
> - *     space (discardable extents) in units that are larger than the
> - *     exported logical block size.
> + * (4) Devices that support discard functionality may internally allocate 
> space
> + *     (discardable extents) in units that are larger than the exported 
> logical
> + *     block size. If the backing device has such discardable extents the
> + *     backend must provide both discard-granularity and discard-alignment.
> + *     Backends supporting discard should include discard-granularity and
> + *     discard-alignment even if it supports discarding individual sectors.
> + *     Frontends should assume discard-alignment == 0 and discard-granularity 
> ==
> + *     sector size if these keys are missing.
>   * (5) The discard-alignment parameter allows a physical device to be
>   *     partitioned into virtual devices that do not necessarily begin or
>   *     end on a discardable extent boundary.
> @@ -344,6 +350,8 @@
>   *     grants that can be persistently mapped in the frontend driver, but
>   *     due to the frontent driver implementation it should never be bigger
>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if 
> the
> + *     backing device supports secure discard.
>   */
>  
>  /*
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
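The proposed note (4) in the patch says frontends should fall back to
discard-alignment == 0 and discard-granularity == sector size when the
backend does not publish those keys, and note (10) makes discard-secure
default to 0. A minimal sketch of that fallback logic follows; the struct
and function names here are illustrative, not taken from any real blkif
frontend:

```c
#include <stdint.h>

/* Hypothetical view of one backend XenStore key as a frontend might
 * collect it: whether the key was published, and its parsed value. */
struct discard_key {
    int      present;
    uint32_t value;
};

struct discard_params {
    uint32_t granularity;
    uint32_t alignment;
    int      secure;   /* backend accepts BLKIF_DISCARD_SECURE */
};

/* Apply the defaults the proposed blkif.h comments prescribe: a missing
 * discard-granularity means the device's logical sector size, a missing
 * discard-alignment means 0, and discard-secure defaults to 0. */
void apply_discard_defaults(struct discard_params *p,
                            const struct discard_key *granularity,
                            const struct discard_key *alignment,
                            const struct discard_key *secure,
                            uint32_t sector_size)
{
    p->granularity = granularity->present ? granularity->value : sector_size;
    p->alignment   = alignment->present   ? alignment->value   : 0;
    p->secure      = secure->present && secure->value == 1;
}
```

This keeps a frontend working against older backends that predate the
discard keys entirely, while still honouring an explicit granularity or
alignment when one is published.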

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:10:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4B42-0002TH-KG; Fri, 17 Jan 2014 15:10:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4B41-0002TA-8X
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 15:10:29 +0000
Received: from [85.158.137.68:10597] by server-14.bemta-3.messagelabs.com id
	F0/6E-06105-4E749D25; Fri, 17 Jan 2014 15:10:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389971424!8989125!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16652 invoked from network); 17 Jan 2014 15:10:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:10:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93887070"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 15:10:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 10:10:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4B3u-0003zY-Rm;
	Fri, 17 Jan 2014 15:10:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4B3u-0006qY-MS;
	Fri, 17 Jan 2014 15:10:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24413-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 15:10:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24413: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0111256883454603220=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0111256883454603220==
Content-Type: text/plain

flight 24413 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24413/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                    fail pass in 24407
 test-amd64-i386-qemut-rhel6hvm-amd  6 leak-check/basis(6)   fail pass in 24407
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24411
 test-amd64-i386-xl-win7-amd64 6 leak-check/basis(6) fail in 24407 pass in 24413
 test-amd64-i386-xl-qemut-win7-amd64 7 windows-install fail in 24407 pass in 24413
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24407 pass in 24413
 test-amd64-amd64-xl           8 debian-fixup       fail in 24411 pass in 24413
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot     fail in 24411 pass in 24413

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 24349
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24349
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install  fail in 24407 like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 24407 like 24406-bisect
 test-amd64-i386-freebsd10-amd64 7 freebsd-install fail in 24411 like 24417-bisect
 test-amd64-i386-xl-win7-amd64  5 xen-boot             fail in 24411 like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail in 24411 like 24403-bisect
 test-amd64-amd64-xl-win7-amd64  7 windows-install     fail in 24411 like 24349
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24411 like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 24411 never pass

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=1071ea6e68ead40df739b223e9013d99c23c19ab
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 1071ea6e68ead40df739b223e9013d99c23c19ab
+ branch=linux-3.10
+ revision=1071ea6e68ead40df739b223e9013d99c23c19ab
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 1071ea6e68ead40df739b223e9013d99c23c19ab:tested/linux-3.10
Counting objects: 498, done.
Compressing objects: 100% (63/63), done.
Writing objects: 100% (370/370), 63.46 KiB, done.
Total 370 (delta 307), reused 370 (delta 307)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   8b4ed85..1071ea6  1071ea6e68ead40df739b223e9013d99c23c19ab -> tested/linux-3.10
+ exit 0


--===============0111256883454603220==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0111256883454603220==--

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:10:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4B42-0002TH-KG; Fri, 17 Jan 2014 15:10:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4B41-0002TA-8X
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 15:10:29 +0000
Received: from [85.158.137.68:10597] by server-14.bemta-3.messagelabs.com id
	F0/6E-06105-4E749D25; Fri, 17 Jan 2014 15:10:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389971424!8989125!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16652 invoked from network); 17 Jan 2014 15:10:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:10:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93887070"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 15:10:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 10:10:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4B3u-0003zY-Rm;
	Fri, 17 Jan 2014 15:10:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4B3u-0006qY-MS;
	Fri, 17 Jan 2014 15:10:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24413-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 15:10:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24413: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0111256883454603220=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0111256883454603220==
Content-Type: text/plain

flight 24413 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24413/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                    fail pass in 24407
 test-amd64-i386-qemut-rhel6hvm-amd  6 leak-check/basis(6)   fail pass in 24407
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24411
 test-amd64-i386-xl-win7-amd64 6 leak-check/basis(6) fail in 24407 pass in 24413
 test-amd64-i386-xl-qemut-win7-amd64 7 windows-install fail in 24407 pass in 24413
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24407 pass in 24413
 test-amd64-amd64-xl           8 debian-fixup       fail in 24411 pass in 24413
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot     fail in 24411 pass in 24413

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24349
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24333
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 24349
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24349
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24349
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install  fail in 24407 like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 7 redhat-install fail in 24407 like 24406-bisect
 test-amd64-i386-freebsd10-amd64 7 freebsd-install fail in 24411 like 24417-bisect
 test-amd64-i386-xl-win7-amd64  5 xen-boot             fail in 24411 like 24349
 test-amd64-i386-qemuu-rhel6hvm-amd 6 leak-check/basis(6) fail in 24411 like 24403-bisect
 test-amd64-amd64-xl-win7-amd64  7 windows-install     fail in 24411 like 24349
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24411 like 24349

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 24411 never pass

version targeted for testing:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab
baseline version:
 linux                8b4ed85b8404ffe7e10ee410c4df3968b86f0793

------------------------------------------------------------
People who touched revisions under test:
  Abhilash Kesavan <a.kesavan@samsung.com>
  Andrew Bresticker <abrestic@chromium.org>
  Andrey Vagin <avagin@openvz.org>
  Axel Lin <axel.lin@ingics.com>
  Bang Nguyen <bang.nguyen@oracle.com>
  Ben Segall <bsegall@google.com>
  Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
  Borislav Petkov <bp@suse.de>
  Curt Brune <curt@cumulusnetworks.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@openwrt.org>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Helge Deller <deller@gmx.de>
  Honggang Li <honli@redhat.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  James Bottomley <JBottomley@Parallels.com>
  James Hogan <james.hogan@imgtec.com>
  Jason Wang <jasowang@redhat.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  John David Anglin <dave.anglin@bell.net>
  Kamala R <kamala@aristanetworks.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Li RongQing <roy.qing.li@gmail.com>
  Magnus Damm <damm@opensource.se>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Chan <mchan@broadcom.com>
  Michael Dalton <mwdalton@google.com>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Turquette <mturquette@linaro.org>
  Nat Gurumoorthy <natg@google.com>
  Nestor Lopez Casado <nlopezcasad@logitech.com>
  Nix <nix@esperi.org.uk>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paul Turner <pjt@google.com>
  Pavel Emelyanov <xemul@parallels.com>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Richard Cochran <richardcochran@gmail.com>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Sachin Kamat <sachin.kamat@linaro.org>
  Salam Noureddine <noureddine@aristanetworks.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Yaju Cao <yacao@redhat.com>
  Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=1071ea6e68ead40df739b223e9013d99c23c19ab
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 1071ea6e68ead40df739b223e9013d99c23c19ab
+ branch=linux-3.10
+ revision=1071ea6e68ead40df739b223e9013d99c23c19ab
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 1071ea6e68ead40df739b223e9013d99c23c19ab:tested/linux-3.10
Counting objects: 498, done.
Compressing objects: 100% (63/63), done.
Writing objects: 100% (370/370), 63.46 KiB, done.
Total 370 (delta 307), reused 370 (delta 307)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   8b4ed85..1071ea6  1071ea6e68ead40df739b223e9013d99c23c19ab -> tested/linux-3.10
+ exit 0


--===============0111256883454603220==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0111256883454603220==--

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:15:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4B8d-0002u6-LG; Fri, 17 Jan 2014 15:15:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4B8c-0002u1-7m
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:15:14 +0000
Received: from [85.158.137.68:60692] by server-7.bemta-3.messagelabs.com id
	3E/05-27599-10949D25; Fri, 17 Jan 2014 15:15:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1389971710!9806453!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10407 invoked from network); 17 Jan 2014 15:15:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:15:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91792234"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 15:15:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:15:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4B8W-0001yv-PU;
	Fri, 17 Jan 2014 15:15:08 +0000
Date: Fri, 17 Jan 2014 15:14:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Olaf Hering wrote:
> Implement discard support for xen_disk. It makes use of the existing
> discard code in qemu.
> 
> The discard support is enabled unconditionally, but it would be worthwhile
> to have a knob to disable it in case the backing file was intentionally
> created non-sparse to avoid fragmentation.
> How could such a knob be passed from domU.cfg:disk=[] to the actual
> qemu process?

It would need to be on xenstore, because that is the only per-disk
interface xen_disk is listening to.


> blkfront_setup_discard should check for "qdisk" instead of (or in
> addition to?) "file" to actually make use of this new feature.

Why? I don't think that would be correct: if the feature is advertised
on xenstore by the backend (feature-discard) then blkfront can/should
use it. If it is not present then it is not going to use it.
Let's not complicate things further.


> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  hw/block/xen_blkif.h | 12 ++++++++++++
>  hw/block/xen_disk.c  | 16 ++++++++++++++++
>  2 files changed, 28 insertions(+)
> 
> diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
> index c0f4136..711b692 100644
> --- a/hw/block/xen_blkif.h
> +++ b/hw/block/xen_blkif.h
> @@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> @@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index 03e30d7..555c2d6 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -68,6 +68,8 @@ struct ioreq {
>      int                 presync;
>      int                 postsync;
>      uint8_t             mapped;
> +    int64_t             sector_num;
> +    int                 nb_sectors;

You have access to the original request via req, I don't think you need
these two fields, do you?


>      /* grant mapping */
>      uint32_t            domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
> @@ -232,6 +234,7 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
>  static int ioreq_parse(struct ioreq *ioreq)
>  {
>      struct XenBlkDev *blkdev = ioreq->blkdev;
> +    struct blkif_request_discard *discard_req = (void *)&ioreq->req;
>      uintptr_t mem;
>      size_t len;
>      int i;
> @@ -244,6 +247,10 @@ static int ioreq_parse(struct ioreq *ioreq)
>      case BLKIF_OP_READ:
>          ioreq->prot = PROT_WRITE; /* to memory */
>          break;
> +    case BLKIF_OP_DISCARD:
> +        ioreq->sector_num = discard_req->sector_number;
> +        ioreq->nb_sectors = discard_req->nr_sectors;
> +        return 0;
>      case BLKIF_OP_FLUSH_DISKCACHE:
>          ioreq->presync = 1;
>          if (!ioreq->req.nr_segments) {
> @@ -521,6 +528,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>                          &ioreq->v, ioreq->v.size / BLOCK_SIZE,
>                          qemu_aio_complete, ioreq);
>          break;
> +    case BLKIF_OP_DISCARD:
> +        bdrv_acct_start(blkdev->bs, &ioreq->acct, ioreq->nb_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
> +        ioreq->aio_inflight++;
> +        bdrv_aio_discard(blkdev->bs,
> +                        ioreq->sector_num, ioreq->nb_sectors,
> +                        qemu_aio_complete, ioreq);
> +        break;
>      default:
>          /* unknown operation (shouldn't happen -- parse catches this) */
>          goto err;
> @@ -764,6 +778,7 @@ static int blk_init(struct XenDevice *xendev)
>       */
>      xenstore_write_be_int(&blkdev->xendev, "feature-flush-cache", 1);
>      xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
> +    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
>      xenstore_write_be_int(&blkdev->xendev, "info", info);
>  
>      g_free(directiosafe);
> @@ -801,6 +816,7 @@ static int blk_connect(struct XenDevice *xendev)
>          qflags |= BDRV_O_RDWR;
>          readonly = false;
>      }
> +    qflags |= BDRV_O_UNMAP;
>  
>      /* init qemu block driver */
>      index = (blkdev->xendev.dev - 202 * 256) / 16;
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4B9n-0002ya-6W; Fri, 17 Jan 2014 15:16:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W4B9m-0002yV-Oh
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 15:16:26 +0000
Received: from [193.109.254.147:33519] by server-7.bemta-14.messagelabs.com id
	CB/74-15500-A4949D25; Fri, 17 Jan 2014 15:16:26 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1389971783!9263529!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19390 invoked from network); 17 Jan 2014 15:16:25 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:16:25 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Fri, 17 Jan 2014 08:16:13 -0700
Message-ID: <52D9493C.2060506@suse.com>
Date: Fri, 17 Jan 2014 08:16:12 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>	<52D8CF6A.7050609@suse.com>
	<21209.5519.308053.311118@mariner.uk.xensource.com>
In-Reply-To: <21209.5519.308053.311118@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
>   
>> Ian Jackson wrote:
>>     
>>> +    /* libxl owns SIGCHLD all the time, but it must only reap its own
>>> +     * children.  The application will reap its own children
>>> +     * synchronously with waitpid, without the assistance of SIGCHLD. */
>>> +    libxl_sigchld_owner_libxl_always_selective_reap,
>>>       
>> Should there be some documentation in the opening comments of
>> "Subprocess handling"?  E.g. an entry under "For programs which run
>> their own children alongside libxl's:"?
>>     
>
> Yes.
>
>   
>> BTW, it is not clear to me how to use libxl_childproc_setmode() wrt
>> different libxl_ctx.  Currently in the libvirt libxl driver there's a
>> driver-wide ctx for more host-centric operations like
>> libxl_get_version_info(), libxl_get_free_memory(), etc., and a
>> per-domain ctx for domain-specific operations.  The current doc for
>> libxl_childproc_setmode() says:
>>     
>
> Oh dear.  Can you change it to use the same ctx everywhere?
>   

What would be the downside of this in a multi-threaded application,
where there are many concurrent domain operations?  I was under the
impression that such an app would need a per-domain ctx.

Regards,
Jim

> If not, I'm afraid, someone needs to
>  - keep a record of all the libxl ctxs
>  - when the self-pipe triggers, iterate over all the ctxs calling
>    libxl_childproc_sigchld_occurred
>
> If that "someone" is libvirt, you'll have to do the self-pipe trick.
>
> It would probably be easier to have libxl maintain a global list of
> libxl ctxs for this purpose.
>
>   
>> When calling setmode() on the driver-wide or on each domain-specific
>> ctx, I get an assert with this hunk
>>
>> libvirtd: libxl_fork.c:241: libxl__sigchld_installhandler: Assertion
>> `!sigchld_owner' failed.
>>     
>
> The problem is that the SIGCHLD ownership is attached to the
> individual ctx.  This could in principle be changed, I think.
>
> I will try to see if I can do that.
>
> Ian.
>
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:22:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:22:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BFl-0003hg-13; Fri, 17 Jan 2014 15:22:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W4BFk-0003hb-8T
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 15:22:36 +0000
Received: from [193.109.254.147:29687] by server-14.bemta-14.messagelabs.com
	id 6A/CC-12628-BBA49D25; Fri, 17 Jan 2014 15:22:35 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1389972153!11481833!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31436 invoked from network); 17 Jan 2014 15:22:34 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:22:34 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Fri, 17 Jan 2014 08:22:31 -0700
Message-ID: <52D94AB6.5000705@suse.com>
Date: Fri, 17 Jan 2014 08:22:30 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>	<52D8CF6A.7050609@suse.com>
	<21209.9256.615318.923918@mariner.uk.xensource.com>
In-Reply-To: <21209.9256.615318.923918@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
>   
>> I don't see the assert, regardless of how I call setmode(), when
>> changing this hunk to
>>
>> @@ -264,11 +264,11 @@ static bool chldmode_ours(libxl_ctx *ctx, bool
>> creating)
>>  {
>>      switch (ctx->childproc_hooks->chldowner) {
>>      case libxl_sigchld_owner_libxl:
>> +    case libxl_sigchld_owner_libxl_always_selective_reap:
>>          return creating || !LIBXL_LIST_EMPTY(&ctx->children);
>>      case libxl_sigchld_owner_mainloop:
>>          return 0;
>>      case libxl_sigchld_owner_libxl_always:
>> -    case libxl_sigchld_owner_libxl_always_selective_reap:
>>          return 1;
>>      }
>>      abort();
>>     
>
> I should say: that that works is just luck, I think.

That's what I suspected :).  And I also suspect my luck would run out
once I throw tens of domains with many concurrent operations into the mix.

> I have a better fix.
>   

Ok, thanks!

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>          return 0;
>>      case libxl_sigchld_owner_libxl_always:
>> -    case libxl_sigchld_owner_libxl_always_selective_reap:
>>          return 1;
>>      }
>>      abort();
>>     
>
> I should say: that that works is just luck, I think.

That's what I suspected :).  And I also suspect my luck would run out
once I throw tens of domains with many concurrent operations into the mix.

> I have a better fix.
>   

Ok, thanks!

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:25:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BIP-0003oU-Jt; Fri, 17 Jan 2014 15:25:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4BIL-0003oJ-Pd
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 15:25:20 +0000
Received: from [85.158.143.35:33080] by server-3.bemta-4.messagelabs.com id
	E0/73-32360-D5B49D25; Fri, 17 Jan 2014 15:25:17 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389972314!12444178!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16222 invoked from network); 17 Jan 2014 15:25:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:25:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93893090"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 15:25:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:25:13 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4BIG-00028d-EF;
	Fri, 17 Jan 2014 15:25:12 +0000
Date: Fri, 17 Jan 2014 15:24:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1401171522170.21510@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, linux@arm.linux.org.uk,
	Ian.Campbell@citrix.com, arnd@arndb.de, marc.zyngier@arm.com,
	catalin.marinas@arm.com, nico@linaro.org, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, cov@codeaurora.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v9 3/5] arm: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Christopher Covington <cov@codeaurora.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> CC: linux@arm.linux.org.uk
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net

Russell? Do you have an opinion on this?


> Changes in v7:
> - ifdef CONFIG_PARAVIRT the content of paravirt.h.
> 
> Changes in v3:
> - improve commit description and Kconfig help text;
> - no need to initialize pv_time_ops;
> - add PARAVIRT_TIME_ACCOUNTING.
> ---
>  arch/arm/Kconfig                |   20 ++++++++++++++++++++
>  arch/arm/include/asm/paravirt.h |   20 ++++++++++++++++++++
>  arch/arm/kernel/Makefile        |    2 ++
>  arch/arm/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
>  4 files changed, 67 insertions(+)
>  create mode 100644 arch/arm/include/asm/paravirt.h
>  create mode 100644 arch/arm/kernel/paravirt.c
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..d6c3ba1 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1874,6 +1874,25 @@ config SWIOTLB
>  config IOMMU_HELPER
>  	def_bool SWIOTLB
>  
> +config PARAVIRT
> +	bool "Enable paravirtualization code"
> +	---help---
> +	  This changes the kernel so it can modify itself when it is run
> +	  under a hypervisor, potentially improving performance significantly
> +	  over full virtualization.
> +
> +config PARAVIRT_TIME_ACCOUNTING
> +	bool "Paravirtual steal time accounting"
> +	select PARAVIRT
> +	default n
> +	---help---
> +	  Select this option to enable fine granularity task steal time
> +	  accounting. Time spent executing other tasks in parallel with
> +	  the current vCPU is discounted from the vCPU power. To account for
> +	  that, there can be a small performance impact.
> +
> +	  If in doubt, say N here.
> +
>  config XEN_DOM0
>  	def_bool y
>  	depends on XEN
> @@ -1885,6 +1904,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select PARAVIRT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
> new file mode 100644
> index 0000000..8435ff59
> --- /dev/null
> +++ b/arch/arm/include/asm/paravirt.h
> @@ -0,0 +1,20 @@
> +#ifndef _ASM_ARM_PARAVIRT_H
> +#define _ASM_ARM_PARAVIRT_H
> +
> +#ifdef CONFIG_PARAVIRT
> +struct static_key;
> +extern struct static_key paravirt_steal_enabled;
> +extern struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops {
> +	unsigned long long (*steal_clock)(int cpu);
> +};
> +extern struct pv_time_ops pv_time_ops;
> +
> +static inline u64 paravirt_steal_clock(int cpu)
> +{
> +	return pv_time_ops.steal_clock(cpu);
> +}
> +#endif
> +
> +#endif
> diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
> index a30fc9b..34cf9a6 100644
> --- a/arch/arm/kernel/Makefile
> +++ b/arch/arm/kernel/Makefile
> @@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
>  ifneq ($(CONFIG_ARCH_EBSA110),y)
>    obj-y		+= io.o
>  endif
> +obj-$(CONFIG_PARAVIRT)	+= paravirt.o
>  
>  head-y			:= head$(MMUEXT).o
>  obj-$(CONFIG_DEBUG_LL)	+= debug.o
> diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
> new file mode 100644
> index 0000000..53f371e
> --- /dev/null
> +++ b/arch/arm/kernel/paravirt.c
> @@ -0,0 +1,25 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2013 Citrix Systems
> + *
> + * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <asm/paravirt.h>
> +
> +struct static_key paravirt_steal_enabled;
> +struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops pv_time_ops;
> +EXPORT_SYMBOL_GPL(pv_time_ops);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:27:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BKe-0003wn-UA; Fri, 17 Jan 2014 15:27:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W4BKd-0003wi-N3
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:27:39 +0000
Received: from [193.109.254.147:36823] by server-14.bemta-14.messagelabs.com
	id A2/A3-12628-AEB49D25; Fri, 17 Jan 2014 15:27:38 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389972443!11575544!1
X-Originating-IP: [144.92.197.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26466 invoked from network); 17 Jan 2014 15:27:24 -0000
Received: from wmauth4.doit.wisc.edu (HELO smtpauth4.wiscmail.wisc.edu)
	(144.92.197.145)
	by server-6.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	17 Jan 2014 15:27:24 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth4.wiscmail.wisc.edu by
	smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZJ00200X9S0G00@smtpauth4.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Fri, 17 Jan 2014 09:27:23 -0600 (CST)
X-Spam-PmxInfo: Server=avs-4, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.17.151815,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from comporellon.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZJ00D54XLLZ510@smtpauth4.wiscmail.wisc.edu>; Fri,
	17 Jan 2014 09:27:23 -0600 (CST)
Message-id: <52D94BD9.5020209@freebsd.org>
Date: Fri, 17 Jan 2014 09:27:21 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.0
To: Julien Grall <julien.grall@linaro.org>, Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52D933F2.8000101@linaro.org>
In-reply-to: <52D933F2.8000101@linaro.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/17/14 07:45, Julien Grall wrote:
>
>
> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>> On 01/16/14 18:36, Julien Grall wrote:
>>
>> The specification is actually a little unclear on this point, but
>> FreeBSD follows the same rules as Linux in any case. Most, if not all,
>> FreeBSD code should check any ancestor at this point as well. In
>> particular fdt_intr_to_rl does this. What it *doesn't* do is allow
>> #interrupt-cells to be larger than 2. I'll fix this this weekend.
>
> Thanks for working on this part.
>
> Another thing to take into account: the first cell doesn't always 
> contain the interrupt number.
> With the Linux binding (#interrupt-cells == 3)
>   - cell 1: 1 or 0 (PPI vs SPI)
>   - cell 2: relative IRQ number to the start of PPI/SPI
>   - cell 3: cpu mask + interrupt flags (edge/level...)

Yep. This will require a little API redesign, but shouldn't be that bad.
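As a sketch of the decoding Julien describes above (assuming the usual GIC ID layout, where SGIs occupy IDs 0-15, PPIs 16-31 and SPIs start at 32; this is illustrative C, not FreeBSD API code):

```c
#include <stdint.h>

/* Decode the first two cells of a 3-cell Linux GIC interrupt specifier
 * into a flat GIC interrupt ID: cell 0 selects PPI (1) vs SPI (0), and
 * cell 1 is the number relative to the start of that range.  Cell 2
 * (cpu mask + trigger flags) is not needed for the ID itself. */
static uint32_t gic_id_from_cells(uint32_t is_ppi, uint32_t rel_irq)
{
    return rel_irq + (is_ppi ? 16u : 32u);
}
```

So a specifier like <1 11 ...> names PPI 27 and <0 5 ...> names SPI 37.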

>>>> On the subject of simple-bus, they usually aren't necessary. For
>>>> example, all hypervisor devices on IBM hardware live under /vdevice,
>>>> which is attached to the device tree root. They don't use MMIO, so
>>>> simple-bus doesn't really make sense. How does Xen communicate with 
>>>> the
>>>> OS in these devices?
>>>> -Nathan
>>>
>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>> translates the interrupts on a device into a resource.
>>> So if you have a node directly attached to the root node with
>>> interrupts and MMIO, the driver won't be able to retrieve and
>>> translate the interrupts via bus_alloc_resources.
>>
>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>
> I have noticed at least one issue (which is not related to my problem):
>   - When the OFW nexus translates IRQs (with #interrupt-cells > 1), the 
> rids will be 0, 0 + #interrupt-cells, ... so the numbering is 
> discontinuous, whereas on simple-bus the rids for the same device 
> will be 0, 1, 2...

Interesting. I'll investigate.

> For my issue, I will look at it again this weekend.
>
> BTW, when I look at the FDT (sys/dev/fdt_common.c) and the OFW 
> (sys/dev/ofw_nexus.c) code, I notice that lots of code is 
> duplicated.
>
> It would be nice to have common helpers to avoid duplicated code and 
> future issues :).
>

I'm in the middle of cleaning all this up (which is how I'm on the hook 
for breaking your case!). Most of fdt_common.c is not long for this world.
-Nathan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:29:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BMV-0004BW-Fc; Fri, 17 Jan 2014 15:29:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W4BMT-0004BN-3d
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:29:33 +0000
Received: from [193.109.254.147:7603] by server-10.bemta-14.messagelabs.com id
	B4/D5-20752-C5C49D25; Fri, 17 Jan 2014 15:29:32 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389972571!11582368!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4518 invoked from network); 17 Jan 2014 15:29:31 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:29:31 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389972571; l=1632;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=6Mk0vqEbBnD75k6ur1iV9nMkkyw=;
	b=qIuDCQm9ukDWQEvt3pEQ6qZzdtXKylpFUKTNawHjJHGKbRVPpPHgiXT9/zo6f6qk2+t
	MXZpV0D5bpdk3NQaXkSZpDa6SdCjTnoQaEO1fGAiH3s8bIa3CXfx2ceZ3wBzuCEJdOdxO
	WMExq71blIC3oRLmOCySL7ULkYFS2OLHxvw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssV9tSQpel3mMUOxIOuFo0SpDH5J/+i/pnIwx6tg==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10ce:4e01:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id Z004f3q0HFTVq0I ; 
	Fri, 17 Jan 2014 16:29:31 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id ACA5450267; Fri, 17 Jan 2014 16:29:30 +0100 (CET)
Date: Fri, 17 Jan 2014 16:29:30 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117152930.GA13324@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, Stefano Stabellini wrote:

> On Thu, 9 Jan 2014, Olaf Hering wrote:
> > The discard support is enabled unconditionally. But it would be
> > worthwhile to have a knob to disable it, in case the backing file was
> > intentionally created non-sparse to avoid fragmentation.
> > How could this knob be passed from domU.cfg:disk=[] to the actual
> > qemu process?
> 
> It would need to be on xenstore, because that is the only per-disk
> interface xen_disk is listening to.

I figured that out. There are already script=, backend= and other knobs.
I will see how to add a discard=on|off to libxl and write that to the
xenstore backend node so qemu can get it from there.
What property name do you suggest? I have something like
"toolstack-option-discard" in mind.

> > blkfront_setup_discard should check for "qdisk" instead of (, or in
> > addition to?) "file" to actually make use of this new feature.
> 
> Why? I don't think that would be correct: if the feature is advertised
> on xenstore by the backend (feature-discard) then blkfront can/should
> use it. If it is not present then it is not going to use it.
> Let's not complicate things further.

blkfront is broken:
http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg00988.html


> > +++ b/hw/block/xen_disk.c
> > @@ -68,6 +68,8 @@ struct ioreq {
> >      int                 presync;
> >      int                 postsync;
> >      uint8_t             mapped;
> > +    int64_t             sector_num;
> > +    int                 nb_sectors;
> 
> You have access to the original request via req, I don't think you need
> these two fields, do you?

I will double-check if that's doable.
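
Stefano's suggestion (reading the values from the original request rather than duplicating them in struct ioreq) could look roughly like this. The structures below are simplified stand-ins for the blkif discard request on the ring and qemu's struct ioreq in hw/block/xen_disk.c; field names are illustrative, not taken from the actual patch.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the discard request carried on the blkif ring. */
struct blkif_request_discard {
    uint64_t sector_number;   /* first sector to discard */
    uint64_t nr_sectors;      /* number of sectors to discard */
};

/* Simplified stand-in for qemu's ioreq: it already holds a copy of the
 * original request, so separate sector_num/nb_sectors fields need not
 * be duplicated alongside it. */
struct ioreq {
    struct blkif_request_discard req;
};

static uint64_t ioreq_first_sector(const struct ioreq *io)
{
    return io->req.sector_number;
}

static uint64_t ioreq_nr_sectors(const struct ioreq *io)
{
    return io->req.nr_sectors;
}
```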

Thanks,

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:35:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BRi-0004qV-OJ; Fri, 17 Jan 2014 15:34:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4BRh-0004qP-0A
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:34:57 +0000
Received: from [85.158.139.211:29019] by server-15.bemta-5.messagelabs.com id
	E5/F3-08490-0AD49D25; Fri, 17 Jan 2014 15:34:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389972894!10433654!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20527 invoked from network); 17 Jan 2014 15:34:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:34:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93898316"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 15:34:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:34:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4BRc-0002J3-N4;
	Fri, 17 Jan 2014 15:34:52 +0000
Date: Fri, 17 Jan 2014 15:33:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140117152930.GA13324@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Olaf Hering wrote:
> On Fri, Jan 17, Stefano Stabellini wrote:
> 
> > On Thu, 9 Jan 2014, Olaf Hering wrote:
> > > The discard support is enabled unconditionally. But it would be
> > > worthwhile to have a knob to disable it, in case the backing file was
> > > intentionally created non-sparse to avoid fragmentation.
> > > How could this knob be passed from domU.cfg:disk=[] to the actual
> > > qemu process?
> > 
> > It would need to be on xenstore, because that is the only per-disk
> > interface xen_disk is listening to.
> 
> I figured that out. There are already script=, backend= and other knobs.
> I will see how to add a discard=on|off to libxl and write that to the
> xenstore backend node so qemu can get it from there.
> What property name do you suggest? I have something like
> "toolstack-option-discard" in mind.

discard_enabled?


> > > blkfront_setup_discard should check for "qdisk" instead of (, or in
> > > addition to?) "file" to actually make use of this new feature.
> > 
> > Why? I don't think that would be correct: if the feature is advertised
> > on xenstore by the backend (feature-discard) then blkfront can/should
> > use it. If it is not present then it is not going to use it.
> > Let's not complicate things further.
> 
> blkfront is broken:
> http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg00988.html

blkfront should be fixed then :-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BV3-0004zf-DN; Fri, 17 Jan 2014 15:38:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W4BV1-0004xr-5F
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:38:23 +0000
Received: from [85.158.139.211:39378] by server-11.bemta-5.messagelabs.com id
	4B/ED-23268-E6E49D25; Fri, 17 Jan 2014 15:38:22 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-206.messagelabs.com!1389973101!10393280!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28482 invoked from network); 17 Jan 2014 15:38:21 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:38:21 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389973101; l=1064;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=3jevFwHlxwY2LBJ+xvs4CH3757w=;
	b=NyBvAIEFk9nhoWsHdbtGVjNf3sDpkDC/oRMo7YDR4jVxr4KmSYRkxPFoCiz4dICiNXU
	jEfv1RiLryhM3XxiLcSQ2LkMgYBLKUbe9E304oUp/5rxjw8LWwWU/9XokmtZWPbUNoapt
	CmR+riiAQsX3vi6iDjOh4KwkfV1H/fNiS5Y=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssV9tSQpel3mMUOxIOuFo0SpDH5J/+i/pnIwx6tg==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10ce:4e01:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.20 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id R07582q0HFcLncp ; 
	Fri, 17 Jan 2014 16:38:21 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id F2D2D50267; Fri, 17 Jan 2014 16:38:20 +0100 (CET)
Date: Fri, 17 Jan 2014 16:38:20 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117153820.GA17569@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, Stefano Stabellini wrote:

> On Fri, 17 Jan 2014, Olaf Hering wrote:
> > On Fri, Jan 17, Stefano Stabellini wrote:
> > 
> > > On Thu, 9 Jan 2014, Olaf Hering wrote:
> > > > The discard support is enabled unconditionally. But it would be
> > > > worthwhile to have a knob to disable it, in case the backing file was
> > > > intentionally created non-sparse to avoid fragmentation.
> > > > How could this knob be passed from domU.cfg:disk=[] to the actual
> > > > qemu process?
> > > 
> > > It would need to be on xenstore, because that is the only per-disk
> > > interface xen_disk is listening to.
> > 
> > I figured that out. There are already script=, backend= and other knobs.
> > I will see how to add a discard=on|off to libxl and write that to the
> > xenstore backend node so qemu can get it from there.
> > What property name do you suggest? I have something like
> > "toolstack-option-discard" in mind.
> 
> discard_enabled?

Isn't that name too generic? In the end, that node is used by both the
backend and the frontend.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
From xen-devel-bounces@lists.xen.org Fri Jan 17 15:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BX0-0005D4-Ah; Fri, 17 Jan 2014 15:40:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W4BWy-0005Cx-QI
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:40:24 +0000
Received: from [85.158.139.211:57338] by server-1.bemta-5.messagelabs.com id
	D5/EA-21065-5EE49D25; Fri, 17 Jan 2014 15:40:21 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1389973218!10437942!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10075 invoked from network); 17 Jan 2014 15:40:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:40:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93900408"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 15:40:17 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:40:16 -0500
Message-ID: <52D94EDE.7000502@citrix.com>
Date: Fri, 17 Jan 2014 15:40:14 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
In-Reply-To: <20140117120810.GA11681@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: ian.campbell@citrix.com, netdev@vger.kernel.org, xen-devel@lists.xen.org,
	annie li <annie.li@oracle.com>, andrew.bennieston@citrix.com,
	davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 12:08, Wei Liu wrote:
> On Fri, Jan 17, 2014 at 02:25:40PM +0800, annie li wrote:
>>
>> On 2014/1/16 19:10, David Vrabel wrote:
>>> On 15/01/14 23:57, Annie Li wrote:
>>>> This patch implements two things:
>>>>
>>>> * release grant reference and skb for rx path; this fixes a resource leak.
>>>> * clean up grant transfer code kept from the old (2.6.18) netfront, which
>>>> granted pages for access/map and transfer. Grant transfer is deprecated in
>>>> the current netfront, so remove the corresponding release code for transfer.
>>>>
>>>> gnttab_end_foreign_access_ref may fail when the grant entry is currently in
>>>> use for reading or writing. This patch does not cover that case; handling the
>>>> failure may be implemented in a separate patch.
>>> I don't think replacing a resource leak with a security bug is a good idea.
>>>
>>> If you would prefer not to fix the gnttab_end_foreign_access() call, I
>>> think you can fix this in netfront by taking a reference to the page
>>> before calling gnttab_end_foreign_access().  This will ensure the page
>>> isn't freed until the subsequent kfree_skb(), or the gref is released by
>>> the foreign domain (whichever is later).
>>
>> Taking a reference to the page before calling
>> gnttab_end_foreign_access() delays the free work until kfree_skb().
>> Simply adding put_page before kfree_skb() is no different from
>> gnttab_end_foreign_access_ref(): the pages will be freed by
>> kfree_skb(), and a problem will be hit in gnttab_handle_deferred()
>> when freeing pages which have already been freed.
>>
> 
> I think David's idea is:
> 
> 	get_page
> 	gnttab_end_foreign_access
> 	kfree_skb
> 
> The get_page is to offset the put_page in gnttab_end_foreign_access. You
> don't need to put the page before kfree_skb.

Yes.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:44:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BaT-0005OW-0W; Fri, 17 Jan 2014 15:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4BaR-0005OG-JF
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 15:43:59 +0000
Received: from [85.158.139.211:47446] by server-8.bemta-5.messagelabs.com id
	74/A3-29838-EBF49D25; Fri, 17 Jan 2014 15:43:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1389973438!10422996!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19127 invoked from network); 17 Jan 2014 15:43:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 15:43:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 15:43:57 +0000
Message-Id: <52D95DCD0200007800114A2A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 15:43:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52D410E102000078001132F8@nat28.tlf.novell.com>
	<52D406D9.7060407@citrix.com>
In-Reply-To: <52D406D9.7060407@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	StefanoStabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] preparing for 4.3.2 and 4.2.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.01.14 at 16:31, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 13/01/14 15:14, Jan Beulich wrote:
>> Aiming at a release in late January or early February, I'd like to cut
>> RC1s later this or early next week.
>>
>> Please indicate any bug fixes that so far may have been missed
>> in the backports already done.
>>
>> Thanks, Jan
> 
> Looking through the XenServer patch queue on top of stable-4.3
> 
> All of these are suggestions for "might be useful to take", rather than
> "strictly bugfixes".
> 
> Hypervisor:
> 
> * x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
> xen-unstable.hg-27635.45bf542dd584
> git: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704
> 
> * x86/ats: Fix parsing of 'ats' command line option
> xen-unstable.hg-27831.bd84d8277c21
> git: 7b5af1df122092243a3697409d5a5ad3b9944da4
> 
> * x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
> xen-unstable.hg-27960.48cd67917186
> git: a82e98d473fd212316ea5aa078a7588324b020e5
> 
> * kexec: prevent deadlock on reentry to the crash path
> xen-unstable.hg-28067.065befe6d07e
> git: 470f58c159410b280627c2ea7798ea12ad93bd7c

I applied all of them to 4.3 and all but the first one to 4.2 (the
first one would require a backport to be done first).

Jan

> Tools:
> 
> * libxl: Allow 4 MB of video RAM for Cirrus graphics on traditional QEMU
> xen-unstable.hg-27834.02b33d7e56f6
> git: 13d13a45d0591fc195666ea20ddf8781a0367e88
> 
> 
> ~Andrew




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:44:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BaP-0005O4-Kk; Fri, 17 Jan 2014 15:43:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4BaN-0005Ny-Fd
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:43:55 +0000
Received: from [85.158.137.68:60503] by server-12.bemta-3.messagelabs.com id
	4A/82-20055-ABF49D25; Fri, 17 Jan 2014 15:43:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389973432!9817966!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16702 invoked from network); 17 Jan 2014 15:43:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:43:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91804546"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 15:43:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:43:51 -0500
Message-ID: <1389973430.6697.120.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Fri, 17 Jan 2014 15:43:50 +0000
In-Reply-To: <20140117153820.GA17569@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
	<20140117153820.GA17569@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:38 +0100, Olaf Hering wrote:
> On Fri, Jan 17, Stefano Stabellini wrote:
> 
> > On Fri, 17 Jan 2014, Olaf Hering wrote:
> > > On Fri, Jan 17, Stefano Stabellini wrote:
> > > 
> > > > On Thu, 9 Jan 2014, Olaf Hering wrote:
> > > > > The discard support is enabled unconditionally. But it would be worth
> > > > > having a knob to disable it in case the backing file was intentionally
> > > > > created non-sparse to avoid fragmentation.
> > > > > How could this knob be passed from domU.cfg:disk=[] to the actual
> > > > > qemu process?
> > > > 
> > > > It would need to be on xenstore, because that is the only per-disk
> > > > interface xen_disk is listening to.
> > > 
> > > I figured that out. There are already script=, backend= and other knobs.
> > > I will see how to add a discard=on|off to libxl and write that to the
> > > xenstore backend node so qemu can get it from there.
> > > What property name do you suggest? I have something like
> > > "toolstack-option-discard" in mind.
> > 
> > discard_enabled?
> 
> Isn't that name too generic? In the end that node is also used by the
> backend and frontend.

Surely this node is for toolstack-to-qdisk communication. It is then up
to qdisk itself to decide whether to expose this feature to the
frontend, using the existing feature flag defined for that purpose.

Ian.
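
For illustration only, a per-disk knob along the lines discussed might end up looking something like the fragment below. Both the option spelling and the xenstore key shown here are assumptions sketched from the thread, not a settled interface:

```shell
# domU.cfg -- hypothetical discard knob on a disk specification
disk = [ 'format=raw, vdev=xvda, access=rw, discard=off, target=/images/disk.raw' ]

# ...which the toolstack might then mirror into the qdisk backend node, e.g.:
#   /local/domain/0/backend/qdisk/<domid>/<devid>/discard-enable = "0"
# leaving qdisk to decide whether to advertise feature-discard to the frontend.
```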



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:44:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BbH-0005bP-Kd; Fri, 17 Jan 2014 15:44:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4BbF-0005bF-QN
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:44:50 +0000
Received: from [85.158.139.211:55298] by server-15.bemta-5.messagelabs.com id
	F8/16-08490-1FF49D25; Fri, 17 Jan 2014 15:44:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389973487!10429893!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30438 invoked from network); 17 Jan 2014 15:44:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 15:44:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91804839"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 15:44:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 10:44:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4BbB-0002WJ-8K;
	Fri, 17 Jan 2014 15:44:45 +0000
Date: Fri, 17 Jan 2014 15:43:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140117153820.GA17569@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401171540300.21510@kaball.uk.xensource.com>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
	<20140117153820.GA17569@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Olaf Hering wrote:
> On Fri, Jan 17, Stefano Stabellini wrote:
> 
> > On Fri, 17 Jan 2014, Olaf Hering wrote:
> > > On Fri, Jan 17, Stefano Stabellini wrote:
> > > 
> > > > On Thu, 9 Jan 2014, Olaf Hering wrote:
> > > > > The discard support is enabled unconditionally. But it would be worth
> > > > > having a knob to disable it in case the backing file was intentionally
> > > > > created non-sparse to avoid fragmentation.
> > > > > How could this knob be passed from domU.cfg:disk=[] to the actual
> > > > > qemu process?
> > > > 
> > > > It would need to be on xenstore, because that is the only per-disk
> > > > interface xen_disk is listening to.
> > > 
> > > I figured that out. There are already script=, backend= and other knobs.
> > > I will see how to add a discard=on|off to libxl and write that to the
> > > xenstore backend node so qemu can get it from there.
> > > What property name do you suggest? I have something like
> > > "toolstack-option-discard" in mind.
> > 
> > discard_enabled?
> 
> Isn't that name too generic? In the end that node is also used by the
> backend and frontend.

The problem is that it is confusing to have two options in the same
place, one written by the toolstack for the backend and the other
written by the backend for the frontend.

Can't we just assume that if the backend can do discard on that file, it
is simply going to enable feature-discard? Do we really need the
toolstack driven option too?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Olaf Hering wrote:
> On Fri, Jan 17, Stefano Stabellini wrote:
> 
> > On Fri, 17 Jan 2014, Olaf Hering wrote:
> > > On Fri, Jan 17, Stefano Stabellini wrote:
> > > 
> > > > On Thu, 9 Jan 2014, Olaf Hering wrote:
> > > > > The discard support is enabled unconditionally. But it would be worthwhile to
> > > > > have a knob to disable it in case the backing file was intentionally
> > > > > created non-sparse to avoid fragmentation.
> > > > > How could this knob be passed from domU.cfg:disk=[] to the actual
> > > > > qemu process?
> > > > 
> > > > It would need to be on xenstore, because that is the only per-disk
> > > > interface xen_disk is listening to.
> > > 
> > > I figured that out. There are already script=, backend= and other knobs.
> > > I will see how to add a discard=on|off to libxl and write that to the
> > > xenstore backend node so qemu can get it from there.
> > > What property name do you suggest? I have something like
> > > "toolstack-option-discard" in mind.
> > 
> > discard_enabled?
> 
> Isn't that name too generic? In the end, that node is also used by the backend
> and the frontend.

The problem is that it is confusing to have two options in the same
place, one written by the toolstack for the backend and the other
written by the backend for the frontend.

Can't we just assume that if the backend can do discard on that file, it
is simply going to enable feature-discard? Do we really need the
toolstack-driven option too?
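[Editorial illustration] The negotiation being discussed can be sketched in plain C. All names here (the knob value convention, the helper name) are assumptions for illustration, not the actual libxl/xen_disk interface: the backend advertises feature-discard only when the file actually supports discard, unless a toolstack-written xenstore knob explicitly disables it.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical sketch: decide whether the backend should advertise
 * feature-discard.  "knob" stands for the value of a toolstack-written
 * backend xenstore node (e.g. a "discard-enable" node), or NULL when
 * the toolstack wrote nothing.  Names and semantics are assumptions,
 * not the real libxl/xen_disk code.
 */
static bool backend_advertise_discard(const char *knob,
                                      bool file_supports_discard)
{
    /* An explicit "0" from the toolstack disables discard outright. */
    if (knob != NULL && strcmp(knob, "0") == 0)
        return false;

    /*
     * Otherwise follow the suggestion above: advertise feature-discard
     * exactly when the backend can actually do discard on the file.
     */
    return file_supports_discard;
}
```

Under this convention the toolstack option stays optional: with no knob written, behaviour falls back to keying feature-discard purely off the backend's own capability.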

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:45:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BcA-0005p0-7S; Fri, 17 Jan 2014 15:45:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W4Bc8-0005oh-K3
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 15:45:44 +0000
Received: from [85.158.137.68:15560] by server-4.bemta-3.messagelabs.com id
	6E/CA-10414-72059D25; Fri, 17 Jan 2014 15:45:43 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389973541!9818391!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25992 invoked from network); 17 Jan 2014 15:45:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:45:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0HFjZQA016716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 15:45:36 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HFjYxa004962
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 15:45:35 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HFjYgS004956; Fri, 17 Jan 2014 15:45:34 GMT
Received: from [192.168.1.102] (/123.123.250.195)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 07:45:34 -0800
Message-ID: <52D94F8C.7060509@oracle.com>
Date: Fri, 17 Jan 2014 23:43:08 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 5.1;
	rv:17.0) Gecko/20131118 Thunderbird/17.0.11
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
	<52D922DD.2060407@oracle.com>
	<20140117140246.GB11681@zion.uk.xensource.com>
In-Reply-To: <20140117140246.GB11681@zion.uk.xensource.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: ian.campbell@citrix.com, netdev@vger.kernel.org, xen-devel@lists.xen.org,
	David Vrabel <david.vrabel@citrix.com>,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014-1-17 22:02, Wei Liu wrote:
> On Fri, Jan 17, 2014 at 08:32:29PM +0800, annie li wrote:
>> On 2014-1-17 20:08, Wei Liu wrote:
>>> On Fri, Jan 17, 2014 at 02:25:40PM +0800, annie li wrote:
>>>> On 2014/1/16 19:10, David Vrabel wrote:
>>>>> On 15/01/14 23:57, Annie Li wrote:
>>>>>> This patch implements two things:
>>>>>>
>>>>>> * release grant reference and skb for rx path; this fixes a resource leak.
>>>>>> * clean up grant transfer code kept from the old netfront (2.6.18), which grants
>>>>>> pages for access/map and transfer. Grant transfer is deprecated in the current
>>>>>> netfront, so remove the corresponding release code for transfer.
>>>>>>
>>>>>> gnttab_end_foreign_access_ref may fail when the grant entry is currently used
>>>>>> for reading or writing. This patch does not cover that case; an improvement for
>>>>>> this failure may be implemented in a separate patch.
>>>>> I don't think replacing a resource leak with a security bug is a good idea.
>>>>>
>>>>> If you would prefer not to fix the gnttab_end_foreign_access() call, I
>>>>> think you can fix this in netfront by taking a reference to the page
>>>>> before calling gnttab_end_foreign_access().  This will ensure the page
>>>>> isn't freed until the subsequent kfree_skb(), or the gref is released by
>>>>> the foreign domain (whichever is later).
>>>> Taking a reference to the page before calling
>>>> gnttab_end_foreign_access() delays the free work until kfree_skb().
>>>> Simply adding put_page before kfree_skb() does not make things
>>>> different from gnttab_end_foreign_access_ref(), and the pages will
>>>> be freed by kfree_skb(); the problem will be hit in
>>>> gnttab_handle_deferred() when freeing pages which have already been freed.
>>>>
>>> I think David's idea is:
>>>
>>> 	get_page
>>> 	gnttab_end_foreign_access
>>> 	kfree_skb
>>>
>>> The get_page is to offset put_page in gnttab_end_foreign_access. You
>>> don't need to put page before kfree_skb.
>> Yes, this is what I described as following about David's patch.
>>
>>>> So put_page is required in gnttab_end_foreign_access(); this will
>>>> ensure the free is done by either kfree_skb or gnttab_handle_deferred.
>>>> This involves changes in blkfront/pcifront/tpmfront (just like your
>>>> patch); this way the page is released when the ref is ended.
>> But this would have some issues in the netfront tx path. Netfront ends all
> What issue with tx path? Your patch only touches rx skbs, doesn't it?

No, I am trying to implement 2 patches. One is my original patch, which 
fixes the rx leak; the other improves gnttab_end_foreign_access, which 
would involve not only the tx path but also blkfront/pcifront/tpmfront, 
since they use gnttab_end_foreign_access in their source code.

>
>> grant references of one skb first and then releases this skb. If
>> gnttab_end_foreign_access_ref fails in gnttab_end_foreign_access(),
>> the frag page and corresponding grant reference will be put in an
>> entry and the release work will be done in the timer routine. If some
> I understand up to this point.
>
>> frag pages of one skb are freed in this timer routine, then
>> dev_kfree_skb_irq will free pages which have already been freed.
> Why is dev_kfree_skb_irq involved? It is used in tx path not rx path.

This is involved in the second patch, as David suggested; it ensures the 
page is released only when grant access has ended, avoiding the situation 
where the page is released while the grant reference is still mapped.

> Even if we look at dev_kfree_skb_irq, it calls __kfree_skb for dropped
> packet eventually, which should do the right thing if we don't mess up
> ref counts.

I think you are right; I mixed it up with get_skb just now. Either 
__kfree_skb or gnttab_end_foreign_access() does the free work.

Thanks
Annie
>
> Wei.
>
>> So I prefer following way I mentioned, suggestions?
>>
>>>> Another solution I am thinking of is calling
>>>> gnttab_end_foreign_access() with the page parameter as NULL; then
>>>> gnttab_end_foreign_access will only end the grant reference, and
>>>> the page-releasing work is done by kfree_skb().
>> Thanks
>> Annie


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:55:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bl3-0006vb-Fu; Fri, 17 Jan 2014 15:54:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W4Bl2-0006vW-AB
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 15:54:56 +0000
Received: from [85.158.137.68:54031] by server-10.bemta-3.messagelabs.com id
	8A/D7-23989-F4259D25; Fri, 17 Jan 2014 15:54:55 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389974092!6150979!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21795 invoked from network); 17 Jan 2014 15:54:54 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:54:54 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0HFroDx026951
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 15:53:50 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0HFrl9u006590
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 15:53:48 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HFrlhi024069; Fri, 17 Jan 2014 15:53:47 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 07:53:47 -0800
Message-ID: <52D95233.1090003@oracle.com>
Date: Fri, 17 Jan 2014 10:54:27 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
In-Reply-To: <52D94D3A0200007800114957@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/17/2014 09:33 AM, Jan Beulich wrote:
> While looking into Jürgen's issue with PoD setup causing soft lockups
> in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
> 989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
> HVM-with-PoD case") just doesn't work - the BUG_ON() added there
> triggers as soon as there's a reasonable amount of excess memory.
> And that is despite me knowing that I spent significant amounts of
> in testing that change - I must have tested something else than
> finally got checked in, or must have screwed up in some other way.
> Extremely embarrassing...
>
> In the course of finding a proper solution I soon stumbled across
> upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
> number of existing RAM pages"), and hence went ahead and
> compared three different calculations for initial bs.current_pages:
>
> (a) upstream's (open coding get_num_physpages(), as I did this on
>      an older kernel)
> (b) plain old num_physpages (equaling the maximum RAM PFN)
> (c) XENMEM_get_pod_target output (with the hypervisor altered
>      to not refuse this for a domain doing it on itself)
>
> The fourth (original) method, using totalram_pages, was already
> known to result in the driver not ballooning down enough, and
> hence setting up the domain for an eventual crash when the PoD
> cache runs empty.
>
> Interestingly, (a) too results in the driver not ballooning down
> enough - there's a gap of exactly as many pages as are marked
> reserved below the 1Mb boundary. Therefore aforementioned
> upstream commit is presumably broken.
>
> Short of a reliable (and ideally architecture independent) way of
> knowing the necessary adjustment value, the next best solution
> (not ballooning down too little, but also not ballooning down much
> more than necessary) turns out to be using the minimum of (b)
> and (c): When the domain only has memory below 4Gb, (b) is
> more precise, whereas in the other cases (c) gets closest.

I am not sure I understand why (b) would be the right answer for 
less-than-4G guests. The reason for c275a57f5e patch was that max_pfn 
includes MMIO space (which is not RAM) and thus the driver will 
unnecessarily balloon down that much memory.

> Question now is: Considering that (a) is broken (and hard to fix)
> and (b) is in presumably a large part of practical cases leading to
> too much ballooning down, shouldn't we open up
> XENMEM_get_pod_target for domains to query on themselves?
> Alternatively, can anyone see another way to calculate a
> reasonably precise value?

I think hypervisor query is a good thing although I don't know whether 
exposing PoD-specific data (count and entry_count) to the guest is 
necessary. It's probably OK (or we can set these fields to zero for 
non-privileged domains).

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
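[Editorial illustration] The initial-ballooning calculation proposed in this thread takes the minimum of (b) num_physpages and (c) the XENMEM_get_pod_target result. The helper below is illustrative C under that assumption, not the actual balloon driver code.

```c
#include <assert.h>

/*
 * Sketch of the thread's proposal: initialise the balloon driver's
 * bs.current_pages as min((b), (c)), where (b) is num_physpages (the
 * maximum RAM PFN, which over-counts when MMIO holes are present) and
 * (c) is the page count reported by XENMEM_get_pod_target.  Parameter
 * names and values are illustrative only.
 */
static unsigned long initial_balloon_pages(unsigned long num_physpages,
                                           unsigned long pod_target_pages)
{
    return num_physpages < pod_target_pages ? num_physpages
                                            : pod_target_pages;
}
```

For a domain entirely below 4Gb, (b) is the smaller, more precise value; with memory above 4Gb, the MMIO hole inflates (b) and (c) wins, matching the observation in the thread.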

From xen-devel-bounces@lists.xen.org Fri Jan 17 15:55:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 15:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bl3-0006vb-Fu; Fri, 17 Jan 2014 15:54:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W4Bl2-0006vW-AB
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 15:54:56 +0000
Received: from [85.158.137.68:54031] by server-10.bemta-3.messagelabs.com id
	8A/D7-23989-F4259D25; Fri, 17 Jan 2014 15:54:55 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1389974092!6150979!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21795 invoked from network); 17 Jan 2014 15:54:54 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 15:54:54 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0HFroDx026951
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 15:53:50 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0HFrl9u006590
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 15:53:48 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HFrlhi024069; Fri, 17 Jan 2014 15:53:47 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 07:53:47 -0800
Message-ID: <52D95233.1090003@oracle.com>
Date: Fri, 17 Jan 2014 10:54:27 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
In-Reply-To: <52D94D3A0200007800114957@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/17/2014 09:33 AM, Jan Beulich wrote:
> While looking into Jürgen's issue with PoD setup causing soft lockups
> in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
> 989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
> HVM-with-PoD case") just doesn't work - the BUG_ON() added there
> triggers as soon as there's a reasonable amount of excess memory.
> And that is despite me knowing that I spent significant amounts of
> in testing that change - I must have tested something else than
> finally got checked in, or must have screwed up in some other way.
> Extremely embarrassing...
>
> In the course of finding a proper solution I soon stumbled across
> upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
> number of existing RAM pages"), and hence went ahead and
> compared three different calculations for initial bs.current_pages:
>
> (a) upstream's (open coding get_num_physpages(), as I did this on
>      an older kernel)
> (b) plain old num_physpages (equaling the maximum RAM PFN)
> (c) XENMEM_get_pod_target output (with the hypervisor altered
>      to not refuse this for a domain doing it on itself)
>
> The fourth (original) method, using totalram_pages, was already
> known to result in the driver not ballooning down enough, and
> hence setting up the domain for an eventual crash when the PoD
> cache runs empty.
>
> Interestingly, (a) too results in the driver not ballooning down
> enough - there's a gap of exactly as many pages as are marked
> reserved below the 1Mb boundary. Therefore aforementioned
> upstream commit is presumably broken.
>
> Short of a reliable (and ideally architecture independent) way of
> knowing the necessary adjustment value, the next best solution
> (not ballooning down too little, but also not ballooning down much
> more than necessary) turns out to be using the minimum of (b)
> and (c): When the domain only has memory below 4Gb, (b) is
> more precise, whereas in the other cases (c) gets closest.

I am not sure I understand why (b) would be the right answer for
less-than-4G guests. The reason for c275a57f5e patch was that max_pfn
includes MMIO space (which is not RAM) and thus the driver will
unnecessarily balloon down that much memory.

> Question now is: Considering that (a) is broken (and hard to fix)
> and (b) is in presumably a large part of practical cases leading to
> too much ballooning down, shouldn't we open up
> XENMEM_get_pod_target for domains to query on themselves?
> Alternatively, can anyone see another way to calculate a
> reasonably precise value?

I think hypervisor query is a good thing although I don't know whether
exposing PoD-specific data (count and entry_count) to the guest is
necessary. It's probably OK (or we can set these fields to zero for
non-privileged domains).

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bsv-0007zJ-J6; Fri, 17 Jan 2014 16:03:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4Bst-0007zE-8v
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:03:03 +0000
Received: from [85.158.143.35:40230] by server-2.bemta-4.messagelabs.com id
	05/69-11386-63459D25; Fri, 17 Jan 2014 16:03:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389974581!12412785!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5460 invoked from network); 17 Jan 2014 16:03:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 16:03:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 16:03:00 +0000
Message-Id: <52D962460200007800114A60@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 16:03:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
In-Reply-To: <52D95233.1090003@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.01.14 at 16:54, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/17/2014 09:33 AM, Jan Beulich wrote:
>> While looking into Jürgen's issue with PoD setup causing soft lockups
>> in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
>> 989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
>> HVM-with-PoD case") just doesn't work - the BUG_ON() added there
>> triggers as soon as there's a reasonable amount of excess memory.
>> And that is despite me knowing that I spent significant amounts of
>> in testing that change - I must have tested something else than
>> finally got checked in, or must have screwed up in some other way.
>> Extremely embarrassing...
>>
>> In the course of finding a proper solution I soon stumbled across
>> upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
>> number of existing RAM pages"), and hence went ahead and
>> compared three different calculations for initial bs.current_pages:
>>
>> (a) upstream's (open coding get_num_physpages(), as I did this on
>>      an older kernel)
>> (b) plain old num_physpages (equaling the maximum RAM PFN)
>> (c) XENMEM_get_pod_target output (with the hypervisor altered
>>      to not refuse this for a domain doing it on itself)
>>
>> The fourth (original) method, using totalram_pages, was already
>> known to result in the driver not ballooning down enough, and
>> hence setting up the domain for an eventual crash when the PoD
>> cache runs empty.
>>
>> Interestingly, (a) too results in the driver not ballooning down
>> enough - there's a gap of exactly as many pages as are marked
>> reserved below the 1Mb boundary. Therefore aforementioned
>> upstream commit is presumably broken.
>>
>> Short of a reliable (and ideally architecture independent) way of
>> knowing the necessary adjustment value, the next best solution
>> (not ballooning down too little, but also not ballooning down much
>> more than necessary) turns out to be using the minimum of (b)
>> and (c): When the domain only has memory below 4Gb, (b) is
>> more precise, whereas in the other cases (c) gets closest.
>
> I am not sure I understand why (b) would be the right answer for
> less-than-4G guests. The reason for c275a57f5e patch was that max_pfn
> includes MMIO space (which is not RAM) and thus the driver will
> unnecessarily balloon down that much memory.

max_pfn/num_physpages isn't that far off for guest with less than
4Gb, the number calculated from the PoD data is a little worse.

>> Question now is: Considering that (a) is broken (and hard to fix)
>> and (b) is in presumably a large part of practical cases leading to
>> too much ballooning down, shouldn't we open up
>> XENMEM_get_pod_target for domains to query on themselves?
>> Alternatively, can anyone see another way to calculate a
>> reasonably precise value?
>
> I think hypervisor query is a good thing although I don't know whether
> exposing PoD-specific data (count and entry_count) to the guest is
> necessary. It's probably OK (or we can set these fields to zero for
> non-privileged domains).

That's pointless then - if no useful data is provided through the
call to non-privileged domains, we can as well keep it erroring for
them.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:07:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bwe-00087m-8d; Fri, 17 Jan 2014 16:06:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W4Bwd-00087h-D4
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:06:55 +0000
Received: from [193.109.254.147:7952] by server-5.bemta-14.messagelabs.com id
	06/7D-03510-E1559D25; Fri, 17 Jan 2014 16:06:54 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-6.tower-27.messagelabs.com!1389974813!11585278!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5658 invoked from network); 17 Jan 2014 16:06:54 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 16:06:54 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389974813; l=914;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=JSEs6YQCpEUkItXFMfOE2vzf2qk=;
	b=gcfqfIOc0t6KNFy0Cgj1eCVcoQe7Y0h7hnN15VzK1sgJl52H1J/4qfzUKw409XccQbl
	usJut1MqID1i9xKxSMvatzXLizg4c+VDK9nF6QlJjeteMZZET8kibikOTSkSmNoEGIvBQ
	i5JXmASRNC9gmK8jonbDVB6hjRLBjEhE1tk=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssV9tSQpel3mMUOxIOuFo0SpDH5J/+i/pnIwx6tg==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10ce:4e01:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id U00b45q0HG6rreD ; 
	Fri, 17 Jan 2014 17:06:53 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5DCD750267; Fri, 17 Jan 2014 17:06:53 +0100 (CET)
Date: Fri, 17 Jan 2014 17:06:53 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117160653.GA20783@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
	<20140117153820.GA17569@aepfle.de>
	<alpine.DEB.2.02.1401171540300.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171540300.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, Stefano Stabellini wrote:

> Can't we just assume that if the backend can do discard on that file, it
> is simply going to enable feature-discard? Do we really need the
> toolstack driven option too?

If the backing file is fully populated on purpose, to avoid
fragmentation of that file, then silently enabling discard means the
unwanted fragmentation will occur over time. Whether anyone really does
that in their setup, I certainly don't know. At least kvm/qemu has such
a knob, and a Hyper-V host also offers sparse, fully populated and
overlay as possible options when creating a new virtual disk image.

Now I think that if libxl already writes feature-discard=0, qemu can use
that to clear BDRV_O_UNMAP. Also blkbk could be extended to first check
whether the property already exists before blindly enabling discard. No
idea whether that makes sense for "phy", but you never know.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:07:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bwv-00089w-La; Fri, 17 Jan 2014 16:07:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W4Bwu-00089f-Ai
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:07:12 +0000
Received: from [85.158.137.68:31059] by server-1.bemta-3.messagelabs.com id
	07/F7-29598-F2559D25; Fri, 17 Jan 2014 16:07:11 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389974830!8645490!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 928 invoked from network); 17 Jan 2014 16:07:11 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:07:11 -0000
Received: by mail-ee0-f43.google.com with SMTP id c41so2208418eek.16
	for <xen-devel@lists.xensource.com>;
	Fri, 17 Jan 2014 08:07:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=93NpVVfeDYlKt0XLd6xVFaOJ0yUXsUoMoN+9eiT/yDs=;
	b=ZIiZqlDVL+uGl0GtEB15QPu8rdgSZGK0nJp8OBJte+IOOFlQQgUKhFg98OHNXtT6qi
	kElkGFsJ4gn0J3rf3JE6ucruMlCnCjtLhW4kv6Z5oZn9ObRsv4Q12t8wvuz569emhb8S
	V2KfE2BWs4jowqZkfJLck5HKTzqVZuvhuzcF/RPH1/Wdf6MY0m7x535AWrZbl3aXrn+5
	wDhoUYFyCyc2jiE79TerhEm3QGPcxAGMRMlmYCzClfJSa/hOi22Vi2kew/xgFYxZg9/o
	GwXt6V/97ncv3jccdYWCRr1/jdPdVzR17o0UetvXOUgYFLgUhVItFSjFPdeUpQ3FS2QH
	N6iw==
X-Gm-Message-State: ALoCoQlAQhUjC9lZHmQbMT1sW6GCXCxnXEkv3lKBlkF/2aJCWtNDtIn/ophPMyAFAzk5ct6lnild
X-Received: by 10.14.107.3 with SMTP id n3mr3299934eeg.67.1389974830523;
	Fri, 17 Jan 2014 08:07:10 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id v7sm27952685eel.2.2014.01.17.08.07.09
	for <xen-devel@lists.xensource.com>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 08:07:10 -0800 (PST)
Message-ID: <52D95538.2080602@m2r.biz>
Date: Fri, 17 Jan 2014 17:07:20 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Xendomains service start show xl error on domain
 autostart already existing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I don't know if this was already reported.
Using xl (with xen-unstable commit
025c1b755afc9a9f42f71ef167c20fdc616b1d2d), "service xendomains start"
correctly restores all saved domUs, but it also tries to boot the
domains set to autostart instead of skipping them, showing an error:

service xendomains start
Restoring Xen domains: arkivi clearos6 m2rsvc1 nagios office1_w7 s4pdc 
service2_w7 svn
Starting auto Xen domains: arkivi.cfglibxl: error: 
libxl.c:323:libxl__domain_rename: domain with name "arkivi" already exists.
libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
domain: -6

An error occurred while creating domain arkivi.cfg:

!
  clearos6.cfglibxl: error: libxl.c:323:libxl__domain_rename: domain 
with name "clearos6" already exists.
libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
domain: -6

An error occurred while creating domain clearos6.cfg:

!
...

Thanks for any reply, and sorry for my bad English.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id v7sm27952685eel.2.2014.01.17.08.07.09
	for <xen-devel@lists.xensource.com>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 08:07:10 -0800 (PST)
Message-ID: <52D95538.2080602@m2r.biz>
Date: Fri, 17 Jan 2014 17:07:20 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
Subject: [Xen-devel] Xendomains service start show xl error on domain
 autostart already existing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I don't know if this was already reported.
Using xl (with xen-unstable commit 
025c1b755afc9a9f42f71ef167c20fdc616b1d2d), "service xendomains start" 
restores all saved domUs correctly, but then it also tries to boot the 
domains set for autostart instead of skipping the ones that already 
exist, showing an error for each:

service xendomains start
Restoring Xen domains: arkivi clearos6 m2rsvc1 nagios office1_w7 s4pdc 
service2_w7 svn
Starting auto Xen domains: arkivi.cfglibxl: error: 
libxl.c:323:libxl__domain_rename: domain with name "arkivi" already exists.
libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
domain: -6

An error occurred while creating domain arkivi.cfg:

!
  clearos6.cfglibxl: error: libxl.c:323:libxl__domain_rename: domain 
with name "clearos6" already exists.
libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
domain: -6

An error occurred while creating domain clearos6.cfg:

!
...

Thanks for any reply, and sorry for my bad English.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:09:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Bya-0008OR-6J; Fri, 17 Jan 2014 16:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4ByZ-0008NH-0i
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:08:55 +0000
Received: from [85.158.139.211:33813] by server-17.bemta-5.messagelabs.com id
	DF/67-19152-69559D25; Fri, 17 Jan 2014 16:08:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1389974932!10435778!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14766 invoked from network); 17 Jan 2014 16:08:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:08:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93912743"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:08:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 11:08:51 -0500
Message-ID: <1389974929.6697.122.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 17 Jan 2014 16:08:49 +0000
In-Reply-To: <52D962460200007800114A60@nat28.tlf.novell.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 16:54, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> > On 01/17/2014 09:33 AM, Jan Beulich wrote:
> >> While looking into Jürgen's issue with PoD setup causing soft lockups
> >> in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
> >> 989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
> >> HVM-with-PoD case") just doesn't work - the BUG_ON() added there
> >> triggers as soon as there's a reasonable amount of excess memory.
> >> And that is despite me knowing that I spent significant amounts of
> >> in testing that change - I must have tested something else than
> >> finally got checked in, or must have screwed up in some other way.
> >> Extremely embarrassing...
> >>
> >> In the course of finding a proper solution I soon stumbled across
> >> upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
> >> number of existing RAM pages"), and hence went ahead and
> >> compared three different calculations for initial bs.current_pages:
> >>
> >> (a) upstream's (open coding get_num_physpages(), as I did this on
> >>      an older kernel)
> >> (b) plain old num_physpages (equaling the maximum RAM PFN)
> >> (c) XENMEM_get_pod_target output (with the hypervisor altered
> >>      to not refuse this for a domain doing it on itself)
> >>
> >> The fourth (original) method, using totalram_pages, was already
> >> known to result in the driver not ballooning down enough, and
> >> hence setting up the domain for an eventual crash when the PoD
> >> cache runs empty.
> >>
> >> Interestingly, (a) too results in the driver not ballooning down
> >> enough - there's a gap of exactly as many pages as are marked
> >> reserved below the 1Mb boundary. Therefore aforementioned
> >> upstream commit is presumably broken.
> >>
> >> Short of a reliable (and ideally architecture independent) way of
> >> knowing the necessary adjustment value, the next best solution
> >> (not ballooning down too little, but also not ballooning down much
> >> more than necessary) turns out to be using the minimum of (b)
> >> and (c): When the domain only has memory below 4Gb, (b) is
> >> more precise, whereas in the other cases (c) gets closest.
> >
> > I am not sure I understand why (b) would be the right answer for
> > less-than-4G guests. The reason for c275a57f5e patch was that max_pfn
> > includes MMIO space (which is not RAM) and thus the driver will
> > unnecessarily balloon down that much memory.
>
> max_pfn/num_physpages isn't that far off for guest with less than
> 4Gb, the number calculated from the PoD data is a little worse.

On ARM RAM may not start at 0 and so using max_pfn can be very
misleading and in practice causes arm to balloon down to 0 as fast as it
can.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:09:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4BzA-00007H-Ki; Fri, 17 Jan 2014 16:09:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4Bz9-000075-VV
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:09:32 +0000
Received: from [193.109.254.147:24286] by server-12.bemta-14.messagelabs.com
	id 55/12-13681-BB559D25; Fri, 17 Jan 2014 16:09:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389974969!10080086!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6872 invoked from network); 17 Jan 2014 16:09:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:09:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91816230"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:09:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:09:28 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4Bz5-0004Hf-Lq;
	Fri, 17 Jan 2014 16:09:27 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4Bz3-00007A-Mv;
	Fri, 17 Jan 2014 16:09:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.21940.215971.2386@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 16:09:24 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1389958199.6697.80.camel@kazak.uk.xensource.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-6-git-send-email-ian.jackson@eu.citrix.com>
	<1389957773.6697.76.camel@kazak.uk.xensource.com>
	<21209.5013.712955.967233@mariner.uk.xensource.com>
	<1389958199.6697.80.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 5/7] libxl: fork: Provide
 libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 5/7] libxl: fork: Provide libxl_childproc_sigchld_occurred"):
> On Fri, 2014-01-17 at 11:27 +0000, Ian Jackson wrote:
> > Of course this code is only reached if DISASTER returns...
> 
> Sorry, I meant how can we get here at all, i.e. with a waitpid returning
> -1/ECHILD when we think there is actually a child.
> 
> Thinking about it this way I now realise this could be a bug in libxl or
> the application which caused the process to be reaped elsewhere.

Exactly.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:13:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4C3I-0000kL-Bs; Fri, 17 Jan 2014 16:13:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W4C3G-0000kA-KL
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:13:46 +0000
Received: from [85.158.143.35:60293] by server-2.bemta-4.messagelabs.com id
	A5/E8-11386-AB659D25; Fri, 17 Jan 2014 16:13:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1389975223!616921!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32397 invoked from network); 17 Jan 2014 16:13:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 16:13:45 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0HGCfbx006611
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 17 Jan 2014 16:12:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0HGCeMf007720
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 17 Jan 2014 16:12:40 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0HGCew2027152; Fri, 17 Jan 2014 16:12:40 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 08:12:39 -0800
Message-ID: <52D956A0.5040208@oracle.com>
Date: Fri, 17 Jan 2014 11:13:20 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
In-Reply-To: <52D962460200007800114A60@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/17/2014 11:03 AM, Jan Beulich wrote:
>>>> On 17.01.14 at 16:54, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 01/17/2014 09:33 AM, Jan Beulich wrote:
>>> While looking into Jürgen's issue with PoD setup causing soft lockups
>>> in Dom0 I realized that what I did in linux-2.6.18-xen.hg's c/s
>>> 989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
>>> HVM-with-PoD case") just doesn't work - the BUG_ON() added there
>>> triggers as soon as there's a reasonable amount of excess memory.
>>> And that is despite me knowing that I spent significant amounts of time
>>> in testing that change - I must have tested something else than what
>>> finally got checked in, or must have screwed up in some other way.
>>> Extremely embarrassing...
>>>
>>> In the course of finding a proper solution I soon stumbled across
>>> upstream's c275a57f5e ("xen/balloon: Set balloon's initial state to
>>> number of existing RAM pages"), and hence went ahead and
>>> compared three different calculations for initial bs.current_pages:
>>>
>>> (a) upstream's (open coding get_num_physpages(), as I did this on
>>>       an older kernel)
>>> (b) plain old num_physpages (equaling the maximum RAM PFN)
>>> (c) XENMEM_get_pod_target output (with the hypervisor altered
>>>       to not refuse this for a domain doing it on itself)
>>>
>>> The fourth (original) method, using totalram_pages, was already
>>> known to result in the driver not ballooning down enough, and
>>> hence setting up the domain for an eventual crash when the PoD
>>> cache runs empty.
>>>
>>> Interestingly, (a) too results in the driver not ballooning down
>>> enough - there's a gap of exactly as many pages as are marked
>>> reserved below the 1Mb boundary. Therefore the aforementioned
>>> upstream commit is presumably broken.
>>>
>>> Short of a reliable (and ideally architecture independent) way of
>>> knowing the necessary adjustment value, the next best solution
>>> (not ballooning down too little, but also not ballooning down much
>>> more than necessary) turns out to be using the minimum of (b)
>>> and (c): When the domain only has memory below 4Gb, (b) is
>>> more precise, whereas in the other cases (c) gets closest.
>> I am not sure I understand why (b) would be the right answer for
>> less-than-4G guests. The reason for the c275a57f5e patch was that max_pfn
>> includes MMIO space (which is not RAM) and thus the driver will
>> unnecessarily balloon down that much memory.
> max_pfn/num_physpages isn't that far off for guests with less than
> 4Gb, the number calculated from the PoD data is a little worse.

For a 4G guest it's 65K pages that are ballooned down so it's not
insignificant.

And if you are increasing MMIO size (something that we had to do here)
it gets progressively worse.

>
>>> Question now is: Considering that (a) is broken (and hard to fix)
>>> and (b) is in presumably a large part of practical cases leading to
>>> too much ballooning down, shouldn't we open up
>>> XENMEM_get_pod_target for domains to query on themselves?
>>> Alternatively, can anyone see another way to calculate a
>>> reasonably precise value?
>> I think hypervisor query is a good thing although I don't know whether
>> exposing PoD-specific data (count and entry_count) to the guest is
>> necessary. It's probably OK (or we can set these fields to zero for
>> non-privileged domains).
> That's pointless then - if no useful data is provided through the
> call to non-privileged domains, we can as well keep it erroring for
> them.
>

I thought we are after d->tot_pages, no?

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
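
The "65K pages" figure for a 4Gb guest can be reproduced with quick arithmetic; the sketch below assumes a 256MiB MMIO hole below 4Gb and 4KiB pages (both values are assumptions, chosen to be consistent with the figure quoted above):

```python
# If max_pfn counts the MMIO hole below 4Gb as if it were RAM, the
# balloon driver balloons down by the size of that hole. Assuming a
# 256MiB hole and 4KiB pages (illustrative, not measured, values):
mmio_hole_bytes = 256 * 1024 * 1024
page_size = 4096

excess_pages = mmio_hole_bytes // page_size
print(excess_pages)  # 65536 pages, i.e. the "65K" ballooned down
```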

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:13:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4C3N-0000lN-TN; Fri, 17 Jan 2014 16:13:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4C3L-0000kz-Pk
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:13:51 +0000
Received: from [193.109.254.147:42985] by server-9.bemta-14.messagelabs.com id
	5B/78-13957-FB659D25; Fri, 17 Jan 2014 16:13:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389975228!11566454!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22362 invoked from network); 17 Jan 2014 16:13:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:13:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93914960"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:13:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:13:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4C3H-0004Ir-9o;
	Fri, 17 Jan 2014 16:13:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4C3F-00007a-Cz;
	Fri, 17 Jan 2014 16:13:45 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.22199.930970.176177@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 16:13:43 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52D9493C.2060506@suse.com>
References: <1389892942-8452-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389892942-8452-7-git-send-email-ian.jackson@eu.citrix.com>
	<52D8CF6A.7050609@suse.com>
	<21209.5519.308053.311118@mariner.uk.xensource.com>
	<52D9493C.2060506@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap"):
> Ian Jackson wrote:
> > Oh dear.  Can you change it to use the same ctx everywhere ?
> 
> What would be the downside of this in a multi-threaded application,
> where there are many concurrent domain operations?  I was under the
> impression that such an app would need a per-domain ctx.

In theory this should all be fine.  The design is intended to allow
you to use the same ctx for the whole program.  libxl is supposed to
release the ctx lock when doing anything long-running.
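
The design described above (one shared ctx whose lock is released around anything long-running) can be sketched roughly as follows; the class and method names are illustrative only, not libxl's actual API:

```python
import threading

class SharedCtx:
    """Toy model of a single program-wide ctx: the internal lock is
    held only for brief bookkeeping and released around long-running
    operations, so many threads can share one ctx concurrently."""

    def __init__(self):
        self._lock = threading.Lock()
        self.completed = 0

    def long_running_op(self, work):
        with self._lock:
            pass              # update internal state briefly
        result = work()       # the slow part runs with the lock released
        with self._lock:
            self.completed += 1  # publish the outcome under the lock
        return result
```

Under this scheme a per-domain ctx is unnecessary, except where code still holds the lock across slow work (as in the migration and QMP cases below).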

However, there are still areas where this hasn't been fully achieved
(parts of the migration code, and all the QMP communication with the
device model, come to mind).

I'm about to post v2 of my series which relaxes the libxl restriction
about SIGCHLD.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:15:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4C4s-0000wv-CB; Fri, 17 Jan 2014 16:15:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4C4q-0000wk-W2
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:15:25 +0000
Received: from [85.158.143.35:12436] by server-3.bemta-4.messagelabs.com id
	6E/37-32360-C1759D25; Fri, 17 Jan 2014 16:15:24 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1389975321!12469002!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28979 invoked from network); 17 Jan 2014 16:15:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:15:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93915645"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:15:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 11:15:20 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4C4l-00033s-Lz;
	Fri, 17 Jan 2014 16:15:19 +0000
Date: Fri, 17 Jan 2014 16:14:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140117160653.GA20783@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401171613160.21510@kaball.uk.xensource.com>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
	<20140117153820.GA17569@aepfle.de>
	<alpine.DEB.2.02.1401171540300.21510@kaball.uk.xensource.com>
	<20140117160653.GA20783@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Olaf Hering wrote:
> On Fri, Jan 17, Stefano Stabellini wrote:
> 
> > Can't we just assume that if the backend can do discard on that file, it
> > is simply going to enable feature-discard? Do we really need the
> > toolstack driven option too?
> 
> If the backing file is fully populated on purpose to avoid
> fragmentation of that file, then silently enabling discard means the
> unwanted fragmentation will occur over time. Whether anyone is really
> doing that in their setup I certainly don't know. At least kvm/qemu has
> such a knob, and a Hyper-V host also offers sparse, fully populated and
> overlay as possible options when creating a new virtual disk image.
> 
> Now I think that if libxl already writes feature-discard=0, qemu can use
> that to clear BDRV_O_UNMAP. Also blkbk could be extended to first check
> if the property already exists before blindly enabling discard. No idea
> if that makes sense for "phy", but you never know.

OK, but it is important that we don't collapse the two options into one:
the backend might or might not support feature-discard so we can't have
the toolstack write feature-discard directly.
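
The two-option split argued for above can be sketched as a small decision function; the separate toolstack request key and the tri-state handling are assumptions for illustration, not an existing interface:

```python
def backend_feature_discard(backend_supports, toolstack_request):
    """Decide what the backend should advertise as feature-discard.

    backend_supports: whether this backend can actually do discard.
    toolstack_request: None if the toolstack wrote nothing, otherwise
    the boolean it requested via its own (separate) key. The toolstack
    can veto discard, but it cannot force a backend to advertise a
    feature it does not support.
    """
    if not backend_supports:
        return False
    if toolstack_request is not None and not toolstack_request:
        return False  # toolstack vetoed discard
    return True
```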

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:20:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4C9A-0001cV-4X; Fri, 17 Jan 2014 16:19:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W4C99-0001ao-5j
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:19:51 +0000
Received: from [193.109.254.147:31323] by server-13.bemta-14.messagelabs.com
	id A1/F1-19374-62859D25; Fri, 17 Jan 2014 16:19:50 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389975589!11595888!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28831 invoked from network); 17 Jan 2014 16:19:50 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 16:19:50 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1389975589; l=321;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=oDWwDUBPmBAAz2JdOWc1W0xTa6A=;
	b=jL+RxYFNCyG0KWQ+zPCjzZjaNJ+PhsSyc/CaYebrgvYWNYfDDJ80EeSryCVBap83a8i
	Gw+Tzr6pOT7iFlu7i7xBg7bC+stSd1YKIbl+VcS8lx/u0/PhHPbDgEhbJP8yFNOQuHySt
	jVoq1ts9KYiSN+52vMtSRIVIswE7jJd0+po=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssV9tSQpel3mMUOxIOuFo0SpDH5J/+i/pnIwx6tg==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10ce:4e01:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id J02542q0HGJnqPg ; 
	Fri, 17 Jan 2014 17:19:49 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 2F39550267; Fri, 17 Jan 2014 17:19:49 +0100 (CET)
Date: Fri, 17 Jan 2014 17:19:48 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140117161948.GA22478@aepfle.de>
References: <1389304875-4311-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401171443080.21510@kaball.uk.xensource.com>
	<20140117152930.GA13324@aepfle.de>
	<alpine.DEB.2.02.1401171532470.21510@kaball.uk.xensource.com>
	<20140117153820.GA17569@aepfle.de>
	<alpine.DEB.2.02.1401171540300.21510@kaball.uk.xensource.com>
	<20140117160653.GA20783@aepfle.de>
	<alpine.DEB.2.02.1401171613160.21510@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401171613160.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] [RFC] qemu-upstream: add discard support
 for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, Stefano Stabellini wrote:

> OK, but it is important that we don't collapse the two options into one:
> the backend might or might not support feature-discard so we can't have
> the toolstack write feature-discard directly.

That's true. So I will use "discard_enabled" in the libxl change.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDG-0001mS-RH; Fri, 17 Jan 2014 16:24:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4CDC-0001mN-2d
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:24:05 +0000
Received: from [85.158.143.35:50978] by server-2.bemta-4.messagelabs.com id
	71/F7-11386-12959D25; Fri, 17 Jan 2014 16:24:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1389975840!12456877!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3002 invoked from network); 17 Jan 2014 16:24:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 16:24:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 16:23:59 +0000
Message-Id: <52D9672F0200007800114A9F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 16:23:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<52D956A0.5040208@oracle.com>
In-Reply-To: <52D956A0.5040208@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pj4+IE9uIDE3LjAxLjE0IGF0IDE3OjEzLCBCb3JpcyBPc3Ryb3Zza3kgPGJvcmlzLm9zdHJvdnNr
eUBvcmFjbGUuY29tPiB3cm90ZToKPiBPbiAwMS8xNy8yMDE0IDExOjAzIEFNLCBKYW4gQmV1bGlj
aCB3cm90ZToKPj4+Pj4gT24gMTcuMDEuMTQgYXQgMTY6NTQsIEJvcmlzIE9zdHJvdnNreSA8Ym9y
aXMub3N0cm92c2t5QG9yYWNsZS5jb20+IHdyb3RlOgo+Pj4gT24gMDEvMTcvMjAxNCAwOTozMyBB
TSwgSmFuIEJldWxpY2ggd3JvdGU6Cj4+Pj4gV2hpbGUgbG9va2luZyBpbnRvIErDvHJnZW4ncyBp
c3N1ZSB3aXRoIFBvRCBzZXR1cCBjYXVzaW5nIHNvZnQgbG9ja3Vwcwo+Pj4+IGluIERvbTAgSSBy
ZWFsaXplZCB0aGF0IHdoYXQgSSBkaWQgaW4gbGludXgtMi42LjE4LXhlbi5oZydzIGMvcwo+Pj4+
IDk4OTphNzc4MWMwYTNiOWEgKCJ4ZW4vYmFsbG9vbjogZml4IGJhbGxvb24gZHJpdmVyIGFjY291
bnRpbmcgZm9yCj4+Pj4gSFZNLXdpdGgtUG9EIGNhc2UiKSBqdXN0IGRvZXNuJ3Qgd29yayAtIHRo
ZSBCVUdfT04oKSBhZGRlZCB0aGVyZQo+Pj4+IHRyaWdnZXJzIGFzIHNvb24gYXMgdGhlcmUncyBh
IHJlYXNvbmFibGUgYW1vdW50IG9mIGV4Y2VzcyBtZW1vcnkuCj4+Pj4gQW5kIHRoYXQgaXMgZGVz
cGl0ZSBtZSBrbm93aW5nIHRoYXQgSSBzcGVudCBzaWduaWZpY2FudCBhbW91bnRzIG9mCj4+Pj4g
aW4gdGVzdGluZyB0aGF0IGNoYW5nZSAtIEkgbXVzdCBoYXZlIHRlc3RlZCBzb21ldGhpbmcgZWxz
ZSB0aGFuCj4+Pj4gZmluYWxseSBnb3QgY2hlY2tlZCBpbiwgb3IgbXVzdCBoYXZlIHNjcmV3ZWQg
dXAgaW4gc29tZSBvdGhlciB3YXkuCj4+Pj4gRXh0cmVtZWx5IGVtYmFycmFzc2luZy4uLgo+Pj4+
Cj4+Pj4gSW4gdGhlIGNvdXJzZSBvZiBmaW5kaW5nIGEgcHJvcGVyIHNvbHV0aW9uIEkgc29vbiBz
dHVtYmxlZCBhY3Jvc3MKPj4+PiB1cHN0cmVhbSdzIGMyNzVhNTdmNWUgKCJ4ZW4vYmFsbG9vbjog
U2V0IGJhbGxvb24ncyBpbml0aWFsIHN0YXRlIHRvCj4+Pj4gbnVtYmVyIG9mIGV4aXN0aW5nIFJB
TSBwYWdlcyIpLCBhbmQgaGVuY2Ugd2VudCBhaGVhZCBhbmQKPj4+PiBjb21wYXJlZCB0aHJlZSBk
aWZmZXJlbnQgY2FsY3VsYXRpb25zIGZvciBpbml0aWFsIGJzLmN1cnJlbnRfcGFnZXM6Cj4+Pj4K
Pj4+PiAoYSkgdXBzdHJlYW0ncyAob3BlbiBjb2RpbmcgZ2V0X251bV9waHlzcGFnZXMoKSwgYXMg
SSBkaWQgdGhpcyBvbgo+Pj4+ICAgICAgIGFuIG9sZGVyIGtlcm5lbCkKPj4+PiAoYikgcGxhaW4g
b2xkIG51bV9waHlzcGFnZXMgKGVxdWFsaW5nIHRoZSBtYXhpbXVtIFJBTSBQRk4pCj4+Pj4gKGMp
IFhFTk1FTV9nZXRfcG9kX3RhcmdldCBvdXRwdXQgKHdpdGggdGhlIGh5cGVydmlzb3IgYWx0ZXJl
ZAo+Pj4+ICAgICAgIHRvIG5vdCByZWZ1c2UgdGhpcyBmb3IgYSBkb21haW4gZG9pbmcgaXQgb24g
aXRzZWxmKQo+Pj4+Cj4+Pj4gVGhlIGZvdXJ0aCAob3JpZ2luYWwpIG1ldGhvZCwgdXNpbmcgdG90
YWxyYW1fcGFnZXMsIHdhcyBhbHJlYWR5Cj4+Pj4ga25vd24gdG8gcmVzdWx0IGluIHRoZSBkcml2
ZXIgbm90IGJhbGxvb25pbmcgZG93biBlbm91Z2gsIGFuZAo+Pj4+IGhlbmNlIHNldHRpbmcgdXAg
dGhlIGRvbWFpbiBmb3IgYW4gZXZlbnR1YWwgY3Jhc2ggd2hlbiB0aGUgUG9ECj4+Pj4gY2FjaGUg
cnVucyBlbXB0eS4KPj4+Pgo+Pj4+IEludGVyZXN0aW5nbHksIChhKSB0b28gcmVzdWx0cyBpbiB0
aGUgZHJpdmVyIG5vdCBiYWxsb29uaW5nIGRvd24KPj4+PiBlbm91Z2ggLSB0aGVyZSdzIGEgZ2Fw
IG9mIGV4YWN0bHkgYXMgbWFueSBwYWdlcyBhcyBhcmUgbWFya2VkCj4+Pj4gcmVzZXJ2ZWQgYmVs
b3cgdGhlIDFNYiBib3VuZGFyeS4gVGhlcmVmb3JlIGFmb3JlbWVudGlvbmVkCj4+Pj4gdXBzdHJl
YW0gY29tbWl0IGlzIHByZXN1bWFibHkgYnJva2VuLgo+Pj4+Cj4+Pj4gU2hvcnQgb2YgYSByZWxp
YWJsZSAoYW5kIGlkZWFsbHkgYXJjaGl0ZWN0dXJlIGluZGVwZW5kZW50KSB3YXkgb2YKPj4+PiBr
bm93aW5nIHRoZSBuZWNlc3NhcnkgYWRqdXN0bWVudCB2YWx1ZSwgdGhlIG5leHQgYmVzdCBzb2x1
dGlvbgo+Pj4+IChub3QgYmFsbG9vbmluZyBkb3duIHRvbyBsaXR0bGUsIGJ1dCBhbHNvIG5vdCBi
YWxsb29uaW5nIGRvd24gbXVjaAo+Pj4+IG1vcmUgdGhhbiBuZWNlc3NhcnkpIHR1cm5zIG91dCB0
byBiZSB1c2luZyB0aGUgbWluaW11bSBvZiAoYikKPj4+PiBhbmQgKGMpOiBXaGVuIHRoZSBkb21h
aW4gb25seSBoYXMgbWVtb3J5IGJlbG93IDRHYiwgKGIpIGlzCj4+Pj4gbW9yZSBwcmVjaXNlLCB3
aGVyZWFzIGluIHRoZSBvdGhlciBjYXNlcyAoYykgZ2V0cyBjbG9zZXN0Lgo+Pj4gSSBhbSBub3Qg
c3VyZSBJIHVuZGVyc3RhbmQgd2h5IChiKSB3b3VsZCBiZSB0aGUgcmlnaHQgYW5zd2VyIGZvcgo+
Pj4gbGVzcy10aGFuLTRHIGd1ZXN0cy4gVGhlIHJlYXNvbiBmb3IgYzI3NWE1N2Y1ZSBwYXRjaCB3
YXMgdGhhdCBtYXhfcGZuCj4+PiBpbmNsdWRlcyBNTUlPIHNwYWNlICh3aGljaCBpcyBub3QgUkFN
KSBhbmQgdGh1cyB0aGUgZHJpdmVyIHdpbGwKPj4+IHVubmVjZXNzYXJpbHkgYmFsbG9vbiBkb3du
IHRoYXQgbXVjaCBtZW1vcnkuCj4+IG1heF9wZm4vbnVtX3BoeXNwYWdlcyBpc24ndCB0aGF0IGZh
ciBvZmYgZm9yIGd1ZXN0IHdpdGggbGVzcyB0aGFuCj4+IDRHYiwgdGhlIG51bWJlciBjYWxjdWxh
dGVkIGZyb20gdGhlIFBvRCBkYXRhIGlzIGEgbGl0dGxlIHdvcnNlLgo+IAo+IEZvciBhIDRHIGd1
ZXN0IGl0J3MgNjVLIHBhZ2VzIHRoYXQgYXJlIGJhbGxvb25lZCBkb3duIHNvIGl0J3Mgbm90IAo+
IGluc2lnbmlmaWNhbnQuCgpJIGRpZG4ndCBzYXkgKGluIHRoZSBvcmlnaW5hbCBtYWlsKSA0R2Ig
Z3Vlc3QgLSBJIHNhaWQgZ3Vlc3Qgd2l0aAptZW1vcnkgb25seSBiZWxvdyA0R2IuIFNvIHllcywg
Zm9yIDRHYiBndWVzdCB0aGlzIGlzIHVuYWNjZXB0YWJseQpoaWdoLCAuLi4KCj4gQW5kIGl0IHlv
dSBhcmUgaW5jcmVhc2luZyBNTUlPIHNpemUgKHNvbWV0aGluZyB0aGF0IHdlIGhhZCB0byBkbyBo
ZXJlKSAKPiBpdCBnZXRzIHByb2dyZXNzaXZlbHkgd29yc2UuCgouLi4gYW5kIGdyb3dpbmcgd2l0
aCBNTUlPIHNpemUsIGhlbmNlIHRoZSBQb0QgZGF0YSB5aWVsZHMgYmV0dGVyCnJlc3VsdHMgaW4g
dGhhdCBjYXNlLgoKPj4+PiBRdWVzdGlvbiBub3cgaXM6IENvbnNpZGVyaW5nIHRoYXQgKGEpIGlz
IGJyb2tlbiAoYW5kIGhhcmQgdG8gZml4KQo+Pj4+IGFuZCAoYikgaXMgaW4gcHJlc3VtYWJseSBh
IGxhcmdlIHBhcnQgb2YgcHJhY3RpY2FsIGNhc2VzIGxlYWRpbmcgdG8KPj4+PiB0b28gbXVjaCBi
YWxsb29uaW5nIGRvd24sIHNob3VsZG4ndCB3ZSBvcGVuIHVwCj4+Pj4gWEVOTUVNX2dldF9wb2Rf
dGFyZ2V0IGZvciBkb21haW5zIHRvIHF1ZXJ5IG9uIHRoZW1zZWx2ZXM/Cj4+Pj4gQWx0ZXJuYXRp
dmVseSwgY2FuIGFueW9uZSBzZWUgYW5vdGhlciB3YXkgdG8gY2FsY3VsYXRlIGEKPj4+PiByZWFz
b25hYmx5IHByZWNpc2UgdmFsdWU/Cj4+PiBJIHRoaW5rIGh5cGVydmlzb3IgcXVlcnkgaXMgYSBn
b29kIHRoaW5nIGFsdGhvdWdoIEkgZG9uJ3Qga25vdyB3aGV0aGVyCj4+PiBleHBvc2luZyBQb0Qt
c3BlY2lmaWMgZGF0YSAoY291bnQgYW5kIGVudHJ5X2NvdW50KSB0byB0aGUgZ3Vlc3QgaXMKPj4+
IG5lY2Vzc2FyeS4gSXQncyBwcm9iYWJseSBPSyAob3Igd2UgY2FuIHNldCB0aGVzZSBmaWVsZHMg
dG8gemVybyBmb3IKPj4+IG5vbi1wcml2aWxlZ2VkIGRvbWFpbnMpLgo+PiBUaGF0J3MgcG9pbnRs
ZXNzIHRoZW4gLSBpZiBubyB1c2VmdWwgZGF0YSBpcyBwcm92aWRlZCB0aHJvdWdoIHRoZQo+PiBj
YWxsIHRvIG5vbi1wcml2aWxlZ2VkIGRvbWFpbnMsIHdlIGNhbiBhcyB3ZWxsIGtlZXAgaXQgZXJy
b3JpbmcgZm9yCj4+IHRoZW0uCj4+Cj4gCj4gSSB0aG91Z2h0IHRoYXQgYXJlIGFmdGVyIGQtPnRv
dF9wYWdlcywgbm8/CgpUaGF0IGNhbiBiZSBvYnRhaW5lZCB0aHJvdWdoIGFub3RoZXIgWEVOTUVN
XyBvcGVyYXRpb24uIE5vLAp3aGF0IGlzIG5lZWRlZCBpcyB0aGUgZGlmZmVyZW5jZSBiZXR3ZWVu
IFBvRCBlbnRyaWVzIGFuZCBQb0QKY2FjaGUgKHdoaWNoIHRoZW4gbmVlZHMgdG8gYmUgYWRkZWQg
dG8gdG90X3BhZ2VzKS4KCkphbgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9y
ZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDW-0001nZ-Bt; Fri, 17 Jan 2014 16:24:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDU-0001nJ-M6
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:20 +0000
Received: from [85.158.139.211:41697] by server-11.bemta-5.messagelabs.com id
	EB/F3-23268-F2959D25; Fri, 17 Jan 2014 16:24:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389975853!10444422!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26966 invoked from network); 17 Jan 2014 16:24:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822386"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:12 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDL-0004MK-KZ;
	Fri, 17 Jan 2014 16:24:11 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDK-0000K3-2l;
	Fri, 17 Jan 2014 16:24:10 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:53 +0000
Message-ID: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libvirt reaps its children synchronously and has no central pid
registry and no dispatch mechanism.  libxl does have a pid registry so
can provide a selective reaping facility, but that is not currently
exposed.  Here we expose it.

Also, libvirt has multiple libxl ctxs.  Prior to this series it is not
possible for those to share SIGCHLD: libxl expects either the
application, or _one_ libxl ctx, to own SIGCHLD.  In the final patch
of this series we relax this restriction by having libxl maintain a
process-wide list of the libxl ctxs that are supposed to be interested
in SIGCHLD.

I have not tested the selective reaping functionality.  The most
plausible test environment for that is a suitably modified libvirt.

I have tested the new SIGCHLD plumbing, at least with a single ctx,
since xl uses it.  Testing that it works in a real multi-ctx
application is again probably most easily done with libvirt.

I hope that with this series applied, simply having libvirt pass
libxl_sigchld_owner_libxl_always_selective_reap should be sufficient
for everything to work.  There is no need to specifically request the
SIGCHLD-sharing.

 a 01/12] libxl: fork: Break out checked_waitpid
 a 02/12] libxl: fork: Break out childproc_reaped_ours
 a 03/12] libxl: fork: Clarify docs for libxl_sigchld_owner
*  04/12] libxl: fork: Document libxl_sigchld_owner_libxl better
 a 05/12] libxl: fork: assert that chldmode is right
 a 06/12] libxl: fork: Provide libxl_childproc_sigchld_occurred
+a 07/12] libxl: fork: Provide ..._always_selective_reap
 a 08/12] libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
*  09/12] libxl: fork: Rename sigchld handler functions
*  10/12] libxl: fork: Break out sigchld_installhandler_core
*  11/12] libxl: fork: Break out sigchld_sethandler_raw
*  12/12] libxl: fork: Share SIGCHLD handler amongst ctxs

(a = acked; * = new patch; + = modified patch)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDW-0001nZ-Bt; Fri, 17 Jan 2014 16:24:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDU-0001nJ-M6
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:20 +0000
Received: from [85.158.139.211:41697] by server-11.bemta-5.messagelabs.com id
	EB/F3-23268-F2959D25; Fri, 17 Jan 2014 16:24:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1389975853!10444422!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26966 invoked from network); 17 Jan 2014 16:24:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822386"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:12 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDL-0004MK-KZ;
	Fri, 17 Jan 2014 16:24:11 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDK-0000K3-2l;
	Fri, 17 Jan 2014 16:24:10 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:53 +0000
Message-ID: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libvirt reaps its children synchronously and has no central pid
registry and no dispatch mechanism.  libxl, by contrast, does have a
pid registry and so can provide a selective reaping facility, but that
facility is not currently exposed.  Here we expose it.

Also, libvirt has multiple libxl ctxs.  Prior to this series it is not
possible for those to share SIGCHLD: libxl expects either the
application, or _one_ libxl ctx, to own SIGCHLD.  In the final patch
of this series we relax this restriction by having libxl maintain a
process-wide list of the libxl ctxs that are supposed to be interested
in SIGCHLD.

I have not tested the selective reaping functionality.  The most
plausible test environment for that is a suitably modified libvirt.

I have tested the new SIGCHLD plumbing, at least with a single ctx,
since xl uses it.  Testing that it works in a real multi-ctx
application is again probably most easily done with libvirt.

I hope that with this series applied, simply having libvirt pass
libxl_sigchld_owner_libxl_always_selective_reap should be sufficient
for everything to work.  There is no need for it to specifically
request SIGCHLD sharing.

 a 01/12] libxl: fork: Break out checked_waitpid
 a 02/12] libxl: fork: Break out childproc_reaped_ours
 a 03/12] libxl: fork: Clarify docs for libxl_sigchld_owner
*  04/12] libxl: fork: Document libxl_sigchld_owner_libxl better
 a 05/12] libxl: fork: assert that chldmode is right
 a 06/12] libxl: fork: Provide libxl_childproc_sigchld_occurred
+a 07/12] libxl: fork: Provide ..._always_selective_reap
 a 08/12] libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
*  09/12] libxl: fork: Rename sigchld handler functions
*  10/12] libxl: fork: Break out sigchld_installhandler_core
*  11/12] libxl: fork: Break out sigchld_sethandler_raw
*  12/12] libxl: fork: Share SIGCHLD handler amongst ctxs

(a = acked; * = new patch; + = modified patch)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDd-0001pb-0K; Fri, 17 Jan 2014 16:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDb-0001ox-BY
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:27 +0000
Received: from [193.109.254.147:29672] by server-8.bemta-14.messagelabs.com id
	F2/40-30921-A3959D25; Fri, 17 Jan 2014 16:24:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24012 invoked from network); 17 Jan 2014 16:24:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919665"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDM-0004MN-Cz;
	Fri, 17 Jan 2014 16:24:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDL-0000K6-4Q;
	Fri, 17 Jan 2014 16:24:11 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:54 +0000
Message-ID: <1389975845-1195-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 01/12] libxl: fork: Break out checked_waitpid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a simple error-handling wrapper for waitpid.  We're going to
want to call waitpid somewhere else and this avoids some of the
duplication.

No functional change in this patch.  (Technically, we used to re-check
chldmode_ours in the EINTR case and no longer do, but the mode cannot
have changed in the meantime because we hold the libxl ctx lock
throughout.)

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 4ae9f94..2252370 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -155,6 +155,22 @@ int libxl__carefd_fd(const libxl__carefd *cf)
  * Actual child process handling
  */
 
+/* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
+static pid_t checked_waitpid(libxl__egc *egc, pid_t want, int *status)
+{
+    for (;;) {
+        pid_t got = waitpid(want, status, WNOHANG);
+        if (got != -1)
+            return got;
+        if (errno == ECHILD)
+            return got;
+        if (errno == EINTR)
+            continue;
+        LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        return 0;
+    }
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents);
 
@@ -331,16 +347,10 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
-        pid_t pid = waitpid(-1, &status, WNOHANG);
-
-        if (pid == 0) return;
+        pid_t pid = checked_waitpid(egc, -1, &status);
 
-        if (pid == -1) {
-            if (errno == ECHILD) return;
-            if (errno == EINTR) continue;
-            LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        if (pid == 0 || pid == -1 /* ECHILD */)
             return;
-        }
 
         int rc = childproc_reaped(egc, pid, status);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDf-0001ri-Pt; Fri, 17 Jan 2014 16:24:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDd-0001pX-B0
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:29 +0000
Received: from [85.158.139.211:48058] by server-9.bemta-5.messagelabs.com id
	FA/95-15098-C3959D25; Fri, 17 Jan 2014 16:24:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32726 invoked from network); 17 Jan 2014 16:24:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822519"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:25 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDO-0004MT-Hr;
	Fri, 17 Jan 2014 16:24:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDN-0000KF-A3;
	Fri, 17 Jan 2014 16:24:13 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:56 +0000
Message-ID: <1389975845-1195-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 03/12] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Clarify that libxl_sigchld_owner_libxl causes libxl to reap all the
process's children, and clarify the wording of the description of
libxl_sigchld_owner_libxl_always.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 6261f99..ff0b2fa 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -467,7 +467,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 
 
 typedef enum {
-    /* libxl owns SIGCHLD whenever it has a child. */
+    /* libxl owns SIGCHLD whenever it has a child, and reaps
+     * all children, including those not spawned by libxl. */
     libxl_sigchld_owner_libxl,
 
     /* Application promises to call libxl_childproc_exited but NOT
@@ -476,7 +477,7 @@ typedef enum {
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
-     * relying on libxl's event loop for reaping its own children. */
+     * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
 } libxl_sigchld_owner;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDe-0001qB-FL; Fri, 17 Jan 2014 16:24:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDc-0001pF-Dm
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:28 +0000
Received: from [85.158.139.211:13847] by server-15.bemta-5.messagelabs.com id
	86/5F-08490-B3959D25; Fri, 17 Jan 2014 16:24:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32645 invoked from network); 17 Jan 2014 16:24:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822504"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:24 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDN-0004MQ-Mm;
	Fri, 17 Jan 2014 16:24:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDM-0000KA-07;
	Fri, 17 Jan 2014 16:24:12 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:55 +0000
Message-ID: <1389975845-1195-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 02/12] libxl: fork: Break out
	childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to do this again at a new call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,14 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                 int status)
+{
+    LIBXL_LIST_REMOVE(ch, entry);
+    pid_t pid = ch->pid;
+    ch->pid = -1;
+    ch->callback(egc, ch, pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +311,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDe-0001qj-Vj; Fri, 17 Jan 2014 16:24:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDc-0001pN-Rd
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:29 +0000
Received: from [193.109.254.147:42833] by server-8.bemta-14.messagelabs.com id
	2C/40-30921-C3959D25; Fri, 17 Jan 2014 16:24:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24109 invoked from network); 17 Jan 2014 16:24:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919750"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:26 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDU-0004Mi-JP;
	Fri, 17 Jan 2014 16:24:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDS-0000Ke-Of;
	Fri, 17 Jan 2014 16:24:18 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:01 +0000
Message-ID: <1389975845-1195-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 08/12] libxl: fork: Provide
	LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the feature test macro for libxl_childproc_sigchld_occurred
and libxl_sigchld_owner_libxl_always_selective_reap.

It is split out into this separate patch for two reasons: a single
feature test is sensible, because we do not intend anyone to release
or ship libxl versions with one of these features but not the other;
but the two features themselves are in separate patches for clarity,
and splitting the macro out as well just makes reading the actual
code easier.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.h |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..1ac34c3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP
+ *
+ * If this is defined:
+ *
+ * Firstly, the enum libxl_sigchld_owner (in libxl_event.h) has the
+ * value libxl_sigchld_owner_libxl_always_selective_reap which may be
+ * passed to libxl_childproc_setmode in hooks->chldmode.
+ *
+ * Secondly, the function libxl_childproc_sigchld_occurred exists.
+ */
+#define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDg-0001sE-9q; Fri, 17 Jan 2014 16:24:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDe-0001q7-8E
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:30 +0000
Received: from [193.109.254.147:29868] by server-10.bemta-14.messagelabs.com
	id 12/34-20752-D3959D25; Fri, 17 Jan 2014 16:24:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24210 invoked from network); 17 Jan 2014 16:24:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919764"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:27 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDQ-0004MZ-PN;
	Fri, 17 Jan 2014 16:24:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDP-0000KP-9h;
	Fri, 17 Jan 2014 16:24:15 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:58 +0000
Message-ID: <1389975845-1195-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 05/12] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDe-0001qj-Vj; Fri, 17 Jan 2014 16:24:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDc-0001pN-Rd
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:29 +0000
Received: from [193.109.254.147:42833] by server-8.bemta-14.messagelabs.com id
	2C/40-30921-C3959D25; Fri, 17 Jan 2014 16:24:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24109 invoked from network); 17 Jan 2014 16:24:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919750"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:26 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDU-0004Mi-JP;
	Fri, 17 Jan 2014 16:24:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDS-0000Ke-Of;
	Fri, 17 Jan 2014 16:24:18 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:01 +0000
Message-ID: <1389975845-1195-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 08/12] libxl: fork: Provide
	LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the feature test macro for libxl_childproc_sigchld_occurred
and libxl_sigchld_owner_libxl_always_selective_reap.

It is split out into this separate patch because: a single feature
test is sensible because we do not intend anyone to release or ship
libxl versions with one of these but not the other; but, the two
features are in separate patches for clarity; and, this just makes
reading the actual code easier.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.h |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..1ac34c3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP
+ *
+ * If this is defined:
+ *
+ * Firstly, the enum libxl_sigchld_owner (in libxl_event.h) has the
+ * value libxl_sigchld_owner_libxl_always_selective_reap which may be
+ * passed to libxl_childproc_setmode in hooks->chldmode.
+ *
+ * Secondly, the function libxl_childproc_sigchld_occurred exists.
+ */
+#define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDf-0001ri-Pt; Fri, 17 Jan 2014 16:24:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDd-0001pX-B0
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:29 +0000
Received: from [85.158.139.211:48058] by server-9.bemta-5.messagelabs.com id
	FA/95-15098-C3959D25; Fri, 17 Jan 2014 16:24:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32726 invoked from network); 17 Jan 2014 16:24:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822519"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:25 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDO-0004MT-Hr;
	Fri, 17 Jan 2014 16:24:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDN-0000KF-A3;
	Fri, 17 Jan 2014 16:24:13 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:56 +0000
Message-ID: <1389975845-1195-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 03/12] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Clarify that libxl_sigchld_owner_libxl causes libxl to reap all the
process's children, and clarify the wording of the description of
libxl_sigchld_owner_libxl_always.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 6261f99..ff0b2fa 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -467,7 +467,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 
 
 typedef enum {
-    /* libxl owns SIGCHLD whenever it has a child. */
+    /* libxl owns SIGCHLD whenever it has a child, and reaps
+     * all children, including those not spawned by libxl. */
     libxl_sigchld_owner_libxl,
 
     /* Application promises to call libxl_childproc_exited but NOT
@@ -476,7 +477,7 @@ typedef enum {
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
-     * relying on libxl's event loop for reaping its own children. */
+     * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
 } libxl_sigchld_owner;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDd-0001pb-0K; Fri, 17 Jan 2014 16:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDb-0001ox-BY
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:27 +0000
Received: from [193.109.254.147:29672] by server-8.bemta-14.messagelabs.com id
	F2/40-30921-A3959D25; Fri, 17 Jan 2014 16:24:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24012 invoked from network); 17 Jan 2014 16:24:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919665"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDM-0004MN-Cz;
	Fri, 17 Jan 2014 16:24:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDL-0000K6-4Q;
	Fri, 17 Jan 2014 16:24:11 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:54 +0000
Message-ID: <1389975845-1195-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 01/12] libxl: fork: Break out checked_waitpid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a simple error-handling wrapper for waitpid.  We're going to
want to call waitpid somewhere else and this avoids some of the
duplication.

No functional change in this patch.  (Technically, we used to re-check
chldmode_ours in the EINTR case and no longer do, but that makes no
difference because we hold the libxl ctx lock throughout.)

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 4ae9f94..2252370 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -155,6 +155,22 @@ int libxl__carefd_fd(const libxl__carefd *cf)
  * Actual child process handling
  */
 
+/* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
+static pid_t checked_waitpid(libxl__egc *egc, pid_t want, int *status)
+{
+    for (;;) {
+        pid_t got = waitpid(want, status, WNOHANG);
+        if (got != -1)
+            return got;
+        if (errno == ECHILD)
+            return got;
+        if (errno == EINTR)
+            continue;
+        LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        return 0;
+    }
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents);
 
@@ -331,16 +347,10 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
-        pid_t pid = waitpid(-1, &status, WNOHANG);
-
-        if (pid == 0) return;
+        pid_t pid = checked_waitpid(egc, -1, &status);
 
-        if (pid == -1) {
-            if (errno == ECHILD) return;
-            if (errno == EINTR) continue;
-            LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        if (pid == 0 || pid == -1 /* ECHILD */)
             return;
-        }
 
         int rc = childproc_reaped(egc, pid, status);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDg-0001sE-9q; Fri, 17 Jan 2014 16:24:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDe-0001q7-8E
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:30 +0000
Received: from [193.109.254.147:29868] by server-10.bemta-14.messagelabs.com
	id 12/34-20752-D3959D25; Fri, 17 Jan 2014 16:24:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24210 invoked from network); 17 Jan 2014 16:24:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919764"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:27 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDQ-0004MZ-PN;
	Fri, 17 Jan 2014 16:24:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDP-0000KP-9h;
	Fri, 17 Jan 2014 16:24:15 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:58 +0000
Message-ID: <1389975845-1195-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 05/12] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDe-0001qB-FL; Fri, 17 Jan 2014 16:24:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDc-0001pF-Dm
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:28 +0000
Received: from [85.158.139.211:13847] by server-15.bemta-5.messagelabs.com id
	86/5F-08490-B3959D25; Fri, 17 Jan 2014 16:24:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32645 invoked from network); 17 Jan 2014 16:24:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822504"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:24 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDN-0004MQ-Mm;
	Fri, 17 Jan 2014 16:24:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDM-0000KA-07;
	Fri, 17 Jan 2014 16:24:12 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:55 +0000
Message-ID: <1389975845-1195-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 02/12] libxl: fork: Break out
	childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to do this again at a new call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,14 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                 int status)
+{
+    LIBXL_LIST_REMOVE(ch, entry);
+    ch->pid = -1;
+    ch->callback(egc, ch, ch->pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +311,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDh-0001tk-27; Fri, 17 Jan 2014 16:24:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDe-0001q9-S6
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:31 +0000
Received: from [85.158.139.211:48170] by server-17.bemta-5.messagelabs.com id
	10/A2-19152-E3959D25; Fri, 17 Jan 2014 16:24:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 335 invoked from network); 17 Jan 2014 16:24:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822540"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:26 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDP-0004MW-Kd;
	Fri, 17 Jan 2014 16:24:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDO-0000KK-8N;
	Fri, 17 Jan 2014 16:24:14 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:57 +0000
Message-ID: <1389975845-1195-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 04/12] libxl: fork: Document
	libxl_sigchld_owner_libxl better
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_sigchld_owner_libxl ought to have been mentioned in the list of
options for chldowner.  Since it is the default, move the description
of its behaviour into the description of that option.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index ff0b2fa..4f72c4b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -442,9 +442,26 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  * For programs which run their own children alongside libxl's:
  *
  *     A program which does this must call libxl_childproc_setmode.
- *     There are two options:
+ *     There are three options:
  * 
+ *     libxl_sigchld_owner_libxl:
+ *
+ *       While any libxl operation which might use child processes
+ *       is running, works like libxl_sigchld_owner_libxl_always;
+ *       but, deinstalls the handler the rest of the time.
+ *
+ *       In this mode, the application, while it uses any libxl
+ *       operation which might create or use child processes (see
+ *       above):
+ *           - Must not have any child processes running.
+ *           - Must not install a SIGCHLD handler.
+ *           - Must not reap any children.
+ *
+ *       This is the default (i.e. if setmode is not called, or 0 is
+ *       passed for hooks).
+ *
  *     libxl_sigchld_owner_mainloop:
+ *
  *       The application must install a SIGCHLD handler and reap (at
  *       least) all of libxl's children and pass their exit status to
  *       libxl by calling libxl_childproc_exited.  (If the application
@@ -452,17 +469,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       on each ctx.)
  *
  *     libxl_sigchld_owner_libxl_always:
+ *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
  *       statues.  The application must have only one libxl_ctx
  *       configured this way.
- *
- * An application which fails to call setmode, or which passes 0 for
- * hooks, while it uses any libxl operation which might
- * create or use child processes (see above):
- *   - Must not have any child processes running.
- *   - Must not install a SIGCHLD handler.
- *   - Must not reap any children.
  */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDh-0001tk-27; Fri, 17 Jan 2014 16:24:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDe-0001q9-S6
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:31 +0000
Received: from [85.158.139.211:48170] by server-17.bemta-5.messagelabs.com id
	10/A2-19152-E3959D25; Fri, 17 Jan 2014 16:24:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 335 invoked from network); 17 Jan 2014 16:24:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822540"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:26 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDP-0004MW-Kd;
	Fri, 17 Jan 2014 16:24:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDO-0000KK-8N;
	Fri, 17 Jan 2014 16:24:14 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:57 +0000
Message-ID: <1389975845-1195-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 04/12] libxl: fork: Document
	libxl_sigchld_owner_libxl better
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_sigchld_owner_libxl ought to have been mentioned in the list of
options for chldowner.  Since it's the default, move the description
of the its behaviour into the description of that option.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index ff0b2fa..4f72c4b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -442,9 +442,26 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  * For programs which run their own children alongside libxl's:
  *
  *     A program which does this must call libxl_childproc_setmode.
- *     There are two options:
+ *     There are three options:
  * 
+ *     libxl_sigchld_owner_libxl:
+ *
+ *       While any libxl operation which might use child processes
+ *       is running, works like libxl_sigchld_owner_libxl_always;
+ *       but, deinstalls the handler the rest of the time.
+ *
+ *       In this mode, the application, while it uses any libxl
+ *       operation which might create or use child processes (see
+ *       above):
+ *           - Must not have any child processes running.
+ *           - Must not install a SIGCHLD handler.
+ *           - Must not reap any children.
+ *
+ *       This is the default (i.e. if setmode is not called, or 0 is
+ *       passed for hooks).
+ *
  *     libxl_sigchld_owner_mainloop:
+ *
  *       The application must install a SIGCHLD handler and reap (at
  *       least) all of libxl's children and pass their exit status to
  *       libxl by calling libxl_childproc_exited.  (If the application
@@ -452,17 +469,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       on each ctx.)
  *
  *     libxl_sigchld_owner_libxl_always:
+ *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
  *       statuses.  The application must have only one libxl_ctx
  *       configured this way.
- *
- * An application which fails to call setmode, or which passes 0 for
- * hooks, while it uses any libxl operation which might
- * create or use child processes (see above):
- *   - Must not have any child processes running.
- *   - Must not install a SIGCHLD handler.
- *   - Must not reap any children.
  */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDh-0001vG-Uc; Fri, 17 Jan 2014 16:24:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDf-0001rF-TK
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:32 +0000
Received: from [85.158.139.211:48239] by server-7.bemta-5.messagelabs.com id
	21/22-04824-F3959D25; Fri, 17 Jan 2014 16:24:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 431 invoked from network); 17 Jan 2014 16:24:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822561"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:28 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDR-0004Mc-Tl;
	Fri, 17 Jan 2014 16:24:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDQ-0000KU-AA;
	Fri, 17 Jan 2014 16:24:16 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:23:59 +0000
Message-ID: <1389975845-1195-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 06/12] libxl: fork: Provide
	libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which don't keep track of all their child processes
in a manner suitable for coherent dispatch of their termination.  In
such a situation, nothing in the whole process may call wait, or
waitpid(-1,,).  Doing so reaps processes belonging to other parts of
the application and there is then no way to deliver the exit status to
the right place.

To facilitate this, provide a facility for such an application to ask
libxl to call waitpid on each of its children individually.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_fork.c  |   45 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 4f72c4b..3c93955 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -482,9 +482,10 @@ typedef enum {
      * all children, including those not spawned by libxl. */
     libxl_sigchld_owner_libxl,
 
-    /* Application promises to call libxl_childproc_exited but NOT
-     * from within a signal handler.  libxl will not itself arrange to
-     * (un)block or catch SIGCHLD. */
+    /* Application promises to discover when SIGCHLD occurs and call
+     * libxl_childproc_exited or libxl_childproc_sigchld_occurred (but
+     * NOT from within a signal handler).  libxl will not itself
+     * arrange to (un)block or catch SIGCHLD. */
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
@@ -542,7 +543,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 
 /*
  * This function is for an application which owns SIGCHLD and which
- * therefore reaps all of the process's children.
+ * reaps all of the process's children, and dispatches the exit status
+ * to the correct place inside the application.
  *
  * May be called only by an application which has called setmode with
  * chldowner == libxl_sigchld_owner_mainloop.  If pid was a process started
@@ -558,6 +560,25 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 int libxl_childproc_reaped(libxl_ctx *ctx, pid_t, int status)
                            LIBXL_EXTERNAL_CALLERS_ONLY;
 
+/*
+ * This function is for an application which owns SIGCHLD but which
+ * doesn't keep track of all of its own children in a manner suitable
+ * for reaping all of them and then dispatching them.
+ *
+ * Such an application must notify libxl, by calling this
+ * function, that a SIGCHLD occurred.  libxl will then check all its
+ * children, reap any that are ready, and take any action necessary -
+ * but it will not reap anything else.
+ *
+ * May be called only by an application which has called setmode with
+ * chldowner == libxl_sigchld_owner_mainloop.
+ *
+ * May NOT be called from within a signal handler which might
+ * interrupt any libxl operation (just like libxl_childproc_reaped).
+ */
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+                           LIBXL_EXTERNAL_CALLERS_ONLY;
+
 
 /*
  * An application which initialises a libxl_ctx in a parent process
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 85db2fb..b2325e0 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -330,6 +330,51 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
     return rc;
 }
 
+static void childproc_checkall(libxl__egc *egc)
+{
+    EGC_GC;
+    libxl__ev_child *ch;
+
+    for (;;) {
+        int status;
+        pid_t got;
+
+        LIBXL_LIST_FOREACH(ch, &CTX->children, entry) {
+            got = checked_waitpid(egc, ch->pid, &status);
+            if (got)
+                goto found;
+        }
+        /* not found */
+        return;
+
+    found:
+        if (got == -1) {
+            LIBXL__EVENT_DISASTER
+                (egc, "waitpid() gave ECHILD but we have a child",
+                 ECHILD, 0);
+            /* it must have finished but we don't know its status */
+            status = 255<<8; /* no wait.h macro for this! */
+            assert(WIFEXITED(status));
+            assert(WEXITSTATUS(status)==255);
+            assert(!WIFSIGNALED(status));
+            assert(!WIFSTOPPED(status));
+        }
+        childproc_reaped_ours(egc, ch, status);
+        /* we need to restart the loop, as children may have been edited */
+    }
+}
+
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+{
+    EGC_INIT(ctx);
+    CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
+    childproc_checkall(egc);
+    CTX_UNLOCK;
+    EGC_FREE;
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDj-0001x2-Cf; Fri, 17 Jan 2014 16:24:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDh-0001su-6G
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:33 +0000
Received: from [193.109.254.147:30032] by server-4.bemta-14.messagelabs.com id
	11/38-03916-04959D25; Fri, 17 Jan 2014 16:24:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24295 invoked from network); 17 Jan 2014 16:24:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919807"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDT-0004Mf-60;
	Fri, 17 Jan 2014 16:24:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDR-0000KZ-Gs;
	Fri, 17 Jan 2014 16:24:17 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:00 +0000
Message-ID: <1389975845-1195-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 07/12] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which want to use libxl in an event-driven mode but
which do not integrate child termination into their event system,
instead reaping all their own children synchronously.

In such an application libxl must own SIGCHLD but avoid reaping any
children that don't belong to libxl.

Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
behaviour.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

---
v2: Document the new mode in the big "Subprocess handling" comment.
---
 tools/libxl/libxl_event.h |   11 +++++++++++
 tools/libxl/libxl_fork.c  |    7 +++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3c93955..824ac88 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -474,6 +474,12 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       and provides a callback to be notified of their exit
  *       statuses.  The application must have only one libxl_ctx
  *       configured this way.
+ *
+ *     libxl_sigchld_owner_libxl_always_selective_reap:
+ *
+ *       The application expects to reap all of its own children
+ *       synchronously, and does not use SIGCHLD.  libxl will
+ *       install a SIGCHLD handler but reap only its own children.
  */
 
 
@@ -491,6 +497,11 @@ typedef enum {
     /* libxl owns SIGCHLD all the time, and the application is
      * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
+
+    /* libxl owns SIGCHLD all the time, but it must only reap its own
+     * children.  The application will reap its own children
+     * synchronously with waitpid, without the assistance of SIGCHLD. */
+    libxl_sigchld_owner_libxl_always_selective_reap,
 } libxl_sigchld_owner;
 
 typedef struct {
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b2325e0..16e17f6 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
@@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
     int e = libxl__self_pipe_eatall(selfpipe);
     if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
 
+    if (CTX->childproc_hooks->chldowner
+        == libxl_sigchld_owner_libxl_always_selective_reap) {
+        childproc_checkall(egc);
+        return;
+    }
+
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
         pid_t pid = checked_waitpid(egc, -1, &status);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDl-000203-AM; Fri, 17 Jan 2014 16:24:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDj-0001x0-OR
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:35 +0000
Received: from [85.158.139.211:42913] by server-13.bemta-5.messagelabs.com id
	BE/48-11357-34959D25; Fri, 17 Jan 2014 16:24:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 695 invoked from network); 17 Jan 2014 16:24:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822620"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDV-0004Ml-Oy;
	Fri, 17 Jan 2014 16:24:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDU-0000Kj-6n;
	Fri, 17 Jan 2014 16:24:20 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:02 +0000
Message-ID: <1389975845-1195-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 09/12] libxl: fork: Rename sigchld handler
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to change these functions so that different libxl ctxs
can share a single SIGCHLD handler.  Rename them now to names which
don't imply unconditional handler installation or removal.

Also note in the comments that they are idempotent.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.c          |    2 +-
 tools/libxl/libxl_fork.c     |   22 +++++++++++-----------
 tools/libxl/libxl_internal.h |    4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4679b51 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -170,7 +170,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
     /* If we have outstanding children, then the application inherits
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
-    libxl__sigchld_removehandler(gc);
+    libxl__sigchld_notneeded(gc);
 
     if (ctx->sigchld_selfpipe[0] >= 0) {
         close(ctx->sigchld_selfpipe[0]);
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 16e17f6..a15af8e 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -194,7 +194,7 @@ static void sigchld_removehandler_core(void)
     sigchld_owner = 0;
 }
 
-void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int rc;
 
@@ -210,7 +210,7 @@ void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
     }
 }
 
-int libxl__sigchld_installhandler(libxl__gc *gc) /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int r, rc;
 
@@ -274,18 +274,18 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     abort();
 }
 
-static void perhaps_removehandler(libxl__gc *gc)
+static void perhaps_sigchld_notneeded(libxl__gc *gc)
 {
     if (!chldmode_ours(CTX, 0))
-        libxl__sigchld_removehandler(gc);
+        libxl__sigchld_notneeded(gc);
 }
 
-static int perhaps_installhandler(libxl__gc *gc, bool creating)
+static int perhaps_sigchld_needed(libxl__gc *gc, bool creating)
 {
     int rc;
 
     if (chldmode_ours(CTX, creating)) {
-        rc = libxl__sigchld_installhandler(gc);
+        rc = libxl__sigchld_needed(gc);
         if (rc) return rc;
     }
     return 0;
@@ -314,7 +314,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
  found:
     childproc_reaped_ours(egc, ch, status);
 
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
 
     return 0;
 }
@@ -445,7 +445,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     CTX_LOCK;
     int rc;
 
-    perhaps_installhandler(gc, 1);
+    perhaps_sigchld_needed(gc, 1);
 
     pid_t pid =
         CTX->childproc_hooks->fork_replacement
@@ -473,7 +473,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     rc = pid;
 
  out:
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
     CTX_UNLOCK;
     return rc;
 }
@@ -492,8 +492,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
     ctx->childproc_hooks = hooks;
     ctx->childproc_user = user;
 
-    perhaps_removehandler(gc);
-    perhaps_installhandler(gc, 0); /* idempotent, ok to ignore errors for now */
+    perhaps_sigchld_notneeded(gc);
+    perhaps_sigchld_needed(gc, 0); /* idempotent, ok to ignore errors for now */
 
     CTX_UNLOCK;
     GC_FREE;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..fba681d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -838,8 +838,8 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 
 /* Internal to fork and child reaping machinery */
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
-int libxl__sigchld_installhandler(libxl__gc*); /* non-reentrant; logs errs */
-void libxl__sigchld_removehandler(libxl__gc*); /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
+void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDm-00022E-Hf; Fri, 17 Jan 2014 16:24:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDl-0001zU-Eb
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:37 +0000
Received: from [85.158.139.211:48591] by server-12.bemta-5.messagelabs.com id
	7B/A0-30017-44959D25; Fri, 17 Jan 2014 16:24:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 871 invoked from network); 17 Jan 2014 16:24:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822638"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:34 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDX-0004Mr-Ky;
	Fri, 17 Jan 2014 16:24:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDW-0000Kt-Ch;
	Fri, 17 Jan 2014 16:24:22 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:04 +0000
Message-ID: <1389975845-1195-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/12] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to want to introduce another call site in the final
substantive patch.

Pure code motion; no functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index ce8e8eb..b6b14fe 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -182,6 +182,19 @@ static void sigchld_handler(int signo)
     errno = esave;
 }
 
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
+{
+    struct sigaction ours;
+    int r;
+
+    memset(&ours,0,sizeof(ours));
+    ours.sa_handler = handler;
+    sigemptyset(&ours.sa_mask);
+    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
+    r = sigaction(SIGCHLD, &ours, old);
+    assert(!r);
+}
+
 static void sigchld_removehandler_core(void)
 {
     struct sigaction was;
@@ -202,12 +215,7 @@ static void sigchld_installhandler_core(libxl__gc *gc)
     assert(!sigchld_owner);
     sigchld_owner = CTX;
 
-    memset(&ours,0,sizeof(ours));
-    ours.sa_handler = sigchld_handler;
-    sigemptyset(&ours.sa_mask);
-    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
-    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
-    assert(!r);
+    sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
     assert(((void)"application must negotiate with libxl about SIGCHLD",
             !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDp-00025z-8A; Fri, 17 Jan 2014 16:24:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDm-000229-R2
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:39 +0000
Received: from [193.109.254.147:25735] by server-3.bemta-14.messagelabs.com id
	29/0F-11000-64959D25; Fri, 17 Jan 2014 16:24:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!6
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24724 invoked from network); 17 Jan 2014 16:24:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919889"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:35 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDY-0004Mu-Qe;
	Fri, 17 Jan 2014 16:24:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDX-0000Ky-Be;
	Fri, 17 Jan 2014 16:24:23 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:05 +0000
Message-ID: <1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Previously, an application which had multiple libxl ctxs in multiple
threads would itself have to plumb SIGCHLD through to each ctx.
Instead, permit multiple libxl ctxs to all share the SIGCHLD handler.

We keep a list of all the ctxs which are interested in SIGCHLD and
notify all of their self-pipes.

In more detail:

 * sigchld_owner, the ctx* of the SIGCHLD owner, is replaced by
   sigchld_users, a list of SIGCHLD users.

 * Each ctx keeps track of whether it is on the users list, so that
   libxl__sigchld_needed and libxl__sigchld_notneeded, instead of
   idempotently installing and removing the handler, now idempotently
   add or remove the ctx from the list.

   We ensure that we always have the SIGCHLD handler installed
   iff the sigchld_users list is nonempty.  To make this a bit
   easier we make sigchld_installhandler_core and
   sigchld_removehandler_core idempotent.

   Specifically, the call sites for sigchld_installhandler_core and
   sigchld_removehandler_core are updated to manipulate sigchld_users
   and only call the install or remove functions as applicable.

 * In the signal handler we walk the list of SIGCHLD users and write
   to each of their self-pipes.  That means that we need to arrange to
   defer SIGCHLD when we are manipulating the list (to avoid the
   signal handler interrupting our list manipulation); this is quite
   tiresome to arrange.

   The code as written will, on the first installation of the SIGCHLD
   handler, firstly install the real handler, then immediately replace
   it with the deferral handler.  Doing it this way makes the code
   clearer as it makes the SIGCHLD deferral machinery much more
   self-contained (and hence easier to reason about).

 * The first part of libxl__sigchld_notneeded is broken out into a new
   function sigchld_user_remove (which is also needed during
   postfork).  And of course that first part of the function is now
   rather different, as explained above.

 * sigchld_installhandler_core no longer takes the gc argument,
   because it now deals with SIGCHLD for all ctxs.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h    |    2 +-
 tools/libxl/libxl_fork.c     |  132 +++++++++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.h |    3 +
 3 files changed, 114 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 824ac88..ab6ac5c 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -472,7 +472,7 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
- *       statues.  The application must have only one libxl_ctx
+ *       statuses.  The application may have multiple libxl_ctxs
  *       configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b6b14fe..5b42bad 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -46,11 +46,18 @@ static int atfork_registered;
 static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
     LIBXL_LIST_HEAD_INITIALIZER(carefds);
 
-/* non-null iff installed, protected by no_forking */
-static libxl_ctx *sigchld_owner;
+/* Protected against concurrency by no_forking.  sigchld_users is
+ * protected against being interrupted by SIGCHLD (and thus read
+ * asynchronously by the signal handler) by sigchld_defer (see
+ * below). */
+static bool sigchld_installed; /* 0 means not */
+static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
+    LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
 
-static void sigchld_removehandler_core(void);
+static void sigchld_removehandler_core(void); /* idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
 static void atfork_lock(void)
 {
@@ -126,8 +133,7 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    if (sigchld_owner)
-        sigchld_removehandler_core();
+    sigchld_user_remove(ctx);
 
     atfork_unlock();
 }
@@ -152,7 +158,8 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 }
 
 /*
- * Actual child process handling
+ * Low-level functions for child process handling, including
+ * the main SIGCHLD handler.
  */
 
 /* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
@@ -176,9 +183,16 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
 static void sigchld_handler(int signo)
 {
+    /* This function has to be reentrant!  Luckily it is. */
+
+    libxl_ctx *notify;
     int esave = errno;
-    int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
-    assert(!e); /* errors are probably EBADF, very bad */
+
+    LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
+        int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
+        assert(!e); /* errors are probably EBADF, very bad */
+    }
+
     errno = esave;
 }
 
@@ -195,25 +209,74 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
     assert(!r);
 }
 
-static void sigchld_removehandler_core(void)
+/*
+ * SIGCHLD deferral
+ *
+ * sigchld_defer and sigchld_release are a bit like using sigprocmask
+ * to block the signal only they work for the whole process.  Sadly
+ * this has to be done by setting a special handler that records the
+ * "pendingness" of the signal here in the program.  How tedious.
+ *
+ * A property of this approach is that the signal handler itself
+ * must be reentrant (see the comment in release_sigchld).
+ *
+ * Callers have the atfork_lock so there is no risk of concurrency
+ * within these functions, aside obviously from the risk of being
+ * interrupted by the signal.
+ */
+
+static volatile sig_atomic_t sigchld_occurred_while_deferred;
+
+static void sigchld_handler_when_deferred(int signo)
+{
+    sigchld_occurred_while_deferred = 1;
+}
+
+static void defer_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+}
+
+static void release_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler, 0);
+    if (sigchld_occurred_while_deferred) {
+        sigchld_occurred_while_deferred = 0;
+        /* We might get another SIGCHLD here, in which case
+         * sigchld_handler will be interrupted and re-entered.
+         * This is OK. */
+        sigchld_handler(SIGCHLD);
+    }
+}
+
+/*
+ * Meat of the child process handling.
+ */
+
+static void sigchld_removehandler_core(void) /* idempotent */
 {
     struct sigaction was;
     int r;
     
+    if (!sigchld_installed)
+        return;
+
     r = sigaction(SIGCHLD, &sigchld_saved_action, &was);
     assert(!r);
     assert(!(was.sa_flags & SA_SIGINFO));
     assert(was.sa_handler == sigchld_handler);
-    sigchld_owner = 0;
+
+    sigchld_installed = 0;
 }
 
-static void sigchld_installhandler_core(libxl__gc *gc)
+static void sigchld_installhandler_core(void) /* idempotent */
 {
-    struct sigaction ours;
-    int r;
+    if (sigchld_installed)
+        return;
 
-    assert(!sigchld_owner);
-    sigchld_owner = CTX;
+    sigchld_installed = 1;
 
     sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
@@ -223,15 +286,32 @@ static void sigchld_installhandler_core(libxl__gc *gc)
              sigchld_saved_action.sa_handler == SIG_IGN)));
 }
 
-void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx) /* idempotent */
 {
-    int rc;
+    if (!ctx->sigchld_user_registered)
+        return;
 
     atfork_lock();
-    if (sigchld_owner == CTX)
+    defer_sigchld();
+
+    LIBXL_LIST_REMOVE(ctx, sigchld_users_entry);
+
+    release_sigchld();
+
+    if (LIBXL_LIST_EMPTY(&sigchld_users))
         sigchld_removehandler_core();
+
     atfork_unlock();
 
+    ctx->sigchld_user_registered = 0;
+}
+
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+{
+    int rc;
+
+    sigchld_user_remove(CTX);
+
     if (libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, 0);
         if (rc)
@@ -262,12 +342,20 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, POLLIN);
         if (rc) goto out;
     }
+    if (!CTX->sigchld_user_registered) {
+        atfork_lock();
 
-    atfork_lock();
-    if (sigchld_owner != CTX) {
-        sigchld_installhandler_core(gc);
+        sigchld_installhandler_core();
+
+        defer_sigchld();
+
+        LIBXL_LIST_INSERT_HEAD(&sigchld_users, CTX, sigchld_users_entry);
+
+        release_sigchld();
+        atfork_unlock();
+
+        CTX->sigchld_user_registered = 1;
     }
-    atfork_unlock();
 
     rc = 0;
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index fba681d..8429448 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -355,6 +355,8 @@ struct libxl__ctx {
     int sigchld_selfpipe[2]; /* [0]==-1 means handler not installed */
     libxl__ev_fd sigchld_selfpipe_efd;
     LIBXL_LIST_HEAD(, libxl__ev_child) children;
+    bool sigchld_user_registered;
+    LIBXL_LIST_ENTRY(libxl_ctx) sigchld_users_entry;
 
     libxl_version_info version_info;
 };
@@ -840,6 +842,7 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
 int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
 void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
+void libxl__sigchld_check_stale_handler(void);
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDl-00021J-V8; Fri, 17 Jan 2014 16:24:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDk-0001xw-Cq
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:36 +0000
Received: from [193.109.254.147:43337] by server-8.bemta-14.messagelabs.com id
	8F/50-30921-34959D25; Fri, 17 Jan 2014 16:24:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24594 invoked from network); 17 Jan 2014 16:24:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919865"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:33 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDW-0004Mo-Or;
	Fri, 17 Jan 2014 16:24:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDV-0000Ko-DK;
	Fri, 17 Jan 2014 16:24:21 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:03 +0000
Message-ID: <1389975845-1195-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 10/12] libxl: fork: Break out
	sigchld_installhandler_core
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pure code motion.  This is going to make the final substantive patch
easier to read.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index a15af8e..ce8e8eb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -194,6 +194,27 @@ static void sigchld_removehandler_core(void)
     sigchld_owner = 0;
 }
 
+static void sigchld_installhandler_core(libxl__gc *gc)
+{
+    struct sigaction ours;
+    int r;
+
+    assert(!sigchld_owner);
+    sigchld_owner = CTX;
+
+    memset(&ours,0,sizeof(ours));
+    ours.sa_handler = sigchld_handler;
+    sigemptyset(&ours.sa_mask);
+    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
+    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
+    assert(!r);
+
+    assert(((void)"application must negotiate with libxl about SIGCHLD",
+            !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
+            (sigchld_saved_action.sa_handler == SIG_DFL ||
+             sigchld_saved_action.sa_handler == SIG_IGN)));
+}
+
 void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int rc;
@@ -236,22 +257,7 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 
     atfork_lock();
     if (sigchld_owner != CTX) {
-        struct sigaction ours;
-
-        assert(!sigchld_owner);
-        sigchld_owner = CTX;
-
-        memset(&ours,0,sizeof(ours));
-        ours.sa_handler = sigchld_handler;
-        sigemptyset(&ours.sa_mask);
-        ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
-        r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
-        assert(!r);
-
-        assert(((void)"application must negotiate with libxl about SIGCHLD",
-                !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
-                (sigchld_saved_action.sa_handler == SIG_DFL ||
-                 sigchld_saved_action.sa_handler == SIG_IGN)));
+        sigchld_installhandler_core(gc);
     }
     atfork_unlock();
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDp-00025z-8A; Fri, 17 Jan 2014 16:24:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDm-000229-R2
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:39 +0000
Received: from [193.109.254.147:25735] by server-3.bemta-14.messagelabs.com id
	29/0F-11000-64959D25; Fri, 17 Jan 2014 16:24:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389975864!11605847!6
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24724 invoked from network); 17 Jan 2014 16:24:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93919889"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:35 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDY-0004Mu-Qe;
	Fri, 17 Jan 2014 16:24:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDX-0000Ky-Be;
	Fri, 17 Jan 2014 16:24:23 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:05 +0000
Message-ID: <1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Previously, an application which had multiple libxl ctxs in multiple
threads would itself have to plumb SIGCHLD through to each ctx.
Instead, permit multiple libxl ctxs to all share the SIGCHLD handler.

We keep a list of all the ctxs which are interested in SIGCHLD and
notify all of their self-pipes.

In more detail:

 * sigchld_owner, the ctx* of the SIGCHLD owner, is replaced by
   sigchld_users, a list of SIGCHLD users.

 * Each ctx keeps track of whether it is on the users list, so that
   libxl__sigchld_needed and libxl__sigchld_notneeded now idempotently
   add or remove the ctx to or from the list, instead of idempotently
   installing and removing the handler.

   We ensure that we always have the SIGCHLD handler installed
   iff the sigchld_users list is nonempty.  To make this a bit
   easier we make sigchld_installhandler_core and
   sigchld_removehandler_core idempotent.

   Specifically, the call sites for sigchld_installhandler_core and
   sigchld_removehandler_core are updated to manipulate sigchld_users
   and only call the install or remove functions as applicable.

 * In the signal handler we walk the list of SIGCHLD users and write
   to each of their self-pipes.  That means that we need to arrange to
   defer SIGCHLD when we are manipulating the list (to avoid the
   signal handler interrupting our list manipulation); this is quite
   tiresome to arrange.

   The code as written will, on the first installation of the SIGCHLD
   handler, firstly install the real handler, then immediately replace
   it with the deferral handler.  Doing it this way makes the code
   clearer as it makes the SIGCHLD deferral machinery much more
   self-contained (and hence easier to reason about).

 * The first part of libxl__sigchld_notneeded is broken out into a new
   function sigchld_user_remove (which is also needed for postfork).
   And of course that first part of the function is now rather
   different, as explained above.

 * sigchld_installhandler_core no longer takes the gc argument,
   because it now deals with SIGCHLD for all ctxs.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h    |    2 +-
 tools/libxl/libxl_fork.c     |  132 +++++++++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.h |    3 +
 3 files changed, 114 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 824ac88..ab6ac5c 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -472,7 +472,7 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
- *       statues.  The application must have only one libxl_ctx
+ *       statuses.  The application may have multiple libxl_ctxs
  *       configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b6b14fe..5b42bad 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -46,11 +46,18 @@ static int atfork_registered;
 static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
     LIBXL_LIST_HEAD_INITIALIZER(carefds);
 
-/* non-null iff installed, protected by no_forking */
-static libxl_ctx *sigchld_owner;
+/* Protected against concurrency by no_forking.  sigchld_users is
+ * protected against being interrupted by SIGCHLD (and thus read
+ * asynchronously by the signal handler) by sigchld_defer (see
+ * below). */
+static bool sigchld_installed; /* 0 means not */
+static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
+    LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
 
-static void sigchld_removehandler_core(void);
+static void sigchld_removehandler_core(void); /* idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
 static void atfork_lock(void)
 {
@@ -126,8 +133,7 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    if (sigchld_owner)
-        sigchld_removehandler_core();
+    sigchld_user_remove(ctx);
 
     atfork_unlock();
 }
@@ -152,7 +158,8 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 }
 
 /*
- * Actual child process handling
+ * Low-level functions for child process handling, including
+ * the main SIGCHLD handler.
  */
 
 /* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
@@ -176,9 +183,16 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
 static void sigchld_handler(int signo)
 {
+    /* This function has to be reentrant!  Luckily it is. */
+
+    libxl_ctx *notify;
     int esave = errno;
-    int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
-    assert(!e); /* errors are probably EBADF, very bad */
+
+    LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
+        int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
+        assert(!e); /* errors are probably EBADF, very bad */
+    }
+
     errno = esave;
 }
 
@@ -195,25 +209,74 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
     assert(!r);
 }
 
-static void sigchld_removehandler_core(void)
+/*
+ * SIGCHLD deferral
+ *
+ * sigchld_defer and sigchld_release are a bit like using sigprocmask
+ * to block the signal, only they work for the whole process.  Sadly
+ * this has to be done by setting a special handler that records the
+ * "pendingness" of the signal here in the program.  How tedious.
+ *
+ * A property of this approach is that the signal handler itself
+ * must be reentrant (see the comment in release_sigchld).
+ *
+ * Callers have the atfork_lock so there is no risk of concurrency
+ * within these functions, aside obviously from the risk of being
+ * interrupted by the signal.
+ */
+
+static volatile sig_atomic_t sigchld_occurred_while_deferred;
+
+static void sigchld_handler_when_deferred(int signo)
+{
+    sigchld_occurred_while_deferred = 1;
+}
+
+static void defer_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+}
+
+static void release_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler, 0);
+    if (sigchld_occurred_while_deferred) {
+        sigchld_occurred_while_deferred = 0;
+        /* We might get another SIGCHLD here, in which case
+         * sigchld_handler will be interrupted and re-entered.
+         * This is OK. */
+        sigchld_handler(SIGCHLD);
+    }
+}
+
+/*
+ * Meat of the child process handling.
+ */
+
+static void sigchld_removehandler_core(void) /* idempotent */
 {
     struct sigaction was;
     int r;
     
+    if (!sigchld_installed)
+        return;
+
     r = sigaction(SIGCHLD, &sigchld_saved_action, &was);
     assert(!r);
     assert(!(was.sa_flags & SA_SIGINFO));
     assert(was.sa_handler == sigchld_handler);
-    sigchld_owner = 0;
+
+    sigchld_installed = 0;
 }
 
-static void sigchld_installhandler_core(libxl__gc *gc)
+static void sigchld_installhandler_core(void) /* idempotent */
 {
-    struct sigaction ours;
-    int r;
+    if (sigchld_installed)
+        return;
 
-    assert(!sigchld_owner);
-    sigchld_owner = CTX;
+    sigchld_installed = 1;
 
     sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
@@ -223,15 +286,32 @@ static void sigchld_installhandler_core(libxl__gc *gc)
              sigchld_saved_action.sa_handler == SIG_IGN)));
 }
 
-void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx) /* idempotent */
 {
-    int rc;
+    if (!ctx->sigchld_user_registered)
+        return;
 
     atfork_lock();
-    if (sigchld_owner == CTX)
+    defer_sigchld();
+
+    LIBXL_LIST_REMOVE(ctx, sigchld_users_entry);
+
+    release_sigchld();
+
+    if (LIBXL_LIST_EMPTY(&sigchld_users))
         sigchld_removehandler_core();
+
     atfork_unlock();
 
+    ctx->sigchld_user_registered = 0;
+}
+
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+{
+    int rc;
+
+    sigchld_user_remove(CTX);
+
     if (libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, 0);
         if (rc)
@@ -262,12 +342,20 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, POLLIN);
         if (rc) goto out;
     }
+    if (!CTX->sigchld_user_registered) {
+        atfork_lock();
 
-    atfork_lock();
-    if (sigchld_owner != CTX) {
-        sigchld_installhandler_core(gc);
+        sigchld_installhandler_core();
+
+        defer_sigchld();
+
+        LIBXL_LIST_INSERT_HEAD(&sigchld_users, CTX, sigchld_users_entry);
+
+        release_sigchld();
+        atfork_unlock();
+
+        CTX->sigchld_user_registered = 1;
     }
-    atfork_unlock();
 
     rc = 0;
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index fba681d..8429448 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -355,6 +355,8 @@ struct libxl__ctx {
     int sigchld_selfpipe[2]; /* [0]==-1 means handler not installed */
     libxl__ev_fd sigchld_selfpipe_efd;
     LIBXL_LIST_HEAD(, libxl__ev_child) children;
+    bool sigchld_user_registered;
+    LIBXL_LIST_ENTRY(libxl_ctx) sigchld_users_entry;
 
     libxl_version_info version_info;
 };
@@ -840,6 +842,7 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
 int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
 void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
+void libxl__sigchld_check_stale_handler(void);
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CDm-00022E-Hf; Fri, 17 Jan 2014 16:24:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CDl-0001zU-Eb
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:24:37 +0000
Received: from [85.158.139.211:48591] by server-12.bemta-5.messagelabs.com id
	7B/A0-30017-44959D25; Fri, 17 Jan 2014 16:24:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389975865!10442980!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 871 invoked from network); 17 Jan 2014 16:24:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:24:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="91822638"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:24:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:24:34 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDX-0004Mr-Ky;
	Fri, 17 Jan 2014 16:24:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CDW-0000Kt-Ch;
	Fri, 17 Jan 2014 16:24:22 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 16:24:04 +0000
Message-ID: <1389975845-1195-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/12] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to want to introduce another call site in the final
substantive patch.

Pure code motion; no functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index ce8e8eb..b6b14fe 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -182,6 +182,19 @@ static void sigchld_handler(int signo)
     errno = esave;
 }
 
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
+{
+    struct sigaction ours;
+    int r;
+
+    memset(&ours,0,sizeof(ours));
+    ours.sa_handler = handler;
+    sigemptyset(&ours.sa_mask);
+    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
+    r = sigaction(SIGCHLD, &ours, old);
+    assert(!r);
+}
+
 static void sigchld_removehandler_core(void)
 {
     struct sigaction was;
@@ -202,12 +215,7 @@ static void sigchld_installhandler_core(libxl__gc *gc)
     assert(!sigchld_owner);
     sigchld_owner = CTX;
 
-    memset(&ours,0,sizeof(ours));
-    ours.sa_handler = sigchld_handler;
-    sigemptyset(&ours.sa_mask);
-    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
-    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
-    assert(!r);
+    sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
     assert(((void)"application must negotiate with libxl about SIGCHLD",
             !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:26:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CFF-00033A-7A; Fri, 17 Jan 2014 16:26:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W4CFD-00032L-PE
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:26:07 +0000
Received: from [85.158.137.68:19525] by server-17.bemta-3.messagelabs.com id
	DF/78-15965-E9959D25; Fri, 17 Jan 2014 16:26:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1389975965!9769616!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17936 invoked from network); 17 Jan 2014 16:26:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Jan 2014 16:26:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 17 Jan 2014 16:26:04 +0000
Message-Id: <52D967AD0200007800114AA2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 17 Jan 2014 16:26:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<1389974929.6697.122.camel@kazak.uk.xensource.com>
In-Reply-To: <1389974929.6697.122.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
>> max_pfn/num_physpages isn't that far off for guests with less than
>> 4Gb; the number calculated from the PoD data is a little worse.
> 
> On ARM RAM may not start at 0 and so using max_pfn can be very
> misleading and in practice causes arm to balloon down to 0 as fast as it
> can.

Ugly. Is that only due to the temporary workaround for there not
being an IOMMU?

And short of the initial value needing to be architecture specific -
can you see a calculation that would yield a decent result on ARM
that would also be suitable on x86?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:28:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CHe-0003kG-1z; Fri, 17 Jan 2014 16:28:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4CHc-0003iZ-B6
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:28:36 +0000
Received: from [193.109.254.147:21055] by server-6.bemta-14.messagelabs.com id
	73/C9-14958-33A59D25; Fri, 17 Jan 2014 16:28:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389976113!11560917!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12577 invoked from network); 17 Jan 2014 16:28:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:28:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93921657"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:28:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 11:28:32 -0500
Message-ID: <1389976111.6697.125.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 16:28:31 +0000
In-Reply-To: <1389975845-1195-5-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-5-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 04/12] libxl: fork: Document
 libxl_sigchld_owner_libxl better
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:23 +0000, Ian Jackson wrote:
> libxl_sigchld_owner_libxl ought to have been mentioned in the list of
> options for chldowner.  Since it's the default, move the description
> of its behaviour into the description of that option.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:37:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CQD-0004UQ-Cp; Fri, 17 Jan 2014 16:37:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4CQB-0004UL-Os
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 16:37:27 +0000
Received: from [85.158.143.35:48749] by server-3.bemta-4.messagelabs.com id
	59/27-32360-74C59D25; Fri, 17 Jan 2014 16:37:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389976644!12420081!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5282 invoked from network); 17 Jan 2014 16:37:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:37:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,674,1384300800"; d="scan'208";a="93925327"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:37:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 11:37:22 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CQ6-0004RP-AC;
	Fri, 17 Jan 2014 16:37:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4CQ4-0000NS-L7;
	Fri, 17 Jan 2014 16:37:20 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.23615.552749.852681@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 16:37:19 +0000
To: <xen-devel@lists.xensource.com>, Ian Campbell <ian.campbell@citrix.com>,
	Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To check that this was doing roughly the right things, I straced xl.

Here is what it does before the first time libxl wants to fork:

 pipe([17, 18])                          = 0
 rt_sigaction(SIGCHLD, {0xb76ad507, [], SA_RESTART|SA_NOCLDSTOP}, {SIG_DFL, [], 0}, 8) = 0
 rt_sigaction(SIGCHLD, {0xb76ad150, [], SA_RESTART|SA_NOCLDSTOP}, NULL, 8) = 0
 rt_sigaction(SIGCHLD, {0xb76ad507, [], SA_RESTART|SA_NOCLDSTOP}, NULL, 8) = 0
 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb744a978) = 10033

That seems about right.  (It does leak the self-pipe into the child
but that is of no consequence.)


Here is what happens in the parent when the child exits:

  --- SIGCHLD (Child exited) @ 0 (0) ---
  write(18, "\0", 1)                      = 1
  sigreturn()                             = ? (mask now [])
  gettimeofday({1389975566, 501125}, NULL) = 0
  poll([{fd=17, events=POLLIN}, {fd=7, events=POLLIN}, {fd=7, events=POLLIN}], 3, -1) = 1 ([{fd=17, revents=POLLIN}])
  gettimeofday({1389975566, 501571}, NULL) = 0
  read(17, "\0", 256)                     = 1
  waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG) = 10033

That looks mostly right for the libxl child handling except that there
are two instances of {fd=7, events=POLLIN}.  I'm going to investigate
what that is and why it might be happening, starting by trying to
figure out what fd 7 is.

The trace then continues:

  close(14)                               = 0
  close(15)                               = 0
  close(12)                               = 0
  munmap(0xb766a000, 4096)                = 0
  close(11)                               = 0

I think that is the migration ao and its fds etc. being torn down.

  write(10, "\0", 1)                      = 1

This is the poller wakeup for ao completion.

  rt_sigaction(SIGCHLD, {0xb76ad150, [], SA_RESTART|SA_NOCLDSTOP}, NULL, 8) = 0
  rt_sigaction(SIGCHLD, {0xb76ad507, [], SA_RESTART|SA_NOCLDSTOP}, NULL, 8) = 0
  rt_sigaction(SIGCHLD, {SIG_DFL, [], 0}, {0xb76ad507, [], SA_RESTART|SA_NOCLDSTOP}, 8) = 0

And here libxl removes the ctx from the SIGCHLD users: the first two
sigaction calls are from the defer and release, and the final one
restores the application's default handler.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:46:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CZ1-00058s-Fn; Fri, 17 Jan 2014 16:46:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1W4CYz-00058n-GD
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:46:33 +0000
Received: from [193.109.254.147:41038] by server-4.bemta-14.messagelabs.com id
	5E/F1-03916-86E59D25; Fri, 17 Jan 2014 16:46:32 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389977191!11609988!1
X-Originating-IP: [209.85.220.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26581 invoked from network); 17 Jan 2014 16:46:32 -0000
Received: from mail-vc0-f169.google.com (HELO mail-vc0-f169.google.com)
	(209.85.220.169)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:46:32 -0000
Received: by mail-vc0-f169.google.com with SMTP id hq11so1666662vcb.14
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 08:46:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:subject:mime-version:content-type
	:content-transfer-encoding;
	bh=4J8ENiW7pZGHQ+o1Xr+BHdPf0UBM40O/7v5a6OkF2rA=;
	b=zWfYrUEAPFAk4cO8TGA5tosbUjNUmSPrhQz0CiWNMH5NXuQCIpFMHFn2p12Hic6nAT
	YlPE8v1eVaPjJhhO7gD6/3/qtV4js+5YNm0tnNE+y2qy6oZn8emuLEPV7zc+VZlSmwsD
	WMaAV9L8F0grdL3HPU/Js5UeNKuQS2NAIjj97lOsHRgTZmXciV6MyuQGjrnz4y8XEQOP
	uc8SciYDVIXbw+I3H9Cu9CCfi4kiTj4EtqxPid1NiZNGjitSEoaeuJ1eWQU59Q9cdlBp
	yxOcGso/mMFVe3GNJVjyZ+pzJbwi3ZwlaP80gOVZUG3YIIoDYhP8pf5WsBg7BNpY4SrZ
	vXng==
X-Received: by 10.52.231.130 with SMTP id tg2mr1173374vdc.16.1389977190793;
	Fri, 17 Jan 2014 08:46:30 -0800 (PST)
Received: from [127.0.0.1] (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id gt5sm14304483vdb.2.2014.01.17.08.46.28
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 08:46:30 -0800 (PST)
Date: Fri, 17 Jan 2014 13:46:15 -0300
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <122967730.20140117134615@gmail.com>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Subject: [Xen-devel] micro-pv
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Just a quick heads up to all that helped me with this. It is now
working. I have a simple context switcher running and I am currently
working on my embedded OS initialisation. All is looking good. Thanks
for your help.

Once I have got the OS running and communicating via the Xen console I
will then have to revisit the real time performance aspects. I
imagine that will be in February now. Is there anyone able and willing
to help me to get up to speed in that area of the Xen codebase?

Regards.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Just a quick heads up to all that helped me with this. It is now
working. I have a simple context switcher running and I am currently
working on my embedded OS initialisation. All is looking good. Thanks
for your help.

Once I have got the OS running and communicating via the Xen console I
will then have to revisit the real-time performance aspects. I
imagine that will be in February now. Is there anyone able and willing
to help me to get up to speed in that area of the Xen codebase?

Regards.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:50:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CcZ-0005Qe-7z; Fri, 17 Jan 2014 16:50:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W4CcP-0005QU-4t
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:50:13 +0000
Received: from [85.158.139.211:16110] by server-16.bemta-5.messagelabs.com id
	51/53-11843-C3F59D25; Fri, 17 Jan 2014 16:50:04 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1389977401!10414380!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=3.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,HTML_OBFUSCATE_05_10,INTERRUPTUS,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9339 invoked from network); 17 Jan 2014 16:50:03 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 16:50:03 -0000
Received: from mail-ob0-f178.google.com ([209.85.214.178]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUtlfOQBFjJlskPRCBWFDvZuU30j5yK6T@postini.com;
	Fri, 17 Jan 2014 08:50:03 PST
Received: by mail-ob0-f178.google.com with SMTP id uz6so4546030obc.37
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 08:50:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=ocGPDpo5YVKFqE0Fwht5dwccG0X3RflMSfu1uwD5AOQ=;
	b=JZ4A7nR9shUTzlqS+mHLyoERZcVI0JKfpadYWht4y80dTatKAPaimsTuYtU1IMFLS8
	i8zjyospJFLaTq34VTzz3L/kcVcwCakQES+7LXCe+i5t2oT//rpB7J9UOfIu95AsZ5dg
	ECwAs0lyt57fvwZE0t/9qRIrteHN4Y5Jhng4I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=ocGPDpo5YVKFqE0Fwht5dwccG0X3RflMSfu1uwD5AOQ=;
	b=hQxxXlrvOnK17oe/JUW5ojLLlSF6Yat3asburHwe4+QdTmbt5JqvqlFLkCODwiDLBW
	EBzPQ3tkDvXr27vCy/zMZAHTC1lMto4VyJqJHopoZGUggn3ih72gindnsD+Hla6kJjYu
	c4u9NQK8M53Olj3hrNktMgkT87zHQgmu1GMNWreEMktUBgA7yO/jWAcXXRaXtEK718iV
	DnieifYrRA6vgNGxx+JvdwZqpQCC91JQeQkIQ23SfLSQIGr3m5IqmUYWM+6L9PcvfG25
	JAB36haxmG/iKHD1P/hRyptga9Ho7B9uF2ETSChBkCDc561qrKFbYaV2dRiaVdmYd7Ye
	0Ekw==
X-Gm-Message-State: ALoCoQluBcIIQD0tjvDucwWHAYmV76E4f14WvHgU3/a/IEqTSmzmuOE23qkJuqhzco7D0ZErbmu193s9ohPN83Eg3nnQZhIhxImsmv9UqINN1UKyx0FHgGghTznXjCHhYgf/VVLJoJk5Xv1x9Q3HnDAB5sYpppU47pB7m99Ufx9hLFblHCXo6s0=
X-Received: by 10.60.50.202 with SMTP id e10mr2359170oeo.39.1389977400804;
	Fri, 17 Jan 2014 08:50:00 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.50.202 with SMTP id e10mr2359156oeo.39.1389977400620;
	Fri, 17 Jan 2014 08:50:00 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Fri, 17 Jan 2014 08:50:00 -0800 (PST)
Date: Fri, 17 Jan 2014 18:50:00 +0200
Message-ID: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2255479600936163449=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2255479600936163449==
Content-Type: multipart/alternative; boundary=001a11c3094635c72504f02d55cb

--001a11c3094635c72504f02d55cb
Content-Type: text/plain; charset=ISO-8859-1

Hi,

does anyone know about efforts on bringing QNX Neutrino as a dom0 or domU
in Xen - or, more specifically, in RT-Xen with global/partitioning
schedulers, which should potentially support real-time requirements for
the target OS?

Existing papers on RT-Xen imply using Linux as a target system, and all QNX
mentions with regard to Xen are quite outdated. I see that RT-Xen
activities are quite recent and production applications may still be
absent, but perhaps someone has already tried this?

Suikov Pavlo
GlobalLogic
M +38.066.667.1296  S psujkov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--001a11c3094635c72504f02d55cb
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div>Hi,<br><br></div>does anyone know about efforts =
on bringing QNX Neutrino as a dom0 or domU in Xen - or, more specifically, =
in RT-Xen with global/partitioning schedulers which should potentially supp=
ort real-time requirements for target OS?<br>
<br></div>Existing papers on RT-Xen imply using Linux as a target system, a=
nd all QNX mentions with regard to Xen are quite outdated. I see that RT-Xe=
n activities are quite recent and production applications may still be abse=
nt, but maybe anyone has tried this yet?<br clear=3D"all">
<div><div><div><div><div dir=3D"ltr"><font size=3D"-1"><br><span style=3D"v=
ertical-align:baseline;font-variant:normal;font-style:normal;font-size:12px=
;background-color:transparent;text-decoration:none;font-family:Arial;font-w=
eight:bold">Suikov Pavlo</span><br>
<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span><br><span style=3D"vertical=
-align:baseline;font-variant:normal;font-style:normal;font-size:12px;backgr=
ound-color:transparent;text-decoration:none;font-family:Arial;font-weight:n=
ormal">M +<font size=3D"-1">3</font>8.<font size=3D"-1">0<font size=3D"-1">=
66</font></font>.<font size=3D"-1">66<font size=3D"-1">7</font></font>.<fon=
t size=3D"-1">1<font size=3D"-1">296</font></font>=A0 S psujkov</span><br>
<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:#1155cc;background-color:transparent;f=
ont-weight:normal;font-style:normal;font-variant:normal;text-decoration:und=
erline;vertical-align:baseline">www.globallogic.com</span></a><span style=
=3D"vertical-align:baseline;font-variant:normal;font-style:normal;font-size=
:12px;background-color:transparent;text-decoration:none;font-family:Arial;f=
ont-weight:normal"></span><br>
<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:#1155cc;background-color:transparent;f=
ont-weight:normal;font-style:normal;font-variant:normal;text-decoration:und=
erline;vertical-align:baseline"></span></a><br>
<a href=3D"http://www.globallogic.com/email_disclaimer.txt" target=3D"_blan=
k"><span style=3D"font-size:11px;font-family:Arial;color:#1155cc;background=
-color:transparent;font-weight:normal;font-style:normal;font-variant:normal=
;text-decoration:underline;vertical-align:baseline">http://www.globallogic.=
com/email_disclaimer.txt</span></a><span style=3D"vertical-align:baseline;f=
ont-variant:normal;font-style:normal;font-size:11px;background-color:transp=
arent;text-decoration:none;font-family:Arial;font-weight:normal"></span></f=
ont></div>
</div>
</div></div></div></div>

--001a11c3094635c72504f02d55cb--


--===============2255479600936163449==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2255479600936163449==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 16:54:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Cgt-0005jA-6W; Fri, 17 Jan 2014 16:54:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4Cgr-0005ix-HR
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 16:54:41 +0000
Received: from [85.158.137.68:25981] by server-1.bemta-3.messagelabs.com id
	0F/2A-29598-05069D25; Fri, 17 Jan 2014 16:54:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1389977678!9867199!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3326 invoked from network); 17 Jan 2014 16:54:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:54:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91834440"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 16:54:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 11:54:37 -0500
Message-ID: <1389977676.6697.132.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 17 Jan 2014 16:54:36 +0000
In-Reply-To: <52D967AD0200007800114AA2@nat28.tlf.novell.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<1389974929.6697.122.camel@kazak.uk.xensource.com>
	<52D967AD0200007800114AA2@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:26 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
> >> max_pfn/num_physpages isn't that far off for guest with less than
> >> 4Gb, the number calculated from the PoD data is a little worse.
> > 
> > On ARM RAM may not start at 0 and so using max_pfn can be very
> > misleading and in practice causes arm to balloon down to 0 as fast as it
> > can.
> 
> Ugly. Is that only due to the temporary workaround for there not
> being an IOMMU?

It's not to do with IOMMUs, no, and it isn't temporary.

Architecturally on ARM it's not required for RAM to be at address 0 and
it is not uncommon for it to start at 1, 2 or 3GB (as a property of the
SoC design).

If you have 128M of RAM at 0x80000000-0x88000000 then max_pfn is 0x88000
but target pages is just 0x8000. If current_pages is initialised to
max_pfn then the kernel immediately thinks it has to get rid of 0x80000
pages.

> And short of the initial value needing to be architecture specific -
> can you see a calculation that would yield a decent result on ARM
> that would also be suitable on x86?

I previously had a patch to use memblock_phys_mem_size(), but when I saw
Boris switch to get_num_physpages() I thought that would be OK, but I
didn't look into it very hard. Without checking I suspect they return
pretty much the same thing and so memblock_phys_mem_size will have the
same issue you observed (which I confess I haven't yet gone back and
understood).

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 16:59:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4ClP-00067b-Gq; Fri, 17 Jan 2014 16:59:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4ClP-00067V-2q
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:59:23 +0000
Received: from [193.109.254.147:50025] by server-14.bemta-14.messagelabs.com
	id 40/CD-12628-A6169D25; Fri, 17 Jan 2014 16:59:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
From xen-devel-bounces@lists.xen.org Fri Jan 17 16:59:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 16:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4ClP-00067b-Gq; Fri, 17 Jan 2014 16:59:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4ClP-00067V-2q
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 16:59:23 +0000
Received: from [193.109.254.147:50025] by server-14.bemta-14.messagelabs.com
	id 40/CD-12628-A6169D25; Fri, 17 Jan 2014 16:59:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1389977960!11612226!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3401 invoked from network); 17 Jan 2014 16:59:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 16:59:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93933302"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 16:59:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 11:59:19 -0500
Message-ID: <1389977958.6697.135.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Fri, 17 Jan 2014 16:59:18 +0000
In-Reply-To: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
References: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 18:50 +0200, Pavlo Suikov wrote:

> does anyone know about efforts on bringing QNX Neutrino as a dom0 or
> domU in Xen - or, more specifically, in RT-Xen with
> global/partitioning schedulers which should potentially support
> real-time requirements for target OS?
> 
> 
> Existing papers on RT-Xen imply using Linux as a target system, and
> all QNX mentions with regard to Xen are quite outdated. I see that
> RT-Xen activities are quite recent and production applications may
> still be absent, but maybe anyone has tried this yet?

I don't know much (or anything) about RT-Xen but from a regular Xen PoV
it's been a long while since I've heard anything about QNX on x86 Xen,
and I've never heard anything about QNX on ARM Xen (at least the h/w
assisted port in mainline).

If you are interested specifically in RT-Xen then I think you will
probably have more luck on their mailing list, not much (essentially no)
RT-Xen stuff happens on this list.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:14:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4CzX-0006R0-FA; Fri, 17 Jan 2014 17:13:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4CzW-0006Qv-34
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 17:13:58 +0000
Received: from [85.158.139.211:3069] by server-10.bemta-5.messagelabs.com id
	E3/A0-01405-5D469D25; Fri, 17 Jan 2014 17:13:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1389978835!10264813!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1779 invoked from network); 17 Jan 2014 17:13:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:13:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93938926"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:13:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 12:13:54 -0500
Message-ID: <1389978832.6697.137.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 17 Jan 2014 17:13:52 +0000
In-Reply-To: <52D94D3A0200007800114957@nat28.tlf.novell.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 14:33 +0000, Jan Beulich wrote:
> Interestingly, (a) too results in the driver not ballooning down
> enough - there's a gap of exactly as many pages as are marked
> reserved below the 1Mb boundary. Therefore aforementioned
> upstream commit is presumably broken.

Can we count those reserved pages? (I guess you mean reserved in the
e820?)

> Short of a reliable (and ideally architecture independent) way of
> knowing the necessary adjustment value, the next best solution
> (not ballooning down too little, but also not ballooning down much
> more than necessary) turns out to be using the minimum of (b)
> and (c): When the domain only has memory below 4Gb, (b) is
> more precise, whereas in the other cases (c) gets closest.

I think I'd prefer an arch specific calculation (or an arch specific
adjustment to a generic calculation) to either of the above.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4D1v-0006ZZ-HF; Fri, 17 Jan 2014 17:16:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4D1u-0006ZT-1a
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 17:16:26 +0000
Received: from [193.109.254.147:36667] by server-10.bemta-14.messagelabs.com
	id 34/E2-20752-96569D25; Fri, 17 Jan 2014 17:16:25 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1389978983!11604764!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 510 invoked from network); 17 Jan 2014 17:16:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:16:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93940070"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:16:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 12:16:22 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4D1p-0003wh-D2;
	Fri, 17 Jan 2014 17:16:21 +0000
Date: Fri, 17 Jan 2014 17:15:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Mj Embd <mj.embd@gmail.com>
In-Reply-To: <CAPUj1OMqxnxBbpgPL_r9kZJddtY8J1z_TW5BweBmVWN5EXCv6g@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401171714490.21510@kaball.uk.xensource.com>
References: <CAPUj1OOnpbs4n5sNth6QWzgeHH_qH7JsA9jU=KkVCn091dOaww@mail.gmail.com>
	<1383234184.25018.114.camel@dagon.hellion.org.uk>
	<CAPUj1ONL2P6jBHOfAHOvrv_pnEtrJnsH=ui_2ocfuAYuj_F3dw@mail.gmail.com>
	<alpine.DEB.2.02.1311061820550.26077@kaball.uk.xensource.com>
	<CAPUj1ONi_qbsROegBCqXPCP0taex4gjxsWXvB1sokjHmYULHHw@mail.gmail.com>
	<alpine.DEB.2.02.1311071132280.26077@kaball.uk.xensource.com>
	<CAPUj1OPKarPAzBE8uhc9ScWEhcL3-Sps1kjZ4k3zHQGRWJ_X0g@mail.gmail.com>
	<alpine.DEB.2.02.1311071646040.26077@kaball.uk.xensource.com>
	<CAPUj1OMqxnxBbpgPL_r9kZJddtY8J1z_TW5BweBmVWN5EXCv6g@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [query] gic_set_lr always uses maintenance Interrupt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 7 Nov 2013, Mj Embd wrote:
> On Thu, Nov 7, 2013 at 10:18 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Thu, 7 Nov 2013, Mj Embd wrote:
> >> On Thu, Nov 7, 2013 at 5:07 PM, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > Please don't top post as it makes it harder to follow the conversation.
> >> >
> >> > On Thu, 7 Nov 2013, Mj Embd wrote:
> >> >> A few thoughts are circling around my mind, don't know how much
> >> >> interrupt latency it would have.
> >> >>
> >> >> Rather than the hypervisor entry when guest does EOI, a late / lazy
> >> >> checkin on LR's can be done
> >> >>  on next hypervisor entry by
> >> >> a) guest doing something and trapping to hypervisor
> >> >> b) scheduler timer in hypervisor
> >> >>
> >> >> What do you think on this...
> >> >
> >> > It might work.
> >> > One key issue is how to identify that the guest EOIed a particular irq
> >> > and henceforth the corresponding LR can be reused.
> >> [mj] I believe that GICH_ELSR0/1 can be read anytime to get the status.
> >>
> >> > I hope that the status bits in the LR register reflect this condition.
> >> > Maybe the status becomes 00 invalid after the guest does EOI? Otherwise
> >> [mj] The state in LR is marked invalid by Virtual CPU interface.
> >
> > Right. In that case the lazy LR clearance should work.
> >
> [mj] I have started analysing it, can i send a patch in near future

Do you have an update on this?
Did you manage to write such a patch?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DAC-00088a-Qx; Fri, 17 Jan 2014 17:25:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4DAB-00088V-F0
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 17:24:59 +0000
Received: from [85.158.143.35:14037] by server-1.bemta-4.messagelabs.com id
	4A/CB-02132-A6769D25; Fri, 17 Jan 2014 17:24:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389979496!12425083!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32726 invoked from network); 17 Jan 2014 17:24:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93942714"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:24:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:24:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W4DA6-0004fm-C0;
	Fri, 17 Jan 2014 17:24:54 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 17 Jan 2014 17:24:53 +0000
Message-ID: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
causes us to lose the top bits of the DMA address if the size of a DMA
address is not the same as the size of the physical address.

This can happen in practice on ARM where foreign pages can be above 4GB even
though the local kernel does not have LPAE page tables enabled (which is
totally reasonable if the guest does not itself have >4GB of RAM). In this
case the kernel still maps the foreign pages at a phys addr below 4G (as it
must) but the resulting DMA address (returned by the grant map operation) is
much higher.

This is analogous to a hardware device which has its view of RAM mapped up
high for some reason.

This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
systems with more than 4GB of RAM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 arch/arm/Kconfig          |    1 +
 drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..24307dc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1885,6 +1885,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1eac073..b626c79 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -77,12 +77,22 @@ static u64 start_dma_addr;
 
 static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
-	return phys_to_machine(XPADDR(paddr)).maddr;
+	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
+	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
+	dma |= paddr & ~PAGE_MASK;
+	return dma;
 }
 
 static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 {
-	return machine_to_phys(XMADDR(baddr)).paddr;
+	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
+	phys_addr_t paddr = dma;
+
+	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+
+	paddr |= baddr & ~PAGE_MASK;
+
+	return paddr;
 }
 
 static inline dma_addr_t xen_virt_to_bus(void *address)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DAC-00088a-Qx; Fri, 17 Jan 2014 17:25:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W4DAB-00088V-F0
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 17:24:59 +0000
Received: from [85.158.143.35:14037] by server-1.bemta-4.messagelabs.com id
	4A/CB-02132-A6769D25; Fri, 17 Jan 2014 17:24:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1389979496!12425083!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32726 invoked from network); 17 Jan 2014 17:24:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93942714"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:24:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:24:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W4DA6-0004fm-C0;
	Fri, 17 Jan 2014 17:24:54 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 17 Jan 2014 17:24:53 +0000
Message-ID: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
causes us to lose the top bits of the DMA address if the size of a DMA
address is not the same as the size of the physical address.

This can happen in practice on ARM where foreign pages can be above 4GB even
though the local kernel does not have LPAE page tables enabled (which is
totally reasonable if the guest does not itself have >4GB of RAM). In this
case the kernel still maps the foreign pages at a phys addr below 4G (as it
must) but the resulting DMA address (returned by the grant map operation) is
much higher.

This is analogous to a hardware device which has its view of RAM mapped up
high for some reason.

This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
systems with more than 4GB of RAM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 arch/arm/Kconfig          |    1 +
 drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..24307dc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1885,6 +1885,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1eac073..b626c79 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -77,12 +77,22 @@ static u64 start_dma_addr;
 
 static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
-	return phys_to_machine(XPADDR(paddr)).maddr;
+	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
+	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
+	dma |= paddr & ~PAGE_MASK;
+	return dma;
 }
 
 static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 {
-	return machine_to_phys(XMADDR(baddr)).paddr;
+	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
+	phys_addr_t paddr = dma;
+
+	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+
+	paddr |= baddr & ~PAGE_MASK;
+
+	return paddr;
 }
 
 static inline dma_addr_t xen_virt_to_bus(void *address)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:35:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DJt-0000L5-Va; Fri, 17 Jan 2014 17:35:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DJs-0000L0-C8
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:35:00 +0000
Received: from [85.158.139.211:41789] by server-3.bemta-5.messagelabs.com id
	C4/80-04773-3C969D25; Fri, 17 Jan 2014 17:34:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389980097!10446123!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20329 invoked from network); 17 Jan 2014 17:34:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:34:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93946464"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:34:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:34:56 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJn-0004jA-BA;
	Fri, 17 Jan 2014 17:34:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJl-0000ut-Qt;
	Fri, 17 Jan 2014 17:34:53 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 17:34:47 +0000
Message-ID: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 0/3] tools: Miscellaneous fixes for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here are three bugfixes.  1/3 is quite important.  I'm CCing it to Jim
Fehlig in case he happens to see fallout from it with libvirt.

 1/3 libxl: events: Pass correct nfds to poll
 2/3 xl: Free optdata_begin when saving domain config
 3/3 xenstore: xs_suspend_evtchn_port: always free portstr

These are based on staging, but they will also apply without trouble
to my recent libxl fork series.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:35:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DK4-0000Lh-L5; Fri, 17 Jan 2014 17:35:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DK3-0000LU-2G
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:35:11 +0000
Received: from [85.158.143.35:46944] by server-3.bemta-4.messagelabs.com id
	35/9E-32360-EC969D25; Fri, 17 Jan 2014 17:35:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389980108!5313582!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25999 invoked from network); 17 Jan 2014 17:35:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:35:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93946547"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:35:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:35:07 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJo-0004jD-Dy;
	Fri, 17 Jan 2014 17:34:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJm-0000uw-P2;
	Fri, 17 Jan 2014 17:34:54 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 17:34:48 +0000
Message-ID: <1389980090-3479-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] libxl: events: Pass correct nfds to poll
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_event.c:eventloop_iteration would pass the allocated pollfds
array size, rather than the used size, to poll (and to
afterpoll_internal).

The effect is that if the number of fds to poll on reduces, libxl will
poll on stale entries.  Because of the way the return value from poll
is processed, these stale entries are often harmless, because any events
coming back from poll for them are ignored by libxl.  However, they could
cause malfunctions:

It could result in unwanted SIGTTIN/SIGTTOU/SIGPIPE, for example, if
the fd has been reused to refer to an object which can generate those
signals.  Alternatively, it could result in libxl spinning if the
stale entry refers to an fd which happens now to be ready for the
previously-requested operation.

I have tested this with a localhost migration and inspected the strace
output.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index bdef7ac..1c48fee 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1386,7 +1386,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
      * can unlock it when it polls.
      */
     EGC_GC;
-    int rc;
+    int rc, nfds;
     struct timeval now;
     
     rc = libxl__gettimeofday(gc, &now);
@@ -1395,7 +1395,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     int timeout;
 
     for (;;) {
-        int nfds = poller->fd_polls_allocd;
+        nfds = poller->fd_polls_allocd;
         timeout = -1;
         rc = beforepoll_internal(gc, poller, &nfds, poller->fd_polls,
                                  &timeout, now);
@@ -1413,7 +1413,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     }
 
     CTX_UNLOCK;
-    rc = poll(poller->fd_polls, poller->fd_polls_allocd, timeout);
+    rc = poll(poller->fd_polls, nfds, timeout);
     CTX_LOCK;
 
     if (rc < 0) {
@@ -1428,8 +1428,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     rc = libxl__gettimeofday(gc, &now);
     if (rc) goto out;
 
-    afterpoll_internal(egc, poller,
-                       poller->fd_polls_allocd, poller->fd_polls, now);
+    afterpoll_internal(egc, poller, nfds, poller->fd_polls, now);
 
     rc = 0;
  out:
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:35:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DK5-0000Lt-0q; Fri, 17 Jan 2014 17:35:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DK3-0000LX-KQ
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:35:11 +0000
Received: from [85.158.143.35:46994] by server-1.bemta-4.messagelabs.com id
	13/C5-02132-FC969D25; Fri, 17 Jan 2014 17:35:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389980108!5313582!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26207 invoked from network); 17 Jan 2014 17:35:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:35:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93946554"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:35:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:35:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJp-0004jG-GI;
	Fri, 17 Jan 2014 17:34:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJo-0000v1-1A;
	Fri, 17 Jan 2014 17:34:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 17:34:49 +0000
Message-ID: <1389980090-3479-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving domain
	config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes valgrind a bit happier.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/xl_cmdimpl.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..aff6f90 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
                      ctx, fd, optdata_begin, hdr.optional_data_len,
                      source, "header"));
 
+    free(optdata_begin);
+
     fprintf(stderr, "Saving to %s new xl format (info"
             " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
             source, hdr.mandatory_flags, hdr.optional_flags,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 17:35:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DK5-0000M9-D5; Fri, 17 Jan 2014 17:35:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DK4-0000Lg-Ug
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:35:13 +0000
Received: from [193.109.254.147:21531] by server-14.bemta-14.messagelabs.com
	id 0B/B5-12628-0D969D25; Fri, 17 Jan 2014 17:35:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389980110!9270937!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25779 invoked from network); 17 Jan 2014 17:35:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:35:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91849448"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 17:35:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:35:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJq-0004jJ-H1;
	Fri, 17 Jan 2014 17:34:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJp-0000v6-3r;
	Fri, 17 Jan 2014 17:34:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 17:34:50 +0000
Message-ID: <1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port: always
	free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If portstr!=NULL but plen==0 this function would leak portstr.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/xenstore/xs.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index a636498..9cd99eb 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
     portstr = xs_read(xs, XBT_NULL, path, &plen);
     xs_daemon_close(xs);
 
-    if (!portstr || !plen)
-        return -1;
+    if (!portstr || !plen) {
+        port = -1;
+	goto out;
+    }
 
     port = atoi(portstr);
-    free(portstr);
 
+out:
+    free(portstr);
     return port;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

In-Reply-To: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving domain
	config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes valgrind a bit happier.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/xl_cmdimpl.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..aff6f90 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
                      ctx, fd, optdata_begin, hdr.optional_data_len,
                      source, "header"));
 
+    free(optdata_begin);
+
     fprintf(stderr, "Saving to %s new xl format (info"
             " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
             source, hdr.mandatory_flags, hdr.optional_flags,
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Jan 17 17:35:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DK5-0000M9-D5; Fri, 17 Jan 2014 17:35:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DK4-0000Lg-Ug
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:35:13 +0000
Received: from [193.109.254.147:21531] by server-14.bemta-14.messagelabs.com
	id 0B/B5-12628-0D969D25; Fri, 17 Jan 2014 17:35:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389980110!9270937!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25779 invoked from network); 17 Jan 2014 17:35:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:35:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91849448"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 17:35:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 12:35:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJq-0004jJ-H1;
	Fri, 17 Jan 2014 17:34:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DJp-0000v6-3r;
	Fri, 17 Jan 2014 17:34:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 17 Jan 2014 17:34:50 +0000
Message-ID: <1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port: always
	free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If portstr!=NULL but plen==0 this function would leak portstr.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/xenstore/xs.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index a636498..9cd99eb 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
     portstr = xs_read(xs, XBT_NULL, path, &plen);
     xs_daemon_close(xs);
 
-    if (!portstr || !plen)
-        return -1;
+    if (!portstr || !plen) {
+        port = -1;
+	goto out;
+    }
 
     port = atoi(portstr);
-    free(portstr);
 
+out:
+    free(portstr);
     return port;
 }
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Jan 17 17:43:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DRy-0001Bx-El; Fri, 17 Jan 2014 17:43:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W4DRw-0001Bn-TO
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:43:21 +0000
Received: from [193.109.254.147:47486] by server-4.bemta-14.messagelabs.com id
	4A/BB-03916-8BB69D25; Fri, 17 Jan 2014 17:43:20 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1389980598!11599904!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8121 invoked from network); 17 Jan 2014 17:43:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:43:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91851806"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 17:43:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 12:43:17 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W4DRs-0004Mc-VF;
	Fri, 17 Jan 2014 17:43:16 +0000
Message-ID: <52D96BB4.9030309@citrix.com>
Date: Fri, 17 Jan 2014 17:43:16 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389980090-3479-3-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1389980090-3479-3-git-send-email-ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving
 domain config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 17:34, Ian Jackson wrote:
> This makes valgrind a bit happier.
>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>

This is CID: 1055903

It is probably worth noting that there are 11 other resource leaks
Coverity has found in this file.

~Andrew

> ---
>  tools/libxl/xl_cmdimpl.c |    2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index d93e01b..aff6f90 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
>                       ctx, fd, optdata_begin, hdr.optional_data_len,
>                       source, "header"));
>  
> +    free(optdata_begin);
> +
>      fprintf(stderr, "Saving to %s new xl format (info"
>              " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
>              source, hdr.mandatory_flags, hdr.optional_flags,



From xen-devel-bounces@lists.xen.org Fri Jan 17 17:46:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DV5-0001VR-8r; Fri, 17 Jan 2014 17:46:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W4DV3-0001VK-Om
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 17:46:33 +0000
Received: from [85.158.139.211:9763] by server-9.bemta-5.messagelabs.com id
	24/70-15098-97C69D25; Fri, 17 Jan 2014 17:46:33 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1389980790!150050!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16058 invoked from network); 17 Jan 2014 17:46:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:46:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91852734"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 17:46:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 12:46:15 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W4DUk-0004PN-E9;
	Fri, 17 Jan 2014 17:46:14 +0000
Message-ID: <52D96C66.6000008@citrix.com>
Date: Fri, 17 Jan 2014 17:46:14 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port:
 always free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 17:34, Ian Jackson wrote:
> If portstr!=NULL but plen==0 this function would leak portstr.
>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/xenstore/xs.c |    9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
> index a636498..9cd99eb 100644
> --- a/tools/xenstore/xs.c
> +++ b/tools/xenstore/xs.c
> @@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
>      portstr = xs_read(xs, XBT_NULL, path, &plen);
>      xs_daemon_close(xs);
>  
> -    if (!portstr || !plen)
> -        return -1;
> +    if (!portstr || !plen) {
> +        port = -1;
> +	goto out;

You have mixed tabs/spaces here.

~Andrew

> +    }
>  
>      port = atoi(portstr);
> -    free(portstr);
>  
> +out:
> +    free(portstr);
>      return port;
>  }
>  



From xen-devel-bounces@lists.xen.org Fri Jan 17 17:51:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 17:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DZD-00028F-1w; Fri, 17 Jan 2014 17:50:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W4DZB-000286-Bo
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 17:50:49 +0000
Received: from [193.109.254.147:48054] by server-10.bemta-14.messagelabs.com
	id 56/04-20752-87D69D25; Fri, 17 Jan 2014 17:50:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1389981046!11575476!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21741 invoked from network); 17 Jan 2014 17:50:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:50:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93951427"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 17:50:45 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 12:50:45 -0500
Message-ID: <52D96D73.1030803@citrix.com>
Date: Fri, 17 Jan 2014 17:50:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: annie li <annie.li@oracle.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
	<52D922DD.2060407@oracle.com>
	<20140117140246.GB11681@zion.uk.xensource.com>
	<52D94F8C.7060509@oracle.com>
In-Reply-To: <52D94F8C.7060509@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xen.org,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 15:43, annie li wrote:
> 
> No, I am trying to implement 2 patches.

I don't understand the need for two patches here, particularly when
the first patch introduces a security issue.  Could you fold the
following (untested) patch into your v2 patch and give it a try?

Thanks.

David

8<----------------------
xen-netfront: prevent unsafe reuse of rx buf pages after uninit

---
 drivers/net/xen-netfront.c |   21 +++++++++++++++++----
 1 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 692589e..47aa599 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1134,19 +1134,32 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct sk_buff *skb;
 	int id, ref;
 
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
+		struct sk_buff *skb;
+		skb_frag_t *frag;
+		const struct page *page;
+
+		skb = np->rx_skbs[id];
+		if (!skb)
+			continue;
+
 		ref = np->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
-		skb = np->rx_skbs[id];
-		gnttab_end_foreign_access_ref(ref, 0);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		frag = &skb_shinfo(skb)->frags[0];
+		page = skb_frag_page(frag);
+
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+
+		gnttab_end_foreign_access(ref, 0, page);
 		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
-- 
1.7.2.5


List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 15:43, annie li wrote:
> 
> No, I am trying to implement 2 patches.

I don't understand the need for two patches here, particularly when
the first patch introduces a security issue.  Could you fold the following
(untested) patch into your v2 patch and give it a try?

Thanks.

David

8<----------------------
xen-netfront: prevent unsafe reuse of rx buf pages after uninit

---
 drivers/net/xen-netfront.c |   21 +++++++++++++++++----
 1 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 692589e..47aa599 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1134,19 +1134,32 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct sk_buff *skb;
 	int id, ref;
 
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
+		struct sk_buff *skb;
+		skb_frag_t *frag;
+		const struct page *page;
+
+		skb = np->rx_skbs[id];
+		if (!skb)
+			continue;
+
 		ref = np->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
-		skb = np->rx_skbs[id];
-		gnttab_end_foreign_access_ref(ref, 0);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		frag = &skb_shinfo(skb)->frags[0];
+		page = skb_frag_page(frag);
+
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+
+		gnttab_end_foreign_access(ref, 0, page);
 		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
-- 
1.7.2.5
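The extra get_page() in the patch above is the key point: gnttab_end_foreign_access() may defer releasing the page if the backend still holds the grant, so the caller must pin the page with its own reference first, and only the final put (via kfree_skb) frees it. A toy refcount model of that ordering, with hypothetical names standing in for the real kernel API, might look like:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model, not kernel code: the names below are illustrative
 * stand-ins for struct page refcounting and the grant-table call. */
struct page { int refcount; };

static void get_page(struct page *p) { p->refcount++; }

/* Returns true when the last reference is dropped, i.e. the page
 * would be freed at this point. */
static bool put_page(struct page *p) { return --p->refcount == 0; }

/* Models gnttab_end_foreign_access(): it consumes one page
 * reference, possibly at a later time if access is deferred. */
static bool gnttab_end_foreign_access_model(struct page *p)
{
    return put_page(p);
}

/* Mirrors the patched loop body: pin the page, end foreign access,
 * then drop the skb's reference (kfree_skb).  Returns true iff the
 * page survived until the skb was freed (no premature free). */
static bool release_rx_buf(void)
{
    struct page pg = { .refcount = 1 };   /* the skb frag's reference */

    get_page(&pg);                        /* extra ref for gnttab */
    bool freed_early = gnttab_end_foreign_access_model(&pg);
    bool freed_last = put_page(&pg);      /* kfree_skb() drops last ref */
    return !freed_early && freed_last;
}
```

Without the get_page() pin, the model's gnttab call would drop the only reference and the skb would later touch a freed page, which is the unsafe-reuse window the patch closes.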

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:00:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Dhw-0002iG-AA; Fri, 17 Jan 2014 17:59:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W4Dhv-0002iB-2e
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 17:59:51 +0000
Received: from [85.158.137.68:22412] by server-2.bemta-3.messagelabs.com id
	9D/6A-17329-69F69D25; Fri, 17 Jan 2014 17:59:50 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1389981587!9018808!1
X-Originating-IP: [209.85.160.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16209 invoked from network); 17 Jan 2014 17:59:49 -0000
Received: from mail-pb0-f48.google.com (HELO mail-pb0-f48.google.com)
	(209.85.160.48)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 17:59:49 -0000
Received: by mail-pb0-f48.google.com with SMTP id rr13so4384703pbb.35
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 09:59:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=XuN6UU9JZ7pFYAeD6s2GYdXM4g47BYiFiQJR63JMXi0=;
	b=DRj6N40fsDc54NtrEFR+wBRi4VzvI+5v9Naao4wPjp3Eje/sjrN9weoCVOIZZvQ3Zw
	HVr+8z2dy8LoaG4yCz6I4xLB/JRkIbOjrd/q1hEmk7FfVVqoBTudG4Tb5SGtmThs3hww
	ncu2SJNlhtcVr+8IB0Asi7uwKYnoFqV2dQbCSJ5TgrBIfWjmaXq7pLU2wzUElVHLAo4Z
	WnZbsxyDFqz+SwrM2UxWFJf0RnbPelVM0Gg++DibxyRGFDqErNexDt1UdXHTYwW24sHc
	bsfI0ZratKFWtd00fE7XSyTRDY82YNeyAKXMaPw39hCXiJnPpqbxTL7cNtJNdawh0ZXC
	b+aA==
X-Received: by 10.68.91.3 with SMTP id ca3mr3629970pbb.20.1389981567146;
	Fri, 17 Jan 2014 09:59:27 -0800 (PST)
Received: from [10.128.254.65] ([184.68.12.146])
	by mx.google.com with ESMTPSA id x5sm24424933pbw.26.2014.01.17.09.59.23
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 09:59:25 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Fri, 17 Jan 2014 17:59:12 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Message-ID: <CEFF1FF0.4A880%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [Patch v2 1/4] common/sysctl: Don't leak status in
	SYSCTL_page_offline_op
Thread-Index: Ac8TrdPB52RuskKr1kSEqrp1gqIE9A==
In-Reply-To: <1389095946-11932-1-git-send-email-andrew.cooper3@citrix.com>
Mime-version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Patch v2 1/4] common/sysctl: Don't leak status in
 SYSCTL_page_offline_op
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/01/2014 11:59, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

> In addition, 'copyback' should be cleared even in the error case.
> 
> Also fix the indentation of the arguments to copy_to_guest() to help clarify
> that the 'ret = -EFAULT' is not part of the condition.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
> 
> Changes in v2:
>  * Still clear copyback even in the error case.
> ---
>  xen/common/sysctl.c |    7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 117e095..0cb6ee1 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -230,12 +230,9 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t)
> u_sysctl)
>          }
>  
>          if ( copy_to_guest(
> -            op->u.page_offline.status, status,
> -            op->u.page_offline.end - op->u.page_offline.start + 1) )
> -        {
> +                 op->u.page_offline.status, status,
> +                 op->u.page_offline.end - op->u.page_offline.start + 1) )
>              ret = -EFAULT;
> -            break;
> -        }
>  
>          xfree(status);
>          copyback = 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

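The shape of the fix above is worth spelling out: with the early break removed, a copy_to_guest() failure only records -EFAULT and then falls through, so xfree(status) and copyback = 0 still execute on both paths. A minimal sketch of that control flow, using hypothetical stand-ins for the hypervisor allocator and the guest copy, might be:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define EFAULT 14

static int live_allocs;  /* crude leak counter standing in for the heap */

static void *xmalloc_bytes(size_t n) { live_allocs++; return malloc(n); }
static void xfree(void *p) { if (p) { live_allocs--; free(p); } }

/* Mirrors the fixed control flow: copy_fails stands in for
 * copy_to_guest() faulting.  The error path no longer breaks out,
 * so the status buffer is always freed and copyback still cleared. */
static long page_offline_op_model(int copy_fails)
{
    long ret = 0;
    int copyback = 1;
    char *status = xmalloc_bytes(32);

    memset(status, 0, 32);
    if (copy_fails)
        ret = -EFAULT;   /* record the error, but fall through */

    xfree(status);       /* always reached, matching the fixed code */
    copyback = 0;
    (void)copyback;
    return ret;
}
```

In the pre-fix shape, the break inside the error branch would skip the xfree(), leaking the status buffer on every guest fault.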
From xen-devel-bounces@lists.xen.org Fri Jan 17 18:01:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DjE-0002wN-Vu; Fri, 17 Jan 2014 18:01:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W4DjD-0002wG-RL
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:01:12 +0000
Received: from [85.158.137.68:26261] by server-6.bemta-3.messagelabs.com id
	29/C6-04868-6EF69D25; Fri, 17 Jan 2014 18:01:10 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1389981668!9841653!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30839 invoked from network); 17 Jan 2014 18:01:10 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:01:10 -0000
Received: by mail-pb0-f52.google.com with SMTP id jt11so2325295pbb.25
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 10:01:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=H6TLFpwMOun6bNW62dBGmoqRhSBEt3+PH88tNaVOFbs=;
	b=elbKhffwv8VIgeyqzanLLHsKwV/DdUo3OQUVuzXLfQapKXDT4OrIoFd6TAC5mFR9Mg
	/gYIpAkpWH4faXOdFnIdWtField2q7tsPtgR0xQOtCt3rQY/B0og0A5hYRLJbYsLXKAP
	assZJYO/DgTu4grAIxoSgkvta1SEUQ/Sj5tenMTH3ap51gk8TSX7+RW2ivZ0gjCObdPv
	qpzCXYNWzeuuiNy6ysrK4IJXFvUvSYP+US2TS8h5KXfwQ3hDXw1S2Ei0T+iVwWnKsed5
	TUJOcvb/boz+MnyEgPVojzao/oPfBRmGXQLaIDmFcGf+WjNNB3Xd2rPMzxEWMf72x3Qv
	QGVQ==
X-Received: by 10.66.197.135 with SMTP id iu7mr3422196pac.149.1389981667931;
	Fri, 17 Jan 2014 10:01:07 -0800 (PST)
Received: from [10.128.254.65] ([184.68.12.146])
	by mx.google.com with ESMTPSA id yz5sm32954601pac.9.2014.01.17.10.01.00
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 10:01:04 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Fri, 17 Jan 2014 18:00:48 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Message-ID: <CEFF2050.4AA29%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
	XENMEM_add_to_physmap
Thread-Index: Ac8Trgz6y0yFXLXmDEKkWA9IBeEiNg==
In-Reply-To: <1389730896-29439-1-git-send-email-andrew.cooper3@citrix.com>
Mime-version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Patch] common/memory: Fix ABI breakage for
 XENMEM_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/2014 20:21, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

>   caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
> 
> In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
> but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
> 
> By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
> now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
> unsigned int to uint16_t, which changes the space switch()'d upon.
> 
> This wouldn't be noticed with any upstream code (of which I am aware), but was
> discovered because of the XenServer support for legacy Windows PV drivers,
> which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
> The current Windows PV drivers don't do this any more, but we 'fix' Xen to
> support running VMs with out-of-date tools.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>

> ---
> 
> As this breakage was caused between 4.4-rc1 and -rc2, I request a release ack
> for the fix.
> 
> This was caught by a compile failure rather than a functional test.  I have
> encountered a different compile error which turns out to be a bug in the cross
> compiler we are currently using, so I need to fix that before I can
> functionally test a 4.4-rc2 based XenServer.  (Which will be a rather better
> test of whether the functionality of XENMEM_add_to_physmap is actually still
> the same.  If anyone dares look,
> https://github.com/xenserver/xen-4.3.pg/blob/master/xen-legacy-win-xenmapspace
> -quirks.patch
> are the hacks required to make the legacy drivers work on modern Xen.)
> ---
>  xen/arch/arm/mm.c    |    2 +-
>  xen/arch/x86/mm.c    |    2 +-
>  xen/include/xen/mm.h |    2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 293b6e2..127cce0 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -970,7 +970,7 @@ void share_xen_page_with_privileged_guests(
>  
>  int xenmem_add_to_physmap_one(
>      struct domain *d,
> -    uint16_t space,
> +    unsigned int space,
>      domid_t foreign_domid,
>      unsigned long idx,
>      xen_pfn_t gpfn)
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 32c0473..172c68c 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4522,7 +4522,7 @@ static int handle_iomem_range(unsigned long s, unsigned
> long e, void *p)
>  
>  int xenmem_add_to_physmap_one(
>      struct domain *d,
> -    uint16_t space,
> +    unsigned int space,
>      domid_t foreign_domid,
>      unsigned long idx,
>      xen_pfn_t gpfn)
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index f90ed74..b183189 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -356,7 +356,7 @@ static inline unsigned int get_order_from_pages(unsigned
> long nr_pages)
>  
>  void scrub_one_page(struct page_info *);
>  
> -int xenmem_add_to_physmap_one(struct domain *d, uint16_t space,
> +int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
>                                domid_t foreign_domid,
>                                unsigned long idx, xen_pfn_t gpfn);
>  
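The truncation described in the commit message is a plain C integer-conversion effect: passing an unsigned int argument to a parameter declared uint16_t silently keeps only the low 16 bits, so any switch on the parameter sees a different value than the caller passed. A self-contained illustration, with hypothetical function names rather than the real Xen entry points, could be:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for xenmem_add_to_physmap_one() before and
 * after the fix.  The pre-fix prototype takes uint16_t, so the
 * caller's unsigned int 'space' is implicitly truncated at the call. */
static unsigned int seen_by_uint16_callee(uint16_t space)
{
    return space;  /* what a switch (space) would actually dispatch on */
}

static unsigned int seen_by_uint_callee(unsigned int space)
{
    return space;  /* post-fix: the full value survives */
}
```

With a legacy space value of 0x80000001 (top bit set, as the old Windows PV drivers used), the uint16_t callee dispatches on 1 instead, which is exactly the ABI breakage being reverted here.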



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:03:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Dl5-00034n-O9; Fri, 17 Jan 2014 18:03:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W4Dl3-00034X-Hj
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:03:05 +0000
Received: from [85.158.139.211:17162] by server-6.bemta-5.messagelabs.com id
	2F/AB-16310-85079D25; Fri, 17 Jan 2014 18:03:04 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389981782!10397272!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
From xen-devel-bounces@lists.xen.org Fri Jan 17 18:03:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Dl5-00034n-O9; Fri, 17 Jan 2014 18:03:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W4Dl3-00034X-Hj
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:03:05 +0000
Received: from [85.158.139.211:17162] by server-6.bemta-5.messagelabs.com id
	2F/AB-16310-85079D25; Fri, 17 Jan 2014 18:03:04 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1389981782!10397272!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3683 invoked from network); 17 Jan 2014 18:03:03 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:03:03 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so30760pdj.1
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 10:03:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=ZfU2Mb9AUplDTbYb3pR7t5pmFa2awMS+cE94aK1Z380=;
	b=na0b535FCWy/hV4sLSf3IVeZsZDq1EtUEZjWK7/7Y0CtyH2Y8q2evfTe8OnlE40F0B
	aQqJCXZs9+e/rWCLFVCohjY4SqTWiVUnqkAOQeYAVrZJAQFp3n1SdMxUvPlwVy6D2+K/
	mEhoJWgAgx8O0ZforpVRhN1oHuiu+hYTpr6i1n6XxRKWRwWP/bwaNEkN0tvFsXmaBlsG
	C55OMTuvwoV3DTGO3PvdFeNiAKTXUNv+tb/lwj61kP7M8yfyvUbNDAJ52EVa01fFcEgb
	jjYS3WXQkplSIkgqOhyP4Syjvbnr93gPe+wv0+VpVl6rBbMDB46JalQpL79nKgjWi717
	iiVA==
X-Received: by 10.66.222.234 with SMTP id qp10mr3599128pac.156.1389981781766; 
	Fri, 17 Jan 2014 10:03:01 -0800 (PST)
Received: from [10.128.254.65] ([184.68.12.146])
	by mx.google.com with ESMTPSA id
	nw11sm32957663pab.13.2014.01.17.10.02.58 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 10:03:01 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Fri, 17 Jan 2014 18:02:48 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Olaf Hering <olaf@aepfle.de>,
	<xen-devel@lists.xen.org>
Message-ID: <CEFF20C8.4AA35%keir.xen@gmail.com>
Thread-Topic: [PATCH] blkif.h: enhance comments related to the discard feature
Thread-Index: Ac8TrlSAA6c2n/rpfkq/uvk8mldXCw==
In-Reply-To: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
	discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/01/2014 21:57, "Olaf Hering" <olaf@aepfle.de> wrote:

> Also fix the name of the discard-alignment property, add the missing 'n'.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Keir Fraser <keir@xen.org>

Although I am not an authority on this interface...

> ---
>  xen/include/public/io/blkif.h | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 84eb7fd..515ea90 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,7 +175,7 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> - * discard-aligment
> + * discard-alignment
>   *      Values:         <uint32_t>
>   *      Default Value:  0
>   *      Notes:          4, 5
> @@ -194,6 +194,7 @@
>   * discard-secure
>   *      Values:         0/1 (boolean)
>   *      Default Value:  0
> + *      Notes:          10
>   *
>   *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
>   *      requests with the BLKIF_DISCARD_SECURE flag set.
> @@ -323,9 +324,14 @@
>   *     For full interoperability, block front and backends should publish
>   *     identical ring parameters, adjusted for unit differences, to the
>   *     XenStore nodes used in both schemes.
> - * (4) Devices that support discard functionality may internally allocate
> - *     space (discardable extents) in units that are larger than the
> - *     exported logical block size.
> + * (4) Devices that support discard functionality may internally allocate space
> + *     (discardable extents) in units that are larger than the exported logical
> + *     block size. If the backing device has such discardable extents the
> + *     backend must provide both discard-granularity and discard-alignment.
> + *     Backends supporting discard should include discard-granularity and
> + *     discard-alignment even if it supports discarding individual sectors.
> + *     Frontends should assume discard-alignment == 0 and discard-granularity ==
> + *     sector size if these keys are missing.
>   * (5) The discard-alignment parameter allows a physical device to be
>   *     partitioned into virtual devices that do not necessarily begin or
>   *     end on a discardable extent boundary.
> @@ -344,6 +350,8 @@
>   *     grants that can be persistently mapped in the frontend driver, but
>   *     due to the frontent driver implementation it should never be bigger
>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if the
> + *     backing device supports secure discard.
>   */
>  
>  /*
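The fallback rule in note (4) above can be sketched in C. This is illustrative only: `discard_defaults` and the `*_present` flags are made-up stand-ins for a real XenStore key lookup, not blkfront code.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: note (4) says a frontend should assume
 * discard-alignment == 0 and discard-granularity == sector size when
 * the backend does not publish those XenStore keys. The *_present
 * flags stand in for a real XenStore lookup. */
struct discard_info {
    uint32_t granularity;
    uint32_t alignment;
};

struct discard_info discard_defaults(int granularity_present,
                                     uint32_t granularity,
                                     int alignment_present,
                                     uint32_t alignment,
                                     uint32_t sector_size)
{
    struct discard_info di;

    di.granularity = granularity_present ? granularity : sector_size;
    di.alignment   = alignment_present ? alignment : 0;
    return di;
}
```

With both keys absent the frontend falls back to sector-sized, unaligned discards; with both present it uses the published values unchanged.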



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:04:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:04:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Dlw-0003Ai-6N; Fri, 17 Jan 2014 18:04:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1W4Dlu-0003AU-V7
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 18:03:59 +0000
Received: from [85.158.137.68:4172] by server-13.bemta-3.messagelabs.com id
	9B/64-28603-E8079D25; Fri, 17 Jan 2014 18:03:58 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1389981835!8666195!1
X-Originating-IP: [209.85.192.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5474 invoked from network); 17 Jan 2014 18:03:57 -0000
Received: from mail-pd0-f179.google.com (HELO mail-pd0-f179.google.com)
	(209.85.192.179)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:03:57 -0000
Received: by mail-pd0-f179.google.com with SMTP id q10so2252353pdj.10
	for <xen-devel@lists.xenproject.org>;
	Fri, 17 Jan 2014 10:03:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=WLmB4uIuB8HvD0KDtuFIGeRS+ag+eq+RIYddHj3bjIk=;
	b=S4BqmQfqbrc50dRDBfqi8KPr+YSXU+qnO01Ftk4wqHhmeudxpzsSM6SH+J3rw/p+fQ
	5VcerDtCIRtPFNZiF1VnF7PySTeOw5PQ/7ofdPUreuRHEVACd1eJrqQsNB7ECZRuM4za
	scgPT0PlD9sCzHCdSMm9/MUbaKv4k92SMfaUvpLlWUBUAtbcew2hQ7VtD3Da2XxFYxk/
	4y3PHY9QzsMyisF/0KV3QXcurudcs0fQzeUBc9M7T4X/P+jdcKfAAzY3qv4AJXiEp3xP
	x80clYJPtFVgOlZ/KCND9DnVDacgOzGFaBWL2VUFO+GEfZSCz6hFAX7XoNf9EjjmrUDg
	iAPg==
X-Received: by 10.66.221.199 with SMTP id qg7mr3613653pac.13.1389981835281;
	Fri, 17 Jan 2014 10:03:55 -0800 (PST)
Received: from [10.128.254.65] ([184.68.12.146])
	by mx.google.com with ESMTPSA id g6sm32993244pat.2.2014.01.17.10.03.53
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 10:03:54 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Fri, 17 Jan 2014 18:03:49 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <CEFF2105.4AA36%keir.xen@gmail.com>
Thread-Topic: [PATCH] compat/memory: fix build with old gcc
Thread-Index: Ac8Trnjc7xV5nbxn6kGFC7hAmwgWPw==
In-Reply-To: <52D673E60200007800113C11@nat28.tlf.novell.com>
Mime-version: 1.0
Subject: Re: [Xen-devel] [PATCH] compat/memory: fix build with old gcc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/01/2014 10:41, "Jan Beulich" <JBeulich@suse.com> wrote:

> struct xen_add_to_physmap_batch's size field being uint16_t causes old
> compiler versions to warn about the pointless range check done inside
> compat_handle_okay().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Keir Fraser <keir@xen.org>

> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -206,18 +206,20 @@ int compat_memory_op(unsigned int cmd, X
>          {
>              unsigned int limit = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.atpb))
>                                   / (sizeof(nat.atpb->idxs.p) +
>                                      sizeof(nat.atpb->gpfns.p));
> +            /* Use an intermediate variable to suppress warnings on old gcc: */
> +            unsigned int size = cmp.atpb.size;
>              xen_ulong_t *idxs = (void *)(nat.atpb + 1);
>              xen_pfn_t *gpfns = (void *)(idxs + limit);
>  
>              if ( copy_from_guest(&cmp.atpb, compat, 1) ||
> -                 !compat_handle_okay(cmp.atpb.idxs, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.gpfns, cmp.atpb.size) ||
> -                 !compat_handle_okay(cmp.atpb.errs, cmp.atpb.size) )
> +                 !compat_handle_okay(cmp.atpb.idxs, size) ||
> +                 !compat_handle_okay(cmp.atpb.gpfns, size) ||
> +                 !compat_handle_okay(cmp.atpb.errs, size) )
>                  return -EFAULT;
>  
>              end_extent = start_extent + limit;
> -            if ( end_extent > cmp.atpb.size )
> -                end_extent = cmp.atpb.size;
> +            if ( end_extent > size )
> +                end_extent = size;
>  
>              idxs -= start_extent;
>              gpfns -= start_extent;
> 
> 
> 
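The warning the patch suppresses can be demonstrated with a small standalone sketch. The names below (`struct batch`, `clamp_extent`) are made up for illustration and are not the Xen sources; the widening trick, however, is the same one the patch applies.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: with a uint16_t size field, old gcc warns about
 * range checks it can prove are pointless for the field's type.
 * Copying the field into a wider local once, as the patch does, keeps
 * the comparisons meaningful to the compiler without changing
 * behaviour. */
struct batch { uint16_t size; };

unsigned int clamp_extent(const struct batch *b, unsigned int end_extent)
{
    unsigned int size = b->size;   /* widen once, use everywhere */

    if (end_extent > size)
        end_extent = size;
    return end_extent;
}
```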



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:11:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DsS-0003zT-3l; Fri, 17 Jan 2014 18:10:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4DsQ-0003zO-V7
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:10:43 +0000
Received: from [193.109.254.147:24510] by server-15.bemta-14.messagelabs.com
	id E2/B4-22186-22279D25; Fri, 17 Jan 2014 18:10:42 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389982240!11586771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9192 invoked from network); 17 Jan 2014 18:10:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:10:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93958378"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 18:10:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 13:10:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4DsM-0004mX-Rl;
	Fri, 17 Jan 2014 18:10:38 +0000
Date: Fri, 17 Jan 2014 18:09:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401171751050.21510@kaball.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Ian Campbell wrote:
> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> causes us to lose the top bits of the DMA address if the size of a DMA
> address is not the same as the size of the physical address.
> 
> This can happen in practice on ARM where foreign pages can be above 4GB even
> though the local kernel does not have LPAE page tables enabled (which is
> totally reasonable if the guest does not itself have >4GB of RAM). In this
> case the kernel still maps the foreign pages at a phys addr below 4G (as it
> must) but the resulting DMA address (returned by the grant map operation) is
> much higher.

We are lucky that LPAE only supports 40 bits; otherwise we would also need
to change the other functions that use unsigned long for mfn, starting
with set_phys_to_machine.


> This is analogous to a hardware device which has its view of RAM mapped up
> high for some reason.
> 
> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> systems with more than 4GB of RAM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  arch/arm/Kconfig          |    1 +
>  drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..24307dc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1885,6 +1885,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select ARCH_DMA_ADDR_T_64BIT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1eac073..b626c79 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -77,12 +77,22 @@ static u64 start_dma_addr;
>  
>  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>  {
> -	return phys_to_machine(XPADDR(paddr)).maddr;
> +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;

I'd add this comment:

/* Avoid PFN_PHYS because phys_addr_t can be 32 bits when dma_addr_t is
 * 64 bits, leading to a loss of information if the shift is done before
 * casting to 64 bits. */

> +	dma |= paddr & ~PAGE_MASK;
> +	return dma;
>  }
>  
>  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
>  {
> -	return machine_to_phys(XMADDR(baddr)).paddr;
> +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> +	phys_addr_t paddr = dma;
> +
> +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */

This check is useless because PFN_PHYS contains a cast to (phys_addr_t),
which is 32 bits here. I think you'll have to:

dma_addr_t dma = ((dma_addr_t)mfn_to_pfn(PFN_DOWN(baddr))) << PAGE_SHIFT;


> +	paddr |= baddr & ~PAGE_MASK;
> +
> +	return paddr;
>  }
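The truncation under discussion can be shown with a self-contained sketch. The types and helpers below are illustrative stand-ins for phys_addr_t, dma_addr_t and PFN_PHYS on a 32-bit non-LPAE kernel with a 64-bit dma_addr_t; they are not the kernel definitions.

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_PAGE_SHIFT 12

/* Stand-ins: a 32-bit "phys_addr_t" and a 64-bit "dma_addr_t". */
typedef uint32_t sketch_phys_addr_t;
typedef uint64_t sketch_dma_addr_t;

/* Mimics PFN_PHYS with a 32-bit phys_addr_t: the frame number is
 * narrowed before the shift, so the shift wraps in 32-bit arithmetic
 * and the top bits are lost. */
sketch_dma_addr_t frame_to_addr_broken(unsigned long pfn)
{
    return (sketch_dma_addr_t)((sketch_phys_addr_t)pfn << SKETCH_PAGE_SHIFT);
}

/* The suggested fix: widen to the 64-bit type first, then shift. */
sketch_dma_addr_t frame_to_addr_fixed(unsigned long pfn)
{
    return (sketch_dma_addr_t)pfn << SKETCH_PAGE_SHIFT;
}
```

For a frame at the 4GB boundary (pfn 0x100000) the narrowed version wraps to zero while the widened version yields the correct 64-bit address, which is exactly the foreign-page case the patch is fixing.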


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:11:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DsS-0003zT-3l; Fri, 17 Jan 2014 18:10:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W4DsQ-0003zO-V7
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:10:43 +0000
Received: from [193.109.254.147:24510] by server-15.bemta-14.messagelabs.com
	id E2/B4-22186-22279D25; Fri, 17 Jan 2014 18:10:42 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1389982240!11586771!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9192 invoked from network); 17 Jan 2014 18:10:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:10:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93958378"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 18:10:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 13:10:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W4DsM-0004mX-Rl;
	Fri, 17 Jan 2014 18:10:38 +0000
Date: Fri, 17 Jan 2014 18:09:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401171751050.21510@kaball.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Ian Campbell wrote:
> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the phyiscal address.
> 
> This can happen in practice on ARM where foreign pages can be above 4GB even
> though the local kernel does not have LPAE page tables enabled (which is
> totally reasonable if the guest does not itself have >4GB of RAM). In this
> case the kernel still maps the foreign pages at a phys addr below 4G (as it
> must) but the resulting DMA address (returned by the grant map operation) is
> much higher.

We are lucky that LPAE only supports 40 bits otherwise we would need to
change any other functions that use unsigned long for mfn, starting from
set_phys_to_machine.


> This is analogous to a hardware device which has its view of RAM mapped up
> high for some reason.
> 
> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> systems with more than 4GB of RAM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  arch/arm/Kconfig          |    1 +
>  drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..24307dc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1885,6 +1885,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select ARCH_DMA_ADDR_T_64BIT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1eac073..b626c79 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -77,12 +77,22 @@ static u64 start_dma_addr;
>  
>  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>  {
> -	return phys_to_machine(XPADDR(paddr)).maddr;
> +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;

I'd add this comment:

/* avoid PFN_PHYS because phys_addr_t can be 32bit when dma_addr_t is
 * 64bit leading to a loss in information if the shift is done before
 * casting to 64bit. */

> +	dma |= paddr & ~PAGE_MASK;
> +	return dma;
>  }
>  
>  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
>  {
> -	return machine_to_phys(XMADDR(baddr)).paddr;
> +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> +	phys_addr_t paddr = dma;
> +
> +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */

This check is useless because PFN_PHYS contains a cast to
(phys_addr_t), which is 32 bits here, so the truncation has already
happened by the time dma is assigned. I think you'll have to:

dma_addr_t dma = ((dma_addr_t)mfn_to_pfn(PFN_DOWN(baddr))) << PAGE_SHIFT;


> +	paddr |= baddr & ~PAGE_MASK;
> +
> +	return paddr;
>  }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:13:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4DvW-00046f-NP; Fri, 17 Jan 2014 18:13:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4DvU-00046U-OF
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 18:13:53 +0000
Received: from [193.109.254.147:12452] by server-3.bemta-14.messagelabs.com id
	0C/AD-11000-0E279D25; Fri, 17 Jan 2014 18:13:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389982429!11615351!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16540 invoked from network); 17 Jan 2014 18:13:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:13:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="93959206"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 18:13:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 13:13:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DvP-0004uh-KY;
	Fri, 17 Jan 2014 18:13:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W4DvN-0001Ae-Kg;
	Fri, 17 Jan 2014 18:13:45 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21209.29400.383300.983690@mariner.uk.xensource.com>
Date: Fri, 17 Jan 2014 18:13:44 +0000
To: <xen-devel@lists.xensource.com>, Ian Campbell <ian.campbell@citrix.com>,
	Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[PATCH 12/12] libxl: fork: Share SIGCHLD handler amongst ctxs"):
> Previously, an application which had multiple libxl ctxs in multiple
> threads would have to plumb SIGCHLD through to each ctx itself.
> Instead, permit multiple libxl ctxs to all share the SIGCHLD handler.

An un-posted version of this patch had a feature test macro for this
but I seem to have dropped that by mistake.  Here is a v2.1 with that
added.

Ian.

>From 3d519a8e8c117484f938e55b87eff97f2558144b Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 11:58:55 +0000
Subject: [PATCH] libxl: fork: Share SIGCHLD handler amongst ctxs

Previously, an application which had multiple libxl ctxs in multiple
threads would have to plumb SIGCHLD through to each ctx itself.
Instead, permit multiple libxl ctxs to all share the SIGCHLD handler.

We keep a list of all the ctxs which are interested in SIGCHLD and
notify all of their self-pipes.

In more detail:

 * sigchld_owner, the ctx* of the SIGCHLD owner, is replaced by
   sigchld_users, a list of SIGCHLD users.

 * Each ctx keeps track of whether it is on the users list, so that
   libxl__sigchld_needed and libxl__sigchld_notneeded now idempotently
   add or remove the ctx from the list, instead of idempotently
   installing and removing the handler.

   We ensure that we always have the SIGCHLD handler installed
   iff the sigchld_users list is nonempty.  To make this a bit
   easier we make sigchld_installhandler_core and
   sigchld_removehandler_core idempotent.

   Specifically, the call sites for sigchld_installhandler_core and
   sigchld_removehandler_core are updated to manipulate sigchld_users
   and only call the install or remove functions as applicable.

 * In the signal handler we walk the list of SIGCHLD users and write
   to each of their self-pipes.  That means that we need to arrange to
   defer SIGCHLD when we are manipulating the list (to avoid the
   signal handler interrupting our list manipulation); this is quite
   tiresome to arrange.

   The code as written will, on the first installation of the SIGCHLD
   handler, firstly install the real handler, then immediately replace
   it with the deferral handler.  Doing it this way makes the code
   clearer as it makes the SIGCHLD deferral machinery much more
   self-contained (and hence easier to reason about).

 * The first part of libxl__sigchld_notneeded is broken out into a new
   function sigchld_user_remove (which is also needed during
   postfork).  And of course that first part of the function is now
   rather different, as explained above.

 * sigchld_installhandler_core no longer takes the gc argument,
   because it now deals with SIGCHLD for all ctxs.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>

---
v2.1: Provide feature test macro LIBXL_HAVE_SIGCHLD_SHARING
---
 tools/libxl/libxl.h          |    9 +++
 tools/libxl/libxl_event.h    |    2 +-
 tools/libxl/libxl_fork.c     |  132 +++++++++++++++++++++++++++++++++++-------
 tools/libxl/libxl_internal.h |    3 +
 4 files changed, 123 insertions(+), 23 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 1ac34c3..0b992d1 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -422,6 +422,15 @@
  */
 #define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_SHARING
+ *
+ * If this is defined, it is permissible for multiple libxl ctxs
+ * to simultaneously "own" SIGCHLD.  See "Subprocess handling"
+ * in libxl_event.h.
+ */
+#define LIBXL_HAVE_SIGCHLD_SHARING 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 824ac88..ab6ac5c 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -472,7 +472,7 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
- *       statues.  The application must have only one libxl_ctx
+ *       statuses.  The application may have multiple libxl_ctxs
  *       configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b6b14fe..5b42bad 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -46,11 +46,18 @@ static int atfork_registered;
 static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
     LIBXL_LIST_HEAD_INITIALIZER(carefds);
 
-/* non-null iff installed, protected by no_forking */
-static libxl_ctx *sigchld_owner;
+/* Protected against concurrency by no_forking.  sigchld_users is
+ * protected against being interrupted by SIGCHLD (and thus read
+ * asynchronously by the signal handler) by sigchld_defer (see
+ * below). */
+static bool sigchld_installed; /* 0 means not */
+static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
+    LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
 
-static void sigchld_removehandler_core(void);
+static void sigchld_removehandler_core(void); /* idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
 static void atfork_lock(void)
 {
@@ -126,8 +133,7 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    if (sigchld_owner)
-        sigchld_removehandler_core();
+    sigchld_user_remove(ctx);
 
     atfork_unlock();
 }
@@ -152,7 +158,8 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 }
 
 /*
- * Actual child process handling
+ * Low-level functions for child process handling, including
+ * the main SIGCHLD handler.
  */
 
 /* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
@@ -176,9 +183,16 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
 static void sigchld_handler(int signo)
 {
+    /* This function has to be reentrant!  Luckily it is. */
+
+    libxl_ctx *notify;
     int esave = errno;
-    int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
-    assert(!e); /* errors are probably EBADF, very bad */
+
+    LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
+        int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
+        assert(!e); /* errors are probably EBADF, very bad */
+    }
+
     errno = esave;
 }
 
@@ -195,25 +209,74 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
     assert(!r);
 }
 
-static void sigchld_removehandler_core(void)
+/*
+ * SIGCHLD deferral
+ *
+ * sigchld_defer and sigchld_release are a bit like using sigprocmask
+ * to block the signal only they work for the whole process.  Sadly
+ * this has to be done by setting a special handler that records the
+ * "pendingness" of the signal here in the program.  How tedious.
+ *
+ * A property of this approach is that the signal handler itself
+ * must be reentrant (see the comment in release_sigchld).
+ *
+ * Callers have the atfork_lock so there is no risk of concurrency
+ * within these functions, aside obviously from the risk of being
+ * interrupted by the signal.
+ */
+
+static volatile sig_atomic_t sigchld_occurred_while_deferred;
+
+static void sigchld_handler_when_deferred(int signo)
+{
+    sigchld_occurred_while_deferred = 1;
+}
+
+static void defer_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+}
+
+static void release_sigchld(void)
+{
+    assert(sigchld_installed);
+    sigchld_sethandler_raw(sigchld_handler, 0);
+    if (sigchld_occurred_while_deferred) {
+        sigchld_occurred_while_deferred = 0;
+        /* We might get another SIGCHLD here, in which case
+         * sigchld_handler will be interrupted and re-entered.
+         * This is OK. */
+        sigchld_handler(SIGCHLD);
+    }
+}
+
+/*
+ * Meat of the child process handling.
+ */
+
+static void sigchld_removehandler_core(void) /* idempotent */
 {
     struct sigaction was;
     int r;
     
+    if (!sigchld_installed)
+        return;
+
     r = sigaction(SIGCHLD, &sigchld_saved_action, &was);
     assert(!r);
     assert(!(was.sa_flags & SA_SIGINFO));
     assert(was.sa_handler == sigchld_handler);
-    sigchld_owner = 0;
+
+    sigchld_installed = 0;
 }
 
-static void sigchld_installhandler_core(libxl__gc *gc)
+static void sigchld_installhandler_core(void) /* idempotent */
 {
-    struct sigaction ours;
-    int r;
+    if (sigchld_installed)
+        return;
 
-    assert(!sigchld_owner);
-    sigchld_owner = CTX;
+    sigchld_installed = 1;
 
     sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
@@ -223,15 +286,32 @@ static void sigchld_installhandler_core(libxl__gc *gc)
              sigchld_saved_action.sa_handler == SIG_IGN)));
 }
 
-void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx) /* idempotent */
 {
-    int rc;
+    if (!ctx->sigchld_user_registered)
+        return;
 
     atfork_lock();
-    if (sigchld_owner == CTX)
+    defer_sigchld();
+
+    LIBXL_LIST_REMOVE(ctx, sigchld_users_entry);
+
+    release_sigchld();
+
+    if (LIBXL_LIST_EMPTY(&sigchld_users))
         sigchld_removehandler_core();
+
     atfork_unlock();
 
+    ctx->sigchld_user_registered = 0;
+}
+
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+{
+    int rc;
+
+    sigchld_user_remove(CTX);
+
     if (libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, 0);
         if (rc)
@@ -262,12 +342,20 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, POLLIN);
         if (rc) goto out;
     }
+    if (!CTX->sigchld_user_registered) {
+        atfork_lock();
 
-    atfork_lock();
-    if (sigchld_owner != CTX) {
-        sigchld_installhandler_core(gc);
+        sigchld_installhandler_core();
+
+        defer_sigchld();
+
+        LIBXL_LIST_INSERT_HEAD(&sigchld_users, CTX, sigchld_users_entry);
+
+        release_sigchld();
+        atfork_unlock();
+
+        CTX->sigchld_user_registered = 1;
     }
-    atfork_unlock();
 
     rc = 0;
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index fba681d..8429448 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -355,6 +355,8 @@ struct libxl__ctx {
     int sigchld_selfpipe[2]; /* [0]==-1 means handler not installed */
     libxl__ev_fd sigchld_selfpipe_efd;
     LIBXL_LIST_HEAD(, libxl__ev_child) children;
+    bool sigchld_user_registered;
+    LIBXL_LIST_ENTRY(libxl_ctx) sigchld_users_entry;
 
     libxl_version_info version_info;
 };
@@ -840,6 +842,7 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
 int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
 void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
+void libxl__sigchld_check_stale_handler(void);
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 18:40:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 18:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4ELL-0005i4-Cs; Fri, 17 Jan 2014 18:40:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W4ELK-0005hv-0s
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 18:40:34 +0000
Received: from [85.158.137.68:64926] by server-17.bemta-3.messagelabs.com id
	71/76-15965-12979D25; Fri, 17 Jan 2014 18:40:33 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1389984021!9854578!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTYzMzYgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29786 invoked from network); 17 Jan 2014 18:40:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 18:40:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; 
	d="asc'?scan'208";a="91870346"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 18:40:21 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 13:40:20 -0500
Message-ID: <1389984018.16457.335.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Fri, 17 Jan 2014 19:40:18 +0100
In-Reply-To: <1389977958.6697.135.camel@kazak.uk.xensource.com>
References: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
	<1389977958.6697.135.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Artem Mygaiev <artem.mygaiev@globallogic.com>, Sisu Xi <xisisu@gmail.com>,
	Claudio Scordino <claudio@evidence.eu.com>, xen-devel@lists.xen.org,
	Arianna Avanzini <avanzini.arianna@gmail.com>
Subject: Re: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3438666887180032476=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3438666887180032476==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-v0C0uj2eTIuJwqceMqGx"

--=-v0C0uj2eTIuJwqceMqGx
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-01-17 at 16:59 +0000, Ian Campbell wrote:
> On Fri, 2014-01-17 at 18:50 +0200, Pavlo Suikov wrote:
>
> > does anyone know about efforts on bringing QNX Neutrino as a dom0 or
> > domU in Xen - or, more specifically, in RT-Xen with
> > global/partitioning schedulers which should potentially support
> > real-time requirements for target OS?
> >
> >
> > Existing papers on RT-Xen imply using Linux as a target system, and
> > all QNX mentions with regard to Xen are quite outdated. I see that
> > RT-Xen activities are quite recent and production applications may
> > still be absent, but maybe anyone has tried this yet?
>
> I don't know much (or anything) about RT-Xen but from a regular Xen PoV
> it's been a long while since I've heard anything about QNX on x86 Xen,
> and I've never heard anything about QNX on ARM Xen (at least the h/w
> assisted port in mainline).
>
Well, it's been a while since the last time I tried QNX, and I *never*
used it on Xen (only bare metal). In principle, on x86 at least, it should
not be a big deal to have it running inside an HVM domain.

Of course, even there, I don't expect you to be able to take advantage of
any of the PV or PVHVM features that are available to Linux (e.g., in
PVHVM mode) or even to Windows (with the proper drivers installed)
guests... unless some kind of PV drivers exist for QNX, but I'm not
aware of any.

Getting reasonable RT performance is another matter entirely... We'll
(well, at least I will) be looking at that in the very near future,
so stay in touch. :-P

For ARM, given that there is no real HVM mode there, I think you'd need
a proper port of QNX to Xen on ARM, in its own full right.

This is probably going to be quite similar to what is being attempted
here:
http://bugs.xenproject.org/xen/mid/%3C1387278345.8738.80.camel@Solace%3E

> If you are interested specifically in RT-Xen then I think you will
> probably have more luck on their mailing list, not much (essentially no)
> RT-Xen stuff happens on this list.
>
Indeed you should ask them, and I'm adding Sisu to the recipients of
this e-mail.

That being said, RT-Xen is "just" two new schedulers (each with a couple
of possible 'operational modes', though), so porting to RT-Xen rather
than to Xen should not make that much of a difference.

Again, if there is something missing on the side of enabling real-time
behavior in the guest, that's a separate issue, and it is, again,
something that applies to both cases (RT-Xen and Xen) in pretty much
the same way.

Anyway, again, there is rising interest in this sort of workload these
days, as Artem (from your company, I think; I'm adding him, and a bunch
of other people too, to the Cc list) knows. :-)

In summary, I don't think anyone has ever tried that. Given the above,
if you decide to do it, feel free to ask for any kind of help and keep
us posted on how it's going...  :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-v0C0uj2eTIuJwqceMqGx
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLZeRIACgkQk4XaBE3IOsQNJACgrXu+UVkGq0lKgVfF5fa/Nh1J
sO0AoKELSkLcR7cGGTlQDdN3tWvuOV0h
=Fh5w
-----END PGP SIGNATURE-----

--=-v0C0uj2eTIuJwqceMqGx--


--===============3438666887180032476==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3438666887180032476==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 19:28:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 19:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4F4v-00085R-W0; Fri, 17 Jan 2014 19:27:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W4F4u-00085M-Fd
	for xen-devel@lists.xenproject.org; Fri, 17 Jan 2014 19:27:40 +0000
Received: from [85.158.143.35:58451] by server-3.bemta-4.messagelabs.com id
	4C/3D-32360-B2489D25; Fri, 17 Jan 2014 19:27:39 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1389986858!12476555!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4978 invoked from network); 17 Jan 2014 19:27:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 19:27:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,675,1384300800"; d="scan'208";a="91886882"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 19:27:37 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 14:27:36 -0500
Message-ID: <52D98427.1060103@citrix.com>
Date: Fri, 17 Jan 2014 19:27:35 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
	<20140116000334.GE5331@zion.uk.xensource.com>
In-Reply-To: <20140116000334.GE5331@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:03, Wei Liu wrote:
> On Tue, Jan 14, 2014 at 08:39:54PM +0000, Zoltan Kiss wrote:
> [...]
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index 109c29f..d1cd8ce 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -129,6 +129,9 @@ struct xenvif {
>>   	struct xen_netif_rx_back_ring rx;
>>   	struct sk_buff_head rx_queue;
>>   	RING_IDX rx_last_skb_slots;
>
> Hmm... You seemed to mix your other patch with this series. :-)
Yep, this series doesn't work without that patch (actually that is a bug 
in netback even without my series), so at the moment it is based on it.

>
>> +	bool rx_queue_purge;
>> +
>> +	struct timer_list wake_queue;
>>
>>   	/* This array is allocated seperately as it is large */
>>   	struct gnttab_copy *grant_copy_op;
>> @@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
>>
>>   extern bool separate_tx_rx_irq;
>>
> [...]
>> @@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
>>   		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>>   			unmap_timeout++;
>>   			schedule_timeout(msecs_to_jiffies(1000));
>> -			if (unmap_timeout > 9 &&
>> +			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
>
> This line is really too long. And what's the rationale behind this long
> expression?
It calculates how many times you should ditch the internal queue of 
another (possibly stuck) vif before Qdisc empties its actual content. 
After that there shouldn't be any mapped handle left, so we should start 
printing these messages. Actually it should use vif->dev->tx_queue_len, 
and yes, it is probably better to move it to the beginning of the 
function into a new variable and use that here.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 21:31:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 21:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4H0G-0005t4-Hy; Fri, 17 Jan 2014 21:31:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1W4H0E-0005su-IC
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 21:30:58 +0000
Received: from [85.158.143.35:32097] by server-1.bemta-4.messagelabs.com id
	60/17-02132-111A9D25; Fri, 17 Jan 2014 21:30:57 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389994256!5336692!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32386 invoked from network); 17 Jan 2014 21:30:57 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 21:30:57 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so5050420wgh.19
	for <xen-devel@lists.xensource.com>;
	Fri, 17 Jan 2014 13:30:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=L7bjyXBIFIt3RfTM9wQ3uJcuGNs8rpHt6Oe5XB6n2xY=;
	b=pBUAKBXSDa20HrSEMEujffekIPiEV3h2cuwkCXQG3gS6IayAkwXlSMP5MW8+ZQv7WF
	MdaEcsEhv5VvfmGsRij1NOrb+FohZ4Xqv6L5L3nn63slG018DSiJUaPNvO5UfALEvpn2
	YLQjsd60Qah/pw6NJNC/VgsukGJIFhTv3ARmgce8lWcpQ/lO9ss7JfLrTzNlY9aeQaRf
	8qRiRgxYVtLRldkaYfUNvVLbHFtLXRjhe4QNIqVfxR1ia6sTXYN0ABb96Z3XrWUaCRag
	Mnaj9ON93xcnZ0bPe5yPdQqq/cABNS4psd8/1val3mmWU+qlmHnLBWF3oWA7kLRJv25G
	vKYA==
MIME-Version: 1.0
X-Received: by 10.195.12.164 with SMTP id er4mr16615wjd.92.1389994256556; Fri,
	17 Jan 2014 13:30:56 -0800 (PST)
Received: by 10.216.22.193 with HTTP; Fri, 17 Jan 2014 13:30:56 -0800 (PST)
Date: Fri, 17 Jan 2014 16:30:56 -0500
Message-ID: <CA+hYhXtn_J5d=9JUU7qL5YVc4b2yqM9sonQX01cd=TPn--K02g@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: xen-devel@lists.xensource.com
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] xen-netback in linux 3.12
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0947100433884337977=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0947100433884337977==
Content-Type: multipart/alternative; boundary=047d7bb04b2ae6e8c204f0314196

--047d7bb04b2ae6e8c204f0314196
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

I found that xen-netback changed a lot in the latest kernel. The shared
xen-netback threads in the driver domain are replaced by per-vif kernel
threads. On my platform, I found the I/O throughput and scalability of the
per-vif netback are better than the previous implementation. Another
advantage is that I can use cgroups to control the CPU fair share among all
VMs in the driver domain (I group the netback thread and blkback thread for
each VM). Is there any other motivation for this modification? Thanks.

Regards,
Cong

--047d7bb04b2ae6e8c204f0314196--


--===============0947100433884337977==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0947100433884337977==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 21:31:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 21:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4H0G-0005t4-Hy; Fri, 17 Jan 2014 21:31:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1W4H0E-0005su-IC
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 21:30:58 +0000
Received: from [85.158.143.35:32097] by server-1.bemta-4.messagelabs.com id
	60/17-02132-111A9D25; Fri, 17 Jan 2014 21:30:57 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1389994256!5336692!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32386 invoked from network); 17 Jan 2014 21:30:57 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 21:30:57 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so5050420wgh.19
	for <xen-devel@lists.xensource.com>;
	Fri, 17 Jan 2014 13:30:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=L7bjyXBIFIt3RfTM9wQ3uJcuGNs8rpHt6Oe5XB6n2xY=;
	b=pBUAKBXSDa20HrSEMEujffekIPiEV3h2cuwkCXQG3gS6IayAkwXlSMP5MW8+ZQv7WF
	MdaEcsEhv5VvfmGsRij1NOrb+FohZ4Xqv6L5L3nn63slG018DSiJUaPNvO5UfALEvpn2
	YLQjsd60Qah/pw6NJNC/VgsukGJIFhTv3ARmgce8lWcpQ/lO9ss7JfLrTzNlY9aeQaRf
	8qRiRgxYVtLRldkaYfUNvVLbHFtLXRjhe4QNIqVfxR1ia6sTXYN0ABb96Z3XrWUaCRag
	Mnaj9ON93xcnZ0bPe5yPdQqq/cABNS4psd8/1val3mmWU+qlmHnLBWF3oWA7kLRJv25G
	vKYA==
MIME-Version: 1.0
X-Received: by 10.195.12.164 with SMTP id er4mr16615wjd.92.1389994256556; Fri,
	17 Jan 2014 13:30:56 -0800 (PST)
Received: by 10.216.22.193 with HTTP; Fri, 17 Jan 2014 13:30:56 -0800 (PST)
Date: Fri, 17 Jan 2014 16:30:56 -0500
Message-ID: <CA+hYhXtn_J5d=9JUU7qL5YVc4b2yqM9sonQX01cd=TPn--K02g@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: xen-devel@lists.xensource.com
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] xen-netback in linux 3.12
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0947100433884337977=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0947100433884337977==
Content-Type: multipart/alternative; boundary=047d7bb04b2ae6e8c204f0314196

--047d7bb04b2ae6e8c204f0314196
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

I found that xen-netback changed a lot in the latest kernel. The shared
xen-netback threads in the driver domain have been replaced by per-vif
kernel threads. On my platform, I found that the I/O throughput and
scalability of per-vif netback are better than those of the previous
implementation. Another advantage is that I can use cgroups to control
the CPU fair share among all VMs in the driver domain (I group the
netback thread and blkback thread for each VM). Is there any other
motivation for this change? Thanks.
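
For concreteness, the per-VM grouping can be sketched roughly as below. This is
only an illustration, not my actual tooling: the kernel thread names
("vif<domid>.<queue>", "blkback.<domid>.<dev>") and the cgroup v1 mount point
are assumptions that may differ on other kernel versions and setups.

```c
#include <stdio.h>
#include <string.h>

/* Assumed backend kernel thread names (vary by kernel version):
 *   netback: "vif<domid>.<queue>"    e.g. "vif3.0"
 *   blkback: "blkback.<domid>.<dev>" e.g. "blkback.3.xvda" */
static int domid_for_thread(const char *comm)
{
    int domid;

    if (sscanf(comm, "vif%d.", &domid) == 1)
        return domid;
    if (sscanf(comm, "blkback.%d.", &domid) == 1)
        return domid;
    return -1;   /* not a per-VM backend thread */
}

/* Build the per-VM cgroup "tasks" path the thread's PID would be
 * written into; returns 0 on success, -1 if the thread is shared. */
static int vm_cgroup_tasks_path(const char *comm, char *buf, size_t len)
{
    int domid = domid_for_thread(comm);

    if (domid < 0)
        return -1;
    snprintf(buf, len, "/sys/fs/cgroup/cpu/vm%d/tasks", domid);
    return 0;
}
```

Writing each matching PID into its VM's tasks file then lets the cpu
controller arbitrate the CPU share fairly per guest.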

Regards,
Cong

--047d7bb04b2ae6e8c204f0314196
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div>Hi all,<br><br></div>I found xen-netback changed=
 a lot in latest kernel. The shared xen-netback threads in driver domain is=
 replaced by per-vif kernel thread. In my platform, I found the I/O through=
put and scalability of per-vif netback is better than previous implementati=
on. Another advantage is that I can use cgroups to control the CPU fair sha=
re among all VMs in driver domain (I group netback thread and blkback threa=
d for each VM). Is there any other motivation for this modification? Thanks=
.<br>
<br>Regards,<br></div>Cong<br></div>

--047d7bb04b2ae6e8c204f0314196--


--===============0947100433884337977==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0947100433884337977==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 22:07:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 22:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4HYp-0007fv-9u; Fri, 17 Jan 2014 22:06:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W4HYn-0007fp-W7
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 22:06:42 +0000
Received: from [85.158.139.211:46254] by server-17.bemta-5.messagelabs.com id
	87/54-19152-079A9D25; Fri, 17 Jan 2014 22:06:40 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1389996389!10473734!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDYyMTAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22688 invoked from network); 17 Jan 2014 22:06:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 22:06:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,676,1384300800"; 
	d="asc'?scan'208";a="94033817"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Jan 2014 22:06:28 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 17:06:27 -0500
Message-ID: <1389996386.19778.71.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Fri, 17 Jan 2014 23:06:26 +0100
In-Reply-To: <122967730.20140117134615@gmail.com>
References: <122967730.20140117134615@gmail.com>
Organization: Citrix Ltd
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] micro-pv
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1837957807652817015=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1837957807652817015==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-B6iBQCSUpvVolMbDtuLg"

--=-B6iBQCSUpvVolMbDtuLg
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-01-17 at 13:46 -0300, Simon Martin wrote:
> Hi all,
>=20
Hi Simon!

> Just a quick heads up to all that helped me with this. It is now
> working. I have a simple context switcher running and I am currently
> working on my embedded OS initialisation. All is looking good. Thanks
> for your help.
>=20
Cool. First of all, thanks for the update. Also, congrats on getting to
this point! :-)

> Once I have got the OS running and communicating via the Xen console I
> will then have to revisit the real time performance aspects. I
> imagine that will be in February now. Is there anyone able and willing
> to help me to get up to speed in that area of the Xen codebase?
>=20
Well, that is going to be a pretty new thing for Xen, so we will all
have to investigate and learn where the latency bottlenecks reside, and
what the major threats to determinism are, within the whole Xen
architecture and codebase.

Not sure this is the answer you were hoping for, but I am definitely
up for going down that path and learning together. :-)

Timing-wise, February, i.e. right after coming back from FOSDEM, would
be almost perfect for me. :-D

Thanks again and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-B6iBQCSUpvVolMbDtuLg
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLZqWIACgkQk4XaBE3IOsS5nACglutyCdspNHh5fIjvtXGs0Q/c
X1UAoIatNIxeXi3RMFfgx0hR6qD0Edkl
=FhBn
-----END PGP SIGNATURE-----

--=-B6iBQCSUpvVolMbDtuLg--


--===============1837957807652817015==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1837957807652817015==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 22:13:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 22:13:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Heu-0008Kp-4s; Fri, 17 Jan 2014 22:13:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4Hes-0008Kh-Cw
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 22:12:58 +0000
Received: from [193.109.254.147:44920] by server-8.bemta-14.messagelabs.com id
	D2/96-30921-9EAA9D25; Fri, 17 Jan 2014 22:12:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1389996775!11639604!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14391 invoked from network); 17 Jan 2014 22:12:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 22:12:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,676,1384300800"; d="scan'208";a="91937802"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 22:12:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 17:12:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4Hen-000658-CJ;
	Fri, 17 Jan 2014 22:12:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4Hen-0001KG-A4;
	Fri, 17 Jan 2014 22:12:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24419-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 22:12:53 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24419: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24419 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24419/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24368
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24368

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  12d9655b96c702c7a936cefeec27c7fd19ff6d09
baseline version:
 xen                  e131045033e7235d17a0d4be88e3a550cfcaf375

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 12d9655b96c702c7a936cefeec27c7fd19ff6d09
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:42:37 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return back out to the
    nested panic() call, which will maybe_reboot() on our behalf.  This requires a
    bit of return plumbing back up to kexec_crash().
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE, which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100
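
The reentry-versus-race distinction described above can be modelled with a
small standalone sketch. This is not Xen's actual one_cpu_only(); the enum
and the ownership variable are illustrative. The idea is to record which CPU
claimed the crash path, so a second entry by the same CPU (reentry) can be
told apart from a different CPU losing the race.

```c
#include <stdatomic.h>

/* -1 means no CPU has entered the crash path yet. */
static atomic_int crashing_cpu = -1;

enum crash_entry { CRASH_OWNER, CRASH_LOSER, CRASH_REENTRY };

/* First CPU to claim the path wins; a CPU that finds its own ID
 * already recorded has reentered and should return back out to the
 * nested panic() instead of deadlocking. */
static enum crash_entry one_cpu_only(int cpu)
{
    int expected = -1;

    if (atomic_compare_exchange_strong(&crashing_cpu, &expected, cpu))
        return CRASH_OWNER;   /* we own the crash path */

    /* CAS failure loads the current owner into 'expected'. */
    return (expected == cpu) ? CRASH_REENTRY : CRASH_LOSER;
}
```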

commit 744165288752c2cfc179e5aeed6e3aa9905b480a
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:41:38 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless - The swapgs instruction does not cause a
    vmexit, so the cached result of this is potentially stale after the next guest
    instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100

commit bb29b48ed8210ecb2c084b371057e78254d5998a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:39:08 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100
(qemu changes not included)
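
The 'no-' prefix semantics that the hand-coded 'ats' parser missed can be
modelled standalone as follows. This is a sketch, not Xen's boolean_param()
implementation, and parse_bool_param() is a hypothetical helper name.

```c
#include <string.h>

/* Minimal model of Xen-style boolean option parsing: "ats" enables,
 * "no-ats" disables, and "ats=<val>" accepts the usual spellings.
 * Returns 1 (on), 0 (off), or -1 (option does not match). */
static int parse_bool_param(const char *arg, const char *name)
{
    size_t n = strlen(name);

    if (!strncmp(arg, "no-", 3) && !strcmp(arg + 3, name))
        return 0;                       /* "no-ats" */

    if (strncmp(arg, name, n))
        return -1;                      /* a different option */

    if (arg[n] == '\0')
        return 1;                       /* bare "ats" */

    if (arg[n] == '=') {
        const char *val = arg + n + 1;

        if (!strcmp(val, "yes") || !strcmp(val, "on") ||
            !strcmp(val, "true") || !strcmp(val, "1"))
            return 1;
        if (!strcmp(val, "no") || !strcmp(val, "off") ||
            !strcmp(val, "false") || !strcmp(val, "0"))
            return 0;
    }
    return -1;
}
```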

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 12d9655b96c702c7a936cefeec27c7fd19ff6d09
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:42:37 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return back out to the
    nested panic() call, which will maybe_reboot() on our behalf.  This requires a
    bit of return plumbing back up to kexec_crash().
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE, which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100
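The reentry-versus-race distinction described above can be sketched roughly as follows. This is a simplified, single-threaded illustration of the idea in the commit message, not the actual Xen code (which uses an atomic test_and_set_bit() and per-CPU state); the name crashing_cpu is an assumption:

```c
#include <assert.h>
#include <stdbool.h>

#define CPU_NONE (-1)

/* Hypothetical sketch: which CPU currently owns the crash path. */
static int crashing_cpu = CPU_NONE;

/*
 * Returns true if the caller won the race and may continue down the
 * crash path; false if this is a reentry by the same CPU, in which
 * case the caller backs out to the nested panic() instead of
 * deadlocking.  The real code uses atomic operations; this sketch
 * is for illustration only.
 */
static bool one_cpu_only(int this_cpu)
{
    if (crashing_cpu == CPU_NONE) {
        crashing_cpu = this_cpu;   /* we own the crash path now */
        return true;
    }
    if (crashing_cpu == this_cpu)
        return false;              /* reentry: bail out, don't deadlock */
    while (1)
        ;                          /* another CPU owns it: park here */
}
```

With only a single bit of state, a reentering CPU is indistinguishable from a losing CPU and falls into the final (parking) branch; recording which CPU took ownership is what makes reentry detectable.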

commit 744165288752c2cfc179e5aeed6e3aa9905b480a
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:41:38 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless - The swapgs instruction does not cause a
    vmexit, so the cached result of this is potentially stale after the next guest
    instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100
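For context, disabling interception of an MSR on VT-x amounts to clearing its read and write bits in the VMCS MSR bitmap. The sketch below is a hypothetical illustration for a high-range MSR such as SHADOW_GS_BASE (0xC0000102), following the bitmap layout in the Intel SDM; it is not the actual Xen helper:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MSR_SHADOW_GS_BASE 0xC0000102u

/*
 * Clear the read and write intercept bits for a high MSR
 * (0xC0000000-0xC0001FFF) in a 4 KiB VMX MSR bitmap page.
 * Layout per the Intel SDM: bytes 1024-2047 hold read bits for
 * high MSRs, bytes 3072-4095 hold the corresponding write bits.
 */
static void disable_intercept_high_msr(uint8_t *bitmap, uint32_t msr)
{
    uint32_t idx = msr - 0xC0000000u;
    bitmap[1024 + idx / 8] &= (uint8_t)~(1u << (idx % 8));  /* reads  */
    bitmap[3072 + idx / 8] &= (uint8_t)~(1u << (idx % 8));  /* writes */
}
```

With both bits clear, guest rdmsr/wrmsr of this MSR no longer cause a vmexit, which is what removes the per-thread-switch overhead for 64-bit Windows.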

commit bb29b48ed8210ecb2c084b371057e78254d5998a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:39:08 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100
(qemu changes not included)
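The 'no-' prefix semantics that boolean_param() provides, and that the hand-coded parser missed, can be illustrated with a hypothetical minimal parser; this is a sketch, not Xen's implementation (which also accepts forms like "ats=off", omitted here):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Minimal sketch of Xen-style boolean command line semantics:
 * a bare "ats" enables the option, "no-ats" disables it.
 * Returns the new value, or 'current' if the argument is unrelated.
 */
static bool parse_bool_param(const char *arg, const char *name, bool current)
{
    if (!strncmp(arg, "no-", 3) && !strcmp(arg + 3, name))
        return false;               /* "no-ats" disables */
    if (!strcmp(arg, name))
        return true;                /* bare "ats" enables */
    return current;                 /* not our option: unchanged */
}
```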

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 22:17:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 22:17:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Hj1-0008UV-2K; Fri, 17 Jan 2014 22:17:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W4Hiz-0008UO-Tl
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 22:17:14 +0000
Received: from [193.109.254.147:59253] by server-1.bemta-14.messagelabs.com id
	3D/73-15600-9EBA9D25; Fri, 17 Jan 2014 22:17:13 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1389997029!9300679!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1522 invoked from network); 17 Jan 2014 22:17:12 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Jan 2014 22:17:12 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Fri, 17 Jan 2014 15:17:02 -0700
Message-ID: <52D9ABDD.5020804@suse.com>
Date: Fri, 17 Jan 2014 15:17:01 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-8-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1389975845-1195-8-git-send-email-ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 07/12] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Applications exist which want to use libxl in an event-driven mode but
> which do not integrate child termination into their event system, but
> instead reap all their own children synchronously.
>
> In such an application libxl must own SIGCHLD but avoid reaping any
> children that don't belong to libxl.
>
> Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
> behaviour.
>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
>
> ---
> v2: Document the new mode in the big "Subprocess handling" comment.
> ---
>  tools/libxl/libxl_event.h |   11 +++++++++++
>  tools/libxl/libxl_fork.c  |    7 +++++++
>  2 files changed, 18 insertions(+)
>
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 3c93955..824ac88 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -474,6 +474,12 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>   *       and provides a callback to be notified of their exit
>   *       statues.  The application must have only one libxl_ctx
>   *       configured this way.
> + *
> + *     libxl_sigchld_owner_libxl_always_selective_reap:
> + *
> + *       The application expects to reap all of its own children
> + *       synchronously, and does not use SIGCHLD.  libxl is
> + *       to install a SIGCHLD handler.
>   */
>  
>  
> @@ -491,6 +497,11 @@ typedef enum {
>      /* libxl owns SIGCHLD all the time, and the application is
>       * relying on libxl's event loop for reaping its children too. */
>      libxl_sigchld_owner_libxl_always,
> +
> +    /* libxl owns SIGCHLD all the time, but it must only reap its own
> +     * children.  The application will reap its own children
> +     * synchronously with waitpid, without the assistance of SIGCHLD. */
> +    libxl_sigchld_owner_libxl_always_selective_reap,
>   

ACK to the doc improvements here.  Much clearer IMO.

Regards,
Jim

>  } libxl_sigchld_owner;
>  
>  typedef struct {
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index b2325e0..16e17f6 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
>      case libxl_sigchld_owner_mainloop:
>          return 0;
>      case libxl_sigchld_owner_libxl_always:
> +    case libxl_sigchld_owner_libxl_always_selective_reap:
>          return 1;
>      }
>      abort();
> @@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
>      int e = libxl__self_pipe_eatall(selfpipe);
>      if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
>  
> +    if (CTX->childproc_hooks->chldowner
> +        == libxl_sigchld_owner_libxl_always_selective_reap) {
> +        childproc_checkall(egc);
> +        return;
> +    }
> +
>      while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
>          int status;
>          pid_t pid = checked_waitpid(egc, -1, &status);
>   
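For readers unfamiliar with the mode under review: the application keeps reaping its own children synchronously while libxl owns SIGCHLD and reaps only its own. A hypothetical sketch of the application side (the libxl configuration calls are omitted; only the plain POSIX reaping pattern is shown):

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * App-side pattern under libxl_sigchld_owner_libxl_always_selective_reap:
 * the app waits for its own specific pids, never waitpid(-1, ...), so it
 * cannot steal libxl's children; libxl's SIGCHLD handling covers its own.
 */
static int run_child_and_reap(void)
{
    pid_t pid = fork();
    if (pid == 0)
        _exit(7);                          /* child exits immediately */

    int status;
    pid_t r = waitpid(pid, &status, 0);    /* synchronous, pid-specific */
    assert(r == pid);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```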

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 22:19:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 22:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Hl4-0000GS-JS; Fri, 17 Jan 2014 22:19:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W4Hl2-0000GK-Oy
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 22:19:20 +0000
Received: from [85.158.139.211:38566] by server-1.bemta-5.messagelabs.com id
	75/E1-21065-86CA9D25; Fri, 17 Jan 2014 22:19:20 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1389997148!10482739!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTc5NjAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23043 invoked from network); 17 Jan 2014 22:19:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 22:19:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,676,1384300800"; 
	d="asc'?scan'208";a="91939450"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 22:18:49 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 17 Jan 2014 17:18:48 -0500
Message-ID: <1389997126.16457.339.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Fri, 17 Jan 2014 23:18:46 +0100
In-Reply-To: <CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
References: <1387044943-5325-1-git-send-email-jtweaver@hawaii.edu>
	<1387334265.3880.87.camel@Solace>
	<CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	xen-devel@lists.xen.org, Henri Casanova <henric@hawaii.edu>
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0988506268020692131=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0988506268020692131==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-xAcJLPyoZlSBTJoi3JVZ"

--=-xAcJLPyoZlSBTJoi3JVZ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2014-01-16 at 10:23 -1000, Justin Weaver wrote:
> Dario,
>
Hey! :-)

> Sorry for disappearing for so long ... I'm back and ready to continue working.
>
NP at all.

> Other functions will need to change, but currently with only one run
> queue, only runq_candidate needed to change. I'll look through the
> others again with the mindset that we (or maybe I) will fix the issue
> that is causing only one run queue to be created despite having
> multiple cores/sockets available.
>
> >> Function now chooses the vCPU with the most credit that has hard affinity
> >> and maybe soft affinity for the given pCPU. If it does not have soft affinity
> >> and there is another vCPU that prefers to run on the given pCPU, then as long
> >> as it has at least a certain amount of credit (currently defined as half of
> >> CSCHED_CREDIT_INIT, but more testing is needed to determine the best value)
> >> then it is chosen instead.
> >>
> > Ok, so, why this 'certain amount of credit' thing? I got the technical
> > details of it from the code below, but can you spend a few words on why
> > and how you think something like this would be required and/or useful?
>
Allow me to comment only on the 'only one runqueue on multiple sockets'
issue. I honestly think that one is a bug, so you shouldn't base your
work on that behavior. To help you with this, I'll try to put together a
patch fixing the issue early next week. I'm not sure whether it will be
accepted in Xen right now or only when the 4.5 development cycle opens,
but at least you can apply it and work on top of it.

Would that make sense and be of any help to you?

Regards,
Dario
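(For reference, the candidate-selection heuristic quoted above can be sketched as follows. This is a hypothetical simplification, not the posted patch: the struct layout, the array-based runqueue, and the CREDIT_INIT value are all assumptions made for illustration.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define CREDIT_INIT 256
#define SOFT_CREDIT_MIN (CREDIT_INIT / 2)   /* "half of CSCHED_CREDIT_INIT" */

struct vc { int credit; bool hard_aff; bool soft_aff; };

/*
 * Pick the highest-credit vCPU with hard affinity for this pCPU, but
 * prefer a soft-affine vCPU as long as it has at least SOFT_CREDIT_MIN
 * credit.  Returns the chosen index, or -1 if nothing is runnable here.
 */
static int runq_candidate(const struct vc *q, size_t n)
{
    int best = -1, best_soft = -1;
    for (size_t i = 0; i < n; i++) {
        if (!q[i].hard_aff)
            continue;                        /* hard affinity is a must */
        if (best < 0 || q[i].credit > q[best].credit)
            best = (int)i;
        if (q[i].soft_aff && q[i].credit >= SOFT_CREDIT_MIN &&
            (best_soft < 0 || q[i].credit > q[best_soft].credit))
            best_soft = (int)i;
    }
    return best_soft >= 0 ? best_soft : best;
}
```

The threshold keeps a nearly exhausted but soft-affine vCPU from displacing a much better funded one, which is the trade-off the 'certain amount of credit' question is probing.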

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-xAcJLPyoZlSBTJoi3JVZ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLZrEYACgkQk4XaBE3IOsQ7iQCfX7kT+szKDfALSeJXiNBEdL9k
GasAn1BBQVyPwmmJpSIRAK1K3mbPyWIW
=Dzpe
-----END PGP SIGNATURE-----

--=-xAcJLPyoZlSBTJoi3JVZ--


--===============0988506268020692131==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0988506268020692131==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 22:30:15 2014
Message-ID: <52D9AECF.6050309@suse.com>
Date: Fri, 17 Jan 2014 15:29:35 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility

Ian Jackson wrote:
> libvirt reaps its children synchronously and has no central pid
> registry and no dispatch mechanism.  libxl does have a pid registry so
> can provide a selective reaping facility, but that is not currently
> exposed.  Here we expose it.
>
> Also, libvirt has multiple libxl ctxs.  Prior to this series it is not
> possible for those to share SIGCHLD: libxl expects either the
> application, or _one_ libxl ctx, to own SIGCHLD.  In the final patch
> of this series we relax this restriction by having libxl maintain a
> process-wide list of the libxl ctxs that are supposed to be interested
> in SIGCHLD.
>
> I have not tested the selective reaping functionality.  The most
> plausible test environment for that is a suitably modified libvirt.
>   

I've been testing this series (plus 1/3 in your "tools: Miscellaneous
fixes for 4.4" series) on a suitably modified libvirt, and the results
look good so far :).

I'm running four scripts concurrently that

- start / stop domA
- save / restore domB
- reboot domC
- get stats on dom{A,B,C}

They have been running for about an hour now, and I haven't noticed any
problems.

Thanks!
Jim

> I have tested the new SIGCHLD plumbing, at least with a single ctx,
> since xl uses it.  Testing that it works in a real multi-ctx
> application is again probably most easily done with libvirt.
>
> I hope that with this series applied, simply having libvirt pass
> libxl_sigchld_owner_libxl_always_selective_reap should be sufficient
> for everything to work.  There is no need to specifically request the
> SIGCHLD-sharing.
>
>  a 01/12] libxl: fork: Break out checked_waitpid
>  a 02/12] libxl: fork: Break out childproc_reaped_ours
>  a 03/12] libxl: fork: Clarify docs for libxl_sigchld_owner
> *  04/12] libxl: fork: Document libxl_sigchld_owner_libxl better
>  a 05/12] libxl: fork: assert that chldmode is right
>  a 06/12] libxl: fork: Provide libxl_childproc_sigchld_occurred
> +a 07/12] libxl: fork: Provide ..._always_selective_reap
>  a 08/12] libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
> *  09/12] libxl: fork: Rename sigchld handler functions
> *  10/12] libxl: fork: Break out sigchld_installhandler_core
> *  11/12] libxl: fork: Break out sigchld_sethandler_raw
> *  12/12] libxl: fork: Share SIGCHLD handler amongst ctxs
>
> (a = acked; * = new patch; + = modified patch)
>
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 17 22:34:43 2014
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24416-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 22:34:22 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24416: tolerable FAIL - PUSHED

flight 24416 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24416/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503
baseline version:
 xen                  c04c825bdf1e946260cba325eeed993004051050

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Julien Grall <julien.grall@linaro.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=f26d849dff24120e6ad633db94abfbc6e6572503
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable f26d849dff24120e6ad633db94abfbc6e6572503
+ branch=xen-unstable
+ revision=f26d849dff24120e6ad633db94abfbc6e6572503
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git f26d849dff24120e6ad633db94abfbc6e6572503:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   c04c825..f26d849  f26d849dff24120e6ad633db94abfbc6e6572503 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 17 23:03:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 23:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4IQu-0002z6-Li; Fri, 17 Jan 2014 23:02:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W4IQt-0002z1-3E
	for xen-devel@lists.xen.org; Fri, 17 Jan 2014 23:02:35 +0000
Received: from [85.158.143.35:52709] by server-2.bemta-4.messagelabs.com id
	CA/EE-11386-A86B9D25; Fri, 17 Jan 2014 23:02:34 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1389999751!12461828!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30916 invoked from network); 17 Jan 2014 23:02:33 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 23:02:33 -0000
Received: by mail-pd0-f181.google.com with SMTP id y10so1016975pdj.12
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 15:02:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:mime-version:content-type
	:content-disposition:user-agent;
	bh=gPIu1hjZwcrxHmZ+OoXHgrNYsVQnUU3oX0wLMdXP29Y=;
	b=dWd+HF7jg4Efe+w7OaOVLRmSadxFvAwLKBYaWpg/X4U7O3USEc6aZmTYSWPMmPXy31
	Ll9DquCe8NlxzPlp4+h6gz/WPs4eJRtT4LyWyw4SuLENa/GQHd4lhltRywjQuruS4LSI
	eUocSC2dI0gCfowrIYSg2tdxexajZioi3irhIXyoLVVsg+3fG9ODFF8IWeZbfKC02S60
	+0FnHmeIWmv25zJmgV8/m4GVYUzQRFjVKmrZeKRgj3PuAsPmdmLd1VQB7Iqr6pmPzDSB
	ZrApNX/JCvuaQpHmvRgMpcvVN8iRzY6Fc3xGqIq5CS7sZJIydtwXkBHuTaOcB8GuILcQ
	WpYQ==
X-Received: by 10.66.122.40 with SMTP id lp8mr5083452pab.82.1389999751092;
	Fri, 17 Jan 2014 15:02:31 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223])
	by mx.google.com with ESMTPSA id vp4sm34787270pab.8.2014.01.17.15.02.25
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 17 Jan 2014 15:02:28 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 17 Jan 2014 15:02:24 -0800
Date: Fri, 17 Jan 2014 15:02:24 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: xen-devel@lists.xen.org
Message-ID: <20140117230219.GA28413@garbanzo.do-not-panic.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org
Subject: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8625916587483253016=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8625916587483253016==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="yrj/dFKFPuw6o+aM"
Content-Disposition: inline


--yrj/dFKFPuw6o+aM
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

As per linux-next Next/Trees [0], and a recent January MAINTAINERS patch [1]
from David, one of the xen development kernel git trees to track is
xen/tip.git [2]. This tree, however, has undefined references when doing a
fresh clone [shown below], but as expected works well when cloning only the
linux-next branch [also below]. While I'm sure this is fine for folks who
can do the guesswork, do we really want to live with trees like these in
MAINTAINERS? The MAINTAINERS file doesn't let us specify required branches,
so perhaps it should -- if we want to live with these? Curious, how many
other git trees are there with a similar situation?

The xen project web site actually lists [3] Konrad's xen git tree [4] as
the primary development tree; that probably should be updated now, likely
with instructions to clone only the linux-next branch?

[0] https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/tree/Next/Trees#n176
[1] http://lists.xen.org/archives/html/xen-devel/2014-01/msg01504.html
[2] git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
[3] http://wiki.xenproject.org/wiki/Xen_Repositories#Primary_Xen_Repository
[4] git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git

mcgrof@bubbles ~ $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git --reference linux/.git
Cloning into 'tip'...
remote: Counting objects: 2806, done.
remote: Compressing objects: 100% (334/334), done.
remote: Total 1797 (delta 1511), reused 1646 (delta 1462)
Receiving objects: 100% (1797/1797), 711.01 KiB | 640.00 KiB/s, done.
Resolving deltas: 100% (1511/1511), completed with 306 local objects.
Checking connectivity... done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.

mcgrof@work ~ $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git -b linux-next --reference linux/.git
Cloning into 'tip'...
remote: Counting objects: 2806, done.
remote: Compressing objects: 100% (377/377), done.
remote: Total 1797 (delta 1545), reused 1607 (delta 1419)
Receiving objects: 100% (1797/1797), 485.23 KiB | 0 bytes/s, done.
Resolving deltas: 100% (1545/1545), completed with 327 local objects.
Checking connectivity... done.
Checking out files: 100% (44979/44979), done.
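The dangling-HEAD behaviour in the transcripts above can be reproduced entirely locally, without touching kernel.org. This is a sketch with made-up repository names (src, plain, picked are hypothetical throwaway paths): point a source repo's HEAD symref at a branch that does not exist, and a plain clone has nothing to check out, while naming the branch with -b still works.

```shell
# Sketch only: local reproduction of a remote whose HEAD is a dangling symref.
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q src
echo demo > src/README
git -C src add README
git -C src -c user.name=t -c user.email=t@example.com commit -q -m 'initial'
git -C src branch linux-next                            # a branch that exists
git -C src symbolic-ref HEAD refs/heads/no-such-branch  # HEAD -> missing ref

git clone src plain        # warns: remote HEAD refers to nonexistent ref
git clone -q -b linux-next src picked                   # explicit branch works
git -C picked rev-parse --abbrev-ref HEAD
```

The first clone still succeeds (the objects and refs arrive), but leaves the working tree empty; only the second, branch-qualified clone checks files out.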

  Luis

--yrj/dFKFPuw6o+aM
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAEBAgAGBQJS2bZ7AAoJEHoaE3+fqtqo2PUP/3XmVtQoDTfWm4dvKq5Sb0UO
t7sJR6CXny8MOFHXPpBiuucqjuXv6EPcOKu++04M4dC3bb9NMoaF07X1K5nBTtYF
ho/BxDzJIcRVwB0FzSUSLfbPHCNurTqo+vXZrSIJGuJcIwmenChPCq8eHs1LpKU6
5fZO8t4W480nkuLY7C7w1Y2YEDBXFC+DYa/x/U8KcKqdmzJOexR0Wo68B9a9PdLJ
HcUljCcj9mrLLQBNY4sEG0trwOBJtb/ph+FgfL7aJ3kgRsKcAT0jB5/MEnld/t/6
Y5uxl/06T7i04hri3YARFAEZZZy0LCats8N2X91TrkIhJ00HWxj3qcrfWixabdxk
77LupBfVLxkQ0hoS3yxOgvj08jBJYxftBPiBYlm+tGG/GYcRcLMLc8tZ/ExUyYax
7i3bg53q84oq+bombYRVCqV/PtYr585V/xi1cF4Bgs+BU14+jbaeoxxNWYNdADl+
GrR2u6r03i+m/m2Lx4+pGzsm6J2G3FQTU92Tbhd6m4RSLbDJRYTNDIB24iIADgQR
Hj/LyMiTTMIpuP4oyALv2OKIfIMLlRSLpUQFRvZWCimAp7rOO6O6Q9QxJP8HoS36
RvEua1584Mg460U3p+PrrEt4aJd17GD6Lr1MZxw148xUlNB2Dt1LW6Bsjbm8dQ/9
n0OafA1t6B50U2r9ktah
=13gW
-----END PGP SIGNATURE-----

--yrj/dFKFPuw6o+aM--


--===============8625916587483253016==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8625916587483253016==--


From xen-devel-bounces@lists.xen.org Fri Jan 17 23:03:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jan 2014 23:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4IR5-000303-8j; Fri, 17 Jan 2014 23:02:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4IR3-0002zw-Hk
	for xen-devel@lists.xensource.com; Fri, 17 Jan 2014 23:02:45 +0000
Received: from [193.109.254.147:39222] by server-4.bemta-14.messagelabs.com id
	0B/9A-03916-496B9D25; Fri, 17 Jan 2014 23:02:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1389999762!10129695!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26502 invoked from network); 17 Jan 2014 23:02:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Jan 2014 23:02:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,676,1384300800"; d="scan'208";a="91949685"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Jan 2014 23:02:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 17 Jan 2014 18:02:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4IQy-0006K7-LM;
	Fri, 17 Jan 2014 23:02:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4IQy-0007N0-J3;
	Fri, 17 Jan 2014 23:02:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24420-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 17 Jan 2014 23:02:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24420: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24420 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24420/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24345

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f
baseline version:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7261a3fc6e6101293cff232b9423dd41b140fc0f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:37:06 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return back out to the
    nested panic() call, which will maybe_reboot() on our behalf.  This requires a
    bit of return plumbing back up to kexec_crash().
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE, which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100

commit 491788b98c7b35822cab8bdf66504a78c88414ee
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:33:54 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless - The swapgs instruction does not cause a
    vmexit, so the cached result of this is potentially stale after the next guest
    instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100

commit be29888fb069cae35be251ce3fcf74e937030812
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:33:10 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100
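The 'no-' prefix semantics the commit above refers to can be sketched in shell. This is illustrative only; the real boolean_param() machinery is C in Xen and accepts more spellings than the bare prefix shown here.

```shell
# Hypothetical sketch of 'no-' prefix boolean parsing: "ats" enables the
# option, "no-ats" disables it. Not the actual Xen parser.
parse_bool() {          # $1 = raw command line token; sets $value
    case "$1" in
        no-*) value=0 ;;
        *)    value=1 ;;
    esac
}
```

The hand-coded parser being replaced missed exactly this prefix, so "no-ats" on the Xen command line did not behave like the other boolean parameters.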

commit 39b9a5bc0858b604560499afdc9964a670c8b67b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:31:50 2014 +0100

    x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
    
    Having nmi_shootdown_cpus() report which pcpus failed to be shot down is a
    useful debugging hint as to what possibly went wrong (especially when the
    crash logs seem to indicate that an NMI timeout occurred while waiting for one
    of the problematic pcpus to perform an action).
    
    This is achieved by swapping an atomic_t count of unreported pcpus with a
    cpumask.  In the case that the 1 second timeout occurs, use the cpumask to
    identify the problematic pcpus.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704
    master date: 2013-09-26 10:14:51 +0200
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7261a3fc6e6101293cff232b9423dd41b140fc0f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:37:06 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return out to the nested
    panic() call, which will maybe_reboot() on our behalf.  This requires a bit of
    return plumbing back up to kexec_crash().
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE, which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100

commit 491788b98c7b35822cab8bdf66504a78c88414ee
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:33:54 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless: the swapgs instruction does not cause a
    vmexit, so the cached value of this MSR is potentially stale after the next
    guest instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100

commit be29888fb069cae35be251ce3fcf74e937030812
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:33:10 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100

commit 39b9a5bc0858b604560499afdc9964a670c8b67b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:31:50 2014 +0100

    x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
    
    Having nmi_shootdown_cpus() report which pcpus failed to be shot down is a
    useful debugging hint as to what possibly went wrong (especially when the
    crash logs seem to indicate that an NMI timeout occurred while waiting for one
    of the problematic pcpus to perform an action).
    
    This is achieved by swapping an atomic_t count of unreported pcpus with a
    cpumask.  In the case that the 1 second timeout occurs, use the cpumask to
    identify the problematic pcpus.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704
    master date: 2013-09-26 10:14:51 +0200
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 02:26:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 02:26:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4LbO-0000Lw-MI; Sat, 18 Jan 2014 02:25:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W4LbM-0000Lm-BN
	for Xen-devel@lists.xensource.com; Sat, 18 Jan 2014 02:25:36 +0000
Received: from [85.158.143.35:33665] by server-2.bemta-4.messagelabs.com id
	51/AB-11386-F16E9D25; Sat, 18 Jan 2014 02:25:35 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390011933!12515190!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16946 invoked from network); 18 Jan 2014 02:25:34 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Jan 2014 02:25:34 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0I2PV17031831
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 18 Jan 2014 02:25:31 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0I2PUPk001052
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 18 Jan 2014 02:25:30 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0I2PUWF008694; Sat, 18 Jan 2014 02:25:30 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 18:25:30 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Fri, 17 Jan 2014 18:24:55 -0800
Message-Id: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

PVH was designed to start with the PV flags, but commit
51e2cac257ec8b4080d89f0855c498cbbd76a5e5 in the xen tree removed some of the
flags because they are not necessary there. As a result, these CR flags must
now be set in the guest.

Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 arch/x86/xen/enlighten.c |   43 +++++++++++++++++++++++++++++++++++++------
 arch/x86/xen/smp.c       |    2 +-
 arch/x86/xen/xen-ops.h   |    2 +-
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 628099a..4a2aaa6 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1410,12 +1410,8 @@ static void __init xen_boot_params_init_edd(void)
  * Set up the GDT and segment registers for -fstack-protector.  Until
  * we do this, we have to be careful not to call any stack-protected
  * function, which is most of the kernel.
- *
- * Note, that it is refok - because the only caller of this after init
- * is PVH which is not going to use xen_load_gdt_boot or other
- * __init functions.
  */
-void __ref xen_setup_gdt(int cpu)
+static void xen_setup_gdt(int cpu)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 #ifdef CONFIG_X86_64
@@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+/*
+ * A pv guest starts with default flags that are not set for pvh, set them
+ * here asap.
+ */
+static void xen_pvh_set_cr_flags(int cpu)
+{
+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);
+
+	if (!cpu)
+		return;
+	/*
+	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
+	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
+	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
+	 */
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	if (cpu_has_pge)
+		set_in_cr4(X86_CR4_PGE);
+}
+
+/*
+ * Note, that it is refok - because the only caller of this after init
+ * is PVH which is not going to use xen_load_gdt_boot or other
+ * __init functions.
+ */
+void __ref xen_pvh_secondary_vcpu_init(int cpu)
+{
+	xen_setup_gdt(cpu);
+	xen_pvh_set_cr_flags(cpu);
+}
+
 static void __init xen_pvh_early_guest_init(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		return;
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
+	if (xen_feature(XENFEAT_hvm_callback_vector)) {
 		xen_have_vector_callback = 1;
+		xen_pvh_set_cr_flags(0);
+	}
 
 #ifdef CONFIG_X86_32
 	BUG(); /* PVH: Implement proper support. */
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 5e46190..a18eadd 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
 #ifdef CONFIG_X86_64
 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
 	    xen_feature(XENFEAT_supervisor_mode_kernel))
-		xen_setup_gdt(cpu);
+		xen_pvh_secondary_vcpu_init(cpu);
 #endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9059c24..1cb6f4c 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
-void xen_setup_gdt(int cpu);
+void xen_pvh_secondary_vcpu_init(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 02:26:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 02:26:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4LbP-0000M3-IG; Sat, 18 Jan 2014 02:25:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W4LbM-0000Ln-Ht
	for Xen-devel@lists.xensource.com; Sat, 18 Jan 2014 02:25:36 +0000
Received: from [85.158.139.211:32518] by server-2.bemta-5.messagelabs.com id
	B7/13-29392-F16E9D25; Sat, 18 Jan 2014 02:25:35 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390011933!10294557!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15095 invoked from network); 18 Jan 2014 02:25:34 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Jan 2014 02:25:34 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0I2PV7Q031832
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 18 Jan 2014 02:25:32 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0I2PTUM005808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 18 Jan 2014 02:25:30 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0I2PTxx008688; Sat, 18 Jan 2014 02:25:29 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 17 Jan 2014 18:25:29 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Fri, 17 Jan 2014 18:24:54 -0800
Message-Id: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,

The following patch sets the bits in CR0 and CR4. Please note, I'm working
on a patch for the Xen side; the CR4 features are not currently exported
to a PVH guest.

Roger, I added your SOB line, please lmk if I need to add anything else.

This patch was built on top of a71accb67e7645c68061cec2bee6067205e439fc in
the konrad devel/pvh.v13 branch.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 02:36:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 02:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4LlT-00014n-Rf; Sat, 18 Jan 2014 02:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W4LlR-00014i-OZ
	for xen-devel@lists.xen.org; Sat, 18 Jan 2014 02:36:02 +0000
Received: from [85.158.139.211:55762] by server-4.bemta-5.messagelabs.com id
	C9/A8-26791-098E9D25; Sat, 18 Jan 2014 02:36:00 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390012558!7783731!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32611 invoked from network); 18 Jan 2014 02:35:59 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 02:35:59 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so4244124qcv.19
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 18:35:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=JWS977IzwPwnUU0PXmWGZO89XsZ4p+G48Zov5NZBJH8=;
	b=TddYUhmgentqAFl2XYsTt/iJ+vNdWg2YUpso2KGRrM1G94jH1T/4o2O33mOH+2smZo
	tSKEYpipBs1TL5GFpBhlkj7wWmy0/yXNj+Gmy+bOrQtLknbfwTBpxJ3rFLlhyCnnFgMA
	dPOiFI+CVlQr0i3/ndK038ChpxKCQs37jUe9FY2v/1xmUKYrvcC9SUcktizJ0JeqVP7Y
	EzIM3TLsAGJhU0okRttlrevGRNHaQLY8LpgYf9Ivdcaiz13KlTa953e9/EVkiB4efjru
	oriROYOcFU18RzIecm+LpNVPB8WF/jS30RTQtd6DrQA+zACOvYAEvlOMlXuDZTNuZ/hp
	V76A==
MIME-Version: 1.0
X-Received: by 10.224.124.133 with SMTP id u5mr8788026qar.79.1390012557469;
	Fri, 17 Jan 2014 18:35:57 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 18:35:56 -0800 (PST)
In-Reply-To: <52D9309D.1030808@citrix.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
	<1389956491.6697.64.camel@kazak.uk.xensource.com>
	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
	<1389959942.6697.87.camel@kazak.uk.xensource.com>
	<52D9309D.1030808@citrix.com>
Date: Sat, 18 Jan 2014 10:35:56 +0800
Message-ID: <CAF1ZMEe6U-PrSo2S_vqg0ukOHM4tag5wE52scd3+M9tOhGEC0A@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/mixed; boundary=001a11c29f8ab9218804f0358452
Cc: Eugene Fedotov <e.fedotov@samsung.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, Jan 17, 2014 at 9:31 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 17/01/14 12:59, Ian Campbell wrote:
>> On Fri, 2014-01-17 at 19:43 +0800, Dennis Lan (dlan) wrote:
>>> On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>
>>>
>>>> vif-bridge and the common scripts which it includes would be a good
>>>> start. Just an echo at the top to confirm that the script is running
>>>> would be useful.
>>>>
>>>> I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
>>>> when these scripts were launched by udev, but now that libxl runs them
>>>> you may find that the debug from the script comes out on stdout/err of
>>>> the xl create command so perhaps that isn't needed any more.
>>>>
>>>>> headless here.
>>>>
>>>> That shouldn't matter, you are looking for output from userspace
>>>> scripts, not kernel or hypervisor logs.
>>>>
>>>> Ian.
>>>>
>>>
>>> Hi Ian,
>>> I suspect that with 4.4.0 the network devices were not even detected.
>>> This is the output from 4.3.1; my notes follow the lines.
>>>
>>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>>> script: /etc/xen/scripts/vif-bridge online
>>> dlan: vif-bridge start
>>> dlan: vif-common start
>>>
>>> dlan: vif-bridge start -> output from vif-bridge script
>>> dlan: vif-common start -> output from vif-common.sh script
>>
>> So these are the 4.3 logs? Have you tried 4.4 and found that it doesn't
>> produce the same output?
>>
>> (please can you try and set the text type to "preformatted" for the logs
>> -- having them wrapped makes them very hard to read).
>>
>> The lack of
>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
>> in your original logs is a bit concerning.
>>
>> Roger -- any ideas?
>
> My first guess would be that libxl__get_domid failed, however I'm not
> able to reproduce this. I'm attaching a patch to add an error message
> if libxl__get_domid fails, and also prevent the removal of xenstore
> entries so we can see what's going on. Dennis/Eugene, could you try the
> attached patch and send the output of xl -vvv create <...> and
> xenstore-ls -fp after the failed creation?
>
> ---
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..03f9fe9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -1296,6 +1296,9 @@ static void domcreate_complete(libxl__egc *egc,
>          rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
>
>      if (rc) {
> +        LOG(ERROR, "domain creation failed, not doing removal of xs entries");
> +        dcs->callback(egc, dcs, rc, -1);
> +        return;
>          if (dcs->guest_domid) {
>              dcs->dds.ao = ao;
>              dcs->dds.domid = dcs->guest_domid;
> diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
> index ba7d100..56d8162 100644
> --- a/tools/libxl/libxl_device.c
> +++ b/tools/libxl/libxl_device.c
> @@ -965,7 +965,10 @@ static void device_hotplug(libxl__egc *egc, libxl__ao_device *aodev)
>       * hotplug scripts
>       */
>      rc = libxl__get_domid(gc, &domid);
> -    if (rc) goto out;
> +    if (rc) {
> +        LOG(ERROR, "unable to get domain id, error: %d", rc);
> +        goto out;
> +    }
>      if (aodev->dev->backend_domid != domid) {
>          if (aodev->action != LIBXL__DEVICE_ACTION_REMOVE)
>              goto out;
>
With this patch applied I got the following error; see the attached file for more info:

ofire configs # xl create -c test1_stable
Parsing config from test1_stable
libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
id, error: -3
libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
id, error: -3
libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to
add nic devices
libxl: error: libxl_create.c:1279:domcreate_complete: domain creation
failed, not doing removal of xs entries
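Ian's debugging suggestion quoted above (an echo plus an output redirect at the top of the hotplug script) can be sketched like this; the log path and marker text are illustrative, not part of the shipped vif-bridge script:

```shell
#!/bin/sh
# Illustrative debugging preamble for a hotplug script such as
# /etc/xen/scripts/vif-bridge (log path and marker text are made up).
LOG=/tmp/hotplug.log

# Send everything the script prints (stdout and stderr) to the log.
exec 1>>"$LOG" 2>&1

# A marker line proves the script was actually invoked at all.
echo "dlan: vif-bridge start: action=${1:-none} vif=${vif:-unset}"
```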

--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset=US-ASCII; name="log.txt"
Content-Disposition: attachment; filename="log.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hqk9klo40

bGlieGw6IGRlYnVnOiBsaWJ4bF9jcmVhdGUuYzoxMzI1OmRvX2RvbWFpbl9jcmVhdGU6IGFvIDB4
NjJhNjQwOiBjcmVhdGU6IGhvdz0obmlsKSBjYWxsYmFjaz0obmlsKSBwb2xsZXI9MHg2MjlmOTAK
bGlieGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyNTE6bGlieGxfX2RldmljZV9kaXNrX3NldF9i
YWNrZW5kOiBEaXNrIHZkZXY9eHZkYTEgc3BlYy5iYWNrZW5kPXVua25vd24KbGlieGw6IGRlYnVn
OiBsaWJ4bF9kZXZpY2UuYzoxOTc6ZGlza190cnlfYmFja2VuZDogRGlzayB2ZGV2PXh2ZGExLCBi
YWNrZW5kIHBoeSB1bnN1aXRhYmxlIGFzIHBoeXMgcGF0aCBub3QgYSBibG9jayBkZXZpY2UKbGli
eGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyODY6bGlieGxfX2RldmljZV9kaXNrX3NldF9iYWNr
ZW5kOiBEaXNrIHZkZXY9eHZkYTEsIHVzaW5nIGJhY2tlbmQgcWRpc2sKbGlieGw6IGRlYnVnOiBs
aWJ4bF9jcmVhdGUuYzo3Nzc6aW5pdGlhdGVfZG9tYWluX2NyZWF0ZTogcnVubmluZyBib290bG9h
ZGVyCmxpYnhsOiBkZWJ1ZzogbGlieGxfYm9vdGxvYWRlci5jOjMyNzpsaWJ4bF9fYm9vdGxvYWRl
cl9ydW46IG5vIGJvb3Rsb2FkZXIgY29uZmlndXJlZCwgdXNpbmcgdXNlciBzdXBwbGllZCBrZXJu
ZWwKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjYwNzpsaWJ4bF9fZXZfeHN3YXRjaF9kZXJl
Z2lzdGVyOiB3YXRjaCB3PTB4NjI4ODU4OiBkZXJlZ2lzdGVyIHVucmVnaXN0ZXJlZApsaWJ4bDog
ZGVidWc6IGxpYnhsX251bWEuYzo0NzU6bGlieGxfX2dldF9udW1hX2NhbmRpZGF0ZTogTmV3IGJl
c3QgTlVNQSBwbGFjZW1lbnQgY2FuZGlkYXRlIGZvdW5kOiBucl9ub2Rlcz0xLCBucl9jcHVzPTQs
IG5yX3ZjcHVzPTYsIGZyZWVfbWVta2I9NDMzNgpsaWJ4bDogZGV0YWlsOiBsaWJ4bF9kb20uYzox
OTU6bnVtYV9wbGFjZV9kb21haW46IE5VTUEgcGxhY2VtZW50IGNhbmRpZGF0ZSB3aXRoIDEgbm9k
ZXMsIDQgY3B1cyBhbmQgNDMzNiBLQiBmcmVlIHNlbGVjdGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSJyb290PS9kZXYveHZkYTEgcm8gcm9vdGZzdHlw
ZT1leHQ0IGNvbnNvbGU9aHZjMCIsIGZlYXR1cmVzPSIobnVsbCkiCmxpYnhsOiBkZWJ1ZzogbGli
eGxfZG9tLmM6MzU3OmxpYnhsX19idWlsZF9wdjogcHYga2VybmVsIG1hcHBlZCAwIHBhdGgga2Vy
bmVsL2tlcm5lbC1nZW5rZXJuZWwteDg2XzY0LTMuMTIuNgpkb21haW5idWlsZGVyOiBkZXRhaWw6
IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Imtlcm5lbC9rZXJuZWwtZ2Vua2VybmVsLXg4
Nl82NC0zLjEyLjYiCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvY19maWxlbWFw
ICAgIDogODE4NSBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yYW1kaXNrX2ZpbGU6
IGZpbGVuYW1lPSJrZXJuZWwvaW5pdHJhbWZzLWdlbmtlcm5lbC14ODZfNjQtMy4xMi42Igpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9tYWxsb2NfZmlsZW1hcCAgICA6IDQ2MjQga0IKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF94ZW5faW5pdDogdmVyIDQuNCwgY2FwcyB4
ZW4tMy4wLXg4Nl82NCB4ZW4tMy4wLXg4Nl8zMnAgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX3BhcnNlX2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmlu
ZF9sb2FkZXI6IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhj
X2RvbV9maW5kX2xvYWRlcjogdHJ5aW5nIExpbnV4IGJ6SW1hZ2UgbG9hZGVyIC4uLiAKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jICAgICAgICAgICAgOiAyNTIwMiBrQgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9kb19ndW56aXA6IHVuemlwIG9rLCAweDdmNTU3YyAt
PiAweDE4OWNhMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsb2FkZXIgcHJvYmUgT0sKeGM6IGRl
dGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4MTE0
MTAwMAp4YzogZGV0YWlsOiBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDIyMDAwMDAg
bWVtc3o9MHgxMWMwZjAKeGM6IGRldGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9
MHgyMzFkMDAwIG1lbXN6PTB4MTRjODAKeGM6IGRldGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhk
cjogcGFkZHI9MHgyMzMyMDAwIG1lbXN6PTB4NzAwMDAwCnhjOiBkZXRhaWw6IGVsZl9wYXJzZV9i
aW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmEzMjAwMAp4YzogZGV0YWlsOiBlbGZfeGVu
X3BhcnNlX25vdGU6IEdVRVNUX09TID0gImxpbnV4Igp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNl
X25vdGU6IEdVRVNUX1ZFUlNJT04gPSAiMi42Igp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25v
dGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90
ZTogVklSVF9CQVNFID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFy
c2Vfbm90ZTogRU5UUlkgPSAweGZmZmZmZmZmODIzMzIxZTAKeGM6IGRldGFpbDogZWxmX3hlbl9w
YXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4ZmZmZmZmZmY4MTAwMTAwMAp4YzogZGV0YWls
OiBlbGZfeGVuX3BhcnNlX25vdGU6IEZFQVRVUkVTID0gIiF3cml0YWJsZV9wYWdlX3RhYmxlc3xw
YWVfcGdkaXJfYWJvdmVfNGdiIgp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9N
T0RFID0gInllcyIKeGM6IGRldGFpbDogZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2Vu
ZXJpYyIKeGM6IGRldGFpbDogZWxmX3hlbl9wYXJzZV9ub3RlOiB1bmtub3duIHhlbiBlbGYgbm90
ZSAoMHhkKQp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0g
MHgxCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZm
ODAwMDAwMDAwMDAwCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFERFJfT0ZGU0VU
ID0gMHgwCnhjOiBkZXRhaWw6IGVsZl94ZW5fYWRkcl9jYWxjX2NoZWNrOiBhZGRyZXNzZXM6Cnhj
OiBkZXRhaWw6ICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBk
ZXRhaWw6ICAgICBlbGZfcGFkZHJfb2Zmc2V0ID0gMHgwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X29m
ZnNldCAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2tlbmQgICAg
ICAgID0gMHhmZmZmZmZmZjgyYTMyMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2VudHJ5ICAgICAg
ID0gMHhmZmZmZmZmZjgyMzMyMWUwCnhjOiBkZXRhaWw6ICAgICBwMm1fYmFzZSAgICAgICAgID0g
MHhmZmZmZmZmZmZmZmZmZmZmCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX2Vs
Zl9rZXJuZWw6IHhlbi0zLjAteDg2XzY0OiAweGZmZmZmZmZmODEwMDAwMDAgLT4gMHhmZmZmZmZm
ZjgyYTMyMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiBtZW0gMjA0
OCBNQiwgcGFnZXMgMHg4MDAwMCBwYWdlcywgNGsgZWFjaApkb21haW5idWlsZGVyOiBkZXRhaWw6
IHhjX2RvbV9tZW1faW5pdDogMHg4MDAwMCBwYWdlcwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhj
X2RvbV9ib290X21lbV9pbml0OiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4ODZfY29t
cGF0OiBndWVzdCB4ZW4tMy4wLXg4Nl82NCwgYWRkcmVzcyBzaXplIDY0CmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogeGNfZG9tX21hbGxvYyAgICAgICAgICAgIDogNDA5NiBrQgpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9idWlsZF9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2FsbG9jX3NlZ21lbnQ6ICAga2VybmVsICAgICAgIDogMHhmZmZmZmZmZjgxMDAw
MDAwIC0+IDB4ZmZmZmZmZmY4MmEzMjAwMCAgKHBmbiAweDEwMDAgKyAweDFhMzIgcGFnZXMpCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvYyAgICAgICAgICAgIDogMTU3IGtCCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3Bmbl90b19wdHJfcmV0Y291bnQ6IGRvbVUgbWFw
cGluZzogcGZuIDB4MTAwMCsweDFhMzIgYXQgMHg3ZmE0NTY0ZTUwMDAKeGM6IGRldGFpbDogZWxm
X2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHg3ZmE0NTY0ZTUwMDAgLT4gMHg3ZmE0NTc2MjYwMDAK
eGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDEgYXQgMHg3ZmE0NTc2ZTUwMDAgLT4g
MHg3ZmE0NTc4MDEwZjAKeGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDIgYXQgMHg3
ZmE0NTc4MDIwMDAgLT4gMHg3ZmE0NTc4MTZjODAKeGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5
OiBwaGRyIDMgYXQgMHg3ZmE0NTc4MTcwMDAgLT4gMHg3ZmE0NTc5ODEwMDAKZG9tYWluYnVpbGRl
cjogZGV0YWlsOiB4Y19kb21fYWxsb2Nfc2VnbWVudDogICByYW1kaXNrICAgICAgOiAweGZmZmZm
ZmZmODJhMzIwMDAgLT4gMHhmZmZmZmZmZjgyZWI3MDAwICAocGZuIDB4MmEzMiArIDB4NDg1IHBh
Z2VzKQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50OiBk
b21VIG1hcHBpbmc6IHBmbiAweDJhMzIrMHg0ODUgYXQgMHg3ZmE0NTYwNjAwMDAKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2Nfc2VnbWVudDogICBwaHlzMm1hY2ggICAgOiAweGZm
ZmZmZmZmODJlYjcwMDAgLT4gMHhmZmZmZmZmZjgzMmI3MDAwICAocGZuIDB4MmViNyArIDB4NDAw
IHBhZ2VzKQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50
OiBkb21VIG1hcHBpbmc6IHBmbiAweDJlYjcrMHg0MDAgYXQgMHg3ZmE0NTVjNjAwMDAKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2NfcGFnZSAgIDogICBzdGFydCBpbmZvICAgOiAw
eGZmZmZmZmZmODMyYjcwMDAgKHBmbiAweDMyYjcpCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX2FsbG9jX3BhZ2UgICA6ICAgeGVuc3RvcmUgICAgIDogMHhmZmZmZmZmZjgzMmI4MDAwIChw
Zm4gMHgzMmI4KQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY19wYWdlICAgOiAg
IGNvbnNvbGUgICAgICA6IDB4ZmZmZmZmZmY4MzJiOTAwMCAocGZuIDB4MzJiOSkKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiBucl9wYWdlX3RhYmxlczogMHgwMDAwZmZmZmZmZmZmZmZmLzQ4OiAweGZm
ZmYwMDAwMDAwMDAwMDAgLT4gMHhmZmZmZmZmZmZmZmZmZmZmLCAxIHRhYmxlKHMpCmRvbWFpbmJ1
aWxkZXI6IGRldGFpbDogbnJfcGFnZV90YWJsZXM6IDB4MDAwMDAwN2ZmZmZmZmZmZi8zOTogMHhm
ZmZmZmY4MDAwMDAwMDAwIC0+IDB4ZmZmZmZmZmZmZmZmZmZmZiwgMSB0YWJsZShzKQpkb21haW5i
dWlsZGVyOiBkZXRhaWw6IG5yX3BhZ2VfdGFibGVzOiAweDAwMDAwMDAwM2ZmZmZmZmYvMzA6IDB4
ZmZmZmZmZmY4MDAwMDAwMCAtPiAweGZmZmZmZmZmYmZmZmZmZmYsIDEgdGFibGUocykKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiBucl9wYWdlX3RhYmxlczogMHgwMDAwMDAwMDAwMWZmZmZmLzIxOiAw
eGZmZmZmZmZmODAwMDAwMDAgLT4gMHhmZmZmZmZmZjgzM2ZmZmZmLCAyNiB0YWJsZShzKQpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY19zZWdtZW50OiAgIHBhZ2UgdGFibGVzICA6
IDB4ZmZmZmZmZmY4MzJiYTAwMCAtPiAweGZmZmZmZmZmODMyZDcwMDAgIChwZm4gMHgzMmJhICsg
MHgxZCBwYWdlcykKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcGZuX3RvX3B0cl9yZXRj
b3VudDogZG9tVSBtYXBwaW5nOiBwZm4gMHgzMmJhKzB4MWQgYXQgMHg3ZmE0NWNkMDcwMDAKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2NfcGFnZSAgIDogICBib290IHN0YWNrICAg
OiAweGZmZmZmZmZmODMyZDcwMDAgKHBmbiAweDMyZDcpCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
eGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfYWxsb2NfZW5kIDogMHhmZmZmZmZmZjgzMmQ4MDAw
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfcGd0YWJf
ZW5kIDogMHhmZmZmZmZmZjgzNDAwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jv
b3RfaW1hZ2U6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGFyY2hfc2V0dXBfYm9vdGVh
cmx5OiBkb2luZyBub3RoaW5nCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2NvbXBhdF9j
aGVjazogc3VwcG9ydGVkIGd1ZXN0IHR5cGU6IHhlbi0zLjAteDg2XzY0IDw9IG1hdGNoZXMKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBwb3J0ZWQgZ3Vlc3Qg
dHlwZTogeGVuLTMuMC14ODZfMzJwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3VwZGF0
ZV9ndWVzdF9wMm06IGRzdCA2NGJpdCwgcGFnZXMgMHg4MDAwMApkb21haW5idWlsZGVyOiBkZXRh
aWw6IGNsZWFyX3BhZ2U6IHBmbiAweDMyYjksIG1mbiAweDIyYWYzMwpkb21haW5idWlsZGVyOiBk
ZXRhaWw6IGNsZWFyX3BhZ2U6IHBmbiAweDMyYjgsIG1mbiAweDIyYWYzNApkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50OiBkb21VIG1hcHBpbmc6IHBmbiAw
eDMyYjcrMHgxIGF0IDB4N2ZhNDVjZDcwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc3RhcnRf
aW5mb194ODZfNjQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHNldHVwX2h5cGVyY2Fs
bF9wYWdlOiB2YWRkcj0weGZmZmZmZmZmODEwMDEwMDAgcGZuPTB4MTAwMQpkb21haW5idWlsZGVy
OiBkZXRhaWw6IGRvbWFpbiBidWlsZGVyIG1lbW9yeSBmb290cHJpbnQKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiAgICBhbGxvY2F0ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBtYWxsb2Mg
ICAgICAgICAgICAgOiAyOTUxMCBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGFub24g
bW1hcCAgICAgICAgICA6IDAgYnl0ZXMKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBtYXBwZWQK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBmaWxlIG1tYXAgICAgICAgICAgOiAxMjgwOSBr
Qgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGRvbVUgbW1hcCAgICAgICAgICA6IDM0IE1C
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogYXJjaF9zZXR1cF9ib290bGF0ZTogc2hhcmVkX2luZm86
IHBmbiAweDAsIG1mbiAweGI3NjhkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc2hhcmVkX2luZm9f
eDg2XzY0OiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB2Y3B1X3g4Nl82NDogY2FsbGVk
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogdmNwdV94ODZfNjQ6IGNyMzogcGZuIDB4MzJiYSBtZm4g
MHgyMmFmMzIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hfdm06IGNhbGxlZCwgY3R4dD0w
eDdmYTQ1Y2Q3MTAwNApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yZWxlYXNlOiBjYWxs
ZWQKbGlieGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyNTE6bGlieGxfX2RldmljZV9kaXNrX3Nl
dF9iYWNrZW5kOiBEaXNrIHZkZXY9eHZkYTEgc3BlYy5iYWNrZW5kPXFkaXNrCmxpYnhsOiBlcnJv
cjogbGlieGxfZGV2aWNlLmM6OTY5OmRldmljZV9ob3RwbHVnOiB1bmFibGUgdG8gZ2V0IGRvbWFp
biBpZCwgZXJyb3I6IC0zCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2
X3hzd2F0Y2hfZGVyZWdpc3Rlcjogd2F0Y2ggdz0weDYyOWE4MDogZGVyZWdpc3RlciB1bnJlZ2lz
dGVyZWQKbGlieGw6IGRlYnVnOiBsaWJ4bF9kbS5jOjEzMDM6bGlieGxfX3NwYXduX2xvY2FsX2Rt
OiBTcGF3bmluZyBkZXZpY2UtbW9kZWwgL3Vzci9saWIveGVuL2Jpbi9xZW11LXN5c3RlbS1pMzg2
IHdpdGggYXJndW1lbnRzOgpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bh
d25fbG9jYWxfZG06ICAgL3Vzci9saWIveGVuL2Jpbi9xZW11LXN5c3RlbS1pMzg2CmxpYnhsOiBk
ZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAteGVuLWRvbWlk
CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICA2
CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAt
Y2hhcmRldgpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxf
ZG06ICAgc29ja2V0LGlkPWxpYnhsLWNtZCxwYXRoPS92YXIvcnVuL3hlbi9xbXAtbGlieGwtNixz
ZXJ2ZXIsbm93YWl0CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9s
b2NhbF9kbTogICAtbW9uCmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3
bl9sb2NhbF9kbTogICBjaGFyZGV2PWxpYnhsLWNtZCxtb2RlPWNvbnRyb2wKbGlieGw6IGRlYnVn
OiBsaWJ4bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xvY2FsX2RtOiAgIC1ub2RlZmF1bHRzCmxp
YnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAteGVu
LWF0dGFjaApsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxf
ZG06ICAgLW5hbWUKbGlieGw6IGRlYnVnOiBsaWJ4bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xv
Y2FsX2RtOiAgIHRlc3QxX3N0YWJsZQpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4
bF9fc3Bhd25fbG9jYWxfZG06ICAgLW5vZ3JhcGhpYwpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6
MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxfZG06ICAgLW1hY2hpbmUKbGlieGw6IGRlYnVnOiBsaWJ4
bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xvY2FsX2RtOiAgIHhlbnB2CmxpYnhsOiBkZWJ1Zzog
bGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAtbQpsaWJ4bDogZGVidWc6
IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxfZG06ICAgMjA0OQpsaWJ4bDogZGVi
dWc6IGxpYnhsX2V2ZW50LmM6NTU5OmxpYnhsX19ldl94c3dhdGNoX3JlZ2lzdGVyOiB3YXRjaCB3
PTB4NjI4YTkwIHdwYXRoPS9sb2NhbC9kb21haW4vMC9kZXZpY2UtbW9kZWwvNi9zdGF0ZSB0b2tl
bj0zLzA6IHJlZ2lzdGVyIHNsb3RudW09MwpsaWJ4bDogZGVidWc6IGxpYnhsX2NyZWF0ZS5jOjEz
Mzk6ZG9fZG9tYWluX2NyZWF0ZTogYW8gMHg2MmE2NDA6IGlucHJvZ3Jlc3M6IHBvbGxlcj0weDYy
OWY5MCwgZmxhZ3M9aQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NTAzOndhdGNoZmRfY2Fs
bGJhY2s6IHdhdGNoIHc9MHg2MjhhOTAgd3BhdGg9L2xvY2FsL2RvbWFpbi8wL2RldmljZS1tb2Rl
bC82L3N0YXRlIHRva2VuPTMvMDogZXZlbnQgZXBhdGg9L2xvY2FsL2RvbWFpbi8wL2RldmljZS1t
b2RlbC82L3N0YXRlCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo1MDM6d2F0Y2hmZF9jYWxs
YmFjazogd2F0Y2ggdz0weDYyOGE5MCB3cGF0aD0vbG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVs
LzYvc3RhdGUgdG9rZW49My8wOiBldmVudCBlcGF0aD0vbG9jYWwvZG9tYWluLzAvZGV2aWNlLW1v
ZGVsLzYvc3RhdGUKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjU5NTpsaWJ4bF9fZXZfeHN3
YXRjaF9kZXJlZ2lzdGVyOiB3YXRjaCB3PTB4NjI4YTkwIHdwYXRoPS9sb2NhbC9kb21haW4vMC9k
ZXZpY2UtbW9kZWwvNi9zdGF0ZSB0b2tlbj0zLzA6IGRlcmVnaXN0ZXIgc2xvdG51bT0zCmxpYnhs
OiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2X3hzd2F0Y2hfZGVyZWdpc3Rlcjog
d2F0Y2ggdz0weDYyOGE5MDogZGVyZWdpc3RlciB1bnJlZ2lzdGVyZWQKbGlieGw6IGRlYnVnOiBs
aWJ4bF9xbXAuYzo2OTY6bGlieGxfX3FtcF9pbml0aWFsaXplOiBjb25uZWN0ZWQgdG8gL3Zhci9y
dW4veGVuL3FtcC1saWJ4bC02CmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6Mjk2OnFtcF9oYW5k
bGVfcmVzcG9uc2U6IG1lc3NhZ2UgdHlwZTogcW1wCmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6
NTQ2OnFtcF9zZW5kX3ByZXBhcmU6IG5leHQgcW1wIGNvbW1hbmQ6ICd7CiAgICAiZXhlY3V0ZSI6
ICJxbXBfY2FwYWJpbGl0aWVzIiwKICAgICJpZCI6IDEKfQonCmxpYnhsOiBkZWJ1ZzogbGlieGxf
cW1wLmM6Mjk2OnFtcF9oYW5kbGVfcmVzcG9uc2U6IG1lc3NhZ2UgdHlwZTogcmV0dXJuCmxpYnhs
OiBkZWJ1ZzogbGlieGxfcW1wLmM6NTQ2OnFtcF9zZW5kX3ByZXBhcmU6IG5leHQgcW1wIGNvbW1h
bmQ6ICd7CiAgICAiZXhlY3V0ZSI6ICJxdWVyeS1jaGFyZGV2IiwKICAgICJpZCI6IDIKfQonCmxp
YnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6Mjk2OnFtcF9oYW5kbGVfcmVzcG9uc2U6IG1lc3NhZ2Ug
dHlwZTogcmV0dXJuCmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6NTQ2OnFtcF9zZW5kX3ByZXBh
cmU6IG5leHQgcW1wIGNvbW1hbmQ6ICd7CiAgICAiZXhlY3V0ZSI6ICJxdWVyeS12bmMiLAogICAg
ImlkIjogMwp9CicKbGlieGw6IGRlYnVnOiBsaWJ4bF9xbXAuYzoyOTY6cW1wX2hhbmRsZV9yZXNw
b25zZTogbWVzc2FnZSB0eXBlOiByZXR1cm4KbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjU1
OTpsaWJ4bF9fZXZfeHN3YXRjaF9yZWdpc3Rlcjogd2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9j
YWwvZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0YXRlIHRva2VuPTMvMTogcmVnaXN0ZXIgc2xv
dG51bT0zCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo1MDM6d2F0Y2hmZF9jYWxsYmFjazog
d2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9jYWwvZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0
YXRlIHRva2VuPTMvMTogZXZlbnQgZXBhdGg9L2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlmLzYv
MC9zdGF0ZQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NjQ2OmRldnN0YXRlX3dhdGNoX2Nh
bGxiYWNrOiBiYWNrZW5kIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgd2Fu
dGVkIHN0YXRlIDIgc3RpbGwgd2FpdGluZyBzdGF0ZSAxCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZl
bnQuYzo1MDM6d2F0Y2hmZF9jYWxsYmFjazogd2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9jYWwv
ZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0YXRlIHRva2VuPTMvMTogZXZlbnQgZXBhdGg9L2xv
Y2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlmLzYvMC9zdGF0ZQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2
ZW50LmM6NjQyOmRldnN0YXRlX3dhdGNoX2NhbGxiYWNrOiBiYWNrZW5kIC9sb2NhbC9kb21haW4v
MC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgd2FudGVkIHN0YXRlIDIgb2sKbGlieGw6IGRlYnVnOiBs
aWJ4bF9ldmVudC5jOjU5NTpsaWJ4bF9fZXZfeHN3YXRjaF9kZXJlZ2lzdGVyOiB3YXRjaCB3PTB4
NjJhY2Q4IHdwYXRoPS9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgdG9rZW49
My8xOiBkZXJlZ2lzdGVyIHNsb3RudW09MwpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NjA3
OmxpYnhsX19ldl94c3dhdGNoX2RlcmVnaXN0ZXI6IHdhdGNoIHc9MHg2MmFjZDg6IGRlcmVnaXN0
ZXIgdW5yZWdpc3RlcmVkCmxpYnhsOiBlcnJvcjogbGlieGxfZGV2aWNlLmM6OTY5OmRldmljZV9o
b3RwbHVnOiB1bmFibGUgdG8gZ2V0IGRvbWFpbiBpZCwgZXJyb3I6IC0zCmxpYnhsOiBkZWJ1Zzog
bGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2X3hzd2F0Y2hfZGVyZWdpc3Rlcjogd2F0Y2ggdz0w
eDYyYWQ2MDogZGVyZWdpc3RlciB1bnJlZ2lzdGVyZWQKbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVh
dGUuYzoxMjA2OmRvbWNyZWF0ZV9hdHRhY2hfdnRwbXM6IHVuYWJsZSB0byBhZGQgbmljIGRldmlj
ZXMKbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVhdGUuYzoxMjc5OmRvbWNyZWF0ZV9jb21wbGV0ZTog
ZG9tYWluIGNyZWF0aW9uIGZhaWxlZCwgbm90IGRvaW5nIHJlbW92YWwgb2YgeHMgZW50cmllcwps
aWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6MTU2MDpsaWJ4bF9fYW9fY29tcGxldGU6IGFvIDB4
NjJhNjQwOiBjb21wbGV0ZSwgcmM9LTMKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjE1MzI6
bGlieGxfX2FvX19kZXN0cm95OiBhbyAweDYyYTY0MDogZGVzdHJveQp4YzogZGVidWc6IGh5cGVy
Y2FsbCBidWZmZXI6IHRvdGFsIGFsbG9jYXRpb25zOjIxNyB0b3RhbCByZWxlYXNlczoyMTcKeGM6
IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiBjdXJyZW50IGFsbG9jYXRpb25zOjAgbWF4aW11bSBh
bGxvY2F0aW9uczo0CnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY2FjaGUgY3VycmVudCBz
aXplOjQKeGM6IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiBjYWNoZSBoaXRzOjIwNiBtaXNzZXM6
NCB0b29iaWc6NwpQYXJzaW5nIGNvbmZpZyBmcm9tIHRlc3QxX3N0YWJsZQo=
--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a11c29f8ab9218804f0358452--


From xen-devel-bounces@lists.xen.org Sat Jan 18 02:36:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 02:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4LlT-00014n-Rf; Sat, 18 Jan 2014 02:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W4LlR-00014i-OZ
	for xen-devel@lists.xen.org; Sat, 18 Jan 2014 02:36:02 +0000
Received: from [85.158.139.211:55762] by server-4.bemta-5.messagelabs.com id
	C9/A8-26791-098E9D25; Sat, 18 Jan 2014 02:36:00 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390012558!7783731!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32611 invoked from network); 18 Jan 2014 02:35:59 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 02:35:59 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so4244124qcv.19
	for <xen-devel@lists.xen.org>; Fri, 17 Jan 2014 18:35:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=JWS977IzwPwnUU0PXmWGZO89XsZ4p+G48Zov5NZBJH8=;
	b=TddYUhmgentqAFl2XYsTt/iJ+vNdWg2YUpso2KGRrM1G94jH1T/4o2O33mOH+2smZo
	tSKEYpipBs1TL5GFpBhlkj7wWmy0/yXNj+Gmy+bOrQtLknbfwTBpxJ3rFLlhyCnnFgMA
	dPOiFI+CVlQr0i3/ndK038ChpxKCQs37jUe9FY2v/1xmUKYrvcC9SUcktizJ0JeqVP7Y
	EzIM3TLsAGJhU0okRttlrevGRNHaQLY8LpgYf9Ivdcaiz13KlTa953e9/EVkiB4efjru
	oriROYOcFU18RzIecm+LpNVPB8WF/jS30RTQtd6DrQA+zACOvYAEvlOMlXuDZTNuZ/hp
	V76A==
MIME-Version: 1.0
X-Received: by 10.224.124.133 with SMTP id u5mr8788026qar.79.1390012557469;
	Fri, 17 Jan 2014 18:35:57 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Fri, 17 Jan 2014 18:35:56 -0800 (PST)
In-Reply-To: <52D9309D.1030808@citrix.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
	<1389956491.6697.64.camel@kazak.uk.xensource.com>
	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
	<1389959942.6697.87.camel@kazak.uk.xensource.com>
	<52D9309D.1030808@citrix.com>
Date: Sat, 18 Jan 2014 10:35:56 +0800
Message-ID: <CAF1ZMEe6U-PrSo2S_vqg0ukOHM4tag5wE52scd3+M9tOhGEC0A@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/mixed; boundary=001a11c29f8ab9218804f0358452
Cc: Eugene Fedotov <e.fedotov@samsung.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Fri, Jan 17, 2014 at 9:31 PM, Roger Pau Monn=C3=A9 <roger.pau@citrix.com=
> wrote:
> On 17/01/14 12:59, Ian Campbell wrote:
>> On Fri, 2014-01-17 at 19:43 +0800, Dennis Lan (dlan) wrote:
>>> On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com>=
 wrote:
>>>
>>>
>>>> vif-bridge and the common scripts which it includes would be a good
>>>> start. Just an echo at the top to confirm that the script is running
>>>> would be useful.
>>>>
>>>> I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debuggin=
g
>>>> when these scripts were launched by udev, but now that libxl runs them
>>>> you may find that the debug from the script comes out on stdout/err of
>>>> the xl create command so perhaps that isn't needed any more.
>>>>
>>>>> headless here.
>>>>
>>>> That shouldn't matter, you are looking for output from userspace
>>>> scripts, not kernel or hypervisor logs.
>>>>
>>>> Ian.
>>>>
>>>
>>> Hi Ian
>>> I suspect for 4.4.0, the network devices even was not detected.
>>> this is output from 4.3.1, notes follow lines.
>>>
>>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>>> script: /etc/xen/scripts/vif-bridge online
>>> dlan: vif-bridge start
>>> dlan: vif-common start
>>>
>>> dlan: vif-bridge start -> output from vif-bridge script
>>> dlan: vif-common start -> output from vif-common.sh script
>>
>> So these are the 4.3 logs? Have you tried 4.4 and found that it doesn't
>> produce the same output?
>>
>> (please can you try and set the text type to "preformatted" for the logs
>> -- having them wrapped makes them very hard to read).
>>
>> The lack of
>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:=
 /etc/xen/scripts/vif-bridge online
>> in your original logs is a bit concerning.
>>
>> Roger -- any ideas?
>
> My first guess would be that libxl__get_domid failed, however I'm not
> able to reproduce this. I'm attaching a patch to add an error message
> if libxl__get_domid fails, and also prevent the removal of xenstore
> entries so we can see what's going on. Dennis/Eugene, could you try the
> attached patch and send the output of xl -vvv create <...> and
> xenstore-ls -fp after the failed creation?
>
> ---
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..03f9fe9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -1296,6 +1296,9 @@ static void domcreate_complete(libxl__egc *egc,
>          rc =3D xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_con=
fig->b_info.exec_ssidref);
>
>      if (rc) {
> +        LOG(ERROR, "domain creation failed, not doing removal of xs entr=
ies");
> +        dcs->callback(egc, dcs, rc, -1);
> +        return;
>          if (dcs->guest_domid) {
>              dcs->dds.ao =3D ao;
>              dcs->dds.domid =3D dcs->guest_domid;
> diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
> index ba7d100..56d8162 100644
> --- a/tools/libxl/libxl_device.c
> +++ b/tools/libxl/libxl_device.c
> @@ -965,7 +965,10 @@ static void device_hotplug(libxl__egc *egc, libxl__a=
o_device *aodev)
>       * hotplug scripts
>       */
>      rc =3D libxl__get_domid(gc, &domid);
> -    if (rc) goto out;
> +    if (rc) {
> +        LOG(ERROR, "unable to get domain id, error: %d", rc);
> +        goto out;
> +    }
>      if (aodev->dev->backend_domid != domid) {
>          if (aodev->action != LIBXL__DEVICE_ACTION_REMOVE)
>              goto out;
>
With this patch applied, I got the following error; see the attached file for more info.

ofire configs # xl create -c test1_stable
Parsing config from test1_stable
libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
id, error: -3
libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
id, error: -3
libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to
add nic devices
libxl: error: libxl_create.c:1279:domcreate_complete: domain creation
failed, not doing removal of xs entries

--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset=US-ASCII; name="log.txt"
Content-Disposition: attachment; filename="log.txt"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hqk9klo40

bGlieGw6IGRlYnVnOiBsaWJ4bF9jcmVhdGUuYzoxMzI1OmRvX2RvbWFpbl9jcmVhdGU6IGFvIDB4
NjJhNjQwOiBjcmVhdGU6IGhvdz0obmlsKSBjYWxsYmFjaz0obmlsKSBwb2xsZXI9MHg2MjlmOTAK
bGlieGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyNTE6bGlieGxfX2RldmljZV9kaXNrX3NldF9i
YWNrZW5kOiBEaXNrIHZkZXY9eHZkYTEgc3BlYy5iYWNrZW5kPXVua25vd24KbGlieGw6IGRlYnVn
OiBsaWJ4bF9kZXZpY2UuYzoxOTc6ZGlza190cnlfYmFja2VuZDogRGlzayB2ZGV2PXh2ZGExLCBi
YWNrZW5kIHBoeSB1bnN1aXRhYmxlIGFzIHBoeXMgcGF0aCBub3QgYSBibG9jayBkZXZpY2UKbGli
eGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyODY6bGlieGxfX2RldmljZV9kaXNrX3NldF9iYWNr
ZW5kOiBEaXNrIHZkZXY9eHZkYTEsIHVzaW5nIGJhY2tlbmQgcWRpc2sKbGlieGw6IGRlYnVnOiBs
aWJ4bF9jcmVhdGUuYzo3Nzc6aW5pdGlhdGVfZG9tYWluX2NyZWF0ZTogcnVubmluZyBib290bG9h
ZGVyCmxpYnhsOiBkZWJ1ZzogbGlieGxfYm9vdGxvYWRlci5jOjMyNzpsaWJ4bF9fYm9vdGxvYWRl
cl9ydW46IG5vIGJvb3Rsb2FkZXIgY29uZmlndXJlZCwgdXNpbmcgdXNlciBzdXBwbGllZCBrZXJu
ZWwKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjYwNzpsaWJ4bF9fZXZfeHN3YXRjaF9kZXJl
Z2lzdGVyOiB3YXRjaCB3PTB4NjI4ODU4OiBkZXJlZ2lzdGVyIHVucmVnaXN0ZXJlZApsaWJ4bDog
ZGVidWc6IGxpYnhsX251bWEuYzo0NzU6bGlieGxfX2dldF9udW1hX2NhbmRpZGF0ZTogTmV3IGJl
c3QgTlVNQSBwbGFjZW1lbnQgY2FuZGlkYXRlIGZvdW5kOiBucl9ub2Rlcz0xLCBucl9jcHVzPTQs
IG5yX3ZjcHVzPTYsIGZyZWVfbWVta2I9NDMzNgpsaWJ4bDogZGV0YWlsOiBsaWJ4bF9kb20uYzox
OTU6bnVtYV9wbGFjZV9kb21haW46IE5VTUEgcGxhY2VtZW50IGNhbmRpZGF0ZSB3aXRoIDEgbm9k
ZXMsIDQgY3B1cyBhbmQgNDMzNiBLQiBmcmVlIHNlbGVjdGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2FsbG9jYXRlOiBjbWRsaW5lPSJyb290PS9kZXYveHZkYTEgcm8gcm9vdGZzdHlw
ZT1leHQ0IGNvbnNvbGU9aHZjMCIsIGZlYXR1cmVzPSIobnVsbCkiCmxpYnhsOiBkZWJ1ZzogbGli
eGxfZG9tLmM6MzU3OmxpYnhsX19idWlsZF9wdjogcHYga2VybmVsIG1hcHBlZCAwIHBhdGgga2Vy
bmVsL2tlcm5lbC1nZW5rZXJuZWwteDg2XzY0LTMuMTIuNgpkb21haW5idWlsZGVyOiBkZXRhaWw6
IHhjX2RvbV9rZXJuZWxfZmlsZTogZmlsZW5hbWU9Imtlcm5lbC9rZXJuZWwtZ2Vua2VybmVsLXg4
Nl82NC0zLjEyLjYiCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvY19maWxlbWFw
ICAgIDogODE4NSBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yYW1kaXNrX2ZpbGU6
IGZpbGVuYW1lPSJrZXJuZWwvaW5pdHJhbWZzLWdlbmtlcm5lbC14ODZfNjQtMy4xMi42Igpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9tYWxsb2NfZmlsZW1hcCAgICA6IDQ2MjQga0IKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYm9vdF94ZW5faW5pdDogdmVyIDQuNCwgY2FwcyB4
ZW4tMy4wLXg4Nl82NCB4ZW4tMy4wLXg4Nl8zMnAgCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX3BhcnNlX2ltYWdlOiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fZmlu
ZF9sb2FkZXI6IHRyeWluZyBtdWx0aWJvb3QtYmluYXJ5IGxvYWRlciAuLi4gCmRvbWFpbmJ1aWxk
ZXI6IGRldGFpbDogbG9hZGVyIHByb2JlIGZhaWxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhj
X2RvbV9maW5kX2xvYWRlcjogdHJ5aW5nIExpbnV4IGJ6SW1hZ2UgbG9hZGVyIC4uLiAKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fbWFsbG9jICAgICAgICAgICAgOiAyNTIwMiBrQgpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9kb19ndW56aXA6IHVuemlwIG9rLCAweDdmNTU3YyAt
PiAweDE4OWNhMDAKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsb2FkZXIgcHJvYmUgT0sKeGM6IGRl
dGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4MTE0
MTAwMAp4YzogZGV0YWlsOiBlbGZfcGFyc2VfYmluYXJ5OiBwaGRyOiBwYWRkcj0weDIyMDAwMDAg
bWVtc3o9MHgxMWMwZjAKeGM6IGRldGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFkZHI9
MHgyMzFkMDAwIG1lbXN6PTB4MTRjODAKeGM6IGRldGFpbDogZWxmX3BhcnNlX2JpbmFyeTogcGhk
cjogcGFkZHI9MHgyMzMyMDAwIG1lbXN6PTB4NzAwMDAwCnhjOiBkZXRhaWw6IGVsZl9wYXJzZV9i
aW5hcnk6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MmEzMjAwMAp4YzogZGV0YWlsOiBlbGZfeGVu
X3BhcnNlX25vdGU6IEdVRVNUX09TID0gImxpbnV4Igp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNl
X25vdGU6IEdVRVNUX1ZFUlNJT04gPSAiMi42Igp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25v
dGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90
ZTogVklSVF9CQVNFID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFy
c2Vfbm90ZTogRU5UUlkgPSAweGZmZmZmZmZmODIzMzIxZTAKeGM6IGRldGFpbDogZWxmX3hlbl9w
YXJzZV9ub3RlOiBIWVBFUkNBTExfUEFHRSA9IDB4ZmZmZmZmZmY4MTAwMTAwMAp4YzogZGV0YWls
OiBlbGZfeGVuX3BhcnNlX25vdGU6IEZFQVRVUkVTID0gIiF3cml0YWJsZV9wYWdlX3RhYmxlc3xw
YWVfcGdkaXJfYWJvdmVfNGdiIgp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25vdGU6IFBBRV9N
T0RFID0gInllcyIKeGM6IGRldGFpbDogZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2Vu
ZXJpYyIKeGM6IGRldGFpbDogZWxmX3hlbl9wYXJzZV9ub3RlOiB1bmtub3duIHhlbiBlbGYgbm90
ZSAoMHhkKQp4YzogZGV0YWlsOiBlbGZfeGVuX3BhcnNlX25vdGU6IFNVU1BFTkRfQ0FOQ0VMID0g
MHgxCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90ZTogSFZfU1RBUlRfTE9XID0gMHhmZmZm
ODAwMDAwMDAwMDAwCnhjOiBkZXRhaWw6IGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFERFJfT0ZGU0VU
ID0gMHgwCnhjOiBkZXRhaWw6IGVsZl94ZW5fYWRkcl9jYWxjX2NoZWNrOiBhZGRyZXNzZXM6Cnhj
OiBkZXRhaWw6ICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBk
ZXRhaWw6ICAgICBlbGZfcGFkZHJfb2Zmc2V0ID0gMHgwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X29m
ZnNldCAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2tzdGFy
dCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2tlbmQgICAg
ICAgID0gMHhmZmZmZmZmZjgyYTMyMDAwCnhjOiBkZXRhaWw6ICAgICB2aXJ0X2VudHJ5ICAgICAg
ID0gMHhmZmZmZmZmZjgyMzMyMWUwCnhjOiBkZXRhaWw6ICAgICBwMm1fYmFzZSAgICAgICAgID0g
MHhmZmZmZmZmZmZmZmZmZmZmCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3BhcnNlX2Vs
Zl9rZXJuZWw6IHhlbi0zLjAteDg2XzY0OiAweGZmZmZmZmZmODEwMDAwMDAgLT4gMHhmZmZmZmZm
ZjgyYTMyMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21lbV9pbml0OiBtZW0gMjA0
OCBNQiwgcGFnZXMgMHg4MDAwMCBwYWdlcywgNGsgZWFjaApkb21haW5idWlsZGVyOiBkZXRhaWw6
IHhjX2RvbV9tZW1faW5pdDogMHg4MDAwMCBwYWdlcwpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhj
X2RvbV9ib290X21lbV9pbml0OiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4ODZfY29t
cGF0OiBndWVzdCB4ZW4tMy4wLXg4Nl82NCwgYWRkcmVzcyBzaXplIDY0CmRvbWFpbmJ1aWxkZXI6
IGRldGFpbDogeGNfZG9tX21hbGxvYyAgICAgICAgICAgIDogNDA5NiBrQgpkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9idWlsZF9pbWFnZTogY2FsbGVkCmRvbWFpbmJ1aWxkZXI6IGRldGFp
bDogeGNfZG9tX2FsbG9jX3NlZ21lbnQ6ICAga2VybmVsICAgICAgIDogMHhmZmZmZmZmZjgxMDAw
MDAwIC0+IDB4ZmZmZmZmZmY4MmEzMjAwMCAgKHBmbiAweDEwMDAgKyAweDFhMzIgcGFnZXMpCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX21hbGxvYyAgICAgICAgICAgIDogMTU3IGtCCmRv
bWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3Bmbl90b19wdHJfcmV0Y291bnQ6IGRvbVUgbWFw
cGluZzogcGZuIDB4MTAwMCsweDFhMzIgYXQgMHg3ZmE0NTY0ZTUwMDAKeGM6IGRldGFpbDogZWxm
X2xvYWRfYmluYXJ5OiBwaGRyIDAgYXQgMHg3ZmE0NTY0ZTUwMDAgLT4gMHg3ZmE0NTc2MjYwMDAK
eGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDEgYXQgMHg3ZmE0NTc2ZTUwMDAgLT4g
MHg3ZmE0NTc4MDEwZjAKeGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5OiBwaGRyIDIgYXQgMHg3
ZmE0NTc4MDIwMDAgLT4gMHg3ZmE0NTc4MTZjODAKeGM6IGRldGFpbDogZWxmX2xvYWRfYmluYXJ5
OiBwaGRyIDMgYXQgMHg3ZmE0NTc4MTcwMDAgLT4gMHg3ZmE0NTc5ODEwMDAKZG9tYWluYnVpbGRl
cjogZGV0YWlsOiB4Y19kb21fYWxsb2Nfc2VnbWVudDogICByYW1kaXNrICAgICAgOiAweGZmZmZm
ZmZmODJhMzIwMDAgLT4gMHhmZmZmZmZmZjgyZWI3MDAwICAocGZuIDB4MmEzMiArIDB4NDg1IHBh
Z2VzKQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50OiBk
b21VIG1hcHBpbmc6IHBmbiAweDJhMzIrMHg0ODUgYXQgMHg3ZmE0NTYwNjAwMDAKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2Nfc2VnbWVudDogICBwaHlzMm1hY2ggICAgOiAweGZm
ZmZmZmZmODJlYjcwMDAgLT4gMHhmZmZmZmZmZjgzMmI3MDAwICAocGZuIDB4MmViNyArIDB4NDAw
IHBhZ2VzKQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50
OiBkb21VIG1hcHBpbmc6IHBmbiAweDJlYjcrMHg0MDAgYXQgMHg3ZmE0NTVjNjAwMDAKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2NfcGFnZSAgIDogICBzdGFydCBpbmZvICAgOiAw
eGZmZmZmZmZmODMyYjcwMDAgKHBmbiAweDMyYjcpCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNf
ZG9tX2FsbG9jX3BhZ2UgICA6ICAgeGVuc3RvcmUgICAgIDogMHhmZmZmZmZmZjgzMmI4MDAwIChw
Zm4gMHgzMmI4KQpkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY19wYWdlICAgOiAg
IGNvbnNvbGUgICAgICA6IDB4ZmZmZmZmZmY4MzJiOTAwMCAocGZuIDB4MzJiOSkKZG9tYWluYnVp
bGRlcjogZGV0YWlsOiBucl9wYWdlX3RhYmxlczogMHgwMDAwZmZmZmZmZmZmZmZmLzQ4OiAweGZm
ZmYwMDAwMDAwMDAwMDAgLT4gMHhmZmZmZmZmZmZmZmZmZmZmLCAxIHRhYmxlKHMpCmRvbWFpbmJ1
aWxkZXI6IGRldGFpbDogbnJfcGFnZV90YWJsZXM6IDB4MDAwMDAwN2ZmZmZmZmZmZi8zOTogMHhm
ZmZmZmY4MDAwMDAwMDAwIC0+IDB4ZmZmZmZmZmZmZmZmZmZmZiwgMSB0YWJsZShzKQpkb21haW5i
dWlsZGVyOiBkZXRhaWw6IG5yX3BhZ2VfdGFibGVzOiAweDAwMDAwMDAwM2ZmZmZmZmYvMzA6IDB4
ZmZmZmZmZmY4MDAwMDAwMCAtPiAweGZmZmZmZmZmYmZmZmZmZmYsIDEgdGFibGUocykKZG9tYWlu
YnVpbGRlcjogZGV0YWlsOiBucl9wYWdlX3RhYmxlczogMHgwMDAwMDAwMDAwMWZmZmZmLzIxOiAw
eGZmZmZmZmZmODAwMDAwMDAgLT4gMHhmZmZmZmZmZjgzM2ZmZmZmLCAyNiB0YWJsZShzKQpkb21h
aW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9hbGxvY19zZWdtZW50OiAgIHBhZ2UgdGFibGVzICA6
IDB4ZmZmZmZmZmY4MzJiYTAwMCAtPiAweGZmZmZmZmZmODMyZDcwMDAgIChwZm4gMHgzMmJhICsg
MHgxZCBwYWdlcykKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fcGZuX3RvX3B0cl9yZXRj
b3VudDogZG9tVSBtYXBwaW5nOiBwZm4gMHgzMmJhKzB4MWQgYXQgMHg3ZmE0NWNkMDcwMDAKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fYWxsb2NfcGFnZSAgIDogICBib290IHN0YWNrICAg
OiAweGZmZmZmZmZmODMyZDcwMDAgKHBmbiAweDMyZDcpCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDog
eGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfYWxsb2NfZW5kIDogMHhmZmZmZmZmZjgzMmQ4MDAw
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2J1aWxkX2ltYWdlICA6IHZpcnRfcGd0YWJf
ZW5kIDogMHhmZmZmZmZmZjgzNDAwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2Jv
b3RfaW1hZ2U6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IGFyY2hfc2V0dXBfYm9vdGVh
cmx5OiBkb2luZyBub3RoaW5nCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX2NvbXBhdF9j
aGVjazogc3VwcG9ydGVkIGd1ZXN0IHR5cGU6IHhlbi0zLjAteDg2XzY0IDw9IG1hdGNoZXMKZG9t
YWluYnVpbGRlcjogZGV0YWlsOiB4Y19kb21fY29tcGF0X2NoZWNrOiBzdXBwb3J0ZWQgZ3Vlc3Qg
dHlwZTogeGVuLTMuMC14ODZfMzJwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogeGNfZG9tX3VwZGF0
ZV9ndWVzdF9wMm06IGRzdCA2NGJpdCwgcGFnZXMgMHg4MDAwMApkb21haW5idWlsZGVyOiBkZXRh
aWw6IGNsZWFyX3BhZ2U6IHBmbiAweDMyYjksIG1mbiAweDIyYWYzMwpkb21haW5idWlsZGVyOiBk
ZXRhaWw6IGNsZWFyX3BhZ2U6IHBmbiAweDMyYjgsIG1mbiAweDIyYWYzNApkb21haW5idWlsZGVy
OiBkZXRhaWw6IHhjX2RvbV9wZm5fdG9fcHRyX3JldGNvdW50OiBkb21VIG1hcHBpbmc6IHBmbiAw
eDMyYjcrMHgxIGF0IDB4N2ZhNDVjZDcwMDAwCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc3RhcnRf
aW5mb194ODZfNjQ6IGNhbGxlZApkb21haW5idWlsZGVyOiBkZXRhaWw6IHNldHVwX2h5cGVyY2Fs
bF9wYWdlOiB2YWRkcj0weGZmZmZmZmZmODEwMDEwMDAgcGZuPTB4MTAwMQpkb21haW5idWlsZGVy
OiBkZXRhaWw6IGRvbWFpbiBidWlsZGVyIG1lbW9yeSBmb290cHJpbnQKZG9tYWluYnVpbGRlcjog
ZGV0YWlsOiAgICBhbGxvY2F0ZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBtYWxsb2Mg
ICAgICAgICAgICAgOiAyOTUxMCBrQgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGFub24g
bW1hcCAgICAgICAgICA6IDAgYnl0ZXMKZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICBtYXBwZWQK
ZG9tYWluYnVpbGRlcjogZGV0YWlsOiAgICAgICBmaWxlIG1tYXAgICAgICAgICAgOiAxMjgwOSBr
Qgpkb21haW5idWlsZGVyOiBkZXRhaWw6ICAgICAgIGRvbVUgbW1hcCAgICAgICAgICA6IDM0IE1C
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogYXJjaF9zZXR1cF9ib290bGF0ZTogc2hhcmVkX2luZm86
IHBmbiAweDAsIG1mbiAweGI3NjhkCmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogc2hhcmVkX2luZm9f
eDg2XzY0OiBjYWxsZWQKZG9tYWluYnVpbGRlcjogZGV0YWlsOiB2Y3B1X3g4Nl82NDogY2FsbGVk
CmRvbWFpbmJ1aWxkZXI6IGRldGFpbDogdmNwdV94ODZfNjQ6IGNyMzogcGZuIDB4MzJiYSBtZm4g
MHgyMmFmMzIKZG9tYWluYnVpbGRlcjogZGV0YWlsOiBsYXVuY2hfdm06IGNhbGxlZCwgY3R4dD0w
eDdmYTQ1Y2Q3MTAwNApkb21haW5idWlsZGVyOiBkZXRhaWw6IHhjX2RvbV9yZWxlYXNlOiBjYWxs
ZWQKbGlieGw6IGRlYnVnOiBsaWJ4bF9kZXZpY2UuYzoyNTE6bGlieGxfX2RldmljZV9kaXNrX3Nl
dF9iYWNrZW5kOiBEaXNrIHZkZXY9eHZkYTEgc3BlYy5iYWNrZW5kPXFkaXNrCmxpYnhsOiBlcnJv
cjogbGlieGxfZGV2aWNlLmM6OTY5OmRldmljZV9ob3RwbHVnOiB1bmFibGUgdG8gZ2V0IGRvbWFp
biBpZCwgZXJyb3I6IC0zCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2
X3hzd2F0Y2hfZGVyZWdpc3Rlcjogd2F0Y2ggdz0weDYyOWE4MDogZGVyZWdpc3RlciB1bnJlZ2lz
dGVyZWQKbGlieGw6IGRlYnVnOiBsaWJ4bF9kbS5jOjEzMDM6bGlieGxfX3NwYXduX2xvY2FsX2Rt
OiBTcGF3bmluZyBkZXZpY2UtbW9kZWwgL3Vzci9saWIveGVuL2Jpbi9xZW11LXN5c3RlbS1pMzg2
IHdpdGggYXJndW1lbnRzOgpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bh
d25fbG9jYWxfZG06ICAgL3Vzci9saWIveGVuL2Jpbi9xZW11LXN5c3RlbS1pMzg2CmxpYnhsOiBk
ZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAteGVuLWRvbWlk
CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICA2
CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAt
Y2hhcmRldgpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxf
ZG06ICAgc29ja2V0LGlkPWxpYnhsLWNtZCxwYXRoPS92YXIvcnVuL3hlbi9xbXAtbGlieGwtNixz
ZXJ2ZXIsbm93YWl0CmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9s
b2NhbF9kbTogICAtbW9uCmxpYnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3
bl9sb2NhbF9kbTogICBjaGFyZGV2PWxpYnhsLWNtZCxtb2RlPWNvbnRyb2wKbGlieGw6IGRlYnVn
OiBsaWJ4bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xvY2FsX2RtOiAgIC1ub2RlZmF1bHRzCmxp
YnhsOiBkZWJ1ZzogbGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAteGVu
LWF0dGFjaApsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxf
ZG06ICAgLW5hbWUKbGlieGw6IGRlYnVnOiBsaWJ4bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xv
Y2FsX2RtOiAgIHRlc3QxX3N0YWJsZQpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6MTMwNTpsaWJ4
bF9fc3Bhd25fbG9jYWxfZG06ICAgLW5vZ3JhcGhpYwpsaWJ4bDogZGVidWc6IGxpYnhsX2RtLmM6
MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxfZG06ICAgLW1hY2hpbmUKbGlieGw6IGRlYnVnOiBsaWJ4
bF9kbS5jOjEzMDU6bGlieGxfX3NwYXduX2xvY2FsX2RtOiAgIHhlbnB2CmxpYnhsOiBkZWJ1Zzog
bGlieGxfZG0uYzoxMzA1OmxpYnhsX19zcGF3bl9sb2NhbF9kbTogICAtbQpsaWJ4bDogZGVidWc6
IGxpYnhsX2RtLmM6MTMwNTpsaWJ4bF9fc3Bhd25fbG9jYWxfZG06ICAgMjA0OQpsaWJ4bDogZGVi
dWc6IGxpYnhsX2V2ZW50LmM6NTU5OmxpYnhsX19ldl94c3dhdGNoX3JlZ2lzdGVyOiB3YXRjaCB3
PTB4NjI4YTkwIHdwYXRoPS9sb2NhbC9kb21haW4vMC9kZXZpY2UtbW9kZWwvNi9zdGF0ZSB0b2tl
bj0zLzA6IHJlZ2lzdGVyIHNsb3RudW09MwpsaWJ4bDogZGVidWc6IGxpYnhsX2NyZWF0ZS5jOjEz
Mzk6ZG9fZG9tYWluX2NyZWF0ZTogYW8gMHg2MmE2NDA6IGlucHJvZ3Jlc3M6IHBvbGxlcj0weDYy
OWY5MCwgZmxhZ3M9aQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NTAzOndhdGNoZmRfY2Fs
bGJhY2s6IHdhdGNoIHc9MHg2MjhhOTAgd3BhdGg9L2xvY2FsL2RvbWFpbi8wL2RldmljZS1tb2Rl
bC82L3N0YXRlIHRva2VuPTMvMDogZXZlbnQgZXBhdGg9L2xvY2FsL2RvbWFpbi8wL2RldmljZS1t
b2RlbC82L3N0YXRlCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo1MDM6d2F0Y2hmZF9jYWxs
YmFjazogd2F0Y2ggdz0weDYyOGE5MCB3cGF0aD0vbG9jYWwvZG9tYWluLzAvZGV2aWNlLW1vZGVs
LzYvc3RhdGUgdG9rZW49My8wOiBldmVudCBlcGF0aD0vbG9jYWwvZG9tYWluLzAvZGV2aWNlLW1v
ZGVsLzYvc3RhdGUKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjU5NTpsaWJ4bF9fZXZfeHN3
YXRjaF9kZXJlZ2lzdGVyOiB3YXRjaCB3PTB4NjI4YTkwIHdwYXRoPS9sb2NhbC9kb21haW4vMC9k
ZXZpY2UtbW9kZWwvNi9zdGF0ZSB0b2tlbj0zLzA6IGRlcmVnaXN0ZXIgc2xvdG51bT0zCmxpYnhs
OiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2X3hzd2F0Y2hfZGVyZWdpc3Rlcjog
d2F0Y2ggdz0weDYyOGE5MDogZGVyZWdpc3RlciB1bnJlZ2lzdGVyZWQKbGlieGw6IGRlYnVnOiBs
aWJ4bF9xbXAuYzo2OTY6bGlieGxfX3FtcF9pbml0aWFsaXplOiBjb25uZWN0ZWQgdG8gL3Zhci9y
dW4veGVuL3FtcC1saWJ4bC02CmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6Mjk2OnFtcF9oYW5k
bGVfcmVzcG9uc2U6IG1lc3NhZ2UgdHlwZTogcW1wCmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6
NTQ2OnFtcF9zZW5kX3ByZXBhcmU6IG5leHQgcW1wIGNvbW1hbmQ6ICd7CiAgICAiZXhlY3V0ZSI6
ICJxbXBfY2FwYWJpbGl0aWVzIiwKICAgICJpZCI6IDEKfQonCmxpYnhsOiBkZWJ1ZzogbGlieGxf
cW1wLmM6Mjk2OnFtcF9oYW5kbGVfcmVzcG9uc2U6IG1lc3NhZ2UgdHlwZTogcmV0dXJuCmxpYnhs
OiBkZWJ1ZzogbGlieGxfcW1wLmM6NTQ2OnFtcF9zZW5kX3ByZXBhcmU6IG5leHQgcW1wIGNvbW1h
bmQ6ICd7CiAgICAiZXhlY3V0ZSI6ICJxdWVyeS1jaGFyZGV2IiwKICAgICJpZCI6IDIKfQonCmxp
YnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6Mjk2OnFtcF9oYW5kbGVfcmVzcG9uc2U6IG1lc3NhZ2Ug
dHlwZTogcmV0dXJuCmxpYnhsOiBkZWJ1ZzogbGlieGxfcW1wLmM6NTQ2OnFtcF9zZW5kX3ByZXBh
cmU6IG5leHQgcW1wIGNvbW1hbmQ6ICd7CiAgICAiZXhlY3V0ZSI6ICJxdWVyeS12bmMiLAogICAg
ImlkIjogMwp9CicKbGlieGw6IGRlYnVnOiBsaWJ4bF9xbXAuYzoyOTY6cW1wX2hhbmRsZV9yZXNw
b25zZTogbWVzc2FnZSB0eXBlOiByZXR1cm4KbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjU1
OTpsaWJ4bF9fZXZfeHN3YXRjaF9yZWdpc3Rlcjogd2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9j
YWwvZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0YXRlIHRva2VuPTMvMTogcmVnaXN0ZXIgc2xv
dG51bT0zCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo1MDM6d2F0Y2hmZF9jYWxsYmFjazog
d2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9jYWwvZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0
YXRlIHRva2VuPTMvMTogZXZlbnQgZXBhdGg9L2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlmLzYv
MC9zdGF0ZQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NjQ2OmRldnN0YXRlX3dhdGNoX2Nh
bGxiYWNrOiBiYWNrZW5kIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgd2Fu
dGVkIHN0YXRlIDIgc3RpbGwgd2FpdGluZyBzdGF0ZSAxCmxpYnhsOiBkZWJ1ZzogbGlieGxfZXZl
bnQuYzo1MDM6d2F0Y2hmZF9jYWxsYmFjazogd2F0Y2ggdz0weDYyYWNkOCB3cGF0aD0vbG9jYWwv
ZG9tYWluLzAvYmFja2VuZC92aWYvNi8wL3N0YXRlIHRva2VuPTMvMTogZXZlbnQgZXBhdGg9L2xv
Y2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlmLzYvMC9zdGF0ZQpsaWJ4bDogZGVidWc6IGxpYnhsX2V2
ZW50LmM6NjQyOmRldnN0YXRlX3dhdGNoX2NhbGxiYWNrOiBiYWNrZW5kIC9sb2NhbC9kb21haW4v
MC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgd2FudGVkIHN0YXRlIDIgb2sKbGlieGw6IGRlYnVnOiBs
aWJ4bF9ldmVudC5jOjU5NTpsaWJ4bF9fZXZfeHN3YXRjaF9kZXJlZ2lzdGVyOiB3YXRjaCB3PTB4
NjJhY2Q4IHdwYXRoPS9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi82LzAvc3RhdGUgdG9rZW49
My8xOiBkZXJlZ2lzdGVyIHNsb3RudW09MwpsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6NjA3
OmxpYnhsX19ldl94c3dhdGNoX2RlcmVnaXN0ZXI6IHdhdGNoIHc9MHg2MmFjZDg6IGRlcmVnaXN0
ZXIgdW5yZWdpc3RlcmVkCmxpYnhsOiBlcnJvcjogbGlieGxfZGV2aWNlLmM6OTY5OmRldmljZV9o
b3RwbHVnOiB1bmFibGUgdG8gZ2V0IGRvbWFpbiBpZCwgZXJyb3I6IC0zCmxpYnhsOiBkZWJ1Zzog
bGlieGxfZXZlbnQuYzo2MDc6bGlieGxfX2V2X3hzd2F0Y2hfZGVyZWdpc3Rlcjogd2F0Y2ggdz0w
eDYyYWQ2MDogZGVyZWdpc3RlciB1bnJlZ2lzdGVyZWQKbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVh
dGUuYzoxMjA2OmRvbWNyZWF0ZV9hdHRhY2hfdnRwbXM6IHVuYWJsZSB0byBhZGQgbmljIGRldmlj
ZXMKbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVhdGUuYzoxMjc5OmRvbWNyZWF0ZV9jb21wbGV0ZTog
ZG9tYWluIGNyZWF0aW9uIGZhaWxlZCwgbm90IGRvaW5nIHJlbW92YWwgb2YgeHMgZW50cmllcwps
aWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6MTU2MDpsaWJ4bF9fYW9fY29tcGxldGU6IGFvIDB4
NjJhNjQwOiBjb21wbGV0ZSwgcmM9LTMKbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVudC5jOjE1MzI6
bGlieGxfX2FvX19kZXN0cm95OiBhbyAweDYyYTY0MDogZGVzdHJveQp4YzogZGVidWc6IGh5cGVy
Y2FsbCBidWZmZXI6IHRvdGFsIGFsbG9jYXRpb25zOjIxNyB0b3RhbCByZWxlYXNlczoyMTcKeGM6
IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiBjdXJyZW50IGFsbG9jYXRpb25zOjAgbWF4aW11bSBh
bGxvY2F0aW9uczo0CnhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY2FjaGUgY3VycmVudCBz
aXplOjQKeGM6IGRlYnVnOiBoeXBlcmNhbGwgYnVmZmVyOiBjYWNoZSBoaXRzOjIwNiBtaXNzZXM6
NCB0b29iaWc6NwpQYXJzaW5nIGNvbmZpZyBmcm9tIHRlc3QxX3N0YWJsZQo=
--001a11c29f8ab9218804f0358452
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a11c29f8ab9218804f0358452--


From xen-devel-bounces@lists.xen.org Sat Jan 18 06:19:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 06:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4PF9-0003aH-3f; Sat, 18 Jan 2014 06:18:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4PF6-0003UJ-W9
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 06:18:53 +0000
Received: from [193.109.254.147:15291] by server-13.bemta-14.messagelabs.com
	id 66/C4-19374-CCC1AD25; Sat, 18 Jan 2014 06:18:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390025929!10160043!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9720 invoked from network); 18 Jan 2014 06:18:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 06:18:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,678,1384300800"; d="scan'208";a="92005727"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Jan 2014 06:18:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 01:18:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4PF1-0008T7-JR;
	Sat, 18 Jan 2014 06:18:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4PF1-0005sG-HL;
	Sat, 18 Jan 2014 06:18:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24422-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 06:18:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24422: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24422 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24422/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 24368

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  12d9655b96c702c7a936cefeec27c7fd19ff6d09
baseline version:
 xen                  e131045033e7235d17a0d4be88e3a550cfcaf375

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ branch=xen-4.2-testing
+ revision=12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 12d9655b96c702c7a936cefeec27c7fd19ff6d09:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   e131045..12d9655  12d9655b96c702c7a936cefeec27c7fd19ff6d09 -> stable-4.2
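The trace above serializes pushes on the repos lock via `with-lock-ex -w` (from chiark-utils), which takes an exclusive lock on the lockfile and then execs the command. A rough analogue using util-linux `flock` (illustrative only; the lock path is hypothetical, and flock(1) uses flock(2) locks where with-lock-ex uses fcntl locks):

```shell
# Hold an exclusive lock on a lockfile for the duration of one command,
# blocking if another holder exists -- analogous to:
#   with-lock-ex -w $repos_lock ./ap-push ...
LOCK=/tmp/osstest-repos.lock
flock -x "$LOCK" -c 'echo "lock held: pushing revision"'
```

Because the command is exec'd while the descriptor holding the lock stays open, the lock is released automatically when the command exits, even on failure.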

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  12d9655b96c702c7a936cefeec27c7fd19ff6d09
baseline version:
 xen                  e131045033e7235d17a0d4be88e3a550cfcaf375

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ branch=xen-4.2-testing
+ revision=12d9655b96c702c7a936cefeec27c7fd19ff6d09
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 12d9655b96c702c7a936cefeec27c7fd19ff6d09:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   e131045..12d9655  12d9655b96c702c7a936cefeec27c7fd19ff6d09 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 07:42:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 07:42:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4QXz-0007lT-Bt; Sat, 18 Jan 2014 07:42:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4QXw-0007lO-VS
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 07:42:25 +0000
Received: from [85.158.137.68:23544] by server-14.bemta-3.messagelabs.com id
	69/F5-06105-F503AD25; Sat, 18 Jan 2014 07:42:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390030941!8726289!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4069 invoked from network); 18 Jan 2014 07:42:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 07:42:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,679,1384300800"; d="scan'208";a="94109993"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 07:42:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 02:42:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4QXp-0000Tp-2s;
	Sat, 18 Jan 2014 07:42:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4QXo-0007le-SZ;
	Sat, 18 Jan 2014 07:42:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24425-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 07:42:16 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24425: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24425 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24425/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           7 debian-install            fail REGR. vs. 24416
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 24416
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24416

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      7 debian-install            fail REGR. vs. 24416

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was out-dated and it's better to only have
    this information in one place (the Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest will access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point to the device's MMIO. But in the
    current logic, L0 doesn't distinguish whether the MMIO belongs to qemu or to a
    physical device when building the shadow EPT table. This is wrong. This patch
    sets up the correct shadow EPT table entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list, and
    T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result from the cmpxchg, which succeeded, but in the meantime prev
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 07:42:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 07:42:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4QXz-0007lT-Bt; Sat, 18 Jan 2014 07:42:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4QXw-0007lO-VS
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 07:42:25 +0000
Received: from [85.158.137.68:23544] by server-14.bemta-3.messagelabs.com id
	69/F5-06105-F503AD25; Sat, 18 Jan 2014 07:42:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390030941!8726289!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4069 invoked from network); 18 Jan 2014 07:42:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 07:42:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,679,1384300800"; d="scan'208";a="94109993"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 07:42:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 02:42:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4QXp-0000Tp-2s;
	Sat, 18 Jan 2014 07:42:17 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4QXo-0007le-SZ;
	Sat, 18 Jan 2014 07:42:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24425-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 07:42:16 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24425: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24425 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24425/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           7 debian-install            fail REGR. vs. 24416
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)  broken REGR. vs. 24416
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24416

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      7 debian-install            fail REGR. vs. 24416

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was out-dated and it's better to only have
    this information in one place (Tte Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    L2 guest will access the physical device directly(nested VT-d). For such access,
    Shadow EPT table should point to device's MMIO. But in current logic, L0 doesn't
    distinguish the MMIO whether from qemu or physical device when building shadow EPT table.
    This is wrong. This patch will setup the correct shadow EPT table for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    If there are two threads, T1 inserting on committed list; and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race use temporary variable for prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since that
    call is already a full memory barrier.  This wmb() is thus removed.
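
The fixed pattern can be sketched as a toy, single-threaded model of the
lock-free push (names are invented and a GCC `__sync` builtin stands in for
Xen's cmpxchgptr(); this is not the mctelem code itself): the head is read
once into a local, the element is linked before being published, and the
comparison uses only that local, so a concurrent change to the element's
fields can no longer be mistaken for a failed cmpxchg.

```c
#include <assert.h>
#include <stddef.h>

struct elem { struct elem *next; };

static struct elem *head;   /* list head, exchanged atomically */

static void xchg_head_push(struct elem *e)
{
    struct elem *old;
    do {
        old = head;      /* snapshot the head into a TEMPORARY variable */
        e->next = old;   /* link the element BEFORE the cmpxchg publishes it */
        /* compare against the local snapshot only; never re-read memory
         * that another thread may have modified after a successful swap */
    } while (__sync_val_compare_and_swap(&head, old, e) != old);
}
```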
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 08:03:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 08:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Qrb-0000qW-60; Sat, 18 Jan 2014 08:02:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W4QrZ-0000qR-Q4
	for xen-devel@lists.xen.org; Sat, 18 Jan 2014 08:02:42 +0000
Received: from [85.158.143.35:35437] by server-2.bemta-4.messagelabs.com id
	7F/66-11386-0253AD25; Sat, 18 Jan 2014 08:02:40 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390032158!697602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29491 invoked from network); 18 Jan 2014 08:02:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 08:02:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,679,1384300800"; d="scan'208";a="94111710"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 08:02:38 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Sat, 18 Jan 2014 03:02:37 -0500
Message-ID: <52DA351C.2030604@citrix.com>
Date: Sat, 18 Jan 2014 09:02:36 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
References: <52B181EB.6080303@samsung.com>	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>	<21169.39199.902511.563980@mariner.uk.xensource.com>	<52B1A1B9.70307@samsung.com>	<1387373404.28680.19.camel@kazak.uk.xensource.com>	<52B1B616.4010402@samsung.com>	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>	<1389951973.6697.47.camel@kazak.uk.xensource.com>	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>	<1389955476.6697.58.camel@kazak.uk.xensource.com>	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>	<1389956491.6697.64.camel@kazak.uk.xensource.com>	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>	<1389959942.6697.87.camel@kazak.uk.xensource.com>	<52D9309D.1030808@citrix.com>
	<CAF1ZMEe6U-PrSo2S_vqg0ukOHM4tag5wE52scd3+M9tOhGEC0A@mail.gmail.com>
In-Reply-To: <CAF1ZMEe6U-PrSo2S_vqg0ukOHM4tag5wE52scd3+M9tOhGEC0A@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Eugene Fedotov <e.fedotov@samsung.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/01/14 03:35, Dennis Lan (dlan) wrote:
> On Fri, Jan 17, 2014 at 9:31 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>> On 17/01/14 12:59, Ian Campbell wrote:
>>> On Fri, 2014-01-17 at 19:43 +0800, Dennis Lan (dlan) wrote:
>>>> On Fri, Jan 17, 2014 at 7:01 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>
>>>>
>>>>> vif-bridge and the common scripts which it includes would be a good
>>>>> start. Just an echo at the top to confirm that the script is running
>>>>> would be useful.
>>>>>
>>>>> I used to do "exec 1>/tmp/hotplug.log 2>&1" at the top to aid debugging
>>>>> when these scripts were launched by udev, but now that libxl runs them
>>>>> you may find that the debug from the script comes out on stdout/err of
>>>>> the xl create command so perhaps that isn't needed any more.
>>>>>
>>>>>> headless here.
>>>>>
>>>>> That shouldn't matter, you are looking for output from userspace
>>>>> scripts, not kernel or hypervisor logs.
>>>>>
>>>>> Ian.
>>>>>
>>>>
>>>> Hi Ian
>>>> I suspect that with 4.4.0 the network devices were not even detected.
>>>> This is the output from 4.3.1; my notes follow the lines.
>>>>
>>>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>>>> script: /etc/xen/scripts/vif-bridge online
>>>> dlan: vif-bridge start
>>>> dlan: vif-common start
>>>>
>>>> dlan: vif-bridge start -> output from vif-bridge script
>>>> dlan: vif-common start -> output from vif-common.sh script
>>>
>>> So these are the 4.3 logs? Have you tried 4.4 and found that it doesn't
>>> produce the same output?
>>>
>>> (please can you try and set the text type to "preformatted" for the logs
>>> -- having them wrapped makes them very hard to read).
>>>
>>> The lack of
>>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
>>> in your original logs is a bit concerning.
>>>
>>> Roger -- any ideas?
>>
>> My first guess would be that libxl__get_domid failed, however I'm not
>> able to reproduce this. I'm attaching a patch to add an error message
>> if libxl__get_domid fails, and also prevent the removal of xenstore
>> entries so we can see what's going on. Dennis/Eugene, could you try the
>> attached patch and send the output of xl -vvv create <...> and
>> xenstore-ls -fp after the failed creation?
>>
>> ---
>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> index a604cd8..03f9fe9 100644
>> --- a/tools/libxl/libxl_create.c
>> +++ b/tools/libxl/libxl_create.c
>> @@ -1296,6 +1296,9 @@ static void domcreate_complete(libxl__egc *egc,
>>          rc = xc_flask_relabel_domain(CTX->xch, dcs->guest_domid, d_config->b_info.exec_ssidref);
>>
>>      if (rc) {
>> +        LOG(ERROR, "domain creation failed, not doing removal of xs entries");
>> +        dcs->callback(egc, dcs, rc, -1);
>> +        return;
>>          if (dcs->guest_domid) {
>>              dcs->dds.ao = ao;
>>              dcs->dds.domid = dcs->guest_domid;
>> diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
>> index ba7d100..56d8162 100644
>> --- a/tools/libxl/libxl_device.c
>> +++ b/tools/libxl/libxl_device.c
>> @@ -965,7 +965,10 @@ static void device_hotplug(libxl__egc *egc, libxl__ao_device *aodev)
>>       * hotplug scripts
>>       */
>>      rc = libxl__get_domid(gc, &domid);
>> -    if (rc) goto out;
>> +    if (rc) {
>> +        LOG(ERROR, "unable to get domain id, error: %d", rc);
>> +        goto out;
>> +    }
>>      if (aodev->dev->backend_domid != domid) {
>>          if (aodev->action != LIBXL__DEVICE_ACTION_REMOVE)
>>              goto out;
>>              goto out;
>>
> with this patch applied, I got the following error; see the attached file for more info:
>

> ofire configs # xl create -c test1_stable
> Parsing config from test1_stable
> libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
> id, error: -3
> libxl: error: libxl_device.c:969:device_hotplug: unable to get domain
> id, error: -3
> libxl: error: libxl_create.c:1206:domcreate_attach_vtpms: unable to
> add nic devices
> libxl: error: libxl_create.c:1279:domcreate_complete: domain creation
> failed, not doing removal of xs entries
>


Hello,

Thanks for the log, could you please post the output of xenstore-ls -fp
after a failed domain creation?

My first guess is that your xencommons init script is outdated, could
you check if your xencommons init script has the following line:

113                 ${BINDIR}/xenstore-write "/local/domain/0/domid" 0

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 08:07:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 08:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Qvv-0000xp-T5; Sat, 18 Jan 2014 08:07:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4Qvu-0000xh-A2
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 08:07:10 +0000
Received: from [85.158.143.35:49448] by server-3.bemta-4.messagelabs.com id
	5D/2A-32360-D263AD25; Sat, 18 Jan 2014 08:07:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390032427!12527393!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9782 invoked from network); 18 Jan 2014 08:07:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 08:07:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,679,1384300800"; d="scan'208";a="92015814"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Jan 2014 08:07:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 03:07:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4Qvp-0000bf-0g;
	Sat, 18 Jan 2014 08:07:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4Qvo-0004ri-GZ;
	Sat, 18 Jan 2014 08:07:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24423-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 08:07:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24423: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24423 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24423/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 24345

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24345

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f
baseline version:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7261a3fc6e6101293cff232b9423dd41b140fc0f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:37:06 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return back out to the
    nested panic() call, which will maybe_reboot() on our behalf.  This requires a
    bit of return plumbing back up to kexec_crash().
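
The reentry detection can be sketched like this (a hypothetical
simplification using a GCC `__sync` builtin; the real code uses
test_and_set_bit() plus additional per-cpu state): recording *which* cpu
claimed the crash path lets a second arrival distinguish "another cpu won
the race" (park/spin) from "I am reentering my own crash path" (unwind back
to the nested panic()).

```c
#include <assert.h>

typedef enum { CRASH_CONTINUE, CRASH_PARK, CRASH_REENTRY } crash_rc;

static int crashing_cpu = -1;   /* -1: nobody is on the crash path yet */

/* First caller wins and continues the crash path.  A different cpu arriving
 * later lost the race and parks.  The SAME cpu arriving again has reentered
 * the non-reentrant crash path and must return to the nested panic(). */
static crash_rc one_cpu_only(int cpu)
{
    int old = __sync_val_compare_and_swap(&crashing_cpu, -1, cpu);
    if (old == -1)
        return CRASH_CONTINUE;   /* first onto the crash path */
    if (old == cpu)
        return CRASH_REENTRY;    /* reentry: unwind instead of deadlocking */
    return CRASH_PARK;           /* lost the race to another cpu */
}
```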
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE, which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100

commit 491788b98c7b35822cab8bdf66504a78c88414ee
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:33:54 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless: the swapgs instruction does not cause a
    vmexit, so the cached value of this MSR is potentially stale after the next
    guest instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100

commit be29888fb069cae35be251ce3fcf74e937030812
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:33:10 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
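
The expected semantics can be sketched as a standalone parser (a hypothetical
helper for illustration, not Xen's actual boolean_param() implementation):
"ats" enables the option and "no-ats" disables it, which is the prefix
handling the hand-coded version missed.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Parse a Xen-style boolean command line option: the bare name enables it,
 * the "no-" prefixed name disables it.  Returns true if 'arg' matched the
 * option 'name' (with or without the prefix) and '*val' was set. */
static bool parse_bool_param(const char *arg, const char *name, bool *val)
{
    if (!strncmp(arg, "no-", 3)) {
        if (strcmp(arg + 3, name))
            return false;         /* "no-<something-else>": not ours */
        *val = false;             /* "no-ats" disables */
        return true;
    }
    if (strcmp(arg, name))
        return false;             /* different option entirely */
    *val = true;                  /* bare "ats" enables */
    return true;
}
```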
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100

commit 39b9a5bc0858b604560499afdc9964a670c8b67b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:31:50 2014 +0100

    x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
    
    Having nmi_shootdown_cpus() report which pcpus failed to be shot down is a
    useful debugging hint as to what possibly went wrong (especially when the
    crash logs seem to indicate that an NMI timeout occurred while waiting for one
    of the problematic pcpus to perform an action).
    
    This is achieved by swapping an atomic_t count of unreported pcpus with a
    cpumask.  In the case that the 1 second timeout occurs, use the cpumask to
    identify the problematic pcpus.
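
The idea can be sketched with a 64-bit mask standing in for a cpumask (names
are invented and the model is deliberately minimal; the real code uses Xen's
cpumask API from NMI context): each cpu clears its own bit when it has been
shot down, so on timeout the remaining set bits name exactly the pcpus that
failed to respond, rather than an anonymous count.

```c
#include <assert.h>
#include <stdint.h>

static uint64_t waiting_mask;   /* bit n set: pcpu n not yet shot down */

/* Arm the mask before sending the shootdown NMIs. */
static void shootdown_start(const int *cpus, int n)
{
    waiting_mask = 0;
    for (int i = 0; i < n; i++)
        waiting_mask |= 1ULL << cpus[i];
}

/* Called from each cpu's NMI handler once it has parked itself. */
static void shootdown_ack(int cpu)
{
    __sync_fetch_and_and(&waiting_mask, ~(1ULL << cpu));
}

/* After the timeout, any bit still set identifies a problematic pcpu. */
static uint64_t shootdown_stragglers(void)
{
    return waiting_mask;
}
```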
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704
    master date: 2013-09-26 10:14:51 +0200
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f
baseline version:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7261a3fc6e6101293cff232b9423dd41b140fc0f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:37:06 2014 +0100

    kexec: prevent deadlock on reentry to the crash path
    
    In some cases, such as suffering a queued-invalidation timeout while
    performing an iommu_crash_shutdown(), Xen can end up reentering the crash
    path. Previously, this would result in a deadlock in one_cpu_only(), as the
    test_and_set_bit() would fail.
    
    The crash path is not reentrant, and even if it could be made to be so, it is
    almost certain that we would fall over the same reentry condition again.
    
    The new code can distinguish a reentry case from multiple cpus racing down the
    crash path.  In the case that a reentry is detected, return back out to the
    nested panic() call, which will maybe_reboot() on our behalf.  This requires a
    bit of return plumbing back up to kexec_crash().
    
    While fixing this deadlock, also fix up a minor niggle seen recently in a
    XenServer crash report.  The report was from a Bank 8 MCE which had managed
    to crash on all cpus at once.  The result was a lot of stack traces with cpus
    in kexec_common_shutdown(), which was in fact the inlined version of
    one_cpu_only().  The kexec crash path is not a hotpath, so we can easily
    afford to prevent inlining for the sake of clarity in the stack traces.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    master commit: 470f58c159410b280627c2ea7798ea12ad93bd7c
    master date: 2013-11-27 15:13:48 +0100
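
[Editor's note] The reentry detection described above can be sketched roughly as follows. This is a minimal illustration with assumed names and behavior, not the actual Xen code: the first CPU to reach the crash path records its id, a later arrival with the same id is a reentry and backs out to the nested panic(), and a different id is a racing CPU.

```c
#include <stdbool.h>

/*
 * Illustrative sketch (not Xen's actual implementation) of telling
 * crash-path reentry on the same CPU apart from a race between CPUs.
 * crashing_cpu starts at -1; the first CPU to arrive claims the path.
 */
static int crashing_cpu = -1;

/* Returns true if this CPU may proceed down the crash path. */
static bool one_cpu_only(int this_cpu)
{
    int old = __sync_val_compare_and_swap(&crashing_cpu, -1, this_cpu);

    if (old == -1)
        return true;   /* We won the race: proceed down the crash path. */
    if (old == this_cpu)
        return false;  /* Reentry on the same CPU: back out to panic(). */

    for (;;)
        ;              /* Another CPU is crashing: this one spins. */
}
```

A caller that sees `false` knows it reentered and can return the failure up to kexec_crash(), as the commit message describes.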

commit 491788b98c7b35822cab8bdf66504a78c88414ee
Author: Paul Durrant <paul.durrant@citrix.com>
Date:   Fri Jan 17 16:33:54 2014 +0100

    x86/VT-x: Disable MSR intercept for SHADOW_GS_BASE
    
    Intercepting this MSR is pointless - The swapgs instruction does not cause a
    vmexit, so the cached result of this is potentially stale after the next guest
    instruction.  It is correctly saved and restored on vcpu context switch.
    
    Furthermore, 64bit Windows writes to this MSR on every thread context switch,
    so interception causes a substantial performance hit.
    
    Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: a82e98d473fd212316ea5aa078a7588324b020e5
    master date: 2013-11-15 11:02:17 +0100

commit be29888fb069cae35be251ce3fcf74e937030812
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:33:10 2014 +0100

    x86/ats: Fix parsing of 'ats' command line option
    
    This is really a boolean_param() hidden inside a hand-coded attempt to
    replicate boolean_param(), which misses the 'no-' prefix semantics
    expected with Xen boolean parameters.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 7b5af1df122092243a3697409d5a5ad3b9944da4
    master date: 2013-11-04 14:45:17 +0100
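
[Editor's note] For context, the 'no-' prefix semantics that boolean_param() is expected to provide can be sketched as a stand-alone parser. The function name and shape here are made up for illustration; only the semantics (bare name or "=1" enables, "no-" prefix or "=0" disables) follow the commit message.

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative parser for Xen-style boolean command line options.
 * "ats" or "ats=1" enables; "no-ats" or "ats=0" disables.  The
 * hand-coded 'ats' parser the patch replaces missed the "no-" form.
 */
static bool parse_bool_param(const char *arg, const char *name, bool current)
{
    size_t n = strlen(name);

    if (strncmp(arg, "no-", 3) == 0 && strcmp(arg + 3, name) == 0)
        return false;                          /* "no-ats" disables */
    if (strncmp(arg, name, n) != 0)
        return current;                        /* not our parameter */
    if (arg[n] == '\0')
        return true;                           /* bare "ats" enables */
    if (arg[n] == '=')
        return strcmp(arg + n + 1, "0") != 0;  /* "ats=0" vs "ats=1" */
    return current;                            /* e.g. "atsfoo": ignore */
}
```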

commit 39b9a5bc0858b604560499afdc9964a670c8b67b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jan 17 16:31:50 2014 +0100

    x86/crash: Indicate how well nmi_shootdown_cpus() managed to do
    
    Having nmi_shootdown_cpus() report which pcpus failed to be shot down is a
    useful debugging hint as to what possibly went wrong (especially when the
    crash logs seem to indicate that an NMI timeout occurred while waiting for one
    of the problematic pcpus to perform an action).
    
    This is achieved by swapping an atomic_t count of unreported pcpus with a
    cpumask.  In the case that the 1 second timeout occurs, use the cpumask to
    identify the problematic pcpus.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: f12c1f0b09205cdf18a2c4a615fdc3e7357ce704
    master date: 2013-09-26 10:14:51 +0200
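
[Editor's note] The swap of an atomic count for a cpumask can be illustrated with a small stand-alone sketch. All names here are assumptions, and a plain 64-bit mask stands in for Xen's cpumask_t: on timeout, the mask names exactly which pcpus never responded, where a residual count could not.

```c
#include <stdint.h>
#include <stdio.h>

/* Bit N set => pcpu N has not yet reported being shot down. */
static uint64_t waiting_cpus;

static void shootdown_start(int nr_cpus)
{
    waiting_cpus = (nr_cpus >= 64) ? ~0ULL : (1ULL << nr_cpus) - 1;
}

/* Called (conceptually from the NMI handler) when a pcpu parks itself. */
static void cpu_reported(int cpu)
{
    __sync_fetch_and_and(&waiting_cpus, ~(1ULL << cpu));
}

/* On timeout, list the straggler pcpus into buf; returns how many. */
static int report_stragglers(char *buf, size_t len)
{
    int n = 0;
    size_t off = 0;

    for (int cpu = 0; cpu < 64; cpu++)
        if (waiting_cpus & (1ULL << cpu)) {
            off += snprintf(buf + off, len - off, "%d ", cpu);
            n++;
        }
    return n;
}
```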
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 14:32:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 14:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Wwk-0002d6-BP; Sat, 18 Jan 2014 14:32:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W4Wwi-0002d1-Lj
	for xen-devel@lists.xen.org; Sat, 18 Jan 2014 14:32:24 +0000
Received: from [85.158.143.35:11139] by server-2.bemta-4.messagelabs.com id
	42/C3-11386-7709AD25; Sat, 18 Jan 2014 14:32:23 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390055542!12489738!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1956 invoked from network); 18 Jan 2014 14:32:23 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-7.tower-21.messagelabs.com with SMTP;
	18 Jan 2014 14:32:23 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Jan 2014 06:28:17 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,680,1384329600"; d="scan'208";a="468777955"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 18 Jan 2014 06:32:20 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 18 Jan 2014 06:32:20 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 18 Jan 2014 06:32:19 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([10.239.4.153]) with mapi id
	14.03.0123.003; Sat, 18 Jan 2014 22:32:17 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH] Nested VMX: prohibit virtual vmentry/vmexit during IO
	emulation
Thread-Index: AQHPE20P7oidqK7M2k6DXXzQDiYUiJqKio+w
Date: Sat, 18 Jan 2014 14:32:17 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BFB93@SHSMSX104.ccr.corp.intel.com>
References: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
	<52D910BE02000078001147DC@nat28.tlf.novell.com>
In-Reply-To: <52D910BE02000078001147DC@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"chegger@amazon.de" <chegger@amazon.de>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Nakajima, Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit
 during IO emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-01-17:
>>>> On 17.01.14 at 07:39, Yang Zhang <yang.z.zhang@intel.com> wrote:
>> Sometimes L0 needs to decode an L2 instruction in order to handle an
>> IO access directly, and may get X86EMUL_RETRY while handling that IO
>> request. If, at the same time, a virtual vmexit is pending (for
>> example, an interrupt to inject into L1), the hypervisor will switch
>> the VCPU context from L2 to L1. We are then in L1's context, but
>> because of the X86EMUL_RETRY the hypervisor will retry the IO request
>> later, and unfortunately the retry happens in L1's context, which
>> causes the problem.
>> The fix is to allow no virtual vmexit/vmentry while an IO request is
>> pending.
>> 
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>>  xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++
> 
> Didn't we agree earlier on to do this in common code?
> 

I think you agreed with this fix. Let me double-check: do you mean moving the check into nhvm_interrupt_block() as Christoph suggested, or to some other place in common code? Christoph's suggestion doesn't solve the issue, as I explained in the previous thread. Also, since SVM and VMX handle the vmswitch completely independently, there is no suitable place in common code where a single check could cover both.

Best regards,
Yang
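
[Editor's note] The guard being discussed can be sketched stand-alone. The field and function names below are assumptions for illustration, not Xen's actual structures: if the vcpu still has an IO request awaiting retry, refuse the virtual vmentry/vmexit so that the retry stays in L2's context.

```c
#include <stdbool.h>

/* Illustrative stand-in for the vcpu's emulated-IO state. */
enum io_state { IO_NONE, IO_IN_PROGRESS };

struct vcpu {
    enum io_state io_state;
};

/*
 * An X86EMUL_RETRY left an IO request pending while in L2's context;
 * switching to L1 now would replay that request in the wrong context.
 * So only permit a virtual vmswitch when no IO request is pending.
 */
static bool nvmx_switch_allowed(const struct vcpu *v)
{
    return v->io_state == IO_NONE;
}
```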



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 14:48:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 14:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4XBz-0003Dh-Ru; Sat, 18 Jan 2014 14:48:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4XBx-0003Dc-Oq
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 14:48:10 +0000
Received: from [193.109.254.147:46006] by server-4.bemta-14.messagelabs.com id
	27/25-03916-9249AD25; Sat, 18 Jan 2014 14:48:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390056486!10199864!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 997 invoked from network); 18 Jan 2014 14:48:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 14:48:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,680,1384300800"; d="scan'208";a="92053909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Jan 2014 14:48:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 09:48:05 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4XBs-0003ho-O8;
	Sat, 18 Jan 2014 14:48:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4XBs-0004Qs-By;
	Sat, 18 Jan 2014 14:48:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24431-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 14:48:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24431: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24431 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24431/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24416
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24416

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   10 guest-saverestore           fail pass in 24425
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 24425
 test-amd64-amd64-xl-sedf      7 debian-install     fail in 24425 pass in 24431
 test-armhf-armhf-xl           7 debian-install     fail in 24425 pass in 24431
 test-amd64-i386-xl-qemut-win7-amd64 3 host-install(3) broken in 24425 pass in 24431

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     10 guest-saverestore         fail REGR. vs. 24416

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24425 never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was out-dated, and it's better to only have
    this information in one place (the Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest can access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point at the device's MMIO. But in the
    current logic, L0 does not distinguish whether the MMIO comes from qemu or
    from a physical device when building the shadow EPT table, which is wrong.
    This patch sets up the correct shadow EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race use temporary variable for prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 14:48:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 14:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4XBz-0003Dh-Ru; Sat, 18 Jan 2014 14:48:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4XBx-0003Dc-Oq
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 14:48:10 +0000
Received: from [193.109.254.147:46006] by server-4.bemta-14.messagelabs.com id
	27/25-03916-9249AD25; Sat, 18 Jan 2014 14:48:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390056486!10199864!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 997 invoked from network); 18 Jan 2014 14:48:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 14:48:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,680,1384300800"; d="scan'208";a="92053909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Jan 2014 14:48:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 09:48:05 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4XBs-0003ho-O8;
	Sat, 18 Jan 2014 14:48:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4XBs-0004Qs-By;
	Sat, 18 Jan 2014 14:48:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24431-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 14:48:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24431: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24431 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24431/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24416
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24416

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   10 guest-saverestore           fail pass in 24425
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 24425
 test-amd64-amd64-xl-sedf      7 debian-install     fail in 24425 pass in 24431
 test-armhf-armhf-xl           7 debian-install     fail in 24425 pass in 24431
 test-amd64-i386-xl-qemut-win7-amd64 3 host-install(3) broken in 24425 pass in 24431

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     10 guest-saverestore         fail REGR. vs. 24416

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24425 never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was out-dated and it's better to only have
    this information in one place (the Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses a physical device directly.  For such
    accesses the shadow EPT table should point at the device's MMIO.  But in the
    current logic, L0 does not distinguish whether the MMIO comes from qemu or from
    a physical device when building the shadow EPT table, which is wrong.  This
    patch sets up the correct shadow EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg; the cmpxchg succeeded, but in the meantime
       prev changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  The wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 14:49:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 14:49:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4XDc-0003XV-D8; Sat, 18 Jan 2014 14:49:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1W4XDb-0003XM-Jz
	for xen-devel@lists.xenproject.org; Sat, 18 Jan 2014 14:49:51 +0000
Received: from [193.109.254.147:49070] by server-10.bemta-14.messagelabs.com
	id D8/C0-20752-E849AD25; Sat, 18 Jan 2014 14:49:50 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390056589!8203178!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31404 invoked from network); 18 Jan 2014 14:49:50 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-27.messagelabs.com with SMTP;
	18 Jan 2014 14:49:50 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Jan 2014 06:45:42 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,680,1384329600"; d="scan'208";a="440952742"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 18 Jan 2014 06:49:45 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.18.116.9) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 18 Jan 2014 06:49:45 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx109.amr.corp.intel.com (10.18.116.9) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 18 Jan 2014 06:49:45 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sat, 18 Jan 2014 22:49:42 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Thread-Topic: [Xen-devel] [PATCH] x86/cpu: Reduce boot-time logspam
Thread-Index: AQHPEsA0JEdnRxr6yE6UHLYMOb4AO5qKkxVg
Date: Sat, 18 Jan 2014 14:49:41 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC82923350149533F@SHSMSX101.ccr.corp.intel.com>
References: <1389872643-24987-1-git-send-email-andrew.cooper3@citrix.com>
	<52D7ED1C0200007800114406@nat28.tlf.novell.com>
In-Reply-To: <52D7ED1C0200007800114406@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] x86/cpu: Reduce boot-time logspam
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 16.01.14 at 12:44, Andrew Cooper <andrew.cooper3@citrix.com>
>>>> wrote: 
>> --- a/xen/arch/x86/cpu/mcheck/mce.c
>> +++ b/xen/arch/x86/cpu/mcheck/mce.c
>> @@ -729,8 +729,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool_t bsp)
>>  {
>>      enum mcheck_type inited = mcheck_none;
>> 
>> -    if (mce_disabled == 1) {
>> -        dprintk(XENLOG_INFO, "MCE support disabled by bootparam\n");
>> +    if (bsp && mce_disabled == 1) {
>> +        printk(XENLOG_INFO "MCE support disabled by bootparam\n");
> 
> While I'm fine with this, ...
> 
>> --- a/xen/arch/x86/cpu/mwait-idle.c
>> +++ b/xen/arch/x86/cpu/mwait-idle.c
>> @@ -540,7 +540,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
>>  		state = MWAIT_HINT2CSTATE(hint) + 1;
>>  		substate = MWAIT_HINT2SUBSTATE(hint);
>> 
>> -		if (state > max_cstate) {
>> +		if (cpu == 0 && state > max_cstate) {
>>  			printk(PREFIX "max C-state %u reached\n", max_cstate);
>>  			break;
>>  		}
>> @@ -552,7 +552,7 @@ static int mwait_idle_cpu_init(struct notifier_block *nfb,
>>  		if (substate >= num_substates)
>>  			continue;
>> 
>> -		if (dev->count >= ACPI_PROCESSOR_MAX_POWER) {
>> +		if (cpu == 0 && dev->count >= ACPI_PROCESSOR_MAX_POWER) {
>>  			printk(PREFIX "max C-state count of %u reached\n",
>>  			       ACPI_PROCESSOR_MAX_POWER);
>>  			break;
> 
> ... I object to both of these: There's no reason why the C-state
> count could differ between CPUs. Hence I'd accept these being
> guarded so they each get printed just once, but tying this to
> CPU0 is wrong. And then, adding the CPU number would be a
> natural thing to do.
> 
> Also, with this being a clone of Linux code (with which I sync from
> time to time), I'd really expect such changes to go through there.
> Of course, if you see the messages with Xen but not with a suitable
> Linux equivalent, then we'd have to look into why you see them...
> 

Agreed; and the root cause noted in 'Discovered when testing Xen-4.4-rc2 on a Haswell SDP platform with some MCE and Cstate quirks' is the real point.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 15:26:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 15:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Xmf-0005PR-RV; Sat, 18 Jan 2014 15:26:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4Xme-0005PM-SS
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 15:26:05 +0000
Received: from [85.158.143.35:31067] by server-3.bemta-4.messagelabs.com id
	76/75-32360-C0D9AD25; Sat, 18 Jan 2014 15:26:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390058761!12565972!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27543 invoked from network); 18 Jan 2014 15:26:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 15:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,680,1384300800"; d="scan'208";a="94156328"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 10:26:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4XmZ-0003tV-UU;
	Sat, 18 Jan 2014 15:25:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4XmZ-0000kR-Ts;
	Sat, 18 Jan 2014 15:25:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24434-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 15:25:59 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24434: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24434 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24434/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           9 guest-start                 fail pass in 24423
 test-amd64-i386-freebsd10-amd64  3 host-install(3)        broken pass in 24423
 test-amd64-amd64-xl-win7-amd64  7 windows-install    fail pass in 24436-bisect
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 24423

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24345

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24423 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24436 never pass

version targeted for testing:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f
baseline version:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7

From xen-devel-bounces@lists.xen.org Sat Jan 18 15:26:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 15:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4Xmf-0005PR-RV; Sat, 18 Jan 2014 15:26:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4Xme-0005PM-SS
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 15:26:05 +0000
Received: from [85.158.143.35:31067] by server-3.bemta-4.messagelabs.com id
	76/75-32360-C0D9AD25; Sat, 18 Jan 2014 15:26:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390058761!12565972!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27543 invoked from network); 18 Jan 2014 15:26:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 15:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,680,1384300800"; d="scan'208";a="94156328"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 10:26:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4XmZ-0003tV-UU;
	Sat, 18 Jan 2014 15:25:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4XmZ-0000kR-Ts;
	Sat, 18 Jan 2014 15:25:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24434-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 15:25:59 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24434: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24434 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24434/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           9 guest-start                 fail pass in 24423
 test-amd64-i386-freebsd10-amd64  3 host-install(3)        broken pass in 24423
 test-amd64-amd64-xl-win7-amd64  7 windows-install    fail pass in 24436-bisect
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 24423

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24345

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24423 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24436 never pass

version targeted for testing:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f
baseline version:
 xen                  670d64aed01e27d3e8b783fd83dc29bc46a808b7

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Paul Durrant <paul.durrant@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=7261a3fc6e6101293cff232b9423dd41b140fc0f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 7261a3fc6e6101293cff232b9423dd41b140fc0f
+ branch=xen-4.3-testing
+ revision=7261a3fc6e6101293cff232b9423dd41b140fc0f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 7261a3fc6e6101293cff232b9423dd41b140fc0f:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   670d64a..7261a3f  7261a3fc6e6101293cff232b9423dd41b140fc0f -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 21:02:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 21:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4d1Z-0003rY-AR; Sat, 18 Jan 2014 21:01:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W4d1Y-0003rT-0h
	for xen-devel@lists.xenproject.org; Sat, 18 Jan 2014 21:01:48 +0000
Received: from [193.109.254.147:54275] by server-1.bemta-14.messagelabs.com id
	0B/6D-15600-BBBEAD25; Sat, 18 Jan 2014 21:01:47 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390078905!10224615!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22101 invoked from network); 18 Jan 2014 21:01:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 21:01:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,681,1384300800"; d="scan'208";a="94189308"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 21:01:44 +0000
Received: from [10.68.14.32] (10.68.14.32) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Sat, 18 Jan 2014 16:01:43 -0500
Message-ID: <52DAEBB1.2050405@citrix.com>
Date: Sat, 18 Jan 2014 21:01:37 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?ISO-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <1389640119-7936-1-git-send-email-zoltan.kiss@citrix.com>
	<1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389640119-7936-2-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.68.14.32]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 1/2] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/01/14 19:08, Zoltan Kiss wrote:
> @@ -284,8 +287,37 @@ static int map_grant_pages(struct grant_map *map)
>   	}
>   
>   	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs(map->map_ops, NULL, map->pages, map->count);
> +	if (err)
> +		return err;
> +
> +	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < map->count; i++) {
> +		/* Do not add to override if the map failed. */
> +		if (map->map_ops[i].status)
> +			continue;
> +
> +		if (map->map_ops[i].flags & GNTMAP_contains_pte) {
> +			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map->map_ops[i].host_addr)) +
> +				(map->map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map->map_ops[i].dev_bus_addr);
> +		}
> +		err = m2p_add_override(mfn,
> +				       map->pages[i],
> +				       use_ptemod ? &map->kmap_ops[i] : NULL);
> +		if (err)
> +			break;
> +	}
> +
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
>   	if (err)
>   		return err;
>   
>
This patch has a fundamental problem here: we change the pfn in 
gnttab_map_refs, then fetch it again in m2p_add_override, so we end up 
with a different one than we need. This causes a Dom0 crash. I will send 
a new version to fix that.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 23:24:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 23:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4fEb-0001I9-Vp; Sat, 18 Jan 2014 23:23:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jesse.benedict@citrix.com>)
	id 1W3smy-0007Ns-W0; Thu, 16 Jan 2014 19:39:41 +0000
Received: from [193.109.254.147:32491] by server-11.bemta-14.messagelabs.com
	id D2/60-20576-C7538D25; Thu, 16 Jan 2014 19:39:40 +0000
X-Env-Sender: jesse.benedict@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1389901177!11310746!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6905 invoked from network); 16 Jan 2014 19:39:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Jan 2014 19:39:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,669,1384300800"; d="scan'208";a="93621470"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Jan 2014 19:39:10 +0000
Received: from FTLPEX01CL02.citrite.net ([169.254.2.8]) by
	FTLPEX01CL03.citrite.net ([169.254.1.174]) with mapi id 14.02.0342.004;
	Thu, 16 Jan 2014 14:39:10 -0500
From: Jesse Benedict <jesse.benedict@citrix.com>
To: 'Russ Pavlicek' <russell.pavlicek@xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-api@lists.xen.org"
	<xen-api@lists.xen.org>, "xs-devel@lists.xenserver.org"
	<xs-devel@lists.xenserver.org>, "cl-mirage@lists.cam.ac.uk"
	<cl-mirage@lists.cam.ac.uk>, "publicity@lists.xenproject.org"
	<publicity@lists.xenproject.org>
Thread-Topic: [xs-devel] Xen Project Test Day: Test 4.4 RC2 on January 20
Thread-Index: AQHPErDL+s+s1Grh7UyexLHJALpZMpqHv8Vw
Date: Thu, 16 Jan 2014 19:39:10 +0000
Message-ID: <B8C24FCCBFC459419478083491FF39E21EB5F8@FTLPEX01CL02.citrite.net>
References: <CAHehzX34qmd0LQzQuX6xg92V+4n7Y5GTpyKNySmpHyA2BBh2ww@mail.gmail.com>
In-Reply-To: <CAHehzX34qmd0LQzQuX6xg92V+4n7Y5GTpyKNySmpHyA2BBh2ww@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.11.24.188]
MIME-Version: 1.0
X-DLP: MIA2
X-Mailman-Approved-At: Sat, 18 Jan 2014 23:23:24 +0000
Subject: Re: [Xen-devel] [xs-devel] Xen Project Test Day: Test 4.4 RC2 on
	January 20
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Count me in, sir!  I will review the links you have provided!


Sincerely,


Jesse Benedict, CCA | Citrix, Inc. | XenServer, XenClient Support Team
Work Better.  Live Better.  Call us at 1-800-4CITRIX!
Join the Community: support.citrix.com | discussions.citrix.com | blogs.citrix.com | taas.citrix.com

Customer Satisfaction is our goal. 
If you have feedback regarding my performance, please feel free to contact my Manager william.aycock@citrix.com

CONFIDENTIALITY NOTICE 
This e-mail message and all documents which accompany it are intended only for the use of the individual or entity to which addressed, and may contain privileged or confidential information. Any unauthorized disclosure or distribution of this e-mail message is prohibited. Any private files or utilities that are included in this e-mail are intended only for the use of the individual or entity to which this is addressed and distribution of these files or utilities is prohibited. If you have received this e-mail message in error, please notify me immediately. Thank you.


-----Original Message-----
From: russell.pavlicek.xen@gmail.com [mailto:russell.pavlicek.xen@gmail.com] On Behalf Of Russ Pavlicek
Sent: Thursday, January 16, 2014 6:29 AM
To: xen-devel@lists.xen.org; xen-users@lists.xen.org; xen-api@lists.xen.org; xs-devel@lists.xenserver.org; cl-mirage@lists.cam.ac.uk; publicity@lists.xenproject.org
Subject: [xs-devel] Xen Project Test Day: Test 4.4 RC2 on January 20

Release time is approaching, so Test Days have arrived!

On Monday, January 20, we are holding a Test Day for Xen 4.4 Release Candidate 2.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions

XEN 4.4 FEATURE DEVELOPERS:

If you have a new feature which is cooked and ready for testing in RC2, we need to know about it and how to test it.  Either edit the instructions page or send me a few lines describing the feature and how it should be tested.

Right now, RC2 is labelled a general test (e.g., "Does Xen compile, install, and do the things Xen normally does?").  We don't have any specific tests of new functionality identified.  If you have something new which needs testing in RC2, we need to know about it.

EVERYONE:

Please join us on Monday, January 20, and help make sure the next release of Xen is the best one yet!

Russ Pavlicek
Xen Project Evangelist


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 23:42:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 23:42:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4fWQ-0002ET-Nt; Sat, 18 Jan 2014 23:41:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W4fWO-0002EO-Dm
	for xen-devel@lists.xen.org; Sat, 18 Jan 2014 23:41:48 +0000
Received: from [85.158.137.68:20339] by server-10.bemta-3.messagelabs.com id
	B8/2C-23989-B311BD25; Sat, 18 Jan 2014 23:41:47 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390088506!9967974!1
X-Originating-IP: [74.125.82.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13685 invoked from network); 18 Jan 2014 23:41:47 -0000
Received: from mail-wg0-f45.google.com (HELO mail-wg0-f45.google.com)
	(74.125.82.45)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 23:41:47 -0000
Received: by mail-wg0-f45.google.com with SMTP id n12so5870194wgh.24
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 15:41:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=h76klGEasTqnnGciDyydOIaeOjOmyJ0mgLXev17A2ug=;
	b=ceU6vybMIJk91vSOicz2ixG4W6r6JvtHjnovWnT4OaXrrb8nzbJr1YsDGeUiNIivlr
	J5z04eMjxkzB32UbYU7haDDmD4tWtOUKMmycKpjQJ5sDpKpDzqtzwBRmTPqhDOtHKK4w
	iiiIzvdRLP/b4sCPf5moI4n9IIlW/bIP99n9fnql0Qx7bSMqnu8os92+SW1tJvjuMFZi
	bz1YCQ5fVraimhou/B1NCL0Bx1lQbyuk14GuaHJp7VDt96+7+ilN30BMnoXXou/nKn7h
	fCDNRys8TWbtIIHjB+J/+MkyWs9QOnpq4NxEzL3kHIS7BxFNl6FbaGVn8W3VXA5g4M46
	jc3w==
X-Gm-Message-State: ALoCoQlyF3HJSp/zbxPrucmp2xXjtBaAvO33FLhLmpnrZ/r7F4dsmeOqVjhjCYminEMqcomuoDz1
X-Received: by 10.194.175.202 with SMTP id cc10mr8113022wjc.48.1390088506777; 
	Sat, 18 Jan 2014 15:41:46 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id w1sm16375423wix.1.2014.01.18.15.41.45
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 15:41:46 -0800 (PST)
Message-ID: <52DB1138.6010804@linaro.org>
Date: Sat, 18 Jan 2014 23:41:44 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Nathan Whitehorn <nwhitehorn@freebsd.org>, 
 Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
In-Reply-To: <52D89DC9.7050303@freebsd.org>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hello Nathan,

On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
> On 01/16/14 18:36, Julien Grall wrote:
>>
>>
>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>> As I understand, only the simple bus code (see simplebus_attach) is
>> translating the interrupts in the device on a resource.
>> So if you have a node directly attached to the root node with
>> interrupts and MMIO, the driver won't be able to retrieve and
>> translate the interrupts via bus_alloc_resources.

> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.

I have dug into the code to find the reason for my issue. FreeBSD 
receives a VM fault when the driver (xen-dt) tries to set up the IRQ.

This happens because the GIC is not yet initialized when FreeBSD tries 
to unmask the IRQ (sys/arm/arm/gic.c:306).

As a result, device nodes that appear before the GIC in the device 
tree can't use interrupts. For instance, this simple device tree will 
make FreeBSD fault:

/ {

   mybus {
      compatible = "simple-bus";

      mynode {
         interrupt-parent = <&gic>;
         interrupts = <...>;
      };

      gic: gic@xxxx {
         interrupt-controller;
      };
   };
};

The node "mynode" would have to be moved after the GIC in order to work 
correctly.
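As a sketch of that reordering (using the same placeholder node names as the snippet above), the layout that avoids the fault puts the GIC node first:

```
/ {
   mybus {
      compatible = "simple-bus";

      gic: gic@xxxx {
         interrupt-controller;
      };

      mynode {
         interrupt-parent = <&gic>;
         interrupts = <...>;
      };
   };
};
```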

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 18 23:56:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jan 2014 23:56:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4fkn-0002m0-6Q; Sat, 18 Jan 2014 23:56:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4fkl-0002lv-KT
	for xen-devel@lists.xensource.com; Sat, 18 Jan 2014 23:56:39 +0000
Received: from [193.109.254.147:50433] by server-11.bemta-14.messagelabs.com
	id 6A/9E-20576-6B41BD25; Sat, 18 Jan 2014 23:56:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390089396!11646650!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19078 invoked from network); 18 Jan 2014 23:56:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Jan 2014 23:56:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,682,1384300800"; d="scan'208";a="94203722"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Jan 2014 23:56:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 18 Jan 2014 18:56:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4fkg-0006N2-FY;
	Sat, 18 Jan 2014 23:56:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4fkd-0000eT-Jy;
	Sat, 18 Jan 2014 23:56:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24438-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 18 Jan 2014 23:56:31 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24438: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24438 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24438/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build        fail in 24431 REGR. vs. 24416

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10      fail pass in 24431
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24431 pass in 24438
 test-amd64-i386-xl-credit2   10 guest-saverestore  fail in 24431 pass in 24438
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 24431 pass in 24438
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 24431 pass in 24438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was outdated, and it's better to only have
    this information in one place (the Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest can access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point to the device's MMIO. But in the
    current logic, L0 doesn't distinguish whether the MMIO comes from qemu or
    from a physical device when building the shadow EPT table. This is wrong.
    This patch sets up the correct shadow EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again, and compares it with the
       result of the cmpxchg which succeeded; but in the meantime prev has
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 58f5bcaf05621810f06bf5b3592e2ae87475053d
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Fri Jan 17 16:02:00 2014 +0100

    MAINTAINERS: remove Linux sections
    
    The LINUX (PV_OPS) section was out-dated and it's better to only have
    this information in one place (the Linux MAINTAINERS file).
    
    The LINUX (XCP) section was an external project that hasn't been
    maintained for years.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0b988ba711171b39aed9851cfe90fded50f775c5
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Fri Jan 17 16:00:21 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest will access the physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point to the device's MMIO. But in the
    current logic, L0 doesn't distinguish whether the MMIO comes from qemu or from
    a physical device when building the shadow EPT table. This is wrong. This
    patch sets up the correct shadow EPT table entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit e9af61b969906976188609379183cb304935f448
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Fri Jan 17 15:58:27 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1, inserting on the committed list, and T2,
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since the
    latter is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 00:49:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 00:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4gZe-0005lX-J2; Sun, 19 Jan 2014 00:49:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W4gZc-0005lS-Na
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 00:49:12 +0000
Received: from [85.158.137.68:14084] by server-14.bemta-3.messagelabs.com id
	18/7A-06105-7012BD25; Sun, 19 Jan 2014 00:49:11 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390092551!6286906!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20578 invoked from network); 19 Jan 2014 00:49:11 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 00:49:11 -0000
Received: by mail-wg0-f42.google.com with SMTP id l18so2934485wgh.1
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 16:49:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=XJjoxoO3SYygMco9oN3CxbQ5BwGyCRaGfizJqVbiHAU=;
	b=k1ZOQ6x58eL1tzzrSTIZdJZK/BJys+L/cAwRyOtvVDCKPZELPYIb9GWMoB2bJfPOmG
	uDDxTSh7Q8WNu5yXCcVf0JCmeKLEMso7sanIWbzr+ndJi51NYjGR4RLB7hzslrrjskA3
	GeAUCQeeyNK8oJJwfKMKJHTSNdRULaLHaTgDwyjss4/fTFU/37tbOSyDLW/Xr0IeS7u3
	PtYnIVpANBJC1Y/4xid9E53mSibnL/cfrPQWeTVkfrhhzttHZudcez+1XKS9H6zkbGps
	CTXr3M/LLlttt0gn76gvdXSaq4W0fL/IbjHDZ2ViJrabG7M6KWyp41eXk5gi0Eaoxnpc
	oXiA==
X-Gm-Message-State: ALoCoQnbl8xSyPu/K6IJBjVXjdVgmMHi5BSa2PEulnIq3woz7QaNKWk7iHRO7TVPxrkIzzJ5wb2z
X-Received: by 10.180.107.136 with SMTP id hc8mr4302033wib.11.1390092550774;
	Sat, 18 Jan 2014 16:49:10 -0800 (PST)
Received: from [192.168.0.2] (cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net.
	[86.30.140.170])
	by mx.google.com with ESMTPSA id hy8sm14011759wjb.2.2014.01.18.16.49.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 16:49:09 -0800 (PST)
Message-ID: <52DB2104.3070808@linaro.org>
Date: Sun, 19 Jan 2014 00:49:08 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>	
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>	
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>	
	<52D87B15.5090208@linaro.org>
	<1389950962.6697.33.camel@kazak.uk.xensource.com>
In-Reply-To: <1389950962.6697.33.camel@kazak.uk.xensource.com>
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	Nathan Whitehorn <nwhitehorn@freebsd.org>, gibbs@freebsd.org,
	Warner Losh <imp@bsdimp.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 01/17/2014 09:29 AM, Ian Campbell wrote:
> On Fri, 2014-01-17 at 00:36 +0000, Julien Grall wrote:
>> IanC: I was reading the linux binding documentation
>> (devicetree/booting-without-of.txt VII.2) and it seems that the
>> explanation differs from the implementation, right?
>
> I vaguely recall someone saying that the Linux behaviour was a quirk of
> some real PPC system which supplied a DTB which required this behaviour
> which has leaked into the other platforms. It does also sound like a
> useful extension to the spec which makes the dtb easier to write.

I've read the ePAPR twice and did some tests. I was completely wrong: this 
file is valid per the ePAPR (there is even an example like that at the 
end of the specification). And FreeBSD doesn't complain about this since 
Nathan's ofw/fdt rework.

Sorry for the wasted time.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 01:31:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 01:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4hDv-00032W-OG; Sun, 19 Jan 2014 01:30:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W4hDu-00032R-Qn
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 01:30:51 +0000
Received: from [193.109.254.147:48306] by server-16.bemta-14.messagelabs.com
	id 73/7A-20600-ACA2BD25; Sun, 19 Jan 2014 01:30:50 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390095048!11740702!1
X-Originating-IP: [144.92.197.226]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9280 invoked from network); 19 Jan 2014 01:30:49 -0000
Received: from wmauth3.doit.wisc.edu (HELO smtpauth3.wiscmail.wisc.edu)
	(144.92.197.226)
	by server-12.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	19 Jan 2014 01:30:49 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth3.wiscmail.wisc.edu by
	smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZM00D00JNT7T00@smtpauth3.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Sat, 18 Jan 2014 19:30:47 -0600 (CST)
X-Spam-PmxInfo: Server=avs-3, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.19.11815,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from wanderer.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth3.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZM009AQK77M510@smtpauth3.wiscmail.wisc.edu>; Sat,
	18 Jan 2014 19:30:46 -0600 (CST)
Message-id: <52DB2AC3.5000206@freebsd.org>
Date: Sat, 18 Jan 2014 19:30:43 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.2.0
To: Julien Grall <julien.grall@linaro.org>, Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
In-reply-to: <52DB1138.6010804@linaro.org>
X-Enigmail-Version: 1.6
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/18/14 17:41, Julien Grall wrote:
>
> Hello Nathan,
>
> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>> On 01/16/14 18:36, Julien Grall wrote:
>>>
>>>
>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>> As I understand, only the simple bus code (see simplebus_attach) is
>>> translating the interrupts in the device on a resource.
>>> So if you have a node directly attached to the root node with
>>> interrupts and MMIO, the driver won't be able to retrieve and
>>> translate the interrupts via bus_alloc_resources.
>
>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>
> I have dug into the code to find the reason for my issue. FreeBSD is
> receiving a VM fault when the driver (xen-dt) is trying to set up the IRQ.
>
> This is because the GIC is not yet initialized but FreeBSD asks to
> unmask the IRQ (sys/arm/arm/gic.c:306).
>
> With this problem, all device nodes that are before the GIC in the
> device tree can't have interrupts. For instance this simple device
> will segfault on FreeBSD:
>
> / {
>
>   mybus {
>      compatible = "simple-bus";
>
>      mynode {
>         interrupt-parent = &gic;
>         interrupts = <...>;
>      };
>
>      gic: gic@xxxx {
>         interrupt-controller;
>      }
>   };
> };
>
> The node "mynode" will have to move after the GIC to be able to work
> correctly.
>

Ah, that sounds like a bug in the interrupt handling code. The PPC code
is designed to handle this problem, along with a number of other latent
issues, by deferring interrupt setup, and I think it would make a good
match for ARM as well. I made an experimental branch to port it to MIPS
(the code is almost entirely machine-independent) but am waiting for
testing. A general solution to this problem has to involve deferred setup.
-Nathan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 03:01:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 03:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4idZ-0007DY-QZ; Sun, 19 Jan 2014 03:01:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W4idY-0007DT-3t
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 03:01:24 +0000
Received: from [193.109.254.147:47926] by server-16.bemta-14.messagelabs.com
	id 24/99-20600-3004BD25; Sun, 19 Jan 2014 03:01:23 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390100481!11686263!1
X-Originating-IP: [144.92.197.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20526 invoked from network); 19 Jan 2014 03:01:22 -0000
Received: from wmauth4.doit.wisc.edu (HELO smtpauth4.wiscmail.wisc.edu)
	(144.92.197.145)
	by server-9.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	19 Jan 2014 03:01:22 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth4.wiscmail.wisc.edu by
	smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZM00300OD1LZ00@smtpauth4.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Sat, 18 Jan 2014 21:01:21 -0600 (CST)
X-Spam-PmxInfo: Server=avs-4, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.19.25114,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from wanderer.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZM000JUOE5EZ10@smtpauth4.wiscmail.wisc.edu>; Sat,
	18 Jan 2014 21:01:20 -0600 (CST)
Message-id: <52DB3FFD.2070503@freebsd.org>
Date: Sat, 18 Jan 2014 21:01:17 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.2.0
To: Warner Losh <imp@bsdimp.com>, Julien Grall <julien.grall@linaro.org>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
	<3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
In-reply-to: <3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
X-Enigmail-Version: 1.6
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org,
	freebsd-arm@FreeBSD.org, gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/18/14 20:44, Warner Losh wrote:
> On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:
>
>> Hello Nathan,
>>
>> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>>> On 01/16/14 18:36, Julien Grall wrote:
>>>>
>>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>>> translates a device's interrupts into resources.
>>>> So if you have a node directly attached to the root node with
>>>> interrupts and MMIO, the driver won't be able to retrieve and
>>>> translate the interrupts via bus_alloc_resources.
>>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>> I have dug into the code to find the cause of my issue. FreeBSD is receiving a VM fault when the driver (xen-dt) is trying to set up the IRQ.
>>
>> This is because the GIC is not yet initialized but FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
>>
>> As a result, all device nodes that appear before the GIC in the device tree can't use interrupts. For instance, FreeBSD will fault on this simple device tree:
>>
>> / {
>>
>>  mybus {
>>     compatible = "simple-bus";
>>
>>     mynode {
>>        interrupt-parent = <&gic>;
>>        interrupts = <...>;
>>     };
>>
>>     gic: gic@xxxx {
>>        interrupt-controller;
>>     };
>>  };
>> };
>>
>> The node "mynode" will have to be moved after the GIC node to work correctly.
> This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
>
> Warner

Enumerating in some other order doesn't necessarily help: since the
interrupt and bus trees are independent, circular dependencies can
happen. This is not a hypothetical: on most powermacs, the main
interrupt controller is a functional unit on a PCI device -- a PCI
device whose other units have interrupt lines that eventually connect
back to itself. There is no way to fix that with ordering. So I think we
still need to defer interrupt setup. It's not that bad -- PPC already
does this to handle the powermac case.
-Nathan


From xen-devel-bounces@lists.xen.org Sun Jan 19 03:24:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 03:24:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4izN-0008C9-3j; Sun, 19 Jan 2014 03:23:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nwhitehorn@freebsd.org>) id 1W4izM-0008C4-49
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 03:23:56 +0000
Received: from [193.109.254.147:36155] by server-9.bemta-14.messagelabs.com id
	BC/5C-13957-B454BD25; Sun, 19 Jan 2014 03:23:55 +0000
X-Env-Sender: nwhitehorn@freebsd.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390101833!11683249!1
X-Originating-IP: [144.92.197.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15608 invoked from network); 19 Jan 2014 03:23:54 -0000
Received: from wmauth4.doit.wisc.edu (HELO smtpauth4.wiscmail.wisc.edu)
	(144.92.197.145)
	by server-2.tower-27.messagelabs.com with DES-CBC3-SHA encrypted SMTP;
	19 Jan 2014 03:23:54 -0000
MIME-version: 1.0
Received: from avs-daemon.smtpauth4.wiscmail.wisc.edu by
	smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug
	30 2012)) id <0MZM00600OYGMO00@smtpauth4.wiscmail.wisc.edu> for
	xen-devel@lists.xen.org; Sat, 18 Jan 2014 21:23:53 -0600 (CST)
X-Spam-PmxInfo: Server=avs-4, Version=6.0.3.2322014,
	Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2014.1.19.31515,
	SenderIP=0.0.0.0
X-Spam-Report: AuthenticatedSender=yes, SenderIP=0.0.0.0
Received: from wanderer.tachypleus.net
	(adsl-76-208-68-77.dsl.mdsnwi.sbcglobal.net [76.208.68.77])
	by smtpauth4.wiscmail.wisc.edu
	(Oracle Communications Messaging Server 7u4-27.01(7.0.4.27.0) 64bit
	(built Aug 30 2012)) with ESMTPSA id
	<0MZM0080HPFQZ500@smtpauth4.wiscmail.wisc.edu>; Sat,
	18 Jan 2014 21:23:52 -0600 (CST)
Message-id: <52DB4546.504@freebsd.org>
Date: Sat, 18 Jan 2014 21:23:50 -0600
From: Nathan Whitehorn <nwhitehorn@freebsd.org>
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:24.0) Gecko/20100101
	Thunderbird/24.2.0
To: Warner Losh <imp@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
	<3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
	<52DB3FFD.2070503@freebsd.org>
	<8C16C019-B9AF-417B-9B02-C016A202AAC7@bsdimp.com>
In-reply-to: <8C16C019-B9AF-417B-9B02-C016A202AAC7@bsdimp.com>
X-Enigmail-Version: 1.6
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/18/14 21:08, Warner Losh wrote:
> On Jan 18, 2014, at 8:01 PM, Nathan Whitehorn wrote:
>
>> On 01/18/14 20:44, Warner Losh wrote:
>>> On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:
>>>
>>>> Hello Nathan,
>>>>
>>>> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>>>>> On 01/16/14 18:36, Julien Grall wrote:
>>>>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>>>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>>>>> translates a device's interrupts into resources.
>>>>>> So if you have a node directly attached to the root node with
>>>>>> interrupts and MMIO, the driver won't be able to retrieve and
>>>>>> translate the interrupts via bus_alloc_resources.
>>>>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>>>> I have dug into the code to find the cause of my issue. FreeBSD is receiving a VM fault when the driver (xen-dt) is trying to set up the IRQ.
>>>>
>>>> This is because the GIC is not yet initialized but FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
>>>>
>>>> As a result, all device nodes that appear before the GIC in the device tree can't use interrupts. For instance, FreeBSD will fault on this simple device tree:
>>>>
>>>> / {
>>>>
>>>> mybus {
>>>>    compatible = "simple-bus";
>>>>
>>>>    mynode {
>>>>       interrupt-parent = <&gic>;
>>>>       interrupts = <...>;
>>>>    };
>>>>
>>>>    gic: gic@xxxx {
>>>>       interrupt-controller;
>>>>    };
>>>> };
>>>> };
>>>>
>>>> The node "mynode" will have to be moved after the GIC node to work correctly.
>>> This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
>>>
>>> Warner
>> Enumerating in some other order doesn't necessarily help: since the
>> interrupt and bus trees are independent, circular dependencies can
>> happen. This is not a hypothetical: on most powermacs, the main
>> interrupt controller is a functional unit on a PCI device -- a PCI
>> device whose other units have interrupt lines that eventually connect
>> back to itself. There is no way to fix that with ordering. So I think we
>> still need to defer interrupt setup. It's not that bad -- PPC already
>> does this to handle the powermac case.
> I guess I've looked at simpler cases where interrupts and GPIO pins need to be up before anything can work on Atmel...  We kinda fake it now, but there are some ordering issues that are solved in this way. But I've not finished the work on bringing Atmel into the FDT world yet. Deferred setup isn't always an option, but I'll keep that in mind in case I hit that case...
>
> The other way to cope is to use the multi-pass enumeration stuff that John Baldwin put into the tree a while ago. In that case, you could enumerate bridges, interrupt controllers, GPIO providers, etc. first, and then do a second pass that catches the rest of the devices and the interrupt processing for the first-pass devices. This is a variation on the deferred enumeration stuff you are talking about, so it might be a better, more general solution to these sorts of problems.
>
> Warner
>
>

I'm still not sure that helps. Take the powermac case. The PCI parent
bridge assigns and routes interrupts as part of its probe stage. One
device (called mac-io) is the one that actually has the interrupt
controller, along with other things like ATA and sound. It has several
interrupts to support e.g. ATA. How can the PCI bus attachment sensibly
attach the macio device? If it delays attaching it until the interrupt
controller registers, the bus probing will deadlock, since the interrupt
controller is itself a child of macio! If it attaches it immediately, it
will not be able to route the interrupts of the macio device. It's a
catch-22.

The only solution we were able to come up with when this situation arose
was to treat the bus and interrupt trees as entirely separate things,
built independently, which allows the kernel to break the loop. GPIOs
can have similar problems. The basic issue is that newbus ultimately
assumes that the system topology can be described by a single tree.
Interconnections between the branches -- especially if it isn't even a
DAG -- fundamentally break the model and multipass doesn't necessarily help.
-Nathan


From xen-devel-bounces@lists.xen.org Sun Jan 19 06:31:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 06:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4lts-0007xs-LA; Sun, 19 Jan 2014 06:30:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4ltq-0007xn-JE
	for xen-devel@lists.xensource.com; Sun, 19 Jan 2014 06:30:26 +0000
Received: from [85.158.137.68:30907] by server-11.bemta-3.messagelabs.com id
	6A/FA-19379-1017BD25; Sun, 19 Jan 2014 06:30:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390113022!9933700!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8445 invoked from network); 19 Jan 2014 06:30:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 06:30:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,683,1384300800"; d="scan'208";a="92139353"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Jan 2014 06:30:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 19 Jan 2014 01:30:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4ltk-0008L7-Ip;
	Sun, 19 Jan 2014 06:30:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4ltk-00045w-FL;
	Sun, 19 Jan 2014 06:30:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24440-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 19 Jan 2014 06:30:20 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24440: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24440 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24440/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken pass in 24438
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore          fail pass in 24438
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24438 pass in 24440

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24438 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24438 never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  f26d849dff24120e6ad633db94abfbc6e6572503

------------------------------------------------------------
People who touched revisions under test:
  David Vrabel <david.vrabel@citrix.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=58f5bcaf05621810f06bf5b3592e2ae87475053d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 58f5bcaf05621810f06bf5b3592e2ae87475053d
+ branch=xen-unstable
+ revision=58f5bcaf05621810f06bf5b3592e2ae87475053d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 58f5bcaf05621810f06bf5b3592e2ae87475053d:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   f26d849..58f5bca  58f5bcaf05621810f06bf5b3592e2ae87475053d -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 09:26:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 09:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4odG-0007Pi-4o; Sun, 19 Jan 2014 09:25:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W4odD-0007Pd-U0
	for xen-devel@lists.xensource.com; Sun, 19 Jan 2014 09:25:28 +0000
Received: from [85.158.143.35:21200] by server-1.bemta-4.messagelabs.com id
	1E/D0-02132-60A9BD25; Sun, 19 Jan 2014 09:25:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390123524!12549830!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1126 invoked from network); 19 Jan 2014 09:25:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 09:25:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,684,1384300800"; d="scan'208";a="92153904"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Jan 2014 09:25:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 19 Jan 2014 04:25:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W4od8-0000lN-Qk;
	Sun, 19 Jan 2014 09:25:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W4od8-0006O2-PH;
	Sun, 19 Jan 2014 09:25:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24441-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 19 Jan 2014 09:25:22 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24441: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24441 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24441/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24440
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24440
 build-amd64                   4 xen-build                 fail REGR. vs. 24440
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24440

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 10:22:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 10:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4pW4-0001cO-6N; Sun, 19 Jan 2014 10:22:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W4ikD-0007Lz-QC
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 03:08:18 +0000
Received: from [85.158.137.68:47478] by server-4.bemta-3.messagelabs.com id
	8D/6A-10414-0A14BD25; Sun, 19 Jan 2014 03:08:16 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390100894!6293752!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5918 invoked from network); 19 Jan 2014 03:08:16 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 03:08:16 -0000
Received: by mail-ig0-f173.google.com with SMTP id c10so5082553igq.0
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 19:08:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=mbUcM4/PJrGkwpKLpmtu9BaBYUSjMxuRVmYbi+PFN2U=;
	b=WlcG4LnavAWFkq+wKZY1qxWJdswddL1+xrYfg8CNB/gsfAOoZLJx0yn4lYmWhaZnAE
	ms29SI4raN5rX9mkSKgio6kwUY8C07xPlR/a+MMwcDqN6NnyLl/Ev7nKNYDK6n1gHS/z
	Koke8oJUBJLVYOitvo08HzZEH4RIfBBJXVayy+uK2qE/DABiU/ApoaOJP/Z5Dxuy2FZ0
	Wm/5LKa2KXPrqaaeL7EpkMyXFcdZky8jLSi2TIJmroJZyfqplbviEQ0YD+iTj7UPY0MU
	swXD/U0XMHYDll1XT1ResqomEAzxMeCdmZDlVVm0CU43bAksUf0OD//6Tg3QZ+GYIoAd
	3iSw==
X-Gm-Message-State: ALoCoQmTxdGHefjl+PwBDUvNbzr+cY/cB4p/H7WWHwYxJ8l8PFt2okuEon3Z2rophcHicJnD7m5+
X-Received: by 10.50.79.228 with SMTP id m4mr5710692igx.47.1390100894534;
	Sat, 18 Jan 2014 19:08:14 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id g6sm12630791igg.9.2014.01.18.19.08.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 19:08:14 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <52DB3FFD.2070503@freebsd.org>
Date: Sat, 18 Jan 2014 20:08:12 -0700
Message-Id: <8C16C019-B9AF-417B-9B02-C016A202AAC7@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
	<3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
	<52DB3FFD.2070503@freebsd.org>
To: Nathan Whitehorn <nwhitehorn@freebsd.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Sun, 19 Jan 2014 10:22:07 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 18, 2014, at 8:01 PM, Nathan Whitehorn wrote:

> On 01/18/14 20:44, Warner Losh wrote:
>> On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:
>> 
>>> Hello Nathan,
>>> 
>>> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>>>> On 01/16/14 18:36, Julien Grall wrote:
>>>>> 
>>>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>>>> translates a device's interrupts into a resource.
>>>>> So if a node is attached directly to the root node with
>>>>> interrupts and MMIO, the driver won't be able to retrieve and
>>>>> translate the interrupts via bus_alloc_resources.
>>>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>>> I have dug into the code to find the reason for my issue. FreeBSD receives a VM fault when the driver (xen-dt) tries to set up the IRQ.
>>> 
>>> This is because the GIC is not yet initialized, but FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
>>> 
>>> Because of this, no device node that appears before the GIC in the device tree can have interrupts. For instance, this simple device tree will fault on FreeBSD:
>>> 
>>> / {
>>> 
>>> mybus {
>>>    compatible = "simple-bus";
>>> 
>>>    mynode {
>>>       interrupt-parent = <&gic>;
>>>       interrupts = <...>;
>>>    };
>>> 
>>>    gic: gic@xxxx {
>>>       interrupt-controller;
>>>    };
>>> };
>>> };
>>> 
>>> The node "mynode" will have to be moved after the GIC to work correctly.
>> This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
>> 
>> Warner
> 
> Enumerating in some other order doesn't necessarily help: since the
> interrupt and bus trees are independent, circular dependencies can
> happen. This is not a hypothetical: on most powermacs, the main
> interrupt controller is a functional unit on a PCI device -- a PCI
> device whose other units have interrupt lines that eventually connect
> back to itself. There is no way to fix that with ordering. So I think we
> still need to defer interrupt setup. It's not that bad -- PPC already
> does this to handle the powermac case.

I guess I've looked at simpler cases, where interrupts and GPIO pins need to be up before anything can work on Atmel... We kinda fake it now, but there are some ordering issues that are solved this way. I've not finished the work on bringing Atmel into the FDT world yet, though. Deferred setup isn't always an option, but I'll keep it in mind in case I hit that case...

The other way to cope is to use the multi-pass enumeration support that John Baldwin put into the tree a while ago. In that case, you could enumerate bridges, interrupt controllers, GPIO providers, etc. first, and then do a second pass that catches the rest of the devices and the interrupt processing for the first-pass devices. This is a variation on the deferred enumeration you are talking about, so it might be a better, more general solution to these sorts of problems.

Warner


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 10:22:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 10:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4pW3-0001cH-Rd; Sun, 19 Jan 2014 10:22:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W4iNZ-0006NY-Lx
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 02:44:53 +0000
Received: from [85.158.139.211:25989] by server-8.bemta-5.messagelabs.com id
	1F/20-29838-42C3BD25; Sun, 19 Jan 2014 02:44:52 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390099490!10532423!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21125 invoked from network); 19 Jan 2014 02:44:51 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 02:44:51 -0000
Received: by mail-ig0-f173.google.com with SMTP id c10so5057004igq.0
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 18:44:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=miznZQsfRtjA5VuYPXfaIynv5+4GL60bP9kdQW9VNR0=;
	b=Qdk1TukRKspf1+OHWRCFADwXjTOHmo4ETfWuDVgESs45+yUco27XjWHW30Cf1nkdBd
	HZJk3bUlzNSZkfM1cin0b3winlEzjfZCqMcMKLxQplQS4FsWbMYbFEjsiDWpxE5aHpT8
	qyJFo/j6id7XURl2cpCgEzN7g/Srwk0mBFEq34hHOZASWYf7sV09iM1TaPvkdMQYKfSZ
	4Et+H892FcTq5jC2ZPidqW2ASsQwRbQhqPeYP+a+14AaK0tR0twRE63bbfz6GouzvWPE
	g+hP/dQPJkF6AhmjLozI71QSSDC0Y8TSdiMdElsy4m9sZ2hkgiGCJS5/e6ySSlVrCjsB
	08OA==
X-Gm-Message-State: ALoCoQlZ0aSQZh+p/tddoh/o3RGdpRbCrhgi0cCLQBrqaezvUcCKnnqx7ou+0D0pRNHAFu1HIq4i
X-Received: by 10.42.122.146 with SMTP id n18mr124691icr.41.1390099490219;
	Sat, 18 Jan 2014 18:44:50 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id gc2sm12532065igd.6.2014.01.18.18.44.49
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 18:44:49 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <52DB1138.6010804@linaro.org>
Date: Sat, 18 Jan 2014 19:44:09 -0700
Message-Id: <3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Sun, 19 Jan 2014 10:22:07 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	Nathan Whitehorn <nwhitehorn@freebsd.org>, gibbs@freebsd.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:

> 
> Hello Nathan,
> 
> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>> On 01/16/14 18:36, Julien Grall wrote:
>>> 
>>> 
>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>> translates a device's interrupts into a resource.
>>> So if a node is attached directly to the root node with
>>> interrupts and MMIO, the driver won't be able to retrieve and
>>> translate the interrupts via bus_alloc_resources.
> 
>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
> 
> I have dug into the code to find the reason for my issue. FreeBSD receives a VM fault when the driver (xen-dt) tries to set up the IRQ.
> 
> This is because the GIC is not yet initialized, but FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
> 
> Because of this, no device node that appears before the GIC in the device tree can have interrupts. For instance, this simple device tree will fault on FreeBSD:
> 
> / {
> 
>  mybus {
>     compatible = "simple-bus";
> 
>     mynode {
>        interrupt-parent = <&gic>;
>        interrupts = <...>;
>     };
> 
>     gic: gic@xxxx {
>        interrupt-controller;
>     };
>  };
> };
> 
> The node "mynode" will have to be moved after the GIC to work correctly.

This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
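The reordering Julien describes above would look roughly like this: the same hypothetical tree, with the GIC node moved before its consumer so that FreeBSD's DTB-order enumeration initializes the interrupt controller first (node names and the elided interrupt specifier are from his example):

```dts
/ {
	mybus {
		compatible = "simple-bus";

		gic: gic@xxxx {
			interrupt-controller;
		};

		mynode {
			interrupt-parent = <&gic>;
			interrupts = <...>;
		};
	};
};
```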

Warner
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 10:22:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 10:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4pW4-0001cO-6N; Sun, 19 Jan 2014 10:22:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W4ikD-0007Lz-QC
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 03:08:18 +0000
Received: from [85.158.137.68:47478] by server-4.bemta-3.messagelabs.com id
	8D/6A-10414-0A14BD25; Sun, 19 Jan 2014 03:08:16 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390100894!6293752!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5918 invoked from network); 19 Jan 2014 03:08:16 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 03:08:16 -0000
Received: by mail-ig0-f173.google.com with SMTP id c10so5082553igq.0
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 19:08:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=mbUcM4/PJrGkwpKLpmtu9BaBYUSjMxuRVmYbi+PFN2U=;
	b=WlcG4LnavAWFkq+wKZY1qxWJdswddL1+xrYfg8CNB/gsfAOoZLJx0yn4lYmWhaZnAE
	ms29SI4raN5rX9mkSKgio6kwUY8C07xPlR/a+MMwcDqN6NnyLl/Ev7nKNYDK6n1gHS/z
	Koke8oJUBJLVYOitvo08HzZEH4RIfBBJXVayy+uK2qE/DABiU/ApoaOJP/Z5Dxuy2FZ0
	Wm/5LKa2KXPrqaaeL7EpkMyXFcdZky8jLSi2TIJmroJZyfqplbviEQ0YD+iTj7UPY0MU
	swXD/U0XMHYDll1XT1ResqomEAzxMeCdmZDlVVm0CU43bAksUf0OD//6Tg3QZ+GYIoAd
	3iSw==
X-Gm-Message-State: ALoCoQmTxdGHefjl+PwBDUvNbzr+cY/cB4p/H7WWHwYxJ8l8PFt2okuEon3Z2rophcHicJnD7m5+
X-Received: by 10.50.79.228 with SMTP id m4mr5710692igx.47.1390100894534;
	Sat, 18 Jan 2014 19:08:14 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id g6sm12630791igg.9.2014.01.18.19.08.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 19:08:14 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <52DB3FFD.2070503@freebsd.org>
Date: Sat, 18 Jan 2014 20:08:12 -0700
Message-Id: <8C16C019-B9AF-417B-9B02-C016A202AAC7@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
	<3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
	<52DB3FFD.2070503@freebsd.org>
To: Nathan Whitehorn <nwhitehorn@freebsd.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Sun, 19 Jan 2014 10:22:07 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	gibbs@freebsd.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 18, 2014, at 8:01 PM, Nathan Whitehorn wrote:

> On 01/18/14 20:44, Warner Losh wrote:
>> On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:
>> 
>>> Hello Nathan,
>>> 
>>> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>>>> On 01/16/14 18:36, Julien Grall wrote:
>>>>> 
>>>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>>>> As I understand, only the simple bus code (see simplebus_attach) is
>>>>> translating the interrupts in the device on a resource.
>>>>> So if you have a node directly attached to the root node with
>>>>> interrupts and MMIO, the driver won't be able to retrieve and
>>>>> translate the interrupts via bus_alloc_resources.
>>>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
>>> I have digged into the code to find the reason of my issue. FreeBSD is receiving a VM fault when the driver (xen-dt) is trying to setup the IRQ.
>>> 
>>> This is because the GIC is not yet initialized but FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
>>> 
>>> With this problem, all device nodes that are before the GIC in the device tree can't have interrupts. For instance this simple device will segfault on FreeBSD:
>>> 
>>> / {
>>> 
>>> mybus {
>>>    compatible = "simple-bus";
>>> 
>>>    mynode {
>>>       interrupt-parent = &gic;
>>>       interrupts = <...>;
>>>    };
>>> 
>>>    gic: gic@xxxx {
>>>       interrupt-controller;
>>>    }
>>> };
>>> };
>>> 
>>> The node "mynode" will have to move after the GIC to be able to work correctly.
>> This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
>> 
>> Warner
> 
> Enumerating in some other order doesn't necessarily help: since the
> interrupt and bus trees are independent, circular dependencies can
> happen. This is not a hypothetical: on most powermacs, the main
> interrupt controller is a functional unit on a PCI device -- a PCI
> device whose other units have interrupt lines that eventually connect
> back to itself. There is no way to fix that with ordering. So I think we
> still need to defer interrupt setup. It's not that bad -- PPC already
> does this to handle the powermac case.

I guess I've looked at simpler cases where interrupts and GPIO pins need to be up before anything can work on Atmel...  We kinda fake it now, but there's some ordering issues that are solved in this way. But I've not finished the work on bringing Atmel into the FDT world yet. Deferred setup isn't always an option, but I'll keep that in mind in case I hit that case...

The other way to cope is to use the multi-pass enumeration stuff that John Baldwin put into the tree a while ago. In that case, you could enumerate bridges, interrupt controllers, gpio providers, etc first, and then do  a second pass that catches the rest of the devices and the interrupt processing for the first pass devices. This is a variation on the deferred enumeration stuff you are talking about, so that might be a better, more general solution to these sorts of problems.

Warner


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 19 10:22:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jan 2014 10:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W4pW3-0001cH-Rd; Sun, 19 Jan 2014 10:22:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imp@bsdimp.com>) id 1W4iNZ-0006NY-Lx
	for xen-devel@lists.xen.org; Sun, 19 Jan 2014 02:44:53 +0000
Received: from [85.158.139.211:25989] by server-8.bemta-5.messagelabs.com id
	1F/20-29838-42C3BD25; Sun, 19 Jan 2014 02:44:52 +0000
X-Env-Sender: imp@bsdimp.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390099490!10532423!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21125 invoked from network); 19 Jan 2014 02:44:51 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Jan 2014 02:44:51 -0000
Received: by mail-ig0-f173.google.com with SMTP id c10so5057004igq.0
	for <xen-devel@lists.xen.org>; Sat, 18 Jan 2014 18:44:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:sender:subject:mime-version:content-type:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=miznZQsfRtjA5VuYPXfaIynv5+4GL60bP9kdQW9VNR0=;
	b=Qdk1TukRKspf1+OHWRCFADwXjTOHmo4ETfWuDVgESs45+yUco27XjWHW30Cf1nkdBd
	HZJk3bUlzNSZkfM1cin0b3winlEzjfZCqMcMKLxQplQS4FsWbMYbFEjsiDWpxE5aHpT8
	qyJFo/j6id7XURl2cpCgEzN7g/Srwk0mBFEq34hHOZASWYf7sV09iM1TaPvkdMQYKfSZ
	4Et+H892FcTq5jC2ZPidqW2ASsQwRbQhqPeYP+a+14AaK0tR0twRE63bbfz6GouzvWPE
	g+hP/dQPJkF6AhmjLozI71QSSDC0Y8TSdiMdElsy4m9sZ2hkgiGCJS5/e6ySSlVrCjsB
	08OA==
X-Gm-Message-State: ALoCoQlZ0aSQZh+p/tddoh/o3RGdpRbCrhgi0cCLQBrqaezvUcCKnnqx7ou+0D0pRNHAFu1HIq4i
X-Received: by 10.42.122.146 with SMTP id n18mr124691icr.41.1390099490219;
	Sat, 18 Jan 2014 18:44:50 -0800 (PST)
Received: from fusion-mac.bsdimp.com
	(50-78-194-198-static.hfc.comcastbusiness.net. [50.78.194.198])
	by mx.google.com with ESMTPSA id gc2sm12532065igd.6.2014.01.18.18.44.49
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 18 Jan 2014 18:44:49 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1085)
From: Warner Losh <imp@bsdimp.com>
In-Reply-To: <52DB1138.6010804@linaro.org>
Date: Sat, 18 Jan 2014 19:44:09 -0700
Message-Id: <3AE8EDE6-D931-4F93-9BF7-ABFB297B5B96@bsdimp.com>
References: <1389733267-20822-1-git-send-email-julien.grall@linaro.org>
	<24851B79-7EC7-4E3A-94DB-4B9B86FDFFFC@bsdimp.com>
	<52D6B62A.9000208@linaro.org> <52D73C4E.2080306@freebsd.org>
	<52D87B15.5090208@linaro.org> <52D89DC9.7050303@freebsd.org>
	<52DB1138.6010804@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
X-Mailer: Apple Mail (2.1085)
X-Mailman-Approved-At: Sun, 19 Jan 2014 10:22:07 +0000
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	xen-devel@lists.xen.org, freebsd-xen@freebsd.org, freebsd-arm@FreeBSD.org,
	Nathan Whitehorn <nwhitehorn@freebsd.org>, gibbs@freebsd.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [RFC] Add support for Xen ARM guest on FreeBSD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 18, 2014, at 4:41 PM, Julien Grall wrote:

> 
> Hello Nathan,
> 
> On 01/17/2014 03:04 AM, Nathan Whitehorn wrote:
>> On 01/16/14 18:36, Julien Grall wrote:
>>> 
>>> 
>>> On 01/16/2014 01:56 AM, Nathan Whitehorn wrote:
>>> As I understand it, only the simple-bus code (see simplebus_attach)
>>> translates a device's interrupts into a resource.
>>> So if you have a node with interrupts and MMIO attached directly to
>>> the root node, the driver won't be able to retrieve and translate
>>> the interrupts via bus_alloc_resources.
> 
>> Why not? nexus on ARM, MIPS, PowerPC, and sparc64 can do this.
> 
> I have dug into the code to find the cause of my issue. FreeBSD takes a VM fault when the driver (xen-dt) tries to set up the IRQ.
> 
> This is because the GIC is not yet initialized when FreeBSD asks to unmask the IRQ (sys/arm/arm/gic.c:306).
> 
> Because of this, no device node that appears before the GIC in the device tree can have interrupts. For instance, this simple device tree will make FreeBSD fault:
> 
> / {
> 
>  mybus {
>     compatible = "simple-bus";
> 
>     mynode {
>        interrupt-parent = &gic;
>        interrupts = <...>;
>     };
> 
>     gic: gic@xxxx {
>        interrupt-controller;
>     };
>  };
> };
> 
> The node "mynode" will have to be moved after the GIC for it to work correctly.

This stems from a difference in enumeration between FreeBSD and Linux. FreeBSD enumerates the devices in DTB order, while Linux does a partial ordering based on dependencies.
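To illustrate, reordering the DTB along these lines (same hypothetical node names as above) would let FreeBSD's in-order enumeration attach the GIC before any node that references it:

```dts
/ {
	mybus {
		compatible = "simple-bus";

		/* The GIC now appears before any node that references it,
		 * so DTB-order enumeration initializes it first. */
		gic: gic@xxxx {
			interrupt-controller;
		};

		mynode {
			interrupt-parent = <&gic>;
			interrupts = <...>;
		};
	};
};
```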

Warner
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 02:21:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 02:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W54TX-0001fE-9i; Mon, 20 Jan 2014 02:20:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dennis.yxun@gmail.com>) id 1W54TU-0001f9-J7
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 02:20:28 +0000
Received: from [85.158.143.35:54950] by server-1.bemta-4.messagelabs.com id
	B5/3E-02132-BE78CD25; Mon, 20 Jan 2014 02:20:27 +0000
X-Env-Sender: dennis.yxun@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390184425!12623066!1
X-Originating-IP: [209.85.128.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3004 invoked from network); 20 Jan 2014 02:20:26 -0000
Received: from mail-qe0-f43.google.com (HELO mail-qe0-f43.google.com)
	(209.85.128.43)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 02:20:26 -0000
Received: by mail-qe0-f43.google.com with SMTP id nc12so5928346qeb.2
	for <xen-devel@lists.xen.org>; Sun, 19 Jan 2014 18:20:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:content-transfer-encoding;
	bh=SzHHLVpsIS152N4Ik5g/BO0QuA4eJTVormGNRQLKJ4k=;
	b=qJad0cuSyJf569UWOrfKlhH1OCVsiFvEc1ceAr0RPZ14Q5lVn85M7lKML3YNeDFS15
	RB5pFZ0McAskrsN3mktglqldXBYfhetMA7N4dHSkKViAGQkbMN1m0BlcWNP5EbIFDBy4
	/M7l7bFX7V9M89kjcCZym32GAAPnwcchXD6xWofNRFKQiDg7yoZGD+JRChcsrajl/cgf
	QRnbXt81iztsuyO7Z7+ThoqhgFK8VJfmfHXEuAfRp2MHNrisoCENi75YA/pym1i9rAL9
	7XuyTPfHxunHf+ldvN0NzGgFlFK5mp59EZjlURZmvKvTCiMwH26HdJrzGZXfEJcpr8ec
	nBmA==
MIME-Version: 1.0
X-Received: by 10.229.71.69 with SMTP id g5mr24127731qcj.6.1390184425715; Sun,
	19 Jan 2014 18:20:25 -0800 (PST)
Received: by 10.140.96.108 with HTTP; Sun, 19 Jan 2014 18:20:25 -0800 (PST)
In-Reply-To: <52DA351C.2030604@citrix.com>
References: <52B181EB.6080303@samsung.com>
	<alpine.DEB.2.02.1312181143210.8667@kaball.uk.xensource.com>
	<21169.39199.902511.563980@mariner.uk.xensource.com>
	<52B1A1B9.70307@samsung.com>
	<1387373404.28680.19.camel@kazak.uk.xensource.com>
	<52B1B616.4010402@samsung.com>
	<CAF1ZMEcYpznv-yX5VizGZV8o62rUrwtQ1mz6MzMKzyMVPMg8jA@mail.gmail.com>
	<1389951973.6697.47.camel@kazak.uk.xensource.com>
	<CAF1ZMEc_NwxnvS4a6BUxBgggNhZ0kxWfZLfZX2WQRP_pPLnudQ@mail.gmail.com>
	<1389955476.6697.58.camel@kazak.uk.xensource.com>
	<CAF1ZMEcpR5pz9438633pLj7ATXgiQDXVMJN0XPKvg9h1tBPxbQ@mail.gmail.com>
	<1389956491.6697.64.camel@kazak.uk.xensource.com>
	<CAF1ZMEeFAU6enn-+fjryNfAqJDQAqbgVVHJRMraMMJwniHp95w@mail.gmail.com>
	<1389959942.6697.87.camel@kazak.uk.xensource.com>
	<52D9309D.1030808@citrix.com>
	<CAF1ZMEe6U-PrSo2S_vqg0ukOHM4tag5wE52scd3+M9tOhGEC0A@mail.gmail.com>
	<52DA351C.2030604@citrix.com>
Date: Mon, 20 Jan 2014 10:20:25 +0800
Message-ID: <CAF1ZMEc9voS5vuDsg=vktSzXgkyPqKEq9uB8pqNAmn7FVGG6rQ@mail.gmail.com>
From: "Dennis Lan (dlan)" <dennis.yxun@gmail.com>
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Eugene Fedotov <e.fedotov@samsung.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Unable to create VM with nic device on Arndale
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 18, 2014 at 4:02 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> Hello,
>
> Thanks for the log, could you please post the output of xenstore-ls -fp
> after a failed domain creation?
>
> My first guess is that your xencommons init script is outdated, could
> you check if your xencommons init script has the following line:
>
> 113                 ${BINDIR}/xenstore-write "/local/domain/0/domid" 0
>
> Roger.

Hi Roger:
   Thanks, adding this line solved my problem ;-)

   And I didn't make it clear that I'm using Gentoo Linux.
We slightly rewrote the init.d script to make it fit into the system,
and this change (compared to 4.3.1) caused the problem,
since we inherited the script from the previous version, 4.3.1 (thus an old script).
    I wonder if it's worth making the error or warning more useful.
Your previous patch looks good, and could be useful.
(Not sure others will hit the same problem..)

Lan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 02:33:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 02:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W54gH-00029e-Ts; Mon, 20 Jan 2014 02:33:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W54gG-00029Y-5M
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 02:33:40 +0000
Received: from [85.158.137.68:8446] by server-13.bemta-3.messagelabs.com id
	96/87-28603-30B8CD25; Mon, 20 Jan 2014 02:33:39 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390185217!10040413!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3516 invoked from network); 20 Jan 2014 02:33:38 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 02:33:38 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0K2XWhC021422
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 02:33:33 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0K2XUxu011427
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 20 Jan 2014 02:33:30 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0K2XTlS005242; Mon, 20 Jan 2014 02:33:29 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 19 Jan 2014 18:33:29 -0800
Message-ID: <52DC8AF9.3040807@oracle.com>
Date: Mon, 20 Jan 2014 10:33:29 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1389830228-2381-1-git-send-email-Annie.li@oracle.com>
	<52D7BE19.2010009@citrix.com> <52D8CCE4.9010804@oracle.com>
	<20140117120810.GA11681@zion.uk.xensource.com>
	<52D922DD.2060407@oracle.com>
	<20140117140246.GB11681@zion.uk.xensource.com>
	<52D94F8C.7060509@oracle.com> <52D96D73.1030803@citrix.com>
In-Reply-To: <52D96D73.1030803@citrix.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xen.org,
	andrew.bennieston@citrix.com, davem@davemloft.net
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/18 1:50, David Vrabel wrote:
> On 17/01/14 15:43, annie li wrote:
>> No, I am trying to implement 2 patches.
> I don't understand the need for two patches here, particularly when
> the first patch introduces a security issue.

This basically comes down to personal taste. My original patch removes
unnecessary grant-transfer code and keeps the rx release path consistent
with the tx path; the security issue you mentioned exists in the current
tx path too. The second patch would change gnttab_end_foreign_access and
its users in netfront tx/rx, blkfront, etc. But if you would like me to
merge them together, I can do that.

Thanks
Annie
> You can fold the following
> (untested) patch into your v2 patch and give it a try?
>
> Thanks.
>
> David
>
> 8<----------------------
> xen-netfront: prevent unsafe reuse of rx buf pages after uninit
>
> ---
>   drivers/net/xen-netfront.c |   21 +++++++++++++++++----
>   1 files changed, 17 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 692589e..47aa599 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -1134,19 +1134,32 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
>   
>   static void xennet_release_rx_bufs(struct netfront_info *np)
>   {
> -	struct sk_buff *skb;
>   	int id, ref;
>   
>   	spin_lock_bh(&np->rx_lock);
>   
>   	for (id = 0; id < NET_RX_RING_SIZE; id++) {
> +		struct sk_buff *skb;
> +		skb_frag_t *frag;
> +		const struct page *page;
> +
> +		skb = np->rx_skbs[id];
> +		if (!skb)
> +			continue;
> +
>   		ref = np->grant_rx_ref[id];
>   		if (ref == GRANT_INVALID_REF)
>   			continue;
>   
> -		skb = np->rx_skbs[id];
> -		gnttab_end_foreign_access_ref(ref, 0);
> -		gnttab_release_grant_reference(&np->gref_rx_head, ref);
> +		frag = &skb_shinfo(skb)->frags[0];
> +		page = skb_frag_page(frag);
> +
> +		/* gnttab_end_foreign_access() needs a page ref until
> +		 * foreign access is ended (which may be deferred).
> +		 */
> +		get_page(page);
> +
> +		gnttab_end_foreign_access(ref, 0, page);
>   		np->grant_rx_ref[id] = GRANT_INVALID_REF;
>   
>   		kfree_skb(skb);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 05:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 05:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W56z4-0008BM-KB; Mon, 20 Jan 2014 05:01:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W56z2-0008B6-IW; Mon, 20 Jan 2014 05:01:12 +0000
Received: from [85.158.139.211:31443] by server-14.bemta-5.messagelabs.com id
	3E/5A-24200-79DACD25; Mon, 20 Jan 2014 05:01:11 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390194070!10660588!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10077 invoked from network); 20 Jan 2014 05:01:10 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 05:01:10 -0000
Received: by mail-la0-f47.google.com with SMTP id el20so2662035lab.34
	for <multiple recipients>; Sun, 19 Jan 2014 21:01:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=ehGQBRHrnMmb4M8a+J7d8rRfXvyxjbRe/9ZUvxQl9no=;
	b=SFfe0IdLHj99RbLRw+jffCIhA90hIM8/sTS4CNEB0VB9DkzzNL3p2T6R4iWkcGiwfk
	gm/Prejfcq2xUEYIw0YT9LEVGK7s2Bt/St6beTxaXwksH7OKdL+1kcnmlg5XECBMG9Ni
	TQQfd2hC5+aroo5CiDAyjc2KhewkEbW9bFaLesp2QWStkXVySP9ZN1Wtb0E8uo9rkVwQ
	eWbz4yYBMExSXHdCeBYKFIY5wTPBt1u9gR7hUcxz82Hbt1dyW+ZV7BvlhoJ0g7MakUky
	tlT+BXlIrPIK2aGByO+YP87BOfVS74XjOUhCa09uFXENOUxBOaHe2FjB33D0zzf40brN
	FHRg==
MIME-Version: 1.0
X-Received: by 10.112.139.35 with SMTP id qv3mr274lbb.47.1390194069888; Sun,
	19 Jan 2014 21:01:09 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Sun, 19 Jan 2014 21:01:09 -0800 (PST)
Date: Mon, 20 Jan 2014 00:01:09 -0500
X-Google-Sender-Auth: R1NXL8tGNB9r_mx1Iy__oFmxjdU
Message-ID: <CAHehzX0UcfqKd6Uy2vW0niEuypYRT4taLvvLQSLscTPGHzC_9w@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-devel@lists.xen.org, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] TODAY, Jan. 20, is Xen Test Day for Xen 4.4 RC2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a reminder that today is the Xen Test Day for Xen 4.4 RC2.

This is the first Test Day for the 4.4 codebase (we had no Test Day
for RC1 due to the number of people taking time off in late December).
As such, it is extremely important that we take some time to make
sure the code is functioning properly.  Please try to spend some time
today loading and testing Xen 4.4 RC2.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions

Developers: please consider monitoring the Freenode IRC channel
#xentest today to make sure that people are able to build and test the
code.

Hope to see you today on #xentest!

Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 05:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 05:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W56z4-0008BM-KB; Mon, 20 Jan 2014 05:01:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W56z2-0008B6-IW; Mon, 20 Jan 2014 05:01:12 +0000
Received: from [85.158.139.211:31443] by server-14.bemta-5.messagelabs.com id
	3E/5A-24200-79DACD25; Mon, 20 Jan 2014 05:01:11 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390194070!10660588!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10077 invoked from network); 20 Jan 2014 05:01:10 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 05:01:10 -0000
Received: by mail-la0-f47.google.com with SMTP id el20so2662035lab.34
	for <multiple recipients>; Sun, 19 Jan 2014 21:01:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=ehGQBRHrnMmb4M8a+J7d8rRfXvyxjbRe/9ZUvxQl9no=;
	b=SFfe0IdLHj99RbLRw+jffCIhA90hIM8/sTS4CNEB0VB9DkzzNL3p2T6R4iWkcGiwfk
	gm/Prejfcq2xUEYIw0YT9LEVGK7s2Bt/St6beTxaXwksH7OKdL+1kcnmlg5XECBMG9Ni
	TQQfd2hC5+aroo5CiDAyjc2KhewkEbW9bFaLesp2QWStkXVySP9ZN1Wtb0E8uo9rkVwQ
	eWbz4yYBMExSXHdCeBYKFIY5wTPBt1u9gR7hUcxz82Hbt1dyW+ZV7BvlhoJ0g7MakUky
	tlT+BXlIrPIK2aGByO+YP87BOfVS74XjOUhCa09uFXENOUxBOaHe2FjB33D0zzf40brN
	FHRg==
MIME-Version: 1.0
X-Received: by 10.112.139.35 with SMTP id qv3mr274lbb.47.1390194069888; Sun,
	19 Jan 2014 21:01:09 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Sun, 19 Jan 2014 21:01:09 -0800 (PST)
Date: Mon, 20 Jan 2014 00:01:09 -0500
X-Google-Sender-Auth: R1NXL8tGNB9r_mx1Iy__oFmxjdU
Message-ID: <CAHehzX0UcfqKd6Uy2vW0niEuypYRT4taLvvLQSLscTPGHzC_9w@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-devel@lists.xen.org, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] TODAY, Jan. 20, is Xen Test Day for Xen 4.4 RC2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a reminder that today is the Xen Test Day for Xen 4.4 RC2.

This is the first Test Day for the 4.4 codebase (we had no Test Day
for RC1 due to the number of people taking time off in late December).
As such, it is extremely important that we take some time to make
sure the code is functioning properly.  Please try to spend some time
today loading and testing Xen 4.4 RC2.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions

Developers: please consider monitoring the Freenode IRC channel
#xentest today to make sure that people are able to build and test the
code.

Hope to see you today on #xentest!

Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 05:48:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 05:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W57iT-0001cr-7N; Mon, 20 Jan 2014 05:48:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1W57iR-0001cj-OC
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 05:48:07 +0000
Received: from [85.158.137.68:17251] by server-10.bemta-3.messagelabs.com id
	7F/A1-23989-698BCD25; Mon, 20 Jan 2014 05:48:06 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390196885!6418957!1
X-Originating-IP: [209.85.214.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2965 invoked from network); 20 Jan 2014 05:48:06 -0000
Received: from mail-ob0-f181.google.com (HELO mail-ob0-f181.google.com)
	(209.85.214.181)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 05:48:06 -0000
Received: by mail-ob0-f181.google.com with SMTP id va2so3168051obc.12
	for <xen-devel@lists.xen.org>; Sun, 19 Jan 2014 21:48:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=oOeR9JdnE3tmZu3BBX/KoRFOD+taCRAZm1FnR84sS4s=;
	b=vsZF5f7QsZNN98oSnjhBSDb10GT3Tl8hKNMmeGRM1ewH8NWikuXZ07D2cfRtLEg/eD
	j8TYQc8/zEjh+N4Xp3uFNgbk4qO1QIk0W8HstLu2DRoEtkko4axWHJ4H/WHmcWlSvEwu
	EMFA91V4TgE8mC1miOeGACSOsj6kDsnY4HIku93MsB9e1ZdxIQ7h0FI7eK4MeJhDLiyt
	sBJz2PLRDUboinrmbHoiopKa2Dk8wESFBdDlwEbYYw5zJRvcx6btxqSbYvSoMcA7CimC
	6/c86WfauPX05VnJmkbe/dcDQx4fpSoQHaGlDJfiulKP+LRMJgy9J1SZwiUP6omh9F8e
	ELew==
MIME-Version: 1.0
X-Received: by 10.182.48.130 with SMTP id l2mr4707447obn.44.1390196884650;
	Sun, 19 Jan 2014 21:48:04 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Sun, 19 Jan 2014 21:48:04 -0800 (PST)
Date: Sun, 19 Jan 2014 21:48:04 -0800
Message-ID: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Is it possible to do VGA passthrough on xen-unstable with qemu-xen as
the device model? I tried, but I get an error that 'gfx_passthru' is an
invalid parameter for qemu-xen. I am able to do passthrough with
qemu-traditional, i.e. qemu-dm.

thanks,
Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 07:05:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 07:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W58uj-0004kE-SC; Mon, 20 Jan 2014 07:04:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W58uh-0004k9-L3
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 07:04:51 +0000
Received: from [85.158.139.211:42089] by server-5.bemta-5.messagelabs.com id
	F6/E3-14928-29ACCD25; Mon, 20 Jan 2014 07:04:50 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390201489!10651368!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14573 invoked from network); 20 Jan 2014 07:04:50 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-206.messagelabs.com with SMTP;
	20 Jan 2014 07:04:50 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 19 Jan 2014 23:00:43 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,689,1384329600"; d="scan'208";a="469353843"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 19 Jan 2014 23:04:30 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 19 Jan 2014 23:04:23 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 19 Jan 2014 23:04:23 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Mon, 20 Jan 2014 15:04:19 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Shakeel Butt <shakeel.butt@gmail.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
Thread-Index: AQHPFaN4d0WbbJ4Ux0yVoFE4Q2MhMJqNL2UQ
Date: Mon, 20 Jan 2014 07:04:19 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
In-Reply-To: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> Sent: Monday, January 20, 2014 1:48 PM
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> 
> Hi all,
> 
> Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> device model? I tried but I am getting error 'gfx_passthru' invalid
> parameter for qemu-xen. I am able to do passthrough with qemu
> traditional i.e. qemu-dm.

As far as I know, only qemu-traditional supports vga pass-through right now.
> 
> thanks,
> Shakeel
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Thanks,
Feng
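
Based on Feng's note, a minimal xl domain config sketch for the combination that reportedly works (gfx_passthru and device_model_version are standard xl.cfg options; the PCI BDF and other values here are illustrative only):

```
# Hypothetical HVM guest config: VGA passthrough with the
# traditional device model, per the discussion above.
builder = "hvm"
device_model_version = "qemu-xen-traditional"  # qemu-xen rejects gfx_passthru
gfx_passthru = 1
pci = [ "01:00.0" ]   # illustrative BDF of the graphics adapter
```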

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 08:01:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:01:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W59nf-0007Nd-Er; Mon, 20 Jan 2014 08:01:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W59nd-0007NY-Db
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 08:01:37 +0000
Received: from [85.158.139.211:43497] by server-13.bemta-5.messagelabs.com id
	26/A6-11357-0E7DCD25; Mon, 20 Jan 2014 08:01:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390204895!414326!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21110 invoked from network); 20 Jan 2014 08:01:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:01:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:01:35 +0000
Message-Id: <52DCE5EC0200007800114E59@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:01:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<1389974929.6697.122.camel@kazak.uk.xensource.com>
	<52D967AD0200007800114AA2@nat28.tlf.novell.com>
	<1389977676.6697.132.camel@kazak.uk.xensource.com>
In-Reply-To: <1389977676.6697.132.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.01.14 at 17:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 16:26 +0000, Jan Beulich wrote:
>> >>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
>> >> max_pfn/num_physpages isn't that far off for guests with less than
>> >> 4Gb, the number calculated from the PoD data is a little worse.
>> > 
>> > On ARM RAM may not start at 0 and so using max_pfn can be very
>> > misleading and in practice causes arm to balloon down to 0 as fast as it
>> > can.
>> 
>> Ugly. Is that only due to the temporary workaround for there not
>> being an IOMMU?
> 
> It's not to do with IOMMUs, no, and it isn't temporary.
> 
> Architecturally on ARM it's not required for RAM to be at address 0, and
> it is not uncommon for it to start at 1, 2 or 3GB (as a property of the
> SoC design).
> 
> If you have 128M of RAM at 0x80000000-0x88000000 then max_pfn is 0x88000
> but target pages is just 0x8000; if current_pages is initialised to
> max_pfn then the kernel immediately thinks it has to get rid of 0x80000
> pages.
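
Ian's arithmetic can be checked with a short sketch (a toy calculation only, assuming 4 KiB pages and the hypothetical 128M-at-0x80000000 layout from his example):

```python
# Worked example of the ARM ballooning mismatch described above
# (hypothetical SoC layout: 128 MiB of RAM starting at 0x80000000).
PAGE_SHIFT = 12  # 4 KiB pages

ram_base = 0x80000000
ram_size = 128 << 20  # 128 MiB

# max_pfn is the highest page frame number, so it also spans the
# address range below ram_base that holds no RAM at all.
max_pfn = (ram_base + ram_size) >> PAGE_SHIFT

# The balloon target only counts pages actually backed by RAM.
target_pages = ram_size >> PAGE_SHIFT

# Initialising current_pages from max_pfn makes the kernel think
# it must balloon out the difference immediately.
excess = max_pfn - target_pages

print(hex(max_pfn), hex(target_pages), hex(excess))
```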

And there is some sort of benefit from also doing this for virtual
machines?

>> And short of the initial value needing to be architecture specific -
>> can you see a calculation that would yield a decent result on ARM
>> that would also be suitable on x86?
> 
> I previously had a patch to use memblock_phys_mem_size(), but when I saw
> Boris switch to get_num_physpages() I thought that would be OK, but I
> didn't look into it very hard. Without checking, I suspect they return
> pretty much the same thing, and so memblock_phys_mem_size will have the
> same issue you observed (which I confess I haven't yet gone back and
> understood).

If there's no reserved memory in that range, I guess ARM might
be fine as is.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 08:08:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W59tx-0007W6-A3; Mon, 20 Jan 2014 08:08:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W59tw-0007W0-DI
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 08:08:08 +0000
Received: from [85.158.139.211:13448] by server-11.bemta-5.messagelabs.com id
	73/C0-23268-769DCD25; Mon, 20 Jan 2014 08:08:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390205284!10727193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17672 invoked from network); 20 Jan 2014 08:08:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:08:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:08:04 +0000
Message-Id: <52DCE7700200007800114E67@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:08:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<1389978832.6697.137.camel@kazak.uk.xensource.com>
In-Reply-To: <1389978832.6697.137.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.01.14 at 18:13, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-17 at 14:33 +0000, Jan Beulich wrote:
>> Interestingly, (a) too results in the driver not ballooning down
>> enough - there's a gap of exactly as many pages as are marked
>> reserved below the 1Mb boundary. Therefore aforementioned
>> upstream commit is presumably broken.
> 
> Can we count those reserved pages? (I guess you mean reserved in the
> e820?)

Yes, we could. But it's not logical to count the ones below 1Mb, but
not the ones above. Yet we can't (without knowledge of the tools/
firmware implementation) tell regions backed by RAM assigned to the
guest (e.g. the reserved pages below 1Mb, covering BIOS stuff)
from regions reserved for other reasons. A specific firmware could,
for example, have a larger BIOS region right below 4Gb (like many
non-virtual BIOSes do), which would then also be backed by RAM and
hence also need accounting.
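
As a rough illustration of what counting e820-reserved pages would look like (a sketch only: the e820 map and type constants here are hypothetical, and as noted above the map alone cannot tell RAM-backed reserved regions from other reserved regions):

```python
# Hedged sketch: counting e820-reserved pages below a boundary,
# the adjustment discussed above. Map contents are made up.
E820_RAM, E820_RESERVED = 1, 2
PAGE_SIZE = 4096
ONE_MB = 1 << 20

# Hypothetical e820 map: (start, size, type)
e820 = [
    (0x00000, 0xa0000, E820_RAM),
    (0xa0000, 0x60000, E820_RESERVED),   # legacy VGA/BIOS hole
    (0x100000, 0x3ff00000, E820_RAM),
]

def reserved_pages_below(limit):
    """Pages of type E820_RESERVED wholly or partly below `limit`."""
    total = 0
    for start, size, typ in e820:
        if typ != E820_RESERVED:
            continue
        end = min(start + size, limit)
        if end > start:
            total += (end - start) // PAGE_SIZE
    return total

print(reserved_pages_below(ONE_MB))
```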

>> Short of a reliable (and ideally architecture independent) way of
>> knowing the necessary adjustment value, the next best solution
>> (not ballooning down too little, but also not ballooning down much
>> more than necessary) turns out to be using the minimum of (b)
>> and (c): When the domain only has memory below 4Gb, (b) is
>> more precise, whereas in the other cases (c) gets closest.
> 
> I think I'd prefer an arch specific calculation (or an arch specific
> adjustment to a generic calculation) to either of the above.

Hmm, interesting. I would have expected a generic calculation to
be deemed preferable.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 08:10:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W59wR-0007wO-3w; Mon, 20 Jan 2014 08:10:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W59wP-0007wI-Rx
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 08:10:42 +0000
Received: from [85.158.139.211:65062] by server-9.bemta-5.messagelabs.com id
	C6/33-15098-10ADCD25; Mon, 20 Jan 2014 08:10:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390205440!10715755!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2404 invoked from network); 20 Jan 2014 08:10:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:10:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:10:40 +0000
Message-Id: <52DCE80D0200007800114E78@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:10:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
	<52D910BE02000078001147DC@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BFB93@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BFB93@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"chegger@amazon.de" <chegger@amazon.de>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit
 during IO emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.01.14 at 15:32, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-01-17:
>>>>> On 17.01.14 at 07:39, Yang Zhang <yang.z.zhang@intel.com> wrote:
>>> Sometimes L0 needs to decode an instruction of L2's in order to
>>> handle an IO access directly, and may get X86EMUL_RETRY while
>>> handling that IO request. If at the same time a virtual vmexit is
>>> pending (for example, an interrupt to be injected into L1), the
>>> hypervisor will switch the VCPU context from L2 to L1. We are then
>>> already in L1's context, but the X86EMUL_RETRY we just got means
>>> the hypervisor will retry handling the IO request later - and that
>>> retry will, unfortunately, happen in L1's context, which causes
>>> the problem.
>>> The fix is to allow no virtual vmexit/vmentry while an IO request
>>> is pending.
>>> 
>>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>>> ---
>>>  xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++
>> 
>> Didn't we agree earlier on to do this in common code?
>> 
> 
> I think you agreed with this fix. Let me double-check: do you mean
> moving the check to nhvm_interrupt_block() as Christoph suggested, or
> moving it to some other place in common code? Christoph's suggestion
> doesn't solve the issue, as I said in the previous thread. Also, since
> SVM and VMX handle the vmswitch totally independently, there is no
> proper place to put the check in common code such that it covers both.

Okay, fine with me then as is. Awaiting a VMX maintainer ack then...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 08:56:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Aek-00015w-BB; Mon, 20 Jan 2014 08:56:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5Aei-00015r-VY
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 08:56:29 +0000
Received: from [193.109.254.147:49582] by server-4.bemta-14.messagelabs.com id
	06/0A-03916-CB4ECD25; Mon, 20 Jan 2014 08:56:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390208187!11911885!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26668 invoked from network); 20 Jan 2014 08:56:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:56:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:56:27 +0000
Message-Id: <52DCF2C80200007800114ED7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:56:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartAA9929A8.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/x86/time: fix wc_version retry
	check
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartAA9929A8.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

When using | instead of || (attempting to make the compiler issue just
a single branch), both sides of the operator aren't separated by a
sequence point, and hence evaluation can happen in any order. In
particular the rightmost read of s->wc_version could get carried out
before the leftmost one. Use a local variable to prevent this. (In
reality the compiler is very likely to do only a single memory read
here anyway.)

Reported-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/i386/kernel/time-xen.c
+++ b/arch/i386/kernel/time-xen.c
@@ -257,6 +257,7 @@
 static void update_wallclock(void)
 {
 	shared_info_t *s = HYPERVISOR_shared_info;
+	u32 version;
 
 	do {
 		shadow_tv_version = s->wc_version;
@@ -264,7 +265,8 @@
 		shadow_tv.tv_sec  = s->wc_sec;
 		shadow_tv.tv_nsec = s->wc_nsec;
 		rmb();
-	} while ((s->wc_version & 1) | (shadow_tv_version ^ s->wc_version));
+		version = s->wc_version;
+	} while ((version & 1) | (shadow_tv_version ^ version));
 
 	if (!independent_wallclock)
 		__update_wallclock(shadow_tv.tv_sec, shadow_tv.tv_nsec);




--=__PartAA9929A8.0__=
Content-Type: text/plain; name="xenlinux-x86-update-wc-ordering.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xenlinux-x86-update-wc-ordering.patch"

x86/time: fix wc_version retry check

When using | instead of || (attempting to make the compiler issue just
a single branch), both sides of the operator aren't separated by a
sequence point, and hence evaluation can happen in any order. In
particular the rightmost read of s->wc_version could get carried out
before the leftmost one. Use a local variable to prevent this. (In
reality the compiler is very likely to do only a single memory read
here anyway.)

Reported-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/i386/kernel/time-xen.c
+++ b/arch/i386/kernel/time-xen.c
@@ -257,6 +257,7 @@
 static void update_wallclock(void)
 {
 	shared_info_t *s = HYPERVISOR_shared_info;
+	u32 version;
 
 	do {
 		shadow_tv_version = s->wc_version;
@@ -264,7 +265,8 @@
 		shadow_tv.tv_sec  = s->wc_sec;
 		shadow_tv.tv_nsec = s->wc_nsec;
 		rmb();
-	} while ((s->wc_version & 1) | (shadow_tv_version ^ s->wc_version));
+		version = s->wc_version;
+	} while ((version & 1) | (shadow_tv_version ^ version));
 
 	if (!independent_wallclock)
 		__update_wallclock(shadow_tv.tv_sec, shadow_tv.tv_nsec);
--=__PartAA9929A8.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartAA9929A8.0__=--


From xen-devel-bounces@lists.xen.org Mon Jan 20 08:56:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Af7-00017P-P8; Mon, 20 Jan 2014 08:56:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5Af6-00017D-6a
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 08:56:52 +0000
Received: from [193.109.254.147:19445] by server-16.bemta-14.messagelabs.com
	id FB/68-20600-3D4ECD25; Mon, 20 Jan 2014 08:56:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390208210!9590761!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12483 invoked from network); 20 Jan 2014 08:56:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:56:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:56:50 +0000
Message-Id: <52DCF2E00200007800114EDB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:56:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC2F141C0.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/drivers: prefer xenbus_write()
 over xenbus_printf() where possible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC2F141C0.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... as being the simpler variant.

This includes an inversion of netfront's HAVE_CSUM_OFFLOAD, in order to
be able to directly use __stringify() on it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/blkfront/blkfront.c
+++ b/drivers/xen/blkfront/blkfront.c
@@ -187,8 +187,8 @@ again:
 		message = "writing event-channel";
 		goto abort_transaction;
 	}
-	err = xenbus_printf(xbt, dev->nodename, "protocol", "%s",
-			    XEN_IO_PROTO_ABI_NATIVE);
+	err = xenbus_write(xbt, dev->nodename, "protocol",
+			   XEN_IO_PROTO_ABI_NATIVE);
 	if (err) {
 		message = "writing protocol";
 		goto abort_transaction;
 	}
--- a/drivers/xen/fbfront/xenfb.c
+++ b/drivers/xen/fbfront/xenfb.c
@@ -760,11 +760,11 @@ static int xenfb_connect_backend(struct
 			    irq_to_evtchn_port(irq));
 	if (ret)
 		goto error_xenbus;
-	ret = xenbus_printf(xbt, dev->nodename, "protocol", "%s",
-			    XEN_IO_PROTO_ABI_NATIVE);
+	ret = xenbus_write(xbt, dev->nodename, "protocol",
+			   XEN_IO_PROTO_ABI_NATIVE);
 	if (ret)
 		goto error_xenbus;
-	ret = xenbus_printf(xbt, dev->nodename, "feature-update", "1");
+	ret = xenbus_write(xbt, dev->nodename, "feature-update", "1");
 	if (ret)
 		goto error_xenbus;
 	ret = xenbus_transaction_end(xbt, 0);
--- a/drivers/xen/fbfront/xenkbd.c
+++ b/drivers/xen/fbfront/xenkbd.c
@@ -126,7 +126,7 @@ int __devinit xenkbd_probe(struct xenbus
 	if (xenbus_scanf(XBT_NIL, dev->otherend, "feature-abs-pointer", "%d", &abs) < 0)
 		abs = 0;
 	if (abs)
-		xenbus_printf(XBT_NIL, dev->nodename, "request-abs-pointer", "1");
+		xenbus_write(XBT_NIL, dev->nodename, "request-abs-pointer", "1");
 
 	/* keyboard */
 	kbd = input_allocate_device();
@@ -294,8 +294,8 @@ static void xenkbd_backend_changed(struc
 		if (ret < 0)
 			val = 0;
 		if (val) {
-			ret = xenbus_printf(XBT_NIL, info->xbdev->nodename,
-					    "request-abs-pointer", "1");
+			ret = xenbus_write(XBT_NIL, info->xbdev->nodename,
+					   "request-abs-pointer", "1");
 			if (ret)
 				; /* FIXME */
 		}
--- a/drivers/xen/netback/xenbus.c
+++ b/drivers/xen/netback/xenbus.c
@@ -107,8 +107,7 @@ static int netback_probe(struct xenbus_d
 		}
 
 		/* We support rx-copy path. */
-		err = xenbus_printf(xbt, dev->nodename,
-				    "feature-rx-copy", "%d", 1);
+		err = xenbus_write(xbt, dev->nodename, "feature-rx-copy", "1");
 		if (err) {
 			message = "writing feature-rx-copy";
 			goto abort_transaction;
@@ -118,8 +117,7 @@ static int netback_probe(struct xenbus_d
 		 * We don't support rx-flip path (except old guests who don't
 		 * grok this feature flag).
 		 */
-		err = xenbus_printf(xbt, dev->nodename,
-				    "feature-rx-flip", "%d", 0);
+		err = xenbus_write(xbt, dev->nodename, "feature-rx-flip", "0");
 		if (err) {
 			message = "writing feature-rx-flip";
 			goto abort_transaction;
--- a/drivers/xen/netfront/netfront.c
+++ b/drivers/xen/netfront/netfront.c
@@ -37,6 +37,7 @@
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/stringify.h>
 #include <linux/errno.h>
 #include <linux/netdevice.h>
 #include <linux/inetdevice.h>
@@ -100,7 +101,7 @@ static const int MODPARM_rx_flip = 0;
 #if defined(NETIF_F_GSO)
 #define HAVE_GSO			1
 #define HAVE_TSO			1 /* TSO is a subset of GSO */
-#define HAVE_CSUM_OFFLOAD		1
+#define NO_CSUM_OFFLOAD			0
 static inline void dev_disable_gso_features(struct net_device *dev)
 {
 	/* Turn off all GSO bits except ROBUST. */
@@ -116,7 +117,7 @@ static inline void dev_disable_gso_featu
  * with the presence of NETIF_F_TSO but it appears to be a good first
  * approximiation.
  */
-#define HAVE_CSUM_OFFLOAD              0
+#define NO_CSUM_OFFLOAD			1
 
 #define gso_size tso_size
 #define gso_segs tso_segs
@@ -143,7 +144,7 @@ static inline int netif_needs_gso(struct
 #else
 #define HAVE_GSO			0
 #define HAVE_TSO			0
-#define HAVE_CSUM_OFFLOAD		0
+#define NO_CSUM_OFFLOAD			1
 #define netif_needs_gso(dev, skb)	0
 #define dev_disable_gso_features(dev)	((void)0)
 #define ethtool_op_set_tso(dev, data)	(-ENOSYS)
@@ -420,27 +421,27 @@ again:
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-rx-notify", "%d", 1);
+	err = xenbus_write(xbt, dev->nodename, "feature-rx-notify", "1");
 	if (err) {
 		message = "writing feature-rx-notify";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-no-csum-offload",
-			    "%d", !HAVE_CSUM_OFFLOAD);
+	err = xenbus_write(xbt, dev->nodename, "feature-no-csum-offload",
+			   __stringify(NO_CSUM_OFFLOAD));
 	if (err) {
 		message = "writing feature-no-csum-offload";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-sg", "%d", 1);
+	err = xenbus_write(xbt, dev->nodename, "feature-sg", "1");
 	if (err) {
 		message = "writing feature-sg";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv4", "%d",
-			    HAVE_TSO);
+	err = xenbus_write(xbt, dev->nodename, "feature-gso-tcpv4",
+			   __stringify(HAVE_TSO));
 	if (err) {
 		message = "writing feature-gso-tcpv4";
 		goto abort_transaction;
--- a/drivers/xen/pcifront/xenbus.c
+++ b/drivers/xen/pcifront/xenbus.c
@@ -124,8 +124,8 @@ static int pcifront_publish_info(struct
 		err = xenbus_printf(trans, pdev->xdev->nodename,
 				    "event-channel", "%u", pdev->evtchn);
 	if (!err)
-		err = xenbus_printf(trans, pdev->xdev->nodename,
-				    "magic", XEN_PCI_MAGIC);
+		err =3D xenbus_write(trans, pdev->xdev->nodename, "magic",
+				   XEN_PCI_MAGIC);
=20
 	if (err) {
 		xenbus_transaction_end(trans, 1);
--- a/drivers/xen/tpmback/xenbus.c
+++ b/drivers/xen/tpmback/xenbus.c
@@ -176,7 +176,6 @@ static void connect(struct backend_info=20
 	struct xenbus_transaction xbt;
 	int err;
 	struct xenbus_device *dev =3D be->dev;
-	unsigned long ready =3D 1;
=20
 again:
 	err =3D xenbus_transaction_start(&xbt);
@@ -185,8 +184,7 @@ again:
 		return;
 	}
=20
-	err =3D xenbus_printf(xbt, be->dev->nodename,
-			    "ready", "%lu", ready);
+	err =3D xenbus_write(xbt, be->dev->nodename, "ready", "1");
 	if (err) {
 		xenbus_dev_fatal(be->dev, err, "writing 'ready'");
 		goto abort;



--=__PartC2F141C0.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC2F141C0.0__=--


From xen-devel-bounces@lists.xen.org Mon Jan 20 08:56:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 08:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Af7-00017P-P8; Mon, 20 Jan 2014 08:56:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5Af6-00017D-6a
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 08:56:52 +0000
Received: from [193.109.254.147:19445] by server-16.bemta-14.messagelabs.com
	id FB/68-20600-3D4ECD25; Mon, 20 Jan 2014 08:56:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390208210!9590761!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12483 invoked from network); 20 Jan 2014 08:56:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 08:56:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 08:56:50 +0000
Message-Id: <52DCF2E00200007800114EDB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 08:56:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC2F141C0.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/drivers: prefer xenbus_write()
 over xenbus_printf() where possible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC2F141C0.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... as being the simpler variant.

This includes an inversion of netfront's HAVE_CSUM_OFFLOAD, in order to
be able to directly use __stringify() on it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/blkfront/blkfront.c
+++ b/drivers/xen/blkfront/blkfront.c
@@ -187,8 +187,8 @@ again:
 		message = "writing event-channel";
 		goto abort_transaction;
 	}
-	err = xenbus_printf(xbt, dev->nodename, "protocol", "%s",
-			    XEN_IO_PROTO_ABI_NATIVE);
+	err = xenbus_write(xbt, dev->nodename, "protocol",
+			   XEN_IO_PROTO_ABI_NATIVE);
 	if (err) {
 		message = "writing protocol";
 		goto abort_transaction;
--- a/drivers/xen/fbfront/xenfb.c
+++ b/drivers/xen/fbfront/xenfb.c
@@ -760,11 +760,11 @@ static int xenfb_connect_backend(struct
 			    irq_to_evtchn_port(irq));
 	if (ret)
 		goto error_xenbus;
-	ret = xenbus_printf(xbt, dev->nodename, "protocol", "%s",
-			    XEN_IO_PROTO_ABI_NATIVE);
+	ret = xenbus_write(xbt, dev->nodename, "protocol",
+			   XEN_IO_PROTO_ABI_NATIVE);
 	if (ret)
 		goto error_xenbus;
-	ret = xenbus_printf(xbt, dev->nodename, "feature-update", "1");
+	ret = xenbus_write(xbt, dev->nodename, "feature-update", "1");
 	if (ret)
 		goto error_xenbus;
 	ret = xenbus_transaction_end(xbt, 0);
--- a/drivers/xen/fbfront/xenkbd.c
+++ b/drivers/xen/fbfront/xenkbd.c
@@ -126,7 +126,7 @@ int __devinit xenkbd_probe(struct xenbus
 	if (xenbus_scanf(XBT_NIL, dev->otherend, "feature-abs-pointer", "%d", &abs) < 0)
 		abs = 0;
 	if (abs)
-		xenbus_printf(XBT_NIL, dev->nodename, "request-abs-pointer", "1");
+		xenbus_write(XBT_NIL, dev->nodename, "request-abs-pointer", "1");
 
 	/* keyboard */
 	kbd = input_allocate_device();
@@ -294,8 +294,8 @@ static void xenkbd_backend_changed(struc
 		if (ret < 0)
 			val = 0;
 		if (val) {
-			ret = xenbus_printf(XBT_NIL, info->xbdev->nodename,
-					    "request-abs-pointer", "1");
+			ret = xenbus_write(XBT_NIL, info->xbdev->nodename,
+					   "request-abs-pointer", "1");
 			if (ret)
 				; /* FIXME */
 		}
--- a/drivers/xen/netback/xenbus.c
+++ b/drivers/xen/netback/xenbus.c
@@ -107,8 +107,7 @@ static int netback_probe(struct xenbus_d
 		}
 
 		/* We support rx-copy path. */
-		err = xenbus_printf(xbt, dev->nodename,
-				    "feature-rx-copy", "%d", 1);
+		err = xenbus_write(xbt, dev->nodename, "feature-rx-copy", "1");
 		if (err) {
 			message = "writing feature-rx-copy";
 			goto abort_transaction;
@@ -118,8 +117,7 @@ static int netback_probe(struct xenbus_d
 		 * We don't support rx-flip path (except old guests who don't
 		 * grok this feature flag).
 		 */
-		err = xenbus_printf(xbt, dev->nodename,
-				    "feature-rx-flip", "%d", 0);
+		err = xenbus_write(xbt, dev->nodename, "feature-rx-flip", "0");
 		if (err) {
 			message = "writing feature-rx-flip";
 			goto abort_transaction;
--- a/drivers/xen/netfront/netfront.c
+++ b/drivers/xen/netfront/netfront.c
@@ -37,6 +37,7 @@
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/stringify.h>
 #include <linux/errno.h>
 #include <linux/netdevice.h>
 #include <linux/inetdevice.h>
@@ -100,7 +101,7 @@ static const int MODPARM_rx_flip = 0;
 #if defined(NETIF_F_GSO)
 #define HAVE_GSO			1
 #define HAVE_TSO			1 /* TSO is a subset of GSO */
-#define HAVE_CSUM_OFFLOAD		1
+#define NO_CSUM_OFFLOAD			0
 static inline void dev_disable_gso_features(struct net_device *dev)
 {
 	/* Turn off all GSO bits except ROBUST. */
@@ -116,7 +117,7 @@ static inline void dev_disable_gso_featu
  * with the presence of NETIF_F_TSO but it appears to be a good first
  * approximiation.
  */
-#define HAVE_CSUM_OFFLOAD              0
+#define NO_CSUM_OFFLOAD			1
 
 #define gso_size tso_size
 #define gso_segs tso_segs
@@ -143,7 +144,7 @@ static inline int netif_needs_gso(struct
 #else
 #define HAVE_GSO			0
 #define HAVE_TSO			0
-#define HAVE_CSUM_OFFLOAD		0
+#define NO_CSUM_OFFLOAD			1
 #define netif_needs_gso(dev, skb)	0
 #define dev_disable_gso_features(dev)	((void)0)
 #define ethtool_op_set_tso(dev, data)	(-ENOSYS)
@@ -420,27 +421,27 @@ again:
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-rx-notify", "%d", 1);
+	err = xenbus_write(xbt, dev->nodename, "feature-rx-notify", "1");
 	if (err) {
 		message = "writing feature-rx-notify";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-no-csum-offload",
-			    "%d", !HAVE_CSUM_OFFLOAD);
+	err = xenbus_write(xbt, dev->nodename, "feature-no-csum-offload",
+			   __stringify(NO_CSUM_OFFLOAD));
 	if (err) {
 		message = "writing feature-no-csum-offload";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-sg", "%d", 1);
+	err = xenbus_write(xbt, dev->nodename, "feature-sg", "1");
 	if (err) {
 		message = "writing feature-sg";
 		goto abort_transaction;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-gso-tcpv4", "%d",
-			    HAVE_TSO);
+	err = xenbus_write(xbt, dev->nodename, "feature-gso-tcpv4",
+			   __stringify(HAVE_TSO));
 	if (err) {
 		message = "writing feature-gso-tcpv4";
 		goto abort_transaction;
--- a/drivers/xen/pcifront/xenbus.c
+++ b/drivers/xen/pcifront/xenbus.c
@@ -124,8 +124,8 @@ static int pcifront_publish_info(struct
 		err = xenbus_printf(trans, pdev->xdev->nodename,
 				    "event-channel", "%u", pdev->evtchn);
 	if (!err)
-		err = xenbus_printf(trans, pdev->xdev->nodename,
-				    "magic", XEN_PCI_MAGIC);
+		err = xenbus_write(trans, pdev->xdev->nodename, "magic",
+				   XEN_PCI_MAGIC);
 
 	if (err) {
 		xenbus_transaction_end(trans, 1);
--- a/drivers/xen/tpmback/xenbus.c
+++ b/drivers/xen/tpmback/xenbus.c
@@ -176,7 +176,6 @@ static void connect(struct backend_info
 	struct xenbus_transaction xbt;
 	int err;
 	struct xenbus_device *dev = be->dev;
-	unsigned long ready = 1;
 
 again:
 	err = xenbus_transaction_start(&xbt);
@@ -185,8 +184,7 @@ again:
 		return;
 	}
 
-	err = xenbus_printf(xbt, be->dev->nodename,
-			    "ready", "%lu", ready);
+	err = xenbus_write(xbt, be->dev->nodename, "ready", "1");
 	if (err) {
 		xenbus_dev_fatal(be->dev, err, "writing 'ready'");
 		goto abort;



--=__PartC2F141C0.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC2F141C0.0__=--


From xen-devel-bounces@lists.xen.org Mon Jan 20 09:07:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:07:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Apd-0001l7-Lc; Mon, 20 Jan 2014 09:07:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0901ce0cf=chegger@amazon.de>)
	id 1W5Apb-0001l0-Nd
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 09:07:44 +0000
Received: from [193.109.254.147:8974] by server-15.bemta-14.messagelabs.com id
	0A/08-22186-F57ECD25; Mon, 20 Jan 2014 09:07:43 +0000
X-Env-Sender: prvs=0901ce0cf=chegger@amazon.de
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390208860!11904366!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21462 invoked from network); 20 Jan 2014 09:07:42 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:07:42 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1390208862; x=1421744862;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=ZWwYXcZHUV8NlAlcJOFo6A5LgPzjM3ysCylVlS3nKvg=;
	b=S91osblyz1QuMXoeR2b/p+mKVDNFLm0Agmax2/7nPfOkUYMql1tt7WwI
	OyXcghU2qxaXnYrSvNt6wDKL2lt2fdpUDCxR3IHt9rfs1yZDx2zgM9uZH
	7WUj47zHPX9tOwbZiB4DykR/nCAVF+YQVfjGPpRy46fZ2j/MY6fqni8Tm A=;
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="64277667"
Received: from email-inbound-relay-60002.pdx1.amazon.com ([10.232.152.194])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 20 Jan 2014 09:07:39 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by email-inbound-relay-60002.pdx1.amazon.com (8.14.7/8.14.7) with ESMTP
	id s0K97cJe010262
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Mon, 20 Jan 2014 09:07:38 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.342.3; Mon, 20 Jan 2014 01:07:33 -0800
Message-ID: <52DCE753.1040507@amazon.de>
Date: Mon, 20 Jan 2014 10:07:31 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>, 
	"Dong, Eddie" <eddie.dong@intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
	<52D4F57F020000780011362C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BC3CB@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BC3CB@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14.01.14 08:38, Zhang, Yang Z wrote:
> Jan Beulich wrote on 2014-01-14:
>>>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Zhang, Yang Z wrote on 2013-12-24:
>>>
>>> Any comments ?
>>
>> Considering Christoph's comments and reservations, if you can't deal
>> with this alone, I think you should work with the AMD people to
>> eliminate or address his concerns.
>>
> 
> Yes. But what puzzles me is that Christoph said nested SVM works
> well without my patch, which I cannot understand. According to my
> analysis in the previous thread, it is also buggy on the AMD side. But
> if they really solved the issue on their side, I wonder how they fixed
> it. Perhaps I can use the same solution on the VMX side without
> touching the common code.
> Christoph, can you help to check it? thanks.

The fix I mentioned solves the vmswitch problem on the AMD side.
The page mode problem you discovered is a separate issue for
both SVM and VMX that needs to be addressed.

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:19:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5B0U-0002J5-K1; Mon, 20 Jan 2014 09:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1W5B0S-0002In-Io; Mon, 20 Jan 2014 09:18:56 +0000
Received: from [85.158.137.68:31579] by server-3.bemta-3.messagelabs.com id
	DB/F8-10658-EF9ECD25; Mon, 20 Jan 2014 09:18:54 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390209533!10068126!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_SEX,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6898 invoked from network); 20 Jan 2014 09:18:54 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:18:54 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so6673065wgh.7
	for <multiple recipients>; Mon, 20 Jan 2014 01:18:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=DfSP2czzc9IdvCvH2tQ6nprfgk2OyR/BVtAB4BFQCGw=;
	b=kdqAOZuQEvURpmBZcDBqAnPGyJw7QkP3OydYo82k58pLQWbOlXByr22tPoKbZ4AMrn
	Jnsq3grCIw4PlcGaofzw23tETHs3SBlYo0FND1VUn5LKJSrwT8SCuJciEq8G85WcjiHh
	izicAEbjeb859+H+3qgK9aO8lwbJh1NeLFXmtVqW6o26rbPo/6EW3c/gptNTMk+m0Ezz
	VytDtH5iAPnLXm/anbWhfEEm8Wl//PXUr92iGHM/kczf33GR1im22+Hu1cMQ+poZjN+Z
	DQ0QxA4dXnOCkwslKEN9Gj5lUWDwn45c/o9Id7eTdL6Oz3DoiKs4ifiUhvdmbMt90bzR
	Jbpw==
X-Received: by 10.194.63.228 with SMTP id j4mr13538806wjs.34.1390209533631;
	Mon, 20 Jan 2014 01:18:53 -0800 (PST)
Received: from [172.16.26.11] ([2.122.219.75])
	by mx.google.com with ESMTPSA id eo4sm1090422wib.9.2014.01.20.01.18.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 20 Jan 2014 01:18:52 -0800 (PST)
Message-ID: <52DCE9FA.6010400@xen.org>
Date: Mon, 20 Jan 2014 09:18:50 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	mirageos-devel@lists.xenproject.org
Subject: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

The GSoC application deadline is coming up: Feb 14, 2014. If we want to 
have any chance of getting accepted this year, we ought to get our 
project list into good shape. The project list and how the project and 
mentors present themselves have a bigger impact on whether we get 
accepted than the actual application.

Also, I would like to add a mentor section this year: a short bio, what 
the mentor cares about and a picture. This will help make the project 
list more real.

We have *4 weeks* to do this. The bar for GSoC has been getting 
increasingly high. I know, we are tied down with Xen 4.4, but this is 
something you need to do if you want the Xen Project to participate.

a) Please update 
http://wiki.xenproject.org/wiki/Xen_Development_Projects urgently (these 
need to be in good shape *before* the application). What I need you to 
do is:
a.1) Remove items that are done
a.2) Add new work items: we ought to have a few sexy topics on, say, 
real-time, mobile, and some of the other segments (assuming we can get HW)
a.3) All project proposals need to be peer reviewed *and* clear. The 
peer review process we put in place for projects last year worked well, 
whereby we had past mentors sign off on project proposals that were in 
good enough state.

b) Anyone who has some kernel/linux/bsd/distro/qemu work-items, should 
get these listed on the respective other programs. And we should link to 
these from our project page.

Best Regards
Lars
P.S.: I will also see whether we can participate as Xen Project under 
the LF GSoC program, but last year there was push-back and I don't 
expect this to change


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:38:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BIg-0003Go-Ak; Mon, 20 Jan 2014 09:37:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5BIf-0003Gj-Ca
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:37:45 +0000
Received: from [85.158.143.35:9014] by server-1.bemta-4.messagelabs.com id
	61/5C-02132-86EECD25; Mon, 20 Jan 2014 09:37:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390210662!12752559!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29553 invoked from network); 20 Jan 2014 09:37:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:37:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92335811"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 09:37:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 04:37:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W5BIa-0007rC-Rk;
	Mon, 20 Jan 2014 09:37:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W5BIa-0007yl-CV;
	Mon, 20 Jan 2014 09:37:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24447-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:37:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24447: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24447 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24447/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   13 guest-localmigrate.2        fail pass in 24440
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken in 24440 pass in 24447
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore fail in 24440 pass in 24438
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24438 pass in 24447

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24431

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24438 never pass

version targeted for testing:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d
baseline version:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:41:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BMR-0003m1-7D; Mon, 20 Jan 2014 09:41:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BMP-0003lw-8u
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:41:37 +0000
Received: from [193.109.254.147:2889] by server-12.bemta-14.messagelabs.com id
	78/F8-13681-05FECD25; Mon, 20 Jan 2014 09:41:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390210894!11853981!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21218 invoked from network); 20 Jan 2014 09:41:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:41:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94422975"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 09:41:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:41:33 -0500
Message-ID: <1390210889.20516.1.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:41:29 +0000
In-Reply-To: <1389980090-3479-2-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389980090-3479-2-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: events: Pass correct nfds to poll
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 17:34 +0000, Ian Jackson wrote:
> libxl_event.c:eventloop_iteration would pass the allocated pollfds
> array size, rather than the used size, to poll (and to
> afterpoll_internal).
> 
> The effect is that if the number of fds to poll on reduces, libxl will
> poll on stale entries.  Because of the way the return value from poll
> is processed these stale entries are often harmless because any events
> coming back from poll are ignored by libxl.  However, it could cause
> malfunctions:
> 
> It could result in unwanted SIGTTIN/SIGTTOU/SIGPIPE, for example, if
> the fd has been reused to refer to an object which can generate those
> signals.  Alternatively, it could result in libxl spinning if the
> stale entry refers to an fd which happens now to be ready for the
> previously-requested operation.
> 
> I have tested this with a localhost migration and inspected the strace
> output.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/libxl/libxl_event.c |    9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
> index bdef7ac..1c48fee 100644
> --- a/tools/libxl/libxl_event.c
> +++ b/tools/libxl/libxl_event.c
> @@ -1386,7 +1386,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
>       * can unlock it when it polls.
>       */
>      EGC_GC;
> -    int rc;
> +    int rc, nfds;
>      struct timeval now;
>      
>      rc = libxl__gettimeofday(gc, &now);
> @@ -1395,7 +1395,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
>      int timeout;
>  
>      for (;;) {
> -        int nfds = poller->fd_polls_allocd;
> +        nfds = poller->fd_polls_allocd;
>          timeout = -1;
>          rc = beforepoll_internal(gc, poller, &nfds, poller->fd_polls,
>                                   &timeout, now);
> @@ -1413,7 +1413,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
>      }
>  
>      CTX_UNLOCK;
> -    rc = poll(poller->fd_polls, poller->fd_polls_allocd, timeout);
> +    rc = poll(poller->fd_polls, nfds, timeout);
>      CTX_LOCK;
>  
>      if (rc < 0) {
> @@ -1428,8 +1428,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
>      rc = libxl__gettimeofday(gc, &now);
>      if (rc) goto out;
>  
> -    afterpoll_internal(egc, poller,
> -                       poller->fd_polls_allocd, poller->fd_polls, now);
> +    afterpoll_internal(egc, poller, nfds, poller->fd_polls, now);
>  
>      rc = 0;
>   out:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:41:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BMg-0003ng-Jz; Mon, 20 Jan 2014 09:41:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BMf-0003nX-NH
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:41:53 +0000
Received: from [85.158.137.68:44863] by server-4.bemta-3.messagelabs.com id
	56/BE-10414-06FECD25; Mon, 20 Jan 2014 09:41:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390210909!10139858!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28529 invoked from network); 20 Jan 2014 09:41:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:41:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92336803"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 09:41:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:41:49 -0500
Message-ID: <1390210908.20516.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:41:48 +0000
In-Reply-To: <1389980090-3479-3-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389980090-3479-3-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving
	domain config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 17:34 +0000, Ian Jackson wrote:
> This makes valgrind a bit happier.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/libxl/xl_cmdimpl.c |    2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index d93e01b..aff6f90 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
>                       ctx, fd, optdata_begin, hdr.optional_data_len,
>                       source, "header"));
>  
> +    free(optdata_begin);
> +
>      fprintf(stderr, "Saving to %s new xl format (info"
>              " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
>              source, hdr.mandatory_flags, hdr.optional_flags,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:42:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BNI-0003tO-25; Mon, 20 Jan 2014 09:42:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BNG-0003t6-QU
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:42:30 +0000
Received: from [85.158.139.211:58457] by server-15.bemta-5.messagelabs.com id
	ED/66-08490-68FECD25; Mon, 20 Jan 2014 09:42:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390210948!10751770!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4299 invoked from network); 20 Jan 2014 09:42:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:42:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94423103"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 09:42:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:42:27 -0500
Message-ID: <1390210946.20516.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:42:26 +0000
In-Reply-To: <1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
References: <1389980090-3479-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389980090-3479-4-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port:
 always free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 17:34 +0000, Ian Jackson wrote:
> If portstr!=NULL but plen==0 this function would leak portstr.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

With the whitespace fixed:

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/xenstore/xs.c |    9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
> index a636498..9cd99eb 100644
> --- a/tools/xenstore/xs.c
> +++ b/tools/xenstore/xs.c
> @@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
>      portstr = xs_read(xs, XBT_NULL, path, &plen);
>      xs_daemon_close(xs);
>  
> -    if (!portstr || !plen)
> -        return -1;
> +    if (!portstr || !plen) {
> +        port = -1;
> +	goto out;
> +    }
>  
>      port = atoi(portstr);
> -    free(portstr);
>  
> +out:
> +    free(portstr);
>      return port;
>  }
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:46:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BQo-0004AK-PB; Mon, 20 Jan 2014 09:46:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BQm-0004AD-Nv
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 09:46:08 +0000
Received: from [193.109.254.147:64010] by server-8.bemta-14.messagelabs.com id
	4A/E0-30921-F50FCD25; Mon, 20 Jan 2014 09:46:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390211165!11896479!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15688 invoked from network); 20 Jan 2014 09:46:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:46:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92337836"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 09:46:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:46:04 -0500
Message-ID: <1390211163.20516.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:46:03 +0000
In-Reply-To: <alpine.DEB.2.02.1401171751050.21510@kaball.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401171751050.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 18:09 +0000, Stefano Stabellini wrote:
> On Fri, 17 Jan 2014, Ian Campbell wrote:
> > The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> > causes us to lose the top bits of the DMA address if the size of a DMA
> > address is not the same as the size of the physical address.
> > 
> > This can happen in practice on ARM where foreign pages can be above 4GB even
> > though the local kernel does not have LPAE page tables enabled (which is
> > totally reasonable if the guest does not itself have >4GB of RAM). In this
> > case the kernel still maps the foreign pages at a phys addr below 4G (as it
> > must) but the resulting DMA address (returned by the grant map operation) is
> > much higher.
> 
> We are lucky that LPAE only supports 40 bits; otherwise we would need to
> change all the other functions that use unsigned long for mfn, starting
> from set_phys_to_machine.

Yes, the use of unsigned long for pfn is a larger problem for the kernel
on any 32-bit arch with LPAE type extensions, I think.

> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 1eac073..b626c79 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -77,12 +77,22 @@ static u64 start_dma_addr;
> >  
> >  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
> >  {
> > -	return phys_to_machine(XPADDR(paddr)).maddr;
> > +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> > +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
> 
> I'd add this comment:
> 
> /* avoid PFN_PHYS because phys_addr_t can be 32bit when dma_addr_t is
>  * 64bit leading to a loss in information if the shift is done before
>  * casting to 64bit. */
> 
> > +	dma |= paddr & ~PAGE_MASK;
> > +	return dma;
> >  }
> >  
> >  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
> >  {
> > -	return machine_to_phys(XMADDR(baddr)).paddr;
> > +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> > +	phys_addr_t paddr = dma;
> > +
> > +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
> 
> This check is useless because PFN_PHYS contains a cast to (phys_addr_t)
> that is 32 bit. I think you'll have to:
> 
> dma_addr_t dma = ((dma_addr_t)mfn_to_pfn(PFN_DOWN(baddr))) << PAGE_SHIFT;

That's what I originally had, then I spotted PFN_PHYS and decided it
would be a nice cleanup -- oops!

v2 coming up.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:46:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BRC-0004D0-5r; Mon, 20 Jan 2014 09:46:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5BRB-0004Cp-6M
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 09:46:33 +0000
Received: from [193.109.254.147:9442] by server-16.bemta-14.messagelabs.com id
	F2/A9-20600-870FCD25; Mon, 20 Jan 2014 09:46:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390211191!9586655!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22270 invoked from network); 20 Jan 2014 09:46:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 09:46:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 09:46:31 +0000
Message-Id: <52DCFE820200007800114F17@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 09:46:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7447F762.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/balloon: don't crash in
 HVM-with-PoD guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7447F762.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
HVM-with-PoD case") was almost entirely broken - the BUG_ON() there
triggers as soon as there's any meaningful amount of excess memory.

Re-implement the logic assuming that XENMEM_get_pod_target will at some
point be allowed for a domain to query on itself. Basing the
calculation on just num_physpages results in significantly too much
memory getting ballooned out when there's memory beyond the 4G boundary.

Using what recent upstream's get_num_physpages() returns is not an
alternative because that value is too small (even if not as small as
totalram_pages), resulting in not enough pages getting ballooned out.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/balloon/balloon.c
+++ b/drivers/xen/balloon/balloon.c
@@ -537,17 +537,11 @@ static int __init balloon_init(void)
 	 * extent of 1. When start_extent > nr_extents (>=3D in newer =
Xen), we
 	 * simply get start_extent returned.
 	 */
-	totalram_bias =3D HYPERVISOR_memory_op(rc !=3D -ENOSYS && rc !=3D =
1
-		? XENMEM_maximum_reservation : XENMEM_current_reservation,
-		&pod_target.domid);
-	if ((long)totalram_bias !=3D -ENOSYS) {
-		BUG_ON(totalram_bias < totalram_pages);
-		bs.current_pages =3D totalram_bias;
-		totalram_bias -=3D totalram_pages;
-	} else {
-		totalram_bias =3D 0;
-		bs.current_pages =3D totalram_pages;
-	}
+	bs.current_pages =3D pod_target.tot_pages + pod_target.pod_entries
+			   - pod_target.pod_cache_pages;
+	if (rc || bs.current_pages > num_physpages)
+		bs.current_pages =3D num_physpages;
+	totalram_bias =3D bs.current_pages - totalram_pages;
 #endif
 	bs.target_pages  =3D bs.current_pages;
 	bs.balloon_low   =3D 0;




--=__Part7447F762.0__=
Content-Type: text/plain; name="xenlinux-balloon-HVM-PoD.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xenlinux-balloon-HVM-PoD.patch"

balloon: don't crash in HVM-with-PoD guests=0A=0A989:a7781c0a3b9a =
("xen/balloon: fix balloon driver accounting for=0AHVM-with-PoD case") was =
almost entirely broken - the BUG_ON() there=0Atriggers as soon as there's =
any meaningful amount of excess memory.=0A=0ARe-implement the logic =
assuming that XENMEM_get_pod_target will at some=0Apoint be allowed for a =
domain to query on itself. Basing the=0Acalculation on just num_physpages =
results in significantly too much=0Amemory getting balloned out when =
there's memory beyond the 4G boundary.=0A=0AUsing what recent upstream's =
get_num_physpages() returns is not an=0Aalternative because that value is =
too small (even if not as small as=0Atotalram_pages), resulting in not =
enough pages getting ballooned out.=0A=0ASigned-off-by: Jan Beulich =
<jbeulich@suse.com>=0A=0A--- a/drivers/xen/balloon/balloon.c=0A+++ =
b/drivers/xen/balloon/balloon.c=0A@@ -537,17 +537,11 @@ static int __init =
balloon_init(void)=0A 	 * extent of 1. When start_extent > nr_extents =
(>=3D in newer Xen), we=0A 	 * simply get start_extent returned.=0A 	=
 */=0A-	totalram_bias =3D HYPERVISOR_memory_op(rc !=3D -ENOSYS && rc !=3D =
1=0A-		? XENMEM_maximum_reservation : XENMEM_current_reservation,=
=0A-		&pod_target.domid);=0A-	if ((long)totalram_bias !=3D =
-ENOSYS) {=0A-		BUG_ON(totalram_bias < totalram_pages);=0A-		=
bs.current_pages =3D totalram_bias;=0A-		totalram_bias -=3D =
totalram_pages;=0A-	} else {=0A-		totalram_bias =3D 0;=0A-	=
	bs.current_pages =3D totalram_pages;=0A-	}=0A+	bs.current_=
pages =3D pod_target.tot_pages + pod_target.pod_entries=0A+			=
   - pod_target.pod_cache_pages;=0A+	if (rc || bs.current_pages > =
num_physpages)=0A+		bs.current_pages =3D num_physpages;=0A+	=
totalram_bias =3D bs.current_pages - totalram_pages;=0A #endif=0A 	=
bs.target_pages  =3D bs.current_pages;=0A 	bs.balloon_low   =3D 0;=0A
--=__Part7447F762.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7447F762.0__=--



From xen-devel-bounces@lists.xen.org Mon Jan 20 09:47:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BSO-0004Ma-Kp; Mon, 20 Jan 2014 09:47:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5BSN-0004MQ-9D
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 09:47:47 +0000
Received: from [193.109.254.147:40030] by server-1.bemta-14.messagelabs.com id
	6C/ED-15600-2C0FCD25; Mon, 20 Jan 2014 09:47:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390211265!11822731!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7844 invoked from network); 20 Jan 2014 09:47:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 09:47:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 09:47:45 +0000
Message-Id: <52DCFECE0200007800114F1B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 09:47:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB88B3BAE.1__="
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] unmodified_drivers: make usbfront build
	conditional
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB88B3BAE.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Commit 0dcfb88fb8 ("unmodified_drivers: enable build of usbfront
driver") results in the PV drivers no longer building against older
(pre-2.6.35) Linux versions. That's because usbfront.h includes
headers from drivers/usb/core/, which is generally unavailable when
building out-of-tree modules.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/unmodified_drivers/linux-2.6/usbfront/Kbuild
+++ b/unmodified_drivers/linux-2.6/usbfront/Kbuild
@@ -1,5 +1,7 @@
 include $(M)/overrides.mk
=20
-obj-m +=3D xen-usb.o
+obj-m +=3D $(if $(shell grep '^\#include "\.\./\.\./' $(obj)/usbfront.h), =
\
+	      $(warning usbfront cannot be built), \
+	      xen-usb.o)
=20
 xen-usb-objs :=3D usbfront-hcd.o xenbus.o




--=__PartB88B3BAE.1__=
Content-Type: text/plain; name="pvdrv-usbfront-conditional.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="pvdrv-usbfront-conditional.patch"

unmodified_drivers: make usbfront build conditional=0A=0ACommit 0dcfb88fb8 =
("unmodified_drivers: enable build of usbfront=0Adriver") results in the =
PV drivers to no longer build against older=0A(pre-2.6.35) Linux versions. =
That's because usbfront.h includes=0Aheaders from drivers/usb/core/, which =
is generally unavailable when=0Abuilding out-of-tree modules.=0A=0ASigned-o=
ff-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/unmodified_drivers/linux-=
2.6/usbfront/Kbuild=0A+++ b/unmodified_drivers/linux-2.6/usbfront/Kbuild=0A=
@@ -1,5 +1,7 @@=0A include $(M)/overrides.mk=0A =0A-obj-m +=3D xen-usb.o=0A=
+obj-m +=3D $(if $(shell grep '^\#include "\.\./\.\./' $(obj)/usbfront.h), =
\=0A+	      $(warning usbfront cannot be built), \=0A+	      =
xen-usb.o)=0A =0A xen-usb-objs :=3D usbfront-hcd.o xenbus.o=0A
--=__PartB88B3BAE.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB88B3BAE.1__=--



From xen-devel-bounces@lists.xen.org Mon Jan 20 09:57:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BbB-0004yT-Ql; Mon, 20 Jan 2014 09:56:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BbA-0004yO-H6
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:56:52 +0000
Received: from [193.109.254.147:26129] by server-16.bemta-14.messagelabs.com
	id 68/4C-20600-3E2FCD25; Mon, 20 Jan 2014 09:56:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390211810!11858594!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23698 invoked from network); 20 Jan 2014 09:56:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:56:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94427035"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 09:56:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:56:49 -0500
Message-ID: <1390211808.20516.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:56:48 +0000
In-Reply-To: <21209.29400.383300.983690@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
	<21209.29400.383300.983690@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 18:13 +0000, Ian Jackson wrote:
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index 824ac88..ab6ac5c 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -472,7 +472,7 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>   *
>   *       The application expects libxl to reap all of its children,
>   *       and provides a callback to be notified of their exit
> - *       statues.  The application must have only one libxl_ctx
> + *       statuses.  The application may have have multiple libxl_ctxs

"have have"

It's hard to believe, but that seems to be the only comment I have. I
think I actually grokked the whole locking-via-SIGCHLD-deferral thing,
and it seems to make sense.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:59:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BdD-00059a-L9; Mon, 20 Jan 2014 09:58:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BdC-00059U-7W
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:58:58 +0000
Received: from [85.158.143.35:16872] by server-1.bemta-4.messagelabs.com id
	4C/DC-02132-163FCD25; Mon, 20 Jan 2014 09:58:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390211935!11464862!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3728 invoked from network); 20 Jan 2014 09:58:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:58:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94427421"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 09:58:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:58:48 -0500
Message-ID: <1390211927.20516.11.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:58:47 +0000
In-Reply-To: <1389975845-1195-12-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-12-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 11/12] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:24 +0000, Ian Jackson wrote:
> We are going to want to introduce another call site in the final
> substantive patch.
> 
> Pure code motion; no functional change.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
>  tools/libxl/libxl_fork.c |   20 ++++++++++++++------
>  1 file changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index ce8e8eb..b6b14fe 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -182,6 +182,19 @@ static void sigchld_handler(int signo)
>      errno = esave;
>  }
>  
> +static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
> +{
> +    struct sigaction ours;
> +    int r;
> +
> +    memset(&ours,0,sizeof(ours));
> +    ours.sa_handler = handler;
> +    sigemptyset(&ours.sa_mask);
> +    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> +    r = sigaction(SIGCHLD, &ours, old);
> +    assert(!r);
> +}
> +
>  static void sigchld_removehandler_core(void)
>  {
>      struct sigaction was;
> @@ -202,12 +215,7 @@ static void sigchld_installhandler_core(libxl__gc *gc)
>      assert(!sigchld_owner);
>      sigchld_owner = CTX;
>  
> -    memset(&ours,0,sizeof(ours));

Is "ours" now an unused variable in this context?

> -    ours.sa_handler = sigchld_handler;
> -    sigemptyset(&ours.sa_mask);
> -    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> -    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> -    assert(!r);
> +    sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
>  
>      assert(((void)"application must negotiate with libxl about SIGCHLD",
>              !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 09:59:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 09:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Be0-0005Il-3C; Mon, 20 Jan 2014 09:59:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5Bdy-0005Ia-Pw
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 09:59:46 +0000
Received: from [193.109.254.147:43393] by server-10.bemta-14.messagelabs.com
	id A5/66-20752-293FCD25; Mon, 20 Jan 2014 09:59:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390211984!8418489!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23412 invoked from network); 20 Jan 2014 09:59:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 09:59:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94427577"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 09:59:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:59:43 -0500
Message-ID: <1390211982.20516.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:59:42 +0000
In-Reply-To: <1389975845-1195-10-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-10-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 09/12] libxl: fork: Rename sigchld handler
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:24 +0000, Ian Jackson wrote:
> We are going to change these functions so that different libxl ctxs
> can share a single SIGCHLD handler.  Rename them now to a new name
> which doesn't imply unconditional handler installation or removal.
> 
> Also note in the comments that they are idempotent.
> 
> No functional change.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:00:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5BeM-0005Tt-Hj; Mon, 20 Jan 2014 10:00:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5BeL-0005TU-3E
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 10:00:09 +0000
Received: from [193.109.254.147:58217] by server-11.bemta-14.messagelabs.com
	id E4/73-20576-8A3FCD25; Mon, 20 Jan 2014 10:00:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390212006!11924958!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21726 invoked from network); 20 Jan 2014 10:00:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:00:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92341572"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 09:59:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 04:59:06 -0500
Message-ID: <1390211945.20516.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 09:59:05 +0000
In-Reply-To: <1389975845-1195-11-git-send-email-ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-11-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 10/12] libxl: fork: Break out
	sigchld_installhandler_core
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:24 +0000, Ian Jackson wrote:
> Pure code motion.  This is going to make the final substantive patch
> easier to read.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/libxl/libxl_fork.c |   38 ++++++++++++++++++++++----------------
>  1 file changed, 22 insertions(+), 16 deletions(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index a15af8e..ce8e8eb 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -194,6 +194,27 @@ static void sigchld_removehandler_core(void)
>      sigchld_owner = 0;
>  }
>  
> +static void sigchld_installhandler_core(libxl__gc *gc)
> +{
> +    struct sigaction ours;
> +    int r;
> +
> +    assert(!sigchld_owner);
> +    sigchld_owner = CTX;
> +
> +    memset(&ours,0,sizeof(ours));
> +    ours.sa_handler = sigchld_handler;
> +    sigemptyset(&ours.sa_mask);
> +    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> +    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> +    assert(!r);
> +
> +    assert(((void)"application must negotiate with libxl about SIGCHLD",
> +            !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
> +            (sigchld_saved_action.sa_handler == SIG_DFL ||
> +             sigchld_saved_action.sa_handler == SIG_IGN)));
> +}
> +
>  void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
>  {
>      int rc;
> @@ -236,22 +257,7 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
>  
>      atfork_lock();
>      if (sigchld_owner != CTX) {
> -        struct sigaction ours;
> -
> -        assert(!sigchld_owner);
> -        sigchld_owner = CTX;
> -
> -        memset(&ours,0,sizeof(ours));
> -        ours.sa_handler = sigchld_handler;
> -        sigemptyset(&ours.sa_mask);
> -        ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> -        r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> -        assert(!r);
> -
> -        assert(((void)"application must negotiate with libxl about SIGCHLD",
> -                !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
> -                (sigchld_saved_action.sa_handler == SIG_DFL ||
> -                 sigchld_saved_action.sa_handler == SIG_IGN)));
> +        sigchld_installhandler_core(gc);
>      }
>      atfork_unlock();
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-17 at 16:24 +0000, Ian Jackson wrote:
> Pure code motion.  This is going to make the final substantive patch
> easier to read.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> ---
>  tools/libxl/libxl_fork.c |   38 ++++++++++++++++++++++----------------
>  1 file changed, 22 insertions(+), 16 deletions(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index a15af8e..ce8e8eb 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -194,6 +194,27 @@ static void sigchld_removehandler_core(void)
>      sigchld_owner = 0;
>  }
>  
> +static void sigchld_installhandler_core(libxl__gc *gc)
> +{
> +    struct sigaction ours;
> +    int r;
> +
> +    assert(!sigchld_owner);
> +    sigchld_owner = CTX;
> +
> +    memset(&ours,0,sizeof(ours));
> +    ours.sa_handler = sigchld_handler;
> +    sigemptyset(&ours.sa_mask);
> +    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> +    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> +    assert(!r);
> +
> +    assert(((void)"application must negotiate with libxl about SIGCHLD",
> +            !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
> +            (sigchld_saved_action.sa_handler == SIG_DFL ||
> +             sigchld_saved_action.sa_handler == SIG_IGN)));
> +}
> +
>  void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
>  {
>      int rc;
> @@ -236,22 +257,7 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
>  
>      atfork_lock();
>      if (sigchld_owner != CTX) {
> -        struct sigaction ours;
> -
> -        assert(!sigchld_owner);
> -        sigchld_owner = CTX;
> -
> -        memset(&ours,0,sizeof(ours));
> -        ours.sa_handler = sigchld_handler;
> -        sigemptyset(&ours.sa_mask);
> -        ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> -        r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> -        assert(!r);
> -
> -        assert(((void)"application must negotiate with libxl about SIGCHLD",
> -                !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
> -                (sigchld_saved_action.sa_handler == SIG_DFL ||
> -                 sigchld_saved_action.sa_handler == SIG_IGN)));
> +        sigchld_installhandler_core(gc);
>      }
>      atfork_unlock();
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:16:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Btn-0006aj-32; Mon, 20 Jan 2014 10:16:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W5Btk-0006aY-V7; Mon, 20 Jan 2014 10:16:05 +0000
Received: from [85.158.143.35:34558] by server-1.bemta-4.messagelabs.com id
	C5/C4-02132-467FCD25; Mon, 20 Jan 2014 10:16:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390212961!12776097!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25621 invoked from network); 20 Jan 2014 10:16:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:16:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94431751"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 10:16:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 05:15:59 -0500
Message-ID: <1390212958.20516.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Martin Unzner <knusperbrot@gmx.de>
Date: Mon, 20 Jan 2014 10:15:58 +0000
In-Reply-To: <52DBC709.80707@gmx.de>
References: <52DBC709.80707@gmx.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-users@lists.xen.org, Jan
	Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN bug at traps.c:3271
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-19 at 13:37 +0100, Martin Unzner wrote:
> Hi list,
> 
> I just installed Xen 4.3.1 for EFI according to 
> https://bbs.archlinux.org/viewtopic.php?pid=1359933 and tried to boot 
> it. There was a crash at the very beginning, notifying me of a bug at 
> traps.c:3271. I could not find any logs, so I just noted the stack trace 
> on a sheet of paper:
> 
> do_device_not_available
> handle_exception
> efi_get_time
> get_cmos_time
> init_xen_time
> __start_xen
> 
> Does that mean my hardware is incompatible?

No, just that you've found a bug, I think.

I'm copying the devel list here to see if anyone has any clues.

> Where would I find the Xen startup log, anyway?

On a normally running system it is in "xl dmesg", or you can configure
xenconsoled to write it to a file. But with a crash on boot it is too
soon for any of that; the solution is usually to use a serial console
(see the wiki), but a photograph or just noting down the error as you've
done can work too.

I'll leave it to others on the dev list to request a serial log or a
photo if they think it would be useful to have the additional
information (e.g. register state, stack addresses etc).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:28:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5C5r-0007Pj-LV; Mon, 20 Jan 2014 10:28:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5C5q-0007Pd-HY
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 10:28:34 +0000
Received: from [85.158.137.68:60403] by server-11.bemta-3.messagelabs.com id
	B5/88-19379-15AFCD25; Mon, 20 Jan 2014 10:28:33 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390213711!10141353!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6540 invoked from network); 20 Jan 2014 10:28:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:28:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92348703"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 10:28:31 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 05:28:31 -0500
Message-ID: <52DCFA4D.4070903@citrix.com>
Date: Mon, 20 Jan 2014 10:28:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xu cong <congxumail@gmail.com>, <xen-devel@lists.xensource.com>
References: <CA+hYhXtn_J5d=9JUU7qL5YVc4b2yqM9sonQX01cd=TPn--K02g@mail.gmail.com>
In-Reply-To: <CA+hYhXtn_J5d=9JUU7qL5YVc4b2yqM9sonQX01cd=TPn--K02g@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] xen-netback in linux 3.12
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 21:30, xu cong wrote:
> Hi all,
>
> I found xen-netback changed a lot in the latest kernel. The shared
> xen-netback threads in the driver domain are replaced by per-vif kernel
> threads. On my platform, I found the I/O throughput and scalability of
> the per-vif netback are better than the previous implementation. Another
> advantage is that I can use cgroups to control the CPU fair share among
> all VMs in the driver domain (I group the netback and blkback threads for
> each VM). Is there any other motivation for this modification? Thanks.

As far as I remember, scalability was the only motivation. Previously you 
could end up in a situation where one thread (pinned to a vCPU) did a lot 
of work while the others did nothing. In the thread-per-VIF model the 
kernel can schedule the workloads as it wants, and users can also poke 
around with the settings.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:38:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:38:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CFB-0007vi-Mt; Mon, 20 Jan 2014 10:38:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5CFA-0007vc-4a
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 10:38:12 +0000
Received: from [85.158.137.68:31460] by server-10.bemta-3.messagelabs.com id
	21/8D-23989-39CFCD25; Mon, 20 Jan 2014 10:38:11 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390214289!10115290!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30361 invoked from network); 20 Jan 2014 10:38:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:38:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92350801"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 10:38:08 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 05:38:08 -0500
Message-ID: <52DCFC8F.1050607@citrix.com>
Date: Mon, 20 Jan 2014 10:38:07 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
In-Reply-To: <20140117230219.GA28413@garbanzo.do-not-panic.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Stephen Rothwell <sfr@canb.auug.org.au>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/14 23:02, Luis R. Rodriguez wrote:
> As per the linux-next Next/Trees [0], and a recent January MAINTAINERS patch [1]
> from David, one of the xen development kernel git trees to track is
> xen/git.git [2]. This tree, however, has undefined references when doing a
> fresh clone [shown below], but as expected works well when only cloning
> the linux-next branch [also below]. While I'm sure this is fine for
> folks who can do the guesswork, do we really want to live with trees like
> these in MAINTAINERS ? The MAINTAINERS file doesn't let us specify required
> branches, so perhaps it should -- if we want to live with these ? Curious, how
> many other git trees are there with a similar situation ?

We don't recommend doing development work for the Xen subsystem based on
xen/tip.git, so I think it's fine to have to check out the specific branch
you are interested in.

> The xen project web site actually lists [3] Konrad's xen git tree [4] as
> the primary development tree; that probably should be updated now, likely
> with instructions to clone only the linux-next branch ?

I've updated the wiki to read:

    For development the recommended branch is:

        The mainline Linus linux.git tree.

    To see what's queued for the next release, the next merge window,
    and other work in progress:

        The Xen subsystem maintainers' tip.git tree.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:43:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:43:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CJj-0008PG-CR; Mon, 20 Jan 2014 10:42:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5CJh-0008P9-Sf
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 10:42:54 +0000
Received: from [85.158.143.35:38069] by server-3.bemta-4.messagelabs.com id
	6C/51-32360-DADFCD25; Mon, 20 Jan 2014 10:42:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390214571!12785197!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19947 invoked from network); 20 Jan 2014 10:42:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:42:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92351906"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 10:42:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 05:42:50 -0500
Message-ID: <1390214569.20516.31.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 20 Jan 2014 10:42:49 +0000
In-Reply-To: <52DCE5EC0200007800114E59@nat28.tlf.novell.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<1389974929.6697.122.camel@kazak.uk.xensource.com>
	<52D967AD0200007800114AA2@nat28.tlf.novell.com>
	<1389977676.6697.132.camel@kazak.uk.xensource.com>
	<52DCE5EC0200007800114E59@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 08:01 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 17:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-17 at 16:26 +0000, Jan Beulich wrote:
> >> >>> On 17.01.14 at 17:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >> > On Fri, 2014-01-17 at 16:03 +0000, Jan Beulich wrote:
> >> >> max_pfn/num_physpages isn't that far off for guests with less than
> >> >> 4Gb; the number calculated from the PoD data is a little worse.
> >> > 
> >> > On ARM, RAM may not start at 0, so using max_pfn can be very
> >> > misleading and in practice causes ARM to balloon down to 0 as fast as
> >> > it can.
> >> 
> >> Ugly. Is that only due to the temporary workaround for there not
> >> being an IOMMU?
> > 
> > It's not to do with IOMMUs, no, and it isn't temporary.
> > 
> > Architecturally on ARM it's not required for RAM to be at address 0, and
> > it is not uncommon for it to start at 1, 2 or 3GB (as a property of the
> > SoC design).
> > 
> > If you have 128M of RAM at 0x80000000-0x88000000 then max_pfn is 0x88000
> > but target pages is just 0x8000; if current_pages is initialised to
> > max_pfn then the kernel immediately thinks it has to get rid of 0x80000
> > pages.
> 
> And there is some sort of benefit from also doing this for virtual
> machines?

For dom0 the address space layout mirrors that of the underlying
platform.

For domU we mostly ended up using the layout of the platform we happened
to develop on for the virtual guest layout without much thought. This
actually has some shortcomings, in that it limits the amount of RAM a
guest can have under 4GB quite significantly, so in 4.5 we will probably
change this.

But for domU, when we do device assignment we may also want to do
something equivalent to the x86 "e820_host" option, which mirrors the
underlying address map for guests.

In any case I don't want to be encoding restrictions like "RAM starts at
zero" into ARM's guest ABI.

> 
> >> And short of the initial value needing to be architecture specific -
> >> can you see a calculation that would yield a decent result on ARM
> >> that would also be suitable on x86?
> > 
> > I previously had a patch to use memblock_phys_mem_size(), but when I saw
> > Boris switch to get_num_physpages() I thought that would be OK, but I
> > didn't look into it very hard. Without checking I suspect they return
> > pretty much the same thing and so memblock_phys_mem_size will have the
> > same issue you observed (which I confess I haven't yet gone back and
> > understood).
> 
> If there's no reserved memory in that range, I guess ARM might
> be fine as is.

Hrm, I'm not sure what a DTB /memreserve/ statement turns into wrt the
memblock stuff and whether it is included in phys_mem_size -- I think
that stuff is still WIP upstream (i.e. the kernel currently ignores such
things; IIRC there was a fix, but it was broken and reverted to try
again, and then I've lost track).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:50:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:50:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CRC-0000Oh-BK; Mon, 20 Jan 2014 10:50:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5CRA-0000Oc-W5
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 10:50:37 +0000
Received: from [193.109.254.147:3685] by server-4.bemta-14.messagelabs.com id
	61/C5-03916-C7FFCD25; Mon, 20 Jan 2014 10:50:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390215033!11955119!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4502 invoked from network); 20 Jan 2014 10:50:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:50:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92353518"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 10:50:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 05:50:14 -0500
Message-ID: <1390215013.20516.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 20 Jan 2014 10:50:13 +0000
In-Reply-To: <52DCE7700200007800114E67@nat28.tlf.novell.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<1389978832.6697.137.camel@kazak.uk.xensource.com>
	<52DCE7700200007800114E67@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 08:08 +0000, Jan Beulich wrote:
> >>> On 17.01.14 at 18:13, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-17 at 14:33 +0000, Jan Beulich wrote:
> >> Interestingly, (a) too results in the driver not ballooning down
> >> enough - there's a gap of exactly as many pages as are marked
> >> reserved below the 1Mb boundary. Therefore the aforementioned
> >> upstream commit is presumably broken.
> > 
> > Can we count those reserved pages? (I guess you mean reserved in the
> > e820?)
> 
> Yes, we could. But it's not logical to count the ones below 1Mb, but
> not the ones above.

I can understand that PoV but it's not like the PC architecture isn't
full of weird quirks and assumptions which are specific to the low
1Mb...

>  Yet we can't (without knowledge of the tools/
> firmware implementation) tell regions backed by RAM assigned to the
> guest (e.g. the reserved pages below 1Mb, covering BIOS stuff)
> from regions reserved for other reasons. A specific firmware could,
> for example, have a larger BIOS region right below 4Gb (like many
> non-virtual BIOSes do), which would then also be RAM covered and
> hence also need accounting.

Couldn't this be accounted for in the toolstack when considering the
target and max_pages? But I suppose it is too late for that now if you
want to DTRT on existing systems.

> >> Short of a reliable (and ideally architecture independent) way of
> >> knowing the necessary adjustment value, the next best solution
> >> (not ballooning down too little, but also not ballooning down much
> >> more than necessary) turns out to be using the minimum of (b)
> >> and (c): When the domain only has memory below 4Gb, (b) is
> >> more precise, whereas in the other cases (c) gets closest.
> > 
> > I think I'd prefer an arch specific calculation (or an arch specific
> > adjustment to a generic calculation) to either of the above.
> 
> Hmm, interesting. I would have expected a generic calculation to
> be deemed preferable.

Yes, I'd much prefer an accurate per-arch calculation to a generic fudge
which only gets close for everyone.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:52:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CTQ-0000av-3B; Mon, 20 Jan 2014 10:52:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5CTN-0000aq-R7
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 10:52:54 +0000
Received: from [85.158.143.35:29161] by server-2.bemta-4.messagelabs.com id
	CB/9C-11386-5000DD25; Mon, 20 Jan 2014 10:52:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390215171!12728912!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3934 invoked from network); 20 Jan 2014 10:52:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:52:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94439337"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 10:52:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 05:52:50 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5CTJ-0008EC-9X;
	Mon, 20 Jan 2014 10:52:49 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 20 Jan 2014 10:52:48 +0000
Message-ID: <1390215168-23269-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The use of phys_to_machine and machine_to_phys in the phys<=>bus
conversions causes us to lose the top bits of the DMA address if the size
of a DMA address is not the same as the size of the physical address.

This can happen in practice on ARM where foreign pages can be above 4GB even
though the local kernel does not have LPAE page tables enabled (which is
totally reasonable if the guest does not itself have >4GB of RAM). In this
case the kernel still maps the foreign pages at a phys addr below 4G (as it
must) but the resulting DMA address (returned by the grant map operation) is
much higher.

This is analogous to a hardware device which has its view of RAM mapped up
high for some reason.

This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
systems with more than 4GB of RAM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Add comment on avoiding PFN_PHYS, and remove accidental truncation due to
    it in xen_bus_to_phys.
---
 arch/arm/Kconfig          |    1 +
 drivers/xen/swiotlb-xen.c |   22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..24307dc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1885,6 +1885,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1eac073..ebd8f21 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -75,14 +75,32 @@ static unsigned long xen_io_tlb_nslabs;
 
 static u64 start_dma_addr;
 
+/*
+ * Both of these functions should avoid PFN_PHYS because phys_addr_t
+ * can be 32bit when dma_addr_t is 64bit leading to a loss in
+ * information if the shift is done before casting to 64bit.
+ */
 static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
-	return phys_to_machine(XPADDR(paddr)).maddr;
+	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
+	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
+
+	dma |= paddr & ~PAGE_MASK;
+
+	return dma;
 }
 
 static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 {
-	return machine_to_phys(XMADDR(baddr)).paddr;
+	unsigned long pfn = mfn_to_pfn(PFN_DOWN(baddr));
+	dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
+	phys_addr_t paddr = dma;
+
+	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+
+	paddr |= baddr & ~PAGE_MASK;
+
+	return paddr;
 }
 
 static inline dma_addr_t xen_virt_to_bus(void *address)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 10:52:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 10:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CTQ-0000av-3B; Mon, 20 Jan 2014 10:52:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5CTN-0000aq-R7
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 10:52:54 +0000
Received: from [85.158.143.35:29161] by server-2.bemta-4.messagelabs.com id
	CB/9C-11386-5000DD25; Mon, 20 Jan 2014 10:52:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390215171!12728912!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3934 invoked from network); 20 Jan 2014 10:52:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 10:52:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94439337"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 10:52:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 05:52:50 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5CTJ-0008EC-9X;
	Mon, 20 Jan 2014 10:52:49 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 20 Jan 2014 10:52:48 +0000
Message-ID: <1390215168-23269-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the phyiscal address.

This can happen in practice on ARM where foreign pages can be above 4GB even
though the local kernel does not have LPAE page tables enabled (which is
totally reasonable if the guest does not itself have >4GB of RAM). In this
case the kernel still maps the foreign pages at a phys addr below 4G (as it
must) but the resulting DMA address (returned by the grant map operation) is
much higher.

This is analogous to a hardware device which has its view of RAM mapped up
high for some reason.

This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
systems with more than 4GB of RAM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Add comment on avoiding PFN_PHYS, and remove accidental truncation due to
    it in xen_bus_to_phys.
---
 arch/arm/Kconfig          |    1 +
 drivers/xen/swiotlb-xen.c |   22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..24307dc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1885,6 +1885,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1eac073..ebd8f21 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -75,14 +75,32 @@ static unsigned long xen_io_tlb_nslabs;
 
 static u64 start_dma_addr;
 
+/*
+ * Both of these functions should avoid PFN_PHYS because phys_addr_t
+ * can be 32bit when dma_addr_t is 64bit leading to a loss in
+ * information if the shift is done before casting to 64bit.
+ */
 static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
-	return phys_to_machine(XPADDR(paddr)).maddr;
+	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
+	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
+
+	dma |= paddr & ~PAGE_MASK;
+
+	return dma;
 }
 
 static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 {
-	return machine_to_phys(XMADDR(baddr)).paddr;
+	unsigned long pfn = mfn_to_pfn(PFN_DOWN(baddr));
+	dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
+	phys_addr_t paddr = dma;
+
+	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+
+	paddr |= baddr & ~PAGE_MASK;
+
+	return paddr;
 }
 
 static inline dma_addr_t xen_virt_to_bus(void *address)
-- 
1.7.10.4
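The truncation which the comment above warns about can be illustrated with a
minimal standalone sketch. The typedefs and PAGE_SHIFT below are hypothetical
stand-ins for the configuration the patch targets (32-bit phys_addr_t on
non-LPAE ARM, 64-bit dma_addr_t via ARCH_DMA_ADDR_T_64BIT); they are not the
kernel's definitions:

```c
#include <stdint.h>

/* Hypothetical stand-ins: 32-bit phys_addr_t (non-LPAE ARM) alongside a
 * 64-bit dma_addr_t (ARCH_DMA_ADDR_T_64BIT selected). */
typedef uint32_t phys_addr_t;
typedef uint64_t dma_addr_t;

#define PAGE_SHIFT 12

/* PFN_PHYS-style conversion: the shift is performed in the 32-bit
 * phys_addr_t type, so frame numbers whose address exceeds 4GB lose
 * their top bits before the widening to dma_addr_t happens. */
static dma_addr_t shift_then_cast(unsigned long mfn)
{
	return (dma_addr_t)((phys_addr_t)mfn << PAGE_SHIFT);
}

/* The patch's approach: widen to 64-bit dma_addr_t first, then shift,
 * so no bits are lost. */
static dma_addr_t cast_then_shift(unsigned long mfn)
{
	return (dma_addr_t)mfn << PAGE_SHIFT;
}
```

With an mfn of 0x123456 (machine address 0x123456000, above 4GB), the first
helper wraps to 0x23456000 while the second yields the full 64-bit address.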


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 11:18:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 11:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5CrU-0001Xu-A9; Mon, 20 Jan 2014 11:17:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5CrT-0001Xp-8o
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 11:17:47 +0000
Received: from [85.158.139.211:8079] by server-16.bemta-5.messagelabs.com id
	71/C6-11843-AD50DD25; Mon, 20 Jan 2014 11:17:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390216664!468189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8356 invoked from network); 20 Jan 2014 11:17:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 11:17:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="94444625"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 11:17:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 06:17:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5CrO-0004M3-Hn;
	Mon, 20 Jan 2014 11:17:42 +0000
Date: Mon, 20 Jan 2014 11:16:38 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390215168-23269-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401201114470.21510@kaball.uk.xensource.com>
References: <1390215168-23269-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: swiotlb: handle sizeof(dma_addr_t)
 != sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Ian Campbell wrote:
> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> causes us to lose the top bits of the DMA address if the size of a DMA
> address is not the same as the size of the physical address.
> 
> This can happen in practice on ARM where foreign pages can be above 4GB even
> though the local kernel does not have LPAE page tables enabled (which is
> totally reasonable if the guest does not itself have >4GB of RAM). In this
> case the kernel still maps the foreign pages at a phys addr below 4G (as it
> must) but the resulting DMA address (returned by the grant map operation) is
> much higher.
> 
> This is analogous to a hardware device which has its view of RAM mapped up
> high for some reason.
> 
> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> systems with more than 4GB of RAM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Konrad, are you OK with this patch?


> v2: Add comment on avoiding PFN_PHYS, and remove accidental truncation due to
>     it in xen_bus_to_phys.
> ---
>  arch/arm/Kconfig          |    1 +
>  drivers/xen/swiotlb-xen.c |   22 ++++++++++++++++++++--
>  2 files changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..24307dc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1885,6 +1885,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select ARCH_DMA_ADDR_T_64BIT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1eac073..ebd8f21 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -75,14 +75,32 @@ static unsigned long xen_io_tlb_nslabs;
>  
>  static u64 start_dma_addr;
>  
> +/*
> + * Both of these functions should avoid PFN_PHYS because phys_addr_t
> + * can be 32bit when dma_addr_t is 64bit leading to a loss in
> + * information if the shift is done before casting to 64bit.
> + */
>  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>  {
> -	return phys_to_machine(XPADDR(paddr)).maddr;
> +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
> +
> +	dma |= paddr & ~PAGE_MASK;
> +
> +	return dma;
>  }
>  
>  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
>  {
> -	return machine_to_phys(XMADDR(baddr)).paddr;
> +	unsigned long pfn = mfn_to_pfn(PFN_DOWN(baddr));
> +	dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
> +	phys_addr_t paddr = dma;
> +
> +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
> +
> +	paddr |= baddr & ~PAGE_MASK;
> +
> +	return paddr;
>  }
>  
>  static inline dma_addr_t xen_virt_to_bus(void *address)
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 11:31:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 11:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5D44-0002Eq-3d; Mon, 20 Jan 2014 11:30:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5D42-0002El-Br
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 11:30:46 +0000
Received: from [85.158.143.35:16956] by server-3.bemta-4.messagelabs.com id
	FA/40-32360-5E80DD25; Mon, 20 Jan 2014 11:30:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390217443!12749646!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28092 invoked from network); 20 Jan 2014 11:30:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 11:30:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,689,1384300800"; d="scan'208";a="92362589"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 11:30:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 06:30:43 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5D3x-0008Pa-Rs;
	Mon, 20 Jan 2014 11:30:41 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 20 Jan 2014 11:30:41 +0000
Message-ID: <1390217441-9910-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Russell King <linux@arm.linux.org.uk>,
	Ian Campbell <ian.campbell@citrix.com>,
	linux-arm-kernel@lists.infradead.org, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v3] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
causes us to lose the top bits of the DMA address if the size of a DMA
address is not the same as the size of the physical address.

This can happen in practice on ARM where foreign pages can be above 4GB even
though the local kernel does not have LPAE page tables enabled (which is
totally reasonable if the guest does not itself have >4GB of RAM). In this
case the kernel still maps the foreign pages at a phys addr below 4G (as it
must) but the resulting DMA address (returned by the grant map operation) is
much higher.

This is analogous to a hardware device which has its view of RAM mapped up
high for some reason.

This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
systems with more than 4GB of RAM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: linux-arm-kernel@lists.infradead.org
---
v3: Added Stefano's ack, added accidentally forgotten Ccs to Russell and l-a-k.
v2: Add comment on avoiding PFN_PHYS, and remove accidental truncation due to
    it in xen_bus_to_phys.
---
 arch/arm/Kconfig          |    1 +
 drivers/xen/swiotlb-xen.c |   22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..24307dc 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1885,6 +1885,7 @@ config XEN
 	depends on !GENERIC_ATOMIC64
 	select ARM_PSCI
 	select SWIOTLB_XEN
+	select ARCH_DMA_ADDR_T_64BIT
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 1eac073..ebd8f21 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -75,14 +75,32 @@ static unsigned long xen_io_tlb_nslabs;
 
 static u64 start_dma_addr;
 
+/*
+ * Both of these functions should avoid PFN_PHYS because phys_addr_t
+ * can be 32bit when dma_addr_t is 64bit leading to a loss in
+ * information if the shift is done before casting to 64bit.
+ */
 static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
 {
-	return phys_to_machine(XPADDR(paddr)).maddr;
+	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
+	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
+
+	dma |= paddr & ~PAGE_MASK;
+
+	return dma;
 }
 
 static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 {
-	return machine_to_phys(XMADDR(baddr)).paddr;
+	unsigned long pfn = mfn_to_pfn(PFN_DOWN(baddr));
+	dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
+	phys_addr_t paddr = dma;
+
+	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+
+	paddr |= baddr & ~PAGE_MASK;
+
+	return paddr;
 }
 
 static inline dma_addr_t xen_virt_to_bus(void *address)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 11:35:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 11:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5D8g-0002Vg-Q2; Mon, 20 Jan 2014 11:35:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5D8g-0002Vb-Bp
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 11:35:34 +0000
Received: from [193.109.254.147:9415] by server-16.bemta-14.messagelabs.com id
	C1/02-20600-50A0DD25; Mon, 20 Jan 2014 11:35:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390217732!11880120!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25521 invoked from network); 20 Jan 2014 11:35:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 11:35:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 11:35:32 +0000
Message-Id: <52DD180F020000780011502D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 11:35:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<1389978832.6697.137.camel@kazak.uk.xensource.com>
	<52DCE7700200007800114E67@nat28.tlf.novell.com>
	<1390215013.20516.37.camel@kazak.uk.xensource.com>
In-Reply-To: <1390215013.20516.37.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 11:50, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-20 at 08:08 +0000, Jan Beulich wrote:
>>  Yet we can't (without knowledge of the tools/
>> firmware implementation) tell regions backed by RAM assigned to the
>> guest (e.g. the reserved pages below 1Mb, covering BIOS stuff)
>> from regions reserved for other reasons. A specific firmware could,
>> for example, have a larger BIOS region right below 4Gb (like many
>> non-virtual BIOSes do), which would then also be RAM covered and
>> hence also need accounting.
> 
> Couldn't this be accounted for in the toolstack when considering the
> target and max_pages?

Perhaps it could, but ...

> But I suppose it is too late for that now if you
> want to DTRT on existing systems.

... yes, any tools side behavioral change would make the job even
harder for the balloon driver, and wouldn't necessarily help with
existing incarnations (and in fact wouldn't be unlikely to break some).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> On Mon, 2014-01-20 at 08:08 +0000, Jan Beulich wrote:
>>  Yet we can't (without knowledge of the tools/
>> firmware implementation) tell regions backed by RAM assigned to the
>> guest (e.g. the reserved pages below 1Mb, covering BIOS stuff)
>> from regions reserved for other reasons. A specific firmware could,
>> for example, have a larger BIOS region right below 4Gb (like many
>> non-virtual BIOSes do), which would then also be RAM covered and
>> hence also need accounting.
> 
> Couldn't this be accounted for in the toolstack when considering the
> target and max_pages?

Perhaps it could, but ...

> But I suppose it is too late for that now if you
> want to DTRT on existing systems.

... yes, any tools-side behavioral change would make the job even
harder for the balloon driver, and wouldn't necessarily help with
existing incarnations (in fact, it would quite likely break some).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 11:42:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 11:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5DFO-00030y-Mo; Mon, 20 Jan 2014 11:42:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5DFN-00030t-Nr
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 11:42:29 +0000
Received: from [193.109.254.147:63975] by server-1.bemta-14.messagelabs.com id
	B8/3C-15600-5AB0DD25; Mon, 20 Jan 2014 11:42:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390218148!11882032!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14596 invoked from network); 20 Jan 2014 11:42:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 11:42:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 11:42:28 +0000
Message-Id: <52DD19B10200007800115046@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 11:42:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Martin Unzner" <knusperbrot@gmx.de>
References: <52DBC709.80707@gmx.de>
	<1390212958.20516.21.camel@kazak.uk.xensource.com>
In-Reply-To: <1390212958.20516.21.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN bug at traps.c:3271
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 11:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Sun, 2014-01-19 at 13:37 +0100, Martin Unzner wrote:
>> Hi list,
>> 
>> I just installed Xen 4.3.1 for EFI according to 
>> https://bbs.archlinux.org/viewtopic.php?pid=1359933 and tried to boot 
>> it. There was a crash at the very beginning, notifying me of a bug at 
>> traps.c:3271. I could not find any logs, so I just noted the stack trace 
>> on a sheet of paper:
>> 
>> do_device_not_available
>> handle_exception
>> efi_get_time
>> get_cmos_time
>> init_xen_time
>> __start_xen
>> 
>> Does that mean my hardware is incompatible?
> 
> No, just that you've found a bug I think.
> 
> I'm copying the devel list here to see if anyone has any clues.

We've seen that before: you unfortunately have one of those
UEFI implementations that use XMM (or, less likely, FPU) registers,
which Xen doesn't allow. This is a consequence of the UEFI
specification being imprecise here: some firmware implementors
read it as permitting such use, while my reading is that it is not
allowed. The matter is currently being brought up with the USWG for
resolution. I'm afraid there's nothing you can do to work around
this for the time being (short of not booting via xen.efi, which -
depending on how you do it and on your firmware - may have other
bad side effects).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 12:10:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 12:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5DgD-0004Ey-Lt; Mon, 20 Jan 2014 12:10:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5DgD-0004Et-6M
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 12:10:13 +0000
Received: from [85.158.143.35:54404] by server-2.bemta-4.messagelabs.com id
	A7/2B-11386-4221DD25; Mon, 20 Jan 2014 12:10:12 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390219810!12792467!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3571 invoked from network); 20 Jan 2014 12:10:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 12:10:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92372818"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 12:10:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 07:10:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5Dg8-0005AK-VA;
	Mon, 20 Jan 2014 12:10:08 +0000
Date: Mon, 20 Jan 2014 12:09:05 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Wu, Feng" <feng.wu@intel.com>
In-Reply-To: <E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Shakeel Butt <shakeel.butt@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Wu, Feng wrote:
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> > Sent: Monday, January 20, 2014 1:48 PM
> > To: xen-devel@lists.xen.org
> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> > 
> > Hi all,
> > 
> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> > device model? I tried but I am getting error 'gfx_passthru' invalid
> > parameter for qemu-xen. I am able to do passthrough with qemu
> > traditional i.e. qemu-dm.
> 
> As far as I know, only qemu-traditional supports vga pass-through right now.

Right.
It is not possible to assign your primary VGA card to a VM with
qemu-xen. You should be able to assign your secondary VGA card though.
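
For reference, a minimal HVM guest config using the traditional device model for VGA passthrough might look like the sketch below (the PCI BDF, memory size, and disk path are placeholders, not values from this thread):

```
builder = "hvm"
memory = 2048
device_model_version = "qemu-xen-traditional" # qemu-xen lacks gfx_passthru
gfx_passthru = 1
pci = [ "01:00.0" ]       # BDF of the VGA card to assign (placeholder)
disk = [ "/path/to/disk.img,raw,xvda,rw" ]
```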

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 12:23:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 12:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Dt8-0004uW-1U; Mon, 20 Jan 2014 12:23:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5Dt7-0004uR-5N
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 12:23:33 +0000
Received: from [193.109.254.147:15351] by server-14.bemta-14.messagelabs.com
	id 5B/BE-12628-4451DD25; Mon, 20 Jan 2014 12:23:32 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390220610!11868671!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24950 invoked from network); 20 Jan 2014 12:23:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 12:23:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92376004"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 12:23:29 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 07:23:29 -0500
Message-ID: <52DD1540.7000503@citrix.com>
Date: Mon, 20 Jan 2014 12:23:28 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Paul Durrant <Paul.Durrant@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Any reviews on this one? It fixes an important lockup situation, so 
either this or some other fix should go in soon.

On 15/01/14 17:11, Zoltan Kiss wrote:
> The recent patch to fix receive side flow control (11b57f) solved the spinning
> thread problem, but caused another one. The receive side can stall if:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] an interrupt happens and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo doesn't return true anymore
>
> Also, if an interrupt is sent but there is still no room in the ring, it takes
> quite a long time until xenvif_rx_action realizes it. This patch ditches those
> two variables and reworks rx_work_todo. If the thread finds it can't fit more
> skbs into the ring, it saves the last slot estimate into rx_last_skb_slots,
> otherwise that is kept at 0. Then rx_work_todo will check whether:
> - there is something to send to the ring (as before)
> - there is space for the topmost packet in the queue
>
> I think that's a more natural and optimal thing to test than two bools which
> are set somewhere else.
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>   drivers/net/xen-netback/common.h    |    6 +-----
>   drivers/net/xen-netback/interface.c |    1 -
>   drivers/net/xen-netback/netback.c   |   16 ++++++----------
>   3 files changed, 7 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 4c76bcb..ae413a2 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -143,11 +143,7 @@ struct xenvif {
>   	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>   	struct xen_netif_rx_back_ring rx;
>   	struct sk_buff_head rx_queue;
> -	bool rx_queue_stopped;
> -	/* Set when the RX interrupt is triggered by the frontend.
> -	 * The worker thread may need to wake the queue.
> -	 */
> -	bool rx_event;
> +	RING_IDX rx_last_skb_slots;
>
>   	/* This array is allocated seperately as it is large */
>   	struct gnttab_copy *grant_copy_op;
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index b9de31e..7669d49 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>   {
>   	struct xenvif *vif = dev_id;
>
> -	vif->rx_event = true;
>   	xenvif_kick_thread(vif);
>
>   	return IRQ_HANDLED;
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 2738563..bb241d0 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
>   	unsigned long offset;
>   	struct skb_cb_overlay *sco;
>   	bool need_to_notify = false;
> -	bool ring_full = false;
>
>   	struct netrx_pending_operations npo = {
>   		.copy  = vif->grant_copy_op,
> @@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>   	skb_queue_head_init(&rxq);
>
>   	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> -		int max_slots_needed;
> +		RING_IDX max_slots_needed;
>   		int i;
>
>   		/* We need a cheap worse case estimate for the number of
> @@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
>   		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>   			skb_queue_head(&vif->rx_queue, skb);
>   			need_to_notify = true;
> -			ring_full = true;
> +			vif->rx_last_skb_slots = max_slots_needed;
>   			break;
> -		}
> +		} else
> +			vif->rx_last_skb_slots = 0;
>
>   		sco = (struct skb_cb_overlay *)skb->cb;
>   		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> @@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
>
>   	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
>
> -	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
> -
>   	if (!npo.copy_prod)
>   		goto done;
>
> @@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>
>   static inline int rx_work_todo(struct xenvif *vif)
>   {
> -	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
> -		vif->rx_event;
> +	return !skb_queue_empty(&vif->rx_queue) &&
> +	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
>   }
>
>   static inline int tx_work_todo(struct xenvif *vif)
> @@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
>   		if (!skb_queue_empty(&vif->rx_queue))
>   			xenvif_rx_action(vif);
>
> -		vif->rx_event = false;
> -
>   		if (skb_queue_empty(&vif->rx_queue) &&
>   		    netif_queue_stopped(vif->dev))
>   			xenvif_start_queue(vif);
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 12:23:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 12:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Dt8-0004uW-1U; Mon, 20 Jan 2014 12:23:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5Dt7-0004uR-5N
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 12:23:33 +0000
Received: from [193.109.254.147:15351] by server-14.bemta-14.messagelabs.com
	id 5B/BE-12628-4451DD25; Mon, 20 Jan 2014 12:23:32 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390220610!11868671!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24950 invoked from network); 20 Jan 2014 12:23:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 12:23:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92376004"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 12:23:29 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 07:23:29 -0500
Message-ID: <52DD1540.7000503@citrix.com>
Date: Mon, 20 Jan 2014 12:23:28 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Paul Durrant <Paul.Durrant@citrix.com>
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Any reviews on this one? It fixes an important lockup situation, so 
either this or some other fix should go in soon.

On 15/01/14 17:11, Zoltan Kiss wrote:
> The recent patch to fix receive side flow control (11b57f) solved the spinning
> thread problem, however caused an another one. The receive side can stall, if:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] interrupt happens, and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo doesn't return true anymore
>
> Also, if an interrupt is sent but there is still no room in the ring, it takes
> quite a long time until xenvif_rx_action realizes it. This patch ditches those
> two variables and reworks rx_work_todo. If the thread finds it can't fit more
> skbs into the ring, it saves the last slot estimate into rx_last_skb_slots,
> otherwise that is kept at 0. Then rx_work_todo checks whether:
> - there is something to send to the ring (as before)
> - there is space for the topmost packet in the queue
>
> I think that's a more natural and optimal thing to test than two booleans
> which are set somewhere else.
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>   drivers/net/xen-netback/common.h    |    6 +-----
>   drivers/net/xen-netback/interface.c |    1 -
>   drivers/net/xen-netback/netback.c   |   16 ++++++----------
>   3 files changed, 7 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 4c76bcb..ae413a2 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -143,11 +143,7 @@ struct xenvif {
>   	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>   	struct xen_netif_rx_back_ring rx;
>   	struct sk_buff_head rx_queue;
> -	bool rx_queue_stopped;
> -	/* Set when the RX interrupt is triggered by the frontend.
> -	 * The worker thread may need to wake the queue.
> -	 */
> -	bool rx_event;
> +	RING_IDX rx_last_skb_slots;
>
>   	/* This array is allocated seperately as it is large */
>   	struct gnttab_copy *grant_copy_op;
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index b9de31e..7669d49 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>   {
>   	struct xenvif *vif = dev_id;
>
> -	vif->rx_event = true;
>   	xenvif_kick_thread(vif);
>
>   	return IRQ_HANDLED;
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 2738563..bb241d0 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
>   	unsigned long offset;
>   	struct skb_cb_overlay *sco;
>   	bool need_to_notify = false;
> -	bool ring_full = false;
>
>   	struct netrx_pending_operations npo = {
>   		.copy  = vif->grant_copy_op,
> @@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>   	skb_queue_head_init(&rxq);
>
>   	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> -		int max_slots_needed;
> +		RING_IDX max_slots_needed;
>   		int i;
>
>   		/* We need a cheap worse case estimate for the number of
> @@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
>   		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>   			skb_queue_head(&vif->rx_queue, skb);
>   			need_to_notify = true;
> -			ring_full = true;
> +			vif->rx_last_skb_slots = max_slots_needed;
>   			break;
> -		}
> +		} else
> +			vif->rx_last_skb_slots = 0;
>
>   		sco = (struct skb_cb_overlay *)skb->cb;
>   		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> @@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
>
>   	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
>
> -	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
> -
>   	if (!npo.copy_prod)
>   		goto done;
>
> @@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>
>   static inline int rx_work_todo(struct xenvif *vif)
>   {
> -	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
> -		vif->rx_event;
> +	return !skb_queue_empty(&vif->rx_queue) &&
> +	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
>   }
>
>   static inline int tx_work_todo(struct xenvif *vif)
> @@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
>   		if (!skb_queue_empty(&vif->rx_queue))
>   			xenvif_rx_action(vif);
>
> -		vif->rx_event = false;
> -
>   		if (skb_queue_empty(&vif->rx_queue) &&
>   		    netif_queue_stopped(vif->dev))
>   			xenvif_start_queue(vif);
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 12:31:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 12:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5E0p-0005EF-Pf; Mon, 20 Jan 2014 12:31:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1W5E0n-0005EA-Nk
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 12:31:30 +0000
Received: from [85.158.139.211:42685] by server-2.bemta-5.messagelabs.com id
	03/75-29392-0271DD25; Mon, 20 Jan 2014 12:31:28 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390221086!10607257!1
X-Originating-IP: [209.85.219.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32670 invoked from network); 20 Jan 2014 12:31:28 -0000
Received: from mail-oa0-f52.google.com (HELO mail-oa0-f52.google.com)
	(209.85.219.52)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 12:31:28 -0000
Received: by mail-oa0-f52.google.com with SMTP id i4so866765oah.11
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 04:31:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qZ4oByBFc/6dr03AjYF89aD7Tet+bJpnwyq1MgvLZV0=;
	b=WraApv0DqW3yTEid7FA9LjO65yR8e1p7xklb+aZXW9zp/NxFgIzTRqyJH6Tz+jve49
	03EaU67QCtaM6ppoenQg7c+wRe6NGG+8vXRb5Z4BGrKSy1BMJfnPzokMeGBBFdt3zXm7
	5Ochg7pN8KQLMWbgCK+B6qwmUO9erYb57JAu2iTQ05Jjt4mIsiY1HQ/691e3M5CcMcee
	LpRiC1kS2gWE12SeIbXOGNjCrSafxVWcWeEgTHNtys20KahLpDQng6P+B8SKrKM/ypLb
	PfX4sVmgxBLLQq4lBw7q2x7glwyH9oXbw3het4OyAqK4yGgO7wxn0me9/n7crPn8qyVD
	Py5w==
MIME-Version: 1.0
X-Received: by 10.60.131.141 with SMTP id om13mr705816oeb.68.1390221086566;
	Mon, 20 Jan 2014 04:31:26 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Mon, 20 Jan 2014 04:31:26 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
Date: Mon, 20 Jan 2014 04:31:26 -0800
Message-ID: <CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "Wu, Feng" <feng.wu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 20 Jan 2014, Wu, Feng wrote:
>> > -----Original Message-----
>> > From: xen-devel-bounces@lists.xen.org
>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
>> > Sent: Monday, January 20, 2014 1:48 PM
>> > To: xen-devel@lists.xen.org
>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
>> >
>> > Hi all,
>> >
>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>> > parameter for qemu-xen. I am able to do passthrough with qemu
>> > traditional i.e. qemu-dm.
>>
>> As far as I know, only qemu-traditional supports vga pass-through right now.
>
> Right.
> It is not possible to assign your primary VGA card to a VM with
> qemu-xen. You should be able to assign your secondary VGA card though.

Let me understand this correctly: if I have two VGA cards, I can pass through
the secondary VGA card (in Dom0) to an HVM guest as its primary VGA card. Is
this right, and if so, how can I do it?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 12:50:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 12:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5EJ1-0006GJ-MG; Mon, 20 Jan 2014 12:50:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5EIz-0006GE-Do
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 12:50:17 +0000
Received: from [85.158.143.35:10333] by server-1.bemta-4.messagelabs.com id
	9A/E8-02132-88B1DD25; Mon, 20 Jan 2014 12:50:16 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390222215!12814344!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27358 invoked from network); 20 Jan 2014 12:50:16 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 12:50:16 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id 3C496221BEA
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 12:50:15 +0000 (GMT)
MIME-Version: 1.0
Date: Mon, 20 Jan 2014 12:50:14 +0000
From: Gordan Bobic <gordan@bobich.net>
To: xen-devel@lists.xen.org
In-Reply-To: <CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
Message-ID: <2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-20 12:31, Shakeel Butt wrote:
> On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>> > -----Original Message-----
>>> > From: xen-devel-bounces@lists.xen.org
>>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
>>> > Sent: Monday, January 20, 2014 1:48 PM
>>> > To: xen-devel@lists.xen.org
>>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
>>> >
>>> > Hi all,
>>> >
>>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>>> > parameter for qemu-xen. I am able to do passthrough with qemu
>>> > traditional i.e. qemu-dm.
>>> 
>>> As far as I know, only qemu-traditional supports vga pass-through 
>>> right now.
>> 
>> Right.
>> It is not possible to assign your primary VGA card to a VM with
>> qemu-xen. You should be able to assign your secondary VGA card though.
> 
> Let me understand this correctly. If I have two VGA cards then I can
> passthrough
> secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this 
> right and
> if yes how can I do it?

Passing any VGA card as a primary-in-domU has always been problematic.
Passing it as a secondary-in-domU generally works fine, though. Do you
have a specific requirement for the card to be primary in domU? It
doesn't actually gain you anything over and above seeing your domU
loading screen on the external monitor. Once the driver for the card
loads you get output as per normal.

Why do you need it to be primary in domU?

Gordan
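For reference, the secondary-card setup discussed above is normally just a PCI passthrough entry in the guest config. A hedged sketch of an xl HVM config follows; the BDF values, disk path, and sizes are placeholders, not taken from this thread:

```
# Minimal HVM guest using the upstream device model; gfx_passthru is
# omitted because qemu-xen rejects it (it is qemu-traditional only).
builder = "hvm"
device_model_version = "qemu-xen"
memory = 2048
vcpus = 2
disk = [ "phy:/dev/vg0/guest,xvda,w" ]
# Secondary GPU (and its audio function), hidden from dom0 via pciback:
pci = [ "01:00.0", "01:00.1" ]
```

With qemu-traditional as the device model instead, adding gfx_passthru=1 would attempt primary VGA passthrough, which is what fails under qemu-xen.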

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:01:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ETO-0006Yl-8V; Mon, 20 Jan 2014 13:01:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5ETM-0006Yg-V3
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:01:01 +0000
Received: from [85.158.139.211:27321] by server-6.bemta-5.messagelabs.com id
	B9/4B-16310-C0E1DD25; Mon, 20 Jan 2014 13:01:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390222859!10746931!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13273 invoked from network); 20 Jan 2014 13:00:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 13:00:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 13:00:59 +0000
Message-Id: <52DD2C17020000780011507E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 13:00:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-2-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1386236334-15410-2-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 1/7] x86: detect and initialize Cache QoS
 Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -230,6 +230,12 @@ static void __devinit init_intel(struct cpuinfo_x86 *c)
>  	     ( c->cpuid_level >= 0x00000006 ) &&
>  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
>  		set_bit(X86_FEATURE_ARAT, c->x86_capability);
> +
> +	/* Check platform QoS monitoring capability */
> +	if ((c->cpuid_level >= 0x00000007) &&
> +	    (cpuid_ebx(0x00000007) & (1u<<12)))
> +		set_bit(X86_FEATURE_QOSM, c->x86_capability);
> +

This is redundant with generic_identify() setting the respective
c->x86_capability[] element.

> +struct pqos_cqm *cqm;

__read_mostly?

> +
> +static void __init init_cqm(void)
> +{
> +    unsigned int rmid;
> +    unsigned int eax, edx;
> +
> +    if ( !opt_cqm_max_rmid )
> +        return;
> +
> +    cqm = xzalloc(struct pqos_cqm);
> +    if ( !cqm )
> +        return;
> +
> +    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
> +    if ( !(edx & QOS_MONITOR_EVTID_L3) )
> +    {
> +        xfree(cqm);

"cqm" is a global variable and - afaict - the only way for other
entities to tell whether the functionality is enabled. Hence shouldn't
you clear the variable here (and similarly further down)? Otherwise
the variable should perhaps be static.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:02:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:02:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5EUV-0006cX-NR; Mon, 20 Jan 2014 13:02:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W5EUT-0006cQ-VH
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 13:02:10 +0000
Received: from [85.158.143.35:55600] by server-3.bemta-4.messagelabs.com id
	67/57-32360-15E1DD25; Mon, 20 Jan 2014 13:02:09 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390222927!10131640!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13726 invoked from network); 20 Jan 2014 13:02:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 13:02:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92387188"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 13:02:06 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 20 Jan 2014 08:02:06 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Mon, 20 Jan 2014 14:02:04 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "Jonathan
	Davies" <Jonathan.Davies@citrix.com>
Thread-Topic: [PATCH net-next v2] xen-netback: Rework rx_work_todo
Thread-Index: AQHPEhT6QaxFYMtfNkKJhENPI4F3rpqNf+oAgAAbJBA=
Date: Mon, 20 Jan 2014 13:02:04 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD020BC91@AMSPEX01CL01.citrite.net>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
	<52DD1540.7000503@citrix.com>
In-Reply-To: <52DD1540.7000503@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Zoltan Kiss
> Sent: 20 January 2014 12:23
> To: Ian Campbell; Wei Liu; xen-devel@lists.xenproject.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; Jonathan Davies
> Cc: Paul Durrant
> Subject: Re: [PATCH net-next v2] xen-netback: Rework rx_work_todo
> 
> Any reviews on this one? It fixes an important lockup situation, so
> either this or some other fix should go in soon.
> 
> On 15/01/14 17:11, Zoltan Kiss wrote:
> > The recent patch to fix receive side flow control (11b57f) solved the
> > spinning thread problem, but caused another one. The receive side can
> > stall if:
> > - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> > - [INTERRUPT] an interrupt happens and sets rx_event to true
> > - [THREAD] then xenvif_kthread sets rx_event to false
> > - [THREAD] rx_work_todo doesn't return true anymore
> >
> > Also, if an interrupt is sent but there is still no room in the ring, it
> > takes quite a long time until xenvif_rx_action realizes it. This patch
> > ditches those two variables and reworks rx_work_todo. If the thread finds
> > it can't fit more skbs into the ring, it saves the last slot estimate into
> > rx_last_skb_slots; otherwise it is kept at 0. Then rx_work_todo checks
> > whether:
> > - there is something to send to the ring (as before)
> > - there is space for the topmost packet in the queue
> >
> > I think that's a more natural and optimal thing to test than two bools
> > which are set somewhere else.
> >
> > Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> > ---
> >   drivers/net/xen-netback/common.h    |    6 +-----
> >   drivers/net/xen-netback/interface.c |    1 -
> >   drivers/net/xen-netback/netback.c   |   16 ++++++----------
> >   3 files changed, 7 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> > index 4c76bcb..ae413a2 100644
> > --- a/drivers/net/xen-netback/common.h
> > +++ b/drivers/net/xen-netback/common.h
> > @@ -143,11 +143,7 @@ struct xenvif {
> >   	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> >   	struct xen_netif_rx_back_ring rx;
> >   	struct sk_buff_head rx_queue;
> > -	bool rx_queue_stopped;
> > -	/* Set when the RX interrupt is triggered by the frontend.
> > -	 * The worker thread may need to wake the queue.
> > -	 */
> > -	bool rx_event;
> > +	RING_IDX rx_last_skb_slots;
> >
> >   	/* This array is allocated seperately as it is large */
> >   	struct gnttab_copy *grant_copy_op;
> > diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> > index b9de31e..7669d49 100644
> > --- a/drivers/net/xen-netback/interface.c
> > +++ b/drivers/net/xen-netback/interface.c
> > @@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
> >   {
> >   	struct xenvif *vif = dev_id;
> >
> > -	vif->rx_event = true;
> >   	xenvif_kick_thread(vif);
> >
> >   	return IRQ_HANDLED;
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index 2738563..bb241d0 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
> >   	unsigned long offset;
> >   	struct skb_cb_overlay *sco;
> >   	bool need_to_notify = false;
> > -	bool ring_full = false;
> >
> >   	struct netrx_pending_operations npo = {
> >   		.copy  = vif->grant_copy_op,
> > @@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >   	skb_queue_head_init(&rxq);
> >
> >   	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> > -		int max_slots_needed;
> > +		RING_IDX max_slots_needed;
> >   		int i;
> >
> >   		/* We need a cheap worse case estimate for the number of
> > @@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
> >   		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
> >   			skb_queue_head(&vif->rx_queue, skb);
> >   			need_to_notify = true;
> > -			ring_full = true;
> > +			vif->rx_last_skb_slots = max_slots_needed;
> >   			break;
> > -		}
> > +		} else
> > +			vif->rx_last_skb_slots = 0;
> >
> >   		sco = (struct skb_cb_overlay *)skb->cb;
> >   		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> > @@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
> >
> >   	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> >
> > -	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
> > -
> >   	if (!npo.copy_prod)
> >   		goto done;
> >
> > @@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> >
> >   static inline int rx_work_todo(struct xenvif *vif)
> >   {
> > -	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
> > -		vif->rx_event;
> > +	return !skb_queue_empty(&vif->rx_queue) &&
> > +	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
> >   }
> >
> >   static inline int tx_work_todo(struct xenvif *vif)
> > @@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
> >   		if (!skb_queue_empty(&vif->rx_queue))
> >   			xenvif_rx_action(vif);
> >
> > -		vif->rx_event = false;
> > -

The minimal patch would be to simply move this line up above the previous if clause, but I'm happy with your patch as it stands, so:

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> >   		if (skb_queue_empty(&vif->rx_queue) &&
> >   		    netif_queue_stopped(vif->dev))
> >   			xenvif_start_queue(vif);
> >


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:14:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Efp-0007QH-3I; Mon, 20 Jan 2014 13:13:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5Efn-0007QC-OP
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:13:51 +0000
Received: from [193.109.254.147:42461] by server-9.bemta-14.messagelabs.com id
	EF/AC-13957-F012DD25; Mon, 20 Jan 2014 13:13:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390223630!9648006!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15420 invoked from network); 20 Jan 2014 13:13:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 13:13:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 13:13:50 +0000
Message-Id: <52DD2F1A02000078001150A4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 13:13:45 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> @@ -1223,6 +1224,45 @@ long arch_do_domctl(
>      }
>      break;
>  
> +    case XEN_DOMCTL_attach_pqos:
> +    {
> +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> +        {
> +            if ( !system_supports_cqm() )
> +                ret = -ENODEV;
> +            else if ( d->arch.pqos_cqm_rmid > 0 )
> +                ret = -EEXIST;
> +            else
> +            {
> +                ret = alloc_cqm_rmid(d);
> +                if ( ret < 0 )
> +                    ret = -EUSERS;

Why don't you have the function return a sensible error code
(which presumably might also end up being something other than
-EUSERS, e.g. -ENOMEM)?
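One way to read this suggestion, as a hedged stand-alone sketch (the table size and DOMID_INVALID value are illustrative, not the real constants): alloc_cqm_rmid() itself returns either the RMID or a meaningful negative errno, which the domctl handler can then propagate unchanged.

```c
#include <assert.h>
#include <errno.h>

#define MIN_RMID      1u
#define MAX_RMID      4u          /* tiny table for illustration */
#define DOMID_INVALID 0x7ff4u

static unsigned int rmid_to_dom[MAX_RMID + 1];

static void cqm_table_init(void)
{
    for (unsigned int r = 0; r <= MAX_RMID; r++)
        rmid_to_dom[r] = DOMID_INVALID;
}

/* Returns the allocated RMID (> 0), or -EUSERS when all RMIDs are taken,
 * so the caller never has to translate a generic -1 into an errno. */
static int alloc_cqm_rmid(unsigned int domid)
{
    for (unsigned int r = MIN_RMID; r <= MAX_RMID; r++) {
        if (rmid_to_dom[r] != DOMID_INVALID)
            continue;
        rmid_to_dom[r] = domid;
        return (int)r;
    }
    return -EUSERS;
}
```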

> +            }
> +        }
> +        else
> +            ret = -EINVAL;
> +    }
> +    break;
> +
> +    case XEN_DOMCTL_detach_pqos:
> +    {
> +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> +        {
> +            if ( !system_supports_cqm() )
> +                ret = -ENODEV;
> +            else if ( d->arch.pqos_cqm_rmid > 0 )
> +            {
> +                free_cqm_rmid(d);
> +                ret = 0;
> +            }
> +            else
> +                ret = -ENOENT;
> +        }
> +        else
> +            ret = -EINVAL;
> +    }
> +    break;

For consistency, both of the above would better be changed to a
single series of if()/else if().../else.
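The shape being suggested, sketched as a stand-alone function (system_supports_cqm() and the domain's RMID state are faked with plain variables here; the real code operates on struct domain inside arch_do_domctl):

```c
#include <assert.h>
#include <errno.h>

#define XEN_DOMCTL_pqos_cqm 0x1u

/* Stand-ins for the real predicates in the hypervisor. */
static int cqm_supported;
static int cur_rmid;

/* XEN_DOMCTL_attach_pqos as one flat if()/else if()/else ladder,
 * rather than an if() with a nested ladder plus a trailing else. */
static int attach_pqos(unsigned int flags)
{
    if ( !(flags & XEN_DOMCTL_pqos_cqm) )
        return -EINVAL;
    else if ( !cqm_supported )
        return -ENODEV;
    else if ( cur_rmid > 0 )
        return -EEXIST;
    else
        return 0;          /* alloc_cqm_rmid(d) in the real code */
}
```
<imports>
</imports>
<test>
int main(void)
{
    assert(attach_pqos(0) == -EINVAL);
    cqm_supported = 0;
    assert(attach_pqos(XEN_DOMCTL_pqos_cqm) == -ENODEV);
    cqm_supported = 1;
    cur_rmid = 3;
    assert(attach_pqos(XEN_DOMCTL_pqos_cqm) == -EEXIST);
    cur_rmid = 0;
    assert(attach_pqos(XEN_DOMCTL_pqos_cqm) == 0);
    return 0;
}
```
</test>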

> +bool_t system_supports_cqm(void)
> +{
> +    return !!cqm;

So here we go (wrt the remark on patch 1).

> +}
> +
> +int alloc_cqm_rmid(struct domain *d)
> +{
> +    int rc = 0;
> +    unsigned int rmid;
> +    unsigned long flags;
> +
> +    ASSERT(system_supports_cqm());
> +
> +    spin_lock_irqsave(&cqm_lock, flags);

Why not just spin_lock()? Briefly scanning over the following patches
doesn't point out anything that might require this to be an IRQ-safe
lock.

> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +    {
> +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
> +            continue;
> +
> +        cqm->rmid_to_dom[rmid] = d->domain_id;
> +        break;
> +    }
> +    spin_unlock_irqrestore(&cqm_lock, flags);
> +
> +    /* No CQM RMID available, assign RMID=0 by default */
> +    if ( rmid > cqm->max_rmid )
> +    {
> +        rmid = 0;
> +        rc = -1;
> +    }
> +
> +    d->arch.pqos_cqm_rmid = rmid;

Is it really safe to do this and the freeing below outside of the
lock?
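The race being hinted at can be avoided by keeping the d->arch.pqos_cqm_rmid update under the same lock as the table scan. A hedged sketch, with a pthread mutex standing in for the Xen spinlock and a simplified struct domain (names follow the quoted patch):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

#define MIN_RMID      1u
#define MAX_RMID      2u
#define DOMID_INVALID 0x7ff4u

struct domain { unsigned int domain_id, pqos_cqm_rmid; };

static unsigned int rmid_to_dom[MAX_RMID + 1] = {
    DOMID_INVALID, DOMID_INVALID, DOMID_INVALID
};
static pthread_mutex_t cqm_lock = PTHREAD_MUTEX_INITIALIZER;

static int alloc_cqm_rmid(struct domain *d)
{
    unsigned int rmid;

    pthread_mutex_lock(&cqm_lock);
    for (rmid = MIN_RMID; rmid <= MAX_RMID; rmid++) {
        if (rmid_to_dom[rmid] != DOMID_INVALID)
            continue;
        rmid_to_dom[rmid] = d->domain_id;
        break;
    }
    if (rmid > MAX_RMID)
        rmid = 0;                   /* exhausted: fall back to RMID 0 */
    /* Publish the domain's RMID before dropping the lock, so a
     * concurrent free/alloc cannot observe a half-updated pairing. */
    d->pqos_cqm_rmid = rmid;
    pthread_mutex_unlock(&cqm_lock);

    return rmid ? 0 : -EUSERS;
}
```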

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:17:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:17:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Ej2-0007XG-NO; Mon, 20 Jan 2014 13:17:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5Ej0-0007X9-SS
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:17:11 +0000
Received: from [85.158.143.35:37546] by server-2.bemta-4.messagelabs.com id
	94/C6-11386-6D12DD25; Mon, 20 Jan 2014 13:17:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390223828!12822063!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30451 invoked from network); 20 Jan 2014 13:17:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 13:17:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92392877"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 13:17:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 08:17:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W5Eiw-00064M-Ab;
	Mon, 20 Jan 2014 13:17:06 +0000
Message-ID: <52DD21D1.5010202@citrix.com>
Date: Mon, 20 Jan 2014 13:17:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
In-Reply-To: <52DD2F1A02000078001150A4@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	dario.faggioli@citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, Dongxiao Xu <dongxiao.xu@intel.com>,
	dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 13:13, Jan Beulich wrote:
>>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> @@ -1223,6 +1224,45 @@ long arch_do_domctl(
>>      }
>>      break;
>>  
>> +    case XEN_DOMCTL_attach_pqos:
>> +    {
>> +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
>> +        {
>> +            if ( !system_supports_cqm() )
>> +                ret = -ENODEV;
>> +            else if ( d->arch.pqos_cqm_rmid > 0 )
>> +                ret = -EEXIST;
>> +            else
>> +            {
>> +                ret = alloc_cqm_rmid(d);
>> +                if ( ret < 0 )
>> +                    ret = -EUSERS;
> Why don't you have the function return a sensible error code
> (which presumably might also end up being other than -EUSERS,
> e.g. -ENOMEM).

-EUSERS is correct here.  A failure like this means "all the
available system RMIDs are already being used by other domains".

~Andrew

>
>> +            }
>> +        }
>> +        else
>> +            ret = -EINVAL;
>> +    }
>> +    break;
>> +
>> +    case XEN_DOMCTL_detach_pqos:
>> +    {
>> +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
>> +        {
>> +            if ( !system_supports_cqm() )
>> +                ret = -ENODEV;
>> +            else if ( d->arch.pqos_cqm_rmid > 0 )
>> +            {
>> +                free_cqm_rmid(d);
>> +                ret = 0;
>> +            }
>> +            else
>> +                ret = -ENOENT;
>> +        }
>> +        else
>> +            ret = -EINVAL;
>> +    }
>> +    break;
> For consistency, both of the above would better be changed to a
> single series of if()/else if().../else.
>
>> +bool_t system_supports_cqm(void)
>> +{
>> +    return !!cqm;
> So here we go (wrt the remark on patch 1).
>
>> +}
>> +
>> +int alloc_cqm_rmid(struct domain *d)
>> +{
>> +    int rc = 0;
>> +    unsigned int rmid;
>> +    unsigned long flags;
>> +
>> +    ASSERT(system_supports_cqm());
>> +
>> +    spin_lock_irqsave(&cqm_lock, flags);
> Why not just spin_lock()? Briefly scanning over the following patches
> doesn't point out anything that might require this to be an IRQ-safe
> lock.
>
>> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
>> +    {
>> +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
>> +            continue;
>> +
>> +        cqm->rmid_to_dom[rmid] = d->domain_id;
>> +        break;
>> +    }
>> +    spin_unlock_irqrestore(&cqm_lock, flags);
>> +
>> +    /* No CQM RMID available, assign RMID=0 by default */
>> +    if ( rmid > cqm->max_rmid )
>> +    {
>> +        rmid = 0;
>> +        rc = -1;
>> +    }
>> +
>> +    d->arch.pqos_cqm_rmid = rmid;
> Is it really safe to do this and the freeing below outside of the
> lock?
>
> Jan
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:20:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Em2-00080X-Av; Mon, 20 Jan 2014 13:20:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5Em0-00080R-Iw
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:20:16 +0000
Received: from [85.158.137.68:55326] by server-5.bemta-3.messagelabs.com id
	D0/EF-25188-F822DD25; Mon, 20 Jan 2014 13:20:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390224013!10161388!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26794 invoked from network); 20 Jan 2014 13:20:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 13:20:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="94475441"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 13:20:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 08:20:12 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5Elv-0000XR-AR;
	Mon, 20 Jan 2014 13:20:11 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 13:20:10 +0000
Message-ID: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and
	terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 README | 202 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 201 insertions(+), 1 deletion(-)

diff --git a/README b/README
index 29c9d45..60379c4 100644
--- a/README
+++ b/README
@@ -1,3 +1,197 @@
+Introduction
+============
+
+OSStest is the Xen Project automated test infrastructure.
+
+Terminology
+===========
+
+"flight":
+
+    Each run of osstest is referred to as a "flight". Each flight is
+    given a unique ID (a number or name).
+
+"job":
+
+    Each flight consists of one or more "jobs". Each job is a sequence
+    of test steps run in order and corresponds to a column in the test
+    report grid. Jobs have names like "build-amd64" or
+    "test-amd64-amd64-pv". A job can depend on the output of another
+    job in the flight -- e.g. most test-* jobs depend on one or more
+    build-* jobs.
+
+"step":
+
+    Each job consists of multiple "steps", each an individual test
+    operation, such as "build the hypervisor", "install a guest",
+    "start a guest", "migrate a guest", etc. A step corresponds to a
+    cell in the results grid. A given step can be reused in multiple
+    different jobs, e.g. the "xen build" step is used in several
+    different build-* jobs. This reuse can be seen in the rows of the
+    results grid.
+
+"runvar":
+
+    A runvar is a named textual variable associated with each job in a
+    given flight. They serve as both the inputs and outputs to the
+    job.
+
+    For example a Xen build job may have input runvars "tree_xen" (the
+    tree to clone) and "revision_xen" (the version to test). As output
+    it sets "path_xendist" which is the path to the tarball of the
+    resulting binary.
+
+    As a further example a test job may have an input runvar
+    "xenbuildjob" which specifies which build job produced the binary
+    to be tested. The "xen install" step can then read this runvar in
+    order to find the binary to install.
+
+    Other runvars also exist covering things such as:
+
+        * constraints on which machines in the test pool a job can be
+          run on (e.g. the architecture, the need for a particular
+          processor class, the presence of SR-IOV etc).
+
+        * the parameters of the guest to test (e.g. distro, PV vs HVM
+          etc).
+
+Operation
+=========
+
+A flight is constructed by the "make-flight" script.
+
+"make-flight" will allocate a new flight number, create a set of jobs
+with input runvars depending on the configuration (e.g. branch/version
+to test).
+
+A flight is run by the "mg-execute-flight" script, which in turn calls
+"sg-execute-flight". "sg-execute-flight" then spawns an instance of
+"sg-run-job" for each job in the flight.
+
+"sg-run-job" encodes various recipes (sequences of steps) which are
+referenced by each job's configuration. It then runs each of these in
+turn, taking into account the prerequisites etc, by calling the
+relevant "ts-*" scripts.
+
+When running in standalone mode it is possible to run any of these
+steps by hand, ("mg-execute-flight", "sg-run-job", "ts-*") although
+you will need to find the correct inputs (some of which are documented
+below) and perhaps take care of prerequisites yourself (e.g. running
+"./sg-run-job test-armhf-armhf-xl" means you must have done
+"./sg-run-job build-armhf" and "./sg-run-job build-armhf-pvops" first).
+
+Results
+=======
+
+For flights run automatically by the infrastructure an email report is
+produced. For most normal flights this is mailed to the xen-devel
+mailing list. The report for flight 24438 can be seen at
+
+    http://lists.xen.org/archives/html/xen-devel/2014-01/msg01614.html
+
+The report will link to a set of complete logs. Since these are
+regularly expired due to space constraints the logs for flight 24438
+have been archived to
+
+    http://xenbits.xen.org/docs/osstest-output-example/24438/
+
+NB: to save space any files larger than 100K have been replaced with a
+placeholder.
+
+The results grid contains an overview of the flight's execution.
+
+The results for each job are reached by clicking the header of the
+column in the results grid which will lead to reports such as:
+
+    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/info.html
+    http://xenbits.xen.org/docs/osstest-output-example/24438/build-amd64/info.html
+
+The job report contains all of the logs and build outputs associated
+with this job.
+
+The logs for a step are reached by clicking the individual cells of
+the results grid, or by clicking the list of steps on the job
+report. In either case this will lead to a report such as
+
+    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/4.ts-xen-install.log
+
+Additional details (e.g. serial logs, guest cfg files, etc) will be
+available in the complete logs associated with the containing job.
+
+The runvars are listed in the results page for the job as "Test
+control variables". e.g. See the end of:
+
+    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/info.html
+
+In order to find the binaries which went into a test job you should
+consult the results page for that job and find the relevant build
+job. e.g.
+
+    http://www.chiark.greenend.org.uk/~xensrcts/logs/24438/test-amd64-amd64-xl/info.html
+
+lists "xenbuildjob" as "build-amd64". Therefore the input binaries are
+found at
+
+    http://www.chiark.greenend.org.uk/~xensrcts/logs/24438/build-amd64/info.html
+
+which is linked from the top of the relevant column in the overview
+grid.
+
+Script Naming Conventions
+=========================
+
+Most of the scripts follow a naming convention:
+
+ap-*:   Adhoc push scripts
+
+cr-*:   Cron scripts
+
+cri-*:  Cron scripts (internal)
+
+cs-*:   Control Scripts
+
+mg-*:   Management scripts
+
+ms-*:   Management Services
+
+sg-*:   ?
+
+ts-*:   Test Step scripts.
+
+Jobs
+====
+
+The names of jobs follow some common patterns:
+
+    build-$ARCH
+
+        Build Xen for $ARCH
+
+    build-$ARCH-xend
+
+        Build Xen for $ARCH, with xend enabled
+
+    build-$ARCH-pvops
+
+        Build an upstream ("pvops") kernel for $ARCH
+
+build-$ARCH-oldkern
+
+        Build the old "linux-2.6.18-xen" tree for $ARCH
+
+test-$XENARCH-$DOM0ARCH-<CASE>
+
+        A test <CASE> running a $XENARCH hypervisor with a $DOM0ARCH
+        dom0.
+
+        Some tests also have a -$DOMUARCH suffix indicating the
+        obvious thing.
+
+NB: $ARCH (and $XENARCH etc) are Debian arch names, i386, amd64, armhf.
+
+Standalone Mode
+===============
+
 To run osstest in standalone mode:
 
  - You need to install
@@ -18,7 +212,7 @@ To run osstest in standalone mode:
    gives you the "branch" consisting of tests run for the xen-unstable
    push gate.  You need to select a job.  The list of available jobs
    is that shown in the publicly emailed test reports on xen-devel, eg
-     http://lists.xen.org/archives/html/xen-devel/2013-08/msg02529.html
+     http://lists.xen.org/archives/html/xen-devel/2014-01/msg01614.html
 
    If you don't want to repro one of those and don't know how to
    choose a job, choose one of
@@ -26,6 +220,12 @@ To run osstest in standalone mode:
 
  - Run ./standalone-reset
 
+   This will call "make-flight" for you to create a flight targeting
+   xen-unstable (this can be adjusted by passing parameters to
+   standalone-reset). By default the flight identifier is
+   "standalone". standalone-reset will also make sure that certain
+   bits of static data are available (e.g. Debian installer images).
+
  - Then you can run
       ./sg-run-job <job>
    to run that job on the default host.  NB in most cases this will
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:24:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Eq8-00089g-Ah; Mon, 20 Jan 2014 13:24:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W5Eq7-00089W-1C
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:24:31 +0000
Received: from [85.158.139.211:39058] by server-5.bemta-5.messagelabs.com id
	7B/B5-14928-E832DD25; Mon, 20 Jan 2014 13:24:30 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390224268!10816651!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29774 invoked from network); 20 Jan 2014 13:24:29 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-206.messagelabs.com with SMTP;
	20 Jan 2014 13:24:29 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 20 Jan 2014 05:20:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,690,1384329600"; d="scan'208";a="469490056"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 20 Jan 2014 05:24:27 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 20 Jan 2014 05:24:27 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 20 Jan 2014 05:24:26 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Mon, 20 Jan 2014 21:24:24 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Gordan Bobic <gordan@bobich.net>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
Thread-Index: AQHPFaN4d0WbbJ4Ux0yVoFE4Q2MhMJqNL2UQ///QC4CAAAY/AIAABUAAgACPQjA=
Date: Mon, 20 Jan 2014 13:24:23 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
In-Reply-To: <2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> Sent: Monday, January 20, 2014 8:50 PM
> To: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> 
> On 2014-01-20 12:31, Shakeel Butt wrote:
> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
> >>> > -----Original Message-----
> >>> > From: xen-devel-bounces@lists.xen.org
> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> >>> > Sent: Monday, January 20, 2014 1:48 PM
> >>> > To: xen-devel@lists.xen.org
> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> >>> >
> >>> > Hi all,
> >>> >
> >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
> >>> > traditional i.e. qemu-dm.
> >>>
> >>> As far as I know, only qemu-traditional supports vga pass-through
> >>> right now.
> >>
> >> Right.
> >> It is not possible to assign your primary VGA card to a VM with
> >> qemu-xen. You should be able to assign your secondary VGA card though.
> >
> > Let me understand this correctly. If I have two VGA cards then I can
> > passthrough
> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
> > right and
> > if yes how can I do it?
> 
> Passing any VGA card as a primary-in-domU has always been problematic.

I think passing a VGA card as primary-in-domU works well with qemu-traditional, right?
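
For context, this kind of passthrough is set up in the xl domain config. A minimal sketch, assuming the traditional device model is wanted (the PCI BDF below is a placeholder, not a real device):

```
# Hypothetical xl config fragment for VGA passthrough with the
# traditional device model; replace "01:00.0" with your card's BDF.
builder = "hvm"
device_model_version = "qemu-xen-traditional"
gfx_passthru = 1
pci = [ "01:00.0" ]
```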

> Passing it as a secondary-in-domU generally works fine, though. Do you
> have a specific requirement for the card to be primary in domU? It
> doesn't actually gain you anything over and above seeing your domU
> loading screen on the external monitor. Once the driver for the card
> loads you get output as per normal.
> 
> Why do you need it to be primary in domU?
> 
> Gordan
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Thanks,
Feng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:27:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Esa-0008Hf-Tu; Mon, 20 Jan 2014 13:27:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5EsZ-0008HZ-40
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:27:03 +0000
Received: from [85.158.139.211:6096] by server-13.bemta-5.messagelabs.com id
	A2/83-11357-6242DD25; Mon, 20 Jan 2014 13:27:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390224419!10817315!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31955 invoked from network); 20 Jan 2014 13:27:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 13:27:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 13:26:59 +0000
Message-Id: <52DD322F02000078001150C0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 13:26:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1386236334-15410-4-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 3/7] x86: initialize per socket cpu map
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> For each socket in the system, we create a separate bitmap to tag its
> related CPUs. This per socket bitmap will be initialized on system
> start up, and adjusted when CPU is dynamically online/offline.

There's no reasoning here at all why cpu_sibling_mask and
cpu_core_mask aren't sufficient.

> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -59,6 +59,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
>  cpumask_t cpu_online_map __read_mostly;
>  EXPORT_SYMBOL(cpu_online_map);
>  
> +cpumask_t socket_cpu_map[MAX_NUM_SOCKETS] __read_mostly;
> +EXPORT_SYMBOL(socket_cpu_map);

And _if_ we really need it, then it should be done in a better way
than via a statically sized array, the size of which can't even be
overridden on the build and/or hypervisor command line.

And there shouldn't be EXPORT_SYMBOL() in new, not directly
cloned hypervisor code either.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:30:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5EwA-0000AF-IW; Mon, 20 Jan 2014 13:30:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5Ew8-0000AA-Ve
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:30:45 +0000
Received: from [85.158.139.211:62798] by server-12.bemta-5.messagelabs.com id
	3F/70-30017-4052DD25; Mon, 20 Jan 2014 13:30:44 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390224641!8096572!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 664 invoked from network); 20 Jan 2014 13:30:43 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 13:30:43 -0000
Received: from mail-ob0-f169.google.com ([209.85.214.169]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt0lADznE4pZu91sVyC9q2Wbzxn5T81F@postini.com;
	Mon, 20 Jan 2014 05:30:42 PST
Received: by mail-ob0-f169.google.com with SMTP id wo20so2917138obc.14
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 05:30:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qgj13y/lsvjaDu46u683wQOs6CqYfKnCYmdWZN6ntMA=;
	b=RwwHYGyWcbnUwy034QSY+LMYQp4HUdzjqNVr8RiD02SHYY1RxCWd1V1FTpTkLaUe7x
	lra4FOMSWt8MFvny49t/7fU/Lji1X3XDK5HWIWiVLXZmxYappF2K5pWk9JmD/lAt8HVA
	UqVdm8S2TGLGsvOxly8bcNvNNE+SIJp9eZ/P4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=qgj13y/lsvjaDu46u683wQOs6CqYfKnCYmdWZN6ntMA=;
	b=OS3++IMi9hoLE5HpPbd0H6Nwm1lYKVrKBJlRy4OgePSw1wgFtOSHNwlV3OsBHPDCbx
	S1qF3K2Ge+Pr3vfak76OT1gikzTDB4wtVTAEAykgsRHQ1h36HCLAEYMVsdSqux5hR9cn
	S9iTxxhj2fWobwcoM6QwlECV7uguda45l0YQU66uB99GBqpjnci+5YIVF7+CkExZoSQG
	h3EUrcUWOgErAKAGGCpgB7vOdrPR5VCiArn76gb4cZ5xUB62NWw7Q/DOQzXTw8p++H44
	nWxtCsNK2sl/90CpiyRlYvwOEcHlU1djEP6tk5M8BHD4ovI8hkAX7/LHe1xokH/kg6/g
	LHIA==
X-Gm-Message-State: ALoCoQmqcT7fGKX9/DY0rHS39KUSuMgxsLYb6/x4fDweXt4ZJlfCyV9jz14/ZdhrNuAeYWROt81nXvX14RZWHOZaVN04QJ1ZSnFX2WEsf+Deew58oI7S/1/jm+Sz0juaY4DM42zezk7ah2Ne6XoTArUfnQsoYpPXdj9920yc1BU5LIIiFPJfoDI=
X-Received: by 10.182.114.132 with SMTP id jg4mr15880214obb.29.1390224640537; 
	Mon, 20 Jan 2014 05:30:40 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.114.132 with SMTP id jg4mr15880205obb.29.1390224640457; 
	Mon, 20 Jan 2014 05:30:40 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 05:30:40 -0800 (PST)
In-Reply-To: <1389984018.16457.335.camel@Solace>
References: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
	<1389977958.6697.135.camel@kazak.uk.xensource.com>
	<1389984018.16457.335.camel@Solace>
Date: Mon, 20 Jan 2014 15:30:40 +0200
Message-ID: <CAE4oM6zwSuv+38WZ87FUSzZBewtW0sM+PxGQCcqd47ef+b93HA@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Artem Mygaiev <artem.mygaiev@globallogic.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Sisu Xi <xisisu@gmail.com>,
	Claudio Scordino <claudio@evidence.eu.com>, xen-devel@lists.xen.org,
	Arianna Avanzini <avanzini.arianna@gmail.com>
Subject: Re: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8830586905329590244=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8830586905329590244==
Content-Type: multipart/alternative; boundary=001a11c329a2da4d7c04f066e58a

--001a11c329a2da4d7c04f066e58a
Content-Type: text/plain; charset=ISO-8859-1

> Getting reasonable RT performances is another pair of hands... We'll
> (well, at least I will) be looking at that in the very immediate future,
> so stay in touch. :-P

I definitely will! Meanwhile, I am going to evaluate RT-Xen with LITMUS^RT
as suggested in the RT-Xen mailing list; while not QNX, it is much easier to
get up and running and to make corresponding measurements.

> This is probably going to be a lot similar to what is being attempted
> here:
> http://bugs.xenproject.org/xen/mid/%3C1387278345.8738.80.camel@Solace%3E

Wow. That looks fantastic. I would watch this one attentively, thanks a lot!

> Anyway, again, there is rising interest in this sort of workloads these
> days, as Artem (from your same company, I think, and I'm adding him, and
> a bunch of other people too, to the Cc list) knows. :-)

Yep, I'm from his team actually :)

> In summary, I don't think anyone has ever tried that, given the above,
> if you decide to do that, feel free to ask for any kind of help and keep
> us posted on how it's going...  :-)

I will. Thanks!

--001a11c329a2da4d7c04f066e58a--


--===============8830586905329590244==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8830586905329590244==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 13:30:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5EwA-0000AF-IW; Mon, 20 Jan 2014 13:30:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5Ew8-0000AA-Ve
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:30:45 +0000
Received: from [85.158.139.211:62798] by server-12.bemta-5.messagelabs.com id
	3F/70-30017-4052DD25; Mon, 20 Jan 2014 13:30:44 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390224641!8096572!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 664 invoked from network); 20 Jan 2014 13:30:43 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 13:30:43 -0000
Received: from mail-ob0-f169.google.com ([209.85.214.169]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt0lADznE4pZu91sVyC9q2Wbzxn5T81F@postini.com;
	Mon, 20 Jan 2014 05:30:42 PST
Received: by mail-ob0-f169.google.com with SMTP id wo20so2917138obc.14
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 05:30:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qgj13y/lsvjaDu46u683wQOs6CqYfKnCYmdWZN6ntMA=;
	b=RwwHYGyWcbnUwy034QSY+LMYQp4HUdzjqNVr8RiD02SHYY1RxCWd1V1FTpTkLaUe7x
	lra4FOMSWt8MFvny49t/7fU/Lji1X3XDK5HWIWiVLXZmxYappF2K5pWk9JmD/lAt8HVA
	UqVdm8S2TGLGsvOxly8bcNvNNE+SIJp9eZ/P4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=qgj13y/lsvjaDu46u683wQOs6CqYfKnCYmdWZN6ntMA=;
	b=OS3++IMi9hoLE5HpPbd0H6Nwm1lYKVrKBJlRy4OgePSw1wgFtOSHNwlV3OsBHPDCbx
	S1qF3K2Ge+Pr3vfak76OT1gikzTDB4wtVTAEAykgsRHQ1h36HCLAEYMVsdSqux5hR9cn
	S9iTxxhj2fWobwcoM6QwlECV7uguda45l0YQU66uB99GBqpjnci+5YIVF7+CkExZoSQG
	h3EUrcUWOgErAKAGGCpgB7vOdrPR5VCiArn76gb4cZ5xUB62NWw7Q/DOQzXTw8p++H44
	nWxtCsNK2sl/90CpiyRlYvwOEcHlU1djEP6tk5M8BHD4ovI8hkAX7/LHe1xokH/kg6/g
	LHIA==
X-Gm-Message-State: ALoCoQmqcT7fGKX9/DY0rHS39KUSuMgxsLYb6/x4fDweXt4ZJlfCyV9jz14/ZdhrNuAeYWROt81nXvX14RZWHOZaVN04QJ1ZSnFX2WEsf+Deew58oI7S/1/jm+Sz0juaY4DM42zezk7ah2Ne6XoTArUfnQsoYpPXdj9920yc1BU5LIIiFPJfoDI=
X-Received: by 10.182.114.132 with SMTP id jg4mr15880214obb.29.1390224640537; 
	Mon, 20 Jan 2014 05:30:40 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.114.132 with SMTP id jg4mr15880205obb.29.1390224640457; 
	Mon, 20 Jan 2014 05:30:40 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 05:30:40 -0800 (PST)
In-Reply-To: <1389984018.16457.335.camel@Solace>
References: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
	<1389977958.6697.135.camel@kazak.uk.xensource.com>
	<1389984018.16457.335.camel@Solace>
Date: Mon, 20 Jan 2014 15:30:40 +0200
Message-ID: <CAE4oM6zwSuv+38WZ87FUSzZBewtW0sM+PxGQCcqd47ef+b93HA@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Artem Mygaiev <artem.mygaiev@globallogic.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Sisu Xi <xisisu@gmail.com>,
	Claudio Scordino <claudio@evidence.eu.com>, xen-devel@lists.xen.org,
	Arianna Avanzini <avanzini.arianna@gmail.com>
Subject: Re: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8830586905329590244=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8830586905329590244==
Content-Type: multipart/alternative; boundary=001a11c329a2da4d7c04f066e58a

--001a11c329a2da4d7c04f066e58a
Content-Type: text/plain; charset=ISO-8859-1

> Getting reasonable RT performance is another pair of hands... We'll
> (well, at least I will) be looking at that in the very immediate future,
> so stay in touch. :-P

I definitely will! Meanwhile, I am going to evaluate RT-Xen with LITMUS^RT
as suggested in the RT-Xen mailing list; while not QNX, it is much easier to
get up and running and to make corresponding measurements.

> This is probably going to be a lot similar to what is being attempted
> here:
> http://bugs.xenproject.org/xen/mid/%3C1387278345.8738.80.camel@Solace%3E

Wow. That looks fantastic. I would watch this one attentively, thanks a lot!

> Anyway, again, there is rising interest in this sort of workloads these
> days, as Artem (from your same company, I think, and I'm adding him, and
> a bunch of other people too, to the Cc list) knows. :-)

Yep, I'm from his team actually :)

> In summary, I don't think anyone has ever tried that, given the above,
> if you decide to do that, feel free to ask for any kind of help and keep
> us posted on how it's going...  :-)

I will. Thanks!

--001a11c329a2da4d7c04f066e58a--


--===============8830586905329590244==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8830586905329590244==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 13:31:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Ewe-0000EK-5r; Mon, 20 Jan 2014 13:31:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5Ewc-0000E1-Uy
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:31:15 +0000
Received: from [85.158.137.68:34777] by server-6.bemta-3.messagelabs.com id
	E6/88-04868-2252DD25; Mon, 20 Jan 2014 13:31:14 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390224672!10137929!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18399 invoked from network); 20 Jan 2014 13:31:13 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 13:31:13 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id 5477B221BEA
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 13:31:12 +0000 (GMT)
MIME-Version: 1.0
Date: Mon, 20 Jan 2014 13:31:12 +0000
From: Gordan Bobic <gordan@bobich.net>
To: xen-devel@lists.xen.org
In-Reply-To: <E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
Message-ID: <0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-20 13:24, Wu, Feng wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@lists.xen.org
>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>> Sent: Monday, January 20, 2014 8:50 PM
>> To: xen-devel@lists.xen.org
>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu 
>> upstream)
>> 
>> On 2014-01-20 12:31, Shakeel Butt wrote:
>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>> > <stefano.stabellini@eu.citrix.com> wrote:
>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
>> >>> > -----Original Message-----
>> >>> > From: xen-devel-bounces@lists.xen.org
>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
>> >>> > Sent: Monday, January 20, 2014 1:48 PM
>> >>> > To: xen-devel@lists.xen.org
>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
>> >>> >
>> >>> > Hi all,
>> >>> >
>> >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>> >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>> >>> > traditional i.e. qemu-dm.
>> >>>
>> >>> As far as I know, only qemu-traditional supports vga pass-through
>> >>> right now.
>> >>
>> >> Right.
>> >> It is not possible to assign your primary VGA card to a VM with
>> >> qemu-xen. You should be able to assign your secondary VGA card though.
>> >
>> > Let me understand this correctly. If I have two VGA cards then I can
>> > passthrough
>> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>> > right and
>> > if yes how can I do it?
>> 
>> Passing any VGA card as a primary-in-domU has always been problematic.
> 
> I think passing VGA card as a primary-in-domU works well in
> Qemu-traditional, right?

I never managed to get it working - it certainly isn't just a matter of
enabling the option. There is at least the matter of also side-loading
the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
out all ATI and Nvidia GPUs of the past 2-3 generations.

Having said that - I never found a particularly good use-case for
primary passthrough. Once the GPU driver loads it works just the
same for all intents and purposes.
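For anyone wanting to try the secondary-card route discussed above, it is
typically set up as ordinary PCI passthrough in the guest config, leaving the
emulated VGA as the guest's primary adapter. A minimal, untested sketch (the
BDF 01:00.0, the audio function 01:00.1, and the domain name are hypothetical):

```
# Hypothetical HVM guest config fragment for secondary GPU passthrough.
# The device must first be made assignable in dom0, e.g.:
#   xl pci-assignable-add 01:00.0
builder = 'hvm'
name = 'guest-with-gpu'
# Pass the GPU (and its audio function, if present) through as plain
# PCI devices; the emulated VGA stays the primary display.
pci = [ '01:00.0', '01:00.1' ]
# Note: gfx_passthru=1 (primary VGA passthrough) is only supported by
# qemu-traditional, as noted earlier in this thread.
```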

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:50:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FET-0001NW-1w; Mon, 20 Jan 2014 13:49:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5FER-0001NR-KJ
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:49:39 +0000
Received: from [85.158.139.211:48003] by server-5.bemta-5.messagelabs.com id
	1E/01-14928-2792DD25; Mon, 20 Jan 2014 13:49:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390225776!8101760!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29023 invoked from network); 20 Jan 2014 13:49:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 13:49:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="94483402"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 13:49:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 08:49:35 -0500
Message-ID: <1390225774.20516.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 13:49:34 +0000
In-Reply-To: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
References: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and
	terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 13:20 +0000, Ian Campbell wrote:

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

(I don't know why I find this so hard to remember...)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:50:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FFJ-0001QM-G4; Mon, 20 Jan 2014 13:50:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5FFI-0001QG-Bc
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:50:32 +0000
Received: from [85.158.143.35:26288] by server-3.bemta-4.messagelabs.com id
	EC/DF-32360-7A92DD25; Mon, 20 Jan 2014 13:50:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390225829!10145587!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1649 invoked from network); 20 Jan 2014 13:50:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 13:50:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="94483578"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 13:50:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 08:50:29 -0500
Message-ID: <1390225828.20516.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 13:50:28 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] OSSTEST: standalone helper
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I got a bit fed up of trying to remember which options/$env vars the
various osstest scripts took in standalone mode, so I lashed up a quick
bit of scripting (as one does).

I don't know if others would find this useful enough to add to the repo
and flesh it out (not to mention document it). Anyway, it is below, if
anyone thinks it would be useful I'll repost as a proper patch.

Ian.

#!/bin/bash

set -e

# XXX usage

op=$1 ; shift

TEMP=$(getopt -o c:f:h:rR --long config:,flight:,host:,reuse,noreuse,reinstall -- "$@")

eval set -- "$TEMP"

config=
flight="standalone"
host=
reuse=1 # Don't blow away machines by default

while true ; do
    case "$1" in
	-c|--config) config=$2; shift 2;;
	-f|--flight) flight=$2; shift 2;;
	-h|--host)   host=$2;   shift 2;;
	-r|--reuse)  reuse=1;   shift 1;;
	-R|--noreuse|--reinstall) reuse=0; shift 1;;
        --) shift ; break ;;
        *) echo "Internal error!" ; exit 1 ;;

    esac
done

if [ $reuse -eq 0 ]; then
    echo "WARNING: Will blow away machine..."
    echo "Press ENTER to confirm."
    read
fi

if [ ! -f standalone.db ] ; then
    echo "No standalone.db? Run standalone-reset." >&2
    exit 1
fi

# other potential checks:
# - ability to read apache logs
# - presence of a working ssh-agent

# other potential ops:
# - run standalone reset
# - set runvars from a dist tarball

case $op in
    run-job)
	if [ -z "$flight" ] ; then
	    echo "run-job: Need a flight" >&2
	    exit 1
	fi
	if [ -z "$host" ] ; then
	    echo "run-job: Need a host" >&2
	    exit 1
	fi
	if [ $# -lt 1 ] ; then
	    echo "run-job: Need job" >&2
	    exit 1
	fi

	job=$1; shift

	exec env \
	    OSSTEST_CONFIG=$config \
	    OSSTEST_FLIGHT=$flight \
	    OSSTEST_HOST_HOST=$host \
	    OSSTEST_HOST_REUSE=$reuse \
		./sg-run-job $job 2>&1 | tee logs/$flight/$job.log
	;;
    run-test)
	if [ $# -lt 1 ] ; then
	    echo "run-test: Need a test case" >&2
	    exit 1
	fi
	if [ -z "$host" ] ; then
	    echo "run-test: Need a host" >&2
	    exit 1
	fi

	if [ $# -lt 2 ] ; then
	    echo "run-test: Need job + test" >&2
	    exit 1
	fi

	job=$1; shift
	ts=$1; shift

	exec env \
	    OSSTEST_CONFIG=$config \
	    OSSTEST_JOB=$job \
		./$ts host=$host "$@" 2>&1 | tee logs/$flight/$job.$ts.log
	;;
    *)
	echo "Unknown op $op" ; exit 1 ;;
esac
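
The option handling above relies on util-linux getopt(1) normalising and
quoting the arguments so that `eval set --` can safely re-load them as
positional parameters. A self-contained sketch of just that pattern (the
option names mirror the script's; the sample values foo.cfg and bedbug0 are
made up):

```shell
#!/bin/bash
# Minimal demonstration of the getopt(1) + eval set -- pattern used above.
set -e

config=
host=

# getopt reorders the arguments and quotes each value, so the eval below
# reconstructs them losslessly as $1, $2, ...
TEMP=$(getopt -o c:h: --long config:,host: -- --config foo.cfg --host bedbug0)
eval set -- "$TEMP"

while true ; do
    case "$1" in
	-c|--config) config=$2; shift 2;;
	-h|--host)   host=$2;   shift 2;;
	--) shift ; break ;;
	*) echo "Internal error!" >&2 ; exit 1 ;;
    esac
done

echo "config=$config host=$host"
```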



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 13:52:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FH9-0001ZC-2F; Mon, 20 Jan 2014 13:52:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5FH7-0001Z3-Hv
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:52:25 +0000
Received: from [85.158.137.68:49191] by server-4.bemta-3.messagelabs.com id
	33/C8-10414-81A2DD25; Mon, 20 Jan 2014 13:52:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390225943!10240745!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3395 invoked from network); 20 Jan 2014 13:52:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 13:52:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 13:52:22 +0000
Message-Id: <52DD38240200007800115102@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 13:52:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> @@ -126,6 +127,12 @@ bool_t system_supports_cqm(void)
>      return !!cqm;
>  }
>  
> +unsigned int get_cqm_count(void)
> +{
> +    ASSERT(system_supports_cqm());
> +    return cqm->max_rmid + 1;
> +}
> +
>  int alloc_cqm_rmid(struct domain *d)
>  {
>      int rc = 0;
> @@ -170,6 +177,48 @@ void free_cqm_rmid(struct domain *d)
>      d->arch.pqos_cqm_rmid = 0;
>  }
>  
> +static void read_cqm_data(void *arg)
> +{
> +    uint64_t cqm_data;
> +    unsigned int rmid;
> +    int socket = cpu_to_socket(smp_processor_id());
> +    struct xen_socket_cqmdata *data = arg;
> +    unsigned long flags, i;

Either i can be "unsigned int" ...

> +
> +    ASSERT(system_supports_cqm());
> +
> +    if ( socket < 0 )
> +        return;
> +
> +    spin_lock_irqsave(&cqm_lock, flags);
> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +    {
> +        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
> +            continue;
> +
> +        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
> +        rdmsrl(MSR_IA32_QMC, cqm_data);
> +
> +        i = socket * (cqm->max_rmid + 1) + rmid;

... or this calculation needs one of the two operands of * cast
to "unsigned long".

> +        data[i].valid = !(cqm_data & IA32_QM_CTR_ERROR_MASK);
> +        if ( data[i].valid )
> +        {
> +            data[i].l3c_occupancy = cqm_data * cqm->upscaling_factor;
> +            data[i].socket = socket;
> +            data[i].domid = cqm->rmid_to_dom[rmid];
> +        }
> +    }
> +    spin_unlock_irqrestore(&cqm_lock, flags);
> +}

Also, please clarify why the locking here is necessary: You don't
seem to be modifying global data, and the only possibly mutable
thing you read is cqm->rmid_to_dom[]. A race on that one with
an addition/deletion doesn't appear to be problematic though.

> +void get_cqm_info(cpumask_t *cpu_cqmdata_map, struct xen_socket_cqmdata *data)

const cpumask_t *

> +    case XEN_SYSCTL_getcqminfo:
> +    {
> +        struct xen_socket_cqmdata *info;
> +        uint32_t num_sockets;
> +        uint32_t num_rmids;
> +        cpumask_t cpu_cqmdata_map;

Unless absolutely unavoidable, no CPU masks on the stack please.

> +
> +        if ( !system_supports_cqm() )
> +        {
> +            ret = -ENODEV;
> +            break;
> +        }
> +
> +        select_socket_cpu(&cpu_cqmdata_map);
> +
> +        num_sockets = min((unsigned int)cpumask_weight(&cpu_cqmdata_map) + 1,
> +                          sysctl->u.getcqminfo.num_sockets);
> +        num_rmids = get_cqm_count();
> +        info = xzalloc_array(struct xen_socket_cqmdata,
> +                             num_rmids * num_sockets);

While unlikely right now, you ought to consider the case of this
multiplication overflowing.

Also - how does the caller know how big the buffer needs to be?
Only num_sockets can be restricted by it...

And what's worse - you allow the caller to limit num_sockets and
allocate info based on this limited value, but you don't restrict
cpu_cqmdata_map to just the sockets covered, i.e. if the caller
specified a lower number, then you'll corrupt memory.

And finally, I think the total size of the buffer here can easily
exceed a page, i.e. this then ends up being a non-order-0
allocation, which may _never_ succeed (i.e. the operation is
then rendered useless). I guess it'd be better to e.g. vmap()
the MFNs underlying the guest buffer.

> +        if ( !info )
> +        {
> +            ret = -ENOMEM;
> +            break;
> +        }
> +
> +        get_cqm_info(&cpu_cqmdata_map, info);
> +
> +        if ( copy_to_guest_offset(sysctl->u.getcqminfo.buffer,
> +                                  0, info, num_rmids * num_sockets) )

If the offset is zero anyway, why do you use copy_to_guest_offset()
rather than copy_to_guest()?

> +        {
> +            ret = -EFAULT;
> +            xfree(info);
> +            break;
> +        }
> +
> +        sysctl->u.getcqminfo.num_rmids = num_rmids;
> +        sysctl->u.getcqminfo.num_sockets = num_sockets;
> +
> +        if ( copy_to_guest(u_sysctl, sysctl, 1) )

__copy_to_guest() is sufficient here.

Jan


From xen-devel-bounces@lists.xen.org Mon Jan 20 13:58:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 13:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FN4-0001vp-I8; Mon, 20 Jan 2014 13:58:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5FN2-0001ve-VL
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 13:58:33 +0000
Received: from [85.158.143.35:12369] by server-2.bemta-4.messagelabs.com id
	DE/6E-11386-88B2DD25; Mon, 20 Jan 2014 13:58:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390226311!11539561!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11402 invoked from network); 20 Jan 2014 13:58:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 13:58:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 13:58:31 +0000
Message-Id: <52DD3993020000780011511F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 13:58:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-6-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1386236334-15410-6-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 5/7] x86: enable CQM monitoring for each
 domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1366,6 +1366,8 @@ static void __context_switch(void)
>      {
>          memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
>          vcpu_save_fpu(p);
> +        if ( system_supports_cqm() )
> +            cqm_assoc_rmid(0);
>          p->arch.ctxt_switch_from(p);
>      }
>  
> @@ -1390,6 +1392,9 @@ static void __context_switch(void)
>          }
>          vcpu_restore_fpu_eager(n);
>          n->arch.ctxt_switch_to(n);
> +
> +        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
> +            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
>      }
>  
>      gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :

The two uses here clearly call for system_supports_cqm() to
be an inline function (all the more so since the variable checked
in that function is already global anyway).

Further, since cqm_assoc_rmid() is an RDMSR plus a WRMSR, you
will surely want to optimize the case of p's and n's RMIDs being
identical. Or at the very least make sure you _never_ call that
function if all domains run with RMID 0.

> @@ -60,6 +60,8 @@ static void __init parse_pqos_param(char *s)
>  
>  custom_param("pqos", parse_pqos_param);
>  
> +static uint64_t rmid_mask;

__read_mostly?

> +void cqm_assoc_rmid(unsigned int rmid)
> +{
> +    uint64_t val;
> +
> +    rdmsrl(MSR_IA32_PQR_ASSOC, val);
> +    wrmsrl(MSR_IA32_PQR_ASSOC, (val & ~(rmid_mask)) | (rmid & rmid_mask));

Stray parentheses around a simple variable.

Jan



From xen-devel-bounces@lists.xen.org Mon Jan 20 14:03:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FRh-0002Wg-Fv; Mon, 20 Jan 2014 14:03:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5FRf-0002WX-Nc
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:03:19 +0000
Received: from [85.158.143.35:56632] by server-3.bemta-4.messagelabs.com id
	32/79-32360-6AC2DD25; Mon, 20 Jan 2014 14:03:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390226598!12768753!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20958 invoked from network); 20 Jan 2014 14:03:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 14:03:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 14:03:17 +0000
Message-Id: <52DD3AB2020000780011512E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 14:03:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
	<52DD21D1.5010202@citrix.com>
In-Reply-To: <52DD21D1.5010202@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	dario.faggioli@citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, Dongxiao Xu <dongxiao.xu@intel.com>,
	dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 14:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 20/01/14 13:13, Jan Beulich wrote:
>>>>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>>> @@ -1223,6 +1224,45 @@ long arch_do_domctl(
>>>      }
>>>      break;
>>>  
>>> +    case XEN_DOMCTL_attach_pqos:
>>> +    {
>>> +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
>>> +        {
>>> +            if ( !system_supports_cqm() )
>>> +                ret = -ENODEV;
>>> +            else if ( d->arch.pqos_cqm_rmid > 0 )
>>> +                ret = -EEXIST;
>>> +            else
>>> +            {
>>> +                ret = alloc_cqm_rmid(d);
>>> +                if ( ret < 0 )
>>> +                    ret = -EUSERS;
>> Why don't you have the function return a sensible error code
>> (which presumably might also end up being other than -EUSERS,
>> e.g. -ENOMEM)?
> 
> -EUSERS is correct here.  A failure like this means "all the
> available system RMIDs are already being used by other domains".

I didn't mean to say anything to the contrary with _current_ code.
As with any alloc type function, a future change may involve other
than just a RMID allocation, and hence -ENOMEM may become
possible.

Jan



From xen-devel-bounces@lists.xen.org Mon Jan 20 14:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FSc-0002c6-UJ; Mon, 20 Jan 2014 14:04:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W5FSb-0002bx-Jr
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:04:17 +0000
Received: from [193.109.254.147:29431] by server-16.bemta-14.messagelabs.com
	id 2E/D3-20600-0EC2DD25; Mon, 20 Jan 2014 14:04:16 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390226654!11898606!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7840 invoked from network); 20 Jan 2014 14:04:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:04:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,690,1384300800"; d="scan'208";a="92410883"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:04:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:04:13 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W5FSX-0006rL-3W;
	Mon, 20 Jan 2014 14:04:13 +0000
Message-ID: <52DD2CDC.8030705@eu.citrix.com>
Date: Mon, 20 Jan 2014 14:04:12 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
	<1389711014.12434.71.camel@kazak.uk.xensource.com>
	<52D54F96.2060607@citrix.com>
In-Reply-To: <52D54F96.2060607@citrix.com>
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 02:54 PM, Andrew Cooper wrote:
> On 14/01/14 14:50, Ian Campbell wrote:
>> On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
>>> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
>>> device assignment if PoD is enabled.").
>>>
>>> This change is restricted to HVM guests, as only HVM is relevant in
>>> the Xend counterpart. We're late in the release cycle, so the change
>>> should only do what's necessary. We can revisit it if we need to do
>>> the same thing for PV guests in the future.
>>>
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> Release hat: The risk here is of a false positive detecting whether PoD
>> would be used and therefore refusing to start a domain. However Wei
>> directed me earlier on to the code in setup_guest which sets
>> XENMEMF_populate_on_demand and I believe it is using the same logic.
>>
>> The benefit of this is that it will stop people starting a domain in an
>> invalid configuration -- but what is the downside here? Is it an
>> unhandled IOMMU fault or another host-fatal error? That would make the
>> argument for taking this patch pretty strong. On the other hand if the
>> failure were simply to kill this domain, that would be a less serious
>> issue and I'd be in two minds, mainly due to George not being here to
>> confirm that the pod_enabled logic is correct (although if he were here
>> I wouldn't be wrestling with this question at all ;-)).
>>
>> I'm leaning towards taking this fix, but I'd really like to know what
>> the current failure case looks like.
>>
>> Ian.
> The answer is likely hardware specific.
>
> An IOMMU fault (however handled by Xen) will result in a master abort on
> the DMA transaction for the PCI device which has suffered the fault.
> That device can then do anything from continuing blindly to issuing an
> NMI (IOCHK/SERR), which will likely be fatal to the entire server.

I thought we changed Xen's behavior so that it does not crash on SERR?
Or was I confused about that?

Since a VM should be able to craft a DMA pointing to invalid p2m space 
fairly easily, it seems like having the host crash in that scenario 
would basically remove half of the reason for having an IOMMU in the 
first place.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:11:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5FZ0-0003GF-0U; Mon, 20 Jan 2014 14:10:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5FYy-0003GA-Ak
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:10:52 +0000
Received: from [85.158.143.35:25459] by server-3.bemta-4.messagelabs.com id
	21/79-32360-B6E2DD25; Mon, 20 Jan 2014 14:10:51 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390227048!11543513!1
X-Originating-IP: [64.18.0.184]
X-SpamReason: No, hits=2.9 required=7.0 tests=HOT_NASTY,HTML_60_70,
	HTML_MESSAGE,INTERRUPTUS,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3937 invoked from network); 20 Jan 2014 14:10:50 -0000
Received: from exprod5og107.obsmtp.com (HELO exprod5og107.obsmtp.com)
	(64.18.0.184)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 14:10:50 -0000
Received: from mail-oa0-f51.google.com ([209.85.219.51]) (using TLSv1) by
	exprod5ob107.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt0uZ4Wn6GG5CqYFUp8zvqViq3KbKkJd@postini.com;
	Mon, 20 Jan 2014 06:10:49 PST
Received: by mail-oa0-f51.google.com with SMTP id h16so7096308oag.38
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 06:10:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=lqhZiwdKlrh6xG9zanoErMQ+c87cBA8Prl/f57QJIu8=;
	b=USrFCtnaZxmYVYnuBiiq4QhLiC/uZcmI1dOAc0e+tfcnHdsfNiORdoQDL+Qh1s6G8N
	MCyKNur4FhCyCcAOLeHc5eJdO7osDxEQA0ADhmaZqMgROZ1e+BEerIexxatkTLOqTYRf
	ivdZFu+krQ+1sANF6BHGqIfb/sB0X1NvajweI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=lqhZiwdKlrh6xG9zanoErMQ+c87cBA8Prl/f57QJIu8=;
	b=fQ9EBKaI2BIuz29XeoBqWP2WaMo2y+R17LmLfjTCKvfMc0XgItxcCgnydXQO/44+8M
	0f1ximPDraqpg4QA6nIAjjRE/ApeeTPwmDpVff6CfbGf4UL2H5FAlA6U6RXdd9eg2mBZ
	ONYduKgL+OiXmkD0+B17SNbq7hTe03yw0rh9I+Li4xD4ofDP12gIIuEqAxPzzp+ld7cd
	cXL0FFq0BbMTcBM5vj63DCVVxH941a9PQgb2Nj6yYYuLIULX3cvKiGoM9F1sc6mepAmv
	ipoSsYAqvIOK3ISIAV+b+/P5Se4j3puCSClucFrk7s5QKzWicg5y8sU2or9TuecL0dc+
	ZQ1w==
X-Gm-Message-State: ALoCoQnbSRyog1shdXPDpxMGlqvlai1FBvBal0Qhh6X4BMKXBu8AzQqpSD98g04OlRqH22IdGH+eLxMbp69f5Bzpsmhw51eArJFqjQ1EdGOEeM+xsqsQwapZtWpYlywjFiEYzTo5ZEN/49aui8BcDT6k2RS4TIr/AzYRfjrUfg/TXGvJ2Lr5l94=
X-Received: by 10.60.103.239 with SMTP id fz15mr15896751oeb.22.1390227047520; 
	Mon, 20 Jan 2014 06:10:47 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.103.239 with SMTP id fz15mr15896733oeb.22.1390227047358; 
	Mon, 20 Jan 2014 06:10:47 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 06:10:47 -0800 (PST)
Date: Mon, 20 Jan 2014 16:10:47 +0200
Message-ID: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5411464665329239742=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5411464665329239742==
Content-Type: multipart/alternative; boundary=089e011608ae50b25004f06775ec

--089e011608ae50b25004f06775ec
Content-Type: text/plain; charset=ISO-8859-1

Hi,

yet another question on soft real time under Xen. Setup looks like this:

Xen: 4.4-rc1 with credit scheduler
dom0: Linux 3.8.13 with CFS scheduler
domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler

The test program does nothing but sleep for 30 (5, 500) ms and then
print a timestamp in an endless loop. Results on a guest OS run without
a hypervisor are quite accurate, while on a guest OS under the
hypervisor (both in dom0 and domU) we observe a regular delay of
5-15 ms no matter what the sleep time is. Configuring the scheduler
with different weights for dom0/domU has no effect whatsoever.

If setup looks like this (the only change is the Xen scheduler):

Xen: 4.4-rc1 with sEDF scheduler
dom0: Linux 3.8.13 with CFS scheduler
domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler

we observe the same delay, but only in domU; dom0 measurements are far
more accurate.

Can anyone suggest what the reason for such misbehaviour might be and
what it can impact? We came to this test while investigating incorrect
rendering timer behaviour, but it seems to be an issue on its own. If
anyone can suggest more precise tests, that would be appreciated as
well; system activity in the guest OS is the same for tests with and
without Xen.

Thanks in advance!

Suikov Pavlo
GlobalLogic
P +x.xxx.xxx.xxxx  M +38.066.667.1296  S psujkov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--089e011608ae50b25004f06775ec--


--===============5411464665329239742==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5411464665329239742==--



From xen-devel-bounces@lists.xen.org Mon Jan 20 14:23:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Fl0-0003mr-AS; Mon, 20 Jan 2014 14:23:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5Fkz-0003mm-2j
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:23:17 +0000
Received: from [193.109.254.147:23711] by server-10.bemta-14.messagelabs.com
	id EE/82-20752-4513DD25; Mon, 20 Jan 2014 14:23:16 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390227793!11977993!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23407 invoked from network); 20 Jan 2014 14:23:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:23:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92417883"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:23:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:23:12 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W5Fkt-00077c-Pl;
	Mon, 20 Jan 2014 14:23:11 +0000
Message-ID: <52DD314F.9060606@citrix.com>
Date: Mon, 20 Jan 2014 14:23:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1389613948-5774-1-git-send-email-wei.liu2@citrix.com>
	<1389711014.12434.71.camel@kazak.uk.xensource.com>
	<52D54F96.2060607@citrix.com> <52DD2CDC.8030705@eu.citrix.com>
In-Reply-To: <52DD2CDC.8030705@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: disallow PCI device assignment
 for HVM guest when PoD is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 14:04, George Dunlap wrote:
> On 01/14/2014 02:54 PM, Andrew Cooper wrote:
>> On 14/01/14 14:50, Ian Campbell wrote:
>>> On Mon, 2014-01-13 at 11:52 +0000, Wei Liu wrote:
>>>> This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
>>>> device assignment if PoD is enabled.").
>>>>
>>>> This change is restricted to HVM guests, as only HVM is relevant in the
>>>> counterpart in Xend. We're late in the release cycle, so the change should
>>>> only do what's necessary. We can probably revisit it if we need to do
>>>> the same thing for PV guests in the future.
>>>>
>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> Release hat: The risk here is of a false positive detecting whether PoD
>>> would be used and therefore refusing to start a domain. However Wei
>>> directed me earlier on to the code in setup_guest which sets
>>> XENMEMF_populate_on_demand and I believe it is using the same logic.
>>>
>>> The benefit of this is that it will stop people starting a domain in an
>>> invalid configuration -- but what is the downside here? Is it an
>>> unhandled IOMMU fault or another host-fatal error? That would make the
>>> argument for taking this patch pretty strong. On the other hand if the
>>> failure were simply to kill this domain, that would be a less serious
>>> issue and I'd be in two minds, mainly due to George not being here to
>>> confirm that the pod_enabled logic is correct (although if he were here
>>> I wouldn't be wrestling with this question at all ;-)).
>>>
>>> I'm leaning towards taking this fix, but I'd really like to know what
>>> the current failure case looks like.
>>>
>>> Ian.
>> The answer is likely hardware specific.
>>
>> An IOMMU fault (however handled by Xen) will result in a master abort on
>> the DMA transaction for the PCI device which has suffered the fault.
>> That device can then do anything from continuing blindly to issuing an
>> NMI IOCK/SERR, which will likely be fatal to the entire server.
>
> I thought we changed the behavior of Xen not to crash on SERR?  Or was
> I confused about that?
>
> Since a VM should be able to craft a DMA pointing to invalid p2m space
> fairly easily, it seems like having the host crash in that scenario
> would basically remove half of the reason for having an IOMMU in the
> first place.
>
>  -George
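
For context, the pod_enabled condition under discussion amounts to "an HVM
guest whose memory target is below maxmem"; a minimal sketch in C, using
hypothetical names rather than libxl's actual fields:

```c
#include <stdbool.h>

/* Sketch with hypothetical names; libxl's real check lives elsewhere.
 * PoD is in use for an HVM guest whenever the memory target is below
 * maxmem, and in that case PCI device assignment must be refused. */
static bool pod_enabled(bool is_hvm, long target_memkb, long max_memkb)
{
    return is_hvm && target_memkb < max_memkb;
}

static bool may_assign_pci_device(bool is_hvm, long target_memkb,
                                  long max_memkb)
{
    return !pod_enabled(is_hvm, target_memkb, max_memkb);
}
```

Under this sketch, an HVM guest configured with memory=1024 and maxmem=2048
would be refused PCI assignment, while one with memory equal to maxmem would
not.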

The behaviour of IOCK/SERR NMIs depends on the "nmi=" command line
option, which for a non-debug build is "redirect to dom0", and in a
debug build is fatal.  Dom0's behaviour is normally to say "huh - my
virtual environment looks fine", which makes the option quite useless.
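
The option in question is a Xen boot parameter; an illustrative GRUB entry
selecting the dom0-redirect behaviour explicitly (paths and the other options
shown are examples only):

```
multiboot /boot/xen.gz dom0_mem=2048M nmi=dom0
module /boot/vmlinuz root=/dev/sda1
```

The accepted values for "nmi=" are "fatal", "dom0" and "ignore".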

IOCK/SERR NMIs can be out-of-band, possibly via the BMC on a server-class
motherboard, and per XSA-59, possibly not caught by the IOMMU.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5G0o-0004QU-Nt; Mon, 20 Jan 2014 14:39:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5G0n-0004QN-4k
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 14:39:37 +0000
Received: from [85.158.143.35:39393] by server-3.bemta-4.messagelabs.com id
	30/21-32360-8253DD25; Mon, 20 Jan 2014 14:39:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390228774!12842504!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10308 invoked from network); 20 Jan 2014 14:39:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92424224"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:39:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:39:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W5G0i-0007MP-Fh;
	Mon, 20 Jan 2014 14:39:32 +0000
Message-ID: <52DD3523.1080402@citrix.com>
Date: Mon, 20 Jan 2014 14:39:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1538524.5AKIkpF9LB@amur>
	<52D7CC3E020000780011435C@nat28.tlf.novell.com>
In-Reply-To: <52D7CC3E020000780011435C@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, David Vrabel <david.vrabel@citrix.com>,
	Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 11:10, Jan Beulich wrote:
>>>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
>> when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get
>> softlockups from time to time.
>>
>> kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
>>
>> I tracked this down to the call of xc_domain_set_pod_target() and further
>> p2m_pod_set_mem_target().
>>
>> Unfortunately I can do this check only with xen-4.2.2 as I don't have a machine
>> with enough memory for current hypervisors. But it seems the code is nearly
>> the same.
> While I still haven't seen a formal report of this against SLE11,
> attached a draft patch against the SP3 code base adding manual
> preemption to the hypercall path of privcmd. This is only lightly
> tested, and therefore has a little bit of debugging code still left in
> there. Mind giving this a try (perhaps together with the patch
> David had sent for the other issue - there may still be a need for
> further preemption points in the IOCTL_PRIVCMD_MMAP*
> handling, but without knowing for sure whether that matters to
> you I didn't want to add this right away)?
>
> Jan
>

With my 4.4-rc2 testing, these softlockups are becoming more of a
problem, especially with construction/migration of 128GB guests.

I have been looking at doing a similar patch against mainline.

Having talked it through with David, it seems more sensible to have a
second hypercall page, at which point in_hypercall() becomes
in_preemptable_hypercall().

Any task (which could even be a kernel task) could use the preemptable
page, rather than the main hypercall page, and the asm code doesn't need
to care whether the task was in privcmd.

This would avoid having to maintain extra state to identify whether the
hypercall was preemptable, and would avoid modification to
evtchn_do_upcall().
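
With a dedicated page, the "was this a preemptable hypercall?" question
reduces to an address-range test on the interrupted instruction pointer; a
minimal sketch, with a made-up page address standing in for wherever the
second hypercall page would actually be mapped:

```c
#include <stdbool.h>
#include <stdint.h>

#define HC_PAGE_SIZE 4096UL

/* Made-up address standing in for the second, preemptable hypercall page. */
static const uintptr_t preemptable_hypercall_page = 0xffffffff81000000UL;

/* The interrupt/upcall path only needs to ask whether the interrupted
 * instruction pointer lies within the preemptable hypercall page. */
static bool in_preemptable_hypercall(uintptr_t ip)
{
    return ip >= preemptable_hypercall_page &&
           ip <  preemptable_hypercall_page + HC_PAGE_SIZE;
}
```

No per-task state is needed: the calling convention (which page a task
entered through) carries the information.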

I shall see about hacking up a patch to this effect.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:46:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5G7o-00054t-Ci; Mon, 20 Jan 2014 14:46:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5G7m-00054o-W5
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:46:51 +0000
Received: from [85.158.139.211:25026] by server-17.bemta-5.messagelabs.com id
	09/57-19152-AD63DD25; Mon, 20 Jan 2014 14:46:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390229207!10829899!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6196 invoked from network); 20 Jan 2014 14:46:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:46:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92426543"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:46:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:46:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5G7h-0007SN-LU;
	Mon, 20 Jan 2014 14:46:45 +0000
Date: Mon, 20 Jan 2014 14:45:41 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Shakeel Butt <shakeel.butt@gmail.com>
In-Reply-To: <CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401201444410.21510@kaball.uk.xensource.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Wu,
	Feng" <feng.wu@intel.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Shakeel Butt wrote:
> On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 20 Jan 2014, Wu, Feng wrote:
> >> > -----Original Message-----
> >> > From: xen-devel-bounces@lists.xen.org
> >> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> >> > Sent: Monday, January 20, 2014 1:48 PM
> >> > To: xen-devel@lists.xen.org
> >> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> >> >
> >> > Hi all,
> >> >
> >> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> >> > device model? I tried but I am getting error 'gfx_passthru' invalid
> >> > parameter for qemu-xen. I am able to do passthrough with qemu
> >> > traditional i.e. qemu-dm.
> >>
> >> As far as I know, only qemu-traditional supports vga pass-through right now.
> >
> > Right.
> > It is not possible to assign your primary VGA card to a VM with
> > qemu-xen. You should be able to assign your secondary VGA card though.
> 
> Let me understand this correctly. If I have two VGA cards then I can pass
> through the secondary VGA card (in Dom0) to an HVM guest as its primary VGA
> card. Is this right, and if yes, how can I do it?

Yes, that is correct. Simply use normal PCI assignment:

http://wiki.xen.org/wiki/Xen_PCI_Passthrough
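
A sketch of the relevant guest-configuration lines, with a made-up BDF for
the second card (the address on your system will differ; see the wiki page
above for hiding the device from dom0 first):

```
# HVM guest config (illustrative)
builder = 'hvm'
device_model_version = 'qemu-xen'
pci = [ '0000:02:00.0' ]   # secondary VGA card, assigned to the guest
```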

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:51:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GC6-0005YZ-3H; Mon, 20 Jan 2014 14:51:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5GC4-0005YN-M5
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:51:17 +0000
Received: from [85.158.137.68:64391] by server-2.bemta-3.messagelabs.com id
	D8/49-17329-3E73DD25; Mon, 20 Jan 2014 14:51:15 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390229472!10161701!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18189 invoked from network); 20 Jan 2014 14:51:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:51:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92427960"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:51:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:51:09 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5GBx-0007XI-6A;
	Mon, 20 Jan 2014 14:51:09 +0000
Message-ID: <1390229463.14279.27.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>
Date: Mon, 20 Jan 2014 14:51:03 +0000
Content-Type: multipart/mixed; boundary="=-OmxrXrr6ZAf/VNm2stI6"
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] mce: Fix for another race condition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-OmxrXrr6ZAf/VNm2stI6
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi,
  these are actually two patches which both fix the same race condition
in the mce code. The problem is that these lines (in mctelem_reserve)


	newhead = oldhead->mcte_next;
	if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After you read the newhead pointer it can happen that another
flow (a thread or a recursive invocation) changes the whole list but sets
the head back to the same value. So oldhead is the same as *freelp, but
you are installing a new head that could point to any element (even one
already in use).
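
The window described above is the classic ABA problem on a lock-free pop.
A minimal single-threaded C replay of the interleaving (names are
illustrative, not the Xen ones) shows how the head-only compare succeeds
even though the cached next pointer has gone stale:

```c
#include <stddef.h>

/* Toy free-list node, standing in for struct mctelem_ent
 * (names here are illustrative, not the Xen ones). */
struct node {
	struct node *next;
	int in_use;
};

/* Single-threaded replay of the racy interleaving: the reader caches
 * newhead, an interfering flow pops two nodes and pushes the first one
 * back, and the reader's head-only compare still succeeds.
 * Returns 1 if the stale head ended up installed. */
static int run_aba_demo(void)
{
	static struct node a, b, c;
	struct node *head;

	/* free list: a -> b -> c */
	a.next = &b; b.next = &c; c.next = NULL;
	a.in_use = b.in_use = c.in_use = 0;
	head = &a;

	/* Reader (as in mctelem_reserve): read head and its next. */
	struct node *oldhead = head;          /* &a            */
	struct node *newhead = oldhead->next; /* &b, now stale */

	/* Interfering flow: pop a, pop b (mark it in use), push a back. */
	head = a.next;   /* pop a: head = &b */
	head = b.next;   /* pop b: head = &c */
	b.in_use = 1;
	a.next = head;   /* push a back: a -> c */
	head = &a;

	/* Reader resumes: the compare on the head alone succeeds ... */
	if (head == oldhead)
		head = newhead;   /* ... and installs the stale next */

	/* The "free" list now starts at b, which is already in use. */
	return head == &b && b.in_use;
}
```

The simulation reproduces exactly the failure mode: *freelp equals
oldhead again, so a bare cmpxchgptr on the head cannot tell the list
was rewritten underneath.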

The basic idea of both patches is to move the state out into a separate
mcte_state field and set it with cmpxchg, to make sure we don't pick up
an already allocated element.

The first patch detaches the list entirely (setting the head to NULL) to
avoid using mcte_next, falling back to a slow_reserve which scans the
whole array looking for an element in the FREE state. This is surely
safe and easy, but if the list is mostly allocated you end up scanning
the entire array every time.
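
The state-based claim can be sketched as follows, using the GCC
`__sync_val_compare_and_swap` builtin in place of Xen's cmpxchg (array
size and names are made up for illustration, not taken from the patch):

```c
#define ST_FREE		1U
#define ST_UNCOMMITTED	2U

/* Hypothetical stand-in for the first patch's slow_reserve: claim an
 * element by cmpxchg-ing its state from FREE to UNCOMMITTED instead of
 * following next pointers, so a stale list can never hand out an
 * element twice.  Returns the claimed index, or -1 if none is FREE. */
static int slow_reserve(unsigned short *state, int nent)
{
	for (int i = 0; i < nent; i++)
		if (__sync_val_compare_and_swap(&state[i],
				ST_FREE, ST_UNCOMMITTED) == ST_FREE)
			return i;   /* only one caller can win this cmpxchg */
	return -1;                  /* every element is already allocated */
}
```

Since the only shared write is the per-element cmpxchg, two racing
callers can never both claim the same slot; the cost is the O(n) scan
the email mentions.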

The second patch (which needs some cleanup) uses array indexes instead
of pointers, to allow binding the head and next pointer together
atomically. The head is paired with a counter which is incremented on
every update, to avoid the case where the list has changed but the head
carries the same value (it acts like a list version). The state is
paired with the next index (which replaces mcte_next when the state is
FREE) to allow an atomic read of state+next. To handle both thread
safety and reentrancy, mctelem_reserve got a bit more complicated and
the updates are not so straightforward.
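
The head word of the second patch can be illustrated with two small
helpers (hypothetical names) that pack a 16-bit element index with a
version counter, mirroring the `(old + 0x10000ULL) & ~0xffffULL`
arithmetic the patch uses:

```c
#include <stdint.h>

/* Illustrative helpers for the second patch's 64-bit free-list head:
 * the low 16 bits hold the element index, the upper bits hold a
 * version counter bumped on every update, so a head that comes back
 * to the same index still fails a cmpxchg comparison. */
static inline uint64_t head_make(uint64_t old, uint16_t idx)
{
	/* bump the version, keep only the new index in the low bits */
	return ((old + 0x10000ULL) & ~0xffffULL) | idx;
}

static inline uint16_t head_index(uint64_t head)
{
	return (uint16_t)(head & 0xffffU);
}
```

Two heads carrying the same index but produced by different updates
compare unequal, which is precisely what defeats the ABA window of the
original code.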

Now, the question is: should I just send the first patch and accept the
full-array scanning overhead in that corner case, or should I try to put
the second patch into shape?

Frediano


--=-OmxrXrr6ZAf/VNm2stI6
Content-Disposition: attachment; filename="mce_fix2.patch"
Content-Type: text/x-patch; name="mce_fix2.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# Parent 63a9bfd7be20a7fddda3013be2713cbab7e33c15

diff -r 63a9bfd7be20 xen/arch/x86/cpu/mcheck/mctelem.c
--- a/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 11:31:28 2014 +0000
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 11:32:44 2014 +0000
@@ -32,7 +32,8 @@
 struct mctelem_ent {
 	struct mctelem_ent *mcte_next;	/* next in chronological order */
 	struct mctelem_ent *mcte_prev;	/* previous in chronological order */
-	uint32_t mcte_flags;		/* See MCTE_F_* below */
+	uint16_t mcte_state;		/* See MCTE_STATE_* below */
+	uint16_t mcte_flags;		/* See MCTE_F_* below */
 	uint32_t mcte_refcnt;		/* Reference count */
 	void *mcte_data;		/* corresponding data payload */
 };
@@ -41,17 +42,13 @@ struct mctelem_ent {
 #define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
 #define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
 #define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
-#define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
-#define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
-#define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
-#define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
+#define	MCTE_STATE_FREE			0x001U	/* on a freelist */
+#define	MCTE_STATE_UNCOMMITTED		0x002U	/* reserved; on no list */
+#define	MCTE_STATE_COMMITTED		0x003U	/* on a committed list */
+#define	MCTE_STATE_PROCESSING		0x004U	/* on a processing list */
 
 #define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
 #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
-#define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
-				MCTE_F_STATE_UNCOMMITTED | \
-				MCTE_F_STATE_COMMITTED | \
-				MCTE_F_STATE_PROCESSING)
 
 #define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
 
@@ -60,11 +57,13 @@ struct mctelem_ent {
     (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
     (tep)->mcte_flags |= MCTE_F_CLASS_##new; } while (0)
 
-#define	MCTE_STATE(tep) ((tep)->mcte_flags & MCTE_F_MASK_STATE)
-#define	MCTE_TRANSITION_STATE(tep, old, new) do { \
-    BUG_ON(MCTE_STATE(tep) != (MCTE_F_STATE_##old)); \
-    (tep)->mcte_flags &= ~MCTE_F_MASK_STATE; \
-    (tep)->mcte_flags |= (MCTE_F_STATE_##new); } while (0)
+#define	MCTE_STATE(tep) ((tep)->mcte_state)
+#define	MCTE_XCHG_TRANSITION_STATE(tep, old, new) \
+	(cmpxchg(&(tep)->mcte_state, MCTE_STATE_##old, MCTE_STATE_##new) == MCTE_STATE_##old)
+#define	MCTE_TRANSITION_STATE0(tep, old, new) \
+	BUG_ON(cmpxchg(&(tep)->mcte_state, old, new) != old)
+#define	MCTE_TRANSITION_STATE(tep, old, new) \
+	MCTE_TRANSITION_STATE0((tep), MCTE_STATE_##old, MCTE_STATE_##new)
 
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
@@ -205,16 +204,27 @@ int mctelem_has_deferred(unsigned int cp
 /* Free an entry to its native free list; the entry must not be linked on
  * any list.
  */
-static void mctelem_free(struct mctelem_ent *tep)
+static void mctelem_free(struct mctelem_ent *tep, uint16_t prev_state)
 {
 	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
 	    MC_URGENT : MC_NONURGENT;
 
 	BUG_ON(tep->mcte_refcnt != 0);
-	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
+	BUG_ON(MCTE_STATE(tep) != prev_state);
 
 	tep->mcte_prev = NULL;
+	tep->mcte_next = NULL;
 	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	/*
+	 * This must be done after adding to the list so we still own the element.
+	 * On reserve we have a slow path that just looks at the state, so
+	 * setting it before inserting into the list could lead to cases where
+	 * - you set the state to free,
+	 * - another thread allocates the element,
+	 * - you try to insert it into the list, changing an element which is
+	 *   no longer owned by you (potentially corrupting the element).
+	 */
+	tep->mcte_state = MCTE_STATE_FREE;
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -247,9 +257,8 @@ static void mctelem_processing_release(s
 
 	BUG_ON(tep != mctctl.mctc_processing_head[which]);
 	if (--tep->mcte_refcnt == 0) {
-		MCTE_TRANSITION_STATE(tep, PROCESSING, FREE);
 		mctctl.mctc_processing_head[which] = tep->mcte_next;
-		mctelem_free(tep);
+		mctelem_free(tep, MCTE_STATE_PROCESSING);
 	}
 }
 
@@ -287,16 +296,16 @@ void mctelem_init(int reqdatasz)
 		struct mctelem_ent *tep, **tepp;
 
 		tep = mctctl.mctc_elems + i;
-		tep->mcte_flags = MCTE_F_STATE_FREE;
+		tep->mcte_state = MCTE_STATE_FREE;
 		tep->mcte_refcnt = 0;
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
 			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
 			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
 		tep->mcte_next = *tepp;
@@ -308,46 +317,90 @@ void mctelem_init(int reqdatasz)
 /* incremented non-atomically when reserve fails */
 static int mctelem_drop_count;
 
+static void mctelem_init_uncommited(struct mctelem_ent *tep, mctelem_class_t which)
+{
+	mctelem_hold(tep);
+	tep->mcte_next = NULL;
+	tep->mcte_prev = NULL;
+	if (which == MC_URGENT)
+		MCTE_SET_CLASS(tep, URGENT);
+	else
+		MCTE_SET_CLASS(tep, NONURGENT);
+}
+
+static mctelem_cookie_t mctelem_slow_reserve(mctelem_class_t which)
+{
+	int i = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
+
+	for (; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
+		struct mctelem_ent *tep = mctctl.mctc_elems + i;
+
+		if (MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED)) {
+			mctelem_init_uncommited(tep, which);
+			return MCTE2COOKIE(tep);
+		}
+	}
+
+	mctelem_drop_count++;
+	return (NULL);
+}
+
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
 	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	struct mctelem_ent *oldhead;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
 	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
-			if (which == MC_URGENT && target == MC_URGENT) {
-				/* raid the non-urgent freelist */
-				target = MC_NONURGENT;
-				freelp = &mctctl.mctc_free[target];
-				continue;
-			} else {
-				mctelem_drop_count++;
-				return (NULL);
-			}
-		}
+		/*
+		 * If the pointer is NULL fall back to the slow reserve.
+		 * There are 3 possible cases:
+		 * - there are no more free elements;
+		 * - another thread is reserving an element;
+		 * - the list was broken because another thread reserved an
+		 *   element using the slow path.
+		 * So it is possible that even some urgent elements are free
+		 * while the list head is NULL; in that case only the slow
+		 * reserve can allocate those elements.
+		 */
+		if ((oldhead = *freelp) == NULL)
+			break;
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
+		/*
+		 * The correct way to remove from the list would be to set
+		 * the head to the next of the first element, but we cannot
+		 * safely read the next, so we empty the list entirely for a
+		 * short moment and then try to set the head once we have
+		 * ownership of the element.
+		 * This, combined with the way we handle the slow path, makes
+		 * the head pointer more of a hint than an always valid list.
+		 */
+		if (cmpxchgptr(freelp, oldhead, NULL) == oldhead) {
 			struct mctelem_ent *tep = oldhead;
 
-			mctelem_hold(tep);
-			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-			tep->mcte_next = NULL;
-			tep->mcte_prev = NULL;
-			if (which == MC_URGENT)
-				MCTE_SET_CLASS(tep, URGENT);
-			else
-				MCTE_SET_CLASS(tep, NONURGENT);
+			/*
+			 * Element taken but somebody already used it, so the pointers are
+			 * not valid.  Leave the list empty; it will be refilled as elements
+			 * are freed again, and unlinked elements will be taken again with
+			 * mctelem_slow_reserve.  This can happen in 2 cases:
+			 * - the element was allocated using the slow path;
+			 * - the element was inserted into the list but its state was not set.
+			 */
+			if (!MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED))
+				break;
+
+			*freelp = tep->mcte_next;
+			mctelem_init_uncommited(tep, which);
 			return MCTE2COOKIE(tep);
 		}
 	}
+	return mctelem_slow_reserve(which);
 }
 
 void *mctelem_dataptr(mctelem_cookie_t cookie)
@@ -366,8 +419,7 @@ void mctelem_dismiss(mctelem_cookie_t co
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
 
 	tep->mcte_refcnt--;
-	MCTE_TRANSITION_STATE(tep, UNCOMMITTED, FREE);
-	mctelem_free(tep);
+	mctelem_free(tep, MCTE_STATE_UNCOMMITTED);
 }
 
 /* Commit an entry with completed telemetry for logging.  The caller must

--=-OmxrXrr6ZAf/VNm2stI6
Content-Disposition: attachment; filename="mce_fix2_v2.patch"
Content-Type: text/x-patch; name="mce_fix2_v2.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# Parent c45cce86b944403632c81b0f3b98b0db33658e28

diff -r c45cce86b944 xen/arch/x86/cpu/mcheck/mctelem.c
--- a/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 10:27:49 2014 +0000
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 13:55:10 2014 +0000
@@ -29,10 +29,19 @@
 
 #include "mce.h"
 
+typedef union {
+	struct {
+		uint16_t state;		/* See MCTE_STATE_* below */
+		uint16_t next_free;
+	};
+	uint32_t whole;
+} mcte_state_t;
+
 struct mctelem_ent {
 	struct mctelem_ent *mcte_next;	/* next in chronological order */
 	struct mctelem_ent *mcte_prev;	/* previous in chronological order */
-	uint32_t mcte_flags;		/* See MCTE_F_* below */
+	mcte_state_t mcte_state;
+	uint16_t mcte_flags;		/* See MCTE_F_* below */
 	uint32_t mcte_refcnt;		/* Reference count */
 	void *mcte_data;		/* corresponding data payload */
 };
@@ -41,17 +50,13 @@ struct mctelem_ent {
 #define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
 #define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
 #define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
-#define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
-#define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
-#define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
-#define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
+#define	MCTE_STATE_FREE			0x001U	/* on a freelist */
+#define	MCTE_STATE_UNCOMMITTED		0x002U	/* reserved; on no list */
+#define	MCTE_STATE_COMMITTED		0x003U	/* on a committed list */
+#define	MCTE_STATE_PROCESSING		0x004U	/* on a processing list */
 
 #define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
 #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
-#define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
-				MCTE_F_STATE_UNCOMMITTED | \
-				MCTE_F_STATE_COMMITTED | \
-				MCTE_F_STATE_PROCESSING)
 
 #define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
 
@@ -60,15 +65,19 @@ struct mctelem_ent {
     (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
     (tep)->mcte_flags |= MCTE_F_CLASS_##new; } while (0)
 
-#define	MCTE_STATE(tep) ((tep)->mcte_flags & MCTE_F_MASK_STATE)
-#define	MCTE_TRANSITION_STATE(tep, old, new) do { \
-    BUG_ON(MCTE_STATE(tep) != (MCTE_F_STATE_##old)); \
-    (tep)->mcte_flags &= ~MCTE_F_MASK_STATE; \
-    (tep)->mcte_flags |= (MCTE_F_STATE_##new); } while (0)
+#define	MCTE_STATE(tep) ((tep)->mcte_state.state)
+#define	MCTE_XCHG_TRANSITION_STATE(tep, old, new) \
+	(cmpxchg(&(tep)->mcte_state.state, MCTE_STATE_##old, MCTE_STATE_##new) == MCTE_STATE_##old)
+#define	MCTE_TRANSITION_STATE0(tep, old, new) \
+	BUG_ON(cmpxchg(&(tep)->mcte_state.state, old, new) != old)
+#define	MCTE_TRANSITION_STATE(tep, old, new) \
+	MCTE_TRANSITION_STATE0((tep), MCTE_STATE_##old, MCTE_STATE_##new)
 
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+#define MC_FREE_IDX		(MC_URGENT_NENT + MC_NONURGENT_NENT)
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +86,15 @@ struct mctelem_ent {
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
+	 * The free lists are singly-linked via mcte_state.next_free, and
+	 * we allocate from them by atomically unlinking an element from the
+	 * head.
 	 * Consumed entries are returned to the head of the free list.
 	 * When an entry is reserved off the free list it is not linked
 	 * on any list until it is committed or dismissed.
+	 * Instead of storing the pointer we store a counter and an index.
+	 * The counter is used to ensure nobody changed the head back to
+	 * the same value it had before.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +114,7 @@ static struct mc_telem_ctl {
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	uint64_t mctc_free[MC_NCLASSES];
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -205,16 +218,29 @@ int mctelem_has_deferred(unsigned int cp
 /* Free an entry to its native free list; the entry must not be linked on
  * any list.
  */
-static void mctelem_free(struct mctelem_ent *tep)
+static void mctelem_free(struct mctelem_ent *tep, uint16_t prev_state)
 {
+	uint64_t *headp;
 	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
 	    MC_URGENT : MC_NONURGENT;
 
 	BUG_ON(tep->mcte_refcnt != 0);
-	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
+	BUG_ON(MCTE_STATE(tep) != prev_state);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+	tep->mcte_state.state = MCTE_STATE_FREE;
+
+	/* set the new head and counter */
+	headp = &mctctl.mctc_free[target];
+	for (;;) {
+		uint64_t old = *headp;
+		uint64_t new = ((old + 0x10000ULL) & ~0xffffULL) | (tep - mctctl.mctc_elems);
+
+		tep->mcte_state.next_free = (old & 0xffffU);
+		if (cmpxchg(headp, old, new) == old)
+			break;
+	}
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -247,9 +273,8 @@ static void mctelem_processing_release(s
 
 	BUG_ON(tep != mctctl.mctc_processing_head[which]);
 	if (--tep->mcte_refcnt == 0) {
-		MCTE_TRANSITION_STATE(tep, PROCESSING, FREE);
 		mctctl.mctc_processing_head[which] = tep->mcte_next;
-		mctelem_free(tep);
+		mctelem_free(tep, MCTE_STATE_PROCESSING);
 	}
 }
 
@@ -283,49 +308,68 @@ void mctelem_init(int reqdatasz)
 		return;
 	}
 
+	mctctl.mctc_free[MC_URGENT] = mctctl.mctc_free[MC_NONURGENT] = MC_FREE_IDX;
 	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+		uint64_t next_free;
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
-		tep->mcte_flags = MCTE_F_STATE_FREE;
+		tep->mcte_state.state = MCTE_STATE_FREE;
 		tep->mcte_refcnt = 0;
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			next_free = mctctl.mctc_free[MC_URGENT];
+			mctctl.mctc_free[MC_URGENT] = i;
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			next_free = mctctl.mctc_free[MC_NONURGENT];
+			mctctl.mctc_free[MC_NONURGENT] = i;
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
-		tep->mcte_next = *tepp;
+		tep->mcte_state.next_free = next_free;
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
 /* incremented non-atomically when reserve fails */
 static int mctelem_drop_count;
 
+static void mctelem_init_uncommited(struct mctelem_ent *tep, mctelem_class_t which)
+{
+	mctelem_hold(tep);
+	tep->mcte_next = NULL;
+	tep->mcte_prev = NULL;
+	if (which == MC_URGENT)
+		MCTE_SET_CLASS(tep, URGENT);
+	else
+		MCTE_SET_CLASS(tep, NONURGENT);
+}
+
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	mcte_state_t te_state;
+	uint64_t *freep;
+	uint64_t oldhead, new;
+	struct mctelem_ent *tep;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
-	freelp = &mctctl.mctc_free[target];
+	freep = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
+		oldhead = *freep;
+
+		if ((oldhead & 0xffffU) == MC_FREE_IDX) {
 			if (which == MC_URGENT && target == MC_URGENT) {
 				/* raid the non-urgent freelist */
 				target = MC_NONURGENT;
-				freelp = &mctctl.mctc_free[target];
+				freep = &mctctl.mctc_free[target];
 				continue;
 			} else {
 				mctelem_drop_count++;
@@ -333,21 +377,42 @@ mctelem_cookie_t mctelem_reserve(mctelem
 			}
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/*
+		 * Try to allocate the element.
+		 * If we succeed we can update the list head.
+		 */
+		tep = mctctl.mctc_elems + (oldhead & 0xffffU);
+		if (MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED)) {
+			new = ((oldhead + 0x10000ULL) & ~0xffffULL) | tep->mcte_state.next_free;
+			/* exchange atomically; if we fail it means somebody else has updated it */
+			cmpxchg(freep, oldhead, new);
 
-			mctelem_hold(tep);
-			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-			tep->mcte_next = NULL;
-			tep->mcte_prev = NULL;
-			if (which == MC_URGENT)
-				MCTE_SET_CLASS(tep, URGENT);
-			else
-				MCTE_SET_CLASS(tep, NONURGENT);
+			/* return element we got */
+			mctelem_init_uncommited(tep, which);
 			return MCTE2COOKIE(tep);
 		}
+
+		/* Atomically read the state and the next free element */
+		te_state.whole = tep->mcte_state.whole;
+		if (te_state.state != MCTE_STATE_UNCOMMITTED)
+			/* the element should have been removed by now and the list updated */
+			continue;
+
+		/* Try to update; reaching here means the other thread/function
+		 * has not updated yet.
+		 * In the reentrancy case we must do the update ourselves: we
+		 * cannot wait for the other flow because there is no other thread.
+		 */
+		new = ((oldhead + 0x10000ULL) & ~0xffffULL) | te_state.next_free;
+
+		/* We don't care about the result here, as
+		 * - if it succeeded we got a new head;
+		 * - if it failed somebody already changed the queue.
+		 */
+		cmpxchg(freep, oldhead, new);
 	}
+	mctelem_drop_count++;
+	return (NULL);
 }
 
 void *mctelem_dataptr(mctelem_cookie_t cookie)
@@ -366,8 +431,7 @@ void mctelem_dismiss(mctelem_cookie_t co
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
 
 	tep->mcte_refcnt--;
-	MCTE_TRANSITION_STATE(tep, UNCOMMITTED, FREE);
-	mctelem_free(tep);
+	mctelem_free(tep, MCTE_STATE_UNCOMMITTED);
 }
 
 /* Commit an entry with completed telemetry for logging.  The caller must

--=-OmxrXrr6ZAf/VNm2stI6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-OmxrXrr6ZAf/VNm2stI6--


From xen-devel-bounces@lists.xen.org Mon Jan 20 14:51:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GC6-0005Yq-Lr; Mon, 20 Jan 2014 14:51:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5GC5-0005YQ-DU
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:51:17 +0000
Received: from [85.158.139.211:38422] by server-1.bemta-5.messagelabs.com id
	AC/31-21065-4E73DD25; Mon, 20 Jan 2014 14:51:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390229473!10632206!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20042 invoked from network); 20 Jan 2014 14:51:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:51:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94504535"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 14:50:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:50:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5GBh-0007XC-Un;
	Mon, 20 Jan 2014 14:50:53 +0000
Date: Mon, 20 Jan 2014 14:49:49 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401201444410.21510@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401201448210.21510@kaball.uk.xensource.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<alpine.DEB.2.02.1401201444410.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Shakeel Butt <shakeel.butt@gmail.com>, "Wu, Feng" <feng.wu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Stefano Stabellini wrote:
> On Mon, 20 Jan 2014, Shakeel Butt wrote:
> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> > > On Mon, 20 Jan 2014, Wu, Feng wrote:
> > >> > -----Original Message-----
> > >> > From: xen-devel-bounces@lists.xen.org
> > >> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> > >> > Sent: Monday, January 20, 2014 1:48 PM
> > >> > To: xen-devel@lists.xen.org
> > >> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> > >> >
> > >> > Hi all,
> > >> >
> > >> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> > >> > device model? I tried but I am getting error 'gfx_passthru' invalid
> > >> > parameter for qemu-xen. I am able to do passthrough with qemu
> > >> > traditional i.e. qemu-dm.
> > >>
> > >> As far as I know, only qemu-traditional supports vga pass-through right now.
> > >
> > > Right.
> > > It is not possible to assign your primary VGA card to a VM with
> > > qemu-xen. You should be able to assign your secondary VGA card though.
> > 
> > Let me understand this correctly. If I have two VGA cards then I can
> > passthrough
> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this right and
> > if yes how can I do it?
> 
> Yes, it is correct. Simply use normal PCI assignment
> 
> http://wiki.xen.org/wiki/Xen_PCI_Passthrough

Sorry, I meant assigning the secondary VGA card (in Dom0) to HVM as its
secondary VGA card (the primary card would be the emulated VGA inside
the HVM guest).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:51:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GC6-0005YZ-3H; Mon, 20 Jan 2014 14:51:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5GC4-0005YN-M5
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:51:17 +0000
Received: from [85.158.137.68:64391] by server-2.bemta-3.messagelabs.com id
	D8/49-17329-3E73DD25; Mon, 20 Jan 2014 14:51:15 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390229472!10161701!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18189 invoked from network); 20 Jan 2014 14:51:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:51:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92427960"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 14:51:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:51:09 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5GBx-0007XI-6A;
	Mon, 20 Jan 2014 14:51:09 +0000
Message-ID: <1390229463.14279.27.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>
Date: Mon, 20 Jan 2014 14:51:03 +0000
Content-Type: multipart/mixed; boundary="=-OmxrXrr6ZAf/VNm2stI6"
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] mce: Fix for another race condition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-OmxrXrr6ZAf/VNm2stI6
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi,
  these are actually two patches, both of which fix the same race
condition in the MCE code. The problem is that these lines (in mctelem_reserve)


	newhead = oldhead->mcte_next;
	if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After you read the newhead pointer, it can happen that another
flow (a thread or a recursive invocation) changes the whole list but sets
the head back to the same value. So oldhead is still equal to *freelp, but
you are installing a new head that could point to any element (even one
already in use).

The basic idea of both patches is to move the state into a separate
mcte_state field and to update it with cmpxchg, to make sure we never
pick up an already allocated element.

The first patch detaches the list entirely (setting the head to NULL)
to avoid using mcte_next, falling back to a slow_reserve path which
scans the whole array looking for an element in the FREE state. This is
certainly safe and simple, but if the list is mostly allocated you end
up scanning the entire array every time.

The second patch (which needs some cleanup) uses array indexes instead
of pointers, so that the head and the next link can be bound together
atomically. The head is paired with a counter which is incremented on
every update, to detect the case where the list changed but the head
ended up with the same value (it acts like a list version). The state
is paired with the next index (which replaces mcte_next while the state
is FREE) to allow an atomic read of state+next. To handle both thread
safety and reentrancy, mctelem_reserve got a bit more complicated and
the updates are not so straightforward.

Now, the question is: should I just send the first patch and accept the
scanning cost in the corner case, or should I try to get the second
patch into shape?

Frediano


--=-OmxrXrr6ZAf/VNm2stI6
Content-Disposition: attachment; filename="mce_fix2.patch"
Content-Type: text/x-patch; name="mce_fix2.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# Parent 63a9bfd7be20a7fddda3013be2713cbab7e33c15

diff -r 63a9bfd7be20 xen/arch/x86/cpu/mcheck/mctelem.c
--- a/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 11:31:28 2014 +0000
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 11:32:44 2014 +0000
@@ -32,7 +32,8 @@
 struct mctelem_ent {
 	struct mctelem_ent *mcte_next;	/* next in chronological order */
 	struct mctelem_ent *mcte_prev;	/* previous in chronological order */
-	uint32_t mcte_flags;		/* See MCTE_F_* below */
+	uint16_t mcte_state;		/* See MCTE_STATE_* below */
+	uint16_t mcte_flags;		/* See MCTE_F_* below */
 	uint32_t mcte_refcnt;		/* Reference count */
 	void *mcte_data;		/* corresponding data payload */
 };
@@ -41,17 +42,13 @@ struct mctelem_ent {
 #define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
 #define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
 #define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
-#define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
-#define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
-#define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
-#define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
+#define	MCTE_STATE_FREE			0x001U	/* on a freelist */
+#define	MCTE_STATE_UNCOMMITTED		0x002U	/* reserved; on no list */
+#define	MCTE_STATE_COMMITTED		0x003U	/* on a committed list */
+#define	MCTE_STATE_PROCESSING		0x004U	/* on a processing list */
 
 #define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
 #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
-#define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
-				MCTE_F_STATE_UNCOMMITTED | \
-				MCTE_F_STATE_COMMITTED | \
-				MCTE_F_STATE_PROCESSING)
 
 #define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
 
@@ -60,11 +57,13 @@ struct mctelem_ent {
     (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
     (tep)->mcte_flags |= MCTE_F_CLASS_##new; } while (0)
 
-#define	MCTE_STATE(tep) ((tep)->mcte_flags & MCTE_F_MASK_STATE)
-#define	MCTE_TRANSITION_STATE(tep, old, new) do { \
-    BUG_ON(MCTE_STATE(tep) != (MCTE_F_STATE_##old)); \
-    (tep)->mcte_flags &= ~MCTE_F_MASK_STATE; \
-    (tep)->mcte_flags |= (MCTE_F_STATE_##new); } while (0)
+#define	MCTE_STATE(tep) ((tep)->mcte_state)
+#define	MCTE_XCHG_TRANSITION_STATE(tep, old, new) \
+	(cmpxchg(&(tep)->mcte_state, MCTE_STATE_##old, MCTE_STATE_##new) == MCTE_STATE_##old)
+#define	MCTE_TRANSITION_STATE0(tep, old, new) \
+	BUG_ON(cmpxchg(&(tep)->mcte_state, old, new) != old)
+#define	MCTE_TRANSITION_STATE(tep, old, new) \
+	MCTE_TRANSITION_STATE0((tep), MCTE_STATE_##old, MCTE_STATE_##new)
 
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
@@ -205,16 +204,27 @@ int mctelem_has_deferred(unsigned int cp
 /* Free an entry to its native free list; the entry must not be linked on
  * any list.
  */
-static void mctelem_free(struct mctelem_ent *tep)
+static void mctelem_free(struct mctelem_ent *tep, uint16_t prev_state)
 {
 	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
 	    MC_URGENT : MC_NONURGENT;
 
 	BUG_ON(tep->mcte_refcnt != 0);
-	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
+	BUG_ON(MCTE_STATE(tep) != prev_state);
 
 	tep->mcte_prev = NULL;
+	tep->mcte_next = NULL;
 	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	/*
+	 * This must be done after adding to the list, so that we still own
+	 * the element. On reserve we have a slow path that only looks at the
+	 * state, so setting it before inserting into the list could lead to:
+	 * - you set the state to FREE
+	 * - another thread allocates the element
+	 * - you try to insert into the list, modifying an element which is
+	 *   no longer owned by you (potentially corrupting it).
+	 */
+	tep->mcte_state = MCTE_STATE_FREE;
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -247,9 +257,8 @@ static void mctelem_processing_release(s
 
 	BUG_ON(tep != mctctl.mctc_processing_head[which]);
 	if (--tep->mcte_refcnt == 0) {
-		MCTE_TRANSITION_STATE(tep, PROCESSING, FREE);
 		mctctl.mctc_processing_head[which] = tep->mcte_next;
-		mctelem_free(tep);
+		mctelem_free(tep, MCTE_STATE_PROCESSING);
 	}
 }
 
@@ -287,16 +296,16 @@ void mctelem_init(int reqdatasz)
 		struct mctelem_ent *tep, **tepp;
 
 		tep = mctctl.mctc_elems + i;
-		tep->mcte_flags = MCTE_F_STATE_FREE;
+		tep->mcte_state = MCTE_STATE_FREE;
 		tep->mcte_refcnt = 0;
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
 			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
 			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
 		tep->mcte_next = *tepp;
@@ -308,46 +317,90 @@ void mctelem_init(int reqdatasz)
 /* incremented non-atomically when reserve fails */
 static int mctelem_drop_count;
 
+static void mctelem_init_uncommited(struct mctelem_ent *tep, mctelem_class_t which)
+{
+	mctelem_hold(tep);
+	tep->mcte_next = NULL;
+	tep->mcte_prev = NULL;
+	if (which == MC_URGENT)
+		MCTE_SET_CLASS(tep, URGENT);
+	else
+		MCTE_SET_CLASS(tep, NONURGENT);
+}
+
+static mctelem_cookie_t mctelem_slow_reserve(mctelem_class_t which)
+{
+	int i = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
+
+	for (; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
+		struct mctelem_ent *tep = mctctl.mctc_elems + i;
+
+		if (MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED)) {
+			mctelem_init_uncommited(tep, which);
+			return MCTE2COOKIE(tep);
+		}
+	}
+
+	mctelem_drop_count++;
+	return (NULL);
+}
+
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
 	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	struct mctelem_ent *oldhead;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
 	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
-			if (which == MC_URGENT && target == MC_URGENT) {
-				/* raid the non-urgent freelist */
-				target = MC_NONURGENT;
-				freelp = &mctctl.mctc_free[target];
-				continue;
-			} else {
-				mctelem_drop_count++;
-				return (NULL);
-			}
-		}
+		/*
+		 * In case the pointer is NULL, fall back to the slow reserve.
+		 * There are 3 possible cases:
+		 * - there are no more free elements;
+		 * - another thread is reserving an element;
+		 * - the list was broken because another thread reserved an
+		 *   element using the slow path.
+		 * So it is possible that some elements (even urgent ones) are
+		 * free while the list head is NULL; in that case only the
+		 * slow reserve can allocate them.
+		 */
+		if ((oldhead = *freelp) == NULL)
+			break;
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
+		/*
+		 * The correct way to remove from the list would be to set
+		 * the head to the next of the first element, but we cannot
+		 * safely read the next, so we empty the list entirely for a
+		 * moment and then try to restore the head once we have
+		 * ownership of the element.
+		 * This, combined with the way we handle the slow path, makes
+		 * the head pointer more of a hint than an always-valid list.
+		 */
+		if (cmpxchgptr(freelp, oldhead, NULL) == oldhead) {
 			struct mctelem_ent *tep = oldhead;
 
-			mctelem_hold(tep);
-			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-			tep->mcte_next = NULL;
-			tep->mcte_prev = NULL;
-			if (which == MC_URGENT)
-				MCTE_SET_CLASS(tep, URGENT);
-			else
-				MCTE_SET_CLASS(tep, NONURGENT);
+			/*
+			 * Element taken, but somebody already used it, so its pointers
+			 * are not valid. Leave the list empty; it will be refilled as
+			 * elements are freed, and unlinked elements can be taken again
+			 * via mctelem_slow_reserve. This can happen in 2 cases:
+			 * - the element was allocated using the slow path;
+			 * - the element was inserted but its state was not yet set.
+			 */
+			if (!MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED))
+				break;
+
+			*freelp = tep->mcte_next;
+			mctelem_init_uncommited(tep, which);
 			return MCTE2COOKIE(tep);
 		}
 	}
+	return mctelem_slow_reserve(which);
 }
 
 void *mctelem_dataptr(mctelem_cookie_t cookie)
@@ -366,8 +419,7 @@ void mctelem_dismiss(mctelem_cookie_t co
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
 
 	tep->mcte_refcnt--;
-	MCTE_TRANSITION_STATE(tep, UNCOMMITTED, FREE);
-	mctelem_free(tep);
+	mctelem_free(tep, MCTE_STATE_UNCOMMITTED);
 }
 
 /* Commit an entry with completed telemetry for logging.  The caller must

--=-OmxrXrr6ZAf/VNm2stI6
Content-Disposition: attachment; filename="mce_fix2_v2.patch"
Content-Type: text/x-patch; name="mce_fix2_v2.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# Parent c45cce86b944403632c81b0f3b98b0db33658e28

diff -r c45cce86b944 xen/arch/x86/cpu/mcheck/mctelem.c
--- a/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 10:27:49 2014 +0000
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 13:55:10 2014 +0000
@@ -29,10 +29,19 @@
 
 #include "mce.h"
 
+typedef union {
+	struct {
+		uint16_t state;		/* See MCTE_STATE_* below */
+		uint16_t next_free;
+	};
+	uint32_t whole;
+} mcte_state_t;
+
 struct mctelem_ent {
 	struct mctelem_ent *mcte_next;	/* next in chronological order */
 	struct mctelem_ent *mcte_prev;	/* previous in chronological order */
-	uint32_t mcte_flags;		/* See MCTE_F_* below */
+	mcte_state_t mcte_state;
+	uint16_t mcte_flags;		/* See MCTE_F_* below */
 	uint32_t mcte_refcnt;		/* Reference count */
 	void *mcte_data;		/* corresponding data payload */
 };
@@ -41,17 +50,13 @@ struct mctelem_ent {
 #define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
 #define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
 #define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
-#define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
-#define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
-#define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
-#define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
+#define	MCTE_STATE_FREE			0x001U	/* on a freelist */
+#define	MCTE_STATE_UNCOMMITTED		0x002U	/* reserved; on no list */
+#define	MCTE_STATE_COMMITTED		0x003U	/* on a committed list */
+#define	MCTE_STATE_PROCESSING		0x004U	/* on a processing list */
 
 #define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
 #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
-#define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
-				MCTE_F_STATE_UNCOMMITTED | \
-				MCTE_F_STATE_COMMITTED | \
-				MCTE_F_STATE_PROCESSING)
 
 #define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
 
@@ -60,15 +65,19 @@ struct mctelem_ent {
     (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
     (tep)->mcte_flags |= MCTE_F_CLASS_##new; } while (0)
 
-#define	MCTE_STATE(tep) ((tep)->mcte_flags & MCTE_F_MASK_STATE)
-#define	MCTE_TRANSITION_STATE(tep, old, new) do { \
-    BUG_ON(MCTE_STATE(tep) != (MCTE_F_STATE_##old)); \
-    (tep)->mcte_flags &= ~MCTE_F_MASK_STATE; \
-    (tep)->mcte_flags |= (MCTE_F_STATE_##new); } while (0)
+#define	MCTE_STATE(tep) ((tep)->mcte_state.state)
+#define	MCTE_XCHG_TRANSITION_STATE(tep, old, new) \
+	(cmpxchg(&(tep)->mcte_state.state, MCTE_STATE_##old, MCTE_STATE_##new) == MCTE_STATE_##old)
+#define	MCTE_TRANSITION_STATE0(tep, old, new) \
+	BUG_ON(cmpxchg(&(tep)->mcte_state.state, old, new) != old)
+#define	MCTE_TRANSITION_STATE(tep, old, new) \
+	MCTE_TRANSITION_STATE0((tep), MCTE_STATE_##old, MCTE_STATE_##new)
 
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+#define MC_FREE_IDX		(MC_URGENT_NENT + MC_NONURGENT_NENT)
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +86,15 @@ struct mctelem_ent {
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
+	 * The free lists are singly-linked via mcte_state.next_free, and
+	 * we allocate from them by atomically unlinking an element from the
+	 * head.
 	 * Consumed entries are returned to the head of the free list.
 	 * When an entry is reserved off the free list it is not linked
 	 * on any list until it is committed or dismissed.
+	 * Instead of storing a pointer we store a counter and an index.
+	 * The counter is used to ensure nobody changed the head back to
+	 * the same value it had before.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +114,7 @@ static struct mc_telem_ctl {
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	uint64_t mctc_free[MC_NCLASSES];
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -205,16 +218,29 @@ int mctelem_has_deferred(unsigned int cp
 /* Free an entry to its native free list; the entry must not be linked on
  * any list.
  */
-static void mctelem_free(struct mctelem_ent *tep)
+static void mctelem_free(struct mctelem_ent *tep, uint16_t prev_state)
 {
+	uint64_t *headp;
 	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
 	    MC_URGENT : MC_NONURGENT;
 
 	BUG_ON(tep->mcte_refcnt != 0);
-	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
+	BUG_ON(MCTE_STATE(tep) != prev_state);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+	tep->mcte_state.state = MCTE_STATE_FREE;
+
+	/* set the new head and counter */
+	headp = &mctctl.mctc_free[target];
+	for (;;) {
+		uint64_t old = *headp;
+		uint64_t new = ((old + 0x10000ULL) & ~0xffffULL) | (tep - mctctl.mctc_elems);
+
+		tep->mcte_state.next_free = (old & 0xffffU);
+		if (cmpxchg(headp, old, new) == old)
+			break;
+	}
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -247,9 +273,8 @@ static void mctelem_processing_release(s
 
 	BUG_ON(tep != mctctl.mctc_processing_head[which]);
 	if (--tep->mcte_refcnt == 0) {
-		MCTE_TRANSITION_STATE(tep, PROCESSING, FREE);
 		mctctl.mctc_processing_head[which] = tep->mcte_next;
-		mctelem_free(tep);
+		mctelem_free(tep, MCTE_STATE_PROCESSING);
 	}
 }
 
@@ -283,49 +308,68 @@ void mctelem_init(int reqdatasz)
 		return;
 	}
 
+	mctctl.mctc_free[MC_URGENT] = mctctl.mctc_free[MC_NONURGENT] = MC_FREE_IDX;
 	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+		uint64_t next_free;
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
-		tep->mcte_flags = MCTE_F_STATE_FREE;
+		tep->mcte_state.state = MCTE_STATE_FREE;
 		tep->mcte_refcnt = 0;
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			next_free = mctctl.mctc_free[MC_URGENT];
+			mctctl.mctc_free[MC_URGENT] = i;
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			next_free = mctctl.mctc_free[MC_NONURGENT];
+			mctctl.mctc_free[MC_NONURGENT] = i;
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
-		tep->mcte_next = *tepp;
+		tep->mcte_state.next_free = next_free;
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
 /* incremented non-atomically when reserve fails */
 static int mctelem_drop_count;
 
+static void mctelem_init_uncommited(struct mctelem_ent *tep, mctelem_class_t which)
+{
+	mctelem_hold(tep);
+	tep->mcte_next = NULL;
+	tep->mcte_prev = NULL;
+	if (which == MC_URGENT)
+		MCTE_SET_CLASS(tep, URGENT);
+	else
+		MCTE_SET_CLASS(tep, NONURGENT);
+}
+
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	mcte_state_t te_state;
+	uint64_t *freep;
+	uint64_t oldhead, new;
+	struct mctelem_ent *tep;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
-	freelp = &mctctl.mctc_free[target];
+	freep = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
+		oldhead = *freep;
+
+		if ((oldhead & 0xffffU) == MC_FREE_IDX) {
 			if (which == MC_URGENT && target == MC_URGENT) {
 				/* raid the non-urgent freelist */
 				target = MC_NONURGENT;
-				freelp = &mctctl.mctc_free[target];
+				freep = &mctctl.mctc_free[target];
 				continue;
 			} else {
 				mctelem_drop_count++;
@@ -333,21 +377,42 @@ mctelem_cookie_t mctelem_reserve(mctelem
 			}
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/*
+		 * Try to allocate the element.
+		 * If we succeed we can update the list head.
+		 */
+		tep = mctctl.mctc_elems + (oldhead & 0xffffU);
+		if (MCTE_XCHG_TRANSITION_STATE(tep, FREE, UNCOMMITTED)) {
+			new = ((oldhead + 0x10000ULL) & ~0xffffULL) | tep->mcte_state.next_free;
+			/* exchange atomically; if this fails it means somebody else has updated it */
+			cmpxchg(freep, oldhead, new);
 
-			mctelem_hold(tep);
-			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-			tep->mcte_next = NULL;
-			tep->mcte_prev = NULL;
-			if (which == MC_URGENT)
-				MCTE_SET_CLASS(tep, URGENT);
-			else
-				MCTE_SET_CLASS(tep, NONURGENT);
+			/* return element we got */
+			mctelem_init_uncommited(tep, which);
 			return MCTE2COOKIE(tep);
 		}
+
+		/* Atomically read the state and the next free element */
+		te_state.whole = tep->mcte_state.whole;
+		if (te_state.state != MCTE_STATE_UNCOMMITTED)
+			/* it should have been removed by now and the list updated */
+			continue;
+
+		/* Try to update: reaching here means the other thread/function
+		 * has not updated the head yet.
+		 * In the reentrant case we must update it ourselves; we cannot
+		 * wait for the other thread because there is no other thread.
+		 */
+		new = ((oldhead + 0x10000ULL) & ~0xffffULL) | te_state.next_free;
+
+		/* We don't care about the result here, as
+		 * - if it succeeded we got a new head;
+		 * - if it failed somebody already changed the queue.
+		 */
+		cmpxchg(freep, oldhead, new);
 	}
+	mctelem_drop_count++;
+	return (NULL);
 }
 
 void *mctelem_dataptr(mctelem_cookie_t cookie)
@@ -366,8 +431,7 @@ void mctelem_dismiss(mctelem_cookie_t co
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
 
 	tep->mcte_refcnt--;
-	MCTE_TRANSITION_STATE(tep, UNCOMMITTED, FREE);
-	mctelem_free(tep);
+	mctelem_free(tep, MCTE_STATE_UNCOMMITTED);
 }
 
 /* Commit an entry with completed telemetry for logging.  The caller must

--=-OmxrXrr6ZAf/VNm2stI6
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-OmxrXrr6ZAf/VNm2stI6--


From xen-devel-bounces@lists.xen.org Mon Jan 20 14:51:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:51:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GC6-0005Yq-Lr; Mon, 20 Jan 2014 14:51:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5GC5-0005YQ-DU
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:51:17 +0000
Received: from [85.158.139.211:38422] by server-1.bemta-5.messagelabs.com id
	AC/31-21065-4E73DD25; Mon, 20 Jan 2014 14:51:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390229473!10632206!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20042 invoked from network); 20 Jan 2014 14:51:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 14:51:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94504535"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 14:50:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 09:50:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5GBh-0007XC-Un;
	Mon, 20 Jan 2014 14:50:53 +0000
Date: Mon, 20 Jan 2014 14:49:49 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401201444410.21510@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401201448210.21510@kaball.uk.xensource.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<alpine.DEB.2.02.1401201444410.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Shakeel Butt <shakeel.butt@gmail.com>, "Wu, Feng" <feng.wu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Stefano Stabellini wrote:
> On Mon, 20 Jan 2014, Shakeel Butt wrote:
> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> > > On Mon, 20 Jan 2014, Wu, Feng wrote:
> > >> > -----Original Message-----
> > >> > From: xen-devel-bounces@lists.xen.org
> > >> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
> > >> > Sent: Monday, January 20, 2014 1:48 PM
> > >> > To: xen-devel@lists.xen.org
> > >> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
> > >> >
> > >> > Hi all,
> > >> >
> > >> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> > >> > device model? I tried but I am getting error 'gfx_passthru' invalid
> > >> > parameter for qemu-xen. I am able to do passthrough with qemu
> > >> > traditional i.e. qemu-dm.
> > >>
> > >> As far as I know, only qemu-traditional supports vga pass-through right now.
> > >
> > > Right.
> > > It is not possible to assign your primary VGA card to a VM with
> > > qemu-xen. You should be able to assign your secondary VGA card though.
> > 
> > Let me understand this correctly. If I have two VGA cards then I can
> > passthrough
> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this right and
> > if yes how can I do it?
> 
> Yes, it is correct. Simply use normal PCI assignment
> 
> http://wiki.xen.org/wiki/Xen_PCI_Passthrough

Sorry, I meant assigning the secondary VGA card (in Dom0) to HVM as its
secondary VGA card (the primary card would be the emulated VGA inside
the HVM guest).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:52:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:52:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GDC-0005hx-4L; Mon, 20 Jan 2014 14:52:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5GDA-0005hl-AK
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:52:24 +0000
Received: from [85.158.143.35:13117] by server-3.bemta-4.messagelabs.com id
	77/89-32360-7283DD25; Mon, 20 Jan 2014 14:52:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390229541!5691364!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30032 invoked from network); 20 Jan 2014 14:52:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 14:52:22 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KEpHaJ013702
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 14:51:18 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KEpG4P009533
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 14:51:16 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KEpGqJ009523; Mon, 20 Jan 2014 14:51:16 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 06:51:16 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EC70A1C18DB; Mon, 20 Jan 2014 09:51:14 -0500 (EST)
Date: Mon, 20 Jan 2014 09:51:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140120145114.GA3858@phenom.dumpdata.com>
References: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] [RFC] tools/hvmloader: Control ACPI
 debugging using a platform flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 02:54:58PM +0000, Andrew Cooper wrote:
> Since Qemu-trad
>   c/s 147f83f9b7d87a698c200c4f3eb2d36a0e4fe54b
>   "hw/piix4acpi: Make writes to ACPI_DBG_IO_ADDR actually work."
> 
> there has been quite a lot of extra logging appearing in the VM logs.
> 
> The hotplug debugging contributes 2 vmexits per slot per hotplug event, which
> are simply a waste of time unless a developer is trying to debug VM
> hotplugging problems.
> 
> Introduce a platform flag, "acpi-debug" to indicate in the AML whether
> debugging writes are wanted or not.

Oddly I don't see my response. But why not just do #ifdef DEBUG around
the hvmloader code, and this patch:


diff --git a/hw/piix4acpi.c b/hw/piix4acpi.c
index bf916d9..2b0ce32 100644
--- a/hw/piix4acpi.c
+++ b/hw/piix4acpi.c
@@ -287,7 +287,6 @@ static inline void clear_bit(uint8_t *map, int bit)
 static void acpi_dbg_writel(void *opaque, uint32_t addr, uint32_t val)
 {
     PIIX4ACPI_LOG(PIIX4ACPI_LOG_DEBUG, "ACPI: DBG: 0x%08x\n", val);
-    PIIX4ACPI_LOG(PIIX4ACPI_LOG_INFO, "ACPI:debug: write addr=0x%x, val=0x%x.\n", addr, val);
 }
 
 #ifdef CONFIG_PASSTHROUGH


That way it will only happen when you build with 'debug=y' or such.

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  docs/misc/xenstore-paths.markdown       |    1 +
>  tools/firmware/hvmloader/acpi/build.c   |    5 +++++
>  tools/firmware/hvmloader/acpi/dsdt.asl  |    1 +
>  tools/firmware/hvmloader/acpi/mk_dsdt.c |    6 ++++++
>  4 files changed, 13 insertions(+)
> 
> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 70ab7f4..593a7b1 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -190,6 +190,7 @@ Various platform properties.
>  * acpi -- is ACPI enabled for this domain
>  * acpi_s3 -- is ACPI S3 support enabled for this domain
>  * acpi_s4 -- is ACPI S4 support enabled for this domain
> +* acpi-debug -- whether the AML code should write debugging messages to qemu
>  
>  #### ~/platform/generation-id = INTEGER ":" INTEGER [HVM,INTERNAL]
>  
> diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
> index f1dd3f0..59716bb 100644
> --- a/tools/firmware/hvmloader/acpi/build.c
> +++ b/tools/firmware/hvmloader/acpi/build.c
> @@ -47,6 +47,7 @@ struct acpi_info {
>      uint8_t  com2_present:1;    /* 0[1] - System has COM2? */
>      uint8_t  lpt1_present:1;    /* 0[2] - System has LPT1? */
>      uint8_t  hpet_present:1;    /* 0[3] - System has HPET? */
> +    uint8_t  acpi_debugging:1;  /* 0[4] - ACPI debugging enabled ? */
>      uint32_t pci_min, pci_len;  /* 4, 8 - PCI I/O hole boundaries */
>      uint32_t madt_csum_addr;    /* 12   - Address of MADT checksum */
>      uint32_t madt_lapic0_addr;  /* 16   - Address of first MADT LAPIC struct */
> @@ -404,6 +405,7 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>      unsigned char       *dsdt;
>      unsigned long        secondary_tables[ACPI_MAX_SECONDARY_TABLES];
>      int                  nr_secondaries, i;
> +    const char          *xs_str;
>  
>      /* Allocate and initialise the acpi info area. */
>      mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
> @@ -519,10 +521,13 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>      if ( !new_vm_gid(acpi_info) )
>          goto oom;
>  
> +    xs_str = xenstore_read("platform/acpi-debug", "0");
> +
>      acpi_info->com1_present = uart_exists(0x3f8);
>      acpi_info->com2_present = uart_exists(0x2f8);
>      acpi_info->lpt1_present = lpt_exists(0x378);
>      acpi_info->hpet_present = hpet_exists(ACPI_HPET_ADDRESS);
> +    acpi_info->acpi_debugging = (xs_str[0] == '1');
>      acpi_info->pci_min = pci_mem_start;
>      acpi_info->pci_len = pci_mem_end - pci_mem_start;
>  
> diff --git a/tools/firmware/hvmloader/acpi/dsdt.asl b/tools/firmware/hvmloader/acpi/dsdt.asl
> index 247a8ad..e753286 100644
> --- a/tools/firmware/hvmloader/acpi/dsdt.asl
> +++ b/tools/firmware/hvmloader/acpi/dsdt.asl
> @@ -51,6 +51,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
>             UAR2, 1,
>             LTP1, 1,
>             HPET, 1,
> +           ADBG, 1,
>             Offset(4),
>             PMIN, 32,
>             PLEN, 32,
> diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> index a4b693b..3f0ca74 100644
> --- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
> +++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> @@ -347,14 +347,18 @@ int main(int argc, char **argv)
>              /* _SUN == dev */
>              stmt("Name", "_SUN, 0x%08x", slot >> 3);
>              push_block("Method", "_EJ0, 1");
> +            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>              stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>              stmt("Store", "0x88, \\_GPE.DPT2");
> +            pop_block();
>              stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
>                   (slot & 1) ? 0x10 : 0x01, slot & ~1);
>              pop_block();
>              push_block("Method", "_STA, 0");
> +            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>              stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>              stmt("Store", "0x89, \\_GPE.DPT2");
> +            pop_block();
>              if ( slot & 1 )
>                  stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
>              else
> @@ -426,8 +430,10 @@ int main(int argc, char **argv)
>          stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
>          stmt("And", "Local1, 0xff, SLT");
>          /* Debug */
> +        push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>          stmt("Store", "SLT, DPT1");
>          stmt("Store", "EVT, DPT2");
> +        pop_block();
>          /* Decision tree */
>          decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
>          pop_block();
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 14:58:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 14:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GJB-0006G4-JG; Mon, 20 Jan 2014 14:58:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=009791a7e9=mlabriol@gdeb.com>)
	id 1W5GJA-0006Fx-Gr
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 14:58:36 +0000
Received: from [193.109.254.147:32162] by server-14.bemta-14.messagelabs.com
	id 61/A3-12628-B993DD25; Mon, 20 Jan 2014 14:58:35 +0000
X-Env-Sender: prvs=009791a7e9=mlabriol@gdeb.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390229914!8507242!1
X-Originating-IP: [153.11.250.40]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MCA9PiA2Mzk3OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24342 invoked from network); 20 Jan 2014 14:58:35 -0000
Received: from mx1.gd-ms.com (HELO mx1.gd-ms.com) (153.11.250.40)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 14:58:35 -0000
Received: from ebsmtp.gdeb.com ([153.11.13.41])
	by mx1.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1W5GJ7-0005Wn-TW; Mon, 20 Jan 2014 09:58:33 -0500
To: xen-devel@lists.xen.org
MIME-Version: 1.0
X-KeepSent: F3ADE803:67F2976D-85257C66:004EA77A;
 type=4; name=$KeepSent
Message-ID: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Mon, 20 Jan 2014 09:58:32 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx1.gd-ms.com  1W5GJ7-0005Wn-TW
X-GDM-EVAL: score: /30; hits: 
Cc: michael.d.labriola@gmail.com
Subject: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent crashes 
with multiple older R600 series (HD 6470 and HD 6570) and unusably slow 
graphics with a newer HD7000 (can see each line refresh individually on 
radeonfb tty).  All 3 systems seem to work fine bare metal.

The R600 crashes happen seemingly randomly when using OpenGL Compositor in 
Enlightenment 0.17.  My dom0 need not even have any domUs running. 
Sometimes it happens within a few minutes, sometimes it will run OK for an 
afternoon or so.  Eventually TTM issues an "unable to get page 0" error 
message, the radeon driver follows that up with a 
"radeon_gem_object_create failed to allocate gem" error message.  Then the 
radeon driver starts spamming that gem failure message until I'm forced to 
reboot.

Behavior is identical with kernel versions 3.8, 3.10, and 3.13-rc8.

I'm using Xen 4.2.1 32bit.

xen command line is:  vga=mode=0x314
kernel command line is:  root=/dev/md0 quiet pci=realloc 
console=ttyS0,115200n8 console=tty0

Fingers crossed that there's some magic boot parameter I'm missing.  It's 
been a while since I dabbled with this stuff.  ;-)

Thanks!

---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)


 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:06:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GQC-0006ma-Hk; Mon, 20 Jan 2014 15:05:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5GQB-0006mT-J1
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:05:51 +0000
Received: from [85.158.139.211:36247] by server-9.bemta-5.messagelabs.com id
	9B/0B-15098-E4B3DD25; Mon, 20 Jan 2014 15:05:50 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390230338!10636444!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE1MTMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9669 invoked from network); 20 Jan 2014 15:05:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 15:05:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; 
	d="asc'?scan'208";a="94510677"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 15:05:38 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 10:05:37 -0500
Message-ID: <1390230336.23576.24.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Mon, 20 Jan 2014 16:05:36 +0100
In-Reply-To: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1523637133843557338=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1523637133843557338==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-Fuhv+whMVTQrsgxTNI/O"

--=-Fuhv+whMVTQrsgxTNI/O
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-01-20 at 16:10 +0200, Pavlo Suikov wrote:
> Hi,
>
Hi again Pavlo,

> yet another question on soft real time under Xen. Setup looks like
> this:
>
> Xen: 4.4-rc1 with credit scheduler
>
> dom0: Linux 3.8.13 with CFS scheduler
>
> domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler
>
x86 or ARM host?

If x86, is DomU HVM or PV?

Also, how many pCPUs and vCPUs do the host and the various guests have?
Are you using any vCPU-to-pCPU pinning?

> Test program makes nothing but sleeping for 30 (5, 500) ms then
> printing timestamp in an endless loop.
>
Ok, so something similar to cyclictest, right?

https://rt.wiki.kernel.org/index.php/Cyclictest

I'm also investigating running it in a bunch of configurations... I'll
have results that we can hopefully compare to yours very soon.

What about giving it a try yourself? I think standardizing on one (or a
set of) specific tool could be a good thing.

> Results on a guest OS being run without hypervisor are pretty correct,
> while on a guest OS under a hypervisor (both in dom0 and domU) we
> observe regular delay of 5-15 ms no matter what sleep time is.
> Configuring scheduler to different weights for dom0/domU has no effect
> whatsoever.
>
Mmm... Can you show us at least part of the numbers? From my experience,
it's really easy to mess up with terminology in this domain (delay,
latency, jitter, etc.).

AFAIUI, you're saying that you're asking for a sleep time of X, and
you're being woken up in the interval [X+5ms, X+15ms], is that the
case?

> If setup looks like this (the only change is the Xen scheduler):
>
> Xen: 4.4-rc1 with sEDF scheduler
> dom0: Linux 3.8.13 with CFS scheduler
>
> domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler
>
> we observe the same delay but only in domU; dom0 measurements are far
> more correct.
>
It would be nice to know, in addition to the information above (arch,
CPUs, pinning, etc), something about how sEDF is being used in this
particular case too. Can you post the output of the following commands:

# xl list -n

# xl vcpu-list

# xl sched-sedf


That being said, sEDF is quite broken, unless used in a very specific
way. Also, even if it were 100% working, your use case seems not to
need sEDF (or any real-time scheduling solution) _yet_, as it does not
include any other load besides the usleeping task in the Dom0 or DomU,
is that the case? Or do you have some other load in the system while
performing these measurements? If the latter, what and where?

What I mean is as follows. As far as Xen is concerned, if you have a
bunch of VMs, with various and different kinds of load running in them,
and you want to make sure that one of them receives at least some
specific pCPU time (or things like that), then it is important what
scheduler you pick in Xen. If you only have, say, Dom0 and one DomU, and
you are only running your workload (the usleeping task) inside the DomU,
then vCPU scheduling happening in Xen should not make that much
difference. Of course, in order to be sure of that, we'd need to know
more about the configuration, as I already asked above.

Oh, and now that I think about it, something that present in credit and
not in sEDF that might be worth checking is the scheduling rate limiting
thing.

You can fin out more about it in this blog post:
http://blog.xen.org/index.php/2012/04/10/xen-4-2-new-scheduler-parameters-2=
/

Discussion about it started in this thread (and then continued in a few
other ones):
http://thr3ads.net/xen-devel/2011/10/1129938-PATCH-scheduler-rate-controlle=
r

In the code, look for ratelimit_us, in particular inside
xen/common/sched_credit.c, to figure out even better what this does.

It probably isn't the source of your problems (the 'delay' you're seeing
looks too big), but since it's quite cheap to check... :-)

Let us know.

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-Fuhv+whMVTQrsgxTNI/O
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLdO0AACgkQk4XaBE3IOsQkxwCgk6iIYFihzD9BuLcrkkIsB/V7
43QAoIoNdgJwHEGBi4amMbuOgJnLZgHC
=B3uc
-----END PGP SIGNATURE-----

--=-Fuhv+whMVTQrsgxTNI/O--


--===============1523637133843557338==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1523637133843557338==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 15:06:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GQC-0006ma-Hk; Mon, 20 Jan 2014 15:05:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5GQB-0006mT-J1
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:05:51 +0000
Received: from [85.158.139.211:36247] by server-9.bemta-5.messagelabs.com id
	9B/0B-15098-E4B3DD25; Mon, 20 Jan 2014 15:05:50 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390230338!10636444!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTE1MTMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9669 invoked from network); 20 Jan 2014 15:05:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 15:05:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; 
	d="asc'?scan'208";a="94510677"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 15:05:38 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 10:05:37 -0500
Message-ID: <1390230336.23576.24.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Mon, 20 Jan 2014 16:05:36 +0100
In-Reply-To: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1523637133843557338=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On lun, 2014-01-20 at 16:10 +0200, Pavlo Suikov wrote:
> Hi,
>
Hi again Pavlo,

> yet another question on soft real time under Xen. Setup looks like
> this:
>
> Xen: 4.4-rc1 with credit scheduler
>
> dom0: Linux 3.8.13 with CFS scheduler
>
> domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler
>
x86 or ARM host?

If x86, is DomU HVM or PV?

Also, how many pCPUs and vCPUs do the host and the various guests have?
Are you using any vCPU-to-pCPU pinning?

> Test program makes nothing but sleeping for 30 (5, 500) ms then
> printing timestamp in an endless loop.
>
Ok, so something similar to cyclictest, right?

https://rt.wiki.kernel.org/index.php/Cyclictest

I'm also investigating running it in a bunch of configurations... I'll
have results that we can hopefully compare to yours very soon.

What about giving it a try yourself? I think standardizing on one (or a
set of) specific tool could be a good thing.

> Results on a guest OS being run without hypervisor are pretty correct,
> while on a guest OS under a hypervisor (both in dom0 and domU) we
> observe regular delay of 5-15 ms no matter what sleep time is.
> Configuring scheduler to different weights for dom0/domU has no effect
> whatsoever.
>
Mmm... Can you show us at least part of the numbers? From my experience,
it's really easy to mess up with terminology in this domain (delay,
latency, jitter, etc.).

AFAIUI, you're saying that you're asking for a sleep time of X, and
you're being woken up in the interval [X+5ms, X+15ms], is that the
case?

> If setup looks like this (the only change is the Xen scheduler):
>
> Xen: 4.4-rc1 with sEDF scheduler
> dom0: Linux 3.8.13 with CFS scheduler
>
> domU: Android 4.3 with Linux kernel 3.8.13 with CFS scheduler
>
> we observe the same delay but only in domU; dom0 measurements are far
> more correct.
>
It would be nice to know, in addition to the information above (arch,
CPUs, pinning, etc), something about how sEDF is being used in this
particular case too. Can you post the output of the following commands:

# xl list -n

# xl vcpu-list

# xl sched-sedf


That being said, sEDF is quite broken unless used in a very specific
way. Also, even if it were 100% working, your use case does not seem to
need sEDF (or any real-time scheduling solution) _yet_, as it does not
include any kind of load other than the usleeping task in the Dom0 or
DomU. Is that the case? Or do you have some other load in the system
while performing these measurements? If the latter, what and where?

What I mean is as follows. As far as Xen is concerned, if you have a
bunch of VMs, with various different kinds of load running in them,
and you want to make sure that one of them receives at least some
specific pCPU time (or things like that), then it is important what
scheduler you pick in Xen. If you only have, say, Dom0 and one DomU, and
you are only running your workload (the usleeping task) inside the DomU,
then vCPU scheduling happening in Xen should not make that much
difference. Of course, in order to be sure of that, we'd need to know
more about the configuration, as I already asked above.

Oh, and now that I think about it, something that is present in credit
but not in sEDF that might be worth checking is the scheduling rate
limiting feature.

You can find out more about it in this blog post:
http://blog.xen.org/index.php/2012/04/10/xen-4-2-new-scheduler-parameters-2/

Discussion about it started in this thread (and then continued in a few
other ones):
http://thr3ads.net/xen-devel/2011/10/1129938-PATCH-scheduler-rate-controller

In the code, look for ratelimit_us, in particular inside
xen/common/sched_credit.c, to figure out even better what this does.

It probably isn't the source of your problems (the 'delay' you're seeing
looks too big), but since it's quite cheap to check... :-)

Let us know.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Mon Jan 20 15:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GTw-0007Hg-6k; Mon, 20 Jan 2014 15:09:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5GTu-0007Ha-CC
	for Xen-devel@lists.xensource.com; Mon, 20 Jan 2014 15:09:42 +0000
Received: from [85.158.143.35:20831] by server-1.bemta-4.messagelabs.com id
	BA/26-02132-53C3DD25; Mon, 20 Jan 2014 15:09:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390230579!11561501!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2834 invoked from network); 20 Jan 2014 15:09:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:09:40 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KF9Wrb000387
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:09:33 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KF9Wkv021069
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:09:32 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KF9W95021064; Mon, 20 Jan 2014 15:09:32 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 07:09:31 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5D5901C18DB; Mon, 20 Jan 2014 10:09:30 -0500 (EST)
Date: Mon, 20 Jan 2014 10:09:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140120150930.GA4598@phenom.dumpdata.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 06:24:55PM -0800, Mukesh Rathor wrote:
> pvh was designed to start with pv flags, but a commit in xen tree

Thank you for posting this!

> 51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of the flags as

You need to always include the title of said commit.

> they are not necessary. As a result, these CR flags must be set in the
> guest.

I sent out replies to this over the weekend but somehow they are not
showing up.


> 
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>

Since his SoB is first that implies that he is the author. But the
'From' line is from you. You can modify that by doing:

git commit --amend --author "Roger Pau Monne <roger.pau@citrix.com>"

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |   43 +++++++++++++++++++++++++++++++++++++------
>  arch/x86/xen/smp.c       |    2 +-
>  arch/x86/xen/xen-ops.h   |    2 +-
>  3 files changed, 39 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 628099a..4a2aaa6 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1410,12 +1410,8 @@ static void __init xen_boot_params_init_edd(void)
>   * Set up the GDT and segment registers for -fstack-protector.  Until
>   * we do this, we have to be careful not to call any stack-protected
>   * function, which is most of the kernel.
> - *
> - * Note, that it is refok - because the only caller of this after init
> - * is PVH which is not going to use xen_load_gdt_boot or other
> - * __init functions.
>   */
> -void __ref xen_setup_gdt(int cpu)
> +static void xen_setup_gdt(int cpu)
>  {
>  	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>  #ifdef CONFIG_X86_64
> @@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +/*
> + * A pv guest starts with default flags that are not set for pvh, set them
> + * here asap.
> + */
> +static void xen_pvh_set_cr_flags(int cpu)
> +{
> +	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);

You seem to be missing X86_CR0_NE?

The hypervisor sets "X86_CR0_PG" for PVH, and unconditionally sets
X86_CR0_TS, X86_CR0_PE, X86_CR0_ET for HVM, so from the list of flags
that 'secondary_startup_64' sets:

208         /* Setup cr0 */
209 #define CR0_STATE       (X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
210                          X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
211                          X86_CR0_PG)
212         movl    $CR0_STATE, %eax


The one that is missing is NE. I fixed it up.

> +
> +	if (!cpu)
> +		return;

And what happens if we don't have this check? Would it be bad if we did
multiple cr4 writes?

FYI, this (cr4) should have been a separate patch. I fixed it up that way.
> +	/*
> +	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
> +	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
> +	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
> +	 */
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
> +}

Separate patch, and since the PGE part is more complicated than just
setting CR4 - you also have to tweak this:

1512         /* Prevent unwanted bits from being set in PTEs. */                     
1513         __supported_pte_mask &= ~_PAGE_GLOBAL;                                  

I think it should be done once we have actually confirmed that you can
do 2MB pages within the guest. (might need some more tweaking?)

> +
> +/*
> + * Note, that it is refok - because the only caller of this after init

Ah, you based this on an old commit. #linux-next is the one that has
the patches.

> + * is PVH which is not going to use xen_load_gdt_boot or other
> + * __init functions.
> + */
> +void __ref xen_pvh_secondary_vcpu_init(int cpu)
> +{
> +	xen_setup_gdt(cpu);
> +	xen_pvh_set_cr_flags(cpu);
> +}
> +
>  static void __init xen_pvh_early_guest_init(void)
>  {
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		return;
>  
> -	if (xen_feature(XENFEAT_hvm_callback_vector))
> +	if (xen_feature(XENFEAT_hvm_callback_vector)) {
>  		xen_have_vector_callback = 1;
> +		xen_pvh_set_cr_flags(0);

Why? Why not make it unconditional? Or just change the
code to error out if !hvm_callback_vector?

Anyhow, I changed it in the patch.

> +	}
>  
>  #ifdef CONFIG_X86_32
>  	BUG(); /* PVH: Implement proper support. */
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 5e46190..a18eadd 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
>  #ifdef CONFIG_X86_64
>  	if (xen_feature(XENFEAT_auto_translated_physmap) &&
>  	    xen_feature(XENFEAT_supervisor_mode_kernel))
> -		xen_setup_gdt(cpu);
> +		xen_pvh_secondary_vcpu_init(cpu);
>  #endif
>  	cpu_bringup();
>  	cpu_startup_entry(CPUHP_ONLINE);
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 9059c24..1cb6f4c 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
>  
>  extern int xen_panic_handler_init(void);
>  
> -void xen_setup_gdt(int cpu);
> +void xen_pvh_secondary_vcpu_init(int cpu);
>  #endif /* XEN_OPS_H */
> -- 
> 1.7.2.3
> 


Here is what I am thinking we should have in the tree:

>From 0a1574661b47544d2e880d579d1954dcbb6aa2c1 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 20 Jan 2014 09:20:07 -0500
Subject: [PATCH] xen/pvh: Set X86_CR0_WP and others in CR0

Otherwise some user-space applications that use 'clone' with
CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID end up hitting an assert
in glibc, manifested by:

general protection ip:7f80720d364c sp:7fff98fd8a80 error:0 in
libc-2.13.so[7f807209e000+180000]

This is due to the nature of said operations, which set and clear
the TID.  "In the successful one I can see that the page table of
the parent process has been updated successfully to use a
different physical page, so the write of the tid on
that page only affects the child...

On the other hand, in the failed case, the write seems to happen before
the copy of the original page is done, so both the parent _and_ the child
end up with the same value (because the parent copies the page after
the write of the child tid has already happened)."
(Roger's analysis). The cause is Xen commit
51e2cac257ec8b4080d89f0855c498cbbd76a5e5
"x86/pvh: set only minimal cr0 and cr4 flags in order to use paging",
in which CR0_WP was removed, so the COW features of the Linux kernel
were not operating properly.

While doing that, also update the rest of the CR0 flags to be in line
with what a baremetal Linux kernel would set them to.

In 'secondary_startup_64' (baremetal Linux) sets:

X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP |
X86_CR0_AM | X86_CR0_PG

The hypervisor for HVM type guests (which PVH is a subset of) sets:
X86_CR0_PE | X86_CR0_ET | X86_CR0_TS
For PVH it specifically sets:
X86_CR0_PG

Which means we need to set the rest: X86_CR0_MP | X86_CR0_NE  |
X86_CR0_WP | X86_CR0_AM to have full parity.

Reported-and-Tested-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v1: Took out the cr4 writes to be a separate patch]
---
 arch/x86/xen/enlighten.c |   31 +++++++++++++++++++++++++++++--
 arch/x86/xen/smp.c       |    2 +-
 arch/x86/xen/xen-ops.h   |    2 +-
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index b6d61c3..b320f2b 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1462,13 +1462,40 @@ void __ref xen_setup_gdt(int cpu)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+/*
+ * A PV guest starts with default flags that are not set for PVH, set them
+ * here asap.
+ */
+static void xen_pvh_set_cr_flags(int cpu)
+{
+
+	/* Some of these are set up in 'secondary_startup_64'. The others,
+	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET, are set by Xen for HVM guests
+	 * (with which PVH shares codepaths), while X86_CR0_PG is for PVH. */
+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+}
+
+/*
+ * Note that it is __ref - because the only caller of this after init
+ * is PVH which is not going to use xen_load_gdt_boot or other
+ * __init functions.
+ */
+void __ref xen_pvh_secondary_vcpu_init(int cpu)
+{
+	xen_setup_gdt(cpu);
+	xen_pvh_set_cr_flags(cpu);
+}
+
 static void __init xen_pvh_early_guest_init(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		return;
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
-		xen_have_vector_callback = 1;
+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		return;
+
+	xen_have_vector_callback = 1;
+	xen_pvh_set_cr_flags(0);
 
 #ifdef CONFIG_X86_32
 	BUG(); /* PVH: Implement proper support. */
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 5e46190..a18eadd 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
 #ifdef CONFIG_X86_64
 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
 	    xen_feature(XENFEAT_supervisor_mode_kernel))
-		xen_setup_gdt(cpu);
+		xen_pvh_secondary_vcpu_init(cpu);
 #endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9059c24..1cb6f4c 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
-void xen_setup_gdt(int cpu);
+void xen_pvh_secondary_vcpu_init(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GTw-0007Hg-6k; Mon, 20 Jan 2014 15:09:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5GTu-0007Ha-CC
	for Xen-devel@lists.xensource.com; Mon, 20 Jan 2014 15:09:42 +0000
Received: from [85.158.143.35:20831] by server-1.bemta-4.messagelabs.com id
	BA/26-02132-53C3DD25; Mon, 20 Jan 2014 15:09:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390230579!11561501!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2834 invoked from network); 20 Jan 2014 15:09:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:09:40 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KF9Wrb000387
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:09:33 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KF9Wkv021069
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:09:32 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KF9W95021064; Mon, 20 Jan 2014 15:09:32 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 07:09:31 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5D5901C18DB; Mon, 20 Jan 2014 10:09:30 -0500 (EST)
Date: Mon, 20 Jan 2014 10:09:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140120150930.GA4598@phenom.dumpdata.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 06:24:55PM -0800, Mukesh Rathor wrote:
> pvh was designed to start with pv flags, but a commit in xen tree

Thank you for posting this!

> 51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of the flags as

You need to always include the title of said commit.

> they are not necessary. As a result, these CR flags must be set in the
> guest.

I sent out replies to this over the weekend but somehow they are not
showing up.


> 
> Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>

Since his SoB is first that implies that he is the author. But the
'From' line is from you. You can modify that by doing:

git commit --amend --author "Roger Pau Monne <roger.pau@citrix.com>"

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |   43 +++++++++++++++++++++++++++++++++++++------
>  arch/x86/xen/smp.c       |    2 +-
>  arch/x86/xen/xen-ops.h   |    2 +-
>  3 files changed, 39 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index 628099a..4a2aaa6 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1410,12 +1410,8 @@ static void __init xen_boot_params_init_edd(void)
>   * Set up the GDT and segment registers for -fstack-protector.  Until
>   * we do this, we have to be careful not to call any stack-protected
>   * function, which is most of the kernel.
> - *
> - * Note, that it is refok - because the only caller of this after init
> - * is PVH which is not going to use xen_load_gdt_boot or other
> - * __init functions.
>   */
> -void __ref xen_setup_gdt(int cpu)
> +static void xen_setup_gdt(int cpu)
>  {
>  	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>  #ifdef CONFIG_X86_64
> @@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
>  	pv_cpu_ops.load_gdt = xen_load_gdt;
>  }
>  
> +/*
> + * A pv guest starts with default flags that are not set for pvh, set them
> + * here asap.
> + */
> +static void xen_pvh_set_cr_flags(int cpu)
> +{
> +	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);

You seem to be missing the X86_CR0_NE ?

The hypervisor sets "X86_CR0_PG" for PVH and uncondtionally for HVM
X86_CR0_TS, X86_CR0_PE, X86_CR0_ET so from the list of them that
'secondary_startup_64' sets:

208         /* Setup cr0 */
209 #define CR0_STATE       (X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
210                          X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
211                          X86_CR0_PG)
212         movl    $CR0_STATE, %eax


The one that is missing is NE. I fixed it up.

> +
> +	if (!cpu)
> +		return;

And what happens if don't have this check? Will be bad if do multiple
cr4 writes?

Fyi, this (cr4) should have been a seperate patch. I fixed it up that way.
> +	/*
> +	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
> +	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
> +	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
> +	 */
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
> +}

Seperate patch and since the PGE part is more complicated that just
setting the CR4 - you also have to tweak this:

1512         /* Prevent unwanted bits from being set in PTEs. */                     
1513         __supported_pte_mask &= ~_PAGE_GLOBAL;                                  

I think it should be done once we have actually confirmed that you can
do 2MB pages within the guest. (might need some more tweaking?)

> +
> +/*
> + * Note, that it is refok - because the only caller of this after init

Ah, you based this on a old commit. #linux-next is the one that has
the patches.

> + * is PVH which is not going to use xen_load_gdt_boot or other
> + * __init functions.
> + */
> +void __ref xen_pvh_secondary_vcpu_init(int cpu)
> +{
> +	xen_setup_gdt(cpu);
> +	xen_pvh_set_cr_flags(cpu);
> +}
> +
>  static void __init xen_pvh_early_guest_init(void)
>  {
>  	if (!xen_feature(XENFEAT_auto_translated_physmap))
>  		return;
>  
> -	if (xen_feature(XENFEAT_hvm_callback_vector))
> +	if (xen_feature(XENFEAT_hvm_callback_vector)) {
>  		xen_have_vector_callback = 1;
> +		xen_pvh_set_cr_flags(0);

Why? Why not make it unconditional? Or just change the
code to error out if !hvm_callback_vector?

Anyhow, I changed it in the patch.

> +	}
>  
>  #ifdef CONFIG_X86_32
>  	BUG(); /* PVH: Implement proper support. */
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 5e46190..a18eadd 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
>  #ifdef CONFIG_X86_64
>  	if (xen_feature(XENFEAT_auto_translated_physmap) &&
>  	    xen_feature(XENFEAT_supervisor_mode_kernel))
> -		xen_setup_gdt(cpu);
> +		xen_pvh_secondary_vcpu_init(cpu);
>  #endif
>  	cpu_bringup();
>  	cpu_startup_entry(CPUHP_ONLINE);
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 9059c24..1cb6f4c 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
>  
>  extern int xen_panic_handler_init(void);
>  
> -void xen_setup_gdt(int cpu);
> +void xen_pvh_secondary_vcpu_init(int cpu);
>  #endif /* XEN_OPS_H */
> -- 
> 1.7.2.3
> 


Here is what I am thinking we should have in the tree:

>From 0a1574661b47544d2e880d579d1954dcbb6aa2c1 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Mon, 20 Jan 2014 09:20:07 -0500
Subject: [PATCH] xen/pvh: Set X86_CR0_WP and others in CR0

Otherwise some user-space applications that use 'clone' with
CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID end up hitting an assert
in glibc, manifested by:

general protection ip:7f80720d364c sp:7fff98fd8a80 error:0 in
libc-2.13.so[7f807209e000+180000]

This is due to the nature of said operations, which set and clear the
child's TID.  "In the successful one I can see that the page table of
the parent process has been updated successfully to use a
different physical page, so the write of the tid on
that page only affects the child...

On the other hand, in the failed case, the write seems to happen before
the copy of the original page is done, so both the parent _and_ the child
end up with the same value (because the parent copies the page after
the write of the child tid has already happened)."
(Roger's analysis). The root cause is Xen commit
51e2cac257ec8b4080d89f0855c498cbbd76a5e5
("x86/pvh: set only minimal cr0 and cr4 flags in order to use paging"),
in which CR0_WP was removed, so the COW features of the Linux kernel
were not operating properly.

While doing that, also update the rest of the CR0 flags to be in line
with what a bare-metal Linux kernel would set them to.

In 'secondary_startup_64', bare-metal Linux sets:

X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP |
X86_CR0_AM | X86_CR0_PG

The hypervisor for HVM type guests (which PVH is a subset of) sets:
X86_CR0_PE | X86_CR0_ET | X86_CR0_TS
For PVH it specifically sets:
X86_CR0_PG

Which means we need to set the rest: X86_CR0_MP | X86_CR0_NE  |
X86_CR0_WP | X86_CR0_AM to have full parity.

Reported-and-Tested-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v1: Took out the cr4 writes to be a separate patch]
---
 arch/x86/xen/enlighten.c |   31 +++++++++++++++++++++++++++++--
 arch/x86/xen/smp.c       |    2 +-
 arch/x86/xen/xen-ops.h   |    2 +-
 3 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index b6d61c3..b320f2b 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1462,13 +1462,40 @@ void __ref xen_setup_gdt(int cpu)
 	pv_cpu_ops.load_gdt = xen_load_gdt;
 }
 
+/*
+ * A PV guest starts with default flags that are not set for PVH, set them
+ * here asap.
+ */
+static void xen_pvh_set_cr_flags(int cpu)
+{
+
+	/* Some of these are set up in 'secondary_startup_64'. The others:
+	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
+	 * (with which PVH shares codepaths), while X86_CR0_PG is set for PVH. */
+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+}
+
+/*
+ * Note that it is __ref - because the only caller of this after init
+ * is PVH which is not going to use xen_load_gdt_boot or other
+ * __init functions.
+ */
+void __ref xen_pvh_secondary_vcpu_init(int cpu)
+{
+	xen_setup_gdt(cpu);
+	xen_pvh_set_cr_flags(cpu);
+}
+
 static void __init xen_pvh_early_guest_init(void)
 {
 	if (!xen_feature(XENFEAT_auto_translated_physmap))
 		return;
 
-	if (xen_feature(XENFEAT_hvm_callback_vector))
-		xen_have_vector_callback = 1;
+	if (!xen_feature(XENFEAT_hvm_callback_vector))
+		return;
+
+	xen_have_vector_callback = 1;
+	xen_pvh_set_cr_flags(0);
 
 #ifdef CONFIG_X86_32
 	BUG(); /* PVH: Implement proper support. */
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 5e46190..a18eadd 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
 #ifdef CONFIG_X86_64
 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
 	    xen_feature(XENFEAT_supervisor_mode_kernel))
-		xen_setup_gdt(cpu);
+		xen_pvh_secondary_vcpu_init(cpu);
 #endif
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_ONLINE);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 9059c24..1cb6f4c 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
 
 extern int xen_panic_handler_init(void);
 
-void xen_setup_gdt(int cpu);
+void xen_pvh_secondary_vcpu_init(int cpu);
 #endif /* XEN_OPS_H */
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:15:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GYy-0007SY-4H; Mon, 20 Jan 2014 15:14:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1W5GYw-0007ST-OC
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:14:54 +0000
Received: from [85.158.139.211:33706] by server-4.bemta-5.messagelabs.com id
	B1/33-26791-D6D3DD25; Mon, 20 Jan 2014 15:14:53 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390230891!10638459!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25288 invoked from network); 20 Jan 2014 15:14:53 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 15:14:53 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s0KFEbCU019951
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Mon, 20 Jan 2014 10:14:37 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s0KFEb8F019949;
	Mon, 20 Jan 2014 10:14:37 -0500
Date: Mon, 20 Jan 2014 11:14:36 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Michael D Labriola <mlabriol@gdeb.com>
Message-ID: <20140120151436.GA19918@andromeda.dapyr.net>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
User-Agent: Mutt/1.5.9i
Cc: michael.d.labriola@gmail.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent crashes 
> with multiple older R600 series (HD 6470 and HD 6570) and unusably slow 
> graphics with a newer HD7000 (can see each line refresh individually on 
> radeonfb tty).  All 3 systems seem to work fine bare metal.

I hadn't been using DRM, just Xserver. Is that what you mean?
> 
> The R600 crashes happen seemingly randomly when using OpenGL Compositor in 
> Enlightenment 0.17.  My dom0 need not even have any domUs running. 
> Sometimes it happens within a few minutes, sometimes it will run OK for an 
> afternoon or so.  Eventually TTM issues an "unable to get page 0" error 
> message, the radeon driver follows that up with a 
> "radeon_gem_object_create failed to allocate gem" error message.  Then the 
> radeon driver starts spamming that gem failure message until I'm forced to 
> reboot.
> 
> Behavior is identical with kernel versions 3.8, 3.10, and 3.13-rc8.
> 
> I'm using Xen 4.2.1 32bit.
> 
> xen command line is:  vga=mode=0x314
> kernel command line is:  root=/dev/md0 quiet pci=realloc 
> console=ttyS0,115200n8 console=tty0
> 
> Fingers crossed that there's some magic boot parameter I'm missing.  It's 
> been a while since I dabbled with this stuff.  ;-)

That should have been working. I have been using Xserver for years now.
Just to make sure - you aren't referring to running with X, right? Just
the simple framebuffer?
> 
> Thanks!
> 
> ---
> Michael D Labriola
> Electric Boat
> mlabriol@gdeb.com
> 401-848-8871 (desk)
> 401-848-8513 (lab)
> 401-316-9844 (cell)
> 
> 
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:16:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GaQ-0007X4-JW; Mon, 20 Jan 2014 15:16:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5GaP-0007Wx-6I
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 15:16:25 +0000
Received: from [85.158.137.68:32131] by server-1.bemta-3.messagelabs.com id
	F5/77-29598-8CD3DD25; Mon, 20 Jan 2014 15:16:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390230983!10193696!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18492 invoked from network); 20 Jan 2014 15:16:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 15:16:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 15:16:22 +0000
Message-Id: <52DD4BD3020000780011518A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 15:16:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1538524.5AKIkpF9LB@amur>
	<52D7CC3E020000780011435C@nat28.tlf.novell.com>
	<52DD3523.1080402@citrix.com>
In-Reply-To: <52DD3523.1080402@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, David Vrabel <david.vrabel@citrix.com>,
	Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 15:39, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 16/01/14 11:10, Jan Beulich wrote:
>>>>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
>>> when creating a bigger (> 50 GB) HVM guest with maxmem > memory we get
>>> softlockups from time to time.
>>>
>>> kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
>>>
>>> I tracked this down to the call of xc_domain_set_pod_target() and further
>>> p2m_pod_set_mem_target().
>>>
>>> Unfortunately I can do this check only with xen-4.2.2 as I don't have a machine
>>> with enough memory for current hypervisors. But it seems the code is nearly
>>> the same.
>> While I still haven't seen a formal report of this against SLE11 yet,
>> attached a draft patch against the SP3 code base adding manual
>> preemption to the hypercall path of privcmd. This is only lightly
>> tested, and therefore has a little bit of debugging code still left in
>> there. Mind giving this a try (perhaps together with the patch
>> David had sent for the other issue - there may still be a need for
>> further preemption points in the IOCTL_PRIVCMD_MMAP*
>> handling, but without knowing for sure whether that matters to
>> you I didn't want to add this right away)?
>>
>> Jan
>>
> 
> With my 4.4-rc2 testing, these softlockups are becoming more of a
> problem, especially with construction/migration of 128GB guests.
> 
> I have been looking at doing a similar patch against mainline.
> 
> Having talked it through with David, it seems more sensible to have a
> second hypercall page, at which point in_hypercall() becomes
> in_preemptable_hypercall().
> 
> Any task (which could even be kernel tasks) could use the preemptable
> page, rather than the main hypercall page, and the asm code doesn't need
> to care whether the task was in privcmd.

Of course this can be generalized, but I don't think a second
hypercall page is the answer here: You'd then also need a
second set of hypercall wrappers (_hypercall0() etc as well as
HYPERVISOR_*()), and generic library routines would need to
have a way to know which one to call.

Therefore I think having a per-CPU state flag (which gets cleared/
restored during interrupt handling, or - like my patch does -
honored only when outside of atomic context) is still the more
reasonable approach.

> This would avoid having to maintain extra state to identify whether the
> hypercall was preemptable, and would avoid modification to
> evtchn_do_upcall().

I'd be curious to see how you avoid modifying evtchn_do_upcall()
(other than by adding what I added there at the assembly call
site) - I especially don't see where your in_preemptable_hypercall()
would get invoked.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:17:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:17:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GbY-0007cy-1r; Mon, 20 Jan 2014 15:17:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5GbW-0007cl-3W
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:17:34 +0000
Received: from [85.158.137.68:46347] by server-7.bemta-3.messagelabs.com id
	EF/16-27599-D0E3DD25; Mon, 20 Jan 2014 15:17:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390231051!10166475!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27468 invoked from network); 20 Jan 2014 15:17:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:17:32 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KFHSOX014992
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:17:28 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFHRdB015215
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:17:27 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0KFHQjM020135; Mon, 20 Jan 2014 15:17:26 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 07:17:26 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3D1D01C241D; Mon, 20 Jan 2014 10:17:25 -0500 (EST)
Date: Mon, 20 Jan 2014 10:17:25 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140120151725.GA5227@phenom.dumpdata.com>
References: <1389959474-7069-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389959474-7069-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: remove Linux sections
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 11:51:14AM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The LINUX (PV_OPS) section was outdated and it's better to only have
> this information in one place (the Linux MAINTAINERS file).
> 
> The LINUX (XCP) section was an external project that hasn't been
> maintained for years.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> Cc: Ian Campbell <ian.campbell@citrix.com>
> ---
>  MAINTAINERS |   10 ----------
>  1 files changed, 0 insertions(+), 10 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 4d9648f..7757cdd 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -209,16 +209,6 @@ F:      xen/include/{kexec,kimage}.h
>  F:      xen/arch/x86/machine_kexec.c
>  F:      xen/arch/x86/x86_64/kexec_reloc.S
>  
> -LINUX (PV_OPS)
> -M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> -S:	Supported
> -T:	git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> -
> -LINUX (XCP)
> -M:	Ian Campbell <ian.campbell@citrix.com>
> -S:	Supported
> -T:	hg http://xenbits.xen.org/XCP/linux-2.6.*.pq.hg
> -
>  MACHINE CHECK (MCA) & RAS
>  M:	Christoph Egger <chegger@amazon.de>
>  M:	Liu Jinsong <jinsong.liu@intel.com>
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MAINTAINERS: remove Linux sections
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 11:51:14AM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The LINUX (PV_OPS) section was outdated and it's better to only have
> this information in one place (the Linux MAINTAINERS file).
> 
> The LINUX (XCP) section was an external project that hasn't been
> maintained for years.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> Cc: Ian Campbell <ian.campbell@citrix.com>
> ---
>  MAINTAINERS |   10 ----------
>  1 files changed, 0 insertions(+), 10 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 4d9648f..7757cdd 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -209,16 +209,6 @@ F:      xen/include/{kexec,kimage}.h
>  F:      xen/arch/x86/machine_kexec.c
>  F:      xen/arch/x86/x86_64/kexec_reloc.S
>  
> -LINUX (PV_OPS)
> -M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> -S:	Supported
> -T:	git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> -
> -LINUX (XCP)
> -M:	Ian Campbell <ian.campbell@citrix.com>
> -S:	Supported
> -T:	hg http://xenbits.xen.org/XCP/linux-2.6.*.pq.hg
> -
>  MACHINE CHECK (MCA) & RAS
>  M:	Christoph Egger <chegger@amazon.de>
>  M:	Liu Jinsong <jinsong.liu@intel.com>
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:19:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Gdi-00086r-Kb; Mon, 20 Jan 2014 15:19:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1W5Gdg-00086Y-9u
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:19:48 +0000
Received: from [193.109.254.147:10509] by server-2.bemta-14.messagelabs.com id
	2E/A9-00361-39E3DD25; Mon, 20 Jan 2014 15:19:47 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390231182!12022441!1
X-Originating-IP: [209.85.214.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7653 invoked from network); 20 Jan 2014 15:19:46 -0000
Received: from mail-ob0-f181.google.com (HELO mail-ob0-f181.google.com)
	(209.85.214.181)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 15:19:46 -0000
Received: by mail-ob0-f181.google.com with SMTP id va2so3754362obc.12
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 07:19:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=BsVa+haCUutDGinYSULsgRL7V/Osbnipc5xMRePS1Y4=;
	b=W+WWc6UuMc5jpwU9YotBIz02aTNy3l5gDhJnBGluMwzc9ydWqSgQZXUZ2hqSVi0z9c
	wOejKOLJF/fO7Q4irl9Ak88vUBiQnXNRBlU6gCGvG8kvuTmzoTgRSPJGDt2JcB3xQ7xO
	CoWBZujTzc3oDI1EPmYyR91H/j3RopFucV/8xB4XfHlUtW2aKvWvoPqm3a3EJdW2Ta0y
	/MfvnPpU4Mxc6TERCeBo1gHS7kZV/umjAttK4huQZ6Kg9HWeDGMhR6L2voO2I06NE1vh
	hC0zCUy9LREjbyB6xRjsXv4Ask2uOV8qO1ur2iDHnecBgVjRujr7j6O5CCrBoWFmGNz1
	aR9Q==
MIME-Version: 1.0
X-Received: by 10.182.53.72 with SMTP id z8mr16237261obo.36.1390231182066;
	Mon, 20 Jan 2014 07:19:42 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Mon, 20 Jan 2014 07:19:41 -0800 (PST)
In-Reply-To: <0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
Date: Mon, 20 Jan 2014 07:19:41 -0800
Message-ID: <CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Gordan Bobic <gordan@bobich.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net> wrote:
> On 2014-01-20 13:24, Wu, Feng wrote:
>>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@lists.xen.org
>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>> Sent: Monday, January 20, 2014 8:50 PM
>>> To: xen-devel@lists.xen.org
>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
>>>
>>> On 2014-01-20 12:31, Shakeel Butt wrote:
>>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>>> > <stefano.stabellini@eu.citrix.com> wrote:
>>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>> >>> > -----Original Message-----
>>> >>> > From: xen-devel-bounces@lists.xen.org
>>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
>>> >>> > Sent: Monday, January 20, 2014 1:48 PM
>>> >>> > To: xen-devel@lists.xen.org
>>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>> >>> > upstream)
>>> >>> >
>>> >>> > Hi all,
>>> >>> >
>>> >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen
>>> >>> > as
>>> >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>>> >>> > traditional i.e. qemu-dm.
>>> >>>
>>> >>> As far as I know, only qemu-traditional supports vga pass-through
>>> >>> right now.
>>> >>
>>> >> Right.
>>> >> It is not possible to assign your primary VGA card to a VM with
>>> >> qemu-xen. You should be able to assign your secondary VGA card though.
>>> >
>>> > Let me understand this correctly. If I have two VGA cards then I can
>>> > passthrough secondary VGA card (in Dom0) to HVM as its primary VGA
>>> > card. Is this right and if yes how can I do it?
>>>
>>> Passing any VGA card as a primary-in-domU has always been problematic.
>>
>>
>> I think passing VGA card as a primary-in-domU works well in
>> Qemu-traditional, right?
>
>
> I never managed to get it working - it certainly isn't just a matter of
> enabling the option. There is at least the matter of also side-loading
> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
> out all ATI and Nvidia GPUs of the past 2-3 generations.
>
> Having said that - I never found a particularly good use-case for
> primary passthrough. Once the GPU driver loads it works just the
> same for all intents and purposes.
>

I have successfully managed to pass a VGA card through as the primary
card to a DomU with qemu-traditional. I am trying to do the same with
upstream qemu because I need some new features of upstream qemu which
are not available in qemu-traditional.

With upstream qemu I can pass the VGA card through as a secondary card
to the DomU, and I am able to see it in Device Manager in the DomU
(Windows 7), but Windows cannot use it and displays an error that
another card is being used as the display. I want Windows to use the
passed-through VGA card as its display.
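
For reference, the kind of guest configuration being discussed looks
roughly like the xl.cfg fragment below. This is only a sketch: the BDF
"01:00.0" and the domain name are placeholders, and, as noted earlier in
the thread, gfx_passthru (primary/VGA passthrough) is only honoured by
qemu-traditional, while plain pci assignment (secondary card) works with
upstream qemu:

```
builder = "hvm"
name = "win7-gfx"
memory = 4096
# Secondary passthrough: assign the PCI device to the guest.
pci = [ "01:00.0" ]
# Primary/VGA passthrough additionally needs the traditional device model:
device_model_version = "qemu-xen-traditional"
gfx_passthru = 1
```

The device must first be made assignable in dom0, e.g. with
`xl pci-assignable-add 01:00.0`.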

Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GeK-0008Ck-4I; Mon, 20 Jan 2014 15:20:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5GeJ-0008BZ-8J
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 15:20:27 +0000
Received: from [85.158.139.211:49227] by server-14.bemta-5.messagelabs.com id
	56/BD-24200-ABE3DD25; Mon, 20 Jan 2014 15:20:26 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390231224!10846717!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21217 invoked from network); 20 Jan 2014 15:20:25 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:20:25 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KFJHAx011012
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:19:17 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFJGpf025670
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:19:16 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFJG7X018778; Mon, 20 Jan 2014 15:19:16 GMT
MIME-Version: 1.0
Message-ID: <f5e79523-d3b5-4df1-a3b0-f3580715f084@default>
Date: Mon, 20 Jan 2014 07:19:15 -0800 (PST)
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: <Ian.Campbell@citrix.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org,
	JBeulich@suse.com
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


----- Ian.Campbell@citrix.com wrote:

> > > 
> > > I previously had a patch to use memblock_phys_mem_size(), but when
> > > I saw Boris switch to get_num_physpages() I thought that would be
> > > OK, but I didn't look into it very hard. Without checking I suspect
> > > they return pretty much the same thing and so memblock_phys_mem_size
> > > will have the same issue you observed (which I confess I haven't yet
> > > gone back and understood).
> > 
> > If there's no reserved memory in that range, I guess ARM might
> > be fine as is.
> 
> Hrm I'm not sure what a DTB /memreserve/ statement turns into wrt the
> memblock stuff and whether it is included in phys_mem_size -- I think
> that stuff is still WIP upstream (i.e. the kernel currently ignores
> such things, IIRC there was a fix but it was broken and reverted to
> try again and then I've lost track).


FWIW, when I was looking at this code I also first thought of using
memblock but IIRC I wasn't convinced that we always have
CONFIG_HAVE_MEMBLOCK set so I ended up with physmem_size().

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:23:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GhO-0008TT-Qq; Mon, 20 Jan 2014 15:23:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5GhN-0008TI-C3
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 15:23:37 +0000
Received: from [85.158.143.35:16286] by server-1.bemta-4.messagelabs.com id
	36/6E-02132-87F3DD25; Mon, 20 Jan 2014 15:23:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390231414!12809449!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3591 invoked from network); 20 Jan 2014 15:23:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 15:23:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94521149"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 15:23:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 10:23:33 -0500
Message-ID: <1390231412.20516.66.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date: Mon, 20 Jan 2014 15:23:32 +0000
In-Reply-To: <f5e79523-d3b5-4df1-a3b0-f3580715f084@default>
References: <f5e79523-d3b5-4df1-a3b0-f3580715f084@default>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org,
	JBeulich@suse.com
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 07:19 -0800, Boris Ostrovsky wrote:
> ----- Ian.Campbell@citrix.com wrote:
> 
> > > > 
> > > > I previously had a patch to use memblock_phys_mem_size(), but
> > > > when I saw Boris switch to get_num_physpages() I thought that
> > > > would be OK, but I didn't look into it very hard. Without
> > > > checking I suspect they return pretty much the same thing and so
> > > > memblock_phys_mem_size will have the same issue you observed
> > > > (which I confess I haven't yet gone back and understood).
> > > 
> > > If there's no reserved memory in that range, I guess ARM might
> > > be fine as is.
> > 
> > Hrm I'm not sure what a DTB /memreserve/ statement turns into wrt the
> > memblock stuff and whether it is included in phys_mem_size -- I think
> > that stuff is still WIP upstream (i.e. the kernel currently ignores
> > such things, IIRC there was a fix but it was broken and reverted to
> > try again and then I've lost track).
> 
> 
> FWIW, when I was looking at this code I also first thought of using
> memblock but IIRC I wasn't convinced that we always have
> CONFIG_HAVE_MEMBLOCK set so I ended up with physmem_size().

FWIW it's select'd by CONFIG_X86 and CONFIG_ARM and overall by about
half of all architectures. So I suppose it depends where your threshold
for caring about future architecture ports lies...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 07:19 -0800, Boris Ostrovsky wrote:
> ----- Ian.Campbell@citrix.com wrote:
> 
> > > > 
> > > > I previously had a patch to use memblock_phys_mem_size(), but
> > > > when I saw Boris switch to get_num_physpages() I thought that
> > > > would be OK, but I didn't look into it very hard. Without
> > > > checking I suspect they return pretty much the same thing, and so
> > > > memblock_phys_mem_size will have the same issue you observed
> > > > (which I confess I haven't yet gone back and understood).
> > > 
> > > If there's no reserved memory in that range, I guess ARM might
> > > be fine as is.
> > 
> > Hrm, I'm not sure what a DTB /memreserve/ statement turns into wrt
> > the memblock stuff and whether it is included in phys_mem_size -- I
> > think that stuff is still WIP upstream (i.e. the kernel currently
> > ignores such things; IIRC there was a fix, but it was broken and
> > reverted to try again, and then I've lost track).
> 
> 
> FWIW, when I was looking at this code I also first thought of using
> memblock but IIRC I wasn't convinced that we always have
> CONFIG_HAVE_MEMBLOCK set so I ended up with physmem_size().

FWIW it's select'd by CONFIG_X86 and CONFIG_ARM and overall by about
half of all architectures. So I suppose it depends where your threshold
for caring about future architecture ports lies...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:26:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Gk8-0000Ad-O8; Mon, 20 Jan 2014 15:26:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=009791a7e9=mlabriol@gdeb.com>)
	id 1W5Gk7-0000AW-P3
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:26:27 +0000
Received: from [85.158.139.211:28475] by server-3.bemta-5.messagelabs.com id
	2F/CA-04773-2204DD25; Mon, 20 Jan 2014 15:26:26 +0000
X-Env-Sender: prvs=009791a7e9=mlabriol@gdeb.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390231585!10842453!1
X-Originating-IP: [153.11.250.40]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MCA9PiA2Mzk3OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21948 invoked from network); 20 Jan 2014 15:26:26 -0000
Received: from mx1.gd-ms.com (HELO mx1.gd-ms.com) (153.11.250.40)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 15:26:26 -0000
Received: from ebsmtp.gdeb.com ([153.11.13.41])
	by mx1.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1W5Gk4-00078o-Gl; Mon, 20 Jan 2014 10:26:24 -0500
In-Reply-To: <20140120151436.GA19918@andromeda.dapyr.net>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
MIME-Version: 1.0
X-KeepSent: 16949C1D:7EB315DE-85257C66:00540CD1;
 type=4; name=$KeepSent
Message-ID: <OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Mon, 20 Jan 2014 10:26:22 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx1.gd-ms.com  1W5Gk4-00078o-Gl
X-GDM-EVAL: score: /30; hits: 
Cc: michael.d.labriola@gmail.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 10:14:36 AM:

> From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> To: Michael D Labriola <mlabriol@gdeb.com>, 
> Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> Date: 01/20/2014 10:14 AM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> 
> On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent
> > crashes with multiple older R600 series (HD 6470 and HD 6570) and
> > unusably slow graphics with a newer HD7000 (can see each line
> > refresh individually on radeonfb tty).  All 3 systems seem to work
> > fine bare metal.
> 
> I hadn't been using DRM, just Xserver. Is that what you mean?

The R600 problems happen when in X, using OpenGL, on my dom0.  The 
RadeonSI sluggishness is when using the KMS framebuffer device for a plain 
text console login.


> > 
> > The R600 crashes happen seemingly randomly when using OpenGL
> > Compositor in Enlightenment 0.17.  My dom0 need not even have any
> > domUs running.  Sometimes it happens within a few minutes, sometimes
> > it will run OK for an afternoon or so.  Eventually TTM issues an
> > "unable to get page 0" error message, the radeon driver follows that
> > up with a "radeon_gem_object_create failed to allocate gem" error
> > message.  Then the radeon driver starts spamming that gem failure
> > message until I'm forced to reboot.
> > 
> > Behavior is identical with kernel versions 3.8, 3.10, and 3.13-rc8.
> > 
> > I'm using Xen 4.2.1 32bit.
> > 
> > xen command line is:  vga=mode=0x314
> > kernel command line is:  root=/dev/md0 quiet pci=realloc 
> > console=ttyS0,115200n8 console=tty0
> > 
> > Fingers crossed that there's some magic boot parameter I'm missing.
> > It's been a while since I dabbled with this stuff.  ;-)
> 
> That should have been working. I had been using Xserver for years now.
> Just to make sure - you aren't referring to running with X right? Just
> simple framebuffer?

I'm not sure I understand the delineation between Xserver and X...  I am 
indeed in X, using xf86-video-ati and Mesa for OpenGL support.  I've been 
doing this for years as well, but with nouveau and w/out 3d support.  Was 
kinda hoping that the Radeon cards I got a hold of would allow for 
hardware accelerated 3d on my dom0.

I just tried adding 'nopat' to the kernel command line.  I remember doing 
that a year ago... don't recall why.  Any chance that helps?
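In case it helps anyone reproduce, the Xen and kernel command lines quoted above would combine with 'nopat' roughly as below. This is a sketch only: the file names and paths are placeholders, and whether nopat actually changes the Radeon behaviour is exactly what's in question here.

```text
# hypothetical grub.cfg entry; kernel/initrd paths are placeholders
multiboot /boot/xen.gz vga=mode=0x314
module /boot/vmlinuz root=/dev/md0 quiet pci=realloc nopat \
        console=ttyS0,115200n8 console=tty0
module /boot/initrd.img
```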

---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:29:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5GnE-0000Tp-D7; Mon, 20 Jan 2014 15:29:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5GnC-0000Tf-L0
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:29:38 +0000
Received: from [85.158.139.211:13932] by server-2.bemta-5.messagelabs.com id
	C5/CD-29392-1E04DD25; Mon, 20 Jan 2014 15:29:37 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390231776!10785983!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4949 invoked from network); 20 Jan 2014 15:29:36 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 15:29:36 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id A12DC221BEA;
	Mon, 20 Jan 2014 15:29:35 +0000 (GMT)
MIME-Version: 1.0
Date: Mon, 20 Jan 2014 15:29:35 +0000
From: Gordan Bobic <gordan@bobich.net>
To: Shakeel Butt <shakeel.butt@gmail.com>
In-Reply-To: <CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
Message-ID: <b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-20 15:19, Shakeel Butt wrote:
> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net> 
> wrote:
>> On 2014-01-20 13:24, Wu, Feng wrote:
>>>> 
>>>> -----Original Message-----
>>>> From: xen-devel-bounces@lists.xen.org
>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>>> Sent: Monday, January 20, 2014 8:50 PM
>>>> To: xen-devel@lists.xen.org
>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu 
>>>> upstream)
>>>> 
>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
>>>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>>>> > <stefano.stabellini@eu.citrix.com> wrote:
>>>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>>> >>> > -----Original Message-----
>>>> >>> > From: xen-devel-bounces@lists.xen.org
>>>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel Butt
>>>> >>> > Sent: Monday, January 20, 2014 1:48 PM
>>>> >>> > To: xen-devel@lists.xen.org
>>>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>> >>> > upstream)
>>>> >>> >
>>>> >>> > Hi all,
>>>> >>> >
>>>> >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen
>>>> >>> > as
>>>> >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>>>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>>>> >>> > traditional i.e. qemu-dm.
>>>> >>>
>>>> >>> As far as I know, only qemu-traditional supports vga pass-through
>>>> >>> right now.
>>>> >>
>>>> >> Right.
>>>> >> It is not possible to assign your primary VGA card to a VM with
>>>> >> qemu-xen. You should be able to assign your secondary VGA card though.
>>>> >
>>>> > Let me understand this correctly. If I have two VGA cards then I can
>>>> > passthrough
>>>> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>>>> > right and
>>>> > if yes how can I do it?
>>>> 
>>>> Passing any VGA card as a primary-in-domU has always been 
>>>> problematic.
>>> 
>>> 
>>> I think passing VGA card as a primary-in-domU works well in
>>> Qemu-traditional, right?
>> 
>> 
>> I never managed to get it working - it certainly isn't just a matter 
>> of
>> enabling the option. There is at least the matter of also side-loading
>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
>> out all ATI and Nvidia GPUs of the past 2-3 generations.
>> 
>> Having said that - I never found a particularly good use-case for
>> primary passthrough. Once the GPU driver loads it works just the
>> same for all intents and purposes.
>> 
> 
> I have successfully managed to pass through a VGA card as primary to
> DomU with qemu traditional. I am trying to do the same with upstream
> qemu because I need some new features of the upstream qemu which are
> not available in qemu traditional.
> 
> With qemu upstream I can pass through a secondary VGA card to DomU
> and can see it in Device Manager in DomU (Windows 7), but Windows
> couldn't use it and displays an error that another card is being used
> as the display. I want Windows to use the passed-through VGA card as
> its display.

Disable the other (emulated) card in device manager and reboot
the domU. That should fix it.
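For reference, a minimal sketch of the xl guest config this thread implies. The PCI BDF is a placeholder and option spellings varied across xl versions, so treat this as an outline rather than a known-good config:

```text
# hypothetical domU config: GPU assigned as a secondary card with qemu-xen
builder = "hvm"
device_model_version = "qemu-xen"
stdvga = 1                 # emulated primary card; disable it in the guest
pci = [ "01:00.0" ]        # placeholder BDF of the passed-through GPU
# gfx_passthru = 1         # primary passthrough: qemu-traditional only
```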

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:32:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Gq1-0000zg-IQ; Mon, 20 Jan 2014 15:32:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5Gq0-0000z8-F8
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 15:32:32 +0000
Received: from [193.109.254.147:32418] by server-10.bemta-14.messagelabs.com
	id CF/23-20752-F814DD25; Mon, 20 Jan 2014 15:32:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390231949!12017662!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25256 invoked from network); 20 Jan 2014 15:32:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:32:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KFVP8U023911
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:31:26 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFVNtV021964
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:31:24 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFVNn1013446; Mon, 20 Jan 2014 15:31:23 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 07:31:23 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 52BB21BF6CC; Mon, 20 Jan 2014 10:31:22 -0500 (EST)
Date: Mon, 20 Jan 2014 10:31:22 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140120153122.GA24863@phenom.dumpdata.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<52D956A0.5040208@oracle.com>
	<52D9672F0200007800114A9F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D9672F0200007800114A9F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >>>> Question now is: Considering that (a) is broken (and hard to fix)
> >>>> and (b) is in presumably a large part of practical cases leading to
> >>>> too much ballooning down, shouldn't we open up
> >>>> XENMEM_get_pod_target for domains to query on themselves?
> >>>> Alternatively, can anyone see another way to calculate a
> >>>> reasonably precise value?
> >>> I think hypervisor query is a good thing although I don't know whether
> >>> exposing PoD-specific data (count and entry_count) to the guest is
> >>> necessary. It's probably OK (or we can set these fields to zero for
> >>> non-privileged domains).
> >> That's pointless then - if no useful data is provided through the
> >> call to non-privileged domains, we can as well keep it erroring for
> >> them.
> >>
> > 
> > I thought you are after d->tot_pages, no?
> 
> That can be obtained through another XENMEM_ operation. No,
> what is needed is the difference between PoD entries and PoD
> cache (which then needs to be added to tot_pages).

Won't that be racy? Meaning the moment you get that information and
kick off the balloon worker, said value might be different already?

> 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:38:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Gw6-0001di-HG; Mon, 20 Jan 2014 15:38:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5Gw5-0001da-Ho
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 15:38:49 +0000
Received: from [85.158.143.35:39103] by server-2.bemta-4.messagelabs.com id
	2A/07-11386-8034DD25; Mon, 20 Jan 2014 15:38:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390232326!12853603!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19912 invoked from network); 20 Jan 2014 15:38:48 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 15:38:48 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0KFcVYM005609
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 20 Jan 2014 15:38:31 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFcTFU029130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 20 Jan 2014 15:38:29 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0KFcTRS028010; Mon, 20 Jan 2014 15:38:29 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 20 Jan 2014 07:38:28 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D1DA31BF6CC; Mon, 20 Jan 2014 10:38:27 -0500 (EST)
Date: Mon, 20 Jan 2014 10:38:27 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael D Labriola <mlabriol@gdeb.com>
Message-ID: <20140120153827.GA24989@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 10:14:36 AM:
> 
> > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > Date: 01/20/2014 10:14 AM
> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > 
> > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent
> > > crashes with multiple older R600 series (HD 6470 and HD 6570) and
> > > unusably slow graphics with a newer HD7000 (can see each line
> > > refresh individually on radeonfb tty).  All 3 systems seem to work
> > > fine bare metal.
> > 
> > I hadn't been using DRM, just Xserver. Is that what you mean?
> 
> The R600 problems happen when in X, using OpenGL, on my dom0.  The 
> RadeonSI sluggishness is when using the KMS framebuffer device for a plain 
> text console login.

So the sluggishness is probably due to PAT not being enabled. This
patch should be applied:

lkml.org/lkml/2011/11/8/406

(or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)

and these two reverted:

 "xen/pat: Disable PAT support for now."
 "xen/pat: Disable PAT using pat_enabled value."

Which is to say do:

git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1

> 
> 
> > > 
> > > The R600 crashes happen seemingly randomly when using OpenGL
> > > Compositor in Enlightenment 0.17.  My dom0 need not even have any
> > > domUs running.  Sometimes it happens within a few minutes, sometimes
> > > it will run OK for an afternoon or so.  Eventually TTM issues an
> > > "unable to get page 0" error message, the radeon driver follows that
> > > up with a "radeon_gem_object_create failed to allocate gem" error
> > > message.  Then the radeon driver starts spamming that gem failure
> > > message until I'm forced to reboot.
> > > 
> > > Behavior is identical with kernel versions 3.8, 3.10, and 3.13-rc8.
> > > 
> > > I'm using Xen 4.2.1 32bit.
> > > 
> > > xen command line is:  vga=mode=0x314
> > > kernel command line is:  root=/dev/md0 quiet pci=realloc 
> > > console=ttyS0,115200n8 console=tty0
> > > 
> > > Fingers crossed that there's some magic boot parameter I'm missing.
> > > It's been a while since I dabbled with this stuff.  ;-)
> > 
> > That should have been working. I had been using Xserver for years now.
> > Just to make sure - you aren't referring to running with X right? Just
> > simple framebuffer?
> 
> I'm not sure I understand the delineation between Xserver and X...  I am 
> indeed in X, using xf86-video-ati and Mesa for OpenGL support.  I've been 
> doing this for years as well, but with nouveau and w/out 3d support.  Was 
> kinda hoping that the Radeon cards I got a hold of would allow for 
> hardware accelerated 3d on my dom0.

You should be able to use 3D as well - with those said patches.

> 
> I just tried adding 'nopat' to the kernel command line.  I remember doing 
> that a year ago... don't recall why.  Any chance that helps?

Heh. So you want the inverse - PAT enabled, plus the patch that fixes
the havoc PAT was wreaking.

> 
> ---
> Michael D Labriola
> Electric Boat
> mlabriol@gdeb.com
> 401-848-8871 (desk)
> 401-848-8513 (lab)
> 401-316-9844 (cell)
> 
> 
> 
>  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 15:55:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 15:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5HBU-0002eB-HO; Mon, 20 Jan 2014 15:54:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5HBT-0002e0-A3
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 15:54:43 +0000
Received: from [85.158.143.35:55402] by server-3.bemta-4.messagelabs.com id
	23/0B-32360-2C64DD25; Mon, 20 Jan 2014 15:54:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390233281!5709735!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6051 invoked from network); 20 Jan 2014 15:54:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 15:54:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 20 Jan 2014 15:54:41 +0000
Message-Id: <52DD54CF02000078001151D4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 20 Jan 2014 15:54:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52D94D3A0200007800114957@nat28.tlf.novell.com>
	<52D95233.1090003@oracle.com>
	<52D962460200007800114A60@nat28.tlf.novell.com>
	<52D956A0.5040208@oracle.com>
	<52D9672F0200007800114A9F@nat28.tlf.novell.com>
	<20140120153122.GA24863@phenom.dumpdata.com>
In-Reply-To: <20140120153122.GA24863@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] initial ballooning amount on HVM+PoD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 16:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> >>>> Question now is: Considering that (a) is broken (and hard to fix)
>> >>>> and (b) is in presumably a large part of practical cases leading to
>> >>>> too much ballooning down, shouldn't we open up
>> >>>> XENMEM_get_pod_target for domains to query on themselves?
>> >>>> Alternatively, can anyone see another way to calculate a
>> >>>> reasonably precise value?
>> >>> I think hypervisor query is a good thing although I don't know whether
>> >>> exposing PoD-specific data (count and entry_count) to the guest is
>> >>> necessary. It's probably OK (or we can set these fields to zero for
>> >>> non-privileged domains).
>> >> That's pointless then - if no useful data is provided through the
>> >> call to non-privileged domains, we can as well keep it erroring for
>> >> them.
>> >>
>> > 
>> > I thought you are after d->tot_pages, no?
>> 
>> That can be obtained through another XENMEM_ operation. No,
>> what is needed is the difference between PoD entries and PoD
>> cache (which then needs to be added to tot_pages).
> 
> Won't that be racy? Meaning the moment you get that information and
> kick off the balloon worker, said value might be different already?

There's a small risk for that, yes (albeit said difference ought to be
stable, and I don't immediately see how tot_pages would change
underneath the balloon driver initializing), but I am still awaiting
alternative suggestions...
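For illustration, the arithmetic described above - tot_pages plus the
difference between PoD entries and the PoD cache - can be sketched as
below. The struct loosely mirrors the fields of xen_pod_target in Xen's
public memory interface, but the XENMEM_get_pod_target hypercall
plumbing is omitted and all sample numbers are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Loosely mirrors struct xen_pod_target (xen/include/public/memory.h).
 * A real balloon driver would fill this in by issuing
 * XENMEM_get_pod_target on itself; here it is just a plain struct. */
struct pod_target {
    uint64_t target_pages;    /* pod_entries + pod_cache_pages            */
    uint64_t tot_pages;       /* pages currently allocated to the domain  */
    uint64_t pod_cache_pages; /* pages held back in the PoD cache         */
    uint64_t pod_entries;     /* outstanding (not-yet-backed) PoD entries */
};

/* Pages the guest may legitimately keep: tot_pages plus the difference
 * between PoD entries and the PoD cache.  Everything above that, up to
 * the domain's maximum, is the initial amount to balloon out. */
static uint64_t initial_balloon_pages(uint64_t max_pages,
                                      const struct pod_target *t)
{
    uint64_t keep = t->tot_pages + (t->pod_entries - t->pod_cache_pages);
    return max_pages > keep ? max_pages - keep : 0;
}
```

The race discussed above is that entry_count and the cache can shift
between the query and the balloon worker acting on the result, so the
value is a starting estimate rather than an exact target.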

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:06:12 2014
Date: Mon, 20 Jan 2014 18:05:55 +0200
Message-ID: <CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
In-Reply-To: <1390230336.23576.24.camel@Solace>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls

Hi Dario!

> x86 or ARM host?

ARM. ARMv7, TI Jacinto6 to be precise.

> Also, how many pCPUs and vCPUs do the host and the various guests have?

2 pCPUs, 4 vCPUs: 2 vCPU per domain.

> # xl list -n
> # xl vcpu-list

# xl list -n
Name                          ID   Mem VCPUs      State   Time(s) NODE Affinity
Domain-0                       0   118     2     r-----      20.5 any node
android_4.3                    1  1024     2     -b----     383.5 any node

# xl vcpu-list
Name                          ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                       0     0    0   r--       12.6  any cpu
Domain-0                       0     1    1   b          7.3  any cpu
android_4.3                    1     0    0   b        180.8  any cpu
android_4.3                    1     1    0   b        184.4  any cpu

> Are you using any vCPU-to-pCPU pinning?

No.

> What about giving a try to it yourself? I think standardizing on one (a
> set of) specific tool could be a good thing.

Yep, we'll try it.

> Mmm... Can you show us at least part of the numbers? From my experience,
> it's really easy to mess up with terminology in this domain (delay,
> latency, jitter, etc.).

A test with a 30 ms sleep period gives the following results (I hope the
formatting won't break):

                 Measurements   Average            Number of times with t > 32 ms
Credit   Dom0            358    32.5642458100559                                72
         DomU            358    40.7625698324022                               358
sEDF     Dom0            358    33.6284916201117                               120
         DomU            358    40.6983240223464                               358
We did additional measurements, and as you can see, my first impression was
not quite correct: a difference between dom0 and domU does exist and is quite
observable on a larger scale. On the same hardware running bare metal, without
Xen, the number of times t > 32 ms is close to 0; on the setup with Xen but
without the domU system running, the number of times t > 32 ms is close to 0
as well. We will make additional measurements with Linux (not Android) as the
domU guest, though.

Raw data (measured sleep times, in ms) looks like this:

       Credit          sedf
     Dom0  DomU      Dom0  DomU
       38    46        38    39
       31    46        39    39
       31    46        39    39
       39    46        39    46
       39    46        39    46
       31    46        31    39
       31    46        31    39
       31    46        31    46
       31    46        31    46
       31    46        31    39
       31    46        31    39
       31    46        39    39
       31    46        39    39
       31    39        31    39
       31    39        31    39
So as you can see, there are peak values both in dom0 and domU.

> AFAIUI, you're saying that you're asking for a sleep time of X, and
> you're being waking up in the interval [x+5ms, x+15ms], is that the
> case?

Yes, that's correct. In the numbers above the sleep period is 30 ms, so we
were expecting a 30-31 ms sleep time, as we see on the same setup without Xen.

> # xl sched-sedf

# xl sched-sedf
Cpupool Pool-0:
Name                                ID Period Slice  Latency Extra Weight
Domain-0                             0    100      0       0     1      0
android_4.3                          1    100      0       0     1      0

> Or you have some other load in the system while
> performing these measurements? If the latter, what and where?

During this test both guests were almost idle.

> Oh, and now that I think about it, something that present in credit and
> not in sEDF that might be worth checking is the scheduling rate limiting
> thing.

We'll check it out, thanks!

Regards, Pavlo

 period is 30 ms, so we were expecting 30-31 ms sleep time as it is on the =
same setup without Xen.<br><br>&gt; # xl sched-sedf<br><br><span dir=3D"ltr=
" id=3D":7q6"> # xl sched-sedf<br>
Cpupool Pool-0:<br>Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0ID Period Slice =A0Latency Extra Weight<br>Domain-0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 0 =A0 =A0100 =A0 =A0 =A00 =A0 =
=A0 =A0 0 =A0 =A0 1 =A0 =A0 =A00<br>android_4.3 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A01 =A0 =A0100 =A0 =A0 =A00 =A0 =A0 =A0 0 =A0 =A0 1 =
=A0 =A0 =A00</span><br>
<br>&gt; Or you have some other load in the system while<br>&gt; performing=
 these measurements? If the latter, what and where?<br><br></div>During thi=
s test both guests were almost idle.<br><br>&gt; Oh, and now that I think a=
bout it, something that present in credit and<br>
&gt; not in sEDF that might be worth checking is the scheduling rate limiti=
ng<br>&gt; thing.<br><br></div>We&#39;ll check it out, thanks!<br><br></div=
>Regards, Pavlo<br></div>

--047d7b2e4c0218debf04f0691191--


--===============1149827963086913779==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1149827963086913779==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 16:06:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5HMQ-00042J-RU; Mon, 20 Jan 2014 16:06:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5HMO-00042E-Vc
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 16:06:01 +0000
Received: from [193.109.254.147:14367] by server-1.bemta-14.messagelabs.com id
	09/18-15600-8694DD25; Mon, 20 Jan 2014 16:06:00 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390233956!10516881!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,DIET_1,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23429 invoked from network); 20 Jan 2014 16:05:58 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 16:05:58 -0000
Received: from mail-oa0-f41.google.com ([209.85.219.41]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt1JZFNrtqKOdD9Q+0K96Pi4R1mYPkbL@postini.com;
	Mon, 20 Jan 2014 08:05:58 PST
Received: by mail-oa0-f41.google.com with SMTP id j17so1871530oag.0
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 08:05:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ILfA2LbB4+psBT0rdTvlgjru79wVAfappgIN7hysGAU=;
	b=YUY05o4/T23LenyUJSug+x24L3PhdDGQcb2IxYhAB67/Q1sFUsYlC+E7kJ0ZVZdvsk
	5fWop7+ncVEsF50LpZEAQWMMFr2/SMYMwsi+B0iR+WQr9x4U6LK8A/7lb2U5f1ZKBiJ1
	qi7MWZLfHnq8a6MQr4/MAgXx2KiIqBxL+lc18=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ILfA2LbB4+psBT0rdTvlgjru79wVAfappgIN7hysGAU=;
	b=OSg4NUsSaA8b1C6zfqayuXiqrMKa1+tuMwcoHjWs5N8ki5fEHByHYj5DpjASZHwhsc
	90VlRBaUAawuCiGBLzbInVsFKpbp1wPe1V4OEqwbkeDBguyvNUMfytqeoXDSmlHGV8VB
	ulyiRZdI+dKio+qRLj1J4Bo1wNW4yl/0XOPwmBAqunDDxNOFM4/eRCvSKnuD1//w1PkG
	NJRxU2E26BYgCA7L4axBzwQeDlzDJ/NO8uE3nRkuB3reiLSxEKUWJZh/jWxZufj+4MiU
	MrE2CRP/B3JKKwA5BSRA5CjPELGDrSTx5JgD4xxd2b+GViMhRYfVIKQuiYgMP5LTTx+A
	zm/Q==
X-Gm-Message-State: ALoCoQkY+Ot6hmdk8EFweppX9jD0JRQIHtHsKYT1AAnEK0YUk2zxqPNPIBVk8yNqyHRKN1s4mcOWRcZmtdjS9dBaM3AutTu9+O+6rubqfY+Ac0wg4x+I6/kzBy0RSLoYxLCo/JQcY0P9EBZYJvR6bi9ttolcB8TxIAqGRVvSzbt+apipc82deoI=
X-Received: by 10.182.97.67 with SMTP id dy3mr928042obb.84.1390233956058;
	Mon, 20 Jan 2014 08:05:56 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.97.67 with SMTP id dy3mr928030obb.84.1390233955909; Mon,
	20 Jan 2014 08:05:55 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 08:05:55 -0800 (PST)
In-Reply-To: <1390230336.23576.24.camel@Solace>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
Date: Mon, 20 Jan 2014 18:05:55 +0200
Message-ID: <CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1149827963086913779=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1149827963086913779==
Content-Type: multipart/alternative; boundary=047d7b2e4c0218debf04f0691191

--047d7b2e4c0218debf04f0691191
Content-Type: text/plain; charset=ISO-8859-1

Hi Dario!

> x86 or ARM host?

ARM. ARMv7, TI Jacinto6 to be precise.

> Also, how many pCPUs and vCPUs do the host and the various guests have?

2 pCPUs, 4 vCPUs: 2 vCPUs per domain.

> # xl list -n
> # xl vcpu-list

# xl list -n
Name                                        ID   Mem VCPUs      State   Time(s) NODE Affinity
Domain-0                                     0   118     2     r-----      20.5 any node
android_4.3                                  1  1024     2     -b----     383.5 any node

# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   r--       12.6  any cpu
Domain-0                             0     1    1   b           7.3  any cpu
android_4.3                          1     0    0   b         180.8  any cpu
android_4.3                          1     1    0   b         184.4  any cpu

> Are you using any vCPU-to-pCPU pinning?

No.

> What about giving a try to it yourself? I think standardizing on one (a
> set of) specific tool could be a good thing.

Yep, we'll try it.

> Mmm... Can you show us at least part of the numbers? From my experience,
> it's really easy to mess up with terminology in this domain (delay,
> latency, jitter, etc.).

A test with a 30 ms sleep period gives the following results (I hope the
formatting won't break):



                Measurements   Average            Number of times with t > 32 ms
Credit   Dom0   358            32.5642458100559    72
         DomU   358            40.7625698324022   358

sEDF     Dom0   358            33.6284916201117   120
         DomU   358            40.6983240223464   358
We did additional measurements, and as you can see, my first impression was
not quite correct: a difference between dom0 and domU does exist and is
quite observable at a larger scale. On the same setup running bare metal
without Xen, the number of times t > 32 ms is close to 0; on the setup with
Xen but without the domU running, it is close to 0 as well. We will make
additional measurements with Linux (not Android) as the domU guest, though.

Raw data looks like this:
         Credit            sedf
      Dom0    DomU      Dom0    DomU
        38      46        38      39
        31      46        39      39
        31      46        39      39
        39      46        39      46
        39      46        39      46
        31      46        31      39
        31      46        31      39
        31      46        31      46
        31      46        31      46
        31      46        31      39
        31      46        31      39
        31      46        39      39
        31      46        39      39
        31      39        31      39
        31      39        31      39
So as you can see, there are peak values both in dom0 and domU.

> AFAIUI, you're saying that you're asking for a sleep time of X, and
> you're being waking up in the interval [x+5ms, x+15ms], is that the
> case?

Yes, that's correct. In the numbers above the sleep period is 30 ms, so we
were expecting a 30-31 ms sleep time, as on the same setup without Xen.

> # xl sched-sedf

# xl sched-sedf
Cpupool Pool-0:
Name                                ID Period Slice  Latency Extra Weight
Domain-0                             0    100      0       0     1      0
android_4.3                          1    100      0       0     1      0

> Or you have some other load in the system while
> performing these measurements? If the latter, what and where?

During this test both guests were almost idle.

> Oh, and now that I think about it, something that present in credit and
> not in sEDF that might be worth checking is the scheduling rate limiting
> thing.

We'll check it out, thanks!
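For the record, if the toolstack in this build supports the credit
scheduler's ratelimit_us parameter, it should be visible and tunable with
something like (an assumption on our side, we haven't tried it yet):

```
# Show the current credit scheduler parameters; recent xl versions
# print the global rate limit (in microseconds) alongside the weights.
xl sched-credit

# Lower the rate limit to see whether it affects the wakeup latencies.
xl sched-credit -s -r 100
```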

Regards, Pavlo

;border-bottom:1px solid rgb(204,204,204)">46</td><td style=3D"padding:0px =
3px;vertical-align:bottom;border-bottom:1px solid rgb(204,204,204);border-r=
ight:1px solid rgb(204,204,204)">
<br></td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;ver=
tical-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,20=
4);border-bottom:1px solid rgb(204,204,204)">31</td><td style=3D"padding:0p=
x 3px;color:rgb(0,0,0);text-align:right;vertical-align:middle;white-space:n=
owrap;border-right:1px solid rgb(204,204,204);border-bottom:1px solid rgb(2=
04,204,204)">
39</td></tr><tr style=3D"height:16px"><td style=3D"padding:0px 3px;color:rg=
b(0,0,0);text-align:right;vertical-align:middle;white-space:nowrap;border-r=
ight:1px solid rgb(204,204,204);border-bottom:1px solid rgb(204,204,204);bo=
rder-left:1px solid rgb(204,204,204)">
31</td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;verti=
cal-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,204)=
;border-bottom:1px solid rgb(204,204,204)">46</td><td style=3D"padding:0px =
3px;vertical-align:bottom;border-bottom:1px solid rgb(204,204,204);border-r=
ight:1px solid rgb(204,204,204)">
<br></td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;ver=
tical-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,20=
4);border-bottom:1px solid rgb(204,204,204)">39</td><td style=3D"padding:0p=
x 3px;color:rgb(0,0,0);text-align:right;vertical-align:middle;white-space:n=
owrap;border-right:1px solid rgb(204,204,204);border-bottom:1px solid rgb(2=
04,204,204)">
39</td></tr><tr style=3D"height:16px"><td style=3D"padding:0px 3px;color:rg=
b(0,0,0);text-align:right;vertical-align:middle;white-space:nowrap;border-r=
ight:1px solid rgb(204,204,204);border-bottom:1px solid rgb(204,204,204);bo=
rder-left:1px solid rgb(204,204,204)">
31</td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;verti=
cal-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,204)=
;border-bottom:1px solid rgb(204,204,204)">46</td><td style=3D"padding:0px =
3px;vertical-align:bottom;border-bottom:1px solid rgb(204,204,204);border-r=
ight:1px solid rgb(204,204,204)">
<br></td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;ver=
tical-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,20=
4);border-bottom:1px solid rgb(204,204,204)">39</td><td style=3D"padding:0p=
x 3px;color:rgb(0,0,0);text-align:right;vertical-align:middle;white-space:n=
owrap;border-right:1px solid rgb(204,204,204);border-bottom:1px solid rgb(2=
04,204,204)">
39</td></tr><tr style=3D"height:16px"><td style=3D"padding:0px 3px;color:rg=
b(0,0,0);text-align:right;vertical-align:middle;white-space:nowrap;border-r=
ight:1px solid rgb(204,204,204);border-bottom:1px solid rgb(204,204,204);bo=
rder-left:1px solid rgb(204,204,204)">
31</td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;verti=
cal-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,204)=
;border-bottom:1px solid rgb(204,204,204)">39</td><td style=3D"padding:0px =
3px;vertical-align:bottom;border-bottom:1px solid rgb(204,204,204);border-r=
ight:1px solid rgb(204,204,204)">
<br></td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;ver=
tical-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,20=
4);border-bottom:1px solid rgb(204,204,204)">31</td><td style=3D"padding:0p=
x 3px;color:rgb(0,0,0);text-align:right;vertical-align:middle;white-space:n=
owrap;border-right:1px solid rgb(204,204,204);border-bottom:1px solid rgb(2=
04,204,204)">
39</td></tr><tr style=3D"height:16px"><td style=3D"padding:0px 3px;color:rg=
b(0,0,0);text-align:right;vertical-align:middle;white-space:nowrap;border-r=
ight:1px solid rgb(204,204,204);border-bottom:1px solid rgb(204,204,204);bo=
rder-left:1px solid rgb(204,204,204)">
31</td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;verti=
cal-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,204)=
;border-bottom:1px solid rgb(204,204,204)">39</td><td style=3D"padding:0px =
3px;vertical-align:bottom;border-bottom:1px solid rgb(204,204,204);border-r=
ight:1px solid rgb(204,204,204)">
<br></td><td style=3D"padding:0px 3px;color:rgb(0,0,0);text-align:right;ver=
tical-align:middle;white-space:nowrap;border-right:1px solid rgb(204,204,20=
4);border-bottom:1px solid rgb(204,204,204)">31</td><td style=3D"padding:0p=
x 3px;color:rgb(0,0,0);text-align:right;vertical-align:middle;white-space:n=
owrap;border-right:1px solid rgb(204,204,204);border-bottom:1px solid rgb(2=
04,204,204)">
39</td></tr></tbody></table><br></div><div>So as you can see, there are pea=
k values both in dom0 and domU.<br></div><div><br>&gt; AFAIUI, you&#39;re =
saying that you&#39;re asking for a sleep time of X, and<br>&gt; you&#39;re=
 being woken up in the interval [x+5ms, x+15ms], is that the<br>
&gt; case?<br><br></div>Yes, that&#39;s correct. In the numbers above, the=
 sleep period is 30 ms, so we were expecting a 30-31 ms sleep time, as on t=
he same setup without Xen.<br><br>&gt; # xl sched-sedf<br><br><span dir=3D=
"ltr=
" id=3D":7q6"> # xl sched-sedf<br>
Cpupool Pool-0:<br>Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0ID Period Slice =A0Latency Extra Weight<br>Domain-0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 0 =A0 =A0100 =A0 =A0 =A00 =A0 =
=A0 =A0 0 =A0 =A0 1 =A0 =A0 =A00<br>android_4.3 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A01 =A0 =A0100 =A0 =A0 =A00 =A0 =A0 =A0 0 =A0 =A0 1 =
=A0 =A0 =A00</span><br>
<br>&gt; Or you have some other load in the system while<br>&gt; performing=
 these measurements? If the latter, what and where?<br><br></div>During thi=
s test both guests were almost idle.<br><br>&gt; Oh, and now that I think a=
bout it, something that present in credit and<br>
&gt; not in sEDF that might be worth checking is the scheduling rate limiti=
ng<br>&gt; thing.<br><br></div>We&#39;ll check it out, thanks!<br><br></div=
>Regards, Pavlo<br></div>

--047d7b2e4c0218debf04f0691191--


--===============1149827963086913779==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1149827963086913779==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 16:30:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:30:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5HjS-0005Df-92; Mon, 20 Jan 2014 16:29:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5HjQ-0005DZ-Jn
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 16:29:49 +0000
Received: from [85.158.137.68:59161] by server-6.bemta-3.messagelabs.com id
	20/33-04868-BFE4DD25; Mon, 20 Jan 2014 16:29:47 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390235385!10245029!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12019 invoked from network); 20 Jan 2014 16:29:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:29:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94549200"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 16:29:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:29:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5HjL-0000wc-Ic;
	Mon, 20 Jan 2014 16:29:43 +0000
Date: Mon, 20 Jan 2014 16:29:43 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140120162943.GD11681@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140110145807.GB19124@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > create ^
> > owner Wei Liu <wei.liu2@citrix.com>
> > thanks
> > 
> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > When I have following configuration in HVM config file:
> > >   memory=128
> > >   maxmem=256
> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > 
> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > 
> > > With claim_mode=0, I can successfully create an HVM guest.
> > 
> > Is it trying to claim 256M instead of 128M? (although the likelihood
> 
> No. 128MB actually.
> 
> > that you only have 128-255M free is quite low, or are you
> > autoballooning?)
> 
> This patch fixes it for me. It basically sets the amount of pages
> claimed to be 'maxmem' instead of 'memory' for PoD.
> 
> I don't know PoD very well, and this claim is only valid during the
> allocation of the guests memory - so the 'target_pages' value might be
> the wrong one. However looking at the hypervisor's
> 'p2m_pod_set_mem_target' I see this comment:
> 
>  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>  317  *   entries.  The balloon driver will deflate the balloon to give back
>  318  *   the remainder of the ram to the guest OS.
> 
> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> And then it is the responsibility of the balloon driver to give the memory
> back (and this is where the 'static-max' et al come in play to tell the
> balloon driver to balloon out).
> 
> 
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..65e9577 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = nr_pages - cur_pages;
> +
> +        if ( pod_mode )
> +            nr = target_pages - 0x20;
> +

I'm a bit confused, did this work for you? At this point d->tot_pages
should be (target_pages - 0x20). However, in the hypervisor logic, if you
try to claim exactly as many pages as d->tot_pages, it should return
EINVAL.

Furthermore, the original logic doesn't look right. In PV guest
creation, xc tries to claim "memory=" pages, while in HVM guest creation
it tries to claim "maxmem=" pages. I think the HVM code is wrong.

And George shed some light on PoD for me this morning: the "cache" in PoD
should be the pool of pages that is used to populate guest physical memory.
In that sense it should be the size of mem_target ("memory=").

So I came up with a fix like this. Any ideas?

Wei.

---8<---
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..472f1df 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -49,6 +49,8 @@
 #define NR_SPECIAL_PAGES     8
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
+#define POD_VGA_HOLE_SIZE (0x20)
+
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
                         uint64_t *mstart_out, uint64_t *mend_out)
@@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
     if ( pod_mode )
     {
         /*
-         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-         * adjust the PoD cache size so that domain tot_pages will be
-         * target_pages - 0x20 after this call.
+         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
+         * "hole".  Xen will adjust the PoD cache size so that domain
+         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
+         * this call.
          */
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+        rc = xc_domain_set_pod_target(xch, dom,
+                                      target_pages - POD_VGA_HOLE_SIZE,
                                       NULL, NULL, NULL);
         if ( rc != 0 )
         {
@@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
 
     /* try to claim pages for early warning of insufficient memory available */
     if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
+        unsigned long nr = target_pages;
+
+        if ( pod_mode )
+            nr -= POD_VGA_HOLE_SIZE;
+
+        rc = xc_domain_claim_pages(xch, dom, nr);
         if ( rc != 0 )
         {
             PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..1e44ba3 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
         goto out;
     }
 
-    /* disallow a claim not exceeding current tot_pages or above max_pages */
-    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
+    /* disallow a claim below current tot_pages or above max_pages */
+    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
     {
         ret = -EINVAL;
         goto out;



> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Hrg-0005Ud-Fm; Mon, 20 Jan 2014 16:38:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5Hrf-0005UY-7L
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 16:38:19 +0000
Received: from [85.158.143.35:8341] by server-3.bemta-4.messagelabs.com id
	BF/D3-32360-AF05DD25; Mon, 20 Jan 2014 16:38:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390235896!12888809!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5435 invoked from network); 20 Jan 2014 16:38:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:38:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92476061"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 16:38:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:38:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5Hrb-00014R-4x;
	Mon, 20 Jan 2014 16:38:15 +0000
Date: Mon, 20 Jan 2014 16:38:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140120163814.GE11681@zion.uk.xensource.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 05:11:07PM +0000, Zoltan Kiss wrote:
> The recent patch to fix receive-side flow control (11b57f) solved the spinning
> thread problem, but caused another one. The receive side can stall if:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] an interrupt happens and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo doesn't return true anymore
> 
> Also, if an interrupt is sent but there is still no room in the ring, it takes
> quite a long time until xenvif_rx_action realizes it. This patch ditches those
> two variables and reworks rx_work_todo. If the thread finds it can't fit more
> skbs into the ring, it saves the last slot estimation into rx_last_skb_slots;
> otherwise it's kept as 0. Then rx_work_todo checks whether:
> - there is something to send to the ring (as before)
> - there is space for the topmost packet in the queue
> 
> I think that's a more natural and optimal thing to test than two bools which
> are set somewhere else.
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

Sorry for the delay.

Paul, thanks for reviewing.

Acked-by: Wei Liu <wei.liu2@citrix.com>

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Hrg-0005Ud-Fm; Mon, 20 Jan 2014 16:38:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5Hrf-0005UY-7L
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 16:38:19 +0000
Received: from [85.158.143.35:8341] by server-3.bemta-4.messagelabs.com id
	BF/D3-32360-AF05DD25; Mon, 20 Jan 2014 16:38:18 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390235896!12888809!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5435 invoked from network); 20 Jan 2014 16:38:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:38:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92476061"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 16:38:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:38:15 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5Hrb-00014R-4x;
	Mon, 20 Jan 2014 16:38:15 +0000
Date: Mon, 20 Jan 2014 16:38:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140120163814.GE11681@zion.uk.xensource.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 05:11:07PM +0000, Zoltan Kiss wrote:
> The recent patch to fix receive side flow control (11b57f) solved the spinning
> thread problem, but it introduced another one. The receive side can stall if:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] an interrupt happens and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo doesn't return true anymore
> 
> Also, if an interrupt is sent but there is still no room in the ring, it takes
> quite a long time until xenvif_rx_action realizes it. This patch ditches those
> two variables and reworks rx_work_todo. If the thread finds it can't fit more
> skbs into the ring, it saves the last slot estimate into rx_last_skb_slots;
> otherwise it is kept at 0. Then rx_work_todo checks whether:
> - there is something to send to the ring (as before)
> - there is space for the packet at the head of the queue
> 
> I think that is a more natural and optimal thing to test than two bools which
> are set somewhere else.
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
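The reworked rx_work_todo described in the commit message above boils down to two conditions: something is queued, and the ring has room for the head-of-queue packet. A minimal sketch of that test, using a toy struct rather than the real xen-netback xenvif (the field names here only mirror the driver's):

```c
#include <stdbool.h>

/* Toy stand-in for struct xenvif: only the fields the check needs. */
struct toy_vif {
    int queued_skbs;       /* packets waiting in the internal rx_queue */
    int ring_slots_free;   /* free slots currently in the shared ring */
    int rx_last_skb_slots; /* slot estimate saved when the last skb didn't
                              fit; 0 when everything has fit so far */
};

static bool rx_work_todo(const struct toy_vif *vif)
{
    /* No queued packets: nothing to do regardless of ring state. */
    if (vif->queued_skbs == 0)
        return false;
    /* Work to do only if the ring can take the head-of-queue packet;
     * with rx_last_skb_slots == 0 any non-empty queue qualifies. */
    return vif->ring_slots_free >= vif->rx_last_skb_slots;
}
```

The kthread can then sleep until one of these conditions changes, instead of tracking rx_queue_stopped/rx_event flags that are set from two different contexts.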

Sorry for the delay.

Paul, thanks for reviewing.

Acked-by: Wei Liu <wei.liu2@citrix.com>

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:46:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Hzh-000610-H3; Mon, 20 Jan 2014 16:46:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5Hzg-00060v-Ff
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 16:46:36 +0000
Received: from [193.109.254.147:51616] by server-7.bemta-14.messagelabs.com id
	27/C9-15500-BE25DD25; Mon, 20 Jan 2014 16:46:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390236393!12032826!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 803 invoked from network); 20 Jan 2014 16:46:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:46:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94555727"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 16:46:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:46:23 -0500
Message-ID: <1390236382.20516.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Mon, 20 Jan 2014 16:46:22 +0000
In-Reply-To: <A4D3962C02EE81408CC4341DF532AAC3364466@Exchange2.orionvm.com>
References: <A4D3962C02EE81408CC4341DF532AAC3364466@Exchange2.orionvm.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Alex Sharp <alex.sharp@orionvm.com>
Subject: [Xen-devel] stubdom build failure with gcc 4.8 (Was: Re: [PATCH]
 Fix stubdom build failure for RELEASE-4.3.1)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it stubdom build failures with gcc 4.8
thanks

On Thu, 2013-10-31 at 14:44 +0000, Alex Sharp wrote:
> Hey,
> 
> I noticed that RELEASE-4.3.1 (commit 4e8e0bdef96859e6f428af74be6f4a3bc7fd11f3)
> was failing to make stubdom, complaining of various forms of format string mismatch.

User "concave" reported on #xentest during the test day yesterday that
this issue affects 4.4-rc2 as well.

There were a couple of minor comments, which should appear in the bug
log once it is created (or search the archives).

Assuming Alex isn't able to resubmit, are there any devs with a
suitable distro who'd like to take a run at it?

Ian.

>  The following patch at least gets things to compile, but I'm not sure
> if there's a deeper underlying issue. I'm using gcc version 4.8.1
> (Ubuntu/Linaro 4.8.1-10ubuntu8) on x86_64. My .config contains the
> line "PYTHON_PREFIX_ARG=--install-layout=deb" and I'm ./configuring
> with --prefix= 
> 
> Hope this helps.
> 
> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
> index 54a5e67..b6aaa6b 100644
> --- a/extras/mini-os/fbfront.c
> +++ b/extras/mini-os/fbfront.c
> @@ -105,7 +105,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> @@ -463,7 +463,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
> index bbe21e0..9d0cb17 100644
> --- a/extras/mini-os/pcifront.c
> +++ b/extras/mini-os/pcifront.c
> @@ -413,7 +413,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>                  continue;
>              }
>  
> -            if (sscanf(s, "%x:%x:%x.%x", dom, bus, slot, fun) != 4) {
> +            if (sscanf(s, "%x:%x:%x.%lx", dom, bus, slot, fun) != 4) {
>                  printk("\"%s\" does not look like a PCI device address\n", s);
>                  free(s);
>                  continue;
> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
> index ee1691b..fa2e872 100644
> --- a/extras/mini-os/xenbus/xenbus.c
> +++ b/extras/mini-os/xenbus/xenbus.c
> @@ -672,7 +672,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>      err = errmsg(rep);
>      if (err)
>         return err;
> -    sscanf((char *)(rep + 1), "%u", xbt);
> +    sscanf((char *)(rep + 1), "%lu", xbt);
>      free(rep);
>      return NULL;
>  }
> @@ -769,7 +769,7 @@ domid_t xenbus_get_self_id(void)
>      domid_t ret;
>  
>      BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
> -    sscanf(dom_id, "%d", &ret);
> +    sscanf(dom_id, "%u", (unsigned int*)&ret);
>  
>      return ret;
>  }
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
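The warnings Alex's patch silences are ordinary printf/scanf format-width mismatches: on x86_64 mini-os, values such as virt_to_mfn()'s result are unsigned long, so "%u" no longer compiles cleanly under gcc 4.8's stricter format checking. The mismatch can be reproduced outside mini-os with a standalone snippet (format_page_ref is a made-up helper for illustration, not mini-os code):

```c
#include <stdio.h>

/* Illustrates the class of fix above: a value of type unsigned long
 * (like virt_to_mfn()'s result on x86_64) must be printed with "%lu";
 * "%u" promises an unsigned int and triggers gcc's -Wformat. */
static int format_page_ref(char *buf, size_t len, unsigned long mfn)
{
    /* Wrong: snprintf(buf, len, "%u", mfn);  -- %u expects unsigned int */
    return snprintf(buf, len, "%lu", mfn);
}
```

The sscanf hunks are the same mismatch in the reading direction: the conversion specifier must match the pointed-to type exactly, which is why the xenbus_transaction_t hunk switches to "%lu".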



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:53:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5I6Y-0006XP-Go; Mon, 20 Jan 2014 16:53:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5I6W-0006XK-Jj
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 16:53:40 +0000
Received: from [85.158.137.68:42373] by server-10.bemta-3.messagelabs.com id
	B7/D6-23989-3945DD25; Mon, 20 Jan 2014 16:53:39 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390236817!10191508!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28119 invoked from network); 20 Jan 2014 16:53:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:53:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94558144"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 16:53:37 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:53:36 -0500
Message-ID: <52DD548F.2060409@citrix.com>
Date: Mon, 20 Jan 2014 16:53:35 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-2-git-send-email-zoltan.kiss@citrix.com>
	<20140116000040.GA5331@zion.uk.xensource.com>
In-Reply-To: <20140116000040.GA5331@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:00, Wei Liu wrote:
> There is a stray blank line change in xenvif_tx_create_gop. (I removed
> that part too early and didn't bother to paste it back...)
OK, fixed.

>> +static inline bool tx_dealloc_work_todo(struct xenvif *vif)
>> +{
>> +	if (vif->dealloc_cons != vif->dealloc_prod)
>> +		return true;
>> +
>> +	return false;
>
> This can be simplified as
>    return vif->dealloc_cons != vif->dealloc_prod;
Indeed, done.
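Wei's one-liner above relies on `!=` already producing the boolean result; a compilable form of the suggested simplification (toy struct and a RING_IDX stand-in, not the real driver types):

```c
#include <stdbool.h>

typedef unsigned int RING_IDX; /* stand-in for the Xen shared-ring index type */

/* Toy stand-in for struct xenvif with just the dealloc ring indices. */
struct toy_vif {
    RING_IDX dealloc_cons; /* consumer index */
    RING_IDX dealloc_prod; /* producer index */
};

/* The if/return-true/return-false ladder collapses to the comparison
 * itself: there is dealloc work exactly when the indices differ. */
static bool tx_dealloc_work_todo(const struct toy_vif *vif)
{
    return vif->dealloc_cons != vif->dealloc_prod;
}
```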



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:54:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5I6t-0006Yw-48; Mon, 20 Jan 2014 16:54:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5I6q-0006Yk-Oz
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 16:54:00 +0000
Received: from [85.158.139.211:33294] by server-14.bemta-5.messagelabs.com id
	21/82-24200-8A45DD25; Mon, 20 Jan 2014 16:54:00 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390236837!10862687!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26029 invoked from network); 20 Jan 2014 16:53:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:53:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94558250"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 16:53:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 11:53:56 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5I6m-0001Jx-F9;
	Mon, 20 Jan 2014 16:53:56 +0000
Date: Mon, 20 Jan 2014 16:53:56 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140120165356.GF11681@zion.uk.xensource.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
	<20140116000334.GE5331@zion.uk.xensource.com>
	<52D98427.1060103@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D98427.1060103@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 07:27:35PM +0000, Zoltan Kiss wrote:
> On 16/01/14 00:03, Wei Liu wrote:
> >On Tue, Jan 14, 2014 at 08:39:54PM +0000, Zoltan Kiss wrote:
> >[...]
> >>diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >>index 109c29f..d1cd8ce 100644
> >>--- a/drivers/net/xen-netback/common.h
> >>+++ b/drivers/net/xen-netback/common.h
> >>@@ -129,6 +129,9 @@ struct xenvif {
> >>  	struct xen_netif_rx_back_ring rx;
> >>  	struct sk_buff_head rx_queue;
> >>  	RING_IDX rx_last_skb_slots;
> >
> >Hmm... You seemed to mix your other patch with this series. :-)
> Yep, this series doesn't work without that patch (actually that is a
> bug in netback even without my series), so at the moment it is based
> on it.
> 
> >
> >>+	bool rx_queue_purge;
> >>+
> >>+	struct timer_list wake_queue;
> >>
> >>  	/* This array is allocated seperately as it is large */
> >>  	struct gnttab_copy *grant_copy_op;
> >>@@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
> >>
> >>  extern bool separate_tx_rx_irq;
> >>
> >[...]
> >>@@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
> >>  		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
> >>  			unmap_timeout++;
> >>  			schedule_timeout(msecs_to_jiffies(1000));
> >>-			if (unmap_timeout > 9 &&
> >>+			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
> >
> >This line is really too long. And what's the rationale behind this long
> >expression?
> It calculates how many times you should ditch the internal queue of
> another (maybe stuck) vif before Qdisc empties its actual content.
> After that there shouldn't be any mapped handles left, so we should
> start printing these messages. Actually it should use
> vif->dev->tx_queue_len, and yes, it is probably better to move it to
> the beginning of the function into a new variable and use that
> here.
> 

Why is it relative to the tx queue length?

What's the meaning of drain_timeout multiplied by the last part
(DIV_ROUND_UP)?

If you proposed using vif->dev->tx_queue_len to replace the
DIV_ROUND_UP term, then ignore the above question. But I still don't
understand the rationale behind this. Could you elaborate a bit more?
Wouldn't rx_drain_timeout_msecs/1000 alone suffice?

Wei.

> Zoli
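The bound Zoltan describes can be written out as arithmetic: at worst the queue drains one ring-full of maximally fragmented skbs per drain timeout, so the number of 1-second waits before warning is (timeout in seconds) times ceil(queue_len / (ring_size / max_frags)). A hedged sketch with illustrative parameters (in the driver the inputs would be rx_drain_timeout_msecs, XENVIF_QUEUE_LENGTH, XEN_NETIF_RX_RING_SIZE and MAX_SKB_FRAGS):

```c
/* Same rounding helper the kernel uses. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Worst case, every queued skb occupies max_frags ring slots, so only
 * ring_size / max_frags skbs fit per drain period; the queue therefore
 * needs ceil(queue_len / that) periods to empty, each drain_ms long. */
static unsigned long unmap_timeout_bound(unsigned long drain_ms,
                                         unsigned long queue_len,
                                         unsigned long ring_size,
                                         unsigned long max_frags)
{
    unsigned long skbs_per_period = ring_size / max_frags;
    return (drain_ms / 1000) * DIV_ROUND_UP(queue_len, skbs_per_period);
}
```

For example, with a 10-second drain timeout, a 32-packet queue, a 256-slot ring and 17 frags per skb, this gives 10 * ceil(32/15) = 30 one-second sleeps before the message is printed.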

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >On Tue, Jan 14, 2014 at 08:39:54PM +0000, Zoltan Kiss wrote:
> >[...]
> >>diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >>index 109c29f..d1cd8ce 100644
> >>--- a/drivers/net/xen-netback/common.h
> >>+++ b/drivers/net/xen-netback/common.h
> >>@@ -129,6 +129,9 @@ struct xenvif {
> >>  	struct xen_netif_rx_back_ring rx;
> >>  	struct sk_buff_head rx_queue;
> >>  	RING_IDX rx_last_skb_slots;
> >
> >Hmm... You seemed to mix your other patch with this series. :-)
> Yep, this series doesn't work without that patch (actually that is a
> bug in netback even without my series), so at the moment it is based
> on it.
> 
> >
> >>+	bool rx_queue_purge;
> >>+
> >>+	struct timer_list wake_queue;
> >>
> >>  	/* This array is allocated seperately as it is large */
> >>  	struct gnttab_copy *grant_copy_op;
> >>@@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
> >>
> >>  extern bool separate_tx_rx_irq;
> >>
> >[...]
> >>@@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
> >>  		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
> >>  			unmap_timeout++;
> >>  			schedule_timeout(msecs_to_jiffies(1000));
> >>-			if (unmap_timeout > 9 &&
> >>+			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
> >
> >This line is really too long. And what's the rationale behind this long
> >expression?
> It calculates how many times you should ditch the internal queue of
> another (maybe stuck) vif before Qdisc empties its actual
> content. After that there shouldn't be any mapped handle left, so we
> should start printing these messages. Actually it should use
> vif->dev->tx_queue_len, and yes, it is probably better to move it to
> the beginning of the function into a new variable, and use that
> here.
> 

Why is it relative to tx queue length?

What's the meaning of drain_timeout multiplied by the last part
(the DIV_ROUND_UP)?

If you are proposing to use vif->dev->tx_queue_len to replace the
DIV_ROUND_UP then ignore the above question. But I still don't
understand the rationale behind this. Could you elaborate a bit more?
Wouldn't rx_drain_timeout_msecs/1000 alone suffice?

Wei.

> Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:56:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5I8q-0006jl-OD; Mon, 20 Jan 2014 16:56:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5I8o-0006jd-Jq
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 16:56:03 +0000
Received: from [193.109.254.147:63970] by server-15.bemta-14.messagelabs.com
	id FE/5C-22186-1255DD25; Mon, 20 Jan 2014 16:56:01 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390236957!10528628!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28119 invoked from network); 20 Jan 2014 16:55:58 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Jan 2014 16:55:58 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5ICg-00010w-J7; Mon, 20 Jan 2014 17:00:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1390237202.3900@bugs.xenproject.org>
References: <A4D3962C02EE81408CC4341DF532AAC3364466@Exchange2.orionvm.com>
	<1390236382.20516.76.camel@kazak.uk.xensource.com>
In-Reply-To: <1390236382.20516.76.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 20 Jan 2014 17:00:02 +0000
Subject: [Xen-devel] Processed: stubdom build failure with gcc 4.8 (Was: Re:
 [PATCH] Fix stubdom build failure for RELEASE-4.3.1)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #35 rooted at `<A4D3962C02EE81408CC4341DF532AAC3364466@Exchange2.orionvm.com>'
Title: `stubdom build failure with gcc 4.8 (Was: Re: [Xen-devel] [PATCH] Fix stubdom build failure for RELEASE-4.3.1)'
> title it stubdom build failures with gcc 4.8
Set title for #35 to `stubdom build failures with gcc 4.8'
> thanks
Finished processing.

Modified/created Bugs:
 - 35: http://bugs.xenproject.org/xen/bug/35 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 16:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 16:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IBV-000714-9v; Mon, 20 Jan 2014 16:58:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5IBT-00070w-SY
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 16:58:48 +0000
Received: from [193.109.254.147:12184] by server-2.bemta-14.messagelabs.com id
	D3/EA-00361-7C55DD25; Mon, 20 Jan 2014 16:58:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390237124!12037918!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20850 invoked from network); 20 Jan 2014 16:58:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 16:58:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="94560219"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 16:58:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 11:58:42 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W5IBO-0001cY-Ee;
	Mon, 20 Jan 2014 16:58:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W5IBN-00087Q-VJ;
	Mon, 20 Jan 2014 16:58:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24448-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 20 Jan 2014 16:58:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24448: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24448 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24448/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install     fail REGR. vs. 24447

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24447

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
baseline version:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Jan 20 09:50:20 2014 +0100

    compat/memory: fix build with old gcc
    
    struct xen_add_to_physmap_batch's size field being uint16_t causes old
    compiler versions to warn about the pointless range check done inside
    compat_handle_okay().
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>

commit ebc168f238ab7a729c45032c9663b14844f34656
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 20 09:49:20 2014 +0100

    common/memory: Fix ABI breakage for XENMEM_add_to_physmap
    
    caused by c/s 4be86bb194e25e46b6cbee900601bfee76e8090a
    
    In public/memory.h, struct xen_add_to_physmap has 'space' as an unsigned int,
    but struct xen_add_to_physmap_batch has 'space' as a uint16_t.
    
    By defining xenmem_add_to_physmap_one() with space defined as uint16_t, the
    now-common xenmem_add_to_physmap() implicitly truncates xatp->space from
    unsigned int to uint16_t, which changes the space switch()'d upon.
    
    This wouldn't be noticed with any upstream code (of which I am aware), but was
    discovered because of the XenServer support for legacy Windows PV drivers,
    which make XENMEM_add_to_physmap hypercalls using spaces with the top bit set.
    The current Windows PV drivers don't do this any more, but we 'fix' Xen to
    support running VMs with out-of-date tools.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Ack: Ian Campbell <Ian.Campbell@citrix.com>
    Acked-by: Keir Fraser <keir@xen.org>

commit efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 20 09:48:11 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 17:05:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IHP-0007X5-9m; Mon, 20 Jan 2014 17:04:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5IHN-0007X0-Hs
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 17:04:53 +0000
Received: from [193.109.254.147:21532] by server-16.bemta-14.messagelabs.com
	id 66/63-20600-4375DD25; Mon, 20 Jan 2014 17:04:52 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390237490!9710171!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32300 invoked from network); 20 Jan 2014 17:04:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 17:04:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92486622"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 17:04:50 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 12:04:50 -0500
Message-ID: <52DD5730.8080803@citrix.com>
Date: Mon, 20 Jan 2014 17:04:48 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-3-git-send-email-zoltan.kiss@citrix.com>
	<20140116000107.GB5331@zion.uk.xensource.com>
In-Reply-To: <20140116000107.GB5331@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:01, Wei Liu wrote:
> On Tue, Jan 14, 2014 at 08:39:48PM +0000, Zoltan Kiss wrote:
>> v3:
>> - delete a surplus checking from tx_action
>> - remove stray line
>> - squash xenvif_idx_unmap changes into the first patch
>> - init spinlocks
>> - call map hypercall directly instead of gnttab_map_refs()
>> - fix unmapping timeout in xenvif_free()
>>
>> v4:
>> - fix indentations and comments
>> - handle errors of set_phys_to_machine
>
> There's no call to set_phys_to_machine in this patch. Did I miss
> something?
I've made several changes between v3 and v4 to the grant mapping 
stuff; this was an earlier concept, not the one I finally sent in. It 
should be the same comment as in the first patch: "go back to 
gnttab_map_refs, now we rely on API changes"

>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -123,7 +123,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	BUG_ON(skb->dev != dev);
>>
>>   	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL || !xenvif_schedulable(vif))
>> +	if (vif->task == NULL ||
>> +	    vif->dealloc_task == NULL ||
>> +	    !xenvif_schedulable(vif))
>>   		goto drop;
>>
>>   	/* At best we'll need one slot for the header and one for each
>> @@ -345,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>
> At the beginning of the function there's BUG_ON checks for vif->task. I
> would suggest you do the same for vif->dealloc_task, just to be
> consistent.
I guess you mean in xenvif_connect. Applied.

>> @@ -431,6 +452,16 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>>   		goto err_rx_unbind;
>>   	}
>>
>> +	vif->dealloc_task = kthread_create(xenvif_dealloc_kthread,
>> +					   (void *)vif,
>> +					   "%s-dealloc",
>> +					   vif->dev->name);
>> +	if (IS_ERR(vif->dealloc_task)) {
>> +		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
>> +		err = PTR_ERR(vif->dealloc_task);
>> +		goto err_rx_unbind;
>> +	}
>> +
>>   	vif->task = task;
>
> Please move this line before the above hunk. Don't separate it from
> corresponding kthread_create.
Done. I've also used 'task' for dealloc thread creation, the same way 
the rx thread does.

> Last but not least, though I've looked at this patch for several rounds
> and the basic logic looks correct to me, I would like it to go
> through XenRT tests if possible -- eye inspection is error-prone to such
> complicated change. (If I'm not mistaken you once told me you've done
> regression tests already. That would be neat!)
Yes, that's ongoing; I don't expect the patches to be accepted before 
they pass XenRT.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 17:26:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Icb-0000O2-2P; Mon, 20 Jan 2014 17:26:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5IcZ-0000Ns-SE
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 17:26:48 +0000
Received: from [85.158.137.68:16003] by server-2.bemta-3.messagelabs.com id
	8B/0E-17329-65C5DD25; Mon, 20 Jan 2014 17:26:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390238804!10293777!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23708 invoked from network); 20 Jan 2014 17:26:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 17:26:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92495508"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 17:26:43 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 12:26:42 -0500
Message-ID: <52DD5C50.7010101@citrix.com>
Date: Mon, 20 Jan 2014 17:26:40 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-7-git-send-email-zoltan.kiss@citrix.com>
	<20140116000314.GD5331@zion.uk.xensource.com>
In-Reply-To: <20140116000314.GD5331@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 6/9] xen-netback: Handle guests
 with too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/01/14 00:03, Wei Liu wrote:
> On Tue, Jan 14, 2014 at 08:39:52PM +0000, Zoltan Kiss wrote:
> [...]
>>   	/* Skip first skb fragment if it is on same page as header fragment. */
>> @@ -832,6 +851,29 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
>>
>>   	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
>>
>> +	if (frag_overflow) {
>> +		struct sk_buff *nskb = xenvif_alloc_skb(0);
>> +		if (unlikely(nskb == NULL)) {
>> +			netdev_err(vif->dev,
>> +				   "Can't allocate the frag_list skb.\n");
>
> This, and other occurrences of netdev_* logs, need to be rate limited.
> Otherwise you risk flooding the kernel log when the system is under
> memory pressure.
Done.

>> @@ -1537,6 +1613,32 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   				  pending_idx :
>>   				  INVALID_PENDING_IDX);
>>
>> +		if (skb_shinfo(skb)->frag_list) {
>> +			nskb = skb_shinfo(skb)->frag_list;
>> +			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
>> +			skb->len += nskb->len;
>> +			skb->data_len += nskb->len;
>> +			skb->truesize += nskb->truesize;
>> +			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>> +			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>> +			vif->tx_zerocopy_sent += 2;
>> +			nskb = skb;
>> +
>> +			skb = skb_copy_expand(skb,
>> +					      0,
>> +					      0,
>> +					      GFP_ATOMIC | __GFP_NOWARN);
>> +			if (!skb) {
>> +				netdev_dbg(vif->dev,
>> +					   "Can't consolidate skb with too many fragments\n");
>
> Rate limit.
>
>> +				if (skb_shinfo(nskb)->destructor_arg)
>> +					skb_shinfo(nskb)->tx_flags |=
>> +						SKBTX_DEV_ZEROCOPY;
>
> Why is this needed? nskb is the saved pointer to original skb, which has
> already had SKBTX_DEV_ZEROCOPY in tx_flags. Did I miss something?
Indeed. This actually belongs to the header grant copy patches I've sent 
in as well; I'll move it there.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 17:31:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IhF-0000tR-Td; Mon, 20 Jan 2014 17:31:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5IhE-0000tH-42
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 17:31:36 +0000
Received: from [85.158.137.68:31200] by server-14.bemta-3.messagelabs.com id
	45/6D-06105-77D5DD25; Mon, 20 Jan 2014 17:31:35 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390239091!10196069!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,DIET_1,
	HTML_40_50,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 629 invoked from network); 20 Jan 2014 17:31:33 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 17:31:33 -0000
Received: from mail-oa0-f49.google.com ([209.85.219.49]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt1dcxH8kYi9VyuCwgStQuOO7Od40b82@postini.com;
	Mon, 20 Jan 2014 09:31:33 PST
Received: by mail-oa0-f49.google.com with SMTP id i7so5551949oag.8
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 09:31:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=8xd1Dom5gx+IrFjUrxB5Kcx+ddd6EbP5Ms0Umiy0sQY=;
	b=S06qeUX2W/KeTlRScXOnjrWupMYJlAy6x5AT+pS4SATzY54IEDBjIgPk6Du1DjOMAg
	s0nJWjDTKJ5BrOaf8+6V5vLkX2bHQP3Bg3LWxhrCJQVrjUMwd7da4TiitKFtD1ZjiCl/
	BD5XtuaJNkd37R2FzGpFTdrhlzAiA6kJYQHdE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=8xd1Dom5gx+IrFjUrxB5Kcx+ddd6EbP5Ms0Umiy0sQY=;
	b=NOwHSc/9eh06TpC0TUJ8ByfA2R02eysnDSjlw1fWmUfxhhFJ4T9D9tzgOPcijANm6E
	FmJOMqoAGGYcGCuVuge0yTxSf5VXbTlS/3yzbWYtuCDM3QD6203bI9OgLTQIZxXtiQ8q
	eXJeBRnPhyVNeLbHwmC8kfBP3R2HvkHxsVm7sltLeGlp//gxQuXSY42kN7vf37wCOnlh
	CP0xLHY5Ws0wOQ5t7KjDx65HLLAtFOvPuxNL7JA0+38cVZB7lUPgr1+Vn0KzAJRGZE7R
	JecFjg/KAQv8qF8+e/lHXffY+Txpiry74EVIbHhFq2dU02Ae9ItfehxD1IhQah6u4C5l
	0Q5w==
X-Gm-Message-State: ALoCoQkwpKGIgQH8IgivhVYNDj0hd/TW//xHY49OFtDRDMDODRz55goSutFjdqhMxfjONj79KRIzsyt6kvIUGDVZjYkz1uKmapnF1wyAG6t0MYIf33PCpAaacMhlIjUY1VSNAnpBT2ThB/UTvVfEQ+WmulVI/kMs3EYnLaZBdFqEz9M/M4eSjpE=
X-Received: by 10.60.134.230 with SMTP id pn6mr7398389oeb.40.1390239090983;
	Mon, 20 Jan 2014 09:31:30 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.134.230 with SMTP id pn6mr7398377oeb.40.1390239090853;
	Mon, 20 Jan 2014 09:31:30 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 09:31:30 -0800 (PST)
In-Reply-To: <CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
Date: Mon, 20 Jan 2014 19:31:30 +0200
Message-ID: <CAE4oM6wiHT6Y3wQ809uPHXO3N+kzVmSt-fcp1QzG-qbVeRmfww@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1035845384834103901=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi again,

sorry for the broken formatting; see a better version below, where the
tables should be.

> x86 or ARM host?

>
> ARM. ARMv7, TI Jacinto6 to be precise.
>
>
> > Also, how many pCPUs and vCPUs do the host and the various guests have?
>
> 2 pCPUs, 4 vCPUs: 2 vCPU per domain.
>
>
> > # xl list -n
> > # xl vcpu-list
>
> # xl list -n
> Name                                        ID   Mem VCPUs      State
> Time(s) NODE Affinity
> Domain-0                                     0   118     2     r-----
>  20.5 any node
> android_4.3                                  1  1024     2     -b----
> 383.5 any node
>
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU
> Affinity
> Domain-0                             0     0    0   r--      12.6  any cpu
> Domain-0                             0     1    1   b       7.3  any cpu
> android_4.3                          1     0    0   b     180.8  any cpu
> android_4.3                          1     1    0   b     184.4  any cpu
>
> > Are you using any vCPU-to-pCPU pinning?
>
> No.
>
>
> > What about giving a try to it yourself? I think standardizing on one (a
> > set of) specific tool could be a good thing.
>
> Yep, we'll try it.
>
>
> > Mmm... Can you show us at least part of the numbers? From my experience,
> > it's really easy to mess up with terminology in this domain (delay,
> > latency, jitter, etc.).
>
> Test with 30 ms sleep period gives such results (I hope formatting won't
> break):
>
                Measurements   Average (ms)   Times with t > 32 ms
Credit   Dom0   358            32.5           72
         DomU   358            40.7           358

sEDF     Dom0   358            33.6           120
         DomU   358            40.7           358

We did additional measurements and, as you can see, my first impression was
not quite correct: a difference between dom0 and domU exists and is quite
observable on a larger scale. On the same setup running bare metal without
Xen, the number of times t > 32 ms is close to 0; on the setup with Xen but
without the domU system running, it is close to 0 as well. We will
make additional measurements with Linux (not Android) as the domU guest,
though.

>
> Raw data looks like this:
>

Credit:
Dom0    DomU
38            46
31            46
31            46
39            46
39            46
31            46
31            39

sEDF:
Dom0    DomU
38            39
39            39
39            39
39            46
39            46
31            39
31            39

>
> So as you can see, there are peak values both in dom0 and domU.
>
> > AFAIUI, you're saying that you're asking for a sleep time of X, and
> > you're being waking up in the interval [x+5ms, x+15ms], is that the
> > case?
>
> Yes, that's correct. In the numbers above the sleep period is 30 ms, so we
> were expecting a 30-31 ms sleep time, as on the same setup without Xen.
>
> > # xl sched-sedf
>
> # xl sched-sedf
> Cpupool Pool-0:
> Name                                ID Period Slice  Latency Extra Weight
> Domain-0                             0    100      0       0     1      0
> android_4.3                          1    100      0       0     1      0
>
>
> > Or you have some other load in the system while
> > performing these measurements? If the latter, what and where?
>
> During this test both guests were almost idle.
>
>
> > Oh, and now that I think about it, something that present in credit and
> > not in sEDF that might be worth checking is the scheduling rate limiting
> > thing.
>
> We'll check it out, thanks!
>
> Regards, Pavlo
>

--047d7b471e6a29dd8304f06a43fb
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div>Hi again,<br><br></div>sorry for the broken formattin=
g, see better one below where the tables should be.<br><div><div><br>&gt; x=
86 or ARM host?<br><div class=3D"gmail_extra"><div class=3D"gmail_quote"><b=
lockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-le=
ft:1px solid rgb(204,204,204);padding-left:1ex">
<div dir=3D"ltr"><div><div><div><div><div><div><div><div><br></div>ARM. ARM=
v7, TI Jacinto6 to be precise.<div class=3D"im"><br><div>
</div><br>&gt; Also, how many pCPUs and vCPUs do the host and the various g=
uests have?<br><br></div></div><div>2 pCPUs, 4 vCPUs: 2 vCPU per domain.<di=
v class=3D"im"><br><br>&gt; # xl list -n<br>&gt; # xl vcpu-list<br></div>
</div><div><br><span dir=3D"ltr"># xl list -n<br>
Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0ID =A0 Mem VCPUs =A0 =A0 =A0State =A0 Time(s) NODE Affinity<br>D=
omain-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 0 =A0 118 =A0 =A0 2 =A0 =A0 r----- =A0 =A0 =A020.5 any node<br>android=
_4.3 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A01 =
=A01024 =A0 =A0 2 =A0 =A0 -b---- =A0 =A0 383.5 any node</span><br>

<br><span dir=3D"ltr"># xl vcpu-list<br>Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0ID =A0VCPU =A0 CPU State =A0 Time(s) CPU=
 Affinity<br>Domain-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 0 =A0 =A0 0 =A0 =A00 =A0 r-- =A0 =A0 =A012.6 =A0any cpu<br>Domain-0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 0 =A0 =A0 1 =A0 =A01 =
=A0 <s>b</s> =A0 =A0 =A0 7.3 =A0any cpu<br>

android_4.3 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A01 =A0 =A0 0 =
=A0 =A00 =A0 <s>b</s> =A0 =A0 180.8 =A0any cpu<br>android_4.3 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A01 =A0 =A0 1 =A0 =A00 =A0 <s>b</s> =
=A0 =A0 184.4 =A0any cpu<br></span><div class=3D"im"><br>
&gt; Are you using any vCPU-to-pCPU pinning?<br><br></div></div>No.<div cla=
ss=3D"im"><br><br>&gt; What about giving a try to it yourself? I think stan=
dardizing on one (a<br>&gt; set of) specific tool could be a good thing.<br=
>
<br></div></div>Yep, we&#39;ll try it.<div class=3D"im"><br>
<br>&gt; Mmm... Can you show us at least part of the numbers? From my exper=
ience,<br>&gt; it&#39;s really easy to mess up with terminology in this dom=
ain (delay,<br>&gt; latency, jitter, etc.).<br><br></div></div><div>Test wi=
th 30 ms sleep period gives such results (I hope formatting won&#39;t break=
):<br>

</div><div><br></div></div></div></div></div></div></blockquote><div>=A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=A0 Measurements=A0 Aver=
age Number of times with t &gt; 32 ms</div><div>Credit=A0=A0=A0=A0=A0=A0=A0=
 Dom0=A0=A0=A0=A0=A0=A0=A0=A0=A0 =A0=A0 358=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0=A0 32.5=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0=A0=A0 72<br>
</div><div>=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 DomU=A0=A0=A0=
=A0=A0=A0=A0=A0=A0=A0=A0 358=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 40.7=A0=A0=
=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 358<br><br>=
</div><div>sEDF=A0=A0=A0=A0=A0=A0=A0 Dom0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0 358=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 33.6=A0=A0=A0=A0=A0=A0=A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0 120<br></div><div>=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0=A0=A0=A0=A0=A0=A0 DomU=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 358=A0=A0=A0=A0=
=A0=A0=A0=A0=A0=A0=A0=A0 40.7=A0=A0=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
=A0=A0 358<br>
<br></div>We did additional measurements and as you can see, my first impre=
ssion was not very correct: difference between dom0 and domU exist and is q=
uite observable on a larger scale. On the same setup bare metal without Xen=
 number of times t &gt; 32 is close to 0; on the setup with Xen but without=
 domU system running number of times t &gt; 32 is close to 0 as well.=A0 We=
 will make additional measurements with Linux (not Android) as a domU guest=
, though.<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-=
left:1px solid rgb(204,204,204);padding-left:1ex"><div dir=3D"ltr"><div><di=
v><div><div>
<br></div><div>Raw data looks like this:<br></div></div></div></div></div><=
/blockquote><div><br></div><div>Credit:<br></div><div>Dom0=A0=A0=A0 DomU<br=
></div><div>38=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 46<br>31=A0=A0=A0=A0=A0=A0=
=A0=A0=A0=A0=A0 46<br>31=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 46<br>39=A0=A0=A0=
=A0=A0=A0=A0=A0=A0=A0=A0 46<br>
39=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 46<br>31=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0 46<br>31=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 39<br><br></div><div>sEDF:<br=
></div><div>Dom0=A0=A0=A0 DomU<br>38=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 39<br=
>39=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 39<br>39=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=
=A0 39<br>39=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 46<br>39=A0=A0=A0=A0=A0=A0=A0=
=A0=A0=A0=A0 46<br>31=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 39<br>
31=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0=A0 39</div><blockquote class=3D"gmail_quot=
e" style=3D"margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204)=
;padding-left:1ex"><div dir=3D"ltr"><div><div><div><br><div>So as you can s=
ee, there are peak values both in dom0 and domU.<br>
</div><div class=3D"im"><div><br>&gt; AFAIUI, you&#39;re saying that you&#3=
9;re asking for a sleep time of X, and<br>&gt; you&#39;re being waking up i=
n the interval [x+5ms, x+15ms], is that the<br>
&gt; case?<br><br></div></div>Yes, that&#39;s correct. In the numbers above=
 sleep period is 30 ms, so we were expecting 30-31 ms sleep time as it is o=
n the same setup without Xen.<br><br>&gt; # xl sched-sedf<br><br><span dir=
=3D"ltr"> # xl sched-sedf<br>

Cpupool Pool-0:<br>Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0ID Period Slice =A0Latency Extra Weight<br>Domain-0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 0 =A0 =A0100 =A0 =A0 =A00 =A0 =
=A0 =A0 0 =A0 =A0 1 =A0 =A0 =A00<br>android_4.3 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A01 =A0 =A0100 =A0 =A0 =A00 =A0 =A0 =A0 0 =A0 =A0 1 =
=A0 =A0 =A00</span><div class=3D"im">
<br>
<br>&gt; Or you have some other load in the system while<br>&gt; performing=
 these measurements? If the latter, what and where?<br><br></div></div>Duri=
ng this test both guests were almost idle.<div class=3D"im"><br><br>&gt; Oh=
, and now that I think about it, something that present in credit and<br>

&gt; not in sEDF that might be worth checking is the scheduling rate limiti=
ng<br>&gt; thing.<br><br></div></div>We&#39;ll check it out, thanks!<br><br=
></div>Regards, Pavlo<br></div>
</blockquote></div><br></div></div></div></div>

--047d7b471e6a29dd8304f06a43fb--


--===============1035845384834103901==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1035845384834103901==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 17:31:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IhF-0000tR-Td; Mon, 20 Jan 2014 17:31:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5IhE-0000tH-42
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 17:31:36 +0000
Received: from [85.158.137.68:31200] by server-14.bemta-3.messagelabs.com id
	45/6D-06105-77D5DD25; Mon, 20 Jan 2014 17:31:35 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390239091!10196069!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,DIET_1,
	HTML_40_50,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 629 invoked from network); 20 Jan 2014 17:31:33 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 17:31:33 -0000
Received: from mail-oa0-f49.google.com ([209.85.219.49]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt1dcxH8kYi9VyuCwgStQuOO7Od40b82@postini.com;
	Mon, 20 Jan 2014 09:31:33 PST
Received: by mail-oa0-f49.google.com with SMTP id i7so5551949oag.8
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 09:31:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=8xd1Dom5gx+IrFjUrxB5Kcx+ddd6EbP5Ms0Umiy0sQY=;
	b=S06qeUX2W/KeTlRScXOnjrWupMYJlAy6x5AT+pS4SATzY54IEDBjIgPk6Du1DjOMAg
	s0nJWjDTKJ5BrOaf8+6V5vLkX2bHQP3Bg3LWxhrCJQVrjUMwd7da4TiitKFtD1ZjiCl/
	BD5XtuaJNkd37R2FzGpFTdrhlzAiA6kJYQHdE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=8xd1Dom5gx+IrFjUrxB5Kcx+ddd6EbP5Ms0Umiy0sQY=;
	b=NOwHSc/9eh06TpC0TUJ8ByfA2R02eysnDSjlw1fWmUfxhhFJ4T9D9tzgOPcijANm6E
	FmJOMqoAGGYcGCuVuge0yTxSf5VXbTlS/3yzbWYtuCDM3QD6203bI9OgLTQIZxXtiQ8q
	eXJeBRnPhyVNeLbHwmC8kfBP3R2HvkHxsVm7sltLeGlp//gxQuXSY42kN7vf37wCOnlh
	CP0xLHY5Ws0wOQ5t7KjDx65HLLAtFOvPuxNL7JA0+38cVZB7lUPgr1+Vn0KzAJRGZE7R
	JecFjg/KAQv8qF8+e/lHXffY+Txpiry74EVIbHhFq2dU02Ae9ItfehxD1IhQah6u4C5l
	0Q5w==
X-Gm-Message-State: ALoCoQkwpKGIgQH8IgivhVYNDj0hd/TW//xHY49OFtDRDMDODRz55goSutFjdqhMxfjONj79KRIzsyt6kvIUGDVZjYkz1uKmapnF1wyAG6t0MYIf33PCpAaacMhlIjUY1VSNAnpBT2ThB/UTvVfEQ+WmulVI/kMs3EYnLaZBdFqEz9M/M4eSjpE=
X-Received: by 10.60.134.230 with SMTP id pn6mr7398389oeb.40.1390239090983;
	Mon, 20 Jan 2014 09:31:30 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.134.230 with SMTP id pn6mr7398377oeb.40.1390239090853;
	Mon, 20 Jan 2014 09:31:30 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 20 Jan 2014 09:31:30 -0800 (PST)
In-Reply-To: <CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
Date: Mon, 20 Jan 2014 19:31:30 +0200
Message-ID: <CAE4oM6wiHT6Y3wQ809uPHXO3N+kzVmSt-fcp1QzG-qbVeRmfww@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1035845384834103901=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1035845384834103901==
Content-Type: multipart/alternative; boundary=047d7b471e6a29dd8304f06a43fb

--047d7b471e6a29dd8304f06a43fb
Content-Type: text/plain; charset=ISO-8859-1

Hi again,

sorry for the broken formatting; see a better version below, where the tables
should be.

> x86 or ARM host?

>
> ARM. ARMv7, TI Jacinto6 to be precise.
>
>
> > Also, how many pCPUs and vCPUs do the host and the various guests have?
>
> 2 pCPUs, 4 vCPUs: 2 vCPU per domain.
>
>
> > # xl list -n
> > # xl vcpu-list
>
> # xl list -n
> Name                                        ID   Mem VCPUs      State
> Time(s) NODE Affinity
> Domain-0                                     0   118     2     r-----
>  20.5 any node
> android_4.3                                  1  1024     2     -b----
> 383.5 any node
>
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU
> Affinity
> Domain-0                             0     0    0   r--      12.6  any cpu
> Domain-0                             0     1    1   b       7.3  any cpu
> android_4.3                          1     0    0   b     180.8  any cpu
> android_4.3                          1     1    0   b     184.4  any cpu
>
> > Are you using any vCPU-to-pCPU pinning?
>
> No.
>
>
> > What about giving a try to it yourself? I think standardizing on one (a
> > set of) specific tool could be a good thing.
>
> Yep, we'll try it.
>
>
> > Mmm... Can you show us at least part of the numbers? From my experience,
> > it's really easy to mess up with terminology in this domain (delay,
> > latency, jitter, etc.).
>
> Test with 30 ms sleep period gives such results (I hope formatting won't
> break):
>
                Measurements   Average   Number of times with t > 32 ms
Credit   Dom0       358          32.5                  72
         DomU       358          40.7                 358

sEDF     Dom0       358          33.6                 120
         DomU       358          40.7                 358

We did additional measurements and, as you can see, my first impression was
not quite correct: a difference between dom0 and domU exists and is quite
observable at a larger scale. On the same setup running bare metal without
Xen, the number of times t > 32 ms is close to 0; on the setup with Xen but
without the domU running, the number of times t > 32 ms is close to 0 as
well. We will make additional measurements with Linux (not Android) as the
domU guest, though.
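For reference, the kind of measurement loop described above can be sketched
as follows. This is our own minimal sketch assuming a POSIX
clock_gettime/nanosleep based loop; it is not the actual test code, and the
helper names are made up:

```c
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Wall-clock difference in whole milliseconds between two timestamps. */
static long elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1000 +
           (b->tv_nsec - a->tv_nsec) / 1000000;
}

/* Sleep for period_ms once and return how long the sleep actually took.
 * Comparing the result against period_ms + 2 gives the "t > 32 ms"
 * criterion used in the table above for a 30 ms period. */
static long sleep_once_ms(long period_ms)
{
    struct timespec req = { period_ms / 1000,
                            (period_ms % 1000) * 1000000L };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return elapsed_ms(&t0, &t1);
}
```

Calling sleep_once_ms(30) a few hundred times and counting the results above
32 would reproduce the "Number of times with t > 32 ms" column.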

>
> Raw data looks like this:
>

Credit:
Dom0    DomU
38            46
31            46
31            46
39            46
39            46
31            46
31            39

sEDF:
Dom0    DomU
38            39
39            39
39            39
39            46
39            46
31            39
31            39

>
> So as you can see, there are peak values both in dom0 and domU.
>
> > AFAIUI, you're saying that you're asking for a sleep time of X, and
> > you're being waking up in the interval [x+5ms, x+15ms], is that the
> > case?
>
> Yes, that's correct. In the numbers above sleep period is 30 ms, so we
> were expecting 30-31 ms sleep time as it is on the same setup without Xen.
>
> > # xl sched-sedf
>
> # xl sched-sedf
> Cpupool Pool-0:
> Name                                ID Period Slice  Latency Extra Weight
> Domain-0                             0    100      0       0     1      0
> android_4.3                          1    100      0       0     1      0
>
>
> > Or you have some other load in the system while
> > performing these measurements? If the latter, what and where?
>
> During this test both guests were almost idle.
>
>
> > Oh, and now that I think about it, something that present in credit and
> > not in sEDF that might be worth checking is the scheduling rate limiting
> > thing.
>
> We'll check it out, thanks!
>
> Regards, Pavlo
>

--047d7b471e6a29dd8304f06a43fb--


--===============1035845384834103901==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1035845384834103901==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 17:48:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IxA-0001jz-KB; Mon, 20 Jan 2014 17:48:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5Ix8-0001ju-VI
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 17:48:03 +0000
Received: from [85.158.137.68:53396] by server-1.bemta-3.messagelabs.com id
	C3/57-29598-2516DD25; Mon, 20 Jan 2014 17:48:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390240079!9082919!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28485 invoked from network); 20 Jan 2014 17:48:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 17:48:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92503204"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 17:47:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 12:47:58 -0500
Message-ID: <52DD614D.5080604@citrix.com>
Date: Mon, 20 Jan 2014 17:47:57 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
	<20140116000334.GE5331@zion.uk.xensource.com>
	<52D98427.1060103@citrix.com>
	<20140120165356.GF11681@zion.uk.xensource.com>
In-Reply-To: <20140120165356.GF11681@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 16:53, Wei Liu wrote:
>>>> @@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
>>>>   		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>>>>   			unmap_timeout++;
>>>>   			schedule_timeout(msecs_to_jiffies(1000));
>>>> -			if (unmap_timeout > 9 &&
>>>> +			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
>>>
>>> This line is really too long. And what's the rationale behind this long
>>> expression?
>> It calculates how many times you should ditch the internal queue of
>> another (maybe stuck) vif before Qdisc empties its actual
>> content. After that there shouldn't be any mapped handle left, so we
>> should start printing these messages. Actually it should use
>> vif->dev->tx_queue_len, and yes, it is probably better to move it to
>> the beginning of the function into a new variable, and use that
>> here.
>>
>
> Why is it relative to tx queue length?
>
> What's the meaning of drain_timeout multiplied by the last part
> (DIV_ROUND_UP)?
>
> If you proposed to use vif->dev->tx_queue_len to replace DIV_ROUND_UP
> then ignore the above question. But I still don't understand the
> rationale behind this. Could you elaborate a bit more? Wouldn't
> rx_drain_timeout_msecs/1000 alone suffice?

Here we want to avoid timeout messages if an skb can be legitimately stuck 
somewhere else. As we discussed earlier, realistically this could be 
another vif's internal or Qdisc queue. That other vif also has this 
rx_drain_timeout_msecs timeout, but now, with Paul's recent changes, the 
timer only ditches the internal queue. After that, the Qdisc queue can, in 
the worst case, put XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that 
other vif's internal queue, so we need several rounds of such timeouts 
until we can be sure that no other vif still holds skbs from us. We are 
not sending more skbs, so newly stuck packets are not interesting for us 
here.
Using the current vif's queue length is not actually relevant in this 
calculation, as it doesn't mean other vifs have the same length. I think 
it is better to stick with XENVIF_QUEUE_LENGTH.
I've added this explanation as a comment and moved the calculation into 
a separate variable, so it doesn't cause such long lines.
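To make the arithmetic concrete, the threshold can be sketched like this.
The constant values below are illustrative assumptions for this sketch; the
real ones come from the driver and netif headers:

```c
/* Worst-case number of 1-second waits before complaining about unreturned
 * grant handles. Each time another (possibly stuck) vif's internal queue
 * is drained, its Qdisc can refill it with at most
 * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs, so fully draining a queue
 * of XENVIF_QUEUE_LENGTH skbs takes this many drain rounds.
 * Values below are assumed for illustration, not taken from the driver. */
#define XENVIF_QUEUE_LENGTH      32
#define XEN_NETIF_RX_RING_SIZE  256
#define MAX_SKB_FRAGS            17
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

static unsigned int worst_case_drain_rounds(unsigned int drain_timeout_secs)
{
    return drain_timeout_secs *
           DIV_ROUND_UP(XENVIF_QUEUE_LENGTH,
                        XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS);
}
```

With these assumed numbers, each drain frees room for 256 / 17 = 15 skbs,
so a 32-skb queue needs 3 rounds per timeout second; a 10-second
rx_drain_timeout would make the guard unmap_timeout > 30 rather than the
hard-coded 9.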

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 17:48:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5IxA-0001jz-KB; Mon, 20 Jan 2014 17:48:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5Ix8-0001ju-VI
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 17:48:03 +0000
Received: from [85.158.137.68:53396] by server-1.bemta-3.messagelabs.com id
	C3/57-29598-2516DD25; Mon, 20 Jan 2014 17:48:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390240079!9082919!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28485 invoked from network); 20 Jan 2014 17:48:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 17:48:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92503204"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 17:47:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 12:47:58 -0500
Message-ID: <52DD614D.5080604@citrix.com>
Date: Mon, 20 Jan 2014 17:47:57 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389731995-9887-1-git-send-email-zoltan.kiss@citrix.com>
	<1389731995-9887-9-git-send-email-zoltan.kiss@citrix.com>
	<20140116000334.GE5331@zion.uk.xensource.com>
	<52D98427.1060103@citrix.com>
	<20140120165356.GF11681@zion.uk.xensource.com>
In-Reply-To: <20140120165356.GF11681@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v4 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 16:53, Wei Liu wrote:
>>>> @@ -559,7 +579,7 @@ void xenvif_free(struct xenvif *vif)
>>>>   		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>>>>   			unmap_timeout++;
>>>>   			schedule_timeout(msecs_to_jiffies(1000));
>>>> -			if (unmap_timeout > 9 &&
>>>> +			if (unmap_timeout > ((rx_drain_timeout_msecs/1000) * DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS))) &&
>>>
>>> This line is really too long. And what's the rationale behind this long
>>> expression?
>> It calculates how many times you should ditch the internal queue of
>> another (maybe stuck) vif before Qdisc empties its actual
>> content. After that there shouldn't be any mapped handle left, so we
>> should start printing these messages. Actually it should use
>> vif->dev->tx_queue_len, and yes, it is probably better to move it to
>> the beginning of the function into a new variable, and use that
>> here.
>>
>
> Why is it relative to the tx queue length?
>
> What's the meaning of drain_timeout multiplied by the last part
> (DIV_ROUND_UP)?
>
> If you proposed to use vif->dev->tx_queue_len to replace DIV_ROUND_UP
> then ignore the above question. But I still don't understand the
> rationale behind this. Could you elaborate a bit more? Wouldn't
> rx_drain_timeout_msecs/1000 alone suffice?

Here we want to avoid timeout messages if an skb can be legitimately 
stuck somewhere else. As we discussed earlier, realistically that would 
be another vif's internal or QDisc queue. That other vif also has this 
rx_drain_timeout_msecs timeout, but with Paul's recent changes the 
timer only ditches the internal queue. After each such drain, the QDisc 
queue can, in the worst case, put XEN_NETIF_RX_RING_SIZE / 
MAX_SKB_FRAGS skbs into that other vif's internal queue, so we need 
several rounds of such timeouts before we can be sure that no other 
vif still holds skbs from us. We are not sending more skbs, so newly 
stuck packets are of no interest here.
Actually, using the current vif's queue length is not relevant in this 
calculation, as it says nothing about other vifs' queue lengths. I 
think it is better to stick with XENVIF_QUEUE_LENGTH.
I've added this explanation as a comment and moved the calculation into 
a separate variable, so it doesn't cause such long lines.
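For illustration, the worst-case round count described above can be sketched as a standalone helper (hypothetical code, not the driver's actual function; the constant values are stand-ins for the real xen-netback definitions):

```c
/* Stand-in values; the real constants come from the xen-netback headers. */
#define XENVIF_QUEUE_LENGTH     32
#define XEN_NETIF_RX_RING_SIZE  256
#define MAX_SKB_FRAGS           17
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

/* Each rx_drain_timeout round lets another vif's internal queue accept
 * up to XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs from its QDisc, so
 * this many one-second waits must pass before a full queue of our skbs
 * has certainly drained and the unmap warnings become meaningful. */
static int worst_case_drain_rounds(int rx_drain_timeout_msecs)
{
    int skbs_per_round = XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS; /* 15 */
    return (rx_drain_timeout_msecs / 1000) *
           DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, skbs_per_round);
}
```

Hoisting this into one variable at the top of xenvif_free() is what keeps the condition line short.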

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 17:57:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 17:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5J5y-0002Gs-1W; Mon, 20 Jan 2014 17:57:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5J5x-0002Gn-6n
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 17:57:09 +0000
Received: from [85.158.137.68:38590] by server-14.bemta-3.messagelabs.com id
	FE/66-06105-4736DD25; Mon, 20 Jan 2014 17:57:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390240626!10199901!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7214 invoked from network); 20 Jan 2014 17:57:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 17:57:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92506197"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 17:57:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 12:57:05 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5J5s-0001v1-Ob;
	Mon, 20 Jan 2014 17:57:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5J5r-0002q4-B3;
	Mon, 20 Jan 2014 17:57:03 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21213.25453.756929.319250@mariner.uk.xensource.com>
Date: Mon, 20 Jan 2014 17:57:01 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390211927.20516.11.camel@kazak.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-12-git-send-email-ian.jackson@eu.citrix.com>
	<1390211927.20516.11.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 11/12] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 11/12] libxl: fork: Break out sigchld_sethandler_raw"):
> On Fri, 2014-01-17 at 16:24 +0000, Ian Jackson wrote:
> > @@ -202,12 +215,7 @@ static void sigchld_installhandler_core(libxl__gc *gc)
> >      assert(!sigchld_owner);
> >      sigchld_owner = CTX;
> >  
> > -    memset(&ours,0,sizeof(ours));
> 
> Is "ours" now an unused variable in this context?

Yes.  In fact these were deleted in the next patch.  I have moved the
deletion to this patch.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 18:15:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5JN6-0003HT-0F; Mon, 20 Jan 2014 18:14:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W5JN3-0003HO-S1
	for xen-devel@lists.xensource.com; Mon, 20 Jan 2014 18:14:50 +0000
Received: from [193.109.254.147:47830] by server-14.bemta-14.messagelabs.com
	id 61/1E-12628-9976DD25; Mon, 20 Jan 2014 18:14:49 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390241686!11988974!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18684 invoked from network); 20 Jan 2014 18:14:48 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 18:14:48 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 20 Jan 2014 11:14:41 -0700
Message-ID: <52DD678F.3070504@suse.com>
Date: Mon, 20 Jan 2014 11:14:39 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com>
In-Reply-To: <52D9AECF.6050309@suse.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig wrote:
> Ian Jackson wrote:
>   
>> libvirt reaps its children synchronously and has no central pid
>> registry and no dispatch mechanism.  libxl does have a pid registry so
>> can provide a selective reaping facility, but that is not currently
>> exposed.  Here we expose it.
>>
>> Also, libvirt has multiple libxl ctxs.  Prior to this series it is not
>> possible for those to share SIGCHLD: libxl expects either the
>> application, or _one_ libxl ctx, to own SIGCHLD.  In the final patch
>> of this series we relax this restriction by having libxl maintain a
>> process-wide list of the libxl ctxs that are supposed to be interested
>> in SIGCHLD.
>>
>> I have not tested the selective reaping functionality.  The most
>> plausible test environment for that is a suitably modified libvirt.
>>   
>>     
>
> I've been testing this series (plus 1/3 in your "tools: Miscellanous
> fixes for 4.4" series) on a suitably modified libvirt and the results
> look good so far :).
>
> I'm running four scripts concurrently that
>
> - start / stop domA
> - save / restore domB
> - reboot domC
> - get stats on dom{A,B,C}
>
> They have been running for about an hour now, and I haven't noticed any
> problems.
>   

I let this run over the weekend, and today noticed libvirtd was deadlocked:

Thread 5 (Thread 0x7ffff10ea700 (LWP 42142)):
#0  0x00007ffff4d20b7d in read () from /lib64/libpthread.so.0
#1  0x00007fffeb88d028 in libxl__self_pipe_eatall (fd=39) at libxl_event.c:1369
#2  0x00007fffeb88f52c in sigchld_selfpipe_handler (egc=0x7ffff10e9270, ev=0x5555559986e8, fd=39, events=1, revents=1) at libxl_fork.c:501
#3  0x00007fffeb88bbf5 in afterpoll_internal (egc=0x7ffff10e9270, poller=0x5555559a2b40, nfds=3, fds=0x5555558d96a0, now=...) at libxl_event.c:990
#4  0x00007fffeb88d2d2 in eventloop_iteration (egc=0x7ffff10e9270, poller=0x5555559a2b40) at libxl_event.c:1431
#5  0x00007fffeb88de18 in libxl__ao_inprogress (ao=0x5555559beb30, file=0x7fffeb8a0a1b "libxl_create.c", line=1356, func=0x7fffeb8a1530 <__func__.16339> "do_domain_create") at libxl_event.c:1676
#6  0x00007fffeb86813f in do_domain_create (ctx=0x555555998550, d_config=0x7ffff10e94d0, domid=0x7ffff10e944c, restore_fd=-1, checkpointed_stream=0, ao_how=0x0, aop_console_how=0x0) at libxl_create.c:1356
#7  0x00007fffeb86820d in libxl_domain_create_new (ctx=0x555555998550, d_config=0x7ffff10e94d0, domid=0x7ffff10e944c, ao_how=0x0, aop_console_how=0x0) at libxl_create.c:1377
#8  0x00007fffebad01b6 in libxlVmStart (driver=0x5555558b7be0, vm=0x5555558d3280, start_paused=false, restore_fd=-1) at libxl/libxl_driver.c:630
#9  0x00007fffebad7594 in libxlDomainCreateWithFlags (dom=0x5555559b9c00, flags=0) at libxl/libxl_driver.c:2924
#...

Thread 1 (Thread 0x7ffff7fc7840 (LWP 42135)):
#0  0x00007ffff4d2089c in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007ffff4d1c4f2 in _L_lock_957 () from /lib64/libpthread.so.0
#2  0x00007ffff4d1c35a in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007fffeb88943a in libxl__ctx_lock (ctx=0x555555998550) at libxl_internal.h:2760
#4  0x00007fffeb88bf3d in libxl_osevent_occurred_fd (ctx=0x555555998550, for_libxl=0x5555559953e0, fd=45, events_ign=0, revents_ign=1) at libxl_event.c:1049
#5  0x00007fffebacd56c in libxlDomainObjFDEventCallback (watch=40, fd=45, vir_events=1, fd_info=0x5555559b5b80) at libxl/libxl_domain.c:132
#...

It looks like thread 5 is blocked in read() while holding the ctx lock,
and thread 1 then receives an occurred_fd event on the same ctx and
blocks waiting for that lock.  But it is not clear to me why read() is
blocking...
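For reference, the usual defence against exactly this pattern is keeping the wakeup pipe's read end non-blocking, so draining it can never sleep while a lock is held. A minimal sketch of that "eat everything" idiom (hypothetical code with an assumed name, not libxl's actual implementation):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Drain every queued wakeup byte from a self-pipe.  With O_NONBLOCK set
 * on fd, an empty pipe makes read() return -1/EAGAIN instead of
 * blocking, so the caller can safely hold a mutex across this loop. */
static int eat_all(int fd)
{
    char buf[256];
    for (;;) {
        ssize_t r = read(fd, buf, sizeof(buf));
        if (r > 0)
            continue;                       /* drained some bytes, keep going */
        if (r == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;                       /* pipe empty: done, no blocking */
        return -1;                          /* EOF or real error */
    }
}
```

If the fd ever loses its non-blocking flag, this same loop turns into precisely the blocked read() shown in frame #1 of thread 5.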

Regards,
Jim



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 18:17:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5JPF-0003Np-Im; Mon, 20 Jan 2014 18:17:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5JPD-0003Nj-7l
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 18:17:03 +0000
Received: from [85.158.143.35:36669] by server-2.bemta-4.messagelabs.com id
	65/83-11386-E186DD25; Mon, 20 Jan 2014 18:17:02 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390241811!12824228!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTU0MDAgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28166 invoked from network); 20 Jan 2014 18:16:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 18:16:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92513825"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 18:16:41 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 13:16:40 -0500
Message-ID: <1390241799.23576.42.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Mon, 20 Jan 2014 19:16:39 +0100
In-Reply-To: <1386984785.3980.96.camel@Solace>
References: <1386984785.3980.96.camel@Solace>
Content-Type: multipart/mixed; boundary="=-udcPDT2/1ZUlzquM5prW"
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-udcPDT2/1ZUlzquM5prW
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

create ^
title it credit2 only uses one runqueue instead of one runq per socket
thanks

On sab, 2013-12-14 at 02:33 +0100, Dario Faggioli wrote:
> Hi George,
> 
BTW, creating a tracking bug entry for this issue.

> Now the question is, for fixing this, would it be preferable to do
> something along this line (i.e., removing the right side of the || and,
> in general, make csched_alloc_pdata() a pcpu 0 only thing)? Or, perhaps,
> should I look into a way to properly initialize the cpu_data array, so
> that cpu_to_socket() actually returns something '< 0' for pcpus not yet
> onlined and identified?
> 
I prepared the attached patch and gave it a quick try... only to
figure out that it won't work.

Well, it does for certain configurations (so, perhaps, Justin, if that
is your case you may be able to at least do some development on top of
it), but it's not the correct approach... Or at least it's not enough.

In fact, what it does is initialize the pCPU info field used by
cpu_to_socket() to -1, which means that now all pCPUs (apart from pCPU
0) are associated with the proper runqueue.

pCPU 0, OTOH, is always associated with runqueue 0, and that is
necessary and intended, as it does not get the notifier call, and hence
it needs to be initialized when the correct cpu_to_socket() information
is still not available. And that's where the problem is. In fact, this
is fine if pCPU 0 is actually on socket 0, but what if it is, say, on
socket 1? :-O
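To make the placement rule concrete, here is a toy model (hypothetical code, not the credit2 source): a socket of -1 means "not yet identified", and only such pCPUs fall back to runqueue 0.

```c
/* Toy model of the intended mapping: a pcpu whose socket is still
 * unknown (cpu_to_socket() == -1, i.e. it has not gone through the
 * CPU_UP notifier yet) falls back to runqueue 0; every other pcpu
 * follows its socket.  This is why pCPU 0, which must be placed before
 * topology information exists, always lands on runqueue 0 even when it
 * physically sits on socket 1. */
static int runqueue_for(int socket)
{
    return (socket < 0) ? 0 : socket;
}
```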

That happens to be the case on one of my test boxes, and here's what I
get on it:

root@Zhaman:~# xl dmesg |grep runqueue
(XEN) Adding cpu 0 to runqueue 0
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 1 to runqueue 1
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 2 to runqueue 1
(XEN) Adding cpu 3 to runqueue 1
(XEN) Adding cpu 4 to runqueue 1
(XEN) Adding cpu 5 to runqueue 1
(XEN) Adding cpu 6 to runqueue 1
(XEN) Adding cpu 7 to runqueue 1
(XEN) Adding cpu 8 to runqueue 0
(XEN) Adding cpu 9 to runqueue 0
(XEN) Adding cpu 10 to runqueue 0
(XEN) Adding cpu 11 to runqueue 0
(XEN) Adding cpu 12 to runqueue 0
(XEN) Adding cpu 13 to runqueue 0
(XEN) Adding cpu 14 to runqueue 0
(XEN) Adding cpu 15 to runqueue 0

root@Zhaman:~# xl dmesg |grep 'runqueue 0'|cat -n
     1	(XEN) Adding cpu 0 to runqueue 0
     2	(XEN) Adding cpu 8 to runqueue 0
     3	(XEN) Adding cpu 9 to runqueue 0
     4	(XEN) Adding cpu 10 to runqueue 0
     5	(XEN) Adding cpu 11 to runqueue 0
     6	(XEN) Adding cpu 12 to runqueue 0
     7	(XEN) Adding cpu 13 to runqueue 0
     8	(XEN) Adding cpu 14 to runqueue 0
     9	(XEN) Adding cpu 15 to runqueue 0
root@Zhaman:~# xl dmesg |grep 'runqueue 1'|cat -n
     1	(XEN) Adding cpu 1 to runqueue 1
     2	(XEN) Adding cpu 2 to runqueue 1
     3	(XEN) Adding cpu 3 to runqueue 1
     4	(XEN) Adding cpu 4 to runqueue 1
     5	(XEN) Adding cpu 5 to runqueue 1
     6	(XEN) Adding cpu 6 to runqueue 1
     7	(XEN) Adding cpu 7 to runqueue 1

:-(

I'll keep looking into this, although I can't promise it will be my top
priority for the coming weeks. :-/

If, in the meantime, someone (George?) has an idea on how to solve this,
I gladly accept suggestions. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-udcPDT2/1ZUlzquM5prW
Content-Disposition: attachment; filename="phys_proc_id-init.patch"
Content-Type: text/x-patch; name="phys_proc_id-init.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 42b8a59..1588d71 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -59,7 +59,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
-struct cpuinfo_x86 cpu_data[NR_CPUS];
+struct cpuinfo_x86 cpu_data[NR_CPUS] =
+	{ [0 ... NR_CPUS-1] = { .phys_proc_id=-1 } };
 
 u32 x86_cpu_to_apicid[NR_CPUS] __read_mostly =
 	{ [0 ... NR_CPUS-1] = BAD_APICID };

--=-udcPDT2/1ZUlzquM5prW
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-udcPDT2/1ZUlzquM5prW--



--=-udcPDT2/1ZUlzquM5prW
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

create ^
title it credit2 only uses one runqueue instead of one runq per socket
thanks

On sab, 2013-12-14 at 02:33 +0100, Dario Faggioli wrote:
> Hi George,
> 
BTW, creating a tracking bug entry for this issue.

> Now the question is, for fixing this, would it be preferable to do
> something along this line (i.e., removing the right side of the || and,
> in general, make csched_alloc_pdata() a pcpu 0 only thing)? Or, perhaps,
> should I look into a way to properly initialize the cpu_data array, so
> that cpu_to_socket() actually returns something '< 0' for pcpus not yet
> onlined and identified?
> 
I prepared the attached patch and gave it a quick try... only to
find out that it doesn't work.

Well, it does work for certain configurations (so, Justin, if that is
your case you may at least be able to do some development on top of
it), but it's not the correct approach... or at least it's not enough.

In fact, what it does is initialize the pCPU info field used by
cpu_to_socket() to -1, which means that all pCPUs (apart from pCPU 0)
are now associated with the proper runqueue.

pCPU 0, OTOH, is always associated with runqueue 0, and that is
necessary and intended: it does not get the notifier call, and hence
it needs to be initialized while the correct cpu_to_socket() information
is not yet available. And that's where the problem is. This is fine if
pCPU 0 actually is on socket 0, but what if it is, say, on socket 1? :-O

That happens to be the case on one of my test boxes, and here's what I
get on it:

root@Zhaman:~# xl dmesg |grep runqueue
(XEN) Adding cpu 0 to runqueue 0
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 1 to runqueue 1
(XEN)  First cpu on runqueue, activating
(XEN) Adding cpu 2 to runqueue 1
(XEN) Adding cpu 3 to runqueue 1
(XEN) Adding cpu 4 to runqueue 1
(XEN) Adding cpu 5 to runqueue 1
(XEN) Adding cpu 6 to runqueue 1
(XEN) Adding cpu 7 to runqueue 1
(XEN) Adding cpu 8 to runqueue 0
(XEN) Adding cpu 9 to runqueue 0
(XEN) Adding cpu 10 to runqueue 0
(XEN) Adding cpu 11 to runqueue 0
(XEN) Adding cpu 12 to runqueue 0
(XEN) Adding cpu 13 to runqueue 0
(XEN) Adding cpu 14 to runqueue 0
(XEN) Adding cpu 15 to runqueue 0

root@Zhaman:~# xl dmesg |grep 'runqueue 0'|cat -n
     1	(XEN) Adding cpu 0 to runqueue 0
     2	(XEN) Adding cpu 8 to runqueue 0
     3	(XEN) Adding cpu 9 to runqueue 0
     4	(XEN) Adding cpu 10 to runqueue 0
     5	(XEN) Adding cpu 11 to runqueue 0
     6	(XEN) Adding cpu 12 to runqueue 0
     7	(XEN) Adding cpu 13 to runqueue 0
     8	(XEN) Adding cpu 14 to runqueue 0
     9	(XEN) Adding cpu 15 to runqueue 0
root@Zhaman:~# xl dmesg |grep 'runqueue 1'|cat -n
     1	(XEN) Adding cpu 1 to runqueue 1
     2	(XEN) Adding cpu 2 to runqueue 1
     3	(XEN) Adding cpu 3 to runqueue 1
     4	(XEN) Adding cpu 4 to runqueue 1
     5	(XEN) Adding cpu 5 to runqueue 1
     6	(XEN) Adding cpu 6 to runqueue 1
     7	(XEN) Adding cpu 7 to runqueue 1

:-(

I'll keep looking into this, although I can't promise it will be my top
priority for the coming weeks. :-/

If, in the meantime, someone (George?) has an idea on how to solve this,
I gladly accept suggestions. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-udcPDT2/1ZUlzquM5prW
Content-Disposition: attachment; filename="phys_proc_id-init.patch"
Content-Type: text/x-patch; name="phys_proc_id-init.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 42b8a59..1588d71 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -59,7 +59,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
 cpumask_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
-struct cpuinfo_x86 cpu_data[NR_CPUS];
+struct cpuinfo_x86 cpu_data[NR_CPUS] =
+	{ [0 ... NR_CPUS-1] = { .phys_proc_id=-1 } };
 
 u32 x86_cpu_to_apicid[NR_CPUS] __read_mostly =
 	{ [0 ... NR_CPUS-1] = BAD_APICID };

--=-udcPDT2/1ZUlzquM5prW
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-udcPDT2/1ZUlzquM5prW--


From xen-devel-bounces@lists.xen.org Mon Jan 20 18:26:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5JXx-0003xV-1x; Mon, 20 Jan 2014 18:26:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5JXs-0003xQ-Ca
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 18:26:00 +0000
Received: from [85.158.139.211:58097] by server-3.bemta-5.messagelabs.com id
	E6/13-04773-73A6DD25; Mon, 20 Jan 2014 18:25:59 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390242357!8162854!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2195 invoked from network); 20 Jan 2014 18:25:58 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Jan 2014 18:25:58 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5Jbm-0001wi-HU; Mon, 20 Jan 2014 18:30:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Dario Faggioli <dario.faggioli@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1390242602.7482@bugs.xenproject.org>
References: <1386984785.3980.96.camel@Solace>
	<1390241799.23576.42.camel@Solace>
In-Reply-To: <1390241799.23576.42.camel@Solace>
X-Emesinae-Message: control
X-Emesinae-Control-From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Mon, 20 Jan 2014 18:30:02 +0000
Subject: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> --=-udcPDT2/1ZUlzquM5prW
Command failed: Unknown command `--=-udcPDT2/1ZUlzquM5prW'. at /srv/xen-devel-bugs/lib/emesinae/control.pl line 437, <M> line 29.
Stop processing here.

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 18:34:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5JfY-0004V9-5E; Mon, 20 Jan 2014 18:33:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W5JfW-0004V4-Io
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 18:33:54 +0000
Received: from [85.158.137.68:28091] by server-11.bemta-3.messagelabs.com id
	9B/C8-19379-11C6DD25; Mon, 20 Jan 2014 18:33:53 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390242832!10215297!1
X-Originating-IP: [209.85.215.49]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24731 invoked from network); 20 Jan 2014 18:33:52 -0000
Received: from mail-la0-f49.google.com (HELO mail-la0-f49.google.com)
	(209.85.215.49)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 18:33:52 -0000
Received: by mail-la0-f49.google.com with SMTP id y1so5745505lam.8
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 10:33:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=SDFwHq9srSl3PaTGZm/7q49G0XmfzGq7j0ytEAwwOlc=;
	b=D/r+goOfjvUlXJ8GwnUPj8BRaO7gBFZ5Ktq7IVBVOl3BRf33kNHQ7vi4OnrtrA/z0T
	C0g/5oIL6mFrg+bZ2qIEBv2wEqoRBwEy4AuT5tTpr5ntz4mk8sXbq+PkSnt6VgLzUar9
	Gi05xW8Pz5P9SMKT0iFXe/7qBBwgcvbZAW8xHpsXO7Bdyf79BePhGm3Wc6eUkpHUhx4Y
	G/N01iH9UObPiCmvqvh86EiVdGfYvBvkdxnlKRIm/QzZlZ3E176Dg8x2hpElOHairSdh
	P1dTkLVkUJvlF48FhmKAKwlvViEckOrPjtRrourKERTEWU7ghK8VlMaprHylikshXu3W
	47Fg==
X-Received: by 10.112.135.165 with SMTP id pt5mr2776893lbb.33.1390242832100;
	Mon, 20 Jan 2014 10:33:52 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Mon, 20 Jan 2014 10:33:32 -0800 (PST)
In-Reply-To: <52DCFC8F.1050607@citrix.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
	<52DCFC8F.1050607@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 20 Jan 2014 10:33:32 -0800
X-Google-Sender-Auth: ZveQULPY5sVhApxxD7WUPOTw98c
Message-ID: <CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 2:38 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> On 17/01/14 23:02, Luis R. Rodriguez wrote:
>> As per linux-next Next/Trees [0], and a recent January MAINTAINERS patch [1]
>> from David, one of the xen development kernel git trees to track is
>> xen/git.git [2]. This tree, however, has undefined references when doing a
>> fresh clone [shown below], but as expected it does work well when cloning only
>> the linux-next branch [also below]. While I'm sure this is fine for
>> folks who can do the guess work, do we really want to live with trees like
>> these in MAINTAINERS? The MAINTAINERS file doesn't let us specify required
>> branches, so perhaps it should -- if we want to live with these? Curious, how
>> many other git trees are there in a similar situation?
>
> We don't recommend doing development work for the Xen subsystem based on
> xen/tip.git so I think it's fine to have to checkout the specific branch
> you are interested in.

OK thanks.

>> The xen project web site actually lists [3] Konrad's xen git tree [4] for
>> development as the primary development tree, that probably should be
>> updated now, and likely with instructions to clone only the linux-next
>> branch ?
>
> I've updated the wiki to read:
>
>     For development the recommended branch is:
>
>         The mainline Linus linux.git tree.

Is the delta of what is queued for the next release typically small?
Otherwise, someone doing development based on linux.git alone would
have conflicts with anything on the queue, no?

>     To see what's queued for the next release, the next merge window,
>     and other work in progress:
>
>         The Xen subsystem maintainers' tip.git tree.

That's the thing: you can't properly clone the tip.git tree today; there
are undefined references and git gives up. Asking for just the linux-next
branch, however, did work.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 18:59:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5K4K-0005kT-P4; Mon, 20 Jan 2014 18:59:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5K4J-0005kO-3a
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 18:59:31 +0000
Received: from [85.158.143.35:2823] by server-1.bemta-4.messagelabs.com id
	D9/72-02132-2127DD25; Mon, 20 Jan 2014 18:59:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390244366!12912539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19004 invoked from network); 20 Jan 2014 18:59:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 18:59:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92533942"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 18:59:14 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 13:58:18 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 18:58:08 +0000
Message-ID: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   13 +++---
 drivers/xen/grant-table.c           |   81 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 4 files changed, 87 insertions(+), 30 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..87ded60 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
+		else {
+			unsigned long pfn = page_to_pfn(pages[i]);
+			WARN_ON(PagePrivate(pages[i]));
+			SetPagePrivate(pages[i]);
+			set_page_private(pages[i], mfn);
+			pages[i]->index = pfn_to_mfn(pfn);
+			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+				return -ENOMEM;
+		}
 		if (ret)
 			goto out;
 	}
@@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i], kmap_ops ?
+						  &kmap_ops[i] : NULL);
+		else {
+			unsigned long pfn = page_to_pfn(pages[i]);
+			WARN_ON(!PagePrivate(pages[i]));
+			ClearPagePrivate(pages[i]);
+			set_phys_to_machine(pfn, pages[i]->index);
+		}
 		if (ret)
 			goto out;
 	}
@@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 18:59:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 18:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5K4K-0005kT-P4; Mon, 20 Jan 2014 18:59:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5K4J-0005kO-3a
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 18:59:31 +0000
Received: from [85.158.143.35:2823] by server-1.bemta-4.messagelabs.com id
	D9/72-02132-2127DD25; Mon, 20 Jan 2014 18:59:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390244366!12912539!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19004 invoked from network); 20 Jan 2014 18:59:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 18:59:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92533942"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 18:59:14 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 13:58:18 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 18:58:08 +0000
Message-ID: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override they either follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
  m2p_override false
- the new functions gnttab_[un]map_refs_userspace provide the old behaviour
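
The split described above can be illustrated with a hypothetical,
userspace-compilable C sketch (this is not the kernel code from the patch
below; the struct, the overrides_added counter, and the function bodies are
invented stand-ins): one internal helper takes an m2p_override flag, and two
thin wrappers select the kernel-only or userspace path.

```c
/* Userspace sketch of the wrapper pattern; mirrors the shape of
 * __gnttab_map_refs / gnttab_map_refs / gnttab_map_refs_userspace
 * without any of the real grant-table or p2m machinery. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct map_op { int handle; };

/* Counts simulated m2p_add_override() calls, for illustration only. */
static int overrides_added;

static int __map_refs(struct map_op *ops, struct map_op *kmap_ops,
		      unsigned int count, bool m2p_override)
{
	(void)ops;
	/* kmap_ops only makes sense on the m2p override (userspace) path,
	 * matching the BUG_ON(kmap_ops && !m2p_override) in the patch. */
	assert(!(kmap_ops && !m2p_override));
	for (unsigned int i = 0; i < count; i++) {
		if (m2p_override)
			overrides_added++;	/* stands in for m2p_add_override() */
		/* the kernel-only path would just set PagePrivate and the
		 * p2m entry here, avoiding the m2p_override lock */
	}
	return 0;
}

/* Kernel-only callers (blkback, netback): no override, no kmap_ops. */
static int map_refs(struct map_op *ops, unsigned int count)
{
	return __map_refs(ops, NULL, count, false);
}

/* Userspace-facing callers (gntdev): keep the old override behaviour. */
static int map_refs_userspace(struct map_op *ops, struct map_op *kmap_ops,
			      unsigned int count)
{
	return __map_refs(ops, kmap_ops, count, true);
}
```

With this shape, existing kernel-only call sites simply drop the NULL
kmap_ops argument, while gntdev switches to the _userspace variant.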

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   13 +++---
 drivers/xen/grant-table.c           |   81 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 4 files changed, 87 insertions(+), 30 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..87ded60 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
+		else {
+			unsigned long pfn = page_to_pfn(pages[i]);
+			WARN_ON(PagePrivate(pages[i]));
+			SetPagePrivate(pages[i]);
+			set_page_private(pages[i], mfn);
+			pages[i]->index = pfn_to_mfn(pfn);
+			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+				return -ENOMEM;
+		}
 		if (ret)
 			goto out;
 	}
@@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i], kmap_ops ?
+						  &kmap_ops[i] : NULL);
+		else {
+			unsigned long pfn = page_to_pfn(pages[i]);
+			WARN_ON(!PagePrivate(pages[i]));
+			ClearPagePrivate(pages[i]);
+			set_phys_to_machine(pfn, pages[i]->index);
+		}
 		if (ret)
 			goto out;
 	}
@@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 19:02:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:02:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5K7M-0005tt-H9; Mon, 20 Jan 2014 19:02:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5K7L-0005tm-1u
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 19:02:39 +0000
Received: from [85.158.137.68:16554] by server-7.bemta-3.messagelabs.com id
	C4/EE-27599-EC27DD25; Mon, 20 Jan 2014 19:02:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390244555!10260498!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16183 invoked from network); 20 Jan 2014 19:02:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 19:02:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; d="scan'208";a="92535680"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 19:02:35 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 14:02:34 -0500
Message-ID: <52DD72C9.1040202@citrix.com>
Date: Mon, 20 Jan 2014 19:02:33 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
	<52DCFC8F.1050607@citrix.com>
	<CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
In-Reply-To: <CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 18:33, Luis R. Rodriguez wrote:
> On Mon, Jan 20, 2014 at 2:38 AM, David Vrabel <david.vrabel@citrix.com> wrote:
>> On 17/01/14 23:02, Luis R. Rodriguez wrote:
>>> As per linux-next Next/Trees [0], and a recent January MAINTAINERS patch [1]
>>> from David, one of the xen development kernel git trees to track is
>>> xen/git.git [2]. This tree, however, has undefined references when doing a
>>> fresh clone [shown below], but as expected works well when only cloning
>>> the linux-next branch [also below]. While I'm sure this is fine for
>>> folks who can do the guesswork, do we really want to live with trees like
>>> these in MAINTAINERS? The MAINTAINERS file doesn't let us specify required
>>> branches, so perhaps it should -- if we want to live with these? Curious, how
>>> many other git trees are there in a similar situation?
>>
>> We don't recommend doing development work for the Xen subsystem based on
>> xen/tip.git so I think it's fine to have to checkout the specific branch
>> you are interested in.
> 
> OK thanks.
> 
>>> The xen project web site actually lists [3] Konrad's xen git tree [4] for
>>> development as the primary development tree; that probably should be
>>> updated now, likely with instructions to clone only the linux-next
>>> branch?
>>
>> I've updated the wiki to read:
>>
>>     For development the recommended branch is:
>>
>>         The mainline Linus linux.git tree.
> 
> Is the delta of what is queued for the next release typically small?
> Otherwise someone doing development based on linux.git alone should
> have conflicts with anything on the queue, no?

We've not had any issues so far.

>>     To see what's queued for the next release, the next merge window,
>>     and other work in progress:
>>
>>         The Xen subsystem maintainers' tip.git tree.
> 
> That's the thing, you can't cleanly clone the tip.git tree today: there
> are undefined references and git gives up. Asking for the linux-next
> branch, however, did work.

I think it did work; you just needed to check out a branch manually.

But it looks like Konrad has pushed a master branch recently, as well.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 19:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5KSl-0006oy-7F; Mon, 20 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <knusperbrot@gmx.de>) id 1W5ISy-000849-FQ
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 17:16:52 +0000
Received: from [85.158.143.35:56097] by server-2.bemta-4.messagelabs.com id
	7C/2B-11386-30A5DD25; Mon, 20 Jan 2014 17:16:51 +0000
X-Env-Sender: knusperbrot@gmx.de
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390238210!12847873!1
X-Originating-IP: [212.227.17.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjIxID0+IDI1Mjg0\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjIxID0+IDI1Mjg0\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9828 invoked from network); 20 Jan 2014 17:16:51 -0000
Received: from mout.gmx.net (HELO mout.gmx.net) (212.227.17.21)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Jan 2014 17:16:51 -0000
Received: from [172.20.77.101] ([92.228.247.163]) by mail.gmx.com (mrgmx101)
	with ESMTPSA (Nemesis) id 0LabZr-1Vc2bF1pKJ-00mNMZ for
	<xen-devel@lists.xen.org>; Mon, 20 Jan 2014 18:16:50 +0100
Message-ID: <52DD5A00.8070502@gmx.de>
Date: Mon, 20 Jan 2014 18:16:48 +0100
From: Martin Unzner <knusperbrot@gmx.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DBC709.80707@gmx.de>
	<1390212958.20516.21.camel@kazak.uk.xensource.com>
	<52DD19B10200007800115046@nat28.tlf.novell.com>
In-Reply-To: <52DD19B10200007800115046@nat28.tlf.novell.com>
X-Provags-ID: V03:K0:w1KY2tvxdDn74WeDqPUO/m7VuQIHtcdO+l24EhOfEBKaF5DF6Mo
	1KK21ZH7vQUA4MN/ZBD3q180NC5xMR9Rb6YQ2ey/0TdnKdN0gScgQDqz46mHRcEMHcuJxIx
	qxP7EbCdCNfHWPRSS6lnsmVTN9jYuqRaa8bVUXvrhUHQXtmpM0/YXZ1SUbteMosdY5sGVUy
	BDVErVGvhECc1ur+pyrsw==
X-Mailman-Approved-At: Mon, 20 Jan 2014 19:24:45 +0000
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN bug at traps.c:3271
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

OK, well, that is unfortunate.

I suppose you will announce new versions of Xen on the xen-users mailing 
list, or is there another channel where you would communicate a solution 
for this issue?

Thanks!

On 20.01.2014 12:42, Jan Beulich wrote:
>>>> On 20.01.14 at 11:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Sun, 2014-01-19 at 13:37 +0100, Martin Unzner wrote:
>>> Hi list,
>>>
>>> I just installed Xen 4.3.1 for EFI according to
>>> https://bbs.archlinux.org/viewtopic.php?pid=1359933 and tried to boot
>>> it. There was a crash at the very beginning, notifying me of a bug at
>>> traps.c:3271. I could not find any logs, so I just noted the stack trace
>>> on a sheet of paper:
>>>
>>> do_device_not_available
>>> handle_exception
>>> efi_get_time
>>> get_cmos_time
>>> init_xen_time
>>> __start_xen
>>>
>>> Does that mean my hardware is incompatible?
>> No, just that you've found a bug I think.
>>
>> I'm copying the devel list here to see if anyone has any clues.
> We've seen this before: you have unfortunately got one of those
> UEFI implementations that use XMM (or, less likely, FPU) registers,
> and Xen doesn't allow that. This is a consequence of the UEFI
> specification being imprecise here: some firmware implementors
> read it as permitting such use, while in my reading it is not
> allowed. This is currently being raised with the USWG for a
> resolution. I'm afraid there's nothing you can do to work around
> it for the time being (short of not booting via xen.efi, which,
> depending on how you do it and on your firmware,
> may have other bad side effects).
>
> Jan
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 19:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5KSl-0006p5-NL; Mon, 20 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W5KBJ-00064h-LE
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 19:06:47 +0000
Received: from [85.158.143.35:28173] by server-3.bemta-4.messagelabs.com id
	B7/68-32360-4C37DD25; Mon, 20 Jan 2014 19:06:44 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390244801!12897540!1
X-Originating-IP: [72.30.239.77]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4391 invoked from network); 20 Jan 2014 19:06:42 -0000
Received: from nm34-vm5.bullet.mail.bf1.yahoo.com (HELO
	nm34-vm5.bullet.mail.bf1.yahoo.com) (72.30.239.77)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 19:06:42 -0000
Received: from [98.139.214.32] by nm34.bullet.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:40 -0000
Received: from [98.139.213.10] by tm15.bullet.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:39 -0000
Received: from [127.0.0.1] by smtp110.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390244799; bh=wxtNlxT7xDUrJP8eP2qaxKKmG8Iu3vojc4nrSaI0gSI=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version;
	b=Qe+n7VEzyMZzhzlYeY+CoE7qgivd1raupEj2czut9UlWFmOAc9qhodJQ1NPFsCOtoDquSNRbRK+sMEQ/xSz62m4Zne5S+IphyEV/g9qNO6xbLkNDxOhOYJTbDbaOD65ZpBHTLAIKviSAwzmTGLcTn7Xv3tmgJPup1JH8I8XCS4E=
X-Yahoo-Newman-Id: 522902.50386.bm@smtp110.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: UaMhH.QVM1lbCHTmOdMBMtwj9v8Rme8HKwp7ZS4iKAKkcS9
	WP3hUBToFPVgpNP8fjyuWBaKrCUbo0hYq.zoJKdOIwACWfXeQ9dwLJ4J6jpS
	ChOxycOzd0flz5GSB74FJjTDXbmVDuRi_whKAm96FIbiI3gDAk73FeFCVezd
	ihHZttMiuENwyS8M1x.SDntPIjFcCa_gMC1MzrlVtArF5xRvdt7gFtOY.Dh4
	MtFd3Fs1454C1uJW_08mxLG4CmbKeNxUrgvBacypFmO2g28ve52rq8KMelTp
	oWqYaQ.sSSV_qWRRLnsNCmi3yebuy2kl4jvwsre9qG382sXIpR7FZz0Vneml
	EjJe2F2PSh2VsbWCzRT6VXCZma4LCbCNJ47Fg3MaRY.mhrE9i6lTpD2V485n
	darVkLLCfwx1.5oLqKWGHj8M3tmDMATlWWT1T6RJ.7sjGJE0FAjWai9PBqw2
	woVzRpSQVFtzsNG94FOtFyyE1vjSm9CeUaBzoY38Ppr99xcsHg6CIk.GGIvZ
	HH6G695Q7iEgKWlekbxIIwudEtPpcl1Rn8ABEgmrxPx7jCOD0ew--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp110.mail.bf1.yahoo.com with SMTP; 20 Jan 2014 11:06:39 -0800 PST
Message-ID: <1390244796.2322.6.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: xen-devel@lists.xen.org
Date: Mon, 20 Jan 2014 12:06:36 -0700
Content-Type: multipart/mixed; boundary="=-7SiOlc9xVXlIsddwPZiY"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Mon, 20 Jan 2014 19:24:45 +0000
Subject: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-7SiOlc9xVXlIsddwPZiY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

xen-devel list,

Testing the Xen 4.4 RC2 RPMs on Fedora 20 caused Xen to crash at boot.
A screenshot of the crash is attached.  The hardware is a Gigabyte
GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to the Xen
command line allows the system to boot as expected.
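[Editor's note] For readers unfamiliar with applying such a workaround: on GRUB2-based distributions the Xen command line usually comes from a variable in /etc/default/grub. The file path, variable name, and existing option below are common conventions, not taken from Eric's report; adjust for your system. A minimal sketch of appending the option:

```shell
# Hypothetical existing line from /etc/default/grub (Fedora convention).
cfg='GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M"'

# Append the workaround option just inside the closing quote.
new=$(printf '%s\n' "$cfg" | sed 's/"$/ iommu=no-amd-iommu-perdev-intremap"/')
echo "$new"
```

After editing the real file, regenerate the GRUB configuration (e.g. grub2-mkconfig on Fedora) for the change to take effect.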

Thanks,

Eric

--=-7SiOlc9xVXlIsddwPZiY
Content-Type: image/jpeg; name="panic.jpg"
Content-Disposition: attachment; filename="panic.jpg"
Content-Transfer-Encoding: base64

[base64-encoded JPEG screenshot (panic.jpg) elided]
s6TlGKTKN4oluHXDnB47Aip7WzQRPIGIXA3hByB/nAq8mmqiK0hAkJBAb5s1KNOjikedyYwZcFDj
j0rJNFN6lZIBcSNA4SMAZDNgdO3akjtJBEABu2nGUIwB7+tTXJKl2CNswMHofqaZO4hnRR+6dT8x
5J6en/1qTQeZDeOsJkbYFQN1HfjkVRaeGBvN4uGI27HyOcYq46b5ziM4JxjHAb0/lVK92xTuHjwA
PTvVJpjuKl5DKIlud7IjEkDt7D64xSS3yTPKuzajHIHXHHAqszKHaPbuBbKleaJ5owxCZXH8WKGh
3JpLwvbhTGMJwoA6DuaZBPGjvIV+83TOadLcoJVICPGRyB9P50G5t1RtoOeRtP6U0IrttBYFTH3A
pYYJJJGEY3HGTmomyR16+nTFWLeBSpkztycDnFNjR9A2ZligYvIrEI8pY7tvXPI5HB9+54rQ06Us
4bdLcSLksxGGPOMenf8ArWNpF29wYpCEi8zkeWMANgcYHHAGelb1hcPGgJ80QyKVG0gDGBkn68j/
APVXSnqc/M10I7izF4XKbp0jYOYlzwBkHg8fXn1ryz4yyRnxhc/63BVGV5FOSNoHUjp19eMV7HY4
kLHyhGeVVTgYHUHAOMfj3rx341HzPG0pCSyobeBmkmOSf3YxyB+H0ApS16kQa5jz5naNzxvLd8V0
XhhpIJ4sLvfdlY8DnvuOelY02zy0k3btw+5/drpfh/a/aPFOkxyKskVxOsJEjYUEsAMgc/xVjsdG
57l4d8E6j4h0Z9SiiCqqgokgwDk47Z7Z6cU/TtHkvtRg09WMc8h2rtYgM3YFQpwP9r1r1z492sng
Oy0nRdMvBFYXMOZRGu1mIbPyt65PPHINeQWcs+nGK4jkNvKhLxt3bDfQ55HWtEnJXMJbmv4v8G33
hHVbay1L93O8RkCsNpKHPXPXGDzVnw34I1LxlNNBpEX2k28ZnaMsNxXIG3pjuO9esmKD9oPw4srF
NP8AF1hGIgr4AlRiTkDuM9uxxWhaRr8A/DV0RMlx4kntzAsWeEPyuC2MZBHP1qHKw1CNtTxnwn8P
de8TQSpY2cjukpieIp91s85xkDHIz7da0Y/g74kF28TafjUA8ism3cAADgDnJOc5HvXoX7NuqXV5
8XvMaQrHqCzTTRRzhQG2AAYA5Gecnp61Y+ILeKPDfxV17xJpFzdJFpMyyyRY3wkSLjJIPzdenUZB
4qVPWxsoqKWp5F4j8M6rpeuf2ZewyQ3cRAS2cZGTnCjJz69/8Ki1jwtqfhxhbX9q0UgDSASjDBST
hsfhj/8AVXc+N/Hll41+K+ja5bI8Qu3s9ysflSRSodCcHg49M8+9fVfxg+GemfE7SZbGR4YNXiQp
BdqAshVctsHUEH5h68n1pzny6Dsj4Vi8JaqdEbWXtJjpccxia4xhOTjJ9xz+VZ8lsLieJYxkMQCy
8ZbsF+v86+ubfwlqPhr9mDXtN1C1ENyiSSvE7lyFMwyMjpx6etfKkcYh2GAEKh5Qnk47FgcnkDtz
3pQm3sY21Ga7oGp+HyTqMF1ZmYGXy3jJAA+UAYHt29DVRYkkWJViTzMZYFiAv4568V9F+ANLf4/f
DzU9I1WMf29ogV7a+iO0vGQxGV+iYx6k18+TW/l301vJCHEZYRn+JSMjIzzXQpXRDdpaFSVYbeGc
Ro5aUjaZBlQB2z065pETyLqMnmOQ+U3mcg59gOanlmwoaNRIWGWYAcY529eOTQwlmtJN25mI53DZ
tzk8nPHYflTUbimIE8ssiIY4m5jUNuGccZH0x0oih3gM5WJUOAXBH5cZzUkB+ywx74hLGq/KQBhD
2/Hj8s05DMyPJNDG8QbKgqCC2Dj16E/pQ1bYlXe4/dINuwSkuvJYYZcY4Pv15+tVpF2TFJVHkgnd
KI/kJ69fWpBFIvlOZVdcb9/oOv8AI/pVqC1k8xw0ZLMxYSE5xxjHFJI1uVzYIIFxI7E87HXgg9Of
pTWt2ijVmlyd5YRrxgYIx19utPFuwdmDsrrncynnH0/CkaCO3iXJO5wXGTyPy+n601qyX5EZgiMj
MS8sfmkfdGVPIBPfj6dzTjatcgl43kzKQR90sQfvD+fSnxW4dozNuMXQsFyQ2Dg5/HGferFu32ki
MgIznYdxwu4DjkZx6Z9zRJJbCUb7jZ9GYyRboGygLKJF27lzj8RnFNmjyw2uuW6Z54Hv+VfUHj/+
yvgvL4N0yDTxqGi3MMzXvmYeRtx2lhnGeSfwFeFfEPRNO0zxbcx6JO9zp1wI3XcPuKyAnpnoTjHt
UKdwaSehyf2RWdYjFIBjLKAck59vrj6ZpZFw6DYYyHGJG6447n6DpXvHwG8C6bq/h7xZ4nuY2mvd
Itn8uGRRsb90zhuTycg/QZrD1XXNC+I3wxbVZIUsfEOlLtMCc+crMMD6qPTPakpXKfLHTqeRqLiV
5XSEsGJLFhkng5x+np2qzGksYZzFI1uoKvM4LMDnrwP/AK1ang/QYdf8XaZZm48pZ50Ryy9mZV5B
Of5V9GePV8OfD7xhpvhG8s4ZtIvbAQG42bZFDhlDjHAwwxyfXmnzpPYiEW9bny2LmSNBgKVY5kGP
lb2/XrUnlp5kwRfKjQqRJneVxyGGff8AlW7450O28L+IdRs7aY3dlBcGOORADkepIwPy/WvWfht4
Q0q3+Cmv+N5oje3MZaKGBkyoAeMbl7H746+lEpBa7PA4fLjTZ5byOEBdnXOCc8H3x2+tNSIeY0hQ
rEUIB4XPTI4H1/KvXfiND4f8XeD7LxRozLY3rlLW804H7rgctjOAPu8gVpfs9/DCw8WXWpaleltm
kW5na2xmORnVyOfqpNNSSV2JQbeh4sNPBVFiU7h0faMHuD+VI0cWHZh9okcbGZlwfz/hPsMda9n8
CeJdM1/xnpml3Hh+OSG7uFijZGZsgn0HpgnrWN8Wfhnb+AfHl1pcN0z2RCSrM4IOxjkc47D/ADxQ
ql+gSgzzJ7SDcIQPLTZu2hsDk8/jxjOKesNsoZEG4KWLEuCfpwPSvW9T8T+FNN0XSdP02yj1D915
lzcTdS2ScDK8jH9Px6HxD4GtPF/wffxpY2n9nTafJsnt3PEgLAZLcDjPYdqftF1CMGtkeFjToJLi
O4kaQwZIQluMYyQVB5696kggiaZpUBDltwcEOT1/z+teifC/4az+MZbjUJgyaJp433F4RtBCcso7
/dxz710Kax4Jk8Qy6dJpyDTJHeBbtmIVBtO1gAOhIGD71PPFM1R4v/Z8cciGWNCWZshhkc5B7e5x
/KpY9MhHyJucyDPm7ecden9K7vx18KtT8J+JIdLSGW9kuzi1kVTiZsDge5BHHv7V1WteHNA+Gmi2
ttrUUd7q80qubdP3bRRlNwyMk/KeMn3NJzvsOK7ni/8AZkKSqS3ltIuFBHDYHGQBU32eFn/eKhHI
O8kYxyv1/OvXvEvgaw8TeFoPEvh2JRLawql5ZlgWjchju7544B9h0zWT8O/hy3iGdtSv3+z6FZZk
nuimY8DClTjq2SOmKUZ90ZvR6I85isYJEWcwBXfIBC8ADsQPX3FNXT4yHEcQ8iQsqh165yCMf1r2
Xw5o/g/xPNe6Mkb6XfSq4t5XdQHk3DafbPoc1594l8NXfhHXbjT9R8xJ0zGCueQOOK2Ukw66mB/Z
NrNIFjj2IF2YBxkAcc/TtUEdjbTq4SBAoIbk4GQc8ADjnPNfQ/wd+B0fie3kvdYjeO2aIi1t4ztw
cgAknnGM8V4ndLFZ6pdQqqbo3CMY8EOD9e9JVEmN8q2ZkR6TbyJjyUYPyxA+YHkZ6U9dFtfNDiGN
VU43MmWyD69a9tt/Bng+D4c2Wr3dxMl7MoEtpGMuhyQeM9OB0rQ+G3w28IfEa7lt7W4uY5beATSA
IwIPII5PPP40e27jUT58n0m0gQM0BErxmPO3J/zz196ZBoFnPKnmWvlI5wfNG4Pk5xg/nnFepfDn
wVY+IfGN1pWp3T6fbRLJuuJRjDBsY9jkn6d60NW8O+CtJ1SbTpNSnY20jJG3l5DEHGC3QdPWk5ol
xd73PID4Q003EhFrGkSvkSE5Oen9BS/2BYpKbdLJdrrtkbZk8Dr3New/GX4X2/w3GlRWc5njvo2l
DkY8vGCBjPcNn8D6VR+H/wANrrxRa6pqs04t9OsEO+cjGHAyAPfn6dOmafPHqT72x5PJ4Q0uOSGJ
bVDIThDJxk9euf8AOaRfCWlmZc2SFDhxhs84PBx6Hn8K9U8b+BH0DSrHWrKdL/SL1Y/n3YCTEkgf
kO3v1rkZAYt6FgUI3AIOB19gfTrS510IvK5x58E6dICRpwby5MAlQcY7nPX/AOvS3Hw/0sQur2US
27MWOV6H0GK6yeNmVCH37m3KF6VJbRfaCECmRmOcYxnJ69v8iq5l1NIxkcdF8P8AR5/Lc20PmIAA
SgDOOMEcdj3PXNVbz4daCZGJtNs+Dh3Hy/kB/PPXNfQ3iX4Ir4bksmv9VSNnjDbXAUkYB55OMHNc
tf8Ag6xtYJri01e1m8n7ikq5c9cdeelHPHYclJHj6fCvROM2ojc9ZAoABHQj8zxSS/C7R5Ymhitx
HknZIOoz0/WvV/B/gefxbJLHbK0EUSO80rjiPALYz06Cty0+E82pXqwWl/bz3LsdkKuPnAGRjjB9
cDvUcyT0Ertas8A/4VLpJuShhcFeCvpwe/pSXXwg0I+ZIVEUCsAxQlig7EjP4E9+PWvUNTsvsF20
ToUu48pIkhII7DP5/wAqpG3xcRsq5y2PLHRvY+orTmuY2lF7nm3/AAp3SJnASNggU4APytlsg9PS
q6/BrSCgjfd8owAi4yewxn0xXpghkjYhSVUHGCwO089z2/xrQ03Sp9a1SGzs4y9xK4RCg6dv8Pzp
XSNY80jyGX4HaKqLtVlYABAW3H6nOc1F/wAKS0qZlzA2w7sEttIP1Fe9678Ntf8ADum/bLq2U2ou
GhKg5O7aCc+2B3/rXLNEWhZl2hlbAJ525749xk1Om5pzNHlT/BTS5WQFiihtpOck8fTpnjvUSfA/
SprsF94jGQAMDn1/PFe9aV8NNZ8QWUV1YQ+ZBM7RoCSokIAzt49+vfFc7d2EqXMiPFtmB5zwcep4
ziqTiHvdDx+T4HWVwSVYr1yFOR39vpjntUc3wOtSYwhZZBk+XwFAIyefXtivaNO8OXmqXy21srST
7iwCkZwoz0PsDU+u+FtS0RFa8t5IS+QofPOM8A9+lNuEXoPmktzwWb4GWzCRhM/c4Pp2Ge54/GkP
wHttzf6Q5HJDKoOMd/zIFe0NFvSQlRlgMjrtBHIPr0rQj8G6v/Z6TwWoS2eMyBlTqn94dMDj15zU
KzEpSPAJfgWs4ULKwJOFAXgZ9fXnFRXXwJZWlO5htJBZjgdxn9O9e4NmSMFdqBXJCd19ec1c/wCE
cvdRhW4t7V7iIsQHQj5m54H+HvSdkaJtnzw3wQkyghnMi7tshYAAHtzn2NQzfBW4d1MUxBBI2ge3
6c19DX2kX+l27SXltJCNxAEq4Bfrx79eKzYcvIbe1jeSfeAo2bi3BPT6A1KsyW7bM8Hk+DN1FceU
JOv3yV5Ud/qc0knwcuCVQTLkr82T0x2/p+Fe/PoGoK4D2MxdjhcRncAfbHUE1mynEjERE5JLFc5B
Pc+lBKnI8Bl+E16HwtyodjgJ1amv8KdQikPmMpT1LYwMc+te+yWst05eOBpCz4ZlQ5HoPx/CmNbG
3mCPE0YxiQMvB9x6/X3oHzyueAS/CvUg7AYGPvAnBHU559qif4ZX6IGCjaAXy5wTxkDmvezZniNl
3ZY7dikMwx3Pvx3pk+nzRzXLMkoXgAqn3Rjj/H86WlzROW54FJ8PdUj6JtA5xUL/AA/1SRTIBwAc
g8n/AD0r3cRAB2ZcnlFU5ye3+fpSLZeQCwtwQwyCYxn6fQ02hp+R4H/wg2pHZviKK38TDAPNB8EX
4xhDt5wcda9xvQylYjDh1GCjDng89O3SofIkllRWygGQE+X+fPGKkq99DwyTwvqCc7CVPIw1Nk8L
Xybg0RBAyRivbpbWNGKhQQo+YuB6n/Oarm1ikRd1tu2ru3jAbPQc/wBPemFjxOTw9eqRiFmAPPtT
To92UI8vn7xPbFeyva5K/uVMbZIJPDn1/wA+lR3mnxz+YvkjO7hFXBIHTp2xj86nlI1bPGl0a6Mg
UQlwenHWg6XdplhAxxn7vNetTabHCVk8obP4duOvT161DLaweYzDO5V+6OhBFCRfKzyg6bcNHv2n
aPQdKX7Dcpk7STjn8q9NW1t5ZQ5gA38rzjaQD1HvjpVa4tUdf9U0bKcNkZwO34U9BPQ85axnY4ZG
PpxTGs5AThT9TXoz2KwDa8IUgcY7jGTUE2n2YBWQE55DoOEHbNDQWPPxaTY4QjHfFN+zyAnIP4V3
i2cIbYVDxgkZHHHUGobjTYkLokSHHO4d6VhHFrbyHcCMYBPNMeCVCBtPTg13B01BEd8aBm6ZPWom
sIjEyjlsnlam2o0cSVZuoJIp3ku5wRz2z2rqzaJl2CHnnFMntINmQM5HJxkjinYexy0qYAzmk8pl
GR0rqJVhSP8Adx7uPqBUTafGybiuznPI7U7Cuc2dyEDoT6UYbd0OfQ10EtipZESIdM7j39KH04QT
7CfmBOR6UrAc+UZexFOcsoIYdRxW29pExCqPmByT2pLi0jSQhgAx4BA4pJDMJvm6ClLM3JJPWttN
PwS7KQrjsKcunrImGKsB0PamBggkeuKXJHJ6+laosAgzjk+nWka1V5NvDADOVI/KkMzWBAIGc0i5
GfX6VqPpuc7xt54ApghQnCkYPBJHNAGY5Y9f5Uu9mHPStU6eAAzDHYj8M1EtgDnOVGM8+lIDO6dD
z9KTdWmbIGQgD5c8N7U37CXZgFwQpPUDOKdhIz9xPuaFbHI6+9aDWKFlAIGfXvSnTw+CjAkjoo6U
DM+TtnGfYUbsAYGD9a0m035sEntkgZ/OkbTwu7I+XOAaAM53O5smkJyAOavppvzHLAAcc07+zSSS
o3jpnNAGduOeBS7zjg81fOnr25zxjHTikfTxGpYj16+nrSCxSbIUcjrjbSO2DwKu/wBnsoU+vQ05
rFeMgkcncKaQiipz2HSm9MZ6A9qux2C7+oYcHB7inDTyA4GSR6jFAymevGDj1pGfjH6Vof2chRuu
QR0PUU19MMjfL6k470AUB2xxk0rr6fSrq6YZHYqCuOxpTp537QSOx9qAKG0sTjtSkhS2ST9KunTl
8zYzhcnAc01tP8tgSwPsetAFQnjBJHfg0hYZyRwKuvYq67gcH3pj2JRwM8HtnNAFXOG+UdKRyx5w
RkVoDTNzBRkZOQexpkmmtuwnJUZOe9RfUdimFB5zg5ofaOmRVr7C20EDbkdTStprKDuwoHJyelWI
p5y3UkU7IwcfqasCwbJYkYx/DUkmnN5ec4A5wR0qRFPcR1596DtOMdanNm2VHr0pfsDM+0A5HXFN
AQB8MDjpSySK7EtknGKsrp55ZjgZAHbg1E1icZB6daYyI7egOT9aV1G0Ec+5qRrJygIxnpSG1ZOA
+elAEJXByAePWnsygjAPvUptWXgtzj160hsm27g2cAZoAiLeZnggeg6UobaDu5x1xUi2rMhwSx7A
CnmzKpnkMc7s9MUARswYZwRjjJpFn5IOceo6082UgbBbjqMGlXT353DjrkciiwiIMBHuwc5PJ705
nQOuzc2RyKebCT5cfNz0701LV0clW+YfdIoGI0iB8ZYAHPApEfaAwLDB4weae9lIVZ2BHuRjmlew
kdiE5A9MUrgKJ1jO7Lpk5yDyKa9wZpAPMYJ1GTTDazMSD6454pTZTKDjBHfFVoBK1yyB1DkoeRkn
0pqzjAAkIAOAenFBheNHBAznHXpTfsUxyvllmNS0BI907D/Wlhk8CkMjsw3zlfRwOPpTGsJgm4oQ
OTkn0pFhkdAUBZTyWxiiwFhJy3zyXGHQ8dTxSSXbS71MhcZJAzx3qA20mDlcDODmnrCduWGMc5Bp
6dAHm8ZSEDlkA6VKdRn3solUK/Bx0qkltJKWCoTxn8KBbSFCVGVz1zRYCwt7JF8wkyM7sHnuDT2v
XJJV85GD2qmsDMQCCR7UeS4ONpGenHWlYRc+3yxPtD4Q8kL0/Ckmu356leckHqKqtBMByhyPzpZI
2xs2gtnqDTGTvfyxqU4ZWHB7Un29gDkAgjAyOmKgO7ywpLH0HWo8ZToQQe1AF57oCPcMgnt6fWk3
l5NxBB259zVQoVUNgn1PrTlZ1jPBUHilYCcTMx27cAHOM4J471NHfGFmY7mzheuVql97llYk9qQn
EhBztpisXUu1IYMpxn5fYf40gvhCSpHsMrVQbllA3ZxzSySB+XGOc5FAWLh1AM3meWu4N1PXHT/C
rK+ISrqWVTgFQAuMA88+tY5xyASc/rSFs4GOnNTYLGvJrh3qVTgHCqD0H9Kln1rzIpBIkgbI688c
1iqpZSQOo60+RnGFZjgHp1xRYZrPrUUtssZi6Yycdfce9NfV45pg77kP8WME/UDj8qyJBlQSOQKQ
qCjMThsjC460WQGyNXMokCjKkfdzjp3z3NZ07iZ2Zsp3GTmoCwKsGUjAwMU5VWZCAMEcgAdfahRQ
rCIpYhy4QgipRApLch+uOcGoNuflJ2sTyKVwYwFI+fPBqhj12Eg7i2Dhs9ql+zq8JnYhcnAxUCKz
kDBBJxwKv6VDCupWYvGd4GlG5IwN2M89eKA2M+VMHLMCOx9RVmKJFwy5AI781v8AxE06y0rXmttP
VjCAp3MhU5KgkYPoTjI61z8aNL1kKHHXNDaQk7nudlCtnAvmeaMYeMoeA2MdenQfj+FdG8EccWLe
EFkkUhyuUYHaSOQBnGOme1cXa39yJkVgbYxldvzYKkHr7df85rr9OuHnihA82K3IO85Db+eDj1GD
XTCN/iMJ1LGmmn+e0jSRBTIMYjUEDOevHP8AWvIPjlBHa+K1WF/NLWkEjEEEElACB3BBByPXNex2
mswxy7JAMJ94si8k4Gc9scnj1ryP41tnxFZkKI1awiZSAMYGV5OeT8p54pzik9BwlGTPOpdj2eQv
3SBnJ78VueAp3TX7JoyVkSZCT6/MvfjnpXNSTlXADEr069u+K6DwxLFJeQyKywmKQOPUt2AzXO1d
m1kz7p/akENzB4Wube5e6tDG8gJbCO3YYJxkDd7cV4isDyqWaZ5DGpAl45wDkfhwPwr1HwX8StP1
jwPc+HvGEDyQpbZs7gEGaOTJbb2GCD275rgkmtvtsErEywRXClkKfM0anJ+UdMgHoea0V0jN04rq
ev8AwW8Pf2RrFj4v1e7TTdMsZBKZBKFV1b5doyckZPU1P+0b4LuvFmp3HizSLoXmlXEQEjwHcYdi
hcnbkc7h6dKqfHLx9ouv6bpth4T82PSmg23VqFAjZw425BGeoY9e9RfADx/pnhzWdRsPEtyyeH72
CWJo1T90XYKAxAHBG0/nmsrO4OzepH+zBALT4zaWqx+YzQTcsy5UmIqFx/EMt6eldn8Zvihcaf4h
+IPhe/gkNtcJB9mkeFQAVVG4Ygbg2OnOOlY3wp1Xwh4d+LF9r0mqRW1pp91IbWOdDhomyACSByCo
PvkV0niKz8DeK/ihdeJbzxDBDpkzrJNZlSCAqYOCV4BwvUVHLrexXurRs8G0LT7vRvGOjyXAKSCa
NwsqcYOCG+ce/X1zX0h+1F431HwJ8QfDF9p8n2a6hhd4t5PltiRlOUH3u3fv2rzj436t4V1H4jaJ
PoNwt3ptvBawM6A7doHOTgdnPbtXSftXeMNG8aXmh/2deR3iQQyRztH1jZju6ce1OUVJq4ko6anq
/ijxhafET9nTX9Ysi6qLd4pFJb5ZEZAQrL945OcYx0yOK+IfOa1FvIY2YSuRIOSQcdxzj1/wr6P+
GPijRoP2ZPEmh3V+h1S5a5Jty21mV9nTHoAfevnPMouJISromNxZhhRnjP8APt6+laQSWhEo6+6f
SX7F9sJdT8XhpFfdDb/usrlF3Ng8ZxznOevGK+etQWB9SvlDugSeVAZEP8LNt69cgY49K92j8RaL
8A/AUlrpt8154j1SAJLcREMqKHOM4PAODg89T614MbySaS4kuGaaSdzJI6r8u9juK+/Unv61UU73
E3d7FaGUIWhOFjdGHmL0zgdfQ9/x5pksMcSyorS75Xw6lyAOOAMf561dljjAYIokGzIVegPPHTr0
9MUyJQsbGVQzOMsh4wSO/vV2d7kuxXltDDGgQqzSDkgbuM8Aj8D0qaOzdnVIwSpXlSMbhyenrjmm
ovnSrJGr8ISmwkHPXJ9OPWp4iGVY1VcF8sWBGemSOwP+FV6gmkRTxxjaoh8v5QoIO0Rg8bSOuadF
bpFIyKWJctGSGAXPXufek8yBNrTjY7glk5JBB4OB7fzp5UPFI0L52EH5tysvHbIIP5Yosri57EEl
nIiAb1cqpIIbIOSd2cdwR09adLK1xNGVT5UJ2uD35zz1/wDr1Ye3migVo9zsxw87ZKkjkgY4B+lN
lZ7aBg4Xay/LlciXkZGCec56+9UombkV4ZcRyST7wh3FdqenXp68VveDJ7bR/ElndX9ub2EyBXhX
ALBjjAz7EH8KzUikeB2yVlILGJiWUMeuCenA/StfwVol34p8R2unWhkt7xpVG+QqeRgkjcCDxWdS
yQQbvqfRf7W89lbaRoVrc2jfaZ0k8mfIYxqNmQARk5O0/j3r5geJ1O+SX5icE7i2CenpzX1L+1l4
fl1fQvCmqWZW4hsY5VlZX5UMI+v1KfmB6V8s3E6tIVICKRvGSSd46cDtnPNZQ5bFOyPq/wDZw1XT
9Q+GHiiRdPWJrWIm/UruafMLEgZ9FUjHvXy/rghGt3FxYQtbWsjlxbFduFLHrj8uenHSvqj9mnwv
caZ8KvEUNzIki6vHJJbgybTloivc8DPp6jpXy54g0x9C1W8s7yLy3tmZJFAPcjPPOTxnNON0ypWb
vc6D4L6jo9l8RtDOq2jXaTXaxQgOF2SFxsbHfBx9K9V/a+utKXxbbWiW0sOs/YhO8keSrxEnGffI
bGPTrXmXwe0q7174g6J9iRmS1uIbiYBtpiRWVixJ+g/OvWf2v9GvH8X2+vxR/wCgS2iWvmr98MGY
qOnTn9fehazLeqPnJrc3kc0czhQjFiZAeDnPQ++D+Br61+H2oeHpP2ZdWljtJItKihkE8O4ljKNh
PJxnJ28/T0r5JifylGYfMDDeVOcdcc/nn8q+svAHhrUIf2YtasZbeNLq9WeSGJ+ihtu0MB67T7/S
nON2rGV2lofKt7LaRzPHZRP5JbdEijIUBhj5ehOa9z/ZXsdY/trV3s5g+ixQqNQtyu4su1yMEnIP
P/1q8Oeza0lktnBMisdwYEnO7kfTPU+1e/8A7Jmu22j6j4h0uZil5exRRwIzhWcqHyB/33+WfSiS
0sOm9TF8CXngiPx/oJiiu1uUuoRCQ27JzhFyD83OM4Az75rE/aFi1zTviJcS606MsgVbeVFz+6G7
YGBJ7D268Crvgf4Ya9a/FTQLm5tGSCHU1lJdvLKBZAScg5HAPH8q0v2mdU07X/iq5tZkljitoYJC
kmVVxuDA5/Af8BBFTC0WayXU858HNoCxXR1rz/KzsQWoBYZJzxxxyvTFezauktx+zrL/AMIqsj6V
9qkOoG4+R1USLwM5yDn171wHiT4OarottZy6ZLHq1neo00MsCg7cdQSCc4r0/S93hz9l/UNK1GIa
bfXcsot4GwxkAZT1zkcgE+1ErPZmab6ifAyLzPgJ4+eOFmK+cGMv/LQG2HA5Jxj+Zrz3Vfh1pSfC
Wy8WW16Jb+4mWA2EcgBQbmGSPUFM/Sui+A3jqx/4RnxB4EnnMLasJlguWUKqsYggQscAn5eOmea5
h/g9r914o/4R3EyRhpf9IiGIwAGLMM+uMdee1ZW7l6dD3D4pRj/hNPg/J5jGM3CnmTAkz5JGBjvy
B9aw/iZ4Gh+IP7Qs2j3kgskk0uKZp8/3QAEx0535z7ewrL+KvxW0y28c+DhbJIyeG5I2d4HVlf5l
Bx2yAgA7VV+PGkzeMtT0zx34fha/srqCC1JgXLQOIznJHI4I7UknfQam1Y6T4AeH10KL4n6cshnW
0ZoNx5WQKkwX6A7f/HhWb4Wt9/7Jest5bFba8kAfG0MFuIyePxPzd8H1p/wvlk+EHgTX9c1xGWbW
FTybYgmSQgumc5zg71yfQVV+F2qW3ir4O6t4BjXytUll8+285yUkJkDBR3A+UU0nfcbkupx+qfCd
PDOneCPEa6mLv+0prWSS3XaCGyj8cZ29B1rqP2t4Us/GulMNokFmXI2NknzW5zjnt/nFcP4W8EeI
Ne8S2WmziWOLTbgFhNv8uNUbBwMdsYrrv2mfFul+KfFkU1uWkNnAYGJGcbmLHB74yM/TtVxbTM5N
NXR037LniXU9c1/UrC8u5LiKG0SWJWRR5Z3hewyeNv0xXgPiaBhqOordJIWmkYYOCDyc7SCBjOfy
r3X9j+yms/F2tyTJIFfTlZZUUqW3Sp7d+n0rxPxXayjW7uOSCWGNZpFVmjO3OW5HqPT/AOvWkEuY
zlZ6mKhJ3O8p2hcEAEEgcdB7V7x8LrZvgvoGo+KtblEc17D9mtLPeBK4ypJAHOeR3/KuJ+CthaeJ
firo2n6jCLi0l3EQv0YKN2NuevA/Wtr9oi+k/wCFm67YvJJLb2bokEG7hPkUsB+eT68VT1dhpW1T
PMr6+K3sk0OYFnleRQjcDnIDY6f4D3rW8E+DdS8ca/DBakujnzp5W+ZVUc5OeOefesaR1kXZt2Iw
DKoXB9hj6CvfbqyHhX9mSx1vTS6alqM0STu6AEbmIIHHHygnPfPPWqkxrvc539oTxjaeI7rTtMtw
JEsflkmUna5IXOOPYDrXdfCfwpE/wN8T2Ql859RE0okGAqfugqKcHPYZ6YxXzd573BMjNuQoQF5L
9eST+Ar6G+CjPN8APHQhneZwLhImBJC/uBkD2/xrFqxonc8QudVvtH0q/wDDckpntVuxKiz/ADYK
gjjjgGtP4aeArv4ga01urAQ2x8y42qHwmMBQB05xXLzhxMyRrGSvPOQMY/p0r3X9kaEp4o8RgmNF
ltEYDGMneB1x7/U1TajsTF3Zwlt4d8G32qranUplPneVuCkhPmwO3XNc9408NDwl4purO5SSSEDc
hJCpImeCPfg/41Tvs/2nOQpM8kj7URsFjkn86ranaXVndBLuOYN1AnJBHp1/Hmqik9xybT0Pf/2g
fDs3ix/DcumXKTNBbujKrZwWIwB144557CvE9d8Jar4ehjuLy3draRzGJJI+rDtjtjPt0NQ3moar
p626XElzbpJzGFJVSAOTz2xnp+leyeBbVNa/Z78XXF7I91LbiaaIzlj5Z2ZBBPQ5/wA9KTXKyLX1
M74AalbP4f8AE2hNMftt/BI9upxtYeUwPPTnjisX4V+C9Z0z4jaBNPayGGKf97I/ybQAc4zzz04+
vNdV+zxpkf8Awh/jnUXRjd2cRWG4AAIBhlPBI6nC/mOK4r4R+KNVHxN8O20t3JNFJdpFK0kpw4JK
8hgcfT8PSsHds10RT+N2oWOvfEnU7/SiktpOkSl416lYwp4+q9vUVweSih9+X4O/jd+dei/tC6BZ
6N8TNStbK3aGNFgnWLIxueME4/Edfc152yyQvHtUON2CFPPt/WuyC0OZysyskSqCxZnQ8MSfY/zr
3D9lbQ/P8c3lzJbefbpYtskmUMFbzYz+ZA6+1eMywjft2kAqBgnnj0Pf6/8A6695/ZJvJz4xvrCd
2NsbIybGHAIkQZ7dcgE59KyqJo0hJM4F/Fl34Zv/ABTomqWzXNteTXBKSD5hIS6rIp7cbfXivPPK
kuSY7a3kllZSqrEDu3YwM++cV0/xA1C71HxjrNxdz/aJjdyx+aowRtkYAfgMDr6msS01GXRHOpQe
UbiECURMAclT75BP4etLZaF21ufQPx2TUfBlp4Hu9I8yxhtI5GkgQ/u/NxERux1xgj8a8R+Iuv2X
irX1v7K1+w5gSOeBUKr5qjBIzzg5/rXvf7TmtXy+DvC0MQEaXkPmSiNFO75Izx/dyS1fNU21GcLh
WyTktncf8fpRBNsd0me4/s3eHoZ9C8Y3ktr5d3Ha4tbhjgjMU2SOfXZyK84n8ajVfAd54c1mOae+
t9j2l7Lnzhuky6k54+U+1e1fsw69czeC/E0bquNPhVoSO4KSt82RgcKB3HrXzbfXs+pXkuo3CRmW
6bewj+YJnPAP+FLlTbY2yTwdpEep+KdFszFJPDcXkMUq8ZIMg3EdcDFe4/GTxW/ww+Jek2NtbibQ
Y9JSNtNRQFdCZFODg5/+t715J8M/EZ0P4g6M0cEc/nXscDJMflUGQcn1PHAr1j9rfVG/4TC20poY
VjNtHdpK3DM5LggfgvT3PFSovmG3ZHhnjWLS5/E2qXOmForB5ma3RRhtv8IwenTgfSvavjbbw/D3
wN4Ji8PF4YrppGdoVBdmKqxPPrn8Me1eAyTkeezwq7ICEUYwPxJHHrX0b8RgPhd8MPBVjLbf8JIt
zM8oe7Zt0YKKQo5yOCf1olowjqY/hC3Txx8BfGl14gYT3OmyyzWsoTAEnkkr6Z5xx7mvK/h54r0/
wZLd3kliLy6eNxAzpu8slTyQceo717HozSfET4A+KZLaD+x00m7lmkjgYbZgsBfY3GcdOBjtzXze
rmaJZNu+LPTGAQe36HrU7jbSep6X8MPHep6v8SfDlhfwWs9rqF2sM0bRoSFYgE/hnqPSs34weBrf
Rvi3P4a0SDy4pTbqsTtgJ5iqcAknpk8+1dT8KfGOj3fj/wALW39gxrczXccCzxMBsc4UOBt9ecdK
yvGXh+88NftD2dlql9Je3S39tIJ3ONykKVXj0UgdKi7RS1Za+JFvbfBXSbLwrbWcd1rs6w3d5N5Y
k5O9cA57EfpT7LQIfjn4GvLnTbSOHxHoNuhmxCEWaMlj1yAOFx+Fd78bbfQf+GjrRfEv7vQoNIDy
mQ4DOd+0Z/A898Ufs9W2lrrPxZXSma4014BFbMwPMe2Yg9OfvEcdRijmaKSTZ5V8L/h5aR+HNR8f
69DLJo+mcx28YyzusgBX3GTj8ag8NePtO8V6xeaRrmmx2thqRNujwKS0cjn5CBkjv/Ou58Jjf+xR
4kmO5czuXduAD50eCBxnAxx0OPrXI61p3gvTNK+F95ol2H1e4v7cX4aTLo25c5AI24Oe2D+NQ31N
Ejk9f+Ces2XxNHg2GHzLm4JltGf5d8WWw2ffYfcVt/EbWfDvwuuYvCuk6QNTn0wtHf3U7MrSS5B2
r6jBr3zxdYxRftl+DQd6yNokzOHJIJCzjJ46Dd+deVX3hHwt4r+NPxT/AOEmvxZx2N0Xs2YlGMoU
lgfXoOPyoU2xcqucd8T/AAPa+JPB6+PfDNt5OjEhL6HYcwMu0Yxgkgk9e2Kb4P8AAVn4G8BTeOvE
9u032+B4dLsHyqyBogyt0PPXJ9BXceDo2H7EHjoiQNunnkUoQpYB4jxnvwf09ad8arAn9n34O28h
aISGCOSRzhgnkqCQOeqk579MVXPcTtE4XwZa6H8YrPUtBi06LR/EjAyWajkTBFYhc7Qfw7ZriPA3
wj1rxj47fw00TQT2TH7ezHi0UZ5Pb7w/XpXs/gbwBpHgb9qjwlYaLqBv7ZrGWZ5SwIEhilBDYJ/h
Cn6mu7+EYWX9pX43BkkZ1RAFHTBGeB07Lk9jxUc1tEEX5nz5qHiz4fWHju20tNGEuhwNHbyagrgL
IzABmBA6KzHn/ZzXM/Fn4OX3hHxDpk9uftWg6sUaxu0/22O1DnqQD174HNaml/DLTNY+CniXxVda
hFBrVhdPDHYFgxkAPXGMn73oeF/GvWfjsGh/Zr+DzQK8rwS25jD8NIxiyPTv/nmleSYltdnn/wAY
PCPhv4EzeHNAvdPl1e/ezE9xNEAFyZJASOO2OB7day/iB8L9G1/4I2XxG0GL+zYI7k2l3DMQWmcz
iNG3D8fz9q9I/aMstL1vUvCl144v30LXn0cpPawRvJhfNbndzg5bPXsetY/j2STSf2VdE07wzINX
8JzzySXV+7HdHOJywTH1BH4+4pqUrlc1jx34P/CaXx/eyanfNJb6FpQ829u/4NilSygn+LBH51k/
EO68I3btD4etLmwVJ3XzHkBWZFyFwBz+foK+k/gfYWXiL9jTxtDqVybTTJp7nz5yPuIqxHPTJAOf
1ryLw78CfD/jD7fp/hXxEuo65HYyX0VkxGZSv8IB59/yrSMu4r8x4Q+9HVJM44VBzzk4HOM+lfTN
98DfCvw2HgjSvFvmzaj4gSVpbmFF8q3GAVBJwcZIBP5V4foXh6G58aQ6Rrsw0cQXHl3MznBiZGwc
+nIxX1V+3toNtDp/hC5N/Da31pbSxQWpYfv1LJuIHUgf1HtRKava4tj5C+JngiX4deMLzSJJDJ5G
yQMuMAOoYc9+COa9g8C/B7wt40+A/jDxfAbr+1NBtXQjGAZhGH7nGPmHHHGK8R1bWrrXLqS91Itc
X0iCJnY/fCKFX9AB+FfWf7Kd7aaT+zB8VNQv43ntoppXlgK7g6iBOMH2x+R9KTnZolM+MI0BHnMy
lSDjb3PritrwR4L1Tx74hg0fSFL3VxkAhThcDOT6DivX/HHwn0rxb8P7Xxv4DizaWsCpq2nR48y2
f5mJYdRgD34+ma9B/YEsYXv/AIhtcpHNc2+nI8Uvl/PFkuMhu3U0udNG0Y9Tzu7+HXw+0/xpZ+F7
rWJ5NScwQSyiIiFZWA3ASYx1/n7V5Z8S/htqXwz1+fTr5C0DgS284Q7XVvu4bucYJrotK+CGp698
GNa+I/28/ZrOZfMhLnzXYsM8k9twFezftTWqf8Mw/Bm+njaW4ljh33Dr+9cfZSSGPoD/ACpKSTST
IkurPkkQ5BQMQpPAJ/Oobg7TITlgvCipQ5AwHGMYAK9/aoy+WkBG0YCkdia15iFZnb/BT4Sah8Y/
GMWkWgYRoFlupGBxFDuAY8A9s4rrvFfwH0678IX2v+FNSbUo9Knlgvo3DAqiD74zjNezf8E8vB0k
Ov8AibXmnjYPZLZpbh1MnLA5I69QPTvXzr4wvNV+EvjLxZoNnetNb3yy21yjPkMjnceMYzwvPuay
Tk3oa2SPOIrYM42IN7cDnljnjH5179a/szRad/wjemeJ9STSvEetRTSQ2cnIG3O0E4xzhc8968o+
Gvhifxb450nTYHRZmuk5dgFADg5/Svqv/goV4cv9O8U+FfEdheNALazNoixOFIYSElwc55HWoblz
WKt2PjnxboF94O1650fUIzBeW7ASRnqM9OK7X4WfBm88e6XruuXTvp/hvRrWS5uL1s7A4wQh7cg5
x7VzHjfxRdeN/E11rGpYe9u9iuyAhSVXaD19AK+1Pg/8MtQP7FfizR4pgLnWvMuIpw2ViVkjwrEe
gGfx96JSlsTZdT5M+Knwkk8GWmlazZzjUdEv4UkjvoeYw7bjsHocD9a4rRPD194l1q103Sbdri+u
GMcaoOScZ5/z2rZvvHOsx+B18IXUpm02C/8AtMfmFiYmVduF/wBnk9q9d/Yq8J3Os/HXQNUjk8u0
05ZJZjkE5MbAAAjk9T9Kak0tRWRn67+zHdW9xrtlZ6vFqfiHSIYpLjTIWy43gHgY7dxXh0Omyz30
dnDEZLh5PLWJBu3E8AD8a+jv2hdW1z4QftPa74ksXib7fMZo4vN4ZfLCYfHY4Zh9fWvDvh9q0Gi/
EDw9q16zmK31GCeZo1ySqyqzDHfIH/1qfNISsj0TVv2e4vCf9nW3iTX7TSb+7tVuJbO4lVZYdxO3
9AfyNc18TPg3qXgbR9H1uGddU0LU4gY7+PGzdkjBPQH5W6Gvb/2tfh/rXxZ+Klt4k0e3Z9M1DSLV
o5ZRtIOJDtIPQcj86k+Pl3B4G/Zc+HvgK8Rm1lSt7K0QJjAzOSSccnc3TPvjFCeq1Kun0PCvAnwe
ufGGg6rrtxf2+laLYFd1zcSAb2LAYGSOnP6VuWn7P0mu6fqdxoWs2+s3llbi4+x2rqzMrEAdGJ68
duSK86TxFq02kx6Bb3dw+mvJn7CjZRmGeQvc/hX0p8J2i/ZS8F3/AIz1wBvE2tWTW9hoshKnyt0b
BiMcdc846euKbk+4+U+UZrb7PNMkp2ujspT+43II/MV6V8Gvg5d/GjV5NKsb2GyuIwG2O6h5Fxzt
DEZxjNcNJBP4m1jZbxNPd305+WMF3Ys3GMDJ5I6Cvsf9mf4K2Pww+MvhObV9ZQ+J5tLmuG0eDJdd
y469sc/lUOVhWsfI/wAQfCc/w+8baz4duJQ91ps5t5XXBBIH+B/Ot34cfB/V/iHZ6tqUc8GnaTp1
t50l9dHEROQNu7PBxzV79ptRH8fvH42yEJqbqN67eAoB/lwe+c8V77+1R4Zl8IeAvhX4H8IxSQwa
7ayfarO0GTdyBYNvTk5Z/wBfaq53shL0PAPGfwS1Pw34Vh8Q295FqWmPIYneBgRHhA244PofoPWs
n4bfC3WPiXrMun6cQqQxmW5mkICRoPU/gfyNfQf7JPhTVdH+KWvfDXxQjRabcaVNLcaYSCUkJjAP
sdrDpntWv8A59Ms9F+OnhGxcrq93Nfpp9kqfvBGiTIuD+K8HuKhzexooniOh/s+3es332XT9Ys7q
5KMYrZJVLyMvOAA2fbpXleqWE+k397p9zbSW1zDIVkScbWjbNfRP7LXwx8ReGfjV4Y1bUbGW20ux
kuJpp3ceWiiBxuPPqy8f4VlarN4d+MH7ZkkNu8Vz4c1rW0t0MQIEkWAHIHoWDc9854qoy8xNanD+
HvgXrGreG49fu5k0zTZpTDD9qIDSYXdlcnkdf51n/EP4T6v8M7mzS9BltLyFZ47tMFCDnjIJ54r2
n9qfSdd8WfHS98CaHBPdafolnbvZ2EMXCqYY9z4APJJAySeuK6H4daEPiD+yr8QR4iiN7deFGlSw
24zAYrfKqcnpkgVXM0TtsfP/AIJ+D+t+NdBv9asfKi0602hrmZwFd2OAq56nitLVvgPrltoOsata
SxX0enRo80Fvln+YgDHtnP5VyZ8da3d+GIfDDX0i6PHKJltkRQGcEkFsDJxk19C/Ci4h/Zj+HWue
JPEmJda8SWq22naG2BJ5avy7KR6upI/Cn799yGz5WMHlAHZhlPIzgZHUfWvQ/hT8Fdd+LlxNDpDx
M9vwVkcK208/XiuGFvNf6mlnaRNNcTvthhVcsxzwAPWvtP8AZP8Agk/gT4q2dxqGpwjV5NFaV9KQ
/vI/MIHPsBj15JpuVhJnxx4o8NXfhrxLqWjXRH22zna3ZQeA6sVxnj0r1DWv2V/FGgx2txeSw20V
zFvAlbBHAznJ5xkdPWuT+MNyy/GPxxLGo3/21dssi5I/1rYPP1qLxp8TfEXxHksDrN/JdNZRFLfO
BgZ5xjvwPyFKUpdDaPKzo9d/Zr8T6J4HvvFUvkS6XafK9wr5AO8KQAM55I56de1eUNH8rtuUD++D
0z04r6+h1CPwh+wbq2ka/I9vrGs3LPZWszqjSJ5iFWAbthcnvg14n8Efg1P8UdUkurlms/DmnAy3
+osAIo0ABKEk4B6fQVm5tDaXQxfBvwS8SeP/AA3q2s6baH+zdLt2uZLl+EZQrEj6/LXFWWnvdTRR
RRmSSQ8Kq7mb6ev0r9G/h/448L+LPgR8StL8LWRs9K8N2F3ZRzg8XIMMm2XpnBAbkHt718Z/sxvp
1j8cvB9xrLpDp1rcv50k+AqnynCn0+9t/Olztq4JIfN+zl4stxAJYhaTtGs/ksScBlyMnpXHeOvA
GsfD7URZapb/AGe5lhEw4wGVhxj1r2H9q3w94lvvj94wn0qxvXs2miSCSGQogUQpjA3epJ6YOeM1
2X7Zh0uw+HPwz0tAkviSwtI1vsf65V8hDh/XJOaSbvqD1Pnbwh8IfEXjTSJdQ0+3K2CsI/NlyoZz
kgDsehqbxB8E/EvhnRbrVru1F1p9qyrLNBkhCfXPQV9G/EVINR/Y5+H9l4S+e8truJNSaxBEqSGG
UnzMcqxJTg+p4qz+zFFJ4W+D3xUvPGTuummBFthqADKkmyXlQ2erMnHrSVRtk8q6nxa0JwzkDOOg
9D0xSOvlnaDztHFStK1zAr70Lk7iqKMEnrgADvTkbcTGVj7/ADZ6nBre9jLRshSNWBUYIK/fPGD7
V13hD4VeJvHNreXOkafJLaWuA85GFy3QL2J9hzxWT4U0E6/4m0nSWKxte3cNpvJyAXkC5/8AHq+n
/wBrC2n+E8vh74UeEI3tNMNhHfzyQnZPPN5jhcsOv3T+Jpc7vZGsYnzr4p+G+t+Crexl1myaH7aG
eI8hSqkA4z9RXO6XoF1rOrW+nWMbz3U8ghhRRyznoP1r61/Z60p/jr4B8XeEvF0Mzp4etEltrqSR
hcIXMrMrZ5PK9z/CBzVL4LeFNN+G37NmufGVI0vvEMDSW1qsyMRFmeNA3BwOx49RQqnTqVp0PCbz
4F+K9LtLu5uLDy47WJppizfdRfvde4/pXETfut0J5YHa27GTj1r2r4P/ABL8V6N8Q9Km1hbnVdP8
QXK2M0N8zGKRJZFBKqflP3iOn+Nel+JP2Y9Bvf2uYvBgZotGurA6vKobBQkOfLX2ynSmqlr3FY+c
9L+EXirWNNtr+206R7K4JWKUjAb1OQOg9cVz+veFr/w9ezWV/E9rc2+N6Ec8gNn6EEH8a9x+PPxC
8Tal8T9S0bw5Fd2Gm+HZp7C3i05WTbGknJfHuvToB9K7D4o+D7P4l/szW3xZuFew1uCZoHWP5hP+
+EQJHbn0o9pyvUOQ+Y/DvhzUPE0/2WwtJZ2RTIWiQjCjqT9Kva94E1nwzZi/1KzmitJJTEkz52yH
noe3Svpf4n6LB+zP8FNDtNFRLnXvF6FpdQYASrGI42KrxkDL4x6n3FV/2c7q5+M0up/DrxRDLeo1
pLfRXNwR5kDYABweoz+PP41PtXvYnlPlWOEHCMueMEL6/wCTXUH4UeK0meKPSriZgDlEGeMZzj6G
vefgR8D9Ogv/AIk+KdQVb+z8Czzxx2s+czvEsjAkcjkoOD/hnhbL9ojxUPHb+KbtWl0iSZ3GnqQI
tn3dinbjIHT361bmyuU8VVY8fOxBB27Qf4vTNbumeFNV12B7uws5bmDdtaYDgtxxkd6+jPjp+zNZ
WPxk8E6VpEpt7bxgzuLfosBUIXGewySf8iq/7RXjI/CbxBF8N/B8B0610eOJ7i6RI3e5kkiUksCp
IwCp7ZPOOKh1G9kQo2Pm7VfD91o85h1CJrQgCQLJ3U8Aj8j+VLpekXWp3q2lnA1zcSZ8uNFyWOM9
MV9S6n4Vg+P37Per+NZIBpuq+FV8u6m4JuVihDAdO/mHtiq/hfwnp3wL+AmmfFG5sxqniTWgIdOW
QYWzWRH+bGeeFJ/TipVQqyPm/UvBWq2VpLcXNlPHEhVTIYyAMnAyT0rFCiFgdwALYwPf/P6V9QfA
jxpL8Xtbl+H3iq0+022u7pLW6jiVPKkjTcB8o/2c59RUXww/ZZi1345eKvD2pzxJpvhkrNfEMSZV
I3bF9z6+i1qqiJ5bs+e7bwpqb2SXD6bdPG+GVkiLbgeh6c1SNs8d0wkO0xsdyk7cDHf0/GvfJP2n
ltviFDPZaRa6b4UsWSKHTjaxmQIuV5PY57ZrQ+P3wHSw1nwnr2iqsGneNJIxDAcbkllwx3AE95Bx
70+bX3jRRsj54t7Ce8t5TbwyTKM87Cccc5P40S2z2CJGV+z+YpPlleuODj2r6h+Jt3p/7LmjaZ4K
sNLt9S8S3NvFf6lf3MQlj53oUVeo5A7jimv4WsP2kPhFqGu2Fn/Z+v8AhyJheYQKj/K7kn1OF/8A
r0ovUl6HyuWEkhgMbYxhT6VP9iMYaNoTGhbneMZr0j9n/wCF2n/Ejxhbw6vqUOk6XaENcSmUBjnh
VXLDuOvPWvYv28vDuheF/wDhAU0SxjsreaCYSOgA80L5YXOO+ATQpe9YzbPk/wCylDzHtzkbQc/h
/WgW8jsQ0RVkG/lSM8ck9h2r0D4OS+GNP8XLqPixJDY2pWWGIRmQSvkZBH93GcmvRvBPxS0zxd8T
tJ0q38IW5sb++WJhFuDxxmXaCfpv5PSiVRx3QkrnzxNEkaAbAEHVh2Pr7U26tVYRAHapPBPQnB55
r3L49fCvRvDfx1i8L6PeYiv5Y3k2nzVheWQqq465Hy8dhXcfG6z8P/s63vhzwrB4bi1aQWLXE960
mzzHZ2A4z2IPf2o9praw2rHypJZIhQhiyg4I6c/SmGNYmJDKMrzle3XFfUPxZ+GmjeIP2ftH+J2k
2UejNOuyS2wSC7S7WPQZwA3T0rx74QfC3UPiv4ttdIsTHb2obNxdyZIhUA84xyT2HrihTT1Gkjgj
EXdk2RhumN2B0xjjpUUlmojXKqWZtgCnOf8AH619T/tifC7w38L9H8Hx6RZ7Wu/NSV1C5dkVMMSD
wTk9P5mvE/hd8MdV+J3iqy0nS4FcM6+a5TIjQsF3kdgAfbpVKSeo1G5wv2eKNijRlRnBwcn61GLe
Fc7WO8nI3CvpjWrT4aeDfHNl4W8l9VFo8Nvd6lHINiynIkORnpnkcdDXD/HP4MXHgfxDBc2Ia40P
UmWWzuIhmMhslVDAkE4x78+9NST3JaZ5FPEG4XcrFvkI5z/k4qM2cUSL5gWQMe3UZHU19ES/B7w3
8Kvhzb6v458671fVtrWukpIRJFHu5cgYOflX1Az2qh47+FWl6p8PoPFHgeOWVIYA99auN7pkjnnp
/Ef8azc0JeZ4OtnHcuywAMSpyvOeOT/KmxWiNE7qI1O0nDDO7/8AVXsfwe+B8/jfUJ9X11pNI8N6
Y4a7uywijk4PyKe546Cuo8NfDbwL8VBr1j4ZkuIdWtlX7Ik6hWm+b5mHPIwcge/PSkpdhqKufPK2
seIwYwA+QQBkg9c/XtUC2m5m2ACTOOcfnXQa9o1xoWs3FjcQyQTQM0ZWQFW4zyB9Kw5Yv9K3IWYd
QwPbmt0D0InsAG8vJCAemDux9TTZLWN2DISihcH3H/1qtSLgh1JAK/eZsg8dc0xJ/IiZQCY2wAR2
+tCFcrta7VVOJEYYDKO/rmlNhEk5XzFOGxyPl+n1qzs8lF6EJyFJ7HvUMjhSrbRKAu44OMt64P4U
NXC5DPYkK/OWBHA6EU6SyUEFF+UjOP60sLM7t5vIPOzdx9fWnyM0hzIdgCkYU9f/AK9KwIgNuiwg
Ajf99s+ntUgtBfXKu7YGeSo/LipZWhztU5jx8p7g46frUtoT9mkeRXZzwTyAOKpCKasba7ZJItxB
zg+maWCIjUoFQhFMoAJ7ZI/WgW4E5BDBznbyDjI4zVjSg8us2XlMUmEq7D1wRz/SloPc2fivELXx
beWylmWKXGWOTnav/wCv8a5UqsSgBseveum+JT7vGmp7lLRmcvsGFPPUYHArmhGODu2AjhB1FS0M
9j0tJJvs6QNsCADzOoRgB8x5x2HWuusImQwwSzKElLD92M89iTkDqD/OuK8P3rC1WNpXG7hcjOOg
xkAn3+neuh0+diXDIGIKqgk44Pfj3wPoTXZDTc4p66o6S1gSKJV8pHABUNKQS/JHXH9fWvL/AI9r
5mv2UcqeXJ9gTHOQQrOBg/nzXpTaxJKHZAi3ABWIlWw7ADIJ7dMA+4rz349Pi80Z5bdlkawDljgg
4dl7deRzzSnboFPc8iEfzKQOBxjHetrw9EUu45mXywrcE9O/PvWPNC0cgIU7XGQa1PDQP26GNiQp
kA3dec9qwTsdiR7rpySw2kD/AGsE72OThTjHAwe/f8K17At84mckMwRd3V8Z9BweM4x2rqfhR8K7
zx5pc14ZDaWduu8XNxAAhK4bb1+nHbjpms25tVsdWFgZEnHnGNpoX4dckb1+ox79a0VSxnyXkVIL
jynuCokWPJVlHTOcnt3/AKUk3nGRI3V95wzIPlzlcDBUdMHPTFeh/Fr4P3Xw3i052JuBqKtIJMKp
TbggMQScHdx14BFUPh18Or74meLE0SzmjS48mSQSSHamFAzzxnrwO+DQ5dRcjlocYxFrdJKz7ZWQ
xndH94H0P1547/o+5eGSSZ8ExwjcQSc7fUeueBn37V6n4f8AgTf+IPHOo+GY7qBdV09mRUY483u2
05P+zWtcfst61p2p2+lSalYy6m7lUikODtx2HUntmnGqluHsWup4rZXjiRIXjwjdC67lxxwck5wP
5CprR5LZ5SqlXyF6BsYwD7Hp16812fxN+FmrfCvW7PTNWWJZ7hFkt1j43kkADH1wPxq58SPg9rfw
3srG61e122t6pPmxHeEGBknsOWH51TlF6jjTZ57M1zBPIIZSqkHYhPy5HByaGBd9nleYYsqJEOSe
MnGfY/rXdeH/AIM634n8E6r4s062NzpdiD9ocplyFAJIXGD8vPHtXFN5kMQiWMpGFCgHAxn+Hgce
54HWkpIVmmSm0mhkhyzMEBymMY9T7Zz/ADqGGWa3SIFRneEfLjnupx/nFdz4w+GeueCtG0vUdQgV
rTUASs0ZJH5kAen6da4q1MBVTGzyqgJQBR8vfJyP096rR7A7PqQzxhnVQ5hPmeYxGcdee3so/A1Y
8x3R5I0ceZyz44x3pLqQxhwEUIvBBAwScZ6dv8aSKeeSNixKw7QyhVGSvTOKpIwfuvQc0x4kVcRj
JJVcMPbpnpTbe0jkTes7g9QdwxzyMggEYP8ASjElxEUZtqY3OoIAIHc9+/606UeTKIDllK5+TB4z
jg+n61m9GOOu5HFsd0MrknZj5Rk4I9u+KdcjfCfKcbDgEnqR1yccH/61TrGSY2hZSB6cnOeeOxpW
jS6hGz9264yGAVsjjI9RxWkRyskRiRowjLGXZkw4X72SR8+TwRx060x1NxCfNB25JTjJXjB4/wA9
alGdsezLBRv2k9fb0IpyyPYRB5ApRzsUgYy3TAH1x+dO7uRp1I1iijYywn/RyfmKHO8+p9zzzRDH
tBMDsHiPmI/K7uAc/wBKnlG+eNZE2rG6jKjA9Pm/z3p00bvNiFWU8syNjnjnHPvx7UNLqQnFvQs3
PiDVruzdJLqWfayhgzkKMA4GD1yc/n7Cs+OKC5SQlWUovI6k5PRcf0rqdF8Ba5rdudQ07SpnspHK
q4wsbnjkE5z/APX61T1XQ7/wxqC6ffwy21xGqttZCu7jgg45/wDr1i2rmyTIE13UrSFYLe5kSNGG
2PzCOOmKr3Ut1f6gk8skrzKDyWyc4OCc88VY07R7zXtRFtZIb27JeUJC3JUA5+X8D27Vsah8M9f0
fTbi9vNKubOBMDeU27M8L2wMgmrukJxZiQ6jc6aySWV3PBO5ZS8LFX2EAtz6HGKtXniDUtQtzDc3
097GjbwkkhO0AdOM9BjketVGihVCkbYmI2bWZWIJ9s1t2HgTX7q0gkg0m4WGSNfLcR4jmHUDJ552
/p71DQtbmCrL9plbfuyAAVYZ6n5T+I/QVtW/i/xFaFY7XWboRJhXBchVUAjj8wKoS2p0+9urW5tT
FcxlonSRfnXg/wD1+tS2OhXmoyvbWEL3bIm5UhTdheBnA7DNUo66mnLoVJbuSS4WeRvMkkc5bnkg
89amsLu80u9tLm0uDb38WHV4XIw2c5z2Geau6j4e1TS7EPeWE9vBuKCSVCADgkjOOwB71l+bl2Yn
yWXgRsOeOO/41s1EyUdTpofiV4ut4VaHXL0NGQFYyZIIHPX/ADzXOzublJJtzyTyHewB755/X881
sx+FtYIaQaZdN5nyqUibcRj0Ix75+lZaLDDLFFLCbdhJskUnMh5Od/bsRmoUV0LcfM3LPx74i0TS
4LSw1e4srKFmxFE4HDctgEEZ9M1HrvibWPEsNpFqd688URZoRLIWCkjGF4+lU7XTrnUYJJILQyRK
cOyqSQOMHOP85pL7Tru0WIzWcltH/wAsmdMZ5wSP8PpWOlx6Ir2t1NEqSxuUmimXbjOFI5BBH512
lv8AFPxYGZm1SQSlSN2BkZBHJxn/APUa49IxFeMVimMmwkbTjOeBjPc5xV46Vfw3G+aykCKSxAXK
le3X1xVqMXuyeVso7t1u8T4jIHy4B2g9u/8AnNdLovjzWvC1o9lp072MLsHkhaPKE/Q9/wDCublU
kctuRySADgnjjj2/nVqCBpYBtVnXhjtTIq3CLMnzRNnxD451XxeIptUn8yS2DxxxnCsqbucKAAMn
8eATWfol1NoOp2t9azMt5bSiZCMb/lYENnjgZGfYmq08EkibnhMSgsMY2FueefT/AOvUcqReVIr7
2ccbBliMcjPftSUIiU2tzu7r4u+Jro3KTXNvHFcI8crwWyq53Zzz2/DFcTJPIZGZy0qZ5dwSG989
yae1vMCJ2L+QVGGaMptBHQ55/OnxWyNIpLs0QGVBA9Tyv146+lXaES0nM9B0X48+IdAOLI2kQYBQ
6QjceAeT+A4HpWL4q8f6l4s09YLiC1UCQuTCm1nYAgDceO4OO3qa5x/3UZcDDtJhcjvt64H40Fld
uhaR14RgQU6nJ/Wo5Y3uPkZpeEfEt94Q8S2es2roLuBiVCKDzgqc/nT/ABj4oufGHiG91i7iIvbh
laXaDyFQAH1HCjP86x1tcF1dUkQt9/YARnsfp61OrFlcSZIUY3NnPB7Z+laQSvsQ01oVZIpJMqoV
GCZZw3KjsQO/au4T4sarq/w+s/BRt4m022xtfJLPgll6+hIwR6Vx8iqmE5l3rtG0g8Y49vahgkbl
AdjAjaVH3j+Hfj0ola4kn0BHht3VSHLHBAI5OcZ49s16t4b+Nh8L+GDoC6RbS2zRFZm3FTISMbmA
75ry6ARRMI8zSDZjJAA3AU3yz5cnmhV7NHgkPg5BP/1qwdrlc8kWb+4i1HUbidDHGJHkaOFcZjLH
IGfbj9K1/BXjDUPAeuJqWnzzKuV80bjiRQclSOh71hJ5hQSSKjrn5C2GHTjFOZZXKgxsrEfeC4H+
c5ppRe4mz0K3+J3hcal/aLeG2+1I4mBdwCTkk4wpzk/Q1ynjTxfc+O9auL6dRADKTDCwXEaFshR2
4PrnvXPSRNGUbd8gOMEY7e/4VIYFkKuCznHRfX0q3CCGnI9Lh+Jum+INDsrHXtNN1PZFxHPAFjLK
RwnToOfzqPUfilY6f4Gk8NeHLR9OS7l3XLyxg+cjIAV6+gxXm7kiFFXc4AJ8sjkZ7cY7+lSCNnjJ
ZHTaSeG74NQrXJ5pJnZfC74mP4Dmv7aVZZ9KvYSLi3jGXbghSPcfWtnQPE3gXwzrdpqttBeO9i3m
xoyFsOMHrk9Me9eYj9wgwu4MwBJ6gflUoVDK6hWIHXaeo7/SnyReqKu3uaHjTxRqPjzxDdatfhHn
kCr5iAjKqMLwenAANYohUOWkJIK7VXdjkdPr3qckR5XBlHT5eOD65p8kxiuN6YicH5Bt3E/TqB37
1a0CyerKRRoiXZ87/uhT0r134DeLvDXw+v7rV9Yup474xNaiEQll2Flb35JUdRxivI3EcMkp/ifL
MNuQeMHGP50su6GAyNuz0wefXtUOPNuEZK+x03xKm0mfxJd3Wi3Ml1bXcrzbnRl2FmLFcY9TWFpF
pYajfQQalIttbSHyZblBwinocYyefbuKhkzbRh/mQFSX2gkY9cDmiF2R/vgoBngY2k59h9aXIjbm
fY94+N3jHwv428N6V/Z2piS90yEosYhdWkJCAgErnA2g9e3vXz01uoKtLtcKxwc5JPI5rRE3lfKW
avucszeTIFyGIc/z6j61JLEShO/YG5x2NOKl8uWLPjIZh+NErs+OhXqc+ntWhKsjV8FeGLrxn4j0
/RbEJ9tvX8pNziNQxz/EeBxX3x+3j4I1XSPgv8NbuzuWsn8LqBLPC+194hiQMpHcMG/PrzX57WV1
NaTq1pLJFLg4dJCjLx145r7t/wCCkN1PbeBPg9YNeXDlrOZ2G/cG2w24O/PU5IOfrWaj79zVS0Pj
j4n/ABGv/ijrNjq2pDF7b6fBp7Tk5LiPPzknqxLZ+tfdH7C3wx1W0+BfxCvblY0/4SK3KWO0nIT7
PKAxBHBJNfndI4jhjIcYyNw529eDX3l+xlqU9v8AsnfGK7FxNELKGdIH38rtsZGyO4I3fkBTkrWJ
ufJWoeO/EXg7wV4k+F+qMLm0kuYcwiQvHbvGwbMfpnA4rrv2PPh7qvjL4/eFprSNDaaVfRXd3LK3
AjDjAx3JIx2xnOa8VdzdSGSVpHBYhizZbPQZPrxmvZ/2N7q4/wCGlfAEENxNClzqaxSpDKV8xdrM
A3qBgnHtVT+FhHVnt37bWoeJ/hN+09Y+PrORoLO7s7WFQjbfP2Dc6OOuCSO3T6V8f63e3XjXxjqd
2kH+naleSTmPPBaRywA/E4r3v/goDqE13+0z4hspZJHSxsrKGOJpNwjHkK52j+Hlq+cYZ5IZUlg3
CRWLDYcNuHQg9QaVmloJXvY/RT4jfCbX7X9gPSfDBhhOq2Msd5dIjqEVVmmkYZ6k4K9Oe/tXxH8R
PitrHxN8P+FdP1qR7mbQkliF3JKWkn3lcEnAxhVAr7A+O3iPVLX/AIJ4/D++XUbv7VqM9tG8vnDz
JEY3BILZ7jAx7AV8FXTLJD+6wrpztPG7jFOEU0mxu1z7X+AF5Z+Mv2KPiD4L8PuJfFqCe6ljjQhm
UyR+UdxGD8qY69O1J+xd8LPEfwj+JGq+LvF1qNK0Sx0acz3EjAkD5fbpgE8+gq58OYF8Mf8ABN7x
T4g0kLputzXMqPqMLlJGX7TEudw56cD6Gsn/AIJ265rHj34x6/pevatea5p8uhv9pS8lMqNmSJeV
PHQn9fWo5L6A2fOHiT4j3Mfxr8QeMPDd+1g93q11PbS7AdkUkjbcAjjKle1amm/Cbx18T7yLxT9l
ur+fUbrzxfM5DM+TyDyQOO3ArnvH9i998T/F9tpVorf8Tq7Vbe0ThQJ3UYH0Ap/h34ieMNAvLHTb
XXr+wW0nEYskuCqpz0254+nfkd6ua0HHfU+kv+CjHiLSdW8Q+DPD9hMJL/RbCUXwi6xsyRbAx7HH
IOe59K6b9pfRZvhL8Bfhr4H8E280J8Urtu4o1DSXZMUL8nrglh0OMe3TH/4KUeHtO0bxt4NuLO1i
W/vdNupbt+rTFWhUMxPsCB9TXdftn+M7jwNpnwJ8W6ZZi/h0cm5EgOIgdkG0M3YNhhwKwWlrK+g1
a+pwv7HPhTxCvxJ8QfCnxnZSHRbrSZ724024jwiuWRPMzjKk5wDntW7+zj8O/DvhOy+PHjKzsI31
Dwpc39ppi3Q8yKNY45GBIGATlefY8YrR/Y3+K+ofHP8Aag8S+NdUs47FF0HyibaQmCPEkShdxGMk
bjWt8Eplvvgd+03LGCxu9S1VkUAbnXynXKYHOelJNKTVuxV1pynyFo/jD4kaL4hHxVk+1tLe3TO1
5IhaCR2+Ux46EfLtA9q+p/2hvgB4W1v9qD4W2D28kH/CYtO2piNcB1ijGAPTOAPxzXznP+0JqPij
4NeEfhX/AGSLGDT72EechLyyAOx2lduQfm6+1fbvxjkT/htT4EQLKitDaXryo4C7fkbHfvt/SnO8
fuBI+ZP2wNW8Ual8Zr/4beE4riHQfCkEIitNLUxkCSFCzME6j/HpzXW22lf8Lr/Yy8Q+LfF8A/t/
wlLLHaXcMamd444oxhj3P7xge3fgiq/xZ/aEb4CftdfFnVDoZ1X+0be1tI5HJGCIEYkHBz97kD0A
rc+F832f/gnV8SblxGz31zeMdqY37mhA4PPT+VJP3krFXsZfibTIP2df2OfCmueG4Q+v+NDBHdXn
HmqJ7dmKxsOhGAB781z/AOyFqHibVPiHP8LPGMb3mh69bXFw/wDaG5pkZYiVZCeADkdjyK7r9p7U
X0D9lL9n3UDbtcCyvLC7eNiArhLUuVK+gwRTP2fvjRb/ALQf7aOja/Z6WdLS10G5thFGxBwOPmPH
98U7NQTsQpamX+z9+zz4Ztvjt8V7i6Rr6w8AzGK1tLoKyOTHKcseenl4/GvEU/aV8fXPxBl+IE5u
I7BbgRxWHzJZKgUoEyRyP4uO+K+xvglcx3HjL9qmS1kCFr+VY5UB2tiG5OeTyckZHrzXxyPj3o0/
7L9l8NY9Cjh1Z5xIdRbbhV83zMnod2DtxzwKHre4ua8tD3X41/BLwxpPx7+EuuS20f8AZfiy7U6j
aPhYI1RI2POe5bBHf+fLftk+MfG/hf49vp3hO61Cz0S1061NrBp8ZECgocgHBHUHvnn2r2n9qDwX
ZeNviP8As3eF9Rlmt4b1preSS3UK/EcB4PUA46jGK8o/a5+PWufDH44aj4U0eHT7nStM0+0iie9h
Ejg+UCC5P3uuaSu3dlLTYf8Ath2lhJ+zN8Kdf1K3hbxTeRW6zTv8srRmB3c464BI/H61D4R8L2f7
M/7Lth8T9Ntl1jxV4qRLa1mlIBs0lR8ELg55QHB646inftzaGl98HfhV49lM1trGqwQWz2ofMESG
2Mi7V6Kc4H0+lei/Fnx9dfCr9iH4U3NnZ2d7Lfw2cflXi7lUGF5Mgccjpx6mqu9EJ3aPgPxd4u8S
+MriPUfEF5NdvHGIFLrtCjk9gM+uawzZNPIFjRmHQIoOSMZ49eK+6PhToGnftTfAH4g3+u6Xp2j3
WhhLiCXTIQCzLA74Ynt09MVxX7BPwm0TxWfFPjrWYjeSeFIQ9rbSDKO7RO2T6kbcVpzu2hCjraSP
Wv2FPAXijwzpnijTfE2ipBpcdvFJp63MS+YrP5hcZxnkYOP1FfD/AIC8Uaf4H+JVlr1zYrfWkGof
bZLEY+ZVcsqc55zg/hX3d+xJ8Wte+MHiL4ta7rlycNaQPDZhv3VsD5oGxc8cKPm96/N6WWaW6l85
jy+PMAAPX07UoXk3zF8qWx9H/Fn9sPW/F3ii6u9D02x07SngRYLea3RiDjBycYOf616X+2nEk/7L
fwp1ZrGC1vtQkgkma3iCEt9nLHGByB8/H0ry39mf4DaXrel3vxG8c3DW/gTQpBLNklRNIpX5Wxzt
yy8DqcA1xfx6+PWp/G3xBaqsTWPhjSgLfTdNilDKFUsofgDDFSOPwpNWldbE3Vz2bwB4B0b9nL4A
WnxZ8S2A1PW/EEKx6NGCrLCJULITnGM4BPXGKd8EPEmlftORX3w78SaVbW+rSh73T7qxjWJVEath
GIwSctngD616B+0TY6aP2U/2e4tYVzp73umpcYyoWH7NlzjsduR61L8E2+Hd9+2lo3/Ct1jOlx+H
Llrh7ckqZc7cYOTuwV98VmmlZ2Cy3Phf4h+FpvAHjLXvD17hrvSrlreRUbdlge2OB/OuZM7A5Od2
DywBz7V6f+0888v7Q3xIlYMJBrcw5HGAxx3Pbn8egrzA3SKQxYMWPAHX35rtWxk0QhyZd6lcY7kd
T0Oe1TF9kW4qWfZgnIxz2/WmJJm4/dsPLc8oRjn09ue9Oj3KWL/M8Z2kHp9PfpVBYqs++QkhtoAA
C9T7/wA6kYl4Hwvm7jhSxyR7frTpLdghkyAzNwSeRSujRZKS4kUnlfQjHPvQK2o1xJEjSH5T0KKO
cU6OBch8blxyoOM02Zf3gQSkgKWY4x/ntRLEyRZR3bdgFlOPoKVrjBUREO+Mtk4DY5Bq2mYrWRGw
8TAFccEEc/pVYMIXHmKzJg4LHoPrVuGNoLV2Qhck4fPT6fmKqwb7GW7KXd1RvL527uee2a7T4LoZ
PiLpEmFAEm9iTtPAzwcHBwCfwrjI4POj++3LevArtfg6n/Fc2WxTKYYpXIAHHy+p9yKkmzRyF/Eq
3czFgQr5JiA5H4VA7F2GSxAHAXtTruQtcTlhsO44VRxjPT8v5U0zKjHlSBwD7VjLc0PQdEvvJhRc
ur5+cMCTt59CMD/P19K0y+E7RJF5Y3AqORk8nrkDH+etea6GIXzG8zQEn5Ag3begI9cV6H4bgWH9
xIsah8lCqh3fB+8eOOnY/wCFdKOeb6JmrbXZjjkEJG4rgjnJ5yGBJ57iuT+Nca/8Ir4dFz80jz3L
Aq2B0jz8vrwOa9AFlFEfKigljdEIy68OT3wOhrjPjzAsHg7QpSwEIuZgA33slU4+nB/Kqa0MoJ3P
Bp7aJLUOm4scg7v8as6N/o94pVTvDAgDkim2simxuMfMAMAEHPepNBgMl+CW8sFtoBzt5HNYLc7F
q9T7V/Z9+B8vxJSLVNZum07w7bA+bdNgEuQSFTBPcdfbFYPitdMi8RXKaA8l/pe7ZbyyDbvXsc8+
/PGa+hvEMbL+wd4Za3kWwd7WzBkZQFbDbArMp6k7R179K+XmiSSPDkeUV2ksx2k9cY+oxye4+ldE
I8yuc8pWlY98+IXwM03w78LPDnivTLt786myidFU7YsxFwPXjB6469Oa474MeAI/id8Q9N0XUHli
gu5JEnkQcx/KSC3pk96634E/GqLw1PN4c1q3kvvC2qr5fleUWML8/NnPQ8fKB9K9j+KMXhz9lXSP
s/hqzY+I9ZJks5blN6x7CA/OScHf0zzgVnzSWlzVci1Z5Td/A3QNK+Odx4C1DWriK3uhEmn3CQs2
6ZhkLkHHJJ/I9iK6jxl+zp4A+HevW+lap4unsLi7RZoEkC4VC2wt93uegJPNeR/DbVZ9U+NPhe5v
LiRbqbVbZzli7EGdSe3QDn8K+wP2gPgavxP+IX22e7jigs9GmPlK2JHkV2KcHHH+etY3aesgdvsn
zN+0P8BrX4LJoxg1P+1o77cyoQA3ByScZB6gDHqOK6H4h/snal4Y8A6X4j0u4bUI3tRcXsI6wo0Y
fcpA+YZznnp3ryrxd461jxbo2jaPqJlcaRmGKSRt0i7nViD6kAD647V+iS+PdG8F/DzwSfEWBa6x
aWdiyhPkDPDHwQOi8/gKqVV07C5ep8S/Bv8AZ5X4weEfEuvrqotjpLsghePzHk2oX4OQBkDGR249
q8elt7iOaVnR9ivhiybQP14zjr6V+o3hr4XaV4E8P+NDpKFoNXSe82DJVT5UgUIQCcfN29eK/MTW
IJob2NSBBguDGPlDdRke2T0NXGq5kW1seneKP2fXPw0s/Ffh/U11vTk3G/eBV32rADIODyMk9u1e
RW8ElnNHJMPLB+VGU8+p9ex/OvqP9g+6lvPHuteHZ52u/D91pcjyadKP3bOGUFjkdcZH/Aq8T+MW
h2+l/FTxTa2seY4b6eGJguGQeY2FAzzxtye3pTjOV7CdNXucMBHblEjCPNnARf4h2Wpbj57hWdGw
PlfBAI/P88Un+qlRQreYoz5jck56gMfenSxiSNg+7cfvANkHHr7cVqnqQ0RSonnBXijZEyd6khhx
kd+ucU43JaNth/erh95Hv2J4qa0gRpiyNlC5BWRuRnJAHoOPwqNZPs88QkDyhT8uU3Kqjg4x0Gci
tG0SlpdCMpZZJioN2SuVXoU5Yj69PrTo5TcXA805CnGAhAB6/wA8GpJy4kjIEa4O2Nc4LDr79KZI
wjulLh2Gd4QsRtA9COvPb3qlqQ7DkhbaZTlg+Dlhn5cngdh/P1pJwbRZY/spMRcHspLfX36064DR
kbD5gL9xj3498mldLmdyjw+Q+4+WGcEY6Zz27/rijQNEWLm3LM8c28KPuqQRzjrken+NQJFEyCNZ
jiQYDN8pGB+FEUtw0ifaZDIWUBZcgPjBGP8A69enfs7eGdO8U/Gbwno2p232u2u5nWe0kPBAR349
vl+tZSmoLQ1jaTNXwv8As3+IPEPhHT9au5bfSYLslEe+PltkHBIX0Pbnng1zHxH+EniD4Ualb2mr
bLu3uY91tdxfNDMM9PY4xXu/7UOl+LvHPxlvPC2hW9xd2GjW0MtrZRKAqZRN7A/xDO3qRjmum+Fe
kr8Vf2ZfGWmeLN+oy+HZJksJnGySHy7YME3deCGBzWKrTW7NXBM+Wfhz4A1P4l+JodF0aPE0gLSS
yfLHGAGO5u/8JHGa7XVf2X/E+l6LqGqWtxa38VqmZ7ezAaVRnnC56556E+1e5eHNMT4Zfsdw+KPD
5EWv6isPm3hAZzmd0wO3Hr/keVfBOPxl8N/jB4SW6tJbHT/FN5HBcpOvEyGQcjJIyAQfxo9tLdMa
ikeEXi+U/kO5Mu7DoEIOTxtwOe4r1jTf2YvF2saHpmpqLa2gv490YuH2OU9wR6V7/cfBDwmP2vYt
OewIsptKOrCAEqPP3Fg3XpkE49vavJvj4fGnj74o+K4IIruSw8OXL29qLAbDDEGOGOCCSCpJNP2t
3oyWr6WPH/H/AIM1H4c6/JpWs2f2a4ibnedyvjGDnuDuGK0PAvw11z4kXtxb6PA00vltJLg7VVR3
Y9vavpTXNIX4xfsmL4t14ifxBpTypHdgKpdEmVMOec8etT+LbWH4M/s4eHrzwsPs9x4jMBu55iWf
EtqDhDkY57+54p/WOiMlS1d0fPviP4EeLvDGh3Gq31sv9mwlIpJIDvbJzyRjgdK4VUMkyozMszEF
OhJwO3NfRX7Ns3iPSPiva+DPEdrNFpXiGO4+12F3klm8pyHXJ+UAKRxXZ/DT9nnw1b/tM+JtMmh8
2w0COC8sUmYSj52T7wOem4/hU+1bNuRdDwq3/Z88d6ilnJHp/wAl3CrqsjgP8wyuRyBnHWuB1rw9
qOga3e6bqFtPa3Ns3lSRyBduVyevXINepfEb4keK9c8b6hr6S3qx6PN5CGxR/IgWOQ4yMYPQ5J9T
6V6f8cPC9t8Rf2f/AA38TLiH7Jr8kUa3K26hYpPMdlLMuM5Gz361caskYuF9T5u8HfDPU/iDeTza
XazyyWv+t8snaB09up4rW8V/DLxN4V06PUtW0429rPP5Qdvu+vB//VX0b8ZrKL4FfC7wv4a8Lbor
jX1lW4vOUnYfu2+VgeG5x69axP2dtWv/AIg3+r/DfxWlxqFjfWU93Hc3JJltmARMqWHOQc555FHt
ZPVi5E9D5jSGWNlBMkhfAyxzkn/9dd+nwP8AGG8Z0qQbhuBKjO3Bxj16DH19q9u+BXwO0b/hZnjR
rwrfxeErgwQwywjbIcS7WbPcGIfj07V5rqfx28af8JgPFsdxNbwxSxv/AGeAwtQqkIVxyApDfn+k
+0fQcaSPKpoGgVrYRvHLGfLZXwGDcdQR06/nWp4f8I63rsV3Np1nPeJAoE1zEBtTccD2B4PHp9a+
gfj38MNO1XR/BfjizhXTj4lNml1awnMcbSJ5m4Y6/exkentWh8ajL+z54X0fwX4Vtwl1qMH2m51R
MCaZkdw2M9Onb17Yq/ajdM+a9e8Oan4Z8iTV7Oe1knkKx+YvzNwC39P0rPtoHkuojCC5mGxEByTn
oB/nvX058FLs/tI+EdT8F+I187U7CD7Taayf9cheVQR9RkVnfAH4RaYbHxZ4x1JFv18KyXFuljJ0
M0cYbcOxxyB9TVKsyVS7nh1x4E1+yDynSb5QCZC3l8FQT+Q4PT+lZ2/EkSgkrj5uevfH6V7NpP7R
2vr42bWrxS+kyyyvJpPlow8h1IVc47cfXB9a2PjT8BIdG+IfhqDRbtbbTfF92sUEOwgwyMBu+bOM
fMCBU+1d9S3C2x4ZF4fvtSt3ubGzmuYC+6Ro48jIGRkjoTjt6VV1PRrm0mkF5bT20xy4WZCu4d8Z
9+9fSvxh1r/hRuk2PgTw3AqX4gjv7nUpY1YSZUjCqehyp/WpNB0W3/ad+Gl1JJALPxd4bjjD3iIi
R3cbB2A4A7L+n4VarWZi6N3ex8zNE5MEcMHzynaWb73fg4qzPp12LfcLKdQvI3oVXHXPSvdPgL8N
rTTvBl78S/EkbahaaVKZILKJQCJEdRz82GAJ/r2pPC/xwTXfFF5pnifS7OfQtRMlmEggxNHvOFYs
T2z2A6+1Uq9nohqmeBvF8mGDRqpGWBUlqkW1YGM5YRAnJCk4OP6+nsa9m8cfs66hoXxbs/B1jLHK
urEvYO5AxEgbdkdMrsI+tdF8TvEeifA6OHwfoGmWt9qGnNvvJ72PO5mUEAN1xyxpSrX6Fch88Xlu
8SMhURyEjKnhgOoOPqMVNI7JcGEbjOoxjPfuOOp4Fe++M/BumfGj4dTePfDFktjd6ZGLfU7MwkLu
jjDMUOefvDn2qD4VfDbT/CfgxfiP4tif+yFIFnbBS3mlty/MBjB3KB0Peo9ogULHgsskqKhCkqR8
0hXqD05xjPpTZ1WSININ3AUBQQAD9K9+8DeKPDXxXuJfCesaRbaNdaqsdvY3lgjM0cpYDac8Dtzn
ua4lfgZr4+Kf/CFyRSfbtxk8xQMGHdgSt7bcN3701UXYfI2eawukkbO7FfLbKkYzyO/p0qRN8lwx
cBVxg557V77448ReCPhpq9v4UttCh8Ry6Wph1K6kYqROHK43FfmOBzjgcCsb4xfDDTl0f/hP/CDt
c+Fr4lZY0ziyYEJtIY5xnOcDjHvmmqie5PsfM8cVmDkBHUj8OCOSD/nrUYmQBfMjYOzls54I9ele
5+CvhlpfhLwLN4y8a2++1uIni0yxm/5enKK6MF6AkdOfwqz4Z8MeD/jXouo6TotgvhzxbBIJbaK5
m3m6UKxIGMbvp2/Gk6qXQfszwaUgSPGpcE5z0z+dMd0KshQlW7Ak4I719Afs4fCbTPHPijxLpHib
TZ4brTbaPML5RkdmYEEH02jn3rmrnxN8Mre9uk/4Rq/hMTsmVZcvtYgHPGRx0zipVTmdkjSMVHdn
k0rDygxUo4Tbng5GeD+GKEmjMO2YkAE5G0jP+TW/461DRtQ1WKbw9YXNpYeT89tcKC3mc5bIJ/Q4
5rnYVdonLrubJ/ix8vcdOvb8K3V+xzuXvaD3CGOZVBePZ065Gfb/AD0od3aPhQsfAGeSeOtKsDMS
qSFQBgAdCtdN8O/h9cfEXxVZaJak+dMTIc8Hy1G5j+WalyUTRJs5hyrx8q3yjHPU8fyqJI2hhJVP
KIyAmOMV7/rXwE8Paw/ifTvCuoXDeIdBlEdxazAhJFXdu2Z7/I35AelfP8aNJhYyqSkH94p4PXaR
nuRj8zWbmpbFJWeo1UkZhyIpF52nkhvcioLpWSTcOFlJOw9x6mvdvDnwJsbbwTomv+MNTm0c65dQ
2lnGELbtw3Bvu98jrxwea88+KPw4vfhz4qn0u+HnLhjazr0lh3EI+fUgfoahSCdkcks3ylflA7Oe
o/ChgZEHybs8Bsfr/Kuz+Efw0vviz4hTSNP3pbRlWu7lhlIkbODz1yRwK6Hxh8EPsPhBvE3hfU49
esre5lhvjGcrAFyTkcE9hx+VVzLsKMW9TyGJnjWUqrht2Mtx+P6Uxo2MGP3gXjexPUjn/GrT2880
ixBuGyu1V+duvP517PpP7Ld1cQ6VBqmsRaXrWpW0k0GlSko7bAcehJ4HT1pOaW5pyHhc8YLAsitt
ywjTKgCoJo47gmZ1UwkZVWAKlwTWxrGj3nhzVJ9M1GGS2vbZik8D8MjH0x2Ir0Lw58A7vUfCEOva
te2+hWEszQ2y3bBBL8obcpOMjBOMehpcyIUH1PJp2eV0MiHIGBgkAfhzUdx5nmpGpd2PG4g4b2zX
qvin4CanpXhG48QaXqFvrdraT+VILAec0K7WbJweBgDmuS8C+BdV+JHii30fTVednHzyFlARdpO5
ueM7cdO9HMi4xSOUdHIkZ1L7QSecDHccj1/lUE8kSshIZAzYJ67SAMcdhXttt+zVquo6pFp9rrmm
SyudqxJMjb3wTgfN6ccA968i8Q6TceH9VvdMvLQ21zbsUeOZdp4yOaSaZpp0ZkxRo5YhSxweCo4J
/wA/pVedzMztvyyL90LjJ7/WvRvAvwR8R+OtLvtQtEjg0+1Kgz3J2oxxwFP8XQ/THvWT8RfhPrXw
0t9PuNTgM9nqas8M0fKKVYLjIz3I6+tNcrHZnGL82ZNo3bMk498YqpLIJi5mUyEhix+8SeuOfwNb
GjeG9R8R6xZ6VpsZvLy5ZY44ohzlmCg/QEjJru9d/Zl8b6Rpeqag9gLyHTEe5lihlBIRR8xAHUjk
49ql6MmSa2PJJJdxAIYFSSckcjH+P86guBLMyjLKMlsA4CnHOAO1X54zidcETA+nTHGCPwxXYeG/
gR418U6PZ6vpmmST6bd7xE7qy5CnqMj1z7cUXREebqeczCTfubLBjjBfI4//AFVDJcywySiCWZSx
IbyWZDGO+cdq3fF3hHUvBfiG60fWbdrO/hQM8Z4B3Dtntj2qLwv4R1fxxqMlhodk+oXaRGWSNTgK
q9ST2/Gk2jWzZgz3st5EYp55ShOf3jblQ889aq3bJDKpMjt04UcnjtjtXc+M/hL4n8E6Z/aeqaRP
b2vm+UJZNpXcQSM465xXD+SZA5D7jsJBHQdcfXrSvEtRaHya9qtvGkMWpXkceNyRmdwAeOvOPT8q
zLsqpBkc+Yw3nZ0OK75vgX46vI4ZV8N3bpMiyx4U4IIyp5xg47e9ef6jFJbT/ZZlK3CMRKJBt8sj
scGhWFsP03xLqehLKdN1i+04TkM7Wly8W8joSVI6dOfequsa1qPiO7trnUL2+1W4iQxxS3kzyOqd
TtJJwMjPbPFdLZfCfxjqekQ6lY+H7i9tLhQ0MkKblkT+8D0I/wAawtd8Ja54cuVj1vT59JunQyLD
LGVJTO3I4GRkdqegMyLW6ltLuG6tXktbuBlkhuI3IZWHIOeoOcVt3vxQ8U61p13YXXifV7ixueJL
ae6dkmHOQwz8wPofWueS0uL6+t7Szjaa4mJjjjQEtI2eFAHfNXPE3hPXPDiBtX0q609JGxC0sTKr
4GevTvSTRm5NGLPtdiHVuOdynGB/nFdJoPxe8aeFbKLStI8UalY2sRLxQRSsFUls/L7Zz+dctJvd
gp4buf6VrxeD9XvbEX8Ol3U9m6M4uI4y0YUdcn6ih2vqNNmXrusajrWp3N3qV5cXt5OfMmurmQu7
t6sSeeKt+F/GeteANTlvvDupTaXeyx+TJc27AMYyQxToRyVHFY5fzHO5SCQMFST+JH60lvG1+6wQ
ozOX2hU5LEkY/nT0KR1XjD4reL/H2mW9n4k1651W3tpPNiScjCPgjcMDrgkZ9zXEICkiNG4WVSCr
Z5UjoQfWrupaZdabeyW97byQXOFZo5kKsOPl4PYjFVCpt1BTkF84POPUUnsWk7nqMn7UvxYRQv8A
wltxKiJsy8URYrjHJ2Z6e/515NNcSXgHmBVkYli6cHvknPua15vD+qeXKV0258kqWdmiJC45JyBx
+PvWQYljILDO4ZLZqI26DaKkwwCqksw6k+lBxG4Z1O3qMDqOf50+QBAwAIdjj3oW5ZWyUDMePw7V
TRlZXPVPA37UPxE+HPheLRPDuuJp2kJnEH2WJ8gkk5JXPJY96868WeML3xh4h1DWdVnWe/vn8y4l
jjCBjgchQAM4A6VneQbuZFiU7znaq9T9BUb2k6SStJBKmzjmNgPTuPXI/CklFCtqdV8N/iXrvwv8
THWfD1xbwX7J5XmywiXauQflDZAOVHPtW58Xf2jfGXxqsbS08UXkF6tpcmWN4rdInzgjBI5x82a8
ywYdg2fMTwoNSzWk8Sq3kyIhAKsYyAR0BzSaLtcSOZkuFdXO6NsneAVJHse3Tivepv23fib/AMI6
PD4bTk0dU8v7PHaKq7QpGP1z9a+f2jZQ67W3jquc475P1qRELFlbIwOT14oQcpF9oRl2sjEEYyOT
n1Net/Bn9prxl8CbC5t/Ci6bE13KZHlvLYu/KquMgg4+UHHrXk6wyxwCRyAZM7VxyRUbZ+RpH8or
gKBxk+lPlTCx13xW+KGtfFzxnJ4h1wWf2x0EJNrF5SsAzEEjJ5+Y81ytlqVxptytxBcS2t0jfLPB
IVZD2wfyqOOONlDjc4bA5G3k+n+e9NkQk7SoIQ7s45I+tFmyXvoe/wCoftveN9YlSa60fw/qFzHG
IWubizYu+1cKxIfk8HP19q8y+KPxW8R/GTxPPrviS5Wa4dI0jggBSGFFXAVFJOP161xQU7W3Fh2y
i0oEeG+cyP8AxY4x7U1Gw1c9h+Hf7UXiz4Y+FJ/DlvaWWt6Y03nxw6vulEBC7R5Yz8oGenSrXjn9
q/xb468F3HhSGy07w9pdzNHPOdJEkMk20fdY7iMeteLhCVAkGwHILE8Y/wD1inyKZIwYkGzGCwHB
4qWkaaktjqVxo15b3lpcPb3UJPlSwsVZD0DKex9xzxX0Qv7dXio6pBqk/hzQbrVoY1RL94nEysqg
Bi27k8D0/WvmxogYt4xt9c9P8mlJUsu9QS33UPbJ60lBMLs1PEHijU/F2sXWq6pdy3N/dSGZ5ZJW
clifU5OB0H0r0r4Y/tOa78MPCF34Yl0yy8S6NJcrcx2uqMzx27BcYXnoeTjpkmvHjDicInGVJGR0
PpQ0byyYYbSDz+HYVXKiW2eofF/49+IPi9aaVY3MVpo2haajRW2kafuS3GSTvKnqR09qxfhb8VNa
+Eni/Ttd0aZxNbPhomkYRzpyCkgBGVOSfqK5GECUqjJlB2U4/Gq6kxyM3J2+2SKGhpNn0Prf7Yuv
Nomu22h6DpHhy81WEwS6nppcXCKTk7Tng579f0r59luZWnYuzNK7eaS5y2eucnvnnPXOabKgbJLb
QV4zxR5QBOAzPgg+3HHNZjs0fQmmfte6s2g+HLHXPDOn+JbzRYPs0Wp6nM7y4B4Jx36c5PSvLvip
8V9e+L3jO81zW5H86YgQ28cjNHbJgDYgP3RwOlcbvmjZFYgKeSMe+KQyGRlCHBY9R0P0oUUTbW56
18IP2h9Y+EFtrGn/AGVdc0PVLYwyaTdzssKPnIcAZwT0OKv/ABK/ad1fx14EXwbpml2vhHQTM0l1
BpMrBbsMvKNkZ214wsRKsrEKobDZPNSxlGJhRgwGctkE0rJMrlZpeFfFt94J8SWetaTPJZ39lIrx
yxEo2ARuXII4OCCK+hX/AG35rfxDeeIoPAWjR+I587NTRjuUlSm/OP7p/GvmVgFdosgMAMd8nHrQ
0IB3+Wz44YHoBjjt9DVezjInUueIdbvPEuqT6pqN5Le6ndyebPdTuGZ27kk/y9MV7d4R/aqudE8A
WfhbxJ4ctfGsFlM0tpJe3BVreMqFCLlTgAAjg9BXz+6qV/drwBljjpxQsWcqC3Q4Ocfr+NHsYgmz
1f45/HzWvjW+nxSxpo/h/ToVjttGtmzDGVGN/QZbt7CqvwR+OWrfBfxGuoWO+9sJlZLzS5JSkF1l
Co3jB5Gcg15s9wDuEJLIFww6GmyNysn+rCk5x1PHek4JFJs+idV/a7ns/CuraH4M8KW3gq51Hy1m
1LT7klyincVA2jGemc968G02+uNHv7bUbWWS2u7OUSpOrZZXByGB65z+tUIlIkLY3DocVEsn3sAr
zg9+KpQQ+Z3Pp2L9s62v5dE1LWvAVlr3iOwihhk1Oe42mUxcrIRtOTu5+prxX4nfFrXfih40vPEO
s3LyXkrs1sm/KWsRcuIk6YCk4/CuJkiMIcNJkDBDEY47fTtUv2b7rqrMpHzBvTnmj2cULnkfRK/t
f2mveEdG0bxt4NtvGeoaTFJBDqt1OUk2MR8uNh6fL/3yKwPiv+0xeePvBNh4Q0HQoPB3hqN3lubS
3n8w3TMQ3znYCMEE8HkkeleJSbXfeRtGQN3Tn/JpMvsCleCfTpSUVfYzcu4jFYwoYFiXORjA/lTX
/wBVvVSGzkjtUjwuAo2uNp6+1MExRtkisScgGn1GmdL8Ote0vwr4v0/VtX0hdatbZy7WTvtEuVIA
zg8ZI4r3L9oj9si1/aB8I2ek3XgiDSr/AE9VWyvY7veLdcruULsHylUAI96+aFGRvZsqOgPanmVZ
l2lduMjJB9OM0+RXuPci81ncoBndg4xX1n8Kf21PD3wq+FN14Ktvh79qt76F47+Q3gUXLtHsdmwh
PK/zxmvkp0dZG3ENjpnIp6Bsnbgk8tnJoaRKuzX1zULS+1u/urKFbezmnd4rbIIjQk7R05wO9elf
s2/GLQ/gl41HiTU/Dkmu31sytY7ZRGtu4yN+D1OGwK8f8spuBUSHORjt0p7swWNcsJFOTjoPSjlT
Ks0ezftP/HXSPjx4tj16x8OS6DqLri+cyK3nkIqr0H8IQD8a8q0W9t7bUbS51G1W7sUmQywLgblB
GVzzjIB7VmTM3yu0m585bPU1O0Jg2yK2AcgYUkfXHeq5UNaO59c/FH9snwf8QfhOPh+PAEsVhaQf
8S5zcIBDMFbY20L28xv4vpivkV3Z4gpAXPUr2PcUwzOG2rgNuycdhTZ3IDMVOc5AI70kraCbu7n0
J8IP2lrPwp8Nda8AeMdMn8R+Erza0WnwFUaJ/M8xjkgZyQvftXU+Gv2oPA/wo0jXLv4beFrrQvEt
7Y/Yoru7ZHWJdwOCAOcfXnAr5VGzcQQSemAvBp8sO0cAMwGAFGCankuFz0D4V/FC/wDAPxIg8S7I
9QumuGe7R41IuFeTdIMEcE/NyMYzXtGofGf4Eax4zl1258E63De3N79snyE2B9wOcK3Q46YJxmvl
QFiu4Aq27Ofr/WpsCQFwx3HqAcn8aSpJ6spTZ6t+0h8fNV+P3j99Zuo1Gj2Ia3021bAaKE4yG6ZL
bc+1dp8Kf2jtGX4Zat4F+Illc614adV+xJFCJJoH3lmwSR8v3cc8Y9DXzpvDAMCdq8EdMjvSh84j
UMqjpgY45xn/APXTcYi1Z9JeMv2j/DHh/wCFZ8DfCTS7/QI9QkkOqX13bqk0sTfdRGDEjGTyen41
538CvjXf/BbxnBqSBtQ0u7Jt9RtpTuWSF2UOdnrjI5zXl0Z8maNwowygKR6f41JEv2hiNp46nPXm
hwTJu0fXlh8bfgX4O8dax420TQtSuvELwzT2sFxaj7Ok7g7R97Awcc8YGa+d/F/xe8UeLfHk3jC7
1GY63JL9oWSOVg8AIxsj5O1QCeAcVxDx4fKttXPH4UpKh0WNsZ4biodO+5SnJH1z4n+Nnwo+N3h3
Qbz4lC9sfFForpeS6daSOZlJAUsy9flUHB9cVxP7Qn7RUPjyK28IeC45tE+HmkBVhtoVNv8AayFG
WlHBPzDjPB614EoHnu2PNYjG339qZ5paQrsVFyRhgdo9veiNGMdROTZ9LfBD9oTQJ/AWteAfiW0l
zoUySPYXRQySQMUWPagxxtAOB3yQK2Lf4z/D/wDZ88AazZ/CmWbUfFWpzm3k1m8hMclnAUYZjP8A
eHy4wO+ewr5VIXyy6jcGPy89OvUVWmc4HylTkkgjAPFV7LUm3c9h+CPx88RfCHxkuozzTahpuoTl
dWsp5CwuUfKs7HqWG4n869ng/wCGb9J+JsnjEasLqCFmuofD0cL+V5gjCqmMYwDj05618b71jRtr
tjnJC5wf5+lAl+UleVXOCP5U3TT2LR75qf7W/iHxJ8edK+IN/A01loVy82naNI+I4V8soADg4OAC
eOoz6V6v8arX4PfHP4h33jqX4hW9hPqFtAj2m0gRlY1HOQM9a+LCXdXfa5Geh6YHSk3SSP8AOm8o
PlyOg71Hso30BSaPqb9sT496F490Pwd4A8NSfbtD8L28WdYEnE8hh8v5V4yAM556muv1Px/4K/aG
/Zl8G+DtY8Q23g6/8OXcUQFyy5mEcGzeoyMDknufl5FfFKxnEiAnYoIyx9utBnLsoeQqgJXYTx25
578fzq/ZrSzHzs+4tE+I3gP9mD9n/wAW6Bo/iKLxdqnio/Zo47KTcYkeFlLkdMDce/p6Vyf7Dnxq
8P8Agez8W+CNduFsk8RQyNFqVxIsUMZWFlwSTkZy3P5dK+TLeaQKFjBKlj8g4/TFVw+yYyszFWBU
tjGBnkE1HsSue+595fsz654U/Zu8Y6t4Qu/E9nrEfi+xw2pQTr5NmY1cAOc87mbA6elfGfxA0G28
K+K9T0qDUE1WG2uNv2iAgpITydpHvn8vaufYho1yzrH0GOPyNMd1Yuyt50/BJc8+vv8A5NVGnyku
T6H234S8e+HdJ/4Jy+JdAm1W2j1u+nmRLB5x5uXmQj5c5I4J6Hv2zXxHFIHBbIbAPHfIPf8Az2oM
zShUfc69lzxn3Hc8daRpooZvMSLcjYHJwfp+dXGFtBXPtH4feOtK/as+CkXw68SSwabr3hqKSXRX
3eXDIFh8qIsSOT8xyOeAOlWvhp4Z0P8AYt8Lat8Q9bvE1TxXL5unaVZW7hkJZN2Wx0G5R8w9fevi
VrqVHK7nBORw3XPanC7uMlZJpXVDlYi5KA464zSlTT0GpWNLxj4n1Lxt4n1rXtSaN9R1O4kubjyi
du5zngdhz69q55BIsrFRhjgKTzkf41IHzgsQpbPyg470rMIXCopZpD0Izt9h+X607WE2OkdJVkwr
FVIA59P/AK9R8DadrFM5AHTjt+lOnKiPYyqmTkBhjOcf4frUYdkjZRux78AfSqSJuMImuJDsbgN9
wn7v4VMxZkIAxz0AwD9PxNM84l98an5hzkdf8mnySGTKIXjbq2G7UxkbRIqsrDBZcZBpIAjSFCdr
EZVhyB9fSolBZFUAlmPUip5UIcEEZxztO4daBdRZIyJcSDeSchxk4xVvFuIZsbspycdPoffOKpLL
vkD5KqoCtkdPeppYw0Mm1izfxHuc45/rQDM4T7UYMCSTnAPFd58HIpG8WNLb5ZksLhyobBIEZJHH
XgVwMOUYl0yenIr0X4MWo/t3U5GjLqNLuj8rYK/uXII/ED86CTgJTHMZGRDgfwMeAP8AGmzFlABB
fHAzTnh3sp5dCPvE4+vXk4PFOCwlMSuTgnG0ZP41hLcux2elyGAu0xVhncQBllbJ7+n/ANavRfDs
/l6ciLGUTduyy7XYYGQxHIrz7SoB9vKQeY8kUjAhjk468Z68nkH0r0nw1JCAEjO8xyLkNkKcjGBj
j9e9d0TlfLc3/OuPOhhVNyDGDIcZ9Mc8nH8q5r4628g8C6MzjlrqRP3iDKkqpA474U8/4V3sFzZx
S5crvVd+AN20g8cDr+B9K4j44S+f4A09U34N+Vc7iVP7s8jjg54/GibuhpK54FZyPbWdyC7RqQO3
X24+tLpTeXdxOjMB/dbOD2OcdapSSMY0RjuUcZB/HNXtJKm9hVGzg4+Uc9a5LO51RP1F8G38HxS/
Yph0HwzKdR1zSrW3S6tk2l0aM73dQR35/l1r5kurW4tbs28luyTBsbSoUH5uuO+OhHrSfBr4gap8
MtTsdY0uQRzIrN9lkbEUwwQQ3ODk7euecV0XjPxUPGevnVvsdvYLOP3ltbsShPIO4E5yc/jxxxW0
VKKMnFOR7l8D/g/YeHPDq+PvGoNvZWbPLaxMAgmO3IIJ6kkEDnFdyPHPhT9rOwk8MajbL4e8RWob
+x7ieUMZmKncFwBnO0EqfSvDvEPx91TxL8K9L8A6jaQPZ6coH2xWYzSbBheMYzz1965LwD4yu/BP
jXR/EFt5Fzc6dKs6QTDEbttI2nvnn8Pes3B7g3rZHSeHvCd54J+NvhrStWs51vLfV7cmFgNykujh
iQRxjB4J4/GvfP2yPF2oeEPit4b1DSmltL1tJeJZIxlCGlkUqQB7gZII6V5Rq37RM998XV8a33h6
wuZGt47cW0gIEbp0dXGcEdOneuq8U/thWfim/s5Nc8A2NzLblkhkN0HbduzjBT/ZXjJ6dKhRle9h
c2p5f44+D2r+ANB8Oa/qKmEat5kkdtJGwMW3Bwep+ZWznsK+nP2q7dG/Zy8ASMqxIZbchCC3W0IC
kgf4V4V8cf2gZ/jRDo9rJon9mRaYksfyygeduwByOeFGM46jpT/iZ+0Ld/EL4X6B4Rn02O3OjmOI
3AkLF9kWzpgdRjk+9XyuT1RfMrH0T+xv461Txd8J/FGnajK0q6DH9ktWZ90gjNvIw3E8Z4756e9f
DOrSeXeyIzNGsQbmQbeoJwQckcV698C/2iZfgtoHibSl0qO8i1VC+Hcp5cmwp1AI5U49eK8XvJmu
r2SVQIZWYt5Zkyi87gPm6joOaqMHEzk23ofSv7BGn3CfE/V9aZJBpf8AZE8f2kBRCG3q+Cc8HaPT
v1ryj49aml58aPF1xaSQz2dxqcxgeHhWXdgkHPzZIHTjmuyuP2iktPhPD4S8N6cugkZW/nhkGboM
mG2/LxkjJGa8SnmlR3Z5t7OTtjAHTrgAf1qoxs7sybfRhPILYbSC0inaUXsD06/gKo3YF0AXRkRx
ggen944q5FI0TSxqrtLGRkyKWDZyeCTn5ePwqpJEtwsgbzIzHtYueoPJwPxroXkZNtD4FkjnUHzS
QoBDYzt7bsnrx36c0+V4Ikwu/oxLqgBYd846nnp1/KnNh5oVikBiYF3SRTljj19uRj3pTCqEr95s
Fl8sZz3K+3Pr3pNCb0J9puAzjOJMbUPAXvn/AD61HPsUqHkIwMZOSMY5PAp6qViuDDI+Hz8rcA8E
A/59KjcPcqE3BFQYLFyp3dBgd/z7VVkTdPYm+zQvbgLvdwxLCY5+nH+e1MksiCEU4jVi37v7pJPX
tjt+tSNNsSMxuUIBAaM84HU8etESKnmOzscrwjAEAehz/P2qU1cb1Gb4BLIDGhVeArkHLZ6Y7+vp
wa9h/ZSkCftD+DJ7idIgs8sQXOB/qn5PTJ4A/GvH0heRshw8hQkADBfjsR0+XOamTNsWn3kbCvly
xn7hBz9R9ePrWdQ2g+Tc+wvjR8Y9V+CH7TviTU7LTIr0X2nW0RM7FQBsQkqfYqcjHOeSK6L9nHW7
rX/gN8WdTvkSyk1C4vJUGOPntflHJ9McV5f/AMNBeDPib4J0fTPiToeoz61pkjLBfWIVWeMLhSWL
bunJHOSPesn4x/tAWWvaLpnhPwXZzaR4Vt4Y1Yy/u5ppFBHO1jkYI4Y9ulc6i5bI1Ukt2eu6n5kn
7AumNBvMtq8KHIyV23jcsB22t+ue1cLpfx71L4w/FP4P6RNpcVq+kapAPOtmLhwCuWzgcfKOMele
f/AL46L8M9WuNO1VG1HwlqUQS/sXTzMbEYKyBuh7H12jvXofhb4rfCX4XLrWr+ELLUL3xFJblLb+
0YD5SuHyMsTlecE+o70nC2li1JPVHvF1JGv7ZFozLx/wi7puY5JbLHb+Xv614rr/AMeZvgz8Z/ir
aLpkN5Bq960a+bKUKMGk5yAeCH7eleA3XxH1ufxpL4n/ALRuhqpmacXXmup5YkIBnOwbtuOOK9u8
VfEr4V/GPSdJ1DxzDPpvie3RvtK6dbMVmYH7+Rz6enNJUmtx8x2ngLL/ALDviFdrCRp7nAPy5P2m
MnHP48VZ+OV0dO/Za+FN80IumtmsJSr8gsLfdgkg4Jxj9K8d+O3xys/GFrb+H/DEf9meEbTCLEi+
X5x2DO8A9dyj8iaufBX46WNl4f1LwZ438zU/CdzE7RTyBpJbOQLtXywAeOmMdD9TT9k1qLnR3Pgj
412nxo/ad8AaglgukvZQ3ERQMHMn7mcsTjoPm6+5z2r134fpu/a7+J2UwRp9kFBHDDMZzn6jp+te
HeFPiP8ADb4L+HNV1LwdPc6/4rlCCB7+2ZGjUgqxUkDAwSSAecV5B4d+MniLw542i8UR6hcS6qZA
Z5JpGH2gAg+W2D904Ax2A6cUKMnshXR6NB8bLD4c6R8S/Cb6Ibm41rUbuSO5kdQUL5UFht9i3Fek
eKYxN+wdoDIVbaLU5iBAJNzIOPz9K4vxHe/B34la/pfim81J/Dl4UiuNRsEt3ZXdWDOnAOc9OPTP
tXNfHL40WvjGaPw74cb7D4T0wFYLZcrFKgk3LIVOMYPTNUoSk9CVK2jPc/2ofEEHhuT4Sa3NCbiG
yufOljzgOFEDYx0yQSPxrK+EPxJsfib+03Frem6QdItzok8AtnRVOVKljx26fl0rzvwT8XdA+I3w
+ufA3xJvvLnsk36XrlwCzI7MNgBxwQFC+4rV8OeKvAfwA8F3+qeHNSPiTxLfH7NDctv3QRuu0nlQ
MAqD+HtRyvbqQpNM9h+DMBj+Jfx2jIO43ytwc7si4IzkcHBHGe3SvnbQfip4Z0z4B6h4JuNLNxrZ
kaWO/CRkKPNV8FzgjGw8YPasT4a/HjWPAHje51+5vJ9Qiv5GbUbdhtFyGByeB94Z49Oa9G134a/D
DxV4+stdh8WWln4fmMck+mg5/h+dQw6ZI6/XNPltuUpHbfFllH7PvwecBBsu9MHl7xtKiHsw9No/
Kr/7SOu6V4P+OfgHVtZsvtmk29vcmZNgckb3AwD7sD1HSvDPj18ZpPGuqWei6MJdN8OaO220hWQY
doyyJJwMn5cEAnHtXbx+M/D/AO0X4Aj07xLfLp3jbR0VYNRuGWKO6RmJboQD0OQeOhpeztqwcrs7
n9nTXtE8V/tDeOtU0KA21jdWFsYVC7RgSorcdMkjP4074RoG+EPxrEaB2F/qPyMAekJIBHTPA4ri
9N8QeHP2bfAVzd+HtUg8TeMtUDW4uYJUcW8QYFN4DHABHYZJrifgl8d38E6tqNjrqPPoHiB5Rf7B
8wdxsMmMehPA/Wpcbu9g5ot2JNa8W+BF/Z/sdEisIo/GsTqZLjylEjDzCfvZB+7gYHHFfQnxeKp4
r+B0jJj/AImce5XIGMiDn+Q4ryq3/Z58MyfEidpPE+lT+D1keUW/2lDMsO3cAfm7HjoMCub+Lf7Q
lx4u8V6HPo8bJpOhSRTaaZFUM8iAKS2OvKDvVKIOye56d8XdT8J6V+05p83jCKS40n+wEJGwugcb
whIAznk4/wDrVsfs1XOk3/iH4tf2Im3Rnnhltvl2qEYXAxz/AJ5rkPFlvp37S3huz8SadcpZ+J7A
R2V5Hc/u0mjAY5jGcHluo96fZXdt+zB8Nplglj1Dxrr1qolTl4YyhZQxw3HEhxTcdEClJF7wcD/w
xZ4pSWP5YjdY2EEFRJE3Ht1/OvPNftfAL/Dj4dv4fuCfFiXVr/aUKyEPjjzGK4APzkH8+a0PgV8T
dKuvC9/8MfEzfY9G1wvDFeR/eEspj69duWzyemKs+HP2YrrSvGtzJrt6tt4c03ddfbFcbpBGdynP
TnAOPahKwX11PZfiVGF/aq+GZ+bDQXQw2cDPndPyH+cVw/iTS/B2rftP+NbfxpdJZWaWUJje4lVF
ZtsbDDHvtY8egNcb8Rf2kE1X4zaJ4n063jnstEmkS3DgqZY2B3MSe/zH8OnrWz8X/A//AAuuSz+I
HguRdVTUwIL6BOGt5I41XjjnuPTpSSaLXKbvwIith8Dfi9BaM09pFc3yxkH+AW+evrjv7VR8WQ7v
2EtD8rfiH7OQV4IxdSDnI9CSaS81Wx/Z3+DGo+GpZ49Q8Q+I4PNm08sFa2WSEozH1AIxgVU+F/iS
y+KvwY/4VRd3FvpepwbHsJZCGWYrI8gXb2P9D1pcruDkug248C+D/B/xH+ENz4Z1Rbua41G3+0ok
4k2klCDgHjJ/n9K9PGR+2ZEoZig8LqfKz059PyrxT4NfBDWtC8bWviPxEn9habobpqFzJcyDB8tw
do68cd+2O9X7z9omw/4aN/4S9LGW40uGx/skGNhl493+uU9CPmJ7dKLO+4KS0J4/hh4b+IHxL+L0
utai1nd6bqEj2uMKWJ8wnqfm5AGfYVraAyp+w7r7BnbY05PGGOLleeTx2965T47fCrV5PHlx4j0C
Ma5pniIyX0F3Y4ITkDY2OD1BH4+ldL46vrf4L/s6yfD6+kj1XXdUEskkEDri2V2WT5wecc46fpVN
slaGp8e4F1b4FfCO3kldIbq4sYXK8nD2ijn8Tn8KqeE/hZY/CX9pbwdpljqhvYrixubjcM/e8mQb
W7HgDGPSk1C7Hx1/Z40HS9DIg1zwuqNPZySKHmSK22b0APQnGMemM1zn7N/gvWbLxkni7WxLpuk6
HG/myX3ynMkTgKuTk/eH+NO2hXMd1YfEXRfhv+0t8S5dflltbe7EEdtKiM43bFLKQM4BJ69ODXlV
v8OPh5rXiBrG08bCOW6u9scJjZWDF/uDKEfxAD8Kg1myb9of4s+JdT8MboBdhLgRXp8tnG0ICoPf
5MkZrk/C3hfXrbx9oUMum3KPDqlur74cgBZwHJzxxgn8OK0S5dYsy+J6lD4lfDzVfhn4iuNJ1KPm
Nm8uVekqbiA2Pov61zMYMihSgOTgH0Hc19Gftuyxy/EvSmSQysumsrKm0hSJ5MA88HBr5yupBCZG
K4UAZ+bnPPauiMpSMpWi9B21fOUs25iAvXA717/+x1okd78SZNTN8kElnbTxJp78PcBo8eYoPJC5
P614DGyiOMurHPHzYJ717J+yTGo+N+mOqsD9ivAfQDymxnsOtY1E0VBvoR/HBbn4Z/HTWdX0rV42
uLq5uLgrE2WhZ85Rx2OHIwfevJNG0xNX1K3tfOWE3Mqos0hCqhJxnJ6D19s12v7Q9mY/jR41cgkN
qUhBI4IOCD+tcIWEUAVULJw2M4GO9Rsi4J31PsH9ovwPLN+z/wCFI/7Sgibw+kczyNMAs6rbMvyN
3JJUjH9RXyX4j8U33iEabFeSrO2m2i2cTucsVDMRk/8AAiK+kPjzHNJ+y18L02vKyx2ocYweLRuo
/D9K+WUjU5ZQzJk4VjkDGf5VMUrahLV6H1n+xL4Tms7fX9cMsb296kdvHEHBYFJHyT3H5V4FrE+u
/CvVPFPhM3C3FrqMDRXEW8qjCQ7t49GwvNez/sNym48V+LYt+FaxgOwEZ/1pGR3Ar5p1i5ml1C9M
sxkn8+QB5Dk53HpyacVdibtY0fh/4au/F/jXSNP06MT3JkE4LHaFVSCc+/B/Wvp39tDTNV0rxF4c
8W2DSW0OnxSQC4hco6MZs4JHqCv5GvlbQ2ks9csJVna323KjfCcHbkA/nn9a+kv28jK3iXwqpnby
ns7gtFkjJ87rx15x+VHLeWpo72R82+P/ABlP488SXmu3KCO/uNgx1G5I1Tgnsdufxr6S+J1lN8Y/
2d/h6PCLfbjoskVteRsCPmS1KMvI5O7GPX2zXyq8X2Xe4RpByx3DPOenJr6u/aCubrwv+zd8KZdD
mk0j7WYDK1gxjMjNZFssQefmGfyqZKz0FF21GfBrTJfg18F/iJN41jGl298RBbeYSDLIbaRccd8k
fSvmLwh418ReBrWR/D+oSabNdKscpjVW39QcEqQMEnpivp/9mSV/Hnwm+IsXiO6m1iOFA8X20mXy
mFvN84yc5yOvHQV8jWNhdXNkrRIbl2QSEQrkAY9s9MZpRL1Z6j8HPhP4otPid4J1CTSrj7INSguP
tBYBdhYEsRnpgntWz+01e6L43/aRtBp8ttcWl0bGwuHjOCCZSsgxxzhq5/4LfE3xZ/wtXwPZTeJN
S+wS6rBA8Uk5KSIX2lCPT8PSuv8A2l/D2n+HP2q9Jg061WzhlOnXbhPkQMbgiRj7kDP4VMtHoTbU
3v2t9K1PQ9Y8H/DLwrZzjRpLATx2VouZJJI3dNzMBk4BB9M/hUn7Lfh2f4leG/G/w68ZWzzadoyR
S28dwMT2jvK7Fc9R8yZx7Vv/ALVXxAu/hN+0f4J8XW1mNRFpo0sYjdiEkLyyAjdg9iCM46CrP7If
imXxz8Tfi14quLVbBNVjspfLX5gm3zBgtjGenTsRUOTsbpHB/AbR7PwV+zV40+J9jEr+JrI3MNrN
dDfHGqOuw4AGSSR9a8k+HHjbxf4H+IOja7JJNPb61OttJ9uDPDNHPKnmALnP3WBzzjFe5/D9Rdfs
G/EBY3EredfnIGfmDRkED6g15Vq3x8i+Ivhn4U+Bo9GFo/h2/sGN6j7mnCkRgbcZwc55NFw5bs9I
+If7Ofhm4/a78OeGVilg0nxFZXGp3cKORiRGcsq+isQSR9a83/ae8b+JtQ+LGreEtGS60/SfC0v2
a1j0bcgCOsblpNpxkHj2xX1H49Ii/bm+HAwQraDegFsYyfNOB3z0/CvE9Z+PWm/Az9oz4yNqeiNq
0WpTwwR+VsJRhGMsc9sN2PbNLXoSoq6K/izSLX46/smXvxD1e0hi8UeHDLAl5b8PcrHsQCQ85Jz0
PGR05qTxZpi/szfsz+HtW8MQrJ4g8bFYZ9QcHzbVZLYSHy/ps6dic1a+HCSy/wDBO/x8yRsZDdXi
hX4G7zk/Q46/jV79pS/TS/2ZP2f7yaITxW89hLIhj3BwtkxZcHHUAjn1oV29TVpJaHAfss+K9X8d
eL5fhb4xjuNa0nxBBcTvJqsjvLbMkRwU3E46/mD3q/8AAP8AZk0eb9oD4g2eo3U17pXgR1MUT4P2
sOjECT6bDketdL8NPi/oHxn/AG0/BmtaBpM+kRW+i3VnKjoFywRyWIHHG4DOfw716P8AASETftGf
tMgPjM1qu0LhR+7m4479P8eaV9bWJk1Z9z411n9qrxzP8Q18Z28s9lo1s8UqaTGxW0eJFAMeT/eH
U16h+0X8HdFfWfhR8RLGGOxs/GV1YLqemAEKpdVdiGA6bSQR7ZrzXQ/jF4Q0j9lXxL8PZ9AlfxZc
PK9vqTRKYwPMRgS55GAvTHYV9AftJ6KdX+B/7N2iPK8IvrvT7KS4i5ZFktVQsOOvzZ5/pRJvoUkm
7o439unxl4k+EvxI0Pw/4JvJ9E8PQ6DELW3tFBTcJZlJyR02qKj+MNoPFn7BHhbxv4g8u48ZJdLb
Jfy8TNEbt129s5VF/Kuk/a/+Kd38AfFXh3wVpejab4ghstAgX7frYaS4l/eSJyQwz/q8/jWP+0JZ
H4rfsR+EviNNK+jzWssUX9kWDkWbZunj37Cevcc96NeZEpNO5y/7Onw+034cfAjV/j1q9imv3Fhu
i0zSz1jdLjyzIGI4PzL/AJ6fM/xO+Lnij4nsh8QaoLqKKZ5o4ykarEzsMICoBOBxk+lfb3wh8Xr8
PP8Agm1qGuvYW2sLBd3ebK7Y+WztfKMMQRnk549B1rkf2bdd0b9q/wAQeJPA2t+C9M8PwNpT3MOo
6cG8xXWSNdy5PBG8fl+FKLkveIb1sfFfh6xj1HxTo1hcIWF5ewQsg/iRpFVuPcH0NffX7SXxL0z9
lj4peAfDGn6LDL8P/wCxrgXumGIF2DSMgKsemCM/TPtXyX8LtQi+FP7RVpby6Nb+Ihp2vtpaR3jE
EqLryQ4OMFsAHnjI/L6Y/wCCovja2tdb8NeFDosMl1cWZvm1QviRI/MZRGB6EliapNyeo00j43+N
48Pab8VPE8XhQg+HY7nZZkHeDHsUkD1+bd7cV75+wT8JfAniTxnY+IPFWtWL6ilz5em6BPP5btcq
ysrEZG7PIA6c45r5KkAeXfkNjjcRnPH+Fek/swxR/wDDSHwyYKof+3bYKccj5xgZ/p6irktNA32P
T/8AgovZwW37TN3FaW0NrGdFs3IjUIqkmXJxVL9j39nax+KE/iDxf4jZbjwf4Tje7vLWInzbiRYj
IEAzjbgHj2rY/wCCmKt/w0/eg8p/YlkNpAw2TKf6kV6j+wYV/wCGYfjvcBWQpDKQwzziwkHAz2x2
qbu1hQe+p5ZZftyw3Xxdl1DUPDOlf8K5kLRLZxWa/a44GjIB3f3t3bjGTXFftm/s1w/AbxjpOoaX
IH8NeI4zc6bAT88QCIzI3Hq+R9faptan+Eo/Yr0GOxMP/C0RPF5+QRL5fm/Mc4wyhMcele7/APBV
UNBa/ByTCvGun3m5e4+W1H07j8qIqzNGfn9LsUlgGWQfeDcEVBJAcMNu/PTJx3pxRFWRsuwXjOc8
mmFHmidosHHYjOfwrRozW6PsX4D/AAm0P4JfBU/tB+PLVtQ8xUh0LT4ow6tJIHjDsOucg/QAnnNW
vgT8WdB/ah1O5+F3jvw9p2m3niBo10rUdGtmUrMgaQqzFjg4UkEDHUV23xiVJP8AgmD8LVkYJbz3
unhsnG1TNPk4GOMHP4e1N8N+GvhZoH7avwBtvhhe295bN9pl1JoJhIBKIG2Z/utjecda57dTRJNn
hPw5/Yz8SeJv2n9U+GGoNFGNCdbnUryFuFtnAZQuepwyjnv9K6zxZ+1N4R+GvxO0zw74Y8E6TqXg
vQmh067vru2ZLmbym2TMMHBxjgkcn65r65+DxU/8FEPjq4Ys8WmaagK8H/VwA18YeG/Bfwu1b4C/
GTX/ABDqtrD8QrbUb5dLgM4WRkVtyFUJycsWpJpvUEk3uQftafs66doUWg/E7wXul8HeNpo5LZHT
aY7ibe4VQTwpUYwe4684rtfEvhXw9+xB8LbK38QaNaeIvib4qgjuPsV4pkt7WJHYMQw7YYcd2/Me
ifHeJf8Ahhb9maOYEBtY0kSbwM8xSknuO4rc/bM8MeFPFX7Y/wAJdK8dX0dh4Xn0a7FzPNOsajbK
5QEk92CUr67A09keOWXgrw5+2r8HZpPCmj2/h74j+FlaabT7NGWGWKSXYHJPB+VScckGvPv2Tf2e
rLxFpGrfFzx5JJZeAvBzi4k8tSWuJoijFSByV5H1r6l/Yj0Lwv4c+PX7QFh4Nmju/DdjFZQ2NzDL
5ySRkO52sDzggj8O1cd8M5Qv/BLv4qTop/fXF7nrt/11uOD6f/Xqr9BWSZwHgn49fDb4rePNR8F6
54M07QPD+vrNp2n6paoxmWWRhHAWGMKSGznsa8r8dfsd+LfDv7Q9r8L7GNJrzUGa5sJJWIDWoZgZ
GODjARj+HvXo2ufCb4deFdC/Z41nwfq6Xvi7Vtb07+07eK5Ejox2OW25yvzj9K+uPH0DXP8AwU+8
FbduE8G3OUHGz5rnB+vy0nJLYfLdJnyn8W/GPwz/AGYdU0/4baJ4UtPGmqaIJRrF/fMY5EnOGCA7
efvfQYrC/aY+Beh+Kvh/ZfG34bwFvCV2PL1O1Rdi2UqbIwFVgDneWBP0Negax8Jfh98T/wBo39pe
88cawmmzaRqXnaYBdLEZGKyZHXPVFH41peG/LX/gkXrUwUhJdQfA553ahCPX16/Si7THFWPN/hz8
JfDf7P3wah+KnxL0/wC36jrUEkXhzQZoiUnBRJFkYqGxkd+wPvW54G0jwB+1/wCBNa8MaP4ftPCP
xEsfNv7G1tGLfa4o4uRv2Dqzc5Ga9S/a+8N2PiD4T/su6Dqd2NP0u7vbW3upmOPKiazgDnnAGAT1
pv7O3wr8KfCj/goB/YHgnUjqmiw+Fbi6EgmEu2R9quNwPoF49W9xRzKxa3PlH4Bfsm658U/iFrOi
6xnRtH8OPNFr92SCbYxqxKAg4P3G5PbtXqUPx2+B8nxgh0L/AIQWzTwUrm1XxHLySnlHEmwrnGQB
k/Wvob9niRxq37X10x3RnVr5FBXhdsd32/AV8fzfAjwfYfsSaf8AEhtVefxdPcpD9gEoAEbXPlAb
M54XnNSnqFyD4r/sj6p4P+O3hTwqlwzeHPF17FFpetspZSrlT0A6jzFAGf8A63pv7Rt18O/2XvG1
p4Bj+HNv4mmstIs5JtSmuijyu4bcxXa3zYUc+9e2ftf22vR+I/2ZLTw2I7rxAHLWMVwMxvKkVoVJ
6YGQT1/WuA/bRT4ST/GiMfEO+1OPxsNIsVvxpVuZIEYI2NpDDjnOPYYp89yNDzL9qH4LeHrb4CeA
vjD4ZsF0C01u2t7a40hiCVkfzG3huNxwuOnT6Vlfs4fs7aWPCl58WviY32DwNp/yW0Qwft0h3x7R
421L9oaX4Trp7LfiUzNcqRsjsvMx9obOONpHA78da918c6n+zz8CfGOg/DmbwgnjOTTgLbWtd3qp
gn8wo4fI+Yr1IBwBxSlUs7WIcEfB8iKjBEmwHAyx5pIgpUuWCDtzgH/JGK+pf2vP2XP+Fe3tp418
HAah8OtdcTWk9sNyW5c4SMZ5OcZB6V3fh/8AZ68Bfs2fBMeMPjZZnW/FWugNpfhZJNkojDAEhc8n
5lYk9OKj2qTJ2Ph6coIiUBVtxDbeeM5zTZPkZVVmLN94A4zjtX294y/Z28EftDfBOPxt8GNHbS9e
0lX/ALR0AyFp2DsEj4yVHCMwI68153+yT+yZN8YL+TxP4qk/srwHokm+/vJG2pclHG+IMeByADz3
NP2q3FytnzG6bY945GSO3+fxoE4hUloiccLJjgeoP6c194+Cvhv+z9+0FqviXwd4M02fw14gVJP7
NvLx18uebdtXYNzbh8mdpAyOlfMsn7NfjI/GZ/hxHpskmtPc+QsQRlVIjIUEx/2MfN6Y71ftUxqD
6nlTOXbKkuAS2CuOPx+lMmK+YeSFC4Kn1/zivu7xL8IP2dvgx4q8OfD/AMW3moax4t8qJNQ1Cxbd
DFKWKkOQfk57EdPzrxT9rP8AZjvvgr4xS80vfqHg3VpDNp2owjzEVSwCxlxxk547mkqi6iaPnhIs
u6ISBjpnkn0pHDrHlBtKnAJHvX2N8PP2UPCHw9+EB8f/ABpvb3S7a+ljGn6RB8s5TeVLMmNxzkMQ
Ogqt8Zv2VfD8nwVsviR8KJ5tS0KKKSXUIrxv34TOA4AGfrnsc1anF7Eux8iz/MitGo3sSSoHAHqP
1pvnsIVZS205G3OQefSpEQLg5CNtySD/AIc+lK1t5kQZe33jnIJz2o6giKNVZ1B+c4ztHpjt+lIr
pG7qM+SQQu76Z444FNlHk3BR0Plg8MByKBA7fMSroenzc4/Hp2qxMVUMgBQqQxI2ccdv5/zoSNDC
fNIQrx075/z+VLIfLkXA4Q8dsGlkTc7SHLFjyCOo9frSEJNbIFaRAyAfdQck/wD1uKRSUdicnPzA
7eT2/KpUkaGclPu42/NyagmdhIHAZmPzBSMYHpSKYB/KldmAG9sbU646c1LOQtoyHG1unsaiVnCk
8Nt9fSrGoTTi1fzAq8cEDrTJKUSv5YKSEDGWGOg716D4YuPL+FHjYM5CPHbjDr1ImTC5z+P4V5zE
xiMcnPAx7H616N4ajY/CLxpNDgNvtUbewJZTIGwAfofyoA8+ZmTDHPoFHJ/H0pQY/MPmN5Y7BulN
2kN86hEZuuMVJdAttJLOcdccVk3Y0Wp3E0bQCBzKmPs8edoAOccjHTgjNd54b1EQmPZIZAy8s7E8
8np/9euFu3uAdISSJD50ClXJz8ucZz2659q7/wANpAYJJ5iixRFVXcvfB4OOvX0rfVoxkonU2Ooz
3dswkaaWPd8qqBknPGOORyOlZXxhtZYvhfqYmMhkN5b7VPzc7ZCc8cdMVv2Gq2yohBMcUpAWFEwq
ejKQc9v84rG+LGp/b/hX4gUy+SUu7UBlUt5hywYE/wAP3j+lNuxgrX0PmEzSNGygkjqR24/lV3Tw
UuE8wFmccYHXiqHzRMQBlWyAcZq1CSJI5tu/bjhupx6VmtWdSbR9EfDjWbnwqlnqGnzS2+oW0qzR
SjjEg5Bx7YNeleJvG/iHxlfRaprl7PqV2sQRbiREViQo/ugA9M5ryzwVbC6sII2KrMV2rGgY5JKg
YAzyTXcxade2EaxX0M9o4QsYrlGUkHgAdu610LlRk+Zs7KL4seJ4fAieEm1GT+wZDhbKTI2kuWwO
cAZPcVi6Nq19ous2epW1w9vdWEy3MTqo3I4IIABHPI44PWoLXTJ2slkWBmiQ53upHPHAJGDz6ccG
mkNdK/k5Vmf5dvO72HoeDV+6iZKXQ9Du/j9421HxfZeK7rVvNv7aH7NBLJCh8tMEkBcDOc88f0rp
Iv2tviVeMj3OrRXIZjkNbRMFB6D7oOePXt0rxiTTpoII7ceY8sZ3rGV+ZM5wcdae+nzw2yyGzuVl
dGcqEJjwD0z65FZctPclSnG1z0L4mfHXxZ8TdNj0bxJqElxZ2j+ZEDEqEErtBBUD5ceuetVNX+N3
ivxD8NtP8IahqDS6NaOi20SKqFdgOxVwMkAZ7964doPMkVnEgXywPMmBB+mDzn/GntkQpJ5bGNNw
DYGGOcYwPx/Cq91PQPelqzsPhb8ZvFPwjv8AVm8PXYtjqMSrcB1DhioJUDI4PzHp61yd7evcXU87
SyXPmOWdyMFmJ4GB0A6fhUAilucpGjjLBmKrySR9f/r1GXJEkPIkC5A3YJPtVNLcj2kluekar8cP
E978N7HwZJdhtLidViSGDy3k2n5Q/GcAd+/cV5+ZFjRcxNlsMFU46dhzwMD26UyRp1sxJJbyODyr
tkH/AD7+1EokulLbEkw21l37SMjAPPapSSL5lPURAI5nQAvE4GCijJJOfX/PNIkhnKpK7R7sH5B0
4Ocgf0PrUcaNZiBjEzHPzMr7zu44/nT5YjBlkD5TP7thtK5HPPPpRoJuQ6RYQBG7bFUkKMEluOv6
d6a0BaXh2t0chiSvIGe4Oc96azpGrlInbIBHt2Ge35etTcxIyybWJBIw/Gce/wDL/GmkZNp7kSW0
jgiVlwvCcnJIIxjA4/wzSb2ZSc7dhHJPJOe3vRaM8+zduCxhfusM4Gc5Pc8Afn61bYsXdo4yjct8
/BOc981rFMzSV9CswDoT5oTByTIcZHSrEhld2GSxQcEHpkYwR+NNjjeBZNybgp5wR82e36UkcAml
QRKINmSR0Gc9vwzSleOxdyBY94b51kcHqhJGcE4/8e/zirtviOKVYyscpIwQOpOc/T/9VOuopVZG
jcYXMgIGcjoTj/GoJrNrmNYw0nXcxwAcZz/n6VlZvcdmtj3bwP8AtUaz4P8ACK+GdW0W08VaZb3D
Sw/2i242xxjbHwehyfqTgisn4q/tEa78VrHTNOFrHomj2UARbG1mbysqflYjPOASMY7V5CkUrxSu
sjSNuO7nPbqfrz+VTJB5Ee9mQsDtyCMjrn+tVGnEu8up2fwx+K3iD4QeLrDWdGuxG4m/0mybd5N2
hB3K4APPOR6ECvYX/bDktJNUk0TwVpujavfwywNqtvMRIhbqc4ySGBPUV81w7ZJ/OY8RHK8HJJzk
jHfrThMkjyiQkOCyfeyrepGPqPzqnSinqZuTuaWoareaprMmpX9xJJfPObppXGWMpbeSDn19DXue
j/tXR6h4Y0rQ/GXhW38X3GmO8cWpX0oEu1uCeh7HHvgV8+QhnU54iQ7d4YYIxz+INPEAabaPnMi7
t4BBUH171fs4PoLmkmeufGf496v8XmssxHSdFsolFvp0XzLGQCu7I6noM8cCofgl8ddV+C+r3Fwu
7U9Gu1YXWkyOAs+VCq2cHBH6g+1eWG7kmRCA0ccY8sBuuOKsxkQhRMGKtksx6rjJ7D+lQ6UDojNs
+iU/af0fwtoOvWfgfwdB4a1e/RF/tNJvN2FckYVlz0J/OvB49fuYNai1G2mkS/WY3X25WGVlzu34
x1ySemOlZCF4WVycGVWCsucDp17c56YqeykU+W23zhkjKdCenpUeyiZczbPo25/ad8KeNbbRp/Hf
gyfXvEmlW6I17DKkQuG3H5sAgcDn/gR4rhvjt8d7r4ya87MktjoNu5+wWYCgRjaAxbAySSCec9a8
pztlkdgGJQAKvGfoalDrOoZIZRGAGUOBzjP9aShFMlyk9D2H4G/HcfDXSdT8M69Zy654RvYJt1qu
zdFIy43KWPQjI/AV2i/tE+EvBXgvVrH4caDdaRqd7KokvL0JJlQMEAAkk4OB296+apZJEEjY4Hyv
heDjnH4gGlnUCGCVNyljyc8npwRR7FXK5nayZ0HhfxhrngbxPa6/p97Mms222ZZ5XZy3GTvGeVPQ
g8YzX0T4k+Mnwh8e+IdF8T6/4e1WDV7eOEzR2lvG0LuuGyRuOQDxXy9dKJ4sodqjCqOue2B7UkYY
Aq+Nq5LZXBrT2KBTkj1v43fGS9+LniMyQx/YtBsw8FlZqdg8knILLnG72rpvhn8dND1D4eXvgb4j
28t/oIQf2ffpAZZ7dyxyMnnAGMEDOMivBXZ0IVIzv3YY+g9/fOPzqUQuDMock4XKheOOhBP1/Wp9
nFC5pM+kY/jR4K+F/wAOr/S/hpb3kmq6hcGO4vr+Io8UbR4JTB6ggcV5b8Nvi9rvw18Xtr9ldPqU
zszXltISEulbO4SYPP3sg44P5VwkcXlpvkYyFhkHnqfeopIioQRsAQ3TdyRkZpezTDmlc+prvWvg
TrHxJtPFkl/eae0jpJJo62TCByEJbjb0Lck8Z59a8v8AjT8Zbv4qa1bm1lmsNJ0zYdPs4HKLCELB
XwDw2MAjP8IrzK3TZLsdzhjw2ORx0ppkYNIFDR45Jx16f4irdKKCUpI+kdO+J/hT4y/DeTQfiJet
pWv6aUFlryRSTSzRliWyQvB6AjvkVZuPiZ4R+CXw3l0vwBenVvEmpK63WtMpSSGMHcnBGD1wBXzS
hctiRehzvGOnv/ntUuI2UHHnbwOvAGPShUosSm2er/BX42Xvw8167tNTMut+HdaympWE0mFZnARp
QcE7iucjvx0616Np3w0+ENn8QpNYfxnYXHh+OWSVdFDgYBU7V3ZzkMScHnpXzHOgitnKlih53kcg
e2KRlCv56zlZQxViwycEevp06Uex10LU7Hr3xT+PGveOvHFlrNnJJo9ppUok0m3XbugO1RlmA+Yk
jJzkcgV6BrupeF/2kvCVhq15rNh4V8c2LRWl817KIlu4lQ8oDg7csSMAcgivmSSNGhdg8nmgKGYn
P6E08yqykguCp4JOMHtjHTpUuklsCqtn09rPjrRf2fvhtH4d8HahBq/ijW41lvtSgZZoBtyjdOjF
WOBXM/AP4tacNGn+GXi6FW8J6iDFHKoC+Q7SbmLtn7pODnjFeEy3ZVkLsZPMTO5mJK89/ToeKAn2
mEMd3lsx5xkflSVJFe0bPpXwV8C/DXhLxfceJNa8UWuo6Do5kvraK1ukZ3aNw0eFViS2AMjua4/x
n+0xruq/FyHxdp9hFDDpBeHTreVCS8Lghlk55Y7j06GvHbl1kDEFpGAPDc/pSB9uIzFhsF9q8Lz0
6dO1L2SW4lNpn0/8Tvh7YfHny/HXge8jhutUOL+1v3ETIyqqAkZyBx1+nrT/AIieMNJ+A3w2b4b6
JcJqOu6nEZtSd8tHCJYlDbWHckLwe1fLaSR3D+W2XI/hYk7Rjn+QqSdmdN24ykgABm4OMADn2o9n
rqUqtz6W+FPjDS/jZ8LrX4V65KNJ1KzRG0ueBS4m8lWYBieAQR0zyOlL8I/gfL8Ldcl8ceOrhNKs
tBYXMbwsJDcOQyYIxkdf1r5nim8ueGSKV4XjyVKApICMgsDnOMEjj+tTTaxdXLTQzXNw6MuGjklZ
g30B4zmhUr7Mbkety6tpP7RHxqnudd1CLw5a6pHHbWciDepZPljUkgYLA5578VDefsx+N9K8YfZY
9Oe7tLS+TyJ1mQLJGsgKtz9M/hXj8EgiuUVWI2fMFD8r6HP4Vtnxt4ht5D5fiTVY+dwBvpcD0Ay2
cDPY9qbpyXUlTie5/tveIdM13xro2mWV1HcXulwXEN0i5Pks7RMqn3IXNfN4RiSiy4AG4kjntxVi
+uptWlnurmV7mWV/Nlld2Z3b1znJ7etQxCNzJht7dTJg5A9M1UVykNpkdzCGfIUBCMgYwMH2r6C/
ZP8Aifo/gzUNa8N6mv2SDXynlXhfakTiOQfN6Z3ADpXgDq8pyzbeNoOO47UjoQGVsjsSnUfjRJcw
ovlZ63q37OHi2y+IDeD4rSS7spHiii1Qg/ZyrqCJACTgAA5HbbXo/wC0J8UdH0G58AeFrRxqtz4S
ubK6ubm0kVo2aJVjKKcnkbWODjFeLJ8dPiBbxRR2fivUVgjAU7mV8KOMAkZ6YHp1rjbuZryVpMuZ
pMszk8sSSSx46nPvWSptmvtOh9NftLaZcfFXTPDnxK8JvJfWAsVs5bW2JFxAXdnydpyMZxxjv7Vo
/ABpPgX8PPEfjTxdI8MesiEWtiGDXMhjeVScMeTlx+Ar5w8GfE7xL4FtLu38O6m2lRXbq0yLGrrI
w+6cMCB3Bx1zUHjT4ieJfiBNYyeItRGpPp6yLETCqLHuIJ4UD0FHsn1D2p9CfAfVtP8AHnwD8R/D
SxkFp4mu4554YpyFjkDshABzz93ke5rzP4cfDjxr4r+JGkaNKl3GuiXiTyLeSMtvbpDKCxx93oCA
eTjAzXmulavdaJq1nq9jL9l1GycTQXKKNwIIIwDx1A616FqX7SvxB1S01K2l1O1+z38LQymOyRGK
N94bhzznk0vZ9EP2h7R8QPjV4Vsv2qPDGsS3M7WGh2V1pt1cxRbkWaQsoIPUgbhyAa88/aX0/wAX
+APjBrfiTS7+707SfEzCaC8sC2JY0ijVlbbyOTnmvA/KhdHjJ5C/NjA3HH6/1r0jwn+0N4v8I+G7
XQA2n3um2bOYE1GzE7IG5Kq2enI49sUeyaLU0z2u3jT4Tfsf6novi2RrPVPEMtwLK3CszyeYI2XP
deFJ59qb45mf4p/sj+Fx4WeW9v8AwzLaDUY0DpJbmG3bewzgnGQeK+cPiJ8SfEHxK8QT6lrV0jyF
FVIY02RxoAQMLk4OOv0pvw0+KmufCjWZNT0QwyG4geGa3uUZoJwRj5lBHTHXNJU5XuS6ivqew/sr
X3jD4j/GnTPFes395q2maFYzw3N9fH/j3DxMFAJAzk9sV6X8EPih4Y1v9qL4jvZ6rGYdeFoumysu
Fu2iTDhD3x79q+f/ABn+0x4l8QeErvw/Z2eneH7G8mjkmm0uJoXkVQMqW3E4PQ8dK8ftby4sLq2n
tZZLKa2YGGSJtrxOM7XVhyKlwl1KU1e56Nr2seP/AAVfaz8NWu76xg1K8kZ9GWHcZ2kkygGRkhgV
+6R3r3n47anYeEvgp8HfC2q3CW2vafcaVdXOng/vYIo4yjyMvoCSPw46V55F+1zq66tp+o6j4W0P
VNZsoIkXUbje0zMoA3FugOeeK8Q8b+K9T8e+KL7XtZvZbq+u5S2ZHLLCu9mWNR6AMR+FRyNsbqM+
rv2u/E2saB4p8DfEbwhcZsYrGSGHWreNZo1Ly8LyCBuUsOR60/8AY38Q6rd618R/iP4wuRDY6nBa
+drNwRFBJLG7qwXoOBsH1rwL4bftC33gPwje+E7/AEm38U+Hbh0lg0/UZmSO0IJJ2YDdSc49aT4p
/tE6j478Hab4W0vS4PCvh+3LzXFjZSF0uWZtwzlQCAQDgjrSVOTD2qR7d8HJovFX7H/xL8NaWwv9
dM2oTpp6MTMyOytGwXqVPHIFeG6N8TfHnxN1PwD4DumN/Y6BqVu9va29vmSMQyBDvdewQkcjtXKf
DH4qax8IvGGneIdKneLY6ie2RlC3MIYFo2yDwcfh1r1mP9rzR9B1TXNd8OfDi10PxRqUEyJqrXm7
y5JMneVCHoef0q/ZyHzp7Hv3xE8V6Ta/tzfDgS6lbKkGiX9tPL5qhYZX3bUc5+Un0PpXgvxm+MPi
v9n39o/4nXehC1s08QzQ721C2ZlkijgQFkIZR952GeetfNmra1d67ql1qOpX0t1f3cryzXbHDPIT
uLHGO/Ne93H7V2g+MvCeh2PxH8FSeLdY0qOaKLU0uI4mdWI6jAwdqqDUuDRSs3qek+DIptB/4J6e
LRqsb29xdG8khinIWSbdMhUqO+V5GO1P/aV1O5tf2U/gjrujot2dGuNOuGZRvSF47Pgvg5GGA718
5ftB/H2/+N+u20cUA03wzpkaxabpQC7YflAYk7RluAPTAGKn+Bn7Qy/CbSta8MeI7GXxF4J1m0lW
fSRt4kZVXehYjHCkYz3B61PK0xuR7L+zj8ZfEX7Qf7X/AId8R61YRIdM0K7tDNYRssEYYBgXJJwS
SRg9M1638AtTttV/aC/aU+yzRzubu2WLy3BaTZFKG2464OB+OK+YdQ/ak8K+Cfhvq3h34TeFbnwn
e6rIsd9f3ZjmfyfLKMq/MxyAf1rwbwJ8Qtb+F/jCx8T+HLkWuqWrlkMgLrKvRkYE8gjI9qOTW4Ka
s0dlF+0N4l0P4B638Gv7Lhgtbx2Mk0u8XKM8iuybCO53DP0r7F+P9zFpuh/ssWlwDBINe0surcBF
EUWd3oOvtXjt/wDtK/AXXPibB8Qr3wPry+I1kjldI0j8hpkXAZl34PI6/T14+bfi/wDGfxB8YvHV
74m16byJG/d21vCCsdtErEIEAPBxjJHU59qfK2XzrmTR9h/tc/F+9+AX7Yui+OLDSV1aRPDCWflT
sUjbfNISdwBwQATj6Vp/sVeMpviHoP7QnjHULX7BDrd6LwhQ3lrutpNwUkDIFeHaP+0t4E+IPwb0
zwh8ZdO1LU7vSbmJdP1TTk/feQke0CWQuMnOc+vHFYPxr/ad0Sb4YaF8Nfhbb3eg+EFiJv5pk8m8
uJFc4XepwVIJJ9fzpchPN0PZJNl3/wAEl3EMDz/vTuiQ5bH9qAn9K4Lw3+1Xd/H/AONX7Ovh6Xw/
b6Evh3WY8NBcGVrj5EQHAUbRge/WvN/2Y/2lf+FLeIZdG19G1f4e6uBFqelzq0xgxuIkhTOFbLDc
PbI5FeieCvj78CPgDF4l8SfDyx1jXPGcqGHTo9btGENsxkzlScbQAe3OAKXL0E5Wk5G9+2j8J9c+
OP7bdv4S8PQNd3E2k2DXbIRi0hEr75n9lDjjqcgVifH34u6B+zl8Pn+DHwouIxqYi2eJvEUCAPPK
FeOWHGDhyQP90ZrkP2Uf2ntO+G3x+1j4gfEfUdQvjqOmXNo9xBE9xIHeaKRFAycIAjADtkVu+KNW
/ZQ8YfEnV/F134g8TeZqOotqNzbLYTCMu0pdlwE5BYnIyfaqs76lq2h8cmMWgVNpjQFQVUY2+/0r
9PdA+O3jK3/4Jpah45k1KJvElsJLOK8aFMCJb4QA7cbc7Pavkz9tn4nfC/4m+OvDlz8NLBbDSbbT
3hvnSxa1Mjl+MqyjcQo/WvoCL45/s52/7J118Gj4v1YWc0TB7p9Nm3+c03nsSdmPvcf1qt3sDt0P
zpvLyW5nZigMjOzsCuOpJPSvvz9hbQbDwZ+y18YPixZWq/8ACWaVBf2lnPIdyJHFaLKMqMAku3Oe
wxXwBczxwSEW8/mRMeuMEce/NfWX7IX7SnhnwZ8MfHXwn8cyDSPDHie2u5DrcSNI8E8kaQ7CgB42
DdnpwaUlchPS1z0T9iH9p34gfGD9oFPB3i+9s77RtR0y+82O3skgIKxdFZeccHuf8Pkf9p7whp3w
4+P3xA8O6Ojx6VpuqyW1ujsWZU2qcZ6n7x5r6k+B3iT9nr9m3xpN4+0f4kTeJ9Rs9NuLa20wWUoE
zSJwAwXjJAHpXx38XfiXd/Fj4p+J/G13aDTrjW7trqS23FliJUKAM9eFFOKtcltaWOIkVhKBvbk5
ODwfQ06Mv984ZgTgNxTnYuqlVUqpKtgfe9P0pVAjkc4JbP3XptjindXP0r/am1K20D9h39m++u4G
uLOzu9GuZYM4MiR2Tsy8+oBH40nwe+PvhT9on/goH4G1rwpoVxodjY+HLy0kFxHGhkkCsTgKT0Dr
ya8a+AH7Rvhb4kfCK/8Ag38Z7xG0S3tXfQdfvGwbB1iEUcWAoJIDMVP1FdB4P8Z/CD9jTwL4j8Te
DPE1v8RviXdS/YNOuIJMNp6SRlS7EcYB5PHPArFpvQ0vZn1B+zs6L+01+1rMSSF1CyBAHZYJvzr4
R0T9on4e6Z+xTrfwxm0Ce58Z3NzLLDf/AGdNip9oWUAuTuHyjGB/9eud+BP7Yfi34NfF3UvGeqXc
/iG18QSbvEVm5VWvQFdVYcEKVL546gYr3K/+C37Nl98bV8Vj4o6Na+DJZkuJPCyXQGR5XzRbt2cF
8cYqeVrQalZ39D3j9oOAPon7G0A35PiDTcggZwLeE59v/wBdcx+1D8XfB/wc/b80zXvHGitq2jx+
D0gESW4nO55Ztp2n3BFfI/7RH7ZHiD4u/EPRtV8PvLoXhrwtMknhywKoTbugCLKcr8xOwEZzgHpX
svi/xf8ADL9tb4Z+H9d8W+J7D4e/E/RvL0+/vdUnjX+0IkiyXVARhS7lhx1zQotaCvc9Z/Yo8UaV
4z8I/tR+JdHsmstNvr+4nt42QLsiNrcMgx24Ycds1xeuShP+CPOlmcCETTxg7RwcakxHH/Ac1xPx
e+Pnhb9mr4G2fwl+EGr22p67rtpHN4j8U2TJLHLuVo3QZBwzKcDHQfXNYf7Kf7Q/h7xH8P7r4EfF
1hJ4FvgTp2pSOkMWnMm+UeY4xwWAwT3Az1pOLWtgUkrHolp8cPhp8Yv2iv2ZNN8BaFJpL6NqQTUC
9mIQ7mOPYB/eAMb8819IeE42uf8Agpp44lyP9H8D2UZ7HLNGf6dq+UvhP4P+Dn7KN7qnxL1r4g6P
8SNd0dY/7C0rR72Mz/aGYgvtDcnBHXjGa8a8F/tteN/DX7SEnxZ1GX7fdakyW+qWccS/vbFXBEKZ
AClVAAPUn65pKLbNXNM9N0n4zfCzwN4c/aP0HxdpDaj4v1vXtUbS7pbIS7CfNRMv/CQ+eT/er1P4
oRC1/wCCXnwqgiJVp73S9mG6k3Ex4P8An9K4Xx9+z78HfjX8SdK8faD8SdF8NeFta+z6lq+j318i
3MUryb515b5GIbGOx+teefte/tTWPipdL+F/w5EVn8MvCMqxWrOglN3cQyPtmRwclBnjJOcknrS5
W3chyVrH1V+3L4l8LeC/2jv2fNV8a2j33hnT4L+e9hEfmDZtRQSnfDbT36dKT9ifxj4U+IP7Ynxr
8ReCLEWXhm50vTjaBYDDuwyBjsPI+cNx7V5HeeO/CX7dHwUtLTxZr9n4U+KHhUR2y6rqEy29vdQy
ybnKrnByiY5HB/Cobbx74P8A2Cvg3dw+CfEln4x+K/iyF7eTVLN4riztIo3yrMobjAfgc5I6cUcq
dtBXtoeifBxmj/4J+/tF3TBilxq2tt97qpVFI/pXg3iP4kfCDXvgz8EPCPhHSUTx9Ya1pqa1OLGS
J2UZVyZCoD5dk6GoP2Of2oNP0LT9b+E3xACy+AfGkjxSzIFVrW4nKhnkkJGEOO3Q13Pw9/Zl+GHw
k+JmuePfFXxC0XWvCXhvzdSsNK069SS6eVJN0KuM/MQqjjux9qVnfYadtT6m+Ku6f/go/wDBaLDB
YfD+pyZ+pnB/lXgPiD4k/CHwF+2N+0O/xStIr97prW30zzbU3ARhBiQAAHaT8nJxXz742/bq8YeJ
P2lLL4uWMVvAdKaS00ywng4+wszHy3+b77BiSR3NewfHH4MeCf2tNQ0n4r+CvFuk+FJ/ESvc6xY6
5eIkq3CkKDt3HbwhH4iq23QrvQ6X4ayLa/8ABJr4gSRAMJp78DnruuYB/I/p3rd/a8TSdI/Zh/Zc
TxFEz6NBfaY+oRbN6tAlojSAgA5+Xd0968W/at+PHhz4Y/Cs/s6fCqdbvw/aP5mt6jMRKt07mOXb
E4PHzD5uwwBW/wCAviloX7Zn7PMfwr8banb6J428K2skugalJIsFtPtiEcYYk/M+M7gB0BPHQTyt
2E3d3PSfgl4q+GXjn/goPod98LbOO10CDwjdo4htXt08/LliEYDHysv516H+z2Fb4t/tiX+Gjc6k
V8wj5cJFdYx9MdK+f/htofhL9gDwRrHj/VNfs/FnxGvDNpmi2+lyCaKNHizmQZBC7lJPPQjHWvMf
2X/2zL34c/E3xbc+MoxfeHPHs0suuyW0f7yKaQMu+MdlHmNkHtz9Wo2Q+a7VjNg8dfBxf2G7zQXt
opviu90xWQ20nnon2oNzLgADyge+O3PSvtL9o+1J8TfseWXkkf8AE7s2CtyV2wWp/Tn8q+fT+wh4
OtfjPNqE3j/Ro/hckvniAX6/amhEWdofd13d8dK87/aM/ba1P4hfGHwlr3hW0jtPDXgiZJfDqXkZ
3yMEjBecK/OfLwAO1HI3sVzq6PpX9orxT8M/C3/BQqa++K0cc3hyLwhDEgkgeYJKzOykhAf4TJ1/
vcdqd+xxeaPN8If2pNU8NqBokmo30lgiKVQQfZLgxAA9PlI4/CuB+Mvhbw3+3j4Q0L4leGdW07w5
46Xy9O1i01i4EKN5cXRVJJK7iMH0Y1mfE74saJ+xr8AJfg94HubfXPGXim2Fz4h1Pd5lvEskRhfy
yP4iFIAzwOe9Uo6oXMdZ4wjtrf8A4JBeHEMjeRNcQbvM+9k6g5YDAz1Bx7VnWfib4P8Aif8Aac/Z
ltfhXBCj2t+f7XaKB4iZPLjCKxcAsQVkzXL/ALMfxk8PfHP4LN+zh8Q5E023QLLoeoW48lC8ZklC
yPuA3bm4GORn61o/A74FeHf2Rbm5+L3xH8S2OqzeHGil0fT9CmWaWe4bzI/mDHn76ng8YJycUcq2
Y1Usz6f8Cqtx/wAFM/HjqWLW/gmzj5HIJaA4B9K+QdL8T/B3SPD37R8PjGO3fx1d67qg0GSa2aWX
/lqsZjIBx+8Jyf8A61cj8M/27/FHh39pu9+LGt2EE0OtJHY6nZ2sRzFZqV2+WCfvLtB64P616X8S
v2J9F+JfxTsfFfg/xpYWXgbxCYtTuxeXY+0L5rtJLsB4BwVwD3JpOKuyb6na/FKEN/wS0+FdrHH5
YurnS49v1nmz+eP1rtP24rrwhpv7Q/7O48d7B4UhS+a+8wfLtxGFDY5xvC/kc96+ZP2vP2ltKvNL
8O/Bf4euX8F+CpY4zfzp89xeQGRQVYcMgLZJxyc9q9J8W6hpv/BRT4LWGoC9t9D+KfhNVsZYbhxF
bSCWUM7LuOSCqE8dD9Kjkd7lczTuekfsX6p4F139sb4ual8OI4YfCCaTYpZiGExKxMke8gEZxuBH
NZHwSKw/sNftMXnzAS6pruSDnOYFB/niuG8O6rpf/BPL4RX+px39v4g+Lni2KSGCCI+bZW8cUoKu
+CCOCDnuQBXCfsc/tKafc6L4l+Cvj4eT4U8dzToNStV2SRXM5RWDEYCqcHnBwe9Uk1qS3dmfrmrf
Bm9/Z1+D2jeGGhPxFOq2K6qkAYTDlvM356AllH4ivtr42I1z+37+z5as2fJ07VJlHQjAkz+e39K+
V/Av7Emj/DL4lal4u8c+KdMT4eeHPO1K1Wyug9zOsMgkhU9OqIMgZJNcN8S/25tb8VftS6b8UdKs
rf7P4deWz0e1lDYktJN6u7jjDssjHHbAHbNFlcpSvoe3+MdT+D1n+3N8crr4vtbLb/ZbKDTFuslW
kEEZfGOM4CdfX60/4IfY7L/gmH8YLy0Qw2k8upvCrAgKmYFUeucACua/aB+AOlftU6rp3xd+G2t2
1m3igs+rW+qzrG0DqEiwFzxgp/nNYf7R3xh8P/s6/AyT9m7wBetq0t3ubxFqk/zCNpVWR0iI4yxA
z1wPU00veSuJy0sfDd1PvQIFBZ+3J5qJkIZlAZCOop8km50j259RnoabLLFMhI3seCQeM10bmdtT
9G/+CSVsRP8AFa9CocWVnwg+bBM5GT/wGvFPCd58E3/Y/wDGJ1OS0b4qXGoTrYedn7Qv71Ch4GAA
M/r71j/sM/tSQ/s7eO9QtdaSMeF/ESJb6jKqM00QRXEbJjsGkOfavWL/AP4J0W9x8awLLxPYQ/DV
51uWnNwv2tYSmXXk4J3ZUH0INcslroarc9r+PmnJD8Af2SLBI8Z1zQ1HOANtshH6kVR/a1Pw6f8A
bw8Dp8Ubm3h8LReFZBILpiE815ZhGCccHOcHjpXzv+1L+2db+LfF/gjQfAlsn/CI/D67t5tPuL6N
g97LCojy3I+TaqgYGSfSvUfjR4Q0z/goJ4D0L4k+EL+3s/HWnxwaRqmn3T+XEqqGkfZnnhpBg9xU
pPZie56D+wxJ4YPjj9pe88Foh8LxTW6abJFkoY1S5OAe+D79CK4/w7HHaf8ABJDxSYgIlnkuWLdc
5vhk/XArGu/GOk/8E7/gbceHdKu7fW/ip4zhinvrUZe2tIlDxs6lehAYgAnJPNc3+yV8YdE+LHwf
1T9m/wAZTHStO1Nd2laja5DO5lknkWRjwOVAHscUWk3cpMsalY/BWTxB+zTZ/Dmayu/E0mt6cdbe
A5lOGg5f0JbPXHNfVurBJ/8Agprpqk7Tb+BXwWXcCTK/vxgNXyt8Dv2TLf4B+MZ/iV8UNbt7XQfB
rDUbOLT5vNkuJlfKZGMkcDC9cmuP039v3UZ/2w2+KV3pkH9lyxjQktsN+704SgiTHd8ZOM9W9hSS
a3FzI70Q/BvUvi3+01efEy6tH1uDWJ10WO5cqSyxy52c8ncACPb6VvW0QtP+CRbTBRm5nDneM5H9
pgdPouK5T44/scv8dPidbfEL4ca3aN4Y8ZKmp3c1/KFkheVyX2occBCPlyOhFU/2vPjp4c+Hnwrs
/wBnLwGsepaRpDgarqMykMsyzeeI0IwGyzHPXHua0s200NNM9o/ba0vw/caX+zPpfi+dbPw4b+Nd
QMmQoiW3gDgnoF2gg1D+zLpHw4g/b61kfC5oZ/C9v4QLCS0k8yJrjzo1focZwVyPWuJ1zX7D/goN
+z7plrDOui/EXwWjPHpZcG3uFbbGvzt2KoPcH86g+Enhiw/4J4fD/V/iV4xvV1L4haxFNpej6LEw
KOAVdQzIDjLICT2FS4uSSuK6R7B8Bnz4Z/a6v1bzg3iDUysgHZYJuOeuM9K+PNQ8KfBfT/2KNA1G
yv7O6+KVzcIs8S3O64VTOd3yA5ACDpgd67L9kL9qu2n1/wAe+DPGsUOnaX8Sb27nkvoA2YLm5/di
JVwRt+c4LH0qXwv/AME6b2w+Ml2+ua1DZ/DbTXnnF/5ymWSFOVGDwCR1P1wKFG10gur3PrP9oCwj
H7Sn7Klo42iG9u2Ufw4SGIj8tteOfFzwt8L/ABb/AMFAvGkHxUvbe30S18N2j2qXc3kxGYqoALcc
4JwM15D8ef29YvFf7SfgzxboGmR3fh7wLNMum+YXjN+kiIrSHI4xg4454rsf2n/gOv7XWp6X8W/h
heQ6nJr0a2+p2lzIYxbmFVj+XIznOc+u3NJRa3dtCVI7P9l6DT7f9iv9o640dvN0oXusx2RDEgwL
aDy8HPPykc96qfHKxgi/4JlfCazvCsSXF1pgkYAkIHaQk/XDE59q5j4pfEDRv2LP2Y774K6deR+J
PGniqJrjWCMiOyS4hCMwIHzY8sADgkZ6Vb+GXjbQ/wBsj9lkfBu5vE0PxL4YQT6bDGozeJDE2w+g
5fae/FEW3JN9Wx2TZv8Ag34bfCvwX+258FbL4Z30F5am1vrjUUinEwWUW7+Wc5ODksevp6V7d8G1
Lft2ftDztEw8rT9LhDKuQFMQP58e36V8yfstfACX9lLVLj4y/Fm/TR/+EfWS10+yjbzGunliIJO3
n+IqB9fasr9mr9uWzj/aZ8a+LfFVjHpGl+PJYVnl3Fk09IkKpuPfjblsYBz+E2d7lK2xynh/4UfC
nVf2Y/iX461XUV/4TtNRvVsLNp1EpHn5QbP4t3P1HT2+mfjLbeV+zn+yjZpGQrazoSEFcgH7MuPx
5/SvnjxX/wAE6/F+o/Hs6XpF35/gLUpVuj4ik2NtVxvb5d2SQSeR6jFbX7VH7X3hqy8R/DbwJ4Ut
DreifDu+srxtVjl2rcy26CMxgFTgAJ1yeatx1C66Htf7V3w98F/E/wDbe8B+H/G+oPp2jP4Vnld/
tfkqXWZ2QZzznDcdPyqf9iDwzoXgzWf2ktM8PSre6NZ3iQWU2RtaIQ3LL83TAzjI9K8z/a0+HN1+
2hoHhj4vfDnOo6tDaw6Xd6LFKubVvmkYliRyrSAcDkGrnhTV9O/4J/fsxa5ZeIrr+0/iF43jEqaF
HIBJbK0TxbjyflXcSSO9aO7tYnmaTL+yMf8ABIu6jRmzPKwOw4+ZtTGPz4FY2hfs+/D/AOD/AMcP
2arvwdqcmq6hq+oeZfDz0lA2xIxGF9CTj02n2qP9njx5on7SX7Il9+z62of8I14kt9j2t3IFxdss
73H7tSc8FMH61gfslfsveJ/ht8Tk+JvxImXwp4f8HT/bfOvn5uGYMoUE/wAIJHcZyBWdnyNPcuLT
PrbwxDHL/wAFHPFtzsYSQ+CbRNy9fmmjx355r4+j/Zt8GfEXw5+0J4/13Wvser6Nr+pvp9skoTBQ
vJggn5t7NjHXjrXSfCf9uzwzffts+IPGmp6fPp+h+IrODQba5uJFH2ZImU+bL2VW2A4B4569a85+
NP7E/wAR9R+ON/Z+GGm1fw34pvf7RTWoQfIiW4mZzuw3G0FTkdRSjzJu77Fc0b3XY+hfiZCp/wCC
f/wKtigMcmo6MH3D3Jbk9j1rS/bX+EunfHL9qf4P+DNY1I6Vpk2magz3KnDLgkgc8c7AOfWvMv2t
/jb4Z+HXgr4a/A/S7xvEeo+DrnT7jU7qzZTGotgymJgCcOSBx2BFan7Zejap+1N4E8IfGP4aTTap
Ja2YspdFtvnuYjLJvbODxtxggfXoKLO6JWruz0L9hv4caR8K/ip+0L4Y0e7XUdP0j7DAlyzAhxsn
YjIJGRnH4dK4zSGWz/4JT+LJIy8MlzPdK4VRkl79V5I5HvUX7PlxD+xH+zr4x8dfEWSV9X8X+W1r
oSkC9L7pU+cMcnmTcSemD1xzn/s7+KdP/aP/AGKNf+CulX403xvbqLnF4VRZS90Z12cknhNp4znt
RFNO77jckmYuifsr6f8AAb4v/s66tp/iB9Ru/EGrW8l1aswxGwjDnbx0+YDmvp7RbKOb/go/4nuQ
iGS28FwIGxtKkzKc5984r44/Y7+BvxHuvjpo/iXxlLfaV4c8FXC3st1rskoj2jcuyIucAn5TwBwB
7V698OP2v/AOsft4+JdbS4lt9L1fTodCs7idVVGmilALZzgJhRznkelS0+/YTlc8X8Qfsnj4tR/H
r4lz6+lm+ieINTEVqItxm8t2k4OfcAcete7/ABUt0P7EXwAC7TK+p6Mm85YElSSDntnPGe9fNXxu
+A3xd0j46634R0h7660vxbqU2oRvYO/2ORbiaRk38FRgcHPQA+1e7/tTfE3wx8Jvh98D/hLrN/8A
a9d8NXmm3uptbAtFHFApRiWxkFmzgexPpWjvzL5k3V9GbP7bXwav/j7+1D8OPANpqH9jp/Yd3d+a
4OxMSkcDufkGfrUn7EPw3k+Fj/tC+EL2/GsWejNHbKwUbAfs8zMQMYBORkD0/PkP29pvFfjK+8Ff
GP4bX+oNpQ0wWMdxpodLmN5pGfPHRSMA5HHHcjGp+y/qF7+zt+zH8QfiJ8S7ue1u/FxH2drlme6u
HKSxpuBy2SzcE9vaiPM0tQfLY/M+R2MxJjzuAxxtyTznH+etMEYC8SAS43YA4b2+v+FK7JCcCRmY
khvNwST359fem7wmShbrknPK8dK6jISVv3gAHlqVYFRzx7e9RzYDKc7VwCVzz196QSje29mwG+pp
Z2kEhZ/mJJwvT3PtVCFhUIHO4PKOgPUZFSgljsJL+q9cVWmZgcxx7MDOcYJz+hqY3KiIyA5JG0qB
z/nigY2RJD5hIUBTjpk/j/ntSDzbiMhDv4K5J6D60Pt2bmiIJJ2tuP4ipGmPylFIjPQnn8KAEXdC
+1uGC4VQPvZpuop/ogJyxAwc9vemh08wFg4jJ5K8n6VNfw7NPeVXJMhAZXPI4xjFArdSggZ9qgBV
Udx1r0LSBn4N+Kyq/d1C1UlW7DI7d+h/A+tcJ5USwqcsAUw24jk57e1d7p8CQ/BHWmR2j83V4E65
VgEduQTwfl9KgGefNHvEe0lEbONxzjtQR5EjBpCxGBnsaidQV2rINucZ/r+lOh4JUYK+vXNZyZVz
0hW8vT9N3sdgiPklRgbdzD+f9a7Dw9bySW7TYQqDkFwACw5wcDkY4A7flXIlxJb26PLuaLOVkkyM
YOPx6d/Wun8M3ZubQwbHmG/ru5cY5wPoD+ldiOaR3Wn2UdzZqJMmODDooJUbs5yQOo5/WqHxXtW/
4VfrZhcFY57bcEQkOS3bnKjOevcVNb37xzyL5AVkABJXG8/lyeR+RqH4mzpF8KdVfD43RDBzziQY
Y9Dn0/D6UmrEqV9j5ntIjdiWILlsFwxbOAOaSxjZJ1LKG5AAI96t6GqrfO5GUYNsI9ccY/E9KgtL
jdd4KKApKheijmsdUzoW+p+jf7DPwTs9U8EX/wAV/EMz3WiaQJWj02MAOJYcMSxI5GOg71y/xh+K
k/xa8Y3eoraRWemQ/u7SCOLy3EI5Bf37e1e9fsQ/6Z+wz4zilmBy2onzEQcL9nQnAB7A18lWBhNm
pgjZeTlWfGSeeQM88/nVwTkTOVpWR9kfAjVdA+KvwKvfhwltbWfiiLc0NxeFQJwXDttP3iRnHA9D
Xm/wd+AnivVPivHZ32nPptroky3U8l1EyQyIsmCN3OcqDjnsQcV454EXxDH4s06Xwqkr+Io5SLFo
2+fzSCuATjg8j3xX6H/Hi98ax/Aq1m0yGX+2zbodVSIhmWMwHzuMH+I84/SspKztcFJs+fPj78Yd
E0743aTfeFdLs7j+zFexuN8CiCUlvb7y4cdDwcnsa9w/ad8ZT/DPTNBn8P8Ah/R5r3VZZVNvLaKw
n2rGwVcdT8zDnNfn8l1LLqPlTJ9oktSvzNIq5J4LHPQjOfzxX6a/F3xb4X8Han4BuvFWnG4ilvDH
Bdg4S0fygWdvbp+XtScbPQrS2p84fH3x74S+J3wF0XU9L0q00vWY9QW2ukEAilWTyxv2jglScevT
2r2nwD8F/DXxB/Zm8H6VdWFtFJLpSSx3kEeJBJ8x3bhgnJ5PPcn1r4O+JVxFN4w1q7sJUe3lvZnh
ZOVbzHJDAewPb34r7gh8d6n8P/2LPC3iHSMR31pZ2oAaMEbfNYPlT6qDx2J9qTTWqKVlFnMfsofB
efwf8R/H+ieLdJt9SNtDbrFLJEHQKSSwBPGWyG4HYc18deO7OKz8ba9ZRKYkivLiNAnzAIsjKD26
gA9utfqD8F/i9ofxh0G31bTXij1FQqX1mXVpoCDt+bH8JOcGvzM+J9wI/iJ4kR4mkkW+uCSDjDeY
4xzkDBHQHHtWkHd6mLtzan0D8C5fCfx98DH4aa1psOl+IYIi+mX1tEd8qxoS24nhT8xz1yCa+dvi
J4M1P4d+MdV0DUoUiu7KULI0WWU5GRj8Mf5Bx6t+w/K8n7RehttCIlle52uT83lcYH61V/bNgUft
CeK3mVY40Nsd0QILfulxkevXsc1tHcKluh4csmwRl12LI5+aQ/Ke/A/M/jSmZpz5pGFYBd8ZII/u
59zyfwpJWWC5RS8e1clSCeefb8KYoSK5BUAbhuYqBhvc8cnj1rWyuZJvuSLJJInmLzIBnA4VuCMH
37/hTDboZmmeMszMFbcM+/r0yoPFFym8/u44w6ZXaU+bkdcjt1/I0jSiJQJWMiS5xsPyrwMd6tJG
L1ZItwreU6r5aBshGwCcex+vSnzxSSzRFA5nYeU8fUEEZ/nVWORmlBZkwFKheu0MQN2M+oHOOOas
QzGVgXDs8eFzySCOSDjt+PQ072ErXFkSOCTbKrpLHgbTn5vqf89aVHjDllJkbguhJULxj19P51PA
n2qVwR5vDAEnGzpjr71TUyO+Au0jcT5aAZ44ANQ5XK0jqyWRpDllVgvzfLgDbjgc/TFaGiaTf+Jt
Wt9IsYvtmoXTrFFAhy8jHp06nPYVmWckjq3mCVcDkk4B7Zr1L9nd42+P3gFtwjI1aEhS3zHLYx+v
5Cs5M1haWzPedS8B/Dr9mD4bafB4r0ODxX4xv2S8ms5SUe3V15Azn5VIYY45J4qn4x+CvhP4z/Cm
Pxf8LtNj03UNOiA1Tw7GQXVycnk9wAT6EV6R8c/hZovxc/af0nQNevptNtn8OfaFnt2VZN6zMAoL
AjufWtb9krwdZeA/Evxd8Padci8srPUbeKOdSMsNknXHGRwOPeufmt1OpJM+b/2cf2fZfGk0vi3x
ah03wTpTNNPPOQq3AXhkBBz8pzk9OMc123hK6+BPxH8Xap4Hj8Lp4clmSWHTdbllwrzZCRlTwQTk
Y7nFekeCY1uv2IPFEY3bfs2osMk5A83JHNeX61+ztpXgDwZ8N/GNnrM1ze6jqmny3FvMBhS7K7bO
TjBGOnejnfUjl1PIviJ+z/4t8O/EVfCEVnPPf3Mr/YnCF0nRSTvUHlhgfh3r2vxf8N/hT+zf4P0W
18Yabc+LfFDu7XZsJmLW+4b8MrOOMMAMnoOBX0D8SUWP9qH4UygZkkt9Rjyw6ARseD26np6ivL/i
B8BrL44ftPeNLK/1B9MS006zuIzCmWk3IqnPOONnT3NNVG9GwcUeYfGv4A6Lqvgyy+IPw2Zr/wAO
zQqLqwhO5rVghZy3oABggnOe/NZ/7OX7O6eLpbjxT4vkk0fwVZhmLTMEF0x+7tY9gR2I7Dmvcv2c
9BPhz4DfFnR5ZlnFlf6jbb+zBbZRn8etUvEVvv8A2BNLVi5UW9uW2MR/y9+v5c0e0ew+VLY4Pwp4
A+Dvxwk13QvDEN1oHiRF8zTZdTnVUnkDFQVAOWGMEjn7w79PB7r4T6za+O38IDS7uHWBcJEkccXy
n59u4f7GecnjA619CWH7OZ+C/wATfhDq1vrB1CDV9WhLwSR7fJbCt8pJOQd2K94vIFT9sSxYqTI3
hNxu6gfvjyeOvFTztdQSS2PAPGHwl+EvwasPD2m+NTqV/wCIXjLXjaRITFEQed+XGOCODnODxXE/
tE/AJfhyIvEXh+V9Q8G6hiS3uUcFoX2qSr4/hPOPyx0NeieIfgDefHP46fFU2+sjTo9LvIyySqXD
l1bCgZ4HyE/iK6n4a2jf8MK+JbO4/fyQRagrM44DCTORnsAf096tTsQ48x4z8B/2eY/G1jqni3xX
cS6R4StIJF8wEKZnCqwZPUYJGfXGK6XTPgN4B+LPhLXl+G2q3l14h00xuLHUSEM2SSQD34VhmvS/
jRZnUf2RPh/a2w2Cf+zYwQSBzCRk46++fWuQ8BfAfUvgJ+0l8O4brUFv4tVN2VkhYrs2wvlCD9Vw
ehxT9o31F7N9D5s0TwpqWueOLLwza20za1PKsX2crh0JznI6jAU8njpX0Brv7Pvw18B3+kaH4q8X
XmmeIr6CJp4rdPMWF3IByQCMbievYZ4r3fwvZwQ/tg+NGWKMSf2FayhygyCWjBI49q+dtX+AXiH4
qeJPiZrdpfRJDpGr3SSR3T/NMA0jbVPYBdv+NHtG+pShqcT8bfgndfBnxXJaz+ZcaPO5lsbwsCJ0
OOpzwwZsY/Kt74P/ALPv/CwPDeqeJNevH0LwvZxSH7buZXYqcHAI6c8+9eyaxA2s/sFW89+n2+5g
gRw02WJZbzGAW5wen0rV/aW0SfUfhZ8M/D2jH7HHqd5bWKwQvsjcPCuAw7rux19c1PPcfLynjus/
s56Rrnw/uPEfgHxG3ipdNmZbyFI/LkSMKWOQfQYPTmvG/DfhjUvFPiCz0nS7Cae/vDsjh5Djj72O
w7+1fTXwB+HXiL4QftB2XhrWJh5GpaTc3DW1vLm3kQKFGVBwSCD19cV6b8HPC+laX+0L8V/s+nww
m1mtHtsIAIvMjJYIP4cn09arma2ZDgr3PDZP2UvDmj6/ZeH9Q+I1rZ+JbyKH/QZEAkDsCQBkn/69
eN/EL4f6v8L/ABTeaFrFuyvE58mQqAs8WcCRfrg8V2mpfDPxx400PxH8SIpnubTT7mZ3u55szKIH
PKjg/KADx3zXt/xis4fEv7NHw88Q61F9t1wNYK18wBmYMjb8t15PUfWqVSSd2Q6aZ4n8LfgFqnxA
8Mal4lutRi8P6DaDEd7qSERyHcVbByOARjPvVr4ifAG/8LeCrLxPpmsWvifRWkdJruwbcsWO5K5A
XqPYgete6/tY6Pq8reAvA3hWPybDVWuYRp8DCNHaPYyZ6dCSfrWN+yzoviLwt8QvFPw58UQyDTm0
s3cmny4aHLOi7gR3IJzj65pe1lcuNM+WdA0DVPE9/b6TpNvJqNzcuIo4IhuY98/QAMSfSvbX/Y41
saxJpieJtHe/iGfsYlAmK4/u5JHOOwr1r4BeDdI0HUvixqVlp8FpqekareWljMOtvGEYgKOAB0xn
0r5oEfxG8tvizAb1m8xrg66QOJADGMjGCOdpAGPyrRVpdAcE9Dh9c0jUND1O607UraS0vreQo8co
IPDEZ+nv3r0jwH+z54l+Iejz61aJFp+nRyiNXvX8ozNgEFd3Ue9fQP7R/gbRPFd58LdZ1Czje51v
UbPTr+ZUCmaF1DYIwcHORx6+wrE/azt9afXvCvw68MWUs2k/2b9oj0u1iB3MjMoOepwvb8aTrSkS
qaTPCPib8GfEPwtgsJtbEEtrfITHPZN5igg4IJ6en51yPhzQ9S8Ua7a6LpkD3N1cyrDFGi5O5iME
njAHXkivrH9mu0u/iH4X8Y/DjxpatJYaM0CQ2lxHsmtizSNwTyOVUj2GKr/AzQ7PwJ8DPG/jjT7d
ZPEunm+ihubjLgCEKVyueORk496SqvYvlUdTyfU/2U/H1np+oTmxhcWaM7qkwZ3VVJOBn2x1715L
veFHG0x4f7hXDDP19K9P8JeNPGvg3xvZeNCl7JHrFwsb3N7A5t5xNIM9e2N2Oa9p+MnwQ8PXP7Qv
g61jjkgs/E8tzNfQwbUG6NQcrgcZzzRzvqGjPAPCHwM8X+OtCTVtE0l5dPkkaPziQhbaQCRng9fX
tWD4t8B678P9b/svXLN7GaRBKqMwP7sjg5H4/Sve/wBqLxRrrePIvBWgRy2Ok6JbxSpHp5eFmLoM
lmVhkDAwD61v+GbQ/H39nvWZPEwjOs+FndLa/jT99IsNuJFWQnk53EHpnGaOd9SeRXPlrw/4fvfF
Gsw6ZpVrLf38x/dRxD5jgev+e9dFr/wd8aeG9Jk1bVfD9xa2UDZmmBBEQzgZz6mvoD4dWVl8GP2c
P+FlaXaxah4k1SOECS6TPkb5jDhCMHjdn3Iri/gP8W/EqePtM0DXZbjxHoPieUWVxDqsjOke7+NA
2eucY6U1Ua2K5VsfP0sYEPmjJcg/KBn/AOtXV6f8K/F+q6cl7b+Hry9s3TfDOkZYSL6qBnd07V7g
P2b9F/4aYHhUXco0OKzGteRs+bb5g/dZz93O4fQCs346/HDxRpvxDu9A8M3MvhvTPDsxso4LQ8S7
PmDsMYwQMfQe9N1m9yFTPn7Ure80fUJ9PvY5La4gIEtvLGY3RvTawyKrlS8h8x1ACkg/3ia+qPi5
4d0z42fA23+LKWqaLqtqjPfRAfLdFJPKO5sZ4wSPqPSvld4UaQBlMJ2j5iM8Z5wP0raE1IzlGzEU
rwrITIVJ2gcr+P8AnrT2UxpJOZSkSKNyrjPJ6+p/AVJPGsUXnlySPnBySMdCfxr2T9mH4L23xY8U
Xt1qzoNE0jynnsypzOz7yBkemzJpTlbYqMNTyY+GdZESk6Zet5i7gBA5355B4FZEq7gQdwP33UjB
BGeor6N1X9rbW1+I8N/pVlFB4StXWH+x5IIzK6INrAPgkEnOOewpPj58F7CBdC8Y+HFS00jxZPbr
9kmO1oZpwZPTAXBPHY5rJVbbj9nfY+e7a1mu9/2eN7hlIQ7ELcnOOlRva3UMZaSKRVl6eaCDjoeM
e1fV3jqey/ZS8A6d4d0fT7e/8Yawgubi/uYllhAjkw3XB/i4/pUWgQWP7V3w5l05rOHTPHXh6MMt
xbRLHbSJI/3QByPljxg4+pzSdVsvkifJ8kRd1T7xX7oB6n+WOlJKJhbyZjfyV6SYJA6/l/8AXFfQ
vwA+DGnwaVqfxE8YOJ9B8OTSsILc5ZpoGBcFejJ7Z5rR8M/tDaJ4v8dX+g+J/C+l23hPVPMtLdrS
xH2iPzZAsbMc46NzgdfWhVg5Fc+ZJ4S1uAh28YyfT61WnnhjtyzPhVGHYH7p4/L/AOvXuHxG/Zq1
jQfi/aeCdLcXMWtebLpczygYiRSzFz2K4bpXffE7xj4Q/Z8S18CaH4c07xHqVmzzalNq0fGZArqM
9T8p5PsKftm3sLkSPlV1WWEZdgxXkkAn2/mKqogjYoXkG4k42kjP4V9OfF74ZaJ8Svh6fir4DgS0
hgj8vV9KRQkdu0UXzlPcEgEgcjFQ/CH4X6P8O/hy/wAUPiLEj6ZJGP7J0mVVdboyJmNnxnBJ6fnR
7XyD2a6nzRJtnUFEcKo3Dce/r9MHpTPIBiAZXCkbU3Zzkda+qvAtx4F/aU0zVPCl14bsfBnieZll
0y4smZ2k2ZdlBIGOF59QfavK/h/+zv4m8W/Ey48F3dtNZXOnSD+0ZiQwtkZQVc84OeMY681HtL7l
xgkeR3GxFbGdw/5Z5IzxmomLK4G4hl+YZ7A9P1r6s8TfEH4M+EPHun+Gx4Ktde0WyEFpe+I/MKlX
BMbts2/MVIJOCOtedftDfAiXwJq9rrnh921XwjrjpLpt5FygMjExwgnvjBB4z9aSqIbgmeHytskj
k2N83y7Gbgc5J56VRuSJE2qSUdTgAngj3/CvrC3+Fng74B/C5fEHxM07+3fE+svF9i0DeElgQEhj
79ck9MECquufCjwh8dvhTH4n+GWnrY+I9I8waj4bebfNIDIADkd8AkHoQTVqqQoLY+WpVZYw6glN
pJ6fy61UuGQtJ5igxgZRnY5wfb0r3P8AZ8+Ac/xK1ybVdbZdI8HaOXfUr+4yIWCMBJEH6BsZ57Y/
Cu78L2XwI+J/jXXfA+l6fd+Hr+VLqHTdXu7gCCWXOyNlJOSSSCq9x71HOrmnKlofIl8QzblGSDgE
enbr9KilUTxE5ITsccZHP+Feh+L/AIKeKfCfxOk8FPp1xd6o0rrbKsR/0uJSVMicDK8E/wBa9v8A
Gnwp+D/7P2h+HtI+Ia6n4g8X3SPNfLoswEduNy7d6lhsGCABznDVXOhKNj5FnKhkWRNoGBu7D3+t
VriENGzOhbcdowPlz3r6H/ad/Z7svBVhY+NPBdyNY+H2pxo8ckLCR7Rgq5VyARgknr0PHpVn4Gfs
56brXhHVviB8RbmXRPBFlayG2SN/LuLqUIrLsB6jbuwO5x1rNytsXZM+aHZEkhULtYLygGWLc47e
wqlKkijIU4+9gZ5Br64HwA+Hfxq+Gus618I7zUYvEujTF5tM1cqJJ4VQswjUHntjB68HFfPnw1+E
/iT4s+PLfwlo9jIuqBnScTgotpgFt0px8vCn69KbloRyps8+nnVh5LOSzMOQM9OcZ7Cobl5JhKSA
XVd+4HAJ9/x4r7Xu/gJ8AbD4vJ8NpvFert4olMdv9pjVfswlMZYgyAFRznjPB4r5l+Mfwc134N+O
7vwxrFntnRk+zXEPzRXKMMqUPc4wMdalSQcuu557M58sAnCDHK81TmZydhEc68EEgZX6cV9XeH/2
WPCPgf4Nab4z+MGvXnh06xeRLptlYq8kzwvDvBePGQT82Rt4wOawv2gv2XLfwV8PfD/xF8C3M3iL
wNfWqmW9K/vYJGY7QUAyFxxz07mmpJs0sj5qEhlKqY8xxgnv17EduPTFUnbZGqqu7d1z1XAr339m
X9m7Vv2hvFrB5W0zwvpq+ZqmqsnyxqVfAUn5WYsoBHavQdH/AGQPh58X9A8Vw/CX4g3HifxXpcPn
pp11F5XmfMRhS2MggEdSBwT1o5kmDifIF0HE7RKrcrhW9fX6VVa6Dh0wQeowByfU10Vv4Q1nU/Fd
v4UtrKZ/ED3AsEsguX80vsAPtu79ODX1NrX7FHw58Dav4S0Dx78T4/DHjnWrWCWTSzFvEcsmFCBh
x9/cBnqfahzSBI+MJExL/q22/NtYCoL3zEAKSMwX72ByPb+VeuftE/ADXv2ePiHe+HtWje4s3V59
N1HGEurfcQGJHAbBGR7e9dv8F/2RD8Qfhrq/xD8Z68fAvgq1EaW2p3Cg/aWMhRiBwQoO0Zz1b24n
mRpyK258yFGMzYJK8ckYJyPSotkikKybiGxnPJ5r6d+NH7HQ8H/Cyw+JXgXxKnjzwq7yxX13bwso
s9h27sZJIyCDn2ryX4JfB3XPjx8QNN8M+HbZp5ZXRrqbB22tuWUPKx9AGz707ozVNHntwigkY8qP
GAo7/wCNVzJmMBBwPvZGK+2J/wDgn/4d17VvEPhvwf8AFbTvEfjTR7e4m/sdCvmF4sgoQM4Ibapr
418TaBfeG9cvtG1a2ax1WymaC6tpB88LjqD9Kadw5UjMljKl5F+UKPz+lRPLJ8o3BARzxktVi6hW
NniUnOcjnj3qNpPtEpXJQAfMR7+35U9CUruwxzJHs+farGnuqnYI+EAAwvH6fU17X+z1+yp4g/aD
g1vUhdr4d8LaPbvNd69fDbaxupUmPceCdrFjjgAc4zXVfE39ibUfB/wtufHvhXxZpnxB0W0uTDef
2Htk+yIELu7FWIwoAyO2ajmjexoonzOuPNdHzwCQuKgZizCRNwbOd20dfWug8KeF9T8da9Y6PoFh
Pqeq6hJ5NvbW675JP/rAckk/jX1nJ/wTX1ZvFD+HY/iL4b/4SNYyf7FaYfaWYIX27N2Qcc49Bmpb
1K5Uj4tO4ZRmJ7gY7Ypyy+Y4VgOmDkde2a0vEvhvVfBXiC90PXbR9P1axcx3VpOu1o2+noR375Fe
8/DX9h3xX4/+Hlv4zvtf0jwVpN1dfZ7I67L5DXC+Wr+YhIwQQWx/umruhcq7nzZPEyKduEC/3etP
TeHMYYsMkkqcAj/OK9g/aG/Zn8Sfs6anpS6u8eraRqdulxba5YKTaTbgcIG5G7Az15BGKy/gL8Af
E37Q/jCTQPC4jWGKKR7vULhGEFuoQsC74wCegHXrTbQcvmeZtcIJVXYQwGC/ZjnNRzOpYKAI8Y6H
pX0947/YM8Y+E/AGveKdK1zR/GEOkugmtNEmE0yKzEFyO2PTPY9cV802NpPqEqWFnBLdX1wwijt4
xl5GbogA7k+lZppisQsySSKc/KuWXIBJOOn5Uk87Mufm247jtX1to3/BNT4hXL6Xa3Ov+H9P1a/t
kuE0y7uys8YcZC7QM5zkfhXzT4/8E6r8NPGGpeHNfs5LHUtPmaGSN+h2syh1z1U4JB7g1SZSSOcY
gorPklWyCevHSgMxCeWNpBPbtXuvwS/ZG8c/HDwvfeJdKNrpGh2s8cAvtUfy4pmdSf3Z7gcZ+tUf
jx+yt4x/Z9sdIv8AW5LTU9J1NGki1TTdz26nIAVmxjJ7c849qLoLdjx3zBI6Y3BsFmBHJ7ZqIjzp
BIOGTLDI/wA5rsPhj8MPEHxi8e6V4V8N2jTalfS+WHYkJEMEl3I6KMda9u8U/wDBPf4k+GPDetax
9o03WE0qJp7i206YSyhFbDBVHJYcnGOfrSckTY+ZVkDRkF1DA54XrxUZZZRwzDByOSPX/H9aesMg
do2Uxzk7Cm0q2c4xt65zx9a+lPB//BPf4p+LPCOi+IVg07S4tUhE9vaahdeVNtJIXIwcEjB/Hnmn
cdmfNNxGzzOsIGT8xNKpKqsbsTn72OBjv/Wus+Knwp174MePdS8K+JbX7HqNiwXGfkkUqDuVuMrz
19RXUfBD9mrxf+0TPrJ8Mwxi00qJZbi8uW8uIbiQiB/X5WJ9Me9F0KzPKY/Jj+ZYwR93AXkcVKih
XWQlyMfcXnH+ePzr2T4z/sf+PfgV4Ss9d8QxWtzpdzObcXmnSeakZ2k5c44H9a8q8L+HtU8Xa/Y6
gbTTZACBuyXPp/h2pwR5mOx1GMZyR+HX8amw7WOtszczafNHHAvlmVZA7cdAccD2zXXeHYLqJypl
2jBdkPGB0/AdK4zR5HSwu4Wk+ztJs+b7wUgnHHuCa63QGCypNIxlMqlct82TznA9f84roWhy25tb
HbWbuUVXjZmLbd3AL9MDpyP89q2PF6bfAHihiECNZlAiv84+Zc57dMHn+7WNbXAWNh86qCUMnXb3
GT6HGa1tfMdz4B8TJva4kisXdMYCso28fo1W9jGLkpHzZot5v1UEffYbRgZzxVQz+TqUrbTu3Hbx
6mnJdiy80qVLuMpJGcFQeTj+VVYpGN0OAQeTnp/j6Vj1O5M/XX/glu0Vz+z78Q4ypaQ37mSNhsOP
sWAD2wSDg99xr5Luo/NvJFhSRoXbcGVcuAMAE++Mc9OlWf2Pf2htV+A+rNdRK194emHl6hpoyY3U
jiQgAncOnAPfivWP2g7H4cyaxFr3gDWTJp1+hEumw7kW1OByoIzjOOD0qlFp3MqjTdziPAPgbVvi
F4ksNA0mMzzzzIOfm8oZGWJ46E+hxX6HeMfh9L4p+BNv8MNN8T2cninTLGKOZYbsiVjCh3ZUEMQS
R7c818//ALLXxL8B/Dj4P+K7y8urfTvGgFzFa3HJlkjESlMN0Hz9fwrwbRvilr/h/wCIw8VQ6lKu
sefumvQVYsrOBIcHgjYCPxP1qHGbewvaJaHP+KtHvfDVxeadrNrLpV5bH96WAJiK84+nB56Gv0g+
LXxcufg18OfAOtrYxahFcTWdvcxFQQF+z7mKZP3jg4I9eteBftZa78LPit4g8KSab4itbK51GQxa
tqUablijwoG/sG5I/nXqvxr1f4X/ABZ+HWmeFn+JGl6X/Zrwut2X67I9hBHAPGTwaT9BOXNsfFvx
P1K58d+L/Eniext5bbSLvU5WZ5QFA3ksFJHcqR3z+FfYPhm6fTP+CfNpdW0rW8lvYO0bpKeG+1kL
83XGSDx2rj/i7e/C3wT+y/c+GPCWr6fqusNcW9xKI3LS3UinLOR2yo4xx07VJ4f+K3he1/YVl8OX
Wq2x8QtZ3EI0tmPmuTcFgFUjP3SOaTd+hqmraHrn7J37Qx+MWjPo+rRtH4l0qMNM6x4jlh3KiNnJ
+bPB6dPbNfD37QkYsvi940dFbc2s3O4jkL87Eg8ep7eteofsPePdA8GfE7xJc69qkGlx3emLFBJc
ttjfbKCc8cnHIxXk3x81ex8QfFTxbqNhJDeadNqU80cquDvQseVI68k0RiRLfQ7P9jUqP2kvCamV
sMLgeWen+ofkZ9M/qa2/26YPK+O+o7m8sT2NqUbfxKPLIPA54x+tbH7NU/w5+EPgK4+JusajBrPi
aJSbDSomUXNthijKMnktweegrwf4n/EHVPiz4tufEGsFJbm5IWKM4GxQMIoxxkDjjr+tWo63ByT0
OOWSS2lCy429UbGee351HcDO5A7BGbhhw2SOc+3SkuUbzT+7O5+rgcADp+NP2C4hgdw7tuyXPBB+
vftXRFHNK/UcbcRsRLEJg5xlumPf8McCoNQzLmSMyRDnAXjJPt7cVNNNJ+9Vtvlxjcdx9RwcUXFw
8cMeUYjbuJyP/HRVXd9CLXW5EJpzcvHIAzMOUB5J9R+FKsi+ZKi5j4wm5c5479h0psUbKysAI5BG
AEY42L7e+P50scTCAokm0kAjeO+f4hwcmhvUcV0J7cuqT53lFfg9c8cEY7dfzqKCBBGzMMK2WO7r
gccD8Kc8DsME+VEvO0HPToe9RzKssaK0bou7aGAORxwQR/ntRqxSVi8Jkg2lW2jOCR0Yf/W4rtPg
qjp8YPAikmQrrdmWYj5yfOQjJA9eM9K4QvlJAsZYAAsEQrkexwPQVo6HrVz4Z1XT9W07auo2Fyl5
CW+dPMjcMuRxnlRx3qJLQ1pPU/Q/9ofUPBuj/tBfD+48dpbtoJ027V3u4jLEXydgYAHuc5qb9mS6
8L3Xxn+KzeDXhOgkaf5AtlxEOJMhOBxnPrXF+JE0X9tr4Y2+rafex6L4+0QLbS21xJ5dtvcgvjIJ
ZcZII6E4NT+F7fR/2K/hdfaleXUesePNdDKsED74C0TExrgHgAMTnuSa5LHXe2x2fwvtET9nT4oR
R7pIxeawu1+Djy+leEeJYfhK37OnhifRGtP+E1Etp5wSRzOsm/Mu5c49e3cegq3+zJ+0nbRanrPg
Lxigi0jxRPPImo26HMM8+1SjcfKhzgHoCK3NM/Ydl/4Wk0uq6hAngm1kluIryC4Xz5VHzJuGMDry
Rn7po5fMFZu7PefjPCf+Fw/BdwMldRuh1xx5SfnXB/FnSfAOqftRNF8Qfsy6QfDkRja8laKLzfOb
aCwxzgN1Pb8vKPjv+17FqvxN8Oah4WtobzTPDN2Xtp596m6dlAcEYHy8EZz+Fdl8W/AMH7W3hfTv
iJ8P7wz60kKadeaRcyCNEI+dlJI5ZS+PQhqSROnQ7r9lu10Swtfita+G8HQodXKWbRyb0aMREDBP
sP5VgeHv3/7BGq4YZFrdY4zj/SicfWqmhTad+xR8GpoNRnTUfGOvkXsunknYhCqj4Iz8qAj0ya5n
9mr4r6P8Qfh7f/BrxLs0RtQWSLTr2Fmb7QZHZ2QgrhXBxjnB7elPlsDYar4G+F+haX8I9Z8JT20n
iKbWdO+0rHd+bK24qXymcIQ5HQCvd/GxVP2q/hp1DSaZqA+oCMcf1r59+HH7HniDw18Uf7U8SzJp
3hfQJTex6ozpm4MMgcEqGO1WAJJPIApnxK/bA09/2gPD3iTR9O/tDQvD6z2pmdyjXMcqqGkTK8Yy
ceu33ocW9hpxukeheJvhx4F8eftP+O4fG1wIkhsrGS0jmu/IjdjGFY9Rk/dGPetH9nfS7DS/2dvi
XpmnP9osbfUNWihkDBt6eUNp3dDxjpXB/tHfBfUfjfqNh8R/h3IPEtlrKJHNbRBVMPlIVzub3XBH
rmt4XVl+yF+zhL4Y1S6/tbxPryyznTkIQwmaMI+Oo2pj2yc4xVctwckty544V5/2C9LLOY3FpZgs
nYfaFH8qxrz4LeD/AIafFL4P6r4Y1GS6utR1ZROjzRuGXywdwCjI6/54qb4P+K9E+PPwCuPhILxd
B160hSCze5ZWF1sfzVdF6kDbgjt15rkfgN+zP4n0X4mWniXxTbnwvpnhWQXhluMeVchc7tp7KAu7
J/xp6iVrn0Pdgp+2DZfKSG8JOAf+3jpXlD/BDw98YfjV8Wl1rVprA6fdwmGOB0GfMiJZjuGeCo6e
tZmp/tU+Hl/aitdaSwnk0W3tX0R75ZUwcyZE4AHK59SOORWD+038FPFOpfECXxd4UWbxPo/iZhMs
mlJvFvsVV+ZlJJVsnkcdaaXcLtPQ9E+H9mq/sMeJbVQZUih1JeCRuAmc5z/npV744WZ1f9lf4d2h
n+zpdTaPC8wXdtDRBc479ayvFWoWP7Nn7L83gjXLsah4g1iG5VLKxYb4TPubJBOQoJI3dPSoba7h
/ag/ZesPC+hXEVl4j0FbUT2d1IpZzBHtJXac4bnBP40W1QnLqN0D4Eaf8D/2lPhqmm6pPqEeorfO
63KgMpWArxt7fNXqnhNEX9rTx2FADPodmzevVf8AP4189fsnfDrxZe/EnT/Fmv8A2zT9N8PxuZX1
dmQFZI5FVE3HsTnsMH8K7Twb+0N4UuP2r9c1FJbhNK1iyg0i3vGj/dmdWHzMf4UOMAmkUpJ2ucmv
7Ow+KV/8U9fOsCzl0nWrwQ2pg8xZNhZ8scjqeO+MV3dy3mfsD6fJ8y7bWI/KfS8ryL42fDjx14Q+
KuuW9lJftp/im+lu7b+y5pfJmWV2+WQAgbgW5zxzXrPxTvLf4P8A7KWm+APEdyjeJLu2MccVou8A
rOJSWPRQAVBPr0piTNz9qfw0/jHRPhZo0c32STUNUjtlnY/6vfCBn3PfHtXOfBr4PS/BH9p2z0Z9
W/tSO80C4nR/L2EfvFGD16bPXvWj8Ybx/jh8BNA8Q+BZnuH0Cf7VcJuMdxAY7dgwA67wSp4+orkv
2RLTxX4o+I1z488Q3l5e6fp1hNZS32oykkFtrBFBAOANxPHGfekNWPavgzAsXxq+Mahdp+22jfdA
xmJz2r5Vtv2f9U8UfDzxF8Qv7TtYraymvJJLWQM0jhJG3cgDBweOf51738DPi94X134+fEZLPUty
eIJrRtNdlIWfyo2V8E9Dk5AOMjpXzv4og+JHg7xHqXw3F/f2cOrTuE0yJg0U4lc4KkZ6jHcd80K4
Kx9G/FK1F/8As5fC+S7PmSi70be7jc3KAN/9eq37WPg7UPiB8QPh94W025W3fUIr3aHYrEGQRuGI
HXABwMd6r/HLxBYeAfgb8O/DOtXcVv4gsTpdxNZsdzhYQBI3GeAQfrR+1dqGsahZeBviN4Jvi1jp
SXUv9r2zBkjEojRSQRyDhgQffpSVxuyD9l3wFrPw0+J/jbwjrdzDeRrp1vP5cWTDJuZuQDjscHIr
W+CemwaJ4F+MdzbQLbXkGr6okc0agSIgiDIA3UAdh2rmP2SdZ8ReIvFvi/x/4pupJ7CfTkgl1WdV
ji3xNlgMADCr/I103wF8UaR4r0v4t6XpV9Hc3d9q1/c2tvkbpYXjCpIq91J7/T1qhaM+a3+F3jfR
vh9Z/FNdQk8mYpMt4l032pHdyhdiSCcs3Jyep45zX1R8d/Dmlav8Rvg/e3Njb3Mt1qximeWMHz4/
LBCsCOQOuD0xXye3jP4k3HhaP4UyC5eKFhbLo5tVW4Z1YyBdwUseRn6V9YfHjxLpej+N/hFDe6lb
2lxZauk9zbvMN0UZj273GeFzkZPFDuNWZ53+1L4M8VfE/wCMGneCtALzWMGkxXwsWlVIUKySKXAJ
Azjb+VdB+zN4dvb7wF4/8DeLoftkOj3S2o066YSJCTGWwpHQZwRjoelY/wC054z8RfDL4xaT428O
rG1vdaNHYxXksfm28paSRioxwWxtIweldD+ylrl9qmkfEjxXr+LSPVbyO5N1KoihfEbKxXPQA8cn
8aLslJGH8NIx4M/Yt1fxFoUa6drz285kv4ABK+y4ZVJPsvA/+vXl/gPwr47+FHjzwT4gmhutLs/E
Oo29sbnzEf7QkzqzLJyTyuT26+1es+CSviL9iDW7HT40vrtbe7Q21qNzFvPZhx78GvJdK+NPiv4k
6/8ADTwpf6fbzWmj6vZuZbVCJsRlUzKueMAnOABSSZasme8+L/hr4Wuf2rfDcU2kQuNS0y51C5jw
QklwrcSHH8XHNeNftE6H41+Lnxo8TaPp9jd63p/h5olht4QuLVJIkbcOhyWBPc8cdK+gfF1/bQ/t
ceBEaeJZH0W8iKFxu3csBj1xXj3xK+NGt/A/48+P5bGwtrmPVzajF2r4ISBcMpBx1dh+FNEPU2ba
Z/ip+xzq2teKV/tLWdCjvXsruVNkkbxAhOBjkD5TnrjmvkC9O1D0LITggc/hX2T8OpZbn9i3xvPc
xiM3EWqS7R0YMCePY54r44uVjUsEKsFJUjaeBXVS1OeejGtGnOQWHPSvUP2aNJn1z40+FVsrGW9S
zuRNeyLEWSJArfM/UDnGM968qnnZpNyMVZu5Neufsu+JL/QPjR4dg065lt4dTnW1vI1wVnjwxwwO
eh5yOaKkbIdOWuh7V8ffiV4k+DX7RVpr0MEkvhy8063t50mjZ4ZU8xjKqcgCQKoI/wAK+V/Gmq2v
iLxdruqQ2xt7S+v7i5hhKrlUaRmUYXqcEcDNe9ft0+KdRk+IlloEl250eDT4rtLRVBAmZpVLk4z9
0AY+tfNNtO0IBh3QOvKzAgbcdxx9K5zS7ufacuj6v4c/YeVLK1u9O10RJLKbeMpcf8feWchefuc8
9q8D+MXxqX4v+DPCh1S0itfFOmyTLeTxR7VmiKhYyM5PPBIzwc19KS/FLxOP2L18XreBvEgtAjXU
kaHcRc+UTtI25Kg9q+DDK7soMbMDkj5RhPbt3NEUglLU+jP2IvDsVx8WZ55rD7VaW+nSyx3E0W5Y
5i8W3aegOA351u658dpPDnxN+Jfg3x3anU/Cl9c3LW5vkaVocKRGI06bMgYxznpUf7CHjDVYfG+r
eFC6f2VLZSag0fljcJg8SAhsZwVPIz2ryf8Aak8a6j4t+M3iWLUDEIdHupNNtQkYX90pyNxAyxJJ
5J+nemldkzZ5VawtFEkSCRpREFVRyzsF9O3NfdX7RHje/wDgL4V+Gl54Ri+xafBcM1xp0A8uG6Xy
0ISTHTOTye5r4q8I+LbnwR4j03X7KNPtunyGaJJ03x52kcr34NfbP7bHj+/0T4SaPZQQ2bQ+IWaC
6kni3tGnlbyYx2bPfmi15Di7RPl74++J/Dnj7xvBrnhq2jsINQsYJby1hQDbdkuZc9ieVGe+M817
3rFxefC79ibw1qvg9/7E1a9Wwkvbu2wJZGfh2LNn5s4BJr4xknijwQwYEjEvQY55xX2vo18/wP8A
2N9M8RadHFr9zqJtLiS21zM1upkcDCpxgDjGO4zRJJMpO4n7KfinVvjj4S8f6L47u5PEemwrbeXF
fxKPLLCUsMhRnlFINfL/AMF/iFo/wb8W/wBuar4cPiKeBV+wRfaBH9nfOfMyeCenBHYetfWX7PXj
25/aI8GePdCv7Cx8KrEsGLnwzG1rI3meYSxOTzmPH4n1r4m8LeCdU8Y+LLLw3o9vLf388xgRHH3Q
GxudugAAHNJWBp82h718IP2gPi78TfjFpFpZ6hNe2VxqCXN9YpbwmO2svNG4FtoIAUgZz1rk/wBt
a38JWPxouR4Xlt0maIvqotHysd5vbcD2DHgkDvn1r1Xx54n0T9kX4ft4O8NOt58QdVhD6nqjr81p
HIm1mV1AxgxjavHZvSvj5Ek8Ra5bxvdSvdajdpDLcTMZHLSSAFyWPLfNnrQnqU221Y634QaN46vf
EsOu+BNMuNQ1DSZlczQwecsRYYAcHruBI+hJ96+lv+CkEMUel/Dm6lREu2nuoyyZUg+XGSODyNw6
c1P+0Br0n7Jvw+8NeAPAtubS51dZbi612H93cM0ciEk7RyW37ck8AYxTf+Cj2H8OfDlpWLlp7rLg
dcxJ/jmoTbY7JrQ86/ZS+DGh2XhPVvjT4vBvdD8PLNNbaXEvzGaAh2kYdDjA2j3Oa0vAf7ad144+
JE+j+MNGtNQ8C+IHexgtoLWMXFqJpAkQY5+YANgkc9Tziuq+CXl/8O+/iHuDY26tv3d/kH9MfjXn
2p678IL3wH8HrDwx5SeOotY0oaiIYpFnypHm72I2n5yMc96XW40vesasP7H+jeE/2stE8IaiTeeD
9XtrrUrOKKQiREiGRE5PbIHTqCab+0Z+1r4p+E3xq8R+HNF0jw6NH0iSGKOG400O7L5Mb/eDDu2B
xxivavjr4d1fxN+1x8K7bRNak8P3iaRezG+jjDsUVgWTaeCGAx+Oa8p/aT+Nfws8O/HLxDYa/wDB
+28TapYPClzqz3piMzeSkg+UKRkKQOeuPyNWwbd0Yf8AwUO+Gug+GpPCXjmwtDa6rr263vY4sCFt
kKuG24+U8Yqx4H+Hvhz9kj4NyfE7xfZJ4h8WeIrU2mi2CoJrdFki86NXBGAfkO5uePWmf8FNNK1g
Wng/Wm1lX8O3kci2ejtEqC1mEAJcOOW3A4x2/l6B+1LBoy/A/wCAS+I5NuhjVdKS+Yn5fINqRISR
0G3dzTvoPmlaxwHwu8b+Gf23fC+sfDnxVomnaB4yiEmo6RdaJa+XHhIwAWYk9GYZHGRivlGx+BPi
K9+NM/wveeztNfivHtJLiWcLAhVCxbJ7Y7deor7h+Eel/DPT/wBt7So/hhcWV1pK+FbmS4ewuvPi
E3mhSAcnB27cjPvXx1+1+Nv7S3xLkc8rqjbMHOB5a5Pp3oV3sQ0k7n2b+0d+z74J+D37E/iGx0Gx
tbq50+KO4TW51RriSTz1LP5oHoWX2Ffmz4V8JX3jzxJo/hvRWF7q2r3C2tupcBAzHhix6D3r75eV
pf8AglFLM8ruy20gBcbiMaiRjvwMfkK+S/2VYkj/AGl/hgsZK416ElQPl7//AFqad4jjdSaZ9HfE
N/h7+w38PNG8Ft4W03x/8SdQEWpamNYtw0cUbIUbbJtPG9NqqPc1kfFL4P8Ahj9q/wCCNr8WPhjp
UGkeKdAtVtte8PwR/Z4BsQyOVyBudc5U5wVP0r1f46eBvh18QP26n074kahDZaXF4Ohnt1mvBbB5
hcOBlsjszce1S/seaJo2g/B79ojT/D9wbzRINb1KCyuvM8wSQLaYjO7+L5cc98571ls0WrPU8E/Z
X/Z68NeD/h5J8ePi2qr4QtYvM0zTpIxIt+ZMxhpEAJwWwFHcnPFdF8KPFXwc/a9bW/htqfw60n4a
eI9UiRtC1DToPMkkZMuybtihWCoMjOCCeuK7Dx7DHN/wSl8IJcSmKExWG6ReMD7Zxn9KNJ+Cvwx+
FH7V3wFf4e6oL+XUJL6S9K3q3IO21+VsD7mdzccdKmV0WknufK/gv9izx34n/aBuPhddwnTJdP8A
LudTv12lIbRydsoGSCSMEAHqcdq9u8e/F79nX4OfE7SPANl8M9K8V6PpQt9N1nxNI+JYnV/LkYoE
+dkxuJBGe1fVngHaf2+/izySy+FtLH+7zn9f6V8cR/s9fDXxz8L/AI8ePdd1+ay8X6PrusPa2gvE
jVfKZmiUx8MwYnH41XqSt7HDftg/sx2vwe1zTviL4U2618KvEE0V7ZzK5VYDK25bfrnYV+6eODjg
ivZv2h/hd8MvEH7CGjfFLwr4Cs/DGsagbCcNbEmRFeYRyKxGNwxntnn2rZ/auxa/8EzPhMiEuuND
AOOf9Qxziu80T4tf8KT/AOCavgPxOdFs/ECpZ2UDaffxb4pBJcFTlfXnI9DQugXsfktMwZ23KUKk
n5RyuOOlfoL+zB8IfhVD+xHrfxT8a+DY/FGo6TPfyybpWR3SN1VYwc4Axiua/bJ/Z+8K+PPhnb/t
HfC3/RfDGqgTapYPEITG3mJbq0SADB3ghh6896t/s0/tPfB7Rf2RdX+EXxKvtYsJb+W7W4On6dJM
DFM4ZCrKp56cH3FUNs87f9pL9nWJ3P8AwzxLDIwA2DUF2ALjBCg4zwO315r5X8VXdhqPiPU7vSrF
tL0+6upJbW0dvMMMRJKqW744H4V9w+Af2UfgD+0Pa+KdJ+EfjXxJceNdPsGvYLTWLY28Od21QQyL
8pJCnv8ANntXxD4y0K78FeI9X0TUoimpaXdy2dxGDu2yRsyMMjjhlNXF9DJrqYN5KYxhkLyHkkHq
O3FEe1VdmUrj5jtb7vHagkXCkmPk9AvNEDGM8FgS2dvBFEhQ1kj7f+E37Lvgz4OfBPVfix8frKUR
X0DRaL4UMrRXczBvvgZBLMMED+EHNal1+zj8KP2pvgJe+JPgVplx4c8Y6HJNcXXhrULl5rm5iVT8
iqXbGWxtPTOQetfQP7f3w1g+LHi/9nTwNd3raZaarqdxbyXCDLRgW8frxn61S/Y++Alj+zz+2d8Q
/Cek6pLq9jB4VtJ1uJ1AfdJKpIbH0/Wsnc3Uk7u58L/stfsja98ffiJcWV9HP4f8L6JK7a/qN2pR
bfy2XzIN/RZMbu/AGa+h/D3gv9kP4j/GbWPhjo2l6jpt8RPb2Xii41D/AEG4nVfl8tt/zck44w23
vXvXwiwv7NH7U8wyjNr3iNmY8/8ALsOe31/GvkK8/Yy034f/ALPfwo+LsfiWS+1PWNU0tm08x4jj
E0oJCNnORgDn8uaWu9xu17HifxW/Zk8efC34wy/Du50W51HV5W8vTJLSFimoJgHzIuCSBnnrjniv
pXxn+zr8Av2UvA/hXS/jGup+KfH+o+dPe23h24ybRTgqHTeCqgMoB74OBX2d8YbVbj9vD9n4/N5k
Oma3Ifp5AAz+P8q+cfif+x9F+1j+2n8cBeeJJNEt9BtdMZFSFZDI8lqMZJxhR5XOOeeoqk3Im+qP
BP2v/wBj/SfAmlaf8TvhdN/bPww1O1Rm8uczvYSbQWMjdlOR1JIOas/swfsgaHrXw81v4ufF+WfR
Ph3Y2U4tLNGMNzfyKgYSITjKld4AByT9K+hPg3pp03/glJ8SbJpPMSJtViWQk4YLMqg9eOlb/wC2
D4HPj39lD9nPwalylimsavountOUyqCS0KZ2555bOPalcEeEad+zD8Gf2mPhD4j1L4Evqel+MtHm
WaTStfuCJbiFYyW2JknJ6AjuO1fMvwS/Zt8YfHj4ox+CtMsn029jdhqFzexukVgq53GTI6/KQF7m
vvP9mP8AZhH7LX7euneGk15teS58I3WoicW4hILSBNpGTwNvc9xXs/7PdpHD8V/2srmKMLIddAyB
ycWrkjP64pajeh8r/wDChv2R4PjnB8KW1fxBc63v8iTUvtA+wecYSwBlyP074B6V8n/Hr9nPxV+z
/wDE+78GapaS6lNcMDpd3axM6XysMqIyB8zgEAgd/avSZf2NLhf2PIvjo3iNZJpJOdNdDkA3X2cM
JAeoJBxiv0R/aI0+GX4ufsnQTRKxj1t8BlB+7aR8c+/NPmsHLY+Km/ZH+Ff7PvwW0XxH8ftT1O28
Sa/cp9k0bQmV57eFo9yiRCNxxtbcemTjHWuX/af/AGOtH8J/DLw/8U/hLd3mu/DzUrWH7THL+8ub
SVwx3uFHCgbQfQj3r6O/ah/Zlv8A9qv9vHV/D1tr0WhJpXhK0u3nliaXOZGG1VBGMlskg9vetr9k
LwpL4B/Y4/aL0C5mF+mkalrVkGIGxmhs1ViAegLDd+NLmsK19T5A/Y7/AGOm+PNxe+LfF1xJoXww
0ZJGvdR3+WbptjFVjY8YBA3H3x3r0Xwx+yX8EP2kvA/jNPgf4h1mfxvpIikgsNfKRJMpY52LtBIK
q3IOASK9l+MuiTXH/BKX4c6NYbLaXUG0m3wOATJM3X2Jx+tc58E/2QtR/ZU/ba+DVneeJLfXU1e0
1O5UwW5hMRS2YFD8x3A7gR0HB46U79Qtc+EfA3wY8WfET4l2Hw/0fS5/+EhnmMDwSoVEGOrSdNqj
14r7G1X9lL9mrwD8SvDvwx8V+N9dfxldRWqXLWmGtY7mT5CC+CE+c9GPAxX2D8FtNtv+G8v2hL5I
ESeKw0WLftGfmhyxB6jOOfWvzwu/2ONd+Ifwj+KnxrHiSzgt9M1bUXFjLE7TSrFOSx8zPB54GPSm
2CWx5p+1V+zN4g/Zn+JN1oN7FJfaRdn7RpOpou5biFpGEalgMBwAMj3969o+HH7HPg/wZ8BX+Jvx
31W98OWOpPCujabp/wC8unV1PzMgGd2eQuOAOc9vrL43WMOp/s6fsqJqGLl28QeHFeWVQzNm3+YZ
Priub/bw+BesftJ/tXfD/wAA6Xq0ekJJ4bubwzTgtFH5czbTtGM9h9KUXfqPqfL3xz/Yw0XT/ghp
PxX+D2p3/ibwnskGpx36YuYW8wIreWBuHO7PHocAc1wP7I/7K2u/tO+OI4IpPsHhDTJFl1jWvuqk
fOUjYghnOBx2HJr7y/4J7fDO7+DmoftC+Cb+6TUl0W/trYyKP3T/ALmUkgHpnIz71zeg250T/gkX
rz6aRbz3MdzvljwrEvqOwkkdeP04pc7C1meSeF/2PPgN8c7jxt4b+FXjrVtT8c6RazPb2t/GsVvN
IjbAVbYNy7gASpPBFfG938KvEtj8S7jwA2l3D+K4777F9i8ptxfzPLHUD5ScHd0wa+x/AX7Gvif9
nT43fs9eI73xBaXUHiXXYCIbHejQjakpQk4yCCQRX2ZBo9nN/wAFM727NnGbiH4frIJSg4Y3YXcP
fHfrT5mPY+L/ABV+xt8FvgkvgzQfi3491TSfHGqWqy3FvpMQlt4mLbcMwQ7RyPmOOhrw/wDbB/ZR
1T9mnx3JEm+98Hal++0jU2kDebGAu4ORgKwLcDvn617V42/ZK8UftF/FD9onxvDr1pBY+Gtfv4hD
cszSSCPdJtU4IAAUDHHU17R4rtDqH/BJrwkNQU3tw4swkk43MC1+RnJ6cEihycR8t7any5+zx+xj
YeKvhLrvxW+K+qXvgv4f2MBaylgQG4unD7WOwgnbwAOASfzrofiF+xf4S8VfAib4m/BLX7/xfa6f
dTLqdpdpsnWJF5KIQGyPlOMDI/KvqD/gor8LNS+LfiP4AfDLRLpdNi1ae/hMZcrbqEhhILoOuBux
6ZrG/YA+BWtfs9ftSfEvwNq+oxaktroFpNIbd2MDGWRSCVOPmwMZI5AFLnZN2fn1+z9+z94i/aK+
I1l4X8OQsFJ82/vnGEtLdWUO7E98NwvUmvq7S/2K/gB4t+K2sfDXQPidq9140tGuYxBNAoh8+NSS
gk2bTjjoT0OK97+A9lBoH7Mn7UWqaeiWk8mu+InjuYEEbgCD5eRjGCeK+L7X9jnxj8OPhn8MPi7d
a9C9vreq2GyG3LfaY/tEgZSzdzkD8zzU8z3uPc8B+I3ws8TfC3x1qPhDXdOeDXrCXyfIgBcyHAIZ
D/EDkV9WSfsRfDr4O/C3w1qvx68ZX3hTxHrs8rQWGnRmfy0AG3cEVivBDE8dSK+4vjbotlfft4/s
/Ga1hkk+waw774wfM2xHBPqQQDXzF+0P+y74u/a0/bY+K9lo+sW2nW/huz05lOolnXEkC/IgH3c4
Y/XH1p81+olHU+ff2t/2OJPgCuieKPDd5Nrnw71e2ha11STBdZmjaRgwA+UFQCM4qt+yh+x5L8f7
XWPF/ii/l8N/DfQreeS81jPLOqhgqA9V2ksSPQCvrr4G6ZLb/wDBLv4n2usuL5rY6tGjTHzQhTYg
2EjIAIOPSrn7VXhi+v8A9gT4FeFPD0o0qXXL3RbBljcxI5mtX+/t6guQSDmkqje7Hy6nz8/7EPw+
+Knwm8V618C/Gd74w17RJomksb2L7PuiILMoLhcnapIIBHGPcfKvgH4T+I/il45sPBvh/TZ7nxHe
MYVtjwI3AJJcn7qgKeT/AIV99/sp/sy+Kv2Vv23fC3hrWtXt7+LVfD1/futk7eTIqqUAdCOSDkj0
ya+if2c9DsLL9rP9p/Ubazihkiu9OjjZUA2k28hbb6ZPXHrQpNvcLW6HybefsNfAnwx8UNG+GviH
4p6pb+O72CCNrCKAGETuN33wm1c44yw6e9fJH7RfwI179nz4k3nhnXLd/KVmksrnO4TwFmCPkcAk
LkjtXpq/so+O/FfwL8R/H2XXoDaQ3VxOY5pnN5lLgRh1b2zkc9hX1R/wUxtFH7IfwXe5jSXUfOtI
3uZFBcgaeWYFuuCwzVxk7kSbPy7LrNJwNqBs785zT90ZIXgZz1NRBVRVYsGQluQMU5SCjkFR6Mwq
2Qj3H9lX9lvXv2nfHsenWIay8OWDhtX1f5dtrFzjaMjLHb07dTXv+ifsMfCH4pp4t8PfC74mX/iD
xpodlK62NzAEhnmRiuAzIMgsMfKT9eK9V8IfafC3/BHu51DR5Rp+p3kMpe7tD5UjF9S2ZLjn7vFe
V/DL9kTxl+zh8eP2ftb1LWIJ4PEusQbEsZmEiJsWV45OcMuG981k5WVzVHxnc/DXxFZeNpPB0mlz
jxCt7/Z4skXfI0+/YVAB55r7E1b9hn4WfCbSfB+m/F34lXfhnxrrUInl063i8yOIFwuCVVsdQNxO
M57CvtWLw9p11/wUtubv7JAJrbwCkwPlr/rGugN3s3vXxZ8Tv2T/ABn+0h8Yvj744tNXt5NN8La3
dwiC/kaRzHGrSbIgeBjGMZAo5vMpb2PE/wBrv9lDVf2ZfHi2Rae/8J6kSdI1N2VvNRUUuGxwCGfA
45FdR+zf+xrb/Er4deIPiX491a58F/DrTLRpY9SCAvPIDhyq9cL9DknjpX1j4sV9X/4JL+H7jWMa
nevHbKs15+8kXdf4GGbJ6ADr0rX/AOCgfw91zx/pHwE+F/hWRdIXXbu4tWtVk8m1wtvEVEijghTk
45yfwpqTeorI+WfH37EOgar8D7n4j/BrxXN460/TbiUanBPD5TxwohZ2VThiy8ZHcHivnb4L/BTx
N8efiBa+EPDNq09/OWMzthEgjVgGdicD5eeOp6V+h37A/wADPEf7Pf7VHjj4f+INQhvYofDiXMtv
aOWtXMssYVtpAyxXI6e1d/8As7aRY+Gfhv8AtT61ptnBY3qeIdbSKaCMK0SxwHaFI7AnPB60c72Q
banzwn7BvwW1P4r3Hw3sfivM3ji33KdPW2PlmRUEhXcflyB6HPJHtXxT8VPhD4g+Efj/AFDwp4hs
5LTV7Jwpg+8ZcgFShH3sg9s+le4SfsreO/Bvwk8KfHWfX1Ntq93buESZjdgzSMiyF88k9/ryeK/R
P9pDw3pGuftb/s2RX1lbXTyz6k8qzRBvN8uBCgbI5wcke/apVTVq5Vz4e0P9hvwv4L+DGjeLvjb4
yl+H1/rdxiwsYovOYps3IXUAkHHJHGOmRXDftSfsc3HwO0DRPF3h/U5PE/gLV4IZIdYEYQI8m5kQ
jsCoXk45Ir6X/ax/Z08a/tYftn+KvC+g6lDDp3hvRLGZU1CZlghMkZHyIARuY9eB0rrP2VtDvIP+
Cf3xl0XxHINRk0W41ezjW6PmJAYLNAAmf4VYNjpT1TWu5NrnxR+yd+yLq/7SOtXdzPLJofgrSEd9
Q1zb8inYWVF3YB9SQTgV6q37Bng74l+BvFGq/B/4iP488Q6KsZbS0hWESZc5HzcglQ+Mdce9e9/G
fSbrw1/wTI+HekeESdHv9bfS7V2sX8hrhpt28My4zuJwST0rif2Yf2ZfHH7Kf7YXwy0XW9VheLxB
Z3008enXDGORIYZMI4wNwDFDyOuahzktbi5Ez4E8NeA9c8ZeNLXwZpFhcXHiK6u/s0Vltw/m8jnP
QDBJJ7fWvs2X/gnh8OfDHiXwx4P8W/FkaL491W3gll0lLfJ3yggIDkgHeMA55x719j/Bnwbo9n+3
t8c7+3021iktdO0kRERKPKeSIM5T0zt5Nfn14g/Zq+JfxI0Px/8AHhdUZ7HSdXu5TPPek3Qjt5mA
8vuNnGOe3Sqc2HKePftF/AHXv2e/iXqnhjXLZ0tWkkfT7uQ4W7t97BJBjpwOn6Yr1/4H/sRp44+E
F98TviH4oPw/8Hr5YsLu5iDC5DNjfyeFJwAMc5zX2T+0Podt4x/Zc/Z2udbtYtY1TUdV0CG4vLmL
zpZBJCWcMTyQx7Vg/wDBQr4TeL/jl8b/AIZ/CrwfdLY6dc6Pc3bWLTGK0Xy5eHZBwSFGAcZ+lLnb
sykrHyb+0D+xMnw8+FGifErwD4h/4T7wjMz/AG3UoEVRbKDtRuG5G7IOBge1eX/s2fs8eIv2iviP
ZeHNAjkS3WVX1XUQuYrC3JwWbOBk9h1J9q/Rb/gnj8OfEPgCX42fDTxpPFqljoz2ls2nGTzrRWkS
ZpCqtwdwCdh09qzPhTaR/Df/AIJmePNf8OxJomtyJqLrfWSCOZT9q2KA4wcADA9qftHawuVI8KX/
AIJ1+E/FNz4l0rwD8XLDxP4t0eCWX+x4o13s8Z27WYOduWGCcd6+NNR8I6pp/i+58M3FjOuvxXJs
msBGWkMwONm31zx+NfYXwh/Zo+Kn7Pvxc+CXjHVbxbO38VaxaQSPY3ZMkiSskrpLjqCucjOD6mvs
7XPhr4Yv/wDgpDpFxJoVhJIng6TUpCbZPnuPtDKJTx98cc9eKaqPVDS8j4nh/wCCd2meGvDXhe7+
IvxJsvA3iLWoXlTR7lFeRDkKq9ecErnA79a8U/ah/Zm8Rfs0+PjpOrZu9IusHT9XWPEd0NqkkDPG
C2ME84/L6F+PP7PfxR/ab/aC+NfiLSL9r3TPBmqS2sEdxeFWijVN4jhByB0J7c4/D2jxvbf8J1/w
Su0fVPFOdb17yYRHf3v7yaMvfBcBjyPlAGfaiNVqai3caTPjv9m/9jDWPjd4b1rxdqutx+D/AAXp
UDOdbvIQ8Tsp+ZQCy42jOT2x3rZ+MH7Bdx4E+E0vj/wd4wi+Iuj28zxX8mkxAi3RFZmc7WOQuBnJ
4BFfYP7fXgnX7z4bfBj4RfDiJdHh8S3s1nNplpIbeG4UQq2JMHlQSWINYf8AwT1+FPi/4S/Fj4hf
CXx2v2jSo9JS6l0l3WSylE0iJuC9PmUMDmplNq0r3uFne5+c3wt+EfiH40+MNK8K+GrFrm8vpcPI
i7lgi3ANK3TCrnJr6yP/AATIa68Qap4e0j4oaBqHiKxid30dSPtAKrkgqGyvPt3r6R/Zw8I6P8Nv
gt+0R4z8NaZBYa/ZarrsWn34iUSwxxQgoiEjhQ6jjp8tfG3gb4SfGX4b+IPAnxru5ry3j8VanbI+
ptOJJ5jPIMl+pw6Z6j29Kbm1d3KV3oj5q8S+GNX8H63c6Lq2ny2GpWriCa0mUh4nxnp1PUfmK+nP
Bf8AwTs8QX/w+0jxJ4s8W6Z4DudWLNa6XqvySuBjnk8ZyOgPUetfdHxm+Dfg7xF+3l8Kp7zw3a3E
t9pmpXepMykLctEAsTOBwxXYBz2/CvmX9tH4X/Fr9pn9p/x5pnhqKfVtD8DLbiztBMI4rYyQI7be
nzEqT68Cj2nMLl2Pl39pD9mTxN+zl4sg03WF+16VNGslpqsMZ+zzkrkhWPUjjjPerX7Of7Knir9o
y61KTTnh0nw9psDy3Gs3+Vt/lIygb1AOT1FfcVxZyfFr/gmJca746j/tvxBpSzmzvboAzW7pciFc
MP8AZBHv3rR/a10HXPht+yr8Kvht8KIBpreK5E0+6tLNAZL3daK5XJ7sx5NCqtrQPI+M/jF+wp4s
+E3w3PjW21i28Z6ZHP5FzLoa+YIQELM5xnCqRg59jx0rwnwP4L1T4ieKbDw9o1tJfavfSiOKGBN5
5PLHHYcZr9Hv+Ce3gLx98Nvib4r+EPxCt5I/D1zocmoyaJdGORGaR403ZXJGV3jBNdr+y/8ACzwt
8LND/aE8Z6BpVva674d1PVrDSrhwX+yQxRl1VVY45KqTxnikqjasSrnyjff8ExfH9kL+3GveHrvV
raMynToLpmmAVdzjaQMnGOPevkXVdIvdA1O6sNUt5rK/t5vIktnBR1cHGCDzX1L8NrP44eCvH3hv
426ml/5Him+hVtYl2+XN9okCsoT7oyo44HG3pX2h8bP2b/A3iz9ub4ctd6OrLrNne6lqYBIFzJAM
RZwQP4alVmr3E4nwv8Pv2APiV8RfCGmeI7f7DpOnXxYQxapMYppAvAZV9Cemeo5ryn41fA7xN8Cv
GU2g+KLSSF02mK8i/wBTLlQw2Nk5719c/ttf8LY+L37Rfi3w/wCFtPu9S8O+AzCttaWESqlvvhEh
ckYOflx1PTpXoPxh02z+OX/BNuD4i+Mo0uvFOhrK9lf248ohhcm3XcB1BUDP41qptPlZWiR+YD7o
yZDnByQGqExS4kkkUqFGMLwDntxVy6IwjE/MxwQRwcfh1qKfblGIfGdu3J+YnPetbXMNCuyFWBjj
2AdW6nFSTTq6gofn9+/HP1pUG1mib5gRk8/hioxMqeXGkbEZ5BGOKLoVwEYjTeV3ZycDtinCVZlK
InfjPB+ufwpEUGRQQ5AOSqnHH1pCFZz5ZbB4K47+v86dwuL5RlT5SxYYw3UY9fpml+bzGJkDgfKw
Hy/lT1R/ldSwOPmU/wAv0pMKrKA7K8nDMRgZ9KCkOt7tLidht8pWJIAO4Z+n5VFeuTcA+XyH4JOM
1KYAMszAk5zgcggf/WqrcsDLCArDa21mPc0r2JZMrpG6liZABjaoya7Px6848L+CxJhdtgxEbHG0
buvI56YP0NclbBGvxiTaCcZ29Rg9q6Lxi8aaf4ZQus7RWLBIjxtUsTkj86klXukcW0u5lGN4U8AD
j35pzMMDKbPQ+opDcIdxY5YjjAxzTGYo5+UMeMHOeKlmp1mkxCSG9EjuVEQcMoJJ56cHscV0/hWy
iU+aJJJZVffhBzxycg+vT2Irn9Mg2TzFC290ZEUKSXPHYfzNdRpDNEoiMoWViZNvX0GP8a6kupy8
tnuegWNuiZZXLLGCxK5wOgxgfh2qz4ny3gbxQropkNg+1gduRgZH9fwrN0zzpYolaSJyyleucDp3
XI449uOtaviCRbTwR4jinjjikaymUeczZJwQOencCq5m0Z8nvXufK93Cizkc7AT0HzYpka+W6uhI
x69/rVt7R5WIA3beMjkn3qOOMxzIWGWBOFJ/LPFYdTtseyeA3ka3jEYZcgAIg2j3Hv8ASvRtKkER
JdTG0gVC7OAWbPp3rjfgz4W1Xxrc2OiaZYyahqlzIVt7e3AMhyRkjI7A5NfTvxl/Z8/4UC2kxXvi
ODVLy7Qi4tduXiAUYbGBjPzf/XreMo7GElyu55ldwQ+auYoY3kAAyCBk88enbimeaN5UBpSEOwBe
T7Ann8e2a9s+Dn7NWofGL4da14ysNRis7TSmeEWsgbfIUjDsASPQgdx746eO36+S0MUbjdI4H3Sd
q56/L278Dpmt4Sj1YpxT6jJRJclpWmZ27SOTuXjkc1IIWRgsjr+8VlK7SVbr1/M8H1r3r4j/ALIO
o/Dz/hF4dW1+wSLX5RbveudggY7WIIPseOe1dZffsAaro2mw391440SCyKg/aJi6ooYcDcex47jp
WLqU29SeVrZHySXUziCTdJsPl4i/hGOgOeOg71YijitoZArSRxsGkYrwxJ5OfxxXv/xR/Y01v4X/
AA6uPGR8QWGtWEbxtutGONrnaCOORuI6ZNVfCv7KXifxr8J08c6bcWk9kUkC6eX/AHzhJNp2nGM8
Hjt9ablTtoSou54XHdPBKpjTiTILN2PH/wCqgh2mLLuLptATGOPftgDt7V6R8G/gdrvxo8X3egad
cWumXFnbyTSm7Bwm2RV5BwTySBXMeP8AwPfeAPFGo6JqEwN1p9y0E5QgozAkYGMEDgf4Vjza6DcZ
MwC7iFllG4sNu0kD5ccDPfk54phgSUblRZZH+4G+U8j8uuPyr1/w7+zP4k8WfCi68aaJe2t7BBw1
hb5kuBh9rZVeeg6fpXjd/aPHdlpHdjGpj8pDtHJ9Mfp71smgUWSteLFc+UiPIytvYhTjp1B6Ht0z
TXSWJhM+EG7AVWO7pwSMY6+/PtSSrHCse1DuwWyMY7DH+femxys21C25CQAGB4Prz2/xFNO+xNtR
5VHSOZlwGY7nUbcjng8Hvnr+dNkPmOzeSWjUZcbcjB69vSmzSyLlTIGV/lAXgY9cd+opsJ8qSWDd
mIgkMO/HUUNMzkSTSuJfPwFz8u1cFj+Bzx9KersARMGZs4OB+QGP6mqxaCNNkbK0iuGUDnLfXqPw
qWaTcu6RQJHwUJPQ/WpSJTCZi+NsT24wGUMn3kPf2/8ArU2ZTvizvVQSD0z374ojZo40cyMrq3EZ
HHTHB7j69KllbZbiR4iGwRtAAOck44754zTSfQm13uLEEZQ+cRqcAlsEemT3xTzIrRyNx5+0hlAO
AM9AfXGOfrTLV2aFxINrMFOzGQSQMDH4daWC5ljjbfMJFXJOU+ZDkjBOBz/9aqfmbQWupYimuEkU
p5mTwfKJXPQZ4P8AKpxeykHcznJzmRtwyffg+3413nwu+APjj4w219e+FrBbu2syIZHuZ1RMtnKo
xHJ6k+hI54o+JP7Ovj34R6ZZ3ninTYrG0uD5MUkEwmRmxkhsfdOByeh5rFuNzp5WnucAsWb12ZDJ
KvzBienp+Wf5VcTXb6NCZrmZpCMRqZDyOBj/ADzVfSrC41/ULTT7SL7dc3cyJAIctvYkKoHqe+O+
K9lk/Y3+LUK+a3hWUpBCSWE8RZiv+yG5JHPAqk4idzxm0uDI7eb8kcY2KqoVI6/lxWjb6vqNnZTL
baje28UwA/0WZo8kADJ2nrhcfSqd1ZvYXE0EyrHLE5WRX+XZjswPIrrvA/wR8efEzRZNS8L+H5dV
tbeVoGliCbFcDO3JYZ7HOK0tCxGpz+r6zeamkUlxPdXropXdNMztjuBuycdDwcc1FY61LHeRziSa
KWFt6upwysDwQRyMe1dD49+Fni74YyWkPiXRLvS2mR5LdXUMsmOuCpPTOKwtO0u51m9tbGwhe9vb
uYQw20Y/eSMSMAD8Rx2qVyLqTaT2NWfx/wCKp7R1bxJqksMqtHKkt7IwIP8Ask4P/wCusVZyIZQz
7VboynnHfHHB9/evQ5P2c/iXZW89zc+DNVjt4IjI7SQ42gLuPAY56H/Jrz0RssTRbxOqfM2xv0Jx
/nNVeC2CzOm0P4g+KfDdhHbaV4i1PSbMMzGG0vHiiJY5ztBAySeuOc1na54t1rXr9LrWtUu9UlwE
Et1M0jbByBz0HJ4ArU0H4YeMfGWjDUfD/h7UNTsSSguIIGeNmHbIH+c1n+JfBWseFp1ttb0m50i/
8pX8q7hZHKn+LawzjORn2oXKyJRkV9O1q+8N6tbalYXVxY6nAd0N3bOUePr909jgkH611+pfHPxz
rOl3djf+KtYvrOeIpPb3F0zK6kDKkZ5H+JrhLPTLvVb2KysrR7m5nbZBbxktJLjsoHJPtXRap8NP
GGjwz3N74c1W0hiBeRpLRwsajJ3E7RwP5Vfuom8kYURwq+U2wL8oPoPbPp9a9B8L/HPx14L0e30j
RvF2pWenWzt5UK4KKM7iF3KTjJzjI/pXnuwSyA582Pb0DcAVqaf4Y1vUrOK6ttNu7myOdk8Vu7Ie
oxuAx2pe6zSLky74v8Zaz441ltR1rUZ9U1CUqrTTtuYqBhRgYHTNSeC/HOufDzWhrPhu/ew1IJ5f
mxopJQjDKQQc/l2FY+p2N7p13JaXkMthOADtZdr855I9+marxW0k1ysCAvMzH/Vx5LYXJwv61XJF
jd+h6X4j/aK+IfjjwtdaJq/iSW7sLj5bpPJjjLgEEAkLkAY555rzKRo4owFZkO0BTngjGfqc4q1J
YXMVlLJPZzqqsA0zR4CjOMk9B1qArtEjFmCYB3ccDB+XkYxgVXs4Ea3PZvDf7WXxF8LeHLDSrfV7
aS0sIlig8+yR2CKAADkZ4GOa4Lxd421b4geILvxB4guTdajdqFkITam1RwFXoo4HH1rAhspBalng
dV3k5ZevuQM/nT3fzNio2Mr8xQkgA8/qKz9nBGl2jufhn8YvEnwp1W6ufDs0Ef2yIQzQ3cfmRFRk
8plefQ+9dT47/af8bePfCsmi6hPY2NhJKrynToDCXIP3SQTwT1HfArxsshgd2ZAiqQG3YbPt+Hei
DUEmbapKBW+dT2ODUqMbkuTNG21S50y9jmgnkt3jYtBJAxjZPTBB4Pvx3r36x/bQ8YWrWcl1ougX
c1uiQi7uLV2nJAxuLBh19sfrXznJLIW3oZUPU7M4PfI5x7VNczmCFXlYLk8qW5U+n6/rTagiVJ9D
pvGPjXVPHfiK91/Wbl7m8uJmYIxJWNSxIRBn5VHGBXZfCr9oDXPhdoGo6RFaWmv6JeYzpeqbpIYm
yxbaM8Bs5I6ZA4ryYeY9wUb/AFY4ZA2ST15P+elODbGTyywLsQij+L6U7RYKUme2/EP9prWvHHhA
+GLPR7PwtpgLNJDpEjIJQQcxsOMKSSeK818G+LtS8F65Z6zo9zNZ6hafcYNjeufmRuvykcH2rCZH
haSRVkLg45OMnr39qjWZn2OUMahydpbHHbmqVOL2Q+Zp6n01F+2Hp3/CSTeIx8OdJ/t/bsF+sxWY
krtPzbD249cV4R4v8Waj448QXeta3O19qEy/PKwG7A+6oGOAMgAVzu5fOk7sy5BBGP0pwHm5aVuj
bcKSM9fSocUh3TPdPBn7SMWkfD8eDPGvhg+NNNs2R7JLi4ETRRquI0yFOcEHBPQdaofFn9oj/hOf
B2leFfDulJ4W8M20WybSo3WQSYYFQWwDgEdO+TkmvGjuDKqkKqjGCex6GpFZGkYArKRkEEHI9xQo
xfQHfud58HfitqHwU8Ypqum4lsJiqajaqoLXcQOcAn7rdcGvX7D9pf4feE9R13XPCngC60vxPqMU
w+1yzIyLKxLcpuIA3YOABmvlu6MrW8gtyud2d7nPy5GR+hqQRSLaq7ZfqWUHJH+P1qHBXvYuLN/X
/Emp69rt/rt9dynV7qc3JukIRlfOcr6AHGB0r3S+/aL8B/FTwzo1r8S/C2oatr+nGXF3p+xAynHP
3wRkADGDyCRXzafmX+Mkcgu38uO1NQn5yjhtygFeN1NRi9yeaSZ7V+0D+0APictj4f8AD8M2keEb
ERtFbunlSPIqkHdtblR6HvzXjUlwxMjn58gfLTeFjZtpbszEdO/FMknYjYq5Xsx6j9PrVWUdiHO4
pZSqyJhiTjAGB616d+z54t8I+C/H9prfi/8AtBHsB5tkbGMybpc8Bh6YJ/KvMJlXBXczSbcjZ059
6fG8ih3jOFZcAscd/wBD/hQ433HF8rue8/tP/EjwF8VNU0/X/Dc2pHXDGtrdRXUDRxG3Xeytgn72
4/l2714VLseSON3kwGPmMrZbHtkHtUMsph3yFiWccovb/PFQvczSlSFIj24wnBbtz+QrNw7Gqmkf
Zr/FL4Ly/Aw/DOPxBqqad9m8oTNYy+aH8zzOTsxnd7YxXxxM8cMvFwXGW/hCluepHY4H61DM37t4
hIxOzpnn0/Sq1w7O6hWyxbr0JPY+1OMUjKU23c+lP2TvGfgP4b6xqPiPxR4hk0vVGR7CG1e2d42i
YxsX3IpOdy4wcVwX7Q8vhPVPiVqOu+E9ZfXLPWy95cAwtGIJt2CAWAyD1ryxJXYJ5rspzxgEHPpj
FMaWdrh1EW0DLKCB06jrj9aagh8zZseFdK0zxH4j0+w1nU/7E0+SURz6gU3CJCDzj8Mfj2r6y/aq
8X/Dn4m/DS3ttK8b2smreHh9otrOFCWuT5ezZggYz+P+HxnM+C5DYcDJPQE+gx2/wpVwGdAwCgAj
PTHpUOFncrn6IrNIZbeM+SdvcHgnvj06ivrjwt8RfCfxp/ZuT4e6v4itPBep6Q9pFHdanINtwEIZ
XQZGehUjOQfrXyKZXKhnXapYkfNng+tNuc4MpysYBBAHB5B/w/Ok4qTEptH2l8KvEHgX9lrwZ4w1
d/HmleNrq/8AsyxWekzIZXIZ1VQu49TKT14wa4b9h/xt4c8LePfFV7r+qWmkPd6fG8c17MI0z5py
AzYGeRXy+C0cJUptUkZXHGOeRUVyXwv3onPRugzR7M0U3fY+ptT/AGbvDmvfErUNeufi34Yl0a91
NrmWNL9PM+ztLuaPOSNwXgHOK4H9pXQfAnw0+KGgTeBdQS/05II724S2uBcgypODgOOhIGMdPpg1
4lIZLdGJUl3BYMw+8ee9RmTzd6sNpK/Mff8A+tUOFiot6WPuH49aVof7V/hvwh4u8PeKtI0ZbW1n
E+n6vOI7hQzISu0NwwaIj06etYP7VvirRvj98CvDPjbw/qlrAmgzzPdaZeyLHctuCxtsTnJBGR6j
FfGE8K+ZHI8BJXLDIBJPr/OoZWMyAYztX7xAJUH09On8qaggfMtj6T/ZY+O+l6do+ofCXxtHt8I+
J3lt4ruLKyQTTlUYO5OAhz17Ee9dv4N/ZH0vwL8R7/xT4o8YabP4K0JpL63FrfK11KIpBJF5gAxw
FGcckjtmviyXcASwztweRwKrXEUVzI8xB3ldr9vl9/WjlGm1qfXlp+2RpviT9rrRPGGtRNZeDdMh
uNK0+aKI+YEmGBJKp5GTjOOgxV79o39j/wAafFb41+IvEvh/U9DbRdXaGeJ5L4LJxDGhGMf7BPXu
Oa+K3mZgEMZMYYq6F+TkHt0wf0yKrGR0mZ4maIsN2UIGcnnH6Cmqd9Lkt3afY+0v+Ck3xF0DWrXw
n4GsLtrnWNEczXnkxkxqjwBQN3TJHYHjIzWl4Z8W6V+3L8B5Ph1qNwNC8feFYhdWEUZ2QXQji8qJ
yzZ4bcVYA5Gc9K+FLlVZnZSzuWJfnrx/+qoGuCI8JNJDKYsHyWKtj04+godOK0uTzSep93/BH4SW
/wCxT4c1n4qfEm+js9bjSfTtL0ezlEyXJdFZASoPzsyEegA5r5M1ka9+058br+50vTrUeIvEl7Jd
pZfaNkceIyfvEc4VPTk157e3U00mGup5UVshXlZgMexOAetVDd3MF+LqGd7W4Q5jliYo445wR+I9
8mp5OzLTfU/U+b9nzxk37AjfDJraA+MDbu32WOdSrMbwzbQ5OPunPWvzutV8Rfsx/HDSbrWtJiOu
+Gr+K+NkZFKSjGdu4ZxkHrzg1xLeLtdRpXPiXWklUYDf2hMMcc4O7p09unFY2p3t5qWoyXF5fXF/
czKNz3MpkZsD+8xJPGOvoKaihc7crn35+0v8HD+2jovh34w/C6VtYv5oYdM1PRJWCNa7VMh5bb8y
M4Uj0ar91rWmf8E//wBmW58LanKuufEDxopuJNFjkCGzMluIpGLZOUTbjPGTwK/PrSPFOu+HT5Wk
67qelQSSebIljeyQBz0ywVhk4HWoNX1m/wDEF79q1PU7zUrtU8lbi9uHnk2AkhdzknHJ496hxTZq
pPofefwD8X6H+1T+yu/7Pd5fx+HPFenW8Y06SUeYuoCFzNkLxjBADDOcHIrF/ZL/AGT9c+A/jub4
qfFWWHwTovg7c8ZuNpS68xJI87lPygFl7ZO4Cvh/Tte1DQb+C+03ULrS9Rt2byLi3maKSMlcEhlI
IyCR9DV/xD8TvGPiW1uNL1fxbrur2DlXks73U554pWUgqWVnIOCM9OoqeU05rbH2f8J/29NA/wCG
xvEfjLVtJfTPDXiu3ttIF1LOM2MUGds8mFwVY9RnjI61ynxj/YA8da/8fWbw5C3iDwf4nvV1FfEU
ajyrWO4mMjlhznYCDnvxXxnI4JcHDKR82e4/wrtNP+O/xK0HTraxsfiD4mtLOBfKito9VmEcSAYV
VG7jAp8q6ErQ+t/29Pi94V8N/CXwn+zzoUx8Rah4XjslvdTidVhha3RofLYZzv4JIHSvTfFfw98Q
fEz/AIJY+BtC8MaVcazqj2mnTJa2q7nKLPuYgd8CvzA1PUrnUNUubu7upLq8u5DLNczMXeSQklmL
HqSe/vXY+H/j/wDEvwno1roug+P/ABDpOl2iGKG0t9QkWNEPOEXOB3o9ku5Ln0sfenxWhb4Q/wDB
LLT/AAX4u8rQ/Fd3EqW2jXsoS5mI1ASlVU8khCCcdARXyN8KP2Q9Y+N3wP1/xx4U8RWup6/oxkE3
hGD571lV8JwOm4BiMjBx1615d46+JPi34kzWcvizxJqXiOW0jZLaTUbgymMNgkDPTJAJ+lVPAXxT
8YfC+/ur3wh4l1Hw1e3cYhnmsXAMihtwUgg8A/zP0ppdA8z73/4Jm/CXxh8JPih408ReNPD+o+Fd
Di0Exvf6xD9niDCZXb5mA4CqxJr4O/aD1qw8Q/G74h6rpVwl5p174gv5obiJt6So1zIyupHBBUjp
Wz4o/aa+L3jDRb3RNb+JOuanot9CYbqzllVUmU9VbaoOCK8yc7FwvB7EjH5CmlYhybI0iDM5SRow
F5HTNJbbXKshIYHIHr7Go5SJGbLgtuyWJ7VHK32fb5agqeSRSYRdnc/Ur9qoXH7bf7Nfg/4k/Cy4
mk1Dwg0st3ogJGpRO2yM7QhJDLsLD1GMelZv7EPhDVf2XPB/jn47/Fm8u9MtL6wNnHp2ol21Gdo5
NwIWQgsW6Kua+Avhj8cPHfwWu9Su/A/iS78PT6jGkNyYAjrKqsSuVdSOCTzjvVv4p/tC/Ef42WWn
2PjrxVeeILTTpmuLaKZY0WJ2BBPyIuTjPXNTy33Lv2R98fsffHPw38a/B3xs+E6TSaH4k8a3mr6r
pkl+ypFLHdIsapuBJ3qWUlQDweM4r52+Dv7J/wAXta+O2nfD7UUvrXR/C+ofa7i7vZ5hpsa20ynd
FkbcsD8pC9z+Hy9pev3nh3WNP1bTLmSz1XTp0ubW5TBaN0YMpGQe4r2rVv28Pjtr2i3+mXvxAuJb
W+ge3nxZ26syMCCNwjDdD1zn+dDitkVzNO59mfHT9tr4d6J+3R4C1ETXV5onhCC+0jVNStkDxJPO
oAKc/Oq5G5hXlf8AwUB+FHxI8OfH7VviL4Lu9TvdE8cW0X2a48OXEqllhhjQpJ5eAQTggn1r4JhE
axurNwvyqCTg55r2n4cftn/Gb4VeFLLwx4a8XvY6DZbhbWslpBOINzZOxnQsBkngkgduKEl0Ju76
n2j8QZLL9jz/AIJ33Xwt8bXiy+NfE8V4bSxs/wB4ymR1lPmHPyhRwT3960vjNO/7Wv7DHg3Vvhnd
yf2r4ElhvL/T2Jiu43trVkYIqkndkhlIPPY1+bPxP+KHiX4ueMrvxT4y1NtY1y7jjikuHVVBVFwo
VVACjHYd61fhD8c/G/wJ1+81fwLrB0i6u7c29yXhWVJUyCFZWBBII/nzS5UVzI+3v+Cefg/xnofj
HxB8dfiXqmoReGtC0e801r3X5ZTdkjy5DgSfMVVVb8TXoP7G/wC0l4O+I3xr+O+g2VzNY6h441CX
UdDN5F5cdxEsLR43fwtlgcHt0r4D+Ln7YPxQ+OPg+Lwz4x8QpqGjQ3AuRb2kCW6s4DAFtgG77x+U
8Z+leR6fqt1o+p2WqafO1te2cyTwzqcNGysCCD9QPzp2J5mz6O034A/HK/8AiYfgI1xrElvb3OHs
3uJBpIUKZvNJI24yMjuTX2b+2B+1D4E8D/tOfBXS7i/nuj4EvprnXZLWEyJbLJbxheR95sDOB0Br
5Fu/+Clnx5JmC69pW+VTGbgaWnmhSMcNnrXy1LeyTyyXMkry3MrFpJJm3GRyclie5Jpctx3Z+hn/
AAUT0f4i+GfjNp/xk8A6lqMHh7xJpdnp9rqegTMJZAiF2VwvIUjB59K7z4fX837K/wDwT+8ZTfE2
8kt9d8fyXlzptvgy3E0l3ZLsMgGSGJVixPQntmvi34VftvfFT4L+CovCPhzV7E6Jb3Ek8MWp2a3L
Rs2MhWY5A4PH+1XC/Gn49eMf2g/FcXiDxlqAvL2OFYIYoI/JgiQE42xg4ByTk5zQo3DmsrI/Qi9u
E/ag/wCCauj+H/h1NJd+JPB62T6nYK3lTwvbqXkCE9Tg5XB9q85/4J96V8Tvit+0JpXxM8X6lqF9
4a8FWl1BcajrszDyPOhfCR7sZ6gnjgAc18g/Bb49+Lv2f/Fr+J/B12LPUTbvbTJcIJYJkbHDxng4
wCDkHj0rvfiv+3j8Wvi34IuvCOsanp9jol3Ij3CaPZi1km2EkIWVs7Txkd8fmuUfNY+8P2ZP2qvh
345/bZ+Lf2PVpBF4wWwttBuJ4GVLs21uwlAJHy85xuxkDg18T+M/Anxy8IfFbXfgZBPq6f8ACQ6i
8q6LaXJNneCdy4fcDtA2gFiSMbT68/NllqtzY3MM1tPLZ3cTb4riJ9rxnsQ3Xv2r6xi/4Ke/Gq3F
o5l8NSXcEaxC8l0otMQAFBZt/Ujv70cvQaktz6q/a9+MfhL4PaX+z58PtYvRJr3hTV9G1TWbaCNp
FtLWGIozlgMHJBwBknHauL/4KO6l41tvFfw++PHw11a8t/DM2iLp8Gt6ZJhw0zPIu5PvAMh9OoAO
K/OXxB4q1Txh4n1LXdZ1GXUdT1GVpriaaQszMWJxkk8AHAXpgV7P8Gf21viN8CvBd74W0KfS9S0K
W5N0tprVr9pEL7AMR5YbQdoOPr60cltgufb37HniDUfgP+zf8SPi58XtRmsoPGM8V1Z3d6/mXF6x
jkRDsXJBYsoAwOB2rN+D81v+0L/wTU8RfDPwPML3xvpkX+laXIdkiM14bhcFsBgUB6cZGM18H/H7
9p7x7+0lc6O/iy+tYLPTLfyrbTtLiMFshzneUyQW5xk9AKwfgv8AG7xX8B/Hdl4p8IXv2e/twVki
mybedSpXZIgOGHzZz2IFJQsLmZ9R/sbSfFr4/wD7Q/gP+3dT1bXNB+Hd+Ly5bUpQsNhGAIwvIGWO
zHfhT1r6c8K/tOfDzV/+CkGsT2/iGKWxvfD8Xhq1u9jCJ79bgFoQfXI64A96+KfGn/BRn4q+MfBe
veGzB4f0G11ZBDPe6JZPa3W0HJ2yLIcZ5HToxr5hiv57SVHguHt7mJvMDxuVZG6hgwOdwPOadik0
fXP7R0Xxo+D37R3xC8IaZqGtaRp3j/W5by202wlBh1OK5lZE6ZwSDtIyOnNfRf7TviOx+A37C/w3
+FPiy9W28dMLBpNLt90rrHDOXkckDooK9TyRgV8z+Gf+CmPxW0DSdB0250vwzrr6NaR21tqeqWDS
3ZWMYV2fzBluBzxzzXzz8TPilr3xY8dat4t8SXrXeqX8z3DDe3lw7jnZGCTsQdAM9BU8rYKSTP0k
/wCCh2u+IvGXw7+EXxu+EupTTaH4fN1O2tWDbZLczGCNDtYZ6q6nI471T/4J46z4p0yX4kfHz4ra
pLB4a1HToLca/qk65kMEu1uBzj5QBxivi/4E/tiePf2f9C1fQ9FTStZ0TUwjS6ZrcLTwRMpY/u1D
KFyWJPuAaX48/tl+Pf2gtC0rw/r66Zovh3TXeSPTdCge3gmY4IEil2DBSuR7k0KDIvZn3P8AsneO
dI+MX7N3x28BeGL5brxnrFxrV9aaZISrSQ3KbYXBOOCSO/Ga+RvhJF8cPit8SPCXwcnu9a1XT/CO
rQySaTdERwWEVrMFdmJxwoyByc9BXgvw3+KGv/Cvx9pXirw1eyWeqaZOkqgMRHKquD5cgB+ZDggj
vxX0prP/AAU1+JWr2OtRWmh+FtE1HVrWW3uNVsLB4rtfMABZX8w/N3z6gGlytaFqZ9gfGX9ozwFp
3/BQ34Y2934gt0tvDthfaff3ZBMVtd3G5I4mYA4JJUE9Bkc188/t1eJPiz8AP2n/ABl4l8OatqPh
zRfGq25tLuxdSt+kFvGpQcHBBJ9M596+EbnU7i4uZbieeWe8lYySTyuWeRiclmbOSc9819O+A/8A
gop8RfB/gPQPCt3pPh7xTb6PmO3vdft3u7gKWyBuLjOOP++QKpRaKT2PqO5mb9mv/gmVq/hn4h3A
0jxf4oS+FhpszCSe4aaRWXhM87cE+me1XP2jry8+MX7BHwy8QfDC5l1Y+D7izvL+4smAnsWtLR1d
trDOUfacYOR61+evx5/aC8YftEfEGbxP4pvAZDGiW9las4trVQoUiNGYhd2Mn61tfAD9qbxl+zld
6qNANnqdhqdu9td6Zq6NLbPkr8+wMOcLg88jipcHuF763Prf/gnrr3xI+MXx2vvjH8QNYn1Lw54b
0a906bXNQKxRRFtjhAABwAWJIGABXsf7H/x28E+PP2jvj/baLrUN1e+JtSjutHTBAvIoopEZkOMH
1/EV8GfF39u7x18W/hw3glbLRvCGiTT+bdQ+G7Z7Q3HyFSjkOcoQeR7AV4N4b8Wap4M1y01vRb2X
TNWsZRNbXMEhjZGHbII4PQjOCM0uS4nK57lcp8fbbUb79m+G61c+bdujeGYim1t4NxkttztP3j82
K+mv+CrPjTQ7D4e/Cv4cLqEVx4o0QpcX2nxtuaBBZrGrN25OcV5fL/wVR+Ic+sLq48G+DotZ2hTq
ZsnacHaV3bt3XacZ/Cvj3xP4j1Lxf4g1DW9dv7jVdXvJPNnu7qQvJIT7nsOgHYCtEne5D1Mh4gkK
SMPlVjg/3jViKfDHG0NjGCP5VCxY+WjBVU9PX8qQ/NNuQHYOCCAOfr60O5C0P1I+HyJ8aP8AglVe
eBvAU6a14v0iGNbzTLd/3sTfbvPI5HOUBP4V5D+x14p+L/7SH7Rnw4fXdS1HXtB8A3hubo3YWNLC
PYyjdgLkkx46HvXzP8AP2hfFP7OnjmLxR4UmCOqNHcafcszW12CCAsigjdjOQeoxXsXjP/gov468
SeBde8N6LoXh7wU2rosd1qPh63e2uiA2Thw/HBYZ/wBo96x5G9DVNo+1PB3x/wDAWrf8FJ9f8nxL
ZyW9z4Zi8P2dzuPlTXyzKzwK/Qt8p789K+Pv2jPFvxn+Cnx6+JngzSNU1XQ7Dx1rlxdxadbRo/8A
aMU8zxxtGSjNkrhcAgjjNfI9nqd3pdzDewzzW97bzCeG4jkIljkDZDqwOQ2eQc5r650X/gpn45sN
O8OJqfhLwr4i1PRraKCHVtUtnkuv3YGH3bvvZBbjHJq1Bpj5j6H/AGitWsvgR/wTv+H/AMNfFd3F
p/jaeGxZ9ILB5gsdwZJXIH8K55PrxWj/AMFDPFGva38PPhF8ZfhNfLqGmeHpLm5XW7EiRLcyGGND
gjH3g6nI7Ed6/Nv4s/GPxJ8bvHupeLfFmoPealeyP5cYZmitIySRDGpPyoPT8a9O+AP7ZvjL9nzw
lrvhm3tbHxNoN/5ZXS9ZDT29sylidg3cBi2SPUU+VrVAmj7A/wCCePjDxbrfiT4kfHz4paq7eH59
LjtP+EivwkUbCGUFlCqBwACM47V037JnxA0X4u/Br9obwz4avY7vxPrWpa3qFhp7ErLcW80IWGRc
87SzAZr4Z+PX7bfjb47eCNK8JS2On+EPDlo8hl0/w/uhivNwHEozggckD1PqAa8q+FnxY174PfED
SfFXhu7nsb7TpVYokjKs8YYExsARlDtwRUKnJO4+c9v+H+sfGz4m+IvC/wABZbnU7uy8O6jAZ9CM
aILRYZQWLttViEBzyTnivuz9oL48eAtI/bo+Cun3niOzj/4RxdTTVZi/7uylmi2xJI3ZiRj8Rmvk
C8/4Kd+M45NX1LTPA/hTS9d1SGaCTV7a2eO6TcuN4fJJOefwFfIOoa5qOs6rc6lqd1LeajeN5txP
MxZ5XPVi3Uk8HP8AhVez1uDne2h9+/t4eOvid+zd+1T4l8d+GNTuvD+meKrGztba+iVZorxIYV3I
AykZDYxjHX3r0n4e3k3wI/4Ju+Nh8RpDoGueLP7Tls7e7I+0XklzENh2g9W547AV8v8Agn/goV4w
0L4e6L4T1vwtonj+DSfMWG/8QBprgBskAk+nA+g9q8k/aO/aa8Y/tH+LotS8QzxW1jawpFaaXYSN
9ktsLgsqHufXqRxRyvQlS6WPvv4vT3XxS/4Jt+Arj4cSrrt74Wk0+4vzaOC9k9pEWlyO5U44964L
9hPx/wDFD9o/9pfRviF4wupdY0HwpY3tu+qTRLFFZ+bExVOAPbk56etfK37OX7V3if8AZsvtS/sy
NdZ0rUYXguNEvpGFnLkAb2QAjdgYyOxwa7n4l/8ABQnxd4x+GV/4N8O+HNK+H1rqFysl5e+Hi0Uk
qbTlDwPvcAn0471nySe6Dm1PvD9mr40+DPGn7Z/xzGm+IrO6OtNYQ6Tsk4v/ALPAwl8o/wAW0g9P
8K+CfEXjb42eENb8VfAO0W8s7bXtUnzoQgVpJvtEhYbTgkIVAOcjueOa+ePCvi3U/Aus2GtaLeSa
dqunurQXNudskbLjlSPXofXJyOa+y5f+CpOty6/aeIJfhh4Wu/EFtEqJqs28XAKoV3BtpwfmPHbP
WqtK+w9D6W/ah8deHPhp4G/Zz8Da7qltZa5pesaLeX9gz/Na20MQR5Hxwqhsj3wcdK4n/gov8Q/H
Hw0+Knw3+Lvw+uHXRf7HNjHrkUay27NNIzhPmGPmXBzwelfnB8Q/iP4k+KvjHVPFnijU5dT1bUXJ
kaZyyxpk7YkB6IoPAA/ma+gvg5+3n4i+GnwxfwL4g8O6d8QNFguEm0+PW5Gb7CqqAEXIPyrjKgY6
kUcrjpa6KUj7A/YM8Sa1onwt+MPxd+Kd9HY2Hia5iu01e+ZUjucRyKSoAHG5lC+tZvwzlT4rf8Ew
vFPhXwW6634pW3nEmk2pzPG0l4XXI91BP4Gviv8AaS/bK8R/tE6FoegGwt/CXhLTIBGmgaTKTbSO
DlXcEDO3AAHTvXLfs3ftKeIf2a/HMPiDQpZLi2fP23TWciK8TDYVx3wWyD1BB9afI1qiHO7sfRf7
OHxa+Ln7S3x6+E3h/WjPquheCdUgurhILIRLZJH8mZcAHPy4wSOQeK+w7b4reE73/gpBNYr4gsTJ
D4QXSVdZV2teG53GANnBk77f618Sa9/wUqvLbwx4q0/wV8OdG8B6vriGKXWNMkIn3Fsl/lUc8sRz
1NfJGleKL/Tddg1uG+uYdViufti3sbkyrNv3793UndUcj1Zanc+zfjx8ePiz+zv8d/jH4X06FtIs
vGmry3Vv59n5j3ELsYleJj1BBx3wa96+L1za/CT/AIJteCfCXi26j0fxDcrYgabdMFuSRd+Y+E6/
KpBJxjArwKP/AIKWx6jB4eu/Fnwu0nxb4i0e2jthrGoTbZpGRgwkHyEDLhWxnt7CvnL9ob4++Iv2
i/H154m8SXBdPNP9n2G/dFZQnpGgx7cnvmlyNy5uoc2p+j3/AAUM8b6/4W0j4NfFXwJs1Ow0Ca8u
v7ShHnW0ReOJEZyDjkgrye5rL/4J7+PvGnxP+JPxO+MnxDSOy0y40uC2/tcx/Z7PEEvzKrFiMKF5
Of518jfAb9t67+Efw21fwH4p8Nw+PvCV2UaDTdSuNsUAVi5CjY3ViCOnIFSfHv8Abhv/AIm/CnTv
h34R8MW3w68KmSRtQsdPlDLeBiCAfkG0ZBJFWou1hcx9qfAPW7b4gfspfHzT/C1xBqevXuoa9LBY
WsoeSXzVKRMBknDEYBFfJHwx/aC+Kfxi1n4VfBq6sFn0/QNYs3NvaWrLdQpbuOZTngKMZJAP614h
+zn8fPEf7N/j6z8RaNcOkAkRb6xDDy7qEEExnIO0H1AzX05J/wAFGfC3h+68W674S+E9j4a8a6tb
TL/bcV3vPmyAZkwY1zgjP1Hvmlybxt/THzWeh9keP/HGhxf8FAPh9p02t2kL2Xhy/iljaYBUnkb5
EY5+VmB4B5OK+UP2j/2l/iJ+y3+1H8X49CsLe1g8WSwNa3GoQGRZljh2bosHnDOfXoBXwxf+PNZ1
TxjdeLb7UJr3xBc3P2uS/kYh2lBJDccDnoo4r7Dj/wCChfhDxv4e8M/8LO+F48c+ItBiIj1czKu5
94OSpHH3Vz9OlHs7N+ZSmj3gtceAf+CVUlr4jQ6ZrOowykWt8PLmcyXxbdsbBJ2Yfp0rY/bs8c6p
8P8A4a/Avx14WgTU4NAv1vDMimWBVFsqLvZcgDPy5r8+/wBqz9qPXv2m/HTapfSSab4bs/3emaGG
Bjtk2jJbHViR198V337Ov7alp8M/hjrnw98eaPL408F3Y2w6aSFkg3Hc4GRkqTgjmpUeVrS5Clrc
+r/2EPjL4j/aK/aD8bfEbxJZRWkMOhJp4uraJktU2zI23cSfm24JBxXXfBnUoPEHwE/aak0q6i1G
5utb1944bU7mYGIhcAcnPb1r45+L/wC3NpEvwXPw3+D3hi68BaNePK+qSNsLzRuDmNSCSuccnOQB
ivIf2aP2ktb/AGdviPY65Z3LXGluRDqdm53CeAsCwGf4+Dg+p5oUGve87mnMnqew+Ef2rPFvxa0H
4XfBceHoEt9E1exjYwqzXJEUwUBhj5cAnPH5dK++viDqVs37eXwxtnuY1kTw3qZYNIMbmYhV69Tn
/wCtXx7B+3h8GfBPiPxX4y8EfDXULTx1qsc3lX1wyNEk7rncRuOBuAJ4FfHOtfGrxfrXxAvfiBda
vK3iSeZbpp4WKgShgQoUEDblRx/OodFvWI3JNn2t8av2wdd/Zl/ad+ONjpnhyPUV8R3MKxy3btG+
Et1UsnGGALnAHtzXY+JfO0j/AIJGyjUFktrq7i4j4DSF9SJHB9Rg15pr37avwX+LuleEdT+KvgrU
9a8X6PBl7zSkWNDJvDEDMi5yUUkHI5I7mvCP2s/2udS/aV8QrBYxXGgeBtLzHpekKRGpUhfmlVfl
LZU8ZwMmtox95MwufP7xgKqbHK7gV8xuScVA8iNvc/ISw25PT/OKdKEkdVBG2Mbi59ev9KjnVlxI
ACm4fdOcH+VarzIasSlYwhDfLuOcR859qhlc53fNu7ovb6/hSsxR2GAWcYUkZxz60kqu9w3lgFic
iRDjaMc80PXYAJE/lEHyxnjnAIqRmVJV2r5JzlWzw1VjEwbKAtjI3L2HrU+W8n96dwXuOue1OwDX
gLsyRscN7dfoP89aLgCNWZJFIf8AhI3EH+lOknBC7VYOpO0g96I1ldWRBzINzYPOOaaQ9BAC4I29
cdOv51QkGJ1X5nQ5AUN0NaETxqp3qUKjIVe4qq67LlAxwCDgkZ7UmSWFKwHcytLsBAGCePTn8a6r
4lRuE8NsXBD6ahK5ztO5uMduMVz1oVgyD+8AyuSuV/PPtW38SYm+26QqsQn9mxtk5Lcs2Mg9M4NA
ktTj3iLAL5QbPIOacxVOJCMj+8v8qlDFXlJHDg4JHBP0pkMcj7QZBgL1AB/Cs7Nmh1WiWoS+ZWeR
TNG4Gw8Z2knt0x9K6Lw1qUVtKBLKIx5igyYDFMDg9eevSuZ0XeuqWzA/M3y4foMjqfbpW5o9vHc3
C2vlxxTv9zC8jnkjsegHrXWlc5JJM9I0q8dCY4opI43yGb0A6fxHt9a0dbuReeF9eaKFsJp8rSBm
HK7eC2ep59MdKzdKjt/szkZ84sVSPcNvPOeRxx+pNautRrH4O10p8ySWMwZmAGFKn9R3x1x7U2kj
J72Pm7T4ymuRrLKTG24MQccY/wAKqXSRJqEixq7qHOA31qzoxYarEwYfxDgcY6VUv8vfySbSFLYO
M4J7Vzvc7IvQ/U7/AIJH6Dpd9pHjfW7rTVfW7F4I4LxwSUR0cnYe2cY/A+teF/EDxZrvifxbrN74
gvLjU52uHCLOwOxQx2jIHsK96/4JBanatYePbB5xHezrZusEpCyMFEwO0fxYGOR618/+L9GvfD3j
7WbC6jubEm6l3tLEVzukPJ77ePy61pBJMzrLll8j1P8AZx/aH1D4K+IzBIk974T1BjHeaazZUbyq
+avXDBR2xkV9R2X7Ovwo0fVR8W4tUjuPBDxC8ispUItwGJA24GSNxHykHp718dfBf4Nat8bPGaaN
ph2xRgPd3rnEMcQYbjlR94jOBX3LHr/wnm0+L9nn+17obIfsUcjNtV3QCTAlDAZ3Y4H0qZb2XUla
K7Pjz9on4/ah8bvETLmew0K1lENpYKSY06r5vQZJXaMY4ycc19j/ABL+HOq/Fb9m3wLoenCRrueL
TTM0bhTGht9jvz1wHPTHSvhv4x/B7XPhF4rutE1ESAK4ltr+PayTIS23GB1AXkY4r7Z+Mfi/V/Bf
7KngzXNDu5LW+hi0plkVipK+XkhuR8p24I9KykkmrGkX3PnL4jfETXPhd4U8ZfBW/c6/p1hdQRWN
/cSBWhCgSbAvXqV5J6HjgYr6l/Y+1e28P/st+H9Q1CdbWytGvHkuJSQAgnf5jn1NfEnjKy8T/Gm3
8YfFCS1aHS4J4jMdgC7pCEBQcFiCmOc496+pvg15Vx/wT+1ED5wLDUgVwUOTO55z0PI5/GokjRNW
PafCfwY0HRPijd/EHQzHGur6eY5IYxlJC7q/mhsnqAK/O39rCzlT47eNNsnlk6mzAoMMcjPGPwzX
0r+wp8ZNa1vULrwBqkiXdvZ2st9a3DMzSRKrRR+Xk8bPmOOnT8/nL9q63kj+P3jZo5VdW1AsSpAY
gKuRkdcHjB6Yp09zGSaaZtfsUeMdW8L/AB50DSLC6mi0zX3khv7fAeKbEbuuc9GBHUc84q7+258P
9J8FfFotoVsNPguLeO7lig+bdKwcE8ngfKOM4rlf2R4ZJP2kfArxBnKXchMbMRtHlOpYgdOCfxIr
0j/goHIT8Z7WFJdsjaVEGC84w0hJ68Ht07/SuiN7jklZaHyreCRL1TgMrjOF4xx19O4/CqxkkEhw
pWQsSpHfr278/h0qy3lrL5iE+SgBZTGcPkjuPrjPFRTKJQpLrl/lyCPkyOfyrqhdanIlqJbI0IkE
+11B2FG4K8Djn0z+lI25Wlgj3DhQHZN5I2k9iMfX2qzEBE8qbG4yRvGB6HHfPFCzNZxysZCxdSCq
npntkdB9aHJs2smtSBV23HIO/jBcbe2SQOvv+dWJpQsZ3GMbsjYvXGeCMH3qrO8dyRj/AFTNgE9l
6g1KWChLYIJ9r8qvAXnIIween5Vk3rqZMfJb7wI/M355XLdMc4pWHnABplYKjISx5HfGBTESJJGc
7TuwQHPXGcgdz1zUjl/nNv8AJEv3+eCO2f8A6/vVqSBRGTY3Kp+WQgn5ecdufXpW94R0KLX/ABjo
em+b/ol3ew20rqdpYNIitt9Dg8dhWO4iltW3K7Ow2PsA+71GB9OtdF4El3+P/DCJG7ONVtGJBOSo
nRmJH/Afw5pVJaG1Nan3H+1Nqmo/A7wB4R+HXw+tprW21SKWFWs42ku28socgrzuJfJOOx9ao/sl
6rq3xM8P+LvhT4/sp7yy0y1SSOW+B+0oJWfjcwyCOCD1BPp07f8Aau+IKfCb4h/DLxfLYyapHp73
6vaI+zKvEqFtxBAxuHWs79mr4v2nxn+PXjfXbSx/s2J9ItUMLNvY7XIzuAA65/OuHmudKS3M39mL
4H+HvA994/8AEsSS6prGgale6bYtfYZEjQK6tg87yR97Pc+tfP8Ab/tJfEew+JLfEIm6lspZmB02
dplsSpTYY1+bbgEcc/eFfYnwgCz6Z8Zo4s8+ItQGFHQmIdvWvlXUf2m9E/4Z0f4ZyeHpTrEKiP7S
CohXbN5u/s2QpA6YzxmmtSWtT2L9pn9nXwx4k8d+CNatkk0ubxLqUGl362eNjIwLb1XgBvVsVn/t
Y+N9R+DGg+Gvhp4Fjl0a1ltEn+2ac7LdnyzswNo5zgEtXsHxjYxT/BpjvX/iobNSVHTMZHP51xn7
QPxU0r4OftEeEvEOtWc97YnRJ7ZltlVnVml+U4YjNNNsXkYPwH1Bv2qfhR4h8FeN7d3v/D5hhh1r
O663PuIbLZw48sA8855wQKq/sw/C7Sfh58OvEvxNu0XXNb037alvHNhUiFuzHcMZ2s2OT2GK7L9l
Px3pnxJ+Inxb1/SLKWwsbq5sWjjmiVGP7uQFjtPcgn1/GnfDO2Ef7L3xDiOQvna0OnPG/t+FPms7
Aj558B/tg+MrX4vx6jq9xcap4f1a4Fr/AGS0hEESyyYBTcDjbx+HXrXpHxu/ZQ8P3vxr8KWWm3B0
nTfFFzObm1towog8qIM3l4IwG449Sa5PV/2hPB3in4Q+CfBllYXdn4ktLiw86aW0RV3RupkYODnk
854zzX0/8WQE+NXwdfOD9sv159DbilzWKUVfc8I/ag+Lmo/A8aZ8Ofh+n/CKxWVvHem6tdoM4cOu
0A9OVyT3OPx2fC1jZ/tm/Ai4vNdtU03xboZ+xjWY1DPKUQOTjjCuG5U5wTkVr/FP4neDfhd+1DPe
+MbUz2lz4dhijk+yfaBG3nOc7cHsvX3rd/ZZ1/SfE2n/ABU1DQ7R7PSbjXJZoYpI/LwGhU/dHA+l
Pms9ESop7nm/7OPw60T4RfBe9+MOo2reINUiiluLSA/K1uqO0RCsSeW6k44Fc98Iv2tfEHin4oy6
D4uT+2fDniK5GmR2TeWq2pmkCrgqgLKFYg5PvXpug7G/YR1MBdipY3bYxuztuGP9K4fU/iX8LvFN
t8I7DwnYwW3iC31zTFnVbA27IuF3hn2gN8wXv6VV29Q5VczfG/7IVpb/ALQGmeGNL1BbDQtcimv4
goZntkjyXi5JyCOAeOCa6X9oD483vwI1Kw+Hvw8so9Ft9IjVp7ieJZY3Ei71Rd2SDkkk8/TpXtPi
+JR+1D8PJed7aTqK9OoCjH8zXmvirxP8N/D/AO0t47i+IUVpILiy0/7F9qtXnXKx5YABSAfuHn+7
RcpRRieO/CGlftW/BRviPpdudC8VaXA0V08pG2fyYyzphSRjLkqeopnwR+Hmh/AL4US/F7xFbnVt
RmgWWwt7dP8Aj2jlIUIAeNxJ5bsPxrsv2e5tLu/2ePiINHBGlHUNWFuCpH7sx5Xg8j5SKzPHpgb9
g/S2kDNB9isS6nOSPPTI/pSu2DSRy3wh/aOX43a7c+AviJp9reWPiEG3sXsYVjKuAzfMQc8gZDjo
RXFN+yBfS/HP/hBjqMY0oW51M3IkO82fnbdvrvx8vX3r0i4v/hKPjB8IV+H72B1M6lJHOunEk7BB
jD54yMj9a9fkZV/a5iB3bm8InHocXR/xquZoSSsjxT4s/tEaZ8CtYsvBngnwzYT2Ohr9lvDqFruZ
mGPuMGBJwCdxzkkcVQ+P/wALtH+KXgSP4xeC4ns4ZYi+pWM6+VuVGEZYKAQGUqc+uM1ujQ/hTq3x
l+LY+IMljHdx38H2R764aHCtDltmCoJz25rc8BG3l/Yb1lYnMkCWmo7HBzuAnkI6/hT2FypnK/DP
4caF+zh8LJfiJ43hi1bWNTTy7C0C+dCBIm+FCCMBjtOWJwMnmpPh1458KftV6TqHgvxHoOneF9fu
GFxp8mlQjcQi7mJPUEAEHswPFdr8fE02/wD2dPh6NUl8nS5L3SPtEpONsRQbjntwetY/h3wf4C8H
/tOfD9PA1xbzxXNhqBuhbXP2gLiI7SWycZ+YfhRcXLqeJeAv2XfEuu/Fm88H6o6WsGkiKTU7qCXk
QuMp5fUbmHOM8YNem+MfjP8ADT4b+N7XwvpngXRdW0azVLPUtYmgCTI6kpJj5PnICnJyMk4r2vwU
BF+0x8Sl+cvJpmmuPTAVh/hXgmmfCH4f+LtN+LGva9qL2uu2OsamIgt6sRRVJkVtnf5s8n0pc1wU
bNI5r9pD4C2WkWtv4/8ABrfa/B+rgTskQ2i2eRl8vaOCVYsQP7pGPSuv8OfC3wt+zt8MbjxT8RrG
31jXr2LyrLRLsB1R0JIVWAb5myCT0AxXYeLiLj9h7QmBJH2LT23Y9Jo+a3P2pPDmmeMtd+FGk6tM
YNPvNWkikkXAIBiGBntk4GfehMrk10PL9H8I+AP2mvh9dxeD9BsfBXjbSzJOmnQ/N50QXaoY4X5G
LAZ7EV5j8Hf2b9d+IXjy60vUUn0vS9KuWj1adsFoJFwREOcZbnkZAFfRHwj+Guj/AAn/AGnb/RdD
u57mxk8Mm4xO4dldrhARkAA8ID0713HwhQx+MfjIgYZOt7xt7EwCq52tEJxT1PEb3xD+z5F8To/D
LeDLWXTRKI/+EkW7P2aOTZz/ABZAyNpIwM815X8aPgDq/wAMPF32aztJtX0XU3X+y7u2Q4k3t8sP
Xlx+oINbul/s+6HqH7Nl58RV1WcaxDDOfs7MrQfJKYyPXOFJyD36V9JfF/8A5Fj4NncVz4h0sZ7/
AOrPc0uZ9wUFueMwfB3wH8FPhjb6t8T7KXWvEWptG0Gk28wjubdWxlBhhkqc7mJxwBVfxt8FPC3x
L+GVj42+FVnLHJZROt/obyGW5+9wCu4kSDBOM8rjGa9R/aD+F0XxY+PPgrRLq+k0+0k0m7laWJAz
5RhwM/7wqb9l7wJB8MviH8U/DEF4byKzfT2RymwkNHK3Pqfmx+AoUn3K5Ez5u+A3wEu/itrkeoag
Tp3hLTX8y/u5sxLMqkh4VOMZGOTkY55r0vRPh78C/HnijWPB2gNq+na0gmhtb+5ut1rNMpK4jbcQ
3OCBxkD1ru/B6iP9kv4hRwt9xtZQEcj70nI/Dn9a8hl/Z0j8FeGvh74/Gttdi+1DTZGszEVEAlZW
G1gTnHQ5HNPmb1uJRSdjyHxn8LvEPgfx3L4UnsZ7vU0kEdv9micpdA8ho8jJyP617lr3wH+GvwZ8
JaG/xN1LVX13UGcPHoxWSOMcHG0qSAOPm65JwfT3/wCIUSD9pX4VSMMsbXU0z/2xBGf8968i+K/w
Ju/jj+0h4wt4NWg0qPT9NsXZ5YTLuLKdoA3DHQ/pS52+ocqPL/j1+z5b+ANKsPE/hOebU/BmoRQ7
bl5N8kUjKzbn+UfKQF57EkccV4V5TNJmJthGfkB68dfrivuP4EaVLp37MPxE0W8cSvp11q9oWBLL
lIuq55xnJAr4cSWO1tYmlDMAigZwN2RW9NuRzTikxqSyRHDbSzDv/U+tauieG9R8Xa5Y6NpNo19f
3sgijhRc8k9SewHXJ6YrJ3CdB5jmIuCBgdOM/ia+qP2BreP/AIT3xL5kYeWLTkYSMvIJl5+hxTqN
odOKZDqP7L/w88I6noWieLvH97pvibUIo5PskcIaMSMduA4UgDfwCxrxv41/B/VPgz4vOl6i0smn
Ts8lhqDOu2eMEDJA+63IyD616JrvwL8T/FDWfih4ot7u1e10bVr+BDdSsZpBGxfagwQFCkYyRz6Y
zXq3jxY9f/Yo8LahqgF5epDp5+0zr5kgPnIrfMeeRkH1rHmkjXlW54j8Kf2cofGfgvUfGnibWH8K
eGLZN1vfTxFlly+GYruB2ghQPXPAqx8R/wBmi30r4cx+NvA/iAeMtFidzeSJAYvKRAcuBu6AjBGO
h/GvfP2v/C2q+LW+G3g7w/KtsupXs6NaGUxQSBI0Ybwo5C8kDB57d6xP2WfA+vfDT4s+LvA2vzCe
yGkx3gtFl8y3fe4BbaeAcZB45rNtotRufJXw+8Aax8RPF9joOhwSXV3IxklA5WGMNh3bngAn19q9
6X9jvSLvxJeeHLH4o6TdeIYQ0raSbceaMKSQVEme457c9a9f+Bukad4f8NfGe80rT7exvrTW9Uhh
ngjCSIipuVQ3UKG5x618tWfw5+JHhfQNJ+LpvTFb3csUv9opdlrxjJIV3OTzhu454bnpT5mnuSoX
1R5n4l8O6n4V1W90rWLOSx1OyIWa3mGCmQG/Hgqc+9ez+Fv2S7/UfBFr4i8T+J9N8DR3czLBFrA2
llAyjA7gDuAyPb1r6Z+OfgrQtY+OfwimvdKtLh726uortpIVPnqkKsivx8wB6ZryT9qrwH46+Mnx
0PhPw4XutN0fSoNQjsGmSKCNnMiFuSPmPQD2p8zYKCPEPjR8BNY+C9zYz3VyNY0PUo0kh1i1H+jF
zk+WCe+F3D1BrE+C3wa8TfG3xbf6Vplv9l0+BGM2rzK3kQttyqsR1Legr60+B2lXnir9mDx34f8A
G0LalJoF1e2kNveNva2MEClArZJG1skHORnANV783Pw5/YZ0u/8AC0h0bV9RhtDcXVsoMs7TSKrE
sQeSrYz27VPMx+zSPFfFP7HXiDSPCGqa7oniDRPGMenoGks9IlaSU8gHpkcDJx3Ar5+s7e51W4ht
7WGa7upWWOKNEMru5OAAoBJ6gY9a+pv2cvB3j34F/HjwjoerWlzotj4oSb7TG0kckV0IoWYZ2lsM
rEH+E4NezeBPhJ4S0/8Aa/8AGc0Gi2sf2DTrPULWNVwkFxKzF5FXpknn2zxVcz2uFj53tf2GPGl5
Y6Y1z4g0PTtSvYY5I9Pv7grOm7+HbtyT24HUGvBvH3gbV/h74q1Lw9rdq1vf6dK0ZY/dlUH5XU45
VgAQfr6V6/8AEi0+KfxB+I/i3x9BbX+pWfhfVJ4INThRFWwS2mZkCLkZ2jDHA5yc17t8ffD1j8U/
2dfhv421+JJPE08mmRSX8X7stHO6+YhA4wc9McHpTUrPUpK58ufCr9mrxr8Y9J1G+0ZbOz02zdc3
mpSmCKXcDjY20hsY57e9U/jH+zZ4u+CmladqOrx291pd/I0aXemyiWOJh0DMMYyTxn3r6p/bP03W
9F0HwJ8I/h7YNFpOrx3AfSrJMyTLC0TqoY5IHLMT3xzVf9jfStb8QaL4/wDg/wDEPT3bQ9MhhKaT
d4EkCzM7sAy8gE4YcnB9OlTzWZVkfD3hfwdqXjnxHpehaLbSXmp306wJCjAAZONzE9FGRkmvZfEP
7CPxU0LRNRvntdPu47S3ed47a+V5WCrkgLjlu2K+gfgF4R0r4Q/AT4n+PdBt1PiiwfVba3vrkmQi
KAnyl2nIwMDPr3NfPXwn8T/FTwF8YPDnji6g1SC18WanHb3F/f27fZbxLiVS4UH5RnllxjGBjiq5
mT1tY+dbmF4lMciSCWItlHXDeuCD3r2PwX+xl8VPiN4U0/xBo+hQJp94GMP226WCVkBI+43IBI49
cfSvrL4k/s2eBdX/AGy/CqXGlsbLXrG71e/tVkYRS3UJXa+M8Z6nHXFeKftkeI/iV8Svjprvhrw7
b6sdF8FBDDFoUTqU82GN/MkKdeQQATjAPFJSd9CbWaVj5l+Jvw31v4S+K7rwv4mtPsWqwqr7VYuh
VhlWV+c+/v8AhVj4X/BXxf8AGbXbvTPCemNqF5bWrTyhnCxgBlGC7EAHkd819seOraD9pn9hQfEP
xbBH/wAJZokFzLZ6jaqYmLRy+Vlh3DKvIPfpipvjJE/7Lf7IvhjR/hvbC31DxfIlrd3kyma6d7i2
aR2Ugg7yQFHpkcZpuVwULbnxv8UP2Y/iT8HPDh17xh4f+waV56232mK4jlXcwJGdrEjpjJ9B615G
bWa7u4YrZJLia4cRw28Yy7sWGAF75OBX3r+w14n8Va74j8RfBX4h2l1qWi6lpU+ovFrgka7hP7uL
AMhJCsGLDI4IOK6r9lr9mXwb4F+L3xY1yOKbU5PBOoPY6Pb3rBkiQweZuYY+Zh0DemahyexXKz5Q
m/YY+NzK0i+BZxEEJy08O4L16b+cjPHWvn17fyJZI7mMwupZGin4YEcYwQCCCCMH0NfSMX7Wfxcg
+Kq/F+aS/XTJbjaNOxONHMe3y/JxnGeOoOd1fSv7WH7Kvg7xl8c/hdqsQn0keOb9rPVoLLaitthD
+YnHDnoT3oU7OzYuRM+Gfhx+zV8Tfit4ck1zwj4Uu9d0gTNb+dbvGiiRSNwyzDPUcge1cz8S/hN4
r+EWvJoni3SrnRtSmiFwkE3JeMkgMCCQeQQea+2/26fij4q+GXiXQfg38MYrnwzo+ladDqXmaC0k
d3cEh4wpMZGVGATkckjNdB4Vtrb9tn9ijW9S8c2kUfi3wR9ptbXW4R/pEj29qHDSMckb95DqD1Ga
rmdxqNj88fBngLXfiNrlv4e8OaTcatqd0WkgtbYbmYKNzH1xgH9K6bx3+zZ8T/hx4dvNf8SeCtU0
nSIGRZr24iAVAxwpLZ6EkD2r7f8Ahxo2nfsifsQj4t+G9Pg1Txv4ihtj9s1BQfsvnN5YWMgZCr1x
nk4zXE/sPfH/AMdeL/iyfhX8TJrvxtofi6GaRl8RFma2MUbyZjVgdyMV5HYhSOlTzXKSs9j4IQJG
hZgojYAhy2c/ie1eoWH7K3xd1extr+2+H2vXlpdwrNDKLVtrRsoYN6nrngV9ofBb9hzwfbftj+MN
FvZW1Lwx4LS01Kx025VXE73CMwSbI+ZUOcDvgZrx74o/t6fFBPj7NruhTXOjeD/Dd8bIeHYJStlc
xwSMjB/lxlgOcfdGMUubWxe+iR8e6rp0tnqFzBdI9re2sjW01u4KtG6kqwKnkEEEVv8Agn4QeMvi
TYX994Y8N6r4ggsHRLk6davKIy2SMADJOFJ4z+tfe/7YH7NXhz4oQ/Cn4paFDB4Xv/H17plhqVnB
GDFuuw0pnJGN0gLYJI5xV39r34gXf7Enw68I/CP4TWjaNealaLfXXiS32rdTGNwjkqFIZn6k54HA
q+Yz5dT86/HXw78T/Dm9trXxFoGpaFe3CNJCuoWrwNIo4yu4DIrH0jQNU8TaraaXp1rcXuqXcnlW
1rbxmR5n6lVUZJPf8K/Tj4NKn/BQz9mXxB4Z8exxR+NfBrxQW/isIslwZGBkDFcDblU2MM4PWuc/
ZQ+GXhr9m39mLVv2ktZ09fFniS2tpGsbMjyxZ7J3tyUYlvmfOSSOAMCp5nsVyJK7PgzXvgl8QvCm
myatq3gjxBYafbIGlurjTJlijBOMliuAPr7Vw8kZwykBwDzyM19/fsoftseM/in8Zm8BfFMp418K
+OpzpqafPGiw2RfcwxhPmTbhcH1BzXz3+2/8ALL9nD48ah4Z0e9Nzo15aR6pZxMmGto5ZZQIfcL5
eAfTFVG/UyaR8/NCMltpC54A7cdaSMnzZSMEjI3N2qWUNsIHODyR1q1o9lPq2p2mnqEEt3PHADIc
qGZgozj3OaGEYom0XwnrniXzRoukX+rPEgkkFjbPMYwTtBOwHGSf5+lL4g8G694VKf2xouoaT5rM
Ivt1s8QkwATt3AZwCOlfpp8Y7my/4JpfAPRfCfgW3F18RvFwbzvFLQqwDwOhZjE+4YCzbVUeuTzU
fwT1+3/4KS/AzxD4A+IFps8f+F4xc2nipYlVUeZ5BGwjXbggIVZehArO7Rrbsfl9DZy39zHDEpkm
kZVSNULM5JAAUDkkkite++HnijSraW6uPD2q2tvboZJZpbGRURB1YkrgYr75/Yx/Zh8NfDzwp40+
O/jtE8Qx+Cp79LHS7ZcBZrJiWnBJ5J2gKp4HXmmfBT/got4i+JXxz/4Rbx7pNrq3gHxldHRrXSoL
WNPsQuJVji8yTGXAVirfWlzNmmnRH51zRtuVWYEAcHNaln4W1vUbSO5s9Kv57aVm23MNq7oxGQQG
Ax2r7i8df8E5beH9s7S/h9pmrw2Xg/XbabXYVVWaS0tY5AHtxk8nsGJ711P7Tn7aN7+zL42svhL8
GtFsND0HwUPsl9/aNuswuXKo4VCclQAxyxPJahSaMmrn5yXlhc6ddS29zC8N1GdkkTgiRD6EdQaS
Cwur+cRWcU17Ngt5cCb2xnGcemcV+if7ZPwg8NftF/s72X7TvgqyHh29niEmr2FwQGuwsq26kFcg
MrA9PvA/jXR3Ph3R/wDgml+zRbeIW01PEXxY8Yo9lbagsZkgtiV81VKu3CpkZIGWI9Biq5ieW25+
Y91pVzp4jkntp7ZJM/NPCYw2PTI5qLIikH7sA8jOccV+nvwS8caX/wAFH/hh4k+GHxBsLa08eaLF
Jqmlazp1uIYI1ZRHG5Vech3IK4wR3rz39j39iPTV8TeOPG3xJnh1Hw38ONTvLaXS7dSftdzahZDJ
1A2ADhT97ODxVKQ42ufB91o+owb5JLO5hCfxSQsoPHPUVmFMujM3B6c59/6V+kPg7/gpzH4r+PU2
m634U05/hPrE7WEUEOnr9tiSTCRPIxOCSTyB2Jx2ri/2gf8Agnde6D+1P4Y8F+DtQtbXw940e4n0
03DEmwSCNXmVzglj94rjrnFTzF+6z4aSya4haREZwGKEqMgMO315qPaYZXQxlJEAyjcEA8g/rX6Y
/tGftGaF+xAuk/Bb4S6BYXeq6Kv2nWL/AMQWoufNMyK42sTks245PQDAwazv2mPhV4W/a2/Zvg/a
M8Aaemgazplq8Wu2MqmFJ0gG2RY0XI3Kx4JPI60c2oWR+cJiLSIoB3uDhVGScfTnuKBaSLGrlCoY
8b1I3AjqM1+lHwl+FXhX9hH9nUfGvxxp8fiXx34ltlttEtfLE9vaieDzoUZGwASUy7AngYFaHwA+
LHh3/goL4L8S/CPx/wCHNP0bxaqPqekahoVmLeKNI1QKWOSQyu/I6FWpc2o+XsfmJLAqKJOV2n+P
gcU6O2kknaFR85XcfT6V9v8A7M//AATzv/GHxv8AGGkePr63/wCET8CT/ZtTFjJl72UoXQJkZVdo
BOcHqBXoT/8ABS3wlL8c/wCzv+EA0NvhAkhtjcPpinUGQR48wDdtx5g6Yzt70nJonl1PzYuLZoGK
scEHDeg9qlKoQcOH7cDv1xX29+1Z/wAE+rzwd8cPCdj4Antv+EZ8f3ot9NjvJSGs5igeQOAv3ADk
Y7cV6l8ZvHXgv/gnb4I0D4Y+EPCekeLPiBOIdV1nUPENl58OHQqWVuDy0fyqOgHvTUmxOB+ZUkas
WVJASGBOT09KUB1jZWOM5+XHXAr9J/i/8KvCn7c37Nq/GrwDpVr4b8Y+GLV7XWdNSP7PaTCGEzTL
GF6nMgKseo4PrWT+yf8AAfwn+zv8Ej+0l8Vol1QTQj/hH9IRROgWZdkZlQjHmM2R6KDz7HPcFE/P
CO33WzN/Ci4O498U/wCz/dkGArrjPqfpX6W/s+/HHwJ+3B/b/wAHfH/gbRPB+ra3Gs2jXnhqyEbk
xbpXDOQdjKI19iCRXjHwi/4JzeJ/Fv7S2ufDvxFfwR6J4XeCbWb+zmw81vMrNCsWV4dgOeMDB5o5
jRHxxND5cqGR8bhu255xUQGQ7AksPuheR04zX6ReLf25vhb8MPjNYeEfDXwy8PXvw30Iw6bf6pc6
VuviEYxytH03bdvfrgmuB/bX/YuTQPFXhbx18MkivfB3xAvreKzs5JFjNvd3ZZ0VVwMREMPdeRVK
SFZXPh1I2+8xCM2GIB/AUkmIycAEgkNk88fe47YyPzr9MPGOn+B/+CbfwX07Rr3w/pnxB+MHiRIr
yeLVIBLaxIpKvsOPlRSDgdSTmmah8NfCH/BQ79nY+J/BGh6d4N+LHhJCl5ptpbi2spWkYttLbTuB
SMlW7E4PWlzCaXQ/ND5Yz8rhC/AyM570svlxxZVlb+LcHyPavvX9i79lXw7oHg66/aF+LexfBGhb
riw0+NBN58kcjRNJNGFOVDABV9Rk12fwS/ac+GH7T/jLXPhX4t+GGi+ENL8UiXT9F1HRrELP5rMQ
nmED5H24IPQEdaXMhcp+bESEgszAH+VDOsQw+2RV+6yng9eRX13N/wAE7fGLftVv8JUu7f8As/yx
q41fzFDLpXnbN+Mf63kjbjrzXs/xk/aQ+Ff7LXjHQvhN4K+HGieMtM8NKNP8Qaprdnm48wPhwrbR
5j8uSemSOcUc4WsfnE8QK78E84+XkD/JqKTleDlDwSB/n2r71/bh/Zg8P+IvBth+0J8KVRvBmvrH
Nf2RRYDAzskUbRR4BwWB3A9+ldhonwg8Gf8ABPX4Ev4y+Ieh2fjH4oeJ4Wh0zRrqET2kLIQ4UNj5
PldSxz2wM4o510Gon5slVO1uGA4wp746UxoAp8xjs7bM56mv008KaH8PP+Ci/wAGdb0LS/C2l+Af
i34f86+totKtfs9vOv3Ii8m3LIxIUjqOteQfsf8A7DZ8beLPEfiT4jKNP8CeA7y4h1SJJA73V1ak
O8eAMmIKpz/eHFLnfYOU+KBNtkZRLkjJVQec96WdUz5nzYK7tw7+vNfpd4U/bi+DPjn433nhLUvh
loGmfD29lk0+y1yLTibpiw2RSFdnyBs/UAivl/8AbZ/ZPuv2WviSunxyfafCWriSfRLrzQ0hRAnm
JIOxDOOfQ007vUnQ+aZlWUhsnOeMnkj0qXBjQsAx5+6TwKMjzgCM4OVweKfIHdS8hwFBJC8fnVko
S3iTkHI2gMVPXr709xHiby5VCqSWJPIBxjj8a/QP9nD9nbwX+zb8Dp/2g/jXYQ6ut/aCPQPD0kSX
UEyzRhoWdMEb2II5OACfWuv+B3jr4Vft5+Hte+GWteAtI+G3jC4xdaPcaFaAM6Rr5hbftwMFRlTw
QcVHNY0SPzOlXI3Nkdtx6ZpmVTYWk3EEEFec9f0r66+CP/BPvxl8R/j5q/gDxOU0vS/C0kY8QX1t
MhbZKjPGIh3LBRzjgHpXt/iX9sL4FeD/AI36b4H0v4U+HdY+HOm+Vp174juNNLToVBWR1jKZdVI6
9etPmVyrI/NtyUh3q2VZjgrzjio1kjAGSEAPPOBX2x+2N+xJN4K8c+H/ABF8OES/8FeObqGLSImZ
UEV5dFnjhC8EIVZSCcYGfSvU/EPhb4cf8E6/g7p9j4o8Nab8QPiv4nMN9NpuoW4kgt41YK6xuFIV
EJOM8k81XOiWj81zFncEfzMMVyh6H69KZsjjfeQWkJwB369RX6V+JPg74E/b0/Z9HjD4Y6LpvhT4
meGkK6jollCILaZnYna77Rv+SNirA9cg155+xX+yLob+Fpvjp8XFS1+H+ghru3tDtnFy8cjRv5qA
E7QyjgdePpS50CifDBgeFll2sO/I6f4dqFUyEkZYKeGNfpb8Jfj58Ev2mvHfiL4aaz8NtB8Gaf4g
STTtB1jT7QG7eVmKRnhBsbG1h6HANeA67/wT58bWv7TJ+Etg8Mkdyn9owak8o400TbTK2f48fw+t
T7RM0UUtz5QuImtym8FWkHBHAPHP4U0xNnIIOSCQfXGORX6U/GH4ufAz9kvXNB+F2h/DXRviFe6E
n2XxDq+q2yrNG5ILDcYzvcgseOBwM1xn7bH7LPhfXfA9h8e/g4kR8C6ookv7KBVgitjuWJGijIBG
ZNwYDvUqomwsmfBcxBVm2FSmABnr7/maiiwQ6q4Iz8wDZwfw6dOtfpD8OfgF4G/Yq+BV78UPjNol
n4h8Za1btDo/hi5UTRlvvqoIDAMRtJYjgD1rT8IeHPhT/wAFBvg7q/h/RfCGmfDj4paP5uowWWlQ
qBcoqbIy8hRRsZ5FBB5BUmq50ZOHY/MsK8YV3Xy1bqO2B/8AWpZixCyeYVkb7o9ev+Br7I/ZX/YT
1n4ofEnXR44iGj+DfBd/Nba5cJIG8yaAB2hUEcoRyWHavZdF/aq/Zs8QfHl/CR+FWgWngSWR7G28
XfZSJGbysBymzIQtlQSeOvFRz26D5T815QrRszHJXG3jr9ahjPmBdxwm8lXPAxzxX17+0f8AsE+K
/hv8ctL8JeFrdta0jxRKy6BcMyKZGCeZIknQDbnvjIxXuHxH034L/sF/D7QPBus+D9K+KfxIupUv
NUXVI0DWqPHncWKthMrhQPc9qvnSBpdD814ogxZ93GM8jkVBuG7jC5zhSOT6Y5r9GP2jf2a/Bv7R
nwP0j42/BPTrfS/7OtBHrfh+0jSKNFSMySkcDc6FgPcYNYf7Jn7K3hT4efDVvj18cY4E8LQxq+ja
VMvmJeGSMhHdVycliQoI9+1T7RPZGajqfBC/v7XG4AdQ3XnOAaI4XBAKkgg5Ofzr9LvhLL8C/wBt
/Q/Efw7i8A6Z8K/GNxFv0W5sV3TTKhZ3IO1RwE5XPRq+dvhN+wZ488eftA3Pw31SKXSU0J0k1vUI
9pWK3fJjdQx5LgYGM4z7VLmrXsaJK+p8ssjvsVCrL2JYEkA1LKjIjBwXwueOo49+a/SvxH8XP2Vf
ht8YNJ+Gtr8NdI8Q6XpvkaZqnisqFjikBKSlwVJfaVBJzgknmvD/ANsv9ivUfhx440rXvh7A2t+A
PGU0T6U8IULBNcMxit0GcldpUgnsaqM0U0uh8eBZCC7AFec5PFWJlZrZx91lYgoDyvHf0r9HZPg5
8IP2HPgXY3/xe8NWfjr4keIBFNF4cmdGeFcbXC54wOcsBycD3qj8Sv2Zvh5+1X+z9afET4G6Knh3
xHosZfVfCtmoZpS7g7WJ2jcoDsCByOO1UqibsxKPkfnaFcQguHJx1c/hTUl2u0QkUyj1IxX2v+xj
+xrpvi+wm+KvxPnXSPhjoTG5YX64jvijMjqeeEVlHXOSe/NeqfCqz/Zi/am1jxj8PfD3ga08Ba7e
RPHoutTSq8lxKSwV0XOc/KH29wecVLqJD5Ufmt+/VeWBwOMdOlMlVxGMkiRuMdS34V9EWf7EPxDv
/wBof/hUyWEsd/HN5s18U/dR2O/YLo8/dYYIGc5OK+ofHGi/spfs++NfDfw313wlF4t1q2jitNX8
RRz7IracsUfzAG4YctgDgAZxSdQSSufmurE3AIyRxk9MdqlI3zsjEAMN3Jxvx0579a+wP22P2OT8
M9etvGfgFBrHw+8SssmnTWI3RWruwEcWRywYkbSB3Ir0Twd+yv8ADX9m34Bv47+P9nNq3iPVo0/s
3wrDO0M0S7tuwBXG4tuVmzwufzftFoPlR+fZCG3UyF3UOAdvIb2/+vRNCYWLFlaPptx19+tfoF8S
v2V/h3+0L+zzD8Q/gDpraXrOmLI2peFi/m3L7nAClSx2kBWI7EdK8r/Yv/Y4ufjXqUvifxwDonw5
0RvOvb65bykutpIaLfkbQOMnjgY78DqxSuRbU+T1Kq6qVIQD5m/rQbYjzFV2CrjgnK4+n5V+jngj
4Pfsw/tF6p4x8CeCNJvfDviG3gmj0nUL66KxTz5KqUw5LcjO05yOa+SLf9lH4g3Xx2/4VbFot42v
Jc+TJviYQi28zZ9p3Y/1eOc9+lQpqWqKXmeMMrLK6h8ZB4H09PrVhIflH7kTqqjdjOPp/n0r9EvF
nwD/AGWvgp4o8LfDTxrcalqXjC4hhi1DUtMnPkxTudpMhz8mSckY4GCRXz5+2h+yTd/s7+L21DRv
9P8Ah7q7CTTLyB2cRj/nm7cjPGRzzmrjK7sWkuh8zEo3mbNxJ42qOBURMvk7SCi5yMjrUwBt9/WQ
g4XH+etOu28uMo8gbdkkDAFamTREwaOMwgsFOSWY8Z9hUMmWRtrjOcbR0NTMp8hAPmyST3xULW5A
kbA+91zjHFMgnijCIzKmTwSMYApkgCSO68k8lT1FGC+0ncsR6HH60OgYklzvHO3H9aAHTs2wFQB6
LnkfiKbG8m7yx8pYHtwPxpTbsSZWLFFB4VcfTNK6eYgwdu4cIzHqBQAyWB2y2SSCQwA6DtVV4/Mv
AoyNo7/nVhd0aEqXGMZ/LmoWbFyPlxlefrUsTuWIY5Xh80JvAGGAGMfj2OM/lXQeO5TPrFqUcyj7
FGpVyRtHJ4z9RWTZW58uRvMG4rnCk5wPatH4ixs+vo29dotozwpQLx0+v+FPZCSdzmSQ7fMMdlOa
JC6MNvBxznjNPkjWRCrMWZBwfaoJmXeQC3HGW44rK5qdloMnlX9hLlEdZhuJ+YKOOvBzn0Fa2hTR
W12oeLzhG7NtKgckYUg845x9M1zttGHlhXqCwYSkE7jnoRXUaU76Xq7vKokbqYyvTkda71E4nK52
enW7yhpAAhf7qqQwUenA5wc+9bup+WvhfXUaOSRUsZhsKt/cbkjgnHAx71j6Xd20iGSHepwGEanA
OMk4BIOfrx9K17uKK/0bVbdC8TvYztGq9FxGzc0S2IUdT5nS6ks/MO3a7JsyQARmorJh5qqzsFzk
beue1JdZZx5hKNyxDDOT9aZFLiRWVRwMfN0rltqdqVloe+/AjxtrPgPX7PWvD9/JZarYuZbeTO7B
KjIIPHTI/E19YfHP9ojw78eLXT78eE30nxMkCQXuoNINk+A24AdwGwQfc18X/Dfc9pIXYkKFC8cY
xyPx5/KvUdLMks4mimZ2cYbbyOecY9c8fStYxTZjUnLQ+rv2bv2m9M+C3gHxD4en0aa9mv5GkiuY
5Agz5QRt2ewODgCvn3UdVN8TcBXWYOX3KccZJUhvx61mxgJJNvleNChyi52gEYJ49cetOtwu6YH5
Cu08AEA55Hr6Vp7LyJVRt7H1T8bv2qvDfxh8M+DLO98M3iQadexTahC8yBpYwoR41YHjOSeSOnvX
deJP2y/hB4r8ExeF9X8I6tc6JEkYjtknQlfLGEUEPkHHHJ6V8OJHJtkKMhLNkbT0APf/AOtTomfG
0wllRsOSDkdQDn8qxdON9TZc3Y+xPiN+1L4D1z4F33w/8KeGb/RrY+VbwLcMm1FWRXJOGJYkZ7k8
9a57wH+1DoPhf9mfUPh7LYXt3qssE8EV0mDFiQ/efkHjnp+tfLvkS28exGCRgl1LqAOMHjp65B/C
muZ4VDsWcMDlVTbj1H0PHWqVKLE5NaHt37LXxm0j4P8AxQvvEV/YXU2n3GnzWzC2VN4JkVwcnHGV
PGe9cL8dPHtn8U/it4g8VWUEtpY3V07wpOmHC7dvJHrj/wAe9q5S5mMQCuJDb4IIjARAAcn5vXn1
7VTdz9pX94Sm7dvxwASeMd+MGj2aWxCktj6X+Bn7RPg74HfDLV20nQ7+b4gXUTql7NGJIA/JQAsw
O0bucV8/+LPFWpeMtYuNZ1G6knu7gtJJJnIGSSdvtzWXKjLbTkyNIo24QgkN3PqAffFRJMkZKtyQ
uF3jrkDjP0pLRhJczvII0ltbZojl05APViF+n4UxnEs6r5ofKjMYGP8AJ6VM0DgpIVYgjPl8gAYG
cmoYlVciM8Z27QpzjHb05P6VrfQx5bMmvZWjfy2b5QcZ4zuz0/DP86bJG0XlhZlRzu4By/TqccDm
mCEQlmcMsoPAwfkOcZ47/wAqj815tk0TM7luOg7dz68d/U0RSZM/IszHHlOgZGVT/q8EHg8ADvQt
o7zE7ZCRHkscBc846d/8KiEkcdxJ5P71toaQk/Lnp8v6fnUovFSXDxbsnaCrDj3p8nW5KiuoRXUc
AWInzC0mBvX5QeQTn+ntTlllFs6MxjAKhien0PqMZ/OmzNEWntkky24EEH7oOScgj60IWEYy7SFT
khRtLYGOh/Os1a4tb2uI11JFMXCE7tqhs5xzkED3xVq2uniZDE2945NwKthl5yO/Bz0qmihvlDM7
eUFG5SpZBkg89eSamMYt4xkbs8c8cf5x+dDs9DVXj1PsH4c/tHeDvix8Jb3wb8Yp3s5rNfKsNfVX
ubh1LdWO0lWGB7MB14rRuvjv8Mv2f/hVdaD8Ibp9X8Qak0izaldwtFcQqVJWUsyDeFyNo96+KN+9
o5cbJFUMM8H6datC4aQuCjojrkMBxxwT/KoVJdDo5rntvwO/aZ1z4SeOJNUvbiXWNI1Z/wDiaafL
KT5jswDz8k/OBk++Md69ze1/Zok+KEXjiTxXAliXLjQ5rArZmTZ2zGO5zt6Emvh1Ljd8zDBPyBdu
WBzgUoklh3qzPswXC8N/DjAB9av2RF3c9/8Aj5+1FrXxK8dQXOmTzaRo+hzpLp0EMmBuXI84DAO7
oOemPc59atvi38Ov2lvhfb6b8StTj8J+MNLaO2j1eSMSSTDHMi4XADYORngjNfFUkzuhd+XK8H6f
zpfM3lg6M2TyWGd3/wBaj2K6MFLufbWrfGD4f/s5/B8+HPhdqdvrfifU0b7TrEJCyBlPEkgIJLfP
tAAwOa85/Zp/aZu/AWsTeGPFol1bwhrcrx3Svg/ZpJjiSViedpzyo45J4xXzeF+1XY+/u3dVOM4G
B15x9aklDKrLuJYnnAycdyKfsl1DmZ9saD8K/gP4b+KFz4nf4iaLqGiReZcW/h8XAHlMfmUbg/z4
IO0Y74ry/wCMf7WOu+OPiPpuu6DPNp1nok4m06IqjlGIw27jDbh1Hv7V88xTPlQoAwNu7PU1XgEs
DyMQyAENznaT/wDqFT7GKEqrWx90+Lb74eftc+BrLW77XdP8C+NbEi1mk1SVVLovzHClhuQltwPU
Yx70ni74reF/2YfhfB4T+H2qW2teKNVjWa71hHE8IYrsMpAbAJ6Ko6dTXxNBeNJcqgVV3YDZYDcP
U5+mKTkkRofLO7lo8EH357VKgrhzPdH1Z+zP+0Bpi6PN8K/HbfaPDGrM1nbTttj8gyFiyyMMYVmP
B5wT6V2vgr9nzwJ4D8f3/i/XPHOkavpGkF9QsbK1vUE4KMXQyYb5ioXt1J5r4huwxIDboxgqNoH6
57f57UsaN5anCmEDHCjGB+H1q/ZdmPnlfY+jPiB+1/rOu/GbSfGehRxpp2jGSCztJ4cs8EgAk8zD
Y3MB1B44616x8Uvhr4Y/at0ux8d+DtctNJ1e7H2a/g1W42ArGCNpT+8MDnoRzXw5HcEzS2x3ockD
KYB9xmrDSfZopFO3ao3L8vy8en456UvZi52tz7X+IXxB8O/ssfCGL4eeFLiLU/EOrR+bdylhLChl
TZLICuMH5cquOlZ/7PPxV0b4ufDVPg142b7BLJEkGmXVtlfOWPEg3MchXDICM8Hp1r45N4skJZlL
EAAZXhOP/rUQ3EkciyLLMDGfkKHHP8x9faq9j5mftm3qfa/wi/ZZg+DfiO88c+ONYgOm+Hgb6wa0
n3k7Q+4vxk8bTgdTXCXP7Y9xL+0PbeMotFV9DW0/sry2yJfsjTbvN4z84HzFfTjvXzXBdS3Ucxkl
dkkILRtgEDHtyfpnrUZk8t/7gTkAj7w9P5UvZMbm76H2b8ZP2c5/jXrlr4/+G17Hqlp4jXzLtrqU
RJFjaqsAcHoGypGcjmrnxr8eaP8As6/BtPhNoskmu6rdwyw3Mki4+ypPucuexOWOFznHJr40sNcv
oSFivLq3hUMVSGZkGScnOCOp/nUGo381xdy3VxLJOxAR5GYs6nsck9MZo5H1H7S2h9seCfFul/tV
fBST4c/bF0LxTosMTWixjcs4hiCpJyMYJOGUEkdc1T/Z++AeofBTV7v4ifEO8j8O2uipJ5durq8b
iRCjOSp45PA5JzXx3p+q3OmXkdzp9zNaXKBgs9vI0TKCORuUg846fStHUPGOu6xbT2N9rmp3FtIB
mG4u5ZUPp8rHA/AUvZj9qfT/AIG/az09f2hda8QalpR0/QtaSGw+0vPn7NHECFnb5ehPUcYBzz0q
p8Yf2UPE2vfFBdV8KZ1nw/4iuzqUt+rqI7USuWP8WWUBywIHIOK+UXkI8yJ9ysjYV1XjA6cH3Fdf
Y/FTxnodtBZWHi3W7GzjXEVvb37qiDsFAPAA7D+lNUmP2uup9TftE+PtF+FPwc0n4Q2t2uta5HbQ
xTTxYAtvKZHBdQflLcAD0JJNafxCuLf9sT4JWmo+F5UtvEvh+VrqXRJMPOJAhGzggjdgFT0NfFup
6nf6vez393dS315cHfNcXMm6SQgYyzdzwOtXPC/jnX/BtxNeaBq15o91MnkvLaSlCyZztPryM+tP
2TH7VM+r/wBmP4aXnwOi1/4j+OceGoYrWWyFjeAJJLyjhwWbGTs2hepNW/2ef2kPDut/FXxjZXUM
2lJ4rvjdWM1w67V2R7SjnPBJBxjIr5Z8S/FPxj460lLDxF4hv9UtEkWVYbmbcoZeh9zzXJXYLoiB
tyjo6HLLx1HvR7JhztM941f9lnxzZfEmXwdBb3U2mXkpePVY0kNkImBbc5xhSpyNvqR1r2j9o/4v
aH4Pm8CeEy0mq6noF9ZahezW2wqiw5Uryfvkqfl7d6+cbb9pv4owhUj8a3oiUAKrRxs2APVkJNeb
ajqE+qajd3t07T3c8pkkn4zI7MWJJHqSan2T6k+1tofZ/wC01oOofFzwx4Z+JngW5ubu2s7YxNb2
O4Xg8116BOcqeGGfzq9+zTp2ofBz4e+KviF46nltrbU44ZUgnLNdbYjIoLKwB3NvGBzxXyj8PvjF
4v8AhVZXlt4b16TTIbyVJJbfyo5F3DIJAdGwSOMj0FJ8QPi/4u+J8lkniPWpdVisiWt08pIlVmxu
YhFH90AZ/rTVJjVU+q/gB4w0v4ofCfxz8P7KV7DXb4ajNbJdrtDRTswRgwznaWGR1FeLeCfhx8Sf
EXxL07wVeXWoG10HUUuGiu55PscKROCSucqeD8oUc5ryDSdfvfDGtWeq2EixX9nOs9vM2SVdSCOM
+o+mK9Xn/bF+KeoWMttLq1pGJUaN3WxjVgCMEg4znrz09qPZsr2iep798Xvjh4U8PftIeCZbq9dY
fDy3lvqUywsVhaaFQgB6HqM46fnXCfta2XjPwN8T38b6BqV3p+ka5bW9mlzpk7K7sq/cYD16gnPf
kV8sM8m1cu0j5IdmxuYAHkn616t8P/2ofG/w98NQ6BYyWl7p8BLwHULfzmXJyV3FhkYzij2bTuCn
0PoT4cm5+Dv7LOvSeObj+z7zX5LqS1SRjLK73EA2K20E7mIYn8zXw/LcH7PDFhiCgUgDJRsdu/pX
Y/FT4w+Jfi5rdrea/LEwtohDFbWqtFDEOcsE3H5juwWz0FcNvH2jBJVgu8leARz/AIVvTjyGE5u5
MeWO+Q5I+bcvXHpX0p+wz4q0zQPiZq1tqN5HZyavZR21mJePNlEhOwH1Ibj1xXzEjAjciliDhWye
KuQ3ktu0U0RkguIW3JLE+HjYdCD1B9CKmpG4U58u57t8VfEfxF+FHxE8Z+GbTUbrSrbxLqVzdW1l
EiOLuKV2CsjFWIJDBTtIIxXsXxbeP4e/sheHfC+uSpY+IWgtANPZw0pKTI0uMZztB5Irx+0/bS8T
2unaVb33hjw94jvdOiSOG91KNvtDbQPnLcgNwCSOp5ryr4pfFHWviv4kude1y4Z2ZybazjZmitIy
B8keeQPlH1Oax9nJm3tUtEfYf7WPiPUpfCfgP4ieC7oXFnpE891/adsgljiV0VFLAg8E5U5HHesT
9kjxL4l8b/ELxb8QfFNwJ7FtKS0bVXjSG3UxvuKgjC8AEk18/wDwg/aI1b4Q6PqGizWMHinw5ex7
f7J1OT91CxZmZlGCMNu5HTIBrZ+If7VOpeNfAkfhXSdBtPBOmtK7XMOkyFVmiZWBjI2qACWycdeK
XI9io1U+h9K/ArX9M8Sab8YdM0q/gub281nUJ7WBHw80bxgLIoPVSSORxzXyrafEz4ieIdB074QS
WsjpBNHB/ZhsgtyHiYSbDjBwCM5POK4HwN411b4a+KLLX9AuGt9Qt22ErwJYyQXjbPVSBj/64r6A
g/bT0q08UXPiIfCvSofEk0ZVNTS9PmsSAuGbyQcY6jNTy6lKTVj3z43+K9G0j43/AAeivdUtLaS3
vLtp1lmVfKV4NqF8n5QzcAnqa8n/AGlfin4t+B/7QV14k0K1iWz1XRYLQXF9bs9vMVaQnawI+ZTt
4B6GvkzxT4sv/Hmv3Os65ePfapegGa4c8MF6YAwBjGB7AV7hoH7WNhd/D2z8N/EPwdD8QI7CUyW1
zdTqjbMYQEbDlgpIyeoHPNHKxc1z279nnWb64/Z0+I/iLxFssH1q71DUFnmjMEcoktlO5Nx+6W3A
c9qy/Fs1xrX7AuizaGDqF5aW1jIViXzmQxzoXyo5+UAkj0FfO/x7/aRvvjDpunaTp2mf8I94T06F
FTRRIsgeVQQpLADhVIAH41W+AX7Q+ofBDWJpPLm1jw1fbvt+kgqN7bSFZSwODk888jiq5HuUppnr
Hwh+O/in9oL9o/4fyajp1qbfQftLPNptu/yLLbsN8jFmABKKB06mvf8AwfrFnL+2D8QbRLqFpv7B
sMx+YMlgWyMdyB19K+Zx+134S8C+Gdetfhh8PZfCOv6mFRtQlnSZEKk4bBLZwGfA4GTXznoHi3VP
DHim38RaffyQ6xBc/a0u5PnZpskksD2OTx056UnBsXOr2PcfEf7R3if4Wal8T/h3BplobfWda1Fk
ku45EnjSZ2jygHDZwCPrXvnxCaPT/wBjv4bx3+LWQS6Jujm+VwfMjzwe4ryvVP2r/hZ4u1jw/wCL
vGHw51LUfF2lwQo15bSp5ZkT5shN4BUOSRuFeEfHv45av8evGl1qt8PI0OLdDpdgQMQxZyC4yVZz
xk9u3vPKw9pZqx9g/tofEfUPhB8SPhb400+ygv2sY9QURT7hG+9I0I3LyPlckf7tJ+xv8S9Q+MXx
W+J/jC/soLD7Za6fF5dsWaNdgkXAY9ThQT9a8D+Gf7U2gx/Ce9+HnxR0TUPFugRiJNPa02tcIASx
DszL0IXaQemQab8Vf2rtBh+DsXw9+EWk6j4U0iUzR302oIvnFGO7EbpIxyckZPbii3kNTS0PfvA0
q3/7H/xg+xsJ3+2a/hEHJILYGB6/1r590/8Aap1j4nWnwn+HJ8PQ2qaTrWlh7+G4LtMIXVAdm3jr
nr2rzf8AZ3/aE1b4B+PIb2EPeaFfuItU04ndvRny8qAkASe57ZFe4aD+0v8As/fDrxb4g8Z+FPCn
iBfFt7FcGJLyFfsomZt5wN5EYLAcjtnFJxdrLcXOrn0v43kA/bQ+GsfmgP8A8I9qZ2Z5xlea+dvi
B+1Rdfs4/tN/F+1/4RiLWo9WNi6M115DJstlG77jZB3n0+7ivkjxT8YPF3iz4iS+PLzWrm38QNce
dBcWcrotsAciOIFjtT/Z79819JeLP2k/gX8fPD3hvVvizpWuWHi7ToWW8l8PWhMUmW+6W5LLhAcH
pkjNXyMammz0bwfdNef8Ey/El5Mhhae1v5SrHpm7bH+P41rftgeNT8Pvg78C/FCad/akelazp16b
fIUMEtmYLk9M4xn3r5f/AGsP2q0+Ma2vhTwYr6P8O9PVTFbpG1u9620Z81cgbARwvryc8Vp/AX9q
vw/F8MtZ+GHxlgvdb8FT2rCxv0ia6u7VjhfLGc7VUHKMB8vSp5WtyuZSPc/2cv2go/2jf2y38QQa
A+hW9p4TmswsswlZyLhGJJAA/jHHtXq3wS2t4y/aR8v5z/bzDGc8/Zc/1r5Zm/ac+EP7P3wl1vSP
gbFqV/4p1SbY2oazZtHLbIyFS4fapO3bkD1Pevn/AOAn7Tvir4GfEVvFK393r1rfyFtasLy4Zvt2
7gyN6yL1BP0qVF7g3bRHbz/te2Fx+ydafBr/AIRa5k1ZWVTqiuvlgC7E4Pl/e3YIX6+1fevx5jKf
Fv8AZxiJUMuuXGd3HS0P6181w+O/2R7L40zfEyPUdVfUGJuRoQ0iX7CJDHs3CMR4zkZ64ya+Y/jb
+1N4z+MfxOTxf/aV1oEWnyCTRrC1uCRpzAYDIf77DluOvFLkbdxpn2Z8f/2ldG/Zp/bQ1LWdY0Gf
W7e/8K2tli1KCSJjM7Bvm6j5TnHPStP9ljxVbeOP2Wvjv4lt7X7Baatq2t3scHGI1e1RwOmMgNg9
uK8Y8TfHr4G/tVfDLQn+MV/d+DPHulyeS+oaVp8s0l3Eq4B3iMjDbtxUn5WDYrmf2lP2tPDGlfDH
TfhH8DJHs/Cn2OP+1dat4mtri6JUxvEUZFOWAQs/foKOVtjvb1Pc/i7qQ8P/APBNP4dag0S3UNv/
AGLNJEejr5ytjnj2596zvh9+1J4Y/aX/AG0/hC3hnw/d6SNGsNVS5ku4kQtvgJULtJyBtzz6/WvD
/wBlH9q7w/pfg3UfhB8Yi2rfD29tpDa39wjTf2ZtUYjCKpO3PKkHKkeldp8O/iv+zl+yJ4U8SeKP
h/r7/Erx3II4NMhv7WS3eBT8hCOyfKgDFmPU4A9KXKJTSPrn4UASftmfHNh95dO0VW/79ORXwxJ+
1r4J8GfAH4tfC2/8NXs/iTU9U1gw3cUMTQZeZtrMS25cFfQ15J8Kv2xPHfw0+NM/xLudQufEVzqk
w/tywln8tb+EKVVAMELs3ZU44xjvX0h4l0b9kH4n/FrSfiRd+P7TRrK4aG9vvCb2ciwTzDDkP8vG
WI3DGCQT3NDXViUrHvfxhjWP4IfsxxMGIPiLw6oyec+TxXOftd/GLwt8Df2w/hx4n8X6XPq2kR+G
b2HyreBJnjdpflYKxAr40/a2/bQ1b4wePNNtfCU9x4a8G+FLiN9Es4QAXngdhHdZ28fKF2pk4Few
Xfxs+D/7a3wM0y1+Lniq1+HvxH8PPHapr0yb3u02KXdUAAKOSdy9iM5p2a6DvZ3Pfv2GPH2ifFDX
v2hPF3hzTpNM0fVdWtri3gmjVGC/ZWGSFJAJIJwPWuMvWVP+CSWovIm5Psc7MoGcgak2ePoK8s+J
H7TXgD9l/wCBNn8M/wBn7WLTXde1ZPM1XxdYhfkYMA7MrKwMjISqj+EYNch+xd+1noegaBN8Gfi/
KmofDbWFMUF1fMFi05vmkYStgMVd8EEng+g4oSe7Rd7nqem/tKfC744fGv8AZq0XwNoVxZarousD
7ZPLpotlVRb7dgbHzjcufTgc9K8m/wCCtMm79qWzjdSU/wCEbs2yeg/fXHtXo3wr8Pfs3fsmaj4h
+JcXxN0f4mappsLSaBo1nIouIXBOAnzkFyCFzgcZr4f+O3xt134+/EvVPGfiaZprq4H2a2i2qq29
JbytCGccg7gp3DpzkVe+OWm3H7JP7EHhTQPhHZSW+teNriCxurpUae7mlubN2d0I53koqrjgcYFW
/Ampzzf8EovEGp30a2k19Z6lcFDkAeZfSYxnnHPHtitb9uX4kt8Ifgh8A/GMGnx6tDo2uaffG3dy
iSKlm5A3AHGex9cUkGlzyj/gnNrHxBbx54r+CXxO0+8ufD+p6LPq72fiCGT7ShLQw/KZMNsYM3GO
CuRXd/safsweBPhj8Y/jd4hs9PkubjwZq0um6L9ufzFt4fI8wkj+Jh0DHnHfNQ/sbftO3X7WH7ZW
reKj4fXw/bWHgtrD7OlwbjDC8jfJfavJ3dMdq9k+Bc8Us37UN1BIs5Hii+UhTkZS0Tj9cU7K4nbo
fm9YftHfHmP4qwfHprbVlsLi6VATbXB0UxuPI8oAnZ1yM5+9zX2Z+1/+yP4B+JX7TPwgvb61uLV/
GV9cQa3HZzeWtxHb23mLjjhicAnuM98V8kwftw3XiL9mLw58C18IQRzi4tLNtUF6d7bLpZsCHZkN
wozu9a/ST45TxR/tM/s3WrkBzd6u6r0xixoNHa+h8df8FEfGHxBf4kaV8Efhrp+o2Phjw5plrqUd
t4ZhkW4+ZWjUM0Z4RcjHHVuc16B4Fsj+1/8A8E+NbufidZn+2/BgvY7G9t1aO5V7S2+RpN2TuJLB
h3x0qp+0Z+12v7Jv7cPi7Ubnwy3iG31Tw3p1soW4EDJtZmJUlTu69OK6/wDZ08YSeLv2B/jP4suL
VdNbV7jxFqBhQkhPMjLAAnGeoGaTvfQhaHJWOnQfsWf8E9bDxt8PdPjm8Y+LINP+1394hll33SBS
U2lSNoPyqDjNcF/wT5+IPxP0D43R/Cz4g2Oo33h/xZZz3kkPiiKZpf3MRO6ISEgKTwQBivYv2mvE
R8Ef8E+PhBrAtluv7Pn8O3LwScK/lxhyrcHAJXHTvXOfs/8A7X8X7Xf7afgG8tfC0nhuDQtD1SNx
NMsxkaREIIZVGBgdDSs9xp9zS/Z0/Ym+Hfhn9sv4mPFb3NzpvgptNudHsbmQPFDJcwO77hj5guPl
HbNfKnxO/ad+N2v/ABs1H4t2Fvq9vonhrUWtYY7GO4Gk+VbylSso3YbeOGOerdq/Sb4Iyed+1V+0
Y3B2S6IgAPpZt/j+tfnNJ+3HH4Z/Z18cfBdfCDXN9qN7qCJqn2nYqiW6ZvmjKj5sZAwT2qrdStb6
n03+2f8As5+E/jZD8EviFd250rxJ4r1TSdJ1L7AwWKaC4RpHOO7gscNknB5zXP8A7fXiLxL8EPDf
gX4B/BvTrrTNLuNM+3M2jiX7e4hkCqodDnkgMzdTgc1718a9tr4Q/ZdtpQq58U6KCG4I22zf1rzD
9sv9pq3/AGXP2yPB/ie98Pv4ht5PCM1kkEM4ikVnuSdwJUj+Cp3Jtczf2StOvP2xP2XPGnw2+MFn
Ldy+FrmOytb+4BGpRNtMoZ2fJDqVA9xwazv2ZfB2j/ssfsQeI/jtpFhHrHjiaxlkWTUQDHGkd08S
IgABQEHLYPJr0n9g34oJ8YtN+PvxAWw/suDV9c85It24qi2gwC2ACQOuK5LXrwaZ/wAEiLi6WMPn
TN5RuQd2oZ/r+tJMLWPAP2N/j38ZND/aX8O6d4yi1W/0T4hXT2l1b69FMbbYxaQvbhm2rjOMAYw3
I6GvPP8AgpX8GfDvwZ/aMn0/wyHs7HWbBNbltmG5YppprgOEOOF+QYBzjNfRvhH9tK0/ao/aI/Z9
8O6f4Tl0b+xNUkubi4e4STewtSDsAUfIMHnAryP/AIK9S+b+1HaKMExeGrIYB55muiRj8a1i9SZa
HwxkMrKcjqc4q/ZWZurpbdc+Y7qgIPPJAqgHXaq7cEMSee1dL4Ehe58X6JHET/x/24bJxkean/1q
JXRMdWfqL+1wbv8AYn/Zx8JfDT4Naa1vqPjGe4tLjVIwW1BjtEjMjJgs53lRnOBwKyP2BfFniH4/
eGvH/wADPi5ZXOrabaab9sS41Yym/iE7FCA8vzADllJAIz1r0z/goH8Z7b4A/FX4A+M77TZNVsdN
utTeS3jYIzboIo/lY8ZG/OO+Ko/sQ/Ha0/aV/ak+K/jvT9Dl0Wzk0PTrNYpXEjgq7j5mXjJwT9Kj
VGljmP2NP2bfB3wej+L3xLW3l1vWPBuravpekC/2ssUNuqurYx/rDgAsOcZ9a+afhr+2D8a4/wBo
DTvibfx6pdaH4hvktDpVwtwNJEczKgWLPygqBkHPUdK+3/hPeIn7Lv7R2owgy79b8TyjaeWxGcdP
pXyFZftxaJ8UfhB8Hvg5a+E7ux1XT9V0eCW/eVGhbyJkUlFBLEnqQcd/Sm0x6Nnun7Tn7EvgLxh+
2V8OYVju9O0/xut/LrFrYlI491rArKYxt+Tf0b/69eef8FCPi98QNA+I1r8GvhbYXvh7w94Xsra6
3eGlkWZ/OQAbxGMBFwfqTzX2R8Wdtx+2v8CoS+Gh0vXJ9v1ijUf1r5w+Kf7Y2j/su/tq/F9tX8OX
fiD+09P0iCEWzohjKQZOd3UHzP096jvcLPQuXGkWf7bf7AF74t8d2qQeJ/CsN49pqdgnlyyyWcDA
Fy+ThiW3L60yw8P2f7C37Cth418Bacl9438WRWC3GoXy7pEkuYRny8YIVMZVcnnk1tfAnxEms/8A
BNT4ma81sLBNRGvXPlZ3bN7uAM4GfTNXP2sfFsHw3/Yq+COsSWX9pWmn6l4fne3Jx5iRwlyvPqFx
+NJNiPH/APgn98Z/iJrnxYufhJ8To7zxJo/iWynvJk8SrNJLF5cfSMSEgoxzx0yODXZ/s4/sM/D/
AEP9sD4jwTxz6no/giaxudLsbwK0YkuIWkxJx8wTtx2Ga1PgZ+1Hpv7U37bPgrVdK0CbQLfSfDmo
xPFcurysWKnJ28AdK96+BU8Vx+03+0bLHkvHd6TGwHP3bRuPz/nQpMHa5+c3xO/bW+M+p/HK/wDi
LpMOpWXhPwzdm0j0myeVNNkihkZSJccMW7+nAFfR37bf7NnhX412/wAHviRHHJoOv+MNT0rStQFg
qhJI7lTIznjLOuSAT1GM9K8EX9ubw5oP7MPjb4OJ4Sum1+9u9SjW8Lxi3/eXLvuIzu3BTjpya+4/
jhb+X4N/ZksgML/wlOijZjpstmP9KL+994I8I/bc8X6v+yr4C8E/A34NWt3oq31h9sk1bT9zagyx
Pg8pgktglmx044rX/ZSku/22/wBmbxX8OvizazT3nhmS3toNbkBN/vYNJvJkBKsNgGe4POK3f2xP
2j9J/Zp/bE8DeKNb0SbXLFPCV1beTbMglVpLkYZd+B/AR171v/sE/E+3+Mdz8efH9lpj6TYatrkb
wW0xUuFS2OMleM89qL7D6bHlv7Jvwu8Kfsx/sp+Lf2gbfT18ReLUtbp7cX2NkEcM7xosZxlS3BZu
+PavKf2Sv2svi5eftE6TD4xOp674d8f6gdPex1hZjZ2ySuGBtw3y4C5AHcHkV9CTTx2P/BJ/WZZo
yYn025ZlXuDft+nr7VwHhz9sfw1+0X8Wv2efCeheGrzSZtG1yJp7i58sghLYqAu3kAnPXHTNDkrD
5ddTV8SfsFfD/W/29IdLMlxFoE2knxXNpaovlGYXW0Q/9ciScr6cdK8w/bN/at+JumfHG98JfD4X
3g3wp4AuPsA/sTesFywAf98FG1VAXAHYZNfdET/aP2/bhRk+R4AVWHbLXuR+lfI2tftr+FP2f/iz
+0R4Y1fwveaxqOteILkwXNssW0Yg8sK+8jODzx2NO93sJWTOi/ae8E6F+1X+xfovx1vtMTQPGsFo
GMliARMjXAgKyEgFlAAZT2/GtD48QW3/AAT5/Zi0bw58MLBv+El8YSyWcmuvj7csvlh9ylQNxGSF
XPHUc1u6sdv/AASu0AHMRubOyAVcLw9+pxj6Guh/by+KGj/BvxT8AfEmuaXJqunadrF1cS2yKpYq
LZRkbiBuG4EDPapTvFCbs9Dyb9hP4k67+1J4O8a/Bj4u2l14giWwkv11XVNxu4lkby1AEgyCMllY
dOnern7HP7Ingz4deJ/ih471VZPET+BtUv8ATdMtLxUMQWBFfzmyP9ZwAD0GM9TXd/sh/HTQ/wBo
79q/4geLvD2l3GlaanhqytUjuo1VyfNOS20kZyOgrrPhLeJB8Ev2kdQAOz/hIfEMmcg52w4OD+Bq
Y7tPuVdnxB4D/wCCifxXvP2gbfxnqa3cng/V7xLVPDckjixiSTbEu1tvJH3unJzXtv7T37BfgnxH
+1l8PrfSZpPD+m+OZbuXU7OyjUJG1tEjlogCNpfPPXkk4rzOD9sTwH42+APww+EeleG7yLxDDf6X
DLdNDEIC6ToHKlCWyWbP3RX3b8Wfn/a++A8LDIjtdbkAA6f6OozT+22tLi5UtUfIH7c/7RPiX4G+
N9E+Cvwigk8GaR4ftoL6e60lcST70Y7GXGNvGS3Uk11vjHQNL/bo/Ydl+JHiKyTQ/GnheC58vVbT
bLLdfZYzuVyQCEkZiSvYjNafxK/aq8D/ALPP7aXxYm8Y6Tcan9r0nTLeAWkCytHshLkNuIxuDjp6
VtfBrXrPVP8AgnR8Q/ENtbmys9T/ALcuUjwAwDysq5xwD0olpOLQLucjoGgaR+wJ+xVB8SNC0tNc
8deJ4bQJeXKhWtXuogQqcH5EwWxxk9a5v9hT9o3W/wBpPV/EPwf+LMDeMbHWIJL+O41IjfAsajjY
EH8RUg54I4r1z9qzxdpvw+/ZP+Bus6vZteaXZavoc91bFAxaNLcsRtY4JwOhrO+BX7Q/gf8AaT/b
M0TWPBGjTaZZ6b4VvYZ3uLdIXkkMsfZT0AbqfWkr8qaHuecfsx/8E/PCOkftK/EJNfvm8SaB4FuY
Y7Syu4Ri6M0XmL5pzyE24x3rz3Uv+CnPj5/j9/wkVpaTWvgGxma3HhkFdksarsJMmzIOfnH4DvX3
X8FJXT4rftGXjncF1mBF+iWhr4N0z9qL4UXP7HrfDSHw9Ivjp2dTOtku3cbvzDIJevIOMUpK8XcF
uen/ALYf7DXhzxr8Zvh9rnhy6Xwz/wAJ7frZX9lHDmOF9hmedQMZY7myOASQaX9rv48TfsVaP4a+
DHwh05fD13Fawape64sSyGYNvjO5WU5dmQMWJ6DFfUfxohZvil+zhAUG5dYnYjnjZZ84/KvG/jf8
ffhp8D/22tZvviJYPeQS+ErK3t3WyFz5bGV2Ix2yAfypxd37wK2hzOt+H9P/AOCgH7HF14u1azXR
/HvhJHgbV2iUtcNBCJZF4xtV9+COxH0rK+EXgbwv+wp+ye/x0v8ATU8UeNNct7c2Z8sJ9hadNqxo
WJ4zlicc9Oleqfs3eIdO139kj41eJdBtTZ6Tqepa9e2UPl7NsTQ4X5V4HbgHiqX7QF/pHh79hP4U
T67AtxpKXWgvdROoYMnDNx+dJNtaa7lcuuh5n+x/+01e/ta3mufCL4vWUXii28QQvcQS+XHGtusK
hmB2KuTuAYHOQRWD+z3/AME8dFl/ad8a6f4o1OPXfCvgqWHfZSQEfbXmjZ4wxyMBV69c4Fey/Cj4
w/DH4y/tneFLz4bWBjtdO8M6gt3KLL7MjMzLgrn73oT717D8HyF+N/7Q0yZ/4/7JSxycEWjH/P1q
It6q/UT0eh8Xa1/wU/1i0/aADWGlww/DKzk+ynSDbR+dIiFo2Ik7HO049OO2Tw3/AAUt/ZZ0X4S6
/o/xG8JYstJ8VzESaTg/uLjYZXlHPRs5Ixwc1sW/xu+AkH7I8vhA2drN8Srm5dCYbE+b5j3ZfcZc
Y+72zmvVP+CviCD4RfCe1AKst9KRngALbID79MVvF+/ojKVj8sHDzO0rFpDgsMHOajTEe7zeXJ3B
VHPTqac0zRh+CuRhQPTjmkjjQE7s79uQSOvHQ10XM1qI8ZZmKkFPUcGiNGGXdC2DlR1z+NOSNWiI
CsykD7x6Gkmla1iVWkZQ3Pl56/5zRcZIIo5reRpZjvBGwHoOuOfX/wCtUdxGYlicKGHILg5J9P8A
9dRR/u0ZT9zOUJ5BP+c1LJAwIJAHGSD1xVFKwSMfP+RNySD5iR/F3ApkkSxyDa3zBguxjx+FOkC4
2NuMajPyjjNRTIG5DYweuOQaCR8sYmb5kAlBwWU4FU45yk0uwgk4woGR1q5cRMsbCRl29geCfWq1
hsgkkLDf6IvXGfX86UhG/YK3ChnEe9cEjGecD355/OqPi1lOu6gFBBeYkOG4rotBZb+HbKyvC9xG
vz8jJPBx1/L0rmfFxQ+JdUMW7aLhwCxBPU8kj+dTYiLu9DPZkyCWKvjBU9qkWdIgSAdxPJHOajwJ
MOfm3AgktmpFjZZXbYMHpzxistDW/c6+HF1YNHtLeXGxZ5GyW53AZ68DNOsUgFxbFh5oiUHAcjB5
yOMcZP6VDpF1GUuJMMsUqsrELlsevORn6Zqdwtu0exsR4OSBnd9a9Rwuzkc1Y9B0u2W6ijkV/Llb
DEElskcDg9M/410lhts7O6QgJ5drLvVQS33OB/k1xujSyXUkNyZvLU5wUXAUds/kPXrXW2TSXHmE
fNiKZEA5wxBxn2yckms5RaITi2fNF6gV9ytlskN82cHvVY5jkOTkA9ue1atuwbU2gdP9apQZI4JH
rVW8szp98Ypht29QTkD8q5b6nYrWPR/h2EfLsGErKWUjICgDPbuMCvVNFjuPLXJULIxAdjjfyeO/
of1q5+w/+z5e/tE+Pk0WO+Sx0uzVLrUmVwHNuHAYL33HgY9+a+vP2sfiN4L8M6XbfCHwdoFr9n8P
7be6vJ7bFwJY+PlfgEkZLN33e9aQnZ7BNK2p8rRTS3FxO87kB/3ieYMFU2/dB4wBknnn3qe8t96v
G7uVxlHQAkDtn6/5zivq/wDYE8X+Cf8AhIta8L+JLC1l1PWdgsHu4PMjcKrGSMluFJ4I6ZPHNec/
Fb9mnxdpvxpl8IWtq8kms3LGyuYF/dCKRsrng7QvHTOAD0rV1k2YOkr2Wp4YHSziJ8rzIxHhi2Rt
bOOpOM8rx3GcCknV7l4liYPM6HcATuK7ug465x2r7/8A2pZPCPws+E3hLwrJpemTeNYRZ3DJHagi
XyseYZGGMqWB+8ea9Ik8W+H7T9mOz+Jv/CCeHm1B7GO4+xGzQJlpAu3OM9849a53VYKCS0Py/wBq
zKjOkoVgI1dlCksQCByMAemaWW2JMeYHG0kBQQc4GTznnt0H4V+gUXxJ8C/HD9nfx5JJ4U0nQfFu
labK89m1okezq0LxuR8ynA75yDx0qX9iL4YeF/HfwK1pta0Kwu7q61a7tzcy26tMibI1BViMqe4x
TVXuaKKWx+d88w4t5THG0eWKcA49ee3PX0p80kyYj8nerJ6c8jj+Yx619y/DX9lhvhn+1bYabrGn
Q674TuLO7msri9iSYOAoIDcfKwJ6Y/GvHv22vC2n+GfjprEOiWcOnwrFbymK3UIqs0RzgAcdF49z
WirJbIylB30Z85rdCN4kZ0ecHdtDdVHTj/69TTkRksEaBZcodw3A8cn+dfVf7IfijwBrdjefC3xz
4YsYLnWWeKy1p4/MnlllO1Ys7SUYcbTnHArzD9pT4C6p8EPGr2V0xutNud8mmXZZS0sYPO9R0Iyv
6e9Uqik7WBwZ4zLHJE4bd56FssVG0KQD69f8+lTOpJQrH5kgJYA4ULznP8/xqIyqJCvJcngZ5TIz
yOR+OOlTzT5uAskmXkBViBwGxnDH046e9XZrUzUVcLidbaEGFwgUnA7HqT9arpqBjfyntwqKQCEO
WweDjjscfhmvdP2Pm8FN8bLCHxrbWl1o81tLb2y30e6EXDsixhh0ycuOenFdH+2Z8Gpvhx8T7i80
zw+mneGNQdW00Woyhk2AyKqg4Azk4wP1qFVinZmnsuzPmyXYluY4pSZgWQN0IGOuPx96sfMkcqwh
gvBVmJOR0P1r7u0n4L+DPhX+yRf3/wAQ9CsNM8U30Vy1rc3Sg3PmvuNuoKk4YArx2AOa+FbyIRXF
04yqpl8luCOAAM/WjmUh8tnuUx++jfeDtJ2BhgY69KdJEY2R4wG2nbufGMHrVh2bcWkKyq4Ls4yS
2T0H4jpVdJPNMeWJPzEhmAyDjjHP8XfNUpWG0IqRKyyKvmscgBmGSO/H4VJvHnZCbT0UHHT/ACKJ
ENrw6lSSQSVO1DxySPTI744qGSZxOfLcABDlkO7OeRx65/QUNk6EyeVJL82ZWXIAOMrwOff/AOtT
2VY1WRppBK3BHOB17fjTJplWXzHR5A5EfDdH4wTxz/nnildpABvCFQ3DMxJwT7cc4HXJ4qVqyEkt
RzShJQiBpNzE7icck8cfnQZUkT5QFkYgFlB6Ad/WmyRqsildqbW+dh1Ip8JEqful3HIJ8v5iT0Ay
Ox9K1vYljleaZ9rKQoQKyJgZHXGfw/WiF4oiY/MMh/iJB6n9BUazNLLIY1EceQCW7HGBjnp1/KkW
ZJ0BD5ycqzjaDyRn9Dx7Urpgo9SzOpK7Y3DKrDcCB849vTOKdlUldpvnUZHlnIwfrn8ue9VmnCo6
geaEJ2yA8gjIq2jW0bwK6yTxqy+Z5f3yuRnaTwDjOPrWbaQ1DmATsk2wHGcBSRyD70jE4MbExMTx
t7+oz07c/WvuHxb+zJ8JtU/Zs1T4j+EbbVYpf7OfUbWW9uNzh0OGDAg4BKEEA4r4iPmyhSAZNpO7
C8rz/wDW9KlO5XK1KwyFBJMcBkkUEqFYjf2/wq5G6s0wKMG5AOcDPWvff2SPhL8Ofi/4j1Dw/wCK
5tWg8QlTNp6WUwSKSFVBfJwcMD2OOBxmuS/ac+E2jfBv4uX/AIc0R7yfTUt4Z1+1tvZN6k7d3GeQ
Pwqk02HK4Hk0rh1jDztydrBc4HTGeM+ueanRxGmxdqjI25OQ3pyRwfz61VaQSOrbMNuA6YPFTLeh
ypB+bcchl4/zxWuhz9SeYskboBHJKDgDcRjjI9c/WhLsGT95GyqWO3HOD3wc1WeJpVVUYgF9p3HH
A7Zx6+1e/fs//st3vxk8MeINfvbybTNBsLaZIJIcGR7uNQ23aR93B65qeZLc1Sctjw2O7T73VQBG
VIJxzxjHTnAqWS4BucqCYz0XHzAkdM59Rio8iKTZGoC9SOv0qEASsi7QTksVzz+X5VaV9QUpJlq5
mRiqZaUqSDjucDr6YxTTdiOMu6EYONgPOB3706N/PdwQ28Yb7vBHp9aawkEkgPy/3fcdeaaYS13Z
c83ylWSTO8/3W46defrVQhri4yVZU6AA4HByM4PtTncS4KhWOQMHDcfT8qWS7dB5bhY0YDbjgscZ
zn86nUzsiybkM7Ll1G3j69sGmTTMrlivJOCe3SqoUS7yisiNkqR0HXuKmFyhDE/ulOB6gnHTP+et
FkMszTgQMgGT1VicZPWrEbssUiltjMCOvT/PSqJjEuXUMF6EH09qcyuGzsUoFOWOemT+uRTdjWKJ
2lDKR5mWzlRnJ/GkWZoo5A/zRFwA7rgHrjPbnp+HvVYQ7F8wxqpUH5+xP1pYpx5W9lDxgfxnp9PS
loS73LQuXk2sG5yxZk4DccZ9+B0x2pomlkmcKp2gbmQ+mf5Z/nUDzRp94DoDsySeOB0H6U9Y22NK
DKxZSVwvHUdT17VpExbY4zNJM7qm0EbSO5GMc+1S3E3mTAkHAIY8nGeM/jVYSB5SOV+U7iSBknjt
259akR1kVipkBZMAk/jz+VNJFK5ae6ecBldyi8EMMfUDPWmJdIkoUN86n1zgEciq0Mqi3JRi5K9A
T+FWE25aV0EjKmMA4GT/AFFZ2saXfcdLcNFDIV+ZG/vDcCaka9aI7ZW+Vl+Qqe+T2HTqfSq/lKDt
3YPJOWz26Y9OD+VQGdFRcDaB8yc549aCubzLq3zkDY24N3wMAAEVZOqz2luYYb24gBcSOkUjR4cD
GeDzxxn8KylTYzFmOV5OBjmk85nQs43SFiWKnOP09MUNIRevryWeVppJpZ2UEEzOXIXPuT+VVbh3
CrIOjsAvGccfpnmkkuDHKwdBD5i7gH9T2P6Uzz2RhligAPB54NCfQh2bJEImeRc5VTyOfTp/Ontc
tEXH3iPm4XkVFA32dhsy2TnZkfMPwqEoFO05JQbsj9allHTWvxF8VaHaW9vp/ifXLaCAfJb2+pTR
xqucjaocAfTisvU9XvNdvZb69vZrq7lf99cXcjPI7AdS7ZLHHrWajMzkhvkAJGe9H2tV4CBh684/
ClZFqTN3w74x1fwrdm60bWb7SZvLMJmsJjE8ikg4JUjPTNO8S/EzxT4vtorLX/EOra3bRS+bHFfX
Tyor4+9tYkZ5/WuegR4ZCyjcSNxUj7v+eKa43SMQ3PfAH61Vkgepd+2+U25JTDKrB1dWwwYcg8d8
4INd2n7RHxRCkHx7rhjK/LCbjjGMHdxXmkrfZ5UdE8xugIPb8frUMyzCZWZsYGcH73+fyqOouZot
tcSJEAzeYGBy+eTzkk+/c11vhX4y+Ovh9pR0zw34qv8ARtPMrTNb2hUIWIHzAFTgnHPrmuGeQt/r
RubOcdeKqtelZZ02uAp+82cH0x9OlFkw5jq/HPxD8RfETUbfUvEes3OsahFF5CS3JXMSZJwAoAHJ
7VQ8LeMdc8Ca9Dr3h7UpdO1a3V0iuoThlDLgg5BBHsQe1YkqP5fmRyDcTkrg7qYkwnR1csjKc4Y4
zx6dfSosWj0jxh+0r8T/ABt4YvNC1zxleX+lXqbLi1WCFPMAOduVjDYyB3Gea81aYSo0Q2mNs53H
tj0qJpkBDs+8dFycA9qgkYuWGx0nBK4Ix/nPrVJrqhe9c9s0b9sP4veHdMtNOs/F8n2OzhS3gjey
t3wigKBuZCxwB1YnPrXkHiLxJq3ifX7/AFrU7x7/AFK+nNzcTyYzI568DAA9scVmyFpF3AEY9e3r
VdZFDjbJyD82w80+WKGtT0X4VftB+PPgsuoW3hLVV0u31BllmilgjmTeuRuCtnBwefoKPi5+0L46
+NkNhaeL9ZjvrWwZpooIIFiQsRg7tgAbj19TXmly7FSj72Ax87Dgj2x171W8yOORHUO2Oeudv1H+
FKyHdo2/C/i/WPBHirSvEOhXjWerabMs1vcoM4YclWB6qRkEHgivYtZ/b6+Muu6Xfadda3YJbXcL
28ptdOSKXaylSVfsen5188XBW5nIjZgQcHnGff1qKd8TbdxiODguck8Amosrj1Y6W5DIykmYMpBe
Q5yffNe6eB/27Pi18OfCWk+GtL1TTLjTNOi8q3k1Gx+0TKmSQC+4Z2ghR7AV8+bowZFO5QOck9Ti
oGw+1iGEeTjsfX+tVp1M22dr8WPjB4m+NfjK58R+Kb2O5vpY0hCwIY4olQYwiZOAeT9fwrW+CX7S
3jL9nu/1KbwleW4i1CEJcWeoQma3ds8OUBHzgcZB6dc15ZNPsDFeH9Pw4zULyKQ4aPeG4IU4A75/
z6UNXKi0j3j4zftt/Ez41+Ex4W1y406w0sziaRdGt3tjOApG1z5jZXJyV6HivGvDni7VPAviPTdd
8P309hqNhKJ7eaCQqysG4BwRlTjBHcVjl/Ktyqb2bI+cj7vsT/nrVVgJMjad+c/J29/1qLWNHpqm
fYk3/BUL4ttdzkaR4QhmZCouRYTeaARjJbzeMcevSvkHX9YvvEmqz3+tX0up3tyd1xc3UhleVyMF
iTntx/8Aqqk8/nOpyykLt2sOQKr3UPy5BDM2DtzxjH/1qpaC32PqL4Z/t/8AxE+GngGw8Hy2GieL
NNs3ZoLjxFbyXU0K5OEz5gBAzgeg4rzf9oX9qPxj+0hrVjfeKZILWxs4hHbaVp4kS0iI3Zl8tmPz
kNjPoAPr5BO8hZGRw3PIPGKgmnZizydF6DHt/jT91iu0z2j9nj9qTxX+zT4ju9T8OeVeWF5E8V3p
V8Xa0uGIGJSqkYcEAZ54yK9C+JH/AAUY8e+O/h5eeE9I0DRPAcN9IrXV74Z8yGd1ByyqQRjcMAnr
jIr5QUkow28H5ic9vWokuYQ5jYc9O2KmyRXNc3PDPi3VvCGr6fq+garcaNq2nyedbXltIVkjbB5B
78ZBz1BIr7Lj/wCCrviZdUtNUvPhj4Uu9chiWL+18OLgjbhmVsZXOT8ue+K+GV2KCSRhuMnt6VDP
Iu0j5hnovpQFm+p1nxQ+JOvfFrxtqXivxNcSX2rX0pZvtEu8RRbiViXI4Vc8AV798Gf+CgXiX4af
DGLwJ4i8KaR8RtEtZxJYjxFI0htYgoCxKCrZC4O09gSK+UZHWOM/NvyTg4qKOaTz25GxRleOlJ6k
pNH0P+05+2j4q/aTsNI0SbT4PCXhSwiWNNB0qZvs8rKTtdhgfdUAAY4rD/Zm/al8VfsueNhq+iEX
+kXStHqOhzzMkF2dpCOcA4ZTzuAyeR3rxOeXzBvQbdo5OKjV/MYbhhlGAM8n6VO5adj7Y8U/8FMd
Qh8D69ofgD4Z6F8NtS1xVMut6HNtlQg8tsEKhm27lBJ4LZr4ouZWkllnkkeaWR2kkZjksxOSxPcm
mzSMj4fkKMYpu55Yym/IHXtQtCtGFxJtlJjTjr0FRoylcF8gnJDDOPYUry/Ky5CFOeB1qGdssM8A
5IYN1psnRH1/8DP295vh78HLj4a/EPwXB8VvC6Oh0621G5WNbWFMbYiPLbcARkEnI6Vn/tMft4X/
AMa/h7pHw98J+Gk+HfgW1j2Xmj2twswuwGVo0LbFKqpUnAPJPJr5QaQqijhunSmSOe5/OpVh3Pd/
2WP2p/En7LnxDTXNLY6lod2PL1fRWkCLexqrCP5iDtZS2QR6YPFfRL/8FJvCfhLQ/Fkvwv8Ag1be
BPGGuwuG1tbuOQCQsSHZPLOcEltpwK/PxXIfr0HPFPZ9zNnJHTpjjrRoDdzprfx54h03x1H4xt9U
uB4qjv8A+0k1l2zMLjfuMh455zx09sV9yP8A8FJPhr4wvPCGv/EP4KN4o8eaDawomuR3USjzkIYu
ikAAFxuAPTJGMV+eX2jGVG5jnlcUxZJEPHB6ZoBHvP7Vn7UevftQfEO41vVWks9HtGaLSNGZlxaQ
tjIYqPmdiOW/DivVvgJ+3ppngr4Nah8Kfiz4TuPiR4LxEmnWySpG9uiuXMbtgFgGCkHORjFfGwC4
O4tk9AfWmyzqRsAyM9B2pWRVz7P+Pf7e2l+K/gjpvwt+EHhS6+HHhDy5otRtpHSR5o2O4RowJIyx
YsevavJv2S/2pNd/ZY+JEWt6eGv9Bv3SHW9JUri5gH8SE/dkXOQe/Q14RI4QdA2e47U+BWQlgwOB
kD8KLEpo/Q/Tv+CgnwX+GV74y8WfDj4O3/h/4ha9BMF1C5ljkgEzlnDsu/hN5DEKATXxBqnxa8V6
n8UJfiHPq8jeK5dQGpC/HVJd+/CjOAoPG3pjiuMkdpEG6RmK45B7Uw5baCSD2AH5ZoBPU/RHWf28
vgj8Z7LwVrPxh+E+q+IvHOg2yrLfafJHHA8wYElV81dykqG2sOMkV4H+2V+1/rf7U3jfbAJtG8Ea
Z8uk6PIFVtrKu6SbaSrNleBn5RXzcJGiYchhnJ4xinSTt82QAM/nQkij7O/Z3/bn0LQPg5rfwk+N
mh6l478EzRpFpgsQnnwjzGdo3keRDgHaUIORjFbPxS/bz8F+H/gQnws+AHhrVfBWn3Uk41K61PaJ
hG4JPlSLIzbyxxuPQLxXwmZXLgrkk8Z7ikllkRyd+OmBmiyJvZnvv7Jf7UuvfssfEa31m2ke98O3
7LDrWlv8yzwlgWdBkASgA4Y9ehr6o0L9tj9mL4dfEHxb8RfBXw68Sw+P9St7mSKe/jjNr58v7zIT
zj5YZ1XJUdM1+biTuSCQAwOaWR8BmZjl+qk5osi7npfjL49eNfHPxXl+I19r1xF4sacXUFzbyMi2
jDBEca5+VBj7vQ5PrX2L4i/bR/Z//aI8OeDrz45eDtcuvG+kRuJ5tAgCWsrFxkZL7ipCKdp6ZIr8
7CC0RYOPlPXHUU3ew+6xxnqelDSZLZ9a/tqftuXP7RupweF/C0Evh34a6XtFrprxeRLdEoMmYI5U
qpB2r0HU81037Mn7cPh3w18Ite+EXxtsdQ8T+BprVo9LntIhcXdvuyGj3O2AADlCPu9PavimUhYy
+QX7j096jSdwrE53Z4pOwkz9AvE37bXwq+C3wH1LwR+zjo2q6RrGtXEq3up67bkSwRPGVZ45A+S4
wgXsME189fsy/tT+Iv2cvihD4jsbp9W0+/fZrWl3ErMt5GzLvfBIHnAAkMepPNeDrdHhmO7GTk9q
gILD0IxyaSLufprp37Tv7H3hb4zaj8VdM8M+Jz4un8yVLWWwX7GszRhSyx7iqscdenzE18XfFn9q
Hx18Zfi03xC1XVZ9P1eCRX06Kxnkij0xQoG2EbiVyANxzySc15DCPJ8xgCynueufb8qhXa43M219
w696fKgT1P0Y8Q/th/AD9p34deGIP2gtH1mw8aaVJKHvPDdmQkyn5QfM+ZsMu3K84Oe1ecftj/tv
6f8AE3QtM+Gvwpt5vDvwu0+3jEqiKS0nv32spjlQEAxjgkHO49a+MnnIUlshS2BTZZ8sRuPBwWpW
QuZo+2/2R/23dD8GeCdZ+FXxjhvNe+Gt9aSpazGJru5s3ICCNQc4TbkrgfKw9Ca7jRv2tvgD+zF8
LfEtp8ANJ1i68aaxMiC98SWjBoYyCpIlGCAoyQo7tmvztMm7MisQQMDJppaSUFnk+bPXNK1x81z3
H4HftWeOPgb8XF8e2+qXms313JnW4L+6lkXUl2kHzOfmYAnaT0OK+xtR+P37FurfG62+Kd3YeIk8
QK8c7WEelk2LzKm3eYgCCehJ7lc1+ZbTMgBLZc9qbJO4UckYHY9aHEfNY+gv2k/2t/GH7Q/xQTxL
Jf3Gg6XpU6nQdOs5njSz2MdkwXOPNIwSfoBwK+kbP9rr4NftIfA3SfC/7REGpWfinRbmOO21fw/Z
tLPcRxxgLJ5u1ipbc+5ehxmvzr84YByxYHgEcCpVu3xh5C3p6j0o5UK590/tHftr+FNK+EOi/CD9
n6G70HwnHaoup61JC9neTFflMZwAWLhcux65xxXNfsSftnWnwXW58A/Ee0k1/wCFurKfNjnje7/s
8gMRsi5yrN1A6HBxXx2lyZf4QwPJJPNCSMz7kzgtgAcmiyHc/RvwL+0Z+y1+y3pXi/xN8IbbWPEP
ju7jC6dbeILJxHbkscrHJsBRcOc98KB7V8F/E34ma78VvGGpeKfEl7cahqt85aSS4l8wou5mEa56
Iu4gAcACuZebeACxV+4bvUUvLkED2o+EltsRnAwGUljzj0FWLGV7dkdJD5m4Mm04ZSOhB7EYqs9w
T1UgKMZNRx3Hl88oefmH+FW1clOzP0M+FX7ZXw3+NH7Pl/8ADP8AaOkvpLjT1RNP8Q2lu91eyLvL
k7wrFHXaik/xDGeab8Qf2xPhl8B/2e4vh1+zc1+1/qizR6j4k1K1kt72NTgiRZMLuclmA7KBn6/n
4lz5Q3bMjJOadcTlwOvGAC3X2qOVXNE0fU/7G/7ZOp/s9eM5rLxFPJrngHX5THrOnXIeZYvMYebc
IgB3MQW3D+IH8/oXw18Xf2N/hD8SPEXxS8NTatrXiSWO4u9P0S701xaQTsd4EWY/kO5QBk/Ln6V+
aSS4QK7EFT6ZJpRIZB8zl+O54xSsmwv2PbPH/wC1/wDEbx78b1+J769c6frdpMW022tpn8iwiJGY
UHGVO3nP3s819a/Eb9oP9mD9rjwv4U134tT6r4N8cWaOuopoVhI3n8hdrzCM70wikc5G4ivzad1h
+bcpBB6DpThNk8k4AxkDr/k4p8iE5WPtP9sz9tvTfH+i2vwr+EMEnh34XWESrJPaRPatqQKfNE6E
D5ASSc8sevrW9+yl+2p4Uv8A4Uat8IPjw8+peD2tZV07W5Inu57Y7FRIgArbdoLFG7HivgrzJJCh
Mnt0IwKc00kQVGcMnPI/nT5UJNM/SDSf2nP2e/2Pvhbrq/BCS+8YePNTnWNL7XbN4ZoInTaSJTEv
yrtztHUnmvmr9n79szxn8Dvizc+N7u+u/EC6tc+brltc3LN9tUgrubA5ZA2VPtivnmOeM+Ywzhm3
EN1/OoWmkU7E3BRk5PPFLkRVz9P5/GP7FWu/HJfiY+tX1vcnbcTeHRo0i6fLMIyu5kEWDn7x5wSK
+Vf2l/2zfF/7QXxRtfElne3XhrQtAmD6Bp1rOyC3dCQtxxjEh4+g4r5wjuHBGQWHBI9aiabYSxLn
J5Ud6nkSEpH6T3H7TnwP/at+A+k6V8d9Um8I+NdFnSGLVtNt3nup0RB+83BHO1yWLKcjcM1znx4/
bU8D/DP4L6P8KP2c7uSOwlgX+1PEogktbpirAEfMi72cBtzdhwMV+fzXDs2FVVI+6B+mf0pGldS+
cEH5iF/pUqKL5j7c/Yj/AG0bD4bW8/w0+JxXV/hbqyPDIb6NpxYZV2KiMAgo7nn0OK9S8C/Ev9lH
9lMeK/HHgPWJ/Hnjh42fStP1OydBBKWPEbmIeWDuGSTnC4Ffmo0z4QxnHGG3HrTIbgRySAvluhCn
oaFFJlcx9BaN+2l8R9I/aBPxam1mW61WWUCexZj5Elpv3fZBwSEwAP1r61+Jnj39kT9ojxt4Y+IP
ijxXd+EfEUcUE2q6Ja2Evl3Em4OVmKx/MeqlgeRjPSvzJ3lmcA5IOSpGc1KsjlgA+7HRgf5f41bX
Yk+xP21f21h8YdTh8DfD+MaD8MdCcQ2kVpmFb5kKlJdm0bVUr8q+vJzXqnw6/aq+G37Sn7P138O/
2gL9dL1TS1i/szxNKjT3ErF2JdcLlHUIinHUMK/N9pmGCdzYJOF5wfWnswYIc/KQRh/X1qXC7uil
JdT9HfEP7T/wh/ZF+A1x4X+AOoHxP421wyw3XiOaAxz2wILLI5ZBu27sKvQdeeleR/sXfttap8HP
GNzpHjOSTXvBHiS4c6sly5kEUkxzJcbdpzwSCo6ivkO5u5polZk38cjd096ijm2BiFKODtC+o55o
5RaH6eeFH/Y5+FnxR8QfE3TvGcWtlUmu7Pwy9g/kWs7YZRHhBggrhc/d3e1fLHxD/bt+Ifj74+W/
xNtL+50Z9NmZdI0tZBJFZQNtDx8gb94X5sjvXzXI4dXVgWJBB9venpcDuOQCc5wSaHC/zC6P0z+L
HjD9mz9szRPDHi/xj4wX4YeMijpqdvaoXknQHaqSvtAPyrlSeRux2rzv9sT9sfwwvgi0+C/wUQab
8PrSILf6nbIUF7uU7ogrKDjJDMx6n6V8JT3Lun3QPmAI9uv86ZLMzli5AcjA4zz/AJzSsPmXQ/RX
9n/9q7wL8avglqvwf+PF/BaW1ra7tJ8QXxRvIIXy4gox99Mkhj2JBrX8KfFj4DfsJfC3VL/4ceJr
b4n/ABE1V2t7W6TCS28LLxnA+4rKGI/iJHSvzUKlJFcNt2khcEBu+MU7zy0aFiVGCqnPQ4/wqVG2
lxKVtbH15+yz+3b4l+FXxZ1DVPGN3Lr2g+KbndrkZjGWlYBROCF4wOoHGM9+a9/g+F37I2mfHeXx
8/xK0WTw4HN1F4UyFt/O2DqTgn5xuwepr8wxcMyRwlyVAyD37niieaWZmHmyblYkKSRj8aFFNkyn
d6H1v8ev2/PGXxV+N2keL/C87eGtG8NyZ0GxlRWMRZdskr4HzFxlcHOF6DmvoL4rXvwN/bu+H3hz
xfr3jfTPhl46tnEGpm4lDySJGGAQqWX5c5cHHGTX5hPM7MisG9MN9P1pyXQjkJk+YcrxzxVuCvdA
pn6K/tQftSeDPg/8I7T4GfAi5tbjS5rUHWdetW8yK4WVCrqjZOZH6sc8DArR/Z3/AGl/Bfx5+BF7
8F/jBqFppMdlZs+k69eSLbw2/lqEhByeXQtkZPIHevzce4U7ShBxnBBHA/yKYbokGaNCWA5wcA8f
/WqeRpqxXOfp58OL34MfsFeBPEHjSw8X6T8TfH11L/Z2mxaTMoMaOv3SgdhjKlmPoMCvIv2XP2+t
a8IfHDWtQ8dTfbtH8YSiTVXWNV2ShSiEHjaqq3pyFr4iN48sokZi+DuOR0J6n69qdHNKjsyoYyDl
QuRtoVNDunufqDD+yH8BYPj0/jZfiV4ffwHC5vk8ORXce4S7N4UvuO4bsNg18uftxfthXP7T/jqK
00u1a08D6I7ppcUke2WViu1pWPocHA9MV8xzXG/dLISSCSRnH/66i81uVDlWI4yOvpimk07kysxs
8v7gKEIduoBxjH+TTUkLyFn3FAvKKMYHtTVLPvRVHoMHgnnmlR3RtzPsIJBGOQK2uybWJJs+Qyhe
eoHoe1V5mZRHmNyNuMnrgHk1J50s7FgfvHA6fnUSmQK2ULgA5Y8j1oQMeHGG8pSD6nnp2qRZT85G
VKjJDcDj1ojKogY8pjdt/wABRJO4zJGqjcdxT14qiSNbnIYGRmUjAX0pVlKxiIKdxYkHZ3xTLcrG
2SpQOQN/U4PWpN32dmIfvlM96loaIbtHTzGY8rkHIzkmodM3F3weTjC5zVu6jmfO4HawyCSOevX3
qLSoUMcrhQWVup4OMEZBp2Ez1j4K6QNU8YaBAv7wTapDGOMdCSfoR/kV5h4phaLxJqcTHeoupQrE
843H9eK93/ZUtY7v4u+D8BV8vUhuKsQQRDI3TkdRz2x17V4JrQzqlyyKVYO7Nu5LHcahpEwS5mrF
RvLWNFJJB/vfjTCZUkYx89vm4x7VI8Bl8kMTk5wOMUSTFcFXLoegIIIrLY0asdHFA1q2DIyxjgbh
jA/wq/C6xrszvB+4cd85Jz6nj8qqrcI0rqR5jbcmTAGRzxUyoPLt1aPyo5CQHQZb3P059a9NSV9z
n5W+h3ukTKXhiWMsx+dCG56d/bqfxNdPZK9481v5ToGVgVXIGAhxg9v06VzOkLaWaxSrcYlYBVyA
u7vjHQ9MZ9PrXWaRfweZJ94MQSxZRypGPwPQZ+lKUr7CUUnqfNdsjRarCzvtVZccdcZxmrXi1Wk1
u4Ulny4O4kZ5HscAUyMeZrKzTSBVDlgSuOnt69Ki1HVBf6tPcqgKuQAMYrltqdUUtj9F/wDgjs0X
/C3PFCRsA66Dgqep/fx8j8TWV+1SscP7S/j6J8OjagwQBeUYohI+h615R+wP8fov2dvikdWutPGo
aVqlr9hvduVkhi8xWLqAPmxycY59a+uP2wPgjDqEx+MXhDUk1vw74gK3tw7yKrQsyjYYycEg4Awc
Y4ojbmHVV2mfJ8beU7Mk87TxsBkjaUYHjkEEbTj9DX60/BbxP471z9n2PV9d04x+MVtJzaQSxbDL
hT5BIJGc/Lnpmvi39g/4J6F8VfHOo6vrbtPb+HVilj0yRd0VwZN4BbnJC7QfrXUfF79szxboXxpn
fR5Z7DQdEuvsZ0dXAhuhHIVLMSvCsvp0xUzTb0M7WVj5r+J2ta/q3jbV5PEsc0WuzXjGVJQFZXJO
5tmflXqQB69K/RP4YXnh+0/Yl8P3Xia2nvtATSw93GpJkK+eeRjb0PPbpXmf7Unww0L4l/B/Q/i1
o9l/Zer6nBaTy2tqUxcPc7MFnwMspY88Z7163bfCnxK37GMXgk2Kv4kOlCH7G0gI3mXft3E4+6fp
Ut9yEnZ2Pjj9szxVpd98Zry48LX0EmmtpNqHezciPgOduAccfKcV9N/sG3U2l/s267cRn99bajeT
BnTC7xDGx4ySRmuF8FfscReC/gl488S+ObRbjxJdaXcNbafKwkWwEYZk2tyCxYA8cYwK7T9hllk/
Zv8AFic+Uuo3y4cgnHkIDnn1zQ2mXH3VY9Q/Z0/aJ0f45aKFfy7TxNZKFurVsLuyCd8YySVwOfTH
NfGX7fNxJH8er4NGrRC0tsJkKXLKOp7/AHeOKz/2Ibp/+GpdEQPIf9EvUdpOMgRkYAz3ODVz/goH
bKf2hrmcgEf2XbKGYL8rYfGATk/XHFKK1B6nj/wNnaX44fD8M8qofENkQ38O/wA5cZ9j/PFfVn/B
StFXVfBkzkRqILhA5Hqy8Z6jt+XtXlP7HH7P2o/Ejx3p3ja5nfTvDnh25S9e7k2ss80UisIx83yj
AyTjjAq7+2/8dNJ+KvjSw07Rh5unaOska6lC4dLkvg7k9MbT/wDWrZL3tAeqsfKrJm5VoW8suQS2
Bn2/kKAgRWLSbiH+bYCp6d/5VKQsIBRguesjqdv1I/wqvKGkiYKixSqDllTBY8kBs9eDnPvW+tjF
2HQuVgZvOdcDejJJ8w5Hr0xxz7cV+nv7DfxD8Q/Ff4c3Vp4t0wX1po7pFp+q3iNJ9sQ78/M4wxXa
FyM18J/sz/CRfjp8UbDwvPeJYWoR7y4cQ72aOLbuj7Z3BgPz4r60/aR/aRuP2crzTvhz8N9Ni0O3
0SFZZnlRXRo3AcLGGz/ebLE9TXPJamkXZangX7XHxR8SeOfiXqNh4gebT7PTJTBb6ehaONQHIWUq
T1Yc59K+w/gj8E9N+Hv7PujX/hLQdK1bX9YtrfVLhvEQBRnljVnUHHygZ4A461wnxN8H6F+2L+z+
3xH0+3OgeItOhmE7yqHEywqS8ZI7HqrDkdDXXeJhdfEf9iTSLLwZu17UIdO0+3a3007pPMi8sSJg
cgjacg81D7Bojtbb4Tf8LF8Ja9pPj/wx4Y0k3AEdtd6AFMgGMlizLlSDjH418Sfs1a3pfgD9oWLw
xJo9l4h07UtSOjJLqCLMY18w7ZVYjqcZI+laHw0/Y4+IHi+y13UfEuoXXgK0s1WVH1RpFDHDbiBk
AAADJJyDXmnwPgNh+094Ds2u11FrfXolWdclWAkK7lPcHOcnmixaaTPsH4q3/hL9m34naTotr4H0
XW7HxlqCXcovoVZrQtIkbiMFT8oyrAHgc9eleW/t9fAzw74E1XTPE2g2Zsn1h3ims7aILDGyqDuA
UfKCevvXe/t5BYfix8KZjJs23Mb7cjD4uYuGBHTnrVn/AIKVsI/C3g1sN/x83IJXOcbE7D3xTg7M
zktNT88PMeSFUC55IYN2x3PtxTtjeXMiSbs/Ntb5uo6Dv6e1Nv8ACDgFAeXI4LDvikuMS+awJQDA
UA4Zf1GK6OYwsrn1j+wT8C/D/wAVPFOta34iiS/t/DphjTSpVDxSySq5LSA9QNowMdR7V2Oq/tFf
CHQviZP4f1v4NeHbDT7fVDYzaqLeI7VWUxGQr5XOCM4J6Vf/AOCYd1b2sfjq0ku1N3MbR4oXcbyi
iUEgE5IBJ5rwr4h/A7x54u+O2uaVYeFNTii1DWZxFfXFrJ9lCtMxDl9uAnOc571lu9zfROxW/aY0
P4eaR43N18Ndbi1bTNRYzy2NumI7J+fkQkD5fQY4zXuf7Ongp9T+F+j3F3+z9pniuOd5Nusz3UMb
XC7ypcxyAkHqB9OMCvB/iv8AAjV/gF418NWHiLUtMvmu54rgvZuxARJFzvVhwD6cg8819n/tvX3i
fRvhv4Vm8ASanbqt4Qy6CWUeT5RKnKDAUYzjp1qWWkkeR/ty/s3+HPBXg2Lxv4dtoPDiyGKyuNKt
YgsYdgx3gg43cBT2718XhDbptMe/GeB8p7nA9a/Sr9tGK91v9lfT57dJb2UT2NxcNGpchQpLM2Mn
rj8a/M+ZVnikJfZL90gvkYPQDHUdPeqi9COuh+kXgYpc/wDBO+7WPaEXQr9VCngBZZQB09q/O6/U
SXTuxYncX+Ucdc+tfo98MNOvLj9gC4tZLSb7XJoOo7IPLKu+WmK4XryMH8a/OCRxFGd25grtHJ/y
zkB57EewH41URydmfRH7BLhv2kdKZWbDadeqB2I2p1/IVN+3wpH7QupESEMdNs8AnjGHqh+wI0s/
7SOkyW8LG0XTrovKAcKTHwD25/XFaf8AwUNjkPx2fKOiSaVbeW5B2tjzAcdOfmA/GklqKdmj5nlC
gNskMbAclmJyRjge/eo4mMqqwUKWYk4xkkAYOe3T8ab8rmRnUbmyXYLxn9PQD/8AVUUMpzkB3Tdz
xjJ5wOK6Ucelzq/B8+h23iPTG8RWtxqGiJcBr+C0k2zPEeu05HPQjntX6gfsz33w81D4RXreBNOv
tO8OLdXC3Ftf5M3mbFLk/M3BXbjmvyit2iCPMRmeQY2vzg9B+Vfoz+wVL5n7PfiOMfvSmqXI2vnk
/Z4eOe3SueaaZ2xS5bo+Qvjy3wwvJ9Ok+F9pq9ipEn2oagSUcfwlAWJHU/lXp/7NvwF+Hvx2+HGs
2QOoWfj60jebzN5W3UFisLYxyMpg968ZvPgX48t/BreNH8O3cXhryx/pLMhwA5XdsB3BSR1x3Hav
o7/gnBJv8ZeMsMzKLCANkYG7zX6evB/DpSemzIja70Ovtv2NPhKdVtPBN1qeqSeM/wCyjey/ZrvE
SkYQyFQOMueAeor4w+JngPUPhR4vv/DmtCO4vrCRY5Wtj8khKBxtYjjIZevc816f8Tvib4i+EX7U
PjzX/DV8lte/2jPbsZ4VlUo+OCCP7wHGR+tJ8C3Hx/8A2ntOuPHKnWBq5luLqH/Vws0cHycKRgDy
1GM80JyQWTkkep+AvgJ8DfHF3o9jaXfjSLVLuNCrTQNGhdlDZz5ZA9c5xXhf7QXwNvPgj44k0q7Y
XmnXe+fTbgyKWeEHbhhwQwPJ7c8E19rfEn45+JvBn7UXg3wDYG0Tw5qSWyyxyQgyfO0qna2eAPLH
6V45/wAFGnMXjfwofL3o+mzBjxgYlXGPU9RQpyuDirpmD8Iv2U9B1f4VXPxB+Iur3WgaB5CXVpJp
8is4hyQzyAo3U7cADOPStzQv2VvhL8VdJ1uP4d+ONV1rXLOz82G11FAsYbny8hok+UsMEjOPUV6V
L/pH/BPGMoNw/wCEeUoPpINv8hXw34Q03xL4jvp18LWusXuoRx4mj0kyNKsZ4yfL525x174oTb1u
NpJ2sb/w2+B3iDx58VZ/AcEXlX1jJKt9K5XZBHHKscjfeG488D3FfQevfsr/AAf8Ja7LoV98TL2w
1JGCPbSWwZgWG4DcF/2geD3rB/YNivLP9oLWLW9hkguI9JuvOS5XbKJfOg3bgeQT3zzXf/H39sLx
F8NPi7rnhrTdC0O6tbAwKtzfQu0rs0SuRw69N3H0pa3L0R4N+0f+zVd/AvVrWSGeTU/DN6AsGoSb
VcygEmNlB64GQehFeX+CV0qHxhoB1z/kBpqEJuyVJUwCRfM3Y7bd3SvX/wBob47+Pvif4f0PT/Fv
hpdB09ZvtcMqWM8Ank8tuFaUkHCsTx7dK8Q0ixl8R6vYaVagi8vZo7O3j3bC7vIEUbuwyy8+9axX
cy5tdj3f9rw/C5tY8PH4ZvpUmYZxqCaQ2YiQU8stjI3cyc9a8AieMtsQ4bGBk8Zwf8D+VemfGv4D
+JfgZd6RFr5sriLUUmeKSynaRQI9oYHcqkH51/M15a0hXzJSdrZw5HOCCcA+p5reKVtDBq7JZkjM
YT5iFIOByM5/yfwpATNINzOucAleSMUvmttQDdjB5wMk++BzUScgrvZS2OByen6VSdhWJXUxmQFC
FIyB0pokDAxREhD97IIxQRLIGLusg5DOSd3t/n2oDkhdwwechBjjNSyUtSUoJUDDkMcjPYDv9eKr
iJlifJQOM4Ddcc/5600v5Ctjeg5xuOeM+9L5iygAb2I4Ve4ouOyJxNKJGJmYMzZ6ZPI702ZvNmV0
JBfr3AzTGclHHJVjtLHjFI86rEiKCfm+Y9qYxZFd1wwynIUYznj1qOSUW5LOzMhO0Ke30xSvKkRT
e5HBClV6H0qOa4VzHjIAY53EDH0H5UEkgv1RlMTZyDwR1GOn41Wmu9sO5TtwGOfQD1+lQm6XzHjM
bMQcYb3/AMKBH5cj7iMk7vUD09/wrK+polfUsxmUKJPur2AAOT6/z+lEs8eUaLLI3BR1ORx9R0/y
KheZ/KBJwoHPPSnG4D4LgHGWyw9v/wBVWmyrIsSvMrRBXbJxk4+8On9Kjlgkjk3BygH8XeopZ/OQ
kMyqDlSBk49MEUspZ4925xk7BkcfWgi3cfExhDycBm/vD9eCPy9hVUny7n5vMLhgC+M+nP16VKCh
WRFO1lBbezkE8dB+ZqLdIQU3ZQ4KuecehqbCaHKB9oBMhG7nLAdD0z07AVFc+cGcgtn2IzQZssPP
JZ1GBsIBYe4qGeXzzJhSgByCw6flRLQtWRJK/mWyNkqxz0btgjHFVWlCO25CVbGT3z9akE6RGLC5
YZxnqBxUc8QkLrncCd20nGMetQgW5EQi7kwUVc7RjrUEjAD5fmDMTtwOtSSMku1gpVgPvEdD9O9R
zeWsUgyPNx1A7/nSNdiJpZkQEqxQDOcc1EdsVxuBCqxzjHf1oeQTq0KggL1fHJqF3k3PhQ38JLDn
GfXtVXFYLm4PllUIEnLFienFVjMGcNtxuHUHr70+YqJNw+dVOAWOM59qgby0gyzK2WOFHIHtSHYg
ueWkJYheiHPfr/SkeVU2Ttk4JYLjvkfpSuWXmE7zjcAx61WdikDpjDj5iW4xx/jUuzJemxFPeFoN
2MDOCQDkDpu/PNVnIjT5jyc4IHHc/nTp4vNCOpOPu89RxyagM8aSsAzIjZGw9FHH+c07CuMKtNFu
ZmLty3rkcZH6VWhSRQXWULk5/wBqp90hJdQBGx+5nmq048tcKVY5JY/3fYUbFpDZ1kUZLhmYbfvH
k1Xuo2UxpzHJnL7jwKe8kcJ4578npVdttyVbcyMCTgVJdkyJtyuQQeVJG1fyJqNzncCrYyMEY9Ot
TFiHDNkleFP+e9Q3UjM55AOOo5J9aQ+RISfEYwHO0jptA6+lU55d8Dso3AcFTkH1zSttlQ8nk4JP
FE6eXuiDBiFyMDpmmKxDOxlQAEgc9OMCoyPNl8zavHUk0TMyFTtJz69TUYYIrFlOQc8c0ENIdLuw
VG712ngAetIsjHO48dOnf8aV5dyuS44wQcc9aiknjTbuJKn5jxyTSGiJ32naCCW54IoLDfuCsARj
DemKdIUyh+bYePlHX2NMc7gzBdwX7qikyrCEhV3ZxzgKOaiklAZWPzN146g09UaNd2NikYI9D/kU
yVo1Xg7T1HrQO/cG3hC7kliAdo7VC0bOoAUZDfwdTUryxqhURkqT179KZIBFJnOcd6LE6DZwgJXO
09ahOSAScjsAKmlwY2ON5x3pjKBFnPzK2Ae2KQh0h3NwRjBwccVAYnkdeQW6HtTjJn5Af3bZxxSI
DuJyFCjJXOM0ANZ2DBWXaBnp2qVlBAbcNnucdqYZhI+SAoxwp5pXHAbACigdxmzzsshzn/GoyHMo
7tn7voKkLKIhkHIOQabMpjJ245Gc0AP8p3BHUjJ6+1QNnkKec5wBzU3mZQ7O/PI5FM+4MBQc99vN
PQoYC/zEdQPrTnDk/NgBu9IZQxYqOTnoO9JuLjYex+lIFZ6DndtpP3cDBOOtPjJkO5iRt7D6VFIf
mHUDOMj1qQAxdTyeoP8AjSGlYA6q5zyBzTXnMoyQdxGcd6SWTg9jwB3xSySKVCqM46lu9AMayZDc
Ef0pNiu3VnfPAA4oJ8wnacD0HQUvmIu4ZO7sw4oEgkQuRtOAOvqOKFj/AHoBJYY7Cl8wspBJzxnN
O3b0ywJP96ixQ15SNy5BI9eO1Ro7yK2MJzQfmOCRk5wTQVZCGYfpTIaJDLuyHIXHBPQ0gk3NhSF9
QRSHZKeoyByPWkeQMp2DYvqfpSsFxRDuyjM2xuDk0wBkcAyEhm/ClbaAfn3E9hTZCAoGM+hINAXR
Mo2vjczYGQAeKeSFRTICcH7tQKjMvJ5Hb2p67WG7OCeee3FS9CkNlQTSH5iwHIHSpvLHlg7jx27i
otjqD5ZyxJ5+lNmY4beTjPI7H1oS7AxzbUORwFPp1oLbSXUc9Dx/Q0YQjO3AB6EURhgOAThuciqJ
RGsjbMAEfj0qVkwVLHcO5HHamriJ2fBZTzt9KmYRs7c7hng5pMqwEIsn3d2eTSMxwAqEDuQOlIWU
/MRsKgjHc05y4X5U25/vdalalEanngAqPz+tAJSRw2SM8Ecc00gockZz3o2EoRn3xVktB5Z3MWHT
t14pRvTcu4lu6sMHFPQZO0sVyOKDl2LN8wHG4euagaSI5osk87u4IpzRkfMUIx19SKHJygHXvml3
kthjheuKOYLXIlVlcIzgI2cZP1qy5LK27JxxwP1qOTbKFBO09M55ppHlqTksfamUlYeAiHMuMkDG
ec0xFCkEEjJwBmht0kY4I9qC2Au5SuOMY60IYLGVc54HHynnn1prfOMAt15XFOM4zlto46Y/KkkQ
+Y0gbBxkAU7MhoQssa8Zx/tU4BZ3zu2qAOneml12KRg88d80NkHHIIznPah3JWjHxyRqSvLY5BAo
eYEErnBzmmOgjCksG47fSh4+VXzAM881Frlkg2NnaSGA5yciopMqyucr34FBPkkj73GPcVI0iYG1
sOuAM8mpBCSSdieAPmJ70uz5QQf3eBxzzQ4kJLNkAnr/ADpPMZogm7p0BHFUHUc8oDtsX5j0B7Un
2YHJD4cnoB1OKAD92Rvl28Y705gCVCsdw564AFFhiSSqw3MhEgHPGRSxqQiSby2RwMUwfMzp0wOT
mhI1EUi5KlVyDgHJxTC4g81S/wAmMjBH/wBepNzOoU7mROhY9KieXC7cldwGR15qaTy2JRQ0kiNj
eOlMZCGdQTj5ehPqcU4O7qCCQQe5/lTvtBjVE2At3680xANjZIUk5zmgAwMMT8xTjAPfNTTSqygA
DIOSwGcj0qPY/JIBDdBjr709fKiGWY7U5wh5PtQAiNvALAHAPOcUkiCVuSQnovc0jsquXG7Hp/Kl
hY7EITIPGW65xSvYAUbUfGWYA4+tOkXb8yqeP4T0oeFopSCxUAZwO9IEKuWDthwQe+KmyYDXJc9S
hUbQfbPXH0pyyhF2uCehBxgdKY0YdgUXb1yV9KkZQEVW27FG75RVWQrINzvMC2M4IBBwTx3pBhX5
yyHrjjtzSyBQoch+e5HJqQpmEncEHYg5z7Yo2CxEqZkKKC655Azx+NSgRykk/KFGQAMGo1uAqsOA
HOCQCDj6U6dQsamP7qkggEZJxmhMLoJEeI+Yw3rjpn1+lOecgAsWPQfjUJdcrv46nA6d6lkVY43J
yXUZABpMYyS2kJJZsKwwM85NRRyuDGQx39+Og/yKlZl2Hlgw5AbOMe1RFfNbchyhH3c8596VxE8j
72O1ckHJB4B60hbfKTKwbA4C+g//AF1I+2ZWXbliOSOOai8sLIMjseQe3SrTGROoh2MeRkjceKmD
NBEzxkMGJ+UHp+P+etLuV5AhJ2MO/HSmn5CAMsNvIzjH+NWIhb9/sGeowe/NTOFaEqSMg4GMZ6U5
IEJfHCryc9KSHyo9xBLxgjrxmlcVh4SSOThdnHKjIOPWo44Y5IiQzF1O1Sq8DNMY/cbPznPG769P
ypzgmQDaUMuG2huTx0xTKRFcrKAyuPuD5ck9al0x5Ft5do2K0gUg/T19KiugVBQIWBAO7p64q1Yh
7nSnJ3iIyhcgfKDg/wCFJkPQ+hP2U4Wb4n6BKWiQ776Qp6lLGdgx9RwM/WvnS6uRcTTOMFmdmPzE
gZY8Z79cfhX01+yZZuvxGs7hovMt4LDVZfNJO5G/s6ft9cD8a+YmVVmZVVUc8lT3POcd8VEiad+d
v0K7BnkCBguRnNLLGVY7So+poaANKd0nzDJ57exp8Ku52gAMB8xJ4/CsWanS2szrqKPIwk2j5kC7
s9hnPXtU3MFyo8po1DbgmNv8xxxVaBWgM7RofMUsuJMcAfxVZtpmlaN5GdgXG8t97HHf6D9a9CS1
7c2paB8Z/DmtDQLO5RZtetIEy1xINrskxDjy5FXKkjjHrxj57/ba/bE0S3063+C3wjt7Ww8BaAy2
95PZqjxXbIUdI4jyVRWDAt1Jr2jx5/wUL8B/sxeOdH+FHw88NWmreE9BaPT9bvxlDG6uI5Nm0ATO
FBLMeprxr9vf9m3wTqGgaV8f/h1qNlD4N8QPDNqtpEwjceayKstvGAMHBbch75PrUaPcpNXO2+JH
/BSn4J/GXQLHR/G/wi1jxFbWR8+NJbmJVWTZtJUhweemKX4l/syfBz9ob9jbUPiv8MfCo+HF5pa3
epFTuY3CWwdZInwxGDtyCB1HfOa5rxd/wTU8P/FX4d+EvFvwB8XJeWOpKz3KeJLry129im2PKlWD
KVI7ivafFsOn/sY/8E7NU+HvjXV7ZvEurWWqWFpDp7GXz57gysApwMAKwyTgdqFZbMHboeAf8Ego
MftC6/uRkeLw3cABuwNxb/l0P5Gvrey+L2iftH/Hj4s/s6ePNCGr6VE7vZSqiqsUUSxZyRht4dlZ
WHfivlT/AII8zwR/GjxWJHSKVtAKorvksPtEeSPy/lX1h8NP2a/EHw5/bO+Ifxk1vUtLTwpqdtdN
Btucyxh/JJLLgAACJsnJotdtockkz4G8Bfs++CfhH+3ZH8NPiNqtlfeEtKmd572+kNrHMWthNBvO
7jl0B55IPrX6T3WoeHvi38dPBV94f+MPhy60bQnNxaeFtLnhkmuJBEyt8yvkgDnGDgD2r8hv2w/i
3ofxa/ad8c+MfDc73eiXk0SWd08JQypHAkZcAjOCVOM9scc17N/wSx+FOqeLv2htP8Zae1tDo3hc
yPfLNIBI7S28sabR1PLZ9OKqSW4rpnqX/BYbwFpkfjDwz4zHiSyGrS2SaS3h92HnmNWmlFwBnO35
mXkda/NOQrcOodirKDgd6/Q7/gsJ8N9es/ilo3jp5IZPDt/YQ6VDGkgMqTRmWRsr1Aww5Br88I1a
MMGCv7Z/z/kVfQy0RHKPmIUvtBIHHembC+zHKg/Mx4/z3pzM0q+YoZQGyccYpzYG4q+BjCg96VxD
ZI41YEA59AeB70pA8v74Oe/UdKYQqYTGBkkgDvT2CsrBV3A4AJHWqKIXiZcneMZ6k9P0p7DcpON2
QO3rThhypC8L7cUuS8e9VKr06ZzSsgIwJdg8v5SMjJPOKRI23DcfmPUZ5qVmbcSV4xtOBxUcaqFd
Qpy/J74PqPyoAsRRKHkcFgGJwR3FMf5yyjaSP4uhFJHvaQIZDheBjv3wfxokVln+X5Rj5ueD/k0I
LjY0kST5yXTOPr60/e6AhTlWyNp7e31pGhVyWDHOcAcZB9aS6QGXdz82MgdRTuAFYiZWDHZ9OTzT
40YKQBzg4wP1NRpGrSASNwuVJTnH4d+1I+PlPIGcHJPXNSmTcUsiglYwpycHGOeaHJNuNwKsMkH+
E0bl8xSUYL0JNKBvZQMNGOPpQUL5jrGx3A59Bnjr1pYnyd24s5Pbg9Kidxv2qDASMEEdR9OxxT43
3bXCqRn8RTQDOGYmRioIyScnoP8A636064ZWiys2+TOWUduoFSSSBWdwABtOEPOeKYrhmDLhHYbc
L1ppXEKMYjAADn5ckdR3pkqo3JVVcZVs9+amHlx5Ls20rj5RjPFNjKRykAfMeBnn8KqwCNGIANuH
Ug7dp/zigEZXzULcY3dqaX3rgJ5hjB3FeBj1p6JuVuNqdueSfpTsO5UvZUcvtJIz+lX7KI/Yodk2
WcMdvp6Vn3CqAygFT7nOR7Vp2flx2Fqyk71BJZRwvPcd+1S9yWfR/wAKI1Hwk+IqySx4HhqVNg5B
33MAA6jnjrnjPtXy2zEY3r5oJyCOAfSvpr4fySL8HPiXNtRohoULBY1OSwu4xyw+nIr5nMpl4WLY
oO4KR0rMcUhsTpuVunqKUbGLEndu5+Zc/wCetKroMjdtb2HFKULBWUjBzzUBsdHd3kn2/DoyAklS
CcDjt/n0pZJZwhlPmKzsF9SPf+VUbiZrkybd22P5QGPB/wA4qeV9mbYsNhXkYz07da9DS5zs67R7
d5LUyRlmWMBR5hxjkdSB9a6jTJl+1wlnVZB8qg5yCcZ9fSuR0GeKSHbnJ2g7VJyceuO3Xiuo02zZ
d0nmJI5wzSudoRT0OD1xn8qvZakx1eh5BqNnJda5dRELHulcBmBwME8D8qyXtWtpmTBBQ4YHr+Vb
GuSyxay8gYzP5p2kfKGyTxjtUvi6BUvlYIYw6q2OATn1xXJJnUvM1fA0Et3qiQQR+a7OnlxbMmRi
wAX688Y749a+97j9hq78FfBiLx/4t8QweH9VnjymgXcOJclgAm7cMseuAO4ryb/gl1ommeIP2qtD
stUsIL62isby4jS7jVgZEVSj4P8AEuT+ee1e9/8ABRjxhql/8eJfDl7ezNolnbW89rZSH9yrtGd0
ir3OWxRFt7BK0UcN+zf+zxP+0X4g1DSrbWodF/s+2WeSeWBpi3KrjAZcfezjNcT8WPAk3w3+JOr+
F7m6jubjSrjyXnRSiueowCeOMetL8HfirrHwT8a2HibRL+RXiPl3FkG/d3MG7LRv9ccE/XHSv0Ls
fhj8Lv219I0T4hQmTT7qLCanaWyqHM3ys0UxI5wMgEdQQRVc7izO3M10Pky3/Y01XUvgBZfFOHxF
bGGSyW6OmywFTtLlT84PHJXt2r0bwz/wTnufEXh6312H4k6XLp00BmWSG13opxzl92AAevfg1X/a
7/aes9Qs1+HfgIDTfCNlCI5ZLLbHHcjP+rC4yqqwxj+Ik57V7v8AsiaVdeJv2LJNMtHD3l5HqVpD
k8bizovJ9/51DnJjsr3PB3/4JuXt9o2o6xpfxI0rXIIleSMW0BMe5VyVDLIQD+HevNP2dv2QtU/a
C0vXb/T/ABDbabDpc62+J4GYyOctj0xt28+9ev6rqOu/sF3lhpMd7P4h0fxNpks1zpc0n7qzusBS
6NycDp2zXff8EyrmW48IePnljKN/a0JOWB3HyRk8DvxyetRzPqJQTPjrSPgB4l1b44W/w9vwdG1S
8nMBa7QuoIDtvz3UhCRj17Un7RH7O+p/s9eI4tL1HUotTa8t0uY5LJCsahmKldpGfvL1J9K/VG/+
HnhP4j+LvDnjuxeJ9T0i4fZf2ygtMqh4zGx9ASa+M/8Agphbl/Hvh+RjhDpWAD0/1r5P1HGP96tI
zYpRS2PEv2c/2Xr39oTT9auNL8R6ZZz6e5Uadd7vPc7FZWC4+5lsbj3ryfxf4T1DwLr+paLq9rPZ
X1qximhuVCP9SpAOD1B7g963Pgz8QNb+HnxQ0bWdF1N9OvPtENqcKpWSJ3VWQgjByOO46V9u/wDB
SvwVpCeG/D3iSDTrZNduLs2ct6UOZIhGzKrYPOCBjPvQptseltj85F8qNnZgGjwFKlSQMkYz6dcf
iKebhwhX51jdtgOSSxHpx6Y/KkaYC53Or7HU52g9DxioHDW1+oUERs2TICD7c9s8fjWm5jcllkCS
l2Zi2AQOBjnnLfXmo4b5Z1UNgROxySDlx7jAHNOl5hPmKpfO5RJk8/Qfj+dV0LyrIpjwY3wR83I4
wMZ45z36ChIiTZOeIfkIaXACqSemQc+nT+dTKwKn5clgQCegOfbtVESTmNBJ+6VSW6AcYPBJq0rR
xB0IMbFiWIfgcetU0ELsIrp5fLYq0YYlMdcYyBnj26+9OlkQRq/mMVBODGADnP4e9RzfOpMRymTg
4zkjrz/kUxm8qMgBRNHziQYxnig2uupOw3psRArOmzJJGQDnHHXOBStbYRHD5x8zq3TvyOn6elR+
fuYKzfvDneEzx149RTmnjWZlAZE6jB3D6ep/lUkpI7/4NfB/xH8c/GEfhvw5JCLk2kt2ZJpfKWNU
2g84ODucDoT7V6f8Qf2Fvif8M/BmpeIdUbQ5dPsIjJObK8ZpAoAHCmNc8+4rpv8Agmy7p+0bdxs5
kQaBcBCemBLDn+lVP+CheuajD+0JrOnQ6pcw2T2Noz2onfyiTHj5k6enH1NNTd7GllofLiRwooV8
hmO4D+E/n/Oh93ngDaiODtJA3HgmkeJBG7eZho8/MjdOenHbFfY37G37I6eKnh+IfxBhSz8LWH7+
0t7pjH9pcBSJtwwPKHXP8R46UmylFI+QmsJoDF58U1sZBvAlU4YY/hyB6/rXoHwf+B3jH40apeWf
g/TEvXtI/NuJbqTyIlGQAoc8bjk4Gc8V6b+2F+0vB8cvFEWmaBYRReF9HZhbTSW+yeaTBDvnP3AR
gAdc89q+xf2L/CmraB+ys0FzptzpeqXz31zErwmKZw+fKfaeckYx7YqXJlJI/PP4xfs+eM/ghLYL
4p08WbXqO1vJDMs8TkHDAsOhGQceh/LzqVCrqm1jMT+5UYLS+yj9K+tvHf7VFx4i/Zw8SfDjx5pE
8PjS0eC3spZbbDFEZDukEnKyAKQTgZBGK8//AGJ/DF54i/aL8H3H9ky3+n2cks1zMsAkjt18pthk
OMAFtv40XZHUiuP2I/jDFoUusP4WEloYPtjqL6HzNuN2BHnORjp1rwvy5reVRIQrRkoShwSfcent
X6lftM/HPxL8APjF4V1wWVxf+B7yzFnqSYYxRsZcllxx5oUHGTyOK/P79orxfonxG+NnivXPC8LQ
6bfyJJbRywmIgLHGrHaOm51Y475oUrCdm0Z/wz+CPjf413d5b+D9Gl1SSzwL2R5khSIEnHMjAEnb
xg9KPiZ8H/Fnwh1W003xfoz6ddzw+bGy7ZE25P8AErFSeMcHjHNfo98AtE1Xwt+xT/o1hc2HiJ9J
vrhEWErcvLmUxNgAMTt2Y46Yx2r5e+Nn7VmlfF79ngeGfFOkmz+I9lfwwkm2LBUQjzJPMI/dsQGB
X1P5UpsHFI+VIPMllijVXmkkfy0UYBJI4wfXOPavWde/ZR+K/hTQ73Xb/wAGXtvplrAZbiQSxOVi
GGJ2q+445JGK7T9gTw7/AG9+0Vp88unPe6dZ2U7yyOm+GKTYfLLdlbOcGvq/42/tJ3nwQ/aLs9M1
6yuL3wDrGlwpcSGJnjt23Sh3QAYY8oGXqQw9BVe1dwdNH5hWoiEYG4vMzDIUkpg559ulen+G/wBm
T4p+LNFs9b0fwdqN/pl2A8MyCLDp0DfM+T0z2q2114X8T/tTQTadaxf8IpeeKoFt4HhCRmAzqAPL
bgL97r2A4r7e/bn8beMfhv4b8Gr4Dvb3RYJbiaGf+yohgIETYvCkKBzjjFDqMSgtz4E+IPwP8bfD
W2hvvE/hi/0a2uH8qGW72eW8m3O35XODgNxWP4N+Huv/ABE1STTPDmlXOsaqsTXLwWqgkgY5bHAH
+cGv0h8I3F38T/2I7vUPHkDaxqf9mX1wzapBtkDxmXynxgENgDkc818f/s5/tTwfALwtqlpZeC7L
U9UvCXj1SSXyLiEFFUI/ysWUFd2AR+lTztg4q9jlJf2YPi5CI2k+H2uMqYJ/cA4568Ht1rzKUNBe
yRMksEsEjRyLIACJAcEEHnOePSvvT9iv4lfGP4t/EKTVNb1m51LwXZxSR3rXKxohmdMxiMBATg4P
HAH1r59/bQv/AAhqHx61aTwb9k+y/Z449Qks0Kxteh5fOPoWxsyR1NUp3J9nZnkXhLwd4i8X6kmm
6FpV5q+oMrObWxhaVjGMbmIUcdR+dWvF3w88VeBjb/8ACS6Bf6HJNu8lL61eIuF64JHOOPzr3D9j
P4ZfFK98W6N438HQPY6FHe/YtQurh4kWa33oZU2ucsNuOVGc4wa9x/4KYRKdG8BSMduLq7Xv8x8t
Djj6Z/Co5tTXlsfBGkadNrN/BYWtvLd31w4ihtoIzJLI+D8qquWJ9gK6i/8Agv8AETRree4l8F+I
obKKMtLM+l3ACKOST8gwOO9fY/7HPwf8M+B/hTd/GjVoV1zUo7a6ubSJ4VzZJA0it5ZP8b+Wee2R
isP4Qft5eIvGXxuTTNatI7jwprl0tjZ6cIkSSyaSRVjy+397gE7hn19KFUaDlV7HxD5qKQyNvcYI
IBYEe3vk9K7UfCDx2Qxh8E+I3jIMmRpNxubPOV+X/PFfb11+yR4L8J/tUeG7uOziufDmtx3VymhS
p+5tbiGNG3Dn5lJ52twOR0qh+13+1H4++D/xbj8P+Gru0ttO/s2G72T2aSl2ZnBwTz/CBgetX7Vr
ZEKNz4O1XR9S8NXBstZ0y90q9KAm3vYHhcjJw21gMfWodN0+51G5jjtLa5vrqXLJbW0BmdiOSAij
JwB2r9B/27fCmkeJv2fdJ8b3unxt4kg+xRrex8MqSkF1wOq5Ocdu1Rfs9/Dfwz+zX8B/+Fxa7HLr
WqTabFqMZt0w9tDMkf7pASASSwyx7fTk9oxKnrqfCF74N1zTIGvb7Q9VtrVAN9xNYyxoh92ZcAEk
dcVjNulJBJUINmWPI47e9ffHwB/a1l+PXi7Ufh38QdItr+x8SJLHYpaRARpEEdmjm5ycoBhh3H4j
59+P37P1v8LPj1p/hS21aC20fWmW5tbu8BWKwgeQoVlJPzFcDnPOe1UqjE6SueGWlheXMTzwQXFx
FEMSSQws6rwWy2BxxzzURljMAZWUMW4HPToK/YT4KfDTwh8Nfhimi+Grm11S12H7dqEMonF1ceWB
I7HLAZwPlzgAgV+QmqQm1vLiKJAmJGjQBdu1QSB/L/8AVSUrsj2VnZFaWZn2g7yc87cZB68+3H61
NJbzJ5UuyTynw4yhVHGB0OADX0H+yp+zHe/GzWItZ1hXtvA1jJuuJ33IL7aSGijcY4GPmb6jrX1Z
+3RoWl6Z+zBLb2VnBHZ2N1YJbJEAoRBIqqEPbggfSj2vYvl5UfmYZSkUyCRN2flZuRg8evP1pSG8
9jlUiVQSE+br0JH4VreHvC2teOdXstG0axmvNSu5NlvFBHvYYPLEAE4BPJ7e1fqf+zf+zhonwJ8L
yWjGHU/Ed4qtqV23zKcFiqop+6vJ+pzUuo7mii7an5OB/MMQUmNVzk5HGM8H+lQzSIkbOwxIXwck
fLxkZFdp8abVLH4oeOxapst4tbvxHBGMBQLiXaOMAAdOK+/fD9r4W/ZY/Ze0nxTp2ix6j9rWwvdQ
e7HmSytOIlkIPYhScAcZ7darnsJRurn5oGfev7tUD5zvHIPXpTTcoqbi678cqHAz+PoK+lv219E8
BPc+EviP4JuEe08XrNNdQRELGPLVcSCLAKtkkMD37V3HwW/ak+GPw1/Z1stLvdDi1PxtYRXLJYy6
bvW4lMjshMu0gAqVyScjFS3fWxUUj4ve8jfcS6lid3MgOeOo5pyuPMYZLAgYU8gmv0l/ZY+Pdj+0
R4i1vS9T8BeH9Laws47tZbWESZ3OU2sGX26ivmLVPgNL8XP2v/Gvg7QUt9M0+11OS5uSCEWG2DRe
b5a4PP7zgdMmlGS1Ke9rHzNcXMcMkgWVUUA5LNg/hxg9ak/1YiByCeMn+LH6Z5r77+OPxr+HH7M0
Wm/DPwz4I0fxVqFha/Z9Q/tiIK0SMqlN8nlnzGcOSccfTNfAEs7u9w7IkCF2ZYouUjG4kKPUAYAN
WpdzJpMYszXDOUIRiB97r0xwcUwzo8UmS/mHg5HGfT9K+mf2cf2YvB37Qvw3177J4rv7D4jWYleL
TiqLbqo4iZwUJdWI+Yq2VyPQZ+dvGnhvUvAPiDUfD2v20lnrmnSmKeA87SCRuHHzKcZDDqKaknsS
4pGP54KOCWEfOWOfp/SlkuSZFQDJwNpX0x6fhT4bxbS8glMaXYjkVmgmUlJMYOxunBxg4PQmv08/
Za1f4a/tNeGdaubj4O+GNDk0qSG3aMWcM4k3xltwJiUjp7/WodSz1QRit0flxcbyobdlCOdp7evT
moln3wjYDFt4ClcZPOfyyK1vFdmtl4o1qyt41S2t9Quo4wpwAqzOoGPYVgSS7nVSHchsfLwSMcU9
yN1ckd8ttOHyM4xjJx1pkx8wLHsKbhuJ6fr6VAZzx85RgcEtnPpg1e0B7JfEekHUlaWwS9hadU3H
MQkUuuByRt3ZxVOyVxpXdimLkBnCNkLjO3r9aimlMKl9u9iMr6kf5/nX1b+3jP8ABq5TwT/wqmLR
bfUS9wdRXSoPKJgKL5fmDA539+vXmvksKCQFyVQtlc/gen4VaasmFmx1xN824S7SxyVfBHaoch0Z
mBRCMkBsZqGXax8oSqHBPDdR7c9+acbYNGohI+6SADkHjmpbT6jUGhmGhm6439hz646ZqvIihdiF
oiOrKfun1p0rwOwdp1URn5wGC7uOAPr/AIVBuUoQCrSEFiFPUfX8Ki5ex2Xhn4xeNvA+nNp/h3xj
4h0PTYnZ0tbDU5ootxPzHYG2jPUnHWtIftKfFdSgj+Jvit4zywOszhl9s7s15lMVtpfvKwxt2lsk
nH69Kds2rvZY9zsxO1uDx05pX7MErvVGv4y8c+IvH12l74k1/UfEN/Cnkpc6jcPM6pndtBYnisCU
uyMUYqhOQd315z+VBnWPaplVS5ODuGD64NQ3IXyxyFOMYz2z0zV3b3ZVkLcSu0YV33e65JNQmcsz
53ZOQoYdf8KjlaRAyx8Y4B69e1Nnk+4NwIzzuHQ07oixJNPJMxTcMEYLA96aLhUyWd5D1w3Qc1W2
qTjJGT19KklgaMF/m2bSylhjOPekmhWaY2O6NxcqQzuCvGP8Ke80YMgALOfl3EYIOaiZjtULjaM/
MOufTNRAqOS2OehI5odi1cYZljdu757f1qXMsgcM3ltnqeuP8Kr3CqJUUvGsrHCgH29Kn2gsAVyr
dO4YZqBpDDNl3YsNpx1GOn+TTnlaRsnCso6jrXTeAJvDFl4z0KXxlY3V74TW8Q6na2DbJ5LfOHVD
kHPToRxmvun9vH9jr4SfCj9nrRfHfgDSbzSb+bUbSNGnv5pllgmjfhkkdgD05HIqOZXsXtqfGvww
/aT+JPwS02+svBPiu98PWN5MJrm2hSN1kcKFDjejYOFA464Fdwf+ChH7QMTAR/Eq9BYdXtLVx+Xk
/wBa+fXm2vswQOe1Rv5UT7d4ypzgnpmqRO56F8Xv2hviP8bzpn/CeeKrjxDFp+82kcsMMQiL43MB
Gi5JwOuelecPO0cTKCQzn5SD2qQsjSq2fu5A5P8An/8AVX2H8JP2HLHX/wBjzx78Ztf1JvtiaXc3
fh61sjgwm2eQM02Rht5jwAOgOepqb2L5bI+NJJDhQ+ctyBnp70I7fdBPXJGOtPmXBaQ5dy/T+6M9
6IoGknU/NjOcj+dO5CiRfaEjmYuGbA4yelOafKMVypzmo7oqW2gBzu5wc/nio5N7A7W4AxxxzRuM
fNeyOYycucc8ZNdd8Mtd8OeGviB4a1XxPoR8S6DZXqzX2k+Z5ZuI/wC7nOOCQcHg4xXGAyR4UMAD
zuA6GnJKzvksFkXsO/41FiVufo1+0L/wUb+E/wAdvglrHgS7+GmtJM9mw0ozSxeVZXSxlYX3I2Rs
3e+RxX50zuxYgE7WGR9OuK+qP+Cf/wCzR4U/ad8feJtI8ZapfWVnpulrdW8enTpC8jtJsbJZT8o9
PevBfjV4MtPh58XPGfhfTbxr+w0XVrmxtrlyC0kaSFQWxxnGAcelUi92cSspUA4J5PB6kUNNIdqf
cUHj+v1qV1ACqv3ydrZ6CoZHCgr1HqKYNB5mAQHxzjnvTJZC643bhnoPWnlFfYWA25Iw1NaIOrkk
AMRgdD1osTZo6HwR471b4feMNF8UaN9nTU9IvIryAXK742dGyAy55B6GvrqP/grd8a9of+zPBsrL
1V9LlA/HE2a+H94DZGflBGDT2DyL8o5PDZxUcutw5mfZ/iX/AIKqfGHxH4f1HSbnSPBvkX9vJbSl
dLkJVXUqcZmI6HuO1fG/mskKpGCqIpU7e49xSR7sn7oIGdzHGK+of2Kv2JdR/ai1LU9W1G/k8PeB
tI3w3urRbGme42K6xojZGArBmJ7H3p7FI+YTeSOqxFiu3gHOOP8AOK+nv2EPjB8LPgj8Vbnxj8Sb
fWZL/T4Q+hzaZEZo45GV45t6AgklH47dc9q6f9rL9gOP4NfDLSfiV4A8Ty+OvBUozf3s6RRm2V3V
IpFwRuQsxU4GQevFfHJyMFd23Ocdc/WgabPsT/goP+0R8Hv2i9c8PeLPAS65/wAJRGDZahLfWjQQ
PbKrGPAJPzB3PI7ZzXxzNMzSZ2lxu+9mkk8xpdyjjHQHk050KqQ+Y2HQYBphZh9rkYhQWwG6EZFN
eaRnJL5YZ+tMRigYruJ/n+FOMJTDbGBIyDQQ7jo55Y5Gy7JjgKG4P1pWunDMpb5eflBwDz3xSxxN
Pk7vmyPv4INQm3aNQxXJxggHg+9AtSNzuBIzuPXn0pxkUggux/vZJ/nUUibc4yh/u018sx28ZHQ/
SpEW0cA5JIZT680+WZpE2qCFI+YjpVOJVkiJx+8Xr/n8KfMnAG9sYJwewxRYtEryKmR94jgbuMV9
YfBn9uDw98K/hhpHhXUvgT4O8V3enq6vrGoRr51wGcsNwMLc4OM7u3Svk0xpLtyT7hh1FI6sxAUH
rgY6UWQ2fdh/4KPeB5EiRv2Y/AUhIJ+Xy8D2/wCPWvlj49/Fax+MHxEvfEmk+CtJ8B2c8MMP9kaP
/qQyLtLnCqMnjOAOnrXnKKXIYZ8teuDRk+bI/U7sZ9qdkCJpdzxqu8FUOMbcg9f/ANdMeb5CrEE8
jjoRj8K+mfhl+wh4j+J37L/iD4zWXiXTrKy0qK9uf7Int3Msy2qkv84bCk7WwMenrXzKyKwHy5LZ
IBOOtGgAg84iPbHF8pbcVGR+P+TzUYRldsMfm79cUKHRtuBg8dc4p+Xj+dRgLnO7v60BcGUKNhLS
KDwpHoaWWYO+Sq7ieW2gEfp/KhtzlsDBwTkUkSEMGIDZx1pNXC1xrDKj5RgdcUvkJMSwHXGOea0d
G0bUNb1W20yws5L3Ub2UQW1pbIXlnkJ4RVHJJr7b0b/gkj48vtL0a41Lxz4Y8PatqkEcw0bUGkFx
CzAExnj5mBJB29xS5ktClFI+EuUHl4w1TtcMVEbElVPp+v5/zrv/AI8/AnxP+z78SNR8H+KbOSK6
tyWtbyNG+z30OeJYmIGVOeR2OQa87ZSqsiDIPAYmq0YPyJpLkyMI1KooPVlHynv0qNnMgZWwUxjk
dR/nFOeDBBClWAznPWvfv2Zv2J/H/wC09/at1oq22iaFp8e5tZ1gSJbvJn/VxsFO8jkn0FJuxNj5
/kkJfd85OMjPOT6050kSR5pJjI5bknGTX3DrX/BJv4k22i6rqWmeMfCniW6sbSSddL06WVppyqk+
WnGNxIwM9yBXyX4L+FXin4geN7XwZpekXc3iKa4Nt/Z7QMJYm3BWMgAygTPzEjgZ9qjmNFFHH5Ak
bezB8FcjrjOcUSztIzOCN+RtyuCBjpX3e/8AwSG+JVrO0I8ceDYLokDyJLubIyPTywR+vWvmT9oz
9nTxb+zX4/fw14nt1aRo1lt9St1c2l2pVSxjdgMkE4I6g0LUGeXJdyQsV8zIY4AB4Dev1pst484D
SSu2Tlizct1H+NMkhBcZbO7p2wac0LoApZSCQAc9OOBTUSL+ZYt702csckE7KwyvB5GfemPdN50T
NNIuwjY2c4Ocg9PYflWx4F8Aa78S/E+meG/Demzahq+oTJDFDChfO443naDhR1LHoK+y5v8Agj78
YbcP5XiDwhdSKhIgN3MHHB4wY8eg9OahrUu7PhqaaRpTMkxSQMGD5JOR/ED6j1quT5cew45Jwc/y
rX8RaDqvhPXb3Q9Z06bT9UsnMVzayKVaJhnggj2yKygF+QEje44HatLE3uBKW5Xd85P3l6Y9KkM8
sE6yxSyQTR8oYyVI9wfaqqq0blsbXPOM/rmvpf4BfsCfFH9orwrc+IdBjsNL0qCcQpc65NJbLcfL
nfFiNtyjPUfnUNpDtc+fLzU7qW1EUt1NKrDaUMjEY64Iz64qKO+ubO63RzvbysNu9GKnHB7e4H5V
9WfFb/gmT8YvhX4G1fxbeHR9b0/TIxJLb6LdSXFwyZALLGY1ztyM8njJxXyZtIGZAHzyGWmrMWw6
W5RJGCuyu7Fmbrye/vT7jVbia0htXu5DbxAeXDvO1B246D/69VYmRlG5dpXrkZ60AlCxUqOMdO1O
yHc1YfEN7aWkdrbaheW8aElVE7DAPJ78c81U1TUrq/kX7TczXeDlDNKz7SepGTxmqLHzpSwBfHXt
UktwrfIBt45JPtU2EaNvrdzYSGS3uriyYfKsltI0bEZyQSpBxwOKlfxTq9yhjk1/UpUKmNopbp2U
g9QQT7/rWLC+zKlN7Dnn0prIZOASWYZzQtB3Lat5ACMx3YBz6e1XNP1+80iOVrC/urNpAN5tZmTc
B0zjrjJxWS7LmPHLpyR056cVIXAYBHXkHPYfSlqK5pa14k1XXhG+pX91flWJX7RO02MnnAYn0HT0
rNkdTvf7pGcgDqaAHhdQrZXpwevc0hU7x8xbnt2qhbjHdniZs85644FK0mFjJQgH1/iPtSFyN6gZ
YkkjGc07b5g+XLDOVBp2GMcBlYk/LjnHNKJGjRMO237y4BAHBxTykYJVQd38RPrT2QwfeJxtye9A
DVwAzeYSSOcdvqKVTw4DYYfkf/r01ioGQOWwwHSk2krvb5yfmOOaSQAiyNkZy3p6/wCeKXeFk+4A
5zlj3znmldgS7byuTjOOfw/z0zSBMFpXYswPJHUj1pNhYfIu5mLs288cAAflSCTadsfVgQCQMVG6
jazhjluM5/nS7nJCM20L0PvTuFgbZFIzHHy9ccAnFTZR3+YFgAT/ALtRxkuCjPk/TnJ560yVTICA
D5gwQVPFADWfYSfmLdQ2c5qwwAi4Odq8D1zURZpI2aQAtu6ng5o3/K7E8njHpSQCSFQNxDh85Y5y
foKer+QwLKdpHY57d6VJU2leGOeABzTkTzHO1lU459P88VSVgIQwwT94bgOnFTSStJIBH8hK/cHY
0sZwWU4C9QUYfTr9aY8nA8wKGzkEntQAyVcKQMg5wvTBPp+lOKuoTI2fLknGec8UyQKyhiflBySD
0qWOSSQsCBx1JPUe341SAHQrtBDOufmPqPT2pN4bIVSqA85/lTVBjfZu/d5zipJ7hNnyttIOSMU7
gxFMUaEr8sjAqQCelQrJiRkI2qRyT19vxpHZJE+YF+Mk9SDU0jR+Uu5dzHnrx0HTFBJSuMQKAoBb
PX1rW0xwiwMm0rg5UnqeeKyb9VBfaeN3CjpitG2CxparnZIV3bT9fXtUsEfQugIsXwB+KE0RZFGk
2CsQBzvvArfopr5s3iLy8MWXaeAT0r6SsQf+GcviZ5JQyCDSYWZV52m6mZiff5e3pXzc0ZlhVgAh
5+8eDWcjRO+hFkb13KozyRjrT2kRQTtMmT91e1RuWJwRnaPvEDI/CnKqElWJYDoy9DRYk2Y8P8jM
yICcqTznocmriRhmKSIiEKdrY5/H/GqDCKO4Dg+Y5AYhB8pH/wBapVmLtIMDI5VM5x7CunUnSR0u
n7oGdQQqbeME8cevU/nXVaVbPNdWvDhJGHJJA6evfJ7e9cTprOYQ7ZSEgDAO72BA69fQ9K67SGc3
kWzeQo37VPOPX26//rqrtmHLyyueV6tEZPEE6R7i4mYD0J3H9K0vG98NQvLT97u2QKrDOSD6Vn6j
fC28Q3c20yDzpCM9SCTkE9z71k3EjSyNI/zuxJYnrWElc6EfZn/BLrXdO8N/tUeG5NQuEs4Li0vI
I5bmQIiyNH8oye56AZ7mvef+ClfhzV7b4+prUun3baRPplvFb3SRt5LSKW3DcBjcP6ivzo8H3YtZ
YZgGaVCssYQncGB4we3Wvv63/brtviR8Bm8C/EXw43iXXLWAra6754VvMHEbsqjKuARlhwR15NOC
swlaVvI8f+F/w58Q/EvxFZ+F9BtJbma6nVDL5asI1J/1jnj5QWz7ba/Szwlq/wANv2IfC+leD9U1
CebUdXk86+uLZfMUTEIhdxuzGuMY45AzXwb+yr+0OnwB+I9xrl1pUmsWl3YtZSQRSLG0XzBw2SOc
Fcc44J61g/HX4yj4ufEbXfFEdo1ja6jKohgdy7RYUKfmAH91fzPrTlGUnsRdI95/bC/ZbPgW8fxl
4SiN54Y1BxIHt2MotyRn5iAVCEnhs96+if2P5Li0/YvvW3G2mhXVSjRHaVIaTkH2PQ18xeE/20dP
0r9mFvhbqXhu4utRSxmsU1Dz1FsUaRipxy4IBAxivQPh9/wUD+Hvgj4a2fhqz+HF7BYJbmK4s4bl
TEzuD5uN/JViWPPXPQVm1ItNI81k034iftn3Nhp7xRT23hnTGi/tOMuiS42na7lP9YQq4Hfk17z/
AMEx7WWx8OfEi1mfM0OqwI69g4ibOOTx0x9KwfDH7fPwr8D+G9Q0jw38PdS0G0ufMcRQPGytKy7d
zYY4HC85P4V5N+yv+2HpPwEfxfb6p4evL+11e4jnt2spkGx0DqynOBjpgj0p69UTdXsd5+zX8afE
Xgr9qG+8ECZ77Q9d1m6t3tp5SyWzK0zeZH/dPyYI6GrP/BTZIl8X+GpHhd5P7NZFK4wcyscHg+gP
5182eFfjNp/hf4+2HxClsbm6jh1d9R+zKArlHMhK5PAIEhHviu1/a2/aS0r9ovxFpV5oml3ujwab
bPbn7cQJJizAn5VJGBgjOecmqUdQ0sjxTwPpV1rHjbw/ZWNrPd6g15APLhBd2XzUBOF68Z9vev0U
/wCClM0X/CrfDsBuFSc6g7LEZMFv3LAHHcAkfiRXyZ+yZ+0L4H+BGq6trniPwlfarrhKw2Oo2jLm
CMqfMUqzAckA5GfTtz5x8avjdrfxp8a3fiDWHy8mY7eEDakMWeBtBIzgLk+1JRaZDmtkcFqE9y7i
FEJk378bd3Hsc9sd6iMiExqA+w4JzyRkdx14z+lMlScGJmkdVOeVG0AY6Hv3/Q0yUCDLguUY7gCO
Pzz05raxj5skciSYlJQzLxsOf8+lPRSzKgEiqwJZvXAyP6VXdg8xlj+WSTBdV6ke4/AdKGkfzo3k
DhXyAducdTg/l+tLYd0+gJLI5AMeU3E7pOoBAwMZ5HFPknMY+VWIK4LKuc4HOR1zxUM6xrGFKbpG
wyfNgY5xkEZIqUSRW0sYdGK5YbgVJbg98fTnFVcFAf8AaPItg3Ch+NzjHb9DRJdyoEdZNiyLh4nB
zuBz+Z/rVaYgJ5u1gjE8k5VcY46c0G7EhE3DQsSwAGVHYHPbmkZyi0x2x0h8zeEOdzNxznn+ZqZ7
hFHzI5Y9AOPlPvmobOTbN5gnjZWl2hW5A9j+NOe6aCXDxKFYbQx7dwc/jTTNIWPrr/gmyrW37Q/l
FcO2iXRwTyFDxc8epIqp/wAFF4rhv2htRWNgCbSzkCseCBH9OuRXB/sffGbQ/gh8YrTxF4jN6+lR
2M1nK9qnmtEXC4JX7xyVAr2z9pj4y/s6fGiw1zxDbz+IZvG0unrHZGO2lijZkz5YbI2jv17VnszZ
q58heFtds9L13S9QvtPj1m0gvYpLiwfCx3EaSKxRuCMELjOO9fpp8UIz+1X+zTbzfC3Vms4LWCMX
Hhm2SNC5VVYWzhsbCuBgcAj8K/K2UR+WGQFRuJ2knJ/Hr1/wr1z4BftE65+z14y/tvTZJLvSZCE1
DSpnYQzxnguFBx5i44Ppkcik73BWaszzm6spra7mtbiOSG4hYxSIHw6OMqVPfOQR2r9Yf2OfiBr/
AIz/AGZYNV1a8+3anYNdWUE7ooYpCNse4AAE8elfGH7ZvxN+EnxWfTvFfgNr5vEM+Yr+N7V4ISmz
5G2sAC+7AyM55z2r6E/Z/wD2pfgH8K/hDpXhq31zVbZXi8y7jvdPuHYTyKPNG5UxjIPTj86erGrJ
WR8FeP8AxvqXj/xvqOtavP8AatQvHMsskkYVi3XGBxgDA6DpXtn7CXjvV/BXx60HRLCcR6Z4gkNn
e27Rqd6pFI6HJ5ByB09+teLfFfT/AAovjnV28E3l5c+GjcF7O4vo2WQxleVwQp4JIBwOMd69R/Y0
8W/DbwT8TP8AhKPH2q6hp13pK79Ka2hklhkdtyv5ixqxyA3HA70NDg+jPf8A/gpv8QNW0+48NeEY
p1j0S9i+33EYjG5pI5CF+bqAMg4718D2FwLPe8c7xMcMjhMleeMH6819nft0/Fr4RfGXw3Yax4d8
SXl94u04ra29ktnLFG8Dvl2fzIwRjHBBHPFfFFrdRJMI5PMljYhWZF5Ttk/hjAprYwteVz9f/g38
W9e8Vfskw+O72W3uNfi0u9n8wRBI3eAyqpKg+kYzX5R+Ktbv/FfiK71nUrgS6hqc7XVy8YCq7scn
AHGMY4GOlfo38N/j3+z54X+C0Hw9svG840ltPmgcXFrOJlWbdv8AmEWMgu2OvTvX5w+NbfQdP8Xa
vY+HtSl1HRYrqRLK8ljMbyRhjsODjtgZ4oiVPc+pP+CdfxH1zRvjF/wiKCH+xdbimluEdMuskMRZ
CrZyM8gjpgV0/wDwUo+IOov4z0jweVtzpVtZxamny4naZ3ljIDZ+7tXFec/sM+Jvhp4A8bXvi3xv
4nfQ9Y0yExWMUsTGCWOVCJGLqGyR0A4x171v/t4+M/hf8TNX0bxV4T8YprHiVI106bTreJygtx5k
gl5UFSC+Mng5+tJLUqUtEfOnw70W18TfEHwzpWq+dNY3+q2tpKgOxsPMikbgeOG/lX6XftVfGLWf
2aPA/hG18HWlnKLiV7JV1JXnCRxxZHJcEn3JPSvy58O+I7jwz4m0nWrK3F3NY3sN6kTMQGMUgcAn
GRkqOR2NfoP8Tfib8E/2svAHhk+JfH3/AAg9/au9y+nmRPPidk2OjBlII54YcUpKzHFqx3Flrj/t
NfsZX2ueKovsF5cWV9K66ZK8KLJA0qKcbjlTs5Vsg56enxH+zT+z3rHx88UWVvaLNpvhi0KTajqj
Rb4ygK/uEb/now+uO9fTviT43fDH9n/9mmTwP4N8Rw+PbmWOe0jWCUFlE5dmmkKggKC3br2rO/YK
+NfgXwH8JtV0PxL4rsNH1Q6rLceTezeWXiaONQ6lgMjKN34pX7FK/MV/2ov2kdJ+DHhiH4P/AAwe
Oxe1tRbXt9E5BtYioIEUikfvTkksen1r5P8Agd4OsPiN8ZPCeganJNcaZf6iIbt1nZZHVgScHnBJ
6nryea+p/BPwM/Zy8P8AjiHXNR+LOi+J7BJJZjpeoXEHkszAgFmDc7c9/SvN/jT42+HXwq/ag8L+
Jfhzb6dqOg6THbXd1baHKDE82ZA4UjK7vL8vp3A6U0Wkr6n1F+0R8Sb34QeJfhT8P/C8cOk6RqV9
axySRbhKkUdxDGI1wcYIbnOcjNcV/wAFM9yeGPArlMoL+5GPU+SD1+gNbPxQ1v4S/H3UPAXjpvij
pXh660NVvY9MuJ4TK7b0l8uRGcFWDRYIwf8AHlv2rPiR8Pv2gvgXp/iLS/Gem6drOjGW8i0G4lV7
qZiPLaIxhtwPGQQDx+gtyXbTU9A/Z98v/hhGcAF4l0vWMBiGyBLcE9Oo61wNv+0F8JvF3g/4ZeG9
DtxH4qXUdJxGNPMZjkWSMSEy7QpPJHXvXmP7Hv7Vdn4FiPw68cFZvB1+WhhmlRBHZGUnzBIeMxPu
Oc9CT2r2Lwd8DPgV8PfineeMz8RtCvtJtgbix0GTUYSljIGV1YMJSz7dpCqR371LutB3XNzI9m+M
3hFfF3xd+FcH9o32lNFLqFwLnT5PLl+SFDtzg4ByQeOnHevMf2pf2o7b4PfEldBk8CaT4lf+z4bt
7y/IDqrNIAvKH+4cc+teP3v7d51b9o/Q/EN3p1xF4C0oT21vbRxIbrEse1p2w2DyAdueg9a9O/aI
+Bvgz9o/xpbeKLX4s6FpEJsIrRrbz4pN4UuwOfNXH+s6YoXmJNaWNP8Abp0weIf2etL8Upe3thaw
PZSnS4ZALYiVlwzLgZKbuOenatjxDJbwfsE6TPeKXsovDunPOmM7o1MW5fyGK4D9tn40eF9J+EOl
fDXStRj1vU7iK2dbuzeOaCKOBl5fDH5iVGFwe/44H7Kv7SHh34heCD8G/iY8C2UlotpY3j4t4ZIk
XOx3yMOCoKkfQ+9WYk7npWl/En4O+MfjH8Lovh8unDWIry58w2OnNbMkJtZMqxKLnnHr3rxT/gpP
Ps+L3h1WB2DQycjjB89jz7cH9K9S+E/wE+HP7N2s618QNf8AG2m+IJNJikn0xbadI3t02uGGwSHz
HZWVQOnHvXyv8aPixqP7Tfxks7mWS08PadJcppmlyXXCxQGUhXn+b72WycHA9KI3FJOTWp9ef8E7
dkvwN8Rxxg7BrMm1QcjBtbc8fXr+NfnhqxjttXu5ZlkaNLmRdmSW4kbI+nH1r9TP2XvhjZ/AjwLq
ehX3izSNXub2+a9M1pKFVAY0QDBbPASvgX9pb9ni8+CdxaXs/ifTtbtdYup2ijsmJli5L5ZfQ7uv
4VSdwatK6PtfVvi98Ftc+F0HhLw/8SbDwVprQrG8enoEcRsvzxEMvyk7uSOc9+tdT+0T4Q8La1+z
LeaVrPiH+ytAtrK1kh1UkfMYtjQ/XeQo49a/LDwfoknjXxVpnhq31C2s31S4W1W8vWKQqWIXLH8Q
K/Uv4/8Awzk8a/s1z+EbLV9PivrC0tHW5nl2xMbYoxyRnGQhx9RU7MbV1c+bP2DfiB8NPBGn+IdS
8W6tpeieJHmUWlxqcnlSLbGMbljLnj5gcgc9M19X/BU+H9V8R+LNe0jx9F46u9SkhM5iMW20RQ/l
oAnQYY+mcV+QKyXG5GSWSNZPmwMd/wCE8H3/ADr9Fv8AgnP4DvPD3gfXfFd5qFlLbeIZIlt7eJ8y
QiBpVbzM9CS+cenpSaaZe6Pmj9oSy8K/Df8AaU1W5W4h8aaQ2o/2jqWnRnaUaSZ/OtWYfxLg8578
gV90fHXxn4T0z9l6417VvDran4an060aHSciMqJNghGf4dpZee2K+Ev2ivgxrVj+0zqfh5L6ya58
Vag99ZXK3GyOJbiY7fNOMqykkHrmvuP46/B7VfF37LH/AAhFhc2X9p2VhYqXllKQSG2MbMN2MgHy
zjI9OlNX5jHaOh+ULu8NvGvkhh5flgnkLzk8/X860vCvhnU/GniW08P6HZyXusX7GC2hjcAs20sR
yQBgKep7fhWG0oePAYoVJxtfOM8kfnXtX7F15FB+0z4BWSePa1zMvzYUZ+zTbcEnPJOMeuK2ktBR
tzas+sIE8H/8E/8A4SvJcTprXxF1iMqI42Hmux3FBsL8QIwwW7nP4eZ/sD+LdV8dftL+JvEGtyxS
6nqOj3NxcNCgRS5nt84UZwMAdzWP/wAFLboP8btDSPGU0FFdgo4bz5Coz9CePYVyn7BfxL0L4f8A
x1j/ALfuRYW+q2EumW106HYLiSWEojEfdDbDyeM4rBqyLi7u6PoL4/eNv2e/+F66hofjjwBqWr+J
5Jra3udSt2IiYtHH5ecTLwFdB93tXiX7cf7MOifA3UdJ8QeFy1t4d1ab7KNKBdzaypGWJVmYnYwX
nPQjjrXrXx6/Yy+IHxG/aIvfGOjPph0W6vLO5V7i62OgiSEN8uw55i9ecis//gqD8Q9IuNK8MeE7
O8SfxDZXzX11ZhCfKhaCRVLEjHJIwM5qo6uwtUkfFXgnx7rPw18XaZ4l0G9On6rYy+ZExJ2v0BSQ
Z+ZSM5Hv619+fHn4a6X+1b+znpnxI1GxHgPxhYWSzpdatmGORAoZo+G5jcn5GPPPSvEv2Kv2ZbDx
paS/E/x5NaReDdFd5oYTIpSd4STIZ1IP7tQBweuPSuM/bH/a2uPjjrDeGfD5+wfD7T2KW8CnC6gy
/cnKlQVUDhU5HOfTBFNy0FKS6nzvgtGG5JAX5VPIPcCv0V/4JZzeb4U+Ih6hdRtQGwBkeScH/PpX
5z2sVzfT2VnboGmnkSJQzY5YhVGe3JH0r9V/2Df2fvF/wE8M+LLfxdb29tPqV3DLbJb3CzfIiMMk
r0yW780576FrZtH5ifERlHjjxQQSNuqXn3vTz5OQf89K4+LDCRyWMnXPXiui+JMxX4g+KTAcQ/2r
eHBYHP8ApD9PQVzMl1tUh12Ac1ukcvM+oGUMjFFVZD/D36k5r1H9m34k3Xws+M/hLW7XTre+ke5W
ylgu03q0c7LGWB6qwDZB+o715NIximSSKPcuB19+1elfs6eBtb+J3xm8M6NotuL69ju4r2UNJ5YS
CGVHkbJOPlGPrxwaxqfCzWD1PtH/AIKt6RYWeh/Dy+hsbeG5+23URuEQK+zygduRg4yAcV57/wAE
0fg34d8fePde8TeILIXt34ZjtW06NzmMPL54Z2U/eIEYAB4HJr2n/gqf4F1nxB8LPDviDT7BrvTv
D15Nc6jKjDMETx7A+CckZOOAcZryz/glX470TR/GPjTw5fX8Nlq2rQWbWNtK203Hlm4LhPUgOpxU
PVI1gt2zv/DX7al7rv7U6/DCXwB4dTR31+fRBfxxsZwsbuoc8bcnZ+tecftJ/A7wz8OP24PhoNGt
Qmn+K9Utr690uRVa3Wb7ZGH2KRwrhiSOma2fB37IXxS0b9tSDxvceHFj8KJ4nudUOoC9g2iF5JXV
ggfdnDrxtrQ/a48X6Pr/AO3d8FNP07Uoru60m+tba+jgbcLeVryNgj+jYxx7ip72HH7KZ9Y/F/4T
/DL4q28fw78S6NAk+qRm+g+xW6xSjyGTLLIFwpG4DnqCaxW/Z7+FPir4Za38IbbS0+x6RFHZ3E3l
L9rgkdRNHJ5xX5n5DZ6diK3/ABQVH7S/gZSzBzoWpkDscPb5/nSfDJs/F/4wnqy3un8D/rySkkxu
yPP/ANmbwT4K+Gf7MGnXmuaXYXNtpjX5u9QubFHlkCXcy7m4JJwBV/wT+zL8GPgh4rbWrPR/OuvG
N19mt4NQQXMCO2+bZGjLiMHB69MAVjW0u79hnxCz4yINT3E88/bZs16B8VnRNR+DKnOG8RQBW9/s
k1HQeh5P8Pv2WPAfw8/bC8UzWWkWt3p2teHDqK6Ze2ySw2krXQVxECMKhwCB2yR2FeL/ALP/AMLf
Ckn/AAUj+LOlz+HdMm0yztru4t7OS2R4Ud2tclUOVH337fxH1r6y1DxXpOiftiWOm6jewWV7qXhL
ybBJnCm4cXW5kXPU4XOPY15B8J/hV4l+H37enxH8ceItNbTfCmvRSWumarPKnlTzStbFI15yGPlv
gHqeOTQm7smL6n5/ftxeHdH8OftWfEOw0Syh0uwjubd1s7aMRxI7WsTuVUDAyxJ47mvAJ4o423ED
gdcnrivqf/gpJ4N1jwv+1P4m1W/0u4ttK19YLjT7wr+7uBHbRRvtI6lWHI7Zr5Rm82MnadyNwGJz
j/JNbqxnHmsrjmiU7gcqT6Hr7Zr9Nf2UfhH4L8Rf8E8/GOq6t4V0u+1KWLWH+3XNqjzgIDsIlI3D
btBGCORX5lKn3geCcHPT64r9Z/2FriHxX+wT4m8KaPcR6hr0MOrWzadDIpnVpQ5jBUnI3buCetRL
R6FtPlfc5L9gP4TeCvHH7GXi/Vdf8J6XrGozXWoobm8s45JgFt02hXIJGO2D1r4m/ZN8S6L4L+Of
gy71vwtaeLrC7nXTG02+VWVTOyoJQCrAsuc88e4r9MP2Ffh74i+HH7K+v+DNe0ubSfFry39zHpN5
hJykkSrG+3OdrMCM+tflz8K/DV54Z/aC8I6HrOnXOn6laeJdPs7mznXZJA4u4gwK5/8A11krpalL
SVj9GP2hdH+Ef7H/AMRLDUm+Emj+K7f4iXq272s9tEItOaIRoxiVkYBW83cVAHIJz6eFf8FQv2ZP
B/wi1Pw/448J2C6MmvTva3WjWMKx2ySRx7vNjRQNpYcEdCeeOa+k/wDgoj8HPGHxV8Q/CGbwv4fv
ddt9L1WSS8a0XcsCs8GGcAjAKq/PsfWuE/4K9a/p8Xhj4daYbqF7+PULm4e0DgyLH5G0OU6gZ4zT
GraXMf4P/s0+AP2Ofgle/Ez436bYa14m1KPyrXRb+JLmCCT5niii+Vh5jgZZugx9a9H/AOCnepRa
l+xtot5aW4tre61bTJIoFwBGrIxVR04GccelaH/BRDwdrnxq/Zt8InwBpk/jAtq0N4o0pftAaE20
67/lzldzLyOlZ/8AwUK8N3/iT9iDRTpOnT6q2j3Wn3N5FaoZGgjiiZZWcLyAh4b079KcfiQnE+D/
ANg/wheeJfjdmH4aaf8AFC1g0u5Nxp2oTRxW8GWjCy7pAULc7cHruPpX6LeLP2K/A/xm+EGuaZqX
wm0P4SeId26yv9GW3uJ4yhDh98arlSQVKnqCenGOE/4JVvYyfsweL7TT5oW8QLq92ZY7dwZgDbxe
U2M5APbPGc13n/BPxPiHZ/CrxVpnxL/tlPEh1OSW1t9eeT7Q1uYIxlBJzs37xkcUJ3CUeVOx+SXw
f8V+HPhp8SNJ1jxZ4Vt/G+hafJILrR7jAS4BR0XgggkMQ2GBHH0r9mvCHxd8CXv7FFx49tfBEem+
BI9HvLh/Cqom0Qo8iyRYwF+Yqx6fxV+InjbQtT8M+JtT0fXNPudL1eyn8m5sblCksbdRlSAeQVI+
tfq78DNOm8ef8Er7nQfDy/2nq8+halYi0tm8yQTPPN8hAyQcMDjGcGqdrjeqPzP/AGifG/g/4k/E
C71zwH4Ih+H2jtbIn9kwOCryIW3ykKAq54GAP4c1+m/wg/YT+Hvw3+A2j39/8OLb4u+J9SSG+nF0
YoJEWWNWKIXYKFTH1JJr4E/aX/Yv8X/syeENC1rxRrGi3o1d2tlstOlYTwuYy/KuBuAwwJBxn61+
jf7Ves+NZP2JfCd98LbnVp9WaPS8TeGzI8zW/lYfBiy23IGcVG7DYw/iZ+wf4B+LvwI1uKw+Gdp8
HPFdoWnsp4WSdx5a7huMbYaN8spB5HXsK/GuUFoYcNywDFM9Pxr7v8H/AAT/AGsPiN8JfEniy/8A
iH4i8NaZYLOs+meJtavLae6iSLc21CMKpB25Y8nNfCDiNoYymQu0EDr8uP0x0rREdSvvcMyBd3PK
9cCpgAhyoAB59yPrSsxRPMTh+23qDQGIdd5Bz/F6UmgsfpD/AMExfhv8HvjP4X8Q6H4g8Evd+MtK
c3Nxqr3EiJLbythFUo4KlSpyMe9fEP7RHhSy+H3x0+ImgaSrjTNM127tbaN2LlIxI20FjknrjJNf
dv8AwRp0i9j8Q/EjVDb3A02S0tLdLloWETSK8hZVfoSAVyB618Z/tdaZead+1P8AFKHULOazaTxD
dTxxzpsLxPMzJIMj5lYcgjrzUpWKPtn9mP8AY++B37VH7NvhzWbHw9ceF/EGj38Vrrmom5kc3jwh
HuB/rNuyRHyDgFc+1dd4i/Yf/Z+/aC8B+OrH4Q6fF4b8QeHbwWcev29zLcW0s6IsjJjzWDoQdpI5
HbNbf/BMPSL20/Y08VNLYXNuL3Ur+W1UxMrTobWJQyA9QSDgjg4r8wvA/wAcviR8HbTVNI8NeKdV
8NxXZC39nbSBN0gBUl1I4bBIPAP5U1cpu2h9E/8ABOv9jbRf2kPEms6/4ykW48L+H3S2n0iN5Ee8
nkjJGZEcFEXGcDk+1fXmgfsW/AHxP4u/sCX4CeMdGhLSL/at7c3MdmNoPzbxcn72ODjnIrkP+CNj
oPAXxLjE/mzDU7QkscuR9nIyfy/SvAND+Of7WfxM+NGoeDPCfizXYtQur26WCO6to4beBI3ckGR4
sKu1eDk+1TZ3Jep4F+2J+zsf2Z/jXqXg5dRGqaeYI76wmZSHFtK8gRHz1dRHgnvjNeImPBbbknOA
fb1r2n9rLR/ihoPxqv7X4v341PxhHaQZuRcJMpgKkxhCgAAHzdgc59a8ckIOcfePbPQe9a2J3CK3
35wwLAgD0zX7Yf8ABMj4UHwf+ys12NdtNVbxdK2qA2wyLTfBHGIn5Pzrs5HavxNTKOFU8njiv2D/
AOCVEEo/ZO8aMqNHHJrF2YGGfm/0OAEr/wACDdO+aloaPgj4lfEzxx8EdC+Jn7Pk3iOw8S+F5dRh
DzxyNcIhjdJiIDuwgYhA644KtXT/ALAXwl+FHx0+I+peDPiDDrR1u+gMuiGxm8qIeWrvMGI5ztCk
Z4618uX7u9zMshdJV+WQN95X53A++eOa+rf+CXTC5/bH8K4BYw6fqBY4PGYCBnsO9ZtFpn1doX/B
NT4F+BJ/DPhLx1q2q6v4z8Q3N1HaGyvTCskaB3BMYHAEaqCe7Z+tfE/xf/Yy1/wV+1Vp/wAFtP1C
0urnXJkfSdQnchfs0rSeWZcLkMqxNnHUjI6169/wUa+IXiH4Z/t3Q+KPDmpTaZrmkaVZS2FyiqwT
dG6uNrAgghiCMcg1xP7NPxj8U/HH9u/4X+KPGuqtrOtG9S0Fy8EcKJGkMxRFVFAHLsfXJNNCTPe/
i3+yz+yR+zbJ4d8N/ErUPFbeJbnTkuZJdNlnkSTBKvIdqkICytgV5Z+2v+wx4d+EngDw38TvhpfX
d14H1OO2juLbVLlmuVknO6KZMqPlKnBUkYxW3/wWEnaP9ovwurAhW8MRrG23ILG5m4x36CvoT9ut
ja/8E4vBiEYkCaCoBHQ+Uv8AQGnqgOa+B3/BL34UfEb9n3wV4m1PUtft/EGuaLb6hJdQ3iJEk08S
uMR7MYUsAB3xVH4Lf8Ef9NtrDW0+KXiJ7nUJZcaV/wAI5cmNI4wDl5A6ctkg46cV7uZprT9hz4LL
FMbZ3h8LxlgSMAvb5r1Tx3cyp+0v8J7dJWWFtM1t3QHhiEtgMj8aLsLH4S/Hf4Ja78APijr3gfxA
9vcalpzRstzbMWSaKRd0bjIBBK4yMcHNeeJGY4iCMkNyfw719ef8FQXU/tk+LCMsyWenqR25t1PT
8a+RpUy4A4GcAjoRVInlGpGHi/eMUOCeBzmv0M+Bv7Hn7NnxW8N+DxN4v8ew+J9atoPMgNkYoxOy
jIVzbFQu4kAhiPevlX9lDwDo3xM/aP8Ah74Z8QWg1DRNS1NYb228wx+YgRm2EqQcEgV+qnxt/aL8
T/Bf9rj4MfCLwtZaXZ+DNagtobmFrbLojTSRBY2z8oVYxj9c0mXofmL+2F+yzqP7LvxXuNAkuPt+
h3yve6NeOVaSS2DbdsoGMSKSQeMEDNe4fA39gjwN+0Z+zTaeMPBfjDVpfHllJHDrGmzpGlvBIGBl
jClASREdytuIOB64rsP+CyxJ+Kfw9QYAbRZ8nGc/6QDj9D+tes/8EiZIoP2f/iK6E7V1ssNx7Cyh
/rmoYk7nkNp/wTu+Evxi+GPirU/gZ8Q9X8TeJtJKxm11TbHbed94xN+5QglQwBBIz1r887vTJrSe
5t7qMx3FuzxyBDkqysVI46kEGv1p/wCCQKqvgL4oshJC6zACW/veQc/qTX5VeJJFk8Sa4VJLvfTy
5GGGDIx6/jQgW5+m+nfs5aXZ/sC+Lda+G3xi8Uz+FrjQL3UpdOxGltPKkRNxCyFA6BmR1IBHXv3/
ADR8L+BNU+IPjHSPDOgQpcazq1ytnaxSMFDTMcKCTwBX6q/AQmH/AIJH+IypKtJ4f8QYyPWW5AwP
TFfAP7FxSX9rn4Vq6BiNej+YdCRuI/UVN7F2PpPxf/wTw+BfwW0fw/D8XPjJqXhfxRqtmsr20MMT
Qq4x5gj/AHbHaGOASe1eYftb/sIWfwU+G2gfEz4c+Jbjxt4Bv0Q3V/dmNXiMrqIHTaBuVtxB4ypA
r1b/AILJyhfi98P0YjH9hzFVzjJ8/t+Qr4w8SeEPihZfC7S9R1iw8TW3w+n8trGW9Wf+zW3EshQH
5Fz2OO/FUpE8pweh6Fd6/qMFnaQ3Ms1xKkCGGMyYLEAHA+tfYH7bH7BNl+y14X8La5oviHUfEQ1W
4ezu7e6tlUQssW8MpX1bIwc9e1ec/so/tb+Iv2YtYu4tL0/Rb3TtburYX8mqWpkeNUYqWR1YFRhi
T1Hy1+lf7dv7bB+A3hHwvN4Kfw74ovtZuJklhuX+1LFGkYO4LG45JcDmne4XsfK//BIXwRoPib4t
+LtX1HTrfUb3Q9NtpdPuJU3C2leRwzrngNhcZ9jVD9ob4c/tAfGn43+PvijpIuLvw/4C1q7tdKu4
54YTZRWcu/EUYIZyPvEkEtgg12//AASA8T21/wDEv4pNeT2ltqerW1tdRWsbBN58+5eQRqedq+Yv
TOBiuQ+Kn7V/xd+CvxD+L/wi0zw9A6+I/EGoy6ct7YytdzR3crIrW+1hvDDlSQ2Tnnipsh3uz3/9
tHw3o/xR/YZ8B+NfGiW6+Jng0V5PETW6ma1+0GLz2AGMqdzZTofavkL9qL9gCX4O/DTRfiN8PvEM
3j/wRcwCW7vUWNTAHKiKRVB+dG3EZ7Ec9a+vP272/wCEU/4J5eD/AA7qbi01jy9EtnspMLMXjRDI
Ch54K8jtWf8A8ErbD4jaT8NNUn8RvFB8LZnSTTk1tpPN3MikNBv+UQNuXr1PSn6AfL/7PP8AwT0b
4kfCjVviT8RfFH/CvfBsVstxZXjIrtNGN2+SQMR5aBsADGSf18S0n9pH4k/Dv4aa38MfDPjWVPBt
5LdW7xw20ebhJGIdo3Kl13jng/xV9x/8FbJPiamhaEtiI4/g91c6I7jzJ8JtW7x8uzdnZjIJ681z
37Bv7L3hXwd8OD+0d8S5IptD0uGa+0uwRTKIFhaSOSaVMfM3y/KvOOpo1ZO53f7GHwnT9iP4SeIP
jP8AFXUhodzqlh5UOiyOg3JgTRAHPM0m3AQ8g5ryv9hj476H4z/b/wDF3jDWorfw0fF1pe/2fbzy
biJpJbYpFvwBuKxnsPSvFf2kfj/8QP24PihdnQNI1e/8O6Wf+JboOlQzTpFCJHVbieNcgSsGHJHA
4HSvV/8AgknoMNz+0x4ij1TTka70rQZ9sd5CPNtpvtMCk4PKsCrD2p2sFzv/ANpv9jT4z/ET9snU
fGmiaC154Wm1KwngvhfwoEijSESDYXDj7jdu9Wv+CyPjbRLuP4feGbfULe416xubq6urJMPJDE8S
Kpb+7ndkZ+teb/H79oP4kWn7feqaBpvjrXrDQ7bxNY2S6VaX7pCYS1urp5YOCCC/bkmvR/8Ags/p
9hbx/DG8Szj+3ztqCy3CKBJIirBtUt1OCRgUa3HY/MF7MSXOY/nZQWOMc57Y79q+/fhd+wv4c+GX
7MnjX4n/ABuaHTdR1DRriPRdIvJQn2eUx77eTcrZMzFcBB0Brpf2MP2P9C+FPg+T4+/GyJdO0vSI
TfaXpV1G2+IL5iNJPCV+Zm+Qooyc88dK+ZP2w/2uNe/ao8czNLJNpfg/Tpduk6PDK3lbUZ9s8i/8
9WVvoBgUXbC2p9of8Ep/APh/QvgR41+JiadBL4stru7s4tTkyzR26W0MnlgZxgtye/vXz34Qf9pb
TPir4e/aJ1CyvRZa3qMEU+pv5f2NrWd1t1XyA2Qm0jadvXBzX1Z/wTR8u5/Yl8aW2ngXGoNf6ofs
yEPIWNuioCo55wMCvmzwD+2d8QfiF4V8EfAceDbZZbLUrCzupIElN4I7e5RjvhP3MbRuJ6YPHNJv
QtJtn0D/AMFIv2ePAvjv4o/Cq5vtVsPAmq+KL+403UfEk/3Xjjt90SuCyqTu2qCSPvda+GP2vf2M
PEn7KviO0We5k8QeE79UWz1xIPLVptuWjKbmIPBPU5zX2P8A8Fmr+3/sr4WaeZUFw1zfyeUCC4Xy
4hnHXqete/fsKWvjbxz+zroKfFzRLXULeP59LOtxmW8MYZgvmpIvBA4U9SuKXM9BaH54WH/BPDVN
iwACMDGfxr82vj7+0D4l/aN8cr4r8VRadbamtmllHHplsYUCKzHJy7FiSx6mvMDM+3IPmdefT6UN
XYr20PTNdv8AQfhf8a7i98CapJ4k8PaDrMV7pN5cgxG6WGRJFBwBxkbdwB6ZxX6HfFX4z/s1/to+
AvBeoePPH8/gPWNO8y4OmRyDzraV1CyI5aJgw+XgjGa/KmaZTGCcA5GWB9qia9KE+W5Bzzt6mk43
1FzW36H6Z/tDftX/AAi+GH7JcPwZ+F2uf8J0b/Tp9IS6RmBtIjlmlkOxQzZbAUYrN/Yr/a2+HM/7
POufBb4m6rF4Ssfsl3Bb6tJIQtzBdPIzr907JEMpxnII+hr83TcvITtJJUcHPrU32mSNSUJVuQT3
6UcqBNu9z9cPgt+0J+zt8IvDmp/BbTfiJ9u8Lz2FzdL4ju3OC87MssG4IAXAIYH0PTjn4E+DOm+H
9B/bH8G6X4d1lvEGh2vjGyh07VGiMf2qIXSbWx0zjv3xmvCWkaQ/PuJHTLc17b+xx45+HHwv+OGl
+JviZbanPp+mIbjT5NNQuIbxJEaNnVSCwwD7Z60nGyNY6NNn6tftgeAvhP4h8T/DnxF8TfHSeDn0
C8e506OSZUW8YPE7KcgnAMacj1r4T/4KW/tZeGvjr4n0Pwj4SZNS0Xw7LLO+txPuhvJXjVSkYx0X
nLZ611H7ff7V/wADv2jfhrbJoK69L440qdRpdxNYPBCkcjqJw5Y7SCi5HGcgYr8+r945SGjJYAfM
G5/GhInqfXv/AATf1L4c+GPjLL4v8d+N7TwndeH7fzNNtr91hjvDMkkTku39xT0HPzA12/8AwVA8
TfC74m674a8a+DfiJpXiHXPJXSbrSbGZJgLdfMlE2V+6QXK4PXcK+BxKFDB92COQelMkuDtByXCr
gH09qfKh3P0j/wCCef7WHgTQPhRrvwg8dalB4QjlW7ntNcvZ1jt545wFZCxwFkXdkA9R9K1fhV+z
p+yJ8KNY1DxBrnxZ8P8AxHs7exlWLR9SuLZ0Q8MWWNGzI+FIAwepPWvzFa6ZWIUsc9Rnr+FPkuAq
7lVti5woOPYmjlC9zq/jB4o8J+MfiV4g1nwd4f8A+ET8N3lyZbDSHkyIYwijp2yyscDgZxXGAxF2
BdAwznc2Mf54r034AfHK+/Z/+KWn+M7LRdN8QyQW81u9jqaExOJFALAjkMMdcHjIxzX1jJ/wVr1m
5+ST4OeDpY2P/LSVyDjv9w07tC0Z8DxTxKrgyRZ3YCq4PI/l0r9dfGnxZ+Df7en7NmhaXq/xI0r4
ZX9pfR3NxZapdQC4hkhV4yux5EyjB8hvTtXyl8a/+Cid58YPhtr/AINn+FPhPTE1W3MA1G3LPJbD
IIdAUHzAjI56ivjU3mJnG4kEnknt+HFTa472P10/aB+Kvwk/aN/ZM8S+DrH4m6Pouo+FpdtuZ7mI
vfSWSkI0ce4F45gOCuevfFeM/wDBKpvAfgnXPEvjvxN8Q9I0LWDE+jQ6Jqd1HbN5T+TN5wLuCwyp
XAHVTX54y3rqP3cmB1YAcE4//XUMd4yMzf6wudrBu49KXKLnR9hf8FK9H8KTfH2bxh4Z8caR4ti8
VQedcWmmzRzGxkgSKEKXR2B3gZHAxtNeHfs4eCdG+IXxp8NaF4h8TWvg3SZLj7RLqt8V8pDEPNCE
syqCxXaCTjmvK55mbexXJJ3Fs8570QNI7iQ5btyP8/5FVYlSP2E/4KaXfgT4vfs9G60n4leGxqPh
i4OrW+nxX8M76gfLaMwoFkyGIckYB5HSvx+bbDgxncwByp7VI0xdyAApAIGOuOePzNQsojBH8Wck
jvTQz6f/AOCd3g7R/FP7SGgarrfi7TvC9t4ZZdaRL91QXrI4URKzMoH3sk88dq+n/wDgrZofh7xn
pPhTx5o3jXQ7640onSJNHt7yOSaZZn3iRNrH7pTkEdK/MATEtnjIGAdoOPzp5IK7inqcgDP50rFX
Psv/AIJZ+LtE8K/tRCfW9XtNJhudAvYIpLyZYo2k8yBgu5iBkhGPX+Gve/DXxE8Mj/grpr+qtr+n
jSH0xrZL9rpPs5mFhANgkztz8rcZ7V+W7ylAAq59Q4yPy71G0riMLHwXOc8ED/8AVQ0K59af8FO/
FWmeIf2ufENxpV5bapaJpthCbi0lEibxESV3qSM/MM19Z/8ABJrw1o/gn4b6/wCNtR8YaMr+J3jt
10x7lY5rQWsky5fcR97zN2MdMc1+TUk5OUYYHLMF7+tJgohViwXORjqam3UR9A/t0/D5fh9+0x4w
C63p2v22s3UuuwzadJv8lLmaV/KfHG9efwx616Z/wTF+PHhn4OfHe5g8TzvZ2/iazTSLW7GPJinM
yMgk/uhum7oMc18YFsnGzYhO7cKaZm8zKbSc9D+f+fpVE3sz9bf2q/2Z/gL4Y0X4q/Fn4j6nNqus
63fPJpf9i37+bFI8apDEsathm3gkk8AVyn/BIv40eFtF8PeJ/hlqN5JY+I9QvG1S1e5XbFPELeKN
wG6BlKk47jpX5h+cJEyy7yTxuOcDHb86+l/2W/gb8IPi34f1u6+IXxfg+HOqWV4I7OymMSPLCY1b
zQ0h5+YsvHp70jRan3X+x/8AsD69+z/8eYPGWo+MNB1q0is7yFLawaQzky7cE7hjA7/Wvj//AIKq
YP7YOv4dQf7J076/6tuK+nfhh4n/AGcv2E/AninxR4f+I1j8VPFTbfs1vBeQtdsGKp5UaoxAUk7m
brgV8J2n7R0Hjb9o6H4nfFzw4njyzmYi+0XbHGjRLE0cSKMBSEJU89SDzQgZ4TORsaRhkcd+9NRk
cjJG4jG0mvvdf2tf2SAu7/hmIDIyDtg7evNcv8V/2lv2YPFnw78Q6P4d/Z8m0LXru1eLT9TjeOMW
s7D5JQysSNpwTxzjHequScr/AME7PhTf/Ef9p7wxc2OoWVnbeGpY9duku3IaWKORRsjAHLZYegFf
V3/BYX4ZanqFh4Q+IdrdWbaXpcb6RcWryATbpnDq6g9R8mDjkda/Le1uXtrndFI8ZQEB4zggH/a6
+lOvtSvL8gT3VxcAdFmlL4IBwRnOOCenvStqO7ufoX/wSG+Feoal8T9e+IyXdiukabaTaK9u0v8A
pDTS+TIDt/uhV6nvkVwv/BVP4Y6r4Q/aSuvF15cWk+k+LLeKSxjSTc6NbQxRSB0xxyVwQec+1fFm
n6xfaU0j2l7c2Zb7xglMe4jgZwRT9X1m61Bo5J7qe7ljUqhuJWkKg8nG4njP8qXULs95/YY+Jnh/
4Q/tQ+C/E3im7bTdEtmubee6EbOkPm28kaMwGSF3OoJ7Zz2r9R/F/wCzJ4n8U/t2eEPjLa3env4U
03T1icGY+cWENwgCrggjM2c5r8MopQu7f95xyex7f/Wr7g/4Jl6jZ3nxvPiXxT8UY/D1l4atg0Wk
axf7Fv8Azo5YiFMkgACfKTgH+HgCpau7jUjU/wCCv+D+0ppBYFgvha2A46ZuLn8ulfHXwv8AH9x8
LPiV4Y8ZWVst3caDqEF8kLuUWby3DFCwBwCMjOD19q+1/wDgrbpXhzxF4y8M+P8AQfHGia39ttU0
aXSLG5jnlQR+bKJsox+XL7Tx1Ir895ZN+zGSBk9MAj3qhWufSH7Y37Xt7+1v4n0PUrjw5D4btdIs
5LaO2S7NyJWZgxcsUXGMAdPxr2r/AIJeftdeHfhFrmofDXxTs0/SvE94Lm01eSQ7YroosflOAvAf
C4bIAI5618CglN3JAA5H92hJNr7Y8oTzuzzn1pNBdn6h6T/wSe1Wy/aQF/c64rfC62uBqEF4sifb
yR8whZcYA39XHYdOeOf/AG4/jCv7a/xm8I/BL4XC01FtOv5m/tiS42wzzCFhKq5AG2MK3OTk9K+C
W+LnjdEMQ8Y+IAhG35dUnGRjH9//ADmue07W7rSL+1v7G5nsb22ctFcW0rRuh5GQwIIPJHB71Nhq
SW5+v/iP4sfCz/gl94K0XwVpGlSeM/Gt8RcakkDLDdyxMWPnSuEICgjaien415f+2X+zx4X/AGlv
hQv7Rvwfa2nmNm9zrmnLtjWZEDvM5yARPGxww/iA/P8ANTV/E+p+IdQnvtU1K81C8lxvnvZ3mkYA
YUFmJJA7ZPFXLLx94h0nQbrQ7DXtUs9Gut32nT4L2WO3l3DDFo1YKcjrkc0KLKTR+wvjzQJ/2vP+
Cfnh3QvhRf2ur6lDBplvLG84g8uW3VBNDIWwVYeh68dQa1Pi38O9d8d/8E9r74c6EY9U8ZeHtM0/
TtTsluFDJc2hgknQsTgnahI55yMV+OHhj4p+LfBNrcW/h3xRrGgW07+ZPFpl/NbpLJgDcyowBOAB
k0+1+KvjDRm1IaZ4v1+0bU5GmvWg1OZDdMQQWlw/zEgkEnPBotYd0fpJ/wAEcfHOhWej+OfCdxfx
23iLULyK+tbGbh54Ei2uyeu05yM570v7KX7DnxY+GP7Xln478S6PZ2fhu2vNRuDcRX0UrMJY5lj+
UEt1kH071+YWgeLNV8K6ta6nomp3ekalacQXllK0UyAjGA6kEce9ds37TvxYjjKf8LQ8XhCcEJrl
1j8Rv5/l7UaibR9h/tWaZ4X/AGrv+CiWg+C9L8QodNuLWHRb3ULLEnlTRi4eVFyQCwwF9Mn1rr/2
yf2tvDn7OHgV/gJ8Dvsumz2sBg1fV7EKyWxbfHND0P79iMs38Oexr8ztP12+0PW4NY0+/uLTVLeY
XMV7FMyTLJnJkDgg7snrnvVe91O51O5nvLmaeee5kaWWaZy7SMSSxZjySSSSTyapNEuVyx9oWJgF
3A9yGy3vg8df65r9Zv2cfHXhf9tf9jGf4DDVV8L+LtJ0m107azCRp1t/KdLiNCQWQsiqw7ZNfkKj
7ZOSWQdh1Oa6Pwr421vwZqsWr+HtZv8Aw/q1uGSK9064e3mAYYZQ6kHBH9KTEmfqB+xT+xr4p/Z2
8c6l8VvjFrEHhuy8NRTR2yT3EUkMscsZR5nkDnYBkYB5O7HNeQeI/Cd7/wAFN/2v9Z1Hwvaz6T4D
063t7C/1mYo7RQosux0XcCxdwwAHQHJr5I8W/H/4leNNAudF8Q+P/EmtaVc48+x1DVJpopcEEZVm
xwQDWT4D+Lni/wCF1xPN4S8T6p4cuLlFSd9Ku3hEyqSQHCnDDJPUHqaTKu7n7EX/AO158Cf2cfGG
jfAS1jD6LHCLDUNRguFey093LRulwxbIYnlgPu7u1fCv7fn7G0/wA8WN408MKdR+HHiCYSQ3SMGX
TppGZhCW3EsjDBVvw+vxzd38t9PPc3cz3NzdO8s0krFmkkYksxJ5JJJyT1rrPEHxk8deMvCVh4Y1
3xfrGp+HbDZ9m0y8vXlhiKAhCFY8YBAA7YpISk0z9Bf+CX/7OXxB8I/Fuy+Ies+HZdM8I33h6cWV
9NLGftHmvEUIVWLLkKTyBXC/8FNv2ffH0fxs8a/FFPDk1z4LdNPQ6oky7EAhjiPyg7vv5B4/nXy7
4W/as+L3grQ9O0bQviR4h0rS7CIQW1nBeHyoUX7oUHtjjHvVXxz+058V/iD4cn0LxJ8RNc13Rrpl
aaxvLrfG5Vtwzxk4YA4zRdl82p+of7Lt5J8SP+CaV/4T8GXyX/jG30fUdPaytZwlxb3Eks5jDDIK
EhgQeM9q+Y/gv/wTp+NfjOTVh8QPE+rfDbSNOg85L69ujcGZgTnOycAKqjJYt3r4/wDh38bfG3wj
urq78G+KtS8MTXsUcd01hMVMyqSV3ZyDgk9R3Priun8S/ti/GbxV4dvdG1X4m6/qOkahFJa3lrcT
jbNEwKlDhRwQfWktyeax7l+xL8YvC/7Kn7V2u22v62Nf0O+M2gf8JJaENAzG5jK3TMWOUO07iCSM
55wa9t/bE/Yp+KPjr9oGbxj8N76bWfDvjloXurqxuTElggRIyXKt88ZQlgR1GRjvX5gSTKExklG4
C/w49K9k8I/tkfGbwL4bsNC0T4ja5Y6XYQiC2tI5lKRRqMKq5XIwMDrTUewue5+iH7YvxE8K/ssf
sb2XwEOq/wBv+LL/AEj+zYvs5VTCgdXaaZdxZAdxwOc/rX5GyMAwwi8scsM4x7CtXxx44134heKr
zxD4i1a51vW9QKvcX16+6R9qhVyQBwAAMAdqxQDuDHrjnJqtiW7jGfcBIxAIbAxxRLLlirKUz/FS
sCUOGHGM4HWmSN0+QhScYxUiQ8IjkOmAoGOe/FMd1znBwOmaeSYiqkKpIB+U02Y/OUcnBIz6fhSL
0EYIdpxk+vanzkM4LENjgL0H4U2RFByWbYAflzTMq/Qhdo5JP1oAGkLDa5JUd2FLhchUyGPRs4xS
soLE7woB255P1pqYZvlJO3GDQMmMjbNy43HnBqKXI4wVPX3NKiLGwYEt7j1pzXGzcMbzjIOOn0os
A0MChXIBxyTxz9ajyzrjOO9KkZdthcjcemOeKlMezdlQ3YH3p2ECy/OEZV2qvB+nrTzLBNtG4nqQ
AvA9hUEh3AKSCvIOOxo2kR7DyCef6UmuwDhFukw3yBjnnnPtRK6+dk/dI5zjilk+UKGY8HGBx+lR
ozMxUcDoOMVKbuBK6+UxCp5jvwCDnnPamzBojuCfe6g06SR2jUBTuQdRQHZlVHTIxkDpnnjNWMf5
W52LBskfdHpimSBd4VGxxncP0p5MXmK+44BwfrTJoUTdJhcN8qkmldAJG7q2Ocg7WY9+9P3iQq4X
YOgyOPzo+1FYyDtLc9O1MEgkdVcMFPfNA9CeUmAEMnfnPU8VFJMZVIUbQhAGKTCSSFmBVgMpnjNO
CqtxkZIByT1HSpFcbMDtQFcnB+tSERhThFBwOhJ/KmzsGUANu6n2HH/1qjD+YVRBgHBbjofajUBy
MzbmcELjpnrRJtd8KpHfr0NSZLORlmAOM7eDxTZWKBwoO8cAnuPerQDXlBk+VNhVvvd+afcOkYCs
5Xdxg8k0shWRmKIqhumecfnUckQkK5bcxXPyjoc96oB0Eyx+Z1BHK/4U3DySH5CzE59RSlMBFZCM
n7xOKVg23AJVfVT3paXGG5RJuEfOegGAPypZCYkLMnUZ3HsP8ajiZo0DKxDjPXpzT0lJTaSGRvlA
HJ/zxVCE8sKsmCWKkDaRg9Ka+19iv8q5+8ecewp8krs3kohY9RjkUyRv3BQhs5zgCkySjLsDbQSx
3YAx1Hat9YcXDBApK7VCkYJ49fbg1hPgXKADJ4GBXQsI59WkaQbBkMFC7dpznH1/zxUgep/E+Yyf
s/eCYm8wf8TrUZW8wAchIR0HPQj+teJEL91QPfK84FekfEzxJFefDnwno0aiOSxu72V9rH94JPJw
f/HDXmZ3SYKtsKnGT6fWlYtbAzxMhz2PHFEYjwSoK57lc0YV5HLFmCjJOOnFKJTKCqhh/F0qSbmg
zCWPZK5dN/AGcjjg/Sp7tRJCoBBSNfmdyPvf4dPyqmsA/fuAdqDO0dMf57VbW3im3ZDMuONxyB7f
/XrpItchgKW0Jww2kgr6t7Y/PrW3p2qRqkESrGkwIIkUkuf6YGK5h1l3urHYUPHYVfgldGWQ4Zdu
cdcUmw1RTvZHh1GQsPmEm4/N2B9aveKtLW2aG9QApdoJAjHlcjrWZqB8yaVgwxv/ABFdD4niWXw3
oz9Sse0YbIUYHBz7j+dZPcqx0fwB+HE/xh+KXhnwTBeiyl1y6W0S4aLeIiRksRnoACa/WD9oDxr4
e/Yc+DGm/DHwNpMdvrer20k4upoRJAx4WZ2yc7mJyABgCvzY/YSVh+1r8LgF/eLrMTZBwdu1gRx7
Mfyr7o/4KxuV8Y+AsBWc2FztQqfmxIuecj16Uk3cmcuVJnz1+yh8Y9H+C/x00jxF4gtptSs3R7We
eBVBj8wBRJtPBAOCcEdc9q+rf24P2a9f+K+t2HxC+Hsa+JbPUoYIvsumASZ2byJQQSCG3EEjpgD0
r863lYpGWZUDjaocYUdB1+uP0r9Tf+CamiePtJ+GWot4keRfCcjr/YMErISgDSCYgA5VS23APHBp
ttMmD5ixBZaD+x1+yfdeGfGF5balrGpR3WLW1RWleSdXwoUkEqp+Xd047V0n7A/xV1T4n/BC7fVr
eGGXQ9Qk023ZYwm6FY0dC2CQSN5GRjIA718a/t32Hjuy+MWoyeKhM+nyOzaLIPnj+zB2wBjheOoP
OWr6b/4Jg/8AJEvFAaLLf26+5Tzkm2hOPpjtU7Gqu0bHwS/a71TXfFb6B8TdBi8OwapK66JqgtjD
a3IUNvRnZiCflGCOpyK4j9g+W0ufj78aBbIk1o8iTRSLhkwbicjbjpwRx3wK4H9tD40+DviZ8MPB
uk+H44rPVrHUrnztJjyDaEK6g8KAcnkY/vH0q9/wSyndfHvxCt/ldDp9oQ6kYyskgxx1+8OfpRdl
RUm27aHsfxv/AGPrK6+MPhX4g+FbYRD+2rabWdPRWkEw85CZEXOFGFO4AYwSa4j/AIKk6Xbjw34N
uo7VEnD3UYmUbe0ZCk45GfWvXvDH7V9nYftD+Jfhj4slhsib0po9+7CNZMhNtuRj73PDdzkV5b/w
VFA/4RLwUpAybq5w5XdtwqH8BjPP09acSJM+Sv2Rf2nIfgF4+Z9R06C88PakqxapiHzJ4kG4h4j7
EksD1HTmvbf26P2StH0XSn+Kng4QWWh3rJNfWrSFMSTt8siK3AUllyoxg9K+Gt8UXm4UAsrKqmXH
GAOnb26V+s/7YMDX37F8OFQt5GmybSAE/g7HtWt7MGk1c/I28t0L5MKkIhVmIXAH9f58+1QJA42K
m4IFydx5B645+tX55JHQB0YSMAzJt78/1445zWa97tkKCRwrcggHj9PaqTM2BlMgZGHm+YNo3dTy
OMjpUkaAXUZbYGBChu/oOewqpKoZ3lZirHCswOM55wcUfIHEyo53E7Uz8uPTPUH9Ksw63LN3M8s7
mPbgnH7voO39AaR2kiITd8u0kP2zjgY79qrLA8chLHZgcIvfk4+mannuTAwjVyEIyy9CRj8uuPyq
RxlqWWnliRG4ATBLbcbhjpmotrSO7zMFwvDHnAPP9BUCyq7uBGdrNlsjHPIHt60kl4S8OIiu3kIP
m7f4cfhVb7Gt7ak8mxUiyGKHqUxx0565/wAmnPbvAkjRkbC3y7jg7evUDqeKrXc2cRb2jkH3N/AU
+hyf85p0Ss0YV3dtpOUIBB9fy5pGfxM+kv2B9J0/U/2pPB1pd20eoxNHdu8U8YaNdts5GVIIOG2k
V9eft0fHzSPgtMPB1r4B0PUjrGlO5vp4lUwbi6LtVUycFc9RXyX/AME+5l/4au8EKrAZjvF4HX/R
JD9fSvWv+CqaCP4m+GXLuu/RiMBMrkSvjJHTnNZrc3a0R8NyO4uQIo1KElcEgKp7DJPv+ldp8Gvg
x4h+N/jmz8L+H7ZXvJ1MklzMSIIY+rO7AHAA5A5JxXIWMEM19aw3V3DZx3EyRtdzRgxwqxClzz0X
knB6A1+s+j+GtH/Yp/ZmvtZ8G6RJ4t1m5tPPuNWskM0VxLsJSeTDErEowPl7AZ9apsaSW58pftff
sx/C39nTwvpVnpPiLUrjxrceUxsLgo0bQZIkk+RF2jPQZJ/nXon/AATU+AXhPxXo+r/EHV7Maje2
d5LpUFjcoj26KYonZyhB3N8+OenPtj4i8c+MtW8f+L7/AMTa7dyX+r3szTTM7FljLEttRSflQE8K
Ogr9WP8Agnrq/hHU/wBn6zHhbTLnS7m2uPI1lbk7jNfCNN8gbcQQy7Tx+VS7opWPNvjJoPwm/aa+
BHjHWdL0618O+J/A0V1czQ6fEkMsTxiT5H2qN6P5ZI9D7ivgb4SeDbX4jfFPwt4Vu7ia3sdW1KK0
nmtsF0V2A4PI9cEiun/ae8Q+Gk+N3i0+AoNS0rRLiVo72GSRlS5mDHzfkz8yb8nDZ9sV1H7CmueC
YP2ifD8PinTLi7vZ5Auk3EbfLDemRPLZ1DDjIPOOvWnqkZxd2foH4o0/4O/s/wAngLwVrnhjR00v
V45LCDVtQtI5G82MIF819hJZy5+YkYPoK/Ob9sH4M6H8EvjRqHhvQbmVtKnhhv7dZCCYPMLEw5A5
A2jB64Ir7e/4KY3fg+H4TabDrlrdz+JJp3XQ3tm2iNhs84uScbdnsTnpX5d6rrNzrN0l/q1zcahe
fLG8lxOzyFFONu4+wA/ChITd2foZ+wD+zn4N1D4Z3/xH8QaZb+JNSnaezjsdQgjmtoI4m6oCp+Zu
5Oao/tF+CfhN8ev2ar/4peBrSx0DV/DkWLm10qFIjvLJvt5QqjJGcq2Oc/l9L/sf3vge+/Z20V/B
dlc2GhqsqXVvdEmVbkf6/ccnJLZOQehFflR8Xta0G28f+LbP4d3Wpaf4KvLhHjs5rh0WXYAcurMd
wDAkbu2KS1Kk10LH7P8A8L7D4y/GXwz4Ovb2ewsNWuJVlmiRXkKJC8hALcDOzuD972r9OPEvhL4G
+CvFWg/C/WfCGh2EuvaaYbK+ktIledwRGI/M27hIc5DZ5Pviviv/AIJw6h4MT4320WuWVz/wk0sb
tokkbExRPsYSBgpxkoWxwR1z2r3n/gpp/wAIlHo/hua4a/h8dxkvpk9q0ixx24kUysxXABBxt5zk
g0uoX0PjT4z/AAUtPhp8fdV+Htjqd1c2ltdW0Ed3LEu/ZOiMOnBK+Zg9OlfefxK8C/CT9jb4J6Lc
6v8ADzT/AB48dwtk9zeWkDXErsjyFmd1bAwh4+lfmbY6xrfiDxVb6m2p3FzrNzdxKbm8cyOz7gqF
mIYtzt5r9cP2hY/BcXwY0VvjuwkhW8jEh0UTqjXJVwNoU7sFd2c+9Deo7JI82sfgr8J/2v8A9n2f
W/DPgzTvh9fG4kNteaZaQrKk0WQQzRqA6HJBB/mBXzD+xX4b+C93rWqa78Ttf0yGfTyIbbRtaKi1
uVdB+9w4w5Vgwx6kH0r7n+FL+HIf2bdQPwIihksVNz9kXWTMEeb/AJaFifmye3bOM96/JHwx4Z1f
xl4gsNH0Wym1PWb6QRwwW0ZZ8nrnGcAEZJPHriqTdg0u9D9Afhb8Xv2fPi/8VrXwLpfwW02Jr2SR
Yb+TT7fynRA5D7QuQDtP5ivnP9tz4A6R8CvitDDobOdE1y2kvoLMp8tniTDRKe68jGeg47V9ZeE/
Cfg//gnp8FLnxH4imTVvHOpRmMuu5vtM4V2jgjGPkQZwzY75PavgvxJ4t8XftQfGGGXUdQKazrl/
FaWNrPKTb2ayOAsS9wgJH1JJoRLTk9Cb9nLX/DOj/F3RH8WeGB4t8P3TmylsJFRgrS4VXMbHD7Se
nvntX1j/AMFE/gV4C+Hfwy8Oa14X8J6doV7Jq62c0unxCISQmCVtrKMA8oCP/r16poHh7wJ+wl8L
dC0u5tk1jxZr15FEHkQyG6uyUDlW2/u413ZA4rO/4KfNs+B3h5iu4HxBEvBxyba4A6fWkm7jkkfM
X7GH7Hcnxf1GLxR4mjEHgjT5/MYsR/ps0bDdERnhAPvN9a+gtL1P9lDxN8Yb34bR+BdEgucmCHVx
DGllcy7RmONw33vmIBxgkYFdl+whGP8AhkmRUTAN1qXy9Acsf/1fnXl9n+x14B8P/Bvwv8QdPv8A
U211W0+7Zmuw8Ds08YZQmOOWPQ9RSu3rcq1tDzLxX+wBq2kftDaF4OS9ki8Ca1NLNba2sQLxhI2l
MDDd98BSAeMgg+or2f46WP7Nn7NWuaJoWv8Awp/te8ubH7RHcW0QkwiHYS5eQZb5QTj1FfQfx2ud
ctfE3wzbw5b21zqn9tuBFeOUiMf2WUSZYZI+Ukjg815n+1x4X+BOv+KtBm+LPiW60HWY7IpaxWsr
hXiMmTnEbA/OMds+lK7HomeOftg/sreBrf4LWvxS8AWMXhSKCzgnm01YjsuY53j2H7+I3Uydsgg4
7CuU/Y3/AGNLXxrbx/ET4jRLF4OhhMltYXUgWO+G05mkYEbUUjjOM47Dr9D/ALaQurb9kmGx8H2s
N/4SktrGOW9e4ImjtA8JhaNSPnLYXJJHB6elvwtaC9/4J32lrJmNJfCbQuUwMKVKk+nTmnzPYS1v
Y4Hwb4P/AGXP2gNZ8TeCfDPhmPQtbjilitL/AHGIz4LJ5tr+9O/aRuxgcEds18UfFv8AZ28VfCP4
lnwXqNrPeTzSqml3kaYGpxltiPGM43ElQVJyCa+79M/Y68FfA74kfC/xT4e1LU7i7n1lbd476SOR
HD28jbl2qCPueuOa88/4KhalcaN46+HF9ZvLDexWt1LDcQD95A6TQsrg54wfY1UZSvoJpXPQf2eP
+CePhDRPAS3HxK0pPEHiLUNlwYZGeL+z8xgeWNkmC2Sct+HavzW8W6emi+LdX0y13SJa3txaoJWJ
YKszouScHOFH1r9Uv+Cf/wAVfFnxX+HHiO98XavPrN5aat5EE9yqBhGYY2x8igHlmHTtX5e/FYAf
ErxaYQzyf2teliwyeJ3GOPp2pxbl1M5Kz0P0F8Y+AP2TPhf8MdJ8Q6nolnrkU8cEccWk38lxeTu6
j5tizA/U8YFbnwj+Av7Nv7Qfw9vfEnhTwXeWNmsj2bNdXFzDPFIqK2QPNYcB1x1r81vAngXU/iV4
g07w5oNs1zrl/KsECR5/8ePOxMdW6AA1+mR1jw3/AME9f2dItLvr2TX/ABTqbG5j00zfNd3ZSKOX
YQvESYXkjoPU1N7Oxb1iz4f+A/7Meu/H/wAfPo1ik2n6FYyFNQ1koSkKqxwnBGZGA4+uTXuP7Uvg
39l/4V2PiDwnp+g6lL48j01pLa50++nmS3nZWCeaTNtByuSpXoeleof8EvZxd+DviFOy7ZJNZjlY
DnBaLdgHuBkjNYOs/smfs/8AxS+Letafa/FHUj4s1DUbmSfSbO4gLJKWZ5EAaInjDdT0FTGV3dlS
XNofnIsSwo43GJD949gOfbj6/Wv0Q+Df7Fvwt8HfAy08b/Fzfq51Rba8EkNzPbx2MM4jVE+RlJwX
BZj74HFfLnxV/Z2PwX+ONp4K8ZanJaeHNQuUaPX4AufsJkCGQrztZQfm468jjiv03+MngXwTrf7K
V54d1PxH/ZXgyHR7VYNbLqdscYjMEmSMNuKpxjndjvWjlrYlJct0fnn+3H+yxpXwF8SaPrvhW8WX
wh4iLR2dnJMZXtZUj3EK5zuRhyMnjmvlueJVO7zAgBLYcjHfgCu18RfEbxT4m8K6F4W1fXbnVNE8
PtKumQOE2IGJyVwAxHYBicCvub9kD9nr4e+F/wBnDUvi54u0aDxjNdWVzffYL+3SWK2itnl+SJWB
wz7OWPt+I3tYm2jbPzeEm6REBL5BfAHylPWmm6BmYNjZneH4Gfxr9GPAXxo/ZZ+M0ms6B4h+GPhz
4dQzWbeVq95BaxK5bgiOZFGxxkMOefwr4n/4RbT9H+NI8PeEruP4gaXaa0kGnyouF1RPMTavGQM5
2FhxwTwDTUrago2djzyORXVoxKQ2SzKx549T6U15mMYXzNrE9cZz1xiv2d8M/ADwT4lvzY65+zzo
vhuzaFs3pktJsN/dAjO4E+tfmH+1j8CLP9nr4zaz4V069lvdKEMV5Y+cMPHFLuIiJ77NpG7gkYq+
e6JtrY3vhL+xT41+Nfwf1Hx74W1TSL2G2aeMaK0j/bJXi6oABtVmwdoJ5yOma+e9Tgltbm4tbmGS
xurVzHPDOhSSNx1RlIypB4wa9h/Zk/aO139mvx+mvaYj32kXYWLVdIH/AC+RA5+X+7IvVT+B619a
ftg/sw6F+0X4CT45/B1Yr7ULqBbrVdOt9g+2pjc8nLALOgPzD+IDHJxURlrZmjVtT86NKjsZbyzO
pCUWPnIt19nP7wRbhuKkjG7bnGeK/TX4O/sZ/su/tA+Hb7U/BN34lvIbN1t53mvJomilKBxlZFHY
9uP0r8vraRfKyjEb8YEnABPrmv1K/wCCSEiTfDXx6Qc41eHnpnEAGf51lOTi15humfl94r0ceG/E
+rabHLJKmn309qkkgAaRY5WTJxjkhecCsgsnzGMnJySGz0PXmuw+Lqm3+JXi+JmJZdavVbPHIuJO
R7Hr+NcYkvlYC4GBkk810NGSHsUcgINwByB3/H0phiLMRuIwDgE4znFPTDsuWMWfvd8j8q/T/wDZ
a/ZW+Dy/svweOtW8PRfGPU7yNb6eK0QNNbNtUtaqnmD5o8kkH5j6dKycrM20Py6dipDMWWPOSVGM
8cUFI2ZgYi2D/EOa/SWz8O/sjftP/DPxTY+G7Gw+EHiawYGC71eVLWeGQZIdUMxEicFWHXnp0ryb
/gnr+yV4f+PnjXW9b8U3MeqeH/C86QSaagYxak8iy7WLhhhAArdDnjpT50lqKx8ZSpI0rqvzIo4X
oenpTIv33VgQOuMV+z+n/slfBLUvFz6PN+z1e2VsJWjGrTOfsuFzh8icnBxxx3FfAn7UXwB8N/so
ftMaPZ3lpP4l8BX8qasumRt5c6WnnbZbbeW+fb0DEjIIzS5mL3U0n1Ln7H/7C0n7SHhnxF4m1vV2
0rwxYRzWtrLp7gzyXyqrZZWUjYobn1NfJTQy7f3nHALAcD0r93/2RvFfwr8U/BK71D4X+Hbrw74S
jvriO4sLqIrIZ1RfMJ+d85G0fe7V+Tv7VPib4NeLvEmmXHwc8K6p4Zt9sy6m1++Ip3Lr5bRoZHwR
8+TwMEVUJ6XYnH3tDwJQpYbTk9sd6a0UkQDdVJIwT19a/RXUf2Q/hkP+CdEXxFj0JoPHA0FNU/te
O6lDGbzOfk3bMYJGNvT0607wr+xz8MNd/wCCd198QbvRHPjWPQ7/AFVNWS7l3LNDJKUGwNsIxGo+
7yM0ue9rdSkkr+R+cjo0vJ+QD+VMZQCArFQePmI5r6l/YS8I/DT4ifGUeDPiZoN1rUWswmDSfLeS
JYrpcu29kdW5RTgjgY5619YT/sk/s0fBj4qaZ8PPGOgarr+u+Mb95NEnS5uQkFu77UgkZJVxtbI3
YJIwT7Tza2LsflRKjZKOoVcZ56GmMueQduADnrxX0/8At8fso2f7MvxXt10i7+1eFfEUc13pto5Y
zWfllQ8TMSS4G9SGznB56V8vtG2QByAMHnGR6VqtjG44zFUI2ZJHAx7U6RvkcY6ZO7PAwOagCkNz
kocgKDiu4+C7+EoPif4af4gWl3e+DWusanDZkiRoSpGQQQcAlScHOBxWctNTWOr0Ppz9r/8AYI0H
9nH4HeHPHOjeKdR1i8vLm3tri2vIoxGRLEzZjKAEYI754718ao2TkliqtjcRxiv09/4KXfs/+GfA
/wCzz4d1XQNQ16OK01W2tbexvNVnubbynjYACORyFIwMHtzXyP8AsTfswJ+098WBol/fi00LR401
DU4wGWS4g83aYkZSCpJ4z2BPtRzJK447s+ebsFSiq7bickE5zUJjYE8FSRnHSv1H1/4M/sV+FPjO
vwvvtH1+LxW95Fp4iS8vWhM0oXYnm+Zj+NRXzP8At9fsd237MnjmwvvD97v8IeIGf+zre4laS4s2
jVTJEzN95eSynk44PajmQ7JvQ+UFwYzyDhSDnuaRowUG5vkAHT9a/Q79ln9mX4E/Ez4M+Gta8V+A
fHd/4huhKt3f6fb3hs5G8xlDI0fybcADI964T/goH+w/pf7NU+leKvBssr+DNVmFl/Z11I01xa3P
lvISHblkZUJ5yQQfWkppuwrHxWzIzu8YLLjsKjyzFAo4KkbfWvvH9iH9k34P/tPfCHxLpt7Nqlp8
S9PMrfaEncQwxSDFvJtwVYblII68GvYfAP7CP7N/j+/134dWOoa0fid4esFTV5Uu5RFFdBQhlCkB
HXeQcdwelCkh2sflikDqAHXDjjjpX1p+xf8AsHw/tXeGvFWq3PjCTw8mkXMVtBbwWazFy8W/c5JG
Ow/Ovn/4v/DLVfgx8TvEHgfXJYrnVdFuVt55rQkxSBlDo65wcFWU44wcjtX6af8ABPr4HeAvHX7P
w1Twv4r8XaF4knT7J4hXTNTktlW7ALD5duGAVxgjPBqXJphoflFrmgz6Pq2oadM4lnsriS0d1BCs
Y3KkjPYlc/jWe8GdjMNrDII710fiPSJbXxdqunRztcTxahNaGadsNMRKy7m/2j1PuSa/Rqb/AIJ/
/Az4EfBjRtd+PviLULbVbqcwXF7pM8n2bzH3OkSqkRbAROpHUGq5h2R+XrDyZt+Mg8c81JtClsqG
zyCK/Q74w/sHfCXxp+zrf/E39n/xPc6jHpAmuLtdXu3aK4hhVjKgDRqySrgEZAB6dxX53RjygF2b
ssVyp71S1IZFKAp+9jFSRqS33gR0oEEZUM2QWPC/z/pW54M8F6r4z8T6doGi2M+oapqE6wQW8CFm
OSATwDwAeT2FDdhJHu37HX7Itt+1fr+vaf8A8J1aeFNQ0uKKSG2mtPtMt4rbtzKvmIcLtXOCcbhm
uS/as/Z01D9l/wCLN34MutWh15Vs4L+DUIYTCXjkLjDJuYggxsOp7V+nv7KP7DXg39mfxb4I1zXt
buJ/ijdWt3GlrDMGs2YxnzRGCm75UI5LDJ7V8ff8FZyp/apXMZcr4fsScenmXP6//WqE2VY+Jivn
RCRFwc7iB3qukZViWPy8nAr9NPBP/BMTwF4T+EWma98ZvFGuaPrN3MTJDoTrNDEHy0cfEDsTtXk9
M1kfGf8A4Jp+D5fgjf8Ajf4M+ItY8Q3dgXnuLXXWSMT28YJlCgxRlZFwCMjBwRnpTUkFj84kmIYK
wI28g4psoLMMAAg4GDivsb9hb9hWL9qC31jxF4j1O40rwbY77VZtOkT7XJdYR8bWVgECPknHOcV7
X4T/AGAv2Zfif4lvvDHgv4y65feKoraR/sBeAvHt4LMjQKSFYgkA5+lDlcLH5nxQFy/H417p4I/Z
G8a+NPgL4z+K8cKab4d0CAXEEl2jD+0kDYl8kj+5ggnoTx2NS6D8A9G8C/tJz/DX4z+Iv+EP02xl
eG91eyIkTmHzIWDFSFDgjkgkE4r9fNI+GHw2sf2LH8F23ipj8M5NCmhPiLeo/wBGkLM0ucY6se1F
waPwIlRjnerKBkn/AAqMRZc45BPUCvUv2jPBPgP4ffEzUNG+HPjF/G3hdIIZY9ScLuEjA+YhIUBi
uByB/FS/s5/Bw/HL4ueEPCDTXdpYazf+TNqFtBvMMYDM55GP4cc+tVcm1zzNIRs4x8oA/wAiuz+E
3wx8R/F/xxpnhTwtYXF/q99Kq/uo2ZLePeqtNIVGVRQwJPvXs37bv7I1t+yf4+0fS9I1q817RtS0
77YLm/iRGhdZCjKWXCnPBHH519sf8EnvB2iaN8AvF/jWHTrQeJTqd1af2o0YM3kJbwOsZbrt3ZOO
KlsaR4jqH/BIP4mIl0tt478L3V0qlxafv1ZsDKjocHJHP0r4U8TeG9Y8Ia9faLrumz6ZrFhMYbuy
uoyskTL2IP5g9wQa+5fA/hT9pfw38XfC/wC0FqxYaf4gv7OG/vvPhdJbO5kjiCm3X7qbduDwQcdO
a+gf+Cgn7O3w6+Ivxm+Fsmu6tB4Fu/E0t3YX/iGKNQ1yY44zBG5J27s/KGboDihSHbufkC6u8hJB
JB6DninDlT1ZOu0mvpD9rj9jXxN+zD8QLexQ3Wv+FNVC/wBlawkADTSAfPC6KThwSMdAc16X40/4
J2w/Cb9mybx/8RPG0XhXxS8DOvhuaOORZH38QLJvyzsgzhRwTz3pcyGkjnfg5/wTQ+K3xi+H9h4u
tZtJ8L2t27iC018zQXDxjhZNoiOFPUZ6gZ71lfH/AP4J0/FH4A+BpPGWqzaPr2jQSLHdNoUsskls
hB/eurxj92CACcnGRmv0L/bB8BeIv2pv2SvCsHwknXXZJL21vM2l4kAlhWCVGG5mUZDMoKk9vamf
BXwnrH7On7BnibS/izcQaJqIt9U3rfXiOP3qMIk3BiCzdgCetFxrRn5b/s4/speNv2n/ABHf6b4U
t4IYbWB5Z9X1AvHaRuCuI/MVGy53g7R2Ga971H/gkB8aLS2ubiLWPCN9JGjOsCXs4eQhei5gwCSM
de9fRv8AwSf8Q6Xq37MHjHwtpt9CnimO/urg6eJAlwEktoVjk25+6WBAboCOxqh/wT2/Zq+NXwi+
O+p638Q9Ov7PQ5dEmt1luNSSdHuGlhKkIsjEfKr9h1/JXB6n5Ua3ouoaBql1p+q20+n6pZSGG4tr
mMxyQyDqjKRkVVAydw+YA5244B+lfUX/AAUnjCftlfETywoDPZMQg6n7Db5zXhHw2+GniT4t+LdO
8LeFdLl1XXtQY+XbRYA2gZZmJIACjkkmqISOYmjYxM6KuW/Akeuf/r17H+zp+yR8QP2nptS/4Qi2
s/smnorTX+qTNBb7icbFcI25uvA6V7l+2/8AsneAv2YfhH8ONEsNXj1P4jSXUw1S6MpD3MDK7qxg
3NsRWARTjnBr7Z/aX8P65+z1+yl4e8AfA3R1s9T8QX6aLHDbgtORPBK80iMSMSEp949M8YwKXMWt
D82vj9+wN8W/2evA8nizxNaaVeaQs6QzXGk3bTm3zn55FKKQpIxkdOM183MrRINjKfm445HNfr5/
wTzj+KEE/jr4N/GTS7qbSk00XsVprgM0rxXDGN03ljujIDcHODnGOleA3X/BNyz+K3jz4uad4C8a
6bo83hfWbi1sfDF5GZ5jGIlePdJvDKpZ9oYqwGPY0XEz8/ppN0m4qcdQR3owW+6MFjyCcY+teq+A
P2dvG/jr4vx/Da30K8t/FEdx9nvrKVBusEDhZJpOeEXOc9xjHXNevfGz/gnp4q+FnxR8J+AdB16x
8f654ghllNrYwmKaySMpmSVC5whD5ByM4NO6I5bs+SnTDYB5HbNP8pmBbbsYDBHXiv0O1H/gkPca
RqBtL341eGLS7bDC2urQxyAHIHymYHn19q+av2qf2R/F/wCyv4xXSdZf+19GvoPM0/XLaFo7e4bB
3x8k7ZF6kZ5BBpXNFZGp8G/2DPjB8dfBEPinwvoVmdEndoYZdQvFtmcrjLBW5K8kA+xqr8cv2F/i
78AfBj+LPF2g2kWiRzpBLdadepcGEtwrOo5VcgDPrivSY/2tf2i/2lvA2lfDrwhpE4j0JIJp7vwb
bzQXkscaFF3usvCknJAAyRnjpX3T+zNY+L/DX7Dvi2L40PfQ6qy6mZF8UyM0ghMI8tSZTkjIOBnk
9Klbj5j8RGQo2C4x1Xn9aVVCBi3OOm4cfWnXEKxwwSRnf8gIBHt/+umEebwGPy+tW2ZDADyGJ3E5
z/jTmZmU45A4OP6VFJGVOMHjj15qRpCFZCRnpxUaANbLSAAbeeSRSTyg+2BzgU+QElWydgPI9aiZ
XJIwB9T2q0S9yyNjKGVjzn5euaUMvl87lYjnjoKrB1ChSxyOhB6VMrgHIJJJxjPOKhlJjW8lXZwc
MM496cJCz7tvfGCevvTS25yHKk9eaWbADHgA8EelIoe0vzA9s8g0zzH8zAAUAED1qNgQ4DAsDzx2
oAVidzYINCRJNKu5iclQpxjuaR3zj+Mf3T2pkhYOSMjIyOeaZuKgpjeCcnHXNADtys2x/ukZHHNK
wDSFc43cZB6UyVFjIYsOO2acxO3cg35+bI4oHYQkqyrxjpuI7VOm2NjnBGBgioCOCQPXgdfalJaF
Sd3J4A9adxpDZD6AtluoqSPBC/3geh5FMWbfIQgCtjpmkldhglipOdwHrzTGPkKRScruGOAD2pZW
MzFsgoOB2/OogBkAycjI9c0b4/KIGS2O1ZiHz4e2JHUHoO/FJjy0AP3fQc1HsXaAC4yOjHinMd0m
0DaR7fzprULj2cq25lA3DPBpFkRX2kE579hSORtJCkEHJyfamrgNv6Hnr1HpRYXUmkRcM+MckA+n
FRyYVVJbcccYobcwLN1Ydz1pSq5PBUAckU2UCSFX3KNvXHGcU15GfcpPIPpzSyFI9pyyAjpToSCH
LIflPU880riuOXIQ7RyMcmkMhSQAE4x29P8AIpZvLU4VMkgAjoc02VQEyFCnoRn0p7jFeJl2uCMj
5gOp6/pQIBvJIwckjP8Anmmj965I5XoMdhTpHIYbQVUEgAncQPejYB5lwQQp55P596jmjLSKobaz
HBz2pziTCsoBVvXvTSrKEd/l3HAA54/xqeYVyYSRx5DZHBHPYU5Qpk2g715IB4zVVi0TlWAcscZP
8qkBdFKKhORyVGcCgEDTBzjbh8fKvT680rfvMqMFUG3BAHNOjBkCq2Bjpz2pSyNcFVypHBY4IPvm
nYY9VEpMSqVJAwM/0qHcyyMrjAORgHoamIEZbYCXIx6596gn+WPG35mzj2pIu1x/nKFcF1UMOhPP
501AYl5I75x0zTVjTy90mG42gEdqeyM4Cr/F2HTA9KvQgkgcyqFK7Cozlc4pkqOuGEilix+QHtUe
4DK7c4wODUn2cxyNuOcDg4zkUtCkPaMxjf8AwsMc9DjrS3EPkTHaGVSNwAIPHt+lQlWC4BJyOQfT
FDqzpt2HA7KeoHvUqWoMfPmaCN8gAE5yMHOaYq44TD5GWDdBSblMYXghiecc1KpLZVgzFhkAKOw7
1pdEjH8oFlU7mxg5pRGBIGwO4Ck9PeooowZADggcKMd6keNw5TzAsa7s7sDnv/KncbHEO8zqmGIJ
+Y8cAUyRyqShupOcgE4piwLJlkIIQ5GT1PrTZfNKEF1UHOQT/P3pXJsypKEE6qjEnPOfWt5nZdTc
IiDkZO/I44yKwoyZ71Plyc9M9a2IdpvFDEhd4BYD5Secf59qholq5a8UMhe2jSRgoLEKfwrI2edJ
sUrz+Hb/ABxVzXQiXULR/vUAJKv/ALxrMZedz8E+hxgY4pdCkEmWkKA7g2clj3z6/wCetNukWOR1
AztYruJyT701myCuAw7cjiniPKKrKASN3z8flU3GtTXmKiNjtZHPIVuAf8elS+ZI2nBXDFV5dgME
UksQeIIzjgAhj1X8PrTJJ0kQx7mUDA2c84710szuVkKIkkLPnIBHy5OR0qNJ2+0bFGVB+YKO1Qzm
NGYxg7skZI/zzQMxqrRhlYZ7Vm2VcLoxNdsiljE7gZwPxrR8Q30EnkW1qcwwgI3zblbjr0HoKx2d
ZFk3Nhycg89arlyvy9h3HenYq9j0D4S+OdS+G/j3RfFOhSrFqejXK3Ns0vzIXHYjqQRkGv1h8V6p
4E/4KDfAiDxXZ6pb+HfHegxGCSDUJfKSKXaGlUKTlkbnac9uemR+O2mCO2uEkQ5GRkHjjvXb6feS
m7MTb3DZ+TaducHB9M447CmoXYpcrWp9Ofsi+BfDHjP9oLw/4e8WPnS/NlcmS42gyxgvGmeRglT3
5r6l/b++P2u+Ftb0/wAD+DNUj0XS7a2S8a60efy5xJ8w2fIcY6HpX5042mNpZYlaJcsS24ueMc9e
oHFXp79ppGeW4aLZHkfKeF79vp0+tW6FzJSsj9Tvhf4o0j9sL9lC8h+IL2R1bS2kt3uIZQsoeGNW
SbnnLZBI6H0rof2AvAZ+HvwUmnn1rTr5NfujqMC2kgxEhRUw3+18vI7Yr8jo7u6jZyly0MOVAETY
DDHJx0GfmH41KLqXSUAimdUQnKoTtzjB79On4jPWsnTaNIyTZ+rfwN/Yr0j4f/E/WfGXinULHXJk
upJdKgWTMUO/duZ1YkbgGAH51xH7FV5pNh+1N8Z7O1ntYIVeRbaKPChkF3KRt7YAwMDpX5wz6vqM
ACS3c4tiGaPYx2suRkgkewqldXlykxuUu5kum3ETxEq65JJ5HPOeR71apNkudnofVvxv16xP7dCT
Pch4IfFNiZJSwEZXzoQTuPA28gn2NfQv/BUfUIF8E+DHjdZW+03L/unG7b5a88ckZx+VfmTJfTMI
/Nkm3r867pCxycHOO5yAeKW41u8uwqveySli/lRzzsVQkZOASQMn+Q9BQ6bQozVkrHvf7JH7NC/t
EfEg21/qaWmgafsuNSiiuQlw6HcFWMbe7LyfbpzX0L+31+03oa+Hh8HvCoW7ttPeKLUrttxMRgI2
QrxhuQuWycYPFfn5Y3t9aXNxJazTWs6qV+0QSFCoIwQCCPU8VFLczSPNJNK8833CWYtu5+9nv164
pRp66kyndCTStFvUM5cEkbQQMD5gM8e/1qNblsBV+WVi3mKGBBPTj17HpUcnlS25BcswBOS3A6/4
1Dbv9nlVdx2uuBu6A+2e1bKKRgncttcibLeUXBX5mHf0xj/PFOlMdpvOWfB+UErgH0HfPeq/2zyp
WhkIZAAdm078A+vpUcmxpVyuAOmDlSM98+2Pzp8ty3Ky0JLu4dViEiysX+Yjb8uOeTj8u3WmxqJI
WaZH2FsqhwNvHAGPpTS/2rfsR4hk7yDlfXIHb6e1TNJL5wQtuXA+bO08egx9BT5UkZJu92NZmj3i
MtIQN3H3SMDBPoaSX7RMr7Dkn5WIbvnr9f8ACnAmaWR3URqDkE5IY+/qMfyqK5uHNypVFLD5QQMK
v4j24zUbGjlckSF2kMhjZ5YX8tsgEBiMg+/anxTb5wHJIQHIY4A6jP06moWGZmTIxgHarcf/AF6n
EqI7KJsv1bAyB7jp/kU7BA+kv+Cfc0cf7WHgZ2lVQ4u1GXxkm1lCjB6k8/lX15/wUE/Zn8b/ABh1
iw8SeGbazvbDSdLYXEM9wInJVpHO3PXhgf8AgNflvYapfaNqlvqGnXs1je2b+dHJDKUdHA4II5B5
7c811F38aPiHeC8tJvHfia4t7lNjRyavccrg5Vhv5z0NQ4m7aehhyw+VI0UrAHdwzL0P+B56+vav
r79h79sVPhxt+Hvju4+2/D++UQWtze8ppgYNuQgg7on4BB+7n04r4yuQ1rGsAP7kcgLnA9v1qOMC
7+U74433Ab8EFeg4/Khq4k1E+1P27/2TF+F1zN4+8LsLnwjqkgDRqY1FlI5JVEwAWjYZxycdK+sP
+Ccnw31z4ffABZdaiSAa5d/2nZxhwzfZ3hjVCwHCk7enpjvX5K6j8RvF+uacul6r4m1fVNOtii29
hd3sssMYQbVKRliOBjn68Vu6R8bPiD4Y061sNJ8c+I9P022BWO0tdXuI40Xk7QquAOTStcFM6z9p
zwBrPw0+MviTTNXsvIu7i/muUMbqwaCaR5I3B7ggj3GCMCul/Yw+Gmt+PP2gvDMmkRpPDol/b6te
iRghSBJFLNyeTkDgdzXh3iLxhq/jO8Opa7q99q9/KAr3OpXDzSkAYwWPPA/n71J4a8Wa74H1pdS0
DWNQ0C+MbRG6066aCVkJBwShDY7+lU4sFJI/T/8A4Ke/D7WPFPww0PXtPthNZeH55p71w3zRLIqK
rAdTyO344Fflwzhba4V/nxlyq8Nxkk4xj8Peus8R/Gnx/wCJtLm0nWfGut+ItNlI823vdUnkifHP
KFsHBAPPpXBmdZpZCdrZXBZhnJJPBpcrIVnK7P2h/Ye+Hev/AA+/Zt03StetVtb67lnvYohKsh8q
bDRliCRnB5r8nfin4N1r4a+NtV8PeIbZrHWLNts8bOJAQQNpDA4wykHr/Krek/tD/FDQLW1srD4i
eJ7GzgQR29vHqUnlxqPuoB6Afyrj/E3iXVPEusT6lrGpXOqapdMTc3V7M0kkuAACzH2GPwoimato
+lv+CeXw/wBc8VftA6HrWm2LPpPhp2n1C6eVRtEkMqJhScnJbt0xXvn/AAVI8Ba7fw+GPF9rZyy6
Dp0ElnfTxOMRNJIpj3KecEjGR+Pavz98E/EzxN4C1aa68LeItW8PXEybXk025aHeB0BA4bknrWv4
1+PXxB8f6P8A2V4j8b6xrenLMsv2a7uWaJmUjaSD1wafIybpmB4fv10rxJo9xM5WGC8heR2Iyqq6
sWB9sV+tP7YPgTVv2mfgJog+G7W3iASajFqEUkNzGiSRCKVchmIGQzjjr+Vfj5ezqsiKwV4lOQgx
liRz/T8q9B8F/H34jeAtJTSfDvjjW9B0kM0v2a1uykUbN97aDnGTzScXcOZbM/U79mLwrf8A7OP7
NVza/EV7Xw5JaXV1cTPc3UZjRHwEy4YqSfTPU15L/wAEwfB2hXmg+LfFZ0yB9bj1D7Jb37LukSBo
lJVT2BbPTrivg7xn+0N8RfiBop0fxJ4z1bXNHeWOVra7uS6yFTkcDHQ5Iz7dxVT4f/GPxn8NBenw
h4s1Lw5FdFTcLZTELJgnaWVsgMM9fzo9mxqabPsz4p/spfG/9oT9oCa/8X28eneHBcyWtrq0csMi
W1krMUCwhwSSO+MknmvPfiH8A9N/ZB/aV+E73/iUaholzf29819cxCA2/l3Mayl/mI24YHPbmvIj
+2L8Zy0jR/EzXAcBQWlQjOTzjb/nFcN4++KfjD4mXMGo+MNfutevLdPLimvZA3lpn5guAAufQUJM
rmtqj9Sv20fhX4n+McPw01LwXYR67aadqJvp5oZUIELCMq6EkbgQDjGag/4KH+GtV+If7OsF94Yg
XWIdM1EajcGAhgIEhmV3467S3IHp7V+dPh39p74reDfDFlomh+PtW03SrNfKgsxKrCJAMhVLKSFH
IxngY6VlWPx++Ilj4WvvDVv461iLQ73zWmsVnBjYSEmQcjOCWJOMdTTUGtRNqx90f8E5vjjoF34C
n+FGoyDTddMtzdWkzyARXSSsCVTdg+YCx+XByB161yPgX9g34jaV8cU0m91m7h8C6Vcx30OtLKTH
eBGWRU+z+ZgOT8pODjB65r4Q07V77RNQt7zTbya01SyuFlt5bZtsiOjBkdSM8hl6HrXr/wDw2v8A
G7ytv/Cw9VPZnKQkH2/1fH1yOtJQfQlyine5+jHxG/aa8Ef8NK+AvAw1KNrnTrua4vdREqfZbaR7
aVEhd88OWI4OAMjPJryn/goB8AfHXxV+JPhfWfCXhqfW7a20uSB57cofLl8wsgwxGOvXpgV+cC3Z
e5uZpGMtzO7TzO55kY8s5PrnmvYtK/bQ+NejaXaWVr4/v47a2hEcSyQwSEIoAA3GPJwMcnmhxaHd
Oz7H3/8AtTRr4K/YbGg6rdRWOqpo+n2CQySgO8yeUGVB/ERtPT0rE/ZW+IHhz9on9lq8+FVjqL6P
4jsNGbSrpZl+YB1YLNGMguvrjGD+FfnP8RvjV44+MF9az+L/ABPda+bLetqJlRFiV8bgFRVGTtHJ
BPArC8G+O/EXw88R2fiDw3qkul61ZMzQXNsiswBUgjDAgggnIIP6VNmylo22fe/7MH7JPxM8O/Gq
HVPG2o31noXhC5Z7Fri5kmhv/vIpiVnIjTbg9OM4rzL9v34u6T8a/jPo3hnwlHc6jc+HvN02W4iA
MdxcSvHhYSpJYjaBnjk4FeR61+2x8b9b0jUtN1Hx5d/Y72JoJI47a3jby2BDLuEWR6ZBzzXj+l61
Po97ZX1lcS293BPHdQTwthllRt6MOvIIU/UdKtQa1IbTa1P1j/4J5/C/xd8Mfht4lt/F2jXOiXl9
qv2iGC6++U8lFzjsMrX54ftF/Cvxt8PvHviDUPEvhvUNI03UdXu2sL90/dXGZHcYboDt5x6fjW9/
w3l8eFiWL/hYE+9Vzv8A7Osznnv+59q4r4oftJfEj416ZY2PjLxM+tadZym4hgNvDEBLtZdx8uNc
nazDnpk0Ri4jbTPqP/glVCrfErxs7QjcujwbZCo7zHODXjf7eer3d/8AtP8AjazvLma8t7WaCG2i
lm3JEjW0LFVUnCjdk8c85ryX4V/Gvxh8GNcudY8I662jX9xB9nuGSNJUdM5A2urDqPr+NYvjTxvq
3xA8Sanr3iDUpNW1jUJftFzcyqoLMFCjhQABhVGF44pqLTM202j9Gv8AglbrNvL4V8eWDXcTXhv4
J0ty43+X5O3OOuNwIzXD/Bb4F/EbQ/25E8R6j4P1S08PR65qd0dWktysDRSrcbG3ZwcmRcV8TfD/
AOJPiL4U+JbbxD4X1mTSNWtlcRzxKrcMMMuxlKtkcEEV7Nb/APBQj48y/wDM7LjsDpdpnPof3Q/T
HSs/Zva2hqqsb3R7T/wU9hfxF8Z/BmkaQn9oa02kyQR2kA8yUySTfIu0cgtjjp0r6K/aE8C+IL/9
gS48OWelXVz4gtvD2nxmySMtMkkXkl/lXOSu1iQM9DX5R3PxO8R3XjuTxt/bM3/CXPe/2idUkCbv
PBBDBdu0YxwuMAV7Ncf8FD/jnd27W3/CYQsJIyp/4ldsrYxzg+X1x34p+zbmpEcy5bHz59oCwbY3
csucKBhl7446H2Pev1d/Z5gl8Xf8E7LnStDUajqh0jVrRbWJg8hnMs+EwP4jkYHuK/JSeXz3bzZi
J9xkJJ5dick8fUmvRvg/+0b48+A95fz+C9b/ALKGpIgukmgSaGUpkK2xwcNjjIwfwocWmmhtqUHB
9T074N/sG/Fb4pX91ZX2kz+BbOzgVvtGvWskUcp3YCRgKCTgE9PT1r1f9gLwqPh3+2X4u8J65Pp1
zqmk6bdWUNxayBoppFkt2zHn+LaTx1HPoa8g1z/goh8b9Z0q902fxXZ/Z7qB4JHg0yCN8MMEqwGV
OCfpXz54e8TXnhrVNP1XTJ5LG/sZ0u7e7jbDpKrBgwz7gZBzkE5ocZNaGinbTofr7pd18S7T9vLV
EvZtdHw1l0kLbiRX/szzvKQ8H7u/du9+tfG//BUjSbix/aMi1O8s7mHT73SLWG1uXRhDcSJ5xdVb
7pKhhkZ4rzrxz+338ZvHnhq80TU/EluLCdkJ+yWEVvM21gww4Bxyue1cb8a/2q/iP8fNB0jS/Gur
WWo2emzfaYo4LJIGMhTZuZl64BPHHX6U4wd7kXS0PQ/2O/2QdU/aQ8ZRX+pwXFh4B07Bvrxg0ZvS
esMDAEEjHJzwK9u/bl/a/sfDGkSfBb4VH+ztPsFFlq2p2LNB5Xl8G1iZcZyAA7D6d6+T/BX7XvxP
+F3wu1LwDoevJZ+HbtZxsNsklxH5oIk8uUjK8kkdSM8HoK8dYvboYzGxl5fJOSWPOSe56n8apRs7
yFKTlotixawSXs8UUMXnTyuIxCpySScADue2B+FfrJ/wS38D+IfB3w58af2/omo6JJdapE0EWo2z
wsyrCBlQ4BI96/JbSdUudJ1C31C2mMN9aypPFMCD5bqwZSAQQcMor6rj/wCCofx28oJ/a2hsQMFz
pCgnjr96s5RcmilJWsfOXxTVX+JnjOVxlzrN6WYNkk+fJ19OtcaVAAZN4AHzbhV3W9Wm1LVb+7uJ
TNd3c0tzNKwALO7FmOBwOWPas8yyshcgjdyFPXHatrkKxJzcSKGZlGQvHX+vWv0A/Zr8HfHf9kn4
a2/xV0Ox0/xr4C1+ygu5/ClleSGYeaECT7BEQHXgNszxwRxX58zM2xVDZUgtgdj9fyr3/wCAX7cH
xO/Z48KXHh7wveadc6VLM1wLTVbZpkgYqFPlEONoOASvI44xWUlctWPvT4t/AXw5+3L8A7TxXoPh
s+AviLpiO6Ws9r9lUz8F4ZmMY3o23KuOmfqK5n/gkPuttH+KNjdOsd9b39rFJalwWj2JKjcZPG4E
Zr5V+Jf/AAUf+MnxZ8D6n4Wv7rR9N07UkEFxLpdo8Nw0efnVZN525xjp0J5rxT4NfGjxJ8CvHlr4
u8LXy2mqWrMGWcF4biNgQY5EBG5ec+oIGCMCpcXZFKVr9j7G8TfET9qbx3+0j4h8HeFPEXiTSbeX
W7uCxea18mxht0d9hMhjIA2jr/Ovn39r/wAJ/F/wV8Srax+MevR+I9Z+xJJZ38Fz5yGAu42r8q7e
VJI2jt1r1Qf8FafjTEXJsvCkinhS+ly5XjqcTjvXyj8RviRr3xP8Z6p4o8TXv9oavqszzSlWYpEC
c/Nnn3HfoP8AOaV5jER8hLff3kZ3Ht29f5V+lXhb/glp4QtfDmhw+KvGWo6b4o1CNla0tWhMZmCl
ise9SzFQPXHB4r4p/aE/Z8179nv4hXvhzWQJ4nzNYXyMCt3AXKq5HY8DIPQmlzI0SPK4rqSXdE2Q
gY9G49efxGaeVnLGSTZ5IJBKnnpwQMfhX1L+xx+w/P8AtHm/1zWr6fSPB1u0lqs1mUE8s4CnbtbP
ADA5INdx+0r/AME9rX4d/DJvG3w61+78T6ZaI89+LpojshUfNLGUABUbSCMZ9OlF0KStufDpMksj
MHCo3ZSfrwaZJIrOZGzH8+7KsSenYk8V0ngnwJq/xA8U6V4Z0WNLjVNVnW2toi4w0nbOeByO/wD+
r9E7P/glP4JOk2VnqPjbUbXxTc2Jc28ZiaMShV3si4DMqsRzkcEU1NbGXKz8ymDFIwxPl4JUD5cj
196kS4ZAGTDbPvNySv4+ldp8W/g3rvwh+I154Q8QQm31WzKlcHcs8TEiOVSM8NgnB6YNfaXg/wD4
Jq+EvD/wxt/Efxb8azeEbuXmaJJ4FtYN33VZ5AdzHrjPFF0XFHwGLpkh5V8Ou1uOD/nJqsZjDIzR
nbEG+bjqOnFfoF44/wCCbvhbxB8LtQ8SfCHxvL4wv4OYrZpoZba5K/fRXQja+DkZOM49a8J/ZR/Z
D1P9pHxbfQyznSfDOkyGLUb6MjzkmxlUVD1Y+/AAP0o5kO2up88SSyuokdj8p+++D15HH5VH9sli
ncygNuYbELZz6jP4frX6Lwf8E9/gbqXi6Twzb/GOabX0uHibSIrm0a4EgXlNnLZGOR2xivkv9pz9
mPWv2cPHsmkalItzpN67zaVqZYBryJSM/KD8rruAYevI4NNSRMr3PGTcJFIHiUoyjBbP3D7U+S5k
8jC4TycHIbbk9ATj8a9E/Z78AeG/ib8V9L8MeKdfPhjTNSeRGvlwCj7CU+ZvlGWXHPt619Hftg/s
J6H+zh8O7bxdofiO+1RG1GOzmtNQiQHEithlZccggDp0o5lcGrbnxgLpjBLL5qujZGQMnb/k0RSG
ON3RpM8JlT0PYkenXivUv2df2ddf/aM+IUWhaTE1vZRMk2pX7EbbaDeFZh2Lc8KOfbrX2t/w7H+F
/wDbsnhyD4oXq+J1h8/+z825nUYzvMWd239KOZInk7n5qtJcBNzDnAzgEcdifQ0nn3MU7ICwZjky
+ueQDjt9a9m8Q/sqfEDwz8cU+F72An169Zv7OcyqkN1Bhv3wI4VdqsSD6EV9Ya5/wTa+Ffgq30xf
GPxefQL6ePcIrl7WAO+Pn2Bz8wBPXBquaN9UOK6n53TvIq7W3YGAQOmKgSaTzVMYZlyST04r7C/a
q/YGf4JeDLTxl4L1i58UeFvJD6hPN5YMAwvlTKVOGRsnOBxkHpXAfsn/ALImuftJeJZI2kk0nwlZ
sU1HU48Flba2xUU43Elcd8d6OZdCUm2eASXLygHByBwAMjHoaZFIY92xTtHX+Hb64/z3r9F7v/gm
H4I1m11zT/CXxVm1XxPpkbA2MjQSCKbB2rOqHcgJGO2K+S/hv+yt42+IHxtf4a3en/2Vr1hIF1US
yJts7dWTfMBu+cYcFcZzn8QuY0SuzxrzZHk27sKMvt5HOcnmmCYmRi5KgjqT1xnk+vXvX6La7/wT
X+EPh/XoND1T41yaVrEuwx6fcyWcU7FgAuEY7jnsK+d/2t/2NtU/Zi1izuYrmfXPB14gjt9WkjVZ
EnCsWikUHg/LuBxgjI60cyI1TPnU3Eiuygl36cLyR659wRUU12bYMFHlgkgY4ye9foDon/BMzwj4
3+EUPi/wt8T7y/kuNOa7tzFBDJamXZkxsVO7AYFSM5GOelfnzLCoGP3gDgHpggH3oj7wOVtBs08h
bfGyxMuRkjoPT+VNW6lgwGOVALKA+O5/xqNkdGOMFFyA394dPxNEsOGjDq3XAOQQefWtHZEx5r3H
rKxl3CTKjkBT2/r6U5L103Lj93nJye/0qGRUZVQZAOR1wOOCa9H+AfwE8R/tCfEG18L6FbOyMFmv
L4uFS2txIiyOScZIVgQByayNdWecS3kk2FLgA8Lu601pGMbkkNhsOCM/p3FfU/7af7Etv+yppfhb
VdP8UTa7aancSWpjuLURvFIsZfcCGIORkY46V89fDr4aeIPil4103wr4asn1DWdSZhFEuAq4Uksz
EgKAAeeOAad0Sovuc5NMkbRSrGqOuAoAAx6YPWqzXUrPuG1ZSC+Sc59fx/wr7O/ax/4J7/8ADOHw
nsvHFn4vl1ieO5gs7ywntQiKZQQWjIbJwex7fSvi+WNzM23opxwfb1oT6olp3H3ExYozcSLkbiMn
PGP5D8qrSyHCbnJbPAx0+lfWX7Gv7Hvhz9qSw15bzx0/h3WrCRNumwwxyyywlcmTYzAlQSBkdDXl
n7Uv7P8AP+zd8X73wQdVGtQRWsF5a3phMbukgb5WXJAIZG6etLmUtitmkzx7chYBgJQOV3gZB9uK
hmZ3clmO05OB0ycV63pf7LvxP8TfCy5+IukeFZ9V8IweZJJfWksbOEj4kZYt28hSDnC/wmvJHG1M
qoZWB5zx+opoppbDlRwpYOFAGMAZB9c0x1RlJcCTJADEDsfocVf8O6XZ614j0uw1DU00rT7u9ht7
i/kX5baN5ArSHpwoJPXtX6NaX/wSQ8LeKtImufDfxlTVY0ynm2llFPEr/eCllmOOCPfBpXVxcul7
n5pXJXaqhiW6YHBqICSQAknH90noRWh4j0SXw7q2p6dcyK89hey2kjqCFdo3ZCRk9DtzWVI+3cUy
AfmyaZBY8wsRuBfA4wPfvTncbxuxIFGAWA/wqozhPlMhIz0X+tO8wsqux2L1+mKQXJfOCyGRYwqr
x5gUA0rzvMNqquwZH3c5/wA5qvNIxYlWxxnGcGhHkjYufnAPKkUDT1LousKi5ICsAck8f411fwqt
vCusfEPw7Z+N7+40nwtNeLHqOoWo+eCJur9D7fQc44rhgxky24ZJwVBxT1cncFJOf4WFJq5ofsz+
0X8eP2cviz+zvrngQ/FeytLWOxT7JJaOxmEkChogAU+bJQArjnNfjjFcP5CB0xI6Bn/h5x/n86Rp
ZHAUTcLwMHpmoD5gA/eEyZ6sOKFHSwdbn2/8GP28vhp8NPhj4e8Ma18AtJ8RalptsIJ9TX7MPtLA
nDnzISQSOuT1rsj/AMFIPgsW3Tfsz6Z5i8blWyOM+/k18QfCb4P+JvjT470vwl4c0+a+1K8fLhTl
YogVEkrf7KhgTW7+0l8BdT/Zx+LWp+CNQ1C21GW1igniu7cMqyRyLuB2kkqRgjGT0FNRitLibdx3
7SXxg8M/F/4o3Xibwj4Gt/AWky28MX9lwbNskqbt0xCqFUnKj5R/DnvXl5u5TKyjKZ4PAyajWAyy
Ek4x/FzxSDeThslfWiyBXR0fgbTtG1Hxpotp4g1A6L4dur6CG+vIlybaB5FWWQDGPlVifwr9jvij
8S/2ffG37Nd18MLb4z6Fp2nw6TBZWt5BqUTT4twpj+Un58mNcqOuSBX4qIzspAJwRkEdqeZpBEw+
cBiTjPGcdfrUcuty9WrDry8/1b7zOyj5ZCu3dnv7DpUC6hJHL5gLq6jKlcDr/PoPyprHkbQcH9eK
6HwF4B1/4peMbDwz4bsJdU1e+fbDbQRljgdWPoqg5J9AasnUwXvnEpcOyyHg7f7voahWcxMwGAAd
wGcAnrnp1r1P9pj9nbXv2Zvia/hDXb211OZ7KDUIriz3BHjk3KQQeQQ0bD3ABryedw2eoOcDK8dK
LJkpl0avcrICtzMxPVVlZRjHYD+VWtA1CHTvE2n39/bR6rYw3kVxc2E2NtzGsgZ4z/vKCPx71jMQ
OB1xxx1pI8+Yr5LBc5WpsUfo7/w2n+yXJIWf9m/Y+MOy6ZYAZ9M+YM1Xn/bD/Y/uIJVb9nBwWBB2
abYq3TsRKD+VfneHbcWAYD0HQ0qBtwCZxzn2NCXmVdM2tU12JPEOo32hrcaHYz3MzW0KzHzIYGkZ
o4mccnapUZzUC+ItS+1faY9Tu0vGTY1wJ2349N2eRWXJlgGZGIXjGOtDEou7aAT0U09GTzMsyX0i
3JdnfzWYymXcd27Od2fXPOetbA8e+JRGQvinWifX+0punb+KudbzZFxyB3BHNR+Uy8EEkHj3osh8
zL99qdxfXE01xcy3M7vvkllYs7k9SWPOambxBqKaO+km+vBpjtvayFy4hY9cmPO0889OtZiIcn5s
k9M9KaI3J4Y9M59KdhXfU1h4k1SHRJtLj1S9j0+U/vrOK6dYXz1ygO09u1SaN4h1Pw3Mtxpmp3em
zAFTJZXDxOVOONyEHGQOPasdkjwpCktu6rXpXwL+A/in9oHx/Y+FPDFq0lxcktPdSq/2e0jGfnlZ
VO0Hbj3PFQ0ik2c/qPxM8VXNvLbTeKtcuIJlMc0EmpTsrqRgqQXwQRwQeK5aYmaZwMsOgXGABivu
PxV/wSQ+MGg+HtU1O21Tw3qslpbtOmn2lxM005UZKJmJRuIGBnvgV8RTWVxa3FzBdpLaXcDtHNBM
CrxuDgqykZBB7U0l0B6kMudpBQKB+FRbjG55PHQ4yRUjQySsPvnJxkg+lJDE7TiNB5sjvsRB1Zic
AD37VRHKbHh/xfrnhOaWbRdc1DR55AEaSwupIGYZyASjAnGT1q74g+IfinxnxrviXWNbhiP7uPU7
6W5SNvVQ7HBxx+lfT3w5/wCCWnxj+JfgTS/Elu+g6HDqUXnx2Os3E0VzGp4G+MQnbkDOM9CK4z9o
r9hL4m/syeF7PxB4mXStT0OecW8l5pE8kq20hUkeaGRSoOMA8jPHpmfdLWh4bp3j7xJo+lXOjaX4
g1Ox0m8LC5sbe+kjgkyu1t0akKcgDNY1nqLWtwkiSSRvEQ0cqEqYyCMFSOQQQORjHFdz8Hfgp4l+
O/j3TfCPhe28+9upMNM2fJgXk75GAO0celemftE/sIfE79mPwPD4p8UNo13pkt4toZdMu2kaJmBK
5DIpIOCOBwaNAbaPEvEXjnX/ABbqNvf65r2qate242QXV/eyzyQrncArMxK888d66c/tEfFBtn/F
xfFg29FGvXWMemPM4rzmONmUE89ODX0Z8H/2Efih8efh7qPjLwtp9oLC3kaOKPUJTBJebUDZi3Lg
qcgA5xmhpCueCa94n1PxLqt1qer6pe6pqty2Zru9uHnmlIGBudyWOAAOT2rofCPxr+IPgzThpvh/
xv4k0LTkZmSy07V7i3gVmOWIRHAGTya46eOUTMJhsbHzL3U+hpJFRUZx8vr1OadkK7O+8Q/Hz4j+
JNBudL1j4heKdR0+6Qx3NrdazcyxTJ/dZDIQwPoRXIv4n1j+yF0dtWvn0dbg3qac9w5tlmIwZBFn
bvIzzjPNZSRM6Els7c4GKNjM5RCSTnJb0xmiwXY9gJ2G5lX6H+dSpL5aqBMODjHpUTQru2gYPUDr
/npU0NrLdTQ28ELPJKwjSNVyzMemB+VJAejeGP2gvid4X0i20jR/iF4n0nTrSPyre0stYuIooVHO
1UD4A9hxgVxviHxTrPizV7rV9a1a+1rVrpwZ7+/uXmnkKABSXJzwAMelfTnhj/gl18dPFnhzSdbs
NO0mKz1G1ju40u9QWKZVddwDJjg4IOPevnj4pfC3xD8H/HOreE/Fli2m61pjhJYgQVcMu5XRhwys
pBB/qDRZFEfjH4qeNfiJHZL4p8Vaz4khsCxtU1O/luFgLDDbQ5OOgFL4L+Kni34a3V3eeEvFOreH
Lq6jWKZ9Lu3tzMqkld208gHPB9T61yu5mXCM4Gcc96bsTgj7w5B7CiwJs7Twj8XPHPgLxDe634e8
W6vous34Zr2+s710lutzFz5jdWO4k5OeSa6rV/2r/jL4k0jUNK1D4p+I7/Tr2BoLiGe9YxujAhlI
I6EHBp3wK/ZV+JH7SUOsSeBdFS+h0wp589zOkEZZsgKC5AJ+Vs4rvPHH/BOL47/D/wAJ6r4h1Twl
Bdafp9u9zcLYalDPIkaglmCAgtgckDJ64zSshXZ842eqXOjapaXthdPbXtlIktvcW77HidGDK6kd
CGAII9K9gX9tX474VV+K3iUckk/bT+XTgV5t4F+Hmt/EzxfpHhzw9YvqeranMLaCCI4yxyOSeAOD
kngfhX0XN/wTC/aIQAReD7I71wWOq2w2jrgjf7Y60aAeBfE34y+Ovi9LY3PjbxZqXieeyV4rU6jN
v8pWIL7RgAZKjP0HpXCyBXA27j2BrrfHXw91z4aeKtU8KeJ9Ol0vXtLm8q5spMFlPVSCCQysCCGH
Br2LwT/wT2+OXxC8J6b4j0PwZ9p0jVIBc2ssl/bwuyHoSjyKRmq0Q0j5v2FEAIBPvxS7QuVVgfY8
DFes/Gr9mH4l/s9R6XL498MS6LFqhaO1uBPDPG7qBlC0bMA3IIB68+hqD4K/swfEf9oZNS/4QPw7
LrUWnhPtMwuIYkXfu2jMjrkna3TNQ2hWPK5UMTKpUqc4qMj94OfmPBBr6M8a/wDBP/46+BPDOqa9
q3gO7j0zTIWuLiSO5tpTHGoyzbUlZmAHJ2joM9q+eXi8pcFw5PIKngjtRdCIGbYVZiCT2FLuXd1I
B7VC5MpGN3XBHYU513knj60WTFcfLKiL0Ibpx1ppjLrnavFIUUjdzuHcU8gu4IPzDOAe/FSmhkbB
lYkZz0NLEoYgd+2f8ac+55ypPzHkgUsaAxtg5fOCp7Ck2JITG4YUsQeo7UsoAdmL5XgfSmhWLEBh
jtTZECkd+x96RVgkjaNQwPDc8UkcgLkdCfalYkPhsgZ/DFMZSGCrnLHggUxFhiXH+1jgnqKCV2Nx
njqKYAMlTlwKHIJ4UArxn2oGAAfcWGfdeCKeCWXaACV5qAja3zEk0+WMopcHB9DRcQvmfw87sZJF
IzbRnaMk9Qc5FNUDa397r168U8L821TgdMH1p3GwZ/LyoB46K1JnMqsSPbHb0pWZRNtlG4Dqc9KQ
4+bYSy9MelSJDnX95kHcMZOOppJMMAwjw2cnuaUKS3zAcdcdjRI/mIecgDHTimtRgdyoDtHHtSbX
IODwecUrkhUHKkDJzzmmFsfdJJPQHjFJodiWNF2/3mPY9R/nimMjeX82QM4A9aRWAYZLbx1JAxSv
mcAcOpPHPSgQ87liQgh884BPFJ94kxHAI78Z9aYqFJShyFHH1pjhlcqOxwBigLDiqRkOQcgcg05V
80ELjOc560B8IAcHJP1pANrqAzKvfikABwrn92NvY+lDv86qQDz1IFRlNzkAk4OcgVIWGR8u73NM
ofM0bEoAAc/KRyc4pvlb3ySSAeQD1pY3aNsqpbgn5u9CPJsCjAGcZJ+6aAFd1eLAIwvHzevt7Uzy
gGDE5APII6/SgxsgbzAWyeMnr2zSbiXJYEDPUZOKd7gKSuwAB1YnJycjvUsshGFbaVI4KjrRIchc
LhR3p1xcBo0QxhNoIUetRuFiKSMh164I4HrTjEyOWOc9iP5Uk++YoWbC9AAfanyRS43q+31LnvQg
GrskEfmYiz8u7JHPqadNGSCd27sQOP0pquiTKDl1wCec/lQVySFIYjk7uOKtMBCd4fCqCDnPoKsA
I43cbB+ZqEybCAq7SVwx4+Wl2GMsSxY8kehqGxXEkCk4+8DkgEfXFEsRVCcqCRzx+VClIkjdMrk4
YE809zJImzYoAPbJ4ppDvcjiBLlnO4YzjvUhyqbcZYdPXp60o/dkbkHOF3A8Y9aiuJCGG0gr7d6T
WoWFKBiy5CgfxMDQrOHwOCTt29AaSZzJHgLjp+WKQYR92QWUEMGH9adgDLhzvByvr/KlXAIDbmUd
cH/PtU0rB0C+ZjuQCeTUI3rCxIIzxtzw1C3F1HqItjldyM/Bb1HuaSRtlttUCUNwWPXOOuaaVyio
nyDGcgcn/OKDtKAqGVWJPvV7jEkgYssewovuevvUkgVInhMjHacgLjp/nFBf7S7Fn+fHIHT8Pela
IK4dAXVgSD6j/IqUxjWaMooj3EgZY46H/OKQK4PKKzgD2zS+UcB/uI2RlTSFGgO8AsxB5XPIqkib
CBBIr8iNkG7IPU44qKRv3TMAdmMEY4zVp2gddgOJmOTt44x0/nVa5QhioP7vGTgnrV2HYpw7RIN3
J7DtV2FUcAuxRU+dRkkH/OKoghJiCcryCfSrqNlDlNw7Z7f5FS0KxAY5HYkHaPUHt7ZpWZdw3OT2
2gU4PJBKwbIC9QevNV3BUEg5GeHFQMkkkC4VSdoOQD2pr75lBzk+melEgXC/eY5yc9MVJHbllKrh
GBzhuc0wNubhdhV1KtsC9x9TVW7jLzgNtQKuN3PJxTpptjDdt8zOWHQHpk81BcNLc7WDErxj0x2r
S7MkiskzA4VioI2+lCO8gJ5BAyW/lVpUkyVBViqlsbunriqqBg77fmBHGRjtUtMtWEud7KCf73Br
rZokf4fWrOfnimbaMcMvH9RXITRLId24hiuSCc811V9ffZfAVvpr4R2m85MEnj09utRZlrc0/giS
/wAVPCjhtpi1iyZF4wxE6AA546+tfrz/AMFWzcR/Bbw48BlaNtVKNFC2N2YmIPocbSeff2r8b/hz
rVt4d8WaFqV4jtDY6jb3cqooJKJIrED3wD+VftJ+2XoH/DV/7MeieJ/hzdQ69p1rM2pkwsQ7xCJ1
cAcHcp4K9evFUtyqivFI/JyG7d0VEaSKROkiLtZcZJ5HQ4zz2r9P/wBij49Wn7TXg67+E3xL0lde
u9NtVnhuZlBjuLaMoq+YQ2RIpK8/xfga/N/wH4L1Lxp470zwjpcSS6lqcot4Flk2rvJwAeuOoOa/
UzTh4J/4J3fBSza4thq/jC+QMYCx824kwDJGkoRtsanOMjGSPUU5JE2SWpwX7d37RafC3RE+Dvgn
TZtIt4rVIryeFAqR25QMkcWck+5OOmO9Uv8Agk7dNcP8SnkkmdpjZSKsq7cLm4HA7f45rt/FvhXw
T/wUO+EMXi3w1Guk+NNOQxT20ijzEkIz9nmfAyOCVYcc1zP/AAS+8I6n4M1r4o2Gp2EljKktrDtd
CAGRp1cAnrye31rNpDj1Nz9rD4VaX8HvCer/ABg8N33keOrXxD9rGqQMRhZGG6CRA2GHABB65ryj
9gv4lar8VP2xdf8AEeuGN9U1LQ7meWWNNiErJbrtVe2BgVo+MfhX4++PH7Snjf4fQ393a/D3+3Dq
mowzOBGi5UGSPI+Ziei8AY71a/ZV+FFl8Ef299e8JWN017bWejXKxzsoV2DC2k+fB98flQZp+8fc
+o+I/C3jbxFrvw/1Bori/htopZ7GdgDLFICQyDOTjHJA44rwn/go7E4/Z3VIRHhNTgB8zoq7JBke
/OB9a+aP29/G+s/Dv9rvStf8OX7aTq1nYWrrdRY3HO9WU54YMpxg56V9Jf8ABRxHm/Zh8wu286la
EuAeQVcNnA7gkfXFC3DW1z8iniEMpeQMHZyyMqjMfPDAY69K/XnxXcTeIv8AgnGl3PLNd3k3g63l
8+T55TJ5aZbnq2c1+SOieGtV8ceJdK0HSLee51LULhbaBYk5yxAGcA7RnAJ6Cv1q+K9zZfAv9hCD
wn4v1W30rWl8NrpMMayAtLcLEBtjxncRjrjHGa00uN7WPx/1C1ismYSOfM3lTsXII5O726isqW4n
gRsqMdVCDrjv07iruo3Uct5NI6uxB3tg5JzjOD6VRe7gYqhdmRs5HHHGOOxqkc603EknZwuFxkgs
x+76GpfLWDcckErxJuztz6VDHMSCkUThcZOT39f/AK1Ivzyyxg/Mx9c5Pf8AXP6Um7FrUVwAkbMA
8hJJKqDgfj9adEQwHlDoQrsODgds96jWIJGxiZndBlsrznPYHrRcs4yqEgHqW+UZPOAeff8AwqhW
SY+ZlldsqSg43Hhmz3+vTmnCUyxl9p2rjCZxk/X0/wAarGNWBZWZ1TBweGIIHP5/yojkSHcgLuoO
CWGevqOn50XSIbbLMRYMTE33lwFxuz0yB7dqbAT5yu7bGZsFQ3C8d6RAIpUCn5sjao+8T27f5xUa
3UYAYxmIliCGyT+PUUrtiS1PoL9iMyw/tX/DLzVADai2DHkYHlPjPYggivrn/grSREvw/ckkMt6h
Xrx+5JP4Y/Wvj39iZ1l/ar+GhTbg6qpY7sdEb8+v86+yf+Csyyx2fgKcKxt4heMzKpIBzDjPHpu/
yKXU6HpY/NAATqyJGJGkbYu9cA89gR61+lH7C37GVj8O9Jg+L/xPhFlfQQm5sbC/zH/Z6rvDSyAn
BLLggHkflX5rpftHKrcLJG5KjqVweOfXPP8AjX6jfsq/tOeHv2q/AMnwh+KYhbxLLbNHb3DFYV1B
AMAxc5EyAjgdcZ9aGXfTQ+Y/20v2wtQ/aE8TtpGg3F1Y+ANOkIgSFnj/ALQOVPmyr0IBU7R2B6ZN
fZv/AATH+F0ngX4GS66dVg1KLxXNHqMccOSbZQm3y3z3FfAH7U/7L2sfs0+LZ7W+JvPDd44/s7U3
bAnHUhuTtcEgEZ5/EV9mf8EkHl/4Vp44D+Z5Q1WERiT7o/dHO09CM8/jSYkrptHm37Qnxh8f/sk/
Gz4haLZapZaxofjS3n1CKwmZ3WyMzuocLn5XGWyOjDFeC/sa/DN/iv8AtD+FbOLVbfTX0uRNVf7Q
x3yrbTxMY07licfgKxP2vg8v7R/xGW4uZMf25cdeTt3cDB6DkfkK579n83D/ABy+HzQyskw1uyYB
AQzL9oQ4JHqOvtmpkEW7n6u/t8+GvEV58HE8UeGNZXRNT8IXDayJ9+yRlSJgQjY+9yDt4zjFfl3+
0L+0P4j/AGjvFGka54isLSxubLTk0+NrVGAk+YsznPQsx6YwMV+g3/BVaUx/BHw7mSWOM62PNMZO
Nv2abqO/O3rX5Oz3by5DkshJ4H16Va0RK+I/av8AYJ+GMvw0/Z30dZNRt9S/txxrKvbLhYllhjAj
PqV24NfDnxh+L3xA/Zrv/il8F7q+tfE2h6/5gt5JpH/0CK43EiIDplX5U9GBP1+rv+CXcjL+zTcx
KZfs8WuXSwI/RV8uJtqjsuSeK/Lb4r3Ei/ETxP57O9wNRufMeU8k+a2M574H8qFqOXxWZ9Af8E7f
hlP8QP2g9L1eHUoLb/hGWOqSxTtmS5RsptQAc43ZJ4xla+zf+ChVn4u8L+H/AAr8UvCXiBNFuPBt
y7XEZk2NcpO0SBPRgSuGUkZDHHpX58fsT3Fx/wANXfDd7VZUP9oNHK8ZZf3ZRsqQOo6dcjH4V9c/
8FbGki0z4dOZpBbefeZjDfIXxFtbGQCRzjr1PFHXQUltY+KvjN8dNY+PHxQuPGutWllp9+8EVtHb
afuZSIwQACeSTk/niv1h+Ofw2m/a1/Z2s9J02eXw1LfS214v9r27xPFsPKunX+hr8WbG7jm1ayJd
lVZ1BKNjgHGDx04r9ff+Cis95bfssXb6fJPE/wBus8tasykJuJ/h5x06UPcOiR1H7Pfwpl/ZM+Am
pabrWof28LW5uNSkl0q1dyVYL8qxgFiflyf/AK1fkTo/xg8VeC9f16+8K+ItU8OHUp5ZZv7Pmkhe
RDKzoHVSORuxz05Hev1B/wCCaVzc337NlxLdzyXLvrV1807lzjZGOpwcV8+/sK/s0eHfip8SPGvj
HxPHHqFv4f164gt9KlRZLeZnZ3V3z125BA+lJPQHfmuzs/2JP2SpPBQk+NPxRuJ7XVFWTUba3vy8
c1sTuaWefdgktkkA/Xk184/tqftJp+0/8RtKtvDmneZomkTSWWl3KI3n35mMeW2kZALKoUYz19a9
V/bM+PXjH9oP4uf8KW8DafdwaZZ3v2SaF1ET6hdxu4JD5wIgAOpGeeOleK+F/gR45+A/7RXwrsfG
ujNpj3uvWdxA0UyTQui3Ue/DqSMjK5H+0KdgTu9T6b/Zu/YR8LeCvhsvjr4xRyx3rCG+gsPMeL+z
0HzBZAoBaQsQcdsAdQa9R/4KgD/jGQMoORrlmQAcZ+/mqX/BSl7xfCPw6jtnn8qTxHGJooQx3DYe
WA6ge/rV7/gp0I2/ZiR2DMqa1ZsHUZC/fwfb/wCvUBdPQP8AgmnBBD+y4LuC3EE0uq3xZtuHOH4B
PfH+NeBad+y18XNN1XTPjw/jlLu/nube+uv9IcXZt2dUKlsBSAhI256HGDivbv8AgmZ4r0rVP2fb
nw3BqdtLrllqFzNNZF8TLHJtZZCh+baSSM+1eF+HfiD+0Lqfjq0+AktnAkdpcILuKW1UMLFJRJu8
/OCpTGCOowOtT0NH8Wh9v/HjV7Hwnrnw51+40ya/mh14W4azg82dUktplOAOSMkEgeleX/tjfse6
x+03r3hzVdI12w0ddOtJbZ0vopCXDsGyMDg9R7V698TPE+jwfEf4b6DLfW51qbVXuY7LzB53lC1n
UvtznGTjJ/Doa+NP+CqWp6tYeM/AKWF3eQwvY3OY7aV0Uv5icnaeuPUVaFHdHtn7YcsPww/YoufC
uovNe3UmlWujLNbQO8byKI1LM2MIp28biPStH4LtdeHv2AtLutDYaRqo8LTTw3EKgMlwUcrJ05bO
OfaqPx9lkm/4J4Xk04M0/wDwidk7+adzFvLizknv70v7P2qRfE39guDR/Cd7barrsfh240xreKZV
aG82OBG/PyHcR1xxg009iE17x478FP2T/ih+z78avBfjXU/F1tqdvr979k1n7JNJ5k5lV5P3wYYk
+ZeuevPGeD/gpv4h1r4c/EL4ceJfCWr3HhrXZ7W+t5tQ05xFPLGph2q7Y+ZRubg5x2qn+zX8X/j3
8cPjPofhfxTaWo0LwZfibWP9EW3mieNXjXcxb5zuz90c9cCvdP2gfg/4U/aQ/aD8E+HNU1WGYeGr
G41LUNNiIdpEMtuFifBBQPj647cimnqNrVWPlv8AZA/ZS8QftB+P1+LvxSur290+OZZo5dQlkF1f
SIqmNlfAAiX24OMVT/4KPftX6T8V7+P4c+F4UvtJ0HUBcXmrYYCW6QSRmFAVwUXdy4yCQQPWvTf2
/P2jdQ8Hx2/wK+HumTaL59tHbXtzBCqR/ZpEAjgtsHIJzgnAwBgZr4j+Ln7NPxH+A2k6ZqvjXw9N
pWnX8/2dLhZ4p0MmCwQ7GbaSMkZxnaRTRm5XP07/AOCea27fsgaS10QbKS51FpGXONhnk3Ed/Wvm
j9qX9gTQtP8AANt8Rfg6LnUNFNsLm8013aeSSFgCssHGeAfmQ9hkAEGvo/8AYEhZf2LbFBwxfUip
weplkPQ+5/Sqv/BNP7af2dNTivjM3l63cpGlwS21fKhJAB6DcW49/epi3GxpUinJ6bHxn+wr+x3p
/wC0ZrGpa14mnz4S0iRrW5sYpXinnmZFZMMpGFHJP5V9XWf7Af7Omo+IW0WLRPFi3avJHullvEh3
AHcRIybeecHODmnf8EyFSPQfiwqMpC+KpcEHnG04JGa8p8TftwftAa18fdd+HfgfSNGvbmDVrmxs
oJbDMjRxyMNxdpVXGxck/WnzNseh4D8dv2LdY+HX7R+jfDjStQtrqx8WTk6Dc3LkGOIvgrNgdV6Z
AOcZ61+lXww+Gmhfsi/D7Q/DuhaNd65rmqXMKXlzb2zyCWVvLWaVnC4jjUAsFPp9SPzd/aX+J3x2
8M/Gzw14i+J9vbaD4r0GGO50YWMEX2cAOWLZV235YYYZHHYV6t8HP+Cknxa8c/FLwf4f1D+wZbHV
dVt7GYQac0cvlvKqsynzTyA3pRJPcVNXW56//wAFbNIvLv4UeDdQgtJ57Ww1aQ3M8SFkgDwlFLno
uTxk469a+LP2R/2pZ/2X/Eer3UPhi38TxatBHG5ll8uW2KM5yjhWJyH5HA+Uc194/wDBUz4max4K
+BlloWmmCOz8UXEun380sW8rEIi+F5wpJA59q/JLTlee9gVA080udgQ5I6f0zTtdBBvnsfr1/wAF
QLlrr9kiWVQFebVdPIxyBlif6mvyj+Gfwt8QfFzxfpvhjwzp09/qF7MkbSLGSkALqrSuR91F3Ak9
hX6t/wDBTK2kuf2Q2WJGl2ahp7sAeQA2cmvmf/gkm8TfHDxWVdGP/CPAoAc/8t0Lc/iKly5UiYXc
pI+vP2bP2PfAH7MviTRLqG+uLvx/faXLDNI87NFKq+WZjGhHCglMEnPTrXwb/wAFUQB+1NeOWOf7
EsAAq8k/vjjP9fevrzwnf6jdf8FQvFcVzPObCDwwEgilLFEbZbk7M8DOWPHoa+Q/+CqSEftV3ByQ
p0GxPPA+9MCefoB+VXGydgmvhfzKf7DX7bN18AtfTwz4smkvPhrqcpSYS7pBpTseZlUKSUOQGQfX
1rsv2/v2JIvCjS/Fn4Z20d14J1HF3qVlZ/Mtk0hLfaIVVf8AUtuyfTOeh4+fv2VP2X9f/ah8eJpF
gZLXw/aSI2r6uFDLbIdxAUEjcxKYAH8q++P2wP2mfC/7Jvwvh+D3w7it5vEktl5DI+J4NPt2BV2l
BbIkYZKrjHc8Uotqfu7Cn7yT6n5KOiSqqxnO7gkHt6/jX65f8Ei02/A7xeFUpGuvlQnQDFtDnH4n
NfkVDF5IEe1pWVQm1B8zdu1frz/wSOWUfAzxW0m/5tdJHmDBP+jQjOPTjH4Gm9TVJKLPys+KbFvi
H4sG1VjGsXvzdd37+TnH+elccm2RuBgY43c/Wut+LMjf8LI8WmL5lOsXnfoftEnH4f0rjQjecFOG
BPyseOPSrOZCTJkZC4zzjtUUyx+WCQwPoT0pJSVGIxkj5Tu6fhTPJIAXewDDsOKQD5E3LkMQwI68
ZFPiZlwWbkdWPNNxzySR0AFMSJpB8qZ3HaaY0iQu3mjAyOu7pXX/AAy+HOpfFbx5ofg/RngTVtbu
hZ2/2ttsauQTkn04PTNcnJDgp8xA9K6T4eeHfEPizxhpOk+E4bqfxJdXI+wJZMUm80cqUb+EjGc/
rUM0jufpZ42/4J3/ALNvwc0vR/8AhYnxC1zQby4hyZpLuOOCWQBQ5X9wcDceBnPNeG/tmfsFab8F
PAWm/En4b6vd+J/A9wkf2p7uWKSaHzWUQTRMirvRtwB4JHB9a+nPDv7Q+geKfDvhv4L/ALV3g280
7xVezQxW099bFLS8UYWK4eZJP3b5yGIOM88A180/8FAP2Y/F/wAChFfeHdb1jVPgxdyIbbTjeTTW
+kyArsidSxXYzHKNj2POMzFtlPR6nv8A/wAEzvhp8GNPbw/4s8OeMbvU/ibPokkepaFdSqBabmXz
dsewHAKqAdxyCD3rK/4Kd/Cv4QarN4k8Vaj48fR/inFpcC2mgCRHW7Ct+7Xytm4Fhu+YMPWvGv8A
gkzLj9qK4UgJu8OXqryPmPm2xOO/rXUf8FCPgD8Q/i5+094nvvB3hS/8QWtnpOniaS0UEITHJkcn
kkDpQtGEtbHxt8BfCfhbx38YPC/hzxhqV3pHh3Ubp4Lq/slHmRr5bFSMq2PnCDODgGv0EuP+CXvw
e+IfhrXE+HfxC1u88QW0BeA3RgeATEHZ5gEKHaSMcHiuc/4JGfCfQtY8UeNfFOtaX5viLw3LBbWD
T7la0aRZlmzHkDd8u05HG096+xv2ev2lNU+Mfxn+Lng6+0ez0218F6gLS1nty/mXCGSVCz5OM/uw
eAPvfSlzO45K2h+LNj8Mo/DnxjtvBfxDuX8MxWWrx6brdxAQ7WqeYFkdWwwwASwJ7dq/QyL/AIJE
+DNS8Vaffaf461e48CXGmGcTK0JuDMSpRlfZtMbIxbOOo96+Lf2150tv2r/iyqkGZtXlOw8ZyoxX
6aftT+JNV8E/8E8FvdI1O50u/j0PSIBd2srRyKrm3RwGHIypI+hpuT5rITaSufFP7X//AATwt/gp
8OLX4i/DrXb7xZ4SSISai188XmQIxURzRlFXchLYIxkcH6e0f8Ey/g18JrfUPCvjvT/HP234mfYL
uO68MmaNTbljskPl/f4UZznBBzXy14m/b+8ceJ/2Yo/g7dado8lgLOPTZdYVZDcyQRsCvy52hjtU
Z9icc11f/BLB937WOhn5ix0nUCeeB+7TilKVlcasz6U/4Ke/BL4YeJYtb8c6x8QYtA+IOl6En9n6
BJPEftyxvIyJ5X3zvLsoI4BOa/JyVAjEFcAEjHX1r9GP+Clfwe8cfFf9pm4Pg/wzqfiJLHw5YfaD
YQ+YIj5tyRkdcnIwAO3tX51ahBNYT3FtcwzQTxO0UkEyFJInDFWVlPIYEEEHpitEQkVvuN1IQZ+t
bPg6x0rVPGOi2euXp0vRbq9ggvb5f+XaBpFDyEeiqSfwrERW3YJz24HNW4444kOGB3fxN3zx/Wh7
aFrc+1P2r/8AgnbP8FfAmm+PPh9rk/jzwfLGr3swjRpIEfHlTIYyRJG24A4HGQc4PGx+zj/wTYb4
ifCa98e/EbxO/gLSpIvtOnS4jx9n2ktLPvYBBnHBweK+o/8AgllZfE60+DE8Xi1XHglmDaDDqQcX
KRFFP7sHjyDuJGe+ccdOc/4Kzar8R9O8AaPbaWr2/wAM5pCmrzaYz+aznAjjuABgQ55HOCeD2zmm
xvQ8C/Zn/wCCaI+KfhTWPFfjHxZJoHhZXk/sfUdP8uQXsKPIslw28nYh2qQCa7/Vf+CTngnxV4W1
qX4ffFiTXtetrcm3hlSB4mlwSiyNGcqGIxnH54r6c/Zo8aD4d/8ABPLw74qa0TUBo3hi61D7K7fL
L5ZlfYTz124zXzx4C/4Kd+P/ABtNfr4M+AkevSW4V7pdCnmlZFJYKz7ITjO04zQrjtc+TP2W/wBl
fTPjL8aNc+GXjvxBN4E17TkmjjtQqNNPcxyBHjTJAOByAOo5HAr6R0X/AIJQeGbLX77R/GnxSbRd
Rnvng0KBBAkmoW+FKsqu24vltpC+nuK8R/Z68e33xN/4KG+GfFGpaemk6jq/iia5uLJQ263ba4MZ
3AMCNuDkDp0r7b/bUDzftj/sxIrMqrqjM3GR/wAfEGP5Gk5NMfLZn5v/ALVv7KGv/su/EmTQ9Rdt
R0S9Vp9J1jYEW6jXYHyoJ2ujOAR34I4NJ+yT+zkn7TXxcs/B0uqzaRp0lpcXU9/BCJDHsAwACQOS
e/pX2p/wWVPPwuTAO5dSPPsbU9eKv/8ABNf9r2HxHf8AhP4N3vhfT7O6sdKnS216GQCS68ohgGTa
MMVPOGPSr52kRF3Wp8M/tZfs2t+y78X7/wAKnWTrVo1rBe2l6YDEWWTcNjDJG4Mp796/Sr9g7w7p
Xwv/AGGJ/H2gaTZReKr3TdSv7i9ZCxuXgkuPJVznJUbRwD615/8A8FLv2rdP8Pz+Kvg6fA+ma1ca
jpMBbWrq4xJatIWZSqBCdy7cj5hgkGvU/wBlRptR/wCCakUOkW7Xt8uhaxBDawjfI0vmXIVAvXcc
gY96jmu9S9LHiX7N8n7TPw7/AGkvD3iX4g2l6/hz4g35t7wz3Mc1pHvRpY/KjWQmEqAQuQOMgg11
v7W37FXgb4x/tVaHaW+uQeAtW8R6PPfXU8cQk/tC5jlRV2oXUeZtdicddvQ1Q+An7cXib9pb4jfD
fwFF8PhYromoQ3Or6lbzyTeSIYZEy0ZiXyhv4+Zj6e9ct/wVmv8AULX43fCf+wZbiPxHb2M02ntY
k/aEn+0J5bRgclsjpg0lfoFkfIv7QH7IHjL4G/F+08CXFtda42rXIh0K/gg2RakrsoULyQrAuAyk
8demDXrXxm/4J66n+zt8BLL4l6r48tdO8XW32a6/4RyaFBi4MiFo4pd5MjJwehzg9K/Vr4Hp4o8Q
/CrwjqHxG0uyHi9LKKWVtoZ1cop3sCo8uQn7yrwCOvp+UH7eWtfFX4mftQxeEPF9o2kot3DaeG9K
STfZFJykYmSUqN5ZsBiRkZxgYqlJsnqM1H9qP9oH9tjxL4X8B6Lcx6XeRTNKo8NNNYlwAqmWeTze
VQZOBtHJr7U/am8X6F+zH+xZdfDLxPr93408U6xo1zo1m0pD3VxJIkhE8gZiVjQkDcSfujv0q+Gv
DvgH/gl/+zxLr+uiHVvG+phQyo3N9eBMeTDIUzHGBySRjgk1+Ynxj8YfEP43azqfxU8T2GsXljfs
YotUNlILK3g8xlSFJQgTavT3IOTzSs2O6eh+o/8AwTq+H1h4O/Y/Xxh4W0K3k8b6zBfySNKebqaG
edIIjyAFyoHGOp5r4X+PvwJ/ak8ajxB4w8f+Gten05ZpdUubaG6EtraKMnMUCzNgIvTCk4Br71/Z
K1+68Gf8E2INc06cw3mmaJrV7bTEAlXjmunVscg4Kj8q4n/gmb+0f8Q/j3r3jqw8f+IW1+GysraW
GCe2iTy97yKw+RRkEKOuetNXEflN4HuND03xpol9r9k2r+G4r6CXULKFsPPb+YPNRSCOSm7HI5r9
+f2cvjH4G+M/wiXUfh5bXFj4a0vdpMNrdQeSYfKiXChcngKyjrX4n/tc+HNJ8JftPfE/SNHsoNO0
221uXybS3ULHACFbCqOFGS2APWv0w/4JUIF/ZU1h1ZXZ9evSxXGCRDCOMfQfWmx2ufjnqkDS30rZ
JZmPK84ySRX2V8CP+Ccen/HX4VeHfF0Xxe0bRZdVVmbTJLQSy27hyvlk+chzwO3cV8u+BPDcPjn4
m+GfDkszW0erarZ2MklvjzESWdI2IyMA4Y81+xnxEuvhd+xH4c+Fvhm2+Htnr66pqKaRbXk0cAuo
2G0md3MZLtlgcDGaHdCsflf+1d+yP4n/AGUfGkOl6s7atoV+A2na5DAUguDty0ZBLbZFwflzyMEd
68LKoGjHbOWyMCv11/4LJPH/AMKe8Ao2Vzr8jZHTi0lGP1H618Z/sS/sV6r+074wF7qhl0/wJpkg
N/eqNkk+VfYkJK7W+ZRk84Bp3FY3v2Df2HLn9onXU8VeJopbP4d2MgdnwyjVCCyvCjggqFIGTXrf
7EH7Ovw9uv23vijp8duNa0XwLdNcaBI9y0qRyCbapJBw+0cDOenOTXS/t4/tl6R8K9Bm+CHwge20
lbWFrfV77TFQR2qHeklmi7CBIc5Zgcrz3Ncn/wAEa0jj+KPxBAbDNo0BWNmyQPPORnPPUc0rsR13
7XHxj/aN1P4261qHw80fWbbwb8PLvy2l0cOYb1wscsn2pd37xQuAVA4BPrXpf7aPwr0D9pH9kvwr
8SNagtvD/i9bTTLm31EEiO2+2SwJLFJ/fjHmE4PQjI754v4kft7xfs9/Fj4w/D298EX2uXV9rDz2
c8E6x5862jjUFGGWG5RjaecnpXpH7WDSWn/BMtRcwtbTnRNCWSCThkJmtcqc85rO7RVj8+v2o/2B
fGX7MWhaX4hk1K28V+F7slJtV02CRFsnONnmglsK+QA+cZ4OOKufs1/8E8fHP7RfgG98WxalaeE9
J2B7CfVrd9l6MEl0IxhRgAnmvtf/AIJjeKfGPxP+Bup+FfHGgx6/4CsZPsel3upqj74gA3kMjL+9
UZBD846dhjI/4KjfFLxb8KfhxoPgPwno3/CNeBNYjktLrVbNYxC6BRi0VQMxZ654DDIHeq5wsfIf
w+/bx+If7P8A8D9R+D2kabo8FzBLeWcHiK2kkNzAZJX3PHt+VypZij+m09q+/P2KbT4l/DD4O+Jf
iF8dPGep3WmT2puorDXJHkm0+CEy75GDZ5ddpAHUY9q+b/8Agn1+xbol94etfjf8UHsf+EWtInut
Lsrxke2aJRIjz3AYEAKRlRntzXlv7d37aOq/tLeJJ/DHhZZIfhzo07FIoU3PqE6GRBOzL0jKtlV9
Dk9RT3FdHvP/AATe8R+FPGX7W/xr1vQooYNP1Tzb3SYXjEbeQ12zFkTtw6ZA6Zrf1W//AGkD+38Y
YW8Yr8Lz4igXbHHJ/Zv2IbN3O0rtxnPTvzX5eeCvHOq/D/xbo/ijw3fvp+taVMtzZ3SY+RgcgEd1
PQr3BxX3r+y9/wAFDvjb8Ufj/wCBfCWtalpV/pGrX6wXiRaYkbiIgkncv3enB9qXKwTOg/4K7yQe
FPit8IfE1lY2h1a3juZzJLGD9o8me3dEkx1UfOOf75rz3xx+2/8AtCftMW2mxfDLwvrPhOx0VpIr
t/B8c919odguxHIj2qFCnC/7XtXef8Fk8P41+GSEsX/s6/2qvU5lgxj8QK+R/gX+1z8Tf2ZrbWtK
8GanaWVvqEyXN1Z6nYrLtkVdm4bsFchQD/u0FWP0s+N1tq2v/wDBMO+uviLb/bPFsXhuG5uDqsY8
+O9DrhjuGRIPUc9ad+zJLfaP/wAE0ob7wHF/xVKaDqDW7adEDO96kkwGABkuCABnngVS/aXmh+O/
/BNFfHHi22S410aDa65FLb5iEV0xQFgoPQhiCpyD6dKsfs930HwG/wCCbQ8e+F7GCHXV8Oz61LJc
qWE9wC5BcZ/QYFTrcBP+Cc/iL4zeLIPHafGCXxBeWqraLYDxHZmHO4TCYKGRcjAjz/nH5SftQ6Np
ui/tCfEqz0a0istLg8Q3yW8FuAsaoJ3AVQOABgjAr9Yv+Cfv7TXjH9q7SfiJY+PzptzBpptreIaf
bm33xzJKHDYY/wB0DjBGDX5IftC+HNO8F/G7x/oOlRtBp2na/f2tsjyFysaXEgUEkknAAHPPFWhS
PNZCMHHJ9BTXK7B8rAnjIp5fCqQOvQHrSXUnChQQe9UQMdhyCH44GBRLcqMMFJPTFNV2cqpI9s0c
sxz8qjuahoWo5p1wcdMcZoYBlQqCzH9aaDu5wBgAU9d8xK4KqMsuKkYzyztOGAbPSkQDkng89+tG
NhJlUjHvSMH2naNoNOwXHBij8nOPU07efnLgofbjFQscsDngcc9KkL7yeNucg7ee1FgTCViIwwGC
e+KccADccj1xmow+AQ5PB+7Twu+M7CRnJ5FDKuhHY7x0OeBR5jeWxYEg8Uhysq5BHXmnICSM8OOm
e9IQb/MBx8uO3TNPAbJLcHHr096jCqC+/qD370bCx+X73dTxQMZLh3bPOfSnhhETnKkHgDuKRYw4
JGFI45p5PPU5PHAoCwOxcnDbQeoHekLlh5YGQMdadEEjyx7dzUcjMWYjJXPAIxzVJjJnfEnUP6np
j6UwfIGUhyG5BPekyRn5yFJPyimkfMVD/QA8UgFnRhLgtxjgU5VePGcsuRgUoO7ZlhwOeOQKjfmX
hioHY0ALKrNIXVuQc+9KN0jHgFsdAeKjlXIyGxj1705Q2wtt49v60WEPdGaNcjawJwc9qbudQ2cM
xXGMfjUjIGKhj06N26dKi4VjkFz0FNhcfFJJJkqFVevP8qYTIsvTnripDF5khVcksegpSm5jyRtP
QjOahoQm7Ack7uMEdqjYFiNuWUDGfSpC2eNoQ9B3OaQFlYHk7vvZpJNDI3ZozkE9Mc07cWXcHCnI
znqakdY97LtLAfkDTWy4yRk8ckYyKAZLHKBE4kYoCeO/aomYspfbuZeMj6UiJlWB6dQCeKc6HG9W
Bck5HoKgkSJfnOBuBGfQj8aJQ8qspf5Qerd/anhN7KGBLdMgHilkQPEctkg5ORyau5SAqm4Lt5HG
F5H1pskezLZHTjnr+FSZATAGXbkHbyBzUUm0kHqcAc85qkxigBUQkAqD1B7ehoL7maM5UjgEdDSF
NhKrwfc8ClLujKxRvnGBjg/hRuSIrRxkgjd8u7PUg1IhldHzkZB+7ximqgZSygA9+TUg3KpGAvBB
xxmlsUiRNuCxxjptIzn2/lUEjbE+RCdvJPtUiOQVc9RxjHU+tRNOGfKJ8uc7Sc7vakU7DVIxgDLk
9OtNiChgWHTknvUsr4bcF4HQgYPvTEDzpymWTrg9R9aokQSbZdiqwyc88j8qsSHphthUnJPTn0FR
BWQbt28Hjpgrn1p105nxsPzA4+b0780X1AGUMhckEdg3+FClhMDIArITkLTEcKyq6Et0DHkeoFTJ
5kYlbb/sk/yobAbujYlnDbQS2OmR/jSxuPMBUnaOQOo561FnYCXO4dDilCoIVVcqM96EA1m3nglF
U5I9c1M5zyCuTj5R0/D0qJkCsqL8zKScnv3qRpHlYAkLGrZ2kc0NtAiHzo3bJGwk8YGeKe7LHlfM
LKMk4X1HSppyJSWAxgj5tvCj16c//XqrNIYgwK7s9QDtBppyKvYpRlvNJHJf1FW081JCyj7oI25y
OlVIwUkDfeIPfpVrzyBlmwQCCM1dyCBZDIWMmWbHA7mpnUvEAT+8Y7dpGMCogWdXwuT19xTpJjEF
3HcccDqcepqGMRyIwQwYtnnPYU0ykKrI209C2eaJDhizcZGRt7/WmA7icgYzU2FY2mjiKlNpLt2P
p2qtMfMRcgKMYAVgSoH/AOo1ZaXeHRTtC5IAHzEY7n8arh0kCYOwkckfw84ra5CKxlJ2BmKnkBhw
cHrmhyc7UJGO+PantgwMEA2A/eIGfxNNkXdLufjcPvAY5xgVLKRE6YQuScjoQeaLq+l1CVWldmdR
gc8AAcUsj7WCgdeMAZqE797blw3IwaEUXNPuPLcFychgcCvpX9mv9sfx9+zIb218LXNteaPqI3ya
ZqSNNbxOCfmRVYFWOTnB5wMivmm0jYSsdpGPlOTwK6fR9JudRlV7OOS7kEigrEpbaT0OB0/+tVWQ
uZx6HsbfGHxDF8XIfiTaC0tteTUf7VjURfukkzkALnhfau4/aK/a08W/tILo8viaLTrX+ywwhi0y
DapLH5yxZm/urjn8DXhtvZXMkKiWN8vtAiA+bJ6Y75q/c6Xd28MUM8UkBdPMbcpXcQDwe/b0qkkx
JufQ9t/Z4/bF8bfs22msQeHINL1Cy1Nonkh1NHKrIuVDKysMHBwc5HAr0X4f/wDBST4jfDyw1Gyt
9J8PaklzeS3yvcW8ispkYu6fJIMgMTj0B6mvkhLOe5LhIpPsyRqxk8slVGeMnGOuKsjRbp4VeNXZ
GbduVcLz0wce4z9aHGKNUpLc+3Yv+CrPxIcL/wAUp4XEvVnInGRjoAHJOPXI+leNWP7YvjLRf2hr
n4twWlhLrV3G8M1i0ZW3eJkWMLgNkY8tMfNnivDrjwnf+YxFtdWzh8DMLjLDknBHr61VudNu31FL
PyW89l5iRSXzj72Bz3zxUpIiT5dUj0z4+ftC65+0D44k8U6tY2Om37WyWpismbYipv2kZOSfnPOa
7742ft5eM/jf8KrbwX4g0rTbeBJ43a7sRKs0/lqQCRuOMn5jjivnK502SFvInQHY3OCd+MfxH8Qc
e1RPY3iKl21nPGB8yMI2AKj0z6U7IxUnLQ9H+Afx9139njx+virRbCy1C78iWB7bUQzROGwcjB+V
gVHI7EjvVv8AaF/aU8UftHeMJdR19xbW6qqwaVbyM1vAoUZ2qxIDEjkjk15HNE7O0sO6XYeU28+5
B/L8qJ7J1U745ADyBghg3bt7VTjZ3G2MZ1lnkLAKEXrnBPXgZ5xUDhvMQGJn5yyd8ev5d6kuoPOi
3yud23kBeD6nrVczTrNHKSFUsQHQD5+Puj8KTdjJ3LcM6xLJhUwBxvJ49O4PrQSTONm4joWHI+o/
SqphS4ifbKVyc/Nzzyfyp0jgqAwwCONpPy++aEPmAtviYsX3LkAJ0Ayf1zSRjEJ3pgPkLg5x/wDW
6frSXMaXM8LqGKgbzk9Rnr/jT2UxAFQkkPcMcUIqLTEmIcEL+7djgs3Qgf1pY2zFjoC259q8kgdP
btSyQgg7IwYyCV2nj8qajJbCT945R+GQDik1cbshrOVkDFzhgcYPI7ZPpxUys9ukceC25wzHPA44
Oe/XFUz5RkLiLggHI4wRjt7jNSwSwKG2uzDcSCw3BeOOR6+lJaEpJPU7T4Y+PdS+FnxB0Pxpp8Sz
alot6t0kd02Y3K8ENj1GR+Oa+09T/wCCuGsa1YzWd18MtEm85DFtlvpJU5GGyDGMjB/SvgCN1maN
VmIRsj5uS3Gcf/XqZdNuI5Db+TI+QTHGF+ZBgkZH8Izjn3pcyub+hNruuDVNVvbqRI42mkZ9sQ2r
HuJOFyeAM4H0qPS9VvND1az1HTLma1vrCdZ7adHKSRyLyroR0IP+FU3ti0hbbu2Lhx15Hr6UgtlM
yBN6qOpAJ5/zitNGQk3I+t/iv/wUE8QfGn4Gw+APEfhLTrq+kjt1m14yFpHZGUtIqFcKzbeTnqTx
XR/An/gpXqXwO+GeieCV8Babqo0yNojerfNA8vJYFlERBODjI64r4tltBJIzLII2PQE9ePT/AD1p
hiWU+UiuJpuUbZnd+HejlRdmj279pv8AaI/4aQ8eweJrjwxYeHZobQW7ixm8wz8kh3copYjjBI4F
Zf7Nf7QV7+z/APEm38V22g2uuJBC9u1reybOHx86sASGUjrjofy8uFm6Qs6QybQqgsykAA9AfQnB
/KondE2RqrRMG++DkMevTt+FTZGd7M+3/jP/AMFN734vfDjXvCk3w20qO21KA24vJr95zEWGPMVD
EPmAJxyOa+KLm4AkYZ6EZBfBx1I+uadY2d1d4CRAo7ZyykqeDkZ9eDVaSIysqsWYgAgggZHb+tKx
fmfdHwv/AOCp+pfDzwJoPhWP4b6dctplpHam6XUHgEpQY8wqsLYJCqTz1NfMH7Q3xrX46fFDVvFc
fh2z8Pi9ijDWVmdyswBzIzbV3Mx6kjPHSvNorhwQrYwwAfA5wOh6euP1pl1aNbl5WRshzluCRzjn
3oj2Mrts92/ZW/ahuP2X/F+oazD4astfW9tVt3jupGiliwchkk2tj3GORj0r1X9of/goreftBfDT
UfCU/wAOtP0mOeWKSPUpNQN00W1wxKAxLtJ2gZz3r4zNsGkRQQRnls8VaGnywIsYfenJBB4x61SW
pa94EuJPPEm5Tk/K6jj6ev419qfB/wD4KdeJfAfw/svDPiXwtD43azJRNR1C+KSGMY2Kw8tt5BGN
x56fWviAqjZAYoM7MZyR1Oc9+o/Opf8Aj5jX5ozDGPnKOBzTlFFLQ+6PiX/wVL8S+K/Amq6B4d8H
Wvg6+vY9o1Sy1BpXiyRuKL5S8lcjPUc9OK80/ZW/bf1f9mKy12yTQLfxbBrDxXTrNeNbyRzKpBO7
Y27IPOcdOtfMv2YKpk80ooG0sT3OOMZxmoGh8uQfvM5BUFRnpzn8OaSSIbaP0TX/AIK37L5p/wDh
UmnfagC5kXVvnYkdm+z+9eC/tTftl6l+0prHhm//ALAj8KHw+sjW5t7xppvNd0YOH2ptK+UmMA9e
tfOEFvPK7uUkd428sBVJY9cZHXGBVN5fLO8YDbznOcg8gj9afKhxk+x98eDf+CqGuad4J07RvFPg
fT/Ft9ZRmKXVbu92G528CRo/JI3FevPPXvWDf/8ABS/xD4m+FniDwl4l8E6XrraoLhIri4uTi3hk
J2K0flkOY8/KcjoPSvit0KvEkKu0jMTlQST1JGPzp0kTMu9Q5C88DOQfX/8AXSUV0CUnueg/B74y
+IPgn410zxR4WvWtr60YCaF2zHdwlgWikGOVYDHr3HOK+0n/AOCuN0rPOvwtsGvSpAlGruDjHc/Z
89uma/OmKBptw2kIT91VJJ6Y4HbOKseSRI8S7RMvBTdj65q+VdSVUb0seleMP2h/G3jH4rP8QbjW
7qLxEsvnWdzbzMotArMUjQf3VDEYPB79TX1vp/8AwVgv7jTbCPV/hlperanDAsU11JflDI+35yqm
FgoYjpmvz7eKVpVO4JgY+XqO/wDWkaGWbCxq5O3JOxskevt2qbItN3Pqb9p79vvxN+0R4ftPDVto
48FaDHua8tLO/M5vumxXPlphVwTgZyT6V53+zj+1B4n/AGbPF0WsaCiX2lTkJqGjTSmOO+XawXLA
Eqyk5DY46dK8dlHlMybHJAyAV+YjPU+n/wBaoFL+aqklNwJBJAA4z/SiyFa23U/Q/Wv+CtGrSaPq
EWlfDjTtN1W5gdYrwaoz+VKVO12TyBuwTnGecda+df2f/wBr7xH8F/jHq/xA1S0Pi7U9Zsnt9RN9
cGOSdtysGWQKcYKKMbMYHavABEpWXFwhcYHDZOD06duagW0YOG81nKk4K9CMfSlZEOTi7n6Lzf8A
BWqOe7NzN8ItPkmJASZ9Wywxkg7vsx7+leN/tYft5X/7TXgmw8LyeELXw3a2d4NQedL9rppSsboE
A8pAo+cknJP618nxtNOcRqZFJPIUY6VAWIkJG9gQG4xj6VcYocZcx9jfswf8FDNZ/Z3+FsXguTwV
aeIrK2uZrm1uG1E27xpI24xsqxOD8xJDZ5z+NdV8UP8AgqVr/ivwBqvh7wv4Nt/A19fKEGq2moec
8Sk/OUXyFG4gYznjPqBXwn5JkhUY5YbpFUjgZ6460l2PMVUEi5LHAJxkD2POaXJYq57j+zT+1V4l
/Zn8Zf2tpo/tLR7wBdU0aSYpHdtsIEm7axVwcndjPY8V9QL/AMFcUt5Rdx/CDTxeMSfMTVyH5HJL
fZs81+dTStg+ZuWNeM5wP/109ldBIgALqfmDH5ge3H40rIfNc7/49fHjxJ8ffiHfeK/ENz5hkLR2
Wno+6OxgZsiJDgZA4JJ6kVufs0fH5/2bvidH4vtvDlr4l/0Z7ZrW7k8oqGIIdJNrbWBVe3QmvHPJ
cAMGDqRncjZ7/wBKmjZCjuxBAJXLZCg9uPWm43Q4pJaH2v8AtFf8FKX+Ovwn13wXcfDGw06XUECQ
3lzqZujbuCpEiJ5C4Yc4ORivJf2Vf2qx+zJqWrzP4H0/xYurRRx7LqfyGtSjMcq3lvnO7kYHQc9q
+e7hF85l83YAucE5x6Z/HvTttwChETSRsOHzhfQc9xmmoXFdJ3P0e1H/AIK/JfWkltd/B+1uoSP9
TNre5Gx6g23r7V8haL+1Br3hf4/X/wAUfCNhaeE5rq/a6bRLRv8ARjA2zfbtheVbZknaOSSOgrxm
ZwZVJYkL1Oc/Sm4iS6ILOwI+4P8APTOKTglowT9659+ePP8AgrFrGs6fO2gfDPTdB8SExiPW5NQ8
91CsCwAECkhgNuCw4PevGP2wP20Iv2q7Pw1FJ4CsvDd/pLvJJqK3X2ieVGTHlZMa4UMS2CW5Ar5t
nmii8oDcnBG4qQDgcn0qk2HBJy3XBQ9xmkki20fX3wP/AOCiuq/AT4F3HgPRvBdjNqqLcC28QC68
tozIWZGeIREPsLHA3YwAOK+UfFPijUvFmvX2uavqE2p6tqEzXN1eTtl53PUnHT0wOOKy2zKCoYHB
656kVA4cNiXcuSQMc/WhKxk7tm/4W8UHwr4q0TWxaQX76fewXRtLj/VzeXIrlG4OVbaVPXg9K/Q3
S/8AgsNZaPYva6d8G7PT3PPl2usCOMtjrgWw9K/NhUBljUfIFHBIBJpHihEYYSKhXBZsjbz60rJl
a2NPxJrj69rmpapPGkc19cTXUkaElVaR2cgZ7AsQKx3ZTCFyduePao5mO1tzlgpOCDUDYYhixyP4
Rk5NMxTexLPkxqq4JY4IbuP8mvtDwd/wSl+LnjPwrpPiOw13wollqtpHeQxyXsxdUkQMudsRHQ9j
XxYY3kXIOcnr3HFdfpnxb8beH9Lt7Gw8aeJLG0t02QW9tq9yiRDGNqqrgKOuB0qW2tjVW6n1xL/w
R9+NIXC654Oc9yb24H448j+tch4S/wCCZPxf8W+L/FnhuK88N2N94amhine5v5Nk4lTfHJHsjY4I
B+8Acg5rwOL48/EWM7v+FheK8kAEDXLodvaSszT/AIp+L9G1a+1Sy8XeIrHULwAXV5bavcpLPt+6
HcPlsAnGfWjmkCUb7n17J/wSB+NIjCjWfCL8AHN/OD+fkV558Xf2TPi7+xMPDnxAvdS0y2eDUVjs
[base64-encoded attachment body omitted]
FGZSpKjr+dP+2pLCIvLTcOu5Rz74wKz3YxupAIQDjjBNOzycfMCO1DEmyxI43L827PLAn+tRy44I
OAc5BpoJOUQYJ7GoWTaDv6jn1FCFqWIJAkvyntgflUsr7wNhIBAzx0qvE+xQqOCCc4x39KewxwT/
ALW4cUygaYiM44z1470O/mMDuzimbgsR4HrkU2RiE3EHkcgUkhEjP/ESAB3FI25yGAw3rUa9TsbI
p+MKxByBzQAJJ8x+Yj17UsjkthnPHcdMVF8o5ySfSnyglQQvBGaVwJWG0Aqu5ScA561EEOCSR16G
mg5ACgKQcjNLMigIMnOMnJpoB/mb3IBP0zSIQ0hBJ6HoajJ8xsgcD1/lTldY0OBz0GegoYAMBsNk
FeD9aVo8LlME9ee1K8TI3OcHqcVGxOcLgnsM9aBokG84LDcefw60hZlLEAHJ69Mmm5DZJLFv0pDm
M7Dk9x6UWKfkJIfm3KTn0FO3AkfMcdweKSRwZQoGcHqO9NklUiQAce3SkIlA2K4ypHQGgNvXBQEd
8cVGeW4HA6kUAGQjjDYzgUJkilyTkcAdqUFXB5xnrmmMGByQFXtiiRBGqqTlv4lHNA02iRiHVQMc
cEnoKaxIAyc+hph+YYXPWlLgKOCT70A3cc8mxjtIYDpSsSyLgsOckVHI4dc8BsAYHemsSqgdKBEk
rdCOueRSO67s89OCVxTEIzljTnUnAxwaABp1J4YgE8mhpChLA5wOOKawG5srjA7Ujgh84yBwM0Ds
SySBmU4BJI57U93ZYyBkZ64qMs23JxtHYU0yErg4P070hoe8pQ7gc8HrSmYg56jGOlRko2SVOAMH
impJhmKnaBnOPSgViZiSvHDDHIpZpGdflBBwc478VXLjHy/Ke4p4lbBJyU9RTEOdwyk/dbH50CTC
HPLHofSmYUZJJbqQM07yyybcgc9h+VQXYk3oyYJwQOfeolI4AJY00oBnGAT1pAuRg8HGOOmaaaDU
eAWBOO/SkMJbeADgdqa2/bkjC5/CkY5cgEgU7h1HFCUwThhxjvRIuCpDc/0qMgg7nLMCc5B61ODm
MHBx0Ge1K6KGSEsRkE47illlDKOzHPDDBpRgNypGeMikdTlm+9jFMBu8lMgbQvUdeaTzQqgYyKdL
l2K4KEY4NNZV3HGPahkaod94MQAF+tIyDOQMjoOaBhCCPm+lNfcGAUnGcZJ4zUJIofIxdApGcHjF
HTIOSevtSSKPM4IGByRzimorEk55HAOcZqmVcexOeTjA5wKaDv8A4i2B0Pelkb5izAAnqAKTG+XA
wrVADwNzfMBjHQHNEhBLKn3cnocU5GYqQTgjjNRMoG0MSQT94U9RjzKRHnOcDH3aVnRwFACk4yT2
+lEpHl5A+UDB96TeqqNnTuDSuK4SMpAHLZ9D0ppRuVD9sZprbCQwyTjpSygk7gc7h1o3EA3OwPHy
jHPfinYMgKggdwRTFAKnDMWHQ+lSOixxHblmI4yKAG7Hd+Oo5BHWnMrBOoYnqe9IsikEHKtj7o4p
rRk5zuBHcUIBfMcZAGPX1pcAsRuJHPPWkXIXCgnnrTxgP0LfUdDRsAjvz8inFBwUdidpB796N7Kp
VkYjqSDx0NNX92/AO7upFK47jo3MygH7w6DFJIDnpg9M5oB5xnBApNwO4FtvP51V0MUSFGx8wUjk
inyyxpLlUPXg1EY8gtyMHoDTw2TyWHOD61DFcbLtUgFmXPJBpWDH5flYA5UHoaLjK4wMjHqDTtny
jr04Y84p7FCRLvVwq4A469KJCVYbQRg8jPX2pGJIJUDcO9NCmJtwO5j61VxEzFj97hT6npTDEvlK
STuOeM8dKkl3IoxnkchqZIpwQAXcc4oGOkYGQKAOMAqD+tCDdGy/xpwe3U01VQICqkP/ABc+1KIZ
EVXRwOSCelK4AigoTkZB6fUZ5pPMZHPGU7nrSAs3ryfvdiKkUcADOCeW9qQuojxL5g2NuQ+o6cU2
YNsIEY443Dv71JIrbiYhuROeelNC8kszF84OOCaaQxEc+WR/DjBpybXj3h9uATjqOlRvGS21ASwz
nnt9ac0IcEn5c4747elS7AHksZSX45yB3NOWJGX96du08N7UOxRVR25P8QHpSyWzOWPyFcdjg0IB
rR/vsIzFP9rqKJkKZOeh5DHrShmYoeXG09Rj8MU0FWfdyQP0qrXAla1YmNl4IGWAOe3FAkIcYZl3
dRnrxSCRySAcFuDn6UskQ3qcrt4BAHGe9FgFcAgnYvy4G4dc9aSD5yztll6+pqGQGOXGSwzxgcdK
nKDMZXLMcrgcYGKlgIXVzgZOTwo4GaMswZgu3aDgY570SQNH7DOMt+tJ8siqDhfm645FMdyQS/ug
VClhyfXPrTWKy8ADJ6EjPNNlAb5xgrnaBngnmlBxIyZUgc8kf0+hpppivcsxr9myWCknPCnOKqK7
jfkYz9xmIP6VKkMc6KUlEa5IO7v9KiVd8nDlyg9MjFFkhkhby0UMAWHBOAeKY/yoTgY68cU7EeAj
A72PT19KXLAtEVBTGCQM80k0IiKrjoRGCAR1xUF5Ijqy4GR6VYkXbGE3bDgklRnn/Jqrd7Y1ypHz
EjHpVKwEUHGAMAE81rWQe73rFg4VskuF6A+tZMS5I7DOMHp0rRgMSqW3KE2HgdScemadhMo7Y2Jy
Tv749aQxNI3faM9aYoAAIBCnkk/zp0kxAG1iysDkZ6/hQMMRjGRnHUCo5DsY7gQOoB64qTiP7p/X
imSM+75m57HGeKQGhNcFpDwpYNnOc5FRecGDDYFDHGByTx2pZFbkspIx95eg/X6UgjKSgsCvA2mq
aJTEkyiADPU4Yk5FRoCXOeFGTjHWpFdoXEi5JJwQwyPofyplwr+aV5V+MADgDtigZDNHsfBJz1bP
rXYeHYZB4N1ZGYRo3zqWI5AGeO/PH1x7VyrplkDZ35wd3f3q3farJcWUNvHN8iZ3YXafb+dFwsNs
38spOWKmNs/K2C2DX7m/tSMbz/gnnFLCGvYxo2kyM0S7yUBhLMPoMmvwqgt1eZh5hViMjPB+lfoD
+wv/AMFA7T4R6PJ8PPirczax4Cmt2jtLp4XupbQ7QvkFPm3RsM4GOPp0S0dxy1Vj5HFk8dxGloC8
1w4AOACWz/d9f16V+jv/AATu/ZXm8LpafGjxZeTaHb2kL3FoshEUckTIyyPKDyFA55x2Pavi/wCN
Xin4e6b8Z31X4cRXV34TiuYr6zjvoyrB+HaMBsFeRjn+9X2R+2L+3f4H+J3wSsPDnw51zU7PU7u5
Rb2CG1e3UQiJt8TMRtZSSBxkcVUnzaCUuRaHsX7aHwLh/am8F2Hjz4fa7Dr82kRPbi1sXWWO6jDl
nCMDw6nt3FfPn/BLqJ7T9pPXobhBHOnh2eMqAR0uIevvkGuP/wCCf/7WXh39n3xLr1h401PUrPw3
qdr50UcUTzQJdqwDN5a52sy4HA5xzXu3ww/a/wD2efCnx2+IXji5l1DS7zUplhsb1NOmMc9uUQyH
Yq5Ql0zyBkAVm29i0rbHSftqftUX3hfXvHXwe/sGfV5PEGmW9tpk0ICiB5lw+89WHQjHSvm79ln4
ReLfgr+294A0zxVpcmk3UrXMyxs6sHja1mVGBDHPJIOT2NfU13+1X+ytqHxVT4hXesXreJ7aIW4n
m0i8MagKQDt8rGcdD16V4l8Yf2xfh5rX7ZPw58d6JqN7qnhrRIxb31wlrJF5W7zAzhHCs2A4yAOl
TZsmO57d+2/8ePEP7PHxl+H3ifRt1zYGyni1KwJ+W5hEqkp7MckqexWvWP2l/E8HjD9jvxJ4jsd1
va6jo1vqEaynDLG7RvgkZxwccV8Ef8FCv2jvAXx21rwrd+CdVfV7ext5EmlaB4lLeahACuFJ+6T0
xXpviT9tz4X+IP2HYvA6atdx+Mf7AtdIbTXsZAfOQRox8zb5e35c8MeDVctifs67n51z+eZrjkkq
xwAu0cEjj16dfpX7H/sZRSH9g3TYZEkik/s/U8pIvzLulnYfXgivyt+Cuo+B4vjF4Ybx/HJN4O+0
ML+SLeXiBU7HO3BI8wDPHQ5xxX2D+2l/wUA0bV/BcPgb4N3rR6dLEPtuqwRvbkIMj7OiMoPzDGW9
M/WjqXLWHL3Pz98QyMNQcFi7PjduUA9On51h3jRzzO2CqMctjIwB06de3NTTzyTXHmKwLd8knHHu
arxj7TGPm2kOQR6/StUciWth5M0oG0JG+AMhiMDHrTIsK8gCtIV6ljg8/X604csCpO4nk56D1A9K
hiyJjJ5gJTnAX73qTjtigpR1J1dcbk3RyHGC3B46fSmXEu4qU7tlmA5J96bcOZZWG9QrDHA5Uf3R
UajYrbJAX3kEsM59+vXpTKa7EryBWYEfLjL7upPbn05qO4A+YQghweMDvj/Gh4zJIpaQkgYUkYGQ
e9PV/M2KDgg44/i465pBZiSu5Zc7Wk53FVz25z/jQzhiMBIuMrk5x7ZpslsPPJRgy5+ULj/I7c0B
mE0iho8sx68Z+me+aCQkcB28za7jGM844weKkklWNQuBGXHJPQ+hqIqvnBSmX53DHQ+x9P8AGmHf
NIzBVK7RtGaaKJpdk67gfu9crgcZ49qTa6gSqFLZ5Vj0GP5VGId7KZMhQSAOpHf8s050xEOcpkZU
8cdjzQh2Fkfzp2VipOACvTP4UgufLfOzAU8bhxge9RJG6sST5blSc46inEpIqghwckNlc8+9FjPV
CmR0nbCmNJDkHOf1+lPdiWULzGh7AE/U0sewgEOWVcKRio33W8jBScMc9OcYxQiWmx7S+Yh3Hc6t
jO3HGfQ/nSK+8ybF2pyBu9f8iot3O5yOAenGePQ0piEUZk3bv4tueR+FUJJtljIO0kFHI7ntUSus
LAsrlsZB6gj05p7zAqnGT6DsOOP51WMuZlXA25Jwc4FIp3JRdAM0QUCNgc57Dn8qZKNsqKecEEnP
BNK8pCsy8pL8oI6ZBp6/fBYBIj1OM4NBK8xLkyZAMfy4xuHHH4fWgSooYNzkZGcdemKSW5Evyryz
jhsf0NMIRG2lcsvUH1pi32HTusi7ASqDpgZLflS7/Kic4cEkAHvilJkJDOAMdD6elNmLcqm4Oyj7
3OeaRSViYuXjXH38EkkcAdqY2UjdQiiMnle4PX8qjimdIiNw3nI2nr0/SlEn2UBtx3DkEY5H1p3N
ENkOcktnfgqAeOKA7SRgMHAI5Dc0pKu5KjzFJ67ug9PrTRIYmlU5Ix0BzikTy3HZwAHQrklhjjmm
lAodUZgwPHByRRhVUM7kN6DnHvSAMzFgdwOVJxTIs0ImcszR5ZlwuG+6eOelSuDFIWQY29GzwPrU
OCnQ/L0yOelSqInUhgwOccdAMelItCy3DmVBtUB1wOhPU96jMrnaFUxgjBIx/KjyyNuMYB+UN39K
dJIMrjcWHtjn/wCtTSQcrQqNHlgwA7Ak8H600s6M3mcEjjA4xTJWZcgY9ScUk4UE7WOMdcUMe44O
SdqLhT1OOtNlG1XKphS2dx4xTlUKUQ5y3PfIpk87BxGvzIDyMdeKASsOkUbFZZNhPJ55qPcoYyDc
2TyP6UpikIwoOSOhHNBAK8AB8AHPb1ouS1d6Czygkso2ZGDzmo1Ll2K59zjikk68Bgo/vL1NK++O
RlU55HI9aaHaw+SVEzkAFuPpQ9wiq6nL88en1qFnLMdwx6+hNG75WAzweO4xSC41mL4G4AdQo5p8
jIpJJGCOoFM3gpkEsCcZHSnyqm1lKhhj8cUhjYjuRmk2lFJUc4z6VCAXbbnEfTGeeKXafmSMqUxy
3oP8aQIqYG7k85HU0CaFwqD51BPofSk2+XIwcErndx2qUOjAhict0wOaim4YgtkgngccUkO1hGmL
vkKMHjBPFE0pUMF+ZyeSegpvknGRlwvJXFLI291CJgYzgDn2pjQx8tICuAMZx+FNKkqQuCM/d6UE
ncwzn0HTFLk7SSFYDvQIY42k7uWU9jT9zRqxDbQ3QGkJWMo+3f7dsUruhyJCeTwetSMYQcncMnPV
aWYiROcdeMUOQjfL+YppO8fNgYOSadxDBy2Qflwc05QFwApz2pxKhCpGc5wRUbDgktgqfXtRcB7g
Ekt97GBt701Ii6g9Vzgse3tTJCpbjINO85kBTcCKQ0O2LgYHyg4yDUb5JPzELnB4pEfcAOeeOalc
hiMDLBcU7F2Qwu3Cplh0Bp8khLMqZ+hpFBOABn1IpsuHbIGCOKLEtIkVFeQK4yp7jsKRjGpKjJPT
PTtUe0H5clc81INpJVjhhwCO9IkcjkMRu6DIPrUYRpEJJwST0pZMxDgk5Hft7Uud2S3X1H+FMAVg
jg7tv4dTT2kAckKCOmD2qLc2xtqjGOSaVpC3HA44z3obACSBt6ZzQ0w3EkHB4/SmGNxy2AOKa77A
efmPUYouBIcfwkgClU7jwSB0IPemMH5HAHanox3AHGM/pSKVhWby14P4elOkkIC7sA/rUbyLlgQC
B0pd2EYhuh4PagLIbLksC5yBzwOlLJslAH3mxx60jAN1PJHUUOoBIXnbzkHrQKw5nKFsnO0Y/WkV
h5nTPGRTXJYgDAbOKeSzvz1Uc+1K4hXeThdu1hzj29ajbMxbaOnO7vTizbgWJJI4OKQZVMgkluoA
70xoCjIwDHAHtSxnknllB70jhpMMTwRyMUrvtKgMcelBV+wxnVnHy89M5ofaqEYO4eh60syjzMKP
6UzAicAk7vr1NBN7h5hJYgct1walVj5fZecZB/OoiCMEgZ7+oochumN3agESy7XZRG2PUimv8kmQ
cDHOfWmosjHI+X1ApSndmzzQWxQRu3L1Hc0jEA/MevSkZiBg4z0yO9NK7x9PWgiw5kBYEjGPTkUc
Dngn16AUxWKjBB56UhPGcE5PagFuTli5AKjPsKaXKAnBxntSFtsaYPfqtDDcMg5CngGgbQpOx85w
v8xUbuSAATnPT1pzHcu3Jwec4puCULk9OlK4kPKhVwOCxpJMcgZVh0phILZB607csTfdx+tAMRmw
QuSSRg8dKQARl02BieM1IwwS3qfvGhmVQTnnrkUmCB8bQRz1JUfpQrYG1QfemMFOFyQP71MU88tk
etCY7EspUdtx6/LTWbAPAoYAMcHkc0qEFs5wPWi6Gg+U9flHvTmzgZ5yaa4EkuGPAPHvSriPOVyW
OBntU3RQm5tu3BwOxpQVBwAfxolbzByCp5PFAfGc44Hej0AUKPLc5LHryKCSyxqTwOfpSecMY7fW
lkTCMwAPPSpAUudoUtg5oVjG+33y2TTFxkENgmlwSMEc/wBKAFeVvMf5h8x6kdB/kVGsmCRuGe+B
xmiQsDjOFzjNNkbIRgAx71RNyYEc4PA4psr5YbegHf6UqlSuCuO+R3oeIYHrmpKGI0hBHPPHNDRk
ZG44Ge3tT8ZPy5bHU00u2e4B/WmFhHXJBLUuEGQSQTTig3ZHOcjFIULFjgcenai4AAc4GAc8Z71J
ksAG6e1R8IRgkewoKkFCcgnoaBikkNkdAeMjikDqeoyO9DDax5yM9PWmlmXHABz0qbAKXXGAuVzk
YpRIDEVUkbuv9KRVLIGA+XPOTgU9tickZ/2Se1CQEeDuXYAMjp609mPLYGSMAChcNuIJBAOM035k
6EcetVYdhwTKMD97Gc1HtZQxwSoGKlzvZVyG7kCmysDGqquAvcdM07AII3ZQEB+XkinvI2WUEhW4
PFRyKSME5UjnNTCMvBu3cg4HuKVgsREHYAHYMfvA0r7h05YjnnrSxFjlduQP4vSns4AyoUjBBJ4p
WCw2QKBlBlunJ6UmFXaSm7HWmZMmBkAAZ+WnkiTOemcdaWxI5lUyMFDdM4NNfAD7j82ODmmrCysc
tkHIyKXyxgAkEYPX1pgK5XyxtBA7nHWnpIHxyzHkc9qRW8sgkBuvA/nTZC5kD7sDNCKuPWL5i27C
sCdpHSmvG3mbdxJA5GetIjfM+cEHpmnI7NJu+Xce/pTuguKnKgOckdRjpSMBuG44Q9zTvMePIYbg
SR9aarJA+cAnkDPIB+lTcLgsewglSMnqx6UoLOxU/KDzg0jHYWjwp749KYAyoXBKjsCMimA8J5ny
g/MvOOmRT5V2qqgYwOoOf8iopPndAD7BulEq7XKvhj0U0CHM5ZWTPB4+Xv6UOvzg8sc8/NyOKMmJ
toYq3qBxSEkE7wuOn1pN2Fccj+TIwjC9SBQ4KkEnBxkE0hiVGB2dsjPrQxDIMcHOOO4oTuVcXf8A
vtrJudurY4pPKkUv+9KgevGakfczuBhR6imyN5zBiQzrz81O1xjJHImVs8EHgDBFSbQiE8nHQAc0
w4eXlSD0HPrUrxGHIbIU/KdpzmpbFciUNKSC2AOnfn1qRUVMr1U9Pf1pPLG/GcqccD+E02SMkuuQ
zLzkcGmmholJjJYKhxn+HOaT5mcYfaPbp9KQK0SklCCRwecY9aajboXjAIYncMjrRZDJlieSUpvG
M9SePwqB4yuTuwqnAKnr/hUo+Yh2zvBP3fp3FAKPEzDcuDkk0CInJjPO5TgHa4wOe9PYFlIVdwA5
I69fSiU4B53gtlWYdOPenBwwc45B5bHvUq9yUERWEgE5yMshX8+KWRo1mkUER54OOOPb/CkWJ2IB
beBySB0HPehwPNYFN3I+cf4VXNYolKoxghDgOeS4B3Y+n+etVyBcOwBfaDwGPNKZI45mOSQBglT0
qWMRSodoy2eXBPJqQVhsjNHtRlDRnGWBHFVb8CNG2/NnndjtVnEcTg7+XO3A5x61X1DhWxIWI9u1
WrjditCxxhV3cYxirL5jLEkHjjb0qrDh0IBG4cjOauXBCQsoywIB3HoBVXEVWA2jPReMg+1MUL5e
Nm9snnvT5YAXGwkjOOTz0qPy2EgBOMn1qbgLuzgsoOOARSTMAoXaAVPp2pHRom4yCpxk9jU8jKRj
Kk9z0OaYGlIFdyEQ4xuIXoOahaUhCDtbAx1GSc1patp32JVKlw0gJbcMDOT09ulZLQSuYmK7oxxu
I4Na2ITIiVlQbVEb87ssAOnWgqInIc8kdM5I4pXYyuAIxwePYU9pQAB1YZBqSrleRgQcAls/QVAw
UA4BPvipSSpG4bgfmxitbUtGSXSoNRtEYxgBZ/RW9s/SpvYZl2pZ33Ln5eTzW9CqAK5wVByABnBN
YdgzLcLgZJIyua/Q/wDYQ/YCj+MVq/jD4h2stt4TMTJBaMTDJKxUFZlcHkAE9e/0ouFtD4mt4BcI
XnZioAw+4Y6/p2q9YmSMSIFLKASVPByR2I9K9i/aK8HfDjwX8a7/AEnwFe3GoeEoruNZZr1yI42U
4lVZCPmUEY3fX0r6w/az/YT8CfDL4FWHjXwGL17iFoPtMm8TxyQOjEyZxkDO05zii9idD864wSwL
MT1YE9cfT9asG+l8l2mIcY2kswPPPHT1r60/YK/ZW0T49eO9dTxjpOoN4ctbFjb3cEhiSWUSKChP
rtYH8+te8+Df2JPgRqH7QfjP4carNqs2q2ccd/p1vHKFD27oGcFgpG5WYjnBII60+dFW7M/M0x/6
nO2WZ1yqgADA7j9O9W5rWQYkY/McEIeeecV+s+sfsVfsw6L8RNO8G39xeWfinUIzNa2Mt4waVSSc
B9m0Z2tgZB9PSvEPjL+xf4F8I/tafDrwbZNeHw94l2vLbtIPNj2lwcOB8w4BwR0z7VPP2GfAM8/y
kSAq3OI9vOP/ANf9Kit5/NYGTcpYnapbgnPX2r9E/wBrz/gnPZeA/Dz+Kfh0Z5NJs4C2oWFwxmnU
8YkjPdQM5U+lWtW/YL+Hsf7HcXjy2/tK08TR6AupytLKJEaQrudSmBgZJAxjGBWikupG2rPzhcu7
/ODHLyAeuR2B/GqzpLb2oRlBfPzAds//AFq9M+E2k+C7r4qaPa+P3uNP8J3cwhvrqBgvkL13Hg/L
nGT2r6r/AGxP2ArP4c+F4PG/w3ll1Xwt5KGeNpfNaIMPllDKPmQ5Xntk9jUuV3ohNpq9j4G2usbp
IVBJ47mo5AkTOFbZIAeBkAj2FXrizaAkYyy7uQMhfUk1nzbTNIpyCoHIGc8Z61adjF2JEMbAyEjP
UKrccjp+tV44o3nchmV2wFJHGM96mIEO8eWegySccf40ssZM0jCM5zuAznjHHX/PNF9QTQ6a3h83
j52UBhlcdv8AGmRSozMu1o8rnaRnmlvGtzMNplDKNpAPQ45+o60rqFk3R52HsTVGt0V7mAxuACSN
2JAD+GBT3kAwNuQw2gA846ZP5VE7NDOARvVuSw9c8DH+NSORhwRk5zvA6+nWpZLYOWjIEa5bGMn0
/Km7szxkkyBFxj3/AP1U+WQuwbJ5yPl+nXP41WUgykElwRgdeP8APFBm2TofOLryJQxblupzTJYn
Xcxk3NxtHqaWZMk8sCnJwenoaV5UiyyEs6kYx3HqPQ1RcWmIkgikZSSrD+6349KV2jYKRjcACzv1
A/z/ACqOOXdI77AmPvHOM/T8KlnZICjIz78kHPTb60i9hXA4Q4Z8jaRz8v8Ak0krhoCqMTs5OeAc
UxVAuTht7LzxjtUklwqyIMEdvXdx7UamfNcjCIFcq+09CfwqUb40WRXGZMjLdh9O1RSyRwSTKPmR
8gs3Y0YjKR7B90YbmmUkhzOnlqzIWAyRxzn1oYCTYAMLnJP4c5qLchE0mSEBwQhzzSS3KTRB1Drs
zwQCD9aAbSJV3LO+z7gOME9fpTBOwuWG3aBxyKRzhiwfHy5Az93nj8aVVaRW8wjaRu3Hv700ZavY
CS8m1QZgDjGOMmgRmSRhlVI4wTgn/PNNSNogFOScZySPu+tOYgCMr9/k8igrVD3QxlXcqu3hlYdB
2xUUhAdMOrFgDlMHB96cXWUSFziRgQN3AqO22mcCVcKAc4HA9KQk/Il3q53EMWLfxdvepGaOTO1g
o+vJqKeNy6qV2DsxHakXB81JcbtuG9B9O9GgczHg7o96kEHPPrx0pkjCPywgJB9e2KaPL3lVYqcY
zjhe/HvRI7klSWcMevofWgnmY7e+9XzujzyQPu+lF0w3KzgKwPQDGRTWY26lFdWJy3l9/qKbNOZF
T5Bgdcjk/jQaIekRbAyNw/iYZ/OmwyiMuWG4E8H0Ht+lKAViGVwQeSvpmmCISMSCQDnAHpQQ5akx
+ZIyzlnJJwO1MSR0f5MyDqoPU8UpKGTgYPQMT1pWlcSAuW4JB2ep/wA/lT0NFqI0jySneTuXngDA
74pgBJDGXcR3bnmnGMK8m1sIvBPUmmtGchWwQPXt6ZoHqg807s/eBJJZfT0p1x5bMSjBUOTj+76G
onIM20ZCEZBzkUhi2fcVnyCBu+v/AOqhiTHPGgYsv31I7/rSSOdg5yRnBNNYARu20k4pSvmj5VOV
PPoBikDGqGYgKzEkcgD29aV5ViaQFe/PzdKYUGQFLHuBu5PtTkdD8oXDcg96ZNwLR7JAWL56Z470
iqDu59+tL9nKTfKARjH1qOfJO3eA4J47UFJXCbawA+ZWBzn2pvmDegBDAcEAcGpPkKqGyx6jb0NR
7hG5c5DZ+XAoKshyAKmAD6ZI6GmSnzGT5uAM7R6UpcsCA3PXPpTZCHHGeD1zSFfsNZDtORtBOePS
kYjcpPzFePSpTsRt8gLgds9eKgkbzTxnGSP/AK1NkO48ksepDjBKinnYM/eyfXtVcYVuZGJ6YPan
mNXBG8nPoKVgQb9mevzA/NSAlXXuvZhRIoQ53AAHA4pjPtEa9GIzgUDHsgDEr0IJIIxUYA3gx/Ng
4wRwOKfK6tu5xnjae9MLsikrgkcY9aB2HyyMGKsfm9DUUw8xjtHHpinSEmPgk4yT60YEjKAdvHPe
lYQxweGJxn+VK5Ji4UjBPB6UEDlS3y5470p2vwMbPekAzad8mBx2wabIhDFeRkcnmpNysTu4weMD
+tDEOXVvlBH8XWn0KsNyjDO3fjrzimYGcrnjr/WlkiK9OQB1o+9vdSw46CkSEy5Ygde3apEbAAwP
l/Wo8qCeST702QvlWAAGcbulMq5MzrGWKgBcEZB9qY6ZJyCVPIprktH94gZ6k0p5Udc549KdyRGY
4BJ/4EKXbuKnG5z1NMC5bk/LnIA71IcFuGIwOBUgHzDc3BAzyaaCWBduDnjFCqxUjcRml5ABPOO4
/rTGrAwYrtGRg5J9RSOhUKSc++ae0hB+bJ3Z4xg0k33WOT/u9x+lAgZuD94UNxzwCeM5qPOUz+eK
EVUz95j69qQD3bK7Seg6UMrOpOMDHJpHHJAwT6j0p7HKlSSO3PSgaGiLd8oIz1x60rQ4BUOMDnB9
ajIEfzYy3v2pwf5huOD6GgdwALNnuBj6VJ5at04I9aau5GbsD0zTJt28c59TQK5INvmYwQx9KQ7V
zz8o/M0wDgnOz/apd7ICMcH/AD1pWAVpix6sQc4FIBvTP3GUck0h+nIpRhl+UHOMnFNBuMZcvtYk
gc8UNhSBzTiilSxOX44HpTQccYzx+NAhXl9u3QU1lJG4kgZ9af8AJkZ4GfXrSMMkgZwDnOelA0xJ
C0hHYdc5pEA3E456U3gBTtwe+eppzr5bkqQ30NAh+9UOQpI79qQqzEqoxu6A0OAoGOMjOaGfCbNo
yMnd3NA7jWDJkEfMCRSiMEMTw3vSbRhSGJJHTFLI/JPHJoEGRhQSTjoKYeAQOvZT709gF5Byc9KC
xd1xhW6c0gFIC4PYcYAzzTJiu/5SQRzmnCUKh3AlunFRhcseuMHOfpQXcVhuGQMegp6xli25+nPH
eoySw7sfb0p+1uMkgdMHpQwQjttZjsznOOaR8jG4AA+1PkRCfp1qEMvPJ46cZqUwZIzEKQT3pCmU
3NkLSM2PXk0EjLADPpg1RNhzlS2EyRjpTMAPtwBkdakKKwBDfN79qaqkgkAkgVKRdhxIB2sMYPrS
hhkjHsMdKjkba+T9047U5ZvIUnBKk9RRygSOoVQVOGBxmhVZ85wMfjiomcMW44PPPahHVX+YZHtz
SsO48EoCB/FwaZOMZOAVzxSnBY9iT0HahlJcDccDmkA2I4bkcetPGCmQSQSeB2ocqgJByGPI6Yp7
ybWG0YHbBoAjJCIWDHcDkrQWDuu0tyOhpJlG8c59adztXOSRnGBRYVxCpXO4cc/hTgoRdx57cduK
aHbcd3K0ohDfdPPrRYY5hGuDu+Xv9aaxBbA6Huaa37tf4cgfnTg+RjOBngUMBCuG3Icr34604skj
jkJj2ocFc4JwPSmhGeMqAepJosOwEgE4yV9afs4yOh60BfLPQ+4x7UScqWJ2gelPYewwoWbe+AvY
ZoG5SOcDtSIQVw2QM8ClBOcdhUiuBUhsknjnilVgrEsA2cd6HOVKscZ601wYVBII70IByyIZN2Pl
zwP7op0sa5zjap6Y6mkGxs9VPAIxTCG3YY/QelO4XF2kKw5welOfO0HB+UY+tIw3YHPHfNK8civg
AjbQmAwKThl528c8VI6llHBI9R0oZpEAGTjqQKcZECYAOM5IJNLdjuR7d7bQNw9qdxGdp+UZ4pjZ
ySeABjAodg6ElMEdyegxVBclMSn19+aiJDFlUZC9BQqHaAjEL/nmgYUsR06DjqalpgxELFgw4Gfu
n0p2Aoc7fkGAQf4jTWbLAKCAOtDMe4+nFIkcW3OAnH+yP5UuQCwyRg9COvFJKjEjge5FPYc8fIw9
aaGkIEjCMCCSxB4PSmYAOG6DkUrgwnJGSehPehUUyEFcj3OMVWgwZ22DDfL0NAQROJF+YihI/NYj
O0eoHH40pVPMChgUxkkEjmlYYqOqliSCAOB1603zA2QVyD13dvxpZF2thOD17c0BDg4bKjqD1NIB
RGhZfl3Z6nOKcSyqw3Ec4xTEUo+NxHHPpTocZIK5zz/kUIBhUIhbOT0p6qrqFlxgEYz1pQqSMOSu
D+P5UhVFc7gWwfSi4h0igpwMbj1BpEbAIKsW9ST/ACod9hCgM4bGG64pnmbXKqSWDc44zinoA4p8
ztg4zkDNDxERhvugnO096EIyQzFM5PzdqfA7qWYszKvGMZqQI2T58LxnnNALAuSuCD1/lUrsCAdx
34ySDTXZGbKgMCMZ7j/69UtRjkX90SwAZjw3Q0MOfnV2JOT6/lR52XO/7vPHHXFKxWRiPugDPNZt
ADKjNkDDDHUUNcqj4CDJIy2KSRidpkG8YyMeuKkinkSI5UHPqM5osO5HcuZAqqzIgwSSc0puCIxH
uUADlz3/AB7UeVuCoMgkZHtUZUbBuO7OD6VdupOoSKsRCp0bGOMk/jUwhJiJYZUDPHBP0poK7nHL
MOiluB9KaY9iN8zMTkbAenvRYYmVPqztjCHIx71YfEabNq/McgFeD7ioIkWVSy7ht7E8fjSbdrFM
FcHjv+FFgHPcb1CswOPu4P6GhIvOMgJyQMZ3YAPrTX4QABSMck9qkjuBEfLdfvDqepFSyQWOIJ8y
75MkZJ6fhTpAd6MCCOR8pxk1CqrDIpB2AnnuaekmZcqcEEkZPFCQAXjRmYqXJztGelQXjoqxomQS
Mnd9anI3xFwu7HBz9ap3fAAJYLyBmrQ7jM7VBwcHqR2rTtY0l029bLB0jAGD3Jx0+lZcBO7IJVxw
oA6n3rZil3aTfYI+UKrHHPX0qrE31MtPLkQhlw4wDgfyqFiPMBDHC8AnripcqShO7bng47etN+Tl
VOAT1PWoehaGTOyk87g3JHakZsoikBdo6+tSznCgfLIWAwe9D7IiNsW7I5B7U7gdtqGpyeK7KAPB
HFfKdpZEwG6459vYVMvhK9ijMCwq+Dls/eBwc4PbpSo/9hagbpWjAO1kBXOwegVu4I6+ldP9uOp2
f2uRCF42O/Ads/X19f1rujGLd2cqk0eU6pZNYahJDINuzoVbIHFQ21l5+/vjJ54BruvGmiyXMDEI
zXUeN3ygZGOv8uK4ILNYsvzFSx6+ntWM1Z6GkZXGMo8wg9Mfe9a67w2Wk8L62JAJY1XlQAQoxgN+
eK5GQiWTa7fdBwe1dV4elg03QtUlusiK5UouEGCf/wBX8/eueSNTmbaVVXJjyB3Ppx/niv3Y8S3V
yv8AwTPtLiwmeO5TwXZuksTlCWEcZJyOecHOOeTX4SacQbhRs3/7BPrX7i/A3xHp/wC0v+wV/wAI
N4U1O2n8UWHh6DR7u1cFPs86oAuQR0ITg+1TrzItr922fj7dSXCkIZJpyzt+9fJJOOCf/r199/8A
BOP9qjUtJ1bT/gx4ut5te8N63ui0mR0Dtau33omBP+pIyR/dP14+IfFvg7V/CXiu78P69by6Tqlr
c7JbadAhjcNg8cZBIPPcfnX6sfsv/Avwl+xl8FZfiL47nt11u6iFzPcKNyJvLNEkY5+chguRVyki
EktTr/2p/jFov7F3wki0rwPoSabqWqNKdOigg3W0LgjzJHJPH3h+Jr4y/wCCbXii/wBe/a/S51S5
ku7670vUHknkJLSMTG2ST14B/L0FfVfgT40fDr/goz4K8SeCdU0uXw34gtA0ltFKyyTxx8AXETFQ
ODhWX6eteL/sh/AbxF8D/wBt7+xtWgM8Vnpt0gvYgxi2MmYzn7oLAA46joaWjHBe82z6y/aO+Anh
Xx1aeL/FuqbpNe03RRLYS28m2aykhWV0kXBB5J7+hr8/PhF8fvFHxw/a7+Dk/iqaGa+0a5j06GSJ
QHlHzbpGOfmJAH6+pr6c/bI8O/FHxX+0Npfhz4fX15pNt4j0FbC8ulQi3kUPKzKzFSo+Xv1Ga8r1
n9l7Tf2Y/wBrP4Hw6Vqtxqlvql/HI6XSLuhlVwrFWAHBDnr0x70mkhwep+jfiDxzomi+JtI8OapO
kN3rcc32VJgAk2zbuTJ7kMMDvXEftFaLZ6L+zN4702wtkgsrfQ7hIrcHCKAhwPYV8x/8FXZpNP8A
D/w+1G2mmt7u1vJ3SaFmUpjymzkdDlQQc9QK9jTxVqXxB/YBn8QavdG81LUvB8lxc3BQAufJbJwO
M4FC3Ikro/FrVId+ozxxxloy3yhDkDH17fr0r9g/+CdNxc+If2MdNg1O4m1ALNqFmDeP5uIlkZVQ
ZJ+UDgD2r8f7jTrvX9V+zWFrLcX1zKQkNuhdyQSflHUjiv2Y/ZU8HzfssfsmQW3xAvrTTRbvPfzS
hvljWYhlU8Z3ZOMc898VT3GvhsfjT47eOHXtQSMgBJnjVI+QACQBnnPGDn/aFcv5scsh8xVRjlgA
cEccZP8AnpXQ+ObpLrxNeSQnzYQ7eW6kp5nPJx+HH0rnXCSOZV+QFyFDnPHNaIxcbjEUSse4TG5D
0wM96hLqs7sAWYngAH8KV5VIJQFOu7juOlOBKK0ZYsCwJJXP0oJ5UHmK+5Nu1wcjnOc+/p/jUY3x
zJG8TjILkk8Z9PyH60qtDHOJFDr945HQj2p4d5HVt6BCDlT17YOD2o3NFFIbcATBmIVioyWbP5VH
eCQOQjtNn5wnoP8AD2qVzsGFbCHOBg9venLt2NPgqVG0MPU0WDQgDeYjOqnci7R3J98Uwh1UvvZg
OAAf04+tTN++O0QkZ+Ulec8f4/zpHYtAUjyGIzuHY5otchxEI8uIKqbVY4L4yBzUbymTGdqrjq3H
aljEmMl2C54yM/5FPkUSI2QMHvgEDtVBGI1YgYJ+PlBGeM4NCOdgQrtZW+UdyaepHzfPuIG4qABk
9zimZVpDlWcnkFjzj6dKVxtPoMkkXnahEftwSc+lSMVid1YFyeQGG0gkVC27zFLfu0zyB6Uss/mA
sMMQMDPX0pkOwoYKvluBhuTtPX8afvaONkaM7EYKRjJz6E0phXfnA29QA2QaRpizEKoIU7mVuVJ6
UDQwu0Max7sDJ24wQO/p05p0CpKGVmCcE5bgbqfKqsoHO4c4A6f5GKY8wkJPUYyR/k0rBbqNMuXE
ZXIzkZ74/wAmp5WjmHyEoidD0FVJmEkihE3BG6k9PeiQMsgwMLn149qYJkkoJZ1VS0RGSvX9Kck5
Yqcbc8sB1H/1qR9pUoC3B3FQcZpZ2ZYzwI4+g7mkO9yN4VLF5QWyflI6D60+LKRHMgEmcKR0A96R
32xBXO4++QB+VII40USiXt0HQ/41QDmw+47y2wHGON3FR7yzlSMgDGcc4+tPe4VCN43KSQMUqsIV
fOSGUEHk/hQS1cGmIQ/LtUZAYDkjvTS0agk5GTxk0912CM44CkgkA85qvJIrtsC424HX8v60hKKF
kOAMEjdgbgfWn3AMcGAQzY44BJHfimvKrRbtuCSPl9P/ANVSvEkiq3zHdnnsKC0iLaW3IoJQKOp4
P0pH3MpXkBTnjrQQTGQG3EZ+YdhRuBwpO7LcY69KaRKWo4KkbowckltoPBx+FPaRHnI3ttB5I5+t
IP7hb+LHHeoyYzM/zElV44yM0rGiTQ5SifMGLYJwSeCM+lRvI0z71l4f+EDpS5KZHPHp9OlMSYMQ
SoAA5wOetMGrjzHuV2I+6uafua1G3lW5ZQDmmJdlRjO5mXJx0/z0oeN2QMTgDgjvg9KQloTxWyPB
JI7htuCOKq3BMcbhSTuIzzTiWcPGJCIAfmUng0XaIYwEyRnOARj86ZTCRQRkghwOCBiq6FTySfr6
Gpi6ylVG/IGTjnaMU3zEiZht3dBxwBSMXqySGQR4ZiSvT6mmLIwfKnC88gc5p7MGhIOMdSajB8xG
yC/+z0I9KDVIjK4+badxGc54waHAePIPKdQOlKygAFSdx/vDFJJ8qAZw4GDgYyM5phcgAIk2E8Ek
YJqa4by1JPQtj5KaTGTjODnBI65AqOUjbsJ3GkZ6km91Xhfl9c00o2NxbAzkmkHyFeSAcHAPb0pj
szSP8uAo6dqCgaRTKwHzDGM4p5bCgAcfXrURJXbhduBzg1JIxLHauPQY46UAJKcdSUz6dKYy7Hxk
Djr1qRxuA3FTx93PvUUzLuJJzjuKBjgyjGVyDnn2qRgzKMDBPOe3SoRIGAYnvx2pd5zs/hzyc55o
GmkOdQFPB3DrjvTWCgbijKM4pWG1mAc7WBPPWmEjqG3EHv0xU3E7AwGVZW+Xp06USCM4IUhsU4zb
8IydTxjgfSiVDASSDsPGCenpTAaZdsZUnC+hpJMSKCQSTjpSomQSMEg8Ypkg2Z5GPXt+VFhCsjnc
wXIXP4UHKSA4BXHII4NIc5wGAHGMHOadljyvCkYINFgByA5IGM9jRIQSBjIHU4ppBBAHQc560STO
Qc4x6Ypl2QbQ2BkbQc02V/myo2n2FJjIUjIxUjEuVwOVPUd6kVhozIoVV6c570PIyn5jnB6U9m3B
eSppjjdkYBA9etNCaGhshiRg/wAqcTvIORkDGTS7N/BGMDjPFJtzyc8Z4pCHFCeSenr0H404sFBL
cnp1ppL7c84OeKcIiXZWU/L2p2AY4diVQYUAk5pu/IIKnjnHrUkigMGBLcc0jsEB2gdOueaQxFba
+fun7vy88Ux5PvL0HOT606CPzMvu2jPIzTpY+FKYIJxRYtCRvnJAyM96QKrucZwePpTx8kZHCk8k
1GOecZI/WnYlkhckLv5xxzSBhyOMevpSGNmBy2T2GKRv4sErjgikKwkjEkjHH0phJICipWVlAUsG
4xxS7CoDLz6g0ARl8lht6ZqVkDL93kDkA9KYyrvYjGQOlGfLYk8EnOR3oKuhs3Xrk/XqKNwDZ5FO
2KWyD8wOcmmySIeATgZ5oExXYMh43HtjtTOFGOCc9+aUgxt6Fuc5p33hyfmqiRjBjkuOcUqozAHA
z2p3lseST9KQoGbC5A9SOKkpaCEmQkHGO3+FIDnAOSQemOlOkVFK8knOTxxTHCghsYz6HrQIcWwR
gcgUjkE5ZSO/ymhiCMEY+nWkkLMx9OxNACFgGIGfU4NPk3KV24xjJIpN6qGGM5/iNNlXOev9KAEH
Ruxz3p4wT82QaayDc2Tk4z60MQwIH4ZoBDm+Tp0pVkA6nPpzSHAC7uW9R3pGYAnaMUirscW+faTg
EdB0zUbsA+F//XQzgkE/e7U1gGBJ4bNTYV7geVUEHOexpSqMeMrSoSBxxxSRsDgkc+1AybcuQMkg
8daTon3vm7gGmLF984IOc5pZAwXaVJc9B6Ur6lDFyrgkEnqOac7jBC4znmlwQQN3PTp0qN/lIJ5B
5471exLHEZIYfLn8qFiKkscEUgLA8cgHgUqMwcngHrz6VOoaD0AZWYZJPXPakyFOQQSBTQ7KuTtI
znOaceELKcZ7UupQx5BKp+TBIzTfKYHGD2709gVAHA4zzShSdxYHNU7Egr+Tu/jNPAUspDfOaaVV
YyZMlv4SBwPrRtEikAZfPBHSoKQ+ZkYDHBAz9aYju3P90ZOOwpdo4Pcdc0FQT0Kjv6GgB6qspHOc
8EfhSzJHG4G0hAc8VH5hTO3qfXtQ8mec/P6etNaj0Hhw8WVUjHU+1EpDKyo2Bxnmo1fIbqg6UroM
lQSBjIPrRYZG+6TceT2609xt+VskDnFKSgxlhjNKMIVQnhvTmkTYDIgOQAPYnvTTLuQgLz357012
HA6jucUZTLFRtOeB2+tCAVnkbnbu5wSR0pznnjJJPHtT2ciQqT8voOxpAATkA9abQ7DXILlwM9Nx
zTWxIyrtxg49aezHBQcoTyabGuxWHO7HB96LASDBUL6HH1qNjtfIbnvimcoo39T05px2r83PPI70
7CHkKVUjnHOR6Ub1cHgnFRsCMmNcYHTP606MhV5Gc1IDjKOEx0zQDkEE/QY601CQGGPlHI3daei7
y5bABOaaKQgbyw0YOcj7vSmlMLuGTk46Ur8Y43EHpntTBhCCw3emO1MGOlUtlVGOe3el+Y45+XPW
htrtuC5zwFP0p3mFPkQHcRkD/wCtU2BDHVhKT/CT05HanyqJMkMPTPrSSSM7HedqjggetIoSReMj
b6CnYLDyCycAjaelRxttcnDHtgmnebhW2cgjvTsCJBlui5zRYBsm1R+7ySAMig5OI2wuRkE9qTeZ
AFxwpz9amkTgnd8wGdvQUWYXIZsxyHc/XH5UobKbW4B4BxTi4ZDyGwB6HoKc20q3zgZPGT0qQGuu
5FyxBxz70kbjfliCp4wRg5pS4di20tg464wKdIq7kkVSQBj8R/8Aro1EMeLY+SykEcUjIhVSNxY9
e4NCgAhmG/2/CnyO53blMaggemBQMRGAwmeecZ/zxQJVjlKlcP69e3SkRVaX5TtUjrTldZS+FY/N
yAO9KwCgAybe3cEdDUp2JIQxO1uoAqIybzIzHB+nSo9ru7EAoe/oafKFh5ODjkEjlRSSRBSApwAS
cqOtBb5nwAQBg56ipllhKsFO0kHDHoOP/wBdUkMYkiQSMyL5oI/iGcUybDhtgzGTuJ6U4RPG6sAV
XHY5Bz3pZcD91jJAPJ4FS2AsIZGCg8+pOO1KhHmbAuGJ6ds+tI0kXlfLAwDHCt0z6596YzbYkHJZ
emOSPWmmBLvYBiAFUfKSO1NZEII3sxHzcd/f+XFQmJ/MVmYnJ5BPX61YjSPzMCQLzzz09v0ppgEm
5RnLK5GASOopqh43Jz0x8xPWknkYy7g29R/kVI+1mQjlW6getIBk8io5gyEw5zj7x9Ofwp5hSPBy
zELyTxj8abI0fmtsXIU5AZf61Gs/mLMMfIx4weh9jR6gLJJvXaVJIHOeM06dlQrnLADO04/nTwpl
jLAnrjIPpTHiVMNJl1HVh+oo0YCSTI0Q4YsBknr/AFpvnKGkyCqdcAYH5UMV3IQV28ZC/wANPlh3
EluFAySRxj1pcorD5J0KbSCrgfeUcfl9Kz7tssAQc9ffpVstuLOzbmx94AcCqk7oqFVJ4JwSOaqw
DVAG3HY8mt+18lPDmo4ZBIxReevX/wCtXPgMzKWJAOcNWqLpP7GltVjG95VJY9SAPajYVihMxiJ5
2luQD0HtTQ0ZVzgk+nvQvy5OSy5xSqisrMuFA6ikyhrAQ7Rg5xikIaUkc8cnA709WDvGpcqueXHN
Mn/dytsGecZJ6+9SgOtvoJrq/ESxBpAMrtbOOOfrXXaXZ+TY28bRqTGCZA4+8w55z37/AIVh+GEi
eGZx5c1w7YYsB8igc/hXQ3urW6RK8kbQlRgDIwfQ57V6EVy7nK5Juw3Ubr7IweZHIw2Q75IPP415
vqsguLq52D5d25cjBOe9aXiLxCbuZRbNsXBDhGyCSP8A61YLvukSUseDt24/pXPKV2bRSWxFI6xs
rMDt557kUySeSeHazERqcKDSThXY4ztH8J7fSmvbOhUvld3IPYis2a2HWy7pPkzkHqP6GvY/gJ8f
PF/7PXjmHxT4S1EW93EvlT2U5Z4LpCuNkqZG4DOQeCD3rxyJj5o2HK55yOK6XQdHvtfvzaWFlPe3
rIzxw28ZkcqBklVHXpz9aaLSaPZPjp+0N4h/aH8ZL4l1+KxsNSECW+7TLby1AXJ3ksxJOWHGewr0
n4i/tyfEP4sfBy1+HGtQ6a1japAHvLe3ZZbgQk7CSWwPuqTwOh9a+br3Q73TLmS1u4XtriJcSWlw
hjkjzzllI47Vs3/hPWdG0qK+vtPubeCUqizTQsqNkZ4J4x3qHa5Hodt8Dvj/AOJ/2ffiBB4w8N/Z
3uRDJby292hkhmjbAYEZBBJQdCOle16X/wAFMPid4f8AiHr3ipNP0S7m1e1t4ZbV7R/Ki8sEKyYk
DdznJ9PSvlLS/DWoeKNSFlplrNf3BjMzQwqZGKKAWOBnpWpB4C8QzajcWNvo+o3MsR2SRRQMTGf7
p9Cew44rRJD57n2ef+CuHxOM0Ljwn4XkRjhwsc5Kccnd5g/LHavGvi5+2z8QPjN458KeK9RsNJ0b
VvC0gnsPsEcmzzBIsgDhmJYZQZAI+8a8eh+FPjJjNND4fvjEMfM1rJtODyvA7Z/z1rN1Lwzqum38
Vjc2NzFfyqGS1e3dZG3AnKjvwQcD1p2RF7antP7SH7aPjT9pSw0q08Safo+mxaYXaIackisWYDLE
s55wvbGOa2PD/wC318QvD37P8nwnTTtMvdIksJNNi1NlcXMMD5yDzgkKSAcdcE18+a54Z1XRWBvd
MurbOQi3Vu8JOMdAwB/H2qN/B2sHRl1uXT7tdPlA2XbREwsAccN0H4+tLYz5r7Gl4F+JWr/DXx9o
vjHQ47c6jpdyLiJZ4w0cmDyrAYOCM9CK9h/aR/bk8c/tHWdrpurxQaNplqSxsNKkkWGfGTvfceSM
DHpj3r5/ttOk1K9trS0t5prm5PlJDFGWkfPoBn64/wAKl1LQNQ0mTyp7Ke1lXD+RcxMjDOcEqRn+
lGjLs+pSMzPJtRiMEYYYOM9vpUU8LRXDRs2MA8AY4/Gn3EkySBgNpUctjg4qBpdsyLKrIcYXIPXO
M1Li1sJtCSAIAiqWJHOO1IZ2e3LIpK4ywJ4U5xx+lLArRO7IXI6MpPKgf/XpixBNhUhVbJPbjntV
JdzNasc3ywpKEy+AMAcZI6c9PrUbqdiZQrIowWBzkfQ095PLfETbg0gYsw7Y/nxSurGZ0BaXJPIH
Tvz/AIVaRoJIJmKrGruP4jnaD9KYBIJWBAIZiq4wT/ninM7FncuXKghSpwePT06VJHaszCTO8t8y
7OnuDn8OlJkPcijcL5o27mztyeRx7duaNu0kgkEnOCCAPX6Dimx/IokRzk5zgcn8KkeQ3GTECARj
Dcj680JkqSGmNnViNxKEt+8PT3FDoZEG35jn7xP9P89abPFvZkDkSDj5jjt3pwBiVcgHPDFR0wKR
Vxh+Z8qFARcAgcj1zUkm8bnOFIBwR9KZJaSEkoDGXGMnv6/pUpeUxqyvH8q4JOQMAn9eaCk7iMoe
IiRcTk5JVetRCBZEkeUsWVdyYHJ57/h3p8gZ8skrY6Ef3s9cUkwZH3KzADGQvJP5/SqDQZl0R1CD
zQflz0Htn8DTGZ2jzvVQOCynP1qaVpCRCAV29dw5P1Pc01ERx5a5xySx6ZxSuS4roMLSDDqTjOOO
Afr+VPwWIJKZxgYbHtUs9zMuFwQOBkDrTI0XJZRls5A6A1QcpAxCScgrxtLkA4FSeWfNYEMxY5zk
DHHGaRZRG5JYAN94c4OO30pzkHC+UoHOD0OKmxI0gxorNlGBwozkdD/jTpXYeWuBjHRuMD/IpgjY
s6szEsxxjnFEwV5IzIoKgZAbPHBxgUWG9xzk4JJEmenTP1/So2jkRtrfKSML6f8A66sNjIcYJA5H
qKSaPzZC/wDdP3SMjHahMq1yAQxjIkUgZ64zzUs2cKoBZBwNw6U6ZSGVlG75scfWiYrExLEnPHAB
xTuLlIN0jSnGWAHpSElZUUrkYySo5x7VYWHJXduBCgg46inOIkZxls5DLhvvcHqKaYuQqyPuk3qi
Y7Drj6VLbs0xLFyETpx1/wA4p7QqCHQb8jGB/CfX2pbmOe3AJJERAIYgj14/nQWkkRKR90JIWOTk
DAx17UixhIRLjdkntgj6mnSFlVnXB5wD/DQ8Xm5f+Edge/NAaEeZGDNGvyjJbHYetNnWIhCWIDfM
do71YQvGkoU4j4BU9+/NJ5ZWLJJIUdBzx/k0DGALOGZnaMg56dqik2+XJj5QpwMjrT3gEh5JVR1x
1xTZWjZl+9kAgY79eaaRDYmwF3LLxjkDtSSuEChScYHJ5JpzhXDCLPzdcjmnRRK0AX/VuOoPQ0DV
xsirLHmNtoByWbuemKhkIfcruEH+zU7RsYSDyB0UD3qFoSijADORnB7dqVgEWLJY4LD+8vX6U14s
Pg5Az0/CpmaSIEjC5GBx39qU27t84BZicgH6f5/OnYFYiWLauVYvzh1pZkYJkLtQcAk5J4qeWNo4
yGG3pnjknFRorMdgXBHYDORjrSKKbhnlRZMqgOc4ofcUJ3fMOBtUfnVlrdRuGQpz0yTimpCCQEXc
q8detIykiuHMbAtkYIJJHNPURgFixJPIwOpqR0ZGODzn7p6UkK/IflGFbn396CoohP7lvmTcDyB/
WnDaQWXIDE5HpUqWrDcxbGRgA9AacsDorAsMg4plW7FRfLjwxBPfHWnTKSxaNyfl3EN2qVSdoBjQ
gHoR1pGcuJZGBRAuMCkVYgGQMuowo4PciniNZEZQuOe44x6U5RyAWCduadghdp3HPOQKCepXe2eV
juXBzgfSh4Ska5BXj8c1N5TtlXO1V5JPWiOI9GJUnkE0BZFVzk5wR/sk02RAmCODjIyMZqyI9pkG
enOQOaSaJmchkBxwCepqbEtECp5m7eQOcghuvtSsB9w5yx7noKlMezA+6Mcd6MKzcgMSMc1S0Elc
hLEfLj7vGaWWIYwSTgc46VL5YVMfxf3cVG0YLEZI56etDL5SI46YwQOwp2wlQ2Qo7D1qd7chicZU
H1pXtWU5ALd8CkJKxCUCFgRgHrULAupBIODVlU+chR9d3NIYgWYZIweuP0plWInRo+F+6RngU5Uy
OTt9VNPYjccDHoBzSSDAOQwHPO3iiwrDSdvUA5/SmsuWY/d2+tSBFIILAZ7ntTWieRxgkE9gKBNN
jXy3yqBnuTTS29gDj+VTGI4woJAPB9aGid14Ta2ccUuouUikTyzlTx0INKsjBmUnJPWpHBzhmKHB
HHP+c01gCAWA6YyKoVhgVmyMcYwc9qmkUKFBHBz70iwlOxwOMGneQy9Rlh1B47UFJEYARSoweaQr
uCnG1OhxUhtztLsM56elBi/hOQfQ0ihJLcIRjp65qLJJBHQdPep3hllcADbgY4NPaFAnAIIPJ9aT
HbqVgHH7wHA6fQ06UAE5G4kZznrTjGEYnBO371SCLjOcDB4NCQELIW24XB9/SkZDkt09hUr4ZSGy
Ae+KbtJXaEYn1xTZGwxrdsE/lzUboyDDYXjrirMkHOFfknqOaR4nUEc4x169RUjsiCIMv3uh4pGi
PGAORmp5Y2TA59cUgOIgGBBBBwB2osFkQeWrx7yHJzj2p7RqrZ3D6elSMpMpw3B+bFIQN2SCzMPw
NPYaiRkYJAOQB6Um0lUAIzzx3qSWHaMjg98UpibHBXg9AOfzo3GVyN2FycsacdqngHd05qTy3MgO
BtxnNKYi5boMnuaCSJo0PBbAPHWhoCh68dBmpUG88gevtTzFvVOz4xxSKGfYm2E4wSPmPtUO0ryA
Sc96sS3QCBVAU4x9arSNsbIYkZzQS9xjrhQSDzTHHOf4SKlDBgD79DSEqFIUHrikIeFEWHYhjjgC
mSL94qfw9KaoBfGaUKd5G0n6UWJuIFJdu7DvQzYHI5PtjNOA5xzxngU0cvgmgBDHt2sAQM96c2QC
QQTnqDT92AfnGcgYxT2Kq7Ajt2+lSykM5OQflzTjsRiQfm6bj2pjODwWJzUfUE9aEguPZioBz8x5
Bpqgs3GSQcEjtTcFRhh7jFSMFDg5ATHzYoZQ6WMRvgHPclORTXjD5bPTrmggEkIevAoK7nOTn2qU
AkwG0HH59TTGLc46HsKlXaD1yc45FKEEMhYHH1NWmKxG28HnOMdDTm3YXH3ueRTi26X5sEHjNACm
Qn7uOh9agYm8htpJb2NKMr90bO9EuA2VP496YzNKcnJwODQBIVcuTnOecYxSlS44OcVHkfdDHJOc
kdKcU+U8knnpTuMQRgNkkjjrUjlVUjqf7uOlMIQow5APUUK2JMjn2xT0Hcc7Alkxk46kYoYySZ2D
Ax29KJ2DsXz7YxTA5RiACCePwobFcYhy2T9OalK7BjqMk57U1iuT82AewFOZ9oJGGUg496gQhlUY
BUAd+OtIQNrA8qemKYQEC/mRUjN+9DFcD0xTAaGGSoXHP3qeJAd3pnApryFpMsAox2pVDSqVVS2B
uAA9qBhK6rg7dueg7U5pN5HylSR2OKcQrxbSjMw79sVG/wB7nKDsDT2BpjW+bJBLEDrin/wANyy/
lRzExTpnmljIiI3DK4pANYFlABC56ikD7VIHIHYjvT5PnJCggHuOO1D7QwAJjIzlwe3rTGNklJJL
EcjhQKI/mB4IQdQD1oIVmXDfKT1HpTWLAsF5X6Uhj7mQ+Yvy7RjsOvNE8y5yfmz7YpyqH+ZmC5Of
oKj2JsKlsnOc4609RC7sHcuFBwSx7etEnzMV3AZOc4/SjKshPG4ZB+lK+9+du1QeMUIBREjwsNwz
kDFMYvnO7bs7+tOUKpIHJJzSrKYsrgKSeaYxI4SwYnqwyDnijdtLErtGO/GadJ+6K5UoTzzxkUi4
DfN82efm9KQAn7tsou4Nk54pr4b5iSDg4U9qVjskbbwnfH40smJGOOmOpAoFoJFudegCnrUk2xXG
A205OKhaI8hQTjnHtToV81cKoGB1PHNSkA9yEKkbstkrzmmSo8cp6nnNO8oDAG4eo7+9ODl13Sbs
4+UD07ZqhjYSkTgMpII52nn6U9YzMQMfL/ebqPapGSFELrkyKuSp79qjSZuVG3cOQx71DQhJAIpG
ZSF5479KRgVlDKxyecg8Gm4MisWO9h0x3/CnmLPKkADGcr+NUkhg+8kfP8o6nPPvSTybACpK55z1
B/zilJj4Tdtwc5IpoRCxeM5HJwSeaVwFkZpAW2BATzjGOn+IzS7PKTIbOM/eHQYoYKsnz7mOMAqP
Wm+UpbaxxuzkZ4Bz1o2FsErgR7zu2g8FT3qQFHgUk7XDH5ic5HamyYV/mIxgYz1/Ck2YyQCB/dqW
MmcoJj0kJ6L2qKeTpgbCWPIpXkXzU2K2QM4x/Kh3Em4suCxJx1xQmK40K0kTFfuqQ2SadIquHLj5
mIyyjGDTFbzIihIVM8jBwfanvIrxuULYU44PSmAzyRG5AbPYc9fSnywAbmAIYAYK8ihVhK8qS7Ht
0x/9fmp0lLnGNqKfugfnRqUrDFJgwoYMjDG4+vpULRbiSq4YEcZxk1JdGPz3c565C9MDt+lK6bmB
yQo+bKr71W49CRlZFiDYUdCwHB9SKjkkDKDGAAMBgxzn0pplMgVUIbByDUrxJ5ZDKCWySegBpWsL
chZNzM6KcDkY/OkZ2mgH94nAyafuKqNpYkjBB/pT9iOAqjB2nIboDVK4ishCqW2qVVsgelMm8qVF
AzvDcn1qZYwm+PfvXP1FVLyTLqSOBxTBEpLxxgRrhh361X3NIQGP3jkHNKrhCuGIyAMjmkdmBK4w
ByKTAlYtGvllNq5yMjrSi6Zo2jRVUEAO3TIpDOyHnDEDHzciovld9rHGeuOlLQdx+94wAChzxjAz
SQxmRmBXc3+0wUU6RRGm5cMM8047lHCnd6t3FF0CVzf0vVn06JzCQQ43N8mSAD3P0qvJqFxerIWm
KxsxJB4GPSqVlL5btlsL/tDjNPlliAYluQM7x0PpjHfrWzuZJJDQ0cQ8tgVAPBbpj6d6iLCQHcxU
ZORjqKSZpZ3DPlywzyR6YHOangCvGElQDadxJPU9v51Fhle7URyx5BHHAXuK6XStOHiLQ7iLZHH9
kyYiPlY7hnn16H865i7O6UjnavHH9Pau3+HYa6tdZWKRU/cHAPXAUngngGpaZopWOKt4Y2uAhkKK
cAvt4HPWv2B/4J6/sz+Evgx8IU+OPiG4GoT3WnSX8M3ktutbYKd4287shQenavx9glAuwH/dxh+p
+8oz3r9z/wBnpPN/4Jl6WoBj2eE7wHcMY2iXPrnpQr7Fyk+XQ/N39r/47Wv7QPxX1TXrHSLWxsYg
LOxe2hw88aklJJc9Tgj6dq/RzwxeeFv24P2Nv7C8MLa2ev6fZwWk1ndRL5lldxKuVyOgcAgOOPm/
Cvx21VRaTRPEd9uDkFuCM8gcdsGvqD/gndoXxDvvj9pWq+CZWt7K3Ctq5kfEMtkzAOhB4ZuOMcjG
eKcomSXQ+pP+Cf8A+x74q+FvxAv/AB54zt10d7S3n09NPuY+qv5bGUMQBjMfXpx2rsPAP7S2h+Jv
BwexoBMbKMvhRtGep/SmuQqHDMdx4HoKVlJ3McjkZU+tIyhWbnleinvTsTuPIDBdpALZBP0pAGd2
54HJB/nTV+dtpxuXtxT0DM5GNzMCAMd6VikO3L1wD6cVApUSE7iFJJx6e1SFSjkNiTseeKVlQFcr
tUHv1oLsRyYLkD7ucgg4OOtDMNpBU4JzzUsqb1coNgz60yM5Hzks3QZPGKCSGRFX7ueeachJZQQX
OeOakWMyKQxI5x6YpyxFRuJwaehSiyBm4IB2tnoeRUsj+WGU4Y+vX8KjwwLk4K5/P2ppc9924ew4
FFhWsDtsVupXufT2qQNH1A4z+OKaUjlxwdzHmmuQpC4IAPfvRYauOl2sRgsSTjGcUk6lDgqTmmAq
ZARwoyRjmnNKUJ+YH8OKLFXE+coQVye5PpThlBtU5A6nHOaUtk43Anvg9qWVmzhR7ikToRnlDz0P
PvTfM2Ng4wPXtU21QoOeQMnHrTSEY5wRkZINILJ7DTypXlsn+HpS7OMljk9OKchUHcRnPynFDMU+
VRyvQk0CtYjAPPGQTjkdKWYBANgPpzSMHchlYcnpSMhUncuT2x3oKTJSPMUdsGo5E2ODtxng5NPK
sBlMnHJ9KZOzjHzZXODg5z70Ao3eowqxYkAKeuM07JaPhDgEDk08qMggE+2ecUhi3nAbaB1yegos
NxSGFsA8EHPT1p4CMAeh64NEud2OirxwKakWVOcY5A4oIFG3zOoB6bTT5FEQ2qDknk00gMeFGc5J
NIVkU4JwcEfjQLYcFGQc5B7elIY98mN2fUU1R85y3SnjBfPTjPNBWgjoGYY596cxRcsdy4GPlNIX
O4bRtJ6g9TSBDlgaA06Au3bkkluxzTTIudu3J9c4p4CkqG57c9qdIgUggbhjGcUBYapjUMOT3+tB
wSVQHj1oKhiNqcdzmpGQMQq55PX3oLSI3B6HoRxzQY1CMwOM0GNoXdWXa3XDDGBSzE5K4IB55oJa
FMgEeAoGfQ9Ka+R6A8ZNJySAo4680sagkjO0jt2pgg27kJ4yKjlGCQV5qYxrnLnDdMU1gFznJx0A
5pDshJCMDrgdBTiTswBnP50xX/eLxwc8Ac06bcGc52gGnYFYiIxjaoUdAGpZH34IURkcED1pSCgU
scgHkAU0kBnJ4BPBqWgbCQkIM8Y7dxTJFBBPTjg1KijJ+TtzmkdCCTj5fX1osQ9RiyiNMo2HzQSZ
SRzyMcjvSbRliPl54qUSkk7sgAD60gW4xo9xIORgc5NOYg5JBwp7dhSqAGXLFlY8k0qRkO+36YI6
mixpYiYpuxtLK3cU0oSCu4Bc8UrrjaRkn9KAwU7iuT3OOtFiLage+1wGJzihm4OSehPHrQIwx3c0
u0ZOMk85PagdgOAjYyxpvO4AbsjnApxGDxnd05pSCDkkjIoGJIQx+T6fNShCjEE5Vu44oVdhIGT6
5qVlXPLbR1GehpjI0RWAAGc8YJpoXY/c44GKVQ27IXJHOFGTTlwp+XOO49aQEOfvEnnsKUx4+8cd
6mdUYljxz9RinTKgk4OcdqTuFrkDALHgDk96Gj3kbm3EE/KDRI+SAfrxTWxksvJ5zigCR8MoBPzZ
IxTG5U7evpThGqfOzBufunrSuwIVl4HpRZAMfcSSp3AetAwFycnPpTownUtsPXilkGWzyVIyO1Rb
UAmCrwhBHrTXO4KFOG9uKcFKncT8p7Goy4bduGAPTvTsxMQhlkI+6QfyqcgtEAevXd/So9oZdxPz
e1PkBAJDEqeOBmi7EkhFXEiqccnrntTmG4uBkew9KTaZJEQAegJPSldSqvgAnnjNFirEcZHPI54x
7U53UrgdfShk2KMqQT2xkUhG0ADg+lNIQ4tyxI2/7ppEVfLY7grk4wV4PvmlljaNiSPlPO4c9qCQ
VZRkjcTkjrxTJ1FCKSckMOmaXI/iBIzxjg0ioJHPGM9Kc6eQNqtk44HvU2KWoxcEdCGI6mgRKuQu
TnqRyaRdzSbwBnp15FOZfLJwSNxz71QxhGwswXGD1pFGF3ZIOOgp5XJySc9x60wNGH6YzxSYDlVW
I5bBHNJKdoJ4BBxg08IF2qvQjnvimlTHuD5wB3pJgKIw5wxG725zTHQK5wGVR1X3p4x8mCysTyR6
UK5/eH74PU46VVwEjJyeuOQAO1IArbQAQM549KaAF3E52eop6PgsxUkbSOD0OOtADXdUPy5xz16/
WpVJSHfjOeOnJFRpGAylmww5GRmp9o3sQu0YyDnvQ9QIC5UkIM7eOOacqgR98nnJpCWZgAuTjORS
OUO5m3B+uOlSUK7KUwMgr2xS+YWCgjcT7d6R902WwRnAzilfDSKBlAP4sc0CHuGVPnjIxnBIzTFE
iBWHTPJB9qd5rlSu7coOV3HJpfMZzjOAQeTwBS3GISpk2g7SR1PNPVFRXUnaRjDdc+1GVX5m/i4q
N2+b+I/jTGCSjkn5TzipFmc78ZCnjGeuKhAMKlXPynsOue1OVMyN1Tt60hDpgA0YjQhV5OP8+1SG
dZDvLbVBwB1zxUQJXIYHAPI9akYxv5Y27Qrfw9cds0wQyVlmkUqfuk5KipHIeXYD8wOBnvTxtDkY
IPY4/nTJNrBiACuSc5+77UajHSAGJmKjGeQT+ZqsSdvERLDkkjsasZL/ALtGygznP+NOVirDYpUA
H5gcg0twISEYB1QDvtP0pRhtzPgbj1x0pZJVY7igHXj3qIDftXqfvEelDQEiurjBIIPt7f8A1qYZ
FOAQcMMFs/5704rvLDoi+nf2psoDMDswOijdyOO361FhWGyuNqjG4Y696ORErsd6vwcEce36/rUs
kOUIK7QRgleufemvbqBtVw2fm5P5fypjGAHydysVl6AdjViNXgkVGIXJyB/Sq6YkkUKcnJA44BxV
h5fJy6x75CQNx5waYIYwaKQqpIZj822kDtCf+mm7djPGMd6dJsednAY49OKgZgu4Zxk8t1FUBLuy
AcjOM9Cc1JHIGKoSML1JHaktUEjkFGlUAsFXtUDRPuOXBGcYzzQBKiFA0YfKDOF7ZpGbywCSoLcc
dRTlg2I5OQp4DKc/hUTSrJIqhCuBtye1ICRQuY8kkknv2FRtJkEMhO4546/SpoYtyluC/TaMjj1z
UbE7mOPmB5x2461SEK7B0UqCMcEdP1qpKV3/AHtpQ/eU9auTyfuiCPnznBOAB/jWe/zyEDPPr1pX
aBocpBcHluehochJAygHvk4OaAhwwLjnt3pBnaMgHBzk0XFsSLIRJuwFTPIHI6U8ygrtKtuOWVgT
zUQ3KFBUkryT/hUhVJkJjJG0kjI5x71O5Q51jkB2gA+vvRsUJGNjE45IxTTIzFSqbe2ABzSOmWGw
tkjODQBcedoLQBDv3Dlifas/dHsB2NuHc9Kluio27CQo4Cge1NEP7rIXLZwVJI7Vo0QhGmC7Q6/I
DwRSTnMancrZydgHIokjfCsVwaaGQFy3BJyCo/SmO4SECAKUzJ2PoKSRfMUE9O3GKHfeCc4YDha1
tJ0tdTsLiJeZo1DqD3HfHr0qWUY5ysgXOCB1WtSPbGVUFpBwQ2OhrOaMxSlSpReuGGcV9vf8E/8A
9i65+PHiS08VeIrfHgyxJYeYgcXTq3zRkZx0Pf8ApSTRSR8lEyxeWpb90WB27h8uO2Px69KnYRma
NtwjBJYMqjAA7Cvsf/gopo3wh8HeNLHQ/hxpUcWrWMfl6olsS1sc5KAHJBIIYHA68dq+g/gF+y78
L/j3+xDDe6fo6y+M/wCzp45LiDi4S8i3hVKDAOcDGRzkc96vmRzSTbuj8to40gc7QFaQYQEjj6A9
DU06yFmjLbFUFCX+Tgcmvpv9kz9mrUfHH7ROmWWs+FZbzQbC7B1iC9TYLdN7qvynqQwH656V9m/F
/wCDHwG+FX7SngbTb7wvay2PiiE6ZNplv8yQTblELmLsG3gH8OuMVDlZlo/JsWxl2gbFA4U5A7Z/
Ed6IXMkTTFC68kbjw3H6V+1nxq+EH7NnwO0vTb/xX4Ftraxvbn7PE1nbtIFbg/MA2QoHcdBmvBP2
/v2bPhZ4W+Evhfxj4N0aC1invo4BLYNvhmhkRmVjnOcED8CQe9HMQ4yurH5nSEOWkKgNt3FQDtB9
P0zTpN7QyBMSbBg7fvY6dPqa/ZP44f8ABO/wD8RvhPFF4P0m00LxTbwrPa3cCBVuW2DMb+gYcA9s
9+a4T9j79i3wV4l+CniCy8eeEiniWHVbu0eSdmS4gARQvII6ZOKnnbYcrW5+TTRPEPmQKRzubjHt
g1MjrO/397rwVIyPqM12ninTbPwl8QbmOe0kurWxv5IxbzoN00ccpAGRgZwpwfoa/Rzxd+yL8Kf2
pvgLpXjn4K2kem65BCAIOUabHMlvKjfdkB6E+3ODWunUu2l0flcjBL5N0pCv98njmn3TROxkY4ZB
naQCT2/D/wCtWv4s8Ly+G9UvNP1C1eC7s5DDPC64ZXXqCOxB4rDMXmTIoUdM5/z/AI1WnQY1iTIm
9SI+Twep55qUTRMsrthSeFBzj60jbHZQU+Xoy5/u9fevdf2TP2WNc/ae8dR2torWnhe3fN7qaxlh
bn5tqY3LycH249cUXCx4VcoZN3z4lAAIxgj0IPpjFK8fmFXeQK4H387s/wCf61+mn7Zn/BOnTdB8
OJ4s+HNnIsdnCseo6VApYlQGzNGOSevK9uvrX5pavp7WN1DDMDb+VlNp/iO7uD3rNvUx6lOadnZo
wf3JODvHHTn+QqSRkPmhNygAEljk/n6VPc6TJdSkm3lQnsy4btz9MHrTJ7GSPbEq4dc53jAHqDn2
q0w9EQCdyZegdufMUkcY+tOhkdVklctlsHB4/L26U6PTLm7jKxQys0RyZIVJBxg84HOBj86kWzkK
zvImwxHcUZdpHB529ulPcItorLMt0yLuLOT8pIwenrUkEgAKdQ2cgtjBGegqZbMuEGwyMRuCxDcz
D2x7fzqR/D107uskTxwoxCDblhkZyR1GaSsbalSJJZDj94A3I2EjvxSXEjMoAYryd2OvPQ/oan+y
TQkYZ5MHYAv5de3SnLpUzMIZYJS7fMqiNhkev6nn2NOwameNk+drYCcBQOW49adJHPbSRBcFjgMO
nOPWrkumz29wwmt3TOTuK4GBx/Mdq9B+BHweufjH8XPDXhIzTWlvqt4ts94kJlVF2l29sYH6+1Jj
ULu55nbxuhkJOAw9eccjj9aV38pTu8xGKHtgj61+gP8AwUQ/ZP8ABfwQ0LwFf+EtOe1nkt5bK8WL
IjmMaJtlx0ViTycgHNfArJw8gOU6jdzkD29x6UkTZshM6wPulUfMMfKcAk4omMdvdOXLBgvBwSQD
7ev6194/8Es/hv8AD34h6144tfG+kWeqXscNvLp8eooQmz51lKE4BI+TOOma+c/2ufCHh7wL+0T4
10Two0baHZXxFuI5fM2lkV3Xd04ZmA+ntQmS7pnjJnSOTO4kZyOTk57Uk92N/wAu1toI4GCO/NW7
bS59QheS2i80KxVmx0I/rzVi88LanZxu09jIsYAJlZCB6Y5/zyKtWBmRt82ccbVJyVX86hdCZuAS
o4xnn1rTsNEvNQMS2tnPNN83yQxEng+34Vd07w1qurTSpZ6XfXrglZDBbO4Vh2Ixx26+tX7pnyu5
lRvG0JOMunI3DkjNM3Sht+SQDuBFXJNPe3nminilgli+V1kQqR1HKkexrPj3KSjMAPQf/qqbXLTa
3FnYnaWc4DYLAirun2FxqF9BZWcLz3VxKsMUUfJkYnAA6c9Kz2jARgP3jLnB9a9m/ZP+FmrfF/45
+EdI0r7OLm2vYtUlE0oTMUMqPIq56nAND0LVmYXxO+AXjz4OWunXfjTw3d6Fa3+RbNMv+sYDJUHo
CBj9fSvOo5SzsQuyMDIYg/jiv2K/4K1+ItJtfgDpmi3F9bx6pdavBNBaGRfOZEDlmC5zt4IJFfj3
JGCSisQCflUdeff1/wAaV7q5MdZNdhrNJIFCyhQTg9x+n1FRSSlpDkklOFx0PvXUR/C3xVLYjUYv
DGsHThD9oN39jcRMmM7w2MYxzmoW8A+INP8ADSa9No17HoswAhv3t2WCQk4AD/d5yAOe9Ito5lmK
OAABx1I5P+c1XJZnVs7iMjFdb4S+H/iLxy1w3h/RNQ1ryFV5PsEDStFk8FgO31roNX+A/jrSLDUb
668H61bW1gN0s8tlIi4zgkkjgD/CqizNxtqefKhWQsMDAwzHtVqNJEjedEMkScSSEZVAemT2z6Vp
+GPBOueNNQWw0PSrvVr5ozMbe0haVig6khQTgZFfrR8C/wBjjw1oP7EesWt/4YlufF3iXQZLu+gv
VPnrdLG5jWMHlCDtxjB4HeiVr2Ku0rn47zOCQxUF2JBA7n2qvLIzny2YRlT/ABdfpXTeKvBWqeB9
WudE1u0n0/VYAjNbXK7JE3DIyCc4x0P0rnHKuGR1yeRv707CbuI0YljT5mLqeQo9KSd0WWRccdit
KEPAycEZyBXafD74NeL/AIqy3i+EvDWoa8tuM3BtYtwiBBILHtnB/I0JpCOF8vdkuGIzwR/KpGXc
hzGdueCor1bwp+zD8UfG2kNqGg+BtX1PTzJJCZYbZztkjIBXgdckjnjg1xZ8Fa5H4rPhltPuotdS
6+yNpkiFZvOzjy9nXORiqvEVtTG0/SrjVZVtrC3lvJ2DSeXCm5sLyx47DvVORTNIrRAGMcls9a/Y
T/gmt+yJP8MdL8QeJfHvhL+zvFRuTa2BvBuKWrRIXwPukFs8+1fn7+2B8CNe+Dfxh8SzXnh650fw
9q+sX0ujzGILbvF5rELGc4A2kEL+XFZXTH9pI+fJYWV2GDgcnFROCBuMZJ7HvXpnwp+Bfjf45X95
Z+CdGl126tIhLcrEVCxqx2gnJHXB/KvQdW/YI+PGl6XdXtx8P79bW1jeaV/MiZgiDJwockkgHgCr
jqxysj5yeUA8EgA55H86CFWRmBORnDVae38jJOQ/cEfd9Qc96quvlllBGCeAec+lW0YKVxkhzwAR
gn5u5q5pmj3OsahbW1nFJPd3MgSGGIZd2J4AHc9se9VxvgZS2Tx8xzkV75+xb8OvE3jX9oDwrqfh
nSJNZHhu+ttXvLVXVWaFJV3YDHnjNZS0N4K55R4s8Ca/4C1KKx8R6JqGh3syeZHBqNq8DunQsAwG
RnjIrm7yE5AwWUHgbq/Ub/gsJ4p0vVdK+HWmLpd7bav591cfabmyaJTD5YUxhyPmO7B2g8cGvh34
Mfso/En9oHTdV1LwXow1S107CTb54423kEgDceelNWA8ZIJB+Ynn64p6gY5GQPwOa+mPA3/BPP40
/EDw0mu6d4TxEZXiEV3cxQSqUYo6lXYEEEHggdPcV4R4n8Iap4Q1/UtH1jTJ9M1XT5mt7qyuE2yR
SDjB9uhBHUEdsU7Ilb2JfAXwx8UfFXWpdN8KaFfa5qMMZma2sY97qgIBJHpkirnj74U+L/hjf2Wn
+MfDmo+HL+7QzQw30RjMiA4JUn09O1fZf/BNb4HfFjSfiT4f+J3hfTrZvB880umam9/J5Rktm2l3
RTgttxlSM8ivoD/gsVYwz/CfwHM0aGdNddFcgbsG3kJGfTj9BWalrY1b5T8hCrICQAoGc5+tMmjy
5Xd5gIzle1fQnwM/Y4+I/wC0hpWp6l4Q0u3Fhp8qRGe/k8hZGYZ/dkj5gB3ziuz8bf8ABMX44eB/
Ct/r8+k6dqNvZQNPJZ6VdiW42jrhcfMQBnC1oku5MpJHyIYxvJZSFNJJsyeNxI4WvYf2e/2YvFv7
TGv6jo3hP7KL7T4BcSreS+VhC204B5yCPoM+9cp8RfhT4i+F/jzUfCXiPTJtP1+yuBbi0b5nkBJ2
PHj76twQQMHNGncm5xKwEuRtxzg8VLDaNPcx2scW+eVgiL3JJwMfjX0X43/YO+Kvw++CUfxN1PS4
DoZt4rqaCKUfaLSF+d0iHBUDIzjpn8vpL/gkx+zp4a8e6v4i+IXiKD7ff+H7pbGxspQr24LxZaRl
I5YDgemTS5kgSuz4j+KP7Pnj/wCDlpYXnjLwvfaDa35KW9xcp+7kYDO3cOA2OcZycH0NeceSpUuT
x04PWv0I/wCCon7XTfErxXd/CPQbIR6J4dvgdSu7mAebLex7lxEcnCANjpkk+1eE/DL9hD4m/GX4
L3XxF8KJpuo2MTTKmnici5lMRwyouMZI5AJ5qlbdkK582GFdrnHQ4x0qYrtg3RZA77+hPNfZ4/4J
S/GZPBA12P8Asa4lNiL1dMS4YXOTGG8rG3aW7dRkivEP2f8A9mzxL+0P8SbvwNo1xb6NrFnHLNPH
qZMbJ5bhWXZyd2T0NQ7I1i1scPp/wi8U6x8PdU8bafo9xdeGdLmWC9v4lJSF3wVzjsdwy3QZ5rlZ
IfLiHy8HnJr+gLS/2a30j9jW6+Edpb6dbaxd+GX0u4nRAsUt00GwysQCSd2Oe2B6V+Jnx3+Bnif4
DfEe98GeJYFGpx7JYJYstHdRPnY8Xc5IYY65UikncTdpcrPLDb9WOQfXNMwUOed4GAD6V9qfCj/g
lv8AFn4meB9L8ULc6ZoSagjP9g1jzI7iMBiAWAHGcZx6Vy37Q3/BPX4ofs7+Cf8AhK9XGn6xo0cg
ju5dId5TZg/8tHDKMJnjcOmecZpoLpHzZ4T8M6j418SaZ4f0e3W71TUrgWttAzhPNkbooJ4Fdj8Z
P2ePHHwE1KysPHOgXOiXN7CZ7cyFXjkUHB2upIyOMjOeQe9croGqX3hbX9N1vTZ2ttQ0y5ivLadc
HZLG4dGx7Mor9p/hvP4d/wCClf7IYXxppCWmsJJJZS3UcYBtb+NF/wBIg5yFIdTtPXLKeKhys7Mt
3Suj8O2jDEnkejHvTGOyQBucj+E1ueJdAk8K+INU0eWVbmSwup7RnVcBjHIybgO33c4rEYeYAPvM
OpPFURzJjSCAWz1PrTgXYdMEUkgOOcL7dq+vP2dP+CcvjT9pf4a23jfQPEmiWFhPNLaiG+MhkV4z
gg7Bwc8/THrRdAkfIpEhIIxt7mkbPRmOVzyK+7fGP/BIT4seGfCur6rb6/oWsXFpbtcDT7QyI85U
ZKoWAGSBxnivnb9mX9mXXf2oPHt14V0LVrHRr6CwOoB9QJxIAyghQOSRu59KtW7k31seNCJhJhl3
etMdAZMDknjIr9D4f+CNXxNiGD418OOT13LPgfTivjb49/BHxN8APiZqvgzxPHGdQsirxzQNmO4h
fJSRfQEZ4PIwaGl0C5wCnCDnnPNOKgoQASP7x6VGF+fOVJxgkdhT0y8ojQFgDj5ec/hUlRHAcNxn
BJ3DpQ+E4wSDyB2zX218Jv8Agld8UPih4A03xG+raf4ZN8GP9narFIJ1UcKxAHfHQ9jWD+0P/wAE
1/iR8APAlx4xnvbHxLpNo2LxdKR99pF2lYNyyg9cDjr06CsNyS3Plrwj4N1fx34o03QNBs21HVtR
mS3gt4xy7MQB9Bk9a9i+K37Enxh+CnhW58S+KfCEtrodoV868tbiOdYwe7BTuA9yMV9U/wDBHvwB
fXPxJ8UeLLvR2bR49LFrbajLH8n2gTKSFP8Ae2nNfZPj/wDan8KeFf2gNe+EHxOt7XT/AA1rGlxz
6fqWoqqWU6NE3nwys2ByVOD65XrioctSrPofg07OXOcuW743fTgd6+gvCf7A3xt8feCNM8TaJ4S+
06RewmeBzdRq5TP9wkH14rA+FngaPxv+0Zp2jeFNOfWNMHiMSW6WWZALJLwYfvlAhBye1ftX+1N8
eY/2XPAnh/xT/Y73nh2LVorPUobOIZhtnjk+ZegXDBccgE4HejmsJpn8+mpaNd6ZqF1ZXcM1nfW8
jRTW86bJYnUkMrL1BBFVkjAGNrn5ckE19ff8FLPEnw88d/HzSvEfw8vLC/XWdEgudRutMkV/OuWd
wm/aeJdgUMOvC5rd+En/AASt+J3xT8A6V4mm1Ww8LNf7mOnapbyrcIoYqGZenzABgPQjmtLpiWiu
z4g2bZNhQjPHNI0Ducgkr6Yr64/aQ/4Jx/En9nfwK/i26ubLxJo1tJi9m0zdvtYyeJXRgDtzjOM4
z6cjkf2YP2LfHX7Ul5qR0VotG0i1i3DVr+FmtpX3YMalerDg47UibpvQ+eTAwCgcNSvH5sIUEbt/
ORzX3zr/APwR5+Kmn6Ne3dr4n0DVLy3jkkisozIjXBC5ChmXAJPGDx05r5u+AH7NHiH44/GKb4dQ
3EHh7XbeK4knh1ZGV45ISA8RTIO4Z5HoCe1BaaucNpXwj8R6z8PdX8c2OkT3XhnSbiO2vL9EykLO
MrnvjkAt0GRXH9SQQSpOO/Ff0EaH+zZPov7G03weSayGrT+GZdHlvUTETzvEyeY3GTyRzjPFfiN+
0D8BfEX7OfxHvvB3iiNDd26JcQ3cDbo7qByQsi9wPlIIPOQai4K/NZ7HrH7Ef7E8P7Xq+MPtHiN/
D/8AYYttrx24lMzS+ZjIJGABH+tfP/xZ+Geq/CD4h+I/BertFJqOiX0lnJLCfkl2n5XX2YYP41+o
H/BNf4ifBv4N/s3614rvvEtno/iOcyPrljfX0Ync2wYx+VESC2UkGMdScV+b37SHxQsPjN8b/Gvj
bTbS5tLDXNQe7torrHmLGQAu7aSAcDpnilFtieh5mhWTO4YIHPvUeEJycg5xwaUISwJJGeMUgRHz
j5e2KZMdyxHEZAxUgsM/KeM/jXuujfsNfGvxH4OtvFOm+Cr650e4tReRvGyFniKlshd2ScDpjJ3C
sr9k3wPdeOP2gPh/ZQ6XLqtsmt2Mt7FHEZFWBbiMuzjH3cZzniv2y/ac/aM079lTSfBesahprP4Q
utT/ALM1BrOHJtIzCxjZQMAAFeR3AOOaXMan89kkTwzsjbkdMq6uu1kYcFSD0II6UzytwLfw9vfi
vrD/AIKPw+ALr9prUNW+H91YXen6vplrqV7NpsyyQPdSGTew2nAJURsR6nPerHwb/wCCbXxL+O/w
00bxv4a1fQf7J1JHMUdxdMH+VyjK3ycEEH1xiquI+SCi7mJyMHjB60Ip8zbuKY6kj29a9U/aC/Zz
8Zfs2+Ox4c8aWKpLNH59nf2mWt7uPHJjb1U8EdR+IJ5z4VfDnUvi78QNE8G6RNb2uo6xObSCS7bC
FirEZP4dueRQwujT+In7PXxB+E/h/Sde8VeGL7StG1XabXUHQNC5KhlUspO0kdAeuD6V55LC5bIX
IJGFPev2V/b+1fX/AA7+wSnh3xN4XuW1c/2XYXGo2Rils7eaKWImUsG3IrFCqkr1cA1+OIDBd0jH
gg+uPftSuCZ6zbfsj/Fq78EReLoPAmq3Hh+Wz+3pewRh1aDYX3gA8jbg4HrXkIIYH5g5XpznA9K/
cj/glj4+1Tx5+yhYWertHN/wj99Lots6qQTbokboGz1IEmM9MAV+NHxJ0tX+Kviy3tYQSdbvIljR
ckf6Q4VQB+AA+lCdwOIkQna2cu2T8o6fWgIT8y5wRycZr678A/8ABMr41fEHwjpPiLTbHTrWx1OL
zkW/ufKnjXtlCO/auE/aI/Yu+Jn7M+madqvi/ToP7Lv5TCL3TZDPDDIPupI+PlLDOMjBwaAPBIVk
YKETqQBxz17V6b4l/Zm+JvhjwrL4k1HwVrFvoccAuJr97VhFFHx87Meg5FU/g38F/FHxs8Y2Xhfw
vB5upXJ4d2KpFgE5LD7vAOK/ef4E+G/Ful/s4aP4c+JVnZPrljpbaddRRHzI5okQxoW5IJZAM/jS
G9D+c94gwPc/oOaNwUFRg9sVbv4UjmZRhI1+6uR07VXzhiqcgckmkwGvEAg56U7yicbTk5yVNKse
5mKpuOM4J619I/Df/gn78Zfif4I0fxVoOgW17o+qo0kEjXiBgBkZKk8ZIPf60k2I+bvK3SOr4VeT
xS+WJcADg8D/APVXZ/E34V+J/g74uufC/jDS59I1y0+Z4ZB8rqejo3RlPHIPt1r6i/YI/Yy8SfEn
4i+C/HOveGYdX+Gs01ytxLJIjI+1JEAZc7gQ4B6enpVbAfG+teG9R0C/k0/VLS40+9jVXMNzGUba
wyrYOOCDxWe0XVWGD0Bz3r9W/wDgrR+zXr/i6Xw1408H+F/tVhoGlTRapPYw5kWMSJsyBywVS574
G6vymbbLh2yGxxjp9arcV0V2h2gknBz2NSCQqQoGB+fNIRk7c5HXmnpuTJQhs9Tj9Km1txLc0tK8
PXmt3sdrp9rcX1xtz5NvG0jkDvgAmo9Rs7jTLqWxureS2u4W2yRTDa6n0IPIr7E/4Ji+FPEVr+0H
o3jaHw7c6t4YsRPp+p3NvF532YyxZR9gyxAO3PHGc1Y/4Kqaj4T1r9paI+GLNYdQGjwf2uy2z27N
PvkKllZRlthQZ9AKdy7HxYikkHJdc/gKQx+YVYLuXGeOK9C+F/wG8b/GB9T/AOEP8PXeuPYBTOtq
FIXOcdSOeD0rovFf7JHxg8G6Fe63rXgDWLHTLKD7RcXLW+5Y0B5Jx0x3+hpXWwWPHAiNjMgIJ5UH
pTXXJVQp2k5GK7f4e/B/xZ8VNTmsPCGhXOv3cNv9qliswGKJkANjPrx+dTeLPgv4x8FeKtM8La74
dv8AS9d1FVa1sriErJKWbaqr680Es8/KMJGUZzg4z9KkgjDsu5uSCcn2/wAiu38X/Bbxl4A8S2Gh
+I/D19o+pX6K9rBdR7WnBbAKkZBOeMVr+N/2dPiH8PdCn1rxH4T1PSNKhlEZvLm2ZY1J6ZY+v5Ur
pAkeYGMjIHG059qBgBj9/ngd67if4Q+LYPA48YyeHtQTwyxB/tNoCYNpO0Hf06jHWq5+E3iq58Fy
+MYtA1B/C8b7H1NbdjbgDgksBjGTjPbvRdMZxrMrP6q3O3tUbqqvhht54xxiuztvhd4p1HwfP4pt
dBv7vw5bZ8/VI7d2gjxgElsYAGRntXIyDDyYw/oV6MPajlQtSGRi0ijhWH61Kw3xkKcsOGI61CWB
ByuMdfUmpbSYIjg7lJPUU7WGOgtjOgCHJUEso5bHrSfZysmVbHHKjsMZr6Y/YI0e2/4aP8I6jrWj
PqnhWO6lt9Qlkg8yCDzIHCNJkYC7ipJPQA16T/wVS8E+AvD/AMbvD+oeBjp0Et/ppk1SHTZFaMSq
+IztU/KxTJ9wPapTTZWisfD8rqjKFRipxwRik+620YC/T2NdJH4M1q70aTWRpt4+kL11AW7GFecc
vjHXjNOuvhr4ntdPfVJvDWqjTo4zKbx7N1hKY67iACO/0o0Yjl2Rmc4HGMEjvUqwyeU7AMIwcBjw
Ppn8K1NE8Man4i1H7JpdjcX0x5WC2iMrsAD/AArk44r9QfDf7B3gxf8Agn/r2t3unXF14v1DRT4j
hmZMXFpdRwFliX/ZyCCv+0aE1sFj8qQjOduflQ/oKjdFDPnOO3HSrsljPBM1nJA0Fwj+XJEwO5W6
bSOu7oMVdn8Haza6hBZXGmXsV1MG8mF4GEkmDglVIyQPahvWxSTMd8sykbjtHGKjbLHczgccDHBr
XvdDvdNt42u4JLcuSqtMpUEjGR+GazZIZGkILZccc9MU+gNEYQttKpyOTipXlQjjJb1J4pCY45P4
nbnOODTxbySSiNYy0jHCKoyxJPQY71BN7DWiQgBR9SDmkCnmSRwc9Pl79wBWsfC+qxB2lsLqMFtq
Zhbg9D2z1qhc2720jxyKyyRNsaMjBVhkEHPQ00MrMuAS2M7ejdfrSdN5LHd7c12Xwu+GOtfGLxtY
+GfDls91q16GEUBOAxAyefTFWvi18JfE3wP8ZTeGfFukPpmqLGJUVwMOrc7lYcMO3HfIqbq4zi4V
VnaMFBu7tj880vkMMnjCttIA5x9K9t/ZZ/Zpv/2lfH8eiWEklvZw/vLy7TBMcZ4+7nPXH0GT2r9A
v22P2F/hl4A/ZN1fXdH0xNM8SeG7WCQahaLj7SwIjZWUno28n16HnFOMru1idz8j5FbcMbpOdoTt
Uklo37uL5QAQzc8c96+pP2Cv2brP48fFuDT9QuR9m0yNtQlhZfkkKOuFcehJ6YOa/Qbxt8Ov2Rvh
t8RbXwFr3h/T7LxLeCEx2i2Mjn94cJllUgZNT7XVpLY0slufiqImP+r2Pg/eHI6ZzUBRGbCOOnyr
n/PtX6Gfty/sF6V8KPiT4X1zwlctpvg7xNqEeny2zkO1jcs2fkyOUK7iMnggjpgV9N+IP2Xv2d/2
ZvhNomofFO0tGCzi0bVWt3ZppnBIBVATnap5PpSdT3uUmS7H4tGJoHDNh1xyF57UzzRFuVRtf6dB
X6xftQfsN/DD4ufs5r8SfgyiaZc6XaS6jAzl1iv7ZcmRG3/MGAVip/Doa/Jy4gxHvGccEE9/ofSr
V5K5OqFjdV3bAxTGSecfjTpQMkoyofVjz+FRqhWPO9wRjA9D/Wmh0QkMN2AcH0oUSua4z5i+MsD3
I6nirflKs48o+Z/stxk1FEzKS4YFh2x1odmaU78HJyQo6fnQ0xojfEmTnBBzjGMGkVjlU5Cg8AAZ
/OpnIDllXdECMAnGBUMsrSIQegGFpkMlLIYtiltwJb5jzik3MV3ENhzgbB7elQkRjaAxBxkn29Kk
M0w2MDtK/LnGetDVgTHOhLMm87Y+Sehz/Wh2jWReWccHJ70sgATB3HcOWI7+tR4JUMq78djjGKLD
J1EsbHgMDztJwMUsrhnXJ5P93timCQkhgocrztPApnlySN+7XD5+YevfFFhg8geXK5Utx6A8U9oX
DjzFCoeBtXFDQuAzE+WpIG3OOfT/AD61HvlbcoG/b8oB6A07CFlIVBJt+Xuc81UnlQlhtyQOuauH
ehIYgAgli3P4VT8wGUlhuH93pkUDAlcsQc56BR1603cA+MlR7807DIoJQghiFAphU4+dDz37YoFY
eTlepBJAH1qWJf3ZHmkBjny+vA7mmIoKMgbcNuQMdKWLcJSoXLj05zxSsNDxAgOS2BjJ57eopyNt
kZVQv3zzSxN+9IkUMBkYYHPT0qQyKsrGNTHjg7Bn8/yqVoVoyN2SMgElcnLcdKdK6qyFEBKjli3J
6DpVUCSYhgCwUEbj9OKYH8qQEDgdD71sZFoCS5jwVZ1UnoKgbaHwCc8ZGOhqYOTFv8w9eFzjNNlt
2iUSEfKxOHwCD7ZpAiMqpbIG4g/N6fhXT+ALwQa+vyAllOC4BVfcDvXMygpHuU/KQcD+dbHhK7i0
7UGvZY/PjgU/IDjOQcY/Gs2WUtTBOp3DMVV/MJIIwA1ftZ/wSckWX9kSZQqhl1a9zhcYJCnn6Z/L
FfinfXUd9f3k6NjzWJVWOc89/XpX6pf8EkP2hvDll4QuvhDqsxsdbuLuW/0+STAS6VkG9FPZlx09
Klbl68rsfn/8RgW8deIP9IMluLu52ZbIYGRjldw4547EfjmvSP2Pfjr4x+A/xi0298NQz6pBqUi2
+p6DAoY38IPYc/OoJYMOeMdKuftj/s5eIvgH8SdWg1pTPpeozPd6Xq0KkJOjMxK98SKSAV9xjg19
q/8ABPD9mzwp4T+Flp8adaWHUL2a1kvYXMILWSpu8wKMnOQPY5FaPQyg1KPOfVnxv+INl8FvhJrH
xF0zwws+pywRs6QxrHLul6NK+Ois2TnPNfkB8PPiPrnxO/au8FeKfEbtf6xdeI7FrmWQAHcJY0GF
AUAKOMhew+tfd/wz/wCCkmj/ABU+PUnw61fw5Evg3XpTp+kXrRsZpWYYCzISQFbkdsfhXnHxj/Yd
vvh1+1P4B1jwZAbvw/e6sl8LUtl7RYpUeRR0woB4IycdemTN9NSor30foJ8RPCXhbxZqugW/iUW8
7B50tbS62lZ2aP51APU7Rnivxc+PXj/xHpXiHxB8KLS/uX8E6J4ku5LDTy6use13XCkjO0EngnHf
vX6e/t5/DXxj8SfAvhaHwQ17DqtprKTNc2ExjkgjKMN4I54JX8M+tfLn7X37Hvhj4O/s36Hr7zXN
74ngvguoao0rb7xp8s7Pknc2QME+/rQZqXLK7Pvzxj8UdJ+Efw00rxFrIk/skC0hmmjGfKWTaocj
0GRwK7LRbzTdX02PU9KkguLPUEFwlzbkMswKjDgjrwB+VfMf7bhjuv2IL+eQ7YfsWnuQmOQWjAAB
GD16YrA/4JV65qGp/s56jaXl5NdRaZrk9rapM5Yww+VEwVR/Cu5mIHueKEtgTbcr9D8tvjXG8HxE
8UR7WjWLVbiNECksF81h16kZ+bP8q+9f+COk3mab8SwWbe0li5Ung/LMN3pk7ccelfBXxljvrr4y
+JoIZ0e4fVriFY2chAxlbI5OO/J9RX6k/sCfAG7/AGYvhbr/AIr8XaibNtWhW6ureYbUtI4t5B69
w344960sXCSsz8+f267aKT9qD4hLtaInUCpQDkAqCWyfXOQPy4r5zcLGHNs5THGM449Pzr2b9rb4
oaV8Vvj14t17QjJJpOoXheC4uV2F0VQuQuMgZGOcHGOOK8NlhWBWQZDA53A8mqMldFiJCrkqm2In
jfyf8+3Wv3K/YU0hIv2OvCd1oUNvbate6dMVuTCF3yh5FjL8cgHHUetfh9bsn2NnV/LaNQAe6/5P
ev22/YxvprP9gzw3dW7mC4t9IvijqeY2WSbBz6gj9Kyd2zZaRbN79mL4d/FvwJrnjIfEjWrPWtO1
i6bULZICG8mZz86r6Ltxx0yOO9fJHij4afDW2/4KJXGh+KVi0XQ/MgvLJGZYbZrkpG6RsW4w53jH
HUAdTXa/8E0/jb44+KHxH8bWHivxRf67aW2mQXNvDd3JlWN2kIbA7YAx+ea4j9or4I237QH/AAUB
1bwdfeIW8Pvcafb3NrIuAzskKEquf4sBiMen40WIt7yZ9efHLxt4T+Bkunvf/CeXXdFuBg6hplhD
MsDDkhlIyOBntnHtXxB/wUQ8N/BrxJ4d0jxt8P8AXtHGsXcogvdF0+VN8ishbzGhByrr0IwM565H
P2N4D8UfGb4VeI9I8C+LPDLeN/CltF5L+NLZy87xchDLH3YYw3GT15zXyp/wUk/ZT0X4eJH8RvC/
kaTYahc+Tf2KfL5crAkSRqB0ODlfXHrwhWTZ9S/s0fDb4X6N+zVoeq+DPDlp44t2tPtRjlWG4u2l
ZQZISzZAdc4257Y9K5PXvGPwF+P/AMPPEHhXxVo9l8MrmFh5EOuxxWEu8A7JYiCNwBBBAOa4r9n3
4G/Ez9mz4cWPjX4U6vbfEPRdftYbufwxcExJIZEXE0bA4DDgHGMgnIyK9u+K3wXtv2uPgrbzeINC
l8E+NrSIvEZ4vmtpgMmMsQC0TcemM56g001cqS0Plv8A4Jk/s+eCfFmu+L/Ft3jXZvD2oPpdiXO6
2miZSRIUOdxOMgn8zX34PCGknXzbN8PdMFk3ynUPJgxjH9zGa+PP+CU80Xht/iv4Murq3fV9N1WJ
jFC+Q0a+ZGWX1G5Tz6tU/wARvhv+034z/aU1S10nxd4h8M/D651BhBd2tyvkww4J4UcgYAH4nuTS
VrlSSueM/t6fBjw3+zZ8bvA/jvwzptu1nqN61/c6LKmY2MUkbSgEg7VYOQB0BPpxX3t4c8BfCzxZ
o/hz4tP4X0vRkk0YXXmzQRRoltLGJMS4G07Rnn3Nfmz+2p+z/wDEnwj4/wDB+i+IfHd78SJ9ZQ2m
lteTYkikZ1LIVZjgZC4Pfp2r9FdR8Aa237Eq+DWs5E19PBsentaDlhMtsFKdeTkEU7a7ma0jdnCf
H74M/Df9qL9nC+8T6Hp1nYS2UFxe6bqVrCq7vIMgKtt+8jbW9xkHrXzf/wAE5v2ovDOgXfh34T6t
4QabXrvU5E0/X4Io2Rd6F9rsfnDAqwyOowfaviyX4zfEjwR4f1HwbY+KdY0jQJ5JEl0m1uXSNN3E
g2nkAgcjtk1u/sdeKodC/ag+GF5qF7HbWMOtwpNLM20fOGRSfxZa0cbBFq+h+vH7ZH7QPgz4FeDb
A+MfC0niy31lprWKySONgAEyxO88DkdPSvwx1+6t9S1a8u7KzXT7GWV3jtSxYRKWO1CT1wMDPtX7
iftd/svyftPzeEdLOotpVhp7XNxLexpvILLGFUDIzkbvyr8jf2ofgW/7PfxZ1XwdNfx6msCR3Nvc
xngxOOAw7OO4P17ikrBfU/Qr/glr440L4jfCbUvCN94bspdS8LTLuvpYo5RPHMWZeoyCCrDnORiv
h/8Abn8I6b4S/aY8d6bo1mNO0+O4R0hjyEBaFJGxzgDMnSvsv/gkV8Otd8N+E/GHia/tvL0bXjaj
T5sqfMEXm7uhJ/5aDrg18zf8FIfBmreHv2n/ABRqF/ZyxadrKQ3VlM52xzKIY0bDZ6hlI9qRa+JH
3v8AADwJ4G/Zw/ZCs/GcehJfq2hx+I7/AM7bLLI7W6O4VnzjgAccVd+Avxd+Hf7cXgjxjYt4FXTd
Ps5FsLqG8SJzJ5iFtyOg4Ix1+lWfCdhD8ZP2ALDR/C0i3z6j4NTTIFUjIlW3ETIR2YMCMGuU/wCC
dn7P3jL4A+H/ABra+MdNh02bULuCaAROGDqqOGJI78gduMUIlq7Z8x/swCy/Zw/bw1/4bm1i13Rd
SupNGiaZQZIG3ebFLg5B+UlW6diPSvtnx/4o8G/sxeMPDdlZ+EoZI/H+uR2xS0RIo7WchEMu3acg
5UkDHIPrXwhFq1re/wDBVmKaC7jeD/hKwqyRuCrHyNp5HXk7cV9Yft6XC2/xK/Z1LdD4wiHJwq/P
Fyf5fjQ1qNK6R4p/wVZ/Z68LeHdE0/4maTAml6vqN+mm38UCBY7g+XI6SEDgMNjAnHORmvzBlEsC
h8ksPvYwNx/Cv2R/4K2bD+zzoSEyCRtfj8vygMki3n46jjv+FfjVLHlc7SGOcseTWkXoYSveyI2z
G67N68fd9c10Xgbx/wCIfhZ4wsPE3hfVJdG1uxZjDdwbS8ZKlSCGBBBBPBBFc9JbBLfdv2l87Rnk
9ahLSDJBUkn5i3f0/rWl7kRk76Hd/Ev4v+L/AIzeI313xnr914h1dYFtVubnYPLiUkhVCKoAyxJA
Hc812H7JvgnSfiV+0P4D8Oa/bfbdIv8AVUiubc5USLtY4z9VFeNRZkWQbBGjPuJGck4PH0r3L9j7
xXp3gn9pf4a6pq8621hbauiz3B+6iuCgY+gBYZPYZrOSN4bn7H/Gj4u6V8Br34ZeC7bwpBqWm+KN
RTQYoA4jitYvkjHylSGGJAMH09682/4KC+JtO+CX7KFz4f0TwtYX9prT/wBh29rMdqWgeKR/NUYJ
LLsyOnOOa7r9pH4I6z8ZfG/wf1/Q7m2bT/DOuJql2WkH72HdEwKHv9w9PavIP+CtsoX4BeGUYZVv
EUZ49Ra3JH61mtynodj/AME2fhp4f8J/sw+G/EGmaTFZ61r8Uk+oXDD55WWWREBPoFUDFfS1rb6j
fC4t9Yg0+azdChSEEhs5yGDcYIP86+Zv+CfXiXTvG37Gmi6FoupxHV9MgvNPuIlceZbSNLKYyy5y
AQykdu3avIfgl/wTb8Yw6/qE/wAVPG2oXmnGEG2j0nVZdxkzzvz0GPQ0Lcb6po+f/i14k039hb9u
PVtW8BWMN5pdrbiWTSZ5v3aC4jO+ENgsoUhWX06dK/TvwR8aZvFv7N9n8UZNNjgmn0OTWGsEkJUM
sbNs3eh2/rX4h/tJ/D3Rvhv8dPFehaH4jTxZo0Vypgv1uBKTuUFkdtxDshwM+1frP+x/rmkfGH9i
TSvDOkX0D6lDocujX9sJP3lrMyuoDjqMg7gccim0riV+XY/K/wDa1/adu/2ovGFh4h1Hw5p2gXNl
Z/Y447KQyPKC+7c7EDOB0HbPvXhC3DW7PHsI2jsOf84r7+/4KAfsafDD9nn4a6HqPhbWjpnieWcR
HSruXznvozgPIg+8oQ85PGCQa/P2QCaZCQQcdATyfX8q2W1zFb2EMuACq4BGeTzX3N/wSc+J+q+F
/j+fCFvGlxpniezkN6ZTlont0eSN09M5YHPXI9K+Hxbo5G8qgDbeuen/ANbmvu//AIJOfCWbxV8c
brxqmqwxReFImD2IGXmFzFJGrDngAq1RJm0I9z9FfjH+0Z4N/Z98X+C/Ct+sdhd+K9SChhFst4ke
QCSaRuACXYDnqTk18Of8FUdP8OeEviZ8PfHfhOezt/F00ry3Mtk6v5rQmNoJJFB6hhjPBI4ruP8A
gr18N9YuNC8KfESyMT6XpBOm3anG9GlkVo2HqCQRxyDg1+V17ctuLKAFY7iQPuk579aUVoKybXkf
ut+wp+0p4j/ac+GWsa/4l0+wsL7T9TawA09XVHAiR9xDM3Pz+tfnb+33+1z4x+L/AIr8Q/DK8sNK
t/Dfh/xFOIJbWFxcyeRJJEu52crzznAFfQP/AASF+LXh+y8N+KPhze38Vv4iudQ/tazhlIDXcRhR
H2epTyhkdgc9K7L9r/8AZM+C/hL4O+N/GPiq5Oi+KdQubm8t9YjlffLeSSPLFGI84bP3SO4BPFSr
Ipq8l2PQ/wDgmf4V0zRv2UvDupWthbW2o6jNeG6u44wHm2XMsabj1ICqMV7nafELw/p1zKdT8f6B
cxEHbG11BFsOf9/njivmT/gmV8VfD/jP9nC3+H1vqCx+KNB+1LcWb5Ehhlmd0mUHqv7zb1OCvPUV
mfBz/gll4S8EeKNR1fxzqq+NbWeFljs5ITEkTEklywbJOOO3r1pBJanwH/wUI07wbY/tSeJf+EJk
s5dJngguZzp0okgF06kybdpIyTgkDua+Z3XKk7QePyr2D9qzSPAHh/46eKrD4aX7X3heC4H2cg4j
ilC/vYkJA3qGBwefrXkq20ruAc7cMc4646j9DXSmc6hZFeHaWRQWAPBz3NfUH/BPnx9q/gf9qbwO
ukzJGms3Y0m/icZEsEmMj6gqCPcV80PB5UbO+MDJBz05x/P+dfWf/BMf4a2XxA/ad0ue+1U6dN4d
iGt2tt5YJu2jYKVJJGAN4Pr+VZzWhrB8r12PrT/gscu34f8Aw7YIZGGq3AAHX/UHv6c14R/wSa+I
GtaV+0RceFba5KaNrOmT3F5bOgId4QDE6nqCN7j3B+lfWH/BVn4Uz+Of2fovE9pqEVq/hG5N9LDK
2PPidfLcKf74yCB35r4h/wCCX+v2mg/tc+H0v51QX2n39rDJJgASmMOq57EhG/IetZyeiFC92mfp
F43+L+veEv2z/h38OdLa3g8M+ItOvL7ULcQKWeZElYPuxkElF6Hn8a8K/bd+BnhP4i/tlfBCw1HT
wkfiRbm21ZrZjG91FEUZA5BHTLDPX8q6H41eNdK0T/gpz8HYbu8hgT+xJ4HkaRQFklW4WNWPbJwA
Peuc/b3+MWl/Cf8Aax+A/iS7f7VaaKLm5vYrZg8qRPJGhO3PXbvIHfaaa0bDflfW7/U+lfit8XNL
/Z+v/hV8P9F0owf8JLqdvo1kYVURWsKNGHyD1JQ4H4nOa+d/+Cwc3l/CLwNEE3u+uu2O5C20hPH5
V9BePvAPg/8AaPHwy+JumeIo/I8OzLrmlzAgxXCNsfD5OV+4PcHORxx4n/wUStfD/wAe/wBkuLxx
4c8S2ZtvD851WHdIF+0qA0UsQycrJyce4980k0OSfU9v/Y9bT9A/Y9+Ht3LKmnWUWgJc3Fw+EEY2
lndj7AE5PpmnWn7XfwS0szC4+Lmi3of+Ga7RwmOv3V/nXjn7DXxl8IfHj9lGP4Y3WqNpmvaToz6N
qUErBHELiSNJoyThgVx9DwcVv/C/9jr4J/s4+GvEeqazfWviux+z+dJPr/lT/Zoo1YsEHPJz9egF
TGzKlfsfnRr/AO0la/BP9tPxb4++EV3ayeFn1Ni1rbRYt9RtnWIzxqMcBpFYgr0OCPSv1v0/4b+A
f2hI/AvxP17wV5HiK0hivbE6jEY7q0Y4YI+Mbwp5GcjuOtfmX+xd4B+C3xa/bA8UX0bS6b4V06R9
X8O6VqpjSO4j3BWSRWJ6GTIX0PPSvqP9pz/gpvpXwV+Kun+CvBOlWXiO30qVY9fmKny4lOzEVsyM
AXCFs5BAOB2Ioe44qySPJP8Agpb+2hfa1qus/BrwjLJpum2n7nXb0ja17uX/AI91BUEIp6sDyeBx
Xo3/AARyYv8AC34gbvmkGtQ5YHj/AI9k/rmsL/goB4f+FX7SP7OkHxs8MeI7Kw8QadbxzQrJOsU1
7FvAa2liznzVIO3jOQR0NfKX7Cv7Ysv7K/j2aDWY/tXgvXZI49WWJDJNbsm8RzxgHHG4Bl5yB6ih
q9rAux57+1wRL+0z8VA7ZP8Awkd2c55xv4/lX3N/wRo8RapLpnxE8PTXssuk2hs7yGzkOUimlMyy
MvpuEUeR7Zrg/wDgqJ8KvhzdrpPxZ8E+ItLfV9duI4NR0+zukka83xlkuVUHKsAMNxzkd69e/wCC
UJ+H3hP4Nal4gl8Q2Vh4r1G6ay1S1vL6ONVELO0RRWIOCkuc8/zrSTUtWTTVo27HpPiz9v7w18Pf
2sNa+H/jGT/hG/Cukaf5X9rS5kjlvHWGVdwVSVGwuB15+uK/Ov8Aab+P2nN+2N4j+I3wa1yTTFkW
BIdYs4zAJZWgCTOFdMEMepI5IB9667/gqT4X0HSP2gF8T6D4msNdXxTafarm0tZllNpLAscIyykj
a6gYB5yp7V8XwXXlTglNxzlt5649qVgi9r7n736h8SPE8f7BjeOk1Fv+EubwONUW/wAAsbk2ocPj
pncfSvxi1v4geKfjB8V9D1vxjr13rmqteWdsbq8ZS6xLOMKqrgAZJ7d6/UP9iP8AaZ8AfGn9miw+
H3iq/tfD+q6DpsOkX1vqNxHBHdQqNkcsTMRuUqgBHUH6g187f8FPfiz8JbjxP4Q0jwNYWN/4z0G5
F9NqmlGM2ixFSVgdkPztvWNunGOvNTHUe0uY/TX4q/EXwn8JPBDa74w1c6BocUkcBulDnDtwqgIp
bnB7V4N4r/by/Z4uvh7rNnL4xOtWM1lNG8M2m3U3nqUIKktFgg5xz61BH8W/gj+3N+zzZWvizxFa
6FaXMqSXmm3mpRWd1b3MXUfMeVzyCOoINcb8ZNR/Zp/Zw/ZN1jwvbNo/i3TpopLS10+0u4Lq/uJ5
icNvUhgFJ3Fh0C8URtoTPZ6H44oAsaOgK/IpCt0GB3/EGv2j/wCCSUDx/srzvJhjNr93IJAT842R
AHn6Y444r8gfhXD4f1H4leD7PxfI0Phe41C3ttUnSTYY4WYKzluMAZBJ9Aa/XX9p39tvwF+yt8HN
M8N/C2+0nX/EM1t9k0uGwuY7mGySNVHnXGxs/d6DqxoerLvpax+QvxaGfiV4vZFX/kM3qg/9t3rj
nRELrjr0PpV3WNXn1i+nvL1zNdXMz3EsmzbvdmLMcDpyelUJNzspxtPcY6itLGOgHaozgkcc1+n3
/BPL9nvxP8Qf2fYtc8J/G/XfBKXGo3KXui6WqNHDOpC5IJyGZAje+RX5gs7HYpIH0HP41f0zxDqm
lo0dnqF5ZKW3f6PcPGDxz0I5460mmWnbqftT8T/2T/i9b/D7xDLpf7R/ie51EWMxSG/ZYoJPkOVZ
lOVBGRkdM57V+dH/AAT1+GM3xO+PcWnWfj3U/AWtRaVNcWF/pTASOQVV4sk4PynOPbPavne68V6z
PAUl1q/midSrxSXkjK49CC3IrPtb+a0njlt55LeVOFkifY6+4Iwfyo1tYSdne5+6n/DI/wATBLuP
7SvjMA442p9Mda/J/wDbe8BeJ/hz+0V4k0fxX4tufG9+qQSw6xdTeZK1uyny0cH7pXkEDjv3rxqf
xdrdxIS2tak5BypN5IcHHUZNZlxc3FzJJdTSySyv96SVtzN6ZNUm1uO9yIqI956EYHHU1seH4FfV
LMOdhaQYPrz0/SsdY8HzCdxP8PcVesJZkl81ZTFIjbkA6k9efapkWtz94P2/vHXiP4ZfstXureFN
XuNB1ZLuxt1vbV9jxo8iq2D2zUX7D3jLWviz+yDZ6p4yv5vEeo3Umo29zNfnzGlRZZECtnr8vGK5
Hwn+0l8EP2xv2c7TSfiRr+neGXkaKHU9J1LU47KUXMJVt8ZLDKFhkEfoRV3X/wBo74Cfshfs+32m
eCPEumeJIrUyCy0PStWivLuaaZiSSQxO0Elix7DuazTTE1dNHi//AASB+I16Ljx38PDEraXaiLWo
JTkOkj7YZE91OwMPfPrXlv8AwV1+KzeI/jJpnggaXbwL4Ytkuv7SU/v5muUVtnsq7M47mu+/4JZ/
Eb4SeCPB/iLWfEvijSfDfjiS4NlOdV1JbcT2vyyRlFkI4DblOP7p9a8l/wCCqEnw88RfFHQvHHgr
xppfiO+1qza21O10y8juViNuEEcmUY4yjkYP93Iq47jne6seJ/sVfEy/+FX7THgXVrBElNzqEWlX
NvK+1ZYbllib8VLhhjjK1+rH/BS74qn4afsx6xaDSbfVW8TyHQsXJ+SBZIZHaYDuyiPIHHODX5Xf
sM3PgKH9pLwxP8RbmDT9Ch3Tw311OYIbe6j2yQs75wASuOeK/R7/AIKBfED4P/Gb9mvxFa2vxP8A
Dc2saJjWNNgstUhnknuI0cCLYrEnersvHQkGl9onofjfLJJv+WYQ7cgSZwRjv7fpX7Y/sj/tLWn7
VPwdl8H6nf33g/x9plktvO9pI0EzooVUu4WI5zxuXscjoRX4irOUlDQcMG+UFQRnqOO4/Ov2O8Ff
F79nT9rr4N6De+OdR0jwTr2mkW9zbXOoxaddwTKqhyjZBeJ+CPY9iKmVrl62Mv8AaV+Kfxd/Y5+H
F/pWtWUfxi8H66slqniLWGZJbFpE2mG4VVYMp6qSQDkjNd3/AMEqbf7L+yJp4cAEategkccBlH4d
Kl/aQ/ab+CfgT9mXXNGm8UaZ8RYJrAaVbaVZ6jBeXVyWXYjPhui/eLn+76186/8ABL79sfwp4H8J
XXws8cahb+H1WWbUtO1e+mSG1kVgnmQu7EBXzlhngjPcUWEj6nsv2+f2f9P1u7t4PG99dXMbtFJH
9ivZURgeQMx4/KvzP/ay+PWjeIf2vdW+I/wn1m60poY7ZI9WtIWtZGnEJSZsMAxyGKncOcd8V+iX
wr8KfsmfA/xbqnijw7408KDVL2BklN1r8FwApbexRCxwSQOlfk/+0/8AEXwj8Tfjv4r8VeCdHk0T
w/fzJJFblgDLIBtkmCqcKHI3Yz3B46VSC9mfsbD8QfE1z+wCPGqancSeLn8EnUU1AMPNN19n3Bs+
u7FfiJ8VPiB4s+I3iNtZ8Zavqmsas0IiS61V2MgjUnaFzjCgsfzNfqV+w1+158OfiD+zpa/DLx9r
Nj4Y1Lw/p0enStqt5Hbw3tqOI3jdiAWVQqsvYgHkGvBP+Cq/xh+FHjm98L+HvBNrp+ra7phN1ceI
NGaJrdIJEkH2feh+Ztyo/fHHc04vUiWjukfn1JNgEsqqTxlcf5NVS6yHoSOoI5p7SKSvGAoxjgUn
mDAOFBHTFWTcRssqsr/NyCppbeDcqFj34qIsw5yM+oqRt5QFj8ynAU/zpDVj6g/4J7fE2/8Ahl+1
N4Nlso0ubXWLhdFvLYsRmO4dEEgxxlG2tz7jjOa/R7/gqt8T7bwP+zj/AMI9PocGrzeK7r7BBNcN
hbJ0Uy+cBg5cbcDp1r85P+CdbeAR+0nos3xA1CLSbGyRr7Try5ultoUvYXjePezEDBAbAPevvf8A
4Kc3Pw7+L/7Ok+oaf8QdAfXPC9yNTsbW21GGZrskGN4QqsSSVbIIHBArK2prY/Inw/oUnivxLo+k
7/IbUbyGz3AfcDyBN2OOma/eHwH4C8DfsX/BTwt4f1rxbd6XpsE62YvN8ipc3chZ/wDVqG2ltp9u
DX4K6DrtxoOs2Op2e1LrT7hLqESDI3o4dP1UV+3X/C1vg1+3J8DfD93rPie18MPDepfGwvdQit7q
zvYVZdrKzfMo3kjHUEHihvULHN/8FcNE0+8/Zdt9SuLWKW+stdtBbXDIC8YcOrhT2DDGfpX45+Gd
b1HwvrVhrOl3Mmn6lp1wt5a3VudrxyIwKsD2INfsP+1X8Qvhn+1X+yX420zTfH2kWeqaBNLcR+Zd
RqZrqy3E7E3ZaOQZ2sueHB9q/J34C2PhnxP8Z/B2keN7h7PwtqF8lrfXSy+UsQc7VYsegBIzk+ua
aegkrM/Yv/gotez3n7B3iS4uFMlzcR6U7kAZ3m7tyTx05r8ORaqQqKWZVYEcf5zX73/tP2Xw++M/
7OHifwGPiLoWm+fZR/Z7walA5SSBlljJAfkExAHHOCcV+BZupPLX5V6jjHU+3600Nbu5+0H/AASC
BP7MmsMxJdvElzkFcAYggHH5V+Y+gxrcftYWkZAmJ8cITHgMD/xMeBj09q/WX/gn1e/DTwB+zD4W
/sXxTYRHWY/7Tvob/UIlliumRI5UwWGAGjxjFflv+03otr8Bv2rddOg6vZa/Bp+qQ67Y3FqVdSXk
+0CJipI3KflPPocc0kw6n7ifFrx94V+GfheDUPFfimPwbpZnW3jvWkWNS5ViI+VI5AJxjtXzx+0J
+1L8AvGP7PXjfQL34gaN4ka40e4FvBOS8k04jZoSoCj5g4UgjnIrT8XXvwg/4KC/A3w8914s/sfT
jei+Nt9rit7yCdEkiaORHORgufqMEda8i/aq8Bfs8fAr9jq88NTpZ67qiWzWGkXmntFJqT3hV3jd
nQ5VQV+YnjAwQc1KuVodp/wTi8AeGfhz+ydYfEWHTt+tahY3Vzfz7ixlWCWbAXPQ4XGcV7b+zv8A
GS4+PfwLuPG81v8AY47+e+EFqxB8mKN3RUJAGSAvXvXy3/wTQ/aL8IeNf2fZPg5ruoR6L4g0q2uo
1W5kWNLmzmdsPG5OCymXaV4x8uARX0L8DLTwH8BNOg+CNv4rtbmV7a51LTjLOgke3d8SAnON4dmI
A7H2NCepLSeh+A+orlgNgznkjkmqDAx5AO0nkhhXofx5+H3/AAqb4teKfBiX1vqiaJfPAt9akMky
cMjfXawBHYg9a86lPmMzMe/OR/KtCHY6TwD4Xn8ceM9F8PW9wsN1qt7BYxPIMhGlkVAfcAsK/ez4
N/DC0/Zb+D/g7wnqvj0adFbv9nBuzEsdxcSOXZIy3IDEkhc8Zr8F/h74rbwT418P+I44vtEujahb
36Rj+IxSrJj8duK/dLxtp3g79ur4OeEPEvhnxKlvY2d+ur2zP8hW4jR18qZTypVjzx24zways7lI
8X/4LGeBNI1D4FeHvFb2cY1/Ttais4b1VHmeRLHLvjJ7qWVTg9MV87/8Evv2ivG+g/FTwz8JILyC
bwbqt5d3T288IZ4m+zvIdj5yATHnHTJNfW37c9vp37Tf7HPiLUfCmvWZuPC942pTxNIMGWzEizwH
0OC209yF7HNflv8Asl/Giw+Bf7QngrxlrMTjSNOu5EvtoLPHFJG8TPgddoctgc8dKsEfpT/wUv8A
2tPHX7PF74b0LwhLp0Vvrmn3TXj31qJiBuSMYyQB98/l3r8aJ4wCVUj5AOQc5r+h7x58Kfhb8fl0
zxt4mt9M8T+HE0dmtZbj5oliciTzgeo+UV+CfxutfBdj8VvFUXw+uLi88FC9b+ypp0ZXMRAOPmAb
AJYAkZIAqkyW7HDsisSwPQ4yKVUJ4B5H6mmFepxn8fUUKCoY57gBV6mlcSZ+gH/BIDxzrGg/H/U/
CttKjaNr2mS3V5FJgkSQAGNkOeDiQg8cj6Vc/wCCv1lE/wC0noDIqRyN4Zt2kkHDNi5uQM/Qcc1p
/wDBHr4b6fr3xG1vxpHrGzUdAt2tH0oxD5o7heJN3XrGR+FdD/wWR+FOrReI/CfxJikR9GntF0C5
[base64-encoded binary attachment data omitted]
xRiG/hOMdhT5mcKpK7Rzg+opGKsvU5PUmmArKREHVhz1GM1a03Sm1SaSKJz54QsqgZL47VSEgjwP
mGOCcda2fBztF4lspEVsq+dueWGOnXn6UmIxXUeaMncAecGvWPgL8Dtf+Onjiw8P6JYLcLMVaaWa
UxoqblBG7nnn0Jrz3xNZJZazeRRgBA+0V+qn/BGS3tLnwl46kkt4zeW9zAqOyglV2seD26/z9aFK
xaTtc8a/bF/YZ8F/s4+ANO17TPEu/Um2mXTbhy0z7jgNGBg4BPJI6Ctf9kf9g7wt+0t8AdV8R3Gq
3thr8dzNbxqwAjR1jDJ8o52EOp654NeLft0+NtS8RftH+N7LW9Qu9SXT76Sys/NIMcEa5KogXAwN
3Oec/So/2Sf2tPE37L3j2O7hL6l4WvNi6ppHmErKmR+9Tg4lVen97GKp83QlR5l5nA6J8MLy/wDi
nZ+E5LWe82ah5FysCksirIFdiAegHevvL4x/8E6Phr8KtZ8GX994kvbPwxql9/ZeofaJVzFJIpaN
w5HCgqRzn7wr7ItPCXwmTf8AHi2soLeO80z7ZLqQgYZgZQSxjAzuIAzxnivy/wD2zP2udY/aB8e2
dlZPcaf4IspFexs43+W4YM4Ez5HDlccdBn1FK8iYpJpPc+vda/4Ja/Bzw3p39pal4m1G3sIufOv7
pEgTOADvOMfice1eYftR/wDBOXwb8OfgfqXjfwXrtxcvYKlwReMs0VxAzAAKykDqwO7nivvfx58N
7L4vfCjSPDmobH066W0kuInHEkahWK9Rivzl/ah+NOs/AOz+If7Ptsx17wtcLCdOmv3YSafC4EjQ
A4wy9uoxn3NLmbFa7sdJ4b/4Jk6F8RP2bPD3ijwxq89v4s1DS4r5YrgBreVmAZl6fKx5AP0riP2O
f2GdF+M9z450nxqdS0TVfD08UHkQsUZGcOCGU4zgpn8a/Q39l7Wbbw7+yR8PNU1KUQWlp4dt5p5C
OI0EYJ9+B+P416houhaLFqt14i0yCFZ9Xhiaa5gAxcKoOxiR1OD19KE2XLRs/n/+PfwtHwc+Jvif
wjNcfbn0u9eDzYgQrryU4PP3cceua+v7P/gnt4P+L37O9h47+FXiC41XXJLdZZLC6dSGYL88JCgF
JB6H2rxD9vWKKP8Aal8feTFOkqXu6QEjDbkU5AxyPmB5zjiva/8AgkJ4n1i0+Lvijwyt5ctoU2jt
evaNnyVuFljUOMDAbazDrn8hWjuTH3o3R8IeJvCkvhfVru0vIClxZyPG8b5DowyCrAgcg8Z9qwDC
zTxsqoiyAkk+o5/HtX29/wAFTPDun6T+0bey2Nols93pdreXBiG1XkYyqztjHJCL9a+HZEDPFKwT
hsKnQg44J7URM1K6uOj2FxvIjDjqx+UEdv8APpUXl/aJDufO3n5FIzjv+n608KQhDKByWVsgk/4U
5Y2gtWlyHDfLuOMjiqC5GG86UqEO0jgrxj86ni3xhTIdzbdoLfpkd6hikzKRHuXHQ5znj2p0zZnR
T8zMOFGOnc5oGh1zbieJUidVCjaWXPzc8nviopJmlDRK2Jcnd8pGT60qOfOYACPzB0c9OOeR9Kjk
b7PPuZN79Mjkjjv9c0xapkgCiIndnBx83PzUj/PICyMcDILdAcUsknmRuAoDMAeuMn1/T9aZMree
Aqs7Ko5J5PHrTjuQ9R04LO8m0IfVD8o/+v0qupG1t+6VG/u9SO9SndFGRLwC2NoOQfwoaH9y2wgH
H3Fq+VEp2YrhUMgjJ2o2BKyc+wxSOqRlnIYBxh1x09KWGY+SCSu7cCBnpx/9epFR5xM7SMUY7dhH
XA60rI0vciVwSXCnag4B7jpTxIBwwKHaeCMn86i+QzNgMoA4Oeo+lMYZuQGLkg5HcAZ5/SpYiZHY
wko5QDtjLYps00gICs20joMZ+ppoIWQb2ZsjBx6etI8UeJHMhJAyV/w/z3q0rjGzStgldzKTjcR2
pX3r3CjacACliyEKkEqOACPu4ptwmEVzLj5uc55GOlS0Ta45FRkPODnksM4NMWSVXKniNSenrTHI
d9oHGPXgmnxq4BRvXPuKViib5QCpDKrdSDn61CDHEQpk++CAwYZ/H9KQRmJdytkt0HP+e1KyYkO5
QOP4u3/6qEitR0pcbirZAUHjtSIzJGjHAGcbV6n3NM3EuyjEgIwSeD+FMCFX2qSW7qeTzVWJHK2+
RvnGM8EdD7f/AFqlnYShVjQGY5JUegBpADEgIIAz2xxUb+SWfaWBYcE+v+RSYDtuUR89RwDyMetS
Fx5QC4ZsZBHT61VPAHlkgL0yaN4Zfv8AJ4x14qUCZOLksrcswxj3FAGVYlmBUcIBmov9YiYcsufX
p9aeqiCdVWXGR1z+VBaJHmQMnm8+Vkex4/8Ar1GvmtluSp6ID0FDzLIhUpkZySf8aBdGNSFUFegD
LnHrRYLgFCkfKFI4IY0MsbJlWxzkKaWfLgYJB6DJ7URFF+RsFhx14IoGRhUBweR0BPahGAVwB87c
g0TRsrncOnIyf5U4usadOnI3dKvch6CcDkfM5xuU+lPZC5Zhx2UA1Gj8BiGxjP0+lMlaRWUq3Aye
G5pE7kiZZ3Xf1PQjPamuyg4w2Qc0srRCFXRmZ8HPFJJKdiqqnb1Pr0oCwwkAkdAeg9DTt8nllQPl
wCc8cUx4gjjby565p5k2qFOM+meKGVoPkUAr79AOMjFROF27j97JBA4odi7Hhh3G48U1F37sN8uM
8njipsGjHqQgDjqOSSc0xxvG4tubPYYzQ7KUHlr97ovpRg7d+QOOhpiHSHJGW7E8D8MUhZFHJbJ/
u+lRszEJg9KejgPwOD3YcUFIRcEr827PduMU/cqtliWGTk7uaWVVyA2F54APUVE8TB8hAM8nPamD
QyQnz+WGOefWnM5YhgMKfQYpOW/gCq3HXPPrS/L5YHOQe57UCGlymevzccVINnoQQMHd3pFAI4cd
cZzT2QpgY3ew5qRpA5MgUAdD+lM/hP8AEVP407LyDuMdsVFImzsOTjGaAY55yoOOMjnmhsnGWJVv
emiMqShXkeg5pJCQDg/Nmgi7HuCCOSFHao95HykEZPWlEpxsU9etGGY85yDk5oZRI2d2WwcjGfwp
owz88AcZ7mgHHyplsnqR0NOJ2Buct0PFIq4krlcEjGOAaTgqBu5+nSmyBic7sqfWlVg2FA6HrTAa
MK39/wBM8VMWCN8pDEd+tRMx+YBQfQnrSl325wMngHOKA0HHJkGDtXplulRqNo6nnkjOKDuIDEHb
1HekCkkyHIb0xVJEseXLY/hx2pJGAibcuG9ab5gwQU3Nmmhzyo54PHtU3C5IjL/tE+vtSM+CQAWI
pcHaRgDGaVDGSA27kZOPWkFg3PKQDxxkc09p8qqsTmomHmOWQHgYo2sq88g0yrj5yVY4OD1yKbv/
AHZIOGPVac52k854xzTCMITjpkfeGaAY45HO4YHXnmkJeVgdvJ54pAxwDt+6c04SmMMQP90+1HUS
EwUJzwSO9DGRSfmOB3p4AmY89RknPSl3IGODu/Ci5VhgQFDuJJI+lODKUxjHGOB3qM5BADE9ulDy
kLgHJ6E9sUCY8OQeeRnpmkf5ZCvzLnkVH5g77vb3qXeGycE0DG5IKg5Pq1L5hwVKllHQ012O77gP
sKVNwOAep+7RuA4ncv3cHFNaQsfmBHbinPKd+0fdB9KjyxJOOT0+lAtx43HjaOPWldyEUZ6nOBSS
Mf4XzkYPWkKNGQ5Ax6jmgLIeMLGN2SepwOlLtaTBDKu3gA/So5JnIwOP60Ku8cttOOoFCE3cayFm
37/XrTxhAAxO7qM9KRkAGS25ulOZxnk8Hjj6UhJDSQBlDhc5Oac02+IAD5upOMEUw5JJBUEfnRIz
ZBYZ7E0ytQA3MGGfQjpSuwdnKlR6A0rucEdFzximM4C9Px70DGtGQ43dBTHbLgEDGOM8cU9ySdgB
J680rguQG24xndSAa7bQPTtTgwYjJPHXFJKFZgi8E8c0OQpVccjqRSSADndxxgcH1o4bkHI703dv
wDj2xUkYI3nAI/lTACCXLZXHQE9BURJABfp0PFSGMAAcZ5P0qN+V6ljmlYQAE5yTnt6UpJz98D8K
aoHOTx1xSliSCRkdMelIYhbLFjwTz6UrYUAlqQkHsOD+dITnORyOgxnFBJKoKgEEihtxUsWyOmSK
Gd3XOQB6Z5pnmgkgHAqihByeVBPY08MpLl8njAApskpKADkg80oiEkYJcBvTpSAQptXO08Y/GngK
DhsgdBxTORkFsYP5UoQseOc9GPcUrANk+VsA8U4sNjKF5657mlU5fGOW6jA4oXBbB6dvpQA3rtON
x7804ZD4yMk9+3FG4QtxyT6UjNhWVgWJ5BzwKQh3JBDHaR0pq/N8nO71/pTfMO/JGV/lQTvzgYPb
FMLIc0TuwAX5x6DpimlWjOM496eGLqQCQ3r1+tMzlhhj0wTQMk+6AxbOfegmSRcKNoH054/+tSvK
cuqE4Pc1FnBODnpkUCuSgZVmkxgHt0pNgVxiU7Sc7R1oIwh7CmtHjvuHT3NFhXJQxj3HnDdc8/nT
H5fHX0NOKjLc7BnvTX3K6rjLYyO+BQNO4RygAKxL85wabIAMMq5HTIqZI1AdgFbkEc8j6Uzhlz1A
6g0DASFlAHAA7CnuXJA6J1JIqDDLltvTrg/0qYSKd43AZOef8+9AIlY5VCSMZweeTTCgSIkqAeuc
dKiJEajfHuB5JBoNwY87B8h4IagBNrRkAnJPtyacygFgThm455AoS4Iydp3HnntTWGW3EZyecdKA
HvGgI+YlCf4e1MRlyeCT6ntSBSRuQkAnaQKQqI2AY4I7UrCRM8iSOoLdMDdRcQlnCxtk9SD2psyk
525CkZx0zSMnmZbGNvB9zTGOSPOVV8EZyR3qSZSEzuyffGKRMCTADZxwSKSQPIHwoKg8cUrAI5LM
uF5IIyRwKRpGYjJBCnIwOvvTdsitgYJPPbih8YYBST2AFLlETiUupBBHemtGdwDAcDIx6VFv6v3A
9eQaElAkySWbpjuc+lMaHFjuBy20fhSyhWVQgYMejA/59KYFKjuQOnFJuYozd+3pTFcc6Eg5Ytjp
T4yWYbiXJ4z7YqOWU88l1IwCeoqV5iJEEX3iANoHWkNag8oVVwMjqMmnMpctvG7K4AQ4x/nNEoAf
a2E2nBSmvzISFyB6cfQ0mOwpzGvIxt9TzxRIRKGO8HCjgdac6uzEN8zMeh4pspCQ/KApHAJNSkMN
4JX5di/73c1JJO0kOMfJ0B6VDIn8IUhwB34HvT3cLGy7gVYcqR/n/JqtEJkuI43y2FUjb1J5pqxk
p8rgKAeT1x/kUOMQ84yBk4HA6cU6ZyXXk7MAAHAqb3HcVypmU5JRRgn8On1oukaJVIVircn5s4pu
B94sd55IK/hUYTDjLkjO3jvVWC5OxTcMZJC8Acc1A4KMeAp+7uxzVi4DyIgA2bzwAeg+lERYocMN
irgtnGR+NSIhCyI27e7bvu7ed3v+lTI7GEArznHWoA3kzmRE46LtP+f8mnAmUMNhBHA54ptEDjES
3Dllzkc9KV2kkTLlQmOMLnmo4ywd3AwqcDkdasskk8km9lRE6YIwaSGCz/OPkUY+faowOvH40ydt
rEODg/wL0J9j9ajkneMBcqV65Ax+FW8qkKys2Sp4CntikwGTThJVTgrJncPT8aVt0khC9V6r1H4U
sJEsx+UB5eArY+U4+9+dTRtuZ/MYb0z8wHehIaIpUk8vcq5PViP89akDswC7wyDndjOOR1p06uLj
BCANwGyDxjOcioItz+bHl1kHRl/p61dh37EOqTIGKfdcEZIbjA/Cs1pdytzgH071Zv4WVizFSy9Q
o4qrsIyMFe/pSe4ahtYp8uBjqR/n3rc08obMkhFK/wASoASMev1rFGfKUAE7hkkcE81vWRV4Ecx7
zyG6fh364ppC0HktMS7IQ2MFeuMn19MUkkAlBWTMbYAQKcD8BUjquT1Yj5VGOD7VVQSvdDaqJGBw
McD3/M1oiFuMa0gQgQs+CoEgbofoPwpZWNkAqbI2IHzEnpzxx+FXXnikYRs8e3HLqufwqeMMs+IQ
Zdq43IMZBPX9BS2BtHHqWRuVOWHy85xQrbGY9ySORxT5CuM4+YD5SppjGSQgZLDrtHWszYJyQACW
ODkHtipXt0WIFWPIHGO9QSBmO4jBztzT0k2oPmbOentQA47WzhsYB49TWj4Zizq1vK0oUQtv2jgn
HbNZ52spAB4P4H1p1peG0uWeNQBjABGfx5osBqeKrqKfX7uSJ8pJIeABjHrmv0p/4I2/ELQ9J1Dx
v4UutSit9S1Fre4tbabCecVVlYIT944xwOw6V+XjsTMzg7j1yT+ddH4W8S6h4Z1yy1rSdQm03VrK
QS215AcSQuMYYH17fjUmkZJXT6n1J+3t8ONZ8EftCeMLzUtImtLTVb+W8sbpYyI5o3IOQehPLAj2
967n9gH9j2b42axD4w14D/hHrCc/uwDtllVlJRwexHucZ6Vxvx2/4KAaj+0L8JdK8J+IvB9hNrVh
5bf8JEZcySSLw7Km35S2ASM46irH7I/7fWufsy+GNf0FtEi8RWN9MLq3jkuGRreUJtOMAjBCr6VU
m2tCabajyyP0m1P9rj4Q+F/iXp/wRud7wahH/Z6zxhHsIt6sBAzbsjP3emMn2r4L/bV/Ydvvgh4x
tNU8I6feX/hDWLryoFtsyNZOTkRNnJxzkEdlPTv8i+JPGFz4o8ZXuvMxtJby9fUR5R3CF2kZ8c+h
/wAa+yPGH/BUjxB4w8JeFdMuPClnFqGjXlteXN7Jdb47sxKQcLjIDZPPOD2qE5rQORSeh98/tLar
4k0X9lCLUvDX2iHxBaW+nTwiCMyOrBo9w2jrxnP418ZfF/8AZT8Y/En4Ia/8b/H2r2tv43aOK8mg
gBS2ktwI0UYOSrYABx6H2rQuv+Cw18ulLH/wqvT5ET5DC+qsoXHQgeUeMfrXI/GT/gqRdfFT4SeI
PBUXgOHRV1W2Fo13DqBl8lScthSgzwKpJtiVN3uj7Q+GYRf+CdVisKEKng+dUXlicRyD0zzjr715
n/wSt+J/iHxF4d8Z+C9Z1OfVbTw7LC1k1z8zQLJv3RhjyUyoKjHAz64r5G8B/wDBRbXfCf7OF78J
p/C8Wrh7G40+DVmuWUpFKWzlNvO0Occ9AK4j9kj9sXUv2WvFWsajYaNBrOl6vAsNxYS3LR5ZCxSU
MEYjG4jGOh7VaQrPmd+puf8ABQZI7X9qbx1MoDMbtdwbqo8iI5H5mvof/glB8DvE2keMtS+I95aC
10K5sJNNhEpIldiyOHx0KkDg8ivh79oD4y3Hxy+Kmu+Mp9Pi0yXVJhItpBJv8pQAF+chcn5R2HWv
onwf/wAFPPFvgn9n6y+H9j4fhh160sRZQ+JYroBkwSquYijZbGO9U0KF1HQtf8FVfE2maj+0FJFY
3cV2YdJto5nt5NwSRXmDRtg8MvBIP96viCMM5ZixkJ+6GGO2Ov0qbV9Vu9b1S81DULh7u8up5J5b
iVi7F3YljnvkmqIYGEyyyAKxGAASPTOPxppGXw6ErspkDIp+U5wMflxRKY8bXVjtO4nOOMZFUozK
FJPyeo6E8VMgDMrSEENwvccetVYSvcWIRiQtt4YYOTyPf0pr+ZvjWLGRk565OORT5EVXKyAgkr7H
8Dn/AD60MqxzkEeZAg42jhu55/GpE73BwHhGIwrt0KnOOeR6YNOuYBG7vHGQsfI5/TioY0VgZY1x
ngYIyaknlIV40l387voPT+dUtyxMorbi2ZVG9c8E/T9KSbMaE7izMCQAucen5UGddkYYq0mM5omk
y4aUMCO4PB59Kq2pBFEHJjJXKLnJ6knjtThcNHKzyoSnQcUTyOjkkZC/dP6UoiaSMsXBOeT1/wA9
KpXI0E2b3Zo2VYAOB0/DGO2KViiuqxyOQBkkjBB9KQyb5idnAyeBzSxsUDMVOOevT0/xpPQpIjVF
QsEjPXHrjvTioJcLJllxhcYz/nmo38uB1zk7l4bPGetSruhw2MHZlST1qbF6EbsTI0ciEF+Rjjik
VVdW3rtEa/e/vGiVVDiNmKjuR1+tPkjVCPLOQTxubJ6elaJkiNc42K2d2MYQcdfT1ppRZM+WwBBI
JI4JpfMDOxOW55zw386Y5icJ+8zuPCnrSaAaB0VirLjp2JpWXDKPvFRgflSvDgDCgMPx/CmtAwbc
VKtHwQam9mUkOaYSDa2RtPA9KjaVAGBJJxnnqD6+9LOQhdgPl5465/CneWqBCcktwEx+ooDYiD4U
7UXggj/CnDy3LsY3AY84OCPapJAgDcAAcDAwT/SmM8ZbaoYBedgB60rMVxh3eY+0gA9yO31piBgx
AIyOTkY/z1qVXJQSllRGyQo5GKcIi3mMjfITz3NPUEEkjKrAgDJ+UgdfammYLOQy9Plyo5pAjI+c
bgo6E9fwpMbsPn5fQr/n1pWK2HvMjRlVQJk9xzThAkbKQWbbk5Iz09KiiHmSgZUnHAAzUyeZE54L
EjB9hU2sUQbSzZYgr19PpSA4jxnHOQc4qa4UopBflv8AZHOKPKdkIBUgDjFUjNkDM7lmIL5B5znF
SbyP9YvIwTgYNKAqSAPuHX7p60EMHZWDB+vzd8UbiSY0yk7cozfw+wppYmPJUlM4yO3FShdwKklg
RnsM+tI7SByVGR93AHTinZA9SLe275D1bn2pxiZv9YMKDy2M/Skz5ZyTlzxjvn6UEFSQM9c8Him4
oSFkZd6D5tuOSPTFKYmkJCyYJ5I6YoYuCc+nII6ChwjyNgEKenbNTsUMXOCOrDufpTWUuwxwx+6M
YzUyqmEc5APGPpSS7ZPlKndn5eaLlWIdhLYIIYHkHr+VPZTsYErg9gPypREN390+oPWky2GDAN7d
xQTfUJFwSQQUx0AxTZWOSVAAyflA6U5A3VOS3G3HX0FEi93Xy2I5GO9CHa4rEA4bjA3dOvtUZICA
42A8cU4JhWGSec7aSQMzN03DkDPFVZMljHYxkfKWAHT0pPOYLu5JPXPenZLoWPUe/tUkzB+dgDA8
fWpSGiI4IXIPqKezfOFWPCtyFNPjRoombAZh09TTTuI25GeeD9OM0WC6IzFslbKAEHp6UpdpOmVb
oKcdznkbyT1B60zy1LeYGCkZ49KLAmKwIcfMwPfFIytIGfjgZPH5UGVgmAMHu2MUx3+Y4bIPc0rW
KvcM4YneSWp5G0lgSzEdDUZG0gZQHqT/AEp7OACNw2k9B0piFckZCqobvikyz7mUFcjPXrSklQHC
nOOppjOZW6AHFKw0HmFFJZAf6U0zHJI5GcUS7hLh8YNLFw+ce/FKxPUUu20hjnHYdac8zSAHjIHp
Tc7lbpnkipHQMox83PNBYwsqEMxGaawJYtj5ewp8pTP93jHXOeKaGy+5vTCkdqLjHHLDAB2f0pHX
cMDgHFKzOqcZyM8UwuGK4OCBk/WgTHFCmcY680sgBBAzkdMVE0hYggncf1pWBznkNnqO9BKuSxuM
MxPtx602QbhuDd6UAKSGbgdKaVLkgEjnA9DQXcX7QUTJALD06EU0bmxzkDp7UOpDFWHPSmsdgKYI
7kmgGSMjv1bGAehxTGGXYmlbA6ZOPSml1VMDPNMnQmTbtIBwKHiCgAfMW6HNQmQFQoBPPSn+YJGy
45HGRSsO4hkCtgr7Zx0oZGDE4bJ4HFOzu75AGKYzEHAOQT9KAuAjLYJOCOKDuUYGcU8qQB+83ZGR
TVUcMDyvrQFhNqFkznHIx3qRhgFQW9qgf5Sx4BPrTmZsgN6dc0WAmkjOPm4phIDnaPfPtTXkYgg7
iemM04tycAjjg+1AxxO51PA570jk5HH4jimZYSHJ4znLcYp4YsMspyehpC1A5BJQZXHJIxQzOW5G
MUyZiT95eO4/GpH5cKvA6nnPNAkMfcdoYkOOnv70ioeWJKnHPvSu5BwzdD94jpUbk4BbLHOc57Uw
aJMkAA/d9aeSN4zjPYY5qLcGAJO09MU55Rx97NAIdN1+Xk/rTTKGQKckdx0pDgMN2WY8k0hlCnIX
HBGMUFXHnYo+Yk+hFIxQjJOPbFRnORk7fT1p0suTg8j1xQK4bC6M36mmlQeQCfegtyDjtSBmRsEd
aQXFbntg9MD+dNdCFJyRnnGOaUszsOhJ4+WhhjsSR60hibQuAob605lbJwQOe1OEvzcEk45FIArE
lixGOMU7AKjKgxuOfSmMN0jFvlbt70uDjGTnFI6bcFeevUUMBxdMnjrx0phXbzk7etBXB+Y5GQeK
JDvJKgACpFew0ybzwO/pQQQ7EnBx3o5BBHX0IqSVo2AxlR7jvRYYvls+5iR6Y/CkcY3fLgDkH2oG
CxzwCOKcziUgbsLjGDRsBEZVweMc5z60BQ57ilGcAYHsacZFXAb59/Oe9AChsAhvm2+nelEnmkEk
Lt4wKjBCsQBnPUDrTty7flAGPWgBdm5CeDj+I03yyo5I/Cm5JGQ2Qe1OEYYD5vmFFhOwcAZz7UFi
cg4PGKI2+8cFvYCnEbicr+VBLGkjDcZzQI2ds7TgHjFKflIyAfbNSK6YAC4OMGixSuJjblfuqO1J
5YRd3LcevApNpO7naBSh9gKkdRjgUxkRIHUYx1pfldj7DtTtyqjFhknjIFNVmX5gMjOM0gH+YqM2
ATng5NKZNyruABU596hYec3yk/U1KV5bIwcdaLiFaYsW2gAHrmkX5VOWw39KQxMqr82QBnpTz8+5
cc9SxoFr0AEJuGM56EcYp0ihuQdpAJIY9fpTNvcsBg88U8rvQ5O89cDjAoGrjA5XBwpA4x/Wmsow
2UyCc8VIWVANhxnrntSO7FWLAkjgCmFxNoyF2EMfYYxRIUMmCcAHtShj0BKt39qYY1wAwJYZxg0h
kjFFckLzxndTeBuDcOelDNliASCMYoZiVw2Gz0YGgBqsI89+uQaUzKCCBuxztpksj78bssVwSetO
kj27gTtIHAoAcCD/ABdcdB19qe2Ecjd07AdaR4AMFzlgBgDp+dIybeRk56c0WAdJIm1lIbJHAB7+
9NL87SWjb+KnozbmDDe2c7gcUrSKZSWAbHPJoAhDl24Ocjk7QcUpbKj3+8RT8/OWjXAI6daiIw5I
yV5JFIBwhGzK5Ppxinwt5fQAtjg+lDO6fKCCpGRu6n2pNxICkgAdMfyoFdCEkq4AIYHAwODSMFVc
YJHfIx+VIWCuRzjinSyYlOMFT6dcelAJ6iFBkADGRxj19aduXIKnDAjn1pkhYLtzyDx7+5p7DzH3
bducc0xjlQyB3J5zktj9KIwLlgCxUDgrjNKV3RABwvOCB6etNGI3kZ0Z0PAzwOlKwx8cwjk3IdoV
iRxQI0mc5I3Ocgj196iKn5jxzwVXjFEbbAwBzxxxjmiwrltYV80EuT8p3MB6VEI1lUNgDB5B5/Sk
j3cFvlYdOc+vFKiEynB28ZyT1qGFwmcCTC884xinGVwM4PQ8n1xRIQRnaPlOeOtKNrn98zbSOVQd
PT+lCsTqJFIrSB3+dwOEPcc0oljVXJVvNAPDdD0pYI0wsgRxuBHODggU51V5TEUCk/xbutaFjWlA
kQ7AQAPlxwPQU2SUBixUsvYgfl9P/r09YWhkSTOed2Af1P6U50VYmJJJY8HHqeetQKzGMT57ZjUF
cfKnA/z/AIUpnDSJlMvyTg06P96xzkAN/ewc4OKJ1UylEz5mAMd/fBp3Af5cTq298kADGPb9ajjV
V8w7i6ZxuOaZsw+GLoo4GOTzUjF41dVLCMH5lY9c9OlFxXJDsICg4YKTuA9RwKdjc6xlBk4HHUGm
JCJIi6gqqj7q8ntxzViMGSIk5CLkZ3ZP1/lTFcLgbJmMZGE+UgcHP1qN5/s4BKuzsMDd0BPvTo5p
IYzH5jFn6AKPmA9O9TOzvalceYpIZVUgbTyO/fnH40bBcRrYYJ8xi4AOcnPI5/Ac8UySM+Qi+Y2G
OduTgevHrjHNPjtGC/fbk7gDnBPbA9+KcQbh5cZCj5Tx8uQeQT+FK5SM3VEUEoiDpnOapiUyAsww
wI5AyatX8jLnd86nhTjhfaqQLbVJYc8ZGOKe49gcMUAXIGTgMen0rctFxaDfJsYpwqDrxz+PSskp
5j5LM64xg8Dj0rbtYpDAjLuAI+VAfm/AU0gJJvKZY2DFShOFAw2cen1qnLcTx3BYZHmgKc/KAMf5
NXGmL3OZD8jjJL9RjofrU28zEhvnUjnAAJJ781pYwb1Kqxu0e6CIAhfmPX8SafBNJFI0cZkjIAO6
PJz+X5055JUiciMpGv7vK8E/40tvbWs8QDTvEy9uSanqJS1OWLhUQMoKjkUMSWJQCNsZ2g0qqdhI
TfwVJbgCnDACLkFwc8dh6CodjpGTJkMTh887s5OfelIVmAVc5Izk5NPlkVZCQWwScZ5/CkeNywyu
ADkMB/WpYx4CFJFAIC+/U1VKZ4x+Oastas0igDZhfmJPWr3h5Ik1XymbMcyNGzY5XII4/MVNx2Mk
RlcOUJUdeO1W4lV2IRWAOUBLYyTnHP5UuoW72NzLblGXnoe49f5V9bf8E/v2U7H9ovxrcXeqTxNo
2iyJLd2LD55N3Q5yMAfqalspRurnzRc+GdXsYluLuxubWApuDzRFFfjopOOe9X9N8N6zqli19bad
eTWMHD3MUDtGDtzhmAPbFfot/wAFGviv8P8AQ/C8Hwr0PQ7W+1yxAiuL5Ywp04oI8LnALM6EZweB
jNL/AMEpviP4Mj0/xD8NPEcMA1XVLvztO+2qrrdJ5eHjXjhhszyee3NUnYyUG1zXPzZmsZoFME0Z
Vy5ztxnPXp+FdJJ4E8Qwpa282j3cJndlgUwkF2A5AHOTjnivtTxb/wAE8vF0/wC0vcaXBbLY+G7i
6lvbbUkcMkcbOzJGBwVzjHRgMV9Pft0/EzwZ8Gfh94QLWlk/jfTL21u9PsxGAzrGwE2Tt+6V3e/e
tE0PTTzPycm+FHjCS1jb/hG9WQMWzILRyMgE4PHBxVTUPhz4m0yxN1qmk3enwYG6Se3dVz25Ix1/
Wv3r8W+O7bSvgXbeO7Twxb397cWFveQ6WiqN8kwTCb8DuwG4/jXjPxE+KvhD9oP9k/4iwapocWj+
J9HsJRf6DdoHuLG4XPlyLwNwO3cGHvQmlsDfY/HVvAviAaOupppd2NO6fawm2Lk9c/X+dSeHfBWr
eJZ/J0nT7vUXKcx2sRkK/UAetfth+x74L0Px3+xx4P0vXtHtL6yvbOWOSCeEEGPzX2jkdQu38qxf
2Xv2Rz+zp8bPHEkUY1HwzqlnG2nXk4BdMOS0bDoCNx5AGQBRzBez2PxRurA2tzskjeB04ClTuPXg
g9CP6VebwRry6ZHqkml3ltYMwRbi5t2WOTIyMMRjGOc19N/8FEPDGneHP2l/FMGk6fFZWrvBI0Vu
oRctCrMRxgEszE19Q/sG/GLwf+0d8KJfgN478PW51Kw05vs7kq4u7UEgMpABjljDJ/MYxim2Uryj
zJH5T3tncxtIhdiVJwD1ORyajd2AWMovmxg4JA2k9fzr6H/bG/ZvuP2avihc6D9qa+025ia7sLhy
N7wMzbQQOm0/ITjtmvnuaIpbt8qqx59s1SRg5JsbHZs4kVyXVgcAHue4IrodA8A+IdZsprzSdC1G
+hRgoaC1dwDx8pOMA8isNVYpGIy6ydCT0A//AFV+iH/BIv4pvb/EPxD8PZbCO4stQs21SO5lA3Ry
xttYdMtuDKO2NgpSdjSKPiPUfgr41Ql5/DGrwiFj5khspMIAcZ6fUiuMns3jiZOcA/IH7seD9K/o
R8RfF2x0n9oLw98MZvD4n/trTZb0akWXZHtD/u9m05zs9R171+a//BTn4CeHPhb8UNL1Lw5FHYW/
iC3lvZtOiX5I5ldVJjUcgMWzgcZPQUKRm1K+x8JPYDymXYFTGCMY57H3HSo44HkhLklRkjd0Br6L
0v8AYb+M2paVa3tl4KuZradPNAXKtjGVI3Y9uP0NcV8S/gd4w+Es8Vt4m0KfRpHQSILlQFk5PK49
wM/WqUl0HZJ6nL6V8KfFWqafHfW+gajcWUwYxyJaSFcDq27HA461z11Zny1VsFt2fmzlT3Hp+dfs
f/wS0+Jj/E79ne58O6vpNus3hi5/s8z4DC5hYFkDAjgqvy98gD3r4T/bY/Z61v4bfFLX9VOhyaZ4
ZvNUuW0+Y8QupYkBD24HAPrTjO71FK8Xax8oSAxpzh933t38wfxpFR9zbVHmEZCgZGD/ACrodC8J
6l4t8QWOjaPave6pfTiG3tYs73c9AM/zNe0v+wd8cWL7PAd/E7YbczIBtwT1DEZ7fU4rTnijJp3P
nFIXEClRmUnB57/5/lUs/looSYZkwCSDXWeOvh9rvw88QXeja3Ytpeo2DiKWCRQPmHfOeRnHPfNe
ieBP2Rvin8SPDVt4j0LwfeXumahHvidSEaT5sFsNjjgn8vWsnJNmsTwoosqxSEljggDH9KeCW6gB
FJ254/M17B8Sf2XviR8K9Hj1PxN4ZutOs5XKLNMhxkDOPbjn8K6H9mv9mTxL8YfGGh3UXhm51Xwj
Fq8UOo3YC+UIg6mUHJzjbntRzI0ilueH3nhzUbG1tbi5tZrWO6iE0LtHgSIeNyk9VOOtZckAhj3h
CzDqF6Cv2b/b7/Zdh8QfBHwxYfD/AMJxTah4fl+zwfZYwZYrQQvlQT1GVQ4OeQK/HDUbKaxvri3n
KrNFI6SRgYZWU8g/Q8e1OLuZSaMzaXGdpwGPOM44pu0M4+QlenHQGp5JtquAoJ3etNUkKA3IByFx
1NaXM0tSSOKSWTZHC0jlwFVFyTngY9TXQeIvAuv+EkhbW9KvrB7nPli5t3jyAAcgsBkfMK1/g/4J
8QfED4h6VpvhjSm1nV4LlLxbEFRvWJw5wenOMfjX6ef8FRvEvhm9/Zz8PQ3ukvZeIn1CB7W3ntCs
kAEbNKm/bgAYwcZzj3rNtX1Nm+VaH5DyR+Y5DH5geBjgn3ppAjfewJ2Nk5HAr0b4UfArxn8cNZm0
rwbpL61c2oE06BggCZxncxA/Cu88NfsQfF3xvNqqaT4Tmd9LuGtbhJXWN1lXB24YjsR/9eqcoonX
dnz7LGDJIkRbbnkk8c1GqGLcxIVvugZ5rtPiL8Ntf+GfiG/8Pa7p0+malav5U0NyhQk4yCM9QRyC
M5rlmUMql0IC9cc9qtNdCblVggDgDEg45PGatafp11qd9b2dnC9zdXDiOOCNeZWPAUepJqMQ+YHw
MKSeAMfjX2n/AMEsPhxo/jz9op5dd0uLULTTdImu7dZlDJHcLLEFP+9hifwrOU7G0Yq9z5x8Wfs/
/EPwVpcmp6/4R1XSNNRlRrueBljUtwAWxtBPsa88uLZYVKsRIobaqLycc/pX9A/j341eBLn4wzfB
LxraW6Jr2lpNZ/a0zDebi4eI8YUjZkE4598V+HHx18H6V4A+Nnjfw/o0v2rS9K1e4tYZ2k3MyBzj
JHGcYHHGQamMhXTMT4f/AAt8V/E/Vriz8IaBea5d28Xmyw2ce5kTIGcfX86wda0W98P61eadqNtN
Z3dtK0U9tKu142U4IKnoQa/R3/gkx4J8UeGvEeseOk0Y3vhTWbSTTZJ4GzNDNE4ZTsPUHkf8CBr5
0/4KJ61p3iD9rLxrNpely2KJ9njuFuLYwStKsSh2wcZ3cHPcYNNT11E3rY+YWQE5TkjqpNAQOwIB
BHBA57V73N+xh8Ro/gbbfFU6ctx4WmtxeA28gkkWE5/eFQM4AAz2GSSeK83+Evww1j4x/ELR/CGg
iP8AtTU3aKAysQuQpb5sc9AfxqromxxoVGZgAV689/emNH5kmCGOf4u5r6E8ZfsWfE/wR8WdI+Hl
3o6ya3q4iktLiEl7dkZtrkuAcBTnP1FewRf8ElvjCZVSS50GDOMlbpmGcc4+UEfU0rxEj4aZd7tt
XeQTjcCMUSAxId4Cs3OO544NekfGr4D+LfgN4yvfDvivTzaXsI3RzRZMM6Ho6MfvA4OcdxXm0iYm
kGC6nkEnGDVadBEUjYIwgzgYx64zmpTGcAuu0EZHPXihU2tlsZI6scdq9y/Zx/ZL8Z/tM61Npvh2
2FlaQ2zzHVL1WFurjaAm7Byfmzj0FS2aRV9TwyWABumce+OaChxjLc/w47V9/v8A8EefihGLpx4m
0Isg3xRxu5LnH3cFRjnPOa+NtT+E3i/SfiBd+BZ9Eux4sgma1bS0j3zPIBnhRnI5ByOMc5pXQtLn
EruTAzwPrSKGIJ53MTjHb3r9CtI/4I/ePrrTrO8k8V6RYyyxI720iOzJkAsD2z2xxj1rwT9qb9iP
xn+y29jd6w8esaFefLFrFmh8mOQZzFIDyrEDIzweRVKUROyauz5uaPDDKs2O57VKQVclQeB1xzX1
9+zV/wAE5fG/7Rngd/FJ1BPCli8iGzbULZj9rjKht64I+X0OOa9E+IX/AASE8d+FPBus6zoviSx8
S6laxieHSYYGjkmA++qszEE4GQOMkUk4i1TPz+cbl2LnAHzDvUflO0asW47rjpXsXwA/Zx8T/tCf
EeDwvokFzDElwsWp3phyNPQ7uZFJBzlCMdjX2jH/AMEYtWExz8RIF3AZJsSQPX+Pn9OtJyitEUmf
mZMjEAgZ9WHXGOlL5WGmV1xtHygd69v+P37MHiD9nf4pw+EfFMnl2lxIjW+sQRZhuLdnVTMB2K85
U8jA9RX1F4h/4JIatovirwvFb+K/tvhnV8w3WpLbjzbOZuYvlBwVbON3bHuKFJIbV9T87TH5YA2/
I33vY4qRx5iAgADHSvt/9qz/AIJjeJfgX4BXxf4f1VfFOm2TFtTjSLypLaPqJQMncB0OOR1rwX9m
v9m/Xv2k/iDY+HNEjEdkjRyalfLgi0t2bDPj19B+NS5oEkezeDf+CXfxH8d/CjTvG2i67ot5a6hp
w1C0s0ZjLKCNwQN93JGR9eOK+ONU0250q8mt7uCSC5hkaCaKRSrI6khlI9QQRzX9EX7MHwPvv2ef
hXa+CLrxJN4mtrCZzZTzRCMwQtgiIDJOAd2Oe9fijcfBvW/jt+0/4o8K+G4TJeX2vX8i/OB5ca3L
eY/PHAJOO9OM+5PLqfPuwlmUZAHcjrQ8SrlePx9K/WyX/gjV4afax8famknygE2sTAD+IYwPw5/E
18d/tsfsR6x+ylr9jd2t1LrXg3UyEtdSdPmimAO6GUfwkgZUjqM+laJqXUTajufK0iFACV+U9D6V
E2SRleAO1WpVCoxH04HWoQGYgdvYUNBF3GCMEAADcOc0nlgOODvHY9jTiEid1IO3HXPNPODIdvPP
AFJFDT5m3+8Bx/8AWp4BzyAMjHFNJJicFCqg00lhgAkqe3YVNirkb4VwSCMcDvT+AQcYJ7mm53N3
yO+KVlIPJ3E8jHSmLcAirKQQce7dKepIXeGCA5H0poVlA8xMjPBIzTJcMcA7h6jipGPdAXUHn154
pzHLgAbSDwopGjDEMuAO/tTSQZMD5h6ikMHzklSSaUpmJSRgk8mlYDnjbt5yaZhih64JwDTARl2h
iflHTcO9MDEZCjjvjtU0hZhtHzL78UkihACCGOeSKdybCNnZyPl7E0AlGwPmHbPSkHT72MetOZQu
GGPpikCViV3yc9OMY9aifbuOVyR6n2od2cqPu5/SmSxlJcv1z0HfimkNsNwByV5/SlYA84DY54NJ
Jw2QCOpxSSAlQQelO5O4jP8ANxnB60rrxxkDtzSyKI1HBBzjJoJXd0OCPSlcdhy4RDg8+wxmlbcA
CRhT0NL5i+WEZc9we9RHK556/wANCDYBkAlSCxpXyMDADemaUF5AAf72ARSygoWUDaSSemaTKEkC
DlSS3f2oxgccDH3qjdTgDo3rTv8AWJjd8w5x68U7EkhQxuCOnv3p21WbI+UY5+tREtnHQ09FyzcZ
OOtIaHPGjqMHL9801mAQ5IGO1GCPm2+xz1pkqjOcEg9PWgYrksAwUAAHPFKEKAE5DdVUmgKrE9Qu
c8mmMzKqZ57Z96dySRpyFMbAFd2cHtTNgIwBhuhBNDLtU9j1OTSFSJNvJOM/WgBxUBiCcYPUVIU3
KdnfoDzimE/MDgD1FKG25ZgcnjFIoANr4bnvkUwqNx+bJxnPpSSbg3AOD6UDac/MfoaLEsWRwApX
oOoNOTBxjJUdjTGYAnB9eKeAozzjHI9aAvYRhnAz146U1yP4eMce9I7ZGQSfrTyokOeg6+9ILjSW
RhjqeuRSncFbJAXAprE7xuPPQChhkgE5BoKEIBcD7uP1p5zkFeABzkUmAR+uTR85IHBB9qBIbgbw
ccHo2acThmABJximyAZUlcEDoBUu4sjEZz3oGQs5Byo69KkIATOMH07U1Uw5C5/KiUAHKj09qkBN
rgg4Jz7U+Vs4IPUdMdKjbcjYOSB0oZmUg4JFMAJOcMMnrQdzljildmQrjlsHOaRyZGyD9aQDl+fg
ZB7UgXnkng9DxSBCCNvJPcGnBPmYBucdSaEBIyx7Xbox4AzURjUeh96mdx5fK+vTpUUSkZOAc1Qx
+xSPu7SMk00ttwAcDvx1p5XZy52kjkA5qMqxYMOnYmgVhwO08d+gNJMxQgNkn0pCd3AOT60hPJ+Y
59SaVxXsP2bxu6HqKdnC5HyEenJpu8EYLHI4BxTeQmSe2OtFxXFUFQwfp9eTQ4x82Rjrg84oJ6kK
Tg4PvSODkM2Qp4wTSDUeqbgd2d3pQ7Ar8mOnTvSFCi5LbmOAM1FncD1B9qCh5QKoBbDEYxTt2OGO
D2xRGAyHdnPJwD1pG3hV6FTx05/GgVkKzlgdxIGO1OjXG7GSD0z6VGSUyAoPPalwcg4PHGBQCVhZ
D8pz3PFCjaSTkZXbinTHYoCryBn1prIVG4MC3GPcU2S7sVSGVgWC888U7gqUGQuM4P0piAnHUEde
KdsYBmJBB469KVikhJN0bEq3zdvamIXWVW4zg8446VK8eQQpwB371EjANzn8PWgNiVgyuSOSD1Xp
SXDBgmwbSBkgCnNuZdpcJ1wB3pJmCkgEjPOfagYxSq4xj5ux5zTpXExUFeT6f1owGAXOQp/MUzYF
KsgySeh9KAFQbtw/hHcDpTjlCV2kd8nqabiTcQ2QvIAFTBsxgtlue/Ue1MCAKzdG2k+o/wA96cQS
UGwY705mdgBjk9vT8aWXaSRk7QMM3f8Az0pAMfKEYODnhs9aM4LDdx/ETTWYO+QDjrgjoKlKgAGM
DfjPPpQBGoAALHDFdykntmpJl+RWPLngt2FRPtVwSuAO1SiSRognPJzjHNArIic5A4IwO3OaUddy
jcRycjpStvA3MDkcbVp7sgU8HOzJIFAWGBCZDg5Y846U/wAtyPmDA9h2oAZlchgSemR0pzssp2sC
MHPBxQMaoI+baAR13DrTWcLu3DP+0OM0pXdvc7m44BPWnlCi53nHQoRSASP5dwVSSepA/lSzQtHv
DZViOmOlBBVA2NvUEjmkMZkDPuJ4z70C1HFt77mj6HqTjIojX52bzMqOeec/5/pRE67Su/BBxkjp
in7ghwDkkDOOBj1pNXGJI2OCAGPY9qU/u2xHg7uuDwPWnOEB5BG3/Z546mmtKhXMe5iWwMjg0rMb
CZ9m6MguoP3gT6U2Vt2Rg9cDJzgVPMiK33S525BU4x/9amRwmdirbUz1K96sQ9sRhskjDBQCeCCP
SpJDuG3mMDgrn2qEL5QJdiWQ7gCe3TikuTv37skjGCppATxqJImRWXPJBxwOPf3xTRbOtyTuIcZI
yCN3tTETylZ3TJIyMdDxU6KJJgzEghuh6D3pbgLKRsQMNvOVzyfz9KZEQ8SksVCAj1/ED/PSlkYJ
KGfc6ocE5GCPap3VVRmhyG8wkof4eKLICJF8vGFEn8PJ7dhilAnaaVXAVuDtTnH+c1M0aiRVVeNu
S5PWnQx+Y7He0pc7f3YGVH+eKnYXKmIsgRNhU+YhPt196gLxofmXhTjDZznOeT3q1dRR+YrqzAHl
vl4z7U5JE+dWByexHGcda0sQ1Zldj9odt8hXa24gdDx+tWJVW0jbYDHgh1/iDZ65/Okj3phoAm4H
JOOM84OPx6VHHdPFBkqoTdzlep96LFpoz9XjfaGypjJ/hGBVDKrH8vLHOBjtVjVH86T5ueTz2FRW
8RVSrEYPRqVy0I8b4+UHA4C55z7D610Fisq2qSY2SRAhTuxtBHXJ7Z9OcisIlBMoXJYY5PT61seY
V8oKwkZBkjqMZqkRLQs+cJ7bDZLRjCk9h9aZMqySRuSW9W9cdqljlUSSGYZJJG5TgA9s9eBn/wDV
Q0wa5llHl79u0qyZJJHUk8Y4PTnpWhhoxzcyF4tjJzkufmyM/wD1qfbrFMSxPmO2T5m7jHpnHaoP
LDhi8W07d8bHhSO/1p8aSJKQreX8oyT83I9ufzqGD16HKIvkncCBk/jQ6+W6qD82cgAU6V0QLjcz
Ack+tMQvuZy2X6dMmsrnWOVlSPc3JJzgnoKSZygyrHA9DTXi2RZJ+f04IoecTDATZzk7eBRa4CkN
noVB/iJq9ocHl6xauSGKurAMcZNUuHVmII2jHy+lXdEZYL2OfKnYPMCE+n9aLBuXfGUaya5O4kyW
xnOQemM4Nfo9/wAEWZkOv/EhTJIzC0tT5XQKBI/P4+tfmdqF8+o3c0kuAXOQwH+FfT/7Bv7Uf/DM
PxON5qFr/aHhzV4lstTSLCyRLuBWUHHO35vl4zn2pcuppFOzS6lr/goE8KftbfECSNnCi8QbWTC5
MMeSD6EjGfavFvBVlrmpeJdNtdBmuxra3INrcWGQYWx94EYIx6194/8ABRL9nPSvEVlJ8dvA2sJr
mka8sJu445lKLuAVJUOScH5QRjj25NaH/BKHwF4N8Sz+LdZ1NIX8S6WyW4jmZcC3ZWJwAc4yByec
gelXLyRzUVJR1WqPvz4a3Xjib4HWL+JFtrnx9b2D7sIVjluFU+WxXOcHjPTqa/Ez4867438QfFbX
r7x5Lcp4jkvNlxYTIUS3+Yqvlox4TBznuK+nvj3/AMFG/HHhH9oRo/DNx/Z3hrw7fmxvNKJV4L9Y
5NrEttyNy5HH3TjrXv37Z/wd8FfHr4XeFfiRYXFromr6lLYq2oRgYlhmZcCQ8E7d3U88VlrsaSUo
tTto2fQHh7xTo3gv9ljwnrPiKB30iy0Kwe5QpuZQI4xnGeSDg9a/Nf8A4KL+MLbxF8adT1Xwrqck
2h3Ol29pdXmnylbWRiG+WRkPzEHOQRxx1r9J/FHwZufFP7LLfDZb8LdSaPHp32vbkMVCjdgHvt9e
9eUah+y/4R+DH7HfjvSLqygu9Qk0a4nup5VP7y4RGKOoJJBz6Y56dqST0sErKTl2N79h/UpNJ/YU
8KX+noslzb6deTRRE7gHE0rBSc84OAee1d1+y5+05oP7Sng6W8sT9l17TGFtqunvw0UmAd6+qN1B
+o7V5d+xLr+l6t+wxDb20286fbala3MZfDo4eU85ORkEEZx1r5d/4JN+I7K3+O/jKwuJltbu60lf
KjAASYiQFgDjquOnv6CqV+om3Ko10PPv+Cll5s/ab8QKsEjFIbbLZ+QHyRgnI44J6VH/AMEu2X/h
rfRyHjH/ABLb5R8vzE7Bkccf/qqt/wAFQLth+1XrkEaHabKzfejEhiIyCCM4z746fSvqL/gnV8Ff
Bnw9+F6/GrUdViuZpIZ5hfEYFlEAUliIyT0Ttjk1q1cIScIWZ5d/wV9hWL4peFrhYAT/AGMUaRlP
OZX+VT0B4Gfwr86BFtuJOm1XAIkbkdf5V9T/ALe37T9h+0f8VzNosCwaFpEbWNnerIWa6TdlpMYw
uT8uPbrXyvNH5kpJk2sx3dRk9P8A61axi2c6snoW4mjjjmUqzM2FLnJwOegHFfZP/BKLcn7VlmGj
xEdFvwncgnyjn6YBH418XRx+SXwxkYLknP8AnNffv/BIv4b2utfFnU/GR1hTqWg2r2r6WVwxinXA
kznB5Qj86mcbG8XzH6eXfiDwlL8XLXQrqOD/AIS+PTvtlnJMi7mhLMrCNs5yMHI9Dn1r4o+PR8Q+
Jf8Agob8M9H8W6RDN4biuTDph8kmOe3ePeWYk/MwkXBHbGa7v49+JtN8Nf8ABQv4KSX0y20dxYTW
hldhjzJBOka+25iB+VYf7fnxG034T/tBfA/xXqG9rfSrl5rhYkDOIfOiDtj2UsawHF2Z9j+KLu38
O2dkiatp2hW6YjjF6yIjADAC7mHTivCv2xJvAHjz9m/xMus6vo2pahY2TXFnJb3sfmLcAYBTa2ec
nI6HNJ+018ANL/bU8E+EL/Q/FHlaMqvdQXtnH58dxHIEwcbhz8nXtzxXz9+1D+x38Ifg1+y5v1O+
/sbxbDGi22oBmL392BkxiLJGG56cCqilclrujU/4I/TeZ4L+IyKAsZ1C1dVHO3MT9+/TNfR9h4n8
OftMar8T/hb4p0Zmbw/feQz7CqvE2fKlRuodSrc/SvlP/gjpr1oLT4kaO1wBfM9ndRQO+XMOJUzj
HYj14yK+v/hv8GJvhp8ZPiV46utSEtl4lZJijAqIFjBOSenALD8KLGs7X1Pk/wDYq+A2n/C/9sr4
keHL+O11STRLFJrK4aAZTMqlHU9m2sM+4OBXon7QX7afiz4SftWeHfhxp+h6TdaLfz2Mct1cq/n7
ZyFYqwbAwemRXmPwt/ao8D2P/BQnxtqT6vCfDPiG1j0u21cgi3WZBDty/ozKy5OB0r2v4+fsWXPx
e/aC0X4kRa/9nh097OX7F9m3ljA4cAPnoSB+ZpWM000mch/wVA+HWgSeEvCXi86fGNaXWobF7hAA
0sZV3AYdDhkHXsTXuP7R3xIn/Zl+AQ1vwjo9i32KaC2hs5IiIER8gnapXHT1rwX/AIKr/FTQdB+G
ug+Hv7Qhl8RpqcOojTCSXMKpKAxA6AtgV7B42stA/bp/Zcgg8JeIEhtdTNvI88B8xrWVcF4pFBzu
XcQRx60WdwjbXQd+z/47b9sL9l6fUfG+i2SJqjXVlPaWqskZVDgMu4kqe4Oa/O79kf8AaX8WfAL4
sR+BNFsba80HX/EUEFyuoIxdAZfJLxsDwxBXqOSK/R34K+BNJ/Y2/Zyu9K8Ua/CmjaXPcXMmpTjY
qpKwIB98nHfk1+LeneOLTQfi5p/i1C1xZ2utRamqR/6xlSYOB1GDgDjPbHerSFf95dbH7Q/tu/tB
6/8As3fCuw8S+HbCy1C9uNTjsWjvlZkCtHI/8JBz8lfhr8RfFd/478Y6l4kvo4Y77VLqW8lW1TbE
pdixwPTLHqa/erxN4V8BftfeAPCt8ZrbxL4OluRfbIm3RykIy7WIPBVjgg9wRX5Gft9eA/h38N/j
TcaF8Ob2Ka0hj/0+whYsLCcEL5Ybp2Jx271cLGck09T5ilh23HzZGE4XHbr/AFpirGxkyCGbjk/K
PwouGjwGV2CZwecn3FRyhZeQFYnnJ4zx0xW1iUemfs7fEDV/hV8b/B/irSHRL2zvo4vLbJjlRz5b
o4BB24fr7Cv1x/4Km28Fx+yhfPMibxqlpsd0B2Els9enGea/KL9lD4aWXxd+PnhXw3farFob3NwZ
opXh3h5I8MIxkjBIB59q/Zr9uj4WX3xY/Zr8R6Tp0iJeWPl6kiucCQQ5ZkB7Ernmud/Ebbo/Kj/g
n98RtV+H/wC1X4Jt9LaPZrlwdHu1kGVkt5CrNjphgUUg/h3NfqX+0x8XtS+BXjz4dN4fsLKSTxjr
UWmam1xGTuQMihhgjD4cjPpivx+/ZV8R2vhj9pv4X6hdzLb2EGvW++WTsrMIwT6ctzX6Zf8ABSDx
Tp3hbxJ8CdQvriOGG38ULcF2I4VWiJJ9sd6mSbZoktDnv+CungLQ5fhZoHi57VYtai1FdPa8Q4LQ
NHK21vXDLwe2TX5CXQZZGAwBuIGD973r9i/+CvF3by/s6+HstHKk2uxMhD9vIlO4c8j/AOKr8cZJ
EaXkkluOetb01pc5nuyTzncbnBCA4PGMc84r9Nf+CO/xOsv7V8V+ALjTUGoCI6vZ3qKMiHKJJGxz
nJZlI7cGvzFuJWB2Rl1AOOa/TL/gjz8OtLuda8T+Ol1ctrFnG2lSaVs2iOGTY6yZzzkxkfhUzN4M
vf8ABXDxz4c/4Sfwr4ft9NuYfGOnKl++sQkIEtXEoWPcDnO9c+2K/MecyXcjySXBmZyS7u2WPPXJ
ySfqa/Sj/gsF8NLiz8U+F/H0c0bWF9ANInjLHdHJGJZEbHoQ2M+3vX5oyFo/lJLKeCGHOP8AP40R
21Mk9bH6df8ABG7xnrB1Txz4Qa4MmgRW0WqRQyDJSdn8tiG64Khcj1FeIf8ABT0A/tY+KY1B4s7A
nA5BMPX9Mfia+kP+CP8A4B0Ky8KeJfG9pqry6vcSHSLyxlKgQhCsisOh5B9+p9K8m/4K2/C240P4
v6b43j1OOax8Q2qW5tIz+8iltwBk89GD5zx0qUrs0b1Psr4Lo/ib/gmzpdrpsZvbqXwRcWqQQAMz
TCGRCgHdtwx9a/PP/gnZ4B1zW/2tvC+r22mXMtholzK+pTqpAty0EqqHyf7xUetSfsKftuXv7M+u
xaD4huZ7/wCG+pT/AL+EksdMc5JniXHQk/Mvfr1r9QfiN8W/hT+zd8NNU+JNsmnW9lqaJLD/AGTE
u7UZXBeIBV6k5Jyeg61NmtBPR3O8vreK4+N2lu6KzxaDcshYZ25uIQcehrnviL8d/hz8O/Fbab4m
+Imm+HdSjRJG066uQr7SMg7cdDxX51/suf8ABQm7u/2rNX8SfE7U/sWgeI7b+zbaOLe1tpbeZH5a
gc7U+U7j6nNfYfxg/Yw+Gfxp+NEHxG13VxcXv+ilbL7TEYJfKxgEHOQQuCOc5pWaB2Pmr/gqZ8cP
hH8Ufhl4ct/DPiPTPEPiy01QMv2HLyxWxicSbiBwpJTr/OvzBucMwUnAYhueK/Qz/gqhJ8INH1rw
7oPhOytrPxtYr5l3/ZyKtutq4JCuV4L7lUgehPqK/PAmNV3HBYHA9DW8Y6Gd0SKsZLqSpAPHoB3r
72/4JjftdaR8GdevfAvilEtvD3iC6E8OrjOyyuAgTZJwfkfYvORg9eDXwL5jbTtA2kngjH5V+g3/
AATE+KHwyksfEPwu+IVnZNcazO0+nXd/CojdfKIkiaU/dfjI6e3NRNFxZ97/ABV8IfFjwrr93498
B+LZ/E2m2/8Apf8AwhV6V8q6jwd8cUoHHGCv9e/xB8Ifj437SP8AwUj8Aa/d+C7bwhfWdvdWVxbr
IZppWjt5zukZkTBHAxjIwc9sfc/wi8I+AvgjqN82lfEb7Xp118pstU12K4SILkrsBOVwOOD0FfnX
8af2n/h/4N/4KA2HxL8FWEsllok7WuuSxKoS+ceZFNJDzg5jY4bjJGfes0VFLmTZ+nPxx+Ofw6+D
Uumf8J34pPhs3Yd7bAlPmBcBidgPA3Dr618v/tnftg/Bb4hfs2eKfDNhrcmsarqVtGmmxyabMN0w
cMjhpECjGCckjvXpPxb0H9n79sDQvC2veI/FWmvbxWry2ccmqx2kyLLtJDoWBBBUZB9O/FeH/wDB
Q/4ifBbw9+zxp/gPRoNN17WZfLi0yTR5Y5vsJgKDfM6nIyuV568/WqjENOqPoHwdrN14S/4Jy6dq
+j3T2F7YeABdWtzCdrROtpuVgccEEA5r88PgV/wUG+M3w81u/u7q61H4l6fNCUex1WV38ghuHRlB
IbqMdx2r6p/Yv/ap+GnxQ/ZjHwo8favbeHLrStJ/sW4Oq3SWsd9asrIrRMzDkL8pHUEZ5zXo/ga8
/Zd/Zh8MeJ9R8P8AiXwzIl1Abq5tV1iG6muPLVsJErMTk5wAO5oirdB83LJniX/BK7xdcfEf43fH
Hxbc2NpYT6x9lu2trFSsUO6SbCgHn/E5OBWV8X/jN8QLD/gpfpPhSx8Y6va+HTr2mW50eG8dbZ4m
ihZ18sEAg7mJ/GvHf2Qv2xvB/wAGv2n/ABVrcekXOjfD3xnP9nEGVY6Yvm7opXwfuAvICMnAbjpX
3Vrtx+yrrHxhi+JN7438It4uinhu1vG1qPIeJVCELvwDgL2pNMaeqdjzD/gsPawv8NPAEhgikmfW
ZolLqOQbdjjPUAkfoK9j+FXiLUpP+CdOm6zJfTy6nF4KnlS8kcmQOkMgRt3XI2rg+wr4l/4Kgfta
eFPjLq+geC/B06azZaBcG+n1u1kWS2mkkiK+XGQeSoOSfWvYvhR+118NLL/gnXJ4b1jxTZaX4psf
DV5o/wDY9zIVuZZQjpGEXGWDBkPGevtRyPQy1SlY98/ZR8RX3jH9hXSdT8QXU+q3Vzpeoi4mvZDK
8iiWdcMT1+UAVzH/AAS+0ey0b9knTtTgs4o7ye7vDJPsHmSBZCAGbqcYNeTfsgfth/DHwr+xIPDH
ibxRZaF4g0i1vbP7BO5EtwXMkiNGAvOfMxxnpXBf8E1v23fDPwy8KXPw2+IF3/ZOll5NQsdZuj+5
iLBd9u4AyMsGcHp8xHHFHL5FJNylbsfaP7EHxj1z47/D/wAU+KteEUd1J4iubaOG2Y+VFFHFEECg
9OOvvk1+SXhLxh458DftWaz4o8CaZPrfiTTtavpoLGCA3HnR+a6yIyrztYNyeMHBzxX6S+Dv2w/2
cfgl4ttPDfhTxPYjw3r7y3Vxc2e82un3CqACx25HmAY68FB618JeIfjn4I+AH7c2o/EX4W3Z8VeE
2na5uLQbo45vtCsbiON2HQOQykg88dqai2XbU/SaPU/hv+2h4R0pdUv77wr4ptneCTSPt/2PUbWc
Ll4zHnLrkBgQOQM8civgz/goh42+NngPTLf4T+Ormy1bwOzwz6VrwtNs1+sI+UPJuP70cbgRk9cn
NfX97+1B+yP4r8TWHje88RaTZeJlSOdb1rWaO6iYDIDFU5YDg9ensK+X/wDgph+2v8PPjR4T0rwH
4KW38UIksepSeICrr9kkUsvlRhkBJZSdxzjB7mnFNPYh67o/PXRtAvvEutWWlaXaTXl9eS+TBBCp
d5ZD/CAOSa0PG3w58Q/DnWv7K8UaTdaDqflCb7JeRlGKHofp/wDXrovgJ8Rrf4UfG/wP4tv4TJYa
Rq9veXCIm4mIMBJgeyFvrivrL/gp/wDtLfCz9oSLwO3gLUTrGp6W9z9svPsUkOyJ1UCPdIqljkZw
M1d7vVGbVrWPgcAMo4LHOB6V3d38EfHNh8P4PHU/hi9g8JXCK8ereXmBgzbVOeSAT6gdfeuHR88Z
6Hdke1fpd8Jv24/hRZ/sDXXw18XXEsfim30W90iLTWsJJ0uSQ/kOHClRkOnUjBU+lO+tkhtO11uf
mbcB97AfNgcjp9a7T4YfBPxn8a9Ru9N8F6Fca7dWcQmnSHHyKxIBOSPQ/lXGyqpGQcbflwM8n6Gv
rj/gnH+094Y/Zi+KmuXvjN54NC1nTPsZuLaBpnhmSQOnyLyQwLDI74od0UlofJuuaJf+HtWvdO1O
3ey1Cyme3uLaUbXikVirKR6gg1TKE9RnAzkV7f8AtnfFPw38Z/2lPGPi7wnC8Wiag8IiaW2ELSlI
lRnK9clgTk814k7CPhRyepNMhMRn3NjOB15HSkfJUIGGM9PXNDuAq7TtI6j/ABqNjnBx/FzilZDu
wJdGA6HqeaeGLhjgBs/SmkfMTwwPb0/GkaLAJIz7UrBcA5DD5i2O/Wnq5BKnnNIBjJOOBjpRuLtz
x24otcaZJKy4DgYxwM01dkZ+dc9T+nSmGPg7WJKnkGlQjaVYZHb2o5Q5hsj7lAA4PtSr8gycnP6U
ENsYHGcUjHcc4p2xVznPTNOEIMbvuHy9j1NMCL9KdyCM9atC1MscZjByevNTf2S67EbCsw4J9fSg
VzPA3HPpQQAavS6e1vtMmArEjgjimrZEgE4weB7+9DFcojOeOtOZSvXOasXNr5JBBBHtUD535B46
UFDc7zzQFIPqaUqMe4qSCETOF3YFAEbZ49aRlxnrVt7NlXd/dOPqKsafpyXaTFn+ZOgzSuBnM+4A
YxQ20AikkXaxHJxQdvAAIPc0wBz8/HSjJzknj0pWwDwSR1zSEBm4OOO9ACE5JIpS2DzzWjFpclzZ
zTw/P5fJC9RVeCyeduc8DLZ44p6CuQyOGIA4xSMW356nHGKfPD5cm0Z6Z/Co9wA/PkVIBw+SzHPp
QrGM7u49aUsSOmOa0VtEa33k8dDRcZQX5skEHHPXFMY5AGMf4e1aH2MLMoY7VwBuPUVLLYxq5jB3
IR8rntTFczXO4KpGMdMdTTmXKrnJAJA471o2+lq8mJJAQhHX060k2nx28/kmZZVJ+Ux5PNUIzHUb
jzg88GnFlI+ZiWHTNTXcKW8rx53Op5yOvWoNmQCTwagRaikKx5D5J7dvxqwt5MhlwvEg2ngYBHpV
SGP5xlsnqB3NbN3BHBbAh/NjDDnGPTitAKqkGRSF+8OMnHFOZSi5BDgN8wzz/nNbFulnsWFw0Yxt
Gei8E5+tU7vy1lBiCRliCQRzkd/x60WBFTbHkLkMw5AHQD1zUpUMm0O6sR94/wAPqOnsKdN+9nEm
AjbedgHbPOKRYmaFWjcrKemRkY9/b396oHoRSqHAUAOc8DPbvxj/ADmnXdwJ0woXBHQDBzjr+dPm
gaMKB1ZQMnjFNjXB8hoxliGPy5Iz3BpWJTIfKCKrCPII79Cc9aGjRtuQQ2CeRkA/hXRXdnCiGGfa
UKh90fUHGf8A6xrDXash2q2x8gI+RkH6dKpIBgidCkyvwDltvUD6GrETSKCSuEPBB5yKjRQu4mLe
eBzxmlSQrMQ+HGMcjhauD1M5J3ujoNK0+bxLeW9pY2ss+oXEnli2hUszHPAUDJrb1X4YeItDgupL
3wvq9j5Kb3mks5VVV7seCMYqx8DfiHD8Lvibonia4txd2VlcDz1JAJRgVJUeuGJ/Cv1Mn8eeENT0
5JZvEWizafdRbgJ72MB42HoW75r7XAZfhsTQ5m/ePyjivivMeHMZShRwrq05rdX3vqtPKx+Pc1ov
mZttjKSe+ePX2qCWCSJiSm4kEnJ6DHr9RXpnxfsPDmjfErxTZ+HJ1m0aO8k+ytGwePZjordwDuAP
PQVxEfk3kXlqXKleCoAYj3/x+lfL4qhKhWcHrY/UcLiFiqEKyTXMk7PfUxnkMczEo24EFTjCn1+t
T29y8qsz/JnAEjchv/rmpZrdGjU4fKfKMkYPX257VDPbyI0Py7HySqqeDn2x1rmV0bu1y2zPdWny
4njACsQ2CB0/H/69NVcStGFx0VcgYPbr+lUhM8MCupRtp2nYSMKeM57dP1q/HKslu29I0I43JhF/
3vrnv3qG9b2Dl7ErGJgvlkGdTlnAGAw6dOtWtJn2yyGX5ZBkbg3J57e3H8qqRosav5sq4jIZSOh4
5X1zzVzTUTcxUCRcdxkj9Dx0qGykrHAFdkjEH5c4BNJjaN3VM+nWlDBGyAQ3YZpdjO4Q5Ge2KwNh
+4/KS2dvKjd0705pHdyyj5yOc4ximzKEPBORxjNS7FAGxQxHLZ4Ofak2NIjkLLFgglm5zStGYgBk
7Dyfl6UkxKgAA7yTk5P4CgI+9l3cAZxjP1/lQIsLB5kQKYKjkjpmpGt1jhIOS2f4hjj0FRxRgoGE
oUn7pzjGKiKsyl3LHd1IOf1oKuMEbE4AITuT6UE75CHJAAwAp4xUwV5N0QyBt6HjOPeklzE+/wCU
8YGMHrTJAosqfx5U9SOKA8W8kgqoyBintEANgcsGxn1U45FMVtjsMjAz97rSAa4CZJAZC+FZjntT
/Jy5I4LLgB/5fWnR7S298BMn5QePzp/nSQ4zwxJIB5C5/wA5pgQ+d5cpbk/3QfanKpVpC4GBk9eK
HQXDhcASMeT7elSNAsUIY8kkqAOTx6ipYEZkB3cAc8ccUOxQhETBY8nNOwFALr97oBxilJSORVAI
B9TnnHWlyiQhJBOz5iR601LeQkg7Uf8A2j17/wBKkmcKVjEJUg5wRyf1oG8qXIJwcDB5Bp2GR3Ki
Pb5WVcDkdfbFPA8w7SMswyMdPxpkkkkh/iUg5LHocU8ShUzyGbkE4qgCXEeGVSSPlIA4pksbIE2x
FeN2c9ulKS4wMgL94kcgHvx60kxDbiSzBQccdqQh7gM3zFckc/WnAx4LMflXsen0qBQJWG1R0IOO
9SLDH5nzsRGxOMdQKYyOLAbjPGcZ6Z/zxTpVCr3Bz+OaQuqZ2KChPAxmnvL5iFQjb8kcDjGe1ACK
meRlu3I6f5xSFCWXDAbv7w6U7ACY2hgvHGP1oDNNGM9+BtbgCnYQyQMGVmUk+5FP3GVigjUYBOOO
B7etOLFWjfYSewPaoj8hB/iPOBmp1AEZypz3PQfzqUxsyBid+T6dR05/KmSjGCCpB5xzxxSTlwG5
YjO3J6YpXDYewV2ySMHG0+/T+lQTsCmC3mMDyMcfWo2cKCBksMnHoahLAnqd1O4hQuA25Sdw49qQ
oAwGeP0p6uxYAZBHfFOKY2bhvPcDiqKEmQJ/ERxxzxTUYqCQ2eKUxLt65A4+tPZVOQo+782Tzmmr
C1K5O4jue9SeWGB7N3pZFVWPOO+etPWLKjaMk849aVgGGIcjOD270eSqKckNzgYqTAwc4B64BpnJ
TA6damwCNEoQdS2eRQsRZsEgADgmhkJIHtnA+lLuYYHJ4x06VRIOhDBRz7g5pGQElQCcdzUzuoiB
UfNnH196HaMlCNxOSSq9MUJDuQ7vLLZwR2qIkux57Zqd4yCxU5HuMZqORPmJK7eO1BQihkYZ+XPR
jSc4OTgE05mBQg/3utNVlAYMNw/lSARgzE85OPWlcKoXAycc0pAOSBx703aHHy8nHTPSgAyN2Bxn
1oJyMk59qGXDYJxTMEewoAB/kU/GwkHNMxSnjOeaQB0PNO3KVwRz60jjHb8qT7o6c0AAwW56UMME
4pUGDmg4yM596YDcHNKSScUmfm9qOp4oAX7pPNBAwSDnmkPOT3pzFSOnNACbxu55FBGTx0ptKAO4
NACqcHJGRSsdw/GkbrwMUn4UALg9OxoGQM/nSE7sCgg5NADsBT8ynBHFIwxjt9aQnPWnKAc5z+FI
BvGfWpXbfnAAAqMqVPNIM0wHqM9ePSnMAvQljSZ+TBOT2pGIAAAyaABQQc9c9qfGdzjccYpo2gg9
RjOKUEHkjFSwEkCgtnOfekJBGSckU18kk9QPWgcjAyaAHFsjpjIpwfcAAMN6mo6ccHHPI4oATO1v
QelPDHOevsKYSM9M8UquyscYGexpgLtLN8jEtjNNIPPGPWlU7XJBIPtSHJJoATOQfX2p5JA5GD1p
N2Bxxxg0jNzxQIcTg5ZeCPWmDHOTilByBk9P0pZEKjJHHY0xiHGDzz2pQyjGR9aYAR70pDdxQB7D
pvha2ufD8OqaBqT297tV/IAzvPGVI45HPr0q/wCPLqC88BMLxA98FUsVUDa/rnr2xivHbTWb2zjM
cVw6Rnqobjrml1HWL3UXY3Fw8hbkgng/hVcxPKj1bwneW+m/C67dLhYZG3OQvBZgOB+HP50nwvv9
Pv8AwfeafdyYkLSIwJ+6rAEEnjvn8q8jW+uY4zEszrEQV2A8YPaiG7mtUZYpGjDddpxmpuFkekfB
i7j07VNXt5Xijk8ogGU9eQMDp/ntWv4U1CC51DxLps5EqG8acn72VPp+Ofzrx+O7nt5GkjldXbgs
pwTTVuZVlMomcSN95gTmi4z2jw94gs08baqJnYIERI0LYLc5/QY/OuIW405PiTeJqSmSxedotygD
HPDd/wCdccbyZZmkErlzxuJ5IqNnZ33FiWPOSec1Vwse46fpS6F4lgudOvml0qSPElu8gwp6nH1x
n8a4r4la1BN4ytryAKxiRAzI27ODwPyrjI9VvI49qXMqr6bjVaR2kYszlj7nJrO2oHuHi2zsvGtv
p9xa3bOIzvZGxkZA798Vg/FzW7XUbO2ht5/NKOGfnHOMHjJ9BXmcV9cQrtjmdVPYMcVFJIZGJZix
J5LHmnYnlPVf7dgn+EzWQnPmiMBVPoCcjGfxrb8I+JrDUdF0h55fKurW3MDszcnACgD22gV4gsjY
27yB6Z4pu9lP3iD7UJFWR11zq0Fl8RmvncfZhchmfB+VCOT74B/SvQNXbTdR8Q2mtwXu14ScAHaN
vXr+NeJNIXbLEsT1zTjO4UAMQMdjVp2KTsdl8Utch1fU4VhYOsYI3DuO3PeuIbrTmOcEkk470Ejb
yvPrUgxNwOc0dTkcUh6cUE8Y6UCFZs0gG7p1pAKeJCDQAMNoxn8Kbt445pW9fWm0AL+lIeadtP0p
GGCaQAAeTQaBgHvQSOeKAFwAeTU0o2RgAZz3FV6kSYopXAIPXNMBh60pAB61LKiDayc+o9KhPB6U
AKzBgOOe9NopWxxigBMGnBSKVGAx2PrV9bQTRGRBtVeCCaVxFAj8xSYIPIyDVp4QoC56CmKoORuz
zmjUEV2BBoK4OKlnQ+ZkDAppBfk5BoGIVyo7GjbkEnt7U7cAAB17g0B9pOPyNFxDAKXaM9TUgYZw
SME54ocAyE5yvOKQDQNvUUuCenHGKDgEbucCgHfnoOKoYNE2eOc00qR1OAeKmO15BsYDjjPbion3
DjPWkAhTD4zgmhhhj2I4pzPvPP504sctgZPrRcRFRUj/ADLksMjA471GRg+3rRuIMGkxzS5oPWgY
hx1xmhvXGKKXOetMAAOOeKsR3DQqVJ3jHAqAj0/WnkKEJySSelSIsOI5rYspIYEZXAqqGGfu55yO
aVZCo49OtMP15pjuWVvJLddpAPGKR7yWcLvkLBAcZPSq5+ZsmkA+b19qYFqXUpZYljJyqjHPXjpT
RcSrh92eMcioScnO0DtxRknIJ4oDQkmnMyHOSRUbBsZxx7UrNycDGeOKa2XOf0FIB3D4A4PPNOSQ
xkMOMHrTSwG0qCDjmmtz60DJ/tEjJjPWmi4lgkJjfDYxx6YpioGxk9aa6YYjpii4rinBB4JJ6U05
VjzTlBIIz0pGGDgcmmMcVYLu6g8CiSIJkFhn2obOMkEAn9aCO5IJoAlt76azDCMgbjzkZpPPZn3Z
wT1NMcnjjOMg/WmBSxGAcmkIdMzM7FhznORTEHzDIz7UsgwSCTxS7PlzkZoGKQSSCPenl3iK8+4A
NNdTu5OfQjpSAHr3HY1Iak8jyS4cvlicAZpJmkaQYcnGASTxURUhiGyM/pUzJx/dxzk+lFxhIZY2
yz4K91702OeQKw3nZ3odfMIUA4HrUkNnPP8ALHHk5x7Va1EQPKHmZmy+W4z6VLK6oxCLnPtVg6Zd
A82x4J6jrjrinDT7qUK/kk8ZJxjA/GnYRXzslOD6dODihpm3FQ5Ce3HFWBpd00XEZL5xgelSrol5
I+FhfJ42lTn/APVVElNJyDtRi2TnHofp3/8Ar1PskkYMyncp5O7t64qzDpV3E+DbtE65OZFPPccf
561IunajKUWONvmPJIPH1HpRYLjYpysS7FMbfwEHkCpmnQRhVUn5dvXOBnnP6VI2mXUrqTaMIy+3
cQevTHNTS6JMlygaEEK/HB/l35qlYNyuozHuDKNhzg8HHei0ijuAfm2MvOe549cVYl8NX0mD5Mka
cFSwIBPXv+FTNpFzaRyboFEiDlmH3Tnnv36UFWtuMdBE0cPLqwyx/iB6/p1qubQwkyBATnu3atC2
0HUWQyLHJwfmLA8Lg8Yx+tPm0K9uZ2Q2zbgVG1ckjPU4oQpTitjJZPMMnyMkjfNjGRt/GnQq6wFZ
MKCPlU9G9Dya3H8LXd9KU+zGJAwj3BeATxjinv4J1ESyCWKQmPKCQqfvDnj0BH86a3MZSRh2kyR4
JXyu24+mCc4/KlZlhXy3T7v7zH3STnJz+H863JvBOrzx+YLQRYTGCCeRwc0xvCd+sBZbRgQQreYC
oAI45I5z9PSumE5x+FisnujFjuXuJOMsQuMSAnaB6Zp8MDwOxJwRxCBx6E8Yz+FbSeDdXEksMcDx
9w5GAV+vapLjwlqyrGGt337CAAdzMc+3B698HpWj5payGnBGUJo2mRW2owByM4yTz09f8KlikSRR
wvXIbGcnH8j6GrX/AAgmqSykfYW4TDTZzu9D+HoO1XIPh/4hSWFI7CVoZMMJs/KE7Nn+lSlcybTd
0YNxYRyRyoh2chum7JHAHH4VTPn20LIyyKARlXAGDj+ddknhDVjHzYuhwVE3VX9Dxz/LtTx4I1C8
ddlsZpJBgqnRACBluPXv7c1XsmzSM49zk4ZURJGZysZQnrg/TH1xzWlpD3kc8sls6qW4ZJHKkgfd
Ix0GOPwrtdH+Desaldjai/Z5Fba4BZiAD17DgZznHQ5roZPhVe+FJXW/u/JeXBjnKFo3XvycfN0/
WsZ059EDnG+5/9k=


--=-7SiOlc9xVXlIsddwPZiY
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-7SiOlc9xVXlIsddwPZiY--



From xen-devel-bounces@lists.xen.org Mon Jan 20 19:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5KSl-0006oy-7F; Mon, 20 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <knusperbrot@gmx.de>) id 1W5ISy-000849-FQ
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 17:16:52 +0000
Received: from [85.158.143.35:56097] by server-2.bemta-4.messagelabs.com id
	7C/2B-11386-30A5DD25; Mon, 20 Jan 2014 17:16:51 +0000
X-Env-Sender: knusperbrot@gmx.de
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390238210!12847873!1
X-Originating-IP: [212.227.17.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjIxID0+IDI1Mjg0\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjE3LjIxID0+IDI1Mjg0\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9828 invoked from network); 20 Jan 2014 17:16:51 -0000
Received: from mout.gmx.net (HELO mout.gmx.net) (212.227.17.21)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Jan 2014 17:16:51 -0000
Received: from [172.20.77.101] ([92.228.247.163]) by mail.gmx.com (mrgmx101)
	with ESMTPSA (Nemesis) id 0LabZr-1Vc2bF1pKJ-00mNMZ for
	<xen-devel@lists.xen.org>; Mon, 20 Jan 2014 18:16:50 +0100
Message-ID: <52DD5A00.8070502@gmx.de>
Date: Mon, 20 Jan 2014 18:16:48 +0100
From: Martin Unzner <knusperbrot@gmx.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DBC709.80707@gmx.de>
	<1390212958.20516.21.camel@kazak.uk.xensource.com>
	<52DD19B10200007800115046@nat28.tlf.novell.com>
In-Reply-To: <52DD19B10200007800115046@nat28.tlf.novell.com>
X-Provags-ID: V03:K0:w1KY2tvxdDn74WeDqPUO/m7VuQIHtcdO+l24EhOfEBKaF5DF6Mo
	1KK21ZH7vQUA4MN/ZBD3q180NC5xMR9Rb6YQ2ey/0TdnKdN0gScgQDqz46mHRcEMHcuJxIx
	qxP7EbCdCNfHWPRSS6lnsmVTN9jYuqRaa8bVUXvrhUHQXtmpM0/YXZ1SUbteMosdY5sGVUy
	BDVErVGvhECc1ur+pyrsw==
X-Mailman-Approved-At: Mon, 20 Jan 2014 19:24:45 +0000
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN bug at traps.c:3271
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

OK, well, that is unfortunate.

I suppose you will announce new versions of Xen on the xen-users mailing 
list, or is there another channel where you would communicate a solution 
for this issue?

Thanks!

Am 20.01.2014 12:42, schrieb Jan Beulich:
>>>> On 20.01.14 at 11:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Sun, 2014-01-19 at 13:37 +0100, Martin Unzner wrote:
>>> Hi list,
>>>
>>> I just installed Xen 4.3.1 for EFI according to
>>> https://bbs.archlinux.org/viewtopic.php?pid=1359933 and tried to boot
>>> it. There was a crash at the very beginning, notifying me of a bug at
>>> traps.c:3271. I could not find any logs, so I just noted the stack trace
>>> on a sheet of paper:
>>>
>>> do_device_not_available
>>> handle_exception
>>> efi_get_time
>>> get_cmos_time
>>> init_xen_time
>>> __start_xen
>>>
>>> Does that mean my hardware is incompatible?
>> No, just that you've found a bug I think.
>>
>> I'm copying the devel list here to see if anyone has any clues.
> We've seen that before: You unfortunately have got one of those
> UEFI implementations that utilize XMM (or, unlikely, FPU) registers,
> and Xen doesn't allow this. Which is a consequence of the UEFI
> specification being imprecise here: Some firmware implementors
> read it to allow such, while my reading of it results in this not being
> allowed. This is currently being brought up with the USWG for a
> resolution. I'm afraid there's nothing you can do to work around
> this for the time being (short of not booting via xen.efi, which -
> depending on how you do it and depending on your firmware -
> may have other bad side effects).
>
> Jan
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 19:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5KSl-0006p5-NL; Mon, 20 Jan 2014 19:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W5KBJ-00064h-LE
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 19:06:47 +0000
Received: from [85.158.143.35:28173] by server-3.bemta-4.messagelabs.com id
	B7/68-32360-4C37DD25; Mon, 20 Jan 2014 19:06:44 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390244801!12897540!1
X-Originating-IP: [72.30.239.77]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4391 invoked from network); 20 Jan 2014 19:06:42 -0000
Received: from nm34-vm5.bullet.mail.bf1.yahoo.com (HELO
	nm34-vm5.bullet.mail.bf1.yahoo.com) (72.30.239.77)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 19:06:42 -0000
Received: from [98.139.214.32] by nm34.bullet.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:40 -0000
Received: from [98.139.213.10] by tm15.bullet.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:39 -0000
Received: from [127.0.0.1] by smtp110.mail.bf1.yahoo.com with NNFMP;
	20 Jan 2014 19:06:39 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390244799; bh=wxtNlxT7xDUrJP8eP2qaxKKmG8Iu3vojc4nrSaI0gSI=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version;
	b=Qe+n7VEzyMZzhzlYeY+CoE7qgivd1raupEj2czut9UlWFmOAc9qhodJQ1NPFsCOtoDquSNRbRK+sMEQ/xSz62m4Zne5S+IphyEV/g9qNO6xbLkNDxOhOYJTbDbaOD65ZpBHTLAIKviSAwzmTGLcTn7Xv3tmgJPup1JH8I8XCS4E=
X-Yahoo-Newman-Id: 522902.50386.bm@smtp110.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: UaMhH.QVM1lbCHTmOdMBMtwj9v8Rme8HKwp7ZS4iKAKkcS9
	WP3hUBToFPVgpNP8fjyuWBaKrCUbo0hYq.zoJKdOIwACWfXeQ9dwLJ4J6jpS
	ChOxycOzd0flz5GSB74FJjTDXbmVDuRi_whKAm96FIbiI3gDAk73FeFCVezd
	ihHZttMiuENwyS8M1x.SDntPIjFcCa_gMC1MzrlVtArF5xRvdt7gFtOY.Dh4
	MtFd3Fs1454C1uJW_08mxLG4CmbKeNxUrgvBacypFmO2g28ve52rq8KMelTp
	oWqYaQ.sSSV_qWRRLnsNCmi3yebuy2kl4jvwsre9qG382sXIpR7FZz0Vneml
	EjJe2F2PSh2VsbWCzRT6VXCZma4LCbCNJ47Fg3MaRY.mhrE9i6lTpD2V485n
	darVkLLCfwx1.5oLqKWGHj8M3tmDMATlWWT1T6RJ.7sjGJE0FAjWai9PBqw2
	woVzRpSQVFtzsNG94FOtFyyE1vjSm9CeUaBzoY38Ppr99xcsHg6CIk.GGIvZ
	HH6G695Q7iEgKWlekbxIIwudEtPpcl1Rn8ABEgmrxPx7jCOD0ew--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp110.mail.bf1.yahoo.com with SMTP; 20 Jan 2014 11:06:39 -0800 PST
Message-ID: <1390244796.2322.6.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: xen-devel@lists.xen.org
Date: Mon, 20 Jan 2014 12:06:36 -0700
Content-Type: multipart/mixed; boundary="=-7SiOlc9xVXlIsddwPZiY"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Mon, 20 Jan 2014 19:24:45 +0000
Subject: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-7SiOlc9xVXlIsddwPZiY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

xen-devel list,

Testing the Xen 4.4 RC2 RPMs on Fedora 20 caused Xen to crash at boot.
A screenshot of the crash is attached.  Hardware is a Gigabyte
GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to the Xen
command line allows the system to boot as expected.
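[Archive editor's note: a minimal sketch of how such an option is
typically appended to the Xen command line on a GRUB2 system like
Fedora. The variable name GRUB_CMDLINE_XEN_DEFAULT is read by the
distribution's /etc/grub.d/20_linux_xen script; exact names and paths
may differ by distribution and are assumptions here, not taken from
this report.]

```shell
# Assumed workflow on a GRUB2/Fedora-style system (paths hypothetical):
# add the workaround to the hypervisor (not the dom0 kernel) command line,
# then regenerate grub.cfg.
echo 'GRUB_CMDLINE_XEN_DEFAULT="iommu=no-amd-iommu-perdev-intremap"' \
    >> /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg   # rebuild the boot menu entries
```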

Thanks,

Eric

--=-7SiOlc9xVXlIsddwPZiY
Content-Type: image/jpeg; name="panic.jpg"
Content-Disposition: attachment; filename="panic.jpg"
Content-Transfer-Encoding: base64

[base64-encoded JPEG attachment data omitted]
roCTkhieRzmvRZ/h6XhdnjO0tj5Fyc8+lSD4fOtnIj2kqHJIVhgYJznP0FCEecMAEAB3t6A0kn7t
mTOQw/iNehR/D6O5clEZI8YCscfN7YqBvhzIkcjODcdcYHI5wBj+L/61D8kLTc4SNGjkLtGNjDqO
nellZSmIySg4OR2ruW8CSR/eBJ24YjkqeeD+Xeq8vgOVICxGIQRl8H64oiu5XNE4yIEEuQpOBk+1
SCRVl3Mm0kknaMDPFdPH4FkIeTcxQYI4457Uf8IXcTq5+YseFDKBj1J9qelyb9jl1k3HoQQMAkVF
IjMxITDHjOa6tfBFzFA7y5iCkruOCoIz3z7VE/gu5DkJJv5BDqOntQ2Wos54qoCeYpZuh4x2oDhy
uCWK84NdDJ4RmWZ4i4Egby23Ljn/AOvxSHwbemWbyomKqdrlV+765qFqPlaOdC52rk+px0XipDGz
Kyk/u/UH+da3/CKXqI+UZD2weWHPb8DTD4YvxKBFGCuMhs5H8vcVVw5WZX2bLZBBYdAcYNQiDcsm
T8x46YwK2pPDN7ERmIEn+LmmXOgXsRdyhyvUbSc8VImjMniIYRk5Gfl//XUE0ZUkkgEdMd61o9Gu
p41dEfGMqGWo5dIuopCAm7J5JBIzTuUoNmeiGcgAjn1GeRTSrMWbPI6ZHStJ9KnK7VTZKPlbjHNN
bTDskby3faMHAxzUXE4MzxC0odABgHJY9qVIigYfMzdRVs27ScGNlcgDp3p72zAPuBX5ccL1q1Yg
oSoRgnJY++KGicdUAwOnrV8W7eaAVcIp4O3nv2qKZWYltpJB79/88Ur3EQmOQqXxtVONoqCUMufl
+Q8A5q4El8wcnnk98e1Escizs0m3JxgBatalJXKYVjgjJbOMiiRm4C5J54LcfWrSRnzgWQkDkY4o
W2V33nOV755xQVysqvFIoBY5TPUU6RGbaQp2seOO1WRDhWURnGeGz19ajlhcSFA2ATtAFFxqJAEC
SOcElc5B44pHZyRIA2OmOvFXREXIx/GcZHFQPG8cxXf8wHY5BpaE8rRWlQbAwyFY5Oe3NOdQGBK7
gD1zxUskQTA2cEEg0G3YK6ngNzyKV0PlZEreYp27htORxxSEFySTj6GpkjIQF1IA9+TxQkRMmHUh
ccE9/pQKzGiRwjZQ9Op9KiaMEghCGOMe9TM5DbACQRzSlscL8pUbgT1qhEa5jLDZyBSBg8BTI3KM
nIxmppIzJI0vUY5I71GhQJjbtz+NTa4EA4b7oX3pXcspyRnrmp5IyVzjKluOKjChXYPx39cUrC6k
e4qrEkgkEDHpTCSxHJKgdDVhkWR2A4TnJz1NRpCMgg71xgUxhvLH5jkgcE01/mGcHPbipniQAkLj
HU5pqsydF2gdMinYVyMRnpjtyc4pNrOV7DJGQc1YDqAf7x7mmCLe/wAuVI6Y70NWGMV8NswcZ6k0
3zNh5XgdzT3j2SMoPHrSIWztClgakBokJ4GcHsKVkbdnGSvJBpWjUN159e1MKbTuJPNCJQsvmFzk
kHGTjimBSScMcY7U92YsST1HQmmyMse5e5HUU9ChsmRlcnr3pwAAA5bNJJ90OGyxPeozkMGAOetO
4ExGCOCQByCaaBvOV4AprfOmSx5GaXftG3OB2IouAsm8c+vpTXJjIb+I9MihlG0YJIxSjI4zuxwM
0bgI3CBg3zEkdKQucDpkevWlJ3Hb2HOKQkYG4EmkAg2nOc4pd3qe2MUmVIwM596VguzgcjqaAFHA
zg4pGG3qRwM0hZsYP6U4uSACPxNNgKIiTw3bP40rRkZZ1Oe2KCxRm4BFKwZxwSeOx6VOoCBi+FIO
T0ApzfMTn5cdu9CuFZSo+YdfelPDtuADHoBQAxDhe5wetTYD4ycY5BqNnJI4KikdiMbck+uaAHsR
sBIYjuPWnugKKfmBXIyPXHFRAryoPPbFPlLMdvOc9xzSEKUIbkkt0BFOlkZpOpK56U3eYzxlsdeO
KexXzBtO0eoqdQEVAuMLlT3NHmDcdqbATjBGcUhQqhxkjHJI4pBG8nRgT0xmrTGJcZK5AIHbb0NV
gN4A71clZcBW6YxkVUkyh64B/OmAA4HIz2qXgpnAII59qiKkcZO0+lSRqcP6Ac/ShoBPLxw/HbHe
pNgbHbsDSPJlV3ckdAKXcF2E8MOhXqakksJ8p4GAR/Ec5przYcAhcA+nWoWkbcSB9KaZCGBIDdwD
TGSSL1YHOTjmowpKHkh84wRQPn56EDk+9NZyEByMZ79aAQ59rHPQjjB7mkbDMvYD/JppYEZ+amrG
ZCSMCmA0r1OR69aTYfTrzU5tQACW6/zqZlMWBgFT0JOKLjI9rRryDv8A7pHapQxkDEcbflA9ana4
G3P32YY+nFQGLYmN/wArN1qQHmSTGJAHY8H1xTHAfKsmADng80m0qM5Jz0OaUOEYHcw/vcCi4hVA
WMNtHB+pokkWRlClRxgZpAuCzBuM8EcU/wAxkySAcjHTjp/OmxiSI6Kiscg8D2NII8HJIBJ7U8F5
Bjtj7x5xTWG9T82BnPA9qm4hj/6xtwzgdj1p/meZlsZH93oM0hIYMRndxj3pGIwWwV3NzjsMelMY
/wCZeV+U9MjtRFKVJ3AEep70nmLvD9xTDIrIqgcrx061NxXCSJeQAQx64P5UGKRFzznpSqpcYAJB
PXimnO8gnYwbGfWqTGRzoXcsBj1FQgNG/OR61ZMxLYb5UB4AFJIu/cACvHC1QiKS4dV8veSuc4oM
kjtySf5VGflY8cds1YBDABSob3FMY62hJPmOp2g8nsas3MwjjChhwSKesixRFXJK465yBxVZXM+5
mI68cdakRBICx3btxJ5FNMbN2/EU4xsp3Kent0okOF4YnHUj1poQ0qoYjAHWhgDH0zzwe9ARnBdQ
TzyakCsmN+AOwPUimUNX59xwcnp60h4PT2zSgM7cAgDoTQFBc4PfHHahiJooiYmmLAAcYxSStEU2
g52nP1p0qSR4QEHC9M9B60lxarEAc7t3IA5NTcCJ4pGTft3KB16Ypq7oWVyuc9DVxIB5JZSzZHRh
ioYt8sbqEyCcfShMCBsEqyknPrSgGP5iw2g+vNW5oVtUQMfM3YIY9h9KYCZ3IWMEL1buRQMjhiWa
Yh2bJ6HPNOa0SOVAZAQeuD296tECIIxVXYchdvNUJTukYhdvOMUwJ1swWwkgUevpRLGVkACgg5IJ
5qykRiVON+7qGHT3pl3KYod5O5iSCR0H0p3QEUkAO4pjcAFKn1xUZi2naAcYwwI6e9QxyyckHBHJ
JqactLECx2sM8etFwEEpXAUDAGTjvT4tjE+YCcnt24pjxmONTswSMc1IrGNssN3GcAe1AM9B+B+m
Qa18RrK3uIEuIzbT7Ukzt3LGSp4I716zbaRpr+YJYUEO3azLwvTqOPcc+9eUfAeYt8SNKCgEMJ16
ZxmJu3v0r2Y2c98rOIfJTdsiQKD8q43dO2T3r67JowkpKaufIZvVqU60VBvYzri00SCKQPplvOiP
lVGcnnG7Axzj9RTp9EtnSILYxyox3hi+MsccE88kVZttKuHuI4nYmTcu5EbJ5Hfp0zWkLCS0f7Mk
qsyHO1hgv2I+lfUThRivhX3HhU6ldu/M/vPnv4yxRW/ju6jhtIYEMUTNHAuFyVB/ya4hM4YAHaP4
B1r0r46mOTxvNPBEYla1iBxzkgYyCST1FcQmmtIoV2EeQMOe5Nfl2ItGrJLuz9JpJuEb72M/y/3Q
cA4YlSWORTpUmj272+Tb0DdveppLcwzMHcrtGVbjk1JZ27Tw3PI2gchiOvrn8KwTTNLFOOMMFwrK
OhJ5yastaySxybVyynDAenNSxx+WrOpHB656jtV6AJbsr70G9TvC9Ez0+ucCk2IwGRoCy5GO/rTp
3MsY3BmwONx7VotbpOshTaZFAb5VySMVIYibqG12hgAco2OKVy1sYhjZgOCO/TpQ4IxnLfhV+SXb
HGCgIDYVm5OB2pXAjieQpty3y9waadxNWM0fNnA4zUjHzJMqOe/5dKvyeRJChaPlhw6DFVoY/wB3
JwOnPA6e2a0EV2G3nG0n+HrUqRmTgFRgdc4pk8pJK4B98c+1MyTjnbx9KTEfTVg2+yQzPskUZLA5
5HoewGQP61YaCO7eaF/3fmggxxrk7hk46Zz/AErPsnaS1mwphKn5MvtICknAI9cdPpWvc3XmTbkZ
QeXG3ghugz2xyeorqjG5lKS6FeLT9zW4FrLJcxylvLBC7myeMEj+fNeV/HWS5l8c30V5H5SwiOGH
nkAIOD75zn3r2GaBgBukUxn73mkrnuT7j8q8Y+MnmHx7qbTruaco5zJkr8g7fh1qZxcdiISuzzxt
qDaOT3PrUiuGjKbRtODyMkfSlESyXO1MqO22tfSNPt7y88lSrNnOW7DB/l1/Cskb2MkRyKGEeOD1
P9KV4ZQ7YOeMf7teoWXw/tF3rM+2ZiCAeAOOMn/CtC18AWcW4hXml2bmVVz37HB7GnYV0eRiPfGg
DspA+6TzmmGI7FdZOnXdxzXrsnw6tJGKSJ5cpZQgc5Yk8YODgHp1BpJPhvZieVQSsyLgBFATPY/l
jkU7akc66nkBfICYKk8E47UhkcKBkgE7gD6167c/DeGaJx80cjAYEfUZXOT6/TFPj+FNvK8amRY8
oD5h+X145wOOM8/1q+Ww1JM8bfe8jAk7ic81MHKkSYLRrwQQOa9Xh+FcTRrLJuiJLKu7P3/TFTf8
KstCqEblMpHAYYznkZ7f/qqWi9F1PI0mLbyXKsxz04I7UyRsNmPIA4z6V6vL8LbcHacbFQuC3Ud8
dfSg/C6G6ttobEq8A7cBhgdcc560uUanG9jyuaTzXdlXeAAM9Dx/+qmiV4mYuCwb5W55PsPyFeqJ
8LY3hlhBeK5jI3M54zjPbtjHTvQnwpZkAedCSrZO0ZGAeg6c8e9JxsDaT0PL5LmYw5L57Z6kY9fy
pTPK3ypKWbJYfNkfT8v516MPhmrfKFKrnu2dy/T1/wAKkX4UlcFv3U0m1Qh+Y4J6jB69+oP0ppIT
kmebxahOmQJXIOMjJH4cU86pcFtokkVSeAHxXoKfCWSRZHlkxGCFBGck5xgD9fxqK8+Ek9sWVJDO
g5wCMgfyzVJXM3Jo8+TUbhYiiyvGwOQFYj+VTrq16uxxcTKF5wj8f56V3P8AwqWX5ipeUKgZ1GQO
R3OOMce3NE/wiucwlJVaOTIc5P7vpx+tTZj5jhrnVLmaXd9rnVQcn94fzxSJq17bOVjupQAOSrnm
u0/4VNPBLsEmShG4SbQM89KG+EtzPKwilI2sVbA4DZ6fWizEppM4+TX7xoGAnkBflmVzkn3PvRBr
N1ErOLmU5/vMSR+dde/wruImKvKj7DkcdWIziox8Jr0plXSNTx+8bgY47d80WK50upzTeJtQlyft
UiOuWEqttIOMdvao5PEOpBCVvZ/vZ5fBHuMYrrh8J7sIFjlX51GCeCSf89qST4SXkX/LVWIxgHAJ
OORjv9anldxKaZyTa7qEpAF5ODncCJCFznrj61L/AMJLqFqMfbJSxBwSxOMnnHpzXVS/C2+WHnaC
oI3Bf8+3+RUUPwjvZZZEWbbtXcTtPHJ/Ht+tOw010OYn8W6jNhPtcwHGG3nAqb/hJ9UVdpvJMHJw
GxjPYflXRJ8KL4IBuOAxGMc54pG+E+oSgsso8rdw+0nAz1xStYrnRzLeKtTlY77mQ4IKpnBBqVfF
OpRSl1uWLMMBX5xW9F8LdSiSRjEWj5CyAjpzgexwD+NSn4T6msIk3qo3fMMZOOeQPyqrCuupzp8S
X5KMty+TwGJGf8/Wj/hNdXYFftLBmUh9qjJGee3+RW8vwp1OJm8wYA6Db938T/8AXprfCnVDI4GE
4BBBGDnP9AaVmTeK6mK3jPU5gVe5bCDABANSDxjqSKiC6+VVz0B71rv8LdUKN5a/vNwUIwwXzUc3
wo1RNxj2bsgKjEHnnpj8PzqrPoF4rVszj461eMv5d8+5l2jIBHXPcVLF8QtZXCfbeSBglFwAM9sV
MfhrqKqJZ2C7mKkDORgfT3p4+FmrZk+VMIxU88nJOPr7n6U+SSEnF7DU+JGuxF1NwmwksYyq4yf6
Us3xQ16SNVa6RiQBjaO309addfC/WI2kkjh3IF53HbkZxxk9KY/wt1lNxMPlqM4yOR0x/OlZmmjJ
0+K+u71kE8atjb/qxwMEenPFSxfF7XyJT5luWP3W8pcrgY+tUZPhjq0BZFgyRnBLAhhjt71H/wAK
61uOFx9lYccgcn6n24otIl8qNBfi5r2/cZInwTtOzkZxk565460s/wAWdaaZQ5tdnUhUyT+OeOtZ
Ufw91mWQxx27OQPmOPu5BP8ASpYvhrrgUs1nImSVyR/T/PWjlZPuvQ1/+Fxa0swciHdjDEA5J/PA
444Hc1Zb4264AAkdqGjAGZEJxjnHXv6+9crJ4C1iQsPJJK5AXvx9aevgTW5BJuspSUXLZ7Y460WH
ZI6i3+N2rrKjG0tiowSMEn1IyT3NNl+NOqwb82cJEhO4ZI61zA8EawUaRLWRtqnK8BsdO9QT+C9V
jdSbeR/mIGQTyO1PlbC0Tth8dNUijZTY2snGOdxwMdufXP51IPj1qAkZnsYGkOMknIHGCMHOc964
ZvB+rrhnt2HHUHg+1Mk8LalubfbOSO4HUGhxYWTO7Px2uizbtKttq54ViAx7dvSlPx2niLN/ZEJO
8uilyQAR9O3PPua8+Xw3fbiXhYsrYyRgmmHRb8IW+xyFR3KcYqdUJpHpB+ObvlW0uExkfd3D5QM4
H3ec9yajHxqjj3qmkREFwdhkyMAf7vfArzmXQbqJQDG208DPc+1MTR7pSV8l8k4yOdv1osxqMXse
k3Hxtilds6Ou3ZkbpATu4yT8vt+FL/wuqzCmRtIIkYgsUZSM55AGBivNptLuozh4mZgM7eTQmk3L
rv8As7YAByR1pWJ5Enc9QHxm0WUOX0OSPfkhUdcBsjk8fNnBoX4waVNNbKNNmSNfv7irbQD2Gefz
HSvKPsE2xnEbgg4YbcEZpBZyeWXZCoztBK8E+lK1i7+Z64fi1ocjl59KmLPjLgg5x1/HGOvpVlvi
r4f27106fc3DKqgbfTjPOOleNzxSL5eUIyCQB1pGhm3ECNlyeTjGKbj5AexD4oeGVUGSynjkBG4R
Rhgw9OvPbr6mpY/id4VlWQfZLqFAQq7owdw56gHjoPzrxVbWYyDC5YnkVNJDMAdqnLcYHrSsaK57
InxL8JSt5jW0yjIHlBDyOc9/xqc/EPwRIwkKTqX+ZY/LI2+oOB+NeHTwS4O5SG44HemiF1X/AFZz
jBp8pL1Pcn8b+CnLB5ZQ8h2EiM5K468jj24zzSHxZ4GXzlW6cOAdpMTqcnPoPT19a8PACqQVLNjg
npTdsiox29O49aFEXzPcU8W+C5pkF1Iqqw4KofkPPJ4z+NSSa/4LklkaO7xnCbzyOnBGa8Nd3Iyc
5xt+bgUxw2Mbuhxj3pcq7Du9rnuo1nwTcuF+3QgKM7wuBx6cdelJHceCpJSY9Tt0Krkbhw+B3bAB
OR29a8OjlcIVK8Lng9+KaX28j5Tn5VFPlQtT3IjwfcSSv/aECBRujUMFC8Zx9Mmpp9L8LBXaO/gZ
XUM2GzsPcc+h/lXg6sEc7fn+XHI7+9PaYvEwGQMYxS5V2J5b9T3FtD8KXXlOmpWsQBGYmdRxjluo
78Y96dc+FfD11hotRtGIXAAlQk8465968ILEK27dnspHSnNdBUwoYBhzk8UuUpO2x7cfBGjF5EF7
Akg2kESAsc5P0xTG+H2kyvI0d0juvLBHAKjJxkZ9vpXiLbg25XOcc/ShdpLKwbcfcAVViuaR7TJ4
F01Bve4twqD5gsoyp/p9Kr3XgWyRQ6ToztuCkMMNjt+VeRmYMrfM7McZy3FNMswyF3DqeTnHFRZD
UpHqM/gOGJUP2hUk3bdoH4+nNUT8PFlYO0ibOSFzyQD2rz0Xkm8ZnkwOchjwamGpXUbgLcyhTzkt
xS5blqb6no3/AAr4xzDbMDknCsMY9Mn6VWl8A+YoUOu0DdnOWB5+lcJNqd0kgC3c7YGD87c0k+p3
XQ3U3IIP7w4Pb+VLksT7SR283gUSoqxXPOGO1TyarXPguS3iUI6Fxzk4GRjtXHQ6ncxEbLqVCp+U
bjUja1d4XF3KR/vk/wBaXKx87Oqh8Iu0m15FAxnryPr/AJ70q+CJ7phGhDO3QZ/HiuVGq3TA7LmU
uBkEE/l9KfBruoWbxPDeSRyLnBB5OfrVcpPOe5/Dn9mW68UaL4l13XrkaDo+i2T3Elw7Ll3AyFwD
0wDXh89skSusTl0DEIWHUVf1T4g+JdX02XTrzxDqE+nSv5sloZ2ELt6lBxWCJ9wG0kAdaSj5g5XL
G0yKMHbg803yFlduDg8DmokkkAIVzg89KYruScufl53VdrE3LLxbGIZQRjhTx+NVWkOG4wP4QO1I
5I3DP/AvakMLhQxYZPFRYVhRuGeenBGadIQcHHzdB71G28fKXwD+poZG4Od1CAeZGUKdv096cLgl
i5Gx8cCkExKYKjB5B61FycfKaZRI8nzNwcnt1FPEjRMQBx16VC27d6YHFNDOWJyTTAlDfvMHuena
mOzLnd0B7UMdpGGJPWm5ZgDjjmp6i6gSHO7pz2pCGYE5GPc0855JIK+1BYPuIUYx0p2ARMKxGODT
Tkgjj6+lKD8/IwKSRn+6QMfzoGII8JktgE4wKXaQp3Lx9OaRnGM9W6fSlLqy9y3A5pgNwB24yQKJ
B5Z9+eKfGcOx68cUYG85PA5weaAG/LjOcZ9qTbtHB+vFK5APyDHekwQc8jNAChggA49eRSZK5AXO
aJGVjkDGO1Dg4z0NKwCEjgjr70qnsRzSquY8nv3z0pGOcjBzjrTAc2S4U4BFKdqEqrsfXmmlmXp1
PWgYK+hHUUAKw3NwcD604jcec596j3kHBGBRvIbO0Ee9SwJwMthunt2pqfMSNuPTNDTD5vlAPsPa
kMqFSCvX9KLASMQGGAN2OcUrOzMSehPB9ajWZUYYzkDA9qdHdgOC/Iz2pNCJJXxz2/u9c0ibXJLj
r0AqGS4yTtPHWhWVwcEqwHGKSQyVty5U5VemKcWCSbhkjpgUhuS0ZRlyenNMllC88gdsGqsA84LE
7c7ugJ6UPGjAluQOnrTDKr5bpxjA9ae8qAAbepzk9valsySuWUjgYPcVKhCyAjK9hmk3xqrNgjcC
AacXjf5j95RnJPJOaCiKU7XYtnPpimnA+bnjtUqyxqW3guSe5p63Ee45Tg9MdfzpiK6oSee/epHR
uuCcdKcXRRgfMR2/+vStIhKgErnr6UIYzymdBgn14qUW6xgnZnHBZjSrKqP8zEjGMLxUv2hWADkO
p5I70gIsrtC4BoaNY1+7hiDjP0qQmDzNzMcnsOOKmaK1LBvOLAdVxigCosEk+OoHUZ4qbZCqqGzk
DLYq0/2EqMSsA2cqB2qlPDGB8s245xx/OiwCeYC2U4DE4PvQr7hk5BHr2qu7tjAbOe1RgvnbmnYC
0WO4ZP3RxgUiRB3LvzgZqBZCWJ5PbAqRZQHKqDjpg0ASsvmFhjgfhmkKD+D5R/tHtSyPl1QD5B3G
OtNDjDgtkYxjrSAlRiMjJAPpTCGJBxkk5xiot555yOw/Cpt+6NSpRT1OetITGlyGyAVUHjHIpWYS
AnIABySB7d6YzjaQG3eg9TRgR5LMetMETyBXVTuXIGD2zQqg5C4AJzgntTNqNGZCyh88L0NR+YAh
3SHn+H/69TyhYkkdkIVR9CT7Uj/vNo3AyLz83Tp0pHZHwd2wg/X8qRSm4ksD15Pc00gGlTM7beAO
ACeakMXlfeYbTwcdaCqEDJC8859aQnYOnC/KWzyaoZHdbZm+UAEDFRIBHKu7K46kdamZBGA27acE
Y+tMki3Hr0HGKYE9xIpQKORyeep+tN2gxkhQh6BST+dRp1XeoIPWre0SSFV2qSAc9hSsKxAEZSu7
p0LZ/wA5pSinGDwfbGPr+VSfZSAq7l3E9CeOmamWzcKHX5t3QL35pCKhRsMueuMAUx2JChh06t/S
ryWbSq6gqJcj71KNNCKWZ0VM4JamBUEjsAFRVY9c8GnxwG3jZ5ACwOMEVfjsQnmOrCRVXIOCM49K
ilsXlfdCAYzkqc9vWi5RXKIiE87yM4GPWhvJIG0kvkZV+g+lWjpwQZOJPl6r6+lJ9gS5mCqhVQOf
6UMREQZH8va25s5YDI6dKeVUZQptYYKhTnHfrVj/AFB2R4dn7AdB0z+houYCoyQo+UZIOQOvFQOy
KUi+fIAQQ3OOeKeCltnAIbbtJNK9rMAVQkgdRnGeKe1g5bAO84yRjj86pAVJWjcbmXgnI57VYs7N
XRmY4BX5eOage2cfI0eHBOAKmiE1uwRlBkxgKR0+lOxDuJcsXf5XY4+UDPSqjWjkngkrxt71aeLz
mRiOSSSOuKZMjQkleefvHoKFoUkQvEjgs3DAcimgBFKFWBHOTVpbYtGZGTBBySD1zTRFHOxX5kdj
wc8YovcZEYjMRgkgAsT1qMgvGf3hJ/umraBVUr8xzyvHHuKRwELNsTK4z3Jqb6k3Z2vwJkcfFfQV
thtmLSqjDru8tsY/H+de3pDeW103l3JfY/mqjNu5yDjA65wQfrXgfwu1i18N+PNL1G7VljtXL/KD
jO1gCcH1xXrbfFzQGIUsyjJO5VYZxnjIB5Pb+VfRZZioUE+dnh5hhZ15KUeh0oa5X7QPl4VsOCQS
cdPxB/nVkWd3JPHIUjAaMKA5yV+h9a5K5+Mvhy7Dszsr8qkewkLwBnpyM596g/4XBpNvL80hdmdj
91yF4HA6fXvX0csxw7jpJXPHjg6sJWaOI+OcAXxrtEgDGyi8wj5QTyPX1H559a5WGKOK0gfcY24O
SMqeO3vWt8U/Etn4u8Vm409pDbfZokBdNrEquWz68k/UAVzsMktysao5KgEKh4A9+a/P6r5qjl3P
s6TlGKTKN4oluHXDnB47Aip7WzQRPIGIXA3hByB/nAq8mmqiK0hAkJBAb5s1KNOjikedyYwZcFDj
j0rJNFN6lZIBcSNA4SMAZDNgdO3akjtJBEABu2nGUIwB7+tTXJKl2CNswMHofqaZO4hnRR+6dT8x
5J6en/1qTQeZDeOsJkbYFQN1HfjkVRaeGBvN4uGI27HyOcYq46b5ziM4JxjHAb0/lVK92xTuHjwA
PTvVJpjuKl5DKIlud7IjEkDt7D64xSS3yTPKuzajHIHXHHAqszKHaPbuBbKleaJ5owxCZXH8WKGh
3JpLwvbhTGMJwoA6DuaZBPGjvIV+83TOadLcoJVICPGRyB9P50G5t1RtoOeRtP6U0IrttBYFTH3A
pYYJJJGEY3HGTmomyR16+nTFWLeBSpkztycDnFNjR9A2ZligYvIrEI8pY7tvXPI5HB9+54rQ06Us
4bdLcSLksxGGPOMenf8ArWNpF29wYpCEi8zkeWMANgcYHHAGelb1hcPGgJ80QyKVG0gDGBkn68j/
APVXSnqc/M10I7izF4XKbp0jYOYlzwBkHg8fXn1ryz4yyRnxhc/63BVGV5FOSNoHUjp19eMV7HY4
kLHyhGeVVTgYHUHAOMfj3rx341HzPG0pCSyobeBmkmOSf3YxyB+H0ApS16kQa5jz5naNzxvLd8V0
XhhpIJ4sLvfdlY8DnvuOelY02zy0k3btw+5/drpfh/a/aPFOkxyKskVxOsJEjYUEsAMgc/xVjsdG
57l4d8E6j4h0Z9SiiCqqgokgwDk47Z7Z6cU/TtHkvtRg09WMc8h2rtYgM3YFQpwP9r1r1z492sng
Oy0nRdMvBFYXMOZRGu1mIbPyt65PPHINeQWcs+nGK4jkNvKhLxt3bDfQ55HWtEnJXMJbmv4v8G33
hHVbay1L93O8RkCsNpKHPXPXGDzVnw34I1LxlNNBpEX2k28ZnaMsNxXIG3pjuO9esmKD9oPw4srF
NP8AF1hGIgr4AlRiTkDuM9uxxWhaRr8A/DV0RMlx4kntzAsWeEPyuC2MZBHP1qHKw1CNtTxnwn8P
de8TQSpY2cjukpieIp91s85xkDHIz7da0Y/g74kF28TafjUA8ism3cAADgDnJOc5HvXoX7NuqXV5
8XvMaQrHqCzTTRRzhQG2AAYA5Gecnp61Y+ILeKPDfxV17xJpFzdJFpMyyyRY3wkSLjJIPzdenUZB
4qVPWxsoqKWp5F4j8M6rpeuf2ZewyQ3cRAS2cZGTnCjJz69/8Ki1jwtqfhxhbX9q0UgDSASjDBST
hsfhj/8AVXc+N/Hll41+K+ja5bI8Qu3s9ysflSRSodCcHg49M8+9fVfxg+GemfE7SZbGR4YNXiQp
BdqAshVctsHUEH5h68n1pzny6Dsj4Vi8JaqdEbWXtJjpccxia4xhOTjJ9xz+VZ8lsLieJYxkMQCy
8ZbsF+v86+ubfwlqPhr9mDXtN1C1ENyiSSvE7lyFMwyMjpx6etfKkcYh2GAEKh5Qnk47FgcnkDtz
3pQm3sY21Ga7oGp+HyTqMF1ZmYGXy3jJAA+UAYHt29DVRYkkWJViTzMZYFiAv4568V9F+ANLf4/f
DzU9I1WMf29ogV7a+iO0vGQxGV+iYx6k18+TW/l301vJCHEZYRn+JSMjIzzXQpXRDdpaFSVYbeGc
Ro5aUjaZBlQB2z065pETyLqMnmOQ+U3mcg59gOanlmwoaNRIWGWYAcY529eOTQwlmtJN25mI53DZ
tzk8nPHYflTUbimIE8ssiIY4m5jUNuGccZH0x0oih3gM5WJUOAXBH5cZzUkB+ywx74hLGq/KQBhD
2/Hj8s05DMyPJNDG8QbKgqCC2Dj16E/pQ1bYlXe4/dINuwSkuvJYYZcY4Pv15+tVpF2TFJVHkgnd
KI/kJ69fWpBFIvlOZVdcb9/oOv8AI/pVqC1k8xw0ZLMxYSE5xxjHFJI1uVzYIIFxI7E87HXgg9Of
pTWt2ijVmlyd5YRrxgYIx19utPFuwdmDsrrncynnH0/CkaCO3iXJO5wXGTyPy+n601qyX5EZgiMj
MS8sfmkfdGVPIBPfj6dzTjatcgl43kzKQR90sQfvD+fSnxW4dozNuMXQsFyQ2Dg5/HGferFu32ki
MgIznYdxwu4DjkZx6Z9zRJJbCUb7jZ9GYyRboGygLKJF27lzj8RnFNmjyw2uuW6Z54Hv+VfUHj/+
yvgvL4N0yDTxqGi3MMzXvmYeRtx2lhnGeSfwFeFfEPRNO0zxbcx6JO9zp1wI3XcPuKyAnpnoTjHt
UKdwaSehyf2RWdYjFIBjLKAck59vrj6ZpZFw6DYYyHGJG6447n6DpXvHwG8C6bq/h7xZ4nuY2mvd
Itn8uGRRsb90zhuTycg/QZrD1XXNC+I3wxbVZIUsfEOlLtMCc+crMMD6qPTPakpXKfLHTqeRqLiV
5XSEsGJLFhkng5x+np2qzGksYZzFI1uoKvM4LMDnrwP/AK1ang/QYdf8XaZZm48pZ50Ryy9mZV5B
Of5V9GePV8OfD7xhpvhG8s4ZtIvbAQG42bZFDhlDjHAwwxyfXmnzpPYiEW9bny2LmSNBgKVY5kGP
lb2/XrUnlp5kwRfKjQqRJneVxyGGff8AlW7450O28L+IdRs7aY3dlBcGOORADkepIwPy/WvWfht4
Q0q3+Cmv+N5oje3MZaKGBkyoAeMbl7H746+lEpBa7PA4fLjTZ5byOEBdnXOCc8H3x2+tNSIeY0hQ
rEUIB4XPTI4H1/KvXfiND4f8XeD7LxRozLY3rlLW804H7rgctjOAPu8gVpfs9/DCw8WXWpaleltm
kW5na2xmORnVyOfqpNNSSV2JQbeh4sNPBVFiU7h0faMHuD+VI0cWHZh9okcbGZlwfz/hPsMda9n8
CeJdM1/xnpml3Hh+OSG7uFijZGZsgn0HpgnrWN8Wfhnb+AfHl1pcN0z2RCSrM4IOxjkc47D/ADxQ
ql+gSgzzJ7SDcIQPLTZu2hsDk8/jxjOKesNsoZEG4KWLEuCfpwPSvW9T8T+FNN0XSdP02yj1D915
lzcTdS2ScDK8jH9Px6HxD4GtPF/wffxpY2n9nTafJsnt3PEgLAZLcDjPYdqftF1CMGtkeFjToJLi
O4kaQwZIQluMYyQVB5696kggiaZpUBDltwcEOT1/z+teifC/4az+MZbjUJgyaJp433F4RtBCcso7
/dxz710Kax4Jk8Qy6dJpyDTJHeBbtmIVBtO1gAOhIGD71PPFM1R4v/Z8cciGWNCWZshhkc5B7e5x
/KpY9MhHyJucyDPm7ecden9K7vx18KtT8J+JIdLSGW9kuzi1kVTiZsDge5BHHv7V1WteHNA+Gmi2
ttrUUd7q80qubdP3bRRlNwyMk/KeMn3NJzvsOK7ni/8AZkKSqS3ltIuFBHDYHGQBU32eFn/eKhHI
O8kYxyv1/OvXvEvgaw8TeFoPEvh2JRLawql5ZlgWjchju7544B9h0zWT8O/hy3iGdtSv3+z6FZZk
nuimY8DClTjq2SOmKUZ90ZvR6I85isYJEWcwBXfIBC8ADsQPX3FNXT4yHEcQ8iQsqh165yCMf1r2
Xw5o/g/xPNe6Mkb6XfSq4t5XdQHk3DafbPoc1594l8NXfhHXbjT9R8xJ0zGCueQOOK2Ukw66mB/Z
NrNIFjj2IF2YBxkAcc/TtUEdjbTq4SBAoIbk4GQc8ADjnPNfQ/wd+B0fie3kvdYjeO2aIi1t4ztw
cgAknnGM8V4ndLFZ6pdQqqbo3CMY8EOD9e9JVEmN8q2ZkR6TbyJjyUYPyxA+YHkZ6U9dFtfNDiGN
VU43MmWyD69a9tt/Bng+D4c2Wr3dxMl7MoEtpGMuhyQeM9OB0rQ+G3w28IfEa7lt7W4uY5beATSA
IwIPII5PPP40e27jUT58n0m0gQM0BErxmPO3J/zz196ZBoFnPKnmWvlI5wfNG4Pk5xg/nnFepfDn
wVY+IfGN1pWp3T6fbRLJuuJRjDBsY9jkn6d60NW8O+CtJ1SbTpNSnY20jJG3l5DEHGC3QdPWk5ol
xd73PID4Q003EhFrGkSvkSE5Oen9BS/2BYpKbdLJdrrtkbZk8Dr3New/GX4X2/w3GlRWc5njvo2l
DkY8vGCBjPcNn8D6VR+H/wANrrxRa6pqs04t9OsEO+cjGHAyAPfn6dOmafPHqT72x5PJ4Q0uOSGJ
bVDIThDJxk9euf8AOaRfCWlmZc2SFDhxhs84PBx6Hn8K9U8b+BH0DSrHWrKdL/SL1Y/n3YCTEkgf
kO3v1rkZAYt6FgUI3AIOB19gfTrS510IvK5x58E6dICRpwby5MAlQcY7nPX/AOvS3Hw/0sQur2US
27MWOV6H0GK6yeNmVCH37m3KF6VJbRfaCECmRmOcYxnJ69v8iq5l1NIxkcdF8P8AR5/Lc20PmIAA
SgDOOMEcdj3PXNVbz4daCZGJtNs+Dh3Hy/kB/PPXNfQ3iX4Ir4bksmv9VSNnjDbXAUkYB55OMHNc
tf8Ag6xtYJri01e1m8n7ikq5c9cdeelHPHYclJHj6fCvROM2ojc9ZAoABHQj8zxSS/C7R5Ymhitx
HknZIOoz0/WvV/B/gefxbJLHbK0EUSO80rjiPALYz06Cty0+E82pXqwWl/bz3LsdkKuPnAGRjjB9
cDvUcyT0Ertas8A/4VLpJuShhcFeCvpwe/pSXXwg0I+ZIVEUCsAxQlig7EjP4E9+PWvUNTsvsF20
ToUu48pIkhII7DP5/wAqpG3xcRsq5y2PLHRvY+orTmuY2lF7nm3/AAp3SJnASNggU4APytlsg9PS
q6/BrSCgjfd8owAi4yewxn0xXpghkjYhSVUHGCwO089z2/xrQ03Sp9a1SGzs4y9xK4RCg6dv8Pzp
XSNY80jyGX4HaKqLtVlYABAW3H6nOc1F/wAKS0qZlzA2w7sEttIP1Fe9678Ntf8ADum/bLq2U2ou
GhKg5O7aCc+2B3/rXLNEWhZl2hlbAJ525749xk1Om5pzNHlT/BTS5WQFiihtpOck8fTpnjvUSfA/
SprsF94jGQAMDn1/PFe9aV8NNZ8QWUV1YQ+ZBM7RoCSokIAzt49+vfFc7d2EqXMiPFtmB5zwcep4
ziqTiHvdDx+T4HWVwSVYr1yFOR39vpjntUc3wOtSYwhZZBk+XwFAIyefXtivaNO8OXmqXy21srST
7iwCkZwoz0PsDU+u+FtS0RFa8t5IS+QofPOM8A9+lNuEXoPmktzwWb4GWzCRhM/c4Pp2Ge54/GkP
wHttzf6Q5HJDKoOMd/zIFe0NFvSQlRlgMjrtBHIPr0rQj8G6v/Z6TwWoS2eMyBlTqn94dMDj15zU
KzEpSPAJfgWs4ULKwJOFAXgZ9fXnFRXXwJZWlO5htJBZjgdxn9O9e4NmSMFdqBXJCd19ec1c/wCE
cvdRhW4t7V7iIsQHQj5m54H+HvSdkaJtnzw3wQkyghnMi7tshYAAHtzn2NQzfBW4d1MUxBBI2ge3
6c19DX2kX+l27SXltJCNxAEq4Bfrx79eKzYcvIbe1jeSfeAo2bi3BPT6A1KsyW7bM8Hk+DN1FceU
JOv3yV5Ud/qc0knwcuCVQTLkr82T0x2/p+Fe/PoGoK4D2MxdjhcRncAfbHUE1mynEjERE5JLFc5B
Pc+lBKnI8Bl+E16HwtyodjgJ1amv8KdQikPmMpT1LYwMc+te+yWst05eOBpCz4ZlQ5HoPx/CmNbG
3mCPE0YxiQMvB9x6/X3oHzyueAS/CvUg7AYGPvAnBHU559qif4ZX6IGCjaAXy5wTxkDmvezZniNl
3ZY7dikMwx3Pvx3pk+nzRzXLMkoXgAqn3Rjj/H86WlzROW54FJ8PdUj6JtA5xUL/AA/1SRTIBwAc
g8n/AD0r3cRAB2ZcnlFU5ye3+fpSLZeQCwtwQwyCYxn6fQ02hp+R4H/wg2pHZviKK38TDAPNB8EX
4xhDt5wcda9xvQylYjDh1GCjDng89O3SofIkllRWygGQE+X+fPGKkq99DwyTwvqCc7CVPIw1Nk8L
Xybg0RBAyRivbpbWNGKhQQo+YuB6n/Oarm1ikRd1tu2ru3jAbPQc/wBPemFjxOTw9eqRiFmAPPtT
To92UI8vn7xPbFeyva5K/uVMbZIJPDn1/wA+lR3mnxz+YvkjO7hFXBIHTp2xj86nlI1bPGl0a6Mg
UQlwenHWg6XdplhAxxn7vNetTabHCVk8obP4duOvT161DLaweYzDO5V+6OhBFCRfKzyg6bcNHv2n
aPQdKX7Dcpk7STjn8q9NW1t5ZQ5gA38rzjaQD1HvjpVa4tUdf9U0bKcNkZwO34U9BPQ85axnY4ZG
PpxTGs5AThT9TXoz2KwDa8IUgcY7jGTUE2n2YBWQE55DoOEHbNDQWPPxaTY4QjHfFN+zyAnIP4V3
i2cIbYVDxgkZHHHUGobjTYkLokSHHO4d6VhHFrbyHcCMYBPNMeCVCBtPTg13B01BEd8aBm6ZPWom
sIjEyjlsnlam2o0cSVZuoJIp3ku5wRz2z2rqzaJl2CHnnFMntINmQM5HJxkjinYexy0qYAzmk8pl
GR0rqJVhSP8Adx7uPqBUTafGybiuznPI7U7Cuc2dyEDoT6UYbd0OfQ10EtipZESIdM7j39KH04QT
7CfmBOR6UrAc+UZexFOcsoIYdRxW29pExCqPmByT2pLi0jSQhgAx4BA4pJDMJvm6ClLM3JJPWttN
PwS7KQrjsKcunrImGKsB0PamBggkeuKXJHJ6+laosAgzjk+nWka1V5NvDADOVI/KkMzWBAIGc0i5
GfX6VqPpuc7xt54ApghQnCkYPBJHNAGY5Y9f5Uu9mHPStU6eAAzDHYj8M1EtgDnOVGM8+lIDO6dD
z9KTdWmbIGQgD5c8N7U37CXZgFwQpPUDOKdhIz9xPuaFbHI6+9aDWKFlAIGfXvSnTw+CjAkjoo6U
DM+TtnGfYUbsAYGD9a0m035sEntkgZ/OkbTwu7I+XOAaAM53O5smkJyAOavppvzHLAAcc07+zSSS
o3jpnNAGduOeBS7zjg81fOnr25zxjHTikfTxGpYj16+nrSCxSbIUcjrjbSO2DwKu/wBnsoU+vQ05
rFeMgkcncKaQiipz2HSm9MZ6A9qux2C7+oYcHB7inDTyA4GSR6jFAymevGDj1pGfjH6Vof2chRuu
QR0PUU19MMjfL6k470AUB2xxk0rr6fSrq6YZHYqCuOxpTp537QSOx9qAKG0sTjtSkhS2ST9KunTl
8zYzhcnAc01tP8tgSwPsetAFQnjBJHfg0hYZyRwKuvYq67gcH3pj2JRwM8HtnNAFXOG+UdKRyx5w
RkVoDTNzBRkZOQexpkmmtuwnJUZOe9RfUdimFB5zg5ofaOmRVr7C20EDbkdTStprKDuwoHJyelWI
p5y3UkU7IwcfqasCwbJYkYx/DUkmnN5ec4A5wR0qRFPcR1596DtOMdanNm2VHr0pfsDM+0A5HXFN
AQB8MDjpSySK7EtknGKsrp55ZjgZAHbg1E1icZB6daYyI7egOT9aV1G0Ec+5qRrJygIxnpSG1ZOA
+elAEJXByAePWnsygjAPvUptWXgtzj160hsm27g2cAZoAiLeZnggeg6UobaDu5x1xUi2rMhwSx7A
CnmzKpnkMc7s9MUARswYZwRjjJpFn5IOceo6082UgbBbjqMGlXT353DjrkciiwiIMBHuwc5PJ705
nQOuzc2RyKebCT5cfNz0701LV0clW+YfdIoGI0iB8ZYAHPApEfaAwLDB4weae9lIVZ2BHuRjmlew
kdiE5A9MUrgKJ1jO7Lpk5yDyKa9wZpAPMYJ1GTTDazMSD6454pTZTKDjBHfFVoBK1yyB1DkoeRkn
0pqzjAAkIAOAenFBheNHBAznHXpTfsUxyvllmNS0BI907D/Wlhk8CkMjsw3zlfRwOPpTGsJgm4oQ
OTkn0pFhkdAUBZTyWxiiwFhJy3zyXGHQ8dTxSSXbS71MhcZJAzx3qA20mDlcDODmnrCduWGMc5Bp
6dAHm8ZSEDlkA6VKdRn3solUK/Bx0qkltJKWCoTxn8KBbSFCVGVz1zRYCwt7JF8wkyM7sHnuDT2v
XJJV85GD2qmsDMQCCR7UeS4ONpGenHWlYRc+3yxPtD4Q8kL0/Ckmu356leckHqKqtBMByhyPzpZI
2xs2gtnqDTGTvfyxqU4ZWHB7Un29gDkAgjAyOmKgO7ywpLH0HWo8ZToQQe1AF57oCPcMgnt6fWk3
l5NxBB259zVQoVUNgn1PrTlZ1jPBUHilYCcTMx27cAHOM4J471NHfGFmY7mzheuVql97llYk9qQn
EhBztpisXUu1IYMpxn5fYf40gvhCSpHsMrVQbllA3ZxzSySB+XGOc5FAWLh1AM3meWu4N1PXHT/C
rK+ISrqWVTgFQAuMA88+tY5xyASc/rSFs4GOnNTYLGvJrh3qVTgHCqD0H9Kln1rzIpBIkgbI688c
1iqpZSQOo60+RnGFZjgHp1xRYZrPrUUtssZi6Yycdfce9NfV45pg77kP8WME/UDj8qyJBlQSOQKQ
qCjMThsjC460WQGyNXMokCjKkfdzjp3z3NZ07iZ2Zsp3GTmoCwKsGUjAwMU5VWZCAMEcgAdfahRQ
rCIpYhy4QgipRApLch+uOcGoNuflJ2sTyKVwYwFI+fPBqhj12Eg7i2Dhs9ql+zq8JnYhcnAxUCKz
kDBBJxwKv6VDCupWYvGd4GlG5IwN2M89eKA2M+VMHLMCOx9RVmKJFwy5AI781v8AxE06y0rXmttP
VjCAp3MhU5KgkYPoTjI61z8aNL1kKHHXNDaQk7nudlCtnAvmeaMYeMoeA2MdenQfj+FdG8EccWLe
EFkkUhyuUYHaSOQBnGOme1cXa39yJkVgbYxldvzYKkHr7df85rr9OuHnihA82K3IO85Db+eDj1GD
XTCN/iMJ1LGmmn+e0jSRBTIMYjUEDOevHP8AWvIPjlBHa+K1WF/NLWkEjEEEElACB3BBByPXNex2
mswxy7JAMJ94si8k4Gc9scnj1ryP41tnxFZkKI1awiZSAMYGV5OeT8p54pzik9BwlGTPOpdj2eQv
3SBnJ78VueAp3TX7JoyVkSZCT6/MvfjnpXNSTlXADEr069u+K6DwxLFJeQyKywmKQOPUt2AzXO1d
m1kz7p/akENzB4Wube5e6tDG8gJbCO3YYJxkDd7cV4isDyqWaZ5DGpAl45wDkfhwPwr1HwX8StP1
jwPc+HvGEDyQpbZs7gEGaOTJbb2GCD275rgkmtvtsErEywRXClkKfM0anJ+UdMgHoea0V0jN04rq
ev8AwW8Pf2RrFj4v1e7TTdMsZBKZBKFV1b5doyckZPU1P+0b4LuvFmp3HizSLoXmlXEQEjwHcYdi
hcnbkc7h6dKqfHLx9ouv6bpth4T82PSmg23VqFAjZw425BGeoY9e9RfADx/pnhzWdRsPEtyyeH72
CWJo1T90XYKAxAHBG0/nmsrO4OzepH+zBALT4zaWqx+YzQTcsy5UmIqFx/EMt6eldn8Zvihcaf4h
+IPhe/gkNtcJB9mkeFQAVVG4Ygbg2OnOOlY3wp1Xwh4d+LF9r0mqRW1pp91IbWOdDhomyACSByCo
PvkV0niKz8DeK/ihdeJbzxDBDpkzrJNZlSCAqYOCV4BwvUVHLrexXurRs8G0LT7vRvGOjyXAKSCa
NwsqcYOCG+ce/X1zX0h+1F431HwJ8QfDF9p8n2a6hhd4t5PltiRlOUH3u3fv2rzj436t4V1H4jaJ
PoNwt3ptvBawM6A7doHOTgdnPbtXSftXeMNG8aXmh/2deR3iQQyRztH1jZju6ce1OUVJq4ko6anq
/ijxhafET9nTX9Ysi6qLd4pFJb5ZEZAQrL945OcYx0yOK+IfOa1FvIY2YSuRIOSQcdxzj1/wr6P+
GPijRoP2ZPEmh3V+h1S5a5Jty21mV9nTHoAfevnPMouJISromNxZhhRnjP8APt6+laQSWhEo6+6f
SX7F9sJdT8XhpFfdDb/usrlF3Ng8ZxznOevGK+etQWB9SvlDugSeVAZEP8LNt69cgY49K92j8RaL
8A/AUlrpt8154j1SAJLcREMqKHOM4PAODg89T614MbySaS4kuGaaSdzJI6r8u9juK+/Unv61UU73
E3d7FaGUIWhOFjdGHmL0zgdfQ9/x5pksMcSyorS75Xw6lyAOOAMf561dljjAYIokGzIVegPPHTr0
9MUyJQsbGVQzOMsh4wSO/vV2d7kuxXltDDGgQqzSDkgbuM8Aj8D0qaOzdnVIwSpXlSMbhyenrjmm
ovnSrJGr8ISmwkHPXJ9OPWp4iGVY1VcF8sWBGemSOwP+FV6gmkRTxxjaoh8v5QoIO0Rg8bSOuadF
bpFIyKWJctGSGAXPXufek8yBNrTjY7glk5JBB4OB7fzp5UPFI0L52EH5tysvHbIIP5Yosri57EEl
nIiAb1cqpIIbIOSd2cdwR09adLK1xNGVT5UJ2uD35zz1/wDr1Ye3migVo9zsxw87ZKkjkgY4B+lN
lZ7aBg4Xay/LlciXkZGCec56+9UombkV4ZcRyST7wh3FdqenXp68VveDJ7bR/ElndX9ub2EyBXhX
ALBjjAz7EH8KzUikeB2yVlILGJiWUMeuCenA/StfwVol34p8R2unWhkt7xpVG+QqeRgkjcCDxWdS
yQQbvqfRf7W89lbaRoVrc2jfaZ0k8mfIYxqNmQARk5O0/j3r5geJ1O+SX5icE7i2CenpzX1L+1l4
fl1fQvCmqWZW4hsY5VlZX5UMI+v1KfmB6V8s3E6tIVICKRvGSSd46cDtnPNZQ5bFOyPq/wDZw1XT
9Q+GHiiRdPWJrWIm/UruafMLEgZ9FUjHvXy/rghGt3FxYQtbWsjlxbFduFLHrj8uenHSvqj9mnwv
caZ8KvEUNzIki6vHJJbgybTloivc8DPp6jpXy54g0x9C1W8s7yLy3tmZJFAPcjPPOTxnNON0ypWb
vc6D4L6jo9l8RtDOq2jXaTXaxQgOF2SFxsbHfBx9K9V/a+utKXxbbWiW0sOs/YhO8keSrxEnGffI
bGPTrXmXwe0q7174g6J9iRmS1uIbiYBtpiRWVixJ+g/OvWf2v9GvH8X2+vxR/wCgS2iWvmr98MGY
qOnTn9fehazLeqPnJrc3kc0czhQjFiZAeDnPQ++D+Br61+H2oeHpP2ZdWljtJItKihkE8O4ljKNh
PJxnJ28/T0r5JifylGYfMDDeVOcdcc/nn8q+svAHhrUIf2YtasZbeNLq9WeSGJ+ihtu0MB67T7/S
nON2rGV2lofKt7LaRzPHZRP5JbdEijIUBhj5ehOa9z/ZXsdY/trV3s5g+ixQqNQtyu4su1yMEnIP
P/1q8Oeza0lktnBMisdwYEnO7kfTPU+1e/8A7Jmu22j6j4h0uZil5exRRwIzhWcqHyB/33+WfSiS
0sOm9TF8CXngiPx/oJiiu1uUuoRCQ27JzhFyD83OM4Az75rE/aFi1zTviJcS606MsgVbeVFz+6G7
YGBJ7D268Crvgf4Ya9a/FTQLm5tGSCHU1lJdvLKBZAScg5HAPH8q0v2mdU07X/iq5tZkljitoYJC
kmVVxuDA5/Af8BBFTC0WayXU858HNoCxXR1rz/KzsQWoBYZJzxxxyvTFezauktx+zrL/AMIqsj6V
9qkOoG4+R1USLwM5yDn171wHiT4OarottZy6ZLHq1neo00MsCg7cdQSCc4r0/S93hz9l/UNK1GIa
bfXcsot4GwxkAZT1zkcgE+1ErPZmab6ifAyLzPgJ4+eOFmK+cGMv/LQG2HA5Jxj+Zrz3Vfh1pSfC
Wy8WW16Jb+4mWA2EcgBQbmGSPUFM/Sui+A3jqx/4RnxB4EnnMLasJlguWUKqsYggQscAn5eOmea5
h/g9r914o/4R3EyRhpf9IiGIwAGLMM+uMdee1ZW7l6dD3D4pRj/hNPg/J5jGM3CnmTAkz5JGBjvy
B9aw/iZ4Gh+IP7Qs2j3kgskk0uKZp8/3QAEx0535z7ewrL+KvxW0y28c+DhbJIyeG5I2d4HVlf5l
Bx2yAgA7VV+PGkzeMtT0zx34fha/srqCC1JgXLQOIznJHI4I7UknfQam1Y6T4AeH10KL4n6cshnW
0ZoNx5WQKkwX6A7f/HhWb4Wt9/7Jest5bFba8kAfG0MFuIyePxPzd8H1p/wvlk+EHgTX9c1xGWbW
FTybYgmSQgumc5zg71yfQVV+F2qW3ir4O6t4BjXytUll8+285yUkJkDBR3A+UU0nfcbkupx+qfCd
PDOneCPEa6mLv+0prWSS3XaCGyj8cZ29B1rqP2t4Us/GulMNokFmXI2NknzW5zjnt/nFcP4W8EeI
Ne8S2WmziWOLTbgFhNv8uNUbBwMdsYrrv2mfFul+KfFkU1uWkNnAYGJGcbmLHB74yM/TtVxbTM5N
NXR037LniXU9c1/UrC8u5LiKG0SWJWRR5Z3hewyeNv0xXgPiaBhqOordJIWmkYYOCDyc7SCBjOfy
r3X9j+yms/F2tyTJIFfTlZZUUqW3Sp7d+n0rxPxXayjW7uOSCWGNZpFVmjO3OW5HqPT/AOvWkEuY
zlZ6mKhJ3O8p2hcEAEEgcdB7V7x8LrZvgvoGo+KtblEc17D9mtLPeBK4ypJAHOeR3/KuJ+CthaeJ
firo2n6jCLi0l3EQv0YKN2NuevA/Wtr9oi+k/wCFm67YvJJLb2bokEG7hPkUsB+eT68VT1dhpW1T
PMr6+K3sk0OYFnleRQjcDnIDY6f4D3rW8E+DdS8ca/DBakujnzp5W+ZVUc5OeOefesaR1kXZt2Iw
DKoXB9hj6CvfbqyHhX9mSx1vTS6alqM0STu6AEbmIIHHHygnPfPPWqkxrvc539oTxjaeI7rTtMtw
JEsflkmUna5IXOOPYDrXdfCfwpE/wN8T2Ql859RE0okGAqfugqKcHPYZ6YxXzd573BMjNuQoQF5L
9eST+Ar6G+CjPN8APHQhneZwLhImBJC/uBkD2/xrFqxonc8QudVvtH0q/wDDckpntVuxKiz/ADYK
gjjjgGtP4aeArv4ga01urAQ2x8y42qHwmMBQB05xXLzhxMyRrGSvPOQMY/p0r3X9kaEp4o8RgmNF
ltEYDGMneB1x7/U1TajsTF3Zwlt4d8G32qranUplPneVuCkhPmwO3XNc9408NDwl4purO5SSSEDc
hJCpImeCPfg/41Tvs/2nOQpM8kj7URsFjkn86ranaXVndBLuOYN1AnJBHp1/Hmqik9xybT0Pf/2g
fDs3ix/DcumXKTNBbujKrZwWIwB144557CvE9d8Jar4ehjuLy3draRzGJJI+rDtjtjPt0NQ3moar
p626XElzbpJzGFJVSAOTz2xnp+leyeBbVNa/Z78XXF7I91LbiaaIzlj5Z2ZBBPQ5/wA9KTXKyLX1
M74AalbP4f8AE2hNMftt/BI9upxtYeUwPPTnjisX4V+C9Z0z4jaBNPayGGKf97I/ybQAc4zzz04+
vNdV+zxpkf8Awh/jnUXRjd2cRWG4AAIBhlPBI6nC/mOK4r4R+KNVHxN8O20t3JNFJdpFK0kpw4JK
8hgcfT8PSsHds10RT+N2oWOvfEnU7/SiktpOkSl416lYwp4+q9vUVweSih9+X4O/jd+dei/tC6BZ
6N8TNStbK3aGNFgnWLIxueME4/Edfc152yyQvHtUON2CFPPt/WuyC0OZysyskSqCxZnQ8MSfY/zr
3D9lbQ/P8c3lzJbefbpYtskmUMFbzYz+ZA6+1eMywjft2kAqBgnnj0Pf6/8A6695/ZJvJz4xvrCd
2NsbIybGHAIkQZ7dcgE59KyqJo0hJM4F/Fl34Zv/ABTomqWzXNteTXBKSD5hIS6rIp7cbfXivPPK
kuSY7a3kllZSqrEDu3YwM++cV0/xA1C71HxjrNxdz/aJjdyx+aowRtkYAfgMDr6msS01GXRHOpQe
UbiECURMAclT75BP4etLZaF21ufQPx2TUfBlp4Hu9I8yxhtI5GkgQ/u/NxERux1xgj8a8R+Iuv2X
irX1v7K1+w5gSOeBUKr5qjBIzzg5/rXvf7TmtXy+DvC0MQEaXkPmSiNFO75Izx/dyS1fNU21GcLh
WyTktncf8fpRBNsd0me4/s3eHoZ9C8Y3ktr5d3Ha4tbhjgjMU2SOfXZyK84n8ajVfAd54c1mOae+
t9j2l7Lnzhuky6k54+U+1e1fsw69czeC/E0bquNPhVoSO4KSt82RgcKB3HrXzbfXs+pXkuo3CRmW
6bewj+YJnPAP+FLlTbY2yTwdpEep+KdFszFJPDcXkMUq8ZIMg3EdcDFe4/GTxW/ww+Jek2NtbibQ
Y9JSNtNRQFdCZFODg5/+t715J8M/EZ0P4g6M0cEc/nXscDJMflUGQcn1PHAr1j9rfVG/4TC20poY
VjNtHdpK3DM5LggfgvT3PFSovmG3ZHhnjWLS5/E2qXOmForB5ma3RRhtv8IwenTgfSvavjbbw/D3
wN4Ji8PF4YrppGdoVBdmKqxPPrn8Me1eAyTkeezwq7ICEUYwPxJHHrX0b8RgPhd8MPBVjLbf8JIt
zM8oe7Zt0YKKQo5yOCf1olowjqY/hC3Txx8BfGl14gYT3OmyyzWsoTAEnkkr6Z5xx7mvK/h54r0/
wZLd3kliLy6eNxAzpu8slTyQceo717HozSfET4A+KZLaD+x00m7lmkjgYbZgsBfY3GcdOBjtzXze
rmaJZNu+LPTGAQe36HrU7jbSep6X8MPHep6v8SfDlhfwWs9rqF2sM0bRoSFYgE/hnqPSs34weBrf
Rvi3P4a0SDy4pTbqsTtgJ5iqcAknpk8+1dT8KfGOj3fj/wALW39gxrczXccCzxMBsc4UOBt9ecdK
yvGXh+88NftD2dlql9Je3S39tIJ3ONykKVXj0UgdKi7RS1Za+JFvbfBXSbLwrbWcd1rs6w3d5N5Y
k5O9cA57EfpT7LQIfjn4GvLnTbSOHxHoNuhmxCEWaMlj1yAOFx+Fd78bbfQf+GjrRfEv7vQoNIDy
mQ4DOd+0Z/A898Ufs9W2lrrPxZXSma4014BFbMwPMe2Yg9OfvEcdRijmaKSTZ5V8L/h5aR+HNR8f
69DLJo+mcx28YyzusgBX3GTj8ag8NePtO8V6xeaRrmmx2thqRNujwKS0cjn5CBkjv/Ou58Jjf+xR
4kmO5czuXduAD50eCBxnAxx0OPrXI61p3gvTNK+F95ol2H1e4v7cX4aTLo25c5AI24Oe2D+NQ31N
Ejk9f+Ces2XxNHg2GHzLm4JltGf5d8WWw2ffYfcVt/EbWfDvwuuYvCuk6QNTn0wtHf3U7MrSS5B2
r6jBr3zxdYxRftl+DQd6yNokzOHJIJCzjJ46Dd+deVX3hHwt4r+NPxT/AOEmvxZx2N0Xs2YlGMoU
lgfXoOPyoU2xcqucd8T/AAPa+JPB6+PfDNt5OjEhL6HYcwMu0Yxgkgk9e2Kb4P8AAVn4G8BTeOvE
9u032+B4dLsHyqyBogyt0PPXJ9BXceDo2H7EHjoiQNunnkUoQpYB4jxnvwf09ad8arAn9n34O28h
aISGCOSRzhgnkqCQOeqk579MVXPcTtE4XwZa6H8YrPUtBi06LR/EjAyWajkTBFYhc7Qfw7ZriPA3
wj1rxj47fw00TQT2TH7ezHi0UZ5Pb7w/XpXs/gbwBpHgb9qjwlYaLqBv7ZrGWZ5SwIEhilBDYJ/h
Cn6mu7+EYWX9pX43BkkZ1RAFHTBGeB07Lk9jxUc1tEEX5nz5qHiz4fWHju20tNGEuhwNHbyagrgL
IzABmBA6KzHn/ZzXM/Fn4OX3hHxDpk9uftWg6sUaxu0/22O1DnqQD174HNaml/DLTNY+CniXxVda
hFBrVhdPDHYFgxkAPXGMn73oeF/GvWfjsGh/Zr+DzQK8rwS25jD8NIxiyPTv/nmleSYltdnn/wAY
PCPhv4EzeHNAvdPl1e/ezE9xNEAFyZJASOO2OB7day/iB8L9G1/4I2XxG0GL+zYI7k2l3DMQWmcz
iNG3D8fz9q9I/aMstL1vUvCl144v30LXn0cpPawRvJhfNbndzg5bPXsetY/j2STSf2VdE07wzINX
8JzzySXV+7HdHOJywTH1BH4+4pqUrlc1jx34P/CaXx/eyanfNJb6FpQ829u/4NilSygn+LBH51k/
EO68I3btD4etLmwVJ3XzHkBWZFyFwBz+foK+k/gfYWXiL9jTxtDqVybTTJp7nz5yPuIqxHPTJAOf
1ryLw78CfD/jD7fp/hXxEuo65HYyX0VkxGZSv8IB59/yrSMu4r8x4Q+9HVJM44VBzzk4HOM+lfTN
98DfCvw2HgjSvFvmzaj4gSVpbmFF8q3GAVBJwcZIBP5V4foXh6G58aQ6Rrsw0cQXHl3MznBiZGwc
+nIxX1V+3toNtDp/hC5N/Da31pbSxQWpYfv1LJuIHUgf1HtRKava4tj5C+JngiX4deMLzSJJDJ5G
yQMuMAOoYc9+COa9g8C/B7wt40+A/jDxfAbr+1NBtXQjGAZhGH7nGPmHHHGK8R1bWrrXLqS91Itc
X0iCJnY/fCKFX9AB+FfWf7Kd7aaT+zB8VNQv43ntoppXlgK7g6iBOMH2x+R9KTnZolM+MI0BHnMy
lSDjb3PritrwR4L1Tx74hg0fSFL3VxkAhThcDOT6DivX/HHwn0rxb8P7Xxv4DizaWsCpq2nR48y2
f5mJYdRgD34+ma9B/YEsYXv/AIhtcpHNc2+nI8Uvl/PFkuMhu3U0udNG0Y9Tzu7+HXw+0/xpZ+F7
OGz+HenWtqskQSSfe0xIPyglTnI6YB7fhTJWMMmBHsTeHDREjB9+2OlUqaWhm5NvmPoCT4veG/hz
8KYtM8EuZPEN+rm9v2jKyx7ip+Utw2CMAZ968F1S/n1G6eaV2dpXwxLHLnPfP16k+tVWdnLoAxgk
I+Zm+Y47DjjkfoabGxeILJsSVFwRtGMds/54qo00tTPnd9SddksnyhmYLjCde/GfXj0pluiiIll3
tuO19p2gAnAz3Oac7Q75ZNzEqVkKBiNpHcfUYPX8KfK7W+zzHkYE/KY35AGeTnnsKvYE1e9xl1LF
Ov2ho3QAklQoy6nJAHHHY/hUgdUWNyLhEI3AOgyO+QOeD/Wort4+GhO5T8rh2Ax+FTQSPHDuVPMm
bACLkfLg4wPap0bLv2CcoJCIj5QOc7gD9TjHrQuGTahZ/lyMn0p8bxsFysRkGS3y4ZeoPuemKjW5
EDLHECGKsdrEAsAdxAyef8DXTBKxm5jmu/stu7kMhGPMwSxI9MDpyeadbO8sQmhYLxwxBBA7gY5G
ahVNryMmFB+RmX5do79qmM4kVk2us3T92MA/iT7Cok9dCeZvQdDL5UZ48zJOSnAyO1a3hzxDqHhL
WINX0e+mtr+Bw6SRMUPAxt4ycYJrHSyIjRH4TaTw2M5981IyeaiSQgZLk7QSWB/z+dZNKRabTPpr
xfqHgv49aZZeILrW7Twx4ndDFqMVzIP3pUfKeeowGPWqHxQ+KGieCPBsPgHwFJ51rPCJdQ1G3ZTF
cs8ZSRSCD3APX064r53bcoVnQMz/ADg8EEdMH/69SNuRGPlussjF9wHyjk9+3OPwrL2KeprzSZ77
8Gfitp2t+F3+HXjeWP8A4R6TL290WVVspBvYZP1ZSPoeOa1/APhvwR8HXvvF95r9r4kvrbbNplrb
yK2JA5xjbnnkZ9OtfNbRTksJdjAuSAPmOfU57/5zTZ382DyzKJTkOuFBGc5P+cVPsG9iee7uerr8
fPEVh8W08c/aHllXELxMobfblv8AVkYBBAHbv9K7zx74I8GfFi8svE2k+IbbRZNSUzXdhezqJVlL
MXyrMMYG7GB/IV85SyO+4sWVVOdwT07kj0psrzQyRTFWfeSWOex6nH/AvpWkaVmDnbc+hPjD8TdN
0Hw1H8MfBrM+h2zbby6wjC5LMH+Rx/tDnAq94R8a2Pxs+GMvgvxbciw1fTIHm0q/YKkagKEVScgZ
AA+o56ivnFyrSgyXDCKSQ5lk+6megzj6dqe+5WC+bIFcgZQEjPYHGOD+VS6CJU2+p9L+BdI0T9n/
AEm68W6hqMOta5bytBYxWEiyDEgK7iM8KBnIH5muP8BftCav4Z+I954mv2NzHrE4/tK3WIZZV4AX
BGMZ75ya8glZ4I0QzHYeArHPTPQfielIoclFWRwXYkiPAUfhjjvUql3BTaZ9E+JPgNp/iPxpY6p4
e8QWyeH9RkjmkSVlR40bc0gGOhGRgYHpWR8c/ipYy6dpngLw40kmi6JMkDu6/feMOpwwbGOc+p9q
8NuEIkFwYi8qhlErDJwe3r1HNEc8kk8mDvBIY7wTz159eRW8aCesmQ6kr2sfTV1r1r+0z4Ftop50
s/GWgxfMCNscqySAEjJPXac9+Kl8EaZY/sy+GbnxRqcyyeJr2Oaws4EHmKAVVwW56ZT8K+ZBKEdV
jeSJnBUNHxxnJBx26flUk1484ClvMkDhW3gg5x15+nWk6Vn5Aq0ont3wl+Otr4b8R65Z6zD5mi+K
5XluzGrF0klDDcig9Pm578g9qvf8Mx6hP48FpBqECeHGl3m+UfvGi2luQc4OSfy7ZrwOH944Dw7S
FJUoMkAnvgHjjNX7fWdQWZ5Y725iEudxMrDBx1wal0l9kPbNs9o+MPxlg1DVvC2n6TEZNP8AC88D
W9w2Q8kqKinA7AeXnjrmuk+Ienf8NGeHLPxXolz5XiPT4IbK7sAp2AlnbIz1+8a+cWlQiYqd2cBj
k/XNT2Wq3NgH+z3txBMwz5cEu1gM5yMelHsnfcaqan0n4FR/2bfA+pa7qifa/EurwtFDpvOB5bdy
BwSGz+WOawvgV49tr7wtrXw41ER2drrXmNDeMxzHM4QEEZ5B2Ajpnv3rw291K+vY993cTXJwxBeQ
ucnGevqD19agSUC5WeQMkyPvVwccjuCORR7FPVDdVo9g0n9m/WZvHdzpeootvpcDySjUhhUZVDMG
HXGQPfmun+KXx8tdU+LPhvW9Ktjd2fhqZmjlJIW4JwGxwOBz2Of1rxxPHHiCSJx/b2oTLt5D3DZU
Yxjg+nY5zXNh5rjY5ibcDtBKkZGefw4peyb+IPaH0L8bvCl58Sr238e+Gi97b6iIoJLWM5kgdIwM
cD8Dn+VanhKdPgJ8JNbOtTCTWfEcQkSwjYF48xvH079h6V4HonjPxB4atSumavcWVvvDvbwyYTf3
OO+cd/0qnrHijVfEkkE+sXtzfTxr5SyXD9FznA/H/wDXQqbvozJ1Ipn0t8IvDlz4Z/Zxn8U6BaM3
iu6U2zP5XzbEmZeBzj5T/KvK/C/wX8U/E3XZrSS2nt7gKZ3musDed3zDqOcmuY0P4meJvDlubGx1
m7trAAt9kjZduc5yMjI5rWHxs8aQRRz2+vzwkYDDYnUHPA68j3pezknobqcdylp3gO7vvHMnheSS
O1mSSWFZp22qWTIwCTjJPQ/zpviz4aa54U1q/wBMm066uCp2vJHGWDDHBz3HOcfWuevrqXUtSur6
a4f7TPKbhgecyM2eBjA5/wA9a74/HXxr54VtVhZUj2KWtYyRgcZJG49u9PkmgdSJ6p+1Cyj4W/D6
yBKaiiKZ4pWCsn+jgA4B45HT61XW8X4s/s66boegsZ9a0B4nuoWcKXCRyElcDLckdAOQMdK8I8Te
LtT8X6wL3WZnmuViSNSDtRcDAG0cA4/Hml8K+KdQ8KeIIdY0l1jmjVkUkZVgwwQR3BHWj2TsZqSb
PSPgX4Z8T+JPijomr37u9tok7STSXQbCIQwABYZHJ6D8q9B8KfF7w7D+01e63LK407VLePT45f4f
NHlgZz0GUxnGO/FeOa/8cfEesaPcaW0tvb2t1tE7WsCxyvg5A3LjAzXAwMsLIN77zzvzgAg9R6Ue
zbNFNRPSviHpXibwb4x17T42ura01m5eTdETsu0lkZs45z3H4kV6R44ni8G/su6X4Q1e4aDX7t/N
S3KsxKLcmU8EZUYbjke1ec6f8d/EFppWn201rY6iLJPKjmvI2eTgsR82c4BPWuN8V+MNR8a6/e6t
qN001zMdzhixRAQOEBPA4HAqfY3dxqtY9++NF+fHHwn8E6/4XmNxHoyl7qRI2VrZ1hiGT3BBX9fa
sD9n271vxX8WYfFWq3Et9aafp88VxqEhxsLRnCnHY9R+NecfD/4j6t4HW4a0SO6s7uFoZLK4JKMD
wc47jtWj4j+Nmp6v4YOi2GnQaFA9wGum06R1MgUMu085xg/T2oVN7CdXyPZfgr4w0e6+LfxNhgv4
pH1ibzNOCsAJT+/4Ud8Ajj614Q+q+J/D+k3vgcy3MUMkpSXTpExucsDgDvkqCPXNcvpmpXOl3MN5
aGS1urOUSRSIxVwwGMjB9+9euj9oOKbWLXVbrwpaXurRNGxu5J9rNtGAeFPPShQaY1O503x8u00z
4E/Drw/M0a61arA0tuzYkRBAysxXHYqBzjrXzPIDI+GkICOcHGc+hPpwBXU+MPF+p+NNcvNV1Nnl
eRiwEj7iiZyFHTgZxgVgMksojULtyTlmbhex4zWsY2MXHmkOuWYTNtXymZwV2AEivo74IyQ6p+zt
8RtEhnD6q0M7pawvulZfJAXK8kZJ4x6etfOU9sAC6lmA6ITkL7Z7itvwL4wvvAOu2epaZI0NyUMc
oAyJIzglWyDgEqOfalOK6HRHTQ1dT8feJNZ8B6b4Hl3TW1jJsVVT98WUnC8dwc19F/GW7s0+OHwf
/fJGLeWTz8vkKCUwGPbOe/HHtXm+n/GLwRpniy48Q2/ha6j1VhNKg3L5ayN3IzjGT2GefrXi2t+I
9U8T65d6xfStc3ly+WmKYxxjaB6cVzOmynJI97+LvxC1H4WftIa34j0y0WeO7022tleZSI3UqGO1
h6bR+daPwGvzqPwx+LN9M6273ck0hlc/KS1u2QDxnksePevP7L4taF4p8G6fpHjXTrm71LTJStrc
2IG/yFjCqGOQSevH04qj8S/i5ZaloOn+FvDNnLp3hu3jXzhPCEknkAfJY5OB83uSaSg2Sp2O81RI
dR/YcsLeNmJS4TdEgLEZuZCCR6cniuasfjFqPxM+IPwxtX02Czj0rUrdVMLMSyl41+bjgAA9a5f4
SfE7/hBtTmsdVjbUfDd6FF3bOpkCKNxBRcgZy3P0FdJoPjz4cfDQ6lrHh2K/vNdeEfYba+tmRIm3
DJB3cccnHp3o5UtAU9bntBKL+2q5DqCnh5QxYZyuAB+I5/ya8d1D4uSfDTx18WNMbTIr0a1qU8as
z5WI7pACe/O4cY7CvLp/G2rt4zk8Um9uV1hrgXCSiQkx/Nu2A4HynAGOntXqXiXxT8OfiW1lrmvX
LaNrzIyXkNlDIySNvJyGAOSRjrjge1S4cvQuMrm6bcn9gq/SUuJvPKbXBzxer0B68CtD9oa6i0f4
f/BjUfsyzJZTQXYTIUHbbxtt5zyTx7fNXl/xj+KUHiMJ4e8Mf6J4UswPIiibabjKr8z8D+JTx19e
taHgT4kaX4m8FX3gzxvKws0haTT9Tkcl7WQhVVAME44z+YpKMuwNpnafC/4kWXxV/aq0jW9O03+y
beLSprOSHC5yFBznj17f3a7P4P2qyfFb9oIAsFeZVZl4yNk5H1ryTTfFXgv4I+GtSvPDeqP4i8VX
DtDBMysj2kbRkM2WXnB5/AV514A+MOs+BvGU2v75r+G/kc6lbM4LXatnBLEds8fWp5W3cFLct2/x
S0a0/Z81bwRNpbya3durpfRBC2GdCf8AaGAMGva/jOf+LafASVsBften4RuDzBGTkY/P6Vxl74J+
GOpePLfWo/FFlY6HO0U8mmyHDquwb4wQRg7geMdzXC/GD4z3XjrVdPTTRJaaFogjTTbQ4O0oAocY
9lGParavsWrPY9m/aI8UaB4I/ak8Nar4jsTe6auilZ08nzAcyTBCfUg4/PParX7L2q6X4l+IHxgv
9IsZLTT7mCB7eE52rGfM4x2OVbgf3vauI1bXNB/aG8E2c2t6vDovi7Sligku7uVIluYsuxIzjuec
e1V7rxnpX7O/w3Gk+FL+HVvFuswn7dqEEiSRxKsvAIU43bWbA7E1HLogWhe8Dwqn7A/jlg8gDzXJ
Q4Jx+8iwcY9gfx9q43XfHXgvxBoPwl07w9biDxFY39odSdo2XeAyBwSwyw3ZOKg+A/xdsdN0+7+H
PiZXbwjrSmF5eAbeSRhl2bshAx7VveGvgpoHgjxBe+JPEfiTTtR8P6Ws13bW9pcI07yI4aJdoYZI
wBj1p8rLUrHtPjyyLftw+BCoZf8Ain7lee5/fkAkfj+deQ32u+AtD+P3xl/4ThPtSyXR+wh49+H2
AYyP+A49wfSvP/E/7R/iDWfjTaePbaGK3uLEtbWUPl/dtixJV+cEkMcmu4+JXwu0/wCPOoQeOfBt
/Haf2yXlvrW+JV45UYKOM8DC9MUcvKK92XvAFoH/AGAvHakJIZprjY0agfxx5z75zz/sinftBi0m
/Zj+AX27dJZtNYpPlsB0+yfPnP8Asg1z3xk8a6f8LvhrP8IvDNwb8Tq0msTsSFDSKrbUPIJ3ZPsD
j0qXwp4ktf2ivhFF8O9WuIoPEnhyFrjRgNwWYJEEQMRgZ5Ix+OKlx1Gn5Gx4HHgO4/bE8GT+Ami/
s+HSrqKYQrkGbypD1xnO3H6iu++CFoT+0f8AtEELlCYFz1YsFYNk9+TXlnwn+Hlt+zi1/wDEPxfc
KupWSvb6bawMZPPkkjcHO0ZAPAz0HJrivhH+1FP4P+MHiHxDrUCrp/ixj/abQqW8jaGCsoAPqB+F
Fgcl2Oa8P6V4Auf2ZvEN5fTQx/EJJpHs4HkHmD51HA5+Ug/p1717j8c0RvgZ+zdIZPs7NqGl7dxA
IHkA9cewrzbXv2OtRl+IUNnodyZ/C9w8Uw1VipZYGwxz0GR049Pwpn7R/wAc7C6vPBfgvw7EmoaX
4Ce2eK6cZE88KbNpyCNvA5ANNq70KjJHqH7T+jeG9X/a/wDB9p4wvI7DQH0JjNLKwQEiS425J6c4
+uRTf2Q9P0XT/iZ8ebbw5OLrQ4ILdLSZGDK8Y8/d7cndyPauQ+NejN+1r4S0b4ieGpGl8TaZbRaZ
qWjo5XySzM5bJA3YLH8Dmn+BdRj/AGNfg/rGpa4yT+L/ABZAscOiIwLCNJGXcx56CTJPvQ46FuSs
yHwDaNF/wTc8ZgkLHJcThYjzgedAp7DuG/OuI8QfD/wL4b0/4Jap4c1ODUPEOo6pZtq0EUiswOI9
y/L0wxx0/ire/Zu8eaV42+D3iD4F6tNFolxq+X02+Pz+bM0isVK7ccFUOT6n0Fc58J/2YvEOnfEi
LVfGciaJ4e8K3A1KXUJ5AIpRBJGw5PRTgn8KlRutQukz6Y8YQq//AAUe8IDDM8XheaXazbVAJuAM
epz1/CvDdS+F/g74kfHn9oGbxTqr2B0q+ea1/fiIlirbiM8HBVev41Q179rzTrz9rvTviJHpbvo9
jbPomd3zTQbpP3o9B8+7HcDsaxv2nPgnrfif4jXvi3wag8UaL4unfUUmtWURx5IXaxzzyTRytMy5
kdr4ZSQf8EwfEkzlt8l27KpAz/x9w9ecZ7/jWp+1BoUPiD4Kfs2aXfF4rTUZ7WKeVR/q1a1RWJx6
BiR9D61zHxr8S6X8B/2YT8C1u01zX9QmNxfTQlQLIeZFLscAkgnoB3q543vYP2s/2Y/DNr4duBb+
KfAyF7jRnbMtxGsIiLKM9xyD/Wp5XZWGpK5qfBj4VaB8Kf26tL0Lw1qLajYReHp7gyZDfOwYHJ6Z
+UH8feuy+AiiX4mftVXQLLINRmQnPGFScDPOOigV4/8AsneDL74J3+r/ABg8dD+xbDSLW5sEsbuQ
Ce8kaIMBHngkgAAZq5+zL+0Lo+r/ABO+Kmn6lEmmjx9cT3FrNO6qkJYS7Uck4JIkUe5JoS018ht9
DxiH9n/Ro/2Q2+JB1Uza6J/LFizclPOEfyj1C817N+3xEsPwQ+BqE7P9E3bHGCMWsAA69fmP514r
dfsp+PG+IqfDyNLgRvMITqDg/ZNgTcXHOCAOnQ8V3n7dPxV8P+KE8E+BdHL6hc+DoRBeXcYBgkdr
eJfkYE5xsGcVpFNyFJI+SJ0AH3Q0fJ3DGT70tqMrlCcr1z1wafM7MCrBQ3oOBUY8xlXdtZjgLtPA
rVowTVz9Ef8AgmxougReD/G1/bap52s3kEcWp2rKw+yRhpdh7ZBXd09K+FvF09t4X8R+INH8Law1
7oU5+zG5TIF3Cr5AbIz94V9p/wDBM7Sb2Hw98Wb54JFtJ9Pt1hlKHEjBbgMAQOcYAr4JvNFudL1C
exuY5LW7tzieKVWVkOM4weR7fUVEVqzofxbnpP7OGhaJ4k+NfhWDWdWXSLdL+CWMnOZZhInlxjHQ
se9fR3/BTzStHtPiXoespqm3xGLOKL+ylO5liVpWWbPGPmIxXzD8AdLvda+Ofw/S2SSZ012ymdUQ
n5FuIySfYda98/4KiWt8n7QtvdfZZfsj6Laqs7LhC4efK5HH+RWa0kDskfJmrazN4g1e91HUZRPf
3b+ZM8rffJ6n/PQDtX6N6H8OfCi/8E8dQ0abxVbpoEzNdT6uP9XG/wBqjfZ2J5AX8eOtfmw2wwt5
ikkkdB156fnX3oNFvbP/AIJW3EJs7gXM2p7jH5JZ9p1LOcY6bRnPtV8qvcXMfDF1rl9deH4NGa5k
/suCeS5t4cYVHcAFgCeMhV4/+vX1n/wTS8M2F58VZ9ak1aKLVrKwnSHSc/PIrbCX/Db2PpXx9NN/
oqHkhhtG4cgdq+sv+CY+mzn9ozzxGzwpol2Hk2/KG3xbQT0HXvRISZ59+01GPhd+0p4yv/DOtrc3
d7c3b3EkEgJgeZ5BJCwHGQDiuN/Z51++8I/G7whqunaVLq2pW96DBYQkBpmIIKgn2JP4VB8f4bgf
Hf4hSNE8SS6/fSI0gKhkM8m08/Suu/YrnS3/AGq/h5JM4SNLuXeCdo5hkA+bp+dEloJNpntf7VXw
j8O+NPjjreq6p4607w9qGow20s2k3EyLLb7YEGCM+ijvz2zVT9ujUrrSPhp8KPANpYXE+iaRp0E1
trgOYrtxbiLaueeFw3/AxXmP7cumz6h+1P8AEV1tJ5o1ltSAsRxtFrDk8cY4Nez/ALd0SQfs3fAq
1d9s62MW6NvvJ/okAGe/ZvzpRVmU1dHx78OvC0XjjxRa6TfavDpdqyOzXl1IAiELkDngZOB+Nfa3
wA0Yfs/fCL4r6/oeoR+N7i7tre2NvpLo5gGycb3OQMDzCT/u+1fHGvfB/wAT+F/A+k+LrrTnXRtS
ZUt5YxvclgzKGGOOE/Divrb9gG1/s/4S/HGe6VrdBZR5aVSAp+y3B9MjqB6/nSm7go9zL/4JjaSl
z478byzxwSPbaTEyHq6bpcflxn+leI2HwD1vxr8JfGfxQGpLFZaNcuJFlJMsxVwdwP8AwIV7B/wT
f8eaJ4R+IfiLStTuTHqOvWMUFiu3AmkjZ2YHOMHGD9K8Q1mw+Jvg7WtQ+FLPf276lOiyaJAQYp5J
drKcZ5Hyqfpmsm2m7F2jc+mvjrpkV1+yr+z20kP2i4nvbKFpJBuO0wtlSTkgZx+XtVj9tb4V3/xR
/aY8C+BNEaK0+06Ixij+7EgWWYk+3CjoOuODWf8AtNeMNF8G/DP4C+ANWvVHiPw9cWF7rFqeTbRp
DtYtgYzknjPaof25fEevT+PvB/xb8DXkq6HJpa2ttq9i2f3kksz7e/GCOlTByRV0jc/Yo+Hs3gnx
x8afB2rNHfx6RawRzW+C8LMN7ZAPr07cjiud+F1v/ZH/AATt+I2rxMYdQuLqaJrpMLIUZ7Zcbhzj
ax4q7+x94gu/Ang34pfFPx9ePaWevwIkF9dt+8uplMwO0gc/MVAx78VS+C9xB8Qv2BPFngHQG+1+
MhcGd9OJPmeWbiE9Mf3Yzz9apJ82wlLV3PJNN+Bnib4XX/wd8a302yLxNq9mbdYpfnAZkkAc9eVJ
z/8AXr638a+HtOm/4KN+EYTZW/7vwzJdyRbOGkH2nDt6n5V5+lfHvwZ1Px/8aviR4A8KTXM+saf4
Wv7eYWbKETT4onQMx4HQKFGfwr6e1b4zeDL3/goTpuq/2zAdMttEk0R7kPhFuz5g8sk8Zy23nvj2
pSj2LueKfE34G+I/j5+0T8arrTrpXi8OX7NItwcjysMAidAAPKYED0/Gu0sVOpf8EydYvrxTdX0e
pFLeaU7zGv2yFcKeuMEjHt+Xl/7QPi3x78Gvj18SP7NvLrRrbxReSzbLdA4u4GkkEZ5B/vnpXqPi
+4j+E3/BPq38CeIpItM8U6vd+dDpcz4mMYvldmx1xtUHp3ptu6J3f3F79qnQJh+zr+z14U0JUs5N
WaBHijPliaVrWEKW5AI3SZOazv2X/hFrHwd/bFtPB2sziVpdCuLueKJso4KYXK5xnO4cmtD9qXVr
nxZ8Afg54r8FXa3sXhdA93ewYItZEt4EBYEdnT9BWV+xb4q8Q+L/AI6at8UvGN1LPo+l6NPaXWs3
K7IlKomEz0yASce9K7sO+tj0T9nnw7pun/FD9pnVIbCAPpd5NDZuyqPIAW5JCH+EYVemOPpXxtN8
GvFv/CmU+MDXTR6d9oWNpJZMTl/M8verD7x3fyNfWv7M3jbR/EOuftF2NjqNu+reJLu+utNh3c3M
RF0Ay56/6xOnQEV8hR618S9U0WP4LFrgxeeYxoQiUsZVJkIJwTkHnOe1Wm7Mh2T0Psb9oPQrDUtV
/Zia8s0ubvVb20W7mdPmmBS0yD6/eOQetcX+1n8L/EXxh/ayg8CeHCIrW18P291b2TNthhAVtxVe
nOQvrXaftK+PvDuj/GX9nvSLvWLeKbwpPHNqwHS2AW1K7+y/6s/lXnf7ZXjLxP8ADT9of/hYnhXU
2soNU0a1tbPU1VZYZUEQLKmVIP8ACc+/vULXZlHQ/so+Gn0z4M/HvQtej+22+hCWFI5v3sMc8cNy
WZM9OUQ8fzrMtYf+ET/4JoLqumRSWGpX18Bc3MLGOaZftrqVZhyRgEc9hV79n7V5/Bf7KXxn8TeM
Jv7ObxWJ5LF7llje8lktplO0cYyzcDjofWqskieP/wDgm5aeHfDskera5puoo97bRP8AvYl+1zSE
sDyPlOfpnrRFag3roedfDb4J+Lvgh8cfhFPqsqW0fiLULZYktpiu+HfEXVh/EDvUc+hr6O0jwRo0
3/BRrVLWTTbQxweG47uGNMYExCEuR65Y8187fs8+OPHXx1+Pvwxm1qebVrDwvcxzGbylSOzt8qRu
xjOfLTr6EV9C/Df4jeF9T/4KE+MdTi1azexn0SLS7ObeqiaYfZwUUk8nKsMDPQ0mpJsdtEfLXjj4
S+NvjB4z+LPjWO7kuLDw1qd0J7meYlkijaRgqegCpwOnNet/Eq1jv/8AgnJ4X1u9U6hq/wBsjgS/
uiWnWM3cw5PXkIB9PpXjfjf4i/EH4YeLfiX4PtydOt/E+oXTz27REvKsskioVPoVbsO4r2344yL4
W/YR+HfgPUpBZ+KZ7iC5bTWwJVQSzMWIA4GWQdOT9a0vqmNx7HnXif8AZ38D/BvwT4Vm+J2o3un6
5rzSzRwWUcjr5K7Rlu4PzDr69OK5b4h/AzwjefCE+PvA+vteWtrctHew3QKMiqAAVXqeeeeuRXsv
/BUGWKz1f4XQrJ93TLsgbsKF3RD8T8or5DtPD3iK+8LXWsWdrcyaBE5Sa6QfuQwA4PPPXFUr3vcw
avucszeTIFyGIc/z6j61JLEShO/YG5x2NOKl8uWLPjIZh+NErs+OhXqc+ntWhKsjV8FeGLrxn4j0
/RbEJ9tvX8pNziNQxz/EeBxX3x+3j4I1XSPgv8NbuzuWsn8LqBLPC+194hiQMpHcMG/PrzX57WV1
NaTq1pLJFLg4dJCjLx145r7t/wCCkN1PbeBPg9YNeXDlrOZ2G/cG2w24O/PU5IOfrWaj79zVS0Pj
j4n/ABGv/ijrNjq2pDF7b6fBp7Tk5LiPPzknqxLZ+tfdH7C3wx1W0+BfxCvblY0/4SK3KWO0nIT7
PKAxBHBJNfndI4jhjIcYyNw529eDX3l+xlqU9v8AsnfGK7FxNELKGdIH38rtsZGyO4I3fkBTkrWJ
ufJWoeO/EXg7wV4k+F+qMLm0kuYcwiQvHbvGwbMfpnA4rrv2PPh7qvjL4/eFprSNDaaVfRXd3LK3
AjDjAx3JIx2xnOa8VdzdSGSVpHBYhizZbPQZPrxmvZ/2N7q4/wCGlfAEENxNClzqaxSpDKV8xdrM
A3qBgnHtVT+FhHVnt37bWoeJ/hN+09Y+PrORoLO7s7WFQjbfP2Dc6OOuCSO3T6V8f63e3XjXxjqd
2kH+naleSTmPPBaRywA/E4r3v/goDqE13+0z4hspZJHSxsrKGOJpNwjHkK52j+Hlq+cYZ5IZUlg3
CRWLDYcNuHQg9QaVmloJXvY/RT4jfCbX7X9gPSfDBhhOq2Msd5dIjqEVVmmkYZ6k4K9Oe/tXxH8R
PitrHxN8P+FdP1qR7mbQkliF3JKWkn3lcEnAxhVAr7A+O3iPVLX/AIJ4/D++XUbv7VqM9tG8vnDz
JEY3BILZ7jAx7AV8FXTLJD+6wrpztPG7jFOEU0mxu1z7X+AF5Z+Mv2KPiD4L8PuJfFqCe6ljjQhm
UyR+UdxGD8qY69O1J+xd8LPEfwj+JGq+LvF1qNK0Sx0acz3EjAkD5fbpgE8+gq58OYF8Mf8ABN7x
T4g0kLputzXMqPqMLlJGX7TEudw56cD6Gsn/AIJ265rHj34x6/pevatea5p8uhv9pS8lMqNmSJeV
PHQn9fWo5L6A2fOHiT4j3Mfxr8QeMPDd+1g93q11PbS7AdkUkjbcAjjKle1amm/Cbx18T7yLxT9l
ur+fUbrzxfM5DM+TyDyQOO3ArnvH9i998T/F9tpVorf8Tq7Vbe0ThQJ3UYH0Ap/h34ieMNAvLHTb
XXr+wW0nEYskuCqpz0254+nfkd6ua0HHfU+kv+CjHiLSdW8Q+DPD9hMJL/RbCUXwi6xsyRbAx7HH
IOe59K6b9pfRZvhL8Bfhr4H8E280J8Urtu4o1DSXZMUL8nrglh0OMe3TH/4KUeHtO0bxt4NuLO1i
W/vdNupbt+rTFWhUMxPsCB9TXdftn+M7jwNpnwJ8W6ZZi/h0cm5EgOIgdkG0M3YNhhwKwWlrK+g1
a+pwv7HPhTxCvxJ8QfCnxnZSHRbrSZ724024jwiuWRPMzjKk5wDntW7+zj8O/DvhOy+PHjKzsI31
Dwpc39ppi3Q8yKNY45GBIGATlefY8YrR/Y3+K+ofHP8Aag8S+NdUs47FF0HyibaQmCPEkShdxGMk
bjWt8Eplvvgd+03LGCxu9S1VkUAbnXynXKYHOelJNKTVuxV1pynyFo/jD4kaL4hHxVk+1tLe3TO1
5IhaCR2+Ux46EfLtA9q+p/2hvgB4W1v9qD4W2D28kH/CYtO2piNcB1ijGAPTOAPxzXznP+0JqPij
4NeEfhX/AGSLGDT72EechLyyAOx2lduQfm6+1fbvxjkT/htT4EQLKitDaXryo4C7fkbHfvt/SnO8
fuBI+ZP2wNW8Ual8Zr/4beE4riHQfCkEIitNLUxkCSFCzME6j/HpzXW22lf8Lr/Yy8Q+LfF8A/t/
wlLLHaXcMamd444oxhj3P7xge3fgiq/xZ/aEb4CftdfFnVDoZ1X+0be1tI5HJGCIEYkHBz97kD0A
rc+F832f/gnV8SblxGz31zeMdqY37mhA4PPT+VJP3krFXsZfibTIP2df2OfCmueG4Q+v+NDBHdXn
HmqJ7dmKxsOhGAB781z/AOyFqHibVPiHP8LPGMb3mh69bXFw/wDaG5pkZYiVZCeADkdjyK7r9p7U
X0D9lL9n3UDbtcCyvLC7eNiArhLUuVK+gwRTP2fvjRb/ALQf7aOja/Z6WdLS10G5thFGxBwOPmPH
98U7NQTsQpamX+z9+zz4Ztvjt8V7i6Rr6w8AzGK1tLoKyOTHKcseenl4/GvEU/aV8fXPxBl+IE5u
I7BbgRxWHzJZKgUoEyRyP4uO+K+xvglcx3HjL9qmS1kCFr+VY5UB2tiG5OeTyckZHrzXxyPj3o0/
7L9l8NY9Cjh1Z5xIdRbbhV83zMnod2DtxzwKHre4ua8tD3X41/BLwxpPx7+EuuS20f8AZfiy7U6j
aPhYI1RI2POe5bBHf+fLftk+MfG/hf49vp3hO61Cz0S1061NrBp8ZECgocgHBHUHvnn2r2n9qDwX
ZeNviP8As3eF9Rlmt4b1preSS3UK/EcB4PUA46jGK8o/a5+PWufDH44aj4U0eHT7nStM0+0iie9h
Ejg+UCC5P3uuaSu3dlLTYf8Ath2lhJ+zN8Kdf1K3hbxTeRW6zTv8srRmB3c464BI/H61D4R8L2f7
M/7Lth8T9Ntl1jxV4qRLa1mlIBs0lR8ELg55QHB646inftzaGl98HfhV49lM1trGqwQWz2ofMESG
2Mi7V6Kc4H0+lei/Fnx9dfCr9iH4U3NnZ2d7Lfw2cflXi7lUGF5Mgccjpx6mqu9EJ3aPgPxd4u8S
+MriPUfEF5NdvHGIFLrtCjk9gM+uawzZNPIFjRmHQIoOSMZ49eK+6PhToGnftTfAH4g3+u6Xp2j3
WhhLiCXTIQCzLA74Ynt09MVxX7BPwm0TxWfFPjrWYjeSeFIQ9rbSDKO7RO2T6kbcVpzu2hCjraSP
Wv2FPAXijwzpnijTfE2ipBpcdvFJp63MS+YrP5hcZxnkYOP1FfD/AIC8Uaf4H+JVlr1zYrfWkGof
bZLEY+ZVcsqc55zg/hX3d+xJ8Wte+MHiL4ta7rlycNaQPDZhv3VsD5oGxc8cKPm96/N6WWaW6l85
jy+PMAAPX07UoXk3zF8qWx9H/Fn9sPW/F3ii6u9D02x07SngRYLea3RiDjBycYOf616X+2nEk/7L
fwp1ZrGC1vtQkgkma3iCEt9nLHGByB8/H0ry39mf4DaXrel3vxG8c3DW/gTQpBLNklRNIpX5Wxzt
yy8DqcA1xfx6+PWp/G3xBaqsTWPhjSgLfTdNilDKFUsofgDDFSOPwpNWldbE3Vz2bwB4B0b9nL4A
WnxZ8S2A1PW/EEKx6NGCrLCJULITnGM4BPXGKd8EPEmlftORX3w78SaVbW+rSh73T7qxjWJVEath
GIwSctngD616B+0TY6aP2U/2e4tYVzp73umpcYyoWH7NlzjsduR61L8E2+Hd9+2lo3/Ct1jOlx+H
Llrh7ckqZc7cYOTuwV98VmmlZ2Cy3Phf4h+FpvAHjLXvD17hrvSrlreRUbdlge2OB/OuZM7A5Od2
DywBz7V6f+0888v7Q3xIlYMJBrcw5HGAxx3Pbn8egrzA3SKQxYMWPAHX35rtWxk0QhyZd6lcY7kd
T0Oe1TF9kW4qWfZgnIxz2/WmJJm4/dsPLc8oRjn09ue9Oj3KWL/M8Z2kHp9PfpVBYqs++QkhtoAA
C9T7/wA6kYl4Hwvm7jhSxyR7frTpLdghkyAzNwSeRSujRZKS4kUnlfQjHPvQK2o1xJEjSH5T0KKO
cU6OBch8blxyoOM02Zf3gQSkgKWY4x/ntRLEyRZR3bdgFlOPoKVrjBUREO+Mtk4DY5Bq2mYrWRGw
8TAFccEEc/pVYMIXHmKzJg4LHoPrVuGNoLV2Qhck4fPT6fmKqwb7GW7KXd1RvL527uee2a7T4LoZ
PiLpEmFAEm9iTtPAzwcHBwCfwrjI4POj++3LevArtfg6n/Fc2WxTKYYpXIAHHy+p9yKkmzRyF/Eq
3czFgQr5JiA5H4VA7F2GSxAHAXtTruQtcTlhsO44VRxjPT8v5U0zKjHlSBwD7VjLc0PQdEvvJhRc
ur5+cMCTt59CMD/P19K0y+E7RJF5Y3AqORk8nrkDH+etea6GIXzG8zQEn5Ag3begI9cV6H4bgWH9
xIsah8lCqh3fB+8eOOnY/wCFdKOeb6JmrbXZjjkEJG4rgjnJ5yGBJ57iuT+Nca/8Ir4dFz80jz3L
Aq2B0jz8vrwOa9AFlFEfKigljdEIy68OT3wOhrjPjzAsHg7QpSwEIuZgA33slU4+nB/Kqa0MoJ3P
Bp7aJLUOm4scg7v8as6N/o94pVTvDAgDkim2simxuMfMAMAEHPepNBgMl+CW8sFtoBzt5HNYLc7F
q9T7V/Z9+B8vxJSLVNZum07w7bA+bdNgEuQSFTBPcdfbFYPitdMi8RXKaA8l/pe7ZbyyDbvXsc8+
/PGa+hvEMbL+wd4Za3kWwd7WzBkZQFbDbArMp6k7R179K+XmiSSPDkeUV2ksx2k9cY+oxye4+ldE
I8yuc8pWlY98+IXwM03w78LPDnivTLt786myidFU7YsxFwPXjB6469Oa474MeAI/id8Q9N0XUHli
gu5JEnkQcx/KSC3pk96634E/GqLw1PN4c1q3kvvC2qr5fleUWML8/NnPQ8fKB9K9j+KMXhz9lXSP
s/hqzY+I9ZJks5blN6x7CA/OScHf0zzgVnzSWlzVci1Z5Td/A3QNK+Odx4C1DWriK3uhEmn3CQs2
6ZhkLkHHJJ/I9iK6jxl+zp4A+HevW+lap4unsLi7RZoEkC4VC2wt93uegJPNeR/DbVZ9U+NPhe5v
LiRbqbVbZzli7EGdSe3QDn8K+wP2gPgavxP+IX22e7jigs9GmPlK2JHkV2KcHHH+etY3aesgdvsn
zN+0P8BrX4LJoxg1P+1o77cyoQA3ByScZB6gDHqOK6H4h/snal4Y8A6X4j0u4bUI3tRcXsI6wo0Y
fcpA+YZznnp3ryrxd461jxbo2jaPqJlcaRmGKSRt0i7nViD6kAD647V+iS+PdG8F/DzwSfEWBa6x
aWdiyhPkDPDHwQOi8/gKqVV07C5ep8S/Bv8AZ5X4weEfEuvrqotjpLsghePzHk2oX4OQBkDGR249
q8elt7iOaVnR9ivhiybQP14zjr6V+o3hr4XaV4E8P+NDpKFoNXSe82DJVT5UgUIQCcfN29eK/MTW
IJob2NSBBguDGPlDdRke2T0NXGq5kW1seneKP2fXPw0s/Ffh/U11vTk3G/eBV32rADIODyMk9u1e
RW8ElnNHJMPLB+VGU8+p9ex/OvqP9g+6lvPHuteHZ52u/D91pcjyadKP3bOGUFjkdcZH/Aq8T+MW
h2+l/FTxTa2seY4b6eGJguGQeY2FAzzxtye3pTjOV7CdNXucMBHblEjCPNnARf4h2Wpbj57hWdGw
PlfBAI/P88Un+qlRQreYoz5jck56gMfenSxiSNg+7cfvANkHHr7cVqnqQ0RSonnBXijZEyd6khhx
kd+ucU43JaNth/erh95Hv2J4qa0gRpiyNlC5BWRuRnJAHoOPwqNZPs88QkDyhT8uU3Kqjg4x0Gci
tG0SlpdCMpZZJioN2SuVXoU5Yj69PrTo5TcXA805CnGAhAB6/wA8GpJy4kjIEa4O2Nc4LDr79KZI
wjulLh2Gd4QsRtA9COvPb3qlqQ7DkhbaZTlg+Dlhn5cngdh/P1pJwbRZY/spMRcHspLfX36064DR
kbD5gL9xj3498mldLmdyjw+Q+4+WGcEY6Zz27/rijQNEWLm3LM8c28KPuqQRzjrken+NQJFEyCNZ
jiQYDN8pGB+FEUtw0ifaZDIWUBZcgPjBGP8A69enfs7eGdO8U/Gbwno2p232u2u5nWe0kPBAR349
vl+tZSmoLQ1jaTNXwv8As3+IPEPhHT9au5bfSYLslEe+PltkHBIX0Pbnng1zHxH+EniD4Ualb2mr
bLu3uY91tdxfNDMM9PY4xXu/7UOl+LvHPxlvPC2hW9xd2GjW0MtrZRKAqZRN7A/xDO3qRjmum+Fe
kr8Vf2ZfGWmeLN+oy+HZJksJnGySHy7YME3deCGBzWKrTW7NXBM+Wfhz4A1P4l+JodF0aPE0gLSS
yfLHGAGO5u/8JHGa7XVf2X/E+l6LqGqWtxa38VqmZ7ezAaVRnnC56556E+1e5eHNMT4Zfsdw+KPD
5EWv6isPm3hAZzmd0wO3Hr/keVfBOPxl8N/jB4SW6tJbHT/FN5HBcpOvEyGQcjJIyAQfxo9tLdMa
ikeEXi+U/kO5Mu7DoEIOTxtwOe4r1jTf2YvF2saHpmpqLa2gv490YuH2OU9wR6V7/cfBDwmP2vYt
OewIsptKOrCAEqPP3Fg3XpkE49vavJvj4fGnj74o+K4IIruSw8OXL29qLAbDDEGOGOCCSCpJNP2t
3oyWr6WPH/H/AIM1H4c6/JpWs2f2a4ibnedyvjGDnuDuGK0PAvw11z4kXtxb6PA00vltJLg7VVR3
Y9vavpTXNIX4xfsmL4t14ifxBpTypHdgKpdEmVMOec8etT+LbWH4M/s4eHrzwsPs9x4jMBu55iWf
EtqDhDkY57+54p/WOiMlS1d0fPviP4EeLvDGh3Gq31sv9mwlIpJIDvbJzyRjgdK4VUMkyozMszEF
OhJwO3NfRX7Ns3iPSPiva+DPEdrNFpXiGO4+12F3klm8pyHXJ+UAKRxXZ/DT9nnw1b/tM+JtMmh8
2w0COC8sUmYSj52T7wOem4/hU+1bNuRdDwq3/Z88d6ilnJHp/wAl3CrqsjgP8wyuRyBnHWuB1rw9
qOga3e6bqFtPa3Ns3lSRyBduVyevXINepfEb4keK9c8b6hr6S3qx6PN5CGxR/IgWOQ4yMYPQ5J9T
6V6f8cPC9t8Rf2f/AA38TLiH7Jr8kUa3K26hYpPMdlLMuM5Gz361caskYuF9T5u8HfDPU/iDeTza
XazyyWv+t8snaB09up4rW8V/DLxN4V06PUtW0429rPP5Qdvu+vB//VX0b8ZrKL4FfC7wv4a8Lbor
jX1lW4vOUnYfu2+VgeG5x69axP2dtWv/AIg3+r/DfxWlxqFjfWU93Hc3JJltmARMqWHOQc555FHt
ZPVi5E9D5jSGWNlBMkhfAyxzkn/9dd+nwP8AGG8Z0qQbhuBKjO3Bxj16DH19q9u+BXwO0b/hZnjR
rwrfxeErgwQwywjbIcS7WbPcGIfj07V5rqfx28af8JgPFsdxNbwxSxv/AGeAwtQqkIVxyApDfn+k
+0fQcaSPKpoGgVrYRvHLGfLZXwGDcdQR06/nWp4f8I63rsV3Np1nPeJAoE1zEBtTccD2B4PHp9a+
gfj38MNO1XR/BfjizhXTj4lNml1awnMcbSJ5m4Y6/exkentWh8ajL+z54X0fwX4Vtwl1qMH2m51R
MCaZkdw2M9Onb17Yq/ajdM+a9e8Oan4Z8iTV7Oe1knkKx+YvzNwC39P0rPtoHkuojCC5mGxEByTn
oB/nvX058FLs/tI+EdT8F+I187U7CD7Taayf9cheVQR9RkVnfAH4RaYbHxZ4x1JFv18KyXFuljJ0
M0cYbcOxxyB9TVKsyVS7nh1x4E1+yDynSb5QCZC3l8FQT+Q4PT+lZ2/EkSgkrj5uevfH6V7NpP7R
2vr42bWrxS+kyyyvJpPlow8h1IVc47cfXB9a2PjT8BIdG+IfhqDRbtbbTfF92sUEOwgwyMBu+bOM
fMCBU+1d9S3C2x4ZF4fvtSt3ubGzmuYC+6Ro48jIGRkjoTjt6VV1PRrm0mkF5bT20xy4WZCu4d8Z
9+9fSvxh1r/hRuk2PgTw3AqX4gjv7nUpY1YSZUjCqehyp/WpNB0W3/ad+Gl1JJALPxd4bjjD3iIi
R3cbB2A4A7L+n4VarWZi6N3ex8zNE5MEcMHzynaWb73fg4qzPp12LfcLKdQvI3oVXHXPSvdPgL8N
rTTvBl78S/EkbahaaVKZILKJQCJEdRz82GAJ/r2pPC/xwTXfFF5pnifS7OfQtRMlmEggxNHvOFYs
T2z2A6+1Uq9nohqmeBvF8mGDRqpGWBUlqkW1YGM5YRAnJCk4OP6+nsa9m8cfs66hoXxbs/B1jLHK
urEvYO5AxEgbdkdMrsI+tdF8TvEeifA6OHwfoGmWt9qGnNvvJ72PO5mUEAN1xyxpSrX6Fch88Xlu
8SMhURyEjKnhgOoOPqMVNI7JcGEbjOoxjPfuOOp4Fe++M/BumfGj4dTePfDFktjd6ZGLfU7MwkLu
jjDMUOefvDn2qD4VfDbT/CfgxfiP4tif+yFIFnbBS3mlty/MBjB3KB0Peo9ogULHgsskqKhCkqR8
0hXqD05xjPpTZ1WSININ3AUBQQAD9K9+8DeKPDXxXuJfCesaRbaNdaqsdvY3lgjM0cpYDac8Dtzn
ua4lfgZr4+Kf/CFyRSfbtxk8xQMGHdgSt7bcN3701UXYfI2eawukkbO7FfLbKkYzyO/p0qRN8lwx
cBVxg557V77448ReCPhpq9v4UttCh8Ry6Wph1K6kYqROHK43FfmOBzjgcCsb4xfDDTl0f/hP/CDt
c+Fr4lZY0ziyYEJtIY5xnOcDjHvmmqie5PsfM8cVmDkBHUj8OCOSD/nrUYmQBfMjYOzls54I9ele
5+CvhlpfhLwLN4y8a2++1uIni0yxm/5enKK6MF6AkdOfwqz4Z8MeD/jXouo6TotgvhzxbBIJbaK5
m3m6UKxIGMbvp2/Gk6qXQfszwaUgSPGpcE5z0z+dMd0KshQlW7Ak4I719Afs4fCbTPHPijxLpHib
TZ4brTbaPML5RkdmYEEH02jn3rmrnxN8Mre9uk/4Rq/hMTsmVZcvtYgHPGRx0zipVTmdkjSMVHdn
k0rDygxUo4Tbng5GeD+GKEmjMO2YkAE5G0jP+TW/461DRtQ1WKbw9YXNpYeT89tcKC3mc5bIJ/Q4
5rnYVdonLrubJ/ix8vcdOvb8K3V+xzuXvaD3CGOZVBePZ065Gfb/AD0od3aPhQsfAGeSeOtKsDMS
qSFQBgAdCtdN8O/h9cfEXxVZaJak+dMTIc8Hy1G5j+WalyUTRJs5hyrx8q3yjHPU8fyqJI2hhJVP
KIyAmOMV7/rXwE8Paw/ifTvCuoXDeIdBlEdxazAhJFXdu2Z7/I35AelfP8aNJhYyqSkH94p4PXaR
nuRj8zWbmpbFJWeo1UkZhyIpF52nkhvcioLpWSTcOFlJOw9x6mvdvDnwJsbbwTomv+MNTm0c65dQ
2lnGELbtw3Bvu98jrxwea88+KPw4vfhz4qn0u+HnLhjazr0lh3EI+fUgfoahSCdkcks3ylflA7Oe
o/ChgZEHybs8Bsfr/Kuz+Efw0vviz4hTSNP3pbRlWu7lhlIkbODz1yRwK6Hxh8EPsPhBvE3hfU49
esre5lhvjGcrAFyTkcE9hx+VVzLsKMW9TyGJnjWUqrht2Mtx+P6Uxo2MGP3gXjexPUjn/GrT2880
ixBuGyu1V+duvP517PpP7Ld1cQ6VBqmsRaXrWpW0k0GlSko7bAcehJ4HT1pOaW5pyHhc8YLAsitt
ywjTKgCoJo47gmZ1UwkZVWAKlwTWxrGj3nhzVJ9M1GGS2vbZik8D8MjH0x2Ir0Lw58A7vUfCEOva
te2+hWEszQ2y3bBBL8obcpOMjBOMehpcyIUH1PJp2eV0MiHIGBgkAfhzUdx5nmpGpd2PG4g4b2zX
qvin4CanpXhG48QaXqFvrdraT+VILAec0K7WbJweBgDmuS8C+BdV+JHii30fTVednHzyFlARdpO5
ueM7cdO9HMi4xSOUdHIkZ1L7QSecDHccj1/lUE8kSshIZAzYJ67SAMcdhXttt+zVquo6pFp9rrmm
SyudqxJMjb3wTgfN6ccA968i8Q6TceH9VvdMvLQ21zbsUeOZdp4yOaSaZpp0ZkxRo5YhSxweCo4J
/wA/pVedzMztvyyL90LjJ7/WvRvAvwR8R+OtLvtQtEjg0+1Kgz3J2oxxwFP8XQ/THvWT8RfhPrXw
0t9PuNTgM9nqas8M0fKKVYLjIz3I6+tNcrHZnGL82ZNo3bMk498YqpLIJi5mUyEhix+8SeuOfwNb
GjeG9R8R6xZ6VpsZvLy5ZY44ohzlmCg/QEjJru9d/Zl8b6Rpeqag9gLyHTEe5lihlBIRR8xAHUjk
49ql6MmSa2PJJJdxAIYFSSckcjH+P86guBLMyjLKMlsA4CnHOAO1X54zidcETA+nTHGCPwxXYeG/
gR418U6PZ6vpmmST6bd7xE7qy5CnqMj1z7cUXREebqeczCTfubLBjjBfI4//AFVDJcywySiCWZSx
IbyWZDGO+cdq3fF3hHUvBfiG60fWbdrO/hQM8Z4B3Dtntj2qLwv4R1fxxqMlhodk+oXaRGWSNTgK
q9ST2/Gk2jWzZgz3st5EYp55ShOf3jblQ889aq3bJDKpMjt04UcnjtjtXc+M/hL4n8E6Z/aeqaRP
b2vm+UJZNpXcQSM465xXD+SZA5D7jsJBHQdcfXrSvEtRaHya9qtvGkMWpXkceNyRmdwAeOvOPT8q
zLsqpBkc+Yw3nZ0OK75vgX46vI4ZV8N3bpMiyx4U4IIyp5xg47e9ef6jFJbT/ZZlK3CMRKJBt8sj
scGhWFsP03xLqehLKdN1i+04TkM7Wly8W8joSVI6dOfequsa1qPiO7trnUL2+1W4iQxxS3kzyOqd
TtJJwMjPbPFdLZfCfxjqekQ6lY+H7i9tLhQ0MkKblkT+8D0I/wAawtd8Ja54cuVj1vT59JunQyLD
LGVJTO3I4GRkdqegMyLW6ltLuG6tXktbuBlkhuI3IZWHIOeoOcVt3vxQ8U61p13YXXifV7ixueJL
ae6dkmHOQwz8wPofWueS0uL6+t7Szjaa4mJjjjQEtI2eFAHfNXPE3hPXPDiBtX0q609JGxC0sTKr
4GevTvSTRm5NGLPtdiHVuOdynGB/nFdJoPxe8aeFbKLStI8UalY2sRLxQRSsFUls/L7Zz+dctJvd
gp4buf6VrxeD9XvbEX8Ol3U9m6M4uI4y0YUdcn6ih2vqNNmXrusajrWp3N3qV5cXt5OfMmurmQu7
t6sSeeKt+F/GeteANTlvvDupTaXeyx+TJc27AMYyQxToRyVHFY5fzHO5SCQMFST+JH60lvG1+6wQ
ozOX2hU5LEkY/nT0KR1XjD4reL/H2mW9n4k1651W3tpPNiScjCPgjcMDrgkZ9zXEICkiNG4WVSCr
Z5UjoQfWrupaZdabeyW97byQXOFZo5kKsOPl4PYjFVCpt1BTkF84POPUUnsWk7nqMn7UvxYRQv8A
wltxKiJsy8URYrjHJ2Z6e/515NNcSXgHmBVkYli6cHvknPua15vD+qeXKV0258kqWdmiJC45JyBx
+PvWQYljILDO4ZLZqI26DaKkwwCqksw6k+lBxG4Z1O3qMDqOf50+QBAwAIdjj3oW5ZWyUDMePw7V
TRlZXPVPA37UPxE+HPheLRPDuuJp2kJnEH2WJ8gkk5JXPJY96868WeML3xh4h1DWdVnWe/vn8y4l
jjCBjgchQAM4A6VneQbuZFiU7znaq9T9BUb2k6SStJBKmzjmNgPTuPXI/CklFCtqdV8N/iXrvwv8
THWfD1xbwX7J5XmywiXauQflDZAOVHPtW58Xf2jfGXxqsbS08UXkF6tpcmWN4rdInzgjBI5x82a8
ywYdg2fMTwoNSzWk8Sq3kyIhAKsYyAR0BzSaLtcSOZkuFdXO6NsneAVJHse3Tivepv23fib/AMI6
PD4bTk0dU8v7PHaKq7QpGP1z9a+f2jZQ67W3jquc475P1qRELFlbIwOT14oQcpF9oRl2sjEEYyOT
n1Net/Bn9prxl8CbC5t/Ci6bE13KZHlvLYu/KquMgg4+UHHrXk6wyxwCRyAZM7VxyRUbZ+RpH8or
gKBxk+lPlTCx13xW+KGtfFzxnJ4h1wWf2x0EJNrF5SsAzEEjJ5+Y81ytlqVxptytxBcS2t0jfLPB
IVZD2wfyqOOONlDjc4bA5G3k+n+e9NkQk7SoIQ7s45I+tFmyXvoe/wCoftveN9YlSa60fw/qFzHG
IWubizYu+1cKxIfk8HP19q8y+KPxW8R/GTxPPrviS5Wa4dI0jggBSGFFXAVFJOP161xQU7W3Fh2y
i0oEeG+cyP8AxY4x7U1Gw1c9h+Hf7UXiz4Y+FJ/DlvaWWt6Y03nxw6vulEBC7R5Yz8oGenSrXjn9
q/xb468F3HhSGy07w9pdzNHPOdJEkMk20fdY7iMeteLhCVAkGwHILE8Y/wD1inyKZIwYkGzGCwHB
4qWkaaktjqVxo15b3lpcPb3UJPlSwsVZD0DKex9xzxX0Qv7dXio6pBqk/hzQbrVoY1RL94nEysqg
Bi27k8D0/WvmxogYt4xt9c9P8mlJUsu9QS33UPbJ60lBMLs1PEHijU/F2sXWq6pdy3N/dSGZ5ZJW
clifU5OB0H0r0r4Y/tOa78MPCF34Yl0yy8S6NJcrcx2uqMzx27BcYXnoeTjpkmvHjDicInGVJGR0
PpQ0byyYYbSDz+HYVXKiW2eofF/49+IPi9aaVY3MVpo2haajRW2kafuS3GSTvKnqR09qxfhb8VNa
+Eni/Ttd0aZxNbPhomkYRzpyCkgBGVOSfqK5GECUqjJlB2U4/Gq6kxyM3J2+2SKGhpNn0Prf7Yuv
Nomu22h6DpHhy81WEwS6nppcXCKTk7Tng579f0r59luZWnYuzNK7eaS5y2eucnvnnPXOabKgbJLb
QV4zxR5QBOAzPgg+3HHNZjs0fQmmfte6s2g+HLHXPDOn+JbzRYPs0Wp6nM7y4B4Jx36c5PSvLvip
8V9e+L3jO81zW5H86YgQ28cjNHbJgDYgP3RwOlcbvmjZFYgKeSMe+KQyGRlCHBY9R0P0oUUTbW56
18IP2h9Y+EFtrGn/AGVdc0PVLYwyaTdzssKPnIcAZwT0OKv/ABK/ad1fx14EXwbpml2vhHQTM0l1
BpMrBbsMvKNkZ214wsRKsrEKobDZPNSxlGJhRgwGctkE0rJMrlZpeFfFt94J8SWetaTPJZ39lIrx
yxEo2ARuXII4OCCK+hX/AG35rfxDeeIoPAWjR+I587NTRjuUlSm/OP7p/GvmVgFdosgMAMd8nHrQ
0IB3+Wz44YHoBjjt9DVezjInUueIdbvPEuqT6pqN5Le6ndyebPdTuGZ27kk/y9MV7d4R/aqudE8A
WfhbxJ4ctfGsFlM0tpJe3BVreMqFCLlTgAAjg9BXz+6qV/drwBljjpxQsWcqC3Q4Ocfr+NHsYgmz
1f45/HzWvjW+nxSxpo/h/ToVjttGtmzDGVGN/QZbt7CqvwR+OWrfBfxGuoWO+9sJlZLzS5JSkF1l
Co3jB5Gcg15s9wDuEJLIFww6GmyNysn+rCk5x1PHek4JFJs+idV/a7ns/CuraH4M8KW3gq51Hy1m
1LT7klyincVA2jGemc968G02+uNHv7bUbWWS2u7OUSpOrZZXByGB65z+tUIlIkLY3DocVEsn3sAr
zg9+KpQQ+Z3Pp2L9s62v5dE1LWvAVlr3iOwihhk1Oe42mUxcrIRtOTu5+prxX4nfFrXfih40vPEO
s3LyXkrs1sm/KWsRcuIk6YCk4/CuJkiMIcNJkDBDEY47fTtUv2b7rqrMpHzBvTnmj2cULnkfRK/t
f2mveEdG0bxt4NtvGeoaTFJBDqt1OUk2MR8uNh6fL/3yKwPiv+0xeePvBNh4Q0HQoPB3hqN3lubS
3n8w3TMQ3znYCMEE8HkkeleJSbXfeRtGQN3Tn/JpMvsCleCfTpSUVfYzcu4jFYwoYFiXORjA/lTX
/wBVvVSGzkjtUjwuAo2uNp6+1MExRtkisScgGn1GmdL8Ote0vwr4v0/VtX0hdatbZy7WTvtEuVIA
zg8ZI4r3L9oj9si1/aB8I2ek3XgiDSr/AE9VWyvY7veLdcruULsHylUAI96+aFGRvZsqOgPanmVZ
l2lduMjJB9OM0+RXuPci81ncoBndg4xX1n8Kf21PD3wq+FN14Ktvh79qt76F47+Q3gUXLtHsdmwh
PK/zxmvkp0dZG3ENjpnIp6Bsnbgk8tnJoaRKuzX1zULS+1u/urKFbezmnd4rbIIjQk7R05wO9elf
s2/GLQ/gl41HiTU/Dkmu31sytY7ZRGtu4yN+D1OGwK8f8spuBUSHORjt0p7swWNcsJFOTjoPSjlT
Ks0ezftP/HXSPjx4tj16x8OS6DqLri+cyK3nkIqr0H8IQD8a8q0W9t7bUbS51G1W7sUmQywLgblB
GVzzjIB7VmTM3yu0m585bPU1O0Jg2yK2AcgYUkfXHeq5UNaO59c/FH9snwf8QfhOPh+PAEsVhaQf
8S5zcIBDMFbY20L28xv4vpivkV3Z4gpAXPUr2PcUwzOG2rgNuycdhTZ3IDMVOc5AI70kraCbu7n0
J8IP2lrPwp8Nda8AeMdMn8R+Erza0WnwFUaJ/M8xjkgZyQvftXU+Gv2oPA/wo0jXLv4beFrrQvEt
7Y/Yoru7ZHWJdwOCAOcfXnAr5VGzcQQSemAvBp8sO0cAMwGAFGCankuFz0D4V/FC/wDAPxIg8S7I
9QumuGe7R41IuFeTdIMEcE/NyMYzXtGofGf4Eax4zl1258E63De3N79snyE2B9wOcK3Q46YJxmvl
QFiu4Aq27Ofr/WpsCQFwx3HqAcn8aSpJ6spTZ6t+0h8fNV+P3j99Zuo1Gj2Ia3021bAaKE4yG6ZL
bc+1dp8Kf2jtGX4Zat4F+Illc614adV+xJFCJJoH3lmwSR8v3cc8Y9DXzpvDAMCdq8EdMjvSh84j
UMqjpgY45xn/APXTcYi1Z9JeMv2j/DHh/wCFZ8DfCTS7/QI9QkkOqX13bqk0sTfdRGDEjGTyen41
538CvjXf/BbxnBqSBtQ0u7Jt9RtpTuWSF2UOdnrjI5zXl0Z8maNwowygKR6f41JEv2hiNp46nPXm
hwTJu0fXlh8bfgX4O8dax420TQtSuvELwzT2sFxaj7Ok7g7R97Awcc8YGa+d/F/xe8UeLfHk3jC7
1GY63JL9oWSOVg8AIxsj5O1QCeAcVxDx4fKttXPH4UpKh0WNsZ4biodO+5SnJH1z4n+Nnwo+N3h3
Qbz4lC9sfFForpeS6daSOZlJAUsy9flUHB9cVxP7Qn7RUPjyK28IeC45tE+HmkBVhtoVNv8AayFG
WlHBPzDjPB614EoHnu2PNYjG339qZ5paQrsVFyRhgdo9veiNGMdROTZ9LfBD9oTQJ/AWteAfiW0l
zoUySPYXRQySQMUWPagxxtAOB3yQK2Lf4z/D/wDZ88AazZ/CmWbUfFWpzm3k1m8hMclnAUYZjP8A
eHy4wO+ewr5VIXyy6jcGPy89OvUVWmc4HylTkkgjAPFV7LUm3c9h+CPx88RfCHxkuozzTahpuoTl
dWsp5CwuUfKs7HqWG4n869ng/wCGb9J+JsnjEasLqCFmuofD0cL+V5gjCqmMYwDj05618b71jRtr
tjnJC5wf5+lAl+UleVXOCP5U3TT2LR75qf7W/iHxJ8edK+IN/A01loVy82naNI+I4V8soADg4OAC
eOoz6V6v8arX4PfHP4h33jqX4hW9hPqFtAj2m0gRlY1HOQM9a+LCXdXfa5Geh6YHSk3SSP8AOm8o
PlyOg71Hso30BSaPqb9sT496F490Pwd4A8NSfbtD8L28WdYEnE8hh8v5V4yAM556muv1Px/4K/aG
/Zl8G+DtY8Q23g6/8OXcUQFyy5mEcGzeoyMDknufl5FfFKxnEiAnYoIyx9utBnLsoeQqgJXYTx25
578fzq/ZrSzHzs+4tE+I3gP9mD9n/wAW6Bo/iKLxdqnio/Zo47KTcYkeFlLkdMDce/p6Vyf7Dnxq
8P8Agez8W+CNduFsk8RQyNFqVxIsUMZWFlwSTkZy3P5dK+TLeaQKFjBKlj8g4/TFVw+yYyszFWBU
tjGBnkE1HsSue+595fsz654U/Zu8Y6t4Qu/E9nrEfi+xw2pQTr5NmY1cAOc87mbA6elfGfxA0G28
K+K9T0qDUE1WG2uNv2iAgpITydpHvn8vaufYho1yzrH0GOPyNMd1Yuyt50/BJc8+vv8A5NVGnyku
T6H234S8e+HdJ/4Jy+JdAm1W2j1u+nmRLB5x5uXmQj5c5I4J6Hv2zXxHFIHBbIbAPHfIPf8Az2oM
zShUfc69lzxn3Hc8daRpooZvMSLcjYHJwfp+dXGFtBXPtH4feOtK/as+CkXw68SSwabr3hqKSXRX
3eXDIFh8qIsSOT8xyOeAOlWvhp4Z0P8AYt8Lat8Q9bvE1TxXL5unaVZW7hkJZN2Wx0G5R8w9fevi
VrqVHK7nBORw3XPanC7uMlZJpXVDlYi5KA464zSlTT0GpWNLxj4n1Lxt4n1rXtSaN9R1O4kubjyi
du5zngdhz69q55BIsrFRhjgKTzkf41IHzgsQpbPyg470rMIXCopZpD0Izt9h+X607WE2OkdJVkwr
FVIA59P/AK9R8DadrFM5AHTjt+lOnKiPYyqmTkBhjOcf4frUYdkjZRux78AfSqSJuMImuJDsbgN9
wn7v4VMxZkIAxz0AwD9PxNM84l98an5hzkdf8mnySGTKIXjbq2G7UxkbRIqsrDBZcZBpIAjSFCdr
EZVhyB9fSolBZFUAlmPUip5UIcEEZxztO4daBdRZIyJcSDeSchxk4xVvFuIZsbspycdPoffOKpLL
LH7RwiDZIclRgMcr1PrxUXwI+JzfFj9pq+8Qm0jszJ4fe2MaSMxASWIgk++f0r0D4MEXHjX43whd
zHWDwVwGBjl/MdRRd3FZI+SZfiJ47n8Sx/EUTXks0UolMqq/2QNtCbCPu4wQPXivbv2ifhtpXia2
+HfisWhtNS8R3VpZ3ptwFUiaJGLAD+LPc5ryPTfjvFo37P1x8Nn0lVmkkbF8j8r++V+YyuTxxke/
FfQvxelaP4VfBiYqGK6rpTYU4z/o4Pfp0FDumikla5x37Sl3d/C3TNA+HvhGH7HYT2y3Es0AInld
HcDkc9QCfwFTfABZfi/4U8Q/DrxdH9qi02NJrS6bJuoGeVurMT3x+FdD+0Z4zi+G/wAevAniS5sV
v7ey0+5E1uSAzqWdVIyCMgtmnfs2eNLf4hfHT4i69a6fLp0F5bWrC3cqSpVlB5X16/XNJO5KaOU/
Z5+HOleH/CXjHx/fwpreqaDJdW9pDOB5W6FVkDgE8MSQPw7V57ofx98WaL4+k8TXVxcahZ3ckhn0
yWRjbMj/AMKc4GO3WvbfhJC0vwA+LiOdzNe6pztIH/Huvbrxj9K8c8QfGPw9rnwC0fwbBo8g1fTv
K33wjRYnCknKnO45yO1NJj0bO2+OPwA0iL4weEINLmfTLTxddyxSRomVtiEQZTtySDj3NRftA+OL
r4Syad8PPBsb6JHYQw3kl5bYElyXQg5BBHUAk+vpXrHxhiMnxU+Bco3Oq6nJ8wHXMcfP+RXJ/Evx
noHgD9qOTUfE9k15pT+H4UULAJtshY4JH0Vh1wOaak7kuC6GN4RtYv2nPhFqL6mscXiXw4VhXVl5
a4UI77Xx0zg+tZnwL8J2Xw/+E2pfFzVof7ZuYhmzskGPJKymPcN3Und6dq7r9mbUrLWj8YLnTIjF
pU+pebaQFNoRGglIAFYPh9in7B2pksEYxzAjB+U/a+Ov4U+ZlWOE+HP7Qmq6n4ybRfE8La5oOvn7
C1q4jTyjJIo3fKgJwCRyeeKk8Wfs3rYfHfTvB1lqCR22oxS31vIVy0MK+Y209j93HFXNe8aeAvEl
r8JbPQ7Ty/E9pqVn9uZISpyFUOGfo3z46e9e3+MVKftZeBJFQkPot5E0nUcCU4HvU81tgUUzxv40
fFKb4XayngzwdbR6ZDoLtDNNPCk3nvIqPnBGcfMemPyqbx/4W0z42/CZ/idodstjqenbk1WEkBZh
FEu5ox65KkZweta2o3fgCy/aK+KCePlimtHNubZJkZwJDCm/5VB5IxyatfCpoF/ZL+Ihs1b7OG1I
xAAghfKjx1AJOMckVV7MfL1Oe+HvhPSvgx8KD8TfEFsdVu9QQLptqhBSPfG20yA454yeuBkiofhT
8QrH4zPdeCvGWmW1rJrLKLG70u32eXIgZiCR9OCPyrp/im8R/Y+8EyySlLUPY+bgEAx4YEYHbv8A
hVSPTvh5afH34WSeAbiF4pXn+1pbuWUkRHac9iQWzjj2pLXcNOx5RpHhnRPhJ8Y5dN8eRzzQ6cUu
I0tV3rIzYdNwB+7jPHPaugt/2hn1HxZBDB4X0NdMmvxHEskGXFuXA5wfvbcH25qh+1wqW/x31aRW
I3WVo20dCfLI/P5faq/grxV8LbBNBXVvCWqz6qjRJLcRTAoz7x833gcZxwAetacqMky/+1v8NdP+
H/jyzk0lSsGqwy3KwBPliZXAYfT5hj6V4WVaRtro6grn5fXmvob9s+11+y+JFhNqt1DPp1xDMNNW
HGYYw67g2R97JBJz3r58kRo5iyliOmf8/SuimYTeo1AsZfarMMckfwn1r0f4I/B68+JmvTMsi2uj
6flr27IyIflLLgAjOdp57c150HaZGkUJGAoLKeCVzjP15/WvrD9jW1A8DfEWNBu8wRjg9SbeYD+l
TUk9i4K+pykvxK+EqfEJ9HTwkjeGi62x1wyMoK7MeZsIzgN3/EVwPxg+CeofDjxRaQ2G+/0jVjG2
l3SJuFxvwQin+9jPHpitW0+FWhan+zDd+N31UJrkLbfs2RjiRVAwOckHP4ivefi783h74DSs451b
T0O48sTBH2/CuQ6LWPMNT+H/AIQ+CXgKym8aacdd8Tal5VymkpIEktojnduHcgg54z2rM8XfDDQ/
iH8NovGHw7iIn06LZqmh7g08RMh/eYHTgMcHsK9P+Ovw/t/ib+0/4b8P316dPtbvQWdpkPzbkkmP
Ge+cfrVz9l/wrH4O8ZfFzw1BKbqDT5ra3S524DjEx5POT83P0/Gi9mHLc8C+AvwgX4iXp1jVHOn+
FLACW7vLgeWjLkZVX6f/AKq6vw14a+EPxK1rUvDOlWt1o2qzxSLpmo3UqrbzShtsYBByd3UAjoe5
rrPCMm/9hnxcEdmCrd7sc4IkTOPTA/GuE1X4Hj4e6P8ACvxUmqpdx61qdgZLfaVMW8rINp6dtvOP
xo5mwtqeVeJvhzr2h+ObrwmbCd9XSYpFCEz5y7uHUjORweleweKvhz8O/grpOi6f40uL7UvEs6SS
3kekneLcDG0MNw28EfXmvefGtuq/tjfD6UD96+gX4+6OwbBrxrxR8DLr42/tFfFCG21CLTG02aGY
eYpYSboVAHBGORn8aFJt6sVrPQ88+OfwSt/AKW3ijQLhtU8I6kgaG4iAbyn2glJMdBk8E/Sk+Dnw
It/GOh6p4q8VTy6d4UsbeQtPF8jySABxs5yRgdO+RXrnw7tDP+w/44tpCjtHNfru+8oKtHyM/wBP
WpPjpYHUP2VPhTaRMIvtc2nwgKoVS8lrIBuHH8RB/CnfUdmjzGH4JeDvij4S1S/+G99e3Ot6XIrz
afe/uzLFsLHYrE9hxXiXhbwVqvizxHa+HdHtLiXULmRoIY3H3WAJ+YjhRwetfT/wk+C2t/An9qHw
bpN9qUOoQ6hZXk6NANoCiGRcMO/Y/lXqXwe0i2tv2rfjM8dugZBY+UxUHy90YY7fTJNTzND5dTwb
UfgN8NPD/jfTPCWqeMbm18RzrBFLAkW9PNdRj5gMLk5HNeKfF74a6l8KPFs2japC1uGIaCTaSs0R
ZlVlI9cZx2rq5fgl4k8TfDrxX8SoroS2uk3dy0wllImZ43+ZlbPbIwfbFfQXxot4dV/Zy+B99fJD
eXMt7pMbzXA3uyvAxcEnk+v5UufUtHhvgf8AZ0t7n4f3vjXxjrjeGtClEf2OZkJLhmYHIxnqB0/K
qnxR/Z+g8PeBNL8Y+DdTPiPw3cBhNdIGzCwYIoIPPJOMmvcv2ufA+rfEX4t/DzwPoMgtxcadO0Vs
JfLt02Pndt6fdQj8vep/2QvBN/4a8afFDwB4iX7da6fDaeZau++2WRizZVScAkYPT+Ghy6lWufI/
ww+F2rfFnxra6HpCSM5ZTdXOA628ZcKztyOmePoa9Si/ZS8P67e63onhvx5Z634n0yKZzpqYLu0W
QV6g5PAzXqHwkgi8MfsY/ErXdMibTNVEuoxpewoFnRUZFTDDHAB6V4ZpHww8d/CrUPh548a5lsLT
XtTtxDeQzEuVlw5VsdQyg5z70czWpFjxrW9OvNLvZdKvbf7Hf20xtpbd2G9XUkFcdAcivadN/ZPe
28I6FqPi7xTY+DL3UvNaKy1LakmxXIz8xHVdrYx3r6f+JfgHw9qH7cPw8hn0e2aO/wBIvru4UrxJ
Km/DnHU14L+0D4A8bfHn9oT4i6dpPm6zbeG5YVjtWc+XbwPCrbVB4GWD/XFPncthOKZ4v8cvgvrX
wZ8RjTdRJubKeNJbPUguI7kbAW2+655H0pnwc+BOvfGPVdQhsni0/SNOhkubzVbpikKbMYUtjGSD
n6A19OaLE3xB/wCCfHiPVPFBfVdU0T7Utjc3ufOtwjoqqG9Mcfl6U7456dceAf2QPhZofg1H0k+K
5rS31FbVPmvGntCzKx6/M2AcU1NvS4KmlqfPvjj9lvWPD/gm68T6NrOmeK7S0mWOddKk80RqQxLZ
BPAA6cV4hpmlXOs6haafYwve31y2yG3hXc8nBwFHWvsT9kfwp4y+FP7QsHw/8TQvp+m6/pl5c3ek
SAFJ1WNgkg9Odw4PevTv2dfhJ4U0X9or42zWemRCbwrcQrpHmAn7GZIZS20nr0xz60ua2ly3BLXs
fO0v7CvikapFpk3iDR7fVZlR/sP2gLONyBtgQnO7/PFfN/ijw1qPgzWL7TdRtprTUrR2hkgnyrjt
xnB2nHBHBr0m9l+JurSXvxsBv57i1dJG14xgJDKmEUZC8AFsHsQa+rf2pvhzofjeT9n/AMVanYq+
s+J77TtO1e4hbZ9ogkjRmBGfVjz2pKXcag7pHyX8Lf2XfGPxX8MXfiDTktdO0mKUQLPfv5XnMRn9
2ejDvnPasT4x/APxL8Fbux/4SCJJrTU4/MhvrY74Gwcbd3QHj8c19P8A7eGj67efEXwx8G/BtlcN
4bstKg1C30jTU+berTJvbbyQAg46e1bP7LWh3fxa/Z/+J/w8+Idu15p3hApHYwTKEntZFSaUgsPR
lXFJy1KUdD4a8BfDvVPiX4zsfDWgQPeaneuFXqFjHLbmbsAM969R8UfsXfEfwp4a1fXri0tbi301
PNlitZfOmKjr8or3z4RW0XwN/YB1b4r+HlEXjfUZRC+oTRh/LRb7yVUAg4wjHkd/wryP9lnU/iH8
M/j14Ja/tb6HT/G15HZ3Z1WN2W9idw7sofvznPufWht7iT1suh83aJoV54s1mw0rTrCa71K8mWKO
CMYbczbQD9CefTBNe2S/sG/FWDUJIfsFkrNgGM3AJOcc5HPfGMV2/wC06x/ZH/a41LUfh/BbwLc6
dHfGG+iMsaPM8hfbzxyg+ma4T4K+D/iF+0b8bJdUttZvYWa7bVNQvzcutvbhWEhH3vlzkADHQ+1P
3inK/Q8V8e+CtW+HninVPDuuW7WWp2EzQuhHGByD34Ixitv4WfAzxV8Zri+g8N6bNqB02MS3Em3a
gDHaq7uB68deK9w/4KHfFzwx8VPjHax+F4UlXw/byWN7qKKgS6k3BwVYZ3hRxnsc17n8O/COteHv
+CYHiOXTtLu9O8TXK3FzG1pGwumT7YCp4G4jaM49Kvma0I0Z8A/Er4beIvhT4hk0TxHZSWGqJEsx
SQEjDDIIboe/Sm/D/wCG+v8AxY8QLonhbTJtW1LymuJFhAAVF6sxJAUZI79xivUvj3+0W/xu+FPg
vT9Yt4X8WaHcyJPqMMIU3FsIlRNxJJ3E5JHTIPFfRX7CUNrH+x98bbvTDEPGIivFtnibF2iCxXYV
wcgeZuxjvUNtEW3dj5fv/wBjP4uabp1xfT+HG8q2he4fZcRlgqoSwxn2714lM5WBMph+6sBlW9D+
Nfbv7AU/xGvP2lNFTX7zxDdaHJYXwu4tVmmkt2xAdmQ/y5JPb0r58/a+ttNs/wBpz4m2miw2sGmL
qzLbxWqqIlAVQSMcdQ2feqi31CyieOrBI2Q2WO0jj9OKY43yRDaCvQk8jFTNGCWQHYygAt3P0qCU
FMoxLAj5TnH0z6fWrIV7m74L8Ca98TPEEWjeHNNl1O/kV5RbQKCwjTG5j6Y6+tdD44+Afjv4c6Id
b1zw5eaZpgcRm6mG3DMeAR79sV9nHQbT9jn9ifw3478KwLfeM/H8UEUuqXRxJp6XFq8gWLHI27On
r17Vy/7FPxQ8T/FD4g3fwd+JK3HirRPFFnNcrPrO95bRooWIaPd1BOPx/Gsm2jZK9z4Wkh+yICzK
UK5LA9V69q9Qg/Zl+Jl3FDPbeDNSkgmiE6MIT8yFdwIH0PevrP8AZo/Yo8LyftQ/EnStZuW1bRPh
xcwPbWtxGCt20scjKJP9ldoOO+BxXi+u/tzfEu/+MqeO7GWez8MWFxF5Xh6GRhZeSi7fLY7f4gCc
46kelTzjtZnzXe2lzaTzW037p42MUiMeVKkgg47gjFb/AIO+Fnij4h2N1d6BoV7qkVq6xzTW8eUV
yucZ9cA19pftffs06F4g8Q/B3xtomPDsnxL1CysrvT4VBjt3mRXaQepzJz0z160/9sHxnP8AsjaV
4U+B/wAMYX0iZrODVb/XLU5urx/3sZXYQc58rcTk9hRz3B+h8PeMPAPiDwJqEFn4g0240i8uIjJH
HOm0soOCR2xmsjR9AvfEWrWelabaz32o3Unlw20K5eRuTgD1xzX6GfCzTo/27P2ZvEdr4wtxaeMf
AKKsXiAYaa7UJLL8y9twTBGOTWR+zP4J0P8AZ6/ZUvv2k73TI/EfiCYrBp1hNIVS0JuWttwbBGSW
yTjoMUudijC+58aeIvg7400LS7i+1Lw1qVjYWaGSe4lgOyMZ2/MRkDn19a41k8uUBUUlMcEnB/Gv
s/8AZX/ao8VeOvitb+BfiST4y8PeOJV0l7e6jQJbs7Ellwo3DDgEfQ1r6J+wdplz+29dfDSbVt3h
vTrOPxG0gjId4TIpFtjPAO7BPpmhTNLeR8hWfwf8Y67pNvqel+FtRvrG4iLw3MMBKyKO6+vSua1T
S7nSdTutPvraS2ubaQxSxupV0cdQw9RX178c/wBsjxX4d+L1xoHgJE8LeDfBd0dIj0q3hjdbhYJW
RixKEoCFxgCus/bC+Fuh/FP4BeHP2jNDsk0C914xQ3ukkbhNLLI8aylx/uenQjgYpqZKifEvhnwl
qvixruPSdPuNRngQSSx28ZdlXOAxx0H1qbxH4L8Q+Fo4JdZ0q505LkskJuYioYr1AJ4PavvL40Wc
P/BPv4N6P4W8IQfbvHvjOB5rvxOQoaFIpE+VYyCD/rdo+uTSfs6a5/w3b8PPEnwu8dQeZ4s0m2fU
LHxGIUBh8xxGqlVUAH+lJzCyR+ecUBd40Cgs7BFXjcxPTFdVdfCzxbaWtxcy+HdRiggj82aZ7YgI
oGSeR2r6y/ZD/Zz0Dw/4X+IHxn8Y2ra5Y/D+4vILbShj/SJrfbJ5x7emAeOtZfw6/bv8X6p8aZJv
FFomp+Btfnks/wCwxBEgtknfZHiQLltoIB555xRzlpI+MiPNcugAXHIPp65roNF+HfiTX7dbvTNB
vtQtCSgntrZ3j3DryOuK+vvjX+wbbab+1t4U+HugaqLDRfF6T3sIEZ/0KCIHdGpzyflOOg+tW/2o
v2iLr9nPxWfg/wDCW0i8K6b4RYre3s8MUxvZpUSXPzqSAN7ZOcnNVzMnlPiPVNGu/D+onTdRt2s7
6NQzwTAq6gjIypGRwRS6TpV3rtzHa2FpNeTkMwjgQs/HU4HYV94/GzwVov7VP7LL/HvT9Ph8P+IN
Bha21qJQD9t8kImVI4HLgj2OKj8K+D9K/Ym/ZX074qXenR698QfHMYt9MuCAyaZHLb+YudwwcFSx
9eB7Uc6Fyo+G9f8AC+uaDZpNqWk3VnBI4QSTxFU3YJwWxweDWZlFIzJGFzkFm4/zzX3x+zL8V2/a
0XWPg/8AE20i1a/1VJdS0zUoLeOIW5hjzhtig8MMggGuW/Zw/Yhsde+Pvj/QvFd3HqHh/wCHUgXU
IlBBviyOyqPb5OaXtLhy2Pk22+HniW5iEkeham4zujK2z/MCMg9Py+tYk6gARvgSAEOC31GPzr7G
vf8AgoXrcXxgtNVtNBtrb4c2zKg0E2sRmNuqFD+96BifmABHHFTftZ/sjWOl+Ovhvr3g9odM0f4k
XVvbW2nSglrWeZEclj1x84PHQ0lUd9UO1z4803w/qmsNusbC5uooj80kERdUPYEjvx+lJqek32l3
MMeo209lJIvmLFcRlCy5IDAEcjg192/tC+KbX9hfwjpHwp8A2cB8XzQW+r6x4gvLdJklLB0O1W55
KHHoPfmrFhoWn/t8/s26lrs1nFovxA8CQBLnUxEEgul2vMwULzghOmPlPTvRzu4JM+BIoJry4SC2
hklkc7UiiQszt6ADJJ+laF74U1fT7OaebS7qO1jyZHkgYbBx9444GSBz619j/swfCrQfgv8AAK5/
aY8YWX/CQfZyo0bR4MZhkMzW/mMTkEkt+A96sfs8/tTXfxv8eN8OPiVY22raH4yYabaGxtY4mgkc
5DMR27ZA9KOeW9tBqCbPh3zFwhMgLcrx0+n51oWXhjWNQiFxBpd5cWzA+XJFblg4GOhA569a+r9J
/YNvNW/bAvvhRPq0A0i2t11u5uowBJ9idxiNRjG/DAZ6c1p/GP8AbHuvhJ8SIvBPw70Cw0/wX4Pm
GkTQ3lukk108MjCQqQcLlQcZ5J5NNy10DkR8TXVq1heNBMsiTocPGy4ZW6YP0NXNI0i81mcW9laz
3U2CxS3jMjBQeWwB0Gf1r7P/AGxvgvoXjb4VaV+0T4NtW0rT/EMsQvtOnABaV3ePfwcA5QdOO/U1
1Hi/QNL/AOCe3wWsYTa2mtfFLxpBJs1KeIPb20KSISuxuwWToOpHPaj2l1oJQSPgzUdA1HTSn261
lslYnYbiNlDD8Rz2qgI2fcq4ZshcjqRnt/nvX6C/BHVNP/b9+G+tfD/xBYW9l470aOXVNO1q1t1h
gUOwjjBUHn72CpBBGK4H9k39lvTLhfHHxJ8czJd+F/h5cXSSWFvy9zPbqJN2eMjjvgHIpqZLSbPk
2TwjrCEt/Zd8CmePs7joM8cdMc5rJ8tZkWVZMA8ELzg/WvtPwf8At63Gr/GG6l1zQLQ/DrUJprZN
NtrOJ7mON12oS+RyO/Xgn2ql8ef2FLjwv+034X8CeGry3t9K8ZvLNpm4s32KKJR5gfnJIwcYNJVN
bD5T5ItNM1DUrdfsthPdYO0G3haQ7u4OAap3dnJZOYJ45IZyASrqVYegII7191/tE/Gew/ZFvLD4
Q/DbR7Q3eg5n1nUdUthMtxNLCrjZznq2STjsBxUnxZ8A6J+1r+zJc/G7QbKLQfEXhi3MOtROAgnE
EO9xGQ2OfNUqTirTdyGmfA2z5v3bZ4/XvU07BX3EZJ/lUcgQA+WNy5zk4GQadGFkjck7VXJyp6U9
hDZCkjjBKIPlPHUYrUtfDeoXUJuItMu5oSMLIkDMrdT1A6e9e4/sX/sv/wDDS/xMntL+68rw5oca
XepxqcPLGxIRFPbkEk+ler+KP267Pw18YLXTvC3hOxh+HOkPDZPDNZr9rkjT5JWzkjnkD1x15qHO
w1qfFM1lJbnO4h8lSOhHODkexFLaafcXvmG2hkuJUUl2C5CAdz2H419j/tb/ALLtk3/CF/ErwPus
9E+IdzbxxWEx+aO6u8yqx54X5sYH4V3PxWudF/4J6/DLSPBmmWNtrHxP19Ir671C7h823WJXKsBz
kcAqAPWl7RPRFW7nwHcadeWFqj3lvLbmTOyOSMqG9Tz+FQSZnliSFCSzbVVecE+lfoV4b8OaT/wU
F+A+pRxabZaN8T/CKLM1zbxeVbOJZCc4yd3yxcg85rgv2Sf2evDuj/D/AFf4+ePoU1Hwl4bkkNrY
Q8vLPFIq7iucFc4wD6ip5w5T48n0q6tEaSaznjVAXYvEy+g/qKqPIQq4UEAfM3fPYfoa+5/g9+2F
p3xY+JNx4S8ZeFbCLw14odtLtP7MsVE0DzyBI97E/wB1hnHQjiuH8XfsJa5Y/tWQfCjTLu3Wz1KO
TVba5kk5isFY8ntuABFP2iC2p8qx2d1eRm4htXkjVSdypkHscH1yMfhTri2ktpvJm8yMg5MZU8Hr
0r7l+OP7Rvhz9mzxFF8Lvhz4S05rDwoZLTUrnVbfe09ySGAQhsnqclu5rP8A2pvg5oPxW+DFv+0L
4GgTSrSVTHrFncoImWRHSAFFGQPmB788VUZ6itqfEyxPMHSFWlZcnaoy2OOcdeponhmhQyvbtApJ
X51bGcZwPXrX3n4d+GXhn9iT4EWnxG8X6RD4g8eeLbV7fRreeLz7eMNGsi7lHTOASeuDjIqx8HvF
uhfty+Ate+HfiLQ7LSPG0Ak1TTJ9KgEUJCKFUMeowzYPsRR7XyCx8BCEiWPI3AnGQ2KtW+nS3O5f
s037o8uIyV6EkZwecCvqn9nP9im/8bfEXxbF4xlaLwp4GuHh1vyHbzZWjVnCoRyVO3HHOAeOa7KL
9tvwbcfGVbc+B9GX4b/8e8MwswbsRFcB1Gcdf4cDjvmmp32KUT4akgEzbThiScux7Y/TpTmt/MWV
4opCseMsufl/HsO9fWH7Sn7F2p+D/jF4b0rwk0cujeOZQmi/aHO5JCqMdwPIX5yfw9xXovxb8Q+C
/wBhnw5pfw40Tw7YeK/iBcCK81qbVIG2bXj6Kwx3AwvTBNS6ltkLlPgxX2RyBmBIGF4PHOfrzULK
7uEYFWfhQe49q+9vif8ACzw9+1b+z43xU8G2lvofiDwxaeXrWnpCyQqFjMsm1duScHgj19qx/wBn
D4FeGPg78K/+F+/FO3W80FkRdFsUj85ZnfeqllAzksAB9atVEFkfFDxLbwgmJ1BAAY9Dx0qJII/K
DICC5xk9q+9vgt8UvA/7WUerfDPxR4MsPB2p655a6VLpCkM7IGkbc2BgjYPb5iO+K8e+GP7EHizx
Z+0Hq3w01IiyTRSs+rXcMm54rdiNhUkAbiCD6Cl7RdTOyufM7Q7pcgllAA3e54FTtIFzC6nJ5HHS
vujxJ+0j8IPAPxS0vwXpXgTTNY8HaP5dnfa1LGVkEi5WZwuOcY/HmuG/a9/ZaXwjr2geMPBEZ1Dw
d42uUewljOFSe4LOkar12kEc9h701OL3Ktc+ThCrcI6g57fe56A/U0hjCTGM5IBGQeuefTtX3jrP
w/8AAP7D/wAJ7CDxx4dt/GvxP8SpHcvotyw8u2iGQxEgQgBePcnpUfiP4O+Ef2tvgXB40+F+hW/h
vxj4fhzqWjWyNhw7DAY4GcKjMp570udXNYx6nwikboSVkUOWGVJ5/AVHdKRj7ygnA46tX1v+yP8A
st6T4m8PXvxc+JUyad8N9EmMo80/8fkiMUKEj+HeBxyWNegfC7xP8EP2lNb8Q+AB4Ht/AepapELf
R9TUiR5JfMG0KuAQ2SDjpjOaPaK+2hejPgraxgyz9P8Alnnn/CiWZFVQ+QFO3aOcH/8AVXvlr+x1
421L9oaX4Trp7LfiUzNcqRsjsvMx9obOONpHA78da918c6n+zz8CfGOg/DmbwgnjOTTgLbWtd3qp
gn8wo4fI+Yr1IBwBxSlUs7WIcEfB8iKjBEmwHAyx5pIgpUuWCDtzgH/JGK+pf2vP2XP+Fe3tp418
HAah8OtdcTWk9sNyW5c4SMZ5OcZB6V3fh/8AZ68Bfs2fBMeMPjZZnW/FWugNpfhZJNkojDAEhc8n
5lYk9OKj2qTJ2Ph6coIiUBVtxDbeeM5zTZPkZVVmLN94A4zjtX294y/Z28EftDfBOPxt8GNHbS9e
0lX/ALR0AyFp2DsEj4yVHCMwI68153+yT+yZN8YL+TxP4qk/srwHokm+/vJG2pclHG+IMeByADz3
NP2q3FytnzG6bY945GSO3+fxoE4hUloiccLJjgeoP6c194+Cvhv+z9+0FqviXwd4M02fw14gVJP7
NvLx18uebdtXYNzbh8mdpAyOlfMsn7NfjI/GZ/hxHpskmtPc+QsQRlVIjIUEx/2MfN6Y71ftUxqD
6nlTOXbKkuAS2CuOPx+lMmK+YeSFC4Kn1/zivu7xL8IP2dvgx4q8OfD/AMW3moax4t8qJNQ1Cxbd
DFKWKkOQfk57EdPzrxT9rP8AZjvvgr4xS80vfqHg3VpDNp2owjzEVSwCxlxxk547mkqi6iaPnhIs
u6ISBjpnkn0pHDrHlBtKnAJHvX2N8PP2UPCHw9+EB8f/ABpvb3S7a+ljGn6RB8s5TeVLMmNxzkMQ
Ogqt8Zv2VfD8nwVsviR8KJ5tS0KKKSXUIrxv34TOA4AGfrnsc1anF7Eux8iz/MitGo3sSSoHAHqP
1pvnsIVZS205G3OQefSpEQLg5CNtySD/AIc+lK1t5kQZe33jnIJz2o6giKNVZ1B+c4ztHpjt+lIr
pG7qM+SQQu76Z444FNlHk3BR0Plg8MByKBA7fMSroenzc4/Hp2qxMVUMgBQqQxI2ccdv5/zoSNDC
fNIQrx075/z+VLIfLkXA4Q8dsGlkTc7SHLFjyCOo9frSEJNbIFaRAyAfdQck/wD1uKRSUdicnPzA
7eT2/KpUkaGclPu42/NyagmdhIHAZmPzBSMYHpSKYB/KldmAG9sbU646c1LOQtoyHG1unsaiVnCk
8Nt9fSrGoTTi1fzAq8cEDrTJKUSv5YKSEDGWGOg716D4YuPL+FHjYM5CPHbjDr1ImTC5z+P4V5zE
xiMcnPAx7H616N4ajY/CLxpNDgNvtUbewJZTIGwAfofyoA8+ZmTDHPoFHJ/H0pQY/MPmN5Y7BulN
2kN86hEZuuMVJdAttJLOcdccVk3Y0Wp3E0bQCBzKmPs8edoAOccjHTgjNd54b1EQmPZIZAy8s7E8
8np/9euFu3uAdISSJD50ClXJz8ucZz2659q7/wANpAYJJ5iixRFVXcvfB4OOvX0rfVoxkonU2Ooz
3dswkaaWPd8qqBknPGOORyOlZXxhtZYvhfqYmMhkN5b7VPzc7ZCc8cdMVv2Gq2yohBMcUpAWFEwq
ejKQc9v84rG+LGp/b/hX4gUy+SUu7UBlUt5hywYE/wAP3j+lNuxgrX0PmEzSNGygkjqR24/lV3Tw
UuE8wFmccYHXiqHzRMQBlWyAcZq1CSJI5tu/bjhupx6VmtWdSbR9EfDjWbnwqlnqGnzS2+oW0qzR
SjjEg5Bx7YNeleJvG/iHxlfRaprl7PqV2sQRbiREViQo/ugA9M5ryzwVbC6sII2KrMV2rGgY5JKg
YAzyTXcxade2EaxX0M9o4QsYrlGUkHgAdu610LlRk+Zs7KL4seJ4fAieEm1GT+wZDhbKTI2kuWwO
cAZPcVi6Nq19ous2epW1w9vdWEy3MTqo3I4IIABHPI44PWoLXTJ2slkWBmiQ53upHPHAJGDz6ccG
mkNdK/k5Vmf5dvO72HoeDV+6iZKXQ9Du/j9421HxfZeK7rVvNv7aH7NBLJCh8tMEkBcDOc88f0rp
Iv2tviVeMj3OrRXIZjkNbRMFB6D7oOePXt0rxiTTpoII7ceY8sZ3rGV+ZM5wcdae+nzw2yyGzuVl
dGcqEJjwD0z65FZctPclSnG1z0L4mfHXxZ8TdNj0bxJqElxZ2j+ZEDEqEErtBBUD5ceuetVNX+N3
ivxD8NtP8IahqDS6NaOi20SKqFdgOxVwMkAZ7964doPMkVnEgXywPMmBB+mDzn/GntkQpJ5bGNNw
DYGGOcYwPx/Cq91PQPelqzsPhb8ZvFPwjv8AVm8PXYtjqMSrcB1DhioJUDI4PzHp61yd7evcXU87
SyXPmOWdyMFmJ4GB0A6fhUAilucpGjjLBmKrySR9f/r1GXJEkPIkC5A3YJPtVNLcj2kluekar8cP
E978N7HwZJdhtLidViSGDy3k2n5Q/GcAd+/cV5+ZFjRcxNlsMFU46dhzwMD26UyRp1sxJJbyODyr
tkH/AD7+1EokulLbEkw21l37SMjAPPapSSL5lPURAI5nQAvE4GCijJJOfX/PNIkhnKpK7R7sH5B0
4Ocgf0PrUcaNZiBjEzHPzMr7zu44/nT5YjBlkD5TP7thtK5HPPPpRoJuQ6RYQBG7bFUkKMEluOv6
d6a0BaXh2t0chiSvIGe4Oc96azpGrlInbIBHt2Ge35etTcxIyybWJBIw/Gce/wDL/GmkZNp7kSW0
jgiVlwvCcnJIIxjA4/wzSb2ZSc7dhHJPJOe3vRaM8+zduCxhfusM4Gc5Pc8Afn61bYsXdo4yjct8
/BOc981rFMzSV9CswDoT5oTByTIcZHSrEhld2GSxQcEHpkYwR+NNjjeBZNybgp5wR82e36UkcAml
QRKINmSR0Gc9vwzSleOxdyBY94b51kcHqhJGcE4/8e/zirtviOKVYyscpIwQOpOc/T/9VOuopVZG
jcYXMgIGcjoTj/GoJrNrmNYw0nXcxwAcZz/n6VlZvcdmtj3bwP8AtUaz4P8ACK+GdW0W08VaZb3D
Sw/2i242xxjbHwehyfqTgisn4q/tEa78VrHTNOFrHomj2UARbG1mbysqflYjPOASMY7V5CkUrxSu
sjSNuO7nPbqfrz+VTJB5Ee9mQsDtyCMjrn+tVGnEu8up2fwx+K3iD4QeLrDWdGuxG4m/0mybd5N2
hB3K4APPOR6ECvYX/bDktJNUk0TwVpujavfwywNqtvMRIhbqc4ySGBPUV81w7ZJ/OY8RHK8HJJzk
jHfrThMkjyiQkOCyfeyrepGPqPzqnSinqZuTuaWoareaprMmpX9xJJfPObppXGWMpbeSDn19DXue
j/tXR6h4Y0rQ/GXhW38X3GmO8cWpX0oEu1uCeh7HHvgV8+QhnU54iQ7d4YYIxz+INPEAabaPnMi7
t4BBUH171fs4PoLmkmeufGf496v8XmssxHSdFsolFvp0XzLGQCu7I6noM8cCofgl8ddV+C+r3Fwu
7U9Gu1YXWkyOAs+VCq2cHBH6g+1eWG7kmRCA0ccY8sBuuOKsxkQhRMGKtksx6rjJ7D+lQ6UDojNs
+iU/af0fwtoOvWfgfwdB4a1e/RF/tNJvN2FckYVlz0J/OvB49fuYNai1G2mkS/WY3X25WGVlzu34
x1ySemOlZCF4WVycGVWCsucDp17c56YqeykU+W23zhkjKdCenpUeyiZczbPo25/ad8KeNbbRp/Hf
gyfXvEmlW6I17DKkQuG3H5sAgcDn/gR4rhvjt8d7r4ya87MktjoNu5+wWYCgRjaAxbAySSCec9a8
pztlkdgGJQAKvGfoalDrOoZIZRGAGUOBzjP9aShFMlyk9D2H4G/HcfDXSdT8M69Zy654RvYJt1qu
zdFIy43KWPQjI/AV2i/tE+EvBXgvVrH4caDdaRqd7KokvL0JJlQMEAAkk4OB296+apZJEEjY4Hyv
heDjnH4gGlnUCGCVNyljyc8npwRR7FXK5nayZ0HhfxhrngbxPa6/p97Mms222ZZ5XZy3GTvGeVPQ
g8YzX0T4k+Mnwh8e+IdF8T6/4e1WDV7eOEzR2lvG0LuuGyRuOQDxXy9dKJ4sodqjCqOue2B7UkYY
Aq+Nq5LZXBrT2KBTkj1v43fGS9+LniMyQx/YtBsw8FlZqdg8knILLnG72rpvhn8dND1D4eXvgb4j
28t/oIQf2ffpAZZ7dyxyMnnAGMEDOMivBXZ0IVIzv3YY+g9/fOPzqUQuDMock4XKheOOhBP1/Wp9
nFC5pM+kY/jR4K+F/wAOr/S/hpb3kmq6hcGO4vr+Io8UbR4JTB6ggcV5b8Nvi9rvw18Xtr9ldPqU
zszXltISEulbO4SYPP3sg44P5VwkcXlpvkYyFhkHnqfeopIioQRsAQ3TdyRkZpezTDmlc+prvWvg
TrHxJtPFkl/eae0jpJJo62TCByEJbjb0Lck8Z59a8v8AjT8Zbv4qa1bm1lmsNJ0zYdPs4HKLCELB
XwDw2MAjP8IrzK3TZLsdzhjw2ORx0ppkYNIFDR45Jx16f4irdKKCUpI+kdO+J/hT4y/DeTQfiJet
pWv6aUFlryRSTSzRliWyQvB6AjvkVZuPiZ4R+CXw3l0vwBenVvEmpK63WtMpSSGMHcnBGD1wBXzS
hctiRehzvGOnv/ntUuI2UHHnbwOvAGPShUosSm2er/BX42Xvw8167tNTMut+HdaympWE0mFZnARp
QcE7iucjvx0616Np3w0+ENn8QpNYfxnYXHh+OWSVdFDgYBU7V3ZzkMScHnpXzHOgitnKlih53kcg
e2KRlCv56zlZQxViwycEevp06Uex10LU7Hr3xT+PGveOvHFlrNnJJo9ppUok0m3XbugO1RlmA+Yk
jJzkcgV6BrupeF/2kvCVhq15rNh4V8c2LRWl817KIlu4lQ8oDg7csSMAcgivmSSNGhdg8nmgKGYn
P6E08yqykguCp4JOMHtjHTpUuklsCqtn09rPjrRf2fvhtH4d8HahBq/ijW41lvtSgZZoBtyjdOjF
WOBXM/AP4tacNGn+GXi6FW8J6iDFHKoC+Q7SbmLtn7pODnjFeEy3ZVkLsZPMTO5mJK89/ToeKAn2
mEMd3lsx5xkflSVJFe0bPpXwV8C/DXhLxfceJNa8UWuo6Do5kvraK1ukZ3aNw0eFViS2AMjua4/x
n+0xruq/FyHxdp9hFDDpBeHTreVCS8Lghlk55Y7j06GvHbl1kDEFpGAPDc/pSB9uIzFhsF9q8Lz0
6dO1L2SW4lNpn0/8Tvh7YfHny/HXge8jhutUOL+1v3ETIyqqAkZyBx1+nrT/AIieMNJ+A3w2b4b6
JcJqOu6nEZtSd8tHCJYlDbWHckLwe1fLaSR3D+W2XI/hYk7Rjn+QqSdmdN24ykgABm4OMADn2o9n
rqUqtz6W+FPjDS/jZ8LrX4V65KNJ1KzRG0ueBS4m8lWYBieAQR0zyOlL8I/gfL8Ldcl8ceOrhNKs
tBYXMbwsJDcOQyYIxkdf1r5nim8ueGSKV4XjyVKApICMgsDnOMEjj+tTTaxdXLTQzXNw6MuGjklZ
g30B4zmhUr7Mbkety6tpP7RHxqnudd1CLw5a6pHHbWciDepZPljUkgYLA5578VDefsx+N9K8YfZY
9Oe7tLS+TyJ1mQLJGsgKtz9M/hXj8EgiuUVWI2fMFD8r6HP4Vtnxt4ht5D5fiTVY+dwBvpcD0Ay2
cDPY9qbpyXUlTie5/tveIdM13xro2mWV1HcXulwXEN0i5Pks7RMqn3IXNfN4RiSiy4AG4kjntxVi
+uptWlnurmV7mWV/Nlld2Z3b1znJ7etQxCNzJht7dTJg5A9M1UVykNpkdzCGfIUBCMgYwMH2r6C/
ZP8Aifo/gzUNa8N6mv2SDXynlXhfakTiOQfN6Z3ADpXgDq8pyzbeNoOO47UjoQGVsjsSnUfjRJcw
ovlZ63q37OHi2y+IDeD4rSS7spHiii1Qg/ZyrqCJACTgAA5HbbXo/wC0J8UdH0G58AeFrRxqtz4S
ubK6ubm0kVo2aJVjKKcnkbWODjFeLJ8dPiBbxRR2fivUVgjAU7mV8KOMAkZ6YHp1rjbuZryVpMuZ
pMszk8sSSSx46nPvWSptmvtOh9NftLaZcfFXTPDnxK8JvJfWAsVs5bW2JFxAXdnydpyMZxxjv7Vo
/ABpPgX8PPEfjTxdI8MesiEWtiGDXMhjeVScMeTlx+Ar5w8GfE7xL4FtLu38O6m2lRXbq0yLGrrI
w+6cMCB3Bx1zUHjT4ieJfiBNYyeItRGpPp6yLETCqLHuIJ4UD0FHsn1D2p9CfAfVtP8AHnwD8R/D
SxkFp4mu4554YpyFjkDshABzz93ke5rzP4cfDjxr4r+JGkaNKl3GuiXiTyLeSMtvbpDKCxx93oCA
eTjAzXmulavdaJq1nq9jL9l1GycTQXKKNwIIIwDx1A616FqX7SvxB1S01K2l1O1+z38LQymOyRGK
N94bhzznk0vZ9EP2h7R8QPjV4Vsv2qPDGsS3M7WGh2V1pt1cxRbkWaQsoIPUgbhyAa88/aX0/wAX
+APjBrfiTS7+707SfEzCaC8sC2JY0ijVlbbyOTnmvA/KhdHjJ5C/NjA3HH6/1r0jwn+0N4v8I+G7
XQA2n3um2bOYE1GzE7IG5Kq2enI49sUeyaLU0z2u3jT4Tfsf6novi2RrPVPEMtwLK3CszyeYI2XP
deFJ59qb45mf4p/sj+Fx4WeW9v8AwzLaDUY0DpJbmG3bewzgnGQeK+cPiJ8SfEHxK8QT6lrV0jyF
FVIY02RxoAQMLk4OOv0pvw0+KmufCjWZNT0QwyG4geGa3uUZoJwRj5lBHTHXNJU5XuS6ivqew/sr
X3jD4j/GnTPFes395q2maFYzw3N9fH/j3DxMFAJAzk9sV6X8EPih4Y1v9qL4jvZ6rGYdeFoumysu
Fu2iTDhD3x79q+f/ABn+0x4l8QeErvw/Z2eneH7G8mjkmm0uJoXkVQMqW3E4PQ8dK8ftby4sLq2n
tZZLKa2YGGSJtrxOM7XVhyKlwl1KU1e56Nr2seP/AAVfaz8NWu76xg1K8kZ9GWHcZ2kkygGRkhgV
+6R3r3n47anYeEvgp8HfC2q3CW2vafcaVdXOng/vYIo4yjyMvoCSPw46V55F+1zq66tp+o6j4W0P
VNZsoIkXUbje0zMoA3FugOeeK8Q8b+K9T8e+KL7XtZvZbq+u5S2ZHLLCu9mWNR6AMR+FRyNsbqM+
rv2u/E2saB4p8DfEbwhcZsYrGSGHWreNZo1Ly8LyCBuUsOR60/8AY38Q6rd618R/iP4wuRDY6nBa
+drNwRFBJLG7qwXoOBsH1rwL4bftC33gPwje+E7/AEm38U+Hbh0lg0/UZmSO0IJJ2YDdSc49aT4p
/tE6j478Hab4W0vS4PCvh+3LzXFjZSF0uWZtwzlQCAQDgjrSVOTD2qR7d8HJovFX7H/xL8NaWwv9
dM2oTpp6MTMyOytGwXqVPHIFeG6N8TfHnxN1PwD4DumN/Y6BqVu9va29vmSMQyBDvdewQkcjtXKf
DH4qax8IvGGneIdKneLY6ie2RlC3MIYFo2yDwcfh1r1mP9rzR9B1TXNd8OfDi10PxRqUEyJqrXm7
y5JMneVCHoef0q/ZyHzp7Hv3xE8V6Ta/tzfDgS6lbKkGiX9tPL5qhYZX3bUc5+Un0PpXgvxm+MPi
v9n39o/4nXehC1s08QzQ721C2ZlkijgQFkIZR952GeetfNmra1d67ql1qOpX0t1f3cryzXbHDPIT
uLHGO/Ne93H7V2g+MvCeh2PxH8FSeLdY0qOaKLU0uI4mdWI6jAwdqqDUuDRSs3qek+DIptB/4J6e
LRqsb29xdG8khinIWSbdMhUqO+V5GO1P/aV1O5tf2U/gjrujot2dGuNOuGZRvSF47Pgvg5GGA718
5ftB/H2/+N+u20cUA03wzpkaxabpQC7YflAYk7RluAPTAGKn+Bn7Qy/CbSta8MeI7GXxF4J1m0lW
fSRt4kZVXehYjHCkYz3B61PK0xuR7L+zj8ZfEX7Qf7X/AId8R61YRIdM0K7tDNYRssEYYBgXJJwS
SRg9M1638AtTttV/aC/aU+yzRzubu2WLy3BaTZFKG2464OB+OK+YdQ/ak8K+Cfhvq3h34TeFbnwn
e6rIsd9f3ZjmfyfLKMq/MxyAf1rwbwJ8Qtb+F/jCx8T+HLkWuqWrlkMgLrKvRkYE8gjI9qOTW4Ka
s0dlF+0N4l0P4B638Gv7Lhgtbx2Mk0u8XKM8iuybCO53DP0r7F+P9zFpuh/ssWlwDBINe0surcBF
EUWd3oOvtXjt/wDtK/AXXPibB8Qr3wPry+I1kjldI0j8hpkXAZl34PI6/T14+bfi/wDGfxB8YvHV
74m16byJG/d21vCCsdtErEIEAPBxjJHU59qfK2XzrmTR9h/tc/F+9+AX7Yui+OLDSV1aRPDCWflT
sUjbfNISdwBwQATj6Vp/sVeMpviHoP7QnjHULX7BDrd6LwhQ3lrutpNwUkDIFeHaP+0t4E+IPwb0
zwh8ZdO1LU7vSbmJdP1TTk/feQke0CWQuMnOc+vHFYPxr/ad0Sb4YaF8Nfhbb3eg+EFiJv5pk8m8
uJFc4XepwVIJJ9fzpchPN0PZJNl3/wAEl3EMDz/vTuiQ5bH9qAn9K4Lw3+1Xd/H/AONX7Ovh6Xw/
b6Evh3WY8NBcGVrj5EQHAUbRge/WvN/2Y/2lf+FLeIZdG19G1f4e6uBFqelzq0xgxuIkhTOFbLDc
PbI5FeieCvj78CPgDF4l8SfDyx1jXPGcqGHTo9btGENsxkzlScbQAe3OAKXL0E5Wk5G9+2j8J9c+
OP7bdv4S8PQNd3E2k2DXbIRi0hEr75n9lDjjqcgVifH34u6B+zl8Pn+DHwouIxqYi2eJvEUCAPPK
FeOWHGDhyQP90ZrkP2Uf2ntO+G3x+1j4gfEfUdQvjqOmXNo9xBE9xIHeaKRFAycIAjADtkVu+KNW
/ZQ8YfEnV/F134g8TeZqOotqNzbLYTCMu0pdlwE5BYnIyfaqs76lq2h8cmMWgVNpjQFQVUY2+/0r
9PdA+O3jK3/4Jpah45k1KJvElsJLOK8aFMCJb4QA7cbc7Pavkz9tn4nfC/4m+OvDlz8NLBbDSbbT
3hvnSxa1Mjl+MqyjcQo/WvoCL45/s52/7J118Gj4v1YWc0TB7p9Nm3+c03nsSdmPvcf1qt3sDt0P
zpvLyW5nZigMjOzsCuOpJPSvvz9hbQbDwZ+y18YPixZWq/8ACWaVBf2lnPIdyJHFaLKMqMAku3Oe
wxXwBczxwSEW8/mRMeuMEce/NfWX7IX7SnhnwZ8MfHXwn8cyDSPDHie2u5DrcSNI8E8kaQ7CgB42
DdnpwaUlchPS1z0T9iH9p34gfGD9oFPB3i+9s77RtR0y+82O3skgIKxdFZeccHuf8Pkf9p7whp3w
4+P3xA8O6Ojx6VpuqyW1ujsWZU2qcZ6n7x5r6k+B3iT9nr9m3xpN4+0f4kTeJ9Rs9NuLa20wWUoE
zSJwAwXjJAHpXx38XfiXd/Fj4p+J/G13aDTrjW7trqS23FliJUKAM9eFFOKtcltaWOIkVhKBvbk5
ODwfQ06Mv984ZgTgNxTnYuqlVUqpKtgfe9P0pVAjkc4JbP3XptjindXP0r/am1K20D9h39m++u4G
uLOzu9GuZYM4MiR2Tsy8+oBH40nwe+PvhT9on/goH4G1rwpoVxodjY+HLy0kFxHGhkkCsTgKT0Dr
ya8a+AH7Rvhb4kfCK/8Ag38Z7xG0S3tXfQdfvGwbB1iEUcWAoJIDMVP1FdB4P8Z/CD9jTwL4j8Te
DPE1v8RviXdS/YNOuIJMNp6SRlS7EcYB5PHPArFpvQ0vZn1B+zs6L+01+1rMSSF1CyBAHZYJvzr4
R0T9on4e6Z+xTrfwxm0Ce58Z3NzLLDf/AGdNip9oWUAuTuHyjGB/9eud+BP7Yfi34NfF3UvGeqXc
/iG18QSbvEVm5VWvQFdVYcEKVL546gYr3K/+C37Nl98bV8Vj4o6Na+DJZkuJPCyXQGR5XzRbt2cF
8cYqeVrQalZ39D3j9oOAPon7G0A35PiDTcggZwLeE59v/wBdcx+1D8XfB/wc/b80zXvHGitq2jx+
D0gESW4nO55Ztp2n3BFfI/7RH7ZHiD4u/EPRtV8PvLoXhrwtMknhywKoTbugCLKcr8xOwEZzgHpX
svi/xf8ADL9tb4Z+H9d8W+J7D4e/E/RvL0+/vdUnjX+0IkiyXVARhS7lhx1zQotaCvc9Z/Yo8UaV
4z8I/tR+JdHsmstNvr+4nt42QLsiNrcMgx24Ycds1xeuShP+CPOlmcCETTxg7RwcakxHH/Ac1xPx
e+Pnhb9mr4G2fwl+EGr22p67rtpHN4j8U2TJLHLuVo3QZBwzKcDHQfXNYf7Kf7Q/h7xH8P7r4EfF
1hJ4FvgTp2pSOkMWnMm+UeY4xwWAwT3Az1pOLWtgUkrHolp8cPhp8Yv2iv2ZNN8BaFJpL6NqQTUC
9mIQ7mOPYB/eAMb8819IeE42uf8Agpp44lyP9H8D2UZ7HLNGf6dq+UvhP4P+Dn7KN7qnxL1r4g6P
8SNd0dY/7C0rR72Mz/aGYgvtDcnBHXjGa8a8F/tteN/DX7SEnxZ1GX7fdakyW+qWccS/vbFXBEKZ
AClVAAPUn65pKLbNXNM9N0n4zfCzwN4c/aP0HxdpDaj4v1vXtUbS7pbIS7CfNRMv/CQ+eT/er1P4
oRC1/wCCXnwqgiJVp73S9mG6k3Ex4P8An9K4Xx9+z78HfjX8SdK8faD8SdF8NeFta+z6lq+j318i
3MUryb515b5GIbGOx+teefte/tTWPipdL+F/w5EVn8MvCMqxWrOglN3cQyPtmRwclBnjJOcknrS5
W3chyVrH1V+3L4l8LeC/2jv2fNV8a2j33hnT4L+e9hEfmDZtRQSnfDbT36dKT9ifxj4U+IP7Ynxr
8ReCLEWXhm50vTjaBYDDuwyBjsPI+cNx7V5HeeO/CX7dHwUtLTxZr9n4U+KHhUR2y6rqEy29vdQy
ybnKrnByiY5HB/Cobbx74P8A2Cvg3dw+CfEln4x+K/iyF7eTVLN4riztIo3yrMobjAfgc5I6cUcq
dtBXtoeifBxmj/4J+/tF3TBilxq2tt97qpVFI/pXg3iP4kfCDXvgz8EPCPhHSUTx9Ya1pqa1OLGS
J2UZVyZCoD5dk6GoP2Of2oNP0LT9b+E3xACy+AfGkjxSzIFVrW4nKhnkkJGEOO3Q13Pw9/Zl+GHw
k+JmuePfFXxC0XWvCXhvzdSsNK069SS6eVJN0KuM/MQqjjux9qVnfYadtT6m+Ku6f/go/wDBaLDB
YfD+pyZ+pnB/lXgPiD4k/CHwF+2N+0O/xStIr97prW30zzbU3ARhBiQAAHaT8nJxXz742/bq8YeJ
P2lLL4uWMVvAdKaS00ywng4+wszHy3+b77BiSR3NewfHH4MeCf2tNQ0n4r+CvFuk+FJ/ESvc6xY6
5eIkq3CkKDt3HbwhH4iq23QrvQ6X4ayLa/8ABJr4gSRAMJp78DnruuYB/I/p3rd/a8TSdI/Zh/Zc
TxFEz6NBfaY+oRbN6tAlojSAgA5+Xd0968W/at+PHhz4Y/Cs/s6fCqdbvw/aP5mt6jMRKt07mOXb
E4PHzD5uwwBW/wCAviloX7Zn7PMfwr8banb6J428K2skugalJIsFtPtiEcYYk/M+M7gB0BPHQTyt
2E3d3PSfgl4q+GXjn/goPod98LbOO10CDwjdo4htXt08/LliEYDHysv516H+z2Fb4t/tiX+Gjc6k
V8wj5cJFdYx9MdK+f/htofhL9gDwRrHj/VNfs/FnxGvDNpmi2+lyCaKNHizmQZBC7lJPPQjHWvMf
2X/2zL34c/E3xbc+MoxfeHPHs0suuyW0f7yKaQMu+MdlHmNkHtz9Wo2Q+a7VjNg8dfBxf2G7zQXt
opviu90xWQ20nnon2oNzLgADyge+O3PSvtL9o+1J8TfseWXkkf8AE7s2CtyV2wWp/Tn8q+fT+wh4
OtfjPNqE3j/Ro/hckvniAX6/amhEWdofd13d8dK87/aM/ba1P4hfGHwlr3hW0jtPDXgiZJfDqXkZ
3yMEjBecK/OfLwAO1HI3sVzq6PpX9orxT8M/C3/BQqa++K0cc3hyLwhDEgkgeYJKzOykhAf4TJ1/
vcdqd+xxeaPN8If2pNU8NqBokmo30lgiKVQQfZLgxAA9PlI4/CuB+Mvhbw3+3j4Q0L4leGdW07w5
46Xy9O1i01i4EKN5cXRVJJK7iMH0Y1mfE74saJ+xr8AJfg94HubfXPGXim2Fz4h1Pd5lvEskRhfy
yP4iFIAzwOe9Uo6oXMdZ4wjtrf8A4JBeHEMjeRNcQbvM+9k6g5YDAz1Bx7VnWfib4P8Aif8Aac/Z
ltfhXBCj2t+f7XaKB4iZPLjCKxcAsQVkzXL/ALMfxk8PfHP4LN+zh8Q5E023QLLoeoW48lC8ZklC
yPuA3bm4GORn61o/A74FeHf2Rbm5+L3xH8S2OqzeHGil0fT9CmWaWe4bzI/mDHn76ng8YJycUcq2
Y1Usz6f8Cqtx/wAFM/HjqWLW/gmzj5HIJaA4B9K+QdL8T/B3SPD37R8PjGO3fx1d67qg0GSa2aWX
/lqsZjIBx+8Jyf8A61cj8M/27/FHh39pu9+LGt2EE0OtJHY6nZ2sRzFZqV2+WCfvLtB64P616X8S
v2J9F+JfxTsfFfg/xpYWXgbxCYtTuxeXY+0L5rtJLsB4BwVwD3JpOKuyb6na/FKEN/wS0+FdrHH5
YurnS49v1nmz+eP1rtP24rrwhpv7Q/7O48d7B4UhS+a+8wfLtxGFDY5xvC/kc96+ZP2vP2ltKvNL
8O/Bf4euX8F+CpY4zfzp89xeQGRQVYcMgLZJxyc9q9J8W6hpv/BRT4LWGoC9t9D+KfhNVsZYbhxF
bSCWUM7LuOSCqE8dD9Kjkd7lczTuekfsX6p4F139sb4ual8OI4YfCCaTYpZiGExKxMke8gEZxuBH
NZHwSKw/sNftMXnzAS6pruSDnOYFB/niuG8O6rpf/BPL4RX+px39v4g+Lni2KSGCCI+bZW8cUoKu
+CCOCDnuQBXCfsc/tKafc6L4l+Cvj4eT4U8dzToNStV2SRXM5RWDEYCqcHnBwe9Uk1qS3dmfrmrf
Bm9/Z1+D2jeGGhPxFOq2K6qkAYTDlvM356AllH4ivtr42I1z+37+z5as2fJ07VJlHQjAkz+e39K+
V/Av7Emj/DL4lal4u8c+KdMT4eeHPO1K1Wyug9zOsMgkhU9OqIMgZJNcN8S/25tb8VftS6b8UdKs
rf7P4deWz0e1lDYktJN6u7jjDssjHHbAHbNFlcpSvoe3+MdT+D1n+3N8crr4vtbLb/ZbKDTFuslW
kEEZfGOM4CdfX60/4IfY7L/gmH8YLy0Qw2k8upvCrAgKmYFUeucACua/aB+AOlftU6rp3xd+G2t2
1m3igs+rW+qzrG0DqEiwFzxgp/nNYf7R3xh8P/s6/AyT9m7wBetq0t3ubxFqk/zCNpVWR0iI4yxA
z1wPU00veSuJy0sfDd1PvQIFBZ+3J5qJkIZlAZCOop8km50j259RnoabLLFMhI3seCQeM10bmdtT
9G/+CSVsRP8AFa9CocWVnwg+bBM5GT/wGvFPCd58E3/Y/wDGJ1OS0b4qXGoTrYedn7Qv71Ch4GAA
M/r71j/sM/tSQ/s7eO9QtdaSMeF/ESJb6jKqM00QRXEbJjsGkOfavWL/AP4J0W9x8awLLxPYQ/DV
51uWnNwv2tYSmXXk4J3ZUH0INcslroarc9r+PmnJD8Af2SLBI8Z1zQ1HOANtshH6kVR/a1Pw6f8A
bw8Dp8Ubm3h8LReFZBILpiE815ZhGCccHOcHjpXzv+1L+2db+LfF/gjQfAlsn/CI/D67t5tPuL6N
g97LCojy3I+TaqgYGSfSvUfjR4Q0z/goJ4D0L4k+EL+3s/HWnxwaRqmn3T+XEqqGkfZnnhpBg9xU
pPZie56D+wxJ4YPjj9pe88Foh8LxTW6abJFkoY1S5OAe+D79CK4/w7HHaf8ABJDxSYgIlnkuWLdc
5vhk/XArGu/GOk/8E7/gbceHdKu7fW/ip4zhinvrUZe2tIlDxs6lehAYgAnJPNc3+yV8YdE+LHwf
1T9m/wAZTHStO1Nd2laja5DO5lknkWRjwOVAHscUWk3cpMsalY/BWTxB+zTZ/Dmayu/E0mt6cdbe
A5lOGg5f0JbPXHNfVurBJ/8Agprpqk7Tb+BXwWXcCTK/vxgNXyt8Dv2TLf4B+MZ/iV8UNbt7XQfB
rDUbOLT5vNkuJlfKZGMkcDC9cmuP039v3UZ/2w2+KV3pkH9lyxjQktsN+704SgiTHd8ZOM9W9hSS
a3FzI70Q/BvUvi3+01efEy6tH1uDWJ10WO5cqSyxy52c8ncACPb6VvW0QtP+CRbTBRm5nDneM5H9
pgdPouK5T44/scv8dPidbfEL4ca3aN4Y8ZKmp3c1/KFkheVyX2occBCPlyOhFU/2vPjp4c+Hnwrs
/wBnLwGsepaRpDgarqMykMsyzeeI0IwGyzHPXHua0s200NNM9o/ba0vw/caX+zPpfi+dbPw4b+Nd
QMmQoiW3gDgnoF2gg1D+zLpHw4g/b61kfC5oZ/C9v4QLCS0k8yJrjzo1focZwVyPWuJ1zX7D/goN
+z7plrDOui/EXwWjPHpZcG3uFbbGvzt2KoPcH86g+Enhiw/4J4fD/V/iV4xvV1L4haxFNpej6LEw
KOAVdQzIDjLICT2FS4uSSuK6R7B8Bnz4Z/a6v1bzg3iDUysgHZYJuOeuM9K+PNQ8KfBfT/2KNA1G
yv7O6+KVzcIs8S3O64VTOd3yA5ACDpgd67L9kL9qu2n1/wAe+DPGsUOnaX8Sb27nkvoA2YLm5/di
JVwRt+c4LH0qXwv/AME6b2w+Ml2+ua1DZ/DbTXnnF/5ymWSFOVGDwCR1P1wKFG10gur3PrP9oCwj
H7Sn7Klo42iG9u2Ufw4SGIj8tteOfFzwt8L/ABb/AMFAvGkHxUvbe30S18N2j2qXc3kxGYqoALcc
4JwM15D8ef29YvFf7SfgzxboGmR3fh7wLNMum+YXjN+kiIrSHI4xg4454rsf2n/gOv7XWp6X8W/h
heQ6nJr0a2+p2lzIYxbmFVj+XIznOc+u3NJRa3dtCVI7P9l6DT7f9iv9o640dvN0oXusx2RDEgwL
aDy8HPPykc96qfHKxgi/4JlfCazvCsSXF1pgkYAkIHaQk/XDE59q5j4pfEDRv2LP2Y774K6deR+J
PGniqJrjWCMiOyS4hCMwIHzY8sADgkZ6Vb+GXjbQ/wBsj9lkfBu5vE0PxL4YQT6bDGozeJDE2w+g
5fae/FEW3JN9Wx2TZv8Ag34bfCvwX+258FbL4Z30F5am1vrjUUinEwWUW7+Wc5ODksevp6V7d8G1
Lft2ftDztEw8rT9LhDKuQFMQP58e36V8yfstfACX9lLVLj4y/Fm/TR/+EfWS10+yjbzGunliIJO3
n+IqB9fasr9mr9uWzj/aZ8a+LfFVjHpGl+PJYVnl3Fk09IkKpuPfjblsYBz+E2d7lK2xynh/4UfC
nVf2Y/iX461XUV/4TtNRvVsLNp1EpHn5QbP4t3P1HT2+mfjLbeV+zn+yjZpGQrazoSEFcgH7MuPx
5/SvnjxX/wAE6/F+o/Hs6XpF35/gLUpVuj4ik2NtVxvb5d2SQSeR6jFbX7VH7X3hqy8R/DbwJ4Ut
DreifDu+srxtVjl2rcy26CMxgFTgAJ1yeatx1C66Htf7V3w98F/E/wDbe8B+H/G+oPp2jP4Vnld/
tfkqXWZ2QZzznDcdPyqf9iDwzoXgzWf2ktM8PSre6NZ3iQWU2RtaIQ3LL83TAzjI9K8z/a0+HN1+
2hoHhj4vfDnOo6tDaw6Xd6LFKubVvmkYliRyrSAcDkGrnhTV9O/4J/fsxa5ZeIrr+0/iF43jEqaF
HIBJbK0TxbjyflXcSSO9aO7tYnmaTL+yMf8ABIu6jRmzPKwOw4+ZtTGPz4FY2hfs+/D/AOD/AMcP
2arvwdqcmq6hq+oeZfDz0lA2xIxGF9CTj02n2qP9njx5on7SX7Il9+z62of8I14kt9j2t3IFxdss
73H7tSc8FMH61gfslfsveJ/ht8Tk+JvxImXwp4f8HT/bfOvn5uGYMoUE/wAIJHcZyBWdnyNPcuLT
PrbwxDHL/wAFHPFtzsYSQ+CbRNy9fmmjx355r4+j/Zt8GfEXw5+0J4/13Wvser6Nr+pvp9skoTBQ
vJggn5t7NjHXjrXSfCf9uzwzffts+IPGmp6fPp+h+IrODQba5uJFH2ZImU+bL2VW2A4B4569a85+
NP7E/wAR9R+ON/Z+GGm1fw34pvf7RTWoQfIiW4mZzuw3G0FTkdRSjzJu77Fc0b3XY+hfiZCp/wCC
f/wKtigMcmo6MH3D3Jbk9j1rS/bX+EunfHL9qf4P+DNY1I6Vpk2magz3KnDLgkgc8c7AOfWvMv2t
/jb4Z+HXgr4a/A/S7xvEeo+DrnT7jU7qzZTGotgymJgCcOSBx2BFan7Zejap+1N4E8IfGP4aTTap
Ja2YspdFtvnuYjLJvbODxtxggfXoKLO6JWruz0L9hv4caR8K/ip+0L4Y0e7XUdP0j7DAlyzAhxsn
YjIJGRnH4dK4zSGWz/4JT+LJIy8MlzPdK4VRkl79V5I5HvUX7PlxD+xH+zr4x8dfEWSV9X8X+W1r
oSkC9L7pU+cMcnmTcSemD1xzn/s7+KdP/aP/AGKNf+CulX403xvbqLnF4VRZS90Z12cknhNp4znt
eB0J49a+xvA/7SHhP4n+NL7wX4t8MaXpWla6slnY3FnZETo7SbIwzcgEqw5HQrXm/jb9k7XtK+Nt
p4L01ka01QSXFndNLk/ZlJ3F+Bhhjn659qarD5bs8EaMefvkhG1yAAM8c47euakkilJWNyZGKglQ
2NgPQ8/hzX2b8TPF3gL9mW20rwTo3hHTvFGp2QYX9zq1uN/zBXX59vzE7snHAAHesr4ufCPQfjh8
NYviV8O7VLF4YiNU0uKNYVQxxlpNox94Hb0znrWkcRYzlTufJBQxkrOpjIU4Dkjce/5UOhxGyJ1O
I5DkBvccYGCO/pX1X+z58B9H8L+FZPiZ8Rkik0ER7rLT5ozcJcJIuAzAE5yxGAR68VteAvGPwv8A
2gNS1bwXqHg3TvB2pXMarpV7AhLPKSTtBwuGACHbnnJFKVa7FGkkfIflFbdHbL9VIH97HJ/XPpTQ
GmkVQxVAw3DI/n+Vetn9mzxjZ/GRPBEdu8t0d0ouHZSgtN5UTFueNvYc9q9f8e3fwg+BUuleEx4P
t/G+p2aNHqeoO5SWOQdd3GC2D2PbFHtLGipo+SAjxS9SoPGwnk0+VnSIAKwDdCPTv/Ovo/8AaG+B
Wny6HB8S/hw/2/wjPF5ktpbAsLMKmGbkkkZGCvY1c+D/AMCtF0PwVc/EX4nxJBoi28i2OmzyFWuS
6BkfPHJAIUc9c8Yp+2H7NHzTHLIkeUQdCGL4yPwP4U5LmOS6YLGzgZyCc7R26V9U+DfCPwq/aC8P
arpfhbRv+EK8WxMptVu7ou06csQueowpyMcZzXivhv4I+LPEXjxfCEWmXdhqglEd27oXW0UjPmMe
Bt9/en7VEeyuzgNkuXJw4JICgk4469sVYDeZkRyFWK/cYdPcGvqjxt4d+Bnwy1zR/C2sadf6vdiN
I77UrC5UwwOWIfedwK4KknA4Fee/tGfAh/hlfvrukgah4P1B2ms7mAmVIA23ajSdsljtOSCBTVdP
Qn2Wuh4zLBIbdo2mO5Blc9Cfc063jlaBFeQ5dRvQkjnqP619DfCz9njRrH4e6r44+Jj3Fho4gLWV
lHIYrklcHIUjknGFHOc5rQl+Avgf4reAbrWvhbPeWur6dKxnsNUk/ePGqMTtXk5JYYOcHGKn2kRO
mfOLRuqsGBBOQD2H1/OowJRIMtjI5Oev0/Cut8B/DLxB8RvFMOg2drPb3TsYpROpQQtgnMmRx0P+
evvWofBP4MaT8Q7LwTf+IdUj8RSbFDRqgiVygx84GByceuTg0e0jcbpnzI7NKIi7ZAGNuMd+OO1Q
ySgSlSGkUdxXd/F74X6j8HfF9xpeqBpLeUmWzuIlLJLEZGVeg4bgZHqa9C+Hn7PGmy/Di+8aePNQ
n0DRiUa0MS75JVJwdydc5wAOeMmh1YrYFTueC28gdUWJGyBgq3C8fjTlYrzt24GD6k/5Ne3+Pv2e
tNh+HUXjTwBqlx4k0SAzG9WdBHLEqn0OCcHOc9sGvN/hr8PdU+KXiy00LSot8h/ezSoeIYwVBkJ6
cBgcZ5xxzTU4sFTszk5ZGgw2WyQSCDnjHp6VKTITtA2kjPzn1FfStv8Asu+Cr7xa/hOx+JYn8SWw
lDWRtgJGdRnBbJHHGcenSvnfxT4R1fwP4hv9G1iFrLUbZwbiNmJBB5VgfQg5/E1Smu4+Qz5PPiPm
BySehAHGBSbypkyzFx1UD2zXt/hX9mg3fgSHxP4s8S2vgq2uZ1jtvt68SoyBlbOcfMM49gaxvi/8
CtQ+F1lputW19Hr3hq9iUpqtsP3W9icKTn+IYwenOKTqInkaZ5XJCElclXJ3blOecY744/8A10TR
JKBnnb/frsfhh8NNZ+KXiiPSdLjd8t+9ugm6G3UjIZyOg7Y5616Ve/skajNYavLo3izRNf1LTbZ5
JdPsJi8u9d2VCjkHIwM1n7RXsXyHg6xMZAHO0KDwD29KbvYrt3dOCCeccDr+FPliuLWa4t7geVdQ
s0ckbKf3bA4Kn3yOlezaD+yZr+reF9O1e61TT/D0t5uaO21OXyX2g4B2sAfw9xWnPHqyOWXQ8Yfb
lflYnknd0PWhAW8yPf8ALgbeOM11PxQ+F+u/CTxBLpWtoZECB4ryJcxTKQM7TjsTgjqOveuNdsuH
Bk2+vTI9vWtLJ9SbNPUsNGT99fkySpx6dabuZYyQXUjptJz9afIVUnB2DBzkfz/OoZFbckMBZ5HJ
Xy0+Z2A5OB+QxWUnYpK70BUZiVwXUEANHxyBwfwzSiZY1RU3GTb8pYkZNeq6n+zN4+0nwpN4gu9K
hGnx2ovpEW5UyomzPKDnPr/XFeTcPtLDMgGcKOPehSVtS+QlluDkPhmPRWB6fUVFclRiKR5GXaQr
Z7duT744rs/BXwn8UfEi0vJvDmky6ktmV8zy2VBls8ZYgE8HvWD418J6r4F1ubR9YtGtdTgIDwSj
n5huDAjII56j0NJSiOUNDKFwUlRlMkJX+ID8avTatqEltLnVL9jMrKFa5cqc57E/pVS1sJ9Qvbey
hje6uriURRxpy0jE4CqO/JFdZ4s+EPjP4eab9u8RaDd2GntKIluJApUMx4Xjpn3H507pmaTicVjz
jHEyltgAx0wMdc5q/pnirWdIt2t9N1vUrKISl9treSRrk8k4U/Ss45Qsy/f3YK9e3Sul8P8Awu8W
+MNHXVtG8OXt9YlmRbm3j3IzjqB06EdaLxNNWYGqahfardPd6je3Go3RAje5uZWkY4zgZYk4xmo9
M1q/0K7W6068utPvIwVjuLKVonGRjhl56Gql/DcWzTBwFuI3aJlcH5cHBBB6HINaXh3wX4k8aWlx
daNpV7q0NmwEj20XmGMnpnA/Q4pPlZKUixrnxH8S+IbP+z9U8RarfWsjgvBeX8siuQfl+VmI4OPx
rmpyULIGPTqwypXr0rf17wF4p8LabNqOpeHtTs7WP5TcT2rokeeBkkcDJ9e1c5BbTaheRxWsbXM0
riKOKHLM5PACgdTz2prlElJna6X8c/iFpNja2GmeNtVtIbaMRJbiYsEReFC5Bx0xXC6jeXWo3l9q
N7cSXF9dSNNPM53NI5/iJ7nmuiuvhr4r82MN4b1rdMx3N9glyMHpjGRXMXttJbPJHIkkU0JaNxKp
DowyCCD0II7+lTzLoWk2ze8JfEzxZ8Plu28Na/e6Qb0xiY2uB5m3O3IIPqaj8efE/wAU/EZLP/hK
fEM+sfY2d7ZZ0VfL3gbiNoXngDOM8n3rDtIZ7+5WK3tpLu4L4jii3MzHBzhR147Y96r6tpV/YOhv
Laa2Z2IQToV3DuASByPT6VKaQnENK1e70XVrHUtMuHs72wuEuraZMDY6nKkdjyBx716JrX7WXxYv
9Nu9PuvF7tb3VvJBMDaQB3DqQTuVAR1HSvLVIdoo1XZ820Ip5Y5wP8+9SXGm3QJZ7K5VQMEywsAA
VycjHGPfpVaMaiZKBEt44kypAwSOAfU5r1Hwh+1D8Sfh54UtvDmla9BFpVnuFtDc2sU3lZOcZYdM
k9+OK8xMmAwzhSdrY6Ef5FQXGn3ByY0lLMONqZJHX/P1pWsy+Vml8RPiHr3xO8VT+I/Et42parcK
kZlCCNVRBgKqj7o4rR+Fvxj8R/BnWbnWPDc6RT3Fu1pNBcRiSJ0Ygksh6sCK4243QtseNopkO1kc
bWGex9Kq3DykjYdkpbcQO49Peola5UE0j1v4pftWeP8A4peE/wDhHNXvLKHRnm8+RNPtvs5l2ggC
RgeRyTjgV4zbajPYXsVzbXBt5YGWSOaKTEiEZIKnsRz0qV5WhkIZDEWJPCtjpzzjg9OKrXIlb54x
tjI5+XHJHP1//XRZMm7TPpL/AIeFfExbgX8dp4bfUFVYxezaYfNb5dpIYSDk182a5rl54h1C51PV
LyW81K8lZ5rmeUySOfXkk9OnPTgVWuYXCNNK0jIGyB1GfQAf55qpcyFZjGqnBwcBc5A/z+lTZLYv
Xc9o+Gv7X3jD4UeCB4PtLXS/Emix3RuYYPEED3P2c7du2L5xtUAcDtk1z3x2/aR8VfHmfSI9ba00
7TNMgMNtpWko8NqDnIcoXYEgdPSvL7gRKiYc72YZw3X+oHGKinuT5brEFc4/Ie4z6A0WiJ8yO3+D
Xxz8RfAXxxaeJfDdz500QKXFhPI/2e7QoyhZFDfNjcWBIODXrPiv9vjxdf8AhTW9I8PeF9B8IXOs
KsdxqmiB47oIG3EA9u4z/tHFfMnkG4X5doJ+6OOeKgZkhIznA5bLAfUfrQlG5N5Mkh1O6s9Rt76C
5khvIH86G5SQiSKQHIcP94NkA565r6ssf+ChWrSDw7eeIPh54a8Va5otpDbxa9fhzdO8f/LTcV4b
cAeO5NfI94zGbKHA4OBgkc0ySVVcAsEwcYVuR7Y70ml0Lim9zrvjR8WPEPxy8fan4q8RXb3V1PJI
baEyl0tISxKwxg4wuCB06816R8E/2w9Z+D/gXVPBer6BbePvB16IjDo2sXLCO0ZWZiIxtbhiVJHq
teC3ISUlsA5Jbjv9Mdqil+chVblV3EIQSB05pPUvl7Hvnx5/bK1r4veANM8BaN4as/h/4StXkluN
K0mcyRXhZwQHzGvAOWx6kV5z8EfjN4g+AfxC0vxZ4fumhngKpd2obbHd2+4FoHGD8rYHIHHavPpY
38sgMTznPoPWkuAzqm4nAHB9hS0HytH2NL/wUQ07RL/xRrPhH4QaP4U8ZaxaT26+ILe/8ySKSX5j
LjyhuIbDYyMkV8e67qOpeItZ1HVtVvX1PUr9zcXFzOQXkkJJLE46n/Cqk7GJ9oKsCAdnf2qOS4+X
aG55yfSqjZbGUorqRFY/myMH35FDTqsRjSQrNknAHAH+TUbFWkdSrD0Dc/rUixLtYsxHGSM5pNBF
H0V+zz+2Hc/Brwn4h8CeJNFfxt8P9Yt5FbQ3lWJYpHYF3RihPzAnjOO/eul8f/tyWMfwXuvh18J/
A6fDLT9UnkfVJY7hJmuYpE2OmdvDMMcgkgDrXyepf5cEAqNp9/pSMygq+0FeSCDmkopGvKdp8H/i
v4g+BXxD03xX4WuZLXUbZtkqxkETQEjfG2VIwQMdDjr1FfWsf7enwosPibqPxH0z4M3Vt44mEzLq
DXylBK8ezftx1IUdB618KSBy7YARSMAhqTaittJZH54btzT5UxOJv/ED4i+I/ib4w1LxX4m1KW/1
7VHE883CbSAAqqBgAKFXAHpX1Haftu+DviJ8LNA8L/G/wLqHjjUNGnY2mqWUsUTeUQFRWGQSQMAn
odor444TYMk9x7U+Ulh8x+VT1+lJxiKx9EftVftbSfHy20XwvoGnS+GPh7oUUa2WhlIyzSqpTe7D
OQFwAAe5NJ+yn+1zL+z8us+Gtf0248SfDjW7e4S/0WLYXEzoFEkZYdwMFcgfNn6/OjEeYEY7STnc
cimyjE68A+hPPTuKOVBZo+1Jf21fh/8ACD4Za9oXwK8C6r4N1/XJIjcarqJjuNkQBDKCXZt20kAA
YBPavlz4b/FTxF8JPHuneMPDV+1trVjM0ySS5kErFSGDg/eDA85rkGjJVyCdwOSaV0IDbTwOQaLI
Z95an+2F+zn4g+MFp8Tdb+FWvzeM4Ft5nkj8k2zTomA+zzMEjA5x2r5S+PHx48T/ALQfxKv/ABb4
guGRlfybG2gJRLaBWbYqjPBwRyOteZsuWHmEEZ+Yen1prptkIycsvXFFibO59leFP2xfA3j34I2H
w6+OfhrWfEz6NcQnSdR0hYxMsEcQVRJIzhi2d2T3BHesD9ob9r/TPFfwq8P/AAr+EmkXvg74fWMC
m9huwsd1dyByRuZHJ24yTk5JYelfKsRUIGQkvz36f5xSLEQ+QNoPOeuamw7s+k/2Q/2s7v8AZu8R
zWOrR/2r4A1dSmr6UYvNYAI+14gTtB3MM9iPcV6h4Y/az+CPwC0nxdq3wd8F67B471W3WCxvdeii
lgtfmySMSblHJPHdRXw6y+XLhclDxl/WleV1cMVB5OT7Ucivcep2mgfGLxZ4f+Jtn4/ttauJPFNv
drem5ndnEjhtxR1yMoe6g/TpX174r/aj/Z2+MniXwp49+IPgnxA/jawt7f7fFpMMQs55kbeSQzgu
pfI+btjPrXwmMMHXYVAGRx3qJ3eLagwqH5d3Q02kxrQ96/at/ab1X9pfxs9xhNL8I6UTBo2kxExx
xQhiVeRQ20ydOg4xgV6X8I/2u/Bmv/Am/wDhN8bNI1HXdBthGNG1HTYfNu4P3jOys7NxjCYI7cEV
8evL5IJ2glgTgjmnAHyuVIJPBzwvHpRZBd9D6/8AiR+2B4N8G/A6z+GHwC0vVPD9hfPM+s6rqcZi
vJAWDBY5EbOScjPYDArzL9lX9p/Vv2cPHcd2Q2qeGdTcW+taVM7Oj27Nl3RC23fjv35B68eFxxeZ
03K46n196WZN0vy5PHVaOVC94+7dE/aR/Zl+FPjTxP8AEXwX4c8QXPjW6hu5tMtdTswbSC5kO4EA
HCqHAHGcA18neJ/jl4y8X/FO58fajrV0viOW6N1Hc20skYtznIjj+clUGfu5wRmvPnINyFYLk9W6
E0SOEkCR/Nnhie1HIg1PuXxr+0v8Bf2ldF8I6j8ZdI1iDxzYRzQ303h2z2w3GXG0sQSx+VVOD6t1
rzD9sP8Aa1PxruLXwp4Jin8O/C/SEVLTT4ka3N2xVNzTpnDYZTgH3NfNazM4baeo4AHFDuFZGwrZ
XuO/rS5UJ3Z9b/s5ftb6FY/DHXfhR8Y7W51zwFe2srWVzDE1xd2c7YUqhz8qhfu46Y9DXR3H7U3w
o+AHwd1nQvgDp+qp4q1qUx3es63abZraBothMb8HIwMADAJJNfErGUh2KFSD1Axnj1qMeYDklxk8
ijlsUro9Y+B37Rvin4DfEm18XaTdT6lcTbl1KC7md1vkb7xky3zNySCTwfavqo/G/wDZNf44t8TF
07xEdXWY3h0c6fus/P8AKMeduCOuW4PXntX5/wCTMzKgXGevamQyCMlX4frntQ4pj1R6x8cv2lPF
3x2+JMnjDV72exa3Zf7LsLKd44tPUDA8oZyrYUZYck+1fRU/7Ufwj/aP+EOhaT8bLbVLPxdo86w2
+raLZ+bPPaiMKBJIRk5LHPuARXw6zMijHXoMnNOJFtC5PBxgKp/z6UlGKYK59dftG/tcaHdfDTQv
hH8GobzQPAVnbRfbr6WJoLy+lUMmxsEZUrjdkc5A7VlfscftcW/wajvPBHjqObXfhpqwb7TbOjXD
WThGKmJOwLbM+4yMc18ryGQhQGy3VetA8xQOC7HqM9/aq5exLZ92eC/2hP2f/wBl7QPFXiH4R2mp
6/48vEji03+37UiOzycNtbAKgKxJAOTtA4r5f8F/tC+OvCPxZj+IsOtXd74n+0/aZ5pbiTFwCc+U
4zzHwBt6cdq82klkkZQF3HHdhgmm7pc7uVJ4JxUcj6jUj7z8afGH9lb4u+NfD3xB8Xxa5pfiMwW8
uqaVpmn/AOiSXCtvff8AKSxZsrnOSMd68F/a4/al1T9obxj9j08HR/h/pLNbaNotruhiaJWPlyyR
5wXx7DAPSvBEVpHXeCOpznOT9PwpkkxWTeWx2DGqUbBbqPz+7IwygZPOT75pVjMhLbun3T61FPkO
SQShOB6VJGvl4ckbR06fnTYke+/slftQ6l+zx41SG+jbVvBmsMLTWNLmLSRGJnXfMsQO0uEGOc5F
e9eH/i7+yt8K/iT4j+I/huLVtd8R+XcXOmaJqFgUtYblzuQR5UADIAB5xnivghXdoi7Ebckg49qT
5RGXbLLGM5boPxqVC47nqPj/APaO8cfEP4uSfEe/1m6t9cS4MtoLW4kVLJMY8qEbvlU9x3yc9a+p
fHHx/wDgB+1P4S8M6r8WDqnhjxxaK8eoSaJaSH7T/Cu5wrbhhVPJ46Zr4HeWRY/L25GcAEdqljd4
FCkDavYU/ZpspNn1V+1v+11p3xE0mx+G3w0tm8OfDDSkXbHHG9s+osUAYSR8ArnJ56nmtT9mf9rf
QdP+H2sfCX4wefq3gO8tn+wXxja5nsZdixqigfdULkg9jnpmvj4uzrvlTBBypz1xSPglvlbI6Ad/
cUuREttH3Vov7RXwW/Zd+Guvf8KUjvNf8e6vKsKahrNq8bWUJQq5V9owBjIUd27187fBH9p/xn8E
vihB41sr+41W6mZ5NXhup3I1JSGz5vPzMMkgnoQK8dZnKYLYGfTmppQsquRktjaF6Cl7NdAi3c/Q
LWvHv7IniL41WnxKu77VLa63w3VxocGmuLSSVY+jKFxjOCfUg183ftJftT+Jfj18R7XXVuLnQtF0
iRf7C0u3lZUtgjHZLjs5XZkjpjFeHgM4ExKlm/h4+mKgaULKQ6jOOCBjP+c0+Q09T7uX9pb4WftI
/BDTvDfxzurnTPFGivFDbazaWzT3N5Ci4BLbSQSWOQcjjNZHxj/at8HfDr4OaV8LfgBLLbabPaod
Y8ULG1tdzurnKE7QWLDOT2DYr4r3LGolj+ZwRg44FIMZZ8YJB5A4xyTn8qFCxDZ9U/scftbW/wAK
Lq58E+P/ADta+GesIYLy2ud9xFZ43tuSMA/eYjOB6ccV6h4D+Kf7Nn7N8/ijx74M1TUvGfjIW5fR
bDUtPlWGCZmJBUlABjIBJIIC8V8BzNtXKqSD0wOp5pyv9njUEmTK/Lu5A78UuS41Kx7BZ/tN+Pof
jb/wtE6/cya6115kqyTyFHt/NL/Zc5J8vDbcdOPavqX4ieOf2Yv2jte8MePfFuv3Pg/xMLZG1XS7
G0kKSzbtzh3VORnKlgR9a/PyWbCgESEZ4xz/AJ7UkdwQAyuXkKhWOOQPSl7O+txXPqT9sP8Aa0Px
h1CHwN4AP9g/DDQXEdraWRMaX6qQwkZcDaQc8e9ehfDL9pPwJ8bvgJd/DD44X8ttNp0I/snxO4aa
5XMmSAcHbtCqCcjI7V8MOCzM+wADkt60sw2xImGGcnI7j+lDpq6s7FJ9z701P9on4V/ssfBK78P/
AAO1B9f8Y6680F34gkiaGeyiYZVxlBuAOAB7E15P+yb+13q3wU8X3tt4hlm17wx4jcprNjcFmDGR
1Etyc53PtLZHGR+FfMZYxy9c4OOf5UheRlbYSdmcgDn6UvZK1h8yP0J8OWH7Jnw1+K+t/EOLxXHr
FvD9ourHwt9mk8qOUjKxqMHPzZwOAM+wr5y+J/7Yfjz4l/GyL4l/2lNo95pzMNHtUkJ/s6NkCsiE
AZzzk45zXgckz4APLj+8c4pfMDkZBUDg96fs0xKWp+h3xQ8Y/An9sTw74c8WeLPFMHw18dAtDqLL
EWa4A+WPe5AB4VSD1A4rhf2rf2rvD1n4Ds/gt8Fbp7HwTYRAajqlvIT/AGkzKNyEkZPPLNnknHav
i8NNFEWUktu3Zbt/kVFLLsSMhcOOSoPU1UadnrqU5H3v+z9+0j4S+LXwU1H4N/GvUUi021tpG0jX
7mdI/seIgkcYAxkqC5BOeBjmtXwV49+DX7Efw71TxP4J8SQfEvx/qkjWenXaOFNojRE5YZxtDLyQ
MnIFfnrbMrSk5ZAeXyOB/wDXpF8yYONoCxg4zxgAc/Sp9lbS+guY+o/2af20vFnww+Lmqa74mu5t
c07xPO51+3VUBnZgyh8442A5AHYYr3Cb4HfszXHxwfxpP8StEHgwqbv/AIRZbhU8wiI/LkFTjcN2
OSentX52fa5I8vnJ3Z4/hzjofwoMrthiRsB3egBOev5n86SpWb10Fc+ov2gv27fFPxU+L+k+I/D0
8ug6J4XnEmg2bRBvLcLtMpwOSwHCtnAPbOa95+Jr/CT9uTwP4e8Y6l4w034f+P42+x3smoXISS4E
ce0DaxUY3cjg8Z/D85Zrl2BUZHmDORznj0pz30qhHR3QocMpIIJ6+hodJX0Gmj72/aI+Pvhb4BfB
hfgf8HdRiuxqNus+v69HMJknWaNklVDzh22gnHQVY+APx48MfH74F3HwO+Kt5b6c1paNPomrSSJB
bQ+UgWLcT0YM5YcEcV8A7iAuAE4bBHA98CljuJosIJWjxjbJmmoW1Q7qR+iHwf0P4ZfsKeFtc+JF
/wCMtP8AiD4zJ+waXbaNOsnlF1YNkBuQepY4wBivL/2cP27Nf8I/GjWNX8Zompad41mU6y9rEEeF
ghjjKcgKq55z2ya+P1kzK8kqbgzfeIGW9/yppBcMQ4G5zgY7YznFL2SZOx+ic/7DHw7HxsfXx430
aH4YsTctpiXyrcFTESyk9D859Rx2rzr4+/t56j4y+NXhvWfB1vFb+HvBdyX0f7ZGd1w+xUZnAPT5
TjHsevT4xWV7Mqiu8UZG7fnnPOCPzolQXTBBKxdj8249+nX8P1peyu73K5rH6P8A7QHww0D9uDQ/
D3xH8HaxaaR4luEFhqdvfy+WFEKtgBM8fOx5x09M1zv7QHxd8PfspfANvgN4DuE8Qanq0Lvr9/OR
JGizR4kCkcbmIAA5wv4V8BPe3QXCySAKwOVflT6j0qOd5CXklkkZ+7MSzFugBJ5//VVqnZ6kuUh0
kplyXbe5BAHU9Khi2xRy7x82eSDzx/kUwL5eTHl8P37d6e0ytJJgBumXxjt/jV7Gd7iurKPLkTax
GSQP6/lUc8ZiZSHwPuk9ulOaV1BMYO9hhi3zdPTinJIszANkoAMpjp6incLEQBiCRhhh+VJPI69D
6Ux4DEi72IOcBh2+lSOyGXaCirkAEcDH9O9OdmZiwCuDjBYYHPpz1piImCsSXIZgNvy0qko4JBYD
hc/1p4h3swUiEZ+Ysen6fWmoCZVOMgEgE9/WgYkqK07umRuOABkflzUmpDYqxsGC5DA9hyaaodBk
rhAc8Uy9mLMI2Dbc8jHA+lDADEFlBYgHO7APB9/5V23i7fJ8MPB4kKyYe6dZVJ+XOwlT9AR09a4i
1Vw7pGm9T3ZsFffmu68bWoTwJ4MVSzbI52Zy2c7m+U+54NZjtdnESzOqS7ihRgMevPrUT5icGReq
gbTTZACBuyXPp/h2pwR5mOx1GMZyR+HX8amw7WOtszczafNHHAvlmVZA7cdAccD2zXXeHYLqJypl
2jBdkPGB0/AdK4zR5HSwu4Wk+ztJs+b7wUgnHHuCa63QGCypNIxlMqlct82TznA9f84roWhy25tb
HbWbuUVXjZmLbd3AL9MDpyP89q2PF6bfAHihiECNZlAiv84+Zc57dMHn+7WNbXAWNh86qCUMnXb3
GT6HGa1tfMdz4B8TJva4kisXdMYCso28fo1W9jGLkpHzZot5v1UEffYbRgZzxVQz+TqUrbTu3Hbx
6mnJdiy80qVLuMpJGcFQeTj+VVYpGN0OAQeTnp/j6Vj1O5M/XX/glu0Vz+z78Q4ypaQ37mSNhsOP
sWAD2wSDg99xr5Luo/NvJFhSRoXbcGVcuAMAE++Mc9OlWf2Pf2htV+A+rNdRK194emHl6hpoyY3U
jiQgAncOnAPfivWP2g7H4cyaxFr3gDWTJp1+hEumw7kW1OByoIzjOOD0qlFp3MqjTdziPAPgbVvi
F4ksNA0mMzzzzIOfm8oZGWJ46E+hxX6HeMfh9L4p+BNv8MNN8T2cninTLGKOZYbsiVjCh3ZUEMQS
R7c818//ALLXxL8B/Dj4P+K7y8urfTvGgFzFa3HJlkjESlMN0Hz9fwrwbRvilr/h/wCIw8VQ6lKu
sefumvQVYsrOBIcHgjYCPxP1qHGbewvaJaHP+KtHvfDVxeadrNrLpV5bH96WAJiK84+nB56Gv0g+
LXxcufg18OfAOtrYxahFcTWdvcxFQQF+z7mKZP3jg4I9eteBftZa78LPit4g8KSab4itbK51GQxa
tqUablijwoG/sG5I/nXqvxr1f4X/ABZ+HWmeFn+JGl6X/Zrwut2X67I9hBHAPGTwaT9BOXNsfFvx
P1K58d+L/Eniext5bbSLvU5WZ5QFA3ksFJHcqR3z+FfYPhm6fTP+CfNpdW0rW8lvYO0bpKeG+1kL
83XGSDx2rj/i7e/C3wT+y/c+GPCWr6fqusNcW9xKI3LS3UinLOR2yo4xx07VJ4f+K3he1/YVl8OX
Wq2x8QtZ3EI0tmPmuTcFgFUjP3SOaTd+hqmraHrn7J37Qx+MWjPo+rRtH4l0qMNM6x4jlh3KiNnJ
+bPB6dPbNfD37QkYsvi940dFbc2s3O4jkL87Eg8ep7eteofsPePdA8GfE7xJc69qkGlx3emLFBJc
ttjfbKCc8cnHIxXk3x81ex8QfFTxbqNhJDeadNqU80cquDvQseVI68k0RiRLfQ7P9jUqP2kvCamV
sMLgeWen+ofkZ9M/qa2/26YPK+O+o7m8sT2NqUbfxKPLIPA54x+tbH7NU/w5+EPgK4+JusajBrPi
aJSbDSomUXNthijKMnktweegrwf4n/EHVPiz4tufEGsFJbm5IWKM4GxQMIoxxkDjjr+tWo63ByT0
OOWSS2lCy429UbGee351HcDO5A7BGbhhw2SOc+3SkuUbzT+7O5+rgcADp+NP2C4hgdw7tuyXPBB+
vftXRFHNK/UcbcRsRLEJg5xlumPf8McCoNQzLmSMyRDnAXjJPt7cVNNNJ+9Vtvlxjcdx9RwcUXFw
8cMeUYjbuJyP/HRVXd9CLXW5EJpzcvHIAzMOUB5J9R+FKsi+ZKi5j4wm5c5479h0psUbKysAI5BG
AEY42L7e+P50scTCAokm0kAjeO+f4hwcmhvUcV0J7cuqT53lFfg9c8cEY7dfzqKCBBGzMMK2WO7r
gccD8Kc8DsME+VEvO0HPToe9RzKssaK0bou7aGAORxwQR/ntRqxSVi8Jkg2lW2jOCR0Yf/W4rtPg
qjp8YPAikmQrrdmWYj5yfOQjJA9eM9K4QvlJAsZYAAsEQrkexwPQVo6HrVz4Z1XT9W07auo2Fyl5
CW+dPMjcMuRxnlRx3qJLQ1pPU/Q/9ofUPBuj/tBfD+48dpbtoJ027V3u4jLEXydgYAHuc5qb9mS6
8L3Xxn+KzeDXhOgkaf5AtlxEOJMhOBxnPrXF+JE0X9tr4Y2+rafex6L4+0QLbS21xJ5dtvcgvjIJ
ZcZII6E4NT+F7fR/2K/hdfaleXUesePNdDKsED74C0TExrgHgAMTnuSa5LHXe2x2fwvtET9nT4oR
R7pIxeawu1+Djy+leEeJYfhK37OnhifRGtP+E1Etp5wSRzOsm/Mu5c49e3cegq3+zJ+0nbRanrPg
Lxigi0jxRPPImo26HMM8+1SjcfKhzgHoCK3NM/Ydl/4Wk0uq6hAngm1kluIryC4Xz5VHzJuGMDry
Rn7po5fMFZu7PefjPCf+Fw/BdwMldRuh1xx5SfnXB/FnSfAOqftRNF8Qfsy6QfDkRja8laKLzfOb
aCwxzgN1Pb8vKPjv+17FqvxN8Oah4WtobzTPDN2Xtp596m6dlAcEYHy8EZz+Fdl8W/AMH7W3hfTv
iJ8P7wz60kKadeaRcyCNEI+dlJI5ZS+PQhqSROnQ7r9lu10Swtfita+G8HQodXKWbRyb0aMREDBP
sP5VgeHv3/7BGq4YZFrdY4zj/SicfWqmhTad+xR8GpoNRnTUfGOvkXsunknYhCqj4Iz8qAj0ya5n
9mr4r6P8Qfh7f/BrxLs0RtQWSLTr2Fmb7QZHZ2QgrhXBxjnB7elPlsDYar4G+F+haX8I9Z8JT20n
iKbWdO+0rHd+bK24qXymcIQ5HQCvd/GxVP2q/hp1DSaZqA+oCMcf1r59+HH7HniDw18Uf7U8SzJp
3hfQJTex6ozpm4MMgcEqGO1WAJJPIApnxK/bA09/2gPD3iTR9O/tDQvD6z2pmdyjXMcqqGkTK8Yy
ceu33ocW9hpxukeheJvhx4F8eftP+O4fG1wIkhsrGS0jmu/IjdjGFY9Rk/dGPetH9nfS7DS/2dvi
XpmnP9osbfUNWihkDBt6eUNp3dDxjpXB/tHfBfUfjfqNh8R/h3IPEtlrKJHNbRBVMPlIVzub3XBH
rmt4XVl+yF+zhL4Y1S6/tbxPryyznTkIQwmaMI+Oo2pj2yc4xVctwckty544V5/2C9LLOY3FpZgs
nYfaFH8qxrz4LeD/AIafFL4P6r4Y1GS6utR1ZROjzRuGXywdwCjI6/54qb4P+K9E+PPwCuPhILxd
B160hSCze5ZWF1sfzVdF6kDbgjt15rkfgN+zP4n0X4mWniXxTbnwvpnhWQXhluMeVchc7tp7KAu7
J/xp6iVrn0Pdgp+2DZfKSG8JOAf+3jpXlD/BDw98YfjV8Wl1rVprA6fdwmGOB0GfMiJZjuGeCo6e
tZmp/tU+Hl/aitdaSwnk0W3tX0R75ZUwcyZE4AHK59SOORWD+038FPFOpfECXxd4UWbxPo/iZhMs
mlJvFvsVV+ZlJJVsnkcdaaXcLtPQ9E+H9mq/sMeJbVQZUih1JeCRuAmc5z/npV744WZ1f9lf4d2h
n+zpdTaPC8wXdtDRBc479ayvFWoWP7Nn7L83gjXLsah4g1iG5VLKxYb4TPubJBOQoJI3dPSoba7h
/ag/ZesPC+hXEVl4j0FbUT2d1IpZzBHtJXac4bnBP40W1QnLqN0D4Eaf8D/2lPhqmm6pPqEeorfO
63KgMpWArxt7fNXqnhNEX9rTx2FADPodmzevVf8AP4189fsnfDrxZe/EnT/Fmv8A2zT9N8PxuZX1
dmQFZI5FVE3HsTnsMH8K7Twb+0N4UuP2r9c1FJbhNK1iyg0i3vGj/dmdWHzMf4UOMAmkUpJ2ucmv
7Ow+KV/8U9fOsCzl0nWrwQ2pg8xZNhZ8scjqeO+MV3dy3mfsD6fJ8y7bWI/KfS8ryL42fDjx14Q+
KuuW9lJftp/im+lu7b+y5pfJmWV2+WQAgbgW5zxzXrPxTvLf4P8A7KWm+APEdyjeJLu2MccVou8A
rOJSWPRQAVBPr0piTNz9qfw0/jHRPhZo0c32STUNUjtlnY/6vfCBn3PfHtXOfBr4PS/BH9p2z0Z9
W/tSO80C4nR/L2EfvFGD16bPXvWj8Ybx/jh8BNA8Q+BZnuH0Cf7VcJuMdxAY7dgwA67wSp4+orkv
2RLTxX4o+I1z488Q3l5e6fp1hNZS32oykkFtrBFBAOANxPHGfekNWPavgzAsXxq+Mahdp+22jfdA
xmJz2r5Vtv2f9U8UfDzxF8Qv7TtYraymvJJLWQM0jhJG3cgDBweOf51738DPi94X134+fEZLPUty
eIJrRtNdlIWfyo2V8E9Dk5AOMjpXzv4og+JHg7xHqXw3F/f2cOrTuE0yJg0U4lc4KkZ6jHcd80K4
Kx9G/FK1F/8As5fC+S7PmSi70be7jc3KAN/9eq37WPg7UPiB8QPh94W025W3fUIr3aHYrEGQRuGI
HXABwMd6r/HLxBYeAfgb8O/DOtXcVv4gsTpdxNZsdzhYQBI3GeAQfrR+1dqGsahZeBviN4Jvi1jp
SXUv9r2zBkjEojRSQRyDhgQffpSVxuyD9l3wFrPw0+J/jbwjrdzDeRrp1vP5cWTDJuZuQDjscHIr
W+CemwaJ4F+MdzbQLbXkGr6okc0agSIgiDIA3UAdh2rmP2SdZ8ReIvFvi/x/4pupJ7CfTkgl1WdV
ji3xNlgMADCr/I103wF8UaR4r0v4t6XpV9Hc3d9q1/c2tvkbpYXjCpIq91J7/T1qhaM+a3+F3jfR
vh9Z/FNdQk8mYpMt4l032pHdyhdiSCcs3Jyep45zX1R8d/Dmlav8Rvg/e3Njb3Mt1qximeWMHz4/
LBCsCOQOuD0xXye3jP4k3HhaP4UyC5eKFhbLo5tVW4Z1YyBdwUseRn6V9YfHjxLpej+N/hFDe6lb
2lxZauk9zbvMN0UZj273GeFzkZPFDuNWZ53+1L4M8VfE/wCMGneCtALzWMGkxXwsWlVIUKySKXAJ
Azjb+VdB+zN4dvb7wF4/8DeLoftkOj3S2o066YSJCTGWwpHQZwRjoelY/wC054z8RfDL4xaT428O
rG1vdaNHYxXksfm28paSRioxwWxtIweldD+ylrl9qmkfEjxXr+LSPVbyO5N1KoihfEbKxXPQA8cn
8aLslJGH8NIx4M/Yt1fxFoUa6drz285kv4ABK+y4ZVJPsvA/+vXl/gPwr47+FHjzwT4gmhutLs/E
Oo29sbnzEf7QkzqzLJyTyuT26+1es+CSviL9iDW7HT40vrtbe7Q21qNzFvPZhx78GvJdK+NPiv4k
6/8ADTwpf6fbzWmj6vZuZbVCJsRlUzKueMAnOABSSZasme8+L/hr4Wuf2rfDcU2kQuNS0y51C5jw
QklwrcSHH8XHNeNftE6H41+Lnxo8TaPp9jd63p/h5olht4QuLVJIkbcOhyWBPc8cdK+gfF1/bQ/t
ceBEaeJZH0W8iKFxu3csBj1xXj3xK+NGt/A/48+P5bGwtrmPVzajF2r4ISBcMpBx1dh+FNEPU2ba
Z/ip+xzq2teKV/tLWdCjvXsruVNkkbxAhOBjkD5TnrjmvkC9O1D0LITggc/hX2T8OpZbn9i3xvPc
xiM3EWqS7R0YMCePY54r44uVjUsEKsFJUjaeBXVS1OeejGtGnOQWHPSvUP2aNJn1z40+FVsrGW9S
zuRNeyLEWSJArfM/UDnGM968qnnZpNyMVZu5Neufsu+JL/QPjR4dg065lt4dTnW1vI1wVnjwxwwO
eh5yOaKkbIdOWuh7V8ffiV4k+DX7RVpr0MEkvhy8063t50mjZ4ZU8xjKqcgCQKoI/wAK+V/Gmq2v
iLxdruqQ2xt7S+v7i5hhKrlUaRmUYXqcEcDNe9ft0+KdRk+IlloEl250eDT4rtLRVBAmZpVLk4z9
0AY+tfNNtO0IBh3QOvKzAgbcdxx9K5zS7ufacuj6v4c/YeVLK1u9O10RJLKbeMpcf8feWchefuc8
9q8D+MXxqX4v+DPCh1S0itfFOmyTLeTxR7VmiKhYyM5PPBIzwc19KS/FLxOP2L18XreBvEgtAjXU
kaHcRc+UTtI25Kg9q+DDK7soMbMDkj5RhPbt3NEUglLU+jP2IvDsVx8WZ55rD7VaW+nSyx3E0W5Y
5i8W3aegOA351u658dpPDnxN+Jfg3x3anU/Cl9c3LW5vkaVocKRGI06bMgYxznpUf7CHjDVYfG+r
eFC6f2VLZSag0fljcJg8SAhsZwVPIz2ryf8Aak8a6j4t+M3iWLUDEIdHupNNtQkYX90pyNxAyxJJ
5J+nemldkzZ5VawtFEkSCRpREFVRyzsF9O3NfdX7RHje/wDgL4V+Gl54Ri+xafBcM1xp0A8uG6Xy
0ISTHTOTye5r4q8I+LbnwR4j03X7KNPtunyGaJJ03x52kcr34NfbP7bHj+/0T4SaPZQQ2bQ+IWaC
6kni3tGnlbyYx2bPfmi15Di7RPl74++J/Dnj7xvBrnhq2jsINQsYJby1hQDbdkuZc9ieVGe+M817
3rFxefC79ibw1qvg9/7E1a9Wwkvbu2wJZGfh2LNn5s4BJr4xknijwQwYEjEvQY55xX2vo18/wP8A
2N9M8RadHFr9zqJtLiS21zM1upkcDCpxgDjGO4zRJJMpO4n7KfinVvjj4S8f6L47u5PEemwrbeXF
fxKPLLCUsMhRnlFINfL/AMF/iFo/wb8W/wBuar4cPiKeBV+wRfaBH9nfOfMyeCenBHYetfWX7PXj
25/aI8GePdCv7Cx8KrEsGLnwzG1rI3meYSxOTzmPH4n1r4m8LeCdU8Y+LLLw3o9vLf388xgRHH3Q
GxudugAAHNJWBp82h718IP2gPi78TfjFpFpZ6hNe2VxqCXN9YpbwmO2svNG4FtoIAUgZz1rk/wBt
a38JWPxouR4Xlt0maIvqotHysd5vbcD2DHgkDvn1r1Xx54n0T9kX4ft4O8NOt58QdVhD6nqjr81p
HIm1mV1AxgxjavHZvSvj5Ek8Ra5bxvdSvdajdpDLcTMZHLSSAFyWPLfNnrQnqU221Y634QaN46vf
EsOu+BNMuNQ1DSZlczQwecsRYYAcHruBI+hJ96+lv+CkEMUel/Dm6lREu2nuoyyZUg+XGSODyNw6
c1P+0Br0n7Jvw+8NeAPAtubS51dZbi612H93cM0ciEk7RyW37ck8AYxTf+Cj2H8OfDlpWLlp7rLg
dcxJ/jmoTbY7JrQ86/ZS+DGh2XhPVvjT4vBvdD8PLNNbaXEvzGaAh2kYdDjA2j3Oa0vAf7ad144+
JE+j+MNGtNQ8C+IHexgtoLWMXFqJpAkQY5+YANgkc9Tziuq+CXl/8O+/iHuDY26tv3d/kH9MfjXn
2p678IL3wH8HrDwx5SeOotY0oaiIYpFnypHm72I2n5yMc96XW40vesasP7H+jeE/2stE8IaiTeeD
9XtrrUrOKKQiREiGRE5PbIHTqCab+0Z+1r4p+E3xq8R+HNF0jw6NH0iSGKOG400O7L5Mb/eDDu2B
xxivavjr4d1fxN+1x8K7bRNak8P3iaRezG+jjDsUVgWTaeCGAx+Oa8p/aT+Nfws8O/HLxDYa/wDB
+28TapYPClzqz3piMzeSkg+UKRkKQOeuPyNWwbd0Yf8AwUO+Gug+GpPCXjmwtDa6rr263vY4sCFt
kKuG24+U8Yqx4H+Hvhz9kj4NyfE7xfZJ4h8WeIrU2mi2CoJrdFki86NXBGAfkO5uePWmf8FNNK1g
Wng/Wm1lX8O3kci2ejtEqC1mEAJcOOW3A4x2/l6B+1LBoy/A/wCAS+I5NuhjVdKS+Yn5fINqRISR
0G3dzTvoPmlaxwHwu8b+Gf23fC+sfDnxVomnaB4yiEmo6RdaJa+XHhIwAWYk9GYZHGRivlGx+BPi
K9+NM/wveeztNfivHtJLiWcLAhVCxbJ7Y7deor7h+Eel/DPT/wBt7So/hhcWV1pK+FbmS4ewuvPi
E3mhSAcnB27cjPvXx1+1+Nv7S3xLkc8rqjbMHOB5a5Pp3oV3sQ0k7n2b+0d+z74J+D37E/iGx0Gx
tbq50+KO4TW51RriSTz1LP5oHoWX2Ffmz4V8JX3jzxJo/hvRWF7q2r3C2tupcBAzHhix6D3r75eV
pf8AglFLM8ruy20gBcbiMaiRjvwMfkK+S/2VYkj/AGl/hgsZK416ElQPl7//AFqad4jjdSaZ9HfE
N/h7+w38PNG8Ft4W03x/8SdQEWpamNYtw0cUbIUbbJtPG9NqqPc1kfFL4P8Ahj9q/wCCNr8WPhjp
UGkeKdAtVtte8PwR/Z4BsQyOVyBudc5U5wVP0r1f46eBvh18QP26n074kahDZaXF4Ohnt1mvBbB5
hcOBlsjszce1S/seaJo2g/B79ojT/D9wbzRINb1KCyuvM8wSQLaYjO7+L5cc98571ls0WrPU8E/Z
X/Z68NeD/h5J8ePi2qr4QtYvM0zTpIxIt+ZMxhpEAJwWwFHcnPFdF8KPFXwc/a9bW/htqfw60n4a
eI9UiRtC1DToPMkkZMuybtihWCoMjOCCeuK7Dx7DHN/wSl8IJcSmKExWG6ReMD7Zxn9KNJ+Cvwx+
FH7V3wFf4e6oL+XUJL6S9K3q3IO21+VsD7mdzccdKmV0WknufK/gv9izx34n/aBuPhddwnTJdP8A
LudTv12lIbRydsoGSCSMEAHqcdq9u8e/F79nX4OfE7SPANl8M9K8V6PpQt9N1nxNI+JYnV/LkYoE
+dkxuJBGe1fVngHaf2+/izySy+FtLH+7zn9f6V8cR/s9fDXxz8L/AI8ePdd1+ay8X6PrusPa2gvE
jVfKZmiUx8MwYnH41XqSt7HDftg/sx2vwe1zTviL4U2618KvEE0V7ZzK5VYDK25bfrnYV+6eODjg
ivZv2h/hd8MvEH7CGjfFLwr4Cs/DGsagbCcNbEmRFeYRyKxGNwxntnn2rZ/auxa/8EzPhMiEuuND
AOOf9Qxziu80T4tf8KT/AOCavgPxOdFs/ECpZ2UDaffxb4pBJcFTlfXnI9DQugXsfktMwZ23KUKk
n5RyuOOlfoL+zB8IfhVD+xHrfxT8a+DY/FGo6TPfyybpWR3SN1VYwc4Axiua/bJ/Z+8K+PPhnb/t
HfC3/RfDGqgTapYPEITG3mJbq0SADB3ghh6896t/s0/tPfB7Rf2RdX+EXxKvtYsJb+W7W4On6dJM
DFM4ZCrKp56cH3FUNs87f9pL9nWJ3P8AwzxLDIwA2DUF2ALjBCg4zwO315r5X8VXdhqPiPU7vSrF
tL0+6upJbW0dvMMMRJKqW744H4V9w+Af2UfgD+0Pa+KdJ+EfjXxJceNdPsGvYLTWLY28Od21QQyL
8pJCnv8ANntXxD4y0K78FeI9X0TUoimpaXdy2dxGDu2yRsyMMjjhlNXF9DJrqYN5KYxhkLyHkkHq
O3FEe1VdmUrj5jtb7vHagkXCkmPk9AvNEDGM8FgS2dvBFEhQ1kj7f+E37Lvgz4OfBPVfix8frKUR
X0DRaL4UMrRXczBvvgZBLMMED+EHNal1+zj8KP2pvgJe+JPgVplx4c8Y6HJNcXXhrULl5rm5iVT8
iqXbGWxtPTOQetfQP7f3w1g+LHi/9nTwNd3raZaarqdxbyXCDLRgW8frxn61S/Y++Alj+zz+2d8Q
/Cek6pLq9jB4VtJ1uJ1AfdJKpIbH0/Wsnc3Uk7u58L/stfsja98ffiJcWV9HP4f8L6JK7a/qN2pR
bfy2XzIN/RZMbu/AGa+h/D3gv9kP4j/GbWPhjo2l6jpt8RPb2Xii41D/AEG4nVfl8tt/zck44w23
vXvXwiwv7NH7U8wyjNr3iNmY8/8ALsOe31/GvkK8/Yy034f/ALPfwo+LsfiWS+1PWNU0tm08x4jj
E0oJCNnORgDn8uaWu9xu17HifxW/Zk8efC34wy/Du50W51HV5W8vTJLSFimoJgHzIuCSBnnrjniv
pXxn+zr8Av2UvA/hXS/jGup+KfH+o+dPe23h24ybRTgqHTeCqgMoB74OBX2d8YbVbj9vD9n4/N5k
Oma3Ifp5AAz+P8q+cfif+x9F+1j+2n8cBeeJJNEt9BtdMZFSFZDI8lqMZJxhR5XOOeeoqk3Im+qP
BP2v/wBj/SfAmlaf8TvhdN/bPww1O1Rm8uczvYSbQWMjdlOR1JIOas/swfsgaHrXw81v4ufF+WfR
Ph3Y2U4tLNGMNzfyKgYSITjKld4AByT9K+hPg3pp03/glJ8SbJpPMSJtViWQk4YLMqg9eOlb/wC2
D4HPj39lD9nPwalylimsavountOUyqCS0KZ2555bOPalcEeEad+zD8Gf2mPhD4j1L4Evqel+MtHm
WaTStfuCJbiFYyW2JknJ6AjuO1fMvwS/Zt8YfHj4ox+CtMsn029jdhqFzexukVgq53GTI6/KQF7m
vvP9mP8AZhH7LX7euneGk15teS58I3WoicW4hILSBNpGTwNvc9xXs/7PdpHD8V/2srmKMLIddAyB
ycWrkjP64pajeh8r/wDChv2R4PjnB8KW1fxBc63v8iTUvtA+wecYSwBlyP074B6V8n/Hr9nPxV+z
/wDE+78GapaS6lNcMDpd3axM6XysMqIyB8zgEAgd/avSZf2NLhf2PIvjo3iNZJpJOdNdDkA3X2cM
JAeoJBxiv0R/aI0+GX4ufsnQTRKxj1t8BlB+7aR8c+/NPmsHLY+Km/ZH+Ff7PvwW0XxH8ftT1O28
Sa/cp9k0bQmV57eFo9yiRCNxxtbcemTjHWuX/af/AGOtH8J/DLw/8U/hLd3mu/DzUrWH7THL+8ub
SVwx3uFHCgbQfQj3r6O/ah/Zlv8A9qv9vHV/D1tr0WhJpXhK0u3nliaXOZGG1VBGMlskg9vetr9k
LwpL4B/Y4/aL0C5mF+mkalrVkGIGxmhs1ViAegLDd+NLmsK19T5A/Y7/AGOm+PNxe+LfF1xJoXww
0ZJGvdR3+WbptjFVjY8YBA3H3x3r0Xwx+yX8EP2kvA/jNPgf4h1mfxvpIikgsNfKRJMpY52LtBIK
q3IOASK9l+MuiTXH/BKX4c6NYbLaXUG0m3wOATJM3X2Jx+tc58E/2QtR/ZU/ba+DVneeJLfXU1e0
1O5UwW5hMRS2YFD8x3A7gR0HB46U79Qtc+EfA3wY8WfET4l2Hw/0fS5/+EhnmMDwSoVEGOrSdNqj
14r7G1X9lL9mrwD8SvDvwx8V+N9dfxldRWqXLWmGtY7mT5CC+CE+c9GPAxX2D8FtNtv+G8v2hL5I
ESeKw0WLftGfmhyxB6jOOfWvzwu/2ONd+Ifwj+KnxrHiSzgt9M1bUXFjLE7TSrFOSx8zPB54GPSm
2CWx5p+1V+zN4g/Zn+JN1oN7FJfaRdn7RpOpou5biFpGEalgMBwAMj3969o+HH7HPg/wZ8BX+Jvx
31W98OWOpPCujabp/wC8unV1PzMgGd2eQuOAOc9vrL43WMOp/s6fsqJqGLl28QeHFeWVQzNm3+YZ
Priub/bw+BesftJ/tXfD/wAA6Xq0ekJJ4bubwzTgtFH5czbTtGM9h9KUXfqPqfL3xz/Yw0XT/ghp
PxX+D2p3/ibwnskGpx36YuYW8wIreWBuHO7PHocAc1wP7I/7K2u/tO+OI4IpPsHhDTJFl1jWvuqk
fOUjYghnOBx2HJr7y/4J7fDO7+DmoftC+Cb+6TUl0W/trYyKP3T/ALmUkgHpnIz71zeg250T/gkX
rz6aRbz3MdzvljwrEvqOwkkdeP04pc7C1meSeF/2PPgN8c7jxt4b+FXjrVtT8c6RazPb2t/GsVvN
IjbAVbYNy7gASpPBFfG938KvEtj8S7jwA2l3D+K4777F9i8ptxfzPLHUD5ScHd0wa+x/AX7Gvif9
nT43fs9eI73xBaXUHiXXYCIbHejQjakpQk4yCCQRX2ZBo9nN/wAFM727NnGbiH4frIJSg4Y3YXcP
fHfrT5mPY+L/ABV+xt8FvgkvgzQfi3491TSfHGqWqy3FvpMQlt4mLbcMwQ7RyPmOOhrw/wDbB/ZR
1T9mnx3JEm+98Hal++0jU2kDebGAu4ORgKwLcDvn617V42/ZK8UftF/FD9onxvDr1pBY+Gtfv4hD
cszSSCPdJtU4IAAUDHHU17R4rtDqH/BJrwkNQU3tw4swkk43MC1+RnJ6cEihycR8t7any5+zx+xj
YeKvhLrvxW+K+qXvgv4f2MBaylgQG4unD7WOwgnbwAOASfzrofiF+xf4S8VfAib4m/BLX7/xfa6f
dTLqdpdpsnWJF5KIQGyPlOMDI/KvqD/gor8LNS+LfiP4AfDLRLpdNi1ae/hMZcrbqEhhILoOuBux
6ZrG/YA+BWtfs9ftSfEvwNq+oxaktroFpNIbd2MDGWRSCVOPmwMZI5AFLnZN2fn1+z9+z94i/aK+
I1l4X8OQsFJ82/vnGEtLdWUO7E98NwvUmvq7S/2K/gB4t+K2sfDXQPidq9140tGuYxBNAoh8+NSS
gk2bTjjoT0OK97+A9lBoH7Mn7UWqaeiWk8mu+InjuYEEbgCD5eRjGCeK+L7X9jnxj8OPhn8MPi7d
a9C9vreq2GyG3LfaY/tEgZSzdzkD8zzU8z3uPc8B+I3ws8TfC3x1qPhDXdOeDXrCXyfIgBcyHAIZ
D/EDkV9WSfsRfDr4O/C3w1qvx68ZX3hTxHrs8rQWGnRmfy0AG3cEVivBDE8dSK+4vjbotlfft4/s
/Ga1hkk+waw774wfM2xHBPqQQDXzF+0P+y74u/a0/bY+K9lo+sW2nW/huz05lOolnXEkC/IgH3c4
Y/XH1p81+olHU+ff2t/2OJPgCuieKPDd5Nrnw71e2ha11STBdZmjaRgwA+UFQCM4qt+yh+x5L8f7
XWPF/ii/l8N/DfQreeS81jPLOqhgqA9V2ksSPQCvrr4G6ZLb/wDBLv4n2usuL5rY6tGjTHzQhTYg
2EjIAIOPSrn7VXhi+v8A9gT4FeFPD0o0qXXL3RbBljcxI5mtX+/t6guQSDmkqje7Hy6nz8/7EPw+
+Knwm8V618C/Gd74w17RJomksb2L7PuiILMoLhcnapIIBHGPcfKvgH4T+I/il45sPBvh/TZ7nxHe
MYVtjwI3AJJcn7qgKeT/AIV99/sp/sy+Kv2Vv23fC3hrWtXt7+LVfD1/futk7eTIqqUAdCOSDkj0
ya+if2c9DsLL9rP9p/Ubazihkiu9OjjZUA2k28hbb6ZPXHrQpNvcLW6HybefsNfAnwx8UNG+GviH
4p6pb+O72CCNrCKAGETuN33wm1c44yw6e9fJH7RfwI179nz4k3nhnXLd/KVmksrnO4TwFmCPkcAk
LkjtXpq/so+O/FfwL8R/H2XXoDaQ3VxOY5pnN5lLgRh1b2zkc9hX1R/wUxtFH7IfwXe5jSXUfOtI
3uZFBcgaeWYFuuCwzVxk7kSbPy7LrNJwNqBs785zT90ZIXgZz1NRBVRVYsGQluQMU5SCjkFR6Mwq
2Qj3H9lX9lvXv2nfHsenWIay8OWDhtX1f5dtrFzjaMjLHb07dTXv+ifsMfCH4pp4t8PfC74mX/iD
xpodlK62NzAEhnmRiuAzIMgsMfKT9eK9V8IfafC3/BHu51DR5Rp+p3kMpe7tD5UjF9S2ZLjn7vFe
V/DL9kTxl+zh8eP2ftb1LWIJ4PEusQbEsZmEiJsWV45OcMuG981k5WVzVHxnc/DXxFZeNpPB0mlz
jxCt7/Z4skXfI0+/YVAB55r7E1b9hn4WfCbSfB+m/F34lXfhnxrrUInl063i8yOIFwuCVVsdQNxO
M57CvtWLw9p11/wUtubv7JAJrbwCkwPlr/rGugN3s3vXxZ8Tv2T/ABn+0h8Yvj744tNXt5NN8La3
dwiC/kaRzHGrSbIgeBjGMZAo5vMpb2PE/wBrv9lDVf2ZfHi2Rae/8J6kSdI1N2VvNRUUuGxwCGfA
45FdR+zf+xrb/Er4deIPiX491a58F/DrTLRpY9SCAvPIDhyq9cL9DknjpX1j4sV9X/4JL+H7jWMa
nevHbKs15+8kXdf4GGbJ6ADr0rX/AOCgfw91zx/pHwE+F/hWRdIXXbu4tWtVk8m1wtvEVEijghTk
45yfwpqTeorI+WfH37EOgar8D7n4j/BrxXN460/TbiUanBPD5TxwohZ2VThiy8ZHcHivnb4L/BTx
N8efiBa+EPDNq09/OWMzthEgjVgGdicD5eeOp6V+h37A/wADPEf7Pf7VHjj4f+INQhvYofDiXMtv
aOWtXMssYVtpAyxXI6e1d/8As7aRY+Gfhv8AtT61ptnBY3qeIdbSKaCMK0SxwHaFI7AnPB60c72Q
banzwn7BvwW1P4r3Hw3sfivM3ji33KdPW2PlmRUEhXcflyB6HPJHtXxT8VPhD4g+Efj/AFDwp4hs
5LTV7Jwpg+8ZcgFShH3sg9s+le4SfsreO/Bvwk8KfHWfX1Ntq93buESZjdgzSMiyF88k9/ryeK/R
P9pDw3pGuftb/s2RX1lbXTyz6k8qzRBvN8uBCgbI5wcke/apVTVq5Vz4e0P9hvwv4L+DGjeLvjb4
yl+H1/rdxiwsYovOYps3IXUAkHHJHGOmRXDftSfsc3HwO0DRPF3h/U5PE/gLV4IZIdYEYQI8m5kQ
jsCoXk45Ir6X/ax/Z08a/tYftn+KvC+g6lDDp3hvRLGZU1CZlghMkZHyIARuY9eB0rrP2VtDvIP+
Cf3xl0XxHINRk0W41ezjW6PmJAYLNAAmf4VYNjpT1TWu5NrnxR+yd+yLq/7SOtXdzPLJofgrSEd9
Q1zb8inYWVF3YB9SQTgV6q37Bng74l+BvFGq/B/4iP488Q6KsZbS0hWESZc5HzcglQ+Mdce9e9/G
fSbrw1/wTI+HekeESdHv9bfS7V2sX8hrhpt28My4zuJwST0rif2Yf2ZfHH7Kf7YXwy0XW9VheLxB
Z3008enXDGORIYZMI4wNwDFDyOuahzktbi5Ez4E8NeA9c8ZeNLXwZpFhcXHiK6u/s0Vltw/m8jnP
QDBJJ7fWvs2X/gnh8OfDHiXwx4P8W/FkaL491W3gll0lLfJ3yggIDkgHeMA55x719j/Bnwbo9n+3
t8c7+3021iktdO0kRERKPKeSIM5T0zt5Nfn14g/Zq+JfxI0Px/8AHhdUZ7HSdXu5TPPek3Qjt5mA
8vuNnGOe3Sqc2HKePftF/AHXv2e/iXqnhjXLZ0tWkkfT7uQ4W7t97BJBjpwOn6Yr1/4H/sRp44+E
F98TviH4oPw/8Hr5YsLu5iDC5DNjfyeFJwAMc5zX2T+0Podt4x/Zc/Z2udbtYtY1TUdV0CG4vLmL
zpZBJCWcMTyQx7Vg/wDBQr4TeL/jl8b/AIZ/CrwfdLY6dc6Pc3bWLTGK0Xy5eHZBwSFGAcZ+lLnb
sykrHyb+0D+xMnw8+FGifErwD4h/4T7wjMz/AG3UoEVRbKDtRuG5G7IOBge1eX/s2fs8eIv2iviP
ZeHNAjkS3WVX1XUQuYrC3JwWbOBk9h1J9q/Rb/gnj8OfEPgCX42fDTxpPFqljoz2ls2nGTzrRWkS
ZpCqtwdwCdh09qzPhTaR/Df/AIJmePNf8OxJomtyJqLrfWSCOZT9q2KA4wcADA9qftHawuVI8KX/
AIJ1+E/FNz4l0rwD8XLDxP4t0eCWX+x4o13s8Z27WYOduWGCcd6+NNR8I6pp/i+58M3FjOuvxXJs
msBGWkMwONm31zx+NfYXwh/Zo+Kn7Pvxc+CXjHVbxbO38VaxaQSPY3ZMkiSskrpLjqCucjOD6mvs
7XPhr4Yv/wDgpDpFxJoVhJIng6TUpCbZPnuPtDKJTx98cc9eKaqPVDS8j4nh/wCCd2meGvDXhe7+
IvxJsvA3iLWoXlTR7lFeRDkKq9ecErnA79a8U/ah/Zm8Rfs0+PjpOrZu9IusHT9XWPEd0NqkkDPG
C2ME84/L6F+PP7PfxR/ab/aC+NfiLSL9r3TPBmqS2sEdxeFWijVN4jhByB0J7c4/D2jxvbf8J1/w
Su0fVPFOdb17yYRHf3v7yaMvfBcBjyPlAGfaiNVqai3caTPjv9m/9jDWPjd4b1rxdqutx+D/AAXp
UDOdbvIQ8Tsp+ZQCy42jOT2x3rZ+MH7Bdx4E+E0vj/wd4wi+Iuj28zxX8mkxAi3RFZmc7WOQuBnJ
4BFfYP7fXgnX7z4bfBj4RfDiJdHh8S3s1nNplpIbeG4UQq2JMHlQSWINYf8AwT1+FPi/4S/Fj4hf
CXx2v2jSo9JS6l0l3WSylE0iJuC9PmUMDmplNq0r3uFne5+c3wt+EfiH40+MNK8K+GrFrm8vpcPI
i7lgi3ANK3TCrnJr6yP/AATIa68Qap4e0j4oaBqHiKxid30dSPtAKrkgqGyvPt3r6R/Zw8I6P8Nv
gt+0R4z8NaZBYa/ZarrsWn34iUSwxxQgoiEjhQ6jjp8tfG3gb4SfGX4b+IPAnxru5ry3j8VanbI+
ptOJJ5jPIMl+pw6Z6j29Kbm1d3KV3oj5q8S+GNX8H63c6Lq2ny2GpWriCa0mUh4nxnp1PUfmK+nP
Bf8AwTs8QX/w+0jxJ4s8W6Z4DudWLNa6XqvySuBjnk8ZyOgPUetfdHxm+Dfg7xF+3l8Kp7zw3a3E
t9pmpXepMykLctEAsTOBwxXYBz2/CvmX9tH4X/Fr9pn9p/x5pnhqKfVtD8DLbiztBMI4rYyQI7be
nzEqT68Cj2nMLl2Pl39pD9mTxN+zl4sg03WF+16VNGslpqsMZ+zzkrkhWPUjjjPerX7Of7Knir9o
y61KTTnh0nw9psDy3Gs3+Vt/lIygb1AOT1FfcVxZyfFr/gmJca746j/tvxBpSzmzvboAzW7pciFc
MP8AZBHv3rR/a10HXPht+yr8Kvht8KIBpreK5E0+6tLNAZL3daK5XJ7sx5NCqtrQPI+M/jF+wp4s
WBzgUoklh3qzPswXC8N/DjAB9av2RF3c9/8Aj5+1FrXxK8dQXOmTzaRo+hzpLp0EMmBuXI84DAO7
oOemPc59atvi38Ov2lvhfb6b8StTj8J+MNLaO2j1eSMSSTDHMi4XADYORngjNfFUkzuhd+XK8H6f
zpfM3lg6M2TyWGd3/wBaj2K6MFLufbWrfGD4f/s5/B8+HPhdqdvrfifU0b7TrEJCyBlPEkgIJLfP
tAAwOa85/Zp/aZu/AWsTeGPFol1bwhrcrx3Svg/ZpJjiSViedpzyo45J4xXzeF+1XY+/u3dVOM4G
B15x9aklDKrLuJYnnAycdyKfsl1DmZ9saD8K/gP4b+KFz4nf4iaLqGiReZcW/h8XAHlMfmUbg/z4
IO0Y74ry/wCMf7WOu+OPiPpuu6DPNp1nok4m06IqjlGIw27jDbh1Hv7V88xTPlQoAwNu7PU1XgEs
DyMQyAENznaT/wDqFT7GKEqrWx90+Lb74eftc+BrLW77XdP8C+NbEi1mk1SVVLovzHClhuQltwPU
Yx70ni74reF/2YfhfB4T+H2qW2teKNVjWa71hHE8IYrsMpAbAJ6Ko6dTXxNBeNJcqgVV3YDZYDcP
U5+mKTkkRofLO7lo8EH357VKgrhzPdH1Z+zP+0Bpi6PN8K/HbfaPDGrM1nbTttj8gyFiyyMMYVmP
B5wT6V2vgr9nzwJ4D8f3/i/XPHOkavpGkF9QsbK1vUE4KMXQyYb5ioXt1J5r4huwxIDboxgqNoH6
57f57UsaN5anCmEDHCjGB+H1q/ZdmPnlfY+jPiB+1/rOu/GbSfGehRxpp2jGSCztJ4cs8EgAk8zD
Y3MB1B44616x8Uvhr4Y/at0ux8d+DtctNJ1e7H2a/g1W42ArGCNpT+8MDnoRzXw5HcEzS2x3ockD
KYB9xmrDSfZopFO3ao3L8vy8en456UvZi52tz7X+IXxB8O/ssfCGL4eeFLiLU/EOrR+bdylhLChl
TZLICuMH5cquOlZ/7PPxV0b4ufDVPg142b7BLJEkGmXVtlfOWPEg3MchXDICM8Hp1r45N4skJZlL
EAAZXhOP/rUQ3EkciyLLMDGfkKHHP8x9faq9j5mftm3qfa/wi/ZZg+DfiO88c+ONYgOm+Hgb6wa0
n3k7Q+4vxk8bTgdTXCXP7Y9xL+0PbeMotFV9DW0/sry2yJfsjTbvN4z84HzFfTjvXzXBdS3Ucxkl
dkkILRtgEDHtyfpnrUZk8t/7gTkAj7w9P5UvZMbm76H2b8ZP2c5/jXrlr4/+G17Hqlp4jXzLtrqU
RJFjaqsAcHoGypGcjmrnxr8eaP8As6/BtPhNoskmu6rdwyw3Mki4+ypPucuexOWOFznHJr40sNcv
oSFivLq3hUMVSGZkGScnOCOp/nUGo381xdy3VxLJOxAR5GYs6nsck9MZo5H1H7S2h9seCfFul/tV
fBST4c/bF0LxTosMTWixjcs4hiCpJyMYJOGUEkdc1T/Z++AeofBTV7v4ifEO8j8O2uipJ5durq8b
iRCjOSp45PA5JzXx3p+q3OmXkdzp9zNaXKBgs9vI0TKCORuUg846fStHUPGOu6xbT2N9rmp3FtIB
mG4u5ZUPp8rHA/AUvZj9qfT/AIG/az09f2hda8QalpR0/QtaSGw+0vPn7NHECFnb5ehPUcYBzz0q
p8Yf2UPE2vfFBdV8KZ1nw/4iuzqUt+rqI7USuWP8WWUBywIHIOK+UXkI8yJ9ysjYV1XjA6cH3Fdf
Y/FTxnodtBZWHi3W7GzjXEVvb37qiDsFAPAA7D+lNUmP2uup9TftE+PtF+FPwc0n4Q2t2uta5HbQ
xTTxYAtvKZHBdQflLcAD0JJNafxCuLf9sT4JWmo+F5UtvEvh+VrqXRJMPOJAhGzggjdgFT0NfFup
6nf6vez393dS315cHfNcXMm6SQgYyzdzwOtXPC/jnX/BtxNeaBq15o91MnkvLaSlCyZztPryM+tP
2TH7VM+r/wBmP4aXnwOi1/4j+OceGoYrWWyFjeAJJLyjhwWbGTs2hepNW/2ef2kPDut/FXxjZXUM
2lJ4rvjdWM1w67V2R7SjnPBJBxjIr5Z8S/FPxj460lLDxF4hv9UtEkWVYbmbcoZeh9zzXJXYLoiB
tyjo6HLLx1HvR7JhztM941f9lnxzZfEmXwdBb3U2mXkpePVY0kNkImBbc5xhSpyNvqR1r2j9o/4v
aH4Pm8CeEy0mq6noF9ZahezW2wqiw5Uryfvkqfl7d6+cbb9pv4owhUj8a3oiUAKrRxs2APVkJNeb
ajqE+qajd3t07T3c8pkkn4zI7MWJJHqSan2T6k+1tofZ/wC01oOofFzwx4Z+JngW5ubu2s7YxNb2
O4Xg8116BOcqeGGfzq9+zTp2ofBz4e+KviF46nltrbU44ZUgnLNdbYjIoLKwB3NvGBzxXyj8PvjF
4v8AhVZXlt4b16TTIbyVJJbfyo5F3DIJAdGwSOMj0FJ8QPi/4u+J8lkniPWpdVisiWt08pIlVmxu
YhFH90AZ/rTVJjVU+q/gB4w0v4ofCfxz8P7KV7DXb4ajNbJdrtDRTswRgwznaWGR1FeLeCfhx8Sf
EXxL07wVeXWoG10HUUuGiu55PscKROCSucqeD8oUc5ryDSdfvfDGtWeq2EixX9nOs9vM2SVdSCOM
+o+mK9Xn/bF+KeoWMttLq1pGJUaN3WxjVgCMEg4znrz09qPZsr2iep798Xvjh4U8PftIeCZbq9dY
fDy3lvqUywsVhaaFQgB6HqM46fnXCfta2XjPwN8T38b6BqV3p+ka5bW9mlzpk7K7sq/cYD16gnPf
kV8sM8m1cu0j5IdmxuYAHkn616t8P/2ofG/w98NQ6BYyWl7p8BLwHULfzmXJyV3FhkYzij2bTuCn
0PoT4cm5+Dv7LOvSeObj+z7zX5LqS1SRjLK73EA2K20E7mIYn8zXw/LcH7PDFhiCgUgDJRsdu/pX
Y/FT4w+Jfi5rdrea/LEwtohDFbWqtFDEOcsE3H5juwWz0FcNvH2jBJVgu8leARz/AIVvTjyGE5u5
MeWO+Q5I+bcvXHpX0p+wz4q0zQPiZq1tqN5HZyavZR21mJePNlEhOwH1Ibj1xXzEjAjciliDhWye
KuQ3ktu0U0RkguIW3JLE+HjYdCD1B9CKmpG4U58u57t8VfEfxF+FHxE8Z+GbTUbrSrbxLqVzdW1l
EiOLuKV2CsjFWIJDBTtIIxXsXxbeP4e/sheHfC+uSpY+IWgtANPZw0pKTI0uMZztB5Irx+0/bS8T
2unaVb33hjw94jvdOiSOG91KNvtDbQPnLcgNwCSOp5ryr4pfFHWviv4kude1y4Z2ZybazjZmitIy
B8keeQPlH1Oax9nJm3tUtEfYf7WPiPUpfCfgP4ieC7oXFnpE891/adsgljiV0VFLAg8E5U5HHesT
9kjxL4l8b/ELxb8QfFNwJ7FtKS0bVXjSG3UxvuKgjC8AEk18/wDwg/aI1b4Q6PqGizWMHinw5ex7
f7J1OT91CxZmZlGCMNu5HTIBrZ+If7VOpeNfAkfhXSdBtPBOmtK7XMOkyFVmiZWBjI2qACWycdeK
XI9io1U+h9K/ArX9M8Sab8YdM0q/gub281nUJ7WBHw80bxgLIoPVSSORxzXyrafEz4ieIdB074QS
WsjpBNHB/ZhsgtyHiYSbDjBwCM5POK4HwN411b4a+KLLX9AuGt9Qt22ErwJYyQXjbPVSBj/64r6A
g/bT0q08UXPiIfCvSofEk0ZVNTS9PmsSAuGbyQcY6jNTy6lKTVj3z43+K9G0j43/AAeivdUtLaS3
vLtp1lmVfKV4NqF8n5QzcAnqa8n/AGlfin4t+B/7QV14k0K1iWz1XRYLQXF9bs9vMVaQnawI+ZTt
4B6GvkzxT4sv/Hmv3Os65ePfapegGa4c8MF6YAwBjGB7AV7hoH7WNhd/D2z8N/EPwdD8QI7CUyW1
zdTqjbMYQEbDlgpIyeoHPNHKxc1z279nnWb64/Z0+I/iLxFssH1q71DUFnmjMEcoktlO5Nx+6W3A
c9qy/Fs1xrX7AuizaGDqF5aW1jIViXzmQxzoXyo5+UAkj0FfO/x7/aRvvjDpunaTp2mf8I94T06F
FTRRIsgeVQQpLADhVIAH41W+AX7Q+ofBDWJpPLm1jw1fbvt+kgqN7bSFZSwODk888jiq5HuUppnr
Hwh+O/in9oL9o/4fyajp1qbfQftLPNptu/yLLbsN8jFmABKKB06mvf8AwfrFnL+2D8QbRLqFpv7B
sMx+YMlgWyMdyB19K+Zx+134S8C+Gdetfhh8PZfCOv6mFRtQlnSZEKk4bBLZwGfA4GTXznoHi3VP
DHim38RaffyQ6xBc/a0u5PnZpskksD2OTx056UnBsXOr2PcfEf7R3if4Wal8T/h3BplobfWda1Fk
ku45EnjSZ2jygHDZwCPrXvnxCaPT/wBjv4bx3+LWQS6Jujm+VwfMjzwe4ryvVP2r/hZ4u1jw/wCL
vGHw51LUfF2lwQo15bSp5ZkT5shN4BUOSRuFeEfHv45av8evGl1qt8PI0OLdDpdgQMQxZyC4yVZz
xk9u3vPKw9pZqx9g/tofEfUPhB8SPhb400+ygv2sY9QURT7hG+9I0I3LyPlckf7tJ+xv8S9Q+MXx
W+J/jC/soLD7Za6fF5dsWaNdgkXAY9ThQT9a8D+Gf7U2gx/Ce9+HnxR0TUPFugRiJNPa02tcIASx
DszL0IXaQemQab8Vf2rtBh+DsXw9+EWk6j4U0iUzR302oIvnFGO7EbpIxyckZPbii3kNTS0PfvA0
q3/7H/xg+xsJ3+2a/hEHJILYGB6/1r590/8Aap1j4nWnwn+HJ8PQ2qaTrWlh7+G4LtMIXVAdm3jr
nr2rzf8AZ3/aE1b4B+PIb2EPeaFfuItU04ndvRny8qAkASe57ZFe4aD+0v8As/fDrxb4g8Z+FPCn
iBfFt7FcGJLyFfsomZt5wN5EYLAcjtnFJxdrLcXOrn0v43kA/bQ+GsfmgP8A8I9qZ2Z5xlea+dvi
B+1Rdfs4/tN/F+1/4RiLWo9WNi6M115DJstlG77jZB3n0+7ivkjxT8YPF3iz4iS+PLzWrm38QNce
dBcWcrotsAciOIFjtT/Z79819JeLP2k/gX8fPD3hvVvizpWuWHi7ToWW8l8PWhMUmW+6W5LLhAcH
pkjNXyMammz0bwfdNef8Ey/El5Mhhae1v5SrHpm7bH+P41rftgeNT8Pvg78C/FCad/akelazp16b
fIUMEtmYLk9M4xn3r5f/AGsP2q0+Ma2vhTwYr6P8O9PVTFbpG1u9620Z81cgbARwvryc8Vp/AX9q
vw/F8MtZ+GHxlgvdb8FT2rCxv0ia6u7VjhfLGc7VUHKMB8vSp5WtyuZSPc/2cv2go/2jf2y38QQa
A+hW9p4TmswsswlZyLhGJJAA/jHHtXq3wS2t4y/aR8v5z/bzDGc8/Zc/1r5Zm/ac+EP7P3wl1vSP
gbFqV/4p1SbY2oazZtHLbIyFS4fapO3bkD1Pevn/AOAn7Tvir4GfEVvFK393r1rfyFtasLy4Zvt2
7gyN6yL1BP0qVF7g3bRHbz/te2Fx+ydafBr/AIRa5k1ZWVTqiuvlgC7E4Pl/e3YIX6+1fevx5jKf
Fv8AZxiJUMuuXGd3HS0P6181w+O/2R7L40zfEyPUdVfUGJuRoQ0iX7CJDHs3CMR4zkZ64ya+Y/jb
+1N4z+MfxOTxf/aV1oEWnyCTRrC1uCRpzAYDIf77DluOvFLkbdxpn2Z8f/2ldG/Zp/bQ1LWdY0Gf
W7e/8K2tli1KCSJjM7Bvm6j5TnHPStP9ljxVbeOP2Wvjv4lt7X7Baatq2t3scHGI1e1RwOmMgNg9
uK8Y8TfHr4G/tVfDLQn+MV/d+DPHulyeS+oaVp8s0l3Eq4B3iMjDbtxUn5WDYrmf2lP2tPDGlfDH
TfhH8DJHs/Cn2OP+1dat4mtri6JUxvEUZFOWAQs/foKOVtjvb1Pc/i7qQ8P/APBNP4dag0S3UNv/
AGLNJEejr5ytjnj2596zvh9+1J4Y/aX/AG0/hC3hnw/d6SNGsNVS5ku4kQtvgJULtJyBtzz6/WvD
/wBlH9q7w/pfg3UfhB8Yi2rfD29tpDa39wjTf2ZtUYjCKpO3PKkHKkeldp8O/iv+zl+yJ4U8SeKP
h/r7/Erx3II4NMhv7WS3eBT8hCOyfKgDFmPU4A9KXKJTSPrn4UASftmfHNh95dO0VW/79ORXwxJ+
1r4J8GfAH4tfC2/8NXs/iTU9U1gw3cUMTQZeZtrMS25cFfQ15J8Kv2xPHfw0+NM/xLudQufEVzqk
w/tywln8tb+EKVVAMELs3ZU44xjvX0h4l0b9kH4n/FrSfiRd+P7TRrK4aG9vvCb2ciwTzDDkP8vG
WI3DGCQT3NDXViUrHvfxhjWP4IfsxxMGIPiLw6oyec+TxXOftd/GLwt8Df2w/hx4n8X6XPq2kR+G
b2HyreBJnjdpflYKxAr40/a2/bQ1b4wePNNtfCU9x4a8G+FLiN9Es4QAXngdhHdZ28fKF2pk4Few
Xfxs+D/7a3wM0y1+Lniq1+HvxH8PPHapr0yb3u02KXdUAAKOSdy9iM5p2a6DvZ3Pfv2GPH2ifFDX
v2hPF3hzTpNM0fVdWtri3gmjVGC/ZWGSFJAJIJwPWuMvWVP+CSWovIm5Psc7MoGcgak2ePoK8s+J
H7TXgD9l/wCBNn8M/wBn7WLTXde1ZPM1XxdYhfkYMA7MrKwMjISqj+EYNch+xd+1noegaBN8Gfi/
KmofDbWFMUF1fMFi05vmkYStgMVd8EEng+g4oSe7Rd7nqem/tKfC744fGv8AZq0XwNoVxZarousD
7ZPLpotlVRb7dgbHzjcufTgc9K8m/wCCtMm79qWzjdSU/wCEbs2yeg/fXHtXo3wr8Pfs3fsmaj4h
+JcXxN0f4mappsLSaBo1nIouIXBOAnzkFyCFzgcZr4f+O3xt134+/EvVPGfiaZprq4H2a2i2qq29
sruY4gABnaHPPOTzWkdTF9jz2eVhIzDlXOAq/lxXQ+B5Ui8a+HkCBv8AiZ2uSSQSfOT8650BxG/O
5ucDsBU9nqc2mXdtdWbFJoHSWN8ZZXVgynHfBANNgtz9i/26PHHgj4d/tBfAPXfiFZLe+FrU6s1w
rWv2kIzQxKjGPByA5U9D0qL9iPxv4F+I/wC0t8c9d+HVmtl4ZmtNIWNEtTbKzbJdzLGQMZYN2rxG
z+L/AMPP2/8A4CJofxN8Uaf8Pvij4Z2pBr1/KkEE4kb5njQsAQyx4K8EHBqWf4s/Dn/gn/8AAaXS
Phr4j0zx/wDFPxMGin1nTZ0uLeDy3JRpYw5CKqSsFHUnOaytfQ1Ukj1bwTIY/wDgnn8cJGw+brxI
WwOMeY+f0rxhfif8C/GOlfs36D4D0uCPxtYeJNFi1CWLTXglQKQkoklx8+ZMHqecGuH/AGKv2vLH
SINZ+D3xZf7d4D8YyzQm8IWIWs9yx84yuCuI33dex9q9J+F37NPwT/Z++I2u/FHxB8UfD/irQfDf
m6pouj6bfoblZI382LeA37x1VVUL3NLlsiozS3Pq/wAThZP+ChnggfeaPwPfNyfug3AGa+ZdW+I/
wE8I/tC/tJWvxZsbS61m81KJNPkutOe6baLXaVRlVthyRzx1HNfO+tf8FDPF+o/tT2/xZtoIF0+z
DaZaaVLAB/xKmmDMjHP+tIGd2cA9OK9x+NX7Pnwu/a58SaD8WPBXxH0LwfF4lRbvXdN12/RbkvlV
OI9x2PhSp6jPNDiQpJHZWgRP+CP8GyLZFLagqrDorarx19iK7j9vLUvB2i3/AOznc+PVibwlFrkj
X8c8ZkjMYtl4ZQDkZxxXzD+25+0/4Y8J/D63/Zy+EgRfCGkIsGp6k/74SMJEmRYJNx3YfduJHsOl
dzonxL8Lf8FEv2dZPB/jrWLHwl8VPCkYnstRupxbWM7udiuFLfMCqAOvYnIpcjG5Ju56j+yTrfwq
8S/tpePtQ+EdrZW/hj/hE7WMtp9uYIWmE4D4QgY429ucV0vwlfyvgZ+1RPlc/wDCS+I8nHAxbivD
/BD+Dv8Agmv8F9W14+INN8cfFrxEJLC0XS7kT2kYXLxh1DAqgPzMepzgV5v+xr+2zb2/ifxX4A+J
4gfwl8SNQupbi+hj8tra8vGEbqxJwIipxk8rgHNCjbUTkrEV7rX7P2pfsy/CTTPDQ05Pio+q6St6
UjdbvesoExcngjJ47cjA6V96/F23M37bPwD4/wBVp2uSH6eQo/qK+RvBf7AHgb4c/Ga98ZeIviJo
d18MdCkl1HTrS11FTeExFXiVz0bbtbOOuB615V8WP+Cj2veKv2n9B+I3hnTLc+H/AAs89nplpdIV
e5tZgFlaUZ4c4yMYxgdeaOVAnqfQ/jLUPgPY/twfHFvjNJp32gWOkrpI1Tf5ePsw80Ljjdny+vY8
d6tfBtrFf+CWXxDfTAyWTf2wIc8nb9oIXP4YFch+0J+zd4c/bav9B+M3wr8XaXp03iODZq1nr12s
DJ5YSJSq4JVl8tgQeDgEVjftcfG/wt+zH8Co/wBmn4azNq1xNA41vU7vMwiWY+a2yQYVnZie3A96
VtStLo9p/a/Phi3/AGaP2c4PGDovhU67og1F5MhBbi0bzNxA6bd2faqH7Pcfwff9va2PwcNhJoCe
CLn7S2mOxg8/7VGDjPfbszj1FcH4B8ceHf8Agoj+zIvwo8R38Xhv4ieFIFvNLkhfy7W4MUXkwyEt
nIy5DqMEdRwasfBbwFoH/BOX4d658UviDrFrrPji6WbRtK0vRboTwSK6pIiHC5BZ4clicAULoCkk
rXPov9n+RD8Rv2prnfwPEGwkdRstTn/PtXwQ+m/s8/8ADDlncw3OnH4wSSIrKJ3+2ljeAMNucbfK
9q6L9kP9vT+zPi/41sviNbW1t4f+I169xd3lnG3+g3LjylUY6xkMASRkYzXU2v8AwS5s9O+PUt9q
Hiq0j+EVrM+ofahfxm8dAm5YyNuBhwQWA6CnboNTsfWn7Q8aL8dP2Xrc5BGtXhH/AAG0U1498btK
+C+u/t6eIovjHcafBp8HhOw+xHU7hoYTN5zHBIIBOCCM+p96+e/2m/8AgolL4n/aH8JeIPA+n293
4a8A3MsmnSXSOrak8kSpKGBwVXggY9jXp37SPwBs/wBvbRvDnxo+EetQHWb+L+ztT0/V7hYEhEQI
4BGQysCDzghgaLWFe1jvP2UYNBsf2Ov2hY/CpLeHY9Y8QJppRi26AWqiLBPJG0Lye1UfjtZaev8A
wTN+FFnqkrQ6bK2hJcSqdpWMnLnPbC5rifjb8S/Df7Cf7Nk3wL8NagPEnjzxLbS3GrzzktFZrcws
ksqlABncgCLyccntTfgF8S/D37cf7Msn7P8A4luT4d8XaBaxy6TNaArFeRW0aiKR2IIB3Nhl/EUl
FoL63Or+HfhH4H6B+3B8HP8AhS99ptzbnStWfUhp961ypZYNsZLFjhvnbpXv/wAGEz+2d+0ROQcr
baHHken2ZzXyz+zZ+zfY/sMQa58bPjDq0VlfaOrWmm6fpkyXCzCVCh+6CcsSAo4Awc1wv7O//BRz
+xv2mvFXivxppNtpfhrx29sLua3Z5DpqwRNHE3H3l5G4+5qbMdzIg8IfALVf2W/ilretXtl/wtUa
lqjWtv8A2i8d0HE/7oCENhhznketfYvxetwnwb/ZYspOh8TeHlKkdcW5r5s8Vf8ABLbUdd/aDF3o
/iG2b4U6pcC9uNSe5iN0kTgyOgXocscBsdCCay/2v/26dLg8f+BvB/gGCLVfDvw31K1uhqNwGDXt
3bZiMQyB8gAI3gcnp0p2b2EpJNXPe/2uND+GPiz9trwHpfxYvLax8Mf8IhdsHvbv7LC05nIRS+R2
3d+uK1P2DtM8HaFrH7Rtt4Clin8JWutRRWEsEvnRsi2zn5XJO4ZJ5zXl/wC0P8NLf/goz8MfCfxU
+GWoQN400uGLS9T0G6uBDHCcebKmWH3lZxg9CKQ+JtG/4Jl/s1T6BcXi698V/GsIu59M35gt22+U
zhkBAVAR1PzEUW1En5m/cskX/BITUS8pWOS0mDOg7NqZ/wAa5iH4Y/Arwh8a/wBmiT4VXtjda3da
yraiLTUjdMEWENlxuO07gw7c5rL/AGPvi/4Z/aX/AGeNQ/Zh8X3Z8NalNamPSdQtSd18ole4bORt
VlKjIz8wzjFR/sx/sTaj+zT46vvi38Z9Ws/D2geCM3Vg9nOk32o/OpdwpZgMEEKOSWx2pWexd0fX
mnwmf/gohrEwziH4fwxn6teZ4/AV8i3fw4+AnjLxR+034h+Iup2KeMbXxFqUWmwXGoeRIgSI7DHH
kbyXLdjyK5XwV/wUkjb9sy8+IOraDBaeEdVtE8Ps4ZjLbWaSlkuT8pLN3KY6EdxXQfHr/gnPrvxf
+PNr4z+HmrW+oeBvGdwus3eryzor2gmkMkhRDgsuxgVHX1HFDiyebU9S8SRRJ/wSs8C2z7ylxHpa
gnlub0Nn611v7dnh3wT4u+LH7POhfEG8js/C9xealJdtPN5MZVbeMgO/8ILbR+PUV86ftsftL+Gf
hf8ADjw9+zt4DkXxHD4VNqupatcucxzW0oZYRjALEj5mHQcV3/xV0+y/4Kg/s+aJ4i8HXcWm/Enw
azLc+G5XxHunZA6szYyCkRZWHGcipUXHcFI7r9jnwb8OvB37Yfxbs/hjcQ3HhmLQtPCNb3PnxrIz
5cI/ORnHGeKvfB+5e2/ZS/aW1DHliTWvE0gZevERB/lXmXwm0XT/APgmN8FNY8cePLiO6+IniaIW
tj4ZhlBRzE5KqJEDYyHDM39a5L9h/wDaj0Tx3onjb4JeOdmgf8J5PfSWWqwszlri8yjwYK4BG7Kk
kA/WrT7D1ORuvhP8A9C/Z/8Ag3r/AIZ1S0uviZqGqaQl9DFqYe4JeQGdWhz8uDx0H1ruv+Cz8wbx
P8LbfIBWy1FjnPeS3/wrL+Ev/BNbXvh38apdd8bavDpXw48IzSapBq63SM9ylvMJIS6fwjauTn0+
grwn/goD+1XYftO/Fu1l8P24i8N+Hkn0/T77Lbr0Oys8pVgNoyowOtNash6ny3xGuSg7/d5wM08R
C4R9xZYlXdleox3qKfKk7eQRxinmcLwQS+05XPTnvVu9iYqzP18/am0bSr/9in4A6Lr9wbXSbvU/
D1reXO7Z5URtsO27oPlzzVP4T/DX4TfD79v7wdYfCu9t7u0bwvqFxfLb3n2pElzsHzZOCQeRn04r
mvCPiPRP+Ch/7Hdt8NYLkaJ8Q/BtrDcw6VvDC7NvbmKJ8tgBHL4IzkH8DVD9lv4AL+wtpOsfGn4x
Xw0S+sop9K0vQrd0mFwJFUqAyE/MxUqB0A5NZX6Gyeh9N/AOTzP2pv2m5hkhLvSUB6ni1fP618GW
nwq+CGo/saeMfHV/rVv/AMLLN1fvbQNfhZ42N0VjUQZycqQen8Vdn+yR/wAFANMi/aI8cXfjDS4N
A0X4iXiXDXonZ/7PkSIpHGRt+YNnrxgmsnU/+CVPjE/Hs6TZairfDCeUTyeI5ZY2ljhKbmUJuBLb
htz7+lPYE7H1p8fLSJfBn7LNiPlQeKdF2jGfu2/FcV+1d8Pfhv8AE79uLwXo3xN1KKw8Pw+D5rgL
NefZUab7UwRS5IH97AB/hrxT9rb9u7QB8UPh3oXgiyTX9B+HGpW9+9+7tF9vniURmBRtAUAD72Dz
0rsf2o/g4/7fvgXwp8ZfhRfHU9ZjtYdJvdDEqxrBgvLIC7FTvRpNv0qE+WyFdM9S/YO8MeFPBupf
tEWfgq4S58MWuuJBYyrL5o2pbyEgPn5hknn3Nciqra/8EkdVk3BVuLKdpGYYAV9ROf0Nc/Za7of/
AATQ/Zg1HRtXuhrnxO8aRC6bQYX2i2LReSW3DPyIM8nG4jisn9kf4m+Hf2p/2WNT/Zq16+PhbxAL
YxWF0G3NfKJmuCUB4+UgAjPQ57U1uVz6MdbfBD4JfDn4ufs03Xw51iDUPEuqa1A9/wCTqn2kyIkS
uzsm4hPmGOB7CvqiPbcf8FDZM53W/wAPwVXH3d15/wDXP518efss/sUa98DfiO3xT+L13F4R8NeB
W+229w7xy/bJMMpJ2sSqgEEdzkehqh4V/wCCkOmXH7bF149vtDNr4Nv7FPDSy+cS9varPuW8b5Oc
nB29getKyTByvodXf/Br4MfEv4kftN69481ZLXXtL126NhFPqQgCkW5IZY8gsd49+lduIoof+CTO
ixbW2TWtsR5nVi2pBs/j1ryf9o//AIJ6+LPil8eP+Ex+HF9b674Q8bzrqlxqzyRhbPzpBu2qGBdQ
hDZHOK0f2zfj54Y+BvwP0L9m7wlcjxPqOkpBDqmobtgt2hlEuwrjlmbPfgUt5Ep2se4/t1eDvDPj
rxD+zt4Y8X3v2Dw7d6vOt7N53lARi2Tjf/Dk7Rn3rM/ZV+GfgD4Y/ts+PdH+HN79t0CDwnbMxW8+
0okrzqWAfJz2PB71yfxdit/+Clv7Mmk6/wCBbn+zvHnhJpJZvC7SKH3yHy9rOcbcrGWU9MnmqH7O
Pge3/wCCdfwi8SfFT4o3XkeLtdt5LOy8MCRd8xiZpEQOMjc+Mk9AMd+KbV0hqSR7P8ErkJ8NP2ot
RXGx/E2u89B8lvjP8q+PLv8AZy+D2i/skeAfHel6z9o8f6ld2KSQvfq5kd7kh08nPG0ZHPZea9C/
Yk/az0H4gv8AET4XeL418LT+P9R1C9stSjl3xrNeBYxbAEffAbgng7T0rj/hn/wTL8b6J8eLq38S
3R0z4b+H7uS+h8QyMjfaIomV0wm7IJAIYngDNKSumkNP3j7g+OUbS/tS/s5WyBdqz6vKwzgjbap/
9evD/it8Dfhr8cf26viJF8RtUFpYaT4c014oHuFgUuQcksTzhe3+0a8w+OX/AAUT8PT/ALXXgfXt
B04av4U8GNe2v22Kb5r8zxhGdBtwApBx1zjtWl+21+yrr/7Svi7S/jD8H7xfF9j4jtkt54beREFt
5KBUYMWGed4I7EDjrTs1u7aCXkeh/su6TY+Hv2E/jha6Q5fTLe91+GzlzkPFHbhEYHvwo5yfrSft
CaJY6p/wTw+Eui6pIbTT9QutAt5ZEONkbdWz24ORXI/F3xvoX7CH7I0vwTGqJ4o8f+JbW6aeBAEF
ityhDyMOflBG0c5PJ4q54U1nS/29f2K4Phnol+ugeN/CUFu8Ons+9rz7LbhUYH5dqOz7Seox71nH
3Ul3uNd2bHw9/Z4+HHwN/bn+F9t4BvGvIZ9E1S7uVkulnKNsKg5HIyGI59K92+DjpN+2B+0JcDJa
KPRIdxH/AE7OePxr5R/Y3/Z01b9la71X40fGa/Hg+30aGaxt9OuXVmvGkj7EMecnCqMknPpT/wBl
/wDb78Nat+1J46utf03/AIR/R/iBdW4tb+afK2Zht2VFkO0fe6dcAn8aFdXZTt0POLX9mv4X+Lv2
WviJ8UdQ1WWDxrBqmpvDAbpFAxc7VUxHkjBz65J9q+xPi/pYj+E/7MemhgyR+IdCjwQMPi3GOP8A
CvkXxR/wTO+JEv7Qf9g2Tm48DX10bk+JSRshjfc7ZTdu3A8YzgnbzXov7Xn7ZvhDwf4/+GXgfw8k
niCx8A6pZahqOoW8gMUjQIYzCuOS425PbNVJu90hdT0v9qv4KeGv2hP2zfA/hXxZqE2n6VD4Subw
mCRY3dxOwUBm4Hr07Vo/sQ/DrRPhdJ+0DoXhy9fUNIsNZW3gnkk8xmVLaQ4JHU8nmvL/ANtj4Sap
+2D4V8G/Gb4S3Z1+aGyTTbrQbNwZ4g5MjqXU/eQtgqcVf8D3mmf8E6P2VdZfxffLe+N/GIS6g8Op
IEnhZ4vKy2SSQuSWbHUYpWvKLTH0NXUrEL/wSantQ3lJPakE9Bh9TI3frn3xXP8Ah79kbwP+z7+0
L+z9qvhXxBd6lqGsag7Xcd1MkhO23LBl2jgfMw59vSn/ALO3jLR/2uv2MdU+Bmn6rD4d8Y2Vqi25
nGftOybzwyJlW4I2n0yDXE/sS/soeNvhx8Vv+FjfE9rjwbong2R5Gm1mcFLglZI/kfOAvIJJ68et
T8UdQ0uz7G8LxtN+3j4zZl/1Pg+wjVh1AaUf1zXxrL+yN4O+LGj/ALQ3xO1zXLmLVdL8S6vNa2iO
oUCLe4DA9dzH/wAdxXXfCr9vjwRqv7afiXXLqG407w54gtbbQ7O/uWUKhgYkTMegRuvXoa8t+OH7
CfxUvP2mb+x8PC71Xwp4u1KTUxrVuzLa2yTyOxEmDg7A2fp061UWotojY+mvG1qlt+wt8ErdlKBr
/wAPhVB9Wz/WpP21/gfp37Rv7SHwh8Fapqkml2P9n6rdSyxgbgFZCu3PGSV/IV59+118ffCfwQ8F
/Cv4OLKfEur+FrnS7vUrizdPKjS0+VkOWyHbAOD69qX9tbwlrv7T/gLwT8aPhRNcah/Z1i1pNpFm
S96nnybmBCd1xhh6c890rocXrqek/sNfCPS/gj8TPjx4T0fVH1XTtOudOjSaTBY5hlcg4443Bfwz
XLWlmh/4JbeL1EmxbuG9Jkfplr7GT+QrO/Z6U/sMfs0+K/HvxTvJBq3iuSGa20XP+m/KhjVWDEEn
L5IHQVB+z94r039p/wDYu8UfBnw5q0WleOILdxi9OEmWSdpg0f8AEeFZTjpxUxdn8ymkc34J/Yv0
T9nH4yfs9a7YeJZtWutc1VHuI540GJPIL/JjkL8x4PPv3r6p0JDL+3j4lnOJGg8F2sakAZQNODgn
sScn3FfFH7FnwD+KN/8AHHR/GHj6bUdF8L+BJzO83iCWVIyQjRhYQ5xjhTngYHTtXsnwu/bQ+Hvi
b9uPxLdLdzW+kapY23h2xv51CRyTwuxZuf4DtwG9/wAabXUV0eN+Jv2OLf4y3Px9+K134iFhcaJ4
g1N4LWNS5k8lmYZIYY/hA4PSvSv2uo3s/wDgmN8Oxlrl410fLA8t+7OT178V4d8dP2efjZpv7Q3i
LwdoVzqt5ofjTVZNSjksHlNh5VxMxXzeNq7Qfm4PrzXpX/BRrx5oHgX9nv4e/AmPUDfeLNLjsJrx
7UDyo44IZIyWbnBLgHb71tFpzIdz80bja/zJuUgnke1NOGlV5FJB4yT+fFOupvNUADZtyW2gfTNV
55AZhtxkjJHXB710bGOpMdgXzFBcYwpPH40SklNpyMj8KewjVDl2RmJIGPT+VRukZJKCQgAE56Z/
wplDQTuCFgUZsnHPHfAoliGCUBBPAd+BimsWWco4wVAIweCKex3KkZc4I+4w7Uh2HupXIRmY4+UD
kY71Gsb78FPlUA/ez9akMTCORG3Rpj5Qw/xpixukZduN/Cjdzj2oQrDlmYZ2cLkrkDnAqtDIi3RL
kuNuMscVNtjjf5iOcEcd/WnW+ntJDNdGTKINpyo5z1PNMVi1p4HlSEAMmDgFsYz0yaueORu15kJB
JgQYUkgnGc5P1FV7aOMW8+MsVAAIHTpzx3q343YS+IpZSxD7IwVIx8vlrj8MYFQ2C3OejEu9YwMs
3HPaogHjZwVIbPJK56U50DNlyQ46Fj2HSiRY2ADZU9Rs5/M1iyjo7bAnjMTExpwuOPqK6MIt1qcp
mJa4UjcWY5XA9uo+6KwrNNocNhBEGYmNxvAz2PAJrcs7sDVl8xIi24hSEyWBGMZxx6jNeolrZHAv
M7rS0SISW8saROXJRt2Qw7Y4464/CujvoSmh3bKRE4tpvu4PHlng45xx+o965aG3aWVZG3xsMZK8
nPrxwOnvW9sSHTr6WW5MoFtMF+cBcGNh3A+oz3qZxaNYyimfN4sVuruaJQFkUExIgyCQemfSqjod
+1lI+bLK3t2H61qaO0I8R2ZH7yEnABOB09fxqDVkjGtXDBwR5xOAeAPauZnSrPU+g/2YvhRrPxs8
TWnhzQ4J0uJVXzrho2kht0zjdJtPA+9zx0r6x/aW+Fnwy+EWnaZ4f8KyXd14ttrdEvbm3m3W8mAQ
ytzwwIPAGRVj/gjxNDPr/wAQ/JDAR2ljvLqRlvMk6c8d+nXNeRfGESxfFfxlCTI08erXTEuANv75
wMYHToelEG2yKrUXZnvX7Jf7Pfgf40+FfEia+JxrdjKY7aCC5CsEKEhgvcBuPwr5+1zwrd6b4kfR
k0++g1HzPLjsDCWmbcfl2x4J5zmnfDzx7rfwz8U2viXQr1oNUtm5zhlKdWHPVTzn/wDVX6P+HvEH
gHxr4Etv2gn8Ho2u2tlLdbiSZ4zHmN1U5C9sg4/WnKpKGhn7stT52+PX7Ofw++DfgHwdq7NqCXN5
c266nbmVfNFuyBpSi4BBDDH1Jr0HxD+zX8DfDXwytfHkh8QHQ57ZJovImDuVlUY+UqeeO545r5N+
Nfxe1T4veMJ9U1i4kWB2ZbWNiNtumeFXbwMD1yT+VfeuleEdP8f/ALHvgfRdZ1NdJsrjStNEtzN8
vIVMLz0JPH41zS5tzZPzPEvGv7Nvwj1f4C6j8RvBt7qlyYIPMhN1cD5Tv2bXTHy4649qq/s/fsie
HPi98E5fET3l5F4iknuYLVg4WAbGwm5dvI6ZJzXDfH53+BnxA8aeCfCVzPaeGdUgt1utPYM6qcBs
qzAnnPODX09+xbry+Fv2VJtUnSS5g0y61CbywAG2IxYj06ZP/wCqrcpLqNWdz5d+D37M0Pi749al
4B8afbbKCwtp55Y7TK+YUYAYcrypzkY9K84+P/w00n4bfFrxB4a024uGtrKZY0MhwygqJByOpwee
K/UvwNqHhP4oyaZ4+0gwXV99la186OQM0IbDNE4BwGHHvg1+d/7aMKx/tAeKplMYZpYUCKmHY+Sh
JzkdiKcZN7sxno0a3wB/Z98DfHjwBq8Gna3eQfEG1heb7G21LbduPljG3nAAB543V8/eM/CmqeC9
a1HR9ZtBaanYyyRyRnopDEAgnqpwcHvXov7MM7H9oX4eyLcSgtqqDC/LkFDlWHpwPyr2n/go3ax/
8LB0aZY/+YWhYqQCSJJcdRycgc1pFu9ipLZo+OtSljMUcayhX5YRck9+M9un15FQyiWWRTwIidrg
j73tzz61LKkdw7yREjI2tJjKMf05xkZz602QGO3CFgqEfKxwSCT1rot2OZpvcJzsnkQkucZHlsDn
jpntUsDnBDSiLGcIRnv0/wDr+1Vlw7XIWM+UCCAOSoPGSP8ACiNiIHLRhhu+VSOQuOBnvznr61Cv
cyu0yaRj5bMcsPuqX+XJ5yaktI4+SULAcYb19frmmW8ayFmMhQjgliPlOOP8ioLfzBESxT5HOcfx
Y6Z59PatL3KvdlgyM0TSuWb5sCMjLZOQePQetIFSY4MbrsQYk4wwPb9BRHMzMXLFkbClRg7Rzzn3
56UjlSV2HYr5IBP3j7cc85qFoytiWYrJIvlFRKASw5Zfwx3p0TRELh2VoxhA6nAI9zUVqYWiZZdy
/eUbCATnnJxz0HQetaGgPDLq+npIRIhuIVaN85ZTIqkdM4IJB9qqTshQ1Z718Df2SLv4keEr7xl4
o1dvBHhi3iWa3vZo0cTLlg7csNijCjkdelbXj79jmO1+Gv8AwmHw+8Vr4605JD55hiVdkaqd7qS/
OGAyO1fSH7YnhXWNd8OeBfBXhI/YE1a9ktfssDiGBkEeQGAwCBy2OnB4Ncd+yD8O/G/we+MeveCv
FdyH0660T+0o7OO5823LGcIXCdAxGQfxrm9pJM7UrnxL4J+H+u+PfFlr4f8AD1rPqOo3jB/KwFMA
BG5zgcKoIJJPbivqST9gXQk8RL4fk+J+mR+IZU3jS3t/37DBbAHm5IwSc47V9F/AjwbpGhfEb4tX
VppNrZ3UOrCGGaOJd8cZiDbVPUA8HHTmvjLV/hX8VvEljrPxhtruSaG2nkuYdUN2q3kZidkLgegA
wB6DGKr2jYcvQ8w+Jvwx1j4VeLr3R9Zs5rZ4Hd4mdPkmi3MocHHQ4zmvTvhB+ydq3xM8AzeLdY8Q
QeC9A3L9jvtRjwbhGIxIpJUKuTtGc5/Gvrv48+GtI8ceAvhjq2tabbahe3Gq6WktxJCCzpKvzqf9
kk/dPHNc5+2P4b8TeJrrwN8P/BcZhtdQjnzpsEggtysPllN2OAEGTjpS53cOWzPmr4wfsjav8MvB
dt4s0nXrXxpobljPc6fESsajPzZDEbeCMg8EYryfwD8PdY+I/iqx8P6RbvcX9zMqDy0LeXHvXc74
+6gycnPavuX9jXwz4s8Kal46+HfjdTNp9pb28sWl3DpLFGJjIXCgZG1hjjOK6v8AZ28FaF4K0H4o
aroml21lqlvrWo2yXMcX7wRRgMiZ/ugnIH0pqq3oHKfPzf8ABP8A1k6vcafD4+8PXGoxqX/s4Fln
A2jAKg5A6ckfxV8x+KfDWpeD9e1LSNXtZbW+tZNjRToY2JxnIB7HqDXscGi/FvSLm1+OEqagHnkS
Z9WLxkEMwiG5Ac7GB29MDivrj9o34W+GPFvjz4Y6pf6TBcT6jrMdneTYwbiExEhGx24NLnYlBHyJ
8LP2TPFHxR8GR+KVu7Dw5YvI0MR1t3gE4wCHX5TlTkgH1B9Ky/jH+zD4n+DVrp+p6lcwaxpt/nbf
6UzvDGfl2hiRxnOQeh59K+l/209K8X+L/EHhv4b+EbOa70ptO+3HTLMIrFonKKdzY4UYGCQOfXrr
/si6Zrfiv4deMfhv8SLKS6tNDngtI7G+AEkUbKXCllOTgqCDk/XFJSsLkPhv4d+A9Z+JXjGx8PaL
A91dXUu0tg+XACGO5yOAvynmvcdT/YK+JFhHdyx32j3gt1eUW9rdHzWGCdoUpyemBxz3r3/4A+EN
J+GvwK8Y+LtAs4bfxKiamDfzZcsIXkaNSCcADAyBjOOa+cfBOsfFbwL8RdM+It3YX6W2u3ccc+pT
QgW12txIrN8vQLj7uORxiq53fQrlPB7i3mtZZo7qJoJ4sh4pQVKn+6w6givXfBH7JfxD+Jfha317
RLW0i0+cvFF9suRG5AyCVHPG4dc819Y/G74F+DvEPx/8AXN5pz58QS3i6nHFKyJc+VApUtjvwM46
1wH7ZWoeL/EnjWz+HXhm1vJdIsbKC+Fpo9u7T5bzIxuZeiLtX8xT9pIj2Z8z/E74HeKvg/q9rp/i
a2S3WaNpre5hlEsMgBPy5GMHrwR+dY/grwTrvxD8SWGiaDY/bdRucxxxkhVwASzMx4AABr7n+Dun
TftAfALW/DXxFtjPNod39ihmUNFcKYokZHYg53jdg+vNVPg1oth8Gv2WtU+IWiWsUviOeylupJrw
bkzHKyBQBjAwOQDyapVZEexSPnLWf2Q/in4b0HUtRvdDhFtZQNOzwXsUjBVBJIXOemT0rxNIZjcI
8knDkqUY4GT0znvxmvpv4FfFD4k+FPi9o0us2urXGk+KtQSyuDqyyCDEkmd0W7ABG7jHBHFeseNP
2aPBV9+09ols0M0enaxaXOq3WnRPiMzxsuMDHyqckkdO1HtX1H7JHy54U/Zv+I/jvQLPW9G8OS3u
k3QYxzrNFDvAJGQGYHqPTtXKeOvAevfDnVX0fXNNmsr1YxKY5sD5TnDAglTnB7nvxX05+1z8VvGg
+JMng7wq97pGn6HFG6toxkjeQyRox37ONqhgAOmSa7mewtP2pf2Zp/EXiK3Fhr+jC4MV9aKBIzwI
c7gc/K+TlffIoVRoXsl0Pifwb4Q1rxvrUOieH9Pm1TUJh5gjQbm2DBdieAAM/qK6bxL8BvH/AIM0
i51bW/C1/YaZABJNcMoaNBu2gnByO3boeor6t8JaVpv7Mv7M0fjrSrGPVfEmpwWss91doA6mcoux
SoyEXOcZ5IzXEfsv/G3xXrfj6PwZ4yM3iXSvEW6F/wC1nMnkOkTudqtnKsByp9iKTm73KjBbHygw
ECBX3A53Ff4enX9BXZaR8EPHup6db6jp3g/XL2xuo1kSaK0bYy9QwB6g9cgc19M6P+yZ4aX9pi60
WW4uJNAsbKPW47I42uWnYeQx/uDA4xnAAz1rnvj9+0z4s8PfFKbR/DVwfDeieGp1szbQEbLso2SW
G35VwNoUds0e0l0KdNM+YtY0TUNC1O707VrO5s760kAntZVKSJ6bh2yMH8am0DwpqniW/Npo1hf6
3coPNKWdszttLYDMADhRxnNfYPx08FaP8cfgVZ/F+2tB4f10WXmTxxgP9oTzBGFkbAyV2kg4zzit
XW9PsP2O/gKNV0a0Gq+KNa22bazLtSRHkjZ0JJByqEcLjk9apVWZ+zSep8bat4G8R+GoFudY0TVd
KtmOElu7KWJS2f7xUDr61jyjy23beMYVk5I75r7K/Z5+Lk37QcOpfDr4iwR+I3mSS8hvnWNfL2bQ
EZEA5BO4N9QRWT8F/wBkzTz8ZvE+n67fDVNL8MSQJ5AQoLzzY2dC4ycAcZGeSKr2pXIz5lPgvxFn
zf7A1YQucCcWMoQg9MHbg+lZL/JJIjt88bGNhnOCDgj6givqnxf+21rmk/FMppNmIfB2mzfZpdNl
jj824WNmSR1fGV6DAz2GetWP2kPgDpfiSLw/8QPCjRaZbeKLm1+02twh/wBbc4KzcHj7w3KPcipV
XUj2Lvc+ULWxudRhme2gnmjiC5KRM+zPIOQDjr60l1Y3NksazwSxtIAyiWIxkgDqOP5V9seM9WsP
2MPhfYaHoNsb7xjramSTU2TfG8kZQOxQkkDDYVRxzUPhPUNP/bQ+GmqaLrtrFaeOdCj3x6lCnlxI
8u/YQAxOMJhlPpkVftWUqaPiyXbDJBhhIzAjByM9f5e1SXEbwzyrJHIDtJUmNl+UDJ6+3+eK+ov2
bP2dLKC813xl4zMN9pvhi6uYFtIixDz2zZaVgfvLgcKfQZrV8O/tm22v/Eq50/WdAs/+EB1GR7S3
ZbL/AErY2FUyDdggng8dCDUuprsXyrofHrFWtDtdpPmOeMEc0SlQioZDsOVVm4A47Z47V9H/ABd/
ZEuvDfxW0nRfC88UeleJpnTTzdTEm12Rh5VYlTwOdvXI4r0L4neNfC37J+jaL4I8LeHbLXNfDC8v
J9XtfNUI+7MhkAHzFgMDjAFV7byM/Zu+p8XpKszApJ5u7jKtn+XSmTAwoAEICk5LDn6V9jePvBGi
/tNfCy3+IvhC0GleJdIhNvqFgFWGF9ib5EAIIJG7KsDyDgmvjaYvcWqyEEnG7kckYz/kVrCd9TKc
Uh0y8qI2wh6lR1P41K9yCGQsS2doU9D/AJ/rUKyI7RMIyXVcB25684x+Ar0P4JfCfVfjH47ttH08
rFbI32nULp3x5NvuUMVHUtg4HPX6VnOZNNXdkcC8sWxVdvLI5wTzz3+lRv5aQyFldxt3ZjPIOR14
6V9f+Pvib8Jvg94lsPBFj8P9J8Vro8Udpqup31svnxsCFYEmMmVtuWPOOeOa5L9pf4Eafa6UnxM+
H+y68I6oBcXEMIWOOAsURCinB2MSSeOD7dM41O50OmmfNMTLOgkVWVGAO/HCk+vvyKhcv5RQFw2c
llHKY4xn8K+w/DXwg8Jfs7fCu98XfE7S7bW9dvl2WWgSgSIGUttVSMjcykMzYwoHtT9N8D/Dv9qL
4eXieDdBtPAfjTSpWuDp1qATOgXADHCgo5IGccEVXtelhKkkz4+kut4UlmJHLAjBB/l0xRI37xiS
JMfLgjv/AJNe3fAz9mvVviL4y1GHXopdI0XQbp4tVnlAwsibWMA+YdV6sM4BzXqNp4t/Z21L4oye
ER4GsrXSnkaGLxKLlltXfZwQM8AsSobOMgHNYuSubnxvOpt5UCxuA+AVIyBx0z6cVDvCwyAswAYE
kEAHnn8q9g+MH7NPij4Y/EGz8N2Fvc67a6pJs0W5ijwbrC7mRuTh1Gc54Iwa9g8RfDb4S/s1eAdH
g+IGljxx4yuZBK9raTNDPCjgnO0Pwildu7ufyrRTsY8t2fHzSRK5G8OBk5XtUcaMwBUEgckg8AV9
UfGv4AeHvFPw7sfiV8JrffpUduv9p6HGzTTQnAZ2OWJDoD8y+nNZX7N37OGn+KtHufHfxEcaX4Dt
4z5KXMjW/wBsUqQH8wMCEBwAOpPHtU85SgfMbOVMoY8AY+Vuv+cU2eYCWPKuMg7sZzx0FfYngn4e
/A39ojSPEGg+A9OvPB/iqKIPYTatdNJ5/UlkjMjBwMc9xvBrwPw98BvGfir4nP4BXS7mx1e3k23U
k9u3l20RPE7EAjYQDg96FMfJc8vnTgEDaXY555PWmzlQRvBbBB29h1wDX2h4q+H37OPw38caJ4J8
RR6vqmvTQwW97qdleH7JBKzeXumw4CHcuSAOB1rxX9pn9nPUPgt4rE+ntJqfg7V3aXTb6EF1jJb5
YJG6F8EbSOo96Sn3JUHc8QjkaSTDfcGRgJgHHQ5x9apXa7G/dhm2k4B6c+1fYHg79mXwf8Pvg/e+
O/jRNqNhHdGNrDSLGUQ3kcecbShxuc5DbR0HvmqPxJ/Zh8IfED4N23xE+Ck+oXkNkJxf6XqMrPdO
it0VACQ4Kng9VPtijmNHE+R2jQoZGJDtjgDJU4/lUFzOzYjPbkKAMA4/+t+te4/sz/s56n8e/GG8
eZYeFtOmDanfyfLlVfDRISD+84PXp19K9s0b9nr9nn4o+LfEfgXwp4g16PxfYQzrbS3su21eWM7M
xttAkAfqByRk1NxcqPhVmeQKGJbB+ZN2BwCF7e9QMwSIBdr8gAnoR/8Ar7V3fib4Q+KvCnxQm8A3
ulTv4gFx9nhhhQyLMCeJIycEoRg7iOM19MeJv2Wfgz8CvDfhK3+Lvi/V9O8V6jHJvj0MrJCGDAEk
GJiAA6jccA4PFPn1sPlR8R3chkQpyGZgA5BxjPXNVz8hIlUnoCcYz7f1r6J/a4/ZYuv2e9Zi1nSJ
n1bwBqgQWmpSuHeB9gysxUBQDnKsOCOO1bP7PX7IVh8QfAWufEX4j6ld+G/AtjbSvBc2mPPmMZ+Z
8Mp+UANjAO4n2ovYFFHyrKdyOqvgscKUxz+NRzbPKYyIzlgTgDIwMdD6ivszxZ+xz4O+IPwd1Dxr
8DfEuo+KrjS7hkvdO1ONYmkjRNzqi+WrCQZBGevIHNfN/wAF/gp4h+PXjy18JaDCFuJW8y+uJDhb
KFSPMZgejAMMDqTTUkVY87lICggNuB4yecVntcCJ8EMWODvPf/PSv0EX9h74Dah8TLv4a2HxX1s+
OYo3/wCJbJAjKsnl787jEFOPvbQ2eMV8XfFf4T+Ifg/431Dwb4lsng1W2bagiIZblSNyyRY6gjHH
qcdqacQSOJKylir7SYjkheG570STFNwO456nn5u/rX2vpf7DPg3wB8KtI8TfG/x5deANU1S5KQWd
vCkyKpG5Ff5WIOFJPIAzivNP2tf2Qrj9n+10PxNoWoyeJ/AerRwiLXG2oUmcMVDKp+6VAIOMc4pO
SK5V3PmgOA+VJJBwckf0okcxbysfHdj3r6H/AGVv2Mr/APaLutT1fVL2XQPAWnRStc60cH98qBgi
gkZUBixPt1r0vWv2A/DfjP4ZeIvEPwZ+I/8AwsrVNJlRZNNWBIgcYLKpzy+3JAPB6ZqedCtY+JDI
4LFV3MmRj7p796he4neJo+gI5yP5V1Hgb4f678TfGuneFfC9jLfeIb+Uxx2snyEEAs27JG0KqEnP
TFfa95/wTe+H1l4107wVqPxstbLxvdwqy6O1kvnOzLuIX94MggNjv3ockRY/PgQMH3tJlR2POPp7
81JM5JDRBii8Y759cV3fxs+C/iX4E+O9W8K+JrU213bSM8E5OVu4C5EcycnhgOnY8V778K/2B/7f
+DkXxC+JXjm0+FOl3VwiWJ1O3VlngdAY5GPmLjcScKey0XRaSPkB5i8rYxGvQjnApgk8qTCyEHHI
znBr6W/ah/Yr1b4CeF9F8Z6NrcPjbwNfxRv/AG9ZweXFG7sPLBTe+QwIwwOK5T9lz9lvX/2ovGS6
fpDNYaNbrnVNZkjLw2gKMUGARuZivAzTugsmeJyswU5Ubepz6+v61TmbfIRsPGcgE9a+39d/4JqD
VvBXiXVPhz8VtF+JOr6IolbRdJgDSOecx7hMcMQGxkc4xXxPdW8tvPLBMjQ3MTNHJC/yvGQcMGXs
QRjFNO+wNWKotlVGO7Oc7hT1UhQu3YmO3emXIKRlcbB1DDvSo0ckuFZmQAZGcZobIW9hkrgEYGQc
qAOnrTpLjEZD53Y4PpX1J+z9+wn4i+Nnw6vvHGpa9p3gDwyjJ9k1DXkZIrsE4Lq25cLnABJOSelZ
X7R/7D3if9n3wjpXjAazZeNvCV+rF9Y0RS9vbcqE3NnGHLYDdMrU3RbXQ+agxGxzliB355qR5Dlk
IBXJOSMDJHIr0f4C/ALxN+0L8RrLwp4WiZpXO+8vpVZoLKIgkSSMoOB8uAO5NfR/if8A4Jc+MrHR
fEuoeHPG/hjxfqWjQPM+haVI0l07IOYsDoxIIAODnjrxU82thqJ8U7WLbASc98/5xQVQnIwzD+IL
yKttpV2NWGmNbTpqTXBtPsbKfO83eU8sp1DBuMH0wa+wPDn/AAS1+I2seGPDmp6l4o8K+F7/AFi1
SZNI1ieSO5RnwQhXby4yMgd+Kdw5T4xaQEMoBG0d+M8VDKzusitGW3AgE9s16P8AG34HeJ/gL4+1
Dwj4ss2gvLZiYbyNG8i7iwP3sTEDcvIB9DxXdfs7fsU+O/2j9F1nWdFnsdB0LSmVW1PXJHggmJLb
hG4Qhtu0gnPGRmnzIOU+foowoVUjAB+cYAGD71KkqkYLbSCc4A/z6V9D/tDfsRePf2e/B+l+KNXu
tM8QaDdzPC19oczTx25AyDI2wAAncM5xkYryP4YfCrxF8XPHWmeFvC1i+oavqUixqACI4FLAGWQg
Hag3DJo5kK1zkCMko4JAboQOc0hlzEuzkjoSOlfZfiX/AIJY/GHQ9E1m/i1Pwzq1xpdrLO2nafey
S3MgVd2xV2A7j0APsK+PLnT5re8lt5oZIryNzG8Dja0bg7SpB6MG4x6+9TzIai2QqjyEMRtXHHyj
kGo3l80Lk4bABHXpwOa+r/An/BNj4x/ELwPofiW2j0bSrbVYTNBbarfm2uFGcKWQqcEjnHXBFeIf
Gb4I+KfgH4+uvC3i+xWx1e2VJBLE/mQXCMobdGxA3DkDPY8UcyFY4FJTnaP3bLxuUDPX/wCsKneQ
NFw7Pg5VGAAz7fga9X+AX7K3xA/aUutZTwXp9q9vpaCS4vNRn8i3O44Cq+CCw5JHYCuj+PH7EPxN
/Zz8KWniXxTZ6dcaRcXP2b7RpF0boQsVLDzPkG0HBGc9RT5kPlZ8+sylhuO4Z5Q9PoalNyQComkX
jHLZGPX61q+FfCGqeNvE2l+H9GtXvdW1W5S0toFByzswUHPYfMMk8AV9Ral/wSu+Olhb3U/2DSJU
t0aQxx6mhkZVGeF55OOOlK6DlPkaNkEexVIwT+f5fSnx3d1bjyhczqgbdsEhC8+3T9KZdW8sFw8U
6GzeLKyJMNrIw6hh2NfQ/wAKP2AvjP8AGLwVp3i/QNDto9F1DzPIa+vY7eRwrEcIxBAO04NNtIVr
nzqLszOWZmlkJOXdssfxPWnm4kgljlgmkhePPzxsUb3GRzXdfGv4IeLfgD44bw14ysV07UUhjnTy
pA8ciuCcq44OMEfhVz4HfAHxv+0X4lvNG8C6ONSu7O3N1cTzyCKGIZAAMh43HPA7/Sk2TbU4C/1G
7v1KyXVxKGYMwkkLBiBwefY1Xlm8pT6n5WUjJPrXu/xj/Yh+LvwM8GnxV4v0a2t9ESdYDNa3cc/l
l87SwUkgZHXnrXiFpY3GpX9tYWFvJd3d3IsUFvENzSSNgBVA9cikmnuLlZYPibW4EITVL1T93i5f
AUdABnoOw7VjbACzcBfvH6/WvrA/8ExPj+YlkHhS2jJXcV/tKFiBjuNwP4V8v31heWFxJZ3tu9nd
W8jRywyjDowJBBHY5B6iqTXQuMLi2mtahZIYbS/u7UvyyW8rIp4wCcHBOKbqGp3t+6y3V1NeSxJ5
SmeRpGCg5C5Yk45zj3r2j4R/sU/Fz46eET4k8HeHFu9FM7Wwu7m5jgEpXG4qHYZXnqPSuP8AjV+z
746/Z/8AE9tonjfSP7LvLiBbi3ZHWRJEJYcOpIyCvTsCKA5exw1pc3VlNFcw3MlrPG29Hgco6Hpw
w5B57Vcv/GGt6pbvb3mtaheW2NpguLuWRCODyrMQema6L4VfCHxd8b/G8PhjwhpbavrEkMk3lIQq
KiKSWZ2IAGcDJPUivRvib+wv8Zfg54Lv/FPijwotlodmE866gvIpvJ3PtBZUYnHI6cc+1Fx8rR4O
hB2JuGQDnsQPStaz8Ya3ZiK3tNf1e3hiUBYoNQlRVCjgKA3HboKyPm4ZSHBHy4zz7D1J/rX0hp//
AATn/aB1fS7LU7XwG32e7t0uED31ur4ZQQCpkBHB6EZourhZ9T5tuLhrqSSaWd5ZnkLNJI+5ySSx
Yknkk85960dK8RanoXmPpWrahpjzFfNNjdPEZAM7d20jOMnr603xP4bv/CviS/8AD+rWZstU0yd7
W6tmHzRSIxUg49x+PFelfBn9lT4nfH3SdS1bwD4YbVrCxmFtNcvNHEm8jdtG9lycEZx6jNNtEpHm
mteIdS1oxHU9UvdXaMuI/tlw8oTJGcFicZwPyqvBeXNlPE8TypLGwkjkhO1lYEEMpHIIIGDXo3xo
/Zy+IX7PsulxePtAfQjqaSPav50cqNsxuG5GIBG4cZrkfBHgfXPiT4t0vwx4c06XUtc1OTyrW2jY
fvD3yxwABgnk1mmi0rkOpePvFOp29xFeeI9Wuba4QxvBNfzMjg9cgtjB5+tc3ISQB90qCBnqBX0Z
4w/YD+O/gXw3quvaz4FeLSdLtmup7hL2CQpGq5Y7Fck4Geg7V88lUkXCHLHq2elWiGrDCpLtnDDH
yk0lvhS5LDGMnnggHpTm3lcMSBHjgUTW3lSOfMZwFyTgimC1NLQdf1LQb57vTdVvNJumjMYmsZmi
cocHblSCRkDj2qxqvjPxFrdiLLUtf1XVrfzPMEd9eSTKrcjcAx4PNdd8Gf2e/Hnx8n1K38C6DJr1
xYRrLcbHVERWJCgszAZO3pWx8Xf2T/ih8C9Ettc8a+FJ9A0m6ufssM8k8UimQoWAOxmxkKcZx060
tGacp5GJdsiM27cASOOn4106/EzxnEzBfF+ueQBhk/tOcqQRjH3un41j6dpVzrmqWOl2dtLc6hfS
rBb28Q3STSMwVVQdySa99uv+CdH7QlhbO83w9naKJWmeRLuAjYASflEmc+2M/WloFj5xnKOxBY4x
1JyPfA/Gt3RfHHiHw/Z/Z9K17UtMtSzP5dleSQLvIxnCkDOAOaxXtW890lTa6cPGp5B7g+hr2T4Z
/sdfFn4w+DrfxJ4L8I3WtaPNK6LerJHEpZTg7RI6lsdOOMg88UNrqHI9zyvXdf1LxRqAvtY1W81i
9CiPzr64aZwvJC7mJ/vE496ZpviPUNE1CK90+8nsbuFj5NzbuY5EOCDtZSCODXU/Fj4MeM/gh4kX
w/440qXRdTkgW6jgdkffGSyghlJHVTxVT4YfCvxR8YfFcXh3wfo0uvaw0TzmCEgYjXqxJ4A7cnrT
siLNFXVfiP4s8QaVNYap4p1rUbKYr5ttd6jNIjqOispbBAODz6VzkroAo3ZXPJH8q9o+JX7G/wAY
/hX4UuvE/inwVc6PodiF8+6aSORYw7BQfkYkcsM14ypCR5ZQpUEjGPmyOMevamkiuXzOr0z4xeON
AAADJJyDXmnwPgNh+094Ds2u11FrfXolWdclWAkK7lPcHOcnmixaaTPsH4q3/hL9m34naTotr4H0
XW7HxlqCXcovoVZrQtIkbiMFT8oyrAHgc9eleW/t9fAzw74E1XTPE2g2Zsn1h3ims7aILDGyqDuA
UfKCevvXe/t5BYfix8KZjJs23Mb7cjD4uYuGBHTnrVn/AIKVsI/C3g1sN/x83IJXOcbE7D3xTg7M
zktNT88PMeSFUC55IYN2x3PtxTtjeXMiSbs/Ntb5uo6Dv6e1Nv8ACDgFAeXI4LDvikuMS+awJQDA
UA4Zf1GK6OYwsrn1j+wT8C/D/wAVPFOta34iiS/t/DphjTSpVDxSySq5LSA9QNowMdR7V2Oq/tFf
CHQviZP4f1v4NeHbDT7fVDYzaqLeI7VWUxGQr5XOCM4J6Vf/AOCYd1b2sfjq0ku1N3MbR4oXcbyi
iUEgE5IBJ5rwr4h/A7x54u+O2uaVYeFNTii1DWZxFfXFrJ9lCtMxDl9uAnOc571lu9zfROxW/aY0
P4eaR43N18Ndbi1bTNRYzy2NumI7J+fkQkD5fQY4zXuf7Ongp9T+F+j3F3+z9pniuOd5Nusz3UMb
XC7ypcxyAkHqB9OMCvB/iv8AAjV/gF418NWHiLUtMvmu54rgvZuxARJFzvVhwD6cg8819n/tvX3i
fRvhv4Vm8ASanbqt4Qy6CWUeT5RKnKDAUYzjp1qWWkkeR/ty/s3+HPBXg2Lxv4dtoPDiyGKyuNKt
YgsYdgx3gg43cBT2718XhDbptMe/GeB8p7nA9a/Sr9tGK91v9lfT57dJb2UT2NxcNGpchQpLM2Mn
rj8a/M+ZVnikJfZL90gvkYPQDHUdPeqi9COuh+kXgYpc/wDBO+7WPaEXQr9VCngBZZQB09q/O6/U
SXTuxYncX+Ucdc+tfo98MNOvLj9gC4tZLSb7XJoOo7IPLKu+WmK4XryMH8a/OCRxFGd25grtHJ/y
zkB57EewH41URydmfRH7BLhv2kdKZWbDadeqB2I2p1/IVN+3wpH7QupESEMdNs8AnjGHqh+wI0s/
7SOkyW8LG0XTrovKAcKTHwD25/XFaf8AwUNjkPx2fKOiSaVbeW5B2tjzAcdOfmA/GklqKdmj5nlC
gNskMbAclmJyRjge/eo4mMqqwUKWYk4xkkAYOe3T8ab8rmRnUbmyXYLxn9PQD/8AVUUMpzkB3Tdz
xjJ5wOK6Ucelzq/B8+h23iPTG8RWtxqGiJcBr+C0k2zPEeu05HPQjntX6gfsz33w81D4RXreBNOv
tO8OLdXC3Ftf5M3mbFLk/M3BXbjmvyit2iCPMRmeQY2vzg9B+Vfoz+wVL5n7PfiOMfvSmqXI2vnk
/Z4eOe3SueaaZ2xS5bo+Qvjy3wwvJ9Ok+F9pq9ipEn2oagSUcfwlAWJHU/lXp/7NvwF+Hvx2+HGs
2QOoWfj60jebzN5W3UFisLYxyMpg968ZvPgX48t/BreNH8O3cXhryx/pLMhwA5XdsB3BSR1x3Hav
o7/gnBJv8ZeMsMzKLCANkYG7zX6evB/DpSemzIja70Ovtv2NPhKdVtPBN1qeqSeM/wCyjey/ZrvE
SkYQyFQOMueAeor4w+JngPUPhR4vv/DmtCO4vrCRY5Wtj8khKBxtYjjIZevc816f8Tvib4i+EX7U
PjzX/DV8lte/2jPbsZ4VlUo+OCCP7wHGR+tJ8C3Hx/8A2ntOuPHKnWBq5luLqH/Vws0cHycKRgDy
1GM80JyQWTkkep+AvgJ8DfHF3o9jaXfjSLVLuNCrTQNGhdlDZz5ZA9c5xXhf7QXwNvPgj44k0q7Y
XmnXe+fTbgyKWeEHbhhwQwPJ7c8E19rfEn45+JvBn7UXg3wDYG0Tw5qSWyyxyQgyfO0qna2eAPLH
6V45/wAFGnMXjfwofL3o+mzBjxgYlXGPU9RQpyuDirpmD8Iv2U9B1f4VXPxB+Iur3WgaB5CXVpJp
8is4hyQzyAo3U7cADOPStzQv2VvhL8VdJ1uP4d+ONV1rXLOz82G11FAsYbny8hok+UsMEjOPUV6V
L/pH/BPGMoNw/wCEeUoPpINv8hXw34Q03xL4jvp18LWusXuoRx4mj0kyNKsZ4yfL525x174oTb1u
NpJ2sb/w2+B3iDx58VZ/AcEXlX1jJKt9K5XZBHHKscjfeG488D3FfQevfsr/AAf8Ja7LoV98TL2w
1JGCPbSWwZgWG4DcF/2geD3rB/YNivLP9oLWLW9hkguI9JuvOS5XbKJfOg3bgeQT3zzXf/H39sLx
F8NPi7rnhrTdC0O6tbAwKtzfQu0rs0SuRw69N3H0pa3L0R4N+0f+zVd/AvVrWSGeTU/DN6AsGoSb
VcygEmNlB64GQehFeX+CV0qHxhoB1z/kBpqEJuyVJUwCRfM3Y7bd3SvX/wBob47+Pvif4f0PT/Fv
hpdB09ZvtcMqWM8Ank8tuFaUkHCsTx7dK8Q0ixl8R6vYaVagi8vZo7O3j3bC7vIEUbuwyy8+9axX
cy5tdj3f9rw/C5tY8PH4ZvpUmYZxqCaQ2YiQU8stjI3cyc9a8AieMtsQ4bGBk8Zwf8D+VemfGv4D
+JfgZd6RFr5sriLUUmeKSynaRQI9oYHcqkH51/M15a0hXzJSdrZw5HOCCcA+p5reKVtDBq7JZkjM
YT5iFIOByM5/yfwpATNINzOucAleSMUvmttQDdjB5wMk++BzUScgrvZS2OByen6VSdhWJXUxmQFC
FIyB0pokDAxREhD97IIxQRLIGLusg5DOSd3t/n2oDkhdwwechBjjNSyUtSUoJUDDkMcjPYDv9eKr
iJlifJQOM4Ddcc/5600v5Ctjeg5xuOeM+9L5iygAb2I4Ve4ouOyJxNKJGJmYMzZ6ZPI702ZvNmV0
JBfr3AzTGclHHJVjtLHjFI86rEiKCfm+Y9qYxZFd1wwynIUYznj1qOSUW5LOzMhO0Ke30xSvKkRT
e5HBClV6H0qOa4VzHjIAY53EDH0H5UEkgv1RlMTZyDwR1GOn41Wmu9sO5TtwGOfQD1+lQm6XzHjM
bMQcYb3/AMKBH5cj7iMk7vUD09/wrK+polfUsxmUKJPur2AAOT6/z+lEs8eUaLLI3BR1ORx9R0/y
KheZ/KBJwoHPPSnG4D4LgHGWyw9v/wBVWmyrIsSvMrRBXbJxk4+8On9Kjlgkjk3BygH8XeopZ/OQ
kMyqDlSBk49MEUspZ4925xk7BkcfWgi3cfExhDycBm/vD9eCPy9hVUny7n5vMLhgC+M+nP16VKCh
WRFO1lBbezkE8dB+ZqLdIQU3ZQ4KuecehqbCaHKB9oBMhG7nLAdD0z07AVFc+cGcgtn2IzQZssPP
JZ1GBsIBYe4qGeXzzJhSgByCw6flRLQtWRJK/mWyNkqxz0btgjHFVWlCO25CVbGT3z9akE6RGLC5
YZxnqBxUc8QkLrncCd20nGMetQgW5EQi7kwUVc7RjrUEjAD5fmDMTtwOtSSMku1gpVgPvEdD9O9R
zeWsUgyPNx1A7/nSNdiJpZkQEqxQDOcc1EdsVxuBCqxzjHf1oeQTq0KggL1fHJqF3k3PhQ38JLDn
GfXtVXFYLm4PllUIEnLFienFVjMGcNtxuHUHr70+YqJNw+dVOAWOM59qgby0gyzK2WOFHIHtSHYg
ueWkJYheiHPfr/SkeVU2Ttk4JYLjvkfpSuWXmE7zjcAx61WdikDpjDj5iW4xx/jUuzJemxFPeFoN
2MDOCQDkDpu/PNVnIjT5jyc4IHHc/nTp4vNCOpOPu89RxyagM8aSsAzIjZGw9FHH+c07CuMKtNFu
ZmLty3rkcZH6VWhSRQXWULk5/wBqp90hJdQBGx+5nmq048tcKVY5JY/3fYUbFpDZ1kUZLhmYbfvH
k1Xuo2UxpzHJnL7jwKe8kcJ4578npVdttyVbcyMCTgVJdkyJtyuQQeVJG1fyJqNzncCrYyMEY9Ot
TFiHDNkleFP+e9Q3UjM55AOOo5J9aQ+RISfEYwHO0jptA6+lU55d8Dso3AcFTkH1zSttlQ8nk4JP
FE6eXuiDBiFyMDpmmKxDOxlQAEgc9OMCoyPNl8zavHUk0TMyFTtJz69TUYYIrFlOQc8c0ENIdLuw
VG712ngAetIsjHO48dOnf8aV5dyuS44wQcc9aiknjTbuJKn5jxyTSGiJ32naCCW54IoLDfuCsARj
DemKdIUyh+bYePlHX2NMc7gzBdwX7qikyrCEhV3ZxzgKOaiklAZWPzN146g09UaNd2NikYI9D/kU
yVo1Xg7T1HrQO/cG3hC7kliAdo7VC0bOoAUZDfwdTUryxqhURkqT179KZIBFJnOcd6LE6DZwgJXO
09ahOSAScjsAKmlwY2ON5x3pjKBFnPzK2Ae2KQh0h3NwRjBwccVAYnkdeQW6HtTjJn5Af3bZxxSI
DuJyFCjJXOM0ANZ2DBWXaBnp2qVlBAbcNnucdqYZhI+SAoxwp5pXHAbACigdxmzzsshzn/GoyHMo
7tn7voKkLKIhkHIOQabMpjJ245Gc0AP8p3BHUjJ6+1QNnkKec5wBzU3mZQ7O/PI5FM+4MBQc99vN
PQoYC/zEdQPrTnDk/NgBu9IZQxYqOTnoO9JuLjYex+lIFZ6DndtpP3cDBOOtPjJkO5iRt7D6VFIf
mHUDOMj1qQAxdTyeoP8AjSGlYA6q5zyBzTXnMoyQdxGcd6SWTg9jwB3xSySKVCqM46lu9AMayZDc
Ef0pNiu3VnfPAA4oJ8wnacD0HQUvmIu4ZO7sw4oEgkQuRtOAOvqOKFj/AHoBJYY7Cl8wspBJzxnN
O3b0ywJP96ixQ15SNy5BI9eO1Ro7yK2MJzQfmOCRk5wTQVZCGYfpTIaJDLuyHIXHBPQ0gk3NhSF9
QRSHZKeoyByPWkeQMp2DYvqfpSsFxRDuyjM2xuDk0wBkcAyEhm/ClbaAfn3E9hTZCAoGM+hINAXR
Mo2vjczYGQAeKeSFRTICcH7tQKjMvJ5Hb2p67WG7OCeee3FS9CkNlQTSH5iwHIHSpvLHlg7jx27i
otjqD5ZyxJ5+lNmY4beTjPI7H1oS7AxzbUORwFPp1oLbSXUc9Dx/Q0YQjO3AB6EURhgOAThuciqJ
RGsjbMAEfj0qVkwVLHcO5HHamriJ2fBZTzt9KmYRs7c7hng5pMqwEIsn3d2eTSMxwAqEDuQOlIWU
/MRsKgjHc05y4X5U25/vdalalEanngAqPz+tAJSRw2SM8Ecc00gockZz3o2EoRn3xVktB5Z3MWHT
t14pRvTcu4lu6sMHFPQZO0sVyOKDl2LN8wHG4euagaSI5osk87u4IpzRkfMUIx19SKHJygHXvml3
kthjheuKOYLXIlVlcIzgI2cZP1qy5LK27JxxwP1qOTbKFBO09M55ppHlqTksfamUlYeAiHMuMkDG
ec0xFCkEEjJwBmht0kY4I9qC2Au5SuOMY60IYLGVc54HHynnn1prfOMAt15XFOM4zlto46Y/KkkQ
+Y0gbBxkAU7MhoQssa8Zx/tU4BZ3zu2qAOneml12KRg88d80NkHHIIznPah3JWjHxyRqSvLY5BAo
eYEErnBzmmOgjCksG47fSh4+VXzAM881Frlkg2NnaSGA5yciopMqyucr34FBPkkj73GPcVI0iYG1
sOuAM8mpBCSSdieAPmJ70uz5QQf3eBxzzQ4kJLNkAnr/ADpPMZogm7p0BHFUHUc8oDtsX5j0B7Un
2YHJD4cnoB1OKAD92Rvl28Y705gCVCsdw564AFFhiSSqw3MhEgHPGRSxqQiSby2RwMUwfMzp0wOT
mhI1EUi5KlVyDgHJxTC4g81S/wAmMjBH/wBepNzOoU7mROhY9KieXC7cldwGR15qaTy2JRQ0kiNj
eOlMZCGdQTj5ehPqcU4O7qCCQQe5/lTvtBjVE2At3680xANjZIUk5zmgAwMMT8xTjAPfNTTSqygA
DIOSwGcj0qPY/JIBDdBjr709fKiGWY7U5wh5PtQAiNvALAHAPOcUkiCVuSQnovc0jsquXG7Hp/Kl
hY7EITIPGW65xSvYAUbUfGWYA4+tOkXb8yqeP4T0oeFopSCxUAZwO9IEKuWDthwQe+KmyYDXJc9S
hUbQfbPXH0pyyhF2uCehBxgdKY0YdgUXb1yV9KkZQEVW27FG75RVWQrINzvMC2M4IBBwTx3pBhX5
yyHrjjtzSyBQoch+e5HJqQpmEncEHYg5z7Yo2CxEqZkKKC655Azx+NSgRykk/KFGQAMGo1uAqsOA
HOCQCDj6U6dQsamP7qkggEZJxmhMLoJEeI+Yw3rjpn1+lOecgAsWPQfjUJdcrv46nA6d6lkVY43J
yXUZABpMYyS2kJJZsKwwM85NRRyuDGQx39+Og/yKlZl2Hlgw5AbOMe1RFfNbchyhH3c8596VxE8j
72O1ckHJB4B60hbfKTKwbA4C+g//AF1I+2ZWXbliOSOOai8sLIMjseQe3SrTGROoh2MeRkjceKmD
NBEzxkMGJ+UHp+P+etLuV5AhJ2MO/HSmn5CAMsNvIzjH+NWIhb9/sGeowe/NTOFaEqSMg4GMZ6U5
IEJfHCryc9KSHyo9xBLxgjrxmlcVh4SSOThdnHKjIOPWo44Y5IiQzF1O1Sq8DNMY/cbPznPG769P
ypzgmQDaUMuG2huTx0xTKRFcrKAyuPuD5ck9al0x5Ft5do2K0gUg/T19KiugVBQIWBAO7p64q1Yh
7nSnJ3iIyhcgfKDg/wCFJkPQ+hP2U4Wb4n6BKWiQ776Qp6lLGdgx9RwM/WvnS6uRcTTOMFmdmPzE
gZY8Z79cfhX01+yZZuvxGs7hovMt4LDVZfNJO5G/s6ft9cD8a+YmVVmZVVUc8lT3POcd8VEiad+d
v0K7BnkCBguRnNLLGVY7So+poaANKd0nzDJ57exp8Ku52gAMB8xJ4/CsWanS2szrqKPIwk2j5kC7
s9hnPXtU3MFyo8po1DbgmNv8xxxVaBWgM7RofMUsuJMcAfxVZtpmlaN5GdgXG8t97HHf6D9a9CS1
OWLfU7vQ7AXEA4xx8reYCQc9wRx37+ldXpGlPb3TYd/LOVztwT8vJXJ465AGa5WwjLAReWfJKjY5
XKswxz/M8V0VilzJffZdskg+VVZNqhMDBOaTTsNWvsfOepu73TK5OA2PmGM+/wClVXISTKAlOuM1
evEN5fzmFSAGZsAcYyec1TVQfnU87ugrHZnRFHo3gNg/kvIHDqcIVJzu7V7Ro/iPV5NHjsJNRuV0
63wgsvOZoVGWxgdB16dq8Y8FThiTEjN5cZkAYE8KDlcjvX0dZfA74gaZ4GtvGcvh+9tvDc6q8d78
piZScZI3Fs7vbv71srb3KkrbmfoXiC/0S7uW07VbvSmki2mS0naMvtyRuwefbNVri6kn8yaV2mlJ
Zg2Sd5OeuOvfr611vgf4TeLPielzb+F9Em1u5tCrXIgC4RCf4iT7dB6HpXO6npV9o2ozadqEX2C6
tpDDLDICrqyZDZGTjnPHtWinE42pJ2NCPxhrr+G49ITWNQGnRgLFZpcMsaAcDChvlIxxj3rY/wCF
teOY7aFV8Z64Qkm4hL1xjg4BGeQePeprr4IeOovDUHiSfw/ff2BeLH5Oo+WCjlm2qcZzzxz710Fn
+yn8W5IFf/hAdVmgBMm7aqY4JHG4lj0HC96xlKLZqotHKXvxY8bajZSWlx4w1q6hlVklhe/m2Pnr
uUPgj9Oay9D8c6/4VhurTStWvbO2vMC6sra6dUOVIyVBwe/512er/sxfFTSrC7u9Q8E6ja2FtGZG
uWj27VAy3OenHfHeud8I/CHxp41sLnU9C0G/1ewsQ4nmtoCyo4G7yycdcfN+I55pRtcTTOf0fxTf
+G7tNT0y9ubO/iG6Ge1kKTR8EHaw5BIJFTa14m1DxdqUl/reo3OqXjxgPPe3DSuwHQZbnv0p2keF
tT8TeIYtC0vTZb7U5X/c2kCbpnIBYjb1zxnHoDU/jDwPrfgTVn07xJpc+k6gArm3nxvAJIVup7fh
zWr5UyfUdonxD8T+HNMudP0TW9S02xmZvOsra6dIp88MWQHB4Fc/cXTyKS2ApBR2zgKDmun8OfC7
xj4msrjV9F8M6hqej2xk87ULeAtFHtAJGRz3I/P0rlJItzZKO8fLLk4PY/j/APXqLq+hlzTuRBXl
gQqpkjVdysHxj0788YNTSwyQSxNKEbIJCgkN+WMfjUEdsjq0j5ROyEHnDAAj17/pUk8K7lYq0pVt
2SudvXGecD61SujRJGr4T8Xa34D1iPU9Eu7vRtQjZ1W6tJfLkVTgMO/UAZz1qx4r8aat4y1281zW
dQuNVvJgEknvWDMBtGB056YrnZWeV8bj+6+ZlX+IY9c889RTpsKeoIZgNqr0Hr29/wAqLJg79Dtf
DHxi8VeEvCl94f0zX9QstEvCd9lHMVhcMMEbTwMg9vSp/h98ZPGvw007ULbwt4iv9FhuXM8sdpIN
jNg5bDA89ea4WO3j3FYIwY8EFduDn1/HFAk2zPv5DnJA6j2Hp3/SotEhSktz1DxR+038SfE+g32j
6p4w1C/068jMMlvK6srqSDg8eox19ea4DTNavdF1K21SxunttStpfOgnBzJHIpyGBwcHJzWZcRoZ
Thi525ZQf889akgmC3EiMoZVUhRGMMPf6U1y9EUpNM7bx18Z/GPxN1O1n8R61danc2QX7NO5VfKI
Ib5dox1wef7tWviD8c/G3xWtrO28Ra3earb2gkCCUIojyVHG1Rk/KB17mvOwjM24w7gYyDljk454
GeeM/nUz+asQhlYqXAbLc55P5dvemkPm5nuMlaWR2R0KqeCQQvOOAWxjHrU6MRbgfKkncKfm6frU
um6Vd6rfyWtnb3F6xBlMcULSEKevCg8Z7+9WtW8Ka5pVu8t/pd/bW5bBe5tnixwcEFgBzimmgcST
wt4z1nwB4p0zxDoF9Lpmo2D5huU6rkFTlSCpBBIIIr2pP29PjFEePFME7KSWEmmwYPTHRR/PvXz/
ACK5kyj+YDzgEDA6cjPr/OpvN3xqIgFwwOXHy5xnn2/wqGritbqbfjTxnrHxA8T6hrusXj32p6lJ
5s7MccdMBRgBRjAA5r07wH+2H8SPhd4WtfDWkarCNKtA6xRT2kcrJvOQFcjOFJOBXhrzSyE7iZGL
bWJ5aQY6g9Rk/wAqtJvDeUFkkcghgkZYjdzg49xj8KWiLWvU9f8AD/7WXxI8O+H9d0Wy1iFrPV7i
aadrm1WR0aRcSFDgbQTk4xgHpXk0F9LZussMgSdH3JJtGd27g5PGc85NVXs1i84Evbh0zsaP5z04
x2PX8qFAC5MmwHh2kOPl9/Q9KqyYtUfTdp/wUJ+KtlZLA9xojJAuwu+nEnAGF4Dj8fw4FeAeJdfn
8U6xqOr3jqb2/uGvJhFHsBZjkkDOOvbtWNeOjo+91UkDJU7c/jj1xU8krxBSIUUg4DMScY9qat0M
G3fU9Y+Cn7Svi34EW2qJ4fh02Vbx0aYX8BcDbnoQwI+8ePYVd+N37Ufiz48aNZWmtWekx29lK0sX
2S3aOQkjHzEu/HX+vSvGH/czQ7S8jSdYx34yenbH86bJE0sTiQmCcj+ROf5Hmmoq4+eRZnykX8TM
42hVxzyPlPFIVRcb/uOCWPILGmx2wli2KWZSRhmOTkDuQKiBhZnRZIZvKXb5YkBIGepzWyt3BJvo
W0nhVP3qOI41wNoH6k/TrXqfwO/aK8TfA/WJ7vQ5or/T7hClxpV4WMEjY4cAEYYYHPcZFeYqY9oU
KACCAQOOlQ5j3AZX5ct5e/BOOD9RUNJ7lq62Pd/jP+1j4p+NegWehTWdp4d0KAs01ppbMEuhgYVw
f4VIPA7/AErW+BP7Xd78DfDLaLYeEtK1JXmZ2v5ZmhnIIBCNhTkA5x6Zr54ZSJHdGKwdQCxJAPNP
LXEasoKKEGVbaBgn379e/qKjliNSZ6f8cPjX/wALw8Uw6zN4estAvIrYxzrYOXaclgQ78D5hgDJz
kCuM8KeJNS8J61Y63pWoS2GoWTGS3ubdsMjYIOfYg9OhBNYgZ0tz+7xKwxhupoe0WCPOSM5LYOcE
YIoaQXadz6q8T/t0r4ps55pPh1pf/CSx23kWviJpgZYJQuVkQGPcu1jkDdXMfFj9q0fGb4bafoet
eE7U+IbFoV/t5pgZGK4L7VCfLvIGeccnjgV8/vNFFISxI9M9zjP8qlYv5hwS6qoCggDI/ChU0xc0
j6F+D37XF58L/B974Q8SeHf+Ey8MyxhbawllCi2j5LIcqwdSSMDt+ddjY/tr+FfB2iayfBXwrtPC
eq3kXkR3ttLHgNghJGRYxlVPOPavkkRN99mEcSZyOuePXNRIoaL5W2u+HBPG7jp7nvSUEh88meie
BvjR4n8D/Er/AITKC8a41yWV5LvzGwtwJGDSI+BjDEDtxgY6V794o/ax+FHi/XZNa134OtquqvtR
rqS4iZ32gAfkMD8K+PmilltX3kFWPG0c/wCeKdEVt0Vd7bmHRj2Ht+H6VajFle8e7ftI/tMX/wAe
Ly0trWxk0nwxYbZILCUI7tLtYNIXXpgEADp1rxfTdVu9G1W1v7KZ7W8s5o7iKYHJR1cMpH4gHH6V
WmDTwqdrs0YJAU5Gec5HH86jAkktS0bNnA5Yc/8A1qdktDC8k7noHxZ+Ovi/4x3Wnt4nvo7v7CHE
CW8AhVVfaWPy9fuqPwrg8lZcqrjA5zyo70p8z5lbO0YBA5A/zmnKqwk5k8w9cheM9apO2w0m2Omk
aQEhdueSSTzx0/Oo4cxDdkK4HTOcetEm6YlgCzLwR6fhQ+ZUJTOQASoznvmnFsvlYuFgTLkkknBx
3pbsyKQCpLMc9OoqIqZWwflJPJIzx/8AWpx23aR75C46qzdz14zTJ5Wx8zy78kMXbhlfHpSxPJG3
nBst1AA4HTgj/PWkmiaYkFjwfvdN3tTRMULBWwFbdjruP+RSFyjysgVkOCqsfmx39h2qMsojcYwW
fGSMHg9KcWkjgIwSDyeOlQAFEPl5Ax/EB6/1p2QrETTSMOUxIhOPT0HWo/L8tEI3M5HJPQ+oq48T
rKAqnnktjBGQcUwyGEQuqk7wcA8g57VLaQ+XqVFjj+UthixIwB1P07VLvLogA+Uk9O3HNEIQMFbz
CV5LZ5z6e5qSaVZB5hBP91StZiVyBQoQ/MGHIJH16fyqUtJ5f/PQgnAPamyRmRHUYWPneBn5f84p
rXbxlV5JOVJ9OKq5ohpYQhCJcZGW6jnt/OhLiSVGMeWKtuc5GPp0oXdLI2RuZASxAwF47/pVZ0+z
yRMGJZwSSRwec5Hp+fena4rNEt2/lSs0m7kgBFXkZ/8A10GRfNwXKY4zUMg3MsvO8k9zijIOEZdx
BIz2P0qb2ZnqLNMN5kJ+ZQfxFQ+aI3+bCsSWyeo/Ch3Mch4wC2Me3eoFPzy53KoGQCOc/X8aXNct
K5OHJJwq/dJyvbjoKgd1DMysfmGCh6jn/wDXUc0hdVVflc9z1I5qJzn5xwQuc5/OkLZkruZpRJEr
Aqc59qim2hQcfJ0GTyB/Wo7mYtnO5QOOOmKSYK6IQdyBSc9/zod0arUjkchmB25HRl4457+tRyHZ
K2SqIRnOfvE0kkoZ1fGBgZzUEoFwQ8ZJIGSfep1LvYVyhTaPmGMbc/59KqFPLGAVEQbeAAc9MEHm
ppHViFf73XIWoHQFkbc0Ybgse/8AhVLUCO5bAAQYGckjJqqXXDAKSScjjI/H2qyTHarIWyMDdjrm
qFw7r8+diseBz61LViZWI5iWm3bTsHDFTz7Y7VDKiFgXJbvtPXNT3MyeVtwFfknAzntxx+FU2RWX
5N7OAePftTQkkRLKCWCnJzkkntUcxZDhypCjjb3+tLcR5yxAjJ6KB1GPWoYOEYFjt6FR396C7WIr
iVpmQhMkc5Ue9RuW+VmAAPG1T3pJJZFbI+UMNuc8U2Rjl1XlVHRhzSY1qBMjHJJUjkqck81WYK1w
4GT/ALJIzUjMyZ2SZbqV6cU2Qn5Co+bAzk1Fyyrcp8uEXLL1PfH0qKVWRhuYADsDycVLd52sFLDP
8XXFMby1I3HzG29Mkc1ehLI5EMnzKfmwCCeailj2AEv8vccAmpIVdj5gYhRnBIwOnSoJIDjIGDjP
HapKtcSaLcgwQoC5+tRhmkJUjO35cjpUrsWXJHmM2MMe1MXBLtjtnAoE4jC2xWTgZJIOaiaR4RkN
uX0HenSZeVGOV9vTinEFFIzgZPWmLVEbNuTfszyTgnpUMiALj7xz0AzUkhORgMFx9496aGHmYU5Q
+h5qWyiOR3CncVHYDHSgkben3s9fSlmQl0PJUe9DRPGxGwA5PHXijUhoieL7wXpjOSaBHG3DMRjm
h1PJx8pPTOKkKKxV9xAxhQaW44kG7eAR0B6HjNI7L1DYYdQe9TSKIX/vdh7VDLbo0id8/wAIp7Da
uQyqqyKEZmQ8cjpUpjCqW2g4PQ8H60jZLBTk+lDRhHG4hyfSkRYUbmQnbgjkCmGM8k4bPXFSSu4X
oeMimgIqDJJcnv2p2HyiDksF7cAZ6UgLKuHXcKJMw7QeMnORTZASoIbg+9Iew4hVcryD6j0qPawY
kjAPPNTKTGM7TyeCB1o2NkkDrxg5NBQyQHaVH3sUxgSpZgfl6g96cpLEbgVBJzu7etPSPerAZGTx
kdaRFmRowc9dgA/OhiGbCqV/2aVxlpOMAdCTSRq5w3cd80yrDQv3gAwHX61JEAFwR1HHNKUHmD5j
gA5zTSOdqgZHBJ9KBiBSBjBLE9TTm2mIJubAbPNMkk+QDocUkgcbVAHTtQQ2IyHyzt5U8A0oMrR7
SAyjueop3ll1IyFA/nTim35ejjIPfIoLRGWVDuXBzxg/SkxuO1hyTwRmlAG8pt5BzyOlPdcupx90
HBX6UCauRJG6kYDbs/xU5+SwI49cc5xT2Yq4YnJz0PekxvdsAAnoDSCw1nCfMCCePrSSNJIA207h
37YpxUJ8zKDjr3z9Kc7DnLEEdACeam9xEe12ZApGAc5pWG2QrwxPNDMS+RxmpYQgYkgbjwODzQik
hhjUB3OGJ7ZzTUnYB125zyD6VIyhTt29+CKJFKxlxjBPPHWmhkUrF1GWJx79KcQkbnb8xHftTjGG
Kgj2IHegFQ4U8BepFDAa8q+UCyZIOT3pGEjEszMMjHzckUSRs53D5Qv4/jTywI5JbI6r60krhYRj
+7UAhiOm7imp8ud+cHke1SSkOOFy47HrTDEASGUluABimmFhh3NEOGdfSpUJAyAO446UuGQbWBQZ
/OgZGSAM9R607CSGynK5yVYZwwFM2F8HJ75NS5OCzMV4wRmmhwATtOBycj9KhpIAkdRMwwT9euab
OmTvTczNztY1M6YWQl9xfkEHFMRw25G4GM8ihFIH+UAncTwOeMU0p5h2urMMevT3pzsdgywbBxyM
AD3p8mGTGSMnb04FUBE0OSxAyM9c0Bi2QBhl6k8igRbWII59c8U1SdzMEDrjoT1oCw5oVCZDbiDz
9aikU7t6jcSfrU5GFPJGO3Y1GRmFiMhyMYoCwNAzRFydqeg45+tIGCE4BlK/n9KehEURIJ7cZqSa
ZNp2nc3XgfpQFhkm4R/Nkkj6kfjRKuRlUAwBlh0Jokk87hQWB5Kj0pWTJEZO0dcmpsA2M+YfmfGT
jrxTNoRzgsWBwoJ6jFOEeJCgXdn0oQhQzHIX7pBHIppBYZFE82WZtoHUEVMYRuLMflf5VxTUj3tJ
z8vr1oZSnG859abGoh5Tb1BK88AimkhJwRxgn6VLJH8i4yRg8egxTJwRIAGIwQNx9O1TYGhrYM5w
OOTuB6ipFQqWzJzyevNIGbyF3EE4PemITC+8rksMAnnHfFCEPuGkURr8wPXPGDSYMhYldrYyFHC1
JKgR13EbyDj2o2EEs2N4yRimBXcsJMSM2AcA9qkhVx8sYCBwS3PFNA4K/Q5/wp0Ukanlc5zjA5HF
KwxSoLtJuG0AjjpSiLdCWEjbVPA7c0iDhVYsIiTyx4+ppzBNxVEYrnJ4zmlYBWURLyxJIK885FR7
fJDtkkDkVKxWUZJGRwVBxmmmaNJDlmdgNhJHbFC0AbMzbFZVyDkU2MlGCt8wPIz29/zqRzhVJK7W
OcdjUTvmUuANoGOBj8KAJpGkLOXY46AVHGVLH5WbGeSMU6QCZjuB29SOmPWnyzYkXAYKFwOBz6nN
FgIdrtn+8vQdaeUIizlvTHcnpSO+JHYZZc5/2qXYxCgqSAecmhWQrD9isQ7MEQDbx3zSSMskShQU
IbB9G4pZlSSN2YrGoJO3PftSb5LgBiwzu4y2PfpRdDGMGcDGCg4IA6U7Z5cnHyRkABqRNoaTClQM
jeRnHByaaCY856ADAPA/WjQQsjK0gXDAgcFTgVJO4jRdo3Kx9Cc/Qio4x5Lsz5ZW+YDGR/nmlnbY
VUAgn5gccEelUguI3mM+Ap5TOe49KUqHlKtkBVxgknnBpksrs5SReEGMdCeelTvKoXa67OMGRf5U
2Ax4swbiSOQDjoahljlIJYcE59BUmChB2mPaPlPrQsxiMnzfN0z1x7c1NtR2GGOPAQE9ckn+GrCR
B2JZvMODtHbHufWknYwSAnhD97AwQKawYTOIpNse7OH6n0rQCCba0hyeSPujoR/k1as/Me0ASIBj
JuVs4xwcjFQXQa3jAJVkJ3ZXqD7mrmkRNNZ75ZfkLnAyAOBz/OkyT6c/ZYtDJqeohQyyw+GNauIl
Bz8wsnXJP/A+lfKdzJtuJWBdpGb5S4+YDceuO+K+p/2Yp5VsvFDR3AiaLwhrpQgHchNtwSeTxnqO
eK+VJUIuJGUBhknGeB6Cs2CXYc6l1eQxg7iRnHUmmGR4/kGFPWnCKWMeaueBkkdB+FMCKXbLHf1J
ycVnYaR108UdyJkRQ0vJLZ+dfc+tWlUW0MgX94r4Ubj1b1P1yMimBGF5++MbTxjcwz+OM0reW7Fl
UFXIaRv7zcdOnA4zXoctmcsZXWp1ujzJLHHEHeOJUy4PJ3Y5wT9K7XS7iN9WtP3TMsz7WAycJySc
D6CuS05kVYOhPmZw7cA4PQf5zXT6WsMd9Eq7VUnHmDAIJIyPfrxTk0UnqfP+ZbPXBGmXHmGMxynk
juD9RS+I9MisdbkjRHjRvmXJBB65x7elMuow2vuqFnUzELzuJy38zV3xvbNBrzxSsoKIAMDoMf45
rkdrnQrn1h/wTe+DmhfGb43ppXiGSQ6fYWB1RbaErtndJI8K+c5U7uQK+mP25PjZqk/jmf4baXCm
i+HdEbyHtLU7UuN0SOrMnC4XdwOa8V/4JB3Bf9o65jdzhfD14EDD/ppBnHr/ACxXTftyoIf2oPGG
EfeBayId3HMCZP6evH40RirhVlqkXv2Sf2hrf9n3x9cyalbLL4Y13y7e8uVJ32wjDFZVUD5h8xzj
09q96+LP7E178UPibb+J/C+qRXXhbXpI9Qu7uSUeZFvbL+UM45U5BxkdK+EtF0q98QahbWulRPdX
13IIobeKNiXY544z9celfqv+zZ8OLv4T/A+z8IeI9ct49Y1VJJ4Laa4KyQebGAIlDEMSp/ujqTgU
pJJ6E6vU8Q/a4+MWi/Dn4a2vwW8LyNqdxaWcVvcSyO2+DyipjBYAAliuTzx6da9e+HfjrxFqv7FE
fiX+0riXxGNLuHhvGGZN6yuqHjrwFGfSvgf47/CrxR8JviBqVr4gR7wyyCSHUCzFblcAblLegKg8
5yK+7f2dvEc/hX9iCw1q38uafT7C/mjD/OjbJ5sA+3GKi2o43s2efaN+0Xr3wx8F+IvBPxknuLfV
bvR577StWkPmy3CSKQI2AGAyk8fl2rqf+Ccc8dx8KfE5T5kGtnkjhv8ARoOeQM5618uftM/FWP8A
aQ8Z6bf+HNFvLiXSdJVbpVTiJi26Uk9dgyvOO3tX0l/wTfSUfB/xgvkmFm1htoPBY/Zohu4J64z/
AIVbjYpNu90d+37MFp4d/aQ8N/EHw3EtrpyLcG+slKrFEzQsgaNf9ovkgdMH1r5Y/wCCjtsp+M1r
NtIZNLgIKgl/vSHgjt8n5jtXtP7In7UV7r/iiX4ceJknuL5ZJP7KvhGSWjUM7JMckgjsT2wK8i/4
KNLFH8XrA7v9JfSYWVdxUlQ8ucY6nOPzqVqzOST3PPv2Pv2gdU+EnjrSPD8hn1Hwrr062UumSnCx
yTOqrMoOecscjjI6+tdj+3z8CNI+GfiOw17Qybay1vzWewjiAS3aMJkIRztbcTjoPpXzn8Lif+Fp
eD/3mFbXLIxkkg489Oozg9q+5v8AgpsgPhTwfIC4eO4uW2rjDDy1yOfqPyrZaMlq6Pznnd7gcgr8
q/J1I6f4dKsRlyxjDDepYFpMIpAHv16fnSXEgnnWV2EhwRygGOc8ce/61VkcKzbyXAbdksApPTpW
10c70GNcnyXkjX5ivyAc5+v+HvVu5neCWONgF2fK205AOOQOenTvSXBd96t+6RgCCOpI6fT8KhDg
sF4fy+SGByw7H+VMXNJFkTmMuXDF3GMKeRzkk9gegpskbNGwRl+YFm3jGT1yT2qs87bXVcxoTtUs
Tzgc8c/X8qnu2CvIhk+VuDu6dMH8+ayaKi1J6iTKyqJAFLMCTvG4emDz0p7kyOpeYvyVX+9k5yOu
SfmqBoBF5aSHKl+qds9KneWMbiR5hjcbcL1yRzz0/wDrU0gktbETxLJjYXPl/NuyN3uc+nHanReZ
IWVAzxR53hl+Y/jk/wA6QwlVYKm2LoW7LxyP8+hqQbjK7RMoc4Clec8//WqkwUEfY3/BNAK/xh8Q
/JGuND+7xlT50fYdP/rVP+29+0L4xfxt4q+HUM1rB4fjlhUL9mXzWHlxyf6w9Mlj29aof8EzbZrf
41a820tv0FwZN2QSJ4f55P5Vwv7cMxg/aO8YB0dELQfvQg+Vjbx9/pWXU6GeC3BR3TBGT8oQfd/A
HjkjrXs/7MP7NetfH3xXEzrPp/g6ycNf6kysFkwRuhjYcbyD68A57V5b4CstJ1nxRpVhrustoWjX
Myx3WphcmBCRubB5z+eOuK/RT9pfWNR/Z6/Z3sNL+GWkLbeHJYY4bjXbSRAY1ZQoYjqTIMZk7Z45
xgvdg7WPk79rXQ/hV4J8XxeH/hzaXaXdpvXUJmuXuLeR8KQUdmb5l5BxgV9Pfsd+EtI+Hv7LOq/E
XT7NZfE11YXc001z864t2l8pNo6DABPfmvzsF2FvPNlHzJIRtJLA57EZ7jPr1r9cPgl428M65+zH
BremeF00nw9a2F0suhoq7SIt6yr0AIcqxyf73NKSCOzPlz9pPxn8Pv2hP2frH4j6aPsPi3TbuDTZ
7dCqNucgurR5+ZOSVbrjNeI/spfDzRvib8ctC0PXlkm0+482XbDKY2cxoZBn/ZO3BHv2rznxxf2G
teJ9W1DRtPTQtLurp57eyQgiFGJKqDjtnHTjmvo//gnt4n0HTfjRHpOpaH9t1zUYXGm6ujHFqVik
aVCp/vLxnnpinsOFr3Z9bfEH4leCNM+KP/CovFGl2lppGu6RHFb3C24Ch5DInls/ReFG044PevzW
+MHhnT/BHxS8T6JpUz32j6dqL20EzTh2dR0ye+M7fwr6s/4KUa74eOu6DpP9jy/8JKkK3L6spwq2
xMiiMjnJ3AtntxXxRY3NpDeF7pJbyBWDyRo5UvzkgMMEZ9fxpK4SipM/Q74A+CPDHwY/ZNuPiPba
Lb6xrN9po1K6OoRh1baduxQfurtGTg8nmvM/2u7T4ffEL4WaD8VPCLR2eo3dymlzWsKrGFYBy25A
PvIQRxwQQa+qNG8SeBb/APZTTWINAmi8BNobyHR+ri3AIaPO7rwed3vmvyk1y6tptSuUsIpbew+0
SSwwyEN5a7jtU9shSASOtONyXa561+yR8J9J+L/xl07SfERuP7OhtpL4xxkAXLRMhCNnkLyM469K
+2fFE/wu8Y/EjxN8GNa8Padod7eWcX2G9tLSOOW43xl32vtwrLjjPXmvBP8AgnHrHhY+N9UsrjTZ
5PF8lvLJa6iH3RLajy/MjI4IYsQehBA4x0qt/wAFB9V8NW3xVsV0u0u7XxlbxRS3+oLLsjeMriEL
82dyjJyAOg5NPqDkkeH6H8MrTUPjha+CEu5I7Z9dOlm6CDcU88p5gX1IUn0/Kvsz47+IPAX7Lem+
EvD9t8L9F8URXFnIonv44lfEQRSzsYm3M24Ek4ya+FvhKmq3HxT8H/2FdLBrD6nb/Zbi4+aNZTKu
0t1JG4jPrz61+hX7VF38LdNsfCY+L9he6tqLwyLbtoyyKm5fLMp2hxhSxTGc9KT3KUlJKxwvxX+F
fgz40/suw/ErR/Ddp4N1KxsJ9SjttKhjRZVUkNFIVUbgdmQcZGa8m/ZgX4MeE/CmseJfHs1pq2vp
HNFBoWo2vnKyAKwZAVILNgDPQY7HNfRnxGeK+/YvvZPhesNp4VGlXTSxaor+f9kBk83YQThyQ/XI
5r89/B/gXWfiL4o0vw14dg+16hczgRgIWWOMsA0rEDhUDgk0Iatc+0fgL8SfhV+0J42l8NL8FNI0
ZnspJvtbRQyBVUL8uBGpXhhgg9q+WP2hvhdF8GPiprHhfTb2W+tYFiuYWlX5gkg3BDzg46Z56c19
e69rXhb9hH4WppmliHWPiLq0ayuZQX82QbVd2IAKRgE7R3P418ZeHtH8SfHP4mwaYNQS88S6zcFW
vL+Zm3fIzHJPO1QDwKuLtqS1d6HU/st3Hha8+MumaR4s8M/8JLaa0Rp0Pm52W8pZCr46NgdcHIr1
H9vL4U+Fvhxrng+Xwrodton222uxcraDYrFDFsJXoSN7DPvXv2nP4I/Zal8DfDvS7BNV8SarfQu1
xcw73CyybJJvMxwQ3Rewrzf/AIKRj/SPAfr5d8F29VOYOanmd7lNLoee/srfsvW/jm1fxx44iXTv
A9p/pEcdy+xb9QGDM7BhtjXA5PX8K9M8Iwfs5fGrxZrfgvTfCa6FqZikisNUlcoLohioeA+YckEh
gCOfSuv+HaC4/wCCfjRv+7D+H75Tt+XaN8vHU4IrjbX9ljwx8Lb34S+MdK1jUbm/m1rTo2huXjaF
vMHLIoUEH8Twfas3K7KS1seTaD+x/rX/AAv+PwDrN3LbaYsb6hFqnl5W9t0IBCgHhjkAjPB55r03
4tzfs4fB74gXHhbVvhxqN3qMMUMkk9lK/l4dcqMmZT07AV7p8UdQ1zTv2gvhs+h6dDqck1nfxXMc
0nlhIMwl3Dc8gdBjnp3rzP8AaO+HXwX8Q/Fue88a+Prvw74gntoA+nwgbWRQQjZ8tiMj0NCYXPKf
2wv2ZND+G2n6X4x8JqmnaJdvHZzaZLvkdZWUlZFcknkKAQT1pn7NH7KNj4k0m68a/EhW03wikbiG
0upHtWuMqCJy4KlY+eOeTXqv/BQC+1qP4Y+HrDTtPjk8OvdwvJq7TBijhXCIYyOhXLbsnpjANdN8
UdFj8Q/sX+G9LkmNvHd2GiwmVh9wM8Ayfz5qrkJbs8y8J/BP4FftA+GtfsfhxDqHhzxBbov2WW/u
pMnOSsixtI+9DtOTjIz24r5Q1v4deJPCfjK68LX2kzjXo5vKW2jgctOdzKjIuPmD7cgjqK+2PAv7
Mdp8Avj18O72y8QXWsLqC38Ei3caoVK2xIIK/ljHavLP249Vv/DX7Smma1pl5LYX9jo9nNBOh+64
nnI4PBHy9CO5qlLsxSik1Y9D+Hv7CWhW3wwuL7xqLoeJ5oGuQlldMq2n7oERkdGIbdnI+lfDVjam
bULWOd2jEjIjhRgrlgpxwc4zX6Yfsr/EfxJ8VfgjrGpeKb5dS1OK7uLUTpAsW5BCjKNqgD+I8+9f
mjYy/YNQsrhCDGjxzOzHdt2sGbGBznBH9D0oi33B/FY+5fin+zD8Bvg9odjqPiibxDb21zN5MJtL
h3JcgsQFReBwTXjninRP2ZP7C1F9J1TxcmrLaubTzopTG8gX5RgoF68ckda+lvj/AHfwz/aI8G6Z
pS/FfQNCa3uRdrMLuGQtmJl2lDIuPv5/CvnT4jfsZ3GjfDmfxb4L8XQfEK2tmxLFYQKMxj77h1kY
MVxkqB3oTfViML9lP9my2+Ndzq2p6/qT2XhnTWaKZYLhUuGkKBl4KthQDndntXpHg34H/s5fErxd
H4c8P+MPE1xrNwr+TG+Y0k2KWYqWgCtgDNfIGnSztcE2kknmyukaeSxDOWPy/dOT1r7j+CHwm8P/
ALLXgd/il8R3EfiAW5NhpshCzWgIYGJFLYaRwRn+6Paht3GkrXZ8nfGL4T6p8EvHt34b1R/OMaie
0nhbcJoCzBHbphsA5HqKu/AnwL4S+IfxJtdD8V61qGjWV/mG1nsEBd7pmUIm4qwUHJGcdccioPil
8SfEf7QvxPm1F7Brm/vttlp+nWseGEas5jiwCdz/ADnJ7+2BX1z8L/2efBHwR0bwpq3xCBu/HGoa
nbSWdvbXDobaVigSPyw4DhG5ZsHn9a57Kw1GJ4F+1l+zppPwB1Xw1Bo2q3uoWerQ3DyLfpGWjeJo
8FWRV6rJjkH7vWvAFYAEuJCh3YA6456ivtr/AIKUlTqPw+XIVjHfAMTgjm3P8gfzry79l79l6f4t
anP4g8TCXTfAdnzK8paE35AYFUcgYRSoLMD7dziXKxEVfU7SP9jHwV4j+Cd5498M+N9Q1Dy9MlvE
DQRiHzY0JdCCoZfmUrgnI/WvknQtMv8AxHqdjYWFmbu9v3jiggj6s7kKAAPqOe3J96/UXRLbwZZ/
sx+LbLwLHKvh21s9TgTzH3lpFEgdg2TkE8g56V8of8E57KNvjTeLcxLI0OgM8BZAQjCSEEqfXDY/
P1qedl2XNobsv7E3g7wjoHhyPx/8SD4U8R6wgH2IRRGPzRt3RoxJyAWAyTg14v8AtJ/s8ax+z34s
gjupZdU8NX2PsGskKhkYIN8bop4YE5HYjp0r3j4y/s5eNP2g/jb8Q72w1i1FroU8FtaxajcyYRXt
kdkjUAhBnk9Mk16Lq8dx4i/4J73s3iF31PU7fRpna4uz50olimZVIY5ORtADdgKFJ3sO3U+H9N+C
PjjxB4CvPGemeHbi+8N2olM97C0Z2iP752Fg7Ac/dB6GvPVjSRmkVmkIB2k8L+X1H/6ule8/sz/t
IX3wP8Uvb6nI974I1KTy9Rs5N0i2wOd00aA43dmGORXaftdfs5Wnh/Tk+KPw/U6j4O1krd3MNuSy
2jSZZZY1A+WE55/uk9MVqpvZmbifK0cJkaBZZRbhmUMzqdiISAX46gDJx7Yr7C8Kf8E/fD/j60l1
Dw18XtO15IcK72VksoRiM7WKznaT9PXivjhSrylixZW+6MZAHt+VffP/AATGB/4Rr4gg5/4/rXJ6
gnym5FQ27gkmfBer2T6VeXdhclTNa3EsBMZOGKSEbhnkA7Tx6Gs+VlZWyzlugyMc985rd8bo58Ya
8JiVP9o3YVQeiid8H8QRzXNnJGV+6Mdfr0FbJIxbsLNmZQEO1u7HGBUSfvJWO0oEBboMce/50CQS
qMK3nB93zEYP4f5617P8Avh38K/iFZ6sfiB4o1nw5eWzo1r/AGbbmQTI3UnEb4II4GBxScrFRjc8
W2Zcg7v9709qrIHcupBij9c9q+r/ANoH9jPTPAvwvtviN8PddvPEnhiONpr/AO3lY5Ei3BUlQbVJ
AOQQecY96579kT9mjSf2i9Z8S2mr6veabb6XbwyKbJELO0jOOdwPACGpczaMEfOU0nklUAGcZyRy
RVaWRwGc4UkcfLz64r7k8Zf8E1PEFv8AEHSbLw/rH2/wbcbPtupXhiW6tRu+cCPgPxjBA715p+1r
+xhqP7PWm2Ou6Hf3XiDwqQkNzc3RVJbWYsdu5QeVYcZA69annLcV0PEvg78N5fjB4+0rwrDq9no9
zqHmCO71BtsQcLuVMjqT0AANdp+0x+yZ4k/Zqj0K51rWNO1e31h5oo3sd67JEUMVIYdCCeR6V7J+
zp+xKmq+EdP+IHxC16TwnpqXltc6bGmxjMhkXYzkn5NzEAD3JNejf8FX3MPhj4buD/zEbsbT0P7l
f8/jUxlqDSR+b8ilVV1PmYPzAfwj1/lUM7JKjMrFiOeefy/IU6adBKVVkBK4IU5571TbczlQfNEf
GN2OtbXMrXIlB3OXyMHHJ6c9u1e4fAb9kfxx+0P4W8QeIvDN9ptumkSCJo7+Z42lk8sSYTEbdQQM
kjqa8SEgJIBztHQDj361+jH7HHwQ8Qy/s93mufDn4yHSJ9ZgaXVNNXSopxaXojI8ti7FlYDaCQOc
AjtWblqNQaV2fmsZZZApz5ZJOd56fh68VGSzBwqFXYd15P41aEcrxxxld0vChVxlmPp75NfaXh3/
AIJ1WelfDfRPEXxN+JVj8MtR1BmH2DUYY2WMgkqpdplBYoASB6/hRzpOxfL3PhqTiRA6lVPOGHU8
0k0W/wCcNuYddnBr67+OX7A1z4F+Eg+IPgTxhbfE3QYXd72XTbZR9ngRWLzKRK4cKVIYDnmpv2VP
2Apv2mPhk/jCTxknh+IX0tmlr/Z5uC+xUJYt5iY+9jAHak5IaSPkBlKRlsYQgAADk/5xVZ1Msm9e
VXGc9vXvX214b/4Jf/ETUvixq3hvUrpdJ8MWyySQeKDAJYrkcbFEQlBBOeQTxjvXLxf8E/PF1t+0
vF8KL/Uf7P068tZr2w8UfZC8N1DGgLYQOMOCwUjdx9Km6LsrnyLcW8gYENlO6jp+VDwbUUnD5JyR
wcDtX2x8bP8AgmF8Rvh7FpcvhO9HxAN3I6Tra2f2d7XoVLAyNlTk8j096wf2lv8AgnP40+Avw8t/
Fdhqg8Z2qc6rb2lqY2ssLkSZ3HcmQQTgYyDRzoLI+PWRkTZhgB+dNlG3exUEMeQO3X/GvqL9on9i
PWP2dvhD4a8f33iSz1mz1aSC3lsLe2eJ4DLC0g+csdw+UgnAqL4x/sNa98IP2ePD/wAU5/Ellqdp
qi2by6ZHbukkIuE3LhiSGwdueAetCaYaHy6y/MFUlVHWodzpDgKMAetTBwNwIIQkrlhjn8P881FJ
bgSOSSyk8E/SrMmQwqw+YkHH3s8816b8EP2cvG37SGv6hpXgvTor+7sbZbq6ae5SFUjLbRgk8kkd
PrXnjCNEBGcnqOxNfoZ/wTP+D3xCj0XW/iD4A8X+FLWXUFbS7rSdVgkuJI1jfejsI2BXJJxnHFJs
tRPz38TeG9U8J6/qmh6tF9k1bS7mSyuoCwby5Y3KsuR7jtWe9uojDMQyt6f1rvfjTb67J8ZfHq+I
5LWTXzrt6t81mv7kzfaHDFc87c5x7Y75r234Df8ABO/4g/HD4eHxm2p6X4I0aSUR2ra+siG6jK58
1cD7hLKFPfBouKz6nyjOpEw2tkKcEhRyKGAEjMARhj1//VX2N8WP+CaHxE+HXw51Txfpmu6N46tr
Er9osfDokkuBGfvMBg7toKkgc7cmvnT4QfCq9+M3xE0Twhp+o6dpN1qjskd5qcnlwx7Y2kyT1zhe
B6kUrjscDcqzgN74A/Cm4VCFJw23juK+0fHX/BLj4oeEvA2teINN17w74vbTYPOfT9FmlluZVzk7
AUAJ28gZ7HGelfLPw98Aah8RfHeg+FLKW2sNQ1S/WwWbUJPLhjdjj5z1GDkdM54oukFjkJVZZdzL
3wSaUQKFLFiD0OeMV9hXP/BMX4uWXxOj8HT3GhRJLpp1FNZku5Esnw20xq3l7vMBwSu3pzXmH7SH
7Injr9l6+0yPxXHa3tjqkO+DVtMZ5bUOCQYy7KuHPBweueOlTzXCx4hHavd3UcUEbzyOQkcUYyzs
TgAD1JNd58VPgB8QPgjJpy+OPDN34bOqJIbI3ZQrME27sFWIBG5Tg4PNaH7PXwk8S/Fv4taD4c8J
SafHrzub2F9Tl8uAeTiTBIBJJKgYHv6V9X/8FUdT+Kepj4dW/wARNC0PSrKIXT2c2i3jzrNNiMSB
9wG3A2kcc80+oWR+f8kGMAE4x0x1NI8RwCGXJPQjpXdfCH4QeJ/jn480/wAH+EbD7dqt9uKvK5SG
BQpYtI4B2rgHnHPQV9Jzf8EmPj0J9sY8MyYIKsdU5Hrxs9f6UXWwj4u8oRuWB3ZGcdRTZm+TAQDP
IPrXTeO/B+r+AvFOr+GfEFn/AGbrWlXMlrc255IdWKkg91OMg9xXOsrsPm2ZAJ59KYrEMcUjkAsQ
q9z2q3g+WrjJ464x/npUSKobdt4/Q17TefskfEaz/Z8tfjG+l248FzKswlW5UzCJn8sSGPggbj65
6cUikrnihiDEMAS2ck+nFKCVbk8Drk9K9ng/ZN+Ilx8BZfjDFo8UngiNDL9rF0nnbFk8suI+pAbP
5VlfBb9m7x1+0drOo6X4F0uLUbjToFuboTXKQhFLFRgsRk5Hbp+VK6Q1E8wx5pY+WcAY6e3FRsmC
dp+b0FfRPw//AGCfjf8AEWx1C70jwb5kVjfTabcpc30MDpNEdrrtdhkA5GRXivi7wnqPg7xPqnh/
WbZ9N1nS7hrW6tZhho3ViCD69OCOOeKlSuwaIPC/hHWfG+txaP4f0m+1rUpVZ0tLC3aeZgBliqIC
Tj6Va8YfD/xH8P72G28TeHdU8PXc0fmRQ6pZyWzSIDgsodRkA+lfSn7BnwX+M2rfEnQ/iJ8N9OaH
TNM1RLHUNSmeFYmt3KfaFCycsPLPVRkHpX0X/wAFoYPN1H4VtgBhDqW5hn7ubb8DVXFyn5ekK5Py
ct0780BgwKtgj37V6p8H/wBmr4jftAT6qngLQJNdOmLGbkefFEsYk3bOZHHXY3Az096n+Mf7LvxG
/Z/t9JuPH3hmbRLbUmeO2nM0csbuoBKkozAHnIB7A+nFBZnkLYfcFG3b71dTSLhtLGpi2nNgZzbi
88s+UZQu7y93TdjnGc45ru/hH8CfHHx01i+0vwHoj67f2kQubmGF44zHHuADZcgYJ4wOfav1F/aY
/Y71OH9iPwh8Pfhn4OWfW7XUbHUNQsoJFEsk3kOLiVndsFtxAJz6Y4qb6jSPx1nVwwIwqY5Ofxpx
CiMFTtJBJx9K2PEPh3VfCusahoutWb6fqenTyW91bTD5opEYhlOOOCD9a9V8JfsXfGrxx4Z0zxF4
f+HWrajpGpRefbXcbRBZUPRtpfIBx3AqrobieGyufKyBt7YPX60k2AqEZXJxzXovxc+AvxB+B8+m
J478LXnho6grm1+17CJdmNwBViMjcOPeuBk2su6SMucDgD/PrSIGOA6epHYHrUQj/iAxj2qV9u3j
IA6E8cU/b5akqpOM5ycCgVixY6Rf6ja3E9vp089tAMyzxRFkjyM/MwGBxzzVRQqYYA7T0A6Gv0r/
AGNPFXi/4D/sp+O4tY+DfinWLDXo59Xs9VsbWMwtbvZqgLbmyANgbgHhjxng/m5aWkkjQRpG088h
RESMZLE8AD1JJApXTLt2KcrsxGPlz0x1qbIt8g5cEZ45zXqGt/szfFLRPFWjeHL/AMCa3baxrnmH
TrNrUlroIAZCnPYHJz/Ksf4k/BDx98IotPl8Z+FNT8NLfs6Wn26AxrIVAJAP0OcHn8qLoVmcI4ZF
OBkg8NihlVYicHcOg9a7L4f/AAq8W/Fa+utO8HeHdR8S39rD509vp1uZGjTOMt6cmuq179lD4x+G
dIv9X1X4aeI7TTbKJp57maxYJFGoyzn2A5obQJHkqK0qPjAOO3akEYZSoOT+VSDByyt0OMqcjNdf
4E+Dnjb4n2uoXXhTwtq2v2+mkG6lsLVpI4TtLfMQODgZxQhpXZyEmTIDwWUZIPb/AD/WpLeylupn
jtozMwQyFI1LMoHfAyce/vUkNsLguScnAbbnB57Gv1c0TwT4Y/4Jf/s5T+L9V0+HxN8UvEymxt5o
omktxKUaSKL5iNsagZZurYpNjcbH5QPYXEGDNG6+m5SAc54BI59fwqscKGABJr9bfgn8UND/AOCm
Hwl8T/Dnx7pFto/jXTlbUrS/0q3MUFvx5cUo+cncGYhlJwQa/PPxf+zB8Q/D/ivxvo9l4dv/ABHH
4Xv5bPUNT021drZCgJ3E/wAPy4bHbPNOLT3Hynju0LkY3KexHIpHG/cMA855FWYojcugUeZI2FVU
O7cx4wPXmuo8Z/Czxh8PBbjxZ4c1Tw2LkF7ZtSs3hM6r1KBgN2OPzFKVkLlaOShgEreUFwzdB2/C
ntbIqf6wqyj5iwx+GK/Tz/gnZ+wTHpt3onxT+JtqtpcNL5nh/QL0GKQSqWInlRsbiVG5UwcYzjpX
mX/BYCwtrL9o7w2LSzhtlPhuJ5TGgUOftE4BIHBxgc9ahMEfA5UyYLMoPZgOtWPszxIpbftYHaSO
GGcda+1P+Ccf7Kfhf41+ND4q8Y6pp50TRLpY4tFe8Ec91dYR4iU/ij5PGeTx616//wAFltH0+x1j
4VPa2UNrIYNSDyQQqpKhrbAOMccn86uLux6H5jn5ZEAULjIIzUr20m4HZlC2AVHf/JH51ZnXy2ZQ
B8wwDnB4r9af+CUuh+L9J8I+IfCvjLwGNN0O3Eeo2Go6lpbQzzSTf6yMlxhwAqkEdv0TaurBc/Ia
UGF/X2ojQSR7gTgnk45r1j9qW0tLP9pD4ow2drFaQxeJb9Y4UUKqr5zjAHpkf5xXl8ilwdigDbkD
PehsaVyAxZkC53AL0xg0LEXypVizHAOcGvW/Dv7NPjLxD8EvEfxYayax8L6Q8EXnXYKNeebJ5e6H
j5wrFRnPevsH9hH9ljwv4O+Hsn7SHxNMNx4Z0y1kvNLsEXziBG8iPNMm3DZwNqg+5qLsrlPzkMXl
wl5MoEP7xQvQZ4prRhJW28ggEAnPFfrD8Cv21vAn7TPxP1n4YeLPh5omkeHPERm07RZ7Ox2zybyw
VZSPuMyDOV6H0r40/ao/Y38R/Af4z3HhDQrG68VWV5btqOlJp0MlxdC0MjKBKqr95cAEgY6VcZdy
LHzQgeOQMG+XGQMd6V1UoVZGY5xuY9a0NS0i40jUbrT7y3mtLu1laKe3uFKSROMgqykZB9jXQaZ4
B1+SyttZ/wCEe1STQJyIv7Ta0l+zfMdo/e7dh5PXPWjmsCTOSS3KsDkliMqFGc05UTfES+1SpP4f
Xtxmv28/bR+K2gfskfDDwpqeh/DfwtrU+o3QsvJv7JEjSNIS27cq5zwBz618I+O/+Ckt1448C63o
CfB7wLpzapay2Rvre1JaLepBccdQCfxFK92UtD4wljSPln5BwMe9RSxCVNquQrdMjoB/Kv0Z/wCC
SPiXTtU8ceIvh7qnhTRdWs7m2k1lNSu7RJJ0MZhiCZIIK/OT+deJf8FCvhneeGP2nviBqNl4Xl0n
w413bNFcW9kYrQ7rePkMAFyz7uM9SeKFq9CZHyqkYli5XkAg/hTo4vNjAWNm2tkn/wCt35qdoN1x
hiI0LYXJCqM9yTiv0d/4J0/sCt4jvNM+JnxCshFo0E3naLpUvDXcillMkqMv+rwMgd+tJySdhI/N
yVFUnJYcZ6+oqF2IZhuG3HJr7r/4Kz+D9A8JftA6Fb6DpVlo0E/h6KWeGxgSJHfz5gCVUAZwMZ9q
+GbiJfMdU3Rk9fTNVcTaIYhvYEhioBAA4+lBGeFJOM/MRUoCRLtZS5I47ZpQDdQnnaw6EDB9aVwI
wTJhPMwO4HWnBmhJBJAxt46UNIquZdu9vRjnP1pJYvtD7uChweD+lNgCfMZBngjgAU99jRpIxACr
njFRycKUOVXkY/SnptdCMEgEDDfr+tSJiMYl2Ki78E9O47UkW75RJgr2P6UhRyXjJIwflcnAoUbA
L9rgkdRNHJ5xX5n5DZ6diK3/ABQVH7S/gZSzBzoWpkDscPb5/nSfDJs/F/4wnqy3un8D/rySkkxu
yPP/ANmbwT4K+Gf7MGnXmuaXYXNtpjX5u9QubFHlkCXcy7m4JJwBV/wT+zL8GPgh4rbWrPR/OuvG
N19mt4NQQXMCO2+bZGjLiMHB69MAVjW0u79hnxCz4yINT3E88/bZs16B8VnRNR+DKnOG8RQBW9/s
k1HQeh5P8Pv2WPAfw8/bC8UzWWkWt3p2teHDqK6Ze2ySw2krXQVxECMKhwCB2yR2FeL/ALP/AMLf
Ckn/AAUj+LOlz+HdMm0yztru4t7OS2R4Ud2tclUOVH337fxH1r6y1DxXpOiftiWOm6jewWV7qXhL
ybBJnCm4cXW5kXPU4XOPY15B8J/hV4l+H37enxH8ceItNbTfCmvRSWumarPKnlTzStbFI15yGPlv
gHqeOTQm7smL6n5/ftxeHdH8OftWfEOw0Syh0uwjubd1s7aMRxI7WsTuVUDAyxJ47mvAJ4o423ED
gdcnrivqf/gpJ4N1jwv+1P4m1W/0u4ttK19YLjT7wr+7uBHbRRvtI6lWHI7Zr5Rm82MnadyNwGJz
j/JNbqxnHmsrjmiU7gcqT6Hr7Zr9Nf2UfhH4L8Rf8E8/GOq6t4V0u+1KWLWH+3XNqjzgIDsIlI3D
btBGCORX5lKn3geCcHPT64r9Z/2FriHxX+wT4m8KaPcR6hr0MOrWzadDIpnVpQ5jBUnI3buCetRL
R6FtPlfc5L9gP4TeCvHH7GXi/Vdf8J6XrGozXWoobm8s45JgFt02hXIJGO2D1r4m/ZN8S6L4L+Of
gy71vwtaeLrC7nXTG02+VWVTOyoJQCrAsuc88e4r9MP2Ffh74i+HH7K+v+DNe0ubSfFry39zHpN5
hJykkSrG+3OdrMCM+tflz8K/DV54Z/aC8I6HrOnXOn6laeJdPs7mznXZJA4u4gwK5/8A11krpalL
SVj9GP2hdH+Ef7H/AMRLDUm+Emj+K7f4iXq272s9tEItOaIRoxiVkYBW83cVAHIJz6eFf8FQv2ZP
B/wi1Pw/448J2C6MmvTva3WjWMKx2ySRx7vNjRQNpYcEdCeeOa+k/wDgoj8HPGHxV8Q/CGbwv4fv
ddt9L1WSS8a0XcsCs8GGcAjAKq/PsfWuE/4K9a/p8Xhj4daYbqF7+PULm4e0DgyLH5G0OU6gZ4zT
GraXMf4P/s0+AP2Ofgle/Ez436bYa14m1KPyrXRb+JLmCCT5niii+Vh5jgZZugx9a9H/AOCnepRa
l+xtot5aW4tre61bTJIoFwBGrIxVR04GccelaH/BRDwdrnxq/Zt8InwBpk/jAtq0N4o0pftAaE20
67/lzldzLyOlZ/8AwUK8N3/iT9iDRTpOnT6q2j3Wn3N5FaoZGgjiiZZWcLyAh4b079KcfiQnE+D/
ANg/wheeJfjdmH4aaf8AFC1g0u5Nxp2oTRxW8GWjCy7pAULc7cHruPpX6LeLP2K/A/xm+EGuaZqX
wm0P4SeId26yv9GW3uJ4yhDh98arlSQVKnqCenGOE/4JVvYyfsweL7TT5oW8QLq92ZY7dwZgDbxe
U2M5APbPGc13n/BPxPiHZ/CrxVpnxL/tlPEh1OSW1t9eeT7Q1uYIxlBJzs37xkcUJ3CUeVOx+SXw
f8V+HPhp8SNJ1jxZ4Vt/G+hafJILrR7jAS4BR0XgggkMQ2GBHH0r9mvCHxd8CXv7FFx49tfBEem+
BI9HvLh/Cqom0Qo8iyRYwF+Yqx6fxV+InjbQtT8M+JtT0fXNPudL1eyn8m5sblCksbdRlSAeQVI+
tfq78DNOm8ef8Er7nQfDy/2nq8+halYi0tm8yQTPPN8hAyQcMDjGcGqdrjeqPzP/AGifG/g/4k/E
C71zwH4Ih+H2jtbIn9kwOCryIW3ykKAq54GAP4c1+m/wg/YT+Hvw3+A2j39/8OLb4u+J9SSG+nF0
YoJEWWNWKIXYKFTH1JJr4E/aX/Yv8X/syeENC1rxRrGi3o1d2tlstOlYTwuYy/KuBuAwwJBxn61+
jf7Ves+NZP2JfCd98LbnVp9WaPS8TeGzI8zW/lYfBiy23IGcVG7DYw/iZ+wf4B+LvwI1uKw+Gdp8
HPFdoWnsp4WSdx5a7huMbYaN8spB5HXsK/GuUFoYcNywDFM9Pxr7v8H/AAT/AGsPiN8JfEniy/8A
iH4i8NaZYLOs+meJtavLae6iSLc21CMKpB25Y8nNfCDiNoYymQu0EDr8uP0x0rREdSvvcMyBd3PK
9cCpgAhyoAB59yPrSsxRPMTh+23qDQGIdd5Bz/F6UmgsfpD/AMExfhv8HvjP4X8Q6H4g8Evd+MtK
c3Nxqr3EiJLbythFUo4KlSpyMe9fEP7RHhSy+H3x0+ImgaSrjTNM127tbaN2LlIxI20FjknrjJNf
dv8AwRp0i9j8Q/EjVDb3A02S0tLdLloWETSK8hZVfoSAVyB618Z/tdaZead+1P8AFKHULOazaTxD
dTxxzpsLxPMzJIMj5lYcgjrzUpWKPtn9mP8AY++B37VH7NvhzWbHw9ceF/EGj38Vrrmom5kc3jwh
HuB/rNuyRHyDgFc+1dd4i/Yf/Z+/aC8B+OrH4Q6fF4b8QeHbwWcev29zLcW0s6IsjJjzWDoQdpI5
HbNbf/BMPSL20/Y08VNLYXNuL3Ur+W1UxMrTobWJQyA9QSDgjg4r8wvA/wAcviR8HbTVNI8NeKdV
8NxXZC39nbSBN0gBUl1I4bBIPAP5U1cpu2h9E/8ABOv9jbRf2kPEms6/4ykW48L+H3S2n0iN5Ee8
nkjJGZEcFEXGcDk+1fXmgfsW/AHxP4u/sCX4CeMdGhLSL/at7c3MdmNoPzbxcn72ODjnIrkP+CNj
oPAXxLjE/mzDU7QkscuR9nIyfy/SvAND+Of7WfxM+NGoeDPCfizXYtQur26WCO6to4beBI3ckGR4
sKu1eDk+1TZ3Jep4F+2J+zsf2Z/jXqXg5dRGqaeYI76wmZSHFtK8gRHz1dRHgnvjNeImPBbbknOA
fb1r2n9rLR/ihoPxqv7X4v341PxhHaQZuRcJMpgKkxhCgAAHzdgc59a8ckIOcfePbPQe9a2J3CK3
35wwLAgD0zX7Yf8ABMj4UHwf+ys12NdtNVbxdK2qA2wyLTfBHGIn5Pzrs5HavxNTKOFU8njiv2D/
AOCVEEo/ZO8aMqNHHJrF2YGGfm/0OAEr/wACDdO+aloaPgj4lfEzxx8EdC+Jn7Pk3iOw8S+F5dRh
DzxyNcIhjdJiIDuwgYhA644KtXT/ALAXwl+FHx0+I+peDPiDDrR1u+gMuiGxm8qIeWrvMGI5ztCk
Z4618uX7u9zMshdJV+WQN95X53A++eOa+rf+CXTC5/bH8K4BYw6fqBY4PGYCBnsO9ZtFpn1doX/B
NT4F+BJ/DPhLx1q2q6v4z8Q3N1HaGyvTCskaB3BMYHAEaqCe7Z+tfE/xf/Yy1/wV+1Vp/wAFtP1C
0urnXJkfSdQnchfs0rSeWZcLkMqxNnHUjI6169/wUa+IXiH4Z/t3Q+KPDmpTaZrmkaVZS2FyiqwT
dG6uNrAgghiCMcg1xP7NPxj8U/HH9u/4X+KPGuqtrOtG9S0Fy8EcKJGkMxRFVFAHLsfXJNNCTPe/
i3+yz+yR+zbJ4d8N/ErUPFbeJbnTkuZJdNlnkSTBKvIdqkICytgV5Z+2v+wx4d+EngDw38TvhpfX
d14H1OO2juLbVLlmuVknO6KZMqPlKnBUkYxW3/wWEnaP9ovwurAhW8MRrG23ILG5m4x36CvoT9ut
ja/8E4vBiEYkCaCoBHQ+Uv8AQGnqgOa+B3/BL34UfEb9n3wV4m1PUtft/EGuaLb6hJdQ3iJEk08S
uMR7MYUsAB3xVH4Lf8Ef9NtrDW0+KXiJ7nUJZcaV/wAI5cmNI4wDl5A6ctkg46cV7uZprT9hz4LL
FMbZ3h8LxlgSMAvb5r1Tx3cyp+0v8J7dJWWFtM1t3QHhiEtgMj8aLsLH4S/Hf4Ja78APijr3gfxA
9vcalpzRstzbMWSaKRd0bjIBBK4yMcHNeeJGY4iCMkNyfw719ef8FQXU/tk+LCMsyWenqR25t1PT
8a+RpUy4A4GcAjoRVInlGpGHi/eMUOCeBzmv0M+Bv7Hn7NnxW8N+DxN4v8ew+J9atoPMgNkYoxOy
jIVzbFQu4kAhiPevlX9lDwDo3xM/aP8Ah74Z8QWg1DRNS1NYb228wx+YgRm2EqQcEgV+qnxt/aL8
T/Bf9rj4MfCLwtZaXZ+DNagtobmFrbLojTSRBY2z8oVYxj9c0mXofmL+2F+yzqP7LvxXuNAkuPt+
h3yve6NeOVaSS2DbdsoGMSKSQeMEDNe4fA39gjwN+0Z+zTaeMPBfjDVpfHllJHDrGmzpGlvBIGBl
jClASREdytuIOB64rsP+CyxJ+Kfw9QYAbRZ8nGc/6QDj9D+tes/8EiZIoP2f/iK6E7V1ssNx7Cyh
/rmoYk7nkNp/wTu+Evxi+GPirU/gZ8Q9X8TeJtJKxm11TbHbed94xN+5QglQwBBIz1r887vTJrSe
5t7qMx3FuzxyBDkqysVI46kEGv1p/wCCQKqvgL4oshJC6zACW/veQc/qTX5VeJJFk8Sa4VJLvfTy
5GGGDIx6/jQgW5+m+nfs5aXZ/sC+Lda+G3xi8Uz+FrjQL3UpdOxGltPKkRNxCyFA6BmR1IBHXv3/
ADR8L+BNU+IPjHSPDOgQpcazq1ytnaxSMFDTMcKCTwBX6q/AQmH/AIJH+IypKtJ4f8QYyPWW5AwP
TFfAP7FxSX9rn4Vq6BiNej+YdCRuI/UVN7F2PpPxf/wTw+BfwW0fw/D8XPjJqXhfxRqtmsr20MMT
Qq4x5gj/AHbHaGOASe1eYftb/sIWfwU+G2gfEz4c+Jbjxt4Bv0Q3V/dmNXiMrqIHTaBuVtxB4ypA
r1b/AILJyhfi98P0YjH9hzFVzjJ8/t+Qr4w8SeEPihZfC7S9R1iw8TW3w+n8trGW9Wf+zW3EshQH
5Fz2OO/FUpE8pweh6Fd6/qMFnaQ3Ms1xKkCGGMyYLEAHA+tfYH7bH7BNl+y14X8La5oviHUfEQ1W
4ezu7e6tlUQssW8MpX1bIwc9e1ec/so/tb+Iv2YtYu4tL0/Rb3TtburYX8mqWpkeNUYqWR1YFRhi
T1Hy1+lf7dv7bB+A3hHwvN4Kfw74ovtZuJklhuX+1LFGkYO4LG45JcDmne4XsfK//BIXwRoPib4t
+LtX1HTrfUb3Q9NtpdPuJU3C2leRwzrngNhcZ9jVD9ob4c/tAfGn43+PvijpIuLvw/4C1q7tdKu4
54YTZRWcu/EUYIZyPvEkEtgg12//AASA8T21/wDEv4pNeT2ltqerW1tdRWsbBN58+5eQRqedq+Yv
TOBiuQ+Kn7V/xd+CvxD+L/wi0zw9A6+I/EGoy6ct7YytdzR3crIrW+1hvDDlSQ2Tnnipsh3uz3/9
tHw3o/xR/YZ8B+NfGiW6+Jng0V5PETW6ma1+0GLz2AGMqdzZTofavkL9qL9gCX4O/DTRfiN8PvEM
3j/wRcwCW7vUWNTAHKiKRVB+dG3EZ7Ec9a+vP272/wCEU/4J5eD/AA7qbi01jy9EtnspMLMXjRDI
Ch54K8jtWf8A8ErbD4jaT8NNUn8RvFB8LZnSTTk1tpPN3MikNBv+UQNuXr1PSn6AfL/7PP8AwT0b
4kfCjVviT8RfFH/CvfBsVstxZXjIrtNGN2+SQMR5aBsADGSf18S0n9pH4k/Dv4aa38MfDPjWVPBt
5LdW7xw20ebhJGIdo3Kl13jng/xV9x/8FbJPiamhaEtiI4/g91c6I7jzJ8JtW7x8uzdnZjIJ681z
37Bv7L3hXwd8OD+0d8S5IptD0uGa+0uwRTKIFhaSOSaVMfM3y/KvOOpo1ZO53f7GHwnT9iP4SeIP
jP8AFXUhodzqlh5UOiyOg3JgTRAHPM0m3AQ8g5ryv9hj476H4z/b/wDF3jDWorfw0fF1pe/2fbzy
biJpJbYpFvwBuKxnsPSvFf2kfj/8QP24PihdnQNI1e/8O6Wf+JboOlQzTpFCJHVbieNcgSsGHJHA
4HSvV/8AgknoMNz+0x4ij1TTka70rQZ9sd5CPNtpvtMCk4PKsCrD2p2sFzv/ANpv9jT4z/ET9snU
fGmiaC154Wm1KwngvhfwoEijSESDYXDj7jdu9Wv+CyPjbRLuP4feGbfULe416xubq6urJMPJDE8S
Kpb+7ndkZ+teb/H79oP4kWn7feqaBpvjrXrDQ7bxNY2S6VaX7pCYS1urp5YOCCC/bkmvR/8Ags/p
9hbx/DG8Szj+3ztqCy3CKBJIirBtUt1OCRgUa3HY/MF7MSXOY/nZQWOMc57Y79q+/fhd+wv4c+GX
7MnjX4n/ABuaHTdR1DRriPRdIvJQn2eUx77eTcrZMzFcBB0Brpf2MP2P9C+FPg+T4+/GyJdO0vSI
TfaXpV1G2+IL5iNJPCV+Zm+Qooyc88dK+ZP2w/2uNe/ao8czNLJNpfg/Tpduk6PDK3lbUZ9s8i/8
9WVvoBgUXbC2p9of8Ep/APh/QvgR41+JiadBL4stru7s4tTkyzR26W0MnlgZxgtye/vXz34Qf9pb
TPir4e/aJ1CyvRZa3qMEU+pv5f2NrWd1t1XyA2Qm0jadvXBzX1Z/wTR8u5/Yl8aW2ngXGoNf6ofs
yEPIWNuioCo55wMCvmzwD+2d8QfiF4V8EfAceDbZZbLUrCzupIElN4I7e5RjvhP3MbRuJ6YPHNJv
QtJtn0D/AMFIv2ePAvjv4o/Cq5vtVsPAmq+KL+403UfEk/3Xjjt90SuCyqTu2qCSPvda+GP2vf2M
PEn7KviO0We5k8QeE79UWz1xIPLVptuWjKbmIPBPU5zX2P8A8Fmr+3/sr4WaeZUFw1zfyeUCC4Xy
4hnHXqete/fsKWvjbxz+zroKfFzRLXULeP59LOtxmW8MYZgvmpIvBA4U9SuKXM9BaH54WH/BPDVN
K/Zv1P4q+OvF1v4Amit5bmy0fUYF3XI2boVMnmcNJzhdpNecXP7V3xa8efCLw78HodUWfQIjBZW1
lYWwhu7jacQw715PzbR78ZNe7/8ABVHxV8TLj4rW/h7xTEdL8C21utxo1tp7t9iuMGQeZIWABlAw
Nv8ACO3Nes/svfs8eEf2NvhA3x8+Lc8N3rU1vG2k2qqJ4YFlVHgCgKSJmcH5skKDSZVktzuPgVBH
/wAE7f2XdY8QfFjXZtQ8Q+IWju4/DMkwkuUk2BPs8e5z5h+cFiOB+Ffl58NvB9h8YPjBpOi6jrtt
4PsdavJfM1HUMeTakhnGcFc4xtAJHJFdt8efjh4//a08c6n4uvrPULnTbHfHbabYI89rpMW3kbgu
ASE3FiRX0n/wSm/Zv8KfFTxBr/jnxJA2qt4bmigs7C4jVraR5YpCzyKwO/A24HqM0vhWguXqZd7/
AMEqE1bwnr9/4D+MOiePdZ062e4XSNPtlDTsFO2MOJnCFsYGV618L65od54f1G+0rVbSSw1bT5Xt
rq3mzvSRWIYMD3B4/Ov3F/Y2/aK0b41+O/iVo2keAtL8IJ4buVt2utOVFa6HmzIu8Ki8/uif+BV+
Qf7VsiXf7RvxTeNOP+Ek1HdtAwSLlxn+VXG7IaPJGiznDMuRkALx+NQOCZEzIB2LZqRX2IQY95yc
Ekj+VKFSRQ7c45wRxxTJIzhicEAdyf8A69OhQ724AKcnvx7U2MeZvwo9SBxipFTyWCsOO/NIWw4G
MLjP7znGRjNRtHEDu3FSB82e5qaVUhdMusjMBhSegqCOMLM7biW9u9QmwTEMhSbg5BGVboadM+4t
Icnd97Bxj8aSSVkkLMgTcPlA7UhfknaCx5ww7VpcY7EbjcAzSOMj3xQkXmKBuPUnAGCOKWfG1Si4
53cnPbj+tRfMZUJ2nB5BIB/zzU6jHgRhk8tt456jBIpNiy+YuWwG6k0qRHLFAPlHDMMn86VCu4Ac
vzknp0oQCOSGRuoPGD24p7EJvw4weAuaYyOXBDBQFIx755pnl7nXdkkjg9qVxXJSFiGWP3QMD1pX
cSIju5+YH5QKSQKVMZHGeSOlNeDd8oxtHbOOKW4XF4QYGRliMjkd6ezBXQbgAcbjjOP/AK9I7iOF
htDOOhByKakLSgFyMnPBHGMUWY9RHQ4KKdzE4Ye9LHGkj7EDoiqMnOcetIxZjI7v34CjFIWBCqrM
24YZR/n1pku4i7lnJVgydM1KsqhvlUEr1AGarmFmmQHGSfXrVkhVOFDArzkdOlNFK43fw5wWGN2P
89KRJDHGcDdu/jIIxT5SPLY7vlDYAI7elOVWlhwE+bnBJwAMc02hPUiiAB3lv3fOCB/Onq29iyH5
lB5PsKXeVCx7AVxjA71Cy+Y253KqSTxSSBFiRFdSMAgjd0piq0EgQOdoG7Gc4Pt9KbNtw+OFPXGA
T9aGQrlQSH75FUkMdKVDLtUD+Is3P50iZuIWcIAD97b0P0FNLGNQrL8/PPXNTIu0thtpIJA6cU+U
QyR2jjYoMIcAdu2ajLMod127lXr3PrTtpw25cxqew/T61GIS7GRfkjx91jzjof50wILvBVW2jJH3
R61qwq0UFp1Vtq7nHGBkjn2xg1lX6/OGRvlX16/jWrBG8sduxUyGRAAuOPSobsyZH0BpqCP9l34i
SowiLyaMhcEAAq9yTjgZzgDHXmvm+WVWhDKuSvTjAP1r6Gvp2s/2YfGIjn8yO41XS4PlXAJCXTkE
GvnZotqowHyhiRg5H0qNGXF2Q1iEZ+PvEnAWowpQHLFOeDjinvmOQMOSeCvWnpEblNmRlOxpIRrz
4QllZ0CKCuR1HtSDDupI98gEZ9/rV7WP9HyoTYR8m/ORnPWq8jSTAlkDSt/ETzjnnHpmut6ERdzQ
sbqX7THIqFkYHcWGAuOldbpSPJLFKXBiX5WRwQcZ6jgelcZbIZUijmlYEHOWO3J/Cuu0+eeJ40UE
xBATIWBJHPUeucdeeanmCZ5l4kiA1q8CghTM+0HggZ61mKXTcBkZPb+Vb1wIT4huTdsXgaZlI+6S
M/pVHWrA6ZeSwvHz1RlbIx25+lZ31Ki9DW8MQtcSoElKHOBgd+30r0jQo9kyeTKXdlz5e7BHPp1x
wK5X4Q+D9X8b+JbLR/D1hPquq3T7YrO2Tc7kckAew7+lfph8R/2cfhB+y/8As/21n40j/t74i3yF
IZrKZo5kldSQ5jD42LggtjnFXGdnsOUUlc+Jz/o4gkyVUDazHjA6EfT86t+buIfzSJHO5FJxgf4V
7h+xl4L8F/En43DRvHMcB0yW0l+yWr3ZgElwDEEAKsDkhm4yDkd6m/a8+ByfCD4t39jo2i3Gn+HL
oxvp0rlpBLlV3IjMSTtbj8RW/tU9LGFmeDzhHuIWfzAISfmX+H347ZOetOy5EmVeR42+VwwGQcZ4
/Hmvu3SP2U/AOh/si3vibxro1xofjE2FzcCe7uGimWQFvKQR52/MoX5Sp+8a7X4B/s//AAH+K3we
svGEXhK7je0SSK9dr6bcZokHmkFHAZT2wB9Kx9or7Gqi2fmw0fnyTYkZ5QmVizkMT+HaoJikJkbc
WeM7RGTuOep6+x5FfpZ8IPgz+zT8fNO1P/hGfD91Y3tg5jltri8kiuVIUfvAokJK/MPm9a8t/Y8/
Zm+Hfxe13x8ninTZ9SfSLmBbVBdyReSrGXBBQg87BwSalzv0D2dz4k/0maMFEbEhG1nbBx/T6/Wk
aSWaOSXy3idslg3AI7dfpX1/43/Y2l8CftF+DvD97Hdaj4G1vUkgS7j3JiIsSInkBzvAxznnk1D+
3T+zz4O+DXiHwv8A8InYyWNvqEE0kts0rS5dHTDKWb0Y/pQpGXIr7nyI7kJvwSrt2yc/QenNNdUc
hPmRWHzJkKp7D8ec19Ifsf8Ag34VfEDxRP4U+I8d+mpXjY0i6WYwRBgG3ozqQcnAIzx261z/AO1B
+zVrHwB8WzW0yXF74fuGMtlqixlYcMWxGxxjzAAMgn3q+ZF8ttzwyby4dymPdC2AoLFeeh56025Q
pwjpKMqRFzwPf2+npVgLshhSUG43ndkc4GP8PT0qrdOEnEZ3GTggIRwPx6VV7mckkKJWk+VA0LAZ
bpycn+VSyyqSCZCpLcBgcMcDJP5VFvwh8oFHIIJLcf7XT6Y796gkggjZN+6VtxB3H6eg6mhENW2J
HQSTW7PGGUHejA5xjkH6f41OYQ8TlS++MkrnjI4OPpntVZlMNtH5YYqchV5HcgAnt2qZX8+GMJhc
/NyckHtg9+lBtFtIfbzGSB8AFm+Y5zyQDxj3IxSnJicERvkmQHBC/lURdbfeWy2QQBjPPTp79c+m
aUyo8bSE8qwEfH3lPGfp7GlYGxpaLZuVQQeCqAEc9OMep+tSo6NCWaIwgjHlqQc843DIB4HamSx/
ZFIEmQz4yGx6dhxiiQ5lVQWZkBCnHO09P/10WMYvUV0a3XywuHyQy5ycDuQP881PI7uVMpOBgoVc
jjPfHavYv2Tvg7pXxy+Mmn+FNduru0sJrG4uXNiFSYGMLgFmBxnf+le8ftE/s2/s+fA6y1mzk8Ta
/D41Onm60/T55DMsr87ASIcDcwx978KTZs11PiyV18psx8deMZOPqOeaqSiMvku8W07RGTg9cenq
KdczLOVaGIxuOhLcYx05/GrOjaTeeJbiC1s7bzb13WOKNFZ3dmYKMADnk1akkLluUzcMysznaeoQ
kE4H41NDKsbJHtIUqM7Dghvf04z2r2L4t/so+PfgZoWn654stbBLC8lEEZtrvzTG20sA42jHAPTp
jrXd/snfsbt+0Lb6jreualPpHg60kkgW4s2Q3E9wuPlAZSFVVPJI7jFS5i9mz5jaRkmUQoHjOSRn
Bz1znHellhMcqMi7ZOWycZXJxzX2J+0H+w9pXgn4aweP/h7rtz4j0OGJ5bw3xjDLEcBXQqqAgMME
Hpn618qeD/BuqePvFek+HNKhWXW9RuPs1vG7hQzc9T6Y5x7UuZPUtR1MuS4ZJldW2mP92QGOST3/
AJ1HNPIFkOG254dOrfWv0Ht/+CaHhEWlhouo/EDUovFdxZNcfZlSLazDG8rgAlFY46g4r4o+Knw2
1r4ReLr/AMOazB9n1i0IQqku5SrKGVgRkYKmmmh27HKzXLo3yvuyA23BPbr/APWqJEdm+fO1W+Uk
8jkV9PfsvfsUzfG/wveeKvEmqyeHfCkCP9jvLRld5XQkSEgn5VXB5PXmn/tK/sWv8FPC9n4w8Kar
ceKfCE8Kvc3lyY1a3LkeW3GMo+7APY4B601JCt3PmET3B3KkZUScYTjcOw98kVNJNLFIPMBXbwMA
jkDp/n0rf+H/AIC1r4meJtO8M+H7R73Ur9mWCBgiYIBZmySMKoDE84r7Q1D/AIJh6Smm3FhYfEOe
bxRDY/aY9OeCIKz4IyTncEZxjOO1NSV/eJlB2PhIgx2+0E5Ax8wzuXPHH40qbn5V2LR8rtXAUegP
TFauteA9d8M+ObrwlqVnNHr9vdfZTZty/msQEUEEg53DGDgg19b6V/wT707QvAelaz8R/iTB4F1O
8GHtJ0h8qJzz5fmM4DNgc4puUCFFnxrC5+yqmGAOFChz2J9eSMkmnf2gbdHQksHG04BI68fWvrD4
mfsDyaL8NX8X+APGSfEK1ik3yxwxxqvkrnfJHIjkNtI5HcZrzv8AZi/Zb1z9o7WbmaG6bSPDNojx
z6oYRKplwCqKu5SW+bPXAxzSUo7hyu+h4kZpvMiaMZDA/Nu4PqP8+9SyM0Ee/LSEEnax5Uevp3r7
P0v/AIJ6+ENd1kabpnxv0m+1VGZfstnbQvPkcspUTk8YJPHH4V8ufFT4U+JPgz44vvDfiO3ZZrcl
oLjaNs8G4qkyYPIbafocilzJ7A4yRyS3IdCVyrA5DH06n+ZpjqLjKxBhgfN5ZIGO+T+NehfA34U2
nxl+I9j4VuPENp4Za6ikaK9u49++RdpESrlQzH0LduM16L+07+yPefs36Nouot4jh1u31O5e23Ja
m3eJwhcdXbIIX1H0oTuDi0fP4InVkjXzFbIfcelKt1OjSDesblSuCv4j3H4V2XwZ+DXiH42+ObHw
/oMLZMge9vyC0NnF3kfHXvgdzxX07cf8E0tTlmurKz+JmkXGpQpua0SzKy5xxkeYxXqOcelNtLqW
onxh5pa3OCxEyHj26YPp+nvUUiK0ShbZG3/eYgMeCSOcfWusl+EniuPx4ngV9Kuf+EtaTyf7M2fP
0zuwO20bt2cEc19OXf8AwTX1qwSBL74k6Bp9xKhYwzQMhX2BL/MO2cCknHqS4N7M+OGmFrGf4Cfu
qAMAemO9SQzcNiV2JAJXPH0/+seDzXsn7Qn7KHiz4BCxu9QePxBoV6FX+19PhcRRSE7Qkmc7cjbg
5wcn0rG+A/7P3iL47eLG0rQ42tbO3DtdarLG32a3IGVRmHQnPAqlKKFCD7nmgSJLhIxEhV2zhgBg
nGDz+FLIyzMcfOVwQPQjHevq/wAVf8E4fHenaHqmoad4m0LX7izid/7Ls4pBLKVBOxSTjccYAOOe
OK+SnBtZri1nhltJ4XKyxTIVeJhkEMvbBzkHoRV80ZbFW13JZnSZ0eTfheihFI+vI9aaWAneUBVl
bCbsYIXsP5V758F/2L/HPxp8JXHiPTZLXRLBG8u2/tdJInvBsyzR4U/JkgBuhwfSvAtSsp9OvruC
eMxzwSPG653EurEEZ6Y4xxU3iDTQGct+72MZiQOScY9Mf1qWVmKIWADE7sgKSOeh4PpjFfVcP/BN
T4lX1lDef254bhM0SEwSXM4CgqDyREeecccU3Uf+Ca/xRs7C5uoNY8NXUiw5SFbmZeQDwCYsEngd
h71m5RLinY+UndraNDv2+men0Gf60kdx5NyduY1JLFkIBJIxk+/SvY/gV+yp40/aA0/VbvQJtLsL
XT5Uhl/tSdkZ2Zd2AERuMHqcV6S3/BM/4qLcGSLV/DAJHRr6cgc9f9R9PyoU0xNM+VZ5mluDL829
mwHbknBPfjP/AOuppNcu5IXjkurkxkFWUyMVYem0545rrfjP8FvE3wH8UL4f8UzWE95JbLdQyWFw
ZUaMsw+YlVYEFT1A7Yrrfg5+x58RPjl4Tk8RaBFp9pp3nmGGXU55Lc3GFB3oPLbK5OM5+ma0Uoon
lbPFGYxMNrHHQKcFQAMYAAx2HtT1uGheB43KSxkMjISrIw5ByOhHtXU/FT4Y+Ivgx4tn8L+JrFoL
+MbvNhBkglRhuV0kwNw5AJ7Hg1xbsYYxk5Gc7Se3/wCrmqcrkKGpZu9SnubuWeW7lu5WKp5twWkb
C9Bubk4zUXmGJyyyMr9dwJBGOf8ACvX/AAr+yb8RvHHwruPHmk6ZbPosCTTul1L5NxNHGCWaNGHP
Q4zjOOtcd8J/hJ4n+N/itPD/AIXitZdTa1a6Au5vJURrjOWPX73Yd6xuupuo2MX/AIT3xJbTEjxJ
rSqR9z+0ptvt/Fn09qyNQ1l9Ru5Lu+uZ726mwDPcSmRmx0yxJJx0r0H44fs7+NPgFc6TbeLrO2hX
VRI1pNZ3QmQtGBuU4AweQfxryxpFQMCqoigAKRjBP9apJE3ZoR+INRsdOudPtNRvrOwuQfPtre6k
WGXcADvQMA2eAc5yAKy5Z3hRlPGDnb6dOKW4OYHkjjkaFW2mUD5e3VuncVTu3/dtsZskfKDznr6d
aqNjORYDlN3mKzMABuU4x3HeutX4z+PbSNYIvGfiGKONdgEesXGCPTAfAH9K5Gxs7rWrq3trVDPP
cOlvCmQvzMwVTkkdyOtfRF1/wTx+OgiWaPwvaKFUZQapAc8EnAz1/wA+9S2kyop21Pmye4kkKq2Z
BgsxZiWbOeuc5OaZlHlKqdoUcgdzTdSjnsr6eKVHhuoHeGWGQcpIrEOG9CCCPwqrJKZMkSfKFxhV
6/WtFK5k0iYzM4ZQNx6AZ49CfpV3QPEOo+EdUh1TQ9Wu9I1W2DeVe2E7wSIWGGAZSDgjiscyKw2E
DCrjJOM1BJHgqQGcAAk+lOwJa6Hd+I/jX8QPGWh3mk61458R6zplxzNb3upzyQuuchWUtgjI7iuX
0vX9R8PanYahp17eaVfWcomt7qzmaKWNwequuCOM1l3M3yIiSNtcc7c8/hSGRpV28sQT8p79qyNP
U9QT9pn4tpIHX4neLMx5zjV52H1OW5H6Vwt34m1C61h9dn1C7l1h51vDfPM3nNMG3CXeDnduwd3X
j2rnJS5mcDcACB84/DPSnNuiGDwzZ5znNVoCvfQ9Gvf2gfiTc+J7LxBN498QvrFgjw2l8+oSNLCr
/eRWznBwM+tMt/2g/iPomtavrGneO/ENvqOsbDqN1HqEm+4KgqhbJ5wMgex4rn/BPw98R/EnVJ9N
8M6Nda/qNvaS3sttaKCyRJ95uSOmQPfNcxK7qgDKQCAQjDGOO59eelCSG29jvP8AhdHjiHwRceEo
PGGtQ+Grnf5+mLeyeUxdtz5B9WJJ55JNTap+0X8R9YHh7+0PHWvXX9hzLc6cZL05t5FBVWHHUDI5
zXmzSyiMsQVQZx6E1CUMSqz/ACFucH5u9S9dBc0kd34p+MXjTxz4usPEuteKtT1bxHp4j+x6pPNi
e28ti6GMqBghiTx+NavjL9p74q/ECwhs9f8AiBreoWME6XMUTXIXZKhyjAhQcg89eory6KT5HI4y
CMdOK7nwh8BfiH4+8K6l4l8OeEdV1zQ9PLC4vLWIPGjIm9u4JIXGcZ6jiosty1d7ifED4yeKvi5f
6Vc+P/FWra7DaMI0kuGWV7aN2XzDGuAM7Rnv0r6psvgz+xE9tFI/xp8VBhj5HjOdxGehtCf1r4Yk
lEkYDfMpUNuU56jj8Kc0zGLb5hVEPBIzg+1Ow07bH3Mfgt+xHGGEfxw8SIzsSGKZw342mP8AGvln
wd8XvEnwV8Z63e/DPxdqWnQSTS2seoRosb3lqsjGIvGy46YboMZ7V560zNubcTu4O7+tRB8Sjcco
PQZp6BzPqe1L+1x8X38ax+MG+IWrHxHFaNp4vF8oqYCdxQx+XsxuGfu5yOtb37NHhvxn+0F+1VoF
8Lsav4g/teDX9Qvb51UiOCaN5JMgDJwFAUAda+eRKEkc/dB7nOKu6Tr2o+G9YS/0vULnS9Qiz5V1
aTtFIoPUBlIOKi1y4vqfs1/wUZ8bfF34ZfDnSvFnw21c6RpWmzONbliWJ5dshjSEhZFOVDFs4HGc
49PyC+IXxL8UfE/xVe+JPF+sz65rVwqo17PsDbFGAo2qoUewA6UniD4m+LfEtg2n6v4q1zVtOkIL
297qdxNCzKcglXcg8jOMVyDMVABzgDg54xTSsKz6Htvw0/bA+MHwm8JQeGvCHjO60rRoHaSGzaGC
ZI9zFm2mSJiASScZwKq2f7XPxd0rwx4j8NR+PL5dI16a5mv42igdpDP/AK3DGPKbsk/Ljr2ryzw9
oWreKdbstJ0XT7nWNWvZfLt7OyiMksp6/Ko6+v4VY8aeDdf+H/iGfRvEui32harHGryWeo2zQzBW
+621h0ODz7GjQSudH8JvjR4x+BfimTWvA2vS6FqE8JtpZljjl8yMkHayyKynlQen4813ep/tz/G7
UPGmleKZ/HU51zTYJbW2ljtbZI1jkI3qUEW1s7QcsDjFeBkKGBOW4wPamgt5mfKUj+854/KlZbjb
Z0/xI+IniD4peMdS8UeKNTbVde1GRZLm6aJI9+1FRRhAqgAKBwO1dZ8GP2mPiN+z++qyeANfOjHU
tn2uKSCO4jlKE7W2upAbk8gZ9a8kD5kLNnp8oxgU95HXcd+3jGQPxp2Rnqd38YPjf4x+Ovio+JPG
epnV9VECWqT+UsSiJScKEQBRySTx3rv/AIN/ttfFf4D+E28M+DvEUNho5le5S1ubKK4WKRgAdm4Z
CkjOM9T2rwUF1fIALEc8UHccFvlJ596LFI99+Mf7cXxe+Ofgqfwr4s8SQ3Wi3EiTSxWVjFblyvIV
ioyRnkjParP7M/7Ovw7+N2k65c+LvjRpPw2vrG6SGCy1Lyla5iMe4yjfKgI3EjjP3a+eR8r9C7Dn
GP51ZiDTMowztKwjVEBYsScBQBySTjgUmWj7rH7AXwWCLj9qrwo4Hr9k+bjvi5rwb9qT9nfwb8Db
fQJvCvxa0P4lHUmlSaHSxGr2m0Aq52SuMMTjB9BXluveAfFnhrT2vNZ8Oa1pOnAhWnvtPmgjVm4G
XZQBnjqa5yQksV3AhSOD3A7frUhc+hfgb+3Z8W/gB4Jj8J+FtQ0waLFPJcRwX2npK8bSHLANkErn
Pr161wHx8+Pni39orxynifxhcWkuoQ2sdpEtnbrCojQswBxk/eZjyT1rkNJ8G+JtctReaT4d1bU7
ViQlxa2MksQI6gMqkEjvXOeaUd2ZvmBIZR1U98jtTA+ufA3/AAU3+M/w38FaD4Y0q48Pz6bpFpFZ
W73emlpDHGoVQxV1GcAc459q+bPiR8QtX+J/jzXvFutyQvq+s3Ru7n7NEIo95H8KAnHQfWuctYnu
pljVWaWRtihRktk8ADuTxU+paJf6JOba9tLmwuCu/wAm7haKTaTgHawBwSDz7UAeg/Az9oPxf+z1
49t/FfhG6jhvERoLi3uFZ7e5iIPySRhlDYOCDnII4r6VP/BXb4zpJ/x4+FN3dhpkoP4/vvp618Nk
+VlQuSR3P60iMAwB5HfdS3Juz0bwnp+qftFfGnTbPxF4ottN1LxNqLpca3q8haCFm3OC2WHy5+VV
yByBX1gP+CVMpn3p8d/A7AEZAOMfh5h/nXweJGhIaNtuOcY4qWG3eWWQxRGV0+dliTcwz9OcVVmW
j7O8d/8ABNHUPBPg/XNfj+MngfVH0y0kvRZRylHm8tS2wEueTjH41D8KP+CpXxP+F3w68P8AhPT9
C8K3Vjo1nHYwS3FpMkjqiAKXKygZwBkgc9a+MfkfdnMirkEA5B9vfrTXjLYBO0nGwAcse368UWQj
sPix8RtR+MHxE1/xnq1vY2Go6zcm6mt9NjMcCnaq4UE5/hBOeSSTXqP7K37ZPin9lSTxE3hvQtG1
gawYnm/tONy8ZjDAbGRgcEN09q+fMvEcMpDqSCrDBB7jH1oxKMuQVLAkkjGaegXPoT9q39tDxL+1
amhp4i8OaJoz6O8rxTaYj+bKHAAUu7E7RjOOma8M0bX9R0HVbPUNL1CfStTspRPb3drI0ckUo6Mr
Aggj/GspXJYHOB/tVPjdIrE4xzjHvUi2PvK1/wCCuHjRtLs7XW/h14S8QXltbpC17diUvMVHLEHI
BY84zxmvEf2ov20PG/7UtxpUOsxw+H9CsIk8rQ9Mnk+yySgnMjqT8zAHaPQdK+eZH7bNxP8AFUch
VGOwkH3pWEpan1zff8FF/G138EfBfw5i8PaRHF4b/s7ZqheQvcCzK+WrJ0G7YuSD68V1mu/8FTvH
GvfFXwf42bwjoUDeH7O9tP7PWebbcfaNm4liMqR5SYxkdfUY+HiAEBEuPlOcinuSqb0HmAcHaMjn
pRYtXPrq2+EvxK/4KVfFTxz8QdDHhvw7cRvawT2N/fSKqL5O1NmEZm4Q5JA5NdJJ/wAEjPjgdrrq
vg1zjkDUpc5HoTBXxVBqdzZSGSC8mgaQbW8iRk3DqM4P/wCqrX/CV6zBIEj1vUo1zgbbyTA/Whpj
5j3b41/sq/Fr9i+fwz4s1bVdNsria8ZNPv8AQL9nlt7lELhuUQqMBvUHBB68+v8Aif8A4Kk634r8
Hx2+o/DLw7N4uh082lv4pedjc20wXAuIiYsowfLhQ3B718Q6pq+o6rEqXuo3d6qZMazTtIq54JAY
nB/WqTFvLVAxY9SM/wA6LdyT6n/aO/bk1P8AaQ+FHhzwv4h8GaSmv6W8Bk8VxTl7qUqhV9qmMbA5
O5gGI9vT7k/4JI2U2kfsz+LdRmsJiLnXp54leMj7Si2sABQkfMCQygjjivxziuzyOBt5Geea+u/h
z/wU4+Mnwu8DaD4V0s+HLjS9Gs47K3N3prNL5SDamSsqhjgDnA6dKH5FXO9+JX/BT7UrfwH4j8Jf
Dz4ZWHwu1a6nCS6jYyIskJDAOTCIVG8qNuSTjPtXwcZy0ryN+9LMS5Y53EnJz6g810nxO+Iur/F3
4ga94x137L/autXH2q4FrF5UYcKqgBMnAwo7n61yuckQoTuPO0Cgk/R3QP8AgrVomjeAYPCkfwR0
xdDjtTaPp0GphLZ0YfODF9mxhssSPc18FL4+utK+IY8WeGAvhi/t9QbUNPjs2G2ybzC6ImR0UEKA
eCB0xXMuFRvvbsHYw6Y+tRS2QUAqoOeSRS0DU/QXUf8AgqL4X8b6PokfxE+AuheONbsLVLeTU725
ifc2BvZVe3OwMw3bQcAnrXk37Xf7d+rftH+HdD8I6ToCeB/A+nRoZNFgmSZZ3Q/uiSEXaiKoAQcZ
J64FfKxljaMIck9ARULhXKnJwvAOOtJINx8l3IjKqkCM8HB6ipQEjJ2DZgHAHUiqxPmjdtACjAwM
U6DEkoZug9elXYSudP4E8da58OvFml+J/Dmotpmu6bMJrS5VAxRvcEHIPORX6BH/AIKpeA9a13Qv
E3iP4GRat4y02CNI9a+1QmWN1ySYiY8qNzMQM8Zr83FKM+ZH2qgzjPrSjbGZCzYAGM91+v6UnFMu
zR7r8Zv2o9X+Pvxsi8beNLOPVNCs7+NrXw3K4EKWSS7/ALMWC87hwz4yc16X+1T+35e/HjwVpPgL
wZoUngHwJZwolzpiSRsbnYymJQVX5UQIDtB5J56V8ePNGzsjcqMHCjPPrSbwynO4Zz35PFTyhzH2
p8Av+CgkfgD4M6z8LviP4Uf4keFpY/JsIJJI08mI7maJ9y/OobDKeo/LHafAv/gpP4M+HXwD074X
eIvhbd65pkKXNq8VveIbeW3kldwrBxnOHK8+meM1+fLyEI4UMVx83qKdHIGUfMQqnkE4GanlGtT9
J/hv/wAFKfgb8HJ7+fwV8Brrw5PdIsdxJp9xAjSqMkAk+hNfMHgD9rzV/hj+1FrXxZ8PW/2W217U
7i41DR5iredZTXPnNCGxhX4ADD0r53ZWTIMi7em7Pc/5NRyyeUnzNkdeKaQH6Qav/wAFBf2dNY8f
P401D4A3d34qNwlyNVMkLStKuNrk7sZG0du1fOX7WP7Y+sftK/FjTfENxZfZPC+hThtI0OUqXRSY
zIZHAwWcxe4AwK+amlJVQm5ySSf9njpQZSpwynJ4GeatINT6X/bL/bb139qjXbSCO3m0PwXpxVrT
R3KljKYwHeRlOGG4HbnoMV81yyvkhQcY79elN5WPH3hn73XBpjckk4JYn7p6iixN2e1fsv8A7Tni
T9mn4nWfiTR2e40+Vlg1TTnOUu7cuhkAz918J8rdq+z7b/gpZ8B/DPjrWPHmifBjV7Px1qUMiTar
mEGRyB9/5+ASq5IXJx3r8yIm3sMBvT2xSxuUuZACSMZYn6VNhq5794a/aI07xj+0jbfFP4z6VL46
s97TT6ZAyxhGVAsIjTIXapVeMjOMkmuz/bI/bw139pPxVa2+hm98NeCNOZZLHT2KxztLs2tJKyMQ
cHO0A4HGRXyYr+YcoxcHjKDB+lSPH8hVOT95U6nHPNTZFpO59y+Nf+Cgnhn4yfswXHgH4n+D77xN
41t45Dpur2rJFCkoGIJmIIO4A/MACG29Oa9Uvv8AgpP8CfHHwY0LwF49+HfiPXtPtLG1gubeLyhC
00SKuVYTI2B6kD0xX5ilgq43FXGOnSoWmXIB5PcUWEz9R/h//wAFGP2cfgp4K1bw74C+FviLRre9
Mkz2gjhZJpDGFDNI87EjAUY574FfMn7EH7alx+zB44vjqFlLqHg3WVD6lY2UKm4jlUEI8ZZgBgsc
jPIr5S8wHksxGCQD/KhHO47SV55xipsK9j9KNC/4KG/Bf4O/Em31P4YfDzWdJ0/W7uSfxW0+3zLl
NrNG0atK2HDsxxleM9a+P/2t/iX4D+Lfxn1TxT8PtA1Pw7pWqx+fdxai43zXjOzSSBQ7BQSegP4C
vFo5QJSzIW7kFufQ/wA6ddSs0hTfmPqV7Kc1SutQbuNbcMbRvZv4SKbuZlVM43dlpNyNFmNm3nrk
Y4x2qQv5oJCBTnjNUmShJV+znjoo5HvSQyC4k+Ybz6ds0srK5JRmycDbjOT3pd6NuK4jfpyOh9qT
ASRhHK8jAlsYAbsajM4kk3AbQRypNSxFJXfMpDDjDVXOOV2jPoeaQxXiVMZySPm6/wAqfOVmAIzk
DC8dsVJHIDGTxkcDv2pTkkgAk4BxngUwI5HCgnG3oB6n606MLhyTkjHUCmzkh85zgcqw6U5Z1MLo
7c5ycDmps7gKP3Yxk/KeucY9eO9E23yy4GCfuj1pJCEQsPuE8Z6mkYmUcISVGc+p7Ci1guKGztYv
gKMYI6GiP7+7O6PHIzjB9aJhlv7ufQZpmDEil8hHG4AdT/nFFtREzGTygCQckliOPbmoLiUeaBGG
2dsk9KsMI3TcshKkbtnY/wCeKgktzKFI+SPsSe9VYYOwkK+WSSvQKMcVIqP9zJJQFWGO31odGjlV
y649u1LIzl90DEjcST/eH0oswHyGIuAA20EEZH6VG8bfaBuLkAYBXoKbHM0cWWZSwXKkjPU96fJL
5pCjhezdyKWorlfYNyjnaM49c1IgcnKk7c8ljyxzT2QBs7QZOwPHFJJC7bZFw20gAcce+KEhiPlW
353ZO0qc/nQpC5KMQTxuI9aBw5Ri43dx1P0p8im2JUnJIHy4yB7UxDkQ24zs8zaCM5/w/Chl8u2i
3bZDn5sr90dfx7/lTHkdI2OzbGeMHsKGTdt68cke1UkAOFdjuAHccYJp0j5BwfnJwMe3rS7Ej2sX
YjpkfTGPrTPNKbGXnGcEj2x/hzU2GOunCPliQWGAfamQuBC6kkksMDJppYuqEA9OBnNSTbxFhHGR
+tWAI7pBIpB3jlSB/P8ACnSR+anyEB+rMBxj0FMklGflyemSW/Pn0piSkcuMJ0PY49qLAVL0AMQA
24nJ9K1rS4/dwKxC4A46ke9ZN0TCFPcA478fStuwePKnapXAGD9Py/8A1VEhHuXiKeKH9k3VpQhY
XPiXT41fPZbS4IPT6/mK+c3cbMKzAjPPvX0R44iW2/ZZaOKXyhc+LIPkC7S4WwlJyM89V9McetfP
u2MYDE4Xkg81CBO5Gs7KqfKXbBLKe3HJpFlWNATG3zHI2mnlGYmRMICO3cU/yiAF8tZCOSp6rmh2
LSOr1x47idT8whYnC8ZU4O09PfrVeSVp4tqhm3Bip6gH6flVnXbCaEhDCFYqVwRlsD0H51mpcqqZ
QszY4OOOn6d665J3OOEi9bM8cSb0Tyxlvl4OT3B9f8K2bBmS4iRZvNUZV42UYIA65PX0/rXMWiqh
SKV2KFCWOTnPJHHp0rpdJszvtyp3FuqE8Z9c/jU8qLepweuIF1W6XAUmQjavO3J6dfoK3PGVr5dp
YDG3EA+91J7GsrVovN1u6UoDIbhhuJyOvrVjxRqMd4IIFbdLAuxmJGOvbFYNamsbLc+l/wDgmdPJ
a/tc/Dv5WEU0l3GxVyNx+yykZA9xnn0r6b/4KgQ7fjtokpCvu0SMAnHysJpOgPGTn2r4r/Y4+K+m
/Bn4/wDg/wAW6xFcXGm6dcu9xHapudEaJ4y2MjOA+cdTjgE1+iP/AAUH+FN98U9H0b4teGLpNb8O
jTo4Slshd4oyJX84EDgYfn0ojox1ndJrofBkc01teQXEUjxXVs6yCWBjGysrAgoRyCDzkelfqv8A
sg/E2D9qP4ZhfHOgWmsap4akig/tC9iSX7QzLkShSPkfCrux3r88f2bPgvc/Hj4p2vhOC+FgiRNc
XUsi5BgQoJAo67juOD+dfcnxw/aS0r9jix0L4efD/Q7a8m02JftcV8jACMgMpDqV3OcnJ6c05NsS
Stp1PnH9s79oPWvij4yvvDzLNp/h/S5Wgjso5QUd0LK0jHGc5HA7Cvqj/gnmYL79li+t7mXZb/2l
fxSyZyVjIXkk/wCyc1yP7QHwi8PftYfB5Pi74CRrPW4oHkmhn/decsQcOpA48wHo3QjrXb/sC6Nq
ll+zJqEOoWVzbT3WoXksCzxlDJG0abCo7j+uai5UdE7njP7UujaN+z5pvw38SfDSZdP1S5s7mCTV
rFF33iFYxvfPB+8x56YxxXSf8Ewp5bi4+JU85kkmuJLKVpZGzvP7/J9j2xnsK8/+An7Ifib4267P
N47k1bRvCmjmSCCzuS8U7O3zKIwwICZO4kdeOua9M/4JyaZFovjT4uWMEkkkUE9pEvmqA+1XuQuc
EjpVJXLWl7n034I+KPhT4ua1rGjqsD654a1F1lsLjDSxNGxVJ1GOh9R0PFfJ/wDwVCHlS+DJc/8A
LC5Xlc4O+I5zjjpjP0ryy01jUdD/AOCgEJ0yeSyE3itrOY2zFTJCZm3JJ2IIIyD1zXq3/BUV5Y4v
CBUbU8m6y/bO+H/E/jiqSOeT0TR+f1ldyC5s5VzMEmBCbcMCpyTjORyD0r9SP+ChcCTfs66eZCrB
dRgyGON2YZAQOlfnJ8FfhJ4h+NXjyz8OaBEUml+eW5lVvKgQZLMzDofTr1xivu//AIKQ/Efw/pvw
20rwNLctNr7XUF79nWMsqxKjruYnpyQQOvTinbUbk1FH5o3DeQJ8jYoOdqkfKT2HFQPDIsYyN+WA
ymMqOetOWHawLqDKBwGG5R1wDk+3piqzMbVzIWxlyWiDfKx64HpW8UYczb1DeFllVw5VeDsJxtPf
OOKlZvMgEiyGUAZXHGM9AB6daVVleLzySCQdzBdwX2x6e/tUI2J8wARNuflPCn0J7/8A16Ggu7lq
zki2t5sg8vecFCN2M9MDt/hUd2sJEY8wISeMYDfdxgdM8fyqGeaMjAVoAXHA5P8AX3oktYvLdQGA
ZgxwAD15/OhMauxZt8satGHGAcr046A9OMYFPM0VkQS2d5PGSdvpzzzwPxoEiRSlQ4doxlQnKt3x
k+lRBGLup+6SSAf73/6v8iquEokqDJlkWRCroDgKD8x6D2HShZIZcSHIYrt+cdAP68GoJ7ghHVS7
eQpJVxk89BgZx/8Aqp0YMcanCsA+BgYIzjk+1QZJO59X/wDBOf8AdftQ6SeofTL5Pl5A/do2Tx/s
nua6f/gpiSnx0tCVKA6HAwdVOcb5hn35yK4z/gnBcg/tS6PF95v7P1AZDYGAi9uh6D356V23/BUC
ymPxk0qZY5Np0iBRLtOAPNkBAPTGSM9+az6nXK9kfHumQ3eo3dnbWcD3VxczCCO1SPezuThQFzzl
sCv0a/Zv+Anhr9k34czfFz4nmOHxILY3MFq67ZbFWTmBELfNKf046c1+c2l6lfaNrEN1FMbe/t5l
nimRvmSRGDBueuCB2r9Hvgh8W/C/7cHwnl+HPxDkit/HNvCyxXgCCWcr/wAvMAwAHG1dyf0IwNDi
9D49/aJ+PmvftCeMp9V1EyW2lRZjstNV2WOOEM2GKZP7wg4J79OlfpB+xD8OT4D/AGe9Pt21q11v
+13Oo/aLMho4xJFGvl5HUjbzX5e/Hn4M+IvgP45u9A8QQEREtNY3AwyXEBYqrg5PpyO2a/Q3/gmg
Gb9nS/ALBDrdyUU/d5hh4XA6Dpx6H1oaGldHx78TPiH4w/Z7g+IXwej8Q2etaDqMwYysTOIg2Hfy
vm+QnIBUg8j3zVz9g74d3Pjv9oHR9Uj1W0s49AxqLwTqfNucNtIjHQH5uT6EcV4F4z3/APCV6pG5
AlW7n+Rwc48xux5BHH6V6T+yVOP+Gl/hwkTb5G1WPecqTt2tnBHbgH3xTa0CKdz7l/4KA6d4k8NW
fhL4m+HNeh0q48L3DK1sZjFJc+ayDC4I3D5cMp6qTX56/GD4s6t8b/Hl/wCJ9eihtL67EcYjgb9y
AiBVwDyPu9eep9cV9jf8FUt4uPh+Qx8vZe8H7u7MOPx6/hmvz6a5ExKnbliFBfj9f1ppaXMU7PU/
Zv8AZ7+Et54I/ZrsvCF1qdpe3V3aXT/bbQloc3BdgQe4G/8ASvzo+I/xi8beAPBXiv4Ea3e2etaX
bX8UZvUneZ4BEyOY4j0CFlU4P3csMV9ufsZ3TTfsZ7xM0wWPU/LPXjdIRtHp6AV+V95IIJWd5N24
ru3ZyzdeRjvzUI1b1sfWn/BOv4baj4o+MsfiuG8tYdO8OrIZLaR/303nRSRgooyMDPJOPTvXun7c
WoeMPhB8RPB/xb8NarBa2llGul3di8hH2nLu4R0/ijIznnIIBr5S/YIu5Iv2p/CBhmeOGX7WjhSc
MptZCFb15UV6p/wVGvHX4n+FLfz38r+xnfyC/wAjHzzyVx1xkVSV2OcrWZ87+JvinN47+OM/j/Ur
dYJrjVINRntrUZEQj8tQqk5JO2MYPHJr9Hv2pvg/q37VPwr8NN4RubeydLn7ao1dJLc7GjZcEbSQ
Qcdq/Ln4cKF+IPhQM3yPqlrlORn96hAOAeODX6Lf8FL9SvNI+FPhWSzu5rQnWwr+VIyBl8iQ4OPc
D8aTWok9Edf4L8NS/st/skajpXjCdZp7aG9DNpaSXAZpd5QD5c9+pwB6ivzO8G/Fnxj8OdOntfC/
ijVfD1ndYa6tLOYKjPtCj5SpGeg4xnNfpB+xhqU/iD9kSZ9QuJb0s2oxM1w7SfICwABYkkY9fevn
X9gX9mnTfidqU3jvxJJDd6XoV19kg0doziS4CRSrMzZ5VQy4XHJyTxxSWg1udn+xd+zinw2tH+Nv
j67fRYLa0e6sYZXYGKJkcSyzgjOSG4A/rXzx+1X8drf9oD4rvqunWEkOmWcY07TsAmS6RZHIkK4B
BYtwnbHPPFegftWftI69+0Z45b4eeC9OurjRrO7MEGmxRkXF/cR5Dl1ViDGpBIB44ya87+D3gHXv
h9+0/wDDPSfFGlXWkX76vbTNa30JXejMwDAk8jIP4gUbbDSu0fQP7P8A+xHpOg+F9J+IHxK1a70O
8+1213Z2VtIoWNSU8oS7kJ3s5HAxjj3ruv8AgpxAsnw28GsWkQf20wJjHIzbSjPSl/b61W/svG3w
gt7e5mis5tTZpoI2YLIRPbAEgHBxuPX1NL/wU5l2fCnwl8xAOt5O32t5Tk9+oFNbkOzV33Nr9iPS
tO8Ofso3Gv6VY29nrUo1CSa8jiUyyPG8mzc2OcYHB46+tfPPgT4a/HLwL428O/GHUZ0+y6ze2p1C
+FzHJNLbXMkakSRkcAjaAB93jGK+hf2IrmLV/wBke90rTpY7m+hbUYfssTgyIzlyoI6jcWyM+teA
eEf2kPij40vfDfwau/C9u0mmXtlb3kUVtMt+kNvKjbmB+UAKgy2O3vUmiWuh9efGaXwr4E+Mvwy8
Y6tDb2F1Nc3eny6n5fzlXtyEVyBkjdjk9PYV4x+2l+y54/8AjZ8UdL8Q+FLS1vtNh0qO2JlvEiZZ
BJI2VDEdmXmvoj4kzWU3xg+FFnO8Mlz9rv5kgYguMWcnzY9M9/XFfI//AAUJ+JfjDwb8XNNtNA8V
6noto2iJMtpZ3zwLLIZpAWwrDJwoHftRElq7SPVv209a07wd+ybaeFNZvYE8Q3UWm2ttak5MssUs
PmEY7AK3NXfh/DN8J/2FINd8H2lvYeI5dAjv2uYoVZ57plA8xs/fPPGfbpR+1ZGNa/Ykj1G+KXl+
LHSZxdSgO4kaWDcwJ7nJ+uas6Hc3HiT9gKzbQFfU74eGkWKG3XzHaSPGU2jqRtII9qNbjtZaniX7
OHw/+MXwM+Nvh268T20llpHjK6ki1F2njuFmk2STKGCk+W+SenbIrF/b98N+GfAfx98J63H4fjuY
dUhF7q1jARGNQKTjduORgsmQSOvGc10vwa/ap8bftI/GP4e6DqHhi2trTR71r2/utPSUtG6QyITI
G+4u449M8dqwf+CoN3GPiV4MhEgbbpU/mKCPlJmXbkDnn0NVFLZmdtVY+wP2avjZoHxr8By3fh3Q
rjw9p2kyjT0sZwgCBY1IC7eMAHFfkP43mEnizXI2ZhI19doSW2HPmvnaewr9Dv8AgmPub4QeKQzF
sa3gZPb7NCB+Gc1+d3jmSRfGXiPbtAj1K6UL/EB5zgH8TVRSJle6O/8Aid+1Z8RviV4Ps/DfiPxG
r6TaukrCK3WGSYpwvmOmN3UccZNffH7Gtlf/AAs/ZoutR8cznRba5u5L+KXU7gDbbyRxCMksTjcc
gKfXpzXyn+wH8HdA+KHxTub7xDC11/YFvFqEFsxBR5TJhfMUjlRjIHrVn9vr48eJPFnxI1P4eQMu
l+FPD8yRS2sLHF9JsSRWfgcLkAKOOtJLnlbsVzcp7r/wTJdJPA/jgxMWU6rEckcf6nt+NfPfxH/Z
b+ON5448QXdl4e1y5sbnUriSJ7fUkAMZlYqw/eg4wV4xXjfw1+IPxF8F/bx4D1LXtOjm2G9fSInl
Rm+baXARgOM4r6C/ZM/aT+IvjL9oXwfoGseNdT1nTL6W4We3ndHiYLBIwBwo5BUflSs4OxoveZ8t
XmgeIdS8TPoElvqF34l87+zfsN3I0lys+4osfzHPDZ78Cv1a+Knh/wAX6H+yHZaf4YtLhPF2laZp
rR20AzKskLQmQYB5ICtkZ5wa+T/+CjVpbeDPj34b1rRR/Y2rTaZ9vkvrJRHIZ4p/lkLAcsB3PYYr
6Y+O3xH8T6D+xFD4qsdRltvEk+kaW8t7EFLh5mgEp6YyQ7du9J6zRHN7tz4i/aZ/ajt/2jfBfw+h
vNKl07xHpBuH1OfywIGdlCL5fJbB2lueBn6EdD+yD+yT/wALguh4u8WxfYvAdk5bNwNi6iF3BlDZ
GEUrkt04A68j5Vj/AHaAYYrnkE7SR3OfrzivrX4C/t9RfCX4QaT4B1DwLH4msrUTRC5N/wCWssUk
jttdDEwJw+Dzg+lOV1awQa1P0Bn8XaF4z+D3iuXw6wfS7K0vtOjMYAjbyo2Q7CM5Tjg+lfkV8Aby
XTPi98PZdPvJoZ/7WsIC8UpU7WnjVgSOcEEg+2a/VT4BfFnQviR8A5fFeleFbfw9pcf2wSaNbBPL
zHu3gbUUfNj+7X5eeCPEOn+K/wBpjw1rek6Rb6BYar4osp7fS4DlLVWuYvkU4HuTgY5OOBS0aLjp
U1P0c/bL/Zdvf2ltM8NW+leILTQ9R0mSeWNbuNnWcOqgj5SDxgfnX5P+NvCmu+BPEuq6B4ispNM1
XT5fLmtpVKkAZw6kgbkbAKnuPyr9LP29/iFrvw38Z/BvVPD99Jp96NWnjZ0CkOjiNWVgwIIIJBH0
rjv+CqvhDRoPCPg/xPHpsEevSambKXUETErwC3lcIx/iAZQRnpVRlrYzstzzD9hr9ojw3YabdfBn
4gWFkPDviCWWCzu3hwDLNkPFM+cBW42txg8ehryj9r39lLVP2bvFLz2MdxqPgnUnZ9PvlR3Frk8W
8rHjd6H+IZ4rx/wn4Q1b4j+JtP8ADmgWT6hrGpSmC3tgyoGb0ZiRtAHOe2PpX6YftD+LvDf7Pv7K
Nh4H+KF4nxF8Q6jZraWemXOxZmcIAHUgZ2QnnzD8xx17VUW+eyCSS1PyvVnjcsHwCAVIyCCD6/l+
NfqN/wAExPGviTxn4H8dN4j1/UtdNrqcMdudRumnMKGLJVSxJAzX5Zx5aCJGY5QBT1OABX6Y/wDB
J0KPA3xE2nP/ABN4M57HyKmpbmRcLcrufnP8TLhB8RPFKpucNq97uKj/AKeZOlcpOSrMScA5IA/l
sUrz+Hb/ABxVzXQiXULR/vUAJKv/ALxrMZedz8E+hxgY4pdCkEmWkKA7g2clj3z6/wCetNukWOR1
AztYruJyT701myCuAw7cjiniPKKrKASN3z8flU3GtTXmKiNjtZHPIVuAf8elS+ZI2nBXDFV5dgME
UksQeIIzjgAhj1X8PrTJJ0kQx7mUDA2c84710szuVkKIkkLPnIBHy5OR0qNJ2+0bFGVB+YKO1Qzm
NGYxg7skZI/zzQMxqrRhlYZ7Vm2VcLoxNdsiljE7gZwPxrR8Q30EnkW1qcwwgI3zblbjr0HoKx2d
ZFk3Nhycg89arlyvy9h3HenYq9j0D4S+OdS+G/j3RfFOhSrFqejXK3Ns0vzIXHYjqQRkGv1h8V6p
4E/4KDfAiDxXZ6pb+HfHegxGCSDUJfKSKXaGlUKTlkbnac9uemR+O2mCO2uEkQ5GRkHjjvXb6feS
m7MTb3DZ+TaducHB9M447CmoXYpcrWp9Ofsi+BfDHjP9oLw/4e8WPnS/NlcmS42gyxgvGmeRglT3
5r6l/b++P2u+Ftb0/wAD+DNUj0XS7a2S8a60efy5xJ8w2fIcY6HpX5042mNpZYlaJcsS24ueMc9e
oHFXp79ppGeW4aLZHkfKeF79vp0+tW6FzJSsj9Tvhf4o0j9sL9lC8h+IL2R1bS2kt3uIZQsoeGNW
SbnnLZBI6H0rof2AvAZ+HvwUmnn1rTr5NfujqMC2kgxEhRUw3+18vI7Yr8jo7u6jZyly0MOVAETY
DDHJx0GfmH41KLqXSUAimdUQnKoTtzjB79On4jPWsnTaNIyTZ+rfwN/Yr0j4f/E/WfGXinULHXJk
upJdKgWTMUO/duZ1YkbgGAH51xH7FV5pNh+1N8Z7O1ntYIVeRbaKPChkF3KRt7YAwMDpX5wz6vqM
ACS3c4tiGaPYx2suRkgkewqldXlykxuUu5kum3ETxEq65JJ5HPOeR71apNkudnofVvxv16xP7dCT
Pch4IfFNiZJSwEZXzoQTuPA28gn2NfQv/BUfUIF8E+DHjdZW+03L/unG7b5a88ckZx+VfmTJfTMI
/Nkm3r867pCxycHOO5yAeKW41u8uwqveySli/lRzzsVQkZOASQMn+Q9BQ6bQozVkrHvf7JH7NC/t
EfEg21/qaWmgafsuNSiiuQlw6HcFWMbe7LyfbpzX0L+31+03oa+Hh8HvCoW7ttPeKLUrttxMRgI2
QrxhuQuWycYPFfn5Y3t9aXNxJazTWs6qV+0QSFCoIwQCCPU8VFLczSPNJNK8833CWYtu5+9nv164
pRp66kyndCTStFvUM5cEkbQQMD5gM8e/1qNblsBV+WVi3mKGBBPTj17HpUcnlS25BcswBOS3A6/4
1Dbv9nlVdx2uuBu6A+2e1bKKRgncttcibLeUXBX5mHf0xj/PFOlMdpvOWfB+UErgH0HfPeq/2zyp
WhkIZAAdm078A+vpUcmxpVyuAOmDlSM98+2Pzp8ty3Ky0JLu4dViEiysX+Yjb8uOeTj8u3WmxqJI
WaZH2FsqhwNvHAGPpTS/2rfsR4hk7yDlfXIHb6e1TNJL5wQtuXA+bO08egx9BT5UkZJu92NZmj3i
MtIQN3H3SMDBPoaSX7RMr7Dkn5WIbvnr9f8ACnAmaWR3URqDkE5IY+/qMfyqK5uHNypVFLD5QQMK
v4j24zUbGjlckSF2kMhjZ5YX8tsgEBiMg+/anxTb5wHJIQHIY4A6jP06moWGZmTIxgHarcf/AF6n
EqI7KJsv1bAyB7jp/kU7BA+kv+Cfc0cf7WHgZ2lVQ4u1GXxkm1lCjB6k8/lX15/wUE/Zn8b/ABh1
iw8SeGbazvbDSdLYXEM9wInJVpHO3PXhgf8AgNflvYapfaNqlvqGnXs1je2b+dHJDKUdHA4II5B5
7c811F38aPiHeC8tJvHfia4t7lNjRyavccrg5Vhv5z0NQ4m7aehhyw+VI0UrAHdwzL0P+B56+vav
r79h79sVPhxt+Hvju4+2/D++UQWtze8ppgYNuQgg7on4BB+7n04r4yuQ1rGsAP7kcgLnA9v1qOMC
7+U74433Ab8EFeg4/Khq4k1E+1P27/2TF+F1zN4+8LsLnwjqkgDRqY1FlI5JVEwAWjYZxycdK+sP
+Ccnw31z4ffABZdaiSAa5d/2nZxhwzfZ3hjVCwHCk7enpjvX5K6j8RvF+uacul6r4m1fVNOtii29
hd3sssMYQbVKRliOBjn68Vu6R8bPiD4Y061sNJ8c+I9P022BWO0tdXuI40Xk7QquAOTStcFM6z9p
zwBrPw0+MviTTNXsvIu7i/muUMbqwaCaR5I3B7ggj3GCMCul/Yw+Gmt+PP2gvDMmkRpPDol/b6te
iRghSBJFLNyeTkDgdzXh3iLxhq/jO8Opa7q99q9/KAr3OpXDzSkAYwWPPA/n71J4a8Wa74H1pdS0
DWNQ0C+MbRG6066aCVkJBwShDY7+lU4sFJI/T/8A4Ke/D7WPFPww0PXtPthNZeH55p71w3zRLIqK
rAdTyO344Fflwzhba4V/nxlyq8Nxkk4xj8Peus8R/Gnx/wCJtLm0nWfGut+ItNlI823vdUnkifHP
KFsHBAPPpXBmdZpZCdrZXBZhnJJPBpcrIVnK7P2h/Ye+Hev/AA+/Zt03StetVtb67lnvYohKsh8q
bDRliCRnB5r8nfin4N1r4a+NtV8PeIbZrHWLNts8bOJAQQNpDA4wykHr/Krek/tD/FDQLW1srD4i
eJ7GzgQR29vHqUnlxqPuoB6Afyrj/E3iXVPEusT6lrGpXOqapdMTc3V7M0kkuAACzH2GPwoimato
+lv+CeXw/wBc8VftA6HrWm2LPpPhp2n1C6eVRtEkMqJhScnJbt0xXvn/AAVI8Ba7fw+GPF9rZyy6
Dp0ElnfTxOMRNJIpj3KecEjGR+Pavz98E/EzxN4C1aa68LeItW8PXEybXk025aHeB0BA4bknrWv4
1+PXxB8f6P8A2V4j8b6xrenLMsv2a7uWaJmUjaSD1wafIybpmB4fv10rxJo9xM5WGC8heR2Iyqq6
sWB9sV+tP7YPgTVv2mfgJog+G7W3iASajFqEUkNzGiSRCKVchmIGQzjjr+Vfj5ezqsiKwV4lOQgx
liRz/T8q9B8F/H34jeAtJTSfDvjjW9B0kM0v2a1uykUbN97aDnGTzScXcOZbM/U79mLwrf8A7OP7
NVza/EV7Xw5JaXV1cTPc3UZjRHwEy4YqSfTPU15L/wAEwfB2hXmg+LfFZ0yB9bj1D7Jb37LukSBo
lJVT2BbPTrivg7xn+0N8RfiBop0fxJ4z1bXNHeWOVra7uS6yFTkcDHQ5Iz7dxVT4f/GPxn8NBenw
h4s1Lw5FdFTcLZTELJgnaWVsgMM9fzo9mxqabPsz4p/spfG/9oT9oCa/8X28eneHBcyWtrq0csMi
W1krMUCwhwSSO+MknmvPfiH8A9N/ZB/aV+E73/iUaholzf29819cxCA2/l3Mayl/mI24YHPbmvIj
+2L8Zy0jR/EzXAcBQWlQjOTzjb/nFcN4++KfjD4mXMGo+MNfutevLdPLimvZA3lpn5guAAufQUJM
rmtqj9Sv20fhX4n+McPw01LwXYR67aadqJvp5oZUIELCMq6EkbgQDjGag/4KH+GtV+If7OsF94Yg
XWIdM1EajcGAhgIEhmV3467S3IHp7V+dPh39p74reDfDFlomh+PtW03SrNfKgsxKrCJAMhVLKSFH
IxngY6VlWPx++Ilj4WvvDVv461iLQ73zWmsVnBjYSEmQcjOCWJOMdTTUGtRNqx90f8E5vjjoF34C
n+FGoyDTddMtzdWkzyARXSSsCVTdg+YCx+XByB161yPgX9g34jaV8cU0m91m7h8C6Vcx30OtLKTH
eBGWRU+z+ZgOT8pODjB65r4Q07V77RNQt7zTbya01SyuFlt5bZtsiOjBkdSM8hl6HrXr/wDw2v8A
G7ytv/Cw9VPZnKQkH2/1fH1yOtJQfQlyine5+jHxG/aa8Ef8NK+AvAw1KNrnTrua4vdREqfZbaR7
aVEhd88OWI4OAMjPJryn/goB8AfHXxV+JPhfWfCXhqfW7a20uSB57cofLl8wsgwxGOvXpgV+cC3Z
e5uZpGMtzO7TzO55kY8s5PrnmvYtK/bQ+NejaXaWVr4/v47a2hEcSyQwSEIoAA3GPJwMcnmhxaHd
Oz7H3/8AtTRr4K/YbGg6rdRWOqpo+n2CQySgO8yeUGVB/ERtPT0rE/ZW+IHhz9on9lq8+FVjqL6P
4jsNGbSrpZl+YB1YLNGMguvrjGD+FfnP8RvjV44+MF9az+L/ABPda+bLetqJlRFiV8bgFRVGTtHJ
BPArC8G+O/EXw88R2fiDw3qkul61ZMzQXNsiswBUgjDAgggnIIP6VNmylo22fe/7MH7JPxM8O/Gq
HVPG2o31noXhC5Z7Fri5kmhv/vIpiVnIjTbg9OM4rzL9v34u6T8a/jPo3hnwlHc6jc+HvN02W4iA
MdxcSvHhYSpJYjaBnjk4FeR61+2x8b9b0jUtN1Hx5d/Y72JoJI47a3jby2BDLuEWR6ZBzzXj+l61
Po97ZX1lcS293BPHdQTwthllRt6MOvIIU/UdKtQa1IbTa1P1j/4J5/C/xd8Mfht4lt/F2jXOiXl9
qv2iGC6++U8lFzjsMrX54ftF/Cvxt8PvHviDUPEvhvUNI03UdXu2sL90/dXGZHcYboDt5x6fjW9/
w3l8eFiWL/hYE+9Vzv8A7Osznnv+59q4r4oftJfEj416ZY2PjLxM+tadZym4hgNvDEBLtZdx8uNc
nazDnpk0Ri4jbTPqP/glVCrfErxs7QjcujwbZCo7zHODXjf7eer3d/8AtP8AjazvLma8t7WaCG2i
lm3JEjW0LFVUnCjdk8c85ryX4V/Gvxh8GNcudY8I662jX9xB9nuGSNJUdM5A2urDqPr+NYvjTxvq
3xA8Sanr3iDUpNW1jUJftFzcyqoLMFCjhQABhVGF44pqLTM202j9Gv8AglbrNvL4V8eWDXcTXhv4
J0ty43+X5O3OOuNwIzXD/Bb4F/EbQ/25E8R6j4P1S08PR65qd0dWktysDRSrcbG3ZwcmRcV8TfD/
AOJPiL4U+JbbxD4X1mTSNWtlcRzxKrcMMMuxlKtkcEEV7Nb/APBQj48y/wDM7LjsDpdpnPof3Q/T
HSs/Zva2hqqsb3R7T/wU9hfxF8Z/BmkaQn9oa02kyQR2kA8yUySTfIu0cgtjjp0r6K/aE8C+IL/9
gS48OWelXVz4gtvD2nxmySMtMkkXkl/lXOSu1iQM9DX5R3PxO8R3XjuTxt/bM3/CXPe/2idUkCbv
PBBDBdu0YxwuMAV7Ncf8FD/jnd27W3/CYQsJIyp/4ldsrYxzg+X1x34p+zbmpEcy5bHz59oCwbY3
csucKBhl7446H2Pev1d/Z5gl8Xf8E7LnStDUajqh0jVrRbWJg8hnMs+EwP4jkYHuK/JSeXz3bzZi
J9xkJJ5dick8fUmvRvg/+0b48+A95fz+C9b/ALKGpIgukmgSaGUpkK2xwcNjjIwfwocWmmhtqUHB
9T074N/sG/Fb4pX91ZX2kz+BbOzgVvtGvWskUcp3YCRgKCTgE9PT1r1f9gLwqPh3+2X4u8J65Pp1
zqmk6bdWUNxayBoppFkt2zHn+LaTx1HPoa8g1z/goh8b9Z0q902fxXZ/Z7qB4JHg0yCN8MMEqwGV
OCfpXz54e8TXnhrVNP1XTJ5LG/sZ0u7e7jbDpKrBgwz7gZBzkE5ocZNaGinbTofr7pd18S7T9vLV
EvZtdHw1l0kLbiRX/szzvKQ8H7u/du9+tfG//BUjSbix/aMi1O8s7mHT73SLWG1uXRhDcSJ5xdVb
7pKhhkZ4rzrxz+338ZvHnhq80TU/EluLCdkJ+yWEVvM21gww4Bxyue1cb8a/2q/iP8fNB0jS/Gur
WWo2emzfaYo4LJIGMhTZuZl64BPHHX6U4wd7kXS0PQ/2O/2QdU/aQ8ZRX+pwXFh4B07Bvrxg0ZvS
esMDAEEjHJzwK9u/bl/a/sfDGkSfBb4VH+ztPsFFlq2p2LNB5Xl8G1iZcZyAA7D6d6+T/BX7XvxP
+F3wu1LwDoevJZ+HbtZxsNsklxH5oIk8uUjK8kkdSM8HoK8dYvboYzGxl5fJOSWPOSe56n8apRs7
yFKTlotixawSXs8UUMXnTyuIxCpySScADue2B+FfrJ/wS38D+IfB3w58af2/omo6JJdapE0EWo2z
wsyrCBlQ4BI96/JbSdUudJ1C31C2mMN9aypPFMCD5bqwZSAQQcMor6rj/wCCofx28oJ/a2hsQMFz
pCgnjr96s5RcmilJWsfOXxTVX+JnjOVxlzrN6WYNkk+fJ19OtcaVAAZN4AHzbhV3W9Wm1LVb+7uJ
TNd3c0tzNKwALO7FmOBwOWPas8yyshcgjdyFPXHatrkKxJzcSKGZlGQvHX+vWv0A/Zr8HfHf9kn4
a2/xV0Ox0/xr4C1+ygu5/ClleSGYeaECT7BEQHXgNszxwRxX58zM2xVDZUgtgdj9fyr3/wCAX7cH
xO/Z48KXHh7wveadc6VLM1wLTVbZpkgYqFPlEONoOASvI44xWUlctWPvT4t/AXw5+3L8A7TxXoPh
s+AviLpiO6Ws9r9lUz8F4ZmMY3o23KuOmfqK5n/gkPuttH+KNjdOsd9b39rFJalwWj2JKjcZPG4E
Zr5V+Jf/AAUf+MnxZ8D6n4Wv7rR9N07UkEFxLpdo8Nw0efnVZN525xjp0J5rxT4NfGjxJ8CvHlr4
u8LXy2mqWrMGWcF4biNgQY5EBG5ec+oIGCMCpcXZFKVr9j7G8TfET9qbx3+0j4h8HeFPEXiTSbeX
W7uCxea18mxht0d9hMhjIA2jr/Ovn39r/wAJ/F/wV8Srax+MevR+I9Z+xJJZ38Fz5yGAu42r8q7e
VJI2jt1r1Qf8FafjTEXJsvCkinhS+ly5XjqcTjvXyj8RviRr3xP8Z6p4o8TXv9oavqszzSlWYpEC
SQqKSdqjoB6VotyOblsj9XP+CXUiXP7JOuQxHfImtXwaMD5gTFGeR79vWvgj4hfsWfFH4afCe2+J
upWunt4dCwyyW8d2TdRxyMFRihXuSCQDwPpXGfs9/tQeNv2bPE11qvg+8gkju4Db3VhqCPLayjjY
5jV1+ZSB8wOccVuftG/tm+P/ANpi10228VS6fa6Zp5cpp+kxyQxTO2PnkUyPuK7RjJwMn1qFGSVi
uaLlzJn6JfYbnW/+CUsNtp0Ml1dS+EUEUVuhd2O8HAA5JGP0qz8P9Kv4/wDglld6b9muV1VPB+pR
G18sibzP33y7cZB9sZr89P2dv24viF+zRoN/ovhr7BqGkXcomWz1uOSaO2fGGMQWRSN3BPYkDvXc
aV/wVC+MWkeNdb10Dw/cJqkUUcmmz2MxtoPLDBXiXzgQzbjuyTnA44pRi1ZPoKST5knuch+wpO11
+1/8Mgkm4nUZpMgdvs02RyMnr68d6+8v2qdC1K+/bz/Z7vLayuZrSFh5s0URZEAmYncccdq+Of2B
tL8QfE/9sHQPFkGlieKz1G61XV7izj2w2vnx3HRedil3wB7d6+5v28f2sviL+y9qHhm48MeF9N1D
w7qaPHc6vqUEskcNyCdsWUkXaWUZGc55pWu7GrdrNnz1/wAFkPl8YfDIjlvsGoArjPWS3H+P5V+b
VxG6OVTgY6Z6e1ep+PviV4o/aY+NFvq/irXYLW/1y+t7RXeV0stNjd0j/dhmO1F+8cdcEmvq0/8A
BIzVJwXHxh8MN82c/ZmIx9RIPatlJWszBQPgBo+FBGG68HqMVd0i3luNQgtLaGSe7mby4olUuSfY
D/PNfd1z/wAEitdht3uYvi/4ZkdELEG1cLxzwfMOBx1r5A+FXxK1b4CfFfSvFOnJYahq/h+8m2JL
++tZyA8RGQQSpySCOehpPXY0i1GR+qP/AAVPt5ZP2T9HdFlKRazYmRkH3V8t8lj2GfX1FeA/8Efi
h+MfjptwJOgxbQOP+XgZGPbj9a4X4m/8FVPHvxQ+G3iPwnqXg3wxb2utWUljJPGJ5GRJFKswVmxk
A8Z7+tfMPwg+L3iL4IePdJ8XeF76Sz1GxcM8YciK5j3AtBKB96NsYI+hzkVnJO2hEWuZ+Z9U/GbT
72X/AIKgEi2neJvFukTK3lthQotct0xtyv8A46a+hv8AgqhaWmq+K/gTYXqGS2n1mZJVwcMjPbIQ
eRwQxFeNJ/wWF8d+Z50nw78KvISMOJ7gMT+tfK37Q37SXi79ojx7ceKPEtwYdjKun6bbTP8AZ9PV
dpxECcgkqGLdSfpRJNu5adkkfrF+2T8UvEn7M3gv4a6d8MbWz0exutWXTZLeOyR4orcKMKFKkKMn
qMfWuG/4K6Tqn7PnhaRs4/4SKPJQ8AG1uByfTJH518sfDz/gq58QfCXgLTvD2r+FdI8ZzWETRrqm
q3EhnmAJ2+Z1BYKQu7vjnrWB4g/4KPeOPHHwR134d+JNB0bW4tWiuIW1i6MhmgSV2ICpkgtGGAQ5
GMD0qUmmXddD2/8A4I8qF8a/E0bdo/s+wA7cebcf4/pXqn7KFpP/AMPAv2iLl4JYozuQOy/K/wC+
TBz+BrgP+CPmgapFdfEPXpbC5h0a7htLa1vGQiGaRHmMiq3Qkb1yB0zV39qH/gpD4y+CfxS8Z+B7
H4d6fpt1b+ZFZaxeyyJLcRsCI7lRswy55HJHGO1CV9hSaTPkP/goCUb9rr4onIXF/Au7HXNnBnn6
19x/8EfEcfA3xpI4YJJ4gyjMMBh9lhGR7cY/Cvyc8TeKdU8Wa7f6vrmpT6vq19KZ7q+nfdJPIeMk
+uAB9APavtT4Hf8ABUXWvgj8LPD/AIMi+Hmj6lHpNsLYXqXr2zT4HEjosTAsRjPPOKtp3ErWPlfW
7RE+L+plyHxr8wdAvTN0www9xX6gf8FeCf8AhnLwrgDYPEcTMxIwo+y3AyfxIr8yfj78Y1+N3xU1
7xwuh2XhWbVTEXsdNYldyIAXZsKS7EZJAzX058L/APgqn4j8IfDjSPC3i3wVp3xEu7AOP7T1O+Ky
yJk+XvBicFgp257gCkrp3KvokfPHh/8AZ5+LviX4Qan478P+HtSu/AkSXEtzd2l4kayJGNsreSJA
zgAH+E5APWvGJoRGoK5O3n0wK+1f2g/+Cl3if4wfDKTwP4Z8MW3w40+7ZlvZdLvfNeeAqweEDyVC
KxbJI549zXCfs6/sBfEb9pXwJL4r8Map4es9Lju5LLy9SupUlZ0xklVjbA5Hfmq5u4m77nzUI8BN
pOW5IxX2n/wShgjk/aztpJEyyaBfGNjztYtD6+2a1f8Ah0F8aFIxrng4qOii/uf/AJH/AA/GvMfH
3wm+Lf8AwTz+K3hHX59V06HUrlZZ7K50q6M0UyIUE0EquinawZBjGOeDxSbvsI+7ZtSvLj/grZbW
ks0z2tt4Zbyomc+Wga2JYgHgEnrivlv/AIKpTp/w1o43Auvh6xXDdhvnzxx69a6Dx7/wVeu/Emm3
d1onws0zQPGxiWO38RverPNEARkD9wGII3DaWxya8Z/bC/bStf2rbXw2ZvANj4Y1bSWk87VY7rz5
7lGTHlBvLQqm7LYyece9StCk0fp9+25+0X4j/Zl+Bui+KfC9nYXeo3Op29gV1JGeJY2ikcthWU5+
QDr3r46uv23v2ovix8FvEOvaX8P9JuPB0lrdWl3rGnaVM/lRhCsjqDcH7oPXae/BxXM/DT/gqNea
N8MdK8I/ET4eWfxOkspGCajqV2il0GTHviaBwWUHbuByf1rN+Nv/AAU5vfH3wkvPAvw/8EW/wytb
pzHc3Gn3aOGt2VhJGiCBApbcCWHPHvRZiTVzkv2HvF/xd+EGsz/EHwX4T1TxL4AtybXxFb2kJeCW
JQjSSIAw/eoighsd8HINfXd38Pfg7+3foXi/x18Gr6+8FfFmKcN/aRuJbOd3CLnfFHJzHIuVLgA5
5Pv8Tfsdftw65+yld6nYf2d/wk3g3UEZ5tCaYQhLnCKJkfY2MooUg8Ec9a+h9F/4KueBPCkl3feG
PgFYaBqk0bRi6tbmCFmzzhykAJGQDjPahJjbPhHx1pPibRPHGs6b42g1CDxTbziPUotTdnnMgUcs
zklhtKkHPQiv1d0NJH/4JHCKBWeSTwjNGiYO47pWXp+Nfkr8Q/iV4h+KnjfVvFnim/8A7S1zU5Vl
ubho1TJVFRQAoAChVUDjPFfVP7LP/BRC9+BvgC98B+LfDTePPCuwHTbGWZITZqSzSREshDoWIYA9
McU3clO+h81fEj4P+OvhZBZf8JZ4S1Xw5b3hf7JNf2zRxzbRkhSRjOOdvXFfWf8AwTq/bO1f4a+K
/B3wivNO0ZPCur6tKG1S4DJcwNMpYAvuC43gDkdDivJP20P2zdb/AGsvEWnx/Yj4e8I6WA9jo8ki
SuJypV5XkCjJIOAOgA96+a/OUgx4Lqp4IHU8c+1MWiP13/4KO/tp3vwovY/h74e0zw7r9r4h0KZr
y5vS07QeYzRjaqtjplufT2q//wAEsp7fUP2P/FGj27Qz6jHql8JbSFv3nz28WzgnPzdAT6V+PPmB
nc4y3qVGT9a9X/Zz/aF8R/s2/E2w8XeH5mkQbYNRsDgpfWu9S8TFh8pIXhuoPSlYaaPrf4Z/tc/F
j4iX3gz4BXHguBpNL1DTrbUmt7Wf7dbQWtxCzGVDwm0KmWIA4PrXoH/BY/VImk+E+mxTo1952oSr
bxyfvclYFQhRz1OAR3rOX/gqp8L9F8Rap4q0r4KT2njDUITFNqfnWySzcDaJJFXJXIXPfivkb4ef
tQrF+0zF8Wvirog+Ik7s7SWuI0EcgQLCYlZQqiMooAPXk9aVi2frr+xrZfECT9n/AMLp8XI7ebWU
jDWpv1Zr5Y8nZ9p8wcShdoyOcDnnr+cv/BT3VfiXc/Hgad46je38LpCr+H47N5Dp0q5IZwGwDPzh
weQMAcYz5/8AtdftveJf2mfHNtc6e954Z8LaQwfStNRx56SlV3yyOnVtynaM4A+td38SP+Cg2jfH
f9mRvAvxC8FTa78QLZGWy8SRyRwwRTgkJcAD5lbZgMuMMc/grPqTufZ//BR3x1r3wT/Zp8JzeBNY
n8HyHWbe1MukkQsIBa3DlBtwACUXiqn7O/iHVPjB/wAE4fE9/wCONQm8WX0una0r3GqgTSHyxKI9
27uu0EHtwa+dvC//AAUo8DeMPhDpHhH43/Dy9+IeoadKW+1RpbmKcrlY5CpZcOEbacDnGe9ZXxh/
4KP+Fj8C5vhv8GfAs3gOyvhLbXX2yOHyorWVXEojVGbDsWHJ6AmqCzPjPwV8SfE3wv1S31rwvr19
4Z1X7OYHu7CYpIUbBYE9xlRxjsK/RT/gmB8bviz8XPjD4iXxX4o1vxN4WttIlYyXzGSCK682DYu7
GA+wucZ6V8NfsyfEn4a/Dnx9Lf8AxR8Ey+O9Baze3itImQtbzFkIlCsyg8Bh94YzX1r4n/4KU/Dv
4efB/VPCvwD8B33gLVL+bzVnvLeLyIS2BJIFWRyXKgAZwAee3JYdzyL9ubw3qnxM/bz8a+HfDNhJ
rOq311Y2sUFshfa/2S3Us23oq7huPbFfYWjaH4G/4JcfAVtd1uK28RfFTWoyI4kyTczqADFC+wtH
EisCxPXHqRXwF+x5+0bp/wCzj8c18da3aXOuwyafd2s5jcGZ5JAjK+W6ktGAfY57V538a/jd4l+P
nxF1Hxn4pulm1S72rHDACkNsgUKERfoBk96LaibG/EP4qa58XPH934u8VajPq2rXmSZpySIYxkrG
oP3UUHAFfsl+3h8UtQ+DHwy+HPj7S9LXW49D8R21zNEWKxeSbWdSWdVbaPmADdMkV+GrNGqnJ/hI
O7v2xX29+yh/wUB0b4f/AAr1j4YfGTSL/wAbeCbmBobFLaJJplR2cywy73XK/MpUg5GD7Umuwkz7
E/Yn/aO1X9qr41eL/G9x4V/4RzTbXQbbS4DHM08cjLcSu370ooJ+foBx618da34v8feHv+ClHiy9
+GnmXOtXXil7OS1TIhvLbfEJY5mxgR/Lyf4etdd8TP8AgpF4C8IfAuX4e/s9+EtT8FTXLvFJNqcC
KtrDKr+ZJERM7GUkjDNwPwFcR+xb+2N8M/2ZPBPim/1rwpq+r/E6/ln8rWIdk0c8RAaNHLygr+8y
WIBJ65qOVlKSP1f8Y2Go+GvCOreMfDfgbSNR+IxsAzQIyRSTybV3R/aNm5wNowDjdtA44r8xf+Cd
fjDxR8Rv255vEXjO9nvfE1xpV+16bqPZIjKsSbCNowFGFAxxivI/hb/wUE+JnhD4+S/EjVb19Xtt
Vm2atoAkf7KbdnXcIEZsRuqqNrfXPBNelfFT9ufwFof7Uvh/4wfCbwtqNpqUlvJD4nj1eJY47+Nz
GuERZSFkCIfn6Ehcg81STEmjlv8AgqNIj/tkeJGZyHg0/TmVlOGj/c54PbmvF/jT8ffil8VdM0rS
PiJ4k1DV9P01zLZw6hbxxBG2bCwKopc7eCWJP519wfEP9rr9jb4q+Ol8ZeLvAPijUvEX7otdPbOo
JiACZRbgK2B7V86/t+/tn6d+1Pr+k6V4Z0cad4M0IedbXV7aiO9mndMPkBjhBxhfUZOeKaQKx438
Kfi78Q/2d9cvdZ8H6ld+F73UbcW0sslqpWeMMH+7KhU+xHTJ9a/Vn9nvxdqX7UH7AfirUvig0Pie
8kj1WN5JolXIhVjEwCgAEHuOfevn7Tf26vgN8dfgj4a8OftCeEb691vSpQR/Y1i/ksyKVSRHR1Zd
ykblHGfbGG/ED/goH8Ivht+zpefDv9n/AEO/sXvzNA6a5ayiK3hmVhNIGaQsznIA5IBotqJ2PzVl
3tHHkt9wHjknj9e1R9TwxDHoasONkZXDbVACn1xTBIqICFA5wTTehFiOaQkopyO5I601nDOcDHYM
3U0sg8xww4XPAoYICu4YOOR2NShkcxY4YnJIzyKXCbScngfw9RmlacBflUn19qblXX7uwL0A6mru
K4RFW+7lgT1NSOyWzEE8kc1EACFMa9ckgHrTjEd5YkksM9eBUDBhvBYAE45qVg6xMNnAGciot7Ig
yck+hxTpAwXcGIP6UMYiMyoDyc5HFLKq5DAkAnke9JmQAgNknPH0oZTt3MxbHGMdT6VIhCwyeNxP
HmDqKQKckgMQOelPddwJAKjsKa24/dfj1NMQ51SQrvGAR160rNg4O5cDjAoV1VFU7WGO5x2pizYl
yudoPAPNNFoe6HG9RyF5PTNN6NziRgPXgUOTvY+vUA0KN52n5QM44zUthcbJ8wyig4BNIVXG4uQQ
c4x04qVmCxAgk9QcYqGYPtJUfKT370CuPc4AO0BR2xUTpvXjaqk9BUhAY5bPX1pfLCq20jK4yTwM
e1KwK4SLll+8VHBfHAFJMfMmwu4LntSy4jTILAZ9KWO4KZMbY3cYP86ooZlTC+3qRgBqcu1Vx1JP
TPQ0rA8hwAT39KjV2xndgg4GKGBIXVgFA5UEnvQ6nI2SAgHBPb8aVF2YbAYE4znOOKb/AH8Ac84A
pNiCVvKkw67wODz1pu9nkbCkIxJ4701owpIJZ3OevAFKx8mTILD/ABqCbajpCJGJAK9ADnrUiRqV
C878/wAXSkQGSRVc/uwMEE9PemMwJEechT1x096pFIe0SfOFO1vcdaVCWUIeGbnJPSkc/vFH3w38
R4xT35+ZhsYDK/7VUxkexkzgneOQP6ClSJkYEnLHI5P/ANalLKclC31bqe9NXc4VN3B+Xn+VQLqS
z5doxt6An1qFs4AwxA529anOc4xlQcfNxn/69QvIZZQWO5c4waauMeRvJychgQq4xt/CozEwwPLL
AGpTKGJzyR044/Oo1wTIS4b5Tsx160agSBn27AWQ55ZupNLMx80A4Py4AHXNPSPcrtKwVkUgYOST
jPHHFRBco758uVQSCc8Y+lAxhc8FBgkfKBzxUkSqyEqdhUE8Ecj2/SkQKsW48OOOOh460kZCYDqH
yOh4yafLcnqPZFnUHfuCkAgAfrUYXezDkFfugHiiIvbt+76Me5yAafJAwkJUt5nXp7Ukik7CviHL
CQ4IC98g/SnI+J1ZSMEbdwGPxpk0isqq7HzQfrzSupMjsoDIF607WYhC4eRvlDDOcDg1Iv7tyTJl
cBSQeAKhJMUgKuAD1H4U6RTlV3hR0x1PTrV2uA+TyVOETLcEHsfWmBQkZDr95sY9B7GkCBdzNkSN
naoPBI7mnhV+VjJhuB84/Ood0PcheJ0AO8Kx+Vs0ySbYjBlwQcgHvVg+UxIZtrD0H60xkDW5y24H
nk5J9KaFcpopku12owO4YKjpWxHFI968LE5LYHHb396x4gYrpVZyhyORyMVpWywNcGR5GV1fGM4B
HvSaEM1yRhKgP3QuA3Ynv1/L8KzXjwnzLjPp61d1yZbiZB0KKQfpniqzq6RbFJw5ztYcH0oBDWjZ
I9w+Y5zkHikZlxl8tjgEdKkiYJGxYCQEbTnPHNDSohyCUXoD14qLFI3Lho3IjiUu+0BtxHPr16fS
mOGEXliMGQL2PQdqZcRbvlEnl/Ng7TnPXn3phnEMrqwZt2BvAIz+H6V1mFyrLGY5mJ3CNj1PXJ/p
TXhBUptLEnBK9gKldSrEs+SxB444x09zUIcDeQpDNjnNQ7FJkEduZcqg5JGajvraSxuHhnTDA9R0
PFPkYQM/IznI46Guh8RWwl8M6Zfuc3Dgqz7gScdsAccYrK+poUtGeOMIxXLDPPb2Jr66+AX7EPxA
+OvgiXxdpDWum6Ra5UDUt8D3GBuygKYYYx82cc18/wD7NXh/TfGnxr8CeHdYtxeafqms2trcW7MQ
HjeVVIyOehz+Ffq5/wAFFfGeo/Bf4T+D/Afg1k0Xw9qUMtlJDb5EiwQrHsVG6jGTk+wrWMnfRimu
VXaPz8+HHwU8R/E74iWvgzQGiXUrqV4w97MqRoVRmIyOTjax4B6dq2vj38APEfwA8Rf2P4na1kkl
hW4je0kMqMjZAIOARyD27da5b4Y/E3X/AIYfEPQvFOhXm7VtLkLoZDuWQNkMrrzwVYjHH1r9LNe8
G+BP+Cjnw30vxFpeoL4f8W6aDa3kUirNLbjJ3xugflSw3K3cfU1TnJPVmcoc0fdPiD4J/sgeN/jp
4FvfGfhpNPXS7Z5IQL25MbyugyQoA6YPBJHP0rrfA3/BO74nePvDWl+INK1DR0s7uNi0cl/uCsGK
lQQjd88+1fWHxj+KHhb9iD4MQ+AvA627eKLmJpSg/fIkhVfNlkUuCCw+6P8ADmn/AMEs9bv9Y+HX
jn7bdSz7NYUpHI3EW6PJAGTjPB6DqKxc2+o4xtsfP5/4Jf8AxabIkuvD6xYBEcd4zYPPcoOffnqP
SvJ/AP7Jvjz4hfErxJ4C0oWkGq6MWbUBdzlIowH2sFYA5yeAf9n2r7KutW+IP7J3ii6+JGvX8mqe
EPE+uyWl/oc2TNbJulMckRY4BwM4zgjFZf7Dvjqy+If7WXxT8QWEMsdrqtlJdxiYEFVN0CFI6ZwQ
ePWhSki1FNnxH45+AvibwB8Q4/BupWpi1qOdYovtMiqkhcgIwfowOMZGc59q6D40fsheOvgDplhq
via1tktrp3EctpcLIqMDna3Tqp9K/Wv40/BDQfjPFpks/lJreiXkdxa3YJPlMrKzI4B5yB36ZGK8
D/4Kh27zfB7w6Y1V2XVTlXOAV8lyc/l+lUptkSSPzw+B3wA8V/tE6zqOj+DFskksIxdTi9l8sbGY
7QAeeSPw4rmviV8Mtf8Ahd4tvfDviS3nsb62kKSsw4YDB3KRwRyMHvT/AAN8R9f+D/jjTfFXhq6a
z1WwlDCMyFVmiBBaKUAjchOPlP8AOv00/bP8F6J8Wf2U7L4j6lYKniG30yzvI5rTgYm8svH82crl
sjPPA5ocmS6aaufk6YXSPAy5PIP8OO+fTpUEkoEmZIgm4AptwNp6YP8AP8Ku6gqAsMAOm4BAc49s
5xVSaA+cXkdHeRRGUAyOmM/XpVpmcUiOZlhmJJJZVHmMqcv3/nimyTRzxvIgYBAAyEnPToPanNC8
hRYVAnBKurHGODwB+X5UxUDzjeNrg844A68f/WrTmsDi+g4TNFIE2kBjuLEdB7YJ+nHp2oSeK6eK
PKyiTjbkoPXIx05p0cO2NWVQMMSpU4OPX/P5VHOIzgJvUZ25I+ZvxH070rkuNhWkjMYXMiuuVOQM
cAH/AOt0pY1mSGXKs0TtgKWGD36CoWkaL94YxKknAOOh45/r6VNvS5PlqR5YQjyzz/wLP8vxoauy
VZ6MdKcpAzRFIvm27RjJ9KcYwHLRozMG/hXO365+oqvdSp5wQoXQsSG7D3/nVlZUjdRGSrg4YA9v
845pao0jHU6Xwd8PNa+J/ifS/DXhq0N9rOou0VtBuEZZlRnOWOAAArck17lqP/BO3426Fpd9qVx4
XtpkhiMskcOpwltqjJwu85PXvVT9gu4/4y48BIZDGvmXHlrkYY/Zps+54/nX1J/wU/8AiL4v8G6v
4SsPDniLU9Esr6yuDcR2Nw8SykOo+cL97gkc+tTzM3aS1PzVmtikxg8tllDECJuDjJAz/hQsjHCr
EQjrsLH5eR/9emyJsiC5Y5UDLZLEnsSeck19OfsX/shah+0PrrahrMBsvBOnyj7VcMrxveZGPLgb
GOCp3enHFU5eREVdnzT9kkiPmIJBvG0kjAJBGRk9On6V1fw2+GfiX4v+LE8OeD9M/tbVnQzPbl1j
G1RySxIUL9SO1fZ37f3x38FDQYvg/wCD9J02dNKlijvLpYdptWjwVhi4G44xuOcc455r1X/glj4S
udI+E3ifU9Q05ree91jNtcy25RpYBDH91iBuXdu6cZqOY0SR8D/Fz9lz4j/BCwi1Txn4ckstLuZx
BDdRTxyxhsEhWMbttJAPUc4xXlNyhDK3KgkALkjHbp0r9KvGP7YWlafpfxc+GPxX02V7u0F3Ho8t
7ZNL9r3PIIkYYwMfuyr8DB9s18U/s1+HLnxb8dfA9ommzaoq61Zz3kSQl1SITpvZ/wDYxnP1qlJk
pJuxteEf2I/jH4u8M2GvaR4Kmn0u7hE0Ej3UETPGRnPlu4YZ69Oc149qGkTaNqV1YXlnJDdW0hgn
imUK8EgOCGXsQR09q/ZH9rj4269+zvZ+CvEekaW+oaDFeyQanaxqVi8koAgJAIQ5+72yK/PL9t/4
peCvjB8XYvEHghMWsmm28N9K9s1u0lwGlY5BA3MoZQTj154pqTYlFLY8l+Fnwk8T/G3X7jRPB2kS
aze2sRnmSJ0Xy0yACWdlUc4xk9qv/F/9nvx58EUsJPGvh59Ij1A7bWTzY5UkYHlMxswDAYPUcdvT
9K/+Ca/hObw/+zYbu5017G/v9RuZkmliKSTQYXymBPJXHSvDviD+1/onjn9nr4heAfiVp72fjvTU
e20uS6tjIbqXJCSZ2/u5Fzgk4yCD3NLm1Kdj4Bt2MiqJVCySNtVe30r3C3/Yn+NV74fg1m38FXJs
ZrX7QrefEzMm3cp2Bt2SMdOeR6Vd/Yu8LSeLf2mfBHmaPJqFha3ZuLkNF5kcS+VIN0gwQBkjk98V
+i37UH7SF9+zf8QfAV9c6fNe+DNSSe01MxAhbY749sgIBywXd8vcA/gc7uPlR+OFzHFay3CjaZkk
Me9lwTjjp2+hr1v4f/sr/Fb4meGYPEHhzwdeato1xuWG6jnhjWYjqRvdTjPfHOMVvftPeL/BHjf9
pnV9d8FeQfD10bJmmhg8qOaQKPOcKQOPU+ua/Rr9sjX/ABZ8Of2f7a6+GQuLC9ivLeJRpNsJGWBl
bO1Ap46dBS52Zcml2fmH8Rv2Wvif8LvDY17xR4Su9J0eKRVluy0UiRlmwNwRmIzxz0ya4nwV8P8A
XviT4nh8PeF9On1XVJ1d47OJQHIAyWJY9O2c44r9c/2TNV8R/FP9nO7/AOFmwzatez3V1bzR6xbB
TLAACu5GUccnqO1fAn7M37Vdl+y/qXiaBPBcHiE3lz5VreLN5UsKIzAjeVbKkEceoqlNlKCUrHMy
/sPfG+Egx/DvUGAUNxNCSPUf6zOO+BXj2vaLqfhrWL3S9VtZNP1SwmNvdWcy4eKQH5gff/Gv0N/Y
/wDjD8cv2gPjPNrV3rDr4AsriSa9t2REg2uH8uCIhMuV+XnPGOeteX/8FM9f8G6x8YdLg8OtZy+I
7K0lh1yS3iIIk3J5SSMAAXCluecDg1KeupUkk0j5N8EfDzxD8SPEcGieF9KuNW1WdHeK0twAW2jL
dSAOM9TXT/EL9n74gfCa2tb7xd4Zv9Cs7lzDDPcIGjL4yAWBOD7E9jXrX7H37PnxZ8aeItN8d+B7
iHw/a6Reqv2+8cxecpA3qq7WDrsZgeCD+FfYP/BUVGP7PmlsrYI8QWoJHoVk/wD1fjVKo7kSilqf
ldo/h+88Q6raaVp9q11qd5IsVtAhJaVyQqoB0ySQOtetr+xx8cEh+03Pw91mLCgnaEYj1yobkAAG
vsD/AIJt/Abw0vgVvipfWX2/xF59zZ2scwV47ZI2Uh0yPvkr97sD+Ncz4f8A2+viDe/tB299e6Pc
w/DzULhLGHRiiHyt+2NZPP8ALyW3EsRnHb6T7RlKmrnwNeaeY5ZYblJEl3GOSN4yrbhwRt9cjGPU
V6xp/wCyJ8ZNQiW7g+HWttbzxq8biEDcDgjgkY4r9G/ib+y74Ii/aO8B+PP7Oth/aN/Ja32lyQRm
1nl+zyusxXH3yyjOepAP15P9u743/E74TeL/AAlZeA7u7sdNu7CeS4+y6clwGkVwEBLRtt4Pbmjn
Y+Q/OD4hfCnxl8LZ7e38W+Hr3w7c3KGS2ju02+aAcHaec446Hisjwp4M1nxzrdnpGgWFzq2p3QKw
WVmu+SUgZOOnYEknjiv1T/a1sIPG37D0mveJ9NS88QQ6NZ36TNCFngunEW8pjGwkkgr07EVhfsj/
AAy8J/Af9m2b4wHT5NZ1y70h9Wdp40WWCNVc+TE23Kg9zznP4UuZgmtb9D4D8Qfst/Fnwzo95qmq
fDrxBBp9sjTXE7wBwigEsxAYnAAznHrXltrbXOoXMFrp0Et/e3LrDBbQjLySMwUKB3JJAx1r9H/2
b/24PGnj345f8Iv410o3OgeKbmSHS4Ftgq2A+dgpfYPOXbhTu5HHvnJ+NfhrwT+xB+0ppXjyz8Jj
xFouv21zPHosaoo025WSPMsJYFVGXyB2ycHpV872E6abPk5/2TPjGypt+GniSIbirE2fT/vk/wAq
868XeENY8C6xNo/iDSbzQtTi2maz1CFopEVgdpAPUHnkZ719zeEf2xvjd+0T8e7bSfhyINC8PSSR
ynTbm3hmMVsgXzWkmZOC3zHGe4AGa6L/AIKsyeC20HwxEPsJ+In2pWXy8C6Gn7JQxcjnyxJtIz3B
x3qVJ3sDjY+I7v8AZx+Ken+Hptdm+H2v/wBnxxeebj7CzK0ON28Yzxj6mvNrqOUKybAgB6jow7Hk
e9fs1+xh4w8SfFb9lHTLvX7xNQ1d0u9OS6KBBKkZaOMtgY6AAnHbpX5ZfGz9mzx5+z1f2EPi7Tha
wXcTNa3tvKs0D7SNy7gflYZBwe1XGpcycNdDyudlifG47ONwznPuKSUjhos4B5IyC3ocZrtPhf8A
CTxR8ZvF9toHhSwS+1OeNpFWUiOPCruJZzwPxr2x/wDgmv8AHlIAZNB0t9vSKLVoi2Oc4JI5q1ON
xeyfU+W5UQopUEzA8eg4/nW94T8F65441eHTvDei3+vaiys4t9OgaVwABu4GfzqLxd4Y1LwZ4k1H
w9r1k+m63ptw8F1BKfmVgT+HOMgjIIORX0/+wr8Ivivc+PtF+IXgexii0S2vVstQuppYkR7d9jTq
Ec5f5ccjBz0PalKdioQSZ83+NPhv4o+H1zCnijw3qvh+W6LCA6lZNb+bjk7dwGfwzVbwl8OvFPjy
a4h8MeHdT8RG2CtPHplo87RBuATtB/mPxr9H/wDgrbEreAfh8+3cRq84wOSf3B/z+FeCfs/ft0eH
f2efghe+GtI8DyHx5cLcBNZV4hFcvl2gaYEhyEVsEAHO3jrUJvc0UbuyPnTV/gb8T/D2lT6lqPw9
8S2NnDGXlmn0i4CRjGSzHbhQP8a5bQfCGv8Aji+hsNC0e/1y/MLSra6dA88pUDO7aoyByOT6iv1L
/YM/aW+IH7RWt+LNK8ey2OpWFrYRun2exSFGLsVZWx1BXgg/kK+c9M+K3hT9ib9uj4kzQeHbm58M
qg0+DT9Pcb7bzkt5iY/MOCAwbjdwDgVUajs9A5UpcrPmwfs7fFcuC3w68XCQjaGOiXOP/QPwrlPG
Pw78T+BUQeJPDmreH/tAYW7anYS23m7cAhdwGSPavt/Vf+Cmvj7xn8aNOsvB1pb6V4SvNRtrSGwv
7OOa6kQsiyEsrHGcueM4wK9Y/wCCuUUUnwc8Fb0JY+IdoK54/wBEnOOPcClGTcrEyikj8oxbrt4I
cg7sjr06Gmyvhjh22D+/6193fsQeEfgX8fvh/qnwy8X+FrWx+IhS5a11idgJrhCcB7dt334vl+XH
QA9M4+Wv2i/gB4h/Zy+I954V8So89u26TTdTChUv7YHiRQCcEcAqeQapSjJtEv3WeZ26yXdzFHEj
zvO21EQZJJ6ADufbvXXXnwV+IyRPL/wg3if7PEpZ5v7FuVQ4758vHOP1rmNH1G40TVbHU9Pn+zXl
nMlxbSj+CRSGUkHg8gcGv2h/YC/aH8WftFfDzxFf+MZLK4v9N1BbRJbO28gOjQq2SuT3JrNzadi+
W6ufiTJh28zcQScFm657/rmmNkq+4/Kp42Nj8a634oRG3+IXiuNgu2LVr1EVBjbieTgiuMaTe5Ur
gkE5qrGSsK0jKgdgRHyA2cZqRCXVigLBTjJ6DPaoljAgBDEoSeT/AAmvW/hf+y78UPjR4fk1vwT4
Qutf0mKQ273EMsUa79oJALuueD2z1pNm8UeSmXCD5htB6LSjDtvZOR04617F8Q/2Q/i78KPDFx4j
8VeAr/RtCtdouL6SWGZIgxwC3lyMwGcAnHGRkgVkfDD9nX4i/HC31CbwP4UuPEEVi6JcGGSONYmb
JC5d15wD3p3HY8zlLknOAQc4xyRiopkKqpJY9doI6CvUL39nv4gaZ8TI/h5eeE9QtvF84Bg0t1DS
TKQSGUqSpXAPzbsDBzVXxH8APH/hbx7Z+BtV8K31l4tv9jWWlTBRJdBiQpjOdp5BHXgiqVjJpHnL
jarMxwehPtTSJGmAy3yjpnjFexzfskfGM+L/APhFpPhzrS699m/tAWZjQ5h37N4cNtOCRnBz7Vy3
xK+DXjL4PahDpnjXw1f+Gru6iM9st2oAmjBwxUgkEjjI7ZFDYKKOFmkyjqV4z1J/lS7zH8xyTj+I
9KW4lAKhhgg4wg/X+VQswMpUMzdTwvFSPToejfB34/eNvgPqN7qHgfxBPoN1fxLb3ZhjicTIpLLl
ZEccEnoM8mug+Lv7XHxa+NvhxdB8a+LZNd0hLhblbcWlvAvmKCAW8uNScZ6E9e1eY+FfBmr+Odf0
3RNBsLnVtW1CUx2tjbRF5ZXA3EAD0AJOfSvXn/Yh+O0ZOfhXr+3j5hCucY7gN/L0qbq5dr7nhLsC
580q3X5TyCfcflUqsQD/AAlgQNgxyPStNvButDxS/hk6Vcp4kS8+wHTGQpOLgsFERUgHOSAK7q5/
Zh+K1h4psvC03w+12LxFexyXNvYm1YNLEhwzqemBx37+9PQaTPNpJPMgLlo92QnllRwo5yOP1qry
8RUEDYOMjAA/Cu0+Ifwf8a/COezt/GXhnUvDlxfhnt1v7Zk80KcNtPQ4yMjrzUr/AAU8dRfD7/hO
B4T1WTwfsEra5Fas9qF3bdxcZAAPU9u9GgmmzgtrqQQw2rwcdKmlyYSwUs6sOcjFdta/BTxzeeAr
rxnbeENZuvCMCM8usx2bm2jVWIYlsdF7ntg56VzOjeHtT8R6xZaZpdrLqGqXcgigtbRDJJK3ZVA5
JpXHGDMtrhlQFSy9zxSGXecA9ecGvXl/ZN+MzJtk+Fvi4rkjP9lS9QCf7vt1968u8Q6Ne+HNWu9I
1OyudM1S0cxXFldRtFLCwwSHVgCD0PPrTuDXcotI6fIozyT8vGamjlTzcsSWJ6DoKgRj5mVO49qA
pYbuQxOOnWh6kqx9G/BD9uj4rfAHwYPCPhPVLBdFiuJLiGK9sEmKM5y4DZBxnnn17DiuH+PXx+8a
ftDeKbbxD43vrS5vrO1+yW4tbdII1j3FuQpOTk1wHhvw9e6/qkGkadZzarqV6RFBZW0Zkmmc9kUZ
JNfuJ8LbSfx5+xfOvjrwDbeHNTi0C7sZ9Ku7EIxWGFkVyjqCu4KGwe9Z7Fbn4Rzxsj7g2UUnkdad
LlyCWO0Hscdq0bTTJ766trSC3klu5vLijiiQu0jNjAUDJJJIruW/Zm+KzMSPhn4ubJ76Fdcjn/pn
TuRZ3PNWkDFSBnGMgnpTUYLI5QkMDWpqvh+90HWZ9Lv7K5stUtX2S2NxEyTxt2UowDAkdARmt3Uf
g7410rXLDS7vwd4gtNS1EMbS1l0udHuAoydgKZfjk7c9aZRxy8kkbh/eHvitXSPFutaBA0emazqO
mLu3lLO7lhVjjGSEYc4A/IVe8WfDzxP4FFs3iLw7q+gxXJIhbUrGW3DsByBvUZOKj8O+B9a8ZXU1
toOlX2tzxgO8Gm2z3EgBOMlVBOKGFrlt/if4u4H/AAl+vDA4H9qznP5tWNq/iHVvEE0c+q6ne6tc
ou1Jb+4eZ1XuAWJxnjpXQax8FfHekWVxf33grxJZWdsGeW4n0i4RI1HUsxTCjvzXJTQ7UGMuOoKc
gj1zTVh2ZGm9mDeYQADhe9NZgVA5JXkDGK29J8F6/r1pcXulaJqOpQWpxO9layTLFxn5igOOOee1
Urezl1CeBICJZJyFXnrnGM+3NIagyu0b7EZnEeB0Y45piyLKCqyBnB571+uHwz+AXw3/AGAP2fdS
8cfFXT9O8UeMNSjETWlxAtzFJMPMeG1t9yHaWXG5jxlc9BTNN8HfCD/gpH8CNVi8N+F9P+HPxA0V
3khis7eJJbWUhxFvkWNRJBJg5x0I9QMrmG0fkjIjBcHcmeRTdxVThsgHkkDg12vi34XeLfBGua/p
Wr6FeRT6Ddy2N/cQQtLbxyR/ePmKNu3ocnHBFcnBamWaOKJGdpWUKBzuJ9B3zkU7kFUBt6byMNjn
IGaUBZXK+aocDJy3OMV9qf8ABM34VaX4m/afaw8YeG01K3g8P3k6WOs2OUVhJCivskGDwXHT1r6Q
/aV/aX+B/wCzv8XNX8CXH7O3h/xBPp8EEz3kVpZQo3mxhwArRE8ZAo5uw7H5Mytg7HYF16Cnm1Hm
B492W69q+jf2svj98OvjpB4eTwX8KdP+Glxp7zPc3Vj5Sm5R1AVGWONFOCM5OcfjX0L/AME2/wBh
aw+JEdj8VfHUEGoeGo5JF0nRn2SxXUimSKRrhCD8qsvyjueopXBn50ZEgkJOXHOB1/GkdjEq7EAP
uK+zf+CpXwu8J/DL9orTbbwpoVh4bsr3w/b3M9ppsCwxPL51whcIuAp2xoOB2r41mhO5WDM2RinY
z6jluHZdxYDncQaJOTuU5U/kCa1/BngnVPiD4q0fw1osD3erapcJaW9uDgOzMAMn+EZIyTxXrfw4
/ZC8b/EL9oK6+EjWq2GvaXOTq5aVdtrbK8YeUHo/yyIVHfcKDVI8Nlikjdi+UzgZzwaRxt3HjjGR
nNfrT8VZv2U/2N18H+APEnw/0rxzriolvqV8bCCa6tl2oftFznuwfcFHYGvFv29P2J/D3h/QI/jb
8H7e1vPAN9Ck2o6bprL5FsjAKtzAc8ozEblA+U88c4SYj8/Hlctxlc9u9MHTnPuCauGyaKQCUFWw
TtPBHY1A0POFPAOfcUwsRCAySYUHL5xnjn2oZ/NXOPmxgn8eK+5f+Cff7E6fGi/T4keNQlt8OdLd
njWR4zFqDxkiVH5ysQHVvY1jftqfET9nHX7HUPCXws+HS6Xrml6sF/4SnTyiWdxCqsJQhDkupPA4
xxnNK4rHxmmYXyoEgI24HY0lxFIMkjAJ4BrU0DQ77xHrVhpGmWxu9Rv7hLS3t0YAySuwRUBJ6ksB
X7Dfs1f8EyfAXhP4SMnxO0G38TeLNUhWe4+0ZB04+Xjyo2RsZUkncO/0FNsdrH4xvxt3qcdB9aQT
+WykYGRjgf1q/rlr9j1O5hRXKQStGpfOSAeDk/nWYD8rcjOc7iKRm9CS5fzHY9eMA0iJ5hDbgccZ
FNDbxhyNpHT0pskpQkLyOmBTDQmMjHfxwRgdsUnmlThiTjg5qNgSpCqd3YH1pS3zEFTtJxyKnUNB
zTSTOoXJ5xhjTmdsE8Z5HXiq52bznIK9MUqFcNu+Ze2PXvU3KsTGbEHPzEHoP501pCYygOGHII5q
EvuOSdowB0ppbach89qpDLTl9jFmG0Dqv0qLzGZDngdvekceaqlm2YGSKZDKiKAQWJBwT0oYD/PC
j5lDf7tNd9zMApGOM0BkUsPvHA+U96C4YbfTvUp2AaUbazBRtyOelPZwAQQpGeDn2pHychT2xTly
x24Ujr70XuCRGGCuSDtbqOOgqwDwSAHyOucVE4AVlKjPqe4pMlG+RSqkdM0FD8eZz0ccYFNdvMjb
GeD3NNAIZzycHPHPNLuLbFA3DPOOn0pC3E8vIzk7gcDb3qRsg7nYqoODjqPWhpNrIArKVJHPamAb
1Ysdy5xzSHYJcuhYtxv70rR5ywPOM56j6UTLkjoM/wAI7UrEKoOD8pxiixNhjKGZgp3YGcCnl1Vo
8nL9TilYeZ8w3Dd1HaodokYLnlTwM9RQMmEalmcMSynkGlc7UJALMT1X+Ee9NH3WLjBOBt9KRX3t
8vLDk+lA0RIyKxLZLfSkYsQWHC9MLTpAZ3ORh88cdBTmY7iBgdue1IXUkaLbgFeM1GuBlV6d++aX
eRMXx7802fJy4BT5s8cUxiqd2RJnYPTrTXYuxCgDgnjgUoLKvzAHAxxSLyDxkHv6UmFx5zFgqNzH
uegpsjgghlyc7sj1pXZQAmflBGTjmmhSiM6NyGHygDikrsAZmSMZBz2IPalVR3JTcOAOnNEhIkHT
eOjHoKduDsuSOmDz1P8ASqsFgaJcFhIQ46ccUki/KGLZ6jOe/vT0LeW21dx5wQOgolPnYBC4HBYc
ZNRcYnmFgd2AQcD1pRICwdycZ7d6R0MIYDoT2pJGBVhs2+h70/QkUPtYhYyAxJGKemPLwQS54HAz
+FR/aEMYByGLfLtqWOHIYkncMng9u/8AOndjuMKO75XG7qPUfWnPHEkp2A7eoB6jiliKxKTnJc4x
TJUYEPkFidp44ouFx5cY2gEMMnr+lMMaqpRBtbII5704AKjMSxf2phfIXKBSDzgdaLhcFjSNWGGL
MMg5wM09FCqGXBZhgIO1CNGZQJCdo4wvUGnvGsUhw29ccEUk2MRw6ugGAxzv4yRURbdMyggqOnuK
WbY+QgIYDJYntjPFLBEk8rZQkYO4g8CrASRVTjbtbGQPekSMSMC2I92WFT4jjKqyAs2dpB5NRNHt
Ur8zd1UD5uO3+fWkA6MtIArod3T0596fI4Qng7sEZ9B3pmAXGP3KhQDu5G7+tEs4Y5QE+gx1/wAm
kmA37rITGHU+p6cU4YdG8pCAvJJ5A9elNKFY92wsR1zjkZ6fpSxStEqDCoeh5+uKrcB4KLLkHcGO
M4yKWRiJ8sVCcE46CiSUeZvVS0mfTk0kkTI/zEHzO2KeoD3eMO3QZOPX8aRyu10Y7AOnHX8aikBQ
v95X/wBkUrorsVbnKhuvH60WAJIUUsxBfgBeSPxqOR12MoJLDovWl2hnCeZwOmRioJP3TqT85J5x
/n2pCRCoLXKZbJ3dGrYsiBftLKN+ORnoc1kRbY74bgdpIz9K09PeKGZGkPyNk/px/Wk1cCnqQBuH
KkMFOBz1qMmQEZJO3qOwHrTZx5czcKcngdfenElHG7OBkg44NC0BDA8YbLZcZ5JomK+blWyCOBjO
KCfKjVWG4ZPUdKGmEQBUAk9QRTSGdBLD5FzcRogeIKeT97pz34qvJIpUMkibuuFJAUntmrl0oXYC
gjlYYkAz0Hf6mqLBFJ2+YSwxyMDP1/KtmZakSEzTBBkAAng5578+lLvZHfeDKP7/AKH/APXTItyB
9yeZ0Cjvn1pjStFIQCfm5K9z9f1rN6saRTkeQ+Zx97gnHX8a7DVSY/AGmu/y7icKByTk5yfy/MVy
m1VmJk3AbssOMAH0qbW9Ya/dEjJS3iXYi5zgVFtTVHqX7Kl6tp+0L8M5pJEhii8SWDMXOBg3EYwS
eg7/AFr9Pf8AgrHbyP4X8CThGWJbq5Rp8fKMrGQM+5A/Kvx38P3L212ssMjxSowZXVyjAg5BBHII
IHI6V+kHwS/4KBeDfFfwSvfhv8fbG91e2jiS2stUtbdppJ1xxvOcrIhxhx1A9etpNPQb1St0PkOz
t5ReiMRtM0zhPLB+ZycDjnr7V+p/7IPwT039kP4f33jr4ia62i6hrQW3niuZVe3to95aIZQfeI/K
vz7+CnxO8LfDD9o7w54snsbm68L2GpSOwkj8yYwOHSOTacDcu5Dxz8tex/tu/tVeHP2gtX0qDwhd
au+iW1qYrq3uYzBFJMWyMoWzkKcA4705RkLRHpv7df7NlxqE1x8X/CM765ourot1fvE4IiXYoidD
kZjI68ccfh23/BKWV28IfESF3Vymp253Lnj90RjHtj26mvLv2Lv21PBXwq+FWr+DPHb67dBbl2so
pbdrqFbdo1HkqP4FDbuCMYPtXY/ss/tdfAv4O+DbsSR6xpmv6vctPqSJZvNGWBbYQQdqja444PPT
isra6kpcuiOU/aA+Nfij9ovU7n4JWeji+1628STiC8LqN8SmVUBHGCo35PoB61e/4Js+FtQ8A/tG
+O/D2sWpstXs9IMFxA3JVlmTv06Mp+hr0zwJ+1n+zP4H8X+IfEuiNrf9tarIZL65n06Ztu5i3y7v
ugknp6V438Lf2wfh54P/AGvfG/jnUpLuDwzrkU0FvdJau0odpISNyAZwfL/Ch66IqLWx6T8Tf2lN
f+Af7bOpWCQvqXhXXZrS3u9OjcbldoYlEq/KSCC6nAwCD1rsv+CnziL4NeH5SrHGtKAFJByYZP8A
6/X6d6+If2qfjXoXxH/aKuvGnhyW7udHtbm0uYpGAjc+T5e4qpGQCUHucV7t+23+2D8OPjj8LtG0
XwtdX1xqCX32pzPZvCsQWJ1PzMOeWxxwRmhJ3Jkro+CpJPtkv3f3rBmVMbmc44wO5r9efjfEV/4J
6JHNHJcN/wAI3pqsqkhmOIef68+lfnj+yX4q+FnhD4s22t/FH7UtlZKJ9Pa2heSJbkHgOqAlhjtj
GRzXdftq/tnXPx11X/hHPDE82m+CbQnyRG7xNfn5MNInGNpHyjnAOetXZkSfLGx8taxLHPI4Vdmx
vLJccMOwBHXHNUniVmSQbT5g2og+XHP+f1qJ3Q3DS3DOyL8xDNuOTwOaimMFu4mQGVVUfeG4N16e
3pitF5kxRLu8lC2d78HJPJI6UrTmK4LNIpKg9CDnj/6/eoWiNxJkMhU5Ownp2zmi4IKHyyNrZ4ZM
nA4H1xwKb1F6D5VNw0kGVRguVYt0HXPPXOO9NERyBy4CmQ7xgAY9T7H9aiaYgGSMKxCdDztHTj9f
[Base64-encoded attachment data omitted: the payload decodes to binary, not human-readable text.]
NhJlUjHvSMH2naNoNOwXHBij8nOPU07efnLgofbjFQscsDngcc9KkL7yeNucg7ee1FgTCViIwwGC
e+KccADccj1xmow+AQ5PB+7Twu+M7CRnJ5FDKuhHY7x0OeBR5jeWxYEg8Uhysq5BHXmnICSM8OOm
e9IQb/MBx8uO3TNPAbJLcHHr096jCqC+/qD370bCx+X73dTxQMZLh3bPOfSnhhETnKkHgDuKRYw4
JGFI45p5PPU5PHAoCwOxcnDbQeoHekLlh5YGQMdadEEjyx7dzUcjMWYjJXPAIxzVJjJnfEnUP6np
j6UwfIGUhyG5BPekyRn5yFJPyimkfMVD/QA8UgFnRhLgtxjgU5VePGcsuRgUoO7ZlhwOeOQKjfmX
hioHY0ALKrNIXVuQc+9KN0jHgFsdAeKjlXIyGxj1705Q2wtt49v60WEPdGaNcjawJwc9qbudQ2cM
xXGMfjUjIGKhj06N26dKi4VjkFz0FNhcfFJJJkqFVevP8qYTIsvTnripDF5khVcksegpSm5jyRtP
QjOahoQm7Ack7uMEdqjYFiNuWUDGfSpC2eNoQ9B3OaQFlYHk7vvZpJNDI3ZozkE9Mc07cWXcHCnI
znqakdY97LtLAfkDTWy4yRk8ckYyKAZLHKBE4kYoCeO/aomYspfbuZeMj6UiJlWB6dQCeKc6HG9W
Bck5HoKgkSJfnOBuBGfQj8aJQ8qspf5Qerd/anhN7KGBLdMgHilkQPEctkg5ORyau5SAqm4Lt5HG
F5H1pskezLZHTjnr+FSZATAGXbkHbyBzUUm0kHqcAc85qkxigBUQkAqD1B7ehoL7maM5UjgEdDSF
NhKrwfc8ClLujKxRvnGBjg/hRuSIrRxkgjd8u7PUg1IhldHzkZB+7ximqgZSygA9+TUg3KpGAvBB
xxmlsUiRNuCxxjptIzn2/lUEjbE+RCdvJPtUiOQVc9RxjHU+tRNOGfKJ8uc7Sc7vakU7DVIxgDLk
9OtNiChgWHTknvUsr4bcF4HQgYPvTEDzpymWTrg9R9aokQSbZdiqwyc88j8qsSHphthUnJPTn0FR
BWQbt28Hjpgrn1p105nxsPzA4+b0780X1AGUMhckEdg3+FClhMDIArITkLTEcKyq6Et0DHkeoFTJ
5kYlbb/sk/yobAbujYlnDbQS2OmR/jSxuPMBUnaOQOo561FnYCXO4dDilCoIVVcqM96EA1m3nglF
U5I9c1M5zyCuTj5R0/D0qJkCsqL8zKScnv3qRpHlYAkLGrZ2kc0NtAiHzo3bJGwk8YGeKe7LHlfM
LKMk4X1HSppyJSWAxgj5tvCj16c//XqrNIYgwK7s9QDtBppyKvYpRlvNJHJf1FW081JCyj7oI25y
OlVIwUkDfeIPfpVrzyBlmwQCCM1dyCBZDIWMmWbHA7mpnUvEAT+8Y7dpGMCogWdXwuT19xTpJjEF
3HcccDqcepqGMRyIwQwYtnnPYU0ykKrI209C2eaJDhizcZGRt7/WmA7icgYzU2FY2mjiKlNpLt2P
p2qtMfMRcgKMYAVgSoH/AOo1ZaXeHRTtC5IAHzEY7n8arh0kCYOwkckfw84ra5CKxlJ2BmKnkBhw
cHrmhyc7UJGO+PantgwMEA2A/eIGfxNNkXdLufjcPvAY5xgVLKRE6YQuScjoQeaLq+l1CVWldmdR
gc8AAcUsj7WCgdeMAZqE797blw3IwaEUXNPuPLcFychgcCvpX9mv9sfx9+zIb218LXNteaPqI3ya
ZqSNNbxOCfmRVYFWOTnB5wMivmm0jYSsdpGPlOTwK6fR9JudRlV7OOS7kEigrEpbaT0OB0/+tVWQ
uZx6HsbfGHxDF8XIfiTaC0tteTUf7VjURfukkzkALnhfau4/aK/a08W/tILo8viaLTrX+ywwhi0y
DapLH5yxZm/urjn8DXhtvZXMkKiWN8vtAiA+bJ6Y75q/c6Xd28MUM8UkBdPMbcpXcQDwe/b0qkkx
JufQ9t/Z4/bF8bfs22msQeHINL1Cy1Nonkh1NHKrIuVDKysMHBwc5HAr0X4f/wDBST4jfDyw1Gyt
9J8PaklzeS3yvcW8ispkYu6fJIMgMTj0B6mvkhLOe5LhIpPsyRqxk8slVGeMnGOuKsjRbp4VeNXZ
GbduVcLz0wce4z9aHGKNUpLc+3Yv+CrPxIcL/wAUp4XEvVnInGRjoAHJOPXI+leNWP7YvjLRf2hr
n4twWlhLrV3G8M1i0ZW3eJkWMLgNkY8tMfNnivDrjwnf+YxFtdWzh8DMLjLDknBHr61VudNu31FL
PyW89l5iRSXzj72Bz3zxUpIiT5dUj0z4+ftC65+0D44k8U6tY2Om37WyWpismbYipv2kZOSfnPOa
7742ft5eM/jf8KrbwX4g0rTbeBJ43a7sRKs0/lqQCRuOMn5jjivnK502SFvInQHY3OCd+MfxH8Qc
e1RPY3iKl21nPGB8yMI2AKj0z6U7IxUnLQ9H+Afx9139njx+virRbCy1C78iWB7bUQzROGwcjB+V
gVHI7EjvVv8AaF/aU8UftHeMJdR19xbW6qqwaVbyM1vAoUZ2qxIDEjkjk15HNE7O0sO6XYeU28+5
B/L8qJ7J1U745ADyBghg3bt7VTjZ3G2MZ1lnkLAKEXrnBPXgZ5xUDhvMQGJn5yyd8ev5d6kuoPOi
3yud23kBeD6nrVczTrNHKSFUsQHQD5+Puj8KTdjJ3LcM6xLJhUwBxvJ49O4PrQSTONm4joWHI+o/
SqphS4ifbKVyc/Nzzyfyp0jgqAwwCONpPy++aEPmAtviYsX3LkAJ0Ayf1zSRjEJ3pgPkLg5x/wDW
6frSXMaXM8LqGKgbzk9Rnr/jT2UxAFQkkPcMcUIqLTEmIcEL+7djgs3Qgf1pY2zFjoC259q8kgdP
btSyQgg7IwYyCV2nj8qajJbCT945R+GQDik1cbshrOVkDFzhgcYPI7ZPpxUys9ukceC25wzHPA44
Oe/XFUz5RkLiLggHI4wRjt7jNSwSwKG2uzDcSCw3BeOOR6+lJaEpJPU7T4Y+PdS+FnxB0Pxpp8Sz
alot6t0kd02Y3K8ENj1GR+Oa+09T/wCCuGsa1YzWd18MtEm85DFtlvpJU5GGyDGMjB/SvgCN1maN
VmIRsj5uS3Gcf/XqZdNuI5Db+TI+QTHGF+ZBgkZH8Izjn3pcyub+hNruuDVNVvbqRI42mkZ9sQ2r
HuJOFyeAM4H0qPS9VvND1az1HTLma1vrCdZ7adHKSRyLyroR0IP+FU3ti0hbbu2Lhx15Hr6UgtlM
yBN6qOpAJ5/zitNGQk3I+t/iv/wUE8QfGn4Gw+APEfhLTrq+kjt1m14yFpHZGUtIqFcKzbeTnqTx
XR/An/gpXqXwO+GeieCV8Babqo0yNojerfNA8vJYFlERBODjI64r4tltBJIzLII2PQE9ePT/AD1p
hiWU+UiuJpuUbZnd+HejlRdmj279pv8AaI/4aQ8eweJrjwxYeHZobQW7ixm8wz8kh3copYjjBI4F
Zf7Nf7QV7+z/APEm38V22g2uuJBC9u1reybOHx86sASGUjrjofy8uFm6Qs6QybQqgsykAA9AfQnB
/KondE2RqrRMG++DkMevTt+FTZGd7M+3/jP/AMFN734vfDjXvCk3w20qO21KA24vJr95zEWGPMVD
EPmAJxyOa+KLm4AkYZ6EZBfBx1I+uadY2d1d4CRAo7ZyykqeDkZ9eDVaSIysqsWYgAgggZHb+tKx
fmfdHwv/AOCp+pfDzwJoPhWP4b6dctplpHam6XUHgEpQY8wqsLYJCqTz1NfMH7Q3xrX46fFDVvFc
fh2z8Pi9ijDWVmdyswBzIzbV3Mx6kjPHSvNorhwQrYwwAfA5wOh6euP1pl1aNbl5WRshzluCRzjn
3oj2Mrts92/ZW/ahuP2X/F+oazD4astfW9tVt3jupGiliwchkk2tj3GORj0r1X9of/goreftBfDT
UfCU/wAOtP0mOeWKSPUpNQN00W1wxKAxLtJ2gZz3r4zNsGkRQQRnls8VaGnywIsYfenJBB4x61SW
pa94EuJPPEm5Tk/K6jj6ev419qfB/wD4KdeJfAfw/svDPiXwtD43azJRNR1C+KSGMY2Kw8tt5BGN
x56fWviAqjZAYoM7MZyR1Oc9+o/Opf8Aj5jX5ozDGPnKOBzTlFFLQ+6PiX/wVL8S+K/Amq6B4d8H
Wvg6+vY9o1Sy1BpXiyRuKL5S8lcjPUc9OK80/ZW/bf1f9mKy12yTQLfxbBrDxXTrNeNbyRzKpBO7
Y27IPOcdOtfMv2YKpk80ooG0sT3OOMZxmoGh8uQfvM5BUFRnpzn8OaSSIbaP0TX/AIK37L5p/wDh
UmnfagC5kXVvnYkdm+z+9eC/tTftl6l+0prHhm//ALAj8KHw+sjW5t7xppvNd0YOH2ptK+UmMA9e
tfOEFvPK7uUkd428sBVJY9cZHXGBVN5fLO8YDbznOcg8gj9afKhxk+x98eDf+CqGuad4J07RvFPg
fT/Ft9ZRmKXVbu92G528CRo/JI3FevPPXvWDf/8ABS/xD4m+FniDwl4l8E6XrraoLhIri4uTi3hk
J2K0flkOY8/KcjoPSvit0KvEkKu0jMTlQST1JGPzp0kTMu9Q5C88DOQfX/8AXSUV0CUnueg/B74y
+IPgn410zxR4WvWtr60YCaF2zHdwlgWikGOVYDHr3HOK+0n/AOCuN0rPOvwtsGvSpAlGruDjHc/Z
89uma/OmKBptw2kIT91VJJ6Y4HbOKseSRI8S7RMvBTdj65q+VdSVUb0seleMP2h/G3jH4rP8QbjW
7qLxEsvnWdzbzMotArMUjQf3VDEYPB79TX1vp/8AwVgv7jTbCPV/hlperanDAsU11JflDI+35yqm
FgoYjpmvz7eKVpVO4JgY+XqO/wDWkaGWbCxq5O3JOxskevt2qbItN3Pqb9p79vvxN+0R4ftPDVto
48FaDHua8tLO/M5vumxXPlphVwTgZyT6V53+zj+1B4n/AGbPF0WsaCiX2lTkJqGjTSmOO+XawXLA
Eqyk5DY46dK8dlHlMybHJAyAV+YjPU+n/wBaoFL+aqklNwJBJAA4z/SiyFa23U/Q/Wv+CtGrSaPq
EWlfDjTtN1W5gdYrwaoz+VKVO12TyBuwTnGecda+df2f/wBr7xH8F/jHq/xA1S0Pi7U9Zsnt9RN9
cGOSdtysGWQKcYKKMbMYHavABEpWXFwhcYHDZOD06duagW0YOG81nKk4K9CMfSlZEOTi7n6Lzf8A
BWqOe7NzN8ItPkmJASZ9Wywxkg7vsx7+leN/tYft5X/7TXgmw8LyeELXw3a2d4NQedL9rppSsboE
A8pAo+cknJP618nxtNOcRqZFJPIUY6VAWIkJG9gQG4xj6VcYocZcx9jfswf8FDNZ/Z3+FsXguTwV
aeIrK2uZrm1uG1E27xpI24xsqxOD8xJDZ5z+NdV8UP8AgqVr/ivwBqvh7wv4Nt/A19fKEGq2moec
8Sk/OUXyFG4gYznjPqBXwn5JkhUY5YbpFUjgZ6460l2PMVUEi5LHAJxkD2POaXJYq57j+zT+1V4l
/Zn8Zf2tpo/tLR7wBdU0aSYpHdtsIEm7axVwcndjPY8V9QL/AMFcUt5Rdx/CDTxeMSfMTVyH5HJL
fZs81+dTStg+ZuWNeM5wP/109ldBIgALqfmDH5ge3H40rIfNc7/49fHjxJ8ffiHfeK/ENz5hkLR2
Wno+6OxgZsiJDgZA4JJ6kVufs0fH5/2bvidH4vtvDlr4l/0Z7ZrW7k8oqGIIdJNrbWBVe3QmvHPJ
cAMGDqRncjZ7/wBKmjZCjuxBAJXLZCg9uPWm43Q4pJaH2v8AtFf8FKX+Ovwn13wXcfDGw06XUECQ
3lzqZujbuCpEiJ5C4Yc4ORivJf2Vf2qx+zJqWrzP4H0/xYurRRx7LqfyGtSjMcq3lvnO7kYHQc9q
+e7hF85l83YAucE5x6Z/HvTttwChETSRsOHzhfQc9xmmoXFdJ3P0e1H/AIK/JfWkltd/B+1uoSP9
TNre5Gx6g23r7V8haL+1Br3hf4/X/wAUfCNhaeE5rq/a6bRLRv8ARjA2zfbtheVbZknaOSSOgrxm
ZwZVJYkL1Oc/Sm4iS6ILOwI+4P8APTOKTglowT9659+ePP8AgrFrGs6fO2gfDPTdB8SExiPW5NQ8
91CsCwAECkhgNuCw4PevGP2wP20Iv2q7Pw1FJ4CsvDd/pLvJJqK3X2ieVGTHlZMa4UMS2CW5Ar5t
nmii8oDcnBG4qQDgcn0qk2HBJy3XBQ9xmkki20fX3wP/AOCiuq/AT4F3HgPRvBdjNqqLcC28QC68
tozIWZGeIREPsLHA3YwAOK+UfFPijUvFmvX2uavqE2p6tqEzXN1eTtl53PUnHT0wOOKy2zKCoYHB
656kVA4cNiXcuSQMc/WhKxk7tm/4W8UHwr4q0TWxaQX76fewXRtLj/VzeXIrlG4OVbaVPXg9K/Q3
S/8AgsNZaPYva6d8G7PT3PPl2usCOMtjrgWw9K/NhUBljUfIFHBIBJpHihEYYSKhXBZsjbz60rJl
a2NPxJrj69rmpapPGkc19cTXUkaElVaR2cgZ7AsQKx3ZTCFyduePao5mO1tzlgpOCDUDYYhixyP4
Rk5NMxTexLPkxqq4JY4IbuP8mvtDwd/wSl+LnjPwrpPiOw13wollqtpHeQxyXsxdUkQMudsRHQ9j
XxYY3kXIOcnr3HFdfpnxb8beH9Lt7Gw8aeJLG0t02QW9tq9yiRDGNqqrgKOuB0qW2tjVW6n1xL/w
R9+NIXC654Oc9yb24H448j+tch4S/wCCZPxf8W+L/FnhuK88N2N94amhine5v5Nk4lTfHJHsjY4I
B+8Acg5rwOL48/EWM7v+FheK8kAEDXLodvaSszT/AIp+L9G1a+1Sy8XeIrHULwAXV5bavcpLPt+6
HcPlsAnGfWjmkCUb7n17J/wSB+NIjCjWfCL8AHN/OD+fkV558Xf2TPi7+xMPDnxAvdS0y2eDUVjs
9R0W6Mj29xtLJuV0GQ21h0I7HrXja/H34m7t4+JHi0KOONeuhxjp/rKxfGPxQ8WeNra3t/EPifWt
dgt282GHUtSnukR8EFlWRiAcEjOO9Gr3DbY/Qe0/4KyeGtf0XSY/G3wbtPEWtQWqpPdvdQNG0gA3
mNZImKgkZxn8+teLfth/8FBNT/aU8Lad4R0LRH8GeE4sPfac80czXjqytEMhBsVCvAB5zz0r5BLY
iDA7QWIycZPtjrUTEs2ByOn/ANahRSJ5rnofwa+NHiT4FfEDTvF/he6+zanYMVMbqrJcRNjfE4IP
ysBjI5HUc191+Pv+CwCal4N1u38J/D2fw/4tvbcRRarPeQzJFJtwJCoj+faCdoNfmosBDsH5yp4B
9B3pZFARShDrjgBgQaair3K5pNan0J+y/wDti+Jv2cfirc+JjNJrukazI769pjFVa9y0jq4cj5ZF
eRiMYByQRX1Dq/8AwVg8OWviXTdW8GfCkaJdXOoJLrtzJJAsuoW2G3puROXywYMx6r7mvzVbdjgH
JHSmA4O5iRjgY6H3o5FcSm+p9U/Hr9qnwV8Vf2lvDHxO074cLYW1g8Davpt28TnVtsoJL4XbnYNu
TnP4V9NePv8Agqz8OPHHgDVfC2p/CHUb7S7y0Nr9huLiA2/3fk4A4AIGCBxgYr8xX2Ow+Ykt/Eep
PoKAGZQd+xf4Qx5Jx/8ArpKOtx9LWIpHSKLgGNlAXKnPHvXdfB/4w698DPHWkeMPDFwLbWNOkJUu
AySow2vE4PVWXIPQjgggiuCulG8lSSp4Jbv+FQMpgbJZSOoVeQBim4p6MFK2x+p/i3/gsDoEnhnW
G8M/D3UNN8Z3tl5MV/eSQNBHKFby2fB3OqFiQMd8d6+KvhH+z/8AEb9sXxl4tv8Aw4LLU9bQnU9V
uL2dLbzZZ5HOQAMctuOOgxXhRf5w7HGORnpmus8DfE3xb8MtSkvvCXiTU/DV9LF5M1xpd00LSpnO
1sdRnn2qbNbDTTZ9NN/wSo/aCjc7dG0JsHkrq8fz/gR/Oua8c/scfFD9lWTQ/iB4+8J2OteEtP1a
1N7BaXUc6yJ5y/I46qHHy56ZYfWuHj/a++N4j/efFnxXu4AP9pOMCud8fftC/Er4maMdG8WePvEO
u6U0iTmzvr1nhLKcqWXocHnnvTV+pfMfVf7WH/BR1vi34A07wJ8N9GvvA3hzyxFqBfyld40K+VDD
5bHy0G3kjBPAHHWx+zp/wUa0/wAE/CHUPhv8XPD174+0MW/2aw8gRPI9uQ26KfzGUMBxtbOfyzXw
gYi0YaMgLyQCece9QsZFZWzkDgg0W1Fc+9v2Wv8Ago5pXwo8Caz4C8d+FbrXvBBMy6RaWKRSSW9v
I8jPbTB2AkXDgA/UdCMelwf8FOPgt8MPC/iBfhf8Jb/w7rt7bt5ISxs7aCScBhG03lyklVJJOATg
nHWvy+kuWjiXHByVwKGladcGQnBxSURc1+h7F8Lf2gbvwv8AtI6N8WPEUDandxa42rajDaxLEZTI
W8wIOgwGOB7DNfQn7Rv7f+gfFb9oD4T+OfDvhvU7bTvBs4nuYNSaJJbkmdHZY9rMB8qHBYjk+lfC
+SVJGBgkUSsc7GP/AHyP1o5FcXM0fYn7f37Y/hr9qq/8G/8ACNaNqulWmhR3QmfVViBleYxABPLd
xgeWeSe9fIttqtzYXqXFjcTWlxE25JYHaN4z0yGUgj8KpszAAO3zY544pPuZ3YRSOh6mqSI1Zd1H
VLrUb2W+vrqe8u5sB5riRpHbAGBuY5OK+j/2Lv2ztU/ZX8b4uI7rVvBWqsF1XS4vnkQgNtmhBYAP
ubnJww9ODXzHIoZQcFfrQJPLdfmGB3654pOKY7uJ+rUP/BTX4A+Bf+Ev13wP8OtZ0/xhq8bzTStY
28KXVz8xUzMsxwNzEkgetfMH7PH7XHh7Sv2k9Y+Kvxk0m98XaldRF7KWziWT+zZ9+VaGORwFULwM
HI64OSa+Q5JZGYDkk9x3qcqxmc/MxwOFo5SlJs+uv2j/APgoT42+LHxk03xD4S1O98K+HNDuA2kW
QJjeQgo3mXKq5WTJQjbnGDXrHxh/4KFfDL4xfDbwXqHiHwTqc/xR8OXtpfQ3NvFGlrBOksbXGxzL
uKOqHCsDzt5yM1+dhy6NscPhtrDuD6UlwZTHtzn8z2pcqHzWP1o+IX/BRv8AZj+NGl2Fj438EeJN
ctbeYzwQ3emIyxSYwWG2bPTI71xvxo/4KH/AzV/2Y/Ffww8DeGNf01LrSJdO0yzn0+OK3hZ87SSZ
WOFPzZwTX5j+cdiEuS2McHr7U1mLBhjkcAUlElyv0Pvv9jX9vzw78Jvhbqvwy+KGl3es+DTDMlg2
n24lm2Ts5nglG5cofMJUjBHzDnivVPCn/BQL9mD4HaL4jvPhZ4C1jR9dv7Y+XB/Z5hiupUDeUrs0
xwoY8kc4NflcZS0jDfkAdB2pZpTlVB+UkgCixV2ep3HxctvHXx6j8f8AxB0uPV7PUdbTUta0yxQo
kkJdfMjjUt2UcAnnHJ5r9Lvhr/wUX/Zh+EvhX/hH/CXh7xFoOi+a8rWdvpRcb2A3NzKSc4AzntX4
+qTuK5+YnaD2NSl5I4zsIY89TVONwTPpb9pz4qfBi9+InhLxL8AtH1Pwxc6YRdXn22F0ha5jmSS3
ZY2kbJG19x6fd719bv8A8FFfgL8aPAnhR/i94U1eXxfpJN0IrCzcw292vWSFxIOG2qQDnHTtX5Vs
z7Rn58j+LpxR9pKMpO5jjA/Kk0Lntofol+0z+3f8J/2oP2Y7nQ/Eehava/ES0la40uOG32wQ3Acq
r+buxtaI/Mp7k+grj/Dv/BQaX4b/ALFWg/DDwVZzaf4zUXdhf6nKh2W9tI0jCaB1YHzcSADP3Sp9
q+HllYHJJOT1JPSpQzHc4Ix2z0o5R3uT6rfTXl3LdTySXE08jSySzOXd2YlmZiSSSSScmuw+D3xn
8SfA7x/pXjDwpfm11OykBMRZhFdR9WhlAI3I3QjtweorhWDIsmXzjqR2/wA4qJ2PmDaOACfrS5bi
P1rvP28P2WfidrXg7x1448M6snjzR4UeN10uWT7JKCHKhlbbIqvnaTnqfWvmr9rP9uyP9ov4qaTp
6xainwe0rUIGk0qBngl1aDfG0rzLuwWG1gi8Yz6mvjAXDkDAIDdxxiovOaJt24t645JoUR3sfpP+
0n/wUc8NWfwl0X4efs//AG3QrGO2FpcahJbyWs9jDGYxGlu27JZgGDMc8e5pnwf/AG/fAXxC+BOs
/DD9ouK91hREtvb6nDbvczXsZyVeRskrMjAEOOvHoa/N8XBbmPLEHkGieeSWZpAMM3OB60cqGmfq
x8CP26/2e9M/Zl0v4Y+MZdXtrKKzuNKuLGXT5pPNtjI+w74x3jKnjoc1tfB79qn9jD4AnUj4LGo6
W2qBFuTNpd7cFggIVQZAxA5PA46elfkgsrr5fJI7Z5pHkdEZvmXb8uev50cgNo9aPir4c+IP2oLr
W9e0+6g+GOoeJbm6ntrOMpJHZSXEjKVVDlcKynC+hwOlff8A8L/2lP2Nv2aT4g1z4cx6t/wkFxZM
qw3FlfSvMUDMiK8wITc3fI61+Tm7GG5UH/OKXzZCCBJtB+8jHtTIUj3L4kftUeMfip8cbL4ma9JH
Pf6dex3WnaTKzSWdkiOrCFEJztbYNx6kmvuLxx+0Z+x7+1HoHhXW/ilFeaD4ts7TbNaWFndpJbu2
N8RlhQrIgYErk9D2ya/KlXkXOBknNNMzE5JA5J46ZxSsVzLsfpl+19+3L8M5f2c9P+DnweaXWNKv
bOPTZ7y9t54BYW0RQoF81FMjnbj0GM81B+yl+3P8Lbr9nC9+Cvxoln0XS7WyOnW2oWltNKL61csS
CI0YxuhOM4wRg+or82Jbp3G8SE9jntUTTMqnc27PTIp2Qr9j9cvhp+1H+yZ+yT4L8YXHws1jUtY1
m+hFwNNube8L3c0av5SeZJEAgJc5PYE+lflv8S/Gt38R/HHiPxVeW0drda3qFxqMsEZ3LE0sjSFA
Tzgb8VzHnFuVUgD9OKhJzIxLfKT34o2C9xsj/MQpJGcAYpZNwTdjgnHPWlZkjLBCSR+QpgKseCc/
XilcjUYV2qQvOOlTMWZScDHcmoypCklhnGMdqRhgg8kA88dKW5ZIsoUFSe2OKcC+GIIUnjB/lUWB
uLMcD2FSEBhy5G48GloVcjJ525ycYOaC4YksSv4e1LJcbMqACexI601iZPlYYB7jtSsRcMqFKjJx
3zTzzEQBjv8AWoxEqs3zYHPXvT9gA3HLdsE9KBoHZYxtwOTnINIhOSA3B9+KDCVY78njg4zUkax7
TuPXNAxN+1jvw2DgYpjq0kv3SB2FIR5WCRlSSf8ACnFiGYgZf69OKAEjDnBYZHXnmk2sspYsTzTp
TmMMQdpb1xSTySIcMnGODT0EKyfvMMSO/wCNOP76QHcOOgFRSBmdcEbvanMVG0n5AByPWkhpiyhQ
pGW544PFNJZvmJ5B5ApJQzIQMMpPQU5k2gbGUH37UCByu4rt46A+9BgERbaN3GM9hSyOGjXgKV6t
jOaJi2zG7IPJBPSlqUKCCVKgg4wR60x3G846HihY8MuHDZGCSPUdKe8O5QpxgcZpiRE5eNiwGfSp
Dkgnkgdc/wBaU7ULFiSF42k9acY0CiQvjcueOuaBkQAZif8Ax0Ht9KcRtwQHZumemRTZFYsfmJAP
BFKrAvtzk9CMUmSLIULqfm3ngnpg0jI6luqnNPlC+bIQGG3qD061GJQQrAkyZyBtPNCHYcBubZ5p
x1BPqaTIRSAcns2eB7U1xtJ9Q3WllRVAyMkHH170LUQ4FWkLbeOAcf0pzyYAP3cgjaDmkdQCpEm7
j5R3B9/xqTajKWLngcqT1PNKzGiIjK5kGe2AKe5jiVFCHe2QwJpHVFcAE4PJbsKaHVHJZt+B8o5G
aqyGSks0YKZGAM89aXzhGgJIbvtb0qJjvXCkF/ftSjbMGwh8wdMc1LigsPZkABzjngbqSFUWQA/M
QCDj+VMdk2kcMGTGT/CfbFKrpszg+Zu+8emKPQByxEhpBnA/hB/rSTskkoJYtnge1C72QlDhc0kM
aODufLA9uKSvcBQPKPDYDfKSKkkfMmCfl6ZxTSmxIyQGBOPvdPwpzCMhNpB+bqTyMVoMYFdS5Izj
p3FBQxICON3IVuevf+VOM7Iz5cxt7HqKAynkrtyMYA747flSsgHOC0bkZkK8Dt9aYQDAfLAHXP19
KVspGV2k+uOtMVvLkGFAOejHp70XsIUMZCIxlARgKD+f8qVwd7x5UE8nA68dRTpkMjeZkx84H1Pe
pkbbudnPlj+Pr+A/EVKbZViGQPGBnIUAED09RTZJSWfeh9vl6HsKfK4GWyXU4B4/z7U2JmLMBwGz
94jtViFO1fvAxfxbcc/5zSGMM25VJ/2iB/Kk5jZsqCmMYZuBSJK7wFfuqo+XHX15o0Fcc2HwzEKR
x0zRJHgsdwLLjOTxUexlUk/MRkcdz71KqxxuGTkFtobP86TYwZSiYYcHg4J59OKiuYzHGGO3cRxg
1LIqvIm1SwAOBjHNRXEonUlgpAXb8vbihMCkjM43YwuehPAqwRhGGM/MTuA4qrbFQ5LEnHYd6u4+
X5AcMDxjmqAiQoEb5ztPLADp6c1E1uGYbnAB6d+KWJjEWKjd25A6+tSJsiUtIf3jZABFAEHmR/dJ
OAe/Sn+UgLK/UHjBpMBeSpXH8J7mnrKfmbYWcnBx2qbAaN5GTIxU7QOE56imbnJJIz8u0t2/z/jU
jlJGHytsDEADtjrn09aguJiJOCQpG0DPUVoSRXG6U53DaTk4HHPtUUki7gGiZQO2cZqSSUGQZOFA
yOeg9KYoEik9BycGloMbcfKVMWVA5I7Zro9X08a1oVtqyHbKiiGRFA5x0bj2rnHlAYnHykEEHniu
u0/yz4AniL+XMsxkdQeccAH/AOt7Gpe4zD8OWn9qataWSypGZpVQM/AyWwOa/ab4b/C7wP8A8E8/
2fbnxX4mtodU8T6hEkU8qQl45pzuMMYGCEHPzNwOPYV+LXhqdU1uwlLYkWeNkIX5shgeOK/bv/gp
nIE/ZSt5S4TOo2ag465RhxUpXeoql1C6PzbsfjRbRfHjTfiLqmjRw2f9rw6nd2NgoaMRrKjSKoxj
7ikY9TX6I/tkfC63/ad+C/hfx/8ADeK11OK1ja4Rbe3P2i4hk2jaoGDuU5yp96/JtJmimWFkJU52
5IAzzzyOuBmv0I/4JRN8Ro/Eeui3Yz/DFy4uXlxtF8FBUxdxwcNgYOB3Bq2uV6Cg2kkey/sP/Bm6
/Zv+DviTxX8RGt7aC/QXht7q3xNZRR78o6nPLDBwPpirP7D3xl0r4jeNPiP4X0rRLKDw5p17Jqem
3HkhJDFPK2E28jAwcYxjnivPf+CoGq+Oov7KsVaa38BSKkiTQR5D3ahyUdh22gEA4ya5H/gkk4i+
IPxCtvLAxplq+7OWB819276kjj2NT6lxfNe59NXv7Udp4a+Puo+CvEnhKCw8KRzixt/EkVuzx/aW
2lUkONq53EexrzHStF0I/wDBTieGy0+1azbRHlkVY0MRl8kEsoA+9nr7k11P7Z/xQ8CL8GPip4Xg
ns7PxVHdWolsJdonuZWeJhKiZyw25GRzwelfI3/BOTUmuf2sdEJlkYnTL6KVJWYvu8teuewC459a
i3UcVc+uv22P2LLL4u6bJ4r8J2yWXiqziVZYIkUJdQKGyAoHLjIxzzgfjY/bl8E6QP2STJ/ZdrbX
1obHy7gQrHJEcqrc4yMjgivQfiL+1DpPwl+Puj+CfE+LPRtcsomtNTYARwXBd1xKx4CthRnsfbOM
v9v9Ypv2WfE7lsr5toUdPm5M6AEc89eKtGXTQ/Kn9nn4wf8ADO/xa0zxVNo8ev6OVa1vdPuVDOYX
YZkTj764yB35Hevt79r79l3wr8c/hzH8a/hu9vbJPYjUbxFUwR3luqEiUDHyuoHIPBA9q/My+nt+
f3zyQlgrFkIz6nGO5zX7F/CBjc/8E3bHIDlvBt0oAOc/upQB83tgc0OTuXJJxufjYbIx3FwoDbEJ
8zcMkHoRg9uvNU5I43LTIHRRkxBmyAO/GPXitnxPGJNWnkbzIwzMSsXygDnjnp2rAXe1zkHBLH5l
P68VpF33OVkkiMsZ8sBmIIZOhUev8qSKH7PJG8m1WwQcZIHB9O9RTQxxXSyFcxyZUuyl1HGQCOc8
45qWUoZS+wyOcZycDJHp3/CncF5gJYoDs37SgIU9eO1Dyl4XJMYIGCSNoI6k/wD1qQhJtztH8owM
DAHJ4Ofr+lI6NMkiKGUjhmJ3E/T8h0p3Hy66EBuDGAGxhBkMOAR17/0qZETzSxCYYkhuRkD0NLIC
10ihSCvQkYB9wf8APWkkuN1wvPIJVs+/XPv/AI0XK5L7jgWW7xnaAv12g9OTTRAtsm4tkMR93OR/
nNKSjrgLhv7+Of8APGKTc32klZDIAobYOgI7n68/pSsiHE9q/ZB0+w1j9qH4a6fe2sd1bT6xHugc
ZRwFY4YHg9M81+pP7Yfxr8M/s1+HdId/h1pniZdcea2eIxxwpEqquWb922R8+MV+XX7GUXmftR/D
CVm8thrsK8OORhs4+p6+1fc3/BWnjwp4EbdtYXVyFzjDfKmRz+ePaoWrN7WSTPzE1yaPUNZvruG2
hsoZ5HlS3teI4wWJCL3wucfhXRfCT4Va58X/ABpY+F/C9sZNQvMbGkRnigyOGlYD5V9/auUiDyui
uCmMIXXqDnG7b6c9PpX7Kfs3fCTwt+yz+z1L4z0Kzn8b63daaLqe+0uPzZrlMbkiQbj8q55xzwTj
jFNtoqKsfMP7VX7GXwl/Z2+CNhfX2v3w8dSpGkMXmGSO/mUDztse0lUAyeoHTuaf/wAEu/gZ4W+I
PiHxL421q3bU73w9NDaWVvKc243xtvYxng9sZ6YFfKXxy+OviX45+PL/AMTeJpIzNcSBYLOJ3ENj
GqhfLRCxx05xjJJJr9Mf+CY2veDNV+B0lr4d0mTTdfsXji11n58+ba3luCDyNoPYGk5Mu71ubfj/
AMN/CT9qTRPiB4Dl0+x0bxP4WuJRutYUiuYDGMx3CkAZQk4I9yD1Br8kPAPhuy8V/EHw3ok83lW2
qana2zEEZVJZkjZkOOoD8Z6EV9If8FAfFWg2H7RutzeBLm/0HWFD2evXEcjRi5uCuGC4blSCuegL
DpxXn37Fuv8Agzw5+0F4ZPjbSH1SwllW3tJo1LG3ui6GCQgEZAcY79QccU76GSWtz9S/Edr8LP2S
vA3g/TtQ0Oxh8OyXi6U2p3kCSyRM6OwkkbaS2WGD0wD7V+d3/BQv4JeGfhX8XbG98KkLp3iOyfVD
bxbfKgYyYxHj+A5JH164r74/4KHap4K0z9nDVf8AhNLG5vhNOsWki1yJI7/Y5ifIIwAA2c9s8V+N
2o+Ib3Ulhjv7++uI4AsEJvJmkEcY4CLuPTBHA4oVwsmz9Fv+CXPwH8Ka34O1f4h6tp0ep6yl7Jpl
styokhhjEUTswQjG8l2GewGBXb/HLQvhF+1R8BPHOp2FlaaN4k8EJdTzLaQpHcW8kQkwj4Ubo5PL
P/6xXpf/AAT917wdr/7OOjv4P0yXS1tpTbarFKDl79Y4xK+cnIYbSMdiK/M79rfxT4WPxx8a/wDC
ALquj6JdzGHUITLJHHdXKu3nYUE7kL5IDcZycc0o3Kklc4j9nn4f6f8AE/43eEfCeriRNN1vUI7e
4MMmJFG0k4/756nNfsN4y1X4UfA+98EeB9a0DSrLSdcik0+2urm2RkRo1RVWRivO/wAzG4nr161+
cf8AwTn8QeC9I/aA0mz8T6dJf6zfEw6PeqN32a7znLAEEBlB55GT0r68/wCCo+oeCrb4OafaeILW
6l8UXc7roE9ruHkyDYZS5B4UrjsT3HSldtiaaR8MftmfBTw98EPjxqHhjw7NIdImjt76GO4cEW4m
dt0Ab0XAK55weuea/Rvxnpvgn9h79muPUdO8IQeKbewlijMd6Y1lnkmbljIY2wMngY4GBX49Pq+o
a7qgvrzVbq/v5WSMzTyNNIcfKB8xJxjHGelftj8X9R8L+Cf2drWT41ovivSofs0V6bSzYefMWAQi
MPnOSP4vwHSld3KtocZ8OY/A37ePwG1e41XwTb+GDNdvZFrN42uIZo1R1lSVUUggv0I7HPWvi79i
m5+Cnwy+J3i3UviZrUVpqeh3Mttpg1GMtazIXljd2QIQW+Vev1r9Av2evEPgnxf8D9QufgjYQ+HN
PNxPFBHqFqwRLsIuWdN+SMFf4q/HvRfhn4l+KXxQvPCGkW41bxRd3lwr4yiFlkbzWJOOM5P59Kdy
ban6Q/CX9tLwl8aPjkngPQPhLFd6NJcyxLr6RoyCFCQs7R+SNqtgYy3cV8z/APBTP4C+FfhN8Q9H
17w5ALH/AIStbie602NQIIZYjEDJGAPl3+YcjpkEivrnRdD+Hf8AwTj+B8moX7/2p4jukEcs8KA3
N/MSxRQhb5Y1PUjgAZr819f8Z+Jv2rvjnZSeINQig1HxFqkFjCId5hs1d0iCxqzNtA+8RnkgmtIv
qTa7VjS/Y+v9f8L/ABh0XxDpXgybxva2k32a8t4rVpURJRtJLbSFYA5GRzt96++/+CnvgXQ4v2bU
v7TRrK1vrXV7RIbi3t1R0RiwZQVGcHpiu11FvBn7AfwV0bStH083Gq6pcRWEdz5eftt6Ux5sx3cD
AJwP8TWL/wAFPpvJ/ZeeXcY9utWZG3v9/ipvrct7K581f8E8f2OrP4hzW/xL8VyxXmg6fcsmn2SS
HdNcRODumGMbB/d7nrxX0Jb/ALcPwe1f46N8O18PadcaDO62i+JlgVreS5b5REU8r7pbKb9xGcet
an/BNiIx/sn26Oiox1O/JUMCOZPUVx9r8Gfgdc/AjRvENnZ6PH4rga3ukuo7wLdNdidcqV3cndkY
xUeY7e9ZHJeP/wDgnH4e0z9pHwvPZkxfDfXruc3OnwvsmtbgRvMIkIH+qJTjuuMelev/ALUP7R/h
f9ku+8MaJB8N7HXY760Z43Vo7cQpGVTH+rbdxyenSvY/jZYapqF74Ai0a/j02/8A7fVluJovNUKL
ecspXIzkAjqOvWvM/wBrr4lfBDwHqfhuD4u+G5NfvbuGU2TRWXneVGrLvJO8YGdvr/jV2Jdjzb9t
34Q+D/i3+y5bfFS00ePQfEFjpMOqWz2cSgvHMiM0EuFG8fNweoIz3Irl/wBhL9lTw54F8CwfGvx/
LazqbOS+sUlG6K0tthDySrjDNhSR1wPevZv2zTdar+xvf3ng+W10/wANf2ZbzzWtxbkO9h+7Kxoc
4jIXA5B9OKtfDGG2b/gntpKXQ/0V/Bj+aGORtMLbv5mnzapEJO8mjkPhf+1f8I/2lfH2vfDC98G2
dppuorNb6Vdz26vFq0a5DYHljy224ZQTnHvXxJ+01+xrd/A34x6D4U07UUu9F8V3WzRby8kAeImR
EMc2B0Qyp8wHI54r73sPhJ8DvDPiv4Va34Gg0Sz15dUQW7aZeK0k0b27iQsgY56Lz2J968K/4K7F
X1b4YIWKkRag+ehwHteh7fhVK99BtJNH1h+zf+yf4T+Avw3TQpLC11nVr1RLq17dRLKJ5toDBNw4
jHYV+Mvxf0CDSfit4z0zTo44bS01u+hggVcIircSKFxjgADoOmBX6g/8EtvEup+J/gp4kn1TVr3V
5I9caOKW9uXnZY/s8JABfkDJJ/GvzT+LciSfHLx1kDYviPUS55XcftcmKSvZj5f3isfpH8Bv2efA
v7DXwg1Xx/8AEOe2v9euLcfbbl0MsSAndHBEpX7xOBuxyfapvhf8T/hb/wAFCfAXiXwRq/hWPwxr
dspmFlGVaeFMrsuYpVRQCHbBX88g17Z+0Vp/hvVfhz4esvF6wP4duNZ0+O9W7YCIoX6OTwB6muX+
GHw3+FXgH9oC3k+HNtpVnPd+HLlbyDSrgSoEW4gKMcMcHJaobY+W+6Pkr9nv/gmldr8a9Yi8d3tt
e+G/DV0oWG0mcPflgJISfRQMFgec8V7n42/bv+EvgX47W3w8bw1ZXGhwMbPUfEMUCiKxuMspi8ry
8uA23LKf4jwcV9MeENv/AAsLx8VBJNxZg+n/AB6p/jXy5rvwu+BWvfCTx1qmsQaC3i9b3VZpriS7
VLsXaXMpQBSwPVUGMc9O9O4JI8p/a/8A+CeMmpfELTPEnw1e0sNM8T3yW91YTuEhtLmUsRLGAvEb
ZJKjoenB49o15vhh/wAE6PgRYWM2lweIvEV+4kitrhN0upXQEaysHKNsRchsEcduTX0L41Z/7C8D
FlO7+2LDdvOCDz1964j45+Gvh54s+LXgax+IcGm3Vh/Z+ovaRapIEh84Nb5OSRztz3ovfcOp4v40
+Gnw/wD+Cjv7Plv4k8KWdv4c8a6YWWJo4ght7rYGa2mYKPMiYbTkDuCMEEVb/wCCbmsw+Ivhbr3g
HxF4X02PVfBGoPp1xcbEl88u8hIOV6qVZc5OQAa9p/Zu8M+DfCGp/EPS/ASWkfhpNXieNLCQSQJK
1rGZFVgT0PbPHT2rxn/gnqV/4WB+0KQwYnxY+QD0/eXGB+WDUXegWV9D87P2uvhJqvgD44eOHk8M
XGh+HbzXJzpztbtFbSKfmxE2NpHU4B4B9K98/wCCSWg2GofGrxab62t79rbQVaMToH8tmuFBIBHc
DH4V9rW3ibwh+2IvxW+EvifSSbjw3qMlo8qpgKhZhBPG+ciRSpz06ehr4N/Z7+Jek/8ABPz9p/x/
oHjH7Vr9hb250oX+mQgzMd0c0bbGYdVbBGeCM85rSTclp3Ig0nyn0t+0p+31oHwD+MOreBY/hNYa
/NYRQu17Jcx24YyRh8Y8l+3HXtXxB+1x+1lo37TNl4fhsPhvpvgqfS7mSaS/tJ1mmnV0K+WSsSfL
n5u/KivsrUv2pf2Uvjt8SNNh1/4a6hq/iPXJYLGO91DSYmyzMEQMyzEgAkcgfnXhX/BST9kHw/8A
AzVNI8a+DUXTtA1y6NlNoi7tltOI2ffEcnCsqHK9j9aqLu7IPhS5iX9k79jr4R/tSfAPVEsddv8A
T/irYrMtwpn/AHUDeYxt3MZXDRtsAJU56jrivjT4k/C/xJ8LPHGseFPFNj/Z2t6bKY5UckrKDyss
ZPJRhyD6V0Pwd+K/ib4IfEHS/E/hO8NnqlmSv2chjFdKTzFKgI3q36E5HNfql+2L8CPD/wC05+zh
B448S28fw78ZaTp/26C+1FlBhwu9reQhvmRz93JyCQcZyKUZO7TCenvI/Hrwh4gk8H+LdC1v7Jb6
hJpt9BdLbXJzHN5civsb/Zbbg/Wv28/ZR+Kfg/8Aa3+GWr67L8ONJ0OK2v5NKls3iiuA4ESPnd5a
8ESdP1r8LLYyFFZlD7gpLY4BPav2F/4JGx+X+z54m2srKfEswBGMj/RrfrTejKilKLufkl8QdOh0
nxr4gsrWLZa22pXUUKom1UjWZ1C49AAAK5u5+RjgAcnjPSuy+KTKfiJ4q+ZznVrxuB/03kwPpXFy
Kodcqzf7Of5VRmrCkYXcDggZ2jvUUsvl52gkno2Ogp07gOFCcqMc96ag3RkY5bt0xTG9RoiZDjnb
3xT1kRM5A2ZJw1LsUkgcEjOf6VXWAyBmxg5x+PpQFiX59x4AXd0616B8EfHdr8MPip4W8UX2i23i
Gx0u9E1zpt2oKXEZRlZeQRnDZGQeQK4EuVjUABAcseOeelaOnu0coOQ2RjnocDg/Xv8AhUNFRV2f
sJ/wUl+E3hjU/wBmC31jRPBlhBri6nYvanTdPQXWJGwyL5a5bIOMDrivyC1rw9f+Hr77Bq+m3mk3
wAk+y30LQybT0bawBx2+or9/fj/8TPDPwY+BUXi3xRbTXVrpf2WW0jiXc/2sY8nuONw5J4xmvnz/
AIKQ+A/D3xE/ZOsfiPqdilt4o02KwuLW8tyVZEuZYVmiP95CHJwe4Boi+hnJKLuflf8AB7SjffE7
wr5+k3Os2dvqltdXVtbWzTtJbpOhlBVQTt2Zzn1r7K/4KoXXwpvLXwCfBtjp1p4hMs5uRZWJtna1
2DbvG1Q3zgYzz1r5+/Ym+Jms/Cn9prwNfaVDAw1O/j0G7huQSDb3MsaMQQeGB2sD6jvX21/wWQtI
ZPBXw3mMKmcarcr5gUbgvkEkZ9OPzqoX52FToflZpelXOq38NrYwyXV5NkR20CGSRwASQFHJ4Gav
3fgXX9PurK1udC1K3vLklYLWW0kEsxXqEXHJGc4HPNfo7/wSE+FXhzVLrxt42vtOivPEWlXMNrp9
25y1vHJExcKPU5xz+FfU/wADPjGf2iPjD8R9E8QeHNNh/wCFb64IdJuYg5lLk3ERkYk4ztQ8AAfN
9KXPc1UbH4t/D7Ul+EvxS8P6z4l8P/2guj3sN7daHqUJjM0WeVZGHGVyRkYyBX6l/t9/Bn4fav8A
sgzeKfD/AIF0rStYMmnXdhNp+nRxXEfnTRBl/dgFso7AjnP5V8Vf8FC/ihdfFP8AaZ8UWl1ZWVpB
4ZMuiW8lrGVedVO7fKSeWycDHFfql8QPGfhX4d/su6R4k8XQzTaRpmmadcokMZdzOqxmDAH/AE0C
deKz1U7idpRufgVq2k3mhzva6hYTWV6y7zbXKGNsHvg847VjyQtgLI2EHIAHNfsp+394C8M/Gv8A
Yttfilqempa+JtN0uz1Sxu7clGjW4aHzYj/eQhuhzjAIr8c7gKZGUEsRzuPU1qtTN3vYq5EgEQGV
BySRz+FXbSxlv7mO2tleWaRtiJGhZmOM4AHJ6VViJxucZOOD6Cvtj/gltrHw20349bfGcC/8JJcJ
FH4Yu5w3lR3BWUSoSDtDMpULu6445xUyajuWonyRc+CNet9scmiamrtgD/QpcE9v4ay7ezllnjhj
/fTSMI0ReTuPGAOua/dn41fH74i/BT4ixNqHw2i1/wCFwAuLzxHpCySTWNv0cyR8gsvBOOCv6fmh
+1J8W/hW/wC0v4f+JfwWWV5luItX1KC8s5IbSW9jmDDCNggMFO7HGenOaSdy0nex79/wTa/Yx8H/
ABA8AeIPHHjzRpNVuJpbnRoNI1OACFE2xlpwCN28k7QQRjnvX50eJ9Fbw/rl5p7W0tv5EzxrDMpV
yqsVB55Ocdf1r95v2Of2jL/9pX4Q3Hi3UtDttBubfUJrFrW0laSNgiIwYEjPO/pivye/bN/ayvP2
nvEOkC78L6doC6FJdQwTWrs81yrOFw7EDAHl5x70KQ+V81j5qtNIu9S8xrW0uJkU/N5UTNtznHQd
eD+VE2mXFrGvmxtCzk7FmUozADqM/h+dfqJ/wSz+GfxJ8N+CvFviS00zSLbQtcmgNm2txybpxGr7
pYmQfcO4DnqRX0h+2z8DtE+Lf7L/AInuvFthax6/4c0261mxvtKJXyLiKJ2G1iMlGAwynr+ANCkS
00z8LLbSpZt7JC86oMtsQvtGeM46ZPH40lrp0tzve2tp7lQcF4oy6A/UdOvevsn/AIJdfEmbwd+0
jZeHV0yC9sPF1u9hO84+e2aFJJkdeDnO1lOfUelfpd4u+Ivwy/Zs8Y+F/BNpptnp2reONZMqWkMG
2Mea4WWdmxtUbiAFz1bjAo5tbCsz8BYrMSyNzl8kALz3/nX6J/8ABM/9jLwf8WPDmv8Ajnx1YPqq
QXE2j22k3Me2FlaGNmnPfd85UY6YzWb/AMFJfh14Y/Z9/aL8E/EDwnZ21vqOqyNrN5pLkfZZbq2m
hYNsA+XzATuHQlSepNfe37GX7SDftQfCu88VyeHoPDkltqcli9pBL5qkrHG+7O0c4cD8Klz1Ktof
g1450JPDXi3WNK8uWOGwu54Ikmzu2LI6qeeowo571hLGXXI/h+bLcAV9X/twftXt+0rr1nZSeDNP
8Nt4cu7qBb6KUyXF2pbbhjtXCgpnHPNZ37HH7H+q/tSahq02ma9pmlx6HLbtdWt6HLTJIWPy7c4H
yMK0ukSlc+aTbiF2RkXeOxPJ98V7h+yj+yzrv7T3xMi0PTpzp+k2YW41XUkK77WEkgFVP3mJXAAr
9b/20P2RI/2lfhnpGhaE2laFrOlX8d1Df3Fvg+SIpEaIMoyAdwPp8tfNH/BJrwreeCfip8avD2px
xR6no32bT7jyJA8e+Oe5Rip7glQfyqW7lJHoHxB039j79nzxl4U+H/ifwTod/rt5HDbT6itlDKLY
5VBLeNvBj3E5Jwe56V8tft+/sLH4L6q/xF8AQG7+HWpMpls7ceYulSPtC7TkloZC3yn+EkDoQa+s
/G37Fnwy+OGpfGTxdrEl9H4pi1a7j+1213sFuY7eNo8pjBHQ89c12f7Q/iR/Av7A+nayttFeSadp
2gzCC4XKPtnteGA7etJOwWR+GywRl2Xeu9Dgqp5X6jtSfZS+Wj+fBxu5xn61+vnjD4U/Df8A4Kd/
APSfFng02nhHx3pn7pysQU2czBTLbXGEBdCOVYD0I7itI+Evhd/wTX/ZpnfxFZWfijxbq0e02s6e
b/at6FP7uPKfu4hu5JAAHvVcwj8bYoflLMylc7WceuM0SIJEbaTlAQB3Y+v61+nP/BL3wL4yXTfH
3imz8CaGPD2uXEUlnHq4eKNQGlJW3bY+5BuCnpjatfSn7Zf7Nnhv4t/s06/f+I9F03QPE3huwu9X
sr3RQMQyxxu23dsUsjhQGUj9QDRcLH4o+A/AusfEbxbpXhnw/atfaxqkv2a1ty6rvkIJ6sQO3rX6
/fD7/gmn8NPBn7P+oaV4x0uPxH4pms57mXWpA0U9rI0WQsW1sDYRwe9fkD4Y8Q6h4O1nTtd0i/n0
7UrCeO7tLy3xvhdTuVhkYzx3HPIr9sv2Ovip4m+Lf7G974n8YarJq2rsupwyXrxojFIwwGQoAyMH
tUtu4WPwrnSRbdIg55RSQ3HpzURGBuzuC+gz/nmtG4kICqgULsXocDkfpzXuv7I/7JXiD9qTx3Dp
trHJZeGLRhJqurupWNEV498UbbSDKUbgfielUS1c8p0z4UeLtc+HeteOLDRri48M6RcR22o6goyk
DvgLu74+ZeenIr7f/YB/YKt/iLYw/E34mW8cfgpY3ew02cgJqEbRkGZ3DAoinp6kdq9W/bw+Mvw+
/Z0+A0/7PXgnTILvU9Rsks71FY5sYQEYTTMF/eSOAMDIPfoK9U8CWcV3/wAEudJtZ3eCC48IhJHQ
7SquTk57dalyKSPOfDPwO/ZC/anXxv4E+HOiQ+HfFWlq8UOq2+9H8xSyCa3JkImQMnzDuG7bs1+a
fxr+Cfib4E/EPU/B3imz+yajYvujmCnyryAswSeInqrbT7ggg9K/WbwJ+w74G/Z0/aE+FXibwff6
mTO9/ZzWt7OssbKbGVgy/KCPu9Mkc1B+0T8fPBXwa/bU0ay8faFZah4c1zwrbWk+p3kKzLp5+1XB
WRlKnKZ4OOmQaSkUtD8XIlMyk/KU6hhzwP8AJprRsrq2QysMg+1frH8R/wDgk/ovjL47WHiLwrqk
Nh8MdXk+26jp0EoWS3LZci1O0jy2ypA/hyQOMVwn/BSTxh8K9P0jQfgr4E8HafqvjDTp4FfUdPt1
8+wK/KLfKpuklkHUA8devR84rH5tCNUwzSoFboxYA0j5LMAeEO35Dnn8K/d39nb4W3mmfCr4fab4
k+DHhZbmLSrSC9v2eHzgBGoLvE0G7dxkqW65Ga+Ev+CqH7NHhf4M/ELQfF3haJdOtvFpuPtejxKE
t4JoghMkQH3Q+/lcY3c96Lglqdp+yz+y98A9H/ZVk+LHxevLbxJDdx/bHhhuZI308Kxj8hUikDM5
btXo/hD9k39l39rv4Y+LR8JNI1Dw9q9ntij1W4a6Vre5Kloy0crkOvGG46HrX52/s2eGvh14w+Ku
maV8TvENx4Z8JXUc3malCwjEcqpmMFmVgoJGM4xkD1r9if2Qfhr8MPg/4B8Yt8FvEUvxBS4mFzcR
yX8TbrhYiI4g6oqpuGBkg+tTdltJH4X+MPDF54K8Wa54f1EI19pGoz6fO8PKmSKRo2x6glTj2rET
O/kAbuCSa7T4u6xfa58TfFuoXtmbC/udZvZbi0Zgxt5GnctGSOu0kjPfGa4dzh+Qd3XnoKq9zF2Q
6WQxZK9D1FKxJ4x+YpjyZAAHy4yAOlC4JTqzHge1MV7h5gQkevNOZjuONuD696aVLDjqDwfak2s8
gO5SOm3pSuUSlmKBcgJ1571ERuJDDHQc9aSQKMkkgZ4HpQzZG7qR60XAc6ZU/wB4c8UxsrjIHI4w
M01pMZLZxjnAp7KNuSxCnAHrUO4DZg0gz0foQeMClBPyg/LmlY4XKgsvtSGUMCPu0hjfvZBBxTXB
B7EdsUsi5ZcZFNdQDwce5qr6CtqKAxUDG449elGSxG4leO1PcEnORgfrRnkEDB9TSuKyAJuILMSu
aUABGEeS2TuJNNZWJwRubtj6U5WCqU2fN154pajWgjMok3diMYNNcEAqRx2NOdcfMBk/ToaRsqME
EjuTQhO4mTwAQwFLIH3EEYIGCfek8vbnjAPHWm7Tg5Vm9xTsGpIpO0kq3BxxSkgsX3gjlcH8aIwG
Y7mx7HnJqMR7n2nvnAHrSHqS7jsATHXpTJEVW2sRluCO1OjXawBJZlB4PAFOYKzn1HIFOw0hkbMo
yDhRyBnGaUOMk7cDoOf50joSwwcgnaaGHltycjGfpRYEiQxoy9Qo6A460km0RnLA889qRWVwrHD4
HTtUTsGIx064NUUS7N0xG3Ax0/CkmDh3XKnnnByBijzirZIPQU8vmVjtA3Zx6ipuSMSMtuZQCwPP
FNckEEkf7v8AdNOyY3KbjyMgDoeKSSMsxIBG7tjikAjLuyxIYZ+6aQAklsYz0wKk8z5CvHyjsO9I
GKqSowenAoAWRSoLMdo+79f88USHbhgowB19aaFJBGc7uTkZNPRd7soXO3rmmOw1mIBzyTzS7jIU
yd2B1PSlLZzubDAgKuOopjMjJxlSvpSQWGjIEhVcn1qxGA+VZMI3IyetRo+8e5Yk8fjUroSDhjnt
s60a3GMKEkAMCq5OM8e1OMTR7sbThdxBFI7QiNdy4OMEDvSSRkxkx5K4yT6UxDpzHl3MeMLyc8g/
5xTNwETMoww4B9qRlWCT5zktzj0pfNUgoxYoeeT0qbAIpDoAq/Mc4zQ20MQWB2kg4BqQqkUYK8HP
BNJE6yDDMRgdQMj8apWATbIqnGPlPX1+tBgxkqeW+bAFCLJKBtwQCM7z39acXO1i3VQRhv8A61Ax
rLufO87vTHFPdCNqlgM8hu1RrGrzb1D4xnA7UjkxlR8wHU5PX/ClqSyRTsjO5Q36nH402ZZGXKZA
I3DJ4pFUsFYbgGPOenAqWImKLeRwT8rZ+YU7jQwKcYByx6c9KdbRr50ivjPSmsrbmZVzhux5pS0Y
3EoGYc/j7VOgkySbETMikleB7fr3ppZdgAwec7SMDpTSTKxlL8E5wBz9aDKqjhWwTjkc+9C0KuLF
I3lNzj5uRzgCkVmZCUjGOQQT19xT1VWcrGflA4HTn6044jR2HPlk8DOT+NVcBsh80MFUL/C3sf8A
IpCB5gG7HIMmfpmmrOrBQwKsWO7aeKf5ri22kYLdOO3rzSbAcoWHzQI8tkMuDx7n/PpTREdvlhh8
2d31pJCr7AEMYI5JbgmkPzSDyiTt684z6ULUAmBWQYOeOBngcVDORLbbmUdeGHGeOlTMwb5uTheg
9KqTAuSeg5wPSqAgi3qSduOKtxkmNsMcEev9KqJ8gPzHnqAe1WRIGjC4xgnrRcVyIMDGq/xbjnmn
N8r/AHQeeCx4qONuThmVc9alnfcIwrDg/eJpNjGGP5ZGOFI6DOc/hSRjeTg7T3p0wVpjuByueg70
wKxztJ20IDRuEkhxMrgBgcgHBz1x+OaryJt5ZSjdgc8VdmxCdr8IxJw3IHHP9KpnDSLvOBuxkjtV
CRFLiQgKB5YGNyj07mlm2GNSwOM59AfpT2QOz+VyDkHPGKhkQKRnkL+GaQxLpXyASvljoc9q6LU9
XbTdAttOt2VxKvmSAHOM+v5Vz0p3SFiSFP8AeGcYqKWQSSHAwDxnPfFDVxGhpN1suVbaFIIbGBj1
5r9nfg78XPCv/BRP9nm9+HviRxoXjXTYYpJYIiVV3RMR3MZPVN2QVzkH8DX4sWYLT7S2OMYHrXVa
bql9ol19tsL28tJlXy1ktZmjKhuqllIyPxotqXo1ys95vPhEvg349weAPFt1DAiarDY3VxbShx5T
uoLKx6EgnBxnOOK/TL9qz4lW/wCxv8BtB8O/D+1gsZtQ/wCJfb3wcCWBUQEz9DvfA6nua/HW+v5t
Xv2vbq9kvruYlnluWd2PHctznjvV7U/Emr61bpFeavd6hFHkiO8uGlI9huPy4z261fLfch6H6r/s
QfHG5/a5+F/in4e/E1LbX7zTohHJeu22a7ikLDcy4G1lyAGHXjvmtb9hz9nd/hD8V/ihqVjfQ3eh
LOdGgXcDPuifdlxzxg+tfktpPiPUtJZpdOv7rSrvBj86zkaGRkzkjcpBxkDjmrtt4/1+zW8aw1vV
dOlnlM8wtr+WJZJD1ZwrDc3PU0nC+w7q+h+uPi/9h5PiR+1JqHj/AMTTW8/hd3guobOFv3rzRJGA
HUrjaSpJ7nAriY9F0LwR/wAFPdHttJs4dKt7rRpA6QgJG0skDkgDOMkoD061+aU3xe8ZWwWJvGvi
OY/c8w6tOTjA4xu7elZl/wCKNXvdZTWxq13LqbSB1vZbp2mUgjBDEkrjt+lL2b6iUtT7o/4KvNHN
8V9AgkkkVRoqsdvygDzZc8jkk/L64/Gvpz9rnUYLv9hh7t51aOex0phLHzuJkhOVx+J49K/HbXfE
Op+Jbx7zVtQutWuDDsNxdztPIUzhQWYkheDx71ePjjxBe6LFo9zrWpzaTAqtDYXV9JJDHs4XZGWK
jHYihQQlFvQ7T4GfBnUvj78WNM8Kafc29qLpmlnMsnJiVgZNo4zlc8f/AK6/SD9pn4u+Dv2RP2fY
PhLoxfWNTm02TTobJ3/eQW8iyZlcjAHUgDI7elfklpep3ejavDfaXe3OmXkLN5d7aSlJEIG07XXk
ZyehpNd8U6l4kvDd6xqt7qt5t8oXF9cNPKwCkffYk/d4xT5QlpoOvb/7U5lbhWLZI7nPXvnPNZrT
IJ3EG0Y+bnr9MelOdmiiIKEdlJPOD3yKrRxkMJFzuJwD0w2OlNIjoOmkXy0j8zzXJ5CMfvdhx/kV
O4MhcFAH27id3zYHaqUluFlG18Jt3HGQ2c9cf1qaHh5drKwAPCnI/Cq0MdRggUHa8Yc52KSchVJ/
+tUsV0RMOTG4+UlTkdxTIpTHNuDoXz9xvUHp/wDrqObdcF5wioFYgoM4X2pFRdmWsskeSvmKpIxk
hl9SD/npUSTbJASFBGZPvZYAgcE/WlnYKpLuQzEEBecYHb9aj8pXdvncFV+TGB5mfWkkW3cab5pC
WERByWy3OBVy3gaKGM71B5GWwPoM/lWdPCsrCPCRqjAKWOc8dOP505GmcKCCQo6N3x709CFdO57j
+yDKtv8AtPfDB1G1V163Z3YkgZO30wPvCv1H/b3/AGc/Fv7QfhTw7b+E/sb3WlTzzyRXkvlq25AA
QcHJ4PHqRX4pw6i2k3cdzBK0M0TrNHNC5BicEFWU9iDzkdxXow/aW+LMsS5+KPjBk2FTGdXnCsPQ
DVAG3HY8mt+18lPDmo4ZBIxReevX/wCtXPgMzKWJAOcNWqLpP7GltVjG95VJY9SAPajYVihMxiJ5
2luQD0HtTQ0ZVzgk+nvQvy5OSy5xSqisrMuFA6ikyhrAQ7Rg5xikIaUkc8cnA709WDvGpcqueXHN
Mn/dytsGecZJ6+9SgOtvoJrq/ESxBpAMrtbOOOfrXXaXZ+TY28bRqTGCZA4+8w55z37/AIVh+GEi
eGZx5c1w7YYsB8igc/hXQ3urW6RK8kbQlRgDIwfQ57V6EVy7nK5Juw3Ubr7IweZHIw2Q75IPP415
vqsguLq52D5d25cjBOe9aXiLxCbuZRbNsXBDhGyCSP8A61YLvukSUseDt24/pXPKV2bRSWxFI6xs
rMDt557kUySeSeHazERqcKDSThXY4ztH8J7fSmvbOhUvld3IPYis2a2HWy7pPkzkHqP6GvY/gJ8f
PF/7PXjmHxT4S1EW93EvlT2U5Z4LpCuNkqZG4DOQeCD3rxyJj5o2HK55yOK6XQdHvtfvzaWFlPe3
rIzxw28ZkcqBklVHXpz9aaLSaPZPjp+0N4h/aH8ZL4l1+KxsNSECW+7TLby1AXJ3ksxJOWHGewr0
n4i/tyfEP4sfBy1+HGtQ6a1japAHvLe3ZZbgQk7CSWwPuqTwOh9a+br3Q73TLmS1u4XtriJcSWlw
hjkjzzllI47Vs3/hPWdG0qK+vtPubeCUqizTQsqNkZ4J4x3qHa5Hodt8Dvj/AOJ/2ffiBB4w8N/Z
3uRDJby292hkhmjbAYEZBBJQdCOle16X/wAFMPid4f8AiHr3ipNP0S7m1e1t4ZbV7R/Ki8sEKyYk
DdznJ9PSvlLS/DWoeKNSFlplrNf3BjMzQwqZGKKAWOBnpWpB4C8QzajcWNvo+o3MsR2SRRQMTGf7
p9Cew44rRJD57n2ef+CuHxOM0Ljwn4XkRjhwsc5Kccnd5g/LHavGvi5+2z8QPjN458KeK9RsNJ0b
VvC0gnsPsEcmzzBIsgDhmJYZQZAI+8a8eh+FPjJjNND4fvjEMfM1rJtODyvA7Z/z1rN1Lwzqum38
Vjc2NzFfyqGS1e3dZG3AnKjvwQcD1p2RF7antP7SH7aPjT9pSw0q08Safo+mxaYXaIackisWYDLE
s55wvbGOa2PD/wC318QvD37P8nwnTTtMvdIksJNNi1NlcXMMD5yDzgkKSAcdcE18+a54Z1XRWBvd
MurbOQi3Vu8JOMdAwB/H2qN/B2sHRl1uXT7tdPlA2XbREwsAccN0H4+tLYz5r7Gl4F+JWr/DXx9o
vjHQ47c6jpdyLiJZ4w0cmDyrAYOCM9CK9h/aR/bk8c/tHWdrpurxQaNplqSxsNKkkWGfGTvfceSM
DHpj3r5/ttOk1K9trS0t5prm5PlJDFGWkfPoBn64/wAKl1LQNQ0mTyp7Ke1lXD+RcxMjDOcEqRn+
lGjLs+pSMzPJtRiMEYYYOM9vpUU8LRXDRs2MA8AY4/Gn3EkySBgNpUctjg4qBpdsyLKrIcYXIPXO
M1Li1sJtCSAIAiqWJHOO1IZ2e3LIpK4ywJ4U5xx+lLArRO7IXI6MpPKgf/XpixBNhUhVbJPbjntV
JdzNasc3ywpKEy+AMAcZI6c9PrUbqdiZQrIowWBzkfQ095PLfETbg0gYsw7Y/nxSurGZ0BaXJPIH
Tvz/AIVaRoJIJmKrGruP4jnaD9KYBIJWBAIZiq4wT/ninM7FncuXKghSpwePT06VJHaszCTO8t8y
7OnuDn8OlJkPcijcL5o27mztyeRx7duaNu0kgkEnOCCAPX6Dimx/IokRzk5zgcn8KkeQ3GTECARj
Dcj680JkqSGmNnViNxKEt+8PT3FDoZEG35jn7xP9P89abPFvZkDkSDj5jjt3pwBiVcgHPDFR0wKR
Vxh+Z8qFARcAgcj1zUkm8bnOFIBwR9KZJaSEkoDGXGMnv6/pUpeUxqyvH8q4JOQMAn9eaCk7iMoe
IiRcTk5JVetRCBZEkeUsWVdyYHJ57/h3p8gZ8skrY6Ef3s9cUkwZH3KzADGQvJP5/SqDQZl0R1CD
zQflz0Htn8DTGZ2jzvVQOCynP1qaVpCRCAV29dw5P1Pc01ERx5a5xySx6ZxSuS4roMLSDDqTjOOO
Afr+VPwWIJKZxgYbHtUs9zMuFwQOBkDrTI0XJZRls5A6A1QcpAxCScgrxtLkA4FSeWfNYEMxY5zk
DHHGaRZRG5JYAN94c4OO30pzkHC+UoHOD0OKmxI0gxorNlGBwozkdD/jTpXYeWuBjHRuMD/IpgjY
s6szEsxxjnFEwV5IzIoKgZAbPHBxgUWG9xzk4JJEmenTP1/So2jkRtrfKSML6f8A66sNjIcYJA5H
qKSaPzZC/wDdP3SMjHahMq1yAQxjIkUgZ64zzUs2cKoBZBwNw6U6ZSGVlG75scfWiYrExLEnPHAB
xTuLlIN0jSnGWAHpSElZUUrkYySo5x7VYWHJXduBCgg46inOIkZxls5DLhvvcHqKaYuQqyPuk3qi
Y7Drj6VLbs0xLFyETpx1/wA4p7QqCHQb8jGB/CfX2pbmOe3AJJERAIYgj14/nQWkkRKR90JIWOTk
DAx17UixhIRLjdkntgj6mnSFlVnXB5wD/DQ8Xm5f+Edge/NAaEeZGDNGvyjJbHYetNnWIhCWIDfM
do71YQvGkoU4j4BU9+/NJ5ZWLJJIUdBzx/k0DGALOGZnaMg56dqik2+XJj5QpwMjrT3gEh5JVR1x
1xTZWjZl+9kAgY79eaaRDYmwF3LLxjkDtSSuEChScYHJ5JpzhXDCLPzdcjmnRRK0AX/VuOoPQ0DV
xsirLHmNtoByWbuemKhkIfcruEH+zU7RsYSDyB0UD3qFoSijADORnB7dqVgEWLJY4LD+8vX6U14s
Pg5Az0/CpmaSIEjC5GBx39qU27t84BZicgH6f5/OnYFYiWLauVYvzh1pZkYJkLtQcAk5J4qeWNo4
yGG3pnjknFRorMdgXBHYDORjrSKKbhnlRZMqgOc4ofcUJ3fMOBtUfnVlrdRuGQpz0yTimpCCQEXc
q8detIykiuHMbAtkYIJJHNPURgFixJPIwOpqR0ZGODzn7p6UkK/IflGFbn396CoohP7lvmTcDyB/
WnDaQWXIDE5HpUqWrDcxbGRgA9AacsDorAsMg4plW7FRfLjwxBPfHWnTKSxaNyfl3EN2qVSdoBjQ
gHoR1pGcuJZGBRAuMCkVYgGQMuowo4PciniNZEZQuOe44x6U5RyAWCduadghdp3HPOQKCepXe2eV
juXBzgfSh4Ska5BXj8c1N5TtlXO1V5JPWiOI9GJUnkE0BZFVzk5wR/sk02RAmCODjIyMZqyI9pkG
enOQOaSaJmchkBxwCepqbEtECp5m7eQOcghuvtSsB9w5yx7noKlMezA+6Mcd6MKzcgMSMc1S0Elc
hLEfLj7vGaWWIYwSTgc46VL5YVMfxf3cVG0YLEZI56etDL5SI46YwQOwp2wlQ2Qo7D1qd7chicZU
H1pXtWU5ALd8CkJKxCUCFgRgHrULAupBIODVlU+chR9d3NIYgWYZIweuP0plWInRo+F+6RngU5Uy
OTt9VNPYjccDHoBzSSDAOQwHPO3iiwrDSdvUA5/SmsuWY/d2+tSBFIILAZ7ntTWieRxgkE9gKBNN
jXy3yqBnuTTS29gDj+VTGI4woJAPB9aGid14Ta2ccUuouUikTyzlTx0INKsjBmUnJPWpHBzhmKHB
HHP+c01gCAWA6YyKoVhgVmyMcYwc9qmkUKFBHBz70iwlOxwOMGneQy9Rlh1B47UFJEYARSoweaQr
uCnG1OhxUhtztLsM56elBi/hOQfQ0ihJLcIRjp65qLJJBHQdPep3hllcADbgY4NPaFAnAIIPJ9aT
HbqVgHH7wHA6fQ06UAE5G4kZznrTjGEYnBO371SCLjOcDB4NCQELIW24XB9/SkZDkt09hUr4ZSGy
Ae+KbtJXaEYn1xTZGwxrdsE/lzUboyDDYXjrirMkHOFfknqOaR4nUEc4x169RUjsiCIMv3uh4pGi
PGAORmp5Y2TA59cUgOIgGBBBBwB2osFkQeWrx7yHJzj2p7RqrZ3D6elSMpMpw3B+bFIQN2SCzMPw
NPYaiRkYJAOQB6Um0lUAIzzx3qSWHaMjg98UpibHBXg9AOfzo3GVyN2FycsacdqngHd05qTy3MgO
BtxnNKYi5boMnuaCSJo0PBbAPHWhoCh68dBmpUG88gevtTzFvVOz4xxSKGfYm2E4wSPmPtUO0ryA
Sc96sS3QCBVAU4x9arSNsbIYkZzQS9xjrhQSDzTHHOf4SKlDBgD79DSEqFIUHrikIeFEWHYhjjgC
mSL94qfw9KaoBfGaUKd5G0n6UWJuIFJdu7DvQzYHI5PtjNOA5xzxngU0cvgmgBDHt2sAQM96c2QC
QQTnqDT92AfnGcgYxT2Kq7Ajt2+lSykM5OQflzTjsRiQfm6bj2pjODwWJzUfUE9aEguPZioBz8x5
Bpqgs3GSQcEjtTcFRhh7jFSMFDg5ATHzYoZQ6WMRvgHPclORTXjD5bPTrmggEkIevAoK7nOTn2qU
AkwG0HH59TTGLc46HsKlXaD1yc45FKEEMhYHH1NWmKxG28HnOMdDTm3YXH3ueRTi26X5sEHjNACm
Qn7uOh9agYm8htpJb2NKMr90bO9EuA2VP496YzNKcnJwODQBIVcuTnOecYxSlS44OcVHkfdDHJOc
kdKcU+U8knnpTuMQRgNkkjjrUjlVUjqf7uOlMIQow5APUUK2JMjn2xT0Hcc7Alkxk46kYoYySZ2D
Ax29KJ2DsXz7YxTA5RiACCePwobFcYhy2T9OalK7BjqMk57U1iuT82AewFOZ9oJGGUg496gQhlUY
BUAd+OtIQNrA8qemKYQEC/mRUjN+9DFcD0xTAaGGSoXHP3qeJAd3pnApryFpMsAox2pVDSqVVS2B
uAA9qBhK6rg7dueg7U5pN5HylSR2OKcQrxbSjMw79sVG/wB7nKDsDT2BpjW+bJBLEDrin/wANyy/
lRzExTpnmljIiI3DK4pANYFlABC56ikD7VIHIHYjvT5PnJCggHuOO1D7QwAJjIzlwe3rTGNklJJL
EcjhQKI/mB4IQdQD1oIVmXDfKT1HpTWLAsF5X6Uhj7mQ+Yvy7RjsOvNE8y5yfmz7YpyqH+ZmC5Of
oKj2JsKlsnOc4609RC7sHcuFBwSx7etEnzMV3AZOc4/SjKshPG4ZB+lK+9+du1QeMUIBREjwsNwz
kDFMYvnO7bs7+tOUKpIHJJzSrKYsrgKSeaYxI4SwYnqwyDnijdtLErtGO/GadJ+6K5UoTzzxkUi4
DfN82efm9KQAn7tsou4Nk54pr4b5iSDg4U9qVjskbbwnfH40smJGOOmOpAoFoJFudegCnrUk2xXG
A205OKhaI8hQTjnHtToV81cKoGB1PHNSkA9yEKkbstkrzmmSo8cp6nnNO8oDAG4eo7+9ODl13Sbs
4+UD07ZqhjYSkTgMpII52nn6U9YzMQMfL/ebqPapGSFELrkyKuSp79qjSZuVG3cOQx71DQhJAIpG
ZSF5479KRgVlDKxyecg8Gm4MisWO9h0x3/CnmLPKkADGcr+NUkhg+8kfP8o6nPPvSTybACpK55z1
B/zilJj4Tdtwc5IpoRCxeM5HJwSeaVwFkZpAW2BATzjGOn+IzS7PKTIbOM/eHQYoYKsnz7mOMAqP
Wm+UpbaxxuzkZ4Bz1o2FsErgR7zu2g8FT3qQFHgUk7XDH5ic5HamyYV/mIxgYz1/Ck2YyQCB/dqW
MmcoJj0kJ6L2qKeTpgbCWPIpXkXzU2K2QM4x/Kh3Em4suCxJx1xQmK40K0kTFfuqQ2SadIquHLj5
mIyyjGDTFbzIihIVM8jBwfanvIrxuULYU44PSmAzyRG5AbPYc9fSnywAbmAIYAYK8ihVhK8qS7Ht
0x/9fmp0lLnGNqKfugfnRqUrDFJgwoYMjDG4+vpULRbiSq4YEcZxk1JdGPz3c565C9MDt+lK6bmB
yQo+bKr71W49CRlZFiDYUdCwHB9SKjkkDKDGAAMBgxzn0pplMgVUIbByDUrxJ5ZDKCWySegBpWsL
chZNzM6KcDkY/OkZ2mgH94nAyafuKqNpYkjBB/pT9iOAqjB2nIboDVK4ishCqW2qVVsgelMm8qVF
AzvDcn1qZYwm+PfvXP1FVLyTLqSOBxTBEpLxxgRrhh361X3NIQGP3jkHNKrhCuGIyAMjmkdmBK4w
ByKTAlYtGvllNq5yMjrSi6Zo2jRVUEAO3TIpDOyHnDEDHzciovld9rHGeuOlLQdx+94wAChzxjAz
SQxmRmBXc3+0wUU6RRGm5cMM8047lHCnd6t3FF0CVzf0vVn06JzCQQ43N8mSAD3P0qvJqFxerIWm
KxsxJB4GPSqVlL5btlsL/tDjNPlliAYluQM7x0PpjHfrWzuZJJDQ0cQ8tgVAPBbpj6d6iLCQHcxU
ZORjqKSZpZ3DPlywzyR6YHOangCvGElQDadxJPU9v51Fhle7URyx5BHHAXuK6XStOHiLQ7iLZHH9
kyYiPlY7hnn16H865i7O6UjnavHH9Pau3+HYa6tdZWKRU/cHAPXAUngngGpaZopWOKt4Y2uAhkKK
cAvt4HPWv2B/4J6/sz+Evgx8IU+OPiG4GoT3WnSX8M3ktutbYKd4287shQenavx9glAuwH/dxh+p
+8oz3r9z/wBnpPN/4Jl6WoBj2eE7wHcMY2iXPrnpQr7Fyk+XQ/N39r/47Wv7QPxX1TXrHSLWxsYg
LOxe2hw88aklJJc9Tgj6dq/RzwxeeFv24P2Nv7C8MLa2ev6fZwWk1ndRL5lldxKuVyOgcAgOOPm/
Cvx21VRaTRPEd9uDkFuCM8gcdsGvqD/gndoXxDvvj9pWq+CZWt7K3Ctq5kfEMtkzAOhB4ZuOMcjG
eKcomSXQ+pP+Cf8A+x74q+FvxAv/AB54zt10d7S3n09NPuY+qv5bGUMQBjMfXpx2rsPAP7S2h+Jv
28bzw34StIbvRdXsTZX1ybcp/pVskj+Yh6HgBcnqK9D/AOCgS/EE/Aq9bwLGZLclv7ZWIAyfZccs
o6kA8nbzj8a/Ov8A4J7SNJ+134GuZJXnci7PmBs4Bt5FOT3Gce/PtWbiOLu2ux+lP7QX7ROt/BTx
/wCHrCz8HDX/AA1NALrWby3VmntIC5QuqrwQoGeevPTrXh37VfiTwf4w+OP7OOt6GbK+/tTUI5Vu
I9imSAz25AYdem4YPqa+lfjN8YPA3w91m70jxXPDpNxqWhzyR6hcFUSSNdwMOT1bqQOa/Hb4KJJD
+0f8PobkSrbSa7pzxK24Ha1xGAeeACQD+dCTJT97VH7BftN/szaB+0R4En02eGKy121UvpupKuDE
/UK2OqEgAj8q5bwb8Jv+ER/YnufB2vaTA97p2h3sE8MsKuGkTzNrgEYPRSD9K3/2pf2i1/ZssfCm
v31m95oF5qDWmopDjzVQxswZM9xtJx3xiux8beItO8cfArxHrGhX0GoadqGhXUtrdQOGjkUwvggj
jrTWrsTJpJn4SeFfFep/Cz4j6N4hsfIlvtDulkVJI8oxXlkcEdDz788V+pHiXwH4P/4KKfs/aZ40
0azXQvGNkkkIIXb5V0qjzLd22/OmcFWHqPcV+TfiCRpbyQqpBeTLSyncGOepx9K/WH/glBP5/wCz
XqsYTyxHr9yAMYyDDC2fxJJrTZlXUos/JXxloC+GtcvNMuI2W5t5GjkQtj5lJXqpI4I6g1hzMJYx
8xXAwcHcw6nGf89K9E+OVyZfid4okmiUZ1K6KlgoKAzucfoO3btXnXnqZBIQSp4IxjJx6CqMFrqA
gZQIy4OQWL46HPQn1ru/BnwK8cfEPw3eaz4b8L6jrWnWjlbia0gLhPl3EdsgcfnXDyNEV25KK7bT
sOdgPcZ9q/Yz/gm74H8L+EvhhYah4W8dHV49ftUvdQ0C5MbS2tyAFYgA7kwBgjGDwaTdjeNj8r/B
PwP8cfEEai/h/wAL6jqi2AC3JtoGcxPgthgoJHA6fSsLSvAWs+IfE8fh3TNJludVmkMP2NkKytIM
5G3qCNp4OOlf0N2XhPQvBljqs2j2dloIuy91dTxRhEL4JMjjpx1r8+9O+Afw8tv2u9R1VvjBaQ62
qJr1nd2rW6xi5MjNJG4DbW4KnbnlXNHN2LT12PgPxb8B/G3gzX9L0HV/Dt9p+t6kVW2sJ7dhJLuO
BtGDu54+Xp3qL4i/BDxd8KDDH4n0C+0e4lQyRC5iKCRQduVyME5x0Pev6CrrwroviOTSr/UrKz1a
8ssS2t7JCrFGIB3oecZIB4NfKf8AwUR+HWheP/AT3GteObLSv7GKXUOjP5XmyhjtbHO8kg8cYyKV
2yGk2flVZfAP4g6h4GbxlB4Xv28MJGZG1IQnycZ2ls+g6mn+Hf2dPH/ivw3e+JdP8Jahd6Jaq0rX
1tEzxsq8k5Axx371+1n7Lnw+8N+CvhDbeF9B8WweNvDXlZiRnjm8pZMs8ZZSSVJPAYZHSu/1fwxo
3hbwFe6VpU1h4N0rY+Z44okgg3n5jtbC8k/maXM0Z+zSP59fAvwt8SfEvxC2j+HdJudY1fYXa1tE
3OUHVgDjOK2f+FGeN5fGx8Gw+GL8eJJAf+JZNbMk7ADOdjc7ffp15r9C/wBkj4D/AA/+H37QHiG+
0f4p2N14h0S9FrEInhKX8E0YJQLu5OeDt6Eewr7/AJfCeiT+JIPEEml2ja3BEYY9QMQ85IznKh8Z
xyfzo57l8qP54PHfwx8TfDLWl0zxPoOo6LqLYxb3kDqMHhSGICkdB8vSuh8S/s6eP/Cvg+HxPrHh
fULHw9OEEeoPFmJ9w+XBGevb14r9HP29/hJ4F8e+OPC2s+LPiRZads1CHR7jTBLDFLb20rZLEjJB
ViGJYdK+tvhl4G0O0+FGn+Ezq1t428PWtulnDNOsUqPAqgIjbcqxAA5NVzFJI/C6+/Zs+INt8PbT
xtJ4Y1KPwu0SzC/ePCKjfdPGflPrWb8OPgp41+L91qUPg/QbrW5bMhrlbZSfLXsSO/fp71+7nxo8
J6Fqnwk1DwteeILXwToN5bmwe4PlRosTIR5aF8BT6Y544r5d/wCCffwx8D+CdR1rUvDvxBg1LXWv
rnSr7To5IjHdxxvmORFwGwQNwYZHUZo5hW1PzQ8I/s+ePfHXi/VPDejeGb271uyR5bqyMJSWIKQC
GVgCDkgYOCc8da5vxT8Nde8LeK5PD2q6beWWso/lPZzIUkD7tu3aT6kfXIr+iu38MaPZ63daxBpl
pDq10gSe9SFRLKo6BmxkjgdfQV+ev7S3wS+Fvjr9p3w1f+KPivaFtbaa2vIEuYozaSwrmLJU4XJA
XDYyQPwnroNOzsfBHj79m/4g/CjT7G/8ZeHL/RLO9fZDNNHmOQ4yACDgHkcE56+lS+Nf2ZfiJ8Nv
DFpr3ibwveaZo12Y/Ku2TdHl0LLuYdM479zX706VoukeJvBthZ309r4v07y0xdXcUc0dwV6OQBtz
7gV5f+194O8MePPhHe+HPE/jSy8G6VOhlQXTRIs8kRDxj5yDhWC5CkHmqUrbg32Pxc+G37M3xF+M
WjahqfhLwxdavY2TlJ5YQNqttzgAnJP0Bqb4b/szfEX4szaqfCfhm9vhpLJHegx7WQsD8oB6kY5F
fqr/AME8fBvgnwn8OLa88H+Nl1uXWbdLnUtK3xkRTINpYKPmTv1zkEZ6CvqqPRtJ8Pw6hcWdpaaT
9o3z3NxBCke5sEmRyB8xHJyc0ubUD+dLTvht4g1rxpB4WtdFul8QT3n2BbF42jkMoYrgq33eRzXZ
eOf2ZfiD8NvEWj6J4l8JX+m6hq8oi0/em5ZiTjaGBKg8jIzX3RN8Gvg3c/tkvq9z8W4ptQu4P7ct
7q1uoEeHUElHyl1+XkYITGSFIr9Fxplhq9vYS3MdvqhhCSw3M0Sud2OJF4wCeuRTbYt0fz+fGH9n
H4gfAy10ybxh4em0OG+fbFO7B4pHGCF3A4DHrg9cH0qz4a/ZX+JvjP4bTePdE8KXGq+GoUdxc2xD
mRUzvIUctggjj0r9UP8Agor4Z8FeLfhNdReKvHUXhy80qP8AtTTtMlki/wBJlQMOEI3sWBKjHQjp
1r0H9jvwj4F8F/CSy074f+JJtf0GZFu/JnmWTyJHUFwAFBXJ6r0yD70uaxKWp+PPgT9lD4mfFTwb
e+KvDfha7vtKtXdC6cMxVcttUnJPQYHeuP8Ahz8G/FHxX8Z2fhPw5Y/atbuPMcQuRGdijLE5Iwcd
v8DX9B+vW+leHfCV+PtUXhnTI4meS7tlSJbcd3GQVB+or4D/AGdfhj8DPD37TviefRPiRNe67YzR
X2jXdvdIfNWUSefHwpWQAsQcAcNRzGi8z4Z1L9lT4kaL8V7P4b3nhe8g8UXgzbRnBhlXDESLJ90r
hT34IxxWF8ZPgN4y+B/iO20XxfoNxpF5dW/nws+145Vzjcsikg46EdR+Vf0MSWFrNeJcyW8T3UYw
krICyD0BxkV8L/8ABQ3SvhRrOqeHr/xn46urTWtGv4B/YMUqjdaSsnngKE3D5Bu3ZP8AKnzEs/PW
5/Y7+Kuj/C6H4iTeF2bwtLDHcRz28qyy+U4BEhQfMFA656ZqQfsZ/FWb4Qt8SI/Dc8vhtLY3xlEq
+b5I6v5Od/AyTx0Fftl8DNF8JaN8MdL0rwbqcus+FIIhHZPcP5irDtGFVioJTuM569asfGaDwvH8
MNXsvFWrv4b8MXEDW13dWziELE6lSu7adoIPWkpMbR+EfwP/AGbfHP7QOpX2leErKC6u7S3W6mjm
uFiG0nA27vr7V0Hhj9jP4peMPiVqXgG08OTWXiPTYnmuVvW8uFFGACJOVYHcuCOoNff/AOwJ4R+C
2heK/EMvgrxTe6n4pstSubHzlLgXlizBoi67SCp28MMcrX3n9mhS4MwjQTMNpcKNxH1qlNozcLu5
/Oj8TvhB4j+E3xC1Dwf4kszYa3b7QVkI2srY2yKc8oc53D0PTBr0z4p/sTfFb4LeF7TXdd8OG506
+cR+dp8i3HkkjKhwCSMjoce1fZv7U2kfs7ax+0t4VvPGfiqe/wBSuLmTTddsbhpD5EYR/JZiEHlq
shCjHBDZ5619/wDg+10+Lwtp1tYT3F9p0UKxwy3oYyMg4XO4AnjuRmk5DjqtT8NPi3+xJ8T/AIRf
DWHxtr2lxPoc5j8xraQM1uHx5bSIOVBJAPPBNZ3wK/Yz+IX7RXhvWtf8IxWFxb6TP9nljuLkRvI/
lhwqDvnIAzge9fsJ+2Inw5u/hDqNh8TtfvdA8O3iNGJrR5UDygZTOxSCQQCA3BI715x/wTutfhcv
wutLr4em/TUpIFj1xHWbymuU4y+4bA+DkAH7rdMU+Zjtqfmn8I/2G/ip8YpNfTSdDGlzaNcfZ7xN
UfyWD8/IgONxG096830j4J+Jdb+Ldr8PZbVbDxPNqZ0w215Js2zbiME/hnIzkdK/omvxHFa3Dv5k
aeWxdoQd4GDkjGTn0xX5vaBbfsz6j+2VfPaahq+r6pqMIuYpgLtpLfVo5tx2HHmByozyMAr+FHMS
tJJPY+TPHn7BfxW+HXj7w74XvtC+1Sa66RWuoWDedbK7MqsrOBldu7PzAcYrD/aL/ZA+IH7NLaXN
4ptrWWy1PetteWE/mxq6AEo5wCDtJPTGAeTX752wV4Iiu9lCDBlB3dO+ec18X/8ABSfUvhHF4CW3
+IV3qkXiCFVutDtoEuDFNIDh1XA8vlSQxJBAI6UKbLPz48JfsGfEvxt8Cv8AhZmgR2mraZIjyw6f
byE3kiIxVyEIwSCp+XOTjjmtDwV/wT1+K/jP4S3nj20sIEiijmlj0e8YxXsgjLblCMMA/LwCec1+
uH7MK/D1vhbZT/DG2u7XwpckzxJcJOibz98oJf8AaznbxkGu1+Jd3o+m+CNWu9fa9XRYoS922niU
yrGPvH918+Mdcds54oU2Jrqfgn+z1+z3r37TXjZ/DHh29sba+itmvH+3TbFKADjgHJyy9vXpivRt
F/4J4fFzUfjHd+AZdKTTZ7eB5zrdwXNg8YXKssgBzkkDHUHORX2J+w3c/s9Xnxi8Y2vw/wBN1Btc
t9Qe70e+8m4Rm09lUMAeMRrIXXDjoR61+hGF34/ipc7bHZn87Xx3+AHif9nr4j3PgvxOlsuoiCO8
t54JgYLiF9wVweo+ZWBB54r1z4p/8E5fir8Pfh/pviywtrTxVY3axyGHSHaSaFJFDIxQqOMkDIz1
5xX11+2L4r/Z7sf2hvB7eNNL1bUPE1nqHl6vZ3NtcPE9iySFJMN8rIsu0qEODluDX3h4Gg0ux8H6
ZHpFtcWGkRQhbaC83h44/wCEfOSQMYwCemBS5tQtc/E/40/8E9PiJ8FfhH/wsHV7rTrrSYxC9zbQ
SlZ7NJMAM4ZRkhmUEDpWV+zV+xB4r/ai8Ja/rPhnVtJtf7HnFs1reysGuJCm4YKghQeBz71+uP7Y
uq/DvSPg5qUnxOsdTvfC0sbQSf2fHLIkbtjYXCEAHdt2lxgNXm3/AATp1j4b+IPhNZz+CNAudM1a
1h+x61c+WY1kmUkjeQ2GYggg4yAcZFHMJbtHwD8I/wDgmf8AFT4ovrpvoIfB8ulyi3KavG6md/my
Y8ZDAEcEcHI5rxHwj8C9c8RfHe1+FdxNbaXr0uqPpc8s7boo5lJXBYdQSvHrkV/Q/eSxPaXfD3CK
jiSO3Y+YeOQMHIbsOhr84fAXj39nW+/bW1fS9M8HatLrGrRJFGs9tKJYtWilcy7Nz7kYqNxbI5U+
vNc7tYdnc+bvGf8AwTL+Lvhb4neHfC0VrFrGm6u6q3iCwRntLYZO/wA3OCmAB169ulcD+1b+xz4t
/ZT1DQo9dvrLV9O1hHMF9ZHascqYMkbBsHgEEHAzz6V++PnwwvDG7iN5BiNJGAY8c9Tya+G/+Ckv
jD4U+G9M0mLxt4U1XV/Ei3MN1pcyQN9nkjR089N5bbgpkMuMnI4NLmE7o+JNK/4J0/ELxX8AtO+K
Hhe+sPEcV7bC8TRINxuWj3FW2ZG1mGOmeece969/4JofErTPgHdfEa5ntrW8t7CS/ufDt4rRXUUS
bmYDgru2qGwcZ6V+tfwC1nwZq3wp0vW/BWjv4d8JXsIu7WKdBDGY2XO8LuIUevTkHirPxu8S+FNC
+Fera14p0y48QeFbeIzXkWnw/aP3IB3OVBG5AM56/ShS1E00fiF+yh+yXq37WWu61pOkeILDQZ9J
to7uT7bEz+crsV+XHoQPXrXqPgP/AIJd/Ffxf8QfEPhnW1i8MWuloTFrc8TSWt78wC+URjOQcnuO
nGDX11/wTm8afCHxXc+INN8DeCr3TdW0nULqWK/mhAdbCWRjD5jb8njK7cHGBmvuyO/tJr2W1juI
pLqEAyQI4LoD0JUHI/GpU9SrXP53/iJ+z/rPwx+Pc3wx1e+soNUgv4LMXkbl4CkzL5Uh7jKuCV6j
kV9AfFf/AIJa/FTwBqmhQ6OsHji11KXyZ7rTVZPsjEjBkQ5wuOd2T0I9K+gPij8UPgVpX7bvh6wv
vhlqNzrhEuj6w2pWa4aeUp9ldRI5Dc/x5HDrzX6LQ39hpGmWK3DR6RC4SKGC6kVCpxxGOSCw9AT0
p86vYlJtH4b/ALW/7BniT9lPwxoviHU9csde0vUbk2cklshiaCfyyyAhj8wKq/I6Ee9avwY/4J2e
Lvjj8AB8RfCfiDS7q9ZpxFoUyMkkjRnGzzOgY4yMgDkfWv0F/wCCj/i/4eeFPhPLH468G6n4gutQ
ja30a9tYgYYbwAtGGkLjZyCTxyoI5r1P9kjxh4L8c/B3TfEngrwx/wAItpN2mJ4vKjiQzR5SThGO
drAgsQM1XMEYs/MPwZ/wS4+JfiL4Ual4t1GWDw7rEEc7JoGqIySyeWMqC3RQ2D14rxf9lT9nCf8A
al+Iz+EbPXrbQ7hdPkvjJdRF1cIyKVAByT84P0B9K/ebxrruiz+AdU1SSzfxVpEMTSzWulFZ2mVD
lgoDAMRjO3PaviH9hP4p/Bjxt8XPGGjeBfhtcafcPfvrWl3s9vAslvbMkaSZO/ciiToq7uHHpRza
DSldvofMOg/8Etfipqfxev8AwhqXk6XodujTReKliMlncLwVAGQwY5wR2wfSvA/j3+z7rH7Pnxhv
fAOu31vdTxLFNFf2uTHLbyfdk2ZypGCCvtX9DA1exbVDp4vLc6gq+Y1oJVMoT+9sznHvivzs/aj+
MvwS8HftXeE7bxF8Lr+68R2NxJHrFzc2cflXFpMjLFKAXIl/eEMCcYAb6AUhKMrpXPnX4wf8Erfi
V4C0TRNV8LXMPj+G9ZEnh06Jlmttygh9pPzJ15U5HpXO/tXf8E9PE37MPw707xfc+ILDWdMmuo7S
8jRGjkgkcfJgHO4ZBBxX7X2OoaT4a8OWTziHw9pyKsUUN5Kkax/3UzuK59ADXzv/AMFBvFngDwl8
FL5/iD4Nv/E+nX8b2dpc2cCyLa3TA+VuYuChJ6MAehHsUpjaaPzX/Z//AOCe3iP9pD4Kaj478NeJ
dPt9Stp57aLQ7iE7pZEVSqmTdhdwIxkYGa6n4cf8EqviZ43+G+o6/qt5F4P1y1klWDQtUt23XGxQ
VJcN8qseM47Zr9E/2FvGXg/x/wDBLSda8JeD18LxR28djfOkUUay3MSKsg+RiTjg5YDrXtmva9o+
p+D9WvYYh4p0+GF/OtNLdJ3m2jJjUbgC3sSKFUvqKUXufgP+zT+zhP8AtFfFyPwFBrlt4fvmguZf
Oni88B4fvJww755z24r3ax/4JV/FFvjXL4PupoIPDfkPcR+K0iaS1ZMfKpXgq+eCuffkV9H/ALF3
xY+C3jX9oTxvong74Z3Gnz314ur6ZPdWsKPbiNFS4PL5QCQ5CqT948V+hT6zp66sumG+thqTR+aL
MzL5xTn5tmc44POO1HPqHK2rn89n7TX7OGrfsxfFi48HaxqNrqR+yxXlpe2+QJYpCyjKnkMGRhjn
t617v8Tf+CXPxJ8F+FdE1/wncW/j6C/WPzrewDJNArqGV1Uj5157HjjjuPo79sn41fBn4f8A7Sfh
CLxR8L9Q1DxJpN75+oXc1pC0F1YTRyIJFzIRJh8MNwG3a3Q1986Jf6J4a8IWU3kW/hjR1jVYoLuS
OFIQx+VchiozngA96ObUUVJrU/Fn9pz/AIJ1+LP2b/hTa+ObzxBYa1ZtNBb31qkZhltTKQABkkP8
xAOMY96qfs3/ALAHiD9pb4Q6x4y8MeJ9Nhv7C5msl0q8hf8AeyoiOuJARt3BupBHT0Nfp/8At3+N
vAHg34JX138QPCOoeLNC1CJ7KKWwiSRbeZxmIkl12ZYAhxn7v51f2AvHfhH4kfA/SNW8K+EP+Eam
tYI9O1OVYo0Wa7iRA/3Wy3G07mA4OKOcFHdH57/DD/glb8UPH/gXWNZ1WeHwXrFtJLDbaRq1u2+5
2xghiwOFUsdoOD0JxXhP7On7O93+0B8Ybf4dHWIPDuoMt1uuZ089BJCDuTAI3dM5B7V+/era9pOq
eFtVvLZF8TWdvHIJrXTJEnaUqMmMfMBu/wBkkda+B/2Q/jB8GfGH7S/jLRfCPwvuNNl1O4TUtMuL
i0hjkglhXbc8F8xjd8wCk/xcCn7Sw1CVz5vtf+CVnxTb40N4PvWiXw6I5J4fFsUbNaOu0lVZfvK5
IwV/mK8L/ah/Zr1n9mD4py+EtbvrfURJZR6hZ31nuVZYiWU5U8qwZG4z0we9f0HvrenR6ummPf2y
6k6eYlkZlEzJz8wTOSODzjtX59ftr/GX4N+Af2hPBieKvhvf6h4k0m+W4v7ue0iMFzp0ySRlly/7
z5wCAQMbW6UvaBGLTPmf4of8EufiR4N8F6N4l8JX8HxChvkjeWz02ApcRiRdyuoJw68jOOf6c/8A
tJ/8E7fGP7OPwt0/x1qGv2Gr2DSQW97aIjRy2skoAXGeGAckV+0/h7UNE0Lwbp9xHbweGdH8tfJt
7l44UiVuVXhioznoDXin7c/jD4e+E/gjfXfxE8KX3irw9extbRS2ESSLBO6/umJLqUJbGGAPT35F
MbTufl3+zp+wV4h/af8AhTrXjPwv4l062u7C7ktY9Ku1YNO6xo6guPu7t2AenHtXW/DX/gld8Tvi
F4E1LVr9rfwbrFpLLBBo2rxuHuCq5Vg69FJOM47Gv0F/4J/+MvBnj34I6VqfhHwj/wAI7Jb28ena
nPHHEkdxdRIqv9xssejZKj72K+gdX1zS9U8M6pc26jxHa28bia10uRZXkKjJjGGA3f7JI60lUDlZ
+AP7Pf7Oeo/Hv4zQ/DltTg8O6iY7r9/dR+ahkhzuQbSM9G5Hoa9xj/4JYfFl/jM/gy4MEOgNE9xH
4uiQvaMoUlVK/eD5AUqfrzX0n+yH8Xvgl4u/ae8caF4P+GV1Z3V7cJqWl3FxawpJbvGgW62hn3RA
P82FJ/i46V+h76tYR6pHpz3tuuoSJ5iWjSqJWTn5gmckcHnHY1XtA5bn89f7Sv7NXiH9mX4pN4O1
y+tNQMltFe2t/aZ2ywOWBOw8qVZHBB64GOte0/FL/gmH8T/Bng3RPEvheW38f22opG0ltpKt50Ku
gZXCEfOvbI6ZHHPH1B+2v8X/AIL+Cf2h/Bkfiz4dajqHiPSb0TajdXVqjQXGmzLKm5d0hEg8w7lB
AwUbp3+9PD95ougeELCWKCLw3o4jXyYLp0iWFT91eGKjgjABo9oHKz8UP2lv+CevjT9nD4X6f441
DV9N1TTnlht723hVo5bV5PujDHDDdwcH3qr+zt+wF4p/aU+E2seNfCviHSxcWF3NZx6TcqytO6Rq
4XzOi5LjkjFfqN+3j4n+HXh34J3rfErwvqPiPQb5HtoJ7CEOttcMp8klt6lCWPDcjg/jU/4J/wDi
7wV46+B2j6p4P8Iv4aeC3j0/VH8qONZbqJEDk7WJYnIYMQOCKXPYSiz86/hn/wAEtvir8Qvh7rGt
6gIvCWr2c0sVvousRsklwyICDuGQEYnAPtmvCv2dP2fNT/aL+LcPw/stUtNA1JoLiU3F8hkQNEPm
jwuOevfsetf0F65rWm3PhfU7yJH16zt438230thLLIVGWRdrD5vbIr4B/Y++KHwL8XftJeNdE8G/
D67guru5XVNHuJrZFltygC3JGX3RAOwYAZ4zx2o5xpO58yWf/BLn4sy/GqfwTdRRQaSIDcReKwjP
YsoXIHByGLfLtPPfmvCf2jf2cvEP7NvxSufBPiC5t7qUQRXdte2h/dz27khXxnKncrKQfSv6JDqN
muoiyN1CL0p5gtvMHmFPUL1xx1r87/2x/i78DfB/7S/gyHxb8P8AVNQ8SabfN/a0t3Zh4bjT5o3R
ZBuc+aokKsvTG1unSlzjsz5Z+L//AAS9+Kfw88PaPrfh7yPHNhqBQSwaQjG4tty5RiDwyerA/h0N
Yf7Tn/BPbxt+zN8PtP8AGOp6tp2taVLcR29yLbMcto7qSoKn7wJBGR044r9u9AudJ0TwpYMlsPDu
lpGqQ2946RiFf4V+8QPYZr59/wCCgnif4b+HPghfp8SdA1bWdM1GGW3sptOiZ44bvaTDuIYBGLAb
SQRwe1NTuFmj8vvgP+wL4x/aN+DGo+PfCOsaXLcWc89suj3BZJZpIwp2BsYBZW4zx0roPh1/wS9+
K/xG+G2r+JXMPhvU7RpUh0TWIXinuiiBhgnhQScAkY4r9K/2CfEfgbxp8DtG1fwT4Vfw6pto7LUW
WFYYpbuJQsnCt8xz/EVHBr3XxNrOlyeDtU1Bo5dd02GF3lh0k+dJKF5ZUCMMt7A0ua7FZo/mamtm
imcSZRlPzL12+3H41WkRuQq8HmvVf2jtT8C698ZPFOpfDSyudN8G3VwstlbXSbChKjzCq5JCl92A
TxXl0xaRyARgZ5qw3K+Mc9CPSneYOAT1OelKIgRnqTxQyMpOVHFInYaylwWxjsaRgwI4Ix19aeX2
BguD7CkDBPm5LdRSJ0YvmAJ8hOe/vTeMjHp0pfvRsSMHop9aQx5BIGcnrmmPlE8vdz/WlZRE5wDj
pkdKTaSuSeaV2YsVPIx2NTew7CMF3ADOfWnK5HJBAFMyQAuDj1PWnE7Rk4x70rgPLo7FgpOOpI6U
2QMmMcg9c0gDYymRnuKaXKt83U8nNCQ7kjFdynOM9cUvO7kYDcVEfnbIU04cuBkg96egXHMoMvyv
grzkDNKQWGQQWA6DHNMkdQeOe340Iu18ggHHaosFwYfMc9uaXJU/KnAHWlBB3bhx3IqPaDuwc5xj
HamIVfmyxySfSnqFcqclR702NdrDk/4U9lXdnBKdOD7Uig3IjlidwB6HvTXbhj29KR4sb2HQfrSj
b0GcHr6UgBiAiAIc9CSeKUr5i4HB56ClIyMDt0pu7buGeT6CmAu/am1uSOKBGNp69OD3oCL1cYI7
g05WLgc8YxQASActjKDqcc0AJKQclMjJI6UwcgrnP40/5mKopwF560AKSCu0YAFE7LjEZyO+fpSM
wyQeASKDtOEViB1yR1p3HcFVSwXnjnkUK5QnazBsEGkVzvOSMY570i4YFnOQo45pCCWUgFQxORwa
FzPKoZT1wcUzoDlc44APpTgDG7EZJz0FGoDpY2BB7ikyZSTuA9j34pd7Nkg7T0+tGAuN3THPvQA7
y1jH38Y5APc02JdrPu5yOfSnK0Zj+6cnimoSrbSOCOtMpB5gZGGzn2NA3bV3BttIwHKg7cHJPenO
2c4BwOSCetGwXGNtLOm0lcfxc5p5kQSLuHQdqayBTjoD6U11OzlTxU3C46MrknBHv6mnsSfmDnr0
6ZpkYX5C+7Gfumnu+9vLWNdx6YoTC4ivGMENgZ7CiYxsxIJUZ4Y96YkZiAJHzDnZjpUjPuVo3jVg
CCMdhVjDIBwxJC9qVlAUFeAeCBTUHyYYgHnqfxFKsoyckBjxgZNISE4P+yNpIA78VGzDCgrgk4JN
KqhmO4HGen8qVlweFGwfw5pASEMkZMnykfmRTEBBY7igbPHY+lOmKtuOD7E84pm7ON3OBjOKVwRK
6LE7AM2T/EOlJJH5Sh9uVIwcHp6U5HDL8ucEZ6VCuDIABuA5OelAXJ2BRcYJJ6YPGMURMYDyR5ZP
OOuDQzFHOMAnjPWk2hZChUMDgjI6VVgsLIjQSHLbQeQAc0x280SEggKPvDrUzI0gYL0PPbrUTsiI
Gzv/AL2R1qWIRY93YcdfpSMsaggvtIPHHUU4uhAYsyHknH8qkcfLvO1h2OTnFNIaIZpC5zgcDAz3
p/kleRuLhcnd0x1pZXLOVxuxwM1HN5jBcjp6UmwYpJSTfjcwHAIJ+tG9mLMSpc/dx0FPulMe1VIL
EZJ+vrUQI/eEErzknHQVIrkzsT5e0qJe2B71FJJsmAB3Z6jHGakUKArBTu7OT2qNsxsHUh+5NJXA
l8p3O1eGBxj1pJ4kGFAKM3XPY0iPt3FMk9h396DLmBRkttJyOmTVXHcWXdGVDEgAc49aTzSD9zDH
JHPGKRUW4YMQACOuTxTg+RmQkkdSTVDGr/EZfu9AMZNPhBV2xksB/eIyKYZGdiSdqg5PGKI403Ei
QbzjHfPFK4CyRhSrRhnVuARwAfSpFkKLtJJIJHTpzUTKFIYh/UH0I9aV0UyBlbc+3JHvTuAofbIW
ZCEz0c9BT5icswbDYyuByaSRQoKllAAxu7E0yRmMjDaOMqTjvQgHs0ZG1YyW6lz61n3kb+ZlwRuH
B9auPlSDwcDjnmqU5Ynbu46U7gL5aoBvUq69TT8qrZwZD1INMj2yDa2QcdGPFLlEKmMnf39RQwHr
PhSXtwy5456Uzy0WMBgQTyBx0qVGYFwW2gZO09zUboXBGBuxznrU3sK4b1yoZiUyePWnfa3hYbQQ
SOdxzUJO4sCfUDIpJFcoqswGOQc9aBpnR3mhNp8Y80gOSPLQnkcZzWdKD1UEnJJ45z9Pwq/qGqTX
jbpZzIdu7pgZ6YI7Vnu7MSpVewJOcg+orpbXQzSY6JEdw0RzGp5Z+CPwq0riOAgnYTz93IHrj07V
IbZrh/MUFFLbm3DA47fpVW+vEEQjDsQ56dAKzuVYz5J1EjbVB7ZbB/Gui0rXI9D0q4KSZuLlTH5e
BtZcdx64J54rnptsZ2kkEjnHeoXYuwOMvn7pqWOxPBMUuC7H5jzjOR61+oX/AATm/bJ0LVfBcPwB
+IYWztLqCWy0nUmc4uFmLbrdyc7CN/ynp27V+XMDgyrkBQfetWFzCxKlmmjbcjbuUP5jFCVx6n1d
+2X+z5bfs5fEW603T7yPVtHv2+22jAksi85WTBwDkYz34r9FfAHiPwZ+zR+w9D448D2tlKz6XFqI
SebJmupAoZWbO7hiRtHpgV+LY1O61GMJPNJKwIwXdn2gc8Z57k1dtdVuUt3tJL2UWSHcLUSlYmbP
937ueTyRV8qbKtK1mfov+wZ+2x4y8XfG+bwR481lNc0bxKJmtZL6TLWk6qz+Umf4GUEYYnkdea9l
8P8A7M3hrwh+3Zb694W1OPTo7SxbVrnS2f5T5weJkjwegIDbT0r8gX1C5iMU8EnkMAD54co4IGOC
Op5Pp61oweIdUTVDcx6nffa/KKCf7QzOoHO0PuyB7Dip5ETonuft5+03+yzaftF+MvBl9qF5b/2P
o7OLu0ckPKjMudpHfA714h+3jofhb4e+M/gLdWkFppsemaxFCZFCgrBHLAV3HqQpXoa/Le48Wa1G
Cia5qUAYBdovZMEY5wAR1xVG91+712Jxe6hdXDxE/wDHzIZNq98ZPQ/5xSUQuk9D9b/+Cqup2svw
Q8PLHNHK8mqCSPadwZRC5zx1HT8cV0f7HWu6fqf/AAT8toUvYmax0rU7O5CsAIXDTZU+nDD86/G2
71u91iCCG4v7u5jiOY1luJJEUkbSArE44GOKWDW9U0q0nt7G/u7a1kZpJYIZnSORiMMzAHk4GOlC
hZmaWrXc7Dw3pVv8QPiBpOhSX8GmQ6pfrafa5VISLzGChtp9dw49+tfrbqeu+B/+CdX7OcGmWXl3
mrToZobMuR/aF2ERZX4B2jAB/KvxUnu/PQhmYlSHDdCDnIwe3IrR1jxFq3iBLf8AtfUdQ1M26t5b
XdzJNsJGMjJ44Hb+tFtdSnL3bFnxx4suPFeu6prkkEMFxfXEt1Ii8lTI5YgE/wC8ec9ulYi3KwPt
gGxZAchiTg8jvUEkIlYBWxlcNv7HrzTAGZLgMqyOUPy4O36+uKrmRitWSzBpGUIMIBuK9eeua+p/
2S/23Lr9lfSdZ0//AIRfT/EMeoyrNBNLcGCaFsbWXcEbKng9uRXyxHGWgP7zaSueCQDxyOlQR3DL
E3mzNHhvlb2JxjPX0ovctPlPuv4cf8FTfH3hTXfEk3inSo/HGk6q++10+ecQLYAlsxq/lksm0gEM
vYe9fO1t8a2tPjrD8RdN0KwtDBqh1T+x9oMGxn3GHOOhGVzj8K8mIWZIyS0YPzF/9nFMlnKybyuR
naAv0FCKUran3V8UP+CqHjzxTqmgSeDtMg8F2umOGurMXAnF7yp8sq0a4QBSvHrXn/7WP7bsf7UO
maIj+CLHw9d6fIzDUVufOnlBHC5KLtUcnGTyBXys0z+dIwh2biWHOe3+fzp04Hl/eDEjhT1Hrx+X
61dtTPnbZ9l/Ab/gozrfwQ+DD+BtP8I2F5fQCdbTW2uzGU35MZaPysNsJ/vc1teBP+ConizQ/h5q
fhjxz4cg+Ic94ZVi1C7uFgxEycK6CIhsNnHTjHpXwpJL5YRg5XjlccE+9KLvbFsAkQ7zks3GD26V
Nky1K+57V+z/APtBy/s//F238Zw6Nb6vG3mJcadI21GDA45x8rKduDzwPy+gtb/4Kq/EHUfippXi
XTtNh0rwzboI7jwxLciSO5GMMxk8pWVucjGcECvhOTypZAfNDMpGR2z74FPuMOArsdq5IJ46n/61
TypGlz6A/as/ah/4ah8a2GvnwxZeHDb2fkSxw3PnG4YOSGdvLUn05Hb0r16w/wCCn3i/w78DbPwP
ovhiy0nX7OyhtE8RW9xgAJtBk8ox4DkA9yM9q+HtqojAeYo6Fh0/l9Ka8gCDa/GeSx6j6U7ENs+2
fEP/AAUt17xp8AdT+HvijwrZ+KNSurV7KTxDPc+S53Z2y+SIyN68DIYZIzxXmX7Jv7W15+y14k1r
Vbbw7Z+JLfUrdY5bSWdoXjZGyrJJtb1IIx0xivnUPGyNtiJTPzEn8vw6USyNEnlxMFDcLjBAP0J6
9PzpoXMfcuhf8FT/AIi6Z8XdQ8T39pHfeE7xHjTwq8wWK1GFCFJgm7cMEsSOdxGOlfPnx7+Olx8Z
vi9e+O7DRLPw1cTvDKlvavvRJE6OX2qSSepwK8a8+VsgqQzDGMZHT1pI5GjLKHIXPA29qdgU2nex
9z/E3/gqd4/8ceCLXQPDmkW/gTUbeVPN1iwvGlaVVTG0I0YChm5PzHAFYHxz/wCCiet/Hf4OReA9
a8GaWNR/cPca99oJbdGQWeNNo8t3x2YjkivjlkDbY2d1bOQemRTPLLkrvwhyM55PFKyLuz6q/ZR/
bq1n9lnRNd0m18M2fiO01KZLpDdXDwtBJghgGCtuBGOMDGK7H4Z/8FQfih4L8QeIdQ12O38Z2mqy
NJBpd5cmOKwO5mAiZVJCbWA2n0HNfEcspgCxPlwx+Vieop4WVAMMMt0TuP8AIoUURzPseuXfx31A
/Hj/AIWhZ6da6XqB1ZdW/s22bNsCJNxi7Ha3Q8fxV798WP8AgqJ8TfH1xo0nh9IfAa6bcGZhp85m
F90wswYDCDbyP9qviiQkHY4wuTuI+nrUYl2zxSgHcpwDjOadhc59TftU/tz65+054T0PS9Y8JaVo
hsZvtQurSZpJWfBXbuYDC5+bjrx6VpfAP/goj4w/Z9+EEngnR9A0rVkWWeW1v7qWRJLYyc8qAQ+H
ywyR1A96+RXAuHbzMFAeeenpSr8jbyA5CcKDgj05p2Q029j7S+F3/BTj4l/D/wAK6no2tR2vj57y
Qsl3rc7+dDuXDpgDDL3AOOpFeCfB3496z8Cfi/bfEDS7OyutQhknb7AyFYJFkDBkJU5AAbg9sCvK
mAVVkY5diGxn9akhVCrkpgk9Mndn1qlBMltpn194x/4KX/FPxJ8UNE8W2d0uhWWlxKreHbKZ/sV4
csW80E/Nuzj1GBXnn7Tf7XWt/tSa7o+p6/4c0jSf7Kt3t447Lc7SBmDEuz9emAMdz614GxDA8+Xg
EHPOT9KQSKgYkFjgZGOuDS5UhKUmz610j/go98SdG+Blt8N9JgsLCKzsU02216IyJfRxLgAqQ20M
F4De2aZa/wDBR74oW3wavvh7qUemeJYr22nsX1fVxJPdmKQHOSGAYjdgZH54r5OdhGuHbBxnAbJF
QSygriMsGPqelHKi22e0/s6ftReJv2X/ABZea/4YttPuTd2ps7i31GItHIm4Mp+Uqcgj1r0PTv8A
go18Xrb4r3vj9NVt5JLlPKfw5K0zaWq7Ao2xeZlcFQ2c5zn1r5UmDYVJPkGMBm/P+tRLGyIBkyAt
wBwD70rIlNnqnx4+PXiH48/EO88aeILSwstWuYYoNmnI0caLGCARuZjnnrmvW/iV/wAFHPi38TfA
Nh4Vur6x0O2j8vzb/RIpba7mKDgM/mMACQDwAP5V8sMUlRcA4Ax71CYnZ8KNqjuRjn60tBrmWx9O
fEv9vv4m/F/4QH4c+Ixo8+lNHAkuoLasbufymUrvcuV3ErksEGfxrH/Z3/bW+IX7Muh6tpPhOPS5
9P1KYXEq6nbtP5Uqrt3LtdTkgAckj5RxXz7PG2QoUjIAyF7/AFoaTygI3JHHXqBRYqLfU+nPhz/w
UB+L3ws1jXNT0/V4NcfXZDcXFtrokuI4pCzN+5UONn3sYzjAH1ryg/HDxN/wtyT4lpNbxeLDqg1g
SCEJEJwwYDYP4eMYzz6815vcM5kK7sjqG6H2pruqtlicnPOaFZA5Wd7H0l8Uv29Pi78XNa0LU7/x
FFo82iStLaxaHG9rGWLKdzjexc5ReDxyeKw/2gP2x/iH+0rY6VY+MpNMktdMleW3TT7XyMMwClmJ
Zs8Dp0/KvBHnEbMwO0c44qUSOUC7QwJ+8ByRTSSM79z6G8E/t1/Ff4cfCGX4caHrFrBorJPFFM9q
HuIUmJZgkpORyzY4yM8U7wH+3f8AF74WfD+68H6J4hSXS5/N+bU7YXcsW8YYRyMcgdwDnkmvm4uG
JI3cdMcHFKsyxsf4z6mmkhXb0PTPgr8evFfwC8YweKPCN9DY6vHbSWjedCJYpY327lKHryin2xXa
3H7b3xXufi+nxJPigr4kig+zoYoES28vBUxmDBVl7885wc18/S3BR8g59fUVE7IrBQo6kZOfSk4p
mibPUPjX8evFfx+8XnxZ4u1GO+1X7GlkGt4UgRIlLEBVUDu7HnPJ/Cu18eftufFz4mfDy08C6/4i
jk8PWiwxlba2SCaZYlATzJF+Y8qDx3x9K+flkxuBBMbDaeaZG7bX54x3HQUrai5mtD3jxh+2d8Uv
H3wmt/hpr3iQX3haKOGExNaIJ5Y4iDGHm+8eVX0J2j3Bo/Bj9rD4lfs82Os2vgjX10uHVpEmuIZr
WOZd6rtDDcp2nB59cD0rxV5kZyyqVOMYB4pUkAYtjBckk4oshcz6Htfwx/bC+LHwf1jX9U8N+LHt
7vX5DLfNdwpcLJJvZg+2QFVbLsMgd64uy+L/AIpsfienxCh1mVfFovjqX9ogLuNwxYs2Au3ncRjG
MYGMVxDXDL94ctx9P8/1oMg3kMWXB4Y9h/nFHKh8zR7V8Qv2tvin8TPGfh7xN4k8Wz3mq6FKJtOe
GOO3S2YMrZCxoATlVOSD0x0rK+M/7S3xH+P17pcnjzxIdbGliRrOMW8UKRbyNxxGq5J2gZOa8pab
ruzjnBHWopm8nBPQ/LhaaSDmZ7Zp/wC1t8WNK+EY+G1v4ymTwcIGtf7PEMW4QkklPMK+Zt68Bu+O
BUej/tafFLQPhPe/DjS/GN3beEbiKa3ex8qKXEcufMQSOpcA7jwrDGTjFeMB45TgZLKMDPbihRky
YJyvOc0uVJ3E5N6HpXwm/aC8cfA7XbrW/A/iGbRdRuofs1xIkaSLKmc4KSKwOCM5xWroH7VHxP8A
C/xL1P4gab4vu7fxdqYZbzUtkUhlRsfKyMhjwNq4G3jaMV47lldAMdyGPNOLbpAD/FycHtSsrjTb
O38ZfFXxP8QfG934v8Ra5Pf+JrmVJ31F9qOJEA8sqFAVdu1cbQMYrqPin+1L8TPjdbaXbeM/GN7r
NppcpmtI3WKEI3GGPlopZhjqa8gnZVXcMhs8ZFRmVVXB3Bm+XPalZblao9i+Kn7UfxO+NGg2GieM
vGF3ruk6fKJra2niiRVcIUDMURSzbT1JPU96b4J/ai+J3w68BX/gnw34z1DS/DN4ZfN06IRMg8wY
kCsyMy7uc7WHXjmvIfKK/KGIXkk4oZmQs0eWPTJ7dadkK73PXPAX7UvxP+FHhG+8MeE/G2oaLol4
0jT2cSxupZxhmQuhKEgD7pHTiub+Gvxc8WfBvxNF4i8Ha9ceHtXWFoRcWqoxMbY3IVdSpB2jqOw5
rhJj5ik4G/0A601XOcnk9qEkCcrnrcH7TPxLg+KM3xIPjDUP+EzeLyf7VZk8zaV27NoXZtx/Dtxm
ub+JPxU8R/FnxZc+JfFet3OvazOqI95cbUcxqMIAEVQoGT90CuMkkUE4HXkfWh2EYxg5PbFOyQ23
1PVviN+058Tvi3oOn6F4u8aalrukWLrNBZ3JREDKhRWJVQWYKTySeuetL48/ae+JfxM8FWHhHxH4
y1DWPDlk0TQWFwI9uY12puYKGbH+0T09RXkrScYwVXHHPQ0JyCQw46Clpe9iU2es/D79pT4l/Crw
vqXhzwl4x1LQtE1B3luLW0ZcM7KFZhuUlSQMHaR60/4c/tK/Er4PaXqmm+CfF99oVhqL77i2tyjI
7bdpf5lOGIOMgjpXkrvtPrz3ppBiGT37Cm0iud7Ha+A/ih4n+F3iu18SeGNauNI121WRY76Bx5m1
xhgSQQ2c8g8cfSt+9/aP+I178UYPiHP4x1J/GsSiOHV94EkabCmwALt24J4xg5rywP0I4I7Gg3DD
t8v60nFMnndzufib8X/Ffxg8Ty674u1+71/VngW0+03RVSIl3YQbQAB8zdu9dJ43/aT+JXxG8F2f
hTxL411TV/DtiU8mwuSmwbUKplgoY7QeMntnrzXkUmCm7196JLh2dkxj2Bo0GpPoeq+MP2l/iV45
8CWXgjXfGep6x4atDEItPuWUxgRjEYJChjtHTJPal+HP7SfxH+Emjaho3g3xlqfh7SdRlM1zb2rr
teQpsLDcp2kqAMjGcD0rynccZB4z3oVuSo5Ve4osmF2etfDf9o/4jfB611K08GeMtS0C01FxJdw2
5R1ldVKhvnVsHBxkHmub8DfE7xP8N/Ftr4o8Pa7dab4ghaR47+JgZQzghySwOc5OQeua4rJKbgTu
xwMU7zGiOTk4PHt70JLYak1uep6l+0f8R9V+Jlv4/u/GepyeMLOMJb6sWUSxAAqFGFCkYZuCOcnN
ZXxP+LXif4yeJG8ReNteuPEmqNbLaLPc7V2xLuKqFUAYyx7Z5NefvOdrlm5I475NIriP5WyQOKOV
Jj52eu+Mf2nPib8SvBFl4Q8TeNdU1bw7ZeV5VlO6Kv7sYQsVQMxA/vE1H4t/aW+Jvjb4e2ngTXPG
uoap4SsxCIdMuAhUCIYjy4UM2MDgk9BXk4kwuR17e4pfNaVM8D6VVkK9z1j4XftLfEf4MaLqej+D
PGOp6BpeoyGa4tLYxtGZCoUsu9GKnAAyuOgPak+G37TXxL+C+n6lY+DPGd/ocGokNdR26xssjhSA
+HVgGweSOT+VeUG5O3AQEE5z6UrSr5rE8jHrxmlyoHJo63wV8TvFHw48Y2nivw1rV3pOv2skjx6h
CwMoZwQ+dwIYMCcgg5znrXX6r+0/8TNW+KFn8RLvxlqEnjKzjWO31ZvLDRoEZdqoECBcO3y7cEkk
+teQmXEcnOTnjBpz7JIAHQ7j09qlpMhSaO6+KPxj8W/GrxI/iHxvrk+v6y9slmtxOiJtiXOECoqq
OWY9OpNdN42/aj+J/wATPA9h4V8V+M77V/D9k8Zh06URomY1KoWZVDNgdNzH8a8dXbG3z7iR0weB
TgzKuF6ehp2TL52j1nxp+1J8T/H/AMObbwNr/jS/1XwtZeT5NhNHEAFiGIwWEYdtuB1J6DNO+F37
U/xP+DXh7UtC8GeMr3Q9K1GTzrm2hSJw7lAhZWdGKMVCjKkdB6V5CwJdlU5AzyDxSZCs2G2jqD15
p2XYnnZ6/wDC79qL4nfBHTNSsfBXjC+0a01OQTXVuIopEkcKRu+dGKnB5KkZ71yXgT4peKvhV4vs
/FPhTXZ9J8Q228rfxbXchwQ4YMCGDZyQRj24rjmlkYnB/A802TOCpA3+p4H4UrIV2ewX37UvxP1P
4qWnxKufF98/jO1URxauixKUTYU2CLZ5e3DHjbznPNcx8WPjD4v+NfiyTxH4z12fXtXkt0tRPOqJ
tiXOFCoqqOpPAHJzXCu5VgM9DnANIJ2G4L1PejQfMz2Xx7+1J8Ufip4K0/wn4s8a3+raHp7o9tbO
sUeCqlVLsiqz4B43E0nxA/ao+KPxI+HmmeCPE/jG81jw3YND5VlNFCvMSlYy0ioHfA/vMckc1475
ilC/vYkJA3qGBwefrXkq20ruAc7cMc4646j9DXSmc6hZFeHaWRQWAPBz3NfUH/BPnx9q/gf9qbwO
ukzJGms3Y0m/icZEsEmMj6gqCPcV80PB5UbO+MDJBz05x/P+dfWf/BMf4a2XxA/ad0ue+1U6dN4d
iGt2tt5YJu2jYKVJJGAN4Pr+VZzWhrB8r12PrT/gscu34f8Aw7YIZGGq3AAHX/UHv6c14R/wSa+I
GtaV+0RceFba5KaNrOmT3F5bOgId4QDE6nqCN7j3B+lfWH/BVn4Uz+Of2fovE9pqEVq/hG5N9LDK
2PPidfLcKf74yCB35r4h/wCCX+v2mg/tc+H0v51QX2n39rDJJgASmMOq57EhG/IetZyeiFC92mfp
F43+L+veEv2z/h38OdLa3g8M+ItOvL7ULcQKWeZElYPuxkElF6Hn8a8K/bd+BnhP4i/tlfBCw1HT
wkfiRbm21ZrZjG91FEUZA5BHTLDPX8q6H41eNdK0T/gpz8HYbu8hgT+xJ4HkaRQFklW4WNWPbJwA
Peuc/b3+MWl/Cf8Aax+A/iS7f7VaaKLm5vYrZg8qRPJGhO3PXbvIHfaaa0bDflfW7/U+lfit8XNL
/Z+v/hV8P9F0owf8JLqdvo1kYVURWsKNGHyD1JQ4H4nOa+d/+Cwc3l/CLwNEE3u+uu2O5C20hPH5
V9BePvAPg/8AaPHwy+JumeIo/I8OzLrmlzAgxXCNsfD5OV+4PcHORxx4n/wUStfD/wAe/wBkuLxx
4c8S2ZtvD851WHdIF+0qA0UsQycrJyce4980k0OSfU9v/Y9bT9A/Y9+Ht3LKmnWUWgJc3Fw+EEY2
lndj7AE5PpmnWn7XfwS0szC4+Lmi3of+Ga7RwmOv3V/nXjn7DXxl8IfHj9lGP4Y3WqNpmvaToz6N
qUErBHELiSNJoyThgVx9DwcVv/C/9jr4J/s4+GvEeqazfWviux+z+dJPr/lT/Zoo1YsEHPJz9egF
TGzKlfsfnRr/AO0la/BP9tPxb4++EV3ayeFn1Ni1rbRYt9RtnWIzxqMcBpFYgr0OCPSv1v0/4b+A
f2hI/AvxP17wV5HiK0hivbE6jEY7q0Y4YI+Mbwp5GcjuOtfmX+xd4B+C3xa/bA8UX0bS6b4V06R9
X8O6VqpjSO4j3BWSRWJ6GTIX0PPSvqP9pz/gpvpXwV+Kun+CvBOlWXiO30qVY9fmKny4lOzEVsyM
AXCFs5BAOB2Ioe44qySPJP8Agpb+2hfa1qus/BrwjLJpum2n7nXb0ja17uX/AI91BUEIp6sDyeBx
Xo3/AARyYv8AC34gbvmkGtQ5YHj/AI9k/rmsL/goB4f+FX7SP7OkHxs8MeI7Kw8QadbxzQrJOsU1
7FvAa2liznzVIO3jOQR0NfKX7Cv7Ysv7K/j2aDWY/tXgvXZI49WWJDJNbsm8RzxgHHG4Bl5yB6ih
q9rAux57+1wRL+0z8VA7ZP8Awkd2c55xv4/lX3N/wRo8RapLpnxE8PTXssuk2hs7yGzkOUimlMyy
MvpuEUeR7Zrg/wDgqJ8KvhzdrpPxZ8E+ItLfV9duI4NR0+zukka83xlkuVUHKsAMNxzkd69e/wCC
UJ+H3hP4Nal4gl8Q2Vh4r1G6ay1S1vL6ONVELO0RRWIOCkuc8/zrSTUtWTTVo27HpPiz9v7w18Pf
2sNa+H/jGT/hG/Cukaf5X9rS5kjlvHWGVdwVSVGwuB15+uK/Ov8Aab+P2nN+2N4j+I3wa1yTTFkW
BIdYs4zAJZWgCTOFdMEMepI5IB9667/gqT4X0HSP2gF8T6D4msNdXxTafarm0tZllNpLAscIyykj
a6gYB5yp7V8XwXXlTglNxzlt5649qVgi9r7n736h8SPE8f7BjeOk1Fv+EubwONUW/wAAsbk2ocPj
pncfSvxi1v4geKfjB8V9D1vxjr13rmqteWdsbq8ZS6xLOMKqrgAZJ7d6/UP9iP8AaZ8AfGn9miw+
H3iq/tfD+q6DpsOkX1vqNxHBHdQqNkcsTMRuUqgBHUH6g187f8FPfiz8JbjxP4Q0jwNYWN/4z0G5
F9NqmlGM2ixFSVgdkPztvWNunGOvNTHUe0uY/TX4q/EXwn8JPBDa74w1c6BocUkcBulDnDtwqgIp
bnB7V4N4r/by/Z4uvh7rNnL4xOtWM1lNG8M2m3U3nqUIKktFgg5xz61BH8W/gj+3N+zzZWvizxFa
6FaXMqSXmm3mpRWd1b3MXUfMeVzyCOoINcb8ZNR/Zp/Zw/ZN1jwvbNo/i3TpopLS10+0u4Lq/uJ5
icNvUhgFJ3Fh0C8URtoTPZ6H44oAsaOgK/IpCt0GB3/EGv2j/wCCSUDx/srzvJhjNr93IJAT842R
AHn6Y444r8gfhXD4f1H4leD7PxfI0Phe41C3ttUnSTYY4WYKzluMAZBJ9Aa/XX9p39tvwF+yt8HN
M8N/C2+0nX/EM1t9k0uGwuY7mGySNVHnXGxs/d6DqxoerLvpax+QvxaGfiV4vZFX/kM3qg/9t3rj
nRELrjr0PpV3WNXn1i+nvL1zNdXMz3EsmzbvdmLMcDpyelUJNzspxtPcY6itLGOgHaozgkcc1+n3
/BPL9nvxP8Qf2fYtc8J/G/XfBKXGo3KXui6WqNHDOpC5IJyGZAje+RX5gs7HYpIH0HP41f0zxDqm
lo0dnqF5ZKW3f6PcPGDxz0I5460mmWnbqftT8T/2T/i9b/D7xDLpf7R/ie51EWMxSG/ZYoJPkOVZ
lOVBGRkdM57V+dH/AAT1+GM3xO+PcWnWfj3U/AWtRaVNcWF/pTASOQVV4sk4PynOPbPavne68V6z
PAUl1q/midSrxSXkjK49CC3IrPtb+a0njlt55LeVOFkifY6+4Iwfyo1tYSdne5+6n/DI/wATBLuP
7SvjMA442p9Mda/J/wDbe8BeJ/hz+0V4k0fxX4tufG9+qQSw6xdTeZK1uyny0cH7pXkEDjv3rxqf
xdrdxIS2tak5BypN5IcHHUZNZlxc3FzJJdTSySyv96SVtzN6ZNUm1uO9yIqI956EYHHU1seH4FfV
LMOdhaQYPrz0/SsdY8HzCdxP8PcVesJZkl81ZTFIjbkA6k9efapkWtz94P2/vHXiP4ZfstXureFN
XuNB1ZLuxt1vbV9jxo8iq2D2zUX7D3jLWviz+yDZ6p4yv5vEeo3Umo29zNfnzGlRZZECtnr8vGK5
Hwn+0l8EP2xv2c7TSfiRr+neGXkaKHU9J1LU47KUXMJVt8ZLDKFhkEfoRV3X/wBo74Cfshfs+32m
eCPEumeJIrUyCy0PStWivLuaaZiSSQxO0Elix7DuazTTE1dNHi//AASB+I16Ljx38PDEraXaiLWo
JTkOkj7YZE91OwMPfPrXlv8AwV1+KzeI/jJpnggaXbwL4Ytkuv7SU/v5muUVtnsq7M47mu+/4JZ/
Eb4SeCPB/iLWfEvijSfDfjiS4NlOdV1JbcT2vyyRlFkI4DblOP7p9a8l/wCCqEnw88RfFHQvHHgr
xppfiO+1qza21O10y8juViNuEEcmUY4yjkYP93Iq47jne6seJ/sVfEy/+FX7THgXVrBElNzqEWlX
NvK+1ZYbllib8VLhhjjK1+rH/BS74qn4afsx6xaDSbfVW8TyHQsXJ+SBZIZHaYDuyiPIHHODX5Xf
sM3PgKH9pLwxP8RbmDT9Ch3Tw311OYIbe6j2yQs75wASuOeK/R7/AIKBfED4P/Gb9mvxFa2vxP8A
Dc2saJjWNNgstUhnknuI0cCLYrEnersvHQkGl9onofjfLJJv+WYQ7cgSZwRjv7fpX7Y/sj/tLWn7
VPwdl8H6nf33g/x9plktvO9pI0EzooVUu4WI5zxuXscjoRX4irOUlDQcMG+UFQRnqOO4/Ov2O8Ff
F79nT9rr4N6De+OdR0jwTr2mkW9zbXOoxaddwTKqhyjZBeJ+CPY9iKmVrl62Mv8AaV+Kfxd/Y5+H
F/pWtWUfxi8H66slqniLWGZJbFpE2mG4VVYMp6qSQDkjNd3/AMEqbf7L+yJp4cAEategkccBlH4d
Kl/aQ/ab+CfgT9mXXNGm8UaZ8RYJrAaVbaVZ6jBeXVyWXYjPhui/eLn+76186/8ABL79sfwp4H8J
XXws8cahb+H1WWbUtO1e+mSG1kVgnmQu7EBXzlhngjPcUWEj6nsv2+f2f9P1u7t4PG99dXMbtFJH
9ivZURgeQMx4/KvzP/ay+PWjeIf2vdW+I/wn1m60poY7ZI9WtIWtZGnEJSZsMAxyGKncOcd8V+iX
wr8KfsmfA/xbqnijw7408KDVL2BklN1r8FwApbexRCxwSQOlfk/+0/8AEXwj8Tfjv4r8VeCdHk0T
w/fzJJFblgDLIBtkmCqcKHI3Yz3B46VSC9mfsbD8QfE1z+wCPGqancSeLn8EnUU1AMPNN19n3Bs+
u7FfiJ8VPiB4s+I3iNtZ8Zavqmsas0IiS61V2MgjUnaFzjCgsfzNfqV+w1+158OfiD+zpa/DLx9r
Nj4Y1Lw/p0enStqt5Hbw3tqOI3jdiAWVQqsvYgHkGvBP+Cq/xh+FHjm98L+HvBNrp+ra7phN1ceI
NGaJrdIJEkH2feh+Ztyo/fHHc04vUiWjukfn1JNgEsqqTxlcf5NVS6yHoSOoI5p7SKSvGAoxjgUn
mDAOFBHTFWTcRssqsr/NyCppbeDcqFj34qIsw5yM+oqRt5QFj8ynAU/zpDVj6g/4J7fE2/8Ahl+1
N4Nlso0ubXWLhdFvLYsRmO4dEEgxxlG2tz7jjOa/R7/gqt8T7bwP+zj/AMI9PocGrzeK7r7BBNcN
hbJ0Uy+cBg5cbcDp1r85P+CdbeAR+0nos3xA1CLSbGyRr7Try5ultoUvYXjePezEDBAbAPevvf8A
4Kc3Pw7+L/7Ok+oaf8QdAfXPC9yNTsbW21GGZrskGN4QqsSSVbIIHBArK2prY/Inw/oUnivxLo+k
7/IbUbyGz3AfcDyBN2OOma/eHwH4C8DfsX/BTwt4f1rxbd6XpsE62YvN8ipc3chZ/wDVqG2ltp9u
DX4K6DrtxoOs2Op2e1LrT7hLqESDI3o4dP1UV+3X/C1vg1+3J8DfD93rPie18MPDepfGwvdQit7q
zvYVZdrKzfMo3kjHUEHihvULHN/8FcNE0+8/Zdt9SuLWKW+stdtBbXDIC8YcOrhT2DDGfpX45+Gd
b1HwvrVhrOl3Mmn6lp1wt5a3VudrxyIwKsD2INfsP+1X8Qvhn+1X+yX420zTfH2kWeqaBNLcR+Zd
RqZrqy3E7E3ZaOQZ2sueHB9q/J34C2PhnxP8Z/B2keN7h7PwtqF8lrfXSy+UsQc7VYsegBIzk+ua
aegkrM/Yv/gotez3n7B3iS4uFMlzcR6U7kAZ3m7tyTx05r8ORaqQqKWZVYEcf5zX73/tP2Xw++M/
7OHifwGPiLoWm+fZR/Z7walA5SSBlljJAfkExAHHOCcV+BZupPLX5V6jjHU+3600Nbu5+0H/AASC
BP7MmsMxJdvElzkFcAYggHH5V+Y+gxrcftYWkZAmJ8cITHgMD/xMeBj09q/WX/gn1e/DTwB+zD4W
/sXxTYRHWY/7Tvob/UIlliumRI5UwWGAGjxjFflv+03otr8Bv2rddOg6vZa/Bp+qQ67Y3FqVdSXk
+0CJipI3KflPPocc0kw6n7ifFrx94V+GfheDUPFfimPwbpZnW3jvWkWNS5ViI+VI5AJxjtXzx+0J
+1L8AvGP7PXjfQL34gaN4ka40e4FvBOS8k04jZoSoCj5g4UgjnIrT8XXvwg/4KC/A3w8914s/sfT
jei+Nt9rit7yCdEkiaORHORgufqMEda8i/aq8Bfs8fAr9jq88NTpZ67qiWzWGkXmntFJqT3hV3jd
nQ5VQV+YnjAwQc1KuVodp/wTi8AeGfhz+ydYfEWHTt+tahY3Vzfz7ixlWCWbAXPQ4XGcV7b+zv8A
GS4+PfwLuPG81v8AY47+e+EFqxB8mKN3RUJAGSAvXvXy3/wTQ/aL8IeNf2fZPg5ruoR6L4g0q2uo
1W5kWNLmzmdsPG5OCymXaV4x8uARX0L8DLTwH8BNOg+CNv4rtbmV7a51LTjLOgke3d8SAnON4dmI
A7H2NCepLSeh+A+orlgNgznkjkmqDAx5AO0nkhhXofx5+H3/AAqb4teKfBiX1vqiaJfPAt9akMky
cMjfXawBHYg9a86lPmMzMe/OR/KtCHY6TwD4Xn8ceM9F8PW9wsN1qt7BYxPIMhGlkVAfcAsK/ez4
N/DC0/Zb+D/g7wnqvj0adFbv9nBuzEsdxcSOXZIy3IDEkhc8Zr8F/h74rbwT418P+I44vtEujahb
36Rj+IxSrJj8duK/dLxtp3g79ur4OeEPEvhnxKlvY2d+ur2zP8hW4jR18qZTypVjzx24zways7lI
8X/4LGeBNI1D4FeHvFb2cY1/Ttais4b1VHmeRLHLvjJ7qWVTg9MV87/8Evv2ivG+g/FTwz8JILyC
bwbqt5d3T288IZ4m+zvIdj5yATHnHTJNfW37c9vp37Tf7HPiLUfCmvWZuPC942pTxNIMGWzEizwH
0OC209yF7HNflv8Asl/Giw+Bf7QngrxlrMTjSNOu5EvtoLPHFJG8TPgddoctgc8dKsEfpT/wUv8A
2tPHX7PF74b0LwhLp0Vvrmn3TXj31qJiBuSMYyQB98/l3r8aJ4wCVUj5AOQc5r+h7x58Kfhb8fl0
zxt4mt9M8T+HE0dmtZbj5oliciTzgeo+UV+CfxutfBdj8VvFUXw+uLi88FC9b+ypp0ZXMRAOPmAb
AJYAkZIAqkyW7HDsisSwPQ4yKVUJ4B5H6mmFepxn8fUUKCoY57gBV6mlcSZ+gH/BIDxzrGg/H/U/
CttKjaNr2mS3V5FJgkSQAGNkOeDiQg8cj6Vc/wCCv1lE/wC0noDIqRyN4Zt2kkHDNi5uQM/Qcc1p
/wDBHr4b6fr3xG1vxpHrGzUdAt2tH0oxD5o7heJN3XrGR+FdD/wWR+FOrReI/CfxJikR9GntF0C5
QEeZFKHlljOCeVYO4OOhUetRF3bNLn0f/wAE/wC0sPB37DmjeKLDTrVdV+wahdTSBMGZopptqsc5
I+Qd6o/sGftleKP2tNU8Y6X4q0TRbW20u2heN9OjkAlEpYFXDswPC4OKw/8Agmj8VvDfxR/ZYf4W
wX8Np4n0a3vLS4sTkP8AZ5ncpOo4yv7zBx3HbNd1+xl+xfcfsiah4nv7zxOuuQanbRoT5XliERlm
59RgnmkB5t+yX8L9D+E3/BQD42eH/D0H2XSLfS4Jbe3xxB5pilZAcfdBcgdeOK+y9U8N+FPiBrm7
VNGtdS1Hw9dKIZ7mEF4JCiuCjdRwwr40/Za+K3hf4j/8FEPjPqXh/V4dTsdS0e3+xTxH5ZhCtukh
XODwfbmvr/wLqME/jv4jWaTI89vqNs0ka9VD2kWM/XB/KmNnF/H7wJ4T+M/wttNdvrFbqTSbiHVN
MvGTbLC6SrnB6gNggjv+VeoeLYdF1Wzh8Pa7ZxahZ62ZLM2k0e+OUCNnZWHptU15W2v6a37KsmpC
5jfT4bQiaUNlVCXG2QH6EEfhXb+N9Ttrfxv8OGlnVVuNRuI4SWAV2NlMQB7nnFTYVibT/CHg600O
8+HNtpFsmjpZZl0poy0JglZ1I5zwSrfSvOvhb8KtA8Afs/8Ai/wPZWwm8PWVxqtrHBcAP+6LOQp9
cA45r0Kxv4P+F061ZGUC5/sOzlEWeSnn3ALD2BIH41xvw08f6F48vvir4FsdTgXxHper30NzZM48
1Ul5SXb1KZfGcdR707ahZHj3/BNPw9Z3P7GEdhe2cFxZ3N5fxvBNHuR4ycYYHqCM8V+J+qWqWupX
MSZ8uOV0U4wMKxAx7YAr99v2Q/hXc/AT4Yn4X6zeRSavDLPeQNvH+kwyBSzxjOSqO20nHp0yK/Dr
4wfDXWPhN8TfEfhTxJbCz1bTrx4pWLAq6H5kkGOzIykH35wa0gTb3tDztl8wkHIAJ+YU4xcEjgA9
M1M0XzOqFGXG4HPUckfyNNQ4jeQHcTlR06//AFqYz7v/AOCTXxKu/Dn7QD+EPsi3mleJrOVJfMGf
s8sCNIrjjGCAVI9we3PTf8FgvCuj6d8avCN9aadBbXd9ocjXk8Eaq0uybCl+MkgHA/CsH/gkx8Ld
Q8S/HIeNYb20+yeH4pY7i0dj55E0TKrgAcLu4zn19K9L/wCCyvgPXp9c8CeMILSaTw5DaS6Zc3cP
KwzNJvRX/uhhnB7kYrKOsmOxq/8ABJjxXb+PPBXjz4Ya/odnq2h2oS+SS5RXUpNlHgZecjKMwOc8
n2r6g+GfjXw3rHxc8Yfs/wBz4UtrzTvC2lQ5vbtEkW6t5ETbE6bccLIB77enNfHv/BGy+tj46+IM
H2gtdSabbsIiMfIsrDOe/wB4V9A/BS7hX/gpb8c4ml2yy6NZbI2PLbYrbdge2R+dStLia1sfGHjj
UtI/4J8ft063L4c0sa34ehjSZtJmYIwguIi3ko5BHyPgjjpge9fqOvxq09f2Zm+KQ0Lbpw0E6wdI
Lj7nlljFu24PpnHPpX5Qf8FNr7P7ZXipWC/ubPTimFHOIAcE/j+tfoX8BJtM/aA/YB0Xw/oc4ke4
8PLolzArjzbeRB5Tqw7H5SRnrkGk7qemwz8pP2n/AI56R8b/AI1v468MeEk8EyNaxBkEqt58yE4m
baoGfujjnAr9gfgbL4P+OXwO+HfxS1/w1YabqWn2b3YnZFY2xQNHP82OUbYSRj07ivzl/bj/AGHP
Dv7OR8IzeGPE0ckes3v2ObTtVnUSwlgSko4/1eVIJPGSK/SH4H/CTXPBf7Gln4CvfKudbXQry0UQ
P8jtKJTHtOBx8680NXkuw9jkPir4I+FX7dPwA8UTaStkyaPPdR6Zr1jEpe3ngXIdCMEow4Kngg/j
X4VTW0kUxJbexG4uDw3uK7pPGHjn4Xyat4Xg1rXPCrxTva6lpdpeS24MoGxxKgIDNxjJHSuIg2rd
Lv4HQAH72Ogrb4UJIiZQQqsuSP1NfYf/AATK+DPh34s/Hww+Ibdr2DT7F9Qtwr42TRyx7Qw7j5j1
966z9pP/AIJ4ab8NP2Y9N+K/hrxBJerb21reX1ncgMHim2DMbDqVZwcH3qP/AIJJ+MtI8NftDtY6
jL9kn1jSp7a1eRgqSTB43C89yqnHJrnm20mgPs39pn9tb4a/sy/E2PwRrPw9n1uT7HFdvPZJCUij
fdjKsP8AZPfnmvi//gpBYfBDxZL4e+JHwu8UaM+s6q3k6pounOC0g2lhOyKf3bqflYYGeOfX6C/b
e/Yr8efHn9pqDxPpFrE3hySxs7O4m8zEgVS+/Abj+I/lXyr+3r+w9Z/ssWnh3V9E8RJqGnajI1tJ
Z3Uo+0xyYLBlXvGQpB9CBWsLX0JZ4j+zl8ZLn4F/F3wz43is21A6Rdh57WEDMsbKySKue5R274r0
79uv9rzQ/wBrnxl4c1XQ9AvtDt9IsZLZhePGzys77udueBgd+9ZX7CzeC5f2kvCln43t7WfRb55b
Ui92+SXdCse7PHLHHPrXb/8ABTT4LeB/hH8b9NfwFFBZWOs6b9turO1kVoYZhIVygH3QwIOPbiiM
k5PQpbniX7PPxr8VfAfx9D4l8JTLbah5Mlu6vD5yyRtjKlScHlRiv2G/bl1q41b9gXxTqVyyG6vt
KsJZTt2qXeWEnjPHJ6Z4r8if2ev2efGHx01HUE8LQxTXGlqk8waRQcEkggEjPK/5zX7LftV/C3xB
8Rf2PNQ8HeHrPz9clsrCOKBjt5jeItn8FIrP/l5psVtufAf/AASlXV/+Ggb660zFxZjS5BewsArP
GzLtYHplWPTPOTX3L8Xfg58GPGH7Ruk+IPEnifT7D4gwC1+y6dLeJHK3lktH8hOWySOK+PP+CV9+
Ph9+0T4r8G+JFbSfEQsZrQWl0ux3ljlQsOe+MEDuORXsPx4/Zd8d+Lv269N+Idjp/neG4J9MkN0s
gLIkW0yd/l5Xp3/Gsf5roqVr6G9/wUm13XbnWPhP4X03Q5mF1rq3MOsMym3EoRkETDBIPzhskYx+
ntn7SngrwN8SvhTo1h8WtTtfDVhHexzmS5u1hQzqjLtV89wxP0ryH/go7450XwlrPwTn1O4i8u28
ULczxZDOsQUZYrzx+Haug/4KIfBrxX+0H8JvCVh4Ht4tTkh1hb2RhKgHk+RIA4LHB+8P0rRJ8/yE
dB8UV074QfsOeILbwTay+LdEt9Fnt7VrKZZP3EgYNLvHVUDMxIBPFfgrcon2ZBHGeBnhs4GeDn8q
/dzU/D1z8Mv+CdesaBrRgs9RsfB13ZSq77U84wyDGWPcmvwpvX8xApLdRv28AH0H5Y/Ct4aIhoqq
oZjlz8oyCf0qKdELxkBlI6qTwaS4VlkIHzAHjP8An2qQIEQ72YMT3PT8O1WSA8qRtwG1wflcHqad
I29ApbbKDzgYH41GdqMY15Dc7j2/ClEZYhgCyHghep96kpDTnOxT1JBpBIoRmOZD02EDGfWhikZ2
72EnQrtHFPlAZ8CJgeuD1xj1pMYxlXnfGVP8IxTpR5bqi8knJfGT9KEUltrnYrZwVGc0jqFikVT8
4YEc84xzSuIkEKhwpG4DJ254pkYdpDuTaCMgH6VGzLITuGNrZzz2pXfPBPy9m9Djii6C4/YhLKpy
W5Iz0xTmn2FAMr1O1SeCfX8qbKpDAkHAHynHB470wQ7mRuHUdt1AtSS4XdOwVySe4/z+tRyySE5A
ZUIw2OhHXk1LGwRPMDblzjGMdaZ5sYhdIx04Hr+FCuOxDyV3dVXkc8Y9Kq7kbLNnI5FWZbfaFToS
T8rdqhJCuN53DGMZqrAhy/KOQxPXgfrSybVZlBH5dR6UMVLDa2FHQAdKHkZlBPqWz/WnsG4vlODh
TztGSSCDxT0UImVDKxOCG6cUhkZQEHI9sZzUhl8v/ZY9Vxz/AJ4FILEkcCPMGZXKk9e/H+RRGuXY
8Bf9ocn0pvEu5IgUfGcL6e9Nj38jeWx6rmpbsMqlsHCEqvqTnP4VM0kbFNwwQvOe57fSopSrMSq4
B5HNSxxebwynPWtLkkcq/KVyMqcnFE0i5IwWXtk9BSvMoJX7vBJx9KhyNuRnK5xn06VNx2HySblw
DjPPHamLO65CkjPB9DzmlZMqp28EEj9aeY2khZgPlXAPtQMbIFjI6ljznvW9p+qXGj3VleQXLWt1
aP50UsTFXicchgw5BBA6GsJFErHnA7k1taBoFz4h1GzsNOVrm+upRBHBkZZzjHX3Io0Gr9D2b4pf
tY/Ev46+H7PTfF3iJ9WtbOTfDG8ESFSAVz8qA9D3Jqx4B/aw+Kfw3+Ht74I8PeJ5bHw1dGXdbC0i
O0SABgrFcjvwD+Nc18Rv2fPHfwds7CbxnoV3oZvATC0ihkbGeNyEgHoee1bGg/s1/EjW/h9c+NLL
w1c3HhyEFmvLUrKr7TtONuTx3+ntVaEafI4bTdW1TTvENrqmn6i1pqVnPHLbXMZ5gmQhlbPqCM/4
5r3XxB+3l8Z9d8V6Lrd34m8nUtCEgs5VhRVJcFWLrja+R6jHFeG6F4U1PXNdstK0WB7++vZljgto
Pvlm6dfqa9RvP2OvixZeKLHw1P4Wul1m8hkuILdhtEiLwwBOAeSOPfvQmnuUvdPQF/4KYfH+WMGT
xfasE2sQmk2+HGB/EU6Hnt3rlPjD+2p8Vvjj4Vbw14q1qO40rckxt4bGOHewyM7l64zx+dasX/BP
74zxoUHgq+OUBZPNVdo9PmIzjjpxXF/E39l34j/B3SrXVPFWgT6Lp8kqwrdSYKlzyAWBI6Dsea09
3uJtGt4g/bE+JniX4Saf8NdX12K78LwxQwCNrVfMZYvuKZPvEcKeefl61T+B37WHxJ/Z4ttUsfCO
qLp9jqU6zzwXkQuAzBQu4Z+6Sq46+lReJP2ZPiH4S8KWfirWfDt3aeH7tQYrp4iRgrxuIBxnjGTz
VH4e/s2+Ofitod7qfhHwzNq1haHbPPbHdhgAxGCQScVGg1e2hwOv+INQ8SaxfavcyCW8up2uJZUw
m6QksW46ZJzxxXsXxA/bM+KnxX+Hln4O1/XXvfD1mIz8kSRzTFBtHmuqjcAOeff2rxXUPDl3p2ov
ZP5tvcJJ5bQkfdbpg16P8Sf2dfiH8LNCs9T8R+HbvS9NugNlyAWhPGQN65XJyOM9jVpIl3PK2mln
EjySvKDyvPQcnqaiubppArMd0a9T3x1qa7h8hlKeZIucZYYIB9fzqts3r5YURsMYPqeuaRjrcvAe
QZHjjA3c5HT6/rXt/wAM/wBtH4rfCn4X3nw/0PW7aPw/Os4BurYSzR+dncI2JyvLE857/SvBgP3Z
8tCy5LM/rx9eelFuGlnEZ+djyu4gClY11tY9Q+D/AO0T4z+A/jZ/FHgq6gsr54DbXXmwrLFcRnBC
upwOCMgjHeofiT8dPFvxS+JH/CwNX1LyfFazRyx3unjyRC0f+rMY/hxgDnPU5zXnjWyvsDqEVyS0
akjvwRxzSTPvklSFHYx84JA+Uf4UWFqfYdj/AMFTfjvbW4jfUNEmEQCebNpK726fMcP9e4/CvFPj
t+0948/aH1q11HxrqUEgs4/KgtrGLyIgNxIymSCTkV5ZPZl52kDDH98nk8dMfnTZJFeZtqkyRYIU
DHAPH6kUuUu59AfAb9tv4n/s9eGJfDfhTUrSTRmn+0Lb6nb+esRb7wjyw2KcD5RxkV0fj3/gpN8b
fiN4V1Pw1qGpaTY2GoxGC4m0yx8qYoeCFfd8uc845r5aDNGHRwFIIBJ/hH+NNuYmeN3A2oBnJHHQ
cfpRypA2zu/hp8WfEnwj8ZWfi3wvffYNc05m2Oy5R8rykigjepzyDX0ZN/wVV+PJKCK60BkUZd/7
LGc8cfePXn9K+OvszNMvztGzJ0U/gPakleVpDuJ2K2GKjn27jvTSRHOz0D4i/HPxh8UPiZcePdX1
jZ4iW4hnguIY/L+ztGwMfljkAAqpx9TX0Cf+Cq/x3tAym90G4jGAGOlAH893Xp2r45VMlCjeZk8B
hnnHUjvT7r5biIYONx3Rflg7vfnir5Li5jW8V+Kr3xtreo6vqsok1C/leeZ41ChnYlmwAOB8x4rL
iuUieRANwK5IPc4AJz19arkNtZogWwuV5xx04B57ipbfdtjkdfnLYdWPGMf0yKdrGabvofUHhH/g
pX8c/A3hTT/DthrenTWOn24topr6wE8yoB8gLlsnA4B64A+tfN2ueKdV8Uatdajq95Lf3lzM00s8
7FmYsSSc9uew4/KqDLvUFn8tgOf6fpTWYyDO9mV+hUc4znFRsapS6nv3wS/bl+LHwF8Ef8Ip4W1i
zg0eOWW5hgvLJLhomc7mAJ5xnJx7mub+Of7VPxA/aLnsZ/HWo2l3Jpkbpax2lssCpuO5uB1Jwo5P
AHfmvKWVW3lUGXO44boPb8xUUm0sSqDDbWMh64HXj+tUlfoD0PZPgh+1t8Sv2dbS+tfA+tx2VlqL
LLPZXlsLiNGXjeitwCc4Prx6V3vin/gpH8ffFnh6/wBJu/E9pb2eoW7288tnpUUbqrAg7W/hJBxn
tXy8sibpCS+wj5Q/cduKJBI8ZG9jGcnYTkDtkDt2pXs9h7m1oHi7UvCmuadrOmXU1rqWmTJc2l1G
xDQyIwKsPXkdD15zXo3xf/a2+Jnx2uNJm8VeITdvo8zTWH2eCO2WJ8qQyhFB3AqvJJ7+teMhZDMY
UztxkbuBU+9o9jMoD9PXI9ad9QS1PXPjR+1v8Ufjxomm6H4z8RLqdnYOZoIYrSKIeZgrvYquSdrE
dce1ePuzIEjbJccnAwCaLhBO/OAi/Nweh+tRzb2kO/nCjBHf3pEyRHcNvYn7rMeW9AD2ohRTcEhj
tIwaey5DFgxB+6COAMU6CQROquWQ/dIHQ/1p2Eo2FRjIRtfbECcgf3hx9RXd/Bq98MWvxQ8MN43h
Nz4RGoxLqSlnG2A8M3y/MQOuB6Vwcny71XJydxGCBVwuJEPCscAFM4qWjRbn736N8J9SkHgbVPhb
8SLmw8A28cMn9lNtu7e5teCBG7fMAVJGDnHHTpXy/wD8FdPip4ZPhbwt4FXVEl8TRaj/AGhNYR8t
FCbeVA7ntkvxX5sad8UvGejWNtpumeK9a0uytkMcFtZanNFHGOvCo4AHt7msDWda1DxHfTX+p6le
apdMFD3F9cPO5AGACzkn260RWuopRUtEzqfh98afGHwfv7nUfBfiS/8ADV/cRmGa4tZOZEzkAggg
89M9M8Y5rtr79ur4631sYJPipr/kzKUZlaJcA8Y4TPc8/T614UF3Mw35Y45A5AqQRLHhPMYqGOT0
I9qp8vQhcxJcahJJcs7Dc2MGRzyfx/Gur+Hvxj8bfC03dz4Q8Uaj4avLzalxNYS+W0iL0VvX61yJ
AjUDaHxnqOlMzh2JU9ztYcc0kPU6n4jfFHxb8VtXh1Xxf4j1HxJfQw+RFNqEu9o4wSdqnsMk1+i/
w6+On7EreBPDh8R+E9OstfWziW7gbw9NK8c2wBgHjQhvqCa/LpcRuQFJXnDBqmM7lwr5AbhRjqMG
m7hFI/Wa1+Pv7CtlJIbfw1p7yAnew8MXJ54zkmP3r4h+P/xr8IaF8cNY8Qfs8a5rfhDQr+yhS5On
+Zp6mYM5ZUUYOzG04IxknFfPBkkZsh/LIycY7fWoXgZ1ZwOScHkdMcGpSG9Hodt49+Nnj/4haPDp
3iTxvr/iCwSQTG21PUZp4y69G2sSM88elcQTu3LyB3w3BPU4pbmNxOcZ2qeRxjGKWZVeUrk5ABBI
5z9KpqwK/U0dD8Raj4ZuodR0u8uNN1K3ctb3tpKyTRswIyrDBU4PXNaXiX4oeLvG8SQeI/FOta3F
C/mxW+o6hNPEj427lR2IU4OMjmuZZ5BuUlmAPAzgfWh8g7iu9WGck8ihK427GxoHi/WPDF5/aGj6
leaXclDD9osZ3t5SpxlSyMDjK5rdn+NPjqdJIpvG3iKQSZLGTWblsgjBBy+D6c1xAYSDBGD6Adae
YpCy7VJUjJ3dhTsiVJtntX7Ivxi8KfA74zWHiHxj4e/4SLw+9rLaXFskCzsoK/K6o/DEEY5IJBNf
fx/4KXfsz2u9I/hrqqRpjJj0KzCgnnHEvv8AhX5NEHy2QcYyB70qpIQoIYk8bQf1qLGl77n6dfGH
/goT+zf8Qfhv4j8OD4a6xcPqFnLDD/xK7WELIU+Rt6S7lw2Dkc8V+ZOm6nc2LpNDcTWl5HkCSGRl
IyOTkHPOB/nFVrlWlOEYhd2SBTXAJJJ5PA4yWpkWXQ1dS8SaxqNt5V5rOoXUTESNFc3ckqNg5HDM
ehA/IVnxX8sd0skczwToQ6SKxDZ9j/X3pFtCwbkAYPLentTGhDMxxtC9+1CFaxYe/uJLs3E9xLLO
+MvI5LjHTn+VNmv5numlkllmlc53ucnOSev1NVp42iQvj3yD0pHDBgdpY9sVdrkJ26l0alLFFiN5
I4hgBYyVBH06ZqGS6Z4Qgx5ScKrksF78enOelRhDIvzMF74zUZspIowR8u/OefyqeUptssJel1xn
LAbSwOMZOcfnzT7m5EwZJiZlYkF2xxVZYirKhzhh+Z/yKkl3GLGMFc7i3Hak4lJsa04VlywYLwGI
6D/PamPMxLkuecsSnB55psoLiNmCgHlT7c0+bJdQ3zDbnk4wKfKK/cJrlnBbaGBJOWQZJ9c9fxqP
z3yX568YHSnTFgxGBwcYHb60JHvQvxjJzznB5o5RJoSJgxBIIxyCevA4pzXLxqQDhxnsMjPoaaY1
MhPJdhz9B3pfLaSQbFz7kUJMpvzIdzEcAYAxx6daQuzM7nn37ipzCwX5gcA9QaJI2AIZT94jZ34H
rRsFriLKAgVju7r0/wA8UwuFKhQwx0BGOf605YwIwwAL5wMHP4UixmTLKTwecHpTUbmTbWgGcyys
CoX5SAzAHGe31p8soLHagAwCflGRj/8AX+tMWJC6tv4PI/xpZtrvt3AFecZxmk42NIu4hZVXazEP
jBAH6f596c9wGVs4wTzjqeP/AKwoEOcBWwp6Z7UgtnZ8qeegA5zSLuVlQlg4PAOOeM/hRKyAeiHA
ye1Wnh6DaVx1GetNaDBKlgqr6nmqUbkMqDrkZxkjBOM+9STR70DAnI7e1TNCQxIIK5698etPlt2V
WkP3RkdAM1XKTtuU5Iz2OePu0ixjAwSGB71YCiQLj5jz93B/OpViYkKcLnJ5wM1PK0ONmVCjRSYU
ZCngGnhmZDGp4NSCIxvliFUDJzxinrAMZHPPORn8KkuyKgjYt0zjipmkERG3g9c+lWns9r5A+Ukr
gkZJHUVDJCjIzIy4Xg8jj2oswHJMEaNgxOPugcY70lxceVJuKkRyE5PXt2FPWEEgsoRQwDEfTim/
ZA6lgMxqRzu7dKNBaj47wR5IwwJwCBUErmQ/McHNTQwMZtiqWJyox7f/AK6bJHhlByBkoWxxnnij
0D1GRkxuU3Hd13D9KdM5kQAou5e+BzQ8QkXco2lOMYzmnNCJNpQDA5JzmgViI71lVkG7b/Ce9TST
nayyMdnp7+tRFGQEr82RzipEtCyM3Bbn5Tjsf1pFWt1GSXZWMKCQjdgOPrj196WGd/uYLKDubPbi
murp85ZQeynHNSiBpIQVU4I6gdR/kigdriTXErOuGIAB9DnPY8VF5rZ5Ug+3HH4Ur8IFaTb14phU
s67cMuB06U7EPclFwyBl37wwxtI6j0/So5JyTuxt7DHT2FSG2BUkspbuQw4+tJ5WxnCuCeh780WK
KkjHHI4YHHsKChUN04Geasy25C88Y4HOc8c1GtsfK3OcAnHXofegm2pXfDjoe+RmlZmiJxk455pZ
lYZ29OBUZQuzjOMdfypASxTGOTByUPtuHTvUxkUjDYyRhcgD3/nVdTJGu7AwR6UYUr8xwQe9IfNY
dcOSWB5J5IHfH+fWntP5m4uFZfvDI79P5Cqx3JnHK5p7P5iBehIwTnFIpMlkn3xqgQkAccfn+dIJ
HjIYvnHPTP8An/61ViXSMoCQc9jmnfNJgE5PTA60C5izLdGRhuVA2OWxg9MYz16U15myduCyjvVc
qG5zknORQwz90EcUBcsBmG4g4U/wg8HvkimtNkKp/h6Bqh84vgfdOMU1TImf8aQm+xdW7I25UZJz
vZQTmlN2yKQqkR44yMY+lVpWDBGHT0HAphdmGCSveglXLS3hTksc88A44/yKSa7LzLIihAONp5qs
IQQSWO09CfWpGbaMHlcnGOnemaJ2Hyv8xYZweck9DUUj5AP3geM46GiQnBH8ZGPao0jdlCk7Tg8Y
60rEtMmV8NnJ4HTpk1ch1iS3GyJ5YssWykhBB45/SqaruVcLnbwc96a4HmHKFWX0NDKTaNIahcQC
4txcTJHNnzo1lPz55+b+9+NVjcMjsd2COee/Peq+8lsKQMdaCpkjGSAoOBUagacXiPUoLI2aaleR
QbdqxrcOE2+mAQP/ANdZjTGRWQMxA6A80rtsw2GBHXccio3JL56E88d6tEsVyd3CgE8YFKBtkX+7
jnNIilj0OM9z1prR5QNnIPGM9KSFbqauk+KdW0G5ebSdVvdJmddjS2Fw0LOuc7WKkbhx0Naet+Od
e8RW0dtq+vaprEEbBlS/vZLhUYAgFQ7HB5rmHyAcL0NIGJJOOAOc1SsF2jotC8V6r4WvBf6JqV5p
V8qlPtFjcNA5Tuu5SDg9xnnAroZ/jl8Qbi0a2n8beI5rdxh4ZdWuGV1xjBBfpg4xXnzNt28Edsk+
1P8AOZSTnqOuP0pWHzs3NC8U6n4U1e31LQdSvNF1G3D+XdWMzQSru4YKykEAjPeuhsfjN49sNb1D
VbDxtr9tql/t+2XS6jN5t1hdo3sWJbA4GTxXBzSbyMfKegqVp3jU/MEboCKLDTudlH8VvF9p4Yuf
Dcfi7W10G73edpov5fsz7iGYmPdjJYZPqST3rRvvjr48vl0dLnxtrtxHo7rPp5fU52Nm4G0NHlzt
IGQMdia86RikahiMsc8HJp8wj8sMrZbq2fSlsF2elQ/tB/EqPxI3iOL4heJF1prYWYvzqUnnGEHI
jLA5Izg4PU1j6P8AFvxl4b8W3firSPFOs6f4nuw4utXtr147ifcdzb3B+YEgHn0rislRs3YHT5eQ
eaR5GjG1SNo7DmjcLs9Tk/aQ+Kd14lsfEsnxB8RTa9ZxNBbX0moyebFE+C6A56EjpXpP7L/xz8EW
fx1v9Z+Osc3inSNatZkvdUvYXu50n+TYxCgsRhNvyjgYr5kibc7AuQo4HOKn8zB5G5hgrgZz2oaQ
0frJJ8b/ANgeYwf8SexGPljA8P3RC8YxgoRXlf7Tniv9jHxz8FNetfAY/szxdZxedpcmm6TcQs9w
BkI5ZApVsYOT9K/Ox5dsg4AOOc9KinJkjDOSRnjNSKyudv8ADX4x+M/g7q91qHgzxJf+G9QuofJn
uNPfaZIwS21gcgjPOcV0HxG/af8Aip8V9BGieMfHOr+INIWVZhZ3LqUZ15VjtUZP415SZQWGGwQM
E1HJK7hRgFE6E04judn8Ofit4p+EPiCPxB4L1u78O6wkTwfaLRgGeN+qsCMEZAPPcZ61uaZ+0X8S
dG+Jc3xAsvGOop4zvdwuNYdkMsilVUqQVK4wiDBH8PavNfNjM25ugyAgHtTVcqV2sSV5+lFw3Ol+
IHxI8S/E7xZf+I/FOr3Ot6zelTcXk+0M+1Qo4UAAAADAArqfhB+0X8RPgS2pyeAfE114dXUSjXSQ
rG6y7clSQ6sM8nnGea81kQkNIxPy857GmmVAT/Fj/CjcVrHafE/4v+LfjL4kfxF411268QaubdLU
XMwRcIucKFQADkk/jXrmg/8ABQ34++HdGtNOtPiFdLbWUK28KSWlu+I1AVQS0eTgAck5r5vjlXkE
MQeevQ0NNtQqm47l56Ur2Hc2PFnizVfGviHUtf1m8bUdX1O4a5ubmTAMkjHLNgcDmskuVTywQQ3B
OM9+/wCVV1cEsoyQOlDys8qug4A5OKd7iuewav8AtVfFXWvhDB8MLjxQ8ngmCJLZdNMEf+rRtygy
Y3kAgcZ7CvNNM1aWwvLa7trhrS7s5VmhnjOGjkVgVZT9RWUx3sSzcN6fSkMfCksME8gntigSbufW
Vv8A8FM/2h4rfaPG0IiRViRTptux47klM5PqfU14d8Y/jV4y+OvixvEHjbVRrmrfZ0tUmMKRhI1z
8qqoA7nJ4zXCYQSEgbYz0G7PeiOUhwwGV9/ShPsXoO8obSSpBHzDPQintMzocSGSU98nI9qhkHmn
cHy49WpIpzv2AA84+XsaA0PSfgn8e/G37P8A4kfWfBWqtpd/JAYJC8SyxshOcMjZB5HBxkfjXuDf
8FRP2h9x2+LrIHBXA0q34Oev3evb8vrXya7+UQyklgc8jj+dJ5mV4G1s/f8AX8KmyTK5jvfFPxl8
XeL/AIkSeP8AUNWmk8Vy3KXP9oQEQusqABXAQKBgKOMete7D/gqB+0Esh2+K7Jkj4UPpkDbuMc/K
Ofxr5I3TKPvnodu05K0jROwYNtAUZODzz0zTVr3sRc9J+Kfxr8VfGrxnd+KvF2onU9bnjWPzMCNF
RRgIqLgAcfzr0j4Yft/fGb4S+ENO8K6F4oS30azUrBDc2kU5iUkkgM6k4yeB0HHWvnEiND1CHGAT
yOOKcIxudsbgeMdv88VV0ylJnuXxv/bC+J/x98PW2keMvEYv9Ot5vNS3treO1QttIy2wDd1+leHS
7DDu3hyfUU5p5CFHlhicgHI+mMUxlG0ERhR0Iz3pARpHGqhctnGSRyfpQ4DHLcgnAyOnHenlyoDI
BwMdKjd1CfO4BPTj9akQ5YzuZkIGOCfYUspZWKllRNwBZD1qS5djKWACnB+70pZDHFuJI2H+HrzQ
rhYhECs7sqlXHOD/ADpCdwyx3KG7E1L5o84MxJxwQw4ApjyqU2qGz9flobJegPtZ+XYR4xyen0/W
kYZyyBfLBwDtx+dKlsksbo7qjgbwTxkmnIqiDyGYhySc46UropDDGZd+75Cq9BjBpVaNtrOxYqOF
WkXbDuILyBe/qKmzG8jFCBz92ToKTGQMytuUDYAS2OmabPEvyuvC4yQDVmeNJH+XBYD7mcYPQ8/h
URWM7mLEgZC559/8aEA5QGtR5SEqOdzetIJFaFf3YXnHTJJx+lMCtFHtBCxnkZ4wf8aUPlUk3Dee
5HXPtirTERSYcOxzkD5Qe1VeCW2joeSelX3dPLPykt6k1UMY8p2ck5OQAf4qoEMcqU2hT13BlHSh
vmZgc+gAFPJ2LEqhgfQn1/8A10ot2+Z3yqdMn6UAOzngr5fTn1PPtUjIAGYZKjqc0kwIhwFJUcBw
O9OUbItzEgkHG2psFxCX84KNuCeoHtTwotpSVfLt1OcU6NCjAMjk8HzAOabPhiThRn+8DmlYaV9j
PMYHRt2Rk4qVmiNsigs0zHqTwBk//WpsyESZxjHOB3o2Dy/mDDjOOtXoyASMGQb22qecnpQ6rvds
kqOhp6Rjyyeny9BTSPLckgEgjOfSpaKQshVE8tkyRhsnp0rb8GWa6trCacSpS5TYCTgjgnr2/wAM
1gvKskjNjC98966b4fTFfEloCuY1PzAcZ9j+tSxmHqGnPp9/dQsT+6k2FmXHQ9xX6gf8El/2fPCX
jDRdR+IV9aC61XTL1rBFlQNGcxo6sue4z2HXHpX5m+IZI5df1El+DKW5HH146mv15/4IzqYvgd4y
jO8v/bgY78ZwYEx0+lJajto2j5q/4KCftY6p8ZPHF54OsY5NP8NaFcmAwSxIzT3CO6O+7H3cqMYx
XuP/AATK/af8Maj4Hk+CGvyf2ZeM8/8AZU87Ksdykn3oQc8SBixA/i7c8V8N/tKv5fxz8cGRfLgG
q3LK/wDESJWB4xx0HP8AWrH7L/wN8R/HP4j2WneHZbuyEMqSXGowkg2i7siQN6hgDwcg4q2kRCzV
n1Pv74Tf8E0JvAvx7XxDrOsDUPD2l3K32kzRDZIHWQuqyDPAAGD+NV/2x/2wba3+NvgbQfBM6vqW
g6pFLcauiJLbt5jBHgB6k4649etfY3jPwZ4l8QfBC88KaZ4mks/Fx0tLddYVsSNMFAMmevzY6/7W
a/ETXfCOvfD/AOM1loPia1ms9XtdTiWQXUnzCTzgdxOTlWwpDdCDnvSUepceWUkpbH7WftGaz490
7wbpi/DhrZPEl7fJDGbqESIF8t2IIKnHKgZ7V8k/taftK6H8UP2SvEXh7xEsWg/EPTr63tNR8PTs
PNM0ci+Y8I6shG4g19g/HT4zWHwM8G6R4k1W1NzprX8FrcsucxI6tmQcfw4zz1r8gf2rLO6+JHxA
8RfE7SrW7bwHreqyi11TZ5allVQCckHqDj25ppEpXktD9i/h54c0/wAU/Afwro+sWkd7Y3ehWkU8
MoyrAwL19/f1rmP2af2eLf8AZ0tvFejadOJtEv78XtkCTvRCmCje4wB3zjPtXPeJ/F1/4G/YYtPE
ej3Lw6hpvhayuoJVbLErHETznnIBBz1ya6P9lT9pDTv2lfhvFrcNs1hq9o32bVLE8iKXBwyt0KsB
kenQ+66oGtWfjl+1Tpi2vx28e28P7toNYugwGcNmaQ5z6YIGK/QX9hb4zr+1v8H9f+F3xI05dbfS
bRImvyuwXNs2Uj3AY2yptHzA88Hrmvz+/a/j3ftD+PhCuVGtXLp2HEjAg/iDj6delfUf/BHhUh+I
nj9BBLCx0qA/OuFP75s4J6jkVpdEx1jqfKf7WXwZg+B3xl17whaXxvILVleEEgMI3+ZM/RWArxGe
MlN2wBgOCT6Hr/P8q+wf+Cm0aJ+1T4jcEgtbWZxyekIGe/c4H0PrXx28hilwQeepP8R9hVozi9Qk
iJ/copEZXgqcnPc16/8As6/sz+Kv2ktV1PTPCk9it7psSzyw3ThSyE4B6+xrycbBbrJtIIIB2kjJ
z3r9OP8AglZP8N73SZIoYrm0+JOnTstzLCHBu7VyWQsRwYwTgg9GUe1S3Y2S8z5e8H/sC/FXxj8R
dV8I/wBjJp1zpSbp7i9YxwuAQMq21gwbPHqK8w+JHwG8RfDH4lHwX4ggjtNaFzHC0ocCJ1fpIGPR
cHPPYE1/Q0yJ5m7b82MZI5xX51ftP6h+z3L+0d4ePi6/v9VuZLiSy1u0mSVxESD5LjIByrAL8meK
nmJ6nyR8Uv2BPif8MLXQ76/0+K9tL9tn2qwczLE3VFc44z1yRjg81W+M/wCwv8Q/gb4JsvEOuJZv
pc8oR57eQsItwyu844z05A5r9vvCMdj/AMIxYJp8lzPYLEFha93eYVHTduAPHTkV4r+2onw7uPhD
f2XxH1a80nTJ4ZDavbiQq1woBThRgsDgAMRnJoUhvR6H5SfBb9ijx18efA2qeMPCj2N1b6fM0LWs
0pWaSREDYUY4Y5GB39utbXwx/YE+J/xW8PavqFppiaetjK0EltqG6CWR1XPyBh64X86/SD9gyb4b
33wss7zwFa3en3U0SrrNvJHIiG7VQHJyNpbvwc4YV9L6t5MGnXjyGWGIROzvbg+YBjkrt5z6YpNt
g72P57/AvwM8R+PfidbfD6zFvp+vzTPbxxakzRkSKDlTx14NeqeK/wDgn58UvCXxI0zwcdGe5fVD
GLfVodz2YzkMXcAhCvoeTmvrT4aD9naf9sLXBpOpXura3dKl/p1+iTtLb3wd/OjjO3ng7jkcc+lf
ojGiOsZILkAEM3Xp1oUmmLl0PwL/AGhf2SPF37OOp2lv4kSBxfL5kE1o7PFJt4bDEDkZGc46966S
P9gf4laz8Gbb4jaTBZ6xpF1bpdQwWM3mXCxnO5ihxu298E9/Svv7/gofrHwltdG05/G+oXreItKu
Ib3T9MWOZo54y4WReF2kFc5IOePwr6J/Z5g8EQ/C/Sv+FeGf/hEZohcWSSLII1VxuOzeAcZJyPXN
XzyEoxaPyHsP+CevxNn+Ds3xBFnC9v5DXB0/lbtEViGOw9cAE4HOK83/AGdf2Y/Ev7R/i698PaFd
WNlqFram6Zb9ypEe5VJ4zk/N3r94vihfaBpXgTVrnxNcz2egJCReTW/mZWMjBz5YLY554r42/Yam
+Bkvj3xNbeB3up9f0/U5lsL9YZFefT3AKk8YKZypLAdFNJzkSo2eh8WeF/8Agn58UvEvxW1TwPLp
g0yWwhZ5r+7Um3deAGR+hzkEc+vpXlnxi+AfiL4FePbjwn4kiihvUI/e7/3TROcLIrY+779a/oZK
Lvzjn6V+e/7VniH4CL+0h4TPiya71PWheHTtc028hmkSO3IZo5NrDhQ5UApx830oTZd7PU+Rvih/
wTr+KPw08O6XrdtZweItMvAA0mllpJIS6hkZ067Tk5I9unan8X/2APiJ8FPhdB411hbW8syIvtYt
mz9lD9C4Jz1IGQOCRx3r9s/BsenxeGNOTSvPXTY4VS3F0GDLGBhRhucYxjPbFeZ/tYXnw8t/hNqF
r8TLq9tfDV6phZ7WOZwJBhkJ8sHkMAVz3FCm7g0z8h/2bv2KfE37Sfh/XtX8NanpgXSpBBJY3TES
MzR70+YHvz+RrqPhN/wTk+J/xRn8Rxz6cvhX+yZ/s4j1bMbTtg/dwCCox1Br75/4J2ar8MPEHwvt
b7wHp1xZ6qkX2PV3ELqkkkfKFz93cVYMOc8sK+up9oil3biApyI8lsY7Y5zU3fUp+R/O9afAjXrj
422fw01OW20bxBNqS6Sy3LZVJC2FBx69j7ivdPiN/wAE2fiz4K8W6FpcNlFr9tqbpbjU7FWe3t8t
g+b/ABLgEHOMHBr6bj8X/s8Xn7af2RdLv9R1LVbUxzPPZzB7fVVkBUDIDhmVTzyNwGOtfohbBBbQ
hd4TYoUS53dO+ec1Vxn4W/tTfsOeLP2YrTStT1K7tdV0nUX+zxz2pO2O4CswjYE55UH9fatj4Xf8
E+/F/wAZfgRD8SfDOqWF87Rz7dIUMJ3eNyGQHoWwMgd+K/QP/gox4m+GOl/Cq60/x7aahPqjx/aN
D8i1lkja6TO1QR8meuQexr1r9lPV/AniH4SaPq/w80d9G0G+iExhELQx+aPlkwuSM7lOce1F2StT
8v8AwJ/wTD+I3i34W3Xiy8nh0nVIfOK6Fdx7Zpdg42sSFGccZ45rxb9nX9nh/wBoL4pp4IXVoNF1
J4JnV7hScNGDuXqMnOPXoa/fPxZe2Vp4Y1C7u4bm8sYoWeVNPVnlZR1ChDk9+B718G/sdePvgP4l
+P8A450nwZ4WuPtc14mq6PeS2RWZFCoJwrE7lVZCOPT1xQmwW580P/wS7+J//C6E8INJHbaEyGVP
FCxF7cpjPTs2QRtPPGenXyP9qf8AZb1v9mD4gW+h6tcw6jBqFt9osru3G1JSDtb5SSQQccEV+/TT
RCcRGRRKwyIyfmI9cfhXwX+3t8V/hF4L+I3g9vFfhfU9Q8W6RqcF2sj2haCWxY4m2ljtdccBcD5v
xouxdT5d1D/gmL4y/wCFMaL468H6zD4vm1G3hvm0mCLy5VidAw2Et8xGT8vXnp2p/wAQf+CZfi/w
N+z1P47udZtY9UtLIahf6K6YMSbdzru+Ubl6kY7HvX62fDu/8O/8IHpt9oNqukeG5LdLi1SUCJFh
ZQVIBOFGCOO1c5+0J4w8HeFfhHqmu+MtJufEPhGKPddxWEH2geUQfnKhgCg9ecZFCbFK/Q/Hn9j/
APY7X9qa/wDE9gviSHQbrR4kkEZgMvnBsqDkEYwwx1J4r1b4Yf8ABKbx74j8Wa7pXjC7Twzp9gGF
rqkKrcR3rb8KyAEHGBkhsHntX1Z/wTt+Jfw78feHdU0/wj4Um02+0O7njN7Lbokv2SR2aAO+Q7Hb
kHIPIPNfY8Wq2N1fz2UN5BNeW4Bmt45VaSIHpuUHIz70XYW6n8+nxJ+Al/8AC79oOX4a63fxeZDf
wW8+oQDcnkybSJQOo+Ug4PQ19P8Axa/4JN+NvDV3okvg7UovGNjduI75mPkS2oyMOAWIZcZ6dCOn
New/Ef8AaK+EPhz9s7Q7G6+Hd3Bq8Pm6Frkt9ZwqDJOyG3YZchhkg7sj5X79K/QG51nS/DWnWr6h
dWuiWzbYYlu5kiUNjIQEnBOAeAe1F2mJWaPxv/bE/wCCez/sx/DnTvFWn+Il1uzkuktb1J4hG8bM
p2ldvUZBGP1qb9l7/gnxa/tMfA/UPFum+KJtI8RW1zNZxWNxDmEyIFKbyQGCsGHI9fav0G/b3+Iv
g/4f/Ba//wCEz8GXfijTNUheygmt4kZLe4I/dF2JBT5sEMAfumtz9jD4naD8Xfgvo+v6JoMWjPDC
unXuwIuZ4lUNyvUHIIPoabkylHQ+IPhj/wAEm9f8Q+CdVv8Axrq7eG/EaGWOz0+MJNCcKNjMwPQt
npzivlj9nj4BQfFH9oK3+HPiW9l0qFL2exub+zAdEmjD4GTgEMUwPqK/eM+KtK8Q+HdSvNDmtPFM
dukga30+5jlErhc+VuBIDHgc+tfCX7NH7UPgjxb+1v4v8L6R8PP7DXxIUeFbgRq631sj+cWUDC7l
BIweqc43UJsSTUjzbUf+CQ+t2nxlsLO015rv4ezAtNqhkVbyAbfuMhGGO4cEdj2xXgf7a37H0P7L
3jvRLLTNYl1/S9cgP2ZWTN1FIjKrqRyDnepH41+3Gp+NNA0XX9O0W/1mwsdX1EH7HYXFyiTXGOvl
oTlvwr4t/wCCiX7Qum/CHVvC2nXvw5XVdVXULbWNM1yV4/LAhlUyr03biMrg8YbNHM2XuzwfSv8A
gl7afEf9nXw74w8D+ILp/Ft/Zw3Uul6qqxw7sYlhBABQgggZ4496sfEf/glXZ+Cf2b7vxG/iGVPH
enaf9suLO4kX7JLIo3SRKR3IBC89cV+kXgf4n6Frfwl07x/O1noHh6+sU1MzzTKsccTqGDO2AB15
rJ+Knxf0nQ/gfq/xB0bTrbx9oNraPeNHY3UZjmgXIkZXIKnaAcjrwaFJoznG99T8j/2CP2SvCv7T
Gta9beLLvVNMSC0E+nvaqI0lG7Y4BYEMynHrX0R8Nf8AgkP5Xj7X18c+IGuPC8TA6Tc6ZIiTTjdk
eahT5TtGDj3x7er/APBN39ozTPi5p3ifwtYeGrPQxo9/NqNtFBKCI7S4lZ1RRtBJRiQTwP5V9caV
8UPCeu+NtT8I6fr9hd+JtMTzL3S45gZ4VOOWX05H0yKXO2HLY/C74s/s2D4Y/tQXnw9tJbzXtGiv
YWE+nRGSVbN2UuTgHBQMVJ6fKfpX2f8AE/8A4JE6Ze6j4fvvhv4mePS5JFGp22qOCTESD5kLKvXB
PyntjBq58X/2zpPAn7cGgaNd/Dyz02XSZH0W+1EXiObqC48to2zsG1VyHwfUivu3x78WPCPwk0jT
73xhrlj4etbyUW1u87HY8mM7QQPTnPQU+Zjirq5+XH7eP7BPhD9nb4c6Z4s8G6jeSul0La9sL6Xz
TIr5KyIeq4YAenzCtz9lT/gnx4V+OX7PN9ea6NY8L+P1muYInueBEDhoZDGyglCCOnocV9Zf8FC/
jbP8I/gVczx+FLPxVpevI+nedPcgLBI6ExybNp3jgnIIxgV1H7Fnx4Hxu/Z50rxRfW9ppsunq9he
rDJwjwDBdsj5QVw34mhtiinqfNvgb/gkp4TsfhnqUXjvVnn8XOZjBqOmzMsEClcR/IcZwecV8k/s
dfsv2nxJ+Oz+HfG3h/U9R8KRNc2F1qVtFJFFDdIf3Yc4yu4D82FfsR4N+Nnhb4veDtZ1r4d61p3i
ZbBnhOXaOMTKuQj5XIB4+bGPTNfE/wCxx+2nqHxO/ap8W+Hb3w3pfh638TMZkSGXfsubZNr/ADjA
dnUH/vnPPNLmZSWt2aFn/wAEjvC1n8ap9VfWDd/DmaJm/sWUst1E5B+RZFwCuecnnnGK+VP2z/2N
7X4TfHLSPDnw8tdR1XS9eh321iEMjw3C53wq+OQVAbHbP0r9dta+PngHw58T9M+Huo+JLW08XalH
5trp0gbMgIbA342gnYcAnPHuK+Lv+Chf7Xfif4K/FTwjoVroGi3kNhcw+ILO9aR3mZE3K8TKPuZ5
BIPINNSYlBXQeI/+CW/gj4mfBvwzfeEJLrwb4ue1gmuv7ReSVGYoPNhkU8qQxJyO4x0NYP7VH/BO
DwL8PP2aL3WvDEN9/wAJtottHcyzRSSTrfOoHnfu/wCEEbnyMYxX2/qX7QfhHwh8GdO+I/ibWLax
0K7tbe5863BkH74LtVVGSxy2OPQ1w37Rn7S48IfsyXfxS8BNoviTTmt0mSLUJGRZ4pPkG0DB3gkZ
UjsehoUpClFLVHxP+wJ+xZ4U+M3gDxDdfEzwhqMUl0El0XVN8kIlt3XkoRxuVgDz6ivdfg9/wSl8
A+CYvEsXja8fxlFdyf8AEvcM9u1tEN33ghGX5Ge3Fav/AATP/aMuviz8L9U0HWWsLO88Lz+WixfI
0kEu6RWwW4CncnT+EV9BeAP2mvh38VfEXiPQPDPiO3udX0KQxXUNwpiXIZk3IWwHXcpGVJ7etTdl
cttT8mPhx+yNdR/th3Pg698L6t4g8EaHrXk30wjeIiwdm8mdsYJGBk46gGvtbxP/AMEnfhnqnxW0
PxFpE8+l+Gbcq9/4dLO6XBHQpJu3Lnv16e9eReE/25/Fx/bwOkeIF8NadpUtwfCNzcWCu0LlZWaO
43luTvIXBPAYivub4k/tVfDf4SeMPDnhvxJr6W2pa9II7UwoZY0JYIDIy5CAsQMn9KptjSukfnR/
MMeSD6EjGfavFvBVlrmpeJdNtdBmuxra3INrcWGQYWx94EYIx6194/8ABRL9nPSvEVlJ8dvA2sJr
mka8sJu445lKLuAVJUOScH5QRjj25NaH/BKHwF4N8Sz+LdZ1NIX8S6WyW4jmZcC3ZWJwAc4yByec
gelXLyRzUVJR1WqPvz4a3Xjib4HWL+JFtrnx9b2D7sIVjluFU+WxXOcHjPTqa/Ez4867438QfFbX
r7x5Lcp4jkvNlxYTIUS3+Yqvlox4TBznuK+nvj3/AMFG/HHhH9oRo/DNx/Z3hrw7fmxvNKJV4L9Y
5NrEttyNy5HH3TjrXv37Z/wd8FfHr4XeFfiRYXFromr6lLYq2oRgYlhmZcCQ8E7d3U88VlrsaSUo
tTto2fQHh7xTo3gv9ljwnrPiKB30iy0Kwe5QpuZQI4xnGeSDg9a/Nf8A4KL+MLbxF8adT1Xwrqck
2h3Ol29pdXmnylbWRiG+WRkPzEHOQRxx1r9J/FHwZufFP7LLfDZb8LdSaPHp32vbkMVCjdgHvt9e
9eUah+y/4R+DH7HfjvSLqygu9Qk0a4nup5VP7y4RGKOoJJBz6Y56dqST0sErKTl2N79h/UpNJ/YU
8KX+noslzb6deTRRE7gHE0rBSc84OAee1d1+y5+05oP7Sng6W8sT9l17TGFtqunvw0UmAd6+qN1B
+o7V5d+xLr+l6t+wxDb20286fbala3MZfDo4eU85ORkEEZx1r5d/4JN+I7K3+O/jKwuJltbu60lf
KjAASYiQFgDjquOnv6CqV+om3Ko10PPv+Cll5s/ab8QKsEjFIbbLZ+QHyRgnI44J6VH/AMEu2X/h
rfRyHjH/ABLb5R8vzE7Bkccf/qqt/wAFQLth+1XrkEaHabKzfejEhiIyCCM4z746fSvqL/gnV8Ff
Bnw9+F6/GrUdViuZpIZ5hfEYFlEAUliIyT0Ttjk1q1cIScIWZ5d/wV9hWL4peFrhYAT/AGMUaRlP
OZX+VT0B4Gfwr86BFtuJOm1XAIkbkdf5V9T/ALe37T9h+0f8VzNosCwaFpEbWNnerIWa6TdlpMYw
uT8uPbrXyvNH5kpJk2sx3dRk9P8A61axi2c6snoW4mjjjmUqzM2FLnJwOegHFfZP/BKLcn7VlmGj
xEdFvwncgnyjn6YBH418XRx+SXwxkYLknP8AnNffv/BIv4b2utfFnU/GR1hTqWg2r2r6WVwxinXA
kznB5Qj86mcbG8XzH6eXfiDwlL8XLXQrqOD/AIS+PTvtlnJMi7mhLMrCNs5yMHI9Dn1r4o+PR8Q+
Jf8Agob8M9H8W6RDN4biuTDph8kmOe3ePeWYk/MwkXBHbGa7v49+JtN8Nf8ABQv4KSX0y20dxYTW
hldhjzJBOka+25iB+VYf7fnxG034T/tBfA/xXqG9rfSrl5rhYkDOIfOiDtj2UsawHF2Z9j+KLu38
O2dkiatp2hW6YjjF6yIjADAC7mHTivCv2xJvAHjz9m/xMus6vo2pahY2TXFnJb3sfmLcAYBTa2ec
nI6HNJ+018ANL/bU8E+EL/Q/FHlaMqvdQXtnH58dxHIEwcbhz8nXtzxXz9+1D+x38Ifg1+y5v1O+
/sbxbDGi22oBmL392BkxiLJGG56cCqilclrujU/4I/TeZ4L+IyKAsZ1C1dVHO3MT9+/TNfR9h4n8
OftMar8T/hb4p0Zmbw/feQz7CqvE2fKlRuodSrc/SvlP/gjpr1oLT4kaO1wBfM9ndRQO+XMOJUzj
HYj14yK+v/hv8GJvhp8ZPiV46utSEtl4lZJijAqIFjBOSenALD8KLGs7X1Pk/wDYq+A2n/C/9sr4
keHL+O11STRLFJrK4aAZTMqlHU9m2sM+4OBXon7QX7afiz4SftWeHfhxp+h6TdaLfz2Mct1cq/n7
ZyFYqwbAwemRXmPwt/ao8D2P/BQnxtqT6vCfDPiG1j0u21cgi3WZBDty/ozKy5OB0r2v4+fsWXPx
e/aC0X4kRa/9nh097OX7F9m3ljA4cAPnoSB+ZpWM000mch/wVA+HWgSeEvCXi86fGNaXWobF7hAA
0sZV3AYdDhkHXsTXuP7R3xIn/Zl+AQ1vwjo9i32KaC2hs5IiIER8gnapXHT1rwX/AIKr/FTQdB+G
ug+Hv7Qhl8RpqcOojTCSXMKpKAxA6AtgV7B42stA/bp/Zcgg8JeIEhtdTNvI88B8xrWVcF4pFBzu
XcQRx60WdwjbXQd+z/47b9sL9l6fUfG+i2SJqjXVlPaWqskZVDgMu4kqe4Oa/O79kf8AaX8WfAL4
sR+BNFsba80HX/EUEFyuoIxdAZfJLxsDwxBXqOSK/R34K+BNJ/Y2/Zyu9K8Ua/CmjaXPcXMmpTjY
qpKwIB98nHfk1+LeneOLTQfi5p/i1C1xZ2utRamqR/6xlSYOB1GDgDjPbHerSFf95dbH7Q/tu/tB
6/8As3fCuw8S+HbCy1C9uNTjsWjvlZkCtHI/8JBz8lfhr8RfFd/478Y6l4kvo4Y77VLqW8lW1TbE
pdixwPTLHqa/erxN4V8BftfeAPCt8ZrbxL4OluRfbIm3RykIy7WIPBVjgg9wRX5Gft9eA/h38N/j
TcaF8Ob2Ka0hj/0+whYsLCcEL5Ybp2Jx271cLGck09T5ilh23HzZGE4XHbr/AFpirGxkyCGbjk/K
PwouGjwGV2CZwecn3FRyhZeQFYnnJ4zx0xW1iUemfs7fEDV/hV8b/B/irSHRL2zvo4vLbJjlRz5b
o4BB24fr7Cv1x/4Km28Fx+yhfPMibxqlpsd0B2Els9enGea/KL9lD4aWXxd+PnhXw3farFob3NwZ
opXh3h5I8MIxkjBIB59q/Zr9uj4WX3xY/Zr8R6Tp0iJeWPl6kiucCQQ5ZkB7Ernmud/Ebbo/Kj/g
n98RtV+H/wC1X4Jt9LaPZrlwdHu1kGVkt5CrNjphgUUg/h3NfqX+0x8XtS+BXjz4dN4fsLKSTxjr
UWmam1xGTuQMihhgjD4cjPpivx+/ZV8R2vhj9pv4X6hdzLb2EGvW++WTsrMIwT6ctzX6Zf8ABSDx
Tp3hbxJ8CdQvriOGG38ULcF2I4VWiJJ9sd6mSbZoktDnv+CungLQ5fhZoHi57VYtai1FdPa8Q4LQ
NHK21vXDLwe2TX5CXQZZGAwBuIGD973r9i/+CvF3by/s6+HstHKk2uxMhD9vIlO4c8j/AOKr8cZJ
EaXkkluOetb01pc5nuyTzncbnBCA4PGMc84r9Nf+CO/xOsv7V8V+ALjTUGoCI6vZ3qKMiHKJJGxz
nJZlI7cGvzFuJWB2Rl1AOOa/TL/gjz8OtLuda8T+Ol1ctrFnG2lSaVs2iOGTY6yZzzkxkfhUzN4M
vf8ABXDxz4c/4Sfwr4ft9NuYfGOnKl++sQkIEtXEoWPcDnO9c+2K/MecyXcjySXBmZyS7u2WPPXJ
ySfqa/Sj/gsF8NLiz8U+F/H0c0bWF9ANInjLHdHJGJZEbHoQ2M+3vX5oyFo/lJLKeCGHOP8AP40R
21Mk9bH6df8ABG7xnrB1Txz4Qa4MmgRW0WqRQyDJSdn8tiG64Khcj1FeIf8ABT0A/tY+KY1B4s7A
nA5BMPX9Mfia+kP+CP8A4B0Ky8KeJfG9pqry6vcSHSLyxlKgQhCsisOh5B9+p9K8m/4K2/C240P4
v6b43j1OOax8Q2qW5tIz+8iltwBk89GD5zx0qUrs0b1Psr4Lo/ib/gmzpdrpsZvbqXwRcWqQQAMz
TCGRCgHdtwx9a/PP/gnZ4B1zW/2tvC+r22mXMtholzK+pTqpAty0EqqHyf7xUetSfsKftuXv7M+u
xaD4huZ7/wCG+pT/AL+EksdMc5JniXHQk/Mvfr1r9QfiN8W/hT+zd8NNU+JNsmnW9lqaJLD/AGTE
u7UZXBeIBV6k5Jyeg61NmtBPR3O8vreK4+N2lu6KzxaDcshYZ25uIQcehrnviL8d/hz8O/Fbab4m
+Imm+HdSjRJG066uQr7SMg7cdDxX51/suf8ABQm7u/2rNX8SfE7U/sWgeI7b+zbaOLe1tpbeZH5a
gc7U+U7j6nNfYfxg/Yw+Gfxp+NEHxG13VxcXv+ilbL7TEYJfKxgEHOQQuCOc5pWaB2Pmr/gqZ8cP
hH8Ufhl4ct/DPiPTPEPiy01QMv2HLyxWxicSbiBwpJTr/OvzBucMwUnAYhueK/Qz/gqhJ8INH1rw
7oPhOytrPxtYr5l3/ZyKtutq4JCuV4L7lUgehPqK/PAmNV3HBYHA9DW8Y6Gd0SKsZLqSpAPHoB3r
72/4JjftdaR8GdevfAvilEtvD3iC6E8OrjOyyuAgTZJwfkfYvORg9eDXwL5jbTtA2kngjH5V+g3/
AATE+KHwyksfEPwu+IVnZNcazO0+nXd/CojdfKIkiaU/dfjI6e3NRNFxZ97/ABV8IfFjwrr93498
B+LZ/E2m2/8Apf8AwhV6V8q6jwd8cUoHHGCv9e/xB8Ifj437SP8AwUj8Aa/d+C7bwhfWdvdWVxbr
IZppWjt5zukZkTBHAxjIwc9sfc/wi8I+AvgjqN82lfEb7Xp118pstU12K4SILkrsBOVwOOD0FfnX
8af2n/h/4N/4KA2HxL8FWEsllok7WuuSxKoS+ceZFNJDzg5jY4bjJGfes0VFLmTZ+nPxx+Ofw6+D
Uumf8J34pPhs3Yd7bAlPmBcBidgPA3Dr618v/tnftg/Bb4hfs2eKfDNhrcmsarqVtGmmxyabMN0w
cMjhpECjGCckjvXpPxb0H9n79sDQvC2veI/FWmvbxWry2ccmqx2kyLLtJDoWBBBUZB9O/FeH/wDB
Q/4ifBbw9+zxp/gPRoNN17WZfLi0yTR5Y5vsJgKDfM6nIyuV568/WqjENOqPoHwdrN14S/4Jy6dq
+j3T2F7YeABdWtzCdrROtpuVgccEEA5r88PgV/wUG+M3w81u/u7q61H4l6fNCUex1WV38ghuHRlB
IbqMdx2r6p/Yv/ap+GnxQ/ZjHwo8favbeHLrStJ/sW4Oq3SWsd9asrIrRMzDkL8pHUEZ5zXo/ga8
/Zd/Zh8MeJ9R8P8AiXwzIl1Abq5tV1iG6muPLVsJErMTk5wAO5oirdB83LJniX/BK7xdcfEf43fH
Hxbc2NpYT6x9lu2trFSsUO6SbCgHn/E5OBWV8X/jN8QLD/gpfpPhSx8Y6va+HTr2mW50eG8dbZ4m
ihZ18sEAg7mJ/GvHf2Qv2xvB/wAGv2n/ABVrcekXOjfD3xnP9nEGVY6Yvm7opXwfuAvICMnAbjpX
3Vrtx+yrrHxhi+JN7438It4uinhu1vG1qPIeJVCELvwDgL2pNMaeqdjzD/gsPawv8NPAEhgikmfW
ZolLqOQbdjjPUAkfoK9j+FXiLUpP+CdOm6zJfTy6nF4KnlS8kcmQOkMgRt3XI2rg+wr4l/4Kgfta
eFPjLq+geC/B06azZaBcG+n1u1kWS2mkkiK+XGQeSoOSfWvYvhR+118NLL/gnXJ4b1jxTZaX4psf
DV5o/wDY9zIVuZZQjpGEXGWDBkPGevtRyPQy1SlY98/ZR8RX3jH9hXSdT8QXU+q3Vzpeoi4mvZDK
8iiWdcMT1+UAVzH/AAS+0ey0b9knTtTgs4o7ye7vDJPsHmSBZCAGbqcYNeTfsgfth/DHwr+xIPDH
ibxRZaF4g0i1vbP7BO5EtwXMkiNGAvOfMxxnpXBf8E1v23fDPwy8KXPw2+IF3/ZOll5NQsdZuj+5
iLBd9u4AyMsGcHp8xHHFHL5FJNylbsfaP7EHxj1z47/D/wAU+KteEUd1J4iubaOG2Y+VFFHFEECg
9OOvvk1+SXhLxh458DftWaz4o8CaZPrfiTTtavpoLGCA3HnR+a6yIyrztYNyeMHBzxX6S+Dv2w/2
cfgl4ttPDfhTxPYjw3r7y3Vxc2e82un3CqACx25HmAY68FB618JeIfjn4I+AH7c2o/EX4W3Z8VeE
2na5uLQbo45vtCsbiON2HQOQykg88dqai2XbU/SaPU/hv+2h4R0pdUv77wr4ptneCTSPt/2PUbWc
Ll4zHnLrkBgQOQM8civgz/goh42+NngPTLf4T+Ormy1bwOzwz6VrwtNs1+sI+UPJuP70cbgRk9cn
NfX97+1B+yP4r8TWHje88RaTZeJlSOdb1rWaO6iYDIDFU5YDg9ensK+X/wDgph+2v8PPjR4T0rwH
4KW38UIksepSeICrr9kkUsvlRhkBJZSdxzjB7mnFNPYh67o/PXRtAvvEutWWlaXaTXl9eS+TBBCp
d5ZD/CAOSa0PG3w58Q/DnWv7K8UaTdaDqflCb7JeRlGKHofp/wDXrovgJ8Rrf4UfG/wP4tv4TJYa
Rq9veXCIm4mIMBJgeyFvrivrL/gp/wDtLfCz9oSLwO3gLUTrGp6W9z9svPsUkOyJ1UCPdIqljkZw
M1d7vVGbVrWPgcAMo4LHOB6V3d38EfHNh8P4PHU/hi9g8JXCK8ereXmBgzbVOeSAT6gdfeuHR88Z
6Hdke1fpd8Jv24/hRZ/sDXXw18XXEsfim30W90iLTWsJJ0uSQ/kOHClRkOnUjBU+lO+tkhtO11uf
mbcB97AfNgcjp9a7T4YfBPxn8a9Ru9N8F6Fca7dWcQmnSHHyKxIBOSPQ/lXGyqpGQcbflwM8n6Gv
rj/gnH+094Y/Zi+KmuXvjN54NC1nTPsZuLaBpnhmSQOnyLyQwLDI74od0UlofJuuaJf+HtWvdO1O
3ey1Cyme3uLaUbXikVirKR6gg1TKE9RnAzkV7f8AtnfFPw38Z/2lPGPi7wnC8Wiag8IiaW2ELSlI
lRnK9clgTk814k7CPhRyepNMhMRn3NjOB15HSkfJUIGGM9PXNDuAq7TtI6j/ABqNjnBx/FzilZDu
wJdGA6HqeaeGLhjgBs/SmkfMTwwPb0/GkaLAJIz7UrBcA5DD5i2O/Wnq5BKnnNIBjJOOBjpRuLtz
x24otcaZJKy4DgYxwM01dkZ+dc9T+nSmGPg7WJKnkGlQjaVYZHb2o5Q5hsj7lAA4PtSr8gycnP6U
H5RuGCOgprZCD5ST0+tFgEfLEndj0p7MDyu4tjqeac4VQuFxj0poO93A+ROooGJKxkYDBH07U4Lz
kEY6URmMgls7u3HBowpDAsVYdqQDZMk+/uKGUqqMOeOcdM0rfKMAkg/4UhG0e+eRwQf85qrCuJLI
Q3JPpx3FPQrgq3PoQM0ZDN91SAT0ppU9ztGKLBcG2qcjkZ5p4ZUI+cHJ5BqMLgDod36UFB1xuP4U
mgTA7RlVUEHuOlG3ys4HXjBochT8gI46dKRX2Z43dwTzS2HuGQwyRg+1SKyg9Gzz0pjKWAbHX3oV
WGSpwMcU7XC5IVBXvvJOKaxw3OcfWlKs0ZbI+Q5B701UaQsS3FSUxShBYhsj1pSuUGCSB1JFAQ5L
KOAcZJ6UNvzjHGcYp2J3FdUxlyRkfezTH4I2scjjOKRz7bge1K3A3BuSOgPSiwDX3OfmzStyvJ59
6Q4LDBGO4/CnSYRlIfdweaaQriYOAA28joOlIVOTtO3PagkE5HGR0Hb3pXTPAbIPGKdguM2oQvPz
Z/OpAojZsnP0pAmw88/jyaRdxYsRlRxjPNKwxpwAxU4HpTiN3bGPWlBVlOQTzyvSmlg8hIODSsNC
7gX3HJPOPanSYYk4znuKa4ZBkcA9c96cThcr8pHPNSMRRu78dMVJKgG0hgG7jFRM7bcHqeQaDlmb
ALH2poWwjn5VwSaFXewycA808EIBjAI49aY5Cs+W4HoOtA7jpGCtwc88UhBwG55HrQfmRyfm7UqI
VXPbHFAAXYg8e2DTlAc5yVCjGO9MJOeuOeT6UGFmJbqP50gHnjLK25enIppiVc56k9KkONioCWbt
UbA7jkYHpmiwAVUAAHbx60m4FduSMn0pr43qBjP9KkIDFgQV9KLAJJhQcA59SaT+IHvTtuVJJ3Nj
imlAQfm+maAHiIBgWPGc+tRtuLHd0zxT3RsYyQADTTHkqeevGaLASBgG+7kAdjzUbYP3cflT3CjB
GSR2OAD9KQuzHpgDvTsSxpIIDHqOOlGRkjOAexpwh+YsCNnYn1pDHtDMGGev0qbDS7iGNimRyuaa
qFgOeBzUpIkAA+UfU80uGh3DG8jI+U5FFh6ER5GOR6H0pyplSFUhhz9fWn7QS6kkHtSSA9hgEHHP
WnYQowUIAYOeAf8AGkVXjzu52n7poyfuevTGKUJI2FLcmgYBSWJJ+U9yaeIxjlu+QBTXkZxtztUe
nSgjBTGT+NADTs3/AMXf71IEzJkhgP8A61OkOWy3qAcinn96VTAHON3tQA3axAy+AaAFBYEZx1NO
kTEgwu3P8Oc4pCQYxyWIPQDtQAOPMYAEcL278VEAW2jGMnpUk5UIoQEYGSD1pJUCqGHzHpz6fWk0
G4MmxfmJGPQ0kxV3G35VPIBpEDY3KeD60qqTgheTwDnvTsDEIMbcHGehHeh3+f5s56DmpUQksQct
0O48Go5UznjJ3dRSsTqI53fJ1680rK7n5chRjqaaQcY5wO+Ked4O3cWAzjB7Uh3GlJOPXPAzTmQM
Tv4YDoKMI7kMSff0pF2tlQcjrk1QtRchw5IIVR19KUfM52Y5zz7UKAykgkLxkDvSxIwzlyc9hzg4
NKwrMa48oOuPvEEGk2F8EsQAvHUfrTz/AKnJOWORjGePX6010JAKkHtg4osUrgEkdm2k7Bzvb2pc
biSQTgEZPpinpLhcINrLkEY4phXODzjJB4pDH4zGGXgjkqOvr0pjbvmZWznnmggLGcYA6BD1+tOh
/dnJ4B4OelMBduEwX2A5GTTGQ8sshZx37dKdMgedmK4Tk/8A6qF8vYzFWznjB4FFhoQEpJ34ODxz
TwhBJ+Z1PBI4p0IaJpMDJYZIf1oRHZ9q4GDjGaLAGxPMAIIDfdB4Galh+RygUKcnDDrzUcoV9xBJ
dDjA6UkcefLGSoJ/h6jNFgEO7eTHlccMf71SKi+YqxgsT3J6YourZYCP3mVGACo5NOiLAZzwOd2a
VhCMEUkrIcqeTn+tNkV+dzADAG7PH1p5iV2znPOTnoanaZBlWiCFflC8YI5P9aAsV1XYrY4ByvTr
mp32oWMROwgZyO+PfrVcMxIYE4HGAB/n0qUxtO22R8kg5x2xTsIsTNGsyqvMZBAXHAPpUM8U8kvz
ZwQp5OAR2FPijG1vnywPQdDzjmpGjjhmSRpD5bDO3BGcdRRYVyvNGSjBjgH5fXHFPGCxjVixbkHs
RUvlKADIuxW+73IAPvUhHkSE/fj24wp4zjjP4mlYi4NPcLChUrsI2NwM7adbszD5FYjJYFeBn6/j
Qm58o5cgnAUcgnp/X9alhxmWEiRTja5Q4BH1/L8qVik2G55lcBfNlYEgkHnPUk+tNkiuY285VHOS
xwOf0psKCGJiHfKtuHGQB9PU5FSpvjA2bvKYkkP06d/88UtU9CW5MZLG8TpcJ0TkFPfvxQVSS0kk
ALy4J2E5xz/k1LFBtjxlQDnCqejdMZqGVQJmWUGEsBhgOe/WqHF2Mi/VRIHT5sk4wOMd/wBarNEE
jDIOM5Bz+dTaqGguGwTjoDmq+xn46AjJJGKhvU1RJuXyVKnLnnpzWxCZJIA52usSlFxw2SM4/Pv9
ax0GG3EDAPB9vetV1mdVCNjd1VBjHPX2q0iXqTwt5zLuDOAPm9j2/wA+1W0tLea5A88qxQ4DDocc
f0qnBEPPOSqttP7zrx6H0PFWnuZIid48wgfK/Bx3wRnrV2M1K3QemPMELsdpU4+bhcn375/KrAmj
lZvNkywOCcnn3zVSRxK0Srncw3EFeMdBzTlshd26TKqSrkoFyo6fX8KhilJ2OQcbxyDk/dc+golj
2PuXsfXJzQykSkE5A6/lQ6lMlx8xGcZ6e5pWOgcZFKq+QT1w3Wo2bPJXCenrSqqh2DZKAZGB0poQ
MoJYYzyBRcZMp2xljk5HQ01njyuxWHBBJPf2prKXjOW2kHhD6Y7VGzg8Ku3nuakdxzBsgDO3OQCe
9XoLgxqig4OScjP41nrkA5PHvVuGRoiAMnGdwPQU0K51EXjzxJHpB0Jtc1BdG4K6d9qkNupBBz5e
do6dhU+jeMta8NTXMmj6te6X56BZjZ3DxM6gnAJUjuTXPRZAdnI3epPX0FWcRx2xcoG3EnG7jHp9
ev5iqFd7Ikurp9SvpLi4ne4muCXeSRtxY5zluOT19etat5401u50hdIPiDURpluAyaebpzFwcjCE
4wCBxj0rDwojQpIFmdcFWXkdwP8A69NgKbgR87OfmJHStEkyeZ9zupfi944mh8mLxpr6qSSGGoyj
JIGSfm4Pb2qK++JPi68sri0u/FGsXcM0flypcX0kisuckEbsNn3rkBMikugJx2z3pd2871l2g45I
Oc8//Wod0O/NudBp/jDxFo9tPZaXrWo6fZ3O95be1uWRJiw2ksAcHIGDntWfa6tfaVqUd/p19c2N
3Efkmt5mideOgZeR09apEEKWUcdAM9T604xnymbHy5DbuD27VO25Wvcu6h4h1TWdTlvdT1K61K8Z
SrXF5cPNKcY4LOSTx744q1p3irxDp+j3GlW+uanFps24mxS6kWIhjyDGCB1/WsZnBQeWRh8nOcZp
k8k06BVAZCcHb3Ht+VXdGbTegNGFmaV5PNXfh2wMk+n1zmnSQhAdoLdPl7gUtxE8EnmOBt/u+n5V
CwkWUbWYsD8zEZBWjmYrImUgHBLqynkZ/I1r+H/HHiHwrctcaJq95oshHlu9ldSQFlz0JRhkZHvW
Q7L5qeZ+8jQ4yGHPBHpxSXRW4dc4KFsHcehpPXcS0Zva54w1nXtQj1DUta1PUrmLBimvLySV4gDu
G1mYlcHkYwRjPWpvEfjTWfFf2eTXNc1HWpYsiM6hdSTmNSdxClicDP8AkVzTBlXGAOx7YxTWV1ds
oS2PyqlFE8zR2Gk/EzxnpGmx2el+Mtf0uxgH+j21jqU0UUS9CAqsF59ah1vx/wCJPFlvEmua/quu
QxkOkV/dyziM9CwDscHH04rl0l/cuqSMzccYwB68VLma3jDoxUdT6/QCna5akza0PxdrPhHUP7S0
bV7zS70qYvO064e3lVMfd3IQeeP0rdvfjn8QbuGSO58d+Kbi1cEPFLrFwyuCMEMC3I69a4d2dQyh
ipHZO/1H6fhULMPMOc5C8jihLuEpN6F97gXKpcBy04HDdGXnOR9PWuvX43/EWNoQvxA8Sqq4Ct/b
U4PA6Y3+5/pXn5j+djuEYAA+bjPtT4olidECuSePUE/T/PWqcUyOdo3/ABB4o1jxJqa32uaxfa/f
xoALvUbpp5GUdAS5Jx04zWhoPxP8VeE7Gaz0LxPrOhxTSec8Om6hNbRyyDHzFUYZPGM1yCyqg2x5
BYHPHA/zikCGCVZG+WR/nwx6d8/zpcg1Uudhr3xb8deK9NfRtU8X65qNjuV3t7/UZ50JByDtdiMg
55xXLn/R0jJ+Y5ySTznqDUM16SAAoVzzkH5v/wBXJqIXCA4MZzyACecY60uWwm77HX6J8WPGXhLT
k0zQvFutaRZCQyrb2OozQpuPU7UYDnHPFctdXc1/eXN9dXc1zNKxllnuHMjux5JLMSWJJ6k1Xx5m
DtLAZwDk8dev40sdwVbn5VBwOh7dPersTd9RGiKOSp27xkkgHd7e1OiYs2CwUA53NkCmpjaWLBnx
hQeO9IqbYGyxPGcjjBFCViW7vYu2N7cafq9vfW13LHd27rLFNDIUaNhggqRyD7iuo1H4w+PNTt57
e48a+IbiC4QxyRTatcFGXGMEbsYxx09q4fc+CWfqeefmPHWrDFlhyjnOdobHT3qXuaIek7RqmJT5
kfKtk9jkEHPbitnxD4u13xesR8Qa1qGtNbgiH+0LyWZY1/2A7Hb26Y6VzxPmSCLcMgEnC4H1zT5F
2N+9YsD0wegoTK1NPxB4v17xLbxwavrWparb2p/0eG9u5JliyMfKGJA4wOKxvNVl5O4AcD0pVRnZ
CXD7gcgHk/jURVmZlbgr0OapMzcSwHeQbSCzMOP8mtDTPE2paE0w03VbyweRR5jWkzxF8A43EEZx
k/0rJklfcA5ZnbjjrQF2D5nAYDH4U3YpXRr6t4l1bWok/tHV77UQjeaDd3LyhTjBK7icGs8lpmDk
sS53EE4AqMyxtEqsu5wOCTkH8KdJIS3H3vrwKlITdi9p2tanpEMiWWp3djG4G9LSdowcdCQpGTye
vrUVxqd/dZkuL+4vZQAN1xKXPb1zVZiC4Kn5jndtXviozuBALZjz8xxWlhczZYSR5BlWYELgkdT/
AIf/AK6sXmo30sMUUlzNLEv3Ed+Aenp71RRcxuY23KOODzT2uDIuXOXVf4Rz9KVhtkqXTRhW3MAm
eo6Upv5tsjT3MwkY7jlj82R0+nt71S8x5XUr0x34xQ6hphubcCCQCaAC4IZgzyFmPHHBAx0/SmLG
JAVJ24zgjvSshMmCNuRgZPtS4EZ4yNpwc/ypNMQ5RhyPnKAdR39qlhkLH5VKof4enNQySSKCm0sC
O/QD2oO5mznBPQZ6Cna+5XmW0uNi7QCd5J5/xpPM+deCR6Kcbepqm4EjruZ8jJLngYp0UvlNgBWU
nGT3FUoroK5YeYqWDoWQccd/1+lDMHDGNQqDrk9SarMSzsxyqngGmvIrY2KxCnGKQcxee5LRgMFJ
HGW5+nFQtMobd3JyeOPYfhmq0jBj9D35NDTK24kDcBgelFxJotvOLfHl5UAnbg8fjQ0z7ctgY6ZO
cCqv2ghGViDnnHpTQW8wAnCZ25AzWbd2UiyJd24nORzkAYpHmd8hiwkboTUbbADkHP8AtdQaa+wB
XfJ55APPpTUugrPuWZ7pxhdxRRwNhznimx3hiXcQQpzgMOM+tVZm3Rb8/KPlwCTSbFHCt0H3WOaT
Gi1JfvcDJy3p7UkV05XAxtXnOMd+KhwoHB57880oVCAM4x1yaFcd2TteyMQWO7b90jr71XnYM29e
MnOTTXCOQc54JxnpTAo2kdMepzTTYtydiGK+YSw9jjNRbtxLMAfSlYbgDjae3vTQ4jY8Dr170mIA
7GYkHb3470rSq4Y7yx/u1HIzEsQfoAOtIuSuHTHXpUookM2SASOeARSvJ5g2ryR6jrUWwsApAxz1
HanRHax5wFHy570aggJlYqWGM8YHpTXbdgE9uvpRLJ5i554PbtShN/TGOQe3alYOopTaG6M5zznt
600yZyAoDHnJ7UrRtGWQNkAdDxQw2AZAySecUihgGGD9RjrQgAYk8A88HtUkhJGRgkcEZ6VGSqKR
yGqyB5ZWGcYAPT1FNIGCAeO+e9JGoC5CnJOMml4U5brSAUHadw2+nIzTnctgcAc8AVG5GcYY4HOR
QwOARk5p6sAP71iRlfQf5+tCjbx2pXQYDHNOcBSTk5bjFHKAhJY9OD3pCCr7VBI7mnmQhNuAvv60
3eWc5zkdxRYYDJIUjGOaa8ZDMAM7jwetOzv6E59aRnKnBwDu6Unceg1nYKVySPTPFIVAw2Cxzxx1
p0ke1lYkbTzgUE7kxz3PTmlcLCeWpPPA7UhGCQMinAYUgbhjoetIU3gdFOMn3ouKw52UZUDvndUa
SMBlakcMVXIBPrnrRv2YIzjPBx+lADRncG4HrmkI25AwcjqRUm5mcSFDnOeKR0LE45BOc07BcjVR
szhiCccU7IiXcvORgg+vtSuvBI6L6dKBlk+Ucnjk9akoQyZLdh7dKdtK5O4Dvg0zPKbwdp5/CnHk
kDgetNWBtisSgO0khqQuqg/MSeooEZ2nORjrSlIwoIJz60cpNxpUHnnOefpRt2k4bn0HXFADNljy
vfmhgHAKfLz0zzTGAByAoIzxxQ6FR0z2okPzADII4pwJYcADjucUyRoUmPJzg8fKaRkyfl6H1pSC
5yBgE9PSlK7SR90j16UWATZuAA+91AHYUgGSTk8GhiOTyD60bSGPUnGeKllIE3biMZpzAqMZHHOD
QmPNJ5B/SkMmQwIJzU6laAUL4XkYHU01eDyOBxinttwADtwe5pJY1Hzc7genrVKLEOZRnv0qMgx4
KnPqaUOC20AjPvSsMMccYH1qWrDTFI3hiQeBUZAznHHvTkkUZO0j6UrYkbj5RjrSQxM5XnkDt0oE
u7huv8qNmPu8AepoCKTuz81USBwvGF6UjYD8bgPXtTwfN6ctyAKTYy8HOPcd6gYjqU4BzjoadJL5
hPy4DU3ZlgCTzTvKMgJAyV5zmgYwrsy3UDpilZi6FiDkdAKeq5b+6O2RRj5+Tz3PqKoBuSVz37in
cDKsCopJSWc4IC+1LtAJBIY989R1pAJuAkAQdxkn0oZizNg8A8DNKjAOT17AEZFDKDk9AOvHtQAq
g7T0DL03Co3TAXKgDOcA0pBcZ9O3an7PlOenUY9KBDPNOAMcDrjpSlgU4woPFPKgg7RuVRzn0pjF
NrFQdxIwO1AIbgkED+Edh0pcGPuST3FKmehGM8ZHSiQZYDA5HSkMGyWJGWwaCxTgLleeSKkZVBQg
44+6RilYsMBTgA9PSgCNTyMU5UZ9o34ByMZoYMBuCjrjI70u4Fc7snPI+tAkGFUqwySOME9KbIQV
3IA2eKUEuDtIOB0PYUoDYAB5A4FAxNjO3yjHY5709iSvBKgD160xiWwdxL9ORjmnvJkE4Gc844Ao
AUhdwf8AhOSQDQyrIwc5VCMVGHLx49Dkj2pDEY9vfHXP6U0DFKlWyxyv8xikkXLAIAVz0NIVJ2cn
r1p5ZkZkz1PC0WEhXiIK5UFCSCaYwDEKvJ7rmpCGPO4c8ZY4FNKncCx2j1HU0DI2LLNtwMg9c1Jz
I23ceOc46mmnk5B34704sqMR90jkfLyamwEbLsZlJGQOc0qyjJ+bI4A9qfIHdSxyvPfvSFXEYWPA
x1yKOoWGsQABnIB9MU7hmwBgAH6UhDYzgHHp1qRpPMYgAEEevWmA1rdkw2djEZOOwpyuDHxhcHqO
9KsBdWYnBHWkKs8hI+ZeByOfegBT82MAFRxg1HkDg/kB0qSYMGYgkkjOcdBTQBvzk/gOTQAiMEck
E8nIY8gfhSAbo3ZpeckfWnOFUncvHBwB+tOaBXjJY4YYC8cEetKwmrkTYKAEZJHUU+IqSPXIz9Pp
TnbZGqryp6fSgOgwQG9eO9FgtYbKvzYVW5bgDuKsZEIfGeGwR3NQBpNmdzDnK5BoZjgAglic7s/o
aaQXEBYyEENuJ3FSaeYztKltrkcMfpzTnVkY7iQ4GMA1JPAGVcvvGRlvT8KTuMYWQtzznAHuKPMI
ldtvlbB8q5ojGfMbYN3QMCeeOPxqVU3hzKoLDuR168UhjSq7S+0uG/u8nOPSlADW+05Vs8k/e+tP
t/LZj5mQpbKnGe3FMVt5bcoOQTuYZIx2oQmhxjWPyyM5UYx1Jb1plwrMjPn53PPqDTo8RISqlnY5
3Hj9etLJGWfPTvzTsTZkhISNCzqu4DEYPWmxuyRncgHPXqcdMU60aOWYghsdzjj2p88RjnGF2sMD
HUBu/wDOgNQKcSIAzoRkEj1/Dp2pUjZSqcMTx6bf84p6gYcFdrqvQ8gjt+NOEhIVlVXIYkBgScc9
fXnHei5m/MaqsFYq4yQQVJPIx7/jTmhVYWiDBpAN24ckj096JHKSFGGdxJCBuF9RnnNO3o+HSIsi
dSvfv1FCepHXQeJoo4wI1/fqcYB6jj/69K0JnOYiyx5LbHHBGO/J9KjkSJZcQygSM3PcCngOxdAG
fBwPl+U/nRY0VwkRncBXGc4GONw9P89amdZIGhVlYKAcdcH8e/BpS+I13L5m0bWRhnaB6+nP86Tz
hHvbYHdgfmfPygf5FLY0uV5ArAjLNgHB6c4x/jSXE7zKwIAZUChvXn26VZjZZroW7bQvUKTxj/Jp
JJI45YmJQGM4HGcj3Hc80rk2Oc1FmScqwJwMZPJx70xNzRKSPkb5T3NXdUQG5PRQOrEY7dvzqmsS
lX3ZVVI+XPX6evapdrlpEuzfMm1wF5PI61qJJNGDIqAkHcF9TiqNqAYsHCsrDAB5q+gaF9pTftb5
Q3T0zn/6/rWlyWy2zwpswjLIFLHJGMDoetExliVisZkBBcsGHU9TjPPQ8VNEWVllkIVsnIZ9pOe3
NNFwf9aSuQ5yhHfOBmpTZCs2BQKo8uTCDBETLkZ9vfH8qddW4hlcs4VSeittOTURmMhVSohUklzk
jB7EVPEzGZ45TukBOcDPQ4zn3oeptbscZIrSEtsACdSOnXrTCPkfc+D2PXP+eKlldphtJCpj5f8A
CmOuYOBjJ5zxSTGAj2qSzYdfenOQNuQNwx3zke9MlhWGQozhjjOe1KJcs4C7uOo9KGVYXeSRwM4x
jNQAP6de1SDIJGMnbxk9qjWTY+70PSkIeAdhBwAONo/z7VZSRFhfjk9e/eq0rBuVY5Oc8dKt2U0C
xuZV3NnAABA/OmhMcij5tzmMMMHcOtW482znC5jVQDhSev1qskwJVSmATgZ4qzkwnbI5VmyQAvHs
apENFoyxlQ5OHAzs29TjFVJJk6oMS5zjlc/lSLGdobcSBxgL15/SnuvlyCUFGC8nK9fwqrjSHIxQ
xttWM/eJHJGfUU6aOMDJ/wBSRwxGMtnn8aijZnRmUkK3ylgPmA5P5VPKm1AjN5eQWAcdeev86q4W
FiDFY1XB7jJJAHIxj8qSSNvO2MNyHJVu/Tv+VK08kRCqQpBA3Adh/wDWoaUynIGcZYs2eR6f1ouh
pWFhuAqBFXhAVYkfe56j9KbG8ZB+Rt5yQM478UTK6wq5BZd4AC9UJ/8A1Us8af6xZDub37detGhL
uWftjSYwUK4wqngj1yOlQM6M4Ea8jc5yp68+/NNIQPhduV6jqCPxpqB4lRVkZzgtkdQKpK5F/MLi
Vm4YRl+eR0J9CKVW3hiNpIOCg6Zq0wzAxOw8khm69OoPfrVMYRNyS7+i9ScnpTsDLDyRmCTDE9QG
6cnmkWUuIgQxZyQxP1piyoCCEzk8jPFI1zNb7Rjcr4wuPfqKuzJ0HOA6OoBTa3OR830NJNLGCNod
mKYLN/CcdgKmYNE0m0qY+69+/FRYDSsykI2CAcYORnFC8x7DFnDxoMr/AL/TI+tSkCAeYz7CPXnI
44qFwrEA4jHUkjrROfPuFJcnafvep/Cr0JuWr2YysJBEPn4IBA4Iqq84V0VVYNxl27HvSyPll2qw
CnJA6D6etOktSo37jvU8o/Uj1pWFZsIEZkyg3ZPXbjilZd7AjAwOQDyeOOfxFEm1QHD5OMFT0xmm
yMTIXGZEHG/IG32pDsOmDOQ5UDaD0PLe1R+XLK0mCqgDO3cORnpxTJm8wrIMtg7T7fhTjbsXKttZ
u+OlNhawCQhwYsEkYAGeKdJHI0rMWJBJI5zgUzaf9aB9z5QAKc+5DvAUs/IUAjA9KLjsE0RaIuVB
K8decUwv5qhdgUYyM9aeUCGRclgOo6jpio1cF2IYqwB24PI/zik2JkYDOeBk54PapRKIyXB6fKR0
FRTPKWfflx1K44P/ANenq+ISokJzztPp6UJXBXHgNPGT8qYyeBnNSyR42uWGcYwT/OocmKMjA3M2
eD+n6VBLk57Y5Kse/PFHKVexY2MTIGKlBgDb1PsBTZfJcAgEKCfkkPQ+/wCOaRQpG8KHCkEluOfa
nlY1BAUuWGBtzxkf55p8qQr3I2YxOhVt6fTtSs6SSghjuHQdMUs8rKyqMYK8Y5x9aYkheQDAz39T
7UaAOMQJyo47g9KjjlJRoyRjPVascNh1+Vh0xz9QaZIS4cJ8gJyfehJMTQ2FzE6mMZ4Ax2NJJmPO
NzDGTgAUb8soxudDggd/oaOTkAsgI5DcmrSFcZvYJ2C+mKCwKj5jkHke1ErbXYFj5nTaemKQFokC
Bc9SSetKwuopCxhTu6HORnpSYMrHpnPXvTgzTSBTtLds9vegvvZyV2jPTsKLWL0Yz7OzAswKcYy3
f6VIHTeRtLKBwT/Wm7GbpzjtniiVmC5IBHGMCpGBV3baSMse5wAPWkMBVGkzyTgE05wJnIXgehFM
RCylSx3nngY6dqEIc5CFgSWHH/6qY0g4B6MOO4p0rb0CBSuDxigL5aZ2gYJ55yKq4CuDHGykhh1A
PGPpTC5clwoAIwDT2iMoLKNw+982Mgf4U3J+RF+bJzhRnikFxGwxzn5h6nvSNtDNtHB6ZHJolILO
MbcHuKQEOx9QBRYTADcGbjI64pytgDBwR0x1pX2orKORng1FwpBUHI7HPNJoEyRs792zdkZOP50x
nCqv94n0p8jqYAmOM9D2qJ4s8rzjgkYI96lKxQ52AG4AnnoRQ0yzYOxSfcU51JXK/wAIJz+FJFEo
3knccY+ppgJIoclVOFHIz1poTeTyTn3pw3oxIKhemMU7KHb1DDrzilYBjfIcDKkflTHJwSmSCOcn
pUvyfKpJJOc+gqOXyw5Q5BHH5UASI+Uw7DavcdaZ8sqknseDiiaJmUMOVAxxS7VGOduT07UgGyAY
GeW7HP8AShc7FIJJB6nvTg6xPkgc9D1pMkKxYgDPpjNOwDJpCkjAjAPrzQG+fbjjHUdKV13qWByT
wBQqdnxz8uM1DQ0Nba7cDj0oLbcgDg0sgG4YJbA64pHV+pxg9u9MNh7OxcfNzjJB6035sMGwO+cU
5wCgBPzdsfypoXqACOCRn6UKOoXAZlJONpxj8KNhDHPOe56Yp4R0xg7ge4NRyuXkwR/sgCr2EIRk
g7cd8k06ZUwSuenf1pTKSMN0HHPWmZGcH5lHapYD88sWO58d6aTuBGSwz6UMQ3TOfUinFQBnAPOa
AGSSFzjngdDSqpaPJIK5z05pWIKkBQGJ4NAPljHBHrmmkgHMrlVbGQMZzTJETOQxyeadJlmBAIBO
SfWlwQuQcAdabQCMu/gHb8valkgLNhSGwAc0hfnPUY6YpwBlwOOSaVrgQshIA7U87kyzLgnnPbNP
K5YbQUxxg96QysYwp6D1FDiNMHDBSWJHfFRSNtGcg+3pTpskY3Fuc8fSkIwpU8HrmpQPcaDnHf0F
OkDEt1AJz7Zp25V+UgnimyEqFIJP1/lTsIcAY89GAFKdpUEAkHtTGLZJxjvinHptBJB54pWAbIh5
xgg/hS4KkcfhQylMg9fRqJPkJA6jHHanYAaQpt2jkduwpQcgkKBxTV+ZuoAzzQ3DELk49T1qbDuO
aT5iCT1xRgLnAGAfSmnlgvA9OKcQoBwc57npTSEKXwoPTJ7UwqACT8xzwBTiRKAMbwBjB4oARWbJ
6DH607MYySTcGIDHJyTT0ACHIGcUEgkAZUdcA9acQURmXhemSc0WYhm5sgBQOegqUAsCMHJ7ntTP
J2oCfmYnApsoGcgkYpq4DHPBLAEdOaVjndggfhQqFlP3jxT0QKuSuTyDjnFKzAauP4wQCeo60gGJ
MAjI5FPdQW6YDGlQgFioBx6jtT5QCVdxGcbuT06U3zfnDtk4NKdkgI3bSB1xTQpIwfu579KAFkZG
O9iQx6e1N3hs4Y596VvmJ27QOnHSmljkjaDz3qGNA+FJVeQTnIpcBV+/jn86HyvsPagrvUAsWI7U
WC4BATjOMD17035lJyBjrx1pWc54+UjqKcdrkDOB3JosO40MvXBye1KrYY/MenNJs57le1PKELuO
Fz2pcrC411BI2knp29qGzHFhTt9c0bMnOSDnqacRvQA8Y5z60coXGjOAoOOckE0jH5eV47kdKCC7
gk4YU5/lbbztPcmnYLiNwOOB2ApMboyd2T0PNOLqoG35s9c08qYnHyDJHbFCsIiYlSR90jipCisP
vnk8rj0oldmZgOgOcnsKSSHacBty54xRYBjnAAAzkfSh+hAJG0cDrSsOAGAwDk0jbM42tj1pWHcF
G4kkEBRk470/Cs27Iz1xnios54xgVLDtXJyCAelJoaHHBY5Oe9Nk2lxt5JP3qHO1d5IHGMetOREw
N7E56DHf3qSiLDbiwxxwcd6lKkjdlcY9acIyc5IyM4AoSMBgdwK45BpADb43XepG3k4HSmbjI5HA
GeMmleVTcMc7V7980RhmDZ4B4G7vTsTsLgSDaFG4HGRQ0RLDJ5/Pn6UixyxMAPu57GkKjG8EAg8q
TkiiwXJmIjeQnkDI/wD1UkKLOW4CgDOSTTWIjC5Oc8nNLkIAQDz0z9aLDTuSqjqpdcbiMHnjFVpt
wI3fNv8AQ9qnk4jcFmUA52470hk2ygnjuBxxQgG70YojAnAwPYUssqs64GwquN4HU+9IpAnbeN4P
YDmhE8rzHY7SOVUjqKbQkBzjG3LDv2NK0bMUZwAvOCaGkZ0+XAJHIzyKQEquCCQPQcAUrFaBKVhJ
wPlPXtTycYDkDd8wwMfrTdocu24KMgZxQ0o3qXIB7DHUUAJOS0uVJYg5BNNd9zsSMnknA4pZCJAc
EZA6DvSyEOq4HI5/WgBJAYijrjYeu2klIVQFXoM571KUXYi/MWBJJ6Co5hhtuMsD1FNK4MQyFAQv
JIp4GDvZsgcYpDACSVyFPP1qQxJuAY7guM460MSGO7FlUAt7Y6inn96SmQoznafXtSbyGYgMEPA9
qjAbzSccn26VIx87TMHVyBtyOfQVHyuQGA5zk1OFUopOScE5A71GSrKMfeJ7jH1NUkSw3OdmAJAO
Rx0qQiKQMSCFP4YpmWjT5QV5HXoacpw45Bzng+tIaY6ZIxvVXJ54+lQAMhAVc4GeeeKeMSKSD+8c
/d9KlPyhmX5GHHvj0oCwxV86QE4HUkjqcU4/uVHGQDjOOtNk8tJMoeM4+bPT2/SpJzudiQRHjCgN
wKAuNYsGKkAFj1HTFS+YlwHUu7NndyBnpz9aiEnmhsMdvTk5xTowv8JO7nLY4ApCVxIkVmkSIhOS
u7IOanQqgIHLDkHuB/kVGJYWEYVQGUYyOtSiRoYuFBAB/hPP+TSsULLAfMJjO4t8wJHfPb2xmlZR
GhU8sAfkz161C/myTZGUK/Lgfrj86tNEGi+QZKjLk8kihDFVHTYZGZckZBGB9KZKv2e5CMwk25Kr
jt65H0pXle4CqXfH+0MAYp7SphEKYU/KBjA5pmUnqLGoe480r5cX93PB460SbUcxh9y9Rt4HXvTw
4MMiqoRc5Riv4U5CYQW8tXUjJYDk/jTKTugVChG9QzZIVccdOO/NIImw+0EiNsbQSAalt58zqRuj
ZeEYngj/ABpYndlkBC7tpyGY+vpUlWQ1nJMmFCAj+EZ2mnxOsMe/eTI2W4HGCMHr07U9HVFGQA3r
g9Kgn81ngDRglmJAB/n6/wD1qatch6bCbISUEzbQGzgHAPPWpLhoVZirjrwnUtz34pdymWSKZo5A
GBEhA+XAP+eKSNIYI8kZkwSoIOW981TsJXYonaO63gD51Kfe71HczlEIDKJSWIDHtjnHvSxOtzCw
K7AF3rIVwRjOR9P8KSZ45xH5atJwSHOMgEcfyrKyG1YwbuNklbdGWc+nQj6VE6eX8rfccevfrU91
cnzHZv8AWKemSDUUUwdXSQMARkbcH+dSlqVYmtFWO7Vd4dF9u9bSyLGpmxuRkKk+3IIrJhiWe4Qw
tweTjoPwrbSGRAfNdXiIwi9ecH9R/StLEPQjMhUgu6gEZOTn6D69KdcRqJjld29QwOeWOOp9/eop
YGlGBsBL4Dt1P4duallQQxqASXLAbgPlA75P4imkZ7skeQxCJV2H5irdDx6/r+tWJELINrLFg/NI
ckkkdPy5/Gq0SfZ1ZgUJJZCGwQPf9asRM4lkVW5T5eDtP/1/rUbM3jc4YPukVZScD0OMVNcjcw2t
kY4HpUBw3TJf0x1pACHAKk4P41JoSyyNIGYrzwCzVOGAU+YVX5cALjnpVb57g7c8e56U5kBQZyWA
5zQO4OUzIRnjgcVCEy3XjripSWQEpwPWiJFRw0uV3UhEYIVSTyp7e9WIUIKqMMTyDn9ajkOEwoUn
qSKntT5qbUBZ1XJB7ihATRwvLn5/lU5IJ65/z2qaMRyElkIYHht2On4/5xUUEhTIVlBwfkXnrVmG
Z3i2lMhQW25zt4z0/pWiM76jQN6yR5ZCPVuMYzSxu4JOPLic87uQRjBH55qDf83OS2OGPY+lXDCw
TdvLk8mJTwOuOvrTKEZSAeSEB4IOBjvTfNZi27GZCWAA4H4VLs27C6hwAR5RORzzz6c54qFYywbI
2BcjlsdfSqQmOWYzFzIzYI5wO/tTpJDsk52kkDk+3HvUUfl7/l8zJIB3cDpVgQpPM+SqYXkjGcdR
/QVdkJXFa6lMaqqr8x+Zl5+lNtlWaNkdsHb0Jx8vt+VOmXbGpJ3HG5iB1HpTGYG5ULGpKryexP0o
22BruNdiGZQCCMIu44yMVM6hi5kO1iBgnio3lKtgorMeFbrz/nFSucnY6neq7xjPy545P4Um2Skh
rFUj3GUPsxhW54FNZhmTgksARn+XuMU+TY0KoFIPbgc802dj5JQvkocDnmqiDQ1GBiKKcrnG7JHW
pJAFuvmO8rkgKcY4xxQTEgIDM54BU8np3poCuse1N8nIxsqyWJIFcyMn7sEf3iSx/oTRI3mzIygs
QAPnOeanLhJiwjCuTwD0PfpUMriVwgTCg7iV6c9//wBdBOgShdzeavy8YbHP0pPIYxgxkBOn40+a
XzIWjXco/h4OccfWiRAiRqPldyd6DHT16delUrBoQkuMEDO0EY9+9JIq7PlaTltxbPY0/wAp3Lnq
xyAx4FK0TxosQIPGevSm0NIjeFoYpJcErnKFuMn0xTmEjSBgw2vxkHp9asttjUKwLsQFwxzk+34V
EYyikZH3sbAM49Kzs0MbwsZGRgHLHGQadCcOjDDR+vSkQGaYqARgEkAjgAdcfgaD5kcJ2kNGRgFu
qjHpWlzK+pIWKOoyXhPOMYJNRSM0kpAQhlOAp4xSCQxAENv2jgge2cU/zGkJWQMz55YjpVJFke0j
KblJH3iO9I9urbSyjPUr6+9SmI7WXyyQDgAjAHNEzgLjOxu3P3TUtAiMLLkuMOpGMdKYsBkb/Uks
O+etS3BNugIb5up9/WjfmIOoKY+5uPAHqacdAauQYCO+9SqsOM9uKkcxSfKAR2LKeKjnYrOz7tyk
dV/WnD/Vv0I/uqcdehp6CTYrlGG3IwAcFWJH4+9KoVMh3bzB+A9qYsRiZRKSB1wO/FSvuZNw5HfI
6UtytBjgMinc2QckHrUUigDcAWYNyelOMeSjArjrx1p/llOQcvu5BGM5FHLYm3YTa5QiLPmFcH5f
5VHJHvBYMSE4IHrTijsgcS7TnAGeRihnwqhXbLcNzwapWBpjY14frz0yemachKKxYhm24Ht7019q
KME5wevGalMkfl7CnzDjPX8KV7BuV27tjeAcgj9TUrSMOdu527NSOpwUChUbPBHWjcTGm5mDoeQO
+B6U9ATsxGIUl8BXye3amFg6HDEt3IqR2V23Kuc8YNM2AgHOHUfTinYNiWRldAFbbJnDZ9KieXAH
Yjjj0pxVVUsMSHPU54pu0S/MflTtg+1JoV2OZQXVgxJbknoBUaqS3HGGxn60+RSMgEbeevJoQgHH
mYA6ZGM9cVKKCVPLZM8kDP402T75AJx0FOkSQcN68457UhickgZ988U7IlyBZArgAE4+8Ce3vRli
5UFdv+zx+FRkbDj7pHcU4AIxBG4ntVcqELKilN/IIY9D24qOaNGG7JB9uKf5rx7hGeD1yP5U4yMy
4CjrkZ5OanQsjPm4wASh5zURBY5YkY7evtU0oIXow7YNSrtUByfmVhx1Jo1BJFfG1TkYxxzzTowM
H5toJI4HAqQybGd1ORkgZqJXywLdOfehavUbZM7xqqq7FkAIwv8AOoHYnaSAFJ4A60m0sd2ec4xj
ipHKqcKgZRxgGkxXuNC56Zzngd6XeFZiQQ/Pv26U5wAQw+Tn7tEqYIVsc9zSKGsqqAT688fjSyjz
Xyp4ZgRx2phGH77e2fWlMbKu4BdpOR35oAJFO7cq7TyD+VRFeMDI+nOKn/eOztuxxxx/KmkAu29s
4PUd6LAICu4LgkgcmhwXi3Ahu3HpSbRv3/N+VNYMFGMjb6H+dJgKSQq5GAvXHpTQCrFuXPuO1OGX
3gnkDJBoGQMDhRzkCoauO4M29mfGwDrgd6bIuMDdgk8k0A5JwcE9eKk3BFde59RmktxEP3iVyflz
kj0p6IW5IyxPQUvyvnJOeRkdKV1eM5A3ADArRARuGLkA8cA47UEGMdyen05pwVhngA4OeetOZtu3
B+969jRoBGQAzMM7j0pSCMn7mOxpdgaVxgnrz07UMrMHyCGHPNLQQnlk8ZB44zSmJo1IYklegzxi
k8s/xZPYEd6e24oAxzxgUWTGRbNrjc3B7AU/Yu1wevb60kaBlJyevU087drEAnsPemkA0qMHduXP
tTSmQ2GwBxzUkjBQTj3+tI23aEyc/wAqbAjX7wDZKDgkVI+ccDGOKUAAv2J4570j9gCBxkjrilYB
hJJz0wfWnMN7OGJPFK2RGWJwAKQupXIB3E9T3pgHlHYTt5xwc0ZwW4z1HpipHTdFnGeMgDvTFHmt
hWAHoalDEIAb5clgOlLJGe45OePSjywDuAyh5om4yeVBOSKbERBfLOPukcgGnuMheOMck0uwuAWH
A6c9qG3BSGBAPTH6VICLtJIVcnrn2pZIyD1LcDv0o524AGR1Hc0ENuHGCe2O1FwIiB+OaUbo25Bx
6Cn7QZOTwOfpTmUFuAeeMULUBucFuqnP503nJXkgDpUsqEKdxyc4AxToxvBAJXjjjqauwiNiVIyA
AeOOKaFGSMYHv6VLIm0EEAe+Pah1HlgsSAMrhaLDI2jVsAcDP3vSmknbgjj1zUoTlgfmGDweKbKR
HgY3ZpbAAds7gwHvjnpSkZx/D+FOO1sR7Nu4kHI5pD8u4MCxGcDNFwGFiVIbOP7oNMLg54OD3qb/
AJZ5AIzzjNNMeVVgMAUALIodFO9mIP3cdKYV+XnqOM1IABt3dTnj0oZc/McYIGaYXI3UgljnHagA
uNo545zxipSCVIJwmaa+3cBkggYPHWgVxroMcDC47HvTTGdvJAPfBpCc4AbA9cVKU3OCflyuc+4q
RkW0E4z82emadgEkD73tUjFHKNtDYOPTNDnzeq5PXbjgCmBEVYAnrj1PWhsAkk9OcU5DuYDle+Se
1NGH6555wKNAHAZAOCeeKSQEYZug/umpZcLhVUMBgkmmMyAgBMDPPFADf4htAJbuKR8pydvPHWla
L7pA5IJJzgU+ZUUADDeo9KkCJtoOASB6A8U5trjlsYHAFMI2DoD7CnBsehXGPcGiwCONwOMBsdKa
pO/knIGM+hqYYLsSM4HWk+8WYsAO1KwETsS3J9s1MxEoxj5vUU1IyFORz1pQdjAlSv1p7gMdc8kH
PQUhBQ5HQ8fWnF8seDgn0qQIqqTkDHIzU2AhLc5weKWTLldo4yevrUjqGGMZx6n26UBdwKtlQfX1
pWHcQruhwW47qaRd2duSce9DRkvyp5HenOSRlRgnHA70WBDUHLENj2PelbcAGyAvqKcFKYOB6Ffz
p8jKpA2kDtj1/wA5pbDtcgC5G5m2p+tPUDPrg5/+vQo3Yyfp6+2aQ7uQRtYfe9TQ9QWhKu4lsucD
ofaodpVh8ueecmlILHOSAB60+bhievrQlYTdx2RJISSFO3gdOaYwBIO7OOBzTiMxZC5JGcntQsjk
N8oYt1PbOKNwTsOeRsHJBDdcimJH85wACemTxQQu4D5lYc4ByDStIjSZY7mPdutNIGxnzIQWcA55
z1FK2yZQzZLevtS+apYZBx3AoQoAMtkE9AOcVMhoCoDFcH0J6fjTZMsDnhiMcd8CpBKqxkbWY4Pt
imhlQB2zjBKhu9Go3YQuXXngHngdKaxXcAM4HcCndMll25JI2mnlA0GMYy3UUE3IQp8wbQdx6g8c
VIYMbkJYZPBBpSHEh4LHBHPB9sUrc4CZJwOCaTKQzBVRn+EE57mlyZpF3kI3T0qVnO1MBlOcE/5+
tNLli0ePu5OetUhNahLIXjAYgqowMcA//XqEJtBJJzjAFTqnnbjtGS3PPanMxUZXaEzgDPSnZBqR
l5Y4xwqnpkmnNxDwGJPftSypncTzz154pqqQgDHEZ6HpSsIkSMqdwlDcHcpGT+FNCGVnO85xu259
qljB8zzFKlex70jGIl85DA5Jz3q0xEW1sAksdx4B5qQgx+UWIA5yqnkehpyyAuAyEEMdpBok8tJG
X7r4wc81PUVxGZQzMnpkEjGW9qRot8ZLEB+u4GlwzxqDJk9QPenkRscjjAyVOOKphcgeHMTSNhO4
yM5PFSEBogrqCwGD+VKSYgV2GTcu44GfxpQhIJZWCY43f0rMYiqFw0m5Wx1xgVEJnhkJOcngk91+
lOMuSBkHdlenzAetTFUKIQD8w5b+VJFIDbLGuQSqkjHOO/pUgZ23NICYVzyxH4VFKuzcPLUljkr1
JNSGArAB0DfLhuD34H44pDv2JodjwmQruABxntUfmyIR5YypXnBxu9KHmYMVXgrkDAJAz1+tS2kZ
UqxIIOeExz9KkauxVIaQMVDHIJHf/PNStIiSbgp3ORtweAeOtMuCFchWIjJ4wOox/wDWoglZsBMo
F4+bgDPtTFp1HyosylI0OeBvXGT9DTbZCAwdOUP8WNqjH86l8qGCXzTuCD7xHT8B+NQXEnnx74iC
AQWH94c9fyoFp0LPkvv3S7VDKSAP0A/KomjadPK24ZQQCcnjn1pwnWQiVDklCpX73Tv75zTpk8mQ
IAQc4yT0HcGkNXI4S8UiLuALDHzHoQOuPX2okDTOGLtIV53qONvrT0VSzLjBRi7Px0A7VJdwxJ+9
Mu4gBgMEHJOSBx2oQNNjXWZYH+78y7lwepoFwVA3IcMCWBGMn8vX+tPjba+9QScbgWwQeMcgio5C
iAAuw6g4Gd3+fem2iVFx2HorxRoAuE5VWQg5Xr1z6CoXV/I82NniEZ2ZUjOD0x/nvT2hQokkfyKn
USdCen9agn81VLN8/wDDtOVU9eMUmJ36mTqIDS4Q+ah6NwD+P8qgeJQzMysmMAqPXuTU91cKLjcg
ULjGQAPyHaoQ2xj5Z5bqrjk1PU0Rb09SsuzGxOQHYgCtny5YYs5eTbnayqSAPX0rHtFYbX+Q84VR
0rdtGYWkZimZPKOQCcErnGB69aoltdRu154JcnYy9cqcnoev49vSkt2TCIC7QyEFwOo46hTUEchR
5JmVlZsEqMkEdAcH8e1SJHM0ykEyAcYGS/1p2MkuZlh7eHfHtSWRyu4MDwBx3H8u34Uy9nU3DFmO
3gAqhU47DjnGKkmV5Lh1jULFt3EgngEAcfzNT25CKrQlWJUDfgnj3z+FJpI2imjgWgaN2AbAUZzT
3fzSmRuxxx3prkxmQBQOg600yEAYJxnOffvUI0HsFDIOgxSFydyqOWPb0qaJVWDzHAbHIAPIqAyA
DCkCgCRAqKcksWHGDVeTO9g3J7VNGMsSemDx0FRsMsQMDJyPrSGLtAHdQfbNSwjOAwbGMEg9qiLn
bsOBznP9KsRZMYBG4jjB75osInaPZLgMuFXhlGRip/MdWT90cdyP4v8ACiytUk+bnzWJXb6DpVu6
tZonC7cttLYA6Y7GrSJuipOh81vLUqu7G084qSN/Jcs3AyOF5INaK2qBXaKQySD7yg5IJHYfSj+y
XNuD5iK+QCpPysRxn69OadhXRXSWK4Q52hXYglu5x1H/AOumzMvQAu8Z5i3feBz39anjtWjCNKoU
DIXD5yT/APXqN4VumO2UOO+TnHt+tPUXMmNdhBcEEKI8bgemfb88UsjKEEhAV2XacqDT7lEeQLIA


--=-7SiOlc9xVXlIsddwPZiY
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-7SiOlc9xVXlIsddwPZiY--



From xen-devel-bounces@lists.xen.org Mon Jan 20 19:58:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 19:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Kym-0008DZ-Nl; Mon, 20 Jan 2014 19:57:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5Kyk-0008DU-Oi
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 19:57:50 +0000
Received: from [85.158.139.211:19341] by server-3.bemta-5.messagelabs.com id
	3B/97-04773-DBF7DD25; Mon, 20 Jan 2014 19:57:49 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390247858!10882197!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDk4NTEgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22484 invoked from network); 20 Jan 2014 19:57:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 19:57:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,691,1384300800"; 
	d="asc'?scan'208";a="94634897"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 19:57:37 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 14:57:36 -0500
Message-ID: <1390247840.5727.2.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@eu.citrix.com>
Date: Mon, 20 Jan 2014 20:57:20 +0100
In-Reply-To: <control-reply-1390242602.7482@bugs.xenproject.org>
References: <1386984785.3980.96.camel@Solace>
	<1390241799.23576.42.camel@Solace>
	<control-reply-1390242602.7482@bugs.xenproject.org>
Organization: Citrix Ltd
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8889773671444663153=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8889773671444663153==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-TZqh6tPaV+Ixqg126W6s"

--=-TZqh6tPaV+Ixqg126W6s
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Mmm... Perhaps it's obvious, but I don't see it... What am I doing
wrong?

Dario

On lun, 2014-01-20 at 18:30 +0000, xen@bugs.xenproject.org wrote:
> Processing commands for xen@bugs.xenproject.org:
>=20
> > --=3D-udcPDT2/1ZUlzquM5prW
> Command failed: Unknown command `--=3D-udcPDT2/1ZUlzquM5prW'. at /srv/xen=
-devel-bugs/lib/emesinae/control.pl line 437, <M> line 29.
> Stop processing here.
>=20
> ---
> Xen Hypervisor Bug Tracker
> See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information o=
n reporting bugs
> Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues
>=20
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-TZqh6tPaV+Ixqg126W6s
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLdf6AACgkQk4XaBE3IOsQEPQCeP58cH3BByULogATaPRU2/ksy
j2wAni2YMsyDppqaDg2ZGssxkl+CRc66
=2s5J
-----END PGP SIGNATURE-----

--=-TZqh6tPaV+Ixqg126W6s--


--===============8889773671444663153==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8889773671444663153==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 20:15:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 20:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5LFx-0000g4-GK; Mon, 20 Jan 2014 20:15:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=009791a7e9=mlabriol@gdeb.com>)
	id 1W5LFw-0000fz-1e
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 20:15:36 +0000
Received: from [85.158.139.211:48682] by server-1.bemta-5.messagelabs.com id
	87/5A-21065-7E38DD25; Mon, 20 Jan 2014 20:15:35 +0000
X-Env-Sender: prvs=009791a7e9=mlabriol@gdeb.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390248933!10700021!1
X-Originating-IP: [153.11.250.40]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MCA9PiA2Mzk3OTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24254 invoked from network); 20 Jan 2014 20:15:34 -0000
Received: from mx1.gd-ms.com (HELO mx1.gd-ms.com) (153.11.250.40)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Jan 2014 20:15:34 -0000
Received: from elbmasnwh002.us-ct-eb01.gdeb.com ([153.11.13.41]
	helo=ebsmtp.gdeb.com) by mx1.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1W5LFq-0004l2-Dy; Mon, 20 Jan 2014 15:15:30 -0500
In-Reply-To: <20140120153827.GA24989@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
X-KeepSent: 2899D9D9:D33A4E3B-85257C66:006ECFD4;
 type=4; name=$KeepSent
Message-ID: <OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Mon, 20 Jan 2014 15:15:24 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx1.gd-ms.com  1W5LFq-0004l2-Dy
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 10:38:27 AM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael D Labriola <mlabriol@gdeb.com>, 
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> Date: 01/20/2014 10:38 AM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> 
> On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 10:14:36 AM:
> > 
> > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > Date: 01/20/2014 10:14 AM
> > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > 
> > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent crashes
> > > > with multiple older R600 series (HD 6470 and HD 6570) and unusably slow
> > > > graphics with a newer HD7000 (can see each line refresh individually on
> > > > radeonfb tty).  All 3 systems seem to work fine bare metal.
> > > 
> > > I hadn't been using DRM, just Xserver. Is that what you mean?
> > 
> > The R600 problems happen when in X, using OpenGL, on my dom0.  The
> > RadeonSI sluggishness is when using the KMS framebuffer device for a plain
> > text console login.
> 
> So the sluggishness is probably due to PAT not being enabled. This patch
> should be applied:
> 
> lkml.org/lkml/2011/11/8/406
> 
> (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> 
> and these two reverted:
> 
>  "xen/pat: Disable PAT support for now."
>  "xen/pat: Disable PAT using pat_enabled value."
> 
> Which is to say do:
> 
> git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1

Thanks!  I cherry-picked that patch out of your testing tree, reverted 
those 2 commits, recompiled and installed.  Definitely fixed the HD 7000 
sluggishness and appears to have fixed the R600 crashes (although it's 
only been running a few hours).

How come that patch didn't get into mainline?  It looks pretty innocuous 
to me...
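For anyone reproducing the fix: the revert step is plain `git revert <hash>` in the kernel tree. Below is a self-contained sketch of the mechanics on a throwaway repository; the repository, file name, and commit messages are illustrative, and only the two hashes in the comment come from this thread:

```shell
# Toy demonstration of the revert step on a scratch repository.
# In the real kernel tree the equivalent commands are the ones above:
#   git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
#   git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "pat enabled" > pat.txt
git add pat.txt
git commit -qm "x86: enable PAT"                          # stand-in base commit
echo "pat disabled" > pat.txt
git commit -qam "xen/pat: Disable PAT support for now."   # stand-in for the commit being reverted
git revert --no-edit HEAD >/dev/null                      # undo it, as described in the thread
cat pat.txt                                               # pat.txt is back to "pat enabled"
```

The revert creates a new commit that inverts the named one, so the change can later be re-applied cleanly if the underlying issue is fixed.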

---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:15:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:15:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MBo-0002zM-MY; Mon, 20 Jan 2014 21:15:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1W5MBm-0002zH-N6
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:15:22 +0000
Received: from [193.109.254.147:51776] by server-13.bemta-14.messagelabs.com
	id DA/D1-19374-AE19DD25; Mon, 20 Jan 2014 21:15:22 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390252519!12069443!1
X-Originating-IP: [209.85.214.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18066 invoked from network); 20 Jan 2014 21:15:20 -0000
Received: from mail-ob0-f170.google.com (HELO mail-ob0-f170.google.com)
	(209.85.214.170)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:15:20 -0000
Received: by mail-ob0-f170.google.com with SMTP id va2so4200403obc.1
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 13:15:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=D6frsYSZf4E+Bb61k+Br/LmySHYTjmn868tANmJUHCk=;
	b=dHXfhabIgg4X7ANskSJZsrNAqzKuxL0Z7nI1QIW7AZKay5rQQinonk9DjZzi/EmuNK
	RSt+1AZJ4kbG/IDfN+slIZW9/KoCl4cNXZlKFNnH3/8iVa53nP7p59o08nZsjggzWg70
	xZDqdNFLLTENTx4qP6q9MDG9ubR3N5ydcsFhufqv1WcXdYRohhvyIU4Xgbdg3zJcDaRu
	EdH6OcQLBAmjUMLYvKAwpgstEqc3/tm0iAwAgsae1vBdlFZARvWDhNpkYVI8r62h9Yd5
	KSXrRUlxAR/y8ia7N+dSEevMWtQdZYC6PzwbxojwxKMTOcxxx/67KRgnlTKHI+Ar7L30
	Vlww==
MIME-Version: 1.0
X-Received: by 10.60.148.197 with SMTP id tu5mr16994028oeb.11.1390252518632;
	Mon, 20 Jan 2014 13:15:18 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Mon, 20 Jan 2014 13:15:18 -0800 (PST)
In-Reply-To: <b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
	<b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
Date: Mon, 20 Jan 2014 13:15:18 -0800
Message-ID: <CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: Gordan Bobic <gordan@bobich.net>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 7:29 AM, Gordan Bobic <gordan@bobich.net> wrote:
> On 2014-01-20 15:19, Shakeel Butt wrote:
>>
>> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net> wrote:
>>>
>>> On 2014-01-20 13:24, Wu, Feng wrote:
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: xen-devel-bounces@lists.xen.org
>>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>>>> Sent: Monday, January 20, 2014 8:50 PM
>>>>> To: xen-devel@lists.xen.org
>>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>> upstream)
>>>>>
>>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
>>>>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>>>>> > <stefano.stabellini@eu.citrix.com> wrote:
>>>>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>>>> >>> > -----Original Message-----
>>>>> >>> > From: xen-devel-bounces@lists.xen.org
>>>>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel
>>>>> >>> > Butt
>>>>> >>> > Sent: Monday, January 20, 2014 1:48 PM
>>>>> >>> > To: xen-devel@lists.xen.org
>>>>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>> >>> > upstream)
>>>>> >>> >
>>>>> >>> > Hi all,
>>>>> >>> >
>>>>> >>> > Is it possible to do vga passthrough on xen-unstable with
>>>>> >>> > qemu-xen
>>>>> >>> > as
>>>>> >>> > device model? I tried but I am getting error 'gfx_passthru'
>>>>> >>> > invalid
>>>>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>>>>> >>> > traditional i.e. qemu-dm.
>>>>> >>>
>>>>> >>> As far as I know, only qemu-traditional supports vga pass-through
>>>>> >>> right now.
>>>>> >>
>>>>> >> Right.
>>>>> >> It is not possible to assign your primary VGA card to a VM with
>>>>> >> qemu-xen. You should be able to assign your secondary VGA card
>>>>> >> though.
>>>>> >
>>>>> > Let me understand this correctly. If I have two VGA cards then I can
>>>>> > pass through the secondary VGA card (in Dom0) to an HVM guest as its
>>>>> > primary VGA card. Is this right, and if yes how can I do it?
>>>>>
>>>>> Passing any VGA card as a primary-in-domU has always been problematic.
>>>>
>>>>
>>>>
>>>> I think passing VGA card as a primary-in-domU works well in
>>>> Qemu-traditional, right?
>>>
>>>
>>>
>>> I never managed to get it working - it certainly isn't just a matter of
>>> enabling the option. There is at least the matter of also side-loading
>>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
>>> out all ATI and Nvidia GPUs of the past 2-3 generations.
>>>
>>> Having said that - I never found a particularly good use-case for
>>> primary passthrough. Once the GPU driver loads it works just the
>>> same for all intents and purposes.
>>>
>>
>> I have successfully managed to pass through a VGA card as primary to DomU
>> with qemu-traditional. I am trying to do the same with upstream qemu
>> because I need some new features of upstream qemu which are not available
>> in qemu-traditional.
>>
>> With upstream qemu I can pass the VGA card through as secondary to DomU
>> and am able to see it in Device Manager in the DomU (Windows 7), but Windows
>> couldn't use it and displays an error that another card is being used as
>> the display. I want Windows to use the passed-through VGA card as its display.
>
>
> Disable the other (emulated) card in device manager and reboot
> the domU. That should fix it.

This is not working for me. I am disabling the device and even uninstalling the
driver, but on reboot Windows 7 reinstalls the driver for the emulated VGA and
makes it the primary VGA.

Is there a way to stop upstream qemu from emulating VGA? I tried 'nographics'
in xl.conf but Windows still sees the emulated VGA.

Shakeel
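
For reference, the qemu-traditional setup that did work earlier in the thread would be driven by a guest config along these lines. This is only a sketch: the PCI BDF, memory size, and every other value below are illustrative placeholders, not taken from the thread.

```
# Hypothetical xl HVM guest config for the qemu-traditional passthrough case
builder              = "hvm"
memory               = 4096
device_model_version = "qemu-xen-traditional"
# GPU passthrough knobs; "01:00.0" is a placeholder BDF
gfx_passthru         = 1
pci                  = [ "01:00.0" ]
```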

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MIz-0003T0-Kz; Mon, 20 Jan 2014 21:22:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5MIy-0003Su-MP
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:22:48 +0000
Received: from [85.158.143.35:46085] by server-3.bemta-4.messagelabs.com id
	3C/72-32360-8A39DD25; Mon, 20 Jan 2014 21:22:48 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390252966!12763541!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30323 invoked from network); 20 Jan 2014 21:22:47 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 21:22:47 -0000
Received: from [10.2.3.3] (unknown [10.2.3.3])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by external.sentinel2 (Postfix) with ESMTPSA id 8AE62221BEA
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 21:22:46 +0000 (GMT)
Message-ID: <52DD93A6.30100@bobich.net>
Date: Mon, 20 Jan 2014 21:22:46 +0000
From: Gordan Bobic <gordan@bobich.net>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>	<b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
	<CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
In-Reply-To: <CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/20/2014 09:15 PM, Shakeel Butt wrote:
> On Mon, Jan 20, 2014 at 7:29 AM, Gordan Bobic <gordan@bobich.net> wrote:
>> On 2014-01-20 15:19, Shakeel Butt wrote:
>>>
>>> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net> wrote:
>>>>
>>>> On 2014-01-20 13:24, Wu, Feng wrote:
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: xen-devel-bounces@lists.xen.org
>>>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>>>>> Sent: Monday, January 20, 2014 8:50 PM
>>>>>> To: xen-devel@lists.xen.org
>>>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>>> upstream)
>>>>>>
>>>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
>>>>>>> On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>>>>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>>>>>> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: xen-devel-bounces@lists.xen.org
>>>>>>>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel
>>>>>>>>>> Butt
>>>>>>>>>> Sent: Monday, January 20, 2014 1:48 PM
>>>>>>>>>> To: xen-devel@lists.xen.org
>>>>>>>>>> Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>>>>>>> upstream)
>>>>>>>>>>
>>>>>>>>>> Hi all,
>>>>>>>>>>
>>>>>>>>>> Is it possible to do vga passthrough on xen-unstable with
>>>>>>>>>> qemu-xen
>>>>>>>>>> as
>>>>>>>>>> device model? I tried but I am getting error 'gfx_passthru'
>>>>>>>>>> invalid
>>>>>>>>>> parameter for qemu-xen. I am able to do passthrough with qemu
>>>>>>>>>> traditional i.e. qemu-dm.
>>>>>>>>>
>>>>>>>>> As far as I know, only qemu-traditional supports vga pass-through
>>>>>>>>> right now.
>>>>>>>>
>>>>>>>> Right.
>>>>>>>> It is not possible to assign your primary VGA card to a VM with
>>>>>>>> qemu-xen. You should be able to assign your secondary VGA card
>>>>>>>> though.
>>>>>>>
>>>>>>> Let me understand this correctly. If I have two VGA cards then I can
>>>>>>> pass through the secondary VGA card (in Dom0) to an HVM guest as its
>>>>>>> primary VGA card. Is this right, and if yes how can I do it?
>>>>>>
>>>>>> Passing any VGA card as a primary-in-domU has always been problematic.
>>>>>
>>>>>
>>>>>
>>>>> I think passing VGA card as a primary-in-domU works well in
>>>>> Qemu-traditional, right?
>>>>
>>>>
>>>>
>>>> I never managed to get it working - it certainly isn't just a matter of
>>>> enabling the option. There is at least the matter of also side-loading
>>>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
>>>> out all ATI and Nvidia GPUs of the past 2-3 generations.
>>>>
>>>> Having said that - I never found a particularly good use-case for
>>>> primary passthrough. Once the GPU driver loads it works just the
>>>> same for all intents and purposes.
>>>>
>>>
>>> I have successfully managed to pass through a VGA card as primary to DomU
>>> with qemu-traditional. I am trying to do the same with upstream qemu
>>> because I need some new features of upstream qemu which are not available
>>> in qemu-traditional.
>>>
>>> With upstream qemu I can pass the VGA card through as secondary to DomU
>>> and am able to see it in Device Manager in the DomU (Windows 7), but Windows
>>> couldn't use it and displays an error that another card is being used as
>>> the display. I want Windows to use the passed-through VGA card as its display.
>>
>>
>> Disable the other (emulated) card in device manager and reboot
>> the domU. That should fix it.
>
> This is not working for me. I am disabling the device and even uninstalling the
> driver, but on reboot Windows 7 reinstalls the driver for the emulated VGA and
> makes it the primary VGA.

Don't uninstall the driver, just disable the device. If you uninstall 
it, it will get re-detected. If you disable it, it should show up with a 
red cross next to it, and your secondary GPU will start working the way 
you want it to.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:23:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MIz-0003T0-Kz; Mon, 20 Jan 2014 21:22:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5MIy-0003Su-MP
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:22:48 +0000
Received: from [85.158.143.35:46085] by server-3.bemta-4.messagelabs.com id
	3C/72-32360-8A39DD25; Mon, 20 Jan 2014 21:22:48 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390252966!12763541!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30323 invoked from network); 20 Jan 2014 21:22:47 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 21:22:47 -0000
Received: from [10.2.3.3] (unknown [10.2.3.3])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by external.sentinel2 (Postfix) with ESMTPSA id 8AE62221BEA
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 21:22:46 +0000 (GMT)
Message-ID: <52DD93A6.30100@bobich.net>
Date: Mon, 20 Jan 2014 21:22:46 +0000
From: Gordan Bobic <gordan@bobich.net>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>	<b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
	<CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
In-Reply-To: <CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/20/2014 09:15 PM, Shakeel Butt wrote:
> On Mon, Jan 20, 2014 at 7:29 AM, Gordan Bobic <gordan@bobich.net> wrote:
>> On 2014-01-20 15:19, Shakeel Butt wrote:
>>>
>>> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net> wrote:
>>>>
>>>> On 2014-01-20 13:24, Wu, Feng wrote:
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: xen-devel-bounces@lists.xen.org
>>>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
>>>>>> Sent: Monday, January 20, 2014 8:50 PM
>>>>>> To: xen-devel@lists.xen.org
>>>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>>> upstream)
>>>>>>
>>>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
>>>>>>> On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
>>>>>>> <stefano.stabellini@eu.citrix.com> wrote:
>>>>>>>> On Mon, 20 Jan 2014, Wu, Feng wrote:
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: xen-devel-bounces@lists.xen.org
>>>>>>>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel
>>>>>>>>>> Butt
>>>>>>>>>> Sent: Monday, January 20, 2014 1:48 PM
>>>>>>>>>> To: xen-devel@lists.xen.org
>>>>>>>>>> Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
>>>>>>>>>> upstream)
>>>>>>>>>>
>>>>>>>>>> Hi all,
>>>>>>>>>>
>>>>>>>>>> Is it possible to do vga passthrough on xen-unstable with
>>>>>>>>>> qemu-xen
>>>>>>>>>> as
>>>>>>>>>> device model? I tried but I am getting error 'gfx_passthru'
>>>>>>>>>> invalid
>>>>>>>>>> parameter for qemu-xen. I am able to do passthrough with qemu
>>>>>>>>>> traditional i.e. qemu-dm.
>>>>>>>>>
>>>>>>>>> As far as I know, only qemu-traditional supports vga pass-through
>>>>>>>>> right now.
>>>>>>>>
>>>>>>>> Right.
>>>>>>>> It is not possible to assign your primary VGA card to a VM with
>>>>>>>> qemu-xen. You should be able to assign your secondary VGA card
>>>>>>>> though.
>>>>>>>
>>>>>>> Let me understand this correctly. If I have two VGA cards then I can
>>>>>>> passthrough
>>>>>>> secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>>>>>>> right and
>>>>>>> if yes how can I do it?
>>>>>>
>>>>>> Passing any VGA card as a primary-in-domU has always been problematic.
>>>>>
>>>>>
>>>>>
>>>>> I think passing VGA card as a primary-in-domU works well in
>>>>> Qemu-traditional, right?
>>>>
>>>>
>>>>
>>>> I never managed to get it working - it certainly isn't just a matter of
>>>> enabling the option. There is at least the matter of also side-loading
>>>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
>>>> out all ATI and Nvidia GPUs of the past 2-3 generations.
>>>>
>>>> Having said that - I never found a particularly good use-case for
>>>> primary passthrough. Once the GPU driver loads it works just the
>>>> same for all intents and purposes.
>>>>
>>>
>>> I have successfully managed to pass through a VGA card as primary to a DomU
>>> with qemu-traditional. I am trying to do the same with upstream qemu
>>> because I need some new features of upstream qemu which are not available
>>> in qemu-traditional.
>>>
>>> With upstream qemu I can pass the card through as a secondary VGA card to
>>> the DomU and can see it in Device Manager in the DomU (Windows 7), but
>>> Windows couldn't use it and displays an error that another card is being
>>> used as the display. I want Windows to use the passed-through VGA card as
>>> its display.
>>
>>
>> Disable the other (emulated) card in device manager and reboot
>> the domU. That should fix it.
>
> This is not working for me. I am disabling the device and even uninstalling
> the driver, but on reboot Windows 7 installs the driver for the emulated VGA
> and makes it the primary VGA.

Don't uninstall the driver, just disable the device. If you uninstall 
it, it will get re-detected. If you disable it, it should show up with a 
red cross next to it, and your secondary GPU will start working the way 
you want it to.
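For anyone following along, the setup being discussed corresponds roughly to
the following xl domU config. This is only a sketch; the domain name, memory
size and PCI BDF are placeholders, and option availability depends on the Xen
version in use:

```
# domU config sketch for GPU passthrough (all values are placeholders)
builder = "hvm"
name = "win7-gpu"
memory = 4096
# Secondary passthrough with upstream qemu: the GPU shows up as a second
# display adapter next to the emulated VGA device.
device_model_version = "qemu-xen"
pci = [ "01:00.0" ]
# Primary/VGA passthrough, as discussed above, requires qemu-traditional:
# device_model_version = "qemu-xen-traditional"
# gfx_passthru = 1
```

With the secondary-passthrough variant, the advice above applies: disable the
emulated adapter inside the Windows domU and reboot it.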



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:23:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MJz-0003X4-3h; Mon, 20 Jan 2014 21:23:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5MJx-0003Wp-G9
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:23:49 +0000
Received: from [193.109.254.147:8346] by server-1.bemta-14.messagelabs.com id
	49/B8-15600-4E39DD25; Mon, 20 Jan 2014 21:23:48 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390253026!12101222!1
X-Originating-IP: [209.85.214.42]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32445 invoked from network); 20 Jan 2014 21:23:47 -0000
Received: from mail-bk0-f42.google.com (HELO mail-bk0-f42.google.com)
	(209.85.214.42)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:23:47 -0000
Received: by mail-bk0-f42.google.com with SMTP id 6so578898bkj.29
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 13:23:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=QE9iGpqVqujFUuGBFjKjQVP2Qj4x5noo0DdQto6FblE=;
	b=Z091g1uyWmJ5IrjuVzhmWRwO7hv1fzcHco9QqoEVoxDtKOARlU8CA+JiyX9KTWgSTn
	W4lJL7ial4Ixwifx6LLoV6PBsS5mOq6HmYRVjIiTomte2Ehf/fDd6M9TzmxV3IwRgMVH
	6i+ppKHi4bCG6gnxYgvdxktXKO5j1/8FJZybpKrhEOy1Gq+3iJ0DiFGzPr2slckZ4T9l
	6d3j5rK+jAYYsyVn1ZFxLnAZIO7Xv+YD4Zz/oMtU43P8o9l31+1j3+/A9Y+qQEEfDGC5
	kV12Ryl4CxcV9jSO31JK48Enp3X50BI2F2FWEYkeVqnSbnPAiOLa/yAjjDa1GhTxbc9/
	solQ==
X-Gm-Message-State: ALoCoQmyPFwY10DYmasM5bSIZVgKbVKFUQat6TDdewgTG2VFCBw29sn0NOWhQUyRm4RmwiCdUgMp
MIME-Version: 1.0
X-Received: by 10.205.77.131 with SMTP id zi3mr4666bkb.101.1390253026436; Mon,
	20 Jan 2014 13:23:46 -0800 (PST)
Received: by 10.205.25.73 with HTTP; Mon, 20 Jan 2014 13:23:46 -0800 (PST)
X-Originating-IP: [87.0.89.102]
In-Reply-To: <CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
	<b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
	<CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
Date: Mon, 20 Jan 2014 22:23:46 +0100
Message-ID: <CABMPFzii=WZ4A6ycBWr+OjSyBFGAP-j8v0O3nGF+5bHDfGCifg@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: Shakeel Butt <shakeel.butt@gmail.com>
Cc: Gordan Bobic <gordan@bobich.net>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6462110989610894312=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6462110989610894312==
Content-Type: multipart/alternative; boundary=f46d041706a1ca3c3404f06d81cf

--f46d041706a1ca3c3404f06d81cf
Content-Type: text/plain; charset=ISO-8859-1

2014/1/20 Shakeel Butt <shakeel.butt@gmail.com>

> On Mon, Jan 20, 2014 at 7:29 AM, Gordan Bobic <gordan@bobich.net> wrote:
> > On 2014-01-20 15:19, Shakeel Butt wrote:
> >>
> >> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net>
> wrote:
> >>>
> >>> On 2014-01-20 13:24, Wu, Feng wrote:
> >>>>>
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: xen-devel-bounces@lists.xen.org
> >>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> >>>>> Sent: Monday, January 20, 2014 8:50 PM
> >>>>> To: xen-devel@lists.xen.org
> >>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu
> >>>>> upstream)
> >>>>>
> >>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
> >>>>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> >>>>> > <stefano.stabellini@eu.citrix.com> wrote:
> >>>>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
> >>>>> >>> > -----Original Message-----
> >>>>> >>> > From: xen-devel-bounces@lists.xen.org
> >>>>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel
> >>>>> >>> > Butt
> >>>>> >>> > Sent: Monday, January 20, 2014 1:48 PM
> >>>>> >>> > To: xen-devel@lists.xen.org
> >>>>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
> >>>>> >>> > upstream)
> >>>>> >>> >
> >>>>> >>> > Hi all,
> >>>>> >>> >
> >>>>> >>> > Is it possible to do vga passthrough on xen-unstable with
> >>>>> >>> > qemu-xen
> >>>>> >>> > as
> >>>>> >>> > device model? I tried but I am getting error 'gfx_passthru'
> >>>>> >>> > invalid
> >>>>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
> >>>>> >>> > traditional i.e. qemu-dm.
> >>>>> >>>
> >>>>> >>> As far as I know, only qemu-traditional supports vga pass-through
> >>>>> >>> right now.
> >>>>> >>
> >>>>> >> Right.
> >>>>> >> It is not possible to assign your primary VGA card to a VM with
> >>>>> >> qemu-xen. You should be able to assign your secondary VGA card
> >>>>> >> though.
> >>>>> >
> >>>>> > Let me understand this correctly. If I have two VGA cards then I
> can
> >>>>> > passthrough
> >>>>> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is
> this
> >>>>> > right and
> >>>>> > if yes how can I do it?
> >>>>>
> >>>>> Passing any VGA card as a primary-in-domU has always been
> problematic.
> >>>>
> >>>>
> >>>>
> >>>> I think passing VGA card as a primary-in-domU works well in
> >>>> Qemu-traditional, right?
> >>>
> >>>
> >>>
> >>> I never managed to get it working - it certainly isn't just a matter of
> >>> enabling the option. There is at least the matter of also side-loading
> >>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
> >>> out all ATI and Nvidia GPUs of the past 2-3 generations.
> >>>
> >>> Having said that - I never found a particularly good use-case for
> >>> primary passthrough. Once the GPU driver loads it works just the
> >>> same for all intents and purposes.
> >>>
> >>
> >> I have successfully managed to pass through a VGA card as primary to a
> >> DomU with qemu-traditional. I am trying to do the same with upstream qemu
> >> because I need some new features of upstream qemu which are not available
> >> in qemu-traditional.
> >>
> >> With upstream qemu I can pass the card through as a secondary VGA card to
> >> the DomU and can see it in Device Manager in the DomU (Windows 7), but
> >> Windows couldn't use it and displays an error that another card is being
> >> used as the display. I want Windows to use the passed-through VGA card as
> >> its display.
> >
> >
> > Disable the other (emulated) card in device manager and reboot
> > the domU. That should fix it.
>
> This is not working for me. I am disabling the device and even uninstalling
> the driver, but on reboot Windows 7 installs the driver for the emulated VGA
> and makes it the primary VGA.
>
> Is there a way to stop upstream qemu from emulating a VGA card? I tried
> 'nographics' in xl.conf, but Windows still sees the emulated VGA.
>

Try with this:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg00725.html
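As general background for the passthrough attempts above (this is separate
from the patch linked), the device has to be made assignable from dom0 before
xl can hand it to a guest. A sketch, where the BDF 01:00.0 and the domain name
are placeholders:

```
# dom0: make the secondary GPU assignable, then attach it to the domU
xl pci-assignable-add 01:00.0
xl pci-assignable-list        # the device should now appear in this list
xl pci-attach win7-gpu 01:00.0
```

Alternatively, list the device in the domU config with pci = [ "01:00.0" ] so
it is attached at domain creation.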


>
> Shakeel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--f46d041706a1ca3c3404f06d81cf--


--===============6462110989610894312==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6462110989610894312==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 21:23:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MJz-0003X4-3h; Mon, 20 Jan 2014 21:23:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5MJx-0003Wp-G9
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:23:49 +0000
Received: from [193.109.254.147:8346] by server-1.bemta-14.messagelabs.com id
	49/B8-15600-4E39DD25; Mon, 20 Jan 2014 21:23:48 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390253026!12101222!1
X-Originating-IP: [209.85.214.42]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32445 invoked from network); 20 Jan 2014 21:23:47 -0000
Received: from mail-bk0-f42.google.com (HELO mail-bk0-f42.google.com)
	(209.85.214.42)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:23:47 -0000
Received: by mail-bk0-f42.google.com with SMTP id 6so578898bkj.29
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 13:23:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=QE9iGpqVqujFUuGBFjKjQVP2Qj4x5noo0DdQto6FblE=;
	b=Z091g1uyWmJ5IrjuVzhmWRwO7hv1fzcHco9QqoEVoxDtKOARlU8CA+JiyX9KTWgSTn
	W4lJL7ial4Ixwifx6LLoV6PBsS5mOq6HmYRVjIiTomte2Ehf/fDd6M9TzmxV3IwRgMVH
	6i+ppKHi4bCG6gnxYgvdxktXKO5j1/8FJZybpKrhEOy1Gq+3iJ0DiFGzPr2slckZ4T9l
	6d3j5rK+jAYYsyVn1ZFxLnAZIO7Xv+YD4Zz/oMtU43P8o9l31+1j3+/A9Y+qQEEfDGC5
	kV12Ryl4CxcV9jSO31JK48Enp3X50BI2F2FWEYkeVqnSbnPAiOLa/yAjjDa1GhTxbc9/
	solQ==
X-Gm-Message-State: ALoCoQmyPFwY10DYmasM5bSIZVgKbVKFUQat6TDdewgTG2VFCBw29sn0NOWhQUyRm4RmwiCdUgMp
MIME-Version: 1.0
X-Received: by 10.205.77.131 with SMTP id zi3mr4666bkb.101.1390253026436; Mon,
	20 Jan 2014 13:23:46 -0800 (PST)
Received: by 10.205.25.73 with HTTP; Mon, 20 Jan 2014 13:23:46 -0800 (PST)
X-Originating-IP: [87.0.89.102]
In-Reply-To: <CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<0e1966027bab98420993970d931e7e92@mail.shatteredsilicon.net>
	<CAGj-7pXyL5e6noT64eeevq68RqMcy1FT2G64jPwAugCY3W_6_w@mail.gmail.com>
	<b8379f4a331d246f13155aebb2528ac5@mail.shatteredsilicon.net>
	<CAGj-7pV77ou=qQ4J_qT4yakSaOGJPDU0RjH54EP=EmcDYv3VBw@mail.gmail.com>
Date: Mon, 20 Jan 2014 22:23:46 +0100
Message-ID: <CABMPFzii=WZ4A6ycBWr+OjSyBFGAP-j8v0O3nGF+5bHDfGCifg@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: Shakeel Butt <shakeel.butt@gmail.com>
Cc: Gordan Bobic <gordan@bobich.net>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6462110989610894312=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6462110989610894312==
Content-Type: multipart/alternative; boundary=f46d041706a1ca3c3404f06d81cf

--f46d041706a1ca3c3404f06d81cf
Content-Type: text/plain; charset=ISO-8859-1

2014/1/20 Shakeel Butt <shakeel.butt@gmail.com>

> On Mon, Jan 20, 2014 at 7:29 AM, Gordan Bobic <gordan@bobich.net> wrote:
> > On 2014-01-20 15:19, Shakeel Butt wrote:
> >>
> >> On Mon, Jan 20, 2014 at 5:31 AM, Gordan Bobic <gordan@bobich.net>
> wrote:
> >>>
> >>> On 2014-01-20 13:24, Wu, Feng wrote:
> >>>>>
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: xen-devel-bounces@lists.xen.org
> >>>>> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Gordan Bobic
> >>>>> Sent: Monday, January 20, 2014 8:50 PM
> >>>>> To: xen-devel@lists.xen.org
> >>>>> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu
> >>>>> upstream)
> >>>>>
> >>>>> On 2014-01-20 12:31, Shakeel Butt wrote:
> >>>>> > On Mon, Jan 20, 2014 at 4:09 AM, Stefano Stabellini
> >>>>> > <stefano.stabellini@eu.citrix.com> wrote:
> >>>>> >> On Mon, 20 Jan 2014, Wu, Feng wrote:
> >>>>> >>> > -----Original Message-----
> >>>>> >>> > From: xen-devel-bounces@lists.xen.org
> >>>>> >>> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shakeel
> >>>>> >>> > Butt
> >>>>> >>> > Sent: Monday, January 20, 2014 1:48 PM
> >>>>> >>> > To: xen-devel@lists.xen.org
> >>>>> >>> > Subject: [Xen-devel] vga passthrough with qemu-xen (or qemu
> >>>>> >>> > upstream)
> >>>>> >>> >
> >>>>> >>> > Hi all,
> >>>>> >>> >
> >>>>> >>> > Is it possible to do vga passthrough on xen-unstable with
> >>>>> >>> > qemu-xen
> >>>>> >>> > as
> >>>>> >>> > device model? I tried but I am getting error 'gfx_passthru'
> >>>>> >>> > invalid
> >>>>> >>> > parameter for qemu-xen. I am able to do passthrough with qemu
> >>>>> >>> > traditional i.e. qemu-dm.
> >>>>> >>>
> >>>>> >>> As far as I know, only qemu-traditional supports vga pass-through
> >>>>> >>> right now.
> >>>>> >>
> >>>>> >> Right.
> >>>>> >> It is not possible to assign your primary VGA card to a VM with
> >>>>> >> qemu-xen. You should be able to assign your secondary VGA card
> >>>>> >> though.
> >>>>> >
> >>>>> > Let me understand this correctly. If I have two VGA cards then I
> can
> >>>>> > passthrough
> >>>>> > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is
> this
> >>>>> > right and
> >>>>> > if yes how can I do it?
> >>>>>
> >>>>> Passing any VGA card as a primary-in-domU has always been
> problematic.
> >>>>
> >>>>
> >>>>
> >>>> I think passing VGA card as a primary-in-domU works well in
> >>>> Qemu-traditional, right?
> >>>
> >>>
> >>>
> >>> I never managed to get it working - it certainly isn't just a matter of
> >>> enabling the option. There is at least the matter of also side-loading
> >>> the VGA BIOS, and IIRC that was limited to 64KB in size, which rules
> >>> out all ATI and Nvidia GPUs of the past 2-3 generations.
> >>>
> >>> Having said that - I never found a particularly good use-case for
> >>> primary passthrough. Once the GPU driver loads it works just the
> >>> same for all intents and purposes.
> >>>
> >>
> >> I have successfully managed to passthrough VGA card as primary to DomU
> >> with qemu traditional. I am trying to do the same with upstream qemu
> >> because
> >>  I need some new features of the upstream qemu which are not available
> >> in qemu traditional.
> >>
> >> With qemu upstream I can passthrough as secondary VGA card to DomU
> >> and able to see it in device manager in DomU (Windows 7) but Windows
> >> couldn't use it and display some error that another card is being used
> as
> >> display. I want Windows to use the passthroughed vga card  as its
> display.
> >
> >
> > Disable the other (emulated) card in device manager and reboot
> > the domU. That should fix it.
>
> This is not working for me. I am disabling the device and even
> uninstalling the driver, but on reboot Windows 7 installs the driver for
> the emulated VGA and makes it the primary VGA.
>
> Is there a way to stop upstream qemu from emulating a VGA card? I tried
> 'nographics' in xl.conf but Windows still sees the emulated VGA.
>

Try with this:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg00725.html


>
> Shakeel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--f46d041706a1ca3c3404f06d81cf--


--===============6462110989610894312==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6462110989610894312==--


From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKm-0003dh-VO; Mon, 20 Jan 2014 21:24:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKl-0003dO-J3
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:39 +0000
Received: from [85.158.137.68:63330] by server-10.bemta-3.messagelabs.com id
	51/07-23989-6149DD25; Mon, 20 Jan 2014 21:24:38 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390253076!10322762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13526 invoked from network); 20 Jan 2014 21:24:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595726"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:36 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:35 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:20 +0000
Message-ID: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant mapping
	with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A long-known problem of the upstream netback implementation is that on the
TX path (from guest to Dom0) it copies the whole packet from guest memory
into Dom0. That became a real bottleneck with 10Gb NICs, and generally it's
a huge performance penalty. The classic kernel version of netback used grant
mapping, and to get notified when a page could be unmapped it used page
destructors. Unfortunately that destructor is not an upstreamable solution.
Ian Campbell's skb fragment destructor patch series [1] tried to solve this
problem, but it was very invasive on the network stack's code and therefore
hasn't progressed very well.

This patch series uses the SKBTX_DEV_ZEROCOPY flag to tell the stack that
netback needs to know when the skb is freed. That is how KVM solved the
same problem, and based on my initial tests it can do the same for us.
Avoiding the extra copy boosted TX throughput from 6.8 Gbps to 7.9 Gbps
(I used a slower Interlagos box, both Dom0 and guest on an upstream kernel,
on the same NUMA node, running iperf 2.0.5; the remote end was a bare metal
box on the same 10Gb switch).

Based on my investigation the packet only gets copied if it is delivered to
the Dom0 stack, which is due to this patch [2]. That's a bit unfortunate,
but luckily it doesn't cause a major regression for this use case. In the
future we should try to eliminate that copy somehow.

There are a few spinoff tasks which will be addressed in separate patches:
- grant copy the header directly instead of map and memcpy, which should
  help us avoid TLB flushing
- use something other than ballooned pages
- fix grant map to use page->index properly

I will run some more extensive tests, but some basic XenRT tests have
already passed with good results.

I've tried to break the series down into smaller patches, with mixed
results, so I welcome suggestions on that part as well:
1: Introduce TX grant map definitions
2: Change TX path from grant copy to mapping
3: Remove old TX grant copy definitions and fix indentations
4: Change RX path for mapped SKB fragments
5: Add stat counters for zerocopy
6: Handle guests with too many frags
7: Add stat counters for frag_list skbs
8: Timeout packets in RX path
9: Aggregate TX unmap operations

v2: I've fixed some smaller things, see the individual patches. I've added
a few new stat counters, and handling for the important use case where an
older guest sends lots of slots. Instead of delayed copy we now time out
packets on the RX path, based on the assumption that packets shouldn't get
stuck anywhere else. Finally, some unmap batching to avoid too many TLB
flushes.

v3: Apart from fixing a few things mentioned in the responses, the
important change is using the hypercall directly for grant [un]mapping, so
we can avoid the m2p override.

v4: Now we use a new grant mapping API to avoid m2p_override. The RX queue
timeout logic has also changed.

v5: Only minor fixes based on Wei's comments

[1] http://lwn.net/Articles/491522/
[2] https://lkml.org/lkml/2012/7/20/363

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
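
[Editor's illustration] The RX-path timeout and TX dealloc machinery in
this series hinge on a simple producer/consumer ring of pending indices:
the zerocopy callback publishes slots, and the dealloc thread drains them.
A rough userspace sketch of that ring (MAX_PENDING_REQS and pending_index
mirror the patches; the struct and function names here are illustrative,
not the kernel code, and the kernel's smp_wmb()/smp_rmb() barriers are
only marked as comments):

```c
#include <stdint.h>

/* Userspace model of the netback dealloc ring (a sketch). */
#define MAX_PENDING_REQS 256

struct dealloc_ring {
	uint16_t ring[MAX_PENDING_REQS];
	unsigned int prod;	/* free-running producer counter */
	unsigned int cons;	/* free-running consumer counter */
};

static unsigned int pending_index(unsigned int i)
{
	return i % MAX_PENDING_REQS;	/* kernel masks a power-of-two size */
}

/* Producer side (zerocopy callback): store the slot, then advance prod.
 * The kernel puts an smp_wmb() between the two so the consumer never
 * sees prod ahead of the data. */
static void dealloc_push(struct dealloc_ring *r, uint16_t pending_idx)
{
	r->ring[pending_index(r->prod)] = pending_idx;
	/* smp_wmb() would go here in kernel code */
	r->prod++;
}

/* Consumer side (dealloc thread): drain everything published so far;
 * the kernel issues an smp_rmb() after snapshotting prod. */
static int dealloc_drain(struct dealloc_ring *r, uint16_t *out)
{
	int n = 0;
	unsigned int prod = r->prod;	/* snapshot of the producer */

	while (r->cons != prod)
		out[n++] = r->ring[pending_index(r->cons++)];
	return n;
}
```

Because both counters run freely and are only wrapped at access time,
`cons != prod` doubles as the "work todo" test used by the dealloc kthread.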


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKo-0003eR-KB; Mon, 20 Jan 2014 21:24:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKn-0003df-1Y
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:41 +0000
Received: from [85.158.137.68:11337] by server-4.bemta-3.messagelabs.com id
	A2/6C-10414-8149DD25; Mon, 20 Jan 2014 21:24:40 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390253076!10322762!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13604 invoked from network); 20 Jan 2014 21:24:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595756"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:38 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:38 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:21 +0000
Message-ID: <1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX grant
	map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the new definitions necessary for grant mapping.

v2:
- move unmapping to a separate thread. The NAPI instance has to be
  scheduled even from thread context, which can cause huge delays
- unfortunately that makes struct xenvif bigger
- store the grant handle only after checking its validity

v3:
- fix comment in xenvif_tx_dealloc_action()
- call unmap hypercall directly instead of gnttab_unmap_refs(), which does
  unnecessary m2p_override. Also remove pages_to_[un]map members
- BUG() if grant_tx_handle corrupted

v4:
- fix indentations and comments
- use bool for tx_dealloc_work_todo
- BUG() if grant_tx_handle corrupted - now really :)
- go back to gnttab_unmap_refs, now we rely on API changes

v5:
- remove hypercall from xenvif_idx_unmap
- remove stray line in xenvif_tx_create_gop
- simplify tx_dealloc_work_todo
- BUG() in xenvif_idx_unmap

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/common.h    |   30 ++++++-
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  161 +++++++++++++++++++++++++++++++++++
 3 files changed, 191 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..66b4696 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -79,6 +79,11 @@ struct pending_tx_info {
 				  * if it is head of one or more tx
 				  * reqs
 				  */
+	/* callback data for released SKBs. The callback is always
+	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
+	 * contains the pending_idx
+	 */
+	struct ubuf_info callback_struct;
 };
 
 #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
@@ -108,6 +113,8 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
+#define NETBACK_INVALID_HANDLE -1
+
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
@@ -126,13 +133,26 @@ struct xenvif {
 	pending_ring_idx_t pending_cons;
 	u16 pending_ring[MAX_PENDING_REQS];
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
 	/* Coalescing tx requests before copying makes number of grant
 	 * copy ops greater or equal to number of slots required. In
 	 * worst case a tx request consumes 2 gnttab_copy.
 	 */
 	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
-
+	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
+	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
+	struct page *pages_to_map[MAX_PENDING_REQS];
+	struct page *pages_to_unmap[MAX_PENDING_REQS];
+
+	spinlock_t dealloc_lock;
+	spinlock_t response_lock;
+	pending_ring_idx_t dealloc_prod;
+	pending_ring_idx_t dealloc_cons;
+	u16 dealloc_ring[MAX_PENDING_REQS];
+	struct task_struct *dealloc_task;
+	wait_queue_head_t dealloc_wq;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
@@ -219,6 +239,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
 int xenvif_kthread(void *data);
 void xenvif_kick_thread(struct xenvif *vif);
 
+int xenvif_dealloc_kthread(void *data);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
@@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
 
 void xenvif_stop_queue(struct xenvif *vif);
 
+/* Callback from stack when TX packet can be released */
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+
+/* Unmap a pending page, usually has to be called before xenvif_idx_release */
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..f0f0c3d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -38,6 +38,7 @@
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
+#include <xen/balloon.h>
 
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index bb241d0..195602f 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	return page;
 }
 
+static inline void xenvif_tx_create_gop(struct xenvif *vif,
+					u16 pending_idx,
+					struct xen_netif_tx_request *txp,
+					struct gnttab_map_grant_ref *gop)
+{
+	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
+	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  txp->gref, vif->domid);
+
+	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
+	       sizeof(*txp));
+}
+
 static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	return work_done;
 }
 
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
+{
+	unsigned long flags;
+	pending_ring_idx_t index;
+	u16 pending_idx = ubuf->desc;
+	struct pending_tx_info *temp =
+		container_of(ubuf, struct pending_tx_info, callback_struct);
+	struct xenvif *vif = container_of(temp - pending_idx,
+					  struct xenvif,
+					  pending_tx_info[0]);
+
+	spin_lock_irqsave(&vif->dealloc_lock, flags);
+	do {
+		pending_idx = ubuf->desc;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+		index = pending_index(vif->dealloc_prod);
+		vif->dealloc_ring[index] = pending_idx;
+		/* Sync with xenvif_tx_dealloc_action:
+		 * insert idx then incr producer.
+		 */
+		smp_wmb();
+		vif->dealloc_prod++;
+	} while (ubuf);
+	wake_up(&vif->dealloc_wq);
+	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+}
+
+static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
+{
+	struct gnttab_unmap_grant_ref *gop;
+	pending_ring_idx_t dc, dp;
+	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
+	unsigned int i = 0;
+
+	dc = vif->dealloc_cons;
+	gop = vif->tx_unmap_ops;
+
+	/* Free up any grants we have finished using */
+	do {
+		dp = vif->dealloc_prod;
+
+		/* Ensure we see all indices enqueued by all
+		 * xenvif_zerocopy_callback().
+		 */
+		smp_rmb();
+
+		while (dc != dp) {
+			pending_idx =
+				vif->dealloc_ring[pending_index(dc++)];
+
+			/* Already unmapped? */
+			if (vif->grant_tx_handle[pending_idx] ==
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Trying to unmap invalid handle! "
+					   "pending_idx: %x\n", pending_idx);
+				BUG();
+			}
+
+			pending_idx_release[gop-vif->tx_unmap_ops] =
+				pending_idx;
+			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
+				vif->mmap_pages[pending_idx];
+			gnttab_set_unmap_op(gop,
+					    idx_to_kaddr(vif, pending_idx),
+					    GNTMAP_host_map,
+					    vif->grant_tx_handle[pending_idx]);
+			vif->grant_tx_handle[pending_idx] =
+				NETBACK_INVALID_HANDLE;
+			++gop;
+		}
+
+	} while (dp != vif->dealloc_prod);
+
+	vif->dealloc_cons = dc;
+
+	if (gop - vif->tx_unmap_ops > 0) {
+		int ret;
+		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
+					vif->pages_to_unmap,
+					gop - vif->tx_unmap_ops);
+		if (ret) {
+			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
+				   gop - vif->tx_unmap_ops, ret);
+			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
+				netdev_err(vif->dev,
+					   " host_addr: %llx handle: %x status: %d\n",
+					   gop[i].host_addr,
+					   gop[i].handle,
+					   gop[i].status);
+			}
+			BUG();
+		}
+	}
+
+	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
+		xenvif_idx_release(vif, pending_idx_release[i],
+				   XEN_NETIF_RSP_OKAY);
+}
+
+
 /* Called after netfront has transmitted */
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
@@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 	vif->mmap_pages[pending_idx] = NULL;
 }
 
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
+{
+	int ret;
+	struct gnttab_unmap_grant_ref tx_unmap_op;
+
+	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
+		netdev_err(vif->dev,
+			   "Trying to unmap invalid handle! pending_idx: %x\n",
+			   pending_idx);
+		BUG();
+	}
+	gnttab_set_unmap_op(&tx_unmap_op,
+			    idx_to_kaddr(vif, pending_idx),
+			    GNTMAP_host_map,
+			    vif->grant_tx_handle[pending_idx]);
+	ret = gnttab_unmap_refs(&tx_unmap_op, &vif->mmap_pages[pending_idx], 1);
+	BUG_ON(ret);
+	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
+}
 
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
@@ -1740,6 +1874,11 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static inline bool tx_dealloc_work_todo(struct xenvif *vif)
+{
+	return vif->dealloc_cons != vif->dealloc_prod;
+}
+
 void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
 	return 0;
 }
 
+int xenvif_dealloc_kthread(void *data)
+{
+	struct xenvif *vif = data;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->dealloc_wq,
+					 tx_dealloc_work_todo(vif) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			break;
+
+		xenvif_tx_dealloc_action(vif);
+		cond_resched();
+	}
+
+	/* Unmap anything remaining */
+	if (tx_dealloc_work_todo(vif))
+		xenvif_tx_dealloc_action(vif);
+
+	return 0;
+}
+
 static int __init netback_init(void)
 {
 	int rc = 0;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKo-0003eR-KB; Mon, 20 Jan 2014 21:24:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKn-0003df-1Y
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:41 +0000
Received: from [85.158.137.68:11337] by server-4.bemta-3.messagelabs.com id
	A2/6C-10414-8149DD25; Mon, 20 Jan 2014 21:24:40 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390253076!10322762!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13604 invoked from network); 20 Jan 2014 21:24:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595756"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:38 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:38 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:21 +0000
Message-ID: <1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX grant
	map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the new definitions necessary for grant mapping.

v2:
- move unmapping to a separate thread. The NAPI instance has to be scheduled
  even from thread context, which can cause huge delays
- unfortunately that results in a bigger struct xenvif
- store grant handle after checking validity

v3:
- fix comment in xenvif_tx_dealloc_action()
- call unmap hypercall directly instead of gnttab_unmap_refs(), which does
  unnecessary m2p_override. Also remove pages_to_[un]map members
- BUG() if grant_tx_handle corrupted

v4:
- fix indentations and comments
- use bool for tx_dealloc_work_todo
- BUG() if grant_tx_handle corrupted - now really :)
- go back to gnttab_unmap_refs, now we rely on API changes

v5:
- remove hypercall from xenvif_idx_unmap
- remove stray line in xenvif_tx_create_gop
- simplify tx_dealloc_work_todo
- BUG() in xenvif_idx_unmap

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/common.h    |   30 ++++++-
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  161 +++++++++++++++++++++++++++++++++++
 3 files changed, 191 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..66b4696 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -79,6 +79,11 @@ struct pending_tx_info {
 				  * if it is head of one or more tx
 				  * reqs
 				  */
+	/* callback data for released SKBs. The callback is always
+	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
+	 * contains the pending_idx
+	 */
+	struct ubuf_info callback_struct;
 };
 
 #define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
@@ -108,6 +113,8 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
+#define NETBACK_INVALID_HANDLE -1
+
 struct xenvif {
 	/* Unique identifier for this interface. */
 	domid_t          domid;
@@ -126,13 +133,26 @@ struct xenvif {
 	pending_ring_idx_t pending_cons;
 	u16 pending_ring[MAX_PENDING_REQS];
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
+	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
 	/* Coalescing tx requests before copying makes number of grant
 	 * copy ops greater or equal to number of slots required. In
 	 * worst case a tx request consumes 2 gnttab_copy.
 	 */
 	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
-
+	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
+	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
+	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
+	struct page *pages_to_map[MAX_PENDING_REQS];
+	struct page *pages_to_unmap[MAX_PENDING_REQS];
+
+	spinlock_t dealloc_lock;
+	spinlock_t response_lock;
+	pending_ring_idx_t dealloc_prod;
+	pending_ring_idx_t dealloc_cons;
+	u16 dealloc_ring[MAX_PENDING_REQS];
+	struct task_struct *dealloc_task;
+	wait_queue_head_t dealloc_wq;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
@@ -219,6 +239,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget);
 int xenvif_kthread(void *data);
 void xenvif_kick_thread(struct xenvif *vif);
 
+int xenvif_dealloc_kthread(void *data);
+
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
@@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
 
 void xenvif_stop_queue(struct xenvif *vif);
 
+/* Callback from stack when TX packet can be released */
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
+
+/* Unmap a pending page, usually has to be called before xenvif_idx_release */
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
+
 extern bool separate_tx_rx_irq;
 
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..f0f0c3d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -38,6 +38,7 @@
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
+#include <xen/balloon.h>
 
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index bb241d0..195602f 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	return page;
 }
 
+static inline void xenvif_tx_create_gop(struct xenvif *vif,
+					u16 pending_idx,
+					struct xen_netif_tx_request *txp,
+					struct gnttab_map_grant_ref *gop)
+{
+	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
+	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  txp->gref, vif->domid);
+
+	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
+	       sizeof(*txp));
+}
+
 static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
@@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	return work_done;
 }
 
+void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
+{
+	unsigned long flags;
+	pending_ring_idx_t index;
+	u16 pending_idx = ubuf->desc;
+	struct pending_tx_info *temp =
+		container_of(ubuf, struct pending_tx_info, callback_struct);
+	struct xenvif *vif = container_of(temp - pending_idx,
+					  struct xenvif,
+					  pending_tx_info[0]);
+
+	spin_lock_irqsave(&vif->dealloc_lock, flags);
+	do {
+		pending_idx = ubuf->desc;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+		index = pending_index(vif->dealloc_prod);
+		vif->dealloc_ring[index] = pending_idx;
+		/* Sync with xenvif_tx_dealloc_action:
+		 * insert idx then incr producer.
+		 */
+		smp_wmb();
+		vif->dealloc_prod++;
+	} while (ubuf);
+	wake_up(&vif->dealloc_wq);
+	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+}
+
+static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
+{
+	struct gnttab_unmap_grant_ref *gop;
+	pending_ring_idx_t dc, dp;
+	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
+	unsigned int i = 0;
+
+	dc = vif->dealloc_cons;
+	gop = vif->tx_unmap_ops;
+
+	/* Free up any grants we have finished using */
+	do {
+		dp = vif->dealloc_prod;
+
+		/* Ensure we see all indices enqueued by all
+		 * xenvif_zerocopy_callback().
+		 */
+		smp_rmb();
+
+		while (dc != dp) {
+			pending_idx =
+				vif->dealloc_ring[pending_index(dc++)];
+
+			/* Already unmapped? */
+			if (vif->grant_tx_handle[pending_idx] ==
+				NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Trying to unmap invalid handle! "
+					   "pending_idx: %x\n", pending_idx);
+				BUG();
+			}
+
+			pending_idx_release[gop-vif->tx_unmap_ops] =
+				pending_idx;
+			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
+				vif->mmap_pages[pending_idx];
+			gnttab_set_unmap_op(gop,
+					    idx_to_kaddr(vif, pending_idx),
+					    GNTMAP_host_map,
+					    vif->grant_tx_handle[pending_idx]);
+			vif->grant_tx_handle[pending_idx] =
+				NETBACK_INVALID_HANDLE;
+			++gop;
+		}
+
+	} while (dp != vif->dealloc_prod);
+
+	vif->dealloc_cons = dc;
+
+	if (gop - vif->tx_unmap_ops > 0) {
+		int ret;
+		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
+					vif->pages_to_unmap,
+					gop - vif->tx_unmap_ops);
+		if (ret) {
+			netdev_err(vif->dev, "Unmap fail: nr_ops %tx ret %d\n",
+				   gop - vif->tx_unmap_ops, ret);
+			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
+				netdev_err(vif->dev,
+					   " host_addr: %llx handle: %x status: %d\n",
+					   gop[i].host_addr,
+					   gop[i].handle,
+					   gop[i].status);
+			}
+			BUG();
+		}
+	}
+
+	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
+		xenvif_idx_release(vif, pending_idx_release[i],
+				   XEN_NETIF_RSP_OKAY);
+}
+
+
 /* Called after netfront has transmitted */
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
@@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 	vif->mmap_pages[pending_idx] = NULL;
 }
 
+void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
+{
+	int ret;
+	struct gnttab_unmap_grant_ref tx_unmap_op;
+
+	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
+		netdev_err(vif->dev,
+			   "Trying to unmap invalid handle! pending_idx: %x\n",
+			   pending_idx);
+		BUG();
+	}
+	gnttab_set_unmap_op(&tx_unmap_op,
+			    idx_to_kaddr(vif, pending_idx),
+			    GNTMAP_host_map,
+			    vif->grant_tx_handle[pending_idx]);
+	ret = gnttab_unmap_refs(&tx_unmap_op, &vif->mmap_pages[pending_idx], 1);
+	BUG_ON(ret);
+	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
+}
 
 static void make_tx_response(struct xenvif *vif,
 			     struct xen_netif_tx_request *txp,
@@ -1740,6 +1874,11 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static inline bool tx_dealloc_work_todo(struct xenvif *vif)
+{
+	return vif->dealloc_cons != vif->dealloc_prod;
+}
+
 void xenvif_unmap_frontend_rings(struct xenvif *vif)
 {
 	if (vif->tx.sring)
@@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
 	return 0;
 }
 
+int xenvif_dealloc_kthread(void *data)
+{
+	struct xenvif *vif = data;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(vif->dealloc_wq,
+					 tx_dealloc_work_todo(vif) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			break;
+
+		xenvif_tx_dealloc_action(vif);
+		cond_resched();
+	}
+
+	/* Unmap anything remaining */
+	if (tx_dealloc_work_todo(vif))
+		xenvif_tx_dealloc_action(vif);
+
+	return 0;
+}
+
 static int __init netback_init(void)
 {
 	int rc = 0;


From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKt-0003gT-8u; Mon, 20 Jan 2014 21:24:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKr-0003fn-To
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:46 +0000
Received: from [85.158.137.68:7700] by server-12.bemta-3.messagelabs.com id
	FD/B3-20055-D149DD25; Mon, 20 Jan 2014 21:24:45 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32266 invoked from network); 20 Jan 2014 21:24:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665715"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:41 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:41 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:22 +0000
Message-ID: <1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
	from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch changes the grant copy on the TX path to grant mapping.

v2:
- delete the branch handling fragmented packets whose first request fits in
  PKT_PROT_LEN
- mark the effect of using ballooned pages in a comment
- place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
  before netif_receive_skb, and mark the importance of it
- grab dealloc_lock before __napi_complete to avoid contention with the
  callback's napi_schedule
- handle fragmented packets where first request < PKT_PROT_LEN
- fix up error path when checksum_setup failed
- check for pending grants before teardown, and start complaining if they
  are still there after 10 seconds

v3:
- delete a surplus checking from tx_action
- remove stray line
- squash xenvif_idx_unmap changes into the first patch
- init spinlocks
- call map hypercall directly instead of gnttab_map_refs()
- fix unmapping timeout in xenvif_free()

v4:
- fix indentations and comments
- handle errors of set_phys_to_machine
- go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
  modified API

v5:
- BUG_ON(vif->dealloc_task) in xenvif_connect
- use 'task' in xenvif_connect for thread creation
- proper return value if alloc_xenballooned_pages fails
- BUG in xenvif_tx_check_gop if stale handle found

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   63 ++++++++-
 drivers/net/xen-netback/netback.c   |  254 ++++++++++++++---------------------
 2 files changed, 160 insertions(+), 157 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index f0f0c3d..b3daae2 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	BUG_ON(skb->dev != dev);
 
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (vif->task == NULL ||
+	    vif->dealloc_task == NULL ||
+	    !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->pending_prod = MAX_PENDING_REQS;
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
+	spin_lock_init(&vif->dealloc_lock);
+	spin_lock_init(&vif->response_lock);
+	/* If ballooning is disabled, this will consume real memory, so you
+	 * better enable it. The long term solution would be to use just a
+	 * bunch of valid page descriptors, without dependency on ballooning
+	 */
+	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
+				       vif->mmap_pages,
+				       false);
+	if (err) {
+		netdev_err(dev, "Could not reserve mmap_pages\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	for (i = 0; i < MAX_PENDING_REQS; i++) {
+		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
+			{ .callback = xenvif_zerocopy_callback,
+			  .ctx = NULL,
+			  .desc = i };
+		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
+	}
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -383,12 +403,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 
 	BUG_ON(vif->tx_irq);
 	BUG_ON(vif->task);
+	BUG_ON(vif->dealloc_task);
 
 	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
 	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&vif->dealloc_wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
@@ -432,6 +454,18 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 
 	vif->task = task;
 
+	task = kthread_create(xenvif_dealloc_kthread,
+					   (void *)vif,
+					   "%s-dealloc",
+					   vif->dev->name);
+	if (IS_ERR(task)) {
+		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		err = PTR_ERR(task);
+		goto err_rx_unbind;
+	}
+
+	vif->dealloc_task = task;
+
 	rtnl_lock();
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
 		dev_set_mtu(vif->dev, ETH_DATA_LEN);
@@ -442,6 +476,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	rtnl_unlock();
 
 	wake_up_process(vif->task);
+	wake_up_process(vif->dealloc_task);
 
 	return 0;
 
@@ -479,6 +514,11 @@ void xenvif_disconnect(struct xenvif *vif)
 		vif->task = NULL;
 	}
 
+	if (vif->dealloc_task) {
+		kthread_stop(vif->dealloc_task);
+		vif->dealloc_task = NULL;
+	}
+
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -494,6 +534,23 @@ void xenvif_disconnect(struct xenvif *vif)
 
 void xenvif_free(struct xenvif *vif)
 {
+	int i, unmap_timeout = 0;
+
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
+			unmap_timeout++;
+			schedule_timeout(msecs_to_jiffies(1000));
+			if (unmap_timeout > 9 &&
+			    net_ratelimit())
+				netdev_err(vif->dev,
+					   "Page still granted! Index: %x\n",
+					   i);
+			i = -1;
+		}
+	}
+
+	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 195602f..747b428 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -646,9 +646,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 	       sizeof(*txp));
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
-					       struct sk_buff *skb,
-					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
+							struct sk_buff *skb,
+							struct xen_netif_tx_request *txp,
+							struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -811,83 +814,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -909,9 +841,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+		    NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				   "Stale mapped handle! pending_idx %x handle %x\n",
+				   pending_idx,
+				   vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+			    NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Stale mapped handle! pending_idx %x handle %x\n",
+					   pending_idx,
+					   vif->grant_tx_handle[pending_idx]);
+				BUG();
+			}
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -975,7 +926,9 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif,
+			      struct sk_buff *skb,
+			      u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -989,6 +942,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous one */
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -996,10 +960,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1480,30 +1449,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1532,17 +1481,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1566,12 +1515,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+				  skb,
+				  skb_shinfo(skb)->destructor_arg ?
+				  pending_idx :
+				  INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			/* We have to set this flag so the dealloc thread can
+			 * send the slots back
+			 */
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1731,7 +1702,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1741,7 +1712,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = gnttab_map_refs(vif->tx_map_ops, vif->pages_to_map, nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1752,45 +1724,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKt-0003gT-8u; Mon, 20 Jan 2014 21:24:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKr-0003fn-To
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:46 +0000
Received: from [85.158.137.68:7700] by server-12.bemta-3.messagelabs.com id
	FD/B3-20055-D149DD25; Mon, 20 Jan 2014 21:24:45 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32266 invoked from network); 20 Jan 2014 21:24:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665715"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:41 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:41 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:22 +0000
Message-ID: <1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
	from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch changes the grant copy on the TX path to grant mapping

v2:
- delete the branch that handled fragmented packets whose first request fits
  PKT_PROT_LEN
- mark the effect of using ballooned pages in a comment
- place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
  before netif_receive_skb, and mark the importance of it
- grab dealloc_lock before __napi_complete to avoid contention with the
  callback's napi_schedule
- handle fragmented packets where first request < PKT_PROT_LEN
- fix up error path when checksum_setup failed
- check for pending grants before teardown, and start complaining if they are
  still there after 10 seconds

v3:
- delete a surplus check from tx_action
- remove stray line
- squash xenvif_idx_unmap changes into the first patch
- init spinlocks
- call map hypercall directly instead of gnttab_map_refs()
- fix unmapping timeout in xenvif_free()

v4:
- fix indentations and comments
- handle errors of set_phys_to_machine
- go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
  modified API

v5:
- BUG_ON(vif->dealloc_task) in xenvif_connect
- use 'task' in xenvif_connect for thread creation
- proper return value if alloc_xenballooned_pages fails
- BUG in xenvif_tx_check_gop if stale handle found

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   63 ++++++++-
 drivers/net/xen-netback/netback.c   |  254 ++++++++++++++---------------------
 2 files changed, 160 insertions(+), 157 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index f0f0c3d..b3daae2 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	BUG_ON(skb->dev != dev);
 
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (vif->task == NULL ||
+	    vif->dealloc_task == NULL ||
+	    !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	vif->pending_prod = MAX_PENDING_REQS;
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
+	spin_lock_init(&vif->dealloc_lock);
+	spin_lock_init(&vif->response_lock);
+	/* If ballooning is disabled, this will consume real memory, so you
+	 * better enable it. The long term solution would be to use just a
+	 * bunch of valid page descriptors, without dependency on ballooning
+	 */
+	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
+				       vif->mmap_pages,
+				       false);
+	if (err) {
+		netdev_err(dev, "Could not reserve mmap_pages\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	for (i = 0; i < MAX_PENDING_REQS; i++) {
+		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
+			{ .callback = xenvif_zerocopy_callback,
+			  .ctx = NULL,
+			  .desc = i };
+		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
+	}
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -383,12 +403,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 
 	BUG_ON(vif->tx_irq);
 	BUG_ON(vif->task);
+	BUG_ON(vif->dealloc_task);
 
 	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
 	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&vif->dealloc_wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
@@ -432,6 +454,18 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 
 	vif->task = task;
 
+	task = kthread_create(xenvif_dealloc_kthread,
+					   (void *)vif,
+					   "%s-dealloc",
+					   vif->dev->name);
+	if (IS_ERR(task)) {
+		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		err = PTR_ERR(task);
+		goto err_rx_unbind;
+	}
+
+	vif->dealloc_task = task;
+
 	rtnl_lock();
 	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
 		dev_set_mtu(vif->dev, ETH_DATA_LEN);
@@ -442,6 +476,7 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
 	rtnl_unlock();
 
 	wake_up_process(vif->task);
+	wake_up_process(vif->dealloc_task);
 
 	return 0;
 
@@ -479,6 +514,11 @@ void xenvif_disconnect(struct xenvif *vif)
 		vif->task = NULL;
 	}
 
+	if (vif->dealloc_task) {
+		kthread_stop(vif->dealloc_task);
+		vif->dealloc_task = NULL;
+	}
+
 	if (vif->tx_irq) {
 		if (vif->tx_irq == vif->rx_irq)
 			unbind_from_irqhandler(vif->tx_irq, vif);
@@ -494,6 +534,23 @@ void xenvif_disconnect(struct xenvif *vif)
 
 void xenvif_free(struct xenvif *vif)
 {
+	int i, unmap_timeout = 0;
+
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
+			unmap_timeout++;
+			schedule_timeout(msecs_to_jiffies(1000));
+			if (unmap_timeout > 9 &&
+			    net_ratelimit())
+				netdev_err(vif->dev,
+					   "Page still granted! Index: %x\n",
+					   i);
+			i = -1;
+		}
+	}
+
+	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
+
 	netif_napi_del(&vif->napi);
 
 	unregister_netdev(vif->dev);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 195602f..747b428 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -646,9 +646,12 @@ static void xenvif_tx_err(struct xenvif *vif,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
 	RING_IDX cons = vif->tx.req_cons;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 		if (cons == end)
 			break;
 		txp = RING_GET_REQUEST(&vif->tx, cons++);
@@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 	       sizeof(*txp));
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
-					       struct sk_buff *skb,
-					       struct xen_netif_tx_request *txp,
-					       struct gnttab_copy *gop)
+static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
+							struct sk_buff *skb,
+							struct xen_netif_tx_request *txp,
+							struct gnttab_map_grant_ref *gop)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
@@ -811,83 +814,12 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
-	/* Coalesce tx requests, at this point the packet passed in
-	 * should be <= 64K. Any packets larger than 64K have been
-	 * handled in xenvif_count_requests().
-	 */
-	for (shinfo->nr_frags = slot = start; slot < nr_slots;
-	     shinfo->nr_frags++) {
-		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
-
-		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-		if (!page)
-			goto err;
-
-		dst_offset = 0;
-		first = NULL;
-		while (dst_offset < PAGE_SIZE && slot < nr_slots) {
-			gop->flags = GNTCOPY_source_gref;
-
-			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
-			gop->source.offset = txp->offset;
-
-			gop->dest.domid = DOMID_SELF;
-
-			gop->dest.offset = dst_offset;
-			gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-
-			if (dst_offset + txp->size > PAGE_SIZE) {
-				/* This page can only merge a portion
-				 * of tx request. Do not increment any
-				 * pointer / counter here. The txp
-				 * will be dealt with in future
-				 * rounds, eventually hitting the
-				 * `else` branch.
-				 */
-				gop->len = PAGE_SIZE - dst_offset;
-				txp->offset += gop->len;
-				txp->size -= gop->len;
-				dst_offset += gop->len; /* quit loop */
-			} else {
-				/* This tx request can be merged in the page */
-				gop->len = txp->size;
-				dst_offset += gop->len;
-
+	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
+	     shinfo->nr_frags++, txp++, gop++) {
 				index = pending_index(vif->pending_cons++);
-
 				pending_idx = vif->pending_ring[index];
-
-				memcpy(&pending_tx_info[pending_idx].req, txp,
-				       sizeof(*txp));
-
-				/* Poison these fields, corresponding
-				 * fields for head tx req will be set
-				 * to correct values after the loop.
-				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
-				pending_tx_info[pending_idx].head =
-					INVALID_PENDING_RING_IDX;
-
-				if (!first) {
-					first = &pending_tx_info[pending_idx];
-					start_idx = index;
-					head_idx = pending_idx;
-				}
-
-				txp++;
-				slot++;
-			}
-
-			gop++;
-		}
-
-		first->req.offset = 0;
-		first->req.size = dst_offset;
-		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
-		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
+		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
@@ -909,9 +841,9 @@ err:
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
 			       struct sk_buff *skb,
-			       struct gnttab_copy **gopp)
+			       struct gnttab_map_grant_ref **gopp)
 {
-	struct gnttab_copy *gop = *gopp;
+	struct gnttab_map_grant_ref *gop = *gopp;
 	u16 pending_idx = *((u16 *)skb->data);
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	struct pending_tx_info *tx_info;
@@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	err = gop->status;
 	if (unlikely(err))
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+	else {
+		if (vif->grant_tx_handle[pending_idx] !=
+		    NETBACK_INVALID_HANDLE) {
+			netdev_err(vif->dev,
+				   "Stale mapped handle! pending_idx %x handle %x\n",
+				   pending_idx,
+				   vif->grant_tx_handle[pending_idx]);
+			BUG();
+		}
+		vif->grant_tx_handle[pending_idx] = gop->handle;
+	}
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-		do {
 			newerr = (++gop)->status;
-			if (newerr)
-				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
 
 		if (likely(!newerr)) {
+			if (vif->grant_tx_handle[pending_idx] !=
+			    NETBACK_INVALID_HANDLE) {
+				netdev_err(vif->dev,
+					   "Stale mapped handle! pending_idx %x handle %x\n",
+					   pending_idx,
+					   vif->grant_tx_handle[pending_idx]);
+				BUG();
+			}
+			vif->grant_tx_handle[pending_idx] = gop->handle;
 			/* Had a previous error? Invalidate this fragment. */
-			if (unlikely(err))
+			if (unlikely(err)) {
+				xenvif_idx_unmap(vif, pending_idx);
 				xenvif_idx_release(vif, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
+			}
 			continue;
 		}
 
@@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
+		xenvif_idx_unmap(vif, pending_idx);
 		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -975,7 +926,9 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif *vif,
+			      struct sk_buff *skb,
+			      u16 prev_pending_idx)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -989,6 +942,17 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
+		/* If this is not the first frag, chain it to the previous*/
+		if (unlikely(prev_pending_idx == INVALID_PENDING_IDX))
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
+		else if (likely(pending_idx != prev_pending_idx))
+			vif->pending_tx_info[prev_pending_idx].callback_struct.ctx =
+				&(vif->pending_tx_info[pending_idx].callback_struct);
+
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
+		prev_pending_idx = pending_idx;
+
 		txp = &vif->pending_tx_info[pending_idx].req;
 		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
@@ -996,10 +960,15 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
-		/* Take an extra reference to offset xenvif_idx_release */
+		/* Take an extra reference to offset network stack's put_page */
 		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
+	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
+	 * overlaps with "index", and "mapping" is not set. I think mapping
+	 * should be set. If delivered to local stack, it would drop this
+	 * skb in sk_filter unless the socket has the right to use it.
+	 */
+	skb->pfmemalloc	= false;
 }
 
 static int xenvif_get_extras(struct xenvif *vif,
@@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 
 static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
@@ -1480,30 +1449,10 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			}
 		}
 
-		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
-		if (!page) {
-			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
-			break;
-		}
-
-		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
-		gop->source.offset = txreq.offset;
-
-		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
-		gop->dest.domid = DOMID_SELF;
-		gop->dest.offset = txreq.offset;
-
-		gop->len = txreq.size;
-		gop->flags = GNTCOPY_source_gref;
+		xenvif_tx_create_gop(vif, pending_idx, &txreq, gop);
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
-		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1532,17 +1481,17 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		vif->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop-vif->tx_map_ops) >= ARRAY_SIZE(vif->tx_map_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - vif->tx_map_ops;
 }
 
 
 static int xenvif_tx_submit(struct xenvif *vif)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_map_grant_ref *gop = vif->tx_map_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
@@ -1566,12 +1515,17 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		memcpy(skb->data,
 		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
 		       data_len);
+		vif->pending_tx_info[pending_idx].callback_struct.ctx = NULL;
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
 			txp->offset += data_len;
 			txp->size -= data_len;
+			skb_shinfo(skb)->destructor_arg =
+				&vif->pending_tx_info[pending_idx].callback_struct;
 		} else {
 			/* Schedule a response immediately. */
+			skb_shinfo(skb)->destructor_arg = NULL;
+			xenvif_idx_unmap(vif, pending_idx);
 			xenvif_idx_release(vif, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
@@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(vif,
+				  skb,
+				  skb_shinfo(skb)->destructor_arg ?
+				  pending_idx :
+				  INVALID_PENDING_IDX);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
@@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		if (checksum_setup(vif, skb)) {
 			netdev_dbg(vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
+			/* We have to set this flag so the dealloc thread can
+			 * send the slots back
+			 */
+			if (skb_shinfo(skb)->destructor_arg)
+				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			kfree_skb(skb);
 			continue;
 		}
@@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		work_done++;
 
+		/* Set this flag right before netif_receive_skb, otherwise
+		 * someone might think this packet already left netback, and
+		 * do a skb_copy_ubufs while we are still in control of the
+		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.
+		 */
+		if (skb_shinfo(skb)->destructor_arg)
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+
 		netif_receive_skb(skb);
 	}
 
@@ -1731,7 +1702,7 @@ static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
 int xenvif_tx_action(struct xenvif *vif, int budget)
 {
 	unsigned nr_gops;
-	int work_done;
+	int work_done, ret;
 
 	if (unlikely(!tx_work_todo(vif)))
 		return 0;
@@ -1741,7 +1712,8 @@ int xenvif_tx_action(struct xenvif *vif, int budget)
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	ret = gnttab_map_refs(vif->tx_map_ops, vif->pages_to_map, nr_gops);
+	BUG_ON(ret);
 
 	work_done = xenvif_tx_submit(vif);
 
@@ -1752,45 +1724,19 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
-	pending_ring_idx_t head;
+	pending_ring_idx_t index;
 	u16 peek; /* peek into next tx request */
+	unsigned long flags;
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
-
-	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
-		return;
-
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
-
-	head = pending_tx_info->head;
-
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
-
-	do {
-		pending_ring_idx_t index;
-		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
-
-		pending_tx_info = &vif->pending_tx_info[info_idx];
+		pending_tx_info = &vif->pending_tx_info[pending_idx];
+		spin_lock_irqsave(&vif->response_lock, flags);
 		make_tx_response(vif, &pending_tx_info->req, status);
-
-		/* Setting any number other than
-		 * INVALID_PENDING_RING_IDX indicates this slot is
-		 * starting a new packet / ending a previous packet.
-		 */
-		pending_tx_info->head = 0;
-
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
-
-		peek = vif->pending_ring[pending_index(++head)];
-
-	} while (!pending_tx_is_head(vif, peek));
-
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+		index = pending_index(vif->pending_prod);
+		vif->pending_ring[index] = pending_idx;
+		/* TX shouldn't use the index before we give it back here */
+		mb();
+		vif->pending_prod++;
+		spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)


From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKu-0003he-8L; Mon, 20 Jan 2014 21:24:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKs-0003gD-UU
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:47 +0000
Received: from [85.158.137.68:63526] by server-16.bemta-3.messagelabs.com id
	9B/7E-26128-E149DD25; Mon, 20 Jan 2014 21:24:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32381 invoked from network); 20 Jan 2014 21:24:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665731"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:44 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:44 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:23 +0000
Message-ID: <1390253069-25507-4-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 3/9] xen-netback: Remove old TX
	grant copy definitions and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These definitions became obsolete with grant mapping. I intentionally left the
indentation unchanged in the previous patches to keep their diffs readable;
this patch fixes it up.

v2:
- move the indentation fixup patch here

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h  |   37 +------------------
 drivers/net/xen-netback/netback.c |   72 ++++++++-----------------------------
 2 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index f35a3ce..2b1cd83 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -46,39 +46,9 @@
 #include <xen/xenbus.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
-/* For the head field in pending_tx_info: it is used to indicate
- * whether this tx info is the head of one or more coalesced requests.
- *
- * When head != INVALID_PENDING_RING_IDX, it means the start of a new
- * tx requests queue and the end of previous queue.
- *
- * An example sequence of head fields (I = INVALID_PENDING_RING_IDX):
- *
- * ...|0 I I I|5 I|9 I I I|...
- * -->|<-INUSE----------------
- *
- * After consuming the first slot(s) we have:
- *
- * ...|V V V V|5 I|9 I I I|...
- * -----FREE->|<-INUSE--------
- *
- * where V stands for "valid pending ring index". Any number other
- * than INVALID_PENDING_RING_IDX is OK. These entries are considered
- * free and can contain any number other than
- * INVALID_PENDING_RING_IDX. In practice we use 0.
- *
- * The in use non-INVALID_PENDING_RING_IDX (say 0, 5 and 9 in the
- * above example) number is the index into pending_tx_info and
- * mmap_pages arrays.
- */
 struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
+	struct xen_netif_tx_request req; /* tx request */
 	/* callback data for released SKBs. The	callback is always
 	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
 	 * contains the pending_idx
@@ -135,11 +105,6 @@ struct xenvif {
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5724468..f74fa92 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,16 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-/*
- * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
- * one or more merged tx requests, otherwise it is the continuation of
- * previous tx request.
- */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
-{
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
-}
-
 static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status);
 
@@ -762,19 +752,6 @@ static int xenvif_count_requests(struct xenvif *vif,
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
-				      u16 pending_idx)
-{
-	struct page *page;
-
-	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-	if (!page)
-		return NULL;
-	vif->mmap_pages[pending_idx] = page;
-
-	return page;
-}
-
 static inline void xenvif_tx_create_gop(struct xenvif *vif,
 					u16 pending_idx,
 					struct xen_netif_tx_request *txp,
@@ -797,13 +774,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
 	u16 pending_idx = *((u16 *)skb->data);
-	u16 head_idx = 0;
-	int slot, start;
-	struct page *page;
-	pending_ring_idx_t index, start_idx = 0;
-	uint16_t dst_offset;
+	int start;
+	pending_ring_idx_t index;
 	unsigned int nr_slots;
-	struct pending_tx_info *first = NULL;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
@@ -815,8 +788,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
 	     shinfo->nr_frags++, txp++, gop++) {
-				index = pending_index(vif->pending_cons++);
-				pending_idx = vif->pending_ring[index];
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
 		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
@@ -824,18 +797,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
 	return gop;
-err:
-	/* Unwind, freeing all pages and sending error responses. */
-	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
-				frag_get_pending_idx(&frags[shinfo->nr_frags]),
-				XEN_NETIF_RSP_ERROR);
-	}
-	/* The head too, if necessary. */
-	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
-
-	return NULL;
 }
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
@@ -848,7 +809,6 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
-	u16 peek; /* peek into next tx request */
 
 	/* Check status of header. */
 	err = gop->status;
@@ -873,14 +833,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
 		tx_info = &vif->pending_tx_info[pending_idx];
-		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-			newerr = (++gop)->status;
+		newerr = (++gop)->status;
 
 		if (likely(!newerr)) {
 			if (vif->grant_tx_handle[pending_idx] !=
@@ -1353,7 +1311,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 	       (skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
-		struct page *page;
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
 		u16 pending_idx;
 		RING_IDX idx;
@@ -1728,18 +1685,17 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
-	u16 peek; /* peek into next tx request */
 	unsigned long flags;
 
-		pending_tx_info = &vif->pending_tx_info[pending_idx];
-		spin_lock_irqsave(&vif->response_lock, flags);
-		make_tx_response(vif, &pending_tx_info->req, status);
-		index = pending_index(vif->pending_prod);
-		vif->pending_ring[index] = pending_idx;
-		/* TX shouldn't use the index before we give it back here */
-		mb();
-		vif->pending_prod++;
-		spin_unlock_irqrestore(&vif->response_lock, flags);
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	spin_lock_irqsave(&vif->response_lock, flags);
+	make_tx_response(vif, &pending_tx_info->req, status);
+	index = pending_index(vif->pending_prod);
+	vif->pending_ring[index] = pending_idx;
+	/* TX shouldn't use the index before we give it back here */
+	mb();
+	vif->pending_prod++;
+	spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKu-0003he-8L; Mon, 20 Jan 2014 21:24:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKs-0003gD-UU
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:47 +0000
Received: from [85.158.137.68:63526] by server-16.bemta-3.messagelabs.com id
	9B/7E-26128-E149DD25; Mon, 20 Jan 2014 21:24:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32381 invoked from network); 20 Jan 2014 21:24:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665731"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:44 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:44 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:23 +0000
Message-ID: <1390253069-25507-4-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 3/9] xen-netback: Remove old TX
	grant copy definitions and fix indentations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These became obsolete with grant mapping. The indentation was intentionally
left like this in the previous patches to keep their diffs readable; this
patch now fixes it up.

v2:
- move the indentation fixup patch here

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h  |   37 +------------------
 drivers/net/xen-netback/netback.c |   72 ++++++++-----------------------------
 2 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index f35a3ce..2b1cd83 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -46,39 +46,9 @@
 #include <xen/xenbus.h>
 
 typedef unsigned int pending_ring_idx_t;
-#define INVALID_PENDING_RING_IDX (~0U)
 
-/* For the head field in pending_tx_info: it is used to indicate
- * whether this tx info is the head of one or more coalesced requests.
- *
- * When head != INVALID_PENDING_RING_IDX, it means the start of a new
- * tx requests queue and the end of previous queue.
- *
- * An example sequence of head fields (I = INVALID_PENDING_RING_IDX):
- *
- * ...|0 I I I|5 I|9 I I I|...
- * -->|<-INUSE----------------
- *
- * After consuming the first slot(s) we have:
- *
- * ...|V V V V|5 I|9 I I I|...
- * -----FREE->|<-INUSE--------
- *
- * where V stands for "valid pending ring index". Any number other
- * than INVALID_PENDING_RING_IDX is OK. These entries are considered
- * free and can contain any number other than
- * INVALID_PENDING_RING_IDX. In practice we use 0.
- *
- * The in use non-INVALID_PENDING_RING_IDX (say 0, 5 and 9 in the
- * above example) number is the index into pending_tx_info and
- * mmap_pages arrays.
- */
 struct pending_tx_info {
-	struct xen_netif_tx_request req; /* coalesced tx request */
-	pending_ring_idx_t head; /* head != INVALID_PENDING_RING_IDX
-				  * if it is head of one or more tx
-				  * reqs
-				  */
+	struct xen_netif_tx_request req; /* tx request */
 	/* callback data for released SKBs. The	callback is always
 	 * xenvif_zerocopy_callback, ctx points to the next fragment, desc
 	 * contains the pending_idx
@@ -135,11 +105,6 @@ struct xenvif {
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	/* Coalescing tx requests before copying makes number of grant
-	 * copy ops greater or equal to number of slots required. In
-	 * worst case a tx request consumes 2 gnttab_copy.
-	 */
-	struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 5724468..f74fa92 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -71,16 +71,6 @@ module_param(fatal_skb_slots, uint, 0444);
  */
 #define XEN_NETBK_LEGACY_SLOTS_MAX XEN_NETIF_NR_SLOTS_MIN
 
-/*
- * If head != INVALID_PENDING_RING_IDX, it means this tx request is head of
- * one or more merged tx requests, otherwise it is the continuation of
- * previous tx request.
- */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
-{
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
-}
-
 static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 			       u8 status);
 
@@ -762,19 +752,6 @@ static int xenvif_count_requests(struct xenvif *vif,
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
-				      u16 pending_idx)
-{
-	struct page *page;
-
-	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
-	if (!page)
-		return NULL;
-	vif->mmap_pages[pending_idx] = page;
-
-	return page;
-}
-
 static inline void xenvif_tx_create_gop(struct xenvif *vif,
 					u16 pending_idx,
 					struct xen_netif_tx_request *txp,
@@ -797,13 +774,9 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	skb_frag_t *frags = shinfo->frags;
 	u16 pending_idx = *((u16 *)skb->data);
-	u16 head_idx = 0;
-	int slot, start;
-	struct page *page;
-	pending_ring_idx_t index, start_idx = 0;
-	uint16_t dst_offset;
+	int start;
+	pending_ring_idx_t index;
 	unsigned int nr_slots;
-	struct pending_tx_info *first = NULL;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
@@ -815,8 +788,8 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	for (shinfo->nr_frags = start; shinfo->nr_frags < nr_slots;
 	     shinfo->nr_frags++, txp++, gop++) {
-				index = pending_index(vif->pending_cons++);
-				pending_idx = vif->pending_ring[index];
+		index = pending_index(vif->pending_cons++);
+		pending_idx = vif->pending_ring[index];
 		xenvif_tx_create_gop(vif, pending_idx, txp, gop);
 		frag_set_pending_idx(&frags[shinfo->nr_frags], pending_idx);
 	}
@@ -824,18 +797,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
 	return gop;
-err:
-	/* Unwind, freeing all pages and sending error responses. */
-	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
-				frag_get_pending_idx(&frags[shinfo->nr_frags]),
-				XEN_NETIF_RSP_ERROR);
-	}
-	/* The head too, if necessary. */
-	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
-
-	return NULL;
 }
 
 static int xenvif_tx_check_gop(struct xenvif *vif,
@@ -848,7 +809,6 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
-	u16 peek; /* peek into next tx request */
 
 	/* Check status of header. */
 	err = gop->status;
@@ -873,14 +833,12 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
-		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
 		tx_info = &vif->pending_tx_info[pending_idx];
-		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
-			newerr = (++gop)->status;
+		newerr = (++gop)->status;
 
 		if (likely(!newerr)) {
 			if (vif->grant_tx_handle[pending_idx] !=
@@ -1353,7 +1311,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 	       (skb_queue_len(&vif->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
-		struct page *page;
 		struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
 		u16 pending_idx;
 		RING_IDX idx;
@@ -1728,18 +1685,17 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t index;
-	u16 peek; /* peek into next tx request */
 	unsigned long flags;
 
-		pending_tx_info = &vif->pending_tx_info[pending_idx];
-		spin_lock_irqsave(&vif->response_lock, flags);
-		make_tx_response(vif, &pending_tx_info->req, status);
-		index = pending_index(vif->pending_prod);
-		vif->pending_ring[index] = pending_idx;
-		/* TX shouldn't use the index before we give it back here */
-		mb();
-		vif->pending_prod++;
-		spin_unlock_irqrestore(&vif->response_lock, flags);
+	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	spin_lock_irqsave(&vif->response_lock, flags);
+	make_tx_response(vif, &pending_tx_info->req, status);
+	index = pending_index(vif->pending_prod);
+	vif->pending_ring[index] = pending_idx;
+	/* TX shouldn't use the index before we give it back here */
+	mb();
+	vif->pending_prod++;
+	spin_unlock_irqrestore(&vif->response_lock, flags);
 }
 
 void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
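
The re-indented xenvif_idx_release hunk above preserves a classic
produce-then-publish pattern: write the ring slot, issue a full barrier, then
bump the producer. A minimal user-space model of that pattern (names and sizes
are illustrative, and a C11 fence stands in for the kernel's mb()):

```c
#include <stdatomic.h>

#define RING_SIZE 8	/* must be a power of two */
#define ring_index(i) ((i) & (RING_SIZE - 1))

struct pending_ring {
	unsigned short ring[RING_SIZE];
	unsigned int prod;	/* slots below prod are published */
};

/* Give pending_idx back to the free ring. The fence orders the slot
 * write before the producer bump, so a consumer that observes the new
 * prod value is guaranteed to read the freshly written slot. */
static void idx_release(struct pending_ring *pr, unsigned short pending_idx)
{
	pr->ring[ring_index(pr->prod)] = pending_idx;
	atomic_thread_fence(memory_order_seq_cst);	/* stand-in for mb() */
	pr->prod++;
}
```

The driver additionally serializes callers with response_lock; the barrier by
itself only orders the two stores against the dequeuing side.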

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKw-0003ji-SV; Mon, 20 Jan 2014 21:24:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKv-0003iQ-Ec
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:49 +0000
Received: from [85.158.137.68:63627] by server-4.bemta-3.messagelabs.com id
	15/7C-10414-0249DD25; Mon, 20 Jan 2014 21:24:48 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32468 invoked from network); 20 Jan 2014 21:24:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665740"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:47 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:47 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:24 +0000
Message-ID: <1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path for
	mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The RX path needs to know whether the SKB fragments are stored on pages from
another domain.
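
A simplified sketch of the source selection this patch adds to
xenvif_gop_frag_copy. The types, flag values, and DOMID_SELF here are
stand-ins, not the real Xen grant-table definitions:

```c
#include <stddef.h>

typedef unsigned int grant_ref_t;

#define GNTCOPY_dest_gref   (1u << 0)   /* placeholder flag values */
#define GNTCOPY_source_gref (1u << 1)
#define DOMID_SELF          0x7FF0u     /* placeholder constant */

struct copy_source {
	unsigned int domid;
	int use_gref;           /* 1: ref is valid, 0: gmfn is valid */
	grant_ref_t ref;        /* foreign grant reference */
	unsigned long gmfn;     /* local machine frame */
};

/* Mirrors the if/else the patch adds: when the fragment's page was
 * grant-mapped from another domain, copy straight from the foreign
 * grant ref; otherwise fall back to the local frame as before. */
static unsigned int fill_source(struct copy_source *src, int foreign_domid,
				grant_ref_t foreign_gref, unsigned long gmfn)
{
	unsigned int flags = GNTCOPY_dest_gref;

	if (foreign_domid >= 0) {
		src->domid = (unsigned int)foreign_domid;
		src->use_gref = 1;
		src->ref = foreign_gref;
		flags |= GNTCOPY_source_gref;
	} else {
		src->domid = DOMID_SELF;
		src->use_gref = 0;
		src->gmfn = gmfn;
	}
	return flags;
}
```

Copying directly from the foreign grant ref saves the hypervisor a
translation step and avoids touching the mapped page from the backend.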

v4:
- indentation fixes

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/netback.c |   46 +++++++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f74fa92..d43444d 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -226,7 +226,9 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
 static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
-				 unsigned long offset, int *head)
+				 unsigned long offset, int *head,
+				 struct xenvif *foreign_vif,
+				 grant_ref_t foreign_gref)
 {
 	struct gnttab_copy *copy_gop;
 	struct xenvif_rx_meta *meta;
@@ -268,8 +270,15 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->flags = GNTCOPY_dest_gref;
 		copy_gop->len = bytes;
 
-		copy_gop->source.domid = DOMID_SELF;
-		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
+		if (foreign_vif) {
+			copy_gop->source.domid = foreign_vif->domid;
+			copy_gop->source.u.ref = foreign_gref;
+			copy_gop->flags |= GNTCOPY_source_gref;
+		} else {
+			copy_gop->source.domid = DOMID_SELF;
+			copy_gop->source.u.gmfn =
+				virt_to_mfn(page_address(page));
+		}
 		copy_gop->source.offset = offset;
 
 		copy_gop->dest.domid = vif->domid;
@@ -330,6 +339,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	int old_meta_prod;
 	int gso_type;
 	int gso_size;
+	struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
+	grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
+	struct xenvif *foreign_vif = NULL;
 
 	old_meta_prod = npo->meta_prod;
 
@@ -370,6 +382,26 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	npo->copy_off = 0;
 	npo->copy_gref = req->gref;
 
+	if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
+		 (ubuf->callback == &xenvif_zerocopy_callback)) {
+		u16 pending_idx = ubuf->desc;
+		int i = 0;
+		struct pending_tx_info *temp =
+			container_of(ubuf,
+				     struct pending_tx_info,
+				     callback_struct);
+		foreign_vif =
+			container_of(temp - pending_idx,
+				     struct xenvif,
+				     pending_tx_info[0]);
+		do {
+			pending_idx = ubuf->desc;
+			foreign_grefs[i++] =
+				foreign_vif->pending_tx_info[pending_idx].req.gref;
+			ubuf = (struct ubuf_info *) ubuf->ctx;
+		} while (ubuf);
+	}
+
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
 		unsigned int offset = offset_in_page(data);
@@ -379,7 +411,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 			len = skb_tail_pointer(skb) - data;
 
 		xenvif_gop_frag_copy(vif, skb, npo,
-				     virt_to_page(data), len, offset, &head);
+				     virt_to_page(data), len, offset, &head,
+				     NULL,
+				     0);
 		data += len;
 	}
 
@@ -388,7 +422,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
-				     &head);
+				     &head,
+				     foreign_vif,
+				     foreign_grefs[i]);
 	}
 
 	return npo->meta_prod - old_meta_prod;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
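
The xenvif_gop_skb hunk above recovers the foreign vif from the ubuf_info
pointer with two container_of hops plus array arithmetic. The trick can be
illustrated with toy structures (layouts and field names are stand-ins for
the real driver definitions):

```c
#include <stddef.h>

/* container_of as in the kernel, minus the type checking. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pending_tx_info {
	int req;                                        /* tx request placeholder */
	struct { unsigned long desc; } callback_struct; /* ubuf_info placeholder */
};

struct xenvif {
	int domid;
	struct pending_tx_info pending_tx_info[4];
};

/* Two-hop recovery: the callback pointer gives its enclosing
 * pending_tx_info; stepping back pending_idx array elements lands on
 * pending_tx_info[0], whose enclosing structure is the vif itself. */
static struct xenvif *vif_from_callback(void *cb, unsigned int pending_idx)
{
	struct pending_tx_info *info =
		container_of(cb, struct pending_tx_info, callback_struct);
	return container_of(info - pending_idx, struct xenvif, pending_tx_info[0]);
}
```

This works only because every callback struct is embedded in a fixed-size
array inside the vif, so pending_idx fully determines the offset back to
element zero.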

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MKz-0003m4-GA; Mon, 20 Jan 2014 21:24:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5MKy-0003km-Cm
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:52 +0000
Received: from [85.158.137.68:11609] by server-8.bemta-3.messagelabs.com id
	30/D3-31081-3249DD25; Mon, 20 Jan 2014 21:24:51 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32589 invoked from network); 20 Jan 2014 21:24:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665753"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:50 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:49 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:25 +0000
Message-ID: <1390253069-25507-6-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 5/9] xen-netback: Add stat counters
	for zerocopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the buffers had to be copied. They
also help detect leaked packets: if "sent != success + fail", some packets
were probably never freed up properly.
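
The leak check described above can be sketched as a tiny user-space model (struct
zc_stats and zc_stats_balanced are illustrative names, not part of the driver;
in the real patch the three fields live in struct xenvif and are bumped from
xenvif_tx_submit and the zerocopy destructor callback):

```c
#include <assert.h>

/* Toy model of the three counters: every zerocopy skb handed to the
 * stack bumps "sent", and its destructor callback later bumps exactly
 * one of "success" or "fail". */
struct zc_stats {
	unsigned long sent;
	unsigned long success;
	unsigned long fail;
};

/* Returns 1 when every sent packet was accounted for by a callback;
 * 0 means some skbs were probably never freed up properly. */
static int zc_stats_balanced(const struct zc_stats *s)
{
	return s->sent == s->success + s->fail;
}
```

A monitoring script comparing the three ethtool counters the same way can flag
a driver leak without any kernel instrumentation.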

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    3 +++
 drivers/net/xen-netback/interface.c |   15 +++++++++++++++
 drivers/net/xen-netback/netback.c   |    9 ++++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 419e63c..e3c28ff 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -155,6 +155,9 @@ struct xenvif {
 
 	/* Statistics */
 	unsigned long rx_gso_checksum_fixup;
+	unsigned long tx_zerocopy_sent;
+	unsigned long tx_zerocopy_success;
+	unsigned long tx_zerocopy_fail;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index af5216f..75fe683 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -239,6 +239,21 @@ static const struct xenvif_stat {
 		"rx_gso_checksum_fixup",
 		offsetof(struct xenvif, rx_gso_checksum_fixup)
 	},
+	/* If (sent != success + fail), there are probably packets never
+	 * freed up properly!
+	 */
+	{
+		"tx_zerocopy_sent",
+		offsetof(struct xenvif, tx_zerocopy_sent),
+	},
+	{
+		"tx_zerocopy_success",
+		offsetof(struct xenvif, tx_zerocopy_success),
+	},
+	{
+		"tx_zerocopy_fail",
+		offsetof(struct xenvif, tx_zerocopy_fail)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a1b03e4..e2dd565 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1611,8 +1611,10 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 		 * skb_copy_ubufs while we are still in control of the skb. E.g.
 		 * the __pskb_pull_tail earlier can do such thing.
 		 */
-		if (skb_shinfo(skb)->destructor_arg)
+		if (skb_shinfo(skb)->destructor_arg) {
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent++;
+		}
 
 		netif_receive_skb(skb);
 	}
@@ -1645,6 +1647,11 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
 		napi_schedule(&vif->napi);
 	} while (ubuf);
 	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
+
+	if (likely(zerocopy_success))
+		vif->tx_zerocopy_success++;
+	else
+		vif->tx_zerocopy_fail++;
 }
 
 static inline void xenvif_tx_action_dealloc(struct xenvif *vif)

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:24:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ML2-0003ot-Ub; Mon, 20 Jan 2014 21:24:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5ML2-0003nw-0M
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:56 +0000
Received: from [85.158.143.35:63062] by server-2.bemta-4.messagelabs.com id
	44/EE-11386-7249DD25; Mon, 20 Jan 2014 21:24:55 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390253093!5760953!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25210 invoked from network); 20 Jan 2014 21:24:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595820"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:53 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:52 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:26 +0000
Message-ID: <1390253069-25507-7-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 6/9] xen-netback: Handle guests with
	too many frags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The Xen network protocol had an implicit dependency on MAX_SKB_FRAGS. Netback
has to handle guests sending up to XEN_NETBK_LEGACY_SLOTS_MAX slots. To
achieve that:
- create a new skb
- map the leftover slots to its frags (it has no linear buffer!)
- chain it to the previous skb through skb_shinfo(skb)->frag_list
- map them
- copy the whole thing into a brand new skb and send it to the stack
- unmap the two old skbs' pages
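
The steps above can be sketched as a simplified user-space model (toy_skb,
build_chain and consolidate are hypothetical names, not kernel API; MAX_FRAGS
stands in for MAX_SKB_FRAGS, and consolidate stands in for the skb_copy_expand
flattening step):

```c
#include <stdlib.h>

#define MAX_FRAGS 4 /* stands in for MAX_SKB_FRAGS */

/* Toy skb: a fixed frag array plus a frag_list pointer, mirroring how
 * netback chains a second skb when a guest sends more slots than fit. */
struct toy_skb {
	int frags[MAX_FRAGS];
	int nr_frags;
	struct toy_skb *frag_list;
};

/* Distribute "n" slots over one skb, overflowing the rest into a
 * chained frag_list skb (which has no linear buffer of its own). */
static struct toy_skb *build_chain(const int *slots, int n)
{
	struct toy_skb *skb = calloc(1, sizeof(*skb));
	int i;

	for (i = 0; i < n && i < MAX_FRAGS; i++)
		skb->frags[skb->nr_frags++] = slots[i];

	if (n > MAX_FRAGS) {
		struct toy_skb *nskb = calloc(1, sizeof(*nskb));

		for (; i < n; i++)
			nskb->frags[nskb->nr_frags++] = slots[i];
		skb->frag_list = nskb; /* chain the overflow skb */
	}
	return skb;
}

/* Flatten the chain into one array, as the copy into a brand new skb
 * does before the packet is handed to the stack. Returns the count. */
static int consolidate(const struct toy_skb *skb, int *out)
{
	int n = 0;

	for (; skb; skb = skb->frag_list)
		for (int i = 0; i < skb->nr_frags; i++)
			out[n++] = skb->frags[i];
	return n;
}
```

After consolidation both original skbs can be unmapped and freed, which is why
the patch frees nskb after netif_receive_skb() on the flattened copy.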

v3:
- adding extra check for frag number
- consolidate alloc_skb's into xenvif_alloc_skb()
- BUG_ON(frag_overflow > MAX_SKB_FRAGS)

v4:
- handle error of skb_copy_expand()

v5:
- ratelimit error messages
- remove a tx_flags setting from xenvif_tx_submit 

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>

---
 drivers/net/xen-netback/netback.c |  124 ++++++++++++++++++++++++++++++++++---
 1 file changed, 114 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 22d05de..031258c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -803,6 +803,20 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
 	       sizeof(*txp));
 }
 
+static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
+{
+	struct sk_buff *skb =
+		alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
+			  GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(skb == NULL))
+		return NULL;
+
+	/* Packets passed to netif_rx() must have some headroom. */
+	skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+
+	return skb;
+}
+
 static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 							struct sk_buff *skb,
 							struct xen_netif_tx_request *txp,
@@ -813,11 +827,16 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 	u16 pending_idx = *((u16 *)skb->data);
 	int start;
 	pending_ring_idx_t index;
-	unsigned int nr_slots;
+	unsigned int nr_slots, frag_overflow = 0;
 
 	/* At this point shinfo->nr_frags is in fact the number of
 	 * slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
 	 */
+	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
+		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
+		BUG_ON(frag_overflow > MAX_SKB_FRAGS);
+		shinfo->nr_frags = MAX_SKB_FRAGS;
+	}
 	nr_slots = shinfo->nr_frags;
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
@@ -833,6 +852,30 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
 
 	BUG_ON(shinfo->nr_frags > MAX_SKB_FRAGS);
 
+	if (frag_overflow) {
+		struct sk_buff *nskb = xenvif_alloc_skb(0);
+		if (unlikely(nskb == NULL)) {
+			if (net_ratelimit())
+				netdev_err(vif->dev,
+					   "Can't allocate the frag_list skb.\n");
+			return NULL;
+		}
+
+		shinfo = skb_shinfo(nskb);
+		frags = shinfo->frags;
+
+		for (shinfo->nr_frags = 0; shinfo->nr_frags < frag_overflow;
+		     shinfo->nr_frags++, txp++, gop++) {
+			index = pending_index(vif->pending_cons++);
+			pending_idx = vif->pending_ring[index];
+			xenvif_tx_create_gop(vif, pending_idx, txp, gop);
+			frag_set_pending_idx(&frags[shinfo->nr_frags],
+					     pending_idx);
+		}
+
+		skb_shinfo(skb)->frag_list = nskb;
+	}
+
 	return gop;
 }
 
@@ -846,6 +889,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	struct pending_tx_info *tx_info;
 	int nr_frags = shinfo->nr_frags;
 	int i, err, start;
+	struct sk_buff *first_skb = NULL;
 
 	/* Check status of header. */
 	err = gop->status;
@@ -866,6 +910,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
 
+check_frags:
 	for (i = start; i < nr_frags; i++) {
 		int j, newerr;
 
@@ -900,11 +945,20 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
 			continue;
-
 		/* First error: invalidate header and preceding fragments. */
-		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_unmap(vif, pending_idx);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		if (!first_skb) {
+			pending_idx = *((u16 *)skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+					   pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		} else {
+			pending_idx = *((u16 *)first_skb->data);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif,
+					   pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		}
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
 			xenvif_idx_unmap(vif, pending_idx);
@@ -916,6 +970,32 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		err = newerr;
 	}
 
+	if (shinfo->frag_list) {
+		first_skb = skb;
+		skb = shinfo->frag_list;
+		shinfo = skb_shinfo(skb);
+		nr_frags = shinfo->nr_frags;
+		start = 0;
+
+		goto check_frags;
+	}
+
+	/* There was a mapping error in the frag_list skb. We have to unmap
+	 * the first skb's frags
+	 */
+	if (first_skb && err) {
+		int j;
+		shinfo = skb_shinfo(first_skb);
+		pending_idx = *((u16 *)first_skb->data);
+		start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
+		for (j = start; j < shinfo->nr_frags; j++) {
+			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
+			xenvif_idx_unmap(vif, pending_idx);
+			xenvif_idx_release(vif, pending_idx,
+					   XEN_NETIF_RSP_OKAY);
+		}
+	}
+
 	*gopp = gop + 1;
 	return err;
 }
@@ -1419,8 +1499,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
 			PKT_PROT_LEN : txreq.size;
 
-		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
-				GFP_ATOMIC | __GFP_NOWARN);
+		skb = xenvif_alloc_skb(data_len);
 		if (unlikely(skb == NULL)) {
 			netdev_dbg(vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
@@ -1428,9 +1507,6 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			break;
 		}
 
-		/* Packets passed to netif_rx() must have some headroom. */
-		skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
-
 		if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
@@ -1492,6 +1568,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
+		struct sk_buff *nskb = NULL;
 
 		pending_idx = *((u16 *)skb->data);
 		txp = &vif->pending_tx_info[pending_idx].req;
@@ -1534,6 +1611,30 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				  pending_idx :
 				  INVALID_PENDING_IDX);
 
+		if (skb_shinfo(skb)->frag_list) {
+			nskb = skb_shinfo(skb)->frag_list;
+			xenvif_fill_frags(vif, nskb, INVALID_PENDING_IDX);
+			skb->len += nskb->len;
+			skb->data_len += nskb->len;
+			skb->truesize += nskb->truesize;
+			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
+			vif->tx_zerocopy_sent += 2;
+			nskb = skb;
+
+			skb = skb_copy_expand(skb,
+					      0,
+					      0,
+					      GFP_ATOMIC | __GFP_NOWARN);
+			if (!skb) {
+				if (net_ratelimit())
+					netdev_dbg(vif->dev,
+						   "Can't consolidate skb with too many fragments\n");
+				kfree_skb(nskb);
+				continue;
+			}
+			skb_shinfo(skb)->destructor_arg = NULL;
+		}
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
@@ -1587,6 +1688,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		}
 
 		netif_receive_skb(skb);
+
+		if (nskb)
+			kfree_skb(nskb);
 	}
 
 	return work_done;

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ML6-0003sn-Lh; Mon, 20 Jan 2014 21:25:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5ML4-0003q4-7P
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:24:58 +0000
Received: from [85.158.137.68:63875] by server-3.bemta-3.messagelabs.com id
	C6/43-10658-9249DD25; Mon, 20 Jan 2014 21:24:57 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390253082!9459643!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32739 invoked from network); 20 Jan 2014 21:24:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="94665772"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:55 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:55 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:27 +0000
Message-ID: <1390253069-25507-8-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 7/9] xen-netback: Add stat counters
	for frag_list skbs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These counters help determine how often the guest sends a packet with more
than MAX_SKB_FRAGS frags.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    1 +
 drivers/net/xen-netback/interface.c |    7 +++++++
 drivers/net/xen-netback/netback.c   |    1 +
 3 files changed, 9 insertions(+)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index e3c28ff..c037efb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -158,6 +158,7 @@ struct xenvif {
 	unsigned long tx_zerocopy_sent;
 	unsigned long tx_zerocopy_success;
 	unsigned long tx_zerocopy_fail;
+	unsigned long tx_frag_overflow;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index ac27af3..b7daf8d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -254,6 +254,13 @@ static const struct xenvif_stat {
 		"tx_zerocopy_fail",
 		offsetof(struct xenvif, tx_zerocopy_fail)
 	},
+	/* Number of packets exceeding MAX_SKB_FRAGS slots. You should use
+	 * a guest with the same MAX_SKB_FRAGS value.
+	 */
+	{
+		"tx_frag_overflow",
+		offsetof(struct xenvif, tx_frag_overflow)
+	},
 };
 
 static int xenvif_get_sset_count(struct net_device *dev, int string_set)
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 9841429..4305965 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1656,6 +1656,7 @@ static int xenvif_tx_submit(struct xenvif *vif, int budget)
 			skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			skb_shinfo(nskb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 			vif->tx_zerocopy_sent += 2;
+			vif->tx_frag_overflow++;
 			nskb = skb;
 
 			skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC | __GFP_NOWARN);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:25:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ML9-0003v9-5W; Mon, 20 Jan 2014 21:25:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5ML6-0003sS-PC
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:25:01 +0000
Received: from [85.158.143.35:63263] by server-2.bemta-4.messagelabs.com id
	1D/EE-11386-C249DD25; Mon, 20 Jan 2014 21:25:00 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390253093!5760953!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25392 invoked from network); 20 Jan 2014 21:24:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:24:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595839"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:24:58 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:24:58 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:28 +0000
Message-ID: <1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 8/9] xen-netback: Timeout packets in
	RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A malicious or buggy guest can leave its queue filled indefinitely, in which
case the qdisc starts to queue packets for that VIF. If those packets came
from another guest, they can tie up its slots and prevent shutdown. To avoid
that, we make sure the queue is drained every 10 seconds.
In the worst case the qdisc queue typically takes 3 rounds to flush.

v3:
- remove stale debug log
- tie unmap timeout in xenvif_free to this timeout

v4:
- due to RX flow control changes start_xmit now doesn't drop the packets but
  places them on the internal queue, so the timer sets rx_queue_purge and
  kicks the thread to drop the packets there
- cancel the timer if a previously filled internal queue drains
- adjust the teardown timeout, as in the worst case it can take more time now

v5:
- create a separate variable, worst_case_skb_lifetime, and add an explanation
  of why it is so long

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    6 ++++++
 drivers/net/xen-netback/interface.c |   37 +++++++++++++++++++++++++++++++++--
 drivers/net/xen-netback/netback.c   |   23 +++++++++++++++++++---
 3 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 109c29f..d1cd8ce 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -129,6 +129,9 @@ struct xenvif {
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
+	bool rx_queue_purge;
+
+	struct timer_list wake_queue;
 
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
@@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int rx_drain_timeout_msecs;
+extern unsigned int rx_drain_timeout_jiffies;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index af6b3e1..40aa500 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -114,6 +114,18 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void xenvif_wake_queue(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	if (netif_queue_stopped(vif->dev)) {
+		netdev_err(vif->dev, "draining TX queue\n");
+		vif->rx_queue_purge = true;
+		xenvif_kick_thread(vif);
+		netif_wake_queue(vif->dev);
+	}
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
@@ -143,8 +155,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
+	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
+		vif->wake_queue.function = xenvif_wake_queue;
+		vif->wake_queue.data = (unsigned long)vif;
 		xenvif_stop_queue(vif);
+		mod_timer(&vif->wake_queue,
+			jiffies + rx_drain_timeout_jiffies);
+	}
 
 	skb_queue_tail(&vif->rx_queue, skb);
 	xenvif_kick_thread(vif);
@@ -352,6 +369,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	init_timer(&vif->credit_timeout);
 	vif->credit_window_start = get_jiffies_64();
 
+	init_timer(&vif->wake_queue);
+
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
@@ -532,6 +551,7 @@ void xenvif_disconnect(struct xenvif *vif)
 		xenvif_carrier_off(vif);
 
 	if (vif->task) {
+		del_timer_sync(&vif->wake_queue);
 		kthread_stop(vif->task);
 		vif->task = NULL;
 	}
@@ -557,12 +577,25 @@ void xenvif_disconnect(struct xenvif *vif)
 void xenvif_free(struct xenvif *vif)
 {
 	int i, unmap_timeout = 0;
+	/* Here we want to avoid timeout messages if an skb can be legitimately
+	 * stuck somewhere else. Realistically this could be another vif's
+	 * internal or qdisc queue. That other vif also has this
+	 * rx_drain_timeout_msecs timeout, but the timer only ditches the
+	 * internal queue. After that, the qdisc queue can put at worst
+	 * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that other vif's
+	 * internal queue, so we need several rounds of such timeouts until we
+	 * can be sure that no other vif should still have skbs from us. We
+	 * are not sending more skbs, so newly stuck packets are not
+	 * interesting for us here.
+	 */
+	unsigned int worst_case_skb_lifetime = (rx_drain_timeout_msecs/1000) *
+		DIV_ROUND_UP(XENVIF_QUEUE_LENGTH, (XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS));
 
 	for (i = 0; i < MAX_PENDING_REQS; ++i) {
 		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
 			unmap_timeout++;
 			schedule_timeout(msecs_to_jiffies(1000));
-			if (unmap_timeout > 9 &&
+			if (unmap_timeout > worst_case_skb_lifetime &&
 			    net_ratelimit())
 				netdev_err(vif->dev,
 					   "Page still granted! Index: %x\n",
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 560950e..bb65c7c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -63,6 +63,13 @@ module_param(separate_tx_rx_irq, bool, 0644);
 static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
 module_param(fatal_skb_slots, uint, 0444);
 
+/* When the guest ring is filled up, the qdisc queues the packets for us, but
+ * we have to time them out, otherwise other guests' packets can get stuck
+ */
+unsigned int rx_drain_timeout_msecs = 10000;
+module_param(rx_drain_timeout_msecs, uint, 0444);
+unsigned int rx_drain_timeout_jiffies;
+
 /*
  * To avoid confusion, we define XEN_NETBK_LEGACY_SLOTS_MAX indicating
  * the maximum slots a valid packet can use. Now this value is defined
@@ -1909,8 +1916,9 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return (!skb_queue_empty(&vif->rx_queue) &&
+	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots)) ||
+	       vif->rx_queue_purge;
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
@@ -1998,12 +2006,19 @@ int xenvif_kthread(void *data)
 		if (kthread_should_stop())
 			break;
 
+		if (vif->rx_queue_purge) {
+			skb_queue_purge(&vif->rx_queue);
+			vif->rx_queue_purge = false;
+		}
+
 		if (!skb_queue_empty(&vif->rx_queue))
 			xenvif_rx_action(vif);
 
 		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
+		    netif_queue_stopped(vif->dev)) {
+			del_timer_sync(&vif->wake_queue);
 			xenvif_start_queue(vif);
+		}
 
 		cond_resched();
 	}
@@ -2054,6 +2069,8 @@ static int __init netback_init(void)
 	if (rc)
 		goto failed_init;
 
+	rx_drain_timeout_jiffies = msecs_to_jiffies(rx_drain_timeout_msecs);
+
 	return 0;
 
 failed_init:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:25:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5MLB-0003xy-Ly; Mon, 20 Jan 2014 21:25:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5ML9-0003v2-Fj
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 21:25:03 +0000
Received: from [85.158.143.35:52905] by server-3.bemta-4.messagelabs.com id
	0E/53-32360-E249DD25; Mon, 20 Jan 2014 21:25:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390253093!5760953!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25475 invoked from network); 20 Jan 2014 21:25:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:25:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92595855"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 21:25:01 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 20 Jan 2014 16:25:01 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Mon, 20 Jan 2014 21:24:29 +0000
Message-ID: <1390253069-25507-10-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net-next v5 9/9] xen-netback: Aggregate TX unmap
	operations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Unmapping causes TLB flushing, therefore we should do it in the largest
possible batches. However, we shouldn't starve the guest for too long. So if
the guest has space for at least two big packets and we don't have at least a
quarter ring to unmap, delay it for at most 1 millisecond.
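
The heuristic above (flush only when the guest is short on ring space, a
quarter of the pending ring awaits unmapping, or the 1 ms timer has fired)
can be modeled as a standalone predicate. This is an illustrative sketch,
not the in-tree code; the constants mirror the patch but are assumptions
here:

```c
#include <stdbool.h>

/* Constants mirroring the patch; the real values come from the Xen headers. */
#define MAX_PENDING_REQS            256
#define XEN_NETBK_LEGACY_SLOTS_MAX  18

/*
 * Model of the tx_dealloc_work_todo() decision: with unmaps pending,
 * keep batching while the guest still has room for two big packets,
 * less than a quarter of the pending ring awaits unmapping, and the
 * 1 ms grace timer has not fired.
 */
static bool dealloc_should_flush(unsigned int free_slots,
                                 unsigned int pending_unmaps,
                                 bool timed_out)
{
    if (pending_unmaps == 0)
        return false;               /* nothing to unmap at all */

    if (free_slots > 2 * XEN_NETBK_LEGACY_SLOTS_MAX &&
        pending_unmaps < MAX_PENDING_REQS / 4 &&
        !timed_out)
        return false;               /* keep batching; the 1 ms timer is armed */

    return true;                    /* flush the batch now */
}
```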

v4:
- use bool for tx_dealloc_work_todo

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 ++
 drivers/net/xen-netback/interface.c |    2 ++
 drivers/net/xen-netback/netback.c   |   34 +++++++++++++++++++++++++++++++++-
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index d1cd8ce..95498c8 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -118,6 +118,8 @@ struct xenvif {
 	u16 dealloc_ring[MAX_PENDING_REQS];
 	struct task_struct *dealloc_task;
 	wait_queue_head_t dealloc_wq;
+	struct timer_list dealloc_delay;
+	bool dealloc_delay_timed_out;
 
 	/* Use kthread for guest RX */
 	struct task_struct *task;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 40aa500..f925af5 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -407,6 +407,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 			  .desc = i };
 		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
 	}
+	init_timer(&vif->dealloc_delay);
 
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
@@ -557,6 +558,7 @@ void xenvif_disconnect(struct xenvif *vif)
 	}
 
 	if (vif->dealloc_task) {
+		del_timer_sync(&vif->dealloc_delay);
 		kthread_stop(vif->dealloc_task);
 		vif->dealloc_task = NULL;
 	}
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index bb65c7c..c098276 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -135,6 +135,11 @@ static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
 		vif->pending_prod + vif->pending_cons;
 }
 
+static inline pending_ring_idx_t nr_free_slots(struct xen_netif_tx_back_ring *ring)
+{
+	return ring->nr_ents -	(ring->sring->req_prod - ring->rsp_prod_pvt);
+}
+
 bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
 {
 	RING_IDX prod, cons;
@@ -1932,9 +1937,36 @@ static inline int tx_work_todo(struct xenvif *vif)
 	return 0;
 }
 
+static void xenvif_dealloc_delay(unsigned long data)
+{
+	struct xenvif *vif = (struct xenvif *)data;
+
+	vif->dealloc_delay_timed_out = true;
+	wake_up(&vif->dealloc_wq);
+}
+
 static inline bool tx_dealloc_work_todo(struct xenvif *vif)
 {
-	return vif->dealloc_cons != vif->dealloc_prod
+	if (vif->dealloc_cons != vif->dealloc_prod) {
+		if ((nr_free_slots(&vif->tx) > 2 * XEN_NETBK_LEGACY_SLOTS_MAX) &&
+		    (vif->dealloc_prod - vif->dealloc_cons < MAX_PENDING_REQS / 4) &&
+		    !vif->dealloc_delay_timed_out) {
+			if (!timer_pending(&vif->dealloc_delay)) {
+				vif->dealloc_delay.function =
+					xenvif_dealloc_delay;
+				vif->dealloc_delay.data = (unsigned long)vif;
+				mod_timer(&vif->dealloc_delay,
+					  jiffies + msecs_to_jiffies(1));
+
+			}
+			return false;
+		}
+		del_timer_sync(&vif->dealloc_delay);
+		vif->dealloc_delay_timed_out = false;
+		return true;
+	}
+
+	return false;
 }
 
 void xenvif_unmap_frontend_rings(struct xenvif *vif)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 21:42:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 21:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Mc7-0005uf-OA; Mon, 20 Jan 2014 21:42:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W5Mc6-0005ua-C6
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 21:42:34 +0000
Received: from [85.158.139.211:48162] by server-17.bemta-5.messagelabs.com id
	EB/F9-19152-9489DD25; Mon, 20 Jan 2014 21:42:33 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390254152!10868118!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29495 invoked from network); 20 Jan 2014 21:42:32 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 21:42:32 -0000
Received: by mail-lb0-f173.google.com with SMTP id y6so5302763lbh.4
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 13:42:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=2BotNebKF2fFJP0YMkKN4rMtycu4A/6W9FWq1Iebc9U=;
	b=QGyeI+e0BPIkh9gmwPr69paYfPgKwgiLEjRZwOOyhVkOMzYLUF23H+5QMR1rFOYSiS
	LEYbgYRK36hqm5YTB9xGseyuyHANWI26nef54G0L57R6esbpFI5pm5Cnd5CnKu8gQFgc
	6caGrgX15GrWam6EfcncPMahnkYxNh/IfG4NJRaZgTU58D3NEGM+/QTXkygkyDICAgya
	2Xk54u7oPSnqAcaFM+grmTc0p0w6CCu7T5vHPE0qkboVZPAarAu8JkmrQEeORekB1yrz
	1hnTXVb3dLb2wYJjYBD8RnK1rxAuPS1GZnoEMWGyrr5Hwq8M9k5fp3R4GNJqbDaj+aVP
	ngNw==
X-Received: by 10.152.44.167 with SMTP id f7mr38318lam.64.1390254152186; Mon,
	20 Jan 2014 13:42:32 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Mon, 20 Jan 2014 13:42:11 -0800 (PST)
In-Reply-To: <52DD72C9.1040202@citrix.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
	<52DCFC8F.1050607@citrix.com>
	<CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
	<52DD72C9.1040202@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 20 Jan 2014 13:42:11 -0800
X-Google-Sender-Auth: d3xp_L8wat_a6kyWUrfZ3K4VtWc
Message-ID: <CAB=NE6X29-kjbJdBkA5c-NvS5bYyhk4pDJFUz1jD97y4QrDnYQ@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 11:02 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> On 20/01/14 18:33, Luis R. Rodriguez wrote:
>> Is the delta of what is queued for the next release typically small?
>> Otherwise someone doing development based on linux.git alone should
>> have conflicts with anything on the queue, no?
>
> We've not had any issues so far.

Will give that a shot.

>>>     To see what's queued for the next release, the next merge window,
>>>     and other work in progress:
>>>
>>>         The Xen subsystem maintainers' tip.git tree.
>>
>> That's the thing, you can't clone the tip.git tree today well, there
>> are undefined references and git gives up, asking for the linux-next
>> branch however did work.
>
> I think it did work, you just needed to checkout a branch manually.

Indeed, I just didn't expect a random developer to know how to fix this,
and if other git trees exist in the same form I figured it's best
to extend MAINTAINERS to specify the preferred branch. In this
case, since the branch is not recommended for development, I suppose
the recommendation no longer applies to this tree.

> But it looks like Konrad has pushed a master branch recently, as well.

That gives me a warm fuzzy, thanks.

 Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 22:04:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 22:04:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Mwm-0006qX-T2; Mon, 20 Jan 2014 22:03:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5Mwl-0006qS-Lo
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 22:03:55 +0000
Received: from [85.158.139.211:26477] by server-9.bemta-5.messagelabs.com id
	39/C8-15098-A4D9DD25; Mon, 20 Jan 2014 22:03:54 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390255430!593255!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24546 invoked from network); 20 Jan 2014 22:03:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 22:03:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92608277"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 22:03:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 17:03:49 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5Mwe-0005lW-Vx;
	Mon, 20 Jan 2014 22:03:48 +0000
Date: Mon, 20 Jan 2014 22:03:48 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140120220348.GA5058@zion.uk.xensource.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 09:24:28PM +0000, Zoltan Kiss wrote:
> A malicious or buggy guest can leave its queue filled indefinitely, in which
> case the qdisc starts to queue packets for that VIF. If those packets came
> from another guest, it can block its slots and prevent shutdown. To avoid
> that, we make sure the queue is drained every 10 seconds.
> In the worst case, the QDisc queue usually takes 3 rounds to flush.
> 
> v3:
> - remove stale debug log
> - tie unmap timeout in xenvif_free to this timeout
> 
> v4:
> - due to RX flow control changes start_xmit now doesn't drop the packets but
>   places them on the internal queue. So the timer sets rx_queue_purge and
>   kicks the thread to drop the packets there
> - we shoot down the timer if a previously filled internal queue drains
> - adjust the teardown timeout as in worst case it can take more time now
> 
> v5:
> - create a separate variable worst_case_skb_lifetime and add an explanation
>   about why it is so long
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |    6 ++++++
>  drivers/net/xen-netback/interface.c |   37 +++++++++++++++++++++++++++++++++--
>  drivers/net/xen-netback/netback.c   |   23 +++++++++++++++++++---
>  3 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 109c29f..d1cd8ce 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -129,6 +129,9 @@ struct xenvif {
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;
> +	bool rx_queue_purge;
> +
> +	struct timer_list wake_queue;
>  
>  	/* This array is allocated seperately as it is large */
>  	struct gnttab_copy *grant_copy_op;
> @@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
>  
>  extern bool separate_tx_rx_irq;
>  
> +extern unsigned int rx_drain_timeout_msecs;
> +extern unsigned int rx_drain_timeout_jiffies;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index af6b3e1..40aa500 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -114,6 +114,18 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>  	return IRQ_HANDLED;
>  }
>  
> +static void xenvif_wake_queue(unsigned long data)
> +{
> +	struct xenvif *vif = (struct xenvif *)data;
> +
> +	if (netif_queue_stopped(vif->dev)) {
> +		netdev_err(vif->dev, "draining TX queue\n");
> +		vif->rx_queue_purge = true;
> +		xenvif_kick_thread(vif);
> +		netif_wake_queue(vif->dev);
> +	}
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> @@ -143,8 +155,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	 * then turn off the queue to give the ring a chance to
>  	 * drain.
>  	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> +		vif->wake_queue.function = xenvif_wake_queue;
> +		vif->wake_queue.data = (unsigned long)vif;
>  		xenvif_stop_queue(vif);
> +		mod_timer(&vif->wake_queue,
> +			jiffies + rx_drain_timeout_jiffies);
> +	}
>  
>  	skb_queue_tail(&vif->rx_queue, skb);
>  	xenvif_kick_thread(vif);
> @@ -352,6 +369,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	init_timer(&vif->credit_timeout);
>  	vif->credit_window_start = get_jiffies_64();
>  
> +	init_timer(&vif->wake_queue);
> +
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
>  		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
> @@ -532,6 +551,7 @@ void xenvif_disconnect(struct xenvif *vif)
>  		xenvif_carrier_off(vif);
>  
>  	if (vif->task) {
> +		del_timer_sync(&vif->wake_queue);
>  		kthread_stop(vif->task);
>  		vif->task = NULL;
>  	}
> @@ -557,12 +577,25 @@ void xenvif_disconnect(struct xenvif *vif)
>  void xenvif_free(struct xenvif *vif)
>  {
>  	int i, unmap_timeout = 0;
> +	/* Here we want to avoid timeout messages if an skb can be legitimately
> +	 * stuck somewhere else. Realistically this could be another vif's
> +	 * internal or QDisc queue. That other vif also has this
> +	 * rx_drain_timeout_msecs timeout, but the timer only ditches the
> +	 * internal queue. After that, the QDisc queue can put in worst case
> +	 * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that other vif's
> +	 * internal queue, so we need several rounds of such timeouts until we
> +	 * can be sure that no other vif should have skbs from us. We are
> +	 * not sending more skbs, so newly stuck packets are not interesting
> +	 * for us here.
> +	 */

You beat me to this. Was about to reply to your other email. :-)

It's also worth mentioning that the DIV_ROUND_UP part is merely an
estimation, as you cannot possibly know the maximum / minimum queue length
of all other vifs (they can be changed at runtime). In practice most
users will stick with the default, but some advanced users might want to
tune this value for individual vifs (whether that's a good idea or not is
another topic).
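
For reference, the "several rounds" estimate can be sketched as follows. The
constants and the helper name are illustrative assumptions based on this
discussion, not the exact in-tree computation; with a default queue length of
32 it reproduces the "3 rounds to flush" figure from the commit message:

```c
/* Illustrative values; the real constants live in the Xen/netback headers. */
#define XEN_NETIF_RX_RING_SIZE  256
#define MAX_SKB_FRAGS           17
#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

/*
 * Rough worst-case lifetime of an skb stuck in another vif's queues:
 * each rx_drain_timeout round frees room for at most
 * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS fully-fragmented skbs, so the
 * qdisc backlog needs DIV_ROUND_UP(queue_len, that) rounds to clear.
 */
static unsigned int worst_case_skb_lifetime_msecs(unsigned int queue_len,
					unsigned int drain_timeout_msecs)
{
	unsigned int per_round = XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS;

	return DIV_ROUND_UP(queue_len, per_round) * drain_timeout_msecs;
}
```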

So, in order to convince myself this is safe, I also did some analysis
of the impact of having a queue length other than the default value. If
queue_len < XENVIF_QUEUE_LENGTH, that means you can queue fewer packets
in the qdisc than the default and drain them faster than calculated, which is
safe. On the other hand, if queue_len > XENVIF_QUEUE_LENGTH, you
actually need more time than calculated. I'm in two minds here. The
default value seems sensible to me, but I'm still a bit worried about the
queue_len > XENVIF_QUEUE_LENGTH case.

An idea is to book-keep the maximum tx queue length among all vifs and use
that to calculate the worst-case scenario.
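
That book-keeping could look roughly like this; every name here is
hypothetical, a sketch of the suggestion rather than proposed code:

```c
/*
 * Remember the largest tx queue length ever configured on any vif, so
 * the teardown timeout can be sized for the true worst case instead of
 * the compiled-in XENVIF_QUEUE_LENGTH default.
 * vif_set_queue_len() would be called wherever a vif's tx_queue_len is
 * changed (hypothetical hook, not the in-tree API).
 */
static unsigned int max_tx_queue_len;	/* monotonic maximum over all vifs */

static void vif_set_queue_len(unsigned int qlen)
{
	if (qlen > max_tx_queue_len)
		max_tx_queue_len = qlen;
}

static unsigned int worst_case_queue_len(unsigned int default_len)
{
	/* never size the timeout below the default */
	return max_tx_queue_len > default_len ? max_tx_queue_len : default_len;
}
```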

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Disposition: inline
In-Reply-To: <1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 09:24:28PM +0000, Zoltan Kiss wrote:
> A malicious or buggy guest can leave its queue filled indefinitely, in which
> case the qdisc starts to queue packets for that VIF. If those packets came
> from another guest, they can block that guest's slots and prevent its
> shutdown. To avoid that, we make sure the queue is drained every 10 seconds.
> The QDisc queue in the worst case usually takes 3 rounds to flush.
> 
> v3:
> - remove stale debug log
> - tie unmap timeout in xenvif_free to this timeout
> 
> v4:
> - due to RX flow control changes start_xmit now doesn't drop the packets but
>   places them on the internal queue. So the timer sets rx_queue_purge and
>   kicks the thread to drop the packets there
> - we shoot down the timer if a previously filled internal queue drains
> - adjust the teardown timeout as in the worst case it can take more time now
> 
> v5:
> - create separate variable worst_case_skb_lifetime and add an explanation
>   about why it is so long
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |    6 ++++++
>  drivers/net/xen-netback/interface.c |   37 +++++++++++++++++++++++++++++++++--
>  drivers/net/xen-netback/netback.c   |   23 +++++++++++++++++++---
>  3 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 109c29f..d1cd8ce 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -129,6 +129,9 @@ struct xenvif {
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;
> +	bool rx_queue_purge;
> +
> +	struct timer_list wake_queue;
>  
>  	/* This array is allocated seperately as it is large */
>  	struct gnttab_copy *grant_copy_op;
> @@ -225,4 +228,7 @@ void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
>  
>  extern bool separate_tx_rx_irq;
>  
> +extern unsigned int rx_drain_timeout_msecs;
> +extern unsigned int rx_drain_timeout_jiffies;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index af6b3e1..40aa500 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -114,6 +114,18 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>  	return IRQ_HANDLED;
>  }
>  
> +static void xenvif_wake_queue(unsigned long data)
> +{
> +	struct xenvif *vif = (struct xenvif *)data;
> +
> +	if (netif_queue_stopped(vif->dev)) {
> +		netdev_err(vif->dev, "draining TX queue\n");
> +		vif->rx_queue_purge = true;
> +		xenvif_kick_thread(vif);
> +		netif_wake_queue(vif->dev);
> +	}
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> @@ -143,8 +155,13 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	 * then turn off the queue to give the ring a chance to
>  	 * drain.
>  	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> +	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed)) {
> +		vif->wake_queue.function = xenvif_wake_queue;
> +		vif->wake_queue.data = (unsigned long)vif;
>  		xenvif_stop_queue(vif);
> +		mod_timer(&vif->wake_queue,
> +			jiffies + rx_drain_timeout_jiffies);
> +	}
>  
>  	skb_queue_tail(&vif->rx_queue, skb);
>  	xenvif_kick_thread(vif);
> @@ -352,6 +369,8 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	init_timer(&vif->credit_timeout);
>  	vif->credit_window_start = get_jiffies_64();
>  
> +	init_timer(&vif->wake_queue);
> +
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
>  		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
> @@ -532,6 +551,7 @@ void xenvif_disconnect(struct xenvif *vif)
>  		xenvif_carrier_off(vif);
>  
>  	if (vif->task) {
> +		del_timer_sync(&vif->wake_queue);
>  		kthread_stop(vif->task);
>  		vif->task = NULL;
>  	}
> @@ -557,12 +577,25 @@ void xenvif_disconnect(struct xenvif *vif)
>  void xenvif_free(struct xenvif *vif)
>  {
>  	int i, unmap_timeout = 0;
> +	/* Here we want to avoid timeout messages if an skb can be legitimately
> +	 * stuck somewhere else. Realistically this could be another vif's
> +	 * internal or QDisc queue. That other vif also has this
> +	 * rx_drain_timeout_msecs timeout, but the timer only ditches the
> +	 * internal queue. After that, the QDisc queue can in the worst case
> +	 * put XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that other
> +	 * vif's internal queue, so we need several rounds of such timeouts
> +	 * until we can be sure that no other vif still holds skbs from us.
> +	 * We are not sending any more skbs, so newly stuck packets are not
> +	 * interesting to us here.
> +	 */

You beat me to this. I was about to reply to your other email. :-)

It's also worth mentioning that the DIV_ROUND_UP part is merely an
estimate, as you cannot possibly know the maximum / minimum queue length
of all other vifs (they can be changed at runtime). In practice most
users will stick with the default, but some advanced users might want to
tune this value for individual vifs (whether that's a good idea or not
is another topic).

So, in order to convince myself this is safe, I also did some analysis
on the impact of having a queue length other than the default value. If
queue_len < XENVIF_QUEUE_LENGTH, you can queue fewer packets in qdisc
than the default and drain it faster than calculated, which is safe. On
the other hand, if queue_len > XENVIF_QUEUE_LENGTH, you actually need
more time than calculated. I'm in two minds here. The default value
seems sensible to me, but I'm still a bit worried about the
queue_len > XENVIF_QUEUE_LENGTH case.

An idea is to book-keep the maximum tx queue length among all vifs and
use that to calculate the worst-case scenario.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 22:09:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 22:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5N1g-0007Ca-QQ; Mon, 20 Jan 2014 22:09:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W5N1f-0007CJ-PO
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 22:09:00 +0000
Received: from [193.109.254.147:30728] by server-16.bemta-14.messagelabs.com
	id 4C/54-20600-B7E9DD25; Mon, 20 Jan 2014 22:08:59 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390255736!12010367!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31785 invoked from network); 20 Jan 2014 22:08:57 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 22:08:57 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so6071155qaq.11
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 14:08:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=w9VCOtugAJKZIUakMSZaScxb2WfQMeCnsCvyW1TYDdQ=;
	b=jtWgXOywAXDosoUVpdgvC+rKiIS7PnXpdhY1btSPR8OdQ3pGqdqeROx84Ox3yfOdCL
	Vs7FMTyxcnvJFbXuQWCtOsZm/nTevGPQy5IrnTPwoSa9VyQCWU4sM2gilL1uM8AVfOHX
	jYq3Y413NPCtQ4fmJyBwGmqlqlLzTjDwzWlv2ySFn/eywM0Z9d0cF2DBTQ9V7gp1P+Tw
	Q2FcschRS2iLWGmUp222lfdkMRHe5CRqUaqnw2rkp2h784XpYOl/9ycr3vnh+iFsOdyu
	duwaTcPpgcXhaB40p3X49GhvZzQegDl2DGV4IcyGZtqW/RVlzsY7puJ4Q05NiK/4rY5F
	QiOA==
MIME-Version: 1.0
X-Received: by 10.140.31.247 with SMTP id f110mr30004408qgf.58.1390255736297; 
	Mon, 20 Jan 2014 14:08:56 -0800 (PST)
Received: by 10.96.68.197 with HTTP; Mon, 20 Jan 2014 14:08:56 -0800 (PST)
Date: Mon, 20 Jan 2014 17:08:56 -0500
Message-ID: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=001a113a9c0251bb6704f06e2331
Subject: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP to
	DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a113a9c0251bb6704f06e2331
Content-Type: text/plain; charset=ISO-8859-1

xen: [v2] Pass the location of the ACPI RSDP to DOM0.

Some machines, such as recent IBM servers, only allow the OS to get the
ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
cannot get the RSDP on these machines, leading to all sorts of
functionality reductions.

Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..8c307c9 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
             safe_strcat(dom0_cmdline, " acpi=");
             safe_strcat(dom0_cmdline, acpi_param);
         }
+        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )
+        {
+            char acpi_os_root_pointer_string[MAX_RSDP_ADDRESS_STRING];
+            int acpi_os_root_pointer_string_size =
+                snprintf(acpi_os_root_pointer_string,
+                         MAX_RSDP_ADDRESS_STRING, "%08lX",
+                         acpi_os_get_root_pointer());
+
+            if ( (acpi_os_root_pointer_string_size > 0) &&
+                 (acpi_os_root_pointer_string_size <= MAX_RSDP_ADDRESS_STRING) )
+            {
+                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
+                safe_strcat(dom0_cmdline, acpi_os_root_pointer_string);
+            }
+            else
+            {
+                printk(XENLOG_WARNING "RSDP Address String Size Check Failed!\n");
+                printk(XENLOG_WARNING "Not passing acpi_rsdp to Dom0!\n");
+                printk(XENLOG_WARNING "(Expected size 1-%d, got %d)\n",
+                       MAX_RSDP_ADDRESS_STRING, acpi_os_root_pointer_string_size);
+            }
+        }

         cmdline = dom0_cmdline;
     }
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a7c3905 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -750,6 +750,7 @@ struct start_info {
                                 /*  SIF_MOD_START_PFN set in flags).      */
     unsigned long mod_len;      /* Size (bytes) of pre-loaded module.     */
 #define MAX_GUEST_CMDLINE 1024
+#define MAX_RSDP_ADDRESS_STRING 21
     int8_t cmd_line[MAX_GUEST_CMDLINE];
     /* The pfn range here covers both page table and p->m table frames.   */
     unsigned long first_p2m_pfn;/* 1st pfn forming initial P->M table.    */

--001a113a9c0251bb6704f06e2331
Content-Type: text/x-patch; charset=US-ASCII; name="xen-master-pass-acpi-rsdp.patch"
Content-Disposition: attachment; filename="xen-master-pass-acpi-rsdp.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hqoadl900

eGVuOiBbdjJdIFBhc3MgdGhlIGxvY2F0aW9uIG9mIHRoZSBBQ1BJIFJTRFAgdG8gRE9NMC4KIApT
b21lIG1hY2hpbmVzLCBzdWNoIGFzIHJlY2VudCBJQk0gc2VydmVycywgb25seSBhbGxvdyB0aGUg
T1MgdG8gZ2V0IHRoZQpBQ1BJIFJTRFAgZnJvbSBFRkkuIFNpbmNlIFhlbiBudWtlcyBET00wJ3Mg
YWJpbGl0eSB0byBhY2Nlc3MgRUZJLCBET00wCmNhbm5vdCBnZXQgdGhlIFJTRFAgb24gdGhlc2Ug
bWFjaGluZXMsIGxlYWRpbmcgdG8gYWxsIHNvcnRzIG9mCmZ1bmN0aW9uYWxpdHkgcmVkdWN0aW9u
cy4KIApTaWduZWQtb2ZmLWJ5OiBQaGlsaXAgV2VybmVyc2JhY2ggPHBoaWxpcC53ZXJuZXJzYmFj
aEBnbWFpbC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3NldHVwLmMgYi94ZW4vYXJj
aC94ODYvc2V0dXAuYwppbmRleCBiNDkyNTZkLi44YzMwN2M5IDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvc2V0dXAuYworKysgYi94ZW4vYXJjaC94ODYvc2V0dXAuYwpAQCAtMTM3OCw2ICsxMzc4
LDI1IEBAIHZvaWQgX19pbml0IF9fc3RhcnRfeGVuKHVuc2lnbmVkIGxvbmcgbWJpX3ApCiAgICAg
ICAgICAgICBzYWZlX3N0cmNhdChkb20wX2NtZGxpbmUsICIgYWNwaT0iKTsKICAgICAgICAgICAg
IHNhZmVfc3RyY2F0KGRvbTBfY21kbGluZSwgYWNwaV9wYXJhbSk7CiAgICAgICAgIH0KKyAgICAg
ICAgaWYgKCAhc3Ryc3RyKGRvbTBfY21kbGluZSwgImFjcGlfcnNkcD0iKSApCisgICAgICAgIHsK
KyAgICAgICAgICAgIGNoYXIgYWNwaV9vc19yb290X3BvaW50ZXJfc3RyaW5nW01BWF9SU0RQX0FE
RFJFU1NfU1RSSU5HXTsKKyAgICAgICAgICAgIGludCBhY3BpX29zX3Jvb3RfcG9pbnRlcl9zdHJp
bmdfc2l6ZSA9IHNucHJpbnRmKGFjcGlfb3Nfcm9vdF9wb2ludGVyX3N0cmluZywgCisgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBNQVhf
UlNEUF9BRERSRVNTX1NUUklORywgIiUwOGxYIiwgCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhY3BpX29zX2dldF9yb290X3BvaW50
ZXIoKSk7CisKKyAgICAgICAgICAgIGlmICggKGFjcGlfb3Nfcm9vdF9wb2ludGVyX3N0cmluZ19z
aXplID4gMCkgJiYgKGFjcGlfb3Nfcm9vdF9wb2ludGVyX3N0cmluZ19zaXplIDw9IE1BWF9SU0RQ
X0FERFJFU1NfU1RSSU5HKSApCisgICAgICAgICAgICB7ICAgCisgICAgICAgICAgICAgICAgc2Fm
ZV9zdHJjYXQoZG9tMF9jbWRsaW5lLCAiIGFjcGlfcnNkcD0weCIpOworICAgICAgICAgICAgICAg
IHNhZmVfc3RyY2F0KGRvbTBfY21kbGluZSwgYWNwaV9vc19yb290X3BvaW50ZXJfc3RyaW5nKTsK
KyAgICAgICAgICAgIH0gCisgICAgICAgICAgICBlbHNlIAorICAgICAgICAgICAgeworICAgICAg
ICAgICAgICAgIHByaW50ayhYRU5MT0dfV0FSTklORyAiUlNEUCBBZGRyZXNzIFN0cmluZyBTaXpl
IENoZWNrIEZhaWxlZCFcbiIpOworICAgICAgICAgICAgICAgIHByaW50ayhYRU5MT0dfV0FSTklO
RyAiTm90IHBhc3NpbmcgYWNwaV9yc2RwIHRvIERvbTAhXG4iKTsKKyAgICAgICAgICAgICAgICBw
cmludGsoWEVOTE9HX1dBUk5JTkcgIihFeHBlY3RlZCBzaXplIDEtJWQsIGdvdCAlZClcbiIsIE1B
WF9SU0RQX0FERFJFU1NfU1RSSU5HLCBhY3BpX29zX3Jvb3RfcG9pbnRlcl9zdHJpbmdfc2l6ZSk7
CisgICAgICAgICAgICB9CisgICAgICAgIH0KIAogICAgICAgICBjbWRsaW5lID0gZG9tMF9jbWRs
aW5lOwogICAgIH0KZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3B1YmxpYy94ZW4uaCBiL3hlbi9p
bmNsdWRlL3B1YmxpYy94ZW4uaAppbmRleCA4YzU2OTdlLi5hN2MzOTA1IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9wdWJsaWMveGVuLmgKKysrIGIveGVuL2luY2x1ZGUvcHVibGljL3hlbi5oCkBA
IC03NTAsNiArNzUwLDcgQEAgc3RydWN0IHN0YXJ0X2luZm8gewogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAvKiAgU0lGX01PRF9TVEFSVF9QRk4gc2V0IGluIGZsYWdzKS4gICAgICAq
LwogICAgIHVuc2lnbmVkIGxvbmcgbW9kX2xlbjsgICAgICAvKiBTaXplIChieXRlcykgb2YgcHJl
LWxvYWRlZCBtb2R1bGUuICAgICAqLwogI2RlZmluZSBNQVhfR1VFU1RfQ01ETElORSAxMDI0Cisj
ZGVmaW5lIE1BWF9SU0RQX0FERFJFU1NfU1RSSU5HIDIxCiAgICAgaW50OF90IGNtZF9saW5lW01B
WF9HVUVTVF9DTURMSU5FXTsKICAgICAvKiBUaGUgcGZuIHJhbmdlIGhlcmUgY292ZXJzIGJvdGgg
cGFnZSB0YWJsZSBhbmQgcC0+bSB0YWJsZSBmcmFtZXMuICAgKi8KICAgICB1bnNpZ25lZCBsb25n
IGZpcnN0X3AybV9wZm47LyogMXN0IHBmbiBmb3JtaW5nIGluaXRpYWwgUC0+TSB0YWJsZS4gICAg
Ki8K
--001a113a9c0251bb6704f06e2331
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a113a9c0251bb6704f06e2331--



From xen-devel-bounces@lists.xen.org Mon Jan 20 22:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 22:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5N5D-0007Tt-Hr; Mon, 20 Jan 2014 22:12:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5N5C-0007To-68
	for xen-devel@lists.xenproject.org; Mon, 20 Jan 2014 22:12:38 +0000
Received: from [85.158.139.211:37724] by server-7.bemta-5.messagelabs.com id
	7B/45-04824-55F9DD25; Mon, 20 Jan 2014 22:12:37 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390255955!123860!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28769 invoked from network); 20 Jan 2014 22:12:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Jan 2014 22:12:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="92610713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Jan 2014 22:12:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 17:12:34 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5N57-0005sk-Ls;
	Mon, 20 Jan 2014 22:12:33 +0000
Date: Mon, 20 Jan 2014 22:12:33 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140120221233.GB5058@zion.uk.xensource.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
	<20140120220348.GA5058@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140120220348.GA5058@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 10:03:48PM +0000, Wei Liu wrote:
[...]
> 
> You beat me to this. Was about to reply to your other email. :-)
> 
> It's also worth mentioning that DIV_ROUND_UP part is merely estimation,
> as you cannot possible know the maximum / miminum queue length of all
> other vifs (as they can be changed during runtime). In practice most
> users will stick with the default, but some advanced users might want to
> tune this value for individual vif (whether that's a good idea or not is
> another topic).
> 
> So, in order to convince myself this is safe, I also did some analysis
> of the impact of having a queue length other than the default value. If
> queue_len < XENVIF_QUEUE_LENGTH, you can queue fewer packets in the
> qdisc than the default and drain it faster than calculated, which is
> safe. On the other hand, if queue_len > XENVIF_QUEUE_LENGTH, you
> actually need more time than calculated. I'm in two minds here. The
> default value seems sensible to me, but I'm still a bit worried about
> the queue_len > XENVIF_QUEUE_LENGTH case.
> 
> An idea is to book-keep the maximum tx queue length among all vifs and
> use that to calculate the worst-case scenario.
> 

And unfortunately there doesn't seem to be a way to know when the tx
queue length is changed! So this approach won't work. :-(

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 20 22:50:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jan 2014 22:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5NfB-0000dI-No; Mon, 20 Jan 2014 22:49:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W5NfA-0000dD-K3
	for xen-devel@lists.xen.org; Mon, 20 Jan 2014 22:49:48 +0000
Received: from [193.109.254.147:55426] by server-13.bemta-14.messagelabs.com
	id 86/29-19374-B08ADD25; Mon, 20 Jan 2014 22:49:47 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390258185!12091322!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15068 invoked from network); 20 Jan 2014 22:49:46 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Jan 2014 22:49:46 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 20 Jan 2014 22:49:44 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,692,1384300800"; d="scan'208";a="635394511"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.184])
	by fldsmtpi03.verizon.com with ESMTP; 20 Jan 2014 22:49:44 +0000
Message-ID: <52DDA807.2050703@terremark.com>
Date: Mon, 20 Jan 2014 17:49:43 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

* Hardware:

SeaMicro SM15000-XN Quad Core Servers with Intel® Xeon® E3-1265Lv2 processors (“Ivy Bridge” microarchitecture)

1 server, 32G of memory.

* Software:

Fedora 17 (3.8.11-100.fc17.x86_64)
CentOS release 5.10 (2.6.18-371.el5xen)

* Guest operating systems:


on Fedora 17: All HVM:


vm:ubuntu-12.04.3-server-amd64
Linux ubuntu 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
vm:centos-5.9-x86_64
Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Tue Jan 8 18:35:04 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
vm:debian-7.2.0-amd64
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
vm:rhel-6.4-x86_64
Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
vm:centos-6.4-i386
Linux localhost.localdomain 2.6.32-358.el6.i686 #1 SMP Thu Feb 21 21:50:49 UTC 2013 i686 i686 i386 GNU/Linux
vm:centos-5.9-i386
Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Tue Jan 8 19:22:56 EST 2013 i686 i686 i386 GNU/Linux
vm:rhel-5.9-i386
Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Wed Nov 28 22:04:26 EST 2012 i686 i686 i386 GNU/Linux
vm:centos-6.4-x86_64
Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
vm:rhel-6.4-i386
Linux localhost.localdomain 2.6.32-358.el6.i686 #1 SMP Tue Jan 29 11:48:01 EST 2013 i686 i686 i386 GNU/Linux
vm:rhel-5.9-x86_64
Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Wed Nov 28 21:31:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux


windows-server-2008-ENT-x86_64


* Functionality tested:
xl
xl save
xl restore


* Comments:

4.4-rc2 does not build on CentOS release 5.10:

Need

* 407a3c0 (origin/staging) compat/memory: fix build with old gcc

and a newer upstream QEMU than qemu-xen-4.4.0-rc1

* b97307e (HEAD, tag: qemu-xen-4.4.0-rc1, dummy) xen_disk: mark ioreq as mapped before unmapping in error case
*   d84e452 Merge remote branch 'origin/stable-1.6' into xen-staging-master-9
|\
| * 62ecc3a (tag: v1.6.1) Update VERSION for 1.6.1 release


Mail threads that are related:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01477.html

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01520.html



[root@dcs-xen-54 ~]# xl save -p 6 /big/xl-save/centos-6.4-x86_64.0.save
Saving to /big/xl-save/centos-6.4-x86_64.0.save new xl format (info 0x0/0x0/560)
xc: Saving memory: iter 0 (last sent 0 skipped 0): 1044481/1044481  100%
[root@dcs-xen-54 ~]# xl unpause 6

has left domain #6 in a bad disk state (seen on the VGA console):

INFO: task jbd2/dm-0-8:386 blocked for more than 120 seconds.
INFO: task sadc:22139 blocked for more than 120 seconds.


However, "xl restore -V /big/xl-save/centos-6.4-x86_64.0.save" appears to work fine.

The second time, the unpause failed with:
[root@dcs-xen-54 ~]# xl unpause 17

WARNING: g.e. still in use!
WARNING: g.e. still in use!
WARNING: g.e. still in use!
pm_op(): platform_pm_resume+0x0/0x50 returns -19
PM: Device i8042 failed to resume: error -19
INFO: task sadc:22164 blocked for more than 120 seconds.
"echo 0 >..."
INFO: task sadc:22164 blocked for more than 120 seconds.


[root@dcs-xen-54 ~]# xl des 17
[root@dcs-xen-54 ~]# xl restore -V /big/xl-save/centos-6.4-x86_64.0.save


Not sure if this is expected or not.

    -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 00:25:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 00:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5P8w-0004Z5-Bf; Tue, 21 Jan 2014 00:24:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5P8u-0004Z0-MS
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 00:24:36 +0000
Received: from [85.158.139.211:41821] by server-3.bemta-5.messagelabs.com id
	D4/3A-04773-34EBDD25; Tue, 21 Jan 2014 00:24:35 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390263873!10905686!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13719 invoked from network); 21 Jan 2014 00:24:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 00:24:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,693,1384300800"; d="scan'208";a="92642001"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 00:24:32 +0000
Received: from [10.68.14.33] (10.68.14.33) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 20 Jan 2014 19:24:31 -0500
Message-ID: <52DDBE3D.4060406@citrix.com>
Date: Tue, 21 Jan 2014 00:24:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-9-git-send-email-zoltan.kiss@citrix.com>
	<20140120220348.GA5058@zion.uk.xensource.com>
In-Reply-To: <20140120220348.GA5058@zion.uk.xensource.com>
X-Originating-IP: [10.68.14.33]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	ian.campbell@citrix.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 8/9] xen-netback: Timeout
	packets in RX path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 22:03, Wei Liu wrote:
> On Mon, Jan 20, 2014 at 09:24:28PM +0000, Zoltan Kiss wrote:
>> @@ -557,12 +577,25 @@ void xenvif_disconnect(struct xenvif *vif)
>>   void xenvif_free(struct xenvif *vif)
>>   {
>>   	int i, unmap_timeout = 0;
>> +	/* Here we want to avoid timeout messages if an skb can be legitimately
>> +	 * stuck somewhere else. Realistically this could be another vif's
>> +	 * internal or QDisc queue. That other vif also has this
>> +	 * rx_drain_timeout_msecs timeout, but the timer only ditches the
>> +	 * internal queue. After that, the QDisc queue can put, in the worst
>> +	 * case, XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that other
>> +	 * vif's internal queue, so we need several rounds of such timeouts
>> +	 * until we can be sure that no other vif should have skbs from us.
>> +	 * We are not sending more skbs, so newly stuck packets are not
>> +	 * interesting to us here.
>> +	 */
> You beat me to this. Was about to reply to your other email. :-)
>
> It's also worth mentioning that the DIV_ROUND_UP part is merely an
> estimation, as you cannot possibly know the maximum / minimum queue
> length of all other vifs (they can be changed at runtime). In practice
> most users will stick with the default, but some advanced users might
> want to tune this value for an individual vif (whether that's a good
> idea or not is another topic).
>
> So, in order to convince myself this is safe, I also did some analysis
> of the impact of having a queue length other than the default value. If
> queue_len < XENVIF_QUEUE_LENGTH, you can queue fewer packets in the
> qdisc than the default and drain it faster than calculated, which is
> safe. On the other hand, if queue_len > XENVIF_QUEUE_LENGTH, you
> actually need more time than calculated. I'm in two minds here. The
> default value seems sensible to me, but I'm still a bit worried about
> the queue_len > XENVIF_QUEUE_LENGTH case.
>
> An idea is to book-keep the maximum tx queue length among all vifs and
> use that to calculate the worst-case scenario.
I don't think it needs to be that precise. This is just a best-effort
estimation; if someone changes the vif queue length and sees this message
because of that, nothing drastic will happen. It is just a rate-limited
warning message. Well, it is marked as an error, because it is a serious
condition.
Also, the odds of seeing this message unnecessarily are quite low.
With the default settings (256 slots, max 17 per skb, queue length 32,
10 sec queue drain timeout) this delay is 20 seconds. You can raise the
queue length to 64 before getting a warning (see netif_napi_add), so it
would go up to 40 seconds, but anyway, if your vif is sitting on a packet
for more than 20 seconds, you deserve this message :)

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 01:02:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 01:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5PjG-0003KQ-SC; Tue, 21 Jan 2014 01:02:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1W5PjB-0002Mx-42
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 01:02:09 +0000
Received: from [85.158.137.68:42733] by server-9.bemta-3.messagelabs.com id
	0E/3C-13104-C07CDD25; Tue, 21 Jan 2014 01:02:04 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390266122!10252785!1
X-Originating-IP: [209.85.214.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19440 invoked from network); 21 Jan 2014 01:02:03 -0000
Received: from mail-ob0-f169.google.com (HELO mail-ob0-f169.google.com)
	(209.85.214.169)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 01:02:03 -0000
Received: by mail-ob0-f169.google.com with SMTP id wo20so3747217obc.28
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 17:02:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=RJrvXRlimU1zKlvOfAg5vRJjB7jFfF2ihDhbY0u9NZ0=;
	b=QItfal9qYgiD4Y05ntGbK59CSq6rQ+omVW/1j0/nnzYGq27Z9vbMaoCn+L8J6a3opF
	5ClH9Q6hUsKh+1dVrX7u8HNDPJu3QHoTBmvpzY8idr3kh4aJNFfxvTg5zlQ4AQ+RtrZ2
	VE2MCPkv3z+n8LIEWhEwSPEekkWmqQRuaSMs7JrVABS9QBDRoAevCdC65dqEsHxbOcLH
	ITey40qHdvG9CaqC81Nc0KDEdYxXRnXKyC5u6BfAIVx6ZHH49KToZb/XBmxkvm4gv/pW
	8L0Cc0J71Nzaj+i9Kk+XALH2bHCVKnYmOjC6XPQk8lKuyYMbxrn3rJfOlS2rJYQzLobk
	QvOw==
X-Gm-Message-State: ALoCoQlhL2uma5zBjQi4K9tI9+8Cz9/sFEw94QnBbtVH6jq4bJM1VJjoV77/WIrPBJ4OJE8rwTTn
MIME-Version: 1.0
X-Received: by 10.182.4.232 with SMTP id n8mr17962302obn.34.1390266121671;
	Mon, 20 Jan 2014 17:02:01 -0800 (PST)
Received: by 10.182.120.10 with HTTP; Mon, 20 Jan 2014 17:02:01 -0800 (PST)
In-Reply-To: <1389997126.16457.339.camel@Solace>
References: <1387044943-5325-1-git-send-email-jtweaver@hawaii.edu>
	<1387334265.3880.87.camel@Solace>
	<CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
	<1389997126.16457.339.camel@Solace>
Date: Mon, 20 Jan 2014 15:02:01 -1000
Message-ID: <CA+o8iRXjC83a66t4LpzgeiNbT8DXXijSwg_MCbP3kNSTf88AuQ@mail.gmail.com>
From: Justin Weaver <jtweaver@hawaii.edu>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	xen-devel@lists.xen.org, Henri Casanova <henric@hawaii.edu>
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 12:18 PM, Dario Faggioli
<dario.faggioli@citrix.com> wrote:
> Allow me to comment only on the 'only one runqueue on multiple socket
> issue' thing. I honestly think that that one is a bug, so you shouldn't
> base your work on that behavior. To try to facilitate you doing this, I'll
> try to put together a patch for fixing such issue early next week. I'm
> not sure whether it will be accepted in Xen right now or when 4.5
> development cycle opens, but at least you can apply that and work on top
> of it.
>
> Would that make sense and be of any help to you?

Yes, that's what I'll do (ignore the single run queue behavior). About
the bug, do you mind if I try to fix it? You probably have a lot of
other things you're working on.

Thanks,
Justin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 01:11:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 01:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Ps3-00028D-02; Tue, 21 Jan 2014 01:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5Ps1-000288-00
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 01:11:13 +0000
Received: from [85.158.137.68:31806] by server-6.bemta-3.messagelabs.com id
	AC/6B-04868-039CDD25; Tue, 21 Jan 2014 01:11:12 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390266669!10303297!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16200 invoked from network); 21 Jan 2014 01:11:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 01:11:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,693,1384300800"; d="scan'208,217";a="94719932"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 01:11:08 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 20 Jan 2014 20:11:08 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	02:11:06 +0100
Message-ID: <52DDC929.3060003@citrix.com>
Date: Tue, 21 Jan 2014 01:11:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Philip Wernersbach <philip.wernersbach@gmail.com>,
	<xen-devel@lists.xen.org>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
In-Reply-To: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9090455972065760111=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9090455972065760111==
Content-Type: multipart/alternative;
	boundary="------------070500020506090502020100"

--------------070500020506090502020100
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 20/01/2014 22:08, Philip Wernersbach wrote:
> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
>
> Some machines, such as recent IBM servers, only allow the OS to get the
> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
> cannot get the RSDP on these machines, leading to all sorts of
> functionality reductions.
>
> Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index b49256d..8c307c9 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
>              safe_strcat(dom0_cmdline, " acpi=");
>              safe_strcat(dom0_cmdline, acpi_param);
>          }

Please read the CODING_STYLE document in the SCM root.

In particular, there should be a newline here, and lines should (where
possible) be wrapped at ~72-80 chars.

> +        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )
> +        {
> +            char acpi_os_root_pointer_string[MAX_RSDP_ADDRESS_STRING];

char rp_str[sizeof(unsigned long)*2 + 1] would suffice with a much
shorter name, and do without an extra define.

> +            int acpi_os_root_pointer_string_size =

This return value from snprintf is bogus when used in the if() condition
later; it is unconditionally within the provided bounds.  Furthermore,
snprintf() is not going to fail for a single hex long into an
appropriately sized buffer.

Moreover, to correctly check for a failure of acpi_os_get_root_pointer(),
you would be much better off with something such as:

acpi_physical_address rsdp = acpi_os_get_root_pointer();

if ( rsdp )
{
  ... append
}
else
{
  ... error
}

> snprintf(acpi_os_root_pointer_string,
> +                MAX_RSDP_ADDRESS_STRING, "%08lX",
> +                acpi_os_get_root_pointer());
> +
> +            if ( (acpi_os_root_pointer_string_size > 0) &&
> +                 (acpi_os_root_pointer_string_size <= MAX_RSDP_ADDRESS_STRING) )
> +            {
> +                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
> +                safe_strcat(dom0_cmdline, acpi_os_root_pointer_string);
> +            }
> +            else
> +            {
> +                printk(XENLOG_WARNING "RSDP Address String Size Check Failed!\n");
> +                printk(XENLOG_WARNING "Not passing acpi_rsdp to Dom0!\n");
> +                printk(XENLOG_WARNING "(Expected size 1-%d, got %d)\n",
> +                       MAX_RSDP_ADDRESS_STRING, acpi_os_root_pointer_string_size);

And talking of errors, this is overkill.  A simple:

printk(XENLOG_WARNING "Failed to get acpi_rsdp to pass to dom0\n");

should suffice.

~Andrew

> +            }
> +        }
>
>          cmdline = dom0_cmdline;
>      }
> diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
> index 8c5697e..a7c3905 100644
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -750,6 +750,7 @@ struct start_info {
>                                  /*  SIF_MOD_START_PFN set in flags).      */
>      unsigned long mod_len;      /* Size (bytes) of pre-loaded module.     */
>  #define MAX_GUEST_CMDLINE 1024
> +#define MAX_RSDP_ADDRESS_STRING 21
>      int8_t cmd_line[MAX_GUEST_CMDLINE];
>      /* The pfn range here covers both page table and p->m table frames.   */
>      unsigned long first_p2m_pfn;/* 1st pfn forming initial P->M table.    */
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------070500020506090502020100--


--===============9090455972065760111==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9090455972065760111==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 01:19:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 01:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Q0H-0002eM-78; Tue, 21 Jan 2014 01:19:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jun.nakajima@intel.com>) id 1W5Q0F-0002eF-Ra
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 01:19:44 +0000
Received: from [85.158.143.35:19999] by server-2.bemta-4.messagelabs.com id
	AC/1D-11386-F2BCDD25; Tue, 21 Jan 2014 01:19:43 +0000
X-Env-Sender: jun.nakajima@intel.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390267181!12928547!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6064 invoked from network); 21 Jan 2014 01:19:42 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 01:19:42 -0000
Received: by mail-lb0-f178.google.com with SMTP id u14so5513387lbd.9
	for <xen-devel@lists.xen.org>; Mon, 20 Jan 2014 17:19:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=CKFzOQwXvAUUIeP66mEGEEZsFCUNliuweGfPjg/aVjc=;
	b=lfiTTGM6Qb7RbNUJpJGeMNINcIuFXnwvRp0i4w7H4tcn2YK19vazQ7kA4vuvUDLJUr
	1PrQlntDKPo+9GjDnLgxrjq7UEJx16+6bj0QnyccqE4mvQsMr0SDCxnGM3T/HUNGPT2S
	QmSL/KkI+ZAPxYr6k6+kpN7IC5bVsQga+EEk1FHS0X7jimkTlJGP2jc/Jh6jiXj8yXpZ
	3lrmoiIsDLASgIQrSTO5mmt63JuIOP966rbhnysn0qD53gX1lQ/StY+Tr3bThD/ykvHp
	cXUIphEixaWOWTDj72Rf8MgzZLKR86sPWmZqx2fByliyGu9SZy9JLEMCtyXgRcfNxeSm
	rhxg==
X-Gm-Message-State: ALoCoQmDo3VPjfPNtBVe8N9aegZEBP1b1aXps6X9ELU52wTrUojotALVNYRnfIiu18aHDknezJhv
MIME-Version: 1.0
X-Received: by 10.152.45.8 with SMTP id i8mr14200879lam.12.1390267181612; Mon,
	20 Jan 2014 17:19:41 -0800 (PST)
Received: by 10.114.21.137 with HTTP; Mon, 20 Jan 2014 17:19:41 -0800 (PST)
In-Reply-To: <52DCE80D0200007800114E78@nat28.tlf.novell.com>
References: <1389940742-2275-1-git-send-email-yang.z.zhang@intel.com>
	<52D910BE02000078001147DC@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BFB93@SHSMSX104.ccr.corp.intel.com>
	<52DCE80D0200007800114E78@nat28.tlf.novell.com>
Date: Mon, 20 Jan 2014 17:19:41 -0800
Message-ID: <CAL54oT0fLq_dWwHs7fzVeZa9bCyFmcrLF7n+ZNim7=oiBN6DVQ@mail.gmail.com>
From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"chegger@amazon.de" <chegger@amazon.de>, Eddie Dong <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit
	during IO emulation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 12:10 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 18.01.14 at 15:32, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Jan Beulich wrote on 2014-01-17:
>>>>>> On 17.01.14 at 07:39, Yang Zhang <yang.z.zhang@intel.com> wrote:
>>>> Sometimes L0 needs to decode an L2 instruction to handle an IO
>>>> access directly, and may get X86EMUL_RETRY while handling that IO
>>>> request. If a virtual vmexit is pending at the same time (for
>>>> example, an interrupt to inject into L1), the hypervisor will
>>>> switch the VCPU context from L2 to L1. We are then in L1's context,
>>>> but the earlier X86EMUL_RETRY means the hypervisor will retry the
>>>> IO request later, and that retry will now, wrongly, happen in L1's
>>>> context.
>>>> The fix is to allow no virtual vmexit/vmentry while an IO request
>>>> is pending.
>>>>
>>>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>>>> ---
>>>>  xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++
>>>
>>> Didn't we agree earlier on to do this in common code?
>>>
>>
>> I think you agreed with this fix. Let me double-check: do you mean
>> moving the check to nhvm_interrupt_block() as Christoph suggested, or
>> to some other place in common code? Christoph's suggestion doesn't
>> solve the issue, as I said in the previous thread. Also, since SVM and
>> VMX handle the vmswitch completely independently, there is no suitable
>> place in common code for a check covering both.
>
> Okay, fine with me then as is. Awaiting a VMX maintainer ack then...

Acked-by: Jun Nakajima <jun.nakajima@intel.com>

>
> Jan
>


-- 
Jun
Intel Open Source Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
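[Editorial note: the rule in the quoted patch description — "no virtual vmexit/vmentry while an IO request is pending" — can be condensed into a small guard. This is an illustrative sketch only; `struct vcpu_sketch`, `nested_switch_allowed()`, and the `io_state` field are hypothetical names, not the actual vvmx.c code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch (not the real Xen API) of the guard described
 * above: an emulation that returned X86EMUL_RETRY must be resumed in
 * the same (L2) context, so any virtual vmswitch is deferred while an
 * emulated IO request is still in flight. */
enum io_state { IO_NONE, IO_PENDING };

struct vcpu_sketch {
    enum io_state io_state;        /* emulated IO request in flight? */
    bool virtual_vmexit_pending;   /* e.g. an interrupt to inject to L1 */
};

static bool nested_switch_allowed(const struct vcpu_sketch *v)
{
    /* Defer the L2 <-> L1 switch until the pending IO retry has
     * completed in the current context. */
    return v->io_state == IO_NONE;
}
```

With such a guard the pending virtual vmexit is simply re-evaluated on a later exit, after the IO emulation has finished in L2's context.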

From xen-devel-bounces@lists.xen.org Tue Jan 21 08:50:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 08:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5X1w-0003CN-LQ; Tue, 21 Jan 2014 08:49:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W5X1u-0003CI-Kn
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 08:49:54 +0000
Received: from [85.158.139.211:11113] by server-11.bemta-5.messagelabs.com id
	EA/FA-23268-1B43ED25; Tue, 21 Jan 2014 08:49:53 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390294192!10778600!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25788 invoked from network); 21 Jan 2014 08:49:52 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-11.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 08:49:52 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 21 Jan 2014 00:49:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,695,1384329600"; d="scan'208";a="468232127"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga002.fm.intel.com with ESMTP; 21 Jan 2014 00:49:50 -0800
Received: from fmsmsx155.amr.corp.intel.com (10.18.116.71) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 21 Jan 2014 00:49:50 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX155.amr.corp.intel.com (10.18.116.71) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 21 Jan 2014 00:49:50 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Tue, 21 Jan 2014 16:49:48 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, Jan Beulich <JBeulich@suse.com>,
	"Dong, Eddie" <eddie.dong@intel.com>
Thread-Topic: [PATCH 1/3] Nested VMX: update nested paging mode when
	vmswitch is in progress
Thread-Index: AQHO+9kU0FlGvzaCh0WxrQUf7BQwfJpZu8qw//+ZSoCAAw7kcIAG236QgCBqm9D//8zygIAAhlgAgAkC74CAAcZoIA==
Date: Tue, 21 Jan 2014 08:49:48 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C1688@SHSMSX104.ccr.corp.intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
	<52D4F57F020000780011362C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BC3CB@SHSMSX104.ccr.corp.intel.com>
	<52DCE753.1040507@amazon.de>
In-Reply-To: <52DCE753.1040507@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-01-20:
> On 14.01.14 08:38, Zhang, Yang Z wrote:
>> Jan Beulich wrote on 2014-01-14:
>>>>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com>
> wrote:
>>>> Zhang, Yang Z wrote on 2013-12-24:
>>>> 
>>>> Any comments ?
>>> 
>>> Considering Christoph's comments and reservations, if you can't
>>> deal with this alone, I think you should work with the AMD people to
>>> eliminate or address his concerns.
>>> 
>> 
>> Yes. But what puzzles me is that Christoph said nested SVM works
>> well without my patch, which I cannot understand: according to my
>> analysis in the previous thread, the AMD side is buggy too. If they
>> really have solved the issue on their side, I wonder how they fixed
>> it. Perhaps I can use the same solution on the VMX side without
>> touching the common code.
>> 
>> Christoph, can you help check this? Thanks.
> 
> The fix I mentioned solves the vmswitch problem on AMD side.

But even with your fix, the current code is still buggy on the AMD side; consider this scenario:
virtual vmentry:
    Expected result: the nested paging mode is updated.
    Current result in SVM:
        !vcpu_in_guestmode and vmswitch_in_progress: L1's paging mode is updated. Wrong.

I cannot understand why you said it is working.

> The page mode problem you discovered is a separate issue for both SVM
> and VMX that needs to be addressed.
> 
> Christoph


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
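[Editorial note: Yang's scenario above reduces to a small decision: which paging mode to refresh. The names below (`vcpu_in_guestmode`, `vmswitch_in_progress`, `paging_owner`) mirror the discussion but are hypothetical, not the actual SVM/VMX code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the disputed condition: during a virtual
 * vmentry the vCPU is not yet in guest mode, but a vmswitch is in
 * progress, so the *nested* paging mode is the one that should be
 * refreshed. Updating L1's paging mode in that state is the bug
 * described in the scenario above. */
enum paging_owner { PAGING_L1, PAGING_NESTED };

static enum paging_owner
paging_mode_to_update(bool vcpu_in_guestmode, bool vmswitch_in_progress)
{
    if (vcpu_in_guestmode || vmswitch_in_progress)
        return PAGING_NESTED;  /* L2, or a virtual vmentry/vmexit in flight */
    return PAGING_L1;          /* ordinary L1 operation */
}
```

Under this sketch, the `!vcpu_in_guestmode && vmswitch_in_progress` case from the scenario selects the nested mode, which is the behavior Yang argues the SVM path currently gets wrong.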

From xen-devel-bounces@lists.xen.org Tue Jan 21 08:53:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 08:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5X4y-0003IC-9a; Tue, 21 Jan 2014 08:53:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5X4x-0003I3-8x
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 08:53:03 +0000
Received: from [85.158.139.211:5339] by server-5.bemta-5.messagelabs.com id
	CD/34-14928-E653ED25; Tue, 21 Jan 2014 08:53:02 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390294371!10779470!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjUyMTkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18217 invoked from network); 21 Jan 2014 08:52:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 08:52:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; 
	d="asc'?scan'208";a="94793208"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 08:52:50 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 03:52:49 -0500
Message-ID: <1390294368.23576.62.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Tue, 21 Jan 2014 09:52:48 +0100
In-Reply-To: <CA+o8iRXjC83a66t4LpzgeiNbT8DXXijSwg_MCbP3kNSTf88AuQ@mail.gmail.com>
References: <1387044943-5325-1-git-send-email-jtweaver@hawaii.edu>
	<1387334265.3880.87.camel@Solace>
	<CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
	<1389997126.16457.339.camel@Solace>
	<CA+o8iRXjC83a66t4LpzgeiNbT8DXXijSwg_MCbP3kNSTf88AuQ@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	Henri Casanova <henric@hawaii.edu>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1395610953972966232=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1395610953972966232==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-JozRekhpL5n0VSeavhcf"

--=-JozRekhpL5n0VSeavhcf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-01-20 at 15:02 -1000, Justin Weaver wrote:
> On Fri, Jan 17, 2014 at 12:18 PM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > Allow me to comment only on the 'only one runqueue on multiple socket
> > issue' thing. I honestly think that one is a bug, so you shouldn't
> > base your work on that behavior. To try to facilitate this, I'll
> > try to put together a patch fixing the issue early next week. I'm
> > not sure whether it will be accepted in Xen right now or only when
> > the 4.5 development cycle opens, but at least you can apply it and
> > work on top of it.
> >
> > Would that make sense and be of any help to you?
> 
> Yes, that's what I'll do (ignore the single run queue behavior).
>
Ok, cool... Have you seen this mail:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01710.html

> About
> the bug, do you mind if I try to fix it? You probably have a lot of
> other things you're working on.
> 
Please, be my very very very welcome guest! :-D

Feel free to try, and do not hesitate to ask if you think you need help.
As you say, I'm busy with other stuff right now, so I can't deal with it
right away myself, but I certainly can find 5 minutes every now and then
to (try to) advise. :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-JozRekhpL5n0VSeavhcf
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeNWAACgkQk4XaBE3IOsS9zQCfXaOH3P/JTPaQk7pE1Iav8RuE
V50AniXvSNEXzJZFqwoZ+puVTK/68cRO
=PRh/
-----END PGP SIGNATURE-----

--=-JozRekhpL5n0VSeavhcf--


--===============1395610953972966232==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1395610953972966232==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 08:53:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 08:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5X4y-0003IC-9a; Tue, 21 Jan 2014 08:53:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5X4x-0003I3-8x
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 08:53:03 +0000
Received: from [85.158.139.211:5339] by server-5.bemta-5.messagelabs.com id
	CD/34-14928-E653ED25; Tue, 21 Jan 2014 08:53:02 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390294371!10779470!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjUyMTkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18217 invoked from network); 21 Jan 2014 08:52:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 08:52:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; 
	d="asc'?scan'208";a="94793208"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 08:52:50 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 03:52:49 -0500
Message-ID: <1390294368.23576.62.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Tue, 21 Jan 2014 09:52:48 +0100
In-Reply-To: <CA+o8iRXjC83a66t4LpzgeiNbT8DXXijSwg_MCbP3kNSTf88AuQ@mail.gmail.com>
References: <1387044943-5325-1-git-send-email-jtweaver@hawaii.edu>
	<1387334265.3880.87.camel@Solace>
	<CA+o8iRXZgH37gM1i5Z5+wkU1dpHvDRML1TiwgfRuZRCcuKsadg@mail.gmail.com>
	<1389997126.16457.339.camel@Solace>
	<CA+o8iRXjC83a66t4LpzgeiNbT8DXXijSwg_MCbP3kNSTf88AuQ@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Marcus.Granado@eu.citrix.com,
	Henri Casanova <henric@hawaii.edu>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: sched: introduce hard and soft
 affinity in credit 2 scheduler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1395610953972966232=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1395610953972966232==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-JozRekhpL5n0VSeavhcf"

--=-JozRekhpL5n0VSeavhcf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-01-20 at 15:02 -1000, Justin Weaver wrote:
> On Fri, Jan 17, 2014 at 12:18 PM, Dario Faggioli
> <dario.faggioli@citrix.com> wrote:
> > Allow me to comment only on the 'only one runqueue on multiple socket
> > issue' thing. I honestly think that that one is a bug, so you shouldn't
> > base your work on that behavior. To try to facilitate you doing this, I'll
> > try to put together a patch for fixing such issue early next week. I'm
> > not sure whether it will be accepted in Xen right now or when the 4.5
> > development cycle opens, but at least you can apply that and work on top
> > of it.
> >
> > Would that make sense and be of any help to you?
>
> Yes, that's what I'll do (ignore the single run queue behavior).
>
Ok, cool... Have you seen this mail:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01710.html

> About
> the bug, do you mind if I try to fix it? You probably have a lot of
> other things you're working on.
>
Please, be my very very very welcome guest! :-D

Feel free to try, and do not hesitate to ask if you think you need help.
As you say, I'm busy with other stuff right now, so I can't deal with it
right away myself, but I certainly can find 5 minutes every now and then
to (try to) advise. :-)

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-JozRekhpL5n0VSeavhcf
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeNWAACgkQk4XaBE3IOsS9zQCfXaOH3P/JTPaQk7pE1Iav8RuE
V50AniXvSNEXzJZFqwoZ+puVTK/68cRO
=PRh/
-----END PGP SIGNATURE-----

--=-JozRekhpL5n0VSeavhcf--


--===============1395610953972966232==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1395610953972966232==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEu-0003qf-3K; Tue, 21 Jan 2014 09:03:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEt-0003qY-FW
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:19 +0000
Received: from [193.109.254.147:57737] by server-13.bemta-14.messagelabs.com
	id B9/9A-19374-6D73ED25; Tue, 21 Jan 2014 09:03:18 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28684 invoked from network); 21 Jan 2014 09:03:16 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439851"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xn004370;
	Tue, 21 Jan 2014 17:03:07 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014515-1244791 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:04 +0800
Message-Id: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 00/13 V6] Remus/Libxl: Network buffering support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds support for network buffering in the Remus
codebase in libxl.  NB: This series still does not contain the
bug fix related to usleep calls in libxl__domain_suspend.
I will send it out later as a separate series.

The patches were written by Shriram, so all the credit is his.
We (Intel & Fujitsu) will handle the remaining community/revision work:
On 12/26/2013 12:23 AM, Shriram Rajagopalan wrote:
> Sorry, I have been on and off these patch series
> lately.  I have to submit my PhD thesis soon and
> devoting my free cycles towards this code.
> 
> It would be great if you folks (Wen & Eddie) can help
> out with the patch series. Other answers inline.
> 

Changes in V6:
  Addressed Ian Jackson's comments on the V5 series.
  [PATCH 2/4 V5] has been split into smaller patches by functionality.

  [PATCH 4/4 V5] --> [PATCH 13/13]: network buffering is now enabled
  by default.

Changes in V5:

Merged the hotplug script patch (2/5) and the hotplug script
setup/teardown patch (3/5) into a single patch.

Changes in V4:

[1/5] Remove check for libnl command line utils in autoconf checks

[2/5] minor nits

[3/5] define LIBXL_HAVE_REMUS_NETBUF in libxl.h

[4/5] clean ups. Make the usleep in checkpoint callback asynchronous

[5/5] minor nits

Changes in V3:
[1/5] Fix redundant checks in configure scripts
      (based on Ian Campbell's suggestions)

[2/5] Introduce locking in the script, during IFB setup.
      Add xenstore paths used by netbuf scripts
      to xenstore-paths.markdown

[3/5] Hotplug scripts setup/teardown invocations are now asynchronous
      following IanJ's feedback.  However, the invocations are still
      sequential. 

[5/5] Allow per-domain specification of netbuffer scripts in the
      xl remus command.

And minor nits throughout the series based on feedback from
the last version

Changes in V2:
[1/5] Configure script will automatically enable/disable network
      buffer support depending on the availability of the appropriate
      libnl3 version. [If libnl3 is unavailable, a warning message will be
      printed to let the user know that the feature has been disabled.]

      use macros from pkg.m4 instead of pkg-config commands
      removed redundant checks for libnl3 libraries.

[3,4/5] - Minor nits.

Version 1:

[1/5] Changes to autoconf scripts to check for libnl3. Add linker flags
      to libxl Makefile.

[2/5] External script to setup/teardown network buffering using libnl3's
      CLI. This script will be invoked by libxl before starting Remus.
      The script's main job is to bring up an IFB device with plug qdisc
      attached to it.  It then re-routes egress traffic from the guest's
      vif to the IFB device.

[3/5] Libxl code to invoke the external setup script, followed by netlink
      related setup to obtain a handle on the output buffers attached
      to each vif.

[4/5] Libxl interaction with network buffer module in the kernel via
      libnl3 API.

[5/5] xl cmdline switch to explicitly enable network buffering when
      starting remus.


  A few things to note (by Shriram):

    a) Based on previous email discussions, the setup/teardown task has
    been moved to a hotplug style shell script which can be customized as
    desired, instead of implementing it as C code inside libxl.

    b) Libnl3 is not available on NetBSD, nor on CentOS (Linux), so I
    have made network buffering support an optional feature that can be
    disabled if desired.

    c) Since NetBSD does not have libnl3, I have put the setup script
    under the tools/hotplug/Linux folder.

thanks
Lai


Shriram Rajagopalan (13):
  remus: add libnl3 dependency to autoconf scripts
  remus: implement network buffering hotplug scripts
  remus: update libxl_domain_remus_info
  remus: introduce a new structure libxl__remus_state
  remus: introduce a function to check whether network buffering is
    enabled
  remus: implement the API to setup network buffering
  remus: implement the API to buffer/release packages
  remus: implement the API to teardown network buffering
  remus: use the API to setup network buffering
  remus: use the API to teardown network buffering
  remus: rename remus_failover_cb() to remus_replication_failure_cb()
  remus: control network buffering in remus callbacks
  remus - network buffering cmdline switch

 README                                 |   4 +
 config/Tools.mk.in                     |   3 +
 docs/man/xl.conf.pod.5                 |   6 +
 docs/man/xl.pod.1                      |  11 +-
 docs/misc/xenstore-paths.markdown      |   4 +
 tools/configure.ac                     |  15 +
 tools/hotplug/Linux/Makefile           |   1 +
 tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++
 tools/libxl/Makefile                   |  11 +
 tools/libxl/libxl.c                    |  48 ++-
 tools/libxl/libxl.h                    |  13 +
 tools/libxl/libxl_dom.c                | 118 +++++--
 tools/libxl/libxl_internal.h           |  54 +++-
 tools/libxl/libxl_netbuffer.c          | 551 +++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |  56 ++++
 tools/libxl/libxl_remus.c              |  64 ++++
 tools/libxl/libxl_types.idl            |   2 +
 tools/libxl/xl.c                       |   4 +
 tools/libxl/xl.h                       |   1 +
 tools/libxl/xl_cmdimpl.c               |  28 +-
 tools/libxl/xl_cmdtable.c              |   3 +
 tools/remus/README                     |   6 +
 22 files changed, 1145 insertions(+), 41 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c
 create mode 100644 tools/libxl/libxl_remus.c

-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEs-0003qS-O7; Tue, 21 Jan 2014 09:03:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEr-0003qN-1Z
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:17 +0000
Received: from [193.109.254.147:64259] by server-11.bemta-14.messagelabs.com
	id 2F/14-20576-4D73ED25; Tue, 21 Jan 2014 09:03:16 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28487 invoked from network); 21 Jan 2014 09:03:13 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439850"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:27 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938B7004372;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014539-1244793 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:06 +0800
Message-Id: <1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
	hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch introduces remus-netbuf-setup hotplug script responsible for
setting up and tearing down the necessary infrastructure required for
network output buffering in Remus.  This script is intended to be invoked
by libxl for each guest interface, when starting or stopping Remus.

Apart from returning a success/failure indication via the usual hotplug
entries in xenstore, this script also writes to xenstore the name of
the IFB device used to control the vif's network output.

The script relies on libnl3's command line utilities to perform the
various setup/teardown functions. It is confined to Linux platforms,
since NetBSD does not seem to have libnl3.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/hotplug/Linux/Makefile           |   1 +
 tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup

diff --git a/tools/hotplug/Linux/Makefile b/tools/hotplug/Linux/Makefile
index 47655f6..6139c1f 100644
--- a/tools/hotplug/Linux/Makefile
+++ b/tools/hotplug/Linux/Makefile
@@ -16,6 +16,7 @@ XEN_SCRIPTS += network-nat vif-nat
 XEN_SCRIPTS += vif-openvswitch
 XEN_SCRIPTS += vif2
 XEN_SCRIPTS += vif-setup
+XEN_SCRIPTS-$(CONFIG_REMUS_NETBUF) += remus-netbuf-setup
 XEN_SCRIPTS += block
 XEN_SCRIPTS += block-enbd block-nbd
 XEN_SCRIPTS-$(CONFIG_BLKTAP1) += blktap
diff --git a/tools/hotplug/Linux/remus-netbuf-setup b/tools/hotplug/Linux/remus-netbuf-setup
new file mode 100644
index 0000000..3467db2
--- /dev/null
+++ b/tools/hotplug/Linux/remus-netbuf-setup
@@ -0,0 +1,183 @@
+#!/bin/bash
+#============================================================================
+# ${XEN_SCRIPT_DIR}/remus-netbuf-setup
+#
+# Script for attaching a network buffer to the specified vif (in any mode).
+# The hotplugging system will call this script when starting remus via libxl
+# API, libxl_domain_remus_start.
+#
+# Usage:
+# remus-netbuf-setup (setup|teardown)
+#
+# Environment vars:
+# vifname     vif interface name (required).
+# XENBUS_PATH path in Xenstore, where the IFB device details will be stored
+#                      or read from (required).
+#             (libxl passes /libxl/<domid>/remus/netbuf/<devid>)
+# IFB         ifb interface to be cleaned up (required). [for teardown op only]
+
+# Written to the store: (setup operation)
+# XENBUS_PATH/ifb=<ifbdevName> the IFB device serving
+#  as the intermediate buffer through which the interface's network output
+#  can be controlled.
+#
+# To install a network buffer on a guest vif (vif1.0) using ifb (ifb0)
+# we need to do the following
+#
+#  ip link set dev ifb0 up
+#  tc qdisc add dev vif1.0 ingress
+#  tc filter add dev vif1.0 parent ffff: proto ip \
+#    prio 10 u32 match u32 0 0 action mirred egress redirect dev ifb0
+#  nl-qdisc-add --dev=ifb0 --parent root plug
+#  nl-qdisc-add --dev=ifb0 --parent root --update plug --limit=10000000
+#                                                (10MB limit on buffer)
+#
+# So the order of operations when installing a network buffer on vif1.0 is:
+# 1. find a free ifb and bring up the device
+# 2. redirect traffic from vif1.0 to ifb:
+#   2.1 add ingress qdisc to vif1.0 (to capture outgoing packets from guest)
+#   2.2 use tc filter command with actions mirred egress + redirect
+# 3. install plug_qdisc on ifb device, with which we can buffer/release
+#    guest's network output from vif1.0
+#
+#
+
+#============================================================================
+
+# Unlike other vif scripts, vif-common is not needed here, as it executes
+# vif-specific setup code such as renaming.
+dir=$(dirname "$0")
+. "$dir/xen-hotplug-common.sh"
+
+findCommand "$@"
+
+if [ "$command" != "setup" -a  "$command" != "teardown" ]
+then
+  echo "Invalid command: $command"
+  log err "Invalid command: $command"
+  exit 1
+fi
+
+evalVariables "$@"
+
+: ${vifname:?}
+: ${XENBUS_PATH:?}
+
+check_libnl_tools() {
+    if ! command -v nl-qdisc-list > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-list tool"
+    fi
+    if ! command -v nl-qdisc-add > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-add tool"
+    fi
+    if ! command -v nl-qdisc-delete > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-delete tool"
+    fi
+}
+
+# We only check for modules. We don't load them.
+# User/Admin is supposed to load ifb during boot time,
+# ensuring that there are enough free ifbs in the system.
+# Other modules will be loaded automatically by tc commands.
+check_modules() {
+    for m in ifb sch_plug sch_ingress act_mirred cls_u32
+    do
+        if ! modinfo $m > /dev/null 2>&1; then
+            fatal "Unable to find $m kernel module"
+        fi
+    done
+}
+
+setup_ifb() {
+
+    for ifb in `ifconfig -a -s|egrep ^ifb|cut -d ' ' -f1`
+    do
+        local installed=`nl-qdisc-list -d $ifb`
+        [ -n "$installed" ] && continue
+        IFB="$ifb"
+        break
+    done
+
+    if [ -z "$IFB" ]
+    then
+        fatal "Unable to find a free IFB device for $vifname"
+    fi
+
+    do_or_die ip link set dev "$IFB" up
+}
+
+redirect_vif_traffic() {
+    local vif=$1
+    local ifb=$2
+
+    do_or_die tc qdisc add dev "$vif" ingress
+
+    tc filter add dev "$vif" parent ffff: proto ip prio 10 \
+        u32 match u32 0 0 action mirred egress redirect dev "$ifb" >/dev/null 2>&1
+
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to redirect traffic from $vif to $ifb"
+    fi
+}
+
+add_plug_qdisc() {
+    local vif=$1
+    local ifb=$2
+
+    nl-qdisc-add --dev="$ifb" --parent root plug >/dev/null 2>&1
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to add plug qdisc to $ifb"
+    fi
+
+    # Set the ifb buffering limit in bytes. It's okay if this command fails.
+    nl-qdisc-add --dev="$ifb" --parent root \
+        --update plug --limit=10000000 >/dev/null 2>&1
+}
+
+teardown_netbuf() {
+    local vif=$1
+    local ifb=$2
+
+    if [ "$ifb" ]; then
+        do_without_error ip link set dev "$ifb" down
+        do_without_error nl-qdisc-delete --dev="$ifb" --parent root plug >/dev/null 2>&1
+        xenstore-rm -t "$XENBUS_PATH/ifb" 2>/dev/null || true
+    fi
+    do_without_error tc qdisc del dev "$vif" ingress
+    xenstore-rm -t "$XENBUS_PATH/hotplug-status" 2>/dev/null || true
+}
+
+xs_write_failed() {
+    local vif=$1
+    local ifb=$2
+    teardown_netbuf "$vifname" "$IFB"
+    fatal "failed to write ifb name to xenstore"
+}
+
+case "$command" in
+    setup)
+        check_libnl_tools
+        check_modules
+
+        claim_lock "pickifb"
+        setup_ifb
+        redirect_vif_traffic "$vifname" "$IFB"
+        add_plug_qdisc "$vifname" "$IFB"
+        release_lock "pickifb"
+
+        # Not using xenstore_write, which automatically exits on error,
+        # because we need to clean up on failure.
+        _xenstore_write "$XENBUS_PATH/ifb" "$IFB" || xs_write_failed "$vifname" "$IFB"
+        success
+        ;;
+    teardown)
+        : ${IFB:?}
+        teardown_netbuf "$vifname" "$IFB"
+        ;;
+esac
+
+log debug "Successful remus-netbuf-setup $command for $vifname, ifb $IFB."
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEy-0003s9-F6; Tue, 21 Jan 2014 09:03:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEx-0003rp-PN
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:24 +0000
Received: from [85.158.139.211:63398] by server-12.bemta-5.messagelabs.com id
	50/26-30017-AD73ED25; Tue, 21 Jan 2014 09:03:22 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390294999!10969712!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31136 invoked from network); 21 Jan 2014 09:03:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439854"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937f2004371;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014548-1244798 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:11 +0800
Message-Id: <1390295117-718-8-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 07/13 V6] remus: implement the API to
	buffer/release packages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch implements two APIs:
1. libxl__remus_netbuf_start_new_epoch()
   It marks a new epoch: packets queued before this epoch will be
   flushed, and packets arriving after it will be buffered.
   It is called after the guest is suspended.
2. libxl__remus_netbuf_release_prev_epoch()
   It flushes the buffered packets to the client.
   It is called when a checkpoint finishes.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_internal.h    |  6 +++++
 tools/libxl/libxl_netbuffer.c   | 49 +++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c | 14 ++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 0430307..7216f89 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2324,6 +2324,12 @@ _hidden void libxl__remus_setup_done(libxl__egc *egc,
 _hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
                                        libxl__domain_suspend_state *dss);
 
+_hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                               libxl__remus_state *remus_state);
+
+_hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                                  libxl__remus_state *remus_state);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 0be876c..1b61597 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -441,6 +441,55 @@ void libxl__remus_netbuf_setup(libxl__egc *egc,
     libxl__remus_setup_done(egc, dss, rc);
 }
 
+/* The buffer_op's value, not the value passed to kernel */
+enum {
+    tc_buffer_start,
+    tc_buffer_release
+};
+
+static int remus_netbuf_op(libxl__gc *gc, uint32_t domid,
+                           libxl__remus_state *remus_state,
+                           int buffer_op)
+{
+    int i, ret;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    for (i = 0; i < netbuf_state->num_netbufs; ++i) {
+        if (buffer_op == tc_buffer_start)
+            ret = rtnl_qdisc_plug_buffer(netbuf_state->netbuf_qdisc_list[i]);
+        else
+            ret = rtnl_qdisc_plug_release_one(netbuf_state->netbuf_qdisc_list[i]);
+
+        if (!ret)
+            ret = rtnl_qdisc_add(netbuf_state->nlsock,
+                                 netbuf_state->netbuf_qdisc_list[i],
+                                 NLM_F_REQUEST);
+        if (ret) {
+            LOG(ERROR, "Remus: cannot do netbuf op %s on %s:%s",
+                ((buffer_op == tc_buffer_start) ?
+                 "start_new_epoch" : "release_prev_epoch"),
+                netbuf_state->ifb_list[i], nl_geterror(ret));
+            return ERROR_FAIL;
+        }
+    }
+
+    return 0;
+}
+
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_start);
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_release);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index acfa534..a3e3f5c 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -28,6 +28,20 @@ void libxl__remus_netbuf_setup(libxl__egc *egc,
 {
 }
 
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEu-0003qf-3K; Tue, 21 Jan 2014 09:03:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEt-0003qY-FW
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:19 +0000
Received: from [193.109.254.147:57737] by server-13.bemta-14.messagelabs.com
	id B9/9A-19374-6D73ED25; Tue, 21 Jan 2014 09:03:18 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28684 invoked from network); 21 Jan 2014 09:03:16 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439851"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xn004370;
	Tue, 21 Jan 2014 17:03:07 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014515-1244791 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:04 +0800
Message-Id: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 00/13 V6] Remus/Libxl: Network buffering support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds support for network buffering in the Remus
codebase in libxl.  NB: This series still does not contain the
bug fix related to usleep calls in libxl__domain_suspend.
I will send it out later as a separate series.

The patches were written by Shriram, so he deserves all the credit.
We (Intel & Fujitsu) are doing, and will continue to do, the remaining
community/revision work:
On 12/26/2013 12:23 AM, Shriram Rajagopalan wrote:
> Sorry, I have been on and off these patch series
> lately.  I have to submit my PhD thesis soon and
> devoting my free cycles towards this code.
> 
> It would be great if you folks (Wen & Eddie) can help
> out with the patch series. Other answers inline.
> 

Changes in V6:
  Addressed Ian Jackson's comments on the V5 series.
  [PATCH 2/4 V5] has been split into smaller patches by functionality.

  [PATCH 4/4 V5] --> [PATCH 13/13]: netbuffer is enabled by default.

Changes in V5:

Merge hotplug script patch (2/5) and hotplug script setup/teardown
patch (3/5) into a single patch.

Changes in V4:

[1/5] Remove check for libnl command line utils in autoconf checks

[2/5] minor nits

[3/5] define LIBXL_HAVE_REMUS_NETBUF in libxl.h

[4/5] clean ups. Make the usleep in checkpoint callback asynchronous

[5/5] minor nits

Changes in V3:
[1/5] Fix redundant checks in configure scripts
      (based on Ian Campbell's suggestions)

[2/5] Introduce locking in the script, during IFB setup.
      Add xenstore paths used by netbuf scripts
      to xenstore-paths.markdown

[3/5] Hotplug scripts setup/teardown invocations are now asynchronous
      following IanJ's feedback.  However, the invocations are still
      sequential. 

[5/5] Allow per-domain specification of netbuffer scripts in the xl remus
      command.

And minor nits throughout the series based on feedback from
the last version

Changes in V2:
[1/5] Configure script will automatically enable/disable network
      buffer support depending on the availability of the appropriate
      libnl3 version. [If libnl3 is unavailable, a warning message will be
      printed to let the user know that the feature has been disabled.]

      use macros from pkg.m4 instead of pkg-config commands
      removed redundant checks for libnl3 libraries.

[3,4/5] - Minor nits.

Version 1:

[1/5] Changes to autoconf scripts to check for libnl3. Add linker flags
      to libxl Makefile.

[2/5] External script to setup/teardown network buffering using libnl3's
      CLI. This script will be invoked by libxl before starting Remus.
      The script's main job is to bring up an IFB device with plug qdisc
      attached to it.  It then re-routes egress traffic from the guest's
      vif to the IFB device.

[3/5] Libxl code to invoke the external setup script, followed by netlink
      related setup to obtain a handle on the output buffers attached
      to each vif.

[4/5] Libxl interaction with network buffer module in the kernel via
      libnl3 API.

[5/5] xl cmdline switch to explicitly enable network buffering when
      starting remus.


  A few things to note (by Shriram):

    a) Based on previous email discussions, the setup/teardown task has
    been moved to a hotplug style shell script which can be customized as
    desired, instead of implementing it as C code inside libxl.

    b) Libnl3 is not available on NetBSD, nor on CentOS (Linux), so I
    have made network buffering support an optional feature that can be
    disabled if desired.

    c) Since the setup script depends on libnl3, which NetBSD lacks, it
    is placed under the tools/hotplug/Linux folder.

thanks
Lai


Shriram Rajagopalan (13):
  remus: add libnl3 dependency to autoconf scripts
  remus: implement network buffering hotplug scripts
  remus: update libxl_domain_remus_info
  remus: introduce a new structure libxl__remus_state
  remus: introduce a function to check whether network buffering is
    enabled
  remus: implement the API to setup network buffering
  remus: implement the API to buffer/release packets
  remus: implement the API to teardown network buffering
  remus: use the API to setup network buffering
  remus: use the API to teardown network buffering
  remus: rename remus_failover_cb() to remus_replication_failure_cb()
  remus: control network buffering in remus callbacks
  remus - network buffering cmdline switch

 README                                 |   4 +
 config/Tools.mk.in                     |   3 +
 docs/man/xl.conf.pod.5                 |   6 +
 docs/man/xl.pod.1                      |  11 +-
 docs/misc/xenstore-paths.markdown      |   4 +
 tools/configure.ac                     |  15 +
 tools/hotplug/Linux/Makefile           |   1 +
 tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++
 tools/libxl/Makefile                   |  11 +
 tools/libxl/libxl.c                    |  48 ++-
 tools/libxl/libxl.h                    |  13 +
 tools/libxl/libxl_dom.c                | 118 +++++--
 tools/libxl/libxl_internal.h           |  54 +++-
 tools/libxl/libxl_netbuffer.c          | 551 +++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |  56 ++++
 tools/libxl/libxl_remus.c              |  64 ++++
 tools/libxl/libxl_types.idl            |   2 +
 tools/libxl/xl.c                       |   4 +
 tools/libxl/xl.h                       |   1 +
 tools/libxl/xl_cmdimpl.c               |  28 +-
 tools/libxl/xl_cmdtable.c              |   3 +
 tools/remus/README                     |   6 +
 22 files changed, 1145 insertions(+), 41 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c
 create mode 100644 tools/libxl/libxl_remus.c

-- 
1.8.4.2



From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEx-0003rd-1m; Tue, 21 Jan 2014 09:03:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEv-0003qy-K4
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:21 +0000
Received: from [85.158.139.211:44887] by server-11.bemta-5.messagelabs.com id
	4F/F8-23268-8D73ED25; Tue, 21 Jan 2014 09:03:20 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25340 invoked from network); 21 Jan 2014 09:03:19 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439852"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938Cr004373;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014540-1244794 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:07 +0800
Message-Id: <1390295117-718-4-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/13 V6] tools/libxl: update
	libxl_domain_remus_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add two members:
1. netbuf: whether network buffering is enabled
2. netbufscript: the path of the script that will be run to set up
     and tear down the guest's interface.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.h         | 13 +++++++++++++
 tools/libxl/libxl_types.idl |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..d89ad0a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_REMUS_NETBUF 1
+ *
+ * If this is defined, then the libxl_domain_remus_info structure will
+ * have a boolean field (netbuf) and a string field (netbufscript).
+ *
+ * netbuf, if true, indicates that network buffering should be enabled.
+ *
+ * netbufscript, if set, indicates the path to the hotplug script to
+ * setup or teardown network buffers.
+ */
+#define LIBXL_HAVE_REMUS_NETBUF 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..e49945a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -561,6 +561,8 @@ libxl_domain_remus_info = Struct("domain_remus_info",[
     ("interval",     integer),
     ("blackhole",    bool),
     ("compression",  bool),
+    ("netbuf",       bool),
+    ("netbufscript", string),
     ])
 
 libxl_event_type = Enumeration("event_type", [
-- 
1.8.4.2



From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEy-0003s9-F6; Tue, 21 Jan 2014 09:03:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEx-0003rp-PN
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:24 +0000
Received: from [85.158.139.211:63398] by server-12.bemta-5.messagelabs.com id
	50/26-30017-AD73ED25; Tue, 21 Jan 2014 09:03:22 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390294999!10969712!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31136 invoked from network); 21 Jan 2014 09:03:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439854"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937f2004371;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014548-1244798 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:11 +0800
Message-Id: <1390295117-718-8-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 07/13 V6] remus: implement the API to
	buffer/release packets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch implements two APIs:
1. libxl__remus_netbuf_start_new_epoch()
   Marks a new epoch. Packets before this epoch will be flushed, and
   packets after this epoch will be buffered. It will be called after
   the guest is suspended.
2. libxl__remus_netbuf_release_prev_epoch()
   Flushes the buffered packets to the client. It will be called when
   a checkpoint finishes.
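To show where these two calls sit in a checkpoint iteration, here is a
minimal sketch. The stub functions below are hypothetical stand-ins, not
the real libxl control flow (which is asynchronous and event-driven):

```c
/* Hypothetical stubs standing in for the real suspend/checkpoint and
 * libxl__remus_netbuf_* calls; each returns 0 on success. */
static int suspend_guest(void)             { return 0; }
static int resume_guest(void)              { return 0; }
static int send_checkpoint(void)           { return 0; }
static int netbuf_start_new_epoch(void)    { return 0; }
static int netbuf_release_prev_epoch(void) { return 0; }

/* One Remus checkpoint iteration: output generated after the new epoch
 * starts stays buffered until the checkpoint has been committed. */
static int demo_checkpoint_once(void)
{
    if (suspend_guest())            return -1;
    /* Guest is suspended: start a new buffering epoch. */
    if (netbuf_start_new_epoch())   return -1;
    if (resume_guest())             return -1;
    if (send_checkpoint())          return -1;
    /* Checkpoint finished: release the previous epoch's packets. */
    return netbuf_release_prev_epoch();
}
```

The key property is that no packet generated after a checkpoint begins
reaches the outside world before that checkpoint is committed.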

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_internal.h    |  6 +++++
 tools/libxl/libxl_netbuffer.c   | 49 +++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c | 14 ++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 0430307..7216f89 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2324,6 +2324,12 @@ _hidden void libxl__remus_setup_done(libxl__egc *egc,
 _hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
                                        libxl__domain_suspend_state *dss);
 
+_hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                               libxl__remus_state *remus_state);
+
+_hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                                  libxl__remus_state *remus_state);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 0be876c..1b61597 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -441,6 +441,55 @@ void libxl__remus_netbuf_setup(libxl__egc *egc,
     libxl__remus_setup_done(egc, dss, rc);
 }
 
+/* Values for buffer_op; not the values passed to the kernel */
+enum {
+    tc_buffer_start,
+    tc_buffer_release
+};
+
+static int remus_netbuf_op(libxl__gc *gc, uint32_t domid,
+                           libxl__remus_state *remus_state,
+                           int buffer_op)
+{
+    int i, ret;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    for (i = 0; i < netbuf_state->num_netbufs; ++i) {
+        if (buffer_op == tc_buffer_start)
+            ret = rtnl_qdisc_plug_buffer(netbuf_state->netbuf_qdisc_list[i]);
+        else
+            ret = rtnl_qdisc_plug_release_one(netbuf_state->netbuf_qdisc_list[i]);
+
+        if (!ret)
+            ret = rtnl_qdisc_add(netbuf_state->nlsock,
+                                 netbuf_state->netbuf_qdisc_list[i],
+                                 NLM_F_REQUEST);
+        if (ret) {
+            LOG(ERROR, "Remus: cannot do netbuf op %s on %s: %s",
+                ((buffer_op == tc_buffer_start) ?
+                 "start_new_epoch" : "release_prev_epoch"),
+                netbuf_state->ifb_list[i], nl_geterror(ret));
+            return ERROR_FAIL;
+        }
+    }
+
+    return 0;
+}
+
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_start);
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_release);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index acfa534..a3e3f5c 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -28,6 +28,20 @@ void libxl__remus_netbuf_setup(libxl__egc *egc,
 {
 }
 
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.8.4.2



From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEs-0003qS-O7; Tue, 21 Jan 2014 09:03:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEr-0003qN-1Z
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:17 +0000
Received: from [193.109.254.147:64259] by server-11.bemta-14.messagelabs.com
	id 2F/14-20576-4D73ED25; Tue, 21 Jan 2014 09:03:16 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28487 invoked from network); 21 Jan 2014 09:03:13 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439850"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:27 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938B7004372;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014539-1244793 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:06 +0800
Message-Id: <1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
	hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch introduces remus-netbuf-setup hotplug script responsible for
setting up and tearing down the necessary infrastructure required for
network output buffering in Remus.  This script is intended to be invoked
by libxl for each guest interface, when starting or stopping Remus.

Apart from returning a success/failure indication via the usual hotplug
entries in xenstore, this script also writes to xenstore the name of
the IFB device to be used to control the vif's network output.

The script relies on libnl3 command-line utilities to perform various
setup/teardown functions. It is confined to Linux, since NetBSD does
not seem to have libnl3.
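As an illustration of the per-vif interface described above, the sketch
below formats the argument string a caller might pass when invoking the
script for one vif. The helper name and the domid/devid values are
hypothetical, and the real invocation goes through libxl's hotplug-script
machinery rather than a literal command line:

```c
#include <stdio.h>

/* Hypothetical helper: format a remus-netbuf-setup invocation for one
 * vif, using the XENBUS_PATH layout the script documents
 * (/libxl/<domid>/remus/netbuf/<devid>). */
static int build_script_args(char *buf, size_t len, const char *vifname,
                             int domid, int devid)
{
    return snprintf(buf, len,
                    "remus-netbuf-setup setup vifname=%s "
                    "XENBUS_PATH=/libxl/%d/remus/netbuf/%d",
                    vifname, domid, devid);
}
```

On teardown the script is additionally passed the IFB device name it
reported during setup.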

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/hotplug/Linux/Makefile           |   1 +
 tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup

diff --git a/tools/hotplug/Linux/Makefile b/tools/hotplug/Linux/Makefile
index 47655f6..6139c1f 100644
--- a/tools/hotplug/Linux/Makefile
+++ b/tools/hotplug/Linux/Makefile
@@ -16,6 +16,7 @@ XEN_SCRIPTS += network-nat vif-nat
 XEN_SCRIPTS += vif-openvswitch
 XEN_SCRIPTS += vif2
 XEN_SCRIPTS += vif-setup
+XEN_SCRIPTS-$(CONFIG_REMUS_NETBUF) += remus-netbuf-setup
 XEN_SCRIPTS += block
 XEN_SCRIPTS += block-enbd block-nbd
 XEN_SCRIPTS-$(CONFIG_BLKTAP1) += blktap
diff --git a/tools/hotplug/Linux/remus-netbuf-setup b/tools/hotplug/Linux/remus-netbuf-setup
new file mode 100644
index 0000000..3467db2
--- /dev/null
+++ b/tools/hotplug/Linux/remus-netbuf-setup
@@ -0,0 +1,183 @@
+#!/bin/bash
+#============================================================================
+# ${XEN_SCRIPT_DIR}/remus-netbuf-setup
+#
+# Script for attaching a network buffer to the specified vif (in any mode).
+# The hotplugging system will call this script when starting remus via libxl
+# API, libxl_domain_remus_start.
+#
+# Usage:
+# remus-netbuf-setup (setup|teardown)
+#
+# Environment vars:
+# vifname     vif interface name (required).
+# XENBUS_PATH path in Xenstore, where the IFB device details will be stored
+#                      or read from (required).
+#             (libxl passes /libxl/<domid>/remus/netbuf/<devid>)
+# IFB         ifb interface to be cleaned up (required). [for teardown op only]
+
+# Written to the store: (setup operation)
+# XENBUS_PATH/ifb=<ifbdevName> the IFB device serving
+#  as the intermediate buffer through which the interface's network output
+#  can be controlled.
+#
+# To install a network buffer on a guest vif (vif1.0) using ifb (ifb0)
+# we need to do the following
+#
+#  ip link set dev ifb0 up
+#  tc qdisc add dev vif1.0 ingress
+#  tc filter add dev vif1.0 parent ffff: proto ip \
+#    prio 10 u32 match u32 0 0 action mirred egress redirect dev ifb0
+#  nl-qdisc-add --dev=ifb0 --parent root plug
+#  nl-qdisc-add --dev=ifb0 --parent root --update plug --limit=10000000
+#                                                (10MB limit on buffer)
+#
+# So order of operations when installing a network buffer on vif1.0
+# 1. find a free ifb and bring up the device
+# 2. redirect traffic from vif1.0 to ifb:
+#   2.1 add ingress qdisc to vif1.0 (to capture outgoing packets from guest)
+#   2.2 use tc filter command with actions mirred egress + redirect
+# 3. install plug_qdisc on ifb device, with which we can buffer/release
+#    guest's network output from vif1.0
+#
+#
+
+#============================================================================
+
+# Unlike other vif scripts, vif-common is not needed here as it executes
+# vif-specific setup code such as renaming.
+dir=$(dirname "$0")
+. "$dir/xen-hotplug-common.sh"
+
+findCommand "$@"
+
+if [ "$command" != "setup" -a  "$command" != "teardown" ]
+then
+  echo "Invalid command: $command"
+  log err "Invalid command: $command"
+  exit 1
+fi
+
+evalVariables "$@"
+
+: ${vifname:?}
+: ${XENBUS_PATH:?}
+
+check_libnl_tools() {
+    if ! command -v nl-qdisc-list > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-list tool"
+    fi
+    if ! command -v nl-qdisc-add > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-add tool"
+    fi
+    if ! command -v nl-qdisc-delete > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-delete tool"
+    fi
+}
+
+# We only check for modules. We don't load them.
+# User/Admin is supposed to load ifb during boot time,
+# ensuring that there are enough free ifbs in the system.
+# Other modules will be loaded automatically by tc commands.
+check_modules() {
+    for m in ifb sch_plug sch_ingress act_mirred cls_u32
+    do
+        if ! modinfo $m > /dev/null 2>&1; then
+            fatal "Unable to find $m kernel module"
+        fi
+    done
+}
+
+setup_ifb() {
+
+    for ifb in `ifconfig -a -s|egrep ^ifb|cut -d ' ' -f1`
+    do
+        local installed=`nl-qdisc-list -d $ifb`
+        [ -n "$installed" ] && continue
+        IFB="$ifb"
+        break
+    done
+
+    if [ -z "$IFB" ]
+    then
+        fatal "Unable to find a free IFB device for $vifname"
+    fi
+
+    do_or_die ip link set dev "$IFB" up
+}
+
+redirect_vif_traffic() {
+    local vif=$1
+    local ifb=$2
+
+    do_or_die tc qdisc add dev "$vif" ingress
+
+    tc filter add dev "$vif" parent ffff: proto ip prio 10 \
+        u32 match u32 0 0 action mirred egress redirect dev "$ifb" >/dev/null 2>&1
+
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to redirect traffic from $vif to $ifb"
+    fi
+}
+
+add_plug_qdisc() {
+    local vif=$1
+    local ifb=$2
+
+    nl-qdisc-add --dev="$ifb" --parent root plug >/dev/null 2>&1
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to add plug qdisc to $ifb"
+    fi
+
+    # Set the ifb buffering limit in bytes. It's okay if this command fails.
+    nl-qdisc-add --dev="$ifb" --parent root \
+        --update plug --limit=10000000 >/dev/null 2>&1
+}
+
+teardown_netbuf() {
+    local vif=$1
+    local ifb=$2
+
+    if [ "$ifb" ]; then
+        do_without_error ip link set dev "$ifb" down
+        do_without_error nl-qdisc-delete --dev="$ifb" --parent root plug >/dev/null 2>&1
+        xenstore-rm -t "$XENBUS_PATH/ifb" 2>/dev/null || true
+    fi
+    do_without_error tc qdisc del dev "$vif" ingress
+    xenstore-rm -t "$XENBUS_PATH/hotplug-status" 2>/dev/null || true
+}
+
+xs_write_failed() {
+    local vif=$1
+    local ifb=$2
+    teardown_netbuf "$vifname" "$IFB"
+    fatal "failed to write ifb name to xenstore"
+}
+
+case "$command" in
+    setup)
+        check_libnl_tools
+        check_modules
+
+        claim_lock "pickifb"
+        setup_ifb
+        redirect_vif_traffic "$vifname" "$IFB"
+        add_plug_qdisc "$vifname" "$IFB"
+        release_lock "pickifb"
+
+        # Not using xenstore_write, which automatically exits on error,
+        # because we need to clean up on failure
+        _xenstore_write "$XENBUS_PATH/ifb" "$IFB" || xs_write_failed "$vifname" "$IFB"
+        success
+        ;;
+    teardown)
+        : ${IFB:?}
+        teardown_netbuf "$vifname" "$IFB"
+        ;;
+esac
+
+log debug "Successful remus-netbuf-setup $command for $vifname, ifb $IFB."
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEw-0003rN-M9; Tue, 21 Jan 2014 09:03:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEv-0003qq-7X
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:21 +0000
Received: from [193.109.254.147:64826] by server-12.bemta-14.messagelabs.com
	id 75/3E-13681-8D73ED25; Tue, 21 Jan 2014 09:03:20 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28925 invoked from network); 21 Jan 2014 09:03:19 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439853"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937f1004371;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014537-1244792 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:05 +0800
Message-Id: <1390295117-718-2-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 01/13 V6] remus: add libnl3 dependency to
	autoconf scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Libnl3 is required for controlling Remus network buffering.
This patch adds a dependency on libnl3 (>= 3.2.8) to the autoconf
scripts. It also provides the ability to configure the tools without
libnl3 support, that is, without network buffering support.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 README               |  4 ++++
 config/Tools.mk.in   |  3 +++
 tools/configure.ac   | 15 +++++++++++++++
 tools/libxl/Makefile |  2 ++
 tools/remus/README   |  6 ++++++
 5 files changed, 30 insertions(+)

diff --git a/README b/README
index 4148a26..7bb25fb 100644
--- a/README
+++ b/README
@@ -72,6 +72,10 @@ disabled at compile time:
     * cmake (if building vtpm stub domains)
     * markdown
     * figlet (for generating the traditional Xen start of day banner)
+    * Development install of libnl3 (e.g., libnl-3-200,
+      libnl-3-dev, etc).  Required if network buffering is desired
+      when using Remus with libxl.  See tools/remus/README for detailed
+      information.
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index d9d3239..81802b3 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -38,6 +38,8 @@ PTHREAD_LIBS        := @PTHREAD_LIBS@
 
 PTYFUNCS_LIBS       := @PTYFUNCS_LIBS@
 
+LIBNL3_LIBS         := @LIBNL3_LIBS@
+LIBNL3_CFLAGS       := @LIBNL3_CFLAGS@
 # Download GIT repositories via HTTP or GIT's own protocol?
 # GIT's protocol is faster and more robust, when it works at all (firewalls
 # may block it). We make it the default, but if your GIT repository downloads
@@ -56,6 +58,7 @@ CONFIG_QEMU_TRAD    := @qemu_traditional@
 CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_XEND         := @xend@
 CONFIG_BLKTAP1      := @blktap1@
+CONFIG_REMUS_NETBUF := @remus_netbuf@
 
 #System options
 ZLIB                := @zlib@
diff --git a/tools/configure.ac b/tools/configure.ac
index 0754f0e..f95956d 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -236,6 +236,21 @@ esac
 # Checks for header files.
 AC_CHECK_HEADERS([yajl/yajl_version.h sys/eventfd.h])
 
+# Check for libnl3 >=3.2.8. If present enable remus network buffering.
+PKG_CHECK_MODULES(LIBNL3, [libnl-3.0 >= 3.2.8 libnl-route-3.0 >= 3.2.8],
+		[libnl3_lib="y"], [libnl3_lib="n"])
+
+AS_IF([test "x$libnl3_lib" = "xn" ], [
+	    AC_MSG_WARN([Disabling support for Remus network buffering.
+	    Please install libnl3 libraries, command line tools and devel
+	    headers - version 3.2.8 or higher])
+	    AC_SUBST(remus_netbuf, [n])
+	    ],[
+	    AC_SUBST(LIBNL3_LIBS)
+	    AC_SUBST(LIBNL3_CFLAGS)
+	    AC_SUBST(remus_netbuf, [y])
+])
+
 AC_OUTPUT()
 
 AS_IF([test "x$xend" = "xy" ], [
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..da27c84 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -21,11 +21,13 @@ endif
 
 LIBXL_LIBS =
 LIBXL_LIBS = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libblktapctl) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS += $(LIBNL3_LIBS)
 
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 CFLAGS_LIBXL += $(CFLAGS_libblktapctl) 
+CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
 CFLAGS_LIBXL += -Wshadow
 
 LIBXL_LIBS-$(CONFIG_ARM) += -lfdt
diff --git a/tools/remus/README b/tools/remus/README
index 9e8140b..4736252 100644
--- a/tools/remus/README
+++ b/tools/remus/README
@@ -2,3 +2,9 @@ Remus provides fault tolerance for virtual machines by sending continuous
 checkpoints to a backup, which will activate if the target VM fails.
 
 See the website at http://nss.cs.ubc.ca/remus/ for details.
+
+Using Remus with libxl on Xen 4.4 and higher:
+ To enable network buffering, you need libnl 3.2.8
+ or higher along with the development headers and command line utilities.
+ If your distro does not have the appropriate libnl3 version, you can find
+ the latest source tarball of libnl3 at http://www.carisma.slowglass.com/~tgr/libnl/
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
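The PKG_CHECK_MODULES probe added to tools/configure.ac above can be reproduced by hand to see which way a build will go. A minimal sketch (it degrades to "n" when pkg-config or libnl3 is missing, mirroring the configure warning path):

```shell
# Manually replicate the configure-time libnl3 probe: network buffering
# (remus_netbuf) is enabled only when both libnl-3.0 and libnl-route-3.0
# are present at version 3.2.8 or newer.
if command -v pkg-config >/dev/null 2>&1 &&
   pkg-config --exists "libnl-3.0 >= 3.2.8" "libnl-route-3.0 >= 3.2.8" 2>/dev/null
then
    remus_netbuf=y
else
    remus_netbuf=n
fi

echo "remus_netbuf=$remus_netbuf"
```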

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEx-0003rd-1m; Tue, 21 Jan 2014 09:03:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEv-0003qy-K4
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:21 +0000
Received: from [85.158.139.211:44887] by server-11.bemta-5.messagelabs.com id
	4F/F8-23268-8D73ED25; Tue, 21 Jan 2014 09:03:20 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25340 invoked from network); 21 Jan 2014 09:03:19 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439852"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938Cr004373;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014540-1244794 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:07 +0800
Message-Id: <1390295117-718-4-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/13 V6] tools/libxl: update
	libxl_domain_remus_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add two members:
1. netbuf: whether network buffering is enabled
2. netbufscript: the path of the script that will be run to set up
     and tear down the guest's interface.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.h         | 13 +++++++++++++
 tools/libxl/libxl_types.idl |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..d89ad0a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_REMUS_NETBUF 1
+ *
+ * If this is defined, then the libxl_domain_remus_info structure will
+ * have a boolean field (netbuf) and a string field (netbufscript).
+ *
+ * netbuf, if true, indicates that network buffering should be enabled.
+ *
+ * netbufscript, if set, indicates the path to the hotplug script to
+ * setup or teardown network buffers.
+ */
+#define LIBXL_HAVE_REMUS_NETBUF 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..e49945a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -561,6 +561,8 @@ libxl_domain_remus_info = Struct("domain_remus_info",[
     ("interval",     integer),
     ("blackhole",    bool),
     ("compression",  bool),
+    ("netbuf",       bool),
+    ("netbufscript", string),
     ])
 
 libxl_event_type = Enumeration("event_type", [
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEz-0003sz-TD; Tue, 21 Jan 2014 09:03:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEy-0003rz-EP
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:24 +0000
Received: from [193.109.254.147:58245] by server-5.bemta-14.messagelabs.com id
	D3/5A-03510-BD73ED25; Tue, 21 Jan 2014 09:03:23 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29390 invoked from network); 21 Jan 2014 09:03:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439856"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xo004370;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014544-1244795 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:08 +0800
Message-Id: <1390295117-718-5-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 04/13 V6] tools/libxl: introduce a new structure
	libxl__remus_state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl_domain_remus_info only contains the arguments of the
'xl remus' command, so introduce a new structure, libxl__remus_state,
to hold the Remus state.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          | 25 +++++++++++++++++++++++--
 tools/libxl/libxl_dom.c      | 12 ++++--------
 tools/libxl/libxl_internal.h | 22 ++++++++++++++++++++--
 3 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..25af816 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -729,11 +729,32 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     dss->type = type;
     dss->live = 1;
     dss->debug = 0;
-    dss->remus = info;
 
     assert(info);
 
-    /* TBD: Remus setup - i.e. attach qdisc, enable disk buffering, etc */
+    GCNEW(dss->remus_state);
+
+    /* convenience shorthand */
+    libxl__remus_state *remus_state = dss->remus_state;
+    remus_state->blackhole = info->blackhole;
+    remus_state->interval = info->interval;
+    remus_state->compression = info->compression;
+    remus_state->dss = dss;
+    libxl__ev_child_init(&remus_state->child);
+
+    /* TODO: enable disk buffering */
+
+    /* Setup network buffering */
+    if (info->netbuf) {
+        if (info->netbufscript) {
+            remus_state->netbufscript =
+                libxl__strdup(gc, info->netbufscript);
+        } else {
+            remus_state->netbufscript =
+                GCSPRINTF("%s/remus-netbuf-setup",
+                          libxl__xen_script_dir_path());
+        }
+    }
 
     /* Point of no return */
     libxl__domain_suspend(egc, dss);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..8d63f90 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1290,7 +1290,7 @@ static void remus_checkpoint_dm_saved(libxl__egc *egc,
     /* REMUS TODO: Wait for disk and memory ack, release network buffer */
     /* REMUS TODO: make this asynchronous */
     assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->interval * 1000);
+    usleep(dss->remus_state->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
@@ -1308,7 +1308,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     const libxl_domain_type type = dss->type;
     const int live = dss->live;
     const int debug = dss->debug;
-    const libxl_domain_remus_info *const r_info = dss->remus;
     libxl__srm_save_autogen_callbacks *const callbacks =
         &dss->shs.callbacks.save.a;
 
@@ -1343,11 +1342,8 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     dss->guest_responded = 0;
     dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
-    if (r_info != NULL) {
-        dss->interval = r_info->interval;
-        if (r_info->compression)
-            dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
-    }
+    if (dss->remus_state && dss->remus_state->compression)
+        dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
 
     dss->xce = xc_evtchn_open(NULL, 0);
     if (dss->xce == NULL)
@@ -1366,7 +1362,7 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     }
 
     memset(callbacks, 0, sizeof(*callbacks));
-    if (r_info != NULL) {
+    if (dss->remus_state != NULL) {
         callbacks->suspend = libxl__remus_domain_suspend_callback;
         callbacks->postcopy = libxl__remus_domain_resume_callback;
         callbacks->checkpoint = libxl__remus_domain_checkpoint_callback;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..9970780 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2292,6 +2292,25 @@ typedef struct libxl__logdirty_switch {
     libxl__ev_time timeout;
 } libxl__logdirty_switch;
 
+typedef struct libxl__remus_state {
+    /* filled by the user */
+    /* checkpoint interval */
+    int interval;
+    int blackhole;
+    int compression;
+    /* Script to setup/teardown network buffers */
+    const char *netbufscript;
+    libxl__domain_suspend_state *dss;
+
+    /* private */
+    int saved_rc;
+    int dev_id;
+    /* Opaque context containing network buffer related stuff */
+    void *netbuf_state;
+    libxl__ev_time timeout;
+    libxl__ev_child child;
+} libxl__remus_state;
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
@@ -2302,7 +2321,7 @@ struct libxl__domain_suspend_state {
     libxl_domain_type type;
     int live;
     int debug;
-    const libxl_domain_remus_info *remus;
+    libxl__remus_state *remus_state;
     /* private */
     xc_evtchn *xce; /* event channel handle */
     int suspend_eventchn;
@@ -2310,7 +2329,6 @@ struct libxl__domain_suspend_state {
     int xcflags;
     int guest_responded;
     const char *dm_savefile;
-    int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
     /* private for libxl__domain_save_device_model */
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XEz-0003sz-TD; Tue, 21 Jan 2014 09:03:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEy-0003rz-EP
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:24 +0000
Received: from [193.109.254.147:58245] by server-5.bemta-14.messagelabs.com id
	D3/5A-03510-BD73ED25; Tue, 21 Jan 2014 09:03:23 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29390 invoked from network); 21 Jan 2014 09:03:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439856"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xo004370;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014544-1244795 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:08 +0800
Message-Id: <1390295117-718-5-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:46,
	Serialize complete at 2014/01/21 17:01:46
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 04/13 V6] tools/libxl: introduce a new structure
	libxl__remus_state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl_domain_remus_info only contains the arguments of the command
'xl remus', so introduce a new structure, libxl__remus_state, to hold
the Remus runtime state.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          | 25 +++++++++++++++++++++++--
 tools/libxl/libxl_dom.c      | 12 ++++--------
 tools/libxl/libxl_internal.h | 22 ++++++++++++++++++++--
 3 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..25af816 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -729,11 +729,32 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     dss->type = type;
     dss->live = 1;
     dss->debug = 0;
-    dss->remus = info;
 
     assert(info);
 
-    /* TBD: Remus setup - i.e. attach qdisc, enable disk buffering, etc */
+    GCNEW(dss->remus_state);
+
+    /* convenience shorthand */
+    libxl__remus_state *remus_state = dss->remus_state;
+    remus_state->blackhole = info->blackhole;
+    remus_state->interval = info->interval;
+    remus_state->compression = info->compression;
+    remus_state->dss = dss;
+    libxl__ev_child_init(&remus_state->child);
+
+    /* TODO: enable disk buffering */
+
+    /* Setup network buffering */
+    if (info->netbuf) {
+        if (info->netbufscript) {
+            remus_state->netbufscript =
+                libxl__strdup(gc, info->netbufscript);
+        } else {
+            remus_state->netbufscript =
+                GCSPRINTF("%s/remus-netbuf-setup",
+                          libxl__xen_script_dir_path());
+        }
+    }
 
     /* Point of no return */
     libxl__domain_suspend(egc, dss);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..8d63f90 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1290,7 +1290,7 @@ static void remus_checkpoint_dm_saved(libxl__egc *egc,
     /* REMUS TODO: Wait for disk and memory ack, release network buffer */
     /* REMUS TODO: make this asynchronous */
     assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->interval * 1000);
+    usleep(dss->remus_state->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
@@ -1308,7 +1308,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     const libxl_domain_type type = dss->type;
     const int live = dss->live;
     const int debug = dss->debug;
-    const libxl_domain_remus_info *const r_info = dss->remus;
     libxl__srm_save_autogen_callbacks *const callbacks =
         &dss->shs.callbacks.save.a;
 
@@ -1343,11 +1342,8 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     dss->guest_responded = 0;
     dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
-    if (r_info != NULL) {
-        dss->interval = r_info->interval;
-        if (r_info->compression)
-            dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
-    }
+    if (dss->remus_state && dss->remus_state->compression)
+        dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
 
     dss->xce = xc_evtchn_open(NULL, 0);
     if (dss->xce == NULL)
@@ -1366,7 +1362,7 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     }
 
     memset(callbacks, 0, sizeof(*callbacks));
-    if (r_info != NULL) {
+    if (dss->remus_state != NULL) {
         callbacks->suspend = libxl__remus_domain_suspend_callback;
         callbacks->postcopy = libxl__remus_domain_resume_callback;
         callbacks->checkpoint = libxl__remus_domain_checkpoint_callback;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..9970780 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2292,6 +2292,25 @@ typedef struct libxl__logdirty_switch {
     libxl__ev_time timeout;
 } libxl__logdirty_switch;
 
+typedef struct libxl__remus_state {
+    /* filled by the user */
+    /* checkpoint interval */
+    int interval;
+    int blackhole;
+    int compression;
+    /* Script to setup/teardown network buffers */
+    const char *netbufscript;
+    libxl__domain_suspend_state *dss;
+
+    /* private */
+    int saved_rc;
+    int dev_id;
+    /* Opaque context containing network buffer related stuff */
+    void *netbuf_state;
+    libxl__ev_time timeout;
+    libxl__ev_child child;
+} libxl__remus_state;
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
@@ -2302,7 +2321,7 @@ struct libxl__domain_suspend_state {
     libxl_domain_type type;
     int live;
     int debug;
-    const libxl_domain_remus_info *remus;
+    libxl__remus_state *remus_state;
     /* private */
     xc_evtchn *xce; /* event channel handle */
     int suspend_eventchn;
@@ -2310,7 +2329,6 @@ struct libxl__domain_suspend_state {
     int xcflags;
     int guest_responded;
     const char *dm_savefile;
-    int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
     /* private for libxl__domain_save_device_model */
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF0-0003v4-Tp; Tue, 21 Jan 2014 09:03:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEz-0003sT-Db
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:25 +0000
Received: from [193.109.254.147:65310] by server-16.bemta-14.messagelabs.com
	id 18/41-20600-CD73ED25; Tue, 21 Jan 2014 09:03:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!5
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29695 invoked from network); 21 Jan 2014 09:03:23 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439858"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938Ct004373;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014572-1244800 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:13 +0800
Message-Id: <1390295117-718-10-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 09/13 V6] remus: use the API to setup network
	buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

If a network buffering hotplug script is configured, call
libxl__remus_netbuf_setup() to set up network buffering.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |  2 +-
 tools/libxl/libxl_internal.h |  3 +++
 tools/libxl/libxl_remus.c    | 10 ++++++++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 026206a..f45fd74 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -762,7 +762,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     }
 
     /* Point of no return */
-    libxl__domain_suspend(egc, dss);
+    libxl__remus_setup_initiate(egc, dss);
     return AO_INPROGRESS;
 
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 358590b..657cfc4 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2336,6 +2336,9 @@ _hidden void libxl__remus_teardown_done(libxl__egc *egc,
 _hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
                                           libxl__domain_suspend_state *dss);
 
+_hidden void libxl__remus_setup_initiate(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index 6790c61..38776e1 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -19,6 +19,16 @@
 
 /*----- remus setup/teardown code -----*/
 
+void libxl__remus_setup_initiate(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss)
+{
+    libxl__ev_time_init(&dss->remus_state->timeout);
+    if (!dss->remus_state->netbufscript)
+        libxl__remus_setup_done(egc, dss, 0);
+    else
+        libxl__remus_netbuf_setup(egc, dss);
+}
+
 void libxl__remus_setup_done(libxl__egc *egc,
                              libxl__domain_suspend_state *dss,
                              int rc)
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF1-0003vt-HM; Tue, 21 Jan 2014 09:03:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XEz-0003sV-Kb
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:25 +0000
Received: from [85.158.139.211:26997] by server-2.bemta-5.messagelabs.com id
	84/75-29392-CD73ED25; Tue, 21 Jan 2014 09:03:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390294999!10969712!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31461 invoked from network); 21 Jan 2014 09:03:23 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439857"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938B8004372;
	Tue, 21 Jan 2014 17:03:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014545-1244796 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:09 +0800
Message-Id: <1390295117-718-6-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 05/13 V6] remus: introduce a function to check
	whether network buffering is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl__netbuffer_enabled() returns 1 when network buffering support is
compiled in, and 0 otherwise.

If network buffering support is not compiled in but the user requests it,
report an error and exit.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/Makefile            |  7 +++++++
 tools/libxl/libxl.c             |  5 +++++
 tools/libxl/libxl_internal.h    |  2 ++
 tools/libxl/libxl_netbuffer.c   | 31 +++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c | 31 +++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+)
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index da27c84..84a467c 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -45,6 +45,13 @@ LIBXL_OBJS-y += libxl_blktap2.o
 else
 LIBXL_OBJS-y += libxl_noblktap2.o
 endif
+
+ifeq ($(CONFIG_REMUS_NETBUF),y)
+LIBXL_OBJS-y += libxl_netbuffer.o
+else
+LIBXL_OBJS-y += libxl_nonetbuffer.o
+endif
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 25af816..026206a 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -746,6 +746,11 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     /* Setup network buffering */
     if (info->netbuf) {
+        if (!libxl__netbuffer_enabled(gc)) {
+            LOG(ERROR, "Remus: No support for network buffering");
+            goto out;
+        }
+
         if (info->netbufscript) {
             remus_state->netbufscript =
                 libxl__strdup(gc, info->netbufscript);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9970780..2f64382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2311,6 +2311,8 @@ typedef struct libxl__remus_state {
     libxl__ev_child child;
 } libxl__remus_state;
 
+_hidden int libxl__netbuffer_enabled(libxl__gc *gc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
new file mode 100644
index 0000000..8e23d75
--- /dev/null
+++ b/tools/libxl/libxl_netbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
new file mode 100644
index 0000000..6aa4bf1
--- /dev/null
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl__netbuffer_enabled() returns 1 if network buffering support is
compiled in, and 0 if it is not.

If network buffering support is not compiled in but the user requests it,
report an error and exit.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/Makefile            |  7 +++++++
 tools/libxl/libxl.c             |  5 +++++
 tools/libxl/libxl_internal.h    |  2 ++
 tools/libxl/libxl_netbuffer.c   | 31 +++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c | 31 +++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+)
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index da27c84..84a467c 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -45,6 +45,13 @@ LIBXL_OBJS-y += libxl_blktap2.o
 else
 LIBXL_OBJS-y += libxl_noblktap2.o
 endif
+
+ifeq ($(CONFIG_REMUS_NETBUF),y)
+LIBXL_OBJS-y += libxl_netbuffer.o
+else
+LIBXL_OBJS-y += libxl_nonetbuffer.o
+endif
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 25af816..026206a 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -746,6 +746,11 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     /* Setup network buffering */
     if (info->netbuf) {
+        if (!libxl__netbuffer_enabled(gc)) {
+            LOG(ERROR, "Remus: No support for network buffering");
+            goto out;
+        }
+
         if (info->netbufscript) {
             remus_state->netbufscript =
                 libxl__strdup(gc, info->netbufscript);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9970780..2f64382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2311,6 +2311,8 @@ typedef struct libxl__remus_state {
     libxl__ev_child child;
 } libxl__remus_state;
 
+_hidden int libxl__netbuffer_enabled(libxl__gc *gc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
new file mode 100644
index 0000000..8e23d75
--- /dev/null
+++ b/tools/libxl/libxl_netbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
new file mode 100644
index 0000000..6aa4bf1
--- /dev/null
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.4.2



From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF2-0003xE-D0; Tue, 21 Jan 2014 09:03:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF0-0003sp-3Q
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:26 +0000
Received: from [85.158.139.211:27085] by server-16.bemta-5.messagelabs.com id
	31/F4-11843-DD73ED25; Tue, 21 Jan 2014 09:03:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25616 invoked from network); 21 Jan 2014 09:03:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439855"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:28 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938Cs004373;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014547-1244797 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:10 +0800
Message-Id: <1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 06/13 V6] remus: implement the API to setup
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

The following steps are taken during setup:
 a) Call the hotplug script for each vif to set up its network buffer.

 b) Establish a dedicated Remus context containing libnl-related
    state (netlink sockets, qdisc caches, etc.).

 c) Obtain handles to the plug qdiscs installed on the IFB devices
    chosen by the hotplug scripts.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/misc/xenstore-paths.markdown |   4 +
 tools/libxl/Makefile              |   2 +
 tools/libxl/libxl_dom.c           |   7 +-
 tools/libxl/libxl_internal.h      |  11 +
 tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c   |   6 +
 tools/libxl/libxl_remus.c         |  35 ++++
 7 files changed, 479 insertions(+), 5 deletions(-)
 create mode 100644 tools/libxl/libxl_remus.c

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..7a0d2c9 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
 
 The device model version for a domain.
 
+#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
+
+IFB device used by Remus to buffer network output from the associated vif.
+
 [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
 [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
 [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 84a467c..218f55e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -52,6 +52,8 @@ else
 LIBXL_OBJS-y += libxl_nonetbuffer.o
 endif
 
+LIBXL_OBJS-y += libxl_remus.o
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 8d63f90..e3e9f6f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
 
 /*==================== Domain suspend (save) ====================*/
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc);
-
 /*----- complicated callback, called by xc_domain_save -----*/
 
 /*
@@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     dss->save_dm_callback(egc, dss, our_rc);
 }
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc)
+void domain_suspend_done(libxl__egc *egc,
+                         libxl__domain_suspend_state *dss, int rc)
 {
     STATE_AO_GC(dss->ao);
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2f64382..0430307 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
 
 _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
 
+_hidden void domain_suspend_done(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss,
+                                 int rc);
+
+_hidden void libxl__remus_setup_done(libxl__egc *egc,
+                                     libxl__domain_suspend_state *dss,
+                                     int rc);
+
+_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
+                                       libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 8e23d75..0be876c 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -17,11 +17,430 @@
 
 #include "libxl_internal.h"
 
+#include <netlink/cache.h>
+#include <netlink/socket.h>
+#include <netlink/attr.h>
+#include <netlink/route/link.h>
+#include <netlink/route/route.h>
+#include <netlink/route/qdisc.h>
+#include <netlink/route/qdisc/plug.h>
+
+typedef struct libxl__remus_netbuf_state {
+    struct rtnl_qdisc **netbuf_qdisc_list;
+    struct nl_sock *nlsock;
+    struct nl_cache *qdisc_cache;
+    const char **vif_list;
+    const char **ifb_list;
+    uint32_t num_netbufs;
+    uint32_t unused;
+} libxl__remus_netbuf_state;
+
 int libxl__netbuffer_enabled(libxl__gc *gc)
 {
     return 1;
 }
 
+/* If the device has a vifname, then use that instead of
+ * the vifX.Y format.
+ */
+static const char *get_vifname(libxl__gc *gc, uint32_t domid,
+                               libxl_device_nic *nic)
+{
+    const char *vifname = NULL;
+    const char *path;
+    int rc;
+
+    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
+                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
+    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
+    if (rc < 0) {
+        /* use the default name */
+        vifname = libxl__device_nic_devname(gc, domid,
+                                            nic->devid,
+                                            nic->nictype);
+    }
+
+    return vifname;
+}
+
+static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
+                                       int *num_vifs)
+{
+    libxl_device_nic *nics = NULL;
+    int nb, i = 0;
+    const char **vif_list = NULL;
+
+    *num_vifs = 0;
+    nics = libxl_device_nic_list(CTX, domid, &nb);
+    if (!nics)
+        return NULL;
+
+    /* Ensure that none of the vifs are backed by driver domains */
+    for (i = 0; i < nb; i++) {
+        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
+                "Network buffering is not supported with driver domains",
+                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);
+            *num_vifs = -1;
+            goto out;
+        }
+    }
+
+    GCNEW_ARRAY(vif_list, nb);
+    for (i = 0; i < nb; ++i) {
+        vif_list[i] = get_vifname(gc, domid, &nics[i]);
+        if (!vif_list[i]) {
+            vif_list = NULL;
+            goto out;
+        }
+    }
+    *num_vifs = nb;
+
+ out:
+    for (i = 0; i < nb; i++)
+        libxl_device_nic_dispose(&nics[i]);
+    free(nics);
+    return vif_list;
+}
+
+static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
+{
+    int i;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* free qdiscs */
+    for (i = 0; i < netbuf_state->num_netbufs; i++) {
+        qdisc = netbuf_state->netbuf_qdisc_list[i];
+        if (!qdisc)
+            break;
+
+        nl_object_put((struct nl_object *)qdisc);
+    }
+
+    /* free qdisc cache */
+    nl_cache_clear(netbuf_state->qdisc_cache);
+    nl_cache_free(netbuf_state->qdisc_cache);
+
+    /* close nlsock */
+    nl_close(netbuf_state->nlsock);
+
+    /* free nlsock */
+    nl_socket_free(netbuf_state->nlsock);
+}
+
+static int init_qdiscs(libxl__gc *gc,
+                       libxl__remus_state *remus_state)
+{
+    int i, ret, ifindex;
+    struct rtnl_link *ifb = NULL;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
+    const int num_netbufs = netbuf_state->num_netbufs;
+    const char ** const ifb_list = netbuf_state->ifb_list;
+
+    /* Now that we have brought up IFB devices with plug qdisc for
+     * each vif, let's get a netlink handle on the plug qdisc for use
+     * during checkpointing.
+     */
+    netbuf_state->nlsock = nl_socket_alloc();
+    if (!netbuf_state->nlsock) {
+        LOG(ERROR, "cannot allocate nl socket");
+        goto out;
+    }
+
+    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
+    if (ret) {
+        LOG(ERROR, "failed to open netlink socket: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* get list of all qdiscs installed on network devs. */
+    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
+                                 &netbuf_state->qdisc_cache);
+    if (ret) {
+        LOG(ERROR, "failed to allocate qdisc cache: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* list of handles to plug qdiscs */
+    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
+
+    for (i = 0; i < num_netbufs; ++i) {
+
+        /* get a handle to the IFB interface */
+        ifb = NULL;
+        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
+                                   ifb_list[i], &ifb);
+        if (ret) {
+            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
+                nl_geterror(ret));
+            goto out;
+        }
+
+        ifindex = rtnl_link_get_ifindex(ifb);
+        if (!ifindex) {
+            LOG(ERROR, "interface %s has no index", ifb_list[i]);
+            goto out;
+        }
+
+        /* Get a reference to the root qdisc installed on the IFB, by
+         * querying the qdisc list we obtained earlier. The netbufscript
+         * sets up the plug qdisc as the root qdisc, so we don't have to
+         * search the entire qdisc tree on the IFB dev.
+
+         * There is no need to explicitly free this qdisc as its just a
+         * reference from the qdisc cache we allocated earlier.
+         */
+        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
+                                         TC_H_ROOT);
+
+        if (qdisc) {
+            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
+            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
+            if (!tc_kind || strcmp(tc_kind, "plug")) {
+                nl_object_put((struct nl_object *)qdisc);
+                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
+                goto out;
+            }
+            netbuf_state->netbuf_qdisc_list[i] = qdisc;
+        } else {
+            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
+            goto out;
+        }
+        rtnl_link_put(ifb);
+    }
+
+    return 0;
+
+ out:
+    if (ifb)
+        rtnl_link_put(ifb);
+    free_qdiscs(netbuf_state);
+    return ERROR_FAIL;
+}
+
+static void netbuf_setup_timeout_cb(libxl__egc *egc,
+                                    libxl__ev_time *ev,
+                                    const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+    assert(libxl__ev_child_inuse(&remus_state->child));
+
+    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
+        remus_state->netbufscript, vif);
+
+    if (kill(remus_state->child.pid, SIGKILL)) {
+        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
+              remus_state->netbufscript,
+              (unsigned long)remus_state->child.pid);
+    }
+
+    return;
+}
+
+/* the script needs the following env & args
+ * $vifname
+ * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
+ * $IFB (for teardown)
+ * setup/teardown as command line arg.
+ * In return, the script writes the name of IFB device (during setup) to be
+ * used for output buffering into XENBUS_PATH/ifb
+ */
+static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
+                              char *op, libxl__ev_child_callback *death)
+{
+    int arraysize = 7, nr = 0;
+    char **env = NULL, **args = NULL;
+    pid_t pid;
+
+    /* Convenience aliases */
+    libxl__ev_child *const child = &remus_state->child;
+    libxl__ev_time *const timeout = &remus_state->timeout;
+    char *const script = libxl__strdup(gc, remus_state->netbufscript);
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char *const ifb = netbuf_state->ifb_list[devid];
+
+    GCNEW_ARRAY(env, arraysize);
+    env[nr++] = "vifname";
+    env[nr++] = libxl__strdup(gc, vif);
+    env[nr++] = "XENBUS_PATH";
+    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
+                          libxl__xs_libxl_path(gc, domid), devid);
+    if (!strcmp(op, "teardown")) {
+        env[nr++] = "IFB";
+        env[nr++] = libxl__strdup(gc, ifb);
+    }
+    env[nr++] = NULL;
+    assert(nr <= arraysize);
+
+    arraysize = 3; nr = 0;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = script;
+    args[nr++] = op;
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    /* Set hotplug timeout */
+    if (libxl__ev_time_register_rel(gc, timeout,
+                                    netbuf_setup_timeout_cb,
+                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
+        LOG(ERROR, "unable to register timeout for "
+            "netbuf setup script %s on vif %s", script, vif);
+        return ERROR_FAIL;
+    }
+
+    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
+        script, op, vif);
+
+    /* Fork and exec netbuf script */
+    pid = libxl__ev_child_fork(gc, child, death);
+    if (pid == -1) {
+        LOG(ERROR, "unable to fork netbuf script %s", script);
+        return ERROR_FAIL;
+    }
+
+    if (!pid) {
+        /* child: Launch netbuf script */
+        libxl__exec(gc, -1, -1, -1, args[0], args, env);
+        /* notreached */
+        abort();
+    }
+
+    return 0;
+}
+
+static void netbuf_setup_script_cb(libxl__egc *egc,
+                                   libxl__ev_child *child,
+                                   pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+    const char *out_path_base, *hotplug_error = NULL;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char **const ifb = &netbuf_state->ifb_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    out_path_base = GCSPRINTF("%s/remus/netbuf/%d",
+                              libxl__xs_libxl_path(gc, domid), devid);
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/hotplug-error", out_path_base),
+                                &hotplug_error);
+    if (rc)
+        goto out;
+
+    if (hotplug_error) {
+        LOG(ERROR, "netbuf script %s setup failed for vif %s: %s",
+            remus_state->netbufscript,
+            netbuf_state->vif_list[devid], hotplug_error);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/remus/netbuf/%d/ifb",
+                                          libxl__xs_libxl_path(gc, domid),
+                                          devid),
+                                ifb);
+    if (rc)
+        goto out;
+
+    if (!(*ifb)) {
+        LOG(ERROR, "Cannot get ifb dev name for domain %u dev %s",
+            domid, vif);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    LOG(DEBUG, "%s will buffer packets from vif %s", *ifb, vif);
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        rc = exec_netbuf_script(gc, remus_state,
+                                "setup", netbuf_setup_script_cb);
+        if (rc)
+            goto out;
+
+        return;
+    }
+
+    rc = init_qdiscs(gc, remus_state);
+ out:
+    libxl__remus_setup_done(egc, remus_state->dss, rc);
+}
+
+/* Scan through the list of vifs belonging to domid and
+ * invoke the netbufscript to setup the IFB device & plug qdisc
+ * for each vif. Then scan through the list of IFB devices to obtain
+ * a handle on the plug qdisc installed on these IFB devices.
+ * Network output buffering is controlled via these qdiscs.
+ */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+    libxl__remus_netbuf_state *netbuf_state = NULL;
+    int num_netbufs = 0;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = dss->domid;
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    GCNEW(netbuf_state);
+    netbuf_state->vif_list = get_guest_vif_list(gc, domid, &num_netbufs);
+    if (!num_netbufs) {
+        rc = 0;
+        goto out;
+    }
+
+    if (num_netbufs < 0) goto out;
+
+    GCNEW_ARRAY(netbuf_state->ifb_list, num_netbufs);
+    netbuf_state->num_netbufs = num_netbufs;
+    remus_state->netbuf_state = netbuf_state;
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "setup",
+                           netbuf_setup_script_cb))
+        goto out;
+    return;
+
+ out:
+    libxl__remus_setup_done(egc, dss, rc);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 6aa4bf1..acfa534 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -22,6 +22,12 @@ int libxl__netbuffer_enabled(libxl__gc *gc)
     return 0;
 }
 
+/* Remus network buffer related stubs */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
new file mode 100644
index 0000000..b3342b3
--- /dev/null
+++ b/tools/libxl/libxl_remus.c
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2014
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+/*----- remus setup/teardown code -----*/
+
+void libxl__remus_setup_done(libxl__egc *egc,
+                             libxl__domain_suspend_state *dss,
+                             int rc)
+{
+    STATE_AO_GC(dss->ao);
+    if (!rc) {
+        libxl__domain_suspend(egc, dss);
+        return;
+    }
+
+    LOG(ERROR, "Remus: failed to setup network buffering"
+        " for guest with domid %u", dss->domid);
+    domain_suspend_done(egc, dss, rc);
+}
-- 
1.8.4.2



Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

The following steps are taken during setup:
 a) Call the hotplug script for each vif to set up its network buffer.

 b) Establish a dedicated Remus context containing libnl-related
    state (netlink sockets, qdisc caches, etc.).

 c) Obtain handles to the plug qdiscs installed on the IFB devices
    chosen by the hotplug scripts.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/misc/xenstore-paths.markdown |   4 +
 tools/libxl/Makefile              |   2 +
 tools/libxl/libxl_dom.c           |   7 +-
 tools/libxl/libxl_internal.h      |  11 +
 tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c   |   6 +
 tools/libxl/libxl_remus.c         |  35 ++++
 7 files changed, 479 insertions(+), 5 deletions(-)
 create mode 100644 tools/libxl/libxl_remus.c

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..7a0d2c9 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
 
 The device model version for a domain.
 
+#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
+
+IFB device used by Remus to buffer network output from the associated vif.
+
 [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
 [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
 [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 84a467c..218f55e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -52,6 +52,8 @@ else
 LIBXL_OBJS-y += libxl_nonetbuffer.o
 endif
 
+LIBXL_OBJS-y += libxl_remus.o
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 8d63f90..e3e9f6f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
 
 /*==================== Domain suspend (save) ====================*/
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc);
-
 /*----- complicated callback, called by xc_domain_save -----*/
 
 /*
@@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     dss->save_dm_callback(egc, dss, our_rc);
 }
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc)
+void domain_suspend_done(libxl__egc *egc,
+                         libxl__domain_suspend_state *dss, int rc)
 {
     STATE_AO_GC(dss->ao);
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2f64382..0430307 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
 
 _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
 
+_hidden void domain_suspend_done(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss,
+                                 int rc);
+
+_hidden void libxl__remus_setup_done(libxl__egc *egc,
+                                     libxl__domain_suspend_state *dss,
+                                     int rc);
+
+_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
+                                       libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 8e23d75..0be876c 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -17,11 +17,430 @@
 
 #include "libxl_internal.h"
 
+#include <netlink/cache.h>
+#include <netlink/socket.h>
+#include <netlink/attr.h>
+#include <netlink/route/link.h>
+#include <netlink/route/route.h>
+#include <netlink/route/qdisc.h>
+#include <netlink/route/qdisc/plug.h>
+
+typedef struct libxl__remus_netbuf_state {
+    struct rtnl_qdisc **netbuf_qdisc_list;
+    struct nl_sock *nlsock;
+    struct nl_cache *qdisc_cache;
+    const char **vif_list;
+    const char **ifb_list;
+    uint32_t num_netbufs;
+    uint32_t unused;
+} libxl__remus_netbuf_state;
+
 int libxl__netbuffer_enabled(libxl__gc *gc)
 {
     return 1;
 }
 
+/* If the device has a vifname, then use that instead of
+ * the vifX.Y format.
+ */
+static const char *get_vifname(libxl__gc *gc, uint32_t domid,
+                               libxl_device_nic *nic)
+{
+    const char *vifname = NULL;
+    const char *path;
+    int rc;
+
+    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
+                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
+    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
+    if (rc < 0) {
+        /* use the default name */
+        vifname = libxl__device_nic_devname(gc, domid,
+                                            nic->devid,
+                                            nic->nictype);
+    }
+
+    return vifname;
+}
+
+static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
+                                       int *num_vifs)
+{
+    libxl_device_nic *nics = NULL;
+    int nb, i = 0;
+    const char **vif_list = NULL;
+
+    *num_vifs = 0;
+    nics = libxl_device_nic_list(CTX, domid, &nb);
+    if (!nics)
+        return NULL;
+
+    /* Ensure that none of the vifs are backed by driver domains */
+    for (i = 0; i < nb; i++) {
+        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
+                "Network buffering is not supported with driver domains",
+                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);
+            *num_vifs = -1;
+            goto out;
+        }
+    }
+
+    GCNEW_ARRAY(vif_list, nb);
+    for (i = 0; i < nb; ++i) {
+        vif_list[i] = get_vifname(gc, domid, &nics[i]);
+        if (!vif_list[i]) {
+            vif_list = NULL;
+            goto out;
+        }
+    }
+    *num_vifs = nb;
+
+ out:
+    for (i = 0; i < nb; i++)
+        libxl_device_nic_dispose(&nics[i]);
+    free(nics);
+    return vif_list;
+}
+
+static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
+{
+    int i;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* free qdiscs */
+    for (i = 0; i < netbuf_state->num_netbufs; i++) {
+        qdisc = netbuf_state->netbuf_qdisc_list[i];
+        if (!qdisc)
+            break;
+
+        nl_object_put((struct nl_object *)qdisc);
+    }
+
+    /* free qdisc cache */
+    nl_cache_clear(netbuf_state->qdisc_cache);
+    nl_cache_free(netbuf_state->qdisc_cache);
+
+    /* close nlsock */
+    nl_close(netbuf_state->nlsock);
+
+    /* free nlsock */
+    nl_socket_free(netbuf_state->nlsock);
+}
+
+static int init_qdiscs(libxl__gc *gc,
+                       libxl__remus_state *remus_state)
+{
+    int i, ret, ifindex;
+    struct rtnl_link *ifb = NULL;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
+    const int num_netbufs = netbuf_state->num_netbufs;
+    const char ** const ifb_list = netbuf_state->ifb_list;
+
+    /* Now that we have brought up IFB devices with plug qdisc for
+     * each vif, let's get a netlink handle on the plug qdisc for use
+     * during checkpointing.
+     */
+    netbuf_state->nlsock = nl_socket_alloc();
+    if (!netbuf_state->nlsock) {
+        LOG(ERROR, "cannot allocate nl socket");
+        goto out;
+    }
+
+    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
+    if (ret) {
+        LOG(ERROR, "failed to open netlink socket: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* get list of all qdiscs installed on network devs. */
+    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
+                                 &netbuf_state->qdisc_cache);
+    if (ret) {
+        LOG(ERROR, "failed to allocate qdisc cache: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* list of handles to plug qdiscs */
+    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
+
+    for (i = 0; i < num_netbufs; ++i) {
+
+        /* get a handle to the IFB interface */
+        ifb = NULL;
+        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
+                                   ifb_list[i], &ifb);
+        if (ret) {
+            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
+                nl_geterror(ret));
+            goto out;
+        }
+
+        ifindex = rtnl_link_get_ifindex(ifb);
+        if (!ifindex) {
+            LOG(ERROR, "interface %s has no index", ifb_list[i]);
+            goto out;
+        }
+
+        /* Get a reference to the root qdisc installed on the IFB, by
+         * querying the qdisc list we obtained earlier. The netbufscript
+         * sets up the plug qdisc as the root qdisc, so we don't have to
+         * search the entire qdisc tree on the IFB dev.
+
+         * There is no need to explicitly free this qdisc as it's just a
+         * reference from the qdisc cache we allocated earlier.
+         */
+        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
+                                         TC_H_ROOT);
+
+        if (qdisc) {
+            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
+            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
+            if (!tc_kind || strcmp(tc_kind, "plug")) {
+                nl_object_put((struct nl_object *)qdisc);
+                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
+                goto out;
+            }
+            netbuf_state->netbuf_qdisc_list[i] = qdisc;
+        } else {
+            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
+            goto out;
+        }
+        rtnl_link_put(ifb);
+    }
+
+    return 0;
+
+ out:
+    if (ifb)
+        rtnl_link_put(ifb);
+    free_qdiscs(netbuf_state);
+    return ERROR_FAIL;
+}
+
+static void netbuf_setup_timeout_cb(libxl__egc *egc,
+                                    libxl__ev_time *ev,
+                                    const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+    assert(libxl__ev_child_inuse(&remus_state->child));
+
+    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
+        remus_state->netbufscript, vif);
+
+    if (kill(remus_state->child.pid, SIGKILL)) {
+        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
+              remus_state->netbufscript,
+              (unsigned long)remus_state->child.pid);
+    }
+
+    return;
+}
+
+/* the script needs the following env & args
+ * $vifname
+ * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
+ * $IFB (for teardown)
+ * setup/teardown as command line arg.
+ * In return, the script writes the name of the IFB device (during setup) to be
+ * used for output buffering into XENBUS_PATH/ifb
+ */
+static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
+                              char *op, libxl__ev_child_callback *death)
+{
+    int arraysize = 7, nr = 0;
+    char **env = NULL, **args = NULL;
+    pid_t pid;
+
+    /* Convenience aliases */
+    libxl__ev_child *const child = &remus_state->child;
+    libxl__ev_time *const timeout = &remus_state->timeout;
+    char *const script = libxl__strdup(gc, remus_state->netbufscript);
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char *const ifb = netbuf_state->ifb_list[devid];
+
+    GCNEW_ARRAY(env, arraysize);
+    env[nr++] = "vifname";
+    env[nr++] = libxl__strdup(gc, vif);
+    env[nr++] = "XENBUS_PATH";
+    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
+                          libxl__xs_libxl_path(gc, domid), devid);
+    if (!strcmp(op, "teardown")) {
+        env[nr++] = "IFB";
+        env[nr++] = libxl__strdup(gc, ifb);
+    }
+    env[nr++] = NULL;
+    assert(nr <= arraysize);
+
+    arraysize = 3; nr = 0;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = script;
+    args[nr++] = op;
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    /* Set hotplug timeout */
+    if (libxl__ev_time_register_rel(gc, timeout,
+                                    netbuf_setup_timeout_cb,
+                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
+        LOG(ERROR, "unable to register timeout for "
+            "netbuf setup script %s on vif %s", script, vif);
+        return ERROR_FAIL;
+    }
+
+    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
+        script, op, vif);
+
+    /* Fork and exec netbuf script */
+    pid = libxl__ev_child_fork(gc, child, death);
+    if (pid == -1) {
+        LOG(ERROR, "unable to fork netbuf script %s", script);
+        return ERROR_FAIL;
+    }
+
+    if (!pid) {
+        /* child: Launch netbuf script */
+        libxl__exec(gc, -1, -1, -1, args[0], args, env);
+        /* notreached */
+        abort();
+    }
+
+    return 0;
+}
+
+static void netbuf_setup_script_cb(libxl__egc *egc,
+                                   libxl__ev_child *child,
+                                   pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+    const char *out_path_base, *hotplug_error = NULL;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char **const ifb = &netbuf_state->ifb_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    out_path_base = GCSPRINTF("%s/remus/netbuf/%d",
+                              libxl__xs_libxl_path(gc, domid), devid);
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/hotplug-error", out_path_base),
+                                &hotplug_error);
+    if (rc)
+        goto out;
+
+    if (hotplug_error) {
+        LOG(ERROR, "netbuf script %s setup failed for vif %s: %s",
+            remus_state->netbufscript,
+            netbuf_state->vif_list[devid], hotplug_error);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/remus/netbuf/%d/ifb",
+                                          libxl__xs_libxl_path(gc, domid),
+                                          devid),
+                                ifb);
+    if (rc)
+        goto out;
+
+    if (!(*ifb)) {
+        LOG(ERROR, "Cannot get ifb dev name for domain %u dev %s",
+            domid, vif);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    LOG(DEBUG, "%s will buffer packets from vif %s", *ifb, vif);
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        rc = exec_netbuf_script(gc, remus_state,
+                                "setup", netbuf_setup_script_cb);
+        if (rc)
+            goto out;
+
+        return;
+    }
+
+    rc = init_qdiscs(gc, remus_state);
+ out:
+    libxl__remus_setup_done(egc, remus_state->dss, rc);
+}
+
+/* Scan through the list of vifs belonging to domid and
+ * invoke the netbufscript to setup the IFB device & plug qdisc
+ * for each vif. Then scan through the list of IFB devices to obtain
+ * a handle on the plug qdisc installed on these IFB devices.
+ * Network output buffering is controlled via these qdiscs.
+ */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+    libxl__remus_netbuf_state *netbuf_state = NULL;
+    int num_netbufs = 0;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = dss->domid;
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    GCNEW(netbuf_state);
+    netbuf_state->vif_list = get_guest_vif_list(gc, domid, &num_netbufs);
+    if (!num_netbufs) {
+        rc = 0;
+        goto out;
+    }
+
+    if (num_netbufs < 0) goto out;
+
+    GCNEW_ARRAY(netbuf_state->ifb_list, num_netbufs);
+    netbuf_state->num_netbufs = num_netbufs;
+    remus_state->netbuf_state = netbuf_state;
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "setup",
+                           netbuf_setup_script_cb))
+        goto out;
+    return;
+
+ out:
+    libxl__remus_setup_done(egc, dss, rc);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 6aa4bf1..acfa534 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -22,6 +22,12 @@ int libxl__netbuffer_enabled(libxl__gc *gc)
     return 0;
 }
 
+/* Remus network buffer related stubs */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
new file mode 100644
index 0000000..b3342b3
--- /dev/null
+++ b/tools/libxl/libxl_remus.c
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2014
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+/*----- remus setup/teardown code -----*/
+
+void libxl__remus_setup_done(libxl__egc *egc,
+                             libxl__domain_suspend_state *dss,
+                             int rc)
+{
+    STATE_AO_GC(dss->ao);
+    if (!rc) {
+        libxl__domain_suspend(egc, dss);
+        return;
+    }
+
+    LOG(ERROR, "Remus: failed to setup network buffering"
+        " for guest with domid %u", dss->domid);
+    domain_suspend_done(egc, dss, rc);
+}
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF3-0003yY-40; Tue, 21 Jan 2014 09:03:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF0-0003uZ-Q8
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:26 +0000
Received: from [193.109.254.147:65454] by server-3.bemta-14.messagelabs.com id
	3E/87-11000-DD73ED25; Tue, 21 Jan 2014 09:03:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!6
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29841 invoked from network); 21 Jan 2014 09:03:24 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439859"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xp004370;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014573-1244801 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:14 +0800
Message-Id: <1390295117-718-11-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 10/13 V6] remus: use the API to teardown network
	buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

If network buffering was set up by the netbuf hotplug
scripts, call libxl__remus_netbuf_teardown() to tear down
network buffering.
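In tc terms, tearing down buffering for one vif amounts to releasing any packets still held by the plug qdisc and then removing it; the real work is done by the hotplug script invoked with "teardown" plus libxl freeing its libnl caches and sockets. A dry-run sketch (commands echoed; the function and device names are hypothetical, not part of this patch):

```shell
# Dry-run sketch of netbuf teardown for a single vif's IFB device.
# Commands are echoed rather than executed; names are illustrative.
netbuf_teardown() {
    ifb="$1"
    # release any packets still queued in the plug qdisc so the guest's
    # peers are not left waiting on a dead buffer
    echo "tc qdisc change dev $ifb root plug release_indefinite"
    # then remove the qdisc and return the IFB device
    echo "tc qdisc del dev $ifb root"
    echo "ip link set dev $ifb down"
}
```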

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |  4 ----
 tools/libxl/libxl_dom.c      | 11 +++++++++++
 tools/libxl/libxl_internal.h |  4 ++++
 tools/libxl/libxl_remus.c    | 13 +++++++++++++
 4 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f45fd74..83d3772 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -778,10 +778,6 @@ static void remus_failover_cb(libxl__egc *egc,
      * backup died or some network error occurred preventing us
      * from sending checkpoints.
      */
-
-    /* TBD: Remus cleanup - i.e. detach qdisc, release other
-     * resources.
-     */
     libxl__ao_complete(egc, ao, rc);
 }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e3e9f6f..912a6e4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1519,6 +1519,17 @@ void domain_suspend_done(libxl__egc *egc,
     if (dss->xce != NULL)
         xc_evtchn_close(dss->xce);
 
+    if (dss->remus_state) {
+        /*
+         * With Remus, if we reach this point, it means either
+         * backup died or some network error occurred preventing us
+         * from sending checkpoints. Teardown the network buffers and
+         * release netlink resources.  This is an async op.
+         */
+        libxl__remus_teardown_initiate(egc, dss, rc);
+        return;
+    }
+
     dss->callback(egc, dss, rc);
 }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 657cfc4..f818916 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2339,6 +2339,10 @@ _hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
 _hidden void libxl__remus_setup_initiate(libxl__egc *egc,
                                          libxl__domain_suspend_state *dss);
 
+_hidden void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss,
+                                            int rc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index 38776e1..0d281a0 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -44,6 +44,19 @@ void libxl__remus_setup_done(libxl__egc *egc,
     domain_suspend_done(egc, dss, rc);
 }
 
+void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                    libxl__domain_suspend_state *dss,
+                                    int rc)
+{
+    /* stash rc somewhere before invoking teardown ops. */
+    dss->remus_state->saved_rc = rc;
+
+    if (!dss->remus_state->netbuf_state)
+        libxl__remus_teardown_done(egc, dss);
+    else
+        libxl__remus_netbuf_teardown(egc, dss);
+}
+
 void libxl__remus_teardown_done(libxl__egc *egc,
                                 libxl__domain_suspend_state *dss)
 {
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF3-0003yY-40; Tue, 21 Jan 2014 09:03:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF0-0003uZ-Q8
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:26 +0000
Received: from [193.109.254.147:65454] by server-3.bemta-14.messagelabs.com id
	3E/87-11000-DD73ED25; Tue, 21 Jan 2014 09:03:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!6
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29841 invoked from network); 21 Jan 2014 09:03:24 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439859"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xp004370;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014573-1244801 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:14 +0800
Message-Id: <1390295117-718-11-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 10/13 V6] remus: use the API to teardown network
	buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

If there are network buffering hotplug scripts, call
libxl__remus_netbuf_teardown() to tear down network
buffering.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |  4 ----
 tools/libxl/libxl_dom.c      | 11 +++++++++++
 tools/libxl/libxl_internal.h |  4 ++++
 tools/libxl/libxl_remus.c    | 13 +++++++++++++
 4 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f45fd74..83d3772 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -778,10 +778,6 @@ static void remus_failover_cb(libxl__egc *egc,
      * backup died or some network error occurred preventing us
      * from sending checkpoints.
      */
-
-    /* TBD: Remus cleanup - i.e. detach qdisc, release other
-     * resources.
-     */
     libxl__ao_complete(egc, ao, rc);
 }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e3e9f6f..912a6e4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1519,6 +1519,17 @@ void domain_suspend_done(libxl__egc *egc,
     if (dss->xce != NULL)
         xc_evtchn_close(dss->xce);
 
+    if (dss->remus_state) {
+        /*
+         * With Remus, if we reach this point, it means either
+         * backup died or some network error occurred preventing us
+         * from sending checkpoints. Teardown the network buffers and
+         * release netlink resources.  This is an async op.
+         */
+        libxl__remus_teardown_initiate(egc, dss, rc);
+        return;
+    }
+
     dss->callback(egc, dss, rc);
 }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 657cfc4..f818916 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2339,6 +2339,10 @@ _hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
 _hidden void libxl__remus_setup_initiate(libxl__egc *egc,
                                          libxl__domain_suspend_state *dss);
 
+_hidden void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss,
+                                            int rc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index 38776e1..0d281a0 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -44,6 +44,19 @@ void libxl__remus_setup_done(libxl__egc *egc,
     domain_suspend_done(egc, dss, rc);
 }
 
+void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                    libxl__domain_suspend_state *dss,
+                                    int rc)
+{
+    /* stash rc somewhere before invoking teardown ops. */
+    dss->remus_state->saved_rc = rc;
+
+    if (!dss->remus_state->netbuf_state)
+        libxl__remus_teardown_done(egc, dss);
+    else
+        libxl__remus_netbuf_teardown(egc, dss);
+}
+
 void libxl__remus_teardown_done(libxl__egc *egc,
                                 libxl__domain_suspend_state *dss)
 {
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF3-0003zF-Kb; Tue, 21 Jan 2014 09:03:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF1-0003v1-8x
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:27 +0000
Received: from [85.158.139.211:45616] by server-9.bemta-5.messagelabs.com id
	2B/BA-15098-ED73ED25; Tue, 21 Jan 2014 09:03:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390294999!10969712!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31727 invoked from network); 21 Jan 2014 09:03:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439861"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938B9004372;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014575-1244802 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:15 +0800
Message-Id: <1390295117-718-12-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 11/13 V6] remus: rename remus_failover_cb() to
	remus_replication_failure_cb()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Failover means that the machine on which the primary VM is running is
down, and the secondary VM must be started to take over.
remus_failover_cb() is called when Remus replication fails, not when a
failover is needed, so rename it to remus_replication_failure_cb().

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 83d3772..70e34c0 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -702,8 +702,9 @@ out:
     return ptr;
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc);
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc);
 
 /* TODO: Explicit Checkpoint acknowledgements via recv_fd. */
 int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
@@ -722,7 +723,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     GCNEW(dss);
     dss->ao = ao;
-    dss->callback = remus_failover_cb;
+    dss->callback = remus_replication_failure_cb;
     dss->domid = domid;
     dss->fd = send_fd;
     /* TODO do something with recv_fd */
@@ -769,8 +770,9 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     return AO_ABORT(rc);
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc)
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc)
 {
     STATE_AO_GC(dss->ao);
     /*
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF4-00040J-8c; Tue, 21 Jan 2014 09:03:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF1-0003uy-6Z
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:27 +0000
Received: from [85.158.139.211:27221] by server-13.bemta-5.messagelabs.com id
	C1/3D-11357-ED73ED25; Tue, 21 Jan 2014 09:03:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26018 invoked from network); 21 Jan 2014 09:03:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439860"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:29 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937f3004371;
	Tue, 21 Jan 2014 17:03:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014550-1244799 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:12 +0800
Message-Id: <1390295117-718-9-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:47,
	Serialize complete at 2014/01/21 17:01:47
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 08/13 V6] remus: implement the API to teardown
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

During teardown, the netlink resources are released, followed by
invocation of hotplug scripts to remove the IFB devices.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_internal.h    |  6 +++++
 tools/libxl/libxl_netbuffer.c   | 52 +++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c |  5 ++++
 tools/libxl/libxl_remus.c       |  6 +++++
 4 files changed, 69 insertions(+)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 7216f89..358590b 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2330,6 +2330,12 @@ _hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
 _hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
                                                   libxl__remus_state *remus_state);
 
+_hidden void libxl__remus_teardown_done(libxl__egc *egc,
+                                        libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                          libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 1b61597..686101b 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -490,6 +490,58 @@ int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
     return remus_netbuf_op(gc, domid, remus_state, tc_buffer_release);
 }
 
+static void netbuf_teardown_script_cb(libxl__egc *egc,
+                                      libxl__ev_child *child,
+                                      pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+    }
+
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        if (exec_netbuf_script(gc, remus_state,
+                               "teardown", netbuf_teardown_script_cb))
+            goto out;
+        return;
+    }
+
+ out:
+    libxl__remus_teardown_done(egc, remus_state->dss);
+}
+
+/* Note: This function will be called in the same gc context as
+ * libxl__remus_netbuf_setup, created during the libxl_domain_remus_start
+ * API call.
+ */
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(dss->ao);
+
+    free_qdiscs(netbuf_state);
+
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "teardown",
+                           netbuf_teardown_script_cb))
+        libxl__remus_teardown_done(egc, dss);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index a3e3f5c..ef7b513 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -42,6 +42,11 @@ int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
     return ERROR_FAIL;
 }
 
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index b3342b3..6790c61 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -33,3 +33,9 @@ void libxl__remus_setup_done(libxl__egc *egc,
         " for guest with domid %u", dss->domid);
     domain_suspend_done(egc, dss, rc);
 }
+
+void libxl__remus_teardown_done(libxl__egc *egc,
+                                libxl__domain_suspend_state *dss)
+{
+    dss->callback(egc, dss, dss->remus_state->saved_rc);
+}
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF6-000431-14; Tue, 21 Jan 2014 09:03:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF2-0003wb-FP
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:28 +0000
Received: from [193.109.254.147:23608] by server-9.bemta-14.messagelabs.com id
	49/5C-13957-FD73ED25; Tue, 21 Jan 2014 09:03:27 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390294992!12124130!7
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30023 invoked from network); 21 Jan 2014 09:03:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-10.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 09:03:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439862"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:30 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L938Cu004373;
	Tue, 21 Jan 2014 17:03:10 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014576-1244803 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:16 +0800
Message-Id: <1390295117-718-13-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:48,
	Serialize complete at 2014/01/21 17:01:48
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 12/13 V6] tools/libxl: control network buffering
	in remus callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch constitutes the core network buffering logic
and does the following:
 a) create a new network buffer when the domain is suspended
    (remus_domain_suspend_callback)
 b) release the previous network buffer pertaining to the
    committed checkpoint (remus_domain_checkpoint_dm_saved)

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_dom.c | 90 ++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 82 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 912a6e4..a4ffdfd 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1243,8 +1243,30 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* REMUS TODO: Issue disk and network checkpoint reqs. */
-    return libxl__domain_suspend_common_callback(data);
+    libxl__save_helper_state *shs = data;
+    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
+
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    /* REMUS TODO: Issue disk checkpoint reqs. */
+    int ok = libxl__domain_suspend_common_callback(data);
+
+    if (!remus_state->netbuf_state || !ok) goto out;
+
+    /* The domain was suspended successfully. Start a new network
+     * buffer for the next epoch. If this operation fails, then act
+     * as though domain suspend failed -- libxc exits its infinite
+     * loop and ultimately, the replication stops.
+     */
+    if (libxl__remus_netbuf_start_new_epoch(gc, dss->domid,
+                                            remus_state))
+        ok = 0;
+
+ out:
+    return ok;
 }
 
 static int libxl__remus_domain_resume_callback(void *data)
@@ -1257,7 +1279,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* REMUS TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. */
     return 1;
 }
 
@@ -1266,11 +1288,17 @@ static int libxl__remus_domain_resume_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc);
 
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs);
+
 static void libxl__remus_domain_checkpoint_callback(void *data)
 {
     libxl__save_helper_state *shs = data;
     libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
-    libxl__egc *egc = dss->shs.egc;
+
+    /* Convenience aliases */
+    libxl__egc *const egc = dss->shs.egc;
+
     STATE_AO_GC(dss->ao);
 
     /* This would go into tailbuf. */
@@ -1284,10 +1312,56 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
-    /* REMUS TODO: make this asynchronous */
-    assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->remus_state->interval * 1000);
+    /*
+     * REMUS TODO: Wait for disk and explicit memory ack (through restore
+     * callback from remote) before releasing network buffer.
+     */
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    if (rc) {
+        LOG(ERROR, "Failed to save device model. Terminating Remus..");
+        goto out;
+    }
+
+    if (remus_state->netbuf_state) {
+        rc = libxl__remus_netbuf_release_prev_epoch(gc, dss->domid,
+                                                    remus_state);
+        if (rc) {
+            LOG(ERROR, "Failed to release network buffer."
+                " Terminating Remus..");
+            goto out;
+        }
+    }
+
+    /* Set checkpoint interval timeout */
+    rc = libxl__ev_time_register_rel(gc, &remus_state->timeout,
+                                     remus_next_checkpoint,
+                                     dss->remus_state->interval);
+    if (rc) {
+        LOG(ERROR, "unable to register timeout for next epoch."
+            " Terminating Remus..");
+        goto out;
+    }
+    return;
+
+ out:
+    libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 0);
+}
+
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    libxl__domain_suspend_state *const dss = remus_state->dss;
+
+    STATE_AO_GC(dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF7-00044n-4d; Tue, 21 Jan 2014 09:03:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF3-0003yF-Ev
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:29 +0000
Received: from [85.158.139.211:51780] by server-5.bemta-5.messagelabs.com id
	29/27-14928-0E73ED25; Tue, 21 Jan 2014 09:03:28 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26233 invoked from network); 21 Jan 2014 09:03:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439863"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:30 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xq004370;
	Tue, 21 Jan 2014 17:03:10 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014578-1244804 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:17 +0800
Message-Id: <1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:48,
	Serialize complete at 2014/01/21 17:01:48
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 13/13 V6] tools/libxl: network buffering cmdline
	switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add a command line switch to the 'xl remus' command to control network
buffering, and pass the flag on to libxl so that it can act accordingly.
Also update the man pages to reflect the addition of the new option to
the 'xl remus' command.

Note: network buffering is enabled by default. To disable it, use the
-n option.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/man/xl.conf.pod.5    |  6 ++++++
 docs/man/xl.pod.1         | 11 ++++++++++-
 tools/libxl/xl.c          |  4 ++++
 tools/libxl/xl.h          |  1 +
 tools/libxl/xl_cmdimpl.c  | 28 ++++++++++++++++++++++------
 tools/libxl/xl_cmdtable.c |  3 +++
 6 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.conf.pod.5 b/docs/man/xl.conf.pod.5
index 7c43bde..8ae19bb 100644
--- a/docs/man/xl.conf.pod.5
+++ b/docs/man/xl.conf.pod.5
@@ -105,6 +105,12 @@ Configures the default gateway device to set for virtual network devices.
 
 Default: C<None>
 
+=item B<remus.default.netbufscript="PATH">
+
+Configures the default script used by Remus to set up network buffering.
+
+Default: C</etc/xen/scripts/remus-netbuf-setup>
+
 =item B<output_format="json|sxp">
 
 Configures the default output format used by xl when printing "machine
diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..3c5f246 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -399,7 +399,7 @@ Enable Remus HA for domain. By default B<xl> relies on ssh as a transport
 mechanism between the two hosts.
 
 N.B: Remus support in xl is still in experimental (proof-of-concept) phase.
-     There is no support for network or disk buffering at the moment.
+     There is no support for disk buffering at the moment.
 
 B<OPTIONS>
 
@@ -418,6 +418,15 @@ Generally useful for debugging.
 
 Disable memory checkpoint compression.
 
+=item B<-n>
+
+Disable network output buffering.
+
+=item B<-N> I<netbufscript>
+
+Use <netbufscript> to set up network buffering instead of the default
+(/etc/xen/scripts/remus-netbuf-setup).
+
 =item B<-s> I<sshcommand>
 
 Use <sshcommand> instead of ssh.  String will be passed to sh.
diff --git a/tools/libxl/xl.c b/tools/libxl/xl.c
index 657610b..e02a618 100644
--- a/tools/libxl/xl.c
+++ b/tools/libxl/xl.c
@@ -46,6 +46,7 @@ char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
 char *default_vifbackend = NULL;
+char *default_remus_netbufscript = NULL;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 
@@ -177,6 +178,9 @@ static void parse_global_config(const char *configfile,
     if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
         claim_mode = l;
 
+    xlu_cfg_replace_string (config, "remus.default.netbufscript",
+                            &default_remus_netbufscript, 0);
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..d991fd3 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -153,6 +153,7 @@ extern char *default_vifscript;
 extern char *default_bridge;
 extern char *default_gatewaydev;
 extern char *default_vifbackend;
+extern char *default_remus_netbufscript;
 extern char *blkdev_start;
 
 enum output_format {
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..4145543 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7263,8 +7263,9 @@ int main_remus(int argc, char **argv)
     r_info.interval = 200;
     r_info.blackhole = 0;
     r_info.compression = 1;
+    r_info.netbuf = 1;
 
-    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    SWITCH_FOREACH_OPT(opt, "buni:s:N:e", NULL, "remus", 2) {
     case 'i':
         r_info.interval = atoi(optarg);
         break;
@@ -7274,6 +7275,12 @@ int main_remus(int argc, char **argv)
     case 'u':
         r_info.compression = 0;
         break;
+    case 'n':
+        r_info.netbuf = 0;
+        break;
+    case 'N':
+        r_info.netbufscript = optarg;
+        break;
     case 's':
         ssh_command = optarg;
         break;
@@ -7285,6 +7292,9 @@ int main_remus(int argc, char **argv)
     domid = find_domain(argv[optind]);
     host = argv[optind + 1];
 
+    if (!r_info.netbufscript)
+        r_info.netbufscript = default_remus_netbufscript;
+
     if (r_info.blackhole) {
         send_fd = open("/dev/null", O_RDWR, 0644);
         if (send_fd < 0) {
@@ -7322,13 +7332,19 @@ int main_remus(int argc, char **argv)
     /* Point of no return */
     rc = libxl_domain_remus_start(ctx, &r_info, domid, send_fd, recv_fd, 0);
 
-    /* If we are here, it means backup has failed/domain suspend failed.
-     * Try to resume the domain and exit gracefully.
-     * TODO: Split-Brain check.
+    /* Check if the domain still exists; the user may have destroyed it
+     * with xl to force a failover.
      */
-    fprintf(stderr, "remus sender: libxl_domain_suspend failed"
-            " (rc=%d)\n", rc);
+    if (libxl_domain_info(ctx, 0, domid)) {
+        fprintf(stderr, "Remus: Primary domain has been destroyed.\n");
+        close(send_fd);
+        return 0;
+    }
 
+    /* If we are here, it means remus setup/domain suspend/backup has
+     * failed. Try to resume the domain and exit gracefully.
+     * TODO: Split-Brain check.
+     */
     if (rc == ERROR_GUEST_TIMEDOUT)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..f05e07b 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
       "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
       "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
       "-u                      Disable memory checkpoint compression.\n"
+      "-n                      Disable network output buffering.\n"
+      "-N <netbufscript>       Use <netbufscript> to set up network buffering instead of\n"
+      "                        the default (/etc/xen/scripts/remus-netbuf-setup).\n"
       "-s <sshcommand>         Use <sshcommand> instead of ssh.  String will be passed\n"
       "                        to sh. If empty, run <host> instead of \n"
       "                        ssh <host> xl migrate-receive -r [-e]\n"
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:03:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XF7-00044n-4d; Tue, 21 Jan 2014 09:03:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W5XF3-0003yF-Ev
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:03:29 +0000
Received: from [85.158.139.211:51780] by server-5.bemta-5.messagelabs.com id
	29/27-14928-0E73ED25; Tue, 21 Jan 2014 09:03:28 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390294997!194548!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26233 invoked from network); 21 Jan 2014 09:03:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	21 Jan 2014 09:03:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384272000"; 
   d="scan'208";a="9439863"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 21 Jan 2014 16:59:30 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0L937Xq004370;
	Tue, 21 Jan 2014 17:03:10 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012117014578-1244804 ;
	Tue, 21 Jan 2014 17:01:45 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 17:05:17 +0800
Message-Id: <1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/21 17:01:45,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/21 17:01:48,
	Serialize complete at 2014/01/21 17:01:48
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 13/13 V6] tools/libxl: network buffering cmdline
	switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add a command line switch to the 'xl remus' command to control network
buffering, and pass the flag on to libxl so that it can act accordingly.
Also update the man pages to reflect the new option.

Note: network buffering is enabled by default. To disable it, use the
-n option.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/man/xl.conf.pod.5    |  6 ++++++
 docs/man/xl.pod.1         | 11 ++++++++++-
 tools/libxl/xl.c          |  4 ++++
 tools/libxl/xl.h          |  1 +
 tools/libxl/xl_cmdimpl.c  | 28 ++++++++++++++++++++++------
 tools/libxl/xl_cmdtable.c |  3 +++
 6 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.conf.pod.5 b/docs/man/xl.conf.pod.5
index 7c43bde..8ae19bb 100644
--- a/docs/man/xl.conf.pod.5
+++ b/docs/man/xl.conf.pod.5
@@ -105,6 +105,12 @@ Configures the default gateway device to set for virtual network devices.
 
 Default: C<None>
 
+=item B<remus.default.netbufscript="PATH">
+
+Configures the default script used by Remus to set up network buffering.
+
+Default: C</etc/xen/scripts/remus-netbuf-setup>
+
 =item B<output_format="json|sxp">
 
 Configures the default output format used by xl when printing "machine
diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..3c5f246 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -399,7 +399,7 @@ Enable Remus HA for domain. By default B<xl> relies on ssh as a transport
 mechanism between the two hosts.
 
 N.B: Remus support in xl is still in experimental (proof-of-concept) phase.
-     There is no support for network or disk buffering at the moment.
+     There is no support for disk buffering at the moment.
 
 B<OPTIONS>
 
@@ -418,6 +418,15 @@ Generally useful for debugging.
 
 Disable memory checkpoint compression.
 
+=item B<-n>
+
+Disable network output buffering.
+
+=item B<-N> I<netbufscript>
+
+Use <netbufscript> to set up network buffering instead of the default
+(/etc/xen/scripts/remus-netbuf-setup).
+
 =item B<-s> I<sshcommand>
 
 Use <sshcommand> instead of ssh.  String will be passed to sh.
diff --git a/tools/libxl/xl.c b/tools/libxl/xl.c
index 657610b..e02a618 100644
--- a/tools/libxl/xl.c
+++ b/tools/libxl/xl.c
@@ -46,6 +46,7 @@ char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
 char *default_vifbackend = NULL;
+char *default_remus_netbufscript = NULL;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 
@@ -177,6 +178,9 @@ static void parse_global_config(const char *configfile,
     if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
         claim_mode = l;
 
+    xlu_cfg_replace_string (config, "remus.default.netbufscript",
+                            &default_remus_netbufscript, 0);
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..d991fd3 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -153,6 +153,7 @@ extern char *default_vifscript;
 extern char *default_bridge;
 extern char *default_gatewaydev;
 extern char *default_vifbackend;
+extern char *default_remus_netbufscript;
 extern char *blkdev_start;
 
 enum output_format {
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..4145543 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7263,8 +7263,9 @@ int main_remus(int argc, char **argv)
     r_info.interval = 200;
     r_info.blackhole = 0;
     r_info.compression = 1;
+    r_info.netbuf = 1;
 
-    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    SWITCH_FOREACH_OPT(opt, "buni:s:N:e", NULL, "remus", 2) {
     case 'i':
         r_info.interval = atoi(optarg);
         break;
@@ -7274,6 +7275,12 @@ int main_remus(int argc, char **argv)
     case 'u':
         r_info.compression = 0;
         break;
+    case 'n':
+        r_info.netbuf = 0;
+        break;
+    case 'N':
+        r_info.netbufscript = optarg;
+        break;
     case 's':
         ssh_command = optarg;
         break;
@@ -7285,6 +7292,9 @@ int main_remus(int argc, char **argv)
     domid = find_domain(argv[optind]);
     host = argv[optind + 1];
 
+    if (!r_info.netbufscript)
+        r_info.netbufscript = default_remus_netbufscript;
+
     if (r_info.blackhole) {
         send_fd = open("/dev/null", O_RDWR, 0644);
         if (send_fd < 0) {
@@ -7322,13 +7332,19 @@ int main_remus(int argc, char **argv)
     /* Point of no return */
     rc = libxl_domain_remus_start(ctx, &r_info, domid, send_fd, recv_fd, 0);
 
-    /* If we are here, it means backup has failed/domain suspend failed.
-     * Try to resume the domain and exit gracefully.
-     * TODO: Split-Brain check.
+    /* Check if the domain still exists: the user may have destroyed
+     * it with xl to force a failover
      */
-    fprintf(stderr, "remus sender: libxl_domain_suspend failed"
-            " (rc=%d)\n", rc);
+    if (libxl_domain_info(ctx, 0, domid)) {
+        fprintf(stderr, "Remus: Primary domain has been destroyed.\n");
+        close(send_fd);
+        return 0;
+    }
 
+    /* If we are here, it means remus setup/domain suspend/backup has
+     * failed. Try to resume the domain and exit gracefully.
+     * TODO: Split-Brain check.
+     */
     if (rc == ERROR_GUEST_TIMEDOUT)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..f05e07b 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
       "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
       "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
       "-u                      Disable memory checkpoint compression.\n"
+      "-n                      Disable network output buffering.\n"
+      "-N <netbufscript>       Use netbufscript to set up network buffering instead of\n"
+      "                        the default (/etc/xen/scripts/remus-netbuf-setup).\n"
       "-s <sshcommand>         Use <sshcommand> instead of ssh.  String will be passed\n"
       "                        to sh. If empty, run <host> instead of \n"
       "                        ssh <host> xl migrate-receive -r [-e]\n"
-- 
1.8.4.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XgV-0006pl-8q; Tue, 21 Jan 2014 09:31:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5XgU-0006pg-0G
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:31:50 +0000
Received: from [193.109.254.147:33583] by server-4.bemta-14.messagelabs.com id
	12/A1-03916-58E3ED25; Tue, 21 Jan 2014 09:31:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390296707!12133563!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14745 invoked from network); 21 Jan 2014 09:31:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 09:31:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="94801968"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 09:31:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 04:31:46 -0500
Message-ID: <1390296705.20516.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ehouby@yahoo.com>
Date: Tue, 21 Jan 2014 09:31:45 +0000
In-Reply-To: <1390244796.2322.6.camel@astar.houby.net>
References: <1390244796.2322.6.camel@astar.houby.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot in get_rte_index
 without no-amd-iommu-perdev-intremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Extending the subject. Also CCing a few likely people.

On Mon, 2014-01-20 at 12:06 -0700, Eric Houby wrote:
> xen-devel list,
> 
> Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> Screen shot of the crash is attached.  Hardware is a Gigabyte
> GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to xen command
> line allows the system to boot as expected.

Are you also using any ivrs_ioapic[] command line options?

There's a thread at
http://lists.xen.org/archives/html/xen-devel/2013-10/msg00313.html which
did use that and then saw similar looking results.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:32:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Xgr-0006rc-M3; Tue, 21 Jan 2014 09:32:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5Xgq-0006rR-Rs
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:32:12 +0000
Received: from [85.158.139.211:10957] by server-16.bemta-5.messagelabs.com id
	39/19-11843-C9E3ED25; Tue, 21 Jan 2014 09:32:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390296729!10968364!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30425 invoked from network); 21 Jan 2014 09:32:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 09:32:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Jan 2014 09:32:06 +0000
Message-Id: <52DE4CA1020000780011547D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 21 Jan 2014 09:32:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
In-Reply-To: <1390244796.2322.6.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 20:06, Eric Houby <ehouby@yahoo.com> wrote:
> Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> Screen shot of the crash is attached.  Hardware is a Gigabyte
> GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to xen command
> line allows the system to boot as expected.

For analyzing this we need a full serial log (with "iommu=debug" in
place on the Xen command line), not just a screen shot.
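
For reference, capturing such a log means enabling a serial console on the
Xen command line. A sketch for a GRUB2-based Dom0 follows; the COM port,
speed and file paths are assumptions that vary per machine:

```shell
# /etc/default/grub fragment (assumed location; Fedora-style variable name)
GRUB_CMDLINE_XEN_DEFAULT="com1=115200,8n1 console=com1,vga loglvl=all iommu=debug"

# Then regenerate the bootloader config and capture COM1 from a second
# machine (null-modem cable or IPMI SOL):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
#   screen /dev/ttyS0 115200
```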

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Xu0-0007J5-Ge; Tue, 21 Jan 2014 09:45:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5Xtz-0007J0-H7
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:45:47 +0000
Received: from [85.158.139.211:28116] by server-9.bemta-5.messagelabs.com id
	FC/3D-15098-AC14ED25; Tue, 21 Jan 2014 09:45:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390297544!10794505!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27576 invoked from network); 21 Jan 2014 09:45:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 09:45:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="94805127"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 09:45:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 04:45:43 -0500
Message-ID: <1390297542.20516.90.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 21 Jan 2014 09:45:42 +0000
In-Reply-To: <52DE4CA1020000780011547D@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 09:32 +0000, Jan Beulich wrote:
> >>> On 20.01.14 at 20:06, Eric Houby <ehouby@yahoo.com> wrote:
> > Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> > Screen shot of the crash is attached.  Hardware is a Gigabyte
> > GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to xen command
> > line allows the system to boot as expected.
> 
> For analyzing this we need a full serial log

See http://wiki.xen.org/wiki/Xen_Serial_Console for how to configure
such a thing.

>  (with "iommu=debug" in
> place on the Xen command line), not just a screen shot.
> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 09:51:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 09:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5XzW-000816-BQ; Tue, 21 Jan 2014 09:51:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5XzU-000811-Vu
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 09:51:29 +0000
Received: from [85.158.143.35:25509] by server-3.bemta-4.messagelabs.com id
	F7/EB-32360-0234ED25; Tue, 21 Jan 2014 09:51:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390297887!10330004!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 573 invoked from network); 21 Jan 2014 09:51:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 09:51:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Jan 2014 09:51:26 +0000
Message-Id: <52DE512A02000078001154C7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 21 Jan 2014 09:51:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Philip Wernersbach" <philip.wernersbach@gmail.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
In-Reply-To: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.01.14 at 23:08, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
> 
> Some machines, such as recent IBM servers, only allow the OS to get the
> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0

This reads as if this was a bug in Xen, which it isn't. Dom0's
lack of EFI support when running on top of Xen is the issue.

> cannot get the RSDP on these machines, leading to all sorts of
> functionality reductions.
> 
> Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>
> 
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index b49256d..8c307c9 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
>              safe_strcat(dom0_cmdline, " acpi=");
>              safe_strcat(dom0_cmdline, acpi_param);
>          }
> +        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )

I think I had already indicated my opposition to this sort of hack
on v1; I'm not sure I asked which OSes usable as Dom0, other than
Linux, recognize this option. Or which versions of Linux actually
do (I'm pretty sure older ones don't).

Bottom line - I continue to think that the issue should be fixed
in Linux.

> +        {
> +            char acpi_os_root_pointer_string[MAX_RSDP_ADDRESS_STRING];
> +            int acpi_os_root_pointer_string_size =
> +                snprintf(acpi_os_root_pointer_string, MAX_RSDP_ADDRESS_STRING,
> +                         "%08lX", acpi_os_get_root_pointer());
> +
> +            if ( (acpi_os_root_pointer_string_size > 0) &&
> +                 (acpi_os_root_pointer_string_size <= MAX_RSDP_ADDRESS_STRING) )
> +            {
> +                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
> +                safe_strcat(dom0_cmdline, acpi_os_root_pointer_string);
> +            }
> +            else
> +            {
> +                printk(XENLOG_WARNING "RSDP Address String Size Check Failed!\n");
> +                printk(XENLOG_WARNING "Not passing acpi_rsdp to Dom0!\n");
> +                printk(XENLOG_WARNING "(Expected size 1-%d, got %d)\n",
> +                       MAX_RSDP_ADDRESS_STRING, acpi_os_root_pointer_string_size);
> +            }
> +        }
> 
>          cmdline = dom0_cmdline;
>      }
> diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
> index 8c5697e..a7c3905 100644
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -750,6 +750,7 @@ struct start_info {
>                                  /*  SIF_MOD_START_PFN set in flags).      
> */
>      unsigned long mod_len;      /* Size (bytes) of pre-loaded module.     */
>  #define MAX_GUEST_CMDLINE 1024
> +#define MAX_RSDP_ADDRESS_STRING 21

This clearly doesn't belong here (a public header). In fact it doesn't
belong in any header at all, as it's only used in a single source
file.

Jan

>      int8_t cmd_line[MAX_GUEST_CMDLINE];
>      /* The pfn range here covers both page table and p->m table frames.   */
>      unsigned long first_p2m_pfn;/* 1st pfn forming initial P->M table.    */




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 10:00:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Y8b-0000SW-Bn; Tue, 21 Jan 2014 10:00:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5Y8a-0000SN-Aw
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 10:00:52 +0000
Received: from [85.158.137.68:40490] by server-1.bemta-3.messagelabs.com id
	80/84-29598-3554ED25; Tue, 21 Jan 2014 10:00:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390298449!10379723!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10279 invoked from network); 21 Jan 2014 10:00:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:00:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="92746011"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 10:00:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 05:00:48 -0500
Message-ID: <1390298447.20516.98.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 21 Jan 2014 10:00:47 +0000
In-Reply-To: <1389882550.6697.21.camel@kazak.uk.xensource.com>
References: <osstest-24366-mainreport@xen.org>
	<1389606502.8187.10.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161414420.21510@kaball.uk.xensource.com>
	<1389882550.6697.21.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24366: tolerable trouble:
 broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 14:29 +0000, Ian Campbell wrote:
> On Thu, 2014-01-16 at 14:22 +0000, Stefano Stabellini wrote:
> > LVM works for me, but I am not using udev at the moment.
> 
> I instrumented the guest f/s and blkfront and it seems like reads are
> returning buffers full of 0xc2c2c2c2, which is the pattern that Xen
> scrubs pages with in a debug build.
> 
> So either there is a cache coherency issue or perhaps something to do
> with the dom0 swiotlb doing direct i/o to guest pages and sending them
> to the wrong place.

Just to close this off -- the issue here turned out to be truncation of
DMA addresses from 64 bits to 32 bits when LPAE is not enabled. Fixed by
my Linux patch "xen: swiotlb: handle sizeof(dma_addr_t) !=
sizeof(phys_addr_t)".

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 10:02:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:02:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Y9p-0000bo-UK; Tue, 21 Jan 2014 10:02:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0918db619=chegger@amazon.de>)
	id 1W5Y9o-0000bY-KX
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 10:02:08 +0000
Received: from [193.109.254.147:27920] by server-15.bemta-14.messagelabs.com
	id 74/36-22186-F954ED25; Tue, 21 Jan 2014 10:02:07 +0000
X-Env-Sender: prvs=0918db619=chegger@amazon.de
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390298525!10665207!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13948 invoked from network); 21 Jan 2014 10:02:07 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:02:07 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1390298526; x=1421834526;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=0PcU7hCQunlDV2qQc2Zp/12lK39ieI6T60gooiNzCTk=;
	b=o2Jodwu9CrMrMi9Gc6nmc3iWomfSSAuq3N+YlPuo+qIXKrSmAmdMy868
	9yLPlhUqn44q7A3/YIAIAZNzbdOYvvc7u2/OsmLYoEuNoj1xjEZzJxMK4
	MDHHMcXKzGjExFI76pufMk0fSCtmKRpnHSnbtvVAUuKywePf9tkU4hj0S 0=;
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="65347904"
Received: from email-inbound-relay-7001.iad7.amazon.com ([10.55.235.156])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 21 Jan 2014 10:02:03 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by email-inbound-relay-7001.iad7.amazon.com (8.14.7/8.14.7) with ESMTP
	id s0L9jhAk027302
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 21 Jan 2014 09:46:16 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.342.3; Tue, 21 Jan 2014 01:46:07 -0800
Message-ID: <52DE41DD.70502@amazon.de>
Date: Tue, 21 Jan 2014 10:46:05 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>, 
	"Dong, Eddie" <eddie.dong@intel.com>
References: <1386814004-5574-1-git-send-email-yang.z.zhang@intel.com>
	<1386814004-5574-2-git-send-email-yang.z.zhang@intel.com>
	<A12AC9D104E08D47BAF23C492F83C53B256C8C6A@SHSMSX104.ccr.corp.intel.com>
	<52B18213020000780010E98C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A996341@SHSMSX104.ccr.corp.intel.com>
	<52B18F89.1070309@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BB8ED@SHSMSX104.ccr.corp.intel.com>
	<52D4F57F020000780011362C@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BC3CB@SHSMSX104.ccr.corp.intel.com>
	<52DCE753.1040507@amazon.de>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C1688@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C1688@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 1/3] Nested VMX: update nested paging mode
 when vmswitch is in progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21.01.14 09:49, Zhang, Yang Z wrote:
> Egger, Christoph wrote on 2014-01-20:
>> On 14.01.14 08:38, Zhang, Yang Z wrote:
>>> Jan Beulich wrote on 2014-01-14:
>>>>>>> On 14.01.14 at 03:33, "Zhang, Yang Z" <yang.z.zhang@intel.com>
>> wrote:
>>>>> Zhang, Yang Z wrote on 2013-12-24:
>>>>>
>>>>> Any comments ?
>>>>
>>>> Considering Christoph's comments and reservations, if you can't
>>>> alone deal with this I think you should work with the AMD people to
>>>> eliminate or address his concerns.
>>>>
>>>
>>> Yes. But the problem that puzzles me is that Christoph said nested SVM
>>> works well without my patch, which I cannot understand. According to my
>>> analysis in the previous thread, it is also buggy on the AMD side. But
>>> if they really solved the issue on their side, I wonder how they fixed
>>> it. Perhaps I can use the same solution on the VMX side without
>>> touching the common code.
>>>
>>> Christoph, can you help to check it? thanks.
>>
>> The fix I mentioned solves the vmswitch problem on AMD side.
> 
> But the current code is buggy even on the AMD side with your fix; see the scenario below:
> virtual vmentry: 
>     Expected result: nested mode is being updated.
>     Current result in SVM: 
>           !vcpu_in_guestmode and vmswitch_in_progress:  L1's paging mode is updated.  Wrong.
> 
> I cannot understand why you said it is working.

This paging mode problem you discovered needs to be addressed for
both SVM and VMX.

The fix I said is working solves a different problem: that of emulating
an MMIO instruction which is finished in the softirq, while an interrupt
can cause a vmswitch during the instruction emulation.

We are talking about two different issues. One is fixed on the AMD
side and one is not.

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 10:26:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5YXU-0002BZ-Cu; Tue, 21 Jan 2014 10:26:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5YXT-0002BU-Cj
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 10:26:35 +0000
Received: from [85.158.143.35:64381] by server-2.bemta-4.messagelabs.com id
	DD/4E-11386-A5B4ED25; Tue, 21 Jan 2014 10:26:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390299992!10341901!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23279 invoked from network); 21 Jan 2014 10:26:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:26:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="92752357"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 10:26:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 05:26:32 -0500
Message-ID: <1390299991.20516.116.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Tue, 21 Jan 2014 10:26:31 +0000
In-Reply-To: <1390247840.5727.2.camel@Abyss>
References: <1386984785.3980.96.camel@Solace>
	<1390241799.23576.42.camel@Solace>
	<control-reply-1390242602.7482@bugs.xenproject.org>
	<1390247840.5727.2.camel@Abyss>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 20:57 +0100, Dario Faggioli wrote:
> Mmm... Perhaps it's obvious, but I don't see it... What am I doing
> wrong?

Apparently the bug tracker doesn't handle MIME encapsulation very
well/at all. I swear I remember adding that code...

I'll take a look at it, but in the meantime sending control messages in
plain text without any attachments etc should avoid the issue.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 10:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5YZu-0002Yn-Uh; Tue, 21 Jan 2014 10:29:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5YZt-0002YV-Ap
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 10:29:05 +0000
Received: from [85.158.143.35:2215] by server-2.bemta-4.messagelabs.com id
	B2/53-11386-0FB4ED25; Tue, 21 Jan 2014 10:29:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390300142!10342827!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12555 invoked from network); 21 Jan 2014 10:29:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:29:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="94814281"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 10:28:43 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 05:28:42 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	11:28:41 +0100
Message-ID: <52DE4BD8.7060209@citrix.com>
Date: Tue, 21 Jan 2014 10:28:40 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I participated in (a rather extended version of) the 4.4-rc2 test day,
and rc2 got a full XenRT nightly run in the XenServer testing system.

For the setup, the comparison is against XenServer trunk, which is
currently Xen-4.3-staging based (plus patch queue), Linux 3.10.y dom0
kernel, CentOS 6.4 based dom0 userspace.

The tested version had Xen 4.4 (staging, as I needed the ABI fix) in
place of Xen-4.3, but identical dom0 kernel, dom0 userspace, qemu,
toolstack and windows PV drivers.


The major issue identified is with Windows 8/8.1 and Server 2012/2012r2,
which have problems on live migrate.  Some source of time is
unexpectedly jumping forwards by two days, from the correct time to 2
days in the future.  The observed result is that it loses its DHCP
lease, drops its IP address and networking ceases to work (it appears
that Windows will not attempt to renew the lease itself).

I am currently investigating which source of time is jumping forwards,
but this does appear to be a regression directly attributable to Xen 4.4.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 10:50:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:50:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Ytu-0003JK-B3; Tue, 21 Jan 2014 10:49:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5Yts-0003J2-Ep
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 10:49:44 +0000
Received: from [85.158.139.211:11609] by server-8.bemta-5.messagelabs.com id
	A8/9B-29838-7C05ED25; Tue, 21 Jan 2014 10:49:43 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390301381!10994062!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7617 invoked from network); 21 Jan 2014 10:49:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:49:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; 
	d="asc'?scan'208";a="92757537"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 10:49:41 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 05:49:40 -0500
Message-ID: <1390301379.23576.110.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 21 Jan 2014 11:49:39 +0100
In-Reply-To: <1390299991.20516.116.camel@kazak.uk.xensource.com>
References: <1386984785.3980.96.camel@Solace>
	<1390241799.23576.42.camel@Solace>
	<control-reply-1390242602.7482@bugs.xenproject.org>
	<1390247840.5727.2.camel@Abyss>
	<1390299991.20516.116.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8153212541153803079=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8153212541153803079==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-BpFiXJ+PZA+9gsl2xtBI"

--=-BpFiXJ+PZA+9gsl2xtBI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-01-21 at 10:26 +0000, Ian Campbell wrote:
> On Mon, 2014-01-20 at 20:57 +0100, Dario Faggioli wrote:
> > Mmm... Perhaps it's obvious, but I don't see it... What am I doing
> > wrong?
>=20
> Apparently the bug tracker doesn't handle MIME encapsulation very
> well/at all. I swear I remember adding that code...
>=20
> I'll take a look at it, but in the meantime sending control messages in
> plain text without any attachments etc should avoid the issue.
>=20
Ok thanks. I'll retry creating the entry with a new mail, without any
attachment.

Thanks,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-BpFiXJ+PZA+9gsl2xtBI
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeUMMACgkQk4XaBE3IOsSSHwCfV4pWCew/79c5DbaDjv2m45Ia
9I8AnRRa53qdbm3PvQHMM1N54tQh8/BI
=Lpsw
-----END PGP SIGNATURE-----

--=-BpFiXJ+PZA+9gsl2xtBI--


--===============8153212541153803079==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8153212541153803079==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 10:57:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 10:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Z0c-0003ao-Nt; Tue, 21 Jan 2014 10:56:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5Z0b-0003ae-8k
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 10:56:41 +0000
Received: from [193.109.254.147:10705] by server-3.bemta-14.messagelabs.com id
	96/2D-11000-8625ED25; Tue, 21 Jan 2014 10:56:40 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390301788!12205787!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDc4NjIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21209 invoked from network); 21 Jan 2014 10:56:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 10:56:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; 
	d="asc'?scan'208";a="94820124"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 10:56:28 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 05:56:27 -0500
Message-ID: <1390301785.23576.115.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Tue, 21 Jan 2014 11:56:25 +0100
In-Reply-To: <CAE4oM6wiHT6Y3wQ809uPHXO3N+kzVmSt-fcp1QzG-qbVeRmfww@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
	<CAE4oM6wiHT6Y3wQ809uPHXO3N+kzVmSt-fcp1QzG-qbVeRmfww@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2199752092285387422=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2199752092285387422==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-fOgJYx+XxATFX/PrVtHy"

--=-fOgJYx+XxATFX/PrVtHy
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-01-20 at 19:31 +0200, Pavlo Suikov wrote:
> Hi again,
>=20
>=20
> sorry for the broken formatting, see better one below where the tables
> should be.
>=20
Mmm... At least for me, formatting is broken in this e-mail, and was ok
in the former one :-)

Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-fOgJYx+XxATFX/PrVtHy
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeUlkACgkQk4XaBE3IOsRqWQCeOxNsIxknqf30wOyeq2dpb+e/
XlkAniM9f3SJ6RCF1VkUqnbibV93GOx+
=as+t
-----END PGP SIGNATURE-----

--=-fOgJYx+XxATFX/PrVtHy--


--===============2199752092285387422==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2199752092285387422==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 11:01:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 11:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Z5J-0004JK-KI; Tue, 21 Jan 2014 11:01:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5Z5H-0004JE-Ti
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 11:01:32 +0000
Received: from [85.158.139.211:31548] by server-13.bemta-5.messagelabs.com id
	D5/6D-11357-B835ED25; Tue, 21 Jan 2014 11:01:31 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390302079!10998149!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTA5OTIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18626 invoked from network); 21 Jan 2014 11:01:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 11:01:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; 
	d="asc'?scan'208";a="92760285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 11:01:18 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 06:01:17 -0500
Message-ID: <1390302075.23576.120.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Tue, 21 Jan 2014 12:01:15 +0100
In-Reply-To: <CAE4oM6zwSuv+38WZ87FUSzZBewtW0sM+PxGQCcqd47ef+b93HA@mail.gmail.com>
References: <CAE4oM6yhxOMaOUAGMS16i=7dniY32dQ6W8i53V=ewb3BN4ZLAA@mail.gmail.com>
	<1389977958.6697.135.camel@kazak.uk.xensource.com>
	<1389984018.16457.335.camel@Solace>
	<CAE4oM6zwSuv+38WZ87FUSzZBewtW0sM+PxGQCcqd47ef+b93HA@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Artem Mygaiev <artem.mygaiev@globallogic.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Sisu Xi <xisisu@gmail.com>, Claudio
	Scordino <claudio@evidence.eu.com>, xen-devel@lists.xen.org,
	Arianna Avanzini <avanzini.arianna@gmail.com>
Subject: Re: [Xen-devel] QNX Neutrino and RT-Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8261255266873280806=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8261255266873280806==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-y2QwZAD5y8B3s8ftstch"

--=-y2QwZAD5y8B3s8ftstch
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-01-20 at 15:30 +0200, Pavlo Suikov wrote:

> > This is probably going to be a lot similar to what is being
> > attempted here:
> > http://bugs.xenproject.org/xen/mid/%3C1387278345.8738.80.camel@Solace%3E
>=20
>=20
> Wow. That looks fantastic.=20
>
Indeed.

> I would watch this one attentively, thanks a lot!
>=20
Exactly. I hope there will be room for some good collaboration, for the
benefit of both (well, three, as I also count Xen in!) efforts and
projects, as usual in Open Source :-)

> > Anyway, again, there is rising interest in this sort of workloads these
> > days, as Artem (from your same company, I think, and I'm adding him, and
> > a bunch of other people too, to the Cc list) knows. :-)
>=20
>=20
> Yep, I'm from his team actually :)
>=20
Aha! I suspected that, but good to have the confirmation. By any chance,
will any of you be in Brussels for FOSDEM (at the end of) next week?

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-y2QwZAD5y8B3s8ftstch
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeU3sACgkQk4XaBE3IOsS48ACfQXSBw0vdydiMrYn7e2xwrRbC
WlwAnirE7Tal/w2vH1wgN6ANvjYbxJ45
=KQkK
-----END PGP SIGNATURE-----

--=-y2QwZAD5y8B3s8ftstch--


--===============8261255266873280806==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8261255266873280806==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 11:26:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 11:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ZT6-0005Ce-SW; Tue, 21 Jan 2014 11:26:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5ZT6-0005CW-1R
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 11:26:08 +0000
Received: from [85.158.139.211:19413] by server-11.bemta-5.messagelabs.com id
	2B/CE-23268-F495ED25; Tue, 21 Jan 2014 11:26:07 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390303564!8301313!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26632 invoked from network); 21 Jan 2014 11:26:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 11:26:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,695,1384300800"; d="scan'208";a="94826769"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 11:26:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 06:26:03 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5ZT0-0000Fw-Ap;
	Tue, 21 Jan 2014 11:26:02 +0000
Message-ID: <1390303557.22324.6.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>
Date: Tue, 21 Jan 2014 11:25:57 +0000
In-Reply-To: <1390229463.14279.27.camel@hamster.uk.xensource.com>
References: <1390229463.14279.27.camel@hamster.uk.xensource.com>
Content-Type: multipart/mixed; boundary="=-99UV9m/OJzESYXEE3660"
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Christoph Egger <chegger@amazon.de>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] mce: Fix for another race condition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=-99UV9m/OJzESYXEE3660
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Mon, 2014-01-20 at 14:51 +0000, Frediano Ziglio wrote:
> Hi,
>   these are actually two patches, both of which fix the same race
> condition in the mce code. The problem is that these lines (in
> mctelem_reserve)
> 
> 
> 	newhead = oldhead->mcte_next;
> 	if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After you read the newhead pointer, another flow (a thread,
> or a recursive invocation) can change the whole list yet leave the head
> with the same value. So oldhead still equals *freelp, but you are
> installing a new head that may point to any element (even one already
> in use).
> 
> The basic idea of both patches is to move mcte_state into a separate
> field and set it with cmpxchg, to make sure we don't pick up an
> already allocated element.
> 
> The first patch detaches the list entirely (setting the head to NULL)
> to avoid using mcte_next, falling back to a slow_reserve that scans the
> whole array looking for an element in the FREE state. This is certainly
> safe and simple, but if the list is mostly allocated you end up
> scanning the entire array every time.
> 
> The second patch (which needs some cleanup) uses array indexes instead
> of pointers, so that the head and the next index can be bound together
> atomically. The head carries a counter that is incremented to detect
> the case where the list changed but the head kept the same value (a
> sort of list version). The state is stored together with the next index
> (which replaces mcte_next when the state is FREE) to allow an atomic
> read of state+next. To handle both thread safety and reentrancy,
> mctelem_reserve got a bit more complicated and the updates are not so
> straightforward.
> 
> Now, the question is: should I just send the first patch and ignore the
> computation problem in the corner case, or should I try to put the
> second patch into shape?
> 

I reply to my own mail with another fix. This one is quite simple and
mainly uses a bitmap. I used an unsigned long instead of a full bitmap,
as it is easier to test whether all bits are zero, but this limits the
number of entries.

Besides changing the structure, another idea would be to use a single
bitmap, searching from bit 0 for urgent entries and starting from
MC_URGENT_NENT for non-urgent ones.

Frediano


--=-99UV9m/OJzESYXEE3660
Content-Disposition: attachment; filename="mce_fix2_v3.patch"
Content-Type: text/x-patch; name="mce_fix2_v3.patch"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

# HG changeset patch
# Parent c45cce86b944403632c81b0f3b98b0db33658e28

diff -r c45cce86b944 xen/arch/x86/cpu/mcheck/mctelem.c
--- a/xen/arch/x86/cpu/mcheck/mctelem.c	Mon Jan 20 10:27:49 2014 +0000
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c	Tue Jan 21 05:50:40 2014 -0500
@@ -69,6 +69,11 @@
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+/* Check if we can fit enough bits in the free bit array */
+#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
+#error Too many elements
+#endif
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +82,9 @@
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
-	 * Consumed entries are returned to the head of the free list.
-	 * When an entry is reserved off the free list it is not linked
-	 * on any list until it is committed or dismissed.
+	 * The free list is a bit array where a set bit means free.
+	 * As the number of elements is quite small, it is easy to
+	 * allocate atomically this way.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +104,7 @@
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	unsigned long mctc_free[MC_NCLASSES];
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -214,7 +217,10 @@
 	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+
+	/* set free in array */
+	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -284,7 +290,7 @@
 	}
 
 	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
 		tep->mcte_flags = MCTE_F_STATE_FREE;
@@ -292,16 +298,15 @@
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
-		tep->mcte_next = *tepp;
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
@@ -310,18 +315,21 @@
 
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	unsigned long *freelp;
+	unsigned long oldfree;
+	unsigned bit;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
 	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
+		oldfree = *freelp;
+
+		if (oldfree == 0) {
 			if (which == MC_URGENT && target == MC_URGENT) {
 				/* raid the non-urgent freelist */
 				target = MC_NONURGENT;
@@ -333,9 +341,11 @@
 			}
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/* try to allocate, atomically clear free bit */
+		bit = find_first_set_bit(oldfree);
+		if (test_and_clear_bit(bit, freelp)) {
+			/* return element we got */
+			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
 
 			mctelem_hold(tep);
 			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
@@ -345,6 +355,7 @@
 				MCTE_SET_CLASS(tep, URGENT);
 			else
 				MCTE_SET_CLASS(tep, NONURGENT);
+
 			return MCTE2COOKIE(tep);
 		}
 	}

--=-99UV9m/OJzESYXEE3660
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-99UV9m/OJzESYXEE3660--


From xen-devel-bounces@lists.xen.org Tue Jan 21 11:46:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 11:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5Zma-00063n-Qq; Tue, 21 Jan 2014 11:46:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5ZmZ-00063i-HS
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 11:46:15 +0000
Received: from [85.158.137.68:57801] by server-5.bemta-3.messagelabs.com id
	47/D5-25188-60E5ED25; Tue, 21 Jan 2014 11:46:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390304763!9233598!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTMxNjggKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8732 invoked from network); 21 Jan 2014 11:46:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 11:46:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; 
	d="asc'?scan'208";a="92770897"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 11:46:03 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 06:46:02 -0500
Message-ID: <1390304761.23576.161.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Tue, 21 Jan 2014 12:46:01 +0100
In-Reply-To: <CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1851469252837072552=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1851469252837072552==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-sSvjKjgnxY9Rv12SeFLy"

--=-sSvjKjgnxY9Rv12SeFLy
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-01-20 at 18:05 +0200, Pavlo Suikov wrote:
> > x86 or ARM host?
> ARM. ARMv7, TI Jacinto6 to be precise.
>
Ok.

> > Also, how many pCPUs and vCPUs do the host and the various guests
> have?
>
>
> 2 pCPUs, 4 vCPUs: 2 vCPU per domain.
>
Right. So you are overbooking the platform a bit. Don't get me wrong,
that's not only legitimate, it's actually a good thing, if only because
it gives us something nice to play with, from the Xen scheduling
perspective. If you just had #vCPUs==#pCPUs, that would be way
more boring! :-)

That being said, would it be a problem, as a temporary measure during
this first phase of testing and benchmarking, to change that a bit? I'm
asking because I think it could help isolate the various causes of the
issues you're seeing, and hence tackle and resolve them.

> > Are you using any vCPU-to-pCPU pinning?
>
> No.
>
Ok, so, if, as said above, you can do that, I'd try the following. With
the credit scheduler (after having cleared/disabled the rate limiting
thing), go for 1 vCPU in Dom0 and 1 vCPU in DomU.

Also, pin both, and do it to different pCPUs. I think booting with
"dom0_max_vcpus=1 dom0_vcpus_pin" in the Xen command line would do the
trick for Dom0. For DomU, you just put a "cpus=X" entry in the config
file, as soon as you see which pCPU Dom0 is _not_ pinned to (I suspect
Dom0 will end up pinned to pCPU #0, and so you should use "cpus=1" for
the DomU).

With that configuration, repeat the tests.
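
Concretely, the static setup suggested above could look like the sketch below. The pCPU numbers are assumptions, based on the guess that Dom0 ends up pinned to pCPU #0:

```
# Xen hypervisor command line (appended to the bootloader entry),
# giving Dom0 a single, pinned vCPU:
#   ... dom0_max_vcpus=1 dom0_vcpus_pin ...

# DomU config file: one vCPU, pinned to the other pCPU:
vcpus = 1
cpus  = "1"
```

Inside the guest, the test program's priority can then be raised with something like `chrt -f 90 ./test` (the tool is an assumption; any mechanism that sets a SCHED_FIFO priority works).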

Basically, what I'm asking you to do is to completely kick the Xen
scheduler out of the window, for now, to try getting some baseline
numbers. Nicely enough, when using only 1 vCPU for both Dom0 and DomU,
you pretty much rule out most of the Linux scheduler's logic (not
everything, but at least the part about load balancing). To push even
harder on the latter, I'd boost the priority of the test program (I'm
still talking about inside the Linux guest) to some high rtprio level.

What all the above should give you is an estimate of the current lower
bound on latency and jitter that you can get. If that's already not good
enough (provided I did not make any glaring mistake in the instructions
above :-D), then we know that there are areas other than the scheduler
that need some intervention, and we can start looking for which ones
and what to do.

Also, whether or not what you get is enough, one can also start working
on seeing what scheduler, and/or what set of scheduling parameters, is
able to replicate, or reliably get close enough to, the 'static
scenario'.

What do you think?

> We did additional measurements and as you can see, my first impression
> was not quite correct: the difference between dom0 and domU exists and
> is quite observable on a larger scale. On the same setup running bare
> metal without Xen, the number of times t > 32 is close to 0; on the
> setup with Xen but without the domU system running, the number of
> times t > 32 is close to 0 as well.
>
I appreciate that. Given the many actors and factors involved, I think
the only way to figure out what's going on is to try isolating the
various components as much as we can... That's why I'm suggesting to
consider a very very very simple situation first, at least wrt
scheduling.

>  We will make additional measurements with Linux (not Android) as a
> domU guest, though.
>
Ok.

> > # xl sched-sedf
>
> # xl sched-sedf
> Cpupool Pool-0:
> Name                                ID Period Slice  Latency Extra
> Weight
> Domain-0                             0    100      0       0     1
>    0
> android_4.3                          1    100      0       0     1
>    0
>
May I ask for the output of

# xl list -n

and

# xl vcpu-list

in the sEDF case too?

That being said, I suggest you not spend much time on sEDF for now.
As it is, it's broken, especially on SMPs, so we either re-engineer it
properly, or turn toward RT-Xen (and, e.g., help Sisu and his team to
upstream it).

I think we should have a discussion about the above, outside and beyond
this thread... I'll bring it up in the proper way ASAP.

> > Oh, and now that I think about it, something that is present in credit
> and
> > not in sEDF that might be worth checking is the scheduling rate
> limiting
> > thing.
>
>=20
> We'll check it out, thanks!
>
Right. One other thing that I forgot to mention: the timeslice. Credit
uses, by default, 30ms as its scheduling timeslice, which, I think, is
quite high for latency sensitive workloads like yours (Linux typically
uses 1, 3.33, 4 or 10 ms).

# xl sched-credit
Cpupool Pool-0: tslice=30ms ratelimit=1000us
Name                                ID Weight  Cap
Domain-0                             0    256    0
vm.guest.osstest                     9    256    0

I think that another thing that is worth trying is running the
experiments with that lowered a bit. E.g.:

# xl sched-credit -s -t 1
# xl sched-credit
Cpupool Pool-0: tslice=1ms ratelimit=1000us
Name                                ID Weight  Cap
Domain-0                             0    256    0
vm.guest.osstest                     9    256    0

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-sSvjKjgnxY9Rv12SeFLy
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLeXfkACgkQk4XaBE3IOsS0vgCfdmRIJ87ulGyUj4I3FeBbhhKd
TdQAoJQRfD/ymebJkJC7r2EhKnjG2FSW
=0hVs
-----END PGP SIGNATURE-----

--=-sSvjKjgnxY9Rv12SeFLy--


--===============1851469252837072552==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1851469252837072552==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 12:27:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 12:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5aQX-00083E-Oc; Tue, 21 Jan 2014 12:27:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5aQV-000839-KR
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 12:27:31 +0000
Received: from [193.109.254.147:47879] by server-8.bemta-14.messagelabs.com id
	6E/EF-30921-2B76ED25; Tue, 21 Jan 2014 12:27:30 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390307248!9912811!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29415 invoked from network); 21 Jan 2014 12:27:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 12:27:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92783655"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 12:27:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 07:27:27 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5aQQ-0001ER-Tr;
	Tue, 21 Jan 2014 12:27:26 +0000
Date: Tue, 21 Jan 2014 12:26:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   13 +++---
>  drivers/xen/grant-table.c           |   81 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  4 files changed, 87 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..87ded60 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
> +		else {
> +			unsigned long pfn = page_to_pfn(pages[i]);
> +			WARN_ON(PagePrivate(pages[i]));
> +			SetPagePrivate(pages[i]);
> +			set_page_private(pages[i], mfn);
> +			pages[i]->index = pfn_to_mfn(pfn);
> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +				return -ENOMEM;

What happens if the page is PageHighMem?

This looks like a subset of m2p_add_override, but it is missing some
relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
check.  Maybe we can find a way to avoid duplicating the code.
We could split m2p_add_override into two functions or add yet another
parameter to it.
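
For illustration only, the split being suggested might look roughly like the sketch below. This is a non-compilable sketch against a hypothetical refactoring: the helper name is invented, and the checks are paraphrased from what m2p_add_override does, not copied from it.

```
/* Hypothetical helper shared by both paths: the common checks
 * (PageHighMem, the p2m(m2p(mfn)) == mfn consistency check) live here;
 * the userspace path would additionally do the override-list work. */
static int m2p_set_foreign_mapping(unsigned long mfn, struct page *page)
{
	unsigned long pfn = page_to_pfn(page);

	if (PageHighMem(page))
		return 0;	/* no kernel mapping to fix up */

	if (mfn_to_pfn(mfn) != pfn)	/* p2m(m2p(mfn)) == mfn check */
		return 0;

	WARN_ON(PagePrivate(page));
	SetPagePrivate(page);
	set_page_private(page, mfn);
	page->index = pfn_to_mfn(pfn);
	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
		return -ENOMEM;
	return 0;
}
```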


> +		}
>  		if (ret)
>  			goto out;
>  	}
> @@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;
> +}
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i], kmap_ops ?
> +						  &kmap_ops[i] : NULL);
> +		else {
> +			unsigned long pfn = page_to_pfn(pages[i]);
> +			WARN_ON(!PagePrivate(pages[i]));
> +			ClearPagePrivate(pages[i]);
> +			set_phys_to_machine(pfn, pages[i]->index);

same here: let's try to avoid code duplication


> +		}
>  		if (ret)
>  			goto out;
>  	}
> @@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;
> +}
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 12:27:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 12:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5aQX-00083E-Oc; Tue, 21 Jan 2014 12:27:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5aQV-000839-KR
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 12:27:31 +0000
Received: from [193.109.254.147:47879] by server-8.bemta-14.messagelabs.com id
	6E/EF-30921-2B76ED25; Tue, 21 Jan 2014 12:27:30 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390307248!9912811!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29415 invoked from network); 21 Jan 2014 12:27:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 12:27:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92783655"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 12:27:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 07:27:27 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5aQQ-0001ER-Tr;
	Tue, 21 Jan 2014 12:27:26 +0000
Date: Tue, 21 Jan 2014 12:26:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   13 +++---
>  drivers/xen/grant-table.c           |   81 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  4 files changed, 87 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..87ded60 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
> +		else {
> +			unsigned long pfn = page_to_pfn(pages[i]);
> +			WARN_ON(PagePrivate(pages[i]));
> +			SetPagePrivate(pages[i]);
> +			set_page_private(pages[i], mfn);
> +			pages[i]->index = pfn_to_mfn(pfn);
> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +				return -ENOMEM;

What happens if the page is PageHighMem?

This looks like a subset of m2p_add_override, but it is missing some
relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
check.  Maybe we can find a way to avoid duplicating the code.
We could split m2p_add_override into two functions or add yet another
parameter to it.
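The duplication could be factored the same way the patch already handles the gnttab entry points: one worker taking a flag, plus thin wrappers. A standalone sketch of that pattern follows; the struct, the path constants, and the function names are simplified stand-ins for illustration, not the real kernel types or APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins: the real code operates on
 * struct gnttab_map_grant_ref and struct page. */
struct op { int handled_by; };
enum { KERNEL_PATH = 1, USERSPACE_PATH = 2 };

/* Core worker: a single m2p_override flag selects between the cheap
 * kernel-only path and the full m2p-override path, mirroring the shape
 * of __gnttab_map_refs() in the patch. */
static int __map_refs(struct op *ops, size_t count, bool m2p_override)
{
	for (size_t i = 0; i < count; i++)
		ops[i].handled_by = m2p_override ? USERSPACE_PATH : KERNEL_PATH;
	return 0;
}

/* Thin wrappers: kernel-only callers (blkback, netback) never pay for
 * the m2p override, while gntdev keeps the old behaviour. */
static int map_refs(struct op *ops, size_t count)
{
	return __map_refs(ops, count, false);
}

static int map_refs_userspace(struct op *ops, size_t count)
{
	return __map_refs(ops, count, true);
}
```

Applying the same split (or flag parameter) to m2p_add_override would let the non-override branch share the PageHighMem and p2m(m2p(mfn)) checks instead of open-coding a subset of them.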


> +		}
>  		if (ret)
>  			goto out;
>  	}
> @@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;
> +}
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i], kmap_ops ?
> +						  &kmap_ops[i] : NULL);
> +		else {
> +			unsigned long pfn = page_to_pfn(pages[i]);
> +			WARN_ON(!PagePrivate(pages[i]));
> +			ClearPagePrivate(pages[i]);
> +			set_phys_to_machine(pfn, pages[i]->index);

same here: let's try to avoid code duplication


> +		}
>  		if (ret)
>  			goto out;
>  	}
> @@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;
> +}
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 12:55:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 12:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5arc-0000oQ-Dg; Tue, 21 Jan 2014 12:55:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W5arb-0000oL-AR
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 12:55:31 +0000
Received: from [85.158.137.68:35561] by server-11.bemta-3.messagelabs.com id
	8B/5C-19379-24E6ED25; Tue, 21 Jan 2014 12:55:30 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390308929!10370134!1
X-Originating-IP: [62.142.5.109]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA5ID0+IDk1MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22960 invoked from network); 21 Jan 2014 12:55:29 -0000
Received: from emh03.mail.saunalahti.fi (HELO emh03.mail.saunalahti.fi)
	(62.142.5.109)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 12:55:29 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh03.mail.saunalahti.fi (Postfix) with ESMTP id 14906188972;
	Tue, 21 Jan 2014 14:55:27 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 6158936C01F; Tue, 21 Jan 2014 14:55:27 +0200 (EET)
Date: Tue, 21 Jan 2014 14:55:27 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Wu, Feng" <feng.wu@intel.com>
Message-ID: <20140121125527.GG2924@reaktio.net>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Gordan Bobic <gordan@bobich.net>, "G.R." <firemeteor@users.sourceforge.net>,
	"G.R." <firemeteor.guo@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On Mon, Jan 20, 2014 at 01:24:23PM +0000, Wu, Feng wrote:
> 
> 
> > >>> >
> > >>> > Hi all,
> > >>> >
> > >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
> > >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
> > >>> > parameter for qemu-xen. I am able to do passthrough with qemu
> > >>> > traditional i.e. qemu-dm.
> > >>>
> > >>> As far as I know, only qemu-traditional supports vga pass-through
> > >>> right now.
> > >>
> > >> Right.
> > >> It is not possible to assign your primary VGA card to a VM with
> > >> qemu-xen. You should be able to assign your secondary VGA card though.
> > >
> > > Let me understand this correctly. If I have two VGA cards then I can
> > > passthrough
> > > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
> > > right and
> > > if yes how can I do it?
> > 
> > Passing any VGA card as a primary-in-domU has always been problematic.
> 
> I think passing a VGA card as primary-in-domU works well in qemu-traditional, right?
> 

primary-in-domU requires vendor-specific hacks in Xen's qemu.
qemu-traditional includes many patches for Intel IGD primary passthru support,
but the patches for AMD/ATI and Nvidia GPUs aren't merged to qemu-traditional. 

The unapplied qemu-traditional (AMD/Nvidia) GPU passthru patches are scattered
across various source trees, mailing list archives, and blogs around the internet.

Also for Intel IGD I think there's at least one outstanding patch/fix that 
hasn't been merged to qemu-traditional yet, see:

http://lists.xenproject.org/archives/html/xen-devel/2013-02/msg00538.html
http://lists.xen.org/archives/html/xen-devel/2013-07/msg01385.html

The patch in question probably needs some work before it is suitable for being applied to qemu-traditional.


> 
> Thanks,
> Feng
> 

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:40:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5bZ0-0002kf-QZ; Tue, 21 Jan 2014 13:40:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5bYz-0002ka-A3
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 13:40:21 +0000
Received: from [85.158.139.211:51489] by server-15.bemta-5.messagelabs.com id
	17/6D-08490-4C87ED25; Tue, 21 Jan 2014 13:40:20 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390311618!10865077!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32373 invoked from network); 21 Jan 2014 13:40:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 13:40:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94865212"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 13:40:17 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 08:40:17 -0500
Message-ID: <52DE78BF.2070909@citrix.com>
Date: Tue, 21 Jan 2014 13:40:15 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 12:26, Stefano Stabellini wrote:
> On Mon, 20 Jan 2014, Zoltan Kiss wrote:
>
>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		if (m2p_override)
>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> +					       &kmap_ops[i] : NULL);
>> +		else {
>> +			unsigned long pfn = page_to_pfn(pages[i]);
>> +			WARN_ON(PagePrivate(pages[i]));
>> +			SetPagePrivate(pages[i]);
>> +			set_page_private(pages[i], mfn);
>> +			pages[i]->index = pfn_to_mfn(pfn);
>> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>> +				return -ENOMEM;
> 
> What happens if the page is PageHighMem?
> 
> This looks like a subset of m2p_add_override, but it is missing some
> relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
> check.  Maybe we can find a way to avoid duplicating the code.
> We could split m2p_add_override in two functions or add yet another
> parameter to it.

The PageHighMem() check isn't relevant as we're not mapping anything
here.  Also, a page used only for a kernel grant mapping cannot be highmem.

The check for a local mfn and the additional set_phys_to_machine() are
only necessary if something tries an mfn_to_pfn() on the local mfn.  We
can only omit adding an m2p override if we know nothing will try
mfn_to_pfn(), therefore the check and set_phys_to_machine() are unnecessary.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:45:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5beB-0002sT-5i; Tue, 21 Jan 2014 13:45:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5be8-0002sN-I8
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 13:45:41 +0000
Received: from [85.158.139.211:6775] by server-1.bemta-5.messagelabs.com id
	1E/75-21065-30A7ED25; Tue, 21 Jan 2014 13:45:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390311937!10866622!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18599 invoked from network); 21 Jan 2014 13:45:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 13:45:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92811377"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 13:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 08:45:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5be3-0002QI-HP;
	Tue, 21 Jan 2014 13:45:35 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <will.deacon@arm.com>
Date: Tue, 21 Jan 2014 13:44:24 +0000
Message-ID: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, arnd@arndb.de, stefano.stabellini@eu.citrix.com,
	catalin.marinas@arm.com, jaccon.bastiaansen@gmail.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and !GENERIC_ATOMIC64
	build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove !GENERIC_ATOMIC64 build dependency:
- introduce xen_atomic64_xchg
- use it to implement xchg_xen_ulong

Remove !CPU_V6 build dependency:
- introduce __cmpxchg8 and __cmpxchg16, compiled even when
  CONFIG_CPU_V6 is defined
- implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
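Semantically, the ldrexb/strexbeq retry loop in __cmpxchg8 is a byte-wide compare-and-swap that returns the previous value. A portable C11 model of that semantics is sketched below; cmpxchg8_model is a hypothetical name for illustration only, the kernel version uses the raw ARM asm shown in the diff:

```c
#include <assert.h>
#include <stdatomic.h>

/* Model of what the ldrexb/strexbeq loop in __cmpxchg8 implements:
 * compare *ptr against old, store newval if they match, and return the
 * value that was observed before the operation. */
static unsigned char cmpxchg8_model(_Atomic unsigned char *ptr,
				    unsigned char old, unsigned char newval)
{
	unsigned char expected = old;

	/* On success *ptr becomes newval and expected stays == old; on
	 * failure expected is updated to the current value.  Either way
	 * expected holds the value seen before the operation, which is
	 * exactly what __cmpxchg8 returns in oldval. */
	atomic_compare_exchange_strong(ptr, &expected, newval);
	return expected;
}
```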

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: arnd@arndb.de
CC: linux@arm.linux.org.uk
CC: will.deacon@arm.com
CC: catalin.marinas@arm.com
CC: linux-arm-kernel@lists.infradead.org
CC: linux-kernel@vger.kernel.org
CC: xen-devel@lists.xenproject.org

---

Changes in v4:
- avoid moving and renaming atomic64_xchg
- introduce xen_atomic64_xchg
- fix asm comment in __cmpxchg8 and __cmpxchg16.

---
 arch/arm/Kconfig                   |    3 +-
 arch/arm/include/asm/cmpxchg.h     |   60 ++++++++++++++++++++++++------------
 arch/arm/include/asm/sync_bitops.h |   24 ++++++++++++++-
 arch/arm/include/asm/xen/events.h  |   32 ++++++++++++++++++-
 4 files changed, 95 insertions(+), 24 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..ae54ae0 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1881,8 +1881,7 @@ config XEN_DOM0
 config XEN
 	bool "Xen guest support on ARM (EXPERIMENTAL)"
 	depends on ARM && AEABI && OF
-	depends on CPU_V7 && !CPU_V6
-	depends on !GENERIC_ATOMIC64
+	depends on CPU_V7
 	select ARM_PSCI
 	select SWIOTLB_XEN
 	help
diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index df2fbba..a17cff1 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -133,6 +133,44 @@ extern void __bad_cmpxchg(volatile void *ptr, int size);
  * cmpxchg only support 32-bits operands on ARMv6.
  */
 
+static inline unsigned long __cmpxchg8(volatile void *ptr, unsigned long old,
+				      unsigned long new)
+{
+	unsigned long oldval, res;
+
+	do {
+		asm volatile("@ __cmpxchg8\n"
+		"	ldrexb	%1, [%2]\n"
+		"	mov	%0, #0\n"
+		"	teq	%1, %3\n"
+		"	strexbeq %0, %4, [%2]\n"
+			: "=&r" (res), "=&r" (oldval)
+			: "r" (ptr), "Ir" (old), "r" (new)
+			: "memory", "cc");
+	} while (res);
+
+	return oldval;
+}
+
+static inline unsigned long __cmpxchg16(volatile void *ptr, unsigned long old,
+				      unsigned long new)
+{
+	unsigned long oldval, res;
+
+	do {
+		asm volatile("@ __cmpxchg16\n"
+		"	ldrexh	%1, [%2]\n"
+		"	mov	%0, #0\n"
+		"	teq	%1, %3\n"
+		"	strexheq %0, %4, [%2]\n"
+			: "=&r" (res), "=&r" (oldval)
+			: "r" (ptr), "Ir" (old), "r" (new)
+			: "memory", "cc");
+	} while (res);
+
+	return oldval;
+}
+
 static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				      unsigned long new, int size)
 {
@@ -141,28 +179,10 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	switch (size) {
 #ifndef CONFIG_CPU_V6	/* min ARCH >= ARMv6K */
 	case 1:
-		do {
-			asm volatile("@ __cmpxchg1\n"
-			"	ldrexb	%1, [%2]\n"
-			"	mov	%0, #0\n"
-			"	teq	%1, %3\n"
-			"	strexbeq %0, %4, [%2]\n"
-				: "=&r" (res), "=&r" (oldval)
-				: "r" (ptr), "Ir" (old), "r" (new)
-				: "memory", "cc");
-		} while (res);
+		oldval = __cmpxchg8(ptr, old, new);
 		break;
 	case 2:
-		do {
-			asm volatile("@ __cmpxchg1\n"
-			"	ldrexh	%1, [%2]\n"
-			"	mov	%0, #0\n"
-			"	teq	%1, %3\n"
-			"	strexheq %0, %4, [%2]\n"
-				: "=&r" (res), "=&r" (oldval)
-				: "r" (ptr), "Ir" (old), "r" (new)
-				: "memory", "cc");
-		} while (res);
+		oldval = __cmpxchg16(ptr, old, new);
 		break;
 #endif
 	case 4:
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 63479ee..942659a 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -21,7 +21,29 @@
 #define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
 #define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define sync_cmpxchg			cmpxchg
 
+static inline unsigned long sync_cmpxchg(volatile void *ptr,
+										 unsigned long old,
+										 unsigned long new)
+{
+	unsigned long oldval;
+	int size = sizeof(*(ptr));
+
+	smp_mb();
+	switch (size) {
+	case 1:
+		oldval = __cmpxchg8(ptr, old, new);
+		break;
+	case 2:
+		oldval = __cmpxchg16(ptr, old, new);
+		break;
+	default:
+		oldval = __cmpxchg(ptr, old, new, size);
+		break;
+	}
+	smp_mb();
+
+	return oldval;
+}
 
 #endif
diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
index 8b1f37b..2032ee6 100644
--- a/arch/arm/include/asm/xen/events.h
+++ b/arch/arm/include/asm/xen/events.h
@@ -16,7 +16,37 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
 	return raw_irqs_disabled_flags(regs->ARM_cpsr);
 }
 
-#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr),	\
+#ifdef CONFIG_GENERIC_ATOMIC64
+/* if CONFIG_GENERIC_ATOMIC64 is defined we cannot use the generic
+ * atomic64_xchg function because it is implemented using spin locks.
+ * Here we need proper atomic instructions to read and write memory
+ * shared with the hypervisor.
+ */
+static inline u64 xen_atomic64_xchg(atomic64_t *ptr, u64 new)
+{
+	u64 result;
+	unsigned long tmp;
+
+	smp_mb();
+
+	__asm__ __volatile__("@ xen_atomic64_xchg\n"
+"1:	ldrexd	%0, %H0, [%3]\n"
+"	strexd	%1, %4, %H4, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
+	: "r" (&ptr->counter), "r" (new)
+	: "cc");
+
+	smp_mb();
+
+	return result;
+}
+#else
+#define xen_atomic64_xchg atomic64_xchg
+#endif
+
+#define xchg_xen_ulong(ptr, val) xen_atomic64_xchg(container_of((ptr),	\
 							    atomic64_t,	\
 							    counter), (val))
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:45:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5beB-0002sT-5i; Tue, 21 Jan 2014 13:45:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5be8-0002sN-I8
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 13:45:41 +0000
Received: from [85.158.139.211:6775] by server-1.bemta-5.messagelabs.com id
	1E/75-21065-30A7ED25; Tue, 21 Jan 2014 13:45:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390311937!10866622!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18599 invoked from network); 21 Jan 2014 13:45:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 13:45:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92811377"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 13:45:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 08:45:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5be3-0002QI-HP;
	Tue, 21 Jan 2014 13:45:35 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <will.deacon@arm.com>
Date: Tue, 21 Jan 2014 13:44:24 +0000
Message-ID: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, arnd@arndb.de, stefano.stabellini@eu.citrix.com,
	catalin.marinas@arm.com, jaccon.bastiaansen@gmail.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and !GENERIC_ATOMIC64
	build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove !GENERIC_ATOMIC64 build dependency:
- introduce xen_atomic64_xchg
- use it to implement xchg_xen_ulong

Remove !CPU_V6 build dependency:
- introduce __cmpxchg8 and __cmpxchg16, compiled even when
  CONFIG_CPU_V6 is defined
- implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: arnd@arndb.de
CC: linux@arm.linux.org.uk
CC: will.deacon@arm.com
CC: catalin.marinas@arm.com
CC: linux-arm-kernel@lists.infradead.org
CC: linux-kernel@vger.kernel.org
CC: xen-devel@lists.xenproject.org

---

Changes in v4:
- avoid moving and renaming atomic64_xchg
- introduce xen_atomic64_xchg
- fix asm comment in __cmpxchg8 and __cmpxchg16.

---
 arch/arm/Kconfig                   |    3 +-
 arch/arm/include/asm/cmpxchg.h     |   60 ++++++++++++++++++++++++------------
 arch/arm/include/asm/sync_bitops.h |   24 ++++++++++++++-
 arch/arm/include/asm/xen/events.h  |   32 ++++++++++++++++++-
 4 files changed, 95 insertions(+), 24 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..ae54ae0 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1881,8 +1881,7 @@ config XEN_DOM0
 config XEN
 	bool "Xen guest support on ARM (EXPERIMENTAL)"
 	depends on ARM && AEABI && OF
-	depends on CPU_V7 && !CPU_V6
-	depends on !GENERIC_ATOMIC64
+	depends on CPU_V7
 	select ARM_PSCI
 	select SWIOTLB_XEN
 	help
diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index df2fbba..a17cff1 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -133,6 +133,44 @@ extern void __bad_cmpxchg(volatile void *ptr, int size);
  * cmpxchg only support 32-bits operands on ARMv6.
  */
 
+static inline unsigned long __cmpxchg8(volatile void *ptr, unsigned long old,
+				      unsigned long new)
+{
+	unsigned long oldval, res;
+
+	do {
+		asm volatile("@ __cmpxchg8\n"
+		"	ldrexb	%1, [%2]\n"
+		"	mov	%0, #0\n"
+		"	teq	%1, %3\n"
+		"	strexbeq %0, %4, [%2]\n"
+			: "=&r" (res), "=&r" (oldval)
+			: "r" (ptr), "Ir" (old), "r" (new)
+			: "memory", "cc");
+	} while (res);
+
+	return oldval;
+}
+
+static inline unsigned long __cmpxchg16(volatile void *ptr, unsigned long old,
+				      unsigned long new)
+{
+	unsigned long oldval, res;
+
+	do {
+		asm volatile("@ __cmpxchg16\n"
+		"	ldrexh	%1, [%2]\n"
+		"	mov	%0, #0\n"
+		"	teq	%1, %3\n"
+		"	strexheq %0, %4, [%2]\n"
+			: "=&r" (res), "=&r" (oldval)
+			: "r" (ptr), "Ir" (old), "r" (new)
+			: "memory", "cc");
+	} while (res);
+
+	return oldval;
+}
+
 static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 				      unsigned long new, int size)
 {
@@ -141,28 +179,10 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	switch (size) {
 #ifndef CONFIG_CPU_V6	/* min ARCH >= ARMv6K */
 	case 1:
-		do {
-			asm volatile("@ __cmpxchg1\n"
-			"	ldrexb	%1, [%2]\n"
-			"	mov	%0, #0\n"
-			"	teq	%1, %3\n"
-			"	strexbeq %0, %4, [%2]\n"
-				: "=&r" (res), "=&r" (oldval)
-				: "r" (ptr), "Ir" (old), "r" (new)
-				: "memory", "cc");
-		} while (res);
+		oldval = __cmpxchg8(ptr, old, new);
 		break;
 	case 2:
-		do {
-			asm volatile("@ __cmpxchg1\n"
-			"	ldrexh	%1, [%2]\n"
-			"	mov	%0, #0\n"
-			"	teq	%1, %3\n"
-			"	strexheq %0, %4, [%2]\n"
-				: "=&r" (res), "=&r" (oldval)
-				: "r" (ptr), "Ir" (old), "r" (new)
-				: "memory", "cc");
-		} while (res);
+		oldval = __cmpxchg16(ptr, old, new);
 		break;
 #endif
 	case 4:
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 63479ee..942659a 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -21,7 +21,29 @@
 #define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
 #define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define sync_cmpxchg			cmpxchg
 
+static inline unsigned long sync_cmpxchg(volatile void *ptr,
+										 unsigned long old,
+										 unsigned long new)
+{
+	unsigned long oldval;
+	int size = sizeof(*(ptr));
+
+	smp_mb();
+	switch (size) {
+	case 1:
+		oldval = __cmpxchg8(ptr, old, new);
+		break;
+	case 2:
+		oldval = __cmpxchg16(ptr, old, new);
+		break;
+	default:
+		oldval = __cmpxchg(ptr, old, new, size);
+		break;
+	}
+	smp_mb();
+
+	return oldval;
+}
 
 #endif
diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
index 8b1f37b..2032ee6 100644
--- a/arch/arm/include/asm/xen/events.h
+++ b/arch/arm/include/asm/xen/events.h
@@ -16,7 +16,37 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
 	return raw_irqs_disabled_flags(regs->ARM_cpsr);
 }
 
-#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr),	\
+#ifdef CONFIG_GENERIC_ATOMIC64
+/* if CONFIG_GENERIC_ATOMIC64 is defined we cannot use the generic
+ * atomic64_xchg function because it is implemented using spin locks.
+ * Here we need proper atomic instructions to read and write memory
+ * shared with the hypervisor.
+ */
+static inline u64 xen_atomic64_xchg(atomic64_t *ptr, u64 new)
+{
+	u64 result;
+	unsigned long tmp;
+
+	smp_mb();
+
+	__asm__ __volatile__("@ xen_atomic64_xchg\n"
+"1:	ldrexd	%0, %H0, [%3]\n"
+"	strexd	%1, %4, %H4, [%3]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
+	: "r" (&ptr->counter), "r" (new)
+	: "cc");
+
+	smp_mb();
+
+	return result;
+}
+#else
+#define xen_atomic64_xchg atomic64_xchg
+#endif
+
+#define xchg_xen_ulong(ptr, val) xen_atomic64_xchg(container_of((ptr),	\
 							    atomic64_t,	\
 							    counter), (val))
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:51:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5bjg-0003MO-Vs; Tue, 21 Jan 2014 13:51:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5bjf-0003MJ-Uz
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 13:51:24 +0000
Received: from [193.109.254.147:38118] by server-3.bemta-14.messagelabs.com id
	61/6B-11000-B5B7ED25; Tue, 21 Jan 2014 13:51:23 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390312282!9937424!1
X-Originating-IP: [62.94.10.166]
X-SpamReason: No, hits=1.7 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi45NC4xMC4xNjYgPT4gNDI0MDA=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi45NC4xMC4xNjYgPT4gNDI0MDA=\n,BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7279 invoked from network); 21 Jan 2014 13:51:22 -0000
Received: from mp1-smtp-6.eutelia.it (HELO mp1-smtp-6.eutelia.it)
	(62.94.10.166) by server-13.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 13:51:22 -0000
Received: from FantuNB.m2r.local (ip-73-126.sn2.eutelia.it [83.211.73.126])
	by mp1-smtp-6.eutelia.it (Eutelia) with ESMTP id E4C046B4CDD;
	Tue, 21 Jan 2014 14:51:15 +0100 (CET)
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: xen-devel@lists.xensource.com
Date: Tue, 21 Jan 2014 14:51:08 +0100
Message-Id: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
X-Mailer: git-send-email 1.7.9.5
Cc: George.Dunlap@eu.citrix.com, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make the rdname function work with xl: xl's dry-run output reports the
domain name as a JSON-style "name" field, while the existing sed
expression only matched xm's SXP (name ...) form, so match both.

Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
---
 tools/hotplug/Linux/init.d/xendomains |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/Linux/init.d/xendomains b/tools/hotplug/Linux/init.d/xendomains
index 38371af..59f1e3d 100644
--- a/tools/hotplug/Linux/init.d/xendomains
+++ b/tools/hotplug/Linux/init.d/xendomains
@@ -186,7 +186,7 @@ contains_something()
 rdname()
 {
     NM=$($CMD create --quiet --dryrun --defconfig "$1" |
-         sed -n 's/^.*(name \(.*\))$/\1/p')
+         sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p')
 }
 
 rdnames()
-- 
1.7.9.5
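The one-line sed change above adds a second expression so rdname extracts the name from either output style. A small sketch with hypothetical sample lines (xm's dry run prints SXP, xl's prints JSON-style fields):

```shell
# Hypothetical sample lines from "create --dryrun" output:
sxp_line='    (name arkivi)'            # xm-style SXP
json_line='    "name": "arkivi",'       # xl-style JSON

# The combined expression prints the name whichever form appears.
printf '%s\n' "$sxp_line"  | sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p'
printf '%s\n' "$json_line" | sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p'
# Both print: arkivi
```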


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:54:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5bmb-0003TE-Js; Tue, 21 Jan 2014 13:54:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5bma-0003T7-8f
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 13:54:24 +0000
Received: from [85.158.143.35:13073] by server-1.bemta-4.messagelabs.com id
	1C/75-02132-F0C7ED25; Tue, 21 Jan 2014 13:54:23 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390312462!13101900!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21920 invoked from network); 21 Jan 2014 13:54:22 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 13:54:22 -0000
Received: by mail-ea0-f180.google.com with SMTP id f15so3782512eak.11
	for <xen-devel@lists.xensource.com>;
	Tue, 21 Jan 2014 05:54:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=1NcdHb6grAaCDVcvO6YDM1R7yzHWRtX/Hw8Ve2lv3j4=;
	b=J5o+yUSszICmGL8Tk9HcmgQfZtQpyOCgpBW16esKVG61WCjkD9zHqHW6hObd7hMKqp
	l/opQLQu2WMl3eOvLit7tWXZP4me5ohTgCj6mZsct8i9MMd4PDpAV+jP79Zj9bl4XvKC
	nfMR8r/yAW+G51G8lgKbLFRN7BvKfPfRkzUrymMGBg/pzW6rNB+UkdNY6fl5+R6JM6+P
	MsWcWhYg4JkHRfgOBrAaO4gr/hP5fizFWXtRxwD4lIpnQz0igiPTIMd3JWFi3YJ7tdbu
	eovisLhSGWAaqghafHINgNkBXHFNkPh2wVtkkdwmb5BcWdXAWinxYC8Jg3VIrIUk5C+O
	n+rw==
X-Gm-Message-State: ALoCoQn/doEay5Uzbez5URVK7Zg5ea5dKjgKeHKVTCFz9O6uFdYiSMfjW3QRzy2kts23AcCLupXk
X-Received: by 10.14.205.201 with SMTP id j49mr1345476eeo.85.1390312462298;
	Tue, 21 Jan 2014 05:54:22 -0800 (PST)
Received: from [192.168.1.7] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id
	o13sm15070379eex.19.2014.01.21.05.54.21
	for <xen-devel@lists.xensource.com>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 05:54:21 -0800 (PST)
Message-ID: <52DE7C0F.3030102@m2r.biz>
Date: Tue, 21 Jan 2014 14:54:23 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
References: <52D95538.2080602@m2r.biz>
In-Reply-To: <52D95538.2080602@m2r.biz>
Subject: Re: [Xen-devel] Xendomains service start show xl error on domain
 autostart already existing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/2014 17:07, Fabio Fantoni wrote:
> I don't know if this was already reported.
> Using xl (with xen-unstable commit
> 025c1b755afc9a9f42f71ef167c20fdc616b1d2d), starting the xendomains
> service correctly restores all saved domUs, but it also tries to boot
> the domains listed for autostart, printing an error for each instead
> of skipping them:
>
> service xendomains start
> Restoring Xen domains: arkivi clearos6 m2rsvc1 nagios office1_w7 s4pdc 
> service2_w7 svn
> Starting auto Xen domains: arkivi.cfglibxl: error: 
> libxl.c:323:libxl__domain_rename: domain with name "arkivi" already 
> exists.
> libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
> domain: -6
>
> An error occurred while creating domain arkivi.cfg:
>
> !
>  clearos6.cfglibxl: error: libxl.c:323:libxl__domain_rename: domain 
> with name "clearos6" already exists.
> libxl: error: libxl_create.c:741:initiate_domain_create: cannot make 
> domain: -6
>
> An error occurred while creating domain clearos6.cfg:
>
> !
> ...
>
> Thanks for any reply, and sorry for my bad English.

Fixed with this patch:
http://lists.xen.org/archives/html/xen-devel/2014-01/msg01773.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 13:56:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 13:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5boK-0003bf-At; Tue, 21 Jan 2014 13:56:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5boJ-0003bW-JS
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 13:56:11 +0000
Received: from [193.109.254.147:14152] by server-6.bemta-14.messagelabs.com id
	07/8A-14958-A7C7ED25; Tue, 21 Jan 2014 13:56:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390312567!10736529!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22192 invoked from network); 21 Jan 2014 13:56:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 13:56:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94869946"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 13:56:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 08:56:07 -0500
Message-ID: <1390312565.20516.119.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date: Tue, 21 Jan 2014 13:56:05 +0000
In-Reply-To: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xensource.com,
	Ian.Jackson@eu.citrix.com, Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
> Make rdname function work with xl
>
> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Although I would have preferred a slightly more verbose changelog.

> ---
>  tools/hotplug/Linux/init.d/xendomains |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/hotplug/Linux/init.d/xendomains b/tools/hotplug/Linux/init.d/xendomains
> index 38371af..59f1e3d 100644
> --- a/tools/hotplug/Linux/init.d/xendomains
> +++ b/tools/hotplug/Linux/init.d/xendomains
> @@ -186,7 +186,7 @@ contains_something()
>  rdname()
>  {
>      NM=$($CMD create --quiet --dryrun --defconfig "$1" |
> -         sed -n 's/^.*(name \(.*\))$/\1/p')
> +         sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p')

>  }
>  
>  rdnames()



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 14:36:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 14:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5cR3-0005G5-U1; Tue, 21 Jan 2014 14:36:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5cR2-0005G0-BW
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 14:36:12 +0000
Received: from [85.158.137.68:52804] by server-9.bemta-3.messagelabs.com id
	90/AA-13104-BD58ED25; Tue, 21 Jan 2014 14:36:11 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390314968!9636972!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11140 invoked from network); 21 Jan 2014 14:36:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 14:36:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94888546"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 14:36:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 09:36:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5cFC-0002z2-JM;
	Tue, 21 Jan 2014 14:23:58 +0000
Date: Tue, 21 Jan 2014 14:22:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52DE78BF.2070909@citrix.com>
Message-ID: <alpine.DEB.2.02.1401211416420.21510@kaball.uk.xensource.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
	<52DE78BF.2070909@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, David Vrabel wrote:
> On 21/01/14 12:26, Stefano Stabellini wrote:
> > On Mon, 20 Jan 2014, Zoltan Kiss wrote:
> >
> >> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> >> -				       &kmap_ops[i] : NULL);
> >> +		if (m2p_override)
> >> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> >> +					       &kmap_ops[i] : NULL);
> >> +		else {
> >> +			unsigned long pfn = page_to_pfn(pages[i]);
> >> +			WARN_ON(PagePrivate(pages[i]));
> >> +			SetPagePrivate(pages[i]);
> >> +			set_page_private(pages[i], mfn);
> >> +			pages[i]->index = pfn_to_mfn(pfn);
> >> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> >> +				return -ENOMEM;
> > 
> > What happens if the page is PageHighMem?
> > 
> > This looks like a subset of m2p_add_override, but it is missing some
> > relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
> > check.  Maybe we can find a way to avoid duplicating the code.
> > We could split m2p_add_override in two functions or add yet another
> > parameter to it.
> 
> The PageHighMem() check isn't relevant as we're not mapping anything
> here.  Also, a page used only for a kernel grant mapping cannot be highmem.
> 
> The check for a local mfn and the additional set_phys_to_machine() is
> only necessary if something tries an mfn_to_pfn() on the local mfn.  We
> can only omit adding an m2p override if we know nothing will try
> mfn_to_pfn(), therefore the check and set_phys_to_machine() are unnecessary.

OK, you convinced me that the two checks are superfluous for this case.

Can we still avoid the code duplication by removing the corresponding
code from m2p_add_override and m2p_remove_override and doing the
set_page_private thing uniquely in grant-table.c?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 14:40:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 14:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5cVA-0005iB-Tq; Tue, 21 Jan 2014 14:40:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5cV8-0005i6-HL
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 14:40:26 +0000
Received: from [193.109.254.147:56495] by server-13.bemta-14.messagelabs.com
	id 86/B2-19374-9D68ED25; Tue, 21 Jan 2014 14:40:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390315222!9932061!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23619 invoked from network); 21 Jan 2014 14:40:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 14:40:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92835010"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 14:40:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 09:40:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5cV1-00088M-Ky;
	Tue, 21 Jan 2014 14:40:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5cUz-0004nE-Gg;
	Tue, 21 Jan 2014 14:40:17 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.34512.368209.648082@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 14:40:16 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390211808.20516.10.camel@kazak.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
	<21209.29400.383300.983690@mariner.uk.xensource.com>
	<1390211808.20516.10.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 12/12] libxl: fork: Share SIGCHLD handler amongst ctxs"):
> It's hard to believe but that seems to be the only comment I have, I
> think I actually grokked the whole locking via SIGCHLD deferral thing
> and it seems to make sense.
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Following a pub conversation, and a conversation with a friend of mine
last night over a glass of port, I think this fixup patch needs to be
applied too.

Jim, I'm going to look at your crash now.

Ian.

>From 7a593aa6903f4a2e3b927e546da32582184843f5 Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 14:22:41 +0000
Subject: [PATCH] libxl: fork: Fixup SIGCHLD sharing

* Use a mutex for defer_sigchld, to guard against concurrency between
  the thread calling defer_sigchld and an instance of the primary
  signal handler on another thread.

* libxl_sigchld_owner_libxl_always is incompatible with SIGCHLD sharing.
  Document this correctly.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxl/libxl_event.h |   14 ++++++++------
 tools/libxl/libxl_fork.c  |   26 ++++++++++++++++++++++++--
 2 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index f0703f6..ca43cb9 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -470,16 +470,18 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *
  *     libxl_sigchld_owner_libxl_always:
  *
- *       The application expects libxl to reap all of its children,
- *       and provides a callback to be notified of their exit
- *       statuses.  The application may have multiple libxl_ctxs
- *       configured this way.
+ *       The application expects this libxl ctx to reap all of the
+ *       process's children, and provides a callback to be notified of
+ *       their exit statuses.  The application must have only one
+ *       libxl_ctx configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
  *
  *       The application expects to reap all of its own children
- *       synchronously, and does not use SIGCHLD.  libxl is
- *       to install a SIGCHLD handler.
+ *       synchronously, and does not use SIGCHLD.  libxl is to install
+ *       a SIGCHLD handler.  The application may have multiple
+ *       libxl_ctxs configured this way; in which case all of its ctxs
+ *       must be so configured.
  */
 
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 5b42bad..2432512 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -51,6 +51,7 @@ static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
  * asynchronously by the signal handler) by sigchld_defer (see
  * below). */
 static bool sigchld_installed; /* 0 means not */
+static pthread_mutex_t sigchld_defer_mutex = PTHREAD_MUTEX_INITIALIZER;
 static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
     LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
@@ -188,11 +189,17 @@ static void sigchld_handler(int signo)
     libxl_ctx *notify;
     int esave = errno;
 
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
+
     LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
         int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
         assert(!e); /* errors are probably EBADF, very bad */
     }
 
+    r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
     errno = esave;
 }
 
@@ -221,8 +228,10 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
  * must be reentrant (see the comment in release_sigchld).
  *
  * Callers have the atfork_lock so there is no risk of concurrency
- * within these functions, aside obviously from the risk of being
- * interrupted by the signal.
+ * within these functions, aside from the risk of being interrupted by
+ * the signal.  We use sigchld_defer_mutex to guard against the
+ * possibility of the real signal handler being still running on
+ * another thread.
  */
 
 static volatile sig_atomic_t sigchld_occurred_while_deferred;
@@ -235,12 +244,25 @@ static void sigchld_handler_when_deferred(int signo)
 static void defer_sigchld(void)
 {
     assert(sigchld_installed);
+
     sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+
+    /* Now _this thread_ cannot any longer be interrupted by the
+     * signal, so we can take the mutex without risk of deadlock.  If
+     * another thread is in the signal handler, either it or we will
+     * block and wait for the other. */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
 }
 
 static void release_sigchld(void)
 {
     assert(sigchld_installed);
+
+    int r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
     sigchld_sethandler_raw(sigchld_handler, 0);
     if (sigchld_occurred_while_deferred) {
         sigchld_occurred_while_deferred = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+     * signal, so we can take the mutex without risk of deadlock.  If
+     * another thread is in the signal handler, either it or we will
+     * block and wait for the other. */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
 }
 
 static void release_sigchld(void)
 {
     assert(sigchld_installed);
+
+    int r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
     sigchld_sethandler_raw(sigchld_handler, 0);
     if (sigchld_occurred_while_deferred) {
         sigchld_occurred_while_deferred = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 14:46:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 14:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5cah-0005rk-Vb; Tue, 21 Jan 2014 14:46:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5cah-0005re-1P
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 14:46:11 +0000
Received: from [85.158.137.68:9993] by server-16.bemta-3.messagelabs.com id
	F0/D6-26128-2388ED25; Tue, 21 Jan 2014 14:46:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390315568!10407991!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9309 invoked from network); 21 Jan 2014 14:46:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 14:46:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92837896"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 14:46:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 09:46:07 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5cac-00089u-FL;
	Tue, 21 Jan 2014 14:46:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5caa-0004oX-Sd;
	Tue, 21 Jan 2014 14:46:04 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.34859.759826.374311@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 14:46:03 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52DD678F.3070504@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
...
> It looks like libxl is waiting for a read with a ctx locked on thread 5,
> then receives an occurred_fd event on the same ctx in thread 1.  But it
> is not clear to me why read() is blocking...

Presumably the sigchld handler (top half) lost the race with a
previous instance of the sigchld_selfpipe_handler (bottom half), causing
sigchld_selfpipe_handler to run when the pipe was in fact empty.

And the real bug is that nothing sets the pipe to nonblocking!
Bear with me.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 14:53:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 14:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5chW-0006MG-4e; Tue, 21 Jan 2014 14:53:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5chU-0006MB-59
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 14:53:12 +0000
Received: from [85.158.143.35:22551] by server-1.bemta-4.messagelabs.com id
	99/27-02132-7D98ED25; Tue, 21 Jan 2014 14:53:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390315989!10423673!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24563 invoked from network); 21 Jan 2014 14:53:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 14:53:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94896580"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 14:53:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 09:53:08 -0500
Message-ID: <1390315987.32519.1.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 14:53:07 +0000
In-Reply-To: <21214.34512.368209.648082@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
	<21209.29400.383300.983690@mariner.uk.xensource.com>
	<1390211808.20516.10.camel@kazak.uk.xensource.com>
	<21214.34512.368209.648082@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 14:40 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH 12/12] libxl: fork: Share SIGCHLD handler amongst ctxs"):
> > It's hard to believe but that seems to be the only comment I have, I
> > think I actually grokked the whole locking via SIGCHLD deferral thing
> > and it seems to make sense.
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Following a pub conversation, and a conversation with a friend of mine
> last night over a glass of port, I think this fixup patch needs to be
> applied too.
> 
> Jim, I'm going to look at your crash now.
> 
> Ian.
> 
> From 7a593aa6903f4a2e3b927e546da32582184843f5 Mon Sep 17 00:00:00 2001
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Date: Tue, 21 Jan 2014 14:22:41 +0000
> Subject: [PATCH] libxl: fork: Fixup SIGCHLD sharing
> 
> * Use a mutex for defer_sigchld, to guard against concurrency between
>   the thread calling defer_sigchld and an instance of the primary
>   signal handler on another thread.
> 
> * libxl_sigchld_owner_libxl_always is incompatible with SIGCHLD sharing.
>   Document this correctly.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Including if you want to fold this into another patch.

> ---
>  tools/libxl/libxl_event.h |   14 ++++++++------
>  tools/libxl/libxl_fork.c  |   26 ++++++++++++++++++++++++--
>  2 files changed, 32 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
> index f0703f6..ca43cb9 100644
> --- a/tools/libxl/libxl_event.h
> +++ b/tools/libxl/libxl_event.h
> @@ -470,16 +470,18 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
>   *
>   *     libxl_sigchld_owner_libxl_always:
>   *
> - *       The application expects libxl to reap all of its children,
> - *       and provides a callback to be notified of their exit
> - *       statuses.  The application may have multiple libxl_ctxs
> - *       configured this way.
> + *       The application expects this libxl ctx to reap all of the
> + *       process's children, and provides a callback to be notified of
> + *       their exit statuses.  The application must have only one
> + *       libxl_ctx configured this way.
>   *
>   *     libxl_sigchld_owner_libxl_always_selective_reap:
>   *
>   *       The application expects to reap all of its own children
> - *       synchronously, and does not use SIGCHLD.  libxl is
> - *       to install a SIGCHLD handler.
> + *       synchronously, and does not use SIGCHLD.  libxl is to install
> + *       a SIGCHLD handler.  The application may have multiple
> + *       libxl_ctxs configured this way; in which case all of its ctxs
> + *       must be so configured.
>   */
>  
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index 5b42bad..2432512 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -51,6 +51,7 @@ static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
>   * asynchronously by the signal handler) by sigchld_defer (see
>   * below). */
>  static bool sigchld_installed; /* 0 means not */
> +static pthread_mutex_t sigchld_defer_mutex = PTHREAD_MUTEX_INITIALIZER;
>  static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
>      LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
>  static struct sigaction sigchld_saved_action;
> @@ -188,11 +189,17 @@ static void sigchld_handler(int signo)
>      libxl_ctx *notify;
>      int esave = errno;
>  
> +    int r = pthread_mutex_lock(&sigchld_defer_mutex);
> +    assert(!r);
> +
>      LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
>          int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
>          assert(!e); /* errors are probably EBADF, very bad */
>      }
>  
> +    r = pthread_mutex_unlock(&sigchld_defer_mutex);
> +    assert(!r);
> +
>      errno = esave;
>  }
>  
> @@ -221,8 +228,10 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
>   * must be reentrant (see the comment in release_sigchld).
>   *
>   * Callers have the atfork_lock so there is no risk of concurrency
> - * within these functions, aside obviously from the risk of being
> - * interrupted by the signal.
> + * within these functions, aside from the risk of being interrupted by
> + * the signal.  We use sigchld_defer_mutex to guard against the
> + * possibility of the real signal handler being still running on
> + * another thread.
>   */
>  
>  static volatile sig_atomic_t sigchld_occurred_while_deferred;
> @@ -235,12 +244,25 @@ static void sigchld_handler_when_deferred(int signo)
>  static void defer_sigchld(void)
>  {
>      assert(sigchld_installed);
> +
>      sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
> +
> +    /* Now _this thread_ cannot any longer be interrupted by the
> +     * signal, so we can take the mutex without risk of deadlock.  If
> +     * another thread is in the signal handler, either it or we will
> +     * block and wait for the other. */
> +
> +    int r = pthread_mutex_lock(&sigchld_defer_mutex);
> +    assert(!r);
>  }
>  
>  static void release_sigchld(void)
>  {
>      assert(sigchld_installed);
> +
> +    int r = pthread_mutex_unlock(&sigchld_defer_mutex);
> +    assert(!r);
> +
>      sigchld_sethandler_raw(sigchld_handler, 0);
>      if (sigchld_occurred_while_deferred) {
>          sigchld_occurred_while_deferred = 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 14:56:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 14:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ckT-0006T6-O5; Tue, 21 Jan 2014 14:56:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5ckT-0006T1-6D
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 14:56:17 +0000
Received: from [85.158.143.35:2357] by server-2.bemta-4.messagelabs.com id
	68/5E-11386-09A8ED25; Tue, 21 Jan 2014 14:56:16 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390316174!5949304!1
X-Originating-IP: [62.94.10.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuOTQuMTAuMTYyID0+IDQyMDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32161 invoked from network); 21 Jan 2014 14:56:15 -0000
Received: from mp1-smtp-2.eutelia.it (HELO mp1-smtp-2.eutelia.it)
	(62.94.10.162) by server-6.tower-21.messagelabs.com with SMTP;
	21 Jan 2014 14:56:15 -0000
Received: from FantuNB.m2r.local (ip-73-126.sn2.eutelia.it [83.211.73.126])
	by mp1-smtp-2.eutelia.it (Eutelia) with ESMTP id 40BD9DF2ED;
	Tue, 21 Jan 2014 15:56:13 +0100 (CET)
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: xen-devel@lists.xensource.com
Date: Tue, 21 Jan 2014 15:56:10 +0100
Message-Id: <1390316170-4938-1-git-send-email-fabio.fantoni@m2r.biz>
X-Mailer: git-send-email 1.7.9.5
Cc: George.Dunlap@eu.citrix.com, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] tools/libxl: comment cleanup in libxl_dm.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Removed some unneeded comment lines.

Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
---
 tools/libxl/libxl_dm.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 76ac9e2..e87f606 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -417,7 +417,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     /*
      * Remove default devices created by qemu. Qemu will create only devices
      * defined by xen, since the devices not defined by xen are not usable.
-     * Remove deleting of empty floppy no more needed with nodefault.
      */
     flexarray_append(dm_args, "-nodefaults");
 
@@ -475,10 +474,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         /* XXX sdl->{display,xauthority} into $DISPLAY/$XAUTHORITY */
     }
 
-    /*if (info->type == LIBXL_DOMAIN_TYPE_PV && !b_info->nographic) {
-        flexarray_vappend(dm_args, "-vga", "xenfb", NULL);
-      } never was possible?*/
-
     if (keymap) {
         flexarray_vappend(dm_args, "-k", keymap, NULL);
     }
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:10:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5cxZ-0007Gl-DE; Tue, 21 Jan 2014 15:09:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5cxX-0007Gg-JQ
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:09:47 +0000
Received: from [193.109.254.147:6355] by server-11.bemta-14.messagelabs.com id
	FB/FD-20576-ABD8ED25; Tue, 21 Jan 2014 15:09:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390316984!12197234!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18835 invoked from network); 21 Jan 2014 15:09:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:09:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92851580"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:09:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:09:16 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5cx2-0008HG-1H;
	Tue, 21 Jan 2014 15:09:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5cx0-0005WE-Dd;
	Tue, 21 Jan 2014 15:09:14 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.36248.856512.878113@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 15:09:12 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390315987.32519.1.camel@kazak.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<1389975845-1195-13-git-send-email-ian.jackson@eu.citrix.com>
	<21209.29400.383300.983690@mariner.uk.xensource.com>
	<1390211808.20516.10.camel@kazak.uk.xensource.com>
	<21214.34512.368209.648082@mariner.uk.xensource.com>
	<1390315987.32519.1.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 12/12] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 12/12] libxl: fork: Share SIGCHLD handler amongst ctxs"):
> On Tue, 2014-01-21 at 14:40 +0000, Ian Jackson wrote:
...
> > * Use a mutex for defer_sigchld, to guard against concurrency between
> >   the thread calling defer_sigchld and an instance of the primary
> >   signal handler on another thread.
> > 
> > * libxl_sigchld_owner_libxl_always is incompatible with SIGCHLD sharing.
> >   Document this correctly.
> > 
> > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Including if you want to fold this into another patch.

Thanks, I intend to fold it into my series.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:11:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5czZ-0007Tj-08; Tue, 21 Jan 2014 15:11:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5czX-0007Ta-Mw
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:11:52 +0000
Received: from [85.158.139.211:14448] by server-17.bemta-5.messagelabs.com id
	88/5D-19152-63E8ED25; Tue, 21 Jan 2014 15:11:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390317108!11086032!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24408 invoked from network); 21 Jan 2014 15:11:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:11:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92852947"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:11:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:11:47 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5czN-0008Hr-5H;
	Tue, 21 Jan 2014 15:11:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5czL-0005YG-AG;
	Tue, 21 Jan 2014 15:11:39 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 15:11:27 +0000
Message-ID: <1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <21214.34859.759826.374311@mariner.uk.xensource.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 13/12] libxl: events: Break out
	libxl__pipe_nonblock, _close
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Break out the pipe creation and destruction from the poller code
into two new functions libxl__pipe_nonblock and libxl__pipe_close.

No functional change apart from minor differences in the exact
log messages.

Also move libxl__self_pipe_wakeup and libxl__self_pipe_eatall into the
new pipe utilities section in libxl_event.c; this is pure code motion.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.c    |  104 ++++++++++++++++++++++++++----------------
 tools/libxl/libxl_internal.h |    9 ++++
 2 files changed, 73 insertions(+), 40 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index bdef7ac..35a8f81 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1271,26 +1271,81 @@ int libxl_event_check(libxl_ctx *ctx, libxl_event **event_r,
 }
 
 /*
- * Manipulation of pollers
+ * Utilities for pipes (specifically, useful for self-pipes)
  */
 
-int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
+void libxl__pipe_close(int fds[2])
+{
+    if (fds[0] >= 0) close(fds[0]);
+    if (fds[1] >= 0) close(fds[1]);
+    fds[0] = fds[1] = -1;
+}
+
+int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2])
 {
     int r, rc;
-    p->fd_polls = 0;
-    p->fd_rindices = 0;
 
-    r = pipe(p->wakeup_pipe);
+    r = libxl_pipe(ctx, fds);
     if (r) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "cannot create poller pipe");
+        fds[0] = fds[1] = -1;
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[0], 1);
+    rc = libxl_fd_set_nonblock(ctx, fds[0], 1);
     if (rc) goto out;
 
-    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[1], 1);
+    rc = libxl_fd_set_nonblock(ctx, fds[1], 1);
+    if (rc) goto out;
+
+    return 0;
+
+ out:
+    libxl__pipe_close(fds);
+    return rc;
+}
+
+int libxl__self_pipe_wakeup(int fd)
+{
+    static const char buf[1] = "";
+
+    for (;;) {
+        int r = write(fd, buf, 1);
+        if (r==1) return 0;
+        assert(r==-1);
+        if (errno == EINTR) continue;
+        if (errno == EWOULDBLOCK) return 0;
+        assert(errno);
+        return errno;
+    }
+}
+
+int libxl__self_pipe_eatall(int fd)
+{
+    char buf[256];
+    for (;;) {
+        int r = read(fd, buf, sizeof(buf));
+        if (r == sizeof(buf)) continue;
+        if (r >= 0) return 0;
+        assert(r == -1);
+        if (errno == EINTR) continue;
+        if (errno == EWOULDBLOCK) return 0;
+        assert(errno);
+        return errno;
+    }
+}
+
+/*
+ * Manipulation of pollers
+ */
+
+int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
+{
+    int rc;
+    p->fd_polls = 0;
+    p->fd_rindices = 0;
+
+    rc = libxl__pipe_nonblock(ctx, p->wakeup_pipe);
     if (rc) goto out;
 
     return 0;
@@ -1302,8 +1357,7 @@ int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
 
 void libxl__poller_dispose(libxl__poller *p)
 {
-    if (p->wakeup_pipe[1] > 0) close(p->wakeup_pipe[1]);
-    if (p->wakeup_pipe[0] > 0) close(p->wakeup_pipe[0]);
+    libxl__pipe_close(p->wakeup_pipe);
     free(p->fd_polls);
     free(p->fd_rindices);
 }
@@ -1347,36 +1401,6 @@ void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p)
     if (e) LIBXL__EVENT_DISASTER(egc, "cannot poke watch pipe", e, 0);
 }
 
-int libxl__self_pipe_wakeup(int fd)
-{
-    static const char buf[1] = "";
-
-    for (;;) {
-        int r = write(fd, buf, 1);
-        if (r==1) return 0;
-        assert(r==-1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
-int libxl__self_pipe_eatall(int fd)
-{
-    char buf[256];
-    for (;;) {
-        int r = read(fd, buf, sizeof(buf));
-        if (r == sizeof(buf)) continue;
-        if (r >= 0) return 0;
-        assert(r == -1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
 /*
  * Main event loop iteration
  */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 8429448..9d17586 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -509,6 +509,15 @@ _hidden char *libxl__strndup(libxl__gc *gc_opt, const char *c, size_t n) NN1;
  * string. (similar to a gc'd dirname(3)). */
 _hidden char *libxl__dirname(libxl__gc *gc_opt, const char *s) NN1;
 
+/* Make a pipe and set both ends nonblocking.  On error, nothing
+ * is left open and both fds[]==-1, and a message is logged.
+ * Useful for self-pipes. */
+_hidden int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2]);
+/* Closes the pipe fd(s).  Either or both of fds[] may be -1 meaning
+ * `not open'.  Ignores any errors.  Sets fds[] to -1. */
+_hidden void libxl__pipe_close(int fds[2]);
+
+
 /* Each of these logs errors and returns a libxl error code.
  * They do not mind if path is already removed.
  * For _file, path must not be a directory; for _directory it must be. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
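[Editorial note: the self-pipe utilities factored out by the patch above follow a well-known pattern — a nonblocking pipe whose write end wakes a poll loop and whose read end is drained before re-polling. A standalone sketch under generic, hypothetical names (`pipe_nonblock2`, etc.), not libxl's API:]

```c
/* Self-pipe pattern: nonblocking pipe for waking an event loop. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Close either end that is open; leave both slots as -1. */
static void pipe_close2(int fds[2])
{
    if (fds[0] >= 0) close(fds[0]);
    if (fds[1] >= 0) close(fds[1]);
    fds[0] = fds[1] = -1;
}

/* Make a pipe with both ends O_NONBLOCK; on error nothing stays open. */
static int pipe_nonblock2(int fds[2])
{
    if (pipe(fds)) { fds[0] = fds[1] = -1; return -1; }
    for (int i = 0; i < 2; i++) {
        int flags = fcntl(fds[i], F_GETFL);
        if (flags < 0 || fcntl(fds[i], F_SETFL, flags | O_NONBLOCK) < 0) {
            pipe_close2(fds);
            return -1;
        }
    }
    return 0;
}

/* Write one wakeup byte; a full pipe (EWOULDBLOCK) is fine, since a
 * wakeup is then already pending. Returns 0 or an errno value. */
static int self_pipe_wakeup2(int fd)
{
    for (;;) {
        ssize_t r = write(fd, "", 1);
        if (r == 1) return 0;
        if (errno == EINTR) continue;
        if (errno == EWOULDBLOCK || errno == EAGAIN) return 0;
        return errno;
    }
}

/* Drain all pending bytes; an empty pipe (EWOULDBLOCK) means done. */
static int self_pipe_eatall2(int fd)
{
    char buf[256];
    for (;;) {
        ssize_t r = read(fd, buf, sizeof(buf));
        if (r == (ssize_t)sizeof(buf)) continue;
        if (r >= 0) return 0;
        if (errno == EINTR) continue;
        if (errno == EWOULDBLOCK || errno == EAGAIN) return 0;
        return errno;
    }
}
```

The write end can be poked from any context that must wake the poller; the loop drains the read end with `self_pipe_eatall2` so spurious wakeups coalesce into one.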

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:11:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5czd-0007Ud-D8; Tue, 21 Jan 2014 15:11:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5czc-0007UK-4d
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:11:56 +0000
Received: from [85.158.137.68:56038] by server-1.bemta-3.messagelabs.com id
	2A/59-29598-B3E8ED25; Tue, 21 Jan 2014 15:11:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390317112!10462199!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20606 invoked from network); 21 Jan 2014 15:11:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:11:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94907717"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 15:11:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:11:51 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5czR-0008Hu-6Z;
	Tue, 21 Jan 2014 15:11:45 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5czP-0005YN-H2;
	Tue, 21 Jan 2014 15:11:43 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 15:11:28 +0000
Message-ID: <1390317088-21308-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
	<1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 14/12] libxl: fork: Make SIGCHLD self-pipe
	nonblocking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the new libxl__pipe_nonblock and _close functions, rather than
open coding the same logic.  Now the pipe is nonblocking, which avoids
a race which could result in libxl deadlocking in a multithreaded
program.

Reported-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl.c      |    6 +-----
 tools/libxl/libxl_fork.c |   12 +++---------
 2 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 4679b51..3730074 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -171,11 +171,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
     libxl__sigchld_notneeded(gc);
-
-    if (ctx->sigchld_selfpipe[0] >= 0) {
-        close(ctx->sigchld_selfpipe[0]);
-        close(ctx->sigchld_selfpipe[1]);
-    }
+    libxl__pipe_close(ctx->sigchld_selfpipe);
 
     pthread_mutex_destroy(&ctx->lock);
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2432512..1d0017b 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -343,17 +343,11 @@ void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 
 int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
-    int r, rc;
+    int rc;
 
     if (CTX->sigchld_selfpipe[0] < 0) {
-        r = pipe(CTX->sigchld_selfpipe);
-        if (r) {
-            CTX->sigchld_selfpipe[0] = -1;
-            LIBXL__LOG_ERRNO(CTX, LIBXL__LOG_ERROR,
-                             "failed to create sigchld pipe");
-            rc = ERROR_FAIL;
-            goto out;
-        }
+        rc = libxl__pipe_nonblock(CTX, CTX->sigchld_selfpipe);
+        if (rc) goto out;
     }
     if (!libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_register(gc, &CTX->sigchld_selfpipe_efd,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dCV-00089Y-2D; Tue, 21 Jan 2014 15:25:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5dCQ-00089T-JK
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 15:25:13 +0000
Received: from [85.158.139.211:43771] by server-14.bemta-5.messagelabs.com id
	AD/58-24200-5519ED25; Tue, 21 Jan 2014 15:25:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390317907!11070192!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24868 invoked from network); 21 Jan 2014 15:25:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:25:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92860881"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:24:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 10:24:43 -0500
Message-ID: <1390317882.32519.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Tue, 21 Jan 2014 15:24:42 +0000
In-Reply-To: <1390301379.23576.110.camel@Solace>
References: <1386984785.3980.96.camel@Solace>
	<1390241799.23576.42.camel@Solace>
	<control-reply-1390242602.7482@bugs.xenproject.org>
	<1390247840.5727.2.camel@Abyss>
	<1390299991.20516.116.camel@kazak.uk.xensource.com>
	<1390301379.23576.110.camel@Solace>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 11:49 +0100, Dario Faggioli wrote:
> On mar, 2014-01-21 at 10:26 +0000, Ian Campbell wrote:
> > On Mon, 2014-01-20 at 20:57 +0100, Dario Faggioli wrote:
> > > Mmm... Perhaps it's obvious, but I don't see it... What am I doing
> > > wrong?
> > 
> > Apparently the bug tracker doesn't handle MIME encapsulation very
> > well/at all. I swear I remember adding that code...
> > 
> > I'll take a look at it, but in the meantime sending control messages in
> > plain text without any attachments etc should avoid the issue.
> > 
> Ok thanks. I'll retry creating the entry with a new mail, without any
> attachment.

Actually, after a little refactoring the fix was pretty easy, and is now
live. https://gitorious.org/emesinae if you care about the details ;-)

Ian.
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:28:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dFD-0008Gj-PP; Tue, 21 Jan 2014 15:28:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5dFB-0008GQ-Dn
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:28:02 +0000
Received: from [85.158.143.35:59909] by server-1.bemta-4.messagelabs.com id
	B4/35-02132-0029ED25; Tue, 21 Jan 2014 15:28:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390318078!13112158!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27495 invoked from network); 21 Jan 2014 15:27:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:27:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92863097"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:27:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 10:27:57 -0500
Message-ID: <1390318076.32519.4.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 15:27:56 +0000
In-Reply-To: <1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
	<1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 13/12] libxl: events: Break out
 libxl__pipe_nonblock, _close
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 15:11 +0000, Ian Jackson wrote:
> Break out the pipe creation and destruction from the poller code
> into two new functions libxl__pipe_nonblock and libxl__pipe_close.
> 
> No overall functional difference other than minor differences in exact
> log messages.
> 
> Also move libxl__self_pipe_wakeup and libxl__self_pipe_eatall into the
> new pipe utilities section in libxl_event.c; this is pure code motion.

You also switched pipe() -> libxl_pipe(), which is fine but not
mentioned here.

> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  tools/libxl/libxl_event.c    |  104 ++++++++++++++++++++++++++----------------
>  tools/libxl/libxl_internal.h |    9 ++++
>  2 files changed, 73 insertions(+), 40 deletions(-)
> 
> diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
> index bdef7ac..35a8f81 100644
> --- a/tools/libxl/libxl_event.c
> +++ b/tools/libxl/libxl_event.c
> @@ -1271,26 +1271,81 @@ int libxl_event_check(libxl_ctx *ctx, libxl_event **event_r,
>  }
>  
>  /*
> - * Manipulation of pollers
> + * Utilities for pipes (specifically, useful for self-pipes)
>   */
>  
> -int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
> +void libxl__pipe_close(int fds[2])
> +{
> +    if (fds[0] >= 0) close(fds[0]);
> +    if (fds[1] >= 0) close(fds[1]);
> +    fds[0] = fds[1] = -1;
> +}
> +
> +int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2])
>  {
>      int r, rc;
> -    p->fd_polls = 0;
> -    p->fd_rindices = 0;
>  
> -    r = pipe(p->wakeup_pipe);
> +    r = libxl_pipe(ctx, fds);
>      if (r) {
> -        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "cannot create poller pipe");
> +        fds[0] = fds[1] = -1;
>          rc = ERROR_FAIL;
>          goto out;
>      }
>  
> -    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[0], 1);
> +    rc = libxl_fd_set_nonblock(ctx, fds[0], 1);
>      if (rc) goto out;
>  
> -    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[1], 1);
> +    rc = libxl_fd_set_nonblock(ctx, fds[1], 1);
> +    if (rc) goto out;
> +
> +    return 0;
> +
> + out:
> +    libxl__pipe_close(fds);
> +    return rc;
> +}
> +
> +int libxl__self_pipe_wakeup(int fd)
> +{
> +    static const char buf[1] = "";
> +
> +    for (;;) {
> +        int r = write(fd, buf, 1);
> +        if (r==1) return 0;
> +        assert(r==-1);
> +        if (errno == EINTR) continue;
> +        if (errno == EWOULDBLOCK) return 0;
> +        assert(errno);
> +        return errno;
> +    }
> +}
> +
> +int libxl__self_pipe_eatall(int fd)
> +{
> +    char buf[256];
> +    for (;;) {
> +        int r = read(fd, buf, sizeof(buf));
> +        if (r == sizeof(buf)) continue;
> +        if (r >= 0) return 0;
> +        assert(r == -1);
> +        if (errno == EINTR) continue;
> +        if (errno == EWOULDBLOCK) return 0;
> +        assert(errno);
> +        return errno;
> +    }
> +}
> +
> +/*
> + * Manipulation of pollers
> + */
> +
> +int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
> +{
> +    int rc;
> +    p->fd_polls = 0;
> +    p->fd_rindices = 0;
> +
> +    rc = libxl__pipe_nonblock(ctx, p->wakeup_pipe);
>      if (rc) goto out;
>  
>      return 0;
> @@ -1302,8 +1357,7 @@ int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
>  
>  void libxl__poller_dispose(libxl__poller *p)
>  {
> -    if (p->wakeup_pipe[1] > 0) close(p->wakeup_pipe[1]);
> -    if (p->wakeup_pipe[0] > 0) close(p->wakeup_pipe[0]);
> +    libxl__pipe_close(p->wakeup_pipe);
>      free(p->fd_polls);
>      free(p->fd_rindices);
>  }
> @@ -1347,36 +1401,6 @@ void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p)
>      if (e) LIBXL__EVENT_DISASTER(egc, "cannot poke watch pipe", e, 0);
>  }
>  
> -int libxl__self_pipe_wakeup(int fd)
> -{
> -    static const char buf[1] = "";
> -
> -    for (;;) {
> -        int r = write(fd, buf, 1);
> -        if (r==1) return 0;
> -        assert(r==-1);
> -        if (errno == EINTR) continue;
> -        if (errno == EWOULDBLOCK) return 0;
> -        assert(errno);
> -        return errno;
> -    }
> -}
> -
> -int libxl__self_pipe_eatall(int fd)
> -{
> -    char buf[256];
> -    for (;;) {
> -        int r = read(fd, buf, sizeof(buf));
> -        if (r == sizeof(buf)) continue;
> -        if (r >= 0) return 0;
> -        assert(r == -1);
> -        if (errno == EINTR) continue;
> -        if (errno == EWOULDBLOCK) return 0;
> -        assert(errno);
> -        return errno;
> -    }
> -}
> -
>  /*
>   * Main event loop iteration
>   */
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 8429448..9d17586 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -509,6 +509,15 @@ _hidden char *libxl__strndup(libxl__gc *gc_opt, const char *c, size_t n) NN1;
>   * string. (similar to a gc'd dirname(3)). */
>  _hidden char *libxl__dirname(libxl__gc *gc_opt, const char *s) NN1;
>  
> +/* Make a pipe and set both ends nonblocking.  On error, nothing
> + * is left open and both fds[]==-1, and a message is logged.
> + * Useful for self-pipes. */
> +_hidden int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2]);
> +/* Closes the pipe fd(s).  Either or both of fds[] may be -1 meaning
> + * `not open'.  Ignores any errors.  Sets fds[] to -1. */
> +_hidden void libxl__pipe_close(int fds[2]);
> +
> +
>  /* Each of these logs errors and returns a libxl error code.
>   * They do not mind if path is already removed.
>   * For _file, path must not be a directory; for _directory it must be. */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> -        int r = read(fd, buf, sizeof(buf));
> -        if (r == sizeof(buf)) continue;
> -        if (r >= 0) return 0;
> -        assert(r == -1);
> -        if (errno == EINTR) continue;
> -        if (errno == EWOULDBLOCK) return 0;
> -        assert(errno);
> -        return errno;
> -    }
> -}
> -
>  /*
>   * Main event loop iteration
>   */
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 8429448..9d17586 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -509,6 +509,15 @@ _hidden char *libxl__strndup(libxl__gc *gc_opt, const char *c, size_t n) NN1;
>   * string. (similar to a gc'd dirname(3)). */
>  _hidden char *libxl__dirname(libxl__gc *gc_opt, const char *s) NN1;
>  
> +/* Make a pipe and set both ends nonblocking.  On error, nothing
> + * is left open and both fds[]==-1, and a message is logged.
> + * Useful for self-pipes. */
> +_hidden int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2]);
> +/* Closes the pipe fd(s).  Either or both of fds[] may be -1 meaning
> + * `not open'.  Ignores any errors.  Sets fds[] to -1. */
> +_hidden void libxl__pipe_close(int fds[2]);
> +
> +
>  /* Each of these logs errors and returns a libxl error code.
>   * They do not mind if path is already removed.
>   * For _file, path must not be a directory; for _directory it must be. */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:28:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:28:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dFj-0008L1-76; Tue, 21 Jan 2014 15:28:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5dFh-0008Kq-Vx
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:28:34 +0000
Received: from [85.158.139.211:55510] by server-14.bemta-5.messagelabs.com id
	C3/DF-24200-1229ED25; Tue, 21 Jan 2014 15:28:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390318111!11089015!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15001 invoked from network); 21 Jan 2014 15:28:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:28:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92863281"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:28:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:28:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dFd-0008N2-PW;
	Tue, 21 Jan 2014 15:28:29 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dFc-000501-5N;
	Tue, 21 Jan 2014 15:28:28 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.37402.648941.864060@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 15:28:26 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52DD678F.3070504@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> I let this run over the weekend and today noticed libvirtd was deadlocked

I have just retested xl with:
  * my 3-patch 4.4 fixes series
  * v2 of my fork series
  * the extra mutex patch "libxl: fork: Fixup SIGCHLD sharing"
  * "13/12" and "14/12" just posted
and it WFM.

Of course I don't have the same setup as Jim.

Jim: if it's not too much trouble, I'd appreciate it if you could try
that combination.

For your convenience you can find a git branch of it at
  http://xenbits.xen.org/gitweb/?p=people/iwj/xen.git;a=shortlog;h=refs/tags/wip.enumerate-pids-v2.1
aka
  git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v2.1

A useful stress test would be to occasionally run something like
   while true; do kill -CHLD <pid of libvirt process>; done
while it's busy.

Ian.


From xen-devel-bounces@lists.xen.org Tue Jan 21 15:31:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dIa-0000Rs-1A; Tue, 21 Jan 2014 15:31:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5dIY-0000Rm-Mu
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:31:30 +0000
Received: from [85.158.139.211:41018] by server-11.bemta-5.messagelabs.com id
	2A/4C-23268-2D29ED25; Tue, 21 Jan 2014 15:31:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390318287!8368173!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21364 invoked from network); 21 Jan 2014 15:31:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:31:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92864790"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:31:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:31:12 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dIG-0008Nv-3G;
	Tue, 21 Jan 2014 15:31:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dIE-00058o-El;
	Tue, 21 Jan 2014 15:31:10 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.37565.492567.78763@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 15:31:09 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390318076.32519.4.camel@kazak.uk.xensource.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
	<1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
	<1390318076.32519.4.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 13/12] libxl: events: Break out
 libxl__pipe_nonblock, _close
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 13/12] libxl: events: Break out libxl__pipe_nonblock, _close"):
> On Tue, 2014-01-21 at 15:11 +0000, Ian Jackson wrote:
> > Also move libxl__self_pipe_wakeup and libxl__self_pipe_eatall into the
> > new pipe utilities section in libxl_event.c; this is pure code motion.
> 
> You also switched pipe() -> libxl_pipe(), which is fine but not
> mentioned here.

Good point.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks,
Ian.


From xen-devel-bounces@lists.xen.org Tue Jan 21 15:33:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dK0-0000Xn-Hk; Tue, 21 Jan 2014 15:33:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5dJz-0000Xg-R8
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:32:59 +0000
Received: from [85.158.143.35:45892] by server-3.bemta-4.messagelabs.com id
	6C/5B-32360-B239ED25; Tue, 21 Jan 2014 15:32:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390318377!13130269!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30542 invoked from network); 21 Jan 2014 15:32:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:32:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94919998"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 15:32:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 10:32:56 -0500
Message-ID: <1390318375.32519.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 15:32:55 +0000
In-Reply-To: <1390317088-21308-2-git-send-email-ian.jackson@eu.citrix.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
	<1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
	<1390317088-21308-2-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 14/12] libxl: fork: Make SIGCHLD self-pipe
	nonblocking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 15:11 +0000, Ian Jackson wrote:
> Use the new libxl__pipe_nonblock and _close functions, rather than
> open coding the same logic.  Now the pipe is nonblocking, which avoids
> a race which could result in libxl deadlocking in a multithreaded
> program.

Is any additional error handling (EWOULDBLOCK etc.) required at any of
the points where the pipe is used?

I think in practice that is all in libxl__self_pipe_wakeup and
libxl__self_pipe_eatall, the callers of which treat errors as disaster
(with assert and LIBXL__EVENT_DISASTER respectively). But both wakeup
and eatall handle EWOULDBLOCK sensibly.

Assuming my analysis of what needs to be handled and where it needs to
be done matches yours:
Acked-by: Ian Campbell <ian.campbell@citrix.com>




From xen-devel-bounces@lists.xen.org Tue Jan 21 15:33:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dKs-0000e4-0x; Tue, 21 Jan 2014 15:33:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5dKq-0000do-1K
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 15:33:52 +0000
Received: from [85.158.139.211:16303] by server-4.bemta-5.messagelabs.com id
	25/7F-26791-F539ED25; Tue, 21 Jan 2014 15:33:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390318430!779549!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12855 invoked from network); 21 Jan 2014 15:33:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 15:33:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Jan 2014 15:33:50 +0000
Message-Id: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 21 Jan 2014 15:33:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part3C0FBD4B.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86: don't drop guest visible state updates
 when 64-bit PV guest is in user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since 64-bit PV uses separate kernel and user mode page tables, kernel
addresses (as usually provided via VCPUOP_register_runstate_memory_area
and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
necessarily accessible when the respective updating occurs. Add logic
for toggle_guest_mode() to take care of this (if necessary) the next
time the vCPU switches to kernel mode.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1323,10 +1323,10 @@ static void paravirt_ctxt_switch_to(stru
 }
 
 /* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+bool_t update_runstate_area(const struct vcpu *v)
 {
     if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+        return 1;
 
     if ( has_32bit_shinfo(v->domain) )
     {
@@ -1334,10 +1334,18 @@ static void update_runstate_area(struct 
 
         XLAT_vcpu_runstate_info(&info, &v->runstate);
         __copy_to_guest(v->runstate_guest.compat, &info, 1);
-        return;
+        return 1;
     }
 
-    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    return __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
+           sizeof(v->runstate);
+}
+
+static void _update_runstate_area(struct vcpu *v)
+{
+    if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
+         !(v->arch.flags & TF_kernel_mode) )
+        v->arch.pv_vcpu.need_update_runstate_area = 1;
 }
 
 static inline int need_full_gdt(struct vcpu *v)
@@ -1443,8 +1451,8 @@ void context_switch(struct vcpu *prev, s
         flush_tlb_mask(&dirty_mask);
     }
 
-    if (prev != next)
-        update_runstate_area(prev);
+    if ( prev != next )
+        _update_runstate_area(prev);
 
     if ( is_hvm_vcpu(prev) )
     {
@@ -1497,8 +1505,8 @@ void context_switch(struct vcpu *prev, s
 
     context_saved(prev);
 
-    if (prev != next)
-        update_runstate_area(next);
+    if ( prev != next )
+        _update_runstate_area(next);
 
     /* Ensure that the vcpu has an up-to-date time base. */
     update_vcpu_system_time(next);
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -740,7 +740,6 @@ static void __update_vcpu_system_time(st
 {
     struct cpu_time       *t;
     struct vcpu_time_info *u, _u;
-    XEN_GUEST_HANDLE(vcpu_time_info_t) user_u;
     struct domain *d = v->domain;
     s_time_t tsc_stamp = 0;
 
@@ -805,19 +804,31 @@ static void __update_vcpu_system_time(st
     /* 3. Update guest kernel version. */
     u->version = version_update_end(u->version);
 
-    user_u = v->arch.time_info_guest;
-    if ( !guest_handle_is_null(user_u) )
-    {
-        /* 1. Update userspace version. */
-        __copy_field_to_guest(user_u, &_u, version);
-        wmb();
-        /* 2. Update all other userspavce fields. */
-        __copy_to_guest(user_u, &_u, 1);
-        wmb();
-        /* 3. Update userspace version. */
-        _u.version = version_update_end(_u.version);
-        __copy_field_to_guest(user_u, &_u, version);
-    }
+    if ( !update_secondary_system_time(v, &_u) && is_pv_domain(d) &&
+         !is_pv_32bit_domain(d) && !(v->arch.flags & TF_kernel_mode) )
+        v->arch.pv_vcpu.pending_system_time = _u;
+}
+
+bool_t update_secondary_system_time(const struct vcpu *v,
+                                    struct vcpu_time_info *u)
+{
+    XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
+
+    if ( guest_handle_is_null(user_u) )
+        return 1;
+
+    /* 1. Update userspace version. */
+    if ( __copy_field_to_guest(user_u, u, version) == sizeof(u->version) )
+        return 0;
+    wmb();
+    /* 2. Update all other userspace fields. */
+    __copy_to_guest(user_u, u, 1);
+    wmb();
+    /* 3. Update userspace version. */
+    u->version = version_update_end(u->version);
+    __copy_field_to_guest(user_u, u, version);
+
+    return 1;
 }
 
 void update_vcpu_system_time(struct vcpu *v)
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -266,6 +266,18 @@ void toggle_guest_mode(struct vcpu *v)
 #else
     write_ptbase(v);
 #endif
+
+    if ( !(v->arch.flags & TF_kernel_mode) )
+        return;
+
+    if ( v->arch.pv_vcpu.need_update_runstate_area &&
+         update_runstate_area(v) )
+        v->arch.pv_vcpu.need_update_runstate_area = 0;
+
+    if ( v->arch.pv_vcpu.pending_system_time.version &&
+         update_secondary_system_time(v,
+                                      &v->arch.pv_vcpu.pending_system_time) )
+        v->arch.pv_vcpu.pending_system_time.version = 0;
 }
 
 unsigned long do_iret(void)
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -373,6 +373,10 @@ struct pv_vcpu
     /* Current LDT details. */
     unsigned long shadow_ldt_mapcnt;
     spinlock_t shadow_ldt_lock;
+
+    /* Deferred VA-based update state. */
+    bool_t need_update_runstate_area;
+    struct vcpu_time_info pending_system_time;
 };
 
 struct arch_vcpu
@@ -445,6 +449,10 @@ struct arch_vcpu
 #define hvm_vmx         hvm_vcpu.u.vmx
 #define hvm_svm         hvm_vcpu.u.svm
 
+bool_t update_runstate_area(const struct vcpu *);
+bool_t update_secondary_system_time(const struct vcpu *,
+                                    struct vcpu_time_info *);
+
 void vcpu_show_execution_state(struct vcpu *);
 void vcpu_show_registers(const struct vcpu *);
 
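
The time.c hunk preserves the usual seqlock-style version protocol (odd version while an update is in flight, even when the snapshot is consistent; the hypervisor side uses wmb() between the steps). A toy single-threaded sketch of how a guest reader would consume such an area, with hypothetical names rather than the real vcpu_time_info layout, and with the barriers elided since nothing here runs concurrently:

```c
#include <assert.h>
#include <stdint.h>

/* Toy vcpu_time_info-style record; only the version protocol matters. */
struct time_rec {
    uint32_t version;   /* odd while an update is in flight */
    uint64_t tsc;       /* stands in for the payload fields */
};

/* Writer side, shaped like the hunk: bump to odd, fill the payload,
 * bump to even.  (The real code additionally issues wmb() between steps.) */
static void publish(struct time_rec *r, uint64_t tsc)
{
    r->version++;       /* becomes odd: readers must retry */
    r->tsc = tsc;
    r->version++;       /* becomes even: consistent snapshot visible */
}

/* Reader side, as a guest would use the area: retry until the same even
 * version is observed both before and after reading the payload. */
static uint64_t read_tsc(const struct time_rec *r)
{
    uint32_t v1, v2;
    uint64_t tsc;

    do {
        v1 = r->version;
        tsc = r->tsc;
        v2 = r->version;
    } while ( v1 != v2 || (v1 & 1) );

    return tsc;
}
```

This is why deferring the whole update (rather than writing a partial one through an unmapped address) is safe for the guest: a reader only ever accepts a snapshot published under a stable, even version.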



--=__Part3C0FBD4B.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part3C0FBD4B.0__=--


From xen-devel-bounces@lists.xen.org Tue Jan 21 15:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dYl-0001Rc-Kr; Tue, 21 Jan 2014 15:48:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5dYj-0001RV-U6
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 15:48:14 +0000
Received: from [85.158.137.68:35625] by server-16.bemta-3.messagelabs.com id
	12/E1-26128-DB69ED25; Tue, 21 Jan 2014 15:48:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390319289!6807724!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1242 invoked from network); 21 Jan 2014 15:48:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:48:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92872524"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:48:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 10:48:08 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dYd-0008Sw-Ot;
	Tue, 21 Jan 2014 15:48:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5dYc-0005EL-8f;
	Tue, 21 Jan 2014 15:48:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.38581.47252.874216@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 15:48:05 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390318375.32519.9.camel@kazak.uk.xensource.com>
References: <21214.34859.759826.374311@mariner.uk.xensource.com>
	<1390317088-21308-1-git-send-email-ian.jackson@eu.citrix.com>
	<1390317088-21308-2-git-send-email-ian.jackson@eu.citrix.com>
	<1390318375.32519.9.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 14/12] libxl: fork: Make SIGCHLD self-pipe
	nonblocking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 14/12] libxl: fork: Make SIGCHLD self-pipe nonblocking"):
> On Tue, 2014-01-21 at 15:11 +0000, Ian Jackson wrote:
> > Use the new libxl__pipe_nonblock and _close functions, rather than
> > open coding the same logic.  Now the pipe is nonblocking, which avoids
> > a race which could result in libxl deadlocking in a multithreaded
> > program.
> 
> Is any additional error handling required at any of the points where the
> pipe is used, EWOULDBLOCK etc.

No, they are just the libxl__self_pipe* functions as you have noticed,
which are designed to work (only!) with nonblocking fds.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks,
Ian.
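
A generic sketch of the nonblocking self-pipe being discussed, assuming only POSIX: both ends are set O_NONBLOCK so a signal handler's write() can never block even when the pipe is full. This illustrates the technique, not libxl's actual libxl__pipe_nonblock()/libxl__self_pipe* implementation.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Create a self-pipe with both ends nonblocking; returns 0 on success. */
static int make_selfpipe(int fds[2])
{
    if ( pipe(fds) )
        return -1;

    for ( int i = 0; i < 2; i++ )
    {
        int flags = fcntl(fds[i], F_GETFL);

        if ( flags < 0 || fcntl(fds[i], F_SETFL, flags | O_NONBLOCK) )
            return -1;
    }
    return 0;
}

/* What a SIGCHLD handler would do: write one byte to wake the event
 * loop.  With O_NONBLOCK, a full pipe just returns EAGAIN, which is
 * fine: a full pipe already guarantees a pending wakeup. */
static void poke(int wfd)
{
    char c = 0;

    while ( write(wfd, &c, 1) < 0 && errno == EINTR )
        continue;
}
```

The EWOULDBLOCK case Ian Campbell asks about is thus benign by construction on both ends: the writer treats it as "wakeup already pending", and a nonblocking drain of the read end simply stops at EAGAIN.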


Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 14/12] libxl: fork: Make SIGCHLD self-pipe nonblocking"):
> On Tue, 2014-01-21 at 15:11 +0000, Ian Jackson wrote:
> > Use the new libxl__pipe_nonblock and _close functions, rather than
> > open coding the same logic.  Now the pipe is nonblocking, which avoids
> > a race which could result in libxl deadlocking in a multithreaded
> > program.
> 
> Is any additional error handling required at any of the points where the
> pipe is used (EWOULDBLOCK etc.)?

No, they are just the libxl__self_pipe* functions as you have noticed,
which are designed to work (only!) with nonblocking fds.
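The nonblocking self-pipe pattern under discussion can be sketched as
follows (a hedged illustration only, not libxl's actual code; the function
names here are made up). A signal handler writes one byte to wake a
poll/select loop; O_NONBLOCK guarantees the handler can never block even if
the pipe buffer is already full:

```python
# Sketch of the "self-pipe trick" with nonblocking fds (illustrative,
# NOT libxl code; names are hypothetical).
import os

def make_self_pipe():
    """Create a pipe with both ends nonblocking."""
    rfd, wfd = os.pipe()
    os.set_blocking(rfd, False)
    os.set_blocking(wfd, False)
    return rfd, wfd

def self_pipe_wakeup(wfd):
    """Called from e.g. a SIGCHLD handler.  A full pipe simply means a
    wakeup is already pending, so EWOULDBLOCK is safe to ignore."""
    try:
        os.write(wfd, b"\0")
    except BlockingIOError:
        pass  # pipe full: the reader will wake up anyway

def self_pipe_drain(rfd):
    """Drain all pending wakeup bytes; returns how many were consumed."""
    n = 0
    while True:
        try:
            n += len(os.read(rfd, 256))
        except BlockingIOError:
            return n  # nothing left to read
```

This is why no extra error handling is needed at the use sites: EWOULDBLOCK
on either end is an expected, benign outcome by construction.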

> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:53:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ddf-0001xC-Eh; Tue, 21 Jan 2014 15:53:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W5ddd-0001x3-Rc
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 15:53:18 +0000
Received: from [85.158.137.68:28563] by server-1.bemta-3.messagelabs.com id
	28/6C-29598-DE79ED25; Tue, 21 Jan 2014 15:53:17 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390319593!9300815!1
X-Originating-IP: [64.18.0.189]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12518 invoked from network); 21 Jan 2014 15:53:15 -0000
Received: from exprod5og119.obsmtp.com (HELO exprod5og119.obsmtp.com)
	(64.18.0.189)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 15:53:15 -0000
Received: from mail-oa0-f54.google.com ([209.85.219.54]) (using TLSv1) by
	exprod5ob119.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6X6W4w9tdSQI5kT0vL3D/9x4Sd0zAl@postini.com;
	Tue, 21 Jan 2014 07:53:15 PST
Received: by mail-oa0-f54.google.com with SMTP id i4so2700058oah.41
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 07:53:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=dhCHCYnzyc7F0mErtXZU48q0k85lCpd86wBab/9BJ6o=;
	b=Ei8Teu4S9uxn2RvSctHLVbS/Jj6C6LPa7nGXAI2L2ixJOPa2kXAKRBeJf+5AHWAr8/
	0xtvzuHMB+R1JSRugl5fKxZgpGmYpH/Rtin+AObG4g7qx6s0/Cd3aDNAOKHcmNWaRmpK
	1GBd7x2xdQgs8PsRTkNM6h08/g+pvFHUCIspA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=dhCHCYnzyc7F0mErtXZU48q0k85lCpd86wBab/9BJ6o=;
	b=bJrBEst+R6I/6ATfFtmyZlkAeXwfPf+ETmqLWAoyI7YzaZm8HAChZzE6VTP1gfmVgW
	4d4P1xUVe5I7wfZRXiJ9m8sZhN43jw1EPun3qFsTaIbWyF/9MTmbNtUBHMA8cwaKXuw1
	GZEeKdvKq/thCU7AFAWu6PrGd+rz+xtDcLFwbel0O2NVTtXXXvaPJ7Y6RUbo+haO/tLi
	wxZlK0+Dzjg6fz7pPa2IdcLAtt0mxPrOqAlJ16k8VC90tTRq7N9y0mUzfkbK7tZFm1HY
	Uynax2YD0E6Exh+Y3nMp7t2DxLPBv6Tyn7TzJKGtVS/e60qHTsVv3Q05DYUfhiLK4aoV
	W3NQ==
X-Gm-Message-State: ALoCoQn9gVwBqBoFg0Xd6GJaha+qIw1GULNhE789M7vi7o4Hv1dGGmDoaF9h/TPQOhDAweOt0x9GolTj6BMYxVc5jpMOqDX0LHTV1hyVG+n6mdmDFswOLyHzye4Jzw4z8PAN+mHr+WDQbfpF/HfLm70b4acA1JoVhfp2R7KC0IlxbhreKuib+78=
X-Received: by 10.182.113.195 with SMTP id ja3mr11945663obb.46.1390319593199; 
	Tue, 21 Jan 2014 07:53:13 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.113.195 with SMTP id ja3mr11945647obb.46.1390319593033; 
	Tue, 21 Jan 2014 07:53:13 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Tue, 21 Jan 2014 07:53:12 -0800 (PST)
In-Reply-To: <1390304761.23576.161.camel@Solace>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
	<1390304761.23576.161.camel@Solace>
Date: Tue, 21 Jan 2014 17:53:12 +0200
Message-ID: <CAE4oM6y12tNYFihdKbWcBmHcy__2rL570viV7T=Esj4vxHALqg@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5077006526708740602=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5077006526708740602==
Content-Type: multipart/alternative; boundary=089e013d0db077c92004f07d0191

--089e013d0db077c92004f07d0191
Content-Type: text/plain; charset=ISO-8859-1

Hi,

> Ok, so, if, as said above, you can do that, I'd try the following. With
> the credit scheduler (after having cleared/disabled the rate limiting
> thing), go for 1 vCPU in Dom0 and 1 vCPU in DomU.

> Also, pin both, and do it to different pCPUs. I think booting with this
> "dom0_max_vcpus=1 dom0_vcpus_pin" in the Xen command line would do the
> trick for Dom0. For DomU, you just put in the config file a "cpus=X"
> entry, as soon as you see what it is the pCPU to which Dom0 is _not_
> pinned (I suspect Dom0 will end up pinned to pCPU #0, and so you should
> use "cpus=1" for the DomU).

> With that configuration, repeat the tests.
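
For reference, the recipe quoted above can be written out as a config
fragment (the DomU config values and pCPU numbers are assumptions,
following Dario's suspicion that Dom0 ends up pinned to pCPU #0):

```sh
# Xen command line (bootloader entry): single Dom0 vCPU, pinned:
#   dom0_max_vcpus=1 dom0_vcpus_pin

# DomU config file: one vCPU, pinned to the pCPU Dom0 is NOT on
# (assuming Dom0 landed on pCPU 0):
#   vcpus = 1
#   cpus = "1"

# After booting the guest, verify the resulting pinning:
xl vcpu-list
```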

It turned out that we cannot start dom0 with one vCPU (investigation into
this is still ongoing), but we succeeded in giving one vCPU to domU and
pinning it to one of the pCPUs. Interestingly enough, that fixed all the
latency observed in domU.

# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   ---      38.6  0
Domain-0                             0     1    0   r--      31.8  0
android_4.3                          1     0    1   b       230.2  1
In dom0 (which has two vCPUs, so Xen scheduling is actually used) latency
is still present.

So when domU's vCPU does not have to compete for a pCPU, the soft real-time
properties with regard to timers are met (and our RTP audio sync is doing
much better). That's obviously not a solution, but it shows that Xen credit
(and sEDF) scheduling actually misbehaves on such tasks, and there is an
area to investigate.
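
A minimal way to quantify the timer latency being discussed is to measure
how far each short sleep overshoots its nominal duration (a sketch only,
not the poster's actual benchmark; the helper name and sample count are
arbitrary):

```python
# Measure worst-case overshoot of short sleeps, to quantify timer latency
# inside a guest.  Illustrative sketch; run inside dom0/domU to compare.
import time

def sleep_overshoot_us(nominal_us, samples=50):
    """Return the worst observed overshoot (microseconds) over `samples`
    sleeps of `nominal_us` microseconds each."""
    worst = 0.0
    for _ in range(samples):
        start = time.monotonic()
        time.sleep(nominal_us / 1e6)
        elapsed_us = (time.monotonic() - start) * 1e6
        worst = max(worst, elapsed_us - nominal_us)
    return worst

if __name__ == "__main__":
    print("worst overshoot for 1000us sleeps: %.1f us"
          % sleep_overshoot_us(1000))
```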

I will keep you informed as more results become available, so stay tuned :)

Regards, Pavlo

--089e013d0db077c92004f07d0191--


--===============5077006526708740602==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5077006526708740602==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 15:55:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dfk-00025o-6g; Tue, 21 Jan 2014 15:55:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5dfi-00025g-KH
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 15:55:26 +0000
Received: from [85.158.143.35:7744] by server-2.bemta-4.messagelabs.com id
	1F/D6-11386-D689ED25; Tue, 21 Jan 2014 15:55:25 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390319714!13075286!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDU3NTUgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27307 invoked from network); 21 Jan 2014 15:55:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 15:55:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92875816"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 15:55:14 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 10:55:13 -0500
Message-ID: <1390319712.23576.202.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Tue, 21 Jan 2014 16:55:12 +0100
In-Reply-To: <1386984785.3980.96.camel@Solace>
References: <1386984785.3980.96.camel@Solace>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it credit2 only uses one runqueue instead of one runq per socket
thanks

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 15:56:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 15:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dgG-0002B2-PF; Tue, 21 Jan 2014 15:56:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5dgE-0002An-P6
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 15:55:58 +0000
Received: from [85.158.137.68:24670] by server-1.bemta-3.messagelabs.com id
	85/C0-29598-D889ED25; Tue, 21 Jan 2014 15:55:57 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390319756!9658051!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27867 invoked from network); 21 Jan 2014 15:55:57 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 21 Jan 2014 15:55:57 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5dkA-0003is-8x; Tue, 21 Jan 2014 16:00:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Dario Faggioli <dario.faggioli@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1390320002.14312@bugs.xenproject.org>
References: <1386984785.3980.96.camel@Solace>
	<1390319712.23576.202.camel@Solace>
In-Reply-To: <1390319712.23576.202.camel@Solace>
X-Emesinae-Message: control
X-Emesinae-Control-From: Dario Faggioli <dario.faggioli@citrix.com>
Date: Tue, 21 Jan 2014 16:00:02 +0000
Subject: [Xen-devel] Processed: Re:  multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #36 rooted at `<1386984785.3980.96.camel@Solace>'
Title: `Re: [Xen-devel] multiple runqueues in credit2'
> title it credit2 only uses one runqueue instead of one runq per socket
Set title for #36 to `credit2 only uses one runqueue instead of one runq per socket'
> thanks
Finished processing.

Modified/created Bugs:
 - 36: http://bugs.xenproject.org/xen/bug/36 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #36 rooted at `<1386984785.3980.96.camel@Solace>'
Title: `Re: [Xen-devel] multiple runqueues in credit2'
> title it credit2 only uses one runqueue instead of one runq per socket
Set title for #36 to `credit2 only uses one runqueue instead of one runq per socket'
> thanks
Finished processing.

Modified/created Bugs:
 - 36: http://bugs.xenproject.org/xen/bug/36 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:11:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5dun-0003fI-Fv; Tue, 21 Jan 2014 16:11:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5dul-0003f8-MY
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:10:59 +0000
Received: from [85.158.137.68:62206] by server-3.bemta-3.messagelabs.com id
	DE/26-10658-21C9ED25; Tue, 21 Jan 2014 16:10:58 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390320657!6813771!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19007 invoked from network); 21 Jan 2014 16:10:58 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 21 Jan 2014 16:10:58 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W5dyg-0003su-Je; Tue, 21 Jan 2014 16:15:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1390320902.14934@bugs.xenproject.org>
References: <1390320437.32519.18.camel@kazak.uk.xensource.com>
In-Reply-To: <1390320437.32519.18.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 21 Jan 2014 16:15:02 +0000
Subject: [Xen-devel] Processed: Removing discussion of MIME encapsulated bug
 control messages from bug 36
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> prune 36 <control-reply-1390242602.7482@bugs.xenproject.org>
Prune `<control-reply-1390242602.7482@bugs.xenproject.org>' from #36
> thanks
Finished processing.

Modified/created Bugs:
 - 36: http://bugs.xenproject.org/xen/bug/36

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:46:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5eSQ-0005WB-QU; Tue, 21 Jan 2014 16:45:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5eSP-0005W4-SA
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:45:45 +0000
Received: from [85.158.137.68:17838] by server-14.bemta-3.messagelabs.com id
	E1/E1-06105-834AED25; Tue, 21 Jan 2014 16:45:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390322742!10484970!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16285 invoked from network); 21 Jan 2014 16:45:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 16:45:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92905275"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 16:45:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 11:45:41 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5eSK-0000K6-HX;
	Tue, 21 Jan 2014 16:45:40 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 16:45:40 +0000
Message-ID: <1390322740-961-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] standalone: Correct arguments to JobDB
	flight_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The real jobdb takes $intended and $branch in the other order, meaning that the
standalone db ends up with them backwards.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Osstest/JobDB/Standalone.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/JobDB/Standalone.pm b/Osstest/JobDB/Standalone.pm
index d3ff1df..14293ef 100644
--- a/Osstest/JobDB/Standalone.pm
+++ b/Osstest/JobDB/Standalone.pm
@@ -56,7 +56,7 @@ sub open ($) {
 }
 
 sub flight_create ($$$) {
-    my ($obj, $branch, $intended) = @_;
+    my ($obj, $intended, $branch) = @_;
     my $fl = $ENV{'OSSTEST_FLIGHT'};
     $fl = 'standalone' unless defined $fl && length $fl;
     die "flight names may not contain ." if $fl =~ m/\./;
-- 
1.8.5.2
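
[Archive editor's note: the class of bug fixed above (two same-typed positional
parameters declared in the other order, so every caller's values are silently
transposed) is easy to demonstrate outside Perl. A minimal illustrative sketch
in Python with hypothetical names; the authoritative code is the Perl diff
above.]

```python
def flight_create(intended, branch):
    """Store a new flight record. Parameter order is all that distinguishes
    the two arguments: both are plain strings."""
    return {"intended": intended, "branch": branch}

def flight_create_swapped(branch, intended):
    """The buggy variant: identical body, parameters declared in the other
    order. Existing callers keep compiling/running, but their values land
    in the wrong fields."""
    return {"intended": intended, "branch": branch}

# The same call site against both signatures:
good = flight_create("real", "xen-unstable")
bad = flight_create_swapped("real", "xen-unstable")

assert good == {"intended": "real", "branch": "xen-unstable"}
# The swapped declaration silently stores the fields backwards:
assert bad == {"intended": "xen-unstable", "branch": "real"}
```

Nothing in the language flags this: the failure only shows up later as
backwards data in the standalone db, which is how such swaps typically
surface.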


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:47:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:47:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5eTl-0005bx-AL; Tue, 21 Jan 2014 16:47:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5eTj-0005bf-PB
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:47:07 +0000
Received: from [85.158.137.68:40418] by server-3.bemta-3.messagelabs.com id
	60/3D-10658-A84AED25; Tue, 21 Jan 2014 16:47:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390322824!10433435!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1520 invoked from network); 21 Jan 2014 16:47:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 16:47:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92905826"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 16:47:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 11:47:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5eTf-0000KV-2T;
	Tue, 21 Jan 2014 16:47:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5eTd-0005P9-JM;
	Tue, 21 Jan 2014 16:47:01 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21214.42116.463716.285955@mariner.uk.xensource.com>
Date: Tue, 21 Jan 2014 16:47:00 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390322740-961-1-git-send-email-ian.campbell@citrix.com>
References: <1390322740-961-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] standalone: Correct arguments to
	JobDB flight_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] standalone: Correct arguments to JobDB flight_create"):
> The real jobdb takes $intended and $branch in the other order, meaning that
> the standalone db ends up with them backwards.

*blrfhl*

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:56:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ecP-0006Ek-C6; Tue, 21 Jan 2014 16:56:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5ecN-0006Ee-K7
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 16:56:03 +0000
Received: from [85.158.143.35:31401] by server-2.bemta-4.messagelabs.com id
	E7/68-11386-2A6AED25; Tue, 21 Jan 2014 16:56:02 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390323360!13072672!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24528 invoked from network); 21 Jan 2014 16:56:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 16:56:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208,217";a="94963969"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 16:55:59 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 11:55:59 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	17:55:57 +0100
Message-ID: <52DEA69D.4010102@citrix.com>
Date: Tue, 21 Jan 2014 16:55:57 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
In-Reply-To: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: don't drop guest visible state updates
 when 64-bit PV guest is in user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2147995466282644967=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2147995466282644967==
Content-Type: multipart/alternative;
	boundary="------------000303050600060402060603"

--------------000303050600060402060603
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 21/01/2014 15:33, Jan Beulich wrote:
> Since 64-bit PV uses separate kernel and user mode page tables, kernel
> addresses (as usually provided via VCPUOP_register_runstate_memory_area
> and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
> necessarily accessible when the respective updating occurs. Add logic
> for toggle_guest_mode() to take care of this (if necessary) the next
> time the vCPU switches to kernel mode.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1323,10 +1323,10 @@ static void paravirt_ctxt_switch_to(stru
>  }
>  
>  /* Update per-VCPU guest runstate shared memory area (if registered). */
> -static void update_runstate_area(struct vcpu *v)
> +bool_t update_runstate_area(const struct vcpu *v)

Can you adjust the comment to indicate what the return value means?  The
logic is quite opaque, but I believe it is "true if the runstate has
been suitably updated".
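
[Archive editor's note: for readers following the diff, Xen's guest-copy
primitives (__copy_to_guest(), __copy_field_to_guest()) follow the Linux
copy_to_user() convention of returning the number of bytes NOT copied, 0 on
full success. That is why the patch compares the result against sizeof(...)
to detect a completely failed copy. A toy model of that convention,
hypothetical code rather than the Xen implementation:]

```python
def copy_to_guest(dest, src, writable):
    """Toy model of a copy_to_user()-style primitive: returns the number
    of bytes that could NOT be copied (0 means everything was written)."""
    if not writable:
        return len(src)      # destination faulted: nothing was copied
    dest[:] = src
    return 0

buf = bytearray(8)
# Full success: zero bytes left uncopied.
assert copy_to_guest(buf, b"12345678", writable=True) == 0
# Total failure (e.g. user-mode page tables active): all bytes uncopied,
# which is the `== sizeof(...)` case the patch tests for.
assert copy_to_guest(buf, b"12345678", writable=False) == 8
```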

>  {
>      if ( guest_handle_is_null(runstate_guest(v)) )
> -        return;
> +        return 1;
>  
>      if ( has_32bit_shinfo(v->domain) )
>      {
> @@ -1334,10 +1334,18 @@ static void update_runstate_area(struct 
>  
>          XLAT_vcpu_runstate_info(&info, &v->runstate);
>          __copy_to_guest(v->runstate_guest.compat, &info, 1);
> -        return;
> +        return 1;
>      }
>  

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:56:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ecP-0006Ek-C6; Tue, 21 Jan 2014 16:56:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5ecN-0006Ee-K7
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 16:56:03 +0000
Received: from [85.158.143.35:31401] by server-2.bemta-4.messagelabs.com id
	E7/68-11386-2A6AED25; Tue, 21 Jan 2014 16:56:02 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390323360!13072672!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24528 invoked from network); 21 Jan 2014 16:56:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 16:56:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208,217";a="94963969"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 16:55:59 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 11:55:59 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	17:55:57 +0100
Message-ID: <52DEA69D.4010102@citrix.com>
Date: Tue, 21 Jan 2014 16:55:57 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
In-Reply-To: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: don't drop guest visible state updates
 when 64-bit PV guest is in user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2147995466282644967=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2147995466282644967==
Content-Type: multipart/alternative;
	boundary="------------000303050600060402060603"

--------------000303050600060402060603
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 21/01/2014 15:33, Jan Beulich wrote:
> Since 64-bit PV uses separate kernel and user mode page tables, kernel
> addresses (as usually provided via VCPUOP_register_runstate_memory_area
> and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
> necessarily accessible when the respective updating occurs. Add logic
> for toggle_guest_mode() to take care of this (if necessary) the next
> time the vCPU switches to kernel mode.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1323,10 +1323,10 @@ static void paravirt_ctxt_switch_to(stru
>  }
>  
>  /* Update per-VCPU guest runstate shared memory area (if registered). */
> -static void update_runstate_area(struct vcpu *v)
> +bool_t update_runstate_area(const struct vcpu *v)

Can you adjust the comment to indicate what the return value means?  The
logic is quite opaque, but I believe it is "true if the runstate has
been suitably updated".
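
One possible wording (a sketch only -- `bool_t` and `struct vcpu` are
stubbed here so the fragment stands alone, this is not the patch itself):

```c
typedef int bool_t;   /* stand-in for Xen's bool_t */
struct vcpu;          /* opaque stand-in */

/*
 * Update the per-VCPU guest runstate shared memory area (if registered).
 * Returns 1 if the runstate has been suitably updated (including the
 * case where no area is registered at all), or 0 if the guest-VA write
 * faulted and the update needs to be retried later.
 */
bool_t update_runstate_area(const struct vcpu *v);
```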

>  {
>      if ( guest_handle_is_null(runstate_guest(v)) )
> -        return;
> +        return 1;
>  
>      if ( has_32bit_shinfo(v->domain) )
>      {
> @@ -1334,10 +1334,18 @@ static void update_runstate_area(struct 
>  
>          XLAT_vcpu_runstate_info(&info, &v->runstate);
>          __copy_to_guest(v->runstate_guest.compat, &info, 1);
> -        return;
> +        return 1;
>      }
>  
> -    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
> +    return __copy_to_guest(runstate_guest(v), &v->runstate, 1) !=
> +           sizeof(v->runstate);
> +}
> +
> +static void _update_runstate_area(struct vcpu *v)
> +{
> +    if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
> +         !(v->arch.flags & TF_kernel_mode) )
> +        v->arch.pv_vcpu.need_update_runstate_area = 1;
>  }
>  
>  static inline int need_full_gdt(struct vcpu *v)
> @@ -1443,8 +1451,8 @@ void context_switch(struct vcpu *prev, s
>          flush_tlb_mask(&dirty_mask);
>      }
>  
> -    if (prev != next)
> -        update_runstate_area(prev);
> +    if ( prev != next )
> +        _update_runstate_area(prev);
>  
>      if ( is_hvm_vcpu(prev) )
>      {
> @@ -1497,8 +1505,8 @@ void context_switch(struct vcpu *prev, s
>  
>      context_saved(prev);
>  
> -    if (prev != next)
> -        update_runstate_area(next);
> +    if ( prev != next )
> +        _update_runstate_area(next);
>  
>      /* Ensure that the vcpu has an up-to-date time base. */
>      update_vcpu_system_time(next);
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -740,7 +740,6 @@ static void __update_vcpu_system_time(st
>  {
>      struct cpu_time       *t;
>      struct vcpu_time_info *u, _u;
> -    XEN_GUEST_HANDLE(vcpu_time_info_t) user_u;
>      struct domain *d = v->domain;
>      s_time_t tsc_stamp = 0;
>  
> @@ -805,19 +804,31 @@ static void __update_vcpu_system_time(st
>      /* 3. Update guest kernel version. */
>      u->version = version_update_end(u->version);
>  
> -    user_u = v->arch.time_info_guest;
> -    if ( !guest_handle_is_null(user_u) )
> -    {
> -        /* 1. Update userspace version. */
> -        __copy_field_to_guest(user_u, &_u, version);
> -        wmb();
> -        /* 2. Update all other userspavce fields. */
> -        __copy_to_guest(user_u, &_u, 1);
> -        wmb();
> -        /* 3. Update userspace version. */
> -        _u.version = version_update_end(_u.version);
> -        __copy_field_to_guest(user_u, &_u, version);
> -    }
> +    if ( !update_secondary_system_time(v, &_u) && is_pv_domain(d) &&
> +         !is_pv_32bit_domain(d) && !(v->arch.flags & TF_kernel_mode) )
> +        v->arch.pv_vcpu.pending_system_time = _u;
> +}
> +
> +bool_t update_secondary_system_time(const struct vcpu *v,
> +                                    struct vcpu_time_info *u)
> +{
> +    XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
> +
> +    if ( guest_handle_is_null(user_u) )
> +        return 1;
> +
> +    /* 1. Update userspace version. */
> +    if ( __copy_field_to_guest(user_u, u, version) == sizeof(u->version) )
> +        return 0;
> +    wmb();
> +    /* 2. Update all other userspace fields. */
> +    __copy_to_guest(user_u, u, 1);
> +    wmb();
> +    /* 3. Update userspace version. */
> +    u->version = version_update_end(u->version);
> +    __copy_field_to_guest(user_u, u, version);
> +
> +    return 1;
>  }
>  
>  void update_vcpu_system_time(struct vcpu *v)
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -266,6 +266,18 @@ void toggle_guest_mode(struct vcpu *v)
>  #else
>      write_ptbase(v);
>  #endif
> +
> +    if ( !(v->arch.flags & TF_kernel_mode) )
> +        return;
> +
> +    if ( v->arch.pv_vcpu.need_update_runstate_area &&
> +         update_runstate_area(v) )
> +        v->arch.pv_vcpu.need_update_runstate_area = 0;
> +
> +    if ( v->arch.pv_vcpu.pending_system_time.version &&
> +         update_secondary_system_time(v,
> +                                      &v->arch.pv_vcpu.pending_system_time) )
> +        v->arch.pv_vcpu.pending_system_time.version = 0;

What would happen now if a guest kernel loads a faulting address for its
runstate info (or edits its pagetables so a previously good address now
faults)?  It appears as if we could retry writing to the same bad
address each time we try to toggle into kernel mode.

Given that we know we are running on the correct set of pagetables, I
think both of these pending variables can be unconditionally cleared,
whether update_{runstate_area,secondary_system_time} succeed or fail.

~Andrew

>  }
>  
>  unsigned long do_iret(void)
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -373,6 +373,10 @@ struct pv_vcpu
>      /* Current LDT details. */
>      unsigned long shadow_ldt_mapcnt;
>      spinlock_t shadow_ldt_lock;
> +
> +    /* Deferred VA-based update state. */
> +    bool_t need_update_runstate_area;
> +    struct vcpu_time_info pending_system_time;
>  };
>  
>  struct arch_vcpu
> @@ -445,6 +449,10 @@ struct arch_vcpu
>  #define hvm_vmx         hvm_vcpu.u.vmx
>  #define hvm_svm         hvm_vcpu.u.svm
>  
> +bool_t update_runstate_area(const struct vcpu *);
> +bool_t update_secondary_system_time(const struct vcpu *,
> +                                    struct vcpu_time_info *);
> +
>  void vcpu_show_execution_state(struct vcpu *);
>  void vcpu_show_registers(const struct vcpu *);
>  
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------000303050600060402060603--


--===============2147995466282644967==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2147995466282644967==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 16:58:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ee5-0006LN-8A; Tue, 21 Jan 2014 16:57:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5ee4-0006LI-5l
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:57:48 +0000
Received: from [193.109.254.147:50157] by server-9.bemta-14.messagelabs.com id
	4C/42-13957-B07AED25; Tue, 21 Jan 2014 16:57:47 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390323463!9968650!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23412 invoked from network); 21 Jan 2014 16:57:45 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 16:57:45 -0000
Received: from mail-ee0-f43.google.com ([74.125.83.43]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6nBuvpZrv1PccOfWq53HblxJFoy/N6@postini.com;
	Tue, 21 Jan 2014 08:57:45 PST
Received: by mail-ee0-f43.google.com with SMTP id c41so4188877eek.16
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 08:57:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:cc:subject:date:message-id;
	bh=ypAeZ7eQXEXhVefFzEgug68soQ1XSsFgZMPLDEttbYw=;
	b=Rka7+lMEhENkI6ZTuBWzURx+T+z4j6Yw51I/nIkxCl0cC0e7NAc6GaaSDwnlT8F100
	H+CiYtxZLxEO+8pSyKnDlM6GhtTpNMFQnCSZisV90db/l0oYykSriGSFqZ/qtaX9SGe/
	yH/B+Gm7Zvj+C6OGd+sfvIg81y+tju1QexJQI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=ypAeZ7eQXEXhVefFzEgug68soQ1XSsFgZMPLDEttbYw=;
	b=Wnqb5Z8pb6khCloXCSWvgHaH45CFdhhiZmLeWPkDYyG5CqPVaoCFkEk6iY+fWVcYzN
	S+HGwjMzqimFr1o+dKBNuUJ8cdqpLVboYPlQVZJyNulHDdNXxDebFsxgan4fMfV7BFD2
	fVGdvJb3jMkbzwIZQiENETSNtO4nhw29mimF2fsxFoKRI2ryF6R+rggsTeMD/WPC0cNr
	KLdatL/qx0Gn+SP/RtzhvBW2maeY0JM8cYml59uDzx/MrMo3or1JCWPyjIspdiieZGwk
	gcQOP9H4JWGUxCpvbzkq9eooCXJLxfi0Bl8ibqGnFpMxRXfdkJYSc3aGektYFtBEkmt2
	H6wg==
X-Gm-Message-State: ALoCoQkMimpUfTILuCtKsuTiYNLJHlirXaqu5RbsTY+UwGrBKYDFZiynSF8IgpP4nRsthEwgBICsDQryzcTDc1BYE63v2pUilRbtCh6AE3ttn9ISfShAtHWZHfCXBhYswoirtsyfEIK24QnRAkeDcZFxHD61yRhTqEXTSVux8P2RxjeBnS7zDyc=
X-Received: by 10.14.111.73 with SMTP id v49mr1662488eeg.94.1390323461869;
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
X-Received: by 10.14.111.73 with SMTP id v49mr1662480eeg.94.1390323461768;
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
Received: from uglx0187394.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 4sm16752303eed.14.2014.01.21.08.57.40
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 18:53:14 +0200
Message-Id: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Subject: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Could someone advise on the issue I am facing?

I am trying to run PV USB on omap5uevm (omap5-panda) board.

I use the latest PV USB drivers from Nathanael's server:
http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch

I have applied it to kernel 3.8 (dom0), together with some patches for the USB HCD and
usbback drivers (attached), and run it on Xen 4.4.0-rc2.

I am facing an issue with USB_STORAGE:
The USB storage device is initialised and mounted in domU over the PV USB drivers,
but I can only copy small files to it (no more than ~100-500 kBytes).
With larger files the device seems to fall into an infinite loop
(its LEDs keep blinking far longer than the copy should need),
and after a few seconds dom0 disconnects the USB device.

Both dom0 and domU run kernel 3.8.

I observed that usb-storage issues some SCSI requests (from domU) that pass
directly to the hardware; I think this is the problem.

So I applied the PV SCSI drivers from
http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
to kernel 3.8.

Then I initialised PV USB and PV SCSI with the vusb-start.sh and vscsi-start.sh scripts respectively.
But I am still facing the issue.
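For reference, the dom0 sequence I run is roughly the following sketch (the script names are from my setup, and the "bus-port:domid:vhost:vport" format of the new_vport string is what I inferred from usage, so treat both as assumptions):

```shell
#!/bin/sh
# Sketch of my dom0 setup sequence for PV USB + PV SCSI.
# DOMID/VHOST values and the new_vport string format are assumptions
# from my own setup and may need adjusting.

DOMID=1
VHOST=0

# Build the string written to the usbback new_vport node:
# "<bus-port>:<domid>:<vhost>:<virtual-port>".
vport_spec() {
    printf '%s:%s:%s:%s\n' "$1" "$2" "$3" "$4"
}

# Only run the real steps when the helper scripts are actually present.
if [ -x ./vusb-start.sh ] && [ -x ./vscsi-start.sh ]; then
    ./vusb-start.sh  "$DOMID" "$VHOST"   # set up the PV USB rings
    ./vscsi-start.sh "$DOMID" "$VHOST"   # set up the PV SCSI rings
fi

NODE=/sys/bus/usb/drivers/usbback/new_vport
if [ -w "$NODE" ]; then
    # Hand physical device 1-2.1 to the guest as virtual port 1.
    vport_spec 1-2.1 "$DOMID" "$VHOST" 1 > "$NODE"
fi
```

The same commands appear verbatim in the dom0 log below.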

Dom0 log:
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
[    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=10c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
[    0.000000] bootconsole [earlycon0] enabled
[    0.000000] Memory policy: ECC disabled, Data cache writeback
[    0.000000] On node 0 totalpages: 65280
[    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map c428e000
[    0.000000]   Normal zone: 512 pages used for memmap
[    0.000000]   Normal zone: 0 pages reserved
[    0.000000]   Normal zone: 64768 pages, LIFO batch:15
[    0.000000] psci: probing function IDs from device-tree
[    0.000000] OMAP5432 ES2.0
[    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
[    0.000000] pcpu-alloc: [0] 0 
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
[    0.000000] Kernel command line: console=hvc0 earlyprintk
[    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 255MB = 255MB total
[    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K highmem
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
[    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
[    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
[    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
[    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
[    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
[    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
[    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] Architected local timer running at 6.14MHz (virt).
[    0.000000] Switching to timer-based delay loop
[    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
[    0.000000] Console: colour dummy device 80x30
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 3695 kB
[    0.000000]  per task-struct memory footprint: 1152 bytes
[    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
[    0.054687] pid_max: default: 32768 minimum: 301
[    0.054687] Security Framework initialized
[    0.062500] Mount-cache hash table entries: 512
[    0.070312] CPU: Testing write buffer coherency: ok
[    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
[    0.085937] devtmpfs: initialized
[    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
[    0.101562] xen:grant_table: Grant tables using version 1 layout.
[    0.101562] Grant table initialized
[    0.109375] omap_hwmod: aess: _wait_target_disable failed
[    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
[    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
[    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
[    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
[    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
[    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
[    0.234375] pinctrl core: initialized pinctrl subsystem
[    0.242187] regulator-dummy: no parameters
[    0.242187] NET: Registered protocol family 16
[    0.250000] Xen: initializing cpu0
[    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
[    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
[    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
[    0.281250] OMAP GPIO hardware version 0.1
[    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
[    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
[    0.296875] OMAP DMA hardware revision 0.0
[    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
[    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
[    0.335937] bio: create slab <bio-0> at 0
[    0.343750] xen:balloon: Initialising balloon driver
[    0.343750] of_get_named_gpio_flags exited with status 80
[    0.343750] hsusb2_reset: 3300 mV 
[    0.351562] of_get_named_gpio_flags exited with status 79
[    0.351562] hsusb3_reset: 3300 mV 
[    0.351562] SCSI subsystem initialized
[    0.359375] libata version 3.00 loaded.
[    0.359375] usbcore: registered new interface driver usbfs
[    0.367187] usbcore: registered new interface driver hub
[    0.367187] usbcore: registered new device driver usb
[    0.375000] Switching to clocksource arch_sys_counter
[    0.414062] NET: Registered protocol family 2
[    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
[    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
[    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
[    0.437500] TCP: reno registered
[    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
[    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
[    0.453125] NET: Registered protocol family 1
[    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
[    0.687500] VFS: Disk quotas dquot_6.5.2
[    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
[    0.703125] msgmni has been set to 372
[    0.710937] io scheduler noop registered
[    0.718750] io scheduler deadline registered
[    0.718750] io scheduler cfq registered (default)
[    0.726562] xen:xen_evtchn: Event-channel device installed
[    0.742187] console [hvc0] enabled, bootconsole disabled
[    0.765625] brd: module loaded
[    0.781250] loop: module loaded
[    0.789062] ahci ahci.0.auto: can't get clock
[    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
[    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
[    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
[    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst 
[    0.820312] scsi0 : ahci_platform
[    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
[    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
[    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
[    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
[    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
[    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
[    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.914062] usb usb1: Product: EHCI Host Controller
[    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
[    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
[    0.937500] hub 1-0:1.0: USB hub found
[    0.937500] hub 1-0:1.0: 3 ports detected
[    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
[    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
[    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
[    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
[    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
[    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
[    1.210937] usb usb2: SerialNumber: 4a064800.ohci
[    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
[    1.226562] hub 2-0:1.0: USB hub found
[    1.226562] hub 2-0:1.0: 3 ports detected
[    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
[    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
[    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    1.531250] hub 1-2:1.0: USB hub found
[    1.531250] hub 1-2:1.0: 3 ports detected
[    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
[    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
[    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    1.835937] usbcore: registered new interface driver usbback
[    1.843750] Initializing USB Mass Storage driver...
[    1.843750] usbcore: registered new interface driver usb-storage
[    1.851562] USB Mass Storage support registered.
[    1.859375] i2c /dev entries driver
[    1.859375] usbcore: registered new interface driver usbhid
[    1.867187] usbhid: USB HID core driver
[    1.875000] TCP: cubic registered
[    1.875000] Initializing XFRM netlink socket
[    1.882812] NET: Registered protocol family 17
[    1.882812] NET: Registered protocol family 15
[    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
[    1.898437] mux: Failed to setup hwmod io irq -22
[    1.898437] Power Management for TI OMAP4PLUS devices.
[    1.906250] ThumbEE CPU extension supported.
[    1.914062] Registering SWP/SWPB emulation handler
[    1.921875] devtmpfs: mounted
[    1.968750] Freeing init memory: 57752K
# ./vusb-start.sh 1 0
[    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9, event-channel 3
# ./vscsi-start.sh 1 0
# echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport

[   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
[   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
[   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   40.929687] usb 1-2.1: Product: Mass Storage Device
[   40.929687] usb 1-2.1: Manufacturer: JetFlash
[   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO

DomU log:
[    0.710937] console [hvc0] enabled, bootconsole disabled
[    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is a OMAP UART0
[    0.718750] omap_uart 4806c000.serial: did not get pins for uart1 error: -19
[    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is a OMAP UART1
[    0.718750] omap_uart 4806e000.serial: did not get pins for uart3 error: -19
[    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is a OMAP UART3
[    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is a OMAP UART4
[    0.726562] omap_uart 48068000.serial: did not get pins for uart5 error: -19
[    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is a OMAP UART5
[    0.726562] [drm] Initialized drm 1.1.0 20060810
[    0.742187] brd: module loaded
[    0.757812] loop: module loaded
[    0.757812] omap2_mcspi 48098000.spi: pins are not configured from the driver
[    0.765625] Initialising Xen virtual ethernet driver.
[    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.765625] ehci-platform: EHCI generic platform driver
[    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
[    0.765625] vusb vusb-0: new USB bus registered, assigned bus number 1
[    0.765625] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[    0.765625] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
[    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 xen_hcd
[    0.765625] usb usb1: SerialNumber: vusb-0
[    0.773437] hub 1-0:1.0: USB hub found
[    0.773437] hub 1-0:1.0: 8 ports detected
[    0.773437] Initializing USB Mass Storage driver...
[    0.773437] usbcore: registered new interface driver usb-storage
[    0.773437] USB Mass Storage support registered.
[    0.773437] mousedev: PS/2 mouse device common for all mice
[    0.781250] usbcore: registered new interface driver usbhid
[    0.781250] usbhid: USB HID core driver
[    0.789062] TCP: cubic registered
[    0.789062] Initializing XFRM netlink socket
[    0.789062] NET: Registered protocol family 17
[    0.789062] NET: Registered protocol family 15
[    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
[    0.789062] mux: Failed to setup hwmod io irq -22
[    0.789062] ThumbEE CPU extension supported.
[    0.789062] Registering SWP/SWPB emulation handler
[    0.789062] dmm 4e000000.dmm: initialized all PAT entries
[    0.804687] /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
[    0.804687] devtmpfs: mounted
[    0.812500] Freeing init memory: 6044K

Please press Enter to activate this console.
[    6.500000] scsi0 : Xen SCSI frontend driver

/ # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
[   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
[   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   40.929687] usb 1-2.1: Product: Mass Storage Device
[   40.929687] usb 1-2.1: Manufacturer: JetFlash
[   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
[   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
(XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
[   32.875000] usb 1-1: New USB device found, idVendor=8564, idProduct=1000
[   32.875000] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   32.875000] usb 1-1: Product: Mass Storage Device
[   32.875000] usb 1-1: Manufacturer: JetFlash
[   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
[   32.906250] scsi1 : usb-storage 1-1:1.0
[   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB    1100 PQ: 0 ANSI: 4
[   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
[   34.140625] sd 1:0:0:0: [sda] Write Protect is off
[   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<-- this data may change between boots*
[   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.195312]  sda: sda1
[   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk

 # lsusb 
Bus 001 Device 002: ID 8564:1000
Bus 001 Device 001: ID 1d6b:0002

But it looks like the SCSI requests from usb-storage are still passing directly to the
hardware instead of going through PV SCSI.
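One way to see which low-level driver actually owns each SCSI host in domU is to read the sysfs proc_name attribute of every host (a hedged sketch: "usb-storage" matches the "scsi1 : usb-storage" line in my domU log, but the frontend's proc_name value is a guess):

```shell
#!/bin/sh
# Sketch: list each SCSI host and the driver that registered it, to check
# whether the disk is reached via usb-storage or via a PV frontend.

describe() {
    case "$1" in
        # Matches the "scsi1 : usb-storage" line from the domU log.
        usb-storage) echo "usb-storage (requests go straight to the USB stack)" ;;
        # Any other name (e.g. whatever the PV SCSI frontend registers)
        # is printed as-is.
        *) echo "$1" ;;
    esac
}

for h in /sys/class/scsi_host/host*; do
    [ -r "$h/proc_name" ] || continue
    printf '%s: %s\n' "$(basename "$h")" "$(describe "$(cat "$h/proc_name")")"
done
```

If sda hangs off the usb-storage host rather than the frontend host, its I/O is not going through PV SCSI.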

Could somebody tell me how to initialise PV SCSI and PV USB correctly?

Regards,
Alexander

Alexander Savchenko (2):
  usbback: Add new features
  HACK: usb:core:hcd: Do not remapping self dma addresses

Nathanael Rensen (1):
  pvusb drivers

 drivers/usb/core/hcd.c                 |    1 +
 drivers/usb/host/Kconfig               |   23 +
 drivers/usb/host/Makefile              |    2 +
 drivers/usb/host/xen-usbback/Makefile  |    3 +
 drivers/usb/host/xen-usbback/common.h  |  170 ++++
 drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
 drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
 drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
 drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
 include/xen/interface/io/usbif.h       |  150 +++
 10 files changed, 4244 insertions(+)
 create mode 100644 drivers/usb/host/xen-usbback/Makefile
 create mode 100644 drivers/usb/host/xen-usbback/common.h
 create mode 100644 drivers/usb/host/xen-usbback/usbback.c
 create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
 create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
 create mode 100644 drivers/usb/host/xen-usbfront.c
 create mode 100644 include/xen/interface/io/usbif.h

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:58:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5eeG-0006MV-MW; Tue, 21 Jan 2014 16:58:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5eeE-0006MC-H5
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:57:59 +0000
Received: from [85.158.137.68:50253] by server-5.bemta-3.messagelabs.com id
	EB/B3-25188-517AED25; Tue, 21 Jan 2014 16:57:57 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390323469!10496529!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1229 invoked from network); 21 Jan 2014 16:57:51 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 16:57:51 -0000
Received: from mail-ea0-f175.google.com ([209.85.215.175]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6nDAhH3JeLTFG/7i3PfnM6F15uJAGK@postini.com;
	Tue, 21 Jan 2014 08:57:50 PST
Received: by mail-ea0-f175.google.com with SMTP id z10so3854710ead.6
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 08:57:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=XyttfSaipgRVmn7sy2uep2S9EDH/PwdlzkGmZVap1/E=;
	b=fgmwW7B7ydwu097qJgR462vw5PPJqgr7ZHbVQ8VqNBEN3tmh1tQmVCs5hPZDxzonev
	wNrg6yX9jRivdABRdNGsZe8M8umDsqd58NCu4ZxMyvo24qrZpLswFfVhMtb9uSErM0lj
	nPk2frL4Q8TUOy+cPnMJ7rUIKa1+nEqcq959g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=XyttfSaipgRVmn7sy2uep2S9EDH/PwdlzkGmZVap1/E=;
	b=Ya2zWejFY4Z3Yb+sLzlsusLOEC+Fcg4Cw+9/dDvZwHHVgua9oPbL+95vOXRHGq+Qup
	0kqjDHAphbc4/bxdFPi6SMuBEpwxx4OOnfNzWWtQfX22zDdxJksBEmKQcBTdnVho+1tk
	KYj+xU/wjkv9fwX6mzsubqCXVGfTSEniP5Y7q/kH+N5zFmeDk1aYqffhTApOPtNKLORV
	TofmuqA9tlMSYcR1JlA5EJ/pmuRaRbyBjIppO0/ibb76znl35fbjrti7bgksXoUAsWrG
	TBL+loGmq8EQxcS8ahzxO/hJmKjILHB6DO44+fc1waWmQZ3iXtIRmN1D68Wyeiaet3Fe
	OWDg==
X-Gm-Message-State: ALoCoQmrZ1pvgL333NkFjfavHi3CLvSevr8RGvRJeivx9lLeAQblXKT/+O3/aM+ebnz0QClen0UP0fdaJ0+PMxZAo+AxoLJUFEZNYWVatKtY+uQcosxkloyh54gluG0KT6itwfYsXXoug6/kUEWrL0x6JFyB0517klYxZEiwPtBq+zwWkuV0sFg=
X-Received: by 10.14.102.67 with SMTP id c43mr24309611eeg.23.1390323467709;
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
X-Received: by 10.14.102.67 with SMTP id c43mr24309594eeg.23.1390323467549;
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
Received: from uglx0187394.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 4sm16752303eed.14.2014.01.21.08.57.46
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 18:53:15 +0200
Message-Id: <1390323197-23003-2-git-send-email-oleksandr.savchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>,
	Nathanael Rensen <nathanael@polymorpheus.com>
Subject: [Xen-devel] [PATCH 1/3] pvusb drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Nathanael Rensen <nathanael@polymorpheus.com>

Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
---
 drivers/usb/host/Kconfig               |   23 +
 drivers/usb/host/Makefile              |    2 +
 drivers/usb/host/xen-usbback/Makefile  |    3 +
 drivers/usb/host/xen-usbback/common.h  |  170 ++++
 drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
 drivers/usb/host/xen-usbback/usbdev.c  |  319 ++++++
 drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
 drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
 include/xen/interface/io/usbif.h       |  150 +++
 9 files changed, 4160 insertions(+)
 create mode 100644 drivers/usb/host/xen-usbback/Makefile
 create mode 100644 drivers/usb/host/xen-usbback/common.h
 create mode 100644 drivers/usb/host/xen-usbback/usbback.c
 create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
 create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
 create mode 100644 drivers/usb/host/xen-usbfront.c
 create mode 100644 include/xen/interface/io/usbif.h

diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig
index 2d2975d..2c10ea7 100644
--- a/drivers/usb/host/Kconfig
+++ b/drivers/usb/host/Kconfig
@@ -634,6 +634,29 @@ config USB_IMX21_HCD
          To compile this driver as a module, choose M here: the
          module will be called "imx21-hcd".
 
+config XEN_USBDEV_FRONTEND
+	tristate "Xen pvusb device frontend driver"
+	depends on XEN && USB
+	select XEN_XENBUS_FRONTEND
+	default m
+	help
+	  The pvusb device frontend driver allows the kernel to
+	  access usb devices exported by a virtual
+	  machine containing a physical usb device driver. The
+	  frontend driver is intended for unprivileged guest domains;
+	  if you are compiling a kernel for a Xen guest, you almost
+	  certainly want to enable this.
+
+config XEN_USBDEV_BACKEND
+	tristate "PVUSB device backend driver"
+	depends on XEN_BACKEND && USB
+	default m
+	help
+	  The pvusb backend driver allows the kernel to export its usb
+	  devices to other guests via a high-performance shared-memory
+	  interface. This requires the guest to have the pvusb frontend
+	  available.
+
 config USB_OCTEON_EHCI
 	bool "Octeon on-chip EHCI support"
 	depends on USB && USB_EHCI_HCD && CPU_CAVIUM_OCTEON
diff --git a/drivers/usb/host/Makefile b/drivers/usb/host/Makefile
index 56de410..cba51c8 100644
--- a/drivers/usb/host/Makefile
+++ b/drivers/usb/host/Makefile
@@ -45,5 +45,7 @@ obj-$(CONFIG_USB_HWA_HCD)	+= hwa-hc.o
 obj-$(CONFIG_USB_IMX21_HCD)	+= imx21-hcd.o
 obj-$(CONFIG_USB_FSL_MPH_DR_OF)	+= fsl-mph-dr-of.o
 obj-$(CONFIG_USB_OCTEON2_COMMON) += octeon2-common.o
+obj-$(CONFIG_XEN_USBDEV_FRONTEND) += xen-usbfront.o
+obj-$(CONFIG_XEN_USBDEV_BACKEND) += xen-usbback/
 obj-$(CONFIG_USB_HCD_BCMA)	+= bcma-hcd.o
 obj-$(CONFIG_USB_HCD_SSB)	+= ssb-hcd.o
diff --git a/drivers/usb/host/xen-usbback/Makefile b/drivers/usb/host/xen-usbback/Makefile
new file mode 100644
index 0000000..9f3628c
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_XEN_USBDEV_BACKEND) := xen-usbback.o
+
+xen-usbback-y   := usbdev.o xenbus.o usbback.o
diff --git a/drivers/usb/host/xen-usbback/common.h b/drivers/usb/host/xen-usbback/common.h
new file mode 100644
index 0000000..d9671ec
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/common.h
@@ -0,0 +1,170 @@
+/*
+ * This file is part of Xen USB backend driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_USBBACK__COMMON_H__
+#define __XEN_USBBACK__COMMON_H__
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/slab.h>
+#include <linux/usb.h>
+#include <linux/vmalloc.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/kref.h>
+#include <asm/hypervisor.h>
+#include <xen/xen.h>
+#include <xen/events.h>
+#include <xen/interface/xen.h>
+#include <xen/xenbus.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+#include <xen/interface/io/usbif.h>
+
+#define DRV_PFX "xen-usbback:"
+
+struct xen_usbdev;
+
+#ifndef BUS_ID_SIZE
+#define XEN_USB_BUS_ID_SIZE 20
+#else
+#define XEN_USB_BUS_ID_SIZE BUS_ID_SIZE
+#endif
+
+#define XEN_USB_DEV_ADDR_SIZE 128
+
+struct xen_usbif {
+	domid_t				domid;
+	unsigned int			handle;
+	int				num_ports;
+	enum usb_spec_version		usb_ver;
+
+	struct list_head		usbif_list;
+
+	struct xenbus_device		*xbdev;
+
+	unsigned int			irq;
+
+	void				*urb_sring;
+	void				*conn_sring;
+	struct usbif_urb_back_ring	urb_ring;
+	struct usbif_conn_back_ring	conn_ring;
+
+	spinlock_t			urb_ring_lock;
+	spinlock_t			conn_ring_lock;
+	atomic_t			refcnt;
+
+	struct xenbus_watch		backend_watch;
+
+	/* device address lookup table */
+	struct xen_usbdev		*addr_table[XEN_USB_DEV_ADDR_SIZE];
+	spinlock_t			addr_lock;
+
+	/* connected device list */
+	struct list_head		dev_list;
+	spinlock_t			dev_lock;
+
+	/* request schedule */
+	struct task_struct		*xenusbd;
+	unsigned int			waiting_reqs;
+	wait_queue_head_t		waiting_to_free;
+	wait_queue_head_t		wq;
+};
+
+struct xen_usbport {
+	struct list_head	port_list;
+
+	char			phys_bus[XEN_USB_BUS_ID_SIZE];
+	domid_t			domid;
+	unsigned int		handle;
+	int			portnum;
+	unsigned		is_connected:1;
+};
+
+struct xen_usbdev {
+	struct kref		kref;
+	struct list_head	dev_list;
+
+	struct xen_usbport	*port;
+	struct usb_device	*udev;
+	struct xen_usbif	*usbif;
+	int			addr;
+
+	struct list_head	submitting_list;
+	spinlock_t		submitting_lock;
+};
+
+#define usbif_get(_b) (atomic_inc(&(_b)->refcnt))
+#define usbif_put(_b) \
+	do { \
+		if (atomic_dec_and_test(&(_b)->refcnt)) \
+			wake_up(&(_b)->waiting_to_free); \
+	} while (0)
+
+int xen_usbif_xenbus_init(void);
+void xen_usbif_xenbus_exit(void);
+struct xen_usbif *xen_usbif_find(domid_t domid, unsigned int handle);
+
+int xen_usbdev_init(void);
+void xen_usbdev_exit(void);
+
+void xen_usbif_attach_device(struct xen_usbif *usbif, struct xen_usbdev *dev);
+void xen_usbif_detach_device(struct xen_usbif *usbif, struct xen_usbdev *dev);
+void xen_usbif_detach_device_without_lock(struct xen_usbif *usbif,
+						struct xen_usbdev *dev);
+void xen_usbif_hotplug_notify(struct xen_usbif *usbif, int portnum, int speed);
+struct xen_usbdev *xen_usbif_find_attached_device(struct xen_usbif *usbif,
+								int port);
+irqreturn_t xen_usbif_be_int(int irq, void *dev_id);
+int xen_usbif_schedule(void *arg);
+void xen_usbif_unlink_urbs(struct xen_usbdev *dev);
+
+struct xen_usbport *xen_usbport_find_by_busid(const char *busid);
+struct xen_usbport *xen_usbport_find(const domid_t domid,
+				const unsigned int handle, const int portnum);
+int xen_usbport_add(const char *busid, const domid_t domid,
+				const unsigned int handle, const int portnum);
+int xen_usbport_remove(const domid_t domid, const unsigned int handle,
+							const int portnum);
+#endif /* __XEN_USBBACK__COMMON_H__ */
diff --git a/drivers/usb/host/xen-usbback/usbback.c b/drivers/usb/host/xen-usbback/usbback.c
new file mode 100644
index 0000000..6d7bda0
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/usbback.c
@@ -0,0 +1,1272 @@
+/*
+ * Xen USB backend driver
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/mm.h>
+#include "common.h"
+
+static int xen_usbif_reqs = USBIF_BACK_MAX_PENDING_REQS;
+module_param_named(reqs, xen_usbif_reqs, int, 0);
+MODULE_PARM_DESC(reqs, "Number of usbback requests to allocate");
+
+struct pending_req_segment {
+	uint16_t offset;
+	uint16_t length;
+};
+
+struct pending_req {
+	struct xen_usbif	*usbif;
+
+	uint16_t		id; /* request id */
+
+	struct xen_usbdev	*dev;
+	struct list_head	urb_list;
+
+	/* urb */
+	struct urb		*urb;
+	void			*buffer;
+	dma_addr_t		transfer_dma;
+	struct usb_ctrlrequest	*setup;
+	dma_addr_t		setup_dma;
+
+	/* request segments */
+	uint16_t		nr_buffer_segs;
+				/* number of urb->transfer_buffer segments */
+	uint16_t		nr_extra_segs;
+				/* number of iso_frame_desc segments (ISO) */
+	struct pending_req_segment *seg;
+
+	struct list_head	free_list;
+};
+
+#define USBBACK_INVALID_HANDLE (~0)
+
+struct xen_usbbk {
+	struct pending_req	*pending_reqs;
+	struct list_head	pending_free;
+	spinlock_t		pending_free_lock;
+	wait_queue_head_t	pending_free_wq;
+	struct list_head	urb_free;
+	spinlock_t		urb_free_lock;
+	struct page		**pending_pages;
+	grant_handle_t		*pending_grant_handles;
+};
+
+static struct xen_usbbk *usbbk;
+
+static inline int vaddr_pagenr(struct pending_req *req, int seg)
+{
+	return (req - usbbk->pending_reqs) *
+		USBIF_MAX_SEGMENTS_PER_REQUEST + seg;
+}
+
+#define pending_page(req, seg) pending_pages[vaddr_pagenr(req, seg)]
+
+static inline unsigned long vaddr(struct pending_req *req, int seg)
+{
+	unsigned long pfn = page_to_pfn(usbbk->pending_page(req, seg));
+	return (unsigned long)pfn_to_kaddr(pfn);
+}
+
+#define pending_handle(_req, _seg) \
+	(usbbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
+
+static struct pending_req *alloc_req(void)
+{
+	struct pending_req *req = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbbk->pending_free_lock, flags);
+	if (!list_empty(&usbbk->pending_free)) {
+		req = list_entry(usbbk->pending_free.next, struct pending_req,
+								free_list);
+		list_del(&req->free_list);
+	}
+	spin_unlock_irqrestore(&usbbk->pending_free_lock, flags);
+	return req;
+}
+
+static void free_req(struct pending_req *req)
+{
+	unsigned long flags;
+	int was_empty;
+
+	spin_lock_irqsave(&usbbk->pending_free_lock, flags);
+	was_empty = list_empty(&usbbk->pending_free);
+	list_add(&req->free_list, &usbbk->pending_free);
+	spin_unlock_irqrestore(&usbbk->pending_free_lock, flags);
+	if (was_empty)
+		wake_up(&usbbk->pending_free_wq);
+}
+
+static inline void add_req_to_submitting_list(struct xen_usbdev *dev,
+						struct pending_req *pending_req)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_add_tail(&pending_req->urb_list, &dev->submitting_list);
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+static inline void remove_req_from_submitting_list(struct xen_usbdev *dev,
+						struct pending_req *pending_req)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_del_init(&pending_req->urb_list);
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+void xen_usbif_unlink_urbs(struct xen_usbdev *dev)
+{
+	struct pending_req *req, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_for_each_entry_safe(req, tmp, &dev->submitting_list, urb_list) {
+		usb_unlink_urb(req->urb);
+	}
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+static void copy_buff_to_pages(void *buff, struct pending_req *pending_req,
+				int start, int nr_pages)
+{
+	unsigned long copied = 0;
+	int i;
+
+	for (i = start; i < start + nr_pages; i++) {
+		memcpy((void *) vaddr(pending_req, i) +
+						pending_req->seg[i].offset,
+			buff + copied, pending_req->seg[i].length);
+		copied += pending_req->seg[i].length;
+	}
+}
+
+static void copy_pages_to_buff(void *buff, struct pending_req *pending_req,
+							int start, int nr_pages)
+{
+	unsigned long copied = 0;
+	int i;
+
+	for (i = start; i < start + nr_pages; i++) {
+		void *src = (void *) vaddr(pending_req, i) +
+						pending_req->seg[i].offset;
+		memcpy(buff + copied, src, pending_req->seg[i].length);
+		copied += pending_req->seg[i].length;
+	}
+}
+
+static int usbbk_alloc_urb(struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	int ret;
+
+	if (usb_pipeisoc(req->pipe))
+		pending_req->urb = usb_alloc_urb(req->u.isoc.number_of_packets,
+						 GFP_KERNEL);
+	else
+		pending_req->urb = usb_alloc_urb(0, GFP_KERNEL);
+	if (!pending_req->urb) {
+		pr_alert(DRV_PFX "can't alloc urb\n");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (req->buffer_length) {
+		pending_req->buffer =
+			usb_alloc_coherent(pending_req->dev->udev,
+						req->buffer_length, GFP_KERNEL,
+						&pending_req->transfer_dma);
+		if (!pending_req->buffer) {
+			pr_alert(DRV_PFX "can't alloc urb buffer\n");
+			ret = -ENOMEM;
+			goto fail_free_urb;
+		}
+	}
+
+	if (usb_pipecontrol(req->pipe)) {
+		pending_req->setup = usb_alloc_coherent(pending_req->dev->udev,
+					sizeof(struct usb_ctrlrequest),
+					GFP_KERNEL, &pending_req->setup_dma);
+		if (!pending_req->setup) {
+			pr_alert(DRV_PFX "can't alloc usb_ctrlrequest\n");
+			ret = -ENOMEM;
+			goto fail_free_buffer;
+		}
+	}
+
+	return 0;
+
+fail_free_buffer:
+	if (req->buffer_length)
+		usb_free_coherent(pending_req->dev->udev, req->buffer_length,
+				pending_req->buffer, pending_req->transfer_dma);
+fail_free_urb:
+	usb_free_urb(pending_req->urb);
+fail:
+	return ret;
+}
+
+static void usbbk_release_urb(struct urb *urb)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbbk->urb_free_lock, flags);
+	list_add(&urb->urb_list, &usbbk->urb_free);
+	spin_unlock_irqrestore(&usbbk->urb_free_lock, flags);
+}
+
+static void usbbk_free_urb(struct urb *urb)
+{
+	if (usb_pipecontrol(urb->pipe))
+		usb_free_coherent(urb->dev, sizeof(struct usb_ctrlrequest),
+				urb->setup_packet, urb->setup_dma);
+	if (urb->transfer_buffer_length)
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				urb->transfer_buffer, urb->transfer_dma);
+	barrier();
+	usb_free_urb(urb);
+}
+
+static void usbbk_free_urbs(void)
+{
+	unsigned long flags;
+	struct list_head tmp_list;
+
+	if (list_empty(&usbbk->urb_free))
+		return;
+
+	INIT_LIST_HEAD(&tmp_list);
+
+	spin_lock_irqsave(&usbbk->urb_free_lock, flags);
+	list_splice_init(&usbbk->urb_free, &tmp_list);
+	spin_unlock_irqrestore(&usbbk->urb_free_lock, flags);
+
+	while (!list_empty(&tmp_list)) {
+		struct urb *next_urb =
+			list_first_entry(&tmp_list, struct urb, urb_list);
+		list_del(&next_urb->urb_list);
+		usbbk_free_urb(next_urb);
+	}
+}
+
+static void usbif_notify_work(struct xen_usbif *usbif)
+{
+	usbif->waiting_reqs = 1;
+	wake_up(&usbif->wq);
+}
+
+irqreturn_t xen_usbif_be_int(int irq, void *dev_id)
+{
+	usbif_notify_work(dev_id);
+	return IRQ_HANDLED;
+}
+
+static void xen_usbbk_unmap(struct pending_req *req)
+{
+	struct gnttab_unmap_grant_ref unmap[USBIF_MAX_SEGMENTS_PER_REQUEST];
+	unsigned int i, nr_segs, invcount = 0;
+	grant_handle_t handle;
+	int ret;
+
+	nr_segs = req->nr_buffer_segs + req->nr_extra_segs;
+
+	if (nr_segs == 0)
+		return;
+
+	for (i = 0; i < nr_segs; i++) {
+		handle = pending_handle(req, i);
+		if (handle == USBBACK_INVALID_HANDLE)
+			continue;
+		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
+					GNTMAP_host_map, handle);
+		pending_handle(req, i) = USBBACK_INVALID_HANDLE;
+		invcount++;
+	}
+
+	ret = HYPERVISOR_grant_table_op(
+		GNTTABOP_unmap_grant_ref, unmap, invcount);
+	BUG_ON(ret);
+	/*
+	 * Note, we use invcount, not nr_segs, so we can't index
+	 * using vaddr(req, i).
+	 */
+	for (i = 0; i < invcount; i++) {
+		ret = m2p_remove_override(
+			virt_to_page(unmap[i].host_addr), false);
+		if (ret) {
+			pr_alert(DRV_PFX "Failed to remove M2P override for "
+				"%lx\n", (unsigned long)unmap[i].host_addr);
+			continue;
+		}
+	}
+
+	kfree(req->seg);
+}
+
+static int xen_usbbk_map(struct xen_usbif *usbif,
+				struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	int i, ret;
+	unsigned int nr_segs;
+	uint32_t flags;
+	struct gnttab_map_grant_ref map[USBIF_MAX_SEGMENTS_PER_REQUEST];
+
+	nr_segs = pending_req->nr_buffer_segs + pending_req->nr_extra_segs;
+
+	if (nr_segs == 0)
+		return 0;
+
+	if (nr_segs > USBIF_MAX_SEGMENTS_PER_REQUEST) {
+		pr_alert(DRV_PFX "Bad number of segments in request\n");
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	pending_req->seg = kmalloc(sizeof(struct pending_req_segment) *
+							nr_segs, GFP_KERNEL);
+	if (!pending_req->seg) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	flags = GNTMAP_host_map;
+	if (usb_pipeout(req->pipe))
+		flags |= GNTMAP_readonly;
+	for (i = 0; i < pending_req->nr_buffer_segs; i++) {
+		gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
+					req->seg[i].gref, usbif->domid);
+	}
+
+	flags = GNTMAP_host_map;
+	for (i = pending_req->nr_buffer_segs; i < nr_segs; i++) {
+		gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
+					req->seg[i].gref, usbif->domid);
+	}
+
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segs);
+	BUG_ON(ret);
+
+	for (i = 0; i < nr_segs; i++) {
+		if (unlikely(map[i].status != 0)) {
+			pr_alert(DRV_PFX "invalid buffer "
+					"-- could not remap it (error %d)\n",
+				map[i].status);
+			map[i].handle = USBBACK_INVALID_HANDLE;
+			ret |= 1;
+		}
+
+		pending_handle(pending_req, i) = map[i].handle;
+
+		if (ret)
+			continue;
+
+		ret = m2p_add_override(PFN_DOWN(map[i].dev_bus_addr),
+			usbbk->pending_page(pending_req, i), NULL);
+		if (ret) {
+			pr_alert(DRV_PFX "Failed to install M2P override for "
+			    "%lx (ret: %d)\n",
+			    (unsigned long)map[i].dev_bus_addr, ret);
+			/* We could switch over to GNTTABOP_copy */
+			continue;
+		}
+
+		pending_req->seg[i].offset = req->seg[i].offset;
+		pending_req->seg[i].length = req->seg[i].length;
+
+		barrier();
+
+		if (pending_req->seg[i].offset >= PAGE_SIZE ||
+			pending_req->seg[i].length > PAGE_SIZE ||
+			pending_req->seg[i].offset +
+				pending_req->seg[i].length > PAGE_SIZE)
+			ret |= 1;
+	}
+
+	if (ret)
+		goto fail_flush;
+
+	return 0;
+
+fail_flush:
+	xen_usbbk_unmap(pending_req);
+	ret = -ENOMEM;
+
+fail:
+	return ret;
+}
+
+static void usbbk_do_response(struct pending_req *pending_req, int32_t status,
+				int32_t actual_length, int32_t error_count,
+				uint16_t start_frame)
+{
+	struct xen_usbif *usbif = pending_req->usbif;
+	struct usbif_urb_response *res;
+	unsigned long flags;
+	int notify;
+
+	spin_lock_irqsave(&usbif->urb_ring_lock, flags);
+	res = RING_GET_RESPONSE(&usbif->urb_ring, usbif->urb_ring.rsp_prod_pvt);
+	res->id = pending_req->id;
+	res->status = status;
+	res->actual_length = actual_length;
+	res->error_count = error_count;
+	res->start_frame = start_frame;
+	usbif->urb_ring.rsp_prod_pvt++;
+	barrier();
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&usbif->urb_ring, notify);
+	spin_unlock_irqrestore(&usbif->urb_ring_lock, flags);
+
+	if (notify)
+		notify_remote_via_irq(usbif->irq);
+}
+
+static void usbbk_urb_complete(struct urb *urb)
+{
+	struct pending_req *pending_req = (struct pending_req *)urb->context;
+
+	if (usb_pipein(urb->pipe) && urb->status == 0 && urb->actual_length > 0)
+		copy_buff_to_pages(pending_req->buffer, pending_req, 0,
+					pending_req->nr_buffer_segs);
+
+	if (usb_pipeisoc(urb->pipe))
+		copy_buff_to_pages(&urb->iso_frame_desc[0], pending_req,
+					pending_req->nr_buffer_segs,
+					pending_req->nr_extra_segs);
+
+	barrier();
+
+	xen_usbbk_unmap(pending_req);
+
+	usbbk_do_response(pending_req, urb->status, urb->actual_length,
+				urb->error_count, urb->start_frame);
+
+	remove_req_from_submitting_list(pending_req->dev, pending_req);
+
+	barrier();
+	usbbk_release_urb(urb);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+}
+
+static void usbbk_init_urb(struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	unsigned int pipe;
+	struct usb_device *udev = pending_req->dev->udev;
+	struct urb *urb = pending_req->urb;
+
+	switch (usb_pipetype(req->pipe)) {
+	case PIPE_ISOCHRONOUS:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvisocpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndisocpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		urb->dev = udev;
+		urb->pipe = pipe;
+		urb->transfer_flags = req->transfer_flags;
+		urb->transfer_flags |= URB_ISO_ASAP;
+		urb->transfer_buffer = pending_req->buffer;
+		urb->transfer_buffer_length = req->buffer_length;
+		urb->complete = usbbk_urb_complete;
+		urb->context = pending_req;
+		urb->interval = req->u.isoc.interval;
+		urb->start_frame = req->u.isoc.start_frame;
+		urb->number_of_packets = req->u.isoc.number_of_packets;
+
+		break;
+	case PIPE_INTERRUPT:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvintpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndintpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		usb_fill_int_urb(urb, udev, pipe,
+				pending_req->buffer, req->buffer_length,
+				usbbk_urb_complete,
+				pending_req, req->u.intr.interval);
+		/*
+		 * High-speed interrupt endpoints use a logarithmic encoding
+		 * of the endpoint interval, and usb_fill_int_urb()
+		 * initializes an interrupt urb with the encoded interval
+		 * value.
+		 *
+		 * req->u.intr.interval is already encoded by the frontend,
+		 * so the usb_fill_int_urb() call above sets urb->interval
+		 * to a doubly encoded value.
+		 *
+		 * So, simply overwrite urb->interval with the original value.
+		 */
+		urb->interval = req->u.intr.interval;
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	case PIPE_CONTROL:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvctrlpipe(udev, 0);
+		else
+			pipe = usb_sndctrlpipe(udev, 0);
+
+		usb_fill_control_urb(urb, udev, pipe,
+				(unsigned char *) pending_req->setup,
+				pending_req->buffer, req->buffer_length,
+				usbbk_urb_complete, pending_req);
+		memcpy(pending_req->setup, req->u.ctrl, 8);
+		urb->setup_dma = pending_req->setup_dma;
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	case PIPE_BULK:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvbulkpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndbulkpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		usb_fill_bulk_urb(urb, udev, pipe, pending_req->buffer,
+				req->buffer_length, usbbk_urb_complete,
+				pending_req);
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	default:
+		break;
+	}
+
+	if (req->buffer_length) {
+		urb->transfer_dma = pending_req->transfer_dma;
+		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+	}
+}
+
+struct set_interface_request {
+	struct pending_req *pending_req;
+	int interface;
+	int alternate;
+	struct work_struct work;
+};
+
+static void usbbk_set_interface_work(struct work_struct *arg)
+{
+	struct set_interface_request *req
+		= container_of(arg, struct set_interface_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = req->pending_req->dev->udev;
+
+	int ret;
+
+	usb_lock_device(udev);
+	ret = usb_set_interface(udev, req->interface, req->alternate);
+	usb_unlock_device(udev);
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_set_interface(struct pending_req *pending_req, int interface,
+				int alternate)
+{
+	struct set_interface_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+	req->pending_req = pending_req;
+	req->interface = interface;
+	req->alternate = alternate;
+	INIT_WORK(&req->work, usbbk_set_interface_work);
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+
+struct clear_halt_request {
+	struct pending_req *pending_req;
+	int pipe;
+	struct work_struct work;
+};
+
+static void usbbk_clear_halt_work(struct work_struct *arg)
+{
+	struct clear_halt_request *req = container_of(arg,
+					struct clear_halt_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = req->pending_req->dev->udev;
+	int ret;
+
+	usb_lock_device(udev);
+	ret = usb_clear_halt(req->pending_req->dev->udev, req->pipe);
+	usb_unlock_device(udev);
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_clear_halt(struct pending_req *pending_req, int pipe)
+{
+	struct clear_halt_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+	req->pending_req = pending_req;
+	req->pipe = pipe;
+	INIT_WORK(&req->work, usbbk_clear_halt_work);
+
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+
+#if 0
+struct port_reset_request {
+	struct pending_req *pending_req;
+	struct work_struct work;
+};
+
+static void usbbk_port_reset_work(struct work_struct *arg)
+{
+	struct port_reset_request *req = container_of(arg,
+					struct port_reset_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = pending_req->dev->udev;
+	int ret, ret_lock;
+
+	ret = ret_lock = usb_lock_device_for_reset(udev, NULL);
+	if (ret_lock >= 0) {
+		ret = usb_reset_device(udev);
+		if (ret_lock)
+			usb_unlock_device(udev);
+	}
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_port_reset(struct pending_req *pending_req)
+{
+	struct port_reset_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	req->pending_req = pending_req;
+	INIT_WORK(&req->work, usbbk_port_reset_work);
+
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+#endif
+
+static void usbbk_set_address(struct xen_usbif *usbif, struct xen_usbdev *dev,
+				int cur_addr, int new_addr)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->addr_lock, flags);
+	if (cur_addr)
+		usbif->addr_table[cur_addr] = NULL;
+	if (new_addr)
+		usbif->addr_table[new_addr] = dev;
+	dev->addr = new_addr;
+	spin_unlock_irqrestore(&usbif->addr_lock, flags);
+}
+
+static void process_unlink_req(struct xen_usbif *usbif,
+				struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	struct pending_req *unlink_req = NULL;
+	int devnum;
+	int ret = 0;
+	unsigned long flags;
+
+	devnum = usb_pipedevice(req->pipe);
+	if (unlikely(devnum == 0)) {
+		pending_req->dev = xen_usbif_find_attached_device(usbif,
+						usbif_pipeportnum(req->pipe));
+		if (unlikely(!pending_req->dev)) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+	} else {
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	spin_lock_irqsave(&pending_req->dev->submitting_lock, flags);
+	list_for_each_entry(unlink_req, &pending_req->dev->submitting_list,
+								urb_list) {
+		if (unlink_req->id == req->u.unlink.unlink_id) {
+			ret = usb_unlink_urb(unlink_req->urb);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&pending_req->dev->submitting_lock, flags);
+
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+	return;
+}
+
+static int check_and_submit_special_ctrlreq(struct xen_usbif *usbif,
+						struct usbif_urb_request *req,
+						struct pending_req *pending_req)
+{
+	int devnum;
+	struct xen_usbdev *dev = NULL;
+	struct usb_ctrlrequest *ctrl = (struct usb_ctrlrequest *) req->u.ctrl;
+	int ret;
+	int done = 0;
+
+	devnum = usb_pipedevice(req->pipe);
+
+	/*
+	 * When the device is first connected or reset, the USB device has
+	 * no address. In this initial state, the following requests are
+	 * sent to device address #0:
+	 *
+	 *  1. GET_DESCRIPTOR (with descriptor type "DEVICE") is sent, so
+	 *     the OS learns what device is connected.
+	 *
+	 *  2. SET_ADDRESS is sent, after which the device has its address.
+	 *
+	 * In the next step, SET_CONFIGURATION is sent to the addressed
+	 * device, and the device is then finally ready to use.
+	 */
+	if (unlikely(devnum == 0)) {
+		dev = xen_usbif_find_attached_device(usbif,
+						usbif_pipeportnum(req->pipe));
+		if (unlikely(!dev)) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+
+		switch (ctrl->bRequest) {
+		case USB_REQ_GET_DESCRIPTOR:
+			/*
+			 * GET_DESCRIPTOR request to device #0.
+			 * Fall through to normal urb transfer.
+			 */
+			pending_req->dev = dev;
+			return 0;
+		case USB_REQ_SET_ADDRESS:
+			/*
+			 * SET_ADDRESS request to device #0.
+			 * add attached device to addr_table.
+			 */
+			{
+				__u16 addr = le16_to_cpu(ctrl->wValue);
+				usbbk_set_address(usbif, dev, 0, addr);
+			}
+			ret = 0;
+			goto fail_response;
+		default:
+			ret = -EINVAL;
+			goto fail_response;
+		}
+	} else {
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	/*
+	 * Check special request
+	 */
+	switch (ctrl->bRequest) {
+	case USB_REQ_SET_ADDRESS:
+		/*
+		 * SET_ADDRESS request to addressed device.
+		 * change addr or remove from addr_table.
+		 */
+		{
+			__u16 addr = le16_to_cpu(ctrl->wValue);
+			usbbk_set_address(usbif, pending_req->dev,
+						devnum, addr);
+		}
+		ret = 0;
+		goto fail_response;
+#if 0
+	case USB_REQ_SET_CONFIGURATION:
+		/*
+		 * Linux 2.6.27 or later only!
+		 */
+		if (ctrl->bRequestType == USB_RECIP_DEVICE) {
+			__u16 config = le16_to_cpu(ctrl->wValue);
+			usb_driver_set_configuration(pending_req->dev->udev,
+							config);
+			done = 1;
+		}
+		break;
+#endif
+	case USB_REQ_SET_INTERFACE:
+		if (ctrl->bRequestType == USB_RECIP_INTERFACE) {
+			__u16 alt = le16_to_cpu(ctrl->wValue);
+			__u16 intf = le16_to_cpu(ctrl->wIndex);
+			usbbk_set_interface(pending_req, intf, alt);
+			done = 1;
+		}
+		break;
+	case USB_REQ_CLEAR_FEATURE:
+		if (ctrl->bRequestType == USB_RECIP_ENDPOINT
+			&& ctrl->wValue == USB_ENDPOINT_HALT) {
+			int pipe;
+			int ep = le16_to_cpu(ctrl->wIndex) & 0x0f;
+			int dir = le16_to_cpu(ctrl->wIndex) & USB_DIR_IN;
+			if (dir)
+				pipe = usb_rcvctrlpipe(pending_req->dev->udev,
+							ep);
+			else
+				pipe = usb_sndctrlpipe(pending_req->dev->udev,
+							ep);
+			usbbk_clear_halt(pending_req, pipe);
+			done = 1;
+		}
+		break;
+#if 0 /* not tested yet */
+	case USB_REQ_SET_FEATURE:
+		if (ctrl->bRequestType == USB_RT_PORT) {
+			__u16 feat = le16_to_cpu(ctrl->wValue);
+			if (feat == USB_PORT_FEAT_RESET) {
+				usbbk_port_reset(pending_req);
+				done = 1;
+			}
+		}
+		break;
+#endif
+	default:
+		break;
+	}
+
+	return done;
+
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+	return 1;
+}
+
+static void dispatch_request_to_pending_reqs(struct xen_usbif *usbif,
+						struct usbif_urb_request *req,
+						struct pending_req *pending_req)
+{
+	int ret;
+
+	pending_req->id = req->id;
+	pending_req->usbif = usbif;
+
+	barrier();
+
+	usbif_get(usbif);
+
+	/* unlink request */
+	if (unlikely(usbif_pipeunlink(req->pipe))) {
+		process_unlink_req(usbif, req, pending_req);
+		return;
+	}
+
+	if (usb_pipecontrol(req->pipe)) {
+		if (check_and_submit_special_ctrlreq(usbif, req, pending_req))
+			return;
+	} else {
+		int devnum = usb_pipedevice(req->pipe);
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	barrier();
+
+	ret = usbbk_alloc_urb(req, pending_req);
+	if (ret) {
+		ret = -ESHUTDOWN;
+		goto fail_response;
+	}
+
+	add_req_to_submitting_list(pending_req->dev, pending_req);
+
+	barrier();
+
+	usbbk_init_urb(req, pending_req);
+
+	barrier();
+
+	pending_req->nr_buffer_segs = req->nr_buffer_segs;
+	if (usb_pipeisoc(req->pipe))
+		pending_req->nr_extra_segs = req->u.isoc.nr_frame_desc_segs;
+	else
+		pending_req->nr_extra_segs = 0;
+
+	barrier();
+
+	ret = xen_usbbk_map(usbif, req, pending_req);
+	if (ret) {
+		pr_alert(DRV_PFX "invalid buffer\n");
+		ret = -ESHUTDOWN;
+		goto fail_free_urb;
+	}
+
+	barrier();
+
+	if (usb_pipeout(req->pipe) && req->buffer_length)
+		copy_pages_to_buff(pending_req->buffer, pending_req, 0,
+					pending_req->nr_buffer_segs);
+	if (usb_pipeisoc(req->pipe)) {
+		copy_pages_to_buff(&pending_req->urb->iso_frame_desc[0],
+				pending_req, pending_req->nr_buffer_segs,
+				pending_req->nr_extra_segs);
+	}
+
+	barrier();
+
+	ret = usb_submit_urb(pending_req->urb, GFP_KERNEL);
+	if (ret) {
+		pr_alert(DRV_PFX "failed submitting urb, error %d\n", ret);
+		ret = -ESHUTDOWN;
+		goto fail_flush_area;
+	}
+	return;
+
+fail_flush_area:
+	xen_usbbk_unmap(pending_req);
+fail_free_urb:
+	remove_req_from_submitting_list(pending_req->dev, pending_req);
+	barrier();
+	usbbk_release_urb(pending_req->urb);
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+}
+
+static int usbbk_start_submit_urb(struct xen_usbif *usbif)
+{
+	struct usbif_urb_back_ring *urb_ring = &usbif->urb_ring;
+	struct usbif_urb_request *req;
+	struct pending_req *pending_req;
+	RING_IDX rc, rp;
+	int more_to_do = 0;
+
+	rc = urb_ring->req_cons;
+	rp = urb_ring->sring->req_prod;
+	rmb();
+
+	while (rc != rp) {
+		if (RING_REQUEST_CONS_OVERFLOW(urb_ring, rc)) {
+			pr_warn(DRV_PFX "RING_REQUEST_CONS_OVERFLOW\n");
+			break;
+		}
+
+		pending_req = alloc_req();
+		if (!pending_req) {
+			more_to_do = 1;
+			break;
+		}
+
+		req = RING_GET_REQUEST(urb_ring, rc);
+		urb_ring->req_cons = ++rc;
+
+		dispatch_request_to_pending_reqs(usbif, req, pending_req);
+	}
+
+	RING_FINAL_CHECK_FOR_REQUESTS(&usbif->urb_ring, more_to_do);
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+void xen_usbif_hotplug_notify(struct xen_usbif *usbif, int portnum, int speed)
+{
+	struct usbif_conn_back_ring *ring = &usbif->conn_ring;
+	struct usbif_conn_request *req;
+	struct usbif_conn_response *res;
+	unsigned long flags;
+	u16 id;
+	int notify;
+
+	spin_lock_irqsave(&usbif->conn_ring_lock, flags);
+
+	req = RING_GET_REQUEST(ring, ring->req_cons);
+	id = req->id;
+	ring->req_cons++;
+	ring->sring->req_event = ring->req_cons + 1;
+
+	res = RING_GET_RESPONSE(ring, ring->rsp_prod_pvt);
+	res->id = id;
+	res->portnum = portnum;
+	res->speed = speed;
+	ring->rsp_prod_pvt++;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+
+	spin_unlock_irqrestore(&usbif->conn_ring_lock, flags);
+
+	if (notify)
+		notify_remote_via_irq(usbif->irq);
+}
+
+int xen_usbif_schedule(void *arg)
+{
+	struct xen_usbif *usbif = (struct xen_usbif *) arg;
+
+	usbif_get(usbif);
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(usbif->wq,
+				usbif->waiting_reqs || kthread_should_stop());
+		wait_event_interruptible(usbbk->pending_free_wq,
+		    !list_empty(&usbbk->pending_free) || kthread_should_stop());
+		usbif->waiting_reqs = 0;
+		smp_mb();
+
+		if (usbbk_start_submit_urb(usbif))
+			usbif->waiting_reqs = 1;
+
+		usbbk_free_urbs();
+	}
+
+	usbbk_free_urbs();
+	usbif->xenusbd = NULL;
+	usbif_put(usbif);
+
+	return 0;
+}
+
+/*
+ * attach xen_usbdev device to usbif.
+ */
+void xen_usbif_attach_device(struct xen_usbif *usbif, struct xen_usbdev *dev)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_add(&dev->dev_list, &usbif->dev_list);
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+	dev->usbif = usbif;
+}
+
+/*
+ * detach usbdev device from usbif.
+ */
+void xen_usbif_detach_device(struct xen_usbif *usbif, struct xen_usbdev *dev)
+{
+	unsigned long flags;
+
+	if (dev->addr)
+		usbbk_set_address(usbif, dev, dev->addr, 0);
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_del(&dev->dev_list);
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+	dev->usbif = NULL;
+}
+
+void xen_usbif_detach_device_without_lock(struct xen_usbif *usbif,
+							struct xen_usbdev *dev)
+{
+	if (dev->addr)
+		usbbk_set_address(usbif, dev, dev->addr, 0);
+	list_del(&dev->dev_list);
+	dev->usbif = NULL;
+}
+
+static int __init xen_usbif_init(void)
+{
+	int i, mmap_pages;
+	int rc = 0;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	usbbk = kzalloc(sizeof(struct xen_usbbk), GFP_KERNEL);
+	if (!usbbk) {
+		pr_alert(DRV_PFX "%s: out of memory!\n", __func__);
+		return -ENOMEM;
+	}
+
+	mmap_pages = xen_usbif_reqs * USBIF_MAX_SEGMENTS_PER_REQUEST;
+	usbbk->pending_reqs =
+		kzalloc(sizeof(usbbk->pending_reqs[0]) * xen_usbif_reqs,
+								GFP_KERNEL);
+	usbbk->pending_grant_handles =
+		kmalloc(sizeof(usbbk->pending_grant_handles[0]) * mmap_pages,
+								GFP_KERNEL);
+	usbbk->pending_pages =
+		kzalloc(sizeof(usbbk->pending_pages[0]) * mmap_pages,
+								GFP_KERNEL);
+
+	if (!usbbk->pending_reqs || !usbbk->pending_grant_handles ||
+	    !usbbk->pending_pages) {
+		rc = -ENOMEM;
+		pr_alert(DRV_PFX "%s: out of memory\n", __func__);
+		goto failed_init;
+	}
+
+	for (i = 0; i < mmap_pages; i++) {
+		usbbk->pending_grant_handles[i] = USBBACK_INVALID_HANDLE;
+		usbbk->pending_pages[i] = alloc_page(GFP_KERNEL);
+		if (usbbk->pending_pages[i] == NULL) {
+			rc = -ENOMEM;
+			pr_alert(DRV_PFX "%s: out of memory\n", __func__);
+			goto failed_init;
+		}
+	}
+
+	INIT_LIST_HEAD(&usbbk->pending_free);
+	spin_lock_init(&usbbk->pending_free_lock);
+	init_waitqueue_head(&usbbk->pending_free_wq);
+
+	INIT_LIST_HEAD(&usbbk->urb_free);
+	spin_lock_init(&usbbk->urb_free_lock);
+
+	for (i = 0; i < xen_usbif_reqs; i++)
+		list_add_tail(&usbbk->pending_reqs[i].free_list,
+				&usbbk->pending_free);
+
+	rc = xen_usbdev_init();
+	if (rc)
+		goto failed_init;
+
+	rc = xen_usbif_xenbus_init();
+	if (rc)
+		goto usb_exit;
+
+	return 0;
+
+ usb_exit:
+	xen_usbdev_exit();
+ failed_init:
+	kfree(usbbk->pending_reqs);
+	kfree(usbbk->pending_grant_handles);
+	if (usbbk->pending_pages) {
+		for (i = 0; i < mmap_pages; i++) {
+			if (usbbk->pending_pages[i])
+				__free_page(usbbk->pending_pages[i]);
+		}
+		kfree(usbbk->pending_pages);
+	}
+	kfree(usbbk);
+	usbbk = NULL;
+	return rc;
+}
+
+struct xen_usbdev *xen_usbif_find_attached_device(struct xen_usbif *usbif,
+								int portnum)
+{
+	struct xen_usbdev *dev;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_for_each_entry(dev, &usbif->dev_list, dev_list) {
+		if (dev->port->portnum == portnum) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+
+	if (found)
+		return dev;
+
+	return NULL;
+}
+
+static void __exit xen_usbif_exit(void)
+{
+	int i;
+	int mmap_pages = xen_usbif_reqs * USBIF_MAX_SEGMENTS_PER_REQUEST;
+
+	xen_usbif_xenbus_exit();
+	xen_usbdev_exit();
+	kfree(usbbk->pending_reqs);
+	kfree(usbbk->pending_grant_handles);
+	for (i = 0; i < mmap_pages; i++) {
+		if (usbbk->pending_pages[i])
+			__free_page(usbbk->pending_pages[i]);
+	}
+	kfree(usbbk->pending_pages);
+	kfree(usbbk);
+	usbbk = NULL;
+}
+
+module_init(xen_usbif_init);
+module_exit(xen_usbif_exit);
+
+MODULE_AUTHOR("");
+MODULE_DESCRIPTION("Xen USB backend driver (xen_usbback)");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/usb/host/xen-usbback/usbdev.c b/drivers/usb/host/xen-usbback/usbdev.c
new file mode 100644
index 0000000..53a14b4
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/usbdev.c
@@ -0,0 +1,319 @@
+/*
+ * USB stub device driver - grabbing and managing USB devices.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include "common.h"
+
+static LIST_HEAD(port_list);
+static DEFINE_SPINLOCK(port_list_lock);
+
+struct xen_usbport *xen_usbport_find_by_busid(const char *busid)
+{
+	struct xen_usbport *port;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if (!(strncmp(port->phys_bus, busid, XEN_USB_BUS_ID_SIZE))) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	if (found)
+		return port;
+
+	return NULL;
+}
+
+struct xen_usbport *xen_usbport_find(const domid_t domid,
+				const unsigned int handle, const int portnum)
+{
+	struct xen_usbport *port;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if ((port->domid == domid) &&
+					(port->handle == handle) &&
+					(port->portnum == portnum)) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	if (found)
+		return port;
+
+	return NULL;
+}
+
+int xen_usbport_add(const char *busid, const domid_t domid,
+		const unsigned int handle, const int portnum)
+{
+	struct xen_usbport *port;
+	unsigned long flags;
+
+	port = kzalloc(sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	port->domid = domid;
+	port->handle = handle;
+	port->portnum = portnum;
+
+	strlcpy(port->phys_bus, busid, XEN_USB_BUS_ID_SIZE);
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_add(&port->port_list, &port_list);
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return 0;
+}
+
+int xen_usbport_remove(const domid_t domid, const unsigned int handle,
+			const int portnum)
+{
+	struct xen_usbport *port, *tmp;
+	int err = -ENOENT;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry_safe(port, tmp, &port_list, port_list) {
+		if (port->domid == domid &&
+					port->handle == handle &&
+					port->portnum == portnum) {
+			list_del(&port->port_list);
+			kfree(port);
+
+			err = 0;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return err;
+}
+
+static struct xen_usbdev *xen_usbdev_alloc(struct usb_device *udev,
+						struct xen_usbport *port)
+{
+	struct xen_usbdev *dev;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev) {
+		pr_alert(DRV_PFX "out of memory allocating xen_usbdev\n");
+		return NULL;
+	}
+	kref_init(&dev->kref);
+	dev->udev = usb_get_dev(udev);
+	dev->port = port;
+	spin_lock_init(&dev->submitting_lock);
+	INIT_LIST_HEAD(&dev->submitting_list);
+
+	return dev;
+}
+
+static void usbdev_release(struct kref *kref)
+{
+	struct xen_usbdev *dev;
+
+	dev = container_of(kref, struct xen_usbdev, kref);
+
+	usb_put_dev(dev->udev);
+	dev->udev = NULL;
+	dev->port = NULL;
+	kfree(dev);
+}
+
+static inline void usbdev_get(struct xen_usbdev *dev)
+{
+	kref_get(&dev->kref);
+}
+
+static inline void usbdev_put(struct xen_usbdev *dev)
+{
+	kref_put(&dev->kref, usbdev_release);
+}
+
+static int usbdev_probe(struct usb_interface *intf,
+			 const struct usb_device_id *id)
+{
+	struct usb_device *udev = interface_to_usbdev(intf);
+	const char *busid = dev_name(intf->dev.parent);
+	struct xen_usbport *port = NULL;
+	struct xen_usbdev *dev = NULL;
+	struct xen_usbif *usbif = NULL;
+	int retval = -ENODEV;
+
+	/* hubs are currently not supported, so skip them. */
+	if (udev->descriptor.bDeviceClass == USB_CLASS_HUB)
+		goto out;
+
+	port = xen_usbport_find_by_busid(busid);
+	if (!port)
+		goto out;
+
+	usbif = xen_usbif_find(port->domid, port->handle);
+	if (!usbif)
+		goto out;
+
+	switch (udev->speed) {
+	case USB_SPEED_LOW:
+	case USB_SPEED_FULL:
+		break;
+	case USB_SPEED_HIGH:
+		if (usbif->usb_ver >= USB_VER_USB20)
+			break;
+		/* fall through */
+	default:
+		goto out;
+	}
+
+	dev = xen_usbif_find_attached_device(usbif, port->portnum);
+	if (!dev) {
+		/* new connection */
+		dev = xen_usbdev_alloc(udev, port);
+		if (!dev) {
+			retval = -ENOMEM;
+			goto out;
+		}
+		xen_usbif_attach_device(usbif, dev);
+		xen_usbif_hotplug_notify(usbif, port->portnum, udev->speed);
+	} else {
+		/* may already be connected via another interface */
+		if (strncmp(dev->port->phys_bus, busid, XEN_USB_BUS_ID_SIZE))
+			goto out; /* invalid call */
+	}
+
+	usbdev_get(dev);
+	usb_set_intfdata(intf, dev);
+	retval = 0;
+
+out:
+	return retval;
+}
+
+static void usbdev_disconnect(struct usb_interface *intf)
+{
+	struct xen_usbdev *dev = usb_get_intfdata(intf);
+
+	usb_set_intfdata(intf, NULL);
+
+	if (!dev)
+		return;
+
+	if (dev->usbif) {
+		xen_usbif_hotplug_notify(dev->usbif, dev->port->portnum, 0);
+		xen_usbif_detach_device(dev->usbif, dev);
+	}
+	xen_usbif_unlink_urbs(dev);
+	usbdev_put(dev);
+}
+
+static ssize_t usbdev_show_ports(struct device_driver *driver, char *buf)
+{
+	struct xen_usbport *port;
+	size_t count = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if (count >= PAGE_SIZE)
+			break;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				"%s:%d:%d:%d\n",
+				&port->phys_bus[0],
+				port->domid,
+				port->handle,
+				port->portnum);
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return count;
+}
+
+static DRIVER_ATTR(port_ids, S_IRUSR, usbdev_show_ports, NULL);
+
+/* table of devices that matches any USB device */
+static const struct usb_device_id usbdev_table[] = {
+	{ .driver_info = 1 }, /* wildcard, see usb_match_id() */
+	{ } /* Terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, usbdev_table);
+
+static struct usb_driver xen_usbdev_driver = {
+	.name = "usbback",
+	.probe = usbdev_probe,
+	.disconnect = usbdev_disconnect,
+	.id_table = usbdev_table,
+	.no_dynamic_id = 1,
+};
+
+int __init xen_usbdev_init(void)
+{
+	int err;
+
+	err = usb_register(&xen_usbdev_driver);
+	if (err < 0) {
+		pr_alert(DRV_PFX "usb_register failed (error %d)\n",
+									err);
+		goto out;
+	}
+
+	err = driver_create_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_port_ids);
+	if (err)
+		usb_deregister(&xen_usbdev_driver);
+
+out:
+	return err;
+}
+
+void xen_usbdev_exit(void)
+{
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_port_ids);
+	usb_deregister(&xen_usbdev_driver);
+}
diff --git a/drivers/usb/host/xen-usbback/xenbus.c b/drivers/usb/host/xen-usbback/xenbus.c
new file mode 100644
index 0000000..5eae4ec
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/xenbus.c
@@ -0,0 +1,482 @@
+/*
+ * Xenbus interface for USB backend driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/delay.h>
+#include "common.h"
+
+static LIST_HEAD(usbif_list);
+static DEFINE_SPINLOCK(usbif_list_lock);
+
+struct xen_usbif *xen_usbif_find(domid_t domid, unsigned int handle)
+{
+	struct xen_usbif *usbif;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_for_each_entry(usbif, &usbif_list, usbif_list) {
+		if (usbif->domid == domid && usbif->handle == handle) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+
+	if (found)
+		return usbif;
+
+	return NULL;
+}
+
+struct xen_usbif *xen_usbif_alloc(domid_t domid, unsigned int handle)
+{
+	struct xen_usbif *usbif;
+	unsigned long flags;
+	int i;
+
+	usbif = kzalloc(sizeof(*usbif), GFP_KERNEL);
+	if (!usbif)
+		return NULL;
+
+	usbif->domid = domid;
+	usbif->handle = handle;
+	INIT_LIST_HEAD(&usbif->usbif_list);
+	spin_lock_init(&usbif->urb_ring_lock);
+	spin_lock_init(&usbif->conn_ring_lock);
+	atomic_set(&usbif->refcnt, 0);
+	init_waitqueue_head(&usbif->wq);
+	init_waitqueue_head(&usbif->waiting_to_free);
+	spin_lock_init(&usbif->dev_lock);
+	INIT_LIST_HEAD(&usbif->dev_list);
+	spin_lock_init(&usbif->addr_lock);
+	for (i = 0; i < XEN_USB_DEV_ADDR_SIZE; i++)
+		usbif->addr_table[i] = NULL;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_add(&usbif->usbif_list, &usbif_list);
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+
+	return usbif;
+}
+
+static int xen_usbif_map(struct xen_usbif *usbif, unsigned long urb_ring_ref,
+			unsigned long conn_ring_ref, unsigned int evtchn)
+{
+	int err = -ENOMEM;
+
+	if (usbif->irq)
+		return 0;
+
+	err = xenbus_map_ring_valloc(usbif->xbdev, urb_ring_ref,
+	    &usbif->urb_sring);
+	if (err < 0)
+		return err;
+
+	err = xenbus_map_ring_valloc(usbif->xbdev, conn_ring_ref,
+	    &usbif->conn_sring);
+	if (err < 0)
+		goto fail_alloc;
+
+	err = bind_interdomain_evtchn_to_irqhandler(usbif->domid, evtchn,
+				xen_usbif_be_int, 0, "usbif-backend", usbif);
+	if (err < 0)
+		goto fail_evtchn;
+	usbif->irq = err;
+
+	BACK_RING_INIT(&usbif->urb_ring,
+	    (struct usbif_urb_sring *)usbif->urb_sring, PAGE_SIZE);
+	BACK_RING_INIT(&usbif->conn_ring,
+	    (struct usbif_conn_sring *)usbif->conn_sring, PAGE_SIZE);
+
+	return 0;
+
+fail_evtchn:
+	xenbus_unmap_ring_vfree(usbif->xbdev, usbif->conn_sring);
+fail_alloc:
+	xenbus_unmap_ring_vfree(usbif->xbdev, usbif->urb_sring);
+
+	return err;
+}
+
+static void xen_usbif_disconnect(struct xen_usbif *usbif)
+{
+	struct xen_usbdev *dev, *tmp;
+	unsigned long flags;
+
+	if (usbif->xenusbd) {
+		kthread_stop(usbif->xenusbd);
+		usbif->xenusbd = NULL;
+	}
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_for_each_entry_safe(dev, tmp, &usbif->dev_list, dev_list) {
+		xen_usbif_unlink_urbs(dev);
+		xen_usbif_detach_device_without_lock(usbif, dev);
+	}
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+
+	wait_event(usbif->waiting_to_free, atomic_read(&usbif->refcnt) == 0);
+
+	if (usbif->irq) {
+		unbind_from_irqhandler(usbif->irq, usbif);
+		usbif->irq = 0;
+	}
+
+	if (usbif->urb_ring.sring) {
+		xenbus_unmap_ring_vfree(usbif->xbdev, usbif->urb_sring);
+		usbif->urb_ring.sring = NULL;
+		xenbus_unmap_ring_vfree(usbif->xbdev, usbif->conn_sring);
+		usbif->conn_ring.sring = NULL;
+	}
+}
+
+static void xen_usbif_free(struct xen_usbif *usbif)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_del(&usbif->usbif_list);
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+	kfree(usbif);
+}
+
+static void usbbk_changed(struct xenbus_watch *watch, const char **vec,
+				unsigned int len)
+{
+	struct xenbus_transaction xbt;
+	int err;
+	int i;
+	char node[8];
+	char *busid;
+	struct xen_usbport *port = NULL;
+
+	struct xen_usbif *usbif = container_of(watch, struct xen_usbif,
+						backend_watch);
+	struct xenbus_device *dev = usbif->xbdev;
+
+again:
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "starting transaction");
+		return;
+	}
+
+	for (i = 1; i <= usbif->num_ports; i++) {
+		snprintf(node, sizeof(node), "port/%d", i);
+		busid = xenbus_read(xbt, dev->nodename, node, NULL);
+		if (IS_ERR(busid)) {
+			err = PTR_ERR(busid);
+			xenbus_dev_fatal(dev, err, "reading port/%d", i);
+			goto abort;
+		}
+
+		/*
+		 * Remove the port if it is not connected.
+		 */
+		if (strlen(busid) == 0) {
+			port = xen_usbport_find(usbif->domid, usbif->handle, i);
+			if (port) {
+				if (port->is_connected)
+					xenbus_dev_fatal(dev, -EBUSY,
+						"can't remove port/%d, "
+						"unbind first", i);
+				else
+					xen_usbport_remove(usbif->domid,
+							usbif->handle, i);
+			}
+			continue; /* never configured, ignore */
+		}
+
+		/*
+		 * Add the port if it is not yet configured
+		 * and not used by another usbif.
+		 */
+		port = xen_usbport_find(usbif->domid, usbif->handle, i);
+		if (port) {
+			if ((strncmp(port->phys_bus, busid,
+							XEN_USB_BUS_ID_SIZE)))
+				xenbus_dev_fatal(dev, -EBUSY, "can't add port/%d, "
+						"remove first", i);
+			else
+				continue; /* already configured, ignore */
+		} else {
+			if (xen_usbport_find_by_busid(busid))
+				xenbus_dev_fatal(dev, -EBUSY, "can't add port/%d, "
+						"busid already used", i);
+			else
+				xen_usbport_add(busid, usbif->domid,
+						usbif->handle, i);
+		}
+	}
+
+	err = xenbus_transaction_end(xbt, 0);
+	if (err == -EAGAIN)
+		goto again;
+	if (err)
+		xenbus_dev_fatal(dev, err, "completing transaction");
+
+	return;
+
+abort:
+	xenbus_transaction_end(xbt, 1);
+
+	return;
+}
+
+static int usbbk_remove(struct xenbus_device *dev)
+{
+	struct xen_usbif *usbif = dev_get_drvdata(&dev->dev);
+	int i;
+
+	if (usbif) {
+		if (usbif->backend_watch.node) {
+			unregister_xenbus_watch(&usbif->backend_watch);
+			kfree(usbif->backend_watch.node);
+			usbif->backend_watch.node = NULL;
+		}
+
+		/* remove all ports */
+		for (i = 1; i <= usbif->num_ports; i++)
+			xen_usbport_remove(usbif->domid, usbif->handle, i);
+		xen_usbif_disconnect(usbif);
+		xen_usbif_free(usbif);
+	}
+	dev_set_drvdata(&dev->dev, NULL);
+
+	return 0;
+}
+
+static int usbbk_probe(struct xenbus_device *dev,
+				const struct xenbus_device_id *id)
+{
+	struct xen_usbif *usbif;
+	unsigned long handle;
+	int num_ports;
+	int usb_ver;
+	int err;
+
+	if (usb_disabled())
+		return -ENODEV;
+
+	if (kstrtoul(strrchr(dev->otherend, '/') + 1, 0, &handle))
+		return -ENOENT;
+
+	usbif = xen_usbif_alloc(dev->otherend_id, handle);
+	if (!usbif) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating backend interface");
+		return -ENOMEM;
+	}
+	usbif->xbdev = dev;
+	dev_set_drvdata(&dev->dev, usbif);
+
+	err = xenbus_scanf(XBT_NIL, dev->nodename, "num-ports",
+							"%d", &num_ports);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading num-ports");
+		goto fail;
+	}
+	if (num_ports < 1 || num_ports > USB_MAXCHILDREN) {
+		xenbus_dev_fatal(dev, err, "invalid num-ports");
+		goto fail;
+	}
+	usbif->num_ports = num_ports;
+
+	err = xenbus_scanf(XBT_NIL, dev->nodename, "usb-ver", "%d", &usb_ver);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading usb-ver");
+		goto fail;
+	}
+	switch (usb_ver) {
+	case USB_VER_USB11:
+	case USB_VER_USB20:
+		usbif->usb_ver = usb_ver;
+		break;
+	default:
+		xenbus_dev_fatal(dev, err, "invalid usb-ver");
+		goto fail;
+	}
+
+	err = xenbus_switch_state(dev, XenbusStateInitWait);
+	if (err)
+		goto fail;
+
+	return 0;
+
+fail:
+	usbbk_remove(dev);
+	return err;
+}
+
+static int connect_rings(struct xen_usbif *usbif)
+{
+	struct xenbus_device *dev = usbif->xbdev;
+	unsigned long urb_ring_ref;
+	unsigned long conn_ring_ref;
+	unsigned int evtchn;
+	int err;
+
+	err = xenbus_gather(XBT_NIL, dev->otherend,
+			    "urb-ring-ref", "%lu", &urb_ring_ref,
+			    "conn-ring-ref", "%lu", &conn_ring_ref,
+			    "event-channel", "%u", &evtchn, NULL);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "reading %s/ring-refs and event-channel",
+				 dev->otherend);
+		return err;
+	}
+
+	pr_info(DRV_PFX "urb-ring-ref %ld, conn-ring-ref %ld, "
+	    "event-channel %d\n", urb_ring_ref, conn_ring_ref, evtchn);
+
+	err = xen_usbif_map(usbif, urb_ring_ref, conn_ring_ref, evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "mapping urb-ring-ref %lu "
+					"conn-ring-ref %lu evtchn %u",
+					urb_ring_ref, conn_ring_ref, evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int start_xenusbd(struct xen_usbif *usbif)
+{
+	int err = 0;
+	char name[TASK_COMM_LEN];
+
+	snprintf(name, TASK_COMM_LEN, "usbback.%d.%d", usbif->domid,
+			usbif->handle);
+	usbif->xenusbd = kthread_run(xen_usbif_schedule, usbif, "%s", name);
+	if (IS_ERR(usbif->xenusbd)) {
+		err = PTR_ERR(usbif->xenusbd);
+		usbif->xenusbd = NULL;
+		xenbus_dev_error(usbif->xbdev, err, "start xenusbd");
+	}
+
+	return err;
+}
+
+static void frontend_changed(struct xenbus_device *dev,
+			     enum xenbus_state frontend_state)
+{
+	struct xen_usbif *usbif = dev_get_drvdata(&dev->dev);
+	int err;
+
+	switch (frontend_state) {
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+		break;
+
+	case XenbusStateInitialising:
+		if (dev->state == XenbusStateClosed) {
+			pr_info(DRV_PFX "%s: %s: prepare for reconnect\n",
+			       __func__, dev->nodename);
+			xenbus_switch_state(dev, XenbusStateInitWait);
+		}
+		break;
+
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+		if (dev->state == XenbusStateConnected)
+			break;
+
+		xen_usbif_disconnect(usbif);
+
+		err = connect_rings(usbif);
+		if (err)
+			break;
+		err = start_xenusbd(usbif);
+		if (err)
+			break;
+		err = xenbus_watch_pathfmt(dev, &usbif->backend_watch,
+		    usbbk_changed, "%s/%s", dev->nodename, "port");
+		if (err)
+			break;
+		xenbus_switch_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		xenbus_switch_state(dev, XenbusStateClosing);
+		break;
+
+	case XenbusStateClosed:
+		xen_usbif_disconnect(usbif);
+		xenbus_switch_state(dev, XenbusStateClosed);
+		if (xenbus_dev_is_online(dev))
+			break;
+		/* fall through if not online */
+	case XenbusStateUnknown:
+		device_unregister(&dev->dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+				 frontend_state);
+		break;
+	}
+}
+
+
+/* ** Driver Registration ** */
+
+static const struct xenbus_device_id usbback_ids[] = {
+	{ "vusb" },
+	{ "" },
+};
+
+static DEFINE_XENBUS_DRIVER(usbback, ,
+	.probe = usbbk_probe,
+	.remove = usbbk_remove,
+	.otherend_changed = frontend_changed,
+);
+
+int __init xen_usbif_xenbus_init(void)
+{
+	return xenbus_register_backend(&usbback_driver);
+}
+
+void __exit xen_usbif_xenbus_exit(void)
+{
+	xenbus_unregister_driver(&usbback_driver);
+}
diff --git a/drivers/usb/host/xen-usbfront.c b/drivers/usb/host/xen-usbfront.c
new file mode 100644
index 0000000..e632de3
--- /dev/null
+++ b/drivers/usb/host/xen-usbfront.c
@@ -0,0 +1,1739 @@
+/*
+ * xen-usbfront.c
+ *
+ * This file is part of Xen USB Virtual Host Controller driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/usb.h>
+#include <linux/usb/hcd.h>
+#include <linux/list.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/io.h>
+#include <xen/xenbus.h>
+#include <xen/events.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/io/usbif.h>
+
+static inline struct usbfront_info *hcd_to_info(struct usb_hcd *hcd)
+{
+	return (struct usbfront_info *) (hcd->hcd_priv);
+}
+
+static inline struct usb_hcd *info_to_hcd(struct usbfront_info *info)
+{
+	return container_of((void *) info, struct usb_hcd, hcd_priv);
+}
+
+/* Private per-URB data */
+struct urb_priv {
+	struct list_head list;
+	struct urb *urb;
+	int req_id;		/* RING_REQUEST id for submitting */
+	int unlink_req_id;	/* RING_REQUEST id for unlinking */
+	int status;
+	unsigned unlinked:1;	/* dequeued marker */
+};
+
+/* virtual roothub port status */
+struct rhport_status {
+	u32 status;
+	unsigned resuming:1;		/* in resuming */
+	unsigned c_connection:1;	/* connection changed */
+	unsigned long timeout;
+};
+
+/* status of attached device */
+struct vdevice_status {
+	int devnum;
+	enum usb_device_state status;
+	enum usb_device_speed speed;
+};
+
+/* RING request shadow */
+struct usb_shadow {
+	struct usbif_urb_request req;
+	struct urb *urb;
+};
+
+/* statistics for tuning, monitoring, ... */
+struct xenhcd_stats {
+	unsigned long ring_full;	/* RING_FULL conditions */
+	unsigned long complete;		/* normal giveback urbs */
+	unsigned long unlink;		/* unlinked urbs */
+};
+
+struct usbfront_info {
+	/* Virtual Host Controller has 4 urb queues */
+	struct list_head pending_submit_list;
+	struct list_head pending_unlink_list;
+	struct list_head in_progress_list;
+	struct list_head giveback_waiting_list;
+
+	spinlock_t lock;
+
+	/* timer that kicks pending and giveback-waiting urbs */
+	struct timer_list watchdog;
+	unsigned long actions;
+
+	/* virtual root hub */
+	int rh_numports;
+	struct rhport_status ports[USB_MAXCHILDREN];
+	struct vdevice_status devices[USB_MAXCHILDREN];
+
+	/* Xen related stuff */
+	struct xenbus_device *xbdev;
+	int urb_ring_ref;
+	int conn_ring_ref;
+	struct usbif_urb_front_ring urb_ring;
+	struct usbif_conn_front_ring conn_ring;
+
+	unsigned int evtchn, irq; /* event channel and its bound irq */
+	struct usb_shadow shadow[USB_URB_RING_SIZE];
+	unsigned long shadow_free;
+
+	/* RING_RESPONSE thread */
+	struct task_struct *kthread;
+	wait_queue_head_t wq;
+	unsigned int waiting_resp;
+
+	/* xmit statistics */
+#ifdef XENHCD_STATS
+	struct xenhcd_stats stats;
+#define COUNT(x) do { (x)++; } while (0)
+#else
+#define COUNT(x) do {} while (0)
+#endif
+};
+
+#define XENHCD_RING_JIFFIES (HZ/200)
+#define XENHCD_SCAN_JIFFIES 1
+
+enum xenhcd_timer_action {
+	TIMER_RING_WATCHDOG,
+	TIMER_SCAN_PENDING_URBS,
+};
+
+static inline void
+timer_action_done(struct usbfront_info *info, enum xenhcd_timer_action action)
+{
+	clear_bit(action, &info->actions);
+}
+
+static inline void
+timer_action(struct usbfront_info *info, enum xenhcd_timer_action action)
+{
+	if (timer_pending(&info->watchdog) &&
+	    test_bit(TIMER_SCAN_PENDING_URBS, &info->actions))
+		return;
+
+	if (!test_and_set_bit(action, &info->actions)) {
+		unsigned long t;
+
+		switch (action) {
+		case TIMER_RING_WATCHDOG:
+			t = XENHCD_RING_JIFFIES;
+			break;
+		default:
+			t = XENHCD_SCAN_JIFFIES;
+			break;
+		}
+		mod_timer(&info->watchdog, t + jiffies);
+	}
+}
+
+struct kmem_cache *xenhcd_urbp_cachep;
+struct hc_driver xen_usb20_hc_driver;
+struct hc_driver xen_usb11_hc_driver;
+
+static ssize_t show_statistics(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct usb_hcd *hcd;
+	struct usbfront_info *info;
+	unsigned long flags;
+	unsigned temp, size;
+	char *next;
+
+	hcd = dev_get_drvdata(dev);
+	info = hcd_to_info(hcd);
+	next = buf;
+	size = PAGE_SIZE;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	temp = scnprintf(next, size,
+			"bus %s, device %s\n"
+			"%s\n"
+			"xenhcd, hcd state %d\n",
+			hcd->self.controller->bus->name,
+			dev_name(hcd->self.controller),
+			hcd->product_desc,
+			hcd->state);
+	size -= temp;
+	next += temp;
+
+#ifdef XENHCD_STATS
+	temp = scnprintf(next, size,
+			"complete %ld unlink %ld ring_full %ld\n",
+			info->stats.complete, info->stats.unlink,
+			info->stats.ring_full);
+	size -= temp;
+	next += temp;
+#endif
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	return PAGE_SIZE - size;
+}
+
+static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
+
+static inline void create_debug_file(struct usbfront_info *info)
+{
+	struct device *dev = info_to_hcd(info)->self.controller;
+	if (device_create_file(dev, &dev_attr_statistics))
+		printk(KERN_WARNING "statistics file not created for %s\n",
+					info_to_hcd(info)->self.bus_name);
+}
+
+static inline void remove_debug_file(struct usbfront_info *info)
+{
+	struct device *dev = info_to_hcd(info)->self.controller;
+	device_remove_file(dev, &dev_attr_statistics);
+}
+
+/*
+ * set virtual port connection status
+ */
+void set_connect_state(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_POWER) {
+		switch (info->devices[port].speed) {
+		case USB_SPEED_UNKNOWN:
+			info->ports[port].status &=
+					~(USB_PORT_STAT_CONNECTION |
+						USB_PORT_STAT_ENABLE |
+						USB_PORT_STAT_LOW_SPEED |
+						USB_PORT_STAT_HIGH_SPEED |
+						USB_PORT_STAT_SUSPEND);
+			break;
+		case USB_SPEED_LOW:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			info->ports[port].status |= USB_PORT_STAT_LOW_SPEED;
+			break;
+		case USB_SPEED_FULL:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			break;
+		case USB_SPEED_HIGH:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			info->ports[port].status |= USB_PORT_STAT_HIGH_SPEED;
+			break;
+		default: /* error */
+			return;
+		}
+		info->ports[port].status |= (USB_PORT_STAT_C_CONNECTION << 16);
+	}
+}
+
+/*
+ * set virtual device connection status
+ */
+void rhport_connect(struct usbfront_info *info, int portnum,
+			enum usb_device_speed speed)
+{
+	int port;
+
+	if (portnum < 1 || portnum > info->rh_numports)
+		return; /* invalid port number */
+
+	port = portnum - 1;
+	if (info->devices[port].speed != speed) {
+		switch (speed) {
+		case USB_SPEED_UNKNOWN: /* disconnect */
+			info->devices[port].status = USB_STATE_NOTATTACHED;
+			break;
+		case USB_SPEED_LOW:
+		case USB_SPEED_FULL:
+		case USB_SPEED_HIGH:
+			info->devices[port].status = USB_STATE_ATTACHED;
+			break;
+		default: /* error */
+			return;
+		}
+		info->devices[port].speed = speed;
+		info->ports[port].c_connection = 1;
+
+		set_connect_state(info, portnum);
+	}
+}
+
+/*
+ * SetPortFeature(PORT_SUSPENDED)
+ */
+void rhport_suspend(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status |= USB_PORT_STAT_SUSPEND;
+	info->devices[port].status = USB_STATE_SUSPENDED;
+}
+
+/*
+ * ClearPortFeature(PORT_SUSPENDED)
+ */
+void rhport_resume(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_SUSPEND) {
+		info->ports[port].resuming = 1;
+		info->ports[port].timeout = jiffies + msecs_to_jiffies(20);
+	}
+}
+
+/*
+ * SetPortFeature(PORT_POWER)
+ */
+void rhport_power_on(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if ((info->ports[port].status & USB_PORT_STAT_POWER) == 0) {
+		info->ports[port].status |= USB_PORT_STAT_POWER;
+		if (info->devices[port].status != USB_STATE_NOTATTACHED)
+			info->devices[port].status = USB_STATE_POWERED;
+		if (info->ports[port].c_connection)
+			set_connect_state(info, portnum);
+	}
+}
+
+/*
+ * ClearPortFeature(PORT_POWER)
+ * SetConfiguration(non-zero)
+ * Power_Source_Off
+ * Over-current
+ */
+void rhport_power_off(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_POWER) {
+		info->ports[port].status = 0;
+		if (info->devices[port].status != USB_STATE_NOTATTACHED)
+			info->devices[port].status = USB_STATE_ATTACHED;
+	}
+}
+
+/*
+ * ClearPortFeature(PORT_ENABLE)
+ */
+void rhport_disable(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status &= ~USB_PORT_STAT_ENABLE;
+	info->ports[port].status &= ~USB_PORT_STAT_SUSPEND;
+	info->ports[port].resuming = 0;
+	if (info->devices[port].status != USB_STATE_NOTATTACHED)
+		info->devices[port].status = USB_STATE_POWERED;
+}
+
+/*
+ * SetPortFeature(PORT_RESET)
+ */
+void rhport_reset(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status &= ~(USB_PORT_STAT_ENABLE
+					| USB_PORT_STAT_LOW_SPEED
+					| USB_PORT_STAT_HIGH_SPEED);
+	info->ports[port].status |= USB_PORT_STAT_RESET;
+
+	if (info->devices[port].status != USB_STATE_NOTATTACHED)
+		info->devices[port].status = USB_STATE_ATTACHED;
+
+	/* 10msec reset signaling */
+	info->ports[port].timeout = jiffies + msecs_to_jiffies(10);
+}
+
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+static int xenhcd_bus_suspend(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ret = 0;
+	int i, ports;
+
+	ports = info->rh_numports;
+
+	spin_lock_irq(&info->lock);
+	if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags))
+		ret = -ESHUTDOWN;
+	else {
+		/* suspend any active ports */
+		for (i = 1; i <= ports; i++)
+			rhport_suspend(info, i);
+	}
+	spin_unlock_irq(&info->lock);
+
+	del_timer_sync(&info->watchdog);
+
+	return ret;
+}
+
+static int xenhcd_bus_resume(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ret = 0;
+	int i, ports;
+
+	ports = info->rh_numports;
+
+	spin_lock_irq(&info->lock);
+	if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags))
+		ret = -ESHUTDOWN;
+	else {
+		/* resume any suspended ports */
+		for (i = 1; i <= ports; i++)
+			rhport_resume(info, i);
+	}
+	spin_unlock_irq(&info->lock);
+
+	return ret;
+}
+#endif
+#endif
+
+static void xenhcd_hub_descriptor(struct usbfront_info *info,
+					struct usb_hub_descriptor *desc)
+{
+	u16 temp;
+	int ports = info->rh_numports;
+
+	desc->bDescriptorType = 0x29;
+	desc->bPwrOn2PwrGood = 10; /* EHCI says 20ms max */
+	desc->bHubContrCurrent = 0;
+	desc->bNbrPorts = ports;
+
+	/* size of DeviceRemovable and PortPwrCtrlMask fields */
+	temp = 1 + (ports / 8);
+	desc->bDescLength = 7 + 2 * temp;
+
+	/* bitmaps for DeviceRemovable and PortPwrCtrlMask */
+	memset(&desc->u.hs.DeviceRemovable[0], 0, temp);
+	memset(&desc->u.hs.DeviceRemovable[temp], 0xff, temp);
+
+	/* per-port over current reporting and no power switching */
+	temp = 0x000a;
+	desc->wHubCharacteristics = cpu_to_le16(temp);
+}
+
+/* port status change mask for hub_status_data */
+#define PORT_C_MASK \
+	((USB_PORT_STAT_C_CONNECTION \
+	| USB_PORT_STAT_C_ENABLE \
+	| USB_PORT_STAT_C_SUSPEND \
+	| USB_PORT_STAT_C_OVERCURRENT \
+	| USB_PORT_STAT_C_RESET) << 16)
+
+/*
+ * See USB 2.0 spec, section 11.12.4, Hub and Port Status Change Bitmap.
+ * If any port status has changed, write the bitmap to buf and return
+ * its length in bytes.
+ * If nothing has changed, return 0.
+ */
+static int xenhcd_hub_status_data(struct usb_hcd *hcd, char *buf)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	int ports;
+	int i;
+	int length;
+
+	unsigned long flags;
+	int ret = 0;
+
+	int changed = 0;
+
+	if (!HC_IS_RUNNING(hcd->state))
+		return 0;
+
+	/* initialize the status to no-changes */
+	ports = info->rh_numports;
+	length = 1 + (ports / 8);
+	for (i = 0; i < length; i++) {
+		buf[i] = 0;
+		ret++;
+	}
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	for (i = 0; i < ports; i++) {
+		/* check status for each port */
+		if (info->ports[i].status & PORT_C_MASK) {
+			if (i < 7)
+				buf[0] |= 1 << (i + 1);
+			else if (i < 15)
+				buf[1] |= 1 << (i - 7);
+			else if (i < 23)
+				buf[2] |= 1 << (i - 15);
+			else
+				buf[3] |= 1 << (i - 23);
+			changed = 1;
+		}
+	}
+
+	if (!changed)
+		ret = 0;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	return ret;
+}
+
+static int xenhcd_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+				u16 wIndex, char *buf, u16 wLength)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ports = info->rh_numports;
+	unsigned long flags;
+	int ret = 0;
+	int i;
+	int changed = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+	switch (typeReq) {
+	case ClearHubFeature:
+		/* ignore this request */
+		break;
+	case ClearPortFeature:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		switch (wValue) {
+		case USB_PORT_FEAT_SUSPEND:
+			rhport_resume(info, wIndex);
+			break;
+		case USB_PORT_FEAT_POWER:
+			rhport_power_off(info, wIndex);
+			break;
+		case USB_PORT_FEAT_ENABLE:
+			rhport_disable(info, wIndex);
+			break;
+		case USB_PORT_FEAT_C_CONNECTION:
+			info->ports[wIndex-1].c_connection = 0;
+			/* fall through */
+		default:
+			info->ports[wIndex-1].status &= ~(1 << wValue);
+			break;
+		}
+		break;
+	case GetHubDescriptor:
+		xenhcd_hub_descriptor(info, (struct usb_hub_descriptor *) buf);
+		break;
+	case GetHubStatus:
+		/* the local power supply is always good and no over-current exists */
+		*(__le32 *)buf = cpu_to_le32(0);
+		break;
+	case GetPortStatus:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		wIndex--;
+
+		/* resume completion */
+		if (info->ports[wIndex].resuming &&
+		    time_after_eq(jiffies, info->ports[wIndex].timeout)) {
+			info->ports[wIndex].status |=
+						(USB_PORT_STAT_C_SUSPEND << 16);
+			info->ports[wIndex].status &= ~USB_PORT_STAT_SUSPEND;
+		}
+
+		/* reset completion */
+		if ((info->ports[wIndex].status & USB_PORT_STAT_RESET) != 0 &&
+		    time_after_eq(jiffies, info->ports[wIndex].timeout)) {
+			info->ports[wIndex].status |=
+						(USB_PORT_STAT_C_RESET << 16);
+			info->ports[wIndex].status &= ~USB_PORT_STAT_RESET;
+
+			if (info->devices[wIndex].status !=
+							USB_STATE_NOTATTACHED) {
+				info->ports[wIndex].status |=
+							USB_PORT_STAT_ENABLE;
+				info->devices[wIndex].status =
+							USB_STATE_DEFAULT;
+			}
+
+			switch (info->devices[wIndex].speed) {
+			case USB_SPEED_LOW:
+				info->ports[wIndex].status |=
+						USB_PORT_STAT_LOW_SPEED;
+				break;
+			case USB_SPEED_HIGH:
+				info->ports[wIndex].status |=
+						USB_PORT_STAT_HIGH_SPEED;
+				break;
+			default:
+				break;
+			}
+		}
+
+		((__le16 *) buf)[0] = cpu_to_le16(info->ports[wIndex].status);
+		((__le16 *) buf)[1] = cpu_to_le16(info->ports[wIndex].status
+									>> 16);
+		break;
+	case SetHubFeature:
+		/* not supported */
+		goto error;
+	case SetPortFeature:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		switch (wValue) {
+		case USB_PORT_FEAT_POWER:
+			rhport_power_on(info, wIndex);
+			break;
+		case USB_PORT_FEAT_RESET:
+			rhport_reset(info, wIndex);
+			break;
+		case USB_PORT_FEAT_SUSPEND:
+			rhport_suspend(info, wIndex);
+			break;
+		default:
+			if ((info->ports[wIndex-1].status &
+						USB_PORT_STAT_POWER) != 0)
+				info->ports[wIndex-1].status |= (1 << wValue);
+		}
+		break;
+
+	default:
+error:
+		ret = -EPIPE;
+	}
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	/* check status for each port */
+	for (i = 0; i < ports; i++) {
+		if (info->ports[i].status & PORT_C_MASK)
+			changed = 1;
+	}
+	if (changed)
+		usb_hcd_poll_rh_status(hcd);
+
+	return ret;
+}
+
+struct kmem_cache *xenhcd_urbp_cachep;
+
+static struct urb_priv *alloc_urb_priv(struct urb *urb)
+{
+	struct urb_priv *urbp;
+
+	urbp = kmem_cache_zalloc(xenhcd_urbp_cachep, GFP_ATOMIC);
+	if (!urbp)
+		return NULL;
+
+	urbp->urb = urb;
+	urb->hcpriv = urbp;
+	urbp->req_id = ~0;
+	urbp->unlink_req_id = ~0;
+	INIT_LIST_HEAD(&urbp->list);
+
+	return urbp;
+}
+
+static void free_urb_priv(struct urb_priv *urbp)
+{
+	urbp->urb->hcpriv = NULL;
+	kmem_cache_free(xenhcd_urbp_cachep, urbp);
+}
+
+static inline int get_id_from_freelist(struct usbfront_info *info)
+{
+	unsigned long free;
+	free = info->shadow_free;
+	BUG_ON(free >= USB_URB_RING_SIZE);
+	info->shadow_free = info->shadow[free].req.id;
+	info->shadow[free].req.id = (unsigned int)0x0fff; /* debug */
+	return free;
+}
+
+static inline void add_id_to_freelist(struct usbfront_info *info,
+							unsigned long id)
+{
+	info->shadow[id].req.id  = info->shadow_free;
+	info->shadow[id].urb = NULL;
+	info->shadow_free = id;
+}
+
+static inline int count_pages(void *addr, int length)
+{
+	unsigned long start = (unsigned long) addr >> PAGE_SHIFT;
+	unsigned long end = (unsigned long)
+				(addr + length + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	return end - start;
+}
+
+static inline void xenhcd_gnttab_map(struct usbfront_info *info, void *addr,
+					int length, grant_ref_t *gref_head,
+					struct usbif_request_segment *seg,
+					int nr_pages, int flags)
+{
+	grant_ref_t ref;
+	unsigned long mfn;
+	unsigned int offset;
+	unsigned int len;
+	unsigned int bytes;
+	int i;
+
+	len = length;
+
+	for (i = 0; i < nr_pages; i++) {
+		BUG_ON(!len);
+
+		mfn = virt_to_mfn(addr);
+		offset = offset_in_page(addr);
+
+		bytes = PAGE_SIZE - offset;
+		if (bytes > len)
+			bytes = len;
+
+		ref = gnttab_claim_grant_reference(gref_head);
+		BUG_ON(ref == -ENOSPC);
+		gnttab_grant_foreign_access_ref(ref, info->xbdev->otherend_id,
+						mfn, flags);
+		seg[i].gref = ref;
+		seg[i].offset = (uint16_t)offset;
+		seg[i].length = (uint16_t)bytes;
+
+		addr += bytes;
+		len -= bytes;
+	}
+}
+
+static int map_urb_for_request(struct usbfront_info *info, struct urb *urb,
+				struct usbif_urb_request *req)
+{
+	grant_ref_t gref_head;
+	int nr_buff_pages = 0;
+	int nr_isodesc_pages = 0;
+	int ret = 0;
+
+	if (urb->transfer_buffer_length) {
+		nr_buff_pages = count_pages(urb->transfer_buffer,
+						urb->transfer_buffer_length);
+
+		if (usb_pipeisoc(urb->pipe))
+			nr_isodesc_pages = count_pages(&urb->iso_frame_desc[0],
+				sizeof(struct usb_iso_packet_descriptor) *
+							urb->number_of_packets);
+
+		if (nr_buff_pages + nr_isodesc_pages >
+						USBIF_MAX_SEGMENTS_PER_REQUEST)
+			return -E2BIG;
+
+		ret = gnttab_alloc_grant_references(
+				USBIF_MAX_SEGMENTS_PER_REQUEST, &gref_head);
+		if (ret) {
+			printk(KERN_ERR "usbfront: "
+				"gnttab_alloc_grant_references() error\n");
+			return -ENOMEM;
+		}
+
+		xenhcd_gnttab_map(info, urb->transfer_buffer,
+				urb->transfer_buffer_length, &gref_head,
+				&req->seg[0], nr_buff_pages,
+				usb_pipein(urb->pipe) ? 0 : GTF_readonly);
+
+		if (!usb_pipeisoc(urb->pipe))
+			gnttab_free_grant_references(gref_head);
+	}
+
+	req->pipe = usbif_setportnum_pipe(urb->pipe, urb->dev->portnum);
+	req->transfer_flags = urb->transfer_flags;
+	req->buffer_length = urb->transfer_buffer_length;
+	req->nr_buffer_segs = nr_buff_pages;
+
+	switch (usb_pipetype(urb->pipe)) {
+	case PIPE_ISOCHRONOUS:
+		req->u.isoc.interval = urb->interval;
+		req->u.isoc.start_frame = urb->start_frame;
+		req->u.isoc.number_of_packets = urb->number_of_packets;
+		req->u.isoc.nr_frame_desc_segs = nr_isodesc_pages;
+		/* urb->number_of_packets must be > 0 */
+		BUG_ON(urb->number_of_packets <= 0);
+		xenhcd_gnttab_map(info, &urb->iso_frame_desc[0],
+				sizeof(struct usb_iso_packet_descriptor) *
+					urb->number_of_packets, &gref_head,
+				&req->seg[nr_buff_pages], nr_isodesc_pages, 0);
+		gnttab_free_grant_references(gref_head);
+		break;
+	case PIPE_INTERRUPT:
+		req->u.intr.interval = urb->interval;
+		break;
+	case PIPE_CONTROL:
+		if (urb->setup_packet)
+			memcpy(req->u.ctrl, urb->setup_packet, 8);
+		break;
+	case PIPE_BULK:
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static void xenhcd_gnttab_done(struct usb_shadow *shadow)
+{
+	int nr_segs = 0;
+	int i;
+
+	nr_segs = shadow->req.nr_buffer_segs;
+
+	if (usb_pipeisoc(shadow->req.pipe))
+		nr_segs += shadow->req.u.isoc.nr_frame_desc_segs;
+
+	for (i = 0; i < nr_segs; i++)
+		gnttab_end_foreign_access(shadow->req.seg[i].gref, 0, 0UL);
+
+	shadow->req.nr_buffer_segs = 0;
+	shadow->req.u.isoc.nr_frame_desc_segs = 0;
+}
+
+static void xenhcd_giveback_urb(struct usbfront_info *info, struct urb *urb,
+								int status)
+__releases(info->lock)
+__acquires(info->lock)
+{
+	struct urb_priv *urbp = (struct urb_priv *) urb->hcpriv;
+	int priv_status = urbp->status;
+
+	list_del_init(&urbp->list);
+	free_urb_priv(urbp);
+	switch (urb->status) {
+	case -ECONNRESET:
+	case -ENOENT:
+		COUNT(info->stats.unlink);
+		break;
+	case -EINPROGRESS:
+		urb->status = status;
+		/* fall through */
+	default:
+		COUNT(info->stats.complete);
+	}
+	spin_unlock(&info->lock);
+	/* urbp was freed above, so use the saved status here */
+	usb_hcd_giveback_urb(info_to_hcd(info), urb,
+				priv_status <= 0 ? priv_status : urb->status);
+	spin_lock(&info->lock);
+}
+
+static inline int xenhcd_do_request(struct usbfront_info *info,
+					struct urb_priv *urbp)
+{
+	struct usbif_urb_request *req;
+	struct urb *urb = urbp->urb;
+	uint16_t id;
+	int notify;
+	int ret = 0;
+
+	req = RING_GET_REQUEST(&info->urb_ring, info->urb_ring.req_prod_pvt);
+	id = get_id_from_freelist(info);
+	req->id = id;
+
+	if (unlikely(urbp->unlinked)) {
+		req->u.unlink.unlink_id = urbp->req_id;
+		req->pipe = usbif_setunlink_pipe(usbif_setportnum_pipe(
+						urb->pipe, urb->dev->portnum));
+		urbp->unlink_req_id = id;
+	} else {
+		ret = map_urb_for_request(info, urb, req);
+		if (ret < 0) {
+			add_id_to_freelist(info, id);
+			return ret;
+		}
+		urbp->req_id = id;
+	}
+
+	info->urb_ring.req_prod_pvt++;
+	info->shadow[id].urb = urb;
+	info->shadow[id].req = *req;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->urb_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	return ret;
+}
+
+static void xenhcd_kick_pending_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp;
+	int ret;
+
+	while (!list_empty(&info->pending_submit_list)) {
+		if (RING_FULL(&info->urb_ring)) {
+			COUNT(info->stats.ring_full);
+			timer_action(info, TIMER_RING_WATCHDOG);
+			goto done;
+		}
+
+		urbp = list_entry(info->pending_submit_list.next,
+							struct urb_priv, list);
+		ret = xenhcd_do_request(info, urbp);
+		if (ret == 0)
+			list_move_tail(&urbp->list, &info->in_progress_list);
+		else
+			xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN);
+	}
+	timer_action_done(info, TIMER_SCAN_PENDING_URBS);
+
+done:
+	return;
+}
+
+/*
+ * caller must hold info->lock
+ */
+static void xenhcd_cancel_all_enqueued_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp, *tmp;
+
+	list_for_each_entry_safe(urbp, tmp, &info->in_progress_list, list) {
+		uint16_t req_id = urbp->req_id;
+
+		if (!urbp->unlinked) {
+			xenhcd_gnttab_done(&info->shadow[req_id]);
+			barrier();
+			if (urbp->urb->status == -EINPROGRESS)
+				/* not dequeued */
+				xenhcd_giveback_urb(info, urbp->urb,
+								-ESHUTDOWN);
+			else /* dequeued */
+				xenhcd_giveback_urb(info, urbp->urb,
+							urbp->urb->status);
+		}
+		/* urbp is freed by the giveback, so use the saved id */
+		info->shadow[req_id].urb = NULL;
+	}
+
+	list_for_each_entry_safe(urbp, tmp, &info->pending_submit_list, list) {
+		xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN);
+	}
+
+	return;
+}
+
+/*
+ * caller must hold info->lock
+ */
+static void xenhcd_giveback_unlinked_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp, *tmp;
+
+	list_for_each_entry_safe(urbp, tmp,
+					&info->giveback_waiting_list, list) {
+		xenhcd_giveback_urb(info, urbp->urb, urbp->urb->status);
+	}
+}
+
+static int xenhcd_submit_urb(struct usbfront_info *info, struct urb_priv *urbp)
+{
+	int ret = 0;
+
+	if (RING_FULL(&info->urb_ring)) {
+		list_add_tail(&urbp->list, &info->pending_submit_list);
+		COUNT(info->stats.ring_full);
+		timer_action(info, TIMER_RING_WATCHDOG);
+		goto done;
+	}
+
+	if (!list_empty(&info->pending_submit_list)) {
+		list_add_tail(&urbp->list, &info->pending_submit_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	ret = xenhcd_do_request(info, urbp);
+	if (ret == 0)
+		list_add_tail(&urbp->list, &info->in_progress_list);
+
+done:
+	return ret;
+}
+
+static int xenhcd_unlink_urb(struct usbfront_info *info, struct urb_priv *urbp)
+{
+	int ret = 0;
+
+	/* already unlinked? */
+	if (urbp->unlinked)
+		return -EBUSY;
+
+	urbp->unlinked = 1;
+
+	/* the urb is still in pending_submit queue */
+	if (urbp->req_id == ~0) {
+		list_move_tail(&urbp->list, &info->giveback_waiting_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	/* send unlink request to backend */
+	if (RING_FULL(&info->urb_ring)) {
+		list_move_tail(&urbp->list, &info->pending_unlink_list);
+		COUNT(info->stats.ring_full);
+		timer_action(info, TIMER_RING_WATCHDOG);
+		goto done;
+	}
+
+	if (!list_empty(&info->pending_unlink_list)) {
+		list_move_tail(&urbp->list, &info->pending_unlink_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	ret = xenhcd_do_request(info, urbp);
+	if (ret == 0)
+		list_move_tail(&urbp->list, &info->in_progress_list);
+
+done:
+	return ret;
+}
+
+static int xenhcd_urb_request_done(struct usbfront_info *info)
+{
+	struct usbif_urb_response *res;
+	struct urb *urb;
+
+	RING_IDX i, rp;
+	uint16_t id;
+	int more_to_do = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	rp = info->urb_ring.sring->rsp_prod;
+	rmb(); /* ensure we see queued responses up to "rp" */
+
+	for (i = info->urb_ring.rsp_cons; i != rp; i++) {
+		res = RING_GET_RESPONSE(&info->urb_ring, i);
+		id = res->id;
+
+		if (likely(usbif_pipesubmit(info->shadow[id].req.pipe))) {
+			xenhcd_gnttab_done(&info->shadow[id]);
+			urb = info->shadow[id].urb;
+			barrier();
+			if (likely(urb)) {
+				urb->actual_length = res->actual_length;
+				urb->error_count = res->error_count;
+				urb->start_frame = res->start_frame;
+				barrier();
+				xenhcd_giveback_urb(info, urb, res->status);
+			}
+		}
+
+		add_id_to_freelist(info, id);
+	}
+	info->urb_ring.rsp_cons = i;
+
+	if (i != info->urb_ring.req_prod_pvt)
+		RING_FINAL_CHECK_FOR_RESPONSES(&info->urb_ring, more_to_do);
+	else
+		info->urb_ring.sring->rsp_event = i + 1;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+static int xenhcd_conn_notify(struct usbfront_info *info)
+{
+	struct usbif_conn_response *res;
+	struct usbif_conn_request *req;
+	RING_IDX rc, rp;
+	uint16_t id;
+	uint8_t portnum, speed;
+	int more_to_do = 0;
+	int notify;
+	int port_changed = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	rc = info->conn_ring.rsp_cons;
+	rp = info->conn_ring.sring->rsp_prod;
+	rmb(); /* ensure we see queued responses up to "rp" */
+
+	while (rc != rp) {
+		res = RING_GET_RESPONSE(&info->conn_ring, rc);
+		id = res->id;
+		portnum = res->portnum;
+		speed = res->speed;
+		info->conn_ring.rsp_cons = ++rc;
+
+		rhport_connect(info, portnum, speed);
+		if (info->ports[portnum-1].c_connection)
+			port_changed = 1;
+
+		barrier();
+
+		req = RING_GET_REQUEST(&info->conn_ring,
+					info->conn_ring.req_prod_pvt);
+		req->id = id;
+		info->conn_ring.req_prod_pvt++;
+	}
+
+	if (rc != info->conn_ring.req_prod_pvt)
+		RING_FINAL_CHECK_FOR_RESPONSES(&info->conn_ring, more_to_do);
+	else
+		info->conn_ring.sring->rsp_event = rc + 1;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	if (port_changed)
+		usb_hcd_poll_rh_status(info_to_hcd(info));
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+int xenhcd_schedule(void *arg)
+{
+	struct usbfront_info *info = (struct usbfront_info *) arg;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(info->wq,
+				info->waiting_resp || kthread_should_stop());
+		info->waiting_resp = 0;
+		smp_mb();
+
+		if (xenhcd_urb_request_done(info))
+			info->waiting_resp = 1;
+
+		if (xenhcd_conn_notify(info))
+			info->waiting_resp = 1;
+	}
+
+	return 0;
+}
+
+static void xenhcd_notify_work(struct usbfront_info *info)
+{
+	info->waiting_resp = 1;
+	wake_up(&info->wq);
+}
+
+irqreturn_t xenhcd_int(int irq, void *dev_id)
+{
+	xenhcd_notify_work((struct usbfront_info *) dev_id);
+	return IRQ_HANDLED;
+}
+
+static void xenhcd_watchdog(unsigned long param)
+{
+	struct usbfront_info *info = (struct usbfront_info *) param;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+	if (likely(HC_IS_RUNNING(info_to_hcd(info)->state))) {
+		timer_action_done(info, TIMER_RING_WATCHDOG);
+		xenhcd_giveback_unlinked_urbs(info);
+		xenhcd_kick_pending_urbs(info);
+	}
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * one-time HC init
+ */
+static int xenhcd_setup(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	spin_lock_init(&info->lock);
+	INIT_LIST_HEAD(&info->pending_submit_list);
+	INIT_LIST_HEAD(&info->pending_unlink_list);
+	INIT_LIST_HEAD(&info->in_progress_list);
+	INIT_LIST_HEAD(&info->giveback_waiting_list);
+	init_timer(&info->watchdog);
+	info->watchdog.function = xenhcd_watchdog;
+	info->watchdog.data = (unsigned long) info;
+	return 0;
+}
+
+/*
+ * start HC running
+ */
+static int xenhcd_run(struct usb_hcd *hcd)
+{
+	hcd->uses_new_polling = 1;
+	hcd->state = HC_STATE_RUNNING;
+	create_debug_file(hcd_to_info(hcd));
+	return 0;
+}
+
+/*
+ * stop running HC
+ */
+static void xenhcd_stop(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	del_timer_sync(&info->watchdog);
+	remove_debug_file(info);
+	spin_lock_irq(&info->lock);
+	/* cancel all urbs */
+	hcd->state = HC_STATE_HALT;
+	xenhcd_cancel_all_enqueued_urbs(info);
+	xenhcd_giveback_unlinked_urbs(info);
+	spin_unlock_irq(&info->lock);
+}
+
+/*
+ * called as .urb_enqueue()
+ * a non-error return is a promise to give back the URB later
+ */
+static int xenhcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
+				gfp_t mem_flags)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	struct urb_priv *urbp;
+	unsigned long flags;
+	int ret = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	urbp = alloc_urb_priv(urb);
+	if (!urbp) {
+		ret = -ENOMEM;
+		goto done;
+	}
+	urbp->status = 1;
+
+	ret = xenhcd_submit_urb(info, urbp);
+	if (ret != 0)
+		free_urb_priv(urbp);
+
+done:
+	spin_unlock_irqrestore(&info->lock, flags);
+	return ret;
+}
+
+/*
+ * called as .urb_dequeue()
+ */
+static int xenhcd_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	struct urb_priv *urbp;
+	unsigned long flags;
+	int ret = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	urbp = urb->hcpriv;
+	if (!urbp)
+		goto done;
+
+	urbp->status = status;
+	ret = xenhcd_unlink_urb(info, urbp);
+
+done:
+	spin_unlock_irqrestore(&info->lock, flags);
+	return ret;
+}
+
+/*
+ * called from usb_get_current_frame_number(),
+ * though almost no drivers use that function.
+ */
+static int xenhcd_get_frame(struct usb_hcd *hcd)
+{
+	/* 0 normally indicates an error, but it causes no problem here */
+	return 0;
+}
+
+static const char hcd_name[] = "xen_hcd";
+
+struct hc_driver xen_usb20_hc_driver = {
+	.description = hcd_name,
+	.product_desc = "Xen USB2.0 Virtual Host Controller",
+	.hcd_priv_size = sizeof(struct usbfront_info),
+	.flags = HCD_USB2,
+
+	/* basic HC lifecycle operations */
+	.reset = xenhcd_setup,
+	.start = xenhcd_run,
+	.stop = xenhcd_stop,
+
+	/* managing urb I/O */
+	.urb_enqueue = xenhcd_urb_enqueue,
+	.urb_dequeue = xenhcd_urb_dequeue,
+	.get_frame_number = xenhcd_get_frame,
+
+	/* root hub operations */
+	.hub_status_data = xenhcd_hub_status_data,
+	.hub_control = xenhcd_hub_control,
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+	.bus_suspend = xenhcd_bus_suspend,
+	.bus_resume = xenhcd_bus_resume,
+#endif
+#endif
+};
+
+struct hc_driver xen_usb11_hc_driver = {
+	.description = hcd_name,
+	.product_desc = "Xen USB1.1 Virtual Host Controller",
+	.hcd_priv_size = sizeof(struct usbfront_info),
+	.flags = HCD_USB11,
+
+	/* basic HC lifecycle operations */
+	.reset = xenhcd_setup,
+	.start = xenhcd_run,
+	.stop = xenhcd_stop,
+
+	/* managing urb I/O */
+	.urb_enqueue = xenhcd_urb_enqueue,
+	.urb_dequeue = xenhcd_urb_dequeue,
+	.get_frame_number = xenhcd_get_frame,
+
+	/* root hub operations */
+	.hub_status_data = xenhcd_hub_status_data,
+	.hub_control = xenhcd_hub_control,
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+	.bus_suspend = xenhcd_bus_suspend,
+	.bus_resume = xenhcd_bus_resume,
+#endif
+#endif
+};
+
+#define GRANT_INVALID_REF 0
+
+static void destroy_rings(struct usbfront_info *info)
+{
+	if (info->irq)
+		unbind_from_irqhandler(info->irq, info);
+	info->evtchn = info->irq = 0;
+
+	if (info->urb_ring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->urb_ring_ref, 0,
+					(unsigned long)info->urb_ring.sring);
+		info->urb_ring_ref = GRANT_INVALID_REF;
+	}
+	info->urb_ring.sring = NULL;
+
+	if (info->conn_ring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->conn_ring_ref, 0,
+					(unsigned long)info->conn_ring.sring);
+		info->conn_ring_ref = GRANT_INVALID_REF;
+	}
+	info->conn_ring.sring = NULL;
+}
+
+static int setup_rings(struct xenbus_device *dev, struct usbfront_info *info)
+{
+	struct usbif_urb_sring *urb_sring;
+	struct usbif_conn_sring *conn_sring;
+	int err;
+
+	info->urb_ring_ref = GRANT_INVALID_REF;
+	info->conn_ring_ref = GRANT_INVALID_REF;
+
+	urb_sring = (struct usbif_urb_sring *)
+					get_zeroed_page(GFP_NOIO|__GFP_HIGH);
+	if (!urb_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating urb ring");
+		return -ENOMEM;
+	}
+	SHARED_RING_INIT(urb_sring);
+	FRONT_RING_INIT(&info->urb_ring, urb_sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->urb_ring.sring));
+	if (err < 0) {
+		free_page((unsigned long)urb_sring);
+		info->urb_ring.sring = NULL;
+		goto fail;
+	}
+	info->urb_ring_ref = err;
+
+	conn_sring = (struct usbif_conn_sring *)
+					get_zeroed_page(GFP_NOIO|__GFP_HIGH);
+	if (!conn_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating conn ring");
+		err = -ENOMEM;
+		goto fail;
+	}
+	SHARED_RING_INIT(conn_sring);
+	FRONT_RING_INIT(&info->conn_ring, conn_sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->conn_ring.sring));
+	if (err < 0) {
+		free_page((unsigned long)conn_sring);
+		info->conn_ring.sring = NULL;
+		goto fail;
+	}
+	info->conn_ring_ref = err;
+
+	err = xenbus_alloc_evtchn(dev, &info->evtchn);
+	if (err)
+		goto fail;
+
+	err = bind_evtchn_to_irqhandler(info->evtchn, xenhcd_int, 0,
+					"usbif", info);
+	if (err <= 0) {
+		xenbus_dev_fatal(dev, err, "bind_evtchn_to_irqhandler");
+		goto fail;
+	}
+	info->irq = err;
+
+	return 0;
+fail:
+	destroy_rings(info);
+	return err;
+}
+
+static int talk_to_usbback(struct xenbus_device *dev,
+				struct usbfront_info *info)
+{
+	const char *message;
+	struct xenbus_transaction xbt;
+	int err;
+
+	err = setup_rings(dev, info);
+	if (err)
+		goto out;
+
+again:
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "starting transaction");
+		goto destroy_ring;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "urb-ring-ref",
+				"%u", info->urb_ring_ref);
+	if (err) {
+		message = "writing urb-ring-ref";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "conn-ring-ref",
+				"%u", info->conn_ring_ref);
+	if (err) {
+		message = "writing conn-ring-ref";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "event-channel",
+				"%u", info->evtchn);
+	if (err) {
+		message = "writing event-channel";
+		goto abort_transaction;
+	}
+
+	err = xenbus_transaction_end(xbt, 0);
+	if (err) {
+		if (err == -EAGAIN)
+			goto again;
+		xenbus_dev_fatal(dev, err, "completing transaction");
+		goto destroy_ring;
+	}
+
+	return 0;
+
+abort_transaction:
+	xenbus_transaction_end(xbt, 1);
+	xenbus_dev_fatal(dev, err, "%s", message);
+
+destroy_ring:
+	destroy_rings(info);
+
+out:
+	return err;
+}
+
+static int connect(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+
+	struct usbif_conn_request *req;
+	int i, idx, err;
+	int notify;
+	char name[TASK_COMM_LEN];
+	struct usb_hcd *hcd;
+
+	hcd = info_to_hcd(info);
+	snprintf(name, TASK_COMM_LEN, "xenhcd.%d", hcd->self.busnum);
+
+	err = talk_to_usbback(dev, info);
+	if (err)
+		return err;
+
+	info->kthread = kthread_run(xenhcd_schedule, info, name);
+	if (IS_ERR(info->kthread)) {
+		err = PTR_ERR(info->kthread);
+		info->kthread = NULL;
+		xenbus_dev_fatal(dev, err, "Error creating thread");
+		return err;
+	}
+	/* prepare ring for hotplug notification */
+	for (idx = 0, i = 0; i < USB_CONN_RING_SIZE; i++) {
+		req = RING_GET_REQUEST(&info->conn_ring, idx);
+		req->id = idx;
+		idx++;
+	}
+	info->conn_ring.req_prod_pvt = idx;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	return 0;
+}
+
+static struct usb_hcd *create_hcd(struct xenbus_device *dev)
+{
+	int i;
+	int err = 0;
+	int num_ports;
+	int usb_ver;
+	struct usb_hcd *hcd = NULL;
+	struct usbfront_info *info = NULL;
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "num-ports",
+				"%d", &num_ports);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading num-ports");
+		return ERR_PTR(-EINVAL);
+	}
+	if (num_ports < 1 || num_ports > USB_MAXCHILDREN) {
+		xenbus_dev_fatal(dev, err, "invalid num-ports");
+		return ERR_PTR(-EINVAL);
+	}
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "usb-ver", "%d", &usb_ver);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading usb-ver");
+		return ERR_PTR(-EINVAL);
+	}
+	switch (usb_ver) {
+	case USB_VER_USB11:
+		hcd = usb_create_hcd(&xen_usb11_hc_driver,
+					&dev->dev, dev_name(&dev->dev));
+		break;
+	case USB_VER_USB20:
+		hcd = usb_create_hcd(&xen_usb20_hc_driver,
+					&dev->dev, dev_name(&dev->dev));
+		break;
+	default:
+		xenbus_dev_fatal(dev, err, "invalid usb-ver");
+		return ERR_PTR(-EINVAL);
+	}
+	if (!hcd) {
+		xenbus_dev_fatal(dev, err,
+					"failed to allocate USB host controller");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	info = hcd_to_info(hcd);
+	info->xbdev = dev;
+	info->rh_numports = num_ports;
+
+	for (i = 0; i < USB_URB_RING_SIZE; i++) {
+		info->shadow[i].req.id = i + 1;
+		info->shadow[i].urb = NULL;
+	}
+	info->shadow[USB_URB_RING_SIZE-1].req.id = 0x0fff;
+
+	return hcd;
+}
+
+static int usbfront_probe(struct xenbus_device *dev,
+				const struct xenbus_device_id *id)
+{
+	int err;
+	struct usb_hcd *hcd;
+	struct usbfront_info *info;
+
+	if (usb_disabled())
+		return -ENODEV;
+
+	hcd = create_hcd(dev);
+	if (IS_ERR(hcd)) {
+		err = PTR_ERR(hcd);
+		xenbus_dev_fatal(dev, err,
+					"failed to create usb host controller");
+		/* hcd is an ERR_PTR here; don't fall through to usb_put_hcd() */
+		return err;
+	}
+
+	info = hcd_to_info(hcd);
+	dev_set_drvdata(&dev->dev, info);
+
+	err = usb_add_hcd(hcd, 0, 0);
+	if (err != 0) {
+		xenbus_dev_fatal(dev, err, "failed to add USB host controller");
+		goto fail;
+	}
+
+	init_waitqueue_head(&info->wq);
+
+	return 0;
+
+fail:
+	usb_put_hcd(hcd);
+	dev_set_drvdata(&dev->dev, NULL);
+	return err;
+}
+
+static void usbfront_disconnect(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+	struct usb_hcd *hcd = info_to_hcd(info);
+
+	usb_remove_hcd(hcd);
+	if (info->kthread) {
+		kthread_stop(info->kthread);
+		info->kthread = NULL;
+	}
+	xenbus_frontend_closed(dev);
+}
+
+static void usbback_changed(struct xenbus_device *dev,
+				enum xenbus_state backend_state)
+{
+	switch (backend_state) {
+	case XenbusStateInitialising:
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+	case XenbusStateUnknown:
+	case XenbusStateClosed:
+		break;
+
+	case XenbusStateInitWait:
+		if (dev->state != XenbusStateInitialising)
+			break;
+		if (!connect(dev))
+			xenbus_switch_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		usbfront_disconnect(dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+					backend_state);
+		break;
+	}
+}
+
+static int usbfront_remove(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+	struct usb_hcd *hcd = info_to_hcd(info);
+
+	destroy_rings(info);
+	usb_put_hcd(hcd);
+
+	return 0;
+}
+
+static const struct xenbus_device_id usbfront_ids[] = {
+	{ "vusb" },
+	{ "" },
+};
+MODULE_ALIAS("xen:vusb");
+
+static DEFINE_XENBUS_DRIVER(usbfront, ,
+	.probe = usbfront_probe,
+	.remove = usbfront_remove,
+	.otherend_changed = usbback_changed,
+);
+
+static int __init usbfront_init(void)
+{
+	if (!xen_domain())
+		return -ENODEV;
+
+	xenhcd_urbp_cachep = kmem_cache_create("xenhcd_urb_priv",
+					sizeof(struct urb_priv), 0, 0, NULL);
+	if (!xenhcd_urbp_cachep) {
+		printk(KERN_ERR "usbfront failed to create kmem cache\n");
+		return -ENOMEM;
+	}
+
+	return xenbus_register_frontend(&usbfront_driver);
+}
+
+static void __exit usbfront_exit(void)
+{
+	kmem_cache_destroy(xenhcd_urbp_cachep);
+	xenbus_unregister_driver(&usbfront_driver);
+}
+
+module_init(usbfront_init);
+module_exit(usbfront_exit);
+
+MODULE_AUTHOR("");
+MODULE_DESCRIPTION("Xen USB Virtual Host Controller driver (usbfront)");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/include/xen/interface/io/usbif.h b/include/xen/interface/io/usbif.h
new file mode 100644
index 0000000..f3bb1b2
--- /dev/null
+++ b/include/xen/interface/io/usbif.h
@@ -0,0 +1,150 @@
+/*
+ * usbif.h
+ *
+ * USB I/O interface for Xen guest OSes.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_PUBLIC_IO_USBIF_H__
+#define __XEN_PUBLIC_IO_USBIF_H__
+
+#include "ring.h"
+#include "../grant_table.h"
+
+enum usb_spec_version {
+	USB_VER_UNKNOWN = 0,
+	USB_VER_USB11,
+	USB_VER_USB20,
+	USB_VER_USB30,	/* not supported yet */
+};
+
+/*
+ *  USB pipe in usbif_request
+ *
+ *  bits 0-5 are specific to the virtual USB driver.
+ *  bits 7-31 are the standard urb pipe bits.
+ *
+ *  - port number(NEW):	bits 0-4
+ *				(USB_MAXCHILDREN is 31)
+ *
+ *  - operation flag(NEW):	bit 5
+ *				(0 = submit urb,
+ *				 1 = unlink urb)
+ *
+ *  - direction:		bit 7
+ *				(0 = Host-to-Device [Out]
+ *				 1 = Device-to-Host [In])
+ *
+ *  - device address:	bits 8-14
+ *
+ *  - endpoint:		bits 15-18
+ *
+ *  - pipe type:		bits 30-31
+ *				(00 = isochronous, 01 = interrupt,
+ *				 10 = control, 11 = bulk)
+ */
+#define usbif_pipeportnum(pipe) ((pipe) & 0x1f)
+#define usbif_setportnum_pipe(pipe, portnum) \
+	((pipe)|(portnum))
+
+#define usbif_pipeunlink(pipe) ((pipe) & 0x20)
+#define usbif_pipesubmit(pipe) (!usbif_pipeunlink(pipe))
+#define usbif_setunlink_pipe(pipe) ((pipe)|(0x20))
+
+#define USBIF_BACK_MAX_PENDING_REQS (128)
+#define USBIF_MAX_SEGMENTS_PER_REQUEST (16)
+
+/*
+ * RING for transferring urbs.
+ */
+struct usbif_request_segment {
+	grant_ref_t gref;
+	uint16_t offset;
+	uint16_t length;
+};
+
+struct usbif_urb_request {
+	uint16_t id; /* request id */
+	uint16_t nr_buffer_segs; /* number of urb->transfer_buffer segments */
+
+	/* basic urb parameter */
+	uint32_t pipe;
+	uint16_t transfer_flags;
+	uint16_t buffer_length;
+	union {
+		uint8_t ctrl[8]; /* setup_packet (Ctrl) */
+
+		struct {
+			uint16_t interval; /* maximum (1024*8) in usb core */
+			uint16_t start_frame; /* start frame */
+			uint16_t number_of_packets; /* number of ISO packets */
+			uint16_t nr_frame_desc_segs; /* number of iso_frame_desc
+								 segments */
+		} isoc;
+
+		struct {
+			uint16_t interval; /* maximum (1024*8) in usb core */
+			uint16_t pad[3];
+		} intr;
+
+		struct {
+			uint16_t unlink_id; /* unlink request id */
+			uint16_t pad[3];
+		} unlink;
+
+	} u;
+
+	/* urb data segments */
+	struct usbif_request_segment seg[USBIF_MAX_SEGMENTS_PER_REQUEST];
+};
+
+struct usbif_urb_response {
+	uint16_t id; /* request id */
+	uint16_t start_frame;  /* start frame (ISO) */
+	int32_t status; /* status (non-ISO) */
+	int32_t actual_length; /* actual transfer length */
+	int32_t error_count; /* number of ISO errors */
+};
+
+DEFINE_RING_TYPES(usbif_urb, struct usbif_urb_request,
+						struct usbif_urb_response);
+#define USB_URB_RING_SIZE __CONST_RING_SIZE(usbif_urb, PAGE_SIZE)
+
+/*
+ * RING for notifying connect/disconnect events to frontend
+ */
+struct usbif_conn_request {
+	uint16_t id;
+};
+
+struct usbif_conn_response {
+	uint16_t id; /* request id */
+	uint8_t portnum; /* port number */
+	uint8_t speed; /* usb_device_speed */
+};
+
+DEFINE_RING_TYPES(usbif_conn, struct usbif_conn_request,
+						struct usbif_conn_response);
+#define USB_CONN_RING_SIZE __CONST_RING_SIZE(usbif_conn, PAGE_SIZE)
+
+#endif /* __XEN_PUBLIC_IO_USBIF_H__ */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:58:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ee5-0006LN-8A; Tue, 21 Jan 2014 16:57:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5ee4-0006LI-5l
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:57:48 +0000
Received: from [193.109.254.147:50157] by server-9.bemta-14.messagelabs.com id
	4C/42-13957-B07AED25; Tue, 21 Jan 2014 16:57:47 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390323463!9968650!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23412 invoked from network); 21 Jan 2014 16:57:45 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 16:57:45 -0000
Received: from mail-ee0-f43.google.com ([74.125.83.43]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6nBuvpZrv1PccOfWq53HblxJFoy/N6@postini.com;
	Tue, 21 Jan 2014 08:57:45 PST
Received: by mail-ee0-f43.google.com with SMTP id c41so4188877eek.16
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 08:57:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:cc:subject:date:message-id;
	bh=ypAeZ7eQXEXhVefFzEgug68soQ1XSsFgZMPLDEttbYw=;
	b=Rka7+lMEhENkI6ZTuBWzURx+T+z4j6Yw51I/nIkxCl0cC0e7NAc6GaaSDwnlT8F100
	H+CiYtxZLxEO+8pSyKnDlM6GhtTpNMFQnCSZisV90db/l0oYykSriGSFqZ/qtaX9SGe/
	yH/B+Gm7Zvj+C6OGd+sfvIg81y+tju1QexJQI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=ypAeZ7eQXEXhVefFzEgug68soQ1XSsFgZMPLDEttbYw=;
	b=Wnqb5Z8pb6khCloXCSWvgHaH45CFdhhiZmLeWPkDYyG5CqPVaoCFkEk6iY+fWVcYzN
	S+HGwjMzqimFr1o+dKBNuUJ8cdqpLVboYPlQVZJyNulHDdNXxDebFsxgan4fMfV7BFD2
	fVGdvJb3jMkbzwIZQiENETSNtO4nhw29mimF2fsxFoKRI2ryF6R+rggsTeMD/WPC0cNr
	KLdatL/qx0Gn+SP/RtzhvBW2maeY0JM8cYml59uDzx/MrMo3or1JCWPyjIspdiieZGwk
	gcQOP9H4JWGUxCpvbzkq9eooCXJLxfi0Bl8ibqGnFpMxRXfdkJYSc3aGektYFtBEkmt2
	H6wg==
X-Gm-Message-State: ALoCoQkMimpUfTILuCtKsuTiYNLJHlirXaqu5RbsTY+UwGrBKYDFZiynSF8IgpP4nRsthEwgBICsDQryzcTDc1BYE63v2pUilRbtCh6AE3ttn9ISfShAtHWZHfCXBhYswoirtsyfEIK24QnRAkeDcZFxHD61yRhTqEXTSVux8P2RxjeBnS7zDyc=
X-Received: by 10.14.111.73 with SMTP id v49mr1662488eeg.94.1390323461869;
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
X-Received: by 10.14.111.73 with SMTP id v49mr1662480eeg.94.1390323461768;
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
Received: from uglx0187394.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 4sm16752303eed.14.2014.01.21.08.57.40
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 08:57:41 -0800 (PST)
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 18:53:14 +0200
Message-Id: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Subject: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Could someone advise on the issue I am facing?

I am trying to run PV USB on omap5uevm (omap5-panda) board.

I use the latest drivers for PV USB from Nathanael's server:
http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch

I have applied it to kernel 3.8 (dom0) together with some patches for the USB HCD and
usbback drivers (attached), and run it on Xen 4.4.0-rc2.

I am facing an issue with USB_STORAGE:
the USB storage device initializes and mounts in domU over the PV USB drivers,
but I can only copy small files to it (no more than ~100-500 kB).
Beyond that the USB storage falls into an infinite loop
(the LEDs on the device keep blinking, far longer than the copy should need),
and after a few seconds dom0 disconnects the USB device.

Both dom0 and domU use kernel 3.8.

I observed that usb-storage issues some SCSI requests (from domU) which pass
directly to the hardware; I think this is the problem.

So, I applied PV SCSI drivers from
http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
to k3.8.

Then I initialized PV USB and PV SCSI with the vusb-start.sh and vscsi-start.sh scripts respectively,
but I am still facing the same issue.

Dom0 log:
[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
[    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=10c5387d
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
[    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
[    0.000000] bootconsole [earlycon0] enabled
[    0.000000] Memory policy: ECC disabled, Data cache writeback
[    0.000000] On node 0 totalpages: 65280
[    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map c428e000
[    0.000000]   Normal zone: 512 pages used for memmap
[    0.000000]   Normal zone: 0 pages reserved
[    0.000000]   Normal zone: 64768 pages, LIFO batch:15
[    0.000000] psci: probing function IDs from device-tree
[    0.000000] OMAP5432 ES2.0
[    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
[    0.000000] pcpu-alloc: [0] 0 
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
[    0.000000] Kernel command line: console=hvc0 earlyprintk
[    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
[    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 255MB = 255MB total
[    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K highmem
[    0.000000] Virtual kernel memory layout:
[    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
[    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
[    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
[    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
[    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
[    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
[    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
[    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
[    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
[    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
[    0.000000] NR_IRQS:16 nr_irqs:16 16
[    0.000000] Architected local timer running at 6.14MHz (virt).
[    0.000000] Switching to timer-based delay loop
[    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
[    0.000000] Console: colour dummy device 80x30
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 3695 kB
[    0.000000]  per task-struct memory footprint: 1152 bytes
[    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
[    0.054687] pid_max: default: 32768 minimum: 301
[    0.054687] Security Framework initialized
[    0.062500] Mount-cache hash table entries: 512
[    0.070312] CPU: Testing write buffer coherency: ok
[    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
[    0.085937] devtmpfs: initialized
[    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
[    0.101562] xen:grant_table: Grant tables using version 1 layout.
[    0.101562] Grant table initialized
[    0.109375] omap_hwmod: aess: _wait_target_disable failed
[    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
[    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
[    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
[    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
[    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
[    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
[    0.234375] pinctrl core: initialized pinctrl subsystem
[    0.242187] regulator-dummy: no parameters
[    0.242187] NET: Registered protocol family 16
[    0.250000] Xen: initializing cpu0
[    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
[    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
[    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
[    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
[    0.281250] OMAP GPIO hardware version 0.1
[    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
[    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
[    0.296875] OMAP DMA hardware revision 0.0
[    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
[    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
[    0.335937] bio: create slab <bio-0> at 0
[    0.343750] xen:balloon: Initialising balloon driver
[    0.343750] of_get_named_gpio_flags exited with status 80
[    0.343750] hsusb2_reset: 3300 mV 
[    0.351562] of_get_named_gpio_flags exited with status 79
[    0.351562] hsusb3_reset: 3300 mV 
[    0.351562] SCSI subsystem initialized
[    0.359375] libata version 3.00 loaded.
[    0.359375] usbcore: registered new interface driver usbfs
[    0.367187] usbcore: registered new interface driver hub
[    0.367187] usbcore: registered new device driver usb
[    0.375000] Switching to clocksource arch_sys_counter
[    0.414062] NET: Registered protocol family 2
[    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
[    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
[    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
[    0.437500] TCP: reno registered
[    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
[    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
[    0.453125] NET: Registered protocol family 1
[    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
[    0.687500] VFS: Disk quotas dquot_6.5.2
[    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
[    0.703125] msgmni has been set to 372
[    0.710937] io scheduler noop registered
[    0.718750] io scheduler deadline registered
[    0.718750] io scheduler cfq registered (default)
[    0.726562] xen:xen_evtchn: Event-channel device installed
[    0.742187] console [hvc0] enabled, bootconsole disabled
[    0.765625] brd: module loaded
[    0.781250] loop: module loaded
[    0.789062] ahci ahci.0.auto: can't get clock
[    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
[    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
[    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
[    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst 
[    0.820312] scsi0 : ahci_platform
[    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
[    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
[    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
[    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
[    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
[    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
[    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.914062] usb usb1: Product: EHCI Host Controller
[    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
[    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
[    0.937500] hub 1-0:1.0: USB hub found
[    0.937500] hub 1-0:1.0: 3 ports detected
[    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
[    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
[    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
[    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
[    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
[    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
[    1.210937] usb usb2: SerialNumber: 4a064800.ohci
[    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
[    1.226562] hub 2-0:1.0: USB hub found
[    1.226562] hub 2-0:1.0: 3 ports detected
[    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
[    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
[    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    1.531250] hub 1-2:1.0: USB hub found
[    1.531250] hub 1-2:1.0: 3 ports detected
[    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
[    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
[    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    1.835937] usbcore: registered new interface driver usbback
[    1.843750] Initializing USB Mass Storage driver...
[    1.843750] usbcore: registered new interface driver usb-storage
[    1.851562] USB Mass Storage support registered.
[    1.859375] i2c /dev entries driver
[    1.859375] usbcore: registered new interface driver usbhid
[    1.867187] usbhid: USB HID core driver
[    1.875000] TCP: cubic registered
[    1.875000] Initializing XFRM netlink socket
[    1.882812] NET: Registered protocol family 17
[    1.882812] NET: Registered protocol family 15
[    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
[    1.898437] mux: Failed to setup hwmod io irq -22
[    1.898437] Power Management for TI OMAP4PLUS devices.
[    1.906250] ThumbEE CPU extension supported.
[    1.914062] Registering SWP/SWPB emulation handler
[    1.921875] devtmpfs: mounted
[    1.968750] Freeing init memory: 57752K
# ./vusb-start.sh 1 0
[    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9, event-channel 3
# ./vscsi-start.sh 1 0
# echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport

[   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
[   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
[   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   40.929687] usb 1-2.1: Product: Mass Storage Device
[   40.929687] usb 1-2.1: Manufacturer: JetFlash
[   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO

DomU log:
[    0.710937] console [hvc0] enabled, bootconsole disabled
[    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is a OMAP UART0
[    0.718750] omap_uart 4806c000.serial: did not get pins for uart1 error: -19
[    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is a OMAP UART1
[    0.718750] omap_uart 4806e000.serial: did not get pins for uart3 error: -19
[    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is a OMAP UART3
[    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is a OMAP UART4
[    0.726562] omap_uart 48068000.serial: did not get pins for uart5 error: -19
[    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is a OMAP UART5
[    0.726562] [drm] Initialized drm 1.1.0 20060810
[    0.742187] brd: module loaded
[    0.757812] loop: module loaded
[    0.757812] omap2_mcspi 48098000.spi: pins are not configured from the driver
[    0.765625] Initialising Xen virtual ethernet driver.
[    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.765625] ehci-platform: EHCI generic platform driver
[    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
[    0.765625] vusb vusb-0: new USB bus registered, assigned bus number 1
[    0.765625] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[    0.765625] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
[    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 xen_hcd
[    0.765625] usb usb1: SerialNumber: vusb-0
[    0.773437] hub 1-0:1.0: USB hub found
[    0.773437] hub 1-0:1.0: 8 ports detected
[    0.773437] Initializing USB Mass Storage driver...
[    0.773437] usbcore: registered new interface driver usb-storage
[    0.773437] USB Mass Storage support registered.
[    0.773437] mousedev: PS/2 mouse device common for all mice
[    0.781250] usbcore: registered new interface driver usbhid
[    0.781250] usbhid: USB HID core driver
[    0.789062] TCP: cubic registered
[    0.789062] Initializing XFRM netlink socket
[    0.789062] NET: Registered protocol family 17
[    0.789062] NET: Registered protocol family 15
[    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
[    0.789062] mux: Failed to setup hwmod io irq -22
[    0.789062] ThumbEE CPU extension supported.
[    0.789062] Registering SWP/SWPB emulation handler
[    0.789062] dmm 4e000000.dmm: initialized all PAT entries
[    0.804687] /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
[    0.804687] devtmpfs: mounted
[    0.812500] Freeing init memory: 6044K

Please press Enter to activate this console.
[    6.500000] scsi0 : Xen SCSI frontend driver

/ # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
[   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
[   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   40.929687] usb 1-2.1: Product: Mass Storage Device
[   40.929687] usb 1-2.1: Manufacturer: JetFlash
[   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
[   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
(XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
[   32.875000] usb 1-1: New USB device found, idVendor=8564, idProduct=1000
[   32.875000] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[   32.875000] usb 1-1: Product: Mass Storage Device
[   32.875000] usb 1-1: Manufacturer: JetFlash
[   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
[   32.906250] scsi1 : usb-storage 1-1:1.0
[   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB    1100 PQ: 0 ANSI: 4
[   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
[   34.140625] sd 1:0:0:0: [sda] Write Protect is off
[   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<-- this data may change between boots*
[   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.195312]  sda: sda1
[   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
[   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
[   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk

 # lsusb 
Bus 001 Device 002: ID 8564:1000
Bus 001 Device 001: ID 1d6b:0002

But it looks like SCSI requests from usb-storage are still passing directly to the hardware
instead of going through PV SCSI.

Could somebody tell me how to initialize PV SCSI and PV USB correctly?

Regards,
Alexander

Alexander Savchenko (2):
  usbback: Add new features
  HACK: usb:core:hcd: Do not remapping self dma addresses

Nathanael Rensen (1):
  pvusb drivers

 drivers/usb/core/hcd.c                 |    1 +
 drivers/usb/host/Kconfig               |   23 +
 drivers/usb/host/Makefile              |    2 +
 drivers/usb/host/xen-usbback/Makefile  |    3 +
 drivers/usb/host/xen-usbback/common.h  |  170 ++++
 drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
 drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
 drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
 drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
 include/xen/interface/io/usbif.h       |  150 +++
 10 files changed, 4244 insertions(+)
 create mode 100644 drivers/usb/host/xen-usbback/Makefile
 create mode 100644 drivers/usb/host/xen-usbback/common.h
 create mode 100644 drivers/usb/host/xen-usbback/usbback.c
 create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
 create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
 create mode 100644 drivers/usb/host/xen-usbfront.c
 create mode 100644 include/xen/interface/io/usbif.h

-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Tue Jan 21 16:58:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5eeG-0006MV-MW; Tue, 21 Jan 2014 16:58:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5eeE-0006MC-H5
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:57:59 +0000
Received: from [85.158.137.68:50253] by server-5.bemta-3.messagelabs.com id
	EB/B3-25188-517AED25; Tue, 21 Jan 2014 16:57:57 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390323469!10496529!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1229 invoked from network); 21 Jan 2014 16:57:51 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 16:57:51 -0000
Received: from mail-ea0-f175.google.com ([209.85.215.175]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6nDAhH3JeLTFG/7i3PfnM6F15uJAGK@postini.com;
	Tue, 21 Jan 2014 08:57:50 PST
Received: by mail-ea0-f175.google.com with SMTP id z10so3854710ead.6
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 08:57:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=XyttfSaipgRVmn7sy2uep2S9EDH/PwdlzkGmZVap1/E=;
	b=fgmwW7B7ydwu097qJgR462vw5PPJqgr7ZHbVQ8VqNBEN3tmh1tQmVCs5hPZDxzonev
	wNrg6yX9jRivdABRdNGsZe8M8umDsqd58NCu4ZxMyvo24qrZpLswFfVhMtb9uSErM0lj
	nPk2frL4Q8TUOy+cPnMJ7rUIKa1+nEqcq959g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=XyttfSaipgRVmn7sy2uep2S9EDH/PwdlzkGmZVap1/E=;
	b=Ya2zWejFY4Z3Yb+sLzlsusLOEC+Fcg4Cw+9/dDvZwHHVgua9oPbL+95vOXRHGq+Qup
	0kqjDHAphbc4/bxdFPi6SMuBEpwxx4OOnfNzWWtQfX22zDdxJksBEmKQcBTdnVho+1tk
	KYj+xU/wjkv9fwX6mzsubqCXVGfTSEniP5Y7q/kH+N5zFmeDk1aYqffhTApOPtNKLORV
	TofmuqA9tlMSYcR1JlA5EJ/pmuRaRbyBjIppO0/ibb76znl35fbjrti7bgksXoUAsWrG
	TBL+loGmq8EQxcS8ahzxO/hJmKjILHB6DO44+fc1waWmQZ3iXtIRmN1D68Wyeiaet3Fe
	OWDg==
X-Gm-Message-State: ALoCoQmrZ1pvgL333NkFjfavHi3CLvSevr8RGvRJeivx9lLeAQblXKT/+O3/aM+ebnz0QClen0UP0fdaJ0+PMxZAo+AxoLJUFEZNYWVatKtY+uQcosxkloyh54gluG0KT6itwfYsXXoug6/kUEWrL0x6JFyB0517klYxZEiwPtBq+zwWkuV0sFg=
X-Received: by 10.14.102.67 with SMTP id c43mr24309611eeg.23.1390323467709;
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
X-Received: by 10.14.102.67 with SMTP id c43mr24309594eeg.23.1390323467549;
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
Received: from uglx0187394.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 4sm16752303eed.14.2014.01.21.08.57.46
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 08:57:47 -0800 (PST)
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 18:53:15 +0200
Message-Id: <1390323197-23003-2-git-send-email-oleksandr.savchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>,
	Nathanael Rensen <nathanael@polymorpheus.com>
Subject: [Xen-devel] [PATCH 1/3] pvusb drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Nathanael Rensen <nathanael@polymorpheus.com>

Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
---
 drivers/usb/host/Kconfig               |   23 +
 drivers/usb/host/Makefile              |    2 +
 drivers/usb/host/xen-usbback/Makefile  |    3 +
 drivers/usb/host/xen-usbback/common.h  |  170 ++++
 drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
 drivers/usb/host/xen-usbback/usbdev.c  |  319 ++++++
 drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
 drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
 include/xen/interface/io/usbif.h       |  150 +++
 9 files changed, 4160 insertions(+)
 create mode 100644 drivers/usb/host/xen-usbback/Makefile
 create mode 100644 drivers/usb/host/xen-usbback/common.h
 create mode 100644 drivers/usb/host/xen-usbback/usbback.c
 create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
 create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
 create mode 100644 drivers/usb/host/xen-usbfront.c
 create mode 100644 include/xen/interface/io/usbif.h

diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig
index 2d2975d..2c10ea7 100644
--- a/drivers/usb/host/Kconfig
+++ b/drivers/usb/host/Kconfig
@@ -634,6 +634,29 @@ config USB_IMX21_HCD
          To compile this driver as a module, choose M here: the
          module will be called "imx21-hcd".
 
+config XEN_USBDEV_FRONTEND
+	tristate "Xen pvusb device frontend driver"
+	depends on XEN && USB
+	select XEN_XENBUS_FRONTEND
+	default m
+	help
+	  The pvusb device frontend driver allows the kernel to
+	  access USB devices exported by a virtual machine
+	  containing a physical USB device driver. The frontend
+	  driver is intended for unprivileged guest domains;
+	  if you are compiling a kernel for a Xen guest, you almost
+	  certainly want to enable this.
+
+config XEN_USBDEV_BACKEND
+	tristate "Xen pvusb device backend driver"
+	depends on XEN_BACKEND && USB
+	default m
+	help
+	  The pvusb backend driver allows the kernel to export its USB
+	  devices to other guests via a high-performance shared-memory
+	  interface. This requires the guest to have the pvusb frontend
+	  driver available.
+
 config USB_OCTEON_EHCI
 	bool "Octeon on-chip EHCI support"
 	depends on USB && USB_EHCI_HCD && CPU_CAVIUM_OCTEON
diff --git a/drivers/usb/host/Makefile b/drivers/usb/host/Makefile
index 56de410..cba51c8 100644
--- a/drivers/usb/host/Makefile
+++ b/drivers/usb/host/Makefile
@@ -45,5 +45,7 @@ obj-$(CONFIG_USB_HWA_HCD)	+= hwa-hc.o
 obj-$(CONFIG_USB_IMX21_HCD)	+= imx21-hcd.o
 obj-$(CONFIG_USB_FSL_MPH_DR_OF)	+= fsl-mph-dr-of.o
 obj-$(CONFIG_USB_OCTEON2_COMMON) += octeon2-common.o
+obj-$(CONFIG_XEN_USBDEV_FRONTEND) += xen-usbfront.o
+obj-$(CONFIG_XEN_USBDEV_BACKEND) += xen-usbback/
 obj-$(CONFIG_USB_HCD_BCMA)	+= bcma-hcd.o
 obj-$(CONFIG_USB_HCD_SSB)	+= ssb-hcd.o
diff --git a/drivers/usb/host/xen-usbback/Makefile b/drivers/usb/host/xen-usbback/Makefile
new file mode 100644
index 0000000..9f3628c
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_XEN_USBDEV_BACKEND) := xen-usbback.o
+
+xen-usbback-y   := usbdev.o xenbus.o usbback.o
diff --git a/drivers/usb/host/xen-usbback/common.h b/drivers/usb/host/xen-usbback/common.h
new file mode 100644
index 0000000..d9671ec
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/common.h
@@ -0,0 +1,170 @@
+/*
+ * This file is part of Xen USB backend driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_USBBACK__COMMON_H__
+#define __XEN_USBBACK__COMMON_H__
+
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/slab.h>
+#include <linux/usb.h>
+#include <linux/vmalloc.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/kref.h>
+#include <asm/hypervisor.h>
+#include <xen/xen.h>
+#include <xen/events.h>
+#include <xen/interface/xen.h>
+#include <xen/xenbus.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+#include <xen/interface/io/usbif.h>
+
+#define DRV_PFX "xen-usbback:"
+
+struct xen_usbdev;
+
+#ifndef BUS_ID_SIZE
+#define XEN_USB_BUS_ID_SIZE 20
+#else
+#define XEN_USB_BUS_ID_SIZE BUS_ID_SIZE
+#endif
+
+#define XEN_USB_DEV_ADDR_SIZE 128
+
+struct xen_usbif {
+	domid_t				domid;
+	unsigned int			handle;
+	int				num_ports;
+	enum usb_spec_version		usb_ver;
+
+	struct list_head		usbif_list;
+
+	struct xenbus_device		*xbdev;
+
+	unsigned int			irq;
+
+	void				*urb_sring;
+	void				*conn_sring;
+	struct usbif_urb_back_ring	urb_ring;
+	struct usbif_conn_back_ring	conn_ring;
+
+	spinlock_t			urb_ring_lock;
+	spinlock_t			conn_ring_lock;
+	atomic_t			refcnt;
+
+	struct xenbus_watch		backend_watch;
+
+	/* device address lookup table */
+	struct xen_usbdev		*addr_table[XEN_USB_DEV_ADDR_SIZE];
+	spinlock_t			addr_lock;
+
+	/* connected device list */
+	struct list_head		dev_list;
+	spinlock_t			dev_lock;
+
+	/* request schedule */
+	struct task_struct		*xenusbd;
+	unsigned int			waiting_reqs;
+	wait_queue_head_t		waiting_to_free;
+	wait_queue_head_t		wq;
+};
+
+struct xen_usbport {
+	struct list_head	port_list;
+
+	char			phys_bus[XEN_USB_BUS_ID_SIZE];
+	domid_t			domid;
+	unsigned int		handle;
+	int			portnum;
+	unsigned		is_connected:1;
+};
+
+struct xen_usbdev {
+	struct kref		kref;
+	struct list_head	dev_list;
+
+	struct xen_usbport	*port;
+	struct usb_device	*udev;
+	struct xen_usbif	*usbif;
+	int			addr;
+
+	struct list_head	submitting_list;
+	spinlock_t		submitting_lock;
+};
+
+#define usbif_get(_b) (atomic_inc(&(_b)->refcnt))
+#define usbif_put(_b) \
+	do { \
+		if (atomic_dec_and_test(&(_b)->refcnt)) \
+			wake_up(&(_b)->waiting_to_free); \
+	} while (0)
+
+int xen_usbif_xenbus_init(void);
+void xen_usbif_xenbus_exit(void);
+struct xen_usbif *xen_usbif_find(domid_t domid, unsigned int handle);
+
+int xen_usbdev_init(void);
+void xen_usbdev_exit(void);
+
+void xen_usbif_attach_device(struct xen_usbif *usbif, struct xen_usbdev *dev);
+void xen_usbif_detach_device(struct xen_usbif *usbif, struct xen_usbdev *dev);
+void xen_usbif_detach_device_without_lock(struct xen_usbif *usbif,
+						struct xen_usbdev *dev);
+void xen_usbif_hotplug_notify(struct xen_usbif *usbif, int portnum, int speed);
+struct xen_usbdev *xen_usbif_find_attached_device(struct xen_usbif *usbif,
+								int port);
+irqreturn_t xen_usbif_be_int(int irq, void *dev_id);
+int xen_usbif_schedule(void *arg);
+void xen_usbif_unlink_urbs(struct xen_usbdev *dev);
+
+struct xen_usbport *xen_usbport_find_by_busid(const char *busid);
+struct xen_usbport *xen_usbport_find(const domid_t domid,
+				const unsigned int handle, const int portnum);
+int xen_usbport_add(const char *busid, const domid_t domid,
+				const unsigned int handle, const int portnum);
+int xen_usbport_remove(const domid_t domid, const unsigned int handle,
+							const int portnum);
+#endif /* __XEN_USBBACK__COMMON_H__ */
diff --git a/drivers/usb/host/xen-usbback/usbback.c b/drivers/usb/host/xen-usbback/usbback.c
new file mode 100644
index 0000000..6d7bda0
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/usbback.c
@@ -0,0 +1,1272 @@
+/*
+ * Xen USB backend driver
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/mm.h>
+#include "common.h"
+
+static int xen_usbif_reqs = USBIF_BACK_MAX_PENDING_REQS;
+module_param_named(reqs, xen_usbif_reqs, int, 0);
+MODULE_PARM_DESC(reqs, "Number of usbback requests to allocate");
+
+struct pending_req_segment {
+	uint16_t offset;
+	uint16_t length;
+};
+
+struct pending_req {
+	struct xen_usbif	*usbif;
+
+	uint16_t		id; /* request id */
+
+	struct xen_usbdev	*dev;
+	struct list_head	urb_list;
+
+	/* urb */
+	struct urb		*urb;
+	void			*buffer;
+	dma_addr_t		transfer_dma;
+	struct usb_ctrlrequest	*setup;
+	dma_addr_t		setup_dma;
+
+	/* request segments */
+	uint16_t		nr_buffer_segs;
+				/* number of urb->transfer_buffer segments */
+	uint16_t		nr_extra_segs;
+				/* number of iso_frame_desc segments (ISO) */
+	struct pending_req_segment *seg;
+
+	struct list_head	free_list;
+};
+
+#define USBBACK_INVALID_HANDLE (~0)
+
+struct xen_usbbk {
+	struct pending_req	*pending_reqs;
+	struct list_head	pending_free;
+	spinlock_t		pending_free_lock;
+	wait_queue_head_t	pending_free_wq;
+	struct list_head	urb_free;
+	spinlock_t		urb_free_lock;
+	struct page		**pending_pages;
+	grant_handle_t		*pending_grant_handles;
+};
+
+static struct xen_usbbk *usbbk;
+
+static inline int vaddr_pagenr(struct pending_req *req, int seg)
+{
+	return (req - usbbk->pending_reqs) *
+		USBIF_MAX_SEGMENTS_PER_REQUEST + seg;
+}
+
+#define pending_page(req, seg) pending_pages[vaddr_pagenr(req, seg)]
+
+static inline unsigned long vaddr(struct pending_req *req, int seg)
+{
+	unsigned long pfn = page_to_pfn(usbbk->pending_page(req, seg));
+	return (unsigned long)pfn_to_kaddr(pfn);
+}
+
+#define pending_handle(_req, _seg) \
+	(usbbk->pending_grant_handles[vaddr_pagenr(_req, _seg)])
+
+static struct pending_req *alloc_req(void)
+{
+	struct pending_req *req = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbbk->pending_free_lock, flags);
+	if (!list_empty(&usbbk->pending_free)) {
+		req = list_entry(usbbk->pending_free.next, struct pending_req,
+								free_list);
+		list_del(&req->free_list);
+	}
+	spin_unlock_irqrestore(&usbbk->pending_free_lock, flags);
+	return req;
+}
+
+static void free_req(struct pending_req *req)
+{
+	unsigned long flags;
+	int was_empty;
+
+	spin_lock_irqsave(&usbbk->pending_free_lock, flags);
+	was_empty = list_empty(&usbbk->pending_free);
+	list_add(&req->free_list, &usbbk->pending_free);
+	spin_unlock_irqrestore(&usbbk->pending_free_lock, flags);
+	if (was_empty)
+		wake_up(&usbbk->pending_free_wq);
+}
+
+static inline void add_req_to_submitting_list(struct xen_usbdev *dev,
+						struct pending_req *pending_req)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_add_tail(&pending_req->urb_list, &dev->submitting_list);
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+static inline void remove_req_from_submitting_list(struct xen_usbdev *dev,
+						struct pending_req *pending_req)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_del_init(&pending_req->urb_list);
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+void xen_usbif_unlink_urbs(struct xen_usbdev *dev)
+{
+	struct pending_req *req, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->submitting_lock, flags);
+	list_for_each_entry_safe(req, tmp, &dev->submitting_list, urb_list) {
+		usb_unlink_urb(req->urb);
+	}
+	spin_unlock_irqrestore(&dev->submitting_lock, flags);
+}
+
+static void copy_buff_to_pages(void *buff, struct pending_req *pending_req,
+				int start, int nr_pages)
+{
+	unsigned long copied = 0;
+	int i;
+
+	for (i = start; i < start + nr_pages; i++) {
+		memcpy((void *) vaddr(pending_req, i) +
+						pending_req->seg[i].offset,
+			buff + copied, pending_req->seg[i].length);
+		copied += pending_req->seg[i].length;
+	}
+}
+
+static void copy_pages_to_buff(void *buff, struct pending_req *pending_req,
+							int start, int nr_pages)
+{
+	unsigned long copied = 0;
+	int i;
+
+	for (i = start; i < start + nr_pages; i++) {
+		void *src = (void *) vaddr(pending_req, i) +
+						pending_req->seg[i].offset;
+		memcpy(buff + copied, src, pending_req->seg[i].length);
+		copied += pending_req->seg[i].length;
+	}
+}
+
+static int usbbk_alloc_urb(struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	int ret;
+
+	if (usb_pipeisoc(req->pipe))
+		pending_req->urb = usb_alloc_urb(req->u.isoc.number_of_packets,
+						 GFP_KERNEL);
+	else
+		pending_req->urb = usb_alloc_urb(0, GFP_KERNEL);
+	if (!pending_req->urb) {
+		pr_alert(DRV_PFX "can't alloc urb\n");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (req->buffer_length) {
+		pending_req->buffer =
+			usb_alloc_coherent(pending_req->dev->udev,
+						req->buffer_length, GFP_KERNEL,
+						&pending_req->transfer_dma);
+		if (!pending_req->buffer) {
+			pr_alert(DRV_PFX "can't alloc urb buffer\n");
+			ret = -ENOMEM;
+			goto fail_free_urb;
+		}
+	}
+
+	if (usb_pipecontrol(req->pipe)) {
+		pending_req->setup = usb_alloc_coherent(pending_req->dev->udev,
+					sizeof(struct usb_ctrlrequest),
+					GFP_KERNEL, &pending_req->setup_dma);
+		if (!pending_req->setup) {
+			pr_alert(DRV_PFX "can't alloc usb_ctrlrequest\n");
+			ret = -ENOMEM;
+			goto fail_free_buffer;
+		}
+	}
+
+	return 0;
+
+fail_free_buffer:
+	if (req->buffer_length)
+		usb_free_coherent(pending_req->dev->udev, req->buffer_length,
+				pending_req->buffer, pending_req->transfer_dma);
+fail_free_urb:
+	usb_free_urb(pending_req->urb);
+fail:
+	return ret;
+}
+
+static void usbbk_release_urb(struct urb *urb)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbbk->urb_free_lock, flags);
+	list_add(&urb->urb_list, &usbbk->urb_free);
+	spin_unlock_irqrestore(&usbbk->urb_free_lock, flags);
+}
+
+static void usbbk_free_urb(struct urb *urb)
+{
+	if (usb_pipecontrol(urb->pipe))
+		usb_free_coherent(urb->dev, sizeof(struct usb_ctrlrequest),
+				urb->setup_packet, urb->setup_dma);
+	if (urb->transfer_buffer_length)
+		usb_free_coherent(urb->dev, urb->transfer_buffer_length,
+				urb->transfer_buffer, urb->transfer_dma);
+	barrier();
+	usb_free_urb(urb);
+}
+
+static void usbbk_free_urbs(void)
+{
+	unsigned long flags;
+	struct list_head tmp_list;
+
+	if (list_empty(&usbbk->urb_free))
+		return;
+
+	INIT_LIST_HEAD(&tmp_list);
+
+	spin_lock_irqsave(&usbbk->urb_free_lock, flags);
+	list_splice_init(&usbbk->urb_free, &tmp_list);
+	spin_unlock_irqrestore(&usbbk->urb_free_lock, flags);
+
+	while (!list_empty(&tmp_list)) {
+		struct urb *next_urb =
+			list_first_entry(&tmp_list, struct urb, urb_list);
+		list_del(&next_urb->urb_list);
+		usbbk_free_urb(next_urb);
+	}
+}
+
+static void usbif_notify_work(struct xen_usbif *usbif)
+{
+	usbif->waiting_reqs = 1;
+	wake_up(&usbif->wq);
+}
+
+irqreturn_t xen_usbif_be_int(int irq, void *dev_id)
+{
+	usbif_notify_work(dev_id);
+	return IRQ_HANDLED;
+}
+
+static void xen_usbbk_unmap(struct pending_req *req)
+{
+	struct gnttab_unmap_grant_ref unmap[USBIF_MAX_SEGMENTS_PER_REQUEST];
+	unsigned int i, nr_segs, invcount = 0;
+	grant_handle_t handle;
+	int ret;
+
+	nr_segs = req->nr_buffer_segs + req->nr_extra_segs;
+
+	if (nr_segs == 0)
+		return;
+
+	for (i = 0; i < nr_segs; i++) {
+		handle = pending_handle(req, i);
+		if (handle == USBBACK_INVALID_HANDLE)
+			continue;
+		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
+					GNTMAP_host_map, handle);
+		pending_handle(req, i) = USBBACK_INVALID_HANDLE;
+		invcount++;
+	}
+
+	ret = HYPERVISOR_grant_table_op(
+		GNTTABOP_unmap_grant_ref, unmap, invcount);
+	BUG_ON(ret);
+	/*
+	 * Note, we use invcount, not nr_segs, so we can't index
+	 * using vaddr(req, i).
+	 */
+	for (i = 0; i < invcount; i++) {
+		ret = m2p_remove_override(
+			virt_to_page(unmap[i].host_addr), false);
+		if (ret) {
+			pr_alert(DRV_PFX "Failed to remove M2P override for "
+				"%lx\n", (unsigned long)unmap[i].host_addr);
+			continue;
+		}
+	}
+
+	kfree(req->seg);
+}
+
+static int xen_usbbk_map(struct xen_usbif *usbif,
+				struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	int i, ret;
+	unsigned int nr_segs;
+	uint32_t flags;
+	struct gnttab_map_grant_ref map[USBIF_MAX_SEGMENTS_PER_REQUEST];
+
+	nr_segs = pending_req->nr_buffer_segs + pending_req->nr_extra_segs;
+
+	if (nr_segs == 0)
+		return 0;
+
+	if (nr_segs > USBIF_MAX_SEGMENTS_PER_REQUEST) {
+		pr_alert(DRV_PFX "Bad number of segments in request\n");
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	pending_req->seg = kmalloc(sizeof(struct pending_req_segment) *
+							nr_segs, GFP_KERNEL);
+	if (!pending_req->seg) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	flags = GNTMAP_host_map;
+	if (usb_pipeout(req->pipe))
+		flags |= GNTMAP_readonly;
+	for (i = 0; i < pending_req->nr_buffer_segs; i++) {
+		gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
+					req->seg[i].gref, usbif->domid);
+	}
+
+	flags = GNTMAP_host_map;
+	for (i = pending_req->nr_buffer_segs; i < nr_segs; i++) {
+		gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags,
+					req->seg[i].gref, usbif->domid);
+	}
+
+	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nr_segs);
+	BUG_ON(ret);
+
+	for (i = 0; i < nr_segs; i++) {
+		if (unlikely(map[i].status != 0)) {
+			pr_alert(DRV_PFX "invalid buffer "
+					"-- could not remap it (error %d)\n",
+				map[i].status);
+			map[i].handle = USBBACK_INVALID_HANDLE;
+			ret |= 1;
+		}
+
+		pending_handle(pending_req, i) = map[i].handle;
+
+		if (ret)
+			continue;
+
+		ret = m2p_add_override(PFN_DOWN(map[i].dev_bus_addr),
+			usbbk->pending_page(pending_req, i), NULL);
+		if (ret) {
+			pr_alert(DRV_PFX "Failed to install M2P override for "
+			    "%lx (ret: %d)\n",
+			    (unsigned long)map[i].dev_bus_addr, ret);
+			/* We could switch over to GNTTABOP_copy */
+			continue;
+		}
+
+		pending_req->seg[i].offset = req->seg[i].offset;
+		pending_req->seg[i].length = req->seg[i].length;
+
+		barrier();
+
+		if (pending_req->seg[i].offset >= PAGE_SIZE ||
+			pending_req->seg[i].length > PAGE_SIZE ||
+			pending_req->seg[i].offset +
+				pending_req->seg[i].length > PAGE_SIZE)
+			ret |= 1;
+	}
+
+	if (ret)
+		goto fail_flush;
+
+	return 0;
+
+fail_flush:
+	xen_usbbk_unmap(pending_req);
+	ret = -ENOMEM;
+
+fail:
+	return ret;
+}
+
+static void usbbk_do_response(struct pending_req *pending_req, int32_t status,
+				int32_t actual_length, int32_t error_count,
+				uint16_t start_frame)
+{
+	struct xen_usbif *usbif = pending_req->usbif;
+	struct usbif_urb_response *res;
+	unsigned long flags;
+	int notify;
+
+	spin_lock_irqsave(&usbif->urb_ring_lock, flags);
+	res = RING_GET_RESPONSE(&usbif->urb_ring, usbif->urb_ring.rsp_prod_pvt);
+	res->id = pending_req->id;
+	res->status = status;
+	res->actual_length = actual_length;
+	res->error_count = error_count;
+	res->start_frame = start_frame;
+	usbif->urb_ring.rsp_prod_pvt++;
+	barrier();
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&usbif->urb_ring, notify);
+	spin_unlock_irqrestore(&usbif->urb_ring_lock, flags);
+
+	if (notify)
+		notify_remote_via_irq(usbif->irq);
+}
+
+static void usbbk_urb_complete(struct urb *urb)
+{
+	struct pending_req *pending_req = (struct pending_req *)urb->context;
+
+	if (usb_pipein(urb->pipe) && urb->status == 0 && urb->actual_length > 0)
+		copy_buff_to_pages(pending_req->buffer, pending_req, 0,
+					pending_req->nr_buffer_segs);
+
+	if (usb_pipeisoc(urb->pipe))
+		copy_buff_to_pages(&urb->iso_frame_desc[0], pending_req,
+					pending_req->nr_buffer_segs,
+					pending_req->nr_extra_segs);
+
+	barrier();
+
+	xen_usbbk_unmap(pending_req);
+
+	usbbk_do_response(pending_req, urb->status, urb->actual_length,
+				urb->error_count, urb->start_frame);
+
+	remove_req_from_submitting_list(pending_req->dev, pending_req);
+
+	barrier();
+	usbbk_release_urb(urb);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+}
+
+static void usbbk_init_urb(struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	unsigned int pipe;
+	struct usb_device *udev = pending_req->dev->udev;
+	struct urb *urb = pending_req->urb;
+
+	switch (usb_pipetype(req->pipe)) {
+	case PIPE_ISOCHRONOUS:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvisocpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndisocpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		urb->dev = udev;
+		urb->pipe = pipe;
+		urb->transfer_flags = req->transfer_flags;
+		urb->transfer_flags |= URB_ISO_ASAP;
+		urb->transfer_buffer = pending_req->buffer;
+		urb->transfer_buffer_length = req->buffer_length;
+		urb->complete = usbbk_urb_complete;
+		urb->context = pending_req;
+		urb->interval = req->u.isoc.interval;
+		urb->start_frame = req->u.isoc.start_frame;
+		urb->number_of_packets = req->u.isoc.number_of_packets;
+
+		break;
+	case PIPE_INTERRUPT:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvintpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndintpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		usb_fill_int_urb(urb, udev, pipe,
+				pending_req->buffer, req->buffer_length,
+				usbbk_urb_complete,
+				pending_req, req->u.intr.interval);
+		/*
+		 * High speed interrupt endpoints use a logarithmic encoding of
+		 * the endpoint interval, and usb_fill_int_urb() initializes an
+		 * interrupt urb with the encoded interval value.
+		 *
+		 * req->u.intr.interval is already encoded by the frontend,
+		 * so the usb_fill_int_urb() call above leaves urb->interval
+		 * doubly encoded.
+		 *
+		 * So, simply overwrite urb->interval with the original
+		 * value.
+		 */
+		urb->interval = req->u.intr.interval;
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	case PIPE_CONTROL:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvctrlpipe(udev, 0);
+		else
+			pipe = usb_sndctrlpipe(udev, 0);
+
+		usb_fill_control_urb(urb, udev, pipe,
+				(unsigned char *) pending_req->setup,
+				pending_req->buffer, req->buffer_length,
+				usbbk_urb_complete, pending_req);
+		memcpy(pending_req->setup, req->u.ctrl, 8);
+		urb->setup_dma = pending_req->setup_dma;
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	case PIPE_BULK:
+		if (usb_pipein(req->pipe))
+			pipe = usb_rcvbulkpipe(udev,
+						usb_pipeendpoint(req->pipe));
+		else
+			pipe = usb_sndbulkpipe(udev,
+						usb_pipeendpoint(req->pipe));
+
+		usb_fill_bulk_urb(urb, udev, pipe, pending_req->buffer,
+				req->buffer_length, usbbk_urb_complete,
+				pending_req);
+		urb->transfer_flags = req->transfer_flags;
+
+		break;
+	default:
+		break;
+	}
+
+	if (req->buffer_length) {
+		urb->transfer_dma = pending_req->transfer_dma;
+		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+	}
+}
+
+struct set_interface_request {
+	struct pending_req *pending_req;
+	int interface;
+	int alternate;
+	struct work_struct work;
+};
+
+static void usbbk_set_interface_work(struct work_struct *arg)
+{
+	struct set_interface_request *req
+		= container_of(arg, struct set_interface_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = req->pending_req->dev->udev;
+
+	int ret;
+
+	usb_lock_device(udev);
+	ret = usb_set_interface(udev, req->interface, req->alternate);
+	usb_unlock_device(udev);
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_set_interface(struct pending_req *pending_req, int interface,
+				int alternate)
+{
+	struct set_interface_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+	req->pending_req = pending_req;
+	req->interface = interface;
+	req->alternate = alternate;
+	INIT_WORK(&req->work, usbbk_set_interface_work);
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+
+struct clear_halt_request {
+	struct pending_req *pending_req;
+	int pipe;
+	struct work_struct work;
+};
+
+static void usbbk_clear_halt_work(struct work_struct *arg)
+{
+	struct clear_halt_request *req = container_of(arg,
+					struct clear_halt_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = req->pending_req->dev->udev;
+	int ret;
+
+	usb_lock_device(udev);
+	ret = usb_clear_halt(req->pending_req->dev->udev, req->pipe);
+	usb_unlock_device(udev);
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_clear_halt(struct pending_req *pending_req, int pipe)
+{
+	struct clear_halt_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+	req->pending_req = pending_req;
+	req->pipe = pipe;
+	INIT_WORK(&req->work, usbbk_clear_halt_work);
+
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+
+#if 0
+struct port_reset_request {
+	struct pending_req *pending_req;
+	struct work_struct work;
+};
+
+static void usbbk_port_reset_work(struct work_struct *arg)
+{
+	struct port_reset_request *req = container_of(arg,
+					struct port_reset_request, work);
+	struct pending_req *pending_req = req->pending_req;
+	struct usb_device *udev = pending_req->dev->udev;
+	int ret, ret_lock;
+
+	ret = ret_lock = usb_lock_device_for_reset(udev, NULL);
+	if (ret_lock >= 0) {
+		ret = usb_reset_device(udev);
+		if (ret_lock)
+			usb_unlock_device(udev);
+	}
+	usb_put_dev(udev);
+
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(pending_req->usbif);
+	free_req(pending_req);
+	kfree(req);
+}
+
+static int usbbk_port_reset(struct pending_req *pending_req)
+{
+	struct port_reset_request *req;
+	struct usb_device *udev = pending_req->dev->udev;
+
+	req = kmalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	req->pending_req = pending_req;
+	INIT_WORK(&req->work, usbbk_port_reset_work);
+
+	usb_get_dev(udev);
+	schedule_work(&req->work);
+	return 0;
+}
+#endif
+
+static void usbbk_set_address(struct xen_usbif *usbif, struct xen_usbdev *dev,
+				int cur_addr, int new_addr)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->addr_lock, flags);
+	if (cur_addr)
+		usbif->addr_table[cur_addr] = NULL;
+	if (new_addr)
+		usbif->addr_table[new_addr] = dev;
+	dev->addr = new_addr;
+	spin_unlock_irqrestore(&usbif->addr_lock, flags);
+}
+
+static void process_unlink_req(struct xen_usbif *usbif,
+				struct usbif_urb_request *req,
+				struct pending_req *pending_req)
+{
+	struct pending_req *unlink_req = NULL;
+	int devnum;
+	int ret = 0;
+	unsigned long flags;
+
+	devnum = usb_pipedevice(req->pipe);
+	if (unlikely(devnum == 0)) {
+		pending_req->dev = xen_usbif_find_attached_device(usbif,
+						usbif_pipeportnum(req->pipe));
+		if (unlikely(!pending_req->dev)) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+	} else {
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	spin_lock_irqsave(&pending_req->dev->submitting_lock, flags);
+	list_for_each_entry(unlink_req, &pending_req->dev->submitting_list,
+								urb_list) {
+		if (unlink_req->id == req->u.unlink.unlink_id) {
+			ret = usb_unlink_urb(unlink_req->urb);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&pending_req->dev->submitting_lock, flags);
+
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+	return;
+}
+
+static int check_and_submit_special_ctrlreq(struct xen_usbif *usbif,
+						struct usbif_urb_request *req,
+						struct pending_req *pending_req)
+{
+	int devnum;
+	struct xen_usbdev *dev = NULL;
+	struct usb_ctrlrequest *ctrl = (struct usb_ctrlrequest *) req->u.ctrl;
+	int ret;
+	int done = 0;
+
+	devnum = usb_pipedevice(req->pipe);
+
+	/*
+	 * When the device is first connected or reset, the USB device has
+	 * no address. In this initial state, the following requests are
+	 * sent to device address #0:
+	 *
+	 *  1. GET_DESCRIPTOR (with descriptor type "DEVICE") is sent, so
+	 *     the OS learns what device is connected.
+	 *
+	 *  2. SET_ADDRESS is sent, and the device then has its address.
+	 *
+	 * In the next step, SET_CONFIGURATION is sent to the addressed
+	 * device, and the device is finally ready to use.
+	 */
+	if (unlikely(devnum == 0)) {
+		dev = xen_usbif_find_attached_device(usbif,
+						usbif_pipeportnum(req->pipe));
+		if (unlikely(!dev)) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+
+		switch (ctrl->bRequest) {
+		case USB_REQ_GET_DESCRIPTOR:
+			/*
+			 * GET_DESCRIPTOR request to device #0.
+			 * Fall through to normal URB transfer.
+			 */
+			pending_req->dev = dev;
+			return 0;
+		case USB_REQ_SET_ADDRESS:
+			/*
+			 * SET_ADDRESS request to device #0.
+			 * add attached device to addr_table.
+			 */
+			{
+				__u16 addr = le16_to_cpu(ctrl->wValue);
+				usbbk_set_address(usbif, dev, 0, addr);
+			}
+			ret = 0;
+			goto fail_response;
+		default:
+			ret = -EINVAL;
+			goto fail_response;
+		}
+	} else {
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	/*
+	 * Check special request
+	 */
+	switch (ctrl->bRequest) {
+	case USB_REQ_SET_ADDRESS:
+		/*
+		 * SET_ADDRESS request to addressed device.
+		 * change addr or remove from addr_table.
+		 */
+		{
+			__u16 addr = le16_to_cpu(ctrl->wValue);
+			usbbk_set_address(usbif, pending_req->dev,
+						devnum, addr);
+		}
+		ret = 0;
+		goto fail_response;
+#if 0
+	case USB_REQ_SET_CONFIGURATION:
+		/*
+		 * Linux 2.6.27 or later only!
+		 */
+		if (ctrl->bRequestType == USB_RECIP_DEVICE) {
+			__u16 config = le16_to_cpu(ctrl->wValue);
+			usb_driver_set_configuration(pending_req->dev->udev,
+							config);
+			done = 1;
+		}
+		break;
+#endif
+	case USB_REQ_SET_INTERFACE:
+		if (ctrl->bRequestType == USB_RECIP_INTERFACE) {
+			__u16 alt = le16_to_cpu(ctrl->wValue);
+			__u16 intf = le16_to_cpu(ctrl->wIndex);
+			usbbk_set_interface(pending_req, intf, alt);
+			done = 1;
+		}
+		break;
+	case USB_REQ_CLEAR_FEATURE:
+		if (ctrl->bRequestType == USB_RECIP_ENDPOINT
+			&& ctrl->wValue == USB_ENDPOINT_HALT) {
+			int pipe;
+			int ep = le16_to_cpu(ctrl->wIndex) & 0x0f;
+			int dir = le16_to_cpu(ctrl->wIndex) & USB_DIR_IN;
+			if (dir)
+				pipe = usb_rcvctrlpipe(pending_req->dev->udev,
+							ep);
+			else
+				pipe = usb_sndctrlpipe(pending_req->dev->udev,
+							ep);
+			usbbk_clear_halt(pending_req, pipe);
+			done = 1;
+		}
+		break;
+#if 0 /* not tested yet */
+	case USB_REQ_SET_FEATURE:
+		if (ctrl->bRequestType == USB_RT_PORT) {
+			__u16 feat = le16_to_cpu(ctrl->wValue);
+			if (feat == USB_PORT_FEAT_RESET) {
+				usbbk_port_reset(pending_req);
+				done = 1;
+			}
+		}
+		break;
+#endif
+	default:
+		break;
+	}
+
+	return done;
+
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+	return 1;
+}
+
+static void dispatch_request_to_pending_reqs(struct xen_usbif *usbif,
+						struct usbif_urb_request *req,
+						struct pending_req *pending_req)
+{
+	int ret;
+
+	pending_req->id = req->id;
+	pending_req->usbif = usbif;
+
+	barrier();
+
+	usbif_get(usbif);
+
+	/* unlink request */
+	if (unlikely(usbif_pipeunlink(req->pipe))) {
+		process_unlink_req(usbif, req, pending_req);
+		return;
+	}
+
+	if (usb_pipecontrol(req->pipe)) {
+		if (check_and_submit_special_ctrlreq(usbif, req, pending_req))
+			return;
+	} else {
+		int devnum = usb_pipedevice(req->pipe);
+		if (unlikely(!usbif->addr_table[devnum])) {
+			ret = -ENODEV;
+			goto fail_response;
+		}
+		pending_req->dev = usbif->addr_table[devnum];
+	}
+
+	barrier();
+
+	ret = usbbk_alloc_urb(req, pending_req);
+	if (ret) {
+		ret = -ESHUTDOWN;
+		goto fail_response;
+	}
+
+	add_req_to_submitting_list(pending_req->dev, pending_req);
+
+	barrier();
+
+	usbbk_init_urb(req, pending_req);
+
+	barrier();
+
+	pending_req->nr_buffer_segs = req->nr_buffer_segs;
+	if (usb_pipeisoc(req->pipe))
+		pending_req->nr_extra_segs = req->u.isoc.nr_frame_desc_segs;
+	else
+		pending_req->nr_extra_segs = 0;
+
+	barrier();
+
+	ret = xen_usbbk_map(usbif, req, pending_req);
+	if (ret) {
+		pr_alert(DRV_PFX "invalid buffer\n");
+		ret = -ESHUTDOWN;
+		goto fail_free_urb;
+	}
+
+	barrier();
+
+	if (usb_pipeout(req->pipe) && req->buffer_length)
+		copy_pages_to_buff(pending_req->buffer, pending_req, 0,
+					pending_req->nr_buffer_segs);
+	if (usb_pipeisoc(req->pipe)) {
+		copy_pages_to_buff(&pending_req->urb->iso_frame_desc[0],
+				pending_req, pending_req->nr_buffer_segs,
+				pending_req->nr_extra_segs);
+	}
+
+	barrier();
+
+	ret = usb_submit_urb(pending_req->urb, GFP_KERNEL);
+	if (ret) {
+		pr_alert(DRV_PFX "failed submitting urb, error %d\n", ret);
+		ret = -ESHUTDOWN;
+		goto fail_flush_area;
+	}
+	return;
+
+fail_flush_area:
+	xen_usbbk_unmap(pending_req);
+fail_free_urb:
+	remove_req_from_submitting_list(pending_req->dev, pending_req);
+	barrier();
+	usbbk_release_urb(pending_req->urb);
+fail_response:
+	usbbk_do_response(pending_req, ret, 0, 0, 0);
+	usbif_put(usbif);
+	free_req(pending_req);
+}
+
+static int usbbk_start_submit_urb(struct xen_usbif *usbif)
+{
+	struct usbif_urb_back_ring *urb_ring = &usbif->urb_ring;
+	struct usbif_urb_request *req;
+	struct pending_req *pending_req;
+	RING_IDX rc, rp;
+	int more_to_do = 0;
+
+	rc = urb_ring->req_cons;
+	rp = urb_ring->sring->req_prod;
+	rmb();
+
+	while (rc != rp) {
+		if (RING_REQUEST_CONS_OVERFLOW(urb_ring, rc)) {
+			pr_warn(DRV_PFX "RING_REQUEST_CONS_OVERFLOW\n");
+			break;
+		}
+
+		pending_req = alloc_req();
+		if (!pending_req) {
+			more_to_do = 1;
+			break;
+		}
+
+		req = RING_GET_REQUEST(urb_ring, rc);
+		urb_ring->req_cons = ++rc;
+
+		dispatch_request_to_pending_reqs(usbif, req, pending_req);
+	}
+
+	RING_FINAL_CHECK_FOR_REQUESTS(&usbif->urb_ring, more_to_do);
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+void xen_usbif_hotplug_notify(struct xen_usbif *usbif, int portnum, int speed)
+{
+	struct usbif_conn_back_ring *ring = &usbif->conn_ring;
+	struct usbif_conn_request *req;
+	struct usbif_conn_response *res;
+	unsigned long flags;
+	u16 id;
+	int notify;
+
+	spin_lock_irqsave(&usbif->conn_ring_lock, flags);
+
+	req = RING_GET_REQUEST(ring, ring->req_cons);
+	id = req->id;
+	ring->req_cons++;
+	ring->sring->req_event = ring->req_cons + 1;
+
+	res = RING_GET_RESPONSE(ring, ring->rsp_prod_pvt);
+	res->id = id;
+	res->portnum = portnum;
+	res->speed = speed;
+	ring->rsp_prod_pvt++;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+
+	spin_unlock_irqrestore(&usbif->conn_ring_lock, flags);
+
+	if (notify)
+		notify_remote_via_irq(usbif->irq);
+}
+
+int xen_usbif_schedule(void *arg)
+{
+	struct xen_usbif *usbif = arg;
+
+	usbif_get(usbif);
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(usbif->wq,
+				usbif->waiting_reqs || kthread_should_stop());
+		wait_event_interruptible(usbbk->pending_free_wq,
+		    !list_empty(&usbbk->pending_free) || kthread_should_stop());
+		usbif->waiting_reqs = 0;
+		smp_mb();
+
+		if (usbbk_start_submit_urb(usbif))
+			usbif->waiting_reqs = 1;
+
+		usbbk_free_urbs();
+	}
+
+	usbbk_free_urbs();
+	usbif->xenusbd = NULL;
+	usbif_put(usbif);
+
+	return 0;
+}
+
+/*
+ * attach xen_usbdev device to usbif.
+ */
+void xen_usbif_attach_device(struct xen_usbif *usbif, struct xen_usbdev *dev)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_add(&dev->dev_list, &usbif->dev_list);
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+	dev->usbif = usbif;
+}
+
+/*
+ * detach usbdev device from usbif.
+ */
+void xen_usbif_detach_device(struct xen_usbif *usbif, struct xen_usbdev *dev)
+{
+	unsigned long flags;
+
+	if (dev->addr)
+		usbbk_set_address(usbif, dev, dev->addr, 0);
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_del(&dev->dev_list);
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+	dev->usbif = NULL;
+}
+
+void xen_usbif_detach_device_without_lock(struct xen_usbif *usbif,
+							struct xen_usbdev *dev)
+{
+	if (dev->addr)
+		usbbk_set_address(usbif, dev, dev->addr, 0);
+	list_del(&dev->dev_list);
+	dev->usbif = NULL;
+}
+
+static int __init xen_usbif_init(void)
+{
+	int i, mmap_pages;
+	int rc = 0;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	usbbk = kzalloc(sizeof(struct xen_usbbk), GFP_KERNEL);
+	if (!usbbk) {
+		pr_alert(DRV_PFX "%s: out of memory!\n", __func__);
+		return -ENOMEM;
+	}
+
+	mmap_pages = xen_usbif_reqs * USBIF_MAX_SEGMENTS_PER_REQUEST;
+	usbbk->pending_reqs =
+		kzalloc(sizeof(usbbk->pending_reqs[0]) * xen_usbif_reqs,
+								GFP_KERNEL);
+	usbbk->pending_grant_handles =
+		kmalloc(sizeof(usbbk->pending_grant_handles[0]) * mmap_pages,
+								GFP_KERNEL);
+	usbbk->pending_pages =
+		kzalloc(sizeof(usbbk->pending_pages[0]) * mmap_pages,
+								GFP_KERNEL);
+
+	if (!usbbk->pending_reqs || !usbbk->pending_grant_handles ||
+	    !usbbk->pending_pages) {
+		rc = -ENOMEM;
+		pr_alert(DRV_PFX "%s: out of memory\n", __func__);
+		goto failed_init;
+	}
+
+	for (i = 0; i < mmap_pages; i++) {
+		usbbk->pending_grant_handles[i] = USBBACK_INVALID_HANDLE;
+		usbbk->pending_pages[i] = alloc_page(GFP_KERNEL);
+		if (usbbk->pending_pages[i] == NULL) {
+			rc = -ENOMEM;
+			pr_alert(DRV_PFX "%s: out of memory\n", __func__);
+			goto failed_init;
+		}
+	}
+
+	INIT_LIST_HEAD(&usbbk->pending_free);
+	spin_lock_init(&usbbk->pending_free_lock);
+	init_waitqueue_head(&usbbk->pending_free_wq);
+
+	INIT_LIST_HEAD(&usbbk->urb_free);
+	spin_lock_init(&usbbk->urb_free_lock);
+
+	for (i = 0; i < xen_usbif_reqs; i++)
+		list_add_tail(&usbbk->pending_reqs[i].free_list,
+				&usbbk->pending_free);
+
+	rc = xen_usbdev_init();
+	if (rc)
+		goto failed_init;
+
+	rc = xen_usbif_xenbus_init();
+	if (rc)
+		goto usb_exit;
+
+	return 0;
+
+ usb_exit:
+	xen_usbdev_exit();
+ failed_init:
+	kfree(usbbk->pending_reqs);
+	kfree(usbbk->pending_grant_handles);
+	if (usbbk->pending_pages) {
+		for (i = 0; i < mmap_pages; i++) {
+			if (usbbk->pending_pages[i])
+				__free_page(usbbk->pending_pages[i]);
+		}
+		kfree(usbbk->pending_pages);
+	}
+	kfree(usbbk);
+	usbbk = NULL;
+	return rc;
+}
+
+struct xen_usbdev *xen_usbif_find_attached_device(struct xen_usbif *usbif,
+								int portnum)
+{
+	struct xen_usbdev *dev;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_for_each_entry(dev, &usbif->dev_list, dev_list) {
+		if (dev->port->portnum == portnum) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+
+	if (found)
+		return dev;
+
+	return NULL;
+}
+
+static void __exit xen_usbif_exit(void)
+{
+	int i;
+	int mmap_pages = xen_usbif_reqs * USBIF_MAX_SEGMENTS_PER_REQUEST;
+
+	xen_usbif_xenbus_exit();
+	xen_usbdev_exit();
+	kfree(usbbk->pending_reqs);
+	kfree(usbbk->pending_grant_handles);
+	for (i = 0; i < mmap_pages; i++) {
+		if (usbbk->pending_pages[i])
+			__free_page(usbbk->pending_pages[i]);
+	}
+	kfree(usbbk->pending_pages);
+	kfree(usbbk);
+	usbbk = NULL;
+}
+
+module_init(xen_usbif_init);
+module_exit(xen_usbif_exit);
+
+MODULE_AUTHOR("");
+MODULE_DESCRIPTION("Xen USB backend driver (xen_usbback)");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/usb/host/xen-usbback/usbdev.c b/drivers/usb/host/xen-usbback/usbdev.c
new file mode 100644
index 0000000..53a14b4
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/usbdev.c
@@ -0,0 +1,319 @@
+/*
+ * USB stub device driver - grabbing and managing USB devices.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include "common.h"
+
+static LIST_HEAD(port_list);
+static DEFINE_SPINLOCK(port_list_lock);
+
+struct xen_usbport *xen_usbport_find_by_busid(const char *busid)
+{
+	struct xen_usbport *port;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if (!(strncmp(port->phys_bus, busid, XEN_USB_BUS_ID_SIZE))) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	if (found)
+		return port;
+
+	return NULL;
+}
+
+struct xen_usbport *xen_usbport_find(const domid_t domid,
+				const unsigned int handle, const int portnum)
+{
+	struct xen_usbport *port;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if ((port->domid == domid) &&
+					(port->handle == handle) &&
+					(port->portnum == portnum)) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	if (found)
+		return port;
+
+	return NULL;
+}
+
+int xen_usbport_add(const char *busid, const domid_t domid,
+		const unsigned int handle, const int portnum)
+{
+	struct xen_usbport *port;
+	unsigned long flags;
+
+	port = kzalloc(sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	port->domid = domid;
+	port->handle = handle;
+	port->portnum = portnum;
+
+	strncpy(port->phys_bus, busid, XEN_USB_BUS_ID_SIZE);
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_add(&port->port_list, &port_list);
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return 0;
+}
+
+int xen_usbport_remove(const domid_t domid, const unsigned int handle,
+			const int portnum)
+{
+	struct xen_usbport *port, *tmp;
+	int err = -ENOENT;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry_safe(port, tmp, &port_list, port_list) {
+		if (port->domid == domid &&
+					port->handle == handle &&
+					port->portnum == portnum) {
+			list_del(&port->port_list);
+			kfree(port);
+
+			err = 0;
+		}
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return err;
+}
+
+static struct xen_usbdev *xen_usbdev_alloc(struct usb_device *udev,
+						struct xen_usbport *port)
+{
+	struct xen_usbdev *dev;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev) {
+		pr_alert(DRV_PFX "failed to allocate xen_usbdev\n");
+		return NULL;
+	}
+	kref_init(&dev->kref);
+	dev->udev = usb_get_dev(udev);
+	dev->port = port;
+	spin_lock_init(&dev->submitting_lock);
+	INIT_LIST_HEAD(&dev->submitting_list);
+
+	return dev;
+}
+
+static void usbdev_release(struct kref *kref)
+{
+	struct xen_usbdev *dev;
+
+	dev = container_of(kref, struct xen_usbdev, kref);
+
+	usb_put_dev(dev->udev);
+	dev->udev = NULL;
+	dev->port = NULL;
+	kfree(dev);
+}
+
+static inline void usbdev_get(struct xen_usbdev *dev)
+{
+	kref_get(&dev->kref);
+}
+
+static inline void usbdev_put(struct xen_usbdev *dev)
+{
+	kref_put(&dev->kref, usbdev_release);
+}
+
+static int usbdev_probe(struct usb_interface *intf,
+			 const struct usb_device_id *id)
+{
+	struct usb_device *udev = interface_to_usbdev(intf);
+	const char *busid = dev_name(intf->dev.parent);
+	struct xen_usbport *port = NULL;
+	struct xen_usbdev *dev = NULL;
+	struct xen_usbif *usbif = NULL;
+	int retval = -ENODEV;
+
+	/* hub currently not supported, so skip. */
+	if (udev->descriptor.bDeviceClass == USB_CLASS_HUB)
+		goto out;
+
+	port = xen_usbport_find_by_busid(busid);
+	if (!port)
+		goto out;
+
+	usbif = xen_usbif_find(port->domid, port->handle);
+	if (!usbif)
+		goto out;
+
+	switch (udev->speed) {
+	case USB_SPEED_LOW:
+	case USB_SPEED_FULL:
+		break;
+	case USB_SPEED_HIGH:
+		if (usbif->usb_ver >= USB_VER_USB20)
+			break;
+		/* fall through */
+	default:
+		goto out;
+	}
+
+	dev = xen_usbif_find_attached_device(usbif, port->portnum);
+	if (!dev) {
+		/* new connection */
+		dev = xen_usbdev_alloc(udev, port);
+		if (!dev)
+			return -ENOMEM;
+		xen_usbif_attach_device(usbif, dev);
+		xen_usbif_hotplug_notify(usbif, port->portnum, udev->speed);
+	} else {
+		/* maybe already called and connected by other intf */
+		if (strncmp(dev->port->phys_bus, busid, XEN_USB_BUS_ID_SIZE))
+			goto out; /* invalid call */
+	}
+
+	usbdev_get(dev);
+	usb_set_intfdata(intf, dev);
+	retval = 0;
+
+out:
+	return retval;
+}
+
+static void usbdev_disconnect(struct usb_interface *intf)
+{
+	struct xen_usbdev *dev = usb_get_intfdata(intf);
+
+	usb_set_intfdata(intf, NULL);
+
+	if (!dev)
+		return;
+
+	if (dev->usbif) {
+		xen_usbif_hotplug_notify(dev->usbif, dev->port->portnum, 0);
+		xen_usbif_detach_device(dev->usbif, dev);
+	}
+	xen_usbif_unlink_urbs(dev);
+	usbdev_put(dev);
+}
+
+static ssize_t usbdev_show_ports(struct device_driver *driver, char *buf)
+{
+	struct xen_usbport *port;
+	size_t count = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&port_list_lock, flags);
+	list_for_each_entry(port, &port_list, port_list) {
+		if (count >= PAGE_SIZE)
+			break;
+		count += scnprintf((char *)buf + count, PAGE_SIZE - count,
+				"%s:%d:%d:%d\n",
+				&port->phys_bus[0],
+				port->domid,
+				port->handle,
+				port->portnum);
+	}
+	spin_unlock_irqrestore(&port_list_lock, flags);
+
+	return count;
+}
+
+DRIVER_ATTR(port_ids, S_IRUSR, usbdev_show_ports, NULL);
+
+/* table of devices that matches any usbdevice */
+static const struct usb_device_id usbdev_table[] = {
+	{ .driver_info = 1 }, /* wildcard, see usb_match_id() */
+	{ } /* Terminating entry */
+};
+MODULE_DEVICE_TABLE(usb, usbdev_table);
+
+static struct usb_driver xen_usbdev_driver = {
+	.name = "usbback",
+	.probe = usbdev_probe,
+	.disconnect = usbdev_disconnect,
+	.id_table = usbdev_table,
+	.no_dynamic_id = 1,
+};
+
+int __init xen_usbdev_init(void)
+{
+	int err;
+
+	err = usb_register(&xen_usbdev_driver);
+	if (err < 0) {
+		pr_alert(DRV_PFX "usb_register failed (error %d)\n", err);
+		goto out;
+	}
+
+	err = driver_create_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_port_ids);
+	if (err)
+		usb_deregister(&xen_usbdev_driver);
+
+out:
+	return err;
+}
+
+void xen_usbdev_exit(void)
+{
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_port_ids);
+	usb_deregister(&xen_usbdev_driver);
+}
diff --git a/drivers/usb/host/xen-usbback/xenbus.c b/drivers/usb/host/xen-usbback/xenbus.c
new file mode 100644
index 0000000..5eae4ec
--- /dev/null
+++ b/drivers/usb/host/xen-usbback/xenbus.c
@@ -0,0 +1,482 @@
+/*
+ * Xenbus interface for USB backend driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/delay.h>
+#include "common.h"
+
+static LIST_HEAD(usbif_list);
+static DEFINE_SPINLOCK(usbif_list_lock);
+
+struct xen_usbif *xen_usbif_find(domid_t domid, unsigned int handle)
+{
+	struct xen_usbif *usbif;
+	int found = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_for_each_entry(usbif, &usbif_list, usbif_list) {
+		if (usbif->domid == domid && usbif->handle == handle) {
+			found = 1;
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+
+	if (found)
+		return usbif;
+
+	return NULL;
+}
+
+struct xen_usbif *xen_usbif_alloc(domid_t domid, unsigned int handle)
+{
+	struct xen_usbif *usbif;
+	unsigned long flags;
+	int i;
+
+	usbif = kzalloc(sizeof(struct xen_usbif), GFP_KERNEL);
+	if (!usbif)
+		return NULL;
+
+	usbif->domid = domid;
+	usbif->handle = handle;
+	INIT_LIST_HEAD(&usbif->usbif_list);
+	spin_lock_init(&usbif->urb_ring_lock);
+	spin_lock_init(&usbif->conn_ring_lock);
+	atomic_set(&usbif->refcnt, 0);
+	init_waitqueue_head(&usbif->wq);
+	init_waitqueue_head(&usbif->waiting_to_free);
+	spin_lock_init(&usbif->dev_lock);
+	INIT_LIST_HEAD(&usbif->dev_list);
+	spin_lock_init(&usbif->addr_lock);
+	for (i = 0; i < XEN_USB_DEV_ADDR_SIZE; i++)
+		usbif->addr_table[i] = NULL;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_add(&usbif->usbif_list, &usbif_list);
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+
+	return usbif;
+}
+
+static int xen_usbif_map(struct xen_usbif *usbif, unsigned long urb_ring_ref,
+			unsigned long conn_ring_ref, unsigned int evtchn)
+{
+	int err = -ENOMEM;
+
+	if (usbif->irq)
+		return 0;
+
+	err = xenbus_map_ring_valloc(usbif->xbdev, urb_ring_ref,
+	    &usbif->urb_sring);
+	if (err < 0)
+		return err;
+
+	err = xenbus_map_ring_valloc(usbif->xbdev, conn_ring_ref,
+	    &usbif->conn_sring);
+	if (err < 0)
+		goto fail_alloc;
+
+	err = bind_interdomain_evtchn_to_irqhandler(usbif->domid, evtchn,
+				xen_usbif_be_int, 0, "usbif-backend", usbif);
+	if (err < 0)
+		goto fail_evtchn;
+	usbif->irq = err;
+
+	BACK_RING_INIT(&usbif->urb_ring,
+	    (struct usbif_urb_sring *)usbif->urb_sring, PAGE_SIZE);
+	BACK_RING_INIT(&usbif->conn_ring,
+	    (struct usbif_conn_sring *)usbif->conn_sring, PAGE_SIZE);
+
+	return 0;
+
+fail_evtchn:
+	xenbus_unmap_ring_vfree(usbif->xbdev, usbif->conn_sring);
+fail_alloc:
+	xenbus_unmap_ring_vfree(usbif->xbdev, usbif->urb_sring);
+
+	return err;
+}
+
+static void xen_usbif_disconnect(struct xen_usbif *usbif)
+{
+	struct xen_usbdev *dev, *tmp;
+	unsigned long flags;
+
+	if (usbif->xenusbd) {
+		kthread_stop(usbif->xenusbd);
+		usbif->xenusbd = NULL;
+	}
+
+	spin_lock_irqsave(&usbif->dev_lock, flags);
+	list_for_each_entry_safe(dev, tmp, &usbif->dev_list, dev_list) {
+		xen_usbif_unlink_urbs(dev);
+		xen_usbif_detach_device_without_lock(usbif, dev);
+	}
+	spin_unlock_irqrestore(&usbif->dev_lock, flags);
+
+	wait_event(usbif->waiting_to_free, atomic_read(&usbif->refcnt) == 0);
+
+	if (usbif->irq) {
+		unbind_from_irqhandler(usbif->irq, usbif);
+		usbif->irq = 0;
+	}
+
+	if (usbif->urb_ring.sring) {
+		xenbus_unmap_ring_vfree(usbif->xbdev, usbif->urb_sring);
+		usbif->urb_ring.sring = NULL;
+		xenbus_unmap_ring_vfree(usbif->xbdev, usbif->conn_sring);
+		usbif->conn_ring.sring = NULL;
+	}
+}
+
+static void xen_usbif_free(struct xen_usbif *usbif)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&usbif_list_lock, flags);
+	list_del(&usbif->usbif_list);
+	spin_unlock_irqrestore(&usbif_list_lock, flags);
+	kfree(usbif);
+}
+
+static void usbbk_changed(struct xenbus_watch *watch, const char **vec,
+				unsigned int len)
+{
+	struct xenbus_transaction xbt;
+	int err;
+	int i;
+	char node[8];
+	char *busid;
+	struct xen_usbport *port = NULL;
+
+	struct xen_usbif *usbif = container_of(watch, struct xen_usbif,
+						backend_watch);
+	struct xenbus_device *dev = usbif->xbdev;
+
+again:
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "starting transaction");
+		return;
+	}
+
+	for (i = 1; i <= usbif->num_ports; i++) {
+		sprintf(node, "port/%d", i);
+		busid = xenbus_read(xbt, dev->nodename, node, NULL);
+		if (IS_ERR(busid)) {
+			err = PTR_ERR(busid);
+			xenbus_dev_fatal(dev, err, "reading port/%d", i);
+			goto abort;
+		}
+
+		/*
+		 * Remove the port if it is not connected.
+		 */
+		if (strlen(busid) == 0) {
+			port = xen_usbport_find(usbif->domid, usbif->handle, i);
+			if (port) {
+				if (port->is_connected)
+					xenbus_dev_fatal(dev, err,
+						"can't remove port/%d, "
+						"unbind first", i);
+				else
+					xen_usbport_remove(usbif->domid,
+							usbif->handle, i);
+			}
+			continue; /* never configured, ignore */
+		}
+
+		/*
+		 * Add the port if it is not yet configured and not used
+		 * by another usbif.
+		 */
+		port = xen_usbport_find(usbif->domid, usbif->handle, i);
+		if (port) {
+			if ((strncmp(port->phys_bus, busid,
+							XEN_USB_BUS_ID_SIZE)))
+				xenbus_dev_fatal(dev, err, "can't add port/%d, "
+						"remove first", i);
+			else
+				continue; /* already configured, ignore */
+		} else {
+			if (xen_usbport_find_by_busid(busid))
+				xenbus_dev_fatal(dev, err, "can't add port/%d, "
+						"busid already used", i);
+			else
+				xen_usbport_add(busid, usbif->domid,
+						usbif->handle, i);
+		}
+	}
+
+	err = xenbus_transaction_end(xbt, 0);
+	if (err == -EAGAIN)
+		goto again;
+	if (err)
+		xenbus_dev_fatal(dev, err, "completing transaction");
+
+	return;
+
+abort:
+	xenbus_transaction_end(xbt, 1);
+
+	return;
+}
+
+static int usbbk_remove(struct xenbus_device *dev)
+{
+	struct xen_usbif *usbif = dev_get_drvdata(&dev->dev);
+	int i;
+
+	if (usbif->backend_watch.node) {
+		unregister_xenbus_watch(&usbif->backend_watch);
+		kfree(usbif->backend_watch.node);
+		usbif->backend_watch.node = NULL;
+	}
+
+	if (usbif) {
+		/* remove all ports */
+		for (i = 1; i <= usbif->num_ports; i++)
+			xen_usbport_remove(usbif->domid, usbif->handle, i);
+		xen_usbif_disconnect(usbif);
+		xen_usbif_free(usbif);
+	}
+	dev_set_drvdata(&dev->dev, NULL);
+
+	return 0;
+}
+
+static int usbbk_probe(struct xenbus_device *dev,
+				const struct xenbus_device_id *id)
+{
+	struct xen_usbif *usbif;
+	unsigned long handle;
+	int num_ports;
+	int usb_ver;
+	int err;
+
+	if (usb_disabled())
+		return -ENODEV;
+
+	if (kstrtoul(strrchr(dev->otherend, '/') + 1, 0, &handle))
+		return -ENOENT;
+
+	usbif = xen_usbif_alloc(dev->otherend_id, handle);
+	if (!usbif) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating backend interface");
+		return -ENOMEM;
+	}
+	usbif->xbdev = dev;
+	dev_set_drvdata(&dev->dev, usbif);
+
+	err = xenbus_scanf(XBT_NIL, dev->nodename, "num-ports",
+							"%d", &num_ports);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading num-ports");
+		goto fail;
+	}
+	if (num_ports < 1 || num_ports > USB_MAXCHILDREN) {
+		xenbus_dev_fatal(dev, err, "invalid num-ports");
+		goto fail;
+	}
+	usbif->num_ports = num_ports;
+
+	err = xenbus_scanf(XBT_NIL, dev->nodename, "usb-ver", "%d", &usb_ver);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading usb-ver");
+		goto fail;
+	}
+	switch (usb_ver) {
+	case USB_VER_USB11:
+	case USB_VER_USB20:
+		usbif->usb_ver = usb_ver;
+		break;
+	default:
+		xenbus_dev_fatal(dev, err, "invalid usb-ver");
+		goto fail;
+	}
+
+	err = xenbus_switch_state(dev, XenbusStateInitWait);
+	if (err)
+		goto fail;
+
+	return 0;
+
+fail:
+	usbbk_remove(dev);
+	return err;
+}
+
+static int connect_rings(struct xen_usbif *usbif)
+{
+	struct xenbus_device *dev = usbif->xbdev;
+	unsigned long urb_ring_ref;
+	unsigned long conn_ring_ref;
+	unsigned int evtchn;
+	int err;
+
+	err = xenbus_gather(XBT_NIL, dev->otherend,
+			    "urb-ring-ref", "%lu", &urb_ring_ref,
+			    "conn-ring-ref", "%lu", &conn_ring_ref,
+			    "event-channel", "%u", &evtchn, NULL);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "reading %s/ring-ref and event-channel",
+				 dev->otherend);
+		return err;
+	}
+
+	pr_info(DRV_PFX "urb-ring-ref %ld, conn-ring-ref %ld, "
+	    "event-channel %d\n", urb_ring_ref, conn_ring_ref, evtchn);
+
+	err = xen_usbif_map(usbif, urb_ring_ref, conn_ring_ref, evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "mapping urb-ring-ref %lu "
+					"conn-ring-ref %lu port %u",
+					urb_ring_ref, conn_ring_ref, evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int start_xenusbd(struct xen_usbif *usbif)
+{
+	int err = 0;
+	char name[TASK_COMM_LEN];
+
+	snprintf(name, TASK_COMM_LEN, "usbback.%d.%d", usbif->domid,
+			usbif->handle);
+	usbif->xenusbd = kthread_run(xen_usbif_schedule, usbif, name);
+	if (IS_ERR(usbif->xenusbd)) {
+		err = PTR_ERR(usbif->xenusbd);
+		usbif->xenusbd = NULL;
+		xenbus_dev_error(usbif->xbdev, err, "start xenusbd");
+	}
+
+	return err;
+}
+
+static void frontend_changed(struct xenbus_device *dev,
+			     enum xenbus_state frontend_state)
+{
+	struct xen_usbif *usbif = dev_get_drvdata(&dev->dev);
+	int err;
+
+	switch (frontend_state) {
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+		break;
+
+	case XenbusStateInitialising:
+		if (dev->state == XenbusStateClosed) {
+			pr_info(DRV_PFX "%s: %s: prepare for reconnect\n",
+			       __func__, dev->nodename);
+			xenbus_switch_state(dev, XenbusStateInitWait);
+		}
+		break;
+
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+		if (dev->state == XenbusStateConnected)
+			break;
+
+		xen_usbif_disconnect(usbif);
+
+		err = connect_rings(usbif);
+		if (err)
+			break;
+		err = start_xenusbd(usbif);
+		if (err)
+			break;
+		err = xenbus_watch_pathfmt(dev, &usbif->backend_watch,
		    usbbk_changed, "%s/%s", dev->nodename, "port");
+		if (err)
+			break;
+		xenbus_switch_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		xenbus_switch_state(dev, XenbusStateClosing);
+		break;
+
+	case XenbusStateClosed:
+		xen_usbif_disconnect(usbif);
+		xenbus_switch_state(dev, XenbusStateClosed);
+		if (xenbus_dev_is_online(dev))
+			break;
+		/* fall through if not online */
+	case XenbusStateUnknown:
+		device_unregister(&dev->dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+				 frontend_state);
+		break;
+	}
+}
+
+
+/* ** Driver Registration ** */
+
+static const struct xenbus_device_id usbback_ids[] = {
+	{ "vusb" },
+	{ "" },
+};
+
+static DEFINE_XENBUS_DRIVER(usbback, ,
+	.probe = usbbk_probe,
+	.remove = usbbk_remove,
+	.otherend_changed = frontend_changed,
+);
+
+int __init xen_usbif_xenbus_init(void)
+{
+	return xenbus_register_backend(&usbback_driver);
+}
+
+void __exit xen_usbif_xenbus_exit(void)
+{
+	xenbus_unregister_driver(&usbback_driver);
+}
diff --git a/drivers/usb/host/xen-usbfront.c b/drivers/usb/host/xen-usbfront.c
new file mode 100644
index 0000000..e632de3
--- /dev/null
+++ b/drivers/usb/host/xen-usbfront.c
@@ -0,0 +1,1739 @@
+/*
+ * xen-usbfront.c
+ *
+ * This file is part of Xen USB Virtual Host Controller driver.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * or, by your choice,
+ *
+ * When distributed separately from the Linux kernel or incorporated into
+ * other software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/usb.h>
+#include <linux/usb/hcd.h>
+#include <linux/list.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/io.h>
+#include <xen/xenbus.h>
+#include <xen/events.h>
+#include <xen/page.h>
+#include <xen/grant_table.h>
+#include <xen/interface/xen.h>
+#include <xen/interface/io/usbif.h>
+
+static inline struct usbfront_info *hcd_to_info(struct usb_hcd *hcd)
+{
+	return (struct usbfront_info *) (hcd->hcd_priv);
+}
+
+static inline struct usb_hcd *info_to_hcd(struct usbfront_info *info)
+{
+	return container_of((void *) info, struct usb_hcd, hcd_priv);
+}
+
+/* Private per-URB data */
+struct urb_priv {
+	struct list_head list;
+	struct urb *urb;
+	int req_id;		/* RING_REQUEST id for submitting */
+	int unlink_req_id;	/* RING_REQUEST id for unlinking */
+	int status;
+	unsigned unlinked:1;	/* dequeued marker */
+};
+
+/* virtual roothub port status */
+struct rhport_status {
+	u32 status;
+	unsigned resuming:1;		/* in resuming */
+	unsigned c_connection:1;	/* connection changed */
+	unsigned long timeout;
+};
+
+/* status of attached device */
+struct vdevice_status {
+	int devnum;
+	enum usb_device_state status;
+	enum usb_device_speed speed;
+};
+
+/* RING request shadow */
+struct usb_shadow {
+	struct usbif_urb_request req;
+	struct urb *urb;
+};
+
+/* statistics for tuning, monitoring, ... */
+struct xenhcd_stats {
+	unsigned long ring_full;	/* RING_FULL conditions */
+	unsigned long complete;		/* normal giveback urbs */
+	unsigned long unlink;		/* unlinked urbs */
+};
+
+struct usbfront_info {
+	/* Virtual Host Controller has 4 urb queues */
+	struct list_head pending_submit_list;
+	struct list_head pending_unlink_list;
+	struct list_head in_progress_list;
+	struct list_head giveback_waiting_list;
+
+	spinlock_t lock;
+
+	/* timer that kicks pending and giveback-waiting urbs */
+	struct timer_list watchdog;
+	unsigned long actions;
+
+	/* virtual root hub */
+	int rh_numports;
+	struct rhport_status ports[USB_MAXCHILDREN];
+	struct vdevice_status devices[USB_MAXCHILDREN];
+
+	/* Xen-related stuff */
+	struct xenbus_device *xbdev;
+	int urb_ring_ref;
+	int conn_ring_ref;
+	struct usbif_urb_front_ring urb_ring;
+	struct usbif_conn_front_ring conn_ring;
+
+	unsigned int evtchn, irq; /* event channel */
+	struct usb_shadow shadow[USB_URB_RING_SIZE];
+	unsigned long shadow_free;
+
+	/* RING_RESPONSE thread */
+	struct task_struct *kthread;
+	wait_queue_head_t wq;
+	unsigned int waiting_resp;
+
+	/* xmit statistics */
+#ifdef XENHCD_STATS
+	struct xenhcd_stats stats;
+#define COUNT(x) do { (x)++; } while (0)
+#else
+#define COUNT(x) do {} while (0)
+#endif
+};
+
+#define XENHCD_RING_JIFFIES (HZ/200)
+#define XENHCD_SCAN_JIFFIES 1
+
+enum xenhcd_timer_action {
+	TIMER_RING_WATCHDOG,
+	TIMER_SCAN_PENDING_URBS,
+};
+
+static inline void
+timer_action_done(struct usbfront_info *info, enum xenhcd_timer_action action)
+{
+	clear_bit(action, &info->actions);
+}
+
+static inline void
+timer_action(struct usbfront_info *info, enum xenhcd_timer_action action)
+{
+	if (timer_pending(&info->watchdog) &&
+	    test_bit(TIMER_SCAN_PENDING_URBS, &info->actions))
+		return;
+
+	if (!test_and_set_bit(action, &info->actions)) {
+		unsigned long t;
+
+		switch (action) {
+		case TIMER_RING_WATCHDOG:
+			t = XENHCD_RING_JIFFIES;
+			break;
+		default:
+			t = XENHCD_SCAN_JIFFIES;
+			break;
+		}
+		mod_timer(&info->watchdog, t + jiffies);
+	}
+}
+
+struct kmem_cache *xenhcd_urbp_cachep;
+struct hc_driver xen_usb20_hc_driver;
+struct hc_driver xen_usb11_hc_driver;
+
+static ssize_t show_statistics(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct usb_hcd *hcd;
+	struct usbfront_info *info;
+	unsigned long flags;
+	unsigned temp, size;
+	char *next;
+
+	hcd = dev_get_drvdata(dev);
+	info = hcd_to_info(hcd);
+	next = buf;
+	size = PAGE_SIZE;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	temp = scnprintf(next, size,
+			"bus %s, device %s\n"
+			"%s\n"
+			"xenhcd, hcd state %d\n",
+			hcd->self.controller->bus->name,
+			dev_name(hcd->self.controller),
+			hcd->product_desc,
+			hcd->state);
+	size -= temp;
+	next += temp;
+
+#ifdef XENHCD_STATS
+	temp = scnprintf(next, size,
+			"complete %lu unlink %lu ring_full %lu\n",
+			info->stats.complete, info->stats.unlink,
+			info->stats.ring_full);
+	size -= temp;
+	next += temp;
+#endif
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	return PAGE_SIZE - size;
+}
+
+static DEVICE_ATTR(statistics, S_IRUGO, show_statistics, NULL);
+
+static inline void create_debug_file(struct usbfront_info *info)
+{
+	struct device *dev = info_to_hcd(info)->self.controller;
+	if (device_create_file(dev, &dev_attr_statistics))
+		printk(KERN_WARNING "statistics file not created for %s\n",
+					info_to_hcd(info)->self.bus_name);
+}
+
+static inline void remove_debug_file(struct usbfront_info *info)
+{
+	struct device *dev = info_to_hcd(info)->self.controller;
+	device_remove_file(dev, &dev_attr_statistics);
+}
+
+/*
+ * set virtual port connection status
+ */
+void set_connect_state(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_POWER) {
+		switch (info->devices[port].speed) {
+		case USB_SPEED_UNKNOWN:
+			info->ports[port].status &=
+					~(USB_PORT_STAT_CONNECTION |
+						USB_PORT_STAT_ENABLE |
+						USB_PORT_STAT_LOW_SPEED |
+						USB_PORT_STAT_HIGH_SPEED |
+						USB_PORT_STAT_SUSPEND);
+			break;
+		case USB_SPEED_LOW:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			info->ports[port].status |= USB_PORT_STAT_LOW_SPEED;
+			break;
+		case USB_SPEED_FULL:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			break;
+		case USB_SPEED_HIGH:
+			info->ports[port].status |= USB_PORT_STAT_CONNECTION;
+			info->ports[port].status |= USB_PORT_STAT_HIGH_SPEED;
+			break;
+		default: /* error */
+			return;
+		}
+		info->ports[port].status |= (USB_PORT_STAT_C_CONNECTION << 16);
+	}
+}
+
+/*
+ * set virtual device connection status
+ */
+void rhport_connect(struct usbfront_info *info, int portnum,
+			enum usb_device_speed speed)
+{
+	int port;
+
+	if (portnum < 1 || portnum > info->rh_numports)
+		return; /* invalid port number */
+
+	port = portnum - 1;
+	if (info->devices[port].speed != speed) {
+		switch (speed) {
+		case USB_SPEED_UNKNOWN: /* disconnect */
+			info->devices[port].status = USB_STATE_NOTATTACHED;
+			break;
+		case USB_SPEED_LOW:
+		case USB_SPEED_FULL:
+		case USB_SPEED_HIGH:
+			info->devices[port].status = USB_STATE_ATTACHED;
+			break;
+		default: /* error */
+			return;
+		}
+		info->devices[port].speed = speed;
+		info->ports[port].c_connection = 1;
+
+		set_connect_state(info, portnum);
+	}
+}
+
+/*
+ * SetPortFeature(PORT_SUSPENDED)
+ */
+void rhport_suspend(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status |= USB_PORT_STAT_SUSPEND;
+	info->devices[port].status = USB_STATE_SUSPENDED;
+}
+
+/*
+ * ClearPortFeature(PORT_SUSPENDED)
+ */
+void rhport_resume(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_SUSPEND) {
+		info->ports[port].resuming = 1;
+		info->ports[port].timeout = jiffies + msecs_to_jiffies(20);
+	}
+}
+
+/*
+ * SetPortFeature(PORT_POWER)
+ */
+void rhport_power_on(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if ((info->ports[port].status & USB_PORT_STAT_POWER) == 0) {
+		info->ports[port].status |= USB_PORT_STAT_POWER;
+		if (info->devices[port].status != USB_STATE_NOTATTACHED)
+			info->devices[port].status = USB_STATE_POWERED;
+		if (info->ports[port].c_connection)
+			set_connect_state(info, portnum);
+	}
+}
+
+/*
+ * ClearPortFeature(PORT_POWER)
+ * SetConfiguration(non-zero)
+ * Power_Source_Off
+ * Over-current
+ */
+void rhport_power_off(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	if (info->ports[port].status & USB_PORT_STAT_POWER) {
+		info->ports[port].status = 0;
+		if (info->devices[port].status != USB_STATE_NOTATTACHED)
+			info->devices[port].status = USB_STATE_ATTACHED;
+	}
+}
+
+/*
+ * ClearPortFeature(PORT_ENABLE)
+ */
+void rhport_disable(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status &= ~USB_PORT_STAT_ENABLE;
+	info->ports[port].status &= ~USB_PORT_STAT_SUSPEND;
+	info->ports[port].resuming = 0;
+	if (info->devices[port].status != USB_STATE_NOTATTACHED)
+		info->devices[port].status = USB_STATE_POWERED;
+}
+
+/*
+ * SetPortFeature(PORT_RESET)
+ */
+void rhport_reset(struct usbfront_info *info, int portnum)
+{
+	int port;
+
+	port = portnum - 1;
+	info->ports[port].status &= ~(USB_PORT_STAT_ENABLE
+					| USB_PORT_STAT_LOW_SPEED
+					| USB_PORT_STAT_HIGH_SPEED);
+	info->ports[port].status |= USB_PORT_STAT_RESET;
+
+	if (info->devices[port].status != USB_STATE_NOTATTACHED)
+		info->devices[port].status = USB_STATE_ATTACHED;
+
+	/* 10msec reset signaling */
+	info->ports[port].timeout = jiffies + msecs_to_jiffies(10);
+}
+
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+static int xenhcd_bus_suspend(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ret = 0;
+	int i, ports;
+
+	ports = info->rh_numports;
+
+	spin_lock_irq(&info->lock);
+	if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags))
+		ret = -ESHUTDOWN;
+	else {
+		/* suspend any active ports */
+		for (i = 1; i <= ports; i++)
+			rhport_suspend(info, i);
+	}
+	spin_unlock_irq(&info->lock);
+
+	del_timer_sync(&info->watchdog);
+
+	return ret;
+}
+
+static int xenhcd_bus_resume(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ret = 0;
+	int i, ports;
+
+	ports = info->rh_numports;
+
+	spin_lock_irq(&info->lock);
+	if (!test_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags))
+		ret = -ESHUTDOWN;
+	else {
+		/* resume any suspended ports */
+		for (i = 1; i <= ports; i++)
+			rhport_resume(info, i);
+	}
+	spin_unlock_irq(&info->lock);
+
+	return ret;
+}
+#endif
+#endif
+
+static void xenhcd_hub_descriptor(struct usbfront_info *info,
+					struct usb_hub_descriptor *desc)
+{
+	u16 temp;
+	int ports = info->rh_numports;
+
+	desc->bDescriptorType = 0x29;
+	desc->bPwrOn2PwrGood = 10; /* EHCI says 20ms max */
+	desc->bHubContrCurrent = 0;
+	desc->bNbrPorts = ports;
+
+	/* size of DeviceRemovable and PortPwrCtrlMask fields */
+	temp = 1 + (ports / 8);
+	desc->bDescLength = 7 + 2 * temp;
+
+	/* bitmaps for DeviceRemovable and PortPwrCtrlMask */
+	memset(&desc->u.hs.DeviceRemovable[0], 0, temp);
+	memset(&desc->u.hs.DeviceRemovable[temp], 0xff, temp);
+
+	/* per-port over current reporting and no power switching */
+	temp = 0x000a;
+	desc->wHubCharacteristics = cpu_to_le16(temp);
+}
+
+/* port status change mask for hub_status_data */
+#define PORT_C_MASK \
+	((USB_PORT_STAT_C_CONNECTION \
+	| USB_PORT_STAT_C_ENABLE \
+	| USB_PORT_STAT_C_SUSPEND \
+	| USB_PORT_STAT_C_OVERCURRENT \
+	| USB_PORT_STAT_C_RESET) << 16)
+
+/*
+ * See USB 2.0 Spec, 11.12.4 Hub and Port Status Change Bitmap.
+ * If any port status changed, write the bitmap to buf and return
+ * its length in bytes.
+ * If nothing changed, return 0.
+ */
+static int xenhcd_hub_status_data(struct usb_hcd *hcd, char *buf)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	int ports;
+	int i;
+	int length;
+
+	unsigned long flags;
+	int ret = 0;
+
+	int changed = 0;
+
+	if (!HC_IS_RUNNING(hcd->state))
+		return 0;
+
+	/* initialize the status to no-changes */
+	ports = info->rh_numports;
+	length = 1 + (ports / 8);
+	for (i = 0; i < length; i++) {
+		buf[i] = 0;
+		ret++;
+	}
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	for (i = 0; i < ports; i++) {
+		/* check status for each port */
+		if (info->ports[i].status & PORT_C_MASK) {
+			if (i < 7)
+				buf[0] |= 1 << (i + 1);
+			else if (i < 15)
+				buf[1] |= 1 << (i - 7);
+			else if (i < 23)
+				buf[2] |= 1 << (i - 15);
+			else
+				buf[3] |= 1 << (i - 23);
+			changed = 1;
+		}
+	}
+
+	if (!changed)
+		ret = 0;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	return ret;
+}
+
+static int xenhcd_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+				u16 wIndex, char *buf, u16 wLength)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	int ports = info->rh_numports;
+	unsigned long flags;
+	int ret = 0;
+	int i;
+	int changed = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+	switch (typeReq) {
+	case ClearHubFeature:
+		/* ignore this request */
+		break;
+	case ClearPortFeature:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		switch (wValue) {
+		case USB_PORT_FEAT_SUSPEND:
+			rhport_resume(info, wIndex);
+			break;
+		case USB_PORT_FEAT_POWER:
+			rhport_power_off(info, wIndex);
+			break;
+		case USB_PORT_FEAT_ENABLE:
+			rhport_disable(info, wIndex);
+			break;
+		case USB_PORT_FEAT_C_CONNECTION:
+			info->ports[wIndex-1].c_connection = 0;
+			/* fall through */
+		default:
+			info->ports[wIndex-1].status &= ~(1 << wValue);
+			break;
+		}
+		break;
+	case GetHubDescriptor:
+		xenhcd_hub_descriptor(info, (struct usb_hub_descriptor *) buf);
+		break;
+	case GetHubStatus:
+		/* always local power supply good and no over-current exists. */
+		/* local power supply is always good and no over-current exists. */
+		break;
+	case GetPortStatus:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		wIndex--;
+
+		/* resume completion */
+		if (info->ports[wIndex].resuming &&
+		    time_after_eq(jiffies, info->ports[wIndex].timeout)) {
+			info->ports[wIndex].status |=
+						(USB_PORT_STAT_C_SUSPEND << 16);
+			info->ports[wIndex].status &= ~USB_PORT_STAT_SUSPEND;
+		}
+
+		/* reset completion */
+		if ((info->ports[wIndex].status & USB_PORT_STAT_RESET) != 0 &&
+		    time_after_eq(jiffies, info->ports[wIndex].timeout)) {
+			info->ports[wIndex].status |=
+						(USB_PORT_STAT_C_RESET << 16);
+			info->ports[wIndex].status &= ~USB_PORT_STAT_RESET;
+
+			if (info->devices[wIndex].status !=
+							USB_STATE_NOTATTACHED) {
+				info->ports[wIndex].status |=
+							USB_PORT_STAT_ENABLE;
+				info->devices[wIndex].status =
+							USB_STATE_DEFAULT;
+			}
+
+			switch (info->devices[wIndex].speed) {
+			case USB_SPEED_LOW:
+				info->ports[wIndex].status |=
+						USB_PORT_STAT_LOW_SPEED;
+				break;
+			case USB_SPEED_HIGH:
+				info->ports[wIndex].status |=
+						USB_PORT_STAT_HIGH_SPEED;
+				break;
+			default:
+				break;
+			}
+		}
+
+		((__le16 *) buf)[0] = cpu_to_le16(info->ports[wIndex].status);
+		((__le16 *) buf)[1] = cpu_to_le16(info->ports[wIndex].status
+									>> 16);
+		break;
+	case SetHubFeature:
+		/* not supported */
+		goto error;
+	case SetPortFeature:
+		if (!wIndex || wIndex > ports)
+			goto error;
+
+		switch (wValue) {
+		case USB_PORT_FEAT_POWER:
+			rhport_power_on(info, wIndex);
+			break;
+		case USB_PORT_FEAT_RESET:
+			rhport_reset(info, wIndex);
+			break;
+		case USB_PORT_FEAT_SUSPEND:
+			rhport_suspend(info, wIndex);
+			break;
+		default:
+			if ((info->ports[wIndex-1].status &
+						USB_PORT_STAT_POWER) != 0)
+				info->ports[wIndex-1].status |= (1 << wValue);
+		}
+		break;
+
+	default:
+error:
+		ret = -EPIPE;
+	}
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	/* check status for each port */
+	for (i = 0; i < ports; i++) {
+		if (info->ports[i].status & PORT_C_MASK)
+			changed = 1;
+	}
+	if (changed)
+		usb_hcd_poll_rh_status(hcd);
+
+	return ret;
+}
+
+static struct urb_priv *alloc_urb_priv(struct urb *urb)
+{
+	struct urb_priv *urbp;
+
+	urbp = kmem_cache_zalloc(xenhcd_urbp_cachep, GFP_ATOMIC);
+	if (!urbp)
+		return NULL;
+
+	urbp->urb = urb;
+	urb->hcpriv = urbp;
+	urbp->req_id = ~0;
+	urbp->unlink_req_id = ~0;
+	INIT_LIST_HEAD(&urbp->list);
+
+	return urbp;
+}
+
+static void free_urb_priv(struct urb_priv *urbp)
+{
+	urbp->urb->hcpriv = NULL;
+	kmem_cache_free(xenhcd_urbp_cachep, urbp);
+}
+
+static inline int get_id_from_freelist(struct usbfront_info *info)
+{
+	unsigned long free;
+	free = info->shadow_free;
+	BUG_ON(free >= USB_URB_RING_SIZE);
+	info->shadow_free = info->shadow[free].req.id;
+	info->shadow[free].req.id = (unsigned int)0x0fff; /* debug */
+	return free;
+}
+
+static inline void add_id_to_freelist(struct usbfront_info *info,
+							unsigned long id)
+{
+	info->shadow[id].req.id = info->shadow_free;
+	info->shadow[id].urb = NULL;
+	info->shadow_free = id;
+}
+
+static inline int count_pages(void *addr, int length)
+{
+	unsigned long start = (unsigned long) addr >> PAGE_SHIFT;
+	unsigned long end = (unsigned long)
+				(addr + length + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	return end - start;
+}
+
+static inline void xenhcd_gnttab_map(struct usbfront_info *info, void *addr,
+					int length, grant_ref_t *gref_head,
+					struct usbif_request_segment *seg,
+					int nr_pages, int flags)
+{
+	grant_ref_t ref;
+	unsigned long mfn;
+	unsigned int offset;
+	unsigned int len;
+	unsigned int bytes;
+	int i;
+
+	len = length;
+
+	for (i = 0; i < nr_pages; i++) {
+		BUG_ON(!len);
+
+		mfn = virt_to_mfn(addr);
+		offset = offset_in_page(addr);
+
+		bytes = PAGE_SIZE - offset;
+		if (bytes > len)
+			bytes = len;
+
+		ref = gnttab_claim_grant_reference(gref_head);
+		BUG_ON(ref == -ENOSPC);
+		gnttab_grant_foreign_access_ref(ref, info->xbdev->otherend_id,
+						mfn, flags);
+		seg[i].gref = ref;
+		seg[i].offset = (uint16_t)offset;
+		seg[i].length = (uint16_t)bytes;
+
+		addr += bytes;
+		len -= bytes;
+	}
+}
+
+static int map_urb_for_request(struct usbfront_info *info, struct urb *urb,
+				struct usbif_urb_request *req)
+{
+	grant_ref_t gref_head;
+	int nr_buff_pages = 0;
+	int nr_isodesc_pages = 0;
+	int ret = 0;
+
+	if (urb->transfer_buffer_length) {
+		nr_buff_pages = count_pages(urb->transfer_buffer,
+						urb->transfer_buffer_length);
+
+		if (usb_pipeisoc(urb->pipe))
+			nr_isodesc_pages = count_pages(&urb->iso_frame_desc[0],
+				sizeof(struct usb_iso_packet_descriptor) *
+							urb->number_of_packets);
+
+		if (nr_buff_pages + nr_isodesc_pages >
+						USBIF_MAX_SEGMENTS_PER_REQUEST)
+			return -E2BIG;
+
+		ret = gnttab_alloc_grant_references(
+				USBIF_MAX_SEGMENTS_PER_REQUEST, &gref_head);
+		if (ret) {
+			printk(KERN_ERR "usbfront: "
+				"gnttab_alloc_grant_references() error\n");
+			return -ENOMEM;
+		}
+
+		xenhcd_gnttab_map(info, urb->transfer_buffer,
+				urb->transfer_buffer_length, &gref_head,
+				&req->seg[0], nr_buff_pages,
+				usb_pipein(urb->pipe) ? 0 : GTF_readonly);
+
+		if (!usb_pipeisoc(urb->pipe))
+			gnttab_free_grant_references(gref_head);
+	}
+
+	req->pipe = usbif_setportnum_pipe(urb->pipe, urb->dev->portnum);
+	req->transfer_flags = urb->transfer_flags;
+	req->buffer_length = urb->transfer_buffer_length;
+	req->nr_buffer_segs = nr_buff_pages;
+
+	switch (usb_pipetype(urb->pipe)) {
+	case PIPE_ISOCHRONOUS:
+		req->u.isoc.interval = urb->interval;
+		req->u.isoc.start_frame = urb->start_frame;
+		req->u.isoc.number_of_packets = urb->number_of_packets;
+		req->u.isoc.nr_frame_desc_segs = nr_isodesc_pages;
+		/* urb->number_of_packets must be > 0 */
+		BUG_ON(urb->number_of_packets <= 0);
+		xenhcd_gnttab_map(info, &urb->iso_frame_desc[0],
+				sizeof(struct usb_iso_packet_descriptor) *
+					urb->number_of_packets, &gref_head,
+				&req->seg[nr_buff_pages], nr_isodesc_pages, 0);
+		gnttab_free_grant_references(gref_head);
+		break;
+	case PIPE_INTERRUPT:
+		req->u.intr.interval = urb->interval;
+		break;
+	case PIPE_CONTROL:
+		if (urb->setup_packet)
+			memcpy(req->u.ctrl, urb->setup_packet, 8);
+		break;
+	case PIPE_BULK:
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static void xenhcd_gnttab_done(struct usb_shadow *shadow)
+{
+	int nr_segs = 0;
+	int i;
+
+	nr_segs = shadow->req.nr_buffer_segs;
+
+	if (usb_pipeisoc(shadow->req.pipe))
+		nr_segs += shadow->req.u.isoc.nr_frame_desc_segs;
+
+	for (i = 0; i < nr_segs; i++)
+		gnttab_end_foreign_access(shadow->req.seg[i].gref, 0, 0UL);
+
+	shadow->req.nr_buffer_segs = 0;
+	shadow->req.u.isoc.nr_frame_desc_segs = 0;
+}
+
+static void xenhcd_giveback_urb(struct usbfront_info *info, struct urb *urb,
+								int status)
+__releases(info->lock)
+__acquires(info->lock)
+{
+	struct urb_priv *urbp = (struct urb_priv *) urb->hcpriv;
+
+	list_del_init(&urbp->list);
+	free_urb_priv(urbp);
+	switch (urb->status) {
+	case -ECONNRESET:
+	case -ENOENT:
+		COUNT(info->stats.unlink);
+		break;
+	case -EINPROGRESS:
+		urb->status = status;
+		/* fall through */
+	default:
+		COUNT(info->stats.complete);
+	}
+	spin_unlock(&info->lock);
+	usb_hcd_giveback_urb(info_to_hcd(info), urb,
+				urbp->status <= 0 ? urbp->status : urb->status);
+	spin_lock(&info->lock);
+}
+
+static inline int xenhcd_do_request(struct usbfront_info *info,
+					struct urb_priv *urbp)
+{
+	struct usbif_urb_request *req;
+	struct urb *urb = urbp->urb;
+	uint16_t id;
+	int notify;
+	int ret = 0;
+
+	req = RING_GET_REQUEST(&info->urb_ring, info->urb_ring.req_prod_pvt);
+	id = get_id_from_freelist(info);
+	req->id = id;
+
+	if (unlikely(urbp->unlinked)) {
+		req->u.unlink.unlink_id = urbp->req_id;
+		req->pipe = usbif_setunlink_pipe(usbif_setportnum_pipe(
+						urb->pipe, urb->dev->portnum));
+		urbp->unlink_req_id = id;
+	} else {
+		ret = map_urb_for_request(info, urb, req);
+		if (ret < 0) {
+			add_id_to_freelist(info, id);
+			return ret;
+		}
+		urbp->req_id = id;
+	}
+
+	info->urb_ring.req_prod_pvt++;
+	info->shadow[id].urb = urb;
+	info->shadow[id].req = *req;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->urb_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	return ret;
+}
+
+static void xenhcd_kick_pending_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp;
+	int ret;
+
+	while (!list_empty(&info->pending_submit_list)) {
+		if (RING_FULL(&info->urb_ring)) {
+			COUNT(info->stats.ring_full);
+			timer_action(info, TIMER_RING_WATCHDOG);
+			goto done;
+		}
+
+		urbp = list_entry(info->pending_submit_list.next,
+							struct urb_priv, list);
+		ret = xenhcd_do_request(info, urbp);
+		if (ret == 0)
+			list_move_tail(&urbp->list, &info->in_progress_list);
+		else
+			xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN);
+	}
+	timer_action_done(info, TIMER_SCAN_PENDING_URBS);
+
+done:
+	return;
+}
+
+/*
+ * caller must lock info->lock
+ */
+static void xenhcd_cancel_all_enqueued_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp, *tmp;
+
+	list_for_each_entry_safe(urbp, tmp, &info->in_progress_list, list) {
+		if (!urbp->unlinked) {
+			xenhcd_gnttab_done(&info->shadow[urbp->req_id]);
+			barrier();
+			if (urbp->urb->status == -EINPROGRESS)/* not dequeued */
+				xenhcd_giveback_urb(info, urbp->urb,
+								-ESHUTDOWN);
+			else /* dequeued */
+				xenhcd_giveback_urb(info, urbp->urb,
+							urbp->urb->status);
+		}
+		info->shadow[urbp->req_id].urb = NULL;
+	}
+
+	list_for_each_entry_safe(urbp, tmp, &info->pending_submit_list, list) {
+		xenhcd_giveback_urb(info, urbp->urb, -ESHUTDOWN);
+	}
+
+	return;
+}
+
+/*
+ * caller must lock info->lock
+ */
+static void xenhcd_giveback_unlinked_urbs(struct usbfront_info *info)
+{
+	struct urb_priv *urbp, *tmp;
+
+	list_for_each_entry_safe(urbp, tmp,
+					&info->giveback_waiting_list, list) {
+		xenhcd_giveback_urb(info, urbp->urb, urbp->urb->status);
+	}
+}
+
+static int xenhcd_submit_urb(struct usbfront_info *info, struct urb_priv *urbp)
+{
+	int ret = 0;
+
+	if (RING_FULL(&info->urb_ring)) {
+		list_add_tail(&urbp->list, &info->pending_submit_list);
+		COUNT(info->stats.ring_full);
+		timer_action(info, TIMER_RING_WATCHDOG);
+		goto done;
+	}
+
+	if (!list_empty(&info->pending_submit_list)) {
+		list_add_tail(&urbp->list, &info->pending_submit_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	ret = xenhcd_do_request(info, urbp);
+	if (ret == 0)
+		list_add_tail(&urbp->list, &info->in_progress_list);
+
+done:
+	return ret;
+}
+
+static int xenhcd_unlink_urb(struct usbfront_info *info, struct urb_priv *urbp)
+{
+	int ret = 0;
+
+	/* already unlinked? */
+	if (urbp->unlinked)
+		return -EBUSY;
+
+	urbp->unlinked = 1;
+
+	/* the urb is still in pending_submit queue */
+	if (urbp->req_id == ~0) {
+		list_move_tail(&urbp->list, &info->giveback_waiting_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	/* send unlink request to backend */
+	if (RING_FULL(&info->urb_ring)) {
+		list_move_tail(&urbp->list, &info->pending_unlink_list);
+		COUNT(info->stats.ring_full);
+		timer_action(info, TIMER_RING_WATCHDOG);
+		goto done;
+	}
+
+	if (!list_empty(&info->pending_unlink_list)) {
+		list_move_tail(&urbp->list, &info->pending_unlink_list);
+		timer_action(info, TIMER_SCAN_PENDING_URBS);
+		goto done;
+	}
+
+	ret = xenhcd_do_request(info, urbp);
+	if (ret == 0)
+		list_move_tail(&urbp->list, &info->in_progress_list);
+
+done:
+	return ret;
+}
+
+static int xenhcd_urb_request_done(struct usbfront_info *info)
+{
+	struct usbif_urb_response *res;
+	struct urb *urb;
+
+	RING_IDX i, rp;
+	uint16_t id;
+	int more_to_do = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	rp = info->urb_ring.sring->rsp_prod;
+	rmb(); /* ensure we see queued responses up to "rp" */
+
+	for (i = info->urb_ring.rsp_cons; i != rp; i++) {
+		res = RING_GET_RESPONSE(&info->urb_ring, i);
+		id = res->id;
+
+		if (likely(usbif_pipesubmit(info->shadow[id].req.pipe))) {
+			xenhcd_gnttab_done(&info->shadow[id]);
+			urb = info->shadow[id].urb;
+			barrier();
+			if (likely(urb)) {
+				urb->actual_length = res->actual_length;
+				urb->error_count = res->error_count;
+				urb->start_frame = res->start_frame;
+				barrier();
+				xenhcd_giveback_urb(info, urb, res->status);
+			}
+		}
+
+		add_id_to_freelist(info, id);
+	}
+	info->urb_ring.rsp_cons = i;
+
+	if (i != info->urb_ring.req_prod_pvt)
+		RING_FINAL_CHECK_FOR_RESPONSES(&info->urb_ring, more_to_do);
+	else
+		info->urb_ring.sring->rsp_event = i + 1;
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+static int xenhcd_conn_notify(struct usbfront_info *info)
+{
+	struct usbif_conn_response *res;
+	struct usbif_conn_request *req;
+	RING_IDX rc, rp;
+	uint16_t id;
+	uint8_t portnum, speed;
+	int more_to_do = 0;
+	int notify;
+	int port_changed = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	rc = info->conn_ring.rsp_cons;
+	rp = info->conn_ring.sring->rsp_prod;
+	rmb(); /* ensure we see queued responses up to "rp" */
+
+	while (rc != rp) {
+		res = RING_GET_RESPONSE(&info->conn_ring, rc);
+		id = res->id;
+		portnum = res->portnum;
+		speed = res->speed;
+		info->conn_ring.rsp_cons = ++rc;
+
+		rhport_connect(info, portnum, speed);
+		if (info->ports[portnum-1].c_connection)
+			port_changed = 1;
+
+		barrier();
+
+		req = RING_GET_REQUEST(&info->conn_ring,
+					info->conn_ring.req_prod_pvt);
+		req->id = id;
+		info->conn_ring.req_prod_pvt++;
+	}
+
+	if (rc != info->conn_ring.req_prod_pvt)
+		RING_FINAL_CHECK_FOR_RESPONSES(&info->conn_ring, more_to_do);
+	else
+		info->conn_ring.sring->rsp_event = rc + 1;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	spin_unlock_irqrestore(&info->lock, flags);
+
+	if (port_changed)
+		usb_hcd_poll_rh_status(info_to_hcd(info));
+
+	cond_resched();
+
+	return more_to_do;
+}
+
+int xenhcd_schedule(void *arg)
+{
+	struct usbfront_info *info = (struct usbfront_info *) arg;
+
+	while (!kthread_should_stop()) {
+		wait_event_interruptible(info->wq,
+				info->waiting_resp || kthread_should_stop());
+		info->waiting_resp = 0;
+		smp_mb();
+
+		if (xenhcd_urb_request_done(info))
+			info->waiting_resp = 1;
+
+		if (xenhcd_conn_notify(info))
+			info->waiting_resp = 1;
+	}
+
+	return 0;
+}
+
+static void xenhcd_notify_work(struct usbfront_info *info)
+{
+	info->waiting_resp = 1;
+	wake_up(&info->wq);
+}
+
+irqreturn_t xenhcd_int(int irq, void *dev_id)
+{
+	xenhcd_notify_work((struct usbfront_info *) dev_id);
+	return IRQ_HANDLED;
+}
+
+static void xenhcd_watchdog(unsigned long param)
+{
+	struct usbfront_info *info = (struct usbfront_info *) param;
+	unsigned long flags;
+
+	spin_lock_irqsave(&info->lock, flags);
+	if (likely(HC_IS_RUNNING(info_to_hcd(info)->state))) {
+		timer_action_done(info, TIMER_RING_WATCHDOG);
+		xenhcd_giveback_unlinked_urbs(info);
+		xenhcd_kick_pending_urbs(info);
+	}
+	spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * one-time HC init
+ */
+static int xenhcd_setup(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	spin_lock_init(&info->lock);
+	INIT_LIST_HEAD(&info->pending_submit_list);
+	INIT_LIST_HEAD(&info->pending_unlink_list);
+	INIT_LIST_HEAD(&info->in_progress_list);
+	INIT_LIST_HEAD(&info->giveback_waiting_list);
+	init_timer(&info->watchdog);
+	info->watchdog.function = xenhcd_watchdog;
+	info->watchdog.data = (unsigned long) info;
+	return 0;
+}
+
+/*
+ * start HC running
+ */
+static int xenhcd_run(struct usb_hcd *hcd)
+{
+	hcd->uses_new_polling = 1;
+	hcd->state = HC_STATE_RUNNING;
+	create_debug_file(hcd_to_info(hcd));
+	return 0;
+}
+
+/*
+ * stop running HC
+ */
+static void xenhcd_stop(struct usb_hcd *hcd)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+
+	del_timer_sync(&info->watchdog);
+	remove_debug_file(info);
+	spin_lock_irq(&info->lock);
+	/* cancel all urbs */
+	hcd->state = HC_STATE_HALT;
+	xenhcd_cancel_all_enqueued_urbs(info);
+	xenhcd_giveback_unlinked_urbs(info);
+	spin_unlock_irq(&info->lock);
+}
+
+/*
+ * called as .urb_enqueue()
+ * a non-error return is a promise to give back the urb later
+ */
+static int xenhcd_urb_enqueue(struct usb_hcd *hcd, struct urb *urb,
+				gfp_t mem_flags)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	struct urb_priv *urbp;
+	unsigned long flags;
+	int ret = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	urbp = alloc_urb_priv(urb);
+	if (!urbp) {
+		ret = -ENOMEM;
+		goto done;
+	}
+	urbp->status = 1;
+
+	ret = xenhcd_submit_urb(info, urbp);
+	if (ret != 0)
+		free_urb_priv(urbp);
+
+done:
+	spin_unlock_irqrestore(&info->lock, flags);
+	return ret;
+}
+
+/*
+ * called as .urb_dequeue()
+ */
+static int xenhcd_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+{
+	struct usbfront_info *info = hcd_to_info(hcd);
+	struct urb_priv *urbp;
+	unsigned long flags;
+	int ret = 0;
+
+	spin_lock_irqsave(&info->lock, flags);
+
+	urbp = urb->hcpriv;
+	if (!urbp)
+		goto done;
+
+	urbp->status = status;
+	ret = xenhcd_unlink_urb(info, urbp);
+
+done:
+	spin_unlock_irqrestore(&info->lock, flags);
+	return ret;
+}
+
+/*
+ * called from usb_get_current_frame_number();
+ * however, almost no drivers use this function.
+ */
+static int xenhcd_get_frame(struct usb_hcd *hcd)
+{
+	/* returning 0 signals an error, but in practice callers ignore it */
+	return 0;
+}
+
+static const char hcd_name[] = "xen_hcd";
+
+struct hc_driver xen_usb20_hc_driver = {
+	.description = hcd_name,
+	.product_desc = "Xen USB2.0 Virtual Host Controller",
+	.hcd_priv_size = sizeof(struct usbfront_info),
+	.flags = HCD_USB2,
+
+	/* basic HC lifecycle operations */
+	.reset = xenhcd_setup,
+	.start = xenhcd_run,
+	.stop = xenhcd_stop,
+
+	/* managing urb I/O */
+	.urb_enqueue = xenhcd_urb_enqueue,
+	.urb_dequeue = xenhcd_urb_dequeue,
+	.get_frame_number = xenhcd_get_frame,
+
+	/* root hub operations */
+	.hub_status_data = xenhcd_hub_status_data,
+	.hub_control = xenhcd_hub_control,
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+	.bus_suspend = xenhcd_bus_suspend,
+	.bus_resume = xenhcd_bus_resume,
+#endif
+#endif
+};
+
+struct hc_driver xen_usb11_hc_driver = {
+	.description = hcd_name,
+	.product_desc = "Xen USB1.1 Virtual Host Controller",
+	.hcd_priv_size = sizeof(struct usbfront_info),
+	.flags = HCD_USB11,
+
+	/* basic HC lifecycle operations */
+	.reset = xenhcd_setup,
+	.start = xenhcd_run,
+	.stop = xenhcd_stop,
+
+	/* managing urb I/O */
+	.urb_enqueue = xenhcd_urb_enqueue,
+	.urb_dequeue = xenhcd_urb_dequeue,
+	.get_frame_number = xenhcd_get_frame,
+
+	/* root hub operations */
+	.hub_status_data = xenhcd_hub_status_data,
+	.hub_control = xenhcd_hub_control,
+#ifdef XENHCD_PM
+#ifdef CONFIG_PM
+	.bus_suspend = xenhcd_bus_suspend,
+	.bus_resume = xenhcd_bus_resume,
+#endif
+#endif
+};
+
+#define GRANT_INVALID_REF 0
+
+static void destroy_rings(struct usbfront_info *info)
+{
+	if (info->irq)
+		unbind_from_irqhandler(info->irq, info);
+	info->evtchn = info->irq = 0;
+
+	if (info->urb_ring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->urb_ring_ref, 0,
+					(unsigned long)info->urb_ring.sring);
+		info->urb_ring_ref = GRANT_INVALID_REF;
+	}
+	info->urb_ring.sring = NULL;
+
+	if (info->conn_ring_ref != GRANT_INVALID_REF) {
+		gnttab_end_foreign_access(info->conn_ring_ref, 0,
+					(unsigned long)info->conn_ring.sring);
+		info->conn_ring_ref = GRANT_INVALID_REF;
+	}
+	info->conn_ring.sring = NULL;
+}
+
+static int setup_rings(struct xenbus_device *dev, struct usbfront_info *info)
+{
+	struct usbif_urb_sring *urb_sring;
+	struct usbif_conn_sring *conn_sring;
+	int err;
+
+	info->urb_ring_ref = GRANT_INVALID_REF;
+	info->conn_ring_ref = GRANT_INVALID_REF;
+
+	urb_sring = (struct usbif_urb_sring *)
+					get_zeroed_page(GFP_NOIO|__GFP_HIGH);
+	if (!urb_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating urb ring");
+		return -ENOMEM;
+	}
+	SHARED_RING_INIT(urb_sring);
+	FRONT_RING_INIT(&info->urb_ring, urb_sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->urb_ring.sring));
+	if (err < 0) {
+		free_page((unsigned long)urb_sring);
+		info->urb_ring.sring = NULL;
+		goto fail;
+	}
+	info->urb_ring_ref = err;
+
+	conn_sring = (struct usbif_conn_sring *)
+					get_zeroed_page(GFP_NOIO|__GFP_HIGH);
+	if (!conn_sring) {
+		xenbus_dev_fatal(dev, -ENOMEM, "allocating conn ring");
+		/* the urb ring is already set up; release it via the
+		 * common error path instead of leaking it */
+		err = -ENOMEM;
+		goto fail;
+	}
+	SHARED_RING_INIT(conn_sring);
+	FRONT_RING_INIT(&info->conn_ring, conn_sring, PAGE_SIZE);
+
+	err = xenbus_grant_ring(dev, virt_to_mfn(info->conn_ring.sring));
+	if (err < 0) {
+		free_page((unsigned long)conn_sring);
+		info->conn_ring.sring = NULL;
+		goto fail;
+	}
+	info->conn_ring_ref = err;
+
+	err = xenbus_alloc_evtchn(dev, &info->evtchn);
+	if (err)
+		goto fail;
+
+	err = bind_evtchn_to_irqhandler(info->evtchn, xenhcd_int, 0,
+					"usbif", info);
+	if (err <= 0) {
+		xenbus_dev_fatal(dev, err, "bind_evtchn_to_irqhandler");
+		goto fail;
+	}
+	info->irq = err;
+
+	return 0;
+fail:
+	destroy_rings(info);
+	return err;
+}
+
+static int talk_to_usbback(struct xenbus_device *dev,
+				struct usbfront_info *info)
+{
+	const char *message;
+	struct xenbus_transaction xbt;
+	int err;
+
+	err = setup_rings(dev, info);
+	if (err)
+		goto out;
+
+again:
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "starting transaction");
+		goto destroy_ring;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "urb-ring-ref",
+				"%u", info->urb_ring_ref);
+	if (err) {
+		message = "writing urb-ring-ref";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "conn-ring-ref",
+				"%u", info->conn_ring_ref);
+	if (err) {
+		message = "writing conn-ring-ref";
+		goto abort_transaction;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "event-channel",
+				"%u", info->evtchn);
+	if (err) {
+		message = "writing event-channel";
+		goto abort_transaction;
+	}
+
+	err = xenbus_transaction_end(xbt, 0);
+	if (err) {
+		if (err == -EAGAIN)
+			goto again;
+		xenbus_dev_fatal(dev, err, "completing transaction");
+		goto destroy_ring;
+	}
+
+	return 0;
+
+abort_transaction:
+	xenbus_transaction_end(xbt, 1);
+	xenbus_dev_fatal(dev, err, "%s", message);
+
+destroy_ring:
+	destroy_rings(info);
+
+out:
+	return err;
+}
+
+static int connect(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+
+	struct usbif_conn_request *req;
+	int i, idx, err;
+	int notify;
+	char name[TASK_COMM_LEN];
+	struct usb_hcd *hcd;
+
+	hcd = info_to_hcd(info);
+	snprintf(name, TASK_COMM_LEN, "xenhcd.%d", hcd->self.busnum);
+
+	err = talk_to_usbback(dev, info);
+	if (err)
+		return err;
+
+	info->kthread = kthread_run(xenhcd_schedule, info, name);
+	if (IS_ERR(info->kthread)) {
+		err = PTR_ERR(info->kthread);
+		info->kthread = NULL;
+		xenbus_dev_fatal(dev, err, "Error creating thread");
+		return err;
+	}
+	/* prepare ring for hotplug notification */
+	for (idx = 0, i = 0; i < USB_CONN_RING_SIZE; i++) {
+		req = RING_GET_REQUEST(&info->conn_ring, idx);
+		req->id = idx;
+		idx++;
+	}
+	info->conn_ring.req_prod_pvt = idx;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->conn_ring, notify);
+	if (notify)
+		notify_remote_via_irq(info->irq);
+
+	return 0;
+}
+
+static struct usb_hcd *create_hcd(struct xenbus_device *dev)
+{
+	int i;
+	int err = 0;
+	int num_ports;
+	int usb_ver;
+	struct usb_hcd *hcd = NULL;
+	struct usbfront_info *info = NULL;
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "num-ports",
+				"%d", &num_ports);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading num-ports");
+		return ERR_PTR(-EINVAL);
+	}
+	if (num_ports < 1 || num_ports > USB_MAXCHILDREN) {
+		xenbus_dev_fatal(dev, err, "invalid num-ports");
+		return ERR_PTR(-EINVAL);
+	}
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "usb-ver", "%d", &usb_ver);
+	if (err != 1) {
+		xenbus_dev_fatal(dev, err, "reading usb-ver");
+		return ERR_PTR(-EINVAL);
+	}
+	switch (usb_ver) {
+	case USB_VER_USB11:
+		hcd = usb_create_hcd(&xen_usb11_hc_driver,
+					&dev->dev, dev_name(&dev->dev));
+		break;
+	case USB_VER_USB20:
+		hcd = usb_create_hcd(&xen_usb20_hc_driver,
+					&dev->dev, dev_name(&dev->dev));
+		break;
+	default:
+		xenbus_dev_fatal(dev, err, "invalid usb-ver");
+		return ERR_PTR(-EINVAL);
+	}
+	if (!hcd) {
+		xenbus_dev_fatal(dev, err,
+					"failed to allocate USB host controller");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	info = hcd_to_info(hcd);
+	info->xbdev = dev;
+	info->rh_numports = num_ports;
+
+	for (i = 0; i < USB_URB_RING_SIZE; i++) {
+		info->shadow[i].req.id = i + 1;
+		info->shadow[i].urb = NULL;
+	}
+	info->shadow[USB_URB_RING_SIZE-1].req.id = 0x0fff;
+
+	return hcd;
+}
+
+static int usbfront_probe(struct xenbus_device *dev,
+				const struct xenbus_device_id *id)
+{
+	int err;
+	struct usb_hcd *hcd;
+	struct usbfront_info *info;
+
+	if (usb_disabled())
+		return -ENODEV;
+
+	hcd = create_hcd(dev);
+	if (IS_ERR(hcd)) {
+		err = PTR_ERR(hcd);
+		xenbus_dev_fatal(dev, err,
+					"failed to create usb host controller");
+		goto fail;
+	}
+
+	info = hcd_to_info(hcd);
+	dev_set_drvdata(&dev->dev, info);
+
+	err = usb_add_hcd(hcd, 0, 0);
+	if (err != 0) {
+		xenbus_dev_fatal(dev, err, "failed to add USB host controller");
+		goto fail;
+	}
+
+	init_waitqueue_head(&info->wq);
+
+	return 0;
+
+fail:
+	usb_put_hcd(hcd);
+	dev_set_drvdata(&dev->dev, NULL);
+	return err;
+}
+
+static void usbfront_disconnect(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+	struct usb_hcd *hcd = info_to_hcd(info);
+
+	usb_remove_hcd(hcd);
+	if (info->kthread) {
+		kthread_stop(info->kthread);
+		info->kthread = NULL;
+	}
+	xenbus_frontend_closed(dev);
+}
+
+static void usbback_changed(struct xenbus_device *dev,
+				enum xenbus_state backend_state)
+{
+	switch (backend_state) {
+	case XenbusStateInitialising:
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+	case XenbusStateReconfiguring:
+	case XenbusStateReconfigured:
+	case XenbusStateUnknown:
+	case XenbusStateClosed:
+		break;
+
+	case XenbusStateInitWait:
+		if (dev->state != XenbusStateInitialising)
+			break;
+		if (!connect(dev))
+			xenbus_switch_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		usbfront_disconnect(dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+					backend_state);
+		break;
+	}
+}
+
+static int usbfront_remove(struct xenbus_device *dev)
+{
+	struct usbfront_info *info = dev_get_drvdata(&dev->dev);
+	struct usb_hcd *hcd = info_to_hcd(info);
+
+	destroy_rings(info);
+	usb_put_hcd(hcd);
+
+	return 0;
+}
+
+static const struct xenbus_device_id usbfront_ids[] = {
+	{ "vusb" },
+	{ "" },
+};
+MODULE_ALIAS("xen:vusb");
+
+static DEFINE_XENBUS_DRIVER(usbfront, ,
+	.probe = usbfront_probe,
+	.remove = usbfront_remove,
+	.otherend_changed = usbback_changed,
+);
+
+static int __init usbfront_init(void)
+{
+	if (!xen_domain())
+		return -ENODEV;
+
+	xenhcd_urbp_cachep = kmem_cache_create("xenhcd_urb_priv",
+					sizeof(struct urb_priv), 0, 0, NULL);
+	if (!xenhcd_urbp_cachep) {
+		printk(KERN_ERR "usbfront failed to create kmem cache\n");
+		return -ENOMEM;
+	}
+
+	return xenbus_register_frontend(&usbfront_driver);
+}
+
+static void __exit usbfront_exit(void)
+{
+	kmem_cache_destroy(xenhcd_urbp_cachep);
+	xenbus_unregister_driver(&usbfront_driver);
+}
+
+module_init(usbfront_init);
+module_exit(usbfront_exit);
+
+MODULE_AUTHOR("");
+MODULE_DESCRIPTION("Xen USB Virtual Host Controller driver (usbfront)");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/include/xen/interface/io/usbif.h b/include/xen/interface/io/usbif.h
new file mode 100644
index 0000000..f3bb1b2
--- /dev/null
+++ b/include/xen/interface/io/usbif.h
@@ -0,0 +1,150 @@
+/*
+ * usbif.h
+ *
+ * USB I/O interface for Xen guest OSes.
+ *
+ * Copyright (C) 2009, FUJITSU LABORATORIES LTD.
+ * Author: Noboru Iwamatsu <n_iwamatsu@jp.fujitsu.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_PUBLIC_IO_USBIF_H__
+#define __XEN_PUBLIC_IO_USBIF_H__
+
+#include "ring.h"
+#include "../grant_table.h"
+
+enum usb_spec_version {
+	USB_VER_UNKNOWN = 0,
+	USB_VER_USB11,
+	USB_VER_USB20,
+	USB_VER_USB30,	/* not supported yet */
+};
+
+/*
+ *  USB pipe in usbif_request
+ *
+ *  Bits 0-5 are specific to the virtual USB driver;
+ *  bits 7-31 follow the standard Linux urb pipe layout.
+ *
+ *  - port number(NEW):	bits 0-4
+ *				(USB_MAXCHILDREN is 31)
+ *
+ *  - operation flag(NEW):	bit 5
+ *				(0 = submit urb,
+ *				 1 = unlink urb)
+ *
+ *  - direction:		bit 7
+ *				(0 = Host-to-Device [Out]
+ *				 1 = Device-to-Host [In])
+ *
+ *  - device address:	bits 8-14
+ *
+ *  - endpoint:		bits 15-18
+ *
+ *  - pipe type:		bits 30-31
+ *				(00 = isochronous, 01 = interrupt,
+ *				 10 = control, 11 = bulk)
+ */
+#define usbif_pipeportnum(pipe) ((pipe) & 0x1f)
+#define usbif_setportnum_pipe(pipe, portnum) \
+	((pipe)|(portnum))
+
+#define usbif_pipeunlink(pipe) ((pipe) & 0x20)
+#define usbif_pipesubmit(pipe) (!usbif_pipeunlink(pipe))
+#define usbif_setunlink_pipe(pipe) ((pipe)|(0x20))
+
+#define USBIF_BACK_MAX_PENDING_REQS (128)
+#define USBIF_MAX_SEGMENTS_PER_REQUEST (16)
+
+/*
+ * RING for transferring urbs.
+ */
+struct usbif_request_segment {
+	grant_ref_t gref;
+	uint16_t offset;
+	uint16_t length;
+};
+
+struct usbif_urb_request {
+	uint16_t id; /* request id */
+	uint16_t nr_buffer_segs; /* number of urb->transfer_buffer segments */
+
+	/* basic urb parameter */
+	uint32_t pipe;
+	uint16_t transfer_flags;
+	uint16_t buffer_length;
+	union {
+		uint8_t ctrl[8]; /* setup_packet (Ctrl) */
+
+		struct {
+			uint16_t interval; /* maximum (1024*8) in usb core */
+			uint16_t start_frame; /* start frame */
+			uint16_t number_of_packets; /* number of ISO packets */
+			uint16_t nr_frame_desc_segs; /* number of iso_frame_desc
+								 segments */
+		} isoc;
+
+		struct {
+			uint16_t interval; /* maximum (1024*8) in usb core */
+			uint16_t pad[3];
+		} intr;
+
+		struct {
+			uint16_t unlink_id; /* unlink request id */
+			uint16_t pad[3];
+		} unlink;
+
+	} u;
+
+	/* urb data segments */
+	struct usbif_request_segment seg[USBIF_MAX_SEGMENTS_PER_REQUEST];
+};
+
+struct usbif_urb_response {
+	uint16_t id; /* request id */
+	uint16_t start_frame;  /* start frame (ISO) */
+	int32_t status; /* status (non-ISO) */
+	int32_t actual_length; /* actual transfer length */
+	int32_t error_count; /* number of ISO errors */
+};
+
+DEFINE_RING_TYPES(usbif_urb, struct usbif_urb_request,
+						struct usbif_urb_response);
+#define USB_URB_RING_SIZE __CONST_RING_SIZE(usbif_urb, PAGE_SIZE)
+
+/*
+ * RING for notifying connect/disconnect events to frontend
+ */
+struct usbif_conn_request {
+	uint16_t id;
+};
+
+struct usbif_conn_response {
+	uint16_t id; /* request id */
+	uint8_t portnum; /* port number */
+	uint8_t speed; /* usb_device_speed */
+};
+
+DEFINE_RING_TYPES(usbif_conn, struct usbif_conn_request,
+						struct usbif_conn_response);
+#define USB_CONN_RING_SIZE __CONST_RING_SIZE(usbif_conn, PAGE_SIZE)
+
+#endif /* __XEN_PUBLIC_IO_USBIF_H__ */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 16:59:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 16:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5efn-0006jD-W3; Tue, 21 Jan 2014 16:59:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5efm-0006iz-N7
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 16:59:34 +0000
Received: from [85.158.143.35:58491] by server-2.bemta-4.messagelabs.com id
	C7/3D-11386-577AED25; Tue, 21 Jan 2014 16:59:33 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390323570!11846780!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28604 invoked from network); 21 Jan 2014 16:59:32 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 16:59:32 -0000
Received: from mail-ee0-f49.google.com ([74.125.83.49]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt6nch9MNS7MwXVDQMRNZAfNI8d5N8dc@postini.com;
	Tue, 21 Jan 2014 08:59:32 PST
Received: by mail-ee0-f49.google.com with SMTP id d17so4229960eek.8
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 08:59:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:cc:subject:date:message-id;
	bh=NwXSleKc/QfYG5rJi7fN5vgJKNuV1GK3XVTILI11f10=;
	b=Pcb7YzgQCUxPJvN1cG4HTwxHiyc5YzWRR1XXnToprcekUsGuxjlNvseMUqizVrgBbz
	2wkJf1YETWZI6pVxWbc6RwESvqkmxkyDtbxKAqZXNIGggRuG+7eJEZM/LAacfSnBFYRY
	vD0F59PS6HzVD6Huj53oghwTD5SFEN+66nGQE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=NwXSleKc/QfYG5rJi7fN5vgJKNuV1GK3XVTILI11f10=;
	b=GHvqEzujxfZlM71suTJvSPEjWbQrAaAod9o3Zca0GuIikzocZ8+y1gTBXmsgqwfUL9
	HV0+Hn3VWP5qHLcKnbpNPM2RekXrjePCMZtPMWNiuDY6eGbzMfGMzlVl9vOlqrnRSk2F
	xv7sfS8+NGI+VbJuhQu9PKoCu0Jp1ipH+mOzN4yuOVhUasP0HLI7MGXrdDcqe6/zTTrs
	rrIED960kU3UmcqeBRKeUk3b3ItUshJPLYZT48m2ToZmLzLpUjARiSlIYG/2wQYUVpp0
	0Va/pDL/EQxcetB9q+MajK/OwG2+Fu6l201PpB1kU78NMCscubqhxnxIKm2zTK8Ten0z
	me6w==
X-Gm-Message-State: ALoCoQkSNhY6LFf3naPo4f26ZtoZr2IbRCt0ZCdyU+Nv4LZtK+ZzJdb6Y6KY9YOG7VAZAv4+yxTV2lYYtVrSoIxHsmRmJMAAxykt46ELwP05wHlr8VQ9BEt2NpurWISkm9oJpk/W7msRJwya9YrwapRWbUmp/jE63A8H9dzd4l36SJXROWfJ6po=
X-Received: by 10.14.69.200 with SMTP id n48mr24782010eed.54.1390323569105;
	Tue, 21 Jan 2014 08:59:29 -0800 (PST)
X-Received: by 10.14.69.200 with SMTP id n48mr24782005eed.54.1390323569063;
	Tue, 21 Jan 2014 08:59:29 -0800 (PST)
Received: from uglx0187394.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id
	d43sm16764424eeo.12.2014.01.21.08.59.28 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 08:59:28 -0800 (PST)
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 18:55:31 +0200
Message-Id: <1390323331-23077-1-git-send-email-oleksandr.savchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Subject: [Xen-devel] [PATCH 2/3] usbback: Add new features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add new_vport and remove_vport.
Ported from Noboru Iwamatsu's patch "PVUSB: backend driver".

Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
---
 drivers/usb/host/xen-usbback/usbdev.c |   93 +++++++++++++++++++++++++++++++--
 1 file changed, 88 insertions(+), 5 deletions(-)

diff --git a/drivers/usb/host/xen-usbback/usbdev.c b/drivers/usb/host/xen-usbback/usbdev.c
index 53a14b4..05709f2 100644
--- a/drivers/usb/host/xen-usbback/usbdev.c
+++ b/drivers/usb/host/xen-usbback/usbdev.c
@@ -276,6 +276,73 @@ static ssize_t usbdev_show_ports(struct device_driver *driver, char *buf)
 
 DRIVER_ATTR(port_ids, S_IRUSR, usbdev_show_ports, NULL);
 
+static inline int str_to_vport(const char *buf, char *phys_bus, int *dom_id,
+							int *dev_id, int *port)
+{
+	char *p;
+	int len;
+	int err;
+
+	p = strchr(buf, ':');
+	if (!p) /* no physical bus */
+		return -EINVAL;
+
+	len = p - buf;
+
+	/* bad physical bus */
+	if (len + 1 > XEN_USB_BUS_ID_SIZE)
+		return -EINVAL;
+
+	strlcpy(phys_bus, buf, len + 1);
+	err = sscanf(p + 1, "%d:%d:%d", dom_id, dev_id, port);
+	if (err == 3)
+		return 0;
+	else
+		return -EINVAL;
+}
+
+static ssize_t usbstub_vport_add(struct device_driver *driver,
+						const char *buf, size_t count)
+{
+	char bus_id[XEN_USB_BUS_ID_SIZE];
+	int err = 0, dom_id, dev_id, portnum;
+
+	err = str_to_vport(buf, &bus_id[0], &dom_id, &dev_id, &portnum);
+	if (err)
+		goto out;
+
+	err = xen_usbport_add(&bus_id[0], dom_id, dev_id, portnum);
+
+out:
+	if (!err)
+		err = count;
+
+	return err;
+}
+
+DRIVER_ATTR(new_vport, S_IWUSR, NULL, usbstub_vport_add);
+
+static ssize_t usbstub_vport_remove(struct device_driver *driver,
+						const char *buf, size_t count)
+{
+	char bus_id[XEN_USB_BUS_ID_SIZE];
+	int err = 0, dom_id, dev_id, portnum;
+
+	err = str_to_vport(buf, &bus_id[0], &dom_id, &dev_id, &portnum);
+	if (err)
+		goto out;
+
+	err = xen_usbport_remove(dom_id, dev_id, portnum);
+
+out:
+	if (!err)
+		err = count;
+
+	return err;
+}
+
+DRIVER_ATTR(remove_vport, S_IWUSR, NULL, usbstub_vport_remove);
+
 /* table of devices that matches any usbdevice */
 static const struct usb_device_id usbdev_table[] = {
 	{ .driver_info = 1 }, /* wildcard, see usb_match_id() */
@@ -293,27 +360,43 @@ static struct usb_driver xen_usbdev_driver = {
 
 int __init xen_usbdev_init(void)
 {
-	int err;
+	int err = 0;
 
 	err = usb_register(&xen_usbdev_driver);
 	if (err < 0) {
 		pr_alert(DRV_PFX "usb_register failed (error %d)\n",
 									err);
-		goto out;
+		return err;
 	}
 
 	err = driver_create_file(&xen_usbdev_driver.drvwrap.driver,
 							&driver_attr_port_ids);
 	if (err)
-		usb_deregister(&xen_usbdev_driver);
+		goto err;
+	err = driver_create_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_remove_vport);
+	if (err)
+		goto err;
+	err = driver_create_file(&xen_usbdev_driver.drvwrap.driver,
+							&driver_attr_new_vport);
+	if (err)
+		goto err;
+
+	return err;
+
+err:
+	usb_deregister(&xen_usbdev_driver);
 
-out:
 	return err;
 }
 
 void xen_usbdev_exit(void)
 {
 	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
-							&driver_attr_port_ids);
+						&driver_attr_port_ids);
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+						&driver_attr_remove_vport);
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+						&driver_attr_new_vport);
 	usb_deregister(&xen_usbdev_driver);
 }
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 {
 	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
-							&driver_attr_port_ids);
+						&driver_attr_port_ids);
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+						&driver_attr_remove_vport);
+	driver_remove_file(&xen_usbdev_driver.drvwrap.driver,
+						&driver_attr_new_vport);
 	usb_deregister(&xen_usbdev_driver);
 }
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:02:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5eiE-00077J-JV; Tue, 21 Jan 2014 17:02:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5eiC-000778-Cj
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 17:02:04 +0000
Received: from [85.158.143.35:20674] by server-1.bemta-4.messagelabs.com id
	48/E0-02132-B08AED25; Tue, 21 Jan 2014 17:02:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390323722!13154076!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24102 invoked from network); 21 Jan 2014 17:02:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 17:02:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 21 Jan 2014 17:02:02 +0000
Message-Id: <52DEB6170200007800115796@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 21 Jan 2014 17:01:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
	<52DEA69D.4010102@citrix.com>
In-Reply-To: <52DEA69D.4010102@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: don't drop guest visible state updates
 when 64-bit PV guest is in user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 17:55, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 21/01/2014 15:33, Jan Beulich wrote:
>> Since 64-bit PV uses separate kernel and user mode page tables, kernel
>> addresses (as usually provided via VCPUOP_register_runstate_memory_area
>> and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
>> necessarily accessible when the respective updating occurs. Add logic
>> for toggle_guest_mode() to take care of this (if necessary) the next
>> time the vCPU switches to kernel mode.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1323,10 +1323,10 @@ static void paravirt_ctxt_switch_to(stru
>>  }
>>  
>>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>> -static void update_runstate_area(struct vcpu *v)
>> +bool_t update_runstate_area(const struct vcpu *v)
> 
> Can you adjust the comment to indicate what the return value means?  The
> logic is quite opaque, but I believe it is "true if the runstate has
> been suitably updated".

Sigh - yes, I could of course. But it looks quite obvious to me.

>> --- a/xen/arch/x86/x86_64/traps.c
>> +++ b/xen/arch/x86/x86_64/traps.c
>> @@ -266,6 +266,18 @@ void toggle_guest_mode(struct vcpu *v)
>>  #else
>>      write_ptbase(v);
>>  #endif
>> +
>> +    if ( !(v->arch.flags & TF_kernel_mode) )
>> +        return;
>> +
>> +    if ( v->arch.pv_vcpu.need_update_runstate_area &&
>> +         update_runstate_area(v) )
>> +        v->arch.pv_vcpu.need_update_runstate_area = 0;
>> +
>> +    if ( v->arch.pv_vcpu.pending_system_time.version &&
>> +         update_secondary_system_time(v,
>> +                                      &v->arch.pv_vcpu.pending_system_time) )
>> +        v->arch.pv_vcpu.pending_system_time.version = 0;
> 
> What would happen now if a guest kernel loads a faulting address for its
> runstate info (or edits its pagetables so a previously good address now
> faults)?  It appears as if we could retry writing to the same bad
> address each time we try to toggle into kernel mode.
> 
> Given that we know we are running on the correct set of pagetables, I
> think both of these pending variables can be unconditionally cleared,
> whether or not update{runstate_area,secondary_system_time} succeed or fail.

We could, but do you really think there's much harm in us keeping
this logically consistent? The only entity impacted is going to be
the "offending" domain, as we're wasting more of its time doing the
mode switch for it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:06:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5emA-0007KC-9H; Tue, 21 Jan 2014 17:06:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5em8-0007K4-Nc
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 17:06:08 +0000
Received: from [85.158.139.211:30244] by server-1.bemta-5.messagelabs.com id
	D7/7A-21065-FF8AED25; Tue, 21 Jan 2014 17:06:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390323965!11107507!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8833 invoked from network); 21 Jan 2014 17:06:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 17:06:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="94971218"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 17:06:05 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 12:06:05 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	18:06:03 +0100
Message-ID: <52DEA8FB.7010307@citrix.com>
Date: Tue, 21 Jan 2014 17:06:03 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DEA16B02000078001156E8@nat28.tlf.novell.com>
	<52DEA69D.4010102@citrix.com>
	<52DEB6170200007800115796@nat28.tlf.novell.com>
In-Reply-To: <52DEB6170200007800115796@nat28.tlf.novell.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: don't drop guest visible state updates
 when 64-bit PV guest is in user mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 17:01, Jan Beulich wrote:
>>>> On 21.01.14 at 17:55, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 21/01/2014 15:33, Jan Beulich wrote:
>>> Since 64-bit PV uses separate kernel and user mode page tables, kernel
>>> addresses (as usually provided via VCPUOP_register_runstate_memory_area
>>> and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
>>> necessarily accessible when the respective updating occurs. Add logic
>>> for toggle_guest_mode() to take care of this (if necessary) the next
>>> time the vCPU switches to kernel mode.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -1323,10 +1323,10 @@ static void paravirt_ctxt_switch_to(stru
>>>  }
>>>  
>>>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>>> -static void update_runstate_area(struct vcpu *v)
>>> +bool_t update_runstate_area(const struct vcpu *v)
>> Can you adjust the comment to indicate what the return value means?  The
>> logic is quite opaque, but I believe it is "true if the runstate has
>> been suitably updated".
> Sigh - yes, I could of course. But it looks quite obvious to me.
>
>>> --- a/xen/arch/x86/x86_64/traps.c
>>> +++ b/xen/arch/x86/x86_64/traps.c
>>> @@ -266,6 +266,18 @@ void toggle_guest_mode(struct vcpu *v)
>>>  #else
>>>      write_ptbase(v);
>>>  #endif
>>> +
>>> +    if ( !(v->arch.flags & TF_kernel_mode) )
>>> +        return;
>>> +
>>> +    if ( v->arch.pv_vcpu.need_update_runstate_area &&
>>> +         update_runstate_area(v) )
>>> +        v->arch.pv_vcpu.need_update_runstate_area = 0;
>>> +
>>> +    if ( v->arch.pv_vcpu.pending_system_time.version &&
>>> +         update_secondary_system_time(v,
>>> +                                      &v->arch.pv_vcpu.pending_system_time) )
>>> +        v->arch.pv_vcpu.pending_system_time.version = 0;
>> What would happen now if a guest kernel loads a faulting address for its
>> runstate info (or edits its pagetables so a previously good address now
>> faults)?  It appears as if we could retry writing to the same bad
>> address each time we try to toggle into kernel mode.
>>
>> Given that we know we are running on the correct set of pagetables, I
>> think both of these pending variables can be unconditionally cleared,
>> whether or not update{runstate_area,secondary_system_time} succeed or fail.
> We could, but do you really think there's much harm in us keeping
> this logically consistent? The only entity impacted is going to be
> the "offending" domain, as we're wasting more of its time doing the
> mode switch for it.
>
> Jan
>

I suppose not - an extra pagefault or two for a toggle into kernel mode
is hardly the biggest of overheads.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:36:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fFj-0000NL-0Z; Tue, 21 Jan 2014 17:36:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <shakeel.butt@gmail.com>) id 1W5fFg-0000NG-U9
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:36:41 +0000
Received: from [193.109.254.147:11877] by server-2.bemta-14.messagelabs.com id
	50/BD-00361-820BED25; Tue, 21 Jan 2014 17:36:40 +0000
X-Env-Sender: shakeel.butt@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390325798!12328982!1
X-Originating-IP: [209.85.219.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17007 invoked from network); 21 Jan 2014 17:36:39 -0000
Received: from mail-oa0-f48.google.com (HELO mail-oa0-f48.google.com)
	(209.85.219.48)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 17:36:39 -0000
Received: by mail-oa0-f48.google.com with SMTP id l6so4678223oag.7
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 09:36:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:content-transfer-encoding;
	bh=MRUr7HagQGgiO7n0+JT0lrrO8ef3Ddan5JFaVi9Osj0=;
	b=Yicuno6io7/322dJe1jm+Yc+cYyioBmhEKfsq0No3jTR4fgD2m/BBDrIvsxe+zfzeC
	RgGkC9ZV1XkuiiJotch6q9OnyJGAJCkmmSDnrrFA96zywqmA+8cPeWOZbhXcmb4FvizH
	T4YewKY84z6CaHIQhMFsA3goJTo5Sno8dXxPxwgT1g4pY4fufk9Yvuq+7pvW/Zvpm0F7
	kRP90iaabtU7CrZvfEuZ650zGb4zMh2insSu/kPNKa0AsLuga8Pd+5NKdwWTTk9Zhhdb
	nvhQ9MJPTvAqk6deLOGJ1TycNZ8KZAM5jT3ZyadiUjHCSHWkbmeyLZStHG6rwfEo5tAt
	SIIQ==
MIME-Version: 1.0
X-Received: by 10.182.29.33 with SMTP id g1mr2775046obh.59.1390325797735; Tue,
	21 Jan 2014 09:36:37 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Tue, 21 Jan 2014 09:36:37 -0800 (PST)
In-Reply-To: <20140121125527.GG2924@reaktio.net>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<20140121125527.GG2924@reaktio.net>
Date: Tue, 21 Jan 2014 09:36:37 -0800
Message-ID: <CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: Gordan Bobic <gordan@bobich.net>, "Wu, Feng" <feng.wu@intel.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	"G.R." <firemeteor.guo@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 4:55 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> Hello,
>
> On Mon, Jan 20, 2014 at 01:24:23PM +0000, Wu, Feng wrote:
>>
>>
>> > >>> >
>> > >>> > Hi all,
>> > >>> >
>> > >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>> > >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>> > >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>> > >>> > traditional i.e. qemu-dm.
>> > >>>
>> > >>> As far as I know, only qemu-traditional supports vga pass-through
>> > >>> right now.
>> > >>
>> > >> Right.
>> > >> It is not possible to assign your primary VGA card to a VM with
>> > >> qemu-xen. You should be able to assign your secondary VGA card though.
>> > >
>> > > Let me understand this correctly. If I have two VGA cards then I can
>> > > passthrough
>> > > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>> > > right and
>> > > if yes how can I do it?
>> >
>> > Passing any VGA card as a primary-in-domU has always been problematic.
>>
>> I think passing a VGA card as primary-in-domU works well in qemu-traditional, right?
>>
>
> primary-in-domU requires vendor-specific hacks in Xen's qemu.
> qemu-traditional includes many patches for Intel IGD primary passthru support,
> but patches for AMD/ATI and Nvidia GPUs aren't merged to qemu-traditional.
>
> There are unapplied patches for qemu-traditional (AMD/Nvidia) GPU passthru in
> various source trees, mailing list archives, and on some blogs around the internet.
>
> Also for Intel IGD I think there's at least one outstanding patch/fix that
> hasn't been merged to qemu-traditional yet, see:
>
> http://lists.xenproject.org/archives/html/xen-devel/2013-02/msg00538.html
> http://lists.xen.org/archives/html/xen-devel/2013-07/msg01385.html
>
> The patch in question probably needs some work before it is suitable for being applied to qemu-traditional.
>
Thanks for sharing. Just for the information, I was able to make an
Nvidia K2000 work in passthrough mode without any patch to qemu-traditional.
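
For context, secondary-GPU passthrough with the traditional device model is typically set up along these lines in an xl guest config. This is a hedged sketch with hypothetical values; the PCI BDF is an example, and the exact option names should be checked against the xl.cfg documentation for the Xen version in use:

```
# HVM guest config fragment (hypothetical values)
builder = "hvm"
device_model_version = "qemu-xen-traditional"  # gfx_passthru needs the traditional qemu
gfx_passthru = 1            # pass the GPU through as the guest's primary VGA
pci = [ "01:00.0" ]         # example BDF of the GPU being handed to the guest
```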

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	SIIQ==
MIME-Version: 1.0
X-Received: by 10.182.29.33 with SMTP id g1mr2775046obh.59.1390325797735; Tue,
	21 Jan 2014 09:36:37 -0800 (PST)
Received: by 10.76.19.13 with HTTP; Tue, 21 Jan 2014 09:36:37 -0800 (PST)
In-Reply-To: <20140121125527.GG2924@reaktio.net>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<20140121125527.GG2924@reaktio.net>
Date: Tue, 21 Jan 2014 09:36:37 -0800
Message-ID: <CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
From: Shakeel Butt <shakeel.butt@gmail.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: Gordan Bobic <gordan@bobich.net>, "Wu, Feng" <feng.wu@intel.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	"G.R." <firemeteor.guo@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 4:55 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> Hello,
>
> On Mon, Jan 20, 2014 at 01:24:23PM +0000, Wu, Feng wrote:
>>
>>
>> > >>> >
>> > >>> > Hi all,
>> > >>> >
>> > >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>> > >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>> > >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>> > >>> > traditional i.e. qemu-dm.
>> > >>>
>> > >>> As far as I know, only qemu-traditional supports vga pass-through
>> > >>> right now.
>> > >>
>> > >> Right.
>> > >> It is not possible to assign your primary VGA card to a VM with
>> > >> qemu-xen. You should be able to assign your secondary VGA card though.
>> > >
>> > > Let me understand this correctly. If I have two VGA cards then I can
>> > > passthrough
>> > > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>> > > right and
>> > > if yes how can I do it?
>> >
>> > Passing any VGA card as a primary-in-domU has always been problematic.
>>
>> I think passing a VGA card as primary-in-domU works well in Qemu-traditional, right?
>>
>
> primary-in-domU requires vendor-specific hacks in Xen qemu.
> qemu-traditional includes many patches for Intel IGD primary passthru support,
> but patches for AMD/ATI and Nvidia GPUs aren't merged to qemu-traditional.
>
> There are unapplied patches for qemu-traditional (AMD/Nvidia) GPU passthru in
> various source trees, mailing list archives, and on some blogs around the internet.
>
> Also for Intel IGD I think there's at least one outstanding patch/fix that
> hasn't been merged to qemu-traditional yet, see:
>
> http://lists.xenproject.org/archives/html/xen-devel/2013-02/msg00538.html
> http://lists.xen.org/archives/html/xen-devel/2013-07/msg01385.html
>
> The patch in question probably needs some work before it is suitable for being applied to qemu-traditional.
>
Thanks for sharing. Just for information: I was able to make an Nvidia K2000
work in passthrough mode without any patch to qemu-traditional.
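For reference, the working qemu-traditional route described above corresponds to a guest config along these lines (a sketch only; the file name, PCI BDF, and values below are illustrative assumptions, not taken from this thread):

```
# illustrative xl guest-config fragment -- BDF and values are assumptions
builder              = "hvm"
device_model_version = "qemu-xen-traditional"  # i.e. qemu-dm; qemu-xen rejects gfx_passthru
gfx_passthru         = 1                       # guest sees the passed GPU as its primary VGA
pci                  = [ "01:00.0" ]           # secondary GPU, detached from Dom0 beforehand
```

The GPU has to be made assignable in Dom0 first, e.g. with `xl pci-assignable-add 01:00.0`, before `xl create` will accept it.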

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:52:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fUp-0001VJ-2u; Tue, 21 Jan 2014 17:52:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5fUn-0001V7-Si
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:52:17 +0000
Received: from [193.109.254.147:52460] by server-9.bemta-14.messagelabs.com id
	19/62-13957-1D3BED25; Tue, 21 Jan 2014 17:52:17 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390326735!12235249!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6144 invoked from network); 21 Jan 2014 17:52:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 17:52:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92938322"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 17:52:14 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 12:52:14 -0500
Message-ID: <52DEB3CD.60202@citrix.com>
Date: Tue, 21 Jan 2014 17:52:13 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
References: <1390323331-23077-1-git-send-email-oleksandr.savchenko@globallogic.com>
In-Reply-To: <1390323331-23077-1-git-send-email-oleksandr.savchenko@globallogic.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] usbback: Add new features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 16:55, Alexander Savchenko wrote:
> Add new_vport and remove_vport.
> Ported from Noboru Iwamatsu's patch "PVUSB: backend driver".
> 
> Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
> ---
>  drivers/usb/host/xen-usbback/usbdev.c |   93 +++++++++++++++++++++++++++++++--
>  1 file changed, 88 insertions(+), 5 deletions(-)

What is this patch against?  Upstream linux doesn't have a backend
driver.  Is patch 1/3 (which hasn't arrived on xen-devel) adding it?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:52:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fV4-0001Xf-G5; Tue, 21 Jan 2014 17:52:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordan@bobich.net>) id 1W5fV2-0001XD-Ty
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:52:33 +0000
Received: from [85.158.137.68:54490] by server-7.bemta-3.messagelabs.com id
	9F/02-27599-0E3BED25; Tue, 21 Jan 2014 17:52:32 +0000
X-Env-Sender: gordan@bobich.net
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390326750!9683290!1
X-Originating-IP: [217.34.137.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13250 invoked from network); 21 Jan 2014 17:52:31 -0000
Received: from host217-34-137-81.in-addr.btopenworld.com (HELO
	external.sentinel2) (217.34.137.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 17:52:31 -0000
Received: from mail.shatteredsilicon.net (localhost [127.0.0.1])
	by external.sentinel2 (Postfix) with ESMTP id B7AD0221BEA;
	Tue, 21 Jan 2014 17:52:29 +0000 (GMT)
MIME-Version: 1.0
Date: Tue, 21 Jan 2014 17:52:29 +0000
From: Gordan Bobic <gordan@bobich.net>
To: Shakeel Butt <shakeel.butt@gmail.com>
In-Reply-To: <CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<20140121125527.GG2924@reaktio.net>
	<CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
Message-ID: <2b0e555088687e299bf36b23349cefd8@mail.shatteredsilicon.net>
X-Sender: gordan@bobich.net
User-Agent: Roundcube Webmail/0.9.5
Cc: xen-devel@lists.xen.org, "Wu, Feng" <feng.wu@intel.com>,
	"G.R." <firemeteor@users.sourceforge.net>,
	"G.R." <firemeteor.guo@gmail.com>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-01-21 17:36, Shakeel Butt wrote:
> On Tue, Jan 21, 2014 at 4:55 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>> Hello,
>>
>> On Mon, Jan 20, 2014 at 01:24:23PM +0000, Wu, Feng wrote:
>>>
>>>
>>> > >>> >
>>> > >>> > Hi all,
>>> > >>> >
>>> > >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen as
>>> > >>> > device model? I tried but I am getting error 'gfx_passthru' invalid
>>> > >>> > parameter for qemu-xen. I am able to do passthrough with qemu
>>> > >>> > traditional i.e. qemu-dm.
>>> > >>>
>>> > >>> As far as I know, only qemu-traditional supports vga pass-through
>>> > >>> right now.
>>> > >>
>>> > >> Right.
>>> > >> It is not possible to assign your primary VGA card to a VM with
>>> > >> qemu-xen. You should be able to assign your secondary VGA card though.
>>> > >
>>> > > Let me understand this correctly. If I have two VGA cards then I can
>>> > > passthrough
>>> > > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
>>> > > right and
>>> > > if yes how can I do it?
>>> >
>>> > Passing any VGA card as a primary-in-domU has always been problematic.
>>>
>>> I think passing VGA card as a primary-in-domU works well in
>>> Qemu-traditional, right?
>>>
>>
>> primary-in-domU requires vendor specific hacks in Xen qemu.
>> qemu-traditional includes many patches for Intel IGD primary passthru
>> support,
>> but patches for AMD/ATI and Nvidia GPUs aren't merged to
>> qemu-traditional.
>>
>> There unapplied patches for qemu-traditional (AMD/Nvidia) GPU passthru
>> are in
>> various source trees, mailinglist archives, and on some blogs around
>> the internet.
>>
>> Also for Intel IGD I think there's at least one outstanding patch/fix
>> that
>> hasn't been merged to qemu-traditional yet, see:
>>
>> http://lists.xenproject.org/archives/html/xen-devel/2013-02/msg00538.html
>> http://lists.xen.org/archives/html/xen-devel/2013-07/msg01385.html
>>
>> The patch in question probably needs some work before it is suitable
>> for being applied to qemu-traditional.
>>
> Thanks for sharing. Just for the information, I was able to make
> Nvidia K2000 work
> in passthrough mode without any patch to qemu traditional.

All Quadro x000 and Kx000 cards, both genuine and modified GeForces,
are known to work fine with VGA passthrough.

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 17:56:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 17:56:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fZE-0001yK-17; Tue, 21 Jan 2014 17:56:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W5fZD-0001yA-5f
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:56:51 +0000
Received: from [85.158.137.68:59664] by server-6.bemta-3.messagelabs.com id
	13/C1-04868-2E4BED25; Tue, 21 Jan 2014 17:56:50 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390327007!10447217!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23910 invoked from network); 21 Jan 2014 17:56:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 17:56:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; 
	d="asc'?scan'208";a="92940044"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 17:56:47 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 12:56:46 -0500
Message-ID: <1390327005.23576.219.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Tue, 21 Jan 2014 18:56:45 +0100
In-Reply-To: <CAE4oM6y12tNYFihdKbWcBmHcy__2rL570viV7T=Esj4vxHALqg@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
	<1390304761.23576.161.camel@Solace>
	<CAE4oM6y12tNYFihdKbWcBmHcy__2rL570viV7T=Esj4vxHALqg@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0868226796670297393=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0868226796670297393==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-QEXupiMYfqFJv6DyZ+I8"

--=-QEXupiMYfqFJv6DyZ+I8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-01-21 at 17:53 +0200, Pavlo Suikov wrote:

> It happened that we cannot start dom0 with one vCPU (investigation on
> this one is still ongoing),
>
Oh, I see. Weird.

> but we succeded in giving one vCPU to domU and pinning it to one of
> the pCPUs. Interestingly enough, that fixed all the latency observed
> in domU.
>
Can I ask how the numbers (for DomU, of course) look now?

> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   ---      38.6  0
> Domain-0                             0     1    0   r--      31.8  0
> android_4.3                          1     0    1   b     230.2  1
>
>
> In dom0 (which has two vCPUs, so Xen scheduling is actually used)
> latency is still present.
>
Did you try reducing the scheduling timeslice to 1ms (or something
bigger, but less than the default 30ms)? If yes, down to which value?

Another thing I'd try, if you haven't done so already, is the following:
 - get rid of the DomU
 - pin the 2 Dom0 vCPUs each to one pCPU
 - repeat the experiment

If that works, iterate without the second step, i.e., basically run
the experiment with no pinning, but only with Dom0.
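
The steps above translate into xl commands roughly as follows (a sketch only, assuming the credit scheduler and the domain names from the vcpu-list output; run as root in Dom0 on a Xen 4.2+ host):

```
xl destroy android_4.3        # 1. get rid of the DomU
xl vcpu-pin Domain-0 0 0      # 2. pin Dom0 vCPU 0 to pCPU 0
xl vcpu-pin Domain-0 1 1      #    pin Dom0 vCPU 1 to pCPU 1
xl sched-credit -s -t 1       # optionally: system-wide 1ms timeslice
# 3. repeat the latency experiment in Dom0
```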

What I'm after is this: since you report that DomU performance starts to be
satisfactory when pinning is used, I am trying to figure out whether that is
the case for Dom0 too, as it should be, or whether something is interfering
with that. I know that if the DomU is idle while running the load in Dom0,
having it around or not should not make much difference, but still, I'd give
this a try.
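
Since the thread's subject is delays on usleep calls, a minimal measurement sketch like the following (a hypothetical helper, in Python rather than the poster's actual C test code) can be run in Dom0 and DomU and the numbers compared:

```python
import time

def sleep_overshoot_us(request_us, iterations=50):
    """Average number of microseconds by which sleep() overshoots request_us."""
    total_overshoot = 0.0
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(request_us / 1e6)          # sleep for the requested delay
        elapsed_us = (time.monotonic() - start) * 1e6
        total_overshoot += elapsed_us - request_us
    return total_overshoot / iterations
```

An average overshoot in the tens of milliseconds for a sub-millisecond sleep would be consistent with the 30ms credit-scheduler timeslice discussed above.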

> So without virtualization of CPUs for domU, soft real-time properties
> with regard to timers are met (and our RTP audio sync is doing much
> better). That's obviously not a solution, but it shows that Xen credit
> (and sEDF) scheduling is actually misbehaving on such tasks and there
> is an area to investigate.
>
Yes. Well, I think it says that, at least for your use case, the latency
introduced by virtualization per se (VMEXITs, or whatever the ARM
equivalent is, etc.) is not a showstopper... which is already
something! :-)

What we now need to figure out is with which scheduler(s), and, for
each scheduler, with which parameters, that property is preserved.
And, I agree, this is something to be investigated.

> I will keep you informed when more results are present, so stay
> tuned :)
>
Thanks for all the updates so far... I definitely will stay tuned. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-QEXupiMYfqFJv6DyZ+I8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLetN0ACgkQk4XaBE3IOsR9jQCfb4kKzzn7IBfRKkii/2KLlU1U
eZ0AnRh+nDvEQWCM5kNjgRJEfdTJwKf5
=Mc+L
-----END PGP SIGNATURE-----

--=-QEXupiMYfqFJv6DyZ+I8--


--===============0868226796670297393==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0868226796670297393==--


From xen-devel-bounces@lists.xen.org Tue Jan 21 18:12:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5foM-0003Lu-4y; Tue, 21 Jan 2014 18:12:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5foK-0003LZ-2P
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 18:12:28 +0000
Received: from [85.158.143.35:20322] by server-2.bemta-4.messagelabs.com id
	65/50-11386-B88BED25; Tue, 21 Jan 2014 18:12:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390327945!13146081!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31062 invoked from network); 21 Jan 2014 18:12:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:12:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92947557"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:12:25 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 13:12:24 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	19:12:23 +0100
Message-ID: <52DEB887.8070409@citrix.com>
Date: Tue, 21 Jan 2014 18:12:23 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, Ian Campbell
	<ian.campbell@citrix.com>, George Dunlap <george.dunlap@eu.citrix.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Subject: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I have been giving nested virt a try, and have my first bug to report. 
This is still ongoing, and is by no means complete yet.

Setup:
Each reference to XenServer below means a trunk XenServer build based on 4.4-rc2.

Single Intel Haswell SDP (Grantley platform):
Native hypervisor: XenServer

Two L1 guests:
  XenServer (running with EPT)
  XenServer (running with shadow)


When attempting to create an L2 EPT HVM domain under an L1 shadow
domain, the L1 shadow domain is killed with:

(XEN) <vm_launch_fail> error code 7
(XEN) domain_crash_sync called from vmcs.c:1293
(XEN) Domain 16 (vcpu#3) crashed on cpu#2:
(XEN) ----[ Xen-4.4.0-xs82349-d  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    2
(XEN) RIP:    0000:[<0000000000000000>]
(XEN) RFLAGS: 0000000000000002   CONTEXT: hvm guest
(XEN) rax: 0000000000000000   rbx: ffff83043cad8000   rcx: ffff83043cadff80
(XEN) rdx: ffff82d0801d6ea0   rsi: 0000000000000000   rdi: ffff82d0801e2e8c
(XEN) rbp: ffff82d080105680   rsp: 0000000000000000   r8:  ffff830064100000
(XEN) r9:  ffff82d0801056ee   r10: ffff83043cadff70   r11: 0000000000000000
(XEN) r12: ffff83043cadff50   r13: ffff830441e42000   r14: ffff830064100000
(XEN) r15: ffff82d080189425   cr0: 0000000000000039   cr4: 0000000000002050
(XEN) cr3: 0000000000000000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000


I am continuing experiments with different VMs under each L1 hypervisor,
to see what else breaks.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:12:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5foA-0003Ke-O3; Tue, 21 Jan 2014 18:12:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <will.deacon@arm.com>) id 1W5foA-0003KZ-1D
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 18:12:18 +0000
Received: from [193.109.254.147:45320] by server-6.bemta-14.messagelabs.com id
	EF/67-14958-188BED25; Tue, 21 Jan 2014 18:12:17 +0000
X-Env-Sender: will.deacon@arm.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390327936!9982063!1
X-Originating-IP: [217.140.96.50]
X-SpamReason: No, hits=0.8 required=7.0 tests=MANY_EXCLAMATIONS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15417 invoked from network); 21 Jan 2014 18:12:16 -0000
Received: from cam-admin0.cambridge.arm.com (HELO
	cam-admin0.cambridge.arm.com) (217.140.96.50)
	by server-11.tower-27.messagelabs.com with SMTP;
	21 Jan 2014 18:12:16 -0000
Received: from mudshark.cambridge.arm.com (mudshark.cambridge.arm.com
	[10.1.203.36])
	by cam-admin0.cambridge.arm.com (8.12.6/8.12.6) with ESMTP id
	s0LI7tki019684; Tue, 21 Jan 2014 18:07:55 GMT
Date: Tue, 21 Jan 2014 18:07:50 +0000
From: Will Deacon <will.deacon@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140121180750.GO30706@mudshark.cambridge.arm.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: "linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>, Catalin Marinas <Catalin.Marinas@arm.com>,
	"jaccon.bastiaansen@gmail.com" <jaccon.bastiaansen@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> Remove !GENERIC_ATOMIC64 build dependency:
> - introduce xen_atomic64_xchg
> - use it to implement xchg_xen_ulong
> 
> Remove !CPU_V6 build dependency:
> - introduce __cmpxchg8 and __cmpxchg16, compiled even ifdef
>   CONFIG_CPU_V6
> - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: arnd@arndb.de
> CC: linux@arm.linux.org.uk
> CC: will.deacon@arm.com
> CC: catalin.marinas@arm.com
> CC: linux-arm-kernel@lists.infradead.org
> CC: linux-kernel@vger.kernel.org
> CC: xen-devel@lists.xenproject.org
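
For readers following along, sync_cmpxchg has the semantics of a
fully-ordered compare-and-swap. A portable C11 sketch of those semantics
is shown below; this is illustrative only (the patch itself uses ARM
exclusive load/store instructions, and cmpxchg8_demo is a hypothetical
name, not the kernel API):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative sketch of byte-wide cmpxchg semantics using C11 atomics;
 * the kernel implements this with ldrex/strex-family instructions on ARM. */
static inline uint8_t cmpxchg8_demo(_Atomic uint8_t *ptr,
                                    uint8_t old, uint8_t new_)
{
    uint8_t expected = old;
    /* seq_cst compare-and-swap, matching sync_cmpxchg's full ordering;
     * on failure, 'expected' is overwritten with the value observed. */
    atomic_compare_exchange_strong(ptr, &expected, new_);
    return expected; /* cmpxchg conventionally returns the previous value */
}
```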

  Reviewed-by: Will Deacon <will.deacon@arm.com>

Cheers Stefano,

Will

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Jan 21 18:14:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fqh-0003fZ-Ox; Tue, 21 Jan 2014 18:14:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W5d4g-0007n7-FE
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 15:17:10 +0000
Received: from [85.158.137.68:13868] by server-9.bemta-3.messagelabs.com id
	94/AE-13104-57F8ED25; Tue, 21 Jan 2014 15:17:09 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390317424!9647723!1
X-Originating-IP: [72.30.238.142]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20813 invoked from network); 21 Jan 2014 15:17:05 -0000
Received: from nm36-vm6.bullet.mail.bf1.yahoo.com (HELO
	nm36-vm6.bullet.mail.bf1.yahoo.com) (72.30.238.142)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 15:17:05 -0000
Received: from [66.196.81.172] by nm36.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 15:17:04 -0000
Received: from [98.139.211.205] by tm18.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 15:17:04 -0000
Received: from [127.0.0.1] by smtp214.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 15:17:04 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390317424; bh=/iUpiX8sdruO2IQ4GbXhQ3g4uwiZSDZ45rPPG909nZY=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=jLx4Bpa8kT8+iJ7r8efiwar6QgmSv80bTiWIQdoFLmftgisfLG5Cll5uxH/pma2eThYHC0pkIdRANPynA8dRnFL0BVUbPf4SJXW9w6geN4jevgITAzqd/LQHJ8khYERB4LhjsKpXl9jDk6w0SHcf8xE/xaZidjDnVSgtt4dUEFk=
X-Yahoo-Newman-Id: 102752.61657.bm@smtp214.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: lS9p6vYVM1lXl7oUW.Gs4UKVMCi3rOV3IyYBxSKnZGG2jy.
	SnT46mQMZwlCxfsBGI8_.IgyS25qyyozBzb2wJM38seXvqCQr1.On05Jjdnm
	StuLygMG8z79Yt1Ohf35PCJqJwHKoCFA4u8_QFx5DEQUak30gMS6S5GkzKjm
	M_2tjsP5b0NdYMOXjpiaCFYgB5UDBkQj7PXYeJXTDsgQ22pbVliczc4bud67
	Q6qz1.yGumXHtJJWSpar0q6csTWYHDpJpmu3QG_YMx_JoOgq1_iTkP4h61vY
	hHo4PopdsxIt0RyT0IV04IYVTfoaSPfarTJsOqhFOvPUyOpgKPsWeTHi92pY
	_KhoAuZpIICp7QYI9r0_iDdQ3KugMrdHxfx8s4JNmh3wCuyYGjptMspEr9cT
	N1DMC1WG4a2IKxcVruWauECe13ajezdK58WcQdqvfl0FIulsTVeX5GH0Q48M
	NsyDzc.xojuNprNiUW78kKM2O6PxA7o6GMsBGHtmbrx0KlUBPmonnN0eDK8c
	tP7uYy_XbWvqUXA3dDiJQ4M0WO0VBEsfE31jHOZyUyfB4mLZLx2e.BK_qJsl
	B5qwTLhOHpa9IAfTNa5nFooiN1oAY_iOlMYXsZDvQ57pdUnWbhoZJtI9sZzs
	UKo.u72G9hbbsCUr2Ba.gHRmArgEm.OgKWtzNxeou
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp214.mail.bf1.yahoo.com with SMTP; 21 Jan 2014 07:17:04 -0800 PST
Message-ID: <1390317421.4634.5.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 21 Jan 2014 08:17:01 -0700
In-Reply-To: <1390296705.20516.82.camel@kazak.uk.xensource.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<1390296705.20516.82.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Tue, 21 Jan 2014 18:14:54 +0000
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot in get_rte_index
 without no-amd-iommu-perdev-intremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 09:31 +0000, Ian Campbell wrote:
> extending the subject. Also CCing a few likely people.
> 
> On Mon, 2014-01-20 at 12:06 -0700, Eric Houby wrote:
> > xen-devel list,
> > 
> > Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> > Screen shot of the crash is attached.  Hardware is a Gigabyte
> > GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to xen command
> > line allows the system to boot as expected.
> 
> Are you also using any ivrs_ioapic[] command line options?
> 
> There's a thread at
> http://lists.xen.org/archives/html/xen-devel/2013-10/msg00313.html which
> did use that and then saw similar looking results.
> 
> Ian.
> 

Ian,

I am not using the ivrs_ioapic[] command-line option, although I plan to give
it a try once I have 4.4 running properly.  My full xen command line is:

dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all
guest_loglvl=all iommu=debug,verbose,no-amd-iommu-perdev-intremap

With no-amd-iommu-perdev-intremap the system boots; without that option,
Xen crashes at boot.

Per Jan's request, I will try to get serial logs for the group.

-Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:14:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fqi-0003fh-5F; Tue, 21 Jan 2014 18:14:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W5fIN-0000eA-VW
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:39:28 +0000
Received: from [85.158.139.211:31044] by server-10.bemta-5.messagelabs.com id
	54/D9-01405-FC0BED25; Tue, 21 Jan 2014 17:39:27 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390325964!809213!1
X-Originating-IP: [216.109.115.141]
X-SpamReason: No, hits=1.9 required=7.0 tests=BODY_RANDOM_LONG,
	DATE_IN_PAST_06_12,FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10374 invoked from network); 21 Jan 2014 17:39:25 -0000
Received: from nm47-vm6.bullet.mail.bf1.yahoo.com (HELO
	nm47-vm6.bullet.mail.bf1.yahoo.com) (216.109.115.141)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 17:39:25 -0000
Received: from [98.139.212.150] by nm47.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
Received: from [98.139.213.15] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
Received: from [127.0.0.1] by smtp115.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390325963; bh=af69iXjWxg8sieaPPySP2ldgec7zK27h05KMEX2Z5ic=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:From:To:Cc:References:In-Reply-To:Subject:Date:Message-ID:MIME-Version:Content-Type:X-Mailer:Thread-Index:Content-Language;
	b=1N4v2ka37o1/kfwY5GzClbKpTl64rWrH0RR3NMP+izsaz/YzSCuv3eCCjfeB8eJFSd71mlN9hequC8yzw2w9hknTpL3WRD2BoiNGDjQJmXwSJ3FUpEmeqQwx62zedQMVE4rJGlK0eT4wEacDmM+eFRVaXsEcz8L7qFHFNgnyPUA=
X-Yahoo-Newman-Id: 841176.45525.bm@smtp115.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: wT9LpDgVM1nC05D1pYf7PYcHLAILXOTSJQih20nUfB3c22q
	V9qWzcFQCBFEWUpkhwJ2gaQp2uadxDHYDQPVs36EumCtyKBCxXcidr.APqi2
	A4CQJPicYV8JpitmuxntxF.BXI9_YK.ZStfx49F.1BQFrQJ.cQLjE7FOi6SR
	ExKFKJhreJMu2pkMx3aACHLmynyOz15UJX.Zl3nNv.0HwA13WJRVb9ib1UXu
	9FXl1bEZH_1kbba87HiKKFnKLl1dgfVS59OhNnLuvndDEMEl01D3Uz4KScb8
	g_nVdmkA5peAJ1Jvu6pkxk1gjhshxXEVCoxNH8nisiqumtEroROZxGw4WesV
	09w.67C7AQzD74QXnV6nkHRCXxotR.0RAy8d5P2881LWO8n73vW92mkoobj6
	1AJBcu8p2hKSlzeCtKJbKEcrdjExy.cNNl3dY4hjB4m_WmDrpT7pHQEn63Q0
	KNS8Pkt7bfhrjZC1LjH2G1HWw79XTLOoA8AJBimvdh4Bib8bQUhwv6mZzGQx
	sby.KdHZrriwMxyFwrh_jxf4Pvtwfe1EkUvoH.HUq6J8bXIru
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from phobos (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp115.mail.bf1.yahoo.com with SMTP; 21 Jan 2014 09:39:23 -0800 PST
From: "Eric Houby" <ehouby@yahoo.com>
To: "'Ian Campbell'" <Ian.Campbell@citrix.com>,
	"'Jan Beulich'" <JBeulich@suse.com>
References: <1390244796.2322.6.camel@astar.houby.net>	
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
In-Reply-To: <1390297542.20516.90.camel@kazak.uk.xensource.com>
Date: Tue, 21 Jan 2014 03:39:20 -0700
Message-ID: <000001cf1695$0c960450$25c20cf0$@yahoo.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0001_01CF165A.60397640"
X-Mailer: Microsoft Outlook 15.0
Thread-Index: AQNRMRPL93br9nVY3GTwuF+Cjfn1+gHJ8i9eAouag2yXaBrfsA==
Content-Language: en-us
X-Mailman-Approved-At: Tue, 21 Jan 2014 18:14:54 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, January 21, 2014 2:46 AM
> To: Jan Beulich
> Cc: ehouby@yahoo.com; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
> 
> On Tue, 2014-01-21 at 09:32 +0000, Jan Beulich wrote:
> > >>> On 20.01.14 at 20:06, Eric Houby <ehouby@yahoo.com> wrote:
> > > Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> > > Screen shot of the crash is attached.  Hardware is a Gigabyte
> > > GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to
> xen
> > > command line allows the system to boot as expected.
> >
> > For analyzing this we need a full serial log
> 
> See http://wiki.xen.org/wiki/Xen_Serial_Console for how to configure such a
> thing.
> 
> >  (with "iommu=debug" in
> > place on the Xen command line), not just a screen shot.
> >
> > Jan
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
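For reference, the serial-console setup on the wiki page above amounts to boot parameters along these lines. Values are illustrative (the log from this machine uses com1=38400,8n1,pci); port, baud rate, and dom0 console name vary by hardware.

```
# Xen (hypervisor) command line: mirror console output to the first serial port
com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all

# dom0 Linux command line: keep a console on the Xen virtual console
console=hvc0 earlyprintk=xen
```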


Xen console logs are attached.

Thanks,

Eric

------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain;
	name="xen44crash.txt"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
	filename="xen44crash.txt"


astar login: (XEN) Domain 0 shutdown: rebooting machine.
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.987 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d50
(XEN)    0000000000000005 ffff830000086ef0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_NextPart_000_0001_01CF165A.60397640--




From xen-devel-bounces@lists.xen.org Tue Jan 21 18:14:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fqi-0003fh-5F; Tue, 21 Jan 2014 18:14:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W5fIN-0000eA-VW
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 17:39:28 +0000
Received: from [85.158.139.211:31044] by server-10.bemta-5.messagelabs.com id
	54/D9-01405-FC0BED25; Tue, 21 Jan 2014 17:39:27 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390325964!809213!1
X-Originating-IP: [216.109.115.141]
X-SpamReason: No, hits=1.9 required=7.0 tests=BODY_RANDOM_LONG,
	DATE_IN_PAST_06_12,FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10374 invoked from network); 21 Jan 2014 17:39:25 -0000
Received: from nm47-vm6.bullet.mail.bf1.yahoo.com (HELO
	nm47-vm6.bullet.mail.bf1.yahoo.com) (216.109.115.141)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 17:39:25 -0000
Received: from [98.139.212.150] by nm47.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
Received: from [98.139.213.15] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
Received: from [127.0.0.1] by smtp115.mail.bf1.yahoo.com with NNFMP;
	21 Jan 2014 17:39:23 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390325963; bh=af69iXjWxg8sieaPPySP2ldgec7zK27h05KMEX2Z5ic=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:From:To:Cc:References:In-Reply-To:Subject:Date:Message-ID:MIME-Version:Content-Type:X-Mailer:Thread-Index:Content-Language;
	b=1N4v2ka37o1/kfwY5GzClbKpTl64rWrH0RR3NMP+izsaz/YzSCuv3eCCjfeB8eJFSd71mlN9hequC8yzw2w9hknTpL3WRD2BoiNGDjQJmXwSJ3FUpEmeqQwx62zedQMVE4rJGlK0eT4wEacDmM+eFRVaXsEcz8L7qFHFNgnyPUA=
X-Yahoo-Newman-Id: 841176.45525.bm@smtp115.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: wT9LpDgVM1nC05D1pYf7PYcHLAILXOTSJQih20nUfB3c22q
	V9qWzcFQCBFEWUpkhwJ2gaQp2uadxDHYDQPVs36EumCtyKBCxXcidr.APqi2
	A4CQJPicYV8JpitmuxntxF.BXI9_YK.ZStfx49F.1BQFrQJ.cQLjE7FOi6SR
	ExKFKJhreJMu2pkMx3aACHLmynyOz15UJX.Zl3nNv.0HwA13WJRVb9ib1UXu
	9FXl1bEZH_1kbba87HiKKFnKLl1dgfVS59OhNnLuvndDEMEl01D3Uz4KScb8
	g_nVdmkA5peAJ1Jvu6pkxk1gjhshxXEVCoxNH8nisiqumtEroROZxGw4WesV
	09w.67C7AQzD74QXnV6nkHRCXxotR.0RAy8d5P2881LWO8n73vW92mkoobj6
	1AJBcu8p2hKSlzeCtKJbKEcrdjExy.cNNl3dY4hjB4m_WmDrpT7pHQEn63Q0
	KNS8Pkt7bfhrjZC1LjH2G1HWw79XTLOoA8AJBimvdh4Bib8bQUhwv6mZzGQx
	sby.KdHZrriwMxyFwrh_jxf4Pvtwfe1EkUvoH.HUq6J8bXIru
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from phobos (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp115.mail.bf1.yahoo.com with SMTP; 21 Jan 2014 09:39:23 -0800 PST
From: "Eric Houby" <ehouby@yahoo.com>
To: "'Ian Campbell'" <Ian.Campbell@citrix.com>,
	"'Jan Beulich'" <JBeulich@suse.com>
References: <1390244796.2322.6.camel@astar.houby.net>	
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
In-Reply-To: <1390297542.20516.90.camel@kazak.uk.xensource.com>
Date: Tue, 21 Jan 2014 03:39:20 -0700
Message-ID: <000001cf1695$0c960450$25c20cf0$@yahoo.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0001_01CF165A.60397640"
X-Mailer: Microsoft Outlook 15.0
Thread-Index: AQNRMRPL93br9nVY3GTwuF+Cjfn1+gHJ8i9eAouag2yXaBrfsA==
Content-Language: en-us
X-Mailman-Approved-At: Tue, 21 Jan 2014 18:14:54 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multipart message in MIME format.

------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Tuesday, January 21, 2014 2:46 AM
> To: Jan Beulich
> Cc: ehouby@yahoo.com; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
> 
> On Tue, 2014-01-21 at 09:32 +0000, Jan Beulich wrote:
> > >>> On 20.01.14 at 20:06, Eric Houby <ehouby@yahoo.com> wrote:
> > > Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> > > Screen shot of the crash is attached.  Hardware is a Gigabyte
> > > GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to
> xen
> > > command line allows the system to boot as expected.
> >
> > For analyzing this we need a full serial log
> 
> See http://wiki.xen.org/wiki/Xen_Serial_Console for how to configure such a
> thing.
> 
> >  (with "iommu=debug" in
> > place on the Xen command line), not just a screen shot.
> >
> > Jan
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 


Xen console logs are attached.

Thanks,

Eric

------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain;
	name="xen44crash.txt"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
	filename="xen44crash.txt"


astar login: (XEN) Domain 0 shutdown: rebooting machine.
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 =
20131212 (Red Hat 4.8.2-7)) debug=3Dy Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=3D2048M,max:2048M =
dom0_max_vcpus=3D2 dom0_vcpus_pin loglvl=3Dall guest_loglvl=3Dall =
iommu=3Ddebug,verbose com1=3D38400,8n1,pci console=3Dcom1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.987 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d50
(XEN)    0000000000000005 ffff830000086ef0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...


------=_NextPart_000_0001_01CF165A.60397640
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------=_NextPart_000_0001_01CF165A.60397640--



From xen-devel-bounces@lists.xen.org Tue Jan 21 18:16:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5frs-0003sX-S7; Tue, 21 Jan 2014 18:16:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5frq-0003sD-PI
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 18:16:06 +0000
Received: from [85.158.139.211:16675] by server-16.bemta-5.messagelabs.com id
	15/2E-11843-669BED25; Tue, 21 Jan 2014 18:16:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390328163!11118803!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32704 invoked from network); 21 Jan 2014 18:16:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:16:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92949146"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:16:03 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 13:16:02 -0500
Received: from [192.168.1.84] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Tue, 21 Jan 2014
	19:16:01 +0100
Message-ID: <52DEB961.40506@citrix.com>
Date: Tue, 21 Jan 2014 18:16:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, Ian Campbell
	<ian.campbell@citrix.com>, George Dunlap <george.dunlap@eu.citrix.com>
References: <52DEB887.8070409@citrix.com>
In-Reply-To: <52DEB887.8070409@citrix.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 18:12, Andrew Cooper wrote:
> Hello,
>
> I have been giving nested virt a try, and have my first bug to report. 
> This is still ongoing, and is by no means complete yet.
>
> Setup:
> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>
> Single Intel Haswell SDP (Grantley platform):
> Native hypervisor: XenServer
>
> Two L1 guests:
>   XenServer (running with EPT)
>   XenServer (running with shadow)
>
>
> When attempting to create an L2 EPT HVM domain under an L1 shadow
> domain, the L1 shadow domain is killed with:
>
> (XEN) <vm_launch_fail> error code 7
> (XEN) domain_crash_sync called from vmcs.c:1293
> (XEN) Domain 16 (vcpu#3) crashed on cpu#2:
> (XEN) ----[ Xen-4.4.0-xs82349-d  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    2
> (XEN) RIP:    0000:[<0000000000000000>]
> (XEN) RFLAGS: 0000000000000002   CONTEXT: hvm guest
> (XEN) rax: 0000000000000000   rbx: ffff83043cad8000   rcx: ffff83043cadff80
> (XEN) rdx: ffff82d0801d6ea0   rsi: 0000000000000000   rdi: ffff82d0801e2e8c
> (XEN) rbp: ffff82d080105680   rsp: 0000000000000000   r8:  ffff830064100000
> (XEN) r9:  ffff82d0801056ee   r10: ffff83043cadff70   r11: 0000000000000000
> (XEN) r12: ffff83043cadff50   r13: ffff830441e42000   r14: ffff830064100000
> (XEN) r15: ffff82d080189425   cr0: 0000000000000039   cr4: 0000000000002050
> (XEN) cr3: 0000000000000000   cr2: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000

I suppose it is worth adding that `xl dmesg` from the L1 shadow
XenServer shows:

(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - MSR direct-access bitmap
(XEN)  - Unrestricted Guest
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) Brought up 4 CPUs

This indicates that EPT is advertised even to the shadow L1 domain.  I
can't think of a technical reason why it wouldn't work.

>
>
> I am continuing experiments with different VMs under each L1 hypervisor,
> to see what else breaks.
>
> ~Andrew
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:19:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fuc-0004DJ-EX; Tue, 21 Jan 2014 18:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5fub-0004DA-VG
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 18:18:58 +0000
Received: from [85.158.137.68:33821] by server-5.bemta-3.messagelabs.com id
	65/21-25188-11ABED25; Tue, 21 Jan 2014 18:18:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390328335!6824788!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21853 invoked from network); 21 Jan 2014 18:18:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="95004393"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 18:18:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 13:18:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5fuW-0006eC-AS;
	Tue, 21 Jan 2014 18:18:52 +0000
Date: Tue, 21 Jan 2014 18:17:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <20140121180750.GO30706@mudshark.cambridge.arm.com>
Message-ID: <alpine.DEB.2.02.1401211817350.21510@kaball.uk.xensource.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140121180750.GO30706@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Catalin Marinas <Catalin.Marinas@arm.com>,
	"jaccon.bastiaansen@gmail.com" <jaccon.bastiaansen@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Will Deacon wrote:
> On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> > Remove !GENERIC_ATOMIC64 build dependency:
> > - introduce xen_atomic64_xchg
> > - use it to implement xchg_xen_ulong
> > 
> > Remove !CPU_V6 build dependency:
> > - introduce __cmpxchg8 and __cmpxchg16, compiled even ifdef
> >   CONFIG_CPU_V6
> > - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: arnd@arndb.de
> > CC: linux@arm.linux.org.uk
> > CC: will.deacon@arm.com
> > CC: catalin.marinas@arm.com
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: linux-kernel@vger.kernel.org
> > CC: xen-devel@lists.xenproject.org
> 
>   Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> Cheers Stefano,

Thanks!!
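As an aside for readers of the archive: the dispatch-by-size shape the changelog describes (a sync_cmpxchg built on byte and halfword primitives) can be sketched in portable C11. The names `cmpxchg8`/`cmpxchg16` below are stand-ins with assumed "return the previous value" semantics; this is not the actual kernel patch code.

```c
/* Illustrative sketch only -- not the Linux/ARM implementation. */
#include <stdint.h>
#include <stddef.h>
#include <stdatomic.h>

/* Compare-and-swap one byte; returns the value previously stored. */
static uint8_t cmpxchg8(_Atomic uint8_t *ptr, uint8_t old, uint8_t new)
{
    /* On failure, 'old' is updated to the current contents of *ptr,
     * so returning it yields the previous value in both cases. */
    atomic_compare_exchange_strong(ptr, &old, new);
    return old;
}

/* Compare-and-swap two bytes; returns the value previously stored. */
static uint16_t cmpxchg16(_Atomic uint16_t *ptr, uint16_t old, uint16_t new)
{
    atomic_compare_exchange_strong(ptr, &old, new);
    return old;
}

/* A sync_cmpxchg-style wrapper dispatching on operand size. */
static unsigned long sketch_sync_cmpxchg(void *ptr, unsigned long old,
                                         unsigned long new, size_t size)
{
    switch (size) {
    case 1: return cmpxchg8((_Atomic uint8_t *)ptr, old, new);
    case 2: return cmpxchg16((_Atomic uint16_t *)ptr, old, new);
    default: return old; /* real code would also handle 4/8 bytes */
    }
}
```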

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:19:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fuc-0004DJ-EX; Tue, 21 Jan 2014 18:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5fub-0004DA-VG
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 18:18:58 +0000
Received: from [85.158.137.68:33821] by server-5.bemta-3.messagelabs.com id
	65/21-25188-11ABED25; Tue, 21 Jan 2014 18:18:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390328335!6824788!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21853 invoked from network); 21 Jan 2014 18:18:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="95004393"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 18:18:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 13:18:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5fuW-0006eC-AS;
	Tue, 21 Jan 2014 18:18:52 +0000
Date: Tue, 21 Jan 2014 18:17:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <20140121180750.GO30706@mudshark.cambridge.arm.com>
Message-ID: <alpine.DEB.2.02.1401211817350.21510@kaball.uk.xensource.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140121180750.GO30706@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Catalin Marinas <Catalin.Marinas@arm.com>,
	"jaccon.bastiaansen@gmail.com" <jaccon.bastiaansen@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Will Deacon wrote:
> On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> > Remove !GENERIC_ATOMIC64 build dependency:
> > - introduce xen_atomic64_xchg
> > - use it to implement xchg_xen_ulong
> > 
> > Remove !CPU_V6 build dependency:
> > - introduce __cmpxchg8 and __cmpxchg16, compiled even when
> >   CONFIG_CPU_V6 is defined
> > - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: arnd@arndb.de
> > CC: linux@arm.linux.org.uk
> > CC: will.deacon@arm.com
> > CC: catalin.marinas@arm.com
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: linux-kernel@vger.kernel.org
> > CC: xen-devel@lists.xenproject.org
> 
>   Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> Cheers Stefano,

Thanks!!
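The commit above adds byte- and halfword-sized compare-and-exchange helpers (__cmpxchg8/__cmpxchg16) so sync_cmpxchg can be built on all ARM variants. As a generic illustration only (not the kernel implementation, which uses ldrexb/strexb assembly), the same operation can be sketched with C11 atomics; cmpxchg8_sketch and cmpxchg8_demo are hypothetical names:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Generic 1-byte compare-and-swap built on C11 atomics, illustrating
 * the semantics that a helper like __cmpxchg8 provides: if *ptr equals
 * old, store newval; either way, return the value previously observed. */
uint8_t cmpxchg8_sketch(_Atomic uint8_t *ptr, uint8_t old, uint8_t newval)
{
    uint8_t expected = old;
    /* On success *ptr becomes newval; on failure `expected` is updated
     * to the current value, which is exactly what cmpxchg returns. */
    atomic_compare_exchange_strong(ptr, &expected, newval);
    return expected;
}

/* Self-check: exercise the success and failure paths.  Returns 0 if
 * both behave with cmpxchg semantics. */
int cmpxchg8_demo(void)
{
    _Atomic uint8_t v = 5;
    if (cmpxchg8_sketch(&v, 5, 7) != 5) return 1;  /* success: old value returned */
    if (v != 7) return 2;                          /* value was swapped */
    if (cmpxchg8_sketch(&v, 5, 9) != 7) return 3;  /* failure: current value returned */
    if (v != 7) return 4;                          /* value left unchanged */
    return 0;
}
```

The caller can tell success from failure by comparing the return value with the `old` it passed in, which is how sync_cmpxchg-style users are written.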

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:21:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5fwq-0004bm-1U; Tue, 21 Jan 2014 18:21:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5fwp-0004bh-4H
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 18:21:15 +0000
Received: from [85.158.137.68:14060] by server-2.bemta-3.messagelabs.com id
	18/93-17329-A9ABED25; Tue, 21 Jan 2014 18:21:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390328472!10505703!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3718 invoked from network); 21 Jan 2014 18:21:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:21:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92951638"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:21:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 13:21:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5fht-0006SZ-Lk;
	Tue, 21 Jan 2014 18:05:49 +0000
Date: Tue, 21 Jan 2014 18:04:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52DEB3CD.60202@citrix.com>
Message-ID: <alpine.DEB.2.02.1401211804340.21510@kaball.uk.xensource.com>
References: <1390323331-23077-1-git-send-email-oleksandr.savchenko@globallogic.com>
	<52DEB3CD.60202@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Alexander Savchenko <oleksandr.savchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/3] usbback: Add new features
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, David Vrabel wrote:
> On 21/01/14 16:55, Alexander Savchenko wrote:
> > Add new_vport and remove_vport.
> > Ported from Noboru Iwamatsu's patch: PVUSB: backend driver.
> > 
> > Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
> > ---
> >  drivers/usb/host/xen-usbback/usbdev.c |   93 +++++++++++++++++++++++++++++++--
> >  1 file changed, 88 insertions(+), 5 deletions(-)
> 
> What is this patch against?  Upstream linux doesn't have a backend
> driver.  Is patch 1/3 (which hasn't arrived on xen-devel) adding it?

http://marc.info/?l=xen-devel&m=139032355522882


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:28:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5g3E-0004nK-4o; Tue, 21 Jan 2014 18:27:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5g3C-0004nF-RB
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 18:27:51 +0000
Received: from [193.109.254.147:54991] by server-5.bemta-14.messagelabs.com id
	F6/10-03510-62CBED25; Tue, 21 Jan 2014 18:27:50 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390328868!12320371!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2322 invoked from network); 21 Jan 2014 18:27:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:27:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92954409"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:27:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 13:27:47 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5g38-0006oM-2n;
	Tue, 21 Jan 2014 18:27:46 +0000
Date: Tue, 21 Jan 2014 18:27:45 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140121182745.GA23328@zion.uk.xensource.com>
References: <1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52CC01F6.6050502@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 07, 2014 at 02:32:38PM +0100, Paolo Bonzini wrote:
> On 07/01/2014 13:34, Wei Liu wrote:
> > On Mon, Jan 06, 2014 at 09:53:37PM +0100, Paolo Bonzini wrote:
> >> On 06/01/2014 19:00, Andreas Färber wrote:
> >>> On 06.01.2014 16:39, Anthony Liguori wrote:
> >>>> We already have accel=xen.  I'm echoing Peter's suggestion of having the
> >>>> ability to compile out accel=tcg.
> >>>
> >>> Didn't you and Paolo even have patches for that a while ago?
> >>
> >> Yes, but some code shuffling is required in each target to make sure you
> >> can compile out translate-all.c, cputlb.c, etc.  So my patches only
> >> worked for x86 at the time.
> >>
> >

> > Hi Paolo, I don't monitor qemu-devel on a daily basis. Do you have a
> > reference to your patches? Thanks.
>

> Googling "disable tcg" would have provided an answer, but the patches
> were old enough to be basically useless.  I'll refresh the current
> version in the next few days.  Currently I am (or try to be) on
> vacation, so I cannot really say when, but I'll do my best. :)
>


Hi Paolo, any update?

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKe-0005tr-2A; Tue, 21 Jan 2014 18:45:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKc-0005te-Ns
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:50 +0000
Received: from [85.158.139.211:44919] by server-13.bemta-5.messagelabs.com id
	24/F4-11357-E50CED25; Tue, 21 Jan 2014 18:45:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390329939!11122870!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18730 invoked from network); 21 Jan 2014 18:45:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92961943"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKU-0000v8-EG;
	Tue, 21 Jan 2014 18:45:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKT-0007Mt-0b;
	Tue, 21 Jan 2014 18:45:41 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:30 +0000
Message-ID: <1390329931-28251-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving domain
	config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes valgrind a bit happier.

It also addresses
Coverity-CID: 1055903

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/xl_cmdimpl.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..aff6f90 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
                      ctx, fd, optdata_begin, hdr.optional_data_len,
                      source, "header"));
 
+    free(optdata_begin);
+
     fprintf(stderr, "Saving to %s new xl format (info"
             " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
             source, hdr.mandatory_flags, hdr.optional_flags,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKa-0005tL-59; Tue, 21 Jan 2014 18:45:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKZ-0005tB-45
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:47 +0000
Received: from [85.158.139.211:62402] by server-16.bemta-5.messagelabs.com id
	A5/3C-11843-A50CED25; Tue, 21 Jan 2014 18:45:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390329939!11122870!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18561 invoked from network); 21 Jan 2014 18:45:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92961933"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:44 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKW-0000vB-3L;
	Tue, 21 Jan 2014 18:45:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKU-0007My-W9;
	Tue, 21 Jan 2014 18:45:43 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:31 +0000
Message-ID: <1390329931-28251-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port: always
	free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If portstr!=NULL but plen==0 this function would leak portstr.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

---
v2: Fix whitespace error.
---
 tools/xenstore/xs.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index a636498..dd03a85 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
     portstr = xs_read(xs, XBT_NULL, path, &plen);
     xs_daemon_close(xs);
 
-    if (!portstr || !plen)
-        return -1;
+    if (!portstr || !plen) {
+        port = -1;
+        goto out;
+    }
 
     port = atoi(portstr);
-    free(portstr);
 
+out:
+    free(portstr);
     return port;
 }
 
-- 
1.7.10.4
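The fix above replaces an early return with a single exit label so that portstr is freed on every path, including the portstr != NULL, plen == 0 case that previously leaked. The same goto-out cleanup idiom can be sketched in isolation; parse_port is a hypothetical stand-in for xs_suspend_evtchn_port, with a buffer and length playing the roles of portstr and plen:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>

/* Single-exit cleanup idiom: every path, including the error paths,
 * funnels through `out` where the buffer is released exactly once. */
int parse_port(const char *buf, unsigned len)
{
    /* strndup stands in for xs_read: it can return a non-NULL buffer
     * even when len is 0, which is the case the original code leaked. */
    char *portstr = buf ? strndup(buf, len) : NULL;
    int port;

    if (!portstr || !len) {
        port = -1;
        goto out;
    }

    port = atoi(portstr);

out:
    free(portstr);  /* free(NULL) is a no-op, so this is safe on all paths */
    return port;
}
```

Because free(NULL) is defined to do nothing, the label needs no NULL check, which keeps the error paths as short as the success path.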


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKf-0005uA-Jm; Tue, 21 Jan 2014 18:45:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKe-0005tp-9G
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:52 +0000
Received: from [85.158.143.35:36488] by server-3.bemta-4.messagelabs.com id
	B6/CD-32360-F50CED25; Tue, 21 Jan 2014 18:45:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390329947!571523!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11596 invoked from network); 21 Jan 2014 18:45:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="95015622"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:46 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKS-0000v5-8L;
	Tue, 21 Jan 2014 18:45:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKR-0007Mo-1l;
	Tue, 21 Jan 2014 18:45:39 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:29 +0000
Message-ID: <1390329931-28251-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] libxl: events: Pass correct nfds to poll
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_event.c:eventloop_iteration would pass the allocated pollfds
array size, rather than the used size, to poll (and to
afterpoll_internal).

The effect is that if the number of fds to poll on decreases, libxl
will poll on stale entries.  Because of the way the return value from
poll is processed, these stale entries are often harmless, since any
events coming back from poll for them are ignored by libxl.  However,
they could cause malfunctions:

It could result in unwanted SIGTTIN/SIGTTOU/SIGPIPE, for example, if
the fd has been reused to refer to an object which can generate those
signals.  Alternatively, it could result in libxl spinning if the
stale entry refers to an fd which now happens to be ready for the
previously-requested operation.

I have tested this with a localhost migration and inspected the strace
output.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index bdef7ac..1c48fee 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1386,7 +1386,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
      * can unlock it when it polls.
      */
     EGC_GC;
-    int rc;
+    int rc, nfds;
     struct timeval now;
     
     rc = libxl__gettimeofday(gc, &now);
@@ -1395,7 +1395,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     int timeout;
 
     for (;;) {
-        int nfds = poller->fd_polls_allocd;
+        nfds = poller->fd_polls_allocd;
         timeout = -1;
         rc = beforepoll_internal(gc, poller, &nfds, poller->fd_polls,
                                  &timeout, now);
@@ -1413,7 +1413,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     }
 
     CTX_UNLOCK;
-    rc = poll(poller->fd_polls, poller->fd_polls_allocd, timeout);
+    rc = poll(poller->fd_polls, nfds, timeout);
     CTX_LOCK;
 
     if (rc < 0) {
@@ -1428,8 +1428,7 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
     rc = libxl__gettimeofday(gc, &now);
     if (rc) goto out;
 
-    afterpoll_internal(egc, poller,
-                       poller->fd_polls_allocd, poller->fd_polls, now);
+    afterpoll_internal(egc, poller, nfds, poller->fd_polls, now);
 
     rc = 0;
  out:
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKU-0005sv-Ol; Tue, 21 Jan 2014 18:45:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKU-0005sp-7f
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:42 +0000
Received: from [85.158.139.211:62218] by server-1.bemta-5.messagelabs.com id
	48/62-21065-550CED25; Tue, 21 Jan 2014 18:45:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390329939!11122870!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18341 invoked from network); 21 Jan 2014 18:45:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92961893"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:38 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKP-0000v2-UO;
	Tue, 21 Jan 2014 18:45:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKO-0007Mg-Bb;
	Tue, 21 Jan 2014 18:45:36 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:28 +0000
Message-ID: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [Xen-devel] [PATCH v2 0/3] tools: Fixes for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I think these three bugfixes are clear 4.4 material.

 a 1/3 libxl: events: Pass correct nfds to poll
 a 2/3 xl: Free optdata_begin when saving domain config
 a 3/3 xenstore: xs_suspend_evtchn_port: always free portstr

They have all been acked by Ian Campbell.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKe-0005tr-2A; Tue, 21 Jan 2014 18:45:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKc-0005te-Ns
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:50 +0000
Received: from [85.158.139.211:44919] by server-13.bemta-5.messagelabs.com id
	24/F4-11357-E50CED25; Tue, 21 Jan 2014 18:45:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390329939!11122870!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18730 invoked from network); 21 Jan 2014 18:45:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92961943"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKU-0000v8-EG;
	Tue, 21 Jan 2014 18:45:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKT-0007Mt-0b;
	Tue, 21 Jan 2014 18:45:41 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:30 +0000
Message-ID: <1390329931-28251-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xl: Free optdata_begin when saving domain
	config
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This makes valgrind a bit happier.

It is also
Coverity-CID: 1055903

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/xl_cmdimpl.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..aff6f90 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3442,6 +3442,8 @@ static void save_domain_core_writeconfig(int fd, const char *source,
                      ctx, fd, optdata_begin, hdr.optional_data_len,
                      source, "header"));
 
+    free(optdata_begin);
+
     fprintf(stderr, "Saving to %s new xl format (info"
             " 0x%"PRIx32"/0x%"PRIx32"/%"PRIu32")\n",
             source, hdr.mandatory_flags, hdr.optional_flags,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 18:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 18:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gKa-0005tL-59; Tue, 21 Jan 2014 18:45:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5gKZ-0005tB-45
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 18:45:47 +0000
Received: from [85.158.139.211:62402] by server-16.bemta-5.messagelabs.com id
	A5/3C-11843-A50CED25; Tue, 21 Jan 2014 18:45:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390329939!11122870!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18561 invoked from network); 21 Jan 2014 18:45:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 18:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,696,1384300800"; d="scan'208";a="92961933"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 18:45:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 13:45:44 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKW-0000vB-3L;
	Tue, 21 Jan 2014 18:45:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5gKU-0007My-W9;
	Tue, 21 Jan 2014 18:45:43 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 21 Jan 2014 18:45:31 +0000
Message-ID: <1390329931-28251-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xenstore: xs_suspend_evtchn_port: always
	free portstr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If portstr != NULL but plen == 0, this function would leak portstr.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

---
v2: Fix whitespace error.
---
 tools/xenstore/xs.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index a636498..dd03a85 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -1095,12 +1095,15 @@ int xs_suspend_evtchn_port(int domid)
     portstr = xs_read(xs, XBT_NULL, path, &plen);
     xs_daemon_close(xs);
 
-    if (!portstr || !plen)
-        return -1;
+    if (!portstr || !plen) {
+        port = -1;
+        goto out;
+    }
 
     port = atoi(portstr);
-    free(portstr);
 
+out:
+    free(portstr);
     return port;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gbb-0007BD-7h; Tue, 21 Jan 2014 19:03:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbZ-0007At-Md
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:21 +0000
Received: from [85.158.143.35:61941] by server-3.bemta-4.messagelabs.com id
	A1/4A-32360-974CED25; Tue, 21 Jan 2014 19:03:21 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390330999!1323211!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14444 invoked from network); 21 Jan 2014 19:03:20 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:20 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id B3974B945;
	Tue, 21 Jan 2014 14:03:18 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:52:44 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-5-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-5-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211352.44784.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:19 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 04/13] xen: implement basic PIRQ support
	for Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:53 am Roger Pau Monne wrote:
> This allows Dom0 to manage physical hardware, redirecting the
> physical interrupts to event channels.
> ---
>  sys/x86/xen/xen_intr.c |  190 
+++++++++++++++++++++++++++++++++++++++++++++--
>  sys/xen/xen_intr.h     |   11 +++
>  2 files changed, 192 insertions(+), 9 deletions(-)
> 
> diff --git a/sys/x86/xen/xen_intr.c b/sys/x86/xen/xen_intr.c
> index bc0781e..340e5ed 100644
> --- a/sys/x86/xen/xen_intr.c
> +++ b/sys/x86/xen/xen_intr.c
> @@ -104,6 +104,8 @@ DPCPU_DECLARE(struct vcpu_info *, vcpu_info);
>  
>  #define is_valid_evtchn(x)	((x) != 0)
>  
> +#define	EEXIST	17	/* Xen "already exists" error */

Is there a xen_errno.h header?  Might be nice to have one and give these 
constants unique names (e.g. XEN_EEXIST or some such) to avoid 
confusion/conflicts with <sys/errno.h>.

Other than that I think this looks fine.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gba-0007Ay-FA; Tue, 21 Jan 2014 19:03:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbY-0007Aj-6i
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:20 +0000
Received: from [85.158.143.35:61817] by server-2.bemta-4.messagelabs.com id
	5B/25-11386-774CED25; Tue, 21 Jan 2014 19:03:19 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390330997!13112305!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9671 invoked from network); 21 Jan 2014 19:03:18 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:18 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 71FC2B94C;
	Tue, 21 Jan 2014 14:03:16 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:25:29 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-2-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-2-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211325.29460.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:16 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 01/13] xen: use the hardware e820 map on
	Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:50 am Roger Pau Monne wrote:
> We need to do some tweaking of the hardware e820 map, since the memory
> layout provided by Xen and the e820 map don't match.
> 
> This consists of clamping the e820 map so that regions above max_pfn
> are marked as unusable.
> ---
>  sys/x86/xen/pv.c |   35 +++++++++++++++++++++++++++++++++--
>  1 files changed, 33 insertions(+), 2 deletions(-)
> 
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index 4f7415e..ab4afba 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -297,17 +297,48 @@ static void
>  xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
>  {
>  	struct xen_memory_map memmap;
> +	unsigned long max_pfn = HYPERVISOR_start_info->nr_pages;
> +	u_int64_t mem_end = ptoa(max_pfn);
>  	u_int32_t size;
> -	int rc;
> +	int rc, mem_op, i;

One minor nit: style-wise, it is preferred not to initialize variables in
declarations.  Aside from that, this looks fine to me.
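For reference, the clamping the patch performs can be exercised in isolation. This is a minimal standalone sketch: `struct smap_entry` and the `SMAP_TYPE_*` values are simplified stand-ins for the kernel's real `bios_smap` definitions in <machine/pc/bios.h>, not the actual types.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simplified stand-ins for the kernel's bios_smap entries. */
#define SMAP_TYPE_MEMORY	1
#define SMAP_TYPE_RESERVED	2

struct smap_entry {
	uint64_t base;
	uint64_t length;
	uint32_t type;
};

/*
 * Mirror of the loop in the patch: usable regions entirely above mem_end
 * are re-typed as reserved, and a usable region straddling mem_end is
 * truncated so it stops at mem_end.
 */
static void
clamp_smap(struct smap_entry *map, int nentries, uint64_t mem_end)
{
	int i;

	for (i = 0; i < nentries; i++) {
		uint64_t end = map[i].base + map[i].length;

		if (map[i].type != SMAP_TYPE_MEMORY)
			continue;
		if (map[i].base > mem_end) {
			/* Mark as reserved */
			map[i].type = SMAP_TYPE_RESERVED;
			continue;
		}
		if (end > mem_end) {
			/* Truncate region */
			map[i].length -= end - mem_end;
		}
	}
}
```

With mem_end = ptoa(max_pfn), a usable region crossing mem_end gets cut at the boundary while one starting above it is reserved, which is exactly the behavior of the loop in the quoted hunk.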

>  	/* Fetch the E820 map from Xen */
>  	memmap.nr_entries = MAX_E820_ENTRIES;
>  	set_xen_guest_handle(memmap.buffer, xen_smap);
> -	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
> +	mem_op = xen_initial_domain() ?
> +	                XENMEM_machine_memory_map :
> +	                XENMEM_memory_map;
> +	rc = HYPERVISOR_memory_op(mem_op, &memmap);
>  	if (rc)
>  		panic("unable to fetch Xen E820 memory map");
>  	size = memmap.nr_entries * sizeof(xen_smap[0]);
>  
> +	/*
> +	 * This should be improved, Xen provides us with a single
> +	 * chunk of physical memory that goes from 0 to max_pfn,
> +	 * and what we do here is clamp the HW memory map to make
> +	 * sure regions above max_pfn are marked as reserved.
> +	 *
> +	 * TRTTD would be to move the memory not used because of
> +	 * the holes in the HW memory map to the now clamped regions
> +	 * using XENMEM_{decrease/increase}_reservation.
> +	 */
> +	for (i = 0; i < memmap.nr_entries; i++) {
> +		u_int64_t end = xen_smap[i].base + xen_smap[i].length;
> +		if (xen_smap[i].type == SMAP_TYPE_MEMORY) {
> +			if (xen_smap[i].base > mem_end) {
> +				/* Mark as reserved */
> +				xen_smap[i].type = SMAP_TYPE_RESERVED;
> +				continue;
> +			}
> +			if (end > mem_end) {
> +				/* Truncate region */
> +				xen_smap[i].length -= end - mem_end;
> +			}
> +		}
> +	}
> +
> +
>  	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
>  }
>  
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gbe-0007Bq-1n; Tue, 21 Jan 2014 19:03:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbc-0007BO-Rr
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:24 +0000
Received: from [85.158.137.68:10205] by server-9.bemta-3.messagelabs.com id
	17/DD-13104-C74CED25; Tue, 21 Jan 2014 19:03:24 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390331002!9693362!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22611 invoked from network); 21 Jan 2014 19:03:23 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:23 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id B55EEB99A;
	Tue, 21 Jan 2014 14:03:21 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 14:02:58 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-7-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-7-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211402.58721.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:21 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 06/13] xen: Dom0 console fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:55 am Roger Pau Monne wrote:
> Minor fixes and workarounds to make the Xen Dom0 console work.

Looks ok to me.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gbd-0007Ba-L1; Tue, 21 Jan 2014 19:03:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbb-0007B9-ER
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:23 +0000
Received: from [85.158.143.35:60202] by server-2.bemta-4.messagelabs.com id
	BD/35-11386-A74CED25; Tue, 21 Jan 2014 19:03:22 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390331001!13112316!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10065 invoked from network); 21 Jan 2014 19:03:21 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:21 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 5EBAEB946;
	Tue, 21 Jan 2014 14:03:20 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:55:13 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-6-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-6-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211355.13800.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:20 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 05/13] xen: implement Xen IO APIC ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:54 am Roger Pau Monne wrote:
> Implement a different set of hooks for IO APIC to use when running
> under Xen Dom0.
> ---
>  sys/x86/xen/pv.c |   44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 44 insertions(+), 0 deletions(-)
> 
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index ab4afba..e5ad200 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -49,6 +49,7 @@ __FBSDID("$FreeBSD$");
>  #include <vm/vm_pager.h>
>  #include <vm/vm_param.h>
>  
> +#include <x86/apicreg.h>
>  #include <machine/sysarch.h>
>  #include <machine/clock.h>
>  #include <machine/pc/bios.h>
> @@ -58,6 +59,7 @@ __FBSDID("$FreeBSD$");
>  #include <xen/xen-os.h>
>  #include <xen/hypervisor.h>
>  #include <xen/pv.h>
> +#include <xen/xen_intr.h>
>  
>  #include <xen/interface/vcpu.h>
>  
> @@ -73,6 +75,11 @@ static caddr_t xen_pv_parse_preload_data(u_int64_t);
>  static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
>  
>  static void xen_pv_set_init_ops(void);
> +
> +static u_int xen_pv_ioapic_read(volatile ioapic_t *, int);
> +static void xen_pv_ioapic_write(volatile ioapic_t *, int, u_int);
> +static void xen_pv_ioapic_register_intr(struct ioapic_intsrc *);
> +
>  /*---------------------------- Extern Declarations ---------------------------*/
>  /* Variables used by amd64 mp_machdep to start APs */
>  extern struct mtx ap_boot_mtx;
> @@ -92,6 +99,13 @@ struct init_ops xen_init_ops = {
>  	.parse_memmap =		xen_pv_parse_memmap,
>  };
>  
> +/* Xen ioapic_ops implementation */
> +struct ioapic_ops xen_ioapic_ops = {
> +	.read =			xen_pv_ioapic_read,
> +	.write =		xen_pv_ioapic_write,
> +	.register_intr =	xen_pv_ioapic_register_intr,
> +};
> +
>  static struct
>  {
>  	const char	*ev;
> @@ -342,6 +356,34 @@ xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
>  	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
>  }
>  
> +static u_int
> +xen_pv_ioapic_read(volatile ioapic_t *apic, int reg)
> +{
> +	struct physdev_apic apic_op;
> +	int rc;
> +
> +	mtx_assert(&icu_lock, MA_OWNED);
> +
> +	apic_op.apic_physbase = pmap_kextract((vm_offset_t) apic);

Seems a shame to have to do this.  I wouldn't mind if you changed the 
read/write callbacks to take 'struct ioapic *' instead and then use the 
'io_paddr' member.  I do think that would be cleaner.
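A rough sketch of what that refactor might look like. The pared-down 'struct ioapic' and 'struct physdev_apic' below are hypothetical stand-ins (the real ones live in the io_apic code and Xen's physdev interface); the point is only that a hook taking 'struct ioapic *' can fill apic_physbase straight from io_paddr and skip pmap_kextract() entirely:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical trimmed-down stand-ins, for illustration only. */
struct ioapic {
	uint64_t io_paddr;	/* physical base of the IOAPIC window */
};

struct physdev_apic {
	uint64_t apic_physbase;
	uint32_t reg;
	uint32_t value;
};

/*
 * If the read/write callbacks received 'struct ioapic *' instead of the
 * mapped register pointer, the Xen implementation could use the already
 * known physical address directly rather than recovering it from the
 * kernel virtual address.
 */
static void
xen_apic_op_init(struct physdev_apic *op, struct ioapic *io, int reg)
{
	op->apic_physbase = io->io_paddr;
	op->reg = reg;
}
```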

> +	apic_op.reg = reg;
> +	rc = HYPERVISOR_physdev_op(PHYSDEVOP_apic_read, &apic_op);
> +	if (rc)
> +		panic("apic_read operation failed");
> +
> +	return (apic_op.value);
> +}
> +
> +static void
> +xen_pv_ioapic_write(volatile ioapic_t *apic, int reg, u_int val)
> +{
> +}

I guess not allowing writes is on purpose?

> +
> +static void
> +xen_pv_ioapic_register_intr(struct ioapic_intsrc *pin)
> +{
> +	xen_register_pirq(pin->io_irq, pin->io_activehi, pin->io_edgetrigger);
> +}
> +
>  static void
>  xen_pv_set_init_ops(void)
>  {
> @@ -349,4 +391,6 @@ xen_pv_set_init_ops(void)
>  	init_ops = xen_init_ops;
>  	/* Disable lapic */
>  	lapic_disabled = true;
> +	/* IOAPIC ops for Xen PV */
> +	ioapic_ops = xen_ioapic_ops;
>  }
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gbb-0007BD-7h; Tue, 21 Jan 2014 19:03:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbZ-0007At-Md
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:21 +0000
Received: from [85.158.143.35:61941] by server-3.bemta-4.messagelabs.com id
	A1/4A-32360-974CED25; Tue, 21 Jan 2014 19:03:21 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390330999!1323211!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14444 invoked from network); 21 Jan 2014 19:03:20 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:20 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id B3974B945;
	Tue, 21 Jan 2014 14:03:18 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:52:44 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-5-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-5-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211352.44784.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:19 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 04/13] xen: implement basic PIRQ support
	for Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:53 am Roger Pau Monne wrote:
> This allows Dom0 to manage physical hardware, redirecting the
> physical interrupts to event channels.
> ---
>  sys/x86/xen/xen_intr.c |  190 +++++++++++++++++++++++++++++++++++++++++++++--
>  sys/xen/xen_intr.h     |   11 +++
>  2 files changed, 192 insertions(+), 9 deletions(-)
> 
> diff --git a/sys/x86/xen/xen_intr.c b/sys/x86/xen/xen_intr.c
> index bc0781e..340e5ed 100644
> --- a/sys/x86/xen/xen_intr.c
> +++ b/sys/x86/xen/xen_intr.c
> @@ -104,6 +104,8 @@ DPCPU_DECLARE(struct vcpu_info *, vcpu_info);
>  
>  #define is_valid_evtchn(x)	((x) != 0)
>  
> +#define	EEXIST	17	/* Xen "already exists" error */

Is there a xen_errno.h header?  Might be nice to have one and give these 
constants unique names (e.g. XEN_EEXIST or some such) to avoid 
confusion/conflicts with <sys/errno.h>.
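A sketch of what such a header could hold. The file and names below are hypothetical (no xen_errno.h exists in the tree at this point); the values mirror Xen's public error numbering, which follows the classic Linux errno values:

```c
#include <assert.h>

/*
 * Hypothetical sys/xen/xen_errno.h: XEN_-prefixed names so that Xen
 * hypercall return values cannot be confused with, or conflict with,
 * the kernel's <sys/errno.h> constants.
 */
#define XEN_EPERM	1	/* Operation not permitted */
#define XEN_ENOENT	2	/* No such entity */
#define XEN_EEXIST	17	/* Already exists */
#define XEN_EINVAL	22	/* Invalid argument */
```

Hypercall call sites would then compare against XEN_EEXIST instead of a locally defined EEXIST, keeping the Xen error namespace separate from the kernel's.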

Other than that I think this looks fine.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gba-0007B5-S2; Tue, 21 Jan 2014 19:03:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbY-0007Ak-HY
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:20 +0000
Received: from [193.109.254.147:27669] by server-14.bemta-14.messagelabs.com
	id 47/7E-12628-774CED25; Tue, 21 Jan 2014 19:03:19 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390330998!12289290!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23208 invoked from network); 21 Jan 2014 19:03:19 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:19 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 9586DB977;
	Tue, 21 Jan 2014 14:03:17 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:27:17 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-3-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-3-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211327.17258.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:17 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 02/13] ioapic: introduce hooks for some
	ioapic ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:51 am Roger Pau Monne wrote:
> Create some hooks for IO APIC operations that will diverge from bare
> metal when implemented for Xen Dom0.
> 
> This patch should not introduce any changes in functionality; it's a
> preparatory patch for the implementation of the Xen IO APIC hooks.

I think this is fine.  I should really create a sys/x86/include/apicvar.h
as there is a lot shared between those two headers.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gbd-0007Ba-L1; Tue, 21 Jan 2014 19:03:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbb-0007B9-ER
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:23 +0000
Received: from [85.158.143.35:60202] by server-2.bemta-4.messagelabs.com id
	BD/35-11386-A74CED25; Tue, 21 Jan 2014 19:03:22 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390331001!13112316!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10065 invoked from network); 21 Jan 2014 19:03:21 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:21 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 5EBAEB946;
	Tue, 21 Jan 2014 14:03:20 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:55:13 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-6-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-6-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211355.13800.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:20 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 05/13] xen: implement Xen IO APIC ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:54 am Roger Pau Monne wrote:
> Implement a different set of hooks for IO APIC to use when running
> under Xen Dom0.
> ---
>  sys/x86/xen/pv.c |   44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 44 insertions(+), 0 deletions(-)
> 
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index ab4afba..e5ad200 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -49,6 +49,7 @@ __FBSDID("$FreeBSD$");
>  #include <vm/vm_pager.h>
>  #include <vm/vm_param.h>
>  
> +#include <x86/apicreg.h>
>  #include <machine/sysarch.h>
>  #include <machine/clock.h>
>  #include <machine/pc/bios.h>
> @@ -58,6 +59,7 @@ __FBSDID("$FreeBSD$");
>  #include <xen/xen-os.h>
>  #include <xen/hypervisor.h>
>  #include <xen/pv.h>
> +#include <xen/xen_intr.h>
>  
>  #include <xen/interface/vcpu.h>
>  
> @@ -73,6 +75,11 @@ static caddr_t xen_pv_parse_preload_data(u_int64_t);
>  static void xen_pv_parse_memmap(caddr_t, vm_paddr_t *, int *);
>  
>  static void xen_pv_set_init_ops(void);
> +
> +static u_int xen_pv_ioapic_read(volatile ioapic_t *, int);
> +static void xen_pv_ioapic_write(volatile ioapic_t *, int, u_int);
> +static void xen_pv_ioapic_register_intr(struct ioapic_intsrc *);
> +
>  /*---------------------------- Extern Declarations ---------------------------*/
>  /* Variables used by amd64 mp_machdep to start APs */
>  extern struct mtx ap_boot_mtx;
> @@ -92,6 +99,13 @@ struct init_ops xen_init_ops = {
>  	.parse_memmap =		xen_pv_parse_memmap,
>  };
>  
> +/* Xen ioapic_ops implementation */
> +struct ioapic_ops xen_ioapic_ops = {
> +	.read =			xen_pv_ioapic_read,
> +	.write =		xen_pv_ioapic_write,
> +	.register_intr =	xen_pv_ioapic_register_intr,
> +};
> +
>  static struct
>  {
>  	const char	*ev;
> @@ -342,6 +356,34 @@ xen_pv_parse_memmap(caddr_t kmdp, vm_paddr_t *physmap, int *physmap_idx)
>  	bios_add_smap_entries(xen_smap, size, physmap, physmap_idx);
>  }
>  
> +static u_int
> +xen_pv_ioapic_read(volatile ioapic_t *apic, int reg)
> +{
> +	struct physdev_apic apic_op;
> +	int rc;
> +
> +	mtx_assert(&icu_lock, MA_OWNED);
> +
> +	apic_op.apic_physbase = pmap_kextract((vm_offset_t) apic);

Seems a shame to have to do this.  I wouldn't mind if you changed the 
read/write callbacks to take 'struct ioapic *' instead and then use the 
'io_paddr' member.  I do think that would be cleaner.

> +	apic_op.reg = reg;
> +	rc = HYPERVISOR_physdev_op(PHYSDEVOP_apic_read, &apic_op);
> +	if (rc)
> +		panic("apic_read operation failed");
> +
> +	return (apic_op.value);
> +}
> +
> +static void
> +xen_pv_ioapic_write(volatile ioapic_t *apic, int reg, u_int val)
> +{
> +}

I guess not allowing writes is on purpose?

> +
> +static void
> +xen_pv_ioapic_register_intr(struct ioapic_intsrc *pin)
> +{
> +	xen_register_pirq(pin->io_irq, pin->io_activehi, pin->io_edgetrigger);
> +}
> +
>  static void
>  xen_pv_set_init_ops(void)
>  {
> @@ -349,4 +391,6 @@ xen_pv_set_init_ops(void)
>  	init_ops = xen_init_ops;
>  	/* Disable lapic */
>  	lapic_disabled = true;
> +	/* IOAPIC ops for Xen PV */
> +	ioapic_ops = xen_ioapic_ops;
>  }
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:03:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gba-0007B5-S2; Tue, 21 Jan 2014 19:03:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1W5gbY-0007Ak-HY
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:03:20 +0000
Received: from [193.109.254.147:27669] by server-14.bemta-14.messagelabs.com
	id 47/7E-12628-774CED25; Tue, 21 Jan 2014 19:03:19 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390330998!12289290!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23208 invoked from network); 21 Jan 2014 19:03:19 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:03:19 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 9586DB977;
	Tue, 21 Jan 2014 14:03:17 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 21 Jan 2014 13:27:17 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-3-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1387884062-41154-3-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Message-Id: <201401211327.17258.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Tue, 21 Jan 2014 14:03:17 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 02/13] ioapic: introduce hooks for some
	ioapic ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 6:20:51 am Roger Pau Monne wrote:
> Create some hooks for IO APIC operations that will diverge from bare
> metal when implemented for Xen Dom0.
> 
> This patch should not introduce any changes in functionality; it's a
> preparatory patch for the implementation of the Xen IO APIC hooks.

I think this is fine.  I should really create a sys/x86/include/apicvar.h
as there is a lot shared between those two headers.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggq-0007t4-Vg; Tue, 21 Jan 2014 19:08:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggm-0007rI-NV
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:45 +0000
Received: from [193.109.254.147:4880] by server-13.bemta-14.messagelabs.com id
	30/3B-19374-CB5CED25; Tue, 21 Jan 2014 19:08:44 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390331321!12220740!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21826 invoked from network); 21 Jan 2014 19:08:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:43 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8bcZ031123
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:37 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8aK0018157
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:36 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8Zod026998; Tue, 21 Jan 2014 19:08:35 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:35 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:50 -0500
Message-Id: <1390331342-3967-6-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 05/17] x86/VPMU: Handle APIC_LVTPC accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update APIC_LVTPC vector when HVM guest writes to it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           | 14 +++++++++++---
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 16 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 84b8a36..f6c542b 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -290,8 +290,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( !is_pv_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -302,8 +300,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( !is_pv_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index bc06010..d954f4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 8d920c0..a966b91 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -532,19 +532,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -710,10 +697,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d6a9ff6..0770bcf 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -227,18 +235,18 @@ void vpmu_initialise(struct vcpu *v)
     case X86_VENDOR_AMD:
         if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     case X86_VENDOR_INTEL:
         if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
         opt_vpmu_enabled = 0;
-        break;
+        return;
     }
 }
 
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggk-0007qo-FE; Tue, 21 Jan 2014 19:08:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggi-0007p2-Nb
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:41 +0000
Received: from [85.158.139.211:23763] by server-3.bemta-5.messagelabs.com id
	0C/2C-04773-8B5CED25; Tue, 21 Jan 2014 19:08:40 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390331317!11126147!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18151 invoked from network); 21 Jan 2014 19:08:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:39 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8Xxa031037
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:34 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LJ8WXV022927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8Wuo026863; Tue, 21 Jan 2014 19:08:32 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:31 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:46 -0500
Message-Id: <1390331342-3967-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 01/17] common/symbols: Export hypervisor
	symbols to privileged guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 19 ++++++++++++
 xen/include/xen/symbols.h                |  7 +++--
 xen/include/xlat.lst                     |  1 +
 7 files changed, 95 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..cdb6886 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE_64(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.u.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, XEN_KSYM_NAME_LEN + 1) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..98f9534 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++(*symnum);
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..ba9da49 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,24 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    union {
+        XEN_GUEST_HANDLE(char) name;
+        uint64_t pad;
+    } u;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +571,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..adbf91d 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,8 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/xen.h>
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +11,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+extern int xensyms_read(uint32_t *symnum, uint32_t *type,
+                        uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index f00cef3..cf89583 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -84,6 +84,7 @@
 ?	processor_px			platform.h
 !	psd_package			platform.h
 ?	xenpf_enter_acpi_sleep		platform.h
+!	xenpf_symdata			platform.h
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 !	sched_poll			sched.h
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggq-0007sR-5T; Tue, 21 Jan 2014 19:08:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggm-0007r1-DQ
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:44 +0000
Received: from [85.158.137.68:45335] by server-9.bemta-3.messagelabs.com id
	F8/E2-13104-BB5CED25; Tue, 21 Jan 2014 19:08:43 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390331321!10517230!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1099 invoked from network); 21 Jan 2014 19:08:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:42 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8W7H028943
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8VIS029569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:31 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8VPQ017953; Tue, 21 Jan 2014 19:08:31 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:31 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:45 -0500
Message-Id: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 00/17] x86/PMU: Xen PMU PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the fourth version of the PV (and now PVH) PMU patches.

The following patch series adds PMU support in Xen for PV(H)
guests. There is a companion patchset for Linux kernel. In addition,
another set of changes will be provided (later) for userland perf
code.

This version has the following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector and
this needs to be fixed separately.


A few notes that may help reviewing: 

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor 
CPU is used for passing register values as well as PMU state at the time of the
PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or regular vector
interrupts for both HVM and PV(H). The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV(H) guests.
* A PV guest's interrupt handler does not read/write PMU MSRs directly. Instead, it
accesses xenpmu_data_t and flushes it to HW before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0-only profiling. dom0 collects samples for everyone; sampling
    in guests is suspended.
* The /proc/xen/xensyms file exports the hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* The VPMU infrastructure is now used for HVM, PV and PVH and has therefore been
moved up from the hvm subtree


Changes in v4:

* Added support for PVH guests:
  o changes in pvpmu_init() to accommodate both PV and PVH guests, still in patch 10 
  o more careful use of is_hvm_domain
  o Additional patch (16)
* Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe handling
* Fixed dom0's VCPU selection in privileged mode
* Added a cast in register copy for 32-bit PV guests cpu_user_regs_t in vpmu_do_interrupt.
  (don't want to expose compat_cpu_user_regs in a public header)
* Renamed public structures by prefixing them with "xen_"
* Added an entry for xenpf_symdata in xlat.lst
* Fixed pv_cpuid check for vpmu-specific cpuid adjustments
* Various code style fixes
* Eliminated anonymous unions
* Added more verbiage to NMI patch description


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol per hypercall is returned, performance
appears to be acceptable: reading the whole file from dom0 userland takes on
average about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest


Boris Ostrovsky (17):
  common/symbols: Export hypervisor symbols to privileged guest
  x86/VPMU: Stop AMD counters when called from vpmu_save_force()
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Support for PVH guests
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  18 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/hvm.c                   |   3 +-
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vmx.c               |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  35 +-
 xen/arch/x86/vpmu.c                      | 671 ++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 499 ++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 936 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  98 ++++
 xen/include/public/arch-x86/xenpmu.h     |  66 +++
 xen/include/public/platform.h            |  19 +
 xen/include/public/xen.h                 |   2 +
 xen/include/public/xenpmu.h              | 102 ++++
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   7 +-
 xen/include/xlat.lst                     |   1 +
 38 files changed, 2601 insertions(+), 1873 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:49 -0500
Message-Id: <1390331342-3967-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 04/17] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.
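The counts that are now read once at initialization come from CPUID leaf 0xa. The field extraction can be sketched as below (mask/shift values are restated from the Intel SDM; the names only echo Xen's PMU_GENERAL_NR_*/PMU_FIXED_NR_* constants and are not taken from Xen headers):

```c
#include <stdint.h>

/* CPUID leaf 0xa, per the Intel SDM:
 * EAX[15:8] - number of general-purpose counters
 * EDX[4:0]  - number of fixed-function counters */
#define PMU_GENERAL_NR_SHIFT 8
#define PMU_GENERAL_NR_MASK  (0xffu << PMU_GENERAL_NR_SHIFT)
#define PMU_FIXED_NR_SHIFT   0
#define PMU_FIXED_NR_MASK    (0x1fu << PMU_FIXED_NR_SHIFT)

static unsigned int general_pmc_count(uint32_t eax)
{
    return (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
}

static unsigned int fixed_pmc_count(uint32_t edx)
{
    return (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT;
}
```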

Properly clean up when core2_vpmu_alloc_resource() fails and add routines
to remove MSRs from VMCS.
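The new removal routines all follow the same compact-the-array pattern: find the entry, shift later entries down, clear the last slot, decrement the count. A standalone sketch of that pattern (struct and function names here are illustrative, not Xen's):

```c
#include <stdint.h>

/* Hypothetical stand-in for Xen's struct vmx_msr_entry. */
struct msr_entry {
    uint32_t index;
    uint32_t mbz;
    uint64_t data;
};

/*
 * Remove 'msr' from an MSR save/load area by shifting later entries
 * down, mirroring the logic of vmx_rm_guest_msr(). Returns the new
 * entry count (unchanged if the MSR was not present).
 */
static unsigned int msr_area_remove(struct msr_entry *area,
                                    unsigned int count, uint32_t msr)
{
    unsigned int idx;

    for ( idx = 0; idx < count; idx++ )
        if ( area[idx].index == msr )
            break;

    if ( idx == count )        /* not present: nothing to do */
        return count;

    for ( ; idx < count - 1; idx++ )
    {
        area[idx].index = area[idx + 1].index;
        area[idx].data = area[idx + 1].data;
    }
    area[count - 1].index = 0;

    return count - 1;
}
```

In the actual patch the caller also rewrites the VMCS entry/exit MSR counts after shrinking the array.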


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 5368670..8d920c0 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 edx;
 
-    return arch_pmc_cnt;
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -373,56 +361,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -433,10 +404,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -452,7 +421,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -497,6 +466,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -509,27 +479,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -538,27 +506,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -577,7 +545,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -593,7 +560,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -678,7 +645,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -696,27 +663,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -734,7 +699,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -797,18 +762,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggk-0007qo-FE; Tue, 21 Jan 2014 19:08:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggi-0007p2-Nb
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:41 +0000
Received: from [85.158.139.211:23763] by server-3.bemta-5.messagelabs.com id
	0C/2C-04773-8B5CED25; Tue, 21 Jan 2014 19:08:40 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390331317!11126147!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18151 invoked from network); 21 Jan 2014 19:08:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:39 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8Xxa031037
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:34 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LJ8WXV022927
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8Wuo026863; Tue, 21 Jan 2014 19:08:32 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:31 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:46 -0500
Message-Id: <1390331342-3967-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 01/17] common/symbols: Export hypervisor
	symbols to privileged guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 19 ++++++++++++
 xen/include/xen/symbols.h                |  7 +++--
 xen/include/xlat.lst                     |  1 +
 7 files changed, 95 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..cdb6886 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE_64(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.u.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, XEN_KSYM_NAME_LEN + 1) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..98f9534 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++(*symnum);
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..ba9da49 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,24 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    union {
+        XEN_GUEST_HANDLE(char) name;
+        uint64_t pad;
+    } u;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +571,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..adbf91d 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,8 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/xen.h>
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +11,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+extern int xensyms_read(uint32_t *symnum, uint32_t *type,
+                        uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index f00cef3..cf89583 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -84,6 +84,7 @@
 ?	processor_px			platform.h
 !	psd_package			platform.h
 ?	xenpf_enter_acpi_sleep		platform.h
+!	xenpf_symdata			platform.h
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 !	sched_poll			sched.h
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggo-0007s2-Ud; Tue, 21 Jan 2014 19:08:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggl-0007qm-Lx
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:43 +0000
Received: from [85.158.139.211:34529] by server-3.bemta-5.messagelabs.com id
	51/3C-04773-9B5CED25; Tue, 21 Jan 2014 19:08:41 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390331319!8409671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28085 invoked from network); 21 Jan 2014 19:08:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8XP4028984
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:34 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8XCY029632
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8Xpu018041; Tue, 21 Jan 2014 19:08:33 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:32 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:47 -0500
Message-Id: <1390331342-3967-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 02/17] x86/VPMU: Stop AMD counters when
	called from vpmu_save_force()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the amd_vpmu_save() algorithm to accommodate cases where we need
to stop the counters from vpmu_save_force() (needed by subsequent PMU
patches).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c | 14 ++++----------
 xen/arch/x86/hvm/vpmu.c     | 12 ++++++------
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..bec40d8 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -223,22 +223,16 @@ static int amd_vpmu_save(struct vcpu *v)
     struct amd_vpmu_context *ctx = vpmu->context;
     unsigned int i;
 
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
     {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
         for ( i = 0; i < num_counters; i++ )
             wrmsrl(ctrls[i], 0);
 
-        return 0;
+        vpmu_set(vpmu, VPMU_FROZEN);
     }
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+            return 0;
 
     context_save(v);
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..a4e3664 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,13 +127,19 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
     vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -177,12 +183,8 @@ void vpmu_load(struct vcpu *v)
          * before saving the context.
          */
         if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
             on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                              vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
     } 
 
     /* Prevent forced context save from remote CPU */
@@ -195,9 +197,7 @@ void vpmu_load(struct vcpu *v)
         vpmu = vcpu_vpmu(prev);
 
         /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
         vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
         vpmu = vcpu_vpmu(v);
     }
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggq-0007sR-5T; Tue, 21 Jan 2014 19:08:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggm-0007r1-DQ
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:44 +0000
Received: from [85.158.137.68:45335] by server-9.bemta-3.messagelabs.com id
	F8/E2-13104-BB5CED25; Tue, 21 Jan 2014 19:08:43 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390331321!10517230!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1099 invoked from network); 21 Jan 2014 19:08:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:42 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8W7H028943
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8VIS029569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:31 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8VPQ017953; Tue, 21 Jan 2014 19:08:31 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:31 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:45 -0500
Message-Id: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 00/17] x86/PMU: Xen PMU PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the fourth version of the PV (and now PVH) PMU patches.

The following patch series adds PMU support in Xen for PV(H)
guests. There is a companion patchset for Linux kernel. In addition,
another set of changes will be provided (later) for userland perf
code.

This version has the following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector, and
this needs to be fixed separately.


A few notes that may help with reviewing:

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing register values as well as PMU state at the time of a
PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or as regular vector
interrupts for both HVM and PV(H). The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV(H) guests.
* A PV guest's interrupt handler does not read/write PMU MSRs directly. Instead, it
accesses xenpmu_data_t and flushes it to hardware before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0 only profiling. dom0 collects samples for everyone. Sampling
    in guests is suspended.
* /proc/xen/xensyms file exports hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* VPMU infrastructure is now used for HVM, PV and PVH and therefore has been moved
up from hvm subtree


Changes in v4:

* Added support for PVH guests:
  o changes in pvpmu_init() to accommodate both PV and PVH guests, still in patch 10 
  o more careful use of is_hvm_domain
  o Additional patch (16)
* Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe handling
* Fixed dom0's VCPU selection in privileged mode
* Added a cast to cpu_user_regs_t in the register copy for 32-bit PV guests in
  vpmu_do_interrupt (we don't want to expose compat_cpu_user_regs in a public header)
* Renamed public structures by prefixing them with "xen_"
* Added an entry for xenpf_symdata in xlat.lst
* Fixed pv_cpuid check for vpmu-specific cpuid adjustments
* Various code style fixes
* Eliminated anonymous unions
* Added more verbiage to NMI patch description


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol per hypercall is returned, performance
appears to be acceptable: reading the whole file from dom0 userland takes on average
about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest


Boris Ostrovsky (17):
  common/symbols: Export hypervisor symbols to privileged guest
  x86/VPMU: Stop AMD counters when called from vpmu_save_force()
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Support for PVH guests
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  18 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/hvm.c                   |   3 +-
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vmx.c               |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  35 +-
 xen/arch/x86/vpmu.c                      | 671 ++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 499 ++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 936 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  98 ++++
 xen/include/public/arch-x86/xenpmu.h     |  66 +++
 xen/include/public/platform.h            |  19 +
 xen/include/public/xen.h                 |   2 +
 xen/include/public/xenpmu.h              | 102 ++++
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   7 +-
 xen/include/xlat.lst                     |   1 +
 38 files changed, 2601 insertions(+), 1873 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggo-0007rj-8u; Tue, 21 Jan 2014 19:08:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggl-0007qu-Cq
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:43 +0000
Received: from [85.158.143.35:38992] by server-1.bemta-4.messagelabs.com id
	02/E8-02132-AB5CED25; Tue, 21 Jan 2014 19:08:42 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390331320!13164102!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16099 invoked from network); 21 Jan 2014 19:08:41 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:41 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8YtJ029050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:35 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LJ8YxB023009
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:34 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8X3D026923; Tue, 21 Jan 2014 19:08:33 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:33 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:48 -0500
Message-Id: <1390331342-3967-4-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 03/17] x86/VPMU: Minor VPMU cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update macros that modify VPMU flags to allow changing multiple bits at once.

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM). This
is needed by subsequent PMU patches.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c |  9 +++------
 xen/arch/x86/hvm/vpmu.c           | 11 +++--------
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bec40d8..84b8a36 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -236,7 +236,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( !is_pv_domain(v->domain) && 
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -276,7 +277,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -292,7 +293,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( !is_pv_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -303,7 +305,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( !is_pv_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -395,7 +398,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( !is_pv_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..5368670 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,10 +326,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
@@ -446,7 +443,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -813,7 +810,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a4e3664..d6a9ff6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,10 +127,7 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
         return;
 
     vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -138,8 +135,7 @@ static void vpmu_save_force(void *arg)
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -149,8 +145,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggo-0007s2-Ud; Tue, 21 Jan 2014 19:08:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggl-0007qm-Lx
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:43 +0000
Received: from [85.158.139.211:34529] by server-3.bemta-5.messagelabs.com id
	51/3C-04773-9B5CED25; Tue, 21 Jan 2014 19:08:41 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390331319!8409671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28085 invoked from network); 21 Jan 2014 19:08:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8XP4028984
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:34 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8XCY029632
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:33 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8Xpu018041; Tue, 21 Jan 2014 19:08:33 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:32 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:47 -0500
Message-Id: <1390331342-3967-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 02/17] x86/VPMU: Stop AMD counters when
	called from vpmu_save_force()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Change the amd_vpmu_save() algorithm to accommodate cases where we need
to stop counters from vpmu_save_force() (needed by subsequent PMU
patches).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c | 14 ++++----------
 xen/arch/x86/hvm/vpmu.c     | 12 ++++++------
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..bec40d8 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -223,22 +223,16 @@ static int amd_vpmu_save(struct vcpu *v)
     struct amd_vpmu_context *ctx = vpmu->context;
     unsigned int i;
 
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
     {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
         for ( i = 0; i < num_counters; i++ )
             wrmsrl(ctrls[i], 0);
 
-        return 0;
+        vpmu_set(vpmu, VPMU_FROZEN);
     }
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+            return 0;
 
     context_save(v);
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..a4e3664 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,13 +127,19 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
     vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -177,12 +183,8 @@ void vpmu_load(struct vcpu *v)
          * before saving the context.
          */
         if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
             on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                              vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
     } 
 
     /* Prevent forced context save from remote CPU */
@@ -195,9 +197,7 @@ void vpmu_load(struct vcpu *v)
         vpmu = vcpu_vpmu(prev);
 
         /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
         vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
         vpmu = vcpu_vpmu(v);
     }
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggs-0007th-5l; Tue, 21 Jan 2014 19:08:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggm-0007rE-Fu
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:45 +0000
Received: from [193.109.254.147:33769] by server-1.bemta-14.messagelabs.com id
	96/71-15600-BB5CED25; Tue, 21 Jan 2014 19:08:43 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390331321!12220739!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21802 invoked from network); 21 Jan 2014 19:08:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:42 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8avu029108
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:37 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8ZU3018129
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:36 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LJ8Y6j023020; Tue, 21 Jan 2014 19:08:34 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:34 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:49 -0500
Message-Id: <1390331342-3967-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 04/17] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails and add routines
to remove MSRs from VMCS.


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 5368670..8d920c0 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 eax;
 
-    return arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -373,56 +361,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -433,10 +404,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load stuff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -452,7 +421,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -497,6 +466,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -509,27 +479,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -538,27 +506,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -577,7 +545,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -593,7 +560,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -678,7 +645,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -696,27 +663,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -734,7 +699,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -797,18 +762,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:48 -0500
Message-Id: <1390331342-3967-4-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 03/17] x86/VPMU: Minor VPMU cleanup

Update macros that modify VPMU flags to allow changing multiple bits at once.

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM).
This is needed by subsequent PMU patches.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c |  9 +++------
 xen/arch/x86/hvm/vpmu.c           | 11 +++--------
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bec40d8..84b8a36 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -236,7 +236,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( !is_pv_domain(v->domain) && 
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -276,7 +277,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -292,7 +293,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( !is_pv_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -303,7 +305,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( !is_pv_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -395,7 +398,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( !is_pv_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..5368670 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,10 +326,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
@@ -446,7 +443,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -813,7 +810,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a4e3664..d6a9ff6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -127,10 +127,7 @@ static void vpmu_save_force(void *arg)
     struct vcpu *v = (struct vcpu *)arg;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
         return;
 
     vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -138,8 +135,7 @@ static void vpmu_save_force(void *arg)
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-    vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
 }
@@ -149,8 +145,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:50 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:50 -0500
Message-Id: <1390331342-3967-6-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 05/17] x86/VPMU: Handle APIC_LVTPC accesses

Update the APIC_LVTPC vector when an HVM guest writes to it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           | 14 +++++++++++---
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 16 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 84b8a36..f6c542b 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -290,8 +290,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( !is_pv_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -302,8 +300,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( !is_pv_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index bc06010..d954f4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 8d920c0..a966b91 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -532,19 +532,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -710,10 +697,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d6a9ff6..0770bcf 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -227,18 +235,18 @@ void vpmu_initialise(struct vcpu *v)
     case X86_VENDOR_AMD:
         if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     case X86_VENDOR_INTEL:
         if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
         opt_vpmu_enabled = 0;
-        break;
+        return;
     }
 }
 
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggx-0007yf-QA; Tue, 21 Jan 2014 19:08:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggn-0007rb-Sm
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:46 +0000
Received: from [193.109.254.147:16103] by server-12.bemta-14.messagelabs.com
	id A4/68-13681-DB5CED25; Tue, 21 Jan 2014 19:08:45 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390331323!12311850!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 530 invoked from network); 21 Jan 2014 19:08:44 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:44 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8cQR029160
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:39 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8bAq029779
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:38 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8bKv029772; Tue, 21 Jan 2014 19:08:37 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:37 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:53 -0500
Message-Id: <1390331342-3967-9-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 08/17] x86/VPMU: Make vpmu not HVM-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vpmu structure will be used by both HVM and PV guests. Move it from
hvm_vcpu to arch_vcpu.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/include/asm-x86/domain.h   | 2 ++
 xen/include/asm-x86/hvm/vcpu.h | 3 ---
 xen/include/asm-x86/hvm/vpmu.h | 5 ++---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 9d39061..f352a84 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -396,6 +396,8 @@ struct arch_vcpu
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
 
+    struct vpmu_struct vpmu;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv_vcpu;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..9beeaa9 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -152,9 +152,6 @@ struct hvm_vcpu {
     u32                 msr_tsc_aux;
     u64                 msr_tsc_adjust;
 
-    /* VPMU */
-    struct vpmu_struct  vpmu;
-
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 9992887..8646fd6 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -31,9 +31,8 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
-                                          arch.hvm_vcpu.vpmu))
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggy-0007zb-TL; Tue, 21 Jan 2014 19:08:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggp-0007ra-39
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:47 +0000
Received: from [85.158.143.35:28419] by server-2.bemta-4.messagelabs.com id
	6D/E8-11386-DB5CED25; Tue, 21 Jan 2014 19:08:45 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390331323!13113017!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26341 invoked from network); 21 Jan 2014 19:08:44 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:44 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8bFp029128
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8a11027030
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:37 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8a1q029717; Tue, 21 Jan 2014 19:08:36 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:36 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:51 -0500
Message-Id: <1390331342-3967-7-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 06/17] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL
	should be initialized to zero
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The MSR_CORE_PERF_GLOBAL_CTRL register should be initialized to zero. It is up
to the guest to set it so that counters are enabled.
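
For reference, the helper this patch removes (core2_calc_intial_glb_ctrl_msr(),
keeping the original's spelling) built an everything-enabled mask instead. A
standalone sketch of that dropped calculation, with the counter counts passed
as parameters purely for illustration:

```c
#include <stdint.h>

/* Mirrors the removed helper: the low bits enable the general-purpose
 * (architectural) counters, bits 32 and up the fixed-function ones. */
static uint64_t glb_ctrl_all_enabled(int arch_pmc_cnt, int fixed_pmc_cnt)
{
    uint64_t arch_pmc_bits = ((uint64_t)1 << arch_pmc_cnt) - 1;
    uint64_t fix_pmc_bits  = ((uint64_t)1 << fixed_pmc_cnt) - 1;
    return (fix_pmc_bits << 32) | arch_pmc_bits;
}
```

After this patch the guest-visible MSR simply starts at 0, and the guest must
write such a mask itself before any counter counts.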

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index a966b91..217c1f7 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -164,13 +164,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -371,8 +364,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ggz-00080r-Sy; Tue, 21 Jan 2014 19:08:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggp-0007s8-I7
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:48 +0000
Received: from [85.158.143.35:28495] by server-1.bemta-4.messagelabs.com id
	E4/F8-02132-EB5CED25; Tue, 21 Jan 2014 19:08:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390331324!10477855!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29006 invoked from network); 21 Jan 2014 19:08:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:45 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8cAf031144
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:39 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8bCs027083
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:38 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8bca027052; Tue, 21 Jan 2014 19:08:37 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:36 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:52 -0500
Message-Id: <1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a public xenpmu.h header file and move into it the various macros and
structures that will be shared between the hypervisor and PV guests.

Move MSR banks out of architectural PMU structures to allow for larger sizes
in the future. The banks are allocated immediately after the context and
PMU structures store offsets to them.
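
The offset scheme can be sketched in isolation as follows; the structure and
helper names here are illustrative stand-ins for the series' xen_pmu_amd_ctxt
and vpmu_reg_pointer(), not the actual public-header definitions:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical reduced context: the fixed part stores byte offsets, and the
 * MSR banks live immediately after the structure in the same allocation. */
struct pmu_amd_ctxt {
    uint32_t counters;  /* byte offset of the counter bank from ctxt start */
    uint32_t ctrls;     /* byte offset of the control bank */
};

/* Analogous to vpmu_reg_pointer(): turn a stored offset into a pointer. */
#define reg_pointer(ctxt, field) \
    ((uint64_t *)((char *)(ctxt) + (ctxt)->field))

/* Allocate the fixed part and both banks in one zeroed block. */
static struct pmu_amd_ctxt *alloc_ctxt(unsigned int num_counters)
{
    struct pmu_amd_ctxt *c =
        calloc(1, sizeof(*c) + 2 * num_counters * sizeof(uint64_t));
    if ( c )
    {
        c->counters = sizeof(*c);
        c->ctrls    = sizeof(*c) + num_counters * sizeof(uint64_t);
    }
    return c;
}
```

Growing a bank later only changes the stored offsets and the allocation size,
not the fixed part of the structure, which is what allows larger sizes in the
future without breaking the shared layout.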

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 71 ++++++++++++++------------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 87 +++++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 ------------
 xen/include/asm-x86/hvm/vpmu.h           | 13 ++---
 xen/include/public/arch-x86/xenpmu.h     | 66 ++++++++++++++++++++++++
 xen/include/public/xenpmu.h              | 38 ++++++++++++++
 8 files changed, 199 insertions(+), 115 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index f6c542b..bf7f1f6 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/xenpmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -142,7 +136,7 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -157,7 +151,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -177,28 +171,31 @@ static inline void context_load(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     vpmu_reset(vpmu, VPMU_FROZEN);
 
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
     {
         unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -210,17 +207,18 @@ static inline void context_save(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
     unsigned int i;
 
     if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
@@ -248,7 +246,9 @@ static void context_update(unsigned int msr, u64 msr_content)
     unsigned int i;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -260,12 +260,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -292,7 +292,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu_set(vpmu, VPMU_RUNNING);
 
         if ( !is_pv_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -302,7 +302,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( !is_pv_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -343,7 +343,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 static int amd_vpmu_initialise(struct vcpu *v)
 {
-    struct amd_vpmu_context *ctxt;
+    struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
 
@@ -373,7 +373,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -382,6 +384,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -395,7 +400,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
         return;
 
     if ( !is_pv_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
@@ -412,7 +417,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
 static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -442,8 +449,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 217c1f7..3c3bedc 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/xenpmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -293,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -320,10 +318,13 @@ static int core2_vpmu_save(struct vcpu *v)
 static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -353,7 +354,7 @@ static void core2_vpmu_load(struct vcpu *v)
 static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
@@ -366,11 +367,16 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                   sizeof(uint64_t) * fixed_pmc_cnt +
+                                   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -418,7 +424,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -447,7 +453,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -510,11 +516,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -565,7 +574,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -576,7 +585,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -625,8 +634,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -645,12 +657,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -659,7 +668,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -670,14 +679,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     u64 msr_content;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
 
     rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
     if ( msr_content )
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -740,12 +749,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 0770bcf..8c263a5 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/xenpmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..9992887 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/xenpmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -42,6 +41,9 @@
 #define MSR_TYPE_ARCH_COUNTER       3
 #define MSR_TYPE_ARCH_CTRL          4
 
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
 
 /* Arch specific operations shared by all vpmus */
 struct arch_vpmu_ops {
@@ -76,11 +78,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-x86/xenpmu.h b/xen/include/public/arch-x86/xenpmu.h
new file mode 100644
index 0000000..7778a45
--- /dev/null
+++ b/xen/include/public/arch-x86/xenpmu.h
@@ -0,0 +1,66 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+
+#include "xen.h"
+
+
+/* AMD PMU registers and structures */
+struct xen_pmu_amd_ctxt {
+    uint64_t counters;       /* Offset to counter MSRs */
+    uint64_t ctrls;          /* Offset to control MSRs */
+    uint64_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct xen_pmu_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+
+struct xen_pmu_intel_ctxt {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
+                                    sizeof(struct xen_pmu_intel_ctxt) ? \
+                                     sizeof(struct xen_pmu_amd_ctxt) : \
+                                     sizeof(struct xen_pmu_intel_ctxt))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct xen_arch_pmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } r;
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } l;
+    union {
+        struct xen_pmu_amd_ctxt amd;
+        struct xen_pmu_intel_ctxt intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } c;
+};
+typedef struct xen_arch_pmu xen_arch_pmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
new file mode 100644
index 0000000..4757db9
--- /dev/null
+++ b/xen/include/public/xenpmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_XENPMU_H__
+#define __XEN_PUBLIC_XENPMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/xenpmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xen_pmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    xen_arch_pmu_t pmu;
+};
+typedef struct xen_pmu_data xen_pmu_data_t;
+
+#endif /* __XEN_PUBLIC_XENPMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:58 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:52 -0500
Message-Id: <1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 07/17] x86/VPMU: Add public xenpmu.h

Add the public xenpmu.h header file and move into it the macros and
structures that will be shared between the hypervisor and PV guests.

Move the MSR banks out of the architecture-specific PMU context structures
to allow for larger bank sizes in the future. The banks are now allocated
immediately after the context, and the PMU structures store byte offsets
to them.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 71 ++++++++++++++------------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 87 +++++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 ------------
 xen/include/asm-x86/hvm/vpmu.h           | 13 ++---
 xen/include/public/arch-x86/xenpmu.h     | 66 ++++++++++++++++++++++++
 xen/include/public/xenpmu.h              | 38 ++++++++++++++
 8 files changed, 199 insertions(+), 115 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/xenpmu.h
 create mode 100644 xen/include/public/xenpmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index f6c542b..bf7f1f6 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/xenpmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -142,7 +136,7 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -157,7 +151,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -177,28 +171,31 @@ static inline void context_load(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     vpmu_reset(vpmu, VPMU_FROZEN);
 
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
     {
         unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -210,17 +207,18 @@ static inline void context_save(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
     unsigned int i;
 
     if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
@@ -248,7 +246,9 @@ static void context_update(unsigned int msr, u64 msr_content)
     unsigned int i;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -260,12 +260,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -292,7 +292,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu_set(vpmu, VPMU_RUNNING);
 
         if ( !is_pv_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -302,7 +302,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( !is_pv_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -343,7 +343,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 static int amd_vpmu_initialise(struct vcpu *v)
 {
-    struct amd_vpmu_context *ctxt;
+    struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
 
@@ -373,7 +373,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -382,6 +384,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -395,7 +400,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
         return;
 
     if ( !is_pv_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
@@ -412,7 +417,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
 static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -442,8 +449,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 217c1f7..3c3bedc 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/xenpmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -293,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -320,10 +318,13 @@ static int core2_vpmu_save(struct vcpu *v)
 static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -353,7 +354,7 @@ static void core2_vpmu_load(struct vcpu *v)
 static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
@@ -366,11 +367,16 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                   sizeof(uint64_t) * fixed_pmc_cnt +
+                                   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -418,7 +424,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -447,7 +453,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -510,11 +516,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -565,7 +574,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -576,7 +585,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -625,8 +634,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -645,12 +657,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+               i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -659,7 +668,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -670,14 +679,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     u64 msr_content;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
 
     rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
     if ( msr_content )
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -740,12 +749,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 0770bcf..8c263a5 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/xenpmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..9992887 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/xenpmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -42,6 +41,9 @@
 #define MSR_TYPE_ARCH_COUNTER       3
 #define MSR_TYPE_ARCH_CTRL          4
 
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)(ctxt) + \
+                                                 (uintptr_t)(ctxt)->offset))
 
 /* Arch specific operations shared by all vpmus */
 struct arch_vpmu_ops {
@@ -76,11 +78,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-x86/xenpmu.h b/xen/include/public/arch-x86/xenpmu.h
new file mode 100644
index 0000000..7778a45
--- /dev/null
+++ b/xen/include/public/arch-x86/xenpmu.h
@@ -0,0 +1,66 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+
+#include "xen.h"
+
+
+/* AMD PMU registers and structures */
+struct xen_pmu_amd_ctxt {
+    uint64_t counters;       /* Offset to counter MSRs */
+    uint64_t ctrls;          /* Offset to control MSRs */
+    uint64_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct xen_pmu_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+
+struct xen_pmu_intel_ctxt {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
+                                    sizeof(struct xen_pmu_intel_ctxt) ? \
+                                     sizeof(struct xen_pmu_amd_ctxt) : \
+                                     sizeof(struct xen_pmu_intel_ctxt))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct xen_arch_pmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } r;
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } l;
+    union {
+        struct xen_pmu_amd_ctxt amd;
+        struct xen_pmu_intel_ctxt intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } c;
+};
+typedef struct xen_arch_pmu xen_arch_pmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
new file mode 100644
index 0000000..4757db9
--- /dev/null
+++ b/xen/include/public/xenpmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_XENPMU_H__
+#define __XEN_PUBLIC_XENPMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/xenpmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xen_pmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    xen_arch_pmu_t pmu;
+};
+typedef struct xen_pmu_data xen_pmu_data_t;
+
+#endif /* __XEN_PUBLIC_XENPMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gh1-000833-Lp; Tue, 21 Jan 2014 19:08:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggq-0007sP-Ej
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:48 +0000
Received: from [85.158.139.211:34733] by server-4.bemta-5.messagelabs.com id
	C1/99-26791-FB5CED25; Tue, 21 Jan 2014 19:08:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390331325!11114138!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2388 invoked from network); 21 Jan 2014 19:08:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 19:08:46 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8fGn031192
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LJ8fxa023357
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:41 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LJ8eot023312; Tue, 21 Jan 2014 19:08:40 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:40 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:56 -0500
Message-Id: <1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 11/17] x86/VPMU: Add support for PMU register
	handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  8 ++++++
 xen/arch/x86/traps.c              | 31 +++++++++++++++++++-
 xen/include/public/xenpmu.h       |  1 +
 5 files changed, 90 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index da8e522..25572d5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1972,8 +1972,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 1254c04..5213c11 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,6 +298,9 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -306,10 +310,14 @@ static int core2_vpmu_save(struct vcpu *v)
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
+         !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -424,6 +439,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -450,7 +473,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -462,11 +485,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -482,7 +506,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -507,10 +531,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -527,7 +555,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -566,13 +597,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -596,7 +633,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 23b3040..d32325c 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -400,6 +400,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3f7a3c7..7ff8401 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -865,8 +866,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
         break;
 
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        break;
+
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -882,6 +885,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
     }
 
  out:
+    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
+
     regs->eax = a;
     regs->ebx = b;
     regs->ecx = c;
@@ -2499,6 +2504,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2587,6 +2600,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 9424313..c22cd18 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:08:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gh1-000833-Lp; Tue, 21 Jan 2014 19:08:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggq-0007sP-Ej
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:48 +0000
Received: from [85.158.139.211:34733] by server-4.bemta-5.messagelabs.com id
	C1/99-26791-FB5CED25; Tue, 21 Jan 2014 19:08:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390331325!11114138!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2388 invoked from network); 21 Jan 2014 19:08:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Jan 2014 19:08:46 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8fGn031192
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LJ8fxa023357
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:41 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LJ8eot023312; Tue, 21 Jan 2014 19:08:40 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:40 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:56 -0500
Message-Id: <1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 11/17] x86/VPMU: Add support for PMU register
	handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  8 ++++++
 xen/arch/x86/traps.c              | 31 +++++++++++++++++++-
 xen/include/public/xenpmu.h       |  1 +
 5 files changed, 90 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index da8e522..25572d5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1972,8 +1972,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 1254c04..5213c11 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,6 +298,9 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -306,10 +310,14 @@ static int core2_vpmu_save(struct vcpu *v)
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -424,6 +439,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -450,7 +473,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -462,11 +485,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -482,7 +506,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -507,10 +531,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -527,7 +555,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -566,13 +597,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -596,7 +633,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 23b3040..d32325c 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -400,6 +400,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3f7a3c7..7ff8401 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -865,8 +866,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
         break;
 
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        break;
+
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -882,6 +885,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
     }
 
  out:
+    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
+
     regs->eax = a;
     regs->ebx = b;
     regs->ecx = c;
@@ -2499,6 +2504,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2587,6 +2600,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 9424313..c22cd18 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gh6-000859-N9; Tue, 21 Jan 2014 19:09:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggq-0007sN-9l
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:48 +0000
Received: from [85.158.143.35:39202] by server-2.bemta-4.messagelabs.com id
	EF/E8-11386-FB5CED25; Tue, 21 Jan 2014 19:08:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390331324!13164115!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16394 invoked from network); 21 Jan 2014 19:08:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8dqi029175
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8caa029837
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:39 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8c4j029823; Tue, 21 Jan 2014 19:08:38 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:38 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:54 -0500
Message-Id: <1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add a runtime interface for setting the PMU mode and feature flags. Three main
modes are provided:
* PMU off
* PMU on: guests can access PMU MSRs and receive PMU interrupts; dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and the guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c        |  2 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  4 +-
 xen/arch/x86/hvm/vpmu.c            | 77 ++++++++++++++++++++++++++++++++++----
 xen/arch/x86/x86_64/compat/entry.S |  4 ++
 xen/arch/x86/x86_64/entry.S        |  4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  9 +----
 xen/include/public/xen.h           |  1 +
 xen/include/public/xenpmu.h        | 51 +++++++++++++++++++++++++
 xen/include/xen/hypercall.h        |  4 ++
 9 files changed, 138 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bf7f1f6..3dd6911 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -471,7 +471,7 @@ int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 3c3bedc..9e0e743 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -707,7 +707,7 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -826,7 +826,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 8c263a5..309f858 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,7 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +53,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +61,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode |= XENPMU_MODE_ON;
         break;
     }
 }
@@ -234,19 +235,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 }
@@ -268,3 +269,63 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
+        if ( mode & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 8646fd6..8c5c772 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/xenpmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
@@ -98,5 +91,7 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint32_t vpmu_mode;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 4757db9..fac29a6 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -13,6 +13,57 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xen_pmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } v;
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } d;
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xen_pmu_params xen_pmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_FEATURE_SHIFT      16
+#define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..acf50e8 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/xenpmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gh6-000859-N9; Tue, 21 Jan 2014 19:09:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggq-0007sN-9l
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:48 +0000
Received: from [85.158.143.35:39202] by server-2.bemta-4.messagelabs.com id
	EF/E8-11386-FB5CED25; Tue, 21 Jan 2014 19:08:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390331324!13164115!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16394 invoked from network); 21 Jan 2014 19:08:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8dqi029175
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8caa029837
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:39 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8c4j029823; Tue, 21 Jan 2014 19:08:38 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:38 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:54 -0500
Message-Id: <1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add runtime interface for setting PMU mode and flags. Three main modes are
provided:
* PMU off
* PMU on: Guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c        |  2 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  4 +-
 xen/arch/x86/hvm/vpmu.c            | 77 ++++++++++++++++++++++++++++++++++----
 xen/arch/x86/x86_64/compat/entry.S |  4 ++
 xen/arch/x86/x86_64/entry.S        |  4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  9 +----
 xen/include/public/xen.h           |  1 +
 xen/include/public/xenpmu.h        | 51 +++++++++++++++++++++++++
 xen/include/xen/hypercall.h        |  4 ++
 9 files changed, 138 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index bf7f1f6..3dd6911 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -471,7 +471,7 @@ int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 3c3bedc..9e0e743 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -707,7 +707,7 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -826,7 +826,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_flags == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 8c263a5..309f858 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,7 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +53,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +61,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode |= XENPMU_MODE_ON;
         break;
     }
 }
@@ -234,19 +235,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 }
@@ -268,3 +269,63 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
+        if ( mode & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+     }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 8646fd6..8c5c772 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/xenpmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
 
@@ -98,5 +91,7 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint32_t vpmu_mode;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index 4757db9..fac29a6 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -13,6 +13,57 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xen_pmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } v;
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } d;
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xen_pmu_params xen_pmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_FEATURE_SHIFT      16
+#define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..acf50e8 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/xenpmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghG-0008A2-QT; Tue, 21 Jan 2014 19:09:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggr-0007sm-5B
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:49 +0000
Received: from [193.109.254.147:5083] by server-8.bemta-14.messagelabs.com id
	83/55-30921-0C5CED25; Tue, 21 Jan 2014 19:08:48 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390331325!12326560!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7655 invoked from network); 21 Jan 2014 19:08:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:47 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8em4029199
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:41 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8dIP018282
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:40 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8dvq029865; Tue, 21 Jan 2014 19:08:39 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:39 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:55 -0500
Message-Id: <1390331342-3967-11-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 10/17] x86/VPMU: Initialize PMU for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add code for initializing and tearing down the PMU for PV guests.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 38 ++++++++++---------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 50 ++++++++++++++++---------
 xen/arch/x86/hvm/vpmu.c           | 77 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/xen.h          |  1 +
 xen/include/public/xenpmu.h       |  2 +
 xen/include/xen/softirq.h         |  1 +
 8 files changed, 136 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3dd6911..e2bff67 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( !is_pv_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->vcpu_id, v->domain->domain_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
 
     ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -399,18 +404,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( !is_pv_domain(v->domain) &&
-         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( !is_pv_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 9e0e743..1254c04 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -356,22 +356,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -751,6 +759,10 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+            return 1;
+
     return 0;
 }
 
@@ -761,11 +773,15 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( !is_pv_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 309f858..23b3040 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,10 +21,14 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
+#include <asm/p2m.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vmcs.h>
@@ -257,7 +261,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -269,6 +279,59 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gmfn = params->d.val;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if (v != current)
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if (v != current)
+        vcpu_unpause(v);
+}
+
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
     int ret = -EINVAL;
@@ -325,7 +388,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 34efd24..daf381c 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 8c5c772..29bb977 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -60,6 +60,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index fac29a6..9424313 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code for initializing/tearing down PMU for PV guests

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 38 ++++++++++---------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 50 ++++++++++++++++---------
 xen/arch/x86/hvm/vpmu.c           | 77 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/xen.h          |  1 +
 xen/include/public/xenpmu.h       |  2 +
 xen/include/xen/softirq.h         |  1 +
 8 files changed, 136 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3dd6911..e2bff67 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -373,16 +373,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( !is_pv_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
 
     ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -399,18 +404,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( !is_pv_domain(v->domain) &&
-         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( !is_pv_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 9e0e743..1254c04 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -356,22 +356,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -751,6 +759,10 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
     return 0;
 }
 
@@ -761,11 +773,15 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( !is_pv_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 309f858..23b3040 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,10 +21,14 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
+#include <asm/p2m.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vmcs.h>
@@ -257,7 +261,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -269,6 +279,59 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gmfn = params->d.val;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
     int ret = -EINVAL;
@@ -325,7 +388,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 34efd24..daf381c 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 8c5c772..29bb977 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -60,6 +60,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index fac29a6..9424313 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghV-0008Ih-68; Tue, 21 Jan 2014 19:09:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggs-0007ty-Rt
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:51 +0000
Received: from [85.158.143.35:39381] by server-1.bemta-4.messagelabs.com id
	3E/F8-02132-2C5CED25; Tue, 21 Jan 2014 19:08:50 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390331328!11867235!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 521 invoked from network); 21 Jan 2014 19:08:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8hQM029270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:44 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8gZP027260
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:43 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8gRN018385; Tue, 21 Jan 2014 19:08:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:42 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:59 -0500
Message-Id: <1390331342-3967-15-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 14/17] x86/VPMU: Save VPMU state for PV
	guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Save VPMU state during context switch for both HVM and PV guests. The
save is skipped only when we are in PMU privileged mode (i.e. dom0 is
doing all profiling) and the switched-out domain is the control domain
itself. Non-control domains are still saved in privileged mode because
we may have just turned that mode on and therefore need to save the
last domain's state.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 25572d5..124c0e7 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1444,17 +1444,16 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     }
 
     if (prev != next)
-        update_runstate_area(prev);
-
-    if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        update_runstate_area(prev);
+        if ( !(vpmu_mode & XENPMU_MODE_PRIV) ||
+             !is_control_domain(prev->domain) )
             vpmu_save(prev);
-
-        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
-            pt_save_timer(prev);
     }
 
+    if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+        pt_save_timer(prev);
+
     local_irq_disable();
 
     set_current(next);
@@ -1491,7 +1490,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( (prev != next) && !(vpmu_mode & XENPMU_MODE_PRIV) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghZ-0008Md-Jk; Tue, 21 Jan 2014 19:09:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggt-0007vT-W6
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:52 +0000
Received: from [85.158.143.35:28704] by server-3.bemta-4.messagelabs.com id
	D8/2E-32360-3C5CED25; Tue, 21 Jan 2014 19:08:51 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390331329!6002827!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17034 invoked from network); 21 Jan 2014 19:08:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:50 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8hll029267
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:44 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8g3U029965
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:43 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LJ8gIW023415; Tue, 21 Jan 2014 19:08:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:58 -0500
Message-Id: <1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for a privileged PMU mode, which allows the privileged domain
(dom0) to profile both itself (and the hypervisor) and the guests. While
this mode is on, profiling in guests is disabled.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 88 ++++++++++++++++++++++++++++++++-------------
 xen/arch/x86/traps.c        |  6 +++-
 xen/include/public/xenpmu.h |  3 ++
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index aead6af..214300d 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -87,6 +87,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
     {
         int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
@@ -112,6 +115,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
     {
         int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
@@ -134,14 +140,18 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
         v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
-        const struct cpu_user_regs *gregs;
+        struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
@@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
-
-            cmp = (struct compat_cpu_user_regs *)
-                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
+            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else 
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs;
         }
-        else if ( !is_control_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -444,7 +483,8 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
 
         mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
-        if ( mode & ~XENPMU_MODE_ON )
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode &= ~XENPMU_MODE_MASK;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7ff8401..1854230 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2510,7 +2510,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index df85209..f715f30 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -56,11 +56,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_FEATURE_SHIFT      16
 #define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghZ-0008Md-Jk; Tue, 21 Jan 2014 19:09:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggt-0007vT-W6
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:52 +0000
Received: from [85.158.143.35:28704] by server-3.bemta-4.messagelabs.com id
	D8/2E-32360-3C5CED25; Tue, 21 Jan 2014 19:08:51 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390331329!6002827!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17034 invoked from network); 21 Jan 2014 19:08:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:50 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8hll029267
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:44 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8g3U029965
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:43 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LJ8gIW023415; Tue, 21 Jan 2014 19:08:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:58 -0500
Message-Id: <1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for a privileged PMU mode which allows the privileged domain
(dom0) to profile both itself (and the hypervisor) and the guests. While this
mode is on, profiling in the guests is disabled.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 88 ++++++++++++++++++++++++++++++++-------------
 xen/arch/x86/traps.c        |  6 +++-
 xen/include/public/xenpmu.h |  3 ++
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index aead6af..214300d 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -87,6 +87,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
     {
         int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
@@ -112,6 +115,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
     {
         int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
@@ -134,14 +140,18 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
         v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
-        const struct cpu_user_regs *gregs;
+        struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
@@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
-
-            cmp = (struct compat_cpu_user_regs *)
-                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
+            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs;
         }
-        else if ( !is_control_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -444,7 +483,8 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
 
         mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
-        if ( mode & ~XENPMU_MODE_ON )
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode &= ~XENPMU_MODE_MASK;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7ff8401..1854230 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2510,7 +2510,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index df85209..f715f30 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -56,11 +56,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_FEATURE_SHIFT      16
 #define XENPMU_MODE_MASK          ((1U << XENPMU_FEATURE_SHIFT) - 1)
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghd-0008Pb-Am; Tue, 21 Jan 2014 19:09:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggt-0007vO-Qx
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:52 +0000
Received: from [85.158.143.35:39445] by server-2.bemta-4.messagelabs.com id
	3C/F8-11386-3C5CED25; Tue, 21 Jan 2014 19:08:51 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390331328!13123029!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11162 invoked from network); 21 Jan 2014 19:08:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8ghD029232
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:42 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8fmq018344
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:41 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8fNw029918; Tue, 21 Jan 2014 19:08:41 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:57 -0500
Message-Id: <1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 12/17] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

The VPMU for the interrupted VCPU is unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values that
are stored in the VPMU context, which is shared between the hypervisor and
the domain, thus avoiding traps to the hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 116 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/xenpmu.h |   7 +++
 2 files changed, 116 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d32325c..aead6af 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -75,7 +75,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -83,7 +88,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
@@ -92,16 +113,86 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
 
-    if ( vpmu->arch_vpmu_ops )
+    /* dom0 will handle this interrupt */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        const struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+             * and therefore we treat it the same way as a non-priviledged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs *cmp;
+
+            gregs = guest_cpu_user_regs();
+
+            cmp = (struct compat_cpu_user_regs *)
+                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            XLAT_cpu_user_regs(cmp, gregs);
+        }
+        else if ( !is_control_domain(current->domain) &&
+                 !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
     {
         struct vlapic *vlapic = vcpu_vlapic(v);
         u32 vlapic_lvtpc;
@@ -213,8 +304,13 @@ void vpmu_load(struct vcpu *v)
 
     local_irq_enable();
 
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    /*
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (is_pv_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -408,6 +504,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index c22cd18..df85209 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
@@ -68,6 +69,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
 #define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
     uint32_t domain_id;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghd-0008Pb-Am; Tue, 21 Jan 2014 19:09:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggt-0007vO-Qx
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:52 +0000
Received: from [85.158.143.35:39445] by server-2.bemta-4.messagelabs.com id
	3C/F8-11386-3C5CED25; Tue, 21 Jan 2014 19:08:51 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390331328!13123029!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11162 invoked from network); 21 Jan 2014 19:08:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8ghD029232
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:42 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8fmq018344
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 19:08:41 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8fNw029918; Tue, 21 Jan 2014 19:08:41 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:08:57 -0500
Message-Id: <1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 12/17] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

VPMU for the interrupted VCPU is unloaded until the guest issues XENPMU_flush
hypercall. This allows the guest to access PMU MSR values that are stored in
VPMU context which is shared between hypervisor and domain, thus avoiding
traps to hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c     | 116 +++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/xenpmu.h |   7 +++
 2 files changed, 116 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d32325c..aead6af 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -75,7 +75,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -83,7 +88,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
@@ -92,16 +113,86 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
 
-    if ( vpmu->arch_vpmu_ops )
+    /* dom0 will handle this interrupt */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        const struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64-bit)
+             * and therefore we treat it the same way as a non-privileged
+             * 32-bit PV domain.
+             */
+            struct compat_cpu_user_regs *cmp;
+
+            gregs = guest_cpu_user_regs();
+
+            cmp = (struct compat_cpu_user_regs *)
+                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            XLAT_cpu_user_regs(cmp, gregs);
+        }
+        else if ( !is_control_domain(current->domain) &&
+                 !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
     {
         struct vlapic *vlapic = vcpu_vlapic(v);
         u32 vlapic_lvtpc;
@@ -213,8 +304,13 @@ void vpmu_load(struct vcpu *v)
 
     local_irq_enable();
 
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    /*
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (is_pv_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -408,6 +504,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/xenpmu.h b/xen/include/public/xenpmu.h
index c22cd18..df85209 100644
--- a/xen/include/public/xenpmu.h
+++ b/xen/include/public/xenpmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
@@ -68,6 +69,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
 #define XENPMU_FEATURE_MASK       ((uint32_t)(~XENPMU_MODE_MASK))
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor.
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
     uint32_t domain_id;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:39 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:09:00 -0500
Message-Id: <1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 15/17] x86/VPMU: NMI-based VPMU support

Add support for using NMIs as PMU interrupts.

Most of the processing is still performed by vpmu_do_interrupt(). However,
since certain operations are not NMI-safe, we defer them to a softirq that
vpmu_do_interrupt() schedules:
* For PV guests these are send_guest_vcpu_virq() and hvm_get_segment_register().
* For HVM guests they are the VLAPIC accesses.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c | 169 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 135 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 214300d..e76b538 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -36,6 +36,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <asm/nmi.h>
 #include <public/xenpmu.h>
 
 /*
@@ -48,33 +49,57 @@ static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
 static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
 
 static void __init parse_vpmu_param(char *s)
 {
-    switch ( parse_bool(s) )
-    {
-    case 0:
-        break;
-    default:
-        if ( !strcmp(s, "bts") )
-            vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-        else if ( *s )
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
         {
-            printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
             break;
         }
-        /* fall through */
-    case 1:
-        vpmu_mode |= XENPMU_MODE_ON;
-        break;
-    }
+
+        s = ss + 1;
+    } while ( ss );
 }
 
 void vpmu_lvtpc_update(uint32_t val)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
     if ( !is_pv_domain(current->domain) ||
@@ -83,6 +108,24 @@ void vpmu_lvtpc_update(uint32_t val)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -134,6 +177,7 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     return 0;
 }
 
+/* This routine may be called in NMI context */
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -214,9 +258,13 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
 
-            hvm_get_segment_register(current, x86_seg_cs, &cs);
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = cs.attr.fields.dpl;
+            /* This is unsafe in NMI context; we'll do it in the softirq handler */
+            if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
         }
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
@@ -227,29 +275,29 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
-        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
 
         return 1;
     }
     else if ( vpmu->arch_vpmu_ops )
     {
-        struct vlapic *vlapic = vcpu_vlapic(v);
-        u32 vlapic_lvtpc;
-        unsigned char int_vec;
-
         if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
             return 0;
 
-        if ( !is_vlapic_lvtpc_enabled(vlapic) )
-            return 1;
-
-        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
         else
-            v->nmi_pending = 1;
+            vpmu_send_nmi(v);
+
         return 1;
     }
 
@@ -299,7 +347,7 @@ void vpmu_save(struct vcpu *v)
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
-    apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
 }
 
 void vpmu_load(struct vcpu *v)
@@ -414,12 +462,50 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( vpmu_mode & XENPMU_MODE_PRIV ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+
 static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
     struct page_info *page;
     uint64_t gmfn = params->d.val;
-
+    static int pvpmu_initted = 0;
+
     if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
         return -EINVAL;
 
@@ -435,6 +521,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
         return -EINVAL;
     }
 
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
     vpmu_initialise(v);
 
     return 0;
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:42 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:09:02 -0500
Message-Id: <1390331342-3967-18-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 17/17] x86/VPMU: Move VPMU files up from hvm/
	directory

Since the PMU is no longer HVM-specific, we can move the VPMU-related files
up out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 499 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 936 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 671 ------------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 671 ++++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 499 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 936 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  98 ----
 xen/include/asm-x86/vpmu.h            |  98 ++++
 16 files changed, 2209 insertions(+), 2211 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index e2bff67..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,499 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/xenpmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-	uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
-    unsigned int i;
-
-    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        vpmu_set(vpmu, VPMU_FROZEN);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-            return 0;
-
-    context_save(v);
-
-    if ( !is_pv_domain(v->domain) && 
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( !is_pv_domain(v->domain) &&
-             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( !is_pv_domain(v->domain) &&
-             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct xen_pmu_amd_ctxt *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-         switch ( family )
-	 {
-	 case 0x15:
-	     num_counters = F15H_NUM_COUNTERS;
-	     counters = AMD_F15H_COUNTERS;
-	     ctrls = AMD_F15H_CTRLS;
-	     k7_counters_mirrored = 1;
-	     break;
-	 case 0x10:
-	 case 0x12:
-	 case 0x14:
-	 case 0x16:
-	 default:
-	     num_counters = F10H_NUM_COUNTERS;
-	     counters = AMD_F10H_COUNTERS;
-	     ctrls = AMD_F10H_CTRLS;
-	     k7_counters_mirrored = 0;
-	     break;
-	 }
-    }
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->vcpu_id, v->domain->domain_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
-
-    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d has not "
-           "been supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index 5213c11..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,936 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/xenpmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to workaround an issue on various family 6 cpus.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a pmc reaches the value 0, this
- * value remains forever and it triggers immediately a new interrupt after
- * finishing the handler.
- * A workaround is to read all flagged counters and if the value is 0 write
- * 1 (or another value != 0) into it.
- * There exist no errata and the real cause of this behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;    
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: Bit width of fixed-function performance counters  */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
-        (msr_index == MSR_IA32_DS_AREA) ||
-        (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
-
-    if ( is_pv_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    if ( is_pv_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && !is_pv_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( is_pv_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->vcpu_id, v->domain->domain_id);
-
-    return 0;
-}
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Do the lazy load stuff. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( !is_pv_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
-                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( !is_pv_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( !is_pv_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            xen_pmu_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if  ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits per counter; mask off bits beyond the implemented fixed counters. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        }
-
-        if (inject_gp) 
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( !is_pv_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( !is_pv_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-    u64 val;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-         return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set, reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
-            return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * As in this case the vpmu is not enabled reset some bits in the
-     * architectural performance monitoring related part.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it's a vpmu MSR, set it to 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used in case vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d is not supported\n",
-           family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index f736de0..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,671 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/p2m.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <asm/p2m.h>
-#include <public/xenpmu.h>
-
-/*
- * "vpmu" :     vpmu generally enabled
- * "vpmu=off" : vpmu generally disabled
- * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
- */
-uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if (*s == '\0')
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch  (parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_apic_vector = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !is_pv_domain(current->domain) ||
-         !(current->arch.vpmu.xenpmu_data &&
-           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-static void vpmu_send_nmi(struct vcpu *v)
-{
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    u32 vlapic_lvtpc;
-    unsigned char int_vec;
-
-    if ( !is_vlapic_lvtpc_enabled(vlapic) )
-        return;
-
-    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-    else
-        v->nmi_pending = 1;
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !is_hvm_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !is_hvm_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-/* This routine may be called in NMI context */
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV(H) guest or dom0 is doing system profiling */
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-            return 1;
-
-        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
-            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-                return 0;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        if ( !is_hvm_domain(current->domain) )
-        {
-            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-
-            /* Store appropriate registers in xenpmu_data */
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                gregs = guest_cpu_user_regs();
-
-                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
-                     !is_pv_32bit_domain(v->domain) )
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           gregs, sizeof(struct cpu_user_regs));
-                else 
-                {
-                    /*
-                     * 32-bit dom0 cannot process Xen's addresses (which are
-                     * 64 bit) and therefore we treat it the same way as a
-                     * non-privileged PV 32-bit domain.
-                     */
-
-                    struct compat_cpu_user_regs *cmp;
-
-                    cmp = (struct compat_cpu_user_regs *)
-                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                    XLAT_cpu_user_regs(cmp, gregs);
-                }
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV(H) guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       regs, sizeof(struct cpu_user_regs));
-
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            if ( !is_pvh_domain(current->domain) )
-                gregs->cs = cs;
-            else if ( !(vpmu_apic_vector & APIC_DM_NMI) )
-            {
-                struct segment_register seg_cs;
-
-                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
-                gregs->cs = seg_cs.attr.fields.dpl;
-            }
-        }
-        else
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   gregs, sizeof(struct cpu_user_regs));
-
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( !(vpmu_apic_vector & APIC_DM_NMI ) )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                gregs->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_apic_vector & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-    else if ( vpmu->arch_vpmu_ops )
-    {
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( vpmu_apic_vector & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            vpmu_send_nmi(v);
-
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from the last pcpu that we ran on. Note that if
-         * another VCPU is running there it must have saved this VCPU's
-         * context before starting to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* 
-     * Only when PMU is counting and is not cached (for PV guests) do
-     * we load PMU context immediately.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-         (is_pv_domain(v->domain) &&
-          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu information on the console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( vpmu_mode & XENPMU_MODE_PRIV ||
-         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-    else
-    {
-        if ( is_hvm_domain(sampled->domain) )
-        {
-            vpmu_send_nmi(sampled);
-            return;
-        }
-        v = sampled;
-    }
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( !is_pv_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-
-static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    struct page_info *page;
-    uint64_t gmfn = params->d.val;
-    static int pvpmu_initted = 0;
- 
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    if ( !page )
-        return -EINVAL;
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
-    if ( !v->arch.vpmu.xenpmu_data )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            put_page(page);
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xen_pmu_params_t pmu_params;
-    uint32_t mode;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
-        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_MODE_MASK;
-        vpmu_mode |= mode;
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
-        pmu_params.v.version.maj = XENPMU_VER_MAJ;
-        pmu_params.v.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_FEATURE_MASK;
-        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_load(current);
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1854230..11f6821 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..f736de0
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,671 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/p2m.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/xenpmu.h>
+
+/*
+ * "vpmu" :     vpmu generally enabled
+ * "vpmu=off" : vpmu generally disabled
+ * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
+ */
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
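The parser above walks a comma-separated option string by repeatedly NUL-terminating at the next `,` and dispatching each token. A standalone sketch of the same split-and-dispatch loop (flag names and bit values are invented for illustration, not the real `XENPMU_*` constants):

```c
#include <string.h>

#define MODE_OFF  0u
#define MODE_ON   1u   /* illustrative values, not Xen's */
#define MODE_PRIV 2u
#define FEAT_BTS  4u

/* Parse a comma-separated flag list, mirroring parse_vpmu_param()'s
 * strchr/NUL-split loop. Returns the resulting mode word. */
unsigned parse_flags(char *s)
{
    unsigned mode = MODE_ON;
    char *ss;

    if ( *s == '\0' )
        return mode;

    do {
        ss = strchr(s, ',');
        if ( ss )
            *ss = '\0';          /* terminate the current token */

        if ( !strcmp(s, "off") )
            return MODE_OFF;
        else if ( !strcmp(s, "bts") )
            mode |= FEAT_BTS;
        else if ( !strcmp(s, "priv") )
            mode = (mode & ~MODE_ON) | MODE_PRIV;

        s = ss + 1;              /* advance past the comma */
    } while ( ss );

    return mode;
}
```

Note that, as in `parse_vpmu_param()`, the input buffer is modified in place, so it must be writable.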
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
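`vpmu_lvtpc_update()` keeps Xen in control of the interrupt vector while letting the guest control only the mask bit: the hardware LVTPC value is rebuilt from the host's vector plus the guest-supplied `APIC_LVT_MASKED` bit. A minimal sketch of that composition (the vector value in the test is illustrative):

```c
#include <stdint.h>

#define APIC_LVT_MASKED (1u << 16)   /* LVT mask bit, per the LAPIC register layout */

/* Keep the host's vector, take only the mask bit from the guest-written
 * value, mirroring vpmu_lvtpc_update(). */
uint32_t build_lvtpc(uint32_t host_vector, uint32_t guest_val)
{
    return host_vector | (guest_val & APIC_LVT_MASKED);
}
```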
+
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+/* This routine may be called in NMI context */
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV(H) guest or dom0 is doing system profiling */
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
+            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+                return 0;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        if ( !is_hvm_domain(current->domain) )
+        {
+            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else 
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV(H) guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = cs;
+            else if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+
+            /* This is unsafe in NMI context; we'll do it in the softirq handler */
+            if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
+    {
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            vpmu_send_nmi(v);
+
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from the last pcpu that we ran on. Note that if
+         * another VCPU is running there it must have saved this VCPU's
+         * context before starting to run (see below).
+         * There should be no race since the remote pcpu will disable
+         * interrupts before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+    } 
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_save_force(prev);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* 
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (is_pv_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+}
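`vpmu_save()`/`vpmu_load()` implement lazy PMU context switching: each pCPU remembers the last vCPU whose counters are live (`per_cpu(last_vcpu)`), and a context is only saved when a different vCPU wants the hardware. A toy single-CPU model of that ownership protocol (all names invented; the real code additionally handles cross-CPU eviction via IPI and runs the save with interrupts disabled):

```c
#include <stddef.h>

struct toy_vcpu {
    int id;
    int loaded;     /* context currently live on the pcpu */
    int saves;      /* how many times the context was saved */
};

static struct toy_vcpu *last_owner;  /* per_cpu(last_vcpu) analogue */

void toy_save(struct toy_vcpu *v)
{
    if ( !v->loaded )
        return;
    v->saves++;
    v->loaded = 0;
    last_owner = NULL;
}

/* Mirrors vpmu_load(): evict the previous owner only if it differs. */
void toy_load(struct toy_vcpu *v)
{
    struct toy_vcpu *prev = last_owner;

    if ( prev && prev != v )
        toy_save(prev);          /* someone ran here before us */

    v->loaded = 1;
    last_owner = v;
}
```

Loading the same vCPU repeatedly costs nothing; only an actual owner change forces a save.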
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        return;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information to the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( vpmu_mode & XENPMU_MODE_PRIV ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gmfn = params->d.val;
+    static int pvpmu_initted = 0;
+ 
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
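The `XENPMU_mode_set` handler accepts `XENPMU_MODE_ON` or `XENPMU_MODE_PRIV`, but rejects the two together and any bit outside that pair. The validity check can be isolated as follows (bit values assumed for illustration, not taken from the public header):

```c
#include <stdbool.h>

#define MODE_ON   (1u << 0)   /* illustrative placements of the mode bits */
#define MODE_PRIV (1u << 1)

/* Valid modes: OFF (0), ON alone, or PRIV alone -- never both,
 * never any unknown bit, as checked in do_xenpmu_op(). */
bool mode_is_valid(unsigned mode)
{
    if ( mode & ~(MODE_ON | MODE_PRIV) )
        return false;
    return !((mode & MODE_ON) && (mode & MODE_PRIV));
}
```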
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..a0629d4
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,499 @@
+/*
+ * vpmu_amd.c: AMD PMU virtualization for HVM domains.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/xenpmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
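The `is_overflowed()` macro relies on how PMU counters are armed: a 48-bit counter is programmed with a "negative" initial value (bit 47 set), and when it counts up through zero that top bit clears, which is what signals overflow. A quick demonstration of the arming convention (helper name invented):

```c
#include <stdint.h>

#define COUNTER_LENGTH 48
#define is_overflowed(msr) (!((msr) & (1ULL << (COUNTER_LENGTH - 1))))

/* Arm a counter to overflow after `period` events: start it at -period
 * modulo the 48-bit width, as a PMU driver would. */
uint64_t arm_counter(uint64_t period)
{
    return ((1ULL << COUNTER_LENGTH) - period) & ((1ULL << COUNTER_LENGTH) - 1);
}
```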
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
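On Family 15h the event-select and counter MSRs are interleaved at consecutive addresses (PERF_CTLn at even offsets, PERF_CTRn at odd), which is why `get_pmu_reg_type()` can classify them by the low address bit. A self-contained version of that classification (the base address should correspond to `MSR_AMD_FAM15H_EVNTSEL0`, i.e. MSRC001_0200, but treat it as an assumption here):

```c
#include <stdint.h>

#define FAM15H_BASE 0xc0010200u          /* assumed Fam15h EVNTSEL0 address */
#define FAM15H_LAST (FAM15H_BASE + 11)   /* 6 interleaved ctrl/counter pairs */

enum reg_type { TYPE_UNKNOWN = -1, TYPE_CTRL = 0, TYPE_COUNTER = 1 };

enum reg_type classify(uint32_t addr)
{
    if ( addr < FAM15H_BASE || addr > FAM15H_LAST )
        return TYPE_UNKNOWN;

    /* Odd offset from an even base -> counter, even -> control. */
    return (addr & 1) ? TYPE_COUNTER : TYPE_CTRL;
}
```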
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
+    unsigned int i;
+
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        vpmu_set(vpmu, VPMU_FROZEN);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
+
+    context_save(v);
+
+    if ( !is_pv_domain(v->domain) &&
+         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest-only mode for HVM guests */
+    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        !(is_guest_mode(msr_content)) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( !is_pv_domain(v->domain) &&
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* Stop saving & restoring if the guest stops the first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( !is_pv_domain(v->domain) &&
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct xen_pmu_amd_ctxt *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU; "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
+
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+         printk("\n");
+         return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed: "
+           "AMD processor family %d is not supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..4323aaf
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,936 @@
+/*
+ * vpmu_intel.c: CORE 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/xenpmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs.
+ * The issue leads to endless PMC interrupt loops on the processor.
+ * If the interrupt handler is running while a PMC reaches the value 0, that
+ * value remains there forever, immediately triggering a new interrupt as
+ * soon as the handler finishes.
+ * The workaround is to read all flagged counters and, if the value is 0,
+ * write 1 (or any other value != 0) into it.
+ * No erratum exists for this, and the real cause of the behaviour is unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    is_pmc_quirk = (current_cpu_data.x86 == 6);
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* edx bits 5-12: Bit width of fixed-function performance counters */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && !is_pv_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Do the lazy load stuff. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
+                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3) ? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3) ? 1 : 0);
+            non_global_ctrl >>= 4;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            xen_pmu_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per counter, currently 3 fixed counters implemented. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+    uint64_t *fixed_counters;
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
+    u64 val;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    core2_vpmu_cxt = vpmu->context;
+    /* Compute register pointers only once the context is known to exist. */
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set, reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownship(PMU_OWNER_HVM);
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * The vpmu is not enabled in this case, so clear the bits reported
+     * in the architectural performance monitoring leaf.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it's a vpmu MSR, set it to 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used when the vpmu is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v, vpmu_flags);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not supported\n",
+           family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 29bb977..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/xenpmu.h>
-
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-/* Start of PMU register bank */
-#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
-                                                 (uintptr_t)ctxt->offset))
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xen_pmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint32_t vpmu_mode;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..863be59
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,98 @@
+/*
+ * vpmu.h: PMU virtualization.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_VPMU_H_
+#define __ASM_X86_VPMU_H_
+
+#include <public/xenpmu.h>
+
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
+int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint32_t vpmu_mode;
+
+#endif /* __ASM_X86_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:09:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghi-00005l-Mb; Tue, 21 Jan 2014 19:09:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ggy-0007yh-8l
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:08:57 +0000
Received: from [85.158.143.35:28923] by server-2.bemta-4.messagelabs.com id
	19/09-11386-7C5CED25; Tue, 21 Jan 2014 19:08:55 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390331332!13123039!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11436 invoked from network); 21 Jan 2014 19:08:54 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:08:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8lht031324
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:47 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8keG018510
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:46 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8jEN027361; Tue, 21 Jan 2014 19:08:45 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:44 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:09:02 -0500
Message-Id: <1390331342-3967-18-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 17/17] x86/VPMU: Move VPMU files up from hvm/
	directory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the PMU is no longer HVM-specific, we can move the VPMU-related files
out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 499 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 936 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 671 ------------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 671 ++++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 499 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 936 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  98 ----
 xen/include/asm-x86/vpmu.h            |  98 ++++
 16 files changed, 2209 insertions(+), 2211 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index e2bff67..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,499 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/xenpmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-	uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
-    unsigned int i;
-
-    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        vpmu_set(vpmu, VPMU_FROZEN);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-            return 0;
-
-    context_save(v);
-
-    if ( !is_pv_domain(v->domain) && 
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( !is_pv_domain(v->domain) &&
-             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( !is_pv_domain(v->domain) &&
-             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct xen_pmu_amd_ctxt *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-        switch ( family )
-        {
-        case 0x15:
-            num_counters = F15H_NUM_COUNTERS;
-            counters = AMD_F15H_COUNTERS;
-            ctrls = AMD_F15H_CTRLS;
-            k7_counters_mirrored = 1;
-            break;
-        case 0x10:
-        case 0x12:
-        case 0x14:
-        case 0x16:
-        default:
-            num_counters = F10H_NUM_COUNTERS;
-            counters = AMD_F10H_COUNTERS;
-            ctrls = AMD_F10H_CTRLS;
-            k7_counters_mirrored = 0;
-            break;
-        }
-    }
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     "PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->domain->domain_id, v->vcpu_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
-
-    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d is not supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index 5213c11..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,936 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/xenpmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to work around an issue seen on various family 6 CPUs that leads
- * to endless PMC interrupt loops: if a PMC holds the value 0 while the
- * interrupt handler runs, that value sticks and retriggers the interrupt
- * as soon as the handler finishes.
- * The workaround is to read all flagged counters and, where a counter
- * reads 0, write a non-zero value (e.g. 1) back into it.
- * No erratum covers this, and the real cause of the behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    is_pmc_quirk = (current_cpu_data.x86 == 6);
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general-purpose counters from CPUID.0xA:EAX[15:8]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters from CPUID.0xA:EDX[4:0]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: Bit width of fixed-function performance counters  */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
-        (msr_index == MSR_IA32_DS_AREA) ||
-        (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
-
-    if ( is_pv_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    if ( is_pv_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && !is_pv_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( is_pv_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->domain->domain_id, v->vcpu_id);
-
-    return 0;
-}
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Lazily load the PMU context on first access. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( !is_pv_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Cannot write read-only MSR "
-                 "MSR_CORE_PERF_GLOBAL_STATUS (0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( !is_pv_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( !is_pv_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            xen_pmu_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if  ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 control bits per fixed counter; mask covers fixed_pmc_cnt counters. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        }
-
-        if (inject_gp) 
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( !is_pv_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( !is_pv_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-    u64 val;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-         return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
-
-    /*
-     * Each fixed counter's configuration occupies 4 bits in
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000ULL | ((1ULL << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
-            return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * As in this case the vpmu is not enabled reset some bits in the
-     * architectural performance monitoring related part.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If its a vpmu msr set it to 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used in case vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_flags == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d has not "
-           "been supported\n", family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index f736de0..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,671 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/p2m.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <asm/p2m.h>
-#include <public/xenpmu.h>
-
-/*
- * "vpmu" :     vpmu generally enabled
- * "vpmu=off" : vpmu generally disabled
- * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
- */
-uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if (*s == '\0')
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch  (parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_apic_vector = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !is_pv_domain(current->domain) ||
-         !(current->arch.vpmu.xenpmu_data &&
-           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-static void vpmu_send_nmi(struct vcpu *v)
-{
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    u32 vlapic_lvtpc;
-    unsigned char int_vec;
-
-    if ( !is_vlapic_lvtpc_enabled(vlapic) )
-        return;
-
-    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-    else
-        v->nmi_pending = 1;
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !is_hvm_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !is_hvm_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-/* This routine may be called in NMI context */
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV(H) guest or dom0 is doing system profiling */
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-            return 1;
-
-        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
-            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-                return 0;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        if ( !is_hvm_domain(current->domain) )
-        {
-            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-
-            /* Store appropriate registers in xenpmu_data */
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                gregs = guest_cpu_user_regs();
-
-                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
-                     !is_pv_32bit_domain(v->domain) )
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           gregs, sizeof(struct cpu_user_regs));
-                else 
-                {
-                    /*
-                     * 32-bit dom0 cannot process Xen's addresses (which are
-                     * 64 bit) and therefore we treat it the same way as a
-                     * non-priviledged PV 32-bit domain.
-                     */
-
-                    struct compat_cpu_user_regs *cmp;
-
-                    cmp = (struct compat_cpu_user_regs *)
-                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                    XLAT_cpu_user_regs(cmp, gregs);
-                }
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV(H) guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       regs, sizeof(struct cpu_user_regs));
-
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            if ( !is_pvh_domain(current->domain) )
-                gregs->cs = cs;
-            else if ( !(vpmu_apic_vector & APIC_DM_NMI) )
-            {
-                struct segment_register seg_cs;
-
-                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
-                gregs->cs = seg_cs.attr.fields.dpl;
-            }
-        }
-        else
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   gregs, sizeof(struct cpu_user_regs));
-
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( !(vpmu_apic_vector & APIC_DM_NMI ) )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                gregs->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_apic_vector & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-    else if ( vpmu->arch_vpmu_ops )
-    {
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( vpmu_apic_vector & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            vpmu_send_nmi(v);
-
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_save_force(prev);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* 
-     * Only when PMU is counting and is not cached (for PV guests) do
-     * we load PMU context immediately.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-         (is_pv_domain(v->domain) &&
-          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( vpmu_mode & XENPMU_MODE_PRIV ||
-         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-    else
-    {
-        if ( is_hvm_domain(sampled->domain) )
-        {
-            vpmu_send_nmi(sampled);
-            return;
-        }
-        v = sampled;
-    }
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( !is_pv_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-
-static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    struct page_info *page;
-    uint64_t gmfn = params->d.val;
-    static int pvpmu_initted = 0;
- 
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
-    if ( !page )
-        return -EINVAL;
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
-    if ( !v->arch.vpmu.xenpmu_data )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            put_page(page);
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xen_pmu_params_t pmu_params;
-    uint32_t mode;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
-        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_MODE_MASK;
-        vpmu_mode |= mode;
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
-        pmu_params.v.version.maj = XENPMU_VER_MAJ;
-        pmu_params.v.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_mode &= ~XENPMU_FEATURE_MASK;
-        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_load(current);
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1854230..11f6821 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..f736de0
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,671 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/p2m.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/hvm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <asm/p2m.h>
+#include <public/xenpmu.h>
+
+/*
+ * "vpmu" :     vpmu generally enabled
+ * "vpmu=off" : vpmu generally disabled
+ * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
+ */
+uint32_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_apic_vector = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_mode |= XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_apic_vector | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic = vcpu_vlapic(v);
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+/* This routine may be called in NMI context */
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV(H) guest or dom0 is doing system profiling */
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
+            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+                return 0;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        if ( !is_hvm_domain(current->domain) )
+        {
+            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64-bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV(H) guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = cs;
+            else if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+    else if ( vpmu->arch_vpmu_ops )
+    {
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( vpmu_apic_vector & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            vpmu_send_nmi(v);
+
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_apic_vector | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from last pcpu that we ran on. Note that if another
+         * VCPU is running there it must have saved this VCPU's context before
+         * starting to run (see below).
+         * There should be no race since remote pcpu will disable interrupts
+         * before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+    }
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_save_force(prev);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* 
+     * Only when PMU is counting and is not cached (for PV guests) do
+     * we load PMU context immediately.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (is_pv_domain(v->domain) &&
+          vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v, vpmu_mode) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        return;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump vpmu information to the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( vpmu_mode & XENPMU_MODE_PRIV ||
+         sampled->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gmfn = params->d.val;
+    static int pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+    uint32_t mode;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
+        if ( (mode & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((mode & XENPMU_MODE_ON) && (mode & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_MODE_MASK;
+        vpmu_mode |= mode;
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_mode &= ~XENPMU_FEATURE_MASK;
+        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_load(current);
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..a0629d4
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,499 @@
+/*
+ * vpmu_amd.c: AMD PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/xenpmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+        uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
+    unsigned int i;
+
+    if ( !vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        vpmu_set(vpmu, VPMU_FROZEN);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        return 0;
+
+    context_save(v);
+
+    if ( !is_pv_domain(v->domain) &&
+         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest only mode for HVM guest */
+    if ( !is_pv_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        !(is_guest_mode(msr_content)) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( !is_pv_domain(v->domain) &&
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* stop saving & restore if guest stops first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( !is_pv_domain(v->domain) &&
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct xen_pmu_amd_ctxt *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU; "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
+
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+        printk("\n");
+        return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d is not supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..4323aaf
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,936 @@
+/*
+ * vpmu_intel.c: Core 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/xenpmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs.
+ * The issue leads to endless PMC interrupt loops on the processor.
+ * If the interrupt handler is running and a PMC reaches the value 0, this
+ * value remains forever and immediately triggers a new interrupt after
+ * the handler finishes.
+ * A workaround is to read all flagged counters and, if the value is 0,
+ * write 1 (or another non-zero value) into it.
+ * No erratum exists and the real cause of this behaviour is unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    if ( current_cpu_data.x86 == 6 )
+        is_pmc_quirk = 1;
+    else
+        is_pmc_quirk = 0;
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
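[Editor's note: the status-word walk in handle_pmc_quirk() above — general counter flags in the low 32 bits, fixed counter flags from bit 32 — can be mirrored in a pure-bookkeeping sketch that just counts the flagged counters instead of touching MSRs (`count_flagged` is a hypothetical stand-in, not part of this patch):]

```c
#include <assert.h>
#include <stdint.h>

/* Walk an overflow status word the way handle_pmc_quirk() does:
 * general counter flags occupy the low 32 bits, fixed counter
 * flags start at bit 32.  Returns how many counters are flagged
 * (a stand-in for the rdmsrl/wrmsrl fixup, which needs hardware). */
static unsigned int count_flagged(uint64_t status,
                                  unsigned int arch_cnt,
                                  unsigned int fixed_cnt)
{
    unsigned int i, n = 0;
    uint64_t val = status;

    for ( i = 0; i < arch_cnt; i++, val >>= 1 )
        n += val & 1;
    val = status >> 32;
    for ( i = 0; i < fixed_cnt; i++, val >>= 1 )
        n += val & 1;
    return n;
}
```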
+
+/*
+ * Read the number of general-purpose counters via CPUID.0xA:EAX[15:8]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters via CPUID.0xA:EDX[4:0]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* edx bits 5-12: Bit width of fixed-function performance counters  */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
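[Editor's note: the three CPUID helpers above decode fields of leaf 0xA. The bit positions follow the SDM's architectural-PMU layout; the sketch below applies them to sample register images rather than a real `cpuid` (helper names are hypothetical, not part of this patch):]

```c
#include <assert.h>
#include <stdint.h>

/* Decode the architectural-PMU fields of CPUID leaf 0xA from
 * sample register images (layout per the Intel SDM). */
static inline unsigned int pmu_gp_counters(uint32_t eax)
{
    return (eax >> 8) & 0xff;    /* EAX[15:8]: # general counters */
}

static inline unsigned int pmu_fixed_counters(uint32_t edx)
{
    return edx & 0x1f;           /* EDX[4:0]: # fixed counters */
}

static inline unsigned int pmu_fixed_width(uint32_t edx)
{
    return (edx >> 5) & 0xff;    /* EDX[12:5]: fixed counter width */
}
```

[For a Nehalem-like EAX=0x07300403/EDX=0x00000603 this yields 4 general counters and 3 fixed counters of 48 bits.]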
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
+         (msr_index == MSR_IA32_DS_AREA) ||
+         (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
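[Editor's note: the MSR_PMC_ALIAS_MASK trick used above works because the legacy counter range (0xc1...) and the full-width alias range (0x4c1...) differ in exactly one address bit, which the mask clears. Standalone sketch (`pmc_canonical_msr` is a hypothetical helper, not part of this patch):]

```c
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_PERFCTR0    0x000000c1
#define MSR_IA32_A_PERFCTR0  0x000004c1
#define MSR_PMC_ALIAS_MASK   (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))

/* Fold a full-width alias counter MSR (0x4c1..) onto its legacy
 * counterpart (0xc1..) by clearing the single differing bit. */
static inline uint32_t pmc_canonical_msr(uint32_t msr)
{
    return msr & MSR_PMC_ALIAS_MASK;
}
```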
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
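[Editor's note: the msraddr_to_bitpos() arithmetic used above maps the two architected MSR ranges onto the VMX read bitmap: low MSRs (0x0-0x1fff) land at their own address, high MSRs (0xc0000000-0xc0001fff) at offset 0x2000. A standalone sketch reproducing the macro on sample addresses:]

```c
#include <assert.h>
#include <stdint.h>

/* Map an MSR address to its bit position in the VMX MSR bitmap:
 * low MSRs (0x00000000-0x00001fff) occupy bits 0-0x1fff, high MSRs
 * (0xc0000000-0xc0001fff) the following 0x2000 bits. */
#define msraddr_to_bitpos(x) (((x) & 0xffff) + ((x) >> 31) * 0x2000)
```

[E.g. MSR_IA32_PERFCTR0 (0xc1) maps to bit 0xc1, while a high-range MSR such as 0xc0000080 maps to bit 0x2080.]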
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                    msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
+         !is_pv_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownership(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
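[Editor's note: core2_vpmu_alloc_resource() above lays out one buffer as a fixed header followed by the fixed-counter bank and then the arch counter/control pairs, storing byte offsets to each bank in the header. A miniature of that layout (struct names here are hypothetical stand-ins, not the patch's types):]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Miniature of the context layout: header, then fixed counters,
 * then general-purpose counter/control pairs. */
struct mini_cntr_pair { uint64_t counter, control; };
struct mini_intel_ctxt {
    uint64_t fixed_counters;   /* byte offset of fixed counter bank */
    uint64_t arch_counters;    /* byte offset of arch counter bank  */
};

/* Total allocation size, mirroring the xzalloc_bytes() call. */
static size_t ctxt_size(unsigned int fixed_cnt, unsigned int arch_cnt)
{
    return sizeof(struct mini_intel_ctxt) +
           sizeof(uint64_t) * fixed_cnt +
           sizeof(struct mini_cntr_pair) * arch_cnt;
}

/* Byte offset of the arch counter bank: header plus fixed bank. */
static size_t arch_bank_offset(unsigned int fixed_cnt)
{
    return sizeof(struct mini_intel_ctxt) + sizeof(uint64_t) * fixed_cnt;
}
```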
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Do the lazy load stuff. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && !is_pv_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
+                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= 4;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            xen_pmu_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per fixed counter. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+       if ( !is_pv_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+    u64 val;
+    uint64_t *fixed_counters;
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    core2_vpmu_cxt = vpmu->context;
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
+    /*
+     * Each fixed counter's configuration occupies 4 bits in
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        msr_content = 0xC000000700000000ULL | ((1ULL << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
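[Editor's note: the GLOBAL_OVF_CTRL value written above acknowledges every possible overflow source at once: each general counter bit, the three fixed counter bits (32-34, the hard-coded 0x7), and the CondChgd/OvfBuffer bits (63, 62). A standalone sketch of that mask construction (`ovf_clear_mask` is a hypothetical helper, not part of this patch):]

```c
#include <assert.h>
#include <stdint.h>

/* Build the MSR_CORE_PERF_GLOBAL_OVF_CTRL value: general counter
 * bits [arch_pmc_cnt-1:0], fixed counter bits 32-34, and the
 * CondChgd/OvfBuffer bits 63 and 62. */
static uint64_t ovf_clear_mask(unsigned int arch_pmc_cnt)
{
    return 0xC000000700000000ULL | ((1ULL << arch_pmc_cnt) - 1);
}
```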
+
+static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_flags & (XENPMU_FEATURE_INTEL_BTS << XENPMU_FEATURE_SHIFT)) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownership(PMU_OWNER_HVM);
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * As in this case the vpmu is not enabled reset some bits in the
+     * architectural performance monitoring related part.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it's a vpmu MSR, set it to 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used in case vpmu is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_flags == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v, vpmu_flags);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not supported\n",
+           family, cpu_model);
+    return -EINVAL;
+}
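[Editor's note: the full_width_write probe above tests bit 13 of IA32_PERF_CAPABILITIES, which advertises the full-width writable counters behind the 0x4c1 alias range. A standalone sketch operating on sample MSR images rather than a real rdmsrl (`has_full_width_write` is a hypothetical helper, not part of this patch):]

```c
#include <assert.h>
#include <stdint.h>

/* IA32_PERF_CAPABILITIES bit 13: full-width writable counters
 * (the MSR_IA32_A_PERFCTR0 alias range) are available. */
static int has_full_width_write(uint64_t perf_caps)
{
    return (perf_caps >> 13) & 1;
}
```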
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 29bb977..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/xenpmu.h>
-
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-/* Start of PMU register bank */
-#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
-                                                 (uintptr_t)ctxt->offset))
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xen_pmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint32_t vpmu_mode;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..863be59
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,98 @@
+/*
+ * vpmu.h: PMU virtualization.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_VPMU_H_
+#define __ASM_X86_VPMU_H_
+
+#include <public/xenpmu.h>
+
+#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
+#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
+int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint32_t vpmu_mode;
+
+#endif /* __ASM_X86_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:10:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ghy-0000Pd-K0; Tue, 21 Jan 2014 19:09:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W5ghw-0000M2-4P
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:09:56 +0000
Received: from [193.109.254.147:43149] by server-5.bemta-14.messagelabs.com id
	F2/6F-03510-306CED25; Tue, 21 Jan 2014 19:09:55 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390331393!12220913!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26669 invoked from network); 21 Jan 2014 19:09:54 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 19:09:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LJ8kSp031278
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 19:08:46 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8jfl018475
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 19:08:45 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LJ8iFb027320; Tue, 21 Jan 2014 19:08:44 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 11:08:44 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Date: Tue, 21 Jan 2014 14:09:01 -0500
Message-Id: <1390331342-3967-17-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	JBeulich@suse.com, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v4 16/17] x86/VPMU: Suport for PVH guests
	[sic: "Suport" should read "Support"]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for PVH guests. Most operations are performed as they are in an
HVM guest; interrupt management, however, is done in a PV-like manner.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/hvm.c     |  3 ++-
 xen/arch/x86/hvm/vmx/vmx.c |  4 +++-
 xen/arch/x86/hvm/vpmu.c    | 22 ++++++++++++++++++----
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..1e50c35 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3451,7 +3451,8 @@ static hvm_hypercall_t *const pvh_hypercall64_table[NR_hypercalls] = {
     [ __HYPERVISOR_physdev_op ]      = (hvm_hypercall_t *)hvm_physdev_op,
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(domctl)
+    HYPERCALL(domctl),
+    HYPERCALL(xenpmu_op)
 };
 
 int hvm_do_hypercall(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index dfff628..59b8ef1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -112,7 +112,9 @@ static int vmx_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    /* PVH will initialize VPMU using PV path */
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     vmx_install_vlapic_mapping(v);
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index e76b538..f736de0 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -37,6 +37,7 @@
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
 #include <asm/nmi.h>
+#include <asm/p2m.h>
 #include <public/xenpmu.h>
 
 /*
@@ -194,13 +195,17 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
     if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
-        /* PV guest or dom0 is doing system profiling */
+        /* PV(H) guest or dom0 is doing system profiling */
         struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
             return 1;
 
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
+            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+                return 0;
+
         /* PV guest will be reading PMU MSRs from xenpmu_data */
         vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
@@ -237,7 +242,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             else if ( !is_control_domain(current->domain) &&
                       !is_idle_vcpu(current) )
             {
-                /* PV guest */
+                /* PV(H) guest */
                 gregs = guest_cpu_user_regs();
                 memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                        gregs, sizeof(struct cpu_user_regs));
@@ -247,7 +252,15 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
                        regs, sizeof(struct cpu_user_regs));
 
             gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = cs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = cs;
+            else if ( !(vpmu_apic_vector & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
         }
         else
         {
@@ -271,7 +284,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
         v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
 
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:28:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5gzl-0002jl-UK; Tue, 21 Jan 2014 19:28:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W5gzj-0002jY-Qc
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 19:28:20 +0000
Received: from [85.158.139.211:43991] by server-3.bemta-5.messagelabs.com id
	08/5C-04773-35ACED25; Tue, 21 Jan 2014 19:28:19 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390332497!11121911!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21082 invoked from network); 21 Jan 2014 19:28:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 19:28:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="92980430"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 19:28:15 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 14:28:15 -0500
Message-ID: <52DECA4E.4080004@citrix.com>
Date: Tue, 21 Jan 2014 20:28:14 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Mukesh Rathor
	<mukesh.rathor@oracle.com>, xen-devel <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
cpuid feature flags exposed to PVH guests are kind of strange. This is
the output of the feature flags as seen by an HVM domain:

Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
 Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
AMD Features2=0x1<LAHF>

And this is what a PVH domain sees when running on the same hardware:

Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
AMD Features=0x20100800<SYSCALL,NX,LM>
AMD Features2=0x1<LAHF>

I would expect the feature flags to be quite similar between an HVM
domain and a PVH domain (since they both run inside of an HVM container).
AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
guests. Also, is there any reason why PVH guests have the ACPI, SS and
CLFLUSH feature flags while HVM guests do not?

Most (if not all) of this probably comes from the fact that we are
reporting the same feature flags as for pure PV guests, but I see no
reason to do that for PVH guests. We should decide what's supported on
PVH and set the feature flags accordingly.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:44:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5hEp-0003zm-QK; Tue, 21 Jan 2014 19:43:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5hEo-0003zg-18
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:43:54 +0000
Received: from [193.109.254.147:26853] by server-7.bemta-14.messagelabs.com id
	94/ED-15500-9FDCED25; Tue, 21 Jan 2014 19:43:53 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390333431!12335986!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27607 invoked from network); 21 Jan 2014 19:43:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 19:43:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="92986249"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 19:43:50 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 14:43:49 -0500
Message-ID: <52DECDF4.2060301@citrix.com>
Date: Tue, 21 Jan 2014 19:43:48 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, David Vrabel
	<david.vrabel@citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
	<52DE78BF.2070909@citrix.com>
	<alpine.DEB.2.02.1401211416420.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401211416420.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 14:22, Stefano Stabellini wrote:
> On Tue, 21 Jan 2014, David Vrabel wrote:
>> On 21/01/14 12:26, Stefano Stabellini wrote:
>>> On Mon, 20 Jan 2014, Zoltan Kiss wrote:
>>>
>>>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>>>> -				       &kmap_ops[i] : NULL);
>>>> +		if (m2p_override)
>>>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>>>> +					       &kmap_ops[i] : NULL);
>>>> +		else {
>>>> +			unsigned long pfn = page_to_pfn(pages[i]);
>>>> +			WARN_ON(PagePrivate(pages[i]));
>>>> +			SetPagePrivate(pages[i]);
>>>> +			set_page_private(pages[i], mfn);
>>>> +			pages[i]->index = pfn_to_mfn(pfn);
>>>> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>>>> +				return -ENOMEM;
>>>
>>> What happens if the page is PageHighMem?
>>>
>>> This looks like a subset of m2p_add_override, but it is missing some
>>> relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
>>> check.  Maybe we can find a way to avoid duplicating the code.
>>> We could split m2p_add_override in two functions or add yet another
>>> parameter to it.
>>
>> The PageHighMem() check isn't relevant as we're not mapping anything
>> here.  Also, a page for a kernel grant mapping only cannot be highmem.
>>
>> The check for a local mfn and the additional set_phys_to_machine() are
>> only necessary if something tries an mfn_to_pfn() on the local mfn.  We
>> can only omit adding an m2p override if we know nothing will try
>> mfn_to_pfn(), therefore the check and set_phys_to_machine() are unnecessary.
>
> OK, you convinced me that the two checks are superfluous for this case.
>
> Can we still avoid the code duplication by removing the corresponding
> code from m2p_add_override and m2p_remove_override and doing the
> set_page_private thing uniquely in grant-table.c?
Yes, I moved these parts out from the m2p* funcions to the gntmap 
functions. One change is that now we pass pfn/mfn to m2p* functions as 
they are changing right before the call. Also, to avoid racing I clear 
the page->private value before calling m2p_remove_override. I'll send in 
the patch soon.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 19:44:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 19:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5hEp-0003zm-QK; Tue, 21 Jan 2014 19:43:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5hEo-0003zg-18
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 19:43:54 +0000
Received: from [193.109.254.147:26853] by server-7.bemta-14.messagelabs.com id
	94/ED-15500-9FDCED25; Tue, 21 Jan 2014 19:43:53 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390333431!12335986!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27607 invoked from network); 21 Jan 2014 19:43:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 19:43:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="92986249"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 19:43:50 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 14:43:49 -0500
Message-ID: <52DECDF4.2060301@citrix.com>
Date: Tue, 21 Jan 2014 19:43:48 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, David Vrabel
	<david.vrabel@citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401211217510.21510@kaball.uk.xensource.com>
	<52DE78BF.2070909@citrix.com>
	<alpine.DEB.2.02.1401211416420.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401211416420.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 14:22, Stefano Stabellini wrote:
> On Tue, 21 Jan 2014, David Vrabel wrote:
>> On 21/01/14 12:26, Stefano Stabellini wrote:
>>> On Mon, 20 Jan 2014, Zoltan Kiss wrote:
>>>
>>>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>>>> -				       &kmap_ops[i] : NULL);
>>>> +		if (m2p_override)
>>>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>>>> +					       &kmap_ops[i] : NULL);
>>>> +		else {
>>>> +			unsigned long pfn = page_to_pfn(pages[i]);
>>>> +			WARN_ON(PagePrivate(pages[i]));
>>>> +			SetPagePrivate(pages[i]);
>>>> +			set_page_private(pages[i], mfn);
>>>> +			pages[i]->index = pfn_to_mfn(pfn);
>>>> +			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>>>> +				return -ENOMEM;
>>>
>>> What happens if the page is PageHighMem?
>>>
>>> This looks like a subset of m2p_add_override, but it is missing some
>>> relevant bits, like the PageHighMem check, or the p2m(m2p(mfn)) == mfn
>>> check.  Maybe we can find a way to avoid duplicating the code.
>>> We could split m2p_add_override in two functions or add yet another
>>> parameter to it.
>>
>> The PageHighMem() check isn't relevant as we're not mapping anything
>> here.  Also, a page used only for a kernel grant mapping cannot be highmem.
>>
>> The check for a local mfn and the additional set_phys_to_machine() are
>> only necessary if something tries an mfn_to_pfn() on the local mfn.  We
>> can only omit adding an m2p override if we know nothing will try
>> mfn_to_pfn(); in that case the check and set_phys_to_machine() are unnecessary.
>
> OK, you convinced me that the two checks are superfluous for this case.
>
> Can we still avoid the code duplication by removing the corresponding
> code from m2p_add_override and m2p_remove_override and doing the
> set_page_private thing uniquely in grant-table.c?
Yes, I moved these parts out of the m2p* functions into the gntmap 
functions. One change is that we now pass pfn/mfn to the m2p* functions 
as parameters, since they change right before the call. Also, to avoid 
racing, I clear the page->private value before calling 
m2p_remove_override. I'll send the patch soon.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 20:07:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 20:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5hbW-00058a-PN; Tue, 21 Jan 2014 20:07:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W5hbU-00058K-KT; Tue, 21 Jan 2014 20:07:20 +0000
Received: from [85.158.143.35:52682] by server-2.bemta-4.messagelabs.com id
	23/FC-11386-773DED25; Tue, 21 Jan 2014 20:07:19 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390334838!13014639!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2878 invoked from network); 21 Jan 2014 20:07:19 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 20:07:19 -0000
Received: by mail-lb0-f169.google.com with SMTP id q8so6444841lbi.28
	for <multiple recipients>; Tue, 21 Jan 2014 12:07:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=RlyEKLbyDhNYHRtAGPpG0WftxP57geFc7v0fM5HMuwk=;
	b=O8nIln8mqtbHujwAQN+zs0BZsv4GQeC6AfBsdPqgF7qpq26QA0Y4MgRTsTLHTxbcrR
	HGGpvxrNss5Rq3uN9bhrDwABi3HLwIWT9OrhbyBADxLTPhOXFFAGlees4aoiSDUmMlEF
	fWSmry//dTjnc8YXv7exdqmx3YLsUWwd6VrowT8egetqQZNwckU7r6X1ldVm/4sFR1hZ
	i5lvgUElwm8OUydX1zW3RJZaaAtRR8GXCl+LQRmXpu+ovnspOJy/kKGUk4vdJxXpe5Rw
	UbJ2Tic5dRObwVzyJptdcUuQNGEGIk3CpjMruttiQH9s4zzOivE2dHQYA4KumSx2t3bU
	puPg==
MIME-Version: 1.0
X-Received: by 10.152.20.6 with SMTP id j6mr17216121lae.8.1390334838063; Tue,
	21 Jan 2014 12:07:18 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Tue, 21 Jan 2014 12:07:18 -0800 (PST)
In-Reply-To: <CAHehzX0UcfqKd6Uy2vW0niEuypYRT4taLvvLQSLscTPGHzC_9w@mail.gmail.com>
References: <CAHehzX0UcfqKd6Uy2vW0niEuypYRT4taLvvLQSLscTPGHzC_9w@mail.gmail.com>
Date: Tue, 21 Jan 2014 15:07:18 -0500
X-Google-Sender-Auth: tKBBZCMe1AfEvXa9eek3Lvi5POA
Message-ID: <CAHehzX0N7M-gq7gcFraZ6LPkW=563aYQbBL0LYN3uB-_oKa_yg@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: Russ Pavlicek <russell.pavlicek@xenproject.org>
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>, xen-api@lists.xen.org,
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] TODAY, Jan. 20, is Xen Test Day for Xen 4.4 RC2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thank you to all who participated in Monday's Xen Project Test Day for
Xen 4.4 RC2!

We will be holding another Test Day in a couple weeks.  February 3 is
currently earmarked for the next round.

If you know people who rely on Xen, please let them know.  This is
their opportunity to test the new release in their environment.

Thanks again, and see you in a couple weeks!

Russ Pavlicek
Xen Project Evangelist

On Mon, Jan 20, 2014 at 12:01 AM, Russ Pavlicek
<russell.pavlicek@xenproject.org> wrote:
> This is a reminder that today is the Xen Test Day for Xen 4.4 RC2.
>
> This is the first Test Day for the 4.4 codebase (we had no Test Day
> for RC1 due to the number of people taking time off in late December).
>  As such, it is extremely important that we take some time to make
> sure the code is functioning properly.  Please try to spend some time
> today loading and testing Xen 4.4 RC2.
>
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
>
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC2_test_instructions
>
> Developers: please consider monitoring the Freenode IRC channel
> #xentest today to make sure that people are able to build and test the
> code.
>
> Hope to see you today on #xentest!
>
> Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 20:23:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 20:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5hqT-0006Rb-J7; Tue, 21 Jan 2014 20:22:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5hqR-0006RW-CM
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 20:22:47 +0000
Received: from [85.158.137.68:22928] by server-3.bemta-3.messagelabs.com id
	3A/56-10658-617DED25; Tue, 21 Jan 2014 20:22:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390335763!6840097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24937 invoked from network); 21 Jan 2014 20:22:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 20:22:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93001538"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 20:22:43 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 15:22:42 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 21 Jan 2014 20:22:35 +0000
Message-ID: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/page.h     |   12 +++--
 arch/x86/xen/p2m.c                  |   25 ++--------
 drivers/block/xen-blkback/blkback.c |   15 +++---
 drivers/xen/gntdev.c                |   13 +++--
 drivers/xen/grant-table.c           |   90 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 6 files changed, 107 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..68a1438 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
-extern int m2p_add_override(unsigned long mfn, struct page *page,
-			    struct gnttab_map_grant_ref *kmap_op);
+extern int m2p_add_override(unsigned long mfn,
+			    struct page *page,
+			    struct gnttab_map_grant_ref *kmap_op,
+			    unsigned long pfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long pfn,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..0060178 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
 
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
 {
 	unsigned long flags;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long pfn,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
-
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..1f97fa0 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+			return -ENOMEM;
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL, pfn);
 		if (ret)
 			goto out;
 	}
@@ -937,17 +951,34 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +989,32 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
+			return -EINVAL;
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  pfn,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -977,10 +1023,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 20:23:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 20:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5hqT-0006Rb-J7; Tue, 21 Jan 2014 20:22:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W5hqR-0006RW-CM
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 20:22:47 +0000
Received: from [85.158.137.68:22928] by server-3.bemta-3.messagelabs.com id
	3A/56-10658-617DED25; Tue, 21 Jan 2014 20:22:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390335763!6840097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24937 invoked from network); 21 Jan 2014 20:22:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 20:22:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93001538"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 20:22:43 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 15:22:42 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 21 Jan 2014 20:22:35 +0000
Message-ID: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour
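
The resulting call structure can be sketched in plain C (illustrative only:
the names mirror the patch, but the types are opaque stand-ins and the body
is a comment, not the kernel implementation):

```c
#include <stdbool.h>
#include <stddef.h>

/* Opaque stand-ins for the kernel types used by the grant API. */
struct page;
struct gnttab_map_grant_ref;

/* Core helper: takes an explicit m2p_override flag. */
static int __gnttab_map_refs_sketch(struct gnttab_map_grant_ref *map_ops,
                                    struct gnttab_map_grant_ref *kmap_ops,
                                    struct page **pages, unsigned int count,
                                    bool m2p_override)
{
    /* kmap_ops only makes sense on the userspace (m2p_override) path. */
    if (kmap_ops && !m2p_override)
        return -1;
    /* ... hypercall, private-flag + set_phys_to_machine bookkeeping,
     * and m2p_add_override only when m2p_override is set ... */
    return 0;
}

/* Kernel-only callers (blkback, future netback): no m2p_override. */
int gnttab_map_refs_sketch(struct gnttab_map_grant_ref *map_ops,
                           struct page **pages, unsigned int count)
{
    return __gnttab_map_refs_sketch(map_ops, NULL, pages, count, false);
}

/* Userspace-facing callers (gntdev): the old behaviour is retained. */
int gnttab_map_refs_userspace_sketch(struct gnttab_map_grant_ref *map_ops,
                                     struct gnttab_map_grant_ref *kmap_ops,
                                     struct page **pages, unsigned int count)
{
    return __gnttab_map_refs_sketch(map_ops, kmap_ops, pages, count, true);
}
```

The unmap side follows the same wrapper shape, as the diff below shows.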

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

v4:
- move out the common bits from the m2p* functions, and pass pfn/mfn as parameters
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/page.h     |   12 +++--
 arch/x86/xen/p2m.c                  |   25 ++--------
 drivers/block/xen-blkback/blkback.c |   15 +++---
 drivers/xen/gntdev.c                |   13 +++--
 drivers/xen/grant-table.c           |   90 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 6 files changed, 107 insertions(+), 56 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..68a1438 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
-extern int m2p_add_override(unsigned long mfn, struct page *page,
-			    struct gnttab_map_grant_ref *kmap_op);
+extern int m2p_add_override(unsigned long mfn,
+			    struct page *page,
+			    struct gnttab_map_grant_ref *kmap_op,
+			    unsigned long pfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long pfn,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..0060178 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
 
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
 {
 	unsigned long flags;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long pfn,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
-
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..1f97fa0 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+			return -ENOMEM;
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL, pfn);
 		if (ret)
 			goto out;
 	}
@@ -937,17 +951,34 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +989,32 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
+			return -EINVAL;
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  pfn,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -977,10 +1023,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (lazy)
 		arch_leave_lazy_mmu_mode();
 
-	return ret;
+	return 0;
+}
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due


From xen-devel-bounces@lists.xen.org Tue Jan 21 20:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 20:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5iLm-0007hy-SX; Tue, 21 Jan 2014 20:55:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W5iLh-0007hq-EB
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 20:55:09 +0000
Received: from [85.158.137.68:38074] by server-1.bemta-3.messagelabs.com id
	07/4A-29598-8AEDED25; Tue, 21 Jan 2014 20:55:04 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390337702!6855870!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25733 invoked from network); 21 Jan 2014 20:55:03 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 20:55:03 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so7006564qae.41
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 12:55:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=TnPQTo/Bo2tRW21q6A4pDM0UX+x2D0zO9S54G0JdgFo=;
	b=AhRmPLtMkd6kxF5fwiw3gwh1KUo9ea+8siLLGRz1qpx5YJ8TUbHGcEkd4yt0ud4NtH
	8gkIlufJLczaMCZG8Xx8zv9PQf/SjRFEsYEKXXMcNHMp55yNQchjwohx0xfcsqNKE5Wd
	X2H8YqPnSK8hhNN59O3v/luDoWDvXc5gNBs5VGsPghxPJJ87U4EU3/WTwDZLkbAZ51GO
	MIo4J8GBc3WwyvCBFPqMGDt+o2nyz/NRDPg224IyNP63v3ZUgfOiiukUW4/3rTSJz484
	bkyVWOItCtWNkLPpB7hNG2eCqItaE+m6x25S/nY3iyGcMtkrmtLrvQkjv29qqZtGaUkK
	Ffaw==
MIME-Version: 1.0
X-Received: by 10.140.108.229 with SMTP id j92mr29325509qgf.7.1390337702470;
	Tue, 21 Jan 2014 12:55:02 -0800 (PST)
Received: by 10.96.68.197 with HTTP; Tue, 21 Jan 2014 12:55:02 -0800 (PST)
Date: Tue, 21 Jan 2014 15:55:02 -0500
Message-ID: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=001a11398730dfb8e304f0813879
Subject: [Xen-devel] [PATCH][v3] Pass the location of the ACPI RSDP to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11398730dfb8e304f0813879
Content-Type: text/plain; charset=ISO-8859-1

xen: [v3] Pass the location of the ACPI RSDP to DOM0.

Some machines, such as recent IBM servers, only allow the OS to get the
ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
cannot get the RSDP on these machines, leading to reduced functionality.

Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>

---
Changed since v2:
    * Fix coding style
    * Get rid of extra define
    * Use correct typedef'd type for the ACPI RSDP pointer
    * Better error checking conditional
    * Simplify error message
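
The formatting step the patch performs can be exercised in a small userspace
sketch (hypothetical stand-ins: `uint64_t` for `acpi_physical_address`,
`strncat` for Xen's `safe_strcat`):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t acpi_physical_address;  /* stand-in for the ACPICA typedef */

/* Append " acpi_rsdp=0x<hex>" to cmdline, mirroring the patch's buffer
 * sizing: two hex digits per byte of the address type, plus a NUL. */
static void append_rsdp(char *cmdline, size_t len, acpi_physical_address rp)
{
    char rp_str[sizeof(acpi_physical_address) * 2 + 1];

    snprintf(rp_str, sizeof(rp_str), "%08lX", (unsigned long)rp);
    strncat(cmdline, " acpi_rsdp=0x", len - strlen(cmdline) - 1);
    strncat(cmdline, rp_str, len - strlen(cmdline) - 1);
}
```

For example, an RSDP at physical address 0xFE300 would leave dom0's command
line ending in `acpi_rsdp=0x000FE300`.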

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..fdeb9f2 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
             safe_strcat(dom0_cmdline, " acpi=");
             safe_strcat(dom0_cmdline, acpi_param);
         }
+        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )
+        {
+            acpi_physical_address rp = acpi_os_get_root_pointer();
+            char rp_str[sizeof(acpi_physical_address)*2 + 1];
+
+            if ( rp )
+            {
+                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 1,
+                         "%08lX", rp);
+
+                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
+                safe_strcat(dom0_cmdline, rp_str);
+            }
+            else
+            {
+                printk(XENLOG_WARNING
+                       "Failed to get acpi_rsdp to pass to dom0\n");
+            }
+        }

         cmdline = dom0_cmdline;
     }

--001a11398730dfb8e304f0813879
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--001a11398730dfb8e304f0813879--


From xen-devel-bounces@lists.xen.org Tue Jan 21 20:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 20:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5iLm-0007hy-SX; Tue, 21 Jan 2014 20:55:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W5iLh-0007hq-EB
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 20:55:09 +0000
Received: from [85.158.137.68:38074] by server-1.bemta-3.messagelabs.com id
	07/4A-29598-8AEDED25; Tue, 21 Jan 2014 20:55:04 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390337702!6855870!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25733 invoked from network); 21 Jan 2014 20:55:03 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 20:55:03 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so7006564qae.41
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 12:55:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=TnPQTo/Bo2tRW21q6A4pDM0UX+x2D0zO9S54G0JdgFo=;
	b=AhRmPLtMkd6kxF5fwiw3gwh1KUo9ea+8siLLGRz1qpx5YJ8TUbHGcEkd4yt0ud4NtH
	8gkIlufJLczaMCZG8Xx8zv9PQf/SjRFEsYEKXXMcNHMp55yNQchjwohx0xfcsqNKE5Wd
	X2H8YqPnSK8hhNN59O3v/luDoWDvXc5gNBs5VGsPghxPJJ87U4EU3/WTwDZLkbAZ51GO
	MIo4J8GBc3WwyvCBFPqMGDt+o2nyz/NRDPg224IyNP63v3ZUgfOiiukUW4/3rTSJz484
	bkyVWOItCtWNkLPpB7hNG2eCqItaE+m6x25S/nY3iyGcMtkrmtLrvQkjv29qqZtGaUkK
	Ffaw==
MIME-Version: 1.0
X-Received: by 10.140.108.229 with SMTP id j92mr29325509qgf.7.1390337702470;
	Tue, 21 Jan 2014 12:55:02 -0800 (PST)
Received: by 10.96.68.197 with HTTP; Tue, 21 Jan 2014 12:55:02 -0800 (PST)
Date: Tue, 21 Jan 2014 15:55:02 -0500
Message-ID: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=001a11398730dfb8e304f0813879
Subject: [Xen-devel] [PATCH][v3] Pass the location of the ACPI RSDP to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11398730dfb8e304f0813879
Content-Type: text/plain; charset=ISO-8859-1

xen: [v3] Pass the location of the ACPI RSDP to DOM0.

Some machines, such as recent IBM servers, only allow the OS to get the
ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
cannot get the RSDP on these machines, leading to all sorts of
functionality reductions.

Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>

---
Changed since v2:
    * Fix coding style
    * Get rid of extra define
    * Use correct typedef'd type for the ACPI RSDP pointer
    * Better error checking conditional
    * Simplify error message

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..fdeb9f2 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
             safe_strcat(dom0_cmdline, " acpi=");
             safe_strcat(dom0_cmdline, acpi_param);
         }
+        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )
+        {
+            acpi_physical_address rp = acpi_os_get_root_pointer();
+            char rp_str[sizeof(acpi_physical_address)*2 + 1];
+
+            if ( rp )
+            {
+                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 1,
+                         "%08lX", rp);
+
+                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
+                safe_strcat(dom0_cmdline, rp_str);
+            }
+            else
+            {
+                printk(XENLOG_WARNING
+                       "Failed to get acpi_rsdp to pass to dom0\n");
+            }
+        }

         cmdline = dom0_cmdline;
     }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Jan 21 21:19:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ijZ-0000WP-PA; Tue, 21 Jan 2014 21:19:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W5ijX-0000W6-TF
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:19:44 +0000
Received: from [85.158.143.35:15861] by server-1.bemta-4.messagelabs.com id
	0A/D4-02132-F64EED25; Tue, 21 Jan 2014 21:19:43 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390339181!1339197!1
X-Originating-IP: [209.85.216.46]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27484 invoked from network); 21 Jan 2014 21:19:42 -0000
Received: from mail-qa0-f46.google.com (HELO mail-qa0-f46.google.com)
	(209.85.216.46)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 21:19:42 -0000
Received: by mail-qa0-f46.google.com with SMTP id ii20so7029485qab.19
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 13:19:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=Pl6L49Ms0TDk9Rd2pw4JT162NYqt91E3VhMGiqbJ3vc=;
	b=Np19wrljY33G1rHKK+QZczT26nuUA0nJDpPYqobY4A30GNLQm/UxS8epfF2kJCsG9/
	+fzs758s4kOidqLY0Fk72tQQgZyXIPUHYZJMJ0ddLsyw2YjinJv2Qa+mQ56XklXbST6+
	dkGSjGeOW4AbcyVoKvafRhV6jVsoXRAbwOCi6RJgTGX+WXX8rKD7LCGFHZxNt8MDVzgH
	Pbl/WQgi+QAo8J+HmeOsWBooMtDPHoTLvE5VEAJtYlyfwajDnimggOkYvVo/xFyi2eaK
	7ZHm+LQff0XObH13c1hEUynCAdjjfMN9uUIPm0VjZeDubVlR7reYJGqF6+1htabymLZf
	w+OQ==
MIME-Version: 1.0
X-Received: by 10.140.88.180 with SMTP id t49mr38723006qgd.97.1390339181040;
	Tue, 21 Jan 2014 13:19:41 -0800 (PST)
Received: by 10.96.68.197 with HTTP; Tue, 21 Jan 2014 13:19:40 -0800 (PST)
In-Reply-To: <52DE512A02000078001154C7@nat28.tlf.novell.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
Date: Tue, 21 Jan 2014 16:19:40 -0500
Message-ID: <CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 4:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 20.01.14 at 23:08, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
>> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
>>
>> Some machines, such as recent IBM servers, only allow the OS to get the
>> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
>
> This reads as if this was a bug in Xen, which it isn't. Dom0's
> lack of EFI support when running on top of Xen is the issue.

It all depends on how you look at it, as there are two ways to fix this.
Linux currently supports EFI just fine. Xen nukes DOM0's ability to
access EFI via the existing methods, which causes Linux's existing
EFI support to fail; from that angle it is Xen's fault. If Xen instead
exposes EFI to DOM0 in some other way that Linux is not aware of, then
it is Linux's fault. Either way, we can choose to fix either Xen or
Linux.

> I think I had indicated my opposition to this sort of hack on v1
> already; I'm not sure I asked which OSes usable as Dom0 but
> other than Linux recognize this option. Or which versions of
> Linux actually do (I'm pretty sure older ones don't).
>
> Bottom line - I continue to think that the issue should be fixed
> in Linux.

The method this patch uses is a valid way to fix this. In the
best case, the DOM0 OS recognizes the option and uses it. In the
worst case, the DOM0 OS doesn't recognize the option and ignores it,
so we're no worse off.

I agree with you that this is more of a stopgap than a long-term fix.
The final solution is to fully expose EFI services to the DOM0 in some
way. However, getting there will take some time. The reason this patch
came about is that the company I work for bought new IBM servers in
the hope of migrating our existing Xen server farm to the new IBM
servers. But we soon found out that DOM0 couldn't find the ACPI RSDP
pointer on the new IBM servers, which means that Xen is dead in the
water on these machines for now. The final solution of exposing EFI to
the DOM0 is the ultimate goal, but for now businesses need an
immediate solution. This provides an acceptable solution until DOM0
EFI services are implemented at a later date. There is no reason why
this can't be merged now and then removed when DOM0 EFI service
support arrives.


From xen-devel-bounces@lists.xen.org Tue Jan 21 21:24:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ink-0000tU-Jr; Tue, 21 Jan 2014 21:24:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W5inj-0000tK-9Q
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:24:03 +0000
Received: from [85.158.143.35:49816] by server-2.bemta-4.messagelabs.com id
	B3/C5-11386-275EED25; Tue, 21 Jan 2014 21:24:02 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390339440!590332!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18809 invoked from network); 21 Jan 2014 21:24:00 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:24:00 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 93EED81966;
	Tue, 21 Jan 2014 23:23:59 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 79F0236C01F; Tue, 21 Jan 2014 23:23:59 +0200 (EET)
Date: Tue, 21 Jan 2014 23:23:59 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Message-ID: <20140121212359.GJ2924@reaktio.net>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savchenko wrote:
> Hi,
> 
> Could someone advise on the issue I am facing?
> 
> I am trying to run PV USB on omap5uevm (omap5-panda) board.
> 
> I use the latest PV USB drivers from Nathanael's server:
> http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch
>

I think Konrad actually has the Xen PVUSB drivers in his git tree, and it has
some extra patches compared to Nathanael's version (IIRC).

-- Pasi

 
> I have applied it to k3.8(dom0) with some patches for USB HCD, usbback drivers
> (attached) and run on Xen 4.4.0-rc2.
> 
> I am facing an issue with USB_STORAGE:
> The USB storage is initialized and mounted on domU over the PV USB drivers,
> but I can only copy small files (no more than ~100-500 kBytes) to it.
> Beyond that, the USB storage falls into an infinite loop
> (the LEDs on the USB storage keep blinking, longer than the copy should take)
> and after a few seconds dom0 disconnects the USB device.
> 
> Dom0, DomU use k3.8.
> 
> I observed that usb-storage issues some SCSI requests (from domU) which pass
> directly to the hardware; I think this is the problem.
> 
> So, I applied PV SCSI drivers from
> http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
> to k3.8.
> 
> Then I initialized PV USB & PV SCSI with the scripts vusb-start.sh and vscsi-start.sh respectively.
> But I am still facing the issue.
> 
> Dom0 log:
> [    0.000000] Booting Linux on physical CPU 0x0
> [    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
> [    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=10c5387d
> [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
> [    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
> [    0.000000] bootconsole [earlycon0] enabled
> [    0.000000] Memory policy: ECC disabled, Data cache writeback
> [    0.000000] On node 0 totalpages: 65280
> [    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map c428e000
> [    0.000000]   Normal zone: 512 pages used for memmap
> [    0.000000]   Normal zone: 0 pages reserved
> [    0.000000]   Normal zone: 64768 pages, LIFO batch:15
> [    0.000000] psci: probing function IDs from device-tree
> [    0.000000] OMAP5432 ES2.0
> [    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> [    0.000000] pcpu-alloc: [0] 0 
> [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
> [    0.000000] Kernel command line: console=hvc0 earlyprintk
> [    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
> [    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
> [    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
> [    0.000000] __ex_table already sorted, skipping sort
> [    0.000000] Memory: 255MB = 255MB total
> [    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K highmem
> [    0.000000] Virtual kernel memory layout:
> [    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
> [    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
> [    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
> [    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
> [    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
> [    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
> [    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
> [    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
> [    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
> [    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
> [    0.000000] NR_IRQS:16 nr_irqs:16 16
> [    0.000000] Architected local timer running at 6.14MHz (virt).
> [    0.000000] Switching to timer-based delay loop
> [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
> [    0.000000] Console: colour dummy device 80x30
> [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
> [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
> [    0.000000] ... MAX_LOCK_DEPTH:          48
> [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
> [    0.000000] ... CLASSHASH_SIZE:          4096
> [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
> [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
> [    0.000000] ... CHAINHASH_SIZE:          16384
> [    0.000000]  memory used by lock dependency info: 3695 kB
> [    0.000000]  per task-struct memory footprint: 1152 bytes
> [    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
> [    0.054687] pid_max: default: 32768 minimum: 301
> [    0.054687] Security Framework initialized
> [    0.062500] Mount-cache hash table entries: 512
> [    0.070312] CPU: Testing write buffer coherency: ok
> [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
> [    0.085937] devtmpfs: initialized
> [    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
> [    0.101562] xen:grant_table: Grant tables using version 1 layout.
> [    0.101562] Grant table initialized
> [    0.109375] omap_hwmod: aess: _wait_target_disable failed
> [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
> [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
> [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
> [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
> [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
> [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
> [    0.234375] pinctrl core: initialized pinctrl subsystem
> [    0.242187] regulator-dummy: no parameters
> [    0.242187] NET: Registered protocol family 16
> [    0.250000] Xen: initializing cpu0
> [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
> [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
> [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
> [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
> [    0.281250] OMAP GPIO hardware version 0.1
> [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
> [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
> [    0.296875] OMAP DMA hardware revision 0.0
> [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
> [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
> [    0.335937] bio: create slab <bio-0> at 0
> [    0.343750] xen:balloon: Initialising balloon driver
> [    0.343750] of_get_named_gpio_flags exited with status 80
> [    0.343750] hsusb2_reset: 3300 mV 
> [    0.351562] of_get_named_gpio_flags exited with status 79
> [    0.351562] hsusb3_reset: 3300 mV 
> [    0.351562] SCSI subsystem initialized
> [    0.359375] libata version 3.00 loaded.
> [    0.359375] usbcore: registered new interface driver usbfs
> [    0.367187] usbcore: registered new interface driver hub
> [    0.367187] usbcore: registered new device driver usb
> [    0.375000] Switching to clocksource arch_sys_counter
> [    0.414062] NET: Registered protocol family 2
> [    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
> [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
> [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
> [    0.437500] TCP: reno registered
> [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
> [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
> [    0.453125] NET: Registered protocol family 1
> [    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
> [    0.687500] VFS: Disk quotas dquot_6.5.2
> [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> [    0.703125] msgmni has been set to 372
> [    0.710937] io scheduler noop registered
> [    0.718750] io scheduler deadline registered
> [    0.718750] io scheduler cfq registered (default)
> [    0.726562] xen:xen_evtchn: Event-channel device installed
> [    0.742187] console [hvc0] enabled, bootconsole disabled
> [    0.765625] brd: module loaded
> [    0.781250] loop: module loaded
> [    0.789062] ahci ahci.0.auto: can't get clock
> [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
> [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
> [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
> [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst 
> [    0.820312] scsi0 : ahci_platform
> [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
> [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
> [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
> [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
> [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
> [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
> [    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
> [    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    0.914062] usb usb1: Product: EHCI Host Controller
> [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
> [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
> [    0.937500] hub 1-0:1.0: USB hub found
> [    0.937500] hub 1-0:1.0: 3 ports detected
> [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
> [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
> [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
> [    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
> [    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
> [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
> [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
> [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
> [    1.226562] hub 2-0:1.0: USB hub found
> [    1.226562] hub 2-0:1.0: 3 ports detected
> [    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
> [    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
> [    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> [    1.531250] hub 1-2:1.0: USB hub found
> [    1.531250] hub 1-2:1.0: 3 ports detected
> [    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
> [    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
> [    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> [    1.835937] usbcore: registered new interface driver usbback
> [    1.843750] Initializing USB Mass Storage driver...
> [    1.843750] usbcore: registered new interface driver usb-storage
> [    1.851562] USB Mass Storage support registered.
> [    1.859375] i2c /dev entries driver
> [    1.859375] usbcore: registered new interface driver usbhid
> [    1.867187] usbhid: USB HID core driver
> [    1.875000] TCP: cubic registered
> [    1.875000] Initializing XFRM netlink socket
> [    1.882812] NET: Registered protocol family 17
> [    1.882812] NET: Registered protocol family 15
> [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
> [    1.898437] mux: Failed to setup hwmod io irq -22
> [    1.898437] Power Management for TI OMAP4PLUS devices.
> [    1.906250] ThumbEE CPU extension supported.
> [    1.914062] Registering SWP/SWPB emulation handler
> [    1.921875] devtmpfs: mounted
> [    1.968750] Freeing init memory: 57752K
> # ./vusb-start.sh 1 0
> [    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9, event-channel 3
> # ./vscsi-start.sh 1 0
> # echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport
> 
> [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
> [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
> [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   40.929687] usb 1-2.1: Product: Mass Storage Device
> [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> 
> DomU log:
> 0.710937] console [hvc0] enabled, bootconsole disabled
> [    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is a OMAP UART0
> [    0.718750] omap_uart 4806c000.serial: did not get pins for uart1 error: -19
> [    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is a OMAP UART1
> [    0.718750] omap_uart 4806e000.serial: did not get pins for uart3 error: -19
> [    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is a OMAP UART3
> [    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is a OMAP UART4
> [    0.726562] omap_uart 48068000.serial: did not get pins for uart5 error: -19
> [    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is a OMAP UART5
> [    0.726562] [drm] Initialized drm 1.1.0 20060810
> [    0.742187] brd: module loaded
> [    0.757812] loop: module loaded
> [    0.757812] omap2_mcspi 48098000.spi: pins are not configured from the driver
> [    0.765625] Initialising Xen virtual ethernet driver.
> [    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.765625] ehci-platform: EHCI generic platform driver
> [    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
> [    0.765625] vusb vusb-0: new USB bus registered, assigned bus number 1
> [    0.765625] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
> [    0.765625] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
> [    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 xen_hcd
> [    0.765625] usb usb1: SerialNumber: vusb-0
> [    0.773437] hub 1-0:1.0: USB hub found
> [    0.773437] hub 1-0:1.0: 8 ports detected
> [    0.773437] Initializing USB Mass Storage driver...
> [    0.773437] usbcore: registered new interface driver usb-storage
> [    0.773437] USB Mass Storage support registered.
> [    0.773437] mousedev: PS/2 mouse device common for all mice
> [    0.781250] usbcore: registered new interface driver usbhid
> [    0.781250] usbhid: USB HID core driver
> [    0.789062] TCP: cubic registered
> [    0.789062] Initializing XFRM netlink socket
> [    0.789062] NET: Registered protocol family 17
> [    0.789062] NET: Registered protocol family 15
> [    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
> [    0.789062] mux: Failed to setup hwmod io irq -22
> [    0.789062] ThumbEE CPU extension supported.
> [    0.789062] Registering SWP/SWPB emulation handler
> [    0.789062] dmm 4e000000.dmm: initialized all PAT entries
> [    0.804687] /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
> [    0.804687] devtmpfs: mounted
> [    0.812500] Freeing init memory: 6044K
> 
> Please press Enter to activate this console.
> [    6.500000] scsi0 : Xen SCSI frontend driver
> 
> / # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
> [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
> [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   40.929687] usb 1-2.1: Product: Mass Storage Device
> [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> [   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
> (XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
> [   32.875000] usb 1-1: New USB device found, idVendor=8564, idProduct=1000
> [   32.875000] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   32.875000] usb 1-1: Product: Mass Storage Device
> [   32.875000] usb 1-1: Manufacturer: JetFlash
> [   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
> [   32.906250] scsi1 : usb-storage 1-1:1.0
> [   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB    1100 PQ: 0 ANSI: 4
> [   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
> [   34.140625] sd 1:0:0:0: [sda] Write Protect is off
> [   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<-- this data may change between boots*
> [   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.195312]  sda: sda1
> [   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk
> 
>  # lsusb 
> Bus 001 Device 002: ID 8564:1000
> Bus 001 Device 001: ID 1d6b:0002
> 
> But it looks like SCSI requests from usb-storage are still passed directly
> to the hardware instead of going through PV SCSI.
> 
> Could somebody tell me how to initialise PV SCSI and PV USB correctly?
> 
> Regards,
> Alexander
> 
> Alexander Savchenko (2):
>   usbback: Add new features
>   HACK: usb:core:hcd: Do not remapping self dma addresses
> 
> Nathanael Rensen (1):
>   pvusb drivers
> 
>  drivers/usb/core/hcd.c                 |    1 +
>  drivers/usb/host/Kconfig               |   23 +
>  drivers/usb/host/Makefile              |    2 +
>  drivers/usb/host/xen-usbback/Makefile  |    3 +
>  drivers/usb/host/xen-usbback/common.h  |  170 ++++
>  drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
>  drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
>  drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
>  drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
>  include/xen/interface/io/usbif.h       |  150 +++
>  10 files changed, 4244 insertions(+)
>  create mode 100644 drivers/usb/host/xen-usbback/Makefile
>  create mode 100644 drivers/usb/host/xen-usbback/common.h
>  create mode 100644 drivers/usb/host/xen-usbback/usbback.c
>  create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
>  create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
>  create mode 100644 drivers/usb/host/xen-usbfront.c
>  create mode 100644 include/xen/interface/io/usbif.h
> 
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 21:24:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ink-0000tU-Jr; Tue, 21 Jan 2014 21:24:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W5inj-0000tK-9Q
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:24:03 +0000
Received: from [85.158.143.35:49816] by server-2.bemta-4.messagelabs.com id
	B3/C5-11386-275EED25; Tue, 21 Jan 2014 21:24:02 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390339440!590332!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18809 invoked from network); 21 Jan 2014 21:24:00 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:24:00 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 93EED81966;
	Tue, 21 Jan 2014 23:23:59 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 79F0236C01F; Tue, 21 Jan 2014 23:23:59 +0200 (EET)
Date: Tue, 21 Jan 2014 23:23:59 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Message-ID: <20140121212359.GJ2924@reaktio.net>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savchenko wrote:
> Hi,
> 
> Could someone advise on the issue I am facing?
> 
> I am trying to run PV USB on omap5uevm (omap5-panda) board.
> 
> I use the latest PV USB drivers from Nathanael's server:
> http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch
>

I think Konrad actually has the Xen PVUSB drivers in his git tree, and it has
some extra patches compared to Nathanael's version (IIRC).

-- Pasi

 
> I applied it to kernel 3.8 (dom0), together with some patches for the USB HCD
> and usbback drivers (attached), and ran it on Xen 4.4.0-rc2.
> 
> I am facing an issue with USB storage:
> the USB storage device is initialised and mounted in domU over the PV USB drivers,
> but I can only copy small files to it (no more than ~100-500 kB).
> Larger copies send the USB storage into what looks like an infinite loop
> (the LEDs on the device keep blinking far longer than the copy should take),
> and after a few seconds dom0 disconnects the USB device.
> 
> Dom0, DomU use k3.8.
> 
> I observed that usb-storage issues some SCSI requests (from domU) which pass
> directly to the hardware; I think this is the problem.
> 
> So, I applied the PV SCSI drivers from
> http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
> to k3.8.
> 
> Then I initialised PV USB and PV SCSI with the vusb-start.sh and vscsi-start.sh
> scripts, respectively, but I am still facing the issue.
> 
> Dom0 log:
> [    0.000000] Booting Linux on physical CPU 0x0
> [    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
> [    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=10c5387d
> [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
> [    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
> [    0.000000] bootconsole [earlycon0] enabled
> [    0.000000] Memory policy: ECC disabled, Data cache writeback
> [    0.000000] On node 0 totalpages: 65280
> [    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map c428e000
> [    0.000000]   Normal zone: 512 pages used for memmap
> [    0.000000]   Normal zone: 0 pages reserved
> [    0.000000]   Normal zone: 64768 pages, LIFO batch:15
> [    0.000000] psci: probing function IDs from device-tree
> [    0.000000] OMAP5432 ES2.0
> [    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> [    0.000000] pcpu-alloc: [0] 0 
> [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
> [    0.000000] Kernel command line: console=hvc0 earlyprintk
> [    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
> [    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
> [    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
> [    0.000000] __ex_table already sorted, skipping sort
> [    0.000000] Memory: 255MB = 255MB total
> [    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K highmem
> [    0.000000] Virtual kernel memory layout:
> [    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
> [    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
> [    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
> [    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
> [    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
> [    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
> [    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
> [    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
> [    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
> [    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
> [    0.000000] NR_IRQS:16 nr_irqs:16 16
> [    0.000000] Architected local timer running at 6.14MHz (virt).
> [    0.000000] Switching to timer-based delay loop
> [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
> [    0.000000] Console: colour dummy device 80x30
> [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
> [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
> [    0.000000] ... MAX_LOCK_DEPTH:          48
> [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
> [    0.000000] ... CLASSHASH_SIZE:          4096
> [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
> [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
> [    0.000000] ... CHAINHASH_SIZE:          16384
> [    0.000000]  memory used by lock dependency info: 3695 kB
> [    0.000000]  per task-struct memory footprint: 1152 bytes
> [    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
> [    0.054687] pid_max: default: 32768 minimum: 301
> [    0.054687] Security Framework initialized
> [    0.062500] Mount-cache hash table entries: 512
> [    0.070312] CPU: Testing write buffer coherency: ok
> [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
> [    0.085937] devtmpfs: initialized
> [    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
> [    0.101562] xen:grant_table: Grant tables using version 1 layout.
> [    0.101562] Grant table initialized
> [    0.109375] omap_hwmod: aess: _wait_target_disable failed
> [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
> [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
> [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
> [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
> [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
> [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
> [    0.234375] pinctrl core: initialized pinctrl subsystem
> [    0.242187] regulator-dummy: no parameters
> [    0.242187] NET: Registered protocol family 16
> [    0.250000] Xen: initializing cpu0
> [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
> [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
> [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
> [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
> [    0.281250] OMAP GPIO hardware version 0.1
> [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
> [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
> [    0.296875] OMAP DMA hardware revision 0.0
> [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
> [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
> [    0.335937] bio: create slab <bio-0> at 0
> [    0.343750] xen:balloon: Initialising balloon driver
> [    0.343750] of_get_named_gpio_flags exited with status 80
> [    0.343750] hsusb2_reset: 3300 mV 
> [    0.351562] of_get_named_gpio_flags exited with status 79
> [    0.351562] hsusb3_reset: 3300 mV 
> [    0.351562] SCSI subsystem initialized
> [    0.359375] libata version 3.00 loaded.
> [    0.359375] usbcore: registered new interface driver usbfs
> [    0.367187] usbcore: registered new interface driver hub
> [    0.367187] usbcore: registered new device driver usb
> [    0.375000] Switching to clocksource arch_sys_counter
> [    0.414062] NET: Registered protocol family 2
> [    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
> [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
> [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
> [    0.437500] TCP: reno registered
> [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
> [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
> [    0.453125] NET: Registered protocol family 1
> [    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
> [    0.687500] VFS: Disk quotas dquot_6.5.2
> [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> [    0.703125] msgmni has been set to 372
> [    0.710937] io scheduler noop registered
> [    0.718750] io scheduler deadline registered
> [    0.718750] io scheduler cfq registered (default)
> [    0.726562] xen:xen_evtchn: Event-channel device installed
> [    0.742187] console [hvc0] enabled, bootconsole disabled
> [    0.765625] brd: module loaded
> [    0.781250] loop: module loaded
> [    0.789062] ahci ahci.0.auto: can't get clock
> [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
> [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
> [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
> [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst 
> [    0.820312] scsi0 : ahci_platform
> [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
> [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
> [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
> [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
> [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
> [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
> [    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
> [    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    0.914062] usb usb1: Product: EHCI Host Controller
> [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
> [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
> [    0.937500] hub 1-0:1.0: USB hub found
> [    0.937500] hub 1-0:1.0: 3 ports detected
> [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
> [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
> [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
> [    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
> [    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
> [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
> [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
> [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
> [    1.226562] hub 2-0:1.0: USB hub found
> [    1.226562] hub 2-0:1.0: 3 ports detected
> [    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
> [    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
> [    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> [    1.531250] hub 1-2:1.0: USB hub found
> [    1.531250] hub 1-2:1.0: 3 ports detected
> [    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
> [    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
> [    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> [    1.835937] usbcore: registered new interface driver usbback
> [    1.843750] Initializing USB Mass Storage driver...
> [    1.843750] usbcore: registered new interface driver usb-storage
> [    1.851562] USB Mass Storage support registered.
> [    1.859375] i2c /dev entries driver
> [    1.859375] usbcore: registered new interface driver usbhid
> [    1.867187] usbhid: USB HID core driver
> [    1.875000] TCP: cubic registered
> [    1.875000] Initializing XFRM netlink socket
> [    1.882812] NET: Registered protocol family 17
> [    1.882812] NET: Registered protocol family 15
> [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
> [    1.898437] mux: Failed to setup hwmod io irq -22
> [    1.898437] Power Management for TI OMAP4PLUS devices.
> [    1.906250] ThumbEE CPU extension supported.
> [    1.914062] Registering SWP/SWPB emulation handler
> [    1.921875] devtmpfs: mounted
> [    1.968750] Freeing init memory: 57752K
> # ./vusb-start.sh 1 0
> [    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9, event-channel 3
> # ./vscsi-start.sh 1 0
> # echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport
> 
> [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
> [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
> [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   40.929687] usb 1-2.1: Product: Mass Storage Device
> [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> 
> DomU log:
> 0.710937] console [hvc0] enabled, bootconsole disabled
> [    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is a OMAP UART0
> [    0.718750] omap_uart 4806c000.serial: did not get pins for uart1 error: -19
> [    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is a OMAP UART1
> [    0.718750] omap_uart 4806e000.serial: did not get pins for uart3 error: -19
> [    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is a OMAP UART3
> [    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is a OMAP UART4
> [    0.726562] omap_uart 48068000.serial: did not get pins for uart5 error: -19
> [    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is a OMAP UART5
> [    0.726562] [drm] Initialized drm 1.1.0 20060810
> [    0.742187] brd: module loaded
> [    0.757812] loop: module loaded
> [    0.757812] omap2_mcspi 48098000.spi: pins are not configured from the driver
> [    0.765625] Initialising Xen virtual ethernet driver.
> [    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [    0.765625] ehci-platform: EHCI generic platform driver
> [    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
> [    0.765625] vusb vusb-0: new USB bus registered, assigned bus number 1
> [    0.765625] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
> [    0.765625] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
> [    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 xen_hcd
> [    0.765625] usb usb1: SerialNumber: vusb-0
> [    0.773437] hub 1-0:1.0: USB hub found
> [    0.773437] hub 1-0:1.0: 8 ports detected
> [    0.773437] Initializing USB Mass Storage driver...
> [    0.773437] usbcore: registered new interface driver usb-storage
> [    0.773437] USB Mass Storage support registered.
> [    0.773437] mousedev: PS/2 mouse device common for all mice
> [    0.781250] usbcore: registered new interface driver usbhid
> [    0.781250] usbhid: USB HID core driver
> [    0.789062] TCP: cubic registered
> [    0.789062] Initializing XFRM netlink socket
> [    0.789062] NET: Registered protocol family 17
> [    0.789062] NET: Registered protocol family 15
> [    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
> [    0.789062] mux: Failed to setup hwmod io irq -22
> [    0.789062] ThumbEE CPU extension supported.
> [    0.789062] Registering SWP/SWPB emulation handler
> [    0.789062] dmm 4e000000.dmm: initialized all PAT entries
> [    0.804687] /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
> [    0.804687] devtmpfs: mounted
> [    0.812500] Freeing init memory: 6044K
> 
> Please press Enter to activate this console.
> [    6.500000] scsi0 : Xen SCSI frontend driver
> 
> / # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
> [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
> [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   40.929687] usb 1-2.1: Product: Mass Storage Device
> [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> [   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
> (XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
> [   32.875000] usb 1-1: New USB device found, idVendor=8564, idProduct=1000
> [   32.875000] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> [   32.875000] usb 1-1: Product: Mass Storage Device
> [   32.875000] usb 1-1: Manufacturer: JetFlash
> [   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
> [   32.906250] scsi1 : usb-storage 1-1:1.0
> [   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB    1100 PQ: 0 ANSI: 4
> [   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
> [   34.140625] sd 1:0:0:0: [sda] Write Protect is off
> [   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<-- this data may change between boots*
> [   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.195312]  sda: sda1
> [   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
> [   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
> [   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk
> 
>  # lsusb 
> Bus 001 Device 002: ID 8564:1000
> Bus 001 Device 001: ID 1d6b:0002
> 
> But it looks like SCSI requests from usb-storage are still passed directly
> to the hardware instead of going through PV SCSI.
> 
> Could somebody tell me how to initialise PV SCSI and PV USB correctly?
> 
> Regards,
> Alexander
> 
> Alexander Savchenko (2):
>   usbback: Add new features
>   HACK: usb:core:hcd: Do not remapping self dma addresses
> 
> Nathanael Rensen (1):
>   pvusb drivers
> 
>  drivers/usb/core/hcd.c                 |    1 +
>  drivers/usb/host/Kconfig               |   23 +
>  drivers/usb/host/Makefile              |    2 +
>  drivers/usb/host/xen-usbback/Makefile  |    3 +
>  drivers/usb/host/xen-usbback/common.h  |  170 ++++
>  drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
>  drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
>  drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
>  drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
>  include/xen/interface/io/usbif.h       |  150 +++
>  10 files changed, 4244 insertions(+)
>  create mode 100644 drivers/usb/host/xen-usbback/Makefile
>  create mode 100644 drivers/usb/host/xen-usbback/common.h
>  create mode 100644 drivers/usb/host/xen-usbback/usbback.c
>  create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
>  create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
>  create mode 100644 drivers/usb/host/xen-usbfront.c
>  create mode 100644 include/xen/interface/io/usbif.h
> 
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 21:31:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ius-0001fK-L1; Tue, 21 Jan 2014 21:31:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5iuq-0001f2-W7
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:31:25 +0000
Received: from [85.158.143.35:31109] by server-3.bemta-4.messagelabs.com id
	BA/11-32360-C27EED25; Tue, 21 Jan 2014 21:31:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390339882!13190870!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25787 invoked from network); 21 Jan 2014 21:31:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:31:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLVKDb010569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:31:20 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLVJLE028424
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:31:19 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLVIFr003943; Tue, 21 Jan 2014 21:31:18 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:31:18 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 93E431BF76C; Tue, 21 Jan 2014 16:31:17 -0500 (EST)
Date: Tue, 21 Jan 2014 16:31:17 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Philip Wernersbach <philip.wernersbach@gmail.com>, daniel.kiper@oracle.com
Message-ID: <20140121213117.GA6003@phenom.dumpdata.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
	<CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 04:19:40PM -0500, Philip Wernersbach wrote:
> On Tue, Jan 21, 2014 at 4:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 20.01.14 at 23:08, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> >> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
> >>
> >> Some machines, such as recent IBM servers, only allow the OS to get the
> >> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
> >
> > This reads as if this was a bug in Xen, which it isn't. Dom0's
> > lack of EFI support when running on top of Xen is the issue.
> 
> It all depends on how you look at it, as there are two ways to fix this.
> Linux currently supports EFI just fine. Xen nukes DOM0's ability to
> access EFI using the current methods, which causes Linux's existing
> EFI support to fail. This would be Xen's fault. If Xen currently
> exposes EFI to DOM0 in some other way that Linux is not aware of, then
> this would be Linux's fault. Either way, we can either choose to fix
> Xen or fix Linux.
> 
> > I think I had indicated my opposition to this sort of hack on v1
> > already; I'm not sure I asked which OSes usable as Dom0 but
> > other than Linux recognize this option. Or which versions of
> > Linux actually do (I'm pretty sure older ones don't).
> >
> > Bottom line - I continue to think that the issue should be fixed
> > in Linux.
> 
> The method that this patch uses is a valid way to fix this. In the
> best case, the DOM0 OS recognizes this option and uses it. In the
> worst case, the DOM0 OS won't recognize the option and will ignore it,
> so we're no worse off.
> 
> I agree with you that this is more of a stopgap than a long-term fix.
> The final solution is to fully expose EFI services to the DOM0 in some
> way. However, getting there will take some time. The reason this patch
> came about is that the company I work for bought new IBM servers in
> the hope of migrating our existing Xen server farm to the new IBM
> servers. But we soon found out that DOM0 couldn't find the ACPI RSDP
> pointer on the new IBM servers, which means that Xen is dead in the
> water on these machines for now. The final solution of exposing EFI to
> the DOM0 is the ultimate goal, but for now businesses need an
> immediate solution. This provides an acceptable solution until DOM0
> EFI services are implemented at a later date. There is no reason why
> this can't be merged now and then removed when DOM0 EFI service
> support arrives.

Could you use the Linux patches that use Xen's EFI services? Ie,
the DOM0 EFI ones?

They were posted on the mailing list some time ago (last year, November-ish?)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 21:31:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5ius-0001fK-L1; Tue, 21 Jan 2014 21:31:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5iuq-0001f2-W7
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:31:25 +0000
Received: from [85.158.143.35:31109] by server-3.bemta-4.messagelabs.com id
	BA/11-32360-C27EED25; Tue, 21 Jan 2014 21:31:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390339882!13190870!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25787 invoked from network); 21 Jan 2014 21:31:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:31:23 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLVKDb010569
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:31:20 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLVJLE028424
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:31:19 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLVIFr003943; Tue, 21 Jan 2014 21:31:18 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:31:18 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 93E431BF76C; Tue, 21 Jan 2014 16:31:17 -0500 (EST)
Date: Tue, 21 Jan 2014 16:31:17 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Philip Wernersbach <philip.wernersbach@gmail.com>, daniel.kiper@oracle.com
Message-ID: <20140121213117.GA6003@phenom.dumpdata.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
	<CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 04:19:40PM -0500, Philip Wernersbach wrote:
> On Tue, Jan 21, 2014 at 4:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>>> On 20.01.14 at 23:08, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> >> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
> >>
> >> Some machines, such as recent IBM servers, only allow the OS to get the
> >> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
> >
> > This reads as if this was a bug in Xen, which it isn't. Dom0's
> > lack of EFI support when running on top of Xen is the issue.
> 
> It all depends on how you look at it, as there are two ways to fix this.
> Linux currently supports EFI just fine. Xen nukes DOM0's ability to
> access EFI using the current methods, which causes Linux's existing
> EFI support to fail. This would be Xen's fault. If Xen currently
> exposes EFI to DOM0 in some other way that Linux is not aware of, then
> this would be Linux's fault. Either way, we can either choose to fix
> Xen or fix Linux.
> 
> > I think I had indicated my opposition to this sort of hack on v1
> > already; I'm not sure I asked which OSes usable as Dom0 but
> > other than Linux recognize this option. Or which versions of
> > Linux actually do (I'm pretty sure older ones don't).
> >
> > Bottom line - I continue to think that the issue should be fixed
> > in Linux.
> 
> The method that this patch uses is a valid way to fix this. In the
> best case, the DOM0 OS recognizes this option and uses it. In the
> worst case, the DOM0 OS won't recognize the option and will ignore it,
> so we're no worse off.
> 
> I agree with you that this is more of a stopgap than a long-term fix.
> The final solution is to fully expose EFI services to the DOM0 in some
> way. However, getting there will take some time. The reason this patch
> came about is that the company I work for bought new IBM servers in
> the hope of migrating our existing Xen server farm to the new IBM
> servers. But we soon found out that DOM0 couldn't find the ACPI RSDP
> pointer on the new IBM servers, which means that Xen is dead in the
> water on these machines for now. The final solution of exposing EFI to
> the DOM0 is the ultimate goal, but for now businesses need an
> immediate solution. This provides an acceptable solution until DOM0
> EFI services are implemented at a later date. There is no reason why
> this can't be merged now and then removed when DOM0 EFI service
> support arrives.

Could you use the Linux patches that use Xen's EFI services? That is,
the DOM0 EFI ones?

They were posted on the mailing list some time ago (last year, November-ish?)
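[Editor's note: as background for the discussion above — on legacy BIOS firmware an OS locates the ACPI RSDP by scanning the EBDA and the 0xE0000-0xFFFFF region for the "RSD PTR " signature with a valid checksum, while on EFI-only machines (like the IBM servers mentioned) the pointer is only published via the EFI configuration table. A minimal sketch of the validation step, per the ACPI spec; this is a hypothetical helper, not code from the patch under discussion:]

```c
#include <stdint.h>
#include <string.h>

/* ACPI 1.0 RSDP layout; the checksum covers these first 20 bytes.
 * (ACPI 2.0+ appends more fields with a separate extended checksum.) */
struct acpi_rsdp {
    char     signature[8];   /* must be "RSD PTR " (note trailing space) */
    uint8_t  checksum;       /* bytes 0..19 must sum to 0 modulo 256 */
    char     oem_id[6];
    uint8_t  revision;
    uint32_t rsdt_address;   /* physical address of the RSDT */
} __attribute__((packed));

/* Return 1 if the candidate looks like a genuine RSDP. */
static int rsdp_valid(const struct acpi_rsdp *r)
{
    const uint8_t *b = (const uint8_t *)r;
    uint8_t sum = 0;

    if (memcmp(r->signature, "RSD PTR ", 8) != 0)
        return 0;
    for (int i = 0; i < 20; i++)
        sum += b[i];
    return sum == 0;
}
```

On a legacy boot the scanner walks 16-byte-aligned candidates through the BIOS area calling a check like this; EFI-only firmware never places the structure there, which is why Dom0 comes up empty unless something (Xen, or the boot protocol) hands it the pointer.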
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 21:39:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5j24-00027a-LU; Tue, 21 Jan 2014 21:38:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5j22-00027Q-NU
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:38:51 +0000
Received: from [85.158.137.68:22451] by server-8.bemta-3.messagelabs.com id
	25/39-31081-AE8EED25; Tue, 21 Jan 2014 21:38:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390340326!6860116!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26227 invoked from network); 21 Jan 2014 21:38:48 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:38:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLcf41014266
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:38:42 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLcega015775
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 21 Jan 2014 21:38:41 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLce8S000667; Tue, 21 Jan 2014 21:38:40 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:38:39 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D2F221BF76C; Tue, 21 Jan 2014 16:38:37 -0500 (EST)
Date: Tue, 21 Jan 2014 16:38:37 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Eric Houby <ehouby@yahoo.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-ID: <20140121213837.GB6089@phenom.dumpdata.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <000001cf1695$0c960450$25c20cf0$@yahoo.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: 'Ian Campbell' <Ian.Campbell@citrix.com>, 'Jan Beulich' <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 03:39:20AM -0700, Eric Houby wrote:
> 
> 
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> > Sent: Tuesday, January 21, 2014 2:46 AM
> > To: Jan Beulich
> > Cc: ehouby@yahoo.com; xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
> > 
> > On Tue, 2014-01-21 at 09:32 +0000, Jan Beulich wrote:
> > > >>> On 20.01.14 at 20:06, Eric Houby <ehouby@yahoo.com> wrote:
> > > > Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> > > > Screen shot of the crash is attached.  Hardware is a Gigabyte
> > > > GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to
> > xen
> > > > command line allows the system to boot as expected.
> > >
> > > For analyzing this we need a full serial log
> > 
> > See http://wiki.xen.org/wiki/Xen_Serial_Console for how to configure such a
> > thing.
> > 
> > >  (with "iommu=debug" in
> > > place on the Xen command line), not just a screen shot.
> > >
> > > Jan
> > >
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> 
> 
> Xen console logs are attached.

Those are in line with what I had reported in December:

http://lists.xen.org/archives/html/xen-devel/2013-12/msg02964.html

Let's add the AMD folks on it.
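[Editor's note: the serial-console setup Jan and Ian ask for above comes down to a few boot parameters. A hypothetical GRUB2 sketch — the baud rate, port, and file paths here are assumptions to adapt to the machine, not values taken from this thread (the log below actually uses com1=38400,8n1,pci):]

```shell
# /etc/default/grub (hypothetical): make Xen log verbosely to COM1,
# and give dom0 its console on Xen's hvc device.
GRUB_CMDLINE_XEN_DEFAULT="com1=115200,8n1 console=com1 loglvl=all guest_loglvl=all iommu=debug,verbose"
GRUB_CMDLINE_LINUX_DEFAULT="console=hvc0 earlyprintk=xen"

# Regenerate the boot config afterwards, e.g. on Fedora:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```

The log can then be captured from a second machine with a null-modem cable and e.g. `screen /dev/ttyS0 115200`.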

> 
> Thanks,
> 
> Eric

> 
> astar login: (XEN) Domain 0 shutdown: rebooting machine.
>  Xen 4.4-rc2-0.rc2.1.fc20
> (XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
> (XEN) Latest ChangeSet:
> (XEN) Bootloader: GRUB 2.00
> (XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose com1=38400,8n1,pci console=com1
> (XEN) Video information:
> (XEN)  VGA is text mode 80x25, font 8x16
> (XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
> (XEN)  EDID info not retrieved because no DDC retrieval method detected
> (XEN) Disc information:
> (XEN)  Found 3 MBR signatures
> (XEN)  Found 3 EDD information structures
> (XEN) Xen-e820 RAM map:
> (XEN)  0000000000000000 - 0000000000097000 (usable)
> (XEN)  000000000009f800 - 00000000000a0000 (reserved)
> (XEN)  00000000000f0000 - 0000000000100000 (reserved)
> (XEN)  0000000000100000 - 00000000afcf0000 (usable)
> (XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
> (XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
> (XEN)  00000000afd00000 - 00000000afe00000 (reserved)
> (XEN)  00000000e0000000 - 00000000f0000000 (reserved)
> (XEN)  00000000fec00000 - 0000000100000000 (reserved)
> (XEN)  0000000100000000 - 0000000250000000 (usable)
> (XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
> (XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
> (XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
> (XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
> (XEN) ACPI: FACS AFCF0000, 0040
> (XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
> (XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
> (XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
> (XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
> (XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
> (XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
> (XEN) System RAM: 8188MB (8385052kB)
> (XEN) No NUMA configuration found
> (XEN) Faking a node at 0000000000000000-0000000250000000
> (XEN) Domain heap initialised
> (XEN) found SMP MP-table at 000f4670
> (XEN) DMI 2.4 present.
> (XEN) Using APIC driver default
> (XEN) ACPI: PM-Timer IO Port: 0x808
> (XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
> (XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
> (XEN) ACPI: Local APIC address 0xfee00000
> (XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
> (XEN) Processor #0 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
> (XEN) Processor #1 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
> (XEN) Processor #2 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
> (XEN) Processor #3 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
> (XEN) Processor #4 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
> (XEN) Processor #5 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
> (XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override.
> (XEN) ACPI: IRQ9 used by override.
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
> (XEN) ERST table was not found
> (XEN) Using ACPI (MADT) for SMP configuration information
> (XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
> (XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
> (XEN) XSM Framework v1.0.0 initialized
> (XEN) Flask:  No policy file found. Disabling Flask.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 2812.987 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) AMD Fam10h machine check reporting enabled
> (XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
> (XEN) PCI: MCFG area at e0000000 reserved in E820
> (XEN) PCI: Using MCFG for segment 0000 bus 00-ff
> (XEN) AMD-Vi: Found MSI capability block at 0x54
> (XEN) AMD-Vi: ACPI Table:
> (XEN) AMD-Vi:  Signature IVRS
> (XEN) AMD-Vi:  Length 0xd8
> (XEN) AMD-Vi:  Revision 0x1
> (XEN) AMD-Vi:  CheckSum 0x6f
> (XEN) AMD-Vi:  OEM_Id AMD
> (XEN) AMD-Vi:  OEM_Table_Id RD890S
> (XEN) AMD-Vi:  OEM_Revision 0x202031
> (XEN) AMD-Vi:  Creator_Id AMD
> (XEN) AMD-Vi:  Creator_Revision 0
> (XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
> (XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
> (XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
> (XEN) AMD-Vi: IOMMU 0 Enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Interrupt remapping enabled
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using new ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... works.
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 64 KiB.
> (XEN) HVM: ASIDs enabled.
> (XEN) SVM: Supported advanced features:
> (XEN)  - Nested Page Tables (NPT)
> (XEN)  - Last Branch Record (LBR) Virtualisation
> (XEN)  - Next-RIP Saved on #VMEXIT
> (XEN)  - Pause-Intercept Filter
> (XEN) HVM: SVM enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> (XEN) HVM: PVH mode not supported on this platform
> (XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) spurious 8259A interrupt: IRQ7.
> (XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) Brought up 6 CPUs
> (XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
> (XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
> (XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
> (XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
> (XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
> (XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
> (XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
> (XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
> (XEN) cr3: 00000000afa98000   cr2: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
> (XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
> (XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
> (XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
> (XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
> (XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
> (XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
> (XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
> (XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
> (XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
> (XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
> (XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
> (XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
> (XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
> (XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d50
> (XEN)    0000000000000005 ffff830000086ef0 0000000800000000 000000010000006e
> (XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
> (XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
> (XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
> (XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
> (XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
> (XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
> (XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> (XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
> (XEN) ACPI: Local APIC address 0xfee00000
> (XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
> (XEN) Processor #0 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
> (XEN) Processor #1 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
> (XEN) Processor #2 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
> (XEN) Processor #3 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
> (XEN) Processor #4 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
> (XEN) Processor #5 0:10 APIC version 16
> (XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
> (XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
> (XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
> (XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override.
> (XEN) ACPI: IRQ9 used by override.
> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
> (XEN) ERST table was not found
> (XEN) Using ACPI (MADT) for SMP configuration information
> (XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
> (XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
> (XEN) XSM Framework v1.0.0 initialized
> (XEN) Flask:  No policy file found. Disabling Flask.
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Detected 2812.987 MHz processor.
> (XEN) Initing memory sharing.
> (XEN) AMD Fam10h machine check reporting enabled
> (XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
> (XEN) PCI: MCFG area at e0000000 reserved in E820
> (XEN) PCI: Using MCFG for segment 0000 bus 00-ff
> (XEN) AMD-Vi: Found MSI capability block at 0x54
> (XEN) AMD-Vi: ACPI Table:
> (XEN) AMD-Vi:  Signature IVRS
> (XEN) AMD-Vi:  Length 0xd8
> (XEN) AMD-Vi:  Revision 0x1
> (XEN) AMD-Vi:  CheckSum 0x6f
> (XEN) AMD-Vi:  OEM_Id AMD
> (XEN) AMD-Vi:  OEM_Table_Id RD890S
> (XEN) AMD-Vi:  OEM_Revision 0x202031
> (XEN) AMD-Vi:  Creator_Id AMD
> (XEN) AMD-Vi:  Creator_Revision 0
> (XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
> (XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
> (XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
> (XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
> (XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
> (XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
> (XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
> (XEN) AMD-Vi: IOMMU 0 Enabled.
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Interrupt remapping enabled
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using new ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... works.
> (XEN) Platform timer is 14.318MHz HPET
> (XEN) Allocated console ring of 64 KiB.
> (XEN) HVM: ASIDs enabled.
> (XEN) SVM: Supported advanced features:
> (XEN)  - Nested Page Tables (NPT)
> (XEN)  - Last Branch Record (LBR) Virtualisation
> (XEN)  - Next-RIP Saved on #VMEXIT
> (XEN)  - Pause-Intercept Filter
> (XEN) HVM: SVM enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> (XEN) HVM: PVH mode not supported on this platform
> (XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> (XEN) spurious 8259A interrupt: IRQ7.
> (XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
> (XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
> (XEN) Brought up 6 CPUs
> (XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
> (XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
> (XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
> (XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
> (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
> (XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
> (XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
> (XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
> (XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
> (XEN) cr3: 00000000afa98000   cr2: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
> (XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
> (XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
> (XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
> (XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
> (XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
> (XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
> (XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
> (XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
> (XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
> (XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
> (XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
> (XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
> (XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
> (XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d50
> (XEN)    0000000000000005 ffff830000086ef0 0000000800000000 000000010000006e
> (XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
> (XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
> (XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
> (XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
> (XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
> (XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
> (XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 




From xen-devel-bounces@lists.xen.org Tue Jan 21 21:54:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jHQ-0002we-KV; Tue, 21 Jan 2014 21:54:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5jHO-0002wU-4C
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 21:54:43 +0000
Received: from [193.109.254.147:37866] by server-12.bemta-14.messagelabs.com
	id 77/86-13681-1ACEED25; Tue, 21 Jan 2014 21:54:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390341277!12338590!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17374 invoked from network); 21 Jan 2014 21:54:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:54:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLsaQO004348
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:54:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLsajM026433
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:54:36 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLsZFD026423; Tue, 21 Jan 2014 21:54:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:54:34 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D60401BF76C; Tue, 21 Jan 2014 16:54:33 -0500 (EST)
Date: Tue, 21 Jan 2014 16:54:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, jbeulich@suse.com
Message-ID: <20140121215433.GA6363@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="rwEMma7ioTxnRzrJ"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] Regression compared to Xen 4.3,
 Xen 4.4-rc2 -  pci_prepare_msix+0xb1/0x12 - BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey,

I haven't yet done any diagnosis to figure out exactly which
PCI device is at fault here. But this is a regression compared
to Xen 4.3, which boots just fine (see logs). The xen-syms
is at: http://darnok.org/xen/xen-syms.gz

I used an identical kernel for Xen 4.3 and it booted nicely.

My next step is to instrument do_physdev_op to figure out which
of the PCI devices is triggering this, but that will have to wait
until later this week.

What I get is this when booting Xen 4.4:


[   15.927480] xen: registering gsi 19 triggering 0 polarity 1
[   15.933039] Already setup the GSI :19
(XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 05:38:00] CPU:    0
(XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0   rcx: 0000000000000000
(XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08   r8:  0000000000000000
(XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005   r14: 0000000000000000
(XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=ffff82d0802cfe08:
(XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d0802cfe68 000000000000001e
(XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078716898 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d08012a25f 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080310f00 ffff82d0802cff18
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000040004 0000000000000246
(XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff8100122a 000000000000e030
(XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070fe2780 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7fd300c7 ffff82d08022231b
(XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f60e0e0 0000000000000000
(XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078623e58 ffff880078716800
(XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000000006 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000000000 ffff880078623e28
(XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff8100142a 000000000000e033
(XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 000000000000e02b 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00] Xen call trace:
(XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x119e
(XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 05:38:00]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] Panic on CPU 0:
(XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
(XEN) [2014-01-22 05:38:00] [error_code=0000]
(XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-xen-4.3.log"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... =1B[06;00Hok
Loading vmlinuz... =1B[01;00Hok
Loading initramfs.cpio.gz... =1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01=
;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=
=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[01;00H=1B[0=
1;00H=1B[01;00H=1B[01;00H=1B[01;00Hok
Loading microcode.bin... ok
 __  __            _  _    _____  ____                   =20
 \ \/ /___ _ __   | || |  |___ / |___ \    _ __  _ __ ___=20
  \  // __/|__| |_) | | |  __/
 /_/\_\___|_| |_|    |_|(_)____(_)_____|  | .__/|_|  \___|
                                          |_|            =20
(XEN) Xen version 4.3.2-pre (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red =
Hat 4.4.4-2)) debug=3Dy Tue Jan 21 14:30:34 EST 2014
(XEN) Latest ChangeSet: Fri Jan 17 16:37:06 2014 +0100 git:7261a3f-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=3D1 dom0_mem=3Dmax:2G iommu=3Ddebug,verb=
ose com1=3D115200,8n1 console=3Dcom1 ucode=3Dscan console_timestamps=3D1 co=
nsole_to_ring conring_size=3D2097152 cpufreq=3Dxen:performance,verbose sync=
_console noreboot loglvl=3Dall guest_loglvl=3Dall dom0_mem_max=3Dmax:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) LASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/000000000000000=
0, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed90000
(XEN) [VT-D]iommu.c:1158: drhd->address =3D fed90000 iommu->reg =3D ffff82c=
3ffd54000
(XEN) [VT-D]iommu.c:1160: cap =3D c0000020660462 ecap =3D f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed91000
(XEN) [VT-D]iommu.c:1158: drhd->address =3D fed91000 iommu->reg =3D ffff82c=
3ffd53000
(XEN) [VT-D]iommu.c:1160: cap =3D d2008020660462 ecap =3D f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657=
fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1ff=
fff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.046 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 ext=
ended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI wit_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=3D0xF0 apic1=3D0 pin1=3D2 apic2=3D-1 pin2=3D-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 03:41:23] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 03:41:23] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-22 03:41:23] mwait-idle: MWAIT substates: 0x42120
(XEN) [2014-01-22 03:41:23] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-22 03:41:23] mwait-idle: lapic_timer_reliable_states 0xfffff=
fff
(XEN) [2014-01-22 03:41:23] VMX: Supported advanced features:
(XEN) [2014-01-22 03:41:23]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 03:41:23]  - APIC TPR shadow
(XEN) [2014-01-22 03:41:23]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 03:41:23]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 03:41:23]  - Virtual NMI
(XEN) [2014-01-22 03:41:23]  - MSR direct-access bitmap
(XEN) [2014-01-22 03:41:23]  - Unrestricted Guest
(XEN) [2014-01-22 03:41:23]  - VMCS shadowing
(XEN) [2014-01-22 03:41:23] HVM: ASIDs enabled.
(XEN) [2014-01-22 03:41:23] HVM: VMX enabled
(XEN) [2014-01-22 03:41:23] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 03:41:23] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 03:41:23] Brought up 8 CPUs
(XEN) [2014-01-22 03:41:23] ACPI sleep modes: S3
(XEN) [2014-01-22 03:41:23] mcheck_poll: Machine check polling timer starte=
d.2 03:41:23] *** LOADING DOMAIN 0 ***
(XEN) [2014-01-22 03:41:23] elf_parse_binary: phdr: paddr=3D0x1000000 memsz=
=3D0xa22000
(XEN) [2014-01-22 03:41:23] elf_parse_binaryz=3D0xc00f0
(XEN) [2014-01-22 03:41:23] elf_parse_binary: phdr: paddr=3D0x1cc1000 memsz=
=3D0x14d80
(XEN) [2014-01-22 03:41:24] elf_parse_binary: phdr: paddr=3D0x1cd6000 memsz=
=3D0x71e000
(XEN) [2014-01-22 03:41:24] elf_parse_binary: memory: 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: GUEST_OS =3D "linux"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: GUEST_VERSION =3D "2.6"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: XEN_VERSION =3D "xen-3.0"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 03:41:24] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 03:41:24]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 03:41:24]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 03:41:24]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 03:41:24]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 03:41:24]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 03:41:24]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 03:41:24]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
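The elf_xen_addr_calc_check output above is internally consistent; a quick sketch (assuming only the simple offset arithmetic the loader reports, with all values copied from the log) to cross-check it:

```python
# Cross-check of the elf_xen_addr_calc_check output above.
# virt_offset = virt_base - elf_paddr_offset; a physical load address
# maps to virt_offset + paddr. All constants are taken from the log.
virt_base        = 0xffffffff80000000
elf_paddr_offset = 0x0
virt_offset = virt_base - elf_paddr_offset

# Dom0 kernel is loaded at paddr 0x1000000..0x23f4000 (see the log).
virt_kstart = virt_offset + 0x1000000
virt_kend   = virt_offset + 0x23f4000

assert virt_kstart == 0xffffffff81000000
assert virt_kend   == 0xffffffff823f4000
```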
(XEN) [2014-01-22 03:41:24] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 03:41:24]  Dom0 alloc.:   000000022c000000->0000000230000000 (487894 pages to be allocated)
(XEN) [2014-01-22 03:41:24]  Init. ramdisk: 000000023af5d000->000000023fd86d75
(XEN) [2014-01-22 03:41:24] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 03:41:24]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 03:41:24]  Init. ramdisk: ffffffff823f4000->ffffffff8721dd75
(XEN) [2014-01-22 03:41:24]  Phys-Mach map: ffffffff8721e000->ffffffff8761e000
(XEN) [2014-01-22 03:41:24]  Start info:    ffffffff8761e000->ffffffff8761e4b4
(XEN) [2014-01-22 03:41:24]  Page tables:   ffffffff8761f000->ffffffff8765e000
(XEN) [2014-01-22 03:41:24]  Boot stack:    ffffffff8765e000->ffffffff8765f000
(XEN) [2014-01-22 03:41:24]  TOTAL:         ffffffff80000000->ffffffff87800000
(XEN) [2014-01-22 03:41:24]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 03:41:24] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 03:41:24] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:02.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:09:00.0
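As an aside for anyone grepping such logs: the VT-d lines above record every device context-mapped into dom0 in segment:bus:device.function notation, with PCIe endpoints logged separately from legacy PCI ones. A throwaway sketch (regex and function names are mine, not from any tool in this thread) that tallies the two kinds of map entries:

```python
import re
from collections import Counter

# Count "d0:PCI: map" vs "d0:PCIe: map" entries in VT-d iommu log lines.
# The BDF is segment:bus:device.function, e.g. 0000:00:1f.2.
MAP = re.compile(r"d0:(PCIe?): map (\d{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d)")

def count_maps(log: str) -> Counter:
    return Counter(m.group(1) for m in MAP.finditer(log))

sample = """
[VT-D]iommu.c:1456: d0:PCI: map 0000:00:00.0
[VT-D]iommu.c:1444: d0:PCIe: map 0000:00:03.0
[VT-D]iommu.c:1456: d0:PCI: map 0000:00:14.0
"""
print(count_maps(sample))  # Counter({'PCI': 2, 'PCIe': 1})
```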
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:755: iommu_enable_translation: iommu->reg = ffff82c3ffd54000
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:755: iommu_enable_translation: iommu->reg = ffff82c3ffd53000
(XEN) [2014-01-22 03:41:25] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 03:41:25] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 03:41:25] Std. Loglevel: All
(XEN) [2014-01-22 03:41:25] Guest Loglevel: All
(XEN) [2014-01-22 03:41:25] **********************************************
(XEN) [2014-01-22 03:41:25] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 03:41:25] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 03:41:25] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 03:41:25] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 03:41:25] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 03:41:25] **********************************************
(XEN) [2014-01-22 03:41:25] 3... 2... 1...
(XEN) [2014-01-22 03:41:28] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-22 03:41:28] Freed 260kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xx:xx:xx)
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
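For what it's worth, the Xen-provided e820 above can be tallied mechanically; a small sketch (the regex and helper name are illustrative, not from any tool in this thread) that sums the `usable` ranges from such `Xen: [mem ...]` lines. On this map the two usable ranges add up to exactly 2 GiB:

```python
import re

# Parse lines like "Xen: [mem 0x...-0x...] usable" and sum usable bytes.
LINE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log: str) -> int:
    total = 0
    for m in LINE.finditer(log):
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        if kind == "usable":
            total += end - start + 1  # the printed ranges are inclusive
    return total

log = """
Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
"""
print(usable_bytes(log) // (1024 * 1024))  # 2048 (MiB)
```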
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x0721dfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.3.2-pre (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.662076] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.662077] Policy zone: DMA32
[    5.662078] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xx:xx:xx)
[    5.662382] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.662412] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.682547] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.685643] Memory: 1894852K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 202296K reserved)
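As a sanity check on the figures in that Memory line (a trivial sketch, assuming the usual 4 KiB page size on x86-64): the 2097148K total is exactly the 524287 total pages reported earlier.

```python
# The kernel's "Memory: .../2097148K" denominator should equal the
# "On node 0 totalpages: 524287" count from earlier in the log,
# assuming 4 KiB pages (the x86-64 base page size).
totalpages = 524287        # from the "On node 0 totalpages" line
page_kib = 4
assert totalpages * page_kib == 2097148  # KiB, matches the Memory: line
```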
[    5.685868] Hierarchical RCU implementation.
[    5.685868] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.685869] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.685877] NR_IRQS:33024 nr_irqs:256 16
[    5.685955] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.685957] xen: registering gsi 9 triggering 0 polarity 0
[    5.685967] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.685988] xen: acpi sci 9
[    5.685992] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.685994] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.685997] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.686000] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.686002] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.686005] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.686007] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.686009] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.686012] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.686015] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.686017] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.686019] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.686022] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.686024] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.687586] Console: colour VGA+ 80x25
[    6.638682] console [hvc0] enabled
[    6.642617] Xen: using vcpuop timer interface
[    6.646958] installing Xen timer for CPU 0
[    6.651141] tsc: Detected 3400.046 MHz processor
[    6.655825] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.09 BogoMIPS (lpj=3400046)
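The BogoMIPS figure on that line follows directly from the printed lpj value; a one-liner check (assuming HZ=1000, which is a kernel config choice, not stated in this log):

```python
# The kernel derives BogoMIPS from loops_per_jiffy:
#   bogomips = lpj * HZ / 500000
# HZ=1000 is assumed here; it is a build-time config option.
lpj = 3400046          # from the log line above
HZ = 1000
bogomips = lpj * HZ / 500000
assert f"{bogomips:.2f}" == "6800.09"  # matches the printed value
```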
[    6.666458] pid_max: default: 32768 minimum: 301
[    6.671290] Security Framework initialized
[    6.675378] SELinux:  Initializing.
[    6.678952] SELinux:  Starting in permissive mode
[    6.684017] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.691456] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.698616] Mount-cache hash table entries: 256
[    6.703577] Initializing cgroup subsys freezer
[    6.708076] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.708076] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.721179] CPU: Physical Processor ID: 0
[    6.725254] CPU: Processor Core ID: 0
[    6.729676] mce: CPU supports 2 MCE banks
[    6.733687] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.733687] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.733687] tlb_flushall_shift: 6
[    6.770682] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.779324] ACPI: Core revision 20131115
[    6.832338] ACPI: All ACPI Tables successfully acquired
[    6.839105] cpu 0 spinlock event irq 41
[    6.842977] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.861451] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.867089] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.874535] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.880947] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.889181] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.894814] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.902267] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.907986] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.915527] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.920726] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.929582] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.936861] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.943274] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.951508] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.956975] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.964091] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.968887] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.975446] calling  init_workqueues+0x0/0x59a @ 1
[    6.980452] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.987050] calling  migration_init+0x0/0x71 @ 1
[    6.991730] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    6.998229] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    7.003428] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    7.010451] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    7.016341] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    7.024054] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    7.029292] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    7.036278] calling  cpu_stop_init+0x0/0x76 @ 1
[    7.040894] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    7.047282] calling  relay_init+0x0/0x14 @ 1
[    7.051615] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    7.057767] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    7.063075] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    7.070160] calling  init_events+0x0/0x61 @ 1
[    7.074580] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    7.080821] calling  init_trace_printk+0x0/0x12 @ 1
[    7.085760] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    7.092521] calling  event_trace_memsetup+0x0/0x52 @ 1
[    7.097741] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    7.104742] calling  jump_label_init_module+0x0/0x12 @ 1
[    7.110115] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    7.117308] calling  balloon_clear+0x0/0x4f @ 1
[    7.121901] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    7.128315] calling  rand_initialize+0x0/0x30 @ 1
[    7.133101] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    7.139667] calling  mce_amd_init+0x0/0x165 @ 1
[    7.144259] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.150698] x86: Booted up 1 node, 1 CPUs
[    7.155443] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.162084] devtmpfs: initialized
[    7.167963] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.172306] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.178544] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.183571] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.190418] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.197094] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.205584] calling  net_ns_init+0x0/0x104 @ 1
[    7.210148] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.216431] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.221619] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.229513] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.237673] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.244879] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.249298] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.255538] calling  reboot_init+0x0/0x1d @ 1
[    7.259961] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.266199] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.271051] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.277726] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.283271] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.290638] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.295491] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.302165] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.306858] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.313606] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.320133] calling  ksysfs_init+0x0/0x94 @ 1
[    7.324595] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.330791] calling  pm_init+0x0/0x4e @ 1
[    7.334902] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.340756] calling  pm_disk_init+0x0/0x19 @ 1
[    7.345278] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.351591] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.356617] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.363465] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.369010] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.376376] calling  cgroup_wq_init+0x0/0x32 @ 1
[    7.381061] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    7.387555] calling  event_trace_enable+0x0/0x173 @ 1
[    7.393145] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.400004] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.404595] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.411009] calling  fsnotify_init+0x0/0x26 @ 1
[    7.415603] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.422013] calling  filelock_init+0x0/0x84 @ 1
[    7.426619] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.433021] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.437874] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.444547] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.449575] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.456421] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.461188] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.467774] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.473147] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.480340] calling  debugfs_init+0x0/0x5c @ 1
[    7.484856] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.491174] calling  securityfs_init+0x0/0x53 @ 1
[    7.495949] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.502526] calling  prandom_init+0x0/0xe2 @ 1
[    7.507034] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.513362] calling  virtio_init+0x0/0x30 @ 1
[    7.517881] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.524047] calling  __gnttab_init+0x0/0x30 @ 1
[    7.528641] xen:grant_table: Grant tables using version 2 layout
[    7.534724] Grant table initialized
[    7.538258] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.544933] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.549986] RTC time:  3:41:29, date: 01/22/14
[    7.554465] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.561486] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.566425] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.573358] calling  cpuidle_init+0x0/0x40 @ 1
[    7.577865] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.584364] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.589306] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.596065] calling  sock_init+0x0/0x8b @ 1
[    7.600412] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.606403] calling  net_inuse_init+0x0/0x26 @ 1
[    7.611085] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.617583] calling  netpoll_init+0x0/0x31 @ 1
[    7.622089] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.628417] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.633570] NET: Registered protocol family 16
[    7.638060] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.645155] calling  bdi_class_init+0x0/0x4d @ 1
[    7.649940] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.656371] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.661497] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.668412] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.673417] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.680112] calling  pci_driver_init+0x0/0x12 @ 1
[    7.684974] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.691484] calling  backlight_class_init+0x0/0x85 @ 1
[    7.696743] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.703707] calling  video_output_class_init+0x0/0x19 @ 1
[    7.709229] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.716443] calling  xenbus_init+0x0/0x26f @ 1
[    7.721041] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.727297] calling  tty_class_init+0x0/0x38 @ 1
[    7.732043] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.738475] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.743844] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.750792] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.756603] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.764225] calling  register_node_type+0x0/0x34 @ 1
[    7.769381] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.776157] calling  i2c_init+0x0/0x70 @ 1
[    7.780483] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.786392] calling  init_ladder+0x0/0x12 @ 1
[    7.790811] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.797223] calling  init_menu+0x0/0x12 @ 1
[    7.801471] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.807711] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.812737] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.819595] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.825149] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.832496] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.837639] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.844544] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.849050] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.855549] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.860319] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.866903] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.872103] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.879123] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.883715] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.891342] ACPI: bus type PCI registered
[    7.895415] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.901915] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.908589] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.913217] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.919916] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.926402] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.931804] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.938981] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.944530] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.951894] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.957530] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.964980] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.970418] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.976766] calling  dmi_id_init+0x0/0x31d @ 1
[    7.981518] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.987773] calling  dca_init+0x0/0x20 @ 1
[    7.991932] dca service started, version 1.12.1
[    7.996585] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.002681] calling  iommu_init+0x0/0x58 @ 1
[    8.007023] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.013167] calling  pci_arch_init+0x0/0x69 @ 1
[    8.017775] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.027119] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.041684] PCI: Using configuration type 1 for base access
[    8.047245] initcall pci_arch_init+0x0/0x69 returned 0 afterpology_init+0x0/0x98 @ 1
[    8.058931] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.065293] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.070385] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.077318] calling  init_vdso+0x0/0x135 @ 1
[    8.081652] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.087802] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.092571] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.099157] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.120226] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.127264] calling  pm_sysrq_init+0x0/0x19ed 0 after 0 usecs
[    8.138272] calling  default_bdi_init+0x0/0x65 @ 1
[    8.143430] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.150034] calling  init_bio+0x0/0xe9 @ 1
[    8.154241] bio: create slab <bio-0> at 0
[    8.158312] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.164418] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    8.170157] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    8.177676] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.182356] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.188857] calling  blk_settings_init+0x0/0x2c @ 1
[    8.193795] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.200555] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.205072] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.211387] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.216240] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.222914] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.227766] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.234440] calling  blk_mq_init+0x0/0x5f @ 1
[    8.238859] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.245100] calling  genhd_device_init+0x0/0x85 @ 1
[    8.250166] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.256857] calling  pci_slot_init+0x0/0x50 @ 1
[    8.261454] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.267859] calling  fbmem_init+0x0/0x98 @ 1
[    8.272261] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.278347] calling  acpi_init+0x0/0x27a @ 1
[    8.282705] ACPI: Added _OSI(Module Device)
[    8.286926] ACPI: Added _OSI(Processor Device)
[    8.291432] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.296198] ACPI: Added _OSI(Processor Aggregator Device)
[    8.305411] ACPI: Executed 1 blocks of module-level executable AML code
[    8.337352] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.345202] \_SB_:_OSC invalid UUID
[    8.348684] _O
[    8.377464] ACPI: Interpreter enabled
[    8.381132] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.390396] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.399674] ACPI: (supports S0 S1 S4 S5)
[    8.403645] ACPI: Using IOAPIC for interrupt routing
[    8.409046] HEST: Table parsing has been initialized.
[    8.414096] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.424473] ACPI: No dock devices found.
[    8.525664] ACPI: Power Resource [FN00] (off)
[    8.530812] ACPI: Power Resource [FN01] (off)
[    8.535968] ACPI: Power Resource [FN02] (off)
[    8.541098] ACPI: Power Resource [FN03] (off)
[    8.546242] ACPI: Power Resource [FN04] (off)
[    8.555962] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.562140] acpi PNP0A08:00: _OSC: OS supports [Extenatform does not support [PCIeHotplug PME]
[    8.581948] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.595479] PCI host bridge to bus 0000:00
[    8.599564] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.605111] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[    8.611350] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    8.617590] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.624523] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.631457] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.638389] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.645323] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.652257] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.659202] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:00.0
[    8.670770] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.676928] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.683540] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:01.0
[    8.694374] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.700437] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:01.1
[    8.712097] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.718105] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.724942] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.732217] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:02.0
[    8.743372] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.749392] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:03.0
[    8.761807] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.767861] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.774787] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.781010] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:14.0
[    8.791861] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.797902] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.804838] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:16.0
[    8.816479] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.822519] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.828816] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.835142] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.840900] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.847379] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:19.0
[    8.858231] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.864265] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.870718] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.877293] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1a.0
[    8.888152] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.894186] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.901147] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.907631] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1b.0
[    8.918468] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    8.924630] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    8.931135] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.0
[    8.941991] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    8.948147] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    8.954639] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.3
[    8.965489] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    8.971650] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    8.978143] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.5
[    8.988994] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    8.995154] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.001645] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1c.6
[    9.012489] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.018658] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.025148] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1c.7
[    9.036008] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.042048] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    9.048502] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.055076] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1d.0
[    9.065928] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.0
[    9.077609] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.083646] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.089252] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.094883] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.100517] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.106150] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.111785] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    9.118191] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.2
[    9.128951] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.134983] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    9.141836] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.3
[    9.152963] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.158999] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.6
[    9.171699] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.182484] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    9.188533] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    9.194168] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    9.201013] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    9.207863] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    9.214662] pci 0000:01:00.0: supports D1 D2
[    9.219052] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:01:00.0
[    9.232010] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.237229] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    9.243381] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    9.250230] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.257099] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.267890] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.273937] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.280258] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.286584] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.292215] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.298563] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.305355] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.311481] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.318396] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:02:00.0
[    9.330599] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.336606] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.342925] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.349251] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.354886] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.361232] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.368022] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.374148] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.381065] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:02:00.1
[    9.395356] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.400572] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.406722] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.413570] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.420604] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.431426] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.437468] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.443784] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.450110] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.455827] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.462654] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.468880] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:04:00.0
[    9.479791] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.485823] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.492135] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.498461] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.504177] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.511003] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1444: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:04:00.1
[    9.531024] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.536247] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.542398] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.549249] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.556277] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.567130] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.573163] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.579499] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.585113] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.591610] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.597838] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:05:00.0
[    9.610835] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.616057] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.622208] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.629058] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.636130] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.646945] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.653181] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.659950] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:06:00.0
[    9.670841] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.676070] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.682931] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.691421] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.697606] pci 0000:07:01.0: supports D1 D2
[    9.701867] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 03:41:32] PCI add device 0000:07:01.0
[    9.713683] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.719717] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.726024] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.732507] pci 0000:07:03.0: supports D1 D2
[    9.736766] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:07:03.0
[    9.754672] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.761728] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.768571] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.777137] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.785805] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.794384] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.802967] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.811365] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.817413] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:08.0
[    9.836070] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.842122] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:08.1
[    9.860805] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.866860] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:09.0
[    9.885525] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.891573] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:09.1
[    9.910257] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.916310] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0a.0
[    9.935024] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.941074] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0a.1
[    9.959756] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.965809] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0b.0
[    9.984474] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.990522] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 03:41:33] PCI add device 0000:08:0b.1
[   10.009235] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.014464] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   10.021304] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.027976] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.034645] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.041688] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.052580] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.058683] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[   10.065844] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.072135] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:33] PCI add device 0000:09:00.0
[   10.085236] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.090456] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   10.097299] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.104329] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.115125] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.121172] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[   10.126800] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[   10.132434] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[   10.138067] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[   10.143700] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[   10.149334] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[   10.155867] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:33] [VT-D]iommu.c:1444: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 03:41:33] PCI add device 0000:0a:00.0
[   10.175360] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.180578] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   10.186727] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   10.193576] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.200339] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.212101] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.219411] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.226715] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.234019] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.241327] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.248631] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.257067] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.264372] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.272791] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.277584] ACPI: \_SB_.PCI0: notify handler is installed
[   10.283068] Found 1 acpi root devices
[   10.286868] initcall acpi_init+0x0/0x27a returned 0 after 453125 usecs
[   10.293383] calling  pnp_init+0x0/0x12 @ 1
[   10.297636] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.303539] calling  balloon_init+0x0/0x242 @ 1
[   10.308131] xen:balloon: Initialising balloon driver
[   10.313159] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.319746] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.325292] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.332657] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.338384] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.345763] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.351599] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.359066] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.364082] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.370766] calling  balloon_init+0x0/0xfa @ 1
[   10.375270] xen_balloon: Initialising balloon driver
[   10.380684] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.387112] calling  misc_init+0x0/0xba @ 1
[   10.391429] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.397425] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.402675] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.410748] vgaarb: loaded
[   10.413517] vgaarb: bridge control possible 0000:00:02.0
[   10.418892] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.426085] calling  cn_init+0x0/0xc0 @ 1
[   10.430175] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.436050] calling  dma_buf_init+0x0/0x75 @ 1
[   10.440567] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.446884] calling  phy_init+0x0/0x2e @ 1
[   10.451269] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.457181] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.461915] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.468360] calling  usb_init+0x0/0x169 @ 1
[   10.472618] ACPI: bus type USB registered
[   10.476876] usbcore: registered new interface driver usbfs
[   10.482447] usbcore: registered new interface driver hub
[   10.487842] usbcore: registered new device driver usb
[   10.492890] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.499214] calling  serio_init+0x0/0x31 @ 1
[   10.503663] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.509743] calling  input_init+0x0/0x103 @ 1
[   10.514233] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.520406] calling  rtc_init+0x0/0x5b @ 1
[   10.524635] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.530545] calling  pps_init+0x0/0xb7 @ 1
[   10.534766] pps_core: LinuxPPS API ver. 1 registered
[   10.539730] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.548916] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.555154] calling  ptp_init+0x0/0xa4 @ 1
[   10.559373] PTP clock support registered
[   10.563303] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.569454] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.574973] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.582197] calling  hwmon_init+0x0/0xe3 @ 1
[   10.586591] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.592683] calling  leds_init+0x0/0x40 @ 1
[   10.596987] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.602997] calling  efisubsys_init+0x0/0x142 @ 1
[   10.607763] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.614347] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.619112] PCI: Using ACPI for IRQ routing
[   10.626785] PCI: pci_cache_line_size set to 64 bytes
[   10.631945] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.661340] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.666563] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.672909] calling  neigh_init+0x0/0x80 @ 1
[   10.677240] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.683393] calling  fib_rules_init+0x0/0xaf @ 1
[   10.688073] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.694572] calling  pktsched_init+0x0/0x10a @ 1
[   10.699258] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.705752] calling  tc_filter_init+0x0/0x55 @ 1
[   10.710433] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.716932] calling  tc_action_init+0x0/0x55 @ 1
[   10.721612] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.728111] calling  genl_init+0x0/0x85 @ 1
[   10.732375] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.738425] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.743021] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.749431] calling  netlbl_init+0x0/0x81 @ 1
[   10.753851] NetLabel: Initializing
[   10.757319] NetLabel:  domain hash size = 128
[   10.761738] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.766802] NetLabel:  unlabeled traffic allowed by default
[   10.772399] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.778899] calling  rfkill_init+0x0/0x79 @ 1
[   10.783491] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.789658] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.794249] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.800677] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.805443] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.812014] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.817348] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.824406] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.829524] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.836454] calling  hpet_late_init+0x0/0x101 @ 1
[   10.841218] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.847980] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.852489] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.858813] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.864367] Switched to clocksource xen
[   10.868266] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.875890] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.881375] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 281 usecs
[   10.888496] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.894828] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.902969] calling  event_trace_init+0x0/0x205 @ 1
[   10.922252] initcall event_trace_init+0x0/0x205 returned 0 after 14003 usecs
[   10.929281] calling  init_kprobe_trace+0x0/nit_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.941089] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.945634] initcall init_pipe_fs+0x0/0x4c returned 0 after 44 usecs
[   10.952002] calling  eventpoll_init+0x0/0xda @ 1
[   10.956708] initcall eventpoll_init+0x0/0xda returned 0 after 25 usecs
[   10.963268] calling  anon_inode_init+0x0/0x5b @ 1
[   10.968073] initcall anon_inode_init+0x0/0x5b returned 0 after 36 usecs
[   10.974709] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   10.979908] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   10.986929] calling  acpi_event_init+0x0/0x3a @ 1
[   10.991710] initcall acpi_event_init+0x0/0x3a returned 0 after 16 usecs
[   10.998368] calling  pnp_system_init+0x0/0x12 @ 1
[   11.003230] initcall pnp_system_init+0x0/0x12 returned 0 after 91 usecs
[   11.009837] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.014330] pnp: PnP ACPI init
[   11.017473] ACPI: bus type PNP registered
[   11.021848] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.028453] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.035344] pnp 00:01: [dma 4]
[   11.038557] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.045244] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.052293] kworker/u2:0 (512) used greatest stack depth: 5560 bytes left
[   11.059077] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.066679] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.072596] system 00:04: [io  0xffff] has been reserved
[   11.077967] system 00:04: [io  0xffff] has been reserved
[   11.083341] system 00:04: [io  0xffff] has been reserved
[   11.088714] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.094693] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.100673] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.106652] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.112633] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.118612] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.124940] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.130914] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.137793] xen: registering gsi 8 triggering 1 polarity 0
[   11.143530] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.150378] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.156284] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.164645] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.170561] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.176535] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.184757] xen: registering gsi 4 triggering 1 polarity 0
[   11.190229] Already setup the GSI :4
[   11.193872] pnp 00:08: [dma 0 disabled]
[   11.197974] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.205712] xen: registering gsi 3 triggering 1 polarity 0
[   11.211206] pnp 00:09: [dma 0 disabled]
[   11.215314] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.222148] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.228064] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.234939] xen: registering gsi 13 triggering 1 polarity 0
[   11.240723] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.250370] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.256980] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.263650] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.270322] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.276993] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.283667] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.290340] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.297013] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.303687] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.310360] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.317033] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.323708] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.330376] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.339280] pnp: PnP ACPI: found 13 devices
[   11.343452] ACPI: bus type PNP unregistered
[   11.347701] initcall pnpacpi_init+0x0/0x8c returned 0 after 325555 usecs
[   11.354458] calling  pcistub_init+0x0/0x29f @ 1
[   11.359052] xen_pciback: Error parsing pci_devs_to_hide at "(xx:xx:xx)"
[   11.365726] initcall pcistub_init+0x0/0x29f returned -22 after 6517 usecs
[   11.372572] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.386177] initcall chr_dev_init+0x0/0xc6 returned 0 after 8886 usecs
[   11.392694] calling  firmware_class_init+0x0/0xec @ 1
[   11.397906] initcall firmware_class_init+0x0/0xec returned 0 after 101 usecs
[   11.404944] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.409844] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 131 usecs
[   11.416533] calling  thermal_init+0x0/0x8b @ 1
[   11.421111] initcall thermal_init+0x0/0x8b returned 0 after 72 usecs
[   11.427449] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.433342] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.441230] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.449916] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.456344] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9343 usecs
[   11.464144] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.469796] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.474753] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.480906] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.487767] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.494696] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.501628] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.508560] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.515495] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.522426] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.529359] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.536294] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.543226] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.550159] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.557091] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.564018] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.571414] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.578316] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.585785] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.592704] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.600084] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.607002] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.614462] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.619741] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.625897] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.632748] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.637770] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.643927] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.650781] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.655796] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.661954] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.668808] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.673830] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.680700] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.685962] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.692816] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.698094] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.704946] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.709967] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.716820] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.721835] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.727992] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.734847] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.740468] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.746099] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.752426] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.758753] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.765080] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.771406] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.777819] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.784233] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.789866] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.796194] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.801826] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.808152] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.813785] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.820112] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.825746] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.832073] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.838398] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.844724] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.851052] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.857378] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.863705] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.869338] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.875666] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396460 usecs
[   11.883464] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.888332] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.895078] calling  inet_init+0x0/0x296 @ 1
[   11.899482] NET: Registered protocol family 2
[   11.904145] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.911398] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.918043] TCP: Hash tables configured (established 16384 bind 16384)
[   11.924635] TCP: reno registered
[   11.927919] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   11.933986] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   11.940610] initcall inet_init+0x0/0x296 returned 0 after 40232 usecs
[   11.947041] calling  ipv4_offload_init+0x0/0x61 @ 1
[   11.951980] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   11.958740] calling  af_unix_init+0x0/0x55 @ 1
[   11.963256] NET: Registered protocol family 1
[   11.967679] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs
[   11.974253] calling  ipv6_offload_init+0x0/0x7f @ 1
[   11.979194] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   11.985954] calling  init_sunrpc+0x0/0x69 @ 1
[   11.990566] RPC: Registered named UNIX socket transport module.
[   11.996479] RPC: Registered udp transport module.
[   12.001242] RPC: Registered tcp transport module.
[   12.006007] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.012507] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.019094] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.024561] pci 0000:00:02.0: Boot video device
[   12.029647] xen: registering gsi 16 triggering 0 polarity 1
[   12.035219] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.039859] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.047598] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.055504] xen: registering gsi 16 triggering 0 polarity 1
[   12.061067] Already setup the GSI :16
[   12.081398] xen: registering gsi 23 triggering 0 polarity 1
[   12.086978] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.108626] xen: registering gsi 18 triggering 0 polarity 1
[   12.114213] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.118] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 105455 usecs
[   12.140241] calling  populate_rootfs+0x0/0x112 @ 1
[   12.145228] Unpacking initramfs...
[   13.211269] Freeing initrd memory: 80040K (ffff8800023f4000 - ffff88000721e000)
[   13.218579] initcall populate_rootfs+0x0/0x112 returned 0 after 1048322 usecs
[   13.225760] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.230441] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.236941] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.242573] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.250216] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.256180] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.263979] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.268745] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.275333] calling  vsyscall_init+0x0/0x27 @ 1
[   13.279930] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.286339] calling  sbf_init+0x0/0xf6 @ 1
[   13.290500] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.296479] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.301679] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 0 usecs
[   13.308699] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.313208] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.319532] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.324299] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.330886] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.335981] initcall cache_sysfs_init+0x0/0x65 returned 0 after 236 usecs
[   13.342752] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.347603] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.354450] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.359476] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.366496] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.371522] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.378541] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.383237] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.393797] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10327 usecs
[   13.400646] calling  inject_init+0x0/0x30 @ 1
[   13.405062] Machine check injector initialized
[   13.409569] initcall inject_init+0x0/0x30 returned 0 after 4400 usecs
[   13.416069] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.421961] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.429674] calling  microcode_init+0x0/0x1b1 @ 1
[   13.434628] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.440747] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.449521] initcall microcode_init+0x0/0x1b1 returned 0 after 14723 usecs
[   13.456451] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.461040] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.467627] calling  msr_init+0x0/0x162 @ 1
[   13.472093] initcall msr_init+0x0/0x162 returned 0 after 213 usecs
[   13.478265] calling  cpuid_init+0x0/0x162 @ 1
[   13.482879] initcall cpuid_init+0x0/0x162 returned 0 after 193 usecs
[   13.489216] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.493982] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.500569] calling  add_pcspkr+0x0/0x40 @ 1
[   13.505002] initcall add_pcspkr+0x0/0x40 returned 0 after 99 usecs
[   13.511172] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.517668] Scanning for low memory corruption every 60 seconds
[   13.523648] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5839 usecs
[   13.532226] calling  sysfb_init+0x0/0x9c @ 1
[   13.536666] initcall sysfb_init+0x0/0x9c returned 0 after 103 usecs
[   13.542926] calling  audit_classes_init+0x0/0xaf @ 1
[   13.547963] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.554884] calling  pt_dump_init+0x0/0x30 @ 1
[   13.559398] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.565717] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.570577] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.577243] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.582536] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.589635] calling  ioresources_init+0x0/0x3c @ 1
[   13.594494] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.601161] calling  uid_cache_init+0x0/0x85 @ 1
[   13.605856] initcall uid_cache_init+0x0/0x85 returned 0 after 15 usecs
[   13.612428] calling  init_posix_timers+0x0/0x240 @ 1
[   13.617471] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.624387] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.629674] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.636780] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.641897] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.648827] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.654146] initcall snapshot_device_init+0x0/0x12 returned 0 after 116 usecs
[   13.661264] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.666029] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.672617] calling  create_proc_profile+0x0/0x300 @ 1
[   13.677816] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.684836] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.690037] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.697056] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.702643] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 207 usecs
[   13.709939] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.715315] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.722502] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.727563] initcall alarmtimer_init+0x0/0x15f returned 0 after 203 usecs
[   13.734341] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.740008] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 287 usecs
[   13.747309] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.752337] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.759179] calling  futex_init+0x0/0xf6 @ 1
[   13.763528] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.769668] initcall futex_init+0x0/0xf6 returned 0 after 6010 usecs
[   13.776079] calling  proc_dma_init+0x0/0x22 @ 1
[   13.780673] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.787084] calling  proc_modules_init+0x0/0x22 @ 1
[   13.792027] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.798785] calling  kallsyms_init+0x0/0x25 @ 1
[   13.803380] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.809790] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.815606] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.823310] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.828773] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.836050] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.841176] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.848183] calling  ikconfig_init+0x0/0x3c @ 1
[   13.852780] initcall ikconfig_init+0x0/0x3c returned 0 after 4 usecs
[   13.859190] calling  audit_init+0x0/0x141 @ 1
[   13.863609] audit: initializing netlink socket (disabled)
[   13.869091] type=2000 audit(1390362093.580:1): initialized
[   13.874617] initcall audit_init+0x0/0x141 returned 0 after 10749 usecs
[   13.881202] calling  audit_watch_init+0x0/0x3a @ 1
[   13.886057] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.892729] calling  audit_tree_init+0x0/0x49 @ 1
[   13.897496] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.904082] calling  init_kprobes+0x0/0x16c @ 1
[   13.918670] initcall init_kprobes+0x0/0x16c returned 0 after 9759 usecs
[   13.925273] calling  hung_task_init+0x0/0x56 @ 1
[   13.936558] calling  utsname_sysctl_init+0x0/0x14 @ 1
[   13.941682] initcall utsname_sysctl_init+0x0/0x14 returned 0 after 8 usecs
[   13.948608] calling  init_tracepoints+0x0/0x20 @ 1
[   13.953458] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.960128] calling  init_blk_tracer+0x0/0x5a @ 1
[   13.964896] initcall init_blk_tracer+0x0/0x5a returned 0 after 0 usecs
[   13.971482] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   13.977204] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   13.984739] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   13.990555] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 513 usecs
[   13.997767] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   14.003293] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   14.010594] calling  kswapd_init+0x0/0x76 @ 1
[   14.015046] initcall kswapd_init+0x0/0x76 returned 0 after 33 usecs
[   14.021340] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.026387] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   14.033299] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.037821] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.044218] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.048822] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.055312] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.060599] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.067704] calling  slab_proc_init+0x0/0x25 @ 1
[   14.072388] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.078886] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.084175] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.091277] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.096303] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.103150] calling  init_user_reserve+0x0/0x40 @ 1
[   14.108090] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.114851] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.119794] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.126551] calling  procswaps_init+0x0/0x22 @ 1
[   14.131233] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.137730] calling  init_frontswap+0x0/0x96 @ 1
[   14.142439] initcall init_frontswap+0x0/0x96 returned 0 after 28 usecs
[   14.148996] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.153590] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.160082] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6340 usecs
[   14.166684] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.171627] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.178385] calling  slab_proc_init+0x0/0x8 @ 1
[   14.182978] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.189390] calling  cpucache_init+0x0/0x4b @ 1
[   14.193984] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.200398] calling  hugepage_init+0x0/0x145 @ 1
[   14.205078] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.211751] calling  init_cleancache+0x0/0xbc @ 1
[   14.216547] initcall init_cleancache+0x0/0xbc returned 0 after 29 usecs
[   14.223192] calling  fcntl_init+0x0/0x2a @ 1
[   14.227535] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.233764] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.239055] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.246157] calling  dio_init+0x0/0x2d @ 1
[   14.250328] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.256384] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.261444] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 33 usecs
[   14.268343] calling  dnotify_init+0x0/0x7b @ 1
[   14.272876] initcall dnotify_init+0x0/0x7b returned 0 after 25 usecs
[   14.279263] calling  inotify_user_setup+0x0/0x70 @ 1
[   14.284309] initcall inotify_user_setup+0x0/0x70 returned 0 after 19 usecs
[   14.291224] calling  aio_setup+0x0/0x7d @ 1
[   14.295530] initcall aio_setup+0x0/0x7d returned 0 after 57 usecs
[   14.301623] calling  proc_locks_init+0x0/0x22 @ 1
[   14.306394] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   14.312975] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.317874] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   14.324590] calling  dquot_init+0x0/0x121 @ 1
[   14.329008] VFS: Disk quotas dquot_6.5.2
[   14.333032] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.339498] initcall dquot_init+0x0/0x121 returned 0 after 10242 usecs
[   14.346082] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.351282] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.358301] calling  quota_init+0x0/0x31 @ 1
[   14.362653] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.368875] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.373819] initcall proc_cmdline_init+0x0/0x22 returned 0 after 3 usecs
[   14.380576] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.385606] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.392448] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.397391] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.404150] calling  proc_devices_init+0x0/0x22 @ 1
[   14.409090] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.415849] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.421052] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.428069] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.433011] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.439769] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.444711] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.451469] calling  proc_stat_init+0x0/0x22 @ 1
[   14.456151] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.462647] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.467504] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.474174] calling  proc_version_init+0x0/0x22 @ 1
[   14.479117] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.485875] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.490904] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.497749] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.502523] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.509187] calling  vmcore_init+0x0/0x5cb @ 1
[   14.513693] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.520020] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.524704] initcall proc_kmsg_init+0x0/0x25 returned 0 after 4 usecs
[   14.531200] calling  proc_page_init+0x0/0x42 @ 1
[   14.535887] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.542380] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.547105] initcall init_devpts_fs+0x0/0x62 returned 0 after 43 usecs
[   14.553647] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.558250] initcall init_ramfs_fs+0x0/0x4d returned 0 after 10 usecs
[   14.564740] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.569839] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 70 usecs
[   14.576700] calling  init_fat_fs+0x0/0x4f @ 1
[   14.581140] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.587445] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.591952] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.598279] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.602873] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.609286] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.614077] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   14.620727] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.625426] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   14.631853] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.636273] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.642513] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.646933] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.653173] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.657593] NFS: Registering the id_resolver key type
[   14.662716] Key type id_resolver registered
[   14.666952] Key type id_legacy registered
[   14.671031] initcall init_nfs_v4+0x0/0x3b returned 0 after 13123 usecs
[   14.677614] calling  init_nlm+0x0/0x4c @ 1
[   14.681781] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.687752] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.692433] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.698932] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.703613] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.710112] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.715141] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.721986] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.726579] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.732992] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.737584] NTFS driver 2.1.30 [Flags: R/W].
[   14.741969] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4281 usecs
[   14.748593] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.753489] initcall init_autofs4_fs+0x0/0x2a returned 0 after 128 usecs
[   14.760185] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.764872] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.771448] calling  ipc_init+0x0/0x2f @ 1
[   14.775614] msgmni has been set to 3857
[   14.779516] initcall ipc_init+0x0/0x2f returned 0 after 3815 usecs
[   14.785746] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.790520] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.797099] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.801840] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.808366] calling  key_proc_init+0x0/0x5e @ 1
[   14.812964] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.819373] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.824397] SELinux:  Registering netfilter hooks
[   14.829298] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4785 usecs
[   14.836332] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.841098] initcall init_sel_fs+0x0/0xa5 returned 0 after 338 usecs
[   14.847435] calling  selnl_init+0x0/0x56 @ 1
[   14.851778] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.858006] calling  sel_netif_init+0x0/0x5c @ 1
[   14.862690] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.869186] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.874043] initcall sel_netnode_init+0x0/0x6a returned 0 after 3 usecs
[   14.880714] calling  sel_netport_init+0x0/0x6a @ 1
[   14.885568] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs
[   14.892240] calling  aurule_init+0x0/0x2d @ 1
[   14.896660] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.902900] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.907613] initcall crypto_wq_init+0x0/0x33 returned 0 after 32 usecs
[   14.914169] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.919113] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.925867] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.930980] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.937913] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.942958] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.949802] calling  hmac_module_init+0x0/0x12 @ 1
[   14.954653] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.961329] calling  md5_mod_init+0x0/0x12 @ 1
[   14.965885] initcall md5_mod_init+0x0/0x12 returned 0 after 48 usecs
[   14.972249] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   14.977560] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.984726] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   14.990101] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.997292] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.002546] initcall des_generic_mod_init+0x0/0x17 returned 0 after 50 usecs
[   15.009599] calling  aes_init+0x0/0x12 @ 1
[   15.013784] initcall aes_init+0x0/0x12 returned 0 after 24 usecs
[   15.019826] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.024445] initcall zlib_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.030920] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.036639] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.044178] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.050244] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.058132] calling  krng_mod_init+0x0/0x12 @ 1
[   15.062751] initcall krng_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.069225] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.073999] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.080578] calling  bsg_init+0x0/0x12e @ 1
[   15.084903] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.092279] initcall bsg_init+0x0/0x12e returned 0 after 7280 usecs
[   15.098604] calling  noop_init+0x0/0x12 @ 1
[   15.102851] io scheduler noop registered
[   15.106838] initcall noop_init+0x0/0x12 returned 0 after 3893 usecs
[   15.113165] calling  deadline_init+0x0/0x12 @ 1
[   15.117756] io scheduler deadline registered
[   15.122090] initcall deadline_init+0x0/0x12 returned 0 after 4232 usecs
[   15.128764] calling  cfq_init+0x0/0x8b @ 1
[   15.132950] io scheduler cfq registered (default)
[   15.137690] initcall cfq_init+0x0/0x8b returned 0 after 4653 usecs
[   15.143930] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.149305] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.156497] calling  pci_proc_init+0x0/0x6a @ 1
[   15.161277] initcall pci_proc_init+0x0/0x6a returned 0 after 183 usecs
[   15.167790] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.173447] xen: registering gsi 16 triggering 0 polarity 1
[   15.179008] Already setup the GSI :16
[   15.183583] xen: registering gsi 16 triggering 0 polarity 1
[   15.189146] Already setup the GSI :16
[   15.193650] xen: registering gsi 16 triggering 0 polarity 1
[   15.199210] Already setup the GSI :16
[   15.203570] xen: registering gsi 19 triggering 0 polarity 1
[   15.209145] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.214378] xen: registering gsi 17 triggering 0 polarity 1
[   15.219951] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.225262] xen: registering gsi 19 triggering 0 polarity 1
[   15.230824] Already setup the GSI :19
[   15.234745] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60560 usecs
[   15.241778] calling  aer_service_init+0x0/0x2b @ 1
[   15.246701] initcall aer_service_init+0x0/0x2b returned 0 after 70 usecs
[   15.253391] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.258242] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.263877] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5502 usecs
[   15.270809] calling  pcied_init+0x0/0x79 @ 1
[   15.275339] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.281943] initcall pcied_init+0x0/0x79 returned 0 after 6641 usecs
[   15.288353] calling  pcifront_init+0x0/0x3f @ 1
[   15.292944] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.299531] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.304932] initcall genericbl_driver_init+0x0/0x14 returned 0 after 111 usecs
[   15.312141] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.316823] initcall cirrusfb_init+0x0/0xcc returned 0 after 81 usecs
[   15.323251] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.328263] initcall efifb_driver_init+0x0/0x14 returned 0 after 71 usecs
[   15.335041] calling  intel_idle_init+0x0/0x331 @ 1
[   15.339892] intel_idle: MWAIT substates: 0x42120
[   15.344571] intel_idle: v0.4 model 0x3C
[   15.348472] intel_idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 03:41:35] traps.c:2527:d0 Domain attempted WRMSR 00000000000001fc from 0x000000000004005f to 0x000000000004005d.
[   15.365805] intel_idle: intel_idle yielding to none
[   15.370483] initcall intel_idle_init+0x0/0x331 returned -19 after 29873 usecs
[   15.377936] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.383318] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   15.390502] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.395084] initcall acpi_ac_init+0x0/0x2a returned 0 after 72 usecs
[   15.401436] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.407164] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.415330] ACPI: Power Button [PWRB]
[   15.419309] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.426689] ACPI: Power Button [PWRF]
[   15.430485] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23047 usecs
[   15.438039] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.443478] ACPI: Fan [FAN0] (off)
[   15.447067] ACPI: Fan [FAN1] (off)
[   15.450664] ACPI: Fan [FAN2] (off)
[   15.454264] ACPI: Fan [FAN3] (off)
[   15.457898] ACPI: Fan [FAN4] (off)
[   15.461370] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17705 usecs
[   15.468671] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.486676] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.495099] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] (Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)
[   15.510701] Monitor-Mwait will be used to enter C-1 state
[   15.516096] Monitor-Mwait will be used to enter C-2 state
[ ] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.543528] thermal LNXTHERM:00: registered as thermal_zone0
[   15.549179] ACPI: Thermal Zone [TZ00] (28 C)
[   15.555635] thermal LNXTHERM:01: registered as thermal_zone1
[   15.561281] ACPI: Thermal Zone [TZ01] (30 C)
[   15.565943] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25109 usecs
[   15.572979] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.577919] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.584677] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.589914] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.595645] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5630 usecs
[   15.602855] calling  erst_init+0x0/0x2fc @ 1
[   15.607228] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.614730] pstore: Registered erst as persistent store backend
[   15.620703] initcall erst_init+0x0/0x2fc returned 0 after 13201 usecs
[   15.627206] calling  ghes_init+0x0/0x173 @ 1
[   15.631737] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35376 usecs
[   15.640129] \_SB_:_OSC request failed
[   15.643784] _OSC request data:1 1 0
[   15.647423] \_SB_:_OSC invalid UUID
[   15.650976] _OSC request data:1 1 0
[   15.654614] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.660856] initcall ghes_init+0x0/0x173 returned 0 after 28630 usecs
[   15.667356] calling  einj_init+0x0/0x522 @ 1
[   15.671753] EINJ: Error INJection is initialized.
[   15.676457] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs
[   15.682869] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.687721] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.693762] initcall ioat_init_module+0x0/0xb1 returned 0 after 5898 usecs
[   15.700623] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.705581] initcall virtio_mmio_init+0x0/0x14 returned 0 after 103 usecs
[   15.712357] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.718146] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 71 usecs
[   15.725701] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.730987] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.738093] calling  xenbus_init+0x0/0x3d @ 1
[   15.742655] initcall xenbus_init+0x0/0x3d returned 0 after 137 usecs
[   15.748997] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.754226] initcall xenbus_backend_init+0x0/0x51 returned 0 after 115 usecs
[   15.761259] calling  gntdev_init+0x0/0x4d @ 1
[   15.765797] initcall gntdev_init+0x0/0x4d returned 0 after 116 usecs
[   15.772138] calling  gntalloc_init+0x0/0x3d @ 1
[   15.776865] initcall gntalloc_init+0x0/0x3d returned 0 after 131 usecs
[   15.783379] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.788751] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.795941] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.800947] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 64 usecs
[   15.807728] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.813364] initcall platform_pci_module_init+0x0/0x1b returned 0 after 81 usecs
[   15.820749] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.826139] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 189 usecs
[   15.833261] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.838055] xen_pciback: backend is vpci
[   15.842091] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3967 usecs
[   15.848866] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.855161] xen_acpi_processor: Uploading Xen processor PM info
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:35] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:35] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:35] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:35] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:35] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:35] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:35] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:35] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:35] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:35] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:35] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:35] 	_PPC: 0
(XEN) [2014-01-22 03:41:35] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:35] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:35] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:35] CPU 0 initialization completed
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:35] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:35] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:35] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:35] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:35] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:35] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:35] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:35] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:35] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:35] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:35] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:35] 	_PPC: 0
(XEN) [2014-01-22 03:41:35] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:35] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:35] CPU2: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:35] CPU 2 initialization completed
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(3) cpuid(4) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU4: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 4 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(4) cpuid(6) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU6: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 6 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(5) cpuid(1) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU1: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 1 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(6) cpuid(3) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU3: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 3 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(7) cpuid(5) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU5: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 5 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(8) cpuid(7) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU7: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 7 initialization completed
[   17.277335] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 1389575 usecs
[   17.285215] calling  pty_init+0x0/0x453 @ 1
[   17.299534] kworker/u2:1 (780) used greatest stack depth: 5488 bytes left
[   17.350706] initcall pty_init+0x0/0x453 returned 0 after 59805 usecs
[   17.357045] calling  sysrq_init+0x0/0xb0 @ 1
[   +0x0/0x228 returned 0 after 1028 usecs
[   17.379792] calling  serial8250_init+0x0/0x1ab @ 1
[   17.384641] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   17.412280] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   17.420676] initcall serial8250_init+0x0/0x1ab returned 0 after 35189 usecs
[   17.427626] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   17.433103] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 102 usecs
[   17.440399] calling  init_kgdboc+0x0/0x16 @ 1
[   17.444818] kgdb: Registered I/O driver kgdboc.
[   17.449442] initcall init_kgdboc+0x0/0x16 returned 0 after 4516 usecs
[   17.455913] calling  init+0x0/0x10f @ 1
[   17.460030] initcall init+0x0/0x10f returned 0 after 213 usecs
[   17.465853] calling  hpet_init+0x0/0x6a @ 1
[   17.470583] hpet_acpi_add: no address or irqs in _CRS
[   17.475710] initcall hpet_init+0x0/0x6a returned 0 after 5479 usecs
[   17.481964] calling  nvram_init+0x0/0x82 @ 1
[   17.486421] Non-volatile memory driver v1.3
[   17.490598] initcall nvram_init+0x0/0x82 returned 0 after 4200 usecs
[   17.497008] calling  mod_init+0x0/0x5a @ 1
[   17.501167] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   17.507321] calling  rng_init+0x0/0x12 @ 1
[   17.511615] initcall rng_init+0x0/0x12 returned 0 after 130 usecs
[   17.517696] calling  agp_init+0x0/0x26 @ 1
[   17.521856] Linux agpgart interface v0.103
[   17.526016] initcall agp_init+0x0/0x26 returned 0 after 4062 usecs
[   17.532254] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   17.537340] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 142 usecs
[   17.544371] calling  agp_intel_init+0x0/0x29 @ 1
[   17.549142] initcall agp_intel_init+0x0/0x29 returned 0 after 88 usecs
[   17.555656] calling  agp_sis_init+0x0/0x29 @ 1
[   17.560250] initcall agp_sis_init+0x0/0x29 returned 0 after 86 usecs
[   17.566592] calling  agp_via_init+0x0/0x29 @ 1
[   17.571187] initcall agp_via_init+0x0/0x29 returned 0 after 86 usecs
[   17.577530] calling  drm_core_init+0x0/0x10c @ 1
[   17.582288] [drm] Initialized drm 1.1.0 20060810
[   17.586899] initcall drm_core_init+0x0/0x10c returned 0 after 4580 usecs
[   17.593657] calling  cn_proc_init+0x0/0x3d @ 1
[   17.598167] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   17.604491] calling  topology_sysfs_init+0x0/0x70 @ 1
[   17.609636] initcall topology_sysfs_init+0x0/0x70 returned 0 after 32 usecs
[   17.616623] calling  loop_init+0x0/0x14e @ 1
[   17.674073] loop: module loaded
[   17.677239] initcall loop_init+0x0/0x14e returned 0 after 54961 usecs
[   17.683713] calling  mac_hid_init+0x0/0x22 @ 1
[   17.699621] initcall mac_hid_init+0x0/0x22 returned 0 after 15 usecs
[   17.706023] calling  macvlan_init_module+0x0/0x3d @ 1
[   17.711139] initcall macvlan_init_module+0x0/0x3d returned 0 after 1 usecs
[   17.718067] calling  macvtap_init+0x0/0x100 @ 1
[   17.722731] initcall macvtap_init+0x0/0x100 returned 0 after 69 usecs
[   17.729175] calling  net_olddevs_init+0x0/0xb5 @ 1
[   17.734013] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   17.740686] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   17.746108] libphy: Fixed MDIO Bus: probed
[   17.750204] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4216 usecs
[   17.757474] calling  tun_init+0x0/0x93 @ 1
[   17.761633] tun: Universal TUN/TAP device driver, 1.6
[   17.766744] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   17.773133] initcall tun_init+0x0/0x93 returned 0 after 11229 usecs
[   17.779390] calling  tg3_driver_init+0x0/0x1b @ 1
[   17.784277] initcall tg3_driver_init+0x0/0x1b returned 0 after 124 usecs
[   17.790966] calling  ixgbevf_init_module+0x0/0x4c @ 1
[   17.796076] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.11.3-k
[   17.805523] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
[   17.811802] initcall ixgbevf_init_module+0x0/0x4c returned 0 after 15356 usecs
[   17.819007] calling  forcedeth_pci_driver_init+0x0/0x1b @ 1
[   17.824734] initcall forcedeth_pci_driver_init+0x0/0x1b returned 0 after 92 usecs
[   17.832207] calling  netback_init+0x0/0x48 @ 1
[   17.836786] initcall netback_init+0x0/0x48 returned 0 after 74 usecs
[   17.843129] calling  nonstatic_sysfs_init+0x0/0x12 @ 1
[   17.848327] initcall nonstatic_sysfs_init+0x0/0x12 returned 0 after 0 usecs
[   17.855345] calling  yenta_cardbus_driver_init+0x0/0x1b @ 1
[   17.861110] initcall yenta_cardbus_driver_init+0x0/0x1b returned 0 after 127 usecs
[   17.868665] calling  mon_init+0x0/0xfe @ 1
[   17.873050] initcall mon_init+0x0/0xfe returned 0 after 219 usecs
[   17.879138] calling  ehci_hcd_init+0x0/0x5c @ 1
[   17.883728] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   17.890315] initcall ehci_hcd_init+0x0/0x5c returned 0 after 6431 usecs
[   17.896987] calling  ehci_pci_init+0x0/0x69 @ 1
[   17.901579] ehci-pci: EHCI PCI platform driver
[   17.906732] xen: registering gsi 16 triggering 0 polarity 1
[   17.912292] Already setup the GSI :16
[   17.916063] ehci-pci 0000:00:1a.0: EHCI Host Controller
[   17.921541] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1
[   17.928948] ehci-pci 0000:00:1a.0: debug port 2
[   17.937440] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[   17.944291] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf153c000
[   17.955454] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[   17.961327] usb usb1: New USB device found, idVendor=1d6b,380] usb usb1: Product: EHCI Host Controller
[   17.980321] usb usb1: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   17.987773] usb usb1: SerialNumber: 0000:00:1a.0
[   17.993106] hub 1-0:1.0: USB hub found
[   17.996880] hub 1-0:1.0: 3 ports detected
[   18.002367] xen: registering gsi 23 triggering 0 polarity 1
[   18.007929] Already setup the GSI :23
[   18.011681] ehci-pci 0000:00:1d.0: EHCI Host Controller
[   18.017166] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2
[   18.024576] ehci-pci 0000:00:1d.0: debug port 2
[   18.033050] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[   18.039892] ehci-pci 0000:00:1d.0: irq 23, io HCI 1.00
[   18.057317] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
[   18.064093] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   18.071368] usb usb2: Product: EHCI Host Controller
[   18.076307] usb usb2: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   18.083761] usb usb2: SerialNumber: 0000:00:1d.0
[   18.089073] hub 2-0:1.0: USB hub found
[   18.092849] hub 2-0:1.0: 3 ports detected
[   18.097886] initcall ehci_pci_init+0x0/0x69 returned 0 after 191705 usecs
[   18.104685] calling  ohci_hcd_mod_init+0x0/0x64 @ 1
[   18.109618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   18.115865] initcall ohci_hcd_mod_init+0x0/0x64 returned 0 after 6100 usecs
[   18.122878] calling  ohci_pci_init+0x0/0x69 @ 1
[   18.127470] ohci-pci: OHCI PCI platform driver
[   18.132112] initcall ohci_pci_init+0x0/0x69 returned 0 after 4531 usecs
[   18.138718] calling  uhci_hcd_init+0x0/0xb0 @ 1
[   18.143304] uhci_hcd: USB Universal Host Controller Interface driver
[   18.149847] initcall uhci_hcd_init+0x0/0xb0 returned 0 after 6388 usecs
[   18.156445] calling  usblp_driver_init+0x0/0x1b @ 1
[   18.161496] usbcore: registered new interface driver usblp
[   18.166965] initcall usblp_driver_init+0x0/0x1b returned 0 after 5450 usecs
[   18.173985] calling  kgdbdbgp_start_thread+0x0/0x4f @ 1
[   18.179272] initcall kgdbdbgp_start_thread+0x0/0x4f returned 0 after 0 usecs
[   18.186377] calling  i8042_init+0x0/0x3c5 @ 1
[   18.191062] i8042: PNP: No PS/2 controller found. Probing ports directly.
[   18.201127] serio: i8042 KBD port at 0x60,0x64 irq 1
[   18.206085] serio: i8042 AUX port at 0x60,0x64 irq 12
[   18.211371] initcall i8042_init+0x0/0x3c5 returned 0 after 20090 usecs
[   18.217883] calling  serport_init+0x0/0x34 @ 1
[   18.222388] initcall serport_init+0x0/0x34 returned 0 after 0 usecs
[   18.228714] calling  mousedev_init+0x0/0x62 @ 1
[   18.233520] mousedev: PS/2 mouse device common for all mice
[   18.239081] initcall mousedev_init+0x0/0x62 returned 0 after 5637 usecs
[   18.245751] calling  evdev_init+0x0/0x12 @ 1
[   18.250523] initcall evdev_init+0x0/0x12 returned 0 after 428 usecs
[   18.256780] calling  atkbd_init+0x0/0x27 @ 1
[   18.261221] initcall atkbd_init+0x0/0x27 returned 0 after 106 usecs
[   18.267473] calling  psmouse_init+0x0/0x82 @ 1
[   18.272163] initcall psmouse_init+0x0/0x82 returned 0 after 179 usecs
[   18.278593] calling  cmos_init+0x0/0x77 @ 1
[   18.282887] rtc_cmos 00:05: RTC can wake from S4
[   18.287901] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[   18.294046] rtc_cmos 00:05: alarms up to one month, y3k, 242 bytes nvram
[   18.301532] initcall cmos_init+0x0/0x77 returned 0 after 18249 usecs
[   18.307878] calling  i2c_i801_init+0x0/0xad @ 1
[   18.313060] xen: registering gsi 18 triggering 0 polarity 1
[   18.318623] Already setup the GSI :18
[   18.322442] i801_smbus 0000:00:1f.3: SMBus using PCI Interrupt
[   18.328570] initcall i2c_i801_init+0x0/0xad returned 0 after 15719 usecs
[   18.335262] calling  cpufreq_gov_dbs_init+0x0/0x12 @ 1
[   18.340470] initcall cpufreq_gov_dbs_init+0x0/0x12 returned -19 after 0 usecs
[   18.347657] calling  efivars_sysfs_init+0x0/0x220 @ 1
[   18.352769] initcall efivars_sysfs_init+0x0/0x220 returned -19 after 0 usecs
[   18.359873] calling  efivars_pstore_init+0x0/0xa2 @ 1
[   18.364986] initcall efivars_pstore_init+0x0/0xa2 returned 0 after 0 usecs
[   18.371920] calling  vhost_net_init+0x0/0x30 @ 1
[   18.377025] usb 1-1: new high-speed USB device number 2 using ehci-pci
[   18.383661] initcall vhost_net_init+0x0/0x30 returned 0 after 6896 usecs
[   18.390353] calling  vhost_init+0x0/0x8 @ 1
[   18.394594] initcall vhost_init+0x0/0x8 returned 0 after 0 usecs
[   18.400661] calling  staging_init+0x0/0x8 @ 1
[   18.405080] initcall staging_init+0x0/0x8 returned 0 after 0 usecs
[   18.411317] calling  zram_init+0x0/0x2fd @ 1
[   18.416495] zram: Created 1 device(s) ...
[   18.420498] initcall zram_init+0x0/0x2fd returned 0 after 4732 usecs
[   18.426909] calling  zs_init+0x0/0x90 @ 1
[   18.430985] initcall zs_init+0x0/0x90 returned 0 after 2 usecs
[   18.436874] calling  eeepc_laptop_init+0x0/0x5a @ 1
[   18.442076] initcall eeepc_laptop_init+0x0/0x5a returned -19 after 254 usecs
[   18.449117] calling  sock_diag_init+0x0/0x12 @ 1
[   18.453812] initcall sock_diag_init+0x0/0x12 returned 0 after 16 usecs
[   18.460381] calling  flow_cache_init_global+0x0/0x19a @ 1
[   18.465863] initcall flow_cache_init_global+0x0/0x19a returned 0 after 20 usecs
[   18.473206] calling  llc_init+0x0/0x20 @ 1
[   18.477366] initcall llc_init+0x0/0x20 returned 0 after 0 usecs
[   18.483345] calling  snap_init+0x0/0x38 @ 1
[   18.487593] initcall snap_init+0x0/0x38 returned 0 after 1 usecs
[   18.493657] calling  blackhole_module_init+0x0/0x12 @ 1
[   18.498946] initcall blackhole_module_init+0x0/0x12 returned 0 after 0 usecs
[   18.506053] calling  nfnetlink_init+0x0/0x59 @ 1
[   18.510731] Netfilter messages via NETLINK v0.30.
[   18.515513] initcall nfnetlink_init+0x0/0x59 returned 0 after 4669 usecs
[   18.522258] calling  nfnetlink_log_init+0x0/0xb6 @ 1
[   18.527294] initcall nfnetlink_log_init+0x0/0xb6 returned 0 after 9 usecs
[   18.534132] calling  nf_conntrack_standalone_init+0x0/0x82 @ 1
[   18.540024] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   18.546277] initcall nf_conntrack_standalone_init+0x0/0x82 returned 0 after 6105 usecs
[   18.554178] calling  ctnetlink_init+0x0/0xa4 @ 1
[   18.558857] ctnetlink v0.93: registering with nfnetlink.
[   18.564230] initcall ctnetlink_init+0x0/0xa4 returned 0 after 5247 usecs
[   18.570991] calling  nf_conntrack_ftp_init+0x0/0x1ca @ 1
[   18.576368] initcall nf_conntrack_ftp_init+0x0/0x1ca returned 0 after 4 usecs
[   18.583556] calling  nf_conntrack_irc_init+0x0/0x173 @ 1
[   18.588934] initcall nf_conntrack_irc_init+0x0/0x173 returned 0 after 3 usecs
[   18.596123] calling  nf_conntrack_sip_init+0x0/0x215 @ 1
[   18.601495] initcall nf_conntrack_sip_init+0x0/0x215 returned 0 after 0 usecs
[   18.608688] calling  xt_init+0x0/0x118 @ 1
[   18.612851] initcall xt_init+0x0/0x118 returned 0 after 1 usecs
[   18.618829] calling  tcpudp_mt_init+0x0/0x17 @ 1
[   18.623509] initcall tcpudp_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.630010] calling  connsecmark_tg_init+0x0/0x12 @ 1
[   18.635121] initcall connsecmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.642056] calling  nflog_tg_init+0x0/0x12 @ 1
[   18.646649] initcall nflog_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.653062] calling  secmark_tg_init+0x0/0x12 @ 1
[   18.657830] initcall secmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.664415] calling  tcpmss_tg_init+0x0/0x17 @ 1
[   18.669095] initcall tcpmss_tg_init+0x0/0x17 returned 0 after 0 usecs
[   18.675595] calling  conntrack_mt_init+0x0/0x17 @ 1
[   18.680536] initcall conntrack_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.687296] calling  policy_mt_init+0x0/0x17 @ 1
[   18.691975] initcall policy_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.698475] calling  state_mt_init+0x0/0x12 @ 1
[   18.703068] initcall state_mt_init+0x0/0x12 returned 0 after 0 usecs
[   18.709481] calling  sysctl_ipv4_init+0x0/0x92 @ 1
[   18.714361] initcall sysctl_ipv4_init+0x0/0x92 returned 0 after 26 usecs
[   18.721095] calling  tunnel4_init+0x0/0x72 @ 1
[   18.725602] initcall tunnel4_init+0x0/0x72 returned 0 after 0 usecs
[   18.731927] calling  ipv4_netfilter_init+0x0/0x12 @ 1
[   18.737041] initcall ipv4_netfilter_init+0x0/0x12 returned 0 after 0 usecs
[   18.743975] calling  nf_conntrack_l3proto_ipv4_init+0x0/0x17c @ 1
[   18.750231] initcall nf_conntrack_l3proto_ipv4_init+0x0/0x17c returned 0 after 100 usecs
[   18.758302] calling  nf_defrag_init+0x0/0x17 @ 1
[   18.762979] initcall nf_defrag_init+0x0/0x17 returned 0 after 0 usecs
[   18.769480] calling  ip_tables_init+0x0/0xaa @ 1
[   18.774173] ip_tables: (C) 2000-2006 Netfilter Core Team
[   18.779534] initcall ip_tables_init+0x0/0xaa returned 0 after 5248 usecs
[   18.786292] calling  iptable_filter_init+0x0/0x51 @ 1
[   18.791430] initcall iptable_filter_init+0x0/0x51 returned 0 after 23 usecs
[   18.798427] calling  iptable_mangle_init+0x0/0x51 @ 1
[   18.803557] initcall iptable_mangle_init+0x0/0x51 returned 0 after 17 usecs
[   18.810560] calling  reject_tg_init+0x0/0x12 @ 1
[   18.815240] initcall reject_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.821738] calling  ulog_tg_init+0x0/0x85 @ 1
[   18.826262] initcall ulog_tg_init+0x0/0x85 returned 0 after 16 usecs
[   18.832658] calling  cubictcp_register+0x0/0x5c @ 1
[   18.837599] TCP: cubic registered
[   18.840980] initcall cubictcp_register+0x0/0x5c returned 0 after 3302 usecs
[   18.847999] calling  xfrm_user_init+0x0/0x4a @ 1
[   18.852678] Initializing XFRM netlink socket
[   18.857024] initcall xfrm_user_init+0x0/0x4a returned 0 after 4243 usecs
[   18.863773] calling  inet6_init+0x0/0x370 @ 1
[   18.868273] NET: Registered protocol family 10
[   18.873054] initcall inet6_init+0x0/0x370 returned 0 after 4747 usecs
[   18.879475] calling  ah6_init+0x0/0x79 @ 1
[   18.883638] initcall ah6_init+0x0/0x79 returned 0 after 0 usecs
[   18.889615] calling  esp6_init+0x0/0x79 @ 1
[   18.893864] initcall esp6_init+0x0/0x79 returned 0 after 0 usecs
[   18.899929] calling  xfrm6_transport_init+0x0/0x17 @ 1
[   18.905128] initcall xfrm6_transport_init+0x0/0x17 returned 0 after 0 usecs
[   18.912150] calling  xfrm6_mode_tunnel_init+0x0/0x17 @ 1
[   18.917522] initcall xfrm6_mode_tunnel_init+0x0/0x17 returned 0 after 0 usecs
[   18.924715] calling  xfrm6_beet_init+0x0/0x17 @ 1
[   18.929482] initcall xfrm6_beet_init+0x0/0x17 returned 0 after 0 usecs
[   18.936068] calling  ip6_tables_init+0x0/0xaa @ 1
[   18.940848] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   18.946296] initcall ip6_tables_init+0x0/0xaa returned 0 after 5332 usecs
[   18.953142] calling  ip6table_filter_init+0x0/0x51 @ 1
[   18.958436] initcall ip6table_filter_init+0x0/0x51 returned 0 after 91 usecs
[   18.965466] calling  ip6table_mangle_init+0x0/0x51 @ 1
[   18.970707] initcall ip6table_mangle_init+0x0/0x51 returned 0 after 40 usecs
[   18.977772] calling  nf_conntrack_l3proto_ipv6_init+0x0/0x154 @ 1
[   18.983934] initcall nf_conntrack_l3proto_ipv6_init+0x0/0x154 returned 0 after 8 usecs
[   18.991899] calling  nf_defrag_init+0x0/0x54 @ 1
[   18.996587] initcall nf_defrag_init+0x0/0x54 returned 0 after 8 usecs
[   19.003078] calling  ipv6header_mt6_init+0x0/0x12 @ 1
[   19.008191] initcall ipv6header_mt6_init+0x0/0x12 returned 0 after 0 usecs
[   19.015125] calling  reject_tg6_init+0x0/0x12 @ 1
[   19.019890] initcall reject_tg6_init+0x0/0x12 returned 0 after 0 usecs
[   19.026479] calling  sit_init+0x0/0xcf @ 1
[   19.030637] sit: IPv6 over IPv4 tunneling driver
[   19.036315] initcall sit_init+0x0/0xcf returned 0 after 5543 usecs
[   19.042504] calling  packet_init+0x0/0x47 @ 1
[   19.046906] NET: Registered protocol family 17
[   19.051421] initcall packet_init+0x0/0x47 returned 0 after 4408 usecs
[   19.057912] calling  br_init+0x0/0xa2 @ 1
[   19.062005] initcall br_init+0x0/0xa2 returned 0 after 19 usecs
[   19.067964] calling  init_rpcsec_gss+0x0/0x64 @ 1
[   19.072773] initcall init_rpcsec_gss+0x0/0x64 returned 0 after 38 usecs
[   19.079408] calling  dcbnl_init+0x0/0x4d @ 1
[   19.083740] initcall dcbnl_init+0x0/0x4d returned 0 after 0 usecs
[   19.089894] calling  init_dns_resolver+0x0/0xe1 @ 1
[   19.094842] Key type dns_resolver registered
[   19.099164] initcall init_dns_resolver+0x0/0xe1 returned 0 after 4230 usecs
[   19.106201] calling  mcheck_init_device+0x0/0x123 @ 1
[   19.111591] initcall mcheck_init_device+0x0/0x123 returned 0 after 271 usecs
[   19.118650] calling  tboot_late_init+0x0/0x243 @ 1
[   19.123481] initcall tboot_late_init+0x0/0x243 returned 0 after 0 usecs
[   19.130157] calling  mcheck_debugfs_init+0x0/0x3c @ 1
[   19.135284] initcall mcheck_debugfs_init+0x0/0x3c returned 0 after 13 usecs
[   19.142290] calling  severities_debugfs_init+0x0/0x3c @ 1
[   19.147752] initcall severities_debugfs_init+0x0/0x3c returned 0 after 5 usecs
[   19.155025] calling  threshold_init_device+0x0/0x50 @ 1
[   19.160317] initcall threshold_init_device+0x0/0x50 returned 0 after 1 usecs
[   19.167420] calling  hpet_insert_resource+0x0/0x23 @ 1
[   19.172620] initcall hpet_insert_resource+0x0/0x23 returned 0 after 0 usecs
[   19.179640] calling  update_mp_table+0x0/0x56d @ 1
[   19.184493] initcall update_mp_table+0x0/0x56d returned 0 after 0 usecs
[   19.191166] calling  lapic_insert_resource+0x0/0x3f @ 1
[   19.196453] initcall lapic_insert_resource+0x0/0x3f returned 0 after 0 usecs
[   19.203558] calling  io_apic_bug_finalize+0x0/0x1b @ 1
[   19.208758] initcall io_apic_bug_finalize+0x0/0x1b returned 0 after 0 usecs
[   19.215777] calling  print_ICs+0x0/0x456 @ 1
[   19.220111] initcall print_ICs+0x0/0x456 returned 0 after 0 usecs
[   19.226265] calling  check_early_ioremap_leak+0x0/0x65 @ 1
[   19.231812] initcall check_early_ioremap_leak+0x0/0x65 returned 0 after 0 usecs
[   19.239177] calling  pat_memtype_list_init+0x0/0x32 @ 1
[   19.244464] initcall pat_memtype_list_init+0x0/0x32 returned 0 after 0 usecs
[   19.251572] calling  init_oops_id+0x0/0x40 @ 1
[   19.256080] initcall init_oops_id+0x0/0x40 returned 0 after 1 usecs
[   19.262404] calling  pm_qos_power_init+0x0/0x7b @ 1
[   19.267695] initcall pm_qos_power_init+0x0/0x7b returned 0 after 343 usecs
[   19.274556] calling  pm_debugfs_init+0x0/0x24 @ 1
[   19.279329] initcall pm_debugfs_init+0x0/0x24 returned 0 after 6 usecs
[   19.285908] calling  printk_late_init+0x0/0x44 @ 1
[   19.290763] initcall printk_late_init+0x0/0x44 returned 0 after 0 usecs
[   19.297435] calling  tk_debug_sleep_time_init+0x0/0x3d @ 1
[   19.302988] initcall tk_debug_sleep_time_init+0x0/0x3d returned 0 after 5 usecs
[   19.310349] calling  debugfs_kprobe_init+0x0/0x90 @ 1
[   19.315479] initcall debugfs_kprobe_init+0x0/0x90 returned 0 after 17 usecs
[   19.322482] calling  taskstats_init+0x0/0x73 @ 1
[   19.327170] registered taskstats version 1
[   19.331321] initcall taskstats_init+0x0/0x73 returned 0 after 4063 usecs
[   19.338082] calling  clear_boot_tracer+0x0/0x2d @ 1
[   19.343022] initcall clear_boot_tracer+0x0/0x2d returned 0 after 0 usecs
[   19.349781] calling  kdb_ftrace_register+0x0/0x2f @ 1
[   19.354895] initcall kdb_ftrace_register+0x0/0x2f returned 0 after 0 usecs
[   19.361827] calling  max_swapfiles_check+0x0/0x8 @ 1
[   19.366853] initcall max_swapfiles_check+0x0/0x8 returned 0 after 0 usecs
[   19.373701] calling  set_recommended_min_free_kbytes+0x0/0xa0 @ 1
[   19.379853] initcall set_recommended_min_free_kbytes+0x0/0xa0 returned 0 after 0 usecs
[   19.387827] calling  kmemleak_late_init+0x0/0x93 @ 1
[   19.392881] kmemleak: Kernel memory leak detector initialized
[   19.398661] initcall kmemleak_late_init+0x0/0x93 returned 0 after 5671 usecs
[   19.405768] calling  init_root_keyring+0x0/0xb @ 1
[   19.410641] initcall init_root_keyring+0x0/0xb returned 0 after 20 usecs
[   19.417380] calling  fail_make_request_debugfs+0x0/0x2a @ 1
[   19.423055] initcall fail_make_request_debugfs+0x0/0x2a returned 0 after 40 usecs
[   19.430552] calling  prandom_reseed+0x0/0x47 @ 1
[   19.435234] initcall prandom_reseed+0x0/0x47 returned 0 after 2 usecs
[   19.441733] calling  pci_resource_alignment_sysfs_init+0x0/0x19 @ 1
[   19.448065] initcall pci_resource_alignment_sysfs_init+0x0/0x19 returned 0 after 5 usecs
[   19.456208] calling  pci_sysfs_init+0x0/0x51 @ 1
[   19.465029] initcall pci_sysfs_init+0x0/0x51 returned 0 after 4045 usecs
[   19.471721] calling  boot_wait_for_devices+0x0/95509] initcall deferred_probe_initcall+0x0/0x70 returned 0 after 5781 usecs
[   19.503017] calling  late_resume_init+0x0/0x1d0 @ 1
[   19.507949]   Magic number: 14:70:665
[   19.511687] bdi 7:43: hash matches
[   19.515239] initcall late_resume_init+0x0/0x1d0 returned 0 after 7117 usecs
[   19.522186] calling  firmware_memmap_init+0x0/0x38 @ 1
[   19.527822] initcall firmware_memmap_init+0x0/0x38 returned 0 after 428 usecs
[   19.534944] calling  pci_mmcfg_late_insert_resources+0x0/0x50 @ 1
[   19.541096] initcall pci_mmcfg_late_insert_resources+0x0/0x50 returned 0 after 0 usecs
[   19.549068] calling  tcp_congestion_default+0x0/0x12 @ 1
[   19.554442] initcall tcp_congestion_default+0x0/0x12 returned 0 after 0 usecs
[   19.561634] calling  ip_auto_config+0x0/0xf1c @ 1
[   19.566406] initcall ip_auto_config+0x0/0xf1c returned 0 after 5 usecs
[   19.572989] calling  software_resume+0x0/0x290 @ 1
[   19.577841] PM: Hibernation image not present or could not be loaded.
[   19.584341] initcall software_resume+0x0/0x290 returned -2 after 6347 usecs
[   19.613629] async_waiting @ 1
[   19.616659] async_continuing @ 1 after 0 usec
[   19.621434] Freeing unused kernel memory: 1724K (ffffffff81cc1000 - ffffffff81e70000)
[   19.629251] Write protecting the kernel read-only data: 12288k
[   19.637705] Freeing unused kernel memory: 1244K (ffff8800016c9000 - ffff880001800000)
[   19.646030] Freeing unused kernel memory: 1912K (ffff880001a22000 - ffff880001c00000)
init started: BusyBox v1.14.3 (2014-01-20 09:47:53 EST)
Mounting directories  [  OK  ]
[   19.674355] mv (1495) used greatest stack depth: 5416 bytes left
[   19.681907] mkdir (14
[   19.709502] usb 1-1: New USB device found, idVendor=8087, idProduct=8008
[   19.716195] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   19.741503] hub 1-1:1.0: USB hub found
[   19.747424] hub 1-1:1.0: 6 ports detected
mount: mount point /proc/bus/usb does not exist
[   19.822797] calling  privcmd_init+0x0/0x1000 [xen_privcmd] @ 1522
[   19.829027] initcall privcmd_init+0x0/0x1000 [xen_privcmd] returned 0 after 144 usecs
[   19.837231] calling  xenfs_init+0x0/0x1000 [xenfs] @ 1522
[   19.842626] initcall xenfs_init+0x0/0x1000 [xenfs] returned 0 after 1 usecs
mount: mount point /sys/kernel/config does not exist
[   19.864438] usb 2-1: new high-speed USB device number 2 using ehci-pci
[   19.874912] calling  xenkbd_init+0x0/0x1000 [xen_modules/3.13.0upstream-02502-gec513b1/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
[   19.910345] calling  xenfb_init+0x0/0x1000 [xen_fbfront] @ 1536
[   19.916256] initcall xenfb_init+0x0/0x1000 [xen_fbfront] returned -19 after 0 usecs
FATAL: Error inserting xen_fbfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/video/xen-fbfront.ko): No such device
[   19.937850] calling  netif_init+0x0/0x1000 [xen_netfront] @ 1543
[   19.943848] xen_netfront: Initialising Xen virtual ethernet driver
[   19.950209] initcall netif_init+0x0/0x1000 [xen_netfront] returned 0 after 6210 usecs
[   19.960108] calling  xlblk_init+0x0/0x1000 [xen_blkfront] @ 1546
[   19.966232] initcall xlblk_init+0x0/0x1000 [xen_blkfront] returned 0 after 120 usecs
[   19.975587] load_xen_module (1530) used greatest stack depth: 4792 bytes left
[   19.991633] udevd (1552): /proc/1552/oom_adj is deprecated, please use /proc/1552/oom_score_adj instead.
[   20.001485] usb 2-1: New USB device found, idVendor=8087, idProduct=8000
[   20.008173] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   20.039496] hub 2-1:1.0: USB hub found
[   20.044435] hub 2-1:1.0: 8 ports detected
[   20.079257] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1561
[   20.085906] initcall acpi_cpufreq_init+0x0/0x100
[   20.108451] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1573
[   20.115062] initcall acpi_cpufreq_init+0x0/0x100nit+0x0/0x1000 [acpi_cpufreq] @ 1576
[   20.139861] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   20.155847] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1574
[   20.162461] initcall acpi_cpufreq_init+0x0/0x100] @ 1603
[   20.177100] initcall acpi_video_init+0x0/0xfee [video] returned 0 after 12 usecs
[   20.261238] calling  init_scsi+0x0/0x91 [scsi_mod] @ 1778
[   20.270883] SCSI subsystem initialized
[   20.274631] initcall init_scsi+0x0/0x91 [scsi_mod] returned 0 after 7810 usecs
[   20.284535] calling  igb_init_module+0x0/0x1000 [igb] @ 1787
[   20.290186] igb: Intel(R) Gigabit Ethernet Network Driver - 5.0.5-k
[   20.297212] igb: Copyright (c) 2007-2013 Intel Corporation.
[   20.303139] xen: registering gsi 17 triggering 0 polarity 1
[   20.308701] Already setup the GSI :17
[   20.314223] calling  sas_transport_init+0x0/0x1000 [scsi_transport_sas] @ 1778
[   20.343508] calling  drm_fb_helper_modinit+0x0/0x1000 [drm_kms_helper] @ 1802
[   20.356285] calling  e1000_init_module+0x0en: registering gsi 20 triggering 0 polarity 1
[   20.380536] xen: --> pirq=20 -> irq=20 (gsi=20)
[   20.385267] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   20.403401] calling  fb_console_init+0x0/0x1000 [fbcon] @ 1823
[   20.412445] initcall fb_console_init+0x0/0x1000 [fbcon] returned 0 after 3144 usecs
[   20.441188] initcall sas_transport_init+0x0/0x1000 [scsi_transport_sas] returned 0 after 116943 usecs
[   20.452295] initca0/0x1000 [drm_kms_helper] returned 0 after 99276 usecs
[   20.493808] calling  ata_init+0x0/0x4ce [libata] @ 1894
[   20.499447] calling  fusion_init+0x0/0x1000 [mptbase] @ 1778
[   20.505094] Fusion MPT base driver 3.04.20
[   20.509250] Copyright (c) 1999-2008 LSI Corporation
[   20.514209] initcall fusion_init+0x0/0x1000 [mptbase] returned 0 after 8899 usecs
[   20.531543] calling  mptsas_init+0x0/0x1000 [mptsas] @ 1778
[   20.537107] Fusion MPT SAS Host driver 3.04.20
[   20.541916] xen: registering gsi 16 triggering 0 polarity 1
[   20.547481] Already setup the GSI :16
[   20.564590] mptbase: ioc0: Initiating bringup
[   20.571614] calling  i915_init+0x0/0x68 [i915] @ 1802
[   20.587807] libata version 3.00 loaded.
[   20.591648] initcall ata_init+0x0/0x4ce [libata] returned 0 after 90440 usecs
[   20.601514] calling  ahci_pci_driver_init+0x0/0x1000 [ahci] @ 1889
udevd-work[1583]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: Permission denied

[   20.626756] igb 0000:02:00.0: added PHC on eth0
[   20.631278] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection
[   20.638210] igb 0000:02:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   20.645402] igb 0000:02:00.0: eth0: PBA No: Unknown
[   20.650341] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   20.658328] xen: registering gsi 18 triggering 0 polarity 1
[   20.663895] Already setup the GSI :18
[   20.733746] e1000e 0000:00:19.0 eth1: registered PHC clock
[   20.739226] e1000e 0000:00:19.0 eth1: (PCI Express:2.5GT/s:Width x1) 00:25:90:86:be:f0
[   20.747193] e1000e 0000:00:19.0 eth1: Intel(R) PRO/1000 Network Connection
[   20.754152] e1000e 0000:00:19.0 eth1: MAC: 11, PHY: 12, PBA No: 0100FF-0FF
[   20.761381] xen: registering gsi 16 triggering 0 polarity 1
[   20.766945] Already setup the GSI :16
[   20.770848] e1000e 0000:04:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   20.781045] xen: registering gsi 16 triggering 0 polarity 1
[   20.786609] Already setup the GSI :16
[   20.816233] [drm] Memory usable by graphics device = 2048M
[   20.853844] Failed to add WC MTRR for [00000000e0000000-00000000efffffff]; performance may suffer.
[   20.889782] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   20.896648] [drm] Driver supports precise vblank timestamp query.
[   20.903040] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   20.927898] igb 0000:02:00.1: added PHC on eth2
[   20.932427] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection
[   20.939356] igb 0000:02:00.1: eth2: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ad
[   20.946547] igb 0000:02:00.1: eth2: PBA No: Unknown
[   20.951485] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   20.959464] xen: registering gsi 19 triggering 0 polarity 1
[   20.965027] Already setup the GSI :19
[   21.086536] e1000e 0000:04:00.0 eth3: (PCI Express:2.5GT/s:Width x1) 00:15:17:8f:18:a2
[   21.094436] e1000e 0000:04:00.0 eth3: Intel(R) PRO/1000 Network Connection
[   21.101443] e1000e 0000:04:00.0 eth3: MAC: 0, PHY: 4, PBA No: D50868-003
[   21.108464] xen: registering gsi 17 triggering 0 polarity 1
[   21.114025] Already setup the GSI :17
[   21.117929] e1000e 0000:04:00.1: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   21.199003] i915 0000:00:02.0: No connectors reported connected with modes
[   21.205884] [drm] Cannot find any crtc or siz
[   21.404610] fbcon: inteldrmfb (fb0) is primary device
[   21.406495] ioc0: LSISAS1064E B3: Capabilities={Initiator}
[   2000:05:00.0: eth4: (PCIe:2.5Gb/s:Width x1) 00:25:90:86:be:f1
[   21.409299] igb 0000:05:00.0: eth4: PBA No: 011000-000
[   21.409300] igb 0000:05:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   21.409848] initcall igb_init_module+0x0/0x1000 [igb] returned 0 after 1093415 usecs
[   21.410155] modprobe (1787) used greatest stack depth: 4424 bytes left
[   21.469414] Console: switching to colour frame buffer device 128x48
[   21.575433] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
[   21.581671] i915 0000:00:02.0: registered panic notifier
[   21.606553] ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
[   21.626546] acpi device:61: registered as cooling_device6
[   21.676120] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5
[   21.730538] e1000e 0000:04:00.1 eth5: (PCI Express:2.5GT/s:Width x1) 00:15:17:8f:18:a3
[   21.738444] e1000e 0000:04:00.1 eth5: Intel(R) PRO/1000 Network Connection
[   21.745453] e1000e 0000:04:00.1 eth5: MAC: 0, PHY: 4, PBA No: D50868-003
[   21.754008] initcall e1000_init_module+0x0/0x1000 [e1000e] returned 0 after 1359007 usecs
[   21.798514] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[   21.806126] ahci 0000:00:1f.2: version 3.01
[   21.816462] Already setup the GSI :19
[   21.820423] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
[   21.828587] ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part ems apst
[   21.863614] modprobe (1807) used greatest stack depth: 4104 bytes left
[   21.922552] scsi0 : ahci
[   21.931583] initcall i915_init+0x0/0x68 [i915] returned 0 after 1323162 usecs
[   21.941218] scsi1 : ahci
[   21.948566] ip (2713) used greatest stack depth: 3720 bytes left
[   21.975594] scsi2 : ahci
[   21.995538] scsi3 : ahci
[   22.004917] modprobe (1802) used greatest stack depth: 2928 bytes left
[   22.018394] scsi4 : ahci
[   22.038000] scsi5 : ahci
[   22.047925] ata1: SATA max UDMA/133 abar m2048@0xf153a000 port 0x ata2: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a180 irq 66
[   22.062770] ata3: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a200 irq 66
[   22.070215] ata4: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a280 irq 66
[   22.077667] ata5: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a300 irq 66
[   22.085125] ata6: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a380 irq 66
[   22.092885] xen: registering gsi 19 triggering 0 polarity 1
[   22.098443] Already setup the GSI :19
[   22.102364] ahci 0000:0a:00.0: SSS flag set, parallel bus scan disabled
[   22.109008] ahci 0000:0a:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
[   22.117123] ahci 0000:0a:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
[   22.137059] scsi6 : ahci
[   22.141152] scsi7 : ahci
[   22.144872] ata7: SATA max UDMA/133 abar m512@0xf1c00000 port 0xf
Waiting for devices [  OK  ]
[   22.431469] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[   22.437655] ata6: SATA link down (SStatus 0 SControl ata5: SATA link down (SStatus 0 SControl 300)
[   22.460539] ata4: SATA link down (SStatus 0 SControl 300)
[   22.472269] ata1.00: ATA-8: WDC WD740HLFS-01G6U4, 04.04V06, max UDMA/133
[   22.478964] ata1.00: 145226112 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[   22.486178] ata7: SATA link down (SStatus 0 SControl 300)
[   22.494553] ata1.00: configured for UDMA/133
[   22.499018] scsi 0:0:0:0: Direct-Access     ATA      WDC WD740HLFS-01 04.0 PQ: 0 ANSI: 5
[   22.642704] [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
[   22.812477] ata8: SATA link down (SStatus 0 SControl 300)
[   22.831694] calling  init_sd+0x0/0x1000 [sd_mod] @ 3283
[   22.837706] sd 0:0:0:0: [sda] 145226112 512-byte logical blocks:22.856015] sd 0:0:0:0: [sda] Write Protect is off
[   22.860841] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   22.868410] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   22.893984]  sda: sda1 sda2
[   22.905029] sd 0:0:0:0: [sda] Attached SCSI disk
[   22.926614] calling  init_sg+0x0/0x1000 [sg] @ 3307
[   22.941347] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   22.946694] initcall init_sg+0x0/0x1000 [sg] returned 0 after 1
[   24.489493] random: nonblocking pool is initialized
[   31.264235] scsi8 : ioc0: LSISAS1064E B3, FwRev=011a0000h, Ports=1, MaxQ=266, IRQ=16
[   31.294185] initcall mptsas_init+0x0/0x1000 [mptsas] returned 0 after 10504955 usecs
Waiting for init.pre_custom [  OK  ]
Waiting for fb [  OK  ]
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7f236e34f000): Writting .. [1024:768]
Done!
[   31.439261] calling  ttm_init+0x0/0x1000 [ttm] @ 3391
[   31.444528] initcall ttm_init+0x0/0x1000 [ttm] returned 0
[   31.462437] calling  radeon_init+0x0/0xba [radeon] @ 3391
[   31.467829] [drm] radeon kernel modesetting enabled.
[   31.5575] wmi: Mapper loaded
[   31.498676] initcall acpi_wmi_init+0x0/0x1000 [wmi] returned 0 after 4803 usecs
[   31.508988] calling  mxm_wmi_init+0x0/0x1000 [mxm_wmi] @ 3396
[   31.514729] initcall mxm_wmi_init+0x0/0x1000 [mxm_wmi] returned 0 after 0 usecs
[   31.537248] calling  nouveau_drm_init+0x0/0x1000 [nouveau] @ 3396
[   31.543588] initcall nouveau_drm_init+0x0/0x1000 [nouveau] returned 0 after 240 usecs
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7f6f6c002000): Writting .. [1024:768]
Done!
VGA: 0000:00:02.0
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [   31.905535] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[   31.923216] device eth0 entered promiscuous mode
[  OK Bringing up interface eth1:
Determining IP information for eth1...[   32.179912] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[   34.332970] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[   34.340506] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   35.845105] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   35.852684] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
 done.
[  OK  ]
Bringing up interface eth2:
Determining IP information for eth2...[   37.247699] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
[   39.910970] igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[   39.918496] IPv6: ADDRCONF(NETDEV_CHANGE done.
[  OK  ]
Bringing up interface switch:
Determining IP information for switch...[   41.042089] switch: port 1(eth0) entered forwarding state
[   41.047491] switch:  done.
[  OK  ]
Waiting for init.custom [  OK  ]

Starting SSHd ...

    SSH started [3961]


Waiting for SSHd [  OK  ]
WARNING: ssh currently running [3961] ignoring start request
[   42.392408] calling  crc32c_mod_init+0x0/0x1000 [crc32c] @ 3996
[   42.398383] initcall crc32c_mod_init+0x0/0x1000 [crc32c] returned 0 after 59 usecs
[   42.407994] calling  libcrc32c_mod_init+0x0/0x1000 [libcrc32c] @ 3999
[   42.414431] initcall libcrc32c_mod_init+0x0/0x1000 [libcrc32c] returned 0 after 4 usecs
[   42.425154] calling  iscsi_transport_init+0x0/0x1000 [scsi_transport_iscsi] @ 4001
[   42.432708] Loading iSCSI transport class v2.0-870.
[   42.438766] initcall iscsi_transport_init+0x0/0x1000 [scsi_transport_iscsi] returned 0 after 5912 usecs
[   42.450230] calling  iscsi_sw_tcp_init+0x0/0x1000 [iscsi_tcp] @ 4001
[   42.456813] iscsi: registered transport (tcp)
[   42.461163] initcall iscsi_sw_tcp_init+0x0/0x1000 [iscsi_tcp] returned 0 after 4475 usecs
iscsistart: transport class version 2.0-870. iscsid version 2.0-872
Could not get list of targets from firmware.
Jan 22 03:42:04 tst035 syslogd 1.5.0: restart.
Running in PV context on Xen v4.3.
[   42.521619] calling  evtchn_init+0x0/0x1000 [xen_evtchn] @ 4038
[   42.527945] xen:xen_evtchn: Event-channel device installinit+0x0/0x1000 [xen_evtchn] returned 0 after 5755 usecs
Starting C xenstored...Jan 22 03:42:04 tst035 xenstored: Checking store ...
Jan 22 03:42:04 tst035 xenstored: Checking store complete.

Setting domain 0 name...
Starting xenconsoled...
Starting QEMU as disk backend for dom0
[0:0:0:0]    disk    ATA      WDC WD740HLFS-01 04.0  /dev/sda
00:00.0 Host bridge: Intel Corporation Device 0c08 (rev 06)
00:01.0 PCI bridge: Intel Corporation Device 0c01 (rev 06)
00:01 Corporation Device 0c05 (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Device 041a (rev 06)
00:03.0 Audio device: Intel Corporation Device 0c0c (rev 06)
00:14.0 USB Controller: Intel Corporation Device 8c31 (rev 04)
00:16.0 Communication controller: Intel Corporation Device 8c3a (rev 04)
00:19.0 Ethernet controller: Intel Corporation Device 153a (rev 04)
00:1a.0 USB Controller: Intel Corporation Device 8c2d (rev 04)
00:1b.0 Audio device: Intel Corporation Device 8c20 (rev 04)
00:1c.0 PCI bridge: Intel Corporation Device 8c10 (rev d4)
00:1c.3 PCI bridge: Intel Corporation Device 8c16 (rev d4)
00:1c.5 PCI bridge: Intel Corporation Device 8c1a (rev d4)
00:1c.6 PCI bridge: Intel Corporation Device 8c1c (rev d4)
00:1c.7 PCI bridge: Intel Corporation Device 8c1e (rev d4)
00:1d.0 USB Controller: Intel Corporation Device 8c26 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Device 8c56 (rev 04)
00:1f.2 SATA controller: Intel Corporation Device 8c02 (rev 04)
00:1f.3 SMBus: Intel Corporation Device 8c22 (rev 04)
00:1f.6 Signal processing controller: Intel Corporation Device 8c24 (rev 04)
01:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 08)
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
07:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11)
07:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
08:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
09:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
0a:00.0 SATA controller: Device 1b21:0612 (rev 01)
           CPU0
  1:          2  xen-pirq-ioapic-edge  i8042
  8:          1  xen-pirq-ioapic-edge  rtc0
  9:       timer0
 41:          0  xen-percpu-ipi       spinlock0
 42:          0  xen-percpu-ipi       resched0
 43:          0  xen-percpu-ipi       callfunc0
 44:          0  xen-percpu-virq      debug0
 45:          0  xen-percpu-ipi       callfuncsingle0
 46:       1037  xen-percpu-ipi       irqwork0
 47:         77   xen-dyn-event     xenbus
 48:          0  xen-percpu-virq      xen-pcpu
 51:          0  xen-percpu-virq      mce
 52:        313  xen-percpu-virq      hvc_console
 53:          1  xen-pirq-msi-x     eth0
 54:         31  xen-pirq-msi-x     eth0-rx-0
 55:         27  xen-pirq-msi-x     eth0-tx-0
 56:         38  xen-pirq-msi       eth1
 57:          1  xen-pirq-msi-x     eth2
 58:         12  xen-pirq-msi-x     eth2-rx-0
 59:         16  xen-pirq-msi-x     eth2-tx-0
 61:         26  xen-pirq-msi       i915
 66:         10  xen-pirq-msi       ahci
 67:          0  xen-pirq-msi       ahci
 68:         24   xen-dyn-event     evtchn:xenstored
 69:          0   xen-dyn-event     evtchn:xenstored
NMI:          0   Non-maskable interrupts
LOC:          0   Local timer interrupts
SPU:          0   Spurious interrupts
PMI:          0   Performance monitoring interrupts
IWI:       1037   IRQ work interrupts
RTR:          0   APIC ICR read retries
RES:          0   Rescheduling interrupts
CAL:          0   Function call interrupts
TLB:          0   TLB shootdowns
TRM:          0   Thermal event interrupts
THR:          0   Threshold APIC interrupts
MCE:          0   Machine check exceptions
MCP:          1   Machine check polls
ERR:          0
MIS:          0
00000000-00000fff : reserved
00001000-00098fff : System RAM
00099000-00099bff : RAM buffer
00099c00-000fffff : reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000ce9ff : Video ROM
  000cf000-000cf7ff : Adapter ROM
  000cf800-000d07ff : Adapter ROM
  000d0800-000d17ff : Adapter ROM
  000d1800-000d27ff : Adapter ROM
  000d2800-000d37ff : Adapter ROM
  000d3800-000d47ff : Adapter ROM
  000d8000-000dbfff : PCI Bus 0000:00
  000dc000-000dffff : PCI Bus 0000:00
  000e0000-000e3fff : PCI Bus 0000:00
  000e4000-000e7fff : PCI Bus 0000:00
  000f0000-000fffff : System ROM
00100000-80066fff : System RAM
  01000000-016c5ebc : Kernel code
  016c5ebd-01cbfa3f : Kernel data
  01e78000-01fd0fff : Kernel bss
80067000-a58f0fff : Unusable memory
a58f1000-a58f7fff : ACPI Non-volatile Storage
a58f8000-a61b0fff : Unusable memory
a61b1000-a6596fff : reserved
a6597000-b74b3fff : Unusable memory
b74b4000-b76cafff : reserved
  b7648018-b764803e : APEI ERST
  b764803f-b764a03e : APEI ERST
  b76c9d98-b76c9d98 : APEI EINJ
  b76c9d9a-b76c9da6 : APEI EINJ
  b76c9da8-b76c9daf : APEI EINJ
b76cb000-b770bfff : Unusable memory
b770c000-b77b8fff : ACPI Non-volatile Storage
b77b9000-b7ffefff : reserved
b7fff000-b7ffffff : Unusable memory
bc000000-be1fffff : reserved
  bc200000-be1fffff : Graphics Stolen Memory
be200000-feafffff : PCI Bus 0000:00
  e0000000-efffffff : 0000:00:02.0
  f0000000-f03fffff : 0000:00:02.0
  f0400000-f14fffff : PCI Bus 0000:02
    f0400000-f07fffff : 0000:02:00.1
    f0800000-f0bfffff : 0000:02:00.1
      f0800000-f0bfffff : igb
    f0c00000-f0ffffff : 0000:02:00.0
    f1000000-f13fffff : 0000:02:00.0
      f1000000-f13fffff : igb
    f1400000-f141ffff : 0000:02:00.1
      f1400000-f141ffff : igb
    f1420000-f143ffff : 0000:02:00.0
      f1420000-f143ffff : igb
    f1440000-f1443fff : 0000:02:00.1
      f1440000-f1443fff : igb
    f1444000-f1447fff : 0000:02:00.0
      f1444000-f1447fff : igb
    f1448000-f1467fff : 0000:02:00.0
    f1468000-f1487fff : 0000:02:00.0
    f1488000-f14a7fff : 0000:02:00.1
    f14a8000-f14c7fff : 0000:02:00.1
  f1500000-f151ffff : 0000:00:19.0
    f1500000-f151ffff : e1000e
  f1520000-f152ffff : 0000:00:14.0
  f1530000-f1533fff : 0000:00:1b.0
  f1534000-f1537fff : 0000:00:03.0
  f1538000-f1538fff : 0000:00:1f.6
  f1539000-f15390ff : 0000:00:1f.3
  f153a000-f153a7ff : 0000:00:1f.2
    f153a000-f153a7ff : ahci
  f153b000-f153b3ff : 0000:00:1d.0
    f153b000-f153b3ff : ehci_hcd
  f153c000-f153c3ff : 0000:00:1a.0
    f153c000-f153c3ff : ehci_hcd
  f153d000-f153dfff : 0000:00:19.0
    f153d000-f153dfff : e1000e
  f153f000-f153f00f : 0000:00:16.0
  f1600000-f18fffff : PCI Bus 0000:01
    f1600000-f17fffff : 0000:01:00.0
    f1800000-f180ffff : 0000:01:00.0
      f1800000-f180ffff : mpt
    f1810000-f1813fff : 0000:01:00.0
      f1810000-f1813fff : mpt
  f1a00000-f1bfffff : PCI Bus 0000:06
    f1a00000-f1bfffff : PCI Bus 0000:07
      f1a00000-f1afffff : PCI Bus 0000:08
        f1a00000-f1a00fff : 0000:08:0b.1
        f1a01000-f1a01fff : 0000:08:0b.0
        f1a02000-f1a02fff : 0000:08:0a.1
        f1a03000-f1a03fff : 0000:08:0a.0
        f1a04000-f1a04fff : 0000:08:09.1
        f1a05000-f1a05fff : 0000:08:09.0
        f1a06000-f1a06fff : 0000:08:08.1
        f1a07000-f1a07fff : 0000:08:08.0
      f1b00000-f1b03fff : 0000:07:03.0
      f1b04000-f1b047ff : 0000:07:03.0
  f1c00000-f1cfffff : PCI Bus 0000:0a
    f1c00000-f1c001ff : 0000:0a:00.0
      f1c00000-f1c001ff : ahci
  f1d00000-f1dfffff : PCI Bus 0000:09
    f1d00000-f1d01fff : 0000:09:00.0
  f1e00000-f1efffff : PCI Bus 0000:05
    f1e00000-f1e7ffff : 0000:05:00.0
      f1e00000-f1e7ffff : igb
    f1e80000-f1e83fff : 0000:05:00.0
      f1e80000-f1e83fff : igb
  f1f00000-f1ffffff : PCI Bus 0000:04
    f1f00000-f1f1ffff : 0000:04:00.1
    f1f20000-f1f3ffff : 0000:04:00.1
      f1f20000-f1f3ffff : e1000e
    f1f40000-f1f5ffff : 0000:04:00.1
      f1f40000-f1f5ffff : e1000e
    f1f60000-f1f7ffff : 0000:04:00.0
    f1f80000-f1f9ffff : 0000:04:00.0
      f1f80000-f1f9ffff : e1000e
    f1fa0000-f1fbffff : 0000:04:00.0
      f1fa0000-f1fbffff : e1000e
  f7fef000-f7feffff : pnp 00:0c
  f7ff0000-f7ff0fff : pnp 00:0c
  f8000000-fbffffff : PCI MMCONFIG 0000 [bus 00-3f]
    f8000000-fbffffff : reserved
      f8000000-fbffffff : pnp 00:0c
fec00000-fec00fff : reserved
  fec00000-fec003ff : IOAPIC 0
fed00000-fed03fff : reserved
  fed00000-fed003ff : HPET 0
fed10000-fed17fff : pnp 00:0c
fed18000-fed18fff : pnp 00:0c
fed19000-fed19fff : pnp 00:0c
fed1c000-fed1ffff : reserved
  fed1c000-fed1ffff : pnp 00:0c
fed20000-fed3ffff : pnp 00:0c
fed40000-fed44fff : pnp 00:00
fed45000-fed8ffff : pnp 00:0c
fed90000-fed93fff : pnp 00:0c
fee00000-feefffff : reserved
  fee00000-feefffff : pnp 00:0c
    fee00000-fee00fff : Local APIC
ff000000-ffffffff : reserved
  ff000000-ffffffff : pnp 00:0c
100000000-23fdfffff : Unusable memory
MemTotal:        1979800 kB
MemFree:         1612432 kB
Buffers:               0 kB
Cached:           275408 kB
SwapCached:            0 kB
Active:            66460 kB
Inactive:         212104 kB
Active(anon):      66460 kB
Inactive(anon):   212104 kB
Active(file):          0 kB
Inactive(file):        0 kB
Unevictable:        4956 kB
Mlocked:            4956 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:          8164 kB
Mapped:             6308 kB
Shmem:            275408 kB
Slab:              66296 kB
SReclaimable:      10688 kB
SUnreclaim:        55608 kB
KernelStack:         632 kB
PageTables:         1120 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      989900 kB
Committed_AS:     307792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      342828 kB
VmallocChunk:   34359391228 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     2097564 kB
DirectMap2M:           0 kB
Waiting for init.late [  OK  ]
+ . /etc/init.d/functions
++ TEXTDOMAIN=initscripts
++ umask 022
++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
++ export PATH
++ '[' -z '' ']'
++ COLUMNS=80
++ '[' -z '' ']'
+++ /sbin/consoletype
++ CONSOLETYPE=serial
++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
++ '[' -z '' ']'
++ '[' -f /etc/sysconfig/init ']'
++ . /etc/sysconfig/init
+++ BOOTUP=color
+++ RES_COL=60
+++ MOVE_TO_COL='echo -en \033[60G'
+++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
+++ SETCOLOR_FAILURE='echo -en \033[0;31m'
+++ SETCOLOR_WARNING='echo -en \033[0;33m'
+++ SETCOLOR_NORMAL='echo -en \033[0;39m'
+++ LOGLEVEL=3
+++ PROMPT=yes
+++ AUTOSWAP=no
+++ ACTIVE_CONSOLES='/dev/tty[1-6]'
++ '[' serial = serial ']'
++ BOOTUP=serial
++ MOVE_TO_COL=
++ SETCOLOR_SUCCESS=
++ SETCOLOR_FAILURE=
++ SETCOLOR_WARNING=
++ SETCOLOR_NORMAL=
++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
+ mkdir -p /mnt/lab
+ echo '192.168.102.1  build'
+ ping -q -c 1 build
PING build.dumpdata.com (192.168.102.1) 56(84) bytes of data.

--- build.dumpdata.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
+ '[' 0 -eq 0 ']'
+ mount build:/srv/tftpboot/lab /mnt/lab -o vers=3
+ iscsiadm -m discovery -t st -p build
192.168.102.1:3260,1 iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6
+ iscsiadm -m node -L all
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6, portal: 192.168.102.1,3260]
[   47.484612] scsi9 : iSCSI Initiator over TCP/IP
[   47.748549] scsi 9:0:0:0: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 5
[   47.762708] sd 9:0:0:0: [sdb] 1G
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6, portal: 192.168.102.1,3260] successful.
[   47.795026] sd 9:0:0:0: [sdb] Write Protect is off
[   47.799813] sd 9:0:0:0: [sdb] Mode Sense: 2f 00 00 00
+ modprobe dm-multipath
[   47.807682] sd 9:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   47.820163] calling  dm_init+0x0/0x48 [dm_mod] @ 4114
[   47.825992] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[   47.834420] initcall dm_init+0x0/0x48 [dm_mod] returned 0 after 8994 usecs
[   47.842507] calling  dm_multipath_init+0x0/0x1000 [dm_multipath] @ 4114
[   47.849344] device-mapper: multipath: version 1.6.0 loaded
[   47.854857] initcall dm_multipath_init+0x0/0x1000 [dm_multipath] returned 0 after 5558 usecs
+ sleep 5
[   47.884507]  sdb: unknown partition table
[   47.895985] sd 9:0:0:0: [sdb] Attached SCSI disk
Jan 22 03:42:10 tst035 iscsid: Connection1:0 to [target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6,
+ pvscan
  Incorrect metadata area header checksum
  PV /dev/sdb    VG guests          lvm2 [931.44 GiB / 452.00 MiB free]
  PV /dev/sda1                      lvm2 [69.24 GiB]
  Total: 2 [1000.68 GiB] / in use: 1 [931.44 GiB] / in no VG: 1 [69.24 GiB]
+ vgchange -aly
  Incorrect metadata area header checksum
[   52.969799] bio: create slab <bio-1> at 1
  32 logical volume(s) in volume group "guests" now active
+ echo 'NFS done'
NFS done
+ xl info
host                   : tst035.dumpdata.com
release                : 3.13.0upstream-02502-gec513b1
version                :      : 2
cpu_mhz                : 3400
hw_caps                : bfebfbff:2c100800:00000000:00007f00:77fafbff:00000000:00000021:00002fbb
virt_caps              : hvm hvm_directio
total_memory           : 8046
free_memory            : 4860
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .2-pre
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Fri Jan 17 16:37:06 2014 +0100 git:7261a3f-dirty
xen_commandline        : dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
cc_compiler            : gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)
cc_compile_by          : konrad
cc_compile_domain      : (none)
cc_compile_date        : Tue Jan 21 14:30:34 EST 2014
xend_config_format     : 4
+ '[' -e /sys/kernel/debug/xen/mmu/p2m ']'
+ cat /sys/kernel/debug/xen/mmu/p2m
 [0x0->0x99] pfn
 [0x99->0x100] identity
 [0x100->0x80067] pfn
 [0x0->0x80200] level entry
 [0x80200->0xa5800] level middle
 [0xa6400->0xa6600] level entry
 [0xa6600->0xb7400] level middle
 [0xa6597->0xb74b4] missing
 [0xb74b4->0xb76cb] identity
 [0xb76cb->0xb770c] missing
 [0xb7400->0xb7800] level entry
 [0xb7800->0xb7e00] level middle
 [0xb770c->0xb7fff] identity
 [0xb7fff->0xb8000] missing
 [0xb7e00->0xb8000] level entry
 [0xb8000->0x100000] identity
 [0xb8000->0x100000] level middle
 [0x100000->0x7cfffff] missing
 [0x100000->0x7cfffff] level top
++ boot_parameter cpuplug
++ local param
++ local value
+++ cat /proc/cmdline
+++ tr -s ' \t' '\n'
+++ tail -1
+++ egrep '^cpuplug=|^cpuplug$'
++ param=
++ '[' -z '' ']'
++ return 1
+ HOTPLUG=
+ '[' 1 -eq 0 ']'
+ TMEM_LOADED=0
+ swapon /dev/xvda
swapon: /dev/xvda: stat failed: No such file or directory
+ '[' 255 -eq 0 ']'
++ boot_parameter run
++ local param
++ local value
+++ cat /proc/cmdline
+++ tr -s ' \t' '\n'
+++ egrep '^run=|^run$'
+++ tail -1
++ param=
++ '[' -z '' ']'
++ return 1
+ HOTPLUG=
+ '[' 1 -eq 0 ']'
Jan 22 03:42:16 tst035 init: starting pid 4314, tty '/dev/tty1': '/bin/sh'


BusyBox v1.14.3 (2014-01-20 09:47:53 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Jan 22 03:42:18 tst035 init: reloading /etc/inittab
Jan 22 03:42:18 tst035 init: starting pid 4318, tty '/dev/hvc0': '/bin/sh'
#

[   56.078481] switch: port 1(eth0) entered forwarding state
[  680.637843] kmemleak: 6 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
Jan 22 04:02:04 tst035 -- MARK --
Jan 22 04:22:04 tst035 -- MARK --
Jan 22 04:42:04 tst035 -- MARK --
Jan 22 05:02:04 tst035 -- MARK --
Jan 22 05:20:12 tst035 sshd[4320]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Jan 22 05:20:12 tst035 sshd[4320]: Accepted publickey for root from 192.168.102.1 port 51799 ssh2
Jan 22 05:20:12 tst035 sshd[4322]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Jan 22 05:21:12 tst035 sshd[4320]: syslogin_perform_logout: logout() returned an error
Jan 22 05:21:12 tst035 sshd[4320]: Rec
Jan 22 05:21:38 tst035 sshd[4338]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Jan 22 05:21:38 tst035 sshd[4338]: reverse mapping checking getaddrinfo for ns.build.dumpdata.com [192.168.102.1] failed - POSSIBLE BREAK-IN ATTEMPT!
Jan 22 05:21:38 tst035 sshd[4338]: Accepted publickey for root from 192.168.102.1 port 51800 ssh2
Jan 22 05:21:38 tst035 sshd[
[ 6032.451880] pci 0000:03:10.0: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.0
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.0
[ 6032.474112] calling  igbvf_init_module+0x0/0x1000 [igbvf] @ 4350
[ 6032.480110] igbvf: Intel(R) Gigabit Virtual Function Network Driver - version 2.0.2-k
[ 6032.487991] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[ 6032.494017] igbvf 0000:03:10.0: enabling device (0000 -> 0002)
[ 6032.501330] igbvf 0000:03:10.0: PF still in reset state. Is the PF interface up?
[ 6032.508714] igbvf 0000:03:10.0: Assigning random MAC address.
[ 6032.515682] igbvf 0000:03:10.0: PF still resetting
[ 6032.521001] pci 0000:03:10.2: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.2
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.2
[ 6032.541643] igbvf 0000:03:10.0: Intel(R) 82576 Virtual Function
[ 6032.547551] igbvf 0000:03:10.0: Address: b6:86:eb:db:15:f6
[ 6032.553719] initcall igbvf_init_module+0x0/0x1000 [igbvf] returned 0 after 71881 usecs
[ 6032.561720] igbvf 0000:03:10.2: enabling device (0000 -> 0002)
[ 6032.569002] igbvf 0000:03:10.2: PF still in reset state. Is the PF interface up?
[ 6032.576382] igbvf 0000:03:10.2: Assigning random MAC address.
[ 6032.583351] igbvf 0000:03:10.2: PF still resetting
[ 6032.613619] igbvf 0000:03:10.2: Intel(R) 82576 Virtual Function
[ 6032.619528] igbvf 0000:03:10.2: Address: 52:d1:5a:32:a8:
[ 6032.628452] pci 0000:03:10.4: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.4
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.4
[ 6032.653556] igbvf 0000:03:10.4: enabling device (0000 -> 0002)
[ 6032.660842] igbvf 0000:03:10.4: PF still in reset state. Is the PF interface up?
[ 6032.668222] igbvf 0000:03:10.4: Assigning random MAC address.
[ 6032.675190] igbvf 0000:03:10.4: PF still resetting
[ 6032.690590] igbvf 0000:03:10.4: Intel(R) 82576 Virtual Function
[ 6032.696504] igbvf 0000:03:10.4: Address: 9a:db:cc:5b:1d:
[ 6032.710504] pci 0000:03:10.6: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.6
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:10.6

[ 6032.749540] igbvf 0000:03:10.6: enabling device (0000 -> 0002)
[ 6032.756842] igbvf 0000:03:10.6: PF still in reset state. Is the PF interface up?
[ 6032.788590] igbvf 0000:03:10.6: Intel(R) 82576 Virtual Function
[ 6032.794495] igbvf 0000:03:10.6: Address: 86:16:04:6b:30:b6
[ 6032.809451] pci 0000:03:11.0: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.0
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.0
[ 6032.847522] igbvf 0000:03:11.0: enabling device (0000 -> 0002)
[ 6032.854821] igbvf 0000:03:11.0: PF still in reset state. Is the PF interface up?
igbvf 0000:03:11.0: Assigning random MAC address.
[ 6032.869203] igbvf 0000:03:11.0: PF still resetting

[ 6032.885594] igbvf 0000:03:11.0: Intel(R) 82576 Virtual Function
[ 6032.891501] igbvf 0000:03:11.0: Address: 96:56:63:e1:3e:
[ 6032.905450] pci 0000:03:11.2: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.2
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.2
[ 6032.930517] igbvf 0000:03:11.2: enabling device (0000 -> 0002)
[ 6032.937821] igbvf 0000:03:11.2: PF still in reset state. Is the PF interface up?
[ 6032.945205] igbvf 0000:03:11.2: Assigning random MAC address.
[ 6032.952201] igbvf 0000:03:11.2: PF still resetting
[ 6032.967772] igbvf 0000:03:11.2: Intel(R) 82576 Virtual Function
[ 6032.973683] igbvf 0000:03:11.2: Address: 2a:bc:5
[ 6032.988450] pci 0000:03:11.4: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.4
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.4
[ 6033.026537] igbvf 0000:03:11.4: enabling device (0000 -> 0002)
[ 6033.033850] igbvf 0000:03:11.4: PF still in reset state. Is the PF interface up?
[ 6033.065273] igbvf 0000:03:11.4: Intel(R) 82576 Virtual Function
[ 6033.071184] igbvf 0000:03:11.4: Address: 32:43:7e:18:4b:b6
[ 6033.082456] igb 0000:02:00.0: 7 VFs allocated
[ 6033.554519] switch: port 1(eth0) entered disabled state
[ 6036.406965] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 6036.414517] switch: port 1(eth0) entered

[ 6051.438490] switch: port 1(eth0) entered forwarding state
Jan 22 05:21:38 tst035 sshd[4340]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Jan 22 05:36:02 tst035 sshd[4338]: syslogin_perform_logout: logout() returned an error
Jan 22 05:36:02 tst035 sshd[4338]: Received disconnect from 192.168.102.1: 11: disconnected by user

--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-xen-4.4.log"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 21 16:32:29 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.082 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 05:37:48] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 05:37:48] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-22 05:37:48] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-22 05:37:48] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 05:37:48] VMX: Supported advanced features:
(XEN) [2014-01-22 05:37:48]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 05:37:48]  - APIC TPR shadow
(XEN) [2014-01-22 05:37:48]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 05:37:48]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 05:37:48]  - Virtual NMI
(XEN) [2014-01-22 05:37:48]  - MSR direct-access bitmap
(XEN) [2014-01-22 05:37:48]  - Unrestricted Guest
(XEN) [2014-01-22 05:37:48]  - VMCS shadowing
(XEN) [2014-01-22 05:37:48] HVM: ASIDs enabled.
(XEN) [2014-01-22 05:37:48] HVM: VMX enabled
(XEN) [2014-01-22 05:37:48] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 05:37:48] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 05:37:48] Brought up 8 CPUs
(XEN) [2014-01-22 05:37:48] ACPI sleep modes: S3
(XEN) [2014-01-22 05:37:48] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa22000
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc00f0
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1cc1000 memsz=0x14d80
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1cd6000 memsz=0x71e000
(XEN) [2014-01-22 05:37:48] elf_parse_binary: memory: 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 05:37:49] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 05:37:49]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 05:37:49]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 05:37:49]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 05:37:49]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 05:37:49]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 05:37:49]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 05:37:49]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 05:37:49] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 05:37:49]  Dom0 alloc.:   000000022c000000->0000000230000000 (487006 pages to be allocated)
(XEN) [2014-01-22 05:37:49]  Init. ramdisk: 000000023abe5000->000000023fd86cda
(XEN) [2014-01-22 05:37:49] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 05:37:49]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 05:37:49]  Init. ramdisk: ffffffff823f4000->ffffffff87595cda
(XEN) [2014-01-22 05:37:49]  Phys-Mach map: ffffffff87596000->ffffffff87996000
(XEN) [2014-01-22 05:37:49]  Start info:    ffffffff87996000->ffffffff879964b4
(XEN) [2014-01-22 05:37:49]  Page tables:   ffffffff87997000->ffffffff879d8000
(XEN) [2014-01-22 05:37:49]  Boot stack:    ffffffff879d8000->ffffffff879d9000
(XEN) [2014-01-22 05:37:49]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-22 05:37:49]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 05:37:49] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:02.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000203000
(XEN) [2014-01-22 05:37:50] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 05:37:50] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 05:37:50] Std. Loglevel: All
(XEN) [2014-01-22 05:37:50] Guest Loglevel: All
(XEN) [2014-01-22 05:37:50] **********************************************
(XEN) [2014-01-22 05:37:50] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 05:37:50] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 05:37:50] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 05:37:50] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 05:37:50] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 05:37:50] **********************************************
(XEN) [2014-01-22 05:37:50] 3... 2... 1...
(XEN) [2014-01-22 05:37:53] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-22 05:37:53] Freed 272kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(05:00.*)(01:00.*)
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x07595fff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.735576] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.735577] Policy zone: DMA32
[    5.735578] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(05:00.*)(01:00.*)
[    5.735883] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.735913] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.756394] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.759486] Memory: 1891300K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 205848K reserved)
[    5.759711] Hierarchical RCU implementation.
[    5.759712] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.759712] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.759720] NR_IRQS:33024 nr_irqs:256 16
[    5.759799] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.759800] xen: registering gsi 9 triggering 0 polarity 0
[    5.759811] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.759833] xen: acpi sci 9
[    5.759836] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.759839] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.759842] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.759844] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.759847] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.759849] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.759852] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.759854] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.759856] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.759859] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.759861] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.759864] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.759866] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.759869] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.761431] Console: colour VGA+ 80x25
[    6.713737] console [hvc0] enabled
[    6.717671] Xen: using vcpuop timer interface
[    6.722014] installing Xen timer for CPU 0
[    6.726195] tsc: Detected 3400.082 MHz processor
[    6.730881] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.16 BogoMIPS (lpj=3400082)
[    6.741515] pid_max: default: 32768 minimum: 301
[    6.746348] Security Framework initialized
[    6.750433] SELinux:  Initializing.
[    6.754007] SELinux:  Starting in permissive mode
[    6.759078] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.766530] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.773691] Mount-cache hash table entries: 256
[    6.778651] Initializing cgroup subsys freezer
[    6.783156] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.783156] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.796260] CPU: Physical Processor ID: 0
[    6.800332] CPU: Processor Core ID: 0
[    6.804756] mce: CPU supports 2 MCE banks
[    6.808768] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.808768] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.808768] tlb_flushall_shift: 6
[    6.845887] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.854531] ACPI: Core revision 20131115
[    6.907489] ACPI: All ACPI Tables successfully acquired
[    6.914255] cpu 0 spinlock event irq 41
[    6.918133] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    6.929132] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.936597] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.942237] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.949682] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.956095] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.964329] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.969964] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.977440] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.983159] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.990700] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.995900] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    7.004740] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    7.012019] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    7.018432] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    7.026665] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    7.032135] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    7.039260] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    7.044052] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    7.050612] calling  init_workqueues+0x0/0x59a @ 1
[    7.055622] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    7.062225] calling  migration_init+0x0/0x71 @ 1
[    7.066904] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    7.073404] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    7.078604] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    7.085623] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    7.091516] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    7.099229] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    7.104467] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    7.111453] calling  cpu_stop_init+0x0/0x76 @ 1
[    7.116067] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    7.122456] calling  relay_init+0x0/0x14 @ 1
[    7.126789] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    7.132942] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    7.138250] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    7.145334] calling  init_events+0x0/0x61 @ 1
[    7.149756] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    7.155994] calling  init_trace_printk+0x0/0x12 @ 1
[    7.160935] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    7.167694] calling  event_trace_memsetup+0x0/0x52 @ 1
[    7.172915] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    7.179915] calling  jump_label_init_module+0x0/0x12 @ 1
[    7.185287] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    7.192482] calling  balloon_clear+0x0/0x4f @ 1
[    7.197075] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    7.203488] calling  rand_initialize+0x0/0x30 @ 1
[    7.208276] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    7.214841] calling  mce_amd_init+0x0/0x165 @ 1
[    7.219433] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.225871] x86: Booted up 1 node, 1 CPUs
[    7.230627] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.237267] devtmpfs: initialized
[    7.243140] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.247488] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.253728] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.258753] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.265600] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.272275] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.280766] calling  net_ns_init+0x0/0x104 @ 1
[    7.285330] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.291612] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.296800] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.304694] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.312858] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.320068] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.324490] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.330728] calling  reboot_init+0x0/0x1d @ 1
[    7.335149] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.341388] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.346242] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.352915] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.358460] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.365827] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.370681] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.377354] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.382048] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.388794] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.395321] calling  ksysfs_init+0x0/0x94 @ 1
[    7.399783] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.405979] calling  pm_init+0x0/0x4e @ 1
[    7.410091] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.415945] calling  pm_disk_init+0x0/0x19 @ 1
[    7.420467] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.426779] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.431805] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.438652] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.444198] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.451565] calling  cgroup_wq_init+0x0/0x32 @ 1
[    7.456249] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    7.462745] calling  event_trace_enable+0x0/0x173 @ 1
[    7.468331] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.475192] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.479783] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.486196] calling  fsnotify_init+0x0/0x26 @ 1
[    7.490792] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.497202] calling  filelock_init+0x0/0x84 @ 1
[    7.501808] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.508209] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.513063] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.519734] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.524761] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.531609] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.536374] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.542961] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.548334] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.555528] calling  debugfs_init+0x0/0x5c @ 1
[    7.560044] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.566360] calling  securityfs_init+0x0/0x53 @ 1
[    7.571136] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.577714] calling  prandom_init+0x0/0xe2 @ 1
[    7.582220] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.588548] calling  virtio_init+0x0/0x30 @ 1
[    7.593068] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.599234] calling  __gnttab_init+0x0/0x30 @ 1
[    7.603828] xen:grant_table: Grant tables using version 2 layout
[    7.609910] Grant table initialized
[    7.613445] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.620120] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.625172] RTC time:  5:37:54, date: 01/22/14
[    7.629652] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.636671] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.641612] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.648545] calling  cpuidle_init+0x0/0x40 @ 1
[    7.653051] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.659551] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.664491] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.671251] calling  sock_init+0x0/0x8b @ 1
[    7.675598] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.681591] calling  net_inuse_init+0x0/0x26 @ 1
[    7.686272] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.692769] calling  netpoll_init+0x0/0x31 @ 1
[    7.697275] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.703601] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.708754] NET: Registered protocol family 16
[    7.713248] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.720340] calling  bdi_class_init+0x0/0x4d @ 1
[    7.725127] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.731556] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.736682] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.743599] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.748603] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.755298] calling  pci_driver_init+0x0/0x12 @ 1
[    7.760159] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.766678] calling  backlight_class_init+0x0/0x85 @ 1
[    7.771937] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.778900] calling  video_output_class_init+0x0/0x19 @ 1
[    7.784422] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.791637] calling  xenbus_init+0x0/0x26f @ 1
[    7.796236] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.802492] calling  tty_class_init+0x0/0x38 @ 1
[    7.807237] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.813668] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.819039] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.825983] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.831797] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.839418] calling  register_node_type+0x0/0x34 @ 1
[    7.844574] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.851348] calling  i2c_init+0x0/0x70 @ 1
[    7.855675] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.861584] calling  init_ladder+0x0/0x12 @ 1
[    7.866004] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.872416] calling  init_menu+0x0/0x12 @ 1
[    7.876662] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.882903] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.887929] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.894787] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.900340] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.907688] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.912831] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.919735] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.924242] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.930741] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.935512] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.942095] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.947296] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.954314] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.958908] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.966533] ACPI: bus type PCI registered
[    7.970606] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.977133] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.983807] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.988436] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.995142] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    8.001628] calling  dma_channel_table_init+0x0/0xde @ 1
[    8.007014] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 =
usecs
[    8.014191] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    8.019738] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after =
0 usecs
[    8.027104] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    8.032738] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    8.040191] calling  xen_pcpu_init+0x0/0xcc @ 1
[    8.045627] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    8.051976] calling  dmi_id_init+0x0/0x31d @ 1
[    8.056728] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    8.062984] calling  dca_init+0x0/0x20 @ 1
[    8.067141] dca service started, version 1.12.1
[    8.071795] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.077890] calling  iommu_init+0x0/0x58 @ 1
[    8.082231] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.088376] calling  pci_arch_init+0x0/0x69 @ 1
[    8.092985] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.102328] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.117100] PCI: Using configuration type 1 for base access
[    8.122662] initcall pci_arch_init+0x0/0x69 returned 0 after0x0/0x98 @ 1
[    8.134351] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.140721] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.145816] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.152744] calling  init_vdso+0x0/0x135 @ 1
[    8.157078] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.163229] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.167997] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.174582] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.195670] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.202707] calling  pm_sysrq_init+0x0/0x19er 0 usecs
[    8.213715] calling  default_bdi_init+0x0/0x65 @ 1
[    8.218877] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.225485] calling  init_bio+0x0/0xe9 @ 1
[    8.229699] bio: create slab <bio-0> at 0
[    8.233763] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.239868] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    8.245611] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    8.253128] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.257806] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.264307] calling  blk_settings_init+0x0/0x2c @ 1
[    8.269245] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.276008] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.280522] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.286838] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.291690] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.298364] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.303216] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.309889] calling  blk_mq_init+0x0/0x5f @ 1
[    8.314310] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.320549] calling  genhd_device_init+0x0/0x85 @ 1
[    8.325617] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.332305] calling  pci_slot_init+0x0/0x50 @ 1
[    8.336903] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.343308] calling  fbmem_init+0x0/0x98 @ 1
[    8.347712] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.353796] calling  acpi_init+0x0/0x27a @ 1
[    8.358156] ACPI: Added _OSI(Module Device)
[    8.362375] ACPI: Added _OSI(Processor Device)
[    8.366880] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.371648] ACPI: Added _OSI(Processor Aggregator Device)
[    8.380862] ACPI: Executed 1 blocks of module-level executable AML code
[    8.412834] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.420693] \_SB_:_OSC invalid UUID
[    8.424175] _OSC request data:1 1f
[    8.429826] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.439048] ACPI: Dynamic OEM Table Load:
[    8.443048] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.452878] ACPI: Interpreter enabled
[    8.456552] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.465812] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.475095] ACPI: (supports S0 S1 S4 S5)
[    8.479067] ACPI: Using IOAPIC for interrupt routing
[    8.484468] HEST: Table parsing has been initialized.
[    8.489519] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.499898] ACPI: No dock devices found.
[    8.601026] ACPI: Power Resource [FN00] (off)
[    8.606175] ACPI: Power Resource [FN01] (off)
[    8.611331] ACPI: Power Resource [FN02] (off)
[    8.616459] ACPI: Power Resource [FN03] (off)
[    8.621605] ACPI: Power Resource [FN04] (off)
[    8.631332] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.637508] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    8.648230] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.657220] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.670762] PCI host bridge to bus 0000:00
[    8.674854] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.680402] p]
[    8.692891] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.699818] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.706754] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.713681] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.720612] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.727546] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.734490] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:00.0
[    8.746060] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.752218] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.758836] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:01.0
[    8.769671] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.775735] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:01.1
[    8.787403] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.793421] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.800257] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.807532] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:02.0
[    8.818696] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.824715] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:03.0
[    8.837133] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.843194] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.850122] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.856352] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:14.0
[    8.867211] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.873250] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.880190] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:16.0
[    8.891830] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.897868] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.904163] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.910490] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.916247] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.922737] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:19.0
[    8.933588] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.939622] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.946076] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.952655] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:1a.0
[    8.963521] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.969549] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.976540] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.983032] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:1b.0
[    8.993884] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    9.000048] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    9.006537] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.0
[    9.017394] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    9.023562] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    9.030049] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.3
[    9.040901] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    9.047063] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    9.053554] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.5
[    9.064405] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    9.070567] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.077055] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.6
[    9.087897] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.094062] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.100551] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.7
[    9.111410] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.117449] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    9.123902] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.130477] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1d.0
[    9.141328] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.0
[    9.153011] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.159049] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.164650] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.170283] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.175916] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.181548] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.187184] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    9.193591] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.2
[    9.204358] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.210390] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    9.217239] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.3
[    9.228370] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.234407] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.6
[    9.247096] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.257892] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    9.263940] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    9.269576] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    9.276420] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    9.283271] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    9.290069] pci 0000:01:00.0: supports D1 D2
[    9.294460] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:01:00.0
[    9.307438] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.312653] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    9.318805] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    9.325654] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.332524] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.343321] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.349369] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.355690] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.362016] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.367648] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.373995] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.380784] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.386912] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.393827] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:02:00.0
[    9.406030] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.412038] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.418355] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.424682] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.430316] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.436663] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.443452] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.449580] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.456494] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:02:00.1
[    9.470790] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.476008] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.482159] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.489008] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.496042] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.506864] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.512908] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.519222] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.525550] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.531265] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.538094] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.544323] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:04:00.0
[    9.555239] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.561270] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.567583] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.573907] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.579626] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.586451] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:04:00.1
[    9.606505] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.611728] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.617878] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.624729] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.631758] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.642612] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.648644] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.654980] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.660593] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.667094] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.673331] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:05:00.0
[    9.686342] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.691563] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.697714] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.704575] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.711644] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.722463] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.728704] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.735485] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:06:00.0
[    9.746391] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.751618] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.758478] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.766974] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.773166] pci 0000:07:01.0: supports D1 D2
[    9.777425] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 05:37:57] PCI add device 0000:07:01.0
[    9.789259] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.795292] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.801597] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.808082] pci 0000:07:03.0: supports D1 D2
[    9.812340] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:07:03.0
[    9.830256] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.837319] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.844161] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.852730] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.861396] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.869974] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.878558] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.886956] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.893004] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:08.0
[    9.911698] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.917746] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:08.1
[    9.936452] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.942501] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:09.0
[    9.961196] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.967248] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:09.1
[    9.985995] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.992046] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0a.0
[   10.010728] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[   10.016786] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0a.1
[   10.035489] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[   10.041540] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0b.0
[   10.060223] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[   10.066271] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0b.1
[   10.085001] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.090230] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   10.097065] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.103738] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.110410] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.117453] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.128345] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.134455] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[   10.141625] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.147916] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:58] PCI add device 0000:09:00.0
[   10.161024] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.166245] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   10.173089] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.180118] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.190922] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.196971] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[   10.202599] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[   10.208231] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[   10.213866] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[   10.219498] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[   10.225130] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[   10.231667] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:0a:00.0
[   10.251184] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.256408] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   10.262559] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   10.269408] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.276172] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.287942] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.295254] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.302567] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.309882] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.317195] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.324505] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.332945] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.340257] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.348683] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.353475] ACPI: \_SB_.PCI0: notify handler is installed
[   10.358959] Found 1 acpi root devices
[   10.362758] initcall acpi_init+0x0/0x27a returned 0 after 453125 usecs
[   10.369275] calling  pnp_init+0x0/0x12 @ 1
[   10.373528] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.379438] calling  balloon_init+0x0/0x242 @ 1
[   10.384033] xen:balloon: Initialising balloon driver
[   10.389058] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.395646] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.401192] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.408559] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.414283] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.421662] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.427499] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.434965] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.439981] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.446666] calling  balloon_init+0x0/0xfa @ 1
[   10.451170] xen_balloon: Initialising balloon driver
[   10.456586] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.463020] calling  misc_init+0x0/0xba @ 1
[   10.467335] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.473331] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.478584] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.486663] vgaarb: loaded
[   10.489432] vgaarb: bridge control possible 0000:00:02.0
[   10.494807] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.502000] calling  cn_init+0x0/0xc0 @ 1
[   10.506092] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.511966] calling  dma_buf_init+0x0/0x75 @ 1
[   10.516485] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.522800] calling  phy_init+0x0/0x2e @ 1
[   10.527186] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.533097] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.537830] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.544275] calling  usb_init+0x0/0x169 @ 1
[   10.548534] ACPI: bus type USB registered
[   10.552792] usbcore: registered new interface driver usbfs
[   10.558364] usbcore: registered new interface driver hub
[   10.563757] usbcore: registered new device driver usb
[   10.568805] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.575129] calling  serio_init+0x0/0x31 @ 1
[   10.579579] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.585657] calling  input_init+0x0/0x103 @ 1
[   10.590147] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.596320] calling  rtc_init+0x0/0x5b @ 1
[   10.600551] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.606460] calling  pps_init+0x0/0xb7 @ 1
[   10.610681] pps_core: LinuxPPS API ver. 1 registered
[   10.615645] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.624830] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.631069] calling  ptp_init+0x0/0xa4 @ 1
[   10.635287] PTP clock support registered
[   10.639217] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.645367] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.650885] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.658113] calling  hwmon_init+0x0/0xe3 @ 1
[   10.662505] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.668598] calling  leds_init+0x0/0x40 @ 1
[   10.672902] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.678911] calling  efisubsys_init+0x0/0x142 @ 1
[   10.683676] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.690262] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.695026] PCI: Using ACPI for IRQ routing
[   10.702721] PCI: pci_cache_line_size set to 64 bytes
[   10.707879] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.713868] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   10.719938] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   10.726782] calling  proto_init+0x0/0x12 @ 1
[   10.731118] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.737266] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.742481] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.748823] calling  neigh_init+0x0/0x80 @ 1
[   10.753153] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.759307] calling  fib_rules_init+0x0/0xaf @ 1
[   10.763986] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.770485] calling  pktsched_init+0x0/0x10a @ 1
[   10.775171] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.781666] calling  tc_filter_init+0x0/0x55 @ 1
[   10.786345] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.792844] calling  tc_action_init+0x0/0x55 @ 1
[   10.797523] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.804024] calling  genl_init+0x0/0x85 @ 1
[   10.808287] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.814338] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.818932] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.825345] calling  netlbl_init+0x0/0x81 @ 1
[   10.829762] NetLabel: Initializing
[   10.833231] NetLabel:  domain hash size = 128
[   10.837649] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.842715] NetLabel:  unlabeled traffic allowed by default
[   10.848310] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.854810] calling  rfkill_init+0x0/0x79 @ 1
[   10.859406] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.865579] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.870172] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.876597] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.881366] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.887934] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.893269] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.900326] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.905446] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.912373] calling  hpet_late_init+0x0/0x101 @ 1
[   10.917141] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.923899] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.928409] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.934732] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.940285] Switched to clocksource xen
[   10.944185] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3812 usecs
[   10.951808] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.957293] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 280 usecs
[   10.964417] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.970748] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.978888] calling  event_trace_init+0x0/0x205 @ 1
[   10.998168] initcall event_trace_init+0x0/0x205 returned 0 after 13974 usecs
[   11.005201] calling  init_kprobe_trace+0x0/kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   11.016993] calling  init_pipe_fs+0x0/0x4c @ 1
[   11.021538] initcall init_pipe_fs+0x0/0x4c returned 0 after 45 usecs
[   11.027907] calling  eventpoll_init+0x0/0xda @ 1
[   11.032611] initcall eventpoll_init+0x0/0xda returned 0 after 25 usecs
[   11.039171] calling  anon_inode_init+0x0/0x5b @ 1
[   11.043977] initcall anon_inode_init+0x0/0x5b returned 0 after 37 usecs
[   11.050609] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.055810] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.062831] calling  acpi_event_init+0x0/0x3a @ 1
[   11.067614] initcall acpi_event_init+0x0/0x3a returned 0 after 16 usecs
[   11.074270] calling  pnp_system_init+0x0/0x12 @ 1
[   11.079134] initcall pnp_system_init+0x0/0x12 returned 0 after 92 usecs
[   11.085749] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.090243] pnp: PnP ACPI init
[   11.093385] ACPI: bus type PNP registered
[   11.097759] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.104365] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.111254] pnp 00:01: [dma 4]
[   11.114469] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.121157] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.128212] kworker/u2:0 (512) used greatest stack depth: 5560 bytes left
[   11.134997] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.142596] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.148507] system 00:04: [io  0xffff] has been reserved
[   11.153877] system 00:04: [io  0xffff] has been reserved
[   11.159250] system 00:04: [io  0xffff] has been reserved
[   11.164624] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.170603] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.176583] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.182563] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.188543] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.194521] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.200849] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.206825] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.213704] xen: registering gsi 8 triggering 1 polarity 0
[   11.219395] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.226285] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.232196] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.240559] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.246471] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.252444] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.260665] xen: registering gsi 4 triggering 1 polarity 0
[   11.266139] Already setup the GSI :4
[   11.269782] pnp 00:08: [dma 0 disabled]
[   11.273885] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.281631] xen: registering gsi 3 triggering 1 polarity 0
[   11.287127] pnp 00:09: [dma 0 disabled]
[   11.291237] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.298079] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.303989] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.310864] xen: registering gsi 13 triggering 1 polarity 0
[   11.316654] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.326293] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.332906] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.339576] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.346248] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.352920] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.359595] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.366267] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.372940] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.379612] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.386285] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.392959] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.399633] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.406300] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.415205] pnp: PnP ACPI: found 13 devices
[   11.419379] ACPI: bus type PNP unregistered
[   11.423626] initcall pnpacpi_init+0x0/0x8c returned 0 after 325568 usecs
[   11.430384] calling  pcistub_init+0x0/0x29f @ 1
[   11.435316] pciback 0000:01:00.0: seizing device
[   11.439998] pciback 0000:05:00.0: seizing device
[   11.444934] initcall pcistub_init+0x0/0x29f returned 0 after 9723 usecs
[   11.451540] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.465135] initcall chr_dev_init+0x0/0xc6 returned 0 after 8883 usecs
[   11.471651] calling  firmware_class_init+0x0/0xec @ 1
[   11.495521] calling  thermal_init+0x0/0x8b @ 1
[   11.500115] initcall thermal_init+0x0/0x8b returned 0 after 92 usecs
[   11.506466] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.512352] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.520238] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.528929] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.535353] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9340 usecs
[   11.543152] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.548809] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.553760] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.559915] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.566775] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.573705] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.580636] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.587569] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.594503] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.601436] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.608367] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.615303] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.622233] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.629169] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.636100] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.643026] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.650409] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.657324] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.664793] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.671710] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.679093] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.686010] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.693470] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.698749] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.704905] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.711755] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.716779] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.722936] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.729787] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.734803] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.740960] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.747812] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.752836] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.759692] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.764968] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.771821] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.777100] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.783954] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.788975] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.795827] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.800841] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.807001] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.813855] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.819473] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.825107] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.831432] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.837758] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.844086] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.850411] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.856826] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.863238] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.868874] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.875199] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.880830] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.887158] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.892791] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.899118] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.904751] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.911076] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.917403] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.923730] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.930058] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.936384] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.942711] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.948343] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.954670] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396454 usecs
[   11.962470] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.967337] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.974082] calling  inet_init+0x0/0x296 @ 1
[   11.978488] NET: Registered protocol family 2
[   11.983177] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.990427] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.997075] TCP: Hash tables configured (established 16384 bind 16384)
[   12.003663] TCP: reno registered
[   12.006949] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   12.013015] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   12.019625] initcall inet_init+0x0/0x296 returned 0 after 40241 usecs
[   12.026056] calling  ipv4_offload_init+0x0/0x61 @ 1
[   12.030995] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   12.037754] calling  af_unix_init+0x0/0x55 @ 1
[   12.042271] NET: Registered protocol family 1
[   12.046693] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs
[   12.053268] calling  ipv6_offload_init+0x0/0x7f @ 1
[   12.058209] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   12.064967] calling  init_sunrpc+0x0/0x69 @ 1
[   12.069583] RPC: Registered named UNIX socket transport module.
[   12.075492] RPC: Registered udp transport module.
[   12.080256] RPC: Registered tcp transport module.
[   12.085022] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.091522] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.098107] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.103574] pci 0000:00:02.0: Boot video device
[   12.108664] xen: registering gsi 16 triggering 0 polarity 1
[   12.114241] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.118882] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.126621] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.134527] xen: registering gsi 16 triggering 0 polarity 1
[   12.140089] Already setup the GSI :16
[   12.160320] xen: registering gsi 23 triggering 0 polarity 1
[   12.165895] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.187545] xen: registering gsi 18 triggering 0 polarity 1
[   12.193131] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.19773] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 105374 usecs
[   12.219168] calling  populate_rootfs+0x0/0x112 @ 1
[   12.224157] Unpacking initramfs...
[   13.311888] Freeing initrd memory: 83592K (ffff8800023f4000 - ffff880007596000)
[   13.319191] initcall populate_rootfs+0x0/0x112 returned 0 after 1069500 usecs
[   13.326378] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.331058] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.337558] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.343191] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.350834] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.356797] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.364597] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.369363] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.375950] calling  vsyscall_init+0x0/0x27 @ 1
[   13.380546] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.386957] calling  sbf_init+0x0/0xf6 @ 1
[   13.391117] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.397096] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.402296] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   13.409316] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.413826] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.420148] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.424914] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.431501] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.436604] initcall cache_sysfs_init+0x0/0x65 returned 0 after 242 usecs
[   13.443375] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.448227] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.455074] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.460101] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.467119] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.472148] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.479168] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.483865] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.494421] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10325 usecs
[   13.501268] calling  inject_init+0x0/0x30 @ 1
[   13.505685] Machine check injector initialized
[   13.510194] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.516693] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.522586] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.530298] calling  microcode_init+0x0/0x1b1 @ 1
[   13.535249] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.541372] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.550138] initcall microcode_init+0x0/0x1b1 returned 0 after 14718 usecs
[   13.557071] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.561663] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.568251] calling  msr_init+0x0/0x162 @ 1
[   13.572720] initcall msr_init+0x0/0x162 returned 0 after 217 usecs
[   13.578887] calling  cpuid_init+0x0/0x162 @ 1
[   13.583505] initcall cpuid_init+0x0/0x162 returned 0 after 196 usecs
[   13.589848] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.594612] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.601199] calling  add_pcspkr+0x0/0x40 @ 1
[   13.605638] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.611901] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.618397] Scanning for low memory corruption every 60 seconds
[   13.624373] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5835 usecs
[   13.632954] calling  sysfb_init+0x0/0x9c @ 1
[   13.637395] initcall sysfb_init+0x0/0x9c returned 0 after 106 usecs
[   13.643652] calling  audit_classes_init+0x0/0xaf @ 1
[   13.648690] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.655610] calling  pt_dump_init+0x0/0x30 @ 1
[   13.660126] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.666444] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.671303] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.677969] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.683262] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.690360] calling  ioresources_init+0x0/0x3c @ 1
[   13.695220] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.701888] calling  uid_cache_init+0x0/0x85 @ 1
[   13.706583] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.713155] calling  init_posix_timers+0x0/0x240 @ 1
[   13.718196] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.725112] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.730400] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.737505] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.742622] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.749551] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.754874] initcall snapshot_device_init+0x0/0x12 returned 0 after 118 usecs
[   13.761998] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.766763] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.773351] calling  create_proc_profile+0x0/0x300 @ 1
[   13.778550] initcall create_proc_profile+0x0/0x300 returned 0 after 0 us=
ecs
[   13.785571] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.790769] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 us=
ecs
[   13.797789] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.803389] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 221 usecs
[   13.810689] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.816065] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.823252] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.828299] initcall alarmtimer_init+0x0/0x15f returned 0 after 188 usecs
[   13.835077] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.840742] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 287 usecs
[   13.848039] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.853071] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.859914] calling  futex_init+0x0/0xf6 @ 1
[   13.864261] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.870402] initcall futex_init+0x0/0xf6 returned 0 after 6011 usecs
[   13.876810] calling  proc_dma_init+0x0/0x22 @ 1
[   13.881408] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.887817] calling  proc_modules_init+0x0/0x22 @ 1
[   13.892760] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.899516] calling  kallsyms_init+0x0/0x25 @ 1
[   13.904113] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.910522] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.916341] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.924042] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.929505] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.936781] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.941906] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.948915] calling  ikconfig_init+0x0/0x3c @ 1
[   13.953511] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   13.959921] calling  audit_init+0x0/0x141 @ 1
[   13.964341] audit: initializing netlink socket (disabled)
[   13.969823] type=2000 audit(1390369078.681:1): initialized
[   13.975349] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.981935] calling  audit_watch_init+0x0/0x3a @ 1
[   13.986814] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.993486] calling  audit_tree_init+0x0/0x49 @ 1
[   13.998254] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   14.004840] calling  init_kprobes+0x0/0x16c @ 1
[   14.019432] initcall init_kprobes+0x0/0x16c returned 0 after 9765 usecs
[   14.026028] calling  hung_task_init+0x0/0x56 @ 1
[   14.054202] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   14.060875] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.065641] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.072221] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.077941] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.085479] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.091292] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 512 usecs
[   14.098504] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   14.104028] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   14.111330] calling  kswapd_init+0x0/0x76 @ 1
[   14.115796] initcall kswapd_init+0x0/0x76 returned 0 after 48 usecs
[   14.122072] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.127118] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   14.134030] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.138551] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.144950] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.149553] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.156044] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.161330] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.168434] calling  slab_proc_init+0x0/0x25 @ 1
[   14.173120] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.179614] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.184903] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.192008] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.197034] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.203880] calling  init_user_reserve+0x0/0x40 @ 1
[   14.208821] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.215581] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.220524] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.227281] calling  procswaps_init+0x0/0x22 @ 1
[   14.231963] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.238461] calling  init_frontswap+0x0/0x96 @ 1
[   14.243169] initcall init_frontswap+0x0/0x96 returned 0 after 27 usecs
[   14.249726] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.254320] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.260815] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6343 usecs
[   14.267414] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.272358] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.279113] calling  slab_proc_init+0x0/0x8 @ 1
[   14.283709] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.290120] calling  cpucache_init+0x0/0x4b @ 1
[   14.294716] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.301128] calling  hugepage_init+0x0/0x145 @ 1
[   14.305807] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.312482] calling  init_cleancache+0x0/0xbc @ 1
[   14.317277] initcall init_cleancache+0x0/0xbc returned 0 after 28 usecs
[   14.323921] calling  fcntl_init+0x0/0x2a @ 1
[   14.328265] initcall fcntl_init+0x0/0x2a returned 0 after 12 usecs
[   14.334496] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.339783] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.346887] calling  dio_init+0x0/0x2d @ 1
[   14.351059] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.357114] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.362167] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 26 usecs
[   14.369077] calling  dnotify_init+0x0/0x7b @ 1
[   14.373607] initcall dnotify_init+0x0/0x7b returned 0 after 24 usecs
[   14.379993] calling  inotify_user_setup+0x0/0x70 @ 1
[   14.385039] initcall inotify_user_setup+0x0/0x70 returned 0 after 18 usecs
[   14.391955] calling  aio_setup+0x0/0x7d @ 1
[   14.396256] initcall aio_setup+0x0/0x7d returned 0 after 55 usecs
[   14.402355] calling  proc_locks_init+0x0/0x22 @ 1
[   14.407123] initcall proc_locks_init+0x0/0x22 returned 0 after 4 usecs
[   14.413708] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.418605] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 43 usecs
[   14.425320] calling  dquot_init+0x0/0x121 @ 1
[   14.429741] VFS: Disk quotas dquot_6.5.2
[   14.433761] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.440227] initcall dquot_init+0x0/0x121 returned 0 after 10240 usecs
[   14.446811] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.452013] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.459031] calling  quota_init+0x0/0x31 @ 1
[   14.463384] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.469603] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.474549] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.481303] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.486334] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.493176] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.498121] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.504877] calling  proc_devices_init+0x0/0x22 @ 1
[   14.509820] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.516576] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.521779] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.528796] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.533738] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.540495] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.545439] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.552194] calling  proc_stat_init+0x0/0x22 @ 1
[   14.556879] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.563374] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.568232] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.574901] calling  proc_version_init+0x0/0x22 @ 1
[   14.579845] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.586600] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.591631] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.598476] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.603253] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.609915] calling  vmcore_init+0x0/0x5cb @ 1
[   14.614420] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.620746] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.625430] initcall proc_kmsg_init+0x0/0x25 returned 0 after 4 usecs
[   14.631925] calling  proc_page_init+0x0/0x42 @ 1
[   14.636611] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.643105] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.647832] initcall init_devpts_fs+0x0/0x62 returned 0 after 45 usecs
[   14.654372] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.658975] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   14.665379] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.670474] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 68 usecs
[   14.677339] calling  init_fat_fs+0x0/0x4f @ 1
[   14.681779] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.688084] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.692592] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.698917] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.703512] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.709925] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.714717] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   14.721364] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.726064] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   14.732496] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.736913] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.743151] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.747572] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.753813] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.758232] NFS: Registering the id_resolver key type
[   14.763358] Key type id_resolver registered
[   14.767590] Key type id_legacy registered
[   14.771669] initcall init_nfs_v4+0x0/0x3b returned 0 after 13121 usecs
[   14.778251] calling  init_nlm+0x0/0x4c @ 1
[   14.782419] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.788391] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.793071] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.799569] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.804249] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.810749] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.815778] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.822623] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.827217] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.833629] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.838222] NTFS driver 2.1.30 [Flags: R/W].
[   14.842606] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4280 usecs
[   14.849231] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.854127] initcall init_autofs4_fs+0x0/0x2a returned 0 after 127 usecs
[   14.860826] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.865507] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.872083] calling  ipc_init+0x0/0x2f @ 1
[   14.876248] msgmni has been set to 3857
[   14.880152] initcall ipc_init+0x0/0x2f returned 0 after 3818 usecs
[   14.886381] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.891156] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.897734] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.902476] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.909002] calling  key_proc_init+0x0/0x5e @ 1
[   14.913603] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.920010] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.925036] SELinux:  Registering netfilter hooks
[   14.929938] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4786 usecs
[   14.936969] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.941738] initcall init_sel_fs+0x0/0xa5 returned 0 after 342 usecs
[   14.948080] calling  selnl_init+0x0/0x56 @ 1
[   14.952424] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.958652] calling  sel_netif_init+0x0/0x5c @ 1
[   14.963334] initcall sel_netif_init+0x0/0x5c returned 0 after 2 usecs
[   14.969831] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.974687] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.981359] calling  sel_netport_init+0x0/0x6a @ 1
[   14.986239] initcall sel_netport_init+0x0/0x6a returned 0 after 2 usecs
[   14.992912] calling  aurule_init+0x0/0x2d @ 1
[   14.997330] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   15.003569] calling  crypto_wq_init+0x0/0x33 @ 1
[   15.008281] initcall crypto_wq_init+0x0/0x33 returned 0 after 31 usecs
[   15.014837] calling  crypto_algapi_init+0x0/0xd @ 1
[   15.019782] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   15.026536] calling  chainiv_module_init+0x0/0x12 @ 1
[   15.031650] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.038582] calling  eseqiv_module_init+0x0/0x12 @ 1
[   15.043608] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.050455] calling  hmac_module_init+0x0/0x12 @ 1
[   15.055308] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.061983] calling  md5_mod_init+0x0/0x12 @ 1
[   15.066519] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   15.072903] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   15.078214] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.085383] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   15.090753] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.097947] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.103198] initcall des_generic_mod_init+0x0/0x17 returned 0 after 48 usecs
[   15.110254] calling  aes_init+0x0/0x12 @ 1
[   15.114440] initcall aes_init+0x0/0x12 returned 0 after 26 usecs
[   15.120481] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.125102] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.131574] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.137297] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.144835] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.150899] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.158787] calling  krng_mod_init+0x0/0x12 @ 1
[   15.163408] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.169881] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.174656] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.181233] calling  bsg_init+0x0/0x12e @ 1
[   15.185555] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.192935] initcall bsg_init+0x0/0x12e returned 0 after 7281 usecs
[   15.199256] calling  noop_init+0x0/0x12 @ 1
[   15.203504] io scheduler noop registered
[   15.207493] initcall noop_init+0x0/0x12 returned 0 after 3895 usecs
[   15.213818] calling  deadline_init+0x0/0x12 @ 1
[   15.218410] io scheduler deadline registered
[   15.222743] initcall deadline_init+0x0/0x12 returned 0 after 4232 usecs
[   15.229418] calling  cfq_init+0x0/0x8b @ 1
[   15.233602] io scheduler cfq registered (default)
[   15.238344] initcall cfq_init+0x0/0x8b returned 0 after 4654 usecs
[   15.244583] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.249957] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.257151] calling  pci_proc_init+0x0/0x6a @ 1
[   15.261928] initcall pci_proc_init+0x0/0x6a returned 0 after 181 usecs
[   15.268444] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.274091] xen: registering gsi 16 triggering 0 polarity 1
[   15.279656] Already setup the GSI :16
[   15.284175] xen: registering gsi 16 triggering 0 polarity 1
[   15.289741] Already setup the GSI :16
[   15.294249] xen: registering gsi 16 triggering 0 polarity 1
[   15.299814] Already setup the GSI :16
[   15.304173] xen: registering gsi 19 triggering 0 polarity 1
[   15.309758] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.314981] xen: registering gsi 17 triggering 0 polarity 1
[   15.320559] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.325870] xen: registering gsi 19 triggering 0 polarity 1
[   15.331433] Already setup the GSI :19
[   15.335339] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60504 usecs
[   15.342376] calling  aer_service_init+0x0/0x2b @ 1
[   15.347303] initcall aer_service_init+0x0/0x2b returned 0 after 71 usecs
[   15.353990] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.358841] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.364474] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5500 usecs
[   15.371409] calling  pcied_init+0x0/0x79 @ 1
[   15.375941] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.382549] initcall pcied_init+0x0/0x79 returned 0 after 6647 usecs
[   15.388962] calling  pcifront_init+0x0/0x3f @ 1
[   15.393552] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.400138] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.405535] initcall genericbl_driver_init+0x0/0x14 returned 0 after 108 usecs
[   15.412741] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.417423] initcall cirrusfb_init+0x0/0xcc returned 0 after 88 usecs
[   15.423850] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.428862] initcall efifb_driver_init+0x0/0x14 returned 0 after 69 usecs
[   15.435641] calling  intel_idle_init+0x0/0x331 @ 1
[   15.440492] intel_idle: MWAIT substates: 0x42120
[   15.445171] intel_idle: v0.4 model 0x3C
[   15.449070] intel_idle: lapic_timer_reliable_states 0xffffffff
[   15.454967] intel_idle: intel_idle yielding to none
[   15.459642] initcall intel_idle_init+0x0/0x331 returned -19 after 18700 usecs
[   15.467097] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.472475] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   15.479661] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.484241] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs
[   15.490591] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.496323] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.504488] ACPI: Power Button [PWRB]
[   15.508470] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.515855] ACPI: Power Button [PWRF]
[   15.519651] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23055 usecs
[   15.527207] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.532644] ACPI: Fan [FAN0] (off)
[   15.536270] ACPI: Fan [FAN1] (off)
[   15.539884] ACPI: Fan [FAN2] (off)
[   15.543491] ACPI: Fan [FAN3] (off)
[   15.547094] ACPI: Fan [FAN4] (off)
[   15.550554] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17723 usecs
[   15.557855] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.575878] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.584300] ACPI Error: Metning: Processor Platform Limit not supported.
[   15.616759] initcall acpi_processor_driver_init+0x0/0x43 returned 0 after 51940 usecs
[   15.624649] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.632791] thermal LNXTHERM:00: registered as thermal_zone0
[   15.638441] ACPI: Thermal Zone [TZ00] (28 C)
[   15.64489al Zone [TZ01] (30 C)
[   15.655211] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25021 usecs
[   15.662253] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.667196] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.673947] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.679190] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.684913] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5631 usecs
[   15.692120] calling  erst_init+0x0/0x2fc @ 1
[   15.696495] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.703999] pstore: Registered erst as persistent store backend
[   15.709972] initcall erst_init+0x0/0x2fc returned 0 after 13203 usecs
[   15.716472] calling  ghes_init+0x0/0x173 @ 1
[   15.720962] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35333 usecs
[   15.729398] \_SB_:_OSC request failed
[   15.733054] _OSC request data:1 1 0
[   15.736690] \_SB_:_OSC invalid UUID
[   15.740245] _OSC request data:1 1 0
[   15.743882] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.750125] initcall ghes_init+0x0/0x173 returned 0 after 28632 usecs
[   15.756624] calling  einj_init+0x0/0x522 @ 1
[   15.761023] EINJ: Error INJection is initialized.
[   15.765725] initcall einj_init+0x0/0x522 returned 0 after 4654 usecs
[   15.772137] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.776988] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.783031] initcall ioat_init_module+0x0/0xb1 returned 0 after 5900 usecs
[   15.789913] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.794820] initcall virtio_mmio_init+0x0/0x14 returned 0 after 70 usecs
[   15.801509] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.807297] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 67 usecs
[   15.814855] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.820140] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.827246] calling  xenbus_init+0x0/0x3d @ 1
[   15.831802] initcall xenbus_init+0x0/0x3d returned 0 after 131 usecs
[   15.838143] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.843376] initcall xenbus_backend_init+0x0/0x51 returned 0 after 119 usecs
[   15.850413] calling  gntdev_init+0x0/0x4d @ 1
[   15.854986] initcall gntdev_init+0x0/0x4d returned 0 after 150 usecs
[   15.861329] calling  gntalloc_init+0x0/0x3d @ 1
[   15.866052] initcall gntalloc_init+0x0/0x3d returned 0 after 129 usecs
[   15.872568] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.877941] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.885131] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.890135] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 63 usecs
[   15.896918] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.902554] initcall platform_pci_module_init+0x0/0x1b returned 0 after 88 usecs
[   15.909936] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.915330] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 usecs
[   15.922450] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.927480] xen: registering gsi 19 triggering 0 polarity 1
[   15.933039] Already setup the GSI :19
(XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 05:38:00] CPU:    0
(XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0   rcx: 0000000000000000
(XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08   r8:  0000000000000000
(XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005   r14: 0000000000000000
(XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=ffff82d0802cfe08:
(XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d0802cfe68 000000000000001e
(XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078716898 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d08012a25f 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080310f00 ffff82d0802cff18
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000040004 0000000000000246
(XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff8100122a 000000000000e030
(XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070fe2780 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7fd300c7 ffff82d08022231b
(XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f60e0e0 0000000000000000
(XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078623e58 ffff880078716800
(XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000000006 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000000000 ffff880078623e28
(XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff8100142a 000000000000e033
(XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 000000000000e02b 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00] Xen call trace:
(XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x119e
(XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 05:38:00]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] Panic on CPU 0:
(XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
(XEN) [2014-01-22 05:38:00] [error_code=0000]
(XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--rwEMma7ioTxnRzrJ--


From xen-devel-bounces@lists.xen.org Tue Jan 21 21:54:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jHQ-0002we-KV; Tue, 21 Jan 2014 21:54:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5jHO-0002wU-4C
	for xen-devel@lists.xenproject.org; Tue, 21 Jan 2014 21:54:43 +0000
Received: from [193.109.254.147:37866] by server-12.bemta-14.messagelabs.com
	id 77/86-13681-1ACEED25; Tue, 21 Jan 2014 21:54:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390341277!12338590!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17374 invoked from network); 21 Jan 2014 21:54:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:54:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLsaQO004348
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:54:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLsajM026433
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:54:36 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLsZFD026423; Tue, 21 Jan 2014 21:54:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:54:34 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D60401BF76C; Tue, 21 Jan 2014 16:54:33 -0500 (EST)
Date: Tue, 21 Jan 2014 16:54:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org, jbeulich@suse.com
Message-ID: <20140121215433.GA6363@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="rwEMma7ioTxnRzrJ"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] Regression compared to Xen 4.3,
 Xen 4.4-rc2 -  pci_prepare_msix+0xb1/0x12 - BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey,

I haven't done any diagnosis yet to figure out exactly which
PCI device is at fault here. But this is a regression compared
to Xen 4.3, which boots just fine (see logs). The xen-syms
is at: http://darnok.org/xen/xen-syms.gz

I used an identical kernel for Xen 4.3 and it booted nicely.

My next step is to instrument do_physdev_op to figure out which
of the PCI devices is triggering this, but that will have to wait
until later this week.

What I get is this when booting Xen 4.4:


[   15.927480] xen: registering gsi 19 triggering 0 polarity 1
[   15.933039] Already setup the GSI :19
(XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 05:38:00] CPU:    0
(XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0   rcx: 0000000000000000
(XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08   r8:  0000000000000000
(XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005   r14: 0000000000000000
(XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=ffff82d0802cfe08:
(XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d0802cfe68 000000000000001e
(XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078716898 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d08012a25f 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080310f00 ffff82d0802cff18
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000040004 0000000000000246
(XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff8100122a 000000000000e030
(XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070fe2780 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7fd300c7 ffff82d08022231b
(XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f60e0e0 0000000000000000
(XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078623e58 ffff880078716800
(XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000000006 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000000000 ffff880078623e28
(XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff8100142a 000000000000e033
(XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 000000000000e02b 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000000000
(XEN) [2014-01-22 05:38:00] Xen call trace:
(XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
(XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x119e
(XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 05:38:00]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] Panic on CPU 0:
(XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
(XEN) [2014-01-22 05:38:00] [error_code=0000]
(XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] 
(XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-xen-4.3.log"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 __  __            _  _    _____  ____
 \ \/ /___ _ __   | || |  |___ / |___ \    _ __  _ __ ___
  \  // __/|__| |_) | | |  __/
 /_/\_\___|_| |_|    |_|(_)____(_)_____|  | .__/|_|  \___|
                                          |_|
(XEN) Xen version 4.3.2-pre (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 21 14:30:34 EST 2014
(XEN) Latest ChangeSet: Fri Jan 17 16:37:06 2014 +0100 git:7261a3f-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) LASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1158: drhd->address = fed90000 iommu->reg = ffff82c3ffd54000
(XEN) [VT-D]iommu.c:1160: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1158: drhd->address = fed91000 iommu->reg = ffff82c3ffd53000
(XEN) [VT-D]iommu.c:1160: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.046 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 03:41:23] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 03:41:23] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-22 03:41:23] mwait-idle: MWAIT substates: 0x42120
(XEN) [2014-01-22 03:41:23] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-22 03:41:23] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 03:41:23] VMX: Supported advanced features:
(XEN) [2014-01-22 03:41:23]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 03:41:23]  - APIC TPR shadow
(XEN) [2014-01-22 03:41:23]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 03:41:23]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 03:41:23]  - Virtual NMI
(XEN) [2014-01-22 03:41:23]  - MSR direct-access bitmap
(XEN) [2014-01-22 03:41:23]  - Unrestricted Guest
(XEN) [2014-01-22 03:41:23]  - VMCS shadowing
(XEN) [2014-01-22 03:41:23] HVM: ASIDs enabled.
(XEN) [2014-01-22 03:41:23] HVM: VMX enabled
(XEN) [2014-01-22 03:41:23] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 03:41:23] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 03:41:23] Brought up 8 CPUs
(XEN) [2014-01-22 03:41:23] ACPI sleep modes: S3
(XEN) [2014-01-22 03:41:23] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-22 03:41:23] *** LOADING DOMAIN 0 ***
(XEN) [2014-01-22 03:41:23] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa22000
(XEN) [2014-01-22 03:41:23] elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc00f0
(XEN) [2014-01-22 03:41:23] elf_parse_binary: phdr: paddr=0x1cc1000 memsz=0x14d80
(XEN) [2014-01-22 03:41:24] elf_parse_binary: phdr: paddr=0x1cd6000 memsz=0x71e000
(XEN) [2014-01-22 03:41:24] elf_parse_binary: memory: 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 03:41:24] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 03:41:24] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 03:41:24]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 03:41:24]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 03:41:24]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 03:41:24]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 03:41:24]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 03:41:24]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 03:41:24]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 03:41:24]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 03:41:24] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 03:41:24]  Dom0 alloc.:   000000022c000000->0000000230000000 (487894 pages to be allocated)
(XEN) [2014-01-22 03:41:24]  Init. ramdisk: 000000023af5d000->000000023fd86d75
(XEN) [2014-01-22 03:41:24] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 03:41:24]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 03:41:24]  Init. ramdisk: ffffffff823f4000->ffffffff8721dd75
(XEN) [2014-01-22 03:41:24]  Phys-Mach map: ffffffff8721e000->ffffffff8761e000
(XEN) [2014-01-22 03:41:24]  Start info:    ffffffff8761e000->ffffffff8761e4b4
(XEN) [2014-01-22 03:41:24]  Page tables:   ffffffff8761f000->ffffffff8765e000
(XEN) [2014-01-22 03:41:24]  Boot stack:    ffffffff8765e000->ffffffff8765f000
(XEN) [2014-01-22 03:41:24]  TOTAL:         ffffffff80000000->ffffffff87800000
(XEN) [2014-01-22 03:41:24]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 03:41:24] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 03:41:24] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 03:41:24] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:02.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:1444: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:755: iommu_enable_translation: iommu->reg = ffff82c3ffd54000
(XEN) [2014-01-22 03:41:25] [VT-D]iommu.c:755: iommu_enable_translation: iommu->reg = ffff82c3ffd53000
(XEN) [2014-01-22 03:41:25] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 03:41:25] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 03:41:25] Std. Loglevel: All
(XEN) [2014-01-22 03:41:25] Guest Loglevel: All
(XEN) [2014-01-22 03:41:25] **********************************************
(XEN) [2014-01-22 03:41:25] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 03:41:25] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 03:41:25] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 03:41:25] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 03:41:25] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 03:41:25] **********************************************
(XEN) [2014-01-22 03:41:25] 3... 2... 1...
(XEN) [2014-01-22 03:41:28] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-22 03:41:28] Freed 260kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xx:xx:xx)
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x0721dfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.3.2-pre (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.662076] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.662077] Policy zone: DMA32
[    5.662078] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xx:xx:xx)
[    5.662382] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.662412] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.682547] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.685643] Memory: 1894852K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 202296K reserved)
[    5.685868] Hierarchical RCU implementation.
[    5.685868] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.685869] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.685877] NR_IRQS:33024 nr_irqs:256 16
[    5.685955] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.685957] xen: registering gsi 9 triggering 0 polarity 0
[    5.685967] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.685988] xen: acpi sci 9
[    5.685992] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.685994] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.685997] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.686000] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.686002] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.686005] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.686007] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.686009] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.686012] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.686015] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.686017] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.686019] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.686022] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.686024] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.687586] Console: colour VGA+ 80x25
[    6.638682] console [hvc0] enabled
[    6.642617] Xen: using vcpuop timer interface
[    6.646958] installing Xen timer for CPU 0
[    6.651141] tsc: Detected 3400.046 MHz processor
[    6.655825] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.09 BogoMIPS (lpj=3400046)
[    6.666458] pid_max: default: 32768 minimum: 301
[    6.671290] Security Framework initialized
[    6.675378] SELinux:  Initializing.
[    6.678952] SELinux:  Starting in permissive mode
[    6.684017] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.691456] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.698616] Mount-cache hash table entries: 256
[    6.703577] Initializing cgroup subsys freezer
[    6.708076] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.708076] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.721179] CPU: Physical Processor ID: 0
[    6.725254] CPU: Processor Core ID: 0
[    6.729676] mce: CPU supports 2 MCE banks
[    6.733687] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.733687] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.733687] tlb_flushall_shift: 6
[    6.770682] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.779324] ACPI: Core revision 20131115
[    6.832338] ACPI: All ACPI Tables successfully acquired
[    6.839105] cpu 0 spinlock event irq 41
[    6.842977] callinginit_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.861451] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.867089] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.874535] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.880947] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.889181] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.894814] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.902267] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.907986] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.915527] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.920726] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.929582] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.936861] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.943274] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.951508] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.956975] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.964091] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.968887] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.975446] calling  init_workqueues+0x0/0x59a @ 1
[    6.980452] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.987050] calling  migration_init+0x0/0x71 @ 1
[    6.991730] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    6.998229] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    7.003428] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 us=
ecs
[    7.010451] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    7.016341] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 af=
ter 0 usecs
[    7.024054] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    7.029292] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 us=
ecs
[    7.036278] calling  cpu_stop_init+0x0/0x76 @ 1
[    7.040894] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    7.047282] calling  relay_init+0x0/0x14 @ 1
[    7.051615] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    7.057767] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    7.063075] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    7.070160] calling  init_events+0x0/0x61 @ 1
[    7.074580] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    7.080821] calling  init_trace_printk+0x0/0x12 @ 1
[    7.085760] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    7.092521] calling  event_trace_memsetup+0x0/0x52 @ 1
[    7.097741] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    7.104742] calling  jump_label_init_module+0x0/0x12 @ 1
[    7.110115] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    7.117308] calling  balloon_clear+0x0/0x4f @ 1
[    7.121901] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    7.128315] calling  rand_initialize+0x0/0x30 @ 1
[    7.133101] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    7.139667] calling  mce_amd_init+0x0/0x165 @ 1
[    7.144259] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.150698] x86: Booted up 1 node, 1 CPUs
[    7.155443] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.162084] devtmpfs: initialized
[    7.167963] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.172306] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.178544] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.183571] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.190418] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.197094] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.205584] calling  net_ns_init+0x0/0x104 @ 1
[    7.210148] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.216431] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.221619] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.229513] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.237673] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.244879] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.249298] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.255538] calling  reboot_init+0x0/0x1d @ 1
[    7.259961] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.266199] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.271051] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.277726] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.283271] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.290638] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.295491] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.302165] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.306858] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.313606] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.320133] calling  ksysfs_init+0x0/0x94 @ 1
[    7.324595] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.330791] calling  pm_init+0x0/0x4e @ 1
[    7.334902] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.340756] calling  pm_disk_init+0x0/0x19 @ 1
[    7.345278] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.351591] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.356617] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.363465] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.369010] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.376376] calling  cgroup_wq_init+0x0/0x32 @ 1
[    7.381061] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    7.387555] calling  event_trace_enable+0x0/0x173 @ 1
[    7.393145] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.400004] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.404595] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.411009] calling  fsnotify_init+0x0/0x26 @ 1
[    7.415603] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.422013] calling  filelock_init+0x0/0x84 @ 1
[    7.426619] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.433021] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.437874] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.444547] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.449575] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.456421] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.461188] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.467774] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.473147] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.480340] calling  debugfs_init+0x0/0x5c @ 1
[    7.484856] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.491174] calling  securityfs_init+0x0/0x53 @ 1
[    7.495949] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.502526] calling  prandom_init+0x0/0xe2 @ 1
[    7.507034] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.513362] calling  virtio_init+0x0/0x30 @ 1
[    7.517881] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.524047] calling  __gnttab_init+0x0/0x30 @ 1
[    7.528641] xen:grant_table: Grant tables using version 2 layout
[    7.534724] Grant table initialized
[    7.538258] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.544933] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.549986] RTC time:  3:41:29, date: 01/22/14
[    7.554465] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.561486] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.566425] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.573358] calling  cpuidle_init+0x0/0x40 @ 1
[    7.577865] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.584364] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.589306] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.596065] calling  sock_init+0x0/0x8b @ 1
[    7.600412] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.606403] calling  net_inuse_init+0x0/0x26 @ 1
[    7.611085] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.617583] calling  netpoll_init+0x0/0x31 @ 1
[    7.622089] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.628417] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.633570] NET: Registered protocol family 16
[    7.638060] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.645155] calling  bdi_class_init+0x0/0x4d @ 1
[    7.649940] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.656371] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.661497] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.668412] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.673417] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.680112] calling  pci_driver_init+0x0/0x12 @ 1
[    7.684974] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.691484] calling  backlight_class_init+0x0/0x85 @ 1
[    7.696743] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.703707] calling  video_output_class_init+0x0/0x19 @ 1
[    7.709229] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.716443] calling  xenbus_init+0x0/0x26f @ 1
[    7.721041] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.727297] calling  tty_class_init+0x0/0x38 @ 1
[    7.732043] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.738475] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.743844] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.750792] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.756603] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.764225] calling  register_node_type+0x0/0x34 @ 1
[    7.769381] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.776157] calling  i2c_init+0x0/0x70 @ 1
[    7.780483] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.786392] calling  init_ladder+0x0/0x12 @ 1
[    7.790811] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.797223] calling  init_menu+0x0/0x12 @ 1
[    7.801471] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.807711] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.812737] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.819595] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.825149] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.832496] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.837639] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.844544] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.849050] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.855549] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.860319] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.866903] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.872103] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.879123] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.883715] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.891342] ACPI: bus type PCI registered
[    7.895415] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.901915] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.908589] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.913217] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.919916] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.926402] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.931804] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.938981] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.944530] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.951894] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.957530] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.964980] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.970418] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.976766] calling  dmi_id_init+0x0/0x31d @ 1
[    7.981518] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.987773] calling  dca_init+0x0/0x20 @ 1
[    7.991932] dca service started, version 1.12.1
[    7.996585] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.002681] calling  iommu_init+0x0/0x58 @ 1
[    8.007023] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.013167] calling  pci_arch_init+0x0/0x69 @ 1
[    8.017775] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.027119] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.041684] PCI: Using configuration type 1 for base access
[    8.047245] initcall pci_arch_init+0x0/0x69 returned 0 afterpology_init+0x0/0x98 @ 1
[    8.058931] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.065293] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.070385] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.077318] calling  init_vdso+0x0/0x135 @ 1
[    8.081652] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.087802] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.092571] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.099157] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.120226] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.127264] calling  pm_sysrq_init+0x0/0x19ed 0 after 0 usecs
[    8.138272] calling  default_bdi_init+0x0/0x65 @ 1
[    8.143430] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.150034] calling  init_bio+0x0/0xe9 @ 1
[    8.154241] bio: create slab <bio-0> at 0
[    8.158312] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.164418] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    8.170157] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    8.177676] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.182356] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.188857] calling  blk_settings_init+0x0/0x2c @ 1
[    8.193795] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.200555] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.205072] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.211387] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.216240] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.222914] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.227766] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.234440] calling  blk_mq_init+0x0/0x5f @ 1
[    8.238859] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.245100] calling  genhd_device_init+0x0/0x85 @ 1
[    8.250166] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.256857] calling  pci_slot_init+0x0/0x50 @ 1
[    8.261454] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.267859] calling  fbmem_init+0x0/0x98 @ 1
[    8.272261] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.278347] calling  acpi_init+0x0/0x27a @ 1
[    8.282705] ACPI: Added _OSI(Module Device)
[    8.286926] ACPI: Added _OSI(Processor Device)
[    8.291432] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.296198] ACPI: Added _OSI(Processor Aggregator Device)
[    8.305411] ACPI: Executed 1 blocks of module-level executable AML code
[    8.337352] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.345202] \_SB_:_OSC invalid UUID
[    8.377464] ACPI: Interpreter enabled
[    8.381132] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.390396] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.399674] ACPI: (supports S0 S1 S4 S5)
[    8.403645] ACPI: Using IOAPIC for interrupt routing
[    8.409046] HEST: Table parsing has been initialized.
[    8.414096] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.424473] ACPI: No dock devices found.
[    8.525664] ACPI: Power Resource [FN00] (off)
[    8.530812] ACPI: Power Resource [FN01] (off)
[    8.535968] ACPI: Power Resource [FN02] (off)
[    8.541098] ACPI: Power Resource [FN03] (off)
[    8.546242] ACPI: Power Resource [FN04] (off)
[    8.555962] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.562140] acpi PNP0A08:00: _OSC: OS supports [Exten…]
[    8.5…] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.581948] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.595479] PCI host bridge to bus 0000:00
[    8.599564] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.605111] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[    8.611350] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    8.617590] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.624523] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.631457] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.638389] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.645323] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.652257] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.659202] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:00.0
[    8.670770] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.676928] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.683540] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:01.0
[    8.694374] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.700437] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:01.1
[    8.712097] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.718105] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.724942] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.732217] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:02.0
[    8.743372] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.749392] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:03.0
[    8.761807] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.767861] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.774787] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.781010] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:14.0
[    8.791861] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.797902] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.804838] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:16.0
[    8.816479] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.822519] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.828816] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.835142] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.840900] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.847379] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:19.0
[    8.858231] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.864265] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.870718] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.877293] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1a.0
[    8.888152] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.894186] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.901147] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.907631] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1b.0
[    8.918468] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    8.924630] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    8.931135] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.0
[    8.941991] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    8.948147] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    8.954639] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.3
[    8.965489] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    8.971650] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    8.978143] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:31] PCI add device 0000:00:1c.5
[    8.988994] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    8.995154] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.001645] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1c.6
[    9.012489] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.018658] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.025148] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1c.7
[    9.036008] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.042048] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    9.048502] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.055076] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1d.0
[    9.065928] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.0
[    9.077609] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.083646] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.089252] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.094883] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.100517] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.106150] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.111785] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    9.118191] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.2
[    9.128951] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.134983] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    9.141836] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.3
[    9.152963] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.158999] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:00:1f.6
[    9.171699] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.182484] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    9.188533] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    9.194168] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    9.201013] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    9.207863] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    9.214662] pci 0000:01:00.0: supports D1 D2
[    9.219052] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:01:00.0
[    9.232010] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.237229] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    9.243381] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    9.250230] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.257099] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.267890] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.273937] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.280258] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.286584] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.292215] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.298563] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.305355] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.311481] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.318396] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:02:00.0
[    9.330599] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.336606] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.342925] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.349251] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.354886] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.361232] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.368022] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.374148] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.381065] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 03:41:32] PCI add device 0000:02:00.1
[    9.395356] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.400572] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.406722] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.413570] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.420604] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.431426] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.437468] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.443784] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.450110] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.455827] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.462654] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.468880] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:04:00.0
[    9.479791] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.485823] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.492135] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.498461] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.504177] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.511003] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1444: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:04:00.1
[    9.531024] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.536247] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.542398] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.549249] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.556277] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.567130] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.573163] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.579499] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.585113] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.591610] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.597838] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:05:00.0
[    9.610835] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.616057] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.622208] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.629058] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.636130] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.646945] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.653181] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.659950] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:32] PCI add device 0000:06:00.0
[    9.670841] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.676070] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.682931] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.691421] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.697606] pci 0000:07:01.0: supports D1 D2
[    9.701867] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 03:41:32] PCI add device 0000:07:01.0
[    9.713683] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.719717] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.726024] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.732507] pci 0000:07:03.0: supports D1 D2
[    9.736766] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:07:03.0
[    9.754672] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.761728] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.768571] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.777137] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.785805] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.794384] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.802967] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.811365] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.817413] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:08.0
[    9.836070] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.842122] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:08.1
[    9.860805] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.866860] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:09.0
[    9.885525] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.891573] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:09.1
[    9.910257] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.916310] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0a.0
[    9.935024] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.941074] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0a.1
[    9.959756] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.965809] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 03:41:32] PCI add device 0000:08:0b.0
[    9.984474] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.990522] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 03:41:32] [VT-D]iommu.c:1456: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 03:41:33] PCI add device 0000:08:0b.1
[   10.009235] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.014464] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   10.021304] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.027976] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.034645] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.041688] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.052580] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.058683] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[   10.065844] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.072135] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:33] PCI add device 0000:09:00.0
[   10.085236] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.090456] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   10.097299] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.104329] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.115125] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.121172] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[   10.126800] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[   10.132434] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[   10.138067] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[   10.143700] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[   10.149334] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[   10.155867] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 03:41:33] [VT-D]iommu.c:1444: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 03:41:33] PCI add device 0000:0a:00.0
[   10.175360] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.180578] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   10.186727] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   10.193576] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.200339] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.212101] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.219411] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.226715] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.234019] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.241327] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.248631] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.257067] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.264372] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.272791] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.277584] ACPI: \_SB_.PCI0: notify handler is installed
[   10.283068] Found 1 acpi root devices
[   10.286868] initcall acpi_init+0x0/0x27a returned 0 after 453125 usecs
[   10.293383] calling  pnp_init+0x0/0x12 @ 1
[   10.297636] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.303539] calling  balloon_init+0x0/0x242 @ 1
[   10.308131] xen:balloon: Initialising balloon driver
[   10.313159] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.319746] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.325292] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.332657] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.338384] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.345763] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.351599] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.359066] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.364082] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.370766] calling  balloon_init+0x0/0xfa @ 1
[   10.375270] xen_balloon: Initialising balloon driver
[   10.380684] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.387112] calling  misc_init+0x0/0xba @ 1
[   10.391429] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.397425] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.402675] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.410748] vgaarb: loaded
[   10.413517] vgaarb: bridge control possible 0000:00:02.0
[   10.418892] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.426085] calling  cn_init+0x0/0xc0 @ 1
[   10.430175] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.436050] calling  dma_buf_init+0x0/0x75 @ 1
[   10.440567] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.446884] calling  phy_init+0x0/0x2e @ 1
[   10.451269] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.457181] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.461915] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.468360] calling  usb_init+0x0/0x169 @ 1
[   10.472618] ACPI: bus type USB registered
[   10.476876] usbcore: registered new interface driver usbfs
[   10.482447] usbcore: registered new interface driver hub
[   10.487842] usbcore: registered new device driver usb
[   10.492890] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.499214] calling  serio_init+0x0/0x31 @ 1
[   10.503663] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.509743] calling  input_init+0x0/0x103 @ 1
[   10.514233] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.520406] calling  rtc_init+0x0/0x5b @ 1
[   10.524635] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.530545] calling  pps_init+0x0/0xb7 @ 1
[   10.534766] pps_core: LinuxPPS API ver. 1 registered
[   10.539730] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.548916] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.555154] calling  ptp_init+0x0/0xa4 @ 1
[   10.559373] PTP clock support registered
[   10.563303] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.569454] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.574973] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.582197] calling  hwmon_init+0x0/0xe3 @ 1
[   10.586591] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.592683] calling  leds_init+0x0/0x40 @ 1
[   10.596987] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.602997] calling  efisubsys_init+0x0/0x142 @ 1
[   10.607763] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.614347] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.619112] PCI: Using ACPI for IRQ routing
[   10.626785] PCI: pci_cache_line_size set to 64 bytes
[   10.631945] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.…] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.661340] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.666563] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.672909] calling  neigh_init+0x0/0x80 @ 1
[   10.677240] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.683393] calling  fib_rules_init+0x0/0xaf @ 1
[   10.688073] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.694572] calling  pktsched_init+0x0/0x10a @ 1
[   10.699258] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.705752] calling  tc_filter_init+0x0/0x55 @ 1
[   10.710433] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.716932] calling  tc_action_init+0x0/0x55 @ 1
[   10.721612] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.728111] calling  genl_init+0x0/0x85 @ 1
[   10.732375] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.738425] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.743021] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.749431] calling  netlbl_init+0x0/0x81 @ 1
[   10.753851] NetLabel: Initializing
[   10.757319] NetLabel:  domain hash size = 128
[   10.761738] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.766802] NetLabel:  unlabeled traffic allowed by default
[   10.772399] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.778899] calling  rfkill_init+0x0/0x79 @ 1
[   10.783491] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.789658] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.794249] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.800677] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.805443] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.812014] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.817348] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.824406] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.829524] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.836454] calling  hpet_late_init+0x0/0x101 @ 1
[   10.841218] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.847980] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.852489] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.858813] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.864367] Switched to clocksource xen
[   10.868266] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.875890] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.881375] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 281 usecs
[   10.888496] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.894828] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.902969] calling  event_trace_init+0x0/0x205 @ 1
[   10.922252] initcall event_trace_init+0x0/0x205 returned 0 after 14003 usecs
[   10.929281] calling  init_kprobe_trace+0x0/0x93 @ 1
[   10.9…] initcall init_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.941089] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.945634] initcall init_pipe_fs+0x0/0x4c returned 0 after 44 usecs
[   10.952002] calling  eventpoll_init+0x0/0xda @ 1
[   10.956708] initcall eventpoll_init+0x0/0xda returned 0 after 25 usecs
[   10.963268] calling  anon_inode_init+0x0/0x5b @ 1
[   10.968073] initcall anon_inode_init+0x0/0x5b returned 0 after 36 usecs
[   10.974709] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   10.979908] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   10.986929] calling  acpi_event_init+0x0/0x3a @ 1
[   10.991710] initcall acpi_event_init+0x0/0x3a returned 0 after 16 usecs
[   10.998368] calling  pnp_system_init+0x0/0x12 @ 1
[   11.003230] initcall pnp_system_init+0x0/0x12 returned 0 after 91 usecs
[   11.009837] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.014330] pnp: PnP ACPI init
[   11.017473] ACPI: bus type PNP registered
[   11.021848] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.028453] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.035344] pnp 00:01: [dma 4]
[   11.038557] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.045244] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.052293] kworker/u2:0 (512) used greatest stack depth: 5560 bytes left
[   11.059077] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.066679] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.072596] system 00:04: [io  0xffff] has been reserved
[   11.077967] system 00:04: [io  0xffff] has been reserved
[   11.083341] system 00:04: [io  0xffff] has been reserved
[   11.088714] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.094693] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.100673] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.106652] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.112633] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.118612] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.124940] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.130914] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.137793] xen: registering gsi 8 triggering 1 polarity 0
[   11.143530] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.150378] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.156284] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.164645] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.170561] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.176535] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.184757] xen: registering gsi 4 triggering 1 polarity 0
[   11.190229] Already setup the GSI :4
[   11.193872] pnp 00:08: [dma 0 disabled]
[   11.197974] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.205712] xen: registering gsi 3 triggering 1 polarity 0
[   11.211206] pnp 00:09: [dma 0 disabled]
[   11.215314] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.222148] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.228064] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.234939] xen: registering gsi 13 triggering 1 polarity 0
[   11.240723] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.250370] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.256980] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.263650] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.270322] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.276993] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.283667] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.290340] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.297013] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.303687] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.310360] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.317033] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.323708] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.330376] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.339280] pnp: PnP ACPI: found 13 devices
[   11.343452] ACPI: bus type PNP unregistered
[   11.347701] initcall pnpacpi_init+0x0/0x8c returned 0 after 325555 usecs
[   11.354458] calling  pcistub_init+0x0/0x29f @ 1
[   11.359052] xen_pciback: Error parsing pci_devs_to_hide at "(xx:xx:xx)"
[   11.365726] initcall pcistub_init+0x0/0x29f returned -22 after 6517 usecs
[   11.372572] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.386177] initcall chr_dev_init+0x0/0xc6 returned 0 after 8886 usecs
[   11.392694] calling  firmware_class_init+0x0/0xec @ 1
[   11.397906] initcall firmware_class_init+0x0/0xec returned 0 after 101 usecs
[   11.404944] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.409844] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 131 usecs
[   11.416533] calling  thermal_init+0x0/0x8b @ 1
[   11.421111] initcall thermal_init+0x0/0x8b returned 0 after 72 usecs
[   11.427449] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.433342] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.441230] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.449916] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.456344] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9343 usecs
[   11.464144] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.469796] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.474753] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.480906] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.487767] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.494696] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.501628] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.508560] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.515495] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.522426] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.529359] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.536294] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.543226] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.550159] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.557091] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.564018] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.571414] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.578316] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.585785] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.592704] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.600084] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.607002] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.614462] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.619741] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.625897] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.632748] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.637770] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.643927] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.650781] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.655796] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.661954] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.668808] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.673830] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.680700] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.685962] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.692816] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.698094] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.704946] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.709967] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.716820] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.721835] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.727992] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.734847] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.740468] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.746099] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.752426] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.758753] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.765080] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.771406] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.777819] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.784233] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.789866] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.796194] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.801826] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.808152] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.813785] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.820112] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.825746] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.832073] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.838398] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.844724] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.851052] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.857378] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.863705] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.869338] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.875666] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396460 usecs
[   11.883464] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.888332] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.895078] calling  inet_init+0x0/0x296 @ 1
[   11.899482] NET: Registered protocol family 2
[   11.904145] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.911398] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.918043] TCP: Hash tables configured (established 16384 bind 16384)
[   11.924635] TCP: reno registered
[   11.927919] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   11.933986] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   11.940610] initcall inet_init+0x0/0x296 returned 0 after 40232 usecs
[   11.947041] calling  ipv4_offload_init+0x0/0x61 @ 1
[   11.951980] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   11.958740] calling  af_unix_init+0x0/0x55 @ 1
[   11.963256] NET: Registered protocol family 1
[   11.967679] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs
[   11.974253] calling  ipv6_offload_init+0x0/0x7f @ 1
[   11.979194] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   11.985954] calling  init_sunrpc+0x0/0x69 @ 1
[   11.990566] RPC: Registered named UNIX socket transport module.
[   11.996479] RPC: Registered udp transport module.
[   12.001242] RPC: Registered tcp transport module.
[   12.006007] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.012507] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.019094] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.024561] pci 0000:00:02.0: Boot video device
[   12.029647] xen: registering gsi 16 triggering 0 polarity 1
[   12.035219] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.039859] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.047598] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.055504] xen: registering gsi 16 triggering 0 polarity 1
[   12.061067] Already setup the GSI :16
[   12.081398] xen: registering gsi 23 triggering 0 polarity 1
[   12.086978] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.108626] xen: registering gsi 18 triggering 0 polarity 1
[   12.114213] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.118] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 105455 usecs
[   12.140241] calling  populate_rootfs+0x0/0x112 @ 1
[   12.145228] Unpacking initramfs...
[   13.211269] Freeing initrd memory: 80040K (ffff8800023f4000 - ffff88000721e000)
[   13.218579] initcall populate_rootfs+0x0/0x112 returned 0 after 1048322 usecs
[   13.225760] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.230441] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.236941] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.242573] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.250216] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.256180] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.263979] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.268745] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.275333] calling  vsyscall_init+0x0/0x27 @ 1
[   13.279930] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.286339] calling  sbf_init+0x0/0xf6 @ 1
[   13.290500] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.296479] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.301679] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 0 usecs
[   13.308699] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.313208] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.319532] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.324299] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.330886] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.335981] initcall cache_sysfs_init+0x0/0x65 returned 0 after 236 usecs
[   13.342752] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.347603] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.354450] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.359476] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.366496] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.371522] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.378541] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.383237] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.393797] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10327 usecs
[   13.400646] calling  inject_init+0x0/0x30 @ 1
[   13.405062] Machine check injector initialized
[   13.409569] initcall inject_init+0x0/0x30 returned 0 after 4400 usecs
[   13.416069] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.421961] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.429674] calling  microcode_init+0x0/0x1b1 @ 1
[   13.434628] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.440747] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.449521] initcall microcode_init+0x0/0x1b1 returned 0 after 14723 usecs
[   13.456451] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.461040] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.467627] calling  msr_init+0x0/0x162 @ 1
[   13.472093] initcall msr_init+0x0/0x162 returned 0 after 213 usecs
[   13.478265] calling  cpuid_init+0x0/0x162 @ 1
[   13.482879] initcall cpuid_init+0x0/0x162 returned 0 after 193 usecs
[   13.489216] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.493982] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.500569] calling  add_pcspkr+0x0/0x40 @ 1
[   13.505002] initcall add_pcspkr+0x0/0x40 returned 0 after 99 usecs
[   13.511172] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.517668] Scanning for low memory corruption every 60 seconds
[   13.523648] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5839 usecs
[   13.532226] calling  sysfb_init+0x0/0x9c @ 1
[   13.536666] initcall sysfb_init+0x0/0x9c returned 0 after 103 usecs
[   13.542926] calling  audit_classes_init+0x0/0xaf @ 1
[   13.547963] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.554884] calling  pt_dump_init+0x0/0x30 @ 1
[   13.559398] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.565717] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.570577] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.577243] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.582536] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.589635] calling  ioresources_init+0x0/0x3c @ 1
[   13.594494] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.601161] calling  uid_cache_init+0x0/0x85 @ 1
[   13.605856] initcall uid_cache_init+0x0/0x85 returned 0 after 15 usecs
[   13.612428] calling  init_posix_timers+0x0/0x240 @ 1
[   13.617471] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.624387] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.629674] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.636780] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.641897] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.648827] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.654146] initcall snapshot_device_init+0x0/0x12 returned 0 after 116 usecs
[   13.661264] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.666029] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.672617] calling  create_proc_profile+0x0/0x300 @ 1
[   13.677816] initcall create_proc_profile+0x0/0x300 returned 0 after 0 us=
ecs
[   13.684836] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.690037] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 us=
ecs
[   13.697056] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.702643] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 207 usecs
[   13.709939] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.715315] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.722502] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.727563] initcall alarmtimer_init+0x0/0x15f returned 0 after 203 usecs
[   13.734341] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.740008] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 287 usecs
[   13.747309] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.752337] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.759179] calling  futex_init+0x0/0xf6 @ 1
[   13.763528] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.769668] initcall futex_init+0x0/0xf6 returned 0 after 6010 usecs
[   13.776079] calling  proc_dma_init+0x0/0x22 @ 1
[   13.780673] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.787084] calling  proc_modules_init+0x0/0x22 @ 1
[   13.792027] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.798785] calling  kallsyms_init+0x0/0x25 @ 1
[   13.803380] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.809790] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.815606] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.823310] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.828773] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.836050] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.841176] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.848183] calling  ikconfig_init+0x0/0x3c @ 1
[   13.852780] initcall ikconfig_init+0x0/0x3c returned 0 after 4 usecs
[   13.859190] calling  audit_init+0x0/0x141 @ 1
[   13.863609] audit: initializing netlink socket (disabled)
[   13.869091] type=2000 audit(1390362093.580:1): initialized
[   13.874617] initcall audit_init+0x0/0x141 returned 0 after 10749 usecs
[   13.881202] calling  audit_watch_init+0x0/0x3a @ 1
[   13.886057] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.892729] calling  audit_tree_init+0x0/0x49 @ 1
[   13.897496] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.904082] calling  init_kprobes+0x0/0x16c @ 1
[   13.918670] initcall init_kprobes+0x0/0x16c returned 0 after 9759 usecs
[   13.925273] calling  hung_task_init+0x0/0x56 @ 1
[   13.936558] calling  utsname_sysctl_init+0x0/0x14 @ 1
[   13.941682] initcall utsname_sysctl_init+0x0/0x14 returned 0 after 8 usecs
[   13.948608] calling  init_tracepoints+0x0/0x20 @ 1
[   13.953458] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.960128] calling  init_blk_tracer+0x0/0x5a @ 1
[   13.964896] initcall init_blk_tracer+0x0/0x5a returned 0 after 0 usecs
[   13.971482] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   13.977204] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   13.984739] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   13.990555] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 513 usecs
[   13.997767] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   14.003293] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   14.010594] calling  kswapd_init+0x0/0x76 @ 1
[   14.015046] initcall kswapd_init+0x0/0x76 returned 0 after 33 usecs
[   14.021340] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.026387] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   14.033299] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.037821] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.044218] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.048822] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.055312] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.060599] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.067704] calling  slab_proc_init+0x0/0x25 @ 1
[   14.072388] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.078886] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.084175] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.091277] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.096303] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.103150] calling  init_user_reserve+0x0/0x40 @ 1
[   14.108090] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.114851] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.119794] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.126551] calling  procswaps_init+0x0/0x22 @ 1
[   14.131233] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.137730] calling  init_frontswap+0x0/0x96 @ 1
[   14.142439] initcall init_frontswap+0x0/0x96 returned 0 after 28 usecs
[   14.148996] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.153590] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.160082] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6340 usecs
[   14.166684] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.171627] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.178385] calling  slab_proc_init+0x0/0x8 @ 1
[   14.182978] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.189390] calling  cpucache_init+0x0/0x4b @ 1
[   14.193984] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.200398] calling  hugepage_init+0x0/0x145 @ 1
[   14.205078] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.211751] calling  init_cleancache+0x0/0xbc @ 1
[   14.216547] initcall init_cleancache+0x0/0xbc returned 0 after 29 usecs
[   14.223192] calling  fcntl_init+0x0/0x2a @ 1
[   14.227535] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.233764] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.239055] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.246157] calling  dio_init+0x0/0x2d @ 1
[   14.250328] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.256384] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.261444] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 33 usecs
[   14.268343] calling  dnotify_init+0x0/0x7b @ 1
[   14.272876] initcall dnotify_init+0x0/0x7b returned 0 after 25 usecs
[   14.279263] calling  inotify_user_setup+0x0/0x70 @ 1
[   14.284309] initcall inotify_user_setup+0x0/0x70 returned 0 after 19 usecs
[   14.291224] calling  aio_setup+0x0/0x7d @ 1
[   14.295530] initcall aio_setup+0x0/0x7d returned 0 after 57 usecs
[   14.301623] calling  proc_locks_init+0x0/0x22 @ 1
[   14.306394] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   14.312975] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.317874] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   14.324590] calling  dquot_init+0x0/0x121 @ 1
[   14.329008] VFS: Disk quotas dquot_6.5.2
[   14.333032] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.339498] initcall dquot_init+0x0/0x121 returned 0 after 10242 usecs
[   14.346082] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.351282] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.358301] calling  quota_init+0x0/0x31 @ 1
[   14.362653] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.368875] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.373819] initcall proc_cmdline_init+0x0/0x22 returned 0 after 3 usecs
[   14.380576] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.385606] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.392448] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.397391] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.404150] calling  proc_devices_init+0x0/0x22 @ 1
[   14.409090] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.415849] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.421052] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.428069] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.433011] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.439769] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.444711] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.451469] calling  proc_stat_init+0x0/0x22 @ 1
[   14.456151] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.462647] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.467504] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.474174] calling  proc_version_init+0x0/0x22 @ 1
[   14.479117] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.485875] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.490904] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.497749] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.502523] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.509187] calling  vmcore_init+0x0/0x5cb @ 1
[   14.513693] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.520020] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.524704] initcall proc_kmsg_init+0x0/0x25 returned 0 after 4 usecs
[   14.531200] calling  proc_page_init+0x0/0x42 @ 1
[   14.535887] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.542380] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.547105] initcall init_devpts_fs+0x0/0x62 returned 0 after 43 usecs
[   14.553647] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.558250] initcall init_ramfs_fs+0x0/0x4d returned 0 after 10 usecs
[   14.564740] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.569839] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 70 usecs
[   14.576700] calling  init_fat_fs+0x0/0x4f @ 1
[   14.581140] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.587445] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.591952] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.598279] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.602873] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.609286] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.614077] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   14.620727] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.625426] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   14.631853] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.636273] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.642513] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.646933] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.653173] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.657593] NFS: Registering the id_resolver key type
[   14.662716] Key type id_resolver registered
[   14.666952] Key type id_legacy registered
[   14.671031] initcall init_nfs_v4+0x0/0x3b returned 0 after 13123 usecs
[   14.677614] calling  init_nlm+0x0/0x4c @ 1
[   14.681781] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.687752] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.692433] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.698932] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.703613] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.710112] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.715141] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.721986] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.726579] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.732992] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.737584] NTFS driver 2.1.30 [Flags: R/W].
[   14.741969] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4281 usecs
[   14.748593] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.753489] initcall init_autofs4_fs+0x0/0x2a returned 0 after 128 usecs
[   14.760185] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.764872] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.771448] calling  ipc_init+0x0/0x2f @ 1
[   14.775614] msgmni has been set to 3857
[   14.779516] initcall ipc_init+0x0/0x2f returned 0 after 3815 usecs
[   14.785746] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.790520] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.797099] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.801840] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.808366] calling  key_proc_init+0x0/0x5e @ 1
[   14.812964] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.819373] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.824397] SELinux:  Registering netfilter hooks
[   14.829298] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4785 usecs
[   14.836332] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.841098] initcall init_sel_fs+0x0/0xa5 returned 0 after 338 usecs
[   14.847435] calling  selnl_init+0x0/0x56 @ 1
[   14.851778] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.858006] calling  sel_netif_init+0x0/0x5c @ 1
[   14.862690] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.869186] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.874043] initcall sel_netnode_init+0x0/0x6a returned 0 after 3 usecs
[   14.880714] calling  sel_netport_init+0x0/0x6a @ 1
[   14.885568] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs
[   14.892240] calling  aurule_init+0x0/0x2d @ 1
[   14.896660] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.902900] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.907613] initcall crypto_wq_init+0x0/0x33 returned 0 after 32 usecs
[   14.914169] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.919113] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.925867] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.930980] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.937913] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.942958] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.949802] calling  hmac_module_init+0x0/0x12 @ 1
[   14.954653] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.961329] calling  md5_mod_init+0x0/0x12 @ 1
[   14.965885] initcall md5_mod_init+0x0/0x12 returned 0 after 48 usecs
[   14.972249] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   14.977560] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.984726] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   14.990101] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.997292] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.002546] initcall des_generic_mod_init+0x0/0x17 returned 0 after 50 usecs
[   15.009599] calling  aes_init+0x0/0x12 @ 1
[   15.013784] initcall aes_init+0x0/0x12 returned 0 after 24 usecs
[   15.019826] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.024445] initcall zlib_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.030920] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.036639] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.044178] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.050244] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.058132] calling  krng_mod_init+0x0/0x12 @ 1
[   15.062751] initcall krng_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.069225] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.073999] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.080578] calling  bsg_init+0x0/0x12e @ 1
[   15.084903] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.092279] initcall bsg_init+0x0/0x12e returned 0 after 7280 usecs
[   15.098604] calling  noop_init+0x0/0x12 @ 1
[   15.102851] io scheduler noop registered
[   15.106838] initcall noop_init+0x0/0x12 returned 0 after 3893 usecs
[   15.113165] calling  deadline_init+0x0/0x12 @ 1
[   15.117756] io scheduler deadline registered
[   15.122090] initcall deadline_init+0x0/0x12 returned 0 after 4232 usecs
[   15.128764] calling  cfq_init+0x0/0x8b @ 1
[   15.132950] io scheduler cfq registered (default)
[   15.137690] initcall cfq_init+0x0/0x8b returned 0 after 4653 usecs
[   15.143930] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.149305] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.156497] calling  pci_proc_init+0x0/0x6a @ 1
[   15.161277] initcall pci_proc_init+0x0/0x6a returned 0 after 183 usecs
[   15.167790] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.173447] xen: registering gsi 16 triggering 0 polarity 1
[   15.179008] Already setup the GSI :16
[   15.183583] xen: registering gsi 16 triggering 0 polarity 1
[   15.189146] Already setup the GSI :16
[   15.193650] xen: registering gsi 16 triggering 0 polarity 1
[   15.199210] Already setup the GSI :16
[   15.203570] xen: registering gsi 19 triggering 0 polarity 1
[   15.209145] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.214378] xen: registering gsi 17 triggering 0 polarity 1
[   15.219951] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.225262] xen: registering gsi 19 triggering 0 polarity 1
[   15.230824] Already setup the GSI :19
[   15.234745] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60560 usecs
[   15.241778] calling  aer_service_init+0x0/0x2b @ 1
[   15.246701] initcall aer_service_init+0x0/0x2b returned 0 after 70 usecs
[   15.253391] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.258242] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.263877] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5502 usecs
[   15.270809] calling  pcied_init+0x0/0x79 @ 1
[   15.275339] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.281943] initcall pcied_init+0x0/0x79 returned 0 after 6641 usecs
[   15.288353] calling  pcifront_init+0x0/0x3f @ 1
[   15.292944] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.299531] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.304932] initcall genericbl_driver_init+0x0/0x14 returned 0 after 111 usecs
[   15.312141] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.316823] initcall cirrusfb_init+0x0/0xcc returned 0 after 81 usecs
[   15.323251] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.328263] initcall efifb_driver_init+0x0/0x14 returned 0 after 71 usecs
[   15.335041] calling  intel_idle_init+0x0/0x331 @ 1
[   15.339892] intel_idle: MWAIT substates: 0x42120
[   15.344571] intel_idle: v0.4 model 0x3C
[   15.348472] intel_idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 03:41:35] traps.c:2527:d0 Domain attempted WRMSR 00000000000001fc from 0x000000000004005f to 0x000000000004005d.
[   15.365805] intel_idle: intel_idle yielding to none
[   15.370483] initcall intel_idle_init+0x0/0x331 returned -19 after 29873 usecs
[   15.377936] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.383318] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   15.390502] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.395084] initcall acpi_ac_init+0x0/0x2a returned 0 after 72 usecs
[   15.401436] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.407164] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.415330] ACPI: Power Button [PWRB]
[   15.419309] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.426689] ACPI: Power Button [PWRF]
[   15.430485] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23047 usecs
[   15.438039] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.443478] ACPI: Fan [FAN0] (off)
[   15.447067] ACPI: Fan [FAN1] (off)
[   15.450664] ACPI: Fan [FAN2] (off)
[   15.454264] ACPI: Fan [FAN3] (off)
[   15.457898] ACPI: Fan [FAN4] (off)
[   15.461370] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17705 usecs
[   15.468671] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.486676] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.495099] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] (Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)
[   15.510701] Monitor-Mwait will be used to enter C-1 state
[   15.516096] Monitor-Mwait will be used to enter C-2 state
[ ] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.543528] thermal LNXTHERM:00: registered as thermal_zone0
[   15.549179] ACPI: Thermal Zone [TZ00] (28 C)
[   15.555635] thermal LNXTHERM:01: registered as thermal_zone1
[   15.561281] ACPI: Thermal Zone [TZ01] (30 C)
[   15.565943] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25109 usecs
[   15.572979] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.577919] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.584677] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.589914] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.595645] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5630 usecs
[   15.602855] calling  erst_init+0x0/0x2fc @ 1
[   15.607228] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.614730] pstore: Registered erst as persistent store backend
[   15.620703] initcall erst_init+0x0/0x2fc returned 0 after 13201 usecs
[   15.627206] calling  ghes_init+0x0/0x173 @ 1
[   15.631737] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35376 usecs
[   15.640129] \_SB_:_OSC request failed
[   15.643784] _OSC request data:1 1 0
[   15.647423] \_SB_:_OSC invalid UUID
[   15.650976] _OSC request data:1 1 0
[   15.654614] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.660856] initcall ghes_init+0x0/0x173 returned 0 after 28630 usecs
[   15.667356] calling  einj_init+0x0/0x522 @ 1
[   15.671753] EINJ: Error INJection is initialized.
[   15.676457] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs
[   15.682869] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.687721] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.693762] initcall ioat_init_module+0x0/0xb1 returned 0 after 5898 usecs
[   15.700623] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.705581] initcall virtio_mmio_init+0x0/0x14 returned 0 after 103 usecs
[   15.712357] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.718146] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 71 usecs
[   15.725701] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.730987] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.738093] calling  xenbus_init+0x0/0x3d @ 1
[   15.742655] initcall xenbus_init+0x0/0x3d returned 0 after 137 usecs
[   15.748997] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.754226] initcall xenbus_backend_init+0x0/0x51 returned 0 after 115 usecs
[   15.761259] calling  gntdev_init+0x0/0x4d @ 1
[   15.765797] initcall gntdev_init+0x0/0x4d returned 0 after 116 usecs
[   15.772138] calling  gntalloc_init+0x0/0x3d @ 1
[   15.776865] initcall gntalloc_init+0x0/0x3d returned 0 after 131 usecs
[   15.783379] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.788751] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.795941] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.800947] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 64 usecs
[   15.807728] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.813364] initcall platform_pci_module_init+0x0/0x1b returned 0 after 81 usecs
[   15.820749] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.826139] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 189 usecs
[   15.833261] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.838055] xen_pciback: backend is vpci
[   15.842091] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3967 usecs
[   15.848866] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.855161] xen_acpi_processor: Uploading Xen processor PM info
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:35] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:35] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:35] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:35] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:35] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:35] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:35] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:35] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:35] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:35] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:35] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:35] 	_PPC: 0
(XEN) [2014-01-22 03:41:35] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:35] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:35] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:35] CPU 0 initialization completed
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:35] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:35] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:35] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:35] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:35] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:35] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:35] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:35] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:35] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:35] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:35] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:35] 	_PPC: 0
(XEN) [2014-01-22 03:41:35] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:35] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:35] CPU2: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:35] CPU 2 initialization completed
(XEN) [2014-01-22 03:41:35] Set CPU acpi_id(3) cpuid(4) Px State info:
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:35] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:35] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:35] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:35] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:35] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:35] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:35] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU4: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 4 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(4) cpuid(6) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU6: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 6 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(5) cpuid(1) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU1: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 1 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(6) cpuid(3) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU3: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 3 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(7) cpuid(5) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU5: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 5 initialization completed
(XEN) [2014-01-22 03:41:36] Set CPU acpi_id(8) cpuid(7) Px State info:
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-22 03:41:36] 	_PSS: state_count=16
(XEN) [2014-01-22 03:41:36] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-22 03:41:36] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-22 03:41:36] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-22 03:41:36] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-22 03:41:36] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-22 03:41:36] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-22 03:41:36] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-22 03:41:36] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-22 03:41:36] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-22 03:41:36] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-22 03:41:36] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-22 03:41:36] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-22 03:41:36] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-22 03:41:36] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-22 03:41:36] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-22 03:41:36] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-22 03:41:36] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-22 03:41:36] 	_PPC: 0
(XEN) [2014-01-22 03:41:36] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-22 03:41:36] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-22 03:41:36] CPU7: Turbo Mode detected and enabled
(XEN) [2014-01-22 03:41:36] CPU 7 initialization completed
[   17.277335] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 1389575 usecs
[   17.285215] calling  pty_init+0x0/0x453 @ 1
[   17.299534] kworker/u2:1 (780) used greatest stack depth: 5488 bytes left
[   17.350706] initcall pty_init+0x0/0x453 returned 0 after 59805 usecs
[   17.357045] calling  sysrq_init+0x0/0xb0 @ 1
[   +0x0/0x228 returned 0 after 1028 usecs
[   17.379792] calling  serial8250_init+0x0/0x1ab @ 1
[   17.384641] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   17.412280] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   17.420676] initcall serial8250_init+0x0/0x1ab returned 0 after 35189 usecs
[   17.427626] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   17.433103] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 102 usecs
[   17.440399] calling  init_kgdboc+0x0/0x16 @ 1
[   17.444818] kgdb: Registered I/O driver kgdboc.
[   17.449442] initcall init_kgdboc+0x0/0x16 returned 0 after 4516 usecs
[   17.455913] calling  init+0x0/0x10f @ 1
[   17.460030] initcall init+0x0/0x10f returned 0 after 213 usecs
[   17.465853] calling  hpet_init+0x0/0x6a @ 1
[   17.470583] hpet_acpi_add: no address or irqs in _CRS
[   17.475710] initcall hpet_init+0x0/0x6a returned 0 after 5479 usecs
[   17.481964] calling  nvram_init+0x0/0x82 @ 1
[   17.486421] Non-volatile memory driver v1.3
[   17.490598] initcall nvram_init+0x0/0x82 returned 0 after 4200 usecs
[   17.497008] calling  mod_init+0x0/0x5a @ 1
[   17.501167] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   17.507321] calling  rng_init+0x0/0x12 @ 1
[   17.511615] initcall rng_init+0x0/0x12 returned 0 after 130 usecs
[   17.517696] calling  agp_init+0x0/0x26 @ 1
[   17.521856] Linux agpgart interface v0.103
[   17.526016] initcall agp_init+0x0/0x26 returned 0 after 4062 usecs
[   17.532254] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   17.537340] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 142 usecs
[   17.544371] calling  agp_intel_init+0x0/0x29 @ 1
[   17.549142] initcall agp_intel_init+0x0/0x29 returned 0 after 88 usecs
[   17.555656] calling  agp_sis_init+0x0/0x29 @ 1
[   17.560250] initcall agp_sis_init+0x0/0x29 returned 0 after 86 usecs
[   17.566592] calling  agp_via_init+0x0/0x29 @ 1
[   17.571187] initcall agp_via_init+0x0/0x29 returned 0 after 86 usecs
[   17.577530] calling  drm_core_init+0x0/0x10c @ 1
[   17.582288] [drm] Initialized drm 1.1.0 20060810
[   17.586899] initcall drm_core_init+0x0/0x10c returned 0 after 4580 usecs
[   17.593657] calling  cn_proc_init+0x0/0x3d @ 1
[   17.598167] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   17.604491] calling  topology_sysfs_init+0x0/0x70 @ 1
[   17.609636] initcall topology_sysfs_init+0x0/0x70 returned 0 after 32 usecs
[   17.616623] calling  loop_init+0x0/0x14e @ 1
[   17.674073] loop: module loaded
[   17.677239] initcall loop_init+0x0/0x14e returned 0 after 54961 usecs
[   17.683713] calling  mac_hid_init+0x0/0x22 @ 1
[   17.699621] initcall mac_hid_init+0x0/0x22 returned 0 after 15 usecs
[   17.706023] calling  macvlan_init_module+0x0/0x3d @ 1
[   17.711139] initcall macvlan_init_module+0x0/0x3d returned 0 after 1 usecs
[   17.718067] calling  macvtap_init+0x0/0x100 @ 1
[   17.722731] initcall macvtap_init+0x0/0x100 returned 0 after 69 usecs
[   17.729175] calling  net_olddevs_init+0x0/0xb5 @ 1
[   17.734013] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   17.740686] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   17.746108] libphy: Fixed MDIO Bus: probed
[   17.750204] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4216 usecs
[   17.757474] calling  tun_init+0x0/0x93 @ 1
[   17.761633] tun: Universal TUN/TAP device driver, 1.6
[   17.766744] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   17.773133] initcall tun_init+0x0/0x93 returned 0 after 11229 usecs
[   17.779390] calling  tg3_driver_init+0x0/0x1b @ 1
[   17.784277] initcall tg3_driver_init+0x0/0x1b returned 0 after 124 usecs
[   17.790966] calling  ixgbevf_init_module+0x0/0x4c @ 1
[   17.796076] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.11.3-k
[   17.805523] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
[   17.811802] initcall ixgbevf_init_module+0x0/0x4c returned 0 after 15356 usecs
[   17.819007] calling  forcedeth_pci_driver_init+0x0/0x1b @ 1
[   17.824734] initcall forcedeth_pci_driver_init+0x0/0x1b returned 0 after 92 usecs
[   17.832207] calling  netback_init+0x0/0x48 @ 1
[   17.836786] initcall netback_init+0x0/0x48 returned 0 after 74 usecs
[   17.843129] calling  nonstatic_sysfs_init+0x0/0x12 @ 1
[   17.848327] initcall nonstatic_sysfs_init+0x0/0x12 returned 0 after 0 usecs
[   17.855345] calling  yenta_cardbus_driver_init+0x0/0x1b @ 1
[   17.861110] initcall yenta_cardbus_driver_init+0x0/0x1b returned 0 after 127 usecs
[   17.868665] calling  mon_init+0x0/0xfe @ 1
[   17.873050] initcall mon_init+0x0/0xfe returned 0 after 219 usecs
[   17.879138] calling  ehci_hcd_init+0x0/0x5c @ 1
[   17.883728] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   17.890315] initcall ehci_hcd_init+0x0/0x5c returned 0 after 6431 usecs
[   17.896987] calling  ehci_pci_init+0x0/0x69 @ 1
[   17.901579] ehci-pci: EHCI PCI platform driver
[   17.906732] xen: registering gsi 16 triggering 0 polarity 1
[   17.912292] Already setup the GSI :16
[   17.916063] ehci-pci 0000:00:1a.0: EHCI Host Controller
[   17.921541] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1
[   17.928948] ehci-pci 0000:00:1a.0: debug port 2
[   17.937440] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[   17.944291] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf153c000
[   17.955454] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[   17.961327] usb usb1: New USB device found, idVendor=1d6b
usb usb1: Product: EHCI Host Controller
[   17.980321] usb usb1: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   17.987773] usb usb1: SerialNumber: 0000:00:1a.0
[   17.993106] hub 1-0:1.0: USB hub found
[   17.996880] hub 1-0:1.0: 3 ports detected
[   18.002367] xen: registering gsi 23 triggering 0 polarity 1
[   18.007929] Already setup the GSI :23
[   18.011681] ehci-pci 0000:00:1d.0: EHCI Host Controller
[   18.017166] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2
[   18.024576] ehci-pci 0000:00:1d.0: debug port 2
[   18.033050] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[   18.039892] ehci-pci 0000:00:1d.0: irq 23, io HCI 1.00
[   18.057317] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
[   18.064093] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   18.071368] usb usb2: Product: EHCI Host Controller
[   18.076307] usb usb2: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   18.083761] usb usb2: SerialNumber: 0000:00:1d.0
[   18.089073] hub 2-0:1.0: USB hub found
[   18.092849] hub 2-0:1.0: 3 ports detected
[   18.097886] initcall ehci_pci_init+0x0/0x69 returned 0 after 191705 usecs
[   18.104685] calling  ohci_hcd_mod_init+0x0/0x64 @ 1
[   18.109618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   18.115865] initcall ohci_hcd_mod_init+0x0/0x64 returned 0 after 6100 usecs
[   18.122878] calling  ohci_pci_init+0x0/0x69 @ 1
[   18.127470] ohci-pci: OHCI PCI platform driver
[   18.132112] initcall ohci_pci_init+0x0/0x69 returned 0 after 4531 usecs
[   18.138718] calling  uhci_hcd_init+0x0/0xb0 @ 1
[   18.143304] uhci_hcd: USB Universal Host Controller Interface driver
[   18.149847] initcall uhci_hcd_init+0x0/0xb0 returned 0 after 6388 usecs
[   18.156445] calling  usblp_driver_init+0x0/0x1b @ 1
[   18.161496] usbcore: registered new interface driver usblp
[   18.166965] initcall usblp_driver_init+0x0/0x1b returned 0 after 5450 usecs
[   18.173985] calling  kgdbdbgp_start_thread+0x0/0x4f @ 1
[   18.179272] initcall kgdbdbgp_start_thread+0x0/0x4f returned 0 after 0 usecs
[   18.186377] calling  i8042_init+0x0/0x3c5 @ 1
[   18.191062] i8042: PNP: No PS/2 controller found. Probing ports directly.
[   18.201127] serio: i8042 KBD port at 0x60,0x64 irq 1
[   18.206085] serio: i8042 AUX port at 0x60,0x64 irq 12
[   18.211371] initcall i8042_init+0x0/0x3c5 returned 0 after 20090 usecs
[   18.217883] calling  serport_init+0x0/0x34 @ 1
[   18.222388] initcall serport_init+0x0/0x34 returned 0 after 0 usecs
[   18.228714] calling  mousedev_init+0x0/0x62 @ 1
[   18.233520] mousedev: PS/2 mouse device common for all mice
[   18.239081] initcall mousedev_init+0x0/0x62 returned 0 after 5637 usecs
[   18.245751] calling  evdev_init+0x0/0x12 @ 1
[   18.250523] initcall evdev_init+0x0/0x12 returned 0 after 428 usecs
[   18.256780] calling  atkbd_init+0x0/0x27 @ 1
[   18.261221] initcall atkbd_init+0x0/0x27 returned 0 after 106 usecs
[   18.267473] calling  psmouse_init+0x0/0x82 @ 1
[   18.272163] initcall psmouse_init+0x0/0x82 returned 0 after 179 usecs
[   18.278593] calling  cmos_init+0x0/0x77 @ 1
[   18.282887] rtc_cmos 00:05: RTC can wake from S4
[   18.287901] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[   18.294046] rtc_cmos 00:05: alarms up to one month, y3k, 242 bytes nvram
[   18.301532] initcall cmos_init+0x0/0x77 returned 0 after 18249 usecs
[   18.307878] calling  i2c_i801_init+0x0/0xad @ 1
[   18.313060] xen: registering gsi 18 triggering 0 polarity 1
[   18.318623] Already setup the GSI :18
[   18.322442] i801_smbus 0000:00:1f.3: SMBus using PCI Interrupt
[   18.328570] initcall i2c_i801_init+0x0/0xad returned 0 after 15719 usecs
[   18.335262] calling  cpufreq_gov_dbs_init+0x0/0x12 @ 1
[   18.340470] initcall cpufreq_gov_dbs_init+0x0/0x12 returned -19 after 0 usecs
[   18.347657] calling  efivars_sysfs_init+0x0/0x220 @ 1
[   18.352769] initcall efivars_sysfs_init+0x0/0x220 returned -19 after 0 usecs
[   18.359873] calling  efivars_pstore_init+0x0/0xa2 @ 1
[   18.364986] initcall efivars_pstore_init+0x0/0xa2 returned 0 after 0 usecs
[   18.371920] calling  vhost_net_init+0x0/0x30 @ 1
[   18.377025] usb 1-1: new high-speed USB device number 2 using ehci-pci
[   18.383661] initcall vhost_net_init+0x0/0x30 returned 0 after 6896 usecs
[   18.390353] calling  vhost_init+0x0/0x8 @ 1
[   18.394594] initcall vhost_init+0x0/0x8 returned 0 after 0 usecs
[   18.400661] calling  staging_init+0x0/0x8 @ 1
[   18.405080] initcall staging_init+0x0/0x8 returned 0 after 0 usecs
[   18.411317] calling  zram_init+0x0/0x2fd @ 1
[   18.416495] zram: Created 1 device(s) ...
[   18.420498] initcall zram_init+0x0/0x2fd returned 0 after 4732 usecs
[   18.426909] calling  zs_init+0x0/0x90 @ 1
[   18.430985] initcall zs_init+0x0/0x90 returned 0 after 2 usecs
[   18.436874] calling  eeepc_laptop_init+0x0/0x5a @ 1
[   18.442076] initcall eeepc_laptop_init+0x0/0x5a returned -19 after 254 usecs
[   18.449117] calling  sock_diag_init+0x0/0x12 @ 1
[   18.453812] initcall sock_diag_init+0x0/0x12 returned 0 after 16 usecs
[   18.460381] calling  flow_cache_init_global+0x0/0x19a @ 1
[   18.465863] initcall flow_cache_init_global+0x0/0x19a returned 0 after 20 usecs
[   18.473206] calling  llc_init+0x0/0x20 @ 1
[   18.477366] initcall llc_init+0x0/0x20 returned 0 after 0 usecs
[   18.483345] calling  snap_init+0x0/0x38 @ 1
[   18.487593] initcall snap_init+0x0/0x38 returned 0 after 1 usecs
[   18.493657] calling  blackhole_module_init+0x0/0x12 @ 1
[   18.498946] initcall blackhole_module_init+0x0/0x12 returned 0 after 0 usecs
[   18.506053] calling  nfnetlink_init+0x0/0x59 @ 1
[   18.510731] Netfilter messages via NETLINK v0.30.
[   18.515513] initcall nfnetlink_init+0x0/0x59 returned 0 after 4669 usecs
[   18.522258] calling  nfnetlink_log_init+0x0/0xb6 @ 1
[   18.527294] initcall nfnetlink_log_init+0x0/0xb6 returned 0 after 9 usecs
[   18.534132] calling  nf_conntrack_standalone_init+0x0/0x82 @ 1
[   18.540024] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   18.546277] initcall nf_conntrack_standalone_init+0x0/0x82 returned 0 after 6105 usecs
[   18.554178] calling  ctnetlink_init+0x0/0xa4 @ 1
[   18.558857] ctnetlink v0.93: registering with nfnetlink.
[   18.564230] initcall ctnetlink_init+0x0/0xa4 returned 0 after 5247 usecs
[   18.570991] calling  nf_conntrack_ftp_init+0x0/0x1ca @ 1
[   18.576368] initcall nf_conntrack_ftp_init+0x0/0x1ca returned 0 after 4 usecs
[   18.583556] calling  nf_conntrack_irc_init+0x0/0x173 @ 1
[   18.588934] initcall nf_conntrack_irc_init+0x0/0x173 returned 0 after 3 usecs
[   18.596123] calling  nf_conntrack_sip_init+0x0/0x215 @ 1
[   18.601495] initcall nf_conntrack_sip_init+0x0/0x215 returned 0 after 0 usecs
[   18.608688] calling  xt_init+0x0/0x118 @ 1
[   18.612851] initcall xt_init+0x0/0x118 returned 0 after 1 usecs
[   18.618829] calling  tcpudp_mt_init+0x0/0x17 @ 1
[   18.623509] initcall tcpudp_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.630010] calling  connsecmark_tg_init+0x0/0x12 @ 1
[   18.635121] initcall connsecmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.642056] calling  nflog_tg_init+0x0/0x12 @ 1
[   18.646649] initcall nflog_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.653062] calling  secmark_tg_init+0x0/0x12 @ 1
[   18.657830] initcall secmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.664415] calling  tcpmss_tg_init+0x0/0x17 @ 1
[   18.669095] initcall tcpmss_tg_init+0x0/0x17 returned 0 after 0 usecs
[   18.675595] calling  conntrack_mt_init+0x0/0x17 @ 1
[   18.680536] initcall conntrack_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.687296] calling  policy_mt_init+0x0/0x17 @ 1
[   18.691975] initcall policy_mt_init+0x0/0x17 returned 0 after 0 usecs
[   18.698475] calling  state_mt_init+0x0/0x12 @ 1
[   18.703068] initcall state_mt_init+0x0/0x12 returned 0 after 0 usecs
[   18.709481] calling  sysctl_ipv4_init+0x0/0x92 @ 1
[   18.714361] initcall sysctl_ipv4_init+0x0/0x92 returned 0 after 26 usecs
[   18.721095] calling  tunnel4_init+0x0/0x72 @ 1
[   18.725602] initcall tunnel4_init+0x0/0x72 returned 0 after 0 usecs
[   18.731927] calling  ipv4_netfilter_init+0x0/0x12 @ 1
[   18.737041] initcall ipv4_netfilter_init+0x0/0x12 returned 0 after 0 usecs
[   18.743975] calling  nf_conntrack_l3proto_ipv4_init+0x0/0x17c @ 1
[   18.750231] initcall nf_conntrack_l3proto_ipv4_init+0x0/0x17c returned 0 after 100 usecs
[   18.758302] calling  nf_defrag_init+0x0/0x17 @ 1
[   18.762979] initcall nf_defrag_init+0x0/0x17 returned 0 after 0 usecs
[   18.769480] calling  ip_tables_init+0x0/0xaa @ 1
[   18.774173] ip_tables: (C) 2000-2006 Netfilter Core Team
[   18.779534] initcall ip_tables_init+0x0/0xaa returned 0 after 5248 usecs
[   18.786292] calling  iptable_filter_init+0x0/0x51 @ 1
[   18.791430] initcall iptable_filter_init+0x0/0x51 returned 0 after 23 usecs
[   18.798427] calling  iptable_mangle_init+0x0/0x51 @ 1
[   18.803557] initcall iptable_mangle_init+0x0/0x51 returned 0 after 17 usecs
[   18.810560] calling  reject_tg_init+0x0/0x12 @ 1
[   18.815240] initcall reject_tg_init+0x0/0x12 returned 0 after 0 usecs
[   18.821738] calling  ulog_tg_init+0x0/0x85 @ 1
[   18.826262] initcall ulog_tg_init+0x0/0x85 returned 0 after 16 usecs
[   18.832658] calling  cubictcp_register+0x0/0x5c @ 1
[   18.837599] TCP: cubic registered
[   18.840980] initcall cubictcp_register+0x0/0x5c returned 0 after 3302 usecs
[   18.847999] calling  xfrm_user_init+0x0/0x4a @ 1
[   18.852678] Initializing XFRM netlink socket
[   18.857024] initcall xfrm_user_init+0x0/0x4a returned 0 after 4243 usecs
[   18.863773] calling  inet6_init+0x0/0x370 @ 1
[   18.868273] NET: Registered protocol family 10
[   18.873054] initcall inet6_init+0x0/0x370 returned 0 after 4747 usecs
[   18.879475] calling  ah6_init+0x0/0x79 @ 1
[   18.883638] initcall ah6_init+0x0/0x79 returned 0 after 0 usecs
[   18.889615] calling  esp6_init+0x0/0x79 @ 1
[   18.893864] initcall esp6_init+0x0/0x79 returned 0 after 0 usecs
[   18.899929] calling  xfrm6_transport_init+0x0/0x17 @ 1
[   18.905128] initcall xfrm6_transport_init+0x0/0x17 returned 0 after 0 usecs
[   18.912150] calling  xfrm6_mode_tunnel_init+0x0/0x17 @ 1
[   18.917522] initcall xfrm6_mode_tunnel_init+0x0/0x17 returned 0 after 0 usecs
[   18.924715] calling  xfrm6_beet_init+0x0/0x17 @ 1
[   18.929482] initcall xfrm6_beet_init+0x0/0x17 returned 0 after 0 usecs
[   18.936068] calling  ip6_tables_init+0x0/0xaa @ 1
[   18.940848] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   18.946296] initcall ip6_tables_init+0x0/0xaa returned 0 after 5332 usecs
[   18.953142] calling  ip6table_filter_init+0x0/0x51 @ 1
[   18.958436] initcall ip6table_filter_init+0x0/0x51 returned 0 after 91 usecs
[   18.965466] calling  ip6table_mangle_init+0x0/0x51 @ 1
[   18.970707] initcall ip6table_mangle_init+0x0/0x51 returned 0 after 40 usecs
[   18.977772] calling  nf_conntrack_l3proto_ipv6_init+0x0/0x154 @ 1
[   18.983934] initcall nf_conntrack_l3proto_ipv6_init+0x0/0x154 returned 0 after 8 usecs
[   18.991899] calling  nf_defrag_init+0x0/0x54 @ 1
[   18.996587] initcall nf_defrag_init+0x0/0x54 returned 0 after 8 usecs
[   19.003078] calling  ipv6header_mt6_init+0x0/0x12 @ 1
[   19.008191] initcall ipv6header_mt6_init+0x0/0x12 returned 0 after 0 usecs
[   19.015125] calling  reject_tg6_init+0x0/0x12 @ 1
[   19.019890] initcall reject_tg6_init+0x0/0x12 returned 0 after 0 usecs
[   19.026479] calling  sit_init+0x0/0xcf @ 1
[   19.030637] sit: IPv6 over IPv4 tunneling driver
[   19.036315] initcall sit_init+0x0/0xcf returned 0 after 5543 usecs
[   19.042504] calling  packet_init+0x0/0x47 @ 1
[   19.046906] NET: Registered protocol family 17
[   19.051421] initcall packet_init+0x0/0x47 returned 0 after 4408 usecs
[   19.057912] calling  br_init+0x0/0xa2 @ 1
[   19.062005] initcall br_init+0x0/0xa2 returned 0 after 19 usecs
[   19.067964] calling  init_rpcsec_gss+0x0/0x64 @ 1
[   19.072773] initcall init_rpcsec_gss+0x0/0x64 returned 0 after 38 usecs
[   19.079408] calling  dcbnl_init+0x0/0x4d @ 1
[   19.083740] initcall dcbnl_init+0x0/0x4d returned 0 after 0 usecs
[   19.089894] calling  init_dns_resolver+0x0/0xe1 @ 1
[   19.094842] Key type dns_resolver registered
[   19.099164] initcall init_dns_resolver+0x0/0xe1 returned 0 after 4230 usecs
[   19.106201] calling  mcheck_init_device+0x0/0x123 @ 1
[   19.111591] initcall mcheck_init_device+0x0/0x123 returned 0 after 271 usecs
[   19.118650] calling  tboot_late_init+0x0/0x243 @ 1
[   19.123481] initcall tboot_late_init+0x0/0x243 returned 0 after 0 usecs
[   19.130157] calling  mcheck_debugfs_init+0x0/0x3c @ 1
[   19.135284] initcall mcheck_debugfs_init+0x0/0x3c returned 0 after 13 usecs
[   19.142290] calling  severities_debugfs_init+0x0/0x3c @ 1
[   19.147752] initcall severities_debugfs_init+0x0/0x3c returned 0 after 5 usecs
[   19.155025] calling  threshold_init_device+0x0/0x50 @ 1
[   19.160317] initcall threshold_init_device+0x0/0x50 returned 0 after 1 usecs
[   19.167420] calling  hpet_insert_resource+0x0/0x23 @ 1
[   19.172620] initcall hpet_insert_resource+0x0/0x23 returned 0 after 0 usecs
[   19.179640] calling  update_mp_table+0x0/0x56d @ 1
[   19.184493] initcall update_mp_table+0x0/0x56d returned 0 after 0 usecs
[   19.191166] calling  lapic_insert_resource+0x0/0x3f @ 1
[   19.196453] initcall lapic_insert_resource+0x0/0x3f returned 0 after 0 usecs
[   19.203558] calling  io_apic_bug_finalize+0x0/0x1b @ 1
[   19.208758] initcall io_apic_bug_finalize+0x0/0x1b returned 0 after 0 usecs
[   19.215777] calling  print_ICs+0x0/0x456 @ 1
[   19.220111] initcall print_ICs+0x0/0x456 returned 0 after 0 usecs
[   19.226265] calling  check_early_ioremap_leak+0x0/0x65 @ 1
[   19.231812] initcall check_early_ioremap_leak+0x0/0x65 returned 0 after 0 usecs
[   19.239177] calling  pat_memtype_list_init+0x0/0x32 @ 1
[   19.244464] initcall pat_memtype_list_init+0x0/0x32 returned 0 after 0 usecs
[   19.251572] calling  init_oops_id+0x0/0x40 @ 1
[   19.256080] initcall init_oops_id+0x0/0x40 returned 0 after 1 usecs
[   19.262404] calling  pm_qos_power_init+0x0/0x7b @ 1
[   19.267695] initcall pm_qos_power_init+0x0/0x7b returned 0 after 343 usecs
[   19.274556] calling  pm_debugfs_init+0x0/0x24 @ 1
[   19.279329] initcall pm_debugfs_init+0x0/0x24 returned 0 after 6 usecs
[   19.285908] calling  printk_late_init+0x0/0x44 @ 1
[   19.290763] initcall printk_late_init+0x0/0x44 returned 0 after 0 usecs
[   19.297435] calling  tk_debug_sleep_time_init+0x0/0x3d @ 1
[   19.302988] initcall tk_debug_sleep_time_init+0x0/0x3d returned 0 after 5 usecs
[   19.310349] calling  debugfs_kprobe_init+0x0/0x90 @ 1
[   19.315479] initcall debugfs_kprobe_init+0x0/0x90 returned 0 after 17 usecs
[   19.322482] calling  taskstats_init+0x0/0x73 @ 1
[   19.327170] registered taskstats version 1
[   19.331321] initcall taskstats_init+0x0/0x73 returned 0 after 4063 usecs
[   19.338082] calling  clear_boot_tracer+0x0/0x2d @ 1
[   19.343022] initcall clear_boot_tracer+0x0/0x2d returned 0 after 0 usecs
[   19.349781] calling  kdb_ftrace_register+0x0/0x2f @ 1
[   19.354895] initcall kdb_ftrace_register+0x0/0x2f returned 0 after 0 usecs
[   19.361827] calling  max_swapfiles_check+0x0/0x8 @ 1
[   19.366853] initcall max_swapfiles_check+0x0/0x8 returned 0 after 0 usecs
[   19.373701] calling  set_recommended_min_free_kbytes+0x0/0xa0 @ 1
[   19.379853] initcall set_recommended_min_free_kbytes+0x0/0xa0 returned 0 after 0 usecs
[   19.387827] calling  kmemleak_late_init+0x0/0x93 @ 1
[   19.392881] kmemleak: Kernel memory leak detector initialized
[   19.398661] initcall kmemleak_late_init+0x0/0x93 returned 0 after 5671 usecs
[   19.405768] calling  init_root_keyring+0x0/0xb @ 1
[   19.410641] initcall init_root_keyring+0x0/0xb returned 0 after 20 usecs
[   19.417380] calling  fail_make_request_debugfs+0x0/0x2a @ 1
[   19.423055] initcall fail_make_request_debugfs+0x0/0x2a returned 0 after 40 usecs
[   19.430552] calling  prandom_reseed+0x0/0x47 @ 1
[   19.435234] initcall prandom_reseed+0x0/0x47 returned 0 after 2 usecs
[   19.441733] calling  pci_resource_alignment_sysfs_init+0x0/0x19 @ 1
[   19.448065] initcall pci_resource_alignment_sysfs_init+0x0/0x19 returned 0 after 5 usecs
[   19.456208] calling  pci_sysfs_init+0x0/0x51 @ 1
[   19.465029] initcall pci_sysfs_init+0x0/0x51 returned 0 after 4045 usecs
[   19.471721] calling  boot_wait_for_devices+0x0/95509] initcall deferred_probe_initcall+0x0/0x70 returned 0 after 5781 usecs
[   19.503017] calling  late_resume_init+0x0/0x1d0 @ 1
[   19.507949]   Magic number: 14:70:665
[   19.511687] bdi 7:43: hash matches
[   19.515239] initcall late_resume_init+0x0/0x1d0 returned 0 after 7117 usecs
[   19.522186] calling  firmware_memmap_init+0x0/0x38 @ 1
[   19.527822] initcall firmware_memmap_init+0x0/0x38 returned 0 after 428 usecs
[   19.534944] calling  pci_mmcfg_late_insert_resources+0x0/0x50 @ 1
[   19.541096] initcall pci_mmcfg_late_insert_resources+0x0/0x50 returned 0 after 0 usecs
[   19.549068] calling  tcp_congestion_default+0x0/0x12 @ 1
[   19.554442] initcall tcp_congestion_default+0x0/0x12 returned 0 after 0 usecs
[   19.561634] calling  ip_auto_config+0x0/0xf1c @ 1
[   19.566406] initcall ip_auto_config+0x0/0xf1c returned 0 after 5 usecs
[   19.572989] calling  software_resume+0x0/0x290 @ 1
[   19.577841] PM: Hibernation image not present or could not be loaded.
[   19.584341] initcall software_resume+0x0/0x290 returned -2 after 6347 usecs
[   19.613629] async_waiting @ 1
[   19.616659] async_continuing @ 1 after 0 usec
[   19.621434] Freeing unused kernel memory: 1724K (ffffffff81cc1000 - ffffffff81e70000)
[   19.629251] Write protecting the kernel read-only data: 12288k
[   19.637705] Freeing unused kernel memory: 1244K (ffff8800016c9000 - ffff880001800000)
[   19.646030] Freeing unused kernel memory: 1912K (ffff880001a22000 - ffff880001c00000)
init started: BusyBox v1.14.3 (2014-01-20 09:47:53 EST)
Mounting directories  [  OK  ]
[   19.674355] mv (1495) used greatest stack depth: 5416 bytes left
[   19.681907] mkdir (14
[   19.709502] usb 1-1: New USB device found, idVendor=8087, idProduct=8008
[   19.716195] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   19.741503] hub 1-1:1.0: USB hub found
[   19.747424] hub 1-1:1.0: 6 ports detected
mount: mount point /proc/bus/usb does not exist
[   19.822797] calling  privcmd_init+0x0/0x1000 [xen_privcmd] @ 1522
[   19.829027] initcall privcmd_init+0x0/0x1000 [xen_privcmd] returned 0 after 144 usecs
[   19.837231] calling  xenfs_init+0x0/0x1000 [xenfs] @ 1522
[   19.842626] initcall xenfs_init+0x0/0x1000 [xenfs] returned 0 after 1 usecs
mount: mount point /sys/kernel/config does not exist
[   19.864438] usb 2-1: new high-speed USB device number 2 using ehci-pci
[   19.874912] calling  xenkbd_init+0x0/0x1000 [xen_modules/3.13.0upstream-02502-gec513b1/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
[   19.910345] calling  xenfb_init+0x0/0x1000 [xen_fbfront] @ 1536
[   19.916256] initcall xenfb_init+0x0/0x1000 [xen_fbfront] returned -19 after 0 usecs
FATAL: Error inserting xen_fbfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/video/xen-fbfront.ko): No such device
[   19.937850] calling  netif_init+0x0/0x1000 [xen_netfront] @ 1543
[   19.943848] xen_netfront: Initialising Xen virtual ethernet driver
[   19.950209] initcall netif_init+0x0/0x1000 [xen_netfront] returned 0 after 6210 usecs
[   19.960108] calling  xlblk_init+0x0/0x1000 [xen_blkfront] @ 1546
[   19.966232] initcall xlblk_init+0x0/0x1000 [xen_blkfront] returned 0 after 120 usecs
[   19.975587] load_xen_module (1530) used greatest stack depth: 4792 bytes left
[   19.991633] udevd (1552): /proc/1552/oom_adj is deprecated, please use /proc/1552/oom_score_adj instead.
[   20.001485] usb 2-1: New USB device found, idVendor=8087, idProduct=8000
[   20.008173] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   20.039496] hub 2-1:1.0: USB hub found
[   20.044435] hub 2-1:1.0: 8 ports detected
[   20.079257] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1561
[   20.085906] initcall acpi_cpufreq_init+0x0/0x100
[   20.108451] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1573
[   20.115062] initcall acpi_cpufreq_init+0x0/0x100nit+0x0/0x1000 [acpi_cpufreq] @ 1576
[   20.139861] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   20.155847] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1574
[   20.162461] initcall acpi_cpufreq_init+0x0/0x100] @ 1603
[   20.177100] initcall acpi_video_init+0x0/0xfee [video] returned 0 after 12 usecs
[   20.261238] calling  init_scsi+0x0/0x91 [scsi_mod] @ 1778
[   20.270883] SCSI subsystem initialized
[   20.274631] initcall init_scsi+0x0/0x91 [scsi_mod] returned 0 after 7810 usecs
[   20.284535] calling  igb_init_module+0x0/0x1000 [igb] @ 1787
[   20.290186] igb: Intel(R) Gigabit Ethernet Network Driver - 5.0.5-k
[   20.297212] igb: Copyright (c) 2007-2013 Intel Corporation.
[   20.303139] xen: registering gsi 17 triggering 0 polarity 1
[   20.308701] Already setup the GSI :17
[   20.314223] calling  sas_transport_init+0x0/0x1000 [scsi_transport_sas] @ 1778
[   20.343508] calling  drm_fb_helper_modinit+0x0/0x1000 [drm_kms_helper] @ 1802
[   20.356285] calling  e1000_init_module+0x0en: registering gsi 20 triggering 0 polarity 1
[   20.380536] xen: --> pirq=20 -> irq=20 (gsi=20)
[   20.385267] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   20.403401] calling  fb_console_init+0x0/0x1000 [fbcon] @ 1823
[   20.412445] initcall fb_console_init+0x0/0x1000 [fbcon] returned 0 after 3144 usecs
[   20.441188] initcall sas_transport_init+0x0/0x1000 [scsi_transport_sas] returned 0 after 116943 usecs
[   20.452295] initca0/0x1000 [drm_kms_helper] returned 0 after 99276 usecs
[   20.493808] calling  ata_init+0x0/0x4ce [libata] @ 1894
[   20.499447] calling  fusion_init+0x0/0x1000 [mptbase] @ 1778
[   20.505094] Fusion MPT base driver 3.04.20
[   20.509250] Copyright (c) 1999-2008 LSI Corporation
[   20.514209] initcall fusion_init+0x0/0x1000 [mptbase] returned 0 after 8899 usecs
[   20.531543] calling  mptsas_init+0x0/0x1000 [mptsas] @ 1778
[   20.537107] Fusion MPT SAS Host driver 3.04.20
[   20.541916] xen: registering gsi 16 triggering 0 polarity 1
[   20.547481] Already setup the GSI :16
[   20.564590] mptbase: ioc0: Initiating bringup
[   20.571614] calling  i915_init+0x0/0x68 [i915] @ 1802
[   20.587807] libata version 3.00 loaded.
[   20.591648] initcall ata_init+0x0/0x4ce [libata] returned 0 after 90440 usecs
[   20.601514] calling  ahci_pci_driver_init+0x0/0x1000 [ahci] @ 1889
udevd-work[1583]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: Permission denied

[   20.626756] igb 0000:02:00.0: added PHC on eth0
[   20.631278] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection
[   20.638210] igb 0000:02:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   20.645402] igb 0000:02:00.0: eth0: PBA No: Unknown
[   20.650341] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   20.658328] xen: registering gsi 18 triggering 0 polarity 1
[   20.663895] Already setup the GSI :18
[   20.733746] e1000e 0000:00:19.0 eth1: registered PHC clock
[   20.739226] e1000e 0000:00:19.0 eth1: (PCI Express:2.5GT/s:Width x1) 00:25:90:86:be:f0
[   20.747193] e1000e 0000:00:19.0 eth1: Intel(R) PRO/1000 Network Connection
[   20.754152] e1000e 0000:00:19.0 eth1: MAC: 11, PHY: 12, PBA No: 0100FF-0FF
[   20.761381] xen: registering gsi 16 triggering 0 polarity 1
[   20.766945] Already setup the GSI :16
[   20.770848] e1000e 0000:04:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   20.781045] xen: registering gsi 16 triggering 0 polarity 1
[   20.786609] Already setup the GSI :16
[   20.816233] [drm] Memory usable by graphics device = 2048M
[   20.853844] Failed to add WC MTRR for [00000000e0000000-00000000efffffff]; performance may suffer.
[   20.889782] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   20.896648] [drm] Driver supports precise vblank timestamp query.
[   20.903040] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   20.927898] igb 0000:02:00.1: added PHC on eth2
[   20.932427] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection
[   20.939356] igb 0000:02:00.1: eth2: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ad
[   20.946547] igb 0000:02:00.1: eth2: PBA No: Unknown
[   20.951485] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   20.959464] xen: registering gsi 19 triggering 0 polarity 1
[   20.965027] Already setup the GSI :19
[   21.086536] e1000e 0000:04:00.0 eth3: (PCI Express:2.5GT/s:Width x1) 00:15:17:8f:18:a2
[   21.094436] e1000e 0000:04:00.0 eth3: Intel(R) PRO/1000 Network Connection
[   21.101443] e1000e 0000:04:00.0 eth3: MAC: 0, PHY: 4, PBA No: D50868-003
[   21.108464] xen: registering gsi 17 triggering 0 polarity 1
[   21.114025] Already setup the GSI :17
[   21.117929] e1000e 0000:04:00.1: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   21.199003] i915 0000:00:02.0: No connectors reported connected with modes
[   21.205884] [drm] Cannot find any crtc or siz
[   21.404610] fbcon: inteldrmfb (fb0) is primary device
[   21.406495] ioc0: LSISAS1064E B3: Capabilities={Initiator}
[   2000:05:00.0: eth4: (PCIe:2.5Gb/s:Width x1) 00:25:90:86:be:f1
[   21.409299] igb 0000:05:00.0: eth4: PBA No: 011000-000
[   21.409300] igb 0000:05:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   21.409848] initcall igb_init_module+0x0/0x1000 [igb] returned 0 after 1093415 usecs
[   21.410155] modprobe (1787) used greatest stack depth: 4424 bytes left
[   21.469414] Console: switching to colour frame buffer device 128x48
[   21.575433] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
[   21.581671] i915 0000:00:02.0: registered panic notifier
[   21.606553] ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
[   21.626546] acpi device:61: registered as cooling_device6
[   21.676120] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5
[   21.730538] e1000e 0000:04:00.1 eth5: (PCI Express:2.5GT/s:Width x1) 00:15:17:8f:18:a3
[   21.738444] e1000e 0000:04:00.1 eth5: Intel(R) PRO/1000 Network Connection
[   21.745453] e1000e 0000:04:00.1 eth5: MAC: 0, PHY: 4, PBA No: D50868-003
[   21.754008] initcall e1000_init_module+0x0/0x1000 [e1000e] returned 0 after 1359007 usecs
[   21.798514] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[   21.806126] ahci 0000:00:1f.2: version 3.01
[   21.816462] Already setup the GSI :19
[   21.820423] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
[   21.828587] ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part ems apst
[   21.863614] modprobe (1807) used greatest stack depth: 4104 bytes left
[   21.922552] scsi0 : ahci
[   21.931583] initcall i915_init+0x0/0x68 [i915] returned 0 after 1323162 usecs
[   21.941218] scsi1 : ahci
[   21.948566] ip (2713) used greatest stack depth: 3720 bytes left
[   21.975594] scsi2 : ahci
[   21.995538] scsi3 : ahci
[   22.004917] modprobe (1802) used greatest stack depth: 2928 bytes left
[   22.018394] scsi4 : ahci
[   22.038000] scsi5 : ahci
[   22.047925] ata1: SATA max UDMA/133 abar m2048@0xf153a000 port 0x ata2: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a180 irq 66
[   22.062770] ata3: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a200 irq 66
[   22.070215] ata4: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a280 irq 66
[   22.077667] ata5: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a300 irq 66
[   22.085125] ata6: SATA max UDMA/133 abar m2048@0xf153a000 port 0xf153a380 irq 66
[   22.092885] xen: registering gsi 19 triggering 0 polarity 1
[   22.098443] Already setup the GSI :19
[   22.102364] ahci 0000:0a:00.0: SSS flag set, parallel bus scan disabled
[   22.109008] ahci 0000:0a:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
[   22.117123] ahci 0000:0a:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
[   22.137059] scsi6 : ahci
[   22.141152] scsi7 : ahci
[   22.144872] ata7: SATA max UDMA/133 abar m512@0xf1c00000 port 0xf
Waiting for devices [  OK  ]
[   22.431469] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[   22.437655] ata6: SATA link down (SStatus 0 SControl ata5: SATA link down (SStatus 0 SControl 300)
[   22.460539] ata4: SATA link down (SStatus 0 SControl 300)
[   22.472269] ata1.00: ATA-8: WDC WD740HLFS-01G6U4, 04.04V06, max UDMA/133
[   22.478964] ata1.00: 145226112 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[   22.486178] ata7: SATA link down (SStatus 0 SControl 300)
[   22.494553] ata1.00: configured for UDMA/133
[   22.499018] scsi 0:0:0:0: Direct-Access     ATA      WDC WD740HLFS-01 04.0 PQ: 0 ANSI: 5
[   22.642704] [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
[   22.812477] ata8: SATA link down (SStatus 0 SControl 300)
[   22.831694] calling  init_sd+0x0/0x1000 [sd_mod] @ 3283
[   22.837706] sd 0:0:0:0: [sda] 145226112 512-byte logical blocks:
[   22.856015] sd 0:0:0:0: [sda] Write Protect is off
[   22.860841] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   22.868410] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   22.893984]  sda: sda1 sda2
[   22.905029] sd 0:0:0:0: [sda] Attached SCSI disk
[   22.926614] calling  init_sg+0x0/0x1000 [sg] @ 3307
[   22.941347] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   22.946694] initcall init_sg+0x0/0x1000 [sg] returned 0 after 1
[   24.489493] random: nonblocking pool is initialized
[   31.264235] scsi8 : ioc0: LSISAS1064E B3, FwRev=011a0000h, Ports=1, MaxQ=266, IRQ=16
[   31.294185] initcall mptsas_init+0x0/0x1000 [mptsas] returned 0 after 10504955 usecs
Waiting for init.pre_custom [  OK  ]
Waiting for fb [  OK  ]
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7f236e34f000): Writting .. [1024:768]
Done!
[   31.439261] calling  ttm_init+0x0/0x1000 [ttm] @ 3391
[   31.444528] initcall ttm_init+0x0/0x1000 [ttm] returned 0
[   31.462437] calling  radeon_init+0x0/0xba [radeon] @ 3391
[   31.467829] [drm] radeon kernel modesetting enabled.
[   31.5575] wmi: Mapper loaded
[   31.498676] initcall acpi_wmi_init+0x0/0x1000 [wmi] returned 0 after 4803 usecs
[   31.508988] calling  mxm_wmi_init+0x0/0x1000 [mxm_wmi] @ 3396
[   31.514729] initcall mxm_wmi_init+0x0/0x1000 [mxm_wmi] returned 0 after 0 usecs
[   31.537248] calling  nouveau_drm_init+0x0/0x1000 [nouveau] @ 3396
[   31.543588] initcall nouveau_drm_init+0x0/0x1000 [nouveau] returned 0 after 240 usecs
Starting..[/dev/fb0]
/dev/fb0: len:0
/dev/fb0: bits/pixel32
(7f6f6c002000): Writting .. [1024:768]
Done!
VGA: 0000:00:02.0
Waiting for network [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [   31.905535] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[   31.923216] device eth0 entered promiscuous mode
[  OK Bringing up interface eth1:
Determining IP information for eth1...[   32.179912] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[   34.332970] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[   34.340506] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   35.845105] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   35.852684] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
 done.
[  OK  ]
Bringing up interface eth2:
Determining IP information for eth2...[   37.247699] IPv6: ADDRCONF(NETDEV_UP): eth2: link is not ready
[   39.910970] igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[   39.918496] IPv6: ADDRCONF(NETDEV_CHANGE done.
[  OK  ]
Bringing up interface switch:
Determining IP information for switch...[   41.042089] switch: port 1(eth0) entered forwarding state
[   41.047491] switch:  done.
[  OK  ]
Waiting for init.custom [  OK  ]

Starting SSHd ...

    SSH started [3961]


Waiting for SSHd [  OK  ]
WARNING: ssh currently running [3961] ignoring start request
[   42.392408] calling  crc32c_mod_init+0x0/0x1000 [crc32c] @ 3996
[   42.398383] initcall crc32c_mod_init+0x0/0x1000 [crc32c] returned 0 after 59 usecs
[   42.407994] calling  libcrc32c_mod_init+0x0/0x1000 [libcrc32c] @ 3999
[   42.414431] initcall libcrc32c_mod_init+0x0/0x1000 [libcrc32c] returned 0 after 4 usecs
[   42.425154] calling  iscsi_transport_init+0x0/0x1000 [scsi_transport_iscsi] @ 4001
[   42.432708] Loading iSCSI transport class v2.0-870.
[   42.438766] initcall iscsi_transport_init+0x0/0x1000 [scsi_transport_iscsi] returned 0 after 5912 usecs
[   42.450230] calling  iscsi_sw_tcp_init+0x0/0x1000 [iscsi_tcp] @ 4001
[   42.456813] iscsi: registered transport (tcp)
[   42.461163] initcall iscsi_sw_tcp_init+0x0/0x1000 [iscsi_tcp] returned 0 after 4475 usecs
iscsistart: transport class version 2.0-870. iscsid version 2.0-872
Could not get list of targets from firmware.
Jan 22 03:42:04 tst035 syslogd 1.5.0: restart.
Running in PV context on Xen v4.3.
[   42.521619] calling  evtchn_init+0x0/0x1000 [xen_evtchn] @ 4038
[   42.527945] xen:xen_evtchn: Event-channel device installinit+0x0/0x1000 [xen_evtchn] returned 0 after 5755 usecs
Starting C xenstored...Jan 22 03:42:04 tst035 xenstored: Checking store ...
Jan 22 03:42:04 tst035 xenstored: Checking store complete.

Setting domain 0 name...
Starting xenconsoled...
Starting QEMU as disk backend for dom0
[0:0:0:0]    disk    ATA      WDC WD740HLFS-01 04.0  /dev/sda
00:00.0 Host bridge: Intel Corporation Device 0c08 (rev 06)
00:01.0 PCI bridge: Intel Corporation Device 0c01 (rev 06)
00:01 Corporation Device 0c05 (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Device 041a (rev 06)
00:03.0 Audio device: Intel Corporation Device 0c0c (rev 06)
00:14.0 USB Controller: Intel Corporation Device 8c31 (rev 04)
00:16.0 Communication controller: Intel Corporation Device 8c3a (rev 04)
00:19.0 Ethernet controller: Intel Corporation Device 153a (rev 04)
00:1a.0 USB Controller: Intel Corporation Device 8c2d (rev 04)
00:1b.0 Audio device: Intel Corporation Device 8c20 (rev 04)
00:1c.0 PCI bridge: Intel Corporation Device 8c10 (rev d4)
00:1c.3 PCI bridge: Intel Corporation Device 8c16 (rev d4)
00:1c.5 PCI bridge: Intel Corporation Device 8c1a (rev d4)
00:1c.6 PCI bridge: Intel Corporation Device 8c1c (rev d4)
00:1c.7 PCI bridge: Intel Corporation Device 8c1e (rev d4)
00:1d.0 USB Controller: Intel Corporation Device 8c26 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Device 8c56 (rev 04)
00:1f.2 SATA controller: Intel Corporation Device 8c02 (rev 04)
00:1f.3 SMBus: Intel Corporation Device 8c22 (rev 04)
00:1f.6 Signal processing controller: Intel Corporation Device 8c24 (rev 04)
01:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 08)
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
07:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent=
 mode) (rev 11)
07:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000=
 Controller (PHY/Link)
08:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capt=
ure (rev 11)
08:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (r=
ev 11)
08:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capt=
ure (rev 11)
08:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (r=
ev 11)
08:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capt=
ure (rev 11)
08:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (r=
ev 11)
08:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capt=
ure (rev 11)
08:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (r=
ev 11)
09:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
0a:00.0 SATA controller: Device 1b21:0612 (rev 01)
           CPU0
  1:          2  xen-pirq-ioapic-edge  i8042
  8:          1  xen-pirq-ioapic-edge  rtc0
  9:       timer0
 41:          0  xen-percpu-ipi       spinlock0
 42:          0  xen-percpu-ipi       resched0
 43:          0  xen-percpu-ipi       callfunc0
 44:          0  xen-percpu-virq      debug0
 45:          0  xen-percpu-ipi       callfuncsingle0
 46:       1037  xen-percpu-ipi       irqwork0
 47:         77   xen-dyn-event     xenbus
 48:          0  xen-percpu-virq      xen-pcpu
 51:          0  xen-percpu-virq      mce
 52:        313  xen-percpu-virq      hvc_console
 53:          1  xen-pirq-msi-x     eth0
 54:         31  xen-pirq-msi-x     eth0-rx-0
 55:         27  xen-pirq-msi-x     eth0-tx-0
 56:         38  xen-pirq-msi       eth1
 57:          1  xen-pirq-msi-x     eth2
 58:         12  xen-pirq-msi-x     eth2-rx-0
 59:         16  xen-pirq-msi-x     eth2-tx-0
 61:         26  xen-pirq-msi       i915
 66:         10  xen-pirq-msi       ahci
 67:          0  xen-pirq-msi       ahci
 68:         24   xen-dyn-event     evtchn:xenstored
 69:          0   xen-dyn-event     evtchn:xenstored
NMI:          0   Non-maskable interrupts
LOC:          0   Local timer interrupts
SPU:          0   Spurious interrupts
PMI:          0   Performance monitoring interrupts
IWI:       1037   IRQ work interrupts
RTR:          0   APIC ICR read retries
RES:          0   Rescheduling interrupts
CAL:          0   Function call interrupts
TLB:          0   TLB shootdowns
TRM:          0   Thermal event interrupts
THR:          0   Threshold APIC interrupts
MCE:          0   Machine check exceptions
MCP:          1   Machine check polls
ERR:          0
MIS:          0
00000000-00000fff : reserved
00001000-00098fff : System RAM
00099000-00099bff : RAM buffer
00099c00-000fffff : reserved
  000a0000-000bffff : PCI Bus 0000:00
  000c0000-000ce9ff : Video ROM
  000cf000-000cf7ff : Adapter ROM
  000cf800-000d07ff : Adapter ROM
  000d0800-000d17ff : Adapter ROM
  000d1800-000d27ff : Adapter ROM
  000d2800-000d37ff : Adapter ROM
  000d3800-000d47ff : Adapter ROM
  000d8000-000dbfff : PCI Bus 0000:00
  000dc000-000dffff : PCI Bus 0000:00
  000e0000-000e3fff : PCI Bus 0000:00
  000e4000-000e7fff : PCI Bus 0000:00
  000f0000-000fffff : System ROM
00100000-80066fff : System RAM
  01000000-016c5ebc : Kernel code
  016c5ebd-01cbfa3f : Kernel data
  01e78000-01fd0fff : Kernel bss
80067000-a58f0fff : Unusable memory
a58f1000-a58f7fff : ACPI Non-volatile Storage
a58f8000-a61b0fff : Unusable memory
a61b1000-a6596fff : reserved
a6597000-b74b3fff : Unusable memory
b74b4000-b76cafff : reserved
  b7648018-b764803e : APEI ERST
  b764803f-b764a03e : APEI ERST
  b76c9d98-b76c9d98 : APEI EINJ
  b76c9d9a-b76c9da6 : APEI EINJ
  b76c9da8-b76c9daf : APEI EINJ
b76cb000-b770bfff : Unusable memory
b770c000-b77b8fff : ACPI Non-volatile Storage
b77b9000-b7ffefff : reserved
b7fff000-b7ffffff : Unusable memory
bc000000-be1fffff : reserved
  bc200000-be1fffff : Graphics Stolen Memory
be200000-feafffff : PCI Bus 0000:00
  e0000000-efffffff : 0000:00:02.0
  f0000000-f03fffff : 0000:00:02.0
  f0400000-f14fffff : PCI Bus 0000:02
    f0400000-f07fffff : 0000:02:00.1
    f0800000-f0bfffff : 0000:02:00.1
      f0800000-f0bfffff : igb
    f0c00000-f0ffffff : 0000:02:00.0
    f1000000-f13fffff : 0000:02:00.0
      f1000000-f13fffff : igb
    f1400000-f141ffff : 0000:02:00.1
      f1400000-f141ffff : igb
    f1420000-f143ffff : 0000:02:00.0
      f1420000-f143ffff : igb
    f1440000-f1443fff : 0000:02:00.1
      f1440000-f1443fff : igb
    f1444000-f1447fff : 0000:02:00.0
      f1444000-f1447fff : igb
    f1448000-f1467fff : 0000:02:00.0
    f1468000-f1487fff : 0000:02:00.0
    f1488000-f14a7fff : 0000:02:00.1
    f14a8000-f14c7fff : 0000:02:00.1
  f1500000-f151ffff : 0000:00:19.0
    f1500000-f151ffff : e1000e
  f1520000-f152ffff : 0000:00:14.0
  f1530000-f1533fff : 0000:00:1b.0
  f1534000-f1537fff : 0000:00:03.0
  f1538000-f1538fff : 0000:00:1f.6
  f1539000-f15390ff : 0000:00:1f.3
  f153a000-f153a7ff : 0000:00:1f.2
    f153a000-f153a7ff : ahci
  f153b000-f153b3ff : 0000:00:1d.0
    f153b000-f153b3ff : ehci_hcd
  f153c000-f153c3ff : 0000:00:1a.0
    f153c000-f153c3ff : ehci_hcd
  f153d000-f153dfff : 0000:00:19.0
    f153d000-f153dfff : e1000e
  f153f000-f153f00f : 0000:00:16.0
  f1600000-f18fffff : PCI Bus 0000:01
    f1600000-f17fffff : 0000:01:00.0
    f1800000-f180ffff : 0000:01:00.0
      f1800000-f180ffff : mpt
    f1810000-f1813fff : 0000:01:00.0
      f1810000-f1813fff : mpt
  f1a00000-f1bfffff : PCI Bus 0000:06
    f1a00000-f1bfffff : PCI Bus 0000:07
      f1a00000-f1afffff : PCI Bus 0000:08
        f1a00000-f1a00fff : 0000:08:0b.1
        f1a01000-f1a01fff : 0000:08:0b.0
        f1a02000-f1a02fff : 0000:08:0a.1
        f1a03000-f1a03fff : 0000:08:0a.0
        f1a04000-f1a04fff : 0000:08:09.1
        f1a05000-f1a05fff : 0000:08:09.0
        f1a06000-f1a06fff : 0000:08:08.1
        f1a07000-f1a07fff : 0000:08:08.0
      f1b00000-f1b03fff : 0000:07:03.0
      f1b04000-f1b047ff : 0000:07:03.0
  f1c00000-f1cfffff : PCI Bus 0000:0a
    f1c00000-f1c001ff : 0000:0a:00.0
      f1c00000-f1c001ff : ahci
  f1d00000-f1dfffff : PCI Bus 0000:09
    f1d00000-f1d01fff : 0000:09:00.0
  f1e00000-f1efffff : PCI Bus 0000:05
    f1e00000-f1e7ffff : 0000:05:00.0
      f1e00000-f1e7ffff : igb
    f1e80000-f1e83fff : 0000:05:00.0
      f1e80000-f1e83fff : igb
  f1f00000-f1ffffff : PCI Bus 0000:04
    f1f00000-f1f1ffff : 0000:04:00.1
    f1f20000-f1f3ffff : 0000:04:00.1
      f1f20000-f1f3ffff : e1000e
    f1f40000-f1f5ffff : 0000:04:00.1
      f1f40000-f1f5ffff : e1000e
    f1f60000-f1f7ffff : 0000:04:00.0
    f1f80000-f1f9ffff : 0000:04:00.0
      f1f80000-f1f9ffff : e1000e
    f1fa0000-f1fbffff : 0000:04:00.0
      f1fa0000-f1fbffff : e1000e
  f7fef000-f7feffff : pnp 00:0c
  f7ff0000-f7ff0fff : pnp 00:0c
  f8000000-fbffffff : PCI MMCONFIG 0000 [bus 00-3f]
    f8000000-fbffffff : reserved
      f8000000-fbffffff : pnp 00:0c
fec00000-fec00fff : reserved
  fec00000-fec003ff : IOAPIC 0
fed00000-fed03fff : reserved
  fed00000-fed003ff : HPET 0
fed10000-fed17fff : pnp 00:0c
fed18000-fed18fff : pnp 00:0c
fed19000-fed19fff : pnp 00:0c
fed1c000-fed1ffff : reserved
  fed1c000-fed1ffff : pnp 00:0c
fed20000-fed3ffff : pnp 00:0c
fed40000-fed44fff : pnp 00:00
fed45000-fed8ffff : pnp 00:0c
fed90000-fed93fff : pnp 00:0c
fee00000-feefffff : reserved
  fee00000-feefffff : pnp 00:0c
    fee00000-fee00fff : Local APIC
ff000000-ffffffff : reserved
  ff000000-ffffffff : pnp 00:0c
100000000-23fdfffff : Unusable memory
MemTotal:        1979800 kB
MemFree:         1612432 kB
Buffers:               0 kB
Cached:           275408 kB
SwapCached:            0 kB
Active:            66460 kB
Inactive:         212104 kB
Active(anon):      66460 kB
Inactive(anon):   212104 kB
Active(file):          0 kB
Inactive(file):        0 kB
Unevictable:        4956 kB
Mlocked:            4956 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:          8164 kB
Mapped:             6308 kB
Shmem:            275408 kB
Slab:              66296 kB
SReclaimable:      10688 kB
SUnreclaim:        55608 kB
KernelStack:         632 kB
PageTables:         1120 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      989900 kB
Committed_AS:     307792 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      342828 kB
VmallocChunk:   34359391228 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     2097564 kB
DirectMap2M:           0 kB
Waiting for init.late [  OK  ]
+ . /etc/init.d/functions
++ TEXTDOMAIN=initscripts
++ umask 022
++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
++ export PATH
++ '[' -z '' ']'
++ COLUMNS=80
++ '[' -z '' ']'
+++ /sbin/consoletype
++ CONSOLETYPE=serial
++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']'
++ '[' -z '' ']'
++ '[' -f /etc/sysconfig/init ']'
++ . /etc/sysconfig/init
+++ BOOTUP=color
+++ RES_COL=60
+++ MOVE_TO_COL='echo -en \033[60G'
+++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
+++ SETCOLOR_FAILURE='echo -en \033[0;31m'
+++ SETCOLOR_WARNING='echo -en \033[0;33m'
+++ SETCOLOR_NORMAL='echo -en \033[0;39m'
+++ LOGLEVEL=3
+++ PROMPT=yes
+++ AUTOSWAP=no
+++ ACTIVE_CONSOLES='/dev/tty[1-6]'
++ '[' serial = serial ']'
++ BOOTUP=serial
++ MOVE_TO_COL=
++ SETCOLOR_SUCCESS=
++ SETCOLOR_FAILURE=
++ SETCOLOR_WARNING=
++ SETCOLOR_NORMAL=
++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
+ mkdir -p /mnt/lab
+ echo '192.168.102.1  build'
+ ping -q -c 1 build
PING build.dumpdata.com (192.168.102.1) 56(84) bytes of data.

--- build.dumpdata.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
+ '[' 0 -eq 0 ']'
+ mount build:/srv/tftpboot/lab /mnt/lab -o vers=3
+ iscsiadm -m discovery -t st -p build
192.168.102.1:3260,1 iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6
+ iscsiadm -m node -L all
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6, portal: 192.168.102.1,3260]
[   47.484612] scsi9 : iSCSI Initiator over TCP/IP
[   47.748549] scsi 9:0:0:0: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 5
[   47.762708] sd 9:0:0:0: [sdb] 1G
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6, portal: 192.168.102.1,3260] successful.
[   47.795026] sd 9:0:0:0: [sdb] Write Protect is off
[   47.799813] sd 9:0:0:0: [sdb] Mode Sense: 2f 00 00 00
+ modprobe dm-multipath
[   47.807682] sd 9:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[   47.820163] calling  dm_init+0x0/0x48 [dm_mod] @ 4114
[   47.825992] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[   47.834420] initcall dm_init+0x0/0x48 [dm_mod] returned 0 after 8994 usecs
[   47.842507] calling  dm_multipath_init+0x0/0x1000 [dm_multipath] @ 4114
[   47.849344] device-mapper: multipath: version 1.6.0 loaded
[   47.854857] initcall dm_multipath_init+0x0/0x1000 [dm_multipath] returned 0 after 5558 usecs
+ sleep 5
[   47.884507]  sdb: unknown partition table
[   47.895985] sd 9:0:0:0: [sdb] Attached SCSI disk
Jan 22 03:42:10 tst035 iscsid: Connection1:0 to [target: iqn.2003-01.org.linux-iscsi.target:sn.bd5777dc54e541e0bd772761999275e6,
+ pvscan
  Incorrect metadata area header checksum
  PV /dev/sdb    VG guests          lvm2 [931.44 GiB / 452.00 MiB free]
  PV /dev/sda1                      lvm2 [69.24 GiB]
  Total: 2 [1000.68 GiB] / in use: 1 [931.44 GiB] / in no VG: 1 [69.24 GiB]
+ vgchange -aly
  Incorrect metadata area header checksum
[   52.969799] bio: create slab <bio-1> at 1
  32 logical volume(s) in volume group "guests" now active
+ echo 'NFS done'
NFS done
+ xl info
host                   : tst035.dumpdata.com
release                : 3.13.0upstream-02502-gec513b1
version                :      : 2
cpu_mhz                : 3400
hw_caps                : bfebfbff:2c100800:00000000:00007f00:77fafbff:00000000:00000021:00002fbb
virt_caps              : hvm hvm_directio
total_memory           : 8046
free_memory            : 4860
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .2-pre
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Fri Jan 17 16:37:06 2014 +0100 git:7261a3f-dirty
xen_commandline        : dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
cc_compiler            : gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)
cc_compile_by          : konrad
cc_compile_domain      : (none)
cc_compile_date        : Tue Jan 21 14:30:34 EST 2014
xend_config_format     : 4
+ '[' -e /sys/kernel/debug/xen/mmu/p2m ']'
+ cat /sys/kernel/debug/xen/mmu/p2m
 [0x0->0x99] pfn
 [0x99->0x100] identity
 [0x100->0x80067] pfn
 [0x0->0x80200] level entry
 [0x80200->0xa5800] level mid400->0xa6600] level entry
 [0xa6600->0xb7400] level middle
 [0xa6597->0xb74b4] missing
 [0xb74b4->0xb76cb] identity
 [0xb76cb->0xb770c] missing
 [0xb7400->0xb7800] level entry
 [0xb7800->0xb7e00] level middle
 [0xb770c->0xb7fff] identity
 [0xb7fff->0xb8000] missing
 [0xb7e00->0xb8000] level entry
 [0xb8000->0x100000] identity
 [0xb8000->0x100000] level middle
 [0x100000->0x7cfffff] missing
 [0x100000->0x7cfffff] level top
++ boot_parameter cpuplug
++ local param
++ local value
+++ cat /proc/cmdline
+++ tr -s ' \t' '\n'
+++ tail -1
+++ egrep '^cpuplug=|^cpuplug$'
++ param=
++ '[' -z '' ']'
++ return 1
+ HOTPLUG=
+ '[' 1 -eq 0 ']'
+ TMEM_LOADED=0
+ swapon /dev/xvda
swapon: /dev/xvda: stat failed: No such file or directory
+ '[' 255 -eq 0 ']'
++ boot_parameter run
++ local param
++ local value
+++ cat /proc/cmdline
+++ tr -s ' \t' '\n'
+++ egrep '^run=|^run$'
+++ tail -1
++ param=
++ '[' -z '' ']'
++ return 1
+ HOTPLUG=
+ '[' 1 -eq 0 ']'
Jan 22 03:42:16 tst035 init: starting pid 4314, tty '/dev/tty1': '/bin/sh'


BusyBox v1.14.3 (2014-01-20 09:47:53 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Jan 22 03:42:18 tst035 init: reloading /etc/inittab
Jan 22 03:42:18 tst035 init: starting pid 4318, tty '/dev/hvc0': '/bin/sh'
#

[   56.078481] switch: port 1(eth0) entered forwarding state
[  680.637843] kmemleak: 6 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
Jan 22 04:02:04 tst035 -- MARK --
Jan 22 04:22:04 tst035 -- MARK --
Jan 22 04:42:04 tst035 -- MARK --
Jan 22 05:02:04 tst035 -- MARK --
Jan 22 05:20:12 tst035 sshd[4320]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Jan 22 05:20:12 tst035 sshd[4320]: Accepted publickey for root from 192.168.102.1 port 51799 ssh2
Jan 22 05:20:12 tst035 sshd
Jan 22 05:20:12 tst035 sshd[4322]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Jan 22 05:21:12 tst035 sshd[4320]: syslogin_perform_logout: logout() returned an error
Jan 22 05:21:12 tst035 sshd[4320]: Rec
Jan 22 05:21:38 tst035 sshd[4338]: WARNING: /etc/ssh/moduli does not exist, using fixed modulus
Jan 22 05:21:38 tst035 sshd[4338]: reverse mapping checking getaddrinfo for ns.build.dumpdata.com [192.168.102.1] failed - POSSIBLE BREAK-IN ATTEMPT!
Jan 22 05:21:38 tst035 sshd[4338]: Accepted publickey for root from 192.168.102.1 port 51800 ssh2
Jan 22 05:21:38 tst035 sshd
[ 6032.451880] pci 0000:03:10.0: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.0
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.0
[ 6032.474112] calling  igbvf_init_module+0x0/0x1000 [igbvf] @ 4350
[ 6032.480110] igbvf: Intel(R) Gigabit Virtual Function Network Driver - version 2.0.2-k
[ 6032.487991] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[ 6032.494017] igbvf 0000:03:10.0: enabling device (0000 -> 0002)
[ 6032.501330] igbvf 0000:03:10.0: PF still in reset state. Is the PF interface up?
[ 6032.508714] igbvf 0000:03:10.0: Assigning random MAC address.
[ 6032.515682] igbvf 0000:03:10.0: PF still resetting
[ 6032.521001] pci 0000:03:10.2: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.2
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.2
[ 6032.541643] igbvf 0000:03:10.0: Intel(R) 82576 Virtual Function
[ 6032.547551] igbvf 0000:03:10.0: Address: b6:86:eb:db:15:f6
[ 6032.553719] initcall igbvf_init_module+0x0/0x1000 [igbvf] returned 0 after 71881 usecs
[ 6032.561720] igbvf 0000:03:10.2: enabling device (0000 -> 0002)
[ 6032.569002] igbvf 0000:03:10.2: PF still in reset state. Is the PF interface up?
[ 6032.576382] igbvf 0000:03:10.2: Assigning random MAC address.
[ 6032.583351] igbvf 0000:03:10.2: PF still resetting
[ 6032.613619] igbvf 0000:03:10.2: Intel(R) 82576 Virtual Function
[ 6032.619528] igbvf 0000:03:10.2: Address: 52:d1:5a:32:a8:
[ 6032.628452] pci 0000:03:10.4: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:54] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.4
(XEN) [2014-01-22 05:21:54] PCI add virtual function 0000:03:10.4
[ 6032.653556] igbvf 0000:03:10.4: enabling device (0000 -> 0002)
[ 6032.660842] igbvf 0000:03:10.4: PF still in reset state. Is the PF interface up?
[ 6032.668222] igbvf 0000:03:10.4: Assigning random MAC address.
[ 6032.675190] igbvf 0000:03:10.4: PF still resetting
[ 6032.690590] igbvf 0000:03:10.4: Intel(R) 82576 Virtual Function
[ 6032.696504] igbvf 0000:03:10.4: Address: 9a:db:cc:5b:1d:
[ 6032.710504] pci 0000:03:10.6: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:10.6
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:10.6

[ 6032.749540] igbvf 0000:03:10.6: enabling device (0000 -> 0002)
[ 6032.756842] igbvf 0000:03:10.6: PF still in reset state.
[ 6032.788590] igbvf 0000:03:10.6: Intel(R) 82576 Virtual Function
[ 6032.794495] igbvf 0000:03:10.6: Address: 86:16:04:6b:30:b6
[ 6032.809451] pci 0000:03:11.0: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.0
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.0
[ 6032.847522] igbvf 0000:03:11.0: enabling device (0000 -> 0002)
[ 6032.854821] igbvf 0000:03:11.0: PF still in reset state.
igbvf 0000:03:11.0: Assigning random MAC address.
[ 6032.869203] igbvf 0000:03:11.0: PF still resetting

[ 6032.885594] igbvf 0000:03:11.0: Intel(R) 82576 Virtual Function
[ 6032.891501] igbvf 0000:03:11.0: Address: 96:56:63:e1:3e:
[ 6032.905450] pci 0000:03:11.2: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.2
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.2
[ 6032.930517] igbvf 0000:03:11.2: enabling device (0000 -> 0002)
[ 6032.937821] igbvf 0000:03:11.2: PF still in reset state.
[ 6032.945205] igbvf 0000:03:11.2: Assigning random MAC address.
[ 6032.952201] igbvf 0000:03:11.2: PF still resetting
[ 6032.967772] igbvf 0000:03:11.2: Intel(R) 82576 Virtual Function
[ 6032.973683] igbvf 0000:03:11.2: Address: 2a:bc:5
[ 6032.988450] pci 0000:03:11.4: [8086:10ca] type 00 class 0x020000
(XEN) [2014-01-22 05:21:55] [VT-D]iommu.c:1444: d0:PCIe: map 0000:03:11.4
(XEN) [2014-01-22 05:21:55] PCI add virtual function 0000:03:11.4
[ 6033.026537] igbvf 0000:03:11.4: enabling device (0000 -> 0002)
[ 6033.033850] igbvf 0000:03:11.4: PF still in reset state.
[ 6033.065273] igbvf 0000:03:11.4: Intel(R) 82576 Virtual Function
[ 6033.071184] igbvf 0000:03:11.4: Address: 32:43:7e:18:4b:b6
[ 6033.082456] igb 0000:02:00.0: 7 VFs allocated
[ 6033.554519] switch: port 1(eth0) entered disabled state
[ 6036.406965] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 6036.414517] switch: port 1(eth0) entered

[ 6051.438490] switch: port 1(eth0) entered forwarding state
Jan 22 05:21:38 tst035 sshd[4340]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Jan 22 05:36:02 tst035 sshd[4338]: syslogin_perform_logout: logout() returned an error
Jan 22 05:36:02 tst035 sshd[4338]: Received disconnect from 192.168.102.1: 11: disconnected by user

--rwEMma7ioTxnRzrJ
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-xen-4.4.log"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 21 16:32:29 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.082 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 05:37:48] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 05:37:48] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-22 05:37:48] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-22 05:37:48] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 05:37:48] VMX: Supported advanced features:
(XEN) [2014-01-22 05:37:48]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 05:37:48]  - APIC TPR shadow
(XEN) [2014-01-22 05:37:48]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 05:37:48]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 05:37:48]  - Virtual NMI
(XEN) [2014-01-22 05:37:48]  - MSR direct-access bitmap
(XEN) [2014-01-22 05:37:48]  - Unrestricted Guest
(XEN) [2014-01-22 05:37:48]  - VMCS shadowing
(XEN) [2014-01-22 05:37:48] HVM: ASIDs enabled.
(XEN) [2014-01-22 05:37:48] HVM: VMX enabled
(XEN) [2014-01-22 05:37:48] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 05:37:48] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 05:37:48] Brought up 8 CPUs
(XEN) [2014-01-22 05:37:48] ACPI sleep modes: S3
(XEN) [2014-01-22 05:37:48] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa22000
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc00f0
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1cc1000 memsz=0x14d80
(XEN) [2014-01-22 05:37:48] elf_parse_binary: phdr: paddr=0x1cd6000 memsz=0x71e000
(XEN) [2014-01-22 05:37:48] elf_parse_binary: memory: 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 05:37:49] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 05:37:49] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 05:37:49]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 05:37:49]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 05:37:49]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 05:37:49]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 05:37:49]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 05:37:49]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 05:37:49]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 05:37:49]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 05:37:49] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 05:37:49]  Dom0 alloc.:   000000022c000000->0000000230000000 (487006 pages to be allocated)
(XEN) [2014-01-22 05:37:49]  Init. ramdisk: 000000023abe5000->000000023fd86cda
(XEN) [2014-01-22 05:37:49] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 05:37:49]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 05:37:49]  Init. ramdisk: ffffffff823f4000->ffffffff87595cda
(XEN) [2014-01-22 05:37:49]  Phys-Mach map: ffffffff87596000->ffffffff87996000
(XEN) [2014-01-22 05:37:49]  Start info:    ffffffff87996000->ffffffff879964b4
(XEN) [2014-01-22 05:37:49]  Page tables:   ffffffff87997000->ffffffff879d8000
(XEN) [2014-01-22 05:37:49]  Boot stack:    ffffffff879d8000->ffffffff879d9000
(XEN) [2014-01-22 05:37:49]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-22 05:37:49]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 05:37:49] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 05:37:49] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:02.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-22 05:37:50] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000203000
(XEN) [2014-01-22 05:37:50] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 05:37:50] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 05:37:50] Std. Loglevel: All
(XEN) [2014-01-22 05:37:50] Guest Loglevel: All
(XEN) [2014-01-22 05:37:50] **********************************************
(XEN) [2014-01-22 05:37:50] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 05:37:50] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 05:37:50] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 05:37:50] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 05:37:50] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 05:37:50] **********************************************
(XEN) [2014-01-22 05:37:50] 3... 2... 1...
(XEN) [2014-01-22 05:37:53] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-22 05:37:53] Freed 272kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(05:00.*)(01:00.*)
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x07595fff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.735576] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.735577] Policy zone: DMA32
[    5.735578] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(05:00.*)(01:00.*)
[    5.735883] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.735913] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.756394] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.759486] Memory: 1891300K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 205848K reserved)
[    5.759711] Hierarchical RCU implementation.
[    5.759712] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.759712] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.759720] NR_IRQS:33024 nr_irqs:256 16
[    5.759799] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.759800] xen: registering gsi 9 triggering 0 polarity 0
[    5.759811] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.759833] xen: acpi sci 9
[    5.759836] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.759839] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.759842] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.759844] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.759847] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.759849] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.759852] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.759854] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.759856] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.759859] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.759861] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.759864] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.759866] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.759869] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.761431] Console: colour VGA+ 80x25
[    6.713737] console [hvc0] enabled
[    6.717671] Xen: using vcpuop timer interface
[    6.722014] installing Xen timer for CPU 0
[    6.726195] tsc: Detected 3400.082 MHz processor
[    6.730881] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.16 BogoMIPS (lpj=3400082)
[    6.741515] pid_max: default: 32768 minimum: 301
[    6.746348] Security Framework initialized
[    6.750433] SELinux:  Initializing.
[    6.754007] SELinux:  Starting in permissive mode
[    6.759078] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.766530] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.773691] Mount-cache hash table entries: 256
[    6.778651] Initializing cgroup subsys freezer
[    6.783156] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.783156] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.796260] CPU: Physical Processor ID: 0
[    6.800332] CPU: Processor Core ID: 0
[    6.804756] mce: CPU supports 2 MCE banks
[    6.808768] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.808768] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.808768] tlb_flushall_shift: 6
[    6.845887] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.854531] ACPI: Core revision 20131115
[    6.907489] ACPI: All ACPI Tables successfully acquired
[    6.914255] cpu 0 spinlock event irq 41
[    6.918133] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    6.929132] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.936597] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.942237] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.949682] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.956095] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.964329] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.969964] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.977440] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.983159] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.990700] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.995900] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    7.004740] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    7.012019] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    7.018432] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    7.026665] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    7.032135] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    7.039260] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    7.044052] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    7.050612] calling  init_workqueues+0x0/0x59a @ 1
[    7.055622] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    7.062225] calling  migration_init+0x0/0x71 @ 1
[    7.066904] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    7.073404] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    7.078604] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    7.085623] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    7.091516] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    7.099229] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    7.104467] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    7.111453] calling  cpu_stop_init+0x0/0x76 @ 1
[    7.116067] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    7.122456] calling  relay_init+0x0/0x14 @ 1
[    7.126789] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    7.132942] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    7.138250] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    7.145334] calling  init_events+0x0/0x61 @ 1
[    7.149756] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    7.155994] calling  init_trace_printk+0x0/0x12 @ 1
[    7.160935] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    7.167694] calling  event_trace_memsetup+0x0/0x52 @ 1
[    7.172915] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    7.179915] calling  jump_label_init_module+0x0/0x12 @ 1
[    7.185287] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    7.192482] calling  balloon_clear+0x0/0x4f @ 1
[    7.197075] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    7.203488] calling  rand_initialize+0x0/0x30 @ 1
[    7.208276] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    7.214841] calling  mce_amd_init+0x0/0x165 @ 1
[    7.219433] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.225871] x86: Booted up 1 node, 1 CPUs
[    7.230627] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.237267] devtmpfs: initialized
[    7.243140] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.247488] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.253728] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.258753] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.265600] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.272275] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.280766] calling  net_ns_init+0x0/0x104 @ 1
[    7.285330] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.291612] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.296800] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.304694] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.312858] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.320068] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.324490] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.330728] calling  reboot_init+0x0/0x1d @ 1
[    7.335149] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.341388] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.346242] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.352915] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.358460] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.365827] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.370681] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.377354] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.382048] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.388794] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.395321] calling  ksysfs_init+0x0/0x94 @ 1
[    7.399783] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.405979] calling  pm_init+0x0/0x4e @ 1
[    7.410091] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.415945] calling  pm_disk_init+0x0/0x19 @ 1
[    7.420467] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.426779] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.431805] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.438652] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.444198] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.451565] calling  cgroup_wq_init+0x0/0x32 @ 1
[    7.456249] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    7.462745] calling  event_trace_enable+0x0/0x173 @ 1
[    7.468331] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.475192] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.479783] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.486196] calling  fsnotify_init+0x0/0x26 @ 1
[    7.490792] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.497202] calling  filelock_init+0x0/0x84 @ 1
[    7.501808] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.508209] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.513063] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.519734] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.524761] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.531609] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.536374] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.542961] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.548334] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.555528] calling  debugfs_init+0x0/0x5c @ 1
[    7.560044] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.566360] calling  securityfs_init+0x0/0x53 @ 1
[    7.571136] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.577714] calling  prandom_init+0x0/0xe2 @ 1
[    7.582220] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.588548] calling  virtio_init+0x0/0x30 @ 1
[    7.593068] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.599234] calling  __gnttab_init+0x0/0x30 @ 1
[    7.603828] xen:grant_table: Grant tables using version 2 layout
[    7.609910] Grant table initialized
[    7.613445] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.620120] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.625172] RTC time:  5:37:54, date: 01/22/14
[    7.629652] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.636671] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.641612] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.648545] calling  cpuidle_init+0x0/0x40 @ 1
[    7.653051] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.659551] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.664491] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.671251] calling  sock_init+0x0/0x8b @ 1
[    7.675598] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.681591] calling  net_inuse_init+0x0/0x26 @ 1
[    7.686272] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.692769] calling  netpoll_init+0x0/0x31 @ 1
[    7.697275] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.703601] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.708754] NET: Registered protocol family 16
[    7.713248] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.720340] calling  bdi_class_init+0x0/0x4d @ 1
[    7.725127] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.731556] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.736682] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.743599] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.748603] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.755298] calling  pci_driver_init+0x0/0x12 @ 1
[    7.760159] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.766678] calling  backlight_class_init+0x0/0x85 @ 1
[    7.771937] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.778900] calling  video_output_class_init+0x0/0x19 @ 1
[    7.784422] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.791637] calling  xenbus_init+0x0/0x26f @ 1
[    7.796236] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.802492] calling  tty_class_init+0x0/0x38 @ 1
[    7.807237] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.813668] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.819039] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.825983] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.831797] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.839418] calling  register_node_type+0x0/0x34 @ 1
[    7.844574] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.851348] calling  i2c_init+0x0/0x70 @ 1
[    7.855675] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.861584] calling  init_ladder+0x0/0x12 @ 1
[    7.866004] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.872416] calling  init_menu+0x0/0x12 @ 1
[    7.876662] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.882903] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.887929] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.894787] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.900340] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.907688] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.912831] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.919735] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.924242] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.930741] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.935512] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.942095] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.947296] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.954314] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.958908] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.966533] ACPI: bus type PCI registered
[    7.970606] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.977133] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.983807] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.988436] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.995142] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    8.001628] calling  dma_channel_table_init+0x0/0xde @ 1
[    8.007014] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    8.014191] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    8.019738] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    8.027104] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    8.032738] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    8.040191] calling  xen_pcpu_init+0x0/0xcc @ 1
[    8.045627] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    8.051976] calling  dmi_id_init+0x0/0x31d @ 1
[    8.056728] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    8.062984] calling  dca_init+0x0/0x20 @ 1
[    8.067141] dca service started, version 1.12.1
[    8.071795] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.077890] calling  iommu_init+0x0/0x58 @ 1
[    8.082231] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.088376] calling  pci_arch_init+0x0/0x69 @ 1
[    8.092985] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.102328] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.117100] PCI: Using configuration type 1 for base access
[    8.122662] initcall pci_arch_init+0x0/0x69 returned 0 after0x0/0x98 @ 1
[    8.134351] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.140721] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.145816] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.152744] calling  init_vdso+0x0/0x135 @ 1
[    8.157078] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.163229] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.167997] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.174582] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.195670] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.202707] calling  pm_sysrq_init+0x0/0x19er 0 usecs
[    8.213715] calling  default_bdi_init+0x0/0x65 @ 1
[    8.218877] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.225485] calling  init_bio+0x0/0xe9 @ 1
[    8.229699] bio: create slab <bio-0> at 0
[    8.233763] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.239868] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    8.245611] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    8.253128] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.257806] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.264307] calling  blk_settings_init+0x0/0x2c @ 1
[    8.269245] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.276008] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.280522] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.286838] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.291690] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.298364] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.303216] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.309889] calling  blk_mq_init+0x0/0x5f @ 1
[    8.314310] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.320549] calling  genhd_device_init+0x0/0x85 @ 1
[    8.325617] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.332305] calling  pci_slot_init+0x0/0x50 @ 1
[    8.336903] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.343308] calling  fbmem_init+0x0/0x98 @ 1
[    8.347712] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.353796] calling  acpi_init+0x0/0x27a @ 1
[    8.358156] ACPI: Added _OSI(Module Device)
[    8.362375] ACPI: Added _OSI(Processor Device)
[    8.366880] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.371648] ACPI: Added _OSI(Processor Aggregator Device)
[    8.380862] ACPI: Executed 1 blocks of module-level executable AML code
[    8.412834] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.420693] \_SB_:_OSC invalid UUID
[    8.424175] _OSC request data:1 1f
[    8.429826] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.439048] ACPI: Dynamic OEM Table Load:
[    8.443048] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.452878] ACPI: Interpreter enabled
[    8.456552] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.465812] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.475095] ACPI: (supports S0 S1 S4 S5)
[    8.479067] ACPI: Using IOAPIC for interrupt routing
[    8.484468] HEST: Table parsing has been initialized.
[    8.489519] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.499898] ACPI: No dock devices found.
[    8.601026] ACPI: Power Resource [FN00] (off)
[    8.606175] ACPI: Power Resource [FN01] (off)
[    8.611331] ACPI: Power Resource [FN02] (off)
[    8.616459] ACPI: Power Resource [FN03] (off)
[    8.621605] ACPI: Power Resource [FN04] (off)
[    8.631332] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.637508] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    8.648230] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.657220] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.670762] PCI host bridge to bus 0000:00
[    8.674854] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.680402] p]
[    8.692891] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.699818] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.706754] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.713681] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.720612] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.727546] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.734490] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:00.0
[    8.746060] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.752218] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.758836] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:01.0
[    8.769671] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.775735] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:01.1
[    8.787403] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.793421] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.800257] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.807532] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:02.0
[    8.818696] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.824715] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:03.0
[    8.837133] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.843194] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.850122] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.856352] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:14.0
[    8.867211] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.873250] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.880190] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:16.0
[    8.891830] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.897868] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.904163] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.910490] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.916247] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.922737] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:19.0
[    8.933588] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.939622] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.946076] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.952655] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:1a.0
[    8.963521] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.969549] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.976540] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.983032] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:56] PCI add device 0000:00:1b.0
[    8.993884] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    9.000048] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    9.006537] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.0
[    9.017394] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    9.023562] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    9.030049] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.3
[    9.040901] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    9.047063] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    9.053554] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.5
[    9.064405] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    9.070567] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.077055] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.6
[    9.087897] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.094062] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.100551] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1c.7
[    9.111410] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.117449] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    9.123902] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.130477] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1d.0
[    9.141328] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.0
[    9.153011] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.159049] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.164650] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.170283] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.175916] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.181548] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.187184] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    9.193591] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.2
[    9.204358] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.210390] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    9.217239] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.3
[    9.228370] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.234407] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:00:1f.6
[    9.247096] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.257892] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    9.263940] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    9.269576] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    9.276420] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    9.283271] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    9.290069] pci 0000:01:00.0: supports D1 D2
[    9.294460] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:01:00.0
[    9.307438] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.312653] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    9.318805] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    9.325654] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.332524] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.343321] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.349369] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.355690] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.362016] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.367648] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.373995] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.380784] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.386912] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.393827] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:02:00.0
[    9.406030] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.412038] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.418355] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.424682] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.430316] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.436663] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.443452] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.449580] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.456494] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 05:37:57] PCI add device 0000:02:00.1
[    9.470790] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.476008] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.482159] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.489008] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.496042] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.506864] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.512908] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.519222] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.525550] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.531265] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.538094] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.544323] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:04:00.0
[    9.555239] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.561270] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.567583] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.573907] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.579626] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.586451] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:04:00.1
[    9.606505] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.611728] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.617878] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.624729] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.631758] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.642612] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.648644] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.654980] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.660593] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.667094] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.673331] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:05:00.0
[    9.686342] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.691563] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.697714] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.704575] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.711644] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.722463] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.728704] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.735485] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:57] PCI add device 0000:06:00.0
[    9.746391] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.751618] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.758478] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.766974] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.773166] pci 0000:07:01.0: supports D1 D2
[    9.777425] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 05:37:57] PCI add device 0000:07:01.0
[    9.789259] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.795292] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.801597] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.808082] pci 0000:07:03.0: supports D1 D2
[    9.812340] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:07:03.0
[    9.830256] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.837319] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.844161] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.852730] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.861396] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.869974] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.878558] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.886956] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.893004] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:08.0
[    9.911698] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.917746] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:08.1
[    9.936452] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.942501] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:09.0
[    9.961196] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.967248] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 05:37:57] PCI add device 0000:08:09.1
[    9.985995] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.992046] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 05:37:57] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0a.0
[   10.010728] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[   10.016786] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0a.1
[   10.035489] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[   10.041540] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0b.0
[   10.060223] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[   10.066271] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 05:37:58] PCI add device 0000:08:0b.1
[   10.085001] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.090230] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   10.097065] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.103738] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.110410] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.117453] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.128345] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.134455] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[   10.141625] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.147916] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:58] PCI add device 0000:09:00.0
[   10.161024] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.166245] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   10.173089] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.180118] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.190922] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.196971] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[   10.202599] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[   10.208231] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[   10.213866] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[   10.219498] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[   10.225130] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[   10.231667] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 05:37:58] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 05:37:58] PCI add device 0000:0a:00.0
[   10.251184] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.256408] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   10.262559] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   10.269408] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.276172] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.287942] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.295254] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.302567] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.309882] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.317195] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.324505] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.332945] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.340257] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.348683] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.353475] ACPI: \_SB_.PCI0: notify handler is installed
[   10.358959] Found 1 acpi root devices
[   10.362758] initcall acpi_init+0x0/0x27a returned 0 after 453125 usecs
[   10.369275] calling  pnp_init+0x0/0x12 @ 1
[   10.373528] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.379438] calling  balloon_init+0x0/0x242 @ 1
[   10.384033] xen:balloon: Initialising balloon driver
[   10.389058] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.395646] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.401192] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.408559] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.414283] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.421662] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.427499] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.434965] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.439981] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.446666] calling  balloon_init+0x0/0xfa @ 1
[   10.451170] xen_balloon: Initialising balloon driver
[   10.456586] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.463020] calling  misc_init+0x0/0xba @ 1
[   10.467335] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.473331] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.478584] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.486663] vgaarb: loaded
[   10.489432] vgaarb: bridge control possible 0000:00:02.0
[   10.494807] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.502000] calling  cn_init+0x0/0xc0 @ 1
[   10.506092] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.511966] calling  dma_buf_init+0x0/0x75 @ 1
[   10.516485] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.522800] calling  phy_init+0x0/0x2e @ 1
[   10.527186] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.533097] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.537830] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.544275] calling  usb_init+0x0/0x169 @ 1
[   10.548534] ACPI: bus type USB registered
[   10.552792] usbcore: registered new interface driver usbfs
[   10.558364] usbcore: registered new interface driver hub
[   10.563757] usbcore: registered new device driver usb
[   10.568805] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.575129] calling  serio_init+0x0/0x31 @ 1
[   10.579579] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.585657] calling  input_init+0x0/0x103 @ 1
[   10.590147] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.596320] calling  rtc_init+0x0/0x5b @ 1
[   10.600551] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.606460] calling  pps_init+0x0/0xb7 @ 1
[   10.610681] pps_core: LinuxPPS API ver. 1 registered
[   10.615645] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.624830] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.631069] calling  ptp_init+0x0/0xa4 @ 1
[   10.635287] PTP clock support registered
[   10.639217] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.645367] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.650885] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.658113] calling  hwmon_init+0x0/0xe3 @ 1
[   10.662505] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.668598] calling  leds_init+0x0/0x40 @ 1
[   10.672902] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.678911] calling  efisubsys_init+0x0/0x142 @ 1
[   10.683676] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.690262] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.695026] PCI: Using ACPI for IRQ routing
[   10.702721] PCI: pci_cache_line_size set to 64 bytes
[   10.707879] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.713868] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   10.719938] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   10.726782] calling  proto_init+0x0/0x12 @ 1
[   10.731118] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.737266] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.742481] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.748823] calling  neigh_init+0x0/0x80 @ 1
[   10.753153] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.759307] calling  fib_rules_init+0x0/0xaf @ 1
[   10.763986] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.770485] calling  pktsched_init+0x0/0x10a @ 1
[   10.775171] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.781666] calling  tc_filter_init+0x0/0x55 @ 1
[   10.786345] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.792844] calling  tc_action_init+0x0/0x55 @ 1
[   10.797523] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.804024] calling  genl_init+0x0/0x85 @ 1
[   10.808287] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.814338] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.818932] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.825345] calling  netlbl_init+0x0/0x81 @ 1
[   10.829762] NetLabel: Initializing
[   10.833231] NetLabel:  domain hash size = 128
[   10.837649] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.842715] NetLabel:  unlabeled traffic allowed by default
[   10.848310] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.854810] calling  rfkill_init+0x0/0x79 @ 1
[   10.859406] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.865579] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.870172] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.876597] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.881366] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.887934] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.893269] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.900326] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.905446] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.912373] calling  hpet_late_init+0x0/0x101 @ 1
[   10.917141] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.923899] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.928409] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.934732] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.940285] Switched to clocksource xen
[   10.944185] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3812 usecs
[   10.951808] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.957293] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 280 usecs
[   10.964417] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.970748] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.978888] calling  event_trace_init+0x0/0x205 @ 1
[   10.998168] initcall event_trace_init+0x0/0x205 returned 0 after 13974 usecs
[   11.005201] calling  init_kprobe_trace+0x0/kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   11.016993] calling  init_pipe_fs+0x0/0x4c @ 1
[   11.021538] initcall init_pipe_fs+0x0/0x4c returned 0 after 45 usecs
[   11.027907] calling  eventpoll_init+0x0/0xda @ 1
[   11.032611] initcall eventpoll_init+0x0/0xda returned 0 after 25 usecs
[   11.039171] calling  anon_inode_init+0x0/0x5b @ 1
[   11.043977] initcall anon_inode_init+0x0/0x5b returned 0 after 37 usecs
[   11.050609] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.055810] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.062831] calling  acpi_event_init+0x0/0x3a @ 1
[   11.067614] initcall acpi_event_init+0x0/0x3a returned 0 after 16 usecs
[   11.074270] calling  pnp_system_init+0x0/0x12 @ 1
[   11.079134] initcall pnp_system_init+0x0/0x12 returned 0 after 92 usecs
[   11.085749] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.090243] pnp: PnP ACPI init
[   11.093385] ACPI: bus type PNP registered
[   11.097759] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.104365] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.111254] pnp 00:01: [dma 4]
[   11.114469] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.121157] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.128212] kworker/u2:0 (512) used greatest stack depth: 5560 bytes left
[   11.134997] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.142596] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.148507] system 00:04: [io  0xffff] has been reserved
[   11.153877] system 00:04: [io  0xffff] has been reserved
[   11.159250] system 00:04: [io  0xffff] has been reserved
[   11.164624] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.170603] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.176583] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.182563] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.188543] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.194521] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.200849] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.206825] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.213704] xen: registering gsi 8 triggering 1 polarity 0
[   11.219395] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.226285] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.232196] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.240559] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.246471] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.252444] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.260665] xen: registering gsi 4 triggering 1 polarity 0
[   11.266139] Already setup the GSI :4
[   11.269782] pnp 00:08: [dma 0 disabled]
[   11.273885] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.281631] xen: registering gsi 3 triggering 1 polarity 0
[   11.287127] pnp 00:09: [dma 0 disabled]
[   11.291237] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.298079] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.303989] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.310864] xen: registering gsi 13 triggering 1 polarity 0
[   11.316654] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.326293] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.332906] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.339576] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.346248] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.352920] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.359595] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.366267] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.372940] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.379612] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.386285] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.392959] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.399633] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.406300] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.415205] pnp: PnP ACPI: found 13 devices
[   11.419379] ACPI: bus type PNP unregistered
[   11.423626] initcall pnpacpi_init+0x0/0x8c returned 0 after 325568 usecs
[   11.430384] calling  pcistub_init+0x0/0x29f @ 1
[   11.435316] pciback 0000:01:00.0: seizing device
[   11.439998] pciback 0000:05:00.0: seizing device
[   11.444934] initcall pcistub_init+0x0/0x29f returned 0 after 9723 usecs
[   11.451540] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.465135] initcall chr_dev_init+0x0/0xc6 returned 0 after 8883 usecs
[   11.471651] calling  firmware_class_init+0x0/0xec0x65 returned 0 after 133 usecs
[   11.495521] calling  thermal_init+0x0/0x8b @ 1
[   11.500115] initcall thermal_init+0x0/0x8b returned 0 after 92 usecs
[   11.506466] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.512352] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.520238] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.528929] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.535353] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9340 usecs
[   11.543152] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.548809] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.553760] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.559915] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.566775] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.573705] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.580636] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.587569] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.594503] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.601436] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.608367] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.615303] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.622233] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.629169] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.636100] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.643026] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.650409] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.657324] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.664793] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.671710] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.679093] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.686010] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.693470] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.698749] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.704905] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.711755] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.716779] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.722936] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.729787] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.734803] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.740960] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.747812] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.752836] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.759692] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.764968] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.771821] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.777100] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.783954] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.788975] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.795827] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.800841] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.807001] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.813855] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.819473] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.825107] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.831432] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.837758] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.844086] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.850411] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.856826] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.863238] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.868874] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.875199] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.880830] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.887158] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.892791] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.899118] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.904751] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.911076] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.917403] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.923730] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.930058] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.936384] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.942711] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.948343] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.954670] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396454 usecs
[   11.962470] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.967337] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.974082] calling  inet_init+0x0/0x296 @ 1
[   11.978488] NET: Registered protocol family 2
[   11.983177] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.990427] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.997075] TCP: Hash tables configured (established 16384 bind 16384)
[   12.003663] TCP: reno registered
[   12.006949] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   12.013015] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   12.019625] initcall inet_init+0x0/0x296 returned 0 after 40241 usecs
[   12.026056] calling  ipv4_offload_init+0x0/0x61 @ 1
[   12.030995] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   12.037754] calling  af_unix_init+0x0/0x55 @ 1
[   12.042271] NET: Registered protocol family 1
[   12.046693] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs
[   12.053268] calling  ipv6_offload_init+0x0/0x7f @ 1
[   12.058209] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   12.064967] calling  init_sunrpc+0x0/0x69 @ 1
[   12.069583] RPC: Registered named UNIX socket transport module.
[   12.075492] RPC: Registered udp transport module.
[   12.080256] RPC: Registered tcp transport module.
[   12.085022] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.091522] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.098107] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.103574] pci 0000:00:02.0: Boot video device
[   12.108664] xen: registering gsi 16 triggering 0 polarity 1
[   12.114241] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.118882] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.126621] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.134527] xen: registering gsi 16 triggering 0 polarity 1
[   12.140089] Already setup the GSI :16
[   12.160320] xen: registering gsi 23 triggering 0 polarity 1
[   12.165895] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.187545] xen: registering gsi 18 triggering 0 polarity 1
[   12.193131] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.19773] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 105374 usecs
[   12.219168] calling  populate_rootfs+0x0/0x112 @ 1
[   12.224157] Unpacking initramfs...
[   13.311888] Freeing initrd memory: 83592K (ffff8800023f4000 - ffff880007596000)
[   13.319191] initcall populate_rootfs+0x0/0x112 returned 0 after 1069500 usecs
[   13.326378] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.331058] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.337558] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.343191] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.350834] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.356797] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.364597] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.369363] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.375950] calling  vsyscall_init+0x0/0x27 @ 1
[   13.380546] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.386957] calling  sbf_init+0x0/0xf6 @ 1
[   13.391117] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.397096] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.402296] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   13.409316] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.413826] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.420148] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.424914] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.431501] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.436604] initcall cache_sysfs_init+0x0/0x65 returned 0 after 242 usecs
[   13.443375] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.448227] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.455074] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.460101] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.467119] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.472148] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.479168] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.483865] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.494421] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10325 usecs
[   13.501268] calling  inject_init+0x0/0x30 @ 1
[   13.505685] Machine check injector initialized
[   13.510194] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.516693] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.522586] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.530298] calling  microcode_init+0x0/0x1b1 @ 1
[   13.535249] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.541372] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.550138] initcall microcode_init+0x0/0x1b1 returned 0 after 14718 usecs
[   13.557071] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.561663] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.568251] calling  msr_init+0x0/0x162 @ 1
[   13.572720] initcall msr_init+0x0/0x162 returned 0 after 217 usecs
[   13.578887] calling  cpuid_init+0x0/0x162 @ 1
[   13.583505] initcall cpuid_init+0x0/0x162 returned 0 after 196 usecs
[   13.589848] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.594612] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.601199] calling  add_pcspkr+0x0/0x40 @ 1
[   13.605638] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.611901] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.618397] Scanning for low memory corruption every 60 seconds
[   13.624373] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5835 usecs
[   13.632954] calling  sysfb_init+0x0/0x9c @ 1
[   13.637395] initcall sysfb_init+0x0/0x9c returned 0 after 106 usecs
[   13.643652] calling  audit_classes_init+0x0/0xaf @ 1
[   13.648690] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.655610] calling  pt_dump_init+0x0/0x30 @ 1
[   13.660126] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.666444] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.671303] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.677969] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.683262] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.690360] calling  ioresources_init+0x0/0x3c @ 1
[   13.695220] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.701888] calling  uid_cache_init+0x0/0x85 @ 1
[   13.706583] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.713155] calling  init_posix_timers+0x0/0x240 @ 1
[   13.718196] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.725112] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.730400] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.737505] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.742622] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.749551] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.754874] initcall snapshot_device_init+0x0/0x12 returned 0 after 118 usecs
[   13.761998] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.766763] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.773351] calling  create_proc_profile+0x0/0x300 @ 1
[   13.778550] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.785571] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.790769] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.797789] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.803389] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 221 usecs
[   13.810689] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.816065] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.823252] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.828299] initcall alarmtimer_init+0x0/0x15f returned 0 after 188 usecs
[   13.835077] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.840742] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 287 usecs
[   13.848039] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.853071] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.859914] calling  futex_init+0x0/0xf6 @ 1
[   13.864261] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.870402] initcall futex_init+0x0/0xf6 returned 0 after 6011 usecs
[   13.876810] calling  proc_dma_init+0x0/0x22 @ 1
[   13.881408] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.887817] calling  proc_modules_init+0x0/0x22 @ 1
[   13.892760] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.899516] calling  kallsyms_init+0x0/0x25 @ 1
[   13.904113] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.910522] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.916341] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.924042] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.929505] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.936781] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.941906] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.948915] calling  ikconfig_init+0x0/0x3c @ 1
[   13.953511] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   13.959921] calling  audit_init+0x0/0x141 @ 1
[   13.964341] audit: initializing netlink socket (disabled)
[   13.969823] type=2000 audit(1390369078.681:1): initialized
[   13.975349] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.981935] calling  audit_watch_init+0x0/0x3a @ 1
[   13.986814] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.993486] calling  audit_tree_init+0x0/0x49 @ 1
[   13.998254] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   14.004840] calling  init_kprobes+0x0/0x16c @ 1
[   14.019432] initcall init_kprobes+0x0/0x16c returned 0 after 9765 usecs
[   14.026028] calling  hung_task_init+0x0/0x56 @ 1
[   14.054202] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   14.060875] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.065641] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.072221] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.077941] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.085479] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.091292] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 512 usecs
[   14.098504] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   14.104028] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   14.111330] calling  kswapd_init+0x0/0x76 @ 1
[   14.115796] initcall kswapd_init+0x0/0x76 returned 0 after 48 usecs
[   14.122072] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.127118] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   14.134030] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.138551] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.144950] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.149553] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.156044] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.161330] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.168434] calling  slab_proc_init+0x0/0x25 @ 1
[   14.173120] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.179614] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.184903] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.192008] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.197034] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.203880] calling  init_user_reserve+0x0/0x40 @ 1
[   14.208821] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.215581] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.220524] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.227281] calling  procswaps_init+0x0/0x22 @ 1
[   14.231963] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.238461] calling  init_frontswap+0x0/0x96 @ 1
[   14.243169] initcall init_frontswap+0x0/0x96 returned 0 after 27 usecs
[   14.249726] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.254320] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.260815] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6343 usecs
[   14.267414] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.272358] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.279113] calling  slab_proc_init+0x0/0x8 @ 1
[   14.283709] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.290120] calling  cpucache_init+0x0/0x4b @ 1
[   14.294716] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.301128] calling  hugepage_init+0x0/0x145 @ 1
[   14.305807] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.312482] calling  init_cleancache+0x0/0xbc @ 1
[   14.317277] initcall init_cleancache+0x0/0xbc returned 0 after 28 usecs
[   14.323921] calling  fcntl_init+0x0/0x2a @ 1
[   14.328265] initcall fcntl_init+0x0/0x2a returned 0 after 12 usecs
[   14.334496] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.339783] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.346887] calling  dio_init+0x0/0x2d @ 1
[   14.351059] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.357114] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.362167] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 26 usecs
[   14.369077] calling  dnotify_init+0x0/0x7b @ 1
[   14.373607] initcall dnotify_init+0x0/0x7b returned 0 after 24 usecs
[   14.379993] calling  inotify_user_setup+0x0/0x70 @ 1
[   14.385039] initcall inotify_user_setup+0x0/0x70 returned 0 after 18 usecs
[   14.391955] calling  aio_setup+0x0/0x7d @ 1
[   14.396256] initcall aio_setup+0x0/0x7d returned 0 after 55 usecs
[   14.402355] calling  proc_locks_init+0x0/0x22 @ 1
[   14.407123] initcall proc_locks_init+0x0/0x22 returned 0 after 4 usecs
[   14.413708] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.418605] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 43 usecs
[   14.425320] calling  dquot_init+0x0/0x121 @ 1
[   14.429741] VFS: Disk quotas dquot_6.5.2
[   14.433761] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.440227] initcall dquot_init+0x0/0x121 returned 0 after 10240 usecs
[   14.446811] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.452013] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.459031] calling  quota_init+0x0/0x31 @ 1
[   14.463384] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.469603] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.474549] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.481303] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.486334] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.493176] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.498121] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.504877] calling  proc_devices_init+0x0/0x22 @ 1
[   14.509820] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.516576] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.521779] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.528796] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.533738] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.540495] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.545439] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.552194] calling  proc_stat_init+0x0/0x22 @ 1
[   14.556879] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.563374] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.568232] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.574901] calling  proc_version_init+0x0/0x22 @ 1
[   14.579845] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.586600] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.591631] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.598476] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.603253] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.609915] calling  vmcore_init+0x0/0x5cb @ 1
[   14.614420] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.620746] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.625430] initcall proc_kmsg_init+0x0/0x25 returned 0 after 4 usecs
[   14.631925] calling  proc_page_init+0x0/0x42 @ 1
[   14.636611] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.643105] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.647832] initcall init_devpts_fs+0x0/0x62 returned 0 after 45 usecs
[   14.654372] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.658975] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   14.665379] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.670474] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 68 usecs
[   14.677339] calling  init_fat_fs+0x0/0x4f @ 1
[   14.681779] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.688084] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.692592] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.698917] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.703512] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.709925] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.714717] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   14.721364] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.726064] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   14.732496] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.736913] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.743151] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.747572] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.753813] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.758232] NFS: Registering the id_resolver key type
[   14.763358] Key type id_resolver registered
[   14.767590] Key type id_legacy registered
[   14.771669] initcall init_nfs_v4+0x0/0x3b returned 0 after 13121 usecs
[   14.778251] calling  init_nlm+0x0/0x4c @ 1
[   14.782419] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.788391] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.793071] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.799569] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.804249] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.810749] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.815778] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.822623] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.827217] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.833629] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.838222] NTFS driver 2.1.30 [Flags: R/W].
[   14.842606] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4280 usecs
[   14.849231] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.854127] initcall init_autofs4_fs+0x0/0x2a returned 0 after 127 usecs
[   14.860826] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.865507] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.872083] calling  ipc_init+0x0/0x2f @ 1
[   14.876248] msgmni has been set to 3857
[   14.880152] initcall ipc_init+0x0/0x2f returned 0 after 3818 usecs
[   14.886381] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.891156] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.897734] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.902476] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.909002] calling  key_proc_init+0x0/0x5e @ 1
[   14.913603] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.920010] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.925036] SELinux:  Registering netfilter hooks
[   14.929938] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4786 usecs
[   14.936969] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.941738] initcall init_sel_fs+0x0/0xa5 returned 0 after 342 usecs
[   14.948080] calling  selnl_init+0x0/0x56 @ 1
[   14.952424] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.958652] calling  sel_netif_init+0x0/0x5c @ 1
[   14.963334] initcall sel_netif_init+0x0/0x5c returned 0 after 2 usecs
[   14.969831] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.974687] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.981359] calling  sel_netport_init+0x0/0x6a @ 1
[   14.986239] initcall sel_netport_init+0x0/0x6a returned 0 after 2 usecs
[   14.992912] calling  aurule_init+0x0/0x2d @ 1
[   14.997330] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   15.003569] calling  crypto_wq_init+0x0/0x33 @ 1
[   15.008281] initcall crypto_wq_init+0x0/0x33 returned 0 after 31 usecs
[   15.014837] calling  crypto_algapi_init+0x0/0xd @ 1
[   15.019782] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   15.026536] calling  chainiv_module_init+0x0/0x12 @ 1
[   15.031650] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.038582] calling  eseqiv_module_init+0x0/0x12 @ 1
[   15.043608] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.050455] calling  hmac_module_init+0x0/0x12 @ 1
[   15.055308] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.061983] calling  md5_mod_init+0x0/0x12 @ 1
[   15.066519] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   15.072903] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   15.078214] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 25 usecs
[   15.085383] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   15.090753] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.097947] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.103198] initcall des_generic_mod_init+0x0/0x17 returned 0 after 48 usecs
[   15.110254] calling  aes_init+0x0/0x12 @ 1
[   15.114440] initcall aes_init+0x0/0x12 returned 0 after 26 usecs
[   15.120481] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.125102] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.131574] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.137297] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.144835] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.150899] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.158787] calling  krng_mod_init+0x0/0x12 @ 1
[   15.163408] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.169881] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.174656] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.181233] calling  bsg_init+0x0/0x12e @ 1
[   15.185555] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.192935] initcall bsg_init+0x0/0x12e returned 0 after 7281 usecs
[   15.199256] calling  noop_init+0x0/0x12 @ 1
[   15.203504] io scheduler noop registered
[   15.207493] initcall noop_init+0x0/0x12 returned 0 after 3895 usecs
[   15.213818] calling  deadline_init+0x0/0x12 @ 1
[   15.218410] io scheduler deadline registered
[   15.222743] initcall deadline_init+0x0/0x12 returned 0 after 4232 usecs
[   15.229418] calling  cfq_init+0x0/0x8b @ 1
[   15.233602] io scheduler cfq registered (default)
[   15.238344] initcall cfq_init+0x0/0x8b returned 0 after 4654 usecs
[   15.244583] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.249957] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.257151] calling  pci_proc_init+0x0/0x6a @ 1
[   15.261928] initcall pci_proc_init+0x0/0x6a returned 0 after 181 usecs
[   15.268444] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.274091] xen: registering gsi 16 triggering 0 polarity 1
[   15.279656] Already setup the GSI :16
[   15.284175] xen: registering gsi 16 triggering 0 polarity 1
[   15.289741] Already setup the GSI :16
[   15.294249] xen: registering gsi 16 triggering 0 polarity 1
[   15.299814] Already setup the GSI :16
[   15.304173] xen: registering gsi 19 triggering 0 polarity 1
[   15.309758] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.314981] xen: registering gsi 17 triggering 0 polarity 1
[   15.320559] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.325870] xen: registering gsi 19 triggering 0 polarity 1
[   15.331433] Already setup the GSI :19
[   15.335339] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60504 u=
secs
[   15.342376] calling  aer_service_init+0x0/0x2b @ 1
[   15.347303] initcall aer_service_init+0x0/0x2b returned 0 after 71 usecs
[   15.353990] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.358841] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.364474] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5500 use=
cs
[   15.371409] calling  pcied_init+0x0/0x79 @ 1
[   15.375941] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.382549] initcall pcied_init+0x0/0x79 returned 0 after 6647 usecs
[   15.388962] calling  pcifront_init+0x0/0x3f @ 1
[   15.393552] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.400138] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.405535] initcall genericbl_driver_init+0x0/0x14 returned 0 after 108=
 usecs
[   15.412741] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.417423] initcall cirrusfb_init+0x0/0xcc returned 0 after 88 usecs
[   15.423850] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.428862] initcall efifb_driver_init+0x0/0x14 returned 0 after 69 usecs
[   15.435641] calling  intel_idle_init+0x0/0x331 @ 1
[   15.440492] intel_idle: MWAIT substates: 0x42120
[   15.445171] intel_idle: v0.4 model 0x3C
[   15.449070] intel_idle: lapic_timer_reliable_states 0xffffffff
[   15.454967] intel_idle: intel_idle yielding to none
[   15.459642] initcall intel_idle_init+0x0/0x331 returned -19 after 18700 =
usecs
[   15.467097] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.472475] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 =
usecs
[   15.479661] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.484241] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs
[   15.490591] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.496323] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0=
C:00/input/input0
[   15.504488] ACPI: Power Button [PWRB]
[   15.508470] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/inpu=
t/input1
[   15.515855] ACPI: Power Button [PWRF]
[   15.519651] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 2=
3055 usecs
[   15.527207] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.532644] ACPI: Fan [FAN0] (off)
[   15.536270] ACPI: Fan [FAN1] (off)
[   15.539884] ACPI: Fan [FAN2] (off)
[   15.543491] ACPI: Fan [FAN3] (off)
[   15.547094] ACPI: Fan [FAN4] (off)
[   15.550554] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 1772=
3 usecs
[   15.557855] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.575878] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (=
20131115/psargs-359)
[   15.584300] ACPI Error: Metning: Processor Platform Limit not supported.
[   15.616759] initcall acpi_processor_driver_init+0x0/0x43 returned 0 afte=
r 51940 usecs
[   15.624649] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.632791] thermal LNXTHERM:00: registered as thermal_zone0
[   15.638441] ACPI: Thermal Zone [TZ00] (28 C)
[   15.64489al Zone [TZ01] (30 C)
[   15.655211] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25021 u=
secs
[   15.662253] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.667196] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.673947] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.679190] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.684913] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5631=
 usecs
[   15.692120] calling  erst_init+0x0/0x2fc @ 1
[   15.696495] ERST: Error Record Serialization Table (ERST) support is ini=
tialized.
[   15.703999] pstore: Registered erst as persistent store backend
[   15.709972] initcall erst_init+0x0/0x2fc returned 0 after 13203 usecs
[   15.716472] calling  ghes_init+0x0/0x173 @ 1
[   15.720962] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after=
 35333 usecs
[   15.729398] \_SB_:_OSC request failed
[   15.733054] _OSC request data:1 1 0=20
[   15.736690] \_SB_:_OSC invalid UUID
[   15.740245] _OSC request data:1 1 0=20
[   15.743882] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.750125] initcall ghes_init+0x0/0x173 returned 0 after 28632 usecs
[   15.756624] calling  einj_init+0x0/0x522 @ 1
[   15.761023] EINJ: Error INJection is initialized.
[   15.765725] initcall einj_init+0x0/0x522 returned 0 after 4654 usecs
[   15.772137] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.776988] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.783031] initcall ioat_init_module+0x0/0xb1 returned 0 after 5900 use=
cs
[   15.789913] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.794820] initcall virtio_mmio_init+0x0/0x14 returned 0 after 70 usecs
[   15.801509] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.807297] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 afte=
r 67 usecs
[   15.814855] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.820140] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 u=
secs
[   15.827246] calling  xenbus_init+0x0/0x3d @ 1
[   15.831802] initcall xenbus_init+0x0/0x3d returned 0 after 131 usecs
[   15.838143] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.843376] initcall xenbus_backend_init+0x0/0x51 returned 0 after 119 u=
secs
[   15.850413] calling  gntdev_init+0x0/0x4d @ 1
[   15.854986] initcall gntdev_init+0x0/0x4d returned 0 after 150 usecs
[   15.861329] calling  gntalloc_init+0x0/0x3d @ 1
[   15.866052] initcall gntalloc_init+0x0/0x3d returned 0 after 129 usecs
[   15.872568] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.877941] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 =
usecs
[   15.885131] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.890135] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 63 usecs
[   15.896918] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.902554] initcall platform_pci_module_init+0x0/0x1b returned 0 after =
88 usecs
[   15.909936] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.915330] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 =
usecs
[   15.922450] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.927480] xen: registering gsi 19 triggering 0 polarity 1
[   15.933039] Already setup the GSI :19
(XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=3Dy  Tainted: =
   C ]----
(XEN) [2014-01-22 05:38:00] CPU:    0
(XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_m=
six+0xb1/0x128
(XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0  =
 rcx: 0000000000000000
(XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000  =
 rdi: 0000000000000000
(XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08  =
 r8:  0000000000000000
(XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20  =
 r11: 0000000000000202
(XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005  =
 r14: 0000000000000000
(XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033  =
 cr4: 00000000001526f0
(XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss:=
 e010   cs: e008
(XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=3Dffff82d0802cfe08:
(XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d080=
2cfe68 000000000000001e
(XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078=
716898 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d080=
12a25f 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080=
310f00 ffff82d0802cff18
(XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000=
040004 0000000000000246
(XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff81=
00122a 000000000000e030
(XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070=
fe2780 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7f=
d300c7 ffff82d08022231b
(XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f=
60e0e0 0000000000000000
(XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078=
623e58 ffff880078716800
(XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000=
000006 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000=
000000 ffff880078623e28
(XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff81=
00142a 000000000000e033
(XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 0000000000=
00e02b 0000000000000000
(XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000=
000000 0000000000000000
(XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000=
000000
(XEN) [2014-01-22 05:38:00] Xen call trace:
(XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0=
x128
(XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x1=
19e
(XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 05:38:00]=20
(XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 05:38:00]  L4[0x000] =3D 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 05:38:00]=20
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00] Panic on CPU 0:
(XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
(XEN) [2014-01-22 05:38:00] [error_code=3D0000]
(XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 05:38:00] ****************************************
(XEN) [2014-01-22 05:38:00]=20
(XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Jan 21 21:58:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jKg-000375-Fy; Tue, 21 Jan 2014 21:58:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5jKe-00036z-PC
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:58:05 +0000
Received: from [193.109.254.147:51660] by server-8.bemta-14.messagelabs.com id
	05/6E-30921-C6DEED25; Tue, 21 Jan 2014 21:58:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390341481!12351197!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8826 invoked from network); 21 Jan 2014 21:58:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:58:03 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLvwir003028
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:57:58 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LLvvhp026045
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:57:58 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LLvvTL026027; Tue, 21 Jan 2014 21:57:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:57:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CB93B1BF76C; Tue, 21 Jan 2014 16:57:55 -0500 (EST)
Date: Tue, 21 Jan 2014 16:57:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140121215755.GB6363@phenom.dumpdata.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140120162943.GD11681@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140120162943.GD11681@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 04:29:43PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > create ^
> > > owner Wei Liu <wei.liu2@citrix.com>
> > > thanks
> > > 
> > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > When I have the following configuration in an HVM config file:
> > > >   memory=128
> > > >   maxmem=256
> > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > 
> > > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > > 
> > > > With claim_mode=0, I can successfully create an HVM guest.
> > > 
> > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > 
> > No. 128MB actually.
> > 
> > > that you only have 128-255M free is quite low, or are you
> > > autoballooning?)
> > 
> > This patch fixes it for me. It basically sets the number of pages
> > claimed to be 'maxmem' instead of 'memory' for PoD.
> > 
> > I don't know PoD very well, and this claim is only valid during the
> > allocation of the guest's memory - so the 'target_pages' value might be
> > the wrong one. However looking at the hypervisor's
> > 'p2m_pod_set_mem_target' I see this comment:
> > 
> >  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> >  317  *   entries.  The balloon driver will deflate the balloon to give back
> >  318  *   the remainder of the ram to the guest OS.
> > 
> > Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> > And then it is the responsibility of the balloon driver to give the memory
> > back (and this is where the 'static-max' et al come in play to tell the
> > balloon driver to balloon out).
> > 
> > 
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..65e9577 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >  
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = nr_pages - cur_pages;
> > +
> > +        if ( pod_mode )
> > +            nr = target_pages - 0x20;
> > +
> 
> I'm a bit confused, did this work for you? At this point d->tot_pages
> should be (target_pages - 0x20). However, in the hypervisor logic, if you
> try to claim exactly as many pages as d->tot_pages it should return
> EINVAL.
> 
> Furthermore, the original logic doesn't look right. In PV guest
> creation, xc tries to claim "memory=" pages, while in HVM guest creation
> it tries to claim "maxmem=" pages. I think the HVM code is wrong.
> 
> And George shed some light on PoD for me this morning: the "cache" in PoD
> should be the pool of pages used to populate guest physical memory.
> In that sense it should be the size of mem_target ("memory=").
> 
> So I came up with a fix like this. Any ideas?

The patch I came up with didn't work. It did work the first time, but I think
that is because I had 'claim=0' in the xl.conf and forgot about it (sigh).

After a bit of rebooting I realized that the patch didn't do the right
job. The one way it _did_ work was to claim the number of pages
that the guest had allocated at that moment. That is, for the PoD
path 'xc_domain_set_pod_target' (which is called before the
claim call) ends up allocating memory.

So the claim 'clamp' was done past that moment (which is not good).

I will try out your patch later this week and report. Thanks!

> 
> Wei.
> 
> ---8<---
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..472f1df 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -49,6 +49,8 @@
>  #define NR_SPECIAL_PAGES     8
>  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>  
> +#define POD_VGA_HOLE_SIZE (0x20)
> +
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
>                          uint64_t *mstart_out, uint64_t *mend_out)
> @@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
>      if ( pod_mode )
>      {
>          /*
> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -         * adjust the PoD cache size so that domain tot_pages will be
> -         * target_pages - 0x20 after this call.
> +         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
> +         * "hole".  Xen will adjust the PoD cache size so that domain
> +         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
> +         * this call.
>           */
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +        rc = xc_domain_set_pod_target(xch, dom,
> +                                      target_pages - POD_VGA_HOLE_SIZE,
>                                        NULL, NULL, NULL);
>          if ( rc != 0 )
>          {
> @@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = target_pages;
> +
> +        if ( pod_mode )
> +            nr -= POD_VGA_HOLE_SIZE;
> +
> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..1e44ba3 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>          goto out;
>      }
>  
> -    /* disallow a claim not exceeding current tot_pages or above max_pages */
> -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
> +    /* disallow a claim below current tot_pages or above max_pages */
> +    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
>      {
>          ret = -EINVAL;
>          goto out;
> 
> 
> 
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> >          if ( rc != 0 )
> >          {
> >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");


From xen-devel-bounces@lists.xen.org Tue Jan 21 21:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 21:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jM0-0003HF-2y; Tue, 21 Jan 2014 21:59:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5jLy-0003H8-Il
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 21:59:26 +0000
Received: from [85.158.137.68:16615] by server-16.bemta-3.messagelabs.com id
	C5/D5-26128-DBDEED25; Tue, 21 Jan 2014 21:59:25 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390341563!10483303!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3579 invoked from network); 21 Jan 2014 21:59:24 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 21:59:24 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LLxArQ004757
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 21:59:11 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLx6Qp006020
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 21:59:07 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LLx6eO004057; Tue, 21 Jan 2014 21:59:06 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 13:59:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5322E1BF76C; Tue, 21 Jan 2014 16:59:05 -0500 (EST)
Date: Tue, 21 Jan 2014 16:59:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael D Labriola <mlabriol@gdeb.com>
Message-ID: <20140121215905.GC6363@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 
> 10:38:27 AM:
> 
> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > Date: 01/20/2014 10:38 AM
> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > 
> > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 10:14:36 AM:
> > > 
> > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > Date: 01/20/2014 10:14 AM
> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > 
> > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent 
> > > > > crashes with multiple older R600 series (HD 6470 and HD 6570) and 
> > > > > unusably slow graphics with a newer HD7000 (can see each line 
> > > > > refresh individually on radeonfb tty).  All 3 systems seem to work 
> > > > > fine bare metal.
> > > > 
> > > > I hadn't been using DRM, just Xserver. Is that what you mean?
> > > 
> > > The R600 problems happen when in X, using OpenGL, on my dom0.  The 
> > > RadeonSI sluggishness is when using the KMS framebuffer device for a 
> > > plain text console login.
> > 
> > So sluggish is probably due to the PAT not being enabled. This patch
> > should be applied:
> > 
> > lkml.org/lkml/2011/11/8/406
> > 
> > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > 
> > and these two reverted:
> > 
> >  "xen/pat: Disable PAT support for now."
> >  "xen/pat: Disable PAT using pat_enabled value."
> > 
> > Which is to say do:
> > 
> > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> 
> Thanks!  I cherry-picked that patch out of your testing tree, reverted 
> those 2 commits, recompiled and installed.  Definitely fixed the HD 7000 
> sluggishness and appears to have fixed the R600 crashes (although it's 
> only been running a few hours).
> 
> How come that patch didn't get into mainline?  It looks pretty innocuous 
> to me...

<Sigh> The x86 maintainers wanted a different route, and I haven't had
the chance or the time to implement it.


> 
> ---
> Michael D Labriola
> Electric Boat
> mlabriol@gdeb.com
> 401-848-8871 (desk)
> 401-848-8513 (lab)
> 401-316-9844 (cell)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 22:05:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 22:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jRR-0003oZ-9E; Tue, 21 Jan 2014 22:05:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5jRP-0003oT-8w
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 22:05:03 +0000
Received: from [193.109.254.147:37850] by server-1.bemta-14.messagelabs.com id
	09/90-15600-E0FEED25; Tue, 21 Jan 2014 22:05:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390341900!12317258!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16435 invoked from network); 21 Jan 2014 22:05:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 22:05:01 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LM3vja010492
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 22:03:57 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0LM3uqG013557
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 22:03:57 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LM3uSn013535; Tue, 21 Jan 2014 22:03:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 14:03:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2E7131BF76C; Tue, 21 Jan 2014 17:03:55 -0500 (EST)
Date: Tue, 21 Jan 2014 17:03:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <ian.campbell@citrix.com>, stefano.panella@citrix.com
Message-ID: <20140121220355.GA6557@phenom.dumpdata.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> causes us to lose the top bits of the DMA address if the size of a DMA
> address is not the same as the size of the physical address.
> 
> This can happen in practice on ARM where foreign pages can be above 4GB even
> though the local kernel does not have LPAE page tables enabled (which is
> totally reasonable if the guest does not itself have >4GB of RAM). In this
> case the kernel still maps the foreign pages at a phys addr below 4G (as it
> must) but the resulting DMA address (returned by the grant map operation) is
> much higher.
> 
> This is analogous to a hardware device which has its view of RAM mapped up
> high for some reason.
> 
> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> systems with more than 4GB of RAM.

There was another patch posted by somebody from Citrix to fix 32-bit x86
dom0 with more than 4GB of RAM.

Their fix was in the generic parts of the code, changing most of the
'unsigned' types to 'phys_addr_t' or similar. Is that patch better, or
will this patch replace it?

It was "dma-mapping: dma_alloc_coherent_mask return dma_addr_t"
https://lkml.org/lkml/2013/12/10/593

> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  arch/arm/Kconfig          |    1 +
>  drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..24307dc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1885,6 +1885,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select ARCH_DMA_ADDR_T_64BIT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1eac073..b626c79 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -77,12 +77,22 @@ static u64 start_dma_addr;
>  
>  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>  {
> -	return phys_to_machine(XPADDR(paddr)).maddr;
> +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
> +	dma |= paddr & ~PAGE_MASK;
> +	return dma;
>  }
>  
>  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
>  {
> -	return machine_to_phys(XMADDR(baddr)).paddr;
> +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> +	phys_addr_t paddr = dma;
> +
> +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
> +
> +	paddr |= baddr & ~PAGE_MASK;
> +
> +	return paddr;
>  }
>  
>  static inline dma_addr_t xen_virt_to_bus(void *address)
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 22:13:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 22:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jZF-0004Js-Ad; Tue, 21 Jan 2014 22:13:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W5jZE-0004Jn-Di
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 22:13:08 +0000
Received: from [85.158.143.35:26951] by server-2.bemta-4.messagelabs.com id
	ED/4C-11386-3F0FED25; Tue, 21 Jan 2014 22:13:07 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390342150!13115236!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2066 invoked from network); 21 Jan 2014 22:09:11 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 22:09:11 -0000
Received: by mail-qc0-f171.google.com with SMTP id n7so7740788qcx.2
	for <xen-devel@lists.xen.org>; Tue, 21 Jan 2014 14:09:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=uzzE3/lNs0dBpd1y/CMiJjt4umZRiEyS0oNI7p7SasE=;
	b=JA6VDqhRjW9j3zjU+m1QGKkPyuroCpCDQbjJP61W/Axief79KizIuhiQ9QsE8+pbWo
	2pWlfEZCCN0pH+at3szNCdqoYXQOKFUOdSrlYixdcnpHGNF0X8NIoVOD84MHRbtPBY0P
	khYAB3Rfz9Ht30d8clhbIBTxQlPnoeWM5JBRmixCy6g77wVpWVFhLIwRW/HjFIX33uln
	NhAcUkcibmDlsWoYxKpT5zU8Qx90hp5mwAxMk2VHkSdecRoukQ0fVC88ZIqmvawFGC4Z
	7slX3CT56b1yNyUVxAB+eQzbl9dYqH8IQ8zdDseL6O98MDcSPhx/21ZmAYiSqKsZL3yg
	mr0w==
MIME-Version: 1.0
X-Received: by 10.140.88.180 with SMTP id t49mr39023115qgd.97.1390342150384;
	Tue, 21 Jan 2014 14:09:10 -0800 (PST)
Received: by 10.96.68.197 with HTTP; Tue, 21 Jan 2014 14:09:10 -0800 (PST)
In-Reply-To: <20140121213117.GA6003@phenom.dumpdata.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
	<CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
	<20140121213117.GA6003@phenom.dumpdata.com>
Date: Tue, 21 Jan 2014 17:09:10 -0500
Message-ID: <CAO5Rg12uVx8C1rnFqYb+LngUZDge2AkT7aH8WP-6EAf5nW-RTQ@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: daniel.kiper@oracle.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 4:31 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> Could you use the Linux patches that use Xen's EFI services? Ie,
> the DOM0 EFI ones?
>
> They were posted on the mailing list some time ago (last year, November-ish?)

That is an option, but I can't find the patches (a thorough search of
Google yields nothing). The other issue is that unless those were
merged into the kernel by version 3.12.8, we would have to roll our
own kernel (3.12.8 is the newest kernel that Debian supplies).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 4:31 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> Could you use the Linux patches that use Xen's EFI services? Ie,
> the DOM0 EFI ones?
>
> They were posted on the mailing list some time ago (last year, November-ish?)

That is an option, but I can't find the patches (a thorough search of
Google yields nothing). The other issue is that unless those were
merged into the kernel by version 3.12.8, we would have to roll our
own kernel (3.12.8 is the newest kernel that Debian supplies).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 22:17:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 22:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5jdb-0004Re-21; Tue, 21 Jan 2014 22:17:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W5jdZ-0004RY-Ck
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 22:17:37 +0000
Received: from [85.158.143.35:19195] by server-3.bemta-4.messagelabs.com id
	5C/66-32360-002FED25; Tue, 21 Jan 2014 22:17:36 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390342654!13133052!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12872 invoked from network); 21 Jan 2014 22:17:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Jan 2014 22:17:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0LMHU87024913
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 21 Jan 2014 22:17:31 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0LMHTSR017693
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 21 Jan 2014 22:17:30 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0LMHTe9013925; Tue, 21 Jan 2014 22:17:29 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 14:17:28 -0800
Date: Tue, 21 Jan 2014 14:17:27 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20140121141727.432b5c96@mantra.us.oracle.com>
In-Reply-To: <52DECA4E.4080004@citrix.com>
References: <52DECA4E.4080004@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014 20:28:14 +0100
Roger Pau Monné <roger.pau@citrix.com> wrote:

> Hello,
> 
> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
> cpuid feature flags exposed to PVH guests are kind of strange, this is
> the output of the feature flags as seen by an HVM domain:
> 
> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
>  Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
> AMD Features2=0x1<LAHF>
> 
> And this is what a PVH domain sees when running on the same hardware:
> 
> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
> AMD Features=0x20100800<SYSCALL,NX,LM>
> AMD Features2=0x1<LAHF>
> 
> I would expect the feature flags to be quite similar between an HVM
> domain and a PV domain (since they both run inside of a HVM
> container). AFAIK, there's no reason to disable PSE, PGE, PSE36 and
> RDTSCP for PVH guests. Also, is there any reason why PVH guests have
> the ACPI, SS and CLFLUSH feature flags but not HVM?
> 
> Most (if not all) of this probably comes from the fact that we are
> reporting the same feature flags as pure PV guests, but I see no

Right, I already said I was working on a xen patch to export CR4 and
other features. But it will take a couple of days, as I'm debugging
something else right now. If you can't wait, go for it.

BTW, if you have time to work on PVH, can you please help with migration?
Would be very much appreciated....

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
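The flag differences discussed in this thread can be checked mechanically: the two leaf-1 EDX masks it quotes (0x1783fbff for the HVM domain, 0x1fc98b75 for the PVH domain) decode against the standard CPUID leaf 1 EDX bit assignments. A minimal sketch, not part of the original thread, with bit names taken from public CPUID documentation:

```python
# CPUID leaf 1, EDX feature bits (the subset relevant here),
# per the standard CPUID bit layout.
EDX_BITS = {
    0: "FPU", 1: "VME", 2: "DE", 3: "PSE", 4: "TSC", 5: "MSR",
    6: "PAE", 7: "MCE", 8: "CX8", 9: "APIC", 11: "SEP", 12: "MTRR",
    13: "PGE", 14: "MCA", 15: "CMOV", 16: "PAT", 17: "PSE36",
    19: "CLFLUSH", 22: "ACPI", 23: "MMX", 24: "FXSR", 25: "SSE",
    26: "SSE2", 27: "SS", 28: "HTT",
}

def decode(mask):
    """Return the set of known feature names present in a leaf-1 EDX mask."""
    return {name for bit, name in EDX_BITS.items() if mask & (1 << bit)}

hvm = decode(0x1783FBFF)   # mask reported to the HVM domain
pvh = decode(0x1FC98B75)   # mask reported to the PVH domain

print("HVM only:", sorted(hvm - pvh))
print("PVH only:", sorted(pvh - hvm))
```

Running this confirms the observations in the thread: PSE, PGE and PSE36 (along with VME, MCE, MTRR and MCA) are present for HVM but missing for PVH, while ACPI, SS and CLFLUSH appear only in the PVH mask. (RDTSCP sits in the extended AMD leaf, so it is not covered by this EDX decode.)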

From xen-devel-bounces@lists.xen.org Tue Jan 21 22:48:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 22:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5k7R-0005wB-43; Tue, 21 Jan 2014 22:48:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5k7Q-0005w6-1c
	for xen-devel@lists.xensource.com; Tue, 21 Jan 2014 22:48:28 +0000
Received: from [193.109.254.147:6488] by server-13.bemta-14.messagelabs.com id
	54/21-19374-B39FED25; Tue, 21 Jan 2014 22:48:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390344505!12272202!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23416 invoked from network); 21 Jan 2014 22:48:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 22:48:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93050871"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 22:48:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 21 Jan 2014 17:48:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W5k7L-00028G-66;
	Tue, 21 Jan 2014 22:48:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W5k7L-0005i7-27;
	Tue, 21 Jan 2014 22:48:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24453-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 21 Jan 2014 22:48:23 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24453: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24453 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24453/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24448

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24447
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  like 24440
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24448 like 24452-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24448 like 24447

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24448 never pass

version targeted for testing:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
baseline version:
 xen                  58f5bcaf05621810f06bf5b3592e2ae87475053d

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ branch=xen-unstable
+ revision=407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   58f5bca..407a3c0  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ branch=xen-unstable
+ revision=407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   58f5bca..407a3c0  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 21 23:34:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jan 2014 23:34:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5kpB-0008A7-V1; Tue, 21 Jan 2014 23:33:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5kpA-0008A2-KI
	for xen-devel@lists.xen.org; Tue, 21 Jan 2014 23:33:40 +0000
Received: from [85.158.143.35:26512] by server-3.bemta-4.messagelabs.com id
	DC/45-32360-3D30FD25; Tue, 21 Jan 2014 23:33:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390347217!13150418!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15367 invoked from network); 21 Jan 2014 23:33:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Jan 2014 23:33:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93062625"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Jan 2014 23:33:36 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 18:33:36 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Wed, 22 Jan 2014
	00:33:34 +0100
Message-ID: <52DF03D0.1060105@citrix.com>
Date: Tue, 21 Jan 2014 23:33:36 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Philip Wernersbach <philip.wernersbach@gmail.com>,
	<xen-devel@lists.xen.org>
References: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
In-Reply-To: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH][v3] Pass the location of the ACPI RSDP to
 DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 20:55, Philip Wernersbach wrote:
> xen: [v3] Pass the location of the ACPI RSDP to DOM0.
>
> Some machines, such as recent IBM servers, only allow the OS to get the
> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
> cannot get the RSDP on these machines, leading to all sorts of
> functionality reductions.
>
> Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>
>
> ---
> Changed since v2:
>     * Fix coding style
>     * Get rid of extra define
>     * Use correct typedef'd type for the ACPI RSDP pointer
>     * Better error checking conditional
>     * Simplify error message
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index b49256d..fdeb9f2 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
>              safe_strcat(dom0_cmdline, " acpi=");
>              safe_strcat(dom0_cmdline, acpi_param);
>          }

You have still got incorrect coding style at this point, as indicated in
my previous email.

> +        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )
> +        {
> +            acpi_physical_address rp = acpi_os_get_root_pointer();
> +            char rp_str[sizeof(acpi_physical_address)*2 + 1];
> +
> +            if ( rp )
> +            {
> +                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 1,

sizeof(rp_str)

> +                         "%08lX", rp);

Personally, I prefer lowercase hexadecimal numbers, as they are easier
to read, particularly when 64-bit.  What happens if the root pointer is
above the 4GB boundary? I don't see any reason at all for the leading 0s.

> +
> +                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
> +                safe_strcat(dom0_cmdline, rp_str);
> +            }
> +            else
> +            {

And coding style here.

> +                printk(XENLOG_WARNING
> +                       "Failed to get acpi_rsdp to pass to dom0\n");
> +            }
> +        }

And finally, you have yet to address Jan's concerns about this patch.
As he is an x86 maintainer, he is the one you will have to convince to
accept the patch, even after I have run out of basic review points to cover.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 00:23:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 00:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5lb5-00026j-Js; Wed, 22 Jan 2014 00:23:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5lb5-00026e-0E
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 00:23:11 +0000
Received: from [85.158.137.68:64182] by server-8.bemta-3.messagelabs.com id
	46/F5-31081-E6F0FD25; Wed, 22 Jan 2014 00:23:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390350187!10545222!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30985 invoked from network); 22 Jan 2014 00:23:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 00:23:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93075068"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 00:23:06 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 19:23:06 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Wed, 22 Jan 2014
	01:23:04 +0100
Message-ID: <52DF0F6A.4040309@citrix.com>
Date: Wed, 22 Jan 2014 00:23:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <20140121215433.GA6363@phenom.dumpdata.com>
In-Reply-To: <20140121215433.GA6363@phenom.dumpdata.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] Regression compared to Xen 4.3,
 Xen 4.4-rc2 -  pci_prepare_msix+0xb1/0x12 - BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 21:54, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I haven't yet done any diagnosis to figure out exactly which
> PCI device is at fault here. But this is a regression compared
> to Xen 4.3, which boots just fine (see logs). The xen-syms
> is at: http://darnok.org/xen/xen-syms.gz
>
> I used an identical kernel for Xen 4.3 and it booted nicely.
>
> My next step is to instrument the do_physdev_op to figure out which
> of the PCI devices is triggering this, but that will have to wait
> till later this week.
>
> What I get is this when booting Xen 4.4:
>
>
> [   15.927480] xen: registering gsi 19 triggering 0 polarity 1
> [   15.933039] Already setup the GSI :19
> (XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
> (XEN) [2014-01-22 05:38:00] CPU:    0
> (XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
> (XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
> (XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0   rcx: 0000000000000000
> (XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000   rdi: 0000000000000000
> (XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08   r8:  0000000000000000
> (XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20   r11: 0000000000000202
> (XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005   r14: 0000000000000000
> (XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000001526f0
> (XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
> (XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=ffff82d0802cfe08:
> (XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d0802cfe68 000000000000001e
> (XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078716898 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d08012a25f 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080310f00 ffff82d0802cff18
> (XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000040004 0000000000000246
> (XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff8100122a 000000000000e030
> (XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070fe2780 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7fd300c7 ffff82d08022231b
> (XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f60e0e0 0000000000000000
> (XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078623e58 ffff880078716800
> (XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000000006 0000000000000000
> (XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000000000 ffff880078623e28
> (XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff8100142a 000000000000e033
> (XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 000000000000e02b 0000000000000000
> (XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000000000
> (XEN) [2014-01-22 05:38:00] Xen call trace:
> (XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
> (XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x119e
> (XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
> (XEN) [2014-01-22 05:38:00]  L4[0x000] = 0000000000000000 ffffffffffffffff
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] ****************************************
> (XEN) [2014-01-22 05:38:00] Panic on CPU 0:
> (XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
> (XEN) [2014-01-22 05:38:00] [error_code=0000]
> (XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
> (XEN) [2014-01-22 05:38:00] ****************************************
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

This breakage was caused by 1035bb64fd7fd9f05c510466d98566fd82e37ad9
"PCI: break MSI-X data out of struct pci_dev_info", which made it valid
for a PCI device to not have an associated arch_msix structure.

In pci_prepare_msix(), there is a logic chain

    pdev = pci_get_pdev(seg, bus, devfn);
    if ( !pdev )
        rc = -ENODEV;
    else if ( pdev->msix->used_entries != !!off )
...

which dereferences this optional pointer without first checking whether
the guest-provided PCI device is actually MSI-X capable.

Therefore, dom0 is issuing PHYSDEVOP_prepare_msix hypercalls on PCI
devices Xen believes to be incapable of MSI-X.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 00:23:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 00:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5lb5-00026j-Js; Wed, 22 Jan 2014 00:23:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5lb5-00026e-0E
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 00:23:11 +0000
Received: from [85.158.137.68:64182] by server-8.bemta-3.messagelabs.com id
	46/F5-31081-E6F0FD25; Wed, 22 Jan 2014 00:23:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390350187!10545222!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30985 invoked from network); 22 Jan 2014 00:23:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 00:23:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="93075068"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 00:23:06 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Tue, 21 Jan 2014 19:23:06 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Wed, 22 Jan 2014
	01:23:04 +0100
Message-ID: <52DF0F6A.4040309@citrix.com>
Date: Wed, 22 Jan 2014 00:23:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, <jbeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <20140121215433.GA6363@phenom.dumpdata.com>
In-Reply-To: <20140121215433.GA6363@phenom.dumpdata.com>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] Regression compared to Xen 4.3,
 Xen 4.4-rc2 -  pci_prepare_msix+0xb1/0x12 - BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 21:54, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I hadn't done yet any diagnosis to figure out exactly which
> PCI device is at fault here. But this is regression compared
> to Xen 4.3 which boots just fine (see logs). The xen-syms
> is at: http://darnok.org/xen/xen-syms.gz
>
> I used idential kernel for Xen 4.3 and it booted nicely.
>
> My next step is to instrument the do_physdev_op to figure out which
> of the PCI devices is triggering this, but that will have to wait
> till later this week.
>
> What I get is this when booting Xen 4.4:
>
>
> [   15.927480] xen: registering gsi 19 triggering 0 polarity 1
> [   15.933039] Already setup the GSI :19
> (XEN) [2014-01-22 05:38:00] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
> (XEN) [2014-01-22 05:38:00] CPU:    0
> (XEN) [2014-01-22 05:38:00] RIP:    e008:[<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
> (XEN) [2014-01-22 05:38:00] RFLAGS: 0000000000010246   CONTEXT: hypervisor
> (XEN) [2014-01-22 05:38:00] rax: 0000000000000000   rbx: 00000000fffffff0   rcx: 0000000000000000
> (XEN) [2014-01-22 05:38:00] rdx: ffff830239463b70   rsi: 0000000000000000   rdi: 0000000000000000
> (XEN) [2014-01-22 05:38:00] rbp: ffff82d0802cfe48   rsp: ffff82d0802cfe08   r8:  0000000000000000
> (XEN) [2014-01-22 05:38:00] r9:  00000000deadbeef   r10: ffff82d080238f20   r11: 0000000000000202
> (XEN) [2014-01-22 05:38:00] r12: ffff830239466700   r13: 0000000000000005   r14: 0000000000000000
> (XEN) [2014-01-22 05:38:00] r15: 0000000000000005   cr0: 0000000080050033   cr4: 00000000001526f0
> (XEN) [2014-01-22 05:38:00] cr3: 000000022dc0c000   cr2: 0000000000000004
> (XEN) [2014-01-22 05:38:00] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) [2014-01-22 05:38:00] Xen stack trace from rsp=ffff82d0802cfe08:
> (XEN) [2014-01-22 05:38:00]    00000070b7313060 0000000000310f00 ffff82d0802cfe68 000000000000001e
> (XEN) [2014-01-22 05:38:00]    ffff880078623e28 ffff8300b7313000 ffff880078716898 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08017fede ffff82d08012a25f 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff82d000050000 ffff82d08018cdc8 ffff82d080310f00 ffff82d0802cff18
> (XEN) [2014-01-22 05:38:00]    ffff82d0802cfef8 ffff82d08021d98c 0000000000040004 0000000000000246
> (XEN) [2014-01-22 05:38:00]    ffffffff8100122a 0000000000000000 ffffffff8100122a 000000000000e030
> (XEN) [2014-01-22 05:38:00]    0000000000000246 ffff8300b7313000 ffff880070fe2780 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff880078716898 0000000000000000 00007d2f7fd300c7 ffff82d08022231b
> (XEN) [2014-01-22 05:38:00]    ffffffff8100142a 0000000000000021 ffff88007f60e0e0 0000000000000000
> (XEN) [2014-01-22 05:38:00]    000000000007e8b5 00000003b5ef9df9 ffff880078623e58 ffff880078716800
> (XEN) [2014-01-22 05:38:00]    0000000000000202 0000000000000594 0000000000000006 0000000000000000
> (XEN) [2014-01-22 05:38:00]    0000000000000021 ffffffff8100142a 0000000000000000 ffff880078623e28
> (XEN) [2014-01-22 05:38:00]    000000000000001e 0001010000000000 ffffffff8100142a 000000000000e033
> (XEN) [2014-01-22 05:38:00]    0000000000000202 ffff880078623e10 000000000000e02b 0000000000000000
> (XEN) [2014-01-22 05:38:00]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) [2014-01-22 05:38:00]    ffff8300b7313000 0000000000000000 0000000000000000
> (XEN) [2014-01-22 05:38:00] Xen call trace:
> (XEN) [2014-01-22 05:38:00]    [<ffff82d080168d51>] pci_prepare_msix+0xb1/0x128
> (XEN) [2014-01-22 05:38:00]    [<ffff82d08017fede>] do_physdev_op+0xd10/0x119e
> (XEN) [2014-01-22 05:38:00]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] Pagetable walk from 0000000000000004:
> (XEN) [2014-01-22 05:38:00]  L4[0x000] = 0000000000000000 ffffffffffffffff
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] ****************************************
> (XEN) [2014-01-22 05:38:00] Panic on CPU 0:
> (XEN) [2014-01-22 05:38:00] FATAL PAGE FAULT
> (XEN) [2014-01-22 05:38:00] [error_code=0000]
> (XEN) [2014-01-22 05:38:00] Faulting linear address: 0000000000000004
> (XEN) [2014-01-22 05:38:00] ****************************************
> (XEN) [2014-01-22 05:38:00] 
> (XEN) [2014-01-22 05:38:00] Manual reset required ('noreboot' specified)

This is breakage caused by c/s 1035bb64fd7fd9f05c510466d98566fd82e37ad9
"PCI: break MSI-X data out of struct pci_dev_info", which made it valid
for a PCI device to have no associated arch_msix structure.

In pci_prepare_msix(), there is a logic chain

    pdev = pci_get_pdev(seg, bus, devfn);
    if ( !pdev )
        rc = -ENODEV;
    else if ( pdev->msix->used_entries != !!off )
...

which dereferences this optional pointer without first checking whether
the guest-provided PCI device is actually MSI-X capable.

Therefore, dom0 is issuing PHYSDEVOP_prepare_msix hypercalls on PCI
devices Xen believes to be incapable of MSI-X.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 00:24:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 00:24:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5lcA-00029V-2O; Wed, 22 Jan 2014 00:24:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5lc9-00029Q-AB
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 00:24:17 +0000
Received: from [85.158.143.35:49657] by server-2.bemta-4.messagelabs.com id
	F1/10-11386-0BF0FD25; Wed, 22 Jan 2014 00:24:16 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390350254!13127168!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30962 invoked from network); 22 Jan 2014 00:24:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 00:24:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,697,1384300800"; d="scan'208";a="95126141"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 00:24:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 21 Jan 2014 19:24:13 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W5lc4-0003FD-TG; Wed, 22 Jan 2014 00:24:12 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Wed, 22 Jan 2014 00:24:11 +0000
Message-ID: <1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <52DF0F6A.4040309@citrix.com>
References: <52DF0F6A.4040309@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
	devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As of c/s 1035bb64fd7fd9f05c510466d98566fd82e37ad9
  "PCI: break MSI-X data out of struct pci_dev_info"

pdev->msix is now conditional on whether the device actually has MSI-X
capabilities or not, so validate it before blindly dereferencing what amounts
to a guest-controlled parameter.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

This has only been compile tested, but is quite obviously needed to prevent
the NULL structure dereference.

George: This (well, technically the fix for the underlying problem, if this
  patch turns out to be incorrect) really needs to be accepted for 4.4, or the
  underlying bug will turn into an XSA.  Currently only unstable is affected.
---
 xen/arch/x86/msi.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 284042e..36c5503 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1033,7 +1033,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
 
     spin_lock(&pcidevs_lock);
     pdev = pci_get_pdev(seg, bus, devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->msix )
         rc = -ENODEV;
     else if ( pdev->msix->used_entries != !!off )
         rc = -EBUSY;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 00:49:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 00:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5m0c-0003Rl-1g; Wed, 22 Jan 2014 00:49:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <feng.wu@intel.com>) id 1W5m0a-0003Rg-M3
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 00:49:32 +0000
Received: from [85.158.143.35:35668] by server-2.bemta-4.messagelabs.com id
	71/79-11386-C951FD25; Wed, 22 Jan 2014 00:49:32 +0000
X-Env-Sender: feng.wu@intel.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390351770!1357807!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30059 invoked from network); 22 Jan 2014 00:49:31 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-21.messagelabs.com with SMTP;
	22 Jan 2014 00:49:31 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 21 Jan 2014 16:45:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,697,1384329600"; d="scan'208";a="442598996"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 21 Jan 2014 16:49:14 -0800
Received: from fmsmsx114.amr.corp.intel.com (10.18.116.8) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 21 Jan 2014 16:49:14 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX114.amr.corp.intel.com (10.18.116.8) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 21 Jan 2014 16:49:14 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Wed, 22 Jan 2014 08:49:11 +0800
From: "Wu, Feng" <feng.wu@intel.com>
To: Shakeel Butt <shakeel.butt@gmail.com>, Pasi Kärkkäinen <pasik@iki.fi>
Thread-Topic: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
Thread-Index: AQHPFaN4d0WbbJ4Ux0yVoFE4Q2MhMJqNL2UQ///QC4CAAAY/AIAABUAAgACPQjCAAQSIgIAATo+AgAD+mvA=
Date: Wed, 22 Jan 2014 00:49:10 +0000
Message-ID: <E959C4978C3B6342920538CF579893F001D8DD3C@SHSMSX104.ccr.corp.intel.com>
References: <CAGj-7pUFtGV0m2tHjRMtESDXxgXDPuJYOygGULnJG9ZFV3EkuQ@mail.gmail.com>
	<E959C4978C3B6342920538CF579893F001D84BFC@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401201208280.21510@kaball.uk.xensource.com>
	<CAGj-7pV1di_1gao+F2OYWO8WUVLK8LedjWv=RRhcamvR2MktSg@mail.gmail.com>
	<2fde37a507b3f97d79c6f65a203950b0@mail.shatteredsilicon.net>
	<E959C4978C3B6342920538CF579893F001D85978@SHSMSX104.ccr.corp.intel.com>
	<20140121125527.GG2924@reaktio.net>
	<CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
In-Reply-To: <CAGj-7pXH65=00Vb5JrRJFvtngFt9FfHE5x=-ZkDyfLc_k-jt=w@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Gordan Bobic <gordan@bobich.net>, "G.R." <firemeteor@users.sourceforge.net>,
	"G.R." <firemeteor.guo@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> -----Original Message-----
> From: Shakeel Butt [mailto:shakeel.butt@gmail.com]
> Sent: Wednesday, January 22, 2014 1:37 AM
> To: Pasi Kärkkäinen
> Cc: Wu, Feng; Gordan Bobic; G.R.; G.R.; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] vga passthrough with qemu-xen (or qemu upstream)
>
> On Tue, Jan 21, 2014 at 4:55 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> > Hello,
> >
> > On Mon, Jan 20, 2014 at 01:24:23PM +0000, Wu, Feng wrote:
> >>
> >>
> >> > >>> >
> >> > >>> > Hi all,
> >> > >>> >
> >> > >>> > Is it possible to do vga passthrough on xen-unstable with qemu-xen
> >> > >>> > as device model? I tried but I am getting error 'gfx_passthru'
> >> > >>> > invalid parameter for qemu-xen. I am able to do passthrough with
> >> > >>> > qemu traditional i.e. qemu-dm.
> >> > >>>
> >> > >>> As far as I know, only qemu-traditional supports vga pass-through
> >> > >>> right now.
> >> > >>
> >> > >> Right.
> >> > >> It is not possible to assign your primary VGA card to a VM with
> >> > >> qemu-xen. You should be able to assign your secondary VGA card
> >> > >> though.
> >> > >
> >> > > Let me understand this correctly. If I have two VGA cards then I can
> >> > > passthrough
> >> > > secondary VGA card (in Dom0) to HVM as its primary VGA card. Is this
> >> > > right and
> >> > > if yes how can I do it?
> >> >
> >> > Passing any VGA card as a primary-in-domU has always been problematic.
> >>
> >> I think passing VGA card as a primary-in-domU works well in
> >> Qemu-traditional, right?
> >>
> >
> > primary-in-domU requires vendor specific hacks in Xen qemu.
> > qemu-traditional includes many patches for Intel IGD primary passthru
> > support, but patches for AMD/ATI and Nvidia GPUs aren't merged to
> > qemu-traditional.
> >
> > There are unapplied patches for qemu-traditional (AMD/Nvidia) GPU passthru
> > in various source trees, mailinglist archives, and on some blogs around
> > the internet.
> >
> > Also for Intel IGD I think there's at least one outstanding patch/fix that
> > hasn't been merged to qemu-traditional yet, see:
> >
> > http://lists.xenproject.org/archives/html/xen-devel/2013-02/msg00538.html
> > http://lists.xen.org/archives/html/xen-devel/2013-07/msg01385.html
> >
> > The patch in question probably needs some work before it is suitable for
> > being applied to qemu-traditional.
> >
> Thanks for sharing. Just for the information, I was able to make
> Nvidia K2000 work in passthrough mode without any patch to qemu traditional.

Intel IGD pass-through also works well on Sandy Bridge, Ivy Bridge and
Haswell platforms with qemu-traditional.

Thanks,
Feng

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 04:33:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 04:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5pUd-0007VX-Os; Wed, 22 Jan 2014 04:32:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5pUb-0007VS-JX
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 04:32:46 +0000
Received: from [193.109.254.147:25630] by server-12.bemta-14.messagelabs.com
	id D2/93-13681-CE94FD25; Wed, 22 Jan 2014 04:32:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390365161!12302396!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12079 invoked from network); 22 Jan 2014 04:32:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 04:32:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0M4VbWi020099
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 04:31:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M4VZDA019520
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 04:31:36 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M4VZme019513; Wed, 22 Jan 2014 04:31:35 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 20:31:34 -0800
Date: Tue, 21 Jan 2014 23:31:29 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140122043128.GA9931@konrad-lan.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="NzB8fVQJ5HfG6fxh"
Content-Disposition: inline
In-Reply-To: <1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Jan 22, 2014 at 12:24:11AM +0000, Andrew Cooper wrote:
> As of c/s 1035bb64fd7fd9f05c510466d98566fd82e37ad9
>   "PCI: break MSI-X data out of struct pci_dev_info"
> 
> pdev->msix is now conditional on whether the device actually has MSI-X
> capabilities or not, so validate it before blindly dereferencing what amounts
> to a guest-controlled parameter.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> ---
> 
> This has only been compile tested, but is quite obviously needed to prevent
> the NULL structure dereference.

And it does fix that particular problem. Now I have another crash.

See attached (and relevant part inlined).
..
[   19.223716] xen: registering gsi 19 triggering 0 polarity 1
[   19.229300] Already setup the GSI :19
(XEN) [2014-01-22 12:27:07] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 12:27:07] CPU:    0
(XEN0000000000000
(XEN) [2014-01-22 12:27:07] rdx: 00000000f1e80000   rsi: 0000000000000200   rdi: ffff82d080281f20
(XEN) [2014-01-22 12:27:07] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08   r8:  000000000000001c
(XEN) [2014-01-22 12:27:07] r9:  00000000ffffffff   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 12:27:07] r12: 0000000000000000   r13: ffff83023f65db70   r14: ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 12:27:07] cr3: 000000021db62000   cr2: 0000000000000004
(XEN) [2014-01-22 12:27:07] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 12:27:07] Xen stack trace from rsp=ffff82d0802cfc08:
(XEN) [2014-01-22 12:27:07]    000000050004fc38 ffff82d0802cfd88 00000072043a6340 80050070ffffffff
(XEN) [2014-01-22 12:27:07]    0000000000000000 0000000000000000 0000000000000005 0000000000000070
(XEN) [2014-01-22 12:27:07]    0000000500000000 0000000000000000 00000000f1e80000 ffff82d000000005
(XEN) [2014-01-22 12:27:07]    ffff82d000000003 80050070117fbb70 ffff82d0802cfe98 ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd88 ffff83023946e700 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd28 ffff82d080168987 0000000000000246 ffff82d0802cfcd8
(XEN) [2014-01-22 12:27:07]    ffff82d080129d68 0000000000000000 ffff82d0802cfd28 ffff82d0801473d9
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd18 ffff8302337fbb70 000000000000010c ffff830233748000
(XEN) [2014-01-22 12:27:07]    000000000000010c 0000000000000025 00000000ffffffed ffff830239402500
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfdc8 ffff82d08016c65c ffff83023f65db00 000000000000010c
(XEN) [2014-01-22 12:27:07]    000000000000010c ffff8302337480e0 ffff82d0802cfd98 ffff82d0801047ed
(XEN) [2014-01-22 12:27:07]    0000010c01402500 ffff82d0802cfe98 ffff8302337480e0 ffff83023946e700
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff83023f65db00 ffff82d0802cfdc8 ffff830233748000
(XEN) [2014-01-22 12:27:07]    00000000fffffffd 0000000000000000 ffff82d0802cfe98 ffff82d0802cfe70
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe48 ffff82d08017f104 ffff82d0802cff18 ffffffff8154ea06
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff8302337480b8 ffff82d00000010c ffff82d08018bcb0
(XEN) [2014-01-22 12:27:07]    000000250000f800 ffff82d0802cfe74 ffff820040005000 000000000000000d
(XEN) [2014-01-22 12:27:07]    ffff88006ca859b8 ffff8300b7313000 ffff88006c35cc00 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfef8 ffff82d08017f814 0000000000000000 0000000700000004
(XEN) [2014-01-22 12:27:07]    0000000000007ff0 ffffffffffffffff 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07] Xen call trace:
(XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
(XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
(XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 12:27:07]  L4[0x000] = 000000021db66067 000000000006cb75
(XEN) [2014-01-22 12:27:07]  L3[0x000] = 000000021db65067 000000000006cb76
(XEN) [2014-01-22 12:27:07]  L2[0x000] = 0000000000000000 ffffffffffffffff 
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] Panic on CPU 0:
(XEN) [2014-01-22 12:27:07] FATAL PAGE FAULT
(XEN) [2014-01-22 12:27:07] [error_code=0000]
(XEN) [2014-01-22 12:27:07] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] Manual reset required ('noreboot' specified)

--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-4.4-pci_prepare_msix-patch.txt"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 21 23:12:04 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.091 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 12:26:51] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 12:26:51] Allocated console ring of 1048576 KiB. lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 12:26:51] VMX: Supported advanced features:
(XEN) [2014-01-22 12:26:51]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 12:26:51]  - APIC TPR shadow
(XEN) [2014-01-22 12:26:51]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 12:26:51]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 12:26:51]  - Virtual NMI
(XEN) [2014-01-22 12:26:51]  - MSR direct-access bitmap
(XEN) [2014-01-22 12:26:51]  - Unrestricted Guest
(XEN) [2014-01-22 12:26:51]  - VMCS shadowing
(XEN) [2014-01-22 12:26:51] HVM: ASIDs enabled.
(XEN) [2014-01-22 12:26:51] HVM: VMX enabled
(XEN) [2014-01-22 12:26:51] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 12:26:51] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 12:26:51] Brought up 8 CPUs
(XEN) [2014-01-22 12:26:51] ACPI sleep modes: S3
(XEN) [2014-01-22 12:26:51] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-22 12:26:51] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa22000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 12:26:51] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 12:26:51]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 12:26:51]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 12:26:51]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 12:26:51]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 12:26:51]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 12:26:51]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 12:26:51]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 12:26:51] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 12:26:51]  Dom0 alloc.:   000000022c000000->0000000230000000 (487000 pages to be allocated)
(XEN) [2014-01-22 12:26:51]  Init. ramdisk: 000000023abdf000->000000023fd86e64
(XEN) [2014-01-22 12:26:51] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 12:26:51]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 12:26:51]  Init. ramdisk: ffffffff823f4000->ffffffff8759be64
(XEN) [2014-01-22 12:26:51]  Phys-Mach map: ffffffff8759c000->ffffffff8799c000
(XEN) [2014-01-22 12:26:51]  Start info:    ffffffff8799c000->ffffffff8799c4b4
(XEN) [2014-01-22 12:26:51]  Page tables:   ffffffff8799d000->ffffffff879de000
(XEN) [2014-01-22 12:26:51]  Boot stack:    ffffffff879de000->ffffffff879df000
(XEN) [2014-01-22 12:26:51]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-22 12:26:51]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 12:26:51] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000203000
(XEN) [2014-01-22 12:26:52] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 12:26:53] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 12:26:53] Std. Loglevel: All
(XEN) [2014-01-22 12:26:53] Guest Loglevel: All
(XEN) [2014-01-22 12:26:53] **********************************************
(XEN) [2014-01-22 12:26:53] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 12:26:53] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 12:26:53] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 12:26:53] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 12:26:53] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 12:26:53] **********************************************
(XEN) [2014-01-22 12:26:53] 3... 2... 1...
(XEN) [2014-01-22 12:26:56] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xxxx:xxx:xx:) xen-acpi-processor.off=1
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x0759bfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.252759] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.252760] Policy zone: DMA32
[    5.252762] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xxxx:xxx:xx:) xen-acpi-processor.off=1
[    5.253069] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.253099] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.273555] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.276649] Memory: 1891276K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 205872K reserved)
[    5.276873] Hierarchical RCU implementation.
[    5.276874] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.276874] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.276882] NR_IRQS:33024 nr_irqs:256 16
[    5.276961] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.276962] xen: registering gsi 9 triggering 0 polarity 0
[    5.276973] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.276996] xen: acpi sci 9
[    5.276999] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.277002] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.277004] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.277007] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.277009] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.277012] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.277014] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.277017] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.277019] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.277022] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.277024] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.277027] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.277029] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.277031] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.278594] Console: colour VGA+ 80x25
[    6.234540] console [hvc0] enabled
[    6.238477] Xen: using vcpuop timer interface
[    6.242826] installing Xen timer for CPU 0
[    6.247007] tsc: Detected 3400.090 MHz processor
[    6.251691] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.18 BogoMIPS (lpj=3400090)
[    6.262327] pid_max: default: 32768 minimum: 301
[    6.267158] Security Framework initialized
[    6.271247] SELinux:  Initializing.
[    6.274819] SELinux:  Starting in permissive mode
[    6.279892] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.287341] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.294505] Mount-cache hash table entries: 256
[    6.299474] Initializing cgroup subsys freezer
[    6.303977] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.303977] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.317082] CPU: Physical Processor ID: 0
[    6.321154] CPU: Processor Core ID: 0
[    6.325579] mce: CPU supports 2 MCE banks
[    6.329589] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.329589] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.329589] tlb_flushall_shift: 6
[    6.366666] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.375309] ACPI: Core revision 20131115
[    6.428321] ACPI: All ACPI Tables successfully acquired
[    6.435084] cpu 0 spinlock event irq 41
[    6.438963] callingks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.457544] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.463184] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.470624] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.477037] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.485272] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.490905] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.498357] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.504075] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.511616] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.516816] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.525657] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.532964] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.539377] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.547611] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.553079] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.560204] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.564997] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.571555] calling  init_workqueues+0x0/0x59a @ 1
[    6.576564] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.583161] calling  migration_init+0x0/0x71 @ 1
[    6.587839] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    6.594339] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    6.599541] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    6.606558] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    6.612451] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    6.620165] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    6.625404] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    6.632391] calling  cpu_stop_init+0x0/0x76 @ 1
[    6.637002] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    6.643391] calling  relay_init+0x0/0x14 @ 1
[    6.647724] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    6.653878] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    6.659185] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    6.666270] calling  init_events+0x0/0x61 @ 1
[    6.670692] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    6.676930] calling  init_trace_printk+0x0/0x12 @ 1
[    6.681871] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    6.688630] calling  event_trace_memsetup+0x0/0x52 @ 1
[    6.693849] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    6.700849] calling  jump_label_init_module+0x0/0x12 @ 1
[    6.706222] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    6.713417] calling  balloon_clear+0x0/0x4f @ 1
[    6.718009] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    6.724423] calling  rand_initialize+0x0/0x30 @ 1
[    6.729212] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    6.735775] calling  mce_amd_init+0x0/0x165 @ 1
[    6.740369] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    6.746806] x86: Booted up 1 node, 1 CPUs
[    6.751557] NMI watchdog: disabled (cpu0): hardware events not enabled
[    6.758203] devtmpfs: initialized
[    6.764080] calling  ipc_ns_init+0x0/0x14 @ 1
[    6.768431] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    6.774671] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    6.779697] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    6.786544] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    6.793218] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    6.801709] calling  net_ns_init+0x0/0x104 @ 1
[    6.806273] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    6.812556] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    6.817744] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    6.825638] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    6.833801] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    6.841003] calling  cpufreq_tsc+0x0/0x37 @ 1
[    6.845423] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    6.851664] calling  reboot_init+0x0/0x1d @ 1
[    6.856086] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    6.862323] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    6.867176] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    6.873849] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    6.879396] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    6.886761] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    6.891615] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    6.898288] calling  wq_sysfs_init+0x0/0x14 @ 1
[    6.902983] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    6.909731] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    6.916256] calling  ksysfs_init+0x0/0x94 @ 1
[    6.920719] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    6.926914] calling  pm_init+0x0/0x4e @ 1
[    6.931026] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    6.936879] calling  pm_disk_init+0x0/0x19 @ 1
[    6.941401] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    6.947714] calling  swsusp_header_init+0x0/0x30 @ 1
[    6.952739] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    6.959586] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    6.965131] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    6.972499] calling  cgroup_wq_init+0x0/0x32 @ 1
[    6.977184] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    6.983678] calling  event_trace_enable+0x0/0x173 @ 1
[    6.989266] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    6.996126] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.000717] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.007130] calling  fsnotify_init+0x0/0x26 @ 1
[    7.011725] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.018136] calling  filelock_init+0x0/0x84 @ 1
[    7.022741] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.029143] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.033997] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.040669] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.045696] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.052542] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.057308] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.063895] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.069268] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.076462] calling  debugfs_init+0x0/0x5c @ 1
[    7.080978] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.087295] calling  securityfs_init+0x0/0x53 @ 1
[    7.092073] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.098648] calling  prandom_init+0x0/0xe2 @ 1
[    7.103155] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.109483] calling  virtio_init+0x0/0x30 @ 1
[    7.114003] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.120168] calling  __gnttab_init+0x0/0x30 @ 1
[    7.124764] xen:grant_table: Grant tables using version 2 layout
[    7.130846] Grant table initialized
[    7.134380] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.141054] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.146106] RTC time: 12:26:57, date: 01/22/14
[    7.150586] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.157606] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.162545] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.169479] calling  cpuidle_init+0x0/0x40 @ 1
[    7.173985] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.180484] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.185426] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.192185] calling  sock_init+0x0/0x8b @ 1
[    7.196534] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.202524] calling  net_inuse_init+0x0/0x26 @ 1
[    7.207206] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.213702] calling  netpoll_init+0x0/0x31 @ 1
[    7.218209] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.224536] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.229690] NET: Registered protocol family 16
[    7.234180] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.241276] calling  bdi_class_init+0x0/0x4d @ 1
[    7.246061] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.252491] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.257616] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.264533] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.269536] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.276232] calling  pci_driver_init+0x0/0x12 @ 1
[    7.281094] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.287611] calling  backlight_class_init+0x0/0x85 @ 1
[    7.292870] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.299834] calling  video_output_class_init+0x0/0x19 @ 1
[    7.305358] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.312572] calling  xenbus_init+0x0/0x26f @ 1
[    7.317169] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.323424] calling  tty_class_init+0x0/0x38 @ 1
[    7.328170] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.334603] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.339972] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.346917] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.352729] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.360351] calling  register_node_type+0x0/0x34 @ 1
[    7.365509] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.372284] calling  i2c_init+0x0/0x70 @ 1
[    7.376611] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.382518] calling  init_ladder+0x0/0x12 @ 1
[    7.386937] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.393349] calling  init_menu+0x0/0x12 @ 1
[    7.397597] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.403836] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.408863] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.415722] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.421274] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.428623] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.433765] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.440668] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.445175] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.451675] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.456444] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.463028] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.468229] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.475247] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.479841] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.487467] ACPI: bus type PCI registered
[    7.491540] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.498040] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.504715] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.509342] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.516049] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.522536] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.527922] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.535128] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.540675] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.548039] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.553674] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.561126] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.566566] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.572910] calling  dmi_id_init+0x0/0x31d @ 1
[    7.577662] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.583920] calling  dca_init+0x0/0x20 @ 1
[    7.588078] dca service started, version 1.12.1
[    7.592730] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    7.598826] calling  iommu_init+0x0/0x58 @ 1
[    7.603167] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    7.609311] calling  pci_arch_init+0x0/0x69 @ 1
[    7.613921] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    7.623264] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    7.637956] PCI: Using configuration type 1 for base access
[    7.643520] initcall pci_arch_init+0x0/0x69 returned 0 after0x98 @ 1
[    7.655210] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    7.661576] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    7.666673] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    7.673600] calling  init_vdso+0x0/0x135 @ 1
[    7.677936] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    7.684086] calling  sysenter_setup+0x0/0x2dd @ 1
[    7.688853] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    7.695440] calling  param_sysfs_init+0x0/0x194 @ 1
[    7.716547] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    7.723582] calling  pm_sysrq_init+0x0/0x19 @ 1
[    7.734599] calling  default_bdi_init+0x0/0x65 @ 1
[    7.739755] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    7.746361] calling  init_bio+0x0/0xe9 @ 1
[    7.750573] bio: create slab <bio-0> at 0
[    7.754640] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    7.760743] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    7.766485] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    7.774002] calling  cryptomgr_init+0x0/0x12 @ 1
[    7.778680] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    7.785182] calling  blk_settings_init+0x0/0x2c @ 1
[    7.790118] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    7.796882] calling  blk_ioc_init+0x0/0x2a @ 1
[    7.801397] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    7.807712] calling  blk_softirq_init+0x0/0x6e @ 1
[    7.812566] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    7.819236] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    7.824090] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    7.830763] calling  blk_mq_init+0x0/0x5f @ 1
[    7.835185] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    7.841423] calling  genhd_device_init+0x0/0x85 @ 1
[    7.846491] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    7.853180] calling  pci_slot_init+0x0/0x50 @ 1
[    7.857779] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    7.864182] calling  fbmem_init+0x0/0x98 @ 1
[    7.868587] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    7.874670] calling  acpi_init+0x0/0x27a @ 1
[    7.879029] ACPI: Added _OSI(Module Device)
[    7.883249] ACPI: Added _OSI(Processor Device)
[    7.887754] ACPI: Added _OSI(3.0 _SCP Extensions)
[    7.892520] ACPI: Added _OSI(Processor Aggregator Device)
[    7.901728] ACPI: Executed 1 blocks of module-level executable AML code
[    7.933422] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    7.941276] \_SB_:_OSC invalid UUID
[    7.944756] _OSC request data:1 1f
[    7.950396] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    7.959619] ACPI: Dynamic OEM Table Load:
[    7.963617] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    7.973445] ACPI: Interpreter enabled
[    7.977114] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    7.986373] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    7.995657] ACPI: (supports S0 S1 S4 S5)
[    7.999629] ACPI: Using IOAPIC for interrupt routing
[    8.004798] kworker/u2:0 (275) used greatest stack depth: 5560 bytes left
[    8.011817] HEST: Table parsing has been initialized.
[    8.016865] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.027251] ACPI: No dock devices found.
[    8.128343] ACPI: Power Resource [FN00] (off)
[    8.133497] ACPI: Power Resource [FN01] (off)
[    8.138652] ACPI: Power Resource [FN02] (off)
[    8.143782] ACPI: Power Resource [FN03] (off)
[    8.148928] ACPI: Power Resource [FN04] (off)
[    8.158651] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.164830] acpi PNP0A08:00: _OSC: OS supports [Exten
[    8.175631] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.184625] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.198162] PCI host bridge to bus 0000:00
[    8.202255] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.207800] p0]
[    8.214040] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    8.220285] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.227218] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.234144] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.241078] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.248012] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.254944] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.261890] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:00.0
[    8.273467] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.279626] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.286243] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:01.0
[    8.297087] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.303151] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:01.1
[    8.314817] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.320828] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.327664] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.334940] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:02.0
[    8.346102] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.352124] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:03.0
[    8.364539] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.370600] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.377531] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.383760] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:14.0
[    8.394625] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.400664] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.407606] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:16.0
[    8.419245] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.425283] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.431579] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.437906] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.443665] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.450153] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:19.0
[    8.461004] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.467045] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.473500] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.480077] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1a.0
[    8.490933] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.496967] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.503930] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.510420] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1b.0
[    8.521266] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    8.527429] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    8.533946] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.0
[    8.544804] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    8.550969] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    8.557461] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.3
[    8.568309] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    8.574473] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    8.580962] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.5
[    8.591815] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    8.597975] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    8.604467] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.6
[    8.615316] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    8.621478] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    8.627968] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.7
[    8.638828] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    8.644866] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    8.651319] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    8.657897] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1d.0
[    8.668746] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.0
[    8.680429] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    8.686466] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    8.692068] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    8.697700] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    8.703334] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    8.708967] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    8.714599] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    8.721009] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.2
[    8.731776] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    8.737805] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    8.744660] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.3
[    8.755797] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    8.761831] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.6
[    8.774533] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    8.785326] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    8.791375] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    8.797009] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    8.803856] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    8.810705] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    8.817503] pci 0000:01:00.0: supports D1 D2
[    8.821894] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:01:00.0
[    8.834871] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    8.840088] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    8.846240] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    8.853088] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    8.859957] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    8.870755] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    8.876803] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    8.883124] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    8.889450] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    8.895082] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    8.901429] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    8.908219] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    8.914348] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    8.921261] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:02:00.0
[    8.933463] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    8.939472] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    8.945790] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    8.952116] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    8.957751] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    8.964096] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    8.970887] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    8.977012] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    8.983928] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:02:00.1
[    8.998223] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.003443] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.009593] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.016441] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.023476] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.034297] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.040342] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.046656] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.052984] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.058699] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.065528] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.071756] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:04:00.0
[    9.082672] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.088703] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.095016] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.101342] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.107059] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.113887] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:04:00.1
[    9.133944] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.139162] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.145312] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.152163] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.159193] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.170029] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.176059] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.182396] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.188009] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.194511] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.200747] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:05:00.0
[    9.213751] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.218970] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.225122] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.231972] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.239044] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.249869] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.256109] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.262885] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:06:00.0
[    9.273779] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.279008] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.285869] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.294363] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.300556] pci 0000:07:01.0: supports D1 D2
[    9.304815] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 12:27:00] PCI add device 0000:07:01.0
[    9.316641] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.322673] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.328981] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.335465] pci 0000:07:03.0: supports D1 D2
[    9.339721] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:07:03.0
[    9.357637] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.364691] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.371533] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.380103] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.388768] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.397348] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.405930] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.414328] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.420377] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:08.0
[    9.439053] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.445104] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:08.1
[    9.463802] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.469855] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:09.0
[    9.488550] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.494604] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:09.1
[    9.513314] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.519367] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0a.0
[    9.538078] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.544126] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0a.1
[    9.562835] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.568889] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0b.0
[    9.587570] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.593620] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0b.1
[    9.612351] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[    9.617577] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.624416] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[    9.631087] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[    9.637761] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[    9.644800] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.655693] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[    9.661796] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[    9.668966] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[    9.675254] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:09:00.0
[    9.688355] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[    9.693576] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[    9.700421] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[    9.707452] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.718252] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[    9.724303] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[    9.729929] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[    9.735561] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[    9.741195] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[    9.746828] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[    9.752462] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[    9.758998] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:0a:00.0
[    9.778514] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[    9.783738] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[    9.789890] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[    9.796738] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[    9.803504] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[    9.815255] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[    9.822567] ACPI: PCI Interrupt Link [LNKB] ( PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[    9.860288] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[    9.867592] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[    9.876007] ACPI: Enabled 4 GPEs in block 00 to 3F
[    9.880798] ACPI: \_SB_.PCI0: notify handler is installed
[    9.886281] Found 1 acpi root devices
[    9.890083] initcall acpi_init+0x0/0x27a returned 0 after 454101 usecs
[    9.896596] calling  pnp_init+0x0/0x12 @ 1
[    9.900850] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[    9.906761] calling  balloon_init+0x0/0x242 @ 1
[    9.911353] xen:balloon: Initialising balloon driver
[    9.916382] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[    9.922967] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[    9.928512] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[    9.935879] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[    9.941606] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[    9.948984] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[    9.954823] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[    9.962288] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[    9.967303] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[    9.973998] calling  balloon_init+0x0/0xfa @ 1
[    9.978500] xen_balloon: Initialising balloon driver
[    9.983916] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[    9.990348] calling  misc_init+0x0/0xba @ 1
[    9.994667] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.000662] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.005915] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.013993] vgaarb: loaded
[   10.016763] vgaarb: bridge control possible 0000:00:02.0
[   10.022138] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.029330] calling  cn_init+0x0/0xc0 @ 1
[   10.033421] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.039298] calling  dma_buf_init+0x0/0x75 @ 1
[   10.043814] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.050131] calling  phy_init+0x0/0x2e @ 1
[   10.054515] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.060425] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.065159] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.071604] calling  usb_init+0x0/0x169 @ 1
[   10.075865] ACPI: bus type USB registered
[   10.080123] usbcore: registered new interface driver usbfs
[   10.085703] usbcore: registered new interface driver hub
[   10.091097] usbcore: registered new device driver usb
[   10.096143] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.102467] calling  serio_init+0x0/0x31 @ 1
[   10.106920] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.113005] calling  input_init+0x0/0x103 @ 1
[   10.117494] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.123667] calling  rtc_init+0x0/0x5b @ 1
[   10.127898] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.133807] calling  pps_init+0x0/0xb7 @ 1
[   10.138028] pps_core: LinuxPPS API ver. 1 registered
[   10.142992] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.152177] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.158417] calling  ptp_init+0x0/0xa4 @ 1
[   10.162635] PTP clock support registered
[   10.166563] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.172716] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.178233] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.185458] calling  hwmon_init+0x0/0xe3 @ 1
[   10.189853] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.195945] calling  leds_init+0x0/0x40 @ 1
[   10.200249] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.206259] calling  efisubsys_init+0x0/0x142 @ 1
[   10.211023] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.217609] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.222373] PCI: Using ACPI for IRQ routing
[   10.230060] PCI: pci_cache_line_size set to 64 bytes
[   10.235218] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.254146] calling  proto_init+0x0/0x12 @ 1
[   10.258465] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.264621] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.269837] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.276177] calling  neigh_init+0x0/0x80 @ 1
[   10.280508] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.286662] calling  fib_rules_init+0x0/0xaf @ 1
[   10.291341] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.297840] calling  pktsched_init+0x0/0x10a @ 1
[   10.302526] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.309020] calling  tc_filter_init+0x0/0x55 @ 1
[   10.313702] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.320201] calling  tc_action_init+0x0/0x55 @ 1
[   10.324880] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.331382] calling  genl_init+0x0/0x85 @ 1
[   10.335642] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.341693] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.346287] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.352700] calling  netlbl_init+0x0/0x81 @ 1
[   10.357120] NetLabel: Initializing
[   10.360587] NetLabel:  domain hash size = 128
[   10.365007] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.370070] NetLabel:  unlabeled traffic allowed by default
[   10.375667] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.382164] calling  rfkill_init+0x0/0x79 @ 1
[   10.386761] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.392935] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.397525] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.403954] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.408719] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.415288] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.420623] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.427683] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.432800] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.439727] calling  hpet_late_init+0x0/0x101 @ 1
[   10.444496] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.451253] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.455763] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.462087] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.467641] Switched to clocksource xen
[   10.471540] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.479163] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.484649] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 279 usecs
[   10.491773] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.498103] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.506243] calling  event_trace_init+0x0/0x205 @ 1
[   10.525510] initcall event_trace_init+0x0/0x205 returned 0 after 13987 usecs
[   10.532548] calling  init_kprobe_trace+0x0/0x93 @ 1
[   10.537525] initcall init_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.544362] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.548903] initcall init_pipe_fs+0x0/0x4c returned 0 after 36 usecs
[   10.555280] calling  eventpoll_init+0x0/0xda @ 1
[   10.559987] initcall eventpoll_init+0x0/0xda returned 0 after 26 usecs
[   10.566546] calling  anon_inode_init+0x0/0x5b @ 1
[   10.571350] initcall anon_inode_init+0x0/0x5b returned 0 after 37 usecs
[   10.577986] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   10.583186] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   10.590207] calling  acpi_event_init+0x0/0x3a @ 1
[   10.594991] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   10.601644] calling  pnp_system_init+0x0/0x12 @ 1
[   10.606507] initcall pnp_system_init+0x0/0x12 returned 0 after 91 usecs
[   10.613123] calling  pnpacpi_init+0x0/0x8c @ 1
[   10.617617] pnp: PnP ACPI init
[   10.620760] ACPI: bus type PNP registered
[   10.625137] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   10.631737] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   10.638629] pnp 00:01: [dma 4]
[   10.641843] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   10.648533] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   10.655593] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   10.663137] system 00:04: [io  0x0680-0x069f] has been reserved
[   10.669051] system 00:04: [io  0xffff] has been reserved
[   10.674422] system 00:04: [io  0xffff] has been reserved
[   10.679795] system 00:04: [io  0xffff] has been reserved
[   10.685168] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   10.691148] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   10.697129] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   10.703109] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   10.709087] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   10.715068] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   10.721394] system 00:04: [io  0x164e-0x164f] has been reserved
[   10.727369] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.734251] xen: registering gsi 8 triggering 1 polarity 0
[   10.739932] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   10.746826] system 00:06: [io  0x1854-0x1857] has been reserved
[   10.752741] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   10.761102] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   10.767016] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   10.772989] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.781210] xen: registering gsi 4 triggering 1 polarity 0
[   10.786683] Already setup the GSI :4
[   10.790329] pnp 00:08: [dma 0 disabled]
[   10.794428] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   10.802160] xen: registering gsi 3 triggering 1 polarity 0
[   10.807663] pnp 00:09: [dma 0 disabled]
[   10.811771] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   10.818612] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   10.824526] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.831401] xen: registering gsi 13 triggering 1 polarity 0
[   10.837187] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   10.846814] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   10.853424] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   10.860094] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   10.866767] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   10.873440] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   10.880112] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   10.886785] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   10.893458] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   10.900131] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   10.906805] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   10.913479] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   10.920149] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   10.926820] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.935715] pnp: PnP ACPI: found 13 devices
[   10.939887] ACPI: bus type PNP unregistered
[   10.944136] initcall pnpacpi_init+0x0/0x8c returned 0 after 318865 usecs
[   10.950894] calling  pcistub_init+0x0/0x29f @ 1
[   10.955486] xen_pciback: Error parsing pci_devs_to_hide at "(xxxx:xxx:xx:)"
[   10.962507] initcall pcistub_init+0x0/0x29f returned -22 after 6855 usecs
[   10.969353] calling  chr_dev_init+0x0/0xc6 @ 1
[   10.983003] initcall chr_dev_init+0x0/0xc6 returned 0 after 8928 usecs
[   10.989526] calling  firmware_class_init+0x0/0xec @ 1
[   11.013393] calling  thermal_init+0x0/0x8b @ 1
[   11.017973] initcall thermal_init+0x0/0x8b returned 0 after 92 usecs
[   11.024325] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.030211] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.038096] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.046800] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.053227] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9361 usecs
[   11.061027] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.066681] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.071636] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.077789] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.084651] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.091578] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.098511] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.105442] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.112378] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.119310] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.126242] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.133177] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.140109] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.147042] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.153975] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.160899] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.168282] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.175198] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.182667] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.189584] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.196967] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.203884] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.211344] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.216624] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.222780] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.229629] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.234651] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.240808] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.247663] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.252678] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.258835] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.265689] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.270712] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.277568] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.282841] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.289696] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.294976] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.301828] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.306847] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.313700] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.318716] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.324874] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.331727] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.337348] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.342980] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.349307] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.355633] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.361959] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.368286] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.374699] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.381113] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.386745] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.393072] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.398705] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.405033] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.410664] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.416992] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.422625] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.428951] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.435279] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.441604] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.447931] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.454257] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.460585] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.466216] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.472545] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396456 usecs
[   11.480343] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.485209] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.491957] calling  inet_init+0x0/0x296 @ 1
[   11.496362] NET: Registered protocol family 2
[   11.501022] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.508278] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.514922] TCP: Hash tables configured (established 16384 bind 16384)
[   11.521513] TCP: reno registered
[   11.524796] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   11.530864] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   11.537504] initcall inet_init+0x0/0x296 returned 0 after 40247 usecs
[   11.543935] calling  ipv4_offload_init+0x0/0x61 @ 1
[   11.548871] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   11.555631] calling  af_unix_init+0x0/0x55 @ 1
[   11.560147] NET: Registered protocol family 1
[   11.564571] initcall af_unix_init+0x0/0x55 returned 0 after 4330 usecs
[   11.571144] calling  ipv6_offload_init+0x0/0x7f @ 1
[   11.576084] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   11.582843] calling  init_sunrpc+0x0/0x69 @ 1
[   11.587463] RPC: Registered named UNIX socket transport module.
[   11.593378] RPC: Registered udp transport module.
[   11.598141] RPC: Registered tcp transport module.
[   11.602907] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   11.609405] initcall init_sunrpc+0x0/0x69 returned 0 after 21621 usecs
[   11.615994] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   11.621460] pci 0000:00:02.0: Boot video device
[   11.626546] xen: registering gsi 16 triggering 0 polarity 1
[   11.632119] xen: --> pirq=16 -> irq=16 (gsi=16)
[   11.636758] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   11.644497] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   11.652404] xen: registering gsi 16 triggering 0 polarity 1
[   11.657965] Already setup the GSI :16
[   11.677676] xen: registering gsi 23 triggering 0 polarity 1
[   11.683252] xen: --> pirq=23 -> irq=23 (gsi=23)
[   11.704904] xen: registering gsi 18 triggering 0 polarity 1
[   11.710490] xen: --> pirq=18 -> irq=18 (gsi=18)
[   11.7157 returned 0 after 104854 usecs
[   11.736524] calling  populate_rootfs+0x0/0x112 @ 1
[   11.741512] Unpacking initramfs...
[   12.831493] Freeing initrd memory: 83616K (ffff8800023f4000 - ffff88000759c000)
[   12.838803] initcall populate_rootfs+0x0/0x112 returned 0 after 1071701 usecs
[   12.845987] calling  pci_iommu_init+0x0/0x41 @ 1
[   12.850667] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   12.857167] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   12.862799] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   12.870443] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   12.876406] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   12.884206] calling  i8259A_init_ops+0x0/0x21 @ 1
[   12.888972] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   12.895558] calling  vsyscall_init+0x0/0x27 @ 1
[   12.900155] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   12.906564] calling  sbf_init+0x0/0xf6 @ 1
[   12.910724] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   12.916704] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   12.921904] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   12.928924] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   12.933433] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   12.939757] calling  i8237A_init_ops+0x0/0x14 @ 1
[   12.944524] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   12.951110] calling  cache_sysfs_init+0x0/0x65 @ 1
[   12.956211] initcall cache_sysfs_init+0x0/0x65 returned 0 after 239 usecs
[   12.962984] calling  amd_uncore_init+0x0/0x130 @ 1
[   12.967836] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   12.974682] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   12.979710] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   12.986728] calling  intel_uncore_init+0x0/0x3ab @ 1
[   12.991757] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   12.998776] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.003471] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.014029] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10326 usecs
[   13.020878] calling  inject_init+0x0/0x30 @ 1
[   13.025295] Machine check injector initialized
[   13.029803] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.036302] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.042194] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.049907] calling  microcode_init+0x0/0x1b1 @ 1
[   13.054861] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.060972] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.069743] initcall microcode_init+0x0/0x1b1 returned 0 after 14715 usecs
[   13.076674] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.081263] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.087850] calling  msr_init+0x0/0x162 @ 1
[   13.092318] initcall msr_init+0x0/0x162 returned 0 after 216 usecs
[   13.098488] calling  cpuid_init+0x0/0x162 @ 1
[   13.103100] initcall cpuid_init+0x0/0x162 returned 0 after 192 usecs
[   13.109439] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.114204] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.120791] calling  add_pcspkr+0x0/0x40 @ 1
[   13.125227] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.131490] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.137987] Scanning for low memory corruption every 60 seconds
[   13.143965] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5837 usecs
[   13.152545] calling  sysfb_init+0x0/0x9c @ 1
[   13.156985] initcall sysfb_init+0x0/0x9c returned 0 after 104 usecs
[   13.163243] calling  audit_classes_init+0x0/0xaf @ 1
[   13.168281] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.175201] calling  pt_dump_init+0x0/0x30 @ 1
[   13.179717] initcall pt_dump_init+0x0/0x30 returned 0 after 9 usecs
[   13.186035] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.190893] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.197559] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.202852] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.209951] calling  ioresources_init+0x0/0x3c @ 1
[   13.214809] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.221479] calling  uid_cache_init+0x0/0x85 @ 1
[   13.226174] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.232744] calling  init_posix_timers+0x0/0x240 @ 1
[   13.237788] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.244703] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.249991] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.257096] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.262214] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.269145] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.274463] initcall snapshot_device_init+0x0/0x12 returned 0 after 116 usecs
[   13.281588] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.286353] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.292942] calling  create_proc_profile+0x0/0x300 @ 1
[   13.298141] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.305160] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.310360] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.317381] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.322967] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 208 usecs
[   13.330263] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.335637] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 3 usecs
[   13.342827] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.347870] initcall alarmtimer_init+0x0/0x15f returned 0 after 187 usecs
[   13.354650] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.360316] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 286 usecs
[   13.367612] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.372641] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.379484] calling  futex_init+0x0/0xf6 @ 1
[   13.383831] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.389973] initcall futex_init+0x0/0xf6 returned 0 after 6012 usecs
[   13.396384] calling  proc_dma_init+0x0/0x22 @ 1
[   13.400984] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.407389] calling  proc_modules_init+0x0/0x22 @ 1
[   13.412333] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.419089] calling  kallsyms_init+0x0/0x25 @ 1
[   13.423685] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.430095] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.435912] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.443615] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.449078] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.456354] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.461481] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.468487] calling  ikconfig_init+0x0/0x3c @ 1
[   13.473084] initcall ikconfig_init+0x0/0x3c returned 0 after 4 usecs
[   13.479494] calling  audit_init+0x0/0x141 @ 1
[   13.483913] audit: initializing netlink socket (disabled)
[   13.489396] type=2000 audit(1390393621.194:1): initialized
[   13.494922] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.501506] calling  audit_watch_init+0x0/0x3a @ 1
[   13.506361] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.513033] calling  audit_tree_init+0x0/0x49 @ 1
[   13.517801] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.524387] calling  init_kprobes+0x0/0x16c @ 1
[   13.539053] initcall init_kprobes+0x0/0x16c returned 0 after 9836 usecs
[   13.545655] calling  hung_task_init+0x0/0x56 @ 1
[   13.573837] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.580500] calling  init_blk_tracer+0x0/0x5a @ 1
[   13.585269] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   13.591854] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   13.597568] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   13.605107] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   13.610947] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 537 usecs
[   13.618155] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   13.623682] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   13.630979] calling  kswapd_init+0x0/0x76 @ 1
[   13.635442] initcall kswapd_init+0x0/0x76 returned 0 after 42 usecs
[   13.641726] calling  extfrag_debug_init+0x0/0x7e @ 1
[   13.646771] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   13.653683] calling  setup_vmstat+0x0/0xf3 @ 1
[   13.658204] initcall setup_vmstat+0x0/0xf3 returned 0 after 14 usecs
[   13.664603] calling  mm_sysfs_init+0x0/0x29 @ 1
[   13.669205] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   13.675695] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   13.680984] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   13.688090] calling  slab_proc_init+0x0/0x25 @ 1
[   13.692772] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   13.699268] calling  init_reserve_notifier+0x0/0x26 @ 1
[   13.704556] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   13.711660] calling  init_admin_reserve+0x0/0x40 @ 1
[   13.716686] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   13.723533] calling  init_user_reserve+0x0/0x40 @ 1
[   13.728474] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   13.735233] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   13.740177] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   13.746934] calling  procswaps_init+0x0/0x22 @ 1
[   13.751616] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   13.758112] calling  init_frontswap+0x0/0x96 @ 1
[   13.762819] initcall init_frontswap+0x0/0x96 returned 0 after 26 usecs
[   13.769380] calling  hugetlb_init+0x0/0x4c2 @ 1
[   13.773972] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   13.780476] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6352 usecs
[   13.787076] calling  mmu_notifier_init+0x0/0x12 @ 1
[   13.792019] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   13.798776] calling  slab_proc_init+0x0/0x8 @ 1
[   13.803370] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   13.809781] calling  cpucache_init+0x0/0x4b @ 1
[   13.814375] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   13.820788] calling  hugepage_init+0x0/0x145 @ 1
[   13.825469] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   13.832142] calling  init_cleancache+0x0/0xbc @ 1
[   13.836936] initcall init_cleancache+0x0/0xbc returned 0 after 28 usecs
[   13.843583] calling  fcntl_init+0x0/0x2a @ 1
[   13.847927] initcall fcntl_init+0x0/0x2a returned 0 after 12 usecs
[   13.854156] calling  proc_filesystems_init+0x0/0x22 @ 1
[   13.859444] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   13.866549] calling  dio_init+0x0/0x2d @ 1
[   13.870719] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   13.876774] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   13.881827] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 25 usecs
[   13.888738] calling  dnotify_init+0x0/0x7b @ 1
[   13.893267] initcall dnotify_init+0x0/0x7b returned 0 after 24 usecs
[   13.899657] calling  inotify_user_setup+0x0/0x70 @ 1
[   13.904701] initcall inotify_user_setup+0x0/0x70 returned 0 after 18 usecs
[   13.911615] calling  aio_setup+0x0/0x7d @ 1
[   13.915919] initcall aio_setup+0x0/0x7d returned 0 after 55 usecs
[   13.922015] calling  proc_locks_init+0x0/0x22 @ 1
[   13.926782] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   13.933366] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   13.938264] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   13.944980] calling  dquot_init+0x0/0x121 @ 1
[   13.949397] VFS: Disk quotas dquot_6.5.2
[   13.953422] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   13.959887] initcall dquot_init+0x0/0x121 returned 0 after 10243 usecs
[   13.966472] calling  init_v2_quota_format+0x0/0x22 @ 1
[   13.971673] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   13.978692] calling  quota_init+0x0/0x31 @ 1
[   13.983042] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   13.989265] calling  proc_cmdline_init+0x0/0x22 @ 1
[   13.994207] initcall proc_cmdline_init+0x0/0x22 returned 0 after 3 usecs
[   14.000963] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.005994] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.012837] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.017780] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.024537] calling  proc_devices_init+0x0/0x22 @ 1
[   14.029479] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.036236] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.041440] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.048456] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.053399] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.060158] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.065099] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.071855] calling  proc_stat_init+0x0/0x22 @ 1
[   14.076541] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.083035] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.087892] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.094562] calling  proc_version_init+0x0/0x22 @ 1
[   14.099503] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.106261] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.111291] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.118135] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.122911] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.129573] calling  vmcore_init+0x0/0x5cb @ 1
[   14.134080] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.140406] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.145090] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   14.151586] calling  proc_page_init+0x0/0x42 @ 1
[   14.156273] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.162766] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.167492] initcall init_devpts_fs+0x0/0x62 returned 0 after 44 usecs
[   14.174033] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.178635] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   14.185039] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.190134] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 68 usecs
[   14.197000] calling  init_fat_fs+0x0/0x4f @ 1
[   14.201440] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.207745] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.212252] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.218579] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.223171] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.229585] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.234375] initcall init_iso9660_fs+0x0/0x70 returned 0 after 23 usecs
[   14.241023] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.245732] initcall init_nfs_fs+0x0/0x16c returned 0 after 196 usecs
[   14.252161] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.256580] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.262820] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.267240] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.273478] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.277899] NFS: Registering the id_resolver key type
[   14.283028] Key type id_resolver registered
[   14.287259] Key type id_legacy registered
[   14.291337] initcall init_nfs_v4+0x0/0x3b returned 0 after 13122 usecs
[   14.297919] calling  init_nlm+0x0/0x4c @ 1
[   14.302087] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.308059] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.312739] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.319239] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.323920] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.330418] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.335445] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.342291] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.346886] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.353299] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.357890] NTFS driver 2.1.30 [Flags: R/W].
[   14.362274] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4280 usecs
[   14.368897] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.373795] initcall init_autofs4_fs+0x0/0x2a returned 0 after 127 usecs
[   14.380490] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.385176] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.391752] calling  ipc_init+0x0/0x2f @ 1
[   14.395919] msgmni has been set to 3857
[   14.399822] initcall ipc_init+0x0/0x2f returned 0 after 3817 usecs
[   14.406051] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.410826] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.417404] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.422144] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.428672] calling  key_proc_init+0x0/0x5e @ 1
[   14.433270] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.439676] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.444701] SELinux:  Registering netfilter hooks
[   14.449605] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4788 usecs
[   14.456636] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.461409] initcall init_sel_fs+0x0/0xa5 returned 0 after 344 usecs
[   14.467748] calling  selnl_init+0x0/0x56 @ 1
[   14.472090] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.478320] calling  sel_netif_init+0x0/0x5c @ 1
[   14.483003] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.489500] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.494356] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.501027] calling  sel_netport_init+0x0/0x6a @ 1
[   14.505881] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs
[   14.512554] calling  aurule_init+0x0/0x2d @ 1
[   14.516973] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.523212] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.527925] initcall crypto_wq_init+0x0/0x33 returned 0 after 32 usecs
[   14.534481] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.539452] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.546207] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.551320] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.558252] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.563279] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.570126] calling  hmac_module_init+0x0/0x12 @ 1
[   14.574979] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.581653] calling  md5_mod_init+0x0/0x12 @ 1
[   14.586190] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   14.592573] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   14.597885] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 26 usecs
[   14.605052] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   14.610424] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.617618] calling  des_generic_mod_init+0x0/0x17 @ 1
[   14.622870] initcall des_generic_mod_init+0x0/0x17 returned 0 after 51 usecs
[   14.629924] calling  aes_init+0x0/0x12 @ 1
[   14.634112] initcall aes_init+0x0/0x12 returned 0 after 26 usecs
[   14.640151] calling  zlib_mod_init+0x0/0x12 @ 1
[   14.644769] initcall zlib_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.651243] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   14.656962] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.664503] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   14.670569] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.678456] calling  krng_mod_init+0x0/0x12 @ 1
[   14.683076] initcall krng_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.689548] calling  proc_genhd_init+0x0/0x3c @ 1
[   14.694323] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   14.700901] calling  bsg_init+0x0/0x12e @ 1
[   14.705229] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   14.712611] initcall bsg_init+0x0/0x12e returned 0 after 7286 usecs
[   14.718937] calling  noop_init+0x0/0x12 @ 1
[   14.723181] io scheduler noop registered
[   14.727168] initcall noop_init+0x0/0x12 returned 0 after 3893 usecs
[   14.733495] calling  deadline_init+0x0/0x12 @ 1
[   14.738089] io scheduler deadline registered
[   14.742422] initcall deadline_init+0x0/0x12 returned 0 after 4231 usecs
[   14.749095] calling  cfq_init+0x0/0x8b @ 1
[   14.753279] io scheduler cfq registered (default)
[   14.758022] initcall cfq_init+0x0/0x8b returned 0 after 4655 usecs
[   14.764261] calling  percpu_counter_startup+0x0/0x38 @ 1
[   14.769635] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   14.776826] calling  pci_proc_init+0x0/0x6a @ 1
[   14.781610] initcall pci_proc_init+0x0/0x6a returned 0 after 184 usecs
[   14.788120] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   14.793779] xen: registering gsi 16 triggering 0 polarity 1
[   14.799341] Already setup the GSI :16
[   14.803876] xen: registering gsi 16 triggering 0 polarity 1
[   14.809441] Already setup the GSI :16
[   14.813947] xen: registering gsi 16 triggering 0 polarity 1
[   14.819515] Already setup the GSI :16
[   14.823875] xen: registering gsi 19 triggering 0 polarity 1
[   14.829452] xen: --> pirq=19 -> irq=19 (gsi=19)
[   14.834681] xen: registering gsi 17 triggering 0 polarity 1
[   14.840255] xen: --> pirq=17 -> irq=17 (gsi=17)
[   14.845567] xen: registering gsi 19 triggering 0 polarity 1
[   14.851129] Already setup the GSI :19
[   14.855049] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60535 usecs
[   14.862083] calling  aer_service_init+0x0/0x2b @ 1
[   14.867006] initcall aer_service_init+0x0/0x2b returned 0 after 70 usecs
[   14.873694] calling  pci_hotplug_init+0x0/0x1d @ 1
[   14.878547] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   14.884179] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5499 usecs
[   14.891114] calling  pcied_init+0x0/0x79 @ 1
[   14.895644] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   14.902247] initcall pcied_init+0x0/0x79 returned 0 after 6641 usecs
[   14.908656] calling  pcifront_init+0x0/0x3f @ 1
[   14.913246] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   14.919834] calling  genericbl_driver_init+0x0/0x14 @ 1
[   14.925233] initcall genericbl_driver_init+0x0/0x14 returned 0 after 110 usecs
[   14.932443] calling  cirrusfb_init+0x0/0xcc @ 1
[   14.937126] initcall cirrusfb_init+0x0/0xcc returned 0 after 88 usecs
[   14.943554] calling  efifb_driver_init+0x0/0x14 @ 1
[   14.948564] initcall efifb_driver_init+0x0/0x14 returned 0 after 69 usecs
[   14.955344] calling  intel_idle_init+0x0/0x331 @ 1
[   14.960196] intel_idle: MWAIT substates: 0x42120
[   14.964875] intel_idle: v0.4 model 0x3C
[   14.968772] intel_idle: lapic_timer_reliable_states 0xffffffff
[   14.974671] intel_idle: intel_idle yielding to none
[   14.979345] initcall intel_idle_init+0x0/0x331 returned -19 after 18699 usecs
[   14.986801] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   14.992179] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   14.999364] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.003945] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs
[   15.010297] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.016028] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.024192] ACPI: Power Button [PWRB]
[   15.028171] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.035551] ACPI: Power Button [PWRF]
[   15.039347] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23048 usecs
[   15.046902] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.052336] ACPI: Fan [FAN0] (off)
[   15.055967] ACPI: Fan [FAN1] (off)
[   15.059585] ACPI: Fan [FAN2] (off)
[   15.063186] ACPI: Fan [FAN3] (off)
[   15.066793] ACPI: Fan [FAN4] (off)
[   15.070258] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17732 usecs
[   15.077559] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.095589] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.104011] ACPI Error: MetPlatform Limit not supported.
[   15.136462] initcall acpi_processor_driver_init+0x0/0x43 returned 0 after 51939 usecs
[   15.144351] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.152493] thermal LNXTHERM:00: registered as thermal_zone0
[   15.158144] ACPI: Thermal Zone [TZ00] (28 C)
[   15.1645830 C)
[   15.174908] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25015 usecs
[   15.181946] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.186890] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.193640] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.198884] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.204607] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5631 usecs
[   15.211817] calling  erst_init+0x0/0x2fc @ 1
[   15.216191] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.223696] pstore: Registered erst as persistent store backend
[   15.229667] initcall erst_init+0x0/0x2fc returned 0 after 13202 usecs
[   15.236166] calling  ghes_init+0x0/0x173 @ 1
[   15.240647] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35324 usecs
[   15.249090] \_SB_:_OSC request failed
[   15.252749] _OSC request data:1 1 0
[   15.256385] \_SB_:_OSC invalid UUID
[   15.259938] _OSC request data:1 1 0
[   15.263577] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.269820] initcall ghes_init+0x0/0x173 returned 0 after 28633 usecs
[   15.276318] calling  einj_init+0x0/0x522 @ 1
[   15.280715] EINJ: Error INJection is initialized.
[   15.285417] initcall einj_init+0x0/0x522 returned 0 after 4654 usecs
[   15.291832] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.296683] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.302725] initcall ioat_init_module+0x0/0xb1 returned 0 after 5899 usecs
[   15.309607] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.314516] initcall virtio_mmio_init+0x0/0x14 returned 0 after 71 usecs
[   15.321203] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.326992] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 68 usecs
[   15.334549] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.339835] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.346940] calling  xenbus_init+0x0/0x3d @ 1
[   15.351496] initcall xenbus_init+0x0/0x3d returned 0 after 130 usecs
[   15.357836] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.363070] initcall xenbus_backend_init+0x0/0x51 returned 0 after 117 usecs
[   15.370109] calling  gntdev_init+0x0/0x4d @ 1
[   15.374679] initcall gntdev_init+0x0/0x4d returned 0 after 148 usecs
[   15.381022] calling  gntalloc_init+0x0/0x3d @ 1
[   15.385744] initcall gntalloc_init+0x0/0x3d returned 0 after 126 usecs
[   15.392262] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.397633] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.404824] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.409830] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 63 usecs
[   15.416611] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.422249] initcall platform_pci_module_init+0x0/0x1b returned 0 after 88 usecs
[   15.429630] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.435019] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 186 usecs
[   15.442144] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.446935] xen_pciback: backend is vpci
[   15.450973] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3969 usecs
[   15.457755] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.464056] xen_acpi_processor: Uploading Xen processor PM info
[   15.472556] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 9039 usecs
[   15.480110] calling  pty_init+0x0/0x453 @ 1
[   15.503680] kworker/u2:0 (861) used greatest stack depth: 5488 bytes left
[   15.547929] initcall pty_init+0x0/0x453 returned 0 after 62081 usecs
[   15.554276] calling  sysrq_init+0x0/0xb0 @ 1
[   /0x228 returned 0 after 1062 usecs
[   15.577058] calling  serial8250_init+0x0/0x1ab @ 1
[   15.581907] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   15.609583] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   15.618001] initcall serial8250_init+0x0/0x1ab returned 0 after 35246 usecs
[   15.624951] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   15.630445] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 121 usecs
[   15.637738] calling  init_kgdboc+0x0/0x16 @ 1
[   15.642158] kgdb: Registered I/O driver kgdboc.
[   15.646777] initcall init_kgdboc+0x0/0x16 returned 0 after 4511 usecs
[   15.653250] calling  init+0x0/0x10f @ 1
[   15.657366] initcall init+0x0/0x10f returned 0 after 210 usecs
[   15.663192] calling  hpet_init+0x0/0x6a @ 1
[   15.667917] hpet_acpi_add: no address or irqs in _CRS
[   15.673035] initcall hpet_init+0x0/0x6a returned 0 after 5464 usecs
[   15.679295] calling  nvram_init+0x0/0x82 @ 1
[   15.683767] Non-volatile memory driver v1.3
[   15.687946] initcall nvram_init+0x0/0x82 returned 0 after 4219 usecs
[   15.694356] calling  mod_init+0x0/0x5a @ 1
[   15.698514] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   15.704667] calling  rng_init+0x0/0x12 @ 1
[   15.708963] initcall rng_init+0x0/0x12 returned 0 after 132 usecs
[   15.715044] calling  agp_init+0x0/0x26 @ 1
[   15.719202] Linux agpgart interface v0.103
[   15.723360] initcall agp_init+0x0/0x26 returned 0 after 4060 usecs
[   15.729601] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   15.734686] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 142 usecs
[   15.741720] calling  agp_intel_init+0x0/0x29 @ 1
[   15.746499] initcall agp_intel_init+0x0/0x29 returned 0 after 97 usecs
[   15.753014] calling  agp_sis_init+0x0/0x29 @ 1
[   15.757614] initcall agp_sis_init+0x0/0x29 returned 0 after 94 usecs
[   15.763958] calling  agp_via_init+0x0/0x29 @ 1
[   15.768556] initcall agp_via_init+0x0/0x29 returned 0 after 90 usecs
[   15.774903] calling  drm_core_init+0x0/0x10c @ 1
[   15.779671] [drm] Initialized drm 1.1.0 20060810
[   15.784282] initcall drm_core_init+0x0/0x10c returned 0 after 4590 usecs
[   15.791040] calling  cn_proc_init+0x0/0x3d @ 1
[   15.795545] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   15.801869] calling  topology_sysfs_init+0x0/0x70 @ 1
[   15.807014] initcall topology_sysfs_init+0x0/0x70 returned 0 after 30 usecs
[   15.814003] calling  loop_init+0x0/0x14e @ 1
[   15.867870] loop: module loaded
[   15.871004] initcall loop_init+0x0/0x14e returned 0 after 51433 usecs
[   15.877503] calling  xen_blkif_init+0x0/0x22 @ 1
[   15.882286] initcall xen_blkif_init+0x0/0x22 returned 0 after 101 usecs
[   15.888894] calling  mac_hid_init+0x0/0x22 @ 1
[   15.893405] initcall mac_hid_init+0x0/0x22 returned 0 after 7 usecs
[   15.899724] calling  macvlan_init_module+0x0/0x3d @ 1
[   15.904840] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 usecs
[   15.911772] calling  macvtap_init+0x0/0x100 @ 1
[   15.916454] initcall macvtap_init+0x0/0x100 returned 0 after 89 usecs
[   15.922881] calling  net_olddevs_init+0x0/0xb5 @ 1
[   15.927734] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   15.934405] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   15.939829] libphy: Fixed MDIO Bus: probed
[   15.943916] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4207 usecs
[   15.951195] calling  tun_init+0x0/0x93 @ 1
[   15.955352] tun: Universal TUN/TAP device driver, 1.6
[   15.960466] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   15.966855] initcall tun_init+0x0/0x93 returned 0 after 11232 usecs
[   15.973112] calling  tg3_driver_init+0x0/0x1b @ 1
[   15.977993] initcall tg3_driver_init+0x0/0x1b returned 0 after 114 usecs
[   15.984681] calling  ixgbevf_init_module+0x0/0x4c @ 1
[   15.989793] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.11.3-k
[   15.999238] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
[   16.005505] initcall ixgbevf_init_module+0x0/0x4c returned 0 after 15343 usecs
[   16.012719] calling  forcedeth_pci_driver_init+0x0/0x1b @ 1
[   16.018456] initcall forcedeth_pci_driver_init+0x0/0x1b returned 0 after 100 usecs
[   16.026012] calling  netback_init+0x0/0x48 @ 1
[   16.030590] initcall netback_init+0x0/0x48 returned 0 after 70 usecs
[   16.036934] calling  nonstatic_sysfs_init+0x0/0x12 @ 1
[   16.042132] initcall nonstatic_sysfs_init+0x0/0x12 returned 0 after 0 usecs
[   16.049150] calling  yenta_cardbus_driver_init+0x0/0x1b @ 1
[   16.054898] initcall yenta_cardbus_driver_init+0x0/0x1b returned 0 after 112 usecs
[   16.062460] calling  mon_init+0x0/0xfe @ 1
[   16.066829] initcall mon_init+0x0/0xfe returned 0 after 211 usecs
[   16.072917] calling  ehci_hcd_init+0x0/0x5c @ 1
[   16.077506] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   16.084094] initcall ehci_hcd_init+0x0/0x5c returned 0 after 6433 usecs
[   16.090766] calling  ehci_pci_init+0x0/0x69 @ 1
[   16.095358] ehci-pci: EHCI PCI platform driver
[   16.100443] xen: registering gsi 16 triggering 0 polarity 1
[   16.106004] Already setup the GSI :16
[   16.109766] ehci-pci 0000:00:1a.0: EHCI Host Controller
[   16.115234] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1
[   16.122650] ehci-pci 0000:00:1a.0: debug port 2
[   16.131127] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[   16.137979] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf153c000
[   16.148723] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[   16.154594] usb usb1: New USB device found, idVendor=1d6b,
[   16.173603] usb usb1: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   16.181050] usb usb1: SerialNumber: 0000:00:1a.0
[   16.186395] hub 1-0:1.0: USB hub found
[   16.190171] hub 1-0:1.0: 3 ports detected
[   16.195665] xen: registering gsi 23 triggering 0 polarity 1
[   16.201231] Already setup the GSI :23
[   16.204983] ehci-pci 0000:00:1d.0: EHCI Host Controller
[   16.210468] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2
[   16.217875] ehci-pci 0000:00:1d.0: debug port 2
[   16.226369] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[   16.233219] ehci-pci 0000:00:1d.0: irq 23, io evice found, idVendor=1d6b, idProduct=0002
[   16.257396] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   16.264667] usb usb2: Product: EHCI Host Controller
[   16.269612] usb usb2: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   16.277059] usb usb2: SerialNumber: 0000:00:1d.0
[   16.282368] hub 2-0:1.0: USB hub found
[   16.286138] hub 2-0:1.0: 3 ports detected
[   16.291162] initcall ehci_pci_init+0x0/0x69 returned 0 after 191213 usecs
[   16.297939] calling  ohci_hcd_mod_init+0x0/0x64 @ 1
[   16.302877] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   16.309123] initcall ohci_hcd_mod_init+0x0/0x64 returned 0 after 6098 usecs
[   16.316134] calling  ohci_pci_init+0x0/0x69 @ 1
[   16.320728] ohci-pci: OHCI PCI platform driver
[   16.325341] initcall ohci_pci_init+0x0/0x69 returned 0 after 4504 usecs
[   16.331944] calling  uhci_hcd_init+0x0/0xb0 @ 1
[   16.336536] uhci_hcd: USB Universal Host Controller Interface driver
[   16.343082] initcall uhci_hcd_init+0x0/0xb0 returned 0 after 6391 usecs
[   16.349687] calling  usblp_driver_init+0x0/0x1b @ 1
[   16.354802] usbcore: registered new interface driver usblp
[   16.360277] initcall usblp_driver_init+0x0/0x1b returned 0 after 5519 usecs
[   16.367296] calling  kgdbdbgp_start_thread+0x0/0x4f @ 1
[   16.372580] initcall kgdbdbgp_start_thread+0x0/0x4f returned 0 after 0 usecs
[   16.379687] calling  i8042_init+0x0/0x3c5 @ 1
[   16.384371] i8042: PNP: No PS/2 controller found. Probing ports directly.
[   16.394424] serio: i8042 KBD port at 0x60,0x64 irq 1
[   16.399385] serio: i8042 AUX port at 0x60,0x64 irq 12
[   16.404674] initcall i8042_init+0x0/0x3c5 returned 0 after 20084 usecs
[   16.411193] calling  serport_init+0x0/0x34 @ 1
[   16.415696] initcall serport_init+0x0/0x34 returned 0 after 0 usecs
[   16.422023] calling  mousedev_init+0x0/0x62 @ 1
[   16.426806] mousedev: PS/2 mouse device common for all mice
[   16.432372] initcall mousedev_init+0x0/0x62 returned 0 after 5620 usecs
[   16.439043] calling  evdev_init+0x0/0x12 @ 1
[   16.443771] initcall evdev_init+0x0/0x12 returned 0 after 384 usecs
[   16.450027] calling  atkbd_init+0x0/0x27 @ 1
[   16.454500] initcall atkbd_init+0x0/0x27 returned 0 after 137 usecs
[   16.460760] calling  psmouse_init+0x0/0x82 @ 1
[   16.465448] initcall psmouse_init+0x0/0x82 returned 0 after 182 usecs
[   16.471875] calling  cmos_init+0x0/0x77 @ 1
[   16.476168] rtc_cmos 00:05: RTC can wake from S4
[   16.481185] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[   16.487332] rtc_cmos 00:05: alarms up to one month, y3k, 242 bytes nvram
[   16.494495] initcall cmos_init+0x0/0x77 returned 0 after 17937 usecs
[   16.500840] calling  i2c_i801_init+0x0/0xad @ 1
[   16.506031] xen: registering gsi 18 triggering 0 polarity 1
[   16.511591] Already setup the GSI :18
[   16.515412] i801_smbus 0000:00:1f.3: SMBus using PCI Interrupt
[   16.521825] initcall i2c_i801_init+0x0/0xad returned 0 after 16005 usecs
[   16.528521] calling  cpufreq_gov_dbs_init+0x0/0x12 @ 1
[   16.533723] initcall cpufreq_gov_dbs_init+0x0/0x12 returned -19 after 0 usecs
[   16.540943] calling  efivars_sysfs_init+0x0/0x220 @ 1
[   16.546051] initcall efivars_sysfs_init+0x0/0x220 returned -19 after 0 usecs
[   16.553156] calling  efivars_pstore_init+0x0/0xa2 @ 1
[   16.558271] initcall efivars_pstore_init+0x0/0xa2 returned 0 after 0 usecs
[   16.565202] calling  vhost_net_init+0x0/0x30 @ 1
[   16.570377] initcall vhost_net_init+0x0/0x30 returned 0 after 482 usecs
[   16.576984] calling  vhost_init+0x0/0x8 @ 1
[   16.581233] initcall vhost_init+0x0/0x8 returned 0 after 0 usecs
[   16.587293] calling  staging_init+0x0/0x8 @ 1
[   16.591713] initcall staging_init+0x0/0x8 returned 0 after 0 usecs
[   16.597953] calling  zram_init+0x0/0x2fd @ 1
[   16.603120] zram: Created 1 device(s) ...
[   16.607126] initcall zram_init+0x0/0x2fd returned 0 after 4727 usecs
[   16.613536] calling  zs_init+0x0/0x90 @ 1
[   16.617611] initcall zs_init+0x0/0x90 returned 0 after 2 usecs
[   16.623502] calling  eeepc_laptop_init+0x0/0x5a @ 1
[   16.628496] usb 1-1: new high-speed USB device number 2 using ehci-pci
[   16.635282] initcall eeepc_laptop_init+0x0/0x5a returned -19 after 6680 usecs
[   16.642407] calling  sock_diag_init+0x0/0x12 @ 1
[   16.647105] initcall sock_diag_init+0x0/0x12 returned 0 after 16 usecs
[   16.653675] calling  flow_cache_init_global+0x0/0x19a @ 1
[   16.659149] initcall flow_cache_init_global+0x0/0x19a returned 0 after 20 usecs
[   16.666496] calling  llc_init+0x0/0x20 @ 1
[   16.670657] initcall llc_init+0x0/0x20 returned 0 after 0 usecs
[   16.676635] calling  snap_init+0x0/0x38 @ 1
[   16.680883] initcall snap_init+0x0/0x38 returned 0 after 1 usecs
[   16.686949] calling  blackhole_module_init+0x0/0x12 @ 1
[   16.692235] initcall blackhole_module_init+0x0/0x12 returned 0 after 0 usecs
[   16.699342] calling  nfnetlink_init+0x0/0x59 @ 1
[   16.704022] Netfilter messages via NETLINK v0.30.
[   16.708803] initcall nfnetlink_init+0x0/0x59 returned 0 after 4668 usecs
[   16.715548] calling  nfnetlink_log_init+0x0/0xb6 @ 1
[   16.720585] initcall nfnetlink_log_init+0x0/0xb6 returned 0 after 10 usecs
[   16.727509] calling  nf_conntrack_standalone_init+0x0/0x82 @ 1
[   16.733401] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   16.739653] initcall nf_conntrack_standalone_init+0x0/0x82 returned 0 after 6104 usecs
[   16.747556] calling  ctnetlink_init+0x0/0xa4 @ 1
[   16.752232] ctnetlink v0.93: registering with nfnetlink.
[   16.757606] initcall ctnetlink_init+0x0/0xa4 returned 0 after 5247 usecs
[   16.764367] calling  nf_conntrack_ftp_init+0x0/0x1ca @ 1
[   16.769744] initcall nf_conntrack_ftp_init+0x0/0x1ca returned 0 after 4 usecs
[   16.776933] calling  nf_conntrack_irc_init+0x0/0x173 @ 1
[   16.782309] initcall nf_conntrack_irc_init+0x0/0x173 returned 0 after 3 usecs
[   16.789498] calling  nf_conntrack_sip_init+0x0/0x215 @ 1
[   16.794872] initcall nf_conntrack_sip_init+0x0/0x215 returned 0 after 0 usecs
[   16.802064] calling  xt_init+0x0/0x118 @ 1
[   16.806226] initcall xt_init+0x0/0x118 returned 0 after 2 usecs
[   16.812204] calling  tcpudp_mt_init+0x0/0x17 @ 1
[   16.816884] initcall tcpudp_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.823384] calling  connsecmark_tg_init+0x0/0x12 @ 1
[   16.828497] initcall connsecmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.835431] calling  nflog_tg_init+0x0/0x12 @ 1
[   16.840024] initcall nflog_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.846436] calling  secmark_tg_init+0x0/0x12 @ 1
[   16.851204] initcall secmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.857788] calling  tcpmss_tg_init+0x0/0x17 @ 1
[   16.862468] initcall tcpmss_tg_init+0x0/0x17 returned 0 after 0 usecs
[   16.868969] calling  conntrack_mt_init+0x0/0x17 @ 1
[   16.873910] initcall conntrack_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.880667] calling  policy_mt_init+0x0/0x17 @ 1
[   16.885349] initcall policy_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.891849] calling  state_mt_init+0x0/0x12 @ 1
[   16.896442] initcall state_mt_init+0x0/0x12 returned 0 after 0 usecs
[   16.902856] calling  sysctl_ipv4_init+0x0/0x92 @ 1
[   16.907735] initcall sysctl_ipv4_init+0x0/0x92 returned 0 after 26 usecs
[   16.914468] calling  tunnel4_init+0x0/0x72 @ 1
[   16.918974] initcall tunnel4_init+0x0/0x72 returned 0 after 0 usecs
[   16.925300] calling  ipv4_netfilter_init+0x0/0x12 @ 1
[   16.930415] initcall ipv4_netfilter_init+0x0/0x12 returned 0 after 0 usecs
[   16.937348] calling  nf_conntrack_l3proto_ipv4_init+0x0/0x17c @ 1
[   16.943607] initcall nf_conntrack_l3proto_ipv4_init+0x0/0x17c returned 0 after 101 usecs
[   16.951682] calling  nf_defrag_init+0x0/0x17 @ 1
[   16.956362] initcall nf_defrag_init+0x0/0x17 returned 0 after 0 usecs
[   16.962861] calling  ip_tables_init+0x0/0xaa @ 1
[   16.967554] ip_tables: (C) 2000-2006 Netfilter Core Team
[   16.972915] initcall ip_tables_init+0x0/0xaa returned 0 after 5247 usecs
[   16.979675] calling  iptable_filter_init+0x0/0x51 @ 1
[   16.984810] initcall iptable_filter_init+0x0/0x51 returned 0 after 22 usecs
[   16.991808] calling  iptable_mangle_init+0x0/0x51 @ 1
[   16.996940] initcall iptable_mangle_init+0x0/0x51 returned 0 after 18 usecs
[   17.003940] calling  reject_tg_init+0x0/0x12 @ 1
[   17.008621] initcall reject_tg_init+0x0/0x12 returned 0 after 0 usecs
[   17.015120] calling  ulog_tg_init+0x0/0x85 @ 1
[   17.019643] initcall ulog_tg_init+0x0/0x85 returned 0 after 16 usecs
[   17.026041] calling  cubictcp_register+0x0/0x5c @ 1
[   17.030979] TCP: cubic registered
[   17.034359] initcall cubictcp_register+0x0/0x5c returned 0 after 3300 usecs
[   17.041379] calling  xfrm_user_init+0x0/0x4a @ 1
[   17.046059] Initializing XFRM netlink socket
[   17.050407] initcall xfrm_user_init+0x0/0x4a returned 0 after 4245 usecs
[   17.057153] calling  inet6_init+0x0/0x370 @ 1
[   17.061652] NET: Registered protocol family 10
[   17.066434] initcall inet6_init+0x0/0x370 returned 0 after 4746 usecs
[   17.072866] calling  ah6_init+0x0/0x79 @ 1
[   17.077027] initcall ah6_init+0x0/0x79 returned 0 after 0 usecs
[   17.083005] calling  esp6_init+0x0/0x79 @ 1
[   17.087253] initcall esp6_init+0x0/0x79 returned 0 after 0 usecs
[   17.093317] calling  xfrm6_transport_init+0x0/0x17 @ 1
[   17.098518] initcall xfrm6_transport_init+0x0/0x17 returned 0 after 0 usecs
[   17.105537] calling  xfrm6_mode_tunnel_init+0x0/0x17 @ 1
[   17.110910] initcall xfrm6_mode_tunnel_init+0x0/0x17 returned 0 after 0 usecs
[   17.118103] calling  xfrm6_beet_init+0x0/0x17 @ 1
[   17.122870] initcall xfrm6_beet_init+0x0/0x17 returned 0 after 0 usecs
[   17.129456] calling  ip6_tables_init+0x0/0xaa @ 1
[   17.134237] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   17.139684] initcall ip6_tables_init+0x0/0xaa returned 0 after 5332 usecs
[   17.146531] calling  ip6table_filter_init+0x0/0x51 @ 1
[   17.151826] initcall ip6table_filter_init+0x0/0x51 returned 0 after 93 usecs
[   17.158863] calling  ip6table_mangle_init+0x0/0x51 @ 1
[   17.164104] initcall ip6table_mangle_init+0x0/0x51 returned 0 after 40 usecs
[   17.171169] calling  nf_conntrack_l3proto_ipv6_init+0x0/0x154 @ 1
[   17.177331] initcall nf_conntrack_l3proto_ipv6_init+0x0/0x154 returned 0 after 8 usecs
[   17.185295] calling  nf_defrag_init+0x0/0x54 @ 1
[   17.189984] initcall nf_defrag_init+0x0/0x54 returned 0 after 9 usecs
[   17.196475] calling  ipv6header_mt6_init+0x0/0x12 @ 1
[   17.201587] initcall ipv6header_mt6_init+0x0/0x12 returned 0 after 0 usecs
[   17.208520] calling  reject_tg6_init+0x0/0x12 @ 1
[   17.213287] initcall reject_tg6_init+0x0/0x12 returned 0 after 0 usecs
[   17.219876] calling  sit_init+0x0/0xcf @ 1
[   17.224033] sit: IPv6 over IPv4 tunneling driver
[   17.229602] initcall sit_init+0x0/0xcf returned 0 after 5436 usecs
[   17.235772] calling  packet_init+0x0/0x47 @ 1
[   17.240190] NET: Registered protocol family 17
[   17.244704] initcall packet_init+0x0/0x47 returned 0 after 4407 usecs
[   17.251195] calling  br_init+0x0/0xa2 @ 1
[   17.255286] initcall br_init+0x0/0xa2 returned 0 after 17 usecs
[   17.261247] calling  init_rpcsec_gss+0x0/0x64 @ 1
[   17.266053] initcall init_rpcsec_gss+0x0/0x64 returned 0 after 38 usecs
[   17.272688] calling  dcbnl_init+0x0/0x4d @ 1
[   17.277020] initcall dcbnl_init+0x0/0x4d returned 0 after 0 usecs
[   17.283174] calling  init_dns_resolver+0x0/0xe1 @ 1
[   17.288125] Key type dns_resolver registered
[   17.292446] initcall init_dns_resolver+0x0/0xe1 returned 0 after 4231 usecs
[   17.299468] calling  mcheck_init_device+0x0/0x123 @ 1
[   17.304928] initcall mcheck_init_device+0x0/0x123 returned 0 after 340 usecs
[   17.311987] calling  tboot_late_init+0x0/0x243 @ 1
[   17.316817] initcall tboot_late_init+0x0/0x243 returned 0 after 0 usecs
[   17.323491] calling  mcheck_debugfs_init+0x0/0x3c @ 1
[   17.328618] initcall mcheck_debugfs_init+0x0/0x3c returned 0 after 13 usecs
[   17.335622] calling  severities_debugfs_init+0x0/0x3c @ 1
[   17.341089] initcall severities_debugfs_init+0x0/0x3c returned 0 after 5 usecs
[   17.348363] calling  threshold_init_device+0x0/0x50 @ 1
[   17.353651] initcall threshold_init_device+0x0/0x50 returned 0 after 1 usecs
[   17.360755] calling  hpet_insert_resource+0x0/0x23 @ 1
[   17.365956] initcall hpet_insert_resource+0x0/0x23 returned 0 after 0 usecs
[   17.372975] calling  update_mp_table+0x0/0x56d @ 1
[   17.377829] initcall update_mp_table+0x0/0x56d returned 0 after 0 usecs
[   17.384502] calling  lapic_insert_resource+0x0/0x3f @ 1
[   17.389788] initcall lapic_insert_resource+0x0/0x3f returned 0 after 0 usecs
[   17.396893] calling  io_apic_bug_finalize+0x0/0x1b @ 1
[   17.402094] initcall io_apic_bug_finalize+0x0/0x1b returned 0 after 0 usecs
[   17.409113] calling  print_ICs+0x0/0x456 @ 1
[   17.413447] initcall print_ICs+0x0/0x456 returned 0 after 0 usecs
[   17.419600] calling  check_early_ioremap_leak+0x0/0x65 @ 1
[   17.425148] initcall check_early_ioremap_leak+0x0/0x65 returned 0 after 0 usecs
[   17.432513] calling  pat_memtype_list_init+0x0/0x32 @ 1
[   17.437799] initcall pat_memtype_list_init+0x0/0x32 returned 0 after 0 usecs
[   17.444909] calling  init_oops_id+0x0/0x40 @ 1
[   17.449413] initcall init_oops_id+0x0/0x40 returned 0 after 1 usecs
[   17.455740] calling  pm_qos_power_init+0x0/0x7b @ 1
[   17.461016] initcall pm_qos_power_init+0x0/0x7b returned 0 after 330 usecs
[   17.467882] calling  pm_debugfs_init+0x0/0x24 @ 1
[   17.472655] initcall pm_debugfs_init+0x0/0x24 returned 0 after 6 usecs
[   17.479234] calling  printk_late_init+0x0/0x44 @ 1
[   17.484089] initcall printk_late_init+0x0/0x44 returned 0 after 0 usecs
[   17.490761] calling  tk_debug_sleep_time_init+0x0/0x3d @ 1
[   17.496311] initcall tk_debug_sleep_time_init+0x0/0x3d returned 0 after 5 usecs
[   17.503674] calling  debugfs_kprobe_init+0x0/0x90 @ 1
[   17.508803] initcall debugfs_kprobe_init+0x0/0x90 returned 0 after 16 usecs
[   17.515808] calling  taskstats_init+0x0/0x73 @ 1
[   17.520496] registered taskstats version 1
[   17.524646] initcall taskstats_init+0x0/0x73 returned 0 after 4062 usecs
[   17.531405] calling  clear_boot_tracer+0x0/0x2d @ 1
[   17.536347] initcall clear_boot_tracer+0x0/0x2d returned 0 after 0 usecs
[   17.543133] calling  kdb_ftrace_register+0x0/0x2f @ 1
[   17.548248] initcall kdb_ftrace_register+0x0/0x2f returned 0 after 1 use=
cs
[   17.555181] calling  max_swapfiles_check+0x0/0x8 @ 1
[   17.560205] initcall max_swapfiles_check+0x0/0x8 returned 0 after 0 usecs
[   17.567052] calling  set_recommended_min_free_kbytes+0x0/0xa0 @ 1
[   17.573206] initcall set_recommended_min_free_kbytes+0x0/0xa0 returned 0 after 0 usecs
[   17.581179] calling  kmemleak_late_init+0x0/0x93 @ 1
[   17.586234] kmemleak: Kernel memory leak detector initialized
[   17.592018] initcall kmemleak_late_init+0x0/0x93 returned 0 after 5676 usecs
[   17.599124] calling  init_root_keyring+0x0/0xb @ 1
[   17.603994] initcall init_root_keyring+0x0/0xb returned 0 after 19 usecs
[   17.610732] calling  fail_make_request_debugfs+0x0/0x2a @ 1
[   17.616405] initcall fail_make_request_debugfs+0x0/0x2a returned 0 after 38 usecs
[   17.623908] calling  prandom_reseed+0x0/0x47 @ 1
[   17.628589] initcall prandom_reseed+0x0/0x47 returned 0 after 2 usecs
[   17.635084] calling  pci_resource_alignment_sysfs_init+0x0/0x19 @ 1
[   17.641417] initcall pci_resource_alignment_sysfs_init+0x0/0x19 returned 0 after 5 usecs
[   17.649558] calling  pci_sysfs_init+0x0/0x51 @ 1
[   17.658402] initcall pci_sysfs_init+0x0/0x51 returned 0 after 4066 usecs
[   17.665089] calling  boot_wait_for_devices+0x0/_devices+0x0/0x30 returned 0 after 0 usecs
[   17.677482] calling  deferred_probe_initcall+0x0/0x70 @ 1
[   17.682970] kmemleak: Automatic memory scanning thread started
[   17.688949] initcall deferred_probe_initcall+0x0/0x70 returned 0 after 5859 usecs
[   17.696419] calling  late_resume_init+0x0/0x1d0 @ 1
[   17.701357]   Magic number: 14:268:431
[   17.705268] initcall late_resume_init+0x0/0x1d0 returned 0 after 3818 usecs
[   17.712217] calling  firmware_memmap_init+0x0/0x38 @ 1
[   17.717854] initcall firmware_memmap_init+0x0/0x38 returned 0 after 427 usecs
[   17.724973] calling  pci_mmcfg_late_insert_resources+0x0/0x50 @ 1
[   17.731127] initcall pci_mmcfg_late_insert_resources+0x0/0x50 returned 0 after 0 usecs
[   17.739098] calling  tcp_congestion_default+0x0/0x12 @ 1
[   17.744473] initcall tcp_congestion_default+0x0/0x12 returned 0 after 0 usecs
[   17.751666] calling  ip_auto_config+0x0/0xf1c @ 1
[   17.756437] initcall ip_auto_config+0x0/0xf1c returned 0 after 5 usecs
[   17.763019] calling  software_resume+0x0/0x290 @ 1
[   17.767870] PM: Hibernation image not present or could not be loaded.
[   17.774370] initcall software_resume+0x0/0x290 returned -2 after 6347 usecs
[   17.803642] async_waiting @ 1
[   17.806670] async_continuing @ 1 after 0 usec
[   17.811455] Freeing unused kernel memory: 1724K (ffffffff81cc1000 - ffffffff81e70000)
[   17.819273] Write protecting the kernel read-only data: 12288k
[   17.827733] Freeing unused kernel memory: 1244K (ffff8800016c9000 - ffff880001800000)
[   17.836066] Freeing unused kernel memory: 1912K (ffff880001a22000 - ffff880001c00000)
[   17.844303] usb 1-1: New USB device found, idVendor=8087, idProduct=8008
[   17.850993] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
init started: BusyBox v1.14.3 (2014-01-20 09:47:53 EST)
Mounting directories  [  OK  ]
[   17.874834] mount (1493) used greatest stack depth: 5032 bytes left
[   17.885031] hub 1-1:1.0: USB hub found
[   17.889174] hub 1-1:1.0: 6 ports detected
mount: mount point /proc/bus/usb does not exist
[   18.004919] calling  privcmd_init+0x0/0x1000 [xen_privcmd] @ 1522
[   18.igh-speed USB device number 2 using ehci-pci
[   18.018407] initcall privcmd_init+0x0/0x1000 [xen_privcmd] returned 0 after 7225 usecs
[   18.027061] calling  xenfs_init+0x0/0x1000 [xenfs] @ 1522
[   18.032455] initcall xenfs_init+0x0/0x1000 [xenfs] returned 0 after 1 usecs
mount: mount point /sys/kernel/config does not exist
[   18.058095] calling  xenkbd_init+0x0/0x1000 [xen_kbdfront] @ 1533
[   18.064182] initcall xenkbd_init+0x0/0x1000 [xen_kbdfrerting xen_kbdfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
[   18.089461] calling  xenfb_init+0x0/0x1000 [xen_fbfront] @ 1536
[   18.095373] initcall xenfb_init+0x0/0x1000 [xen_fbfront] returned -19 after 0 usecs
FATAL: Error inserting xen_fbfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/video/xen-fbfront.ko): No such device
[   18.116848] calling  netif_init+0x0/0x1000 [xen_netfront] @ 1543
[   18.122846] xen_netfront: Initialising Xen virtual ethernet driver
[   18.129212] initcall netif_init+0x0/0x1000 [xen_netfront] returned 0 after 6214 usecs
[   18.139187] calling  xlblk_init+0x0/0x1000 [xen_blkfront] @ 1546
[   18.145333] initcall xlblk_init+0x0/0x1000 [xen_blkfron
[   18.185401] hub 2-1:1.0: USB hub found
[   18.189869] hub 2-1:1.0: 8 ports detected
[   18.194764] ud
[   18.261447] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1560
[   18.268093] initcall acpi_cpufreq_init+0x0/0x100req] returned -19 after 40 usecs
[   18.283502] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1570
[   18.290114] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.312807] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1588
[   18.319424] initcall acpi_cpufreq_init+0x0/0x10033027] calling  acpi_video_init+0x0/0xfee [video] @ 1596
[   18.338782] initcall acpi_video_init+0x0/0xfee [video] returned 0 after 11 usecs
[   18.364612] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1564
[   18.371230] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 6 usecs
[   18.385187] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1572
[   18.391802] initcall acpi_cpufreq_init+0x0/0x100ling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1590
[   18.418534] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.468430] calling  init_scsi+0x0/0x91 [scsi_mod] @ 1781
[   18.478012] SCSI subsystem initialized
[   18.481761] initcall init_scsi+0x0/0x91 [scsi_mod] returned 0 after 7740 usecs
alling  igb_init_module+0x0/0x1000 [igb] @ 1790
[   18.497783] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
[   18.504795] igb: Copyright (c) 2007-2013 Intel Corporation.
[   18.510730] xen: registering gsi 17 triggering 0 polarity 1
[   18.516292] Already setup the GSI :17
[   18.521923] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1574
[   18.528538] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.538904] calling  sas_transport_init+0x0/0x1000 [scsi_transport_sas] @ 1781
[   18.563468] calling  drm_fb_helper_modini
[   18.581027] calling  e1000_init_module+0x0/0x1000 [e1000e] @ 1812
[   18.587120] e1000e: Intel(R) PRO/1000 Network Driver -upt Throttling Rate (ints/sec) set to dynamic conservative mode
[   18.632599] calling  fb_console_init+0x0/0x1000 [fbcon] @ 1825
[   18.638665] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1591
[   18.638675] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 7 usecs
[   18.639773] initcall drm_fb_helper_modinit+0x0/0x1000 [drm_kms_helper] returned 0 after 67545 usecs
[   18.676560] initcall fb_console_init+0x0/0x1000 [fbcon] returned 0 after 37245 usecs
[   18.689751] initcall sas_transport_init+0x0/0x1000 [scsi_transport_sas] returned 0 after 140267 usecs
[   18.731789] calling  i915_init+0x0/0x68 [i915] @ 1809
[   18.752037] calling  ata_init+0x0/0x4ce [libata] @ 1888
[   18.758928] calling  fusion_init+0x0/0x1000 [mptbase] @ 1781
[   18.773708] initcall fusion_init+0x0/0x1000 [mptbase] returned 0 after 8916 usecs
[   18.795463] libata version 3.00 loaded.
[   18.799303] initcall ata_init+0x0/0x4ce [libata] returned 0 after 41054 usecs
[   18.819464] calling  mptsas_init+0x0/0x1000 [mptsas] @ 1781
[   18.825031] Fusion MPT SAS Host driver 3.04.20
[   18.8298
[   18.855258] mptbase: ioc0: Initiating bringup
[   18.866860] igb 0000:02:00.0: added PHC on eth0
[   18.871386] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection
[   18.878317] igb 0000:02:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   18.885507] igb 0000:02:00.0: eth0: PBA No: Unknown
[   18.890446] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   18.898420] xen: registering gsi 18 triggering 0 polarity 1
[   18.903986] Already setup the GSI :18
udevd-work[1608]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: Permission denied

[   19.001838] e1000e 0000:00:19.0 eth1: registered PHC clock
[   19.007319] e1000e 0000:00:19.0 eth1: (PCI Express:2.5GT/s:Wi00:25:90:86:be:f0
[   19.015296] e1000e 0000:00:19.0 eth1: Intel(R) PRO/1000 Network Connection
[   19.022248] e1000e 0000:00:19.0 eth1: MAC: 11, PHY: 12, PBA No: 0100FF-0FF
[   19.029467] xen: registering gsi 16 triggering 0 polarity 1
[   19.035035] Already setup the GSI :16
[   19.038934] e1000e 0000:04:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   19.049535] xen: registering gsi 16 triggering 0 polarity 1
[   19.055104] Already setup the GSI :16
[   19.093810] [drm] Memory usable by graphics device = 2048M
[   19.131868] Failed to add WC MTRR for [00000000e0000000-00000000efffffff]; performance may suffer.
[   19.158279] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   19.165142] [drm] Driver supports precise vbl changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   19.192124] igb 0000:02:00.1: added PHC on eth2
[   19.196666] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connecti eth2: PBA No: Unknown
[   19.215720] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   19.223716] xen: registering gsi 19 triggering 0 polarity 1
[   19.229300] Already setup the GSI :19
(XEN) [2014-01-22 12:27:07] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 12:27:07] CPU:    0
(XEN0000000000000
(XEN) [2014-01-22 12:27:07] rdx: 00000000f1e80000   rsi: 0000000000000200   rdi: ffff82d080281f20
(XEN) [2014-01-22 12:27:07] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08   r8:  000000000000001c
(XEN) [2014-01-22 12:27:07] r9:  00000000ffffffff   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 12:27:07] r12: 0000000000000000   r13: ffff83023f65db70   r14: ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 12:27:07] cr3: 000000021db62000   cr2: 0000000000000004
(XEN) [2014-01-22 12:27:07] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 12:27:07] Xen stack trace from rsp=ffff82d0802cfc08:
(XEN) [2014-01-22 12:27:07]    000000050004fc38 ffff82d0802cfd88 00000072043a6340 80050070ffffffff
(XEN) [2014-01-22 12:27:07]    0000000000000000 0000000000000000 0000000000000005 0000000000000070
(XEN) [2014-01-22 12:27:07]    0000000500000000 0000000000000000 00000000f1e80000 ffff82d000000005
(XEN) [2014-01-22 12:27:07]    ffff82d000000003 80050070117fbb70 ffff82d0802cfe98 ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd88 ffff83023946e700 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd28 ffff82d080168987 0000000000000246 ffff82d0802cfcd8
(XEN) [2014-01-22 12:27:07]    ffff82d080129d68 0000000000000000 ffff82d0802cfd28 ffff82d0801473d9
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd18 ffff8302337fbb70 000000000000010c ffff830233748000
(XEN) [2014-01-22 12:27:07]    000000000000010c 0000000000000025 00000000ffffffed ffff830239402500
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfdc8 ffff82d08016c65c ffff83023f65db00 000000000000010c
(XEN) [2014-01-22 12:27:07]    000000000000010c ffff8302337480e0 ffff82d0802cfd98 ffff82d0801047ed
(XEN) [2014-01-22 12:27:07]    0000010c01402500 ffff82d0802cfe98 ffff8302337480e0 ffff83023946e700
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff83023f65db00 ffff82d0802cfdc8 ffff830233748000
(XEN) [2014-01-22 12:27:07]    00000000fffffffd 0000000000000000 ffff82d0802cfe98 ffff82d0802cfe70
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe48 ffff82d08017f104 ffff82d0802cff18 ffffffff8154ea06
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff8302337480b8 ffff82d00000010c ffff82d08018bcb0
(XEN) [2014-01-22 12:27:07]    000000250000f800 ffff82d0802cfe74 ffff820040005000 000000000000000d
(XEN) [2014-01-22 12:27:07]    ffff88006ca859b8 ffff8300b7313000 ffff88006c35cc00 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfef8 ffff82d08017f814 0000000000000000 0000000700000004
(XEN) [2014-01-22 12:27:07]    0000000000007ff0 ffffffffffffffff 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07] Xen call trace:
(XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
(XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
(XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 12:27:07]  L4[0x000] = 000000021db66067 000000000006cb75
(XEN) [2014-01-22 12:27:07]  L3[0x000] = 000000021db65067 000000000006cb76
(XEN) [2014-01-22 12:27:07]  L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] Panic on CPU 0:
(XEN) [2014-01-22 12:27:07] FATAL PAGE FAULT
(XEN) [2014-01-22 12:27:07] [error_code=0000]
(XEN) [2014-01-22 12:27:07] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] Manual reset required ('noreboot' specified)

--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--NzB8fVQJ5HfG6fxh--


From xen-devel-bounces@lists.xen.org Wed Jan 22 04:33:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 04:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5pUd-0007VX-Os; Wed, 22 Jan 2014 04:32:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5pUb-0007VS-JX
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 04:32:46 +0000
Received: from [193.109.254.147:25630] by server-12.bemta-14.messagelabs.com
	id D2/93-13681-CE94FD25; Wed, 22 Jan 2014 04:32:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390365161!12302396!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12079 invoked from network); 22 Jan 2014 04:32:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 04:32:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0M4VbWi020099
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 04:31:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M4VZDA019520
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 04:31:36 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M4VZme019513; Wed, 22 Jan 2014 04:31:35 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 20:31:34 -0800
Date: Tue, 21 Jan 2014 23:31:29 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140122043128.GA9931@konrad-lan.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="NzB8fVQJ5HfG6fxh"
Content-Disposition: inline
In-Reply-To: <1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Wed, Jan 22, 2014 at 12:24:11AM +0000, Andrew Cooper wrote:
> As of c/s 1035bb64fd7fd9f05c510466d98566fd82e37ad9
>   "PCI: break MSI-X data out of struct pci_dev_info"
> 
> pdev->msix is now conditional on whether the device actually has MSI-X
> capabilities or not, so validate it before blindly dereferencing what amounts
> to a guest-controlled parameter.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> ---
> 
> This has only been compile tested, but is quite obviously needed to prevent
> the NULL structure dereference.

And it does fix that particular problem. Now I have another crash.

See attached (and relevant part inlined).
..
[   19.223716] xen: registering gsi 19 triggering 0 polarity 1
[   19.229300] Already setup the GSI :19
(XEN) [2014-01-22 12:27:07] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 12:27:07] CPU:    0
(XEN0000000000000
(XEN) [2014-01-22 12:27:07] rdx: 00000000f1e80000   rsi: 0000000000000200   rdi: ffff82d080281f20
(XEN) [2014-01-22 12:27:07] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08   r8:  000000000000001c
(XEN) [2014-01-22 12:27:07] r9:  00000000ffffffff   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 12:27:07] r12: 0000000000000000   r13: ffff83023f65db70   r14: ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 12:27:07] cr3: 000000021db62000   cr2: 0000000000000004
(XEN) [2014-01-22 12:27:07] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 12:27:07] Xen stack trace from rsp=ffff82d0802cfc08:
(XEN) [2014-01-22 12:27:07]    000000050004fc38 ffff82d0802cfd88 00000072043a6340 80050070ffffffff
(XEN) [2014-01-22 12:27:07]    0000000000000000 0000000000000000 0000000000000005 0000000000000070
(XEN) [2014-01-22 12:27:07]    0000000500000000 0000000000000000 00000000f1e80000 ffff82d000000005
(XEN) [2014-01-22 12:27:07]    ffff82d000000003 80050070117fbb70 ffff82d0802cfe98 ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd88 ffff83023946e700 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd28 ffff82d080168987 0000000000000246 ffff82d0802cfcd8
(XEN) [2014-01-22 12:27:07]    ffff82d080129d68 0000000000000000 ffff82d0802cfd28 ffff82d0801473d9
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd18 ffff8302337fbb70 000000000000010c ffff830233748000
(XEN) [2014-01-22 12:27:07]    000000000000010c 0000000000000025 00000000ffffffed ffff830239402500
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfdc8 ffff82d08016c65c ffff83023f65db00 000000000000010c
(XEN) [2014-01-22 12:27:07]    000000000000010c ffff8302337480e0 ffff82d0802cfd98 ffff82d0801047ed
(XEN) [2014-01-22 12:27:07]    0000010c01402500 ffff82d0802cfe98 ffff8302337480e0 ffff83023946e700
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff83023f65db00 ffff82d0802cfdc8 ffff830233748000
(XEN) [2014-01-22 12:27:07]    00000000fffffffd 0000000000000000 ffff82d0802cfe98 ffff82d0802cfe70
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe48 ffff82d08017f104 ffff82d0802cff18 ffffffff8154ea06
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff8302337480b8 ffff82d00000010c ffff82d08018bcb0
(XEN) [2014-01-22 12:27:07]    000000250000f800 ffff82d0802cfe74 ffff820040005000 000000000000000d
(XEN) [2014-01-22 12:27:07]    ffff88006ca859b8 ffff8300b7313000 ffff88006c35cc00 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfef8 ffff82d08017f814 0000000000000000 0000000700000004
(XEN) [2014-01-22 12:27:07]    0000000000007ff0 ffffffffffffffff 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07] Xen call trace:
(XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
(XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
(XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 12:27:07]  L4[0x000] = 000000021db66067 000000000006cb75
(XEN) [2014-01-22 12:27:07]  L3[0x000] = 000000021db65067 000000000006cb76
(XEN) [2014-01-22 12:27:07]  L2[0x000] = 0000000000000000 ffffffffffffffff 
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] Panic on CPU 0:
(XEN) [2014-01-22 12:27:07] FATAL PAGE FAULT
(XEN) [2014-01-22 12:27:07] [error_code=0000]
(XEN) [2014-01-22 12:27:07] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] 
(XEN) [2014-01-22 12:27:07] Manual reset required ('noreboot' specified)

--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="tst035-4.4-pci_prepare_msix-patch.txt"
Content-Transfer-Encoding: quoted-printable

Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 21 23:12:04 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.091 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) [2014-01-22 12:26:51] Platform timer is 14.318MHz HPET
(XEN) [2014-01-22 12:26:51] Allocated console ring of 1048576 KiB. lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-22 12:26:51] VMX: Supported advanced features:
(XEN) [2014-01-22 12:26:51]  - APIC MMIO access virtualisation
(XEN) [2014-01-22 12:26:51]  - APIC TPR shadow
(XEN) [2014-01-22 12:26:51]  - Extended Page Tables (EPT)
(XEN) [2014-01-22 12:26:51]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-22 12:26:51]  - Virtual NMI
(XEN) [2014-01-22 12:26:51]  - MSR direct-access bitmap
(XEN) [2014-01-22 12:26:51]  - Unrestricted Guest
(XEN) [2014-01-22 12:26:51]  - VMCS shadowing
(XEN) [2014-01-22 12:26:51] HVM: ASIDs enabled.
(XEN) [2014-01-22 12:26:51] HVM: VMX enabled
(XEN) [2014-01-22 12:26:51] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-22 12:26:51] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-22 12:26:51] Brought up 8 CPUs
(XEN) [2014-01-22 12:26:51] ACPI sleep modes: S3
(XEN) [2014-01-22 12:26:51] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-22 12:26:51] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa22000
(XEN) [2014-01-22 12:26:51] elf_parse_binary
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: ENTRY = 0xffffffff81cd61e0
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-22 12:26:51] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-22 12:26:51] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-22 12:26:51]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51]     elf_paddr_offset = 0x0
(XEN) [2014-01-22 12:26:51]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-22 12:26:51]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-22 12:26:51]     virt_kend        = 0xffffffff823f4000
(XEN) [2014-01-22 12:26:51]     virt_entry       = 0xffffffff81cd61e0
(XEN) [2014-01-22 12:26:51]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-22 12:26:51]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-22 12:26:51]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f4000
(XEN) [2014-01-22 12:26:51] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 12:26:51]  Dom0 alloc.:   000000022c000000->0000000230000000 (487000 pages to be allocated)
(XEN) [2014-01-22 12:26:51]  Init. ramdisk: 000000023abdf000->000000023fd86e64
(XEN) [2014-01-22 12:26:51] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-22 12:26:51]  Loaded kernel: ffffffff81000000->ffffffff823f4000
(XEN) [2014-01-22 12:26:51]  Init. ramdisk: ffffffff823f4000->ffffffff8759be64
(XEN) [2014-01-22 12:26:51]  Phys-Mach map: ffffffff8759c000->ffffffff8799c000
(XEN) [2014-01-22 12:26:51]  Start info:    ffffffff8799c000->ffffffff8799c4b4
(XEN) [2014-01-22 12:26:51]  Page tables:   ffffffff8799d000->ffffffff879de000
(XEN) [2014-01-22 12:26:51]  Boot stack:    ffffffff879de000->ffffffff879df000
(XEN) [2014-01-22 12:26:51]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-22 12:26:51]  ENTRY ADDRESS: ffffffff81cd61e0
(XEN) [2014-01-22 12:26:51] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a22000
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc00f0
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 2 at 0xffffffff81cc1000 -> 0xffffffff81cd5d80
(XEN) [2014-01-22 12:26:51] elf_load_binary: phdr 3 at 0xffffffff81cd6000 -> 0xffffffff81e78000
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:01:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-22 12:26:52] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000203000
(XEN) [2014-01-22 12:26:52] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-22 12:26:53] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-22 12:26:53] Std. Loglevel: All
(XEN) [2014-01-22 12:26:53] Guest Loglevel: All
(XEN) [2014-01-22 12:26:53] **********************************************
(XEN) [2014-01-22 12:26:53] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-22 12:26:53] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-22 12:26:53] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-22 12:26:53] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-22 12:26:53] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-22 12:26:53] **********************************************
(XEN) [2014-01-22 12:26:53] 3... 2... 1...
(XEN) [2014-01-22 12:26:56] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-02502-gec513b1 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 21 12:31:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xxxx:xxx:xx:) xen-acpi-processor.off=1
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fec000, 0x01fecfff] PGTABLE
[    0.000000] BRK [0x01fed000, 0x01fedfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01fee000, 0x01feefff] PGTABLE
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f4000-0x0759bfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    5.252759] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.252760] Policy zone: DMA32
[    5.252762] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 kgdboc=hvc0 xen-pciback.hide=(xxxx:xxx:xx:) xen-acpi-processor.off=1
[    5.253069] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.253099] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.273555] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.276649] Memory: 1891276K/2097148K available (6935K kernel code, 766K rwdata, 2184K rodata, 1724K init, 1380K bss, 205872K reserved)
[    5.276873] Hierarchical RCU implementation.
[    5.276874] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.276874] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.276882] NR_IRQS:33024 nr_irqs:256 16
[    5.276961] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.276962] xen: registering gsi 9 triggering 0 polarity 0
[    5.276973] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.276996] xen: acpi sci 9
[    5.276999] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.277002] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.277004] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.277007] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.277009] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.277012] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.277014] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.277017] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.277019] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.277022] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.277024] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.277027] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.277029] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.277031] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.278594] Console: colour VGA+ 80x25
[    6.234540] console [hvc0] enabled
[    6.238477] Xen: using vcpuop timer interface
[    6.242826] installing Xen timer for CPU 0
[    6.247007] tsc: Detected 3400.090 MHz processor
[    6.251691] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.18 BogoMIPS (lpj=3400090)
[    6.262327] pid_max: default: 32768 minimum: 301
[    6.267158] Security Framework initialized
[    6.271247] SELinux:  Initializing.
[    6.274819] SELinux:  Starting in permissive mode
[    6.279892] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.287341] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.294505] Mount-cache hash table entries: 256
[    6.299474] Initializing cgroup subsys freezer
[    6.303977] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.303977] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.317082] CPU: Physical Processor ID: 0
[    6.321154] CPU: Processor Core ID: 0
[    6.325579] mce: CPU supports 2 MCE banks
[    6.329589] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.329589] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.329589] tlb_flushall_shift: 6
[    6.366666] Freeing SMP alternatives memory: 28K (ffffffff81e70000 - ffffffff81e77000)
[    6.375309] ACPI: Core revision 20131115
[    6.428321] ACPI: All ACPI Tables successfully acquired
[    6.435084] cpu 0 spinlock event irq 41
[    6.438963] callingks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.457544] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.463184] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.470624] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.477037] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.485272] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.490905] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.498357] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.504075] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.511616] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.516816] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.525657] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.532964] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.539377] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.547611] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.553079] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.560204] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.564997] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.571555] calling  init_workqueues+0x0/0x59a @ 1
[    6.576564] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.583161] calling  migration_init+0x0/0x71 @ 1
[    6.587839] initcall migration_init+0x0/0x71 returned 0 after 0 usecs
[    6.594339] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    6.599541] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    6.606558] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    6.612451] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    6.620165] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    6.625404] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    6.632391] calling  cpu_stop_init+0x0/0x76 @ 1
[    6.637002] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    6.643391] calling  relay_init+0x0/0x14 @ 1
[    6.647724] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    6.653878] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    6.659185] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    6.666270] calling  init_events+0x0/0x61 @ 1
[    6.670692] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    6.676930] calling  init_trace_printk+0x0/0x12 @ 1
[    6.681871] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    6.688630] calling  event_trace_memsetup+0x0/0x52 @ 1
[    6.693849] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    6.700849] calling  jump_label_init_module+0x0/0x12 @ 1
[    6.706222] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    6.713417] calling  balloon_clear+0x0/0x4f @ 1
[    6.718009] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    6.724423] calling  rand_initialize+0x0/0x30 @ 1
[    6.729212] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    6.735775] calling  mce_amd_init+0x0/0x165 @ 1
[    6.740369] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    6.746806] x86: Booted up 1 node, 1 CPUs
[    6.751557] NMI watchdog: disabled (cpu0): hardware events not enabled
[    6.758203] devtmpfs: initialized
[    6.764080] calling  ipc_ns_init+0x0/0x14 @ 1
[    6.768431] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    6.774671] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    6.779697] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    6.786544] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    6.793218] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    6.801709] calling  net_ns_init+0x0/0x104 @ 1
[    6.806273] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    6.812556] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    6.817744] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    6.825638] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    6.833801] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    6.841003] calling  cpufreq_tsc+0x0/0x37 @ 1
[    6.845423] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    6.851664] calling  reboot_init+0x0/0x1d @ 1
[    6.856086] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    6.862323] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    6.867176] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    6.873849] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    6.879396] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    6.886761] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    6.891615] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    6.898288] calling  wq_sysfs_init+0x0/0x14 @ 1
[    6.902983] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    6.909731] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    6.916256] calling  ksysfs_init+0x0/0x94 @ 1
[    6.920719] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    6.926914] calling  pm_init+0x0/0x4e @ 1
[    6.931026] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    6.936879] calling  pm_disk_init+0x0/0x19 @ 1
[    6.941401] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    6.947714] calling  swsusp_header_init+0x0/0x30 @ 1
[    6.952739] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    6.959586] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    6.965131] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    6.972499] calling  cgroup_wq_init+0x0/0x32 @ 1
[    6.977184] initcall cgroup_wq_init+0x0/0x32 returned 0 after 0 usecs
[    6.983678] calling  event_trace_enable+0x0/0x173 @ 1
[    6.989266] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    6.996126] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.000717] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.007130] calling  fsnotify_init+0x0/0x26 @ 1
[    7.011725] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.018136] calling  filelock_init+0x0/0x84 @ 1
[    7.022741] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.029143] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.033997] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.040669] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.045696] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.052542] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.057308] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.063895] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.069268] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.076462] calling  debugfs_init+0x0/0x5c @ 1
[    7.080978] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.087295] calling  securityfs_init+0x0/0x53 @ 1
[    7.092073] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.098648] calling  prandom_init+0x0/0xe2 @ 1
[    7.103155] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.109483] calling  virtio_init+0x0/0x30 @ 1
[    7.114003] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.120168] calling  __gnttab_init+0x0/0x30 @ 1
[    7.124764] xen:grant_table: Grant tables using version 2 layout
[    7.130846] Grant table initialized
[    7.134380] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.141054] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.146106] RTC time: 12:26:57, date: 01/22/14
[    7.150586] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.157606] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.162545] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.169479] calling  cpuidle_init+0x0/0x40 @ 1
[    7.173985] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.180484] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.185426] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.192185] calling  sock_init+0x0/0x8b @ 1
[    7.196534] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.202524] calling  net_inuse_init+0x0/0x26 @ 1
[    7.207206] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.213702] calling  netpoll_init+0x0/0x31 @ 1
[    7.218209] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.224536] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.229690] NET: Registered protocol family 16
[    7.234180] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.241276] calling  bdi_class_init+0x0/0x4d @ 1
[    7.246061] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.252491] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.257616] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.264533] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.269536] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.276232] calling  pci_driver_init+0x0/0x12 @ 1
[    7.281094] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.287611] calling  backlight_class_init+0x0/0x85 @ 1
[    7.292870] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.299834] calling  video_output_class_init+0x0/0x19 @ 1
[    7.305358] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.312572] calling  xenbus_init+0x0/0x26f @ 1
[    7.317169] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.323424] calling  tty_class_init+0x0/0x38 @ 1
[    7.328170] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.334603] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.339972] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.346917] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.352729] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.360351] calling  register_node_type+0x0/0x34 @ 1
[    7.365509] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.372284] calling  i2c_init+0x0/0x70 @ 1
[    7.376611] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.382518] calling  init_ladder+0x0/0x12 @ 1
[    7.386937] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.393349] calling  init_menu+0x0/0x12 @ 1
[    7.397597] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.403836] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.408863] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.415722] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.421274] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.428623] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.433765] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.440668] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.445175] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.451675] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.456444] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.463028] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.468229] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.475247] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.479841] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.487467] ACPI: bus type PCI registered
[    7.491540] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.498040] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.504715] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.509342] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.516049] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.522536] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.527922] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.535128] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.540675] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.548039] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.553674] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.561126] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.566566] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.572910] calling  dmi_id_init+0x0/0x31d @ 1
[    7.577662] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.583920] calling  dca_init+0x0/0x20 @ 1
[    7.588078] dca service started, version 1.12.1
[    7.592730] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    7.598826] calling  iommu_init+0x0/0x58 @ 1
[    7.603167] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    7.609311] calling  pci_arch_init+0x0/0x69 @ 1
[    7.613921] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    7.623264] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    7.637956] PCI: Using configuration type 1 for base access
[    7.643520] initcall pci_arch_init+0x0/0x69 returned 0 after0x98 @ 1
[    7.655210] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    7.661576] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    7.666673] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    7.673600] calling  init_vdso+0x0/0x135 @ 1
[    7.677936] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    7.684086] calling  sysenter_setup+0x0/0x2dd @ 1
[    7.688853] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    7.695440] calling  param_sysfs_init+0x0/0x194 @ 1
[    7.716547] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    7.723582] calling  pm_sysrq_init+0x0/0x19 usecs
[    7.734599] calling  default_bdi_init+0x0/0x65 @ 1
[    7.739755] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    7.746361] calling  init_bio+0x0/0xe9 @ 1
[    7.750573] bio: create slab <bio-0> at 0
[    7.754640] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    7.760743] calling  fsnotify_notification_init+0x0/0x8b @ 1
[    7.766485] initcall fsnotify_notification_init+0x0/0x8b returned 0 after 0 usecs
[    7.774002] calling  cryptomgr_init+0x0/0x12 @ 1
[    7.778680] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    7.785182] calling  blk_settings_init+0x0/0x2c @ 1
[    7.790118] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    7.796882] calling  blk_ioc_init+0x0/0x2a @ 1
[    7.801397] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    7.807712] calling  blk_softirq_init+0x0/0x6e @ 1
[    7.812566] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    7.819236] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    7.824090] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    7.830763] calling  blk_mq_init+0x0/0x5f @ 1
[    7.835185] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    7.841423] calling  genhd_device_init+0x0/0x85 @ 1
[    7.846491] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    7.853180] calling  pci_slot_init+0x0/0x50 @ 1
[    7.857779] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    7.864182] calling  fbmem_init+0x0/0x98 @ 1
[    7.868587] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    7.874670] calling  acpi_init+0x0/0x27a @ 1
[    7.879029] ACPI: Added _OSI(Module Device)
[    7.883249] ACPI: Added _OSI(Processor Device)
[    7.887754] ACPI: Added _OSI(3.0 _SCP Extensions)
[    7.892520] ACPI: Added _OSI(Processor Aggregator Device)
[    7.901728] ACPI: Executed 1 blocks of module-level executable AML code
[    7.933422] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    7.941276] \_SB_:_OSC invalid UUID
[    7.944756] _OSC request data:1 1f
[    7.950396] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    7.959619] ACPI: Dynamic OEM Table Load:
[    7.963617] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    7.973445] ACPI: Interpreter enabled
[    7.977114] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    7.986373] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    7.995657] ACPI: (supports S0 S1 S4 S5)
[    7.999629] ACPI: Using IOAPIC for interrupt routing
[    8.004798] kworker/u2:0 (275) used greatest stack depth: 5560 bytes left
[    8.011817] HEST: Table parsing has been initialized.
[    8.016865] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.027251] ACPI: No dock devices found.
[    8.128343] ACPI: Power Resource [FN00] (off)
[    8.133497] ACPI: Power Resource [FN01] (off)
[    8.138652] ACPI: Power Resource [FN02] (off)
[    8.143782] ACPI: Power Resource [FN03] (off)
[    8.148928] ACPI: Power Resource [FN04] (off)
[    8.158651] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.164830] acpi PNP0A08:00: _OSC: OS supports [Exten
[    8.175631] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.184625] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.198162] PCI host bridge to bus 0000:00
[    8.202255] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.207800] p0]
[    8.214040] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    8.220285] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.227218] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.234144] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.241078] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.248012] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.254944] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.261890] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:00.0
[    8.273467] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.279626] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.286243] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:01.0
[    8.297087] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.303151] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:01.1
[    8.314817] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.320828] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.327664] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.334940] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:02.0
[    8.346102] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.352124] pci 0000:00:03.0: reg 0x10: [mem 0xf1534000-0xf1537fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:03.0
[    8.364539] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.370600] pci 0000:00:14.0: reg 0x10: [mem 0xf1520000-0xf152ffff 64bit]
[    8.377531] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.383760] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:14.0
[    8.394625] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.400664] pci 0000:00:16.0: reg 0x10: [mem 0xf153f000-0xf153f00f 64bit]
[    8.407606] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:16.0
[    8.419245] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.425283] pci 0000:00:19.0: reg 0x10: [mem 0xf1500000-0xf151ffff]
[    8.431579] pci 0000:00:19.0: reg 0x14: [mem 0xf153d000-0xf153dfff]
[    8.437906] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.443665] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.450153] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:19.0
[    8.461004] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.467045] pci 0000:00:1a.0: reg 0x10: [mem 0xf153c000-0xf153c3ff]
[    8.473500] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.480077] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1a.0
[    8.490933] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.496967] pci 0000:00:1b.0: reg 0x10: [mem 0xf1530000-0xf1533fff 64bit]
[    8.503930] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.510420] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1b.0
[    8.521266] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    8.527429] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    8.533946] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.0
[    8.544804] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    8.550969] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    8.557461] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.3
[    8.568309] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    8.574473] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    8.580962] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.5
[    8.591815] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    8.597975] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    8.604467] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.6
[    8.615316] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    8.621478] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    8.627968] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1c.7
[    8.638828] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    8.644866] pci 0000:00:1d.0: reg 0x10: [mem 0xf153b000-0xf153b3ff]
[    8.651319] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    8.657897] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1d.0
[    8.668746] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.0
[    8.680429] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    8.686466] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    8.692068] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    8.697700] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    8.703334] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    8.708967] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    8.714599] pci 0000:00:1f.2: reg 0x24: [mem 0xf153a000-0xf153a7ff]
[    8.721009] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.2
[    8.731776] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    8.737805] pci 0000:00:1f.3: reg 0x10: [mem 0xf1539000-0xf15390ff 64bit]
[    8.744660] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.3
[    8.755797] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    8.761831] pci 0000:00:1f.6: reg 0x10: [mem 0xf1538000-0xf1538fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:00:1f.6
[    8.774533] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    8.785326] pci 0000:01:00.0: [1000:0056] type 00 class 0x010000
[    8.791375] pci 0000:01:00.0: reg 0x10: [io  0xe000-0xe0ff]
[    8.797009] pci 0000:01:00.0: reg 0x14: [mem 0xf1810000-0xf1813fff 64bit]
[    8.803856] pci 0000:01:00.0: reg 0x1c: [mem 0xf1800000-0xf180ffff 64bit]
[    8.810705] pci 0000:01:00.0: reg 0x30: [mem 0xf1600000-0xf17fffff pref]
[    8.817503] pci 0000:01:00.0: supports D1 D2
[    8.821894] pci 0000:01:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:26:59] PCI add device 0000:01:00.0
[    8.834871] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    8.840088] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[    8.846240] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[    8.853088] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    8.859957] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    8.870755] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    8.876803] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    8.883124] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    8.889450] pci 0000:02:00.0: reg 0x18: [io  0xd020-0xd03f]
[    8.895082] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    8.901429] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    8.908219] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    8.914348] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    8.921261] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:02:00.0
[    8.933463] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    8.939472] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    8.945790] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    8.952116] pci 0000:02:00.1: reg 0x18: [io  0xd000-0xd01f]
[    8.957751] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    8.964096] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    8.970887] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    8.977012] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    8.983928] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-22 12:26:59] PCI add device 0000:02:00.1
[    8.998223] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.003443] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[    9.009593] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.016441] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.023476] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.034297] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.040342] pci 0000:04:00.0: reg 0x10: [mem 0xf1fa0000-0xf1fbffff]
[    9.046656] pci 0000:04:00.0: reg 0x14: [mem 0xf1f80000-0xf1f9ffff]
[    9.052984] pci 0000:04:00.0: reg 0x18: [io  0xc020-0xc03f]
[    9.058699] pci 0000:04:00.0: reg 0x30: [mem 0xf1f60000-0xf1f7ffff pref]
[    9.065528] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.071756] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:04:00.0
[    9.082672] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.088703] pci 0000:04:00.1: reg 0x10: [mem 0xf1f40000-0xf1f5ffff]
[    9.095016] pci 0000:04:00.1: reg 0x14: [mem 0xf1f20000-0xf1f3ffff]
[    9.101342] pci 0000:04:00.1: reg 0x18: [io  0xc000-0xc01f]
[    9.107059] pci 0000:04:00.1: reg 0x30: [mem 0xf1f00000-0xf1f1ffff pref]
[    9.113887] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:04:00.1
[    9.133944] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.139162] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[    9.145312] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[    9.152163] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.159193] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.170029] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.176059] pci 0000:05:00.0: reg 0x10: [mem 0xf1e00000-0xf1e7ffff]
[    9.182396] pci 0000:05:00.0: reg 0x18: [io  0xb000-0xb01f]
[    9.188009] pci 0000:05:00.0: reg 0x1c: [mem 0xf1e80000-0xf1e83fff]
[    9.194511] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.200747] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:05:00.0
[    9.213751] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.218970] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[    9.225122] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[    9.231972] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.239044] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.249869] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.256109] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.262885] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:06:00.0
[    9.273779] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.279008] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.285869] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.294363] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.300556] pci 0000:07:01.0: supports D1 D2
[    9.304815] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-22 12:27:00] PCI add device 0000:07:01.0
[    9.316641] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.322673] pci 0000:07:03.0: reg 0x10: [mem 0xf1b04000-0xf1b047ff]
[    9.328981] pci 0000:07:03.0: reg 0x14: [mem 0xf1b00000-0xf1b03fff]
[    9.335465] pci 0000:07:03.0: supports D1 D2
[    9.339721] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:07:03.0
[    9.357637] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.364691] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[    9.371533] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.380103] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff] (subtractive decode)
[    9.388768] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.397348] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.405930] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.414328] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.420377] pci 0000:08:08.0: reg 0x10: [mem 0xf1a07000-0xf1a07fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:08.0
[    9.439053] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.445104] pci 0000:08:08.1: reg 0x10: [mem 0xf1a06000-0xf1a06fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:08.1
[    9.463802] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.469855] pci 0000:08:09.0: reg 0x10: [mem 0xf1a05000-0xf1a05fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:09.0
[    9.488550] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.494604] pci 0000:08:09.1: reg 0x10: [mem 0xf1a04000-0xf1a04fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:09.1
[    9.513314] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.519367] pci 0000:08:0a.0: reg 0x10: [mem 0xf1a03000-0xf1a03fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0a.0
[    9.538078] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.544126] pci 0000:08:0a.1: reg 0x10: [mem 0xf1a02000-0xf1a02fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0a.1
[    9.562835] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.568889] pci 0000:08:0b.0: reg 0x10: [mem 0xf1a01000-0xf1a01fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0b.0
[    9.587570] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.593620] pci 0000:08:0b.1: reg 0x10: [mem 0xf1a00000-0xf1a00fff pref]
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-22 12:27:00] PCI add device 0000:08:0b.1
[    9.612351] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[    9.617577] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.624416] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[    9.631087] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[    9.637761] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[    9.644800] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.655693] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[    9.661796] pci 0000:09:00.0: reg 0x10: [mem 0xf1d00000-0xf1d01fff 64bit]
[    9.668966] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[    9.675254] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] PCI add device 0000:09:00.0
[    9.688355] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[    9.693576] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[    9.700421] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[    9.707452] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.718252] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[    9.724303] pci 0000:0a:00.0: reg 0x10: [io  0xa050-0xa057]
[    9.729929] pci 0000:0a:00.0: reg 0x14: [io  0xa040-0xa043]
[    9.735561] pci 0000:0a:00.0: reg 0x18: [io  0xa030-0xa037]
[    9.741195] pci 0000:0a:00.0: reg 0x1c: [io  0xa020-0xa023]
[    9.746828] pci 0000:0a:00.0: reg 0x20: [io  0xa000-0xa01f]
[    9.752462] pci 0000:0a:00.0: reg 0x24: [mem 0xf1c00000-0xf1c001ff]
[    9.758998] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-22 12:27:00] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-22 12:27:00] PCI add device 0000:0a:00.0
[    9.778514] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[    9.783738] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[    9.789890] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[    9.796738] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[    9.803504] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[    9.815255] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[    9.822567] ACPI: PCI Interrupt Link [LNKB] ( PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[    9.860288] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[    9.867592] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[    9.876007] ACPI: Enabled 4 GPEs in block 00 to 3F
[    9.880798] ACPI: \_SB_.PCI0: notify handler is installed
[    9.886281] Found 1 acpi root devices
[    9.890083] initcall acpi_init+0x0/0x27a returned 0 after 454101 usecs
[    9.896596] calling  pnp_init+0x0/0x12 @ 1
[    9.900850] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[    9.906761] calling  balloon_init+0x0/0x242 @ 1
[    9.911353] xen:balloon: Initialising balloon driver
[    9.916382] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[    9.922967] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[    9.928512] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[    9.935879] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[    9.941606] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after=
 0 usecs
[    9.948984] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[    9.954823] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 afte=
r 0 usecs
[    9.962288] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[    9.967303] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[    9.973998] calling  balloon_init+0x0/0xfa @ 1
[    9.978500] xen_balloon: Initialising balloon driver
[    9.983916] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[    9.990348] calling  misc_init+0x0/0xba @ 1
[    9.994667] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.000662] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.005915] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.013993] vgaarb: loaded
[   10.016763] vgaarb: bridge control possible 0000:00:02.0
[   10.022138] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.029330] calling  cn_init+0x0/0xc0 @ 1
[   10.033421] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.039298] calling  dma_buf_init+0x0/0x75 @ 1
[   10.043814] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.050131] calling  phy_init+0x0/0x2e @ 1
[   10.054515] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.060425] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.065159] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.071604] calling  usb_init+0x0/0x169 @ 1
[   10.075865] ACPI: bus type USB registered
[   10.080123] usbcore: registered new interface driver usbfs
[   10.085703] usbcore: registered new interface driver hub
[   10.091097] usbcore: registered new device driver usb
[   10.096143] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.102467] calling  serio_init+0x0/0x31 @ 1
[   10.106920] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.113005] calling  input_init+0x0/0x103 @ 1
[   10.117494] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.123667] calling  rtc_init+0x0/0x5b @ 1
[   10.127898] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.133807] calling  pps_init+0x0/0xb7 @ 1
[   10.138028] pps_core: LinuxPPS API ver. 1 registered
[   10.142992] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.152177] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.158417] calling  ptp_init+0x0/0xa4 @ 1
[   10.162635] PTP clock support registered
[   10.166563] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.172716] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.178233] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.185458] calling  hwmon_init+0x0/0xe3 @ 1
[   10.189853] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.195945] calling  leds_init+0x0/0x40 @ 1
[   10.200249] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.206259] calling  efisubsys_init+0x0/0x142 @ 1
[   10.211023] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.217609] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.222373] PCI: Using ACPI for IRQ routing
[   10.230060] PCI: pci_cache_line_size set to 64 bytes
[   10.235218] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.254146] calling  proto_init+0x0/0x12 @ 1
[   10.258465] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.264621] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.269837] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.276177] calling  neigh_init+0x0/0x80 @ 1
[   10.280508] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.286662] calling  fib_rules_init+0x0/0xaf @ 1
[   10.291341] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.297840] calling  pktsched_init+0x0/0x10a @ 1
[   10.302526] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.309020] calling  tc_filter_init+0x0/0x55 @ 1
[   10.313702] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.320201] calling  tc_action_init+0x0/0x55 @ 1
[   10.324880] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.331382] calling  genl_init+0x0/0x85 @ 1
[   10.335642] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.341693] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.346287] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.352700] calling  netlbl_init+0x0/0x81 @ 1
[   10.357120] NetLabel: Initializing
[   10.360587] NetLabel:  domain hash size = 128
[   10.365007] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.370070] NetLabel:  unlabeled traffic allowed by default
[   10.375667] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.382164] calling  rfkill_init+0x0/0x79 @ 1
[   10.386761] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.392935] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.397525] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.403954] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.408719] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.415288] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.420623] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.427683] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.432800] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.439727] calling  hpet_late_init+0x0/0x101 @ 1
[   10.444496] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.451253] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.455763] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.462087] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.467641] Switched to clocksource xen
[   10.471540] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.479163] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.484649] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 279 usecs
[   10.491773] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.498103] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.506243] calling  event_trace_init+0x0/0x205 @ 1
[   10.525510] initcall event_trace_init+0x0/0x205 returned 0 after 13987 usecs
[   10.532548] calling  init_kprobe_trace+0x0/0x93 @ 1
[   10.537525] initcall init_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.544362] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.548903] initcall init_pipe_fs+0x0/0x4c returned 0 after 36 usecs
[   10.555280] calling  eventpoll_init+0x0/0xda @ 1
[   10.559987] initcall eventpoll_init+0x0/0xda returned 0 after 26 usecs
[   10.566546] calling  anon_inode_init+0x0/0x5b @ 1
[   10.571350] initcall anon_inode_init+0x0/0x5b returned 0 after 37 usecs
[   10.577986] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   10.583186] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   10.590207] calling  acpi_event_init+0x0/0x3a @ 1
[   10.594991] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   10.601644] calling  pnp_system_init+0x0/0x12 @ 1
[   10.606507] initcall pnp_system_init+0x0/0x12 returned 0 after 91 usecs
[   10.613123] calling  pnpacpi_init+0x0/0x8c @ 1
[   10.617617] pnp: PnP ACPI init
[   10.620760] ACPI: bus type PNP registered
[   10.625137] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   10.631737] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   10.638629] pnp 00:01: [dma 4]
[   10.641843] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   10.648533] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   10.655593] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   10.663137] system 00:04: [io  0x0680-0x069f] has been reserved
[   10.669051] system 00:04: [io  0xffff] has been reserved
[   10.674422] system 00:04: [io  0xffff] has been reserved
[   10.679795] system 00:04: [io  0xffff] has been reserved
[   10.685168] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   10.691148] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   10.697129] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   10.703109] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   10.709087] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   10.715068] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   10.721394] system 00:04: [io  0x164e-0x164f] has been reserved
[   10.727369] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.734251] xen: registering gsi 8 triggering 1 polarity 0
[   10.739932] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   10.746826] system 00:06: [io  0x1854-0x1857] has been reserved
[   10.752741] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   10.761102] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   10.767016] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   10.772989] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.781210] xen: registering gsi 4 triggering 1 polarity 0
[   10.786683] Already setup the GSI :4
[   10.790329] pnp 00:08: [dma 0 disabled]
[   10.794428] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   10.802160] xen: registering gsi 3 triggering 1 polarity 0
[   10.807663] pnp 00:09: [dma 0 disabled]
[   10.811771] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   10.818612] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   10.824526] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.831401] xen: registering gsi 13 triggering 1 polarity 0
[   10.837187] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   10.846814] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   10.853424] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   10.860094] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   10.866767] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   10.873440] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   10.880112] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   10.886785] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   10.893458] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   10.900131] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   10.906805] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   10.913479] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   10.920149] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   10.926820] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   10.935715] pnp: PnP ACPI: found 13 devices
[   10.939887] ACPI: bus type PNP unregistered
[   10.944136] initcall pnpacpi_init+0x0/0x8c returned 0 after 318865 usecs
[   10.950894] calling  pcistub_init+0x0/0x29f @ 1
[   10.955486] xen_pciback: Error parsing pci_devs_to_hide at "(xxxx:xxx:xx:)"
[   10.962507] initcall pcistub_init+0x0/0x29f returned -22 after 6855 usecs
[   10.969353] calling  chr_dev_init+0x0/0xc6 @ 1
[   10.983003] initcall chr_dev_init+0x0/0xc6 returned 0 after 8928 usecs
[   10.989526] calling  firmware_class_init+0x0/0xec @ 1
[   11.013393] calling  thermal_init+0x0/0x8b @ 1
[   11.017973] initcall thermal_init+0x0/0x8b returned 0 after 92 usecs
[   11.024325] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.030211] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.038096] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.046800] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.053227] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9361 usecs
[   11.061027] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.066681] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.071636] pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
[   11.077789] pci 0000:00:01.0:   bridge window [mem 0xf1600000-0xf18fffff]
[   11.084651] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.091578] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.098511] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.105442] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.112378] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.119310] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.126242] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.133177] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.140109] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.147042] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.153975] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.160899] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.168282] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.175198] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.182667] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.189584] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.196967] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.203884] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.211344] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.216624] pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
[   11.222780] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.229629] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.234651] pci 0000:00:1c.0:   bridge window [io  0xc000-0xcfff]
[   11.240808] pci 0000:00:1c.0:   bridge window [mem 0xf1f00000-0xf1ffffff]
[   11.247663] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.252678] pci 0000:00:1c.3:   bridge window [io  0xb000-0xbfff]
[   11.258835] pci 0000:00:1c.3:   bridge window [mem 0xf1e00000-0xf1efffff]
[   11.265689] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.270712] pci 0000:07:01.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.277568] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.282841] pci 0000:06:00.0:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.289696] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.294976] pci 0000:00:1c.5:   bridge window [mem 0xf1a00000-0xf1bfffff]
[   11.301828] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.306847] pci 0000:00:1c.6:   bridge window [mem 0xf1d00000-0xf1dfffff]
[   11.313700] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.318716] pci 0000:00:1c.7:   bridge window [io  0xa000-0xafff]
[   11.324874] pci 0000:00:1c.7:   bridge window [mem 0xf1c00000-0xf1cfffff]
[   11.331727] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.337348] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.342980] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.349307] pci_bus 0000:00: resource 7 [mem 0x000d8000-0x000dbfff]
[   11.355633] pci_bus 0000:00: resource 8 [mem 0x000dc000-0x000dffff]
[   11.361959] pci_bus 0000:00: resource 9 [mem 0x000e0000-0x000e3fff]
[   11.368286] pci_bus 0000:00: resource 10 [mem 0x000e4000-0x000e7fff]
[   11.374699] pci_bus 0000:00: resource 11 [mem 0xbe200000-0xfeafffff]
[   11.381113] pci_bus 0000:01: resource 0 [io  0xe000-0xefff]
[   11.386745] pci_bus 0000:01: resource 1 [mem 0xf1600000-0xf18fffff]
[   11.393072] pci_bus 0000:02: resource 0 [io  0xd000-0xdfff]
[   11.398705] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.405033] pci_bus 0000:04: resource 0 [io  0xc000-0xcfff]
[   11.410664] pci_bus 0000:04: resource 1 [mem 0xf1f00000-0xf1ffffff]
[   11.416992] pci_bus 0000:05: resource 0 [io  0xb000-0xbfff]
[   11.422625] pci_bus 0000:05: resource 1 [mem 0xf1e00000-0xf1efffff]
[   11.428951] pci_bus 0000:06: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.435279] pci_bus 0000:07: resource 1 [mem 0xf1a00000-0xf1bfffff]
[   11.441604] pci_bus 0000:07: resource 5 [mem 0xf1a00000-0xf1bfffff]
[   11.447931] pci_bus 0000:08: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.454257] pci_bus 0000:09: resource 1 [mem 0xf1d00000-0xf1dfffff]
[   11.460585] pci_bus 0000:0a: resource 0 [io  0xa000-0xafff]
[   11.466216] pci_bus 0000:0a: resource 1 [mem 0xf1c00000-0xf1cfffff]
[   11.472545] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 396456 usecs
[   11.480343] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.485209] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.491957] calling  inet_init+0x0/0x296 @ 1
[   11.496362] NET: Registered protocol family 2
[   11.501022] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.508278] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.514922] TCP: Hash tables configured (established 16384 bind 16384)
[   11.521513] TCP: reno registered
[   11.524796] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   11.530864] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   11.537504] initcall inet_init+0x0/0x296 returned 0 after 40247 usecs
[   11.543935] calling  ipv4_offload_init+0x0/0x61 @ 1
[   11.548871] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   11.555631] calling  af_unix_init+0x0/0x55 @ 1
[   11.560147] NET: Registered protocol family 1
[   11.564571] initcall af_unix_init+0x0/0x55 returned 0 after 4330 usecs
[   11.571144] calling  ipv6_offload_init+0x0/0x7f @ 1
[   11.576084] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   11.582843] calling  init_sunrpc+0x0/0x69 @ 1
[   11.587463] RPC: Registered named UNIX socket transport module.
[   11.593378] RPC: Registered udp transport module.
[   11.598141] RPC: Registered tcp transport module.
[   11.602907] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   11.609405] initcall init_sunrpc+0x0/0x69 returned 0 after 21621 usecs
[   11.615994] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   11.621460] pci 0000:00:02.0: Boot video device
[   11.626546] xen: registering gsi 16 triggering 0 polarity 1
[   11.632119] xen: --> pirq=16 -> irq=16 (gsi=16)
[   11.636758] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   11.644497] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   11.652404] xen: registering gsi 16 triggering 0 polarity 1
[   11.657965] Already setup the GSI :16
[   11.677676] xen: registering gsi 23 triggering 0 polarity 1
[   11.683252] xen: --> pirq=23 -> irq=23 (gsi=23)
[   11.704904] xen: registering gsi 18 triggering 0 polarity 1
[   11.710490] xen: --> pirq=18 -> irq=18 (gsi=18)
[   11.7157] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 104854 usecs
[   11.736524] calling  populate_rootfs+0x0/0x112 @ 1
[   11.741512] Unpacking initramfs...
[   12.831493] Freeing initrd memory: 83616K (ffff8800023f4000 - ffff88000759c000)
[   12.838803] initcall populate_rootfs+0x0/0x112 returned 0 after 1071701 usecs
[   12.845987] calling  pci_iommu_init+0x0/0x41 @ 1
[   12.850667] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   12.857167] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   12.862799] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   12.870443] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   12.876406] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   12.884206] calling  i8259A_init_ops+0x0/0x21 @ 1
[   12.888972] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   12.895558] calling  vsyscall_init+0x0/0x27 @ 1
[   12.900155] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   12.906564] calling  sbf_init+0x0/0xf6 @ 1
[   12.910724] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   12.916704] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   12.921904] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   12.928924] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   12.933433] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   12.939757] calling  i8237A_init_ops+0x0/0x14 @ 1
[   12.944524] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   12.951110] calling  cache_sysfs_init+0x0/0x65 @ 1
[   12.956211] initcall cache_sysfs_init+0x0/0x65 returned 0 after 239 usecs
[   12.962984] calling  amd_uncore_init+0x0/0x130 @ 1
[   12.967836] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   12.974682] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   12.979710] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   12.986728] calling  intel_uncore_init+0x0/0x3ab @ 1
[   12.991757] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   12.998776] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.003471] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.014029] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10326 usecs
[   13.020878] calling  inject_init+0x0/0x30 @ 1
[   13.025295] Machine check injector initialized
[   13.029803] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.036302] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.042194] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.049907] calling  microcode_init+0x0/0x1b1 @ 1
[   13.054861] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.060972] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.069743] initcall microcode_init+0x0/0x1b1 returned 0 after 14715 usecs
[   13.076674] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.081263] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.087850] calling  msr_init+0x0/0x162 @ 1
[   13.092318] initcall msr_init+0x0/0x162 returned 0 after 216 usecs
[   13.098488] calling  cpuid_init+0x0/0x162 @ 1
[   13.103100] initcall cpuid_init+0x0/0x162 returned 0 after 192 usecs
[   13.109439] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.114204] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.120791] calling  add_pcspkr+0x0/0x40 @ 1
[   13.125227] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.131490] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.137987] Scanning for low memory corruption every 60 seconds
[   13.143965] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5837 usecs
[   13.152545] calling  sysfb_init+0x0/0x9c @ 1
[   13.156985] initcall sysfb_init+0x0/0x9c returned 0 after 104 usecs
[   13.163243] calling  audit_classes_init+0x0/0xaf @ 1
[   13.168281] initcall audit_classes_init+0x0/0xaf returned 0 after 12 usecs
[   13.175201] calling  pt_dump_init+0x0/0x30 @ 1
[   13.179717] initcall pt_dump_init+0x0/0x30 returned 0 after 9 usecs
[   13.186035] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.190893] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   13.197559] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.202852] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.209951] calling  ioresources_init+0x0/0x3c @ 1
[   13.214809] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.221479] calling  uid_cache_init+0x0/0x85 @ 1
[   13.226174] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.232744] calling  init_posix_timers+0x0/0x240 @ 1
[   13.237788] initcall init_posix_timers+0x0/0x240 returned 0 after 16 usecs
[   13.244703] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.249991] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.257096] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.262214] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.269145] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.274463] initcall snapshot_device_init+0x0/0x12 returned 0 after 116 usecs
[   13.281588] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.286353] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.292942] calling  create_proc_profile+0x0/0x300 @ 1
[   13.298141] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.305160] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.310360] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.317381] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.322967] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 208 usecs
[   13.330263] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.335637] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 3 usecs
[   13.342827] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.347870] initcall alarmtimer_init+0x0/0x15f returned 0 after 187 usecs
[   13.354650] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.360316] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 286 usecs
[   13.367612] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.372641] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.379484] calling  futex_init+0x0/0xf6 @ 1
[   13.383831] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.389973] initcall futex_init+0x0/0xf6 returned 0 after 6012 usecs
[   13.396384] calling  proc_dma_init+0x0/0x22 @ 1
[   13.400984] initcall proc_dma_init+0x0/0x22 returned 0 after 3 usecs
[   13.407389] calling  proc_modules_init+0x0/0x22 @ 1
[   13.412333] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.419089] calling  kallsyms_init+0x0/0x25 @ 1
[   13.423685] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.430095] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.435912] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 10 usecs
[   13.443615] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.449078] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.456354] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.461481] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.468487] calling  ikconfig_init+0x0/0x3c @ 1
[   13.473084] initcall ikconfig_init+0x0/0x3c returned 0 after 4 usecs
[   13.479494] calling  audit_init+0x0/0x141 @ 1
[   13.483913] audit: initializing netlink socket (disabled)
[   13.489396] type=2000 audit(1390393621.194:1): initialized
[   13.494922] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.501506] calling  audit_watch_init+0x0/0x3a @ 1
[   13.506361] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.513033] calling  audit_tree_init+0x0/0x49 @ 1
[   13.517801] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.524387] calling  init_kprobes+0x0/0x16c @ 1
[   13.539053] initcall init_kprobes+0x0/0x16c returned 0 after 9836 usecs
[   13.545655] calling  hung_task_init+0x0/0x56 @ 1
[   13.573837] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.580500] calling  init_blk_tracer+0x0/0x5a @ 1
[   13.585269] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   13.591854] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   13.597568] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   13.605107] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   13.610947] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 537 usecs
[   13.618155] calling  init_per_zone_wmark_min+0x0/0xa8 @ 1
[   13.623682] initcall init_per_zone_wmark_min+0x0/0xa8 returned 0 after 65 usecs
[   13.630979] calling  kswapd_init+0x0/0x76 @ 1
[   13.635442] initcall kswapd_init+0x0/0x76 returned 0 after 42 usecs
[   13.641726] calling  extfrag_debug_init+0x0/0x7e @ 1
[   13.646771] initcall extfrag_debug_init+0x0/0x7e returned 0 after 20 usecs
[   13.653683] calling  setup_vmstat+0x0/0xf3 @ 1
[   13.658204] initcall setup_vmstat+0x0/0xf3 returned 0 after 14 usecs
[   13.664603] calling  mm_sysfs_init+0x0/0x29 @ 1
[   13.669205] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   13.675695] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   13.680984] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   13.688090] calling  slab_proc_init+0x0/0x25 @ 1
[   13.692772] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   13.699268] calling  init_reserve_notifier+0x0/0x26 @ 1
[   13.704556] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   13.711660] calling  init_admin_reserve+0x0/0x40 @ 1
[   13.716686] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   13.723533] calling  init_user_reserve+0x0/0x40 @ 1
[   13.728474] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   13.735233] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   13.740177] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   13.746934] calling  procswaps_init+0x0/0x22 @ 1
[   13.751616] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   13.758112] calling  init_frontswap+0x0/0x96 @ 1
[   13.762819] initcall init_frontswap+0x0/0x96 returned 0 after 26 usecs
[   13.769380] calling  hugetlb_init+0x0/0x4c2 @ 1
[   13.773972] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   13.780476] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6352 usecs
[   13.787076] calling  mmu_notifier_init+0x0/0x12 @ 1
[   13.792019] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   13.798776] calling  slab_proc_init+0x0/0x8 @ 1
[   13.803370] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   13.809781] calling  cpucache_init+0x0/0x4b @ 1
[   13.814375] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   13.820788] calling  hugepage_init+0x0/0x145 @ 1
[   13.825469] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   13.832142] calling  init_cleancache+0x0/0xbc @ 1
[   13.836936] initcall init_cleancache+0x0/0xbc returned 0 after 28 usecs
[   13.843583] calling  fcntl_init+0x0/0x2a @ 1
[   13.847927] initcall fcntl_init+0x0/0x2a returned 0 after 12 usecs
[   13.854156] calling  proc_filesystems_init+0x0/0x22 @ 1
[   13.859444] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   13.866549] calling  dio_init+0x0/0x2d @ 1
[   13.870719] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   13.876774] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   13.881827] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 25 usecs
[   13.888738] calling  dnotify_init+0x0/0x7b @ 1
[   13.893267] initcall dnotify_init+0x0/0x7b returned 0 after 24 usecs
[   13.899657] calling  inotify_user_setup+0x0/0x70 @ 1
[   13.904701] initcall inotify_user_setup+0x0/0x70 returned 0 after 18 usecs
[   13.911615] calling  aio_setup+0x0/0x7d @ 1
[   13.915919] initcall aio_setup+0x0/0x7d returned 0 after 55 usecs
[   13.922015] calling  proc_locks_init+0x0/0x22 @ 1
[   13.926782] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   13.933366] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   13.938264] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   13.944980] calling  dquot_init+0x0/0x121 @ 1
[   13.949397] VFS: Disk quotas dquot_6.5.2
[   13.953422] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   13.959887] initcall dquot_init+0x0/0x121 returned 0 after 10243 usecs
[   13.966472] calling  init_v2_quota_format+0x0/0x22 @ 1
[   13.971673] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   13.978692] calling  quota_init+0x0/0x31 @ 1
[   13.983042] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   13.989265] calling  proc_cmdline_init+0x0/0x22 @ 1
[   13.994207] initcall proc_cmdline_init+0x0/0x22 returned 0 after 3 usecs
[   14.000963] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.005994] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.012837] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.017780] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.024537] calling  proc_devices_init+0x0/0x22 @ 1
[   14.029479] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.036236] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.041440] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.048456] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.053399] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.060158] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.065099] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.071855] calling  proc_stat_init+0x0/0x22 @ 1
[   14.076541] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.083035] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.087892] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.094562] calling  proc_version_init+0x0/0x22 @ 1
[   14.099503] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.106261] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.111291] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.118135] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.122911] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.129573] calling  vmcore_init+0x0/0x5cb @ 1
[   14.134080] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.140406] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.145090] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   14.151586] calling  proc_page_init+0x0/0x42 @ 1
[   14.156273] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.162766] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.167492] initcall init_devpts_fs+0x0/0x62 returned 0 after 44 usecs
[   14.174033] calling  init_ramfs_fs+0x0/0x4d @ 1
[   14.178635] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   14.185039] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.190134] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 68 usecs
[   14.197000] calling  init_fat_fs+0x0/0x4f @ 1
[   14.201440] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.207745] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.212252] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.218579] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.223171] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.229585] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.234375] initcall init_iso9660_fs+0x0/0x70 returned 0 after 23 usecs
[   14.241023] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.245732] initcall init_nfs_fs+0x0/0x16c returned 0 after 196 usecs
[   14.252161] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.256580] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.262820] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.267240] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.273478] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.277899] NFS: Registering the id_resolver key type
[   14.283028] Key type id_resolver registered
[   14.287259] Key type id_legacy registered
[   14.291337] initcall init_nfs_v4+0x0/0x3b returned 0 after 13122 usecs
[   14.297919] calling  init_nlm+0x0/0x4c @ 1
[   14.302087] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.308059] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.312739] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.319239] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.323920] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.330418] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.335445] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.342291] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.346886] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.353299] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.357890] NTFS driver 2.1.30 [Flags: R/W].
[   14.362274] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4280 usecs
[   14.368897] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.373795] initcall init_autofs4_fs+0x0/0x2a returned 0 after 127 usecs
[   14.380490] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.385176] initcall init_pstore_fs+0x0/0x53 returned 0 after 10 usecs
[   14.391752] calling  ipc_init+0x0/0x2f @ 1
[   14.395919] msgmni has been set to 3857
[   14.399822] initcall ipc_init+0x0/0x2f returned 0 after 3817 usecs
[   14.406051] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.410826] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.417404] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.422144] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 59 usecs
[   14.428672] calling  key_proc_init+0x0/0x5e @ 1
[   14.433270] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.439676] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.444701] SELinux:  Registering netfilter hooks
[   14.449605] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4788 usecs
[   14.456636] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.461409] initcall init_sel_fs+0x0/0xa5 returned 0 after 344 usecs
[   14.467748] calling  selnl_init+0x0/0x56 @ 1
[   14.472090] initcall selnl_init+0x0/0x56 returned 0 after 11 usecs
[   14.478320] calling  sel_netif_init+0x0/0x5c @ 1
[   14.483003] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.489500] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.494356] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.501027] calling  sel_netport_init+0x0/0x6a @ 1
[   14.505881] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs
[   14.512554] calling  aurule_init+0x0/0x2d @ 1
[   14.516973] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.523212] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.527925] initcall crypto_wq_init+0x0/0x33 returned 0 after 32 usecs
[   14.534481] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.539452] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.546207] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.551320] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.558252] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.563279] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.570126] calling  hmac_module_init+0x0/0x12 @ 1
[   14.574979] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.581653] calling  md5_mod_init+0x0/0x12 @ 1
[   14.586190] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   14.592573] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   14.597885] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 26 usecs
[   14.605052] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   14.610424] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.617618] calling  des_generic_mod_init+0x0/0x17 @ 1
[   14.622870] initcall des_generic_mod_init+0x0/0x17 returned 0 after 51 usecs
[   14.629924] calling  aes_init+0x0/0x12 @ 1
[   14.634112] initcall aes_init+0x0/0x12 returned 0 after 26 usecs
[   14.640151] calling  zlib_mod_init+0x0/0x12 @ 1
[   14.644769] initcall zlib_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.651243] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   14.656962] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.664503] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   14.670569] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.678456] calling  krng_mod_init+0x0/0x12 @ 1
[   14.683076] initcall krng_mod_init+0x0/0x12 returned 0 after 25 usecs
[   14.689548] calling  proc_genhd_init+0x0/0x3c @ 1
[   14.694323] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   14.700901] calling  bsg_init+0x0/0x12e @ 1
[   14.705229] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   14.712611] initcall bsg_init+0x0/0x12e returned 0 after 7286 usecs
[   14.718937] calling  noop_init+0x0/0x12 @ 1
[   14.723181] io scheduler noop registered
[   14.727168] initcall noop_init+0x0/0x12 returned 0 after 3893 usecs
[   14.733495] calling  deadline_init+0x0/0x12 @ 1
[   14.738089] io scheduler deadline registered
[   14.742422] initcall deadline_init+0x0/0x12 returned 0 after 4231 usecs
[   14.749095] calling  cfq_init+0x0/0x8b @ 1
[   14.753279] io scheduler cfq registered (default)
[   14.758022] initcall cfq_init+0x0/0x8b returned 0 after 4655 usecs
[   14.764261] calling  percpu_counter_startup+0x0/0x38 @ 1
[   14.769635] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   14.776826] calling  pci_proc_init+0x0/0x6a @ 1
[   14.781610] initcall pci_proc_init+0x0/0x6a returned 0 after 184 usecs
[   14.788120] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   14.793779] xen: registering gsi 16 triggering 0 polarity 1
[   14.799341] Already setup the GSI :16
[   14.803876] xen: registering gsi 16 triggering 0 polarity 1
[   14.809441] Already setup the GSI :16
[   14.813947] xen: registering gsi 16 triggering 0 polarity 1
[   14.819515] Already setup the GSI :16
[   14.823875] xen: registering gsi 19 triggering 0 polarity 1
[   14.829452] xen: --> pirq=19 -> irq=19 (gsi=19)
[   14.834681] xen: registering gsi 17 triggering 0 polarity 1
[   14.840255] xen: --> pirq=17 -> irq=17 (gsi=17)
[   14.845567] xen: registering gsi 19 triggering 0 polarity 1
[   14.851129] Already setup the GSI :19
[   14.855049] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60535 usecs
[   14.862083] calling  aer_service_init+0x0/0x2b @ 1
[   14.867006] initcall aer_service_init+0x0/0x2b returned 0 after 70 usecs
[   14.873694] calling  pci_hotplug_init+0x0/0x1d @ 1
[   14.878547] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   14.884179] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5499 usecs
[   14.891114] calling  pcied_init+0x0/0x79 @ 1
[   14.895644] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   14.902247] initcall pcied_init+0x0/0x79 returned 0 after 6641 usecs
[   14.908656] calling  pcifront_init+0x0/0x3f @ 1
[   14.913246] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   14.919834] calling  genericbl_driver_init+0x0/0x14 @ 1
[   14.925233] initcall genericbl_driver_init+0x0/0x14 returned 0 after 110 usecs
[   14.932443] calling  cirrusfb_init+0x0/0xcc @ 1
[   14.937126] initcall cirrusfb_init+0x0/0xcc returned 0 after 88 usecs
[   14.943554] calling  efifb_driver_init+0x0/0x14 @ 1
[   14.948564] initcall efifb_driver_init+0x0/0x14 returned 0 after 69 usecs
[   14.955344] calling  intel_idle_init+0x0/0x331 @ 1
[   14.960196] intel_idle: MWAIT substates: 0x42120
[   14.964875] intel_idle: v0.4 model 0x3C
[   14.968772] intel_idle: lapic_timer_reliable_states 0xffffffff
[   14.974671] intel_idle: intel_idle yielding to none
[   14.979345] initcall intel_idle_init+0x0/0x331 returned -19 after 18699 usecs
[   14.986801] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   14.992179] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   14.999364] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.003945] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs
[   15.010297] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.016028] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.024192] ACPI: Power Button [PWRB]
[   15.028171] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.035551] ACPI: Power Button [PWRF]
[   15.039347] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23048 usecs
[   15.046902] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.052336] ACPI: Fan [FAN0] (off)
[   15.055967] ACPI: Fan [FAN1] (off)
[   15.059585] ACPI: Fan [FAN2] (off)
[   15.063186] ACPI: Fan [FAN3] (off)
[   15.066793] ACPI: Fan [FAN4] (off)
[   15.070258] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17732 usecs
[   15.077559] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.095589] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.104011] ACPI Error: MetPlatform Limit not supported.
[   15.136462] initcall acpi_processor_driver_init+0x0/0x43 returned 0 after 51939 usecs
[   15.144351] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.152493] thermal LNXTHERM:00: registered as thermal_zone0
[   15.158144] ACPI: Thermal Zone [TZ00] (28 C)
[   15.1645830 C)
[   15.174908] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25015 usecs
[   15.181946] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.186890] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.193640] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.198884] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.204607] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5631 usecs
[   15.211817] calling  erst_init+0x0/0x2fc @ 1
[   15.216191] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.223696] pstore: Registered erst as persistent store backend
[   15.229667] initcall erst_init+0x0/0x2fc returned 0 after 13202 usecs
[   15.236166] calling  ghes_init+0x0/0x173 @ 1
[   15.240647] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35324 usecs
[   15.249090] \_SB_:_OSC request failed
[   15.252749] _OSC request data:1 1 0
[   15.256385] \_SB_:_OSC invalid UUID
[   15.259938] _OSC request data:1 1 0
[   15.263577] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.269820] initcall ghes_init+0x0/0x173 returned 0 after 28633 usecs
[   15.276318] calling  einj_init+0x0/0x522 @ 1
[   15.280715] EINJ: Error INJection is initialized.
[   15.285417] initcall einj_init+0x0/0x522 returned 0 after 4654 usecs
[   15.291832] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.296683] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.302725] initcall ioat_init_module+0x0/0xb1 returned 0 after 5899 usecs
[   15.309607] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.314516] initcall virtio_mmio_init+0x0/0x14 returned 0 after 71 usecs
[   15.321203] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.326992] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 68 usecs
[   15.334549] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.339835] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.346940] calling  xenbus_init+0x0/0x3d @ 1
[   15.351496] initcall xenbus_init+0x0/0x3d returned 0 after 130 usecs
[   15.357836] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.363070] initcall xenbus_backend_init+0x0/0x51 returned 0 after 117 usecs
[   15.370109] calling  gntdev_init+0x0/0x4d @ 1
[   15.374679] initcall gntdev_init+0x0/0x4d returned 0 after 148 usecs
[   15.381022] calling  gntalloc_init+0x0/0x3d @ 1
[   15.385744] initcall gntalloc_init+0x0/0x3d returned 0 after 126 usecs
[   15.392262] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.397633] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.404824] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.409830] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 63 usecs
[   15.416611] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.422249] initcall platform_pci_module_init+0x0/0x1b returned 0 after 88 usecs
[   15.429630] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.435019] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 186 usecs
[   15.442144] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.446935] xen_pciback: backend is vpci
[   15.450973] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3969 usecs
[   15.457755] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.464056] xen_acpi_processor: Uploading Xen processor PM info
[   15.472556] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 9039 usecs
[   15.480110] calling  pty_init+0x0/0x453 @ 1
[   15.503680] kworker/u2:0 (861) used greatest stack depth: 5488 bytes left
[   15.547929] initcall pty_init+0x0/0x453 returned 0 after 62081 usecs
[   15.554276] calling  sysrq_init+0x0/0xb0 @ 1
[   /0x228 returned 0 after 1062 usecs
[   15.577058] calling  serial8250_init+0x0/0x1ab @ 1
[   15.581907] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   15.609583] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   15.618001] initcall serial8250_init+0x0/0x1ab returned 0 after 35246 usecs
[   15.624951] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   15.630445] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 121 usecs
[   15.637738] calling  init_kgdboc+0x0/0x16 @ 1
[   15.642158] kgdb: Registered I/O driver kgdboc.
[   15.646777] initcall init_kgdboc+0x0/0x16 returned 0 after 4511 usecs
[   15.653250] calling  init+0x0/0x10f @ 1
[   15.657366] initcall init+0x0/0x10f returned 0 after 210 usecs
[   15.663192] calling  hpet_init+0x0/0x6a @ 1
[   15.667917] hpet_acpi_add: no address or irqs in _CRS
[   15.673035] initcall hpet_init+0x0/0x6a returned 0 after 5464 usecs
[   15.679295] calling  nvram_init+0x0/0x82 @ 1
[   15.683767] Non-volatile memory driver v1.3
[   15.687946] initcall nvram_init+0x0/0x82 returned 0 after 4219 usecs
[   15.694356] calling  mod_init+0x0/0x5a @ 1
[   15.698514] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   15.704667] calling  rng_init+0x0/0x12 @ 1
[   15.708963] initcall rng_init+0x0/0x12 returned 0 after 132 usecs
[   15.715044] calling  agp_init+0x0/0x26 @ 1
[   15.719202] Linux agpgart interface v0.103
[   15.723360] initcall agp_init+0x0/0x26 returned 0 after 4060 usecs
[   15.729601] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   15.734686] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 142 usecs
[   15.741720] calling  agp_intel_init+0x0/0x29 @ 1
[   15.746499] initcall agp_intel_init+0x0/0x29 returned 0 after 97 usecs
[   15.753014] calling  agp_sis_init+0x0/0x29 @ 1
[   15.757614] initcall agp_sis_init+0x0/0x29 returned 0 after 94 usecs
[   15.763958] calling  agp_via_init+0x0/0x29 @ 1
[   15.768556] initcall agp_via_init+0x0/0x29 returned 0 after 90 usecs
[   15.774903] calling  drm_core_init+0x0/0x10c @ 1
[   15.779671] [drm] Initialized drm 1.1.0 20060810
[   15.784282] initcall drm_core_init+0x0/0x10c returned 0 after 4590 usecs
[   15.791040] calling  cn_proc_init+0x0/0x3d @ 1
[   15.795545] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   15.801869] calling  topology_sysfs_init+0x0/0x70 @ 1
[   15.807014] initcall topology_sysfs_init+0x0/0x70 returned 0 after 30 usecs
[   15.814003] calling  loop_init+0x0/0x14e @ 1
[   15.867870] loop: module loaded
[   15.871004] initcall loop_init+0x0/0x14e returned 0 after 51433 usecs
[   15.877503] calling  xen_blkif_init+0x0/0x22 @ 1
[   15.882286] initcall xen_blkif_init+0x0/0x22 returned 0 after 101 usecs
[   15.888894] calling  mac_hid_init+0x0/0x22 @ 1
[   15.893405] initcall mac_hid_init+0x0/0x22 returned 0 after 7 usecs
[   15.899724] calling  macvlan_init_module+0x0/0x3d @ 1
[   15.904840] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 usecs
[   15.911772] calling  macvtap_init+0x0/0x100 @ 1
[   15.916454] initcall macvtap_init+0x0/0x100 returned 0 after 89 usecs
[   15.922881] calling  net_olddevs_init+0x0/0xb5 @ 1
[   15.927734] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   15.934405] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   15.939829] libphy: Fixed MDIO Bus: probed
[   15.943916] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4207 usecs
[   15.951195] calling  tun_init+0x0/0x93 @ 1
[   15.955352] tun: Universal TUN/TAP device driver, 1.6
[   15.960466] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   15.966855] initcall tun_init+0x0/0x93 returned 0 after 11232 usecs
[   15.973112] calling  tg3_driver_init+0x0/0x1b @ 1
[   15.977993] initcall tg3_driver_init+0x0/0x1b returned 0 after 114 usecs
[   15.984681] calling  ixgbevf_init_module+0x0/0x4c @ 1
[   15.989793] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.11.3-k
[   15.999238] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
[   16.005505] initcall ixgbevf_init_module+0x0/0x4c returned 0 after 15343 usecs
[   16.012719] calling  forcedeth_pci_driver_init+0x0/0x1b @ 1
[   16.018456] initcall forcedeth_pci_driver_init+0x0/0x1b returned 0 after 100 usecs
[   16.026012] calling  netback_init+0x0/0x48 @ 1
[   16.030590] initcall netback_init+0x0/0x48 returned 0 after 70 usecs
[   16.036934] calling  nonstatic_sysfs_init+0x0/0x12 @ 1
[   16.042132] initcall nonstatic_sysfs_init+0x0/0x12 returned 0 after 0 usecs
[   16.049150] calling  yenta_cardbus_driver_init+0x0/0x1b @ 1
[   16.054898] initcall yenta_cardbus_driver_init+0x0/0x1b returned 0 after 112 usecs
[   16.062460] calling  mon_init+0x0/0xfe @ 1
[   16.066829] initcall mon_init+0x0/0xfe returned 0 after 211 usecs
[   16.072917] calling  ehci_hcd_init+0x0/0x5c @ 1
[   16.077506] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[   16.084094] initcall ehci_hcd_init+0x0/0x5c returned 0 after 6433 usecs
[   16.090766] calling  ehci_pci_init+0x0/0x69 @ 1
[   16.095358] ehci-pci: EHCI PCI platform driver
[   16.100443] xen: registering gsi 16 triggering 0 polarity 1
[   16.106004] Already setup the GSI :16
[   16.109766] ehci-pci 0000:00:1a.0: EHCI Host Controller
[   16.115234] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1
[   16.122650] ehci-pci 0000:00:1a.0: debug port 2
[   16.131127] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[   16.137979] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf153c000
[   16.148723] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[   16.154594] usb usb1: New USB device found, idVendor=1d6b,   16.173603] usb usb1: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   16.181050] usb usb1: SerialNumber: 0000:00:1a.0
[   16.186395] hub 1-0:1.0: USB hub found
[   16.190171] hub 1-0:1.0: 3 ports detected
[   16.195665] xen: registering gsi 23 triggering 0 polarity 1
[   16.201231] Already setup the GSI :23
[   16.204983] ehci-pci 0000:00:1d.0: EHCI Host Controller
[   16.210468] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2
[   16.217875] ehci-pci 0000:00:1d.0: debug port 2
[   16.226369] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[   16.233219] ehci-pci 0000:00:1d.0: irq 23, io evice found, idVendor=1d6b, idProduct=0002
[   16.257396] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   16.264667] usb usb2: Product: EHCI Host Controller
[   16.269612] usb usb2: Manufacturer: Linux 3.13.0upstream-02502-gec513b1 ehci_hcd
[   16.277059] usb usb2: SerialNumber: 0000:00:1d.0
[   16.282368] hub 2-0:1.0: USB hub found
[   16.286138] hub 2-0:1.0: 3 ports detected
[   16.291162] initcall ehci_pci_init+0x0/0x69 returned 0 after 191213 usecs
[   16.297939] calling  ohci_hcd_mod_init+0x0/0x64 @ 1
[   16.302877] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[   16.309123] initcall ohci_hcd_mod_init+0x0/0x64 returned 0 after 6098 usecs
[   16.316134] calling  ohci_pci_init+0x0/0x69 @ 1
[   16.320728] ohci-pci: OHCI PCI platform driver
[   16.325341] initcall ohci_pci_init+0x0/0x69 returned 0 after 4504 usecs
[   16.331944] calling  uhci_hcd_init+0x0/0xb0 @ 1
[   16.336536] uhci_hcd: USB Universal Host Controller Interface driver
[   16.343082] initcall uhci_hcd_init+0x0/0xb0 returned 0 after 6391 usecs
[   16.349687] calling  usblp_driver_init+0x0/0x1b @ 1
[   16.354802] usbcore: registered new interface driver usblp
[   16.360277] initcall usblp_driver_init+0x0/0x1b returned 0 after 5519 usecs
[   16.367296] calling  kgdbdbgp_start_thread+0x0/0x4f @ 1
[   16.372580] initcall kgdbdbgp_start_thread+0x0/0x4f returned 0 after 0 usecs
[   16.379687] calling  i8042_init+0x0/0x3c5 @ 1
[   16.384371] i8042: PNP: No PS/2 controller found. Probing ports directly.
[   16.394424] serio: i8042 KBD port at 0x60,0x64 irq 1
[   16.399385] serio: i8042 AUX port at 0x60,0x64 irq 12
[   16.404674] initcall i8042_init+0x0/0x3c5 returned 0 after 20084 usecs
[   16.411193] calling  serport_init+0x0/0x34 @ 1
[   16.415696] initcall serport_init+0x0/0x34 returned 0 after 0 usecs
[   16.422023] calling  mousedev_init+0x0/0x62 @ 1
[   16.426806] mousedev: PS/2 mouse device common for all mice
[   16.432372] initcall mousedev_init+0x0/0x62 returned 0 after 5620 usecs
[   16.439043] calling  evdev_init+0x0/0x12 @ 1
[   16.443771] initcall evdev_init+0x0/0x12 returned 0 after 384 usecs
[   16.450027] calling  atkbd_init+0x0/0x27 @ 1
[   16.454500] initcall atkbd_init+0x0/0x27 returned 0 after 137 usecs
[   16.460760] calling  psmouse_init+0x0/0x82 @ 1
[   16.465448] initcall psmouse_init+0x0/0x82 returned 0 after 182 usecs
[   16.471875] calling  cmos_init+0x0/0x77 @ 1
[   16.476168] rtc_cmos 00:05: RTC can wake from S4
[   16.481185] rtc_cmos 00:05: rtc core: registered rtc_cmos as rtc0
[   16.487332] rtc_cmos 00:05: alarms up to one month, y3k, 242 bytes nvram
[   16.494495] initcall cmos_init+0x0/0x77 returned 0 after 17937 usecs
[   16.500840] calling  i2c_i801_init+0x0/0xad @ 1
[   16.506031] xen: registering gsi 18 triggering 0 polarity 1
[   16.511591] Already setup the GSI :18
[   16.515412] i801_smbus 0000:00:1f.3: SMBus using PCI Interrupt
[   16.521825] initcall i2c_i801_init+0x0/0xad returned 0 after 16005 usecs
[   16.528521] calling  cpufreq_gov_dbs_init+0x0/0x12 @ 1
[   16.533723] initcall cpufreq_gov_dbs_init+0x0/0x12 returned -19 after 0 usecs
[   16.540943] calling  efivars_sysfs_init+0x0/0x220 @ 1
[   16.546051] initcall efivars_sysfs_init+0x0/0x220 returned -19 after 0 usecs
[   16.553156] calling  efivars_pstore_init+0x0/0xa2 @ 1
[   16.558271] initcall efivars_pstore_init+0x0/0xa2 returned 0 after 0 usecs
[   16.565202] calling  vhost_net_init+0x0/0x30 @ 1
[   16.570377] initcall vhost_net_init+0x0/0x30 returned 0 after 482 usecs
[   16.576984] calling  vhost_init+0x0/0x8 @ 1
[   16.581233] initcall vhost_init+0x0/0x8 returned 0 after 0 usecs
[   16.587293] calling  staging_init+0x0/0x8 @ 1
[   16.591713] initcall staging_init+0x0/0x8 returned 0 after 0 usecs
[   16.597953] calling  zram_init+0x0/0x2fd @ 1
[   16.603120] zram: Created 1 device(s) ...
[   16.607126] initcall zram_init+0x0/0x2fd returned 0 after 4727 usecs
[   16.613536] calling  zs_init+0x0/0x90 @ 1
[   16.617611] initcall zs_init+0x0/0x90 returned 0 after 2 usecs
[   16.623502] calling  eeepc_laptop_init+0x0/0x5a @ 1
[   16.628496] usb 1-1: new high-speed USB device number 2 using ehci-pci
[   16.635282] initcall eeepc_laptop_init+0x0/0x5a returned -19 after 6680 usecs
[   16.642407] calling  sock_diag_init+0x0/0x12 @ 1
[   16.647105] initcall sock_diag_init+0x0/0x12 returned 0 after 16 usecs
[   16.653675] calling  flow_cache_init_global+0x0/0x19a @ 1
[   16.659149] initcall flow_cache_init_global+0x0/0x19a returned 0 after 20 usecs
[   16.666496] calling  llc_init+0x0/0x20 @ 1
[   16.670657] initcall llc_init+0x0/0x20 returned 0 after 0 usecs
[   16.676635] calling  snap_init+0x0/0x38 @ 1
[   16.680883] initcall snap_init+0x0/0x38 returned 0 after 1 usecs
[   16.686949] calling  blackhole_module_init+0x0/0x12 @ 1
[   16.692235] initcall blackhole_module_init+0x0/0x12 returned 0 after 0 usecs
[   16.699342] calling  nfnetlink_init+0x0/0x59 @ 1
[   16.704022] Netfilter messages via NETLINK v0.30.
[   16.708803] initcall nfnetlink_init+0x0/0x59 returned 0 after 4668 usecs
[   16.715548] calling  nfnetlink_log_init+0x0/0xb6 @ 1
[   16.720585] initcall nfnetlink_log_init+0x0/0xb6 returned 0 after 10 usecs
[   16.727509] calling  nf_conntrack_standalone_init+0x0/0x82 @ 1
[   16.733401] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   16.739653] initcall nf_conntrack_standalone_init+0x0/0x82 returned 0 after 6104 usecs
[   16.747556] calling  ctnetlink_init+0x0/0xa4 @ 1
[   16.752232] ctnetlink v0.93: registering with nfnetlink.
[   16.757606] initcall ctnetlink_init+0x0/0xa4 returned 0 after 5247 usecs
[   16.764367] calling  nf_conntrack_ftp_init+0x0/0x1ca @ 1
[   16.769744] initcall nf_conntrack_ftp_init+0x0/0x1ca returned 0 after 4 usecs
[   16.776933] calling  nf_conntrack_irc_init+0x0/0x173 @ 1
[   16.782309] initcall nf_conntrack_irc_init+0x0/0x173 returned 0 after 3 usecs
[   16.789498] calling  nf_conntrack_sip_init+0x0/0x215 @ 1
[   16.794872] initcall nf_conntrack_sip_init+0x0/0x215 returned 0 after 0 usecs
[   16.802064] calling  xt_init+0x0/0x118 @ 1
[   16.806226] initcall xt_init+0x0/0x118 returned 0 after 2 usecs
[   16.812204] calling  tcpudp_mt_init+0x0/0x17 @ 1
[   16.816884] initcall tcpudp_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.823384] calling  connsecmark_tg_init+0x0/0x12 @ 1
[   16.828497] initcall connsecmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.835431] calling  nflog_tg_init+0x0/0x12 @ 1
[   16.840024] initcall nflog_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.846436] calling  secmark_tg_init+0x0/0x12 @ 1
[   16.851204] initcall secmark_tg_init+0x0/0x12 returned 0 after 0 usecs
[   16.857788] calling  tcpmss_tg_init+0x0/0x17 @ 1
[   16.862468] initcall tcpmss_tg_init+0x0/0x17 returned 0 after 0 usecs
[   16.868969] calling  conntrack_mt_init+0x0/0x17 @ 1
[   16.873910] initcall conntrack_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.880667] calling  policy_mt_init+0x0/0x17 @ 1
[   16.885349] initcall policy_mt_init+0x0/0x17 returned 0 after 0 usecs
[   16.891849] calling  state_mt_init+0x0/0x12 @ 1
[   16.896442] initcall state_mt_init+0x0/0x12 returned 0 after 0 usecs
[   16.902856] calling  sysctl_ipv4_init+0x0/0x92 @ 1
[   16.907735] initcall sysctl_ipv4_init+0x0/0x92 returned 0 after 26 usecs
[   16.914468] calling  tunnel4_init+0x0/0x72 @ 1
[   16.918974] initcall tunnel4_init+0x0/0x72 returned 0 after 0 usecs
[   16.925300] calling  ipv4_netfilter_init+0x0/0x12 @ 1
[   16.930415] initcall ipv4_netfilter_init+0x0/0x12 returned 0 after 0 usecs
[   16.937348] calling  nf_conntrack_l3proto_ipv4_init+0x0/0x17c @ 1
[   16.943607] initcall nf_conntrack_l3proto_ipv4_init+0x0/0x17c returned 0 after 101 usecs
[   16.951682] calling  nf_defrag_init+0x0/0x17 @ 1
[   16.956362] initcall nf_defrag_init+0x0/0x17 returned 0 after 0 usecs
[   16.962861] calling  ip_tables_init+0x0/0xaa @ 1
[   16.967554] ip_tables: (C) 2000-2006 Netfilter Core Team
[   16.972915] initcall ip_tables_init+0x0/0xaa returned 0 after 5247 usecs
[   16.979675] calling  iptable_filter_init+0x0/0x51 @ 1
[   16.984810] initcall iptable_filter_init+0x0/0x51 returned 0 after 22 usecs
[   16.991808] calling  iptable_mangle_init+0x0/0x51 @ 1
[   16.996940] initcall iptable_mangle_init+0x0/0x51 returned 0 after 18 usecs
[   17.003940] calling  reject_tg_init+0x0/0x12 @ 1
[   17.008621] initcall reject_tg_init+0x0/0x12 returned 0 after 0 usecs
[   17.015120] calling  ulog_tg_init+0x0/0x85 @ 1
[   17.019643] initcall ulog_tg_init+0x0/0x85 returned 0 after 16 usecs
[   17.026041] calling  cubictcp_register+0x0/0x5c @ 1
[   17.030979] TCP: cubic registered
[   17.034359] initcall cubictcp_register+0x0/0x5c returned 0 after 3300 usecs
[   17.041379] calling  xfrm_user_init+0x0/0x4a @ 1
[   17.046059] Initializing XFRM netlink socket
[   17.050407] initcall xfrm_user_init+0x0/0x4a returned 0 after 4245 usecs
[   17.057153] calling  inet6_init+0x0/0x370 @ 1
[   17.061652] NET: Registered protocol family 10
[   17.066434] initcall inet6_init+0x0/0x370 returned 0 after 4746 usecs
[   17.072866] calling  ah6_init+0x0/0x79 @ 1
[   17.077027] initcall ah6_init+0x0/0x79 returned 0 after 0 usecs
[   17.083005] calling  esp6_init+0x0/0x79 @ 1
[   17.087253] initcall esp6_init+0x0/0x79 returned 0 after 0 usecs
[   17.093317] calling  xfrm6_transport_init+0x0/0x17 @ 1
[   17.098518] initcall xfrm6_transport_init+0x0/0x17 returned 0 after 0 usecs
[   17.105537] calling  xfrm6_mode_tunnel_init+0x0/0x17 @ 1
[   17.110910] initcall xfrm6_mode_tunnel_init+0x0/0x17 returned 0 after 0 usecs
[   17.118103] calling  xfrm6_beet_init+0x0/0x17 @ 1
[   17.122870] initcall xfrm6_beet_init+0x0/0x17 returned 0 after 0 usecs
[   17.129456] calling  ip6_tables_init+0x0/0xaa @ 1
[   17.134237] ip6_tables: (C) 2000-2006 Netfilter Core Team
[   17.139684] initcall ip6_tables_init+0x0/0xaa returned 0 after 5332 usecs
[   17.146531] calling  ip6table_filter_init+0x0/0x51 @ 1
[   17.151826] initcall ip6table_filter_init+0x0/0x51 returned 0 after 93 usecs
[   17.158863] calling  ip6table_mangle_init+0x0/0x51 @ 1
[   17.164104] initcall ip6table_mangle_init+0x0/0x51 returned 0 after 40 usecs
[   17.171169] calling  nf_conntrack_l3proto_ipv6_init+0x0/0x154 @ 1
[   17.177331] initcall nf_conntrack_l3proto_ipv6_init+0x0/0x154 returned 0 after 8 usecs
[   17.185295] calling  nf_defrag_init+0x0/0x54 @ 1
[   17.189984] initcall nf_defrag_init+0x0/0x54 returned 0 after 9 usecs
[   17.196475] calling  ipv6header_mt6_init+0x0/0x12 @ 1
[   17.201587] initcall ipv6header_mt6_init+0x0/0x12 returned 0 after 0 usecs
[   17.208520] calling  reject_tg6_init+0x0/0x12 @ 1
[   17.213287] initcall reject_tg6_init+0x0/0x12 returned 0 after 0 usecs
[   17.219876] calling  sit_init+0x0/0xcf @ 1
[   17.224033] sit: IPv6 over IPv4 tunneling driver
[   17.229602] initcall sit_init+0x0/0xcf returned 0 after 5436 usecs
[   17.235772] calling  packet_init+0x0/0x47 @ 1
[   17.240190] NET: Registered protocol family 17
[   17.244704] initcall packet_init+0x0/0x47 returned 0 after 4407 usecs
[   17.251195] calling  br_init+0x0/0xa2 @ 1
[   17.255286] initcall br_init+0x0/0xa2 returned 0 after 17 usecs
[   17.261247] calling  init_rpcsec_gss+0x0/0x64 @ 1
[   17.266053] initcall init_rpcsec_gss+0x0/0x64 returned 0 after 38 usecs
[   17.272688] calling  dcbnl_init+0x0/0x4d @ 1
[   17.277020] initcall dcbnl_init+0x0/0x4d returned 0 after 0 usecs
[   17.283174] calling  init_dns_resolver+0x0/0xe1 @ 1
[   17.288125] Key type dns_resolver registered
[   17.292446] initcall init_dns_resolver+0x0/0xe1 returned 0 after 4231 usecs
[   17.299468] calling  mcheck_init_device+0x0/0x123 @ 1
[   17.304928] initcall mcheck_init_device+0x0/0x123 returned 0 after 340 usecs
[   17.311987] calling  tboot_late_init+0x0/0x243 @ 1
[   17.316817] initcall tboot_late_init+0x0/0x243 returned 0 after 0 usecs
[   17.323491] calling  mcheck_debugfs_init+0x0/0x3c @ 1
[   17.328618] initcall mcheck_debugfs_init+0x0/0x3c returned 0 after 13 usecs
[   17.335622] calling  severities_debugfs_init+0x0/0x3c @ 1
[   17.341089] initcall severities_debugfs_init+0x0/0x3c returned 0 after 5 usecs
[   17.348363] calling  threshold_init_device+0x0/0x50 @ 1
[   17.353651] initcall threshold_init_device+0x0/0x50 returned 0 after 1 usecs
[   17.360755] calling  hpet_insert_resource+0x0/0x23 @ 1
[   17.365956] initcall hpet_insert_resource+0x0/0x23 returned 0 after 0 usecs
[   17.372975] calling  update_mp_table+0x0/0x56d @ 1
[   17.377829] initcall update_mp_table+0x0/0x56d returned 0 after 0 usecs
[   17.384502] calling  lapic_insert_resource+0x0/0x3f @ 1
[   17.389788] initcall lapic_insert_resource+0x0/0x3f returned 0 after 0 usecs
[   17.396893] calling  io_apic_bug_finalize+0x0/0x1b @ 1
[   17.402094] initcall io_apic_bug_finalize+0x0/0x1b returned 0 after 0 usecs
[   17.409113] calling  print_ICs+0x0/0x456 @ 1
[   17.413447] initcall print_ICs+0x0/0x456 returned 0 after 0 usecs
[   17.419600] calling  check_early_ioremap_leak+0x0/0x65 @ 1
[   17.425148] initcall check_early_ioremap_leak+0x0/0x65 returned 0 after 0 usecs
[   17.432513] calling  pat_memtype_list_init+0x0/0x32 @ 1
[   17.437799] initcall pat_memtype_list_init+0x0/0x32 returned 0 after 0 usecs
[   17.444909] calling  init_oops_id+0x0/0x40 @ 1
[   17.449413] initcall init_oops_id+0x0/0x40 returned 0 after 1 usecs
[   17.455740] calling  pm_qos_power_init+0x0/0x7b @ 1
[   17.461016] initcall pm_qos_power_init+0x0/0x7b returned 0 after 330 usecs
[   17.467882] calling  pm_debugfs_init+0x0/0x24 @ 1
[   17.472655] initcall pm_debugfs_init+0x0/0x24 returned 0 after 6 usecs
[   17.479234] calling  printk_late_init+0x0/0x44 @ 1
[   17.484089] initcall printk_late_init+0x0/0x44 returned 0 after 0 usecs
[   17.490761] calling  tk_debug_sleep_time_init+0x0/0x3d @ 1
[   17.496311] initcall tk_debug_sleep_time_init+0x0/0x3d returned 0 after 5 usecs
[   17.503674] calling  debugfs_kprobe_init+0x0/0x90 @ 1
[   17.508803] initcall debugfs_kprobe_init+0x0/0x90 returned 0 after 16 usecs
[   17.515808] calling  taskstats_init+0x0/0x73 @ 1
[   17.520496] registered taskstats version 1
[   17.524646] initcall taskstats_init+0x0/0x73 returned 0 after 4062 usecs
[   17.531405] calling  clear_boot_tracer+0x0/0x2d @ 1
[   17.536347] initcall clear_boot_tracer+0x0/0x2d returned 0 after 0 usecs
[   17.543133] calling  kdb_ftrace_register+0x0/0x2f @ 1
[   17.548248] initcall kdb_ftrace_register+0x0/0x2f returned 0 after 1 usecs
[   17.555181] calling  max_swapfiles_check+0x0/0x8 @ 1
[   17.560205] initcall max_swapfiles_check+0x0/0x8 returned 0 after 0 usecs
[   17.567052] calling  set_recommended_min_free_kbytes+0x0/0xa0 @ 1
[   17.573206] initcall set_recommended_min_free_kbytes+0x0/0xa0 returned 0 after 0 usecs
[   17.581179] calling  kmemleak_late_init+0x0/0x93 @ 1
[   17.586234] kmemleak: Kernel memory leak detector initialized
[   17.592018] initcall kmemleak_late_init+0x0/0x93 returned 0 after 5676 usecs
[   17.599124] calling  init_root_keyring+0x0/0xb @ 1
[   17.603994] initcall init_root_keyring+0x0/0xb returned 0 after 19 usecs
[   17.610732] calling  fail_make_request_debugfs+0x0/0x2a @ 1
[   17.616405] initcall fail_make_request_debugfs+0x0/0x2a returned 0 after 38 usecs
[   17.623908] calling  prandom_reseed+0x0/0x47 @ 1
[   17.628589] initcall prandom_reseed+0x0/0x47 returned 0 after 2 usecs
[   17.635084] calling  pci_resource_alignment_sysfs_init+0x0/0x19 @ 1
[   17.641417] initcall pci_resource_alignment_sysfs_init+0x0/0x19 returned 0 after 5 usecs
[   17.649558] calling  pci_sysfs_init+0x0/0x51 @ 1
[   17.658402] initcall pci_sysfs_init+0x0/0x51 returned 0 after 4066 usecs
[   17.665089] calling  boot_wait_for_devices+0x0/_devices+0x0/0x30 returned 0 after 0 usecs
[   17.677482] calling  deferred_probe_initcall+0x0/0x70 @ 1
[   17.682970] kmemleak: Automatic memory scanning thread started
[   17.688949] initcall deferred_probe_initcall+0x0/0x70 returned 0 after 5859 usecs
[   17.696419] calling  late_resume_init+0x0/0x1d0 @ 1
[   17.701357]   Magic number: 14:268:431
[   17.705268] initcall late_resume_init+0x0/0x1d0 returned 0 after 3818 usecs
[   17.712217] calling  firmware_memmap_init+0x0/0x38 @ 1
[   17.717854] initcall firmware_memmap_init+0x0/0x38 returned 0 after 427 usecs
[   17.724973] calling  pci_mmcfg_late_insert_resources+0x0/0x50 @ 1
[   17.731127] initcall pci_mmcfg_late_insert_resources+0x0/0x50 returned 0 after 0 usecs
[   17.739098] calling  tcp_congestion_default+0x0/0x12 @ 1
[   17.744473] initcall tcp_congestion_default+0x0/0x12 returned 0 after 0 usecs
[   17.751666] calling  ip_auto_config+0x0/0xf1c @ 1
[   17.756437] initcall ip_auto_config+0x0/0xf1c returned 0 after 5 usecs
[   17.763019] calling  software_resume+0x0/0x290 @ 1
[   17.767870] PM: Hibernation image not present or could not be loaded.
[   17.774370] initcall software_resume+0x0/0x290 returned -2 after 6347 usecs
[   17.803642] async_waiting @ 1
[   17.806670] async_continuing @ 1 after 0 usec
[   17.811455] Freeing unused kernel memory: 1724K (ffffffff81cc1000 - ffffffff81e70000)
[   17.819273] Write protecting the kernel read-only data: 12288k
[   17.827733] Freeing unused kernel memory: 1244K (ffff8800016c9000 - ffff880001800000)
[   17.836066] Freeing unused kernel memory: 1912K (ffff880001a22000 - ffff880001c00000)
[   17.844303] usb 1-1: New USB device found, idVendor=8087, idProduct=8008
[   17.850993] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
init started: BusyBox v1.14.3 (2014-01-20 09:47:53 EST)
Mounting directories  [  OK  ]
[   17.874834] mount (1493) used greatest stack depth: 5032 bytes left
[   17.885031] hub 1-1:1.0: USB hub found
[   17.889174] hub 1-1:1.0: 6 ports detected
mount: mount point /proc/bus/usb does not exist
[   18.004919] calling  privcmd_init+0x0/0x1000 [xen_privcmd] @ 1522
[   18.igh-speed USB device number 2 using ehci-pci
[   18.018407] initcall privcmd_init+0x0/0x1000 [xen_privcmd] returned 0 after 7225 usecs
[   18.027061] calling  xenfs_init+0x0/0x1000 [xenfs] @ 1522
[   18.032455] initcall xenfs_init+0x0/0x1000 [xenfs] returned 0 after 1 usecs
mount: mount point /sys/kernel/config does not exist
[   18.058095] calling  xenkbd_init+0x0/0x1000 [xen_kbdfront] @ 1533
[   18.064182] initcall xenkbd_init+0x0/0x1000 [xen_kbdfr
FATAL: Error inserting xen_kbdfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/input/misc/xen-kbdfront.ko): No such device
[   18.089461] calling  xenfb_init+0x0/0x1000 [xen_fbfront] @ 1536
[   18.095373] initcall xenfb_init+0x0/0x1000 [xen_fbfront] returned -19 after 0 usecs
FATAL: Error inserting xen_fbfront (/lib/modules/3.13.0upstream-02502-gec513b1/kernel/drivers/video/xen-fbfront.ko): No such device
[   18.116848] calling  netif_init+0x0/0x1000 [xen_netfront] @ 1543
[   18.122846] xen_netfront: Initialising Xen virtual ethernet driver
[   18.129212] initcall netif_init+0x0/0x1000 [xen_netfront] returned 0 after 6214 usecs
[   18.139187] calling  xlblk_init+0x0/0x1000 [xen_blkfront] @ 1546
[   18.145333] initcall xlblk_init+0x0/0x1000 [xen_blkfron
[   18.185401] hub 2-1:1.0: USB hub found
[   18.189869] hub 2-1:1.0: 8 ports detected
[   18.194764] ud
[   18.261447] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1560
[   18.268093] initcall acpi_cpufreq_init+0x0/0x100req] returned -19 after 40 usecs
[   18.283502] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1570
[   18.290114] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.312807] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1588
[   18.319424] initcall acpi_cpufreq_init+0x0/0x10033027] calling  acpi_video_init+0x0/0xfee [video] @ 1596
[   18.338782] initcall acpi_video_init+0x0/0xfee [video] returned 0 after 11 usecs
[   18.364612] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1564
[   18.371230] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 6 usecs
[   18.385187] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1572
[   18.391802] initcall acpi_cpufreq_init+0x0/0x100ling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1590
[   18.418534] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.468430] calling  init_scsi+0x0/0x91 [scsi_mod] @ 1781
[   18.478012] SCSI subsystem initialized
[   18.481761] initcall init_scsi+0x0/0x91 [scsi_mod] returned 0 after 7740 usecs
calling  igb_init_module+0x0/0x1000 [igb] @ 1790
[   18.497783] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
[   18.504795] igb: Copyright (c) 2007-2013 Intel Corporation.
[   18.510730] xen: registering gsi 17 triggering 0 polarity 1
[   18.516292] Already setup the GSI :17
[   18.521923] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1574
[   18.528538] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 4 usecs
[   18.538904] calling  sas_transport_init+0x0/0x1000 [scsi_transport_sas] @ 1781
[   18.563468] calling  drm_fb_helper_modini
[   18.581027] calling  e1000_init_module+0x0/0x1000 [e1000e] @ 1812
[   18.587120] e1000e: Intel(R) PRO/1000 Network Driver -upt Throttling Rate (ints/sec) set to dynamic conservative mode
[   18.632599] calling  fb_console_init+0x0/0x1000 [fbcon] @ 1825
[   18.638665] calling  acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] @ 1591
[   18.638675] initcall acpi_cpufreq_init+0x0/0x1000 [acpi_cpufreq] returned -19 after 7 usecs
[   18.639773] initcall drm_fb_helper_modinit+0x0/0x1000 [drm_kms_helper] returned 0 after 67545 usecs
[   18.676560] initcall fb_console_init+0x0/0x1000 [fbcon] returned 0 after 37245 usecs
[   18.689751] initcall sas_transport_init+0x0/0x1000 [scsi_transport_sas] returned 0 after 140267 usecs
[   18.731789] calling  i915_init+0x0/0x68 [i915] @ 1809
[   18.752037] calling  ata_init+0x0/0x4ce [libata] @ 1888
[   18.758928] calling  fusion_init+0x0/0x1000 [mptbase] @ 1781
[   18.773708] initcall fusion_init+0x0/0x1000 [mptbase] returned 0 after 8916 usecs
[   18.795463] libata version 3.00 loaded.
[   18.799303] initcall ata_init+0x0/0x4ce [libata] returned 0 after 41054 usecs
[   18.819464] calling  mptsas_init+0x0/0x1000 [mptsas] @ 1781
[   18.825031] Fusion MPT SAS Host driver 3.04.20
[   18.8298
[   18.855258] mptbase: ioc0: Initiating bringup
[   18.866860] igb 0000:02:00.0: added PHC on eth0
[   18.871386] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection
[   18.878317] igb 0000:02:00.0: eth0: (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   18.885507] igb 0000:02:00.0: eth0: PBA No: Unknown
[   18.890446] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   18.898420] xen: registering gsi 18 triggering 0 polarity 1
[   18.903986] Already setup the GSI :18
udevd-work[1608]: error opening ATTR{/sys/devices/system/cpu/cpu0/online} for writing: Permission denied

[   19.001838] e1000e 0000:00:19.0 eth1: registered PHC clock
[   19.007319] e1000e 0000:00:19.0 eth1: (PCI Express:2.5GT/s:Wi00:25:90:86:be:f0
[   19.015296] e1000e 0000:00:19.0 eth1: Intel(R) PRO/1000 Network Connection
[   19.022248] e1000e 0000:00:19.0 eth1: MAC: 11, PHY: 12, PBA No: 0100FF-0FF
[   19.029467] xen: registering gsi 16 triggering 0 polarity 1
[   19.035035] Already setup the GSI :16
[   19.038934] e1000e 0000:04:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   19.049535] xen: registering gsi 16 triggering 0 polarity 1
[   19.055104] Already setup the GSI :16
[   19.093810] [drm] Memory usable by graphics device = 2048M
[   19.131868] Failed to add WC MTRR for [00000000e0000000-00000000efffffff]; performance may suffer.
[   19.158279] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[   19.165142] [drm] Driver supports precise vbl changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[   19.192124] igb 0000:02:00.1: added PHC on eth2
[   19.196666] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connecti eth2: PBA No: Unknown
[   19.215720] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   19.223716] xen: registering gsi 19 triggering 0 polarity 1
[   19.229300] Already setup the GSI :19
(XEN) [2014-01-22 12:27:07] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-22 12:27:07] CPU:    0
(XEN0000000000000
(XEN) [2014-01-22 12:27:07] rdx: 00000000f1e80000   rsi: 0000000000000200   rdi: ffff82d080281f20
(XEN) [2014-01-22 12:27:07] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08   r8:  000000000000001c
(XEN) [2014-01-22 12:27:07] r9:  00000000ffffffff   r10: ffff82d080238f20   r11: 0000000000000202
(XEN) [2014-01-22 12:27:07] r12: 0000000000000000   r13: ffff83023f65db70   r14: ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-22 12:27:07] cr3: 000000021db62000   cr2: 0000000000000004
(XEN) [2014-01-22 12:27:07] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-22 12:27:07] Xen stack trace from rsp=ffff82d0802cfc08:
(XEN) [2014-01-22 12:27:07]    000000050004fc38 ffff82d0802cfd88 00000072043a6340 80050070ffffffff
(XEN) [2014-01-22 12:27:07]    0000000000000000 0000000000000000 0000000000000005 0000000000000070
(XEN) [2014-01-22 12:27:07]    0000000500000000 0000000000000000 00000000f1e80000 ffff82d000000005
(XEN) [2014-01-22 12:27:07]    ffff82d000000003 80050070117fbb70 ffff82d0802cfe98 ffff82d0802cfe98
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd88 ffff83023946e700 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd28 ffff82d080168987 0000000000000246 ffff82d0802cfcd8
(XEN) [2014-01-22 12:27:07]    ffff82d080129d68 0000000000000000 ffff82d0802cfd28 ffff82d0801473d9
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfd18 ffff8302337fbb70 000000000000010c ffff830233748000
(XEN) [2014-01-22 12:27:07]    000000000000010c 0000000000000025 00000000ffffffed ffff830239402500
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfdc8 ffff82d08016c65c ffff83023f65db00 000000000000010c
(XEN) [2014-01-22 12:27:07]    000000000000010c ffff8302337480e0 ffff82d0802cfd98 ffff82d0801047ed
(XEN) [2014-01-22 12:27:07]    0000010c01402500 ffff82d0802cfe98 ffff8302337480e0 ffff83023946e700
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff83023f65db00 ffff82d0802cfdc8 ffff830233748000
(XEN) [2014-01-22 12:27:07]    00000000fffffffd 0000000000000000 ffff82d0802cfe98 ffff82d0802cfe70
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe48 ffff82d08017f104 ffff82d0802cff18 ffffffff8154ea06
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfe98 ffff8302337480b8 ffff82d00000010c ffff82d08018bcb0
(XEN) [2014-01-22 12:27:07]    000000250000f800 ffff82d0802cfe74 ffff820040005000 000000000000000d
(XEN) [2014-01-22 12:27:07]    ffff88006ca859b8 ffff8300b7313000 ffff88006c35cc00 0000000000000000
(XEN) [2014-01-22 12:27:07]    ffff82d0802cfef8 ffff82d08017f814 0000000000000000 0000000700000004
(XEN) [2014-01-22 12:27:07]    0000000000007ff0 ffffffffffffffff 0000000000000005 0000000000000000
(XEN) [2014-01-22 12:27:07] Xen call trace:
(XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
(XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
(XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
(XEN) [2014-01-22 12:27:07]  L4[0x000] = 000000021db66067 000000000006cb75
(XEN) [2014-01-22 12:27:07]  L3[0x000] = 000000021db65067 000000000006cb76
(XEN) [2014-01-22 12:27:07]  L2[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07] Panic on CPU 0:
(XEN) [2014-01-22 12:27:07] FATAL PAGE FAULT
(XEN) [2014-01-22 12:27:07] [error_code=0000]
(XEN) [2014-01-22 12:27:07] Faulting linear address: 0000000000000004
(XEN) [2014-01-22 12:27:07] ****************************************
(XEN) [2014-01-22 12:27:07]
(XEN) [2014-01-22 12:27:07] Manual reset required ('noreboot' specified)

--NzB8fVQJ5HfG6fxh
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--NzB8fVQJ5HfG6fxh--


From xen-devel-bounces@lists.xen.org Wed Jan 22 04:35:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 04:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5pXd-0007cm-Oa; Wed, 22 Jan 2014 04:35:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5pXc-0007cg-3O
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 04:35:52 +0000
Received: from [85.158.139.211:56662] by server-12.bemta-5.messagelabs.com id
	CF/E6-30017-7AA4FD25; Wed, 22 Jan 2014 04:35:51 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390365349!11183421!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3843 invoked from network); 22 Jan 2014 04:35:50 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 04:35:50 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0M4ZkuO023693
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 04:35:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0M4ZkZJ014258
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 04:35:46 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M4Zjg0025654; Wed, 22 Jan 2014 04:35:45 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 20:35:45 -0800
Date: Tue, 21 Jan 2014 23:35:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Philip Wernersbach <philip.wernersbach@gmail.com>
Message-ID: <20140122043542.GB9931@konrad-lan.dumpdata.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
	<CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
	<20140121213117.GA6003@phenom.dumpdata.com>
	<CAO5Rg12uVx8C1rnFqYb+LngUZDge2AkT7aH8WP-6EAf5nW-RTQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAO5Rg12uVx8C1rnFqYb+LngUZDge2AkT7aH8WP-6EAf5nW-RTQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: daniel.kiper@oracle.com, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 05:09:10PM -0500, Philip Wernersbach wrote:
> On Tue, Jan 21, 2014 at 4:31 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > Could you use the Linux patches that use Xen's EFI services? Ie,
> > the DOM0 EFI ones?
> >
> > They were posted on the mailing list some time ago (last year, November-ish?)
> 
> That is an option, but I can't find the patches (a thorough search of
> Google yields nothing). The other issue is that unless those were

http://www.gossamer-threads.com/lists/xen/devel/294362

is my response.

> merged into the kernel by version 3.12.8, we would have to roll our
> own kernel (3.12.8 is the newest kernel that Debian supplies).

Right, but you are already rolling your own hypervisor. With those patches
you wouldn't need to roll your own hypervisor; you could just rebuild
the kernel instead.


From xen-devel-bounces@lists.xen.org Wed Jan 22 05:03:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 05:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5pyP-0000ft-3L; Wed, 22 Jan 2014 05:03:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W5pyN-0000fo-9M
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 05:03:31 +0000
Received: from [85.158.143.35:37033] by server-3.bemta-4.messagelabs.com id
	EC/42-32360-2215FD25; Wed, 22 Jan 2014 05:03:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390367007!10532326!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24470 invoked from network); 22 Jan 2014 05:03:29 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 05:03:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0M52KBP014373
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 05:02:21 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M52JAv027329
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 05:02:19 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0M52Jks010617; Wed, 22 Jan 2014 05:02:19 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 21 Jan 2014 21:02:18 -0800
Date: Wed, 22 Jan 2014 00:02:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Steven Noonan <steven@uplinklabs.net>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
	Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140122050215.GC9931@konrad-lan.dumpdata.com>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140122032045.GA22182@falcon.amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@suse.de>, Alex Thorlton <athorlton@sgi.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
> > On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
> > <gregkh@linuxfoundation.org> wrote:

Adding extra folks to the party.
> > >
> > > Odds are this also shows up in 3.13, right?
> 
> Reproduced using 3.13 on the PV guest:
> 
> 	[  368.756763] BUG: Bad page map in process mp  pte:80000004a67c6165 pmd:e9b706067
> 	[  368.756777] page:ffffea001299f180 count:0 mapcount:-1 mapping:          (null) index:0x0
> 	[  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
> 	[  368.756786] addr:00007fd1388b7000 vm_flags:00100071 anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
> 	[  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2 #1
> 	[  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0 ffffffff814d8748 00007fd1388b7000
> 	[  368.756803]  ffff880e9eaf3d08 ffffffff8116d289 0000000000000000 0000000000000000
> 	[  368.756809]  ffff880e9b7065b8 ffffea001299f180 00007fd1388b8000 ffff880e9eaf3e30
> 	[  368.756815] Call Trace:
> 	[  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
> 	[  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> 	[  368.756837]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> 	[  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> 	[  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
> 	[  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
> 	[  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
> 	[  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
> 	[  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
> 	[  368.756869]  [<ffffffff814e70ed>] system_call_fastpath+0x1a/0x1f
> 	[  368.756872] Disabling lock debugging due to kernel taint
> 	[  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:0 val:-1
> 	[  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:1 val:1
> 
> > 
> > Probably. I don't have a Xen PV setup to test with (and very little
> > interest in setting one up). And I have a suspicion that it might not
> > be so much about Xen PV, as perhaps about the kind of hardware.
> > 
> > I suspect the issue has something to do with the magic _PAGE_NUMA
> > tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
> > removing the _PAGE_PRESENT bit, and now the crazy numa code is
> > confused.
> > 
> > The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
> > bit with _PAGE_PROTNONE, which is why it then has that tie-in to
> > _PAGE_PRESENT.
> > 
> > Adding Andrea to the Cc, because he's the author of that horridness.
> > Putting Steven's test-case here as an attachment for Andrea, maybe
> > that makes him go "Ahh, yes, silly case".
> > 
> > Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
> > 
> > Andrea, you can find the thread on lkml, but it boils down to commit
> > 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
> > attached test-case (but apparently only under Xen PV). There it
> > apparently causes a "BUG: Bad page map .." error.

I *think* it is due to the fact that pmd_numa and pte_numa are reading
the _raw_ values of PMDs and PTEs. That is, they do not use the pvops
interface and instead read the values directly from the page-table.
Since the page-table is also manipulated by the hypervisor, there are
certain flags it sets to do its business. It might be that it sets
_PAGE_GLOBAL as well, and Linux picks up on that. If they were using
pte_val, that would invoke the pvops interface.

Elena, Dariof and George, you have been looking at this a bit more
deeply than I have. Does the Xen hypervisor use _PAGE_GLOBAL for PV
guests?

This not-compiled, totally-bad patch might shed some light on what I
was thinking _could_ fix this issue; it IS NOT A FIX, JUST A HACK.
Naturally it does not fix the PMD case (as there are no PMD paravirt
ops for that).

The other question is: how is AutoNUMA running when it is not enabled?
Shouldn't those _PAGE_NUMA ops be no-ops when AutoNUMA hasn't even
been turned on?


diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..9fa7088 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -370,12 +370,15 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 		unsigned long pfn = mfn_to_pfn(mfn);
 
 		pteval_t flags = val & PTE_FLAGS_MASK;
+		/* No AutoNUMA for PV. TODO: if Linux sees the PTE with
+		 * said bit set, just ignore it. */
+		if (flags & _PAGE_NUMA)
+			flags &= ~_PAGE_NUMA;
 		if (unlikely(pfn == ~0))
 			val = flags & ~_PAGE_PRESENT;
 		else
 			val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
 	}
-
 	return val;
 }
 
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index db09234..a8bc07d 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -644,7 +644,7 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
 #ifndef pte_numa
 static inline int pte_numa(pte_t pte)
 {
-	return (pte_flags(pte) &
+	return (pte_val(pte) &
 		(_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
 }
 #endif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 05:32:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 05:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5qQW-0001ws-Iu; Wed, 22 Jan 2014 05:32:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W5qQU-0001wn-Pp
	for xen-devel@lists.xensource.com; Wed, 22 Jan 2014 05:32:35 +0000
Received: from [85.158.139.211:47170] by server-6.bemta-5.messagelabs.com id
	FA/22-16310-2F75FD25; Wed, 22 Jan 2014 05:32:34 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390368750!876887!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5475 invoked from network); 22 Jan 2014 05:32:32 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 05:32:32 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Tue, 21 Jan 2014 22:32:24 -0700
Message-ID: <52DF57E2.2090602@suse.com>
Date: Tue, 21 Jan 2014 22:32:18 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
In-Reply-To: <21214.37402.648941.864060@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> I let this run over the weekend and today noticed libvirtd was deadlocked
>>     
>
> I have just retested xl with:
>   * my 3-patch 4.4 fixes series
>   * v2 of my fork series
>   * the extra mutex patch "libxl: fork: Fixup SIGCHLD sharing"
>   * "13/12" and "14/12" just posted
> and it WFM.
>
> Of course I don't have the same setup as Jim.
>
> Jim: if it's not too much trouble, I'd appreciate it if you could try
> that combination.
>
> For your convenience you can find a git branch of it at
>   http://xenbits.xen.org/gitweb/?p=people/iwj/xen.git;a=shortlog;h=refs/tags/wip.enumerate-pids-v2.1
> aka
>   git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v2.1
>   

I've been testing this branch and noticed an occasional libvirtd
segfault that always occurs when calling libxl_domain_create_restore().
By occasional, I mean my save/restore script might cause the segfault
after 2 iterations, or 20 iterations, or ... But the segfault always
occurs in libxl_domain_create_restore().

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffeef59700 (LWP 12083)]
0x00007ffff74577ef in virObjectIsClass (anyobj=0x2f302f6e69616d6f, klass=0x5555558a1310)
    at util/virobject.c:362
362         return virClassIsDerivedFrom(obj->klass, klass);
(gdb) bt
#0  0x00007ffff74577ef in virObjectIsClass (anyobj=0x2f302f6e69616d6f, klass=0x5555558a1310)
    at util/virobject.c:362
#1  0x00007ffff745765b in virObjectLock (anyobj=0x2f302f6e69616d6f) at util/virobject.c:314
#2  0x00007fffe993cc96 in libxlDomainObjTimeoutModifyEventHook (priv=0x5555558fc310,
    hndp=0x5555559e5d88, abs_t=...) at libxl/libxl_domain.c:302
#3  0x00007fffe96f8fed in time_deregister (gc=0x7fffeef58220, ev=0x5555559eee48)
    at libxl_event.c:294
#4  0x00007fffe96facfd in afterpoll_internal (egc=0x7fffeef58220, poller=0x5555559a4c70,
    nfds=3, fds=0x5555559c09d0, now=...) at libxl_event.c:1008
#5  0x00007fffe96fc312 in eventloop_iteration (egc=0x7fffeef58220, poller=0x5555559a4c70)
    at libxl_event.c:1455
#6  0x00007fffe96fce58 in libxl__ao_inprogress (ao=0x5555559e9690,
    file=0x7fffe970fadb "libxl_create.c", line=1356,
    func=0x7fffe97105f0 <__func__.16344> "do_domain_create") at libxl_event.c:1700
#7  0x00007fffe96d711f in do_domain_create (ctx=0x5555559d9fa0, d_config=0x7fffeef58490,
    domid=0x7fffeef5840c, restore_fd=89, checkpointed_stream=0, ao_how=0x0,
    aop_console_how=0x0) at libxl_create.c:1356
#8  0x00007fffe96d7238 in libxl_domain_create_restore (ctx=0x5555559d9fa0,
    d_config=0x7fffeef58490, domid=0x7fffeef5840c, restore_fd=89, params=0x7fffeef58400,
    ao_how=0x0, aop_console_how=0x0) at libxl_create.c:1387
#...
(gdb) f 2
#2  0x00007fffe993cc96 in libxlDomainObjTimeoutModifyEventHook (priv=0x5555558fc310,
    hndp=0x5555559e5d88, abs_t=...) at libxl/libxl_domain.c:302
302         virObjectLock(info->priv);
(gdb) p info->priv
$3 = (libxlDomainObjPrivatePtr) 0x2f302f6e69616d6f
(gdb) f 9
#9  0x00007fffe993f2c7 in libxlVmStart (driver=0x5555558c2e50, vm=0x5555558e6a50,
    start_paused=false, restore_fd=89) at libxl/libxl_driver.c:635
635             res = libxl_domain_create_restore(priv->ctx, &d_config, &domid,
(gdb) p priv
$2 = (libxlDomainObjPrivatePtr) 0x5555558fc310

It looks like the libxlDomainObjPrivatePtr, stashed as part of
for_app_registration_out when registering the timeout, has been
trampled.  Not sure if the problem is in libvirt or libxl, but it is
late here and I'm calling it a night :).

Regards,
Jim



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 06:24:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 06:24:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5rE8-00041I-2X; Wed, 22 Jan 2014 06:23:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5rE6-00041D-0g
	for xen-devel@lists.xensource.com; Wed, 22 Jan 2014 06:23:50 +0000
Received: from [85.158.139.211:3605] by server-7.bemta-5.messagelabs.com id
	03/FF-04824-5F36FD25; Wed, 22 Jan 2014 06:23:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390371826!11193480!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16962 invoked from network); 22 Jan 2014 06:23:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 06:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93139826"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 06:23:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 01:23:45 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W5rE0-0004Nv-KY;
	Wed, 22 Jan 2014 06:23:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W5rE0-0005JG-Gn;
	Wed, 22 Jan 2014 06:23:44 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24456-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 06:23:44 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24456: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24456 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24456/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24453
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail pass in 24453
 test-armhf-armhf-xl           3 host-install(3)  broken in 24453 pass in 24456
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore fail in 24453 pass in 24456

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl         4 capture-logs(4) broken in 24453 blocked in 24456

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24453 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24453 never pass

version targeted for testing:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
baseline version:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 06:58:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 06:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5rlM-0005HU-NM; Wed, 22 Jan 2014 06:58:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W5rlL-0005HP-5T
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 06:58:11 +0000
Received: from [85.158.139.211:50797] by server-3.bemta-5.messagelabs.com id
	F2/72-04773-10C6FD25; Wed, 22 Jan 2014 06:58:09 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390373887!886150!1
X-Originating-IP: [209.85.192.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11490 invoked from network); 22 Jan 2014 06:58:09 -0000
Received: from mail-pd0-f171.google.com (HELO mail-pd0-f171.google.com)
	(209.85.192.171)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 06:58:09 -0000
Received: by mail-pd0-f171.google.com with SMTP id g10so8524528pdj.2
	for <xen-devel@lists.xenproject.org>;
	Tue, 21 Jan 2014 22:58:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=f4PTXTbLN9NWBY7e1AU36Y2kkpff8nH4yb432LLyxYk=;
	b=U/bmgb8bfvNoa+s+J5KAJPOWtzxGY/4Jal3FydFQLY1/UZRyuRbtOldL0jIDy/I0zP
	skI+rVg8Psgcyvog/GKb6wEkNkm5fijjMhuJnJveBvxaVOQLZ2EdYwhMYoFJ9Pd7Rwca
	6/pLBsd/JKEoqDbePPJl/IXLYCLW1pCr4qZ67rfwc29JVvUSDlPM2HtsUZiYzthW6WpY
	RDhdYCIxtVcrV6qoir1PAiPnJPyoBWLU34kwl53NJowE8g651jrRcH5xUip2BGhUxTnT
	qWALAnkyhuZkfkTyVkS+WxM+djRZQpSf9N4SfEI8cSPZb6zKVl4L+GJ8dki4smDWjUFG
	FAIg==
X-Received: by 10.66.121.68 with SMTP id li4mr29866061pab.33.1390373887020;
	Tue, 21 Jan 2014 22:58:07 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	vn10sm19267169pbc.21.2014.01.21.22.57.52 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 22:58:06 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: konrad.wilk@oracle.com
Date: Wed, 22 Jan 2014 14:57:44 +0800
Message-Id: <1390373864-10525-1-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
Cc: james.dingwall@zynstra.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH] drivers: xen: deaggressive selfballoon driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current xen-selfballoon driver is too aggressive, which may cause the
OOM killer to be triggered more often. E.g. this bug reported by James:
https://lkml.org/lkml/2013/11/21/158

There are two main reasons:
1) The original goal_page didn't account for some pages used by kernel space,
like slab pages and pages used by device drivers.

2) The balloon driver may not give memory back to the guest OS fast enough
when the workload suddenly acquires a lot of physical memory.

In both cases, the guest OS will suffer from memory pressure and the OOM
killer may be triggered.

The fix makes the xen-selfballoon driver less aggressive by adding an extra
10% of total RAM pages to goal_page.
It's more valuable to keep the guest system reliable and responsive than to
balloon out these 10% of pages to Xen.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 drivers/xen/xen-selfballoon.c |   22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
index 21e18c1..745ad79 100644
--- a/drivers/xen/xen-selfballoon.c
+++ b/drivers/xen/xen-selfballoon.c
@@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
 #endif /* CONFIG_FRONTSWAP */
 
 #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
+#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
 
 /*
  * Use current balloon size, the goal (vm_committed_as), and hysteresis
@@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
 int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
 {
 	bool enable = false;
+	unsigned long reserve_pages;
 
 	if (!xen_domain())
 		return -ENODEV;
@@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
 	if (!enable)
 		return -ENODEV;
 
+	/*
+	 * Give selfballoon_reserved_mb a default value (10% of total RAM)
+	 * to make selfballooning less aggressive.
+	 *
+	 * There are two main reasons:
+	 * 1) The original goal_page didn't account for some pages used by
+	 *    kernel space, like slab pages and memory used by device drivers.
+	 *
+	 * 2) The balloon driver may not give memory back to the guest OS
+	 *    fast enough when the workload suddenly acquires a lot of memory.
+	 *
+	 * In both cases, the guest OS will suffer from memory pressure and
+	 * the OOM killer may be triggered.
+	 * By reserving an extra 10% of total RAM, we can keep the system
+	 * much more reliable and responsive in some cases.
+	 */
+	if (!selfballoon_reserved_mb) {
+		reserve_pages = totalram_pages / 10;
+		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
+	}
 	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
 
 	return 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 07:29:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 07:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5sFc-0007B2-Fw; Wed, 22 Jan 2014 07:29:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1W5sFa-0007Ax-EI
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 07:29:26 +0000
Received: from [85.158.139.211:64803] by server-3.bemta-5.messagelabs.com id
	B1/53-04773-5537FD25; Wed, 22 Jan 2014 07:29:25 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390375762!11195858!1
X-Originating-IP: [209.85.192.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30616 invoked from network); 22 Jan 2014 07:29:24 -0000
Received: from mail-pd0-f182.google.com (HELO mail-pd0-f182.google.com)
	(209.85.192.182)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 07:29:24 -0000
Received: by mail-pd0-f182.google.com with SMTP id v10so21256pde.13
	for <xen-devel@lists.xenproject.org>;
	Tue, 21 Jan 2014 23:29:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=fTOImRK885zP1B+kMf4521ngGqZILpfRZMcNMUT3UTk=;
	b=gtYqmodWFx1qkN2rudwsCSrBQBoXd5rEEHBGIA4YTtyYTYB33qN815y0+rJKxYeSnQ
	MI3VHMIl7D0buP7SMJbheogxwBHvgrfxoFkkippk+JkPttXyqH+2Tlz7I+T9/B0fQsRu
	VpoWdImXbcy3uJyFrXWPIwu437tyE+3JQkfZ17NJnX9cV+tGuhD/F2Aal/q6cLwc5MAE
	OawHGpStkTOd8eUpDJqLSsI2chZA3uo+X+vO9X7zsbEw2Ssg1QH8Z/UjBmYhiP8J8rNT
	3I+tSivBxTTden16/y5qq2juHnuPI/PPtcRuHl9zNjvJa5fqzpRaYc6zBPLc3Hf9C5iJ
	9u7w==
X-Gm-Message-State: ALoCoQn/XRDC2Oc6FjPQ/KVlXpHBJ7Ia9Vp6K42J1zJodgLUc9cMOVndzdcCOxpwvUdOMkmbZ4C4
X-Received: by 10.68.91.3 with SMTP id ca3mr77591pbb.20.1390375762479;
	Tue, 21 Jan 2014 23:29:22 -0800 (PST)
Received: from orcus.uplinklabs.net (c-71-231-56-34.hsd1.wa.comcast.net.
	[71.231.56.34]) by mx.google.com with ESMTPSA id
	sq7sm19576996pbc.19.2014.01.21.23.29.19 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 21 Jan 2014 23:29:21 -0800 (PST)
Date: Tue, 21 Jan 2014 23:29:14 -0800
From: Steven Noonan <steven@uplinklabs.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140122072914.GA9283@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140122050215.GC9931@konrad-lan.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>, george.dunlap@eu.citrix.com,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>, dario.faggioli@citrix.com,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	david.vrabel@citrix.com, Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
> > On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
> > > On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
> > > <gregkh@linuxfoundation.org> wrote:
> 
> Adding extra folks to the party.
> > > >
> > > > Odds are this also shows up in 3.13, right?
> > 
> > Reproduced using 3.13 on the PV guest:
> > 
> > 	[  368.756763] BUG: Bad page map in process mp  pte:80000004a67c6165 pmd:e9b706067
> > 	[  368.756777] page:ffffea001299f180 count:0 mapcount:-1 mapping:          (null) index:0x0
> > 	[  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
> > 	[  368.756786] addr:00007fd1388b7000 vm_flags:00100071 anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
> > 	[  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2 #1
> > 	[  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0 ffffffff814d8748 00007fd1388b7000
> > 	[  368.756803]  ffff880e9eaf3d08 ffffffff8116d289 0000000000000000 0000000000000000
> > 	[  368.756809]  ffff880e9b7065b8 ffffea001299f180 00007fd1388b8000 ffff880e9eaf3e30
> > 	[  368.756815] Call Trace:
> > 	[  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
> > 	[  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> > 	[  368.756837]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> > 	[  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> > 	[  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
> > 	[  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
> > 	[  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
> > 	[  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
> > 	[  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
> > 	[  368.756869]  [<ffffffff814e70ed>] system_call_fastpath+0x1a/0x1f
> > 	[  368.756872] Disabling lock debugging due to kernel taint
> > 	[  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:0 val:-1
> > 	[  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:1 val:1
> > 
> > > 
> > > Probably. I don't have a Xen PV setup to test with (and very little
> > > interest in setting one up). And I have a suspicion that it might not
> > > be so much about Xen PV, as perhaps about the kind of hardware.
> > > 
> > > I suspect the issue has something to do with the magic _PAGE_NUMA
> > > tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
> > > removing the _PAGE_PRESENT bit, and now the crazy numa code is
> > > confused.
> > > 
> > > The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
> > > bit with _PAGE_PROTNONE, which is why it then has that tie-in to
> > > _PAGE_PRESENT.
> > > 
> > > Adding Andrea to the Cc, because he's the author of that horridness.
> > > Putting Steven's test-case here as an attachment for Andrea, maybe
> > > that makes him go "Ahh, yes, silly case".
> > > 
> > > Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
> > > 
> > > Andrea, you can find the thread on lkml, but it boils down to commit
> > > 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
> > > attached test-case (but apparently only under Xen PV). There it
> > > apparently causes a "BUG: Bad page map .." error.
> 
> I *think* it is due to the fact that pmd_numa and pte_numa are getting the _raw_
> value of PMDs and PTEs. That is - it does not use the pvops interface
> and instead reads the values directly from the page-table. Since the
> page-table is also manipulated by the hypervisor - there are certain
> flags it also sets to do its business. It might be that it uses
> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
> pte_flags that would invoke the pvops interface.
> 
> Elena, Dariof and George, you guys had been looking at this a bit deeper
> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
> 
> This not-compiled-totally-bad-patch might shed some light on what I was
> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
> for that).

Unfortunately the Totally Bad Patch seems to make no difference. I am
still able to repro the issue:

	[  346.374929] BUG: Bad page map in process mp  pte:80000004ae928065 pmd:e993f9067
	[  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:          (null) index:0x0
	[  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
	[  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
	[  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
	[  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
	[  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
	[  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
	[  346.374979] Call Trace:
	[  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
	[  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
	[  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
	[  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
	[  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
	[  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
	[  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
	[  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
	[  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
	[  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
	[  346.375037] Disabling lock debugging due to kernel taint
	[  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
	[  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1

This dump doesn't look dramatically different, either.

> 
> The other question is - how is AutoNUMA running when it is not enabled?
> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
> turned on?

Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
mean not enabled at runtime?

[1] http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 08:31:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 08:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5tCq-0001nU-Sw; Wed, 22 Jan 2014 08:30:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5tCp-0001nP-Tm
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 08:30:40 +0000
Received: from [193.109.254.147:31615] by server-7.bemta-14.messagelabs.com id
	AE/FA-15500-FA18FD25; Wed, 22 Jan 2014 08:30:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390379438!12310619!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26214 invoked from network); 22 Jan 2014 08:30:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 08:30:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 08:30:37 +0000
Message-Id: <52DF8FB90200007800115AEC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 08:30:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Eric Houby" <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
In-Reply-To: <000001cf1695$0c960450$25c20cf0$@yahoo.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 11:39, "Eric Houby" <ehouby@yahoo.com> wrote:
> Xen console logs are attached.

Thanks. This

(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.

is the first thing you need to deal with. You may want to check
how a native kernel handles that (they have some DMI based
stuff in place to automatically deal with broken firmware, which
on Xen requires command line overrides like
"acpi_skip_timer_override").

That alone may already explain

(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8

since these come from the HPET (which may be generating legacy
timer interrupts):

(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0

The final crash could easily be a consequence of the earlier
problems. In any event you ought to make sure you run up to
date firmware on your system, as there clearly are issues with it.

What would also be interesting to know is whether any older Xen
version ever booted okay on that system.

Jan



From xen-devel-bounces@lists.xen.org Wed Jan 22 08:43:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 08:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5tOt-0002LY-FD; Wed, 22 Jan 2014 08:43:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5tOq-0002LT-Vt
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 08:43:05 +0000
Received: from [85.158.137.68:24286] by server-14.bemta-3.messagelabs.com id
	2D/EB-06105-8948FD25; Wed, 22 Jan 2014 08:43:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390380183!10600548!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24187 invoked from network); 22 Jan 2014 08:43:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 08:43:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 08:43:02 +0000
Message-Id: <52DF92A30200007800115B00@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 08:42:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Philip Wernersbach" <philip.wernersbach@gmail.com>
References: <CAO5Rg12iqvmDJB85zvSbzUSvy27UxnK4n5Z0fWCydpPywpJhrQ@mail.gmail.com>
	<52DE512A02000078001154C7@nat28.tlf.novell.com>
	<CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
In-Reply-To: <CAO5Rg120ihzBdgsvDSf2BC4BAYtFznfyCcORtTBjhXs7wquiHA@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v2] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 22:19, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> On Tue, Jan 21, 2014 at 4:51 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 20.01.14 at 23:08, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
>>> xen: [v2] Pass the location of the ACPI RSDP to DOM0.
>>>
>>> Some machines, such as recent IBM servers, only allow the OS to get the
>>> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
>>
>> This reads as if this was a bug in Xen, which it isn't. Dom0's
>> lack of EFI support when running on top of Xen is the issue.
> 
> It all depends on how you look at it, as there are two ways to fix this.
> Linux currently supports EFI just fine. Xen nukes DOM0's ability to
> access EFI using the current methods, which causes Linux's existing
> EFI support to fail. This would be Xen's fault. If Xen currently
> exposes EFI to DOM0 in some other way that Linux is not aware of, then
> this would be Linux's fault. Either way, we can either choose to fix
> Xen or fix Linux.

Which implies you don't understand the fundamental concepts of
Xen PV guests and/or EFI and/or the implications thereof on the
interaction of the two: PV guests (including Dom0) aren't fully
privileged (they don't run in ring 0), and hence _can't_ be allowed
direct access to EFI services and data. Such accesses _have to_
go through the hypervisor.

>> I think I had indicated my opposition to this sort of hack on v1
>> already; I'm not sure I asked which OSes usable as Dom0 but
>> other than Linux recognize this option. Or which versions of
>> Linux actually do (I'm pretty sure older ones don't).
>>
>> Bottom line - I continue to think that the issue should be fixed
>> in Linux.
> 
> The method that this patch uses is a valid way to fix this. In the
> best case, the DOM0 OS recognizes this option and uses it. In the
> worst case, the DOM0 OS won't recognize the option and will ignore it,
> so we're no worse off.

OSes may also choose to not ignore but choke on unknown options
passed to them.

> I agree with you that this is more of a stop gap than a long term fix.
> The final solution is to fully expose EFI services to the DOM0 in some
> way. However, getting there will take some time. The reason this patch
> came about is that the company I work for bought new IBM servers in
> the hope of migrating our existing Xen server farm to the new IBM
> servers. But we soon found out that DOM0 couldn't find the ACPI RSDP
> pointer on the new IBM servers, which means that Xen is dead in the
> water on these machines for now. The final solution of exposing EFI to
> the DOM0 is the ultimate goal, but for now businesses need an
> immediate solution. This provides an acceptable solution until DOM0
> EFI services are implemented at a later date. There is no reason why
> this can't be merged now and then removed when DOM0 EFI service
> support arrives.

One of the reasons I'm not in agreement with this view is that this
change papers over one of the issues but completely ignores others
(like the DMI tables likely also not being found on such systems by
the Dom0 OS, or like there not necessarily being an RTC available;
there are more things here - just go check what data can be
retrieved from Xen, and think about how the OS would get at that
data without proper EFI support implemented). I hope you agree
that it's unreasonable to work around all of these via command line
options. And I hope you understand that by not working around
them, you may end up with an apparently working system that in
the end breaks long after boot rather than right at boot (likely
leading to data loss, which I personally view as much worse than
not being able to boot on a certain system).

Jan



>>>
>>> Some machines, such as recent IBM servers, only allow the OS to get the
>>> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
>>
>> This reads as if this was a bug in Xen, which it isn't. Dom0's
>> lack of EFI support when running on top of Xen is the issue.
> 
> It all depends on how you look at it, as there's two ways to fix this.
> Linux currently supports EFI just fine. Xen nukes DOM0's ability to
> access EFI using the current methods, which causes Linux's existing
> EFI support to fail. This would be Xen's fault. If Xen currently
> exposes EFI to DOM0 in some other way that Linux is not aware of, then
> this would be Linux's fault. Either way, we can either choose to fix
> Xen or fix Linux.

Which implies you don't understand the fundamental concepts of
Xen PV guests and/or EFI and/or the implications thereof on the
interaction of the two: PV guests (including Dom0) aren't fully
privileged (they don't run in ring 0), and hence _can't_ be allowed
direct access to EFI services and data. Such accesses _have to_
go through the hypervisor.

>> I think I had indicated my opposition to this sort of hack on v1
>> already; I'm not sure I asked which OSes usable as Dom0 but
>> other than Linux recognize this option. Or which versions of
>> Linux actually do (I'm pretty sure older ones don't).
>>
>> Bottom line - I continue to think that the issue should be fixed
>> in Linux.
> 
> The method that this patch uses is a valid way to fix this. In the
> best case, the DOM0 OS recognizes this option and uses it. In the
> worst case, the DOM0 OS won't recognize the option and will ignore it,
> so we're no worse off.

OSes may also choose not to ignore unknown options passed to
them, but to choke on them.

> I agree with you that this is more of a stop gap than a long term fix.
> The final solution is to fully expose EFI services to the DOM0 in some
> way. However, getting there will take some time. The reason this patch
> came about is that the company I work for bought new IBM servers in
> the hope of migrating our existing Xen server farm to the new IBM
> servers. But we soon found out that DOM0 couldn't find the ACPI RSDP
> pointer on the new IBM servers, which means that Xen is dead in the
> water on these machines for now. The final solution of exposing EFI to
> the DOM0 is the ultimate goal, but for now businesses need an
> immediate solution. This provides an acceptable solution until DOM0
> EFI services are implemented at a later date. There is no reason why
> this can't be merged now and then removed when DOM0 EFI service
> support arrives.

One of the reasons I'm not in agreement with this view is that this
change papers over one of the issues but completely ignores others
(like the DMI tables likely also not being found on such systems by
the Dom0 OS, or like there not necessarily being an RTC available;
there are more things here - just go check what data can be
retrieved from Xen, and think about how the OS would get at that
data without proper EFI support implemented). I hope you agree
that it's unreasonable to work around all of these via command line
options. And I hope you understand that by not working around
them, you may end up with an apparently working system that in
the end breaks long after boot rather than right at boot (likely
leading to data loss, which I personally view as much worse than
not being able to boot on a certain system).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5tqY-0003k6-LW; Wed, 22 Jan 2014 09:11:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5tqX-0003k1-RK
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:11:41 +0000
Received: from [193.109.254.147:36660] by server-11.bemta-14.messagelabs.com
	id 9F/85-20576-D4B8FD25; Wed, 22 Jan 2014 09:11:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390381900!12433240!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17569 invoked from network); 22 Jan 2014 09:11:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 09:11:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 09:11:40 +0000
Message-Id: <52DF995A0200007800115B18@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 09:11:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<1390296705.20516.82.camel@kazak.uk.xensource.com>
	<1390317421.4634.5.camel@astar.houby.net>
In-Reply-To: <1390317421.4634.5.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot in get_rte_index
 without no-amd-iommu-perdev-intremap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 16:17, Eric Houby <ehouby@yahoo.com> wrote:
> I am not using the ivrs_ioapic[] command option, although I plan to give
> it a try once I have 4.4 running properly.

Having seen the "iommu=debug" output, I don't think you have a
reason to.

> My full xen command line is:
> dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all
> guest_loglvl=all iommu=debug,verbose,no-amd-iommu-perdev-intremap
> 
> With no-amd-iommu-perdev-intremap, the system boots; without the option,
> Xen crashes at boot.

Which is in line with the IOMMU faults you're seeing.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:27:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5u5r-0004Gh-IX; Wed, 22 Jan 2014 09:27:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5u5p-0004Gc-HE
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:27:29 +0000
Received: from [85.158.143.35:17605] by server-3.bemta-4.messagelabs.com id
	75/E3-32360-00F8FD25; Wed, 22 Jan 2014 09:27:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390382848!11970222!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28724 invoked from network); 22 Jan 2014 09:27:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 09:27:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 09:27:27 +0000
Message-Id: <52DF9D0D0200007800115B29@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 09:27:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Philip Wernersbach" <philip.wernersbach@gmail.com>
References: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
In-Reply-To: <CAO5Rg11C4CT4Wd6yxAr+e-SUifW-otUmbyQtUVKcP2rh-HmQBQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v3] Pass the location of the ACPI RSDP to
 DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 21:55, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1378,6 +1378,25 @@ void __init __start_xen(unsigned long mbi_p)
>              safe_strcat(dom0_cmdline, " acpi=");
>              safe_strcat(dom0_cmdline, acpi_param);
>          }
> +        if ( !strstr(dom0_cmdline, "acpi_rsdp=") )

Apart from all other reservations I have, you absolutely must not
do this unconditionally: This should probably be limited to when
booting from EFI, and this should definitely be done only when
enabled by a command line option.

Jan

> +        {
> +            acpi_physical_address rp = acpi_os_get_root_pointer();
> +            char rp_str[sizeof(acpi_physical_address)*2 + 1];
> +
> +            if ( rp )
> +            {
> +                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 1,
> +                         "%08lX", rp);
> +
> +                safe_strcat(dom0_cmdline, " acpi_rsdp=0x");
> +                safe_strcat(dom0_cmdline, rp_str);
> +            }
> +            else
> +            {
> +                printk(XENLOG_WARNING
> +                       "Failed to get acpi_rsdp to pass to dom0\n");
> +            }
> +        }
> 
>          cmdline = dom0_cmdline;
>      }




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:38:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uGd-00059S-1S; Wed, 22 Jan 2014 09:38:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5uGc-00059D-Af
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:38:38 +0000
Received: from [193.109.254.147:3809] by server-13.bemta-14.messagelabs.com id
	C4/05-19374-D919FD25; Wed, 22 Jan 2014 09:38:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390383516!12441486!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13423 invoked from network); 22 Jan 2014 09:38:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 09:38:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 09:38:36 +0000
Message-Id: <52DF9FAA0200007800115B3B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 09:38:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52DEB887.8070409@citrix.com>
In-Reply-To: <52DEB887.8070409@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> I have been giving nested virt a try, and have my first bug to report. 
> This is still ongoing, and is by no means complete yet.
> 
> Setup:
> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
> 
> Single Intel Haswell SDP (Grantley platform):
> Native hypervisor: XenServer
> 
> Two L1 guests:
>   XenServer (running with EPT)
>   XenServer (running with shadow)
> 
> 
> When attempting to create an L2 EPT HVM domain under an L1 shadow
> domain, the L1 shadow domain is killed with:
> 
> (XEN) <vm_launch_fail> error code 7

Considering that 7 is "VM entry with invalid control field(s)", I think
it would be quite helpful if we enhanced the error handling here to
dump the VMCS.

Also - did you perhaps mean to Cc VMX folks on your original mail?
Chances that they see your report without doing so are - according
to my experience - rather slim...

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:44:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uLn-0005IA-8b; Wed, 22 Jan 2014 09:43:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uLl-0005I3-JL
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 09:43:57 +0000
Received: from [85.158.137.68:33498] by server-2.bemta-3.messagelabs.com id
	F6/72-17329-CD29FD25; Wed, 22 Jan 2014 09:43:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390383834!6945288!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12380 invoked from network); 22 Jan 2014 09:43:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:43:56 -0000
From xen-devel-bounces@lists.xen.org Wed Jan 22 09:44:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uLn-0005IA-8b; Wed, 22 Jan 2014 09:43:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uLl-0005I3-JL
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 09:43:57 +0000
Received: from [85.158.137.68:33498] by server-2.bemta-3.messagelabs.com id
	F6/72-17329-CD29FD25; Wed, 22 Jan 2014 09:43:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390383834!6945288!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12380 invoked from network); 22 Jan 2014 09:43:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:43:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95227543"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:43:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 04:43:53 -0500
Message-ID: <1390383831.32519.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:43:51 +0000
In-Reply-To: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: linux@arm.linux.org.uk, arnd@arndb.de, catalin.marinas@arm.com,
	jaccon.bastiaansen@gmail.com, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 13:44 +0000, Stefano Stabellini wrote:
> Remove !GENERIC_ATOMIC64 build dependency:
> - introduce xen_atomic64_xchg
> - use it to implement xchg_xen_ulong
> 
> Remove !CPU_V6 build dependency:
> - introduce __cmpxchg8 and __cmpxchg16, compiled even when
>   CONFIG_CPU_V6 is defined
> - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:49:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uQn-00062E-U2; Wed, 22 Jan 2014 09:49:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5uQm-000624-Js
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:49:08 +0000
Received: from [85.158.137.68:5532] by server-10.bemta-3.messagelabs.com id
	86/F1-23989-3149FD25; Wed, 22 Jan 2014 09:49:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390384146!6947005!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22660 invoked from network); 22 Jan 2014 09:49:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 09:49:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 09:49:06 +0000
Message-Id: <52DFA2200200007800115B70@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 09:49:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
In-Reply-To: <20140122043128.GA9931@konrad-lan.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.01.14 at 05:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> See attached (and relevant part inlined).
>...
> (XEN) [2014-01-22 12:27:07] Xen call trace:
> (XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
> (XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
> (XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
> (XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
> (XEN) [2014-01-22 12:27:07] 
> (XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:

Considering the similarity, this is surely another incarnation of
the same issue. Which gets me to ask first of all - is the device
being acted upon an MSI-X capable one? If not, why is the call
being made? If so (and Xen thinks differently) that's what
needs fixing.

On that basis I'm also going to ignore your patch for the first
problem, Andrew: It's either incomplete or unnecessary or
fixing the wrong thing.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:49:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uR2-00063f-Aj; Wed, 22 Jan 2014 09:49:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5uR0-00063M-UQ
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:49:23 +0000
Received: from [85.158.137.68:7140] by server-11.bemta-3.messagelabs.com id
	D2/7E-19379-2249FD25; Wed, 22 Jan 2014 09:49:22 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390384156!10617838!1
X-Originating-IP: [64.18.0.149]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1699 invoked from network); 22 Jan 2014 09:49:18 -0000
Received: from exprod5og117.obsmtp.com (HELO exprod5og117.obsmtp.com)
	(64.18.0.149)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 09:49:18 -0000
Received: from mail-wg0-f52.google.com ([74.125.82.52]) (using TLSv1) by
	exprod5ob117.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt+UHCG8L7kaiehOsoKoTE3UzS4l14Do@postini.com;
	Wed, 22 Jan 2014 01:49:18 PST
Received: by mail-wg0-f52.google.com with SMTP id b13so126021wgh.7
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 01:49:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=MGyAIntpx//1iBoaNqOgl8e3HqdyjDjUTAy+TreuW3A=;
	b=I/bILWUaae1FRyhra2Soq9OypxGLEfoNst0s81wqnVKr3Hl/Zl7p/PiX63Z9kxMCjm
	TwwyQa6BW5g8XEUBIiX+XEQIrM/Q+Lg7NazVImt7AmsGm5OU1RdTiBpvTVvo0hQ41pry
	d+Bf1fLg6JoQvJSJc/YeglJVaHCYjHMOMeAA4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=MGyAIntpx//1iBoaNqOgl8e3HqdyjDjUTAy+TreuW3A=;
	b=Tn2PgeCZDbNPmTf/wWhxnuzAFCOYu4AUpQEXPrrFEGp6LJ/Y8wpdFu1tE5sE799vBq
	0Be0IJaAZzUfpAP0HYs53lrFIxIAiY6gF1I6xcr44pD9hyurXmtGU5RqfePwlGPjbOWT
	U30PHXPAkUFtPpQaW3yLQc9rtaZc5NjLksXhG/LqDVYLzrmvhNKpZmn9MBhe0rv4tcz6
	aDbyiMbNrsuLgak457Zxn1zQtAT3TttR07liqr3EuYNUjn6UmlaSKFxCnrpcxQBFcmus
	3scg2VwU5dDQ89iwqI0tGTXMa9sdhOptHkgwDJPVhwdhAzbGgZSY29Fs28tEh/nqpmp4
	y05A==
X-Gm-Message-State: ALoCoQmR2TamArrgKMKjheBBUJUtYVBAt4y+eq1DDUNNgHpvY2F+WzgA0NrTtfGCcWH4v4oMTNnhWmteXxiBUxors5HZzEmIjtVZLYagu5IJbiHKnrHF0jt7oFkaa/hjqxkCpYWhifBH69CRuS50KPlnRts/hwtFc4bb84XSFFOGVaZiaF/gonA=
X-Received: by 10.180.206.41 with SMTP id ll9mr2115412wic.7.1390384154910;
	Wed, 22 Jan 2014 01:49:14 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.180.206.41 with SMTP id ll9mr2115393wic.7.1390384154755;
	Wed, 22 Jan 2014 01:49:14 -0800 (PST)
Received: by 10.194.92.201 with HTTP; Wed, 22 Jan 2014 01:49:14 -0800 (PST)
In-Reply-To: <20140121212359.GJ2924@reaktio.net>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
	<20140121212359.GJ2924@reaktio.net>
Date: Wed, 22 Jan 2014 11:49:14 +0200
Message-ID: <CAM0X+hNe1tq7kUo=6Way5Zsxo4+0uJs_+wne=DjSjU3v-qFrVw@mail.gmail.com>
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3945659488699508057=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3945659488699508057==
Content-Type: multipart/alternative; boundary=001a11c38882a5518004f08c0992

--001a11c38882a5518004f08c0992
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Konrad's repository has an old version of the drivers (for Linux 3.3).


On Tue, Jan 21, 2014 at 11:23 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savchenko wrote:
> > Hi,
> >
> > Could someone advise on the issue I am facing?
> >
> > I am trying to run PV USB on omap5uevm (omap5-panda) board.
> >
> > I use the latest PV USB drivers from Nathanael's server:
> >
> http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch
> >
>
> I think Konrad actually has the Xen PVUSB drivers in his git tree, and it
> has some extra patches compared to Nathanael's version (IIRC).
>
> -- Pasi
>
>
> > I have applied it to k3.8 (dom0) with some patches for the USB HCD and usbback
> drivers
> > (attached), and run it on Xen 4.4.0-rc2.
> >
> > I am facing an issue with USB_STORAGE:
> > The USB storage is initialized and mounted on domU over the PV USB drivers,
> > but I can only copy small files to it (no more than
> ~100-500 kBytes).
> > Then the USB storage falls into an infinite loop
> > (the LEDs on the USB storage blink all the time, far longer than the copy needs)
> > and after a few seconds dom0 disconnects the USB device.
> >
> > Dom0 and DomU use k3.8.
> >
> > I observed that usb-storage uses some scsi requests (from domU) which pass
> > directly to hardware, I think this is a problem.
> >
> > So, I applied PV SCSI drivers from
> >
> http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
> > to k3.8.
> >
> > Then I initialized PV USB & PV SCSI with the scripts vusb-start.sh and
> vscsi-start.sh respectively.
> > But I am still facing the issue.
> >
> > Dom0 log:
> > [    0.000000] Booting Linux on physical CPU 0x0
> > [    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394)
> (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
> > [    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7),
> cr=10c5387d
> > [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction
> cache
> > [    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
> > [    0.000000] bootconsole [earlycon0] enabled
> > [    0.000000] Memory policy: ECC disabled, Data cache writeback
> > [    0.000000] On node 0 totalpages: 65280
> > [    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map
> c428e000
> > [    0.000000]   Normal zone: 512 pages used for memmap
> > [    0.000000]   Normal zone: 0 pages reserved
> > [    0.000000]   Normal zone: 64768 pages, LIFO batch:15
> > [    0.000000] psci: probing function IDs from device-tree
> > [    0.000000] OMAP5432 ES2.0
> > [    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> > [    0.000000] pcpu-alloc: [0] 0
> > [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.
>  Total pages: 64768
> > [    0.000000] Kernel command line: console=hvc0 earlyprintk
> > [    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
> > [    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072
> bytes)
> > [    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536
> bytes)
> > [    0.000000] __ex_table already sorted, skipping sort
> > [    0.000000] Memory: 255MB = 255MB total
> > [    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K
> highmem
> > [    0.000000] Virtual kernel memory layout:
> > [    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
> > [    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
> > [    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
> > [    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
> > [    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
> > [    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
> > [    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
> > [    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
> > [    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
> > [    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
> > [    0.000000] NR_IRQS:16 nr_irqs:16 16
> > [    0.000000] Architected local timer running at 6.14MHz (virt).
> > [    0.000000] Switching to timer-based delay loop
> > [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns,
> wraps every 3489660920ms
> > [    0.000000] Console: colour dummy device 80x30
> > [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat,
> Inc., Ingo Molnar
> > [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
> > [    0.000000] ... MAX_LOCK_DEPTH:          48
> > [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
> > [    0.000000] ... CLASSHASH_SIZE:          4096
> > [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
> > [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
> > [    0.000000] ... CHAINHASH_SIZE:          16384
> > [    0.000000]  memory used by lock dependency info: 3695 kB
> > [    0.000000]  per task-struct memory footprint: 1152 bytes
> > [    0.046875] Calibrating delay loop (skipped), value calculated using
> timer frequency.. 12.30 BogoMIPS (lpj=48000)
> > [    0.054687] pid_max: default: 32768 minimum: 301
> > [    0.054687] Security Framework initialized
> > [    0.062500] Mount-cache hash table entries: 512
> > [    0.070312] CPU: Testing write buffer coherency: ok
> > [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
> > [    0.085937] devtmpfs: initialized
> > [    0.093750] Xen 4.4 support found, events_irq=31
> gnttab_frame_pfn=4b000
> > [    0.101562] xen:grant_table: Grant tables using version 1 layout.
> > [    0.101562] Grant table initialized
> > [    0.109375] omap_hwmod: aess: _wait_target_disable failed
> > [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
> > [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
> > [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
> > [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
> > [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
> > [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
> > [    0.234375] pinctrl core: initialized pinctrl subsystem
> > [    0.242187] regulator-dummy: no parameters
> > [    0.242187] NET: Registered protocol family 16
> > [    0.250000] Xen: initializing cpu0
> > [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent
> allocations
> > [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for
> software IO TLB
> > [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped
> at [ce000000-ce7fffff]
> > [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
> > [    0.281250] OMAP GPIO hardware version 0.1
> > [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
> > [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
> > [    0.296875] OMAP DMA hardware revision 0.0
> > [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840
> size 438
> > [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840
> size 56
> > [    0.335937] bio: create slab <bio-0> at 0
> > [    0.343750] xen:balloon: Initialising balloon driver
> > [    0.343750] of_get_named_gpio_flags exited with status 80
> > [    0.343750] hsusb2_reset: 3300 mV
> > [    0.351562] of_get_named_gpio_flags exited with status 79
> > [    0.351562] hsusb3_reset: 3300 mV
> > [    0.351562] SCSI subsystem initialized
> > [    0.359375] libata version 3.00 loaded.
> > [    0.359375] usbcore: registered new interface driver usbfs
> > [    0.367187] usbcore: registered new interface driver hub
> > [    0.367187] usbcore: registered new device driver usb
> > [    0.375000] Switching to clocksource arch_sys_counter
> > [    0.414062] NET: Registered protocol family 2
> > [    0.414062] TCP established hash table entries: 2048 (order: 2, 16384
> bytes)
> > [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
> > [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
> > [    0.437500] TCP: reno registered
> > [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.453125] NET: Registered protocol family 1
> > [    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
> > [    0.687500] VFS: Disk quotas dquot_6.5.2
> > [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> > [    0.703125] msgmni has been set to 372
> > [    0.710937] io scheduler noop registered
> > [    0.718750] io scheduler deadline registered
> > [    0.718750] io scheduler cfq registered (default)
> > [    0.726562] xen:xen_evtchn: Event-channel device installed
> > [    0.742187] console [hvc0] enabled, bootconsole disabled
> > [    0.765625] brd: module loaded
> > [    0.781250] loop: module loaded
> > [    0.789062] ahci ahci.0.auto: can't get clock
> > [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
> > [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
> > [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps
> 0x1 impl platform mode
> > [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only
> pmp pio slum part ccc apst
> > [    0.820312] scsi0 : ahci_platform
> > [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff]
> port 0x100 irq 86
> > [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> > [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
> > [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
> > [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned
> bus number 1
> > [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
> > [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
> > [    0.898437] usb usb1: New USB device found, idVendor=1d6b,
> idProduct=0002
> > [    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2,
> SerialNumber=1
> > [    0.914062] usb usb1: Product: EHCI Host Controller
> > [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> ehci_hcd
> > [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
> > [    0.937500] hub 1-0:1.0: USB hub found
> > [    0.937500] hub 1-0:1.0: 3 ports detected
> > [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> > [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
> > [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered,
> assigned bus number 2
> > [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
> > [    1.187500] usb usb2: New USB device found, idVendor=1d6b,
> idProduct=0001
> > [    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2,
> SerialNumber=1
> > [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
> > [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> ohci_hcd
> > [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
> > [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
> > [    1.226562] hub 2-0:1.0: USB hub found
> > [    1.226562] hub 2-0:1.0: 3 ports detected
> > [    1.359375] usb 1-2: new high-speed USB device number 2 using
> ehci-omap
> > [    1.515625] usb 1-2: New USB device found, idVendor=0424,
> idProduct=3503
> > [    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0,
> SerialNumber=0
> > [    1.531250] hub 1-2:1.0: USB hub found
> > [    1.531250] hub 1-2:1.0: 3 ports detected
> > [    1.664062] usb 1-3: new high-speed USB device number 3 using
> ehci-omap
> > [    1.820312] usb 1-3: New USB device found, idVendor=0424,
> idProduct=9730
> > [    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0,
> SerialNumber=0
> > [    1.835937] usbcore: registered new interface driver usbback
> > [    1.843750] Initializing USB Mass Storage driver...
> > [    1.843750] usbcore: registered new interface driver usb-storage
> > [    1.851562] USB Mass Storage support registered.
> > [    1.859375] i2c /dev entries driver
> > [    1.859375] usbcore: registered new interface driver usbhid
> > [    1.867187] usbhid: USB HID core driver
> > [    1.875000] TCP: cubic registered
> > [    1.875000] Initializing XFRM netlink socket
> > [    1.882812] NET: Registered protocol family 17
> > [    1.882812] NET: Registered protocol family 15
> > [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30
> variant f rev 0
> > [    1.898437] mux: Failed to setup hwmod io irq -22
> > [    1.898437] Power Management for TI OMAP4PLUS devices.
> > [    1.906250] ThumbEE CPU extension supported.
> > [    1.914062] Registering SWP/SWPB emulation handler
> > [    1.921875] devtmpfs: mounted
> > [    1.968750] Freeing init memory: 57752K
> > # ./vusb-start.sh 1 0
> > [    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9,
> event-channel 3
> > # ./vscsi-start.sh 1 0
> > # echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport
> >
> > [   40.796875] usb 1-2.1: new high-speed USB device number 4 using
> ehci-omap
> > [   40.914062] usb 1-2.1: New USB device found, idVendor=8564,
> idProduct=1000
> > [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2,
> SerialNumber=3
> > [   40.929687] usb 1-2.1: Product: Mass Storage Device
> > [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> > [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> >
> > DomU log:
> > 0.710937] console [hvc0] enabled, bootconsole disabled
> > [    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is
> a OMAP UART0
> > [    0.718750] omap_uart 4806c000.serial: did not get pins for uart1
> error: -19
> > [    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is
> a OMAP UART1
> > [    0.718750] omap_uart 4806e000.serial: did not get pins for uart3
> error: -19
> > [    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is
> a OMAP UART3
> > [    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is
> a OMAP UART4
> > [    0.726562] omap_uart 48068000.serial: did not get pins for uart5
> error: -19
> > [    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is
> a OMAP UART5
> > [    0.726562] [drm] Initialized drm 1.1.0 20060810
> > [    0.742187] brd: module loaded
> > [    0.757812] loop: module loaded
> > [    0.757812] omap2_mcspi 48098000.spi: pins are not configured from
> the driver
> > [    0.765625] Initialising Xen virtual ethernet driver.
> > [    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> > [    0.765625] ehci-platform: EHCI generic platform driver
> > [    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
> > [    0.765625] vusb vusb-0: new USB bus registered, assigned bus number=
 1
> > [    0.765625] usb usb1: New USB device found, idVendor=3D1d6b,
> idProduct=3D0002
> > [    0.765625] usb usb1: New USB device strings: Mfr=3D3, Product=3D2,
> SerialNumber=3D1
> > [    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
> > [    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> xen_hcd
> > [    0.765625] usb usb1: SerialNumber: vusb-0
> > [    0.773437] hub 1-0:1.0: USB hub found
> > [    0.773437] hub 1-0:1.0: 8 ports detected
> > [    0.773437] Initializing USB Mass Storage driver...
> > [    0.773437] usbcore: registered new interface driver usb-storage
> > [    0.773437] USB Mass Storage support registered.
> > [    0.773437] mousedev: PS/2 mouse device common for all mice
> > [    0.781250] usbcore: registered new interface driver usbhid
> > [    0.781250] usbhid: USB HID core driver
> > [    0.789062] TCP: cubic registered
> > [    0.789062] Initializing XFRM netlink socket
> > [    0.789062] NET: Registered protocol family 17
> > [    0.789062] NET: Registered protocol family 15
> > [    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30
> variant f rev 0
> > [    0.789062] mux: Failed to setup hwmod io irq -22
> > [    0.789062] ThumbEE CPU extension supported.
> > [    0.789062] Registering SWP/SWPB emulation handler
> > [    0.789062] dmm 4e000000.dmm: initialized all PAT entries
> > [    0.804687]
> /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open
> rtc device (rtc0)
> > [    0.804687] devtmpfs: mounted
> > [    0.812500] Freeing init memory: 6044K
> >
> > Please press Enter to activate this console.
> > [    6.500000] scsi0 : Xen SCSI frontend driver
> >
> > / # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using
> ehci-omap
> > [   40.914062] usb 1-2.1: New USB device found, idVendor=3D8564,
> idProduct=3D1000
> > [   40.921875] usb 1-2.1: New USB device strings: Mfr=3D1, Product=3D2,
> SerialNumber=3D3
> > [   40.929687] usb 1-2.1: Product: Mass Storage Device
> > [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> > [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> > [   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
> > (XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
> > [   32.875000] usb 1-1: New USB device found, idVendor=3D8564,
> idProduct=3D1000
> > [   32.875000] usb 1-1: New USB device strings: Mfr=3D1, Product=3D2,
> SerialNumber=3D3
> > [   32.875000] usb 1-1: Product: Mass Storage Device
> > [   32.875000] usb 1-1: Manufacturer: JetFlash
> > [   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
> > [   32.906250] scsi1 : usb-storage 1-1:1.0
> > [   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB
>  1100 PQ: 0 ANSI: 4
> > [   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.1=
0
> GB/7.54 GiB)
> > [   34.140625] sd 1:0:0:0: [sda] Write Protect is off
> > [   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<--this data
> may changed on different boots*
> > [   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.195312]  sda: sda1
> > [   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk
> >
> >  # lsusb
> > Bus 001 Device 002: ID 8564:1000
> > Bus 001 Device 001: ID 1d6b:0002
> >
> > But it looks like scsi requests from usb-storage still passing directly
> to hardware
> > instead of passing through PV SCSI.
> >
> > Could smb tell me how to init PV SCSI and PV USB correctly?
> >
> > Regards,
> > Alexander
> >
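[Editorial aside: one way to check whether `sda` is really served by the pvSCSI frontend rather than the usb-storage path is to inspect its sysfs device chain. The `vscsi`/`usb` substrings below are assumptions about how these drivers name their sysfs nodes; verify them against the actual `readlink` output on the board.]

```shell
#!/bin/sh
# Classify a block device by its sysfs device path. The matched substrings
# are assumptions about driver naming, not verified against these patches.
classify_path() {
    case "$1" in
        *vscsi*) echo "pvscsi" ;;   # Xen SCSI frontend
        *usb*)   echo "usb" ;;      # usb-storage (including the vusb HCD)
        *)       echo "other" ;;
    esac
}

# Usage on the board:
#   classify_path "$(readlink -f /sys/block/sda/device)"
```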
> > Alexander Savchenko (2):
> >   usbback: Add new features
> >   HACK: usb:core:hcd: Do not remapping self dma addresses
> >
> > Nathanael Rensen (1):
> >   pvusb drivers
> >
> >  drivers/usb/core/hcd.c                 |    1 +
> >  drivers/usb/host/Kconfig               |   23 +
> >  drivers/usb/host/Makefile              |    2 +
> >  drivers/usb/host/xen-usbback/Makefile  |    3 +
> >  drivers/usb/host/xen-usbback/common.h  |  170 ++++
> >  drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
> >  drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
> >  drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
> >  drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++++
> >  include/xen/interface/io/usbif.h       |  150 +++
> >  10 files changed, 4244 insertions(+)
> >  create mode 100644 drivers/usb/host/xen-usbback/Makefile
> >  create mode 100644 drivers/usb/host/xen-usbback/common.h
> >  create mode 100644 drivers/usb/host/xen-usbback/usbback.c
> >  create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
> >  create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
> >  create mode 100644 drivers/usb/host/xen-usbfront.c
> >  create mode 100644 include/xen/interface/io/usbif.h
> >
> > --
> > 1.7.9.5
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>



-- 

Alexander Savchenko | Kernel developer
GlobalLogic
M +38-093-808-37-33  S darkside.warlock
www.globallogic.com

--001a11c38882a5518004f08c0992
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit

Konrad's repository has an old version of the drivers (for Linux 3.3).

On Tue, Jan 21, 2014 at 11:23 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savchenko wrote:
> > Hi,
> >
> > Could someone advise on the issue I am facing?
> >
> > I am trying to run PV USB on the omap5uevm (omap5-panda) board.
> >
> > I use the latest PV USB drivers from Nathanael's server:
> > http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch
> >
>
> I think Konrad actually has the Xen PVUSB drivers in his git tree, and it has
> some extra patches compared to Nathanael's version (IIRC)..
>
> -- Pasi
>
> > I have applied it to k3.8 (dom0) with some patches for the USB HCD and
> > usbback drivers (attached), and run it on Xen 4.4.0-rc2.
> >
> > I am facing an issue with USB_STORAGE:
> > the USB storage device initializes and mounts in domU over the PV USB drivers,
> > but I can only copy small files to it (no more than ~100-500 kB).
> > Then the USB storage device falls into an infinite loop
> > (the LEDs on the device keep blinking, far longer than the copy needs),
> > and after a few seconds dom0 disconnects the USB device.
> >
> > Dom0 and domU use k3.8.
> >
> > I observed that usb-storage issues some SCSI requests (from domU) which pass
> > directly to the hardware; I think this is the problem.
> >
> > So I applied the PV SCSI drivers from
> > http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=refs/heads/devel/xen-scsi.v1.0
> > to k3.8.
> >
> > Then I initialized PV USB & PV SCSI with the scripts vusb-start.sh and
> > vscsi-start.sh respectively, but I am still facing the issue.
> >
> > Dom0 log:
> > [    0.000000] Booting Linux on physical CPU 0x0
> > [    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
> > [    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=10c5387d
> > [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
> > [    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
> > [    0.000000] bootconsole [earlycon0] enabled
> > [    0.000000] Memory policy: ECC disabled, Data cache writeback
> > [    0.000000] On node 0 totalpages: 65280
> > [    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_map c428e000
> > [    0.000000]   Normal zone: 512 pages used for memmap
> > [    0.000000]   Normal zone: 0 pages reserved
> > [    0.000000]   Normal zone: 64768 pages, LIFO batch:15
> > [    0.000000] psci: probing function IDs from device-tree
> > [    0.000000] OMAP5432 ES2.0
> > [    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> > [    0.000000] pcpu-alloc: [0] 0
> > [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 64768
> > [    0.000000] Kernel command line: console=hvc0 earlyprintk
> > [    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
> > [    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
> > [    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
> > [    0.000000] __ex_table already sorted, skipping sort
> > [    0.000000] Memory: 255MB = 255MB total
> > [    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K highmem
> > [    0.000000] Virtual kernel memory layout:
> > [    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
> > [    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
> > [    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
> > [    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
> > [    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
> > [    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
> > [    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
> > [    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
> > [    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
> > [    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
> > [    0.000000] NR_IRQS:16 nr_irqs:16 16
> > [    0.000000] Architected local timer running at 6.14MHz (virt).
> > [    0.000000] Switching to timer-based delay loop
> > [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
> > [    0.000000] Console: colour dummy device 80x30
> > [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
> > [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
> > [    0.000000] ... MAX_LOCK_DEPTH:          48
> > [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
> > [    0.000000] ... CLASSHASH_SIZE:          4096
> > [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
> > [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
> > [    0.000000] ... CHAINHASH_SIZE:          16384
> > [    0.000000]  memory used by lock dependency info: 3695 kB
> > [    0.000000]  per task-struct memory footprint: 1152 bytes
> > [    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
> > [    0.054687] pid_max: default: 32768 minimum: 301
> > [    0.054687] Security Framework initialized
> > [    0.062500] Mount-cache hash table entries: 512
> > [    0.070312] CPU: Testing write buffer coherency: ok
> > [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
> > [    0.085937] devtmpfs: initialized
> > [    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
> > [    0.101562] xen:grant_table: Grant tables using version 1 layout.
> > [    0.101562] Grant table initialized
> > [    0.109375] omap_hwmod: aess: _wait_target_disable failed
> > [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
> > [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
> > [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
> > [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
> > [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
> > [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
> > [    0.234375] pinctrl core: initialized pinctrl subsystem
> > [    0.242187] regulator-dummy: no parameters
> > [    0.242187] NET: Registered protocol family 16
> > [    0.250000] Xen: initializing cpu0
> > [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
> > [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
> > [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
> > [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
> > [    0.281250] OMAP GPIO hardware version 0.1
> > [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
> > [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
> > [    0.296875] OMAP DMA hardware revision 0.0
> > [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
> > [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
> > [    0.335937] bio: create slab <bio-0> at 0
> > [    0.343750] xen:balloon: Initialising balloon driver
> > [    0.343750] of_get_named_gpio_flags exited with status 80
> > [    0.343750] hsusb2_reset: 3300 mV
> > [    0.351562] of_get_named_gpio_flags exited with status 79
> > [    0.351562] hsusb3_reset: 3300 mV
> > [    0.351562] SCSI subsystem initialized
> > [    0.359375] libata version 3.00 loaded.
> > [    0.359375] usbcore: registered new interface driver usbfs
> > [    0.367187] usbcore: registered new interface driver hub
> > [    0.367187] usbcore: registered new device driver usb
> > [    0.375000] Switching to clocksource arch_sys_counter
> > [    0.414062] NET: Registered protocol family 2
> > [    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
> > [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
> > [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
> > [    0.437500] TCP: reno registered
> > [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.453125] NET: Registered protocol family 1
> > [    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
> > [    0.687500] VFS: Disk quotas dquot_6.5.2
> > [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> > [    0.703125] msgmni has been set to 372
> > [    0.710937] io scheduler noop registered
> > [    0.718750] io scheduler deadline registered
> > [    0.718750] io scheduler cfq registered (default)
> > [    0.726562] xen:xen_evtchn: Event-channel device installed
> > [    0.742187] console [hvc0] enabled, bootconsole disabled
> > [    0.765625] brd: module loaded
> > [    0.781250] loop: module loaded
> > [    0.789062] ahci ahci.0.auto: can't get clock
> > [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
> > [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
> > [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
> > [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst
> > [    0.820312] scsi0 : ahci_platform
> > [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
> > [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> > [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
> > [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
> > [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
> > [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
> > [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
> > [    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
> > [    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> > [    0.914062] usb usb1: Product: EHCI Host Controller
> > [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
> > [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
> > [    0.937500] hub 1-0:1.0: USB hub found
> > [    0.937500] hub 1-0:1.0: 3 ports detected
> > [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> > [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
> > [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
> > [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
> > [    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
> > [    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> > [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
> > [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
> > [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
> > [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
> > [    1.226562] hub 2-0:1.0: USB hub found
> > [    1.226562] hub 2-0:1.0: 3 ports detected
> > [    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
> > [    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
> > [    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> > [    1.531250] hub 1-2:1.0: USB hub found
> > [    1.531250] hub 1-2:1.0: 3 ports detected
> > [    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
> > [    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
> > [    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
> > [    1.835937] usbcore: registered new interface driver usbback
> > [    1.843750] Initializing USB Mass Storage driver...
> > [    1.843750] usbcore: registered new interface driver usb-storage
> > [    1.851562] USB Mass Storage support registered.
> > [    1.859375] i2c /dev entries driver
> > [    1.859375] usbcore: registered new interface driver usbhid
> > [    1.867187] usbhid: USB HID core driver
> > [    1.875000] TCP: cubic registered
> > [    1.875000] Initializing XFRM netlink socket
> > [    1.882812] NET: Registered protocol family 17
> > [    1.882812] NET: Registered protocol family 15
> > [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
> > [    1.898437] mux: Failed to setup hwmod io irq -22
> > [    1.898437] Power Management for TI OMAP4PLUS devices.

--001a11c38882a5518004f08c0992--


--===============3945659488699508057==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3945659488699508057==--


From xen-devel-bounces@lists.xen.org Wed Jan 22 09:49:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uR2-00063f-Aj; Wed, 22 Jan 2014 09:49:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.savchenko@globallogic.com>)
	id 1W5uR0-00063M-UQ
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:49:23 +0000
Received: from [85.158.137.68:7140] by server-11.bemta-3.messagelabs.com id
	D2/7E-19379-2249FD25; Wed, 22 Jan 2014 09:49:22 +0000
X-Env-Sender: oleksandr.savchenko@globallogic.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390384156!10617838!1
X-Originating-IP: [64.18.0.149]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1699 invoked from network); 22 Jan 2014 09:49:18 -0000
Received: from exprod5og117.obsmtp.com (HELO exprod5og117.obsmtp.com)
	(64.18.0.149)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 09:49:18 -0000
Received: from mail-wg0-f52.google.com ([74.125.82.52]) (using TLSv1) by
	exprod5ob117.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt+UHCG8L7kaiehOsoKoTE3UzS4l14Do@postini.com;
	Wed, 22 Jan 2014 01:49:18 PST
Received: by mail-wg0-f52.google.com with SMTP id b13so126021wgh.7
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 01:49:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=MGyAIntpx//1iBoaNqOgl8e3HqdyjDjUTAy+TreuW3A=;
	b=I/bILWUaae1FRyhra2Soq9OypxGLEfoNst0s81wqnVKr3Hl/Zl7p/PiX63Z9kxMCjm
	TwwyQa6BW5g8XEUBIiX+XEQIrM/Q+Lg7NazVImt7AmsGm5OU1RdTiBpvTVvo0hQ41pry
	d+Bf1fLg6JoQvJSJc/YeglJVaHCYjHMOMeAA4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=MGyAIntpx//1iBoaNqOgl8e3HqdyjDjUTAy+TreuW3A=;
	b=Tn2PgeCZDbNPmTf/wWhxnuzAFCOYu4AUpQEXPrrFEGp6LJ/Y8wpdFu1tE5sE799vBq
	0Be0IJaAZzUfpAP0HYs53lrFIxIAiY6gF1I6xcr44pD9hyurXmtGU5RqfePwlGPjbOWT
	U30PHXPAkUFtPpQaW3yLQc9rtaZc5NjLksXhG/LqDVYLzrmvhNKpZmn9MBhe0rv4tcz6
	aDbyiMbNrsuLgak457Zxn1zQtAT3TttR07liqr3EuYNUjn6UmlaSKFxCnrpcxQBFcmus
	3scg2VwU5dDQ89iwqI0tGTXMa9sdhOptHkgwDJPVhwdhAzbGgZSY29Fs28tEh/nqpmp4
	y05A==
X-Gm-Message-State: ALoCoQmR2TamArrgKMKjheBBUJUtYVBAt4y+eq1DDUNNgHpvY2F+WzgA0NrTtfGCcWH4v4oMTNnhWmteXxiBUxors5HZzEmIjtVZLYagu5IJbiHKnrHF0jt7oFkaa/hjqxkCpYWhifBH69CRuS50KPlnRts/hwtFc4bb84XSFFOGVaZiaF/gonA=
X-Received: by 10.180.206.41 with SMTP id ll9mr2115412wic.7.1390384154910;
	Wed, 22 Jan 2014 01:49:14 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.180.206.41 with SMTP id ll9mr2115393wic.7.1390384154755;
	Wed, 22 Jan 2014 01:49:14 -0800 (PST)
Received: by 10.194.92.201 with HTTP; Wed, 22 Jan 2014 01:49:14 -0800 (PST)
In-Reply-To: <20140121212359.GJ2924@reaktio.net>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
	<20140121212359.GJ2924@reaktio.net>
Date: Wed, 22 Jan 2014 11:49:14 +0200
Message-ID: <CAM0X+hNe1tq7kUo=6Way5Zsxo4+0uJs_+wne=DjSjU3v-qFrVw@mail.gmail.com>
From: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
To: =?ISO-8859-1?Q?Pasi_K=E4rkk=E4inen?= <pasik@iki.fi>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3945659488699508057=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3945659488699508057==
Content-Type: multipart/alternative; boundary=001a11c38882a5518004f08c0992

--001a11c38882a5518004f08c0992
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Konrad's repository contains an old version of the drivers (for Linux 3.3).
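
For reference, the dom0-side attach sequence I used, taken verbatim from the
logs quoted below (the device path 1-2.1 and the vport string are specific
to my board and USB stick):

```shell
# Start the PV USB and PV SCSI helper scripts for domU 1
# (the helper scripts themselves are not shown in this thread).
./vusb-start.sh 1 0
./vscsi-start.sh 1 0
# Assign host USB device 1-2.1 to the guest through the usbback sysfs node.
echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport
```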


On Tue, Jan 21, 2014 at 11:23 PM, Pasi K=E4rkk=E4inen <pasik@iki.fi> wrote:

> On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savchenko wrote:
> > Hi,
> >
> > Could someone advise on the issue I am facing?
> >
> > I am trying to run PV USB on the omap5uevm (omap5-panda) board.
> >
> > I use the latest PV USB drivers from Nathanael's server:
> >
> http://members.iinet.net.au/~nathanael/0001-pvusb-driver.linux-next.patch
> >
>
> I think Konrad actually has the Xen PVUSB drivers in his git tree, and
> it has some extra patches compared to Nathanael's version (iirc)..
>
> -- Pasi
>
>
> > I have applied it to kernel 3.8 (dom0) along with some patches for the
> > USB HCD and usbback drivers (attached), and run it on Xen 4.4.0-rc2.
> >
> > I am facing an issue with USB_STORAGE:
> > USB storage is initialized and mounted on domU over the PV USB drivers.
> > But I can only copy small files to the USB storage (no more than
> > ~100-500 kBytes).
> > Then the USB storage falls into an infinite loop
> > (the LEDs keep blinking far longer than the copy should take)
> > and after a few seconds dom0 disconnects the USB device.
> >
> > Both dom0 and domU use kernel 3.8.
> >
> > I observed that usb-storage passes some SCSI requests (from domU)
> > directly to the hardware; I think this is the problem.
> >
> > So, I applied PV SCSI drivers from
> >
> http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=3Drefs=
/heads/devel/xen-scsi.v1.0
> > to kernel 3.8.
> >
> > Then I initialized PV USB & PV SCSI with the scripts vusb-start.sh and
> > vscsi-start.sh respectively.
> > But I am still facing the issue.
> >
> > Dom0 log:
> > [    0.000000] Booting Linux on physical CPU 0x0
> > [    0.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx018739=
4)
> (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014
> > [    0.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7),
> cr=3D10c5387d
> > [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instructio=
n
> cache
> > [    0.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM board
> > [    0.000000] bootconsole [earlycon0] enabled
> > [    0.000000] Memory policy: ECC disabled, Data cache writeback
> > [    0.000000] On node 0 totalpages: 65280
> > [    0.000000] free_area_init_node: node 0, pgdat c3d639f0, node_mem_ma=
p
> c428e000
> > [    0.000000]   Normal zone: 512 pages used for memmap
> > [    0.000000]   Normal zone: 0 pages reserved
> > [    0.000000]   Normal zone: 64768 pages, LIFO batch:15
> > [    0.000000] psci: probing function IDs from device-tree
> > [    0.000000] OMAP5432 ES2.0
> > [    0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=3D1*32768
> > [    0.000000] pcpu-alloc: [0] 0
> > [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.
>  Total pages: 64768
> > [    0.000000] Kernel command line: console=3Dhvc0 earlyprintk
> > [    0.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)
> > [    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072
> bytes)
> > [    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536
> bytes)
> > [    0.000000] __ex_table already sorted, skipping sort
> > [    0.000000] Memory: 255MB =3D 255MB total
> > [    0.000000] Memory: 190640k/190640k available, 71504k reserved, 0K
> highmem
> > [    0.000000] Virtual kernel memory layout:
> > [    0.000000]     vector  : 0xffff0000 - 0xffff1000   (   4 kB)
> > [    0.000000]     fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)
> > [    0.000000]     vmalloc : 0xd0800000 - 0xff000000   ( 744 MB)
> > [    0.000000]     lowmem  : 0xc0000000 - 0xd0000000   ( 256 MB)
> > [    0.000000]     pkmap   : 0xbfe00000 - 0xc0000000   (   2 MB)
> > [    0.000000]     modules : 0xbf000000 - 0xbfe00000   (  14 MB)
> > [    0.000000]       .text : 0xc0008000 - 0xc0493748   (4654 kB)
> > [    0.000000]       .init : 0xc0494000 - 0xc3cfa29c   (57753 kB)
> > [    0.000000]       .data : 0xc3cfc000 - 0xc3d64660   ( 418 kB)
> > [    0.000000]        .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
> > [    0.000000] NR_IRQS:16 nr_irqs:16 16
> > [    0.000000] Architected local timer running at 6.14MHz (virt).
> > [    0.000000] Switching to timer-based delay loop
> > [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns,
> wraps every 3489660920ms
> > [    0.000000] Console: colour dummy device 80x30
> > [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat,
> Inc., Ingo Molnar
> > [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
> > [    0.000000] ... MAX_LOCK_DEPTH:          48
> > [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
> > [    0.000000] ... CLASSHASH_SIZE:          4096
> > [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
> > [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
> > [    0.000000] ... CHAINHASH_SIZE:          16384
> > [    0.000000]  memory used by lock dependency info: 3695 kB
> > [    0.000000]  per task-struct memory footprint: 1152 bytes
> > [    0.046875] Calibrating delay loop (skipped), value calculated using
> timer frequency.. 12.30 BogoMIPS (lpj=3D48000)
> > [    0.054687] pid_max: default: 32768 minimum: 301
> > [    0.054687] Security Framework initialized
> > [    0.062500] Mount-cache hash table entries: 512
> > [    0.070312] CPU: Testing write buffer coherency: ok
> > [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e=
58
> > [    0.085937] devtmpfs: initialized
> > [    0.093750] Xen 4.4 support found, events_irq=3D31
> gnttab_frame_pfn=3D4b000
> > [    0.101562] xen:grant_table: Grant tables using version 1 layout.
> > [    0.101562] Grant table initialized
> > [    0.109375] omap_hwmod: aess: _wait_target_disable failed
> > [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
> > [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
> > [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
> > [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
> > [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
> > [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
> > [    0.234375] pinctrl core: initialized pinctrl subsystem
> > [    0.242187] regulator-dummy: no parameters
> > [    0.242187] NET: Registered protocol family 16
> > [    0.250000] Xen: initializing cpu0
> > [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent
> allocations
> > [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for
> software IO TLB
> > [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped
> at [ce000000-ce7fffff]
> > [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
> > [    0.281250] OMAP GPIO hardware version 0.1
> > [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
> > [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
> > [    0.296875] OMAP DMA hardware revision 0.0
> > [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840
> size 438
> > [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840
> size 56
> > [    0.335937] bio: create slab <bio-0> at 0
> > [    0.343750] xen:balloon: Initialising balloon driver
> > [    0.343750] of_get_named_gpio_flags exited with status 80
> > [    0.343750] hsusb2_reset: 3300 mV
> > [    0.351562] of_get_named_gpio_flags exited with status 79
> > [    0.351562] hsusb3_reset: 3300 mV
> > [    0.351562] SCSI subsystem initialized
> > [    0.359375] libata version 3.00 loaded.
> > [    0.359375] usbcore: registered new interface driver usbfs
> > [    0.367187] usbcore: registered new interface driver hub
> > [    0.367187] usbcore: registered new device driver usb
> > [    0.375000] Switching to clocksource arch_sys_counter
> > [    0.414062] NET: Registered protocol family 2
> > [    0.414062] TCP established hash table entries: 2048 (order: 2, 1638=
4
> bytes)
> > [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes=
)
> > [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
> > [    0.437500] TCP: reno registered
> > [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
> > [    0.453125] NET: Registered protocol family 1
> > [    0.679687] NetWinder Floating Point Emulator V0.97 (double precisio=
n)
> > [    0.687500] VFS: Disk quotas dquot_6.5.2
> > [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 byte=
s)
> > [    0.703125] msgmni has been set to 372
> > [    0.710937] io scheduler noop registered
> > [    0.718750] io scheduler deadline registered
> > [    0.718750] io scheduler cfq registered (default)
> > [    0.726562] xen:xen_evtchn: Event-channel device installed
> > [    0.742187] console [hvc0] enabled, bootconsole disabled
> > [    0.765625] brd: module loaded
> > [    0.781250] loop: module loaded
> > [    0.789062] ahci ahci.0.auto: can't get clock
> > [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS =3D 0x00018041
> > [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
> > [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps
> 0x1 impl platform mode
> > [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only
> pmp pio slum part ccc apst
> > [    0.820312] scsi0 : ahci_platform
> > [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff]
> port 0x100 irq 86
> > [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driv=
er
> > [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
> > [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
> > [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigne=
d
> bus number 1
> > [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
> > [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
> > [    0.898437] usb usb1: New USB device found, idVendor=3D1d6b,
> idProduct=3D0002
> > [    0.906250] usb usb1: New USB device strings: Mfr=3D3, Product=3D2,
> SerialNumber=3D1
> > [    0.914062] usb usb1: Product: EHCI Host Controller
> > [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> ehci_hcd
> > [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
> > [    0.937500] hub 1-0:1.0: USB hub found
> > [    0.937500] hub 1-0:1.0: 3 ports detected
> > [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> > [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
> > [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered,
> assigned bus number 2
> > [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
> > [    1.187500] usb usb2: New USB device found, idVendor=3D1d6b,
> idProduct=3D0001
> > [    1.195312] usb usb2: New USB device strings: Mfr=3D3, Product=3D2,
> SerialNumber=3D1
> > [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
> > [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> ohci_hcd
> > [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
> > [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
> > [    1.226562] hub 2-0:1.0: USB hub found
> > [    1.226562] hub 2-0:1.0: 3 ports detected
> > [    1.359375] usb 1-2: new high-speed USB device number 2 using
> ehci-omap
> > [    1.515625] usb 1-2: New USB device found, idVendor=3D0424,
> idProduct=3D3503
> > [    1.515625] usb 1-2: New USB device strings: Mfr=3D0, Product=3D0,
> SerialNumber=3D0
> > [    1.531250] hub 1-2:1.0: USB hub found
> > [    1.531250] hub 1-2:1.0: 3 ports detected
> > [    1.664062] usb 1-3: new high-speed USB device number 3 using
> ehci-omap
> > [    1.820312] usb 1-3: New USB device found, idVendor=3D0424,
> idProduct=3D9730
> > [    1.820312] usb 1-3: New USB device strings: Mfr=3D0, Product=3D0,
> SerialNumber=3D0
> > [    1.835937] usbcore: registered new interface driver usbback
> > [    1.843750] Initializing USB Mass Storage driver...
> > [    1.843750] usbcore: registered new interface driver usb-storage
> > [    1.851562] USB Mass Storage support registered.
> > [    1.859375] i2c /dev entries driver
> > [    1.859375] usbcore: registered new interface driver usbhid
> > [    1.867187] usbhid: USB HID core driver
> > [    1.875000] TCP: cubic registered
> > [    1.875000] Initializing XFRM netlink socket
> > [    1.882812] NET: Registered protocol family 17
> > [    1.882812] NET: Registered protocol family 15
> > [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30
> variant f rev 0
> > [    1.898437] mux: Failed to setup hwmod io irq -22
> > [    1.898437] Power Management for TI OMAP4PLUS devices.
> > [    1.906250] ThumbEE CPU extension supported.
> > [    1.914062] Registering SWP/SWPB emulation handler
> > [    1.921875] devtmpfs: mounted
> > [    1.968750] Freeing init memory: 57752K
> > # ./vusb-start.sh 1 0
> > [    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9,
> event-channel 3
> > # ./vscsi-start.sh 1 0
> > # echo 1-2.1:1:0:1 > /sys/bus/usb/drivers/usbback/new_vport
> >
> > [   40.796875] usb 1-2.1: new high-speed USB device number 4 using
> ehci-omap
> > [   40.914062] usb 1-2.1: New USB device found, idVendor=3D8564,
> idProduct=3D1000
> > [   40.921875] usb 1-2.1: New USB device strings: Mfr=3D1, Product=3D2,
> SerialNumber=3D3
> > [   40.929687] usb 1-2.1: Product: Mass Storage Device
> > [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> > [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> >
> > DomU log:
> > 0.710937] console [hvc0] enabled, bootconsole disabled
> > [    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq =3D 104) =
is
> a OMAP UART0
> > [    0.718750] omap_uart 4806c000.serial: did not get pins for uart1
> error: -19
> > [    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq =3D 105) =
is
> a OMAP UART1
> > [    0.718750] omap_uart 4806e000.serial: did not get pins for uart3
> error: -19
> > [    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq =3D 102) =
is
> a OMAP UART3
> > [    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq =3D 137) =
is
> a OMAP UART4
> > [    0.726562] omap_uart 48068000.serial: did not get pins for uart5
> error: -19
> > [    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq =3D 138) =
is
> a OMAP UART5
> > [    0.726562] [drm] Initialized drm 1.1.0 20060810
> > [    0.742187] brd: module loaded
> > [    0.757812] loop: module loaded
> > [    0.757812] omap2_mcspi 48098000.spi: pins are not configured from
> the driver
> > [    0.765625] Initialising Xen virtual ethernet driver.
> > [    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driv=
er
> > [    0.765625] ehci-platform: EHCI generic platform driver
> > [    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
> > [    0.765625] vusb vusb-0: new USB bus registered, assigned bus number=
 1
> > [    0.765625] usb usb1: New USB device found, idVendor=3D1d6b,
> idProduct=3D0002
> > [    0.765625] usb usb1: New USB device strings: Mfr=3D3, Product=3D2,
> SerialNumber=3D1
> > [    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
> > [    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6
> xen_hcd
> > [    0.765625] usb usb1: SerialNumber: vusb-0
> > [    0.773437] hub 1-0:1.0: USB hub found
> > [    0.773437] hub 1-0:1.0: 8 ports detected
> > [    0.773437] Initializing USB Mass Storage driver...
> > [    0.773437] usbcore: registered new interface driver usb-storage
> > [    0.773437] USB Mass Storage support registered.
> > [    0.773437] mousedev: PS/2 mouse device common for all mice
> > [    0.781250] usbcore: registered new interface driver usbhid
> > [    0.781250] usbhid: USB HID core driver
> > [    0.789062] TCP: cubic registered
> > [    0.789062] Initializing XFRM netlink socket
> > [    0.789062] NET: Registered protocol family 17
> > [    0.789062] NET: Registered protocol family 15
> > [    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30
> variant f rev 0
> > [    0.789062] mux: Failed to setup hwmod io irq -22
> > [    0.789062] ThumbEE CPU extension supported.
> > [    0.789062] Registering SWP/SWPB emulation handler
> > [    0.789062] dmm 4e000000.dmm: initialized all PAT entries
> > [    0.804687]
> /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open
> rtc device (rtc0)
> > [    0.804687] devtmpfs: mounted
> > [    0.812500] Freeing init memory: 6044K
> >
> > Please press Enter to activate this console.
> > [    6.500000] scsi0 : Xen SCSI frontend driver
> >
> > / # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using
> ehci-omap
> > [   40.914062] usb 1-2.1: New USB device found, idVendor=3D8564,
> idProduct=3D1000
> > [   40.921875] usb 1-2.1: New USB device strings: Mfr=3D1, Product=3D2,
> SerialNumber=3D3
> > [   40.929687] usb 1-2.1: Product: Mass Storage Device
> > [   40.929687] usb 1-2.1: Manufacturer: JetFlash
> > [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
> > [   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
> > (XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
> > [   32.875000] usb 1-1: New USB device found, idVendor=3D8564,
> idProduct=3D1000
> > [   32.875000] usb 1-1: New USB device strings: Mfr=3D1, Product=3D2,
> SerialNumber=3D3
> > [   32.875000] usb 1-1: Product: Mass Storage Device
> > [   32.875000] usb 1-1: Manufacturer: JetFlash
> > [   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
> > [   32.906250] scsi1 : usb-storage 1-1:1.0
> > [   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB
>  1100 PQ: 0 ANSI: 4
> > [   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.1=
0
> GB/7.54 GiB)
> > [   34.140625] sd 1:0:0:0: [sda] Write Protect is off
> > [   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *<--this data
> > may change on different boots*
> > [   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.195312]  sda: sda1
> > [   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
> > [   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
> > [   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk
> >
> >  # lsusb
> > Bus 001 Device 002: ID 8564:1000
> > Bus 001 Device 001: ID 1d6b:0002
> >
> > But it looks like SCSI requests from usb-storage are still passing
> > directly to the hardware instead of going through PV SCSI.
> >
> > Could somebody tell me how to initialize PV SCSI and PV USB correctly?
> >
> > Regards,
> > Alexander
> >
> > Alexander Savchenko (2):
> >   usbback: Add new features
> >   HACK: usb:core:hcd: Do not remapping self dma addresses
> >
> > Nathanael Rensen (1):
> >   pvusb drivers
> >
> >  drivers/usb/core/hcd.c                 |    1 +
> >  drivers/usb/host/Kconfig               |   23 +
> >  drivers/usb/host/Makefile              |    2 +
> >  drivers/usb/host/xen-usbback/Makefile  |    3 +
> >  drivers/usb/host/xen-usbback/common.h  |  170 ++++
> >  drivers/usb/host/xen-usbback/usbback.c | 1272 +++++++++++++++++++++++
> >  drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
> >  drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
> >  drivers/usb/host/xen-usbfront.c        | 1739
> ++++++++++++++++++++++++++++++++
> >  include/xen/interface/io/usbif.h       |  150 +++
> >  10 files changed, 4244 insertions(+)
> >  create mode 100644 drivers/usb/host/xen-usbback/Makefile
> >  create mode 100644 drivers/usb/host/xen-usbback/common.h
> >  create mode 100644 drivers/usb/host/xen-usbback/usbback.c
> >  create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
> >  create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
> >  create mode 100644 drivers/usb/host/xen-usbfront.c
> >  create mode 100644 include/xen/interface/io/usbif.h
> >
> > --
> > 1.7.9.5
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>



--=20

Alexander Savchenko | Kernel developer
GlobalLogic
M +38-093-808-37-33  S darkside.warlock
www.globallogic.com

--001a11c38882a5518004f08c0992
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Konrad&#39;s repository contains an old version of the=
 drivers (for Linux 3.3).</div>
<div class=3D"gmail_extra"><br><br><div class=3D"gmail_quote">On Tue, Jan 2=
1, 2014 at 11:23 PM, Pasi K=E4rkk=E4inen <span dir=3D"ltr">&lt;<a href=3D"m=
ailto:pasik@iki.fi" target=3D"_blank">pasik@iki.fi</a>&gt;</span> wrote:<br=
><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1=
px #ccc solid;padding-left:1ex">
<div class=3D"im">On Tue, Jan 21, 2014 at 06:53:14PM +0200, Alexander Savch=
enko wrote:<br>
&gt; Hi,<br>
&gt;<br>
&gt; Could someone advise on the issue I am facing?<br>
&gt;<br>
&gt; I am trying to run PV USB on omap5uevm (omap5-panda) board.<br>
&gt;<br>
&gt; I use the latest PV USB drivers from Nathanael&#39;s server:<br>
&gt; <a href=3D"http://members.iinet.net.au/~nathanael/0001-pvusb-driver.li=
nux-next.patch" target=3D"_blank">http://members.iinet.net.au/~nathanael/00=
01-pvusb-driver.linux-next.patch</a><br>
&gt;<br>
<br>
</div>I think Konrad actually has the Xen PVUSB drivers in his git tree, an=
d it has<br>
some extra patches compared to Nathanael&#39;s version (iirc)..<br>
<br>
-- Pasi<br>
<div><div class=3D"h5"><br>
<br>
&gt; I have applied it to k3.8(dom0) with some patches for USB HCD, usbback=
 drivers<br>
&gt; (attached) and run on Xen 4.4.0-rc2.<br>
&gt;<br>
&gt; I am facing an issue with USB_STORAGE:<br>
&gt; USB storage inited and mounted on domU over PV USB drivers.<br>
&gt; But I only can copy files on USB storage with small sizes(no more than=
 ~100-500 kBytes).<br>
&gt; Then USB storage falls to infinite loop<br>
&gt; (leds on USB storage blinking all the time, more than needing for copy=
)<br>
&gt; and then after few seconds dom0 disconnected usb device.<br>
&gt;<br>
&gt; Dom0, DomU use k3.8.<br>
&gt;<br>
&gt; I observed that usb-storage uses some scsi requests(from domU) which p=
ass<br>
&gt; directly to hardware, I think this is a problem.<br>
&gt;<br>
&gt; So, I applied PV SCSI drivers from<br>
&gt; <a href=3D"http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/=
log/?id=3Drefs/heads/devel/xen-scsi.v1.0" target=3D"_blank">http://git.kern=
el.org/cgit/linux/kernel/git/konrad/xen.git/log/?id=3Drefs/heads/devel/xen-=
scsi.v1.0</a><br>

&gt; to k3.8.<br>
&gt;<br>
&gt; Then I inited PV USB &amp; PV SCSI with scripts vusb-start.sh and vscs=
i-start.sh accordingly.<br>
&gt; But I still facing issue with this.<br>
&gt;<br>
&gt; Dom0 log:<br>
&gt; [ =A0 =A00.000000] Booting Linux on physical CPU 0x0<br>
&gt; [ =A0 =A00.000000] Linux version 3.8.13-53079-g8f32ae6 (x0187394@uglx0=
187394) (gcc version 4.7 (GCC) ) #55 Tue Jan 21 18:01:39 EET 2014<br>
&gt; [ =A0 =A00.000000] CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7),=
 cr=3D10c5387d<br>
&gt; [ =A0 =A00.000000] CPU: PIPT / VIPT nonaliasing data cache, PIPT instr=
uction cache<br>
&gt; [ =A0 =A00.000000] Machine: OMAP5 Panda board, model: TI OMAP5 uEVM bo=
ard<br>
&gt; [ =A0 =A00.000000] bootconsole [earlycon0] enabled<br>
&gt; [ =A0 =A00.000000] Memory policy: ECC disabled, Data cache writeback<b=
r>
&gt; [ =A0 =A00.000000] On node 0 totalpages: 65280<br>
&gt; [ =A0 =A00.000000] free_area_init_node: node 0, pgdat c3d639f0, node_m=
em_map c428e000<br>
&gt; [ =A0 =A00.000000] =A0 Normal zone: 512 pages used for memmap<br>
&gt; [ =A0 =A00.000000] =A0 Normal zone: 0 pages reserved<br>
&gt; [ =A0 =A00.000000] =A0 Normal zone: 64768 pages, LIFO batch:15<br>
&gt; [ =A0 =A00.000000] psci: probing function IDs from device-tree<br>
&gt; [ =A0 =A00.000000] OMAP5432 ES2.0<br>
&gt; [ =A0 =A00.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=3D1*32768<br>
&gt; [ =A0 =A00.000000] pcpu-alloc: [0] 0<br>
&gt; [ =A0 =A00.000000] Built 1 zonelists in Zone order, mobility grouping =
on. =A0Total pages: 64768<br>
&gt; [ =A0 =A00.000000] Kernel command line: console=3Dhvc0 earlyprintk<br>
&gt; [ =A0 =A00.000000] PID hash table entries: 1024 (order: 0, 4096 bytes)=
<br>
&gt; [ =A0 =A00.000000] Dentry cache hash table entries: 32768 (order: 5, 1=
31072 bytes)<br>
&gt; [ =A0 =A00.000000] Inode-cache hash table entries: 16384 (order: 4, 65=
536 bytes)<br>
&gt; [ =A0 =A00.000000] __ex_table already sorted, skipping sort<br>
&gt; [ =A0 =A00.000000] Memory: 255MB =3D 255MB total<br>
&gt; [ =A0 =A00.000000] Memory: 190640k/190640k available, 71504k reserved,=
 0K highmem<br>
&gt; [ =A0 =A00.000000] Virtual kernel memory layout:<br>
&gt; [ =A0 =A00.000000] =A0 =A0 vector =A0: 0xffff0000 - 0xffff1000 =A0 ( =
=A0 4 kB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 fixmap =A0: 0xfff00000 - 0xfffe0000 =A0 ( 8=
96 kB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 vmalloc : 0xd0800000 - 0xff000000 =A0 ( 744=
 MB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 lowmem =A0: 0xc0000000 - 0xd0000000 =A0 ( 2=
56 MB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 pkmap =A0 : 0xbfe00000 - 0xc0000000 =A0 ( =
=A0 2 MB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 modules : 0xbf000000 - 0xbfe00000 =A0 ( =A0=
14 MB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 =A0 .text : 0xc0008000 - 0xc0493748 =A0 (46=
54 kB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 =A0 .init : 0xc0494000 - 0xc3cfa29c =A0 (57=
753 kB)<br>
&gt; [ =A0 =A00.000000] =A0 =A0 =A0 .data : 0xc3cfc000 - 0xc3d64660 =A0 ( 4=
18 kB)<br>
&gt; [    0.000000]         .bss : 0xc3d64660 - 0xc428d634   (5284 kB)
&gt; [    0.000000] NR_IRQS:16 nr_irqs:16 16
&gt; [    0.000000] Architected local timer running at 6.14MHz (virt).
&gt; [    0.000000] Switching to timer-based delay loop
&gt; [    0.000000] sched_clock: 32 bits at 128 Hz, resolution 7812500ns, wraps every 3489660920ms
&gt; [    0.000000] Console: colour dummy device 80x30
&gt; [    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
&gt; [    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
&gt; [    0.000000] ... MAX_LOCK_DEPTH:          48
&gt; [    0.000000] ... MAX_LOCKDEP_KEYS:        8191
&gt; [    0.000000] ... CLASSHASH_SIZE:          4096
&gt; [    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
&gt; [    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
&gt; [    0.000000] ... CHAINHASH_SIZE:          16384
&gt; [    0.000000]  memory used by lock dependency info: 3695 kB
&gt; [    0.000000]  per task-struct memory footprint: 1152 bytes
&gt; [    0.046875] Calibrating delay loop (skipped), value calculated using timer frequency.. 12.30 BogoMIPS (lpj=48000)
&gt; [    0.054687] pid_max: default: 32768 minimum: 301
&gt; [    0.054687] Security Framework initialized
&gt; [    0.062500] Mount-cache hash table entries: 512
&gt; [    0.070312] CPU: Testing write buffer coherency: ok
&gt; [    0.078125] Setting up static identity map for 0xd0334e00 - 0xd0334e58
&gt; [    0.085937] devtmpfs: initialized
&gt; [    0.093750] Xen 4.4 support found, events_irq=31 gnttab_frame_pfn=4b000
&gt; [    0.101562] xen:grant_table: Grant tables using version 1 layout.
&gt; [    0.101562] Grant table initialized
&gt; [    0.109375] omap_hwmod: aess: _wait_target_disable failed
&gt; [    0.132812] omap_hwmod: dss_dispc: cannot be enabled for reset (3)
&gt; [    0.140625] omap_hwmod: dss_dsi1_a: cannot be enabled for reset (3)
&gt; [    0.148437] omap_hwmod: dss_dsi1_b: cannot be enabled for reset (3)
&gt; [    0.156250] omap_hwmod: dss_dsi1_c: cannot be enabled for reset (3)
&gt; [    0.164062] omap_hwmod: dss_hdmi: cannot be enabled for reset (3)
&gt; [    0.171875] omap_hwmod: dss_rfbi: cannot be enabled for reset (3)
&gt; [    0.234375] pinctrl core: initialized pinctrl subsystem
&gt; [    0.242187] regulator-dummy: no parameters
&gt; [    0.242187] NET: Registered protocol family 16
&gt; [    0.250000] Xen: initializing cpu0
&gt; [    0.250000] DMA: preallocated 256 KiB pool for atomic coherent allocations
&gt; [    0.257812] xen:swiotlb_xen: Warning: only able to allocate 8 MB for software IO TLB
&gt; [    0.265625] software IO TLB [mem 0xde000000-0xde800000] (8MB) mapped at [ce000000-ce7fffff]
&gt; [    0.281250] gpiochip_add: registered GPIOs 0 to 31 on device: gpio
&gt; [    0.281250] OMAP GPIO hardware version 0.1
&gt; [    0.289062] gpiochip_add: registered GPIOs 32 to 63 on device: gpio
&gt; [    0.289062] gpiochip_add: registered GPIOs 64 to 95 on device: gpio
&gt; [    0.296875] OMAP DMA hardware revision 0.0
&gt; [    0.304687] pinctrl-single 4a002840.pinmux: 219 pins at pa fc002840 size 438
&gt; [    0.312500] pinctrl-single 4ae0c840.pinmux: 28 pins at pa fce0c840 size 56
&gt; [    0.335937] bio: create slab &lt;bio-0&gt; at 0
&gt; [    0.343750] xen:balloon: Initialising balloon driver
&gt; [    0.343750] of_get_named_gpio_flags exited with status 80
&gt; [    0.343750] hsusb2_reset: 3300 mV
&gt; [    0.351562] of_get_named_gpio_flags exited with status 79
&gt; [    0.351562] hsusb3_reset: 3300 mV
&gt; [    0.351562] SCSI subsystem initialized
&gt; [    0.359375] libata version 3.00 loaded.
&gt; [    0.359375] usbcore: registered new interface driver usbfs
&gt; [    0.367187] usbcore: registered new interface driver hub
&gt; [    0.367187] usbcore: registered new device driver usb
&gt; [    0.375000] Switching to clocksource arch_sys_counter
&gt; [    0.414062] NET: Registered protocol family 2
&gt; [    0.414062] TCP established hash table entries: 2048 (order: 2, 16384 bytes)
&gt; [    0.421875] TCP bind hash table entries: 2048 (order: 4, 73728 bytes)
&gt; [    0.429687] TCP: Hash tables configured (established 2048 bind 2048)
&gt; [    0.437500] TCP: reno registered
&gt; [    0.437500] UDP hash table entries: 256 (order: 2, 20480 bytes)
&gt; [    0.445312] UDP-Lite hash table entries: 256 (order: 2, 20480 bytes)
&gt; [    0.453125] NET: Registered protocol family 1
&gt; [    0.679687] NetWinder Floating Point Emulator V0.97 (double precision)
&gt; [    0.687500] VFS: Disk quotas dquot_6.5.2
&gt; [    0.695312] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
&gt; [    0.703125] msgmni has been set to 372
&gt; [    0.710937] io scheduler noop registered
&gt; [    0.718750] io scheduler deadline registered
&gt; [    0.718750] io scheduler cfq registered (default)
&gt; [    0.726562] xen:xen_evtchn: Event-channel device installed
&gt; [    0.742187] console [hvc0] enabled, bootconsole disabled
&gt; [    0.765625] brd: module loaded
&gt; [    0.781250] loop: module loaded
&gt; [    0.789062] ahci ahci.0.auto: can't get clock
&gt; [    0.789062] ahci ahci.0.auto: SATA PLL_STATUS = 0x00018041
&gt; [    0.796875] ahci ahci.0.auto: forcing PORTS_IMPL to 0x1
&gt; [    0.804687] ahci ahci.0.auto: AHCI 0001.0300 32 slots 1 ports 3 Gbps 0x1 impl platform mode
&gt; [    0.812500] ahci ahci.0.auto: flags: 64bit ncq sntf pm led clo only pmp pio slum part ccc apst
&gt; [    0.820312] scsi0 : ahci_platform
&gt; [    0.828125] ata1: SATA max UDMA/133 mmio [mem 0x4a140000-0x4a1401ff] port 0x100 irq 86
&gt; [    0.835937] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
&gt; [    0.843750] ehci-omap: OMAP-EHCI Host Controller driver
&gt; [    0.859375] ehci-omap 4a064c00.ehci: EHCI Host Controller
&gt; [    0.867187] ehci-omap 4a064c00.ehci: new USB bus registered, assigned bus number 1
&gt; [    0.875000] ehci-omap 4a064c00.ehci: irq 109, io mem 0x4a064c00
&gt; [    0.898437] ehci-omap 4a064c00.ehci: USB 2.0 started, EHCI 1.00
&gt; [    0.898437] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
&gt; [    0.906250] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
&gt; [    0.914062] usb usb1: Product: EHCI Host Controller
&gt; [    0.921875] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ehci_hcd
&gt; [    0.929687] usb usb1: SerialNumber: 4a064c00.ehci
&gt; [    0.937500] hub 1-0:1.0: USB hub found
&gt; [    0.937500] hub 1-0:1.0: 3 ports detected
&gt; [    1.085937] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
&gt; [    1.085937] ohci-omap3 4a064800.ohci: OMAP3 OHCI Host Controller
&gt; [    1.093750] ohci-omap3 4a064800.ohci: new USB bus registered, assigned bus number 2
&gt; [    1.101562] ohci-omap3 4a064800.ohci: irq 108, io mem 0x4a064800
&gt; [    1.187500] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001
&gt; [    1.195312] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
&gt; [    1.203125] usb usb2: Product: OMAP3 OHCI Host Controller
&gt; [    1.210937] usb usb2: Manufacturer: Linux 3.8.13-53079-g8f32ae6 ohci_hcd
&gt; [    1.210937] usb usb2: SerialNumber: 4a064800.ohci
&gt; [    1.218750] ata1: SATA link down (SStatus 0 SControl 300)
&gt; [    1.226562] hub 2-0:1.0: USB hub found
&gt; [    1.226562] hub 2-0:1.0: 3 ports detected
&gt; [    1.359375] usb 1-2: new high-speed USB device number 2 using ehci-omap
&gt; [    1.515625] usb 1-2: New USB device found, idVendor=0424, idProduct=3503
&gt; [    1.515625] usb 1-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
&gt; [    1.531250] hub 1-2:1.0: USB hub found
&gt; [    1.531250] hub 1-2:1.0: 3 ports detected
&gt; [    1.664062] usb 1-3: new high-speed USB device number 3 using ehci-omap
&gt; [    1.820312] usb 1-3: New USB device found, idVendor=0424, idProduct=9730
&gt; [    1.820312] usb 1-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
&gt; [    1.835937] usbcore: registered new interface driver usbback
&gt; [    1.843750] Initializing USB Mass Storage driver...
&gt; [    1.843750] usbcore: registered new interface driver usb-storage
&gt; [    1.851562] USB Mass Storage support registered.
&gt; [    1.859375] i2c /dev entries driver
&gt; [    1.859375] usbcore: registered new interface driver usbhid
&gt; [    1.867187] usbhid: USB HID core driver
&gt; [    1.875000] TCP: cubic registered
&gt; [    1.875000] Initializing XFRM netlink socket
&gt; [    1.882812] NET: Registered protocol family 17
&gt; [    1.882812] NET: Registered protocol family 15
&gt; [    1.890625] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
&gt; [    1.898437] mux: Failed to setup hwmod io irq -22
&gt; [    1.898437] Power Management for TI OMAP4PLUS devices.
&gt; [    1.906250] ThumbEE CPU extension supported.
&gt; [    1.914062] Registering SWP/SWPB emulation handler
&gt; [    1.921875] devtmpfs: mounted
&gt; [    1.968750] Freeing init memory: 57752K
&gt; # ./vusb-start.sh 1 0
&gt; [    9.289062] xen-usbback:urb-ring-ref 8, conn-ring-ref 9, event-channel 3
&gt; # ./vscsi-start.sh 1 0
&gt; # echo 1-2.1:1:0:1 &gt; /sys/bus/usb/drivers/usbback/new_vport
&gt;
&gt; [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
&gt; [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
&gt; [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
&gt; [   40.929687] usb 1-2.1: Product: Mass Storage Device
&gt; [   40.929687] usb 1-2.1: Manufacturer: JetFlash
&gt; [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
&gt;
&gt; DomU log:
&gt; [    0.710937] console [hvc0] enabled, bootconsole disabled
&gt; [    0.718750] 4806a000.serial: ttyO0 at MMIO 0x4806a000 (irq = 104) is a OMAP UART0
&gt; [    0.718750] omap_uart 4806c000.serial: did not get pins for uart1 error: -19
&gt; [    0.718750] 4806c000.serial: ttyO1 at MMIO 0x4806c000 (irq = 105) is a OMAP UART1
&gt; [    0.718750] omap_uart 4806e000.serial: did not get pins for uart3 error: -19
&gt; [    0.718750] 4806e000.serial: ttyO3 at MMIO 0x4806e000 (irq = 102) is a OMAP UART3
&gt; [    0.718750] 48066000.serial: ttyO4 at MMIO 0x48066000 (irq = 137) is a OMAP UART4
&gt; [    0.726562] omap_uart 48068000.serial: did not get pins for uart5 error: -19
&gt; [    0.726562] 48068000.serial: ttyO5 at MMIO 0x48068000 (irq = 138) is a OMAP UART5
&gt; [    0.726562] [drm] Initialized drm 1.1.0 20060810
&gt; [    0.742187] brd: module loaded
&gt; [    0.757812] loop: module loaded
&gt; [    0.757812] omap2_mcspi 48098000.spi: pins are not configured from the driver
&gt; [    0.765625] Initialising Xen virtual ethernet driver.
&gt; [    0.765625] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
&gt; [    0.765625] ehci-platform: EHCI generic platform driver
&gt; [    0.765625] vusb vusb-0: Xen USB2.0 Virtual Host Controller
&gt; [    0.765625] vusb vusb-0: new USB bus registered, assigned bus number 1
&gt; [    0.765625] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
&gt; [    0.765625] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
&gt; [    0.765625] usb usb1: Product: Xen USB2.0 Virtual Host Controller
&gt; [    0.765625] usb usb1: Manufacturer: Linux 3.8.13-53079-g8f32ae6 xen_hcd
&gt; [    0.765625] usb usb1: SerialNumber: vusb-0
&gt; [    0.773437] hub 1-0:1.0: USB hub found
&gt; [    0.773437] hub 1-0:1.0: 8 ports detected
&gt; [    0.773437] Initializing USB Mass Storage driver...
&gt; [    0.773437] usbcore: registered new interface driver usb-storage
&gt; [    0.773437] USB Mass Storage support registered.
&gt; [    0.773437] mousedev: PS/2 mouse device common for all mice
&gt; [    0.781250] usbcore: registered new interface driver usbhid
&gt; [    0.781250] usbhid: USB HID core driver
&gt; [    0.789062] TCP: cubic registered
&gt; [    0.789062] Initializing XFRM netlink socket
&gt; [    0.789062] NET: Registered protocol family 17
&gt; [    0.789062] NET: Registered protocol family 15
&gt; [    0.789062] VFP support v0.3: implementor 41 architecture 4 part 30 variant f rev 0
&gt; [    0.789062] mux: Failed to setup hwmod io irq -22
&gt; [    0.789062] ThumbEE CPU extension supported.
&gt; [    0.789062] Registering SWP/SWPB emulation handler
&gt; [    0.789062] dmm 4e000000.dmm: initialized all PAT entries
&gt; [    0.804687] /home/x0187394/work/xen/kernel_dom0/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
&gt; [    0.804687] devtmpfs: mounted
&gt; [    0.812500] Freeing init memory: 6044K
&gt;
&gt; Please press Enter to activate this console.
&gt; [    6.500000] scsi0 : Xen SCSI frontend driver
&gt;
&gt; / # [   40.796875] usb 1-2.1: new high-speed USB device number 4 using ehci-omap
&gt; [   40.914062] usb 1-2.1: New USB device found, idVendor=8564, idProduct=1000
&gt; [   40.921875] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
&gt; [   40.929687] usb 1-2.1: Product: Mass Storage Device
&gt; [   40.929687] usb 1-2.1: Manufacturer: JetFlash
&gt; [   40.937500] usb 1-2.1: SerialNumber: 54S44YGYMT2ZM7XO
&gt; [   32.703125] usb 1-1: new high-speed USB device number 2 using vusb
&gt; (XEN) mm.c:1176:d0 gnttab_mark_dirty not implemented yet
&gt; [   32.875000] usb 1-1: New USB device found, idVendor=8564, idProduct=1000
&gt; [   32.875000] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
&gt; [   32.875000] usb 1-1: Product: Mass Storage Device
&gt; [   32.875000] usb 1-1: Manufacturer: JetFlash
&gt; [   32.875000] usb 1-1: SerialNumber: 54S44YGYMT2ZM7XO
&gt; [   32.906250] scsi1 : usb-storage 1-1:1.0
&gt; [   34.117187] scsi 1:0:0:0: Direct-Access     JetFlash Transcend 8GB    1100 PQ: 0 ANSI: 4
&gt; [   34.132812] sd 1:0:0:0: [sda] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
&gt; [   34.140625] sd 1:0:0:0: [sda] Write Protect is off
&gt; [   34.140625] sd 1:0:0:0: [sda] Mode Sense: 00 f1 7f ff *&lt;-- this data may change on different boots*
&gt; [   34.156250] sd 1:0:0:0: [sda] Asking for cache data failed
&gt; [   34.156250] sd 1:0:0:0: [sda] Assuming drive cache: write through
&gt; [   34.179687] sd 1:0:0:0: [sda] Asking for cache data failed
&gt; [   34.179687] sd 1:0:0:0: [sda] Assuming drive cache: write through
&gt; [   34.195312]  sda: sda1
&gt; [   34.203125] sd 1:0:0:0: [sda] Asking for cache data failed
&gt; [   34.203125] sd 1:0:0:0: [sda] Assuming drive cache: write through
&gt; [   34.203125] sd 1:0:0:0: [sda] Attached SCSI removable disk
&gt;
&gt;  # lsusb
&gt; Bus 001 Device 002: ID 8564:1000
&gt; Bus 001 Device 001: ID 1d6b:0002
&gt;
&gt; But it looks like SCSI requests from usb-storage are still passed directly
&gt; to the hardware instead of going through PV SCSI.
&gt;
&gt; Could somebody tell me how to initialize PV SCSI and PV USB correctly?
&gt;
&gt; Regards,
&gt; Alexander
&gt;
&gt; Alexander Savchenko (2):
&gt;   usbback: Add new features
&gt;   HACK: usb:core:hcd: Do not remapping self dma addresses
&gt;
&gt; Nathanael Rensen (1):
&gt;   pvusb drivers
&gt;
&gt;  drivers/usb/core/hcd.c                 |    1 +
&gt;  drivers/usb/host/Kconfig               |   23 +
&gt;  drivers/usb/host/Makefile              |    2 +
&gt;  drivers/usb/host/xen-usbback/Makefile  |    3 +
&gt;  drivers/usb/host/xen-usbback/common.h  |  170 ++++
&gt;  drivers/usb/host/xen-usbback/usbback.c | 1272 ++++++++++++++++++++++++++
&gt;  drivers/usb/host/xen-usbback/usbdev.c  |  402 ++++++++
&gt;  drivers/usb/host/xen-usbback/xenbus.c  |  482 +++++++++
&gt;  drivers/usb/host/xen-usbfront.c        | 1739 ++++++++++++++++++++++++++++++
&gt;  include/xen/interface/io/usbif.h       |  150 +++
&gt;  10 files changed, 4244 insertions(+)
&gt;  create mode 100644 drivers/usb/host/xen-usbback/Makefile
&gt;  create mode 100644 drivers/usb/host/xen-usbback/common.h
&gt;  create mode 100644 drivers/usb/host/xen-usbback/usbback.c
&gt;  create mode 100644 drivers/usb/host/xen-usbback/usbdev.c
&gt;  create mode 100644 drivers/usb/host/xen-usbback/xenbus.c
&gt;  create mode 100644 drivers/usb/host/xen-usbfront.c
&gt;  create mode 100644 include/xen/interface/io/usbif.h
&gt;
&gt; --
&gt; 1.7.9.5
&gt;
&gt;
&gt; _______________________________________________
&gt; Xen-devel mailing list
&gt; Xen-devel@lists.xen.org
&gt; http://lists.xen.org/xen-devel
-- 
Alexander Savchenko | Kernel developer
GlobalLogic
M +38-093-808-37-33  S darkside.warlock
www.globallogic.com



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:50:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uSW-0006Fh-5R; Wed, 22 Jan 2014 09:50:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uSU-0006FW-Sn
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:50:55 +0000
Received: from [85.158.137.68:28076] by server-10.bemta-3.messagelabs.com id
	EF/75-23989-C749FD25; Wed, 22 Jan 2014 09:50:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390384250!10611493!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9204 invoked from network); 22 Jan 2014 09:50:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:50:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93180016"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:50:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 04:50:49 -0500
Message-ID: <1390384248.32519.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 22 Jan 2014 09:50:48 +0000
In-Reply-To: <20140121220355.GA6557@phenom.dumpdata.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140121220355.GA6557@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, stefano.panella@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 17:03 -0500, Konrad Rzeszutek Wilk wrote:

(nb: I posted a v3 at
http://article.gmane.org/gmane.linux.ports.arm.kernel/295594
)

> On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> > The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> > causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the physical address.
> > 
> > This can happen in practice on ARM where foreign pages can be above 4GB even
> > though the local kernel does not have LPAE page tables enabled (which is
> > totally reasonable if the guest does not itself have >4GB of RAM). In this
> > case the kernel still maps the foreign pages at a phys addr below 4G (as it
> > must) but the resulting DMA address (returned by the grant map operation) is
> > much higher.
> > 
> > This is analogous to a hardware device which has its view of RAM mapped up
> > high for some reason.
> > 
> > This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> > systems with more than 4GB of RAM.
> 
> There was another patch posted by somebody from Citrix for a fix on
> 32-bit x86 dom0 with more than 4GB of RAM.
> 
> Their fix was in the generic parts of the code, changing most of the
> 'unsigned' types to 'phys_addr_t' or the like. Is that patch better, or
> will this patch replace it?

I believe they are orthogonal; at least I'm not (yet) hitting the same
issue as Stefano P. The alloc-coherent code paths are not involved in the
issue I'm seeing, because it involves foreign pages whose MFN/dma_addr is
very high, not DMA to devices which are mapped up high.

Ian.

> It was " dma-mapping: dma_alloc_coherent_mask return dma_addr_t"
> https://lkml.org/lkml/2013/12/10/593
> 
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> >  arch/arm/Kconfig          |    1 +
> >  drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
> >  2 files changed, 13 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index c1f1a7e..24307dc 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -1885,6 +1885,7 @@ config XEN
> >  	depends on !GENERIC_ATOMIC64
> >  	select ARM_PSCI
> >  	select SWIOTLB_XEN
> > +	select ARCH_DMA_ADDR_T_64BIT
> >  	help
> >  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
> >  
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 1eac073..b626c79 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -77,12 +77,22 @@ static u64 start_dma_addr;
> >  
> >  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
> >  {
> > -	return phys_to_machine(XPADDR(paddr)).maddr;
> > +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> > +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
> > +	dma |= paddr & ~PAGE_MASK;
> > +	return dma;
> >  }
> >  
> >  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
> >  {
> > -	return machine_to_phys(XMADDR(baddr)).paddr;
> > +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> > +	phys_addr_t paddr = dma;
> > +
> > +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
> > +
> > +	paddr |= baddr & ~PAGE_MASK;
> > +
> > +	return paddr;
> >  }
> >  
> >  static inline dma_addr_t xen_virt_to_bus(void *address)
> > -- 
> > 1.7.10.4
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Jan 22 09:53:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:53:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uVF-0006SG-Q1; Wed, 22 Jan 2014 09:53:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uVE-0006S5-00
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:53:44 +0000
Received: from [85.158.139.211:48966] by server-14.bemta-5.messagelabs.com id
	5E/2B-24200-7259FD25; Wed, 22 Jan 2014 09:53:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390384421!924815!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6753 invoked from network); 22 Jan 2014 09:53:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:53:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95229496"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:53:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 04:53:40 -0500
Message-ID: <1390384419.32519.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:53:39 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up make-flight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The following series refactors make-flight to move some of the useful
guts into a helper shell library which can be used for creating
alternative flights. My goal was to create a flight which ran distro
domU tests (starting with Debian), which I've posted before and you
rightly commented that make-flight could do with some refactoring...

I didn't actually get as far as a working distro flight (or rather I
haven't addressed the comments you had on test cases last time) but my
make-distro-flight is below for reference. It is much smaller than last
time!

There's a lot of code motion and whitespace cleanup in here; I've tried
to split it into separate patches (hence the large number of them).

I've confirmed with mg-show-flight-runvars that things only changed as
expected:

osstest				+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
xen-4.0-testing			No change
xen-4.1-testing			No change
xen-4.2-testing			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
xen-4.3-testing			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
xen-unstable			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
qemu-upstream-unstable		No change
qemu-upstream-4.2-testing	No change
qemu-upstream-4.3-testing	No change
linux-3.10			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
linux-3.4			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
linux-arm-xen			No change
linux-linus			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
linux-mingo-tip-master		+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
linux-3.0			+test-amd64-i386-xl-qemuu-win7-amd64 & +test-amd64-i386-xl-qemuu-winxpsp3-vcpus1

Phew! (NB: no new tests for qemu-upstream-* because they are already included)

I'm a little uncertain about the number of global variables which are
assumed to be in scope in various places, but the alternative was dozens
of parameters to each function, which given the lack of named arguments
to functions in shell seemed like an equally bad proposition.

Ian.

8<--------------------------

#!/bin/bash

# This is part of "osstest", an automated testing framework for Xen.
# Copyright (C) 2009-2013 Citrix Inc.
# 
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# 
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.
# 
# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.


set -e

branch=$1
xenbranch=$2
blessing=$3
buildflight=$4

flight=`./cs-flight-create $blessing $branch`

. ap-common
. cri-common
. mfi-common

defsuite=`getconfig DebianSuite`
defguestsuite=`getconfig GuestDebianSuite`

if [ x$buildflight = x ]; then

  if [ "x$BUILD_LVEXTEND_MAX" != x ]; then
     BUILD_RUNVARS+=" build_lvextend_max=$BUILD_LVEXTEND_MAX "
  fi

  WANT_XEND=false REVISION_LINUX_OLD=disable

  create_build_jobs

else

  bfi=$buildflight.

fi

job_create_test_filter_callback () {
    :
}

test_matrix_branch_filter_callback () {
    :
}

test_do_one_arch_dist () {
  job_create_test test-$xenarch$kern-$dom0arch-$domU-$dist-di   \
    test-debian-di xl                                           \
      xenbuildjob=${bfi}build-$xenarch                          \
      kernbuildjob=${bfi}build-$dom0arch-$kernbuild             \
      buildjob=${bfi}build-$dom0arch                            \
      debian_arch=$domU                                         \
      debian_dist=$dist                                         \
      all_hostflags=$most_hostflags
}

test_matrix_do_one () {
  case ${xenarch} in
  amd64) domUarches="amd64 i386";;
  i386)  domUarches="";;
  armhf) domUarches="armhf";;
  esac

  for domU in $domUarches ; do
    for dist in squeeze wheezy jessie ; do
      case ${domU}_$dist in
      armhf_squeeze) continue;;
      *) ;;
      esac

      test_do_one_arch_dist

    done
  done
}

test_matrix_iterate

echo $flight

# Local variables:
# mode: sh
# sh-basic-offset: 2
# indent-tabs-mode: nil
# End:
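The filtering in test_matrix_do_one can be exercised standalone. Note that the case pattern needs to be written ${domU}_$dist: with a bare $domU_ the shell would expand the (unset) variable domU_, so armhf_squeeze would never match. This sketch replaces job creation with appending to a hypothetical $pairs variable:

```shell
# Standalone sketch of the test_matrix_do_one arch/dist matrix above;
# job_create_test is replaced by collecting pairs into $pairs.
pairs=
matrix_for_xenarch () {
  local xenarch=$1 domUarches domU dist
  case $xenarch in
  amd64) domUarches="amd64 i386";;
  i386)  domUarches="";;
  armhf) domUarches="armhf";;
  esac
  for domU in $domUarches; do
    for dist in squeeze wheezy jessie; do
      case ${domU}_$dist in
      armhf_squeeze) continue;;   # skipped, as in the original script
      esac
      pairs="$pairs $domU-$dist"
    done
  done
}

matrix_for_xenarch armhf
echo $pairs   # → armhf-wheezy armhf-jessie
```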



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006ae-39; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWZ-0006ZX-8u
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:07 +0000
Received: from [85.158.139.211:7356] by server-8.bemta-5.messagelabs.com id
	96/D9-29838-A759FD25; Wed, 22 Jan 2014 09:55:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1947 invoked from network); 22 Jan 2014 09:55:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-Ee;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:46 +0000
Message-ID: <1390384501-20552-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 02/17] make-flight: refactor common
	function "stripy" into helper library
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Will be useful for other make-flight variants.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 18 +-----------------
 mfi-common  | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 17 deletions(-)
 create mode 100644 mfi-common

diff --git a/make-flight b/make-flight
index ddd4427..fdde3af 100755
--- a/make-flight
+++ b/make-flight
@@ -28,6 +28,7 @@ flight=`./cs-flight-create $blessing $branch`
 
 . ap-common
 . cri-common
+. mfi-common
 
 defsuite=`getconfig DebianSuite`
 defguestsuite=`getconfig GuestDebianSuite`
@@ -208,23 +209,6 @@ else
 
 fi
 
-stripy () {
-        local out_vn="$1"; shift
-        local out_0="$1"; shift
-        local out_1="$1"; shift
-        local out_val=0
-        local this_val
-        local this_cmp
-        while [ $# != 0 ]; do
-                this_val="$1"; shift
-                this_cmp="$1"; shift
-                if [ "x$this_val" = "x$this_cmp" ]; then
-                        out_val=$(( $out_val ^ 1 ))
-                fi
-        done
-        eval "$out_vn=\"\$out_$out_val\""
-}
-
 job_create_test () {
         local job=$1; shift
         local recipe=$1; shift
diff --git a/mfi-common b/mfi-common
new file mode 100644
index 0000000..ec0beca
--- /dev/null
+++ b/mfi-common
@@ -0,0 +1,38 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2014 Citrix Inc.
+# 
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+# 
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+stripy () {
+  local out_vn="$1"; shift
+  local out_0="$1"; shift
+  local out_1="$1"; shift
+  local out_val=0
+  local this_val
+  local this_cmp
+  while [ $# != 0 ]; do
+    this_val="$1"; shift
+    this_cmp="$1"; shift
+    if [ "x$this_val" = "x$this_cmp" ]; then
+      out_val=$(( $out_val ^ 1 ))
+    fi
+  done
+  eval "$out_vn=\"\$out_$out_val\""
+}
+
+# Local variables:
+# mode: sh
+# sh-basic-offset: 2
+# indent-tabs-mode: nil
+# End:
-- 
1.8.5.2
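The function being moved is a parity toggle: each (value, comparand) pair that matches flips the choice between the two candidate outputs, which make-flight uses to stripe jobs across configurations. A quick standalone check (the body is as in the patch; the demo variable names are made up):

```shell
stripy () {
  local out_vn="$1"; shift   # name of the variable to set
  local out_0="$1"; shift    # result when an even number of pairs match
  local out_1="$1"; shift    # result when an odd number of pairs match
  local out_val=0
  local this_val
  local this_cmp
  while [ $# != 0 ]; do
    this_val="$1"; shift
    this_cmp="$1"; shift
    if [ "x$this_val" = "x$this_cmp" ]; then
      out_val=$(( $out_val ^ 1 ))
    fi
  done
  eval "$out_vn=\"\$out_$out_val\""
}

# One matching pair (a = a) gives odd parity, so the second output wins:
stripy result even odd  a a  b c
echo "$result"   # → odd
```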


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWf-0006gk-Ui; Wed, 22 Jan 2014 09:55:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWd-0006cQ-HP
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.139.211:7804] by server-14.bemta-5.messagelabs.com id
	2A/FE-24200-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390384507!8515375!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13529 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181290"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-MB;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:48 +0000
Message-ID: <1390384501-20552-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 04/17] make-flight: refactor build job
	creation into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is pure code motion *except* I have aligned the backslash continuation
lines at the same time -- I was unable to resist doing so.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 162 +---------------------------------------------------------
 mfi-common  | 165 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+), 161 deletions(-)

diff --git a/make-flight b/make-flight
index 19f60f1..5136d09 100755
--- a/make-flight
+++ b/make-flight
@@ -39,167 +39,7 @@ if [ x$buildflight = x ]; then
      BUILD_RUNVARS+=" build_lvextend_max=$BUILD_LVEXTEND_MAX "
   fi
 
-  for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
-
-    if [ "x$arch" = xdisable ]; then continue; fi
-
-    case "$arch" in
-    armhf)
-      case "$branch" in
-      linux-arm-xen) ;;
-      linux-*) continue;;
-      qemu-*) continue;;
-      esac
-      case "$xenbranch" in
-      xen-3.*-testing) continue;;
-      xen-4.0-testing) continue;;
-      xen-4.1-testing) continue;;
-      xen-4.2-testing) continue;;
-      esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX_ARM
-        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-      "
-      pvops_kconfig_overrides="
-        kconfig_override_y=CONFIG_EXT4_FS
-      "
-      ;;
-    *)
-      case "$branch" in
-      linux-arm-xen) continue;;
-      esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX
-        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-      "
-      ;;
-    esac
-
-    case "$arch" in
-    armhf) suite="wheezy";;
-    *)     suite=$defsuite;;
-    esac
-
-    if [ $suite != $defsuite ] ; then
-        suite_runvars="host_suite=$suite"
-    else
-        suite_runvars=
-    fi
-
-    # In 4.4 onwards xend is off by default. If necessary we build a
-    # separate set of binaries with xend enabled in order to run those
-    # tests which use xend.
-    case "$arch" in
-    i386|amd64) want_xend=true;;
-    *) want_xend=false;;
-    esac
-
-    case "$xenbranch" in
-    xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.0-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.1-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.2-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.3-testing) build_defxend=$want_xend; build_extraxend=false;;
-    *) build_defxend=false;
-       build_extraxend=$want_xend
-    esac
-
-    case "$xenbranch" in
-    xen-3.*-testing) enable_ovmf=false;;
-    xen-4.0-testing) enable_ovmf=false;;
-    xen-4.1-testing) enable_ovmf=false;;
-    xen-4.2-testing) enable_ovmf=false;;
-    xen-4.3-testing) enable_ovmf=false;;
-    *) enable_ovmf=true;
-    esac
-
-    eval "
-        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
-    "
-
-    build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
-
-    ./cs-job-create $flight build-$arch build                                \
-                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf       \
-        tree_qemu=$TREE_QEMU         \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                revision_xen=$REVISION_XEN                                   \
-                revision_qemu=$REVISION_QEMU                                 \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM
-
-    if [ $build_extraxend = "true" ] ; then
-    ./cs-job-create $flight build-$arch-xend build                           \
-                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
-        tree_qemu=$TREE_QEMU         \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                revision_xen=$REVISION_XEN                                   \
-                revision_qemu=$REVISION_QEMU                                 \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM
-    fi
-
-    ./cs-job-create $flight build-$arch-pvops build-kern                     \
-                arch=$arch kconfighow=xen-enable-xen-config                  \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                xen_kernels=linux-2.6-pvops                                  \
-                revision_xen=$REVISION_XEN                                   \
-                $pvops_kernel $pvops_kconfig_overrides                       \
-                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}        \
-                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
-                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
-
-    case "$arch" in
-    armhf) continue;; # don't do any other kernel builds
-    esac
-
-    if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
-
-      ./cs-job-create $flight build-$arch-oldkern build                 \
-                arch=$arch                                              \
-        tree_qemu=$TREE_QEMU    \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN              \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
-                $arch_runvars $suite_runvars                            \
-                host_hostflags=$build_hostflags \
-                xen_kernels=linux-2.6-xen                               \
-                revision_xen=$REVISION_XEN                              \
-                revision_qemu=$REVISION_QEMU                            \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
-        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg   \
-        revision_linux=$REVISION_LINUX_OLD
-
-    fi
-
-    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
-      # XCP dom0 kernel is 32-bit only
-
-      ./cs-job-create $flight build-$arch-xcpkern build-kern                  \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS              \
-                $arch_runvars $suite_runvars                                  \
-                arch=$arch                                              \
-        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-                host_hostflags=$build_hostflags     \
-                tree_xen=$TREE_XEN            \
-                revision_xen=$REVISION_XEN                                    \
-        tree_linux=$TREEBASE_LINUX_XCP.hg    \
-     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg \
-        revision_linux=$REVISION_LINUX_XCP                                    \
-        revision_pq_linux=$REVISION_PQ_LINUX_XCP
-
-    fi
-
-  done
+  create_build_jobs
 
 else
 
diff --git a/mfi-common b/mfi-common
index ec0beca..97bc506 100644
--- a/mfi-common
+++ b/mfi-common
@@ -31,6 +31,171 @@ stripy () {
   eval "$out_vn=\"\$out_$out_val\""
 }
 
+create_build_jobs () {
+
+  for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
+
+    if [ "x$arch" = xdisable ]; then continue; fi
+
+    case "$arch" in
+    armhf)
+      case "$branch" in
+      linux-arm-xen) ;;
+      linux-*) continue;;
+      qemu-*) continue;;
+      esac
+      case "$xenbranch" in
+      xen-3.*-testing) continue;;
+      xen-4.0-testing) continue;;
+      xen-4.1-testing) continue;;
+      xen-4.2-testing) continue;;
+      esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+      "
+      pvops_kconfig_overrides="
+        kconfig_override_y=CONFIG_EXT4_FS
+      "
+      ;;
+    *)
+      case "$branch" in
+      linux-arm-xen) continue;;
+      esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+      "
+      ;;
+    esac
+
+    case "$arch" in
+    armhf) suite="wheezy";;
+    *)     suite=$defsuite;;
+    esac
+
+    if [ $suite != $defsuite ] ; then
+        suite_runvars="host_suite=$suite"
+    else
+        suite_runvars=
+    fi
+
+    # In 4.4 onwards xend is off by default. If necessary we build a
+    # separate set of binaries with xend enabled in order to run those
+    # tests which use xend.
+    case "$arch" in
+    i386|amd64) want_xend=true;;
+    *) want_xend=false;;
+    esac
+
+    case "$xenbranch" in
+    xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.0-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.1-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.2-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.3-testing) build_defxend=$want_xend; build_extraxend=false;;
+    *) build_defxend=false;
+       build_extraxend=$want_xend
+    esac
+
+    case "$xenbranch" in
+    xen-3.*-testing) enable_ovmf=false;;
+    xen-4.0-testing) enable_ovmf=false;;
+    xen-4.1-testing) enable_ovmf=false;;
+    xen-4.2-testing) enable_ovmf=false;;
+    xen-4.3-testing) enable_ovmf=false;;
+    *) enable_ovmf=true;
+    esac
+
+    eval "
+        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
+    "
+
+    build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
+
+    ./cs-job-create $flight build-$arch build                                \
+                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf\
+        tree_qemu=$TREE_QEMU                                                 \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
+
+    if [ $build_extraxend = "true" ] ; then
+    ./cs-job-create $flight build-$arch-xend build                           \
+                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
+        tree_qemu=$TREE_QEMU                                                 \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
+    fi
+
+    ./cs-job-create $flight build-$arch-pvops build-kern                     \
+                arch=$arch kconfighow=xen-enable-xen-config                  \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                xen_kernels=linux-2.6-pvops                                  \
+                revision_xen=$REVISION_XEN                                   \
+                $pvops_kernel $pvops_kconfig_overrides                       \
+                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}             \
+                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
+                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
+
+    case "$arch" in
+    armhf) continue;; # don't do any other kernel builds
+    esac
+
+    if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
+
+      ./cs-job-create $flight build-$arch-oldkern build                 \
+                arch=$arch                                              \
+        tree_qemu=$TREE_QEMU                                            \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                  \
+        tree_xen=$TREE_XEN                                              \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                host_hostflags=$build_hostflags                         \
+                xen_kernels=linux-2.6-xen                               \
+                revision_xen=$REVISION_XEN                              \
+                revision_qemu=$REVISION_QEMU                            \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
+        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg           \
+        revision_linux=$REVISION_LINUX_OLD
+
+    fi
+
+    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
+      # XCP dom0 kernel is 32-bit only
+
+      ./cs-job-create $flight build-$arch-xcpkern build-kern            \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                arch=$arch                                              \
+        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
+                host_hostflags=$build_hostflags                         \
+                tree_xen=$TREE_XEN                                      \
+                revision_xen=$REVISION_XEN                              \
+        tree_linux=$TREEBASE_LINUX_XCP.hg                               \
+     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg                            \
+        revision_linux=$REVISION_LINUX_XCP                              \
+        revision_pq_linux=$REVISION_PQ_LINUX_XCP
+
+    fi
+
+  done
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
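An aside on the `eval "arch_runvars=\"\$ARCH_RUNVARS_$arch\""` idiom used in the hunk above: it is a portable-shell substitute for bash's `${!var}` indirect expansion. A minimal sketch, with variable values invented purely for illustration:

```shell
#!/bin/sh
# Per-arch extra runvars, as an external config would set them
# (these particular values are made up for the example).
ARCH_RUNVARS_amd64="xen_debug=y"
ARCH_RUNVARS_armhf="uboot=y"

lookup_arch_runvars () {
  # $1 is the arch. The escaped \$ survives until eval time, so the
  # second round of expansion dereferences ARCH_RUNVARS_<arch>.
  eval "arch_runvars=\"\$ARCH_RUNVARS_$1\""
}

lookup_arch_runvars amd64
echo "$arch_runvars"    # xen_debug=y
```

The double expansion is why the variable name must be trusted input; here `$arch` comes from a fixed list, so that is safe.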
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWd-0006dL-TF; Wed, 22 Jan 2014 09:55:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bH-Cs
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:10 +0000
Received: from [85.158.139.211:50865] by server-1.bemta-5.messagelabs.com id
	7C/16-21065-D759FD25; Wed, 22 Jan 2014 09:55:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2297 invoked from network); 22 Jan 2014 09:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181289"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-1l;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:51 +0000
Message-ID: <1390384501-20552-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 07/17] mfi-common: restrict scope of
	local vars in create_build_jobs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mfi-common b/mfi-common
index 3342acc..28c3f3f 100644
--- a/mfi-common
+++ b/mfi-common
@@ -33,6 +33,13 @@ stripy () {
 
 create_build_jobs () {
 
+  local arch
+  local pvops_kernel pvops_kconfig_overrides
+  local suite suite_runvars
+  local want_xend build_defxend build_extraxend
+  local enable_ovmf
+  local build_hostflags
+
   for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
 
     if [ "x$arch" = xdisable ]; then continue; fi
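The point of the `local` declarations this patch adds: without them, every variable a shell function assigns leaks into the caller's scope. A stripped-down sketch (function names invented; note `local` is not strictly POSIX, but dash and bash both support it):

```shell
#!/bin/sh
suite=global-suite

with_local () {
  local suite
  suite=inner            # assignment stays confined to this function
}

without_local () {
  suite=clobbered        # assignment leaks into the caller's scope
}

with_local
echo "$suite"            # global-suite

without_local
echo "$suite"            # clobbered
```

In mfi-common this matters because `create_build_jobs` and the surrounding top-level code share names like `suite_runvars` and `arch_runvars`.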
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWf-0006fY-8N; Wed, 22 Jan 2014 09:55:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWd-0006cC-6a
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.137.68:37056] by server-11.bemta-3.messagelabs.com id
	41/0A-19379-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390384506!6937372!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20198 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181314"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:08 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-O0;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:59 +0000
Message-ID: <1390384501-20552-15-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 15/17] make-flight: Refactor test matrix
	iteration into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight |  92 +-----------------------------------------------------
 mfi-common  | 101 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+), 91 deletions(-)

diff --git a/make-flight b/make-flight
index 177523b..4a144ec 100755
--- a/make-flight
+++ b/make-flight
@@ -241,97 +241,7 @@ test_matrix_do_one () {
   fi
 }
 
-for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
-
-  if [ "x$xenarch" = xdisable ]; then continue; fi
-
-  test_matrix_branch_filter_callback || continue
-
-  case "$xenarch" in
-  armhf)
-        # Arm from 4.3 onwards only
-        case "$xenbranch" in
-        xen-3.*-testing) continue;;
-        xen-4.0-testing) continue;;
-        xen-4.1-testing) continue;;
-        xen-4.2-testing) continue;;
-        *) ;;
-        esac
-        ;;
-  i386)
-        # 32-bit Xen is dropped from 4.3 onwards
-        case "$xenbranch" in
-        xen-3.*-testing) ;;
-        xen-4.0-testing) ;;
-        xen-4.1-testing) ;;
-        xen-4.2-testing) ;;
-        *) continue ;;
-        esac
-        ;;
-  amd64)
-        ;;
-  esac
-
-  case "$xenarch" in
-  armhf) suite="wheezy";  guestsuite="wheezy";;
-  *)     suite=$defsuite; guestsuite=$defguestsuite;;
-  esac
-
-  if [ $suite != $defsuite ] ; then
-      suite_runvars="host_suite=$suite"
-  else
-      suite_runvars=
-  fi
-
-  case "$xenbranch" in
-  xen-3.*-testing)      onetoolstack=xend ;;
-  xen-4.0-testing)      onetoolstack=xend ;;
-  xen-4.1-testing)      onetoolstack=xend ;;
-  *)                    onetoolstack=xl ;;
-  esac
-
-  for kern in ''; do
-
-    case $kern in
-    '')
-                kernbuild=pvops
-                kernkind=pvops
-                ;;
-    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
-    esac
-
-    for dom0arch in i386 amd64 armhf; do
-
-      case ${xenarch}_${dom0arch} in
-          amd64_amd64) ;;
-          amd64_i386) ;;
-          i386_i386) ;;
-          armhf_armhf) ;;
-          *) continue ;;
-      esac
-
-      eval "
-          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
-      "
-
-      debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
-      if [ $guestsuite != $defguestsuite ] ; then
-          debian_runvars="$debian_runvars debian_suite=$guestsuite"
-      fi
-
-      most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
-
-      most_runvars="
-                arch=$dom0arch                                  \
-                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
-                kernkind=$kernkind                              \
-                $arch_runvars $suite_runvars
-                "
-
-      test_matrix_do_one
-    done
-  done
-done
+test_matrix_iterate
 
 echo $flight
 
diff --git a/mfi-common b/mfi-common
index 373110b..5529979 100644
--- a/mfi-common
+++ b/mfi-common
@@ -217,6 +217,107 @@ job_create_test () {
     xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
+# Iterate over xenarch, dom0arch and kernel calling test_matrix_do_one
+# for each combination.
+#
+# Filters nonsensical combinations.
+#
+# Provides various convenience variables for the callback.
+#
+test_matrix_iterate () {
+  for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
+
+    if [ "x$xenarch" = xdisable ]; then continue; fi
+
+    test_matrix_branch_filter_callback || continue
+
+    case "$xenarch" in
+    armhf)
+          # Arm from 4.3 onwards only
+          case "$xenbranch" in
+          xen-3.*-testing) continue;;
+          xen-4.0-testing) continue;;
+          xen-4.1-testing) continue;;
+          xen-4.2-testing) continue;;
+          *) ;;
+          esac
+          ;;
+    i386)
+          # 32-bit Xen is dropped from 4.3 onwards
+          case "$xenbranch" in
+          xen-3.*-testing) ;;
+          xen-4.0-testing) ;;
+          xen-4.1-testing) ;;
+          xen-4.2-testing) ;;
+          *) continue ;;
+          esac
+          ;;
+    amd64)
+          ;;
+    esac
+
+    case "$xenarch" in
+    armhf) suite="wheezy";  guestsuite="wheezy";;
+    *)     suite=$defsuite; guestsuite=$defguestsuite;;
+    esac
+
+    if [ $suite != $defsuite ] ; then
+        suite_runvars="host_suite=$suite"
+    else
+        suite_runvars=
+    fi
+
+    case "$xenbranch" in
+    xen-3.*-testing)      onetoolstack=xend ;;
+    xen-4.0-testing)      onetoolstack=xend ;;
+    xen-4.1-testing)      onetoolstack=xend ;;
+    *)                    onetoolstack=xl ;;
+    esac
+
+    for kern in ''; do
+
+      case $kern in
+      '')
+                  kernbuild=pvops
+                  kernkind=pvops
+                  ;;
+      *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
+      esac
+
+      for dom0arch in i386 amd64 armhf; do
+
+        case ${xenarch}_${dom0arch} in
+            amd64_amd64) ;;
+            amd64_i386) ;;
+            i386_i386) ;;
+            armhf_armhf) ;;
+            *) continue ;;
+        esac
+
+        eval "
+            arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
+        "
+
+        debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
+        if [ $guestsuite != $defguestsuite ] ; then
+            debian_runvars="$debian_runvars debian_suite=$guestsuite"
+        fi
+
+        most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
+
+        most_runvars="
+                  arch=$dom0arch                                  \
+                  kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
+                  kernkind=$kernkind                              \
+                  $arch_runvars $suite_runvars
+                  "
+
+        test_matrix_do_one
+      done
+    done
+  done
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
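The shape of this refactor is an iterator-with-callback: the shared library owns the loop nest and the filtering, and the including script supplies the per-combination body. A much-reduced sketch of the pattern (names and the combination list are illustrative, not the full osstest matrix):

```shell
#!/bin/sh
# Shared iterator: walks the arch matrix, skips invalid combinations,
# and invokes a callback the caller must define.
matrix_iterate () {
  for xenarch in amd64 armhf; do
    for dom0arch in i386 amd64 armhf; do
      # Only same-arch dom0, or 32-bit dom0 on 64-bit Xen, makes sense.
      case ${xenarch}_${dom0arch} in
        amd64_amd64|amd64_i386|armhf_armhf) ;;
        *) continue ;;
      esac
      matrix_do_one    # callback supplied by the including script
    done
  done
}

# Caller's callback; here it just records which combinations ran.
matrix_do_one () {
  combos="$combos ${xenarch}_${dom0arch}"
}

combos=
matrix_iterate
echo "$combos"    # one entry per surviving combination, in loop order
```

The callback sees the loop variables (`xenarch`, `dom0arch`) directly, which is exactly how `test_matrix_do_one` consumes `most_runvars` and friends above.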
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006bF-Tl; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWa-0006aB-Ny
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:08 +0000
Received: from [85.158.139.211:7515] by server-9.bemta-5.messagelabs.com id
	B4/82-15098-C759FD25; Wed, 22 Jan 2014 09:55:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2151 invoked from network); 22 Jan 2014 09:55:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-Pr;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:49 +0000
Message-ID: <1390384501-20552-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 05/17] Remove support for building the
	XCP kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This kernel is well and truly dead.

I've left some of the general support for using other kernel types in place
even though it is now unused.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 cr-daily-branch   |  1 -
 cr-external-linux |  1 -
 make-flight       |  7 -------
 mfi-common        | 18 ------------------
 4 files changed, 27 deletions(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index e343a99..fa29045 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -137,7 +137,6 @@ xen)
 linux)
         realtree=linux
 	NEW_REVISION=$REVISION_LINUX
-	export REVISION_LINUX_XCP=disable
 	export REVISION_LINUX_OLD=disable
 	: ${GITFORCEFLAG:=$GITFORCEFLAG_TREE_LINUX_THIS}
 	;;
diff --git a/cr-external-linux b/cr-external-linux
index 2d1a826..073fc86 100755
--- a/cr-external-linux
+++ b/cr-external-linux
@@ -40,7 +40,6 @@ select_branch
 check_stop external-linux.
 
 export REVISION_LINUX_OLD=disable
-export REVISION_LINUX_XCP=disable
 export REVISION_XEN="`./ap-fetch-version-baseline $xenbranch`"
 export TREE_LINUX="$url"
 
diff --git a/make-flight b/make-flight
index 5136d09..8eaec51 100755
--- a/make-flight
+++ b/make-flight
@@ -174,11 +174,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
                 kernbuild=pvops
                 kernkind=pvops
                 ;;
-    -xcpkern)
-                kernbuild=xcpkern
-                kernkind=2627
-                if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
-                ;;
     *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
     esac
 
@@ -196,8 +191,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
           arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
       "
 
-      if [ x$kern = x-xcpkern -a $dom0arch != i386 ]; then continue; fi
-
       debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
       if [ $guestsuite != $defguestsuite ] ; then
           debian_runvars="$debian_runvars debian_suite=$guestsuite"
diff --git a/mfi-common b/mfi-common
index 97bc506..82bc875 100644
--- a/mfi-common
+++ b/mfi-common
@@ -175,24 +175,6 @@ create_build_jobs () {
 
     fi
 
-    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
-      # XCP dom0 kernel is 32-bit only
-
-      ./cs-job-create $flight build-$arch-xcpkern build-kern            \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS        \
-                $arch_runvars $suite_runvars                            \
-                arch=$arch                                              \
-        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-                host_hostflags=$build_hostflags                         \
-                tree_xen=$TREE_XEN                                      \
-                revision_xen=$REVISION_XEN                              \
-        tree_linux=$TREEBASE_LINUX_XCP.hg                               \
-     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg                            \
-        revision_linux=$REVISION_LINUX_XCP                              \
-        revision_pq_linux=$REVISION_PQ_LINUX_XCP
-
-    fi
-
   done
 }
 
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWe-0006e7-Cc; Wed, 22 Jan 2014 09:55:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bj-QX
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:10 +0000
Received: from [85.158.139.211:7735] by server-12.bemta-5.messagelabs.com id
	61/80-30017-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2410 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181291"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:03 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-5a;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:52 +0000
Message-ID: <1390384501-20552-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 08/17] mfi-common: fixup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 mfi-common | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mfi-common b/mfi-common
index 28c3f3f..e7cae0a 100644
--- a/mfi-common
+++ b/mfi-common
@@ -90,7 +90,7 @@ create_build_jobs () {
     # In 4.4 onwards xend is off by default. If necessary we build a
     # separate set of binaries with xend enabled in order to run those
     # tests which use xend.
-    if [ -z "$WANT_XEND" ]; then
+    if [ -n "$WANT_XEND" ]; then
       want_xend=$WANT_XEND
     else
       case "$arch" in
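The one-character fix above matters: the intent is "if `WANT_XEND` is set in the environment, honour it, otherwise fall back to the per-arch default", and `-z` (string is empty) tested exactly the wrong way round. A sketch of the corrected override pattern in isolation (the helper function is invented for illustration):

```shell
#!/bin/sh
# pick_want_xend ARCH: honour $WANT_XEND if set and non-empty,
# else fall back to the per-arch default.
pick_want_xend () {
  if [ -n "$WANT_XEND" ]; then    # -n: non-empty => an override was given
    want_xend=$WANT_XEND
  else
    case "$1" in
      i386|amd64) want_xend=true;;
      *)          want_xend=false;;
    esac
  fi
}

pick_want_xend armhf
echo "$want_xend"    # false (per-arch default, no override)

WANT_XEND=true
pick_want_xend armhf
echo "$want_xend"    # true (environment override wins)
```

With the original `-z`, an unset `WANT_XEND` would have expanded to an empty `want_xend`, breaking every default case.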
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWe-0006ep-Qn; Wed, 22 Jan 2014 09:55:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bP-Iq
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.137.68:41694] by server-1.bemta-3.messagelabs.com id
	D3/84-29598-D759FD25; Wed, 22 Jan 2014 09:55:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390384506!6937372!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19866 invoked from network); 22 Jan 2014 09:55:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181286"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-C2;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:45 +0000
Message-ID: <1390384501-20552-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 01/17] make-flight: expand hard tabs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Run expand on the file.

Add an emacs magic variable block disabling indent-tabs-mode and setting
sh-mode's basic offset to 2 (which appears to be the common, although not
universal, convention in this file).
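
The conversion itself is mechanical; a sketch (expand(1) with its default
8-column tab stops, as no -t option is mentioned -- the redirect-and-rename
line is illustrative, not taken from the patch):

```shell
#!/bin/sh
# expand replaces each hard tab with spaces up to the next tab stop
# (every 8 columns by default).
printf '\tdone\n' | expand        # tab becomes 8 spaces before "done"

# In-place conversion of the script would then look something like:
#   expand make-flight > make-flight.new && mv make-flight.new make-flight
```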

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 370 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 188 insertions(+), 182 deletions(-)

diff --git a/make-flight b/make-flight
index fea642c..ddd4427 100755
--- a/make-flight
+++ b/make-flight
@@ -58,8 +58,8 @@ if [ x$buildflight = x ]; then
       xen-4.2-testing) continue;;
       esac
       pvops_kernel="
-	tree_linux=$TREE_LINUX_ARM
-	revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
       "
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
@@ -70,8 +70,8 @@ if [ x$buildflight = x ]; then
       linux-arm-xen) continue;;
       esac
       pvops_kernel="
-	tree_linux=$TREE_LINUX
-	revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
       "
       ;;
     esac
@@ -115,49 +115,49 @@ if [ x$buildflight = x ]; then
     esac
 
     eval "
-	arch_runvars=\"\$ARCH_RUNVARS_$arch\"
+        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
     "
 
     build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
 
-    ./cs-job-create $flight build-$arch build				     \
-		arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf	     \
-	tree_qemu=$TREE_QEMU	     \
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		revision_xen=$REVISION_XEN				     \
-		revision_qemu=$REVISION_QEMU				     \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM
+    ./cs-job-create $flight build-$arch build                                \
+                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf       \
+        tree_qemu=$TREE_QEMU         \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
 
     if [ $build_extraxend = "true" ] ; then
-    ./cs-job-create $flight build-$arch-xend build			     \
-		arch=$arch enable_xend=true enable_ovmf=$enable_ovmf	     \
-	tree_qemu=$TREE_QEMU	     \
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		revision_xen=$REVISION_XEN				     \
-		revision_qemu=$REVISION_QEMU				     \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM
+    ./cs-job-create $flight build-$arch-xend build                           \
+                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
+        tree_qemu=$TREE_QEMU         \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
     fi
 
-    ./cs-job-create $flight build-$arch-pvops build-kern		     \
-		arch=$arch kconfighow=xen-enable-xen-config		     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		xen_kernels=linux-2.6-pvops				     \
-		revision_xen=$REVISION_XEN				     \
-		$pvops_kernel $pvops_kconfig_overrides			     \
-		${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}	\
-		tree_linuxfirmware=$TREE_LINUXFIRMWARE			     \
-		revision_linuxfirmware=$REVISION_LINUXFIRMWARE
+    ./cs-job-create $flight build-$arch-pvops build-kern                     \
+                arch=$arch kconfighow=xen-enable-xen-config                  \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                xen_kernels=linux-2.6-pvops                                  \
+                revision_xen=$REVISION_XEN                                   \
+                $pvops_kernel $pvops_kconfig_overrides                       \
+                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}        \
+                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
+                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
 
     case "$arch" in
     armhf) continue;; # don't do any other kernel builds
@@ -165,19 +165,19 @@ if [ x$buildflight = x ]; then
 
     if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
 
-      ./cs-job-create $flight build-$arch-oldkern build			\
-		arch=$arch						\
-	tree_qemu=$TREE_QEMU	\
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		\
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS	\
-		$arch_runvars $suite_runvars				\
-		host_hostflags=$build_hostflags \
-		xen_kernels=linux-2.6-xen				\
-		revision_xen=$REVISION_XEN				\
-		revision_qemu=$REVISION_QEMU			        \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM			\
-	tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg	\
+      ./cs-job-create $flight build-$arch-oldkern build                 \
+                arch=$arch                                              \
+        tree_qemu=$TREE_QEMU    \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN              \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                host_hostflags=$build_hostflags \
+                xen_kernels=linux-2.6-xen                               \
+                revision_xen=$REVISION_XEN                              \
+                revision_qemu=$REVISION_QEMU                            \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
+        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg   \
         revision_linux=$REVISION_LINUX_OLD
 
     fi
@@ -185,17 +185,17 @@ if [ x$buildflight = x ]; then
     if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
       # XCP dom0 kernel is 32-bit only
 
-      ./cs-job-create $flight build-$arch-xcpkern build-kern		      \
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS	      \
-		$arch_runvars $suite_runvars				      \
-		arch=$arch						\
-	kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-		host_hostflags=$build_hostflags     \
-		tree_xen=$TREE_XEN	      \
-		revision_xen=$REVISION_XEN				      \
-	tree_linux=$TREEBASE_LINUX_XCP.hg    \
+      ./cs-job-create $flight build-$arch-xcpkern build-kern                  \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS              \
+                $arch_runvars $suite_runvars                                  \
+                arch=$arch                                              \
+        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
+                host_hostflags=$build_hostflags     \
+                tree_xen=$TREE_XEN            \
+                revision_xen=$REVISION_XEN                                    \
+        tree_linux=$TREEBASE_LINUX_XCP.hg    \
      tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg \
-        revision_linux=$REVISION_LINUX_XCP				      \
+        revision_linux=$REVISION_LINUX_XCP                                    \
         revision_pq_linux=$REVISION_PQ_LINUX_XCP
 
     fi
@@ -226,17 +226,17 @@ stripy () {
 }
 
 job_create_test () {
-	local job=$1; shift
-	local recipe=$1; shift
-	local toolstack=$1; shift
-	local xenarch=$1; shift
-	local dom0arch=$1; shift
+        local job=$1; shift
+        local recipe=$1; shift
+        local toolstack=$1; shift
+        local xenarch=$1; shift
+        local dom0arch=$1; shift
 
         local job_md5=`echo "$job" | md5sum`
         job_md5="${job_md5%  -}"
 
-	xenbuildjob="${bfi}build-$xenarch"
-	buildjob="${bfi}build-$dom0arch"
+        xenbuildjob="${bfi}build-$xenarch"
+        buildjob="${bfi}build-$dom0arch"
 
         case "$xenbranch:$toolstack" in
         xen-3.*-testing:*) ;;
@@ -245,8 +245,8 @@ job_create_test () {
         xen-4.2-testing:*) ;;
         xen-4.3-testing:*) ;;
         *:xend) xenbuildjob="$xenbuildjob-xend"
-		buildjob="${bfi}build-$dom0arch-xend"
-		;;
+                buildjob="${bfi}build-$dom0arch-xend"
+                ;;
         esac
 
         if [ "x$JOB_MD5_PATTERN" != x ]; then
@@ -256,36 +256,36 @@ job_create_test () {
                 esac
         fi
 
-	case "$branch" in
-	qemu-upstream-*)
-		case " $* " in
-		*" device_model_version=qemu-xen "*)
-			;;
-		*)
-			: "suppressed $job"
-			return;;
-		esac
-		;;
-	*)
-	        case "$job" in
-		*-qemuu-*)
-	           if [ "x$toolstack" != xxl ]; then return; fi
-
-	           case "$job" in
-	           *-win*)
-	                      case "$job_md5" in
-	                      *[0-a]) return;;
-	                      esac
-	                      ;;
-	           esac
-	           ;;
-		esac
-		;;
+        case "$branch" in
+        qemu-upstream-*)
+                case " $* " in
+                *" device_model_version=qemu-xen "*)
+                        ;;
+                *)
+                        : "suppressed $job"
+                        return;;
+                esac
+                ;;
+        *)
+                case "$job" in
+                *-qemuu-*)
+                   if [ "x$toolstack" != xxl ]; then return; fi
+
+                   case "$job" in
+                   *-win*)
+                              case "$job_md5" in
+                              *[0-a]) return;;
+                              esac
+                              ;;
+                   esac
+                   ;;
+                esac
+                ;;
         esac
 
-	./cs-job-create $flight $job $recipe toolstack=$toolstack	\
-		$RUNVARS $TEST_RUNVARS $most_runvars			\
-		xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
+        ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
+                $RUNVARS $TEST_RUNVARS $most_runvars                    \
+                xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
@@ -294,37 +294,37 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
   case "$xenarch" in
   armhf)
-	# Arm from 4.3 onwards only
-	case "$xenbranch" in
-	xen-3.*-testing) continue;;
-	xen-4.0-testing) continue;;
-	xen-4.1-testing) continue;;
-	xen-4.2-testing) continue;;
-	*) ;;
-	esac
-	case "$branch" in
-	linux-arm-xen) ;;
-	linux-*) continue;;
-	qemu-*) continue;;
-	esac
-	;;
+        # Arm from 4.3 onwards only
+        case "$xenbranch" in
+        xen-3.*-testing) continue;;
+        xen-4.0-testing) continue;;
+        xen-4.1-testing) continue;;
+        xen-4.2-testing) continue;;
+        *) ;;
+        esac
+        case "$branch" in
+        linux-arm-xen) ;;
+        linux-*) continue;;
+        qemu-*) continue;;
+        esac
+        ;;
   i386)
-	# 32-bit Xen is dropped from 4.3 onwards
-	case "$xenbranch" in
-	xen-3.*-testing) ;;
-	xen-4.0-testing) ;;
-	xen-4.1-testing) ;;
-	xen-4.2-testing) ;;
-	*) continue ;;
-	esac
-	case "$branch" in
-	linux-arm-xen) continue;;
-	esac
-	;;
+        # 32-bit Xen is dropped from 4.3 onwards
+        case "$xenbranch" in
+        xen-3.*-testing) ;;
+        xen-4.0-testing) ;;
+        xen-4.1-testing) ;;
+        xen-4.2-testing) ;;
+        *) continue ;;
+        esac
+        case "$branch" in
+        linux-arm-xen) continue;;
+        esac
+        ;;
   amd64)
-	case "$branch" in
-	linux-arm-xen) continue;;
-	esac
+        case "$branch" in
+        linux-arm-xen) continue;;
+        esac
   esac
 
   case "$xenarch" in
@@ -339,10 +339,10 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
   fi
 
   case "$xenbranch" in
-  xen-3.*-testing)	onetoolstack=xend ;;
-  xen-4.0-testing)	onetoolstack=xend ;;
-  xen-4.1-testing)	onetoolstack=xend ;;
-  *)			onetoolstack=xl ;;
+  xen-3.*-testing)      onetoolstack=xend ;;
+  xen-4.0-testing)      onetoolstack=xend ;;
+  xen-4.1-testing)      onetoolstack=xend ;;
+  *)                    onetoolstack=xl ;;
   esac
 
   for kern in ''; do
@@ -350,28 +350,28 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
     case $kern in
     '')
                 kernbuild=pvops
-		kernkind=pvops
-		;;
+                kernkind=pvops
+                ;;
     -xcpkern)
                 kernbuild=xcpkern
-		kernkind=2627
-		if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
-		;;
-    *)		echo >&2 "kernkind ?  $kern"; exit 1 ;;
+                kernkind=2627
+                if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
+                ;;
+    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
     esac
 
     for dom0arch in i386 amd64 armhf; do
 
       case ${xenarch}_${dom0arch} in
-	  amd64_amd64) ;;
-	  amd64_i386) ;;
-	  i386_i386) ;;
-	  armhf_armhf) ;;
-	  *) continue ;;
+          amd64_amd64) ;;
+          amd64_i386) ;;
+          i386_i386) ;;
+          armhf_armhf) ;;
+          *) continue ;;
       esac
 
       eval "
-	  arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
+          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
       "
 
       if [ x$kern = x-xcpkern -a $dom0arch != i386 ]; then continue; fi
@@ -384,35 +384,35 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
 
       most_runvars="
-		arch=$dom0arch			        	\
-		kernbuildjob=${bfi}build-$dom0arch-$kernbuild 	\
-		kernkind=$kernkind		        	\
-		$arch_runvars $suite_runvars
-		"
+                arch=$dom0arch                                  \
+                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
+                kernkind=$kernkind                              \
+                $arch_runvars $suite_runvars
+                "
       if [ $dom0arch = armhf ]; then
-	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
-	  continue
+          job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
+          continue
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
 
       if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
         for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
-			test-freebsd xl $xenarch $dom0arch \
-			freebsd_arch=$freebsdarch \
+                        test-freebsd xl $xenarch $dom0arch \
+                        freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
-			all_hostflags=$most_hostflags
+                        all_hostflags=$most_hostflags
 
         done
 
@@ -456,8 +456,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       job_create_test \
                 test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
                 test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
-		win_image=winxpsp3.iso $vcpus_runvars	\
-		all_hostflags=$most_hostflags,hvm
+                win_image=winxpsp3.iso $vcpus_runvars   \
+                all_hostflags=$most_hostflags,hvm
 
             fi
         done
@@ -466,32 +466,32 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
                 test-win xl $xenarch $dom0arch $qemuu_runvar \
-		win_image=win7-x64.iso \
-		all_hostflags=$most_hostflags,hvm
+                win_image=win7-x64.iso \
+                all_hostflags=$most_hostflags,hvm
 
       fi
 
       if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
-	for cpuvendor in amd intel; do
+        for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
-						test-rhelhvm xl $xenarch $dom0arch \
-		redhat_image=rhel-server-6.1-i386-dvd.iso		\
-		all_hostflags=$most_hostflags,hvm-$cpuvendor \
+                                                test-rhelhvm xl $xenarch $dom0arch \
+                redhat_image=rhel-server-6.1-i386-dvd.iso               \
+                all_hostflags=$most_hostflags,hvm-$cpuvendor \
                 $qemuu_runvar
 
-	done
+        done
 
       fi
 
       done # qemuu_suffix
 
       job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-		$onetoolstack $xenarch $dom0arch \
+                $onetoolstack $xenarch $dom0arch \
                 !host !host_hostflags \
-		$debian_runvars \
-		all_hostflags=$most_hostflags,equiv-1
+                $debian_runvars \
+                all_hostflags=$most_hostflags,equiv-1
 
       if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
 
@@ -499,8 +499,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
            test-debian xl $xenarch $dom0arch \
-		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
-		$debian_runvars all_hostflags=$most_hostflags
+                guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
+                $debian_runvars all_hostflags=$most_hostflags
 
        done
 
@@ -510,12 +510,12 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
                         test-debian xl $xenarch $dom0arch guests_vcpus=4  \
-		        $debian_runvars all_hostflags=$most_hostflags
+                        $debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-           test-debian xl $xenarch $dom0arch				  \
-		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
-		$debian_runvars all_hostflags=$most_hostflags
+           test-debian xl $xenarch $dom0arch                              \
+                guests_vcpus=4 xen_boot_append='sched=credit2'            \
+                $debian_runvars all_hostflags=$most_hostflags
 
       fi
 
@@ -524,10 +524,10 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for cpuvendor in intel; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                        test-debian-nomigr xl $xenarch $dom0arch	  \
-		guests_vcpus=4						  \
-		$debian_runvars debian_pcipassthrough_nic=host		  \
-		all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
+                        test-debian-nomigr xl $xenarch $dom0arch          \
+                guests_vcpus=4                                            \
+                $debian_runvars debian_pcipassthrough_nic=host            \
+                all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
 
         done
 
@@ -540,3 +540,9 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 done
 
 echo $flight
+
+# Local variables:
+# mode: sh
+# sh-basic-offset: 2
+# indent-tabs-mode: nil
+# End:
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWZ-0006Zf-AP; Wed, 22 Jan 2014 09:55:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWY-0006ZR-FV
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:06 +0000
Received: from [85.158.143.35:31613] by server-1.bemta-4.messagelabs.com id
	B6/19-02132-9759FD25; Wed, 22 Jan 2014 09:55:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24941 invoked from network); 22 Jan 2014 09:55:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95229978"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:03 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-97;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:53 +0000
Message-ID: <1390384501-20552-9-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 09/17] make-flight: Remove md5sum based
	job filtering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

JOB_MD5_PATTERN was intended to allow making randomly smaller flights, but is
not used in practice.

The filtering of the qemuu *-win jobs was intended to reduce the number of
combinations, but ended up suppressing only:
  test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
  test-amd64-i386-xl-qemuu-win7-amd64

Both of these seem useful, so allow them to be enabled.
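
For reference, the removed check worked as sketched below (logic from the
deleted hunk; the job name here is a hypothetical example):

```shell
#!/bin/sh
# The deleted hunk hashed the job name and suppressed *-qemuu-* *-win*
# jobs whose md5 ended in the glob range [0-a] -- for lowercase hex
# output that is 0-9 plus 'a', i.e. 11 of the 16 possible final digits.
job="test-amd64-i386-xl-qemuu-win7-amd64"
job_md5=`echo "$job" | md5sum`
job_md5="${job_md5%  -}"          # strip md5sum's trailing "  -"
case "$job_md5" in
  *[0-a]) echo "suppressed $job";;
  *)      echo "kept $job";;
esac
```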

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/make-flight b/make-flight
index 8eaec51..7085dc2 100755
--- a/make-flight
+++ b/make-flight
@@ -54,9 +54,6 @@ job_create_test () {
         local xenarch=$1; shift
         local dom0arch=$1; shift
 
-        local job_md5=`echo "$job" | md5sum`
-        job_md5="${job_md5%  -}"
-
         xenbuildjob="${bfi}build-$xenarch"
         buildjob="${bfi}build-$dom0arch"
 
@@ -71,13 +68,6 @@ job_create_test () {
                 ;;
         esac
 
-        if [ "x$JOB_MD5_PATTERN" != x ]; then
-                case "$job_md5" in
-                $JOB_MD5_PATTERN)       ;;
-                *)                      return;;
-                esac
-        fi
-
         case "$branch" in
         qemu-upstream-*)
                 case " $* " in
@@ -92,14 +82,6 @@ job_create_test () {
                 case "$job" in
                 *-qemuu-*)
                    if [ "x$toolstack" != xxl ]; then return; fi
-
-                   case "$job" in
-                   *-win*)
-                              case "$job_md5" in
-                              *[0-a]) return;;
-                              esac
-                              ;;
-                   esac
                    ;;
                 esac
                 ;;
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWd-0006cn-FX; Wed, 22 Jan 2014 09:55:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWb-0006aU-Gp
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:09 +0000
Received: from [85.158.139.211:8582] by server-17.bemta-5.messagelabs.com id
	84/52-19152-C759FD25; Wed, 22 Jan 2014 09:55:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2230 invoked from network); 22 Jan 2014 09:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181288"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-TT;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:50 +0000
Message-ID: <1390384501-20552-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 06/17] mfi-common: Allow caller of
	create_build_jobs to include/exclude xend builds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
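
The intended behaviour is an environment-variable override with a per-arch
fallback, sketched here (variable names follow the patch; the function
wrapper is illustrative; note the test must be -n, true when WANT_XEND is
set and non-empty, for a caller-supplied value to win):

```shell
#!/bin/sh
# Use the caller-supplied WANT_XEND when set; otherwise default to
# building xend only on the architectures that support it.
pick_want_xend () {
  arch=$1
  if [ -n "$WANT_XEND" ]; then
    want_xend=$WANT_XEND
  else
    case "$arch" in
      i386|amd64) want_xend=true;;
      *)          want_xend=false;;
    esac
  fi
  echo "$want_xend"
}

pick_want_xend amd64    # prints true when WANT_XEND is unset
```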

diff --git a/mfi-common b/mfi-common
index 82bc875..3342acc 100644
--- a/mfi-common
+++ b/mfi-common
@@ -83,10 +83,14 @@ create_build_jobs () {
     # In 4.4 onwards xend is off by default. If necessary we build a
     # separate set of binaries with xend enabled in order to run those
     # tests which use xend.
-    case "$arch" in
-    i386|amd64) want_xend=true;;
-    *) want_xend=false;;
-    esac
+    if [ -n "$WANT_XEND" ]; then
+      want_xend=$WANT_XEND
+    else
+      case "$arch" in
+        i386|amd64) want_xend=true;;
+        *) want_xend=false;;
+      esac
+    fi
 
     case "$xenbranch" in
     xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006at-Fj; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWa-0006Zr-3A
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:08 +0000
Received: from [85.158.139.211:7436] by server-11.bemta-5.messagelabs.com id
	F7/6A-23268-B759FD25; Wed, 22 Jan 2014 09:55:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2024 invoked from network); 22 Jan 2014 09:55:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181284"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-IO;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:47 +0000
Message-ID: <1390384501-20552-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 03/17] make-flight: Drop obsolete/unused
	xenrt_images variable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 2 --
 1 file changed, 2 deletions(-)

diff --git a/make-flight b/make-flight
index fdde3af..19f60f1 100755
--- a/make-flight
+++ b/make-flight
@@ -33,8 +33,6 @@ flight=`./cs-flight-create $blessing $branch`
 defsuite=`getconfig DebianSuite`
 defguestsuite=`getconfig GuestDebianSuite`
 
-xenrt_images=/usr/groups/images/autoinstall
-
 if [ x$buildflight = x ]; then
 
   if [ "x$BUILD_LVEXTEND_MAX" != x ]; then
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWa-0006aJ-Ml; Wed, 22 Jan 2014 09:55:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWZ-0006ZY-A1
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:07 +0000
Received: from [85.158.143.35:31710] by server-1.bemta-4.messagelabs.com id
	4D/19-02132-A759FD25; Wed, 22 Jan 2014 09:55:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25047 invoked from network); 22 Jan 2014 09:55:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95229979"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:03 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-Bf;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:54 +0000
Message-ID: <1390384501-20552-10-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 10/17] make-flight: refactor
	job_create_test filters
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This will allow job_create_test to be moved to mfi-common.

No (intentional) change to the set of jobs which are created.
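
The callback split can be sketched in miniature (illustrative names, not
the actual osstest functions): the creation function consults the filter
first and silently skips when the filter rejects the job.

```shell
#!/bin/sh
# Sketch of the filter-callback pattern: job_filter rejects some jobs
# (non-zero return); job_create treats a rejected job as "nothing to
# do" and still returns success, mirroring '|| return 0' above.

job_filter () {
  case "$1" in
    *-qemuu-*) return 1 ;;   # reject qemuu jobs in this toy example
    *)         return 0 ;;
  esac
}

job_create () {
  job_filter "$@" || return 0   # filtered out: skip, but do not fail
  echo "created $1"
}

job_create test-amd64-xl             # prints "created test-amd64-xl"
job_create test-amd64-xl-qemuu-win7  # prints nothing, still exits 0
```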

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 50 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 31 insertions(+), 19 deletions(-)

diff --git a/make-flight b/make-flight
index 7085dc2..34c1ce4 100755
--- a/make-flight
+++ b/make-flight
@@ -47,7 +47,38 @@ else
 
 fi
 
+job_create_test_filter_callback () {
+  local job=$1; shift
+  local recipe=$1; shift
+  local toolstack=$1; shift
+  local xenarch=$1; shift
+  local dom0arch=$1; shift
+
+  case "$branch" in
+    qemu-upstream-*)
+      case " $* " in
+        *" device_model_version=qemu-xen "*)
+          ;;
+        *)
+          : "suppressed $job"
+          return 1;;
+      esac
+      ;;
+    *)
+      case "$job" in
+        *-qemuu-*)
+          if [ "x$toolstack" != xxl ]; then return 1; fi
+          ;;
+      esac
+      ;;
+  esac
+
+  return 0;
+}
+
 job_create_test () {
+        job_create_test_filter_callback "$@" || return 0
+
         local job=$1; shift
         local recipe=$1; shift
         local toolstack=$1; shift
@@ -68,25 +99,6 @@ job_create_test () {
                 ;;
         esac
 
-        case "$branch" in
-        qemu-upstream-*)
-                case " $* " in
-                *" device_model_version=qemu-xen "*)
-                        ;;
-                *)
-                        : "suppressed $job"
-                        return;;
-                esac
-                ;;
-        *)
-                case "$job" in
-                *-qemuu-*)
-                   if [ "x$toolstack" != xxl ]; then return; fi
-                   ;;
-                esac
-                ;;
-        esac
-
         ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
                 $RUNVARS $TEST_RUNVARS $most_runvars                    \
                 xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006bF-Tl; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWa-0006aB-Ny
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:08 +0000
Received: from [85.158.139.211:7515] by server-9.bemta-5.messagelabs.com id
	B4/82-15098-C759FD25; Wed, 22 Jan 2014 09:55:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2151 invoked from network); 22 Jan 2014 09:55:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-Pr;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:49 +0000
Message-ID: <1390384501-20552-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 05/17] Remove support for building the
	XCP kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This kernel is well and truly dead.

I've left some of the general support for using other kernel types in place
even though it is now unused.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 cr-daily-branch   |  1 -
 cr-external-linux |  1 -
 make-flight       |  7 -------
 mfi-common        | 18 ------------------
 4 files changed, 27 deletions(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index e343a99..fa29045 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -137,7 +137,6 @@ xen)
 linux)
         realtree=linux
 	NEW_REVISION=$REVISION_LINUX
-	export REVISION_LINUX_XCP=disable
 	export REVISION_LINUX_OLD=disable
 	: ${GITFORCEFLAG:=$GITFORCEFLAG_TREE_LINUX_THIS}
 	;;
diff --git a/cr-external-linux b/cr-external-linux
index 2d1a826..073fc86 100755
--- a/cr-external-linux
+++ b/cr-external-linux
@@ -40,7 +40,6 @@ select_branch
 check_stop external-linux.
 
 export REVISION_LINUX_OLD=disable
-export REVISION_LINUX_XCP=disable
 export REVISION_XEN="`./ap-fetch-version-baseline $xenbranch`"
 export TREE_LINUX="$url"
 
diff --git a/make-flight b/make-flight
index 5136d09..8eaec51 100755
--- a/make-flight
+++ b/make-flight
@@ -174,11 +174,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
                 kernbuild=pvops
                 kernkind=pvops
                 ;;
-    -xcpkern)
-                kernbuild=xcpkern
-                kernkind=2627
-                if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
-                ;;
     *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
     esac
 
@@ -196,8 +191,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
           arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
       "
 
-      if [ x$kern = x-xcpkern -a $dom0arch != i386 ]; then continue; fi
-
       debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
       if [ $guestsuite != $defguestsuite ] ; then
           debian_runvars="$debian_runvars debian_suite=$guestsuite"
diff --git a/mfi-common b/mfi-common
index 97bc506..82bc875 100644
--- a/mfi-common
+++ b/mfi-common
@@ -175,24 +175,6 @@ create_build_jobs () {
 
     fi
 
-    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
-      # XCP dom0 kernel is 32-bit only
-
-      ./cs-job-create $flight build-$arch-xcpkern build-kern            \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS        \
-                $arch_runvars $suite_runvars                            \
-                arch=$arch                                              \
-        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-                host_hostflags=$build_hostflags                         \
-                tree_xen=$TREE_XEN                                      \
-                revision_xen=$REVISION_XEN                              \
-        tree_linux=$TREEBASE_LINUX_XCP.hg                               \
-     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg                            \
-        revision_linux=$REVISION_LINUX_XCP                              \
-        revision_pq_linux=$REVISION_PQ_LINUX_XCP
-
-    fi
-
   done
 }
 
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWZ-0006Zf-AP; Wed, 22 Jan 2014 09:55:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWY-0006ZR-FV
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:06 +0000
Received: from [85.158.143.35:31613] by server-1.bemta-4.messagelabs.com id
	B6/19-02132-9759FD25; Wed, 22 Jan 2014 09:55:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24941 invoked from network); 22 Jan 2014 09:55:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95229978"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:03 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-97;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:53 +0000
Message-ID: <1390384501-20552-9-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 09/17] make-flight: Remove md5sum based
	job filtering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

JOB_MD5_PATTERN was intended to allow making randomly smaller flights, but it
is not used in practice.

The filtering of the qemuu*-win jobs was intended to reduce the number of
combinations, but it ended up suppressing only:
  test-amd64-i386-xl-qemuu-winxpsp3-vcpus1
  test-amd64-i386-xl-qemuu-win7-amd64

Both of these seem useful, so allow them to be enabled.
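
For reference, the mechanism being removed can be sketched as follows
(an illustrative reconstruction, not the exact deleted code):

```shell
#!/bin/sh
# md5sum prints "<hash>  -" when reading stdin; strip the trailing
# "  -" and keep the job only when the digest matches the supplied
# glob pattern (as JOB_MD5_PATTERN did).

want_job () {
  job_md5=`echo "$1" | md5sum`
  job_md5="${job_md5%  -}"
  case "$job_md5" in
    $2) return 0 ;;
    *)  return 1 ;;
  esac
}

# A hex digest always matches '*' and never contains 'z':
want_job test-amd64-i386-xl '*'  && echo "kept"
want_job test-amd64-i386-xl 'z*' || echo "filtered"
```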

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/make-flight b/make-flight
index 8eaec51..7085dc2 100755
--- a/make-flight
+++ b/make-flight
@@ -54,9 +54,6 @@ job_create_test () {
         local xenarch=$1; shift
         local dom0arch=$1; shift
 
-        local job_md5=`echo "$job" | md5sum`
-        job_md5="${job_md5%  -}"
-
         xenbuildjob="${bfi}build-$xenarch"
         buildjob="${bfi}build-$dom0arch"
 
@@ -71,13 +68,6 @@ job_create_test () {
                 ;;
         esac
 
-        if [ "x$JOB_MD5_PATTERN" != x ]; then
-                case "$job_md5" in
-                $JOB_MD5_PATTERN)       ;;
-                *)                      return;;
-                esac
-        fi
-
         case "$branch" in
         qemu-upstream-*)
                 case " $* " in
@@ -92,14 +82,6 @@ job_create_test () {
                 case "$job" in
                 *-qemuu-*)
                    if [ "x$toolstack" != xxl ]; then return; fi
-
-                   case "$job" in
-                   *-win*)
-                              case "$job_md5" in
-                              *[0-a]) return;;
-                              esac
-                              ;;
-                   esac
                    ;;
                 esac
                 ;;
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWf-0006gk-Ui; Wed, 22 Jan 2014 09:55:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWd-0006cQ-HP
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.139.211:7804] by server-14.bemta-5.messagelabs.com id
	2A/FE-24200-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390384507!8515375!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13529 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181290"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-MB;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:48 +0000
Message-ID: <1390384501-20552-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 04/17] make-flight: refactor build job
	creation into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is pure code motion *except* I have aligned the backslash continuation
lines at the same time -- I was unable to resist doing so.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 162 +---------------------------------------------------------
 mfi-common  | 165 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+), 161 deletions(-)

diff --git a/make-flight b/make-flight
index 19f60f1..5136d09 100755
--- a/make-flight
+++ b/make-flight
@@ -39,167 +39,7 @@ if [ x$buildflight = x ]; then
      BUILD_RUNVARS+=" build_lvextend_max=$BUILD_LVEXTEND_MAX "
   fi
 
-  for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
-
-    if [ "x$arch" = xdisable ]; then continue; fi
-
-    case "$arch" in
-    armhf)
-      case "$branch" in
-      linux-arm-xen) ;;
-      linux-*) continue;;
-      qemu-*) continue;;
-      esac
-      case "$xenbranch" in
-      xen-3.*-testing) continue;;
-      xen-4.0-testing) continue;;
-      xen-4.1-testing) continue;;
-      xen-4.2-testing) continue;;
-      esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX_ARM
-        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-      "
-      pvops_kconfig_overrides="
-        kconfig_override_y=CONFIG_EXT4_FS
-      "
-      ;;
-    *)
-      case "$branch" in
-      linux-arm-xen) continue;;
-      esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX
-        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-      "
-      ;;
-    esac
-
-    case "$arch" in
-    armhf) suite="wheezy";;
-    *)     suite=$defsuite;;
-    esac
-
-    if [ $suite != $defsuite ] ; then
-        suite_runvars="host_suite=$suite"
-    else
-        suite_runvars=
-    fi
-
-    # In 4.4 onwards xend is off by default. If necessary we build a
-    # separate set of binaries with xend enabled in order to run those
-    # tests which use xend.
-    case "$arch" in
-    i386|amd64) want_xend=true;;
-    *) want_xend=false;;
-    esac
-
-    case "$xenbranch" in
-    xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.0-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.1-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.2-testing) build_defxend=$want_xend; build_extraxend=false;;
-    xen-4.3-testing) build_defxend=$want_xend; build_extraxend=false;;
-    *) build_defxend=false;
-       build_extraxend=$want_xend
-    esac
-
-    case "$xenbranch" in
-    xen-3.*-testing) enable_ovmf=false;;
-    xen-4.0-testing) enable_ovmf=false;;
-    xen-4.1-testing) enable_ovmf=false;;
-    xen-4.2-testing) enable_ovmf=false;;
-    xen-4.3-testing) enable_ovmf=false;;
-    *) enable_ovmf=true;
-    esac
-
-    eval "
-        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
-    "
-
-    build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
-
-    ./cs-job-create $flight build-$arch build                                \
-                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf       \
-        tree_qemu=$TREE_QEMU         \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                revision_xen=$REVISION_XEN                                   \
-                revision_qemu=$REVISION_QEMU                                 \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM
-
-    if [ $build_extraxend = "true" ] ; then
-    ./cs-job-create $flight build-$arch-xend build                           \
-                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
-        tree_qemu=$TREE_QEMU         \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                revision_xen=$REVISION_XEN                                   \
-                revision_qemu=$REVISION_QEMU                                 \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM
-    fi
-
-    ./cs-job-create $flight build-$arch-pvops build-kern                     \
-                arch=$arch kconfighow=xen-enable-xen-config                  \
-        tree_xen=$TREE_XEN                   \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
-                $suite_runvars                                               \
-                host_hostflags=$build_hostflags    \
-                xen_kernels=linux-2.6-pvops                                  \
-                revision_xen=$REVISION_XEN                                   \
-                $pvops_kernel $pvops_kconfig_overrides                       \
-                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}        \
-                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
-                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
-
-    case "$arch" in
-    armhf) continue;; # don't do any other kernel builds
-    esac
-
-    if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
-
-      ./cs-job-create $flight build-$arch-oldkern build                 \
-                arch=$arch                                              \
-        tree_qemu=$TREE_QEMU    \
-        tree_qemuu=$TREE_QEMU_UPSTREAM       \
-        tree_xen=$TREE_XEN              \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
-                $arch_runvars $suite_runvars                            \
-                host_hostflags=$build_hostflags \
-                xen_kernels=linux-2.6-xen                               \
-                revision_xen=$REVISION_XEN                              \
-                revision_qemu=$REVISION_QEMU                            \
-                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
-        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg   \
-        revision_linux=$REVISION_LINUX_OLD
-
-    fi
-
-    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
-      # XCP dom0 kernel is 32-bit only
-
-      ./cs-job-create $flight build-$arch-xcpkern build-kern                  \
-                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS              \
-                $arch_runvars $suite_runvars                                  \
-                arch=$arch                                              \
-        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-                host_hostflags=$build_hostflags     \
-                tree_xen=$TREE_XEN            \
-                revision_xen=$REVISION_XEN                                    \
-        tree_linux=$TREEBASE_LINUX_XCP.hg    \
-     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg \
-        revision_linux=$REVISION_LINUX_XCP                                    \
-        revision_pq_linux=$REVISION_PQ_LINUX_XCP
-
-    fi
-
-  done
+  create_build_jobs
 
 else
 
diff --git a/mfi-common b/mfi-common
index ec0beca..97bc506 100644
--- a/mfi-common
+++ b/mfi-common
@@ -31,6 +31,171 @@ stripy () {
   eval "$out_vn=\"\$out_$out_val\""
 }
 
+create_build_jobs () {
+
+  for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
+
+    if [ "x$arch" = xdisable ]; then continue; fi
+
+    case "$arch" in
+    armhf)
+      case "$branch" in
+      linux-arm-xen) ;;
+      linux-*) continue;;
+      qemu-*) continue;;
+      esac
+      case "$xenbranch" in
+      xen-3.*-testing) continue;;
+      xen-4.0-testing) continue;;
+      xen-4.1-testing) continue;;
+      xen-4.2-testing) continue;;
+      esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+      "
+      pvops_kconfig_overrides="
+        kconfig_override_y=CONFIG_EXT4_FS
+      "
+      ;;
+    *)
+      case "$branch" in
+      linux-arm-xen) continue;;
+      esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+      "
+      ;;
+    esac
+
+    case "$arch" in
+    armhf) suite="wheezy";;
+    *)     suite=$defsuite;;
+    esac
+
+    if [ $suite != $defsuite ] ; then
+        suite_runvars="host_suite=$suite"
+    else
+        suite_runvars=
+    fi
+
+    # In 4.4 onwards xend is off by default. If necessary we build a
+    # separate set of binaries with xend enabled in order to run those
+    # tests which use xend.
+    case "$arch" in
+    i386|amd64) want_xend=true;;
+    *) want_xend=false;;
+    esac
+
+    case "$xenbranch" in
+    xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.0-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.1-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.2-testing) build_defxend=$want_xend; build_extraxend=false;;
+    xen-4.3-testing) build_defxend=$want_xend; build_extraxend=false;;
+    *) build_defxend=false;
+       build_extraxend=$want_xend
+    esac
+
+    case "$xenbranch" in
+    xen-3.*-testing) enable_ovmf=false;;
+    xen-4.0-testing) enable_ovmf=false;;
+    xen-4.1-testing) enable_ovmf=false;;
+    xen-4.2-testing) enable_ovmf=false;;
+    xen-4.3-testing) enable_ovmf=false;;
+    *) enable_ovmf=true;
+    esac
+
+    eval "
+        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
+    "
+
+    build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
+
+    ./cs-job-create $flight build-$arch build                                \
+                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf\
+        tree_qemu=$TREE_QEMU                                                 \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
+
+    if [ $build_extraxend = "true" ] ; then
+    ./cs-job-create $flight build-$arch-xend build                           \
+                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
+        tree_qemu=$TREE_QEMU                                                 \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
+    fi
+
+    ./cs-job-create $flight build-$arch-pvops build-kern                     \
+                arch=$arch kconfighow=xen-enable-xen-config                  \
+        tree_xen=$TREE_XEN                                                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags                              \
+                xen_kernels=linux-2.6-pvops                                  \
+                revision_xen=$REVISION_XEN                                   \
+                $pvops_kernel $pvops_kconfig_overrides                       \
+                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}             \
+                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
+                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
+
+    case "$arch" in
+    armhf) continue;; # don't do any other kernel builds
+    esac
+
+    if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
+
+      ./cs-job-create $flight build-$arch-oldkern build                 \
+                arch=$arch                                              \
+        tree_qemu=$TREE_QEMU                                            \
+        tree_qemuu=$TREE_QEMU_UPSTREAM                                  \
+        tree_xen=$TREE_XEN                                              \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                host_hostflags=$build_hostflags                         \
+                xen_kernels=linux-2.6-xen                               \
+                revision_xen=$REVISION_XEN                              \
+                revision_qemu=$REVISION_QEMU                            \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
+        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg           \
+        revision_linux=$REVISION_LINUX_OLD
+
+    fi
+
+    if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
+      # XCP dom0 kernel is 32-bit only
+
+      ./cs-job-create $flight build-$arch-xcpkern build-kern            \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                arch=$arch                                              \
+        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
+                host_hostflags=$build_hostflags                         \
+                tree_xen=$TREE_XEN                                      \
+                revision_xen=$REVISION_XEN                              \
+        tree_linux=$TREEBASE_LINUX_XCP.hg                               \
+     tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg                            \
+        revision_linux=$REVISION_LINUX_XCP                              \
+        revision_pq_linux=$REVISION_PQ_LINUX_XCP
+
+    fi
+
+  done
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
-- 
1.8.5.2
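[Editorial note: the `eval "arch_runvars=\"\$ARCH_RUNVARS_$arch\""` line moved into create_build_jobs above is a plain POSIX-sh indirect variable lookup. A minimal standalone sketch; the ARCH_RUNVARS_amd64 value here is invented for the demo, not an osstest setting:]

```shell
# Indirect lookup: compose the variable name at runtime, then eval an
# assignment that dereferences it. The value is an invented example.
ARCH_RUNVARS_amd64="firmware=seabios"
arch=amd64
eval "arch_runvars=\"\$ARCH_RUNVARS_$arch\""
echo "$arch_runvars"
```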


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWd-0006cn-FX; Wed, 22 Jan 2014 09:55:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWb-0006aU-Gp
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:09 +0000
Received: from [85.158.139.211:8582] by server-17.bemta-5.messagelabs.com id
	84/52-19152-C759FD25; Wed, 22 Jan 2014 09:55:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2230 invoked from network); 22 Jan 2014 09:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181288"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-TT;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:50 +0000
Message-ID: <1390384501-20552-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 06/17] mfi-common: Allow caller of
	create_build_jobs to include/exclude xend builds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mfi-common b/mfi-common
index 82bc875..3342acc 100644
--- a/mfi-common
+++ b/mfi-common
@@ -83,10 +83,14 @@ create_build_jobs () {
     # In 4.4 onwards xend is off by default. If necessary we build a
     # separate set of binaries with xend enabled in order to run those
     # tests which use xend.
-    case "$arch" in
-    i386|amd64) want_xend=true;;
-    *) want_xend=false;;
-    esac
+    if [ -n "$WANT_XEND" ]; then
+      want_xend=$WANT_XEND
+    else
+      case "$arch" in
+        i386|amd64) want_xend=true;;
+        *) want_xend=false;;
+      esac
+    fi
 
     case "$xenbranch" in
     xen-3.*-testing) build_defxend=$want_xend; build_extraxend=false;;
-- 
1.8.5.2
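[Editorial note: the hunk above lets a caller force xend builds on or off via WANT_XEND, falling back to the per-arch default (xend is x86-only) when it is unset. A standalone sketch of the intended override behaviour, with invented values:]

```shell
# Caller-supplied WANT_XEND wins; otherwise xend defaults to on for
# x86 arches only. WANT_XEND and arch values are invented for the demo.
WANT_XEND=
arch=amd64
if [ -n "$WANT_XEND" ]; then
  want_xend=$WANT_XEND
else
  case "$arch" in
    i386|amd64) want_xend=true;;
    *) want_xend=false;;
  esac
fi
echo "$want_xend"
```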



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWf-0006fY-8N; Wed, 22 Jan 2014 09:55:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWd-0006cC-6a
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.137.68:37056] by server-11.bemta-3.messagelabs.com id
	41/0A-19379-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390384506!6937372!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20198 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181314"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:08 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-O0;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:59 +0000
Message-ID: <1390384501-20552-15-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 15/17] make-flight: Refactor test matrix
	iteration into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight |  92 +-----------------------------------------------------
 mfi-common  | 101 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+), 91 deletions(-)

diff --git a/make-flight b/make-flight
index 177523b..4a144ec 100755
--- a/make-flight
+++ b/make-flight
@@ -241,97 +241,7 @@ test_matrix_do_one () {
   fi
 }
 
-for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
-
-  if [ "x$xenarch" = xdisable ]; then continue; fi
-
-  test_matrix_branch_filter_callback || continue
-
-  case "$xenarch" in
-  armhf)
-        # Arm from 4.3 onwards only
-        case "$xenbranch" in
-        xen-3.*-testing) continue;;
-        xen-4.0-testing) continue;;
-        xen-4.1-testing) continue;;
-        xen-4.2-testing) continue;;
-        *) ;;
-        esac
-        ;;
-  i386)
-        # 32-bit Xen is dropped from 4.3 onwards
-        case "$xenbranch" in
-        xen-3.*-testing) ;;
-        xen-4.0-testing) ;;
-        xen-4.1-testing) ;;
-        xen-4.2-testing) ;;
-        *) continue ;;
-        esac
-        ;;
-  amd64)
-        ;;
-  esac
-
-  case "$xenarch" in
-  armhf) suite="wheezy";  guestsuite="wheezy";;
-  *)     suite=$defsuite; guestsuite=$defguestsuite;;
-  esac
-
-  if [ $suite != $defsuite ] ; then
-      suite_runvars="host_suite=$suite"
-  else
-      suite_runvars=
-  fi
-
-  case "$xenbranch" in
-  xen-3.*-testing)      onetoolstack=xend ;;
-  xen-4.0-testing)      onetoolstack=xend ;;
-  xen-4.1-testing)      onetoolstack=xend ;;
-  *)                    onetoolstack=xl ;;
-  esac
-
-  for kern in ''; do
-
-    case $kern in
-    '')
-                kernbuild=pvops
-                kernkind=pvops
-                ;;
-    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
-    esac
-
-    for dom0arch in i386 amd64 armhf; do
-
-      case ${xenarch}_${dom0arch} in
-          amd64_amd64) ;;
-          amd64_i386) ;;
-          i386_i386) ;;
-          armhf_armhf) ;;
-          *) continue ;;
-      esac
-
-      eval "
-          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
-      "
-
-      debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
-      if [ $guestsuite != $defguestsuite ] ; then
-          debian_runvars="$debian_runvars debian_suite=$guestsuite"
-      fi
-
-      most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
-
-      most_runvars="
-                arch=$dom0arch                                  \
-                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
-                kernkind=$kernkind                              \
-                $arch_runvars $suite_runvars
-                "
-
-      test_matrix_do_one
-    done
-  done
-done
+test_matrix_iterate
 
 echo $flight
 
diff --git a/mfi-common b/mfi-common
index 373110b..5529979 100644
--- a/mfi-common
+++ b/mfi-common
@@ -217,6 +217,107 @@ job_create_test () {
     xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
+# Iterate over xenarch, dom0arch and kernel calling test_matrix_do_one
+# for each combination.
+#
+# Filters non-sensical combinations.
+#
+# Provides various convenience variables for the callback.
+#
+test_matrix_iterate () {
+  for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
+
+    if [ "x$xenarch" = xdisable ]; then continue; fi
+
+    test_matrix_branch_filter_callback || continue
+
+    case "$xenarch" in
+    armhf)
+          # Arm from 4.3 onwards only
+          case "$xenbranch" in
+          xen-3.*-testing) continue;;
+          xen-4.0-testing) continue;;
+          xen-4.1-testing) continue;;
+          xen-4.2-testing) continue;;
+          *) ;;
+          esac
+          ;;
+    i386)
+          # 32-bit Xen is dropped from 4.3 onwards
+          case "$xenbranch" in
+          xen-3.*-testing) ;;
+          xen-4.0-testing) ;;
+          xen-4.1-testing) ;;
+          xen-4.2-testing) ;;
+          *) continue ;;
+          esac
+          ;;
+    amd64)
+          ;;
+    esac
+
+    case "$xenarch" in
+    armhf) suite="wheezy";  guestsuite="wheezy";;
+    *)     suite=$defsuite; guestsuite=$defguestsuite;;
+    esac
+
+    if [ $suite != $defsuite ] ; then
+        suite_runvars="host_suite=$suite"
+    else
+        suite_runvars=
+    fi
+
+    case "$xenbranch" in
+    xen-3.*-testing)      onetoolstack=xend ;;
+    xen-4.0-testing)      onetoolstack=xend ;;
+    xen-4.1-testing)      onetoolstack=xend ;;
+    *)                    onetoolstack=xl ;;
+    esac
+
+    for kern in ''; do
+
+      case $kern in
+      '')
+                  kernbuild=pvops
+                  kernkind=pvops
+                  ;;
+      *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
+      esac
+
+      for dom0arch in i386 amd64 armhf; do
+
+        case ${xenarch}_${dom0arch} in
+            amd64_amd64) ;;
+            amd64_i386) ;;
+            i386_i386) ;;
+            armhf_armhf) ;;
+            *) continue ;;
+        esac
+
+        eval "
+            arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
+        "
+
+        debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
+        if [ $guestsuite != $defguestsuite ] ; then
+            debian_runvars="$debian_runvars debian_suite=$guestsuite"
+        fi
+
+        most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
+
+        most_runvars="
+                  arch=$dom0arch                                  \
+                  kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
+                  kernkind=$kernkind                              \
+                  $arch_runvars $suite_runvars
+                  "
+
+        test_matrix_do_one
+      done
+    done
+  done
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
-- 
1.8.5.2
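[Editorial note: after the `continue` filter, the nested xenarch/dom0arch loops in test_matrix_iterate above reduce to the same-width pairs plus 32-bit dom0 on 64-bit Xen. A standalone reconstruction of just that filter:]

```shell
# Enumerate the xenarch/dom0arch combinations the matrix actually
# tests: amd64_amd64, amd64_i386, i386_i386, armhf_armhf.
pairs=
for xenarch in i386 amd64 armhf; do
  for dom0arch in i386 amd64 armhf; do
    case ${xenarch}_${dom0arch} in
      amd64_amd64|amd64_i386|i386_i386|armhf_armhf) ;;
      *) continue ;;
    esac
    pairs="$pairs ${xenarch}_${dom0arch}"
  done
done
pairs="${pairs# }"   # strip the leading separator
echo "$pairs"
```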



From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWe-0006ep-Qn; Wed, 22 Jan 2014 09:55:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bP-Iq
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:11 +0000
Received: from [85.158.137.68:41694] by server-1.bemta-3.messagelabs.com id
	D3/84-29598-D759FD25; Wed, 22 Jan 2014 09:55:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390384506!6937372!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19866 invoked from network); 22 Jan 2014 09:55:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181286"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-C2;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:45 +0000
Message-ID: <1390384501-20552-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 01/17] make-flight: expand hard tabs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Run expand on the file.

Add an emacs magic var block disabling indent-tabs-mode and setting the sh
mode basic offset to 2 (which appears to be the common, but not universal,
case).
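[Editorial note: the whitespace change is the stock expand(1) pass, which replaces each hard tab with spaces up to the next 8-column tab stop. A sketch with invented file names:]

```shell
# expand(1) turns hard tabs into spaces (default tab stop: 8 columns),
# so the output contains no tab characters. File names are invented.
printf 'a\tb\n' > demo.in
expand demo.in > demo.out
if grep -q "$(printf '\t')" demo.out; then echo "tabs remain"; else echo "no tabs"; fi
```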

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 370 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 188 insertions(+), 182 deletions(-)

diff --git a/make-flight b/make-flight
index fea642c..ddd4427 100755
--- a/make-flight
+++ b/make-flight
@@ -58,8 +58,8 @@ if [ x$buildflight = x ]; then
       xen-4.2-testing) continue;;
       esac
       pvops_kernel="
-	tree_linux=$TREE_LINUX_ARM
-	revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
       "
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
@@ -70,8 +70,8 @@ if [ x$buildflight = x ]; then
       linux-arm-xen) continue;;
       esac
       pvops_kernel="
-	tree_linux=$TREE_LINUX
-	revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
       "
       ;;
     esac
@@ -115,49 +115,49 @@ if [ x$buildflight = x ]; then
     esac
 
     eval "
-	arch_runvars=\"\$ARCH_RUNVARS_$arch\"
+        arch_runvars=\"\$ARCH_RUNVARS_$arch\"
     "
 
     build_hostflags=share-build-$suite-$arch,arch-$arch,suite-$suite,purpose-build
 
-    ./cs-job-create $flight build-$arch build				     \
-		arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf	     \
-	tree_qemu=$TREE_QEMU	     \
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		revision_xen=$REVISION_XEN				     \
-		revision_qemu=$REVISION_QEMU				     \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM
+    ./cs-job-create $flight build-$arch build                                \
+                arch=$arch enable_xend=$build_defxend enable_ovmf=$enable_ovmf       \
+        tree_qemu=$TREE_QEMU         \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
 
     if [ $build_extraxend = "true" ] ; then
-    ./cs-job-create $flight build-$arch-xend build			     \
-		arch=$arch enable_xend=true enable_ovmf=$enable_ovmf	     \
-	tree_qemu=$TREE_QEMU	     \
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		revision_xen=$REVISION_XEN				     \
-		revision_qemu=$REVISION_QEMU				     \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM
+    ./cs-job-create $flight build-$arch-xend build                           \
+                arch=$arch enable_xend=true enable_ovmf=$enable_ovmf         \
+        tree_qemu=$TREE_QEMU         \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                revision_xen=$REVISION_XEN                                   \
+                revision_qemu=$REVISION_QEMU                                 \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM
     fi
 
-    ./cs-job-create $flight build-$arch-pvops build-kern		     \
-		arch=$arch kconfighow=xen-enable-xen-config		     \
-	tree_xen=$TREE_XEN		     \
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
-		$suite_runvars                                               \
-		host_hostflags=$build_hostflags    \
-		xen_kernels=linux-2.6-pvops				     \
-		revision_xen=$REVISION_XEN				     \
-		$pvops_kernel $pvops_kconfig_overrides			     \
-		${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}	\
-		tree_linuxfirmware=$TREE_LINUXFIRMWARE			     \
-		revision_linuxfirmware=$REVISION_LINUXFIRMWARE
+    ./cs-job-create $flight build-$arch-pvops build-kern                     \
+                arch=$arch kconfighow=xen-enable-xen-config                  \
+        tree_xen=$TREE_XEN                   \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_RUNVARS $arch_runvars   \
+                $suite_runvars                                               \
+                host_hostflags=$build_hostflags    \
+                xen_kernels=linux-2.6-pvops                                  \
+                revision_xen=$REVISION_XEN                                   \
+                $pvops_kernel $pvops_kconfig_overrides                       \
+                ${TREEVCS_LINUX:+treevcs_linux=}${TREEVCS_LINUX}        \
+                tree_linuxfirmware=$TREE_LINUXFIRMWARE                       \
+                revision_linuxfirmware=$REVISION_LINUXFIRMWARE
 
     case "$arch" in
     armhf) continue;; # don't do any other kernel builds
@@ -165,19 +165,19 @@ if [ x$buildflight = x ]; then
 
     if [ "x$REVISION_LINUX_OLD" != xdisable ]; then
 
-      ./cs-job-create $flight build-$arch-oldkern build			\
-		arch=$arch						\
-	tree_qemu=$TREE_QEMU	\
-	tree_qemuu=$TREE_QEMU_UPSTREAM	     \
-	tree_xen=$TREE_XEN		\
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS	\
-		$arch_runvars $suite_runvars				\
-		host_hostflags=$build_hostflags \
-		xen_kernels=linux-2.6-xen				\
-		revision_xen=$REVISION_XEN				\
-		revision_qemu=$REVISION_QEMU			        \
-		revision_qemuu=$REVISION_QEMU_UPSTREAM			\
-	tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg	\
+      ./cs-job-create $flight build-$arch-oldkern build                 \
+                arch=$arch                                              \
+        tree_qemu=$TREE_QEMU    \
+        tree_qemuu=$TREE_QEMU_UPSTREAM       \
+        tree_xen=$TREE_XEN              \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_OLD_RUNVARS        \
+                $arch_runvars $suite_runvars                            \
+                host_hostflags=$build_hostflags \
+                xen_kernels=linux-2.6-xen                               \
+                revision_xen=$REVISION_XEN                              \
+                revision_qemu=$REVISION_QEMU                            \
+                revision_qemuu=$REVISION_QEMU_UPSTREAM                  \
+        tree_linux=http://xenbits.xen.org/linux-2.6.18-xen.hg   \
         revision_linux=$REVISION_LINUX_OLD
 
     fi
@@ -185,17 +185,17 @@ if [ x$buildflight = x ]; then
     if false && [ $arch = i386 -a "x$REVISION_LINUX_XCP" != xdisable ]; then
       # XCP dom0 kernel is 32-bit only
 
-      ./cs-job-create $flight build-$arch-xcpkern build-kern		      \
-		$RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS	      \
-		$arch_runvars $suite_runvars				      \
-		arch=$arch						\
-	kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
-		host_hostflags=$build_hostflags     \
-		tree_xen=$TREE_XEN	      \
-		revision_xen=$REVISION_XEN				      \
-	tree_linux=$TREEBASE_LINUX_XCP.hg    \
+      ./cs-job-create $flight build-$arch-xcpkern build-kern                  \
+                $RUNVARS $BUILD_RUNVARS $BUILD_LINUX_XCP_RUNVARS              \
+                $arch_runvars $suite_runvars                                  \
+                arch=$arch                                              \
+        kconfighow=intree-buildconfigs kimagefile=arch/x86/boot/vmlinuz \
+                host_hostflags=$build_hostflags     \
+                tree_xen=$TREE_XEN            \
+                revision_xen=$REVISION_XEN                                    \
+        tree_linux=$TREEBASE_LINUX_XCP.hg    \
      tree_pq_linux=$TREEBASE_LINUX_XCP.pq.hg \
-        revision_linux=$REVISION_LINUX_XCP				      \
+        revision_linux=$REVISION_LINUX_XCP                                    \
         revision_pq_linux=$REVISION_PQ_LINUX_XCP
 
     fi
@@ -226,17 +226,17 @@ stripy () {
 }
 
 job_create_test () {
-	local job=$1; shift
-	local recipe=$1; shift
-	local toolstack=$1; shift
-	local xenarch=$1; shift
-	local dom0arch=$1; shift
+        local job=$1; shift
+        local recipe=$1; shift
+        local toolstack=$1; shift
+        local xenarch=$1; shift
+        local dom0arch=$1; shift
 
         local job_md5=`echo "$job" | md5sum`
         job_md5="${job_md5%  -}"
 
-	xenbuildjob="${bfi}build-$xenarch"
-	buildjob="${bfi}build-$dom0arch"
+        xenbuildjob="${bfi}build-$xenarch"
+        buildjob="${bfi}build-$dom0arch"
 
         case "$xenbranch:$toolstack" in
         xen-3.*-testing:*) ;;
@@ -245,8 +245,8 @@ job_create_test () {
         xen-4.2-testing:*) ;;
         xen-4.3-testing:*) ;;
         *:xend) xenbuildjob="$xenbuildjob-xend"
-		buildjob="${bfi}build-$dom0arch-xend"
-		;;
+                buildjob="${bfi}build-$dom0arch-xend"
+                ;;
         esac
 
         if [ "x$JOB_MD5_PATTERN" != x ]; then
@@ -256,36 +256,36 @@ job_create_test () {
                 esac
         fi
 
-	case "$branch" in
-	qemu-upstream-*)
-		case " $* " in
-		*" device_model_version=qemu-xen "*)
-			;;
-		*)
-			: "suppressed $job"
-			return;;
-		esac
-		;;
-	*)
-	        case "$job" in
-		*-qemuu-*)
-	           if [ "x$toolstack" != xxl ]; then return; fi
-
-	           case "$job" in
-	           *-win*)
-	                      case "$job_md5" in
-	                      *[0-a]) return;;
-	                      esac
-	                      ;;
-	           esac
-	           ;;
-		esac
-		;;
+        case "$branch" in
+        qemu-upstream-*)
+                case " $* " in
+                *" device_model_version=qemu-xen "*)
+                        ;;
+                *)
+                        : "suppressed $job"
+                        return;;
+                esac
+                ;;
+        *)
+                case "$job" in
+                *-qemuu-*)
+                   if [ "x$toolstack" != xxl ]; then return; fi
+
+                   case "$job" in
+                   *-win*)
+                              case "$job_md5" in
+                              *[0-a]) return;;
+                              esac
+                              ;;
+                   esac
+                   ;;
+                esac
+                ;;
         esac
 
-	./cs-job-create $flight $job $recipe toolstack=$toolstack	\
-		$RUNVARS $TEST_RUNVARS $most_runvars			\
-		xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
+        ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
+                $RUNVARS $TEST_RUNVARS $most_runvars                    \
+                xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
 }
 
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
@@ -294,37 +294,37 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
   case "$xenarch" in
   armhf)
-	# Arm from 4.3 onwards only
-	case "$xenbranch" in
-	xen-3.*-testing) continue;;
-	xen-4.0-testing) continue;;
-	xen-4.1-testing) continue;;
-	xen-4.2-testing) continue;;
-	*) ;;
-	esac
-	case "$branch" in
-	linux-arm-xen) ;;
-	linux-*) continue;;
-	qemu-*) continue;;
-	esac
-	;;
+        # Arm from 4.3 onwards only
+        case "$xenbranch" in
+        xen-3.*-testing) continue;;
+        xen-4.0-testing) continue;;
+        xen-4.1-testing) continue;;
+        xen-4.2-testing) continue;;
+        *) ;;
+        esac
+        case "$branch" in
+        linux-arm-xen) ;;
+        linux-*) continue;;
+        qemu-*) continue;;
+        esac
+        ;;
   i386)
-	# 32-bit Xen is dropped from 4.3 onwards
-	case "$xenbranch" in
-	xen-3.*-testing) ;;
-	xen-4.0-testing) ;;
-	xen-4.1-testing) ;;
-	xen-4.2-testing) ;;
-	*) continue ;;
-	esac
-	case "$branch" in
-	linux-arm-xen) continue;;
-	esac
-	;;
+        # 32-bit Xen is dropped from 4.3 onwards
+        case "$xenbranch" in
+        xen-3.*-testing) ;;
+        xen-4.0-testing) ;;
+        xen-4.1-testing) ;;
+        xen-4.2-testing) ;;
+        *) continue ;;
+        esac
+        case "$branch" in
+        linux-arm-xen) continue;;
+        esac
+        ;;
   amd64)
-	case "$branch" in
-	linux-arm-xen) continue;;
-	esac
+        case "$branch" in
+        linux-arm-xen) continue;;
+        esac
   esac
 
   case "$xenarch" in
@@ -339,10 +339,10 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
   fi
 
   case "$xenbranch" in
-  xen-3.*-testing)	onetoolstack=xend ;;
-  xen-4.0-testing)	onetoolstack=xend ;;
-  xen-4.1-testing)	onetoolstack=xend ;;
-  *)			onetoolstack=xl ;;
+  xen-3.*-testing)      onetoolstack=xend ;;
+  xen-4.0-testing)      onetoolstack=xend ;;
+  xen-4.1-testing)      onetoolstack=xend ;;
+  *)                    onetoolstack=xl ;;
   esac
 
   for kern in ''; do
@@ -350,28 +350,28 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
     case $kern in
     '')
                 kernbuild=pvops
-		kernkind=pvops
-		;;
+                kernkind=pvops
+                ;;
     -xcpkern)
                 kernbuild=xcpkern
-		kernkind=2627
-		if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
-		;;
-    *)		echo >&2 "kernkind ?  $kern"; exit 1 ;;
+                kernkind=2627
+                if [ "x$REVISION_LINUX_XCP" = xdisable ]; then continue; fi
+                ;;
+    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
     esac
 
     for dom0arch in i386 amd64 armhf; do
 
       case ${xenarch}_${dom0arch} in
-	  amd64_amd64) ;;
-	  amd64_i386) ;;
-	  i386_i386) ;;
-	  armhf_armhf) ;;
-	  *) continue ;;
+          amd64_amd64) ;;
+          amd64_i386) ;;
+          i386_i386) ;;
+          armhf_armhf) ;;
+          *) continue ;;
       esac
 
       eval "
-	  arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
+          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
       "
 
       if [ x$kern = x-xcpkern -a $dom0arch != i386 ]; then continue; fi
@@ -384,35 +384,35 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
 
       most_runvars="
-		arch=$dom0arch			        	\
-		kernbuildjob=${bfi}build-$dom0arch-$kernbuild 	\
-		kernkind=$kernkind		        	\
-		$arch_runvars $suite_runvars
-		"
+                arch=$dom0arch                                  \
+                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
+                kernkind=$kernkind                              \
+                $arch_runvars $suite_runvars
+                "
       if [ $dom0arch = armhf ]; then
-	  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
-	  continue
+          job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
+          continue
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-		$xenarch $dom0arch					  \
-		$debian_runvars all_hostflags=$most_hostflags
+                $xenarch $dom0arch                                        \
+                $debian_runvars all_hostflags=$most_hostflags
 
       if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
         for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
-			test-freebsd xl $xenarch $dom0arch \
-			freebsd_arch=$freebsdarch \
+                        test-freebsd xl $xenarch $dom0arch \
+                        freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
-			all_hostflags=$most_hostflags
+                        all_hostflags=$most_hostflags
 
         done
 
@@ -456,8 +456,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
       job_create_test \
                 test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
                 test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
-		win_image=winxpsp3.iso $vcpus_runvars	\
-		all_hostflags=$most_hostflags,hvm
+                win_image=winxpsp3.iso $vcpus_runvars   \
+                all_hostflags=$most_hostflags,hvm
 
             fi
         done
@@ -466,32 +466,32 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
                 test-win xl $xenarch $dom0arch $qemuu_runvar \
-		win_image=win7-x64.iso \
-		all_hostflags=$most_hostflags,hvm
+                win_image=win7-x64.iso \
+                all_hostflags=$most_hostflags,hvm
 
       fi
 
       if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
-	for cpuvendor in amd intel; do
+        for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
-						test-rhelhvm xl $xenarch $dom0arch \
-		redhat_image=rhel-server-6.1-i386-dvd.iso		\
-		all_hostflags=$most_hostflags,hvm-$cpuvendor \
+                                                test-rhelhvm xl $xenarch $dom0arch \
+                redhat_image=rhel-server-6.1-i386-dvd.iso               \
+                all_hostflags=$most_hostflags,hvm-$cpuvendor \
                 $qemuu_runvar
 
-	done
+        done
 
       fi
 
       done # qemuu_suffix
 
       job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-		$onetoolstack $xenarch $dom0arch \
+                $onetoolstack $xenarch $dom0arch \
                 !host !host_hostflags \
-		$debian_runvars \
-		all_hostflags=$most_hostflags,equiv-1
+                $debian_runvars \
+                all_hostflags=$most_hostflags,equiv-1
 
       if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
 
@@ -499,8 +499,8 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
            test-debian xl $xenarch $dom0arch \
-		guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
-		$debian_runvars all_hostflags=$most_hostflags
+                guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
+                $debian_runvars all_hostflags=$most_hostflags
 
        done
 
@@ -510,12 +510,12 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
                         test-debian xl $xenarch $dom0arch guests_vcpus=4  \
-		        $debian_runvars all_hostflags=$most_hostflags
+                        $debian_runvars all_hostflags=$most_hostflags
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-           test-debian xl $xenarch $dom0arch				  \
-		guests_vcpus=4 xen_boot_append='sched=credit2'		  \
-		$debian_runvars all_hostflags=$most_hostflags
+           test-debian xl $xenarch $dom0arch                              \
+                guests_vcpus=4 xen_boot_append='sched=credit2'            \
+                $debian_runvars all_hostflags=$most_hostflags
 
       fi
 
@@ -524,10 +524,10 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         for cpuvendor in intel; do
 
       job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                        test-debian-nomigr xl $xenarch $dom0arch	  \
-		guests_vcpus=4						  \
-		$debian_runvars debian_pcipassthrough_nic=host		  \
-		all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
+                        test-debian-nomigr xl $xenarch $dom0arch          \
+                guests_vcpus=4                                            \
+                $debian_runvars debian_pcipassthrough_nic=host            \
+                all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
 
         done
 
@@ -540,3 +540,9 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 done
 
 echo $flight
+
+# Local variables:
+# mode: sh
+# sh-basic-offset: 2
+# indent-tabs-mode: nil
+# End:
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWd-0006dL-TF; Wed, 22 Jan 2014 09:55:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bH-Cs
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:10 +0000
Received: from [85.158.139.211:50865] by server-1.bemta-5.messagelabs.com id
	7C/16-21065-D759FD25; Wed, 22 Jan 2014 09:55:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2297 invoked from network); 22 Jan 2014 09:55:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181289"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-1l;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:51 +0000
Message-ID: <1390384501-20552-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 07/17] mfi-common: restrict scope of
	local vars in create_build_jobs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mfi-common b/mfi-common
index 3342acc..28c3f3f 100644
--- a/mfi-common
+++ b/mfi-common
@@ -33,6 +33,13 @@ stripy () {
 
 create_build_jobs () {
 
+  local arch
+  local pvops_kernel pvops_kconfig_overrides
+  local suite suite_runvars
+  local want_xend build_defxend build_extraxend
+  local enable_ovmf
+  local build_hostflags
+
   for arch in ${BUILD_ARCHES- i386 amd64 armhf }; do
 
     if [ "x$arch" = xdisable ]; then continue; fi
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
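The patch above adds `local` declarations to `create_build_jobs` because shell functions otherwise write straight into the caller's variables. A minimal standalone sketch (not part of the patch; `helper_leaky`, `helper_scoped`, and the values are illustrative) shows the clobbering that the declarations prevent:

```shell
# Without "local", an assignment inside a function overwrites the
# caller's variable of the same name; with "local" it is confined
# to the function.  Hypothetical minimal demonstration:
helper_leaky () {
  arch=armhf            # no "local": overwrites the caller's $arch
}

helper_scoped () {
  local arch
  arch=armhf            # confined to this function
}

arch=amd64
helper_leaky
echo "$arch"    # armhf -- the caller's value was clobbered

arch=amd64
helper_scoped
echo "$arch"    # amd64 -- the caller's value is preserved
```

This is why declaring `arch`, `suite`, `want_xend`, etc. local in `create_build_jobs` matters: the function runs inside larger scripts that use the same names.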

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006ae-39; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWZ-0006ZX-8u
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:07 +0000
Received: from [85.158.139.211:7356] by server-8.bemta-5.messagelabs.com id
	96/D9-29838-A759FD25; Wed, 22 Jan 2014 09:55:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1947 invoked from network); 22 Jan 2014 09:55:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-Ee;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:46 +0000
Message-ID: <1390384501-20552-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 02/17] make-flight: refactor common
	function "stripy" into helper library
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Will be useful for other make-flight variants.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 18 +-----------------
 mfi-common  | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 17 deletions(-)
 create mode 100644 mfi-common

diff --git a/make-flight b/make-flight
index ddd4427..fdde3af 100755
--- a/make-flight
+++ b/make-flight
@@ -28,6 +28,7 @@ flight=`./cs-flight-create $blessing $branch`
 
 . ap-common
 . cri-common
+. mfi-common
 
 defsuite=`getconfig DebianSuite`
 defguestsuite=`getconfig GuestDebianSuite`
@@ -208,23 +209,6 @@ else
 
 fi
 
-stripy () {
-        local out_vn="$1"; shift
-        local out_0="$1"; shift
-        local out_1="$1"; shift
-        local out_val=0
-        local this_val
-        local this_cmp
-        while [ $# != 0 ]; do
-                this_val="$1"; shift
-                this_cmp="$1"; shift
-                if [ "x$this_val" = "x$this_cmp" ]; then
-                        out_val=$(( $out_val ^ 1 ))
-                fi
-        done
-        eval "$out_vn=\"\$out_$out_val\""
-}
-
 job_create_test () {
         local job=$1; shift
         local recipe=$1; shift
diff --git a/mfi-common b/mfi-common
new file mode 100644
index 0000000..ec0beca
--- /dev/null
+++ b/mfi-common
@@ -0,0 +1,38 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2014 Citrix Inc.
+# 
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+# 
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+stripy () {
+  local out_vn="$1"; shift
+  local out_0="$1"; shift
+  local out_1="$1"; shift
+  local out_val=0
+  local this_val
+  local this_cmp
+  while [ $# != 0 ]; do
+    this_val="$1"; shift
+    this_cmp="$1"; shift
+    if [ "x$this_val" = "x$this_cmp" ]; then
+      out_val=$(( $out_val ^ 1 ))
+    fi
+  done
+  eval "$out_vn=\"\$out_$out_val\""
+}
+
+# Local variables:
+# mode: sh
+# sh-basic-offset: 2
+# indent-tabs-mode: nil
+# End:
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
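The `stripy` helper moved into mfi-common above selects one of two outputs by XOR-ing a parity bit for every matching value/comparand pair. A standalone sketch (not part of the patch; the argument values are illustrative) shows the toggling behaviour:

```shell
# Copy of the stripy helper from mfi-common, for illustration.
# Usage: stripy VAR OUT0 OUT1 VAL CMP [VAL CMP ...]
# Each VAL/CMP pair that matches flips a parity bit; VAR is then
# set to OUT0 (even parity) or OUT1 (odd parity).
stripy () {
  local out_vn="$1"; shift
  local out_0="$1"; shift
  local out_1="$1"; shift
  local out_val=0
  local this_val
  local this_cmp
  while [ $# != 0 ]; do
    this_val="$1"; shift
    this_cmp="$1"; shift
    if [ "x$this_val" = "x$this_cmp" ]; then
      out_val=$(( $out_val ^ 1 ))
    fi
  done
  eval "$out_vn=\"\$out_$out_val\""
}

# No pair matches: parity stays 0, so the first output is chosen.
stripy result even odd i386 amd64
echo "$result"    # even

# One pair matches: parity flips to 1.
stripy result even odd amd64 amd64
echo "$result"    # odd

# Two pairs match: parity flips twice, back to 0.
stripy result even odd amd64 amd64 xl xl
echo "$result"    # even
```

Moving it into a sourced library rather than duplicating it is what lets the later make-flight variants share one implementation.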

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWe-0006e7-Cc; Wed, 22 Jan 2014 09:55:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWc-0006bj-QX
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:10 +0000
Received: from [85.158.139.211:7735] by server-12.bemta-5.messagelabs.com id
	61/80-30017-E759FD25; Wed, 22 Jan 2014 09:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!6
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2410 invoked from network); 22 Jan 2014 09:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181291"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:03 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-5a;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:52 +0000
Message-ID: <1390384501-20552-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 08/17] mfi-common: fixup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 mfi-common | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mfi-common b/mfi-common
index 28c3f3f..e7cae0a 100644
--- a/mfi-common
+++ b/mfi-common
@@ -90,7 +90,7 @@ create_build_jobs () {
     # In 4.4 onwards xend is off by default. If necessary we build a
     # separate set of binaries with xend enabled in order to run those
     # tests which use xend.
-    if [ -z "$WANT_XEND" ]; then
+    if [ -n "$WANT_XEND" ]; then
       want_xend=$WANT_XEND
     else
       case "$arch" in
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
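The one-character fixup above matters because `test -z` and `test -n` are opposites: the original `-z "$WANT_XEND"` took the override branch precisely when no override was set. A sketch of the intended logic (the function name and the per-arch defaults here are hypothetical, not taken from mfi-common):

```shell
# Corrected override logic: honour WANT_XEND when the caller set it
# (-n: string is non-empty), otherwise fall back to a default.
# The arch-based default policy below is a placeholder for illustration.
want_xend_for () {
  local WANT_XEND="$1" arch="$2" want_xend
  if [ -n "$WANT_XEND" ]; then
    want_xend=$WANT_XEND          # explicit override wins
  else
    case "$arch" in               # hypothetical default policy
    i386|amd64) want_xend=true ;;
    *)          want_xend=false ;;
    esac
  fi
  echo "$want_xend"
}

echo "$(want_xend_for ''    i386)"   # true  (no override: default applies)
echo "$(want_xend_for false i386)"   # false (explicit override wins)
echo "$(want_xend_for ''    armhf)"  # false (no override: default applies)
```

With `-z`, the first call would have echoed the empty override instead of the default, which is exactly the bug this fixup removes.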

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWb-0006at-Fj; Wed, 22 Jan 2014 09:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWa-0006Zr-3A
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:08 +0000
Received: from [85.158.139.211:7436] by server-11.bemta-5.messagelabs.com id
	F7/6A-23268-B759FD25; Wed, 22 Jan 2014 09:55:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390384504!11231652!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2024 invoked from network); 22 Jan 2014 09:55:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181284"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:02 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWT-0005Rs-IO;
	Wed, 22 Jan 2014 09:55:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:47 +0000
Message-ID: <1390384501-20552-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 03/17] make-flight: Drop obsolete/unused
	xenrt_images variable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 2 --
 1 file changed, 2 deletions(-)

diff --git a/make-flight b/make-flight
index fdde3af..19f60f1 100755
--- a/make-flight
+++ b/make-flight
@@ -33,8 +33,6 @@ flight=`./cs-flight-create $blessing $branch`
 defsuite=`getconfig DebianSuite`
 defguestsuite=`getconfig GuestDebianSuite`
 
-xenrt_images=/usr/groups/images/autoinstall
-
 if [ x$buildflight = x ]; then
 
   if [ "x$BUILD_LVEXTEND_MAX" != x ]; then
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWg-0006jY-I3; Wed, 22 Jan 2014 09:55:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWe-0006d8-7D
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:12 +0000
Received: from [85.158.143.35:34277] by server-1.bemta-4.messagelabs.com id
	25/49-02132-F759FD25; Wed, 22 Jan 2014 09:55:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25913 invoked from network); 22 Jan 2014 09:55:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95230010"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:10 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-HB;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:56 +0000
Message-ID: <1390384501-20552-12-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 12/17] make-flight: refactor test case
	filter over $branch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/make-flight b/make-flight
index 8862be5..e1b65b2 100755
--- a/make-flight
+++ b/make-flight
@@ -76,10 +76,31 @@ job_create_test_filter_callback () {
   return 0;
 }
 
+test_matrix_branch_filter_callback () {
+  case "$xenarch" in
+  armhf)
+        case "$branch" in
+        linux-arm-xen) ;;
+        linux-*) return 1;;
+        qemu-*) return 1;;
+        esac
+        ;;
+  i386|amd64)
+        case "$branch" in
+        linux-arm-xen) return 1;;
+        esac
+        ;;
+  esac
+
+  return 0
+}
+
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
   if [ "x$xenarch" = xdisable ]; then continue; fi
 
+  test_matrix_branch_filter_callback || continue
+
   case "$xenarch" in
   armhf)
         # Arm from 4.3 onwards only
@@ -90,11 +111,6 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         xen-4.2-testing) continue;;
         *) ;;
         esac
-        case "$branch" in
-        linux-arm-xen) ;;
-        linux-*) continue;;
-        qemu-*) continue;;
-        esac
         ;;
   i386)
         # 32-bit Xen is dropped from 4.3 onwards
@@ -105,14 +121,9 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         xen-4.2-testing) ;;
         *) continue ;;
         esac
-        case "$branch" in
-        linux-arm-xen) continue;;
-        esac
         ;;
   amd64)
-        case "$branch" in
-        linux-arm-xen) continue;;
-        esac
+        ;;
   esac
 
   case "$xenarch" in
-- 
1.8.5.2
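[Editorial note, not part of the original mail: the refactor above follows a common shell pattern. A `case`-based filter that used inline `continue` is hoisted into a function that returns non-zero for filtered-out combinations, and the loop invokes it as `filter || continue`. A cut-down sketch of the same shape (variable and function names are illustrative, not osstest's):]

```shell
#!/bin/sh
# Reduction of the filter-callback pattern introduced in the patch above.
branch_filter () {
  # Return 1 (filtered out) for arch/branch combinations that make no sense.
  case "$arch" in
  armhf)
        case "$branch" in
        linux-arm-xen) ;;           # only the ARM Linux branch is relevant
        linux-*|qemu-*) return 1;;  # skip other Linux/qemu branches
        esac ;;
  i386|amd64)
        case "$branch" in
        linux-arm-xen) return 1;;   # ARM-only branch: skip on x86
        esac ;;
  esac
  return 0
}

branch=linux-arm-xen
for arch in i386 amd64 armhf; do
  branch_filter || continue         # replaces the old inline 'continue's
  echo "would create jobs for $arch"
done
```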


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWj-0006nn-3T; Wed, 22 Jan 2014 09:55:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWh-0006ju-1H
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:15 +0000
Received: from [85.158.143.35:34573] by server-3.bemta-4.messagelabs.com id
	E4/0D-32360-2859FD25; Wed, 22 Jan 2014 09:55:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26715 invoked from network); 22 Jan 2014 09:55:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95230022"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:12 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-J3;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:57 +0000
Message-ID: <1390384501-20552-13-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 13/17] make-flight: Separate matrix
	iteration from test job creation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 180 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 91 insertions(+), 89 deletions(-)

diff --git a/make-flight b/make-flight
index e1b65b2..97421f2 100755
--- a/make-flight
+++ b/make-flight
@@ -95,97 +95,12 @@ test_matrix_branch_filter_callback () {
   return 0
 }
 
-for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
-
-  if [ "x$xenarch" = xdisable ]; then continue; fi
-
-  test_matrix_branch_filter_callback || continue
-
-  case "$xenarch" in
-  armhf)
-        # Arm from 4.3 onwards only
-        case "$xenbranch" in
-        xen-3.*-testing) continue;;
-        xen-4.0-testing) continue;;
-        xen-4.1-testing) continue;;
-        xen-4.2-testing) continue;;
-        *) ;;
-        esac
-        ;;
-  i386)
-        # 32-bit Xen is dropped from 4.3 onwards
-        case "$xenbranch" in
-        xen-3.*-testing) ;;
-        xen-4.0-testing) ;;
-        xen-4.1-testing) ;;
-        xen-4.2-testing) ;;
-        *) continue ;;
-        esac
-        ;;
-  amd64)
-        ;;
-  esac
-
-  case "$xenarch" in
-  armhf) suite="wheezy";  guestsuite="wheezy";;
-  *)     suite=$defsuite; guestsuite=$defguestsuite;;
-  esac
-
-  if [ $suite != $defsuite ] ; then
-      suite_runvars="host_suite=$suite"
-  else
-      suite_runvars=
-  fi
-
-  case "$xenbranch" in
-  xen-3.*-testing)      onetoolstack=xend ;;
-  xen-4.0-testing)      onetoolstack=xend ;;
-  xen-4.1-testing)      onetoolstack=xend ;;
-  *)                    onetoolstack=xl ;;
-  esac
-
-  for kern in ''; do
-
-    case $kern in
-    '')
-                kernbuild=pvops
-                kernkind=pvops
-                ;;
-    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
-    esac
-
-    for dom0arch in i386 amd64 armhf; do
-
-      case ${xenarch}_${dom0arch} in
-          amd64_amd64) ;;
-          amd64_i386) ;;
-          i386_i386) ;;
-          armhf_armhf) ;;
-          *) continue ;;
-      esac
-
-      eval "
-          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
-      "
-
-      debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
-      if [ $guestsuite != $defguestsuite ] ; then
-          debian_runvars="$debian_runvars debian_suite=$guestsuite"
-      fi
-
-      most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
-
-      most_runvars="
-                arch=$dom0arch                                  \
-                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
-                kernkind=$kernkind                              \
-                $arch_runvars $suite_runvars
-                "
+test_matrix_do_one () {
       if [ $dom0arch = armhf ]; then
           job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
                 $xenarch $dom0arch                                        \
                 $debian_runvars all_hostflags=$most_hostflags
-          continue
+          return
       fi
 
       job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
@@ -324,11 +239,98 @@ for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
         done
 
       fi
+}
 
-    done
+for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
-  done
+  if [ "x$xenarch" = xdisable ]; then continue; fi
+
+  test_matrix_branch_filter_callback || continue
+
+  case "$xenarch" in
+  armhf)
+        # Arm from 4.3 onwards only
+        case "$xenbranch" in
+        xen-3.*-testing) continue;;
+        xen-4.0-testing) continue;;
+        xen-4.1-testing) continue;;
+        xen-4.2-testing) continue;;
+        *) ;;
+        esac
+        ;;
+  i386)
+        # 32-bit Xen is dropped from 4.3 onwards
+        case "$xenbranch" in
+        xen-3.*-testing) ;;
+        xen-4.0-testing) ;;
+        xen-4.1-testing) ;;
+        xen-4.2-testing) ;;
+        *) continue ;;
+        esac
+        ;;
+  amd64)
+        ;;
+  esac
+
+  case "$xenarch" in
+  armhf) suite="wheezy";  guestsuite="wheezy";;
+  *)     suite=$defsuite; guestsuite=$defguestsuite;;
+  esac
+
+  if [ $suite != $defsuite ] ; then
+      suite_runvars="host_suite=$suite"
+  else
+      suite_runvars=
+  fi
+
+  case "$xenbranch" in
+  xen-3.*-testing)      onetoolstack=xend ;;
+  xen-4.0-testing)      onetoolstack=xend ;;
+  xen-4.1-testing)      onetoolstack=xend ;;
+  *)                    onetoolstack=xl ;;
+  esac
+
+  for kern in ''; do
+
+    case $kern in
+    '')
+                kernbuild=pvops
+                kernkind=pvops
+                ;;
+    *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
+    esac
+
+    for dom0arch in i386 amd64 armhf; do
+
+      case ${xenarch}_${dom0arch} in
+          amd64_amd64) ;;
+          amd64_i386) ;;
+          i386_i386) ;;
+          armhf_armhf) ;;
+          *) continue ;;
+      esac
+
+      eval "
+          arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
+      "
+
+      debian_runvars="debian_kernkind=$kernkind debian_arch=$dom0arch"
+      if [ $guestsuite != $defguestsuite ] ; then
+          debian_runvars="$debian_runvars debian_suite=$guestsuite"
+      fi
 
+      most_hostflags="arch-$dom0arch,arch-xen-$xenarch,suite-$suite,purpose-test"
+
+      most_runvars="
+                arch=$dom0arch                                  \
+                kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
+                kernkind=$kernkind                              \
+                $arch_runvars $suite_runvars
+                "
+
+      test_matrix_do_one
+    done
+  done
 done
 
 echo $flight
-- 
1.8.5.2
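[Editorial note, not part of the original mail: a mechanical detail worth noticing in the hunk above is the `continue` -> `return` change. Once the loop body moves into `test_matrix_do_one`, `continue` can no longer (portably) affect the caller's loop, so an early exit from one iteration is spelled `return` instead. A hedged sketch under simplified names:]

```shell
#!/bin/sh
# Why the hoisted loop body's 'continue' becomes 'return': inside a
# function, the early exit from one iteration is a function return.
do_one () {
  if [ "$dom0arch" = armhf ]; then
    echo "xl job only for $dom0arch"
    return                          # was 'continue' when this code was inline
  fi
  echo "full job set for $dom0arch"
}

# prints: full job set for i386 / full job set for amd64 /
#         xl job only for armhf
for dom0arch in i386 amd64 armhf; do
  do_one
done
```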


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWm-0006tn-E0; Wed, 22 Jan 2014 09:55:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWj-0006nv-KS
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:18 +0000
Received: from [85.158.143.35:34918] by server-3.bemta-4.messagelabs.com id
	4A/1D-32360-5859FD25; Wed, 22 Jan 2014 09:55:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390384503!385!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27063 invoked from network); 22 Jan 2014 09:55:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95230034"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:14 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-TG;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:55:01 +0000
Message-ID: <1390384501-20552-17-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 17/17] make-flight: refactor
	test_matrix_do_one
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pull some of the test creation steps out into their own subroutines. This
allows the overall level of indentation to be reduced.

It also allows us to invert the pre-condition test and simply return at the
top of the subroutine, which further reduces indentation.
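The guard-clause refactoring described above can be sketched in isolation. This is a hypothetical illustration (the function and echo body are invented, not taken from make-flight), showing why inverting the pre-condition saves an indent level:

```shell
# Before: the job-creation body is nested inside a pre-condition test.
do_example_tests_before () {
  if [ "$xenarch" = amd64 ] && [ "$dom0arch" = i386 ]; then
    echo "create test-$xenarch-$dom0arch"
  fi
}

# After: invert the pre-condition and return early, so the body sits
# at the top level of the subroutine with one less level of indentation.
do_example_tests_after () {
  if [ "$xenarch" != amd64 ] || [ "$dom0arch" != i386 ]; then
    return
  fi
  echo "create test-$xenarch-$dom0arch"
}

xenarch=amd64
dom0arch=i386
do_example_tests_before   # prints: create test-amd64-i386
do_example_tests_after    # prints the same line
```

Both forms are equivalent; the second simply moves the "do nothing" case to the top, which is what each of the new do_*_tests subroutines in this patch does.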
---
 make-flight | 201 +++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 118 insertions(+), 83 deletions(-)

diff --git a/make-flight b/make-flight
index 4a144ec..092a0cf 100755
--- a/make-flight
+++ b/make-flight
@@ -95,25 +95,13 @@ test_matrix_branch_filter_callback () {
   return 0
 }
 
-test_matrix_do_one () {
-  if [ $dom0arch = armhf ]; then
-      job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-            $xenarch $dom0arch                                        \
-            $debian_runvars all_hostflags=$most_hostflags
-      return
-  fi
-
-  job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-            $xenarch $dom0arch                                        \
-            $debian_runvars all_hostflags=$most_hostflags
-
-  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-            $xenarch $dom0arch                                        \
-            $debian_runvars all_hostflags=$most_hostflags
+do_freebsd_tests () {
 
-  if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
+  if [ $xenarch != amd64 -o $dom0arch != i386 -o "$kern" != "" ]; then
+    return
+  fi
 
-    for freebsdarch in amd64 i386; do
+  for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
                         test-freebsd xl $xenarch $dom0arch \
@@ -121,36 +109,20 @@ test_matrix_do_one () {
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
                         all_hostflags=$most_hostflags
 
-    done
-
-  fi
-
-    for qemuu_suffix in '' -qemut -qemuu; do
-      case "$qemuu_suffix" in
-      '')
-            qemuu_runvar=''
-            ;;
-      -qemut)
-            qemuu_runvar=device_model_version=qemu-xen-traditional
-            ;;
-      -qemuu)
-            case $xenbranch in
-            xen-3.*-testing) continue;;
-            xen-4.0-testing) continue;;
-            xen-4.1-testing) continue;;
-            esac
-            qemuu_runvar=device_model_version=qemu-xen
-            ;;
-      esac
+  done
+}
 
-    for vcpus in '' 1; do
-        case "$vcpus" in
-        '') vcpus_runvars=''; vcpus_suffix='' ;;
-        *) vcpus_runvars=guests_vcpus=$vcpus; vcpus_suffix=-vcpus$vcpus ;;
-        esac
+do_hvm_winxp_tests () {
+  for vcpus in '' 1; do
+    case "$vcpus" in
+      '') vcpus_runvars=''; vcpus_suffix='' ;;
+      *) vcpus_runvars=guests_vcpus=$vcpus; vcpus_suffix=-vcpus$vcpus ;;
+    esac
 
-        if [ "x$vcpus" = x ] || \
-           [ "$xenarch$kern-$dom0arch" = "amd64-i386" ]; then
+    if [ "x$vcpus" != x ] && \
+       [ "$xenarch$kern-$dom0arch" != "amd64-i386" ]; then
+      continue
+    fi
 
     stripy toolstack xend xl \
             "$vcpus" 1 \
@@ -160,85 +132,148 @@ test_matrix_do_one () {
 
     toolstack_runvars="toolstack=$toolstack"
 
-  job_create_test \
+    job_create_test \
             test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
             test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
             win_image=winxpsp3.iso $vcpus_runvars   \
             all_hostflags=$most_hostflags,hvm
 
-        fi
-    done
+  done
+}
 
-  if [ $xenarch = amd64 ]; then
+do_hvm_win7_x64_tests () {
+  if [ $xenarch != amd64 ]; then
+    return
+  fi
 
   job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
             test-win xl $xenarch $dom0arch $qemuu_runvar \
             win_image=win7-x64.iso \
             all_hostflags=$most_hostflags,hvm
+}
 
+do_hvm_rhel6_tests () {
+  if [ $xenarch != amd64 -o $dom0arch != i386 -o "$kern" != "" ]; then
+    return
   fi
 
-  if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
-
-    for cpuvendor in amd intel; do
+  for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
                                             test-rhelhvm xl $xenarch $dom0arch \
-            redhat_image=rhel-server-6.1-i386-dvd.iso               \
+            redhat_image=rhel-server-6.1-i386-dvd.iso \
             all_hostflags=$most_hostflags,hvm-$cpuvendor \
             $qemuu_runvar
 
-    done
+  done
+}
 
+do_sedf_tests () {
+  if [ $xenarch != amd64 -o $dom0arch != amd64 ]; then
+    return
   fi
 
-  done # qemuu_suffix
+  for pin in '' -pin; do
+    job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
+       test-debian xl $xenarch $dom0arch                      \
+            guests_vcpus=4                                    \
+            xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" \
+            linux_boot_append='loglevel=9 debug'              \
+            $debian_runvars all_hostflags=$most_hostflags
+  done
+}
 
-  job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-            $onetoolstack $xenarch $dom0arch \
-            !host !host_hostflags \
-            $debian_runvars \
-            all_hostflags=$most_hostflags,equiv-1
+do_credit2_tests () {
+  if [ $xenarch != amd64 -o $dom0arch != i386 ]; then
+    return
+  fi
 
-  if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
+  job_create_test test-$xenarch$kern-$dom0arch-xl-credit2             \
+       test-debian xl $xenarch $dom0arch                              \
+            guests_vcpus=4 xen_boot_append='sched=credit2'            \
+            $debian_runvars all_hostflags=$most_hostflags
+}
 
-   for pin in '' -pin; do
+do_passthrough_tests () {
+  if [ $xenarch != amd64 -o $dom0arch != amd64 -o "$kern" != "" ]; then
+    return
+  fi
 
-  job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
-       test-debian xl $xenarch $dom0arch \
-            guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
+  for cpuvendor in intel; do
+    job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel       \
+                    test-debian-nomigr xl $xenarch $dom0arch          \
+            guests_vcpus=4                                            \
+            $debian_runvars debian_pcipassthrough_nic=host            \
+            all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
+
+  done
+}
+
+test_matrix_do_one () {
+
+  # Basic PV Linux test with xl
+
+  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
+            $xenarch $dom0arch                                   \
             $debian_runvars all_hostflags=$most_hostflags
 
-   done
+  # No further arm tests at the moment
+  if [ $dom0arch = armhf ]; then
+      return
+  fi
 
+  # xend PV guest test on x86 only
+  if [ $dom0arch = "i386" -o $dom0arch = "amd64" ]; then
+    job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
+            $xenarch $dom0arch                                       \
+            $debian_runvars all_hostflags=$most_hostflags
   fi
 
-  if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
+  do_freebsd_tests
 
-  job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
-                    test-debian xl $xenarch $dom0arch guests_vcpus=4  \
-                    $debian_runvars all_hostflags=$most_hostflags
+  for qemuu_suffix in '' -qemut -qemuu; do
+    case "$qemuu_suffix" in
+    '')
+          qemuu_runvar=''
+          ;;
+    -qemut)
+          qemuu_runvar=device_model_version=qemu-xen-traditional
+          ;;
+    -qemuu)
+          case $xenbranch in
+          xen-3.*-testing) continue;;
+          xen-4.0-testing) continue;;
+          xen-4.1-testing) continue;;
+          esac
+          qemuu_runvar=device_model_version=qemu-xen
+          ;;
+    esac
 
-  job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-       test-debian xl $xenarch $dom0arch                              \
-            guests_vcpus=4 xen_boot_append='sched=credit2'            \
-            $debian_runvars all_hostflags=$most_hostflags
+    do_hvm_winxp_tests
+    do_hvm_win7_x64_tests
+    do_hvm_rhel6_tests
 
-  fi
+  done # qemuu_suffix
 
-  if [ $xenarch = amd64 -a $dom0arch = amd64 -a "$kern" = "" ]; then
+  # Test live migration
+  job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
+            $onetoolstack $xenarch $dom0arch \
+            !host !host_hostflags \
+            $debian_runvars \
+            all_hostflags=$most_hostflags,equiv-1
 
-    for cpuvendor in intel; do
+  do_sedf_tests
+  do_credit2_tests
 
-  job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                    test-debian-nomigr xl $xenarch $dom0arch          \
-            guests_vcpus=4                                            \
-            $debian_runvars debian_pcipassthrough_nic=host            \
-            all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
+  if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
 
-    done
+  job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
+                    test-debian xl $xenarch $dom0arch guests_vcpus=4  \
+                    $debian_runvars all_hostflags=$most_hostflags
 
   fi
+
+  do_passthrough_tests
 }
 
 test_matrix_iterate
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWs-00072R-It; Wed, 22 Jan 2014 09:55:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWq-0006yH-A6
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:24 +0000
Received: from [85.158.137.68:10053] by server-11.bemta-3.messagelabs.com id
	B5/5A-19379-8859FD25; Wed, 22 Jan 2014 09:55:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390384518!10600155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32320 invoked from network); 22 Jan 2014 09:55:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181367"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:17 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-E1;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:55 +0000
Message-ID: <1390384501-20552-11-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 11/17] make-flight: refactor
	job_create_test into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that job_create_test uses a callback, it can trivially be moved to
mfi-common.

Arguably the setting of *buildjob could also be a callback, but my intended use
case doesn't need that and it seems reasonable enough for now that users of the
common job_create_test also use the common create_build_jobs (or produce
something sufficiently similar).
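The callback arrangement that makes this move possible can be sketched as follows. This is a simplified, hypothetical version (the demo function name and echo body are invented): the shared helper lives in a common file and consults a filter callback that each sourcing script defines for itself.

```shell
# Per-flight policy: each make-flight-style script defines its own filter.
# Returning non-zero suppresses the job.
job_create_test_filter_callback () {
  case "$1" in
    *-skipme) return 1 ;;
  esac
  return 0
}

# Shared helper (as in mfi-common): bail out quietly if the filter says no,
# otherwise proceed with job creation.
job_create_test_demo () {
  job_create_test_filter_callback "$@" || return 0
  local job=$1; shift
  echo "would create job $job with args: $*"
}

job_create_test_demo test-amd64-amd64-xl xl   # created
job_create_test_demo test-amd64-skipme xl     # filtered out, prints nothing
```

Because the helper's only per-script dependency is the callback, the helper itself can sit in the common file unchanged.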

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 28 ----------------------------
 mfi-common  | 28 ++++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/make-flight b/make-flight
index 34c1ce4..8862be5 100755
--- a/make-flight
+++ b/make-flight
@@ -76,34 +76,6 @@ job_create_test_filter_callback () {
   return 0;
 }
 
-job_create_test () {
-        job_create_test_filter_callback "$@" || return 0
-
-        local job=$1; shift
-        local recipe=$1; shift
-        local toolstack=$1; shift
-        local xenarch=$1; shift
-        local dom0arch=$1; shift
-
-        xenbuildjob="${bfi}build-$xenarch"
-        buildjob="${bfi}build-$dom0arch"
-
-        case "$xenbranch:$toolstack" in
-        xen-3.*-testing:*) ;;
-        xen-4.0-testing:*) ;;
-        xen-4.1-testing:*) ;;
-        xen-4.2-testing:*) ;;
-        xen-4.3-testing:*) ;;
-        *:xend) xenbuildjob="$xenbuildjob-xend"
-                buildjob="${bfi}build-$dom0arch-xend"
-                ;;
-        esac
-
-        ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
-                $RUNVARS $TEST_RUNVARS $most_runvars                    \
-                xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
-}
-
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
   if [ "x$xenarch" = xdisable ]; then continue; fi
diff --git a/mfi-common b/mfi-common
index e7cae0a..373110b 100644
--- a/mfi-common
+++ b/mfi-common
@@ -189,6 +189,34 @@ create_build_jobs () {
   done
 }
 
+job_create_test () {
+  job_create_test_filter_callback "$@" || return 0
+
+  local job=$1; shift
+  local recipe=$1; shift
+  local toolstack=$1; shift
+  local xenarch=$1; shift
+  local dom0arch=$1; shift
+
+  xenbuildjob="${bfi}build-$xenarch"
+  buildjob="${bfi}build-$dom0arch"
+
+  case "$xenbranch:$toolstack" in
+    xen-3.*-testing:*) ;;
+    xen-4.0-testing:*) ;;
+    xen-4.1-testing:*) ;;
+    xen-4.2-testing:*) ;;
+    xen-4.3-testing:*) ;;
+    *:xend) xenbuildjob="$xenbuildjob-xend"
+            buildjob="${bfi}build-$dom0arch-xend"
+            ;;
+  esac
+
+  ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
+    $RUNVARS $TEST_RUNVARS $most_runvars                          \
+    xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWs-00072R-It; Wed, 22 Jan 2014 09:55:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWq-0006yH-A6
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:24 +0000
Received: from [85.158.137.68:10053] by server-11.bemta-3.messagelabs.com id
	B5/5A-19379-8859FD25; Wed, 22 Jan 2014 09:55:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390384518!10600155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32320 invoked from network); 22 Jan 2014 09:55:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181367"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:17 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-E1;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:55 +0000
Message-ID: <1390384501-20552-11-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 11/17] make-flight: refactor
	job_create_test into mfi-common
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that job_create_test uses a callback, it can trivially be moved to mfi-common.

Arguably the setting of *buildjob could also be a callback, but my intended use
case doesn't need that, and for now it seems reasonable to expect users of the
common job_create_test to also use the common create_build_jobs (or to provide
something sufficiently similar).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 28 ----------------------------
 mfi-common  | 28 ++++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/make-flight b/make-flight
index 34c1ce4..8862be5 100755
--- a/make-flight
+++ b/make-flight
@@ -76,34 +76,6 @@ job_create_test_filter_callback () {
   return 0;
 }
 
-job_create_test () {
-        job_create_test_filter_callback "$@" || return 0
-
-        local job=$1; shift
-        local recipe=$1; shift
-        local toolstack=$1; shift
-        local xenarch=$1; shift
-        local dom0arch=$1; shift
-
-        xenbuildjob="${bfi}build-$xenarch"
-        buildjob="${bfi}build-$dom0arch"
-
-        case "$xenbranch:$toolstack" in
-        xen-3.*-testing:*) ;;
-        xen-4.0-testing:*) ;;
-        xen-4.1-testing:*) ;;
-        xen-4.2-testing:*) ;;
-        xen-4.3-testing:*) ;;
-        *:xend) xenbuildjob="$xenbuildjob-xend"
-                buildjob="${bfi}build-$dom0arch-xend"
-                ;;
-        esac
-
-        ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
-                $RUNVARS $TEST_RUNVARS $most_runvars                    \
-                xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
-}
-
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
   if [ "x$xenarch" = xdisable ]; then continue; fi
diff --git a/mfi-common b/mfi-common
index e7cae0a..373110b 100644
--- a/mfi-common
+++ b/mfi-common
@@ -189,6 +189,34 @@ create_build_jobs () {
   done
 }
 
+job_create_test () {
+  job_create_test_filter_callback "$@" || return 0
+
+  local job=$1; shift
+  local recipe=$1; shift
+  local toolstack=$1; shift
+  local xenarch=$1; shift
+  local dom0arch=$1; shift
+
+  xenbuildjob="${bfi}build-$xenarch"
+  buildjob="${bfi}build-$dom0arch"
+
+  case "$xenbranch:$toolstack" in
+    xen-3.*-testing:*) ;;
+    xen-4.0-testing:*) ;;
+    xen-4.1-testing:*) ;;
+    xen-4.2-testing:*) ;;
+    xen-4.3-testing:*) ;;
+    *:xend) xenbuildjob="$xenbuildjob-xend"
+            buildjob="${bfi}build-$dom0arch-xend"
+            ;;
+  esac
+
+  ./cs-job-create $flight $job $recipe toolstack=$toolstack       \
+    $RUNVARS $TEST_RUNVARS $most_runvars                          \
+    xenbuildjob=$xenbuildjob buildjob=$buildjob "$@"
+}
+
 # Local variables:
 # mode: sh
 # sh-basic-offset: 2
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uWu-00074z-3j; Wed, 22 Jan 2014 09:55:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWr-0006xB-9B
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:25 +0000
Received: from [85.158.137.68:10285] by server-8.bemta-3.messagelabs.com id
	15/1A-31081-A859FD25; Wed, 22 Jan 2014 09:55:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390384518!10600155!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32625 invoked from network); 22 Jan 2014 09:55:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181388"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:20 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-MC;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:54:58 +0000
Message-ID: <1390384501-20552-14-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 14/17] make-flight: reduce indentation
	in test_matrix_do_one
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the body of the nested loops is in a function, it no longer needs to
be so deeply indented. The re-indentation was deferred from the previous patch
to keep that diff clear.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 198 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 99 insertions(+), 99 deletions(-)

diff --git a/make-flight b/make-flight
index 97421f2..177523b 100755
--- a/make-flight
+++ b/make-flight
@@ -96,24 +96,24 @@ test_matrix_branch_filter_callback () {
 }
 
 test_matrix_do_one () {
-      if [ $dom0arch = armhf ]; then
-          job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-                $xenarch $dom0arch                                        \
-                $debian_runvars all_hostflags=$most_hostflags
-          return
-      fi
+  if [ $dom0arch = armhf ]; then
+      job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
+            $xenarch $dom0arch                                        \
+            $debian_runvars all_hostflags=$most_hostflags
+      return
+  fi
 
-      job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
-                $xenarch $dom0arch                                        \
-                $debian_runvars all_hostflags=$most_hostflags
+  job_create_test test-$xenarch$kern-$dom0arch-pv test-debian xend \
+            $xenarch $dom0arch                                        \
+            $debian_runvars all_hostflags=$most_hostflags
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
-                $xenarch $dom0arch                                        \
-                $debian_runvars all_hostflags=$most_hostflags
+  job_create_test test-$xenarch$kern-$dom0arch-xl test-debian xl \
+            $xenarch $dom0arch                                        \
+            $debian_runvars all_hostflags=$most_hostflags
 
-      if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
+  if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
-        for freebsdarch in amd64 i386; do
+    for freebsdarch in amd64 i386; do
 
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
                         test-freebsd xl $xenarch $dom0arch \
@@ -121,124 +121,124 @@ test_matrix_do_one () {
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
                         all_hostflags=$most_hostflags
 
-        done
-
-      fi
+    done
 
-        for qemuu_suffix in '' -qemut -qemuu; do
-          case "$qemuu_suffix" in
-          '')
-                qemuu_runvar=''
-                ;;
-          -qemut)
-                qemuu_runvar=device_model_version=qemu-xen-traditional
-                ;;
-          -qemuu)
-                case $xenbranch in
-                xen-3.*-testing) continue;;
-                xen-4.0-testing) continue;;
-                xen-4.1-testing) continue;;
-                esac
-                qemuu_runvar=device_model_version=qemu-xen
-                ;;
-          esac
+  fi
 
-        for vcpus in '' 1; do
-            case "$vcpus" in
-            '') vcpus_runvars=''; vcpus_suffix='' ;;
-            *) vcpus_runvars=guests_vcpus=$vcpus; vcpus_suffix=-vcpus$vcpus ;;
+    for qemuu_suffix in '' -qemut -qemuu; do
+      case "$qemuu_suffix" in
+      '')
+            qemuu_runvar=''
+            ;;
+      -qemut)
+            qemuu_runvar=device_model_version=qemu-xen-traditional
+            ;;
+      -qemuu)
+            case $xenbranch in
+            xen-3.*-testing) continue;;
+            xen-4.0-testing) continue;;
+            xen-4.1-testing) continue;;
             esac
+            qemuu_runvar=device_model_version=qemu-xen
+            ;;
+      esac
 
-            if [ "x$vcpus" = x ] || \
-               [ "$xenarch$kern-$dom0arch" = "amd64-i386" ]; then
+    for vcpus in '' 1; do
+        case "$vcpus" in
+        '') vcpus_runvars=''; vcpus_suffix='' ;;
+        *) vcpus_runvars=guests_vcpus=$vcpus; vcpus_suffix=-vcpus$vcpus ;;
+        esac
 
-        stripy toolstack xend xl \
-                "$vcpus" 1 \
-                "$kern" '' \
-                "$xenarch" i386 \
-                "$dom0arch" i386
+        if [ "x$vcpus" = x ] || \
+           [ "$xenarch$kern-$dom0arch" = "amd64-i386" ]; then
 
-        toolstack_runvars="toolstack=$toolstack"
+    stripy toolstack xend xl \
+            "$vcpus" 1 \
+            "$kern" '' \
+            "$xenarch" i386 \
+            "$dom0arch" i386
 
-      job_create_test \
-                test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
-                test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
-                win_image=winxpsp3.iso $vcpus_runvars   \
-                all_hostflags=$most_hostflags,hvm
+    toolstack_runvars="toolstack=$toolstack"
 
-            fi
-        done
+  job_create_test \
+            test-$xenarch$kern-$dom0arch-$toolstack$qemuu_suffix-winxpsp3$vcpus_suffix \
+            test-win $toolstack $xenarch $dom0arch $qemuu_runvar \
+            win_image=winxpsp3.iso $vcpus_runvars   \
+            all_hostflags=$most_hostflags,hvm
 
-      if [ $xenarch = amd64 ]; then
+        fi
+    done
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
-                test-win xl $xenarch $dom0arch $qemuu_runvar \
-                win_image=win7-x64.iso \
-                all_hostflags=$most_hostflags,hvm
+  if [ $xenarch = amd64 ]; then
 
-      fi
+  job_create_test test-$xenarch$kern-$dom0arch-xl$qemuu_suffix-win7-amd64 \
+            test-win xl $xenarch $dom0arch $qemuu_runvar \
+            win_image=win7-x64.iso \
+            all_hostflags=$most_hostflags,hvm
+
+  fi
 
-      if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
+  if [ $xenarch = amd64 -a $dom0arch = i386 -a "$kern" = "" ]; then
 
-        for cpuvendor in amd intel; do
+    for cpuvendor in amd intel; do
 
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-rhel6hvm-$cpuvendor \
-                                                test-rhelhvm xl $xenarch $dom0arch \
-                redhat_image=rhel-server-6.1-i386-dvd.iso               \
-                all_hostflags=$most_hostflags,hvm-$cpuvendor \
-                $qemuu_runvar
+                                            test-rhelhvm xl $xenarch $dom0arch \
+            redhat_image=rhel-server-6.1-i386-dvd.iso               \
+            all_hostflags=$most_hostflags,hvm-$cpuvendor \
+            $qemuu_runvar
 
-        done
+    done
 
-      fi
+  fi
 
-      done # qemuu_suffix
+  done # qemuu_suffix
 
-      job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
-                $onetoolstack $xenarch $dom0arch \
-                !host !host_hostflags \
-                $debian_runvars \
-                all_hostflags=$most_hostflags,equiv-1
+  job_create_test test-$xenarch$kern-$dom0arch-pair test-pair \
+            $onetoolstack $xenarch $dom0arch \
+            !host !host_hostflags \
+            $debian_runvars \
+            all_hostflags=$most_hostflags,equiv-1
 
-      if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
+  if [ $xenarch = amd64 -a $dom0arch = amd64 ]; then
 
-       for pin in '' -pin; do
+   for pin in '' -pin; do
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
-           test-debian xl $xenarch $dom0arch \
-                guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
-                $debian_runvars all_hostflags=$most_hostflags
+  job_create_test test-$xenarch$kern-$dom0arch-xl-sedf$pin  \
+       test-debian xl $xenarch $dom0arch \
+            guests_vcpus=4 xen_boot_append="sched=sedf loglvl=all ${pin:+dom0_vcpus_pin}" linux_boot_append='loglevel=9 debug' \
+            $debian_runvars all_hostflags=$most_hostflags
 
-       done
+   done
 
-      fi
+  fi
 
-      if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
+  if [ $xenarch = amd64 -a $dom0arch = i386 ]; then
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
-                        test-debian xl $xenarch $dom0arch guests_vcpus=4  \
-                        $debian_runvars all_hostflags=$most_hostflags
+  job_create_test test-$xenarch$kern-$dom0arch-xl-multivcpu \
+                    test-debian xl $xenarch $dom0arch guests_vcpus=4  \
+                    $debian_runvars all_hostflags=$most_hostflags
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
-           test-debian xl $xenarch $dom0arch                              \
-                guests_vcpus=4 xen_boot_append='sched=credit2'            \
-                $debian_runvars all_hostflags=$most_hostflags
+  job_create_test test-$xenarch$kern-$dom0arch-xl-credit2  \
+       test-debian xl $xenarch $dom0arch                              \
+            guests_vcpus=4 xen_boot_append='sched=credit2'            \
+            $debian_runvars all_hostflags=$most_hostflags
 
-      fi
+  fi
 
-      if [ $xenarch = amd64 -a $dom0arch = amd64 -a "$kern" = "" ]; then
+  if [ $xenarch = amd64 -a $dom0arch = amd64 -a "$kern" = "" ]; then
 
-        for cpuvendor in intel; do
+    for cpuvendor in intel; do
 
-      job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
-                        test-debian-nomigr xl $xenarch $dom0arch          \
-                guests_vcpus=4                                            \
-                $debian_runvars debian_pcipassthrough_nic=host            \
-                all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
+  job_create_test test-$xenarch$kern-$dom0arch-xl-pcipt-intel \
+                    test-debian-nomigr xl $xenarch $dom0arch          \
+            guests_vcpus=4                                            \
+            $debian_runvars debian_pcipassthrough_nic=host            \
+            all_hostflags=$most_hostflags,hvm-$cpuvendor,pcipassthrough-nic
 
-        done
+    done
 
-      fi
+  fi
 }
 
 for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:55:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:55:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uX0-0007Ds-PB; Wed, 22 Jan 2014 09:55:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uWz-0007BJ-4R
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:55:33 +0000
Received: from [193.109.254.147:53248] by server-1.bemta-14.messagelabs.com id
	D3/0F-15600-4959FD25; Wed, 22 Jan 2014 09:55:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390384530!12439832!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19461 invoked from network); 22 Jan 2014 09:55:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:55:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93181439"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 09:55:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 04:55:30 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W5uWU-0005Rs-RI;
	Wed, 22 Jan 2014 09:55:02 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:55:00 +0000
Message-ID: <1390384501-20552-16-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 16/17] mfi-common: onetoolstack does not
	vary for a given $xenbranch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mfi-common b/mfi-common
index 5529979..60802ef 100644
--- a/mfi-common
+++ b/mfi-common
@@ -225,6 +225,14 @@ job_create_test () {
 # Provides various convenience variables for the callback.
 #
 test_matrix_iterate () {
+
+  case "$xenbranch" in
+  xen-3.*-testing)      onetoolstack=xend ;;
+  xen-4.0-testing)      onetoolstack=xend ;;
+  xen-4.1-testing)      onetoolstack=xend ;;
+  *)                    onetoolstack=xl ;;
+  esac
+
   for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
 
     if [ "x$xenarch" = xdisable ]; then continue; fi
@@ -267,13 +275,6 @@ test_matrix_iterate () {
         suite_runvars=
     fi
 
-    case "$xenbranch" in
-    xen-3.*-testing)      onetoolstack=xend ;;
-    xen-4.0-testing)      onetoolstack=xend ;;
-    xen-4.1-testing)      onetoolstack=xend ;;
-    *)                    onetoolstack=xl ;;
-    esac
-
     for kern in ''; do
 
       case $kern in
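The hoisting in this patch can be sketched in isolation. A minimal shell fragment (values here are illustrative, not osstest's real configuration) showing why a computation that depends only on $xenbranch belongs outside the per-arch loop:

```shell
#!/bin/sh
# onetoolstack depends only on $xenbranch, so compute it once up front
# rather than recomputing it on every pass of the arch loop.
xenbranch=xen-4.1-testing

case "$xenbranch" in
xen-3.*-testing|xen-4.0-testing|xen-4.1-testing) onetoolstack=xend ;;
*)                                               onetoolstack=xl ;;
esac

# Note ${TEST_ARCHES- ...} (no colon): the default list is substituted
# only when the variable is unset; TEST_ARCHES="" expands to nothing,
# which is how an empty setting can suppress the whole loop.
for xenarch in ${TEST_ARCHES- i386 amd64 armhf } ; do
  echo "$xenarch uses $onetoolstack"
done
```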
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 09:59:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 09:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uau-0000ur-4J; Wed, 22 Jan 2014 09:59:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5uas-0000uf-RP
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 09:59:35 +0000
Received: from [193.109.254.147:7028] by server-10.bemta-14.messagelabs.com id
	EA/19-20752-6869FD25; Wed, 22 Jan 2014 09:59:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390384772!8931306!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28264 invoked from network); 22 Jan 2014 09:59:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 09:59:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95230832"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 09:59:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 04:59:31 -0500
Message-ID: <1390384770.32519.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 09:59:30 +0000
In-Reply-To: <1390384419.32519.32.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up
 make-flight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 09:53 +0000, Ian Campbell wrote:
> The following series refactors make-flight to move some of the useful

Perhaps I should have just done this instead of the massive patch spam:

The following changes since commit c12c2ef4fb9de6bef51f52916b398b3e6b6c5382:

  ts-kernel-build: override kernel config from runvars (2014-01-10 16:48:59 +0000)

are available in the git repository at:

  git://xenbits.xen.org/people/ianc/osstest.git make-flight-refactor

for you to fetch changes up to dcf4332148275d332ad94181a5983023029b5556:

  make-flight: refactor test_matrix_do_one (2014-01-22 09:55:50 +0000)

----------------------------------------------------------------
Ian Campbell (17):
      make-flight: expand hard tabs
      make-flight: refactor common function "stripy" into helper library
      make-flight: Drop obsolete/unused xenrt_images variable.
      make-flight: refactor build job creation into mfi-common
      Remove support for building the XCP kernel
      mfi-common: Allow caller of create_build_jobs to include/exclude xend builds
      mfi-common: restrict scope of local vars in create_build_jobs
      mfi-common: fixup
      make-flight: Remove md5sum based job filtering
      make-flight: refactor job_create_test filters
      make-flight: refactor job_create_test into mfi-common
      make-flight: refactor test case filter over $branch
      make-flight: Separate matrix iteration from test job creation
      make-flight: reduce indentation in test_matrix_do_one
      make-flight: Refactor test matrix iteration into mfi-common
      mfi-common: onetoolstack does not vary for a given $xenbranch
      make-flight: refactor test_matrix_do_one

 cr-daily-branch   |   1 -
 cr-external-linux |   1 -
 make-flight       | 635 ++++++++++++++++--------------------------------------
 mfi-common        | 326 ++++++++++++++++++++++++++++
 4 files changed, 516 insertions(+), 447 deletions(-)
 create mode 100644 mfi-common
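The summary above is the standard output format of `git request-pull`; assuming the branch had already been pushed, it could have been produced with the three arguments quoted in the mail (start commit, repository URL, branch to pull). Shown as a dry composition here, since actually running it needs access to the quoted repository:

```shell
# The three arguments git request-pull takes, in order: the commit the
# series is based on, the URL of the repository to pull from, and the
# branch (end commit) to fetch up to.
start=c12c2ef4fb9de6bef51f52916b398b3e6b6c5382
url=git://xenbits.xen.org/people/ianc/osstest.git
branch=make-flight-refactor
cmd="git request-pull $start $url $branch"
echo "$cmd"
```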



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:02:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5udo-0001RA-9t; Wed, 22 Jan 2014 10:02:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5udm-0001R4-Fb
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:02:34 +0000
Received: from [193.109.254.147:13911] by server-8.bemta-14.messagelabs.com id
	6D/84-30921-9379FD25; Wed, 22 Jan 2014 10:02:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390384951!12434674!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31728 invoked from network); 22 Jan 2014 10:02:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:02:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93183180"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 10:02:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:02:30 -0500
Message-ID: <1390384949.32519.34.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 10:02:29 +0000
In-Reply-To: <21214.42116.463716.285955@mariner.uk.xensource.com>
References: <1390322740-961-1-git-send-email-ian.campbell@citrix.com>
	<21214.42116.463716.285955@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] standalone: Correct arguments to
 JobDB flight_create
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-21 at 16:47 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] standalone: Correct arguments to JobDB flight_create"):
> > The real jobdb takes $intended and $branch in the other order, meaning that
> > the standalone db ends up with them backwards.
> 
> *blrfhl*

Quite! I suppose it doesn't matter much in standalone mode because I
didn't actually see anything in practice; I just manually did 'select
branch from flights' and got "play"...

> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks, I've pushed this to pretest.

Ian.
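The bug being acked is a classic positional-argument swap; a minimal shell illustration (the function and values below are made up for this sketch, not osstest's actual jobdb code):

```shell
# A callee that expects (branch, intended) while the caller believes the
# order is (intended, branch): the call "works", but the stored fields
# end up silently transposed.
flight_create () {
    stored_branch=$1
    stored_intended=$2
}

# Caller passes (intended, branch) -- the wrong order for this callee.
flight_create "play" "xen-unstable"

# branch comes out as "play", which is the symptom seen above via
# 'select branch from flights'.
echo "branch=$stored_branch intended=$stored_intended"
```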



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:22:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uwR-0002mN-O9; Wed, 22 Jan 2014 10:21:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W5uwQ-0002mI-8d
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:21:50 +0000
Received: from [85.158.143.35:42245] by server-1.bemta-4.messagelabs.com id
	90/93-02132-DBB9FD25; Wed, 22 Jan 2014 10:21:49 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390386108!9739!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29878 invoked from network); 22 Jan 2014 10:21:48 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-21.messagelabs.com with SMTP;
	22 Jan 2014 10:21:48 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0MAKh7l026496
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 05:20:43 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-67.ams2.redhat.com
	[10.36.112.67])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s0MAKcra027324; Wed, 22 Jan 2014 05:20:39 -0500
Message-ID: <52DF9B76.8060807@redhat.com>
Date: Wed, 22 Jan 2014 11:20:38 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
In-Reply-To: <20140121182745.GA23328@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	=?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 21/01/2014 19:27, Wei Liu ha scritto:
>> >
>> > Googling "disable tcg" would have provided an answer, but the patches
>> > were old enough to be basically useless.  I'll refresh the current
>> > version in the next few days.  Currently I am (or try to be) on
>> > vacation, so I cannot really say when, but I'll do my best. :)
>> >
> Hi Paolo, any update?

Oops, sorry, I thought I had sent that out.  It's in the disable-tcg 
branch on my github repository.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:22:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5uwR-0002mN-O9; Wed, 22 Jan 2014 10:21:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W5uwQ-0002mI-8d
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:21:50 +0000
Received: from [85.158.143.35:42245] by server-1.bemta-4.messagelabs.com id
	90/93-02132-DBB9FD25; Wed, 22 Jan 2014 10:21:49 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390386108!9739!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29878 invoked from network); 22 Jan 2014 10:21:48 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-21.messagelabs.com with SMTP;
	22 Jan 2014 10:21:48 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0MAKh7l026496
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 05:20:43 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-67.ams2.redhat.com
	[10.36.112.67])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s0MAKcra027324; Wed, 22 Jan 2014 05:20:39 -0500
Message-ID: <52DF9B76.8060807@redhat.com>
Date: Wed, 22 Jan 2014 11:20:38 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389014715.19378.8.camel@hamster.uk.xensource.com>
	<alpine.DEB.2.02.1401061415140.8667@kaball.uk.xensource.com>
	<CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
In-Reply-To: <20140121182745.GA23328@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	=?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/2014 19:27, Wei Liu wrote:
>> >
>> > Googling "disable tcg" would have provided an answer, but the patches
>> > were old enough to be basically useless.  I'll refresh the current
>> > version in the next few days.  Currently I am (or try to be) on
>> > vacation, so I cannot really say when, but I'll do my best. :)
>> >
> Hi Paolo, any update?

Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
branch of my GitHub repository.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:31:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5v67-0003GH-8k; Wed, 22 Jan 2014 10:31:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5v65-0003GC-9Q
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:31:49 +0000
Received: from [85.158.143.35:36253] by server-1.bemta-4.messagelabs.com id
	81/17-02132-41E9FD25; Wed, 22 Jan 2014 10:31:48 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390386706!13140!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15214 invoked from network); 22 Jan 2014 10:31:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:31:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95238425"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 10:31:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:31:45 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W5v2l-00037z-3g;
	Wed, 22 Jan 2014 10:28:23 +0000
Message-ID: <52DF9D46.7030904@citrix.com>
Date: Wed, 22 Jan 2014 10:28:22 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
In-Reply-To: <52DFA2200200007800115B70@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 09:49, Jan Beulich wrote:
>>>> On 22.01.14 at 05:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> See attached (and relevant part inlined).
>> ...
>> (XEN) [2014-01-22 12:27:07] Xen call trace:
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] msix_capability_init+0x1dc/0x603
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
>> (XEN) [2014-01-22 12:27:07] 
>> (XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
> Considering the similarity, this is surely another incarnation of
> the same issue. Which gets me to ask first of all - is the device
> being acted upon an MSI-X capable one? If not, why is the call
> being made? If so (and Xen thinks differently) that's what
> needs fixing.
>
> On that basis I'm also going to ignore your patch for the first
> problem, Andrew: It's either incomplete or unnecessary or
> fixing the wrong thing.
>
> Jan
>

I am going to go with incomplete - it is certainly not unnecessary.  The
PCI device parameters to pci_prepare_msix() are completely guest
controlled; there is no validation of the SBDF at all.
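For context, an SBDF identifies a PCI device by segment, bus, device and
function.  A minimal sketch of the kind of bounds check being asked for
(a hypothetical helper, not the actual Xen code) could look like:

```c
#include <stdbool.h>

/* Hypothetical helper, not actual Xen code: reject guest-supplied
 * segment/bus/devfn values that fall outside their PCI field widths
 * (segment 16 bits, bus 8 bits, device 5 bits + function 3 bits). */
static bool sbdf_fields_valid(unsigned int seg, unsigned int bus,
                              unsigned int devfn)
{
    return seg <= 0xffff && bus <= 0xff && devfn <= 0xff;
}
```

Anything failing such a check would be rejected before the SBDF is used
to index host PCI state.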

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:41:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5vEk-00047k-Lm; Wed, 22 Jan 2014 10:40:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W5vEi-000456-V6
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:40:45 +0000
Received: from [85.158.137.68:6500] by server-12.bemta-3.messagelabs.com id
	92/41-20055-C20AFD25; Wed, 22 Jan 2014 10:40:44 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390387240!10585068!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24983 invoked from network); 22 Jan 2014 10:40:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:40:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95240230"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 10:40:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:40:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W5vEd-0003Ph-7V;
	Wed, 22 Jan 2014 10:40:39 +0000
Message-ID: <52DFA026.507@citrix.com>
Date: Wed, 22 Jan 2014 10:40:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DEB887.8070409@citrix.com>
	<52DF9FAA0200007800115B3B@nat28.tlf.novell.com>
In-Reply-To: <52DF9FAA0200007800115B3B@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 09:38, Jan Beulich wrote:
>>>> On 21.01.14 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> I have been giving nested virt a try, and have my first bug to report. 
>> This is still ongoing, and is by no means complete yet.
>>
>> Setup:
>> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>>
>> Single Intel Haswell SDP (Grantley platform):
>> Native hypervisor: XenServer
>>
>> Two L1 guests:
>>   XenServer (running with EPT)
>>   XenServer (running with shadow)
>>
>>
>> When attempting to create an L2 EPT HVM domain under an L1 shadow
>> domain, the L1 shadow domain is killed with:
>>
>> (XEN) <vm_launch_fail> error code 7
> Considering that 7 is "VM entry with invalid control field(s)", I think
> it would be quite helpful if we enhanced the error handling here to
> dump the VMCS.

Agreed.  I cannot find any further help from the hardware to identify
which control field(s) are invalid, so the best we appear to be able to
know is "at least one of these bits is wrong in the current context".

>
> Also - did you perhaps mean to Cc VMX folks on your original mail?
> Chances that they see your report without doing so are - according
> to my experience - rather slim...
>
> Jan
>
>

I wasn't really thinking that much - I had hoped to also try out
nested-virt on AMD, but have completely run out of time.

After 4.4 gets released, I will try to automate the environment setup,
and start investigating/reporting the encountered issues properly.

Until then, sadly, I have more important issues to work on.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:50:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5vOS-0005Lo-1w; Wed, 22 Jan 2014 10:50:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5vOP-0005Lj-3A
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:50:45 +0000
Received: from [85.158.137.68:53492] by server-2.bemta-3.messagelabs.com id
	25/88-17329-482AFD25; Wed, 22 Jan 2014 10:50:44 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390387841!10575979!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7090 invoked from network); 22 Jan 2014 10:50:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:50:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95241968"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 10:50:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:50:40 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5vOJ-0003ft-Ea;
	Wed, 22 Jan 2014 10:50:39 +0000
Message-ID: <1390387834.32296.1.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>
Date: Wed, 22 Jan 2014 10:50:34 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines (in mctelem_reserve)


        newhead = oldhead->mcte_next;
        if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy.  After you read the newhead pointer, another flow (a thread or
a recursive invocation) can change the whole list yet leave the head with
the same value.  oldhead is then still equal to *freelp, but the newhead
you install could point to any element, even one already in use.

This patch instead uses a bit array and atomic bit operations.

It uses unsigned long rather than the bitmap type, since testing for
all zeroes is easier.
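As a standalone illustration of the approach (a sketch using C11 atomics
and a GCC builtin, not the Xen primitives): a set bit means the element is
free, and an atomic fetch-and-clear tells us whether we won the race for
it, which avoids the ABA hazard of the pointer cmpxchg.

```c
#include <stdatomic.h>

/* One free mask; bit n set means element n is free. */
static _Atomic unsigned long free_mask = ~0UL;

/* Reserve a free element, or return -1 if none available. */
static int reserve_slot(void)
{
    unsigned long old = atomic_load(&free_mask);
    while (old != 0) {
        int bit = __builtin_ctzl(old);            /* lowest set bit */
        unsigned long prev =
            atomic_fetch_and(&free_mask, ~(1UL << bit));
        if (prev & (1UL << bit))
            return bit;                           /* we cleared it: ours */
        old = prev;                               /* lost the race; retry */
    }
    return -1;
}

/* Return an element to the free mask. */
static void release_slot(int bit)
{
    atomic_fetch_or(&free_mask, 1UL << bit);
}
```

Unlike the cmpxchg on the list head, the test here is on the individual
bit, so a concurrent reshuffle of other elements cannot make a stale
value look current.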

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/cpu/mcheck/mctelem.c |   52 ++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index 895ce1a..e56b6fb 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -69,6 +69,11 @@ struct mctelem_ent {
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+/* Check if we can fit enough bits in the free bit array */
+#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
+#error Too many elements
+#endif
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +82,9 @@ struct mctelem_ent {
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
-	 * Consumed entries are returned to the head of the free list.
-	 * When an entry is reserved off the free list it is not linked
-	 * on any list until it is committed or dismissed.
+	 * The free list is a bit array where a set bit means free.
+	 * As the element count is quite small, it is easy to
+	 * allocate atomically this way.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +104,7 @@ static struct mc_telem_ctl {
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	unsigned long mctc_free[MC_NCLASSES];
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
 	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+
+	/* set free in array */
+	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
 	}
 
 	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
 		tep->mcte_flags = MCTE_F_STATE_FREE;
@@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
-		tep->mcte_next = *tepp;
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
@@ -310,18 +315,21 @@ static int mctelem_drop_count;
 
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	unsigned long *freelp;
+	unsigned long oldfree;
+	unsigned bit;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
 	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
+		oldfree = *freelp;
+
+		if (oldfree == 0) {
 			if (which == MC_URGENT && target == MC_URGENT) {
 				/* raid the non-urgent freelist */
 				target = MC_NONURGENT;
@@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 			}
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/* try to allocate, atomically clear free bit */
+		bit = find_first_set_bit(oldfree);
+		if (test_and_clear_bit(bit, freelp)) {
+			/* return element we got */
+			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
 
 			mctelem_hold(tep);
 			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:50:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5vOS-0005Lo-1w; Wed, 22 Jan 2014 10:50:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5vOP-0005Lj-3A
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:50:45 +0000
Received: from [85.158.137.68:53492] by server-2.bemta-3.messagelabs.com id
	25/88-17329-482AFD25; Wed, 22 Jan 2014 10:50:44 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390387841!10575979!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7090 invoked from network); 22 Jan 2014 10:50:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:50:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="95241968"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 10:50:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:50:40 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5vOJ-0003ft-Ea;
	Wed, 22 Jan 2014 10:50:39 +0000
Message-ID: <1390387834.32296.1.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>
Date: Wed, 22 Jan 2014 10:50:34 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These lines (in mctelem_reserve)

        newhead = oldhead->mcte_next;
        if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After the newhead pointer is read, another flow (a thread or
a recursive invocation) can modify the whole list yet leave the head
with the same value, the classic ABA problem. oldhead then still equals
*freelp, so the cmpxchgptr succeeds, but the new head being installed
may point to any element, even one already in use.

This patch uses a bit array and atomic bit operations instead.

It uses a plain unsigned long rather than the bitmap type, as testing
for all zeroes is easier.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/cpu/mcheck/mctelem.c |   52 ++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index 895ce1a..e56b6fb 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -69,6 +69,11 @@ struct mctelem_ent {
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+/* Check if we can fit enough bits in the free bit array */
+#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
+#error Too many elements
+#endif
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +82,9 @@ struct mctelem_ent {
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
-	 * Consumed entries are returned to the head of the free list.
-	 * When an entry is reserved off the free list it is not linked
-	 * on any list until it is committed or dismissed.
+	 * The free list is a bit array in which a set bit means free.
+	 * Since the element count is quite small, allocation can
+	 * easily be done atomically that way.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +104,7 @@ static struct mc_telem_ctl {
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	unsigned long mctc_free[MC_NCLASSES];
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
 	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+
+	/* set free in array */
+	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
 	}
 
 	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
 		tep->mcte_flags = MCTE_F_STATE_FREE;
@@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
 		tep->mcte_data = datarr + i * datasz;
 
 		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
+			tep->mcte_flags = MCTE_F_HOME_URGENT;
 		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
+			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
+			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
 		}
 
-		tep->mcte_next = *tepp;
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
@@ -310,18 +315,21 @@ static int mctelem_drop_count;
 
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
+	unsigned long *freelp;
+	unsigned long oldfree;
+	unsigned bit;
 	mctelem_class_t target = (which == MC_URGENT) ?
 	    MC_URGENT : MC_NONURGENT;
 
 	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
+		oldfree = *freelp;
+
+		if (oldfree == 0) {
 			if (which == MC_URGENT && target == MC_URGENT) {
 				/* raid the non-urgent freelist */
 				target = MC_NONURGENT;
@@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 			}
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/* try to allocate, atomically clear free bit */
+		bit = find_first_set_bit(oldfree);
+		if (test_and_clear_bit(bit, freelp)) {
+			/* return element we got */
+			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
 
 			mctelem_hold(tep);
 			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:57:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5vUO-0005Un-09; Wed, 22 Jan 2014 10:56:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5vUM-0005Ui-Ou
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 10:56:54 +0000
Received: from [85.158.137.68:42949] by server-15.bemta-3.messagelabs.com id
	EC/82-11556-5F3AFD25; Wed, 22 Jan 2014 10:56:53 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390388211!6967668!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14926 invoked from network); 22 Jan 2014 10:56:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:56:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,698,1384300800"; d="scan'208";a="93196464"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 10:56:51 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:56:50 -0500
Message-ID: <52DFA3F1.4030303@citrix.com>
Date: Wed, 22 Jan 2014 10:56:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Frediano Ziglio <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1390387834.32296.1.camel@hamster.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 10:50, Frediano Ziglio wrote:
> These lines (in mctelem_reserve)
> 
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After the newhead pointer is read, another flow (a thread or
> a recursive invocation) can modify the whole list yet leave the head
> with the same value, the classic ABA problem. oldhead then still equals
> *freelp, so the cmpxchgptr succeeds, but the new head being installed
> may point to any element, even one already in use.
> 
> This patch uses a bit array and atomic bit operations instead.
> 
> It uses a plain unsigned long rather than the bitmap type, as testing
> for all zeroes is easier.

bitmap_zero() does what you want.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 10:58:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 10:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5vVW-0005bO-L5; Wed, 22 Jan 2014 10:58:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5vVV-0005bH-7y
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 10:58:05 +0000
Received: from [85.158.143.35:25726] by server-1.bemta-4.messagelabs.com id
	1D/7B-02132-C34AFD25; Wed, 22 Jan 2014 10:58:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390388282!18915!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1939 invoked from network); 22 Jan 2014 10:58:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 10:58:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93196688"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 10:58:02 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 05:58:01 -0500
Message-ID: <52DFA438.6050103@citrix.com>
Date: Wed, 22 Jan 2014 10:58:00 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 20:22, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this

I would prune this version information out of the commit message.

> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

But I think this needs Stefano's ack.

It would be useful to get this into 3.14 if possible.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:07:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5veh-0006TO-1j; Wed, 22 Jan 2014 11:07:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5vef-0006TJ-Pv
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:07:34 +0000
Received: from [85.158.137.68:27349] by server-3.bemta-3.messagelabs.com id
	3B/66-10658-576AFD25; Wed, 22 Jan 2014 11:07:33 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390388850!9821707!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3151 invoked from network); 22 Jan 2014 11:07:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:07:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93198716"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 11:07:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 06:07:23 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5vXf-0003ph-NS;
	Wed, 22 Jan 2014 11:00:19 +0000
Message-ID: <1390388414.32296.4.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 22 Jan 2014 11:00:14 +0000
In-Reply-To: <52DFA3F1.4030303@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFA3F1.4030303@citrix.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 10:56 +0000, David Vrabel wrote:
> On 22/01/14 10:50, Frediano Ziglio wrote:
> > These lines (in mctelem_reserve)
> > 
> > 
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > 
> > are racy. After the newhead pointer is read, another flow (a thread
> > or a recursive invocation) can modify the whole list yet leave the
> > head with the same value, the classic ABA problem. oldhead then still
> > equals *freelp, so the cmpxchgptr succeeds, but the new head being
> > installed may point to any element, even one already in use.
> > 
> > This patch uses a bit array and atomic bit operations instead.
> > 
> > It uses a plain unsigned long rather than the bitmap type, as testing
> > for all zeroes is easier.
> 
> bitmap_zero() does what you want.
> 
> David

No, bitmap_zero fills the bitmap with zeroes; it does not check for
zeroes.

Actually I read the bitmap, check whether it is all zeroes (i.e. whether
any item is free), and then use find_first_set_bit to get the number of
a bit that is set to 1.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:07:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5veh-0006TO-1j; Wed, 22 Jan 2014 11:07:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W5vef-0006TJ-Pv
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:07:34 +0000
Received: from [85.158.137.68:27349] by server-3.bemta-3.messagelabs.com id
	3B/66-10658-576AFD25; Wed, 22 Jan 2014 11:07:33 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390388850!9821707!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3151 invoked from network); 22 Jan 2014 11:07:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:07:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93198716"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 11:07:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 06:07:23 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W5vXf-0003ph-NS;
	Wed, 22 Jan 2014 11:00:19 +0000
Message-ID: <1390388414.32296.4.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 22 Jan 2014 11:00:14 +0000
In-Reply-To: <52DFA3F1.4030303@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFA3F1.4030303@citrix.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 10:56 +0000, David Vrabel wrote:
> On 22/01/14 10:50, Frediano Ziglio wrote:
> > These lines (in mctelem_reserve)
> > 
> > 
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > 
> > are racy. After you read the newhead pointer, another flow (a thread
> > or a recursive invocation) can change the whole list yet leave the
> > head with the same value. oldhead then still equals *freelp, but you
> > install a new head that could point to any element (even one already
> > in use).
> > 
> > This patch uses a bit array and atomic bit operations instead.
> > 
> > It actually uses an unsigned long rather than the bitmap type, as
> > testing for all zeroes is easier.
> 
> bitmap_zero() does what you want.
> 
> David

No, bitmap_zero fills the bitmap with zeroes; it does not check for zeroes.

Actually I read the bitmap, check whether it is all zeroes (i.e. whether
some item is free), and then use find_first_set_bit to get the number of
a bit that is set to 1.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wDd-00009U-0g; Wed, 22 Jan 2014 11:43:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5wDc-00009B-1n
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:43:40 +0000
Received: from [85.158.143.35:61836] by server-2.bemta-4.messagelabs.com id
	18/12-11386-AEEAFD25; Wed, 22 Jan 2014 11:43:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390391016!35085!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20333 invoked from network); 22 Jan 2014 11:43:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:43:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93206128"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 11:43:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 06:43:14 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wDB-000611-KP;
	Wed, 22 Jan 2014 11:43:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wD9-0000N7-TU;
	Wed, 22 Jan 2014 11:43:11 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21215.44750.682786.512983@mariner.uk.xensource.com>
Date: Wed, 22 Jan 2014 11:43:10 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390384501-20552-6-git-send-email-ian.campbell@citrix.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
	<1390384501-20552-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST 06/17] mfi-common: Allow caller of
	create_build_jobs to include/exclude xend builds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST 06/17] mfi-common: Allow caller of create_build_jobs to include/exclude xend builds"):
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> +    if [ -z "$WANT_XEND" ]; then
> +      want_xend=$WANT_XEND
> +    else

If $WANT_XEND is the empty string, also set want_xend to "" ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:44:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wE7-0000CU-FU; Wed, 22 Jan 2014 11:44:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5wE6-0000CJ-A6
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:44:10 +0000
Received: from [85.158.143.35:45728] by server-1.bemta-4.messagelabs.com id
	9B/88-02132-90FAFD25; Wed, 22 Jan 2014 11:44:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390391047!34267!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18419 invoked from network); 22 Jan 2014 11:44:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:44:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="95253091"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 11:44:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 06:44:06 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wE2-00061N-E6;
	Wed, 22 Jan 2014 11:44:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wE0-0000NF-Qh;
	Wed, 22 Jan 2014 11:44:04 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21215.44803.743906.818798@mariner.uk.xensource.com>
Date: Wed, 22 Jan 2014 11:44:03 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390384501-20552-8-git-send-email-ian.campbell@citrix.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
	<1390384501-20552-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST 08/17] mfi-common: fixup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST 08/17] mfi-common: fixup"):
> -    if [ -z "$WANT_XEND" ]; then
> +    if [ -n "$WANT_XEND" ]; then

Ah :-)
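
[For reference, the test semantics behind the fix: `[ -z ]` is true for
an empty string and `[ -n ]` for a non-empty one, so the original
condition copied the caller's value only when it was empty. A minimal
sketch follows; the `else` default shown is hypothetical, only the
corrected test itself comes from the patch.]

```shell
# -z is true when the string is EMPTY; -n is true when it is NON-empty.
# Gating on -z therefore took the caller's WANT_XEND precisely when it
# carried no information; -n applies the caller's explicit choice.
WANT_XEND=y
if [ -n "$WANT_XEND" ]; then
  want_xend=$WANT_XEND   # caller's explicit choice
else
  want_xend=true         # hypothetical default when unset
fi
echo "$want_xend"        # y
```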

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:46:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wGN-0000Ru-AA; Wed, 22 Jan 2014 11:46:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5wGM-0000Rf-An
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:46:30 +0000
Received: from [85.158.139.211:37584] by server-5.bemta-5.messagelabs.com id
	0D/97-14928-59FAFD25; Wed, 22 Jan 2014 11:46:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390391187!11266319!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17965 invoked from network); 22 Jan 2014 11:46:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:46:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="95253990"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 11:46:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 06:46:26 -0500
Message-ID: <1390391185.32519.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 11:46:25 +0000
In-Reply-To: <21215.44803.743906.818798@mariner.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
	<1390384501-20552-8-git-send-email-ian.campbell@citrix.com>
	<21215.44803.743906.818798@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST 08/17] mfi-common: fixup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 11:44 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST 08/17] mfi-common: fixup"):
> > -    if [ -z "$WANT_XEND" ]; then
> > +    if [ -n "$WANT_XEND" ]; then
> 
> Ah :-)

Um, oops, sorry ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 11:58:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 11:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wRY-00018B-Pe; Wed, 22 Jan 2014 11:58:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5wRW-000186-1o
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 11:58:03 +0000
Received: from [193.109.254.147:41347] by server-16.bemta-14.messagelabs.com
	id F9/CF-20600-942BFD25; Wed, 22 Jan 2014 11:58:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390391879!10161407!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 816 invoked from network); 22 Jan 2014 11:58:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 11:58:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="95256336"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 11:57:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 06:57:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wRS-00065x-7t;
	Wed, 22 Jan 2014 11:57:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W5wRQ-0000PQ-HL;
	Wed, 22 Jan 2014 11:57:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21215.45634.990441.64413@mariner.uk.xensource.com>
Date: Wed, 22 Jan 2014 11:57:54 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390384770.32519.33.camel@kazak.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
	<1390384770.32519.33.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up
 make-flight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up make-flight"):
> On Wed, 2014-01-22 at 09:53 +0000, Ian Campbell wrote:
> > The following series refactors make-flight to move some of the useful
> 
> Perhaps I should have just done this instead of the massive patch spam:

Thanks.

Well, I've read some of the meat and all of the commit messages and it
looks reasonable.  I suggest you push it after I've had that other go
at the switch to wheezy.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 12:03:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 12:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wWe-0001on-3o; Wed, 22 Jan 2014 12:03:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W5wWc-0001of-HA
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 12:03:18 +0000
Received: from [85.158.139.211:54126] by server-6.bemta-5.messagelabs.com id
	91/FC-16310-583BFD25; Wed, 22 Jan 2014 12:03:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390392195!11272550!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20684 invoked from network); 22 Jan 2014 12:03:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 12:03:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93211930"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 12:03:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 07:03:14 -0500
Message-ID: <1390392193.32519.39.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 12:03:13 +0000
In-Reply-To: <21215.45634.990441.64413@mariner.uk.xensource.com>
References: <1390384419.32519.32.camel@kazak.uk.xensource.com>
	<1390384770.32519.33.camel@kazak.uk.xensource.com>
	<21215.45634.990441.64413@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up
 make-flight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 11:57 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH OSSTEST 00/17] refactor and clean up make-flight"):
> > On Wed, 2014-01-22 at 09:53 +0000, Ian Campbell wrote:
> > > The following series refactors make-flight to move some of the useful
> > 
> > Perhaps I should have just done this instead of the massive patch spam:
> 
> Thanks.
> 
> Well, I've read some of the meat and all of the commit messages and it
> looks reasonable.  I suggest you push it after I've had that other go
> > at the switch to wheezy.

OK. I'll keep an eye out for the results of that.

Thanks.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 12:08:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 12:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wbw-0002CU-0p; Wed, 22 Jan 2014 12:08:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5wbu-0002CI-0B
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 12:08:46 +0000
Received: from [85.158.143.35:57292] by server-1.bemta-4.messagelabs.com id
	33/17-02132-DC4BFD25; Wed, 22 Jan 2014 12:08:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390392524!41373!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5716 invoked from network); 22 Jan 2014 12:08:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 12:08:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 12:08:43 +0000
Message-Id: <52DFC2DA0200007800115C79@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 12:08:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
In-Reply-To: <52DF9D46.7030904@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.01.14 at 11:28, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 22/01/14 09:49, Jan Beulich wrote:
>>>>> On 22.01.14 at 05:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>>> See attached (and relevant part inlined).
>>> ...
>>> (XEN) [2014-01-22 12:27:07] Xen call trace:
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] 
> msix_capability_init+0x1dc/0x603
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
>>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
>>> (XEN) [2014-01-22 12:27:07] 
>>> (XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
>> Considering the similarity, this is surely another incarnation of
>> the same issue. Which gets me to ask first of all - is the device
>> being acted upon an MSI-X capable one? If not, why is the call
>> being made? If so (and Xen thinks differently) that's what
>> needs fixing.
>>
>> On that basis I'm also going to ignore your patch for the first
>> problem, Andrew: It's either incomplete or unnecessary or
>> fixing the wrong thing.
> 
> I am going to go with incomplete - it is certainly not unnecessary.  The
> PCI device parameters to pci_prepare_msix() are completely guest
> controlled; there is no validation of the SBDF at all.

"Fixing the wrong thing" presumably, after taking a closer look at
Konrad's second crash: The device in question really appears to
be MSI-X capable, yet alloc_pdev() didn't recognize it as such. I
wonder whether the capability gets displayed/hidden dynamically
based on some other enabling the driver may be doing on the
device. In which case we'd need to allocate the structure on
demand.

But of course I'd like to first have confirmation that that's really
what is happening here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 12:09:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 12:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wcx-0002XH-FK; Wed, 22 Jan 2014 12:09:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W5wcw-0002XA-Bx
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 12:09:50 +0000
Received: from [85.158.143.35:11968] by server-2.bemta-4.messagelabs.com id
	51/D3-11386-D05BFD25; Wed, 22 Jan 2014 12:09:49 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390392587!42955!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23433 invoked from network); 22 Jan 2014 12:09:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 12:09:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="95260229"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 12:09:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 07:09:46 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W5wcr-0005Ac-I5;
	Wed, 22 Jan 2014 12:09:45 +0000
Date: Wed, 22 Jan 2014 12:09:45 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140122120945.GA24675@zion.uk.xensource.com>
References: <CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
	<52DF9B76.8060807@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52DF9B76.8060807@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
> On 21/01/2014 19:27, Wei Liu wrote:
> >>>
> >>> Googling "disable tcg" would have provided an answer, but the patches
> >>> were old enough to be basically useless.  I'll refresh the current
> >>> version in the next few days.  Currently I am (or try to be) on
> >>> vacation, so I cannot really say when, but I'll do my best. :)
> >>>
> >Hi Paolo, any update?
> 
> Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
> branch on my github repository.
> 

Thanks. I will have a look.

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 12:12:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 12:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wfv-0002ij-5D; Wed, 22 Jan 2014 12:12:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5wft-0002ia-JS
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 12:12:53 +0000
Received: from [193.109.254.147:15728] by server-8.bemta-14.messagelabs.com id
	0E/2C-30921-4C5BFD25; Wed, 22 Jan 2014 12:12:52 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390392769!10166180!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18290 invoked from network); 22 Jan 2014 12:12:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 12:12:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93215880"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 12:12:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 07:12:25 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5wfR-0005Dd-AN;
	Wed, 22 Jan 2014 12:12:25 +0000
Date: Wed, 22 Jan 2014 12:11:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390384248.32519.30.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401221208500.21510@kaball.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140121220355.GA6557@phenom.dumpdata.com>
	<1390384248.32519.30.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, stefano.panella@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-21 at 17:03 -0500, Konrad Rzeszutek Wilk wrote:
> 
> (nb: I posted a v3 at
> http://article.gmane.org/gmane.linux.ports.arm.kernel/295594
> )
> 
> > On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> > > The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> > > causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the physical address.
> > > 
> > > This can happen in practice on ARM where foreign pages can be above 4GB even
> > > though the local kernel does not have LPAE page tables enabled (which is
> > > totally reasonable if the guest does not itself have >4GB of RAM). In this
> > > case the kernel still maps the foreign pages at a phys addr below 4G (as it
> > > must) but the resulting DMA address (returned by the grant map operation) is
> > > much higher.
> > > 
> > > This is analogous to a hardware device which has its view of RAM mapped up
> > > high for some reason.
> > > 
> > > This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> > > systems with more than 4GB of RAM.
> > 
> > There was another patch posted by somebody from Citrix for a fix on 
> > 32-bit x86 dom0 with more than 4GB of RAM (for x86 platforms).
> > 
> > Their fix was in the generic parts of code. Changing most of the 'unsigned'
> > to 'phys_addr_t' or such. Is his patch better or will this patch replace his?
> 
> I believe they are orthogonal, or at least I'm not (yet) hitting the
> same issue as Stefano P, the alloc coherent code paths are not involved
> in the issue I'm seeing because it involves foreign pages whose
> MFN/dma_addr is very high, not DMA to devices which are up high.

Yes, the two issues are orthogonal.
It is worth noting that the problem reported by StefanoP is not fatal:
it should just cause more bouncing on the swiotlb buffer than is
strictly necessary (dma_mask gets truncated).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, stefano.panella@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-21 at 17:03 -0500, Konrad Rzeszutek Wilk wrote:
> 
> (nb: I posted a v3 at
> http://article.gmane.org/gmane.linux.ports.arm.kernel/295594
> )
> 
> > On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> > > The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> > > causes us to lose the top bits of the DMA address if the size of a DMA
> > > address is not the same as the size of the physical address.
> > > 
> > > This can happen in practice on ARM where foreign pages can be above 4GB even
> > > though the local kernel does not have LPAE page tables enabled (which is
> > > totally reasonable if the guest does not itself have >4GB of RAM). In this
> > > case the kernel still maps the foreign pages at a phys addr below 4G (as it
> > > must) but the resulting DMA address (returned by the grant map operation) is
> > > much higher.
> > > 
> > > This is analogous to a hardware device which has its view of RAM mapped up
> > > high for some reason.
> > > 
> > > This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> > > systems with more than 4GB of RAM.
> > 
> > There was another patch posted by somebody from Citrix for a fix on
> > 32-bit x86 dom0 with more than 4GB of RAM.
> > 
> > Their fix was in the generic parts of the code, changing most of the 'unsigned'
> > types to 'phys_addr_t' or such. Is their patch better, or will this patch replace it?
> 
> I believe they are orthogonal, or at least I'm not (yet) hitting the
> same issue as Stefano P: the alloc coherent code paths are not involved
> in the issue I'm seeing, because it involves foreign pages whose
> MFN/dma_addr is very high, not DMA to devices which are mapped up high.

Yes, the two issues are orthogonal.
It is worth noting that the problem reported by StefanoP is not fatal:
it should just cause more bouncing on the swiotlb buffer than is
strictly necessary (dma_mask gets truncated).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 12:21:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 12:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5wns-0003Ss-9c; Wed, 22 Jan 2014 12:21:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W5wnr-0003Sm-N0
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 12:21:07 +0000
Received: from [85.158.137.68:43469] by server-5.bemta-3.messagelabs.com id
	02/C0-25188-2B7BFD25; Wed, 22 Jan 2014 12:21:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390393264!10614521!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4529 invoked from network); 22 Jan 2014 12:21:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 12:21:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 22 Jan 2014 12:21:03 +0000
Message-Id: <52DFC5BC0200007800115C92@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 22 Jan 2014 12:21:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1390387834.32296.1.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.01.14 at 11:50, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> These lines (in mctelem_reserve)
> 
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After you read the newhead pointer, it can happen that another
> flow (thread or recursive invocation) changes the entire list but sets the
> head back to the same value. So oldhead is the same as *freelp, but you are
> setting a new head that could point to any element (even one already in use).
> 
> This patch instead uses a bit array and atomic bit operations.
> 
> It actually uses unsigned long instead of the bitmap type, as testing for
> all zeroes is easier.

Except you never test for all zeroes. All of the operations you use
(with the exception of find_first_set_bit(), which would need
replacing by find_first_bit()) are suitable to be used on bitmaps, so
I really don't see why this can't be a proper bitmap.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 13:05:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 13:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5xUd-0005cC-29; Wed, 22 Jan 2014 13:05:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W5xUb-0005c7-SO
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 13:05:18 +0000
Received: from [85.158.139.211:60536] by server-15.bemta-5.messagelabs.com id
	71/23-08490-D02CFD25; Wed, 22 Jan 2014 13:05:17 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390395915!11282403!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8830 invoked from network); 22 Jan 2014 13:05:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 13:05:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93229892"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 13:05:13 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 08:05:13 -0500
Message-ID: <52DFC207.3000805@citrix.com>
Date: Wed, 22 Jan 2014 13:05:11 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Frediano Ziglio <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>	
	<52DFA3F1.4030303@citrix.com>
	<1390388414.32296.4.camel@hamster.uk.xensource.com>
In-Reply-To: <1390388414.32296.4.camel@hamster.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 11:00, Frediano Ziglio wrote:
> On Wed, 2014-01-22 at 10:56 +0000, David Vrabel wrote:
>> On 22/01/14 10:50, Frediano Ziglio wrote:
>>> These lines (in mctelem_reserve)
>>>
>>>
>>>         newhead = oldhead->mcte_next;
>>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>>
>>> are racy. After you read the newhead pointer, it can happen that another
>>> flow (thread or recursive invocation) changes the entire list but sets the
>>> head back to the same value. So oldhead is the same as *freelp, but you are
>>> setting a new head that could point to any element (even one already in use).
>>>
>>> This patch instead uses a bit array and atomic bit operations.
>>>
>>> It actually uses unsigned long instead of the bitmap type, as testing for
>>> all zeroes is easier.
>>
>> bitmap_zero() does what you want.
>>
>> David
> 
> No, bitmap_zero() fills the bitmap with zeroes; it does not check for them.

bitmap_empty() sorry.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 13:23:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 13:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5xm8-0006gp-6f; Wed, 22 Jan 2014 13:23:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W5xm7-0006gk-AR
	for xen-devel@lists.xensource.com; Wed, 22 Jan 2014 13:23:23 +0000
Received: from [85.158.143.35:23114] by server-2.bemta-4.messagelabs.com id
	06/53-11386-A46CFD25; Wed, 22 Jan 2014 13:23:22 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390397000!62367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29499 invoked from network); 22 Jan 2014 13:23:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 13:23:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,699,1384300800"; d="scan'208";a="93235988"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 13:23:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 08:23:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W5xm3-0006Zq-1V;
	Wed, 22 Jan 2014 13:23:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W5xm2-0007gY-SM;
	Wed, 22 Jan 2014 13:23:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24457-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 13:23:18 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24457: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24457 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24457/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24448

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass

version targeted for testing:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
baseline version:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 14:32:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 14:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5yqr-0001qa-Dr; Wed, 22 Jan 2014 14:32:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W5yqp-0001qV-Ab
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 14:32:19 +0000
Received: from [85.158.139.211:46278] by server-2.bemta-5.messagelabs.com id
	C1/9B-29392-276DFD25; Wed, 22 Jan 2014 14:32:18 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390401137!11111835!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6452 invoked from network); 22 Jan 2014 14:32:18 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 14:32:18 -0000
Received: by mail-ee0-f44.google.com with SMTP id c13so4968224eek.3
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 06:32:17 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2B+/2ys3J2EgmdP3oNFKOE2jtSJUKsRomGT07Y+KRQo=;
	b=le15W1ddXO9v3jsm3XzE+oMLlNMffYc10OwVZGmQSWbRf+MlxTZK5FxmEeWqYLWWGg
	Ys4Buap6MQFukN3U6piyW+vdv0puyuKzYi2TuHq/yVM0331qknhZjQ2MS7p+htkSgar2
	k5Ub4saTpAP2ZE/Lt6q2RmvXAfs/u7QUJwcsH90jrbVfJD86GLMorsMsMeFPX1D86CK+
	638vsJx47u16knD5WyKDyOZs3w+vSk6RPiTRmXQk+xnTDV/i/WRyBCFucKmffsTNAGOQ
	VhjDUkyyhrv+7yiVl/cjuZZLH72DicmnIX2vOdIyXXFagplRJzU0HP/La+r0UpR3hK6S
	7rZw==
X-Gm-Message-State: ALoCoQm/K9PGbnFs+ZxiE63brVM5LEnvbB3V9NOerZGDKUeMjQKp5b8a13BNyv8N7SafHEzZBSoM
X-Received: by 10.14.174.193 with SMTP id x41mr303978eel.87.1390401137595;
	Wed, 22 Jan 2014 06:32:17 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id m47sm27765187eey.7.2014.01.22.06.32.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 06:32:16 -0800 (PST)
Message-ID: <52DFD674.80009@m2r.biz>
Date: Wed, 22 Jan 2014 15:32:20 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, 
	"Gonglei (Arei)" <arei.gonglei@huawei.com>
References: <33183CC9F5247A488A2544077AF190208159E77D@SZXEMA503-MBS.china.huawei.com>
	<525E7E7A02000078000FB627@nat28.tlf.novell.com>
	<1381917877.21901.62.camel@kazak.uk.xensource.com>
	<871u3lmqwt.fsf@blackfin.pond.sub.org>
	<1381921102.21901.73.camel@kazak.uk.xensource.com><1381921102.21901.73.camel@kazak.uk.xensource.com>
	(Ian Campbell's message of "Wed,
	16 Oct 2013 11:58:22 +0100") <87txghii80.fsf@blackfin.pond.sub.org>
	<525E9CCE02000078000FB751@nat28.tlf.novell.com>
	<33183CC9F5247A488A2544077AF19020815A03EA@SZXEMA503-MBS.china.huawei.com>
	<52664E1002000078000FC9CF@nat28.tlf.novell.com>
	<33183CC9F5247A488A2544077AF19020815A0A24@SZXEMA503-MBS.china.huawei.com>
	<526E3EBD02000078000FD29F@nat28.tlf.novell.com>
In-Reply-To: <526E3EBD02000078000FD29F@nat28.tlf.novell.com>
Cc: Yanqiangjun <yanqiangjun@huawei.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"Hanweidong \(Randy\)" <hanweidong@huawei.com>,
	Markus Armbruster <armbru@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Luonengjun <luonengjun@huawei.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>,
	"Gaowei \(UVP\)" <gao.gaowei@huawei.com>,
	"Huangweidong \(Hardware\)" <huangweidong@huawei.com>
Subject: Re: [Xen-devel] [Qemu-devel] Hvmloader: Modify ACPI to only supply
 _EJ0 methods for PCIslots that support hotplug by runtime patching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 28/10/2013 10:38, Jan Beulich ha scritto:
>>>> On 24.10.13 at 14:17, "Gonglei (Arei)" <arei.gonglei@huawei.com> wrote:
>> Now I test the patch based on the codes of trunk, which works well.
>> The patch has been modified after your suggestion.
> Partly. It looks reasonable now, but still not pretty. But the tools
> maintainers will have to have the final say here anyway.
>
> Jan
>

Are there news about this patch?

Thanks for any reply.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 14:40:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 14:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5yyU-0002P5-K0; Wed, 22 Jan 2014 14:40:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <borkmann@iogearbox.net>) id 1W5yp6-0001fT-FB
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 14:30:32 +0000
Received: from [85.158.137.68:19865] by server-1.bemta-3.messagelabs.com id
	18/04-29598-706DFD25; Wed, 22 Jan 2014 14:30:31 +0000
X-Env-Sender: borkmann@iogearbox.net
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390401029!7014902!1
X-Originating-IP: [213.133.104.62]
X-SpamReason: No, hits=1.8 required=7.0 tests=RATWARE_GECKO_BUILD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21502 invoked from network); 22 Jan 2014 14:30:29 -0000
Received: from www62.your-server.de (HELO www62.your-server.de)
	(213.133.104.62)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 14:30:29 -0000
Received: from [88.198.220.131] (helo=sslproxy02.your-server.de)
	by www62.your-server.de with esmtpsa (TLSv1:AES256-SHA:256)
	(Exim 4.74) (envelope-from <borkmann@iogearbox.net>)
	id 1W5yoV-0007L9-Fm; Wed, 22 Jan 2014 15:29:55 +0100
Received: from [217.192.90.146] (helo=[192.168.1.101])
	by sslproxy02.your-server.de with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <borkmann@iogearbox.net>)
	id 1W5yoS-0004hh-6c; Wed, 22 Jan 2014 15:29:52 +0100
Message-ID: <52DFD5DB.6060603@iogearbox.net>
Date: Wed, 22 Jan 2014 15:29:47 +0100
From: Daniel Borkmann <borkmann@iogearbox.net>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Steven Noonan <steven@uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
In-Reply-To: <20140122072914.GA9283@orcus.uplinklabs.net>
X-Authenticated-Sender: borkmann@iogearbox.net
X-Virus-Scanned: Clear (ClamAV 0.97.8/18381/Wed Jan 22 08:20:58 2014)
X-Mailman-Approved-At: Wed, 22 Jan 2014 14:40:13 +0000
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>, george.dunlap@eu.citrix.com,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>, dario.faggioli@citrix.com,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	david.vrabel@citrix.com, Elena Ufimtseva <ufimtseva@gmail.com>,
	Vlastimil Babka <vbabka@suse.cz>, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, Michel Lespinasse <walken@google.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/22/2014 08:29 AM, Steven Noonan wrote:
> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>>>> <gregkh@linuxfoundation.org> wrote:
>>
>> Adding extra folks to the party.
>>>>>
>>>>> Odds are this also shows up in 3.13, right?
>>>
>>> Reproduced using 3.13 on the PV guest:
>>>
>>> 	[  368.756763] BUG: Bad page map in process mp  pte:80000004a67c6165 pmd:e9b706067
>>> 	[  368.756777] page:ffffea001299f180 count:0 mapcount:-1 mapping:          (null) index:0x0
>>> 	[  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>>> 	[  368.756786] addr:00007fd1388b7000 vm_flags:00100071 anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>>> 	[  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2 #1
>>> 	[  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0 ffffffff814d8748 00007fd1388b7000
>>> 	[  368.756803]  ffff880e9eaf3d08 ffffffff8116d289 0000000000000000 0000000000000000
>>> 	[  368.756809]  ffff880e9b7065b8 ffffea001299f180 00007fd1388b8000 ffff880e9eaf3e30
>>> 	[  368.756815] Call Trace:
>>> 	[  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>>> 	[  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>> 	[  368.756837]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>>> 	[  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>> 	[  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>>> 	[  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>>> 	[  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>>> 	[  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>>> 	[  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>>> 	[  368.756869]  [<ffffffff814e70ed>] system_call_fastpath+0x1a/0x1f
>>> 	[  368.756872] Disabling lock debugging due to kernel taint
>>> 	[  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:0 val:-1
>>> 	[  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:1 val:1
>>>
>>>>
>>>> Probably. I don't have a Xen PV setup to test with (and very little
>>>> interest in setting one up).. And I have a suspicion that it might not
>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>>>>
>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>>>> confused.
>>>>
>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>>>> _PAGE_PRESENT.
>>>>
>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>>>> that makes him go "Ahh, yes, silly case".
>>>>
>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>>>>
>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>>>> attached test-case (but apparently only under Xen PV). There it
>>>> apparently causes a "BUG: Bad page map .." error.
>>
>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the _raw_
>> value of PMDs and PTEs. That is - it does not use the pvops interface
>> and instead reads the values directly from the page-table. Since the
>> page-table is also manipulated by the hypervisor - there are certain
>> flags it also sets to do its business. It might be that it uses
>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>> pte_flags that would invoke the pvops interface.
>>
>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
>>
>> This not-compiled-totally-bad-patch might shed some light on what I was
>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> for that).
>
> Unfortunately the Totally Bad Patch seems to make no difference. I am
> still able to repro the issue:

Maybe this one is also related to this BUG here (cc'ed people investigating
this one) ...

   https://lkml.org/lkml/2014/1/10/427

... not sure, though.

> 	[  346.374929] BUG: Bad page map in process mp  pte:80000004ae928065 pmd:e993f9067
> 	[  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:          (null) index:0x0
> 	[  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
> 	[  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
> 	[  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
> 	[  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
> 	[  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
> 	[  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
> 	[  346.374979] Call Trace:
> 	[  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
> 	[  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> 	[  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> 	[  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> 	[  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
> 	[  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
> 	[  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
> 	[  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
> 	[  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
> 	[  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
> 	[  346.375037] Disabling lock debugging due to kernel taint
> 	[  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
> 	[  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1
>
> This dump doesn't look dramatically different, either.
>
>>
>> The other question is - how is AutoNUMA running when it is not enabled?
>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> turned on?
>
> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> mean not enabled at runtime?
>
> [1] http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>>
>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>>>> that makes him go "Ahh, yes, silly case".
>>>>
>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>>>>
>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>>>> attached test-case (but apparently only under Xen PV). There it
>>>> apparently causes a "BUG: Bad page map .." error.
>>
>> I *think* it is due to the fact that pmd_numa and pte_numa are getting the _raw_
>> value of PMDs and PTEs. That is, they do not use the pvops interface
>> and instead read the values directly from the page table. Since the
>> page table is also manipulated by the hypervisor, there are certain
>> flags it sets for its own purposes. It might be that it uses
>> _PAGE_GLOBAL as well, and Linux picks up on that. If they were using
>> pte_flags, that would invoke the pvops interface.
>>
>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
>>
>> This not-compiled-totally-bad-patch might shed some light on what I was
>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> for that).
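[A minimal standalone sketch of the suspicion above, with hypothetical names (these are not the actual kernel accessors). On x86, _PAGE_PROTNONE, which _PAGE_NUMA reuses, shares bit 8 with _PAGE_GLOBAL, so a raw page-table read can mistake a hypervisor-owned bit for a NUMA marker, while a pvops-style accessor lets the backend filter such bits first:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model, not kernel code: XPTE_PAGE_NUMA and
 * XPTE_PAGE_GLOBAL share bit 8, as _PAGE_PROTNONE/_PAGE_NUMA and
 * _PAGE_GLOBAL do on real x86. */
#define XPTE_PAGE_GLOBAL  (1ULL << 8)
#define XPTE_PAGE_NUMA    (1ULL << 8)   /* same bit, deliberately */

typedef uint64_t pteval_t;

/* Raw read: returns whatever is in the page table,
 * hypervisor-owned bits included. */
static pteval_t raw_pte_val(pteval_t pte)
{
    return pte;
}

/* pvops-style read: an (assumed) backend masks off the bit it uses
 * for its own bookkeeping before Linux interprets the flags. */
static pteval_t pv_pte_flags(pteval_t pte)
{
    return pte & ~XPTE_PAGE_GLOBAL;
}
```

[In this model a PTE whose only set bit was placed there by the hypervisor looks like a NUMA PTE through the raw read, but not through the filtered one.]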
>
> Unfortunately the Totally Bad Patch seems to make no difference. I am
> still able to repro the issue:

Maybe this one is also related to this BUG here (cc'ed people investigating
this one) ...

   https://lkml.org/lkml/2014/1/10/427

... not sure, though.

> 	[  346.374929] BUG: Bad page map in process mp  pte:80000004ae928065 pmd:e993f9067
> 	[  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:          (null) index:0x0
> 	[  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
> 	[  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
> 	[  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
> 	[  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
> 	[  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
> 	[  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
> 	[  346.374979] Call Trace:
> 	[  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
> 	[  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> 	[  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> 	[  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> 	[  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
> 	[  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
> 	[  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
> 	[  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
> 	[  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
> 	[  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
> 	[  346.375037] Disabling lock debugging due to kernel taint
> 	[  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
> 	[  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1
>
> This dump doesn't look dramatically different, either.
>
>>
>> The other question is - how is AutoNUMA running when it is not enabled?
>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> turned on?
>
> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> mean not enabled at runtime?
>
> [1] http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 14:47:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 14:47:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5z5P-0002Yv-Hg; Wed, 22 Jan 2014 14:47:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W5z5M-0002YW-RE; Wed, 22 Jan 2014 14:47:21 +0000
Received: from [193.109.254.147:32743] by server-9.bemta-14.messagelabs.com id
	E7/0C-13957-8F9DFD25; Wed, 22 Jan 2014 14:47:20 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390402038!12534118!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21028 invoked from network); 22 Jan 2014 14:47:19 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 14:47:19 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so386710lan.26
	for <multiple recipients>; Wed, 22 Jan 2014 06:47:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=poOZXL6dnw1WAAIM3KV3tVKr6oK+uwaV4B62XxzVA5I=;
	b=e7osWRLfjnOxVW9ztSiFcEbe6/q8o+Tqn3QB1zEyTcul5lK4rp7w+axP6eG9r1glL1
	PvAZVMJtQRZk/RK415oIMqTuGijPUg9f2F6RhMS+s7YeSEvJ06u6dJ6psraLf4KVkWHZ
	iFh44DE7P46t/T47TAosthTfYPrhf/0WaPZLK+zIqAvFPJSuqdiJGsKVKTKNV+j0Z4eM
	z5k9XKYq8Joe7WCRbAyqtl/09nyyEkup5omHugTE7dOpy2GD2l3YeQ9pwkoSDxJQszE8
	6/1awrJyFd9xQX714vPpjBDfl7hM9flB7XXaWiakgGglHc8vqBPfhXN60ByWYq5wF4/l
	C9Ag==
MIME-Version: 1.0
X-Received: by 10.152.42.129 with SMTP id o1mr1326289lal.19.1390402038526;
	Wed, 22 Jan 2014 06:47:18 -0800 (PST)
Received: by 10.112.184.16 with HTTP; Wed, 22 Jan 2014 06:47:18 -0800 (PST)
Date: Wed, 22 Jan 2014 09:47:18 -0500
X-Google-Sender-Auth: r47kH2GdkpR3E2nl1MW8NcvIRXM
Message-ID: <CAHehzX1Lq2ighFgHaeRC28NLadpuR32oV2nB1mtQdx3+PZuAcw@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org, xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] REMINDER: Next Monday is Xen Project Document Day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remember that the next Xen Project Document Day will
be held next Monday, January 27.

Xen Project Document Day is a day to help improve overall Xen
documentation, with emphasis on the Xen Project Wiki.

Never participated in a Document Day before?  All the info you'll need is here:

http://wiki.xenproject.org/wiki/Xen_Document_Days

If you get a few moments in the next week and a half, please take a
look at the current TODO list:

http://wiki.xenproject.org/wiki/Xen_Document_Days/TODO

Add any documentation deficiencies you have come across while working
with Xen.  Is there a subject you wrestled with?  That's a perfect
opportunity for you to help shape the documentation into something
more useful for the next person who needs it!

So please think about how you can join in the action.  If you haven't
requested to be made a Wiki editor, save time and do it now so you are
ready to go on Document Day.  Just fill out the form below:

http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html

See you in #xendocs on Monday!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 14:48:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 14:48:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5z6M-0002ft-0B; Wed, 22 Jan 2014 14:48:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefano.panella@citrix.com>) id 1W5z6K-0002fi-MD
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 14:48:20 +0000
Received: from [85.158.137.68:23480] by server-9.bemta-3.messagelabs.com id
	48/A5-13104-33ADFD25; Wed, 22 Jan 2014 14:48:19 +0000
X-Env-Sender: stefano.panella@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390402097!10655109!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12717 invoked from network); 22 Jan 2014 14:48:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 14:48:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93273315"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 14:47:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 09:47:57 -0500
Received: from slimey.cam.xci-test.com ([10.80.249.177])	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<stefano.panella@citrix.com>)	id 1W5z5x-00070L-94;
	Wed, 22 Jan 2014 14:47:57 +0000
Message-ID: <52DFDA1A.6000009@citrix.com>
Date: Wed, 22 Jan 2014 14:47:54 +0000
From: Stefano Panella <stefano.panella@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140121220355.GA6557@phenom.dumpdata.com>
	<1390384248.32519.30.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221208500.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401221208500.21510@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/22/2014 12:11 PM, Stefano Stabellini wrote:
> On Wed, 22 Jan 2014, Ian Campbell wrote:
>> On Tue, 2014-01-21 at 17:03 -0500, Konrad Rzeszutek Wilk wrote:
>>
>> (nb: I posted a v3 at
>> http://article.gmane.org/gmane.linux.ports.arm.kernel/295594
>> )
>>
>>> On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
>>>> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
>>>> causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the physical address.
>>>>
>>>> This can happen in practice on ARM where foreign pages can be above 4GB even
>>>> though the local kernel does not have LPAE page tables enabled (which is
>>>> totally reasonable if the guest does not itself have >4GB of RAM). In this
>>>> case the kernel still maps the foreign pages at a phys addr below 4G (as it
>>>> must) but the resulting DMA address (returned by the grant map operation) is
>>>> much higher.
>>>>
>>>> This is analogous to a hardware device which has its view of RAM mapped up
>>>> high for some reason.
>>>>
>>>> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
>>>> systems with more than 4GB of RAM.
>>> There was another patch posted by somebody from Citrix for a fix on
>>> 32-bit x86 dom0 with more than 4GB of RAM (for x86 platforms).
>>>
>>> Their fix was in the generic parts of code. Changing most of the 'unsigned'
>>> to 'phys_addr_t' or such. Is his patch better or will this patch replace his?
>> I believe they are orthogonal, or at least I'm not (yet) hitting the
>> same issue as Stefano P, the alloc coherent code paths are not involved
>> in the issue I'm seeing because it involves foreign pages whose
>> MFN/dma_addr is very high, not DMA to devices which are up high.
> Yes, the two issues are orthogonal.
> It is worth noting that the problem reported by StefanoP is not fatal:
> it should just cause more bouncing on the swiotlb buffer than it is
> strictly necessary (dma_mask gets truncated).
I agree it is not fatal, but would it be worth not truncating the
dma_mask for devices capable of using the full 64-bit range?
I was under the impression that the memory below 4GB that we fall back
to when we truncate is limited, and should be reserved for PCI devices
not capable of 64-bit addressing.
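[A small standalone sketch of the truncation the patch description talks about, with hypothetical type names standing in for phys_addr_t and dma_addr_t. On a 32-bit non-LPAE kernel the physical-address type is 32 bits wide, while the bus address returned by a grant map can be above 4GB, so routing it through a phys-sized variable silently drops the top bits:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical widths: a 32-bit non-LPAE phys_addr_t versus a
 * 64-bit DMA/bus address. */
typedef uint32_t narrow_phys_t;
typedef uint64_t wide_dma_t;

/* Broken: the conversion passes through the narrow type. */
static wide_dma_t bus_roundtrip_broken(wide_dma_t bus)
{
    narrow_phys_t p = (narrow_phys_t)bus;   /* top 32 bits lost here */
    return (wide_dma_t)p;
}

/* Fixed: the address keeps its full width end to end. */
static wide_dma_t bus_roundtrip_fixed(wide_dma_t bus)
{
    return bus;
}
```

[For a bus address like 0x123456000 the broken round trip yields 0x23456000, which is exactly the kind of corruption the patch avoids by keeping dma_addr_t throughout the phys<=>bus conversions.]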


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 14:56:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 14:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W5zDm-0003Ys-Gs; Wed, 22 Jan 2014 14:56:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W5zDk-0003Yn-B0
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 14:56:00 +0000
Received: from [85.158.137.68:59233] by server-12.bemta-3.messagelabs.com id
	A5/61-20055-FFBDFD25; Wed, 22 Jan 2014 14:55:59 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390402557!10643713!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19941 invoked from network); 22 Jan 2014 14:55:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 14:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="95323925"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 14:55:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 09:55:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W5zDf-0007aU-KB;
	Wed, 22 Jan 2014 14:55:55 +0000
Date: Wed, 22 Jan 2014 14:54:49 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Panella <stefano.panella@citrix.com>
In-Reply-To: <52DFDA1A.6000009@citrix.com>
Message-ID: <alpine.DEB.2.02.1401221454180.21510@kaball.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140121220355.GA6557@phenom.dumpdata.com>
	<1390384248.32519.30.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221208500.21510@kaball.uk.xensource.com>
	<52DFDA1A.6000009@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Stefano Panella wrote:
> On 01/22/2014 12:11 PM, Stefano Stabellini wrote:
> > On Wed, 22 Jan 2014, Ian Campbell wrote:
> > > On Tue, 2014-01-21 at 17:03 -0500, Konrad Rzeszutek Wilk wrote:
> > > 
> > > (nb: I posted a v3 at
> > > http://article.gmane.org/gmane.linux.ports.arm.kernel/295594
> > > )
> > > 
> > > > On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> > > > > The use of phys_to_machine and machine_to_phys in the phys<=>bus
> > > > > conversions
> > > > > causes us to lose the top bits of the DMA address if the size of a DMA
> > > > > address is not the same as the size of the physical address.
> > > > > 
> > > > > This can happen in practice on ARM where foreign pages can be above
> > > > > 4GB even
> > > > > though the local kernel does not have LPAE page tables enabled (which
> > > > > is
> > > > > totally reasonable if the guest does not itself have >4GB of RAM). In
> > > > > this
> > > > > case the kernel still maps the foreign pages at a phys addr below 4G
> > > > > (as it
> > > > > must) but the resulting DMA address (returned by the grant map
> > > > > operation) is
> > > > > much higher.
> > > > > 
> > > > > This is analogous to a hardware device which has its view of RAM
> > > > > mapped up
> > > > > high for some reason.
> > > > > 
> > > > > This patch makes I/O to foreign pages (specifically blkif) work on
> > > > > 32-bit ARM
> > > > > systems with more than 4GB of RAM.
> > > > There was another patch posted by somebody from Citrix for a fix on
> > > > 32-bit x86 dom0 with more than 4GB of RAM (for x86 platforms).
> > > > 
> > > > Their fix was in the generic parts of code. Changing most of the
> > > > 'unsigned'
> > > > to 'phys_addr_t' or such. Is his patch better or will this patch replace
> > > > his?
> > > I believe they are orthogonal, or at least I'm not (yet) hitting the
> > > same issue as Stefano P: the alloc-coherent code paths are not
> > > involved in the issue I'm seeing, because it involves foreign pages
> > > whose MFN/dma_addr is very high, not DMA to devices which are mapped
> > > up high.
> > Yes, the two issues are orthogonal.
> > It is worth noting that the problem reported by StefanoP is not fatal:
> > it should just cause more bouncing on the swiotlb buffer than is
> > strictly necessary (the dma_mask gets truncated).
> I agree it is not fatal, but would it be worth not truncating the
> dma_mask for devices capable of using the full range?
> I was under the impression that the <4GB memory returned if we truncate
> it is limited, and should be reserved for PCI devices not capable of
> 64-bit addressing.

Yeah, Linux should certainly not truncate the dma_mask.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6066-0006YR-1B; Wed, 22 Jan 2014 15:52:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6064-0006YM-H8
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:08 +0000
Received: from [193.109.254.147:3569] by server-9.bemta-14.messagelabs.com id
	5C/7C-13957-729EFD25; Wed, 22 Jan 2014 15:52:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390405925!12534390!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2126 invoked from network); 22 Jan 2014 15:52:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 15:52:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="95359003"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 15:52:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 10:52:03 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W605z-0000Ay-8N;
	Wed, 22 Jan 2014 15:52:03 +0000
Date: Wed, 22 Jan 2014 15:52:02 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140122155202.GB24675@zion.uk.xensource.com>
References: <1386882184-15324-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1386882184-15324-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC V3 0/7] OSSTest: OVMF test job
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian, ping.

On Thu, Dec 12, 2013 at 09:02:57PM +0000, Wei Liu wrote:
> RFC v3 of this series
> 
> This series implements a basic test job for OVMF guest. The test case will
> install an OVMF guest and try to boot it.
> 
> I've tried my best to factor out common code. :-)
> 
> Now the preseed data in the test case only contains essential items -
> partitioning recipe, late_command and two other items.
> 
> As for the file manipulation code, it has a small portion (the first 6
> lines, as IanJ pointed out) that's copied from ts-redhat-install, but I
> don't see a sensible way to factor out those 6 lines of commands.
> 
> I basically didn't touch the last two patches, as IanJ will take care of
> them when he takes this series.
> 
> Wei.
> 
> Changes in v3:
> * consolidate more config items into preseed_base
> * ts-ovmf-debian-install -> ts-debian-hvm-install
> * factor out functions to create ISOs.
> * $xl -> $toolstack in test case script
> * add $flight $job and $gn to all file paths
> 
> Changes in v2:
> * factor out preseed_base
> * make installation CD work with seabios
> 
> 
> Wei Liu (7):
>   make-flight: disable OVMF build for 4.3
>   TestSupport.pm: add bios option to guest config file
>   TestSupport.pm: functions for creating isos
>   Debian.pm: factor out preseed_base
>   Introduce ts-debian-hvm-install
>   make-flight: OVMF test flight
>   sg-run-job: OVMF job
> 
>  Osstest/Debian.pm      |  143 +++++++++++++++++++---------------
>  Osstest/TestSupport.pm |   33 ++++++++
>  make-flight            |    7 ++
>  sg-run-job             |    6 ++
>  ts-debian-hvm-install  |  202 ++++++++++++++++++++++++++++++++++++++++++++++++
>  ts-redhat-install      |   13 +---
>  6 files changed, 332 insertions(+), 72 deletions(-)
>  create mode 100755 ts-debian-hvm-install
> 
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606Q-0006aF-Vu; Wed, 22 Jan 2014 15:52:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606Q-0006Zs-4r
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:30 +0000
Received: from [85.158.143.35:35860] by server-1.bemta-4.messagelabs.com id
	8B/D1-02132-D39EFD25; Wed, 22 Jan 2014 15:52:29 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390405946!108344!1
X-Originating-IP: [64.18.0.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16609 invoked from network); 22 Jan 2014 15:52:28 -0000
Received: from exprod5og119.obsmtp.com (HELO exprod5og119.obsmtp.com)
	(64.18.0.189)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:28 -0000
Received: from mail-ee0-f42.google.com ([74.125.83.42]) (using TLSv1) by
	exprod5ob119.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOjNrhxwAdK5QjLzR9YleB4nN8e0h@postini.com;
	Wed, 22 Jan 2014 07:52:28 PST
Received: by mail-ee0-f42.google.com with SMTP id e49so4981791eek.15
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=CU1i8IZ6+gUV/G0p4TjU8TS1CnLB39dGqtDyuYbBTRA=;
	b=dHMDsRQz8uo6JDTn653eMRT6yiJIFiI4PEq+rsUAdbvOqJe7z0Y0J3pcAN5A1Z2Rh8
	2dc9pB/XgiK1BT5ex1vTRDUHLFwOYRlxhM0Dn6O5zkQ4tHsAWQTvppqWxJLM7TSeqUXj
	FSmNtaGyP/fqBofUeansKttYo9h5ha33/x8Ug=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=CU1i8IZ6+gUV/G0p4TjU8TS1CnLB39dGqtDyuYbBTRA=;
	b=E3Ct9GofyVyF5XXHBm1NuV64U6YIH3z4wfKTyo3AqI3576BcXPHIJYaXnJxw4sC640
	cwYldHSntyxCm6tamXUidzDqUvu3cc8npnq3L3JAtEhKO8DNX8a0YqGMly5P5PzGnDzu
	siP0FDkThrDpCCEPKM/XXppiqCzjBOk9+vtSFXkfDiyrfDm7EyQVtUrf6xDSMXbrKzJd
	M5torfRvbtf1ZY1cQfm4BEuDqZrt+CDmVjFsCggZXgADKOFCP0p8Vic/HmmCvd4LugJY
	wv6FB5EqzdkKgZhgQN4XUGiQ7KHt3PD3nmEq9izcF9zF9egp0DEoN8rMsVyLNGKk7whM
	50wg==
X-Gm-Message-State: ALoCoQljJUQFbkIwyEiZABnbrk6l/66jx3pI7VJXWf+3RVG7a/sx8nlpvENgwh302wtyWBRQyWJC4booz9FTJ2cn5RcTptnxBmsBgIQhL8SXzxGV1duta6t5Z3VhE48xgAGQGLICWbVs23WH8C5A8RWjb1Q185hLz6IMmShvNFjqz5BS6QtZztg=
X-Received: by 10.14.207.194 with SMTP id n42mr784348eeo.76.1390405945173;
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
X-Received: by 10.14.207.194 with SMTP id n42mr784347eeo.76.1390405945113;
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.23
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:24 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:04 +0200
Message-Id: <1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping to 4K
	pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the following algorithm:
- enumerate all first-level translation entries
- for each section, create 256 pages of 4096 bytes each
- for each supersection, create 4096 pages of 4096 bytes each
- flush the cache to synchronize the Cortex M15 and the IOMMU

This algorithm makes it possible to use 4K mappings only.

Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 4dab30f..7ec03a2 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -72,6 +72,9 @@
 #define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
 #define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
 
+/* 16 sections in supersection */
+#define IOSECTION_PER_IOSUPER	(1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
+
 /*
  * some descriptor attributes.
  */
@@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
 	&omap_dsp_mmu,
 };
 
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
 #define mmu_for_each(pfunc, data)						\
 ({														\
 	u32 __i;											\
@@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 	return vaddr;
 }
 
+static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
+{
+	u32 *iopte = NULL;
+	u32 i;
+
+	iopte = xzalloc_bytes(PAGE_SIZE);
+	if (!iopte) {
+		printk("%s Failed to alloc 2nd level table\n", mmu->name);
+		return 0;
+	}
+
+	for (i = 0; i < PTRS_PER_IOPTE; i++) {
+		u32 da, vaddr, iopgd_tmp;
+		da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
+		iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
+		vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
+		iopte[i] = vaddr | IOPTE_SMALL;
+	}
+
+	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
+	return __pa(iopte) | IOPGD_TABLE;
+}
+
 /*
  * on boot table is empty
  */
@@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 
 		/* "supersection" 16 Mb */
 		if (iopgd_is_super(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			if (likely(translate_supersections_to_pages)) {
+				u32 j, iopgd_tmp;
+				for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
+					iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
+					mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
+				}
+				i += (j - 1);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			}
 
 		/* "section" 1Mb */
 		} else if (iopgd_is_section(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			if (likely(translate_sections_to_pages)) {
+				mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			}
 
 		/* "table" */
 		} else if (iopgd_is_table(iopgd)) {
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606T-0006az-Df; Wed, 22 Jan 2014 15:52:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606R-0006aR-Ms
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:32 +0000
Received: from [85.158.137.68:6587] by server-12.bemta-3.messagelabs.com id
	D5/6D-20055-E39EFD25; Wed, 22 Jan 2014 15:52:30 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390405946!10720212!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11925 invoked from network); 22 Jan 2014 15:52:29 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:29 -0000
Received: from mail-ea0-f179.google.com ([209.85.215.179]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOjNrhxwAdK5QjLzR9YleB4nN8e0h@postini.com;
	Wed, 22 Jan 2014 07:52:28 PST
Received: by mail-ea0-f179.google.com with SMTP id q10so3533757ead.38
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=d6WJCu4NVO4HNsM3/h8JPsE70+p8SxmARVWFysv5BJk=;
	b=dSojXJhZ7thwCg/+sdjcAzVFNIngrITEhC5D9OqdjyJLwOOS5x3W/DiKq1La7Q1xZ1
	QCgV1AAWyUzMOe9qQjB59mUFNywexJX6ujJMhObekVIBGQB7y9Aw95PuAL/qScW7+YLP
	sixNCT4SsB82QIC3D6rstjoc0fnOplyIlZDeM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=d6WJCu4NVO4HNsM3/h8JPsE70+p8SxmARVWFysv5BJk=;
	b=gF4f4ltj8ooAy63cXJlmv3M5PG369/MHE4x0S2htA2ZBQNfviPDiP8gB0K9V9j5qDW
	hwSgEuJcQcvM9q5kh0ifhmjxnL9RXxu9tJQa7fe/MwzEs4LbKi+4JgfSbIPUTbPZyAww
	pNnQRrrnpuEz/zhG6e2sm3gkfQslJURvF2T9k6GMe0QGMKZLJQRm6cqmC1u4rjq5B9/M
	HXIMqM9RoiAeGS7i3+l47lr/S1/7xclzfWMxGmcNSbs+f5/5pB0hlMzIfT3nTyqomEKW
	nAgP4L9fBdCF4TUdnvNkbvAUrb6eqrUWlXX3uXKgsRjBjzfG2jQGoLGhxe/OSmncFcb0
	AC2w==
X-Gm-Message-State: ALoCoQkvCz7oSgdml9ySmrBOBLba4OaMwPFHLoGvkq/85E3cxmygWlNqqeTn3c4fkif/BLW4QMIjd6+qTTOwtqW75BzpCEVtegNE3gskm5U1W/n3eHIWAGziSKBp9Gp374pOIAAgYH1N6k/yywotGN+wwxgfb9iSnZaX474xSvB7DWyaWd5EsX0=
X-Received: by 10.14.209.129 with SMTP id s1mr2178511eeo.21.1390405944678;
	Wed, 22 Jan 2014 07:52:24 -0800 (PST)
X-Received: by 10.14.209.129 with SMTP id s1mr2178432eeo.21.1390405943784;
	Wed, 22 Jan 2014 07:52:23 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.22
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:23 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:03 +0200
Message-Id: <1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The omap IOMMU module is designed to handle accesses to the external
omap MMUs connected to the L3 bus.

Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 418 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 003ac84..cb0b385 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -14,6 +14,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
+obj-y += omap_iommu.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a6db00b..3281b67 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
 {
     &vgic_distr_mmio_handler,
     &vuart_mmio_handler,
+	&mmu_mmio_handler,
 };
 #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
 
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 8d252c0..acb5dff 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -42,6 +42,7 @@ struct mmio_handler {
 
 extern const struct mmio_handler vgic_distr_mmio_handler;
 extern const struct mmio_handler vuart_mmio_handler;
+extern const struct mmio_handler mmu_mmio_handler;
 
 extern int handle_mmio(mmio_info_t *info);
 
diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
new file mode 100644
index 0000000..4dab30f
--- /dev/null
+++ b/xen/arch/arm/omap_iommu.c
@@ -0,0 +1,415 @@
+/*
+ * xen/arch/arm/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2013 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+
+#include "io.h"
+
+/* register where address of page table is stored */
+#define MMU_TTB			0x4c
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* 1st level translation */
+#define IOPGD_SHIFT		20
+#define IOPGD_SIZE		(1UL << IOPGD_SHIFT)
+#define IOPGD_MASK		(~(IOPGD_SIZE - 1))
+
+/* "supersection" - 16 Mb */
+#define IOSUPER_SHIFT		24
+#define IOSUPER_SIZE		(1UL << IOSUPER_SHIFT)
+#define IOSUPER_MASK		(~(IOSUPER_SIZE - 1))
+
+/* "section"  - 1 Mb */
+#define IOSECTION_SHIFT		20
+#define IOSECTION_SIZE		(1UL << IOSECTION_SHIFT)
+#define IOSECTION_MASK		(~(IOSECTION_SIZE - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define PTRS_PER_IOPGD		(1UL << (32 - IOPGD_SHIFT))
+#define IOPGD_TABLE_SIZE	(PTRS_PER_IOPGD * sizeof(u32))
+
+/* 2nd level translation */
+
+/* "small page" - 4Kb */
+#define IOPTE_SMALL_SHIFT		12
+#define IOPTE_SMALL_SIZE		(1UL << IOPTE_SMALL_SHIFT)
+#define IOPTE_SMALL_MASK		(~(IOPTE_SMALL_SIZE - 1))
+
+/* "large page" - 64 Kb */
+#define IOPTE_LARGE_SHIFT		16
+#define IOPTE_LARGE_SIZE		(1UL << IOPTE_LARGE_SHIFT)
+#define IOPTE_LARGE_MASK		(~(IOPTE_LARGE_SIZE - 1))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
+#define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
+
+/*
+ * some descriptor attributes.
+ */
+#define IOPGD_TABLE		(1 << 0)
+#define IOPGD_SECTION	(2 << 0)
+#define IOPGD_SUPER		(1 << 18 | 2 << 0)
+
+#define iopgd_is_table(x)	(((x) & 3) == IOPGD_TABLE)
+#define iopgd_is_section(x)	(((x) & (1 << 18 | 3)) == IOPGD_SECTION)
+#define iopgd_is_super(x)	(((x) & (1 << 18 | 3)) == IOPGD_SUPER)
+
+#define IOPTE_SMALL		(2 << 0)
+#define IOPTE_LARGE		(1 << 0)
+
+#define iopte_is_small(x)	(((x) & 2) == IOPTE_SMALL)
+#define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
+#define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
+
+struct mmu_info {
+	const char			*name;
+	paddr_t				mem_start;
+	u32					mem_size;
+	u32					*pagetable;
+	void __iomem		*mem_map;
+};
+
+static struct mmu_info omap_ipu_mmu = {
+	.name		= "IPU_L2_MMU",
+	.mem_start	= 0x55082000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info omap_dsp_mmu = {
+	.name		= "DSP_L2_MMU",
+	.mem_start	= 0x4a066000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info *mmu_list[] = {
+	&omap_ipu_mmu,
+	&omap_dsp_mmu,
+};
+
+#define mmu_for_each(pfunc, data)						\
+({														\
+	u32 __i;											\
+	int __res = 0;										\
+														\
+	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
+		__res |= pfunc(mmu_list[__i], data);			\
+	}													\
+	__res;												\
+})
+
+static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
+		return 1;
+
+	return 0;
+}
+
+static inline struct mmu_info *mmu_lookup(u32 addr)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
+		if (mmu_check_mem_range(mmu_list[i], addr))
+			return mmu_list[i];
+	}
+
+	return NULL;
+}
+
+static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
+{
+	return (reg & mask) | (va & (~mask));
+}
+
+static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
+{
+	return (reg & ~mask) | pa;
+}
+
+static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
+{
+	return mmu_for_each(mmu_check_mem_range, addr);
+}
+
+static int mmu_copy_pagetable(struct mmu_info *mmu)
+{
+	void __iomem *pagetable = NULL;
+	u32 pgaddr;
+
+	ASSERT(mmu);
+
+	/* read address where kernel MMU pagetable is stored */
+	pgaddr = readl(mmu->mem_map + MMU_TTB);
+	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
+	if (!pagetable) {
+		printk("%s: %s failed to map pagetable\n",
+			   __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	/*
+	 * The pagetable can have changed since the last time
+	 * we accessed it, therefore we need to copy it each time
+	 */
+	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
+
+	iounmap(pagetable);
+
+	return 0;
+}
+
+#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)									\
+{																								\
+	const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||				\
+							(mask == IOPTE_LARGE_MASK)) ? "table"								\
+							: iopgd_is_super(iopgd) ? "supersection"							\
+							: iopgd_is_section(iopgd) ? "section"								\
+							: "Unknown section";												\
+	printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n",\
+		   sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);								\
+}
+
+static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
+{
+	u32 vaddr, paddr;
+	paddr_t maddr;
+
+	paddr = mmu_virt_to_phys(iopgd, da, mask);
+	maddr = p2m_lookup(dom, paddr);
+	vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
+
+	return vaddr;
+}
+
+/*
+ * on boot the pagetable is empty
+ */
+static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
+{
+	u32 i;
+	int res;
+	bool table_updated = false;
+
+	ASSERT(dom);
+	ASSERT(mmu);
+
+	/* copy the pagetable from the domain to Xen */
+	res = mmu_copy_pagetable(mmu);
+	if (res) {
+		printk("%s: %s failed to map pagetable memory\n",
+			   __func__, mmu->name);
+		return res;
+	}
+
+	/* 1-st level translation */
+	for (i = 0; i < PTRS_PER_IOPGD; i++) {
+		u32 da;
+		u32 iopgd = mmu->pagetable[i];
+
+		if (!iopgd)
+			continue;
+
+		table_updated = true;
+
+		/* "supersection" 16 Mb */
+		if (iopgd_is_super(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+
+		/* "section" 1Mb */
+		} else if (iopgd_is_section(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+
+		/* "table" */
+		} else if (iopgd_is_table(iopgd)) {
+			u32 j, mask;
+			u32 iopte = iopte_offset(iopgd);
+
+			/* 2-nd level translation */
+			for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
+
+				/* "small table" 4Kb */
+				if (iopte_is_small(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
+					mask = IOPTE_SMALL_MASK;
+
+				/* "large table" 64Kb */
+				} else if (iopte_is_large(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
+					mask = IOPTE_LARGE_MASK;
+
+				/* error */
+				} else {
+					printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
+					return -EINVAL;
+				}
+
+				/* translate 2-nd level entry */
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
+			}
+
+			continue;
+
+		/* error */
+		} else {
+			printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
+			return -EINVAL;
+		}
+	}
+
+	/* force omap IOMMU to use new pagetable */
+	if (table_updated) {
+		paddr_t maddr;
+		flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
+		maddr = __pa(mmu->pagetable);
+		writel(maddr, mmu->mem_map + MMU_TTB);
+		printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
+	}
+
+	return 0;
+}
+
+static int mmu_trap_write_access(struct domain *dom,
+								 struct mmu_info *mmu, mmio_info_t *info)
+{
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res = 0;
+
+	switch (info->gpa - mmu->mem_start) {
+		case MMU_TTB:
+			printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
+				   mmu->name, _p(info->gpa), *r);
+			res = mmu_translate_pagetable(dom, mmu);
+			break;
+		default:
+			break;
+	}
+
+	return res;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	*r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+	struct domain *dom = v->domain;
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res;
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	/*
+	 * make sure the user register value is written to hardware first;
+	 * the calls that follow may expect valid data in the register
+	 */
+	writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	res = mmu_trap_write_access(dom, mmu, info);
+	if (res)
+		return res;
+
+	return 1;
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+	ASSERT(mmu);
+	ASSERT(!mmu->mem_map);
+	ASSERT(!mmu->pagetable);
+
+	mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
+	if (!mmu->mem_map) {
+		printk("%s: %s failed to map memory\n",  __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	printk("%s: %s mem_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
+
+	mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
+	if (!mmu->pagetable) {
+		printk("%s: %s failed to alloc private pagetable\n",
+			   __func__, mmu->name);
+		return -ENOMEM;
+	}
+
+	printk("%s: %s private pagetable %lu bytes\n",
+		   __func__, mmu->name, IOPGD_TABLE_SIZE);
+
+	return 0;
+}
+
+static int mmu_init_all(void)
+{
+	int res;
+
+	res = mmu_for_each(mmu_init, 0);
+	if (res) {
+		printk("%s error during init %d\n", __func__, res);
+		return res;
+	}
+
+	return 0;
+}
+
+const struct mmio_handler mmu_mmio_handler = {
+	.check_handler = mmu_mmio_check,
+	.read_handler  = mmu_mmio_read,
+	.write_handler = mmu_mmio_write,
+};
+
+__initcall(mmu_init_all);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606U-0006bb-R7; Wed, 22 Jan 2014 15:52:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606S-0006aj-Ru
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:33 +0000
Received: from [85.158.137.68:16734] by server-12.bemta-3.messagelabs.com id
	DA/6D-20055-F39EFD25; Wed, 22 Jan 2014 15:52:31 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390405948!7037297!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25262 invoked from network); 22 Jan 2014 15:52:30 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:30 -0000
Received: from mail-ee0-f53.google.com ([74.125.83.53]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOzGKkMFXoc0ptIEFjjSnB4nSYR19@postini.com;
	Wed, 22 Jan 2014 07:52:29 PST
Received: by mail-ee0-f53.google.com with SMTP id t10so4954026eei.26
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=ELAhG6erl+ID9/jCE/T3prcOC3cZgoD8K2VRR2Y7v2k=;
	b=R9aymsh0QCKSEZhNbxd6J+ejG8PLgdmItho80tGMUut/M6ebkBE8sevwIS+rhC3og3
	wtHBGrYdEWw4Wj12TF9rkYa6zi3en3ZUKiC6rwz7TB4DSlZWrA0UBKfwXbgMfZHXp+7p
	H88WUIgHpDA9PDtGpIJ/XyFnjnFTz6p7tPI8w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=ELAhG6erl+ID9/jCE/T3prcOC3cZgoD8K2VRR2Y7v2k=;
	b=ByqVpDEErVOCugwTWp/ds20OoHViRHRiGg9iH6VqadVpcKlxHfxG4iOlu2Y85QfJgU
	BOWg7mhOPf4bWNJAjYRXodJgdv3fIyJuZnjKW3Lssnm4bE//RQjXB57DvnSpCjYZ8qM8
	CmJEVVu+83BKMRZLiTyifP2kp/Qr29V3yG4c1j9Ur1bOi4JjTg/2ImBy4+WnGF4fvftA
	WrB4JWwDfuVjBh4SCRIE7+C7cR+/bUVqpjOgiz4wea28hDNPtXsIqISLMicPRRftBcMe
	kE2sLCpg3tXMysugUeFvBt64Mm/50yVP3AuVUjJuB/QmcOYopXrCik7vyxNZHRN8jHgJ
	GBOw==
X-Gm-Message-State: ALoCoQlgo3IDRp2O5fuFNlHpWodsNXwfoLHDy1Vd9pviyPMJrsTpQnrcm8retBE93UReT0VYZR2qXZogjvkeCB7cqlDcP7aowvEbCSKhOSsrfZ9N5dayAMB9GgOjelMZ8/x3ZVrTjFxZWTEpnN8EKnHrBS/2SmJ8PDS89hq5awJ8mhXAgN82En4=
X-Received: by 10.15.108.197 with SMTP id cd45mr86157eeb.110.1390405946557;
	Wed, 22 Jan 2014 07:52:26 -0800 (PST)
X-Received: by 10.15.108.197 with SMTP id cd45mr86139eeb.110.1390405946280;
	Wed, 22 Jan 2014 07:52:26 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.25
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:05 +0200
Message-Id: <1390405925-1764-4-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 3/3] arm: omap: cleanup iopte allocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Each iopte allocation requires 4 KB of memory. All allocations
made during a previous MMU reconfiguration must be freed before
a new reconfiguration cycle starts.
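The tracking scheme this patch adds can be sketched in a stand-alone,
user-space form as follows. This is only an illustration of the idea —
names and helpers here are hypothetical, not Xen's (the patch itself uses
`xzalloc_bytes`/`xfree` and Xen's `list_head` primitives):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the cleanup scheme: every second-level table allocation
 * is recorded in a linked list, and all recorded allocations are
 * freed before the next MMU reconfiguration. */
struct alloc_node {
    void *vptr;                /* tracked allocation (the iopte table) */
    struct alloc_node *next;
};

/* allocate a table and remember it; returns the new list head */
static struct alloc_node *track_alloc(struct alloc_node *head, size_t bytes)
{
    struct alloc_node *n = malloc(sizeof(*n));
    if (!n)
        return head;
    n->vptr = calloc(1, bytes);    /* the 4 KB second-level table */
    n->next = head;
    return n;
}

/* free every tracked allocation; returns the number freed */
static int cleanup_all(struct alloc_node **head)
{
    int count = 0;
    while (*head) {
        struct alloc_node *n = *head;
        *head = n->next;
        free(n->vptr);
        free(n);
        count++;
    }
    return count;
}
```

As in the patch, the list is walked with a "safe" iteration (the node is
unlinked before being freed), so teardown can run any number of times.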

Change-Id: I6db69a400cdba1170b43d9dc68d0817db77cbf9c
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 7ec03a2..a5ad3ac 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -93,12 +93,18 @@
 #define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
 #define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
 
+struct mmu_alloc_node {
+	u32					*vptr;
+	struct list_head	node;
+};
+
 struct mmu_info {
 	const char			*name;
 	paddr_t				mem_start;
 	u32					mem_size;
 	u32					*pagetable;
 	void __iomem		*mem_map;
+	struct list_head	alloc_list;
 };
 
 static struct mmu_info omap_ipu_mmu = {
@@ -222,8 +228,15 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
 {
 	u32 *iopte = NULL;
+	struct mmu_alloc_node *alloc_node;
 	u32 i;
 
+	alloc_node = xzalloc_bytes(sizeof(struct mmu_alloc_node));
+	if (!alloc_node) {
+		printk("%s Fail to alloc vptr node\n", mmu->name);
+		return 0;
+	}
+
 	iopte = xzalloc_bytes(PAGE_SIZE);
 	if (!iopte) {
 		printk("%s Fail to alloc 2nd level table\n", mmu->name);
@@ -238,10 +251,27 @@ static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd,
 		iopte[i] = vaddr | IOPTE_SMALL;
 	}
 
+	/* store pointer for following cleanup */
+	alloc_node->vptr = iopte;
+	list_add(&alloc_node->node, &mmu->alloc_list);
+
 	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
 	return __pa(iopte) | IOPGD_TABLE;
 }
 
+static void mmu_cleanup_pagetable(struct mmu_info *mmu)
+{
+	struct mmu_alloc_node *mmu_alloc, *tmp;
+
+	ASSERT(mmu);
+
+	list_for_each_entry_safe(mmu_alloc, tmp, &mmu->alloc_list, node) {
+		xfree(mmu_alloc->vptr);
+		list_del(&mmu_alloc->node);
+		xfree(mmu_alloc);
+	}
+}
+
 /*
  * on boot table is empty
  */
@@ -254,6 +284,9 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 	ASSERT(dom);
 	ASSERT(mmu);
 
+	/* free all previous allocations */
+	mmu_cleanup_pagetable(mmu);
+
 	/* copy pagetable from  domain to xen */
 	res = mmu_copy_pagetable(mmu);
 	if (res) {
@@ -432,6 +465,8 @@ static int mmu_init(struct mmu_info *mmu, u32 data)
 	printk("%s: %s private pagetable %lu bytes\n",
 		   __func__, mmu->name, IOPGD_TABLE_SIZE);
 
+	INIT_LIST_HEAD(&mmu->alloc_list);
+
 	return 0;
 }
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606T-0006az-Df; Wed, 22 Jan 2014 15:52:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606R-0006aR-Ms
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:32 +0000
Received: from [85.158.137.68:6587] by server-12.bemta-3.messagelabs.com id
	D5/6D-20055-E39EFD25; Wed, 22 Jan 2014 15:52:30 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390405946!10720212!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11925 invoked from network); 22 Jan 2014 15:52:29 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:29 -0000
Received: from mail-ea0-f179.google.com ([209.85.215.179]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOjNrhxwAdK5QjLzR9YleB4nN8e0h@postini.com;
	Wed, 22 Jan 2014 07:52:28 PST
Received: by mail-ea0-f179.google.com with SMTP id q10so3533757ead.38
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=d6WJCu4NVO4HNsM3/h8JPsE70+p8SxmARVWFysv5BJk=;
	b=dSojXJhZ7thwCg/+sdjcAzVFNIngrITEhC5D9OqdjyJLwOOS5x3W/DiKq1La7Q1xZ1
	QCgV1AAWyUzMOe9qQjB59mUFNywexJX6ujJMhObekVIBGQB7y9Aw95PuAL/qScW7+YLP
	sixNCT4SsB82QIC3D6rstjoc0fnOplyIlZDeM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=d6WJCu4NVO4HNsM3/h8JPsE70+p8SxmARVWFysv5BJk=;
	b=gF4f4ltj8ooAy63cXJlmv3M5PG369/MHE4x0S2htA2ZBQNfviPDiP8gB0K9V9j5qDW
	hwSgEuJcQcvM9q5kh0ifhmjxnL9RXxu9tJQa7fe/MwzEs4LbKi+4JgfSbIPUTbPZyAww
	pNnQRrrnpuEz/zhG6e2sm3gkfQslJURvF2T9k6GMe0QGMKZLJQRm6cqmC1u4rjq5B9/M
	HXIMqM9RoiAeGS7i3+l47lr/S1/7xclzfWMxGmcNSbs+f5/5pB0hlMzIfT3nTyqomEKW
	nAgP4L9fBdCF4TUdnvNkbvAUrb6eqrUWlXX3uXKgsRjBjzfG2jQGoLGhxe/OSmncFcb0
	AC2w==
X-Gm-Message-State: ALoCoQkvCz7oSgdml9ySmrBOBLba4OaMwPFHLoGvkq/85E3cxmygWlNqqeTn3c4fkif/BLW4QMIjd6+qTTOwtqW75BzpCEVtegNE3gskm5U1W/n3eHIWAGziSKBp9Gp374pOIAAgYH1N6k/yywotGN+wwxgfb9iSnZaX474xSvB7DWyaWd5EsX0=
X-Received: by 10.14.209.129 with SMTP id s1mr2178511eeo.21.1390405944678;
	Wed, 22 Jan 2014 07:52:24 -0800 (PST)
X-Received: by 10.14.209.129 with SMTP id s1mr2178432eeo.21.1390405943784;
	Wed, 22 Jan 2014 07:52:23 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.22
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:23 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:03 +0200
Message-Id: <1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The omap IOMMU module handles guest access to the external
OMAP MMUs connected to the L3 bus.
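The routing behind the new MMIO handler can be sketched as a plain range
check: a trapped guest access is matched against each MMU's register
window. The window bases below mirror the patch's IPU/DSP entries; the
helper itself is illustrative, not the patch's `mmu_lookup`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One register window per external MMU, as in the patch's mmu_list */
struct mmu_win {
    const char *name;
    uint64_t start;
    uint32_t size;
};

static const struct mmu_win wins[] = {
    { "IPU_L2_MMU", 0x55082000, 0x1000 },
    { "DSP_L2_MMU", 0x4a066000, 0x1000 },
};

/* return the MMU whose window contains addr, or NULL if none does */
static const char *win_lookup(uint64_t addr)
{
    size_t i;
    for (i = 0; i < sizeof(wins) / sizeof(wins[0]); i++) {
        if (addr >= wins[i].start && addr < wins[i].start + wins[i].size)
            return wins[i].name;   /* this MMU emulates the access */
    }
    return NULL;                   /* not an emulated MMU register */
}
```

In the patch the same check backs both `mmu_mmio_check` (via the
`mmu_for_each` macro) and the per-access `mmu_lookup` in the read/write
handlers.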

Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 418 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 003ac84..cb0b385 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -14,6 +14,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
+obj-y += omap_iommu.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a6db00b..3281b67 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
 {
     &vgic_distr_mmio_handler,
     &vuart_mmio_handler,
+	&mmu_mmio_handler,
 };
 #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
 
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 8d252c0..acb5dff 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -42,6 +42,7 @@ struct mmio_handler {
 
 extern const struct mmio_handler vgic_distr_mmio_handler;
 extern const struct mmio_handler vuart_mmio_handler;
+extern const struct mmio_handler mmu_mmio_handler;
 
 extern int handle_mmio(mmio_info_t *info);
 
diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
new file mode 100644
index 0000000..4dab30f
--- /dev/null
+++ b/xen/arch/arm/omap_iommu.c
@@ -0,0 +1,415 @@
+/*
+ * xen/arch/arm/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2013 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+
+#include "io.h"
+
+/* register where address of page table is stored */
+#define MMU_TTB			0x4c
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* 1st level translation */
+#define IOPGD_SHIFT		20
+#define IOPGD_SIZE		(1UL << IOPGD_SHIFT)
+#define IOPGD_MASK		(~(IOPGD_SIZE - 1))
+
+/* "supersection" - 16 Mb */
+#define IOSUPER_SHIFT		24
+#define IOSUPER_SIZE		(1UL << IOSUPER_SHIFT)
+#define IOSUPER_MASK		(~(IOSUPER_SIZE - 1))
+
+/* "section"  - 1 Mb */
+#define IOSECTION_SHIFT		20
+#define IOSECTION_SIZE		(1UL << IOSECTION_SHIFT)
+#define IOSECTION_MASK		(~(IOSECTION_SIZE - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define PTRS_PER_IOPGD		(1UL << (32 - IOPGD_SHIFT))
+#define IOPGD_TABLE_SIZE	(PTRS_PER_IOPGD * sizeof(u32))
+
+/* 2nd level translation */
+
+/* "small page" - 4Kb */
+#define IOPTE_SMALL_SHIFT		12
+#define IOPTE_SMALL_SIZE		(1UL << IOPTE_SMALL_SHIFT)
+#define IOPTE_SMALL_MASK		(~(IOPTE_SMALL_SIZE - 1))
+
+/* "large page" - 64 Kb */
+#define IOPTE_LARGE_SHIFT		16
+#define IOPTE_LARGE_SIZE		(1UL << IOPTE_LARGE_SHIFT)
+#define IOPTE_LARGE_MASK		(~(IOPTE_LARGE_SIZE - 1))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
+#define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
+
+/*
+ * some descriptor attributes.
+ */
+#define IOPGD_TABLE		(1 << 0)
+#define IOPGD_SECTION	(2 << 0)
+#define IOPGD_SUPER		(1 << 18 | 2 << 0)
+
+#define iopgd_is_table(x)	(((x) & 3) == IOPGD_TABLE)
+#define iopgd_is_section(x)	(((x) & (1 << 18 | 3)) == IOPGD_SECTION)
+#define iopgd_is_super(x)	(((x) & (1 << 18 | 3)) == IOPGD_SUPER)
+
+#define IOPTE_SMALL		(2 << 0)
+#define IOPTE_LARGE		(1 << 0)
+
+#define iopte_is_small(x)	(((x) & 2) == IOPTE_SMALL)
+#define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
+#define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
+
+struct mmu_info {
+	const char			*name;
+	paddr_t				mem_start;
+	u32					mem_size;
+	u32					*pagetable;
+	void __iomem		*mem_map;
+};
+
+static struct mmu_info omap_ipu_mmu = {
+	.name		= "IPU_L2_MMU",
+	.mem_start	= 0x55082000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info omap_dsp_mmu = {
+	.name		= "DSP_L2_MMU",
+	.mem_start	= 0x4a066000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info *mmu_list[] = {
+	&omap_ipu_mmu,
+	&omap_dsp_mmu,
+};
+
+#define mmu_for_each(pfunc, data)						\
+({														\
+	u32 __i;											\
+	int __res = 0;										\
+														\
+	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
+		__res |= pfunc(mmu_list[__i], data);			\
+	}													\
+	__res;												\
+})
+
+static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
+		return 1;
+
+	return 0;
+}
+
+static inline struct mmu_info *mmu_lookup(u32 addr)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
+		if (mmu_check_mem_range(mmu_list[i], addr))
+			return mmu_list[i];
+	}
+
+	return NULL;
+}
+
+static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
+{
+	return (reg & mask) | (va & (~mask));
+}
+
+static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
+{
+	return (reg & ~mask) | pa;
+}
+
+static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
+{
+	return mmu_for_each(mmu_check_mem_range, addr);
+}
+
+static int mmu_copy_pagetable(struct mmu_info *mmu)
+{
+	void __iomem *pagetable = NULL;
+	u32 pgaddr;
+
+	ASSERT(mmu);
+
+	/* read address where kernel MMU pagetable is stored */
+	pgaddr = readl(mmu->mem_map + MMU_TTB);
+	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
+	if (!pagetable) {
+		printk("%s: %s failed to map pagetable\n",
+			   __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	/*
+	 * the pagetable may have changed since we last accessed
+	 * it, therefore we must copy it on each access
+	 */
+	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
+
+	iounmap(pagetable);
+
+	return 0;
+}
+
+#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)									\
+{																								\
+	const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||				\
+							(mask == IOPTE_LARGE_MASK)) ? "table"								\
+							: iopgd_is_super(iopgd) ? "supersection"							\
+							: iopgd_is_section(iopgd) ? "section"								\
+							: "Unknown section";												\
+	printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n",\
+		   sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);								\
+}
+
+static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
+{
+	u32 vaddr, paddr;
+	paddr_t maddr;
+
+	paddr = mmu_virt_to_phys(iopgd, da, mask);
+	maddr = p2m_lookup(dom, paddr);
+	vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
+
+	return vaddr;
+}
+
+/*
+ * on boot the pagetable is empty
+ */
+static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
+{
+	u32 i;
+	int res;
+	bool table_updated = false;
+
+	ASSERT(dom);
+	ASSERT(mmu);
+
+	/* copy the pagetable from the domain to Xen */
+	res = mmu_copy_pagetable(mmu);
+	if (res) {
+		printk("%s: %s failed to map pagetable memory\n",
+			   __func__, mmu->name);
+		return res;
+	}
+
+	/* 1-st level translation */
+	for (i = 0; i < PTRS_PER_IOPGD; i++) {
+		u32 da;
+		u32 iopgd = mmu->pagetable[i];
+
+		if (!iopgd)
+			continue;
+
+		table_updated = true;
+
+		/* "supersection" 16 Mb */
+		if (iopgd_is_super(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+
+		/* "section" 1Mb */
+		} else if (iopgd_is_section(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+
+		/* "table" */
+		} else if (iopgd_is_table(iopgd)) {
+			u32 j, mask;
+			u32 iopte = iopte_offset(iopgd);
+
+			/* 2-nd level translation */
+			for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
+
+				/* "small table" 4Kb */
+				if (iopte_is_small(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
+					mask = IOPTE_SMALL_MASK;
+
+				/* "large table" 64Kb */
+				} else if (iopte_is_large(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
+					mask = IOPTE_LARGE_MASK;
+
+				/* error */
+				} else {
+					printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
+					return -EINVAL;
+				}
+
+				/* translate 2-nd level entry */
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
+			}
+
+			continue;
+
+		/* error */
+		} else {
+			printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
+			return -EINVAL;
+		}
+	}
+
+	/* force omap IOMMU to use new pagetable */
+	if (table_updated) {
+		paddr_t maddr;
+		flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
+		maddr = __pa(mmu->pagetable);
+		writel(maddr, mmu->mem_map + MMU_TTB);
+		printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
+	}
+
+	return 0;
+}
+
+static int mmu_trap_write_access(struct domain *dom,
+								 struct mmu_info *mmu, mmio_info_t *info)
+{
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res = 0;
+
+	switch (info->gpa - mmu->mem_start) {
+		case MMU_TTB:
+			printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
+				   mmu->name, _p(info->gpa), *r);
+			res = mmu_translate_pagetable(dom, mmu);
+			break;
+		default:
+			break;
+	}
+
+	return res;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	*r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+	struct domain *dom = v->domain;
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res;
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	/*
+	 * make sure the user register value is written to hardware first;
+	 * the calls that follow may expect valid data in the register
+	 */
+	writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	res = mmu_trap_write_access(dom, mmu, info);
+	if (res)
+		return res;
+
+	return 1;
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+	ASSERT(mmu);
+	ASSERT(!mmu->mem_map);
+	ASSERT(!mmu->pagetable);
+
+	mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
+	if (!mmu->mem_map) {
+		printk("%s: %s failed to map memory\n",  __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	printk("%s: %s mem_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
+
+	mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
+	if (!mmu->pagetable) {
+		printk("%s: %s failed to alloc private pagetable\n",
+			   __func__, mmu->name);
+		return -ENOMEM;
+	}
+
+	printk("%s: %s private pagetable %lu bytes\n",
+		   __func__, mmu->name, IOPGD_TABLE_SIZE);
+
+	return 0;
+}
+
+static int mmu_init_all(void)
+{
+	int res;
+
+	res = mmu_for_each(mmu_init, 0);
+	if (res) {
+		printk("%s error during init %d\n", __func__, res);
+		return res;
+	}
+
+	return 0;
+}
+
+const struct mmio_handler mmu_mmio_handler = {
+	.check_handler = mmu_mmio_check,
+	.read_handler  = mmu_mmio_read,
+	.write_handler = mmu_mmio_write,
+};
+
+__initcall(mmu_init_all);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606P-0006Zm-Ie; Wed, 22 Jan 2014 15:52:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606O-0006Zd-Rx
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:29 +0000
Received: from [85.158.143.35:4469] by server-1.bemta-4.messagelabs.com id
	D2/D1-02132-C39EFD25; Wed, 22 Jan 2014 15:52:28 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390405945!107936!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1460 invoked from network); 22 Jan 2014 15:52:27 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:27 -0000
Received: from mail-ea0-f181.google.com ([209.85.215.181]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOEcA1wVAwyhKzzRsGvTbhfVWzXFP@postini.com;
	Wed, 22 Jan 2014 07:52:26 PST
Received: by mail-ea0-f181.google.com with SMTP id m10so4706166eaj.26
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google; h=from:to:subject:date:message-id;
	bh=+I8rBYEFGj5c45/7iXy8tnVdaDI3thQxT5f8HJjeh+c=;
	b=Irm0JsXt35F0ntSeKehoTjXggryOAzdW0MBfvnGDqKQUWwbwR0vhDS0UbvnzULOil+
	oH0M56lfdHhn9P6SRZdGKYmheCMwxyHK16mrHyDhi7d41QElv6dMgLOV1YHhEoTpH/7J
	J7IkYbzTBAYrMLN/oSJSvgseTyks/9GPsLKqE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id;
	bh=+I8rBYEFGj5c45/7iXy8tnVdaDI3thQxT5f8HJjeh+c=;
	b=Toadf4x/tnsp+FbfJ18jav96lWy3q/rzfmNsigiEGv8oysJGG2B4YdLVXhGfirD72I
	H+hMqAT/XDcZ/Kp7oZ51mf3VDOnyX/BrTJc1vpxEgnoUnrrvmr4IVXp8xY3Th4Azv5jl
	2Ciysh9dxLrBnmmZAar0/Qaxo//dMHN6n498xHtoT+5Ym4FdoDm0bYXaBmTtva8DWcTI
	E8ntOj+sXi3PSQ4bEWJDKj8DnMnfJRxLxmBjWxQKYXIpCpX0FJ5I4sslyrbrjXI4zkzm
	3BKjfacE+bUxU/M5B06QoJYJqpvZmcBC0F7MYk1hKRvxJ8XDTrRq4HBpqD8kKNFPLYju
	gdiw==
X-Gm-Message-State: ALoCoQl4IRoz2Ay5qOLWgoqjgqkphSOFLSLbaAcp/v/QyU+cEmTZDy4VUtbm5SNN2SNdd7eoQGpJdvNMn9OPmrBTXfHhzISCtCfCn/j/clUUl/rNjwVDxJ/6ASQ+mlMJRNHqAPQcrWoWCNAvDYpJBDoC2mnyMUgk66U6V1ZEdApJCz05xcPCaxA=
X-Received: by 10.14.204.9 with SMTP id g9mr674632eeo.82.1390405943787;
	Wed, 22 Jan 2014 07:52:23 -0800 (PST)
X-Received: by 10.14.204.9 with SMTP id g9mr674527eeo.82.1390405942625;
	Wed, 22 Jan 2014 07:52:22 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.21
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:21 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:02 +0200
Message-Id: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
	platforms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

The following patch series is an RFC for a possible implementation of a simple MMU module,
designed to translate IPA to MA for peripheral processors such as the GPU / IPU
on OMAP platforms. On our current OMAP platform (OMAP5 panda) we have 3 external MMUs
which need to be handled properly.

It would be great to get community feedback: would this be useful for the Xen project?

Let me describe the algorithm briefly; it is simple and straightforward.
The following logic is used to translate addresses from IPA to MA:

1. At boot time the guest domain creates a "pagetable" for the external MMU IP.
The pagetable is a singleton data structure, stored in ordinary kernel
heap memory. All memory mappings for the corresponding MMU are stored inside it.
The format of the "pagetable" is well defined.

2. The guest domain enables the peripheral remote processor. As part of the enable
sequence the kernel allocates the chunks of heap memory needed by the remote processor
and stores pointers to the allocated chunks in the already created "pagetable". It then
writes the physical address of the pagetable to the MMU configuration register. As a
result the MMU IP knows about all allocations, and the remote processor can use them
directly in its software.

3. The Xen omap mmu driver traps accesses to the MMU configuration registers.
It reads the physical address of the "pagetable" from the MMU register and creates a copy
of it in its own memory. As a result we have two similar configuration data structures:
the first in the guest domain kernel, the second in the Xen hypervisor.

4. The Xen omap mmu driver parses its own copy of the pagetable and translates all
physical addresses to the corresponding machine addresses using the existing p2m API.
It then writes the physical address of its pagetable (with PA already translated to MA)
to the MMU IP configuration registers and returns control to the guest domain.

As a result the guest domain continues enabling the remote processor with its MMU, and
the MMU will use the new pagetable modified by the Xen omap mmu driver. The new pagetable
is used directly by the MMU IP, and its structure is hidden from the guest domain kernel,
which knows nothing about the p2m translation.

Verified with Xen 4.4-unstable, a Linux 3.8 kernel as Dom0, and a Linux (Android) 3.4 kernel as DomU.
Target platform: OMAP5 panda.

Thank you for your attention,

Regards,

Andrii Tseglytskyi (3):
  arm: omap: introduce iommu module
  arm: omap: translate iommu mapping to 4K pages
  arm: omap: cleanup iopte allocations

 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606U-0006bb-R7; Wed, 22 Jan 2014 15:52:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606S-0006aj-Ru
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:33 +0000
Received: from [85.158.137.68:16734] by server-12.bemta-3.messagelabs.com id
	DA/6D-20055-F39EFD25; Wed, 22 Jan 2014 15:52:31 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390405948!7037297!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25262 invoked from network); 22 Jan 2014 15:52:30 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:30 -0000
Received: from mail-ee0-f53.google.com ([74.125.83.53]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOzGKkMFXoc0ptIEFjjSnB4nSYR19@postini.com;
	Wed, 22 Jan 2014 07:52:29 PST
Received: by mail-ee0-f53.google.com with SMTP id t10so4954026eei.26
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=ELAhG6erl+ID9/jCE/T3prcOC3cZgoD8K2VRR2Y7v2k=;
	b=R9aymsh0QCKSEZhNbxd6J+ejG8PLgdmItho80tGMUut/M6ebkBE8sevwIS+rhC3og3
	wtHBGrYdEWw4Wj12TF9rkYa6zi3en3ZUKiC6rwz7TB4DSlZWrA0UBKfwXbgMfZHXp+7p
	H88WUIgHpDA9PDtGpIJ/XyFnjnFTz6p7tPI8w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=ELAhG6erl+ID9/jCE/T3prcOC3cZgoD8K2VRR2Y7v2k=;
	b=ByqVpDEErVOCugwTWp/ds20OoHViRHRiGg9iH6VqadVpcKlxHfxG4iOlu2Y85QfJgU
	BOWg7mhOPf4bWNJAjYRXodJgdv3fIyJuZnjKW3Lssnm4bE//RQjXB57DvnSpCjYZ8qM8
	CmJEVVu+83BKMRZLiTyifP2kp/Qr29V3yG4c1j9Ur1bOi4JjTg/2ImBy4+WnGF4fvftA
	WrB4JWwDfuVjBh4SCRIE7+C7cR+/bUVqpjOgiz4wea28hDNPtXsIqISLMicPRRftBcMe
	kE2sLCpg3tXMysugUeFvBt64Mm/50yVP3AuVUjJuB/QmcOYopXrCik7vyxNZHRN8jHgJ
	GBOw==
X-Gm-Message-State: ALoCoQlgo3IDRp2O5fuFNlHpWodsNXwfoLHDy1Vd9pviyPMJrsTpQnrcm8retBE93UReT0VYZR2qXZogjvkeCB7cqlDcP7aowvEbCSKhOSsrfZ9N5dayAMB9GgOjelMZ8/x3ZVrTjFxZWTEpnN8EKnHrBS/2SmJ8PDS89hq5awJ8mhXAgN82En4=
X-Received: by 10.15.108.197 with SMTP id cd45mr86157eeb.110.1390405946557;
	Wed, 22 Jan 2014 07:52:26 -0800 (PST)
X-Received: by 10.15.108.197 with SMTP id cd45mr86139eeb.110.1390405946280;
	Wed, 22 Jan 2014 07:52:26 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.25
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:05 +0200
Message-Id: <1390405925-1764-4-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 3/3] arm: omap: cleanup iopte allocations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Each iopte allocation requires 4 KB of memory.
All allocations from the previous MMU reconfiguration
must be freed before a new reconfiguration cycle.

Change-Id: I6db69a400cdba1170b43d9dc68d0817db77cbf9c
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 7ec03a2..a5ad3ac 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -93,12 +93,18 @@
 #define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
 #define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
 
+struct mmu_alloc_node {
+	u32					*vptr;
+	struct list_head	node;
+};
+
 struct mmu_info {
 	const char			*name;
 	paddr_t				mem_start;
 	u32					mem_size;
 	u32					*pagetable;
 	void __iomem		*mem_map;
+	struct list_head	alloc_list;
 };
 
 static struct mmu_info omap_ipu_mmu = {
@@ -222,8 +228,15 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
 {
 	u32 *iopte = NULL;
+	struct mmu_alloc_node *alloc_node;
 	u32 i;
 
+	alloc_node = xzalloc_bytes(sizeof(struct mmu_alloc_node));
+	if (!alloc_node) {
+		printk("%s Fail to alloc vptr node\n", mmu->name);
+		return 0;
+	}
+
 	iopte = xzalloc_bytes(PAGE_SIZE);
 	if (!iopte) {
 		printk("%s Fail to alloc 2nd level table\n", mmu->name);
@@ -238,10 +251,27 @@ static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd,
 		iopte[i] = vaddr | IOPTE_SMALL;
 	}
 
+	/* store pointer for following cleanup */
+	alloc_node->vptr = iopte;
+	list_add(&alloc_node->node, &mmu->alloc_list);
+
 	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
 	return __pa(iopte) | IOPGD_TABLE;
 }
 
+static void mmu_cleanup_pagetable(struct mmu_info *mmu)
+{
+	struct mmu_alloc_node *mmu_alloc, *tmp;
+
+	ASSERT(mmu);
+
+	list_for_each_entry_safe(mmu_alloc, tmp, &mmu->alloc_list, node) {
+		xfree(mmu_alloc->vptr);
+		list_del(&mmu_alloc->node);
+		xfree(mmu_alloc);
+	}
+}
+
 /*
  * on boot table is empty
  */
@@ -254,6 +284,9 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 	ASSERT(dom);
 	ASSERT(mmu);
 
+	/* free all previous allocations */
+	mmu_cleanup_pagetable(mmu);
+
 	/* copy pagetable from  domain to xen */
 	res = mmu_copy_pagetable(mmu);
 	if (res) {
@@ -432,6 +465,8 @@ static int mmu_init(struct mmu_info *mmu, u32 data)
 	printk("%s: %s private pagetable %lu bytes\n",
 		   __func__, mmu->name, IOPGD_TABLE_SIZE);
 
+	INIT_LIST_HEAD(&mmu->alloc_list);
+
 	return 0;
 }
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W606Q-0006aF-Vu; Wed, 22 Jan 2014 15:52:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W606Q-0006Zs-4r
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:30 +0000
Received: from [85.158.143.35:35860] by server-1.bemta-4.messagelabs.com id
	8B/D1-02132-D39EFD25; Wed, 22 Jan 2014 15:52:29 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390405946!108344!1
X-Originating-IP: [64.18.0.189]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16609 invoked from network); 22 Jan 2014 15:52:28 -0000
Received: from exprod5og119.obsmtp.com (HELO exprod5og119.obsmtp.com)
	(64.18.0.189)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 15:52:28 -0000
Received: from mail-ee0-f42.google.com ([74.125.83.42]) (using TLSv1) by
	exprod5ob119.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUt/pOjNrhxwAdK5QjLzR9YleB4nN8e0h@postini.com;
	Wed, 22 Jan 2014 07:52:28 PST
Received: by mail-ee0-f42.google.com with SMTP id e49so4981791eek.15
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 07:52:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=CU1i8IZ6+gUV/G0p4TjU8TS1CnLB39dGqtDyuYbBTRA=;
	b=dHMDsRQz8uo6JDTn653eMRT6yiJIFiI4PEq+rsUAdbvOqJe7z0Y0J3pcAN5A1Z2Rh8
	2dc9pB/XgiK1BT5ex1vTRDUHLFwOYRlxhM0Dn6O5zkQ4tHsAWQTvppqWxJLM7TSeqUXj
	FSmNtaGyP/fqBofUeansKttYo9h5ha33/x8Ug=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=CU1i8IZ6+gUV/G0p4TjU8TS1CnLB39dGqtDyuYbBTRA=;
	b=E3Ct9GofyVyF5XXHBm1NuV64U6YIH3z4wfKTyo3AqI3576BcXPHIJYaXnJxw4sC640
	cwYldHSntyxCm6tamXUidzDqUvu3cc8npnq3L3JAtEhKO8DNX8a0YqGMly5P5PzGnDzu
	siP0FDkThrDpCCEPKM/XXppiqCzjBOk9+vtSFXkfDiyrfDm7EyQVtUrf6xDSMXbrKzJd
	M5torfRvbtf1ZY1cQfm4BEuDqZrt+CDmVjFsCggZXgADKOFCP0p8Vic/HmmCvd4LugJY
	wv6FB5EqzdkKgZhgQN4XUGiQ7KHt3PD3nmEq9izcF9zF9egp0DEoN8rMsVyLNGKk7whM
	50wg==
X-Gm-Message-State: ALoCoQljJUQFbkIwyEiZABnbrk6l/66jx3pI7VJXWf+3RVG7a/sx8nlpvENgwh302wtyWBRQyWJC4booz9FTJ2cn5RcTptnxBmsBgIQhL8SXzxGV1duta6t5Z3VhE48xgAGQGLICWbVs23WH8C5A8RWjb1Q185hLz6IMmShvNFjqz5BS6QtZztg=
X-Received: by 10.14.207.194 with SMTP id n42mr784348eeo.76.1390405945173;
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
X-Received: by 10.14.207.194 with SMTP id n42mr784347eeo.76.1390405945113;
	Wed, 22 Jan 2014 07:52:25 -0800 (PST)
Received: from uglx0174653.kyiv.inobject.com ([195.238.92.125])
	by mx.google.com with ESMTPSA id w4sm28557736eef.20.2014.01.22.07.52.23
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 07:52:24 -0800 (PST)
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 17:52:04 +0200
Message-Id: <1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping to 4K
	pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Patch introduces the following algorithm:
- enumerate all first-level translation entries
- for each section, create 256 pages of 4096 bytes each
- for each supersection, create 4096 pages of 4096 bytes each
- flush the cache to synchronize the Cortex-A15 and the IOMMU

This algorithm makes it possible to use 4K mappings only.

Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 4dab30f..7ec03a2 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -72,6 +72,9 @@
 #define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
 #define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
 
+/* 16 sections in supersection */
+#define IOSECTION_PER_IOSUPER	(1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
+
 /*
  * some descriptor attributes.
  */
@@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
 	&omap_dsp_mmu,
 };
 
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
 #define mmu_for_each(pfunc, data)						\
 ({														\
 	u32 __i;											\
@@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 	return vaddr;
 }
 
+static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
+{
+	u32 *iopte = NULL;
+	u32 i;
+
+	iopte = xzalloc_bytes(PAGE_SIZE);
+	if (!iopte) {
+		printk("%s Fail to alloc 2nd level table\n", mmu->name);
+		return 0;
+	}
+
+	for (i = 0; i < PTRS_PER_IOPTE; i++) {
+		u32 da, vaddr, iopgd_tmp;
+		da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
+		iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
+		vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
+		iopte[i] = vaddr | IOPTE_SMALL;
+	}
+
+	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
+	return __pa(iopte) | IOPGD_TABLE;
+}
+
 /*
  * on boot table is empty
  */
@@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 
 		/* "supersection" 16 Mb */
 		if (iopgd_is_super(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			if(likely(translate_supersections_to_pages)) {
+				u32 j, iopgd_tmp;
+				for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
+					iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
+					mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
+				}
+				i += (j - 1);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			}
 
 		/* "section" 1Mb */
 		} else if (iopgd_is_section(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			if (likely(translate_sections_to_pages)) {
+				mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			}
 
 		/* "table" */
 		} else if (iopgd_is_table(iopgd)) {
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 15:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 15:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6066-0006YR-1B; Wed, 22 Jan 2014 15:52:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6064-0006YM-H8
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 15:52:08 +0000
Received: from [193.109.254.147:3569] by server-9.bemta-14.messagelabs.com id
	5C/7C-13957-729EFD25; Wed, 22 Jan 2014 15:52:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390405925!12534390!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2126 invoked from network); 22 Jan 2014 15:52:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 15:52:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="95359003"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 15:52:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 10:52:03 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W605z-0000Ay-8N;
	Wed, 22 Jan 2014 15:52:03 +0000
Date: Wed, 22 Jan 2014 15:52:02 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140122155202.GB24675@zion.uk.xensource.com>
References: <1386882184-15324-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1386882184-15324-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC V3 0/7] OSSTest: OVMF test job
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian, ping.

On Thu, Dec 12, 2013 at 09:02:57PM +0000, Wei Liu wrote:
> RFC v3 of this series
> 
> This series implements a basic test job for OVMF guest. The test case will
> install an OVMF guest and try to boot it.
> 
> I've tried my best to factor out common code. :-)
> 
> Now the preseed data in the test case only contains essential items -
> partitioning recipe, late_command and two other items.
> 
> As for the file manipulation code, it has a small portion (the first 6 lines, as
> IanJ pointed out) that's copied from ts-redhat-install, but I don't see a
> sensible way to factor out those 6 lines of commands.
> 
> I basically didn't touch that last two patches as IanJ will take care of them
> when he takes this series.
> 
> Wei.
> 
> Changes in v3:
> * consolidate more config items into preseed_base
> * ts-ovmf-debian-install -> ts-debian-hvm-install
> * factor out functions to create ISOs.
> * $xl -> $toolstack in test case script
> * add $flight $job and $gn to all file paths
> 
> Changes in v2:
> * factor out preseed_base
> * make installation CD work with seabios
> 
> 
> Wei Liu (7):
>   make-flight: disable OVMF build for 4.3
>   TestSupport.pm: add bios option to guest config file
>   TestSupport.pm: functions for creating isos
>   Debian.pm: factor out preseed_base
>   Introduce ts-debian-hvm-install
>   make-flight: OVMF test flight
>   sg-run-job: OVMF job
> 
>  Osstest/Debian.pm      |  143 +++++++++++++++++++---------------
>  Osstest/TestSupport.pm |   33 ++++++++
>  make-flight            |    7 ++
>  sg-run-job             |    6 ++
>  ts-debian-hvm-install  |  202 ++++++++++++++++++++++++++++++++++++++++++++++++
>  ts-redhat-install      |   13 +---
>  6 files changed, 332 insertions(+), 72 deletions(-)
>  create mode 100755 ts-debian-hvm-install
> 
> -- 
> 1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:10:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60NA-0008F9-LY; Wed, 22 Jan 2014 16:09:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W60N9-0008EF-9G
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:09:47 +0000
Received: from [85.158.137.68:17322] by server-5.bemta-3.messagelabs.com id
	EA/88-25188-A4DEFD25; Wed, 22 Jan 2014 16:09:46 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390406983!10659863!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29248 invoked from network); 22 Jan 2014 16:09:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:09:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93323117"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:09:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:09:43 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W60N3-0000Rb-4Q;
	Wed, 22 Jan 2014 16:09:41 +0000
Date: Wed, 22 Jan 2014 16:09:40 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140122160940.GC24675@zion.uk.xensource.com>
References: <CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>
	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>
	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
	<52DF9B76.8060807@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52DF9B76.8060807@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
> Il 21/01/2014 19:27, Wei Liu ha scritto:
> >>>
> >>> Googling "disable tcg" would have provided an answer, but the patches
> >>> were old enough to be basically useless.  I'll refresh the current
> >>> version in the next few days.  Currently I am (or try to be) on
> >>> vacation, so I cannot really say when, but I'll do my best. :)
> >>>
> >Hi Paolo, any update?
> 
> Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
> branch on my github repository.
> 

Unfortunately your branch didn't build with TCG support enabled. If I
pass "--disable-tcg" to configure then it works fine.

A simple "make distclean; ./configure --target-list=i386-softmmu; make
-j16" gives the following errors while linking.

  LINK  i386-softmmu/qemu-system-i386
gdbstub.o: In function `gdb_vm_state_change':
/local/scratch/qemu/gdbstub.c:1227: undefined reference to `tb_flush'
../vl.o: In function `tcg_init':
/local/scratch/qemu/vl.c:2614: undefined reference to `tcg_exec_init'
hw/i386/kvmvapic.o: In function `patch_instruction':
/local/scratch/qemu/hw/i386/kvmvapic.c:409: undefined reference to `cpu_restore_state'
/local/scratch/qemu/hw/i386/kvmvapic.c:451: undefined reference to `tb_gen_code'
/local/scratch/qemu/hw/i386/kvmvapic.c:452: undefined reference to `cpu_resume_from_signal'
target-i386/cpu.o: In function `x86_cpu_common_class_init':
/local/scratch/qemu/target-i386/cpu.c:2750: undefined reference to `x86_cpu_do_interrupt'
target-i386/cpu.o: In function `x86_cpu_reset':
/local/scratch/qemu/target-i386/cpu.c:2385: undefined reference to `tlb_flush'
target-i386/cpu.o: In function `x86_cpu_initfn':
/local/scratch/qemu/target-i386/cpu.c:2693: undefined reference to `optimize_flags_init'
/local/scratch/qemu/target-i386/cpu.c:2695: undefined reference to `breakpoint_handler'
/local/scratch/qemu/target-i386/cpu.c:2695: undefined reference to `cpu_set_debug_excp_handler'
cpus.o: In function `cpu_signal':
/local/scratch/qemu/cpus.c:569: undefined reference to `exit_request'
cpus.o: In function `qemu_tcg_cpu_thread_fn':
/local/scratch/qemu/cpus.c:946: undefined reference to `exit_request'
cpus.o: In function `tcg_cpu_exec':
/local/scratch/qemu/cpus.c:1257: undefined reference to `cpu_x86_exec'
exec.o: In function `cpu_common_post_load':
/local/scratch/qemu/exec.c:405: undefined reference to `tlb_flush'
exec.o: In function `tcg_commit':
/local/scratch/qemu/exec.c:1796: undefined reference to `tlb_flush'
exec.o: In function `invalidate_and_set_dirty':
/local/scratch/qemu/exec.c:1920: undefined reference to `tb_invalidate_phys_page_range'
exec.o: In function `check_watchpoint':
/local/scratch/qemu/exec.c:1556: undefined reference to `tb_check_watchpoint'
/local/scratch/qemu/exec.c:1559: undefined reference to `cpu_loop_exit'
/local/scratch/qemu/exec.c:1562: undefined reference to `tb_gen_code'
/local/scratch/qemu/exec.c:1563: undefined reference to `cpu_resume_from_signal'
exec.o: In function `cpu_watchpoint_insert':
/local/scratch/qemu/exec.c:532: undefined reference to `tlb_flush_page'
exec.o: In function `cpu_watchpoint_remove_by_ref':
/local/scratch/qemu/exec.c:561: undefined reference to `tlb_flush_page'
exec.o: In function `notdirty_mem_write':
/local/scratch/qemu/exec.c:1491: undefined reference to `tb_invalidate_phys_page_fast'
exec.o: In function `stl_phys_notdirty':
/local/scratch/qemu/exec.c:2535: undefined reference to `tb_invalidate_phys_page_range'
exec.o: In function `breakpoint_invalidate':
/local/scratch/qemu/exec.c:488: undefined reference to `tb_invalidate_phys_addr'
exec.o: In function `cpu_single_step':
/local/scratch/qemu/exec.c:664: undefined reference to `tb_flush'
exec.o: In function `tlb_reset_dirty_range_all':
/local/scratch/qemu/exec.c:736: undefined reference to `cpu_tlb_reset_dirty_all'
exec.o: In function `notdirty_mem_write':
/local/scratch/qemu/exec.c:1515: undefined reference to `tlb_set_dirty'
monitor.o: In function `do_info_jit':
/local/scratch/qemu/monitor.c:1120: undefined reference to `dump_exec_info'
target-i386/helper.o: In function `x86_cpu_set_a20':
/local/scratch/qemu/target-i386/helper.c:397: undefined reference to `tlb_flush'
target-i386/helper.o: In function `cpu_x86_update_cr0':
/local/scratch/qemu/target-i386/helper.c:411: undefined reference to `tlb_flush'
target-i386/helper.o: In function `cpu_x86_update_cr4':
/local/scratch/qemu/target-i386/helper.c:464: undefined reference to `tlb_flush'
target-i386/helper.o: In function `cpu_x86_handle_mmu_fault':
/local/scratch/qemu/target-i386/helper.c:862: undefined reference to `tlb_set_page'
target-i386/helper.o: In function `cpu_report_tpr_access':
/local/scratch/qemu/target-i386/helper.c:1224: undefined reference to `cpu_restore_state'
target-i386/helper.o: In function `cpu_x86_update_cr3':
/local/scratch/qemu/target-i386/helper.c:452: undefined reference to `tlb_flush'
target-i386/machine.o: In function `cpu_post_load':
/local/scratch/qemu/target-i386/machine.c:330: undefined reference to `tlb_flush'
collect2: error: ld returned 1 exit status
make[1]: *** [qemu-system-i386] Error 1
make: *** [subdir-i386-softmmu] Error 2


Wei.


> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:35:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60mA-0001G6-Ey; Wed, 22 Jan 2014 16:35:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W60m8-0001G1-VR
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:35:37 +0000
Received: from [85.158.137.68:18669] by server-4.bemta-3.messagelabs.com id
	21/96-10414-853FFD25; Wed, 22 Jan 2014 16:35:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390408533!7061452!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1268 invoked from network); 22 Jan 2014 16:35:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:35:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93338131"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:35:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:35:32 -0500
Message-ID: <1390408531.32519.78.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>, Stefano Stabellini
	<stefano.stabellini@citrix.com>
Date: Wed, 22 Jan 2014 16:35:31 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien,

I wonder if the following is any better than the current stuff in
staging for the issue you are seeing with BSD at start of day? Can you
try it, please?

It has survived >1000 bootloops on Midway and >50 on Mustang, both are
still going.

It basically does a cache clean on all RAM mapped in the p2m. Anything
in the cache is either the result of an earlier scrub of the page or
something toolstack just wrote, so there is no need to be concerned
about clean vs. invalidate -- clean is always correct.

This should ensure that the guest has no dirty pages when it starts.
This nobbles the HCR_DC-based stuff too, since it is no longer
necessary. This avoids concerns about guests which enable the MMU
before caches.

It contains debug BUG()s in various trap locations to catch any
incoherence the guest experiences, has lots of other debugging left in,
etc. (and hacks w.r.t. prototypes not in headers).

Ian.

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..60c3091 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_mfn = 0;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..43dae5c 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,6 +453,7 @@ int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..55c86f0 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1364,7 +1364,10 @@ static void domain_create_cb(libxl__egc *egc,
     STATE_AO_GC(cdcs->dcs.ao);
 
     if (!rc)
+    {
         *cdcs->domid_out = domid;
+        xc_domain_cacheflush(CTX->xch, domid);
+    }
 
     libxl__ao_complete(egc, ao, rc);
 }
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..2edd09d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -475,7 +475,8 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
+    //v->arch.default_cache = true;
+    v->arch.default_cache = false;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..9e3b37d 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,12 +11,24 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <xen/guest_access.h>
+
+extern long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn);
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
+        if ( __copy_to_guest(u_domctl, domctl, 1) )
+            rc = -EFAULT;
+
+        return rc;
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..f35ed57 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -228,15 +228,26 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
+static void do_one_cacheflush(paddr_t mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 static int create_p2m_entries(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     xen_pfn_t *last_mfn)
 {
     int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -381,18 +392,42 @@ static int create_p2m_entries(struct domain *d,
                     count++;
                 }
                 break;
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                    {
+                        count++;
+                        break;
+                    }
+
+                    count += 0x10;
+
+                    do_one_cacheflush(pte.p2m.base);
+                }
+                break;
         }
 
+        if ( last_mfn )
+            *last_mfn = addr >> PAGE_SHIFT;
+
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
-        if ( op == RELINQUISH && count >= 0x2000 )
+        switch ( op )
         {
-            if ( hypercall_preempt_check() )
+        case RELINQUISH:
+        case CACHEFLUSH:
+            if (count >= 0x2000 && hypercall_preempt_check() )
             {
                 p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
             count = 0;
+            break;
+        case INSERT:
+        case ALLOCATE:
+        case REMOVE:
+            /* No preemption */
+            break;
         }
 
         /* Got the next page */
@@ -439,7 +474,7 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+                              0, MATTR_MEM, p2m_ram_rw, NULL);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -448,7 +483,7 @@ int map_mmio_regions(struct domain *d,
                      paddr_t maddr)
 {
     return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+                              maddr, MATTR_DEV, p2m_mmio_direct, NULL);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -460,7 +495,7 @@ int guest_physmap_add_entry(struct domain *d,
     return create_p2m_entries(d, INSERT,
                               pfn_to_paddr(gpfn),
                               pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+                              pfn_to_paddr(mfn), MATTR_MEM, t, NULL);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -470,7 +505,7 @@ void guest_physmap_remove_page(struct domain *d,
     create_p2m_entries(d, REMOVE,
                        pfn_to_paddr(gpfn),
                        pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid, NULL);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -622,7 +657,28 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid, NULL);
+}
+
+long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    printk("dom%d p2m cache flush from mfn %"PRI_xen_pfn" RELIN %lx\n",
+           d->domain_id, *start_mfn, p2m->next_gfn_to_relinquish);
+
+    *start_mfn = MAX(*start_mfn, p2m->next_gfn_to_relinquish);
+
+    printk("dom%d p2m cache flush: %"PRIpaddr"-%"PRIpaddr"\n",
+           d->domain_id,
+           pfn_to_paddr(*start_mfn),
+           pfn_to_paddr(p2m->max_mapped_gfn));
+
+    return create_p2m_entries(d, CACHEFLUSH,
+                              pfn_to_paddr(*start_mfn),
+                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(INVALID_MFN),
+                              MATTR_MEM, p2m_invalid, start_mfn);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 48a6fcc..546f7ce 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1283,6 +1283,7 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
 
 static void update_sctlr(struct vcpu *v, uint32_t val)
 {
+    BUG();
     /*
      * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
      * because they are incompatible.
@@ -1628,6 +1629,7 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
     register_t addr = READ_SYSREG(FAR_EL2);
+    BUG();
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
@@ -1683,6 +1685,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
 bad_data_abort:
+    show_execution_state(regs);
+    panic("DABT");
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d7b22c3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,13 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_cacheflush {
+    /* Updated for progress */
+    xen_pfn_t start_mfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +961,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1020,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:35:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:35:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60mA-0001G6-Ey; Wed, 22 Jan 2014 16:35:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W60m8-0001G1-VR
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:35:37 +0000
Received: from [85.158.137.68:18669] by server-4.bemta-3.messagelabs.com id
	21/96-10414-853FFD25; Wed, 22 Jan 2014 16:35:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390408533!7061452!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1268 invoked from network); 22 Jan 2014 16:35:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:35:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93338131"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:35:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:35:32 -0500
Message-ID: <1390408531.32519.78.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>, Stefano Stabellini
	<stefano.stabellini@citrix.com>
Date: Wed, 22 Jan 2014 16:35:31 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien,

I wonder if the following is any better than the current stuff in
staging for the issue you are seeing with BSD at start of day. Can you
try it, please?

It has survived >1000 bootloops on Midway and >50 on Mustang; both are
still going.

It basically does a cache clean on all RAM mapped in the p2m. Anything
in the cache is either the result of an earlier scrub of the page or
something the toolstack just wrote, so there is no need to be concerned
about clean vs. invalidate -- clean is always correct.

This should ensure that the guest has no dirty pages when it starts.
This nobbles the HCR_DC-based stuff too, since it is no longer
necessary, which avoids concerns about guests which enable the MMU
before caches.

It contains debug BUG()s in various trap locations to catch the guest
experiencing any incoherence, and it has lots of other debugging left
in, etc. (and hacks w.r.t. prototypes not in headers).

Ian.
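
For illustration, the continuation contract implied by the updated
start_mfn progress field could be driven from the toolstack side roughly
as below. This is a minimal self-contained sketch only: EAGAIN_RC,
BATCH, do_cacheflush() and cacheflush_all() are hypothetical stand-ins
for the real hypercall path, not Xen or libxc APIs (the posted
xc_domain_cacheflush() hack does not loop yet).

```c
#include <assert.h>

/* Stand-ins for the real interface (hypothetical, for illustration). */
#define EAGAIN_RC (-11)     /* models -EAGAIN from a preempted domctl */
#define BATCH     0x2000    /* pages processed before voluntary preemption,
                               matching the 0x2000 count in the patch */

typedef unsigned long xen_pfn_t;

/* Fake "hypervisor" side: flush [*start_mfn, max_gfn), at most BATCH
 * pages per invocation, updating *start_mfn for progress much as
 * p2m_cache_flush() does via create_p2m_entries() in the patch. */
static int do_cacheflush(xen_pfn_t *start_mfn, xen_pfn_t max_gfn)
{
    xen_pfn_t end = *start_mfn + BATCH;

    if (end < max_gfn) {
        *start_mfn = end;   /* preempted: record how far we got */
        return EAGAIN_RC;
    }
    *start_mfn = max_gfn;
    return 0;
}

/* Fake "toolstack" side: reissue the operation until it completes,
 * resuming each time from the returned start_mfn. */
static int cacheflush_all(xen_pfn_t max_gfn, int *calls)
{
    xen_pfn_t mfn = 0;
    int rc;

    *calls = 0;
    do {
        rc = do_cacheflush(&mfn, max_gfn);
        (*calls)++;
    } while (rc == EAGAIN_RC);

    return rc;
}
```

With BATCH = 0x2000, flushing a 0x8000-page p2m would take four calls,
each picking up where the previous one left off.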

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..60c3091 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_mfn = 0;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..43dae5c 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,6 +453,7 @@ int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..55c86f0 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1364,7 +1364,10 @@ static void domain_create_cb(libxl__egc *egc,
     STATE_AO_GC(cdcs->dcs.ao);
 
     if (!rc)
+    {
         *cdcs->domid_out = domid;
+        xc_domain_cacheflush(CTX->xch, domid);
+    }
 
     libxl__ao_complete(egc, ao, rc);
 }
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..2edd09d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -475,7 +475,8 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
+    //v->arch.default_cache = true;
+    v->arch.default_cache = false;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..9e3b37d 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,12 +11,24 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <xen/guest_access.h>
+
+extern long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn);
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
+        if ( __copy_to_guest(u_domctl, domctl, 1) )
+            rc = -EFAULT;
+
+        return rc;
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..f35ed57 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -228,15 +228,26 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
+static void do_one_cacheflush(paddr_t mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 static int create_p2m_entries(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     xen_pfn_t *last_mfn)
 {
     int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -381,18 +392,42 @@ static int create_p2m_entries(struct domain *d,
                     count++;
                 }
                 break;
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                    {
+                        count++;
+                        break;
+                    }
+
+                    count += 0x10;
+
+                    do_one_cacheflush(pte.p2m.base);
+                }
+                break;
         }
 
+        if ( last_mfn )
+            *last_mfn = addr >> PAGE_SHIFT;
+
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
-        if ( op == RELINQUISH && count >= 0x2000 )
+        switch ( op )
         {
-            if ( hypercall_preempt_check() )
+        case RELINQUISH:
+        case CACHEFLUSH:
+            if (count >= 0x2000 && hypercall_preempt_check() )
             {
                 p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
             count = 0;
+            break;
+        case INSERT:
+        case ALLOCATE:
+        case REMOVE:
+            /* No preemption */
+            break;
         }
 
         /* Got the next page */
@@ -439,7 +474,7 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+                              0, MATTR_MEM, p2m_ram_rw, NULL);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -448,7 +483,7 @@ int map_mmio_regions(struct domain *d,
                      paddr_t maddr)
 {
     return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+                              maddr, MATTR_DEV, p2m_mmio_direct, NULL);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -460,7 +495,7 @@ int guest_physmap_add_entry(struct domain *d,
     return create_p2m_entries(d, INSERT,
                               pfn_to_paddr(gpfn),
                               pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+                              pfn_to_paddr(mfn), MATTR_MEM, t, NULL);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -470,7 +505,7 @@ void guest_physmap_remove_page(struct domain *d,
     create_p2m_entries(d, REMOVE,
                        pfn_to_paddr(gpfn),
                        pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid, NULL);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -622,7 +657,28 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid, NULL);
+}
+
+long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    printk("dom%d p2m cache flush from mfn %"PRI_xen_pfn" RELIN %lx\n",
+           d->domain_id, *start_mfn, p2m->next_gfn_to_relinquish);
+
+    *start_mfn = MAX(*start_mfn, p2m->next_gfn_to_relinquish);
+
+    printk("dom%d p2m cache flush: %"PRIpaddr"-%"PRIpaddr"\n",
+           d->domain_id,
+           pfn_to_paddr(*start_mfn),
+           pfn_to_paddr(p2m->max_mapped_gfn));
+
+    return create_p2m_entries(d, CACHEFLUSH,
+                              pfn_to_paddr(*start_mfn),
+                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(INVALID_MFN),
+                              MATTR_MEM, p2m_invalid, start_mfn);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 48a6fcc..546f7ce 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1283,6 +1283,7 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
 
 static void update_sctlr(struct vcpu *v, uint32_t val)
 {
+    BUG();
     /*
      * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
      * because they are incompatible.
@@ -1628,6 +1629,7 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
     register_t addr = READ_SYSREG(FAR_EL2);
+    BUG();
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
@@ -1683,6 +1685,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
 bad_data_abort:
+    show_execution_state(regs);
+    panic("DABT");
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d7b22c3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,13 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_cacheflush {
+    /* Updated for progress */
+    xen_pfn_t start_mfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +961,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1020,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:39:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60pz-0001oO-94; Wed, 22 Jan 2014 16:39:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W60px-0001o6-A2
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:39:33 +0000
Received: from [85.158.137.68:15120] by server-14.bemta-3.messagelabs.com id
	70/4E-06105-444FFD25; Wed, 22 Jan 2014 16:39:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390408769!10724809!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22167 invoked from network); 22 Jan 2014 16:39:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:39:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93340025"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:39:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 11:39:29 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W60ps-0007bO-7p;
	Wed, 22 Jan 2014 16:39:28 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 16:39:28 +0000
Message-ID: <1390408768-16100-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] standalone: add blessing to flights
	table.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

sg-check-tested wants this to exist.

I haven't implemented Osstest/JobDB/Standalone::job_ensure_started, which
should transition from 'constructing' to 'running', because I'm not hitting
that path.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Osstest/JobDB/Standalone.pm | 4 ++--
 standalone-reset            | 1 +
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Osstest/JobDB/Standalone.pm b/Osstest/JobDB/Standalone.pm
index 14293ef..45ec9ca 100644
--- a/Osstest/JobDB/Standalone.pm
+++ b/Osstest/JobDB/Standalone.pm
@@ -67,8 +67,8 @@ END
     }
     $dbh_tests->do(<<END, {}, $fl, $branch, $intended);
              INSERT INTO flights
-                         (flight, branch, intended)
-                  VALUES (?, ?, ?)
+                         (flight, blessing, branch, intended)
+                  VALUES (?, 'constructing', ?, ?)
 END
     return $fl
 }
diff --git a/standalone-reset b/standalone-reset
index 8be7e86..83a6606 100755
--- a/standalone-reset
+++ b/standalone-reset
@@ -131,6 +131,7 @@ else
 	sqlite3 standalone.db <<END
 		CREATE TABLE flights (
 			flight TEXT PRIMARY KEY,
+			blessing TEXT,
 			intended TEXT,
 			branch TEXT
 			);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:39:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60py-0001oH-OD; Wed, 22 Jan 2014 16:39:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W60pw-0001o3-Kw
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:39:32 +0000
Received: from [85.158.137.68:15068] by server-11.bemta-3.messagelabs.com id
	B7/48-19379-344FFD25; Wed, 22 Jan 2014 16:39:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390408769!10724809!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22081 invoked from network); 22 Jan 2014 16:39:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:39:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93340005"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:39:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 22 Jan 2014 11:39:25 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W60po-0007bL-4l;
	Wed, 22 Jan 2014 16:39:24 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 22 Jan 2014 16:39:23 +0000
Message-ID: <1390408763-16035-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] cri-args-hostlists: Allow environment
	to control OSSTEST_CONFIG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 cri-args-hostlists | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 9f34e4a..4f7ddf2 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -17,7 +17,7 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 
-export OSSTEST_CONFIG=production-config
+export OSSTEST_CONFIG=${OSSTEST_CONFIG:-production-config}
 
 check_stop_core () {
 	if [ "x$OSSTEST_IGNORE_STOP" = xy ]; then return; fi
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Jan 22 16:41:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:41:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60rZ-0001yP-S5; Wed, 22 Jan 2014 16:41:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W60rR-0001y6-KU
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 16:41:12 +0000
Received: from [85.158.143.35:30698] by server-1.bemta-4.messagelabs.com id
	81/1F-02132-1A4FFD25; Wed, 22 Jan 2014 16:41:05 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390408862!120109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3217 invoked from network); 22 Jan 2014 16:41:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:41:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="95386892"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 16:41:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:41:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W60rM-0001De-HQ;
	Wed, 22 Jan 2014 16:41:00 +0000
Date: Wed, 22 Jan 2014 16:39:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it's needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/include/asm/xen/page.h     |   12 +++--
>  arch/x86/xen/p2m.c                  |   25 ++--------
>  drivers/block/xen-blkback/blkback.c |   15 +++---
>  drivers/xen/gntdev.c                |   13 +++--
>  drivers/xen/grant-table.c           |   90 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  6 files changed, 107 insertions(+), 56 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..68a1438 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> -extern int m2p_add_override(unsigned long mfn, struct page *page,
> -			    struct gnttab_map_grant_ref *kmap_op);
> +extern int m2p_add_override(unsigned long mfn,
> +			    struct page *page,
> +			    struct gnttab_map_grant_ref *kmap_op,
> +			    unsigned long pfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long pfn,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*

Spurious change?


>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..0060178 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>  
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)

Do we really need to add yet another parameter to m2p_add_override?
I would just let m2p_add_override and m2p_remove_override call
page_to_pfn again; it is not that expensive.


>  {
>  	unsigned long flags;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long pfn,
> +			unsigned long mfn)

Same here


>  {
>  	unsigned long flags;
> -	unsigned long mfn;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
> -
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..e652c0e 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..1f97fa0 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +			return -ENOMEM;

goto out


> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL, pfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -937,17 +951,34 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;

We are losing the error code possibly returned by m2p_add_override and
by the previous check.


> +}
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +989,32 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> +			return -EINVAL;

goto out


> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  pfn,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -977,10 +1023,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;

We are losing the error code possibly returned by m2p_remove_override
and by the two previous checks.


> +}
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:41:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:41:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W60rZ-0001yP-S5; Wed, 22 Jan 2014 16:41:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W60rR-0001y6-KU
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 16:41:12 +0000
Received: from [85.158.143.35:30698] by server-1.bemta-4.messagelabs.com id
	81/1F-02132-1A4FFD25; Wed, 22 Jan 2014 16:41:05 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390408862!120109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3217 invoked from network); 22 Jan 2014 16:41:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:41:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="95386892"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 16:41:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:41:01 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W60rM-0001De-HQ;
	Wed, 22 Jan 2014 16:41:00 +0000
Date: Wed, 22 Jan 2014 16:39:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just cause a lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> ---
>  arch/x86/include/asm/xen/page.h     |   12 +++--
>  arch/x86/xen/p2m.c                  |   25 ++--------
>  drivers/block/xen-blkback/blkback.c |   15 +++---
>  drivers/xen/gntdev.c                |   13 +++--
>  drivers/xen/grant-table.c           |   90 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  6 files changed, 107 insertions(+), 56 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..68a1438 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> -extern int m2p_add_override(unsigned long mfn, struct page *page,
> -			    struct gnttab_map_grant_ref *kmap_op);
> +extern int m2p_add_override(unsigned long mfn,
> +			    struct page *page,
> +			    struct gnttab_map_grant_ref *kmap_op,
> +			    unsigned long pfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long pfn,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*

Spurious change?


>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..0060178 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>  
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)

Do we really need to add another additional parameter to
m2p_add_override?
I would just let m2p_add_override and m2p_remove_override call
page_to_pfn again. It is not that expensive.


>  {
>  	unsigned long flags;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long pfn,
> +			unsigned long mfn)

Same here


>  {
>  	unsigned long flags;
> -	unsigned long mfn;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
> -
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..e652c0e 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..1f97fa0 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +			return -ENOMEM;

goto out


> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL, pfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -937,17 +951,34 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;

We are losing the error code possibly returned by m2p_add_override and
the previous check.


> +}
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +989,32 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> +			return -EINVAL;

This should be `goto out` rather than a direct return, so the common exit
path still runs (and leaves lazy MMU mode if it was entered).


> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  pfn,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -977,10 +1023,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  	if (lazy)
>  		arch_leave_lazy_mmu_mode();
>  
> -	return ret;
> +	return 0;

We are losing the error code possibly returned by m2p_remove_override
and the two previous checks.


> +}
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 16:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 16:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W618J-0002nf-Oz; Wed, 22 Jan 2014 16:58:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W618I-0002na-0W
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 16:58:30 +0000
Received: from [85.158.139.211:55019] by server-14.bemta-5.messagelabs.com id
	A0/BD-24200-5B8FFD25; Wed, 22 Jan 2014 16:58:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390409906!11347352!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10950 invoked from network); 22 Jan 2014 16:58:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 16:58:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,700,1384300800"; d="scan'208";a="93348136"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 16:58:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 11:58:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W617o-0001TH-5h;
	Wed, 22 Jan 2014 16:58:00 +0000
Date: Wed, 22 Jan 2014 16:56:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401221647290.21510@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
 OMAP platforms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> Hi,
> 
> The following patch series is an RFC for possible implementation of simple MMU module,
> which is designed to translate IPA to MA for peripheral processors like GPU / IPU
> for OMAP platforms. Currently on our OMAP platform (OMAP5 panda) we have 3 external MMUs
> which need to be handled properly.
> 
> It would be great to get a community feedback - will this be useful for Xen project?
> 
> Let me describe an algorithm briefly. It is simple and straightforward.
> The following simple logic is used to translate addresses from IPA to MA:
> 
> 1. During boot time guest domain creates "pagetable" for external MMU IP.
> Pagetable is a singleton data structure, which is stored in usual kernel
> heap memory. All memory mappings for corresponding MMU are stored inside it.
> Format of "pagetable" is well defined.
> 
> 2. Guest domain enables peripheral remote processor. As a part of enable sequence
> kernel allocates chunks of heap memory needed for remote processor and stores
> pointers to allocated chunks in already created "pagetable". After it writes
> a physical address of pagetable to MMU configuration register. As result MMU IP
> knows about all allocations, and remote processor can use them directly in its
> software.
> 
> 3. Xen omap mmu driver creates a trap for access to MMU configuration registers.
> It reads a physical address of "pagetable" from MMU register and creates a copy
> of it in own memory. As result - we have two similar configuration data structures -
> first - in guest domain kernel, second - in Xen hypervisor.
> 
> 4. Xen omap mmu driver parses its own copy of pagetable and translate all physical
> addresses to corresponding machine addresses using existing p2m API call.
> After it writes a physical address  of its pagetable (with already translated PA to MA)
> to MMU IP configuration registers and returns control to guest domain.
> 
> As a result - guest domain continues enabling remote processor with it MMU and MMU
> will use new pagetable, modified by Xen omap mmu driver. New pagetable will be used
> directly by MMU IP, and its new structure will be hidden for guest domain kernel,
> it won't know anything about p2m translation.

Why don't you map Dom0 1:1 instead?
If you enabled PLATFORM_QUIRK_DOM0_MAPPING_11 (now enabled by default on
all platforms), all this wouldn't be necessary, right?



> Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, Linux(Android) kernel 3.4 as DomU.
> Target platform OMAP5 panda.
> 
> Thank you for your attention,
> 
> Regards,
> 
> Andrii Tseglytskyi (3):
>   arm: omap: introduce iommu module
>   arm: omap: translate iommu mapping to 4K pages
>   arm: omap: cleanup iopte allocations
> 
>  xen/arch/arm/Makefile     |    1 +
>  xen/arch/arm/io.c         |    1 +
>  xen/arch/arm/io.h         |    1 +
>  xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 495 insertions(+)
>  create mode 100644 xen/arch/arm/omap_iommu.c
> 
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:14:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:14:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61NP-0003uO-Vg; Wed, 22 Jan 2014 17:14:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xenmail43267@gmail.com>) id 1W61NO-0003uG-Gc
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:14:06 +0000
Received: from [85.158.139.211:20567] by server-7.bemta-5.messagelabs.com id
	C0/D1-04824-D5CFFD25; Wed, 22 Jan 2014 17:14:05 +0000
X-Env-Sender: xenmail43267@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390410843!11337172!1
X-Originating-IP: [209.85.223.195]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22638 invoked from network); 22 Jan 2014 17:14:04 -0000
Received: from mail-ie0-f195.google.com (HELO mail-ie0-f195.google.com)
	(209.85.223.195)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:14:04 -0000
Received: by mail-ie0-f195.google.com with SMTP id tp5so1888199ieb.2
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 09:14:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=jE6sHIR4jV3uGd2zy0wyLk310aynvx72HVdBLUiaI0g=;
	b=xiZnjxvIVdbvxecA94cz1+u7QW8f1o1Ru1dTsziodG7shPS5RKVrMWhAJFCGh0NDil
	kIoHO+8RAL1tBWxzQgDTzsfrRUzXjFz53XX5QvYrTtIlUwCMdD4pPhdbkgvZ3MOS7+i8
	eucwFHVVfAFi72LVa9ih9ssM14kNhGo5j7r57o/E6ni7DB4rsHMHdoe8DPjhNxCJalmR
	pDkZbsbu8C1/TMM6Dfhy7qSX0dyNIEpVnmrK9GBJ/5iYvM44qJkCs8WkUXoXvrI8Q42O
	4dGjlx+xwTOWNz3zXtqPYdDxTWyve3qI4NzpjIT5mzhPUGtOKTDY0qTjxk0fZohTRyHW
	b07w==
X-Received: by 10.50.149.170 with SMTP id ub10mr4178896igb.9.1390410843298;
	Wed, 22 Jan 2014 09:14:03 -0800 (PST)
Received: from localhost.localdomain (c-24-118-49-166.hsd1.mn.comcast.net.
	[24.118.49.166])
	by mx.google.com with ESMTPSA id kt2sm48095471igb.1.2014.01.22.09.14.01
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 09:14:02 -0800 (PST)
From: <xenmail43267@gmail.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 11:12:49 -0600
Message-Id: <1390410769-2517-1-git-send-email-xenmail43267@gmail.com>
X-Mailer: git-send-email 1.8.3.2
Cc: Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	samuel.thibault@ens-lyon.org, alex.sharp@orionvm.com
Subject: [Xen-devel] [PATCH] mini-os: Fix stubdom build failures on gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mike Neilsen <mneilsen@acm.org>

This is a fix for bug 35:
http://bugs.xenproject.org/xen/bug/35

This bug report describes several format string mismatches which prevent
building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
copy of Alex Sharp's original patch with Andrew Cooper's recommendation applied
to extras/mini-os/xenbus/xenbus.c to avoid stack corruption.

Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.

Signed-off-by: Mike Neilsen <mneilsen@acm.org>
---
 extras/mini-os/fbfront.c       | 4 ++--
 extras/mini-os/pcifront.c      | 2 +-
 extras/mini-os/xenbus/xenbus.c | 5 +++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
index 1e01513..9cc07b4 100644
--- a/extras/mini-os/fbfront.c
+++ b/extras/mini-os/fbfront.c
@@ -105,7 +105,7 @@ again:
         free(err);
     }
 
-    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
+    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
     if (err) {
         message = "writing page-ref";
         goto abort_transaction;
@@ -468,7 +468,7 @@ again:
         free(err);
     }
 
-    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
+    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
     if (err) {
         message = "writing page-ref";
         goto abort_transaction;
diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
index 16a4b49..ce5180c 100644
--- a/extras/mini-os/pcifront.c
+++ b/extras/mini-os/pcifront.c
@@ -424,7 +424,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
                 continue;
             }
 
-            if (sscanf(s, "%x:%x:%x.%x", dom, bus, slot, fun) != 4) {
+            if (sscanf(s, "%x:%x:%x.%lx", dom, bus, slot, fun) != 4) {
                 printk("\"%s\" does not look like a PCI device address\n", s);
                 free(s);
                 continue;
diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
index ee1691b..c5d9b02 100644
--- a/extras/mini-os/xenbus/xenbus.c
+++ b/extras/mini-os/xenbus/xenbus.c
@@ -15,6 +15,7 @@
  *
  ****************************************************************************
  **/
+#include <inttypes.h>
 #include <mini-os/os.h>
 #include <mini-os/mm.h>
 #include <mini-os/traps.h>
@@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
     err = errmsg(rep);
     if (err)
 	return err;
-    sscanf((char *)(rep + 1), "%u", xbt);
+    sscanf((char *)(rep + 1), "%lu", xbt);
     free(rep);
     return NULL;
 }
@@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
     domid_t ret;
 
     BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
-    sscanf(dom_id, "%d", &ret);
+    sscanf(dom_id, "%"SCNd16, &ret);
 
     return ret;
 }
-- 
1.8.3.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61Qi-00041a-Ol; Wed, 22 Jan 2014 17:17:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W61Qh-00041Q-3b
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:17:31 +0000
Received: from [85.158.143.35:26424] by server-2.bemta-4.messagelabs.com id
	9B/C8-11386-A2DFFD25; Wed, 22 Jan 2014 17:17:30 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390411046!127667!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15972 invoked from network); 22 Jan 2014 17:17:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:17:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208,223";a="93358050"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 17:17:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 12:17:25 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W61Qa-0001mA-TT;
	Wed, 22 Jan 2014 17:17:24 +0000
Message-ID: <1390411039.32296.8.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 22 Jan 2014 17:17:19 +0000
In-Reply-To: <52DFC5BC0200007800115C92@nat28.tlf.novell.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
From: Frediano Ziglio <frediano.ziglio@citrix.com>
Date: Wed, 22 Jan 2014 10:48:50 +0000
Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These lines (in mctelem_reserve)

        newhead = oldhead->mcte_next;
        if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {

are racy. After newhead is read, another flow (thread or recursive
invocation) can modify the entire list yet leave the head with the same
value. oldhead then still equals *freelp, so the cmpxchg succeeds, but
the new head may point to any element, including one already in use.

This patch instead uses a bit array and atomic bit operations.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
---
 xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
 1 file changed, 30 insertions(+), 51 deletions(-)

Changes from v1:
- Use a bitmap to allow any number of items to be used;
- Use a single bitmap to simplify the reserve loop;
- Remove the HOME flags, as they are no longer used.

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index 895ce1a..ed8e8d2 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -37,24 +37,19 @@ struct mctelem_ent {
 	void *mcte_data;		/* corresponding data payload */
 };
 
-#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
-#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
-#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
-#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
+#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
+#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
 #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
 #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
 #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
 #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
 
-#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
 #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
 #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
 				MCTE_F_STATE_UNCOMMITTED | \
 				MCTE_F_STATE_COMMITTED | \
 				MCTE_F_STATE_PROCESSING)
 
-#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
-
 #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
 #define	MCTE_SET_CLASS(tep, new) do { \
     (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
@@ -69,6 +64,8 @@ struct mctelem_ent {
 #define	MC_URGENT_NENT		10
 #define	MC_NONURGENT_NENT	20
 
+#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
+
 #define	MC_NCLASSES		(MC_NONURGENT + 1)
 
 #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
@@ -77,11 +74,9 @@ struct mctelem_ent {
 static struct mc_telem_ctl {
 	/* Linked lists that thread the array members together.
 	 *
-	 * The free lists are singly-linked via mcte_next, and we allocate
-	 * from them by atomically unlinking an element from the head.
-	 * Consumed entries are returned to the head of the free list.
-	 * When an entry is reserved off the free list it is not linked
-	 * on any list until it is committed or dismissed.
+	 * The free list is a bitmap where a set bit means free.
+	 * Since the number of elements is small, entries can be
+	 * allocated atomically this way.
 	 *
 	 * The committed list grows at the head and we do not maintain a
 	 * tail pointer; insertions are performed atomically.  The head
@@ -101,7 +96,7 @@ static struct mc_telem_ctl {
 	 * we can lock it for updates.  The head of the processing list
 	 * always has the oldest telemetry, and we append (as above)
 	 * at the tail of the processing list. */
-	struct mctelem_ent *mctc_free[MC_NCLASSES];
+	DECLARE_BITMAP(mctc_free, MC_NENT);
 	struct mctelem_ent *mctc_committed[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
 	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
@@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
  */
 static void mctelem_free(struct mctelem_ent *tep)
 {
-	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
-	    MC_URGENT : MC_NONURGENT;
-
 	BUG_ON(tep->mcte_refcnt != 0);
 	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
 
 	tep->mcte_prev = NULL;
-	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
+	tep->mcte_next = NULL;
+
+	/* set free in array */
+	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
 }
 
 /* Increment the reference count of an entry that is not linked on to
@@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
 	}
 
 	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
-	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
-	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
-	    datasz)) == NULL) {
+	    MC_NENT)) == NULL ||
+	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
 		if (mctctl.mctc_elems)
 			xfree(mctctl.mctc_elems);
 		printk("Allocations for MCA telemetry failed\n");
 		return;
 	}
 
-	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
-		struct mctelem_ent *tep, **tepp;
+	for (i = 0; i < MC_NENT; i++) {
+		struct mctelem_ent *tep;
 
 		tep = mctctl.mctc_elems + i;
 		tep->mcte_flags = MCTE_F_STATE_FREE;
 		tep->mcte_refcnt = 0;
 		tep->mcte_data = datarr + i * datasz;
 
-		if (i < MC_URGENT_NENT) {
-			tepp = &mctctl.mctc_free[MC_URGENT];
-			tep->mcte_flags |= MCTE_F_HOME_URGENT;
-		} else {
-			tepp = &mctctl.mctc_free[MC_NONURGENT];
-			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
-		}
-
-		tep->mcte_next = *tepp;
+		__set_bit(i, mctctl.mctc_free);
+		tep->mcte_next = NULL;
 		tep->mcte_prev = NULL;
-		*tepp = tep;
 	}
 }
 
@@ -310,32 +296,25 @@ static int mctelem_drop_count;
 
 /* Reserve a telemetry entry, or return NULL if none available.
  * If we return an entry then the caller must subsequently call exactly one of
- * mctelem_unreserve or mctelem_commit for that entry.
+ * mctelem_dismiss or mctelem_commit for that entry.
  */
 mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
 {
-	struct mctelem_ent **freelp;
-	struct mctelem_ent *oldhead, *newhead;
-	mctelem_class_t target = (which == MC_URGENT) ?
-	    MC_URGENT : MC_NONURGENT;
+	unsigned bit;
+	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
 
-	freelp = &mctctl.mctc_free[target];
 	for (;;) {
-		if ((oldhead = *freelp) == NULL) {
-			if (which == MC_URGENT && target == MC_URGENT) {
-				/* raid the non-urgent freelist */
-				target = MC_NONURGENT;
-				freelp = &mctctl.mctc_free[target];
-				continue;
-			} else {
-				mctelem_drop_count++;
-				return (NULL);
-			}
+		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
+
+		if (bit >= MC_NENT) {
+			mctelem_drop_count++;
+			return (NULL);
 		}
 
-		newhead = oldhead->mcte_next;
-		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
-			struct mctelem_ent *tep = oldhead;
+		/* try to allocate, atomically clear free bit */
+		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
+			/* return element we got */
+			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
 
 			mctelem_hold(tep);
 			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
-- 
1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:19:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61SJ-0004VM-9p; Wed, 22 Jan 2014 17:19:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1W61SI-0004Up-Cd
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:19:10 +0000
Received: from [85.158.137.68:63086] by server-4.bemta-3.messagelabs.com id
	6F/CB-10414-D8DFFD25; Wed, 22 Jan 2014 17:19:09 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390411148!9916201!1
X-Originating-IP: [192.134.164.104]
X-SpamReason: No, hits=0.3 required=7.0 tests=MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14019 invoked from network); 22 Jan 2014 17:19:09 -0000
Received: from mail3-relais-sop.national.inria.fr (HELO
	mail3-relais-sop.national.inria.fr) (192.134.164.104)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:19:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384297200"; d="scan'208";a="45748203"
Received: from nat-eduroam-36-gw-01-bso.bordeaux.inria.fr (HELO type.ipv6)
	([194.199.1.36])
	by mail3-relais-sop.national.inria.fr with ESMTP/TLS/DHE-RSA-AES128-SHA;
	22 Jan 2014 18:19:07 +0100
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1W61SE-0003Zj-LS; Wed, 22 Jan 2014 18:19:06 +0100
Date: Wed, 22 Jan 2014 18:19:06 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: xenmail43267@gmail.com
Message-ID: <20140122171906.GT8171@type>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	xenmail43267@gmail.com, xen-devel@lists.xen.org,
	alex.sharp@orionvm.com, andrew.cooper3@citrix.com,
	Ian.Campbell@citrix.com, stefano.stabellini@citrix.com,
	Mike Neilsen <mneilsen@acm.org>
References: <1390410769-2517-1-git-send-email-xenmail43267@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390410769-2517-1-git-send-email-xenmail43267@gmail.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Ian.Campbell@citrix.com, andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	alex.sharp@orionvm.com
Subject: Re: [Xen-devel] [PATCH] mini-os: Fix stubdom build failures on gcc
	4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xenmail43267@gmail.com, on Wed 22 Jan 2014 11:12:49 -0600, wrote:
> index 16a4b49..ce5180c 100644
> --- a/extras/mini-os/pcifront.c
> +++ b/extras/mini-os/pcifront.c
> @@ -424,7 +424,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>                  continue;
>              }
>  
> -            if (sscanf(s, "%x:%x:%x.%x", dom, bus, slot, fun) != 4) {
> +            if (sscanf(s, "%x:%x:%x.%lx", dom, bus, slot, fun) != 4) {
>                  printk("\"%s\" does not look like a PCI device address\n", s);
>                  free(s);
>                  continue;

Rather, make fun an unsigned int; there is no reason for it to be an
unsigned long, and I'm still wondering where that comes from.

The rest seems OK to me.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61gQ-0005H4-Av; Wed, 22 Jan 2014 17:33:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W61gO-0005Gz-DD
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:33:44 +0000
Received: from [85.158.143.35:55488] by server-1.bemta-4.messagelabs.com id
	60/24-02132-7F000E25; Wed, 22 Jan 2014 17:33:43 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390412020!129613!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4242 invoked from network); 22 Jan 2014 17:33:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:33:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="93365215"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 17:33:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 12:33:25 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W61g5-0001zQ-10;
	Wed, 22 Jan 2014 17:33:25 +0000
Date: Wed, 22 Jan 2014 17:32:18 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390408531.32519.78.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Ian Campbell wrote:
> Julien,
> 
> I wonder if the following is any better than the current stuff in
> staging for the issue you are seeing with BSD at start of day? Can you
> try it please.

The approach seems reasonable.


> It has survived >1000 bootloops on Midway and >50 on Mustang, both are
> still going.
> 
> It basically does a cache clean on all RAM mapped in the p2m. Anything
> in the cache is either the result of an earlier scrub of the page or
> something toolstack just wrote, so there is no need to be concerned
> about clean vs. invalidate -- clean is always correct.
> 
> This should ensure that the guest has no dirty pages when it starts.
> This nobbles the HCR_DC-based stuff too, since it is no longer
> necessary. This avoids concerns about guests which enable the MMU
> before enabling their caches.
> 
> It contains debug BUG()s in various trap locations to catch the guest
> experiencing any incoherence, and has lots of other debugging left in,
> etc. (and hacks w.r.t. prototypes not in headers).
> 
> Ian.
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index e1d1bec..60c3091 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_mfn = 0;
> +    return do_domctl(xch, &domctl);
> +}

Do we really need to flush the entire p2m, or just the things we have
written to?


>  int xc_domain_pause(xc_interface *xch,
>                      uint32_t domid)
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 13f816b..43dae5c 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -453,6 +453,7 @@ int xc_domain_create(xc_interface *xch,
>                       xen_domain_handle_t handle,
>                       uint32_t flags,
>                       uint32_t *pdomid);
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid);
>  
>  
>  /* Functions to produce a dump of a given domain
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..55c86f0 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -1364,7 +1364,10 @@ static void domain_create_cb(libxl__egc *egc,
>      STATE_AO_GC(cdcs->dcs.ao);
>  
>      if (!rc)
> +    {
>          *cdcs->domid_out = domid;
> +        xc_domain_cacheflush(CTX->xch, domid);
> +    }
>  
>      libxl__ao_complete(egc, ao, rc);
>  }
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 635a9a4..2edd09d 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -475,7 +475,8 @@ int vcpu_initialise(struct vcpu *v)
>          return rc;
>  
>      v->arch.sctlr = SCTLR_GUEST_INIT;
> -    v->arch.default_cache = true;
> +    //v->arch.default_cache = true;
> +    v->arch.default_cache = false;
>  
>      /*
>       * By default exposes an SMP system with AFF0 set to the VCPU ID
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..9e3b37d 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -11,12 +11,24 @@
>  #include <xen/sched.h>
>  #include <xen/hypercall.h>
>  #include <public/domctl.h>
> +#include <xen/guest_access.h>
> +
> +extern long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn);
>  
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
> +        if ( __copy_to_guest(u_domctl, domctl, 1) )
> +            rc = -EFAULT;
> +
> +        return rc;
> +    }
> +
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 85ca330..f35ed57 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -228,15 +228,26 @@ enum p2m_operation {
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
> +    CACHEFLUSH,
>  };
>  
> +static void do_one_cacheflush(paddr_t mfn)
> +{
> +    void *v = map_domain_page(mfn);
> +
> +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> +
> +    unmap_domain_page(v);
> +}

A pity that we need to map a page just to flush the dcache.  It could be
expensive, especially if we really have to map every single guest mfn. I
wonder if we could use DCCSW instead.


>  static int create_p2m_entries(struct domain *d,
>                       enum p2m_operation op,
>                       paddr_t start_gpaddr,
>                       paddr_t end_gpaddr,
>                       paddr_t maddr,
>                       int mattr,
> -                     p2m_type_t t)
> +                     p2m_type_t t,
> +                     xen_pfn_t *last_mfn)
>  {
>      int rc;
>      struct p2m_domain *p2m = &d->arch.p2m;
> @@ -381,18 +392,42 @@ static int create_p2m_entries(struct domain *d,
>                      count++;
>                  }
>                  break;
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                    {
> +                        count++;
> +                        break;
> +                    }
> +
> +                    count += 0x10;
> +
> +                    do_one_cacheflush(pte.p2m.base);
> +                }
> +                break;
>          }
>  
> +        if ( last_mfn )
> +            *last_mfn = addr >> PAGE_SHIFT;
> +
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> -        if ( op == RELINQUISH && count >= 0x2000 )
> +        switch ( op )
>          {
> -            if ( hypercall_preempt_check() )
> +        case RELINQUISH:
> +        case CACHEFLUSH:
> +            if (count >= 0x2000 && hypercall_preempt_check() )
>              {
>                  p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;

If we are taking this code path for cache flushes, then we should rename
next_gfn_to_relinquish to something more generic.
    

>                  rc = -EAGAIN;
>                  goto out;
>              }
>              count = 0;
> +            break;
> +        case INSERT:
> +        case ALLOCATE:
> +        case REMOVE:
> +            /* No preemption */
> +            break;
>          }
>  
>          /* Got the next page */
> @@ -439,7 +474,7 @@ int p2m_populate_ram(struct domain *d,
>                       paddr_t end)
>  {
>      return create_p2m_entries(d, ALLOCATE, start, end,
> -                              0, MATTR_MEM, p2m_ram_rw);
> +                              0, MATTR_MEM, p2m_ram_rw, NULL);
>  }
>  
>  int map_mmio_regions(struct domain *d,
> @@ -448,7 +483,7 @@ int map_mmio_regions(struct domain *d,
>                       paddr_t maddr)
>  {
>      return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
> -                              maddr, MATTR_DEV, p2m_mmio_direct);
> +                              maddr, MATTR_DEV, p2m_mmio_direct, NULL);
>  }
>  
>  int guest_physmap_add_entry(struct domain *d,
> @@ -460,7 +495,7 @@ int guest_physmap_add_entry(struct domain *d,
>      return create_p2m_entries(d, INSERT,
>                                pfn_to_paddr(gpfn),
>                                pfn_to_paddr(gpfn + (1 << page_order)),
> -                              pfn_to_paddr(mfn), MATTR_MEM, t);
> +                              pfn_to_paddr(mfn), MATTR_MEM, t, NULL);
>  }
>  
>  void guest_physmap_remove_page(struct domain *d,
> @@ -470,7 +505,7 @@ void guest_physmap_remove_page(struct domain *d,
>      create_p2m_entries(d, REMOVE,
>                         pfn_to_paddr(gpfn),
>                         pfn_to_paddr(gpfn + (1<<page_order)),
> -                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
> +                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid, NULL);
>  }
>  
>  int p2m_alloc_table(struct domain *d)
> @@ -622,7 +657,28 @@ int relinquish_p2m_mapping(struct domain *d)
>                                pfn_to_paddr(p2m->next_gfn_to_relinquish),
>                                pfn_to_paddr(p2m->max_mapped_gfn),
>                                pfn_to_paddr(INVALID_MFN),
> -                              MATTR_MEM, p2m_invalid);
> +                              MATTR_MEM, p2m_invalid, NULL);
> +}
> +
> +long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    printk("dom%d p2m cache flush from mfn %"PRI_xen_pfn" RELIN %lx\n",
> +           d->domain_id, *start_mfn, p2m->next_gfn_to_relinquish);
> +
> +    *start_mfn = MAX(*start_mfn, p2m->next_gfn_to_relinquish);
> +
> +    printk("dom%d p2m cache flush: %"PRIpaddr"-%"PRIpaddr"\n",
> +           d->domain_id,
> +           pfn_to_paddr(*start_mfn),
> +           pfn_to_paddr(p2m->max_mapped_gfn));
> +
> +    return create_p2m_entries(d, CACHEFLUSH,
> +                              pfn_to_paddr(*start_mfn),
> +                              pfn_to_paddr(p2m->max_mapped_gfn),
> +                              pfn_to_paddr(INVALID_MFN),
> +                              MATTR_MEM, p2m_invalid, start_mfn);
>  }
>  
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 48a6fcc..546f7ce 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1283,6 +1283,7 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
>  
>  static void update_sctlr(struct vcpu *v, uint32_t val)
>  {
> +    BUG();
>      /*
>       * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
>       * because they are incompatible.
> @@ -1628,6 +1629,7 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>                                        union hsr hsr)
>  {
>      register_t addr = READ_SYSREG(FAR_EL2);
> +    BUG();
>      inject_iabt_exception(regs, addr, hsr.len);
>  }
>  
> @@ -1683,6 +1685,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      }
>  
>  bad_data_abort:
> +    show_execution_state(regs);
> +    panic("DABT");
>      inject_dabt_exception(regs, info.gva, hsr.len);
>  }
>  
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..d7b22c3 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,13 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +struct xen_domctl_cacheflush {
> +    /* Updated for progress */
> +    xen_pfn_t start_mfn;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +961,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setnodeaffinity               68
>  #define XEN_DOMCTL_getnodeaffinity               69
>  #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1020,7 @@ struct xen_domctl {
>          struct xen_domctl_set_max_evtchn    set_max_evtchn;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>          struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:39:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61lV-0005n6-Tz; Wed, 22 Jan 2014 17:39:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jesse.benedict@citrix.com>)
	id 1W61To-0004hc-AN; Wed, 22 Jan 2014 17:20:44 +0000
Received: from [85.158.143.35:29256] by server-1.bemta-4.messagelabs.com id
	96/35-02132-BEDFFD25; Wed, 22 Jan 2014 17:20:43 +0000
X-Env-Sender: jesse.benedict@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390411240!124858!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20884 invoked from network); 22 Jan 2014 17:20:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:20:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208,217";a="95405833"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 17:20:40 +0000
Received: from FTLPEX01CL02.citrite.net ([169.254.2.8]) by
	FTLPEX01CL01.citrite.net ([169.254.4.32]) with mapi id 14.02.0342.004;
	Wed, 22 Jan 2014 12:20:40 -0500
From: Jesse Benedict <jesse.benedict@citrix.com>
To: "'xen-api-request@lists.xen.org'" <xen-api-request@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: Request: XenCenter/XE Command To Preserve VM MAC
	Addresses/VIFs when XS NIC Settings Change
Thread-Index: Ac8XlWgUXzDnqmE6REmdtRruxBjVoQ==
Date: Wed, 22 Jan 2014 17:20:40 +0000
Message-ID: <B8C24FCCBFC459419478083491FF39E21F10C0@FTLPEX01CL02.citrite.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.11.24.228]
MIME-Version: 1.0
X-DLP: MIA2
X-Mailman-Approved-At: Wed, 22 Jan 2014 17:39:00 +0000
Subject: [Xen-devel] Request: XenCenter/XE Command To Preserve VM MAC
 Addresses/VIFs when XS NIC Settings Change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0792855869020140230=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0792855869020140230==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_B8C24FCCBFC459419478083491FF39E21F10C0FTLPEX01CL02citri_"

--_000_B8C24FCCBFC459419478083491FF39E21F10C0FTLPEX01CL02citri_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

All,

This request comes from experience with many clients.  It is somewhat
easy to script, but here it goes:

ISSUE:
We currently have the capability to "AUTOMATICALLY" add a network to
NEW VMs, but no way to "PRESERVE" existing VM Network IDs and MACs when
XenServer interfaces are changed.

SCENARIO:
A user has several hundred VMs and needs to change bonds/interfaces in
XenServer. After breaking bonds or modifying XenServer interfaces, the
VMs are no longer associated with a "Network".

REQUEST:
Offer a method where:

-          Modifications to a network interface or bond note that "VMs
           are associated with this network"

-          After changes are made to said interfaces, ask the user
           "Would you like to update your VMs to use this interface and
           retain Network ID and MAC ADDRESS?"

o   Yes (All)

o   No (List of VMs to apply to, or NONE)

I have a script - not currently with me - that I wrote to handle this type =
of activity, as BOND configuration, additions, etc. happen all the time, an=
d I had one client (ahem) have to manually associate all VMs with a specifi=
c network bond that had to be modified.
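For reference, the re-association step described above could be sketched roughly as follows. This is an illustration under assumed conventions (the "device mac" per-line input format and the function name are made up here), not the script referenced in this mail; `xe vif-list` and `xe vif-create` are standard XenServer CLI commands.

```shell
# emit_vif_creates VM_UUID NETWORK_UUID  (reads "device mac" lines on stdin)
# Re-creates a VM's VIFs on a replacement network while preserving device
# numbers and MAC addresses. It prints the xe commands rather than running
# them, so an admin can review the plan before applying it.
emit_vif_creates() {
    vm="$1"
    net="$2"
    while read -r device mac; do
        printf 'xe vif-create vm-uuid=%s network-uuid=%s device=%s mac=%s\n' \
            "$vm" "$net" "$device" "$mac"
    done
}

# Capturing the pre-change state for one VM might look like (not run here):
#   xe vif-list vm-uuid=$VM params=device --minimal
#   xe vif-list vm-uuid=$VM params=MAC --minimal
```

Usage: pipe the saved pre-change VIF list into the function, e.g. `printf '0 aa:bb:cc:dd:ee:01\n' | emit_vif_creates <vm-uuid> <new-network-uuid>`, then review and execute the printed commands.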

If this request is not clear, I will reply with the script I wrote and will=
 offer screen shots.


Sincerely,


Jesse Benedict, CCA | Citrix, Inc. | XenServer, XenClient Support Team
Work Better.  Live Better.  Call us at 1-800-4CITRIX!
Join the Community: support.citrix.com<http://support.citrix.com/> | discus=
sions.citrix.com<http://discussions.citrix.com/> | blogs.citrix.com<http://=
blogs.citrix.com/> | taas.citrix.com<https://taas.citrix.com/>

Customer Satisfaction is our goal.
If you have feedback regarding my performance, please feel free to contact =
my Manager william.aycock@citrix.com<mailto:william.aycock@citrix.com>

CONFIDENTIALITY NOTICE
This e-mail message and all documents which accompany it are intended only =
for the use of the individual or entity to which addressed, and may contain=
 privileged or confidential information. Any unauthorized disclosure or dis=
tribution of this e-mail message is prohibited. Any private files or utilit=
ies that are included in this e-mail are intended only for the use of the i=
ndividual or entity to which this is addressed and distribution of these fi=
les or utilities is prohibited. If you have received this e-mail message in=
 error, please notify me immediately. Thank you.



--_000_B8C24FCCBFC459419478083491FF39E21F10C0FTLPEX01CL02citri_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
	{font-family:Wingdings;
	panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
	{font-family:"MS Mincho";
	panose-1:2 2 6 9 4 2 5 8 3 4;}
@font-face
	{font-family:"MS Mincho";
	panose-1:2 2 6 9 4 2 5 8 3 4;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:"\@MS Mincho";
	panose-1:2 2 6 9 4 2 5 8 3 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-fareast-language:JA;}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-priority:99;
	color:purple;
	text-decoration:underline;}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
	{mso-style-priority:34;
	margin-top:0in;
	margin-right:0in;
	margin-bottom:0in;
	margin-left:.5in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-fareast-language:JA;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri","sans-serif";
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri","sans-serif";
	mso-fareast-language:JA;}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
	{page:WordSection1;}
/* List Definitions */
@list l0
	{mso-list-id:1096829415;
	mso-list-type:hybrid;
	mso-list-template-ids:1179797204 1618118818 67698691 67698693 67698689 676=
98691 67698693 67698689 67698691 67698693;}
@list l0:level1
	{mso-level-start-at:0;
	mso-level-number-format:bullet;
	mso-level-text:-;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:"Calibri","sans-serif";
	mso-fareast-font-family:"MS Mincho";}
@list l0:level2
	{mso-level-number-format:bullet;
	mso-level-text:o;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:"Courier New";}
@list l0:level3
	{mso-level-number-format:bullet;
	mso-level-text:\F0A7;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:Wingdings;}
@list l0:level4
	{mso-level-number-format:bullet;
	mso-level-text:\F0B7;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:Symbol;}
@list l0:level5
	{mso-level-number-format:bullet;
	mso-level-text:o;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:"Courier New";}
@list l0:level6
	{mso-level-number-format:bullet;
	mso-level-text:\F0A7;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:Wingdings;}
@list l0:level7
	{mso-level-number-format:bullet;
	mso-level-text:\F0B7;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:Symbol;}
@list l0:level8
	{mso-level-number-format:bullet;
	mso-level-text:o;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:"Courier New";}
@list l0:level9
	{mso-level-number-format:bullet;
	mso-level-text:\F0A7;
	mso-level-tab-stop:none;
	mso-level-number-position:left;
	text-indent:-.25in;
	font-family:Wingdings;}
ol
	{margin-bottom:0in;}
ul
	{margin-bottom:0in;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"EN-US" link=3D"blue" vlink=3D"purple">
<div class=3D"WordSection1">
<p class=3D"MsoNormal">All,<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">This request comes from experience with many clients=
.&nbsp; It is somewhat easy to script, but here it goes:<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><b>ISSUE:<o:p></o:p></b></p>
<p class=3D"MsoNormal">We currently have the capability to &#8220;AUTOMATIC=
ALLY&#8221; add a network to NEW VMs, but no way to &#8220;PRESERVE&#8221; =
existing VM Network IDs and MACs when XenServer interfaces are changed.<o:p=
></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><b>SCENARIO:<o:p></o:p></b></p>
<p class=3D"MsoNormal">User has several hundred VMs and needs to change BON=
Ds/Interfaces in XenServer<o:p></o:p></p>
<p class=3D"MsoNormal">After breaking bonds or modifying XenServer interfac=
es, the VMs are no longer associated with a &#8220;Network&#8221;<o:p></o:p=
></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><b>REQUEST:<o:p></o:p></b></p>
<p class=3D"MsoNormal">Offer a method where:<o:p></o:p></p>
<p class=3D"MsoListParagraph" style=3D"text-indent:-.25in;mso-list:l0 level=
1 lfo1"><![if !supportLists]><span style=3D"mso-list:Ignore">-<span style=
=3D"font:7.0pt &quot;Times New Roman&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;
</span></span><![endif]><span dir=3D"LTR"></span>Modifications to a network=
 interface or Bond notes that &#8220;VMs are associated with this network&#=
8221;<o:p></o:p></p>
<p class=3D"MsoListParagraph" style=3D"text-indent:-.25in;mso-list:l0 level=
1 lfo1"><![if !supportLists]><span style=3D"mso-list:Ignore">-<span style=
=3D"font:7.0pt &quot;Times New Roman&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;
</span></span><![endif]><span dir=3D"LTR"></span>After changes are made to =
said interfaces, ask user &#8220;Would you like to update your VMs to use t=
his interface and retain Network ID and MAC ADDRESS?&#8221;<o:p></o:p></p>
<p class=3D"MsoListParagraph" style=3D"margin-left:1.0in;text-indent:-.25in=
;mso-list:l0 level2 lfo1">
<![if !supportLists]><span style=3D"font-family:&quot;Courier New&quot;"><s=
pan style=3D"mso-list:Ignore">o<span style=3D"font:7.0pt &quot;Times New Ro=
man&quot;">&nbsp;&nbsp;
</span></span></span><![endif]><span dir=3D"LTR"></span>Yes (All)<o:p></o:p=
></p>
<p class=3D"MsoListParagraph" style=3D"margin-left:1.0in;text-indent:-.25in=
;mso-list:l0 level2 lfo1">
<![if !supportLists]><span style=3D"font-family:&quot;Courier New&quot;"><s=
pan style=3D"mso-list:Ignore">o<span style=3D"font:7.0pt &quot;Times New Ro=
man&quot;">&nbsp;&nbsp;
</span></span></span><![endif]><span dir=3D"LTR"></span>No (List of VMs to =
apply to or NONE)<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">I have a script &#8211; not currently with me &#8211=
; that I wrote to handle this type of activity, as BOND configuration, addi=
tions, etc. happen all the time, and I had one client (ahem) have to manual=
ly associate all VMs with a specific network bond
 that had to be modified.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">If this request is not clear, I will reply with the =
script I wrote and will offer screen shots.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;mso-fareast-language=
:EN-US">Sincerely,<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal"><b><span style=3D"font-size:12.0pt;color:#1F497D;mso=
-fareast-language:EN-US">Jesse Benedict, CCA</span></b><span style=3D"font-=
size:12.0pt;color:#1F497D;mso-fareast-language:EN-US"> | Citrix, Inc. | Xen=
Server, XenClient Support Team<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US">Work Better.&nbsp; Live Better.&nbsp; Call us at 1-80=
0-4CITRIX!<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US">Join the Community:
<a href=3D"http://support.citrix.com/"><span style=3D"color:blue">support.c=
itrix.com</span></a> |
<a href=3D"http://discussions.citrix.com/"><span style=3D"color:blue">discu=
ssions.citrix.com</span></a> |
<a href=3D"http://blogs.citrix.com/"><span style=3D"color:blue">blogs.citri=
x.com</span></a> |
<a href=3D"https://taas.citrix.com/"><span style=3D"color:blue">taas.citrix=
.com</span></a><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal"><b><span lang=3D"EN-AU" style=3D"font-size:12.0pt;co=
lor:#999999;mso-fareast-language:EN-US">Customer Satisfaction is our goal.<=
/span></b><span lang=3D"EN-AU" style=3D"font-size:12.0pt;color:#1F497D;mso-=
fareast-language:EN-US">
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-AU" style=3D"font-size:12.0pt;color=
:#999999;mso-fareast-language:EN-US">If you have feedback regarding my perf=
ormance, please feel free to contact my Manager
</span><span style=3D"font-size:12.0pt;mso-fareast-language:EN-US"><a href=
=3D"mailto:william.aycock@citrix.com"><span style=3D"color:blue">william.ay=
cock@citrix.com</span></a><span style=3D"color:#1F497D"><o:p></o:p></span><=
/span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:12.0pt;color:#1F497D;mso-fa=
reast-language:EN-US"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal" style=3D"text-autospace:none"><b><span lang=3D"EN-AU=
" style=3D"font-size:12.0pt;color:#999999;mso-fareast-language:EN-US">CONFI=
DENTIALITY NOTICE</span></b><span lang=3D"EN-AU" style=3D"font-size:12.0pt;=
color:#999999;mso-fareast-language:EN-US">
<br>
This e-mail message and all documents which accompany it are intended only =
for the use of the individual or entity to which addressed, and may contain=
 privileged or confidential information. Any unauthorized disclosure or dis=
tribution of this e-mail message
 is prohibited. Any private files or utilities that are included in this e-=
mail are intended only for the use of the individual or entity to which thi=
s is addressed and distribution of these files or utilities is prohibited. =
If you have received this e-mail
 message in error, please notify me immediately. Thank you.</span><span lan=
g=3D"EN-AU" style=3D"font-size:12.0pt;color:#1F497D;mso-fareast-language:EN=
-US"><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-AU" style=3D"font-size:12.0pt;color=
:#1F497D;mso-fareast-language:EN-US"><o:p>&nbsp;</o:p></span></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
</body>
</html>

--_000_B8C24FCCBFC459419478083491FF39E21F10C0FTLPEX01CL02citri_--


--===============0792855869020140230==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0792855869020140230==--


From xen-devel-bounces@lists.xen.org Wed Jan 22 17:40:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61n7-000614-GW; Wed, 22 Jan 2014 17:40:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W61n5-00060u-Uw
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:40:40 +0000
Received: from [193.109.254.147:57010] by server-9.bemta-14.messagelabs.com id
	4D/A5-13957-79200E25; Wed, 22 Jan 2014 17:40:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390412437!9056887!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14926 invoked from network); 22 Jan 2014 17:40:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:40:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="93368006"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 17:40:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 12:40:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W61n1-00024f-Qp;
	Wed, 22 Jan 2014 17:40:35 +0000
Date: Wed, 22 Jan 2014 17:39:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401221647290.21510@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401221738460.21510@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401221647290.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org,
	Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
 OMAP platforms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Stefano Stabellini wrote:
> On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> > Hi,
> > 
> > The following patch series is an RFC for a possible implementation of a simple MMU module,
> > designed to translate IPA to MA for peripheral processors such as the GPU / IPU
> > on OMAP platforms. Currently our OMAP platform (OMAP5 panda) has 3 external MMUs
> > which need to be handled properly.
> > 
> > It would be great to get community feedback - would this be useful for the Xen project?
> > 
> > Let me describe the algorithm briefly. It is simple and straightforward.
> > The following logic is used to translate addresses from IPA to MA:
> > 
> > 1. At boot time the guest domain creates a "pagetable" for the external MMU IP.
> > The pagetable is a singleton data structure stored in ordinary kernel
> > heap memory. All memory mappings for the corresponding MMU are stored inside it.
> > The format of the "pagetable" is well defined.
> > 
> > 2. The guest domain enables the peripheral remote processor. As part of the enable sequence
> > the kernel allocates the chunks of heap memory needed by the remote processor and stores
> > pointers to the allocated chunks in the already created "pagetable". It then writes
> > the physical address of the pagetable to the MMU configuration register. As a result the MMU IP
> > knows about all allocations, and the remote processor can use them directly in its
> > software.
> > 
> > 3. The Xen omap mmu driver traps accesses to the MMU configuration registers.
> > It reads the physical address of the "pagetable" from the MMU register and creates a copy
> > of it in its own memory. As a result we have two similar configuration data structures:
> > one in the guest domain kernel, the other in the Xen hypervisor.
> > 
> > 4. The Xen omap mmu driver parses its own copy of the pagetable and translates all physical
> > addresses to the corresponding machine addresses using an existing p2m API call.
> > It then writes the physical address of its pagetable (with PA already translated to MA)
> > to the MMU IP configuration registers and returns control to the guest domain.
> > 
> > As a result the guest domain continues enabling the remote processor with its MMU, and the MMU
> > will use the new pagetable modified by the Xen omap mmu driver. The new pagetable will be used
> > directly by the MMU IP, and its new structure will be hidden from the guest domain kernel,
> > which won't know anything about the p2m translation.
> 
> Why don't you map Dom0 1:1 instead?
> If you enabled PLATFORM_QUIRK_DOM0_MAPPING_11 (now enabled by default on
> all platforms), all this wouldn't be necessary, right?

I guess you can't just use the 1:1 mapping because you are assigning the GPU
or IPU to a guest other than Dom0, right?


> 
> 
> > Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, Linux (Android) kernel 3.4 as DomU.
> > Target platform: OMAP5 panda.
> > 
> > Thank you for your attention,
> > 
> > Regards,
> > 
> > Andrii Tseglytskyi (3):
> >   arm: omap: introduce iommu module
> >   arm: omap: translate iommu mapping to 4K pages
> >   arm: omap: cleanup iopte allocations
> > 
> >  xen/arch/arm/Makefile     |    1 +
> >  xen/arch/arm/io.c         |    1 +
> >  xen/arch/arm/io.h         |    1 +
> >  xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
> >  4 files changed, 495 insertions(+)
> >  create mode 100644 xen/arch/arm/omap_iommu.c
> > 
> > -- 
> > 1.7.9.5
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> > 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:41:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W61nk-000656-Kg; Wed, 22 Jan 2014 17:41:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xenmail43267@gmail.com>) id 1W61nj-00064u-3S
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:41:19 +0000
Received: from [85.158.143.35:46204] by server-3.bemta-4.messagelabs.com id
	BB/CB-32360-EB200E25; Wed, 22 Jan 2014 17:41:18 +0000
X-Env-Sender: xenmail43267@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390412475!129724!1
X-Originating-IP: [209.85.223.174]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5701 invoked from network); 22 Jan 2014 17:41:17 -0000
Received: from mail-ie0-f174.google.com (HELO mail-ie0-f174.google.com)
	(209.85.223.174)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 17:41:17 -0000
Received: by mail-ie0-f174.google.com with SMTP id tp5so5656161ieb.33
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 09:41:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=0OvoEq01pDXQYyvKognYDvr55+oScWMVlLzPxMc9eYc=;
	b=092kuss04ZZ+rx65hb5+x8zpUd1S/55118eTOe9RY5U6TgauNcS/ItRoux2kuiZkDU
	cFORVv1TiqTMAYQKCE4k2Y7opPycSfvl8PLe00IU5ucPw+naJC77KTi9xgollVTZ7y5S
	oohL7aBOBFRr2GXKZCGrgJexl6sde0bmK6dC3oq5F0q6++Ybx9+ctTl26ohxSPXZc34N
	OoFoCAfaHIeQFTgk6zlNXC2+pBJZGsSUyHOJaRBSSwnR33f3G25Lhxh7wyitvbocWoJW
	+os9eoomkXZYSX80tcLtn0qsVnz4z92QZTRyzP+CD77hWEwwiMKCgMH/wryaNetZMw+s
	Qy+A==
X-Received: by 10.50.119.4 with SMTP id kq4mr4350025igb.40.1390412475631;
	Wed, 22 Jan 2014 09:41:15 -0800 (PST)
Received: from localhost.localdomain (c-24-118-49-166.hsd1.mn.comcast.net.
	[24.118.49.166])
	by mx.google.com with ESMTPSA id s4sm48298318ige.0.2014.01.22.09.41.12
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 09:41:12 -0800 (PST)
From: <xenmail43267@gmail.com>
To: xen-devel@lists.xen.org
Date: Wed, 22 Jan 2014 11:41:11 -0600
Message-Id: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
X-Mailer: git-send-email 1.8.3.2
Cc: Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	samuel.thibault@ens-lyon.org, alex.sharp@orionvm.com
Subject: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on gcc
	4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Mike Neilsen <mneilsen@acm.org>

This is a fix for bug 35:
http://bugs.xenproject.org/xen/bug/35

This bug report describes several format string mismatches which prevent
building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
copy of Alex Sharp's original patch with the following modifications:

* Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
  avoid stack corruption
* Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
  unsigned long in pcifront_physical_to_virtual and related functions
  (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)

Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.

Signed-off-by: Mike Neilsen <mneilsen@acm.org>

---
Changed since v1:
* Change "fun" arguments into unsigned ints
---
 extras/mini-os/fbfront.c          |  4 ++--
 extras/mini-os/include/pcifront.h | 12 ++++++------
 extras/mini-os/pcifront.c         | 14 +++++++-------
 extras/mini-os/xenbus/xenbus.c    |  5 +++--
 4 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
index 1e01513..9cc07b4 100644
--- a/extras/mini-os/fbfront.c
+++ b/extras/mini-os/fbfront.c
@@ -105,7 +105,7 @@ again:
         free(err);
     }
 
-    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
+    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
     if (err) {
         message = "writing page-ref";
         goto abort_transaction;
@@ -468,7 +468,7 @@ again:
         free(err);
     }
 
-    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
+    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
     if (err) {
         message = "writing page-ref";
         goto abort_transaction;
diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/pcifront.h
index 0a6be8e..1b05963 100644
--- a/extras/mini-os/include/pcifront.h
+++ b/extras/mini-os/include/pcifront.h
@@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op);
 void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int domain, unsigned int bus, unsigned slot, unsigned int fun));
 int pcifront_conf_read(struct pcifront_dev *dev,
                        unsigned int dom,
-                       unsigned int bus, unsigned int slot, unsigned long fun,
+                       unsigned int bus, unsigned int slot, unsigned int fun,
                        unsigned int off, unsigned int size, unsigned int *val);
 int pcifront_conf_write(struct pcifront_dev *dev,
                         unsigned int dom,
-                        unsigned int bus, unsigned int slot, unsigned long fun,
+                        unsigned int bus, unsigned int slot, unsigned int fun,
                         unsigned int off, unsigned int size, unsigned int val);
 int pcifront_enable_msi(struct pcifront_dev *dev,
                         unsigned int dom,
-                        unsigned int bus, unsigned int slot, unsigned long fun);
+                        unsigned int bus, unsigned int slot, unsigned int fun);
 int pcifront_disable_msi(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun);
+                         unsigned int bus, unsigned int slot, unsigned int fun);
 int pcifront_enable_msix(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun,
+                         unsigned int bus, unsigned int slot, unsigned int fun,
                          struct xen_msix_entry *entries, int n);
 int pcifront_disable_msix(struct pcifront_dev *dev,
                           unsigned int dom,
-                          unsigned int bus, unsigned int slot, unsigned long fun);
+                          unsigned int bus, unsigned int slot, unsigned int fun);
 void shutdown_pcifront(struct pcifront_dev *dev);
diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
index 16a4b49..0fc5b30 100644
--- a/extras/mini-os/pcifront.c
+++ b/extras/mini-os/pcifront.c
@@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
                                   unsigned int *dom,
                                   unsigned int *bus,
                                   unsigned int *slot,
-                                  unsigned long *fun)
+                                  unsigned int *fun)
 {
     /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
        should be enough to hold the paths we need to construct, even
@@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op)
 
 int pcifront_conf_read(struct pcifront_dev *dev,
                        unsigned int dom,
-                       unsigned int bus, unsigned int slot, unsigned long fun,
+                       unsigned int bus, unsigned int slot, unsigned int fun,
                        unsigned int off, unsigned int size, unsigned int *val)
 {
     struct xen_pci_op op;
@@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
 
 int pcifront_conf_write(struct pcifront_dev *dev,
                         unsigned int dom,
-                        unsigned int bus, unsigned int slot, unsigned long fun,
+                        unsigned int bus, unsigned int slot, unsigned int fun,
                         unsigned int off, unsigned int size, unsigned int val)
 {
     struct xen_pci_op op;
@@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
 
 int pcifront_enable_msi(struct pcifront_dev *dev,
                         unsigned int dom,
-                        unsigned int bus, unsigned int slot, unsigned long fun)
+                        unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
@@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
 
 int pcifront_disable_msi(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun)
+                         unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
@@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
 
 int pcifront_enable_msix(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun,
+                         unsigned int bus, unsigned int slot, unsigned int fun,
                          struct xen_msix_entry *entries, int n)
 {
     struct xen_pci_op op;
@@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
 
 int pcifront_disable_msix(struct pcifront_dev *dev,
                           unsigned int dom,
-                          unsigned int bus, unsigned int slot, unsigned long fun)
+                          unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
index ee1691b..c5d9b02 100644
--- a/extras/mini-os/xenbus/xenbus.c
+++ b/extras/mini-os/xenbus/xenbus.c
@@ -15,6 +15,7 @@
  *
  ****************************************************************************
  **/
+#include <inttypes.h>
 #include <mini-os/os.h>
 #include <mini-os/mm.h>
 #include <mini-os/traps.h>
@@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
     err = errmsg(rep);
     if (err)
 	return err;
-    sscanf((char *)(rep + 1), "%u", xbt);
+    sscanf((char *)(rep + 1), "%lu", xbt);
     free(rep);
     return NULL;
 }
@@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
     domid_t ret;
 
     BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
-    sscanf(dom_id, "%d", &ret);
+    sscanf(dom_id, "%"SCNd16, &ret);
 
     return ret;
 }
-- 
1.8.3.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+                        unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
@@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
 
 int pcifront_disable_msi(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun)
+                         unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
@@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
 
 int pcifront_enable_msix(struct pcifront_dev *dev,
                          unsigned int dom,
-                         unsigned int bus, unsigned int slot, unsigned long fun,
+                         unsigned int bus, unsigned int slot, unsigned int fun,
                          struct xen_msix_entry *entries, int n)
 {
     struct xen_pci_op op;
@@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
 
 int pcifront_disable_msix(struct pcifront_dev *dev,
                           unsigned int dom,
-                          unsigned int bus, unsigned int slot, unsigned long fun)
+                          unsigned int bus, unsigned int slot, unsigned int fun)
 {
     struct xen_pci_op op;
 
diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
index ee1691b..c5d9b02 100644
--- a/extras/mini-os/xenbus/xenbus.c
+++ b/extras/mini-os/xenbus/xenbus.c
@@ -15,6 +15,7 @@
  *
  ****************************************************************************
  **/
+#include <inttypes.h>
 #include <mini-os/os.h>
 #include <mini-os/mm.h>
 #include <mini-os/traps.h>
@@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
     err = errmsg(rep);
     if (err)
 	return err;
-    sscanf((char *)(rep + 1), "%u", xbt);
+    sscanf((char *)(rep + 1), "%lu", xbt);
     free(rep);
     return NULL;
 }
@@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
     domid_t ret;
 
     BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
-    sscanf(dom_id, "%d", &ret);
+    sscanf(dom_id, "%"SCNd16, &ret);
 
     return ret;
 }
-- 
1.8.3.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 17:57:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 17:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W623A-0006sl-Pu; Wed, 22 Jan 2014 17:57:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1W6238-0006sg-Sa
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 17:57:15 +0000
Received: from [85.158.137.68:20256] by server-14.bemta-3.messagelabs.com id
	51/E9-06105-A7600E25; Wed, 22 Jan 2014 17:57:14 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390413431!9922872!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22611 invoked from network); 22 Jan 2014 17:57:11 -0000
Received: from toccata.ens-lyon.org (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 17:57:11 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 91FE88408E;
	Wed, 22 Jan 2014 18:57:10 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id gw1tYy9DOE02; Wed, 22 Jan 2014 18:57:10 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id A1FAF8407D;
	Wed, 22 Jan 2014 18:57:09 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1W6230-0001am-U6; Wed, 22 Jan 2014 18:57:06 +0100
Date: Wed, 22 Jan 2014 18:57:06 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: xenmail43267@gmail.com
Message-ID: <20140122175706.GA5698@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	xenmail43267@gmail.com, xen-devel@lists.xen.org,
	alex.sharp@orionvm.com, andrew.cooper3@citrix.com,
	Ian.Campbell@citrix.com, stefano.stabellini@citrix.com,
	Mike Neilsen <mneilsen@acm.org>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Ian.Campbell@citrix.com, andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	alex.sharp@orionvm.com
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xenmail43267@gmail.com, on Wed 22 Jan 2014 11:41:11 -0600, wrote:
> From: Mike Neilsen <mneilsen@acm.org>
> 
> This is a fix for bug 35:
> http://bugs.xenproject.org/xen/bug/35
> 
> This bug report describes several format string mismatches which prevent
> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
> copy of Alex Sharp's original patch with the following modifications:
> 
> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
>   avoid stack corruption
> * Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
>   unsigned long in pcifront_physical_to_virtual and related functions
>   (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
> 
> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
> 
> Signed-off-by: Mike Neilsen <mneilsen@acm.org>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> Changed since v1:
> * Change "fun" arguments into unsigned ints
> ---
>  extras/mini-os/fbfront.c          |  4 ++--
>  extras/mini-os/include/pcifront.h | 12 ++++++------
>  extras/mini-os/pcifront.c         | 14 +++++++-------
>  extras/mini-os/xenbus/xenbus.c    |  5 +++--
>  4 files changed, 18 insertions(+), 17 deletions(-)
> 
> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
> index 1e01513..9cc07b4 100644
> --- a/extras/mini-os/fbfront.c
> +++ b/extras/mini-os/fbfront.c
> @@ -105,7 +105,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> @@ -468,7 +468,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/pcifront.h
> index 0a6be8e..1b05963 100644
> --- a/extras/mini-os/include/pcifront.h
> +++ b/extras/mini-os/include/pcifront.h
> @@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op);
>  void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int domain, unsigned int bus, unsigned slot, unsigned int fun));
>  int pcifront_conf_read(struct pcifront_dev *dev,
>                         unsigned int dom,
> -                       unsigned int bus, unsigned int slot, unsigned long fun,
> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>                         unsigned int off, unsigned int size, unsigned int *val);
>  int pcifront_conf_write(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun,
> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>                          unsigned int off, unsigned int size, unsigned int val);
>  int pcifront_enable_msi(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun);
> +                        unsigned int bus, unsigned int slot, unsigned int fun);
>  int pcifront_disable_msi(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun);
> +                         unsigned int bus, unsigned int slot, unsigned int fun);
>  int pcifront_enable_msix(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun,
> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>                           struct xen_msix_entry *entries, int n);
>  int pcifront_disable_msix(struct pcifront_dev *dev,
>                            unsigned int dom,
> -                          unsigned int bus, unsigned int slot, unsigned long fun);
> +                          unsigned int bus, unsigned int slot, unsigned int fun);
>  void shutdown_pcifront(struct pcifront_dev *dev);
> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
> index 16a4b49..0fc5b30 100644
> --- a/extras/mini-os/pcifront.c
> +++ b/extras/mini-os/pcifront.c
> @@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>                                    unsigned int *dom,
>                                    unsigned int *bus,
>                                    unsigned int *slot,
> -                                  unsigned long *fun)
> +                                  unsigned int *fun)
>  {
>      /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
>         should be enough to hold the paths we need to construct, even
> @@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op)
>  
>  int pcifront_conf_read(struct pcifront_dev *dev,
>                         unsigned int dom,
> -                       unsigned int bus, unsigned int slot, unsigned long fun,
> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>                         unsigned int off, unsigned int size, unsigned int *val)
>  {
>      struct xen_pci_op op;
> @@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
>  
>  int pcifront_conf_write(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun,
> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>                          unsigned int off, unsigned int size, unsigned int val)
>  {
>      struct xen_pci_op op;
> @@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
>  
>  int pcifront_enable_msi(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun)
> +                        unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> @@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
>  
>  int pcifront_disable_msi(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun)
> +                         unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> @@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
>  
>  int pcifront_enable_msix(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun,
> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>                           struct xen_msix_entry *entries, int n)
>  {
>      struct xen_pci_op op;
> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>  
>  int pcifront_disable_msix(struct pcifront_dev *dev,
>                            unsigned int dom,
> -                          unsigned int bus, unsigned int slot, unsigned long fun)
> +                          unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
> index ee1691b..c5d9b02 100644
> --- a/extras/mini-os/xenbus/xenbus.c
> +++ b/extras/mini-os/xenbus/xenbus.c
> @@ -15,6 +15,7 @@
>   *
>   ****************************************************************************
>   **/
> +#include <inttypes.h>
>  #include <mini-os/os.h>
>  #include <mini-os/mm.h>
>  #include <mini-os/traps.h>
> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>      err = errmsg(rep);
>      if (err)
>  	return err;
> -    sscanf((char *)(rep + 1), "%u", xbt);
> +    sscanf((char *)(rep + 1), "%lu", xbt);
>      free(rep);
>      return NULL;
>  }
> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>      domid_t ret;
>  
>      BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
> -    sscanf(dom_id, "%d", &ret);
> +    sscanf(dom_id, "%"SCNd16, &ret);
>  
>      return ret;
>  }
> -- 
> 1.8.3.2
> 

-- 
Samuel
Client: "This program has been successfully installed."
Vendor (surprised): "And where do you see an error?"
Client: "It says << HAS BEEN >>!"

From xen-devel-bounces@lists.xen.org Wed Jan 22 18:12:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Wed Jan 22 18:12:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W62HV-00083n-To; Wed, 22 Jan 2014 18:12:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W62HU-00082Q-Fn
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 18:12:04 +0000
Received: from [85.158.139.211:56515] by server-4.bemta-5.messagelabs.com id
	81/D8-26791-3F900E25; Wed, 22 Jan 2014 18:12:03 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390414322!11360930!1
X-Originating-IP: [209.85.215.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9608 invoked from network); 22 Jan 2014 18:12:03 -0000
Received: from mail-la0-f50.google.com (HELO mail-la0-f50.google.com)
	(209.85.215.50)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 18:12:03 -0000
Received: by mail-la0-f50.google.com with SMTP id ec20so608401lab.37
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 10:12:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=DqvOJ7LCncVBCI3kuKB4iFTrUOq3GxbdYaFAUxgqiiY=;
	b=nWtyLNzmkU/9nTzmuqGqYGNbTGOEpB++8xwG818S2mHaFYXWO0EM2CEbtVjEdbKo/r
	Y680kv8hqfrxNyDNXqZznwnQDk+eL76Rvg+hQuMyFRhg4EdqGZxE+sSPncJ5QUGBfsor
	3H+0VlFvkNET0yocJBTUAWfWBcK3UobOdcC5F3zMMWj2YKQCvNmgHAmvnzVlEa35tqnr
	Uj+94SUBil5CpJTC8hQnDnzntNN0PmUR1CD/sFaucoGQ9F01b8YLtgtd/Ku/YJVJAGlk
	ydCLq6H7bvH4RThiZnulBAOzCrcXMd0yhfirvK4uA4rrDkcLg5P0+HOAyrcKSPIMHQHI
	7RMA==
X-Received: by 10.152.120.231 with SMTP id lf7mr2065888lab.36.1390414322537;
	Wed, 22 Jan 2014 10:12:02 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 22 Jan 2014 10:11:42 -0800 (PST)
In-Reply-To: <1390323197-23003-2-git-send-email-oleksandr.savchenko@globallogic.com>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
	<1390323197-23003-2-git-send-email-oleksandr.savchenko@globallogic.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 22 Jan 2014 10:11:42 -0800
X-Google-Sender-Auth: fT7JyGdtDigEITnDVcmK4K2GZXE
Message-ID: <CAB=NE6Wjj406T155jN2qSpcKDCk3iRvyzCGc9=AWw9ym7mMyqQ@mail.gmail.com>
To: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Cc: Nathanael Rensen <nathanael@polymorpheus.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] pvusb drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 8:53 AM, Alexander Savchenko
<oleksandr.savchenko@globallogic.com> wrote:
> From: Nathanael Rensen <nathanael@polymorpheus.com>
>
> Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>

Without having looked at the code yet, I see that you, the sender, are not
on the From address of the patch, yet you added your Signed-off-by. This
is good but not sufficient. If Nathanael tossed a patch over to you and
you want to credit them with authorship, you certainly do want to keep
the From as theirs, but you should also be sure to have received a
Signed-off-by from them as well, so you can add it above your own
Signed-off-by tag.

  Luis
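The tag ordering Luis describes can be illustrated with a minimal, hypothetical patch header (the subject line and commands are illustrative, not from the actual series): the original author's From: and Signed-off-by come first, and the forwarder appends their own sign-off last.

```shell
# Hypothetical sketch of the sign-off chain Luis describes: keep the
# original author's From:, include their Signed-off-by, append your own.
patch_header='From: Nathanael Rensen <nathanael@polymorpheus.com>
Subject: [PATCH] pvusb: example

Signed-off-by: Nathanael Rensen <nathanael@polymorpheus.com>
Signed-off-by: Alexander Savchenko <oleksandr.savchenko@globallogic.com>'
# Count the sign-offs: both author and forwarder must appear.
printf '%s\n' "$patch_header" | grep -c 'Signed-off-by'   # prints 2
```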

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 18:19:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W62O7-0000HQ-V7; Wed, 22 Jan 2014 18:18:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W62O6-0000HL-Ns
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 18:18:54 +0000
Received: from [85.158.139.211:61312] by server-12.bemta-5.messagelabs.com id
	47/99-30017-E8B00E25; Wed, 22 Jan 2014 18:18:54 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390414733!11368216!1
X-Originating-IP: [209.85.217.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22048 invoked from network); 22 Jan 2014 18:18:53 -0000
Received: from mail-lb0-f182.google.com (HELO mail-lb0-f182.google.com)
	(209.85.217.182)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 18:18:53 -0000
Received: by mail-lb0-f182.google.com with SMTP id w7so632304lbi.13
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 10:18:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=G9FCRjKqk0CxsxMSkUWyxkOjBnt1DjJp2yHPoCY7h2M=;
	b=etpIqCn/iJDuxSntqxK9YTnlr0SeRx0qW+CY5GZoZju2F7lp7OdGy0spPiQeEEkLT6
	q4VuoABd/vPW/QFGRQmWjX6Yp14IjzjZhXJO9Bn0OADLp8+d6saugZxeKQSJHAecR6Af
	h+NBjAUHc669T/RJlLoUhy3MpncYQWhhbskk6KMRrG3dwO+6IYhzHkjLTMh95wDKcbeB
	XCgyFafpr0hets3hh9F+1v2cwa4PSb318bSJAHofU7P6OBTzaMY7pbmquhZlk3XP4K0/
	vOIjsEhRyBY4TvfxQewQaUTmV2xdeY+EHFvn6Lr9YJVMM9fXAzLLOBCEfXv+RmChpX9e
	Xi+g==
X-Received: by 10.112.135.67 with SMTP id pq3mr358721lbb.65.1390414732803;
	Wed, 22 Jan 2014 10:18:52 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 22 Jan 2014 10:18:32 -0800 (PST)
In-Reply-To: <CAM0X+hNe1tq7kUo=6Way5Zsxo4+0uJs_+wne=DjSjU3v-qFrVw@mail.gmail.com>
References: <1390323197-23003-1-git-send-email-oleksandr.savchenko@globallogic.com>
	<20140121212359.GJ2924@reaktio.net>
	<CAM0X+hNe1tq7kUo=6Way5Zsxo4+0uJs_+wne=DjSjU3v-qFrVw@mail.gmail.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 22 Jan 2014 10:18:32 -0800
X-Google-Sender-Auth: 8iFMKk-AieKx6G6zjjDWmt_HrIY
Message-ID: <CAB=NE6XRHCjmr61d0QxER8sObPUj8pVeeXK3N1B3XASPJ+7vDw@mail.gmail.com>
To: Alexander Savchenko <oleksandr.savchenko@globallogic.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/3] xen/arm: omap5: PV USB driver issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 1:49 AM, Alexander Savchenko
<oleksandr.savchenko@globallogic.com> wrote:
> In Konrad repository is an old version of drivers (for Linux 3.3).

What is the intent of the patch set you just posted? It smells to me
like you had some issues and wanted input on them, and yet you posted a
full patch series, which makes me wonder whether you are even
coordinating with the original authors of the patches. Can you please
clarify?

If you are not posting patches for review, then please don't post them
with a PATCH title, and please be clear about your intent.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 18:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W62fU-000123-Fb; Wed, 22 Jan 2014 18:36:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W62fT-00011y-LM
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 18:36:51 +0000
Received: from [85.158.137.68:8942] by server-8.bemta-3.messagelabs.com id
	00/FB-31081-2CF00E25; Wed, 22 Jan 2014 18:36:50 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390415806!10751823!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17109 invoked from network); 22 Jan 2014 18:36:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 18:36:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="95439182"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 18:36:43 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 13:36:41 -0500
Message-ID: <52E00FB7.3040508@citrix.com>
Date: Wed, 22 Jan 2014 18:36:39 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 16:39, Stefano Stabellini wrote:
> On Tue, 21 Jan 2014, Zoltan Kiss wrote:
>> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>>   		pfn = m2p_find_override_pfn(mfn, ~0);
>>   	}
>>
>> -	/*
>> +	/*
>
> Spurious change?
It removes a stray space from the original code. Not necessary, but if 
it's there, I think we can keep it.

>> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
>> index 2ae8699..0060178 100644
>> --- a/arch/x86/xen/p2m.c
>> +++ b/arch/x86/xen/p2m.c
>> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>>
>>   /* Add an MFN override for a particular page */
>>   int m2p_add_override(unsigned long mfn, struct page *page,
>> -		struct gnttab_map_grant_ref *kmap_op)
>> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
>
> Do we really need to add another additional parameter to
> m2p_add_override?
> I would just let m2p_add_override and m2p_remove_override call
> page_to_pfn again. It is not that expensive.
Yes, because that page_to_pfn can return something different. That's why 
the v2 patches failed.

>> @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>>   }
>>   EXPORT_SYMBOL_GPL(m2p_add_override);
>>   int m2p_remove_override(struct page *page,
>> -		struct gnttab_map_grant_ref *kmap_op)
>> +			struct gnttab_map_grant_ref *kmap_op,
>> +			unsigned long pfn,
>> +			unsigned long mfn)
>
> Same here
Same as above.

>> @@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   		} else {
>>   			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>>   		}
>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +
>> +		WARN_ON(PagePrivate(pages[i]));
>> +		SetPagePrivate(pages[i]);
>> +		set_page_private(pages[i], mfn);
>> +
>> +		pages[i]->index = pfn_to_mfn(pfn);
>> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>> +			return -ENOMEM;
>
> goto out
And ret = -ENOMEM
>
>
>> +		if (m2p_override)
>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> +					       &kmap_ops[i] : NULL, pfn);
>>   		if (ret)
>>   			goto out;
>>   	}
>> @@ -937,17 +951,34 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   	if (lazy)
>>   		arch_leave_lazy_mmu_mode();
>>
>> -	return ret;
>> +	return 0;
>
> We are loosing the error code possibly returned by m2p_add_override and
> the previous check.
I'll fix that. Also in unmap.

		return ret;
>> @@ -958,17 +989,32 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>>   					INVALID_P2M_ENTRY);
>>   		}
>> -		return ret;
>> +		return 0;
>>   	}
>>
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> +	if (m2p_override &&
>> +	    !in_interrupt() &&
>> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>>   		arch_enter_lazy_mmu_mode();
>>   		lazy = true;
>>   	}
>>
>>   	for (i = 0; i < count; i++) {
>> -		ret = m2p_remove_override(pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +		mfn = get_phys_to_machine(pfn);
>> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
>> +			return -EINVAL;
>
> goto out
And ret = -EINVAL

The above are the result of the fact that I originally based this on
3.10, where the out label didn't exist. I'll send the next version when
the tests pass.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 18:48:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W62qA-0001gf-LD; Wed, 22 Jan 2014 18:47:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W62q8-0001ga-SU
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 18:47:53 +0000
Received: from [193.109.254.147:47546] by server-10.bemta-14.messagelabs.com
	id A9/31-20752-85210E25; Wed, 22 Jan 2014 18:47:52 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390416468!12503394!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26787 invoked from network); 22 Jan 2014 18:47:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 18:47:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="93398675"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Jan 2014 18:47:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 13:47:47 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W62q2-00030f-E1;
	Wed, 22 Jan 2014 18:47:46 +0000
Message-ID: <52E01252.9000400@citrix.com>
Date: Wed, 22 Jan 2014 18:47:46 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, <xenmail43267@gmail.com>, 
	<xen-devel@lists.xen.org>, <alex.sharp@orionvm.com>,
	<Ian.Campbell@citrix.com>, <stefano.stabellini@citrix.com>, Mike Neilsen
	<mneilsen@acm.org>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
	<20140122175706.GA5698@type.youpi.perso.aquilenet.fr>
In-Reply-To: <20140122175706.GA5698@type.youpi.perso.aquilenet.fr>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 17:57, Samuel Thibault wrote:
> xenmail43267@gmail.com, on Wed 22 Jan 2014 11:41:11 -0600, wrote:
>> From: Mike Neilsen <mneilsen@acm.org>
>>
>> This is a fix for bug 35:
>> http://bugs.xenproject.org/xen/bug/35
>>
>> This bug report describes several format string mismatches which prevent
>> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This=
 is a
>> copy of Alex Sharp's original patch with the following modifications:
>>
>> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus=
.c to
>>   avoid stack corruption
>> * Samuel Thibault's recommendation to make "fun" an unsigned int rather =
than an
>>   unsigned long in pcifront_physical_to_virtual and related functions
>>   (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
>>
>> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
>>
>> Signed-off-by: Mike Neilsen <mneilsen@acm.org>
> Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

And after peeking at the Coverity database,

Coverity-IDs: 1055807 1055808 1055809 1055810

As this relates to build issues, it should probably be considered for
inclusion in 4.4.

~Andrew

>
>> ---
>> Changed since v1:
>> * Change "fun" arguments into unsigned ints
>> ---
>>  extras/mini-os/fbfront.c          |  4 ++--
>>  extras/mini-os/include/pcifront.h | 12 ++++++------
>>  extras/mini-os/pcifront.c         | 14 +++++++-------
>>  extras/mini-os/xenbus/xenbus.c    |  5 +++--
>>  4 files changed, 18 insertions(+), 17 deletions(-)
>>
>> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
>> index 1e01513..9cc07b4 100644
>> --- a/extras/mini-os/fbfront.c
>> +++ b/extras/mini-os/fbfront.c
>> @@ -105,7 +105,7 @@ again:
>>          free(err);
>>      }
>>  =

>> -    err =3D xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s=
));
>> +    err =3D xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(=
s));
>>      if (err) {
>>          message =3D "writing page-ref";
>>          goto abort_transaction;
>> @@ -468,7 +468,7 @@ again:
>>          free(err);
>>      }
>>  =

>> -    err =3D xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s=
));
>> +    err =3D xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(=
s));
>>      if (err) {
>>          message =3D "writing page-ref";
>>          goto abort_transaction;
>> diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/=
pcifront.h
>> index 0a6be8e..1b05963 100644
>> --- a/extras/mini-os/include/pcifront.h
>> +++ b/extras/mini-os/include/pcifront.h
>> @@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_=
pci_op *op);
>>  void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int d=
omain, unsigned int bus, unsigned slot, unsigned int fun));
>>  int pcifront_conf_read(struct pcifront_dev *dev,
>>                         unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned lo=
ng fun,
>> +                       unsigned int bus, unsigned int slot, unsigned in=
t fun,
>>                         unsigned int off, unsigned int size, unsigned in=
t *val);
>>  int pcifront_conf_write(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned l=
ong fun,
>> +                        unsigned int bus, unsigned int slot, unsigned i=
nt fun,
>>                          unsigned int off, unsigned int size, unsigned i=
nt val);
>>  int pcifront_enable_msi(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned l=
ong fun);
>> +                        unsigned int bus, unsigned int slot, unsigned i=
nt fun);
>>  int pcifront_disable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned =
long fun);
>> +                         unsigned int bus, unsigned int slot, unsigned =
int fun);
>>  int pcifront_enable_msix(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned =
long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned =
int fun,
>>                           struct xen_msix_entry *entries, int n);
>>  int pcifront_disable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned=
 long fun);
>> +                          unsigned int bus, unsigned int slot, unsigned=
 int fun);
>>  void shutdown_pcifront(struct pcifront_dev *dev);
>> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
>> index 16a4b49..0fc5b30 100644
>> --- a/extras/mini-os/pcifront.c
>> +++ b/extras/mini-os/pcifront.c
>> @@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_de=
v *dev,
>>                                    unsigned int *dom,
>>                                    unsigned int *bus,
>>                                    unsigned int *slot,
>> -                                  unsigned long *fun)
>> +                                  unsigned int *fun)
>>  {
>>      /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
>>         should be enough to hold the paths we need to construct, even
>> @@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xe=
n_pci_op *op)
>>  =

>>  int pcifront_conf_read(struct pcifront_dev *dev,
>>                         unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned lo=
ng fun,
>> +                       unsigned int bus, unsigned int slot, unsigned in=
t fun,
>>                         unsigned int off, unsigned int size, unsigned in=
t *val)
>>  {
>>      struct xen_pci_op op;
>> @@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
>>  =

>>  int pcifront_conf_write(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned l=
ong fun,
>> +                        unsigned int bus, unsigned int slot, unsigned i=
nt fun,
>>                          unsigned int off, unsigned int size, unsigned i=
nt val)
>>  {
>>      struct xen_pci_op op;
>> @@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
>>  =

>>  int pcifront_enable_msi(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned l=
ong fun)
>> +                        unsigned int bus, unsigned int slot, unsigned i=
nt fun)
>>  {
>>      struct xen_pci_op op;
>>  =

>> @@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
>>  =

>>  int pcifront_disable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned =
long fun)
>> +                         unsigned int bus, unsigned int slot, unsigned =
int fun)
>>  {
>>      struct xen_pci_op op;
>>  =

>> @@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
>>  =

>>  int pcifront_enable_msix(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned =
long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned =
int fun,
>>                           struct xen_msix_entry *entries, int n)
>>  {
>>      struct xen_pci_op op;
>> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>>  =

>>  int pcifront_disable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned=
 long fun)
>> +                          unsigned int bus, unsigned int slot, unsigned=
 int fun)
>>  {
>>      struct xen_pci_op op;
>>  =

>> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenb=
us.c
>> index ee1691b..c5d9b02 100644
>> --- a/extras/mini-os/xenbus/xenbus.c
>> +++ b/extras/mini-os/xenbus/xenbus.c
>> @@ -15,6 +15,7 @@
>>   *
>>   **********************************************************************=
******
>>   **/
>> +#include <inttypes.h>
>>  #include <mini-os/os.h>
>>  #include <mini-os/mm.h>
>>  #include <mini-os/traps.h>
>> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t =
*xbt)
>>      err =3D errmsg(rep);
>>      if (err)
>>  	return err;
>> -    sscanf((char *)(rep + 1), "%u", xbt);
>> +    sscanf((char *)(rep + 1), "%lu", xbt);
>>      free(rep);
>>      return NULL;
>>  }
>> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>>      domid_t ret;
>>  =

>>      BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
>> -    sscanf(dom_id, "%d", &ret);
>> +    sscanf(dom_id, "%"SCNd16, &ret);
>>  =

>>      return ret;
>>  }
>> -- =

>> 1.8.3.2
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 18:51:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 18:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W62th-0002HE-Ih; Wed, 22 Jan 2014 18:51:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W62tf-0002H9-Df
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 18:51:31 +0000
Received: from [85.158.139.211:45520] by server-11.bemta-5.messagelabs.com id
	7C/8F-23268-23310E25; Wed, 22 Jan 2014 18:51:30 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390416688!1055775!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27882 invoked from network); 22 Jan 2014 18:51:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 18:51:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="95446297"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 18:51:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 13:51:27 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W62ta-000344-Ku;
	Wed, 22 Jan 2014 18:51:26 +0000
Date: Wed, 22 Jan 2014 18:50:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <52E00FB7.3040508@citrix.com>
Message-ID: <alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
	<52E00FB7.3040508@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Zoltan Kiss wrote:
> On 22/01/14 16:39, Stefano Stabellini wrote:
> > On Tue, 21 Jan 2014, Zoltan Kiss wrote:
> > > @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long
> > > mfn)
> > >   		pfn = m2p_find_override_pfn(mfn, ~0);
> > >   	}
> > > 
> > > -	/*
> > > +	/*
> > 
> > Spurious change?
> It removes a stray space from the original code. Not necessary, but if it's
> there, I think we can keep it.

Usually cosmetic changes are done in a separate patch, or at the very
least they are mentioned in the commit message.


> > > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > > index 2ae8699..0060178 100644
> > > --- a/arch/x86/xen/p2m.c
> > > +++ b/arch/x86/xen/p2m.c
> > > @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
> > > 
> > >   /* Add an MFN override for a particular page */
> > >   int m2p_add_override(unsigned long mfn, struct page *page,
> > > -		struct gnttab_map_grant_ref *kmap_op)
> > > +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
> > 
> > Do we really need to add another additional parameter to
> > m2p_add_override?
> > I would just let m2p_add_override and m2p_remove_override call
> > page_to_pfn again. It is not that expensive.
> Yes, because that page_to_pfn can return something different. That's why the
> v2 patches failed.

I am really curious: how can page_to_pfn return something different?
I don't think that is supposed to happen.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 19:04:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 19:04:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W635a-0002zN-Mc; Wed, 22 Jan 2014 19:03:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W635Z-0002zI-7J
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 19:03:49 +0000
Received: from [85.158.143.35:19390] by server-3.bemta-4.messagelabs.com id
	D9/A3-32360-41610E25; Wed, 22 Jan 2014 19:03:48 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390417426!144739!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23821 invoked from network); 22 Jan 2014 19:03:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 19:03:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="95452172"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 19:03:20 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 14:03:19 -0500
Message-ID: <52E015F5.70408@citrix.com>
Date: Wed, 22 Jan 2014 19:03:17 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
	<52E00FB7.3040508@citrix.com>
	<alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/01/14 18:50, Stefano Stabellini wrote:
> On Wed, 22 Jan 2014, Zoltan Kiss wrote:
>> On 22/01/14 16:39, Stefano Stabellini wrote:
>>> On Tue, 21 Jan 2014, Zoltan Kiss wrote:
>>>> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long
>>>> mfn)
>>>>    		pfn = m2p_find_override_pfn(mfn, ~0);
>>>>    	}
>>>>
>>>> -	/*
>>>> +	/*
>>>
>>> Spurious change?
>> It removes a stray space from the original code. Not necessary, but if it's
>> there, I think we can keep it.
>
> Usually cosmetic changes are done in a separate patch, or at the very
> least they are mentioned in the commit message.
Ok, I'll mention it.
>
>
>>>> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
>>>> index 2ae8699..0060178 100644
>>>> --- a/arch/x86/xen/p2m.c
>>>> +++ b/arch/x86/xen/p2m.c
>>>> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>>>>
>>>>    /* Add an MFN override for a particular page */
>>>>    int m2p_add_override(unsigned long mfn, struct page *page,
>>>> -		struct gnttab_map_grant_ref *kmap_op)
>>>> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
>>>
>>> Do we really need to add another additional parameter to
>>> m2p_add_override?
>>> I would just let m2p_add_override and m2p_remove_override call
>>> page_to_pfn again. It is not that expensive.
>> Yes, because that page_to_pfn can return something different. That's why the
>> v2 patches failed.
>
> I am really curious: how can page_to_pfn return something different?
> I don't think that is supposed to happen.
You call set_phys_to_machine before calling m2p* functions.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 19:07:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 19:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W638c-00036y-C5; Wed, 22 Jan 2014 19:06:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1W638b-00036t-Ka
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 19:06:57 +0000
Received: from [193.109.254.147:16210] by server-16.bemta-14.messagelabs.com
	id 55/89-20600-0D610E25; Wed, 22 Jan 2014 19:06:56 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390417615!12586625!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24374 invoked from network); 22 Jan 2014 19:06:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 19:06:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,701,1384300800"; d="scan'208";a="95454195"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Jan 2014 19:06:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 22 Jan 2014 14:06:54 -0500
Received: from chilopoda.uk.xensource.com ([10.80.2.139])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1W638X-0003GB-K4;
	Wed, 22 Jan 2014 19:06:53 +0000
Message-ID: <52E016CD.4020007@citrix.com>
Date: Wed, 22 Jan 2014 19:06:53 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
In-Reply-To: <1390408531.32519.78.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/22/2014 04:35 PM, Ian Campbell wrote:
> Julien,

Hi Ian,

> I wonder if the following is any better than the current stuff in
> staging for the issue you are seeing with BSD at start of day? Can you
> try it please.

Thanks for the patch! It allows me to boot FreeBSD correctly (i.e. with
Write-Through for the first page table) on Midway.

> It has survived >1000 bootloops on Midway and >50 on Mustang, both are
> still going.
> 
> It basically does a cache clean on all RAM mapped in the p2m. Anything
> in the cache is either the result of an earlier scrub of the page or
> something toolstack just wrote, so there is no need to be concerned
> about clean vs. invalidate -- clean is always correct.

I don't remember what the conclusion was... is it necessary to flush all
the RAM? Flushing the Kernel/initrd/DTB space should be enough.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 19:37:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 19:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W63bJ-0004hz-7e; Wed, 22 Jan 2014 19:36:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W63bH-0004hu-HL
	for xen-devel@lists.xensource.com; Wed, 22 Jan 2014 19:36:35 +0000
Received: from [85.158.139.211:9302] by server-4.bemta-5.messagelabs.com id
	8F/47-26791-2CD10E25; Wed, 22 Jan 2014 19:36:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390419392!11366069!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22655 invoked from network); 22 Jan 2014 19:36:33 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Jan 2014 19:36:33 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0MJaTAt021275
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 19:36:29 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0MJaSX1013070
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 19:36:28 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0MJaRst021997; Wed, 22 Jan 2014 19:36:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 11:36:27 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0D6301BFA72; Wed, 22 Jan 2014 14:36:26 -0500 (EST)
Date: Wed, 22 Jan 2014 14:36:25 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20140122193625.GA8827@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.14-rc0-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0805564591263266617=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============0805564591263266617==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="sm4nu43k4a2Rpi4c"
Content-Disposition: inline


--sm4nu43k4a2Rpi4c
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.14-rc0-tag

which has two major features that the Xen community is excited about:
<blurb  - please include it in the merge commit>

The first is event channel scalability by David Vrabel - we switch over from a
two-level per-cpu bitmap of events (IRQs) to a FIFO queue with priorities.
This lets us handle more events, with lower latency and better scalability.
Good stuff.

The other is PVH by Mukesh Rathor. In short, PV is a mode where the kernel lets
the hypervisor program page-tables, segments, etc. With EPT/NPT capabilities in
current processors, the overhead of doing this in an HVM (Hardware Virtual
Machine) container is much lower than having the hypervisor do it for us. The
upshot is that a PV guest can run without routing page-table, segment, syscall,
etc. updates through the hypervisor - instead it is all done within the guest
container. It is a "hybrid" PV - hence the 'PVH' name - a PV guest within an
HVM container.

The major benefits are less code to deal with - for example we only
use one function from the pv_mmu_ops (which has 39 function calls);
faster syscall performance (no context switches into the hypervisor);
fewer traps on various operations; etc.

It is still being baked - the ABI is not yet set in stone. But it is
pretty awesome and we are excited about it.

Lastly, there are some changes to ARM code - you should get a simple conflict
which has been resolved in #linux-next.

In short, this pull has awesome features.

</blurb  - please include it in the merge commit>

Please pull!

 MAINTAINERS                                |    1 +
 arch/arm/include/asm/xen/page.h            |    3 +-
 arch/arm/xen/enlighten.c                   |    9 +-
 arch/x86/include/asm/xen/page.h            |    8 +-
 arch/x86/xen/Kconfig                       |    4 +
 arch/x86/xen/enlighten.c                   |  126 +-
 arch/x86/xen/grant-table.c                 |   63 +
 arch/x86/xen/irq.c                         |    5 +-
 arch/x86/xen/mmu.c                         |  166 ++-
 arch/x86/xen/p2m.c                         |   15 +-
 arch/x86/xen/platform-pci-unplug.c         |   79 +-
 arch/x86/xen/setup.c                       |   40 +-
 arch/x86/xen/smp.c                         |   49 +-
 arch/x86/xen/time.c                        |    1 +
 arch/x86/xen/xen-head.S                    |   25 +-
 arch/x86/xen/xen-ops.h                     |    1 +
 drivers/block/xen-blkfront.c               |    4 +-
 drivers/char/tpm/xen-tpmfront.c            |    4 +
 drivers/input/misc/xen-kbdfront.c          |    4 +
 drivers/net/xen-netfront.c                 |    2 +-
 drivers/pci/xen-pcifront.c                 |    4 +
 drivers/video/xen-fbfront.c                |    6 +-
 drivers/xen/Kconfig                        |    1 -
 drivers/xen/Makefile                       |    3 +-
 drivers/xen/balloon.c                      |    9 +-
 drivers/xen/dbgp.c                         |    2 +-
 drivers/xen/events.c                       | 1935 ----------------------------
 drivers/xen/events/Makefile                |    5 +
 drivers/xen/events/events_2l.c             |  372 ++++++
 drivers/xen/events/events_base.c           | 1716 ++++++++++++++++++++++++
 drivers/xen/events/events_fifo.c           |  428 ++++++
 drivers/xen/events/events_internal.h       |  150 +++
 drivers/xen/evtchn.c                       |    2 +-
 drivers/xen/gntdev.c                       |    2 +-
 drivers/xen/grant-table.c                  |   90 +-
 drivers/xen/pci.c                          |    2 +
 drivers/xen/platform-pci.c                 |   11 +-
 drivers/xen/xenbus/xenbus_client.c         |    3 +-
 drivers/xen/xenbus/xenbus_probe_frontend.c |    2 +-
 include/xen/events.h                       |    9 +
 include/xen/grant_table.h                  |    9 +-
 include/xen/interface/elfnote.h            |   13 +
 include/xen/interface/event_channel.h      |   68 +
 include/xen/interface/xen.h                |    6 -
 include/xen/platform_pci.h                 |   25 +-
 include/xen/xen.h                          |   14 +
 46 files changed, 3379 insertions(+), 2117 deletions(-)


Ben Hutchings (1):
      xen/pci: Fix build on non-x86

David Vrabel (15):
      xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
      xen/events: remove unnecessary init_evtchn_cpu_bindings()
      xen/events: move drivers/xen/events.c into drivers/xen/events/
      xen/events: move 2-level specific code into its own file
      xen/events: add struct evtchn_ops for the low-level port operations
      xen/events: allow setup of irq_info to fail
      xen/events: add a evtchn_op for port setup
      xen/events: Refactor evtchn_to_irq array to be dynamically allocated
      xen/events: add xen_evtchn_mask_all()
      xen/evtchn: support more than 4096 ports
      xen/events: Add the hypervisor interface for the FIFO-based event channels
      xen/events: allow event channel priority to be set
      xen/x86: set VIRQ_TIMER priority to maximum
      xen/events: use the FIFO-based ABI if available
      MAINTAINERS: add git repository for Xen

Ian Campbell (1):
      xen: balloon: enable for ARM

Jie Liu (1):
      xen: simplify balloon_first_page() with list_first_entry_or_null()

Konrad Rzeszutek Wilk (12):
      xen/pvhvm: If xen_platform_pci=0 is set don't blow up (v4).
      xen/pvhvm: Remove the xen_platform_pci int.
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code (v2).
      xen/mmu: Cleanup xen_pagetable_p2m_copy a bit.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct (v3).
      xen/pvh: Piggyback on PVHVM for grant driver (v4)
      xen/grant-table: Force to use v1 of grants.
      xen/pvh: Fix compile issues with xen_pvh_domain()
      xen/pvh: Use 'depend' instead of 'select'.

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v3).
      xen/pvh: Early bootup changes in PV code (v4).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh/mmu: Use PV TLB instead of native.
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh: Support ParaVirtualized Hardware extensions (v3).

Paul Gortmaker (1):
      xen: delete new instances of __cpuinit usage

Roger Pau Monne (1):
      xen/pvh: Set X86_CR0_WP and others in CR0 (v2)

Stefano Stabellini (1):
      xen/fb: allow xenfb initialization for hvm guests

Wei Liu (3):
      asm/xen/page.h: remove redundant semicolon
      xen/events: introduce test_and_set_mask()
      xen/events: replace raw bit ops with functions

Wei Yongjun (3):
      xen/pvh: remove duplicated include from enlighten.c
      xen-platform: fix error return code in platform_pci_init()
      xen/evtchn_fifo: fix error return code in evtchn_fifo_setup()

Yijing Wang (1):
      xen: Use dev_is_pci() to check whether it is pci device


--sm4nu43k4a2Rpi4c
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS4B21AAoJEFjIrFwIi8fJcHkH/ieS9chu+NCtIiBecbCaDsVk
zCM36mAir+SKabwsZkSH8FrAPfqetF5wfnl/WoXOGto3ep8GjfJJv0LSfiNOtxHi
2jpKSs0NyzVb9Vd6zGeygRQXyrB5nYaBT33/hZHX3DTXbQErKPjKdeQsOzxvqSKf
kPy32q34MxMtMHO3pJnVJyzfrfl5mMiE1Xfh6yQiIr1n2KnWesCxgMOKrfpMomQF
wEPfuKJXMc09J04KXaCHwxf1TEB3J7HvnqZABdXbtODDc42RkPvslP3SO8tv5N/O
o8jikXrUReAMZRVtkzT5uvB9b6K04WoP3lfokiDxD0IvqAMuk44HCbqDNfHZXS8=
=MqO7
-----END PGP SIGNATURE-----

--sm4nu43k4a2Rpi4c--


--===============0805564591263266617==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0805564591263266617==--


Errors-To: xen-devel-bounces@lists.xen.org


--===============0805564591263266617==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="sm4nu43k4a2Rpi4c"
Content-Disposition: inline


--sm4nu43k4a2Rpi4c
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.14-rc0-tag

which has two major features that the Xen community is excited about:
<blurb  - please include it in the merge commit>

The first is event channel scalability by David Vrabel - we switch over from a
two-level per-cpu bitmap of events (IRQs) to a FIFO-based queue with priorities.
This lets us handle more events, with lower latency and better scalability.
Good stuff.

The other is PVH by Mukesh Rathor. In short, PV is a mode where the kernel lets
the hypervisor program page-tables, segments, etc. With EPT/NPT capabilities in
current processors, the overhead of doing this in an HVM (Hardware Virtual
Machine) container is much lower than having the hypervisor do it for us. In
short, we let a PV guest run without routing page-table, segment, syscall, etc.
updates through the hypervisor - instead it is all done within the guest
container. It is a "hybrid" PV - hence the 'PVH' name - a PV guest within an
HVM container.

The major benefits are less code to deal with - for example we only
use one function from the pv_mmu_ops (which has 39 function calls);
faster syscall performance (no context switches into the hypervisor);
fewer traps on various operations; etc.

It is still being baked - the ABI is not yet set in stone. But it is
pretty awesome and we are excited about it.

Lastly, there are some changes to ARM code - you should get a simple conflict
which has been resolved in linux-next.

In short, this pull has awesome features.

</blurb  - please include it in the merge commit>

Please pull!

 MAINTAINERS                                |    1 +
 arch/arm/include/asm/xen/page.h            |    3 +-
 arch/arm/xen/enlighten.c                   |    9 +-
 arch/x86/include/asm/xen/page.h            |    8 +-
 arch/x86/xen/Kconfig                       |    4 +
 arch/x86/xen/enlighten.c                   |  126 +-
 arch/x86/xen/grant-table.c                 |   63 +
 arch/x86/xen/irq.c                         |    5 +-
 arch/x86/xen/mmu.c                         |  166 ++-
 arch/x86/xen/p2m.c                         |   15 +-
 arch/x86/xen/platform-pci-unplug.c         |   79 +-
 arch/x86/xen/setup.c                       |   40 +-
 arch/x86/xen/smp.c                         |   49 +-
 arch/x86/xen/time.c                        |    1 +
 arch/x86/xen/xen-head.S                    |   25 +-
 arch/x86/xen/xen-ops.h                     |    1 +
 drivers/block/xen-blkfront.c               |    4 +-
 drivers/char/tpm/xen-tpmfront.c            |    4 +
 drivers/input/misc/xen-kbdfront.c          |    4 +
 drivers/net/xen-netfront.c                 |    2 +-
 drivers/pci/xen-pcifront.c                 |    4 +
 drivers/video/xen-fbfront.c                |    6 +-
 drivers/xen/Kconfig                        |    1 -
 drivers/xen/Makefile                       |    3 +-
 drivers/xen/balloon.c                      |    9 +-
 drivers/xen/dbgp.c                         |    2 +-
 drivers/xen/events.c                       | 1935 ----------------------------
 drivers/xen/events/Makefile                |    5 +
 drivers/xen/events/events_2l.c             |  372 ++++++
 drivers/xen/events/events_base.c           | 1716 ++++++++++++++++++++++++
 drivers/xen/events/events_fifo.c           |  428 ++++++
 drivers/xen/events/events_internal.h       |  150 +++
 drivers/xen/evtchn.c                       |    2 +-
 drivers/xen/gntdev.c                       |    2 +-
 drivers/xen/grant-table.c                  |   90 +-
 drivers/xen/pci.c                          |    2 +
 drivers/xen/platform-pci.c                 |   11 +-
 drivers/xen/xenbus/xenbus_client.c         |    3 +-
 drivers/xen/xenbus/xenbus_probe_frontend.c |    2 +-
 include/xen/events.h                       |    9 +
 include/xen/grant_table.h                  |    9 +-
 include/xen/interface/elfnote.h            |   13 +
 include/xen/interface/event_channel.h      |   68 +
 include/xen/interface/xen.h                |    6 -
 include/xen/platform_pci.h                 |   25 +-
 include/xen/xen.h                          |   14 +
 46 files changed, 3379 insertions(+), 2117 deletions(-)


Ben Hutchings (1):
      xen/pci: Fix build on non-x86

David Vrabel (15):
      xen/events: refactor retrigger_dynirq() and resend_irq_on_evtchn()
      xen/events: remove unnecessary init_evtchn_cpu_bindings()
      xen/events: move drivers/xen/events.c into drivers/xen/events/
      xen/events: move 2-level specific code into its own file
      xen/events: add struct evtchn_ops for the low-level port operations
      xen/events: allow setup of irq_info to fail
      xen/events: add a evtchn_op for port setup
      xen/events: Refactor evtchn_to_irq array to be dynamically allocated
      xen/events: add xen_evtchn_mask_all()
      xen/evtchn: support more than 4096 ports
      xen/events: Add the hypervisor interface for the FIFO-based event channels
      xen/events: allow event channel priority to be set
      xen/x86: set VIRQ_TIMER priority to maximum
      xen/events: use the FIFO-based ABI if available
      MAINTAINERS: add git repository for Xen

Ian Campbell (1):
      xen: balloon: enable for ARM

Jie Liu (1):
      xen: simplify balloon_first_page() with list_first_entry_or_null()

Konrad Rzeszutek Wilk (12):
      xen/pvhvm: If xen_platform_pci=0 is set don't blow up (v4).
      xen/pvhvm: Remove the xen_platform_pci int.
      xen/pvh: Don't setup P2M tree.
      xen/mmu/p2m: Refactor the xen_pagetable_init code (v2).
      xen/mmu: Cleanup xen_pagetable_p2m_copy a bit.
      xen/grants: Remove gnttab_max_grant_frames dependency on gnttab_init.
      xen/grant-table: Refactor gnttab_init
      xen/grant: Implement an grant frame array struct (v3).
      xen/pvh: Piggyback on PVHVM for grant driver (v4)
      xen/grant-table: Force to use v1 of grants.
      xen/pvh: Fix compile issues with xen_pvh_domain()
      xen/pvh: Use 'depend' instead of 'select'.

Mukesh Rathor (12):
      xen/p2m: Check for auto-xlat when doing mfn_to_local_pfn.
      xen/pvh/x86: Define what an PVH guest is (v3).
      xen/pvh: Early bootup changes in PV code (v4).
      xen/pvh: MMU changes for PVH (v2)
      xen/pvh/mmu: Use PV TLB instead of native.
      xen/pvh: Setup up shared_info.
      xen/pvh: Load GDT/GS in early PV bootup code for BSP.
      xen/pvh: Secondary VCPU bringup (non-bootup CPUs)
      xen/pvh: Update E820 to work with PVH (v2)
      xen/pvh: Piggyback on PVHVM for event channels (v2)
      xen/pvh: Piggyback on PVHVM XenBus.
      xen/pvh: Support ParaVirtualized Hardware extensions (v3).

Paul Gortmaker (1):
      xen: delete new instances of __cpuinit usage

Roger Pau Monne (1):
      xen/pvh: Set X86_CR0_WP and others in CR0 (v2)

Stefano Stabellini (1):
      xen/fb: allow xenfb initialization for hvm guests

Wei Liu (3):
      asm/xen/page.h: remove redundant semicolon
      xen/events: introduce test_and_set_mask()
      xen/events: replace raw bit ops with functions

Wei Yongjun (3):
      xen/pvh: remove duplicated include from enlighten.c
      xen-platform: fix error return code in platform_pci_init()
      xen/evtchn_fifo: fix error return code in evtchn_fifo_setup()

Yijing Wang (1):
      xen: Use dev_is_pci() to check whether it is pci device


--sm4nu43k4a2Rpi4c
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS4B21AAoJEFjIrFwIi8fJcHkH/ieS9chu+NCtIiBecbCaDsVk
zCM36mAir+SKabwsZkSH8FrAPfqetF5wfnl/WoXOGto3ep8GjfJJv0LSfiNOtxHi
2jpKSs0NyzVb9Vd6zGeygRQXyrB5nYaBT33/hZHX3DTXbQErKPjKdeQsOzxvqSKf
kPy32q34MxMtMHO3pJnVJyzfrfl5mMiE1Xfh6yQiIr1n2KnWesCxgMOKrfpMomQF
wEPfuKJXMc09J04KXaCHwxf1TEB3J7HvnqZABdXbtODDc42RkPvslP3SO8tv5N/O
o8jikXrUReAMZRVtkzT5uvB9b6K04WoP3lfokiDxD0IvqAMuk44HCbqDNfHZXS8=
=MqO7
-----END PGP SIGNATURE-----

--sm4nu43k4a2Rpi4c--


--===============0805564591263266617==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0805564591263266617==--


From xen-devel-bounces@lists.xen.org Wed Jan 22 20:19:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 20:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64GF-000799-Jw; Wed, 22 Jan 2014 20:18:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W64GE-000794-9k
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 20:18:54 +0000
Received: from [85.158.137.68:34457] by server-8.bemta-3.messagelabs.com id
	99/E4-31081-DA720E25; Wed, 22 Jan 2014 20:18:53 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390421931!7093939!1
X-Originating-IP: [209.85.216.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14118 invoked from network); 22 Jan 2014 20:18:52 -0000
Received: from mail-qa0-f43.google.com (HELO mail-qa0-f43.google.com)
	(209.85.216.43)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 20:18:52 -0000
Received: by mail-qa0-f43.google.com with SMTP id o15so1102585qap.30
	for <xen-devel@lists.xenproject.org>;
	Wed, 22 Jan 2014 12:18:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=lWdux1uIhvvgINMiH/tybx1Y99j5W4aljNv+Xl8Oopc=;
	b=AQFC4XfH2GzbYgy+iKb8A4Y8SJ86s0atQ/BTmyUIFPOFZT39oM2YZnS6cUIpD9V3N9
	xCinGi7QPKTPiCia+7oZbpBWtb4dwF3v70jH/2HINIghlOtRy5WTrS/Kue/vvVxrlaUx
	NT01wzVcPHLTgmBKvAwBOjKgXeZHj5EbesHw7E4YpYXNyRiCpgm1MfwTioDx3yDs7482
	hQoiLLGgmGsrlSbO/+LHb8HgyQUaH8DgX0t8PWXABVHDEZ7V/GyATV7RnTEDmw7ihUMi
	CijkICF5VBqUheSAmjKcvgRQ2/5Q8rmhhLQ9wllBDNhWsVb7ssfbJozE8x3roFizEg2u
	UUOg==
MIME-Version: 1.0
X-Received: by 10.224.34.71 with SMTP id k7mr5697548qad.15.1390421930173; Wed,
	22 Jan 2014 12:18:50 -0800 (PST)
Received: by 10.96.133.33 with HTTP; Wed, 22 Jan 2014 12:18:50 -0800 (PST)
In-Reply-To: <52DFD5DB.6060603@iogearbox.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
Date: Wed, 22 Jan 2014 15:18:50 -0500
Message-ID: <CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Daniel Borkmann <borkmann@iogearbox.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Michel Lespinasse <walken@google.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
> On 01/22/2014 08:29 AM, Steven Noonan wrote:
>>
>> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>>>
>>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>>>>
>>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>>>>>
>>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>>>>> <gregkh@linuxfoundation.org> wrote:
>>>
>>>
>>> Adding extra folks to the party.
>>>>>>
>>>>>>
>>>>>> Odds are this also shows up in 3.13, right?
>>>>
>>>>
>>>> Reproduced using 3.13 on the PV guest:
>>>>
>>>>         [  368.756763] BUG: Bad page map in process mp
>>>> pte:80000004a67c6165 pmd:e9b706067
>>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
>>>> mapping:          (null) index:0x0
>>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
>>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
>>>> #1
>>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
>>>> ffffffff814d8748 00007fd1388b7000
>>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
>>>> 0000000000000000 0000000000000000
>>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
>>>> 00007fd1388b8000 ffff880e9eaf3e30
>>>>         [  368.756815] Call Trace:
>>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>>>         [  368.756837]  [<ffffffff8116eae3>]
>>>> unmap_single_vma+0x583/0x890
>>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>>>>         [  368.756869]  [<ffffffff814e70ed>]
>>>> system_call_fastpath+0x1a/0x1f
>>>>         [  368.756872] Disabling lock debugging due to kernel taint
>>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
>>>> idx:0 val:-1
>>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
>>>> idx:1 val:1
>>>>
>>>>>
>>>>> Probably. I don't have a Xen PV setup to test with (and very little
>>>>> interest in setting one up).. And I have a suspicion that it might not
>>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>>>>>
>>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>>>>> confused.
>>>>>
>>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>>>>> _PAGE_PRESENT.
>>>>>
>>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>>>>> that makes him go "Ahh, yes, silly case".
>>>>>
>>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>>>>>
>>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>>>>> attached test-case (but apparently only under Xen PV). There it
>>>>> apparently causes a "BUG: Bad page map .." error.
>>>
>>>
>>> I *think* it is due to the fact that pmd_numa and pte_numa are getting the
>>> _raw_ value of PMDs and PTEs. That is - they do not use the pvops interface
>>> and instead read the values directly from the page-table. Since the
>>> page-table is also manipulated by the hypervisor - there are certain
>>> flags it also sets to do its business. It might be that it uses
>>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>>> pte_flags that would invoke the pvops interface.
>>>
>>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
>>>
>>> This not-compiled-totally-bad-patch might shed some light on what I was
>>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>>> for that).
>>
>>
>> Unfortunately the Totally Bad Patch seems to make no difference. I am
>> still able to repro the issue:

Steven, do you use numa=fake on the boot cmd line for the PV guest?

I had a similar issue on a PV guest. Let me check if the fix that resolved
this for me will help with 3.13.


>
>
> Maybe this one is also related to this BUG here (cc'ed people investigating
> this one) ...
>
>   https://lkml.org/lkml/2014/1/10/427
>
> ... not sure, though.
>
>
>>         [  346.374929] BUG: Bad page map in process mp
>> pte:80000004ae928065 pmd:e993f9067
>>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
>> (null) index:0x0
>>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
>> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
>> #1
>>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
>> 00007f06a9bbb000
>>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
>> 0000000000000000
>>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
>> ffff880e991a3e30
>>         [  346.374979] Call Trace:
>>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>>         [  346.375034]  [<ffffffff814e712d>]
>> system_call_fastpath+0x1a/0x1f
>>         [  346.375037] Disabling lock debugging due to kernel taint
>>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> idx:0 val:-1
>>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> idx:1 val:1
>>
>> This dump doesn't look dramatically different, either.
>>
>>>
>>> The other question is - how is AutoNUMA running when it is not enabled?
>>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>>> turned on?
>>
>>
>> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> mean not enabled at runtime?
>>
>> [1]
>> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 20:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 20:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64Uh-0007wl-Ag; Wed, 22 Jan 2014 20:33:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1W64Uf-0007wd-Gb
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 20:33:49 +0000
Received: from [85.158.139.211:30684] by server-5.bemta-5.messagelabs.com id
	F2/DA-14928-C2B20E25; Wed, 22 Jan 2014 20:33:48 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390422825!11364054!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9390 invoked from network); 22 Jan 2014 20:33:47 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Jan 2014 20:33:47 -0000
Received: by mail-pb0-f41.google.com with SMTP id up15so880286pbc.14
	for <xen-devel@lists.xenproject.org>;
	Wed, 22 Jan 2014 12:33:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=M2JNC1CvY4OEDKEeqr3tfLwWPr3iHGd71DGp6XAmohk=;
	b=Da22daYY+kQc0evowOSazMFMVwaZElTzWPyhavEuUpnc23/M2Q0qpOEk4zvSp7TUjc
	61UjWKnHs1nP2DCm3npYhvm9LNpgM+LJPXYzuoBxUApFsL52CSe3RPjODmUK7w/fHYrR
	oqCuVJQJRVqx2IYK/SH0XbQz/0wzgdPtFBljD39mYpXIHAhmCxD9IT9+4TM8GBRsfcAG
	PMxNXLnom6gV+cEZvhdX+pLCqY0XKsBJHx2KNnczsx1IJd4Opc+0C6pARrHwdSMnCmSj
	7znIPHfwElHEAxdNCXxur8Nye9/Jz/r8USUoXQGMU6SXOWMwyt1hbhZsd8RDz1qujpPF
	lfUg==
X-Gm-Message-State: ALoCoQlILF2uVacdul6vjxCCW+cFvcoDItFcdE9IUNvu0DZMjUutuJVFpV4kU2+miLCcFoMJIF4Q
X-Received: by 10.68.114.163 with SMTP id jh3mr3650302pbb.99.1390422825196;
	Wed, 22 Jan 2014 12:33:45 -0800 (PST)
Received: from orcus.uplinklabs.net (c-71-231-56-34.hsd1.wa.comcast.net.
	[71.231.56.34]) by mx.google.com with ESMTPSA id
	jn12sm27012169pbd.37.2014.01.22.12.33.42 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 12:33:44 -0800 (PST)
Date: Wed, 22 Jan 2014 12:33:37 -0800
From: Steven Noonan <steven@uplinklabs.net>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140122203337.GA31908@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
> >>
> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
> >>>
> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
> >>>>
> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
> >>>>>
> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
> >>>>> <gregkh@linuxfoundation.org> wrote:
> >>>
> >>>
> >>> Adding extra folks to the party.
> >>>>>>
> >>>>>>
> >>>>>> Odds are this also shows up in 3.13, right?
> >>>>
> >>>>
> >>>> Reproduced using 3.13 on the PV guest:
> >>>>
> >>>>         [  368.756763] BUG: Bad page map in process mp
> >>>> pte:80000004a67c6165 pmd:e9b706067
> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
> >>>> mapping:          (null) index:0x0
> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
> >>>> #1
> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
> >>>> ffffffff814d8748 00007fd1388b7000
> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
> >>>> 0000000000000000 0000000000000000
> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
> >>>> 00007fd1388b8000 ffff880e9eaf3e30
> >>>>         [  368.756815] Call Trace:
> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >>>>         [  368.756837]  [<ffffffff8116eae3>]
> >>>> unmap_single_vma+0x583/0x890
> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
> >>>>         [  368.756869]  [<ffffffff814e70ed>]
> >>>> system_call_fastpath+0x1a/0x1f
> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
> >>>> idx:0 val:-1
> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
> >>>> idx:1 val:1
> >>>>
> >>>>>
> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
> >>>>> interest in setting one up). And I have a suspicion that it might not
> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
> >>>>>
> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
> >>>>> confused.
> >>>>>
> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
> >>>>> _PAGE_PRESENT.
> >>>>>
> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
> >>>>> Putting Steven's test-case here as an attachment for Andrea, maybe
> >>>>> that makes him go "Ahh, yes, silly case".
> >>>>>
> >>>>> Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
> >>>>>
> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
> >>>>> attached test-case (but apparently only under Xen PV). There it
> >>>>> apparently causes a "BUG: Bad page map .." error.
> >>>
> >>>
> >>> I *think* it is due to the fact that pmd_numa and pte_numa are getting
> >>> the _raw_ value of PMDs and PTEs. That is - they do not use the pvops
> >>> interface and instead read the values directly from the page table.
> >>> Since the page table is also manipulated by the hypervisor - there are
> >>> certain flags it sets to do its business. It might be that it uses
> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it were using
> >>> pte_flags, that would invoke the pvops interface.
> >>>
> >>> Elena, Dariof and George, you guys have been looking at this a bit deeper
> >>> than I have. Does the Xen hypervisor use _PAGE_GLOBAL for PV guests?
> >>>
> >>> This not-compiled-totally-bad-patch might shed some light on what I was
> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
> >>> for that).
> >>
> >>
> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
> >> still able to repro the issue:
> 
> Steven, do you use numa=fake on boot cmd line for pv guest?
> 
> I had similar issue on pv guest. Let me check if the fix that resolved
> this for me will help with 3.13.

Nope:

# cat /proc/cmdline
root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7

> 
> >
> >
> > Maybe this one is also related to this BUG here (cc'ed people investigating
> > this one) ...
> >
> >   https://lkml.org/lkml/2014/1/10/427
> >
> > ... not sure, though.
> >
> >
> >>         [  346.374929] BUG: Bad page map in process mp
> >> pte:80000004ae928065 pmd:e993f9067
> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
> >> (null) index:0x0
> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
> >> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
> >> #1
> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
> >> 00007f06a9bbb000
> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
> >> 0000000000000000
> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
> >> ffff880e991a3e30
> >>         [  346.374979] Call Trace:
> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
> >>         [  346.375034]  [<ffffffff814e712d>]
> >> system_call_fastpath+0x1a/0x1f
> >>         [  346.375037] Disabling lock debugging due to kernel taint
> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
> >> idx:0 val:-1
> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
> >> idx:1 val:1
> >>
> >> This dump doesn't look dramatically different, either.
> >>
> >>>
> >>> The other question is - how is AutoNUMA running when it is not enabled?
> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
> >>> turned on?
> >>
> >>
> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> >> mean not enabled at runtime?
> >>
> >> [1]
> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
> 
> 
> 
> -- 
> Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 20:47:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 20:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64ht-0000Ci-3h; Wed, 22 Jan 2014 20:47:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W64hr-0000Cc-6G
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 20:47:27 +0000
Received: from [193.109.254.147:7776] by server-11.bemta-14.messagelabs.com id
	74/03-20576-E5E20E25; Wed, 22 Jan 2014 20:47:26 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390423643!12576509!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20343 invoked from network); 22 Jan 2014 20:47:25 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 20:47:25 -0000
Received: from mail-ve0-f180.google.com ([209.85.128.180]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuAuWziSsV9GPQulrRyTLF6Gbyr+PqnD@postini.com;
	Wed, 22 Jan 2014 12:47:25 PST
Received: by mail-ve0-f180.google.com with SMTP id db12so564835veb.25
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 12:47:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=msy2Z8jf9qz/UGEdxmz8gEpNwua8Y2MSURoBmflYx9o=;
	b=g8Qae2y46u5heeHgUqHQ7r27GFIZBFLipcTtmLuqtSyOANUvG7QciGuAfhr9X5XDMw
	+kZi/Tb3iTaQc+PS/9m4zxOFhYHmUUKSrPj85GZ815UL1TL1UqfOMx0AL9mA01Z9tCfT
	jpfMXMEzSy0v6VFxX5BevC5udAJHfEkjohv2g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
From xen-devel-bounces@lists.xen.org Wed Jan 22 20:47:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 20:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64ht-0000Ci-3h; Wed, 22 Jan 2014 20:47:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W64hr-0000Cc-6G
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 20:47:27 +0000
Received: from [193.109.254.147:7776] by server-11.bemta-14.messagelabs.com id
	74/03-20576-E5E20E25; Wed, 22 Jan 2014 20:47:26 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390423643!12576509!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20343 invoked from network); 22 Jan 2014 20:47:25 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 20:47:25 -0000
Received: from mail-ve0-f180.google.com ([209.85.128.180]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuAuWziSsV9GPQulrRyTLF6Gbyr+PqnD@postini.com;
	Wed, 22 Jan 2014 12:47:25 PST
Received: by mail-ve0-f180.google.com with SMTP id db12so564835veb.25
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 12:47:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=msy2Z8jf9qz/UGEdxmz8gEpNwua8Y2MSURoBmflYx9o=;
	b=g8Qae2y46u5heeHgUqHQ7r27GFIZBFLipcTtmLuqtSyOANUvG7QciGuAfhr9X5XDMw
	+kZi/Tb3iTaQc+PS/9m4zxOFhYHmUUKSrPj85GZ815UL1TL1UqfOMx0AL9mA01Z9tCfT
	jpfMXMEzSy0v6VFxX5BevC5udAJHfEkjohv2g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=msy2Z8jf9qz/UGEdxmz8gEpNwua8Y2MSURoBmflYx9o=;
	b=mcXRJOaxpoLvnlSpDVzh79puPFvFLAO8ib7u8GtZhG/9L7LtNfXji27crEwcJ+uVvX
	mOebMHeGIofudmXBGnHfsNXMNj6FFNI+Yj2spn8dMyyogQcFJhpKOY6A+Y4brsH0GuYz
	LPlqPyiMMFQHa4ZxZchPW+RnHE8MHugbfK6M8Ao8gcpGhrGNQ4W7z6KjfGKml10Hmg98
	l2+VitlPxxF3L6HvK/u13ofshgJnZSycUWqSyhTDMwvHYgKdyJzy4dd64+kMWDqdM7hS
	7dovVMgrUnbcw84VmMoCj4E5OVHEwQk/CPV28P/opT0XQkzGJRUkN32PR2NNguiJZQaz
	NKJA==
X-Gm-Message-State: ALoCoQmtBy8mJi0XodDKj2qhGTuyDXJC84A+c/AjFsp+MasDMO5gIavUJrSxmeC2Y7dY+Lk1WsEzhz5RTcF4Rl/zOP6kDi/J7b011PP4M7Nqo1VAXwQau9vbyB7RnYJ0QNjVjjqpBaaR9pcgZXxI7S16qtpiyBuwZr0z4oMB2sIwScrxLpA4Z5c=
X-Received: by 10.221.55.8 with SMTP id vw8mr2388471vcb.8.1390423642663;
	Wed, 22 Jan 2014 12:47:22 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.221.55.8 with SMTP id vw8mr2388462vcb.8.1390423642525; Wed,
	22 Jan 2014 12:47:22 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Wed, 22 Jan 2014 12:47:22 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401221738460.21510@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401221647290.21510@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1401221738460.21510@kaball.uk.xensource.com>
Date: Wed, 22 Jan 2014 22:47:22 +0200
Message-ID: <CAH_mUMP6w9s2Sgcg4tFrOpmtWnxUyDCiPfhwmWW7ndgnGT9tNA@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
 OMAP platforms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7047839499781491268=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7047839499781491268==
Content-Type: multipart/alternative; boundary=001a11338a6a4ce64904f0953bd2

--001a11338a6a4ce64904f0953bd2
Content-Type: text/plain; charset=ISO-8859-1

Hi Stefano,


>
> I guess you can't do just use the 1:1 because you are assigning the GPU
> or IPU to a guest other than Dom0, right?
>
>
Right.  GPU / IPU are both assigned to DomU, which is Android 4.3.

Thank you for your answer,

regards,
Andrii



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

--001a11338a6a4ce64904f0953bd2
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi Stefano,<div><br><div><div class=3D"gmail_extra"><div c=
lass=3D"gmail_quote"><blockquote class=3D"gmail_quote" style=3D"margin:0 0 =
0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class=3D"HOEnZb"><=
div class=3D"h5">
<br>
<br>
</div></div>I guess you can&#39;t do just use the 1:1 because you are assig=
ning the GPU<br>
or IPU to a guest other than Dom0, right?<br>
<div class=3D"HOEnZb"><div class=3D"h5"><br></div></div></blockquote><div><=
br></div><div>Right. =A0GPU / IPU are both assigned to DomU, which is Andro=
id 4.3.</div><div><br></div><div>Thank you for your answer,</div><div><br><=
/div>
<div>regards,</div><div>Andrii</div></div><br><br clear=3D"all"><div><br></=
div>-- <br><font size=3D"-1"><br><span style=3D"vertical-align:baseline;fon=
t-variant:normal;font-style:normal;font-size:12px;background-color:transpar=
ent;text-decoration:none;font-family:Arial;font-weight:bold">Andrii Tseglyt=
skyi | Embedded Dev</span><br>
<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span><br></font><a href=3D"http:=
//www.globallogic.com/" target=3D"_blank"><span style=3D"font-size:12px;fon=
t-family:Arial;color:rgb(17,85,204);vertical-align:baseline">www.globallogi=
c.com</span></a><font size=3D"-1"><br>
<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:11px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal"></span></font>
</div></div></div></div>

--001a11338a6a4ce64904f0953bd2--


--===============7047839499781491268==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7047839499781491268==--


From xen-devel-bounces@lists.xen.org Wed Jan 22 20:53:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 20:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64nO-0000kz-47; Wed, 22 Jan 2014 20:53:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W64nM-0000kl-3E
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 20:53:08 +0000
Received: from [85.158.143.35:2650] by server-2.bemta-4.messagelabs.com id
	E1/A8-11386-3BF20E25; Wed, 22 Jan 2014 20:53:07 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390423964!157753!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10380 invoked from network); 22 Jan 2014 20:53:06 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 20:53:06 -0000
Received: from mail-vb0-f49.google.com ([209.85.212.49]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuAvnCqN8tXV+PoVYWmx4T5lGi3EpIUH@postini.com;
	Wed, 22 Jan 2014 12:52:46 PST
Received: by mail-vb0-f49.google.com with SMTP id x14so540848vbb.36
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 12:52:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ChAzNnNpKLQY7ysAEO6kQPnaxoM3qv4i6pBAIc9BD4c=;
	b=T9zFDaaoK0inzE42nTXn9pmwXUhl5Nbnf597AG/mYbwdN0X1hmwrBA2y9g5fvZhxT0
	bOn+5TLZI+NzIQX+08DK51+z2X4xrRBd7tSj/uUB+Nsx+zHok4VeCUUu7nvqlCQBA/G4
	60E+XF0LlK/eH+9fNhbg2ospWRyPI7LSIkJdY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ChAzNnNpKLQY7ysAEO6kQPnaxoM3qv4i6pBAIc9BD4c=;
	b=YT6+8xwYqV21XBA9v9gwyNxKp8AXOtqq66dtYNusJQtUyeeuZjuwHxP6y7xbhDVuN7
	Gpvg0iE1MA5i/2QV8RCwMHQTd3ycz857ipJJ6GqduU0nTpfTnZ7rj7cyE/pJZWfIpA7q
	8s6vti36qixPrq4vNd8JicZDaztfc08skn8Jk40ytbDhJnMsgiXSZytaD+iRn6UvADv4
	yz1i4Cb5u9CposaOLQYXgi+GYgCtdObkVzqQJNemo8BEqbHKAg330Tunh5nD9oLqT6ht
	Hj5pWvfEKUiMhIdCUEjrnHX9X0qB/nBKcFuOpVwp3CUFrpbr/FicGMMzy2q1er1kPIvj
	SUwg==
X-Gm-Message-State: ALoCoQnUWxQR4w7KrN3ME2g2Ki6DHzHD+z/pZoasjOlYTbZXJR+vonhWjeCQKeKN6IrmsQHfEfKRegVLNdUsVwBAL8rWP+JEl8aXm6BlxmpVEEgJdSpogVUzyTZCgcyE2r91QnjRNj3yaxUDcu3TLV9E7P3MSbxD0hJ/LkZtcx8IAzzjBnRrCLM=
X-Received: by 10.220.76.201 with SMTP id d9mr2377992vck.33.1390423963703;
	Wed, 22 Jan 2014 12:52:43 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.76.201 with SMTP id d9mr2377984vck.33.1390423963587;
	Wed, 22 Jan 2014 12:52:43 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Wed, 22 Jan 2014 12:52:43 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401221738460.21510@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401221647290.21510@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1401221738460.21510@kaball.uk.xensource.com>
Date: Wed, 22 Jan 2014 22:52:43 +0200
Message-ID: <CAH_mUMNUtwtx8EN+1YAYJ0aKU-dYugJb+4ZQk-7+BYpUgNZRwA@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
 OMAP platforms
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On Wed, Jan 22, 2014 at 7:39 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 22 Jan 2014, Stefano Stabellini wrote:
>
> I guess you can't do just use the 1:1 because you are assigning the GPU
> or IPU to a guest other than Dom0, right?
>

Right.  GPU / IPU are both assigned to DomU, which is Android 4.3.

Thank you for your answer,

regards,
Andrii


-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 21:03:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 21:03:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W64x2-0001Pb-Gp; Wed, 22 Jan 2014 21:03:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W64x1-0001PT-MC
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 21:03:07 +0000
Received: from [85.158.139.211:54051] by server-15.bemta-5.messagelabs.com id
	18/B2-08490-A0230E25; Wed, 22 Jan 2014 21:03:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390424584!602476!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5873 invoked from network); 22 Jan 2014 21:03:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 21:03:06 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0ML212O026467
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 21:02:01 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0ML206V006493
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 21:02:00 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0ML1xs6017170; Wed, 22 Jan 2014 21:01:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 13:01:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BF3251BFA72; Wed, 22 Jan 2014 16:01:54 -0500 (EST)
Date: Wed, 22 Jan 2014 16:01:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140122210154.GC9585@phenom.dumpdata.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 05:24:53PM +0000, Ian Campbell wrote:
> The use of phys_to_machine and machine_to_phys in the phys<=>bus conversions
> causes us to lose the top bits of the DMA address if the size of a DMA address is not the same as the size of the phyiscal address.

.. physical
> 
> This can happen in practice on ARM where foreign pages can be above 4GB even
> though the local kernel does not have LPAE page tables enabled (which is
> totally reasonable if the guest does not itself have >4GB of RAM). In this
> case the kernel still maps the foreign pages at a phys addr below 4G (as it
> must) but the resulting DMA address (returned by the grant map operation) is
> much higher.
> 
> This is analogous to a hardware device which has its view of RAM mapped up
> high for some reason.
> 
> This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
> systems with more than 4GB of RAM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  arch/arm/Kconfig          |    1 +
>  drivers/xen/swiotlb-xen.c |   14 ++++++++++++--
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..24307dc 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1885,6 +1885,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select ARCH_DMA_ADDR_T_64BIT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1eac073..b626c79 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -77,12 +77,22 @@ static u64 start_dma_addr;
>  
>  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>  {
> -	return phys_to_machine(XPADDR(paddr)).maddr;

Why not change 'phys_addr_t' to be unsigned long? Wouldn't
that solve the problem as well?

Or make 'xmaddr_t' and 'xpaddr_t' use 'unsigned long' instead
of phys_addr_t?


> +	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));
> +	dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;
> +	dma |= paddr & ~PAGE_MASK;
> +	return dma;
>  }
>  
>  static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
>  {
> -	return machine_to_phys(XMADDR(baddr)).paddr;
> +	dma_addr_t dma = PFN_PHYS(mfn_to_pfn(PFN_DOWN(baddr)));
> +	phys_addr_t paddr = dma;
> +
> +	BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
> +
> +	paddr |= baddr & ~PAGE_MASK;
> +
> +	return paddr;
>  }
>  
>  static inline dma_addr_t xen_virt_to_bus(void *address)
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 21:15:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 21:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W658a-00022d-Qb; Wed, 22 Jan 2014 21:15:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W658V-00022Y-Th
	for xen-devel@lists.xenproject.org; Wed, 22 Jan 2014 21:15:00 +0000
Received: from [85.158.143.35:21676] by server-2.bemta-4.messagelabs.com id
	00/14-11386-3D430E25; Wed, 22 Jan 2014 21:14:59 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390425297!160057!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4548 invoked from network); 22 Jan 2014 21:14:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 21:14:58 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0MLErZU009005
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 21:14:54 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0MLEp9W014866
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 22 Jan 2014 21:14:51 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0MLEoQt014846; Wed, 22 Jan 2014 21:14:51 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 13:14:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BF18E1BFA72; Wed, 22 Jan 2014 16:14:49 -0500 (EST)
Date: Wed, 22 Jan 2014 16:14:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140122211449.GA10426@phenom.dumpdata.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52D955A902000078001149E2@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 17, 2014 at 03:09:13PM +0000, Jan Beulich wrote:
> >>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
> > Also fix the name of the discard-alignment property, add the missing 'n'.
> > 
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> Konrad,
> 
> you have been working on the discard stuff quite a bit iirc - any
> chance you could take a look and send an ack/review?
> 
> Jan
> 
> > ---
> >  xen/include/public/io/blkif.h | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> > index 84eb7fd..515ea90 100644
> > --- a/xen/include/public/io/blkif.h
> > +++ b/xen/include/public/io/blkif.h
> > @@ -175,7 +175,7 @@
> >   *
> >   *------------------------- Backend Device Properties -------------------------
> >   *
> > - * discard-aligment
> > + * discard-alignment
> >   *      Values:         <uint32_t>
> >   *      Default Value:  0
> >   *      Notes:          4, 5
> > @@ -194,6 +194,7 @@
> >   * discard-secure
> >   *      Values:         0/1 (boolean)
> >   *      Default Value:  0
> > + *      Notes:          10
> >   *
> >   *      A value of "1" indicates that the backend can process 
> > BLKIF_OP_DISCARD
> >   *      requests with the BLKIF_DISCARD_SECURE flag set.
> > @@ -323,9 +324,14 @@
> >   *     For full interoperability, block front and backends should publish
> >   *     identical ring parameters, adjusted for unit differences, to the
> >   *     XenStore nodes used in both schemes.
> > - * (4) Devices that support discard functionality may internally allocate
> > - *     space (discardable extents) in units that are larger than the
> > - *     exported logical block size.
> > + * (4) Devices that support discard functionality may internally allocate 
> > space
> > + *     (discardable extents) in units that are larger than the exported 
> > logical
> > + *     block size. If the backing device has such discardable extents the
> > + *     backend must provide both discard-granularity and discard-alignment.
                    ^^^^ - MAY

> > + *     Backends supporting discard should include discard-granularity and
                                        ^^^^^ - MAY
> > + *     discard-alignment even if it supports discarding individual sectors.
> > + *     Frontends should assume discard-alignment == 0 and discard-granularity 
> > ==
> > + *     sector size if these keys are missing.
> >   * (5) The discard-alignment parameter allows a physical device to be
> >   *     partitioned into virtual devices that do not necessarily begin or
> >   *     end on a discardable extent boundary.
> > @@ -344,6 +350,8 @@
> >   *     grants that can be persistently mapped in the frontend driver, but
> >   *     due to the frontent driver implementation it should never be bigger
> >   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> > + *(10) The discard-secure property may be present and will be set to 1 if 
> > the
> > + *     backing device supports secure discard.
> >   */
> >  
> >  /*
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org 
> > http://lists.xen.org/xen-devel 
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 22 21:42:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jan 2014 21:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W65YO-0003Zf-Cz; Wed, 22 Jan 2014 21:41:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W65YN-0003ZY-58
	for xen-devel@lists.xen.org; Wed, 22 Jan 2014 21:41:43 +0000
Received: from [85.158.143.35:3559] by server-2.bemta-4.messagelabs.com id
	EC/31-11386-61B30E25; Wed, 22 Jan 2014 21:41:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390426900!161100!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12822 invoked from network); 22 Jan 2014 21:41:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Jan 2014 21:41:41 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0MLeb80005605
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 22 Jan 2014 21:40:37 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0MLeavi026364
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 22 Jan 2014 21:40:36 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0MLeZoo029639; Wed, 22 Jan 2014 21:40:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 13:40:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4C1261BFA72; Wed, 22 Jan 2014 16:40:34 -0500 (EST)
Date: Wed, 22 Jan 2014 16:40:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140122214034.GB9460@phenom.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52DFC2DA0200007800115C79@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 12:08:42PM +0000, Jan Beulich wrote:
> >>> On 22.01.14 at 11:28, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > On 22/01/14 09:49, Jan Beulich wrote:
> >>>>> On 22.01.14 at 05:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >>> See attached (and relevant part inlined).
> >>> ...
> >>> (XEN) [2014-01-22 12:27:07] Xen call trace:
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d0801683a2>] 
> > msix_capability_init+0x1dc/0x603
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x4d7
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0x5ad
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f104>] physdev_map_pirq+0x507/0x5d1
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08017f814>] do_physdev_op+0x646/0x119e
> >>> (XEN) [2014-01-22 12:27:07]    [<ffff82d08022231b>] syscall_enter+0xeb/0x145
> >>> (XEN) [2014-01-22 12:27:07] 
> >>> (XEN) [2014-01-22 12:27:07] Pagetable walk from 0000000000000004:
> >> Considering the similarity, this is surely another incarnation of
> >> the same issue. Which gets me to ask first of all - is the device
> >> being acted upon an MSI-X capable one? If not, why is the call
> >> being made? If so (and Xen thinks differently) that's what
> >> needs fixing.
> >>
> >> On that basis I'm also going to ignore your patch for the first
> >> problem, Andrew: It's either incomplete or unnecessary or
> >> fixing the wrong thing.
> > 
> > I am going to go with incomplete - it is certainly not unnecessary.  The
> > PCI device parameters to pci_prepare_msix() are completely guest
> > controlled; There is no validation of the SBDF at all.
> 
> "Fixing the wrong thing" presumably, after taking a closer look at
> Konrad's second crash: The device in question really appears to
> be MSI-X capable, yet alloc_pdev() didn't recognize it as such. I
> wonder whether the capability gets displayed/hidden dynamically
> based on some other enabling the driver may be doing on the
> device. In which case we'd need to allocate the structure on
> demand.

The device in question (02:00.1) is an SR-IOV 82576:

02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

-bash-4.1# lspci -s 02:00.1 -v | more
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
        Flags: fast devsel, IRQ 18
        Memory at f1400000 (32-bit, non-prefetchable) [disabled] [size=128K]
        Memory at f0800000 (32-bit, non-prefetchable) [disabled] [size=4M]
        I/O ports at d000 [disabled] [size=32]
        Memory at f1440000 (32-bit, non-prefetchable) [disabled] [size=16K]
        Expansion ROM at f0400000 [disabled] [size=4M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Kernel driver in use: pciback
        Kernel modules: igb



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68I6-0003Mw-Ou; Thu, 23 Jan 2014 00:37:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68I5-0003Mn-62
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 00:37:05 +0000
Received: from [85.158.137.68:15003] by server-8.bemta-3.messagelabs.com id
	16/87-31081-03460E25; Thu, 23 Jan 2014 00:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390437421!9594413!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31955 invoked from network); 23 Jan 2014 00:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0awrr015589
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:36:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0avHl020363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:36:58 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0av1a029222; Thu, 23 Jan 2014 00:36:57 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:36:56 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 20 Jan 2014 16:22:47 -0500
To: Zoltan Kiss <zoltan.kiss@citrix.com>, ian.campbell@citrix.com,
	wei.liu2@citrix.com, xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jonathan.davies@citrix.com
Message-ID: <a6736948-c67e-4509-89a8-42ec9693830f@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
	during	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>The grant mapping API does m2p_override unnecessarily: only gntdev
>needs it; for blkback and future netback patches it just causes lock
>contention, as those pages never go to userspace. Therefore this
>series does the following:
>- the original functions were renamed to __gnttab_[un]map_refs, with
>  a new parameter m2p_override
>- based on m2p_override they either follow the original behaviour, or
>  just set the private flag and call set_phys_to_machine
>- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs
>  with m2p_override false
>- a new function gnttab_[un]map_refs_userspace provides the old
>  behaviour

You don't say anything about the 'return ret' being changed to 'return 0'.

Any particular reason for that?

Thanks
>
>v2:
>- move the storing of the old mfn in page->index to gnttab_map_refs
>- move the function header update to a separate patch
>
>v3:
>- a new approach to retain old behaviour where it needed
>- squash the patches into one
>
>Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>Suggested-by: David Vrabel <david.vrabel@citrix.com>
>---
> drivers/block/xen-blkback/blkback.c |   15 +++----
> drivers/xen/gntdev.c                |   13 +++---
> drivers/xen/grant-table.c           |   81 +++++++++++++++++++++++++++++------
> include/xen/grant_table.h           |    8 +++-
> 4 files changed, 87 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>index 6620b73..875025f 100644
>--- a/drivers/block/xen-blkback/blkback.c
>+++ b/drivers/block/xen-blkback/blkback.c
>@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
> 
> 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
> 			!rb_next(&persistent_gnt->node)) {
>-			ret = gnttab_unmap_refs(unmap, NULL, pages,
>-				segs_to_unmap);
>+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, pages, segs_to_unmap);
> 			segs_to_unmap = 0;
>@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
> 		pages[segs_to_unmap] = persistent_gnt->page;
> 
> 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>-			ret = gnttab_unmap_refs(unmap, NULL, pages,
>-				segs_to_unmap);
>+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, pages, segs_to_unmap);
> 			segs_to_unmap = 0;
>@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
> 		kfree(persistent_gnt);
> 	}
> 	if (segs_to_unmap > 0) {
>-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
>+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 		BUG_ON(ret);
> 		put_free_pages(blkif, pages, segs_to_unmap);
> 	}
>@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
> 				    GNTMAP_host_map, pages[i]->handle);
> 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
> 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
>-			                        invcount);
>+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, unmap_pages, invcount);
> 			invcount = 0;
> 		}
> 	}
> 	if (invcount) {
>-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
>+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> 		BUG_ON(ret);
> 		put_free_pages(blkif, unmap_pages, invcount);
> 	}
>@@ -740,7 +737,7 @@ again:
> 	}
> 
> 	if (segs_to_map) {
>-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
>+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
> 		BUG_ON(ret);
> 	}
> 
>diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
>index e41c79c..e652c0e 100644
>--- a/drivers/xen/gntdev.c
>+++ b/drivers/xen/gntdev.c
>@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
> 	}
> 
> 	pr_debug("map %d+%d\n", map->index, map->count);
>-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
>-			map->pages, map->count);
>+	err = gnttab_map_refs_userspace(map->map_ops,
>+					use_ptemod ? map->kmap_ops : NULL,
>+					map->pages,
>+					map->count);
> 	if (err)
> 		return err;
> 
>@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
> 		}
> 	}
> 
>-	err = gnttab_unmap_refs(map->unmap_ops + offset,
>-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
>-			pages);
>+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
>+					  use_ptemod ? map->kmap_ops + offset : NULL,
>+					  map->pages + offset,
>+					  pages);
> 	if (err)
> 		return err;
> 
>diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>index aa846a4..87ded60 100644
>--- a/drivers/xen/grant-table.c
>+++ b/drivers/xen/grant-table.c
>@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
> }
> EXPORT_SYMBOL_GPL(gnttab_batch_copy);
> 
>-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> 		    struct gnttab_map_grant_ref *kmap_ops,
>-		    struct page **pages, unsigned int count)
>+		    struct page **pages, unsigned int count,
>+		    bool m2p_override)
> {
> 	int i, ret;
> 	bool lazy = false;
> 	pte_t *pte;
> 	unsigned long mfn;
> 
>+	BUG_ON(kmap_ops && !m2p_override);
> 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
> 	if (ret)
> 		return ret;
>@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> 		}
>-		return ret;
>+		return 0;
> 	}
> 
>-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>+	if (m2p_override &&
>+	    !in_interrupt() &&
>+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> 		arch_enter_lazy_mmu_mode();
> 		lazy = true;
> 	}
>@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> 		} else {
> 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> 		}
>-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>-				       &kmap_ops[i] : NULL);
>+		if (m2p_override)
>+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>+					       &kmap_ops[i] : NULL);
>+		else {
>+			unsigned long pfn = page_to_pfn(pages[i]);
>+			WARN_ON(PagePrivate(pages[i]));
>+			SetPagePrivate(pages[i]);
>+			set_page_private(pages[i], mfn);
>+			pages[i]->index = pfn_to_mfn(pfn);
>+			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>+				return -ENOMEM;
>+		}
> 		if (ret)
> 			goto out;
> 	}
>@@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> 	if (lazy)
> 		arch_leave_lazy_mmu_mode();
> 
>-	return ret;
>+	return 0;
>+}
>+
>+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>+		    struct page **pages, unsigned int count)
>+{
>+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> }
> EXPORT_SYMBOL_GPL(gnttab_map_refs);
> 
>-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>+			      struct gnttab_map_grant_ref *kmap_ops,
>+			      struct page **pages, unsigned int count)
>+{
>+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
>+}
>+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
>+
>+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> 		      struct gnttab_map_grant_ref *kmap_ops,
>-		      struct page **pages, unsigned int count)
>+		      struct page **pages, unsigned int count,
>+		      bool m2p_override)
> {
> 	int i, ret;
> 	bool lazy = false;
> 
>+	BUG_ON(kmap_ops && !m2p_override);
> 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
> 	if (ret)
> 		return ret;
>@@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> 					INVALID_P2M_ENTRY);
> 		}
>-		return ret;
>+		return 0;
> 	}
> 
>-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>+	if (m2p_override &&
>+	    !in_interrupt() &&
>+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> 		arch_enter_lazy_mmu_mode();
> 		lazy = true;
> 	}
> 
> 	for (i = 0; i < count; i++) {
>-		ret = m2p_remove_override(pages[i], kmap_ops ?
>-				       &kmap_ops[i] : NULL);
>+		if (m2p_override)
>+			ret = m2p_remove_override(pages[i], kmap_ops ?
>+						  &kmap_ops[i] : NULL);
>+		else {
>+			unsigned long pfn = page_to_pfn(pages[i]);
>+			WARN_ON(!PagePrivate(pages[i]));
>+			ClearPagePrivate(pages[i]);
>+			set_phys_to_machine(pfn, pages[i]->index);
>+		}
> 		if (ret)
> 			goto out;
> 	}
>@@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> 	if (lazy)
> 		arch_leave_lazy_mmu_mode();
> 
>-	return ret;
>+	return 0;
>+}
>+
>+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
>+		    struct page **pages, unsigned int count)
>+{
>+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> }
> EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
> 
>+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
>+				struct gnttab_map_grant_ref *kmap_ops,
>+				struct page **pages, unsigned int count)
>+{
>+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
>+}
>+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
>+
> static unsigned nr_status_frames(unsigned nr_grant_frames)
> {
> 	BUG_ON(grefs_per_grant_frame == 0);
>diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
>index 694dcaf..9a919b1 100644
>--- a/include/xen/grant_table.h
>+++ b/include/xen/grant_table.h
>@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
> #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> 
> int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>-		    struct gnttab_map_grant_ref *kmap_ops,
> 		    struct page **pages, unsigned int count);
>+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>+			      struct gnttab_map_grant_ref *kmap_ops,
>+			      struct page **pages, unsigned int count);
> int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>-		      struct gnttab_map_grant_ref *kunmap_ops,
> 		      struct page **pages, unsigned int count);
>+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
>+				struct gnttab_map_grant_ref *kunmap_ops,
>+				struct page **pages, unsigned int count);
> 
> /* Perform a batch of grant map/copy operations. Retry every batch slot
> * for which the hypervisor returns GNTST_eagain. This is typically due
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Hz-0003MP-8v; Thu, 23 Jan 2014 00:36:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Hx-0003MJ-DV
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:36:57 +0000
Received: from [85.158.143.35:56730] by server-3.bemta-4.messagelabs.com id
	3D/96-32360-82460E25; Thu, 23 Jan 2014 00:36:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390437414!176757!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17514 invoked from network); 23 Jan 2014 00:36:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:36:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0aqP5015558
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:36:53 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0apW2013949
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:36:51 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0apSm013945; Thu, 23 Jan 2014 00:36:51 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:36:50 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52DDA807.2050703@terremark.com>
References: <52DDA807.2050703@terremark.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 20 Jan 2014 18:35:42 -0500
To: Don Slutz <dslutz@verizon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don Slutz <dslutz@verizon.com> wrote:
>* Hardware:
>
>SeaMicro*SM15000-XN*Quad Core Servers with Intel® Xeon® E3-1265Lv2
>processors ("Ivy Bridge" microarchitecture)
>
>1 server, 32G of memory.
>
>* Software:
>
>Fedora 17 (3.8.11-100.fc17.x86_64)
>CentOS release 5.10 (2.6.18-371.el5xen)
>
>* Guest operating systems:
>
>
>on Fedora 17: All HVM:
>
>
>vm:ubuntu-12.04.3-server-amd64
>Linux ubuntu 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14
>16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>vm:centos-5.9-x86_64
>Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Tue Jan 8 18:35:04
>EST 2013 x86_64 x86_64 x86_64 GNU/Linux
>vm:debian-7.2.0-amd64
>Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
>vm:rhel-6.4-x86_64
>Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29
>11:47:41 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
>vm:centos-6.4-i386
>Linux localhost.localdomain 2.6.32-358.el6.i686 #1 SMP Thu Feb 21
>21:50:49 UTC 2013 i686 i686 i386 GNU/Linux
>vm:centos-5.9-i386
>Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Tue Jan 8 19:22:56
>EST 2013 i686 i686 i386 GNU/Linux
>vm:rhel-5.9-i386
>Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Wed Nov 28
>22:04:26 EST 2012 i686 i686 i386 GNU/Linux
>vm:centos-6.4-x86_64
>Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22
>00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>vm:rhel-6.4-i386
>Linux localhost.localdomain 2.6.32-358.el6.i686 #1 SMP Tue Jan 29
>11:48:01 EST 2013 i686 i686 i386 GNU/Linux
>vm:rhel-5.9-x86_64
>Linux localhost.localdomain 2.6.18-348.el5xen #1 SMP Wed Nov 28
>21:31:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
>
>
>windows-server-2008-ENT-x86_64
>
>
>* Functionality tested:
>xl
>xl save
>xl restore
>
>
>* Comments:
>
>4.4.rc2 does not build on CentOS release 5.10:
>
>Need
>
>* 407a3c0 (origin/staging) compat/memory: fix build with old gcc
>
>and a newer upstream QEMU then qemu-xen-4.4.0-rc1
>
>* b97307e (HEAD, tag: qemu-xen-4.4.0-rc1, dummy) xen_disk: mark ioreq
>as mapped before unmapping in error case
>*   d84e452 Merge remote branch 'origin/stable-1.6' into
>xen-staging-master-9
>|\
>| * 62ecc3a (tag: v1.6.1) Update VERSION for 1.6.1 release
>
>
>Mail threads that are related:
>
>http://lists.xen.org/archives/html/xen-devel/2014-01/msg01477.html
>
>http://lists.xen.org/archives/html/xen-devel/2014-01/msg01520.html
>
>
>[root@dcs-xen-54 ~]# xl save -p 6 /big/xl-save/centos-6.4-x86_64.0.save
>Saving to /big/xl-save/centos-6.4-x86_64.0.save new xl format (info
>0x0/0x0/560)
>xc: Saving memory: iter 0 (last sent 0 skipped 0): 1044481/1044481
>100%
>[root@dcs-xen-54 ~]# xl unpause 6
>
>has left domain #6 in a bad disk state (on VGA):
>
>INFO: task jbd2/dm-0-8:386 blocked for more then 120 seconds.
>INFO: task sadc:22139 blocked for more then 120 seconds.
>
>
>However "xl restore -V /big/xl-save/centos-6.4-x86_64.0.save" looks to
>work fine.
>
>2nd time the unpause failed with:
>[root@dcs-xen-54 ~]# xl unpause 17
>
>WARNING: g.e. still in use!
>WARNING: g.e. still in use!
>WARNING: g.e. still in use!
>pm_op(): platform_pm_resume+0x0/0x50 returns -19
>PM: Device i8042 failed to resume: error -19
>INFO: task sadc:22164 blocked for more then 120 seconds.
>"echo 0 >..."
>INFO: task sadc:22164 blocked for more then 120 seconds.
>
>[root@dcs-xen-54 ~]# xl des 17
>[root@dcs-xen-54 ~]# xl restore -V
>/big/xl-save/centos-6.4-x86_64.0.save
>
>
>Not sure if this is expected or not.

I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it be enabled)?

>
>    -Don Slutz
>
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68I6-0003Mw-Ou; Thu, 23 Jan 2014 00:37:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68I5-0003Mn-62
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 00:37:05 +0000
Received: from [85.158.137.68:15003] by server-8.bemta-3.messagelabs.com id
	16/87-31081-03460E25; Thu, 23 Jan 2014 00:37:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390437421!9594413!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31955 invoked from network); 23 Jan 2014 00:37:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:37:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0awrr015589
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:36:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0avHl020363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:36:58 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0av1a029222; Thu, 23 Jan 2014 00:36:57 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:36:56 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 20 Jan 2014 16:22:47 -0500
To: Zoltan Kiss <zoltan.kiss@citrix.com>, ian.campbell@citrix.com,
	wei.liu2@citrix.com, xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jonathan.davies@citrix.com
Message-ID: <a6736948-c67e-4509-89a8-42ec9693830f@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
	during	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>The grant mapping API does m2p_override unnecessarily: only gntdev
>needs it; for blkback and future netback patches it just causes lock
>contention, as those pages never go to userspace. Therefore this
>series does the following:
>- the original functions were renamed to __gnttab_[un]map_refs, with a
>  new parameter, m2p_override
>- based on m2p_override they either follow the original behaviour or
>  just set the private flag and call set_phys_to_machine
>- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs
>  with m2p_override false
>- new functions, gnttab_[un]map_refs_userspace, provide the old
>  behaviour

You don't say anything about the 'return ret' being changed to 'return 0'.

Any particular reason for that?

Thanks
>
>v2:
>- move the storing of the old mfn in page->index to gnttab_map_refs
>- move the function header update to a separate patch
>
>v3:
>- a new approach to retain the old behaviour where it is needed
>- squash the patches into one
>
>Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>Suggested-by: David Vrabel <david.vrabel@citrix.com>
>---
> drivers/block/xen-blkback/blkback.c |   15 +++----
> drivers/xen/gntdev.c                |   13 +++---
>drivers/xen/grant-table.c           |   81
>+++++++++++++++++++++++++++++------
> include/xen/grant_table.h           |    8 +++-
> 4 files changed, 87 insertions(+), 30 deletions(-)
>
>diff --git a/drivers/block/xen-blkback/blkback.c
>b/drivers/block/xen-blkback/blkback.c
>index 6620b73..875025f 100644
>--- a/drivers/block/xen-blkback/blkback.c
>+++ b/drivers/block/xen-blkback/blkback.c
>@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif
>*blkif, struct rb_root *root,
> 
> 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
> 			!rb_next(&persistent_gnt->node)) {
>-			ret = gnttab_unmap_refs(unmap, NULL, pages,
>-				segs_to_unmap);
>+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, pages, segs_to_unmap);
> 			segs_to_unmap = 0;
>@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct
>*work)
> 		pages[segs_to_unmap] = persistent_gnt->page;
> 
> 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>-			ret = gnttab_unmap_refs(unmap, NULL, pages,
>-				segs_to_unmap);
>+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, pages, segs_to_unmap);
> 			segs_to_unmap = 0;
>@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct
>*work)
> 		kfree(persistent_gnt);
> 	}
> 	if (segs_to_unmap > 0) {
>-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
>+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> 		BUG_ON(ret);
> 		put_free_pages(blkif, pages, segs_to_unmap);
> 	}
>@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif
>*blkif,
> 				    GNTMAP_host_map, pages[i]->handle);
> 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
> 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
>-			                        invcount);
>+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> 			BUG_ON(ret);
> 			put_free_pages(blkif, unmap_pages, invcount);
> 			invcount = 0;
> 		}
> 	}
> 	if (invcount) {
>-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
>+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> 		BUG_ON(ret);
> 		put_free_pages(blkif, unmap_pages, invcount);
> 	}
>@@ -740,7 +737,7 @@ again:
> 	}
> 
> 	if (segs_to_map) {
>-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
>+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
> 		BUG_ON(ret);
> 	}
> 
>diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
>index e41c79c..e652c0e 100644
>--- a/drivers/xen/gntdev.c
>+++ b/drivers/xen/gntdev.c
>@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
> 	}
> 
> 	pr_debug("map %d+%d\n", map->index, map->count);
>-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops :
>NULL,
>-			map->pages, map->count);
>+	err = gnttab_map_refs_userspace(map->map_ops,
>+					use_ptemod ? map->kmap_ops : NULL,
>+					map->pages,
>+					map->count);
> 	if (err)
> 		return err;
> 
>@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map
>*map, int offset, int pages)
> 		}
> 	}
> 
>-	err = gnttab_unmap_refs(map->unmap_ops + offset,
>-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
>-			pages);
>+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
>+					  use_ptemod ? map->kmap_ops + offset : NULL,
>+					  map->pages + offset,
>+					  pages);
> 	if (err)
> 		return err;
> 
>diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>index aa846a4..87ded60 100644
>--- a/drivers/xen/grant-table.c
>+++ b/drivers/xen/grant-table.c
>@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch,
>unsigned count)
> }
> EXPORT_SYMBOL_GPL(gnttab_batch_copy);
> 
>-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> 		    struct gnttab_map_grant_ref *kmap_ops,
>-		    struct page **pages, unsigned int count)
>+		    struct page **pages, unsigned int count,
>+		    bool m2p_override)
> {
> 	int i, ret;
> 	bool lazy = false;
> 	pte_t *pte;
> 	unsigned long mfn;
> 
>+	BUG_ON(kmap_ops && !m2p_override);
>	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops,
>count);
> 	if (ret)
> 		return ret;
>@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
>*map_ops,
> 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> 		}
>-		return ret;
>+		return 0;
> 	}
> 
>-	if (!in_interrupt() && paravirt_get_lazy_mode() ==
>PARAVIRT_LAZY_NONE) {
>+	if (m2p_override &&
>+	    !in_interrupt() &&
>+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> 		arch_enter_lazy_mmu_mode();
> 		lazy = true;
> 	}
>@@ -927,8 +931,18 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
>*map_ops,
> 		} else {
> 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> 		}
>-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>-				       &kmap_ops[i] : NULL);
>+		if (m2p_override)
>+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>+					       &kmap_ops[i] : NULL);
>+		else {
>+			unsigned long pfn = page_to_pfn(pages[i]);
>+			WARN_ON(PagePrivate(pages[i]));
>+			SetPagePrivate(pages[i]);
>+			set_page_private(pages[i], mfn);
>+			pages[i]->index = pfn_to_mfn(pfn);
>+			if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>+				return -ENOMEM;
>+		}
> 		if (ret)
> 			goto out;
> 	}
>@@ -937,17 +951,33 @@ int gnttab_map_refs(struct gnttab_map_grant_ref
>*map_ops,
> 	if (lazy)
> 		arch_leave_lazy_mmu_mode();
> 
>-	return ret;
>+	return 0;
>+}
>+
>+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>+		    struct page **pages, unsigned int count)
>+{
>+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> }
> EXPORT_SYMBOL_GPL(gnttab_map_refs);
> 
>-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>+			      struct gnttab_map_grant_ref *kmap_ops,
>+			      struct page **pages, unsigned int count)
>+{
>+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
>+}
>+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
>+
>+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> 		      struct gnttab_map_grant_ref *kmap_ops,
>-		      struct page **pages, unsigned int count)
>+		      struct page **pages, unsigned int count,
>+		      bool m2p_override)
> {
> 	int i, ret;
> 	bool lazy = false;
> 
>+	BUG_ON(kmap_ops && !m2p_override);
>	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops,
>count);
> 	if (ret)
> 		return ret;
>@@ -958,17 +988,26 @@ int gnttab_unmap_refs(struct
>gnttab_unmap_grant_ref *unmap_ops,
> 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> 					INVALID_P2M_ENTRY);
> 		}
>-		return ret;
>+		return 0;
> 	}
> 
>-	if (!in_interrupt() && paravirt_get_lazy_mode() ==
>PARAVIRT_LAZY_NONE) {
>+	if (m2p_override &&
>+	    !in_interrupt() &&
>+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> 		arch_enter_lazy_mmu_mode();
> 		lazy = true;
> 	}
> 
> 	for (i = 0; i < count; i++) {
>-		ret = m2p_remove_override(pages[i], kmap_ops ?
>-				       &kmap_ops[i] : NULL);
>+		if (m2p_override)
>+			ret = m2p_remove_override(pages[i], kmap_ops ?
>+						  &kmap_ops[i] : NULL);
>+		else {
>+			unsigned long pfn = page_to_pfn(pages[i]);
>+			WARN_ON(!PagePrivate(pages[i]));
>+			ClearPagePrivate(pages[i]);
>+			set_phys_to_machine(pfn, pages[i]->index);
>+		}
> 		if (ret)
> 			goto out;
> 	}
>@@ -977,10 +1016,24 @@ int gnttab_unmap_refs(struct
>gnttab_unmap_grant_ref *unmap_ops,
> 	if (lazy)
> 		arch_leave_lazy_mmu_mode();
> 
>-	return ret;
>+	return 0;
>+}
>+
>+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
>+		    struct page **pages, unsigned int count)
>+{
>+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> }
> EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
> 
>+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref
>*map_ops,
>+				struct gnttab_map_grant_ref *kmap_ops,
>+				struct page **pages, unsigned int count)
>+{
>+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
>+}
>+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
>+
> static unsigned nr_status_frames(unsigned nr_grant_frames)
> {
> 	BUG_ON(grefs_per_grant_frame == 0);
>diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
>index 694dcaf..9a919b1 100644
>--- a/include/xen/grant_table.h
>+++ b/include/xen/grant_table.h
>@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
> #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> 
> int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>-		    struct gnttab_map_grant_ref *kmap_ops,
> 		    struct page **pages, unsigned int count);
>+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>+			      struct gnttab_map_grant_ref *kmap_ops,
>+			      struct page **pages, unsigned int count);
> int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>-		      struct gnttab_map_grant_ref *kunmap_ops,
> 		      struct page **pages, unsigned int count);
>+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref
>*unmap_ops,
>+				struct gnttab_map_grant_ref *kunmap_ops,
>+				struct page **pages, unsigned int count);
> 
>/* Perform a batch of grant map/copy operations. Retry every batch slot
> * for which the hypervisor returns GNTST_eagain. This is typically due
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:37:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Id-0003Qs-9e; Thu, 23 Jan 2014 00:37:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Ib-0003QY-87
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:37:37 +0000
Received: from [85.158.139.211:48011] by server-7.bemta-5.messagelabs.com id
	93/63-04824-05460E25; Thu, 23 Jan 2014 00:37:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390437454!11386775!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8186 invoked from network); 23 Jan 2014 00:37:35 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:37:35 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0bOht018289
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:37:24 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0bNFj000140
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:37:24 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0bNdK000134; Thu, 23 Jan 2014 00:37:23 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:37:22 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
	<52DCFC8F.1050607@citrix.com>
	<CAB=NE6X9iHbBbQXypJtT=e+aD-9nkJZ=7PgtfNR-D=mi+T4XLw@mail.gmail.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 20 Jan 2014 13:55:25 -0500
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	David Vrabel <david.vrabel@citrix.com>
Message-ID: <1b62aa37-eb68-49bc-b823-92369c00c58b@email.android.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>On Mon, Jan 20, 2014 at 2:38 AM, David Vrabel <david.vrabel@citrix.com>
>wrote:
>> On 17/01/14 23:02, Luis R. Rodriguez wrote:
>>> As per linux-next Next/Trees [0] and a recent January MAINTAINERS
>>> patch [1] from David, one of the xen development kernel git trees to
>>> track is xen/git.git [2]. This tree, however, has undefined
>>> references when doing a fresh clone [shown below], but as expected
>>> works well when cloning only the linux-next branch [also below].
>>> While I'm sure this is fine for folks who can do the guesswork, do
>>> we really want to live with trees like these in MAINTAINERS? The
>>> MAINTAINERS file doesn't let us specify required branches, so
>>> perhaps it should -- if we want to live with these? Curious, how
>>> many other git trees are there in a similar situation?
>>
>> We don't recommend doing development work for the Xen subsystem
>> based on xen/tip.git, so I think it's fine to have to check out the
>> specific branch you are interested in.
>
>OK thanks.
>
>>> The xen project web site actually lists [3] Konrad's xen git tree
>>> [4] as the primary development tree; that probably should be
>>> updated now, likely with instructions to clone only the linux-next
>>> branch?
>>
>> I've updated the wiki to read:
>>
>>     For development the recommended branch is:
>>
>>         The mainline Linus linux.git tree.
>
>Is the delta of what is queued for the next release typically small?

Depends
>Otherwise someone doing development based on linux.git alone should
>have conflicts with anything on the queue, no?

Potentially. Usually the maintainer will spot likely conflicts and give you a branch to base your work on.

>
>>     To see what's queued for the next release, the next merge window,
>>     and other work in progress:
>>
>>         The Xen subsystem maintainers' tip.git tree.
>
>That's the thing: you can't cleanly clone the tip.git tree today --
>there are undefined references and git gives up. Asking for the
>linux-next branch, however, did work.

It should work now. I made master point to 3.13.
>
>  Luis
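
[For readers hitting the same clone failure: fetching only the
linux-next branch, which the thread reports working, can be done with
git's --single-branch option. A minimal sketch -- the repository URL
is illustrative, substitute the tree you actually need:]

```shell
# --single-branch fetches only the named branch, sidestepping any
# broken references on the other branches of the tree.
git clone --branch linux-next --single-branch \
    git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
```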



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:38:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68JE-0003Yw-VS; Thu, 23 Jan 2014 00:38:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68JD-0003Yg-Ow
	for Xen-devel@lists.xensource.com; Thu, 23 Jan 2014 00:38:15 +0000
Received: from [193.109.254.147:45640] by server-16.bemta-14.messagelabs.com
	id 83/1B-20600-77460E25; Thu, 23 Jan 2014 00:38:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390437493!9106692!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32299 invoked from network); 23 Jan 2014 00:38:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:38:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0cAT1018971
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:10 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0c940016021
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 23 Jan 2014 00:38:09 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0c8XB021989; Thu, 23 Jan 2014 00:38:08 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:08 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 22:56:24 -0500
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <11d39cdc-d0a7-4658-9ed5-f9fa1ca50aa1@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>PVH was designed to start with the PV flags, but xen tree commit
>51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of them as they
>are not necessary. As a result, these CR flags must now be set in the
>guest.
>
>Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
>Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
>---
>arch/x86/xen/enlighten.c |   43
>+++++++++++++++++++++++++++++++++++++------
> arch/x86/xen/smp.c       |    2 +-
> arch/x86/xen/xen-ops.h   |    2 +-
> 3 files changed, 39 insertions(+), 8 deletions(-)
>
>diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
>index 628099a..4a2aaa6 100644
>--- a/arch/x86/xen/enlighten.c
>+++ b/arch/x86/xen/enlighten.c
>@@ -1410,12 +1410,8 @@ static void __init
>xen_boot_params_init_edd(void)
>  * Set up the GDT and segment registers for -fstack-protector.  Until
>  * we do this, we have to be careful not to call any stack-protected
>  * function, which is most of the kernel.
>- *
>- * Note, that it is refok - because the only caller of this after init
>- * is PVH which is not going to use xen_load_gdt_boot or other
>- * __init functions.
>  */
>-void __ref xen_setup_gdt(int cpu)
>+static void xen_setup_gdt(int cpu)
> {
> 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> #ifdef CONFIG_X86_64
>@@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
> 	pv_cpu_ops.load_gdt = xen_load_gdt;
> }
> 
>+/*
>+ * A pv guest starts with default flags that are not set for pvh, set
>them
>+ * here asap.
>+ */
>+static void xen_pvh_set_cr_flags(int cpu)
>+{
>+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);

I think it would be good to mention that Xen unconditionally sets PE and ET for HVM guests, and that for PVH, PG is additionally set.

What about NE? That looks to be missing from the list above. Should we set it?

>+
>+	if (!cpu)
>+		return;
>+	/*
>+	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
>+	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
>+	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
>+	 */
>+	if (cpu_has_pse)
>+		set_in_cr4(X86_CR4_PSE);
>+
>+	if (cpu_has_pge)
>+		set_in_cr4(X86_CR4_PGE);
>+}
>+
>+/*
>+ * Note, that it is refok - because the only caller of this after init
>+ * is PVH which is not going to use xen_load_gdt_boot or other
>+ * __init functions.
>+ */
>+void __ref xen_pvh_secondary_vcpu_init(int cpu)
>+{
>+	xen_setup_gdt(cpu);
>+	xen_pvh_set_cr_flags(cpu);
>+}
>+
> static void __init xen_pvh_early_guest_init(void)
> {
> 	if (!xen_feature(XENFEAT_auto_translated_physmap))
> 		return;
> 
>-	if (xen_feature(XENFEAT_hvm_callback_vector))
>+	if (xen_feature(XENFEAT_hvm_callback_vector)) {
> 		xen_have_vector_callback = 1;
>+		xen_pvh_set_cr_flags(0);
>+	}
> 
> #ifdef CONFIG_X86_32
> 	BUG(); /* PVH: Implement proper support. */
>diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
>index 5e46190..a18eadd 100644
>--- a/arch/x86/xen/smp.c
>+++ b/arch/x86/xen/smp.c
>@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
> #ifdef CONFIG_X86_64
> 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
> 	    xen_feature(XENFEAT_supervisor_mode_kernel))
>-		xen_setup_gdt(cpu);
>+		xen_pvh_secondary_vcpu_init(cpu);
> #endif
> 	cpu_bringup();
> 	cpu_startup_entry(CPUHP_ONLINE);
>diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
>index 9059c24..1cb6f4c 100644
>--- a/arch/x86/xen/xen-ops.h
>+++ b/arch/x86/xen/xen-ops.h
>@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
> 
> extern int xen_panic_handler_init(void);
> 
>-void xen_setup_gdt(int cpu);
>+void xen_pvh_secondary_vcpu_init(int cpu);
> #endif /* XEN_OPS_H */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:08 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 22:56:24 -0500
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <11d39cdc-d0a7-4658-9ed5-f9fa1ca50aa1@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>pvh was designed to start with pv flags, but a commit in xen tree
>51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of the flags as
>they are not necessary. As a result, these CR flags must be set in the
>guest.
>
>Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
>Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
>---
>arch/x86/xen/enlighten.c |   43 +++++++++++++++++++++++++++++++++++++------
> arch/x86/xen/smp.c       |    2 +-
> arch/x86/xen/xen-ops.h   |    2 +-
> 3 files changed, 39 insertions(+), 8 deletions(-)
>
>diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
>index 628099a..4a2aaa6 100644
>--- a/arch/x86/xen/enlighten.c
>+++ b/arch/x86/xen/enlighten.c
>@@ -1410,12 +1410,8 @@ static void __init xen_boot_params_init_edd(void)
>  * Set up the GDT and segment registers for -fstack-protector.  Until
>  * we do this, we have to be careful not to call any stack-protected
>  * function, which is most of the kernel.
>- *
>- * Note, that it is refok - because the only caller of this after init
>- * is PVH which is not going to use xen_load_gdt_boot or other
>- * __init functions.
>  */
>-void __ref xen_setup_gdt(int cpu)
>+static void xen_setup_gdt(int cpu)
> {
> 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> #ifdef CONFIG_X86_64
>@@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
> 	pv_cpu_ops.load_gdt = xen_load_gdt;
> }
> 
>+/*
>+ * A pv guest starts with default flags that are not set for pvh, set them
>+ * here asap.
>+ */
>+static void xen_pvh_set_cr_flags(int cpu)
>+{
>+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);

I think it would be good to mention that Xen unconditionally sets PE and ET for HVM guests and that additionally for PVH the PG is set.

What about NE? It looks to be missing from the list above. Should we set it?

>+
>+	if (!cpu)
>+		return;
>+	/*
>+	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
>+	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
>+	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
>+	 */
>+	if (cpu_has_pse)
>+		set_in_cr4(X86_CR4_PSE);
>+
>+	if (cpu_has_pge)
>+		set_in_cr4(X86_CR4_PGE);
>+}
>+
>+/*
>+ * Note, that it is refok - because the only caller of this after init
>+ * is PVH which is not going to use xen_load_gdt_boot or other
>+ * __init functions.
>+ */
>+void __ref xen_pvh_secondary_vcpu_init(int cpu)
>+{
>+	xen_setup_gdt(cpu);
>+	xen_pvh_set_cr_flags(cpu);
>+}
>+
> static void __init xen_pvh_early_guest_init(void)
> {
> 	if (!xen_feature(XENFEAT_auto_translated_physmap))
> 		return;
> 
>-	if (xen_feature(XENFEAT_hvm_callback_vector))
>+	if (xen_feature(XENFEAT_hvm_callback_vector)) {
> 		xen_have_vector_callback = 1;
>+		xen_pvh_set_cr_flags(0);
>+	}
> 
> #ifdef CONFIG_X86_32
> 	BUG(); /* PVH: Implement proper support. */
>diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
>index 5e46190..a18eadd 100644
>--- a/arch/x86/xen/smp.c
>+++ b/arch/x86/xen/smp.c
>@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
> #ifdef CONFIG_X86_64
> 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
> 	    xen_feature(XENFEAT_supervisor_mode_kernel))
>-		xen_setup_gdt(cpu);
>+		xen_pvh_secondary_vcpu_init(cpu);
> #endif
> 	cpu_bringup();
> 	cpu_startup_entry(CPUHP_ONLINE);
>diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
>index 9059c24..1cb6f4c 100644
>--- a/arch/x86/xen/xen-ops.h
>+++ b/arch/x86/xen/xen-ops.h
>@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
> 
> extern int xen_panic_handler_init(void);
> 
>-void xen_setup_gdt(int cpu);
>+void xen_pvh_secondary_vcpu_init(int cpu);
> #endif /* XEN_OPS_H */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:38:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68JH-0003Zq-Cs; Thu, 23 Jan 2014 00:38:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68JF-0003Yu-7h
	for Xen-devel@lists.xensource.com; Thu, 23 Jan 2014 00:38:17 +0000
Received: from [85.158.143.35:2038] by server-1.bemta-4.messagelabs.com id
	CA/C3-02132-87460E25; Thu, 23 Jan 2014 00:38:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390437494!178628!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11353 invoked from network); 23 Jan 2014 00:38:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:38:15 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0cCtr016666
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:13 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cBN9016112
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 23 Jan 2014 00:38:11 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cAJt022035; Thu, 23 Jan 2014 00:38:10 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:10 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 22:40:19 -0500
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <dd8e2ed3-4ddf-4f03-957f-902361234a66@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>Konrad,
>
>The following patch sets the bits in CR0 and CR4. Please note, I'm working
>on a patch for the xen side. The CR4 features are not currently exported
>to a PVH guest. 

The patch should really have been split in two - one for CR0 and one for CR4.

Especially as the ramifications of enabling PGE are much more complex. For example, there is a need to fix up the __supported_pte_mask to allow the use of PAGE_GLOBAL. There might be other things too that need tweaking.

>
>Roger, I added your SOB line, please lmk if I need to add anything
>else.
>
>This patch was built on top of a71accb67e7645c68061cec2bee6067205e439fc in
>the konrad devel/pvh.v13 branch.

Pls use #linux-next at this stage.

Thank you!
>
>thanks
>Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68JO-0003hk-U2; Thu, 23 Jan 2014 00:38:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68JN-0003dX-8B
	for Xen-devel@lists.xensource.com; Thu, 23 Jan 2014 00:38:25 +0000
Received: from [85.158.139.211:49615] by server-14.bemta-5.messagelabs.com id
	00/74-24200-08460E25; Thu, 23 Jan 2014 00:38:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390437502!8683060!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11114 invoked from network); 23 Jan 2014 00:38:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:38:23 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0cKQR019039
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:21 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0N0cJKK005881
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:38:20 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cJl3001497; Thu, 23 Jan 2014 00:38:19 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:18 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 22:20:53 -0500
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <f0ca98c8-b89d-4849-b8bc-a78cd8d54db9@email.android.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>pvh was designed to start with pv flags, but a commit in xen tree
>51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of the flags as

"Name of the patch in the Xen tree"

>they are not necessary. As a result, these CR flags must be set in the
>guest.

>
>Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>

You missed modifying the patch to reflect the authorship to be Roger's.

Please use git commit --amend --author "Somebody's Name"

Also Roger should be credited with Reported-by. I can add that.

>Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
>---
>arch/x86/xen/enlighten.c |   43 +++++++++++++++++++++++++++++++++++++------
> arch/x86/xen/smp.c       |    2 +-
> arch/x86/xen/xen-ops.h   |    2 +-
> 3 files changed, 39 insertions(+), 8 deletions(-)
>
>diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
>index 628099a..4a2aaa6 100644
>--- a/arch/x86/xen/enlighten.c
>+++ b/arch/x86/xen/enlighten.c
>@@ -1410,12 +1410,8 @@ static void __init xen_boot_params_init_edd(void)
>  * Set up the GDT and segment registers for -fstack-protector.  Until
>  * we do this, we have to be careful not to call any stack-protected
>  * function, which is most of the kernel.
>- *
>- * Note, that it is refok - because the only caller of this after init
>- * is PVH which is not going to use xen_load_gdt_boot or other
>- * __init functions.
>  */
>-void __ref xen_setup_gdt(int cpu)
>+static void xen_setup_gdt(int cpu)
> {
> 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> #ifdef CONFIG_X86_64
>@@ -1463,13 +1459,48 @@ void __ref xen_setup_gdt(int cpu)
> 	pv_cpu_ops.load_gdt = xen_load_gdt;
> }
> 
>+/*
>+ * A pv guest starts with default flags that are not set for pvh, set them
>+ * here asap.
>+ */
>+static void xen_pvh_set_cr_flags(int cpu)

>+{

Pls add:

/* See 'secondary_startup_64' for how bare metal does it. */

>+	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_WP | X86_CR0_AM);
>+
>+	if (!cpu)
>+		return;
>+	/*
>+	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
>+	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
>+	 * set them here. For all, OSFXSR

Might want to mention that for APs on bare metal they are set in 'secondary_startup_64'.

... 
>+	 */

Is it OK to set this twice? 

Meaning remove the 'if (!cpu)..' check so that this code path is run for BSP and AP?

>+	if (cpu_has_pse)
>+		set_in_cr4(X86_CR4_PSE);
>+
>+	if (cpu_has_pge)
>+		set_in_cr4(X86_CR4_PGE);
>+}
>+
>+/*
>+ * Note, that it is refok - because the only caller of this after init
>+ * is PVH which is not going to use xen_load_gdt_boot or other
>+ * __init functions.

Hmm. You must be using an older tree. The new one has the __ref comment.

>+ */
>+void __ref xen_pvh_secondary_vcpu_init(int cpu)
>+{
>+	xen_setup_gdt(cpu);
>+	xen_pvh_set_cr_flags(cpu);
>+}
>+
> static void __init xen_pvh_early_guest_init(void)
> {
> 	if (!xen_feature(XENFEAT_auto_translated_physmap))
> 		return;
> 
>-	if (xen_feature(XENFEAT_hvm_callback_vector))
>+	if (xen_feature(XENFEAT_hvm_callback_vector)) {
> 		xen_have_vector_callback = 1;
>+		xen_pvh_set_cr_flags(0);
>+	}
> 
> #ifdef CONFIG_X86_32
> 	BUG(); /* PVH: Implement proper support. */
>diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
>index 5e46190..a18eadd 100644
>--- a/arch/x86/xen/smp.c
>+++ b/arch/x86/xen/smp.c
>@@ -105,7 +105,7 @@ static void cpu_bringup_and_idle(int cpu)
> #ifdef CONFIG_X86_64
> 	if (xen_feature(XENFEAT_auto_translated_physmap) &&
> 	    xen_feature(XENFEAT_supervisor_mode_kernel))
>-		xen_setup_gdt(cpu);
>+		xen_pvh_secondary_vcpu_init(cpu);
> #endif
> 	cpu_bringup();
> 	cpu_startup_entry(CPUHP_ONLINE);
>diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
>index 9059c24..1cb6f4c 100644
>--- a/arch/x86/xen/xen-ops.h
>+++ b/arch/x86/xen/xen-ops.h
>@@ -123,5 +123,5 @@ __visible void xen_adjust_exception_frame(void);
> 
> extern int xen_panic_handler_init(void);
> 
>-void xen_setup_gdt(int cpu);
>+void xen_pvh_secondary_vcpu_init(int cpu);
> #endif /* XEN_OPS_H */

Thanks!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68JX-00041x-Dd; Thu, 23 Jan 2014 00:38:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68JW-0003yC-9h
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:38:34 +0000
Received: from [193.109.254.147:52220] by server-11.bemta-14.messagelabs.com
	id 8C/3A-20576-98460E25; Thu, 23 Jan 2014 00:38:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390437511!12625172!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12435 invoked from network); 23 Jan 2014 00:38:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:38:32 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0cP4g019110
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:26 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cO3j022363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 23 Jan 2014 00:38:25 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cO15022347; Thu, 23 Jan 2014 00:38:24 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:23 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <20140117230219.GA28413@garbanzo.do-not-panic.com>
References: <20140117230219.GA28413@garbanzo.do-not-panic.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 21:39:42 -0500
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, xen-devel@lists.xen.org
Message-ID: <8c3298a9-256a-43c3-8565-6f7fee6e4059@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Stephen Rothwell <sfr@canb.auug.org.au>,
	David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] MAINTAINERS tree branches [xen tip as an example]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>As per linux-next Next/Trees [0] and a recent January MAINTAINERS patch [1]
>from David, one of the xen development kernel git trees to track is
>xen/tip.git [2]. This tree, however, has undefined references when doing a
>fresh clone [shown below], but as expected works well when only cloning
>the linux-next branch [also below]. While I'm sure this is fine for folks
>who can do the guesswork, do we really want to live with trees like these
>in MAINTAINERS? The MAINTAINERS file doesn't let us specify required
>branches, so perhaps it should -- if we want to live with these?

The master branch can be linked to #linux-next or stable/for-linus.

That would solve the problem, I think.
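For the record, repointing a bare repository's default branch is a one-liner with `git symbolic-ref`; here is a sketch against a throwaway repo (the paths and branch layout are illustrative, not kernel.org's actual setup):

```shell
# A fresh bare repo's HEAD points at refs/heads/master, which does not
# exist until something is pushed there -- the same situation that makes
# clones print "remote HEAD refers to nonexistent ref".
git init --bare tip.git

# Repoint HEAD at the branch people should get by default, so a plain
# `git clone` checks out linux-next without needing -b.
git --git-dir=tip.git symbolic-ref HEAD refs/heads/linux-next

# Confirm where HEAD now points.
git --git-dir=tip.git symbolic-ref HEAD
```

Until something like that is done on the server, cloning with `-b linux-next` (as in the second transcript below) remains the workaround.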
>Curious, how many other git trees are there in a similar situation?
>
>The xen project web site actually lists [3] Konrad's xen git tree [4] for
>development as the primary development tree; that probably should be
>updated now, likely with instructions to clone only the linux-next branch?

Thank you for reporting. Will fix it next week if nobody else beats me to it.
>
>[0] https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/tree/Next/Trees#n176
>[1] http://lists.xen.org/archives/html/xen-devel/2014-01/msg01504.html
>[2] git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>[3] http://wiki.xenproject.org/wiki/Xen_Repositories#Primary_Xen_Repository
>[4] git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>
>mcgrof@bubbles ~ $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git --reference linux/.git
>Cloning into 'tip'...
>remote: Counting objects: 2806, done.
>remote: Compressing objects: 100% (334/334), done.
>remote: Total 1797 (delta 1511), reused 1646 (delta 1462)
>Receiving objects: 100% (1797/1797), 711.01 KiB | 640.00 KiB/s, done.
>Resolving deltas: 100% (1511/1511), completed with 306 local objects.
>Checking connectivity... done.
>warning: remote HEAD refers to nonexistent ref, unable to checkout.
>
>mcgrof@work ~ $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git -b linux-next --reference linux/.git
>Cloning into 'tip'...
>remote: Counting objects: 2806, done.
>remote: Compressing objects: 100% (377/377), done.
>remote: Total 1797 (delta 1545), reused 1607 (delta 1419)
>Receiving objects: 100% (1797/1797), 485.23 KiB | 0 bytes/s, done.
>Resolving deltas: 100% (1545/1545), completed with 327 local objects.
>Checking connectivity... done.
>Checking out files: 100% (44979/44979), done.
>
>  Luis



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:38:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Jr-0004Fe-RL; Thu, 23 Jan 2014 00:38:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Jq-0004F6-Pm
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 00:38:55 +0000
Received: from [85.158.139.211:54582] by server-15.bemta-5.messagelabs.com id
	89/81-08490-E9460E25; Thu, 23 Jan 2014 00:38:54 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390437531!11362913!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32431 invoked from network); 23 Jan 2014 00:38:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:38:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0cmLq019331
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:49 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0N0clFw006632
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:38:48 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0clgs017198; Thu, 23 Jan 2014 00:38:47 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:46 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52D955A902000078001149E2@nat28.tlf.novell.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 10:51:27 -0500
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <6396d00a-660b-4601-a7da-bd69e1df9b0d@email.android.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
	discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
>> Also fix the name of the discard-alignment property, add the missing 'n'.
>> 
>> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>
>Konrad,
>
>you have been working on the discard stuff quite a bit iirc - any
>chance you could take a look and send an ack/review?

Sure - next week I will dig through the patches.
>
>Jan
>
>> ---
>>  xen/include/public/io/blkif.h | 16 ++++++++++++----
>>  1 file changed, 12 insertions(+), 4 deletions(-)
>> 
>> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
>> index 84eb7fd..515ea90 100644
>> --- a/xen/include/public/io/blkif.h
>> +++ b/xen/include/public/io/blkif.h
>> @@ -175,7 +175,7 @@
>>   *
>>   *------------------------- Backend Device Properties -------------------------
>>   *
>> - * discard-aligment
>> + * discard-alignment
>>   *      Values:         <uint32_t>
>>   *      Default Value:  0
>>   *      Notes:          4, 5
>> @@ -194,6 +194,7 @@
>>   * discard-secure
>>   *      Values:         0/1 (boolean)
>>   *      Default Value:  0
>> + *      Notes:          10
>>   *
>>   *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
>>   *      requests with the BLKIF_DISCARD_SECURE flag set.
>> @@ -323,9 +324,14 @@
>>   *     For full interoperability, block front and backends should publish
>>   *     identical ring parameters, adjusted for unit differences, to the
>>   *     XenStore nodes used in both schemes.
>> - * (4) Devices that support discard functionality may internally allocate
>> - *     space (discardable extents) in units that are larger than the
>> - *     exported logical block size.
>> + * (4) Devices that support discard functionality may internally allocate space
>> + *     (discardable extents) in units that are larger than the exported logical
>> + *     block size. If the backing device has such discardable extents the
>> + *     backend must provide both discard-granularity and discard-alignment.
>> + *     Backends supporting discard should include discard-granularity and
>> + *     discard-alignment even if it supports discarding individual sectors.
>> + *     Frontends should assume discard-alignment == 0 and discard-granularity ==
>> + *     sector size if these keys are missing.
>>   * (5) The discard-alignment parameter allows a physical device to be
>>   *     partitioned into virtual devices that do not necessarily begin or
>>   *     end on a discardable extent boundary.
>> @@ -344,6 +350,8 @@
>>   *     grants that can be persistently mapped in the frontend driver, but
>>   *     due to the frontent driver implementation it should never be bigger
>>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
>> + *(10) The discard-secure property may be present and will be set to 1 if the
>> + *     backing device supports secure discard.
>>   */
>>  
>>  /*
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:39:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Kg-0004Uk-B9; Thu, 23 Jan 2014 00:39:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Ke-0004UL-U0
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:39:45 +0000
Received: from [85.158.137.68:11581] by server-11.bemta-3.messagelabs.com id
	96/1E-19379-0D460E25; Thu, 23 Jan 2014 00:39:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390437581!10787995!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17789 invoked from network); 23 Jan 2014 00:39:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:39:43 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0ccHG016910
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cb4t022659
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:38:37 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0caNs001877; Thu, 23 Jan 2014 00:38:36 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:36 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 10:53:33 -0500
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Message-ID: <36d4e579-fd68-44c8-bd0d-f02a4a9fbb89@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] tools/hvmloader: Control ACPI
	debugging using a platform flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>Since Qemu-trad
>  c/s 147f83f9b7d87a698c200c4f3eb2d36a0e4fe54b
>  "hw/piix4acpi: Make writes to ACPI_DBG_IO_ADDR actually work."
>
>there has been quite a lot of extra logging appearing in the VM logs.
>
>The hotplug debugging contributes 2 vmexits per slot per hotplug event, which
>are simply a waste of time unless a developer is trying to debug VM
>hotplugging problems.
>
>Introduce a platform flag, "acpi-debug", to indicate in the AML whether
>debugging writes are wanted or not.

Why not just ifdef this whole thing with DEBUG, so it will only be compiled in when debug=y is used?

And also remove the extra print in QEMU for that port.
>
>Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>CC: Keir Fraser <keir@xen.org>
>CC: Jan Beulich <JBeulich@suse.com>
>CC: Ian Campbell <Ian.Campbell@citrix.com>
>CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>---
> docs/misc/xenstore-paths.markdown       |    1 +
> tools/firmware/hvmloader/acpi/build.c   |    5 +++++
> tools/firmware/hvmloader/acpi/dsdt.asl  |    1 +
> tools/firmware/hvmloader/acpi/mk_dsdt.c |    6 ++++++
> 4 files changed, 13 insertions(+)
>
>diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
>index 70ab7f4..593a7b1 100644
>--- a/docs/misc/xenstore-paths.markdown
>+++ b/docs/misc/xenstore-paths.markdown
>@@ -190,6 +190,7 @@ Various platform properties.
> * acpi -- is ACPI enabled for this domain
> * acpi_s3 -- is ACPI S3 support enabled for this domain
> * acpi_s4 -- is ACPI S4 support enabled for this domain
>+* acpi-debug -- whether the AML code should write debugging messages to qemu
> 
> #### ~/platform/generation-id = INTEGER ":" INTEGER [HVM,INTERNAL]
> 
>diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
>index f1dd3f0..59716bb 100644
>--- a/tools/firmware/hvmloader/acpi/build.c
>+++ b/tools/firmware/hvmloader/acpi/build.c
>@@ -47,6 +47,7 @@ struct acpi_info {
>     uint8_t  com2_present:1;    /* 0[1] - System has COM2? */
>     uint8_t  lpt1_present:1;    /* 0[2] - System has LPT1? */
>     uint8_t  hpet_present:1;    /* 0[3] - System has HPET? */
>+    uint8_t  acpi_debugging:1;  /* 0[4] - ACPI debugging enabled ? */
>     uint32_t pci_min, pci_len;  /* 4, 8 - PCI I/O hole boundaries */
>     uint32_t madt_csum_addr;    /* 12   - Address of MADT checksum */
>     uint32_t madt_lapic0_addr;  /* 16   - Address of first MADT LAPIC struct */
>@@ -404,6 +405,7 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>     unsigned char       *dsdt;
>     unsigned long        secondary_tables[ACPI_MAX_SECONDARY_TABLES];
>     int                  nr_secondaries, i;
>+    const char          *xs_str;
> 
>     /* Allocate and initialise the acpi info area. */
>     mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
>@@ -519,10 +521,13 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>     if ( !new_vm_gid(acpi_info) )
>         goto oom;
> 
>+    xs_str = xenstore_read("platform/acpi-debug", "0");
>+
>     acpi_info->com1_present = uart_exists(0x3f8);
>     acpi_info->com2_present = uart_exists(0x2f8);
>     acpi_info->lpt1_present = lpt_exists(0x378);
>     acpi_info->hpet_present = hpet_exists(ACPI_HPET_ADDRESS);
>+    acpi_info->acpi_debugging = (xs_str[0] == '1');
>     acpi_info->pci_min = pci_mem_start;
>     acpi_info->pci_len = pci_mem_end - pci_mem_start;
> 
>diff --git a/tools/firmware/hvmloader/acpi/dsdt.asl b/tools/firmware/hvmloader/acpi/dsdt.asl
>index 247a8ad..e753286 100644
>--- a/tools/firmware/hvmloader/acpi/dsdt.asl
>+++ b/tools/firmware/hvmloader/acpi/dsdt.asl
>@@ -51,6 +51,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
>            UAR2, 1,
>            LTP1, 1,
>            HPET, 1,
>+           ADBG, 1,
>            Offset(4),
>            PMIN, 32,
>            PLEN, 32,
>diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
>index a4b693b..3f0ca74 100644
>--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
>+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
>@@ -347,14 +347,18 @@ int main(int argc, char **argv)
>             /* _SUN == dev */
>             stmt("Name", "_SUN, 0x%08x", slot >> 3);
>             push_block("Method", "_EJ0, 1");
>+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>             stmt("Store", "0x88, \\_GPE.DPT2");
>+            pop_block();
>             stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
>                  (slot & 1) ? 0x10 : 0x01, slot & ~1);
>             pop_block();
>             push_block("Method", "_STA, 0");
>+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>             stmt("Store", "0x89, \\_GPE.DPT2");
>+            pop_block();
>             if ( slot & 1 )
>           stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
>             else
>@@ -426,8 +430,10 @@ int main(int argc, char **argv)
>         stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
>         stmt("And", "Local1, 0xff, SLT");
>         /* Debug */
>+        push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>         stmt("Store", "SLT, DPT1");
>         stmt("Store", "EVT, DPT2");
>+        pop_block();
>         /* Decision tree */
>         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
>         pop_block();



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:39:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Kg-0004Uk-B9; Thu, 23 Jan 2014 00:39:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Ke-0004UL-U0
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:39:45 +0000
Received: from [85.158.137.68:11581] by server-11.bemta-3.messagelabs.com id
	96/1E-19379-0D460E25; Thu, 23 Jan 2014 00:39:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390437581!10787995!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17789 invoked from network); 23 Jan 2014 00:39:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:39:43 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0ccHG016910
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:38:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0cb4t022659
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:38:37 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0caNs001877; Thu, 23 Jan 2014 00:38:36 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:38:36 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
References: <1389970498-2182-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 17 Jan 2014 10:53:33 -0500
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Message-ID: <36d4e579-fd68-44c8-bd0d-f02a4a9fbb89@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] [RFC] tools/hvmloader: Control ACPI
	debugging using a platform flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>Since Qemu-trad
>  c/s 147f83f9b7d87a698c200c4f3eb2d36a0e4fe54b
>  "hw/piix4acpi: Make writes to ACPI_DBG_IO_ADDR actually work."
>
>there has been quite a lot of extra logging appearing in the VM logs.
>
>The hotplug debugging contributes 2 vmexits per slot per hotplug event,
>which are simply a waste of time unless a developer is trying to debug
>VM hotplugging problems.
>
>Introduce a platform flag, "acpi-debug", to indicate in the AML whether
>debugging writes are wanted or not.

Why not just ifdef this whole thing with DEBUG, so that it is only compiled in when debug=y is used?

And also remove the extra print in QEMU for that port.
>
>Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>CC: Keir Fraser <keir@xen.org>
>CC: Jan Beulich <JBeulich@suse.com>
>CC: Ian Campbell <Ian.Campbell@citrix.com>
>CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>---
> docs/misc/xenstore-paths.markdown       |    1 +
> tools/firmware/hvmloader/acpi/build.c   |    5 +++++
> tools/firmware/hvmloader/acpi/dsdt.asl  |    1 +
> tools/firmware/hvmloader/acpi/mk_dsdt.c |    6 ++++++
> 4 files changed, 13 insertions(+)
>
>diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
>index 70ab7f4..593a7b1 100644
>--- a/docs/misc/xenstore-paths.markdown
>+++ b/docs/misc/xenstore-paths.markdown
>@@ -190,6 +190,7 @@ Various platform properties.
> * acpi -- is ACPI enabled for this domain
> * acpi_s3 -- is ACPI S3 support enabled for this domain
> * acpi_s4 -- is ACPI S4 support enabled for this domain
>+* acpi-debug -- whether the AML code should write debugging messages to qemu
> 
> #### ~/platform/generation-id = INTEGER ":" INTEGER [HVM,INTERNAL]
> 
>diff --git a/tools/firmware/hvmloader/acpi/build.c b/tools/firmware/hvmloader/acpi/build.c
>index f1dd3f0..59716bb 100644
>--- a/tools/firmware/hvmloader/acpi/build.c
>+++ b/tools/firmware/hvmloader/acpi/build.c
>@@ -47,6 +47,7 @@ struct acpi_info {
>     uint8_t  com2_present:1;    /* 0[1] - System has COM2? */
>     uint8_t  lpt1_present:1;    /* 0[2] - System has LPT1? */
>     uint8_t  hpet_present:1;    /* 0[3] - System has HPET? */
>+    uint8_t  acpi_debugging:1;  /* 0[4] - ACPI debugging enabled ? */
>     uint32_t pci_min, pci_len;  /* 4, 8 - PCI I/O hole boundaries */
>     uint32_t madt_csum_addr;    /* 12   - Address of MADT checksum */
>     uint32_t madt_lapic0_addr;  /* 16   - Address of first MADT LAPIC struct */
>@@ -404,6 +405,7 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>     unsigned char       *dsdt;
>     unsigned long        secondary_tables[ACPI_MAX_SECONDARY_TABLES];
>     int                  nr_secondaries, i;
>+    const char          *xs_str;
> 
>     /* Allocate and initialise the acpi info area. */
>     mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
>@@ -519,10 +521,13 @@ void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>     if ( !new_vm_gid(acpi_info) )
>         goto oom;
> 
>+    xs_str = xenstore_read("platform/acpi-debug", "0");
>+
>     acpi_info->com1_present = uart_exists(0x3f8);
>     acpi_info->com2_present = uart_exists(0x2f8);
>     acpi_info->lpt1_present = lpt_exists(0x378);
>     acpi_info->hpet_present = hpet_exists(ACPI_HPET_ADDRESS);
>+    acpi_info->acpi_debugging = (xs_str[0] == '1');
>     acpi_info->pci_min = pci_mem_start;
>     acpi_info->pci_len = pci_mem_end - pci_mem_start;
> 
>diff --git a/tools/firmware/hvmloader/acpi/dsdt.asl b/tools/firmware/hvmloader/acpi/dsdt.asl
>index 247a8ad..e753286 100644
>--- a/tools/firmware/hvmloader/acpi/dsdt.asl
>+++ b/tools/firmware/hvmloader/acpi/dsdt.asl
>@@ -51,6 +51,7 @@ DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
>            UAR2, 1,
>            LTP1, 1,
>            HPET, 1,
>+           ADBG, 1,
>            Offset(4),
>            PMIN, 32,
>            PLEN, 32,
>diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
>index a4b693b..3f0ca74 100644
>--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
>+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
>@@ -347,14 +347,18 @@ int main(int argc, char **argv)
>             /* _SUN == dev */
>             stmt("Name", "_SUN, 0x%08x", slot >> 3);
>             push_block("Method", "_EJ0, 1");
>+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>             stmt("Store", "0x88, \\_GPE.DPT2");
>+            pop_block();
>             stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
>                  (slot & 1) ? 0x10 : 0x01, slot & ~1);
>             pop_block();
>             push_block("Method", "_STA, 0");
>+            push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>             stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
>             stmt("Store", "0x89, \\_GPE.DPT2");
>+            pop_block();
>             if ( slot & 1 )
>           stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
>             else
>@@ -426,8 +430,10 @@ int main(int argc, char **argv)
>         stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
>         stmt("And", "Local1, 0xff, SLT");
>         /* Debug */
>+        push_block("If", "LEqual( \\_SB.ADBG, ONE )");
>         stmt("Store", "SLT, DPT1");
>         stmt("Store", "EVT, DPT2");
>+        pop_block();
>         /* Decision tree */
>         decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
>         pop_block();



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:42:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68N6-0004tr-Fy; Thu, 23 Jan 2014 00:42:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68N4-0004te-Sz
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 00:42:15 +0000
Received: from [85.158.139.211:44520] by server-16.bemta-5.messagelabs.com id
	91/97-11843-66560E25; Thu, 23 Jan 2014 00:42:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390437731!11341818!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9020 invoked from network); 23 Jan 2014 00:42:13 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 00:42:13 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0g7Ij019824
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:42:08 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0g6re006410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:42:07 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0g6dI006407; Thu, 23 Jan 2014 00:42:06 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:42:06 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9BE168@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE168@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 16 Jan 2014 06:36:37 -0500
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>
Message-ID: <17ae8352-5a70-4f16-9c3b-5403dea5d461@email.android.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>Konrad Rzeszutek Wilk wrote on 2013-12-05:
>> Hey,
>> 
>> I just started noticing it today - with qemu-xen (tip is commit
>> b97307ecaad98360f41ea36cd9674ef810c4f8cf
>>     xen_disk: mark ioreq as mapped before unmapping in error case)
>> when I try to pass in a PCI device at bootup it blows up with:
>> 
>Hi, Konrad,

Hey!
>
>I cannot reproduce this issue with the same qemu-xen. Is there any
>special step needed?
>
>Here is the NIC info:
>08:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet
>Controller (Copper) (rev 06)
>  Subsystem: Intel Corporation PRO/1000 PT Desktop Adapter
>  Flags: bus master, fast devsel, latency 0, IRQ 34
>  Memory at d2440000 (32-bit, non-prefetchable) [size=128K]
>  Memory at d2420000 (32-bit, non-prefetchable) [size=128K]
>  I/O ports at 5000 [size=32]
>  Expansion ROM at d2400000 [disabled] [size=128K]

Your ROM is 128k while mine is 4MB. You need that (a big ROM image) to trigger this.

>  Capabilities: [c8] Power Management version 2
>  Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>  Capabilities: [e0] Express Endpoint, MSI 00
>  Capabilities: [100] Advanced Error Reporting
>  Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-1e-8e-97
>  Kernel driver in use: pciback
>  Kernel modules: e1000e
>
>
>BTW, is the fix already in QEMU upstream?

I don't know. Anthony would know since his patch fixed it.
>
>> char device redirected to /dev/pts/2 (label serial0)
>> qemu: hardware error: xen: failed to populate ram at 40050000
>> CPU #0:
>> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
>> ES =0000 00000000 0000ffff 00009300
>> CS =f000 ffff0000 0000ffff 00009b00
>> SS =0000 00000000 0000ffff 00009300
>> DS =0000 00000000 0000ffff 00009300
>> FS =0000 00000000 0000ffff 00009300
>> GS =0000 00000000 0000ffff 00009300
>> LDT=0000 00000000 0000ffff 00008200
>> TR =0000 00000000 0000ffff 00008b00
>> GDT=     00000000 0000ffff
>> IDT=     00000000 0000ffff
>> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> DR6=ffff0ff0 DR7=00000400
>> EFER=0000000000000000
>> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> XMM00=00000000000000000000000000000000 XMM01=00000000000000000000000000000000
>> XMM02=00000000000000000000000000000000 XMM03=00000000000000000000000000000000
>> XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000
>> XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000
>> CPU #1:
>> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> ES =0000 00000000 0000ffff 00009300
>> CS =f000 ffff0000 0000ffff 00009b00
>> 
>> 
>> ...
>> -bash-4.1# xl dmesg | tail
>> [ 2788.038463] xen_pciback: vpci(d3) [2013-12-04 19:43:40] System
>> requested SeaBIOS
>> : 0000:01:00.1: (d3) [2013-12-04 19:43:40] CPU speed is 3093 MHz
>> assign to virtua(d3) [2013-12-04 19:43:40] Relocating guest memory
>for
>> lowmem MMIO space disabled l slot 0 [ 2788.076396] device vif3.0
>> entered promiscuous mode
>> 
>> If I don't have a 'pci=' stanza in my guest config, it boots just fine.
>> 
>> -bash-4.1# more /vm.cfg
>> builder='hvm'
>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>> memory=1024
>> boot="d"
>> vcpus=2
>> serial="pty"
>> vnclisten="0.0.0.0"
>> name="latest"
>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>> on_crash="preserve"
>> pci=["01:00.0"]
>> 
>> Any ideas?
>> 
>> This is with today's Xen and 3.13-rc2. The device in question is
>> -bash-4.1# lspci -s 01:00.0 -v
>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>         Flags: fast devsel, IRQ 16
>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>         I/O ports at e020 [disabled] [size=32]
>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>         Capabilities: [40] Power Management version 3
>>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>>         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
>>         Capabilities: [a0] Express Endpoint, MSI 00
>>         Capabilities: [100] Advanced Error Reporting
>>         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
>>         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
>>         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
>>         Kernel driver in use: pciback
>>         Kernel modules: igb
>> 
>> Oh and of course it boots with the traditional QEMU.
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
>Best regards,
>Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:43:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Oa-00054a-45; Thu, 23 Jan 2014 00:43:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68OZ-00054S-65
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:43:47 +0000
Received: from [85.158.143.35:48717] by server-2.bemta-4.messagelabs.com id
	09/FC-11386-2C560E25; Thu, 23 Jan 2014 00:43:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390437824!179134!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6978 invoked from network); 23 Jan 2014 00:43:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:43:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0hd79020833
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:43:40 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0hcP2024073
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:43:39 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0hcTA024069; Thu, 23 Jan 2014 00:43:38 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:43:38 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52D3FD67.2060708@oracle.com>
References: <1389371301-29532-1-git-send-email-olaf@aepfle.de>
	<52D036FC.6000308@oracle.com> <20140110213746.GA933@aepfle.de>
	<52D073F0.5020400@oracle.com> <20140113093032.GA13919@aepfle.de>
	<52D3FD67.2060708@oracle.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 13 Jan 2014 11:24:36 -0500
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Olaf Hering <olaf@aepfle.de>
Message-ID: <1a76cddb-cb54-4483-9ab2-a750ef0d9962@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: david.vrabel@citrix.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>On 01/13/2014 04:30 AM, Olaf Hering wrote:
>> On Fri, Jan 10, Boris Ostrovsky wrote:
>>
>>> I don't know how the discard code works, but it seems to me that if
>>> you pass, for example, zero as discard_granularity (which may happen
>>> if xenbus_gather() fails), then blkdev_issue_discard() in the backend
>>> will set the granularity to 1 and continue with the discard. This may
>>> not be what the guest admin requested. And he won't know about it,
>>> since no error message is printed anywhere.
>> If I understand the code using granularity/alignment correctly, both
>> are optional properties. So if the granularity is just 1, it means
>> byte ranges, which is fine if the backend uses FALLOC_FL_PUNCH_HOLE.
>> Also, neither property is admin controlled; for phy, the blkbk driver
>> just passes on what it gets from the underlying hardware.
>>
>>> Similarly, if xenbus_gather("discard-secure") fails, I think the code
>>> will assume that secure discard has not been requested. I don't know
>>> what security implications this will have, but it sounds bad to me.
>> There are no security implications; if the backend does not advertise
>> it, then it's not present.
>
>Right. But my question was: what if the backend does advertise it and
>wants the frontend to use it, but xenbus_gather() in the frontend fails?
>
>Do we want to silently continue without discard-secure? Is this safe?
>

Yes
>
>-boris
>
>>
>> After poking around some more, it seems that blkif.h is the spec; it
>> does not say that the three properties are optional. Also, the backend
>> drivers in sles11sp2 and mainline create all three properties
>> unconditionally. So I think a better change is to expect all three
>> properties in the frontend. I will send another version of the patch.
>>
>>
>> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68Ou-00057l-JM; Thu, 23 Jan 2014 00:44:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68Ot-00057T-Er
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:44:07 +0000
Received: from [85.158.137.68:42789] by server-15.bemta-3.messagelabs.com id
	D1/6A-11556-6D560E25; Thu, 23 Jan 2014 00:44:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390437844!10781643!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18399 invoked from network); 23 Jan 2014 00:44:05 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:44:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0i05s021012
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:44:01 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0N0i0oQ013423
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 23 Jan 2014 00:44:00 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N0hx64000151; Thu, 23 Jan 2014 00:43:59 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:43:59 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52D3FF0E02000078001131A5@nat28.tlf.novell.com>
References: <1389608052-7139-1-git-send-email-olaf@aepfle.de>
	<52D3DAEE0200007800112FD4@nat28.tlf.novell.com>
	<20140113120131.GA15623@aepfle.de>
	<52D3EB5F02000078001130B5@nat28.tlf.novell.com>
	<1389618054.13654.57.camel@kazak.uk.xensource.com>
	<52D3F535020000780011311B@nat28.tlf.novell.com>
	<52D3EE14.3080609@citrix.com>
	<52D3FF0E02000078001131A5@nat28.tlf.novell.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 13 Jan 2014 10:57:54 -0500
To: Jan Beulich <JBeulich@suse.com>, David Vrabel <david.vrabel@citrix.com>
Message-ID: <9498309b-dd60-413f-9ab7-8e6ec6c9bcfa@email.android.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: boris.ostrovsky@oracle.com, xen-devel@lists.xen.org,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 13.01.14 at 14:45, David Vrabel <david.vrabel@citrix.com> wrote:
>> On 13/01/14 13:16, Jan Beulich wrote:
>>>>>> On 13.01.14 at 14:00, Ian Campbell <Ian.Campbell@citrix.com>
>wrote:
>>>> On Mon, 2014-01-13 at 12:34 +0000, Jan Beulich wrote:
>>>>>>>> On 13.01.14 at 13:01, Olaf Hering <olaf@aepfle.de> wrote:
>>>>>> On Mon, Jan 13, Jan Beulich wrote:
>>>>>>
>>>>>>> You can't do this in one go - the first two and the last one may
>>>>>>> be set independently (and are independent in their meaning), and
>>>>>>> hence need to be queried independently (xenbus_gather() fails
>>>>>>> on the first absent value).
>>>>>>
>>>>>> Yes, that's the purpose. Since the properties are required, it's
>>>>>> an all-or-nothing thing. If they are truly optional, then blkif.h
>>>>>> should be updated to say that.
>>>>>
>>>>> They _are_ optional.
>>>>
>>>> But is it true that either they are all present or they are all
>>>> absent?
>>> 
>>> No, it's not. discard-secure is independent of the other two (but
>>> those other two are tied together).
>> 
>> Can we have a patch to blkif.h that clarifies this?
>> 
>> e.g.,
>> 
>> feature-discard
>> 
>>    ...
>> 
>>    discard-granularity and discard-offset must also be present if
>>    feature-discard is enabled
>
>It would be "may" here too afaict. But I'll defer to Konrad, who
>has done more work in this area...
>
>Jan
>
>>    discard-secure may also be present if feature-discard is enabled.
>> 
>> David
>
>
>
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel

It is all 'may'. If there is just 'feature-discard' without any other options, that is OK.
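[Editor's note] Putting Konrad's "all 'may'" answer together with David's proposed wording, the clarified blkif.h comment would read roughly as follows (a sketch, not the actual committed text; key names follow David's mail above):

```c
/*
 * feature-discard
 *      ...
 *      discard-granularity and discard-offset may also be present when
 *      feature-discard is enabled; if either is absent, the frontend
 *      should fall back to sane defaults rather than fail.
 *
 *      discard-secure may also be present when feature-discard is
 *      enabled; if absent, secure discard is simply not supported.
 */
```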


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 00:47:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 00:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W68SQ-0005Qa-R0; Thu, 23 Jan 2014 00:47:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W68SP-0005QN-D2
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 00:47:45 +0000
Received: from [85.158.139.211:39499] by server-4.bemta-5.messagelabs.com id
	26/73-26791-0B660E25; Thu, 23 Jan 2014 00:47:44 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390438062!8683922!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25693 invoked from network); 23 Jan 2014 00:47:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 00:47:43 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N0kc66026106
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 00:46:39 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0N0kbVn018348
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 00:46:38 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0N0kaou018322; Thu, 23 Jan 2014 00:46:37 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 16:46:36 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52C87C1C.5080908@terremark.com>
References: <1388857936-664-1-git-send-email-dslutz@verizon.com>
	<1388857936-664-3-git-send-email-dslutz@verizon.com>
	<20140104211319.GC9395@phenom.dumpdata.com>
	<52C87C1C.5080908@terremark.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sun, 05 Jan 2014 13:29:07 -0500
To: Don Slutz <dslutz@verizon.com>
Message-ID: <26e00e59-61d3-4f3e-a614-5210aa399108@email.android.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/4] dbg_rw_guest_mem: Enable debug log
	output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don Slutz <dslutz@verizon.com> wrote:
>On 01/04/14 16:13, Konrad Rzeszutek Wilk wrote:
>> On Sat, Jan 04, 2014 at 12:52:14PM -0500, Don Slutz wrote:
>>> This also fixes the old debug output to compile and work if DBGP1
>>> and DBGP2 are defined like DBGP3.
>>>
>>> @@ -226,3 +239,11 @@ dbg_rw_mem(dbgva_t addr, dbgbyte_t *buf, int
>len, domid_t domid, int toaddr,
>>>       return len;
>>>   }
>>>   
>>> +/*
>>> + * Local variables:
>>> + * mode: C
>>> + * c-file-style: "BSD"
>>> + * c-basic-offset: 4
>>> + * indent-tabs-mode: nil
>>> + * End:
>>> + */
>> ??
>
>I take that as "Why add this"?

>
>Dealing with many different styles of code, I find it helps to have
>this emacs addition.  From CODING_STYLE:

Sure. But then this should be a separate patch. The convention in both Xen and Linux is to have one logical change per patch.

I would recommend you split this out from this patch.
>
>    Emacs local variables
>    ---------------------
>
>    A comment block containing local variables for emacs is permitted at
>    the end of files.  It should be:
>
>    /*
>     * Local variables:
>     * mode: C
>     * c-file-style: "BSD"
>     * c-basic-offset: 4
>     * indent-tabs-mode: nil
>     * End:
>     */
>
>    -Don Slutz



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 01:51:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 01:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W69RE-0004Ce-Pa; Thu, 23 Jan 2014 01:50:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W69RD-0004CZ-Kk
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 01:50:35 +0000
Received: from [193.109.254.147:40026] by server-3.bemta-14.messagelabs.com id
	40/49-11000-B6570E25; Thu, 23 Jan 2014 01:50:35 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390441833!12585022!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28818 invoked from network); 23 Jan 2014 01:50:34 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-10.tower-27.messagelabs.com with SMTP;
	23 Jan 2014 01:50:34 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id B326A584105;
	Wed, 22 Jan 2014 17:50:31 -0800 (PST)
Date: Wed, 22 Jan 2014 17:50:31 -0800 (PST)
Message-Id: <20140122.175031.873909526743971037.davem@davemloft.net>
To: zoltan.kiss@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Mon, 20 Jan 2014 21:24:20 +0000

> A long-known problem of the upstream netback implementation is that on the TX
> path (from guest to Dom0) it copies the whole packet from guest memory into
> Dom0. That simply became a bottleneck with 10Gb NICs, and generally it's a
> huge performance penalty. The classic kernel version of netback used grant
> mapping, and to get notified when the page can be unmapped, it used page
> destructors. Unfortunately that destructor is not an upstreamable solution.
> Ian Campbell's skb fragment destructor patch series [1] tried to solve this
> problem, however it seems to be very invasive on the network stack's code,
> and therefore hasn't progressed very well.
> This patch series uses the SKBTX_DEV_ZEROCOPY flag to tell the stack it
> needs to know when the skb is freed up.

This series does not apply to net-next due to some other recent changes.

Please respin, thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 03:03:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 03:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6AZ0-0007Zz-2A; Thu, 23 Jan 2014 03:02:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prabu@ti.com>) id 1W6AYy-0007Zu-3M
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 03:02:40 +0000
Received: from [85.158.143.35:46762] by server-3.bemta-4.messagelabs.com id
	A3/DA-32360-F4680E25; Thu, 23 Jan 2014 03:02:39 +0000
X-Env-Sender: prabu@ti.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390446156!189266!1
X-Originating-IP: [198.47.26.152]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk4LjQ3LjI2LjE1MiA9PiAxNjQ5NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28141 invoked from network); 23 Jan 2014 03:02:38 -0000
Received: from comal.ext.ti.com (HELO comal.ext.ti.com) (198.47.26.152)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 03:02:38 -0000
Received: from dbdlxv05.itg.ti.com ([172.24.171.60])
	by comal.ext.ti.com (8.13.7/8.13.7) with ESMTP id s0N32WxW013288
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 21:02:33 -0600
Received: from DBDE72.ent.ti.com (dbde72.ent.ti.com [172.24.171.97])
	by dbdlxv05.itg.ti.com (8.14.3/8.13.8) with ESMTP id s0N32UTJ010555
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 08:32:31 +0530
Received: from DBDE04.ent.ti.com ([fe80::21ac:d9f:f810:c8e7]) by
	DBDE72.ent.ti.com ([fe80::bd56:32bf:87fd:2763%27]) with mapi id
	14.02.0342.003; Thu, 23 Jan 2014 11:02:30 +0800
From: "Sundareson, Prabindh" <prabu@ti.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
Thread-Index: Ac8X5yD2fTL5f+daQeOuuca6efUKFQ==
Date: Thu, 23 Jan 2014 03:02:29 +0000
Message-ID: <321768C95D21724485BCE784F1BE98473EC8FE81@DBDE04.ent.ti.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.24.170.142]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
	OMAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Andrii,

Dumb question - wouldn't there be a need to lock the data structures while updating/copying? Or does the Xen architecture somehow prevent other updates?

I see 2 IPs being used - IPU and DSP. What did you have in mind for the 3rd IP? GPU?

If there are some quick instructions available to test, I can try this out.

regards,
Prabu


-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of xen-devel-request@lists.xen.org
Sent: Wednesday, January 22, 2014 9:23 PM
To: xen-devel@lists.xen.org
Subject: Xen-devel Digest, Vol 107, Issue 352

Send Xen-devel mailing list submissions to
	xen-devel@lists.xen.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel
or, via email, send a message with subject or body 'help' to
	xen-devel-request@lists.xen.org

You can reach the person managing the list at
	xen-devel-owner@lists.xen.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Xen-devel digest..."


Today's Topics:

   1. [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
      platforms (Andrii Tseglytskyi)
   2. [RFC v01 2/3] arm: omap: translate iommu mapping to 4K	pages
      (Andrii Tseglytskyi)
   3. [RFC v01 1/3] arm: omap: introduce iommu module
      (Andrii Tseglytskyi)
   4. [RFC v01 3/3] arm: omap: cleanup iopte allocations
      (Andrii Tseglytskyi)


----------------------------------------------------------------------

Message: 1
Date: Wed, 22 Jan 2014 17:52:02 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
	OMAP	platforms
Message-ID:
	<1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>

Hi,

The following patch series is an RFC for a possible implementation of a simple MMU module, which is designed to translate IPA to MA for peripheral processors like the GPU / IPU on OMAP platforms. Currently on our OMAP platform (OMAP5 panda) we have 3 external MMUs which need to be handled properly.

It would be great to get community feedback - will this be useful for the Xen project?

Let me describe the algorithm briefly. It is simple and straightforward.
The following logic is used to translate addresses from IPA to MA:

1. During boot time the guest domain creates a "pagetable" for the external MMU IP.
The pagetable is a singleton data structure stored in ordinary kernel heap memory. All memory mappings for the corresponding MMU are stored inside it.
The format of the "pagetable" is well defined.

2. The guest domain enables the peripheral remote processor. As part of the enable sequence the kernel allocates the chunks of heap memory needed by the remote processor and stores pointers to the allocated chunks in the already created "pagetable". Afterwards it writes the physical address of the pagetable to the MMU configuration register. As a result the MMU IP knows about all allocations, and the remote processor can use them directly in its software.

3. The Xen OMAP MMU driver traps accesses to the MMU configuration registers.
It reads the physical address of the "pagetable" from the MMU register and creates a copy of it in its own memory. As a result we have two similar configuration data structures: the first in the guest domain kernel, the second in the Xen hypervisor.

4. The Xen OMAP MMU driver parses its own copy of the pagetable and translates all physical addresses to the corresponding machine addresses using the existing p2m API.
Afterwards it writes the physical address of its own pagetable (with PA already translated to MA) to the MMU IP configuration registers and returns control to the guest domain.

As a result the guest domain continues enabling the remote processor with its MMU, and the MMU will use the new pagetable modified by the Xen OMAP MMU driver. The new pagetable will be used directly by the MMU IP, and its new structure will be hidden from the guest domain kernel, which won't know anything about the p2m translation.

Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, Linux(Android) kernel 3.4 as DomU.
Target platform OMAP5 panda.

Thank you for your attention,

Regards,

Andrii Tseglytskyi (3):
  arm: omap: introduce iommu module
  arm: omap: translate iommu mapping to 4K pages
  arm: omap: cleanup iopte allocations

 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

--
1.7.9.5




------------------------------

Message: 2
Date: Wed, 22 Jan 2014 17:52:04 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping
	to 4K	pages
Message-ID:
	<1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>

The patch introduces the following algorithm:
- enumerate all first level translation entries
- for each section create 256 pages, each page 4096 bytes
- for each supersection create 4096 pages, each page 4096 bytes
- flush the cache to synchronize the Cortex M15 and the IOMMU

This algorithm makes it possible to use 4K mappings only.

Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 4dab30f..7ec03a2 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -72,6 +72,9 @@
 #define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
 #define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
 
+/* 16 sections in supersection */
+#define IOSECTION_PER_IOSUPER	(1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
+
 /*
  * some descriptor attributes.
  */
@@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
 	&omap_dsp_mmu,
 };
 
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
 #define mmu_for_each(pfunc, data)						\
 ({														\
 	u32 __i;											\
@@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 	return vaddr;
 }
 
+static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
+{
+	u32 *iopte = NULL;
+	u32 i;
+
+	iopte = xzalloc_bytes(PAGE_SIZE);
+	if (!iopte) {
+		printk("%s Fail to alloc 2nd level table\n", mmu->name);
+		return 0;
+	}
+
+	for (i = 0; i < PTRS_PER_IOPTE; i++) {
+		u32 da, vaddr, iopgd_tmp;
+		da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
+		iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
+		vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
+		iopte[i] = vaddr | IOPTE_SMALL;
+	}
+
+	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
+	return __pa(iopte) | IOPGD_TABLE;
+}
+
 /*
  * on boot table is empty
  */
@@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 
 		/* "supersection" 16 Mb */
 		if (iopgd_is_super(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			if(likely(translate_supersections_to_pages)) {
+				u32 j, iopgd_tmp;
+				for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
+					iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
+					mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
+				}
+				i += (j - 1);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			}
 
 		/* "section" 1Mb */
 		} else if (iopgd_is_section(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			if (likely(translate_sections_to_pages)) {
+				mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			}
 
 		/* "table" */
 		} else if (iopgd_is_table(iopgd)) {
-- 
1.7.9.5




------------------------------

Message: 3
Date: Wed, 22 Jan 2014 17:52:03 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
Message-ID:
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>

The OMAP IOMMU module is designed to handle accesses to external
OMAP MMUs connected to the L3 bus.

Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 418 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 003ac84..cb0b385 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -14,6 +14,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
+obj-y += omap_iommu.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a6db00b..3281b67 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
 {
     &vgic_distr_mmio_handler,
     &vuart_mmio_handler,
+	&mmu_mmio_handler,
 };
 #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
 
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 8d252c0..acb5dff 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -42,6 +42,7 @@ struct mmio_handler {
 
 extern const struct mmio_handler vgic_distr_mmio_handler;
 extern const struct mmio_handler vuart_mmio_handler;
+extern const struct mmio_handler mmu_mmio_handler;
 
 extern int handle_mmio(mmio_info_t *info);
 
diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
new file mode 100644
index 0000000..4dab30f
--- /dev/null
+++ b/xen/arch/arm/omap_iommu.c
@@ -0,0 +1,415 @@
+/*
+ * xen/arch/arm/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2013 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+
+#include "io.h"
+
+/* register where address of page table is stored */
+#define MMU_TTB			0x4c
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* 1st level translation */
+#define IOPGD_SHIFT		20
+#define IOPGD_SIZE		(1UL << IOPGD_SHIFT)
+#define IOPGD_MASK		(~(IOPGD_SIZE - 1))
+
+/* "supersection" - 16 Mb */
+#define IOSUPER_SHIFT		24
+#define IOSUPER_SIZE		(1UL << IOSUPER_SHIFT)
+#define IOSUPER_MASK		(~(IOSUPER_SIZE - 1))
+
+/* "section"  - 1 Mb */
+#define IOSECTION_SHIFT		20
+#define IOSECTION_SIZE		(1UL << IOSECTION_SHIFT)
+#define IOSECTION_MASK		(~(IOSECTION_SIZE - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define PTRS_PER_IOPGD		(1UL << (32 - IOPGD_SHIFT))
+#define IOPGD_TABLE_SIZE	(PTRS_PER_IOPGD * sizeof(u32))
+
+/* 2nd level translation */
+
+/* "small page" - 4Kb */
+#define IOPTE_SMALL_SHIFT		12
+#define IOPTE_SMALL_SIZE		(1UL << IOPTE_SMALL_SHIFT)
+#define IOPTE_SMALL_MASK		(~(IOPTE_SMALL_SIZE - 1))
+
+/* "large page" - 64 Kb */
+#define IOPTE_LARGE_SHIFT		16
+#define IOPTE_LARGE_SIZE		(1UL << IOPTE_LARGE_SHIFT)
+#define IOPTE_LARGE_MASK		(~(IOPTE_LARGE_SIZE - 1))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
+#define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
+
+/*
+ * some descriptor attributes.
+ */
+#define IOPGD_TABLE		(1 << 0)
+#define IOPGD_SECTION	(2 << 0)
+#define IOPGD_SUPER		(1 << 18 | 2 << 0)
+
+#define iopgd_is_table(x)	(((x) & 3) == IOPGD_TABLE)
+#define iopgd_is_section(x)	(((x) & (1 << 18 | 3)) == IOPGD_SECTION)
+#define iopgd_is_super(x)	(((x) & (1 << 18 | 3)) == IOPGD_SUPER)
+
+#define IOPTE_SMALL		(2 << 0)
+#define IOPTE_LARGE		(1 << 0)
+
+#define iopte_is_small(x)	(((x) & 2) == IOPTE_SMALL)
+#define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
+#define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
+
+struct mmu_info {
+	const char			*name;
+	paddr_t				mem_start;
+	u32					mem_size;
+	u32					*pagetable;
+	void __iomem		*mem_map;
+};
+
+static struct mmu_info omap_ipu_mmu = {
+	.name		= "IPU_L2_MMU",
+	.mem_start	= 0x55082000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info omap_dsp_mmu = {
+	.name		= "DSP_L2_MMU",
+	.mem_start	= 0x4a066000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info *mmu_list[] = {
+	&omap_ipu_mmu,
+	&omap_dsp_mmu,
+};
+
+#define mmu_for_each(pfunc, data)						\
+({														\
+	u32 __i;											\
+	int __res = 0;										\
+														\
+	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
+		__res |= pfunc(mmu_list[__i], data);			\
+	}													\
+	__res;												\
+})
+
+static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
+		return 1;
+
+	return 0;
+}
+
+static inline struct mmu_info *mmu_lookup(u32 addr)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
+		if (mmu_check_mem_range(mmu_list[i], addr))
+			return mmu_list[i];
+	}
+
+	return NULL;
+}
+
+static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
+{
+	return (reg & mask) | (va & (~mask));
+}
+
+static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
+{
+	return (reg & ~mask) | pa;
+}
+
+static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
+{
+	return mmu_for_each(mmu_check_mem_range, addr);
+}
+
+static int mmu_copy_pagetable(struct mmu_info *mmu)
+{
+	void __iomem *pagetable = NULL;
+	u32 pgaddr;
+
+	ASSERT(mmu);
+
+	/* read address where kernel MMU pagetable is stored */
+	pgaddr = readl(mmu->mem_map + MMU_TTB);
+	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
+	if (!pagetable) {
+		printk("%s: %s failed to map pagetable\n",
+			   __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	/*
+	 * pagetable can be changed since last time
+	 * we accessed it therefore we need to copy it each time
+	 */
+	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
+
+	iounmap(pagetable);
+
+	return 0;
+}
+
+#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)									\
+{																								\
+	const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||				\
+							(mask == IOPTE_LARGE_MASK)) ? "table"								\
+							: iopgd_is_super(iopgd) ? "supersection"							\
+							: iopgd_is_section(iopgd) ? "section"								\
+							: "Unknown section";												\
+	printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n",\
+		   sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);								\
+}
+
+static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
+{
+	u32 vaddr, paddr;
+	paddr_t maddr;
+
+	paddr = mmu_virt_to_phys(iopgd, da, mask);
+	maddr = p2m_lookup(dom, paddr);
+	vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
+
+	return vaddr;
+}
+
+/*
+ * on boot table is empty
+ */
+static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
+{
+	u32 i;
+	int res;
+	bool table_updated = false;
+
+	ASSERT(dom);
+	ASSERT(mmu);
+
+	/* copy pagetable from  domain to xen */
+	res = mmu_copy_pagetable(mmu);
+	if (res) {
+		printk("%s: %s failed to map pagetable memory\n",
+			   __func__, mmu->name);
+		return res;
+	}
+
+	/* 1-st level translation */
+	for (i = 0; i < PTRS_PER_IOPGD; i++) {
+		u32 da;
+		u32 iopgd = mmu->pagetable[i];
+
+		if (!iopgd)
+			continue;
+
+		table_updated = true;
+
+		/* "supersection" 16 Mb */
+		if (iopgd_is_super(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+
+		/* "section" 1Mb */
+		} else if (iopgd_is_section(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+
+		/* "table" */
+		} else if (iopgd_is_table(iopgd)) {
+			u32 j, mask;
+			u32 iopte = iopte_offset(iopgd);
+
+			/* 2-nd level translation */
+			for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
+
+				/* "small table" 4Kb */
+				if (iopte_is_small(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
+					mask = IOPTE_SMALL_MASK;
+
+				/* "large table" 64Kb */
+				} else if (iopte_is_large(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
+					mask = IOPTE_LARGE_MASK;
+
+				/* error */
+				} else {
+					printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
+					return -EINVAL;
+				}
+
+				/* translate 2-nd level entry */
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
+			}
+
+			continue;
+
+		/* error */
+		} else {
+			printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
+			return -EINVAL;
+		}
+	}
+
+	/* force omap IOMMU to use new pagetable */
+	if (table_updated) {
+		paddr_t maddr;
+		flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
+		maddr = __pa(mmu->pagetable);
+		writel(maddr, mmu->mem_map + MMU_TTB);
+		printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
+	}
+
+	return 0;
+}
+
+static int mmu_trap_write_access(struct domain *dom,
+								 struct mmu_info *mmu, mmio_info_t *info)
+{
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res = 0;
+
+	switch (info->gpa - mmu->mem_start) {
+		case MMU_TTB:
+			printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
+				   mmu->name, _p(info->gpa), *r);
+			res = mmu_translate_pagetable(dom, mmu);
+			break;
+		default:
+			break;
+	}
+
+	return res;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+	struct mmu_info *mmu = NULL;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    register_t *r = select_user_reg(regs, info->dabt.reg);
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+    *r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+    return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+	struct domain *dom = v->domain;
+	struct mmu_info *mmu = NULL;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res;
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	/*
+	 * make sure that user register is written first in this function
+	 * following calls may expect valid data in it
+	 */
+    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	res = mmu_trap_write_access(dom, mmu, info);
+	if (res)
+		return res;
+
+    return 1;
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+	ASSERT(mmu);
+	ASSERT(!mmu->mem_map);
+	ASSERT(!mmu->pagetable);
+
+    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
+	if (!mmu->mem_map) {
+		printk("%s: %s failed to map memory\n",  __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	printk("%s: %s ipu_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
+
+	mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
+	if (!mmu->pagetable) {
+		printk("%s: %s failed to alloc private pagetable\n",
+			   __func__, mmu->name);
+		return -ENOMEM;
+	}
+
+	printk("%s: %s private pagetable %lu bytes\n",
+		   __func__, mmu->name, IOPGD_TABLE_SIZE);
+
+	return 0;
+}
+
+static int mmu_init_all(void)
+{
+	int res;
+
+	res = mmu_for_each(mmu_init, 0);
+	if (res) {
+		printk("%s error during init %d\n", __func__, res);
+		return res;
+	}
+
+	return 0;
+}
+
+const struct mmio_handler mmu_mmio_handler = {
+	.check_handler = mmu_mmio_check,
+	.read_handler  = mmu_mmio_read,
+	.write_handler = mmu_mmio_write,
+};
+
+__initcall(mmu_init_all);
-- 
1.7.9.5




------------------------------

Message: 4
Date: Wed, 22 Jan 2014 17:52:05 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 3/3] arm: omap: cleanup iopte
	allocations
Message-ID:
	<1390405925-1764-4-git-send-email-andrii.tseglytskyi@globallogic.com>

Each iopte allocation requires 4 KB of memory.
All allocations from the previous MMU reconfiguration
must be freed before a new reconfiguration cycle.
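The bookkeeping this patch adds can be sketched in plain C. This is an illustrative stand-in only, not the patch code: `malloc`/`calloc`/`free` and a hand-rolled singly-linked list replace Xen's `xzalloc_bytes`/`xfree` and `list_head`, and the names `tracked_alloc`/`tracked_cleanup` are hypothetical.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for the allocation-tracking list: each second-level
 * table allocated during a reconfiguration is recorded so the next
 * reconfiguration can free the whole set at once. */
struct alloc_node {
    void *vptr;                  /* the tracked allocation */
    struct alloc_node *next;
};

struct alloc_list {
    struct alloc_node *head;
    int count;
};

/* Allocate a zeroed block and remember it for later cleanup;
 * returns NULL on failure. */
static void *tracked_alloc(struct alloc_list *list, size_t size)
{
    struct alloc_node *node = malloc(sizeof(*node));
    if (!node)
        return NULL;
    node->vptr = calloc(1, size);    /* zeroed, like xzalloc_bytes() */
    if (!node->vptr) {
        free(node);
        return NULL;
    }
    node->next = list->head;
    list->head = node;
    list->count++;
    return node->vptr;
}

/* Free every tracked allocation, mirroring mmu_cleanup_pagetable(). */
static void tracked_cleanup(struct alloc_list *list)
{
    struct alloc_node *node = list->head;
    while (node) {
        struct alloc_node *next = node->next;
        free(node->vptr);
        free(node);
        node = next;
    }
    list->head = NULL;
    list->count = 0;
}
```

Calling `tracked_cleanup()` at the start of each reconfiguration, as `mmu_translate_pagetable()` does in the patch, keeps the set of live second-level tables bounded.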

Change-Id: I6db69a400cdba1170b43d9dc68d0817db77cbf9c
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 7ec03a2..a5ad3ac 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -93,12 +93,18 @@
 #define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
 #define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
 
+struct mmu_alloc_node {
+	u32					*vptr;
+	struct list_head	node;
+};
+
 struct mmu_info {
 	const char			*name;
 	paddr_t				mem_start;
 	u32					mem_size;
 	u32					*pagetable;
 	void __iomem		*mem_map;
+	struct list_head	alloc_list;
 };
 
 static struct mmu_info omap_ipu_mmu = {
@@ -222,8 +228,15 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
 {
 	u32 *iopte = NULL;
+	struct mmu_alloc_node *alloc_node;
 	u32 i;
 
+	alloc_node = xzalloc_bytes(sizeof(struct mmu_alloc_node));
+	if (!alloc_node) {
+		printk("%s Fail to alloc vptr node\n", mmu->name);
+		return 0;
+	}
+
 	iopte = xzalloc_bytes(PAGE_SIZE);
 	if (!iopte) {
 		printk("%s Fail to alloc 2nd level table\n", mmu->name);
@@ -238,10 +251,27 @@ static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd,
 		iopte[i] = vaddr | IOPTE_SMALL;
 	}
 
+	/* store pointer for following cleanup */
+	alloc_node->vptr = iopte;
+	list_add(&alloc_node->node, &mmu->alloc_list);
+
 	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
 	return __pa(iopte) | IOPGD_TABLE;
 }
 
+static void mmu_cleanup_pagetable(struct mmu_info *mmu)
+{
+	struct mmu_alloc_node *mmu_alloc, *tmp;
+
+	ASSERT(mmu);
+
+	list_for_each_entry_safe(mmu_alloc, tmp, &mmu->alloc_list, node) {
+		xfree(mmu_alloc->vptr);
+		list_del(&mmu_alloc->node);
+		xfree(mmu_alloc);
+	}
+}
+
 /*
  * on boot table is empty
  */
@@ -254,6 +284,9 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 	ASSERT(dom);
 	ASSERT(mmu);
 
+	/* free all previous allocations */
+	mmu_cleanup_pagetable(mmu);
+
 	/* copy pagetable from  domain to xen */
 	res = mmu_copy_pagetable(mmu);
 	if (res) {
@@ -432,6 +465,8 @@ static int mmu_init(struct mmu_info *mmu, u32 data)
 	printk("%s: %s private pagetable %lu bytes\n",
 		   __func__, mmu->name, IOPGD_TABLE_SIZE);
 
+	INIT_LIST_HEAD(&mmu->alloc_list);
+
 	return 0;
 }
 
-- 
1.7.9.5




------------------------------

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


End of Xen-devel Digest, Vol 107, Issue 352
*******************************************

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 03:03:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 03:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6AZ0-0007Zz-2A; Thu, 23 Jan 2014 03:02:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prabu@ti.com>) id 1W6AYy-0007Zu-3M
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 03:02:40 +0000
Received: from [85.158.143.35:46762] by server-3.bemta-4.messagelabs.com id
	A3/DA-32360-F4680E25; Thu, 23 Jan 2014 03:02:39 +0000
X-Env-Sender: prabu@ti.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390446156!189266!1
X-Originating-IP: [198.47.26.152]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk4LjQ3LjI2LjE1MiA9PiAxNjQ5NzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28141 invoked from network); 23 Jan 2014 03:02:38 -0000
Received: from comal.ext.ti.com (HELO comal.ext.ti.com) (198.47.26.152)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 03:02:38 -0000
Received: from dbdlxv05.itg.ti.com ([172.24.171.60])
	by comal.ext.ti.com (8.13.7/8.13.7) with ESMTP id s0N32WxW013288
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 21:02:33 -0600
Received: from DBDE72.ent.ti.com (dbde72.ent.ti.com [172.24.171.97])
	by dbdlxv05.itg.ti.com (8.14.3/8.13.8) with ESMTP id s0N32UTJ010555
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 08:32:31 +0530
Received: from DBDE04.ent.ti.com ([fe80::21ac:d9f:f810:c8e7]) by
	DBDE72.ent.ti.com ([fe80::bd56:32bf:87fd:2763%27]) with mapi id
	14.02.0342.003; Thu, 23 Jan 2014 11:02:30 +0800
From: "Sundareson, Prabindh" <prabu@ti.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
Thread-Index: Ac8X5yD2fTL5f+daQeOuuca6efUKFQ==
Date: Thu, 23 Jan 2014 03:02:29 +0000
Message-ID: <321768C95D21724485BCE784F1BE98473EC8FE81@DBDE04.ent.ti.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.24.170.142]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
	OMAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Andrii,

Dumb question - wouldn't there be a need to lock the data structures while updating/copying? Or does the Xen architecture somehow prevent other updates?

I see 2 IPs being used - IPU and DSP; what did you have in mind for the 3rd IP? GPU?

If there are some quick instructions available to test, I can try this out.

regards,
Prabu


-----Original Message-----
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of xen-devel-request@lists.xen.org
Sent: Wednesday, January 22, 2014 9:23 PM
To: xen-devel@lists.xen.org
Subject: Xen-devel Digest, Vol 107, Issue 352

Send Xen-devel mailing list submissions to
	xen-devel@lists.xen.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel
or, via email, send a message with subject or body 'help' to
	xen-devel-request@lists.xen.org

You can reach the person managing the list at
	xen-devel-owner@lists.xen.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Xen-devel digest..."


Today's Topics:

   1. [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
      platforms (Andrii Tseglytskyi)
   2. [RFC v01 2/3] arm: omap: translate iommu mapping to 4K	pages
      (Andrii Tseglytskyi)
   3. [RFC v01 1/3] arm: omap: introduce iommu module
      (Andrii Tseglytskyi)
   4. [RFC v01 3/3] arm: omap: cleanup iopte allocations
      (Andrii Tseglytskyi)


----------------------------------------------------------------------

Message: 1
Date: Wed, 22 Jan 2014 17:52:02 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
	OMAP	platforms
Message-ID:
	<1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>

Hi,

The following patch series is an RFC for a possible implementation of a simple MMU module, designed to translate IPA to MA for peripheral processors such as the GPU / IPU on OMAP platforms. Currently on our OMAP platform (OMAP5 panda) we have 3 external MMUs which need to be handled properly.

It would be great to get community feedback - would this be useful for the Xen project?

Let me describe the algorithm briefly. It is simple and straightforward.
The following logic is used to translate addresses from IPA to MA:

1. At boot time the guest domain creates a "pagetable" for the external MMU IP.
The pagetable is a singleton data structure stored in ordinary kernel heap memory. All memory mappings for the corresponding MMU are stored inside it.
The format of the "pagetable" is well defined.

2. The guest domain enables the peripheral remote processor. As part of the enable sequence the kernel allocates the chunks of heap memory needed by the remote processor and stores pointers to the allocated chunks in the already created "pagetable". It then writes the physical address of the pagetable to the MMU configuration register. As a result the MMU IP knows about all allocations, and the remote processor can use them directly in its software.

3. The Xen omap mmu driver traps accesses to the MMU configuration registers.
It reads the physical address of the "pagetable" from the MMU register and creates a copy of it in its own memory. As a result we have two similar configuration data structures: the first in the guest domain kernel, the second in the Xen hypervisor.

4. The Xen omap mmu driver parses its own copy of the pagetable and translates all physical addresses to the corresponding machine addresses using the existing p2m API.
It then writes the physical address of its pagetable (with PA already translated to MA) to the MMU IP configuration registers and returns control to the guest domain.

As a result, the guest domain continues enabling the remote processor with its MMU, and the MMU will use the new pagetable modified by the Xen omap mmu driver. The new pagetable will be used directly by the MMU IP, and its new structure will be hidden from the guest domain kernel; it won't know anything about p2m translation.
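The per-entry translation in step 4 can be sketched as follows. This is a simplified model, not the driver code: `fake_p2m_lookup()` is a hypothetical stand-in for Xen's `p2m_lookup()` (here it just adds a fixed offset), and the attribute bits are carried over by masking, as in the patch's `mmu_virt_to_phys()`/`mmu_phys_to_virt()` helpers.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for p2m_lookup(): maps a guest-physical address to
 * a machine address by adding a fixed offset.  Real Xen walks the domain's
 * p2m tables. */
static uint32_t fake_p2m_lookup(uint32_t paddr)
{
    return paddr + 0x40000000u;
}

/* Translate one first-level descriptor: extract the guest-physical address
 * covered by the entry, look up the machine address, then recombine it with
 * the descriptor's attribute bits.  'mask' selects the address bits for the
 * entry size (section, supersection, ...). */
static uint32_t translate_pgentry(uint32_t iopgd, uint32_t da, uint32_t mask)
{
    uint32_t paddr = (iopgd & mask) | (da & ~mask);  /* mmu_virt_to_phys() */
    uint32_t maddr = fake_p2m_lookup(paddr);
    return (iopgd & ~mask) | maddr;                  /* mmu_phys_to_virt() */
}
```

For a 1 MB section descriptor `0x81000C02` (base `0x81000000`, attribute bits `0xC02`) with the fake offset above, the rewritten descriptor keeps the attributes and gets base `0xC1000000`.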

Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, Linux(Android) kernel 3.4 as DomU.
Target platform OMAP5 panda.

Thank you for your attention,

Regards,

Andrii Tseglytskyi (3):
  arm: omap: introduce iommu module
  arm: omap: translate iommu mapping to 4K pages
  arm: omap: cleanup iopte allocations

 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

--
1.7.9.5




------------------------------

Message: 2
Date: Wed, 22 Jan 2014 17:52:04 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping
	to 4K	pages
Message-ID:
	<1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>

The patch introduces the following algorithm:
- enumerate all first-level translation entries
- for each section create 256 pages, each page 4096 bytes
- for each supersection create 4096 pages, each page 4096 bytes
- flush the cache to synchronize the Cortex-A15 and the IOMMU

This algorithm makes it possible to use 4K mappings only.
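The section-to-pages split can be sketched as follows. This is a simplified model assuming the ARM short-descriptor constants used by the patch; the p2m translation of each 4 KB page is omitted, and the helper name `split_section_to_pages` is hypothetical.

```c
#include <assert.h>
#include <stdint.h>

#define IOSECTION_SHIFT   20   /* 1 MB section */
#define IOPTE_SMALL_SHIFT 12   /* 4 KB small page */
/* 256 second-level entries cover one 1 MB section */
#define PTRS_PER_IOPTE    (1u << (IOSECTION_SHIFT - IOPTE_SMALL_SHIFT))
#define IOPTE_SMALL       (2u << 0)   /* small-page descriptor type bits */
#define IOSECTION_MASK    (~((1u << IOSECTION_SHIFT) - 1))

/* Fill a second-level table so that 256 contiguous 4 KB small-page entries
 * cover the same 1 MB range as the original section descriptor.  Each entry
 * gets the right 4 KB base plus the small-page type bits. */
static void split_section_to_pages(uint32_t iopgd,
                                   uint32_t iopte[PTRS_PER_IOPTE])
{
    uint32_t base = iopgd & IOSECTION_MASK;
    uint32_t i;

    for (i = 0; i < PTRS_PER_IOPTE; i++)
        iopte[i] = (base + (i << IOPTE_SMALL_SHIFT)) | IOPTE_SMALL;
}
```

A 16 MB supersection is handled the same way, repeated for each of its 16 constituent 1 MB sections, which is what the nested loop in the patch does.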

Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 4dab30f..7ec03a2 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -72,6 +72,9 @@
 #define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
 #define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
 
+/* 16 sections in supersection */
+#define IOSECTION_PER_IOSUPER	(1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
+
 /*
  * some descriptor attributes.
  */
@@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
 	&omap_dsp_mmu,
 };
 
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
 #define mmu_for_each(pfunc, data)						\
 ({														\
 	u32 __i;											\
@@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 	return vaddr;
 }
 
+static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
+{
+	u32 *iopte = NULL;
+	u32 i;
+
+	iopte = xzalloc_bytes(PAGE_SIZE);
+	if (!iopte) {
+		printk("%s Fail to alloc 2nd level table\n", mmu->name);
+		return 0;
+	}
+
+	for (i = 0; i < PTRS_PER_IOPTE; i++) {
+		u32 da, vaddr, iopgd_tmp;
+		da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
+		iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
+		vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
+		iopte[i] = vaddr | IOPTE_SMALL;
+	}
+
+	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
+	return __pa(iopte) | IOPGD_TABLE;
+}
+
 /*
  * on boot table is empty
  */
@@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 
 		/* "supersection" 16 Mb */
 		if (iopgd_is_super(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			if(likely(translate_supersections_to_pages)) {
+				u32 j, iopgd_tmp;
+				for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
+					iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
+					mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
+				}
+				i += (j - 1);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+			}
 
 		/* "section" 1Mb */
 		} else if (iopgd_is_section(iopgd)) {
-			da = i << IOSECTION_SHIFT;
-			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			if (likely(translate_sections_to_pages)) {
+				mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
+			} else {
+				da = i << IOSECTION_SHIFT;
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+			}
 
 		/* "table" */
 		} else if (iopgd_is_table(iopgd)) {
-- 
1.7.9.5




------------------------------

Message: 3
Date: Wed, 22 Jan 2014 17:52:03 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
Message-ID:
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>

The omap IOMMU module is designed to handle accesses to the external
omap MMUs connected to the L3 bus.
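The module claims MMIO accesses by address range, one register window per external MMU. A minimal sketch of that lookup, with the register windows taken from the patch and the names `mmu_range`/`range_lookup` hypothetical:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative MMIO range check, mirroring mmu_check_mem_range(): an access
 * is claimed only if it falls inside one device's register window.  Window
 * addresses are the OMAP5 values from the patch. */
struct mmu_range {
    const char *name;
    uint64_t start;
    uint32_t size;
};

static const struct mmu_range ranges[] = {
    { "IPU_L2_MMU", 0x55082000u, 0x1000u },
    { "DSP_L2_MMU", 0x4a066000u, 0x1000u },
};

/* Return the matching range, or NULL when no external MMU owns the address. */
static const struct mmu_range *range_lookup(uint64_t addr)
{
    unsigned i;

    for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++)
        if (addr >= ranges[i].start &&
            addr < ranges[i].start + ranges[i].size)
            return &ranges[i];
    return NULL;
}
```

In the patch, the same check is reused both for the `check_handler` dispatch in io.c and for `mmu_lookup()` inside the read/write handlers.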

Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 418 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 003ac84..cb0b385 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -14,6 +14,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
+obj-y += omap_iommu.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a6db00b..3281b67 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
 {
     &vgic_distr_mmio_handler,
     &vuart_mmio_handler,
+	&mmu_mmio_handler,
 };
 #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
 
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 8d252c0..acb5dff 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -42,6 +42,7 @@ struct mmio_handler {
 
 extern const struct mmio_handler vgic_distr_mmio_handler;
 extern const struct mmio_handler vuart_mmio_handler;
+extern const struct mmio_handler mmu_mmio_handler;
 
 extern int handle_mmio(mmio_info_t *info);
 
diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
new file mode 100644
index 0000000..4dab30f
--- /dev/null
+++ b/xen/arch/arm/omap_iommu.c
@@ -0,0 +1,415 @@
+/*
+ * xen/arch/arm/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
+ * Copyright (c) 2013 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+
+#include "io.h"
+
+/* register where address of page table is stored */
+#define MMU_TTB			0x4c
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* 1st level translation */
+#define IOPGD_SHIFT		20
+#define IOPGD_SIZE		(1UL << IOPGD_SHIFT)
+#define IOPGD_MASK		(~(IOPGD_SIZE - 1))
+
+/* "supersection" - 16 Mb */
+#define IOSUPER_SHIFT		24
+#define IOSUPER_SIZE		(1UL << IOSUPER_SHIFT)
+#define IOSUPER_MASK		(~(IOSUPER_SIZE - 1))
+
+/* "section"  - 1 Mb */
+#define IOSECTION_SHIFT		20
+#define IOSECTION_SIZE		(1UL << IOSECTION_SHIFT)
+#define IOSECTION_MASK		(~(IOSECTION_SIZE - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define PTRS_PER_IOPGD		(1UL << (32 - IOPGD_SHIFT))
+#define IOPGD_TABLE_SIZE	(PTRS_PER_IOPGD * sizeof(u32))
+
+/* 2nd level translation */
+
+/* "small page" - 4Kb */
+#define IOPTE_SMALL_SHIFT		12
+#define IOPTE_SMALL_SIZE		(1UL << IOPTE_SMALL_SHIFT)
+#define IOPTE_SMALL_MASK		(~(IOPTE_SMALL_SIZE - 1))
+
+/* "large page" - 64 Kb */
+#define IOPTE_LARGE_SHIFT		16
+#define IOPTE_LARGE_SIZE		(1UL << IOPTE_LARGE_SHIFT)
+#define IOPTE_LARGE_MASK		(~(IOPTE_LARGE_SIZE - 1))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
+#define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
+
+/*
+ * some descriptor attributes.
+ */
+#define IOPGD_TABLE		(1 << 0)
+#define IOPGD_SECTION	(2 << 0)
+#define IOPGD_SUPER		(1 << 18 | 2 << 0)
+
+#define iopgd_is_table(x)	(((x) & 3) == IOPGD_TABLE)
+#define iopgd_is_section(x)	(((x) & (1 << 18 | 3)) == IOPGD_SECTION)
+#define iopgd_is_super(x)	(((x) & (1 << 18 | 3)) == IOPGD_SUPER)
+
+#define IOPTE_SMALL		(2 << 0)
+#define IOPTE_LARGE		(1 << 0)
+
+#define iopte_is_small(x)	(((x) & 2) == IOPTE_SMALL)
+#define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
+#define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
+
+struct mmu_info {
+	const char			*name;
+	paddr_t				mem_start;
+	u32					mem_size;
+	u32					*pagetable;
+	void __iomem		*mem_map;
+};
+
+static struct mmu_info omap_ipu_mmu = {
+	.name		= "IPU_L2_MMU",
+	.mem_start	= 0x55082000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info omap_dsp_mmu = {
+	.name		= "DSP_L2_MMU",
+	.mem_start	= 0x4a066000,
+	.mem_size	= 0x1000,
+	.pagetable	= NULL,
+};
+
+static struct mmu_info *mmu_list[] = {
+	&omap_ipu_mmu,
+	&omap_dsp_mmu,
+};
+
+#define mmu_for_each(pfunc, data)						\
+({														\
+	u32 __i;											\
+	int __res = 0;										\
+														\
+	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
+		__res |= pfunc(mmu_list[__i], data);			\
+	}													\
+	__res;												\
+})
+
+static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
+		return 1;
+
+	return 0;
+}
+
+static inline struct mmu_info *mmu_lookup(u32 addr)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
+		if (mmu_check_mem_range(mmu_list[i], addr))
+			return mmu_list[i];
+	}
+
+	return NULL;
+}
+
+static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
+{
+	return (reg & mask) | (va & (~mask));
+}
+
+static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
+{
+	return (reg & ~mask) | pa;
+}
+
+static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
+{
+	return mmu_for_each(mmu_check_mem_range, addr);
+}
+
+static int mmu_copy_pagetable(struct mmu_info *mmu)
+{
+	void __iomem *pagetable = NULL;
+	u32 pgaddr;
+
+	ASSERT(mmu);
+
+	/* read address where kernel MMU pagetable is stored */
+	pgaddr = readl(mmu->mem_map + MMU_TTB);
+	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
+	if (!pagetable) {
+		printk("%s: %s failed to map pagetable\n",
+			   __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	/*
+	 * pagetable can be changed since last time
+	 * we accessed it therefore we need to copy it each time
+	 */
+	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
+
+	iounmap(pagetable);
+
+	return 0;
+}
+
+#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)									\
+{																								\
+	const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||				\
+							(mask == IOPTE_LARGE_MASK)) ? "table"								\
+							: iopgd_is_super(iopgd) ? "supersection"							\
+							: iopgd_is_section(iopgd) ? "section"								\
+							: "Unknown section";												\
+	printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n",\
+		   sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);								\
+}
+
+static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
+{
+	u32 vaddr, paddr;
+	paddr_t maddr;
+
+	paddr = mmu_virt_to_phys(iopgd, da, mask);
+	maddr = p2m_lookup(dom, paddr);
+	vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
+
+	return vaddr;
+}
+
+/*
+ * on boot table is empty
+ */
+static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
+{
+	u32 i;
+	int res;
+	bool table_updated = false;
+
+	ASSERT(dom);
+	ASSERT(mmu);
+
+	/* copy pagetable from  domain to xen */
+	res = mmu_copy_pagetable(mmu);
+	if (res) {
+		printk("%s: %s failed to map pagetable memory\n",
+			   __func__, mmu->name);
+		return res;
+	}
+
+	/* 1-st level translation */
+	for (i = 0; i < PTRS_PER_IOPGD; i++) {
+		u32 da;
+		u32 iopgd = mmu->pagetable[i];
+
+		if (!iopgd)
+			continue;
+
+		table_updated = true;
+
+		/* "supersection" 16 Mb */
+		if (iopgd_is_super(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+
+		/* "section" 1Mb */
+		} else if (iopgd_is_section(iopgd)) {
+			da = i << IOSECTION_SHIFT;
+			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+
+		/* "table" */
+		} else if (iopgd_is_table(iopgd)) {
+			u32 j, mask;
+			u32 iopte = iopte_offset(iopgd);
+
+			/* 2-nd level translation */
+			for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
+
+				/* "small table" 4Kb */
+				if (iopte_is_small(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
+					mask = IOPTE_SMALL_MASK;
+
+				/* "large table" 64Kb */
+				} else if (iopte_is_large(iopgd)) {
+					da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
+					mask = IOPTE_LARGE_MASK;
+
+				/* error */
+				} else {
+					printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
+					return -EINVAL;
+				}
+
+				/* translate 2-nd level entry */
+				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
+			}
+
+			continue;
+
+		/* error */
+		} else {
+			printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
+			return -EINVAL;
+		}
+	}
+
+	/* force omap IOMMU to use new pagetable */
+	if (table_updated) {
+		paddr_t maddr;
+		flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
+		maddr = __pa(mmu->pagetable);
+		writel(maddr, mmu->mem_map + MMU_TTB);
+		printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
+	}
+
+	return 0;
+}
+
+static int mmu_trap_write_access(struct domain *dom,
+								 struct mmu_info *mmu, mmio_info_t *info)
+{
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res = 0;
+
+	switch (info->gpa - mmu->mem_start) {
+		case MMU_TTB:
+			printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
+				   mmu->name, _p(info->gpa), *r);
+			res = mmu_translate_pagetable(dom, mmu);
+			break;
+		default:
+			break;
+	}
+
+	return res;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	*r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+	struct domain *dom = v->domain;
+	struct mmu_info *mmu = NULL;
+	struct cpu_user_regs *regs = guest_cpu_user_regs();
+	register_t *r = select_user_reg(regs, info->dabt.reg);
+	int res;
+
+	mmu = mmu_lookup(info->gpa);
+	if (!mmu) {
+		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+		return -EINVAL;
+	}
+
+	/*
+	 * make sure the user register is written first in this function;
+	 * following calls may expect valid data in it
+	 */
+	writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+	res = mmu_trap_write_access(dom, mmu, info);
+	if (res)
+		return res;
+
+	return 1;
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+	ASSERT(mmu);
+	ASSERT(!mmu->mem_map);
+	ASSERT(!mmu->pagetable);
+
+	mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
+	if (!mmu->mem_map) {
+		printk("%s: %s failed to map memory\n",  __func__, mmu->name);
+		return -EINVAL;
+	}
+
+	printk("%s: %s ipu_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
+
+	mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
+	if (!mmu->pagetable) {
+		printk("%s: %s failed to alloc private pagetable\n",
+			   __func__, mmu->name);
+		return -ENOMEM;
+	}
+
+	printk("%s: %s private pagetable %lu bytes\n",
+		   __func__, mmu->name, IOPGD_TABLE_SIZE);
+
+	return 0;
+}
+
+static int mmu_init_all(void)
+{
+	int res;
+
+	res = mmu_for_each(mmu_init, 0);
+	if (res) {
+		printk("%s error during init %d\n", __func__, res);
+		return res;
+	}
+
+	return 0;
+}
+
+const struct mmio_handler mmu_mmio_handler = {
+	.check_handler = mmu_mmio_check,
+	.read_handler  = mmu_mmio_read,
+	.write_handler = mmu_mmio_write,
+};
+
+__initcall(mmu_init_all);
-- 
1.7.9.5




------------------------------

Message: 4
Date: Wed, 22 Jan 2014 17:52:05 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC v01 3/3] arm: omap: cleanup iopte
	allocations
Message-ID:
	<1390405925-1764-4-git-send-email-andrii.tseglytskyi@globallogic.com>

Each iopte allocation requires 4 KB of memory.
All allocations from the previous MMU reconfiguration
must be freed before a new reconfiguration cycle.

Change-Id: I6db69a400cdba1170b43d9dc68d0817db77cbf9c
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/arch/arm/omap_iommu.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 7ec03a2..a5ad3ac 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -93,12 +93,18 @@
 #define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
 #define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
 
+struct mmu_alloc_node {
+	u32					*vptr;
+	struct list_head	node;
+};
+
 struct mmu_info {
 	const char			*name;
 	paddr_t				mem_start;
 	u32					mem_size;
 	u32					*pagetable;
 	void __iomem		*mem_map;
+	struct list_head	alloc_list;
 };
 
 static struct mmu_info omap_ipu_mmu = {
@@ -222,8 +228,15 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
 {
 	u32 *iopte = NULL;
+	struct mmu_alloc_node *alloc_node;
 	u32 i;
 
+	alloc_node = xzalloc_bytes(sizeof(struct mmu_alloc_node));
+	if (!alloc_node) {
+		printk("%s Fail to alloc vptr node\n", mmu->name);
+		return 0;
+	}
+
 	iopte = xzalloc_bytes(PAGE_SIZE);
 	if (!iopte) {
 		printk("%s Fail to alloc 2nd level table\n", mmu->name);
@@ -238,10 +251,27 @@ static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd,
 		iopte[i] = vaddr | IOPTE_SMALL;
 	}
 
+	/* store pointer for following cleanup */
+	alloc_node->vptr = iopte;
+	list_add(&alloc_node->node, &mmu->alloc_list);
+
 	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
 	return __pa(iopte) | IOPGD_TABLE;
 }
 
+static void mmu_cleanup_pagetable(struct mmu_info *mmu)
+{
+	struct mmu_alloc_node *mmu_alloc, *tmp;
+
+	ASSERT(mmu);
+
+	list_for_each_entry_safe(mmu_alloc, tmp, &mmu->alloc_list, node) {
+		xfree(mmu_alloc->vptr);
+		list_del(&mmu_alloc->node);
+		xfree(mmu_alloc);
+	}
+}
+
 /*
  * on boot table is empty
  */
@@ -254,6 +284,9 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 	ASSERT(dom);
 	ASSERT(mmu);
 
+	/* free all previous allocations */
+	mmu_cleanup_pagetable(mmu);
+
 	/* copy pagetable from  domain to xen */
 	res = mmu_copy_pagetable(mmu);
 	if (res) {
@@ -432,6 +465,8 @@ static int mmu_init(struct mmu_info *mmu, u32 data)
 	printk("%s: %s private pagetable %lu bytes\n",
 		   __func__, mmu->name, IOPGD_TABLE_SIZE);
 
+	INIT_LIST_HEAD(&mmu->alloc_list);
+
 	return 0;
 }
 
-- 
1.7.9.5




------------------------------

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


End of Xen-devel Digest, Vol 107, Issue 352
*******************************************

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 03:22:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 03:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Arg-00007E-U1; Thu, 23 Jan 2014 03:22:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W6Arf-000079-D4
	for Xen-devel@lists.xensource.com; Thu, 23 Jan 2014 03:21:59 +0000
Received: from [85.158.137.68:20339] by server-5.bemta-3.messagelabs.com id
	B5/B9-25188-6DA80E25; Thu, 23 Jan 2014 03:21:58 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390447314!9622735!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2414 invoked from network); 23 Jan 2014 03:21:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 03:21:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N3LpC2017582
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 03:21:52 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0N3LoLw012813
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 03:21:51 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N3LoI8016846; Thu, 23 Jan 2014 03:21:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 22 Jan 2014 19:21:50 -0800
Date: Wed, 22 Jan 2014 19:21:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140122192149.5f9205f7@mantra.us.oracle.com>
In-Reply-To: <20140120150930.GA4598@phenom.dumpdata.com>
References: <1390011895-28123-1-git-send-email-mukesh.rathor@oracle.com>
	<1390011895-28123-2-git-send-email-mukesh.rathor@oracle.com>
	<20140120150930.GA4598@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] xen/pvh: set some cr flags upon vcpu
	start
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 20 Jan 2014 10:09:30 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Fri, Jan 17, 2014 at 06:24:55PM -0800, Mukesh Rathor wrote:
> > pvh was designed to start with pv flags, but a commit in xen tree
> 
> Thank you for posting this!
> 
> > 51e2cac257ec8b4080d89f0855c498cbbd76a5e5 removed some of the flags
> > as
> 
> You need to always include the title of said commit.
> 
> > they are not necessary. As a result, these CR flags must be set in
> > the guest.
> 
> I sent out replies to this over the weekend but somehow they are not
> showing up.
> 

Well, they finally showed up today... US mail must be slow :)...


> 
> > +
> > +	if (!cpu)
> > +		return;
> 
> And what happens if we don't have this check? Will it be bad if we do
> multiple cr4 writes?

No, but it just confuses the reader/debugger of the code, IMO :)...


> FYI, this (cr4) should have been a separate patch. I fixed it up that
> way.
> > +	/*
> > +	 * Unlike PV, for pvh xen does not set: PSE PGE OSFXSR OSXMMEXCPT
> > +	 * For BSP, PSE PGE will be set in probe_page_size_mask(), for AP
> > +	 * set them here. For all, OSFXSR OSXMMEXCPT will be set in fpu_init
> > +	 */
> > +	if (cpu_has_pse)
> > +		set_in_cr4(X86_CR4_PSE);
> > +
> > +	if (cpu_has_pge)
> > +		set_in_cr4(X86_CR4_PGE);
> > +}
> 
> Separate patch, and since the PGE part is more complicated than just
> setting the CR4 - you also have to tweak this:
> 
> 1512         /* Prevent unwanted bits from being set in PTEs. */
> 1513         __supported_pte_mask &= ~_PAGE_GLOBAL;
> 
> I think it should be done once we have actually confirmed that you can
> do 2MB pages within the guest. (might need some more tweaking?)

Umm... well, the above just sets PSE and PGE on the APs; the BSP already
does that in probe_page_size_mask, and sets __supported_pte_mask, which
needs to be set just once. So, because it's set on the BSP, that path is
already broken/untested if we expose PGE from xen to a Linux PVH guest...

IOW, leaving the above does no more harm; otherwise we should 'if (pvh)'
the code in probe_page_size_mask() for PSE and wait till we can test it...
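To make the BSP/AP split concrete, here is a hedged user-space model of the flow under discussion (set_in_cr4() and the cpu_has_* feature tests are stand-ins for the kernel's, pvh_set_cr_flags() is an assumed name for the patch's per-vcpu hook, and a single shared cr4 variable stands in for the per-cpu register): the BSP (cpu 0) picks up PSE/PGE in probe_page_size_mask(), the APs pick them up in the hook, and the `if (!cpu) return;` guard only keeps the BSP from setting the same bits twice.

```c
#include <assert.h>

#define X86_CR4_PSE (1u << 4)
#define X86_CR4_PGE (1u << 7)

/* stand-ins for kernel state and feature tests; one shared cr4
 * for simplicity, the real CR4 is per-cpu */
static unsigned int cr4;
static const int cpu_has_pse = 1;
static const int cpu_has_pge = 1;

static void set_in_cr4(unsigned int mask)
{
    cr4 |= mask;
}

/* BSP path: probe_page_size_mask() already covers cpu 0 */
static void probe_page_size_mask(void)
{
    if (cpu_has_pse)
        set_in_cr4(X86_CR4_PSE);
    if (cpu_has_pge)
        set_in_cr4(X86_CR4_PGE);
}

/* AP path: models the per-vcpu hook from the patch under review */
static void pvh_set_cr_flags(int cpu)
{
    if (!cpu)
        return;  /* BSP already handled in probe_page_size_mask() */
    if (cpu_has_pse)
        set_in_cr4(X86_CR4_PSE);
    if (cpu_has_pge)
        set_in_cr4(X86_CR4_PGE);
}
```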

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 04:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 04:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6BYF-00020g-Co; Thu, 23 Jan 2014 04:05:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W6BYD-00020b-Nx
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 04:05:58 +0000
Received: from [193.109.254.147:47650] by server-10.bemta-14.messagelabs.com
	id 1B/1E-20752-52590E25; Thu, 23 Jan 2014 04:05:57 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390449954!12638097!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16984 invoked from network); 23 Jan 2014 04:05:55 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 04:05:55 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Wed, 22 Jan 2014 21:05:46 -0700
Message-ID: <52E09513.6060603@suse.com>
Date: Wed, 22 Jan 2014 21:05:39 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com>
In-Reply-To: <52DF57E2.2090602@suse.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig wrote:
> Ian Jackson wrote:
>   
>> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>>   
>>     
>>> I let this run over the weekend and today noticed libvirtd was deadlocked
>>>     
>>>       
>> I have just retested xl with:
>>   * my 3-patch 4.4 fixes series
>>   * v2 of my fork series
>>   * the extra mutex patch "libxl: fork: Fixup SIGCHLD sharing"
>>   * "13/12" and "14/12" just posted
>> and it WFM.
>>
>> Of course I don't have the same setup as Jim.
>>
>> Jim: if it's not too much trouble, I'd appreciate it if you could try
>> that combination.
>>
>> For your convenience you can find a git branch of it at
>>   http://xenbits.xen.org/gitweb/?p=people/iwj/xen.git;a=shortlog;h=refs/tags/wip.enumerate-pids-v2.1
>> aka
>>   git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v2.1
>>   
>>     
>
> I've been testing this branch and noticed an occasional libvirtd segfault
> that always occurs when calling libxl_domain_create_restore().  By
> occasional, I mean my save/restore script might cause the segfault after
> 2 iterations, or 20 iterations, or ...  But the segfault always occurs
> in libxl_domain_create_restore().
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffeef59700 (LWP 12083)]
> 0x00007ffff74577ef in virObjectIsClass (anyobj=0x2f302f6e69616d6f,
> klass=0x5555558a1310)
>     at util/virobject.c:362
> 362         return virClassIsDerivedFrom(obj->klass, klass);
> (gdb) bt
> #0  0x00007ffff74577ef in virObjectIsClass (anyobj=0x2f302f6e69616d6f,
> klass=0x5555558a1310)
>     at util/virobject.c:362
> #1  0x00007ffff745765b in virObjectLock (anyobj=0x2f302f6e69616d6f) at
> util/virobject.c:314
> #2  0x00007fffe993cc96 in libxlDomainObjTimeoutModifyEventHook
> (priv=0x5555558fc310,
>     hndp=0x5555559e5d88, abs_t=...) at libxl/libxl_domain.c:302
> #3  0x00007fffe96f8fed in time_deregister (gc=0x7fffeef58220,
> ev=0x5555559eee48)
>     at libxl_event.c:294
> #4  0x00007fffe96facfd in afterpoll_internal (egc=0x7fffeef58220,
> poller=0x5555559a4c70, nfds=3,
>     fds=0x5555559c09d0, now=...) at libxl_event.c:1008
> #5  0x00007fffe96fc312 in eventloop_iteration (egc=0x7fffeef58220,
> poller=0x5555559a4c70)
>     at libxl_event.c:1455
> #6  0x00007fffe96fce58 in libxl__ao_inprogress (ao=0x5555559e9690,
>     file=0x7fffe970fadb "libxl_create.c", line=1356,
>     func=0x7fffe97105f0 <__func__.16344> "do_domain_create") at
> libxl_event.c:1700
> #7  0x00007fffe96d711f in do_domain_create (ctx=0x5555559d9fa0,
> d_config=0x7fffeef58490,
>     domid=0x7fffeef5840c, restore_fd=89, checkpointed_stream=0,
> ao_how=0x0, aop_console_how=0x0)
>     at libxl_create.c:1356
> #8  0x00007fffe96d7238 in libxl_domain_create_restore
> (ctx=0x5555559d9fa0, d_config=0x7fffeef58490,
>     domid=0x7fffeef5840c, restore_fd=89, params=0x7fffeef58400,
> ao_how=0x0, aop_console_how=0x0)
>     at libxl_create.c:1387
> #...
> (gdb) f 2
> #2  0x00007fffe993cc96 in libxlDomainObjTimeoutModifyEventHook
> (priv=0x5555558fc310,
>     hndp=0x5555559e5d88, abs_t=...) at libxl/libxl_domain.c:302
> 302         virObjectLock(info->priv);
> (gdb) p info->priv
> $3 = (libxlDomainObjPrivatePtr) 0x2f302f6e69616d6f
> (gdb) f 9
> #9  0x00007fffe993f2c7 in libxlVmStart (driver=0x5555558c2e50,
> vm=0x5555558e6a50,
>     start_paused=false, restore_fd=89) at libxl/libxl_driver.c:635
> 635             res = libxl_domain_create_restore(priv->ctx, &d_config,
> &domid,
> (gdb) p priv
> $2 = (libxlDomainObjPrivatePtr) 0x5555558fc310
>
> It looks like the libxlDomainObjPrivatePtr, stashed as part of
> for_app_registration_out when registering the timeout, has been
> trampled.  Not sure if the problem is in libvirt or libxl, but it is
> late here and I'm calling it a night :).
>   

It appears the timeout_modify callback is invoked on a previously
deregistered timeout.  I didn't notice the segfault when running
libvirtd under valgrind, but did see:

==14653== Invalid read of size 8
==14653==    at 0x134ACD1C: libxlDomainObjTimeoutModifyEventHook
(libxl_domain.c:309)
==14653==    by 0x13730FEC: time_deregister (libxl_event.c:294)
==14653==    by 0x13732CFC: afterpoll_internal (libxl_event.c:1008)
==14653==    by 0x13734311: eventloop_iteration (libxl_event.c:1455)
==14653==    by 0x13734E57: libxl__ao_inprogress (libxl_event.c:1700)
==14653==    by 0x1370F11E: do_domain_create (libxl_create.c:1356)
==14653==    by 0x1370F237: libxl_domain_create_restore
(libxl_create.c:1387)
==14653==    by 0x134AF332: libxlVmStart (libxl_driver.c:635)
==14653==    by 0x134B382A: libxlDomainRestoreFlags (libxl_driver.c:2047)
==14653==    by 0x134B3975: libxlDomainRestore (libxl_driver.c:2070)
==14653==    by 0x53B5AC7: virDomainRestore (libvirt.c:2678)
==14653==    by 0x130ADC: remoteDispatchDomainRestore
(remote_dispatch.h:6657)
==14653==  Address 0x18000178 is 8 bytes inside a block of size 32 free'd
==14653==    at 0x4C28ADC: free (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==14653==    by 0x529B08F: virFree (viralloc.c:580)
==14653==    by 0x134AC578: libxlDomainObjEventHookInfoFree
(libxl_domain.c:110)
==14653==    by 0x52BE3DB: virEventPollCleanupTimeouts (vireventpoll.c:535)
==14653==    by 0x52BEA4C: virEventPollRunOnce (vireventpoll.c:651)
==14653==    by 0x52BC960: virEventRunDefaultImpl (virevent.c:306)

which is consistent with the gdb findings.  I've audited the timeout
handling code in libvirt and didn't notice any problems.  I'll have some
time tomorrow to continue poking.
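The failure mode described here, a modify hook firing on a registration that has already been deregistered and freed, can be reproduced in isolation. The sketch below is illustrative only (timeout_modify, timeout_deregister, and hook_info are made-up names, not the libxl/libvirt API); it shows why clearing the stashed pointer at deregistration turns a use-after-free into a detectable no-op:

```c
#include <assert.h>
#include <stdlib.h>

/* per-timeout state handed to the application's hooks */
struct hook_info {
    int armed;
};

struct timeout {
    struct hook_info *info;   /* NULL once deregistered */
};

static int late_modify_ignored;

/* the modify hook; guards against being invoked after deregistration,
 * which is the ordering the backtrace above shows */
static void timeout_modify(struct timeout *t)
{
    if (!t->info) {           /* registration already torn down */
        late_modify_ignored++;
        return;
    }
    t->info->armed = 1;       /* ... re-arm using the hook info ... */
}

static void timeout_deregister(struct timeout *t)
{
    free(t->info);            /* the hook info is freed here */
    t->info = NULL;           /* clearing the pointer makes any later
                               * modify call a no-op instead of an
                               * invalid read of freed memory */
}
```

Without the NULL-ing in timeout_deregister(), the late timeout_modify() call dereferences freed memory, which is exactly what the valgrind "Invalid read" above reports.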

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 06:09:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 06:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6DSd-0007NX-Uw; Thu, 23 Jan 2014 06:08:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linus971@gmail.com>) id 1W6DSc-0007NS-CE
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 06:08:18 +0000
Received: from [193.109.254.147:58333] by server-11.bemta-14.messagelabs.com
	id 59/D5-20576-1D1B0E25; Thu, 23 Jan 2014 06:08:17 +0000
X-Env-Sender: linus971@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390457296!12633748!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3957 invoked from network); 23 Jan 2014 06:08:16 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 06:08:16 -0000
Received: by mail-ea0-f180.google.com with SMTP id f15so180350eak.11
	for <xen-devel@lists.xensource.com>;
	Wed, 22 Jan 2014 22:08:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=cizGTqFVxmHIeMuIeUs4NJDzGEHYGrL8k7GbirrRoz0=;
	b=b7xKIsdWD0Gnrit+ZOnaCiruM0pAg5Lr3+B9zXPeH2mESo8syJO/KSW+LtYaGj1EAk
	CirAoOUY5g9w6PsNcNiynJ7IUveTRtjlFIKPR0pWp6arHC8Wj9VYrWfqoCp/qYuC+3ZJ
	1FHHx/m1rBDve0e+aC/e6w3uKpS28tFadxDnqrhzK+3xMzsxNdadL4hGUbne4AYq6y2o
	vg7je2bSnwdXDy0xzKWjGusION3wrwFHKId5xg4O8pCfMfpDT4Eo6n7os4SsiFeVGejh
	cHxBYv6xLmfwdEqm7s6smpFCjEeV+VL4j4pxCA1BX3KdLrS20t5doCNGZHvrXhGx6If0
	ZYBQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=linux-foundation.org; s=google;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=cizGTqFVxmHIeMuIeUs4NJDzGEHYGrL8k7GbirrRoz0=;
	b=TSFj0s4wqx6bUAwKgBi1rqFcLHmrT/ebt018I9bvAEBx1Nu6oU4AZWfQ9CA39BxY90
	bXv0XwHEq94ja+CHZOMWt8nFt1m8saILRDMmMNYcx7+zstg0W2psYcZsVtAYEWTwFS2J
	4irjywT4vbXDVfAkTVGmYUC5ikFfxuiko6sUE=
MIME-Version: 1.0
X-Received: by 10.14.178.65 with SMTP id e41mr1538503eem.79.1390457296269;
	Wed, 22 Jan 2014 22:08:16 -0800 (PST)
Received: by 10.14.147.206 with HTTP; Wed, 22 Jan 2014 22:08:16 -0800 (PST)
In-Reply-To: <20140122193625.GA8827@phenom.dumpdata.com>
References: <20140122193625.GA8827@phenom.dumpdata.com>
Date: Wed, 22 Jan 2014 22:08:16 -0800
X-Google-Sender-Auth: ZvVwc6qHmyHynmnXRB0EV5mUCdQ
Message-ID: <CA+55aFxQxT37XdRU9aakhwUi5aHWnEJW__ECOz8SmWBxOF9G_Q@mail.gmail.com>
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.14-rc0-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 11:36 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
>  46 files changed, 3379 insertions(+), 2117 deletions(-)

Please fix your script to detect renames - add '-M' to your "git diff
--stat" line (and '--summary' too, for that matter).

The correct statistics are actually

 45 files changed, 1952 insertions(+), 690 deletions(-)
 rename drivers/xen/{events.c => events/events_base.c} (70%)

as rename detection would have shown.

(Rename detection isn't the default for git, because the resulting
diffs aren't applicable by old broken versions of patch. Some day I
might ask Junio to consider making it the default, but in the name of
interoperability that day is years from now.. GNU patch actually does
understand rename diffs, but other tools like diffstat etc still
don't)
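
The effect of those flags can be reproduced in a throwaway repository (a minimal sketch; the file names echo the Xen rename above but the repository and contents are illustrative, not the actual pull request). Note that in modern git, rename detection did eventually become the default (`diff.renames`), so `--no-renames` is used here to show the old behavior explicitly:

```shell
# Demo: without rename detection a moved file counts as a full delete plus a
# full add; with -M it collapses into a single rename line with small counts.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git config user.email demo@example.com
git config user.name demo

echo 'static int events;' > events.c
git add events.c
git commit -q -m 'add events.c'

mkdir events
git mv events.c events/events_base.c
git commit -q -m 'move events.c under events/'

git diff --no-renames --stat --summary HEAD~1 HEAD  # delete + create, inflated counts
git diff -M --stat --summary HEAD~1 HEAD            # rename events.c => events/events_base.c
```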

            Linus

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 06:33:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 06:33:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Dqv-0000J2-3N; Thu, 23 Jan 2014 06:33:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6Dqt-0000Ix-60
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 06:33:23 +0000
Received: from [85.158.137.68:15191] by server-7.bemta-3.messagelabs.com id
	E7/CB-27599-2B7B0E25; Thu, 23 Jan 2014 06:33:22 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390458799!10794424!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24944 invoked from network); 23 Jan 2014 06:33:21 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 06:33:21 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so1389981pdj.15
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 22:33:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=LHmSZf3FlmcLJOLUYIl4QsiOrN3RLKaa/99FQTAbrNw=;
	b=JQdM2W57dkvsNtw/h7SfL0tVhUGbFlvyMsoMVbRjG06LND3AEYSGSBVFNtlXDiGJpi
	3vQfeJz72Ck7RbT6c4cJVa8+WC+iiDxF9ytzTXiamhnkuZffBGfRjSHnXyPm10Mee3tA
	bli22vU0SsYdf3Bk6i+3cHFH8BH+bG0RP4ks8/L2Msx9RWST9JOzrL5OCOOU1gzWuQDa
	2IgMBF1Fa+jgEVi0jkjLZdnXTuI2XlwZNhVKvZP3tUbRILHT9cdgylxsGsgbByP4Jaw3
	DwSGmmU2P1hALIUwQFSjTRKYYYufjyZlhwBNFX7/xxQ5dP8GTIEIlgVQG2Bt7a0t5a9k
	+tHA==
X-Gm-Message-State: ALoCoQmSc47xTuP5fPKuK2dtblpa1wirl+WpTigLBY3m6eIu3UVlMN++AAID/o3auwjlit64v1U+
X-Received: by 10.68.192.131 with SMTP id hg3mr6250621pbc.136.1390458799255;
	Wed, 22 Jan 2014 22:33:19 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id
	xn12sm57708981pac.12.2014.01.22.22.33.16 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 22:33:18 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Thu, 23 Jan 2014 12:03:04 +0530
Message-Id: <1390458785-21862-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH] xen: arm: platforms: Adding reset support for
	xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..bd6223d 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -19,7 +19,9 @@
  */
 
 #include <xen/config.h>
+#include <xen/delay.h>
 #include <asm/platform.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
 static uint32_t xgene_storm_quirks(void)
@@ -107,6 +109,25 @@ err:
     return ret;
 }
 
+/*
+ * TODO: Get base address and mask from the device tree
+ */
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(0x17000014UL, 0x100);
+
+    if ( !addr )
+    {
+        dprintk(XENLOG_ERR, "Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write mask 0x1 to base address */
+    writel(0x1, addr);
+    mdelay(1000);
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +137,7 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 07:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 07:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6EOp-0001vu-Cu; Thu, 23 Jan 2014 07:08:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6EOo-0001uE-KO
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 07:08:26 +0000
Received: from [85.158.143.35:20354] by server-2.bemta-4.messagelabs.com id
	71/AC-11386-9EFB0E25; Thu, 23 Jan 2014 07:08:25 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390460903!202103!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16334 invoked from network); 23 Jan 2014 07:08:24 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 07:08:24 -0000
Received: by mail-pb0-f43.google.com with SMTP id md12so1473142pbc.30
	for <xen-devel@lists.xen.org>; Wed, 22 Jan 2014 23:08:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=UDQfH0pu4BweOytCaCnYi0P6J94Ba0derm/RLJkJ9c8=;
	b=Wpf/Yt2cxyUgM8wsVxz5m8BBmEjaVwpAFQen2Z10j3TPBVaUFc4q06zn7kdOcbEMq/
	FyNSrzEWEe8h1loos3/RrNiHxgo4qB0HBPWr/sCheNUF0BZTFcdh8Cj34xBu7BYMX7lH
	2VYyTtLPCXVvEhiiD9YjUmthiatLb0dEdxGm+1QCMwLs9VuPtHALHAFslqpQeE3Ru7Wp
	M/DLI4ms4qQVio9lTpyPBrZpT1u/+jY+Ai24Tmi6RtpQelxH2auENkcFnrPxs3dRGv7T
	sbLIrP39z/dxRPtl5r7dMWoZAepwHGMmd7xv+Jy7iGAa8VOTOqMjWwuCT2OhWXlxDeaz
	jyrw==
X-Gm-Message-State: ALoCoQn6xjf9NFnlhLvf9T5dxKVT/a9BbgRTUDPdmnZ/XYcqn2LuLn9NY2sR/JkjbZzuAe27Qp4B
X-Received: by 10.68.233.166 with SMTP id tx6mr6306461pbc.165.1390460902898;
	Wed, 22 Jan 2014 23:08:22 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id xs1sm58092543pac.7.2014.01.22.23.08.19
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 22 Jan 2014 23:08:22 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Thu, 23 Jan 2014 12:38:10 +0530
Message-Id: <1390460890-27971-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V2] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

V2:
- Removed unnecessary mdelay in code.
- Added iounmap of the base address.
V1:
- Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..9901bf0 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,6 +20,9 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
 static uint32_t xgene_storm_quirks(void)
@@ -107,6 +110,26 @@ err:
     return ret;
 }
 
+/*
+ * TODO: Get base address and mask from the device tree
+ */
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(0x17000014UL, 0x100);
+
+    if ( !addr )
+    {
+        dprintk(XENLOG_ERR, "Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write mask 0x1 to base address */
+    writel(0x1, addr);
+
+    iounmap(addr);
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +139,7 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 07:31:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 07:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Ekp-0003E7-7E; Thu, 23 Jan 2014 07:31:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6Eko-0003E2-7Y
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 07:31:10 +0000
Received: from [85.158.137.68:18161] by server-13.bemta-3.messagelabs.com id
	F6/00-28603-D35C0E25; Thu, 23 Jan 2014 07:31:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390462266!10824158!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13434 invoked from network); 23 Jan 2014 07:31:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 07:31:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,704,1384300800"; d="scan'208";a="93592246"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 07:31:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 02:31:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6Ekj-0005Jy-A7;
	Thu, 23 Jan 2014 07:31:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6Ekj-0002hE-4o;
	Thu, 23 Jan 2014 07:31:05 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24462-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Jan 2014 07:31:05 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24462: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24462 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24462/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24457

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail like 24456
 test-amd64-i386-xl-win7-amd64  8 guest-saverestore            fail  like 24453
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24457 like 24448

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24457 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24457 never pass

version targeted for testing:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92
baseline version:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 08:20:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 08:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6FVq-00063k-5v; Thu, 23 Jan 2014 08:19:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6FVp-00063f-KE
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 08:19:45 +0000
Received: from [85.158.137.68:30680] by server-16.bemta-3.messagelabs.com id
	B5/68-26128-0A0D0E25; Thu, 23 Jan 2014 08:19:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390465183!9642182!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17310 invoked from network); 23 Jan 2014 08:19:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 08:19:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Jan 2014 08:19:42 +0000
Message-Id: <52E0DEAC020000780011603E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 23 Jan 2014 08:19:40 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
	<20140122211449.GA10426@phenom.dumpdata.com>
In-Reply-To: <20140122211449.GA10426@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.01.14 at 22:14, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Jan 17, 2014 at 03:09:13PM +0000, Jan Beulich wrote:
>> >>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
>> > @@ -323,9 +324,14 @@
>> >   *     For full interoperability, block front and backends should publish
>> >   *     identical ring parameters, adjusted for unit differences, to the
>> >   *     XenStore nodes used in both schemes.
>> > - * (4) Devices that support discard functionality may internally allocate
>> > - *     space (discardable extents) in units that are larger than the
>> > - *     exported logical block size.
>> > + * (4) Devices that support discard functionality may internally allocate 
>> > space
>> > + *     (discardable extents) in units that are larger than the exported 
>> > logical
>> > + *     block size. If the backing device has such discardable extents the
>> > + *     backend must provide both discard-granularity and discard-alignment.
>                     ^^^^ - MAY

I think the intention is to say that these two should go together,
i.e. specifying just one of them is a mistake.

Jan

>> > + *     Backends supporting discard should include discard-granularity and
>                                         ^^^^^ - MAY
>> > + *     discard-alignment even if it supports discarding individual sectors.
>> > + *     Frontends should assume discard-alignment == 0 and discard-granularity 
>> > ==
>> > + *     sector size if these keys are missing.
>> >   * (5) The discard-alignment parameter allows a physical device to be
>> >   *     partitioned into virtual devices that do not necessarily begin or
>> >   *     end on a discardable extent boundary.
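
[The frontend fallback described in the quoted hunk (assume discard-alignment == 0 and discard-granularity == sector size when the XenStore keys are missing) can be sketched as below. This is an illustrative sketch only: discard_params_from_xenstore and its arguments are hypothetical names, not actual blkfront code; the raw_* strings stand in for XenStore reads that return NULL when a key is absent.]

```c
#include <assert.h>
#include <stdlib.h>

struct discard_params {
    unsigned long granularity;  /* bytes */
    unsigned long alignment;    /* bytes */
};

/* Hypothetical helper: derive the discard parameters a frontend should
 * use.  raw_granularity/raw_alignment are the values read from the
 * backend's XenStore nodes, or NULL when the key was not published. */
struct discard_params
discard_params_from_xenstore(const char *raw_granularity,
                             const char *raw_alignment,
                             unsigned long sector_size)
{
    struct discard_params p;

    /* Missing discard-granularity => assume one sector. */
    p.granularity = raw_granularity ? strtoul(raw_granularity, NULL, 10)
                                    : sector_size;
    /* Missing discard-alignment => assume 0. */
    p.alignment = raw_alignment ? strtoul(raw_alignment, NULL, 10) : 0;

    return p;
}
```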



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 08:24:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 08:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6FaC-0006EU-07; Thu, 23 Jan 2014 08:24:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6FaA-0006EO-1c
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 08:24:14 +0000
Received: from [85.158.137.68:22431] by server-8.bemta-3.messagelabs.com id
	CE/52-31081-DA1D0E25; Thu, 23 Jan 2014 08:24:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390465452!9658870!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31380 invoked from network); 23 Jan 2014 08:24:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 08:24:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Jan 2014 08:24:12 +0000
Message-Id: <52E0DFBB0200007800116041@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 23 Jan 2014 08:24:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
In-Reply-To: <20140122214034.GB9460@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 22.01.14 at 22:40, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Wed, Jan 22, 2014 at 12:08:42PM +0000, Jan Beulich wrote:
>> "Fixing the wrong thing" presumably, after taking a closer look at
>> Konrad's second crash: The device in question really appears to
>> be MSI-X capable, yet alloc_pdev() didn't recognize it as such. I
>> wonder whether the capability gets displayed/hidden dynamically
>> based on some other enabling the driver may be doing on the
>> device. In which case we'd need to allocate the structure on
>> demand.
> 
> The device in question (02:00.1) is an SR-IOV 82576:
> 
> 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> Connection (rev 01)
> 02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> Connection (rev 01)
> 
> -bash-4.1# lspci -s 02:00.1 -v | more
> 02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> Connection (rev 01)
>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>         Flags: fast devsel, IRQ 18
>         Memory at f1400000 (32-bit, non-prefetchable) [disabled] [size=128K]
>         Memory at f0800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>         I/O ports at d000 [disabled] [size=32]
>         Memory at f1440000 (32-bit, non-prefetchable) [disabled] [size=16K]
>         Expansion ROM at f0400000 [disabled] [size=4M]
>         Capabilities: [40] Power Management version 3
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
>         Capabilities: [a0] Express Endpoint, MSI 00
>         Capabilities: [100] Advanced Error Reporting
>         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
>         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
>         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
>         Kernel driver in use: pciback
>         Kernel modules: igb

So is this the state with igb never having been bound to the device,
or was igb unbound before the device got handed to pciback? I'm asking
because I'm trying to understand why alloc_pdev() didn't find the
MSI-X capability structure, and I continue to suspect that the
driver may have done something to the device to make it visible.

Jan
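
[The capability scan under discussion — the walk alloc_pdev() would have to do to locate the MSI-X structure — can be sketched as below. This is an illustrative sketch, not Xen's actual code: find_cap and the 256-byte config-space snapshot are assumptions; real code reads config space register by register. The test values mirror the lspci output above (PM at 0x40, MSI at 0x50, MSI-X at 0x70).]

```c
#include <assert.h>
#include <stdint.h>

#define PCI_STATUS          0x06
#define PCI_STATUS_CAP_LIST 0x10
#define PCI_CAPABILITY_LIST 0x34
#define PCI_CAP_ID_MSIX     0x11

/* Walk the standard capability list in a 256-byte snapshot of PCI
 * config space; return the offset of cap_id, or 0 if absent.  A
 * capability the device hides at scan time simply is not in the
 * linked list, which would explain alloc_pdev() not finding MSI-X. */
uint8_t find_cap(const uint8_t cfg[256], uint8_t cap_id)
{
    uint16_t status = cfg[PCI_STATUS] | (cfg[PCI_STATUS + 1] << 8);
    uint8_t pos;
    int guard = 48;  /* bound the walk against malformed/looping lists */

    if (!(status & PCI_STATUS_CAP_LIST))
        return 0;
    for (pos = cfg[PCI_CAPABILITY_LIST] & ~3; pos && guard--;
         pos = cfg[pos + 1] & ~3)
        if (cfg[pos] == cap_id)
            return pos;
    return 0;
}
```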


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 09:12:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 09:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6GKV-00007i-Ng; Thu, 23 Jan 2014 09:12:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W6GKU-00007d-6W
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 09:12:06 +0000
Received: from [85.158.137.68:54347] by server-15.bemta-3.messagelabs.com id
	63/5C-11556-5ECD0E25; Thu, 23 Jan 2014 09:12:05 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390468324!10801716!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9919 invoked from network); 23 Jan 2014 09:12:04 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 09:12:04 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so274838eaj.12
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 01:12:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=G4SANsO4mbLCQBRnpS2dzxsl0UsgVn5Y3nyI5zVk6Q8=;
	b=k6E1M90QsAHk3y/Bc02e3aXNEbCdGC2a0wWMqG539SKpWqFVZeweUR+vxf2OGgx3to
	UGyyBANGs/Da/VNIrtyn4fMTjBrkDWjZ1sI7anwAVLgXn2Rmxsfm8DfD9x7x9AV+rgFo
	xHmRFlZY0Ri0r47hlb/BYTXkPYH2c8v8FK+b1LBpfKc7RDssYjdA+mCZZx4npHfaIRfp
	+6s62bfwomCYT+NkwUByT519hySBuKyRp2W0VOz4O3r8hjIZaL/S/7Fj9f22E4pgtMVD
	PGYvE3G0qbcNhyu8Oqg6STszAJmOzWbWSrGFBa4/0YoNH2uFOkEQ+NGa7y/Xzxri1SP7
	ppIw==
X-Received: by 10.14.199.199 with SMTP id x47mr2297700een.78.1390468324532;
	Thu, 23 Jan 2014 01:12:04 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.dsl.vodafone.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id 46sm37192509ees.4.2014.01.23.01.12.00
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 01:12:02 -0800 (PST)
Message-ID: <52E0DCDD.60405@redhat.com>
Date: Thu, 23 Jan 2014 10:11:57 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <CAFEAcA-X9=aPPgM+ctT-ytAYE55xNtX9NCo_vqnna_CWCxka4w@mail.gmail.com>	<CA+aC4ksgVXk2pfS9tBX0AGomKzYoF6TwMOugFRo4QL-7_bE5yA@mail.gmail.com>	<alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>	<52CAEF54.7030901@suse.de>
	<52CB17D1.2060400@redhat.com>	<20140107123417.GG10654@zion.uk.xensource.com>	<52CC01F6.6050502@redhat.com>	<20140121182745.GA23328@zion.uk.xensource.com>	<52DF9B76.8060807@redhat.com>
	<20140122160940.GC24675@zion.uk.xensource.com>
In-Reply-To: <20140122160940.GC24675@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	=?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 22/01/2014 17:09, Wei Liu ha scritto:
> On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
>> Il 21/01/2014 19:27, Wei Liu ha scritto:
>>>>>
>>>>> Googling "disable tcg" would have provided an answer, but the patches
>>>>> were old enough to be basically useless.  I'll refresh the current
>>>>> version in the next few days.  Currently I am (or try to be) on
>>>>> vacation, so I cannot really say when, but I'll do my best. :)
>>>>>
>>> Hi Paolo, any update?
>>
>> Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
>> branch on my github repository.
>>
>
> Unfortunately your branch didn't work when I enabled TCG support. If I
> use "--disable-tcg" with configure then it works fine.

Branch fixed.

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 09:32:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 09:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6GeH-00017L-7w; Thu, 23 Jan 2014 09:32:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W6GeF-00017G-Qg
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 09:32:32 +0000
Received: from [193.109.254.147:13076] by server-3.bemta-14.messagelabs.com id
	5F/85-11000-FA1E0E25; Thu, 23 Jan 2014 09:32:31 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390469548!12672493!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28744 invoked from network); 23 Jan 2014 09:32:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 09:32:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0N9WPLs013780
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 09:32:26 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N9WMEj027889
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 09:32:23 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0N9WMD9028676; Thu, 23 Jan 2014 09:32:22 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Jan 2014 01:32:22 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Thu, 23 Jan 2014 09:36:51 +0800
Message-Id: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: annie.li@oracle.com, david.vrabel@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net-next v3] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

This patch removes the grant transfer code from netfront, and improves the
mechanism for ending grant access, since gnttab_end_foreign_access_ref may
fail when the grant entry is currently in use for reading or writing.

* Release the grant reference and skb for the tx/rx paths; use get_page/put_page
to ensure the page is released only when grant access has completed successfully.
* Change the corresponding code in xen-blkfront/xen-tpmfront/xen-pcifront to
match the put_page change in gnttab_end_foreign_access.
* Clean up grant transfer code kept from the old (2.6.18) netfront, which
granted pages for access/map and transfer. Grant transfer is deprecated in
the current netfront, so remove the corresponding release code for transfer.

V3: Changes as suggested by David Vrabel; ensure pages are not freed until
grant access is ended.

V2: improve patch comments.
Signed-off-by: Annie Li <annie.li@oracle.com>
---
 drivers/block/xen-blkfront.c    |   25 ++++++++---
 drivers/char/tpm/xen-tpmfront.c |    7 +++-
 drivers/net/xen-netfront.c      |   93 ++++++++++++--------------------------
 drivers/pci/xen-pcifront.c      |    7 +++-
 drivers/xen/grant-table.c       |    4 +-
 5 files changed, 63 insertions(+), 73 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..c300bfd 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -918,6 +918,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	struct grant *persistent_gnt;
 	struct grant *n;
 	int i, j, segs;
+	struct page *page;
 
 	/* Prevent new requests being issued until we fix things up. */
 	spin_lock_irq(&info->io_lock);
@@ -932,13 +933,16 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		list_for_each_entry_safe(persistent_gnt, n,
 		                         &info->grants, node) {
 			list_del(&persistent_gnt->node);
+			page = pfn_to_page(persistent_gnt->pfn);
 			if (persistent_gnt->gref != GRANT_INVALID_REF) {
+				get_page(page);
 				gnttab_end_foreign_access(persistent_gnt->gref,
-				                          0, 0UL);
+							  0,
+							 (unsigned long)page_address(page));
 				info->persistent_gnts_c--;
 			}
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}
 	}
@@ -971,9 +975,12 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 		       info->shadow[i].req.u.rw.nr_segments;
 		for (j = 0; j < segs; j++) {
 			persistent_gnt = info->shadow[i].grants_used[j];
-			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+			page = pfn_to_page(persistent_gnt->pfn);
+			get_page(page);
+			gnttab_end_foreign_access(persistent_gnt->gref, 0,
+						 (unsigned long)page_address(page));
 			if (info->feature_persistent)
-				__free_page(pfn_to_page(persistent_gnt->pfn));
+				__free_page(page);
 			kfree(persistent_gnt);
 		}
 
@@ -986,8 +993,11 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 
 		for (j = 0; j < INDIRECT_GREFS(segs); j++) {
 			persistent_gnt = info->shadow[i].indirect_grants[j];
-			gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
-			__free_page(pfn_to_page(persistent_gnt->pfn));
+			page = pfn_to_page(persistent_gnt->pfn);
+			get_page(page);
+			gnttab_end_foreign_access(persistent_gnt->gref, 0,
+						 (unsigned long)page_address(page));
+			__free_page(page);
 			kfree(persistent_gnt);
 		}
 
@@ -1009,8 +1019,11 @@ free_shadow:
 
 	/* Free resources associated with old device channel. */
 	if (info->ring_ref != GRANT_INVALID_REF) {
+		page = virt_to_page(info->ring.sring);
+		get_page(page);
 		gnttab_end_foreign_access(info->ring_ref, 0,
 					  (unsigned long)info->ring.sring);
+		__free_page(page);
 		info->ring_ref = GRANT_INVALID_REF;
 		info->ring.sring = NULL;
 	}
diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
index c8ff4df..3e83585 100644
--- a/drivers/char/tpm/xen-tpmfront.c
+++ b/drivers/char/tpm/xen-tpmfront.c
@@ -312,9 +312,14 @@ static void ring_free(struct tpm_private *priv)
 	if (!priv)
 		return;
 
-	if (priv->ring_ref)
+	if (priv->ring_ref) {
+		struct page *page;
+		page = virt_to_page(priv->shr);
+		get_page(page);
 		gnttab_end_foreign_access(priv->ring_ref, 0,
 				(unsigned long)priv->shr);
+		__free_page(page);
+	}
 	else
 		free_page((unsigned long)priv->shr);
 
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d7bee8a..a22adaa 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -117,6 +117,7 @@ struct netfront_info {
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
@@ -1439,8 +1403,11 @@ static int netfront_probe(struct xenbus_device *dev,
 static void xennet_end_access(int ref, void *page)
 {
 	/* This frees the page as a side-effect */
-	if (ref != GRANT_INVALID_REF)
+	if (ref != GRANT_INVALID_REF) {
+		get_page(virt_to_page(page));
 		gnttab_end_foreign_access(ref, 0, (unsigned long)page);
+		free_page((unsigned long)page);
+	}
 }
 
 static void xennet_disconnect_backend(struct netfront_info *info)
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index f7197a7..ed732e5 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -756,9 +756,14 @@ static void free_pdev(struct pcifront_device *pdev)
 	if (pdev->evtchn != INVALID_EVTCHN)
 		xenbus_free_evtchn(pdev->xdev, pdev->evtchn);
 
-	if (pdev->gnt_ref != INVALID_GRANT_REF)
+	if (pdev->gnt_ref != INVALID_GRANT_REF) {
+		struct page *page;
+		page = virt_to_page(pdev->sh_info);
+		get_page(page);
 		gnttab_end_foreign_access(pdev->gnt_ref, 0 /* r/w page */,
 					  (unsigned long)pdev->sh_info);
+		__free_page(page);
+	}
 	else
 		free_page((unsigned long)pdev->sh_info);
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..b64a32e 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -504,7 +504,7 @@ static void gnttab_handle_deferred(unsigned long unused)
 			if (entry->page) {
 				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
 					 entry->ref, page_to_pfn(entry->page));
-				__free_page(entry->page);
+				put_page(entry->page);
 			} else
 				pr_info("freeing g.e. %#x\n", entry->ref);
 			kfree(entry);
@@ -560,7 +560,7 @@ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
 	if (gnttab_end_foreign_access_ref(ref, readonly)) {
 		put_free_entry(ref);
 		if (page != 0)
-			free_page(page);
+			put_page(virt_to_page(page));
 	} else
 		gnttab_add_deferred(ref, readonly,
 				    page ? virt_to_page(page) : NULL);
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
@@ -1439,8 +1403,11 @@ static int netfront_probe(struct xenbus_device *dev,
 static void xennet_end_access(int ref, void *page)
 {
 	/* This frees the page as a side-effect */
-	if (ref != GRANT_INVALID_REF)
+	if (ref != GRANT_INVALID_REF) {
+		get_page(virt_to_page(page));
 		gnttab_end_foreign_access(ref, 0, (unsigned long)page);
+		free_page((unsigned long)page);
+	}
 }
 
 static void xennet_disconnect_backend(struct netfront_info *info)
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index f7197a7..ed732e5 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -756,9 +756,14 @@ static void free_pdev(struct pcifront_device *pdev)
 	if (pdev->evtchn != INVALID_EVTCHN)
 		xenbus_free_evtchn(pdev->xdev, pdev->evtchn);
 
-	if (pdev->gnt_ref != INVALID_GRANT_REF)
+	if (pdev->gnt_ref != INVALID_GRANT_REF) {
+		struct page *page;
+		page = virt_to_page(pdev->sh_info);
+		get_page(page);
 		gnttab_end_foreign_access(pdev->gnt_ref, 0 /* r/w page */,
 					  (unsigned long)pdev->sh_info);
+		__free_page(page);
+	}
 	else
 		free_page((unsigned long)pdev->sh_info);
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..b64a32e 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -504,7 +504,7 @@ static void gnttab_handle_deferred(unsigned long unused)
 			if (entry->page) {
 				pr_debug("freeing g.e. %#x (pfn %#lx)\n",
 					 entry->ref, page_to_pfn(entry->page));
-				__free_page(entry->page);
+				put_page(entry->page);
 			} else
 				pr_info("freeing g.e. %#x\n", entry->ref);
 			kfree(entry);
@@ -560,7 +560,7 @@ void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
 	if (gnttab_end_foreign_access_ref(ref, readonly)) {
 		put_free_entry(ref);
 		if (page != 0)
-			free_page(page);
+			put_page(virt_to_page(page));
 	} else
 		gnttab_add_deferred(ref, readonly,
 				    page ? virt_to_page(page) : NULL);
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 09:34:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 09:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Gfi-0001Dl-Qd; Thu, 23 Jan 2014 09:34:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6Gfg-0001De-BX
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 09:34:00 +0000
Received: from [85.158.139.211:62883] by server-14.bemta-5.messagelabs.com id
	E9/9A-24200-702E0E25; Thu, 23 Jan 2014 09:33:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390469637!8750483!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7393 invoked from network); 23 Jan 2014 09:33:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 09:33:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95663421"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 09:33:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 04:33:55 -0500
Message-ID: <1390469635.24595.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 23 Jan 2014 09:33:55 +0000
In-Reply-To: <alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > +{
> > +    DECLARE_DOMCTL;
> > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > +    domctl.domain = (domid_t)domid;
> > +    domctl.u.cacheflush.start_mfn = 0;
> > +    return do_domctl(xch, &domctl);
> > +}
> 
> Do we really need to flush the entire p2m, or just things we have
> written to?

I think we need to flush everything (well, all RAM-backed pages; the
patch skips everything else).

Even things which we haven't explicitly written to will have been
scrubbed and therefore have scrubbed data in the cache but data
belonging to the previous owner in actual RAM. So we would really want
to clean in that case too.

We could do the clean at scrub time, which would arguably be better
anyway and would potentially allow us to only invalidate, instead of
clean+invalidate, some subset of pages; but we would need to track which
sort of page was which -- e.g. with a special p2m type for a page which
had been foreign mapped, or some other bit of metadata.

> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 85ca330..f35ed57 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -228,15 +228,26 @@ enum p2m_operation {
> >      ALLOCATE,
> >      REMOVE,
> >      RELINQUISH,
> > +    CACHEFLUSH,
> >  };
> >  
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
> 
> A pity that we need to map a page just to flush the dcache.  It could be
> expensive, especially if we really have to map every single guest mfn.

Remember that this is basically free for arm64 and for arm32 we actually
map 2MB regions and cache, so it is only actually one map per 2MB
region.
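[Editorial note: the "one map per 2MB region" point can be made concrete with a hypothetical userspace sketch. The constants and the maps_needed() helper are illustrative, not Xen code; the sketch just counts how many map operations a flush of an address range costs at 2MB mapping granularity rather than per 4K page.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  4096UL
#define CHUNK_SIZE (2UL * 1024 * 1024)   /* 2MB mapping granularity */

/* Count how many map operations are needed to flush [start, end):
 * one per 2MB-aligned chunk covering the range, not one per page. */
static unsigned long maps_needed(uint64_t start, uint64_t end)
{
    uint64_t first = start & ~(uint64_t)(CHUNK_SIZE - 1);
    uint64_t last  = (end + CHUNK_SIZE - 1) & ~(uint64_t)(CHUNK_SIZE - 1);
    return (unsigned long)((last - first) / CHUNK_SIZE);
}
```

So flushing 8MB of guest RAM costs four maps rather than 2048, which is why the per-page map_domain_page() in the quoted patch is cheaper than it looks on arm32.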

> I wonder if we could use DCCSW instead.

There is no way to use this per VMID, so we would have to blow away the
entire cache. DCCSW is also very tricky to use in an SMP system; you
might need to do some sort of stop-machine trick (although perhaps for
this use case we know the tools in dom0 have only a single thread
touching the foreign memory and the guest itself isn't running). I'd
need to think very carefully about that case, but since it involves
flushing the entire cache I'm not inclined to go down that path in the
first place.

> > +        case RELINQUISH:
> > +        case CACHEFLUSH:
> > +            if (count >= 0x2000 && hypercall_preempt_check() )
> >              {
> >                  p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
> 
> If we are taking this code path for cache flushes, then we should rename
> next_gfn_to_relinquish to something more generic.

Yes, this was just a proof of concept so I didn't bother, but really
this is minimum_mapped_p2m.

create_p2m_entries should also be walk_p2m_entries or some such.
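[Editorial note: the preemption scheme in the quoted hunk can be modelled roughly as below. This is a hypothetical userspace sketch; walk_p2m(), resume_gfn and PREEMPT_BATCH stand in for the real p2m walker, next_gfn_to_relinquish and the 0x2000 constant.]

```c
#include <assert.h>
#include <stdbool.h>

#define PREEMPT_BATCH 0x2000

/* Walk gfns [*resume_gfn, nr_gfns); after PREEMPT_BATCH entries,
 * record the resume point and bail out so the hypercall can be
 * restarted (mirrors the count >= 0x2000 preempt check above). */
static bool walk_p2m(unsigned long *resume_gfn, unsigned long nr_gfns,
                     unsigned long *flushed)
{
    unsigned long count = 0;

    for (unsigned long gfn = *resume_gfn; gfn < nr_gfns; gfn++) {
        (*flushed)++;                  /* do_one_cacheflush() would go here */
        if (++count >= PREEMPT_BATCH && gfn + 1 < nr_gfns) {
            *resume_gfn = gfn + 1;     /* continue from here on restart */
            return false;              /* caller reissues the hypercall */
        }
    }
    *resume_gfn = nr_gfns;
    return true;
}
```

Each restart picks up from the recorded gfn, which is why a name like minimum_mapped_p2m (or anything not specific to relinquish) reads better than next_gfn_to_relinquish for the shared resume field.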

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 09:57:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 09:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6H1p-0002N6-IP; Thu, 23 Jan 2014 09:56:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6H1o-0002N1-Ec
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 09:56:52 +0000
Received: from [85.158.143.35:65463] by server-3.bemta-4.messagelabs.com id
	63/C4-32360-367E0E25; Thu, 23 Jan 2014 09:56:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390471009!256251!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16590 invoked from network); 23 Jan 2014 09:56:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 09:56:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95668062"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 09:56:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 04:56:48 -0500
Message-ID: <1390471008.24595.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Thu, 23 Jan 2014 09:56:48 +0000
In-Reply-To: <52E016CD.4020007@citrix.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<52E016CD.4020007@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 19:06 +0000, Julien Grall wrote:
> On 01/22/2014 04:35 PM, Ian Campbell wrote:
> > Julien,
> 
> Hi Ian,
> 
> > I wonder if the following is any better than the current stuff in
> > staging for the issue you are seeing with BSD at start of day? Can you
> > try it please.
> 
> Thanks for the patch! It allows me to boot FreeBSD correctly (ie with
> Write-Through for the first page table) on Midway.

Perfect. I'm inclined to clean this up and put it forward for 4.4
then.

> > It has survived >1000 bootloops on Midway and >50 on Mustang, both are
> > still going.
> > 
> > It basically does a cache clean on all RAM mapped in the p2m. Anything
> > in the cache is either the result of an earlier scrub of the page or
> > something toolstack just wrote, so there is no need to be concerned
> > about clean vs. invalidate -- clean is always correct.
> 
> I don't remember what was the conclusion... is it necessary to flush all
> the RAM? Flushing the Kernel/initrd/DTB space should be enough.

See my reply to Stefano -- we need to be concerned about scrubbed page
data in the cache which is masking actual data from the previous owner.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 10:05:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6H9r-0002zm-RC; Thu, 23 Jan 2014 10:05:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W6D1o-0006SD-3O
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 05:40:37 +0000
Received: from [85.158.139.211:60883] by server-4.bemta-5.messagelabs.com id
	6A/4C-26791-35BA0E25; Thu, 23 Jan 2014 05:40:35 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390455624!11417579!1
X-Originating-IP: [98.139.213.152]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1858 invoked from network); 23 Jan 2014 05:40:26 -0000
Received: from nm11-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm11-vm1.bullet.mail.bf1.yahoo.com) (98.139.213.152)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 05:40:26 -0000
Received: from [98.139.212.153] by nm11.bullet.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
Received: from [98.139.211.194] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
Received: from [127.0.0.1] by smtp203.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390455624; bh=r178wKMTvMCG1w4fLhHm3rcnH8TjvXozn4kCxsHcNew=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=waiLKGIOZ3MNGqaRI+R3gJloaN0gCNHHm03FUqi+QB/vvVmxec3C2fA780+97SvrO9Uu9XzfLMrhPyz4fCGD2L3Vie6N5KKB/AByXJbwN6PxRyvDqTuOmlNUN17EZOGsMlqS+QtRqJ1Dx2MlcQEZTZxsxjzUK0JNz/1Ym1lLQwU=
X-Yahoo-Newman-Id: 41882.48820.bm@smtp203.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 0Hs.7xcVM1n1lhqa7R4Wwf.BSG_h1IpTz38zQ464i8hx6Oi
	nLyeLljOQ2a7AXsC5kejbr5Z2NDuOyG5d23NYoD4cg3Vazq35L4r_lCipsVE
	qQZYupn4H7pTeT5qDX4EI__z7qE30m0V43uSKPcCmI1gcUzNKPNNTTWHjLPI
	w8q_Rr5kKk.frsznylGfIT9pZQBKq4jjLkjkqwN5nDN3RNFqQ5px9E78Y21_
	k55JYtJX3EFv_qnABFgFolUBToN9sSL8jD45usc3KQR6SW63_ua._wRhrYDf
	D8V1Xw_TQ_E.7eMFDgmagMvqTlLZO5HSA507x2TMG04qa6npATIwPfUmkAJK
	ecSSVPiGk5DreUTVdgrNjqa5019Ygu2vkM5apjjqydYmhtPbc9Ubn3tozjUp
	gF6NrYHTK3Rn4czH4UACwqOFdLrmHhF0o3ypfnEO5Sk_RVfoffrRXUHIr9Js
	88F1Zai0XCCNNYTq1U7Kgut1TKIWg7oi7wgapzg_QwNEKUpN3HQ3WMb0XFJF
	tp0fWbPjlucY6EQa1v.UGdW7jnKwUMqDG6iTzY5Gtq8ZS3NgXEeqGf.zpYrc
	L2k4w2zeI4KdWyFZhreZVh_ePWZLoNk647hfRXQw93WeK
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp203.mail.bf1.yahoo.com with SMTP; 22 Jan 2014 21:40:23 -0800 PST
Message-ID: <1390455621.2415.56.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 22 Jan 2014 22:40:21 -0700
In-Reply-To: <52DF8FB90200007800115AEC@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-HokMqAwF1mZkmjLTPer/"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Thu, 23 Jan 2014 10:05:09 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-HokMqAwF1mZkmjLTPer/
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On Wed, 2014-01-22 at 08:30 +0000, Jan Beulich wrote:
> >>> On 21.01.14 at 11:39, "Eric Houby" <ehouby@yahoo.com> wrote:
> > Xen console logs are attached.
> 
> Thanks. This
> 
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... works.
> 
> is the first thing you need to deal with. You may want to check
> how a native kernel handles that (they have some DMI based
> stuff in place to automatically deal with broken firmware, which
> on Xen requires command line overrides like
> "acpi_skip_timer_override").
> 
> That alone may already explain
> 
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> 
> since these come from the HPET (which may be generating legacy
> timer interrupts):
> 
> (XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
> 
> The final crash could easily be a consequence of the earlier
> problems. In any event you ought to make sure you run up to
> date firmware on your system, as there clearly are issues with it.
> 
> What would also be interesting to know is whether any older Xen
> version ever booted okay on that system.
> 
> Jan
> 

Jan,

The system I am testing with is a GA-890FXA-UD5, and it had been doing
secondary VGA passthrough to Win7/8.1 guests since mid 2010.  I believe
it was XSA-36 that was not friendly to the board, and I have not been
able to get this functionality working since xen 4.1.3.  In mid 2010,
when I last tested the available BIOSes, I found only one version that
supported IOMMU with Xen.  Since then, one more BIOS version has been
released.  It may be worth loading, but my confidence in it fixing this
issue is low.

Adding acpi_skip_timer_override to the xen command line did allow Dom0
to boot, but the video display for Dom0 did not work.  The following was
still seen in the logs.

(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address
= 0xfdf8020140, flags = 0x8


Including iommu=no-amd-iommu-perdev-intremap along with
acpi_skip_timer_override cleared the above errors and Dom0 video was
functional.  Attempts to pass through a secondary VGA to a Win8.1 guest
were not successful, with these logs in the xen console:

(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault
address = 0x11620df40, flags = 0

Attached are two xen console boot logs: one with just
acpi_skip_timer_override on the xen command line, and the other with
both acpi_skip_timer_override and iommu=no-amd-iommu-perdev-intremap.
The long file names I used make it clear which is which.  The log with
both command-line options also includes booting the Win8.1 guest with
secondary VGA passthrough.
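For anyone wanting to reproduce the second configuration, here is a
sketch of how the two hypervisor options can be set via /etc/default/grub
on a GRUB2-based distro such as the Fedora 20 install used here — the
option names are the ones from this thread, but the file path and
mkconfig invocation are illustrative and may differ per distro:

```shell
# /etc/default/grub -- pass both workarounds to the Xen hypervisor
# (sketch; GRUB_CMDLINE_XEN_DEFAULT is consumed by the 20_linux_xen
#  grub.d script when generating Xen boot entries)
GRUB_CMDLINE_XEN_DEFAULT="acpi_skip_timer_override iommu=no-amd-iommu-perdev-intremap"

# Then regenerate the GRUB configuration, e.g. on Fedora:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```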

I tried to see how a native kernel handles the timer bug you mentioned,
but I could not find a corresponding log when booting 3.12.7 on bare
hardware.  That boot log is attached as barehw.txt; maybe there is
something you can find.

Thanks,

Eric

--=-HokMqAwF1mZkmjLTPer/
Content-Disposition: attachment; filename="barehw.txt"
Content-Type: text/plain; name="barehw.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Jan 22 19:48:20 astar kernel: [    0.000000] Initializing cgroup subsys cpu
Jan 22 19:48:20 astar kernel: [    0.000000] Initializing cgroup subsys cpuacct
Jan 22 19:48:20 astar kernel: [    0.000000] Linux version 3.12.7-300.fc20.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 4.8.2 20131212 (Red Hat 4.8.2-7) (GCC) ) #1 SMP Fri Jan 10 15:35:31 UTC 2014
Jan 22 19:48:20 astar kernel: [    0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.12.7-300.fc20.x86_64 root=/dev/mapper/f20_astar-root ro rd.lvm.lv=f20_astar/root rd.lvm.lv=f20_astar/swap vconsole.font=latarcyrheb-sun16 rhgb quiet
Jan 22 19:48:20 astar kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000096fff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000afceffff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afcf0000-0x00000000afcf0fff] ACPI NVS
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afcf1000-0x00000000afcfffff] ACPI data
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afd00000-0x00000000afdfffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000ffffffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000024fffffff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] NX (Execute Disable) protection: active
Jan 22 19:48:20 astar kernel: [    0.000000] SMBIOS 2.4 present.
Jan 22 19:48:20 astar kernel: [    0.000000] No AGP bridge found
Jan 22 19:48:20 astar kernel: [    0.000000] e820: last_pfn = 0x250000 max_arch_pfn = 0x400000000
Jan 22 19:48:20 astar kernel: [    0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Jan 22 19:48:20 astar kernel: [    0.000000] e820: last_pfn = 0xafcf0 max_arch_pfn = 0x400000000
Jan 22 19:48:20 astar kernel: [    0.000000] found SMP MP-table at [mem 0x000f4670-0x000f467f] mapped at [ffff8800000f4670]
Jan 22 19:48:20 astar kernel: [    0.000000] Using GB pages for direct mapping
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x24fe00000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x24c000000-0x24fdfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x200000000-0x24bffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x00100000-0xafceffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x100000000-0x1ffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] RAMDISK: [mem 0x3697c000-0x374b5fff]
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: RSDP 00000000000f6080 00014 (v00 GBT   )
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: RSDT 00000000afcf1000 00040 (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: FACP 00000000afcf1080 00074 (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: DSDT 00000000afcf1100 07BE1 (v01 GBT    GBTUACPI 00001000 MSFT 03000000)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: FACS 00000000afcf0000 00040
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: HPET 00000000afcf8dc0 00038 (v01 GBT    GBTUACPI 42302E31 GBTU 00000098)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: MCFG 00000000afcf8e00 0003C (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: MATS 00000000afcf8e40 00034 (v01 GBT             00000000      00000000)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: TAMG 00000000afcf8eb0 0048A (v01 GBT    GBT   B0 5455312E BG?? 53450101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: APIC 00000000afcf8d00 000BC (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: IVRS 00000000afcf93b0 000D8 (v01  AMD     RD890S 00202031 AMD  00000000)
Jan 22 19:48:20 astar kernel: [    0.000000] Scanning NUMA topology in Northbridge 24
Jan 22 19:48:20 astar kernel: [    0.000000] No NUMA configuration found
Jan 22 19:48:20 astar kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000024fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Initmem setup node 0 [mem 0x00000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   NODE_DATA [mem 0x24ffea000-0x24fffdfff]
Jan 22 19:48:20 astar kernel: [    0.000000] Zone ranges:
Jan 22 19:48:20 astar kernel: [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   Normal   [mem 0x100000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Movable zone start for each node
Jan 22 19:48:20 astar kernel: [    0.000000] Early memory node ranges
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x00001000-0x00096fff]
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x00100000-0xafceffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x100000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x808
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
Jan 22 19:48:20 astar kernel: [    0.000000] IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
Jan 22 19:48:20 astar kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: HPET id: 0x10b9a201 base: 0xfed00000
Jan 22 19:48:20 astar kernel: [    0.000000] smpboot: Allowing 8 CPUs, 2 hotplug CPUs
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00097000-0x0009ffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafcf0000-0xafcf0fff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafcf1000-0xafcfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafd00000-0xafdfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafe00000-0xdfffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xe0000000-0xefffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xf0000000-0xfebfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfec00000-0xffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] e820: [mem 0xafe00000-0xdfffffff] available for PCI devices
Jan 22 19:48:20 astar kernel: [    0.000000] Booting paravirtualized kernel on bare hardware
Jan 22 19:48:20 astar kernel: [    0.000000] setup_percpu: NR_CPUS:1024 nr_cpumask_bits:1024 nr_cpu_ids:8 nr_node_ids:1
Jan 22 19:48:20 astar kernel: [    0.000000] PERCPU: Embedded 29 pages/cpu @ffff88024fc00000 s86784 r8192 d23808 u262144
Jan 22 19:48:20 astar kernel: [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 2063485
Jan 22 19:48:20 astar kernel: [    0.000000] Policy zone: Normal
Jan 22 19:48:20 astar kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.12.7-300.fc20.x86_64 root=/dev/mapper/f20_astar-root ro rd.lvm.lv=f20_astar/root rd.lvm.lv=f20_astar/swap vconsole.font=latarcyrheb-sun16 rhgb quiet
Jan 22 19:48:20 astar kernel: [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
Jan 22 19:48:20 astar kernel: [    0.000000] Checking aperture...
Jan 22 19:48:20 astar kernel: [    0.000000] No AGP bridge found
Jan 22 19:48:20 astar kernel: [    0.000000] Node 0: aperture @ 7822000000 size 32 MB
Jan 22 19:48:20 astar kernel: [    0.000000] Aperture beyond 4GB. Ignoring.
Jan 22 19:48:20 astar kernel: [    0.000000] Your BIOS doesn't leave a aperture memory hole
Jan 22 19:48:20 astar kernel: [    0.000000] Please enable the IOMMU option in the BIOS setup
Jan 22 19:48:20 astar kernel: [    0.000000] This costs you 64 MB of RAM
Jan 22 19:48:20 astar kernel: [    0.000000] Mapping aperture over 65536 KB of RAM @ a4000000
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xa4000000-0xa7ffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Memory: 8093132K/8385048K available (6634K kernel code, 1044K rwdata, 2944K rodata, 1444K init, 1672K bss, 291916K reserved)
Jan 22 19:48:20 astar kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 19:48:20 astar kernel: [    0.000000] Hierarchical RCU implementation.
Jan 22 19:48:20 astar kernel: [    0.000000] 	RCU restricting CPUs from NR_CPUS=1024 to nr_cpu_ids=8.
Jan 22 19:48:20 astar kernel: [    0.000000] NR_IRQS:65792 nr_irqs:744 16
Jan 22 19:48:20 astar kernel: [    0.000000] Console: colour VGA+ 80x25
Jan 22 19:48:20 astar kernel: [    0.000000] console [tty0] enabled
Jan 22 19:48:20 astar kernel: [    0.000000] allocated 33554432 bytes of page_cgroup
Jan 22 19:48:20 astar kernel: [    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
Jan 22 19:48:20 astar kernel: [    0.000000] tsc: Fast TSC calibration using PIT
Jan 22 19:48:20 astar kernel: [    0.001000] tsc: Detected 2813.003 MHz processor
Jan 22 19:48:20 astar kernel: [    0.000003] Calibrating delay loop (skipped), value calculated using timer frequency.. 5626.00 BogoMIPS (lpj=2813003)
Jan 22 19:48:20 astar kernel: [    0.000006] pid_max: default: 32768 minimum: 301
Jan 22 19:48:20 astar kernel: [    0.000030] Security Framework initialized
Jan 22 19:48:20 astar kernel: [    0.000038] SELinux:  Initializing.
Jan 22 19:48:20 astar kernel: [    0.000567] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Jan 22 19:48:20 astar kernel: [    0.003040] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
Jan 22 19:48:20 astar kernel: [    0.004167] Mount-cache hash table entries: 256
Jan 22 19:48:20 astar kernel: [    0.004352] Initializing cgroup subsys memory
Jan 22 19:48:20 astar kernel: [    0.004364] Initializing cgroup subsys devices
Jan 22 19:48:20 astar kernel: [    0.004366] Initializing cgroup subsys freezer
Jan 22 19:48:20 astar kernel: [    0.004368] Initializing cgroup subsys net_cls
Jan 22 19:48:20 astar kernel: [    0.004369] Initializing cgroup subsys blkio
Jan 22 19:48:20 astar kernel: [    0.004371] Initializing cgroup subsys perf_event
Jan 22 19:48:20 astar kernel: [    0.004373] Initializing cgroup subsys hugetlb
Jan 22 19:48:20 astar kernel: [    0.004393] CPU: Physical Processor ID: 0
Jan 22 19:48:20 astar kernel: [    0.004394] CPU: Processor Core ID: 0
Jan 22 19:48:20 astar kernel: [    0.004396] mce: CPU supports 6 MCE banks
Jan 22 19:48:20 astar kernel: [    0.004400] LVT offset 0 assigned for vector 0xf9
Jan 22 19:48:20 astar kernel: [    0.004404] process: using AMD E400 aware idle routine
Jan 22 19:48:20 astar kernel: [    0.004406] Last level iTLB entries: 4KB 512, 2MB 16, 4MB 8
Jan 22 19:48:20 astar kernel: [    0.004406] Last level dTLB entries: 4KB 512, 2MB 128, 4MB 64
Jan 22 19:48:20 astar kernel: [    0.004406] tlb_flushall_shift: 4
Jan 22 19:48:20 astar kernel: [    0.004478] Freeing SMP alternatives memory: 24K (ffffffff81e70000 - ffffffff81e76000)
Jan 22 19:48:20 astar kernel: [    0.005123] ACPI: Core revision 20130725
Jan 22 19:48:20 astar kernel: [    0.006629] ACPI: All ACPI Tables successfully acquired
Jan 22 19:48:20 astar kernel: [    0.052536] ftrace: allocating 25533 entries in 100 pages
Jan 22 19:48:20 astar kernel: [    0.455073] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
Jan 22 19:48:20 astar kernel: [    0.455118] AMD-Vi: Disabling interrupt remapping
Jan 22 19:48:20 astar kernel: [    0.455509] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 22 19:48:20 astar kernel: [    0.465511] smpboot: CPU0: AMD Phenom(tm) II X6 1055T Processor (fam: 10, model: 0a, stepping: 00)
Jan 22 19:48:20 astar kernel: [    0.566608] Performance Events: AMD PMU driver.
Jan 22 19:48:20 astar kernel: [    0.566611] ... version:                0
Jan 22 19:48:20 astar kernel: [    0.566612] ... bit width:              48
Jan 22 19:48:20 astar kernel: [    0.566612] ... generic registers:      4
Jan 22 19:48:20 astar kernel: [    0.566613] ... value mask:             0000ffffffffffff
Jan 22 19:48:20 astar kernel: [    0.566614] ... max period:             00007fffffffffff
Jan 22 19:48:20 astar kernel: [    0.566615] ... fixed-purpose events:   0
Jan 22 19:48:20 astar kernel: [    0.566616] ... event mask:             000000000000000f
Jan 22 19:48:20 astar kernel: [    0.567541] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
Jan 22 19:48:20 astar kernel: [    0.580768] process: System has AMD C1E enabled
Jan 22 19:48:20 astar kernel: [    0.580781] process: Switch to broadcast mode on CPU1
Jan 22 19:48:20 astar kernel: [    0.593957] process: Switch to broadcast mode on CPU2
Jan 22 19:48:20 astar kernel: [    0.607173] process: Switch to broadcast mode on CPU3
Jan 22 19:48:20 astar kernel: [    0.567592] smpboot: Booting Node   0, Processors  #   1 #   2 #   3 #   4 #   5 OK
Jan 22 19:48:20 astar kernel: [    0.620386] process: Switch to broadcast mode on CPU4
Jan 22 19:48:20 astar kernel: [    0.633532] Brought up 6 CPUs
Jan 22 19:48:20 astar kernel: [    0.633535] smpboot: Total of 6 processors activated (33756.03 BogoMIPS)
Jan 22 19:48:20 astar kernel: [    0.640293] process: Switch to broadcast mode on CPU5
Jan 22 19:48:20 astar kernel: [    0.640409] process: Switch to broadcast mode on CPU0
Jan 22 19:48:20 astar kernel: [    0.640642] devtmpfs: initialized
Jan 22 19:48:20 astar kernel: [    0.640915] PM: Registering ACPI NVS region [mem 0xafcf0000-0xafcf0fff] (4096 bytes)
Jan 22 19:48:20 astar kernel: [    0.641765] atomic64 test passed for x86-64 platform with CX8 and with SSE
Jan 22 19:48:20 astar kernel: [    0.641767] pinctrl core: initialized pinctrl subsystem
Jan 22 19:48:20 astar kernel: [    0.641848] RTC time:  2:48:16, date: 01/23/14
Jan 22 19:48:20 astar kernel: [    0.641892] NET: Registered protocol family 16
Jan 22 19:48:20 astar kernel: [    0.641968] cpuidle: using governor menu
Jan 22 19:48:20 astar kernel: [    0.641974] TOM: 00000000b0000000 aka 2816M
Jan 22 19:48:20 astar kernel: [    0.641985] TOM2: 0000000250000000 aka 9472M
Jan 22 19:48:20 astar kernel: [    0.642084] ACPI: bus type PCI registered
Jan 22 19:48:20 astar kernel: [    0.642086] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 19:48:20 astar kernel: [    0.642132] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 22 19:48:20 astar kernel: [    0.642134] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jan 22 19:48:20 astar kernel: [    0.650135] PCI: Using configuration type 1 for base access
Jan 22 19:48:20 astar kernel: [    0.650283] mtrr: your CPUs had inconsistent variable MTRR settings
Jan 22 19:48:20 astar kernel: [    0.650284] mtrr: probably your BIOS does not setup all CPUs.
Jan 22 19:48:20 astar kernel: [    0.650285] mtrr: corrected configuration.
Jan 22 19:48:20 astar kernel: [    0.651018] bio: create slab <bio-0> at 0
Jan 22 19:48:20 astar kernel: [    0.651151] ACPI: Added _OSI(Module Device)
Jan 22 19:48:20 astar kernel: [    0.651153] ACPI: Added _OSI(Processor Device)
Jan 22 19:48:20 astar kernel: [    0.651154] ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 22 19:48:20 astar kernel: [    0.651155] ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 19:48:20 astar kernel: [    0.655764] ACPI BIOS Warning (bug): Incorrect checksum in table [TAMG] - 0xBE, should be 0xBD (20130725/tbprint-204)
Jan 22 19:48:20 astar kernel: [    0.655884] ACPI: Interpreter enabled
Jan 22 19:48:20 astar kernel: [    0.655894] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20130725/hwxface-571)
Jan 22 19:48:20 astar kernel: [    0.655897] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20130725/hwxface-571)
Jan 22 19:48:20 astar kernel: [    0.655905] ACPI: (supports S0 S1 S4 S5)
Jan 22 19:48:20 astar kernel: [    0.655907] ACPI: Using IOAPIC for interrupt routing
Jan 22 19:48:20 astar kernel: [    0.655933] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 19:48:20 astar kernel: [    0.656023] ACPI: No dock devices found.
Jan 22 19:48:20 astar kernel: [    0.713380] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 19:48:20 astar kernel: [    0.713385] acpi PNP0A03:00: ACPI _OSC support notification failed, disabling PCIe ASPM
Jan 22 19:48:20 astar kernel: [    0.713387] acpi PNP0A03:00: Unable to request _OSC control (_OSC support mask: 0x08)
Jan 22 19:48:20 astar kernel: [    0.713587] PCI host bridge to bus 0000:00
Jan 22 19:48:20 astar kernel: [    0.713589] pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 19:48:20 astar kernel: [    0.713591] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
Jan 22 19:48:20 astar kernel: [    0.713592] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
Jan 22 19:48:20 astar kernel: [    0.713594] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
Jan 22 19:48:20 astar kernel: [    0.713596] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff]
Jan 22 19:48:20 astar kernel: [    0.713597] pci_bus 0000:00: root bus resource [mem 0xaff00000-0xfebfffff]
Jan 22 19:48:20 astar kernel: [    0.713867] pci 0000:00:02.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.713961] pci 0000:00:04.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714055] pci 0000:00:05.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714146] pci 0000:00:06.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714236] pci 0000:00:07.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714328] pci 0000:00:09.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714408] pci 0000:00:0b.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714653] pci 0000:00:12.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714796] pci 0000:00:12.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714931] pci 0000:00:13.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715075] pci 0000:00:13.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715311] pci 0000:00:14.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715479] pci 0000:00:14.4: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715577] pci 0000:00:14.5: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715678] pci 0000:00:16.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715833] pci 0000:00:16.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.717894] pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 22 19:48:20 astar kernel: [    0.719892] pci 0000:00:04.0: PCI bridge to [bus 02]
Jan 22 19:48:20 astar kernel: [    0.720018] pci 0000:00:05.0: PCI bridge to [bus 03]
Jan 22 19:48:20 astar kernel: [    0.720102] pci 0000:00:06.0: PCI bridge to [bus 04]
Jan 22 19:48:20 astar kernel: [    0.721891] pci 0000:00:07.0: PCI bridge to [bus 05]
Jan 22 19:48:20 astar kernel: [    0.723891] pci 0000:00:09.0: PCI bridge to [bus 06]
Jan 22 19:48:20 astar kernel: [    0.725889] pci 0000:00:0b.0: PCI bridge to [bus 07]
Jan 22 19:48:20 astar kernel: [    0.726041] pci 0000:00:14.4: PCI bridge to [bus 08] (subtractive decode)
Jan 22 19:48:20 astar kernel: [    0.744096] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744138] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744173] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744206] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744240] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744273] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744306] ACPI: PCI Interrupt Link [LNK0] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744339] ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744902] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
Jan 22 19:48:20 astar kernel: [    0.744905] vgaarb: device added: PCI:0000:07:00.0,decodes=io+mem,owns=none,locks=none
Jan 22 19:48:20 astar kernel: [    0.744906] vgaarb: loaded
Jan 22 19:48:20 astar kernel: [    0.744907] vgaarb: bridge control possible 0000:07:00.0
Jan 22 19:48:20 astar kernel: [    0.744908] vgaarb: bridge control possible 0000:01:00.0
Jan 22 19:48:20 astar kernel: [    0.744971] SCSI subsystem initialized
Jan 22 19:48:20 astar kernel: [    0.745086] ACPI: bus type USB registered
Jan 22 19:48:20 astar kernel: [    0.745099] usbcore: registered new interface driver usbfs
Jan 22 19:48:20 astar kernel: [    0.745105] usbcore: registered new interface driver hub
Jan 22 19:48:20 astar kernel: [    0.745126] usbcore: registered new device driver usb
Jan 22 19:48:20 astar kernel: [    0.745195] PCI: Using ACPI for IRQ routing
Jan 22 19:48:20 astar kernel: [    0.751209] pci 0000:00:00.0: no compatible bridge window for [mem 0xe0000000-0xffffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.751339] NetLabel: Initializing
Jan 22 19:48:20 astar kernel: [    0.751340] NetLabel:  domain hash size = 128
Jan 22 19:48:20 astar kernel: [    0.751340] NetLabel:  protocols = UNLABELED CIPSOv4
Jan 22 19:48:20 astar kernel: [    0.751348] NetLabel:  unlabeled traffic allowed by default
Jan 22 19:48:20 astar kernel: [    0.751383] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 22 19:48:20 astar kernel: [    0.751385] hpet0: 3 comparators, 32-bit 14.318180 MHz counter
Jan 22 19:48:20 astar kernel: [    0.753466] Switched to clocksource hpet
Jan 22 19:48:20 astar kernel: [    0.757598] pnp: PnP ACPI init
Jan 22 19:48:20 astar kernel: [    0.757606] ACPI: bus type PNP registered
Jan 22 19:48:20 astar kernel: [    0.757672] system 00:00: [io  0x04d0-0x04d1] has been reserved
Jan 22 19:48:20 astar kernel: [    0.757674] system 00:00: [io  0x0220-0x0225] has been reserved
Jan 22 19:48:20 astar kernel: [    0.757675] system 00:00: [io  0x0290-0x0294] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777704] pnp 00:01: disabling [mem 0x00000000-0x00000fff window] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.777721] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:01:00.0 BAR 6 [mem 0x00000000-0x0007ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777725] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:05:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777728] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:06:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777730] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:07:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777751] system 00:01: [io  0x0900-0x091f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777752] system 00:01: [io  0x0228-0x022f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777754] system 00:01: [io  0x040b] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777755] system 00:01: [io  0x04d6] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777757] system 00:01: [io  0x0c00-0x0c01] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777758] system 00:01: [io  0x0c14] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777759] system 00:01: [io  0x0c50-0x0c52] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777761] system 00:01: [io  0x0c6c-0x0c6d] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777762] system 00:01: [io  0x0c6f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777763] system 00:01: [io  0x0cd0-0x0cd1] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777765] system 00:01: [io  0x0cd2-0x0cd3] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777766] system 00:01: [io  0x0cd4-0x0cdf] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777767] system 00:01: [io  0x0800-0x08fe] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.777769] system 00:01: [io  0x0a10-0x0a17] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777770] system 00:01: [io  0x0b00-0x0b0f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777771] system 00:01: [io  0x0b10-0x0b1f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777773] system 00:01: [io  0x0b20-0x0b3f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777775] system 00:01: [mem 0xfee00400-0xfee00fff window] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778296] system 00:07: [mem 0xe0000000-0xefffffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778387] pnp 00:08: disabling [mem 0x000d4400-0x000d7fff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778389] pnp 00:08: disabling [mem 0x000f0000-0x000f7fff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778391] pnp 00:08: disabling [mem 0x000f8000-0x000fbfff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778393] pnp 00:08: disabling [mem 0x000fc000-0x000fffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778394] pnp 00:08: disabling [mem 0x00000000-0x0009ffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778396] pnp 00:08: disabling [mem 0x00100000-0xafceffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778428] system 00:08: [mem 0xafcf0000-0xafcfffff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778429] system 00:08: [mem 0xffff0000-0xffffffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778431] system 00:08: [mem 0xafd00000-0xafdfffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778432] system 00:08: [mem 0xafe00000-0xafefffff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778434] system 00:08: [mem 0xfec00000-0xfec00fff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778435] system 00:08: [mem 0xfee00000-0xfee00fff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778437] system 00:08: [mem 0xfff80000-0xfffeffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778451] pnp: PnP ACPI: found 9 devices
Jan 22 19:48:20 astar kernel: [    0.778452] ACPI: bus type PNP unregistered
Jan 22 19:48:20 astar kernel: [    0.784812] pci 0000:01:00.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784815] pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 22 19:48:20 astar kernel: [    0.784817] pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Jan 22 19:48:20 astar kernel: [    0.784820] pci 0000:00:02.0:   bridge window [mem 0xfb000000-0xfcffffff]
Jan 22 19:48:20 astar kernel: [    0.784822] pci 0000:00:02.0:   bridge window [mem 0xb0000000-0xcfffffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784825] pci 0000:00:04.0: PCI bridge to [bus 02]
Jan 22 19:48:20 astar kernel: [    0.784826] pci 0000:00:04.0:   bridge window [io  0xb000-0xbfff]
Jan 22 19:48:20 astar kernel: [    0.784829] pci 0000:00:04.0:   bridge window [mem 0xfdc00000-0xfdcfffff]
Jan 22 19:48:20 astar kernel: [    0.784831] pci 0000:00:04.0:   bridge window [mem 0xfdb00000-0xfdbfffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784833] pci 0000:00:05.0: PCI bridge to [bus 03]
Jan 22 19:48:20 astar kernel: [    0.784835] pci 0000:00:05.0:   bridge window [io  0xa000-0xafff]
Jan 22 19:48:20 astar kernel: [    0.784837] pci 0000:00:05.0:   bridge window [mem 0xfda00000-0xfdafffff]
Jan 22 19:48:20 astar kernel: [    0.784839] pci 0000:00:05.0:   bridge window [mem 0xfd900000-0xfd9fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784841] pci 0000:00:06.0: PCI bridge to [bus 04]
Jan 22 19:48:20 astar kernel: [    0.784843] pci 0000:00:06.0:   bridge window [io  0x9000-0x9fff]
Jan 22 19:48:20 astar kernel: [    0.784845] pci 0000:00:06.0:   bridge window [mem 0xfd800000-0xfd8fffff]
Jan 22 19:48:20 astar kernel: [    0.784847] pci 0000:00:06.0:   bridge window [mem 0xfd700000-0xfd7fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784850] pci 0000:05:00.0: BAR 6: assigned [mem 0xfd300000-0xfd31ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784851] pci 0000:00:07.0: PCI bridge to [bus 05]
Jan 22 19:48:20 astar kernel: [    0.784853] pci 0000:00:07.0:   bridge window [io  0x8000-0x8fff]
Jan 22 19:48:20 astar kernel: [    0.784855] pci 0000:00:07.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Jan 22 19:48:20 astar kernel: [    0.784857] pci 0000:00:07.0:   bridge window [mem 0xfd300000-0xfd3fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784860] pci 0000:06:00.0: BAR 6: assigned [mem 0xfde00000-0xfde1ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784861] pci 0000:00:09.0: PCI bridge to [bus 06]
Jan 22 19:48:20 astar kernel: [    0.784863] pci 0000:00:09.0:   bridge window [io  0xe000-0xefff]
Jan 22 19:48:20 astar kernel: [    0.784865] pci 0000:00:09.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Jan 22 19:48:20 astar kernel: [    0.784867] pci 0000:00:09.0:   bridge window [mem 0xfde00000-0xfdefffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784870] pci 0000:07:00.0: BAR 6: assigned [mem 0xfdd00000-0xfdd1ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784871] pci 0000:00:0b.0: PCI bridge to [bus 07]
Jan 22 19:48:20 astar kernel: [    0.784873] pci 0000:00:0b.0:   bridge window [io  0xd000-0xdfff]
Jan 22 19:48:20 astar kernel: [    0.784875] pci 0000:00:0b.0:   bridge window [mem 0xfdd00000-0xfddfffff]
Jan 22 19:48:20 astar kernel: [    0.784877] pci 0000:00:0b.0:   bridge window [mem 0xd0000000-0xdfffffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784879] pci 0000:00:14.4: PCI bridge to [bus 08]
Jan 22 19:48:20 astar kernel: [    0.784881] pci 0000:00:14.4:   bridge window [io  0x7000-0x7fff]
Jan 22 19:48:20 astar kernel: [    0.784885] pci 0000:00:14.4:   bridge window [mem 0xfd600000-0xfd6fffff]
Jan 22 19:48:20 astar kernel: [    0.784887] pci 0000:00:14.4:   bridge window [mem 0xfd500000-0xfd5fffff pref]
Jan 22 19:48:20 astar kernel: [    0.784968] NET: Registered protocol family 2
Jan 22 19:48:20 astar kernel: [    0.785154] TCP established hash table entries: 65536 (order: 8, 1048576 bytes)
Jan 22 19:48:20 astar kernel: [    0.785527] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Jan 22 19:48:20 astar kernel: [    0.785814] TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 19:48:20 astar kernel: [    0.785855] TCP: reno registered
Jan 22 19:48:20 astar kernel: [    0.785866] UDP hash table entries: 4096 (order: 5, 131072 bytes)
Jan 22 19:48:20 astar kernel: [    0.785916] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes)
Jan 22 19:48:20 astar kernel: [    0.786014] NET: Registered protocol family 1
Jan 22 19:48:20 astar kernel: [    1.042151] Unpacking initramfs...
Jan 22 19:48:20 astar kernel: [    1.189582] Freeing initrd memory: 11496K (ffff88003697c000 - ffff8800374b6000)
Jan 22 19:48:20 astar kernel: [    1.271913] pci 0000:00:00.2: can't derive routing for PCI INT A
Jan 22 19:48:20 astar kernel: [    1.271916] pci 0000:00:00.2: PCI INT A: no GSI
Jan 22 19:48:20 astar kernel: [    1.272057] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
Jan 22 19:48:20 astar kernel: [    1.281360] AMD-Vi: Lazy IO/TLB flushing enabled
Jan 22 19:48:20 astar kernel: [    1.362449] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 19:48:20 astar kernel: [    1.362451] software IO TLB [mem 0xabcf0000-0xafcf0000] (64MB) mapped at [ffff8800abcf0000-ffff8800afceffff]
Jan 22 19:48:20 astar kernel: [    1.362746] LVT offset 1 assigned for vector 0x400
Jan 22 19:48:20 astar kernel: [    1.362751] IBS: LVT offset 1 assigned
Jan 22 19:48:20 astar kernel: [    1.362780] perf: AMD IBS detected (0x0000001f)
Jan 22 19:48:20 astar kernel: [    1.363479] Initialise system trusted keyring
Jan 22 19:48:20 astar kernel: [    1.363523] audit: initializing netlink socket (disabled)
Jan 22 19:48:20 astar kernel: [    1.363538] type=2000 audit(1390445296.812:1): initialized
Jan 22 19:48:20 astar kernel: [    1.382633] HugeTLB registered 2 MB page size, pre-allocated 0 pages
Jan 22 19:48:20 astar kernel: [    1.383813] zbud: loaded
Jan 22 19:48:20 astar kernel: [    1.383988] VFS: Disk quotas dquot_6.5.2
Jan 22 19:48:20 astar kernel: [    1.384030] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 19:48:20 astar kernel: [    1.384377] msgmni has been set to 15829
Jan 22 19:48:20 astar kernel: [    1.384447] Key type big_key registered
Jan 22 19:48:20 astar kernel: [    1.385657] alg: No test for stdrng (krng)
Jan 22 19:48:20 astar kernel: [    1.385668] NET: Registered protocol family 38
Jan 22 19:48:20 astar kernel: [    1.385670] Key type asymmetric registered
Jan 22 19:48:20 astar kernel: [    1.385671] Asymmetric key parser 'x509' registered
Jan 22 19:48:20 astar kernel: [    1.385699] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
Jan 22 19:48:20 astar kernel: [    1.385756] io scheduler noop registered
Jan 22 19:48:20 astar kernel: [    1.385757] io scheduler deadline registered
Jan 22 19:48:20 astar kernel: [    1.385780] io scheduler cfq registered (default)
Jan 22 19:48:20 astar kernel: [    1.386600] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Jan 22 19:48:20 astar kernel: [    1.386615] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Jan 22 19:48:20 astar kernel: [    1.386697] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
Jan 22 19:48:20 astar kernel: [    1.386700] ACPI: Power Button [PWRB]
Jan 22 19:48:20 astar kernel: [    1.386733] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jan 22 19:48:20 astar kernel: [    1.386734] ACPI: Power Button [PWRF]
Jan 22 19:48:20 astar kernel: [    1.386758] ACPI: processor limited to max C-state 1
Jan 22 19:48:20 astar kernel: [    1.386967] GHES: HEST is not enabled!
Jan 22 19:48:20 astar kernel: [    1.387039] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 19:48:20 astar kernel: [    1.407599] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 19:48:20 astar kernel: [    1.407970] Non-volatile memory driver v1.3
Jan 22 19:48:20 astar kernel: [    1.407972] Linux agpgart interface v0.103
Jan 22 19:48:20 astar kernel: [    1.408267] ahci 0000:00:11.0: AHCI 0001.0200 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
Jan 22 19:48:20 astar kernel: [    1.408270] ahci 0000:00:11.0: flags: 64bit ncq sntf ilck pm led clo pmp pio slum part 
Jan 22 19:48:20 astar kernel: [    1.409012] scsi0 : ahci
Jan 22 19:48:20 astar kernel: [    1.409073] scsi1 : ahci
Jan 22 19:48:20 astar kernel: [    1.409120] scsi2 : ahci
Jan 22 19:48:20 astar kernel: [    1.409168] scsi3 : ahci
Jan 22 19:48:20 astar kernel: [    1.409214] scsi4 : ahci
Jan 22 19:48:20 astar kernel: [    1.409261] scsi5 : ahci
Jan 22 19:48:20 astar kernel: [    1.409282] ata1: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff100 irq 41
Jan 22 19:48:20 astar kernel: [    1.409285] ata2: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff180 irq 41
Jan 22 19:48:20 astar kernel: [    1.409287] ata3: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff200 irq 41
Jan 22 19:48:20 astar kernel: [    1.409288] ata4: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff280 irq 41
Jan 22 19:48:20 astar kernel: [    1.409290] ata5: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff300 irq 41
Jan 22 19:48:20 astar kernel: [    1.409292] ata6: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff380 irq 41
Jan 22 19:48:20 astar kernel: [    1.409424] libphy: Fixed MDIO Bus: probed
Jan 22 19:48:20 astar kernel: [    1.409499] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jan 22 19:48:20 astar kernel: [    1.409502] ehci-pci: EHCI PCI platform driver
Jan 22 19:48:20 astar kernel: [    1.409643] ehci-pci 0000:00:12.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.409697] ehci-pci 0000:00:12.2: new USB bus registered, assigned bus number 1
Jan 22 19:48:20 astar kernel: [    1.409706] ehci-pci 0000:00:12.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.409716] ehci-pci 0000:00:12.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.409750] ehci-pci 0000:00:12.2: irq 17, io mem 0xfdffd000
Jan 22 19:48:20 astar kernel: [    1.415508] ehci-pci 0000:00:12.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.415623] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.415629] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.415635] usb usb1: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.415640] usb usb1: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.415645] usb usb1: SerialNumber: 0000:00:12.2
Jan 22 19:48:20 astar kernel: [    1.415788] hub 1-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.415792] hub 1-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.416013] ehci-pci 0000:00:13.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.416076] ehci-pci 0000:00:13.2: new USB bus registered, assigned bus number 2
Jan 22 19:48:20 astar kernel: [    1.416079] ehci-pci 0000:00:13.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.416087] ehci-pci 0000:00:13.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.416114] ehci-pci 0000:00:13.2: irq 17, io mem 0xfdffb000
Jan 22 19:48:20 astar kernel: [    1.421502] ehci-pci 0000:00:13.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.421609] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.421615] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.421620] usb usb2: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.421625] usb usb2: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.421630] usb usb2: SerialNumber: 0000:00:13.2
Jan 22 19:48:20 astar kernel: [    1.421777] hub 2-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.421781] hub 2-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.422010] ehci-pci 0000:00:16.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.422057] ehci-pci 0000:00:16.2: new USB bus registered, assigned bus number 3
Jan 22 19:48:20 astar kernel: [    1.422059] ehci-pci 0000:00:16.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.422068] ehci-pci 0000:00:16.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.422092] ehci-pci 0000:00:16.2: irq 17, io mem 0xfdff8000
Jan 22 19:48:20 astar kernel: [    1.427501] ehci-pci 0000:00:16.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.427618] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.427624] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.427629] usb usb3: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.427634] usb usb3: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.427639] usb usb3: SerialNumber: 0000:00:16.2
Jan 22 19:48:20 astar kernel: [    1.427781] hub 3-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.427786] hub 3-0:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.427908] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jan 22 19:48:20 astar kernel: [    1.427910] ohci-pci: OHCI PCI platform driver
Jan 22 19:48:20 astar kernel: [    1.428024] ohci-pci 0000:00:12.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.428080] ohci-pci 0000:00:12.0: new USB bus registered, assigned bus number 4
Jan 22 19:48:20 astar kernel: [    1.428104] ohci-pci 0000:00:12.0: irq 18, io mem 0xfdffe000
Jan 22 19:48:20 astar kernel: [    1.482531] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.482533] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.482534] usb usb4: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.482536] usb usb4: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.482537] usb usb4: SerialNumber: 0000:00:12.0
Jan 22 19:48:20 astar kernel: [    1.482689] hub 4-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.482696] hub 4-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.482971] ohci-pci 0000:00:13.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.483016] ohci-pci 0000:00:13.0: new USB bus registered, assigned bus number 5
Jan 22 19:48:20 astar kernel: [    1.483031] ohci-pci 0000:00:13.0: irq 18, io mem 0xfdffc000
Jan 22 19:48:20 astar kernel: [    1.537512] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.537515] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.537516] usb usb5: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.537518] usb usb5: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.537519] usb usb5: SerialNumber: 0000:00:13.0
Jan 22 19:48:20 astar kernel: [    1.537615] hub 5-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.537621] hub 5-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.537835] ohci-pci 0000:00:14.5: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.537892] ohci-pci 0000:00:14.5: new USB bus registered, assigned bus number 6
Jan 22 19:48:20 astar kernel: [    1.537909] ohci-pci 0000:00:14.5: irq 18, io mem 0xfdffa000
Jan 22 19:48:20 astar kernel: [    1.592505] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.592508] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.592509] usb usb6: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.592510] usb usb6: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.592511] usb usb6: SerialNumber: 0000:00:14.5
Jan 22 19:48:20 astar kernel: [    1.592608] hub 6-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.592614] hub 6-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.592797] ohci-pci 0000:00:16.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.592856] ohci-pci 0000:00:16.0: new USB bus registered, assigned bus number 7
Jan 22 19:48:20 astar kernel: [    1.592870] ohci-pci 0000:00:16.0: irq 18, io mem 0xfdff9000
Jan 22 19:48:20 astar kernel: [    1.647495] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.647497] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.647498] usb usb7: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.647499] usb usb7: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.647501] usb usb7: SerialNumber: 0000:00:16.0
Jan 22 19:48:20 astar kernel: [    1.647596] hub 7-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.647602] hub 7-0:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.647697] uhci_hcd: USB Universal Host Controller Interface driver
Jan 22 19:48:20 astar kernel: [    1.647767] xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.647827] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 8
Jan 22 19:48:20 astar kernel: [    1.648067] usb usb8: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.648068] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.648069] usb usb8: Product: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.648071] usb usb8: Manufacturer: Linux 3.12.7-300.fc20.x86_64 xhci_hcd
Jan 22 19:48:20 astar kernel: [    1.648072] usb usb8: SerialNumber: 0000:02:00.0
Jan 22 19:48:20 astar kernel: [    1.648157] hub 8-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.648167] hub 8-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.648213] xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.648243] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9
Jan 22 19:48:20 astar kernel: [    1.651315] usb usb9: New USB device found, idVendor=1d6b, idProduct=0003
Jan 22 19:48:20 astar kernel: [    1.651316] usb usb9: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.651317] usb usb9: Product: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.651319] usb usb9: Manufacturer: Linux 3.12.7-300.fc20.x86_64 xhci_hcd
Jan 22 19:48:20 astar kernel: [    1.651320] usb usb9: SerialNumber: 0000:02:00.0
Jan 22 19:48:20 astar kernel: [    1.651420] hub 9-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.651429] hub 9-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.656626] usbcore: registered new interface driver usbserial
Jan 22 19:48:20 astar kernel: [    1.656649] usbcore: registered new interface driver usbserial_generic
Jan 22 19:48:20 astar kernel: [    1.656665] usbserial: USB Serial support registered for generic
Jan 22 19:48:20 astar kernel: [    1.656717] i8042: PNP: No PS/2 controller found. Probing ports directly.
Jan 22 19:48:20 astar kernel: [    1.691057] i8042: Failed to disable AUX port, but continuing anyway... Is this a SiS?
Jan 22 19:48:20 astar kernel: [    1.691058] i8042: If AUX port is really absent please use the 'i8042.noaux' option
Jan 22 19:48:20 astar kernel: [    1.714555] ata6: SATA link down (SStatus 0 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.714600] ata4: SATA link down (SStatus 0 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.717508] usb 1-4: new high-speed USB device number 2 using ehci-pci
Jan 22 19:48:20 astar kernel: [    1.833992] usb 1-4: New USB device found, idVendor=05e3, idProduct=0608
Jan 22 19:48:20 astar kernel: [    1.834002] usb 1-4: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    1.834007] usb 1-4: Product: USB2.0 Hub
Jan 22 19:48:20 astar kernel: [    1.834693] hub 1-4:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.835112] hub 1-4:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.869471] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.869511] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.869553] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.870442] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.870471] ata3.00: ATA-8: SanDisk SDSSDHP064G, X2306RL, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.870473] ata3.00: 125045424 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.871542] ata3.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.871886] ata1.00: ATA-8: WDC WD5000AAKX-001CA0, 15.01H15, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.871888] ata1.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.872361] ata2.00: ATA-8: WDC WD5000AAKX-001CA0, 15.01H15, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.872370] ata2.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.872545] ata5.00: ATAPI: HL-DT-ST DVDRAM GH22NS50, TN02, max UDMA/100
Jan 22 19:48:20 astar kernel: [    1.873549] ata1.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.873774] scsi 0:0:0:0: Direct-Access     ATA      WDC WD5000AAKX-0 15.0 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.873955] ata2.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.874000] sd 0:0:0:0: Attached scsi generic sg0 type 0
Jan 22 19:48:20 astar kernel: [    1.874076] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
Jan 22 19:48:20 astar kernel: [    1.874103] sd 0:0:0:0: [sda] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874117] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.874323] scsi 1:0:0:0: Direct-Access     ATA      WDC WD5000AAKX-0 15.0 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.874468] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
Jan 22 19:48:20 astar kernel: [    1.874475] sd 1:0:0:0: Attached scsi generic sg1 type 0
Jan 22 19:48:20 astar kernel: [    1.874616] scsi 2:0:0:0: Direct-Access     ATA      SanDisk SDSSDHP0 X230 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.874619] sd 1:0:0:0: [sdb] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874662] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.874745] sd 2:0:0:0: [sdc] 125045424 512-byte logical blocks: (64.0 GB/59.6 GiB)
Jan 22 19:48:20 astar kernel: [    1.874758] sd 2:0:0:0: Attached scsi generic sg2 type 0
Jan 22 19:48:20 astar kernel: [    1.874885] sd 2:0:0:0: [sdc] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874923] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.875423]  sdc: sdc1 sdc2
Jan 22 19:48:20 astar kernel: [    1.876010] sd 2:0:0:0: [sdc] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.876400] ata5.00: configured for UDMA/100
Jan 22 19:48:20 astar kernel: [    1.885695] scsi 4:0:0:0: CD-ROM            HL-DT-ST DVDRAM GH22NS50  TN02 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.894915] sr0: scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 22 19:48:20 astar kernel: [    1.894923] cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 19:48:20 astar kernel: [    1.895208] sr 4:0:0:0: Attached scsi generic sg3 type 5
Jan 22 19:48:20 astar kernel: [    1.920365]  sdb: sdb1 sdb2 sdb3 sdb4
Jan 22 19:48:20 astar kernel: [    1.921004] sd 1:0:0:0: [sdb] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.924507]  sda: sda1 sda2 sda3
Jan 22 19:48:20 astar kernel: [    1.924717] sd 0:0:0:0: [sda] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.940560] serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 19:48:20 astar kernel: [    1.940772] mousedev: PS/2 mouse device common for all mice
Jan 22 19:48:20 astar kernel: [    1.940992] rtc_cmos 00:03: RTC can wake from S4
Jan 22 19:48:20 astar kernel: [    1.941087] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0
Jan 22 19:48:20 astar kernel: [    1.941106] rtc_cmos 00:03: alarms up to one month, 242 bytes nvram, hpet irqs
Jan 22 19:48:20 astar kernel: [    1.941159] device-mapper: uevent: version 1.0.3
Jan 22 19:48:20 astar kernel: [    1.941217] device-mapper: ioctl: 4.26.0-ioctl (2013-08-15) initialised: dm-devel@redhat.com
Jan 22 19:48:20 astar kernel: [    1.941475] hidraw: raw HID events driver (C) Jiri Kosina
Jan 22 19:48:20 astar kernel: [    1.941549] usbcore: registered new interface driver usbhid
Jan 22 19:48:20 astar kernel: [    1.941550] usbhid: USB HID core driver
Jan 22 19:48:20 astar kernel: [    1.941586] drop_monitor: Initializing network drop monitor service
Jan 22 19:48:20 astar kernel: [    1.941649] ip_tables: (C) 2000-2006 Netfilter Core Team
Jan 22 19:48:20 astar kernel: [    1.941716] TCP: cubic registered
Jan 22 19:48:20 astar kernel: [    1.941718] Initializing XFRM netlink socket
Jan 22 19:48:20 astar kernel: [    1.941796] NET: Registered protocol family 10
Jan 22 19:48:20 astar kernel: [    1.941974] mip6: Mobile IPv6
Jan 22 19:48:20 astar kernel: [    1.941976] NET: Registered protocol family 17
Jan 22 19:48:20 astar kernel: [    1.942329] Loading compiled-in X.509 certificates
Jan 22 19:48:20 astar kernel: [    1.943238] Loaded X.509 cert 'Fedora kernel signing key: a65f5f670480b1e60c692845bc633e5d6d0f0f92'
Jan 22 19:48:20 astar kernel: [    1.943246] registered taskstats version 1
Jan 22 19:48:20 astar kernel: [    1.943964]   Magic number: 14:692:814
Jan 22 19:48:20 astar kernel: [    1.944054] rtc_cmos 00:03: setting system clock to 2014-01-23 02:48:18 UTC (1390445298)
Jan 22 19:48:20 astar kernel: [    1.945227] Freeing unused kernel memory: 1444K (ffffffff81d07000 - ffffffff81e70000)
Jan 22 19:48:20 astar kernel: [    1.945230] Write protecting the kernel read-only data: 12288k
Jan 22 19:48:20 astar kernel: [    1.948714] Freeing unused kernel memory: 1548K (ffff88000167d000 - ffff880001800000)
Jan 22 19:48:20 astar kernel: [    1.951176] Freeing unused kernel memory: 1152K (ffff880001ae0000 - ffff880001c00000)
Jan 22 19:48:20 astar kernel: [    2.100315] wmi: Mapper loaded
Jan 22 19:48:20 astar kernel: [    2.113871] [drm] Initialized drm 1.1.0 20060810
Jan 22 19:48:20 astar kernel: [    2.116363] usb 5-2: new low-speed USB device number 2 using ohci-pci
Jan 22 19:48:20 astar kernel: [    2.138402] pcieport 0000:00:02.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    2.138742] nouveau  [  DEVICE][0000:01:00.0] BOOT0  : 0x0a8180a2
Jan 22 19:48:20 astar kernel: [    2.138745] nouveau  [  DEVICE][0000:01:00.0] Chipset: GT218 (NVA8)
Jan 22 19:48:20 astar kernel: [    2.138746] nouveau  [  DEVICE][0000:01:00.0] Family : NV50
Jan 22 19:48:20 astar kernel: [    2.139307] nouveau  [   VBIOS][0000:01:00.0] checking PRAMIN for image...
Jan 22 19:48:20 astar kernel: [    2.211458] nouveau  [   VBIOS][0000:01:00.0] ... appears to be valid
Jan 22 19:48:20 astar kernel: [    2.211460] nouveau  [   VBIOS][0000:01:00.0] using image from PRAMIN
Jan 22 19:48:20 astar kernel: [    2.211564] nouveau  [   VBIOS][0000:01:00.0] BIT signature found
Jan 22 19:48:20 astar kernel: [    2.211567] nouveau  [   VBIOS][0000:01:00.0] version 70.18.2c.00.04
Jan 22 19:48:20 astar kernel: [    2.231812] nouveau  [     PFB][0000:01:00.0] RAM type: DDR2
Jan 22 19:48:20 astar kernel: [    2.231814] nouveau  [     PFB][0000:01:00.0] RAM size: 512 MiB
Jan 22 19:48:20 astar kernel: [    2.231816] nouveau  [     PFB][0000:01:00.0]    ZCOMP: 960 tags
Jan 22 19:48:20 astar kernel: [    2.253287] nouveau  [  PTHERM][0000:01:00.0] FAN control: none / external
Jan 22 19:48:20 astar kernel: [    2.253294] nouveau  [  PTHERM][0000:01:00.0] fan management: disabled
Jan 22 19:48:20 astar kernel: [    2.253296] nouveau  [  PTHERM][0000:01:00.0] internal sensor: yes
Jan 22 19:48:20 astar kernel: [    2.253518] [TTM] Zone  kernel: Available graphics memory: 4054398 kiB
Jan 22 19:48:20 astar kernel: [    2.253519] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
Jan 22 19:48:20 astar kernel: [    2.253520] [TTM] Initializing pool allocator
Jan 22 19:48:20 astar kernel: [    2.253524] [TTM] Initializing DMA pool allocator
Jan 22 19:48:20 astar kernel: [    2.253532] nouveau  [     DRM] VRAM: 512 MiB
Jan 22 19:48:20 astar kernel: [    2.253533] nouveau  [     DRM] GART: 1048576 MiB
Jan 22 19:48:20 astar kernel: [    2.253536] nouveau  [     DRM] TMDS table version 2.0
Jan 22 19:48:20 astar kernel: [    2.253538] nouveau  [     DRM] DCB version 4.0
Jan 22 19:48:20 astar kernel: [    2.253540] nouveau  [     DRM] DCB outp 00: 01000302 00020030
Jan 22 19:48:20 astar kernel: [    2.253542] nouveau  [     DRM] DCB outp 01: 02000300 00000000
Jan 22 19:48:20 astar kernel: [    2.253544] nouveau  [     DRM] DCB outp 02: 02011362 0f220010
Jan 22 19:48:20 astar kernel: [    2.253545] nouveau  [     DRM] DCB outp 03: 01022310 00020010
Jan 22 19:48:20 astar kernel: [    2.253547] nouveau  [     DRM] DCB conn 00: 00001030
Jan 22 19:48:20 astar kernel: [    2.253549] nouveau  [     DRM] DCB conn 01: 00202161
Jan 22 19:48:20 astar kernel: [    2.253550] nouveau  [     DRM] DCB conn 02: 00000200
Jan 22 19:48:20 astar kernel: [    2.260321] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Jan 22 19:48:20 astar kernel: [    2.260322] [drm] No driver support for vblank timestamp query.
Jan 22 19:48:20 astar kernel: [    2.260418] nouveau  [     DRM] 3 available performance level(s)
Jan 22 19:48:20 astar kernel: [    2.260421] nouveau  [     DRM] 0: core 135MHz shader 270MHz memory 135MHz voltage 850mV
Jan 22 19:48:20 astar kernel: [    2.260423] nouveau  [     DRM] 1: core 405MHz shader 810MHz memory 300MHz voltage 900mV
Jan 22 19:48:20 astar kernel: [    2.260426] nouveau  [     DRM] 3: core 567MHz shader 1400MHz memory 350MHz voltage 1000mV
Jan 22 19:48:20 astar kernel: [    2.260428] nouveau  [     DRM] c: core 405MHz shader 810MHz memory 405MHz voltage 900mV
Jan 22 19:48:20 astar kernel: [    2.268198] nouveau  [     DRM] MM: using COPY for buffer copies
Jan 22 19:48:20 astar kernel: [    2.268453] usb 5-2: New USB device found, idVendor=046d, idProduct=c505
Jan 22 19:48:20 astar kernel: [    2.268469] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.268476] usb 5-2: Product: USB Receiver
Jan 22 19:48:20 astar kernel: [    2.268481] usb 5-2: Manufacturer: Logitech
Jan 22 19:48:20 astar kernel: [    2.276753] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.0/input/input3
Jan 22 19:48:20 astar kernel: [    2.276851] hid-generic 0003:046D:C505.0001: input,hidraw0: USB HID v1.10 Keyboard [Logitech USB Receiver] on usb-0000:00:13.0-2/input0
Jan 22 19:48:20 astar kernel: [    2.286932] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.1/input/input4
Jan 22 19:48:20 astar kernel: [    2.287039] hid-generic 0003:046D:C505.0002: input,hidraw1: USB HID v1.10 Mouse [Logitech USB Receiver] on usb-0000:00:13.0-2/input1
Jan 22 19:48:20 astar kernel: [    2.312772] nouveau  [     DRM] allocated 1920x1200 fb: 0x70000, bo ffff88024422ec00
Jan 22 19:48:20 astar kernel: [    2.312870] fbcon: nouveaufb (fb0) is primary device
Jan 22 19:48:20 astar kernel: [    2.364352] tsc: Refined TSC clocksource calibration: 2812.882 MHz
Jan 22 19:48:20 astar kernel: [    2.391766] Console: switching to colour frame buffer device 240x75
Jan 22 19:48:20 astar kernel: [    2.395334] nouveau 0000:01:00.0: fb0: nouveaufb frame buffer device
Jan 22 19:48:20 astar kernel: [    2.395339] nouveau 0000:01:00.0: registered panic notifier
Jan 22 19:48:20 astar kernel: [    2.395342] [drm] Initialized nouveau 1.1.1 20120801 for 0000:01:00.0 on minor 0
Jan 22 19:48:20 astar kernel: [    2.454682] usb 1-4.2: new full-speed USB device number 3 using ehci-pci
Jan 22 19:48:20 astar kernel: [    2.533357] usb 1-4.2: New USB device found, idVendor=085c, idProduct=0300
Jan 22 19:48:20 astar kernel: [    2.533369] usb 1-4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.533376] usb 1-4.2: Product: Datacolor Spyder3
Jan 22 19:48:20 astar kernel: [    2.533382] usb 1-4.2: Manufacturer: ColorVision Inc.
Jan 22 19:48:20 astar kernel: [    2.598747] usb 1-4.4: new full-speed USB device number 4 using ehci-pci
Jan 22 19:48:20 astar kernel: [    2.678235] usb 1-4.4: New USB device found, idVendor=046d, idProduct=c52b
Jan 22 19:48:20 astar kernel: [    2.678239] usb 1-4.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.678241] usb 1-4.4: Product: USB Receiver
Jan 22 19:48:20 astar kernel: [    2.678242] usb 1-4.4: Manufacturer: Logitech
Jan 22 19:48:20 astar kernel: [    2.693910] logitech-djreceiver 0003:046D:C52B.0005: hiddev0,hidraw2: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:12.2-4.4/input2
Jan 22 19:48:20 astar kernel: [    2.698045] input: Logitech Unifying Device. Wireless PID:1017 as /devices/pci0000:00/0000:00:12.2/usb1/1-4/1-4.4/1-4.4:1.2/0003:046D:C52B.0005/input/input5
Jan 22 19:48:20 astar kernel: [    2.698223] logitech-djdevice 0003:046D:C52B.0006: input,hidraw3: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:1017] on usb-0000:00:12.2-4.4:1
Jan 22 19:48:20 astar kernel: [    2.700153] input: Logitech Unifying Device. Wireless PID:2010 as /devices/pci0000:00/0000:00:12.2/usb1/1-4/1-4.4/1-4.4:1.2/0003:046D:C52B.0005/input/input6
Jan 22 19:48:20 astar kernel: [    2.700257] logitech-djdevice 0003:046D:C52B.0007: input,hidraw4: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:2010] on usb-0000:00:12.2-4.4:2
Jan 22 19:48:20 astar kernel: [    2.714789] bio: create slab <bio-1> at 1
Jan 22 19:48:20 astar kernel: [    2.748828] PM: Starting manual resume from disk
Jan 22 19:48:20 astar kernel: [    2.760724] EXT4-fs (dm-0): INFO: recovery required on readonly filesystem
Jan 22 19:48:20 astar kernel: [    2.760727] EXT4-fs (dm-0): write access will be enabled during recovery
Jan 22 19:48:20 astar kernel: [    2.846919] EXT4-fs (dm-0): recovery complete
Jan 22 19:48:20 astar kernel: [    2.847626] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.011008] SELinux:  Disabled at runtime.
Jan 22 19:48:20 astar kernel: [    3.041326] type=1404 audit(1390445299.596:2): selinux=0 auid=4294967295 ses=4294967295
Jan 22 19:48:20 astar kernel: [    3.186313] RPC: Registered named UNIX socket transport module.
Jan 22 19:48:20 astar kernel: [    3.186316] RPC: Registered udp transport module.
Jan 22 19:48:20 astar kernel: [    3.186317] RPC: Registered tcp transport module.
Jan 22 19:48:20 astar kernel: [    3.186318] RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 19:48:20 astar kernel: [    3.205860] EXT4-fs (dm-0): re-mounted. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.208427] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Jan 22 19:48:20 astar kernel: [    3.259327] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 19:48:20 astar kernel: [    3.266545] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jan 22 19:48:20 astar kernel: [    3.266562] r8169 0000:05:00.0: can't disable ASPM; OS doesn't have ASPM control
Jan 22 19:48:20 astar kernel: [    3.266570] pcieport 0000:00:07.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.266948] r8169 0000:05:00.0 eth0: RTL8168d/8111d at 0xffffc90010ef8000, 1c:6f:65:3e:b1:ed, XID 083000c0 IRQ 49
Jan 22 19:48:20 astar kernel: [    3.266950] r8169 0000:05:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
Jan 22 19:48:20 astar kernel: [    3.267507] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jan 22 19:48:20 astar kernel: [    3.267520] r8169 0000:06:00.0: can't disable ASPM; OS doesn't have ASPM control
Jan 22 19:48:20 astar kernel: [    3.267524] pcieport 0000:00:09.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.267844] r8169 0000:06:00.0 eth1: RTL8168d/8111d at 0xffffc90010f00000, 1c:6f:65:3e:b1:ef, XID 083000c0 IRQ 50
Jan 22 19:48:20 astar kernel: [    3.267848] r8169 0000:06:00.0 eth1: jumbo features [frames: 9200 bytes, tx checksumming: ko]
Jan 22 19:48:20 astar kernel: [    3.279583] ACPI Warning: 0x0000000000000b00-0x0000000000000b07 SystemIO conflicts with Region \SOR1 1 (20130725/utaddress-251)
Jan 22 19:48:20 astar kernel: [    3.279593] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
Jan 22 19:48:20 astar kernel: [    3.281503] sp5100_tco: SP5100/SB800 TCO WatchDog Timer Driver v0.05
Jan 22 19:48:20 astar kernel: [    3.281578] sp5100_tco: PCI Revision ID: 0x41
Jan 22 19:48:20 astar kernel: [    3.281614] sp5100_tco: Using 0xfed80b00 for watchdog MMIO address
Jan 22 19:48:20 astar kernel: [    3.281625] sp5100_tco: Last reboot was not triggered by watchdog.
Jan 22 19:48:20 astar kernel: [    3.281755] sp5100_tco: initialized (0xffffc90010efab00). heartbeat=60 sec (nowayout=0)
Jan 22 19:48:20 astar kernel: [    3.294413] MCE: In-kernel MCE decoding enabled.
Jan 22 19:48:20 astar kernel: [    3.296857] EDAC MC: Ver: 3.0.0
Jan 22 19:48:20 astar kernel: [    3.298985] AMD64 EDAC driver v3.4.0
Jan 22 19:48:20 astar kernel: [    3.306993] microcode: CPU0: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309697] microcode: CPU0: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309733] microcode: CPU1: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309740] microcode: CPU1: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309753] microcode: CPU2: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309762] microcode: CPU2: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309783] microcode: CPU3: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309791] microcode: CPU3: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309799] microcode: CPU4: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309807] microcode: CPU4: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309817] microcode: CPU5: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309825] microcode: CPU5: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.310231] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
Jan 22 19:48:20 astar kernel: [    3.324599] input: HDA ATI SB Front Headphone as /devices/pci0000:00/0000:00:14.2/sound/card0/input14
Jan 22 19:48:20 astar kernel: [    3.325457] [drm] radeon kernel modesetting enabled.
Jan 22 19:48:20 astar kernel: [    3.325644] input: HDA ATI SB Line Out Side as /devices/pci0000:00/0000:00:14.2/sound/card0/input13
Jan 22 19:48:20 astar kernel: [    3.325670] pcieport 0000:00:0b.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.325675] radeon 0000:07:00.0: enabling device (0000 -> 0003)
Jan 22 19:48:20 astar kernel: [    3.325835] input: HDA ATI SB Line Out CLFE as /devices/pci0000:00/0000:00:14.2/sound/card0/input12
Jan 22 19:48:20 astar kernel: [    3.327744] kvm: Nested Virtualization enabled
Jan 22 19:48:20 astar kernel: [    3.327749] kvm: Nested Paging enabled
Jan 22 19:48:20 astar kernel: [    3.330359] [drm] initializing kernel modesetting (VERDE 0x1002:0x683F 0x1682:0x3248).
Jan 22 19:48:20 astar kernel: [    3.330393] [drm] register mmio base: 0xFDD80000
Jan 22 19:48:20 astar kernel: [    3.330394] [drm] register mmio size: 262144
Jan 22 19:48:20 astar kernel: [    3.455406] Switched to clocksource tsc
Jan 22 19:48:20 astar kernel: [    3.455474] input: HDA ATI SB Line Out Surround as /devices/pci0000:00/0000:00:14.2/sound/card0/input11
Jan 22 19:48:20 astar kernel: [    3.455860] input: HDA ATI SB Line Out Front as /devices/pci0000:00/0000:00:14.2/sound/card0/input10
Jan 22 19:48:20 astar kernel: [    3.455873] ATOM BIOS: C444
Jan 22 19:48:20 astar kernel: [    3.455918] input: HDA ATI SB Line as /devices/pci0000:00/0000:00:14.2/sound/card0/input9
Jan 22 19:48:20 astar kernel: [    3.455953] [drm] GPU not posted. posting now...
Jan 22 19:48:20 astar kernel: [    3.455997] input: HDA ATI SB Rear Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input8
Jan 22 19:48:20 astar kernel: [    3.456573] input: HDA ATI SB Front Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input7
Jan 22 19:48:20 astar kernel: [    3.461346] radeon 0000:07:00.0: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used)
Jan 22 19:48:20 astar kernel: [    3.461348] radeon 0000:07:00.0: GTT: 1024M 0x0000000080000000 - 0x00000000BFFFFFFF
Jan 22 19:48:20 astar kernel: [    3.461350] [drm] Detected VRAM RAM=2048M, BAR=256M
Jan 22 19:48:20 astar kernel: [    3.461351] [drm] RAM width 128bits DDR
Jan 22 19:48:20 astar kernel: [    3.461369] [drm] radeon: 2048M of VRAM memory ready
Jan 22 19:48:20 astar kernel: [    3.461370] [drm] radeon: 1024M of GTT memory ready.
Jan 22 19:48:20 astar kernel: [    3.461567] hda_intel: Disabling MSI
Jan 22 19:48:20 astar kernel: [    3.461583] ALSA sound/pci/hda/hda_intel.c:3170 0000:01:00.1: Handle VGA-switcheroo audio client
Jan 22 19:48:20 astar kernel: [    3.461643] EDAC amd64: DRAM ECC disabled.
Jan 22 19:48:20 astar kernel: [    3.461655] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
Jan 22 19:48:20 astar kernel: [    3.461655]  Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
Jan 22 19:48:20 astar kernel: [    3.461655]  (Note that use of the override may cause unknown side effects.)
Jan 22 19:48:20 astar kernel: [    3.462066] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.464246] [drm] GART: num cpu pages 262144, num gpu pages 262144
Jan 22 19:48:20 astar kernel: [    3.467861] [drm] probing gen 2 caps for device 1002:5a1f = 31cd02/0
Jan 22 19:48:20 astar kernel: [    3.467870] [drm] PCIE gen 2 link speeds already enabled
Jan 22 19:48:20 astar kernel: [    3.469070] [drm] Loading VERDE Microcode
Jan 22 19:48:20 astar kernel: [    3.476475] [drm] PCIE GART of 1024M enabled (table at 0x0000000000276000).
Jan 22 19:48:20 astar kernel: [    3.476603] radeon 0000:07:00.0: WB enabled
Jan 22 19:48:20 astar kernel: [    3.476607] radeon 0000:07:00.0: fence driver on ring 0 use gpu addr 0x0000000080000c00 and cpu addr 0xffff8800a840ac00
Jan 22 19:48:20 astar kernel: [    3.476609] radeon 0000:07:00.0: fence driver on ring 1 use gpu addr 0x0000000080000c04 and cpu addr 0xffff8800a840ac04
Jan 22 19:48:20 astar kernel: [    3.476611] radeon 0000:07:00.0: fence driver on ring 2 use gpu addr 0x0000000080000c08 and cpu addr 0xffff8800a840ac08
Jan 22 19:48:20 astar kernel: [    3.476613] radeon 0000:07:00.0: fence driver on ring 3 use gpu addr 0x0000000080000c0c and cpu addr 0xffff8800a840ac0c
Jan 22 19:48:20 astar kernel: [    3.476614] radeon 0000:07:00.0: fence driver on ring 4 use gpu addr 0x0000000080000c10 and cpu addr 0xffff8800a840ac10
Jan 22 19:48:20 astar kernel: [    3.477076] radeon 0000:07:00.0: fence driver on ring 5 use gpu addr 0x0000000000075a18 and cpu addr 0xffffc90016635a18
Jan 22 19:48:20 astar kernel: [    3.477080] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Jan 22 19:48:20 astar kernel: [    3.477081] [drm] Driver supports precise vblank timestamp query.
Jan 22 19:48:20 astar kernel: [    3.477155] radeon 0000:07:00.0: radeon: using MSI.
Jan 22 19:48:20 astar kernel: [    3.477234] [drm] radeon: irq initialized.
Jan 22 19:48:20 astar kernel: [    3.479685] Adding 8187900k swap on /dev/mapper/f20_astar-swap.  Priority:-1 extents:1 across:8187900k SSFS
Jan 22 19:48:20 astar kernel: [    3.969220] [drm] ring test on 0 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    3.969227] [drm] ring test on 1 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    3.969230] [drm] ring test on 2 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    3.969294] [drm] ring test on 3 succeeded in 4 usecs
Jan 22 19:48:20 astar kernel: [    3.969316] [drm] ring test on 4 succeeded in 7 usecs
Jan 22 19:48:20 astar kernel: [    4.084034] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    4.107737] type=1305 audit(1390445300.662:3): audit_pid=660 old=0 auid=4294967295 ses=4294967295
Jan 22 19:48:20 astar kernel: [    4.107737]  res=1
Jan 22 19:48:20 astar kernel: [    4.176517] [drm] ring test on 5 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    4.176521] [drm] UVD initialized successfully.
Jan 22 19:48:20 astar kernel: [    4.177789] [drm] Enabling audio 0 support
Jan 22 19:48:20 astar kernel: [    4.177792] [drm] Enabling audio 1 support
Jan 22 19:48:20 astar kernel: [    4.177793] [drm] Enabling audio 2 support
Jan 22 19:48:20 astar kernel: [    4.177794] [drm] Enabling audio 3 support
Jan 22 19:48:20 astar kernel: [    4.177795] [drm] Enabling audio 4 support
Jan 22 19:48:20 astar kernel: [    4.177796] [drm] Enabling audio 5 support
Jan 22 19:48:20 astar kernel: [    4.177824] [drm] ib test on ring 0 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177844] [drm] ib test on ring 1 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177863] [drm] ib test on ring 2 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177908] [drm] ib test on ring 3 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177949] [drm] ib test on ring 4 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    4.351677] [drm] ib test on ring 5 succeeded
Jan 22 19:48:20 astar kernel: [    4.352213] [drm] Radeon Display Connectors
Jan 22 19:48:20 astar kernel: [    4.352215] [drm] Connector 0:
Jan 22 19:48:20 astar kernel: [    4.352216] [drm]   DP-1
Jan 22 19:48:20 astar kernel: [    4.352217] [drm]   HPD1
Jan 22 19:48:20 astar kernel: [    4.352219] [drm]   DDC: 0x6570 0x6570 0x6574 0x6574 0x6578 0x6578 0x657c 0x657c
Jan 22 19:48:20 astar kernel: [    4.352219] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352221] [drm]     DFP1: INTERNAL_UNIPHY2
Jan 22 19:48:20 astar kernel: [    4.352221] [drm] Connector 1:
Jan 22 19:48:20 astar kernel: [    4.352222] [drm]   HDMI-A-2
Jan 22 19:48:20 astar kernel: [    4.352223] [drm]   HPD4
Jan 22 19:48:20 astar kernel: [    4.352224] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
Jan 22 19:48:20 astar kernel: [    4.352225] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352226] [drm]     DFP2: INTERNAL_UNIPHY2
Jan 22 19:48:20 astar kernel: [    4.352227] [drm] Connector 2:
Jan 22 19:48:20 astar kernel: [    4.352229] [drm]   DVI-I-2
Jan 22 19:48:20 astar kernel: [    4.352230] [drm]   HPD6
Jan 22 19:48:20 astar kernel: [    4.352231] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
Jan 22 19:48:20 astar kernel: [    4.352232] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352233] [drm]     DFP3: INTERNAL_UNIPHY1
Jan 22 19:48:20 astar kernel: [    4.352234] [drm] Connector 3:
Jan 22 19:48:20 astar kernel: [    4.352234] [drm]   DVI-I-3
Jan 22 19:48:20 astar kernel: [    4.352235] [drm]   HPD2
Jan 22 19:48:20 astar kernel: [    4.352236] [drm]   DDC: 0x6560 0x6560 0x6564 0x6564 0x6568 0x6568 0x656c 0x656c
Jan 22 19:48:20 astar kernel: [    4.352237] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352238] [drm]     DFP4: INTERNAL_UNIPHY
Jan 22 19:48:20 astar kernel: [    4.352239] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
Jan 22 19:48:20 astar kernel: [    4.354307] [drm] Internal thermal controller with fan control
Jan 22 19:48:20 astar kernel: [    4.354367] [drm] radeon: power management initialized
Jan 22 19:48:20 astar kernel: [    4.387064] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Jan 22 19:48:20 astar kernel: [    4.401438] ip6_tables: (C) 2000-2006 Netfilter Core Team
Jan 22 19:48:20 astar kernel: [    4.441204] Ebtables v2.0 registered
Jan 22 19:48:21 astar kernel: [    4.450537] [drm] fb mappable at 0xD1488000
Jan 22 19:48:21 astar kernel: [    4.450541] [drm] vram apper at 0xD0000000
Jan 22 19:48:21 astar kernel: [    4.450543] [drm] size 9216000
Jan 22 19:48:21 astar kernel: [    4.450544] [drm] fb depth is 24
Jan 22 19:48:21 astar kernel: [    4.450545] [drm]    pitch is 7680
Jan 22 19:48:21 astar kernel: [    4.450796] radeon 0000:07:00.0: fb1: radeondrmfb frame buffer device
Jan 22 19:48:21 astar kernel: [    4.450802] [drm] Initialized radeon 2.34.0 20080528 for 0000:07:00.0 on minor 1
Jan 22 19:48:21 astar kernel: [    4.453131] Bridge firewalling registered
Jan 22 19:48:21 astar kernel: [    4.577515] r8169 0000:05:00.0 p7p1: link down
Jan 22 19:48:21 astar kernel: [    4.577552] IPv6: ADDRCONF(NETDEV_UP): p7p1: link is not ready
Jan 22 19:48:21 astar kernel: [    4.577560] r8169 0000:05:00.0 p7p1: link down
Jan 22 19:48:21 astar kernel: [    4.579200] device p7p1 entered promiscuous mode
Jan 22 19:48:21 astar kernel: [    4.620517] IPv6: ADDRCONF(NETDEV_UP): br0: link is not ready
Jan 22 19:48:21 astar kernel: [    4.734283] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input18
Jan 22 19:48:21 astar kernel: [    4.734363] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input17
Jan 22 19:48:21 astar kernel: [    4.734418] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input16
Jan 22 19:48:21 astar kernel: [    4.734472] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input15
Jan 22 19:48:21 astar kernel: [    4.734896] ALSA sound/pci/hda/hda_intel.c:3170 0000:07:00.1: Handle VGA-switcheroo audio client
Jan 22 19:48:21 astar kernel: [    4.734899] ALSA sound/pci/hda/hda_intel.c:3499 0000:07:00.1: Force to non-snoop mode
Jan 22 19:48:21 astar kernel: [    4.748366] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748381] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748425] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748438] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748493] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748505] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748547] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748560] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748603] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748615] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748662] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748674] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748767] input: HDA ATI HDMI HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input24
Jan 22 19:48:21 astar kernel: [    4.748879] input: HDA ATI HDMI HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input23
Jan 22 19:48:21 astar kernel: [    4.748941] input: HDA ATI HDMI HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input22
Jan 22 19:48:21 astar kernel: [    4.748991] input: HDA ATI HDMI HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input21
Jan 22 19:48:21 astar kernel: [    4.749091] input: HDA ATI HDMI HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input20
Jan 22 19:48:21 astar kernel: [    4.749148] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input19
Jan 22 19:48:21 astar kernel: [    4.937070] vgaarb: device changed decodes: PCI:0000:07:00.0,olddecodes=io+mem,decodes=none:owns=none
Jan 22 19:48:21 astar kernel: [    4.937074] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none
Jan 22 19:48:24 astar kernel: [    7.648262] r8169 0000:05:00.0 p7p1: link up
Jan 22 19:48:24 astar kernel: [    7.648282] IPv6: ADDRCONF(NETDEV_CHANGE): p7p1: link becomes ready
Jan 22 19:48:24 astar kernel: [    7.649103] br0: port 1(p7p1) entered forwarding state
Jan 22 19:48:24 astar kernel: [    7.649113] br0: port 1(p7p1) entered forwarding state
Jan 22 19:48:24 astar kernel: [    7.649132] IPv6: ADDRCONF(NETDEV_CHANGE): br0: link becomes ready

--=-HokMqAwF1mZkmjLTPer/
Content-Disposition: attachment; filename="xen console acpi-skip.txt"
Content-Type: text/plain; name="xen console acpi-skip.txt"; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose acpi_skip_timer_override com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: BIOS IRQ0 pin2 override ignored.
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.945 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=0 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
    [last message repeated 61 times]
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) Brought up 6 CPUs
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8100160, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8110160, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8120160, flags = 0x8
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000345f000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000345f000.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login: root
Password:
Last login: Mon Jan 20 18:10:47 from astar.houby.net
[root@astar ~]# xl info
host                   : astar
release                : 3.12.7-300.fc20.x86_64
version                : #1 SMP Fri Jan 10 15:35:31 UTC 2014
machine                : x86_64
nr_cpus                : 6
max_cpu_id             : 7
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 2812
hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
virt_caps              : hvm hvm_directio
total_memory           : 8188
free_memory            : 6044
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : -rc2
xen_version            : 4.4-rc2
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose acpi_skip_timer_override com1=38400,8n1,pci console=com1
cc_compiler            : gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)
cc_compile_by          : mockbuild
cc_compile_domain      : [unknown]
cc_compile_date        : Thu Jan 16 19:37:57 UTC 2014
xend_config_format     : 4
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     2     r-----      69.4
[root@astar ~]# xl pci-assignable-list
0000:07:00.0
0000:07:00.1
[root@astar ~]# xl create mars.xl
Parsing config from mars.xl
libxl: error: libxl_create.c:1034:domcreate_launch_dm: unable to add disk devices
libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/1/image/device-model-pid
libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 1
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable to get my domid
libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy failed for 1
[root@astar ~]#
[root@astar ~]#
[root@astar ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
[root@astar ~]#
[root@astar ~]#
[root@astar ~]#
[root@astar ~]# xl create mars.xl
Parsing config from mars.xl
(d2) HVM Loader
(d2) Detected Xen v4.4-rc2
(d2) Xenbus rings @0xfeffc000, event channel 6
(d2) System requested ROMBIOS
(d2) CPU speed is 2813 MHz
(d2) Relocating guest memory for lowmem MMIO space enabled
(XEN) irq.c:270: Dom2 PCI link 0 changed 0 -> 5
(d2) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom2 PCI link 1 changed 0 -> 10
(d2) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom2 PCI link 2 changed 0 -> 11
(d2) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom2 PCI link 3 changed 0 -> 5
(d2) PCI-ISA link 3 routed to IRQ5
[root@astar ~]# (d2) pci dev 01:2 INTD->IRQ5
(d2) pci dev 01:3 INTA->IRQ10
(d2) pci dev 03:0 INTA->IRQ5
(d2) pci dev 04:0 INTA->IRQ5
(d2) pci dev 05:0 INTA->IRQ10
(d2) RAM in high memory; setting high_mem resource base to 10fc00000
(d2) pci dev 02:0 bar 10 size 002000000: 0f0000008
(d2) pci dev 03:0 bar 14 size 001000000: 0f2000008
(d2) pci dev 04:0 bar 10 size 000020000: 0f3000000
(d2) pci dev 02:0 bar 14 size 000001000: 0f3020000
(d2) pci dev 03:0 bar 10 size 000000100: 00000c001
(d2) pci dev 05:0 bar 10 size 000000100: 00000c101
(d2) pci dev 04:0 bar 14 size 000000040: 00000c201
(d2) pci dev 01:2 bar 20 size 000000020: 00000c241
(d2) pci dev 01:1 bar 20 size 000000010: 00000c261
(d2) Multiprocessor initialisation:
(d2)  - CPU0 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU1 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU2 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU3 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2) Testing HVM environment:
(d2)  - REP INSB across page boundaries ... passed
(d2)  - GS base MSRs and SWAPGS ... passed
(d2) Passed 2 of 2 tests
(d2) Writing SMBIOS tables ...
(d2) Loading ROMBIOS ...
(d2) 12124 bytes of ROMBIOS high-memory extensions:
(d2)   Relocating to 0xfc001000-0xfc003f5c ... done
(d2) Creating MP tables ...
(d2) Loading Cirrus VGABIOS ...
(d2) Loading PCI Option ROM ...
(d2)  - Manufacturer: http://ipxe.org
(d2)  - Product name: iPXE
(d2) Option ROMs:
(d2)  c0000-c8fff: VGA BIOS
(d2)  c9000-d97ff: Etherboot ROM
(d2) Loading ACPI ...
(d2) vm86 TSS at fc010080
(d2) BIOS map:
(d2)  f0000-fffff: Main BIOS
(d2) E820 table:
(d2)  [00]: 00000000:00000000 - 00000000:0009e000: RAM
(d2)  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
(d2)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d2)  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d2)  [03]: 00000000:00100000 - 00000000:f0000000: RAM
(d2)  HOLE: 00000000:f0000000 - 00000000:fc000000
(d2)  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d2)  [05]: 00000001:00000000 - 00000001:0fc00000: RAM
(d2) Invoking ROMBIOS ...
(XEN) stdvga.c:147:d2 entering stdvga and caching modes
(d2) VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(d2) Bochs BIOS - build: 06/23/99
(d2) $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(d2) Options: apmbios pcibios eltorito PMM
(d2)
(d2) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk ( 160 GBytes)
(d2)
(d2)
(d2)
(d2) Press F12 for boot menu.
(d2)
(d2) Booting from Hard Disk...
(XEN) stdvga.c:151:d2 leaving stdvga

[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     136.5
mars                                         2  4095     3     r-----       4.5
[root@astar ~]# (XEN) irq.c:270: Dom2 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom2 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom2 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom2 PCI link 3 changed 5 -> 0

[root@astar ~]#
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     171.2
mars                                         2  4095     4     ------      17.7
[root@astar ~]#
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     286.1
mars                                         2  4095     4     -b----      76.3
[root@astar ~]#


--- attachment: "xen console acpi-skip no-amd-iommu.txt" ---

Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,no-amd-iommu-perdev-intremap acpi_skip_timer_override com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: BIOS IRQ0 pin2 override ignored.
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.961 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) AMD-Vi: Using global interrupt remap table is not recommended (see XSA-36)!
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=0 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880002f40400.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880002f40400.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)



===> xl create mars with secondary vga passthrough



astar login: (XEN) io.c:280: d1: bind: m_gsi=19 g_gsi=40 device=6 intx=0
(XEN) AMD-Vi: Disable: device id = 0x700, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x7b8d5000, domain = 1, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.0 from dom0 to dom1
(XEN) io.c:280: d1: bind: m_gsi=16 g_gsi=45 device=7 intx=1
(XEN) AMD-Vi: Disable: device id = 0x701, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x7b8d5000, domain = 1, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.1 from dom0 to dom1
(d1) HVM Loader
(d1) Detected Xen v4.4-rc2
(d1) Xenbus rings @0xfeffc000, event channel 6
(d1) System requested ROMBIOS
(d1) CPU speed is 2813 MHz
(d1) Relocating guest memory for lowmem MMIO space enabled
(XEN) irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(d1) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(d1) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(d1) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(d1) PCI-ISA link 3 routed to IRQ5
(d1) pci dev 01:2 INTD->IRQ5
(d1) pci dev 01:3 INTA->IRQ10
(d1) pci dev 03:0 INTA->IRQ5
(d1) pci dev 04:0 INTA->IRQ5
(d1) pci dev 05:0 INTA->IRQ10
(d1) pci dev 06:0 INTA->IRQ11
(d1) pci dev 07:0 INTB->IRQ5
(d1) Relocating 0xffff pages from 0e0001000 to 10fc00000 for lowmem MMIO hole
(d1) Relocating 0x1 pages from 0e0000000 to 11fbff000 for lowmem MMIO hole
(d1) RAM in high memory; setting high_mem resource base to 11fc00000
(d1) pci dev 06:0 bar 10 size 010000000: 0e000000c
(XEN) memory_map:add: dom1 gfn=e0000 mfn=d0000 nr=10000
(d1) pci dev 02:0 bar 10 size 002000000: 0f0000008
(d1) pci dev 03:0 bar 14 size 001000000: 0f2000008
(XEN) memory_map:add: dom1 gfn=f3000 mfn=fdd80 nr=40
(d1) pci dev 06:0 bar 18 size 000040000: 0f3000004
(d1) pci dev 04:0 bar 10 size 000020000: 0f3040000
(d1) pci dev 06:0 bar 30 size 000020000: 0f3060000
(d1) pci dev 07:0 bar 10 size 000004000: 0f3080004
(XEN) memory_map:add: dom1 gfn=f3080 mfn=fddfc nr=4
(d1) pci dev 02:0 bar 14 size 000001000: 0f3084000
(d1) pci dev 03:0 bar 10 size 000000100: 00000c001
(d1) pci dev 05:0 bar 10 size 000000100: 00000c101
(d1) pci dev 06:0 bar 20 size 000000100: 00000c201
(XEN) ioport_map:add: dom1 gport=c200 mport=de00 nr=100
(d1) pci dev 04:0 bar 14 size 000000040: 00000c301
(d1) pci dev 01:2 bar 20 size 000000020: 00000c341
(d1) pci dev 01:1 bar 20 size 000000010: 00000c361
(d1) Multiprocessor initialisation:
(d1)  - CPU0 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU1 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU2 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU3 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1) Testing HVM environment:
(d1)  - REP INSB across page boundaries ... passed
(d1)  - GS base MSRs and SWAPGS ... passed
(d1) Passed 2 of 2 tests
(d1) Writing SMBIOS tables ...
(d1) Loading ROMBIOS ...
(d1) 12124 bytes of ROMBIOS high-memory extensions:
(d1)   Relocating to 0xfc001000-0xfc003f5c ... done
(d1) Creating MP tables ...
(d1) Loading Cirrus VGABIOS ...
(d1) Loading PCI Option ROM ...
(d1)  - Manufacturer: http://ipxe.org
(d1)  - Product name: iPXE
(d1) Option ROMs:
(d1)  c0000-c8fff: VGA BIOS
(d1)  c9000-d97ff: Etherboot ROM
(d1) Loading ACPI ...
(d1) vm86 TSS at fc010080
(d1) BIOS map:
(d1)  f0000-fffff: Main BIOS
(d1) E820 table:
(d1)  [00]: 00000000:00000000 - 00000000:0009e000: RAM
(d1)  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
(d1)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d1)  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d1)  [03]: 00000000:00100000 - 00000000:e0000000: RAM
(d1)  HOLE: 00000000:e0000000 - 00000000:fc000000
(d1)  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d1)  [05]: 00000001:00000000 - 00000001:1fc00000: RAM
(d1) Invoking ROMBIOS ...
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(d1) VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(d1) Bochs BIOS - build: 06/23/99
(d1) $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(d1) Options: apmbios pcibios eltorito PMM
(d1)
(d1) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk ( 250 GBytes)
(d1)
(d1)
(d1)
(d1) Press F12 for boot menu.
(d1)
(d1) Booting from Hard Disk...
(XEN) stdvga.c:151:d1 leaving stdvga
(XEN) irq.c:270: Dom1 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom1 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom1 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom1 PCI link 3 changed 5 -> 0
(XEN) memory_map:remove: dom1 gfn=f3080 mfn=fddfc nr=4
(XEN) memory_map:add: dom1 gfn=f3080 mfn=fddfc nr=4
(XEN) common.c:3525:d0 tracking VRAM f0000 - f0240
(XEN) memory_map:remove: dom1 gfn=e0000 mfn=d0000 nr=10000
(XEN) memory_map:remove: dom1 gfn=f3000 mfn=fdd80 nr=40
(XEN) ioport_map:remove: dom1 gport=c200 mport=de00 nr=100
(XEN) memory_map:add: dom1 gfn=e0000 mfn=d0000 nr=10000
(XEN) memory_map:add: dom1 gfn=f3000 mfn=fdd80 nr=40
(XEN) ioport_map:add: dom1 gport=c200 mport=de00 nr=100
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620dec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620dfc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db84c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db85c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db86c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db87c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db88c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec1c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db89c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec0c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec2c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec4c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec3c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec5c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec6c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec7c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec8c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec9c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168900c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168901c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168902c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168903c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168904c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168905c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168906c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168907c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd0c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd1c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd440, flags =3D 0
((XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addr=
ess =3D 0x117e87e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87f00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87ec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87f80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87fc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87000, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87040, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87100, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e871c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e870c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87300, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87280, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e872c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87380, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e873c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e875c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e874c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e876c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e877c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e879c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e878c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e87cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d000, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d040, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d0c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d100, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d1c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d280, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d2c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d300, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d380, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d4c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d5c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d3c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d6c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d7c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d8c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d9c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7ddc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dbc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dcc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dfc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a966c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a967c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a969c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a968c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0fc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0ec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93000, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93040, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93100, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e930c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e931c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93280, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93300, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93380, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e932c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e934c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e933c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e935c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e936c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e937c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e939c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e938c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93ec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e93f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e93f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e93f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd4c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e93fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd5c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd6c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd7c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd8c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbda00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbda40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd9c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbda80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdb00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdc00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdd00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdc40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdc80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdb40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdb80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdd40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdd80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbddc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdcc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdbc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbde00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbde80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbde40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdf00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdf80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdfc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbdf40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc90c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc91c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc92c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc93c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc94c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc95c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc96c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc97c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc99c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc98c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee00c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee01c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee02c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee03c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee04c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee05c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee06c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee08c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee07c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee09c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0fc0, flags = 0
(XEN) AMD-Vi: Disable: device id = 0x700, domain = 1, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.0 from dom1 to dom0
(XEN) AMD-Vi: Disable: device id = 0x701, domain = 1, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.1 from dom1 to dom0
(XEN) Domain 0 shutdown: rebooting machine.

--=-HokMqAwF1mZkmjLTPer/
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-HokMqAwF1mZkmjLTPer/--



From xen-devel-bounces@lists.xen.org Thu Jan 23 10:05:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:05:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6H9r-0002zm-RC; Thu, 23 Jan 2014 10:05:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W6D1o-0006SD-3O
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 05:40:37 +0000
Received: from [85.158.139.211:60883] by server-4.bemta-5.messagelabs.com id
	6A/4C-26791-35BA0E25; Thu, 23 Jan 2014 05:40:35 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390455624!11417579!1
X-Originating-IP: [98.139.213.152]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1858 invoked from network); 23 Jan 2014 05:40:26 -0000
Received: from nm11-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm11-vm1.bullet.mail.bf1.yahoo.com) (98.139.213.152)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 05:40:26 -0000
Received: from [98.139.212.153] by nm11.bullet.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
Received: from [98.139.211.194] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
Received: from [127.0.0.1] by smtp203.mail.bf1.yahoo.com with NNFMP;
	23 Jan 2014 05:40:24 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390455624; bh=r178wKMTvMCG1w4fLhHm3rcnH8TjvXozn4kCxsHcNew=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=waiLKGIOZ3MNGqaRI+R3gJloaN0gCNHHm03FUqi+QB/vvVmxec3C2fA780+97SvrO9Uu9XzfLMrhPyz4fCGD2L3Vie6N5KKB/AByXJbwN6PxRyvDqTuOmlNUN17EZOGsMlqS+QtRqJ1Dx2MlcQEZTZxsxjzUK0JNz/1Ym1lLQwU=
X-Yahoo-Newman-Id: 41882.48820.bm@smtp203.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 0Hs.7xcVM1n1lhqa7R4Wwf.BSG_h1IpTz38zQ464i8hx6Oi
	nLyeLljOQ2a7AXsC5kejbr5Z2NDuOyG5d23NYoD4cg3Vazq35L4r_lCipsVE
	qQZYupn4H7pTeT5qDX4EI__z7qE30m0V43uSKPcCmI1gcUzNKPNNTTWHjLPI
	w8q_Rr5kKk.frsznylGfIT9pZQBKq4jjLkjkqwN5nDN3RNFqQ5px9E78Y21_
	k55JYtJX3EFv_qnABFgFolUBToN9sSL8jD45usc3KQR6SW63_ua._wRhrYDf
	D8V1Xw_TQ_E.7eMFDgmagMvqTlLZO5HSA507x2TMG04qa6npATIwPfUmkAJK
	ecSSVPiGk5DreUTVdgrNjqa5019Ygu2vkM5apjjqydYmhtPbc9Ubn3tozjUp
	gF6NrYHTK3Rn4czH4UACwqOFdLrmHhF0o3ypfnEO5Sk_RVfoffrRXUHIr9Js
	88F1Zai0XCCNNYTq1U7Kgut1TKIWg7oi7wgapzg_QwNEKUpN3HQ3WMb0XFJF
	tp0fWbPjlucY6EQa1v.UGdW7jnKwUMqDG6iTzY5Gtq8ZS3NgXEeqGf.zpYrc
	L2k4w2zeI4KdWyFZhreZVh_ePWZLoNk647hfRXQw93WeK
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp203.mail.bf1.yahoo.com with SMTP; 22 Jan 2014 21:40:23 -0800 PST
Message-ID: <1390455621.2415.56.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 22 Jan 2014 22:40:21 -0700
In-Reply-To: <52DF8FB90200007800115AEC@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-HokMqAwF1mZkmjLTPer/"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Thu, 23 Jan 2014 10:05:09 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-HokMqAwF1mZkmjLTPer/
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On Wed, 2014-01-22 at 08:30 +0000, Jan Beulich wrote:
> >>> On 21.01.14 at 11:39, "Eric Houby" <ehouby@yahoo.com> wrote:
> > Xen console logs are attached.
> 
> Thanks. This
> 
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... works.
> 
> is the first thing you need to deal with. You may want to check
> how a native kernel handles that (they have some DMI based
> stuff in place to automatically deal with broken firmware, which
> on Xen requires command line overrides like
> "acpi_skip_timer_override").
> 
> That alone may already explain
> 
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
> 
> since these come from the HPET (which may be generating legacy
> timer interrupts):
> 
> (XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
> 
> The final crash could easily be a consequence of the earlier
> problems. In any event you ought to make sure you run up to
> date firmware on your system, as there clearly are issues with it.
> 
> What would also be interesting to know is whether any older Xen
> version ever booted okay on that system.
> 
> Jan
> 

Jan,

The system I am testing with is a GA-890FXA-UD5, and it has been doing
secondary VGA passthrough to Win7/8.1 guests since mid 2010.  I believe
it was XSA-36 that was not friendly to this board, and I have not been
able to get this functionality working since Xen 4.1.3.  In mid 2010,
when I last tested the available BIOSes, I found only one version that
supported IOMMU with Xen.  Since then, one more BIOS version has been
released; it may be worth loading, but my confidence in it fixing this
issue is low.

Adding acpi_skip_timer_override to the xen command line did allow Dom0
to boot, but the video display for Dom0 did not work.  The following
entry was still seen in the logs:

(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address
= 0xfdf8020140, flags = 0x8


Including iommu=no-amd-iommu-perdev-intremap along with
acpi_skip_timer_override cleared the above errors and Dom0 video was
functional.  Attempts to pass a secondary VGA device through to a Win8.1
guest were not successful, with these entries in the xen console:

(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault
address = 0x11620df40, flags = 0
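(As an aside, not from the thread itself: the "device id" in these AMD-Vi messages is the 16-bit PCI requester ID, packed as bus:device.function, which is why 0x700 corresponds to the passed-through device 0000:07:00.0. A small sketch of the decoding:)

```python
def bdf_from_devid(devid):
    """Decode an AMD IOMMU device id into a PCI bus:dev.fn string.

    The 16-bit requester id packs bus (8 bits), device (5 bits),
    and function (3 bits), matching the lspci-style BDF notation.
    """
    bus = (devid >> 8) & 0xFF
    dev = (devid >> 3) & 0x1F
    fn = devid & 0x7
    return f"{bus:02x}:{dev:02x}.{fn}"

print(bdf_from_devid(0x700))  # -> 07:00.0 (the VGA device being passed through)
print(bdf_from_devid(0xa0))   # -> 00:14.0 (the IVHD "Special" device seen earlier)
```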

Attached are two xen console boot logs, one with just
acpi_skip_timer_override in the xen command line and the other with
acpi_skip_timer_override and iommu=no-amd-iommu-perdev-intremap.  The
long file names I used make it clear which is which.  The log with both
xen command options also includes booting the Win8.1 guest with
secondary VGA passthrough.
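(For reference, the Xen command line being described would look roughly like the following GRUB2 menu-entry fragment; file names, versions, and the root device are illustrative, only the two options on the multiboot line are the ones under discussion:)

```shell
# Hypothetical grub.cfg fragment; adjust paths/versions to the installed system.
menuentry 'Xen (timer override + no per-device intremap)' {
    multiboot2 /xen.gz acpi_skip_timer_override iommu=no-amd-iommu-perdev-intremap
    module2    /vmlinuz-3.12.7 root=/dev/mapper/root ro
    module2    /initramfs-3.12.7.img
}
```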

I tried to see how a native kernel handled the timer bug you mentioned
but I could not find a corresponding log when booting 3.12.7 on bare
hardware.  This boot log is attached as barehw.txt; maybe there is
something you can find.

Thanks,

Eric


--=-HokMqAwF1mZkmjLTPer/
Content-Disposition: attachment; filename="barehw.txt"
Content-Type: text/plain; name="barehw.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Jan 22 19:48:20 astar kernel: [    0.000000] Initializing cgroup subsys cpu
Jan 22 19:48:20 astar kernel: [    0.000000] Initializing cgroup subsys cpuacct
Jan 22 19:48:20 astar kernel: [    0.000000] Linux version 3.12.7-300.fc20.x86_64 (mockbuild@bkernel01.phx2.fedoraproject.org) (gcc version 4.8.2 20131212 (Red Hat 4.8.2-7) (GCC) ) #1 SMP Fri Jan 10 15:35:31 UTC 2014
Jan 22 19:48:20 astar kernel: [    0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.12.7-300.fc20.x86_64 root=/dev/mapper/f20_astar-root ro rd.lvm.lv=f20_astar/root rd.lvm.lv=f20_astar/swap vconsole.font=latarcyrheb-sun16 rhgb quiet
Jan 22 19:48:20 astar kernel: [    0.000000] e820: BIOS-provided physical RAM map:
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000096fff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000afceffff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afcf0000-0x00000000afcf0fff] ACPI NVS
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afcf1000-0x00000000afcfffff] ACPI data
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000afd00000-0x00000000afdfffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000ffffffff] reserved
Jan 22 19:48:20 astar kernel: [    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000024fffffff] usable
Jan 22 19:48:20 astar kernel: [    0.000000] NX (Execute Disable) protection: active
Jan 22 19:48:20 astar kernel: [    0.000000] SMBIOS 2.4 present.
Jan 22 19:48:20 astar kernel: [    0.000000] No AGP bridge found
Jan 22 19:48:20 astar kernel: [    0.000000] e820: last_pfn = 0x250000 max_arch_pfn = 0x400000000
Jan 22 19:48:20 astar kernel: [    0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Jan 22 19:48:20 astar kernel: [    0.000000] e820: last_pfn = 0xafcf0 max_arch_pfn = 0x400000000
Jan 22 19:48:20 astar kernel: [    0.000000] found SMP MP-table at [mem 0x000f4670-0x000f467f] mapped at [ffff8800000f4670]
Jan 22 19:48:20 astar kernel: [    0.000000] Using GB pages for direct mapping
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x24fe00000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x24c000000-0x24fdfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x200000000-0x24bffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x00100000-0xafceffff]
Jan 22 19:48:20 astar kernel: [    0.000000] init_memory_mapping: [mem 0x100000000-0x1ffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] RAMDISK: [mem 0x3697c000-0x374b5fff]
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: RSDP 00000000000f6080 00014 (v00 GBT   )
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: RSDT 00000000afcf1000 00040 (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: FACP 00000000afcf1080 00074 (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: DSDT 00000000afcf1100 07BE1 (v01 GBT    GBTUACPI 00001000 MSFT 03000000)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: FACS 00000000afcf0000 00040
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: HPET 00000000afcf8dc0 00038 (v01 GBT    GBTUACPI 42302E31 GBTU 00000098)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: MCFG 00000000afcf8e00 0003C (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: MATS 00000000afcf8e40 00034 (v01 GBT             00000000      00000000)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: TAMG 00000000afcf8eb0 0048A (v01 GBT    GBT   B0 5455312E BG?? 53450101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: APIC 00000000afcf8d00 000BC (v01 GBT    GBTUACPI 42302E31 GBTU 01010101)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: IVRS 00000000afcf93b0 000D8 (v01  AMD     RD890S 00202031 AMD  00000000)
Jan 22 19:48:20 astar kernel: [    0.000000] Scanning NUMA topology in Northbridge 24
Jan 22 19:48:20 astar kernel: [    0.000000] No NUMA configuration found
Jan 22 19:48:20 astar kernel: [    0.000000] Faking a node at [mem 0x0000000000000000-0x000000024fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Initmem setup node 0 [mem 0x00000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   NODE_DATA [mem 0x24ffea000-0x24fffdfff]
Jan 22 19:48:20 astar kernel: [    0.000000] Zone ranges:
Jan 22 19:48:20 astar kernel: [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   Normal   [mem 0x100000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Movable zone start for each node
Jan 22 19:48:20 astar kernel: [    0.000000] Early memory node ranges
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x00001000-0x00096fff]
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x00100000-0xafceffff]
Jan 22 19:48:20 astar kernel: [    0.000000]   node   0: [mem 0x100000000-0x24fffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: PM-Timer IO Port: 0x808
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
Jan 22 19:48:20 astar kernel: [    0.000000] IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
Jan 22 19:48:20 astar kernel: [    0.000000] Using ACPI (MADT) for SMP configuration information
Jan 22 19:48:20 astar kernel: [    0.000000] ACPI: HPET id: 0x10b9a201 base: 0xfed00000
Jan 22 19:48:20 astar kernel: [    0.000000] smpboot: Allowing 8 CPUs, 2 hotplug CPUs
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x00097000-0x0009ffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafcf0000-0xafcf0fff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafcf1000-0xafcfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafd00000-0xafdfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xafe00000-0xdfffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xe0000000-0xefffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xf0000000-0xfebfffff]
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xfec00000-0xffffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] e820: [mem 0xafe00000-0xdfffffff] available for PCI devices
Jan 22 19:48:20 astar kernel: [    0.000000] Booting paravirtualized kernel on bare hardware
Jan 22 19:48:20 astar kernel: [    0.000000] setup_percpu: NR_CPUS:1024 nr_cpumask_bits:1024 nr_cpu_ids:8 nr_node_ids:1
Jan 22 19:48:20 astar kernel: [    0.000000] PERCPU: Embedded 29 pages/cpu @ffff88024fc00000 s86784 r8192 d23808 u262144
Jan 22 19:48:20 astar kernel: [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 2063485
Jan 22 19:48:20 astar kernel: [    0.000000] Policy zone: Normal
Jan 22 19:48:20 astar kernel: [    0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.12.7-300.fc20.x86_64 root=/dev/mapper/f20_astar-root ro rd.lvm.lv=f20_astar/root rd.lvm.lv=f20_astar/swap vconsole.font=latarcyrheb-sun16 rhgb quiet
Jan 22 19:48:20 astar kernel: [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
Jan 22 19:48:20 astar kernel: [    0.000000] Checking aperture...
Jan 22 19:48:20 astar kernel: [    0.000000] No AGP bridge found
Jan 22 19:48:20 astar kernel: [    0.000000] Node 0: aperture @ 7822000000 size 32 MB
Jan 22 19:48:20 astar kernel: [    0.000000] Aperture beyond 4GB. Ignoring.
Jan 22 19:48:20 astar kernel: [    0.000000] Your BIOS doesn't leave a aperture memory hole
Jan 22 19:48:20 astar kernel: [    0.000000] Please enable the IOMMU option in the BIOS setup
Jan 22 19:48:20 astar kernel: [    0.000000] This costs you 64 MB of RAM
Jan 22 19:48:20 astar kernel: [    0.000000] Mapping aperture over 65536 KB of RAM @ a4000000
Jan 22 19:48:20 astar kernel: [    0.000000] PM: Registered nosave memory: [mem 0xa4000000-0xa7ffffff]
Jan 22 19:48:20 astar kernel: [    0.000000] Memory: 8093132K/8385048K available (6634K kernel code, 1044K rwdata, 2944K rodata, 1444K init, 1672K bss, 291916K reserved)
Jan 22 19:48:20 astar kernel: [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 19:48:20 astar kernel: [    0.000000] Hierarchical RCU implementation.
Jan 22 19:48:20 astar kernel: [    0.000000] 	RCU restricting CPUs from NR_CPUS=1024 to nr_cpu_ids=8.
Jan 22 19:48:20 astar kernel: [    0.000000] NR_IRQS:65792 nr_irqs:744 16
Jan 22 19:48:20 astar kernel: [    0.000000] Console: colour VGA+ 80x25
Jan 22 19:48:20 astar kernel: [    0.000000] console [tty0] enabled
Jan 22 19:48:20 astar kernel: [    0.000000] allocated 33554432 bytes of page_cgroup
Jan 22 19:48:20 astar kernel: [    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
Jan 22 19:48:20 astar kernel: [    0.000000] tsc: Fast TSC calibration using PIT
Jan 22 19:48:20 astar kernel: [    0.001000] tsc: Detected 2813.003 MHz processor
Jan 22 19:48:20 astar kernel: [    0.000003] Calibrating delay loop (skipped), value calculated using timer frequency.. 5626.00 BogoMIPS (lpj=2813003)
Jan 22 19:48:20 astar kernel: [    0.000006] pid_max: default: 32768 minimum: 301
Jan 22 19:48:20 astar kernel: [    0.000030] Security Framework initialized
Jan 22 19:48:20 astar kernel: [    0.000038] SELinux:  Initializing.
Jan 22 19:48:20 astar kernel: [    0.000567] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Jan 22 19:48:20 astar kernel: [    0.003040] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
Jan 22 19:48:20 astar kernel: [    0.004167] Mount-cache hash table entries: 256
Jan 22 19:48:20 astar kernel: [    0.004352] Initializing cgroup subsys memory
Jan 22 19:48:20 astar kernel: [    0.004364] Initializing cgroup subsys devices
Jan 22 19:48:20 astar kernel: [    0.004366] Initializing cgroup subsys freezer
Jan 22 19:48:20 astar kernel: [    0.004368] Initializing cgroup subsys net_cls
Jan 22 19:48:20 astar kernel: [    0.004369] Initializing cgroup subsys blkio
Jan 22 19:48:20 astar kernel: [    0.004371] Initializing cgroup subsys perf_event
Jan 22 19:48:20 astar kernel: [    0.004373] Initializing cgroup subsys hugetlb
Jan 22 19:48:20 astar kernel: [    0.004393] CPU: Physical Processor ID: 0
Jan 22 19:48:20 astar kernel: [    0.004394] CPU: Processor Core ID: 0
Jan 22 19:48:20 astar kernel: [    0.004396] mce: CPU supports 6 MCE banks
Jan 22 19:48:20 astar kernel: [    0.004400] LVT offset 0 assigned for vector 0xf9
Jan 22 19:48:20 astar kernel: [    0.004404] process: using AMD E400 aware idle routine
Jan 22 19:48:20 astar kernel: [    0.004406] Last level iTLB entries: 4KB 512, 2MB 16, 4MB 8
Jan 22 19:48:20 astar kernel: [    0.004406] Last level dTLB entries: 4KB 512, 2MB 128, 4MB 64
Jan 22 19:48:20 astar kernel: [    0.004406] tlb_flushall_shift: 4
Jan 22 19:48:20 astar kernel: [    0.004478] Freeing SMP alternatives memory: 24K (ffffffff81e70000 - ffffffff81e76000)
Jan 22 19:48:20 astar kernel: [    0.005123] ACPI: Core revision 20130725
Jan 22 19:48:20 astar kernel: [    0.006629] ACPI: All ACPI Tables successfully acquired
Jan 22 19:48:20 astar kernel: [    0.052536] ftrace: allocating 25533 entries in 100 pages
Jan 22 19:48:20 astar kernel: [    0.455073] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
Jan 22 19:48:20 astar kernel: [    0.455118] AMD-Vi: Disabling interrupt remapping
Jan 22 19:48:20 astar kernel: [    0.455509] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 22 19:48:20 astar kernel: [    0.465511] smpboot: CPU0: AMD Phenom(tm) II X6 1055T Processor (fam: 10, model: 0a, stepping: 00)
Jan 22 19:48:20 astar kernel: [    0.566608] Performance Events: AMD PMU driver.
Jan 22 19:48:20 astar kernel: [    0.566611] ... version:                0
Jan 22 19:48:20 astar kernel: [    0.566612] ... bit width:              48
Jan 22 19:48:20 astar kernel: [    0.566612] ... generic registers:      4
Jan 22 19:48:20 astar kernel: [    0.566613] ... value mask:             0000ffffffffffff
Jan 22 19:48:20 astar kernel: [    0.566614] ... max period:             00007fffffffffff
Jan 22 19:48:20 astar kernel: [    0.566615] ... fixed-purpose events:   0
Jan 22 19:48:20 astar kernel: [    0.566616] ... event mask:             000000000000000f
Jan 22 19:48:20 astar kernel: [    0.567541] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
Jan 22 19:48:20 astar kernel: [    0.580768] process: System has AMD C1E enabled
Jan 22 19:48:20 astar kernel: [    0.580781] process: Switch to broadcast mode on CPU1
Jan 22 19:48:20 astar kernel: [    0.593957] process: Switch to broadcast mode on CPU2
Jan 22 19:48:20 astar kernel: [    0.607173] process: Switch to broadcast mode on CPU3
Jan 22 19:48:20 astar kernel: [    0.567592] smpboot: Booting Node   0, Processors  #   1 #   2 #   3 #   4 #   5 OK
Jan 22 19:48:20 astar kernel: [    0.620386] process: Switch to broadcast mode on CPU4
Jan 22 19:48:20 astar kernel: [    0.633532] Brought up 6 CPUs
Jan 22 19:48:20 astar kernel: [    0.633535] smpboot: Total of 6 processors activated (33756.03 BogoMIPS)
Jan 22 19:48:20 astar kernel: [    0.640293] process: Switch to broadcast mode on CPU5
Jan 22 19:48:20 astar kernel: [    0.640409] process: Switch to broadcast mode on CPU0
Jan 22 19:48:20 astar kernel: [    0.640642] devtmpfs: initialized
Jan 22 19:48:20 astar kernel: [    0.640915] PM: Registering ACPI NVS region [mem 0xafcf0000-0xafcf0fff] (4096 bytes)
Jan 22 19:48:20 astar kernel: [    0.641765] atomic64 test passed for x86-64 platform with CX8 and with SSE
Jan 22 19:48:20 astar kernel: [    0.641767] pinctrl core: initialized pinctrl subsystem
Jan 22 19:48:20 astar kernel: [    0.641848] RTC time:  2:48:16, date: 01/23/14
Jan 22 19:48:20 astar kernel: [    0.641892] NET: Registered protocol family 16
Jan 22 19:48:20 astar kernel: [    0.641968] cpuidle: using governor menu
Jan 22 19:48:20 astar kernel: [    0.641974] TOM: 00000000b0000000 aka 2816M
Jan 22 19:48:20 astar kernel: [    0.641985] TOM2: 0000000250000000 aka 9472M
Jan 22 19:48:20 astar kernel: [    0.642084] ACPI: bus type PCI registered
Jan 22 19:48:20 astar kernel: [    0.642086] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 19:48:20 astar kernel: [    0.642132] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 22 19:48:20 astar kernel: [    0.642134] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jan 22 19:48:20 astar kernel: [    0.650135] PCI: Using configuration type 1 for base access
Jan 22 19:48:20 astar kernel: [    0.650283] mtrr: your CPUs had inconsistent variable MTRR settings
Jan 22 19:48:20 astar kernel: [    0.650284] mtrr: probably your BIOS does not setup all CPUs.
Jan 22 19:48:20 astar kernel: [    0.650285] mtrr: corrected configuration.
Jan 22 19:48:20 astar kernel: [    0.651018] bio: create slab <bio-0> at 0
Jan 22 19:48:20 astar kernel: [    0.651151] ACPI: Added _OSI(Module Device)
Jan 22 19:48:20 astar kernel: [    0.651153] ACPI: Added _OSI(Processor Device)
Jan 22 19:48:20 astar kernel: [    0.651154] ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 22 19:48:20 astar kernel: [    0.651155] ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 19:48:20 astar kernel: [    0.655764] ACPI BIOS Warning (bug): Incorrect checksum in table [TAMG] - 0xBE, should be 0xBD (20130725/tbprint-204)
Jan 22 19:48:20 astar kernel: [    0.655884] ACPI: Interpreter enabled
Jan 22 19:48:20 astar kernel: [    0.655894] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20130725/hwxface-571)
Jan 22 19:48:20 astar kernel: [    0.655897] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20130725/hwxface-571)
Jan 22 19:48:20 astar kernel: [    0.655905] ACPI: (supports S0 S1 S4 S5)
Jan 22 19:48:20 astar kernel: [    0.655907] ACPI: Using IOAPIC for interrupt routing
Jan 22 19:48:20 astar kernel: [    0.655933] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 19:48:20 astar kernel: [    0.656023] ACPI: No dock devices found.
Jan 22 19:48:20 astar kernel: [    0.713380] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 19:48:20 astar kernel: [    0.713385] acpi PNP0A03:00: ACPI _OSC support notification failed, disabling PCIe ASPM
Jan 22 19:48:20 astar kernel: [    0.713387] acpi PNP0A03:00: Unable to request _OSC control (_OSC support mask: 0x08)
Jan 22 19:48:20 astar kernel: [    0.713587] PCI host bridge to bus 0000:00
Jan 22 19:48:20 astar kernel: [    0.713589] pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 19:48:20 astar kernel: [    0.713591] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
Jan 22 19:48:20 astar kernel: [    0.713592] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
Jan 22 19:48:20 astar kernel: [    0.713594] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
Jan 22 19:48:20 astar kernel: [    0.713596] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff]
Jan 22 19:48:20 astar kernel: [    0.713597] pci_bus 0000:00: root bus resource [mem 0xaff00000-0xfebfffff]
Jan 22 19:48:20 astar kernel: [    0.713867] pci 0000:00:02.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.713961] pci 0000:00:04.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714055] pci 0000:00:05.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714146] pci 0000:00:06.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714236] pci 0000:00:07.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714328] pci 0000:00:09.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714408] pci 0000:00:0b.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714653] pci 0000:00:12.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714796] pci 0000:00:12.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.714931] pci 0000:00:13.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715075] pci 0000:00:13.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715311] pci 0000:00:14.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715479] pci 0000:00:14.4: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715577] pci 0000:00:14.5: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715678] pci 0000:00:16.0: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.715833] pci 0000:00:16.2: System wakeup disabled by ACPI
Jan 22 19:48:20 astar kernel: [    0.717894] pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 22 19:48:20 astar kernel: [    0.719892] pci 0000:00:04.0: PCI bridge to [bus 02]
Jan 22 19:48:20 astar kernel: [    0.720018] pci 0000:00:05.0: PCI bridge to [bus 03]
Jan 22 19:48:20 astar kernel: [    0.720102] pci 0000:00:06.0: PCI bridge to [bus 04]
Jan 22 19:48:20 astar kernel: [    0.721891] pci 0000:00:07.0: PCI bridge to [bus 05]
Jan 22 19:48:20 astar kernel: [    0.723891] pci 0000:00:09.0: PCI bridge to [bus 06]
Jan 22 19:48:20 astar kernel: [    0.725889] pci 0000:00:0b.0: PCI bridge to [bus 07]
Jan 22 19:48:20 astar kernel: [    0.726041] pci 0000:00:14.4: PCI bridge to [bus 08] (subtractive decode)
Jan 22 19:48:20 astar kernel: [    0.744096] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744138] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744173] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744206] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744240] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744273] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744306] ACPI: PCI Interrupt Link [LNK0] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744339] ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 10 11) *0
Jan 22 19:48:20 astar kernel: [    0.744902] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
Jan 22 19:48:20 astar kernel: [    0.744905] vgaarb: device added: PCI:0000:07:00.0,decodes=io+mem,owns=none,locks=none
Jan 22 19:48:20 astar kernel: [    0.744906] vgaarb: loaded
Jan 22 19:48:20 astar kernel: [    0.744907] vgaarb: bridge control possible 0000:07:00.0
Jan 22 19:48:20 astar kernel: [    0.744908] vgaarb: bridge control possible 0000:01:00.0
Jan 22 19:48:20 astar kernel: [    0.744971] SCSI subsystem initialized
Jan 22 19:48:20 astar kernel: [    0.745086] ACPI: bus type USB registered
Jan 22 19:48:20 astar kernel: [    0.745099] usbcore: registered new interface driver usbfs
Jan 22 19:48:20 astar kernel: [    0.745105] usbcore: registered new interface driver hub
Jan 22 19:48:20 astar kernel: [    0.745126] usbcore: registered new device driver usb
Jan 22 19:48:20 astar kernel: [    0.745195] PCI: Using ACPI for IRQ routing
Jan 22 19:48:20 astar kernel: [    0.751209] pci 0000:00:00.0: no compatible bridge window for [mem 0xe0000000-0xffffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.751339] NetLabel: Initializing
Jan 22 19:48:20 astar kernel: [    0.751340] NetLabel:  domain hash size = 128
Jan 22 19:48:20 astar kernel: [    0.751340] NetLabel:  protocols = UNLABELED CIPSOv4
Jan 22 19:48:20 astar kernel: [    0.751348] NetLabel:  unlabeled traffic allowed by default
Jan 22 19:48:20 astar kernel: [    0.751383] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 22 19:48:20 astar kernel: [    0.751385] hpet0: 3 comparators, 32-bit 14.318180 MHz counter
Jan 22 19:48:20 astar kernel: [    0.753466] Switched to clocksource hpet
Jan 22 19:48:20 astar kernel: [    0.757598] pnp: PnP ACPI init
Jan 22 19:48:20 astar kernel: [    0.757606] ACPI: bus type PNP registered
Jan 22 19:48:20 astar kernel: [    0.757672] system 00:00: [io  0x04d0-0x04d1] has been reserved
Jan 22 19:48:20 astar kernel: [    0.757674] system 00:00: [io  0x0220-0x0225] has been reserved
Jan 22 19:48:20 astar kernel: [    0.757675] system 00:00: [io  0x0290-0x0294] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777704] pnp 00:01: disabling [mem 0x00000000-0x00000fff window] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.777721] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:01:00.0 BAR 6 [mem 0x00000000-0x0007ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777725] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:05:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777728] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:06:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777730] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:07:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
Jan 22 19:48:20 astar kernel: [    0.777751] system 00:01: [io  0x0900-0x091f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777752] system 00:01: [io  0x0228-0x022f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777754] system 00:01: [io  0x040b] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777755] system 00:01: [io  0x04d6] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777757] system 00:01: [io  0x0c00-0x0c01] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777758] system 00:01: [io  0x0c14] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777759] system 00:01: [io  0x0c50-0x0c52] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777761] system 00:01: [io  0x0c6c-0x0c6d] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777762] system 00:01: [io  0x0c6f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777763] system 00:01: [io  0x0cd0-0x0cd1] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777765] system 00:01: [io  0x0cd2-0x0cd3] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777766] system 00:01: [io  0x0cd4-0x0cdf] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777767] system 00:01: [io  0x0800-0x08fe] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.777769] system 00:01: [io  0x0a10-0x0a17] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777770] system 00:01: [io  0x0b00-0x0b0f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777771] system 00:01: [io  0x0b10-0x0b1f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777773] system 00:01: [io  0x0b20-0x0b3f] has been reserved
Jan 22 19:48:20 astar kernel: [    0.777775] system 00:01: [mem 0xfee00400-0xfee00fff window] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778296] system 00:07: [mem 0xe0000000-0xefffffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778387] pnp 00:08: disabling [mem 0x000d4400-0x000d7fff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778389] pnp 00:08: disabling [mem 0x000f0000-0x000f7fff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778391] pnp 00:08: disabling [mem 0x000f8000-0x000fbfff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778393] pnp 00:08: disabling [mem 0x000fc000-0x000fffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778394] pnp 00:08: disabling [mem 0x00000000-0x0009ffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778396] pnp 00:08: disabling [mem 0x00100000-0xafceffff] because it overlaps 0000:00:00.0 BAR 3 [mem 0x00000000-0x1fffffff 64bit]
Jan 22 19:48:20 astar kernel: [    0.778428] system 00:08: [mem 0xafcf0000-0xafcfffff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778429] system 00:08: [mem 0xffff0000-0xffffffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778431] system 00:08: [mem 0xafd00000-0xafdfffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778432] system 00:08: [mem 0xafe00000-0xafefffff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778434] system 00:08: [mem 0xfec00000-0xfec00fff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778435] system 00:08: [mem 0xfee00000-0xfee00fff] could not be reserved
Jan 22 19:48:20 astar kernel: [    0.778437] system 00:08: [mem 0xfff80000-0xfffeffff] has been reserved
Jan 22 19:48:20 astar kernel: [    0.778451] pnp: PnP ACPI: found 9 devices
Jan 22 19:48:20 astar kernel: [    0.778452] ACPI: bus type PNP unregistered
Jan 22 19:48:20 astar kernel: [    0.784812] pci 0000:01:00.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784815] pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 22 19:48:20 astar kernel: [    0.784817] pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Jan 22 19:48:20 astar kernel: [    0.784820] pci 0000:00:02.0:   bridge window [mem 0xfb000000-0xfcffffff]
Jan 22 19:48:20 astar kernel: [    0.784822] pci 0000:00:02.0:   bridge window [mem 0xb0000000-0xcfffffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784825] pci 0000:00:04.0: PCI bridge to [bus 02]
Jan 22 19:48:20 astar kernel: [    0.784826] pci 0000:00:04.0:   bridge window [io  0xb000-0xbfff]
Jan 22 19:48:20 astar kernel: [    0.784829] pci 0000:00:04.0:   bridge window [mem 0xfdc00000-0xfdcfffff]
Jan 22 19:48:20 astar kernel: [    0.784831] pci 0000:00:04.0:   bridge window [mem 0xfdb00000-0xfdbfffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784833] pci 0000:00:05.0: PCI bridge to [bus 03]
Jan 22 19:48:20 astar kernel: [    0.784835] pci 0000:00:05.0:   bridge window [io  0xa000-0xafff]
Jan 22 19:48:20 astar kernel: [    0.784837] pci 0000:00:05.0:   bridge window [mem 0xfda00000-0xfdafffff]
Jan 22 19:48:20 astar kernel: [    0.784839] pci 0000:00:05.0:   bridge window [mem 0xfd900000-0xfd9fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784841] pci 0000:00:06.0: PCI bridge to [bus 04]
Jan 22 19:48:20 astar kernel: [    0.784843] pci 0000:00:06.0:   bridge window [io  0x9000-0x9fff]
Jan 22 19:48:20 astar kernel: [    0.784845] pci 0000:00:06.0:   bridge window [mem 0xfd800000-0xfd8fffff]
Jan 22 19:48:20 astar kernel: [    0.784847] pci 0000:00:06.0:   bridge window [mem 0xfd700000-0xfd7fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784850] pci 0000:05:00.0: BAR 6: assigned [mem 0xfd300000-0xfd31ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784851] pci 0000:00:07.0: PCI bridge to [bus 05]
Jan 22 19:48:20 astar kernel: [    0.784853] pci 0000:00:07.0:   bridge window [io  0x8000-0x8fff]
Jan 22 19:48:20 astar kernel: [    0.784855] pci 0000:00:07.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Jan 22 19:48:20 astar kernel: [    0.784857] pci 0000:00:07.0:   bridge window [mem 0xfd300000-0xfd3fffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784860] pci 0000:06:00.0: BAR 6: assigned [mem 0xfde00000-0xfde1ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784861] pci 0000:00:09.0: PCI bridge to [bus 06]
Jan 22 19:48:20 astar kernel: [    0.784863] pci 0000:00:09.0:   bridge window [io  0xe000-0xefff]
Jan 22 19:48:20 astar kernel: [    0.784865] pci 0000:00:09.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Jan 22 19:48:20 astar kernel: [    0.784867] pci 0000:00:09.0:   bridge window [mem 0xfde00000-0xfdefffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784870] pci 0000:07:00.0: BAR 6: assigned [mem 0xfdd00000-0xfdd1ffff pref]
Jan 22 19:48:20 astar kernel: [    0.784871] pci 0000:00:0b.0: PCI bridge to [bus 07]
Jan 22 19:48:20 astar kernel: [    0.784873] pci 0000:00:0b.0:   bridge window [io  0xd000-0xdfff]
Jan 22 19:48:20 astar kernel: [    0.784875] pci 0000:00:0b.0:   bridge window [mem 0xfdd00000-0xfddfffff]
Jan 22 19:48:20 astar kernel: [    0.784877] pci 0000:00:0b.0:   bridge window [mem 0xd0000000-0xdfffffff 64bit pref]
Jan 22 19:48:20 astar kernel: [    0.784879] pci 0000:00:14.4: PCI bridge to [bus 08]
Jan 22 19:48:20 astar kernel: [    0.784881] pci 0000:00:14.4:   bridge window [io  0x7000-0x7fff]
Jan 22 19:48:20 astar kernel: [    0.784885] pci 0000:00:14.4:   bridge window [mem 0xfd600000-0xfd6fffff]
Jan 22 19:48:20 astar kernel: [    0.784887] pci 0000:00:14.4:   bridge window [mem 0xfd500000-0xfd5fffff pref]
Jan 22 19:48:20 astar kernel: [    0.784968] NET: Registered protocol family 2
Jan 22 19:48:20 astar kernel: [    0.785154] TCP established hash table entries: 65536 (order: 8, 1048576 bytes)
Jan 22 19:48:20 astar kernel: [    0.785527] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Jan 22 19:48:20 astar kernel: [    0.785814] TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 19:48:20 astar kernel: [    0.785855] TCP: reno registered
Jan 22 19:48:20 astar kernel: [    0.785866] UDP hash table entries: 4096 (order: 5, 131072 bytes)
Jan 22 19:48:20 astar kernel: [    0.785916] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes)
Jan 22 19:48:20 astar kernel: [    0.786014] NET: Registered protocol family 1
Jan 22 19:48:20 astar kernel: [    1.042151] Unpacking initramfs...
Jan 22 19:48:20 astar kernel: [    1.189582] Freeing initrd memory: 11496K (ffff88003697c000 - ffff8800374b6000)
Jan 22 19:48:20 astar kernel: [    1.271913] pci 0000:00:00.2: can't derive routing for PCI INT A
Jan 22 19:48:20 astar kernel: [    1.271916] pci 0000:00:00.2: PCI INT A: no GSI
Jan 22 19:48:20 astar kernel: [    1.272057] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
Jan 22 19:48:20 astar kernel: [    1.281360] AMD-Vi: Lazy IO/TLB flushing enabled
Jan 22 19:48:20 astar kernel: [    1.362449] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 19:48:20 astar kernel: [    1.362451] software IO TLB [mem 0xabcf0000-0xafcf0000] (64MB) mapped at [ffff8800abcf0000-ffff8800afceffff]
Jan 22 19:48:20 astar kernel: [    1.362746] LVT offset 1 assigned for vector 0x400
Jan 22 19:48:20 astar kernel: [    1.362751] IBS: LVT offset 1 assigned
Jan 22 19:48:20 astar kernel: [    1.362780] perf: AMD IBS detected (0x0000001f)
Jan 22 19:48:20 astar kernel: [    1.363479] Initialise system trusted keyring
Jan 22 19:48:20 astar kernel: [    1.363523] audit: initializing netlink socket (disabled)
Jan 22 19:48:20 astar kernel: [    1.363538] type=2000 audit(1390445296.812:1): initialized
Jan 22 19:48:20 astar kernel: [    1.382633] HugeTLB registered 2 MB page size, pre-allocated 0 pages
Jan 22 19:48:20 astar kernel: [    1.383813] zbud: loaded
Jan 22 19:48:20 astar kernel: [    1.383988] VFS: Disk quotas dquot_6.5.2
Jan 22 19:48:20 astar kernel: [    1.384030] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 19:48:20 astar kernel: [    1.384377] msgmni has been set to 15829
Jan 22 19:48:20 astar kernel: [    1.384447] Key type big_key registered
Jan 22 19:48:20 astar kernel: [    1.385657] alg: No test for stdrng (krng)
Jan 22 19:48:20 astar kernel: [    1.385668] NET: Registered protocol family 38
Jan 22 19:48:20 astar kernel: [    1.385670] Key type asymmetric registered
Jan 22 19:48:20 astar kernel: [    1.385671] Asymmetric key parser 'x509' registered
Jan 22 19:48:20 astar kernel: [    1.385699] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
Jan 22 19:48:20 astar kernel: [    1.385756] io scheduler noop registered
Jan 22 19:48:20 astar kernel: [    1.385757] io scheduler deadline registered
Jan 22 19:48:20 astar kernel: [    1.385780] io scheduler cfq registered (default)
Jan 22 19:48:20 astar kernel: [    1.386600] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Jan 22 19:48:20 astar kernel: [    1.386615] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Jan 22 19:48:20 astar kernel: [    1.386697] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
Jan 22 19:48:20 astar kernel: [    1.386700] ACPI: Power Button [PWRB]
Jan 22 19:48:20 astar kernel: [    1.386733] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jan 22 19:48:20 astar kernel: [    1.386734] ACPI: Power Button [PWRF]
Jan 22 19:48:20 astar kernel: [    1.386758] ACPI: processor limited to max C-state 1
Jan 22 19:48:20 astar kernel: [    1.386967] GHES: HEST is not enabled!
Jan 22 19:48:20 astar kernel: [    1.387039] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 19:48:20 astar kernel: [    1.407599] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 19:48:20 astar kernel: [    1.407970] Non-volatile memory driver v1.3
Jan 22 19:48:20 astar kernel: [    1.407972] Linux agpgart interface v0.103
Jan 22 19:48:20 astar kernel: [    1.408267] ahci 0000:00:11.0: AHCI 0001.0200 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
Jan 22 19:48:20 astar kernel: [    1.408270] ahci 0000:00:11.0: flags: 64bit ncq sntf ilck pm led clo pmp pio slum part 
Jan 22 19:48:20 astar kernel: [    1.409012] scsi0 : ahci
Jan 22 19:48:20 astar kernel: [    1.409073] scsi1 : ahci
Jan 22 19:48:20 astar kernel: [    1.409120] scsi2 : ahci
Jan 22 19:48:20 astar kernel: [    1.409168] scsi3 : ahci
Jan 22 19:48:20 astar kernel: [    1.409214] scsi4 : ahci
Jan 22 19:48:20 astar kernel: [    1.409261] scsi5 : ahci
Jan 22 19:48:20 astar kernel: [    1.409282] ata1: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff100 irq 41
Jan 22 19:48:20 astar kernel: [    1.409285] ata2: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff180 irq 41
Jan 22 19:48:20 astar kernel: [    1.409287] ata3: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff200 irq 41
Jan 22 19:48:20 astar kernel: [    1.409288] ata4: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff280 irq 41
Jan 22 19:48:20 astar kernel: [    1.409290] ata5: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff300 irq 41
Jan 22 19:48:20 astar kernel: [    1.409292] ata6: SATA max UDMA/133 abar m1024@0xfdfff000 port 0xfdfff380 irq 41
Jan 22 19:48:20 astar kernel: [    1.409424] libphy: Fixed MDIO Bus: probed
Jan 22 19:48:20 astar kernel: [    1.409499] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jan 22 19:48:20 astar kernel: [    1.409502] ehci-pci: EHCI PCI platform driver
Jan 22 19:48:20 astar kernel: [    1.409643] ehci-pci 0000:00:12.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.409697] ehci-pci 0000:00:12.2: new USB bus registered, assigned bus number 1
Jan 22 19:48:20 astar kernel: [    1.409706] ehci-pci 0000:00:12.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.409716] ehci-pci 0000:00:12.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.409750] ehci-pci 0000:00:12.2: irq 17, io mem 0xfdffd000
Jan 22 19:48:20 astar kernel: [    1.415508] ehci-pci 0000:00:12.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.415623] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.415629] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.415635] usb usb1: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.415640] usb usb1: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.415645] usb usb1: SerialNumber: 0000:00:12.2
Jan 22 19:48:20 astar kernel: [    1.415788] hub 1-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.415792] hub 1-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.416013] ehci-pci 0000:00:13.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.416076] ehci-pci 0000:00:13.2: new USB bus registered, assigned bus number 2
Jan 22 19:48:20 astar kernel: [    1.416079] ehci-pci 0000:00:13.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.416087] ehci-pci 0000:00:13.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.416114] ehci-pci 0000:00:13.2: irq 17, io mem 0xfdffb000
Jan 22 19:48:20 astar kernel: [    1.421502] ehci-pci 0000:00:13.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.421609] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.421615] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.421620] usb usb2: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.421625] usb usb2: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.421630] usb usb2: SerialNumber: 0000:00:13.2
Jan 22 19:48:20 astar kernel: [    1.421777] hub 2-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.421781] hub 2-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.422010] ehci-pci 0000:00:16.2: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.422057] ehci-pci 0000:00:16.2: new USB bus registered, assigned bus number 3
Jan 22 19:48:20 astar kernel: [    1.422059] ehci-pci 0000:00:16.2: applying AMD SB700/SB800/Hudson-2/3 EHCI dummy qh workaround
Jan 22 19:48:20 astar kernel: [    1.422068] ehci-pci 0000:00:16.2: debug port 1
Jan 22 19:48:20 astar kernel: [    1.422092] ehci-pci 0000:00:16.2: irq 17, io mem 0xfdff8000
Jan 22 19:48:20 astar kernel: [    1.427501] ehci-pci 0000:00:16.2: USB 2.0 started, EHCI 1.00
Jan 22 19:48:20 astar kernel: [    1.427618] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.427624] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.427629] usb usb3: Product: EHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.427634] usb usb3: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ehci_hcd
Jan 22 19:48:20 astar kernel: [    1.427639] usb usb3: SerialNumber: 0000:00:16.2
Jan 22 19:48:20 astar kernel: [    1.427781] hub 3-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.427786] hub 3-0:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.427908] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jan 22 19:48:20 astar kernel: [    1.427910] ohci-pci: OHCI PCI platform driver
Jan 22 19:48:20 astar kernel: [    1.428024] ohci-pci 0000:00:12.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.428080] ohci-pci 0000:00:12.0: new USB bus registered, assigned bus number 4
Jan 22 19:48:20 astar kernel: [    1.428104] ohci-pci 0000:00:12.0: irq 18, io mem 0xfdffe000
Jan 22 19:48:20 astar kernel: [    1.482531] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.482533] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.482534] usb usb4: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.482536] usb usb4: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.482537] usb usb4: SerialNumber: 0000:00:12.0
Jan 22 19:48:20 astar kernel: [    1.482689] hub 4-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.482696] hub 4-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.482971] ohci-pci 0000:00:13.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.483016] ohci-pci 0000:00:13.0: new USB bus registered, assigned bus number 5
Jan 22 19:48:20 astar kernel: [    1.483031] ohci-pci 0000:00:13.0: irq 18, io mem 0xfdffc000
Jan 22 19:48:20 astar kernel: [    1.537512] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.537515] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.537516] usb usb5: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.537518] usb usb5: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.537519] usb usb5: SerialNumber: 0000:00:13.0
Jan 22 19:48:20 astar kernel: [    1.537615] hub 5-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.537621] hub 5-0:1.0: 5 ports detected
Jan 22 19:48:20 astar kernel: [    1.537835] ohci-pci 0000:00:14.5: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.537892] ohci-pci 0000:00:14.5: new USB bus registered, assigned bus number 6
Jan 22 19:48:20 astar kernel: [    1.537909] ohci-pci 0000:00:14.5: irq 18, io mem 0xfdffa000
Jan 22 19:48:20 astar kernel: [    1.592505] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.592508] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.592509] usb usb6: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.592510] usb usb6: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.592511] usb usb6: SerialNumber: 0000:00:14.5
Jan 22 19:48:20 astar kernel: [    1.592608] hub 6-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.592614] hub 6-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.592797] ohci-pci 0000:00:16.0: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.592856] ohci-pci 0000:00:16.0: new USB bus registered, assigned bus number 7
Jan 22 19:48:20 astar kernel: [    1.592870] ohci-pci 0000:00:16.0: irq 18, io mem 0xfdff9000
Jan 22 19:48:20 astar kernel: [    1.647495] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
Jan 22 19:48:20 astar kernel: [    1.647497] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.647498] usb usb7: Product: OHCI PCI host controller
Jan 22 19:48:20 astar kernel: [    1.647499] usb usb7: Manufacturer: Linux 3.12.7-300.fc20.x86_64 ohci_hcd
Jan 22 19:48:20 astar kernel: [    1.647501] usb usb7: SerialNumber: 0000:00:16.0
Jan 22 19:48:20 astar kernel: [    1.647596] hub 7-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.647602] hub 7-0:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.647697] uhci_hcd: USB Universal Host Controller Interface driver
Jan 22 19:48:20 astar kernel: [    1.647767] xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.647827] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 8
Jan 22 19:48:20 astar kernel: [    1.648067] usb usb8: New USB device found, idVendor=1d6b, idProduct=0002
Jan 22 19:48:20 astar kernel: [    1.648068] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.648069] usb usb8: Product: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.648071] usb usb8: Manufacturer: Linux 3.12.7-300.fc20.x86_64 xhci_hcd
Jan 22 19:48:20 astar kernel: [    1.648072] usb usb8: SerialNumber: 0000:02:00.0
Jan 22 19:48:20 astar kernel: [    1.648157] hub 8-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.648167] hub 8-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.648213] xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.648243] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9
Jan 22 19:48:20 astar kernel: [    1.651315] usb usb9: New USB device found, idVendor=1d6b, idProduct=0003
Jan 22 19:48:20 astar kernel: [    1.651316] usb usb9: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 19:48:20 astar kernel: [    1.651317] usb usb9: Product: xHCI Host Controller
Jan 22 19:48:20 astar kernel: [    1.651319] usb usb9: Manufacturer: Linux 3.12.7-300.fc20.x86_64 xhci_hcd
Jan 22 19:48:20 astar kernel: [    1.651320] usb usb9: SerialNumber: 0000:02:00.0
Jan 22 19:48:20 astar kernel: [    1.651420] hub 9-0:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.651429] hub 9-0:1.0: 2 ports detected
Jan 22 19:48:20 astar kernel: [    1.656626] usbcore: registered new interface driver usbserial
Jan 22 19:48:20 astar kernel: [    1.656649] usbcore: registered new interface driver usbserial_generic
Jan 22 19:48:20 astar kernel: [    1.656665] usbserial: USB Serial support registered for generic
Jan 22 19:48:20 astar kernel: [    1.656717] i8042: PNP: No PS/2 controller found. Probing ports directly.
Jan 22 19:48:20 astar kernel: [    1.691057] i8042: Failed to disable AUX port, but continuing anyway... Is this a SiS?
Jan 22 19:48:20 astar kernel: [    1.691058] i8042: If AUX port is really absent please use the 'i8042.noaux' option
Jan 22 19:48:20 astar kernel: [    1.714555] ata6: SATA link down (SStatus 0 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.714600] ata4: SATA link down (SStatus 0 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.717508] usb 1-4: new high-speed USB device number 2 using ehci-pci
Jan 22 19:48:20 astar kernel: [    1.833992] usb 1-4: New USB device found, idVendor=05e3, idProduct=0608
Jan 22 19:48:20 astar kernel: [    1.834002] usb 1-4: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    1.834007] usb 1-4: Product: USB2.0 Hub
Jan 22 19:48:20 astar kernel: [    1.834693] hub 1-4:1.0: USB hub found
Jan 22 19:48:20 astar kernel: [    1.835112] hub 1-4:1.0: 4 ports detected
Jan 22 19:48:20 astar kernel: [    1.869471] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.869511] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.869553] ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.870442] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 19:48:20 astar kernel: [    1.870471] ata3.00: ATA-8: SanDisk SDSSDHP064G, X2306RL, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.870473] ata3.00: 125045424 sectors, multi 1: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.871542] ata3.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.871886] ata1.00: ATA-8: WDC WD5000AAKX-001CA0, 15.01H15, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.871888] ata1.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.872361] ata2.00: ATA-8: WDC WD5000AAKX-001CA0, 15.01H15, max UDMA/133
Jan 22 19:48:20 astar kernel: [    1.872370] ata2.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 22 19:48:20 astar kernel: [    1.872545] ata5.00: ATAPI: HL-DT-ST DVDRAM GH22NS50, TN02, max UDMA/100
Jan 22 19:48:20 astar kernel: [    1.873549] ata1.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.873774] scsi 0:0:0:0: Direct-Access     ATA      WDC WD5000AAKX-0 15.0 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.873955] ata2.00: configured for UDMA/133
Jan 22 19:48:20 astar kernel: [    1.874000] sd 0:0:0:0: Attached scsi generic sg0 type 0
Jan 22 19:48:20 astar kernel: [    1.874076] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
Jan 22 19:48:20 astar kernel: [    1.874103] sd 0:0:0:0: [sda] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874117] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.874323] scsi 1:0:0:0: Direct-Access     ATA      WDC WD5000AAKX-0 15.0 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.874468] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB)
Jan 22 19:48:20 astar kernel: [    1.874475] sd 1:0:0:0: Attached scsi generic sg1 type 0
Jan 22 19:48:20 astar kernel: [    1.874616] scsi 2:0:0:0: Direct-Access     ATA      SanDisk SDSSDHP0 X230 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.874619] sd 1:0:0:0: [sdb] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874662] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.874745] sd 2:0:0:0: [sdc] 125045424 512-byte logical blocks: (64.0 GB/59.6 GiB)
Jan 22 19:48:20 astar kernel: [    1.874758] sd 2:0:0:0: Attached scsi generic sg2 type 0
Jan 22 19:48:20 astar kernel: [    1.874885] sd 2:0:0:0: [sdc] Write Protect is off
Jan 22 19:48:20 astar kernel: [    1.874923] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 19:48:20 astar kernel: [    1.875423]  sdc: sdc1 sdc2
Jan 22 19:48:20 astar kernel: [    1.876010] sd 2:0:0:0: [sdc] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.876400] ata5.00: configured for UDMA/100
Jan 22 19:48:20 astar kernel: [    1.885695] scsi 4:0:0:0: CD-ROM            HL-DT-ST DVDRAM GH22NS50  TN02 PQ: 0 ANSI: 5
Jan 22 19:48:20 astar kernel: [    1.894915] sr0: scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray
Jan 22 19:48:20 astar kernel: [    1.894923] cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 19:48:20 astar kernel: [    1.895208] sr 4:0:0:0: Attached scsi generic sg3 type 5
Jan 22 19:48:20 astar kernel: [    1.920365]  sdb: sdb1 sdb2 sdb3 sdb4
Jan 22 19:48:20 astar kernel: [    1.921004] sd 1:0:0:0: [sdb] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.924507]  sda: sda1 sda2 sda3
Jan 22 19:48:20 astar kernel: [    1.924717] sd 0:0:0:0: [sda] Attached SCSI disk
Jan 22 19:48:20 astar kernel: [    1.940560] serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 19:48:20 astar kernel: [    1.940772] mousedev: PS/2 mouse device common for all mice
Jan 22 19:48:20 astar kernel: [    1.940992] rtc_cmos 00:03: RTC can wake from S4
Jan 22 19:48:20 astar kernel: [    1.941087] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0
Jan 22 19:48:20 astar kernel: [    1.941106] rtc_cmos 00:03: alarms up to one month, 242 bytes nvram, hpet irqs
Jan 22 19:48:20 astar kernel: [    1.941159] device-mapper: uevent: version 1.0.3
Jan 22 19:48:20 astar kernel: [    1.941217] device-mapper: ioctl: 4.26.0-ioctl (2013-08-15) initialised: dm-devel@redhat.com
Jan 22 19:48:20 astar kernel: [    1.941475] hidraw: raw HID events driver (C) Jiri Kosina
Jan 22 19:48:20 astar kernel: [    1.941549] usbcore: registered new interface driver usbhid
Jan 22 19:48:20 astar kernel: [    1.941550] usbhid: USB HID core driver
Jan 22 19:48:20 astar kernel: [    1.941586] drop_monitor: Initializing network drop monitor service
Jan 22 19:48:20 astar kernel: [    1.941649] ip_tables: (C) 2000-2006 Netfilter Core Team
Jan 22 19:48:20 astar kernel: [    1.941716] TCP: cubic registered
Jan 22 19:48:20 astar kernel: [    1.941718] Initializing XFRM netlink socket
Jan 22 19:48:20 astar kernel: [    1.941796] NET: Registered protocol family 10
Jan 22 19:48:20 astar kernel: [    1.941974] mip6: Mobile IPv6
Jan 22 19:48:20 astar kernel: [    1.941976] NET: Registered protocol family 17
Jan 22 19:48:20 astar kernel: [    1.942329] Loading compiled-in X.509 certificates
Jan 22 19:48:20 astar kernel: [    1.943238] Loaded X.509 cert 'Fedora kernel signing key: a65f5f670480b1e60c692845bc633e5d6d0f0f92'
Jan 22 19:48:20 astar kernel: [    1.943246] registered taskstats version 1
Jan 22 19:48:20 astar kernel: [    1.943964]   Magic number: 14:692:814
Jan 22 19:48:20 astar kernel: [    1.944054] rtc_cmos 00:03: setting system clock to 2014-01-23 02:48:18 UTC (1390445298)
Jan 22 19:48:20 astar kernel: [    1.945227] Freeing unused kernel memory: 1444K (ffffffff81d07000 - ffffffff81e70000)
Jan 22 19:48:20 astar kernel: [    1.945230] Write protecting the kernel read-only data: 12288k
Jan 22 19:48:20 astar kernel: [    1.948714] Freeing unused kernel memory: 1548K (ffff88000167d000 - ffff880001800000)
Jan 22 19:48:20 astar kernel: [    1.951176] Freeing unused kernel memory: 1152K (ffff880001ae0000 - ffff880001c00000)
Jan 22 19:48:20 astar kernel: [    2.100315] wmi: Mapper loaded
Jan 22 19:48:20 astar kernel: [    2.113871] [drm] Initialized drm 1.1.0 20060810
Jan 22 19:48:20 astar kernel: [    2.116363] usb 5-2: new low-speed USB device number 2 using ohci-pci
Jan 22 19:48:20 astar kernel: [    2.138402] pcieport 0000:00:02.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    2.138742] nouveau  [  DEVICE][0000:01:00.0] BOOT0  : 0x0a8180a2
Jan 22 19:48:20 astar kernel: [    2.138745] nouveau  [  DEVICE][0000:01:00.0] Chipset: GT218 (NVA8)
Jan 22 19:48:20 astar kernel: [    2.138746] nouveau  [  DEVICE][0000:01:00.0] Family : NV50
Jan 22 19:48:20 astar kernel: [    2.139307] nouveau  [   VBIOS][0000:01:00.0] checking PRAMIN for image...
Jan 22 19:48:20 astar kernel: [    2.211458] nouveau  [   VBIOS][0000:01:00.0] ... appears to be valid
Jan 22 19:48:20 astar kernel: [    2.211460] nouveau  [   VBIOS][0000:01:00.0] using image from PRAMIN
Jan 22 19:48:20 astar kernel: [    2.211564] nouveau  [   VBIOS][0000:01:00.0] BIT signature found
Jan 22 19:48:20 astar kernel: [    2.211567] nouveau  [   VBIOS][0000:01:00.0] version 70.18.2c.00.04
Jan 22 19:48:20 astar kernel: [    2.231812] nouveau  [     PFB][0000:01:00.0] RAM type: DDR2
Jan 22 19:48:20 astar kernel: [    2.231814] nouveau  [     PFB][0000:01:00.0] RAM size: 512 MiB
Jan 22 19:48:20 astar kernel: [    2.231816] nouveau  [     PFB][0000:01:00.0]    ZCOMP: 960 tags
Jan 22 19:48:20 astar kernel: [    2.253287] nouveau  [  PTHERM][0000:01:00.0] FAN control: none / external
Jan 22 19:48:20 astar kernel: [    2.253294] nouveau  [  PTHERM][0000:01:00.0] fan management: disabled
Jan 22 19:48:20 astar kernel: [    2.253296] nouveau  [  PTHERM][0000:01:00.0] internal sensor: yes
Jan 22 19:48:20 astar kernel: [    2.253518] [TTM] Zone  kernel: Available graphics memory: 4054398 kiB
Jan 22 19:48:20 astar kernel: [    2.253519] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
Jan 22 19:48:20 astar kernel: [    2.253520] [TTM] Initializing pool allocator
Jan 22 19:48:20 astar kernel: [    2.253524] [TTM] Initializing DMA pool allocator
Jan 22 19:48:20 astar kernel: [    2.253532] nouveau  [     DRM] VRAM: 512 MiB
Jan 22 19:48:20 astar kernel: [    2.253533] nouveau  [     DRM] GART: 1048576 MiB
Jan 22 19:48:20 astar kernel: [    2.253536] nouveau  [     DRM] TMDS table version 2.0
Jan 22 19:48:20 astar kernel: [    2.253538] nouveau  [     DRM] DCB version 4.0
Jan 22 19:48:20 astar kernel: [    2.253540] nouveau  [     DRM] DCB outp 00: 01000302 00020030
Jan 22 19:48:20 astar kernel: [    2.253542] nouveau  [     DRM] DCB outp 01: 02000300 00000000
Jan 22 19:48:20 astar kernel: [    2.253544] nouveau  [     DRM] DCB outp 02: 02011362 0f220010
Jan 22 19:48:20 astar kernel: [    2.253545] nouveau  [     DRM] DCB outp 03: 01022310 00020010
Jan 22 19:48:20 astar kernel: [    2.253547] nouveau  [     DRM] DCB conn 00: 00001030
Jan 22 19:48:20 astar kernel: [    2.253549] nouveau  [     DRM] DCB conn 01: 00202161
Jan 22 19:48:20 astar kernel: [    2.253550] nouveau  [     DRM] DCB conn 02: 00000200
Jan 22 19:48:20 astar kernel: [    2.260321] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Jan 22 19:48:20 astar kernel: [    2.260322] [drm] No driver support for vblank timestamp query.
Jan 22 19:48:20 astar kernel: [    2.260418] nouveau  [     DRM] 3 available performance level(s)
Jan 22 19:48:20 astar kernel: [    2.260421] nouveau  [     DRM] 0: core 135MHz shader 270MHz memory 135MHz voltage 850mV
Jan 22 19:48:20 astar kernel: [    2.260423] nouveau  [     DRM] 1: core 405MHz shader 810MHz memory 300MHz voltage 900mV
Jan 22 19:48:20 astar kernel: [    2.260426] nouveau  [     DRM] 3: core 567MHz shader 1400MHz memory 350MHz voltage 1000mV
Jan 22 19:48:20 astar kernel: [    2.260428] nouveau  [     DRM] c: core 405MHz shader 810MHz memory 405MHz voltage 900mV
Jan 22 19:48:20 astar kernel: [    2.268198] nouveau  [     DRM] MM: using COPY for buffer copies
Jan 22 19:48:20 astar kernel: [    2.268453] usb 5-2: New USB device found, idVendor=046d, idProduct=c505
Jan 22 19:48:20 astar kernel: [    2.268469] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.268476] usb 5-2: Product: USB Receiver
Jan 22 19:48:20 astar kernel: [    2.268481] usb 5-2: Manufacturer: Logitech
Jan 22 19:48:20 astar kernel: [    2.276753] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.0/input/input3
Jan 22 19:48:20 astar kernel: [    2.276851] hid-generic 0003:046D:C505.0001: input,hidraw0: USB HID v1.10 Keyboard [Logitech USB Receiver] on usb-0000:00:13.0-2/input0
Jan 22 19:48:20 astar kernel: [    2.286932] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.1/input/input4
Jan 22 19:48:20 astar kernel: [    2.287039] hid-generic 0003:046D:C505.0002: input,hidraw1: USB HID v1.10 Mouse [Logitech USB Receiver] on usb-0000:00:13.0-2/input1
Jan 22 19:48:20 astar kernel: [    2.312772] nouveau  [     DRM] allocated 1920x1200 fb: 0x70000, bo ffff88024422ec00
Jan 22 19:48:20 astar kernel: [    2.312870] fbcon: nouveaufb (fb0) is primary device
Jan 22 19:48:20 astar kernel: [    2.364352] tsc: Refined TSC clocksource calibration: 2812.882 MHz
Jan 22 19:48:20 astar kernel: [    2.391766] Console: switching to colour frame buffer device 240x75
Jan 22 19:48:20 astar kernel: [    2.395334] nouveau 0000:01:00.0: fb0: nouveaufb frame buffer device
Jan 22 19:48:20 astar kernel: [    2.395339] nouveau 0000:01:00.0: registered panic notifier
Jan 22 19:48:20 astar kernel: [    2.395342] [drm] Initialized nouveau 1.1.1 20120801 for 0000:01:00.0 on minor 0
Jan 22 19:48:20 astar kernel: [    2.454682] usb 1-4.2: new full-speed USB device number 3 using ehci-pci
Jan 22 19:48:20 astar kernel: [    2.533357] usb 1-4.2: New USB device found, idVendor=085c, idProduct=0300
Jan 22 19:48:20 astar kernel: [    2.533369] usb 1-4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.533376] usb 1-4.2: Product: Datacolor Spyder3
Jan 22 19:48:20 astar kernel: [    2.533382] usb 1-4.2: Manufacturer: ColorVision Inc.
Jan 22 19:48:20 astar kernel: [    2.598747] usb 1-4.4: new full-speed USB device number 4 using ehci-pci
Jan 22 19:48:20 astar kernel: [    2.678235] usb 1-4.4: New USB device found, idVendor=046d, idProduct=c52b
Jan 22 19:48:20 astar kernel: [    2.678239] usb 1-4.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 22 19:48:20 astar kernel: [    2.678241] usb 1-4.4: Product: USB Receiver
Jan 22 19:48:20 astar kernel: [    2.678242] usb 1-4.4: Manufacturer: Logitech
Jan 22 19:48:20 astar kernel: [    2.693910] logitech-djreceiver 0003:046D:C52B.0005: hiddev0,hidraw2: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:12.2-4.4/input2
Jan 22 19:48:20 astar kernel: [    2.698045] input: Logitech Unifying Device. Wireless PID:1017 as /devices/pci0000:00/0000:00:12.2/usb1/1-4/1-4.4/1-4.4:1.2/0003:046D:C52B.0005/input/input5
Jan 22 19:48:20 astar kernel: [    2.698223] logitech-djdevice 0003:046D:C52B.0006: input,hidraw3: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:1017] on usb-0000:00:12.2-4.4:1
Jan 22 19:48:20 astar kernel: [    2.700153] input: Logitech Unifying Device. Wireless PID:2010 as /devices/pci0000:00/0000:00:12.2/usb1/1-4/1-4.4/1-4.4:1.2/0003:046D:C52B.0005/input/input6
Jan 22 19:48:20 astar kernel: [    2.700257] logitech-djdevice 0003:046D:C52B.0007: input,hidraw4: USB HID v1.11 Keyboard [Logitech Unifying Device. Wireless PID:2010] on usb-0000:00:12.2-4.4:2
Jan 22 19:48:20 astar kernel: [    2.714789] bio: create slab <bio-1> at 1
Jan 22 19:48:20 astar kernel: [    2.748828] PM: Starting manual resume from disk
Jan 22 19:48:20 astar kernel: [    2.760724] EXT4-fs (dm-0): INFO: recovery required on readonly filesystem
Jan 22 19:48:20 astar kernel: [    2.760727] EXT4-fs (dm-0): write access will be enabled during recovery
Jan 22 19:48:20 astar kernel: [    2.846919] EXT4-fs (dm-0): recovery complete
Jan 22 19:48:20 astar kernel: [    2.847626] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.011008] SELinux:  Disabled at runtime.
Jan 22 19:48:20 astar kernel: [    3.041326] type=1404 audit(1390445299.596:2): selinux=0 auid=4294967295 ses=4294967295
Jan 22 19:48:20 astar kernel: [    3.186313] RPC: Registered named UNIX socket transport module.
Jan 22 19:48:20 astar kernel: [    3.186316] RPC: Registered udp transport module.
Jan 22 19:48:20 astar kernel: [    3.186317] RPC: Registered tcp transport module.
Jan 22 19:48:20 astar kernel: [    3.186318] RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 19:48:20 astar kernel: [    3.205860] EXT4-fs (dm-0): re-mounted. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.208427] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Jan 22 19:48:20 astar kernel: [    3.259327] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 19:48:20 astar kernel: [    3.266545] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jan 22 19:48:20 astar kernel: [    3.266562] r8169 0000:05:00.0: can't disable ASPM; OS doesn't have ASPM control
Jan 22 19:48:20 astar kernel: [    3.266570] pcieport 0000:00:07.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.266948] r8169 0000:05:00.0 eth0: RTL8168d/8111d at 0xffffc90010ef8000, 1c:6f:65:3e:b1:ed, XID 083000c0 IRQ 49
Jan 22 19:48:20 astar kernel: [    3.266950] r8169 0000:05:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
Jan 22 19:48:20 astar kernel: [    3.267507] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jan 22 19:48:20 astar kernel: [    3.267520] r8169 0000:06:00.0: can't disable ASPM; OS doesn't have ASPM control
Jan 22 19:48:20 astar kernel: [    3.267524] pcieport 0000:00:09.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.267844] r8169 0000:06:00.0 eth1: RTL8168d/8111d at 0xffffc90010f00000, 1c:6f:65:3e:b1:ef, XID 083000c0 IRQ 50
Jan 22 19:48:20 astar kernel: [    3.267848] r8169 0000:06:00.0 eth1: jumbo features [frames: 9200 bytes, tx checksumming: ko]
Jan 22 19:48:20 astar kernel: [    3.279583] ACPI Warning: 0x0000000000000b00-0x0000000000000b07 SystemIO conflicts with Region \SOR1 1 (20130725/utaddress-251)
Jan 22 19:48:20 astar kernel: [    3.279593] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
Jan 22 19:48:20 astar kernel: [    3.281503] sp5100_tco: SP5100/SB800 TCO WatchDog Timer Driver v0.05
Jan 22 19:48:20 astar kernel: [    3.281578] sp5100_tco: PCI Revision ID: 0x41
Jan 22 19:48:20 astar kernel: [    3.281614] sp5100_tco: Using 0xfed80b00 for watchdog MMIO address
Jan 22 19:48:20 astar kernel: [    3.281625] sp5100_tco: Last reboot was not triggered by watchdog.
Jan 22 19:48:20 astar kernel: [    3.281755] sp5100_tco: initialized (0xffffc90010efab00). heartbeat=60 sec (nowayout=0)
Jan 22 19:48:20 astar kernel: [    3.294413] MCE: In-kernel MCE decoding enabled.
Jan 22 19:48:20 astar kernel: [    3.296857] EDAC MC: Ver: 3.0.0
Jan 22 19:48:20 astar kernel: [    3.298985] AMD64 EDAC driver v3.4.0
Jan 22 19:48:20 astar kernel: [    3.306993] microcode: CPU0: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309697] microcode: CPU0: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309733] microcode: CPU1: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309740] microcode: CPU1: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309753] microcode: CPU2: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309762] microcode: CPU2: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309783] microcode: CPU3: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309791] microcode: CPU3: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309799] microcode: CPU4: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309807] microcode: CPU4: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.309817] microcode: CPU5: patch_level=0x010000bf
Jan 22 19:48:20 astar kernel: [    3.309825] microcode: CPU5: new patch_level=0x010000dc
Jan 22 19:48:20 astar kernel: [    3.310231] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
Jan 22 19:48:20 astar kernel: [    3.324599] input: HDA ATI SB Front Headphone as /devices/pci0000:00/0000:00:14.2/sound/card0/input14
Jan 22 19:48:20 astar kernel: [    3.325457] [drm] radeon kernel modesetting enabled.
Jan 22 19:48:20 astar kernel: [    3.325644] input: HDA ATI SB Line Out Side as /devices/pci0000:00/0000:00:14.2/sound/card0/input13
Jan 22 19:48:20 astar kernel: [    3.325670] pcieport 0000:00:0b.0: driver skip pci_set_master, fix it!
Jan 22 19:48:20 astar kernel: [    3.325675] radeon 0000:07:00.0: enabling device (0000 -> 0003)
Jan 22 19:48:20 astar kernel: [    3.325835] input: HDA ATI SB Line Out CLFE as /devices/pci0000:00/0000:00:14.2/sound/card0/input12
Jan 22 19:48:20 astar kernel: [    3.327744] kvm: Nested Virtualization enabled
Jan 22 19:48:20 astar kernel: [    3.327749] kvm: Nested Paging enabled
Jan 22 19:48:20 astar kernel: [    3.330359] [drm] initializing kernel modesetting (VERDE 0x1002:0x683F 0x1682:0x3248).
Jan 22 19:48:20 astar kernel: [    3.330393] [drm] register mmio base: 0xFDD80000
Jan 22 19:48:20 astar kernel: [    3.330394] [drm] register mmio size: 262144
Jan 22 19:48:20 astar kernel: [    3.455406] Switched to clocksource tsc
Jan 22 19:48:20 astar kernel: [    3.455474] input: HDA ATI SB Line Out Surround as /devices/pci0000:00/0000:00:14.2/sound/card0/input11
Jan 22 19:48:20 astar kernel: [    3.455860] input: HDA ATI SB Line Out Front as /devices/pci0000:00/0000:00:14.2/sound/card0/input10
Jan 22 19:48:20 astar kernel: [    3.455873] ATOM BIOS: C444
Jan 22 19:48:20 astar kernel: [    3.455918] input: HDA ATI SB Line as /devices/pci0000:00/0000:00:14.2/sound/card0/input9
Jan 22 19:48:20 astar kernel: [    3.455953] [drm] GPU not posted. posting now...
Jan 22 19:48:20 astar kernel: [    3.455997] input: HDA ATI SB Rear Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input8
Jan 22 19:48:20 astar kernel: [    3.456573] input: HDA ATI SB Front Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input7
Jan 22 19:48:20 astar kernel: [    3.461346] radeon 0000:07:00.0: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used)
Jan 22 19:48:20 astar kernel: [    3.461348] radeon 0000:07:00.0: GTT: 1024M 0x0000000080000000 - 0x00000000BFFFFFFF
Jan 22 19:48:20 astar kernel: [    3.461350] [drm] Detected VRAM RAM=2048M, BAR=256M
Jan 22 19:48:20 astar kernel: [    3.461351] [drm] RAM width 128bits DDR
Jan 22 19:48:20 astar kernel: [    3.461369] [drm] radeon: 2048M of VRAM memory ready
Jan 22 19:48:20 astar kernel: [    3.461370] [drm] radeon: 1024M of GTT memory ready.
Jan 22 19:48:20 astar kernel: [    3.461567] hda_intel: Disabling MSI
Jan 22 19:48:20 astar kernel: [    3.461583] ALSA sound/pci/hda/hda_intel.c:3170 0000:01:00.1: Handle VGA-switcheroo audio client
Jan 22 19:48:20 astar kernel: [    3.461643] EDAC amd64: DRAM ECC disabled.
Jan 22 19:48:20 astar kernel: [    3.461655] EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
Jan 22 19:48:20 astar kernel: [    3.461655]  Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
Jan 22 19:48:20 astar kernel: [    3.461655]  (Note that use of the override may cause unknown side effects.)
Jan 22 19:48:20 astar kernel: [    3.462066] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    3.464246] [drm] GART: num cpu pages 262144, num gpu pages 262144
Jan 22 19:48:20 astar kernel: [    3.467861] [drm] probing gen 2 caps for device 1002:5a1f = 31cd02/0
Jan 22 19:48:20 astar kernel: [    3.467870] [drm] PCIE gen 2 link speeds already enabled
Jan 22 19:48:20 astar kernel: [    3.469070] [drm] Loading VERDE Microcode
Jan 22 19:48:20 astar kernel: [    3.476475] [drm] PCIE GART of 1024M enabled (table at 0x0000000000276000).
Jan 22 19:48:20 astar kernel: [    3.476603] radeon 0000:07:00.0: WB enabled
Jan 22 19:48:20 astar kernel: [    3.476607] radeon 0000:07:00.0: fence driver on ring 0 use gpu addr 0x0000000080000c00 and cpu addr 0xffff8800a840ac00
Jan 22 19:48:20 astar kernel: [    3.476609] radeon 0000:07:00.0: fence driver on ring 1 use gpu addr 0x0000000080000c04 and cpu addr 0xffff8800a840ac04
Jan 22 19:48:20 astar kernel: [    3.476611] radeon 0000:07:00.0: fence driver on ring 2 use gpu addr 0x0000000080000c08 and cpu addr 0xffff8800a840ac08
Jan 22 19:48:20 astar kernel: [    3.476613] radeon 0000:07:00.0: fence driver on ring 3 use gpu addr 0x0000000080000c0c and cpu addr 0xffff8800a840ac0c
Jan 22 19:48:20 astar kernel: [    3.476614] radeon 0000:07:00.0: fence driver on ring 4 use gpu addr 0x0000000080000c10 and cpu addr 0xffff8800a840ac10
Jan 22 19:48:20 astar kernel: [    3.477076] radeon 0000:07:00.0: fence driver on ring 5 use gpu addr 0x0000000000075a18 and cpu addr 0xffffc90016635a18
Jan 22 19:48:20 astar kernel: [    3.477080] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Jan 22 19:48:20 astar kernel: [    3.477081] [drm] Driver supports precise vblank timestamp query.
Jan 22 19:48:20 astar kernel: [    3.477155] radeon 0000:07:00.0: radeon: using MSI.
Jan 22 19:48:20 astar kernel: [    3.477234] [drm] radeon: irq initialized.
Jan 22 19:48:20 astar kernel: [    3.479685] Adding 8187900k swap on /dev/mapper/f20_astar-swap.  Priority:-1 extents:1 across:8187900k SSFS
Jan 22 19:48:20 astar kernel: [    3.969220] [drm] ring test on 0 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    3.969227] [drm] ring test on 1 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    3.969230] [drm] ring test on 2 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    3.969294] [drm] ring test on 3 succeeded in 4 usecs
Jan 22 19:48:20 astar kernel: [    3.969316] [drm] ring test on 4 succeeded in 7 usecs
Jan 22 19:48:20 astar kernel: [    4.084034] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null)
Jan 22 19:48:20 astar kernel: [    4.107737] type=1305 audit(1390445300.662:3): audit_pid=660 old=0 auid=4294967295 ses=4294967295
Jan 22 19:48:20 astar kernel: [    4.107737]  res=1
Jan 22 19:48:20 astar kernel: [    4.176517] [drm] ring test on 5 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    4.176521] [drm] UVD initialized successfully.
Jan 22 19:48:20 astar kernel: [    4.177789] [drm] Enabling audio 0 support
Jan 22 19:48:20 astar kernel: [    4.177792] [drm] Enabling audio 1 support
Jan 22 19:48:20 astar kernel: [    4.177793] [drm] Enabling audio 2 support
Jan 22 19:48:20 astar kernel: [    4.177794] [drm] Enabling audio 3 support
Jan 22 19:48:20 astar kernel: [    4.177795] [drm] Enabling audio 4 support
Jan 22 19:48:20 astar kernel: [    4.177796] [drm] Enabling audio 5 support
Jan 22 19:48:20 astar kernel: [    4.177824] [drm] ib test on ring 0 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177844] [drm] ib test on ring 1 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177863] [drm] ib test on ring 2 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177908] [drm] ib test on ring 3 succeeded in 0 usecs
Jan 22 19:48:20 astar kernel: [    4.177949] [drm] ib test on ring 4 succeeded in 1 usecs
Jan 22 19:48:20 astar kernel: [    4.351677] [drm] ib test on ring 5 succeeded
Jan 22 19:48:20 astar kernel: [    4.352213] [drm] Radeon Display Connectors
Jan 22 19:48:20 astar kernel: [    4.352215] [drm] Connector 0:
Jan 22 19:48:20 astar kernel: [    4.352216] [drm]   DP-1
Jan 22 19:48:20 astar kernel: [    4.352217] [drm]   HPD1
Jan 22 19:48:20 astar kernel: [    4.352219] [drm]   DDC: 0x6570 0x6570 0x6574 0x6574 0x6578 0x6578 0x657c 0x657c
Jan 22 19:48:20 astar kernel: [    4.352219] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352221] [drm]     DFP1: INTERNAL_UNIPHY2
Jan 22 19:48:20 astar kernel: [    4.352221] [drm] Connector 1:
Jan 22 19:48:20 astar kernel: [    4.352222] [drm]   HDMI-A-2
Jan 22 19:48:20 astar kernel: [    4.352223] [drm]   HPD4
Jan 22 19:48:20 astar kernel: [    4.352224] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
Jan 22 19:48:20 astar kernel: [    4.352225] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352226] [drm]     DFP2: INTERNAL_UNIPHY2
Jan 22 19:48:20 astar kernel: [    4.352227] [drm] Connector 2:
Jan 22 19:48:20 astar kernel: [    4.352229] [drm]   DVI-I-2
Jan 22 19:48:20 astar kernel: [    4.352230] [drm]   HPD6
Jan 22 19:48:20 astar kernel: [    4.352231] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
Jan 22 19:48:20 astar kernel: [    4.352232] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352233] [drm]     DFP3: INTERNAL_UNIPHY1
Jan 22 19:48:20 astar kernel: [    4.352234] [drm] Connector 3:
Jan 22 19:48:20 astar kernel: [    4.352234] [drm]   DVI-I-3
Jan 22 19:48:20 astar kernel: [    4.352235] [drm]   HPD2
Jan 22 19:48:20 astar kernel: [    4.352236] [drm]   DDC: 0x6560 0x6560 0x6564 0x6564 0x6568 0x6568 0x656c 0x656c
Jan 22 19:48:20 astar kernel: [    4.352237] [drm]   Encoders:
Jan 22 19:48:20 astar kernel: [    4.352238] [drm]     DFP4: INTERNAL_UNIPHY
Jan 22 19:48:20 astar kernel: [    4.352239] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
Jan 22 19:48:20 astar kernel: [    4.354307] [drm] Internal thermal controller with fan control
Jan 22 19:48:20 astar kernel: [    4.354367] [drm] radeon: power management initialized
Jan 22 19:48:20 astar kernel: [    4.387064] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Jan 22 19:48:20 astar kernel: [    4.401438] ip6_tables: (C) 2000-2006 Netfilter Core Team
Jan 22 19:48:20 astar kernel: [    4.441204] Ebtables v2.0 registered
Jan 22 19:48:21 astar kernel: [    4.450537] [drm] fb mappable at 0xD1488000
Jan 22 19:48:21 astar kernel: [    4.450541] [drm] vram apper at 0xD0000000
Jan 22 19:48:21 astar kernel: [    4.450543] [drm] size 9216000
Jan 22 19:48:21 astar kernel: [    4.450544] [drm] fb depth is 24
Jan 22 19:48:21 astar kernel: [    4.450545] [drm]    pitch is 7680
Jan 22 19:48:21 astar kernel: [    4.450796] radeon 0000:07:00.0: fb1: radeondrmfb frame buffer device
Jan 22 19:48:21 astar kernel: [    4.450802] [drm] Initialized radeon 2.34.0 20080528 for 0000:07:00.0 on minor 1
Jan 22 19:48:21 astar kernel: [    4.453131] Bridge firewalling registered
Jan 22 19:48:21 astar kernel: [    4.577515] r8169 0000:05:00.0 p7p1: link down
Jan 22 19:48:21 astar kernel: [    4.577552] IPv6: ADDRCONF(NETDEV_UP): p7p1: link is not ready
Jan 22 19:48:21 astar kernel: [    4.577560] r8169 0000:05:00.0 p7p1: link down
Jan 22 19:48:21 astar kernel: [    4.579200] device p7p1 entered promiscuous mode
Jan 22 19:48:21 astar kernel: [    4.620517] IPv6: ADDRCONF(NETDEV_UP): br0: link is not ready
Jan 22 19:48:21 astar kernel: [    4.734283] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input18
Jan 22 19:48:21 astar kernel: [    4.734363] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input17
Jan 22 19:48:21 astar kernel: [    4.734418] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input16
Jan 22 19:48:21 astar kernel: [    4.734472] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.0/0000:01:00.1/sound/card1/input15
Jan 22 19:48:21 astar kernel: [    4.734896] ALSA sound/pci/hda/hda_intel.c:3170 0000:07:00.1: Handle VGA-switcheroo audio client
Jan 22 19:48:21 astar kernel: [    4.734899] ALSA sound/pci/hda/hda_intel.c:3499 0000:07:00.1: Force to non-snoop mode
Jan 22 19:48:21 astar kernel: [    4.748366] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748381] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748425] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748438] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748493] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748505] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748547] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748560] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748603] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748615] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748662] ALSA sound/pci/hda/hda_eld.c:334 HDMI: ELD buf size is 0, force 128
Jan 22 19:48:21 astar kernel: [    4.748674] ALSA sound/pci/hda/hda_eld.c:351 HDMI: invalid ELD data byte 0
Jan 22 19:48:21 astar kernel: [    4.748767] input: HDA ATI HDMI HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input24
Jan 22 19:48:21 astar kernel: [    4.748879] input: HDA ATI HDMI HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input23
Jan 22 19:48:21 astar kernel: [    4.748941] input: HDA ATI HDMI HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input22
Jan 22 19:48:21 astar kernel: [    4.748991] input: HDA ATI HDMI HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input21
Jan 22 19:48:21 astar kernel: [    4.749091] input: HDA ATI HDMI HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input20
Jan 22 19:48:21 astar kernel: [    4.749148] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:0b.0/0000:07:00.1/sound/card2/input19
Jan 22 19:48:21 astar kernel: [    4.937070] vgaarb: device changed decodes: PCI:0000:07:00.0,olddecodes=io+mem,decodes=none:owns=none
Jan 22 19:48:21 astar kernel: [    4.937074] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none
Jan 22 19:48:24 astar kernel: [    7.648262] r8169 0000:05:00.0 p7p1: link up
Jan 22 19:48:24 astar kernel: [    7.648282] IPv6: ADDRCONF(NETDEV_CHANGE): p7p1: link becomes ready
Jan 22 19:48:24 astar kernel: [    7.649103] br0: port 1(p7p1) entered forwarding state
Jan 22 19:48:24 astar kernel: [    7.649113] br0: port 1(p7p1) entered forwarding state
Jan 22 19:48:24 astar kernel: [    7.649132] IPv6: ADDRCONF(NETDEV_CHANGE): br0: link becomes ready

[attachment: xen console acpi-skip.txt]

Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose acpi_skip_timer_override com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: BIOS IRQ0 pin2 override ignored.
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.945 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=0 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8020140, flags = 0x8
(XEN) Brought up 6 CPUs
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8100160, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8110160, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8120160, flags = 0x8
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000345f000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000345f000.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login: root
Password:
Last login: Mon Jan 20 18:10:47 from astar.houby.net
[root@astar ~]# xl info
host                   : astar
release                : 3.12.7-300.fc20.x86_64
version                : #1 SMP Fri Jan 10 15:35:31 UTC 2014
machine                : x86_64
nr_cpus                : 6
max_cpu_id             : 7
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 1
cpu_mhz                : 2812
hw_caps                : 178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000037ff:00000000
virt_caps              : hvm hvm_directio
total_memory           : 8188
free_memory            : 6044
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : -rc2
xen_version            : 4.4-rc2
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose acpi_skip_timer_override com1=38400,8n1,pci console=com1
cc_compiler            : gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)
cc_compile_by          : mockbuild
cc_compile_domain      : [unknown]
cc_compile_date        : Thu Jan 16 19:37:57 UTC 2014
xend_config_format     : 4
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2048     2     r-----      69.4
[root@astar ~]# xl pci-assignable-list
0000:07:00.0
0000:07:00.1
[root@astar ~]# xl create mars.xl
Parsing config from mars.xl
libxl: error: libxl_create.c:1034:domcreate_launch_dm: unable to add disk devices
libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/1/image/device-model-pid
libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 1
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable to get my domid
libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy failed for 1
[root@astar ~]#
[root@astar ~]#
[root@astar ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
[root@astar ~]#
[root@astar ~]#
[root@astar ~]#
[root@astar ~]# xl create mars.xl
Parsing config from mars.xl
(d2) HVM Loader
(d2) Detected Xen v4.4-rc2
(d2) Xenbus rings @0xfeffc000, event channel 6
(d2) System requested ROMBIOS
(d2) CPU speed is 2813 MHz
(d2) Relocating guest memory for lowmem MMIO space enabled
(XEN) irq.c:270: Dom2 PCI link 0 changed 0 -> 5
(d2) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom2 PCI link 1 changed 0 -> 10
(d2) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom2 PCI link 2 changed 0 -> 11
(d2) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom2 PCI link 3 changed 0 -> 5
(d2) PCI-ISA link 3 routed to IRQ5
[root@astar ~]# (d2) pci dev 01:2 INTD->IRQ5
(d2) pci dev 01:3 INTA->IRQ10
(d2) pci dev 03:0 INTA->IRQ5
(d2) pci dev 04:0 INTA->IRQ5
(d2) pci dev 05:0 INTA->IRQ10
(d2) RAM in high memory; setting high_mem resource base to 10fc00000
(d2) pci dev 02:0 bar 10 size 002000000: 0f0000008
(d2) pci dev 03:0 bar 14 size 001000000: 0f2000008
(d2) pci dev 04:0 bar 10 size 000020000: 0f3000000
(d2) pci dev 02:0 bar 14 size 000001000: 0f3020000
(d2) pci dev 03:0 bar 10 size 000000100: 00000c001
(d2) pci dev 05:0 bar 10 size 000000100: 00000c101
(d2) pci dev 04:0 bar 14 size 000000040: 00000c201
(d2) pci dev 01:2 bar 20 size 000000020: 00000c241
(d2) pci dev 01:1 bar 20 size 000000010: 00000c261
(d2) Multiprocessor initialisation:
(d2)  - CPU0 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU1 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU2 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2)  - CPU3 ... 48-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d2) Testing HVM environment:
(d2)  - REP INSB across page boundaries ... passed
(d2)  - GS base MSRs and SWAPGS ... passed
(d2) Passed 2 of 2 tests
(d2) Writing SMBIOS tables ...
(d2) Loading ROMBIOS ...
(d2) 12124 bytes of ROMBIOS high-memory extensions:
(d2)   Relocating to 0xfc001000-0xfc003f5c ... done
(d2) Creating MP tables ...
(d2) Loading Cirrus VGABIOS ...
(d2) Loading PCI Option ROM ...
(d2)  - Manufacturer: http://ipxe.org
(d2)  - Product name: iPXE
(d2) Option ROMs:
(d2)  c0000-c8fff: VGA BIOS
(d2)  c9000-d97ff: Etherboot ROM
(d2) Loading ACPI ...
(d2) vm86 TSS at fc010080
(d2) BIOS map:
(d2)  f0000-fffff: Main BIOS
(d2) E820 table:
(d2)  [00]: 00000000:00000000 - 00000000:0009e000: RAM
(d2)  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
(d2)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d2)  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d2)  [03]: 00000000:00100000 - 00000000:f0000000: RAM
(d2)  HOLE: 00000000:f0000000 - 00000000:fc000000
(d2)  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d2)  [05]: 00000001:00000000 - 00000001:0fc00000: RAM
(d2) Invoking ROMBIOS ...
(XEN) stdvga.c:147:d2 entering stdvga and caching modes
(d2) VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(d2) Bochs BIOS - build: 06/23/99
(d2) $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(d2) Options: apmbios pcibios eltorito PMM
(d2)
(d2) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk ( 160 GBytes)
(d2)
(d2)
(d2)
(d2) Press F12 for boot menu.
(d2)
(d2) Booting from Hard Disk...
(XEN) stdvga.c:151:d2 leaving stdvga

[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     136.5
mars                                         2  4095     3     r-----       4.5
[root@astar ~]# (XEN) irq.c:270: Dom2 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom2 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom2 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom2 PCI link 3 changed 5 -> 0

[root@astar ~]#
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     171.2
mars                                         2  4095     4     ------      17.7
[root@astar ~]#
[root@astar ~]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     2     r-----     286.1
mars                                         2  4095     4     -b----      76.3
[root@astar ~]#



[attachment: xen console acpi-skip no-amd-iommu.txt]


Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,no-amd-iommu-perdev-intremap acpi_skip_timer_override com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: BIOS IRQ0 pin2 override ignored.
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.961 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) AMD-Vi: Using global interrupt remap table is not recommended (see XSA-36)!
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=0 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880002f40400.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880002f40400.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)



===> xl create mars with secondary vga passthrough



astar login: (XEN) io.c:280: d1: bind: m_gsi=19 g_gsi=40 device=6 intx=0
(XEN) AMD-Vi: Disable: device id = 0x700, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x7b8d5000, domain = 1, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.0 from dom0 to dom1
(XEN) io.c:280: d1: bind: m_gsi=16 g_gsi=45 device=7 intx=1
(XEN) AMD-Vi: Disable: device id = 0x701, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x7b8d5000, domain = 1, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.1 from dom0 to dom1
(d1) HVM Loader
(d1) Detected Xen v4.4-rc2
(d1) Xenbus rings @0xfeffc000, event channel 6
(d1) System requested ROMBIOS
(d1) CPU speed is 2813 MHz
(d1) Relocating guest memory for lowmem MMIO space enabled
(XEN) irq.c:270: Dom1 PCI link 0 changed 0 -> 5
(d1) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom1 PCI link 1 changed 0 -> 10
(d1) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom1 PCI link 2 changed 0 -> 11
(d1) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom1 PCI link 3 changed 0 -> 5
(d1) PCI-ISA link 3 routed to IRQ5
(d1) pci dev 01:2 INTD->IRQ5
(d1) pci dev 01:3 INTA->IRQ10
(d1) pci dev 03:0 INTA->IRQ5
(d1) pci dev 04:0 INTA->IRQ5
(d1) pci dev 05:0 INTA->IRQ10
(d1) pci dev 06:0 INTA->IRQ11
(d1) pci dev 07:0 INTB->IRQ5
(d1) Relocating 0xffff pages from 0e0001000 to 10fc00000 for lowmem MMIO hole
(d1) Relocating 0x1 pages from 0e0000000 to 11fbff000 for lowmem MMIO hole
(d1) RAM in high memory; setting high_mem resource base to 11fc00000
(d1) pci dev 06:0 bar 10 size 010000000: 0e000000c
(XEN) memory_map:add: dom1 gfn=e0000 mfn=d0000 nr=10000
(d1) pci dev 02:0 bar 10 size 002000000: 0f0000008
(d1) pci dev 03:0 bar 14 size 001000000: 0f2000008
(XEN) memory_map:add: dom1 gfn=f3000 mfn=fdd80 nr=40
(d1) pci dev 06:0 bar 18 size 000040000: 0f3000004
(d1) pci dev 04:0 bar 10 size 000020000: 0f3040000
(d1) pci dev 06:0 bar 30 size 000020000: 0f3060000
(d1) pci dev 07:0 bar 10 size 000004000: 0f3080004
(XEN) memory_map:add: dom1 gfn=f3080 mfn=fddfc nr=4
(d1) pci dev 02:0 bar 14 size 000001000: 0f3084000
(d1) pci dev 03:0 bar 10 size 000000100: 00000c001
(d1) pci dev 05:0 bar 10 size 000000100: 00000c101
(d1) pci dev 06:0 bar 20 size 000000100: 00000c201
(XEN) ioport_map:add: dom1 gport=c200 mport=de00 nr=100
(d1) pci dev 04:0 bar 14 size 000000040: 00000c301
(d1) pci dev 01:2 bar 20 size 000000020: 00000c341
(d1) pci dev 01:1 bar 20 size 000000010: 00000c361
(d1) Multiprocessor initialisation:
(d1)  - CPU0 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU1 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU2 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1)  - CPU3 ... 48-bit phys ... fixed MTRRs ... var MTRRs [3/8] ... done.
(d1) Testing HVM environment:
(d1)  - REP INSB across page boundaries ... passed
(d1)  - GS base MSRs and SWAPGS ... passed
(d1) Passed 2 of 2 tests
(d1) Writing SMBIOS tables ...
(d1) Loading ROMBIOS ...
(d1) 12124 bytes of ROMBIOS high-memory extensions:
(d1)   Relocating to 0xfc001000-0xfc003f5c ... done
(d1) Creating MP tables ...
(d1) Loading Cirrus VGABIOS ...
(d1) Loading PCI Option ROM ...
(d1)  - Manufacturer: http://ipxe.org
(d1)  - Product name: iPXE
(d1) Option ROMs:
(d1)  c0000-c8fff: VGA BIOS
(d1)  c9000-d97ff: Etherboot ROM
(d1) Loading ACPI ...
(d1) vm86 TSS at fc010080
(d1) BIOS map:
(d1)  f0000-fffff: Main BIOS
(d1) E820 table:
(d1)  [00]: 00000000:00000000 - 00000000:0009e000: RAM
(d1)  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
(d1)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d1)  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d1)  [03]: 00000000:00100000 - 00000000:e0000000: RAM
(d1)  HOLE: 00000000:e0000000 - 00000000:fc000000
(d1)  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d1)  [05]: 00000001:00000000 - 00000001:1fc00000: RAM
(d1) Invoking ROMBIOS ...
(XEN) stdvga.c:147:d1 entering stdvga and caching modes
(d1) VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(d1) Bochs BIOS - build: 06/23/99
(d1) $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(d1) Options: apmbios pcibios eltorito PMM
(d1)
(d1) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk ( 250 GBytes)
(d1)
(d1)
(d1)
(d1) Press F12 for boot menu.
(d1)
(d1) Booting from Hard Disk...
(XEN) stdvga.c:151:d1 leaving stdvga
(XEN) irq.c:270: Dom1 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom1 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom1 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom1 PCI link 3 changed 5 -> 0
(XEN) memory_map:remove: dom1 gfn=f3080 mfn=fddfc nr=4
(XEN) memory_map:add: dom1 gfn=f3080 mfn=fddfc nr=4
(XEN) common.c:3525:d0 tracking VRAM f0000 - f0240
(XEN) memory_map:remove: dom1 gfn=e0000 mfn=d0000 nr=10000
(XEN) memory_map:remove: dom1 gfn=f3000 mfn=fdd80 nr=40
(XEN) ioport_map:remove: dom1 gport=c200 mport=de00 nr=100
(XEN) memory_map:add: dom1 gfn=e0000 mfn=d0000 nr=10000
(XEN) memory_map:add: dom1 gfn=f3000 mfn=fdd80 nr=40
(XEN) ioport_map:add: dom1 gport=c200 mport=de00 nr=100
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620de80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620dec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620df80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x11620dfc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db84c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db85c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db86c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db87c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db88c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec1c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db89c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec0c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec2c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec4c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec3c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec5c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec6c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec7c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113db8fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec8c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x112aec9c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168900c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168901c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e82fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168902c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168903c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168904c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168905c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116890780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168906c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x1168907c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd0c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd1c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x113dbd440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e871c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e870c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e872c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e873c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e875c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e874c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e876c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e877c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e879c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e878c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e87cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d0c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d1c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d2c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d4c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d5c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d3c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d6c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x117e7d740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d7c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d8c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7d9c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7da00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dc40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7db80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7ddc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dbc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dd40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dcc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7de40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7dfc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e7df40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a966c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a967c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a969c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a968c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0f80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0fc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117eb0ec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115a96f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93000, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93040, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93100, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e930c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e931c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93280, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93300, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93380, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e932c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e934c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e933c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e935c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e936c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e937c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e939c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e938c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93cc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93d40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93dc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93e80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93ec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93f80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93f40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93f00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd4c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x117e93fc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd5c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd6c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd7c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd8c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbda00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbda40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbd9c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbda80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdb00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdc00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdd00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdc40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdc80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdb40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdb80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdd40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdd80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbddc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdcc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdbc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbde00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbde80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbde40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdf00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdec0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdf80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdfc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x113dbdf40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9040, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9000, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc90c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9080, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9140, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9100, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9180, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc91c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9240, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9200, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9280, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc92c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9300, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9340, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc93c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9380, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9440, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9400, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9480, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc94c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9500, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9540, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9580, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc95c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9600, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9640, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9680, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9700, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9800, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9740, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc96c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9840, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9900, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9940, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9880, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9780, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9980, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc97c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc99c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc98c0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9a00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9a40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9a80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9ac0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9b00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9b40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9c00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9c40, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9b80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9bc0, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9c80, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain =3D 1, device id =3D 0x700, fault addre=
ss =3D 0x115fc9d00, flags =3D 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x115fc9fc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0040, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0000, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0080, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee00c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0100, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0140, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0180, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee01c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0200, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0240, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0280, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee02c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0300, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0340, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0380, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee03c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0400, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0440, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0500, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0480, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0540, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0580, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee04c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0600, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee05c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0640, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0680, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0800, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0700, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0840, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee06c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0880, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0900, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0780, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0740, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee08c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0940, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0980, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee07c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee09c0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0a40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0ac0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0b00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0bc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0d80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0c80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0cc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0dc0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e00, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0e40, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0ec0, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0f80, flags = 0
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault address = 0x116ee0fc0, flags = 0
(XEN) AMD-Vi: Disable: device id = 0x700, domain = 1, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.0 from dom1 to dom0
(XEN) AMD-Vi: Disable: device id = 0x701, domain = 1, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472e2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Re-assign 0000:07:00.1 from dom1 to dom0
(XEN) Domain 0 shutdown: rebooting machine.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Thu Jan 23 10:33:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Hal-0004Wu-TR; Thu, 23 Jan 2014 10:32:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6Haj-0004Wp-PP
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:32:57 +0000
Received: from [85.158.143.35:20063] by server-3.bemta-4.messagelabs.com id
	C1/C0-32360-9DFE0E25; Thu, 23 Jan 2014 10:32:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390473175!268985!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5167 invoked from network); 23 Jan 2014 10:32:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:32:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93632527"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 10:32:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:32:54 -0500
Message-ID: <1390473172.24595.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>
Date: Thu, 23 Jan 2014 10:32:52 +0000
In-Reply-To: <20140122171906.GT8171@type>
References: <1390410769-2517-1-git-send-email-xenmail43267@gmail.com>
	<20140122171906.GT8171@type>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	alex.sharp@orionvm.com, xenmail43267@gmail.com
Subject: Re: [Xen-devel] [PATCH] mini-os: Fix stubdom build failures on gcc
	4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 18:19 +0100, Samuel Thibault wrote:
> xenmail43267@gmail.com, Wed 22 Jan 2014 11:12:49 -0600, wrote:
> > index 16a4b49..ce5180c 100644
> > --- a/extras/mini-os/pcifront.c
> > +++ b/extras/mini-os/pcifront.c
> > @@ -424,7 +424,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
> >                  continue;
> >              }
> >  
> > -            if (sscanf(s, "%x:%x:%x.%x", dom, bus, slot, fun) != 4) {
> > +            if (sscanf(s, "%x:%x:%x.%lx", dom, bus, slot, fun) != 4) {
> >                  printk("\"%s\" does not look like a PCI device address\n", s);
> >                  free(s);
> >                  continue;
> 
> Rather make fun an unsigned int; there is no reason why it should be an
> unsigned long. I'm still wondering where that comes from.

This whole file has an interesting mix of unsigned long vs unsigned int
for fun (it seems pretty consistent about dom, bus & slot). The use of
unsigned long seems to leak into the external interface to this file as
well.

For 4.4 this change is probably more appropriate at this juncture,
rather than shaving the yak.

> 
> The rest seems OK to me.

From xen-devel-bounces@lists.xen.org Thu Jan 23 10:51:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6HsV-0005Xp-25; Thu, 23 Jan 2014 10:51:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6HsT-0005Xk-0D
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:51:17 +0000
Received: from [85.158.143.35:52132] by server-1.bemta-4.messagelabs.com id
	7B/29-02132-424F0E25; Thu, 23 Jan 2014 10:51:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390474274!274248!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26165 invoked from network); 23 Jan 2014 10:51:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:51:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93637256"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 10:51:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:51:13 -0500
Message-ID: <1390474272.24595.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Jan 2014 10:51:12 +0000
In-Reply-To: <c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Don Slutz <dslutz@verizon.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 18:35 -0500, Konrad Rzeszutek Wilk wrote:
> >[root@dcs-xen-54 ~]# xl save -p 6 /big/xl-save/centos-6.4-x86_64.0.save
> >Saving to /big/xl-save/centos-6.4-x86_64.0.save new xl format (info
> >0x0/0x0/560)
> >xc: Saving memory: iter 0 (last sent 0 skipped 0): 1044481/1044481 
> >100%
> >[root@dcs-xen-54 ~]# xl unpause 6
> >
> >has left domain #6 in a bad disk state (on VGA):
> >
> >INFO: task jbd2/dm-0-8:386 blocked for more than 120 seconds.
> >INFO: task sadc:22139 blocked for more than 120 seconds.
> >
> >
> >However "xl restore -V /big/xl-save/centos-6.4-x86_64.0.save" looks to
> >work fine.
> >
> >2nd time the unpause failed with:
> >[root@dcs-xen-54 ~]# xl unpause 17
> >
> >WARNING: g.e. still in use!
> >WARNING: g.e. still in use!
> >WARNING: g.e. still in use!
> >pm_op(): platform_pm_resume+0x0/0x50 returns -19
> >PM: Device i8042 failed to resume: error -19
> >INFO: task sadc:22164 blocked for more than 120 seconds.
> >"echo 0 >..."
> >INFO: task sadc:22164 blocked for more than 120 seconds.
> >
> >  
> >[root@dcs-xen-54 ~]# xl des 17
> >[root@dcs-xen-54 ~]# xl restore -V
> >/big/xl-save/centos-6.4-x86_64.0.save
> >
> >
> >Not sure if this is expected or not.
> 
> I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect.

It seems to me that it would be the same issue, but I can't recall if
the fix from Ian J was in rc2 or not.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 10:56:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Hx3-0005gh-Fg; Thu, 23 Jan 2014 10:56:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6Hx1-0005gY-FD
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:55:59 +0000
Received: from [85.158.139.211:22984] by server-10.bemta-5.messagelabs.com id
	4F/60-01405-E35F0E25; Thu, 23 Jan 2014 10:55:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390474556!11432396!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22896 invoked from network); 23 Jan 2014 10:55:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95680187"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 10:55:37 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:55:37 -0500
Message-ID: <52E0F528.7060302@citrix.com>
Date: Thu, 23 Jan 2014 10:55:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: clean up code
 in xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 01:36, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> This patch removes grant transfer code from netfront, and improves the
> mechanism for ending grant access, since gnttab_end_foreign_access_ref may
> fail when the grant entry is currently used for reading or writing.
> 
> * release grant reference and skb for tx/rx path; use get_page/put_page to
> ensure the page is released when grant access has completed successfully.
> * change corresponding code in xen-blkfront/xen-tpmfront/xen-pcifront because
> of the code change for put_page in gnttab_end_foreign_access.
> * clean up grant transfer code kept from the old netfront (2.6.18), which
> granted pages for access/map and transfer. Grant transfer is deprecated in
> the current netfront, so remove the corresponding release code for transfer.
> 
> V3: Changes as suggested by David Vrabel; ensure pages are not freed until
> grant access is ended.
> 
> V2: improve patch comments.
> Signed-off-by: Annie Li <annie.li@oracle.com>
> ---
>  drivers/block/xen-blkfront.c    |   25 ++++++++---
>  drivers/char/tpm/xen-tpmfront.c |    7 +++-
>  drivers/net/xen-netfront.c      |   93 ++++++++++++--------------------------
>  drivers/pci/xen-pcifront.c      |    7 +++-
>  drivers/xen/grant-table.c       |    4 +-
>  5 files changed, 63 insertions(+), 73 deletions(-)

I don't understand why you've made all these unnecessary changes to the
other frontends and grant-table.c.

The xen-netfront.c changes are fine on their own.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 10:56:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Hx3-0005gh-Fg; Thu, 23 Jan 2014 10:56:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6Hx1-0005gY-FD
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:55:59 +0000
Received: from [85.158.139.211:22984] by server-10.bemta-5.messagelabs.com id
	4F/60-01405-E35F0E25; Thu, 23 Jan 2014 10:55:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390474556!11432396!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22896 invoked from network); 23 Jan 2014 10:55:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95680187"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 10:55:37 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:55:37 -0500
Message-ID: <52E0F528.7060302@citrix.com>
Date: Thu, 23 Jan 2014 10:55:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: clean up code
 in xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 01:36, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> This patch removes grant transfer code from netfront, and improves the
> mechanism for ending grant access, since gnttab_end_foreign_access_ref may
> fail when the grant entry is currently used for reading or writing.
> 
> * release grant reference and skb for tx/rx path; use get_page/put_page to
> ensure the page is released when grant access is completed successfully.
> * change corresponding code in xen-blkfront/xen-tpmfront/xen-pcifront because
> of the code change for put_page in gnttab_end_foreign_access.
> * clean up grant transfer code kept from the old netfront (2.6.18), which
> grants pages for access/map and transfer. Grant transfer is deprecated in the
> current netfront, so remove the corresponding release code for transfer.
> 
> V3: Changes as suggested by David Vrabel; ensure pages are not freed until
> grant access is ended.
> 
> V2: improve patch comments.
> Signed-off-by: Annie Li <annie.li@oracle.com>
> ---
>  drivers/block/xen-blkfront.c    |   25 ++++++++---
>  drivers/char/tpm/xen-tpmfront.c |    7 +++-
>  drivers/net/xen-netfront.c      |   93 ++++++++++++--------------------------
>  drivers/pci/xen-pcifront.c      |    7 +++-
>  drivers/xen/grant-table.c       |    4 +-
>  5 files changed, 63 insertions(+), 73 deletions(-)

I don't understand why you've made all these unnecessary changes to the
other frontends and grant-table.c.

The xen-netfront.c changes are fine on their own.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 10:56:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 10:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6HxR-0005iw-IV; Thu, 23 Jan 2014 10:56:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6HxP-0005if-PA
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 10:56:24 +0000
Received: from [85.158.139.211:29640] by server-5.bemta-5.messagelabs.com id
	96/E7-14928-755F0E25; Thu, 23 Jan 2014 10:56:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390474581!1179016!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29472 invoked from network); 23 Jan 2014 10:56:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:56:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93638285"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 10:56:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 05:56:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6HxL-0006L9-H1;
	Thu, 23 Jan 2014 10:56:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6HxK-0002ML-2p;
	Thu, 23 Jan 2014 10:56:18 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21216.62800.746512.422459@mariner.uk.xensource.com>
Date: Thu, 23 Jan 2014 10:56:16 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52E09513.6060603@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> It appears the timeout_modify callback is invoked on a previously
> deregistered timeout.  I didn't notice the segfault when running
> libvirtd under valgrind, but did see

Hmmm.  This could be a libxl problem.  I'll review the code again and
maybe think about adding some assertions.

But I've slept on this and I had an idea about libvirt's rescheduling
timeouts.  Can you point me at the libvirt branch you're using (a git
tree would be ideal) and I'll take a look at that too?

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:06:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6I73-0006Sj-LB; Thu, 23 Jan 2014 11:06:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6I71-0006Se-KY
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:06:19 +0000
Received: from [193.109.254.147:48054] by server-13.bemta-14.messagelabs.com
	id DE/BB-19374-AA7F0E25; Thu, 23 Jan 2014 11:06:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390475176!10397920!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16072 invoked from network); 23 Jan 2014 11:06:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:06:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93640830"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 11:06:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:06:15 -0500
Message-ID: <1390475173.24595.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xenmail43267@gmail.com>
Date: Thu, 23 Jan 2014 11:06:13 +0000
In-Reply-To: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, Mike Neilsen <mneilsen@acm.org>,
	stefano.stabellini@citrix.com, samuel.thibault@ens-lyon.org,
	alex.sharp@orionvm.com
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 11:41 -0600, xenmail43267@gmail.com wrote:
> From: Mike Neilsen <mneilsen@acm.org>
> 
> This is a fix for bug 35:
> http://bugs.xenproject.org/xen/bug/35
> 
> This bug report describes several format string mismatches which prevent
> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
> copy of Alex Sharp's original patch with the following modifications:
> 
> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
>   avoid stack corruption
> * Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
>   unsigned long in pcifront_physical_to_virtual and related functions
>   (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
> 
> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
> 
> Signed-off-by: Mike Neilsen <mneilsen@acm.org>
> 
> ---
> Changed since v1:
> * Change "fun" arguments into unsigned ints

Thanks for shaving that yak! Since you've done it I obviously rescind
my previous comments ;-) (I should have read all my mail first).

Acked-by: Ian Campbell <ian.campbell@citrix.com>

George -- as a build fix for a compiler which is now in the wild I think
this should go in for 4.4.

> ---
>  extras/mini-os/fbfront.c          |  4 ++--
>  extras/mini-os/include/pcifront.h | 12 ++++++------
>  extras/mini-os/pcifront.c         | 14 +++++++-------
>  extras/mini-os/xenbus/xenbus.c    |  5 +++--
>  4 files changed, 18 insertions(+), 17 deletions(-)
> 
> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
> index 1e01513..9cc07b4 100644
> --- a/extras/mini-os/fbfront.c
> +++ b/extras/mini-os/fbfront.c
> @@ -105,7 +105,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> @@ -468,7 +468,7 @@ again:
>          free(err);
>      }
>  
> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>      if (err) {
>          message = "writing page-ref";
>          goto abort_transaction;
> diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/pcifront.h
> index 0a6be8e..1b05963 100644
> --- a/extras/mini-os/include/pcifront.h
> +++ b/extras/mini-os/include/pcifront.h
> @@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op);
>  void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int domain, unsigned int bus, unsigned slot, unsigned int fun));
>  int pcifront_conf_read(struct pcifront_dev *dev,
>                         unsigned int dom,
> -                       unsigned int bus, unsigned int slot, unsigned long fun,
> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>                         unsigned int off, unsigned int size, unsigned int *val);
>  int pcifront_conf_write(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun,
> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>                          unsigned int off, unsigned int size, unsigned int val);
>  int pcifront_enable_msi(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun);
> +                        unsigned int bus, unsigned int slot, unsigned int fun);
>  int pcifront_disable_msi(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun);
> +                         unsigned int bus, unsigned int slot, unsigned int fun);
>  int pcifront_enable_msix(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun,
> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>                           struct xen_msix_entry *entries, int n);
>  int pcifront_disable_msix(struct pcifront_dev *dev,
>                            unsigned int dom,
> -                          unsigned int bus, unsigned int slot, unsigned long fun);
> +                          unsigned int bus, unsigned int slot, unsigned int fun);
>  void shutdown_pcifront(struct pcifront_dev *dev);
> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
> index 16a4b49..0fc5b30 100644
> --- a/extras/mini-os/pcifront.c
> +++ b/extras/mini-os/pcifront.c
> @@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>                                    unsigned int *dom,
>                                    unsigned int *bus,
>                                    unsigned int *slot,
> -                                  unsigned long *fun)
> +                                  unsigned int *fun)
>  {
>      /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
>         should be enough to hold the paths we need to construct, even
> @@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op)
>  
>  int pcifront_conf_read(struct pcifront_dev *dev,
>                         unsigned int dom,
> -                       unsigned int bus, unsigned int slot, unsigned long fun,
> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>                         unsigned int off, unsigned int size, unsigned int *val)
>  {
>      struct xen_pci_op op;
> @@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
>  
>  int pcifront_conf_write(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun,
> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>                          unsigned int off, unsigned int size, unsigned int val)
>  {
>      struct xen_pci_op op;
> @@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
>  
>  int pcifront_enable_msi(struct pcifront_dev *dev,
>                          unsigned int dom,
> -                        unsigned int bus, unsigned int slot, unsigned long fun)
> +                        unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> @@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
>  
>  int pcifront_disable_msi(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun)
> +                         unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> @@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
>  
>  int pcifront_enable_msix(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun,
> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>                           struct xen_msix_entry *entries, int n)
>  {
>      struct xen_pci_op op;
> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>  
>  int pcifront_disable_msix(struct pcifront_dev *dev,
>                            unsigned int dom,
> -                          unsigned int bus, unsigned int slot, unsigned long fun)
> +                          unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
> index ee1691b..c5d9b02 100644
> --- a/extras/mini-os/xenbus/xenbus.c
> +++ b/extras/mini-os/xenbus/xenbus.c
> @@ -15,6 +15,7 @@
>   *
>   ****************************************************************************
>   **/
> +#include <inttypes.h>
>  #include <mini-os/os.h>
>  #include <mini-os/mm.h>
>  #include <mini-os/traps.h>
> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>      err = errmsg(rep);
>      if (err)
>  	return err;
> -    sscanf((char *)(rep + 1), "%u", xbt);
> +    sscanf((char *)(rep + 1), "%lu", xbt);
>      free(rep);
>      return NULL;
>  }
> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>      domid_t ret;
>  
>      BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
> -    sscanf(dom_id, "%d", &ret);
> +    sscanf(dom_id, "%"SCNd16, &ret);
>  
>      return ret;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  
>  int pcifront_enable_msix(struct pcifront_dev *dev,
>                           unsigned int dom,
> -                         unsigned int bus, unsigned int slot, unsigned long fun,
> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>                           struct xen_msix_entry *entries, int n)
>  {
>      struct xen_pci_op op;
> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>  
>  int pcifront_disable_msix(struct pcifront_dev *dev,
>                            unsigned int dom,
> -                          unsigned int bus, unsigned int slot, unsigned long fun)
> +                          unsigned int bus, unsigned int slot, unsigned int fun)
>  {
>      struct xen_pci_op op;
>  
> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
> index ee1691b..c5d9b02 100644
> --- a/extras/mini-os/xenbus/xenbus.c
> +++ b/extras/mini-os/xenbus/xenbus.c
> @@ -15,6 +15,7 @@
>   *
>   ****************************************************************************
>   **/
> +#include <inttypes.h>
>  #include <mini-os/os.h>
>  #include <mini-os/mm.h>
>  #include <mini-os/traps.h>
> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>      err = errmsg(rep);
>      if (err)
>  	return err;
> -    sscanf((char *)(rep + 1), "%u", xbt);
> +    sscanf((char *)(rep + 1), "%lu", xbt);
>      free(rep);
>      return NULL;
>  }
> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>      domid_t ret;
>  
>      BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
> -    sscanf(dom_id, "%d", &ret);
> +    sscanf(dom_id, "%"SCNd16, &ret);
>  
>      return ret;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:17:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6IHM-00073z-1q; Thu, 23 Jan 2014 11:17:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6IHK-00073u-Hc
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:16:58 +0000
Received: from [193.109.254.147:22940] by server-13.bemta-14.messagelabs.com
	id AD/4F-19374-92AF0E25; Thu, 23 Jan 2014 11:16:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390475816!12635937!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22336 invoked from network); 23 Jan 2014 11:16:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 11:16:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Jan 2014 11:17:06 +0000
Message-Id: <52E1083602000078001161D5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 23 Jan 2014 11:16:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
In-Reply-To: <1390455621.2415.56.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.01.14 at 06:40, Eric Houby <ehouby@yahoo.com> wrote:
> Adding acpi_skip_timer_override to the xen command line did allow Dom0
> to boot but the video display for Dom0 did not work.  The following log
> was still seen in the logs.
> 
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address
> = 0xfdf8020140, flags = 0x8

Okay - we clearly need to understand where these faults in the
HT range come from. I'm adding Suravee for that reason.

> Including iommu=no-amd-iommu-perdev-intremap along with
> acpi_skip_timer_override cleared the above errors and Dom0 video was
> functional.  Attempts to pass through a secondary VGA to a Win8.1 guest
> were not successful, with these logs in the xen console:
> 
> (XEN) AMD-Vi: IO_PAGE_FAULT: domain = 1, device id = 0x700, fault
> address = 0x11620df40, flags = 0

I don't think we care about any such advanced functionality right
now.

> Attached are two xen console boot logs, one with just
> acpi_skip_timer_override in the xen command line and the other with
> acpi_skip_timer_override and iommu=no-amd-iommu-perdev-intremap.  The
> long file names I used make it clear which is which.

That's not really better:

(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=0 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.

We specifically want to avoid all of these workarounds.

> I tried to see how a native kernel handled the timer bug you mentioned
> but I could not find a corresponding log when booting 3.12.7 on bare
> hardware. This boot log is attached as barehw.txt, maybe there is
> something you can find.

But you didn't turn on interrupt remapping, or it got forcibly
disabled:

[Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
AMD-Vi: Disabling interrupt remapping

so the comparison isn't really between equal configurations. If
the kernel responds to that mentioned firmware bug by forcing
interrupt remapping off, maybe we would have to do the same...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:19:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6IJu-0007b8-SP; Thu, 23 Jan 2014 11:19:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6IJs-0007b2-Ng
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:19:36 +0000
Received: from [85.158.137.68:11261] by server-9.bemta-3.messagelabs.com id
	D0/14-13104-8CAF0E25; Thu, 23 Jan 2014 11:19:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390475973!9692103!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16167 invoked from network); 23 Jan 2014 11:19:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:19:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95686099"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 11:19:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:19:32 -0500
Message-ID: <1390475971.24595.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Jan 2014 11:19:31 +0000
In-Reply-To: <20140122210154.GC9585@phenom.dumpdata.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140122210154.GC9585@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-22 at 16:01 -0500, Konrad Rzeszutek Wilk wrote:
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 1eac073..b626c79 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -77,12 +77,22 @@ static u64 start_dma_addr;
> >  
> >  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
> >  {
> > -	return phys_to_machine(XPADDR(paddr)).maddr;
> 
> Why not change 'phys_addr_t' to be unsigned long? Wouldn't
> that solve the problem as well?

It would, but it is fundamentally the wrong thing to do.

If the kernel is configured without LPAE (ARM's PAE extensions) then it
is configured for a 32-bit physical address space, throughout its page
table handling and elsewhere. Pretending that physical addresses are
64 bits wide would have all sorts of knock-on effects, both in terms of
type mismatches and in doubling the space used by data structures.

Enabling LPAE would also solve this issue but we don't want to force
that constraint onto Xen guests or dom0, not least because of the
knock-on effect on distro installers etc.

There is nothing fundamentally wrong with a 32-bit phys addr alongside a
64-bit dma addr, and it is the correct solution for this configuration.

> 
> Or make 'xmaddr_t' and 'xpaddr_t' use 'unsigned long' instead
> of phys_addr_t?

phys_addr_t is unsigned long already, so that won't help. And you don't
want to expand those for the same reasons you don't want to expand
phys_addr_t.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6INw-0007lf-Lg; Thu, 23 Jan 2014 11:23:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6INv-0007la-5I
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:23:47 +0000
Received: from [85.158.143.35:52975] by server-1.bemta-4.messagelabs.com id
	5A/EF-02132-2CBF0E25; Thu, 23 Jan 2014 11:23:46 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390476224!284045!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23269 invoked from network); 23 Jan 2014 11:23:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:23:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93644534"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 11:23:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:23:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6INp-0007zU-SK;
	Thu, 23 Jan 2014 11:23:42 +0000
Message-ID: <52E0FBBC.1010102@eu.citrix.com>
Date: Thu, 23 Jan 2014 11:23:40 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.1.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, <xenmail43267@gmail.com>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
	<1390475173.24595.49.camel@kazak.uk.xensource.com>
In-Reply-To: <1390475173.24595.49.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	samuel.thibault@ens-lyon.org, alex.sharp@orionvm.com
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/23/2014 11:06 AM, Ian Campbell wrote:
> On Wed, 2014-01-22 at 11:41 -0600, xenmail43267@gmail.com wrote:
>> From: Mike Neilsen <mneilsen@acm.org>
>>
>> This is a fix for bug 35:
>> http://bugs.xenproject.org/xen/bug/35
>>
>> This bug report describes several format string mismatches which prevent
>> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
>> copy of Alex Sharp's original patch with the following modifications:
>>
>> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
>>    avoid stack corruption
>> * Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
>>    unsigned long in pcifront_physical_to_virtual and related functions
>>    (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
>>
>> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
>>
>> Signed-off-by: Mike Neilsen <mneilsen@acm.org>
>>
>> ---
>> Changed since v1:
>> * Change "fun" arguments into unsigned ints
>
> Thanks for shaving that yak! Since you've done it I obviously rescind
> my previous comments ;-) (I should have read all my mail first).
>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> George -- as a build fix for a compiler which is now in the wild I think
> this should go in for 4.4.

I agree.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
>> ---
>>   extras/mini-os/fbfront.c          |  4 ++--
>>   extras/mini-os/include/pcifront.h | 12 ++++++------
>>   extras/mini-os/pcifront.c         | 14 +++++++-------
>>   extras/mini-os/xenbus/xenbus.c    |  5 +++--
>>   4 files changed, 18 insertions(+), 17 deletions(-)
>>
>> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
>> index 1e01513..9cc07b4 100644
>> --- a/extras/mini-os/fbfront.c
>> +++ b/extras/mini-os/fbfront.c
>> @@ -105,7 +105,7 @@ again:
>>           free(err);
>>       }
>>
>> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
>> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>>       if (err) {
>>           message = "writing page-ref";
>>           goto abort_transaction;
>> @@ -468,7 +468,7 @@ again:
>>           free(err);
>>       }
>>
>> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
>> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>>       if (err) {
>>           message = "writing page-ref";
>>           goto abort_transaction;
>> diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/pcifront.h
>> index 0a6be8e..1b05963 100644
>> --- a/extras/mini-os/include/pcifront.h
>> +++ b/extras/mini-os/include/pcifront.h
>> @@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op);
>>   void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int domain, unsigned int bus, unsigned slot, unsigned int fun));
>>   int pcifront_conf_read(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned long fun,
>> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>>                          unsigned int off, unsigned int size, unsigned int *val);
>>   int pcifront_conf_write(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun,
>> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>>                           unsigned int off, unsigned int size, unsigned int val);
>>   int pcifront_enable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun);
>> +                        unsigned int bus, unsigned int slot, unsigned int fun);
>>   int pcifront_disable_msi(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun);
>> +                         unsigned int bus, unsigned int slot, unsigned int fun);
>>   int pcifront_enable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>>                            struct xen_msix_entry *entries, int n);
>>   int pcifront_disable_msix(struct pcifront_dev *dev,
>>                             unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned long fun);
>> +                          unsigned int bus, unsigned int slot, unsigned int fun);
>>   void shutdown_pcifront(struct pcifront_dev *dev);
>> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
>> index 16a4b49..0fc5b30 100644
>> --- a/extras/mini-os/pcifront.c
>> +++ b/extras/mini-os/pcifront.c
>> @@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>>                                     unsigned int *dom,
>>                                     unsigned int *bus,
>>                                     unsigned int *slot,
>> -                                  unsigned long *fun)
>> +                                  unsigned int *fun)
>>   {
>>       /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
>>          should be enough to hold the paths we need to construct, even
>> @@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op)
>>
>>   int pcifront_conf_read(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned long fun,
>> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>>                          unsigned int off, unsigned int size, unsigned int *val)
>>   {
>>       struct xen_pci_op op;
>> @@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
>>
>>   int pcifront_conf_write(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun,
>> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>>                           unsigned int off, unsigned int size, unsigned int val)
>>   {
>>       struct xen_pci_op op;
>> @@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
>>
>>   int pcifront_enable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun)
>> +                        unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> @@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
>>
>>   int pcifront_disable_msi(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun)
>> +                         unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> @@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
>>
>>   int pcifront_enable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>>                            struct xen_msix_entry *entries, int n)
>>   {
>>       struct xen_pci_op op;
>> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>>
>>   int pcifront_disable_msix(struct pcifront_dev *dev,
>>                             unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned long fun)
>> +                          unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
>> index ee1691b..c5d9b02 100644
>> --- a/extras/mini-os/xenbus/xenbus.c
>> +++ b/extras/mini-os/xenbus/xenbus.c
>> @@ -15,6 +15,7 @@
>>    *
>>    ****************************************************************************
>>    **/
>> +#include <inttypes.h>
>>   #include <mini-os/os.h>
>>   #include <mini-os/mm.h>
>>   #include <mini-os/traps.h>
>> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>>       err = errmsg(rep);
>>       if (err)
>>   	return err;
>> -    sscanf((char *)(rep + 1), "%u", xbt);
>> +    sscanf((char *)(rep + 1), "%lu", xbt);
>>       free(rep);
>>       return NULL;
>>   }
>> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>>       domid_t ret;
>>
>>       BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
>> -    sscanf(dom_id, "%d", &ret);
>> +    sscanf(dom_id, "%"SCNd16, &ret);
>>
>>       return ret;
>>   }
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6INw-0007lf-Lg; Thu, 23 Jan 2014 11:23:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6INv-0007la-5I
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:23:47 +0000
Received: from [85.158.143.35:52975] by server-1.bemta-4.messagelabs.com id
	5A/EF-02132-2CBF0E25; Thu, 23 Jan 2014 11:23:46 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390476224!284045!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23269 invoked from network); 23 Jan 2014 11:23:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:23:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93644534"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 11:23:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:23:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6INp-0007zU-SK;
	Thu, 23 Jan 2014 11:23:42 +0000
Message-ID: <52E0FBBC.1010102@eu.citrix.com>
Date: Thu, 23 Jan 2014 11:23:40 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.1.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, <xenmail43267@gmail.com>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
	<1390475173.24595.49.camel@kazak.uk.xensource.com>
In-Reply-To: <1390475173.24595.49.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	samuel.thibault@ens-lyon.org, alex.sharp@orionvm.com
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/23/2014 11:06 AM, Ian Campbell wrote:
> On Wed, 2014-01-22 at 11:41 -0600, xenmail43267@gmail.com wrote:
>> From: Mike Neilsen <mneilsen@acm.org>
>>
>> This is a fix for bug 35:
>> http://bugs.xenproject.org/xen/bug/35
>>
>> This bug report describes several format string mismatches which prevent
>> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
>> copy of Alex Sharp's original patch with the following modifications:
>>
>> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
>>    avoid stack corruption
>> * Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
>>    unsigned long in pcifront_physical_to_virtual and related functions
>>    (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
>>
>> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
>>
>> Signed-off-by: Mike Neilsen <mneilsen@acm.org>
>>
>> ---
>> Changed since v1:
>> * Change "fun" arguments into unsigned ints
>
> Thanks for shaving that yak! Since you've done it I obviously rescind
> my previous comments ;-) (I should have read all my mail first).
>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> George -- as a build fix for a compiler which is now in the wild I think
> this should go in for 4.4.

I agree.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
>> ---
>>   extras/mini-os/fbfront.c          |  4 ++--
>>   extras/mini-os/include/pcifront.h | 12 ++++++------
>>   extras/mini-os/pcifront.c         | 14 +++++++-------
>>   extras/mini-os/xenbus/xenbus.c    |  5 +++--
>>   4 files changed, 18 insertions(+), 17 deletions(-)
>>
>> diff --git a/extras/mini-os/fbfront.c b/extras/mini-os/fbfront.c
>> index 1e01513..9cc07b4 100644
>> --- a/extras/mini-os/fbfront.c
>> +++ b/extras/mini-os/fbfront.c
>> @@ -105,7 +105,7 @@ again:
>>           free(err);
>>       }
>>
>> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
>> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>>       if (err) {
>>           message = "writing page-ref";
>>           goto abort_transaction;
>> @@ -468,7 +468,7 @@ again:
>>           free(err);
>>       }
>>
>> -    err = xenbus_printf(xbt, nodename, "page-ref","%u", virt_to_mfn(s));
>> +    err = xenbus_printf(xbt, nodename, "page-ref","%lu", virt_to_mfn(s));
>>       if (err) {
>>           message = "writing page-ref";
>>           goto abort_transaction;
>> diff --git a/extras/mini-os/include/pcifront.h b/extras/mini-os/include/pcifront.h
>> index 0a6be8e..1b05963 100644
>> --- a/extras/mini-os/include/pcifront.h
>> +++ b/extras/mini-os/include/pcifront.h
>> @@ -7,23 +7,23 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op);
>>   void pcifront_scan(struct pcifront_dev *dev, void (*fun)(unsigned int domain, unsigned int bus, unsigned slot, unsigned int fun));
>>   int pcifront_conf_read(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned long fun,
>> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>>                          unsigned int off, unsigned int size, unsigned int *val);
>>   int pcifront_conf_write(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun,
>> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>>                           unsigned int off, unsigned int size, unsigned int val);
>>   int pcifront_enable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun);
>> +                        unsigned int bus, unsigned int slot, unsigned int fun);
>>   int pcifront_disable_msi(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun);
>> +                         unsigned int bus, unsigned int slot, unsigned int fun);
>>   int pcifront_enable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>>                            struct xen_msix_entry *entries, int n);
>>   int pcifront_disable_msix(struct pcifront_dev *dev,
>>                             unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned long fun);
>> +                          unsigned int bus, unsigned int slot, unsigned int fun);
>>   void shutdown_pcifront(struct pcifront_dev *dev);
>> diff --git a/extras/mini-os/pcifront.c b/extras/mini-os/pcifront.c
>> index 16a4b49..0fc5b30 100644
>> --- a/extras/mini-os/pcifront.c
>> +++ b/extras/mini-os/pcifront.c
>> @@ -384,7 +384,7 @@ int pcifront_physical_to_virtual (struct pcifront_dev *dev,
>>                                     unsigned int *dom,
>>                                     unsigned int *bus,
>>                                     unsigned int *slot,
>> -                                  unsigned long *fun)
>> +                                  unsigned int *fun)
>>   {
>>       /* FIXME: the buffer sizing is a little lazy here. 10 extra bytes
>>          should be enough to hold the paths we need to construct, even
>> @@ -456,7 +456,7 @@ void pcifront_op(struct pcifront_dev *dev, struct xen_pci_op *op)
>>
>>   int pcifront_conf_read(struct pcifront_dev *dev,
>>                          unsigned int dom,
>> -                       unsigned int bus, unsigned int slot, unsigned long fun,
>> +                       unsigned int bus, unsigned int slot, unsigned int fun,
>>                          unsigned int off, unsigned int size, unsigned int *val)
>>   {
>>       struct xen_pci_op op;
>> @@ -486,7 +486,7 @@ int pcifront_conf_read(struct pcifront_dev *dev,
>>
>>   int pcifront_conf_write(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun,
>> +                        unsigned int bus, unsigned int slot, unsigned int fun,
>>                           unsigned int off, unsigned int size, unsigned int val)
>>   {
>>       struct xen_pci_op op;
>> @@ -513,7 +513,7 @@ int pcifront_conf_write(struct pcifront_dev *dev,
>>
>>   int pcifront_enable_msi(struct pcifront_dev *dev,
>>                           unsigned int dom,
>> -                        unsigned int bus, unsigned int slot, unsigned long fun)
>> +                        unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> @@ -538,7 +538,7 @@ int pcifront_enable_msi(struct pcifront_dev *dev,
>>
>>   int pcifront_disable_msi(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun)
>> +                         unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> @@ -560,7 +560,7 @@ int pcifront_disable_msi(struct pcifront_dev *dev,
>>
>>   int pcifront_enable_msix(struct pcifront_dev *dev,
>>                            unsigned int dom,
>> -                         unsigned int bus, unsigned int slot, unsigned long fun,
>> +                         unsigned int bus, unsigned int slot, unsigned int fun,
>>                            struct xen_msix_entry *entries, int n)
>>   {
>>       struct xen_pci_op op;
>> @@ -595,7 +595,7 @@ int pcifront_enable_msix(struct pcifront_dev *dev,
>>
>>   int pcifront_disable_msix(struct pcifront_dev *dev,
>>                             unsigned int dom,
>> -                          unsigned int bus, unsigned int slot, unsigned long fun)
>> +                          unsigned int bus, unsigned int slot, unsigned int fun)
>>   {
>>       struct xen_pci_op op;
>>
>> diff --git a/extras/mini-os/xenbus/xenbus.c b/extras/mini-os/xenbus/xenbus.c
>> index ee1691b..c5d9b02 100644
>> --- a/extras/mini-os/xenbus/xenbus.c
>> +++ b/extras/mini-os/xenbus/xenbus.c
>> @@ -15,6 +15,7 @@
>>    *
>>    ****************************************************************************
>>    **/
>> +#include <inttypes.h>
>>   #include <mini-os/os.h>
>>   #include <mini-os/mm.h>
>>   #include <mini-os/traps.h>
>> @@ -672,7 +673,7 @@ char *xenbus_transaction_start(xenbus_transaction_t *xbt)
>>       err = errmsg(rep);
>>       if (err)
>>   	return err;
>> -    sscanf((char *)(rep + 1), "%u", xbt);
>> +    sscanf((char *)(rep + 1), "%lu", xbt);
>>       free(rep);
>>       return NULL;
>>   }
>> @@ -769,7 +770,7 @@ domid_t xenbus_get_self_id(void)
>>       domid_t ret;
>>
>>       BUG_ON(xenbus_read(XBT_NIL, "domid", &dom_id));
>> -    sscanf(dom_id, "%d", &ret);
>> +    sscanf(dom_id, "%"SCNd16, &ret);
>>
>>       return ret;
>>   }
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:28:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6IST-0007yY-Fm; Thu, 23 Jan 2014 11:28:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6ISR-0007wY-MZ
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:28:27 +0000
Received: from [85.158.139.211:13684] by server-13.bemta-5.messagelabs.com id
	E5/36-11357-BDCF0E25; Thu, 23 Jan 2014 11:28:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390476504!11484240!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19697 invoked from network); 23 Jan 2014 11:28:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:28:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93645523"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 11:28:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:28:23 -0500
Message-ID: <1390476502.24595.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Thu, 23 Jan 2014 11:28:22 +0000
In-Reply-To: <1390460890-27971-1-git-send-email-pranavkumar@linaro.org>
References: <1390460890-27971-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 12:38 +0530, Pranavkumar Sawargaonkar wrote:
> This patch adds reset support for the xgene arm64 platform.
> 
> V2:
> - Removed unnecessary mdelay in code.
> - Added iounmap of the base address.
> V1:
> -Initial patch.
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>  xen/arch/arm/platforms/xgene-storm.c |   24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 5b0bd5f..9901bf0 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -20,6 +20,9 @@
>  
>  #include <xen/config.h>
>  #include <asm/platform.h>
> +#include <xen/mm.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>

Do you really need all 3 of these?

>  #include <asm/gic.h>
>  
>  static uint32_t xgene_storm_quirks(void)
> @@ -107,6 +110,26 @@ err:
>      return ret;
>  }
>  
> +/*
> + * TODO: Get base address and mask from the device tree

How likely are these to change during the lifetime of Xen 4.4? I think
it cannot be ruled out. Doing this based on DT shouldn't be more than a
dozen lines of code -- see e.g. init_xen_time or gic_init.

You probably want to do the detection at start of day in the
platform->init hook so that you can call:
        dt_device_set_used_by(node, DOMID_XEN);
on it so it doesn't get given to dom0.

Otherwise I expect you will also want to add the relevant compatibility
string to the platform blacklist_dev list so it isn't given to dom0 for
that reason.

> + */
> +static void xgene_storm_reset(void)
> +{
> +    void __iomem *addr;
> +
> +    addr = ioremap_nocache(0x17000014UL, 0x100);

If you aren't going to use device tree please can you at least add a
#define for the address.

> +    if ( !addr )
> +    {
> +        dprintk(XENLOG_ERR, "Unable to map xgene reset address\n");
> +        return;
> +    }
> +
> +    /* Write mask 0x1 to base address */
> +    writel(0x1, addr);
> +
> +    iounmap(addr);
> +}
>  
>  static const char * const xgene_storm_dt_compat[] __initconst =
>  {
> @@ -116,6 +139,7 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>  
>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>      .compatible = xgene_storm_dt_compat,
> +    .reset = xgene_storm_reset,
>      .quirks = xgene_storm_quirks,
>      .specific_mapping = xgene_storm_specific_mapping,
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 11:54:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 11:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Irh-0001M9-Oj; Thu, 23 Jan 2014 11:54:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6Irg-0001M4-R4
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 11:54:33 +0000
Received: from [85.158.137.68:52563] by server-17.bemta-3.messagelabs.com id
	42/8C-15965-8F201E25; Thu, 23 Jan 2014 11:54:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390478069!10848037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13811 invoked from network); 23 Jan 2014 11:54:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 11:54:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="93651703"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 11:54:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 06:54:29 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6Irc-0008Qq-CC;
	Thu, 23 Jan 2014 11:54:28 +0000
Message-ID: <52E102F4.3060503@citrix.com>
Date: Thu, 23 Jan 2014 11:54:28 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <52DE4BD8.7060209@citrix.com>
In-Reply-To: <52DE4BD8.7060209@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/01/14 10:28, Andrew Cooper wrote:
> Hello,
>
> I participated in (a rather extended version of) the 4.4-rc2 test day,
> and rc2 got a full XenRT nightly run in the XenServer testing system.
>
> For the setup, the comparison is against XenServer trunk, which is
> currently Xen-4.3-staging based (plus patch queue), Linux 3.10.y dom0
> kernel, CentOS 6.4 based dom0 userspace.
>
> The tested version had Xen 4.4 (staging, as I needed the ABI fix) in
> place of Xen-4.3, but identical dom0 kernel, dom0 userspace, qemu,
> toolstack and windows PV drivers.
>
>
> The major issue identified is with Windows 8/8.1 and Server 2012/2012r2,
> which have problems on live migrate.  Some source of time is
> unexpectedly jumping forwards by two days, from the correct time to 2
> days in the future.  The observed result is that it loses its DHCP
> lease, drops its IP address, and networking ceases to work (it appears
> that Windows will not attempt to renew the lease itself).
>

This is caused by commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b
"x86/viridian: Time Reference Count MSR"

After double checking with the specification, it does appear to be
implemented as required (subject to a potential issue with multiple vcpu
guests).

I am currently experimenting to see whether hvm_get_guest_time() is
returning unexpected values, or whether it is returning expected values
and Windows is interpreting them differently.

At this point in the 4.4 release cycle, reverting the patch should be
seriously considered, although I would like to see whether it is
possible to work out why it is wrong and whether there is an obvious fix
first.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 12:29:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 12:29:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6JPC-0003a8-CB; Thu, 23 Jan 2014 12:29:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6JPB-0003Z8-KH
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 12:29:09 +0000
Received: from [85.158.139.211:54122] by server-9.bemta-5.messagelabs.com id
	CC/A2-15098-41B01E25; Thu, 23 Jan 2014 12:29:08 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390480147!11501923!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27938 invoked from network); 23 Jan 2014 12:29:07 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 12:29:07 -0000
Received: by mail-lb0-f171.google.com with SMTP id c11so1378912lbj.16
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 04:29:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=QUGByBHAqnczXvTLJAQ4TQMqrF09tPbRYpulogA7T/c=;
	b=VqGMw7VXF/CZyHQ2+mcoaIHqXsXbzxQztaCwhoCm2FETOZEaXsYLJlYOXOVV1i2DRf
	QhuYiigovqySfuT0nKqXxMX6Wb/AAS/j5wSAzb8lpnNQ658KYbLfZXBOSanDRUga4gPc
	2HeAWqjaZActqswtXTO4jxtXHtQmEtmnMnBh9E1AzWW7oAQbT9BBBcHQjO1WUXwwz+Xn
	cofycoPJf7PGceUhtAyQLEN1VXGPxVGWkwX/hR7zJVAud0mI40MMxvLZS3AMt3SfOgi5
	6B1VjhvdTKAuSzuaPLhJL/GBwgoGNh27rC1XoSPA80p119r9HqXUAE6sqtFl/yHMThFc
	cgLA==
X-Gm-Message-State: ALoCoQn0Fdwz5qIFKKanc0K3fmnGoDjzI95G8UU3ShQzh1vYfq0bI4pNkMwmSmk7KAB1TPg0Is2y
MIME-Version: 1.0
X-Received: by 10.112.172.69 with SMTP id ba5mr392933lbc.55.1390480147148;
	Thu, 23 Jan 2014 04:29:07 -0800 (PST)
Received: by 10.112.170.229 with HTTP; Thu, 23 Jan 2014 04:29:06 -0800 (PST)
In-Reply-To: <1390476502.24595.62.camel@kazak.uk.xensource.com>
References: <1390460890-27971-1-git-send-email-pranavkumar@linaro.org>
	<1390476502.24595.62.camel@kazak.uk.xensource.com>
Date: Thu, 23 Jan 2014 17:59:06 +0530
Message-ID: <CAAHg+HhqtXenJZEYAtH9D2AnB7QY2Q6yNyDBQ1z3hVq2NghGUA@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 23 January 2014 16:58, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-23 at 12:38 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds reset support for the xgene arm64 platform.
>>
>> V2:
>> - Removed unnecessary mdelay in code.
>> - Added iounmap of the base address.
>> V1:
>> -Initial patch.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>> ---
>>  xen/arch/arm/platforms/xgene-storm.c |   24 ++++++++++++++++++++++++
>>  1 file changed, 24 insertions(+)
>>
>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
>> index 5b0bd5f..9901bf0 100644
>> --- a/xen/arch/arm/platforms/xgene-storm.c
>> +++ b/xen/arch/arm/platforms/xgene-storm.c
>> @@ -20,6 +20,9 @@
>>
>>  #include <xen/config.h>
>>  #include <asm/platform.h>
>> +#include <xen/mm.h>
>> +#include <xen/vmap.h>
>> +#include <asm/io.h>
>
> Do you really need all 3 of these?
I will remove mm.h; vmap.h is required for iounmap, and io.h is for writel.

>
>>  #include <asm/gic.h>
>>
>>  static uint32_t xgene_storm_quirks(void)
>> @@ -107,6 +110,26 @@ err:
>>      return ret;
>>  }
>>
>> +/*
>> + * TODO: Get base address and mask from the device tree
>
> How likely are these to change during the lifetime of Xen 4.4? I think
> it cannot be ruled out. Doing this based on DT shouldn't be more than a
> dozen lines of code -- see e.g. init_xen_time or gic_init.
>
> You probably want to do the detection at start of day in the
> platform->init hook so that you can call:
>         dt_device_set_used_by(node, DOMID_XEN);
> on it so it doesn't get given to dom0.
>
> Otherwise I expect you will also want to add the relevant compatibility
> string to the platform blacklist_dev list so it isn't given to dom0 for
> that reason.

Thanks, I was not aware of this; let me try to fix it using the device tree only.

>
>> + */
>> +static void xgene_storm_reset(void)
>> +{
>> +    void __iomem *addr;
>> +
>> +    addr = ioremap_nocache(0x17000014UL, 0x100);
>
> If you aren't going to use device tree please can you at least add a
> #define for the address.

I think I can try using the device tree only instead of hard-coding.

>
>> +    if ( !addr )
>> +    {
>> +        dprintk(XENLOG_ERR, "Unable to map xgene reset address\n");
>> +        return;
>> +    }
>> +
>> +    /* Write mask 0x1 to base address */
>> +    writel(0x1, addr);
>> +
>> +    iounmap(addr);
>> +}
>>
>>  static const char * const xgene_storm_dt_compat[] __initconst =
>>  {
>> @@ -116,6 +139,7 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>>
>>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>>      .compatible = xgene_storm_dt_compat,
>> +    .reset = xgene_storm_reset,
>>      .quirks = xgene_storm_quirks,
>>      .specific_mapping = xgene_storm_specific_mapping,
>>
>
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 12:49:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 12:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Jii-0004tw-Bm; Thu, 23 Jan 2014 12:49:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6Jig-0004tU-49; Thu, 23 Jan 2014 12:49:18 +0000
Received: from [85.158.137.68:57423] by server-5.bemta-3.messagelabs.com id
	50/6D-25188-CCF01E25; Thu, 23 Jan 2014 12:49:16 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390481355!10910256!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21130 invoked from network); 23 Jan 2014 12:49:16 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-4.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	23 Jan 2014 12:49:16 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6JiY-0004l0-CR; Thu, 23 Jan 2014 12:49:10 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6JiW-0005PJ-M7; Thu, 23 Jan 2014 12:49:10 +0000
Date: Thu, 23 Jan 2014 12:49:08 +0000
Message-Id: <E1W6JiW-0005PJ-M7@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 83 - Out-of-memory condition
 yielding memory corruption during IRQ setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                    Xen Security Advisory XSA-83
                              version 2

       Out-of-memory condition yielding memory corruption during IRQ setup

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

When setting up the IRQ for a passed through physical device, a flaw
in the error handling could result in a memory allocation being used
after it is freed, and then freed a second time.  This would typically
result in memory corruption.

IMPACT
======

Malicious guest administrators can trigger a use-after-free error, resulting
in hypervisor memory corruption.  The effects of memory corruption could be
anything, including a host-wide denial of service, or privilege escalation.

VULNERABLE SYSTEMS
==================

Xen 4.2.x and later are vulnerable.
Xen 4.1.x and earlier are not vulnerable.

Only systems making use of device passthrough are vulnerable.

Only systems with a 64-bit hypervisor configured to support more than 128
CPUs or with a 32-bit hypervisor configured to support more than 64 CPUs are
vulnerable.

MITIGATION
==========

This issue can be avoided by not assigning PCI devices to untrusted guests on
systems supporting Intel VT-d or AMD-Vi.

CREDITS
=======

This issue was discovered by Coverity Scan, prompted by modelling
improvements contributed by Andrew Cooper.  The issue was diagnosed
by Matthew Daley and Andrew Cooper.  The patch was prepared by Andrew
Cooper.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa83.patch                 Xen 4.2.x, Xen 4.3.x, xen-unstable

$ sha256sum xsa83*.patch
71ba62c024ed867f99f335ed63d7e04a7981d348cc29a3718e5c48f15a1e0fb1  xsa83.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4Q+yAAoJEIP+FMlX6CvZjQQIALVrMD9bMEfBbQJ6ZvZZBP2f
g8y7FvzGMC2fiP1gPyOxwHYI2lAsT6euiFgEunamlWAtTpgFhTeXLrx/pbdKpMv9
AwWA94umPrSSNVoUGtX9JqPcg9lzWCxgTjkKcmGyH6Yo/Z78juYeQMTss3/DQ0ms
asIYS011i/6lyKDo1XKJiabzOYI0F/R1JQEDnaVZBTk57+1Ux+9acnt5KK1dt9t3
KpcOQCiJKqVDFMaQ0NmTUQS7pC/5N/QZRe5AdMG1LhJI7Yw5tbHnTxdSYxnprQEn
KUJfYQYycp4XJU7U6GMFE0Ybqf3FMlNqS+KHcetgN7XA6C8xjyDoMIUsGzA9/3E=
=P/H4
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa83.patch"
Content-Disposition: attachment; filename="xsa83.patch"
Content-Transfer-Encoding: base64

eDg2L2lycTogYXZvaWQgdXNlLWFmdGVyLWZyZWUgb24gZXJyb3IgcGF0aCBp
biBwaXJxX2d1ZXN0X2JpbmQoKQoKVGhpcyBpcyBYU0EtODMuCgpDb3Zlcml0
eS1JRDogMTE0Njk1MgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxh
bmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L2lycS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9pcnEuYwpAQCAtMTU5MCw4ICsx
NTkwLDcgQEAgaW50IHBpcnFfZ3Vlc3RfYmluZChzdHJ1Y3QgdmNwdSAqdiwg
c3RydQogICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19HX0lORk8KICAgICAg
ICAgICAgICAgICAgICAiQ2Fubm90IGJpbmQgSVJRJWQgdG8gZG9tJWQuIE91
dCBvZiBtZW1vcnkuXG4iLAogICAgICAgICAgICAgICAgICAgIHBpcnEtPnBp
cnEsIHYtPmRvbWFpbi0+ZG9tYWluX2lkKTsKLSAgICAgICAgICAgIHJjID0g
LUVOT01FTTsKLSAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAgICAgICAg
cmV0dXJuIC1FTk9NRU07CiAgICAgICAgIH0KIAogICAgICAgICBhY3Rpb24g
PSBuZXdhY3Rpb247Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Jan 23 12:53:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 12:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6JmW-0005HN-OZ; Thu, 23 Jan 2014 12:53:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6JmV-0005HF-PB
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 12:53:16 +0000
Received: from [85.158.139.211:55286] by server-3.bemta-5.messagelabs.com id
	62/24-04773-BB011E25; Thu, 23 Jan 2014 12:53:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390481593!11334074!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13737 invoked from network); 23 Jan 2014 12:53:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 12:53:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 23 Jan 2014 12:53:13 +0000
Message-Id: <52E11EC80200007800116238@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 23 Jan 2014 12:53:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52DE4BD8.7060209@citrix.com> <52E102F4.3060503@citrix.com>
In-Reply-To: <52E102F4.3060503@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.01.14 at 12:54, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 21/01/14 10:28, Andrew Cooper wrote:
>> The major issue identified is with Windows 8/8.1 and Server 2012/2012r2,
>> which have problems on live migrate.  Some source of time is
>> unexpectedly jumping forwards by two days, from the correct time to 2
>> days in the future.  The observed result is that it loses its DHCP
>> lease, drops its IP address and networking ceases to work (it appears
>> that Windows will not attempt to renew the lease itself).
>>
> 
> This is caused by commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b
> "x86/viridian: Time Reference Count MSR"
> 
> After double checking with the specification, it does appear to be
> implemented as required (subject to a potential issue with multiple vcpu
> guests).
> 
> I am currently experimenting to see whether hvm_get_guest_time() is
> returning unexpected values, or whether it is returning expected values
> and Windows is interpreting them differently.
> 
> At this point in the 4.4 release cycle, reverting the patch should be
> seriously considered, although I would like to see whether it is
> possible to work out why it is wrong and whether there is an obvious fix
> first.

I suppose you and/or Paul will let us know either way.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Jz8-0006EY-Ka; Thu, 23 Jan 2014 13:06:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6Jz7-0006ET-6U
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 13:06:17 +0000
Received: from [85.158.139.211:35788] by server-17.bemta-5.messagelabs.com id
	13/5D-19152-8C311E25; Thu, 23 Jan 2014 13:06:16 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390482374!11486399!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26682 invoked from network); 23 Jan 2014 13:06:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:06:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93672771"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 13:05:53 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 08:05:52 -0500
Message-ID: <52E113AE.5060405@citrix.com>
Date: Thu, 23 Jan 2014 13:05:50 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, <ian.campbell@citrix.com>, 
	<wei.liu2@citrix.com>, <xen-devel@lists.xenproject.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<jonathan.davies@citrix.com>
References: <1390244288-3186-1-git-send-email-zoltan.kiss@citrix.com>
	<a6736948-c67e-4509-89a8-42ec9693830f@email.android.com>
In-Reply-To: <a6736948-c67e-4509-89a8-42ec9693830f@email.android.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v3] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 21:22, Konrad Rzeszutek Wilk wrote:
> Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev
>> needs it; for blkback and future netback patches it just causes lock
>> contention, as those pages never go to userspace. Therefore this
>> series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with
>>   a new parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or
>>   just set the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs
>>   with m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old
>>   behaviour
>
> You don't say anything about the 'return ret' changed to 'return 0'.
>
> Any particular reason for that?

That's the only possible return value there, so it just makes it more 
obvious. I'll add a description about that.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:07:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6K0D-0006Il-4m; Thu, 23 Jan 2014 13:07:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6K0B-0006IX-Km
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 13:07:24 +0000
Received: from [85.158.137.68:18765] by server-2.bemta-3.messagelabs.com id
	8A/CE-17329-90411E25; Thu, 23 Jan 2014 13:07:21 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390482438!10088201!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3973 invoked from network); 23 Jan 2014 13:07:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:07:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95714794"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 13:07:18 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 08:07:18 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Thu, 23 Jan 2014 13:07:10 +0000
Message-ID: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v5] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/page.h     |   12 +++--
 arch/x86/xen/p2m.c                  |   25 ++--------
 drivers/block/xen-blkback/blkback.c |   15 +++---
 drivers/xen/gntdev.c                |   13 +++--
 drivers/xen/grant-table.c           |   90 ++++++++++++++++++++++++++++++-----
 include/xen/grant_table.h           |    8 +++-
 6 files changed, 109 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..68a1438 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
-extern int m2p_add_override(unsigned long mfn, struct page *page,
-			    struct gnttab_map_grant_ref *kmap_op);
+extern int m2p_add_override(unsigned long mfn,
+			    struct page *page,
+			    struct gnttab_map_grant_ref *kmap_op,
+			    unsigned long pfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long pfn,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..0060178 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
 
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
 {
 	unsigned long flags;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long pfn,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
-
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..2add483 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL, pfn);
 		if (ret)
 			goto out;
 	}
@@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  pfn,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:07:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6K0D-0006Il-4m; Thu, 23 Jan 2014 13:07:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6K0B-0006IX-Km
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 13:07:24 +0000
Received: from [85.158.137.68:18765] by server-2.bemta-3.messagelabs.com id
	8A/CE-17329-90411E25; Thu, 23 Jan 2014 13:07:21 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390482438!10088201!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3973 invoked from network); 23 Jan 2014 13:07:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:07:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95714794"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 13:07:18 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 08:07:18 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Thu, 23 Jan 2014 13:07:10 +0000
Message-ID: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v5] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the upcoming netback patches it just causes lock contention,
as those pages never go to userspace. Therefore this patch does the following:
- the original functions are renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override they either follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
  m2p_override false
- the new functions gnttab_[un]map_refs_userspace provide the old behaviour

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.
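The split between the kernel-only path and the userspace path can be modelled
in isolation. The sketch below is a simplified, self-contained illustration of
the control flow only; the counters and stub bodies are assumptions for
demonstration and do not reproduce the real p2m/m2p machinery:

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for the real side effects (illustrative only). */
static int p2m_updates;      /* models set_phys_to_machine() calls */
static int overrides_added;  /* models m2p_add_override() calls */

/* Core helper: always updates the p2m, but only pays the m2p_override
 * cost (and its lock contention) when explicitly asked to. */
static int __map_refs(int count, bool m2p_override)
{
	for (int i = 0; i < count; i++) {
		p2m_updates++;
		if (m2p_override)
			overrides_added++;
	}
	return 0;
}

/* Kernel-only callers (blkback, future netback): the cheap path. */
static int map_refs(int count)
{
	return __map_refs(count, false);
}

/* Userspace-visible mappings (gntdev): the old behaviour retained. */
static int map_refs_userspace(int count)
{
	return __map_refs(count, true);
}
```

With this shape, the kernel-only callers never touch the override list, while
gntdev keeps the previous semantics through the _userspace wrapper.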

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add details on why ret is now 0 in some places

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/page.h     |   12 +++--
 arch/x86/xen/p2m.c                  |   25 ++--------
 drivers/block/xen-blkback/blkback.c |   15 +++---
 drivers/xen/gntdev.c                |   13 +++--
 drivers/xen/grant-table.c           |   90 ++++++++++++++++++++++++++++++-----
 include/xen/grant_table.h           |    8 +++-
 6 files changed, 109 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..68a1438 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
-extern int m2p_add_override(unsigned long mfn, struct page *page,
-			    struct gnttab_map_grant_ref *kmap_op);
+extern int m2p_add_override(unsigned long mfn,
+			    struct page *page,
+			    struct gnttab_map_grant_ref *kmap_op,
+			    unsigned long pfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long pfn,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..0060178 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
 
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
 {
 	unsigned long flags;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long pfn,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
-	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
-	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
-
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
 		ptep = lookup_address(address, &level);
@@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..2add483 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL, pfn);
 		if (ret)
 			goto out;
 	}
@@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  pfn,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due


From xen-devel-bounces@lists.xen.org Thu Jan 23 13:13:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6K5r-000712-Ap; Thu, 23 Jan 2014 13:13:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6K5p-00070w-Bq
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 13:13:13 +0000
Received: from [85.158.143.35:60539] by server-1.bemta-4.messagelabs.com id
	A5/0D-02132-86511E25; Thu, 23 Jan 2014 13:13:12 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390482790!314551!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23023 invoked from network); 23 Jan 2014 13:13:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:13:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93674915"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 13:13:10 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 08:13:10 -0500
Message-ID: <52E11563.1090707@citrix.com>
Date: Thu, 23 Jan 2014 13:13:07 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Miller <davem@davemloft.net>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<20140122.175031.873909526743971037.davem@davemloft.net>
In-Reply-To: <20140122.175031.873909526743971037.davem@davemloft.net>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 01:50, David Miller wrote:
> From: Zoltan Kiss <zoltan.kiss@citrix.com>
> Date: Mon, 20 Jan 2014 21:24:20 +0000
>
>> A long-known problem of the upstream netback implementation is that on the
>> TX path (from guest to Dom0) it copies the whole packet from guest memory
>> into Dom0. That simply became a bottleneck with 10Gb NICs, and generally
>> it's a huge performance penalty. The classic kernel version of netback used
>> grant mapping, and to get notified when the page can be unmapped, it used
>> page destructors. Unfortunately that destructor is not an upstreamable
>> solution. Ian Campbell's skb fragment destructor patch series [1] tried to
>> solve this problem, but it turned out to be very invasive on the network
>> stack's code, and therefore hasn't progressed very well.
>> This patch series uses the SKBTX_DEV_ZEROCOPY flag to tell the stack it
>> needs to know when the skb is freed up.
>
> This series does not apply to net-next due to some other recent changes.
>
> Please respin, thanks.

It is already based on two predecessor patches, one of which is already
accepted but not yet applied:

[PATCH net-next v2] xen-netback: Rework rx_work_todo

And the other one will hopefully be accepted very soon:

[PATCH v5] xen/grant-table: Avoid m2p_override during mapping

Zoli



From xen-devel-bounces@lists.xen.org Thu Jan 23 13:38:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6KUE-00088a-3D; Thu, 23 Jan 2014 13:38:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W6KUC-00087U-7k
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 13:38:24 +0000
Received: from [85.158.137.68:8618] by server-10.bemta-3.messagelabs.com id
	72/02-23989-F4B11E25; Thu, 23 Jan 2014 13:38:23 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390484300!9730507!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1936 invoked from network); 23 Jan 2014 13:38:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 13:38:22 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0NDcHRp015460
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 13:38:17 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0NDcFSc013452
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 13:38:16 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0NDcF08007916; Thu, 23 Jan 2014 13:38:15 GMT
Received: from [192.168.1.101] (/123.123.253.44)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Jan 2014 05:38:15 -0800
References: <1390441011-3816-1-git-send-email-Annie.li@oracle.com>
	<52E0F528.7060302@citrix.com>
In-Reply-To: <52E0F528.7060302@citrix.com>
Mime-Version: 1.0 (1.0)
Message-Id: <1317802A-6BB6-468B-9FA0-8A05CDDFEA8C@oracle.com>
X-Mailer: iPhone Mail (9A334)
From: Annie <annie.li@oracle.com>
Date: Thu, 23 Jan 2014 21:38:04 +0800
To: David Vrabel <david.vrabel@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu2@citrix.com" <wei.liu2@citrix.com>,
	"ian.campbell@citrix.com" <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next v3] xen-netfront: clean up code
	in xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org




On 01/23/2014 2:55 AM, David Vrabel <david.vrabel@citrix.com> wrote:

> On 23/01/14 01:36, Annie Li wrote:
>> From: Annie Li <annie.li@oracle.com>
>> 
>> This patch removes grant transfer code from netfront, and improves the
>> mechanism for ending grant access, since gnttab_end_foreign_access_ref may
>> fail when the grant entry is currently used for reading or writing.
>> 
>> * release grant reference and skb for tx/rx path, use get_page/put_page to
>> ensure page is released when grant access is completed successfully.
>> * change corresponding code in xen-blkfront/xen-tpmfront/xen-pcifront because
>> of code change for put_page in gnttab_end_foreign_access.
>> * clean up grant transfer code kept from old netfront(2.6.18) which grants
>> pages for access/map and transfer. But grant transfer is deprecated in current
>> netfront, so remove corresponding release code for transfer.
>> 
>> V3: Changes as suggested by David Vrabel; ensure pages are not freed until
>> grant access is ended.
>> 
>> V2: improve patch comments.
>> Signed-off-by: Annie Li <annie.li@oracle.com>
>> ---
>> drivers/block/xen-blkfront.c    |   25 ++++++++---
>> drivers/char/tpm/xen-tpmfront.c |    7 +++-
>> drivers/net/xen-netfront.c      |   93 ++++++++++++--------------------------
>> drivers/pci/xen-pcifront.c      |    7 +++-
>> drivers/xen/grant-table.c       |    4 +-
>> 5 files changed, 63 insertions(+), 73 deletions(-)
> 
> I don't understand why you've made all these unnecessary changes to the
> other frontends and grant-table.c.
> 
> The xen-netfront.c changes are fine on their own.

The change in grant-table.c corresponds to the get_page before gnttab_end_foreign_access, just keeping the function naming consistent. It does not really change the mechanism, so I can revert it.

For the changes in the other frontends, the reason is that they have a similar issue: grant access may not have been ended successfully by the time the page is freed.

Thanks
Annie
> 
> David
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:55:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Kk3-0000jr-8z; Thu, 23 Jan 2014 13:54:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6Kk1-0000jm-VH
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 13:54:46 +0000
Received: from [85.158.137.68:50101] by server-14.bemta-3.messagelabs.com id
	BE/69-06105-52F11E25; Thu, 23 Jan 2014 13:54:45 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390485282!10879997!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 313 invoked from network); 23 Jan 2014 13:54:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:54:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95730436"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 13:54:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 08:54:41 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6Kjw-0001oV-Nb;
	Thu, 23 Jan 2014 13:54:40 +0000
Date: Thu, 23 Jan 2014 13:54:40 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140123135440.GF24675@zion.uk.xensource.com>
References: <alpine.DEB.2.02.1401061431230.8667@kaball.uk.xensource.com>
	<CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
	<52DF9B76.8060807@redhat.com>
	<20140122160940.GC24675@zion.uk.xensource.com>
	<52E0DCDD.60405@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E0DCDD.60405@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 10:11:57AM +0100, Paolo Bonzini wrote:
> On 22/01/2014 17:09, Wei Liu wrote:
> >On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
> >>On 21/01/2014 19:27, Wei Liu wrote:
> >>>>>
> >>>>>Googling "disable tcg" would have provided an answer, but the patches
> >>>>>were old enough to be basically useless.  I'll refresh the current
> >>>>>version in the next few days.  Currently I am (or try to be) on
> >>>>>vacation, so I cannot really say when, but I'll do my best. :)
> >>>>>
> >>>Hi Paolo, any update?
> >>
> >>Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
> >>branch on my github repository.
> >>
> >
> >Unfortunately your branch didn't work when I enabled TCG support. If I
> >use "--disable-tcg" with configure then it works fine.
> 
> Branch fixed.
> 

Yes, it's fixed for the case I reported. Thanks.

But it is now broken with the following rune:
./configure --enable-kvm --disable-tcg --target-list=i386-softmmu
--disable-xen --enable-debug

  LINK  i386-softmmu/qemu-system-i386
  cpus.o: In function `cpu_signal':
  /local/scratch/qemu/cpus.c:569: undefined reference to `exit_request'
  cpus.o: In function `tcg_cpu_exec':
  /local/scratch/qemu/cpus.c:1257: undefined reference to `cpu_x86_exec'
  cpus.o: In function `tcg_exec_all':
  /local/scratch/qemu/cpus.c:1282: undefined reference to `exit_request'
  /local/scratch/qemu/cpus.c:1299: undefined reference to `exit_request'
  exec.o: In function `tlb_reset_dirty_range_all':
  /local/scratch/qemu/exec.c:736: undefined reference to
  `cpu_tlb_reset_dirty_all'
  collect2: error: ld returned 1 exit status
  make[1]: *** [qemu-system-i386] Error 1
  make: *** [subdir-i386-softmmu] Error 2

--enable-debug is the one to blame. Without that it links successfully.

Wei.

> Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:56:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Klo-0000oo-S5; Thu, 23 Jan 2014 13:56:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Klm-0000og-Ul
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 13:56:35 +0000
Received: from [85.158.143.35:57200] by server-3.bemta-4.messagelabs.com id
	B5/BF-32360-29F11E25; Thu, 23 Jan 2014 13:56:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390485392!328451!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11856 invoked from network); 23 Jan 2014 13:56:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:56:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95730968"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 13:56:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 08:56:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6Kli-0001rS-RT;
	Thu, 23 Jan 2014 13:56:30 +0000
Date: Thu, 23 Jan 2014 13:55:24 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390469635.24595.7.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > +{
> > > +    DECLARE_DOMCTL;
> > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > +    domctl.domain = (domid_t)domid;
> > > +    domctl.u.cacheflush.start_mfn = 0;
> > > +    return do_domctl(xch, &domctl);
> > > +}
> > 
> > Do we really need to flush the entire p2m, or just things we have
> > written to?
> 
> I think we need to flush everything (well, all RAM backed pages, the
> patch skips everything else).
> 
> Even things which we haven't explicitly written to will have been
> scrubbed and therefore have scrubbed data in the cache but data
> belonging to the previous owner in actual RAM. So we would really want
> to clean in that case too.
> 
> We could do the clean at scrub time, which would arguably be better anyway
> and would potentially allow us to only invalidate, instead of
> clean+invalidate, some subset of pages, but we would need to track which
> sort of page was which -- e.g. with a special p2m type for a page which had
> been foreign mapped or some other bit of metadata.

That seems like the way to go.


> > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > index 85ca330..f35ed57 100644
> > > --- a/xen/arch/arm/p2m.c
> > > +++ b/xen/arch/arm/p2m.c
> > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > >      ALLOCATE,
> > >      REMOVE,
> > >      RELINQUISH,
> > > +    CACHEFLUSH,
> > >  };
> > >  
> > > +static void do_one_cacheflush(paddr_t mfn)
> > > +{
> > > +    void *v = map_domain_page(mfn);
> > > +
> > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > +
> > > +    unmap_domain_page(v);
> > > +}
> > 
> > A pity that we need to map a page just to flush the dcache.  It could be
> > expensive, especially if we really have to map every single guest mfn.
> 
> Remember that this is basically free for arm64, and for arm32 we actually
> map 2MB regions and cache them, so it is only actually one map per 2MB
> region.

Even with 2MB at a time it is easy for this to become really slow. A 4GB
guest would need 2048 iterations of map/flush/unmap. I don't have any
numbers but I bet they won't look good.
At least if it was combined with the RAM scrub we would save the 2048
map/unmap.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 13:56:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 13:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Klo-0000oo-S5; Thu, 23 Jan 2014 13:56:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Klm-0000og-Ul
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 13:56:35 +0000
Received: from [85.158.143.35:57200] by server-3.bemta-4.messagelabs.com id
	B5/BF-32360-29F11E25; Thu, 23 Jan 2014 13:56:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390485392!328451!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11856 invoked from network); 23 Jan 2014 13:56:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 13:56:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95730968"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 13:56:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 08:56:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6Kli-0001rS-RT;
	Thu, 23 Jan 2014 13:56:30 +0000
Date: Thu, 23 Jan 2014 13:55:24 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390469635.24595.7.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > +{
> > > +    DECLARE_DOMCTL;
> > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > +    domctl.domain = (domid_t)domid;
> > > +    domctl.u.cacheflush.start_mfn = 0;
> > > +    return do_domctl(xch, &domctl);
> > > +}
> > 
> > Do we really need to flush the entire p2m, or just things we have
> > written to?
> 
> I think we need to flush everything (well, all RAM backed pages, the
> patch skips everything else).
> 
> Even things which we haven't explicitly written to will have been
> scrubbed and therefore have scrubbed data in the cache but data
> belonging to the previous owner in actual RAM. So we would really want
> to clean in that case too.
> 
> We could do the clean at scrub time which arguably be better anyway and
> would potentially allows us to only invalidate instead of clean
> +invalidate some subset of pages, but we would need to track which sort
> of page was which -- e.g. with a special p2m type for a page which had
> been foreign mapped or some other bit of metadata.

That seems like the way to go.


> > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > index 85ca330..f35ed57 100644
> > > --- a/xen/arch/arm/p2m.c
> > > +++ b/xen/arch/arm/p2m.c
> > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > >      ALLOCATE,
> > >      REMOVE,
> > >      RELINQUISH,
> > > +    CACHEFLUSH,
> > >  };
> > >  
> > > +static void do_one_cacheflush(paddr_t mfn)
> > > +{
> > > +    void *v = map_domain_page(mfn);
> > > +
> > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > +
> > > +    unmap_domain_page(v);
> > > +}
> > 
> > A pity that we need to map a page just to flush the dcache.  It could be
> > expensive, especially if we really have to map every single guest mfn.
> 
> Remember that this is basically free for arm64, and for arm32 we
> actually map 2MB regions and cache them, so it is only actually one map
> per 2MB region.

Even with 2MB at a time it is easy for this to become really slow. A 4GB
guest would need 2048 iterations of map/flush/unmap. I don't have any
numbers but I bet they won't look good.
At least if it was combined with the RAM scrub we would save the 2048
map/unmap.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:00:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6KpS-0001Jm-Ji; Thu, 23 Jan 2014 14:00:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6KpO-0001GV-VW
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 14:00:19 +0000
Received: from [85.158.137.68:9486] by server-8.bemta-3.messagelabs.com id
	FA/B4-31081-17021E25; Thu, 23 Jan 2014 14:00:17 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390485615!7258444!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2814 invoked from network); 23 Jan 2014 14:00:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:00:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95732110"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 14:00:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:00:14 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6KpK-0001v3-7y;
	Thu, 23 Jan 2014 14:00:14 +0000
Date: Thu, 23 Jan 2014 13:59:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <52E015F5.70408@citrix.com>
Message-ID: <alpine.DEB.2.02.1401231355340.15917@kaball.uk.xensource.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
	<52E00FB7.3040508@citrix.com>
	<alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
	<52E015F5.70408@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Zoltan Kiss wrote:
> > > > > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > > > > index 2ae8699..0060178 100644
> > > > > --- a/arch/x86/xen/p2m.c
> > > > > +++ b/arch/x86/xen/p2m.c
> > > > > @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
> > > > > 
> > > > >    /* Add an MFN override for a particular page */
> > > > >    int m2p_add_override(unsigned long mfn, struct page *page,
> > > > > -		struct gnttab_map_grant_ref *kmap_op)
> > > > > +		struct gnttab_map_grant_ref *kmap_op, unsigned long
> > > > > pfn)
> > > > 
> > > > Do we really need to add another additional parameter to
> > > > m2p_add_override?
> > > > I would just let m2p_add_override and m2p_remove_override call
> > > > page_to_pfn again. It is not that expensive.
> > > Yes, because that page_to_pfn can return something different. That's why
> > > the
> > > v2 patches failed.
> > 
> > I am really curious: how can page_to_pfn return something different?
> > I don't think it is supposed to happen.
> You call set_phys_to_machine before calling m2p* functions.

set_phys_to_machine changes the physical-to-machine mapping, that is,
the mfn corresponding to a given pfn. It shouldn't affect the output of
page_to_pfn, which returns the pfn corresponding to a given struct
page. That calculation is based on address offsets and should be
static, unaffected by things like set_phys_to_machine.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:01:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Kqd-0001QV-JL; Thu, 23 Jan 2014 14:01:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6Kqc-0001QN-Dw
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:01:34 +0000
Received: from [193.109.254.147:38946] by server-4.bemta-14.messagelabs.com id
	F2/86-03916-DB021E25; Thu, 23 Jan 2014 14:01:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390485691!12664097!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23404 invoked from network); 23 Jan 2014 14:01:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:01:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93690741"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:01:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:01:06 -0500
Message-ID: <1390485665.24595.89.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 23 Jan 2014 14:01:05 +0000
In-Reply-To: <alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Ian Campbell wrote:
> > On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > > +{
> > > > +    DECLARE_DOMCTL;
> > > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > > +    domctl.domain = (domid_t)domid;
> > > > +    domctl.u.cacheflush.start_mfn = 0;
> > > > +    return do_domctl(xch, &domctl);
> > > > +}
> > > 
> > > Do we really need to flush the entire p2m, or just things we have
> > > written to?
> > 
> > I think we need to flush everything (well, all RAM backed pages, the
> > patch skips everything else).
> > 
> > Even things which we haven't explicitly written to will have been
> > scrubbed and therefore have scrubbed data in the cache but data
> > belonging to the previous owner in actual RAM. So we would really want
> > to clean in that case too.
> > 
> > We could do the clean at scrub time, which would arguably be better
> > anyway and would potentially allow us to only invalidate instead of
> > clean+invalidate some subset of pages, but we would need to track which
> > sort of page was which -- e.g. with a special p2m type for a page which
> > had been foreign mapped or some other bit of metadata.
> 
> That seems like the way to go.

I'm not convinced actually, and in any case, not for 4.4...

> > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > index 85ca330..f35ed57 100644
> > > > --- a/xen/arch/arm/p2m.c
> > > > +++ b/xen/arch/arm/p2m.c
> > > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > > >      ALLOCATE,
> > > >      REMOVE,
> > > >      RELINQUISH,
> > > > +    CACHEFLUSH,
> > > >  };
> > > >  
> > > > +static void do_one_cacheflush(paddr_t mfn)
> > > > +{
> > > > +    void *v = map_domain_page(mfn);
> > > > +
> > > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > > +
> > > > +    unmap_domain_page(v);
> > > > +}
> > > 
> > > A pity that we need to map a page just to flush the dcache.  It could be
> > > expensive, especially if we really have to map every single guest mfn.
> > 
> > Remember that this is basically free for arm64, and for arm32 we
> > actually map 2MB regions and cache them, so it is only actually one map
> > per 2MB region.
> 
> Even with 2MB at a time it is easy for this to become really slow. A 4GB
> guest would need 2048 iterations of map/flush/unmap. I don't have any
> numbers but I bet they won't look good.

It happens once during domain build and the delay isn't noticeable to
me compared with the rest of the build process.

> At least if it was combined with the RAM scrub we would save the 2048
> map/unmap.

The scrub has to map it in exactly the same way.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:27:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LFi-0002i5-96; Thu, 23 Jan 2014 14:27:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFf-0002hi-OJ; Thu, 23 Jan 2014 14:27:27 +0000
Received: from [85.158.137.68:13488] by server-9.bemta-3.messagelabs.com id
	52/EB-13104-EC621E25; Thu, 23 Jan 2014 14:27:26 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390487245!10935544!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22540 invoked from network); 23 Jan 2014 14:27:26 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-7.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	23 Jan 2014 14:27:26 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFY-0005r8-AF; Thu, 23 Jan 2014 14:27:20 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFX-0000mR-TA; Thu, 23 Jan 2014 14:27:20 +0000
Date: Thu, 23 Jan 2014 14:27:20 +0000
Message-Id: <E1W6LFX-0000mR-TA@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 83 (CVE-2014-1642) -
 Out-of-memory condition yielding memory corruption during IRQ setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

               Xen Security Advisory CVE-2014-1642 / XSA-83
                              version 3

       Out-of-memory condition yielding memory corruption during IRQ setup

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

When setting up the IRQ for a passed through physical device, a flaw
in the error handling could result in a memory allocation being used
after it is freed, and then freed a second time.  This would typically
result in memory corruption.

IMPACT
======

Malicious guest administrators can trigger a use-after-free error, resulting
in hypervisor memory corruption.  The effects of memory corruption could be
anything, including a host-wide denial of service, or privilege escalation.

VULNERABLE SYSTEMS
==================

Xen 4.2.x and later are vulnerable.
Xen 4.1.x and earlier are not vulnerable.

Only systems making use of device passthrough are vulnerable.

Only systems with a 64-bit hypervisor configured to support more than 128
CPUs or with a 32-bit hypervisor configured to support more than 64 CPUs are
vulnerable.

MITIGATION
==========

This issue can be avoided by not assigning PCI devices to untrusted guests on
systems supporting Intel VT-d or AMD Vi.

CREDITS
=======

This issue was discovered by Coverity Scan, prompted by modelling
improvements contributed by Andrew Cooper.  The issue was diagnosed
by Matthew Daley and Andrew Cooper.  The patch was prepared by Andrew
Cooper.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa83.patch                 Xen 4.2.x, Xen 4.3.x, xen-unstable

$ sha256sum xsa83*.patch
71ba62c024ed867f99f335ed63d7e04a7981d348cc29a3718e5c48f15a1e0fb1  xsa83.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4SaHAAoJEIP+FMlX6CvZ4GEH/1iRjPPj+FedKNsROJ4XZDYQ
rhu5evDxGjFKC1YD5aDexDPMKYn1lLtOy2YnsW4nqPJdHCpBpPIhzTFisaNUqMzE
XQwQwBSVYhxZAV2J9v3e7nsz0wswVdAHkbFf2df1eUvmiGsKQPHuCqlCZEbQjW/w
7F9MC2Qo9nlg/1GtNE5J4U4jB9EtEhI5Kbvh3WFoOLz7vtJDKlsYQlcTZLJVdDjN
OFoptImqig7Yin0/ix4AKYt5+trnkpvKjR3dfIeM3WUxG3Nc4qKxy5C5cbVfgKnr
/sidbCO4K4G56fvl3aBg49594x8aFh8MYZF42CDCEnojXCaiXidwBiWUV9KHN5g=
=5A46
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa83.patch"
Content-Disposition: attachment; filename="xsa83.patch"
Content-Transfer-Encoding: base64

eDg2L2lycTogYXZvaWQgdXNlLWFmdGVyLWZyZWUgb24gZXJyb3IgcGF0aCBp
biBwaXJxX2d1ZXN0X2JpbmQoKQoKVGhpcyBpcyBYU0EtODMuCgpDb3Zlcml0
eS1JRDogMTE0Njk1MgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxh
bmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L2lycS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9pcnEuYwpAQCAtMTU5MCw4ICsx
NTkwLDcgQEAgaW50IHBpcnFfZ3Vlc3RfYmluZChzdHJ1Y3QgdmNwdSAqdiwg
c3RydQogICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19HX0lORk8KICAgICAg
ICAgICAgICAgICAgICAiQ2Fubm90IGJpbmQgSVJRJWQgdG8gZG9tJWQuIE91
dCBvZiBtZW1vcnkuXG4iLAogICAgICAgICAgICAgICAgICAgIHBpcnEtPnBp
cnEsIHYtPmRvbWFpbi0+ZG9tYWluX2lkKTsKLSAgICAgICAgICAgIHJjID0g
LUVOT01FTTsKLSAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAgICAgICAg
cmV0dXJuIC1FTk9NRU07CiAgICAgICAgIH0KIAogICAgICAgICBhY3Rpb24g
PSBuZXdhY3Rpb247Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Jan 23 14:27:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LFi-0002i5-96; Thu, 23 Jan 2014 14:27:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFf-0002hi-OJ; Thu, 23 Jan 2014 14:27:27 +0000
Received: from [85.158.137.68:13488] by server-9.bemta-3.messagelabs.com id
	52/EB-13104-EC621E25; Thu, 23 Jan 2014 14:27:26 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390487245!10935544!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22540 invoked from network); 23 Jan 2014 14:27:26 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-7.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	23 Jan 2014 14:27:26 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFY-0005r8-AF; Thu, 23 Jan 2014 14:27:20 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6LFX-0000mR-TA; Thu, 23 Jan 2014 14:27:20 +0000
Date: Thu, 23 Jan 2014 14:27:20 +0000
Message-Id: <E1W6LFX-0000mR-TA@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 83 (CVE-2014-1642) -
 Out-of-memory condition yielding memory corruption during IRQ setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

               Xen Security Advisory CVE-2014-1642 / XSA-83
                              version 3

       Out-of-memory condition yielding memory corruption during IRQ setup

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

When setting up the IRQ for a passed through physical device, a flaw
in the error handling could result in a memory allocation being used
after it is freed, and then freed a second time.  This would typically
result in memory corruption.

IMPACT
======

Malicious guest administrators can trigger a use-after-free error, resulting
in hypervisor memory corruption.  The effects of memory corruption could be
anything, including a host-wide denial of service, or privilege escalation.

VULNERABLE SYSTEMS
==================

Xen 4.2.x and later are vulnerable.
Xen 4.1.x and earlier are not vulnerable.

Only systems making use of device passthrough are vulnerable.

Only systems with a 64-bit hypervisor configured to support more than 128
CPUs or with a 32-bit hypervisor configured to support more than 64 CPUs are
vulnerable.

MITIGATION
==========

This issue can be avoided by not assigning PCI devices to untrusted guests on
systems supporting Intel VT-d or AMD-Vi.

CREDITS
=======

This issue was discovered by Coverity Scan, prompted by modelling
improvements contributed by Andrew Cooper.  The issue was diagnosed
by Matthew Daley and Andrew Cooper.  The patch was prepared by Andrew
Cooper.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa83.patch                 Xen 4.2.x, Xen 4.3.x, xen-unstable

$ sha256sum xsa83*.patch
71ba62c024ed867f99f335ed63d7e04a7981d348cc29a3718e5c48f15a1e0fb1  xsa83.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4SaHAAoJEIP+FMlX6CvZ4GEH/1iRjPPj+FedKNsROJ4XZDYQ
rhu5evDxGjFKC1YD5aDexDPMKYn1lLtOy2YnsW4nqPJdHCpBpPIhzTFisaNUqMzE
XQwQwBSVYhxZAV2J9v3e7nsz0wswVdAHkbFf2df1eUvmiGsKQPHuCqlCZEbQjW/w
7F9MC2Qo9nlg/1GtNE5J4U4jB9EtEhI5Kbvh3WFoOLz7vtJDKlsYQlcTZLJVdDjN
OFoptImqig7Yin0/ix4AKYt5+trnkpvKjR3dfIeM3WUxG3Nc4qKxy5C5cbVfgKnr
/sidbCO4K4G56fvl3aBg49594x8aFh8MYZF42CDCEnojXCaiXidwBiWUV9KHN5g=
=5A46
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa83.patch"
Content-Disposition: attachment; filename="xsa83.patch"
Content-Transfer-Encoding: base64

eDg2L2lycTogYXZvaWQgdXNlLWFmdGVyLWZyZWUgb24gZXJyb3IgcGF0aCBp
biBwaXJxX2d1ZXN0X2JpbmQoKQoKVGhpcyBpcyBYU0EtODMuCgpDb3Zlcml0
eS1JRDogMTE0Njk1MgpTaWduZWQtb2ZmLWJ5OiBBbmRyZXcgQ29vcGVyIDxh
bmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L2lycS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9pcnEuYwpAQCAtMTU5MCw4ICsx
NTkwLDcgQEAgaW50IHBpcnFfZ3Vlc3RfYmluZChzdHJ1Y3QgdmNwdSAqdiwg
c3RydQogICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19HX0lORk8KICAgICAg
ICAgICAgICAgICAgICAiQ2Fubm90IGJpbmQgSVJRJWQgdG8gZG9tJWQuIE91
dCBvZiBtZW1vcnkuXG4iLAogICAgICAgICAgICAgICAgICAgIHBpcnEtPnBp
cnEsIHYtPmRvbWFpbi0+ZG9tYWluX2lkKTsKLSAgICAgICAgICAgIHJjID0g
LUVOT01FTTsKLSAgICAgICAgICAgIGdvdG8gb3V0OworICAgICAgICAgICAg
cmV0dXJuIC1FTk9NRU07CiAgICAgICAgIH0KIAogICAgICAgICBhY3Rpb24g
PSBuZXdhY3Rpb247Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Jan 23 14:28:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LGf-0002si-KA; Thu, 23 Jan 2014 14:28:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6LGe-0002sE-T7
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:28:29 +0000
Received: from [193.109.254.147:7472] by server-7.bemta-14.messagelabs.com id
	4D/BE-15500-C0721E25; Thu, 23 Jan 2014 14:28:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390487306!12760126!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3278 invoked from network); 23 Jan 2014 14:28:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:28:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93702086"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:28:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:28:25 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W6LGa-0002Ra-KL; Thu, 23 Jan 2014 14:28:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 23 Jan 2014 14:28:22 +0000
Message-ID: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v5] coverity: Store the modelling file in the
	source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

George:
  This is just documentation, and it would be nice to include it as part of
  the 4.4 release.
---
 misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 misc/coverity_model.c

diff --git a/misc/coverity_model.c b/misc/coverity_model.c
new file mode 100644
index 0000000..418d25e
--- /dev/null
+++ b/misc/coverity_model.c
@@ -0,0 +1,98 @@
+/* Coverity Scan model
+ *
+ * This is a modelling file for Coverity Scan. Modelling helps to avoid false
+ * positives.
+ *
+ * - A model file can't import any header files.
+ * - Therefore only some built-in primitives like int, char and void are
+ *   available but not NULL etc.
+ * - Modelling doesn't need full structs and typedefs. Rudimentary structs
+ *   and similar types are sufficient.
+ * - An uninitialized local pointer is not an error. It signifies that the
+ *   variable could be either NULL or have some data.
+ *
+ * Coverity Scan doesn't pick up modifications automatically. The model file
+ * must be uploaded by an admin in the analysis.
+ *
+ * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
+ * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
+ *
+ * The Xen Coverity Scan modelling file used the cpython modelling file as a
+ * reference to get started (suggested by Coverity Scan themselves as a good
+ * example), but all content is Xen specific.
+ */
+
+/*
+ * Useful references:
+ *   https://scan.coverity.com/models
+ */
+
+/* Definitions */
+#define NULL (void *)0
+#define PAGE_SIZE 4096UL
+#define PAGE_MASK (~(PAGE_SIZE-1))
+
+#define assert(cond) /* empty */
+
+struct page_info {};
+
+/*
+ * Xen malloc.  Behaves exactly like regular malloc(), except it also contains
+ * an alignment parameter.
+ *
+ * TODO: work out how to correctly model bad alignments as errors.
+ */
+void *_xmalloc(unsigned long size, unsigned long align)
+{
+    int has_memory;
+
+    __coverity_negative_sink__(size);
+    __coverity_negative_sink__(align);
+
+    if ( has_memory )
+        return __coverity_alloc__(size);
+    else
+        return NULL;
+}
+
+/*
+ * Xen free.  Frees a pointer allocated by _xmalloc().
+ */
+void xfree(void *va)
+{
+    __coverity_free__(va);
+}
+
+
+/*
+ * map_domain_page() takes an existing domain page and possibly maps it into
+ * the Xen pagetables, to allow for direct access.  Model this as a memory
+ * allocation of exactly 1 page.
+ *
+ * map_domain_page() never fails. (It will BUG() before returning NULL)
+ *
+ * TODO: work out how to correctly model the behaviour that this function will
+ * only ever return page aligned pointers.
+ */
+void *map_domain_page(unsigned long mfn)
+{
+    return __coverity_alloc__(PAGE_SIZE);
+}
+
+/*
+ * unmap_domain_page() will unmap a page.  Model it as a free().
+ */
+void unmap_domain_page(const void *va)
+{
+    __coverity_free__(va);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:36:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LOe-0003so-SR; Thu, 23 Jan 2014 14:36:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6LOd-0003sf-9B
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:36:43 +0000
Received: from [85.158.143.35:44536] by server-1.bemta-4.messagelabs.com id
	80/24-02132-AF821E25; Thu, 23 Jan 2014 14:36:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390487800!343482!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26589 invoked from network); 23 Jan 2014 14:36:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:36:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93705351"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:36:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:36:27 -0500
Message-ID: <1390487786.24595.91.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 23 Jan 2014 14:36:26 +0000
In-Reply-To: <1383582193.8826.114.camel@kazak.uk.xensource.com>
References: <1381755391.24708.113.camel@kazak.uk.xensource.com>
	<1383582193.8826.114.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Lars Kurth <lars.kurth@xen.org>
Subject: Re: [Xen-devel] [PROPOSAL] Coverity Access Policy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2013-11-04 at 16:23 +0000, Ian Campbell wrote:
> On Mon, 2013-10-14 at 13:56 +0100, Ian Campbell wrote:
> > Here is the updated proposal. I've addressed the comments made on the
> > draft[0] and I think we can call this an actual proposal to be voted
> > on.
> > 
> > Please indicate your support with +1 or your disagreement with a -1. If
> > you disagree please provide a reason and/or an alternative proposal.
> > 
> > Please reply before 1200 UTC on Monday 21 October 2013. (~1 week from
> > today).
> 
> I actually left this quite a bit longer by mistake.
> 
> We had two positive votes (plus my implied +1 having made the proposal)
> and no objections.

This is now published at
http://xenproject.org/help/contribution-guidelines.html under "Code
Security Scanning".

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:40:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LSa-0004FY-PF; Thu, 23 Jan 2014 14:40:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6LSZ-0004FQ-31
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:40:47 +0000
Received: from [85.158.143.35:25573] by server-2.bemta-4.messagelabs.com id
	74/5E-11386-EE921E25; Thu, 23 Jan 2014 14:40:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390488044!337639!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4859 invoked from network); 23 Jan 2014 14:40:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:40:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95748909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 14:40:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:40:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6LSU-0002c1-KF;
	Thu, 23 Jan 2014 14:40:42 +0000
Date: Thu, 23 Jan 2014 14:39:35 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> The OMAP IOMMU module is designed to handle accesses to external
> OMAP MMUs connected to the L3 bus.
> 
> Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/arch/arm/Makefile     |    1 +
>  xen/arch/arm/io.c         |    1 +
>  xen/arch/arm/io.h         |    1 +
>  xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 418 insertions(+)
>  create mode 100644 xen/arch/arm/omap_iommu.c
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 003ac84..cb0b385 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -14,6 +14,7 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-y += mm.o
> +obj-y += omap_iommu.o
>  obj-y += p2m.o
>  obj-y += percpu.o
>  obj-y += guestcopy.o
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index a6db00b..3281b67 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
>  {
>      &vgic_distr_mmio_handler,
>      &vuart_mmio_handler,
> +	&mmu_mmio_handler,
>  };
>  #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:40:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LSa-0004FY-PF; Thu, 23 Jan 2014 14:40:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6LSZ-0004FQ-31
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:40:47 +0000
Received: from [85.158.143.35:25573] by server-2.bemta-4.messagelabs.com id
	74/5E-11386-EE921E25; Thu, 23 Jan 2014 14:40:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390488044!337639!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4859 invoked from network); 23 Jan 2014 14:40:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:40:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95748909"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 14:40:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:40:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6LSU-0002c1-KF;
	Thu, 23 Jan 2014 14:40:42 +0000
Date: Thu, 23 Jan 2014 14:39:35 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> omap IOMMU module is designed to handle access to external
> omap MMUs, connected to the L3 bus.
> 
> Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/arch/arm/Makefile     |    1 +
>  xen/arch/arm/io.c         |    1 +
>  xen/arch/arm/io.h         |    1 +
>  xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 418 insertions(+)
>  create mode 100644 xen/arch/arm/omap_iommu.c
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 003ac84..cb0b385 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -14,6 +14,7 @@ obj-y += io.o
>  obj-y += irq.o
>  obj-y += kernel.o
>  obj-y += mm.o
> +obj-y += omap_iommu.o
>  obj-y += p2m.o
>  obj-y += percpu.o
>  obj-y += guestcopy.o
> diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
> index a6db00b..3281b67 100644
> --- a/xen/arch/arm/io.c
> +++ b/xen/arch/arm/io.c
> @@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
>  {
>      &vgic_distr_mmio_handler,
>      &vuart_mmio_handler,
> +	&mmu_mmio_handler,
>  };
>  #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)

I think that omap_iommu should be a platform-specific driver, and it
should hook into a set of platform-specific mmio_handlers instead of
using the generic mmio_handler structure.
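
[Editorial sketch of one shape that could take: a per-platform handler
table that platform drivers register into, with the generic dispatcher
falling back to it. All names below (`platform_register_mmio_handler`,
`platform_mmio_lookup`) are hypothetical stand-ins, not the actual Xen
interfaces, and the structs are reduced to the minimum needed to compile.]

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long paddr_t;

/* reduced stand-in for Xen's struct mmio_handler */
struct mmio_handler {
    int (*check_handler)(paddr_t addr);
};

#define MAX_PLATFORM_MMIO 4

/* per-platform handler table, filled in by platform drivers at init */
static const struct mmio_handler *platform_mmio_handlers[MAX_PLATFORM_MMIO];
static unsigned int platform_mmio_nr;

static int platform_register_mmio_handler(const struct mmio_handler *h)
{
    if ( platform_mmio_nr >= MAX_PLATFORM_MMIO )
        return -1;
    platform_mmio_handlers[platform_mmio_nr++] = h;
    return 0;
}

/* the generic dispatcher would fall back to this lookup */
static const struct mmio_handler *platform_mmio_lookup(paddr_t addr)
{
    unsigned int i;

    for ( i = 0; i < platform_mmio_nr; i++ )
        if ( platform_mmio_handlers[i]->check_handler(addr) )
            return platform_mmio_handlers[i];
    return NULL;
}

/* example handler covering the OMAP IPU MMU range from the patch */
static int ipu_mmu_check(paddr_t addr)
{
    return addr >= 0x55082000UL && addr < 0x55083000UL;
}

static const struct mmio_handler ipu_mmu_handler = {
    .check_handler = ipu_mmu_check,
};
```

This keeps the generic io.c table free of platform knowledge: a board
file registers its handlers, and unmatched addresses simply return NULL.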


> diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
> index 8d252c0..acb5dff 100644
> --- a/xen/arch/arm/io.h
> +++ b/xen/arch/arm/io.h
> @@ -42,6 +42,7 @@ struct mmio_handler {
>  
>  extern const struct mmio_handler vgic_distr_mmio_handler;
>  extern const struct mmio_handler vuart_mmio_handler;
> +extern const struct mmio_handler mmu_mmio_handler;
>  
>  extern int handle_mmio(mmio_info_t *info);
>  
> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
> new file mode 100644
> index 0000000..4dab30f
> --- /dev/null
> +++ b/xen/arch/arm/omap_iommu.c

It should probably live under xen/arch/arm/platforms.


> @@ -0,0 +1,415 @@
> +/*
> + * xen/arch/arm/omap_iommu.c
> + *
> + * Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> + * Copyright (c) 2013 GlobalLogic
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <xen/config.h>
> +#include <xen/lib.h>
> +#include <xen/errno.h>
> +#include <xen/mm.h>
> +#include <xen/vmap.h>
> +#include <xen/init.h>
> +#include <xen/sched.h>
> +#include <xen/stdbool.h>
> +#include <asm/system.h>
> +#include <asm/current.h>
> +#include <asm/io.h>
> +#include <asm/p2m.h>
> +
> +#include "io.h"
> +
> +/* register where address of page table is stored */
> +#define MMU_TTB			0x4c
> +
> +/*
> + * "L2 table" address mask and size definitions.
> + */
> +
> +/* 1st level translation */
> +#define IOPGD_SHIFT		20
> +#define IOPGD_SIZE		(1UL << IOPGD_SHIFT)
> +#define IOPGD_MASK		(~(IOPGD_SIZE - 1))
> +
> +/* "supersection" - 16 Mb */
> +#define IOSUPER_SHIFT		24
> +#define IOSUPER_SIZE		(1UL << IOSUPER_SHIFT)
> +#define IOSUPER_MASK		(~(IOSUPER_SIZE - 1))
> +
> +/* "section"  - 1 Mb */
> +#define IOSECTION_SHIFT		20
> +#define IOSECTION_SIZE		(1UL << IOSECTION_SHIFT)
> +#define IOSECTION_MASK		(~(IOSECTION_SIZE - 1))
> +
> +/* 4096 first level descriptors for "supersection" and "section" */
> +#define PTRS_PER_IOPGD		(1UL << (32 - IOPGD_SHIFT))
> +#define IOPGD_TABLE_SIZE	(PTRS_PER_IOPGD * sizeof(u32))
> +
> +/* 2nd level translation */
> +
> +/* "small page" - 4Kb */
> +#define IOPTE_SMALL_SHIFT		12
> +#define IOPTE_SMALL_SIZE		(1UL << IOPTE_SMALL_SHIFT)
> +#define IOPTE_SMALL_MASK		(~(IOPTE_SMALL_SIZE - 1))
> +
> +/* "large page" - 64 Kb */
> +#define IOPTE_LARGE_SHIFT		16
> +#define IOPTE_LARGE_SIZE		(1UL << IOPTE_LARGE_SHIFT)
> +#define IOPTE_LARGE_MASK		(~(IOPTE_LARGE_SIZE - 1))
> +
> +/* 256 second level descriptors for "small" and "large" pages */
> +#define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
> +#define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
> +
> +/*
> + * some descriptor attributes.
> + */
> +#define IOPGD_TABLE		(1 << 0)
> +#define IOPGD_SECTION	(2 << 0)
> +#define IOPGD_SUPER		(1 << 18 | 2 << 0)
> +
> +#define iopgd_is_table(x)	(((x) & 3) == IOPGD_TABLE)
> +#define iopgd_is_section(x)	(((x) & (1 << 18 | 3)) == IOPGD_SECTION)
> +#define iopgd_is_super(x)	(((x) & (1 << 18 | 3)) == IOPGD_SUPER)
> +
> +#define IOPTE_SMALL		(2 << 0)
> +#define IOPTE_LARGE		(1 << 0)
> +
> +#define iopte_is_small(x)	(((x) & 2) == IOPTE_SMALL)
> +#define iopte_is_large(x)	(((x) & 3) == IOPTE_LARGE)
> +#define iopte_offset(x)		((x) & IOPTE_SMALL_MASK)
> +
> +struct mmu_info {
> +	const char			*name;
> +	paddr_t				mem_start;
> +	u32					mem_size;
> +	u32					*pagetable;
> +	void __iomem		*mem_map;
> +};
> +
> +static struct mmu_info omap_ipu_mmu = {
> +	.name		= "IPU_L2_MMU",
> +	.mem_start	= 0x55082000,
> +	.mem_size	= 0x1000,
> +	.pagetable	= NULL,
> +};
> +
> +static struct mmu_info omap_dsp_mmu = {
> +	.name		= "DSP_L2_MMU",
> +	.mem_start	= 0x4a066000,
> +	.mem_size	= 0x1000,
> +	.pagetable	= NULL,
> +};
> +
> +static struct mmu_info *mmu_list[] = {
> +	&omap_ipu_mmu,
> +	&omap_dsp_mmu,
> +};
> +
> +#define mmu_for_each(pfunc, data)						\
> +({														\
> +	u32 __i;											\
> +	int __res = 0;										\
> +														\
> +	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
> +		__res |= pfunc(mmu_list[__i], data);			\
> +	}													\
> +	__res;												\
> +})
> +
> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
> +{
> +	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static inline struct mmu_info *mmu_lookup(u32 addr)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
> +		if (mmu_check_mem_range(mmu_list[i], addr))
> +			return mmu_list[i];
> +	}
> +
> +	return NULL;
> +}
> +
> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
> +{
> +	return (reg & mask) | (va & (~mask));
> +}
> +
> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
> +{
> +	return (reg & ~mask) | pa;
> +}
> +
> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
> +{
> +	return mmu_for_each(mmu_check_mem_range, addr);
> +}
> +
> +static int mmu_copy_pagetable(struct mmu_info *mmu)
> +{
> +	void __iomem *pagetable = NULL;
> +	u32 pgaddr;
> +
> +	ASSERT(mmu);
> +
> +	/* read address where kernel MMU pagetable is stored */
> +	pgaddr = readl(mmu->mem_map + MMU_TTB);
> +	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
> +	if (!pagetable) {

Xen uses a different coding style from Linux; see CODING_STYLE.
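
[Editorial note: for reference, the quoted check rewritten in Xen's
style — 4-space indentation, a space inside the condition parentheses,
braces on their own lines — with stand-in stubs (`printk` as `printf`,
a plain `name` argument) added only so the fragment compiles on its own.]

```c
#include <errno.h>
#include <stdio.h>

#define printk printf   /* stub standing in for Xen's printk */

static int map_pagetable(void *pagetable, const char *name)
{
    if ( !pagetable )
    {
        printk("%s: %s failed to map pagetable\n", __func__, name);
        return -EINVAL;
    }

    return 0;
}
```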


> +		printk("%s: %s failed to map pagetable\n",
> +			   __func__, mmu->name);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * pagetable can be changed since last time
> +	 * we accessed it therefore we need to copy it each time
> +	 */
> +	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
> +
> +	iounmap(pagetable);

Do you need to flush the dcache here?
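
[Editorial illustration of the question: a toy model of the copy path,
with a recording stub standing in for `flush_xen_dcache_va_range()`. The
point is placement — if a flush is needed, it belongs between the
`memcpy()` and the first access to the copy by the non-coherent IOMMU.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TABLE_SIZE 64   /* toy size; the real table is IOPGD_TABLE_SIZE */

static size_t flushed_bytes;

/* stub: records that the range was flushed instead of issuing cache ops */
static void stub_flush_dcache_va_range(const void *va, size_t size)
{
    (void)va;
    flushed_bytes = size;
}

static void copy_pagetable(unsigned char *dst, const unsigned char *src)
{
    memcpy(dst, src, TABLE_SIZE);
    /* the flush would go here, before any device reads dst */
    stub_flush_dcache_va_range(dst, TABLE_SIZE);
}
```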


> +	return 0;
> +}
> +
> +#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)									\
> +{																								\
> +	const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||				\
> +							(mask == IOPTE_LARGE_MASK)) ? "table"								\
> +							: iopgd_is_super(iopgd) ? "supersection"							\
> +							: iopgd_is_section(iopgd) ? "section"								\
> +							: "Unknown section";												\
> +	printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n",\
> +		   sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);								\
> +}
> +
> +static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
> +{
> +	u32 vaddr, paddr;
> +	paddr_t maddr;
> +
> +	paddr = mmu_virt_to_phys(iopgd, da, mask);
> +	maddr = p2m_lookup(dom, paddr);
> +	vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
> +
> +	return vaddr;
> +}
> +
> +/*
> + * on boot table is empty
> + */
> +static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
> +{
> +	u32 i;
> +	int res;
> +	bool table_updated = false;
> +
> +	ASSERT(dom);
> +	ASSERT(mmu);
> +
> +	/* copy pagetable from  domain to xen */
> +	res = mmu_copy_pagetable(mmu);
> +	if (res) {
> +		printk("%s: %s failed to map pagetable memory\n",
> +			   __func__, mmu->name);
> +		return res;
> +	}
> +
> +	/* 1-st level translation */
> +	for (i = 0; i < PTRS_PER_IOPGD; i++) {
> +		u32 da;
> +		u32 iopgd = mmu->pagetable[i];
> +
> +		if (!iopgd)
> +			continue;
> +
> +		table_updated = true;
> +
> +		/* "supersection" 16 Mb */
> +		if (iopgd_is_super(iopgd)) {
> +			da = i << IOSECTION_SHIFT;
> +			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
> +
> +		/* "section" 1Mb */
> +		} else if (iopgd_is_section(iopgd)) {
> +			da = i << IOSECTION_SHIFT;
> +			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
> +
> +		/* "table" */
> +		} else if (iopgd_is_table(iopgd)) {
> +			u32 j, mask;
> +			u32 iopte = iopte_offset(iopgd);
> +
> +			/* 2-nd level translation */
> +			for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
> +
> +				/* "small table" 4Kb */
> +				if (iopte_is_small(iopgd)) {
> +					da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
> +					mask = IOPTE_SMALL_MASK;
> +
> +				/* "large table" 64Kb */
> +				} else if (iopte_is_large(iopgd)) {
> +					da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
> +					mask = IOPTE_LARGE_MASK;
> +
> +				/* error */
> +				} else {
> +					printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
> +					return -EINVAL;
> +				}
> +
> +				/* translate 2-nd level entry */
> +				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
> +			}
> +
> +			continue;
> +
> +		/* error */
> +		} else {
> +			printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
> +			return -EINVAL;
> +		}
> +	}
> +
> +	/* force omap IOMMU to use new pagetable */
> +	if (table_updated) {
> +		paddr_t maddr;
> +		flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);

So you are flushing the dcache all at once at the end; that is probably
better this way.

> +		maddr = __pa(mmu->pagetable);
> +		writel(maddr, mmu->mem_map + MMU_TTB);
> +		printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
> +	}
> +
> +	return 0;
> +}
> +
> +static int mmu_trap_write_access(struct domain *dom,
> +								 struct mmu_info *mmu, mmio_info_t *info)
> +{
> +	struct cpu_user_regs *regs = guest_cpu_user_regs();
> +	register_t *r = select_user_reg(regs, info->dabt.reg);
> +	int res = 0;
> +
> +	switch (info->gpa - mmu->mem_start) {
> +		case MMU_TTB:
> +			printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
> +				   mmu->name, _p(info->gpa), *r);
> +			res = mmu_translate_pagetable(dom, mmu);
> +			break;
> +		default:
> +			break;
> +	}
> +
> +	return res;
> +}
> +
> +static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
> +{
> +	struct mmu_info *mmu = NULL;
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    register_t *r = select_user_reg(regs, info->dabt.reg);
> +
> +	mmu = mmu_lookup(info->gpa);
> +	if (!mmu) {
> +		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
> +		return -EINVAL;
> +	}
> +
> +    *r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
> +
> +    return 1;
> +}
> +
> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
> +{
> +	struct domain *dom = v->domain;
> +	struct mmu_info *mmu = NULL;
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    register_t *r = select_user_reg(regs, info->dabt.reg);
> +	int res;
> +
> +	mmu = mmu_lookup(info->gpa);
> +	if (!mmu) {
> +		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * make sure that user register is written first in this function
> +	 * following calls may expect valid data in it
> +	 */
> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
> +
> +	res = mmu_trap_write_access(dom, mmu, info);
> +	if (res)
> +		return res;
> +
> +    return 1;
> +}

I wonder if we actually need to trap guest accesses in all cases: if we
leave the GPU/IPU in Dom0, mapped 1:1, then we don't need any traps.
Maybe we can find a way to detect that so that we can avoid trapping and
translating in that case.
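
[Editorial sketch of that detection. `struct domain` and
`is_domain_direct_mapped()` below are stand-ins for whatever real test
would apply, not existing Xen interfaces: the idea is that if the owning
domain is mapped 1:1, guest physical addresses already equal machine
addresses, so the TTB write can be passed through without translation.]

```c
#include <assert.h>
#include <stdbool.h>

/* stand-in: a domain that may be built with a 1:1 memory map (e.g. dom0) */
struct domain {
    bool direct_mapped;
};

static bool is_domain_direct_mapped(const struct domain *d)
{
    return d->direct_mapped;
}

static bool mmu_needs_trapping(const struct domain *d)
{
    /* 1:1 mapping: pagetable entries already hold machine addresses */
    return !is_domain_direct_mapped(d);
}
```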


> +static int mmu_init(struct mmu_info *mmu, u32 data)
> +{
> +	ASSERT(mmu);
> +	ASSERT(!mmu->mem_map);
> +	ASSERT(!mmu->pagetable);
> +
> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
> +	if (!mmu->mem_map) {
> +		printk("%s: %s failed to map memory\n",  __func__, mmu->name);
> +		return -EINVAL;
> +	}
> +
> +	printk("%s: %s ipu_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
> +
> +	mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
> +	if (!mmu->pagetable) {
> +		printk("%s: %s failed to alloc private pagetable\n",
> +			   __func__, mmu->name);
> +		return -ENOMEM;
> +	}
> +
> +	printk("%s: %s private pagetable %lu bytes\n",
> +		   __func__, mmu->name, IOPGD_TABLE_SIZE);
> +
> +	return 0;
> +}
> +
> +static int mmu_init_all(void)
> +{
> +	int res;
> +
> +	res = mmu_for_each(mmu_init, 0);
> +	if (res) {
> +		printk("%s error during init %d\n", __func__, res);
> +		return res;
> +	}
> +
> +	return 0;
> +}
> +
> +const struct mmio_handler mmu_mmio_handler = {
> +	.check_handler = mmu_mmio_check,
> +	.read_handler  = mmu_mmio_read,
> +	.write_handler = mmu_mmio_write,
> +};
> +
> +__initcall(mmu_init_all);
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:54:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lex-0005EX-EQ; Thu, 23 Jan 2014 14:53:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Lev-0005ES-Co
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:53:33 +0000
Received: from [193.109.254.147:31611] by server-7.bemta-14.messagelabs.com id
	EC/76-15500-CEC21E25; Thu, 23 Jan 2014 14:53:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390488810!11256732!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2167 invoked from network); 23 Jan 2014 14:53:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:53:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95754409"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 14:53:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:53:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6Ler-0002mU-F0;
	Thu, 23 Jan 2014 14:53:29 +0000
Date: Thu, 23 Jan 2014 14:52:22 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401231441470.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping to
 4K pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> Patch introduces the following algorithm:
> - enumerates all first level translation entries
> - for each section creates 256 pages, each page is 4096 bytes
> - for each supersection creates 4096 pages, each page is 4096 bytes
> - flush cache to synchronize Cortex-A15 and IOMMU
> 
> This algorithm makes it possible to use 4K mappings only.
> 
> Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>

I take it that the first patch doesn't actually work without this one?
In that case it might make sense to just merge them into one.


>  xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 46 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
> index 4dab30f..7ec03a2 100644
> --- a/xen/arch/arm/omap_iommu.c
> +++ b/xen/arch/arm/omap_iommu.c
> @@ -72,6 +72,9 @@
>  #define PTRS_PER_IOPTE		(1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
>  #define IOPTE_TABLE_SIZE	(PTRS_PER_IOPTE * sizeof(u32))
>  
> +/* 16 sections in supersection */
> +#define IOSECTION_PER_IOSUPER	(1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
> +
>  /*
>   * some descriptor attributes.
>   */
> @@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
>  	&omap_dsp_mmu,
>  };
>  
> +static bool translate_supersections_to_pages = true;
> +static bool translate_sections_to_pages = true;
> +
>  #define mmu_for_each(pfunc, data)						\
>  ({														\
>  	u32 __i;											\
> @@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
>  	return vaddr;
>  }
>  
> +static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
> +{
> +	u32 *iopte = NULL;
> +	u32 i;
> +
> +	iopte = xzalloc_bytes(PAGE_SIZE);
> +	if (!iopte) {
> +		printk("%s Fail to alloc 2nd level table\n", mmu->name);
> +		return 0;
> +	}
> +
> +	for (i = 0; i < PTRS_PER_IOPTE; i++) {
> +		u32 da, vaddr, iopgd_tmp;
> +		da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
> +		iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
> +		vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
> +		iopte[i] = vaddr | IOPTE_SMALL;
> +	}
> +
> +	flush_xen_dcache_va_range(iopte, PAGE_SIZE);
> +	return __pa(iopte) | IOPGD_TABLE;
> +}
> +
>  /*
>   * on boot table is empty
>   */
> @@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
>  
>  		/* "supersection" 16 Mb */
>  		if (iopgd_is_super(iopgd)) {
> -			da = i << IOSECTION_SHIFT;
> -			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
> +			if(likely(translate_supersections_to_pages)) {
> +				u32 j, iopgd_tmp;
> +				for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
> +					iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
> +					mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
> +				}
> +				i += (j - 1);
> +			} else {
> +				da = i << IOSECTION_SHIFT;
> +				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
> +			}
>  
>  		/* "section" 1Mb */
>  		} else if (iopgd_is_section(iopgd)) {
> -			da = i << IOSECTION_SHIFT;
> -			mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
> +			if (likely(translate_sections_to_pages)) {
> +				mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
> +			} else {
> +				da = i << IOSECTION_SHIFT;
> +				mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
> +			}
>  
>  		/* "table" */
>  		} else if (iopgd_is_table(iopgd)) {

Since the 16MB and 1MB sections might not actually be contiguous in
machine address space, this patch replaces the guest allocated sections
with pte tables pointing to the original IPAs. Is that right?
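[Editorial note: the section-to-pages expansion discussed above (one 1MB section becoming 256 small-page PTEs) can be sketched in plain C. This is an illustrative model only: the constants follow the ARM short-descriptor format, and the IPA-to-machine translation performed by mmu_translate_pgentry() in the patch is reduced here to an identity on a caller-supplied base address.]

```c
#include <assert.h>
#include <stdint.h>

/* ARM short-descriptor constants: a 1MB section covers 256 4KB pages. */
#define IOSECTION_SHIFT   20
#define IOPTE_SMALL_SHIFT 12
#define PTRS_PER_IOPTE    (1UL << (IOSECTION_SHIFT - IOPTE_SMALL_SHIFT))
#define IOPTE_SMALL       0x2UL   /* small-page descriptor type bits */

/* Fill a second-level table so each 4KB slot maps the matching 4KB
 * piece of the 1MB region starting at section_pa.  Stands in for
 * mmu_iopte_alloc(), minus allocation, translation and cache flush. */
static void fill_iopte_table(uint32_t iopte[PTRS_PER_IOPTE], uint32_t section_pa)
{
    uint32_t base = section_pa & ~((1UL << IOSECTION_SHIFT) - 1);
    for (uint32_t i = 0; i < PTRS_PER_IOPTE; i++)
        iopte[i] = (base + (i << IOPTE_SMALL_SHIFT)) | IOPTE_SMALL;
}
```

For a 16MB supersection the patch runs the equivalent of this 16 times (IOSECTION_PER_IOSUPER), stepping the base address by 1MB per first-level slot.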

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:56:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lhx-0005NZ-Az; Thu, 23 Jan 2014 14:56:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6Lhw-0005NS-0z
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:56:40 +0000
Received: from [193.109.254.147:17863] by server-4.bemta-14.messagelabs.com id
	7F/63-03916-7AD21E25; Thu, 23 Jan 2014 14:56:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390488997!12768506!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27733 invoked from network); 23 Jan 2014 14:56:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:56:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93714674"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:56:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 09:55:48 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6Lh5-0007Zz-GQ;
	Thu, 23 Jan 2014 14:55:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6Lh3-00034r-ES;
	Thu, 23 Jan 2014 14:55:45 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21217.11631.994209.326104@mariner.uk.xensource.com>
Date: Thu, 23 Jan 2014 14:55:43 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
	the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("[Xen-devel] [PATCH v5] coverity: Store the modelling file in the source tree."):
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>

This is fine as far as it goes.  But surely we should also include
some build instructions.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:57:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lic-0005RS-Oa; Thu, 23 Jan 2014 14:57:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Lib-0005RB-BA
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:57:21 +0000
Received: from [85.158.137.68:29875] by server-8.bemta-3.messagelabs.com id
	B6/53-31081-0DD21E25; Thu, 23 Jan 2014 14:57:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390489030!10116248!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17590 invoked from network); 23 Jan 2014 14:57:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:57:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93715234"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:57:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:57:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6LiO-0002tH-EF;
	Thu, 23 Jan 2014 14:57:08 +0000
Date: Thu, 23 Jan 2014 14:56:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390485665.24595.89.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401231453530.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
	<1390485665.24595.89.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> > On Thu, 23 Jan 2014, Ian Campbell wrote:
> > > On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > > > +{
> > > > > +    DECLARE_DOMCTL;
> > > > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > > > +    domctl.domain = (domid_t)domid;
> > > > > +    domctl.u.cacheflush.start_mfn = 0;
> > > > > +    return do_domctl(xch, &domctl);
> > > > > +}
> > > > 
> > > > Do we really need to flush the entire p2m, or just things we have
> > > > written to?
> > > 
> > > I think we need to flush everything (well, all RAM backed pages, the
> > > patch skips everything else).
> > > 
> > > Even things which we haven't explicitly written to will have been
> > > scrubbed and therefore have scrubbed data in the cache but data
> > > belonging to the previous owner in actual RAM. So we would really want
> > > to clean in that case too.
> > > 
> > > We could do the clean at scrub time, which would arguably be better anyway
> > > and would potentially allow us to only invalidate instead of clean
> > > +invalidate some subset of pages, but we would need to track which sort
> > > of page was which -- e.g. with a special p2m type for a page which had
> > > been foreign mapped or some other bit of metadata.
> > 
> > That seems like the way to go.
> 
> I'm not convinced actually, and in any case, not for 4.4...
> 
> > > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > > index 85ca330..f35ed57 100644
> > > > > --- a/xen/arch/arm/p2m.c
> > > > > +++ b/xen/arch/arm/p2m.c
> > > > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > > > >      ALLOCATE,
> > > > >      REMOVE,
> > > > >      RELINQUISH,
> > > > > +    CACHEFLUSH,
> > > > >  };
> > > > >  
> > > > > +static void do_one_cacheflush(paddr_t mfn)
> > > > > +{
> > > > > +    void *v = map_domain_page(mfn);
> > > > > +
> > > > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > > > +
> > > > > +    unmap_domain_page(v);
> > > > > +}
> > > > 
> > > > A pity that we need to map a page just to flush the dcache.  It could be
> > > > expensive, especially if we really have to map every single guest mfn.
> > > 
> > > Remember that this is basically free for arm64 and for arm32 we actually
> > > map 2MB regions and cache, so it is only actually one map per 2MB
> > > region.
> > 
> > Even with 2MB at a time it is easy for this to become really slow. A 4GB
> > guest would need 2048 iterations of map/flush/unmap. I don't have any
> > numbers but I bet they won't look good.
> 
> It happens once during domain build and the delay isn't noticeable to
> me compared with the rest of the build process.
> 
> > At least if it was combined with the RAM scrub we would save the 2048
> > map/unmap.
> 
> The scrub has to map it in exactly the same way.

Right, since we are already doing it once, why do it twice? :)
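[Editorial note: the 2048-round figure quoted above is just guest RAM divided by the 2MB mapping window. A minimal sketch, assuming the 2MB window size stated in the thread:]

```c
#include <assert.h>
#include <stdint.h>

/* arm32 map_domain_page() windows are 2MB per the discussion above. */
#define MAPPING_CHUNK (2ULL << 20)

/* Number of map/flush/unmap rounds needed to clean 'ram_bytes' of
 * guest RAM when working one 2MB window at a time (rounding up). */
static uint64_t cacheflush_rounds(uint64_t ram_bytes)
{
    return (ram_bytes + MAPPING_CHUNK - 1) / MAPPING_CHUNK;
}
```

A 4GB guest comes to 2048 rounds, which is the per-build cost being weighed against folding the clean into the scrub pass.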

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:57:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lic-0005RS-Oa; Thu, 23 Jan 2014 14:57:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Lib-0005RB-BA
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:57:21 +0000
Received: from [85.158.137.68:29875] by server-8.bemta-3.messagelabs.com id
	B6/53-31081-0DD21E25; Thu, 23 Jan 2014 14:57:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390489030!10116248!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17590 invoked from network); 23 Jan 2014 14:57:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:57:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93715234"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 14:57:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:57:08 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6LiO-0002tH-EF;
	Thu, 23 Jan 2014 14:57:08 +0000
Date: Thu, 23 Jan 2014 14:56:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390485665.24595.89.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401231453530.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
	<1390485665.24595.89.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> > On Thu, 23 Jan 2014, Ian Campbell wrote:
> > > On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > > > +{
> > > > > +    DECLARE_DOMCTL;
> > > > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > > > +    domctl.domain = (domid_t)domid;
> > > > > +    domctl.u.cacheflush.start_mfn = 0;
> > > > > +    return do_domctl(xch, &domctl);
> > > > > +}
> > > > 
> > > > Do we really need to flush the entire p2m, or just things we have
> > > > written to?
> > > 
> > > I think we need to flush everything (well, all RAM backed pages, the
> > > patch skips everything else).
> > > 
> > > Even things which we haven't explicitly written to will have been
> > > scrubbed and therefore have scrubbed data in the cache but data
> > > belonging to the previous owner in actual RAM. So we would really want
> > > to clean in that case too.
> > > 
> > > We could do the clean at scrub time which arguably be better anyway and
> > > would potentially allows us to only invalidate instead of clean
> > > +invalidate some subset of pages, but we would need to track which sort
> > > of page was which -- e.g. with a special p2m type for a page which had
> > > been foreign mapped or some other bit of metadata.
> > 
> > That seems like the way to go.
> 
> I'm not convinced actually, and in any case, not for 4.4...
> 
> > > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > > index 85ca330..f35ed57 100644
> > > > > --- a/xen/arch/arm/p2m.c
> > > > > +++ b/xen/arch/arm/p2m.c
> > > > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > > > >      ALLOCATE,
> > > > >      REMOVE,
> > > > >      RELINQUISH,
> > > > > +    CACHEFLUSH,
> > > > >  };
> > > > >  
> > > > > +static void do_one_cacheflush(paddr_t mfn)
> > > > > +{
> > > > > +    void *v = map_domain_page(mfn);
> > > > > +
> > > > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > > > +
> > > > > +    unmap_domain_page(v);
> > > > > +}
> > > > 
> > > > A pity that we need to map a page just to flush the dcache.  It could be
> > > > expensive, especially if we really have to map every single guest mfn.
> > > 
> > > Remember that this is basically free for arm64, and on arm32 we
> > > actually map 2MB regions and cache the mappings, so it is only
> > > actually one map per 2MB region.
> > 
> > Even with 2MB at a time it is easy for this to become really slow. A 4GB
> > guest would need 2048 iterations of map/flush/unmap. I don't have any
> > numbers but I bet they won't look good.
> 
> It happens once during domain build and the delay isn't noticeable to
> me compared with the rest of the build process.
> 
> > At least if it was combined with the RAM scrub we would save the 2048
> > map/unmap.
> 
> The scrub has to map it in exactly the same way.

Right, since we are already doing it once, why do it twice? :)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 14:58:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 14:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Ljj-0005Zq-8Q; Thu, 23 Jan 2014 14:58:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6Lji-0005Zl-D5
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 14:58:30 +0000
Received: from [85.158.139.211:20412] by server-15.bemta-5.messagelabs.com id
	2A/70-08490-51E21E25; Thu, 23 Jan 2014 14:58:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390489107!11550815!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6392 invoked from network); 23 Jan 2014 14:58:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 14:58:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95756560"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 14:58:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 09:58:25 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6Ljc-0002xs-Js;
	Thu, 23 Jan 2014 14:58:24 +0000
Message-ID: <52E12E10.6060701@citrix.com>
Date: Thu, 23 Jan 2014 14:58:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<21217.11631.994209.326104@mariner.uk.xensource.com>
In-Reply-To: <21217.11631.994209.326104@mariner.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 14:55, Ian Jackson wrote:
> Andrew Cooper writes ("[Xen-devel] [PATCH v5] coverity: Store the modelling file in the source tree."):
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
> This is fine as far as it goes.  But surely we should also include
> some build instructions.
>
> Ian.

There aren't really any build instructions.  The way to use it with
Coverity Scan is for one of the Coverity Scan admins to click the
"Upload a new modelling file" button.  Any scan initiated thereafter
will use the latest modelling file.

~Andrew


From xen-devel-bounces@lists.xen.org Thu Jan 23 15:01:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:01:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6LmA-0006Bo-W0; Thu, 23 Jan 2014 15:01:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6LmA-0006Bf-35
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 15:01:02 +0000
Received: from [85.158.137.68:11921] by server-15.bemta-3.messagelabs.com id
	E5/A8-11556-DAE21E25; Thu, 23 Jan 2014 15:01:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390489258!10888666!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17095 invoked from network); 23 Jan 2014 15:01:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:01:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95757589"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 15:00:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 10:00:57 -0500
Message-ID: <1390489256.24595.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 23 Jan 2014 15:00:56 +0000
In-Reply-To: <alpine.DEB.2.02.1401231453530.15917@kaball.uk.xensource.com>
References: <1390408531.32519.78.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401221701100.21510@kaball.uk.xensource.com>
	<1390469635.24595.7.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401231348130.15917@kaball.uk.xensource.com>
	<1390485665.24595.89.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401231453530.15917@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen/arm: Alternative start of day cache coherency
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 14:56 +0000, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Ian Campbell wrote:
> > On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> > > At least if it was combined with the RAM scrub we would save the 2048
> > > map/unmap.
> > 
> > The scrub has to map it in exactly the same way.
> 
> Right, since we are already doing it once, why do it twice? :)

Because you need a load of other infrastructure (to track clean vs dirty
guest pages) too for it to be of any benefit.

Ian.



From xen-devel-bounces@lists.xen.org Thu Jan 23 15:12:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lx4-0006tN-3M; Thu, 23 Jan 2014 15:12:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6Lx2-0006tE-Gh
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 15:12:16 +0000
Received: from [85.158.137.68:35233] by server-16.bemta-3.messagelabs.com id
	EC/58-26128-F4131E25; Thu, 23 Jan 2014 15:12:15 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390489933!7263740!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3090 invoked from network); 23 Jan 2014 15:12:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:12:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93724787"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 15:12:11 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 10:12:11 -0500
Message-ID: <52E13149.1070705@citrix.com>
Date: Thu, 23 Jan 2014 15:12:09 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
	<52E00FB7.3040508@citrix.com>
	<alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
	<52E015F5.70408@citrix.com>
	<alpine.DEB.2.02.1401231355340.15917@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401231355340.15917@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 13:59, Stefano Stabellini wrote:
> On Wed, 22 Jan 2014, Zoltan Kiss wrote:
>>>>>> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
>>>>>> index 2ae8699..0060178 100644
>>>>>> --- a/arch/x86/xen/p2m.c
>>>>>> +++ b/arch/x86/xen/p2m.c
>>>>>> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>>>>>>
>>>>>>     /* Add an MFN override for a particular page */
>>>>>>     int m2p_add_override(unsigned long mfn, struct page *page,
>>>>>> -		struct gnttab_map_grant_ref *kmap_op)
>>>>>> +		struct gnttab_map_grant_ref *kmap_op, unsigned long
>>>>>> pfn)
>>>>>
>>>>> Do we really need to add another additional parameter to
>>>>> m2p_add_override?
>>>>> I would just let m2p_add_override and m2p_remove_override call
>>>>> page_to_pfn again. It is not that expensive.
>>>> Yes, because that page_to_pfn can return something different. That's why
>>>> the
>>>> v2 patches failed.
>>>
>>> I am really curious: how can page_to_pfn return something different?
>>> I don't think that is supposed to happen.
>> You call set_phys_to_machine before calling m2p* functions.
>
> set_phys_to_machine changes the physical to machine mapping, that is,
> the mfn corresponding to a given pfn. It shouldn't affect the output
> of page_to_pfn, which returns the pfn corresponding to a given struct
> page: that calculation is based on address offsets and should be
> static and unaffected by things like set_phys_to_machine.

Indeed, my mistake. The mfn is the only thing which changes; it still
has to be passed to m2p_remove_override. I'll send a new version.

Zoli



From xen-devel-bounces@lists.xen.org Thu Jan 23 15:14:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Lyh-00077i-LY; Thu, 23 Jan 2014 15:13:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6Lyf-00077Z-Vq
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 15:13:58 +0000
Received: from [85.158.143.35:20609] by server-2.bemta-4.messagelabs.com id
	E9/6B-11386-5B131E25; Thu, 23 Jan 2014 15:13:57 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390490034!351893!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20680 invoked from network); 23 Jan 2014 15:13:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:13:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93725729"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 15:13:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 10:13:53 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6Lyb-0003AH-7C;
	Thu, 23 Jan 2014 15:13:53 +0000
Message-ID: <52E131AB.7020603@eu.citrix.com>
Date: Thu, 23 Jan 2014 15:13:47 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/23/2014 02:28 PM, Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
>
> ---
>
> George:
>    This is just documentation, and it would be nice to include it as part of
>    the 4.4 release.
> ---
>   misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 98 insertions(+)
>   create mode 100644 misc/coverity_model.c
>
> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
> new file mode 100644
> index 0000000..418d25e
> --- /dev/null
> +++ b/misc/coverity_model.c
> @@ -0,0 +1,98 @@
> +/* Coverity Scan model
> + *
> + * This is a modelling file for Coverity Scan. Modelling helps to avoid false
> + * positives.
> + *
> + * - A model file can't import any header files.
> + * - Therefore only some built-in primitives like int, char and void are
> + *   available but not NULL etc.
> + * - Modelling doesn't need full structs and typedefs. Rudimentary structs
> + *   and similar types are sufficient.
> + * - An uninitialized local pointer is not an error. It signifies that the
> + *   variable could be either NULL or have some data.
> + *
> + * Coverity Scan doesn't pick up modifications automatically. The model file
> + * must be uploaded by an admin in the analysis.

So this file isn't compiled; it's manually uploaded as part of the 
Coverity scanning process, and could be provided out-of-band, but it's 
just convenient to put it in the tree, particularly if any of these 
things should change as things go forward.  (Hence comparing it to 
documentation.)  Is that right?

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 15:15:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6M0W-0007Si-71; Thu, 23 Jan 2014 15:15:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6M0V-0007Sa-9A
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 15:15:51 +0000
Received: from [193.109.254.147:29996] by server-15.bemta-14.messagelabs.com
	id C6/85-22186-62231E25; Thu, 23 Jan 2014 15:15:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390490148!12795295!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15085 invoked from network); 23 Jan 2014 15:15:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:15:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95767313"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 15:15:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 10:15:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6M0R-0003C4-7C;
	Thu, 23 Jan 2014 15:15:47 +0000
Date: Thu, 23 Jan 2014 15:14:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <52E13149.1070705@citrix.com>
Message-ID: <alpine.DEB.2.02.1401231513430.15917@kaball.uk.xensource.com>
References: <1390335755-29328-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401221618340.21510@kaball.uk.xensource.com>
	<52E00FB7.3040508@citrix.com>
	<alpine.DEB.2.02.1401221848080.21510@kaball.uk.xensource.com>
	<52E015F5.70408@citrix.com>
	<alpine.DEB.2.02.1401231355340.15917@kaball.uk.xensource.com>
	<52E13149.1070705@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v4] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Zoltan Kiss wrote:
> On 23/01/14 13:59, Stefano Stabellini wrote:
> > On Wed, 22 Jan 2014, Zoltan Kiss wrote:
> > > > > > > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > > > > > > index 2ae8699..0060178 100644
> > > > > > > --- a/arch/x86/xen/p2m.c
> > > > > > > +++ b/arch/x86/xen/p2m.c
> > > > > > > @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long
> > > > > > > mfn)
> > > > > > > 
> > > > > > >     /* Add an MFN override for a particular page */
> > > > > > >     int m2p_add_override(unsigned long mfn, struct page *page,
> > > > > > > -		struct gnttab_map_grant_ref *kmap_op)
> > > > > > > +		struct gnttab_map_grant_ref *kmap_op, unsigned long
> > > > > > > pfn)
> > > > > > 
> > > > > > Do we really need to add another additional parameter to
> > > > > > m2p_add_override?
> > > > > > I would just let m2p_add_override and m2p_remove_override call
> > > > > > page_to_pfn again. It is not that expensive.
> > > > > Yes, because that page_to_pfn can return something different. That's
> > > > > why
> > > > > the
> > > > > v2 patches failed.
> > > > 
> > > > I am really curious: how can page_to_pfn return something different?
> > > > I don't think it is supposed to happen.
> > > You call set_phys_to_machine before calling m2p* functions.
> > 
> > set_phys_to_machine changes the physical-to-machine mapping, i.e. the
> > mfn corresponding to a given pfn. It shouldn't affect the output of
> > page_to_pfn, which returns the pfn corresponding to a given struct
> > page. That calculation is based on address offsets and should be
> > static, unaffected by things like set_phys_to_machine.
> 
> Indeed, my mistake. The mfn is the only thing that changes; it still has
> to be passed to m2p_remove_override. I'll send a new version.

Passing the mfn to m2p_remove_override is OK.
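The point Stefano is making can be sketched in a few lines: page_to_pfn() is pure pointer arithmetic against a static mem_map array, while set_phys_to_machine() only writes the separate p2m table, which page_to_pfn() never reads. Below is a simplified flat-memory sketch of that relationship (hypothetical illustration, not the kernel's actual implementation):

```c
#include <assert.h>

/* Simplified flat memory model: one struct page per pfn. */
struct page { int flags; };

#define NR_PAGES 16
static struct page mem_map[NR_PAGES];   /* struct page array, fixed layout */
static unsigned long p2m[NR_PAGES];     /* separate pfn -> mfn table */

/* page_to_pfn is address arithmetic over mem_map ... */
static unsigned long page_to_pfn(const struct page *page)
{
    return (unsigned long)(page - mem_map);
}

/* ... while set_phys_to_machine only updates the p2m table, which
 * plays no part in the page_to_pfn calculation. */
static void set_phys_to_machine(unsigned long pfn, unsigned long mfn)
{
    p2m[pfn] = mfn;
}
```

So calling set_phys_to_machine() before the m2p_* functions changes the mfn associated with a pfn, but the pfn that a given struct page maps to stays fixed, which is why only the mfn needs to be passed along.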

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 15:19:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6M3o-00080o-IK; Thu, 23 Jan 2014 15:19:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6M3m-00080e-Nr
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 15:19:14 +0000
Received: from [85.158.143.35:40485] by server-1.bemta-4.messagelabs.com id
	B5/32-02132-2F231E25; Thu, 23 Jan 2014 15:19:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390490352!353397!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11269 invoked from network); 23 Jan 2014 15:19:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:19:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93728843"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 15:19:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 10:19:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6M3i-0003F6-Hm;
	Thu, 23 Jan 2014 15:19:10 +0000
Message-ID: <52E132EE.2030101@citrix.com>
Date: Thu, 23 Jan 2014 15:19:10 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com>
In-Reply-To: <52E131AB.7020603@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 15:13, George Dunlap wrote:
> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>
>> ---
>>
>> George:
>>    This is just documentation, and it would be nice to include it as
>> part of
>>    the 4.4 release.
>> ---
>>   misc/coverity_model.c |   98
>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 98 insertions(+)
>>   create mode 100644 misc/coverity_model.c
>>
>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>> new file mode 100644
>> index 0000000..418d25e
>> --- /dev/null
>> +++ b/misc/coverity_model.c
>> @@ -0,0 +1,98 @@
>> +/* Coverity Scan model
>> + *
>> + * This is a modelling file for Coverity Scan. Modelling helps to
>> avoid false
>> + * positives.
>> + *
>> + * - A model file can't import any header files.
>> + * - Therefore only some built-in primitives like int, char and void
>> are
>> + *   available but not NULL etc.
>> + * - Modelling doesn't need full structs and typedefs. Rudimentary
>> structs
>> + *   and similar types are sufficient.
>> + * - An uninitialized local pointer is not an error. It signifies
>> that the
>> + *   variable could be either NULL or have some data.
>> + *
>> + * Coverity Scan doesn't pick up modifications automatically. The
>> model file
>> + * must be uploaded by an admin in the analysis.
>
> So this file isn't compiled; it's manually uploaded as part of the
> coverity scanning process; and could be provided out-of-band, but it's
> just convenient to put it in the tree, particularly if any of these
> things should change as things go forward.  (Hence comparing it to
> documentation.)  Is that right?
>
>  -George
>

Correct.  I believe internally Coverity compiles it (at least to an
AST), but that is completely opaque to users of Scan.

~Andrew
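
For readers following along: a Coverity model is ordinary-looking C in which stub bodies call Coverity's analysis primitives instead of containing real logic. The fragment below is a hypothetical illustration of the style (not the actual contents of misc/coverity_model.c); `__coverity_panic__` is a Scan-only primitive, so a model file cannot be compiled by a normal compiler:

```c
/* Hypothetical model fragment.  Header files may not be included, so
 * only built-in types are available. */

/* Teach the analyser that panic() never returns, so code paths after a
 * panic are not flagged for leaks or NULL dereferences. */
void panic(const char *fmt, ...)
{
    __coverity_panic__();
}
```

An admin then uploads this file in the project's Scan settings; it is never linked into the build itself.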

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 15:32:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 15:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6MG5-0000bq-C7; Thu, 23 Jan 2014 15:31:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6MG3-0000bk-Sh
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 15:31:56 +0000
Received: from [85.158.143.35:44981] by server-3.bemta-4.messagelabs.com id
	50/18-32360-BE531E25; Thu, 23 Jan 2014 15:31:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390491113!358394!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26768 invoked from network); 23 Jan 2014 15:31:53 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 15:31:53 -0000
Received: by mail-ee0-f41.google.com with SMTP id e49so475114eek.28
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 07:31:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=4LjoM4NPzrCvNwwN0LP1ZykvwNi1V3jDVdu0AH1iDcI=;
	b=bJvvBYqOxv/LUYpKtVabO08vg/BSLGAq/SwooDNS16t7m9kfVy9TFIypV38mHo7oK6
	kxdxTDkfoqB2/6/pd/FqnPbu9jp81aqOHKvxh26AlYu9wFxf5LcY4TwKpDZTtDGyqN0W
	ZsNH6e1l+c4puB5IXUBTO9vXQIQQtBfztwXbOlg1d1mHXvGBf1x2O1XcsxMj3d4daBEA
	rQeWsCgWFBbW8IUW3pEgYza7FUtR2g8vpPtPIwByVxM/QQWLegLzll3o1OR+xSFTsi4j
	M4t74LXQLKDEErVum4GZiLN94rTtN0rOttflLR/0x3WN0ipzCtxnqtzfuRLVCwV5sFxY
	RUNw==
X-Gm-Message-State: ALoCoQlwMUxVcupFC5Z4xQJAhLCz68q0ZCpvQdZhMj2S4UxJ2L/mmyLULkotcnfFd/BL7CCMbgrk
X-Received: by 10.14.221.193 with SMTP id r41mr3354270eep.92.1390491113484;
	Thu, 23 Jan 2014 07:31:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm40467024eeo.8.2014.01.23.07.31.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 07:31:52 -0800 (PST)
Message-ID: <52E135E6.7030109@linaro.org>
Date: Thu, 23 Jan 2014 15:31:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
In-Reply-To: <1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/22/2014 03:52 PM, Andrii Tseglytskyi wrote:
> omap IOMMU module is designed to handle access to external
> omap MMUs, connected to the L3 bus.

Hello Andrii,

Thanks for the patch. See my comments inline (I won't repeat the
comments Stefano already made).

> Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---

[..]

> +struct mmu_info {
> +	const char			*name;
> +	paddr_t				mem_start;
> +	u32					mem_size;
> +	u32					*pagetable;
> +	void __iomem		*mem_map;
> +};
> +
> +static struct mmu_info omap_ipu_mmu = {

static const?

> +	.name		= "IPU_L2_MMU",
> +	.mem_start	= 0x55082000,
> +	.mem_size	= 0x1000,
> +	.pagetable	= NULL,
> +};
> +
> +static struct mmu_info omap_dsp_mmu = {

static const?

> +	.name		= "DSP_L2_MMU",
> +	.mem_start	= 0x4a066000,
> +	.mem_size	= 0x1000,
> +	.pagetable	= NULL,
> +};
> +
> +static struct mmu_info *mmu_list[] = {

static const?

> +	&omap_ipu_mmu,
> +	&omap_dsp_mmu,
> +};
> +
> +#define mmu_for_each(pfunc, data)						\
> +({														\
> +	u32 __i;											\
> +	int __res = 0;										\
> +														\
> +	for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {	\
> +		__res |= pfunc(mmu_list[__i], data);			\

Your "res |=" will result in a "wrong" errno if there are multiple
failures. Would it be better to have:

__res = pfunc(...);
if ( __res )
  break;

> +	}													\
> +	__res;												\
> +})
> +
> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
> +{
> +	if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static inline struct mmu_info *mmu_lookup(u32 addr)
> +{
> +	u32 i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
> +		if (mmu_check_mem_range(mmu_list[i], addr))
> +			return mmu_list[i];
> +	}
> +
> +	return NULL;
> +}
> +
> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
> +{
> +	return (reg & mask) | (va & (~mask));
> +}
> +
> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
> +{
> +	return (reg & ~mask) | pa;
> +}
> +
> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
> +{
> +	return mmu_for_each(mmu_check_mem_range, addr);
> +}

As I understand your cover letter, the device (and therefore the MMU) is
only passed through to a single guest, right?

If so, your mmu_mmio_check should check whether the domain owns the
device. With your current code any guest can write to this range and
rewrite the MMU page table.

> +
> +static int mmu_copy_pagetable(struct mmu_info *mmu)
> +{
> +	void __iomem *pagetable = NULL;
> +	u32 pgaddr;
> +
> +	ASSERT(mmu);
> +
> +	/* read address where kernel MMU pagetable is stored */
> +	pgaddr = readl(mmu->mem_map + MMU_TTB);
> +	pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
> +	if (!pagetable) {
> +		printk("%s: %s failed to map pagetable\n",
> +			   __func__, mmu->name);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * pagetable can be changed since last time
> +	 * we accessed it therefore we need to copy it each time
> +	 */
> +	memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
> +
> +	iounmap(pagetable);
> +
> +	return 0;
> +}

I'm confused: this should copy from the guest MMU pagetable, right? In
that case you should use map_domain_page.
ioremap *MUST* only be used with device memory, otherwise memory
coherency is not guaranteed.

[..]

> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
> +{
> +	struct domain *dom = v->domain;
> +	struct mmu_info *mmu = NULL;
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    register_t *r = select_user_reg(regs, info->dabt.reg);
> +	int res;
> +
> +	mmu = mmu_lookup(info->gpa);
> +	if (!mmu) {
> +		printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * make sure that user register is written first in this function
> +	 * following calls may expect valid data in it
> +	 */
> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));

Hmmm ... I think this is very confusing: you should only write to the
memory if mmu_trap_write_access has not failed, and use "*r" where it's
needed.

Writing to the device memory could have side effect (for instance
updating the page table with the wrong translation...).

> +
> +	res = mmu_trap_write_access(dom, mmu, info);
> +	if (res)
> +		return res;
> +
> +    return 1;
> +}
> +
> +static int mmu_init(struct mmu_info *mmu, u32 data)
> +{
> +	ASSERT(mmu);
> +	ASSERT(!mmu->mem_map);
> +	ASSERT(!mmu->pagetable);
> +
> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);

Can you use ioremap_nocache instead of ioremap? The behavior is the same
but the former name is less confusing.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 16:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6N44-00037U-B9; Thu, 23 Jan 2014 16:23:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6N42-00037P-Ad
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 16:23:34 +0000
Received: from [85.158.143.35:51536] by server-2.bemta-4.messagelabs.com id
	55/82-11386-50241E25; Thu, 23 Jan 2014 16:23:33 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390494211!369790!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11564 invoked from network); 23 Jan 2014 16:23:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 16:23:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93763593"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 16:23:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 11:23:30 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6N3x-0004Ds-NC;
	Thu, 23 Jan 2014 16:23:29 +0000
Date: Thu, 23 Jan 2014 16:23:29 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140123162329.GG24675@zion.uk.xensource.com>
References: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
	<52DF9B76.8060807@redhat.com>
	<20140122160940.GC24675@zion.uk.xensource.com>
	<52E0DCDD.60405@redhat.com>
	<20140123135440.GF24675@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140123135440.GF24675@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 01:54:40PM +0000, Wei Liu wrote:
> On Thu, Jan 23, 2014 at 10:11:57AM +0100, Paolo Bonzini wrote:
> > Il 22/01/2014 17:09, Wei Liu ha scritto:
> > >On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
> > >>Il 21/01/2014 19:27, Wei Liu ha scritto:
> > >>>>>
> > >>>>>Googling "disable tcg" would have provided an answer, but the patches
> > >>>>>were old enough to be basically useless.  I'll refresh the current
> > >>>>>version in the next few days.  Currently I am (or try to be) on
> > >>>>>vacation, so I cannot really say when, but I'll do my best. :)
> > >>>>>
> > >>>Hi Paolo, any update?
> > >>
> > >>Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
> > >>branch on my github repository.
> > >>
> > >
> > >Unfortunately your branch didn't work when I enabled TCG support. If I
> > >use "--disable-tcg" with configure then it works fine.
> > 
> > Branch fixed.
> > 
> 
> Yes, it's fixed for the case I reported. Thanks.
> 
> But it is now broken with following rune:
> ./configure --enable-kvm --disable-tcg --target-list=i386-softmmu
> --disable-xen --enable-debug
> 
>   LINK  i386-softmmu/qemu-system-i386
>   cpus.o: In function `cpu_signal':
>   /local/scratch/qemu/cpus.c:569: undefined reference to `exit_request'
>   cpus.o: In function `tcg_cpu_exec':
>   /local/scratch/qemu/cpus.c:1257: undefined reference to `cpu_x86_exec'
>   cpus.o: In function `tcg_exec_all':
>   /local/scratch/qemu/cpus.c:1282: undefined reference to `exit_request'
>   /local/scratch/qemu/cpus.c:1299: undefined reference to `exit_request'
>   exec.o: In function `tlb_reset_dirty_range_all':
>   /local/scratch/qemu/exec.c:736: undefined reference to
>   `cpu_tlb_reset_dirty_all'
>   collect2: error: ld returned 1 exit status
>   make[1]: *** [qemu-system-i386] Error 1
>   make: *** [subdir-i386-softmmu] Error 2
> 
> --enable-debug is the one to blame. Without that it links successfully.
> 
> Wei.
> 

Finally I figured out what was wrong. Your patch series relies on the
compiler to aggressively optimize away unused code.

So when --enable-debug is set, the compiler won't optimize away the dead
code, hence those undefined references. With any optimization level (-O1
and above) your series compiles successfully.

Feel free to integrate my patch below, or fix those errors in the way
you see appropriate.

Wei.

---8<---
diff --git a/cpus.c b/cpus.c
index 508b26c..2cc841b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -563,6 +563,9 @@ static void cpu_handle_guest_debug(CPUState *cpu)
 
 static void cpu_signal(int sig)
 {
+
+    assert(tcg_enabled());
+
     if (current_cpu) {
         cpu_exit(current_cpu);
     }
@@ -1226,6 +1229,8 @@ static int tcg_cpu_exec(CPUArchState *env)
     int64_t ti;
 #endif
 
+    assert(tcg_enabled());
+
 #ifdef CONFIG_PROFILER
     ti = profile_getclock();
 #endif
@@ -1273,6 +1278,8 @@ static void tcg_exec_all(void)
 {
     int r;
 
+    assert(tcg_enabled());
+
     /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
     qemu_clock_warp(QEMU_CLOCK_VIRTUAL);
 
diff --git a/tcg-stub.c b/tcg-stub.c
index a06070f..b14a21d 100644
--- a/tcg-stub.c
+++ b/tcg-stub.c
@@ -29,3 +29,7 @@ void tlb_set_page(CPUArchState *env, target_ulong vaddr,
 void tb_invalidate_phys_addr(hwaddr addr)
 {
 }
+
+void cpu_tlb_reset_dirty_all(ram_addr_t start1, ram_addr_t length)
+{
+}


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 16:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6N49-00037l-NI; Thu, 23 Jan 2014 16:23:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W6N48-00037d-Ta
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 16:23:41 +0000
Received: from [193.109.254.147:20176] by server-15.bemta-14.messagelabs.com
	id D8/D6-22186-C0241E25; Thu, 23 Jan 2014 16:23:40 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390494217!10478777!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29018 invoked from network); 23 Jan 2014 16:23:38 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 16:23:38 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so2482391qaq.11
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 08:23:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=IVmK9B5PM7KTOMz7gWJx7x4Y4WQkVKtPxNvcMOfGGMI=;
	b=EPz0y11fmhQo43m6kMdugO0m9XkwzGl29pxVkVhDMQsi2/LUP2WvTTizsojiMq9heP
	SKLpBoIXuM3Fz0TaJjA+TV1aDWZv1rpLxmVgoPQGr6U2R+/cXG4fpRkTO/Er/wIcRoMO
	bMqrrmpn79zzjHlXXANm9bG2L13ysWKH3ukLABDNJh23sI6gH1q8gTcukG4TNpXMNK83
	OSvunmDKnNeEZZfgO9pa0FsR+I297mnUdVWiLtleVlaAZaKH8qvr2Rcs5I1pwa2Bif+p
	4Z2vTD6cKlWoMTWYUi8DYEoDOkxM4bMFA/YtI3Rk647pDX0GLTILcZvTILNlnEBMWvcU
	HW3g==
MIME-Version: 1.0
X-Received: by 10.140.26.43 with SMTP id 40mr12210603qgu.86.1390494217626;
	Thu, 23 Jan 2014 08:23:37 -0800 (PST)
Received: by 10.96.133.33 with HTTP; Thu, 23 Jan 2014 08:23:37 -0800 (PST)
In-Reply-To: <20140122203337.GA31908@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
Date: Thu, 23 Jan 2014 11:23:37 -0500
Message-ID: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
>> >>
>> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>> >>>
>> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>> >>>>
>> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>> >>>>>
>> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>> >>>>> <gregkh@linuxfoundation.org> wrote:
>> >>>
>> >>>
>> >>> Adding extra folks to the party.
>> >>>>>>
>> >>>>>>
>> >>>>>> Odds are this also shows up in 3.13, right?
>> >>>>
>> >>>>
>> >>>> Reproduced using 3.13 on the PV guest:
>> >>>>
>> >>>>         [  368.756763] BUG: Bad page map in process mp
>> >>>> pte:80000004a67c6165 pmd:e9b706067
>> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
>> >>>> mapping:          (null) index:0x0
>> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
>> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
>> >>>> #1
>> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
>> >>>> ffffffff814d8748 00007fd1388b7000
>> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
>> >>>> 0000000000000000 0000000000000000
>> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
>> >>>> 00007fd1388b8000 ffff880e9eaf3e30
>> >>>>         [  368.756815] Call Trace:
>> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >>>>         [  368.756837]  [<ffffffff8116eae3>]
>> >>>> unmap_single_vma+0x583/0x890
>> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>> >>>>         [  368.756869]  [<ffffffff814e70ed>]
>> >>>> system_call_fastpath+0x1a/0x1f
>> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
>> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >>>> idx:0 val:-1
>> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >>>> idx:1 val:1
>> >>>>
>> >>>>>
>> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
>> >>>>> interest in setting one up).. And I have a suspicion that it might not
>> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>> >>>>>
>> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>> >>>>> confused.
>> >>>>>
>> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>> >>>>> _PAGE_PRESENT.
>> >>>>>
>> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>> >>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>> >>>>> that makes him go "Ahh, yes, silly case".
>> >>>>>
>> >>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>> >>>>>
>> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>> >>>>> attached test-case (but apparently only under Xen PV). There it
>> >>>>> apparently causes a "BUG: Bad page map .." error.
>> >>>
>> >>>
>> >>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the
>> >>> _raw_
>> >>> value of PMDs and PTEs. That is - it does not use the pvops interface
>> >>> and instead reads the values directly from the page-table. Since the
>> >>> page-table is also manipulated by the hypervisor - there are certain
>> >>> flags it also sets to do its business. It might be that it uses
>> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>> >>> pte_flags that would invoke the pvops interface.
>> >>>
>> >>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?

It does use _PAGE_GLOBAL for guest user pages.

>> >>>
>> >>> This not-compiled-totally-bad-patch might shed some light on what I was
>> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> >>> for that).
>> >>
>> >>
>> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
>> >> still able to repro the issue:
>>
>> Steven, do you use numa=fake on boot cmd line for pv guest?
>>
>> I had similar issue on pv guest. Let me check if the fix that resolved
>> this for me will help with 3.13.
>
> Nope:
>
> # cat /proc/cmdline
> root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7

>
>>
>> >
>> >
>> > Maybe this one is also related to this BUG here (cc'ed people investigating
>> > this one) ...
>> >
>> >   https://lkml.org/lkml/2014/1/10/427
>> >
>> > ... not sure, though.
>> >
>> >
>> >>         [  346.374929] BUG: Bad page map in process mp
>> >> pte:80000004ae928065 pmd:e993f9067
>> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
>> >> (null) index:0x0
>> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
>> >> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
>> >> #1
>> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
>> >> 00007f06a9bbb000
>> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
>> >> 0000000000000000
>> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
>> >> ffff880e991a3e30
>> >>         [  346.374979] Call Trace:
>> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>> >>         [  346.375034]  [<ffffffff814e712d>]
>> >> system_call_fastpath+0x1a/0x1f
>> >>         [  346.375037] Disabling lock debugging due to kernel taint
>> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> idx:0 val:-1
>> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> idx:1 val:1
>> >>
>> >> This dump doesn't look dramatically different, either.
>> >>
>> >>>
>> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >>> turned on?
>> >>
>> >>
>> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> mean not enabled at runtime?
>> >>
>> >> [1]
>> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>
>>
>>
>> --
>> Elena

I was able to reproduce this consistently, also with the latest mm
patches from yesterday.
Can you please try this:

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..76dcf96 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
*mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val &
(_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val &
(_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 16:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6N49-00037l-NI; Thu, 23 Jan 2014 16:23:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W6N48-00037d-Ta
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 16:23:41 +0000
Received: from [193.109.254.147:20176] by server-15.bemta-14.messagelabs.com
	id D8/D6-22186-C0241E25; Thu, 23 Jan 2014 16:23:40 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390494217!10478777!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29018 invoked from network); 23 Jan 2014 16:23:38 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 16:23:38 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so2482391qaq.11
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 08:23:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=IVmK9B5PM7KTOMz7gWJx7x4Y4WQkVKtPxNvcMOfGGMI=;
	b=EPz0y11fmhQo43m6kMdugO0m9XkwzGl29pxVkVhDMQsi2/LUP2WvTTizsojiMq9heP
	SKLpBoIXuM3Fz0TaJjA+TV1aDWZv1rpLxmVgoPQGr6U2R+/cXG4fpRkTO/Er/wIcRoMO
	bMqrrmpn79zzjHlXXANm9bG2L13ysWKH3ukLABDNJh23sI6gH1q8gTcukG4TNpXMNK83
	OSvunmDKnNeEZZfgO9pa0FsR+I297mnUdVWiLtleVlaAZaKH8qvr2Rcs5I1pwa2Bif+p
	4Z2vTD6cKlWoMTWYUi8DYEoDOkxM4bMFA/YtI3Rk647pDX0GLTILcZvTILNlnEBMWvcU
	HW3g==
MIME-Version: 1.0
X-Received: by 10.140.26.43 with SMTP id 40mr12210603qgu.86.1390494217626;
	Thu, 23 Jan 2014 08:23:37 -0800 (PST)
Received: by 10.96.133.33 with HTTP; Thu, 23 Jan 2014 08:23:37 -0800 (PST)
In-Reply-To: <20140122203337.GA31908@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
Date: Thu, 23 Jan 2014 11:23:37 -0500
Message-ID: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
>> >>
>> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>> >>>
>> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>> >>>>
>> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>> >>>>>
>> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>> >>>>> <gregkh@linuxfoundation.org> wrote:
>> >>>
>> >>>
>> >>> Adding extra folks to the party.
>> >>>>>>
>> >>>>>>
>> >>>>>> Odds are this also shows up in 3.13, right?
>> >>>>
>> >>>>
>> >>>> Reproduced using 3.13 on the PV guest:
>> >>>>
>> >>>>         [  368.756763] BUG: Bad page map in process mp
>> >>>> pte:80000004a67c6165 pmd:e9b706067
>> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
>> >>>> mapping:          (null) index:0x0
>> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
>> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
>> >>>> #1
>> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
>> >>>> ffffffff814d8748 00007fd1388b7000
>> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
>> >>>> 0000000000000000 0000000000000000
>> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
>> >>>> 00007fd1388b8000 ffff880e9eaf3e30
>> >>>>         [  368.756815] Call Trace:
>> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >>>>         [  368.756837]  [<ffffffff8116eae3>]
>> >>>> unmap_single_vma+0x583/0x890
>> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>> >>>>         [  368.756869]  [<ffffffff814e70ed>]
>> >>>> system_call_fastpath+0x1a/0x1f
>> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
>> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >>>> idx:0 val:-1
>> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >>>> idx:1 val:1
>> >>>>
>> >>>>>
>> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
>> >>>>> interest in setting one up).. And I have a suspicion that it might not
>> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>> >>>>>
>> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>> >>>>> confused.
>> >>>>>
>> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>> >>>>> _PAGE_PRESENT.
>> >>>>>
>> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>> >>>>> Putting Steven's test-case here as an attachment for Andrea, maybe
>> >>>>> that makes him go "Ahh, yes, silly case".
>> >>>>>
>> >>>>> Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
>> >>>>>
>> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>> >>>>> attached test-case (but apparently only under Xen PV). There it
>> >>>>> apparently causes a "BUG: Bad page map .." error.
>> >>>
>> >>>
>> >>> I *think* it is due to the fact that pmd_numa and pte_numa are getting
>> >>> the _raw_ values of PMDs and PTEs. That is - they do not use the pvops
>> >>> interface and instead read the values directly from the page-table.
>> >>> Since the page-table is also manipulated by the hypervisor, there are
>> >>> certain flags it sets to do its business. It might be that it uses
>> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it were using
>> >>> pte_flags, that would invoke the pvops interface.
>> >>>
>> >>> Elena, Dariof and George, you guys have been looking at this a bit deeper
>> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?

It does use _PAGE_GLOBAL for guest user pages.

>> >>>
>> >>> This not-compiled-totally-bad-patch might shed some light on what I was
>> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> >>> for that).
>> >>
>> >>
>> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
>> >> still able to repro the issue:
>>
>> Steven, do you use numa=fake on boot cmd line for pv guest?
>>
>> I had similar issue on pv guest. Let me check if the fix that resolved
>> this for me will help with 3.13.
>
> Nope:
>
> # cat /proc/cmdline
> root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7

>
>>
>> >
>> >
>> > Maybe this one is also related to this BUG here (cc'ed people investigating
>> > this one) ...
>> >
>> >   https://lkml.org/lkml/2014/1/10/427
>> >
>> > ... not sure, though.
>> >
>> >
>> >>         [  346.374929] BUG: Bad page map in process mp
>> >> pte:80000004ae928065 pmd:e993f9067
>> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
>> >> (null) index:0x0
>> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
>> >> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
>> >> #1
>> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
>> >> 00007f06a9bbb000
>> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
>> >> 0000000000000000
>> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
>> >> ffff880e991a3e30
>> >>         [  346.374979] Call Trace:
>> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>> >>         [  346.375034]  [<ffffffff814e712d>]
>> >> system_call_fastpath+0x1a/0x1f
>> >>         [  346.375037] Disabling lock debugging due to kernel taint
>> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> idx:0 val:-1
>> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> idx:1 val:1
>> >>
>> >> This dump doesn't look dramatically different, either.
>> >>
>> >>>
>> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >>> turned on?
>> >>
>> >>
>> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> mean not enabled at runtime?
>> >>
>> >> [1]
>> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>
>>
>>
>> --
>> Elena

I was able to reproduce this consistently, also with the latest mm
patches from yesterday.
Can you please try this:

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index ce563be..76dcf96 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 16:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 16:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6N44-00037U-B9; Thu, 23 Jan 2014 16:23:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6N42-00037P-Ad
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 16:23:34 +0000
Received: from [85.158.143.35:51536] by server-2.bemta-4.messagelabs.com id
	55/82-11386-50241E25; Thu, 23 Jan 2014 16:23:33 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390494211!369790!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11564 invoked from network); 23 Jan 2014 16:23:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 16:23:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="93763593"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 16:23:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 11:23:30 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6N3x-0004Ds-NC;
	Thu, 23 Jan 2014 16:23:29 +0000
Date: Thu, 23 Jan 2014 16:23:29 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140123162329.GG24675@zion.uk.xensource.com>
References: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>
	<52CAEF54.7030901@suse.de> <52CB17D1.2060400@redhat.com>
	<20140107123417.GG10654@zion.uk.xensource.com>
	<52CC01F6.6050502@redhat.com>
	<20140121182745.GA23328@zion.uk.xensource.com>
	<52DF9B76.8060807@redhat.com>
	<20140122160940.GC24675@zion.uk.xensource.com>
	<52E0DCDD.60405@redhat.com>
	<20140123135440.GF24675@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140123135440.GF24675@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Andreas =?iso-8859-1?Q?F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 01:54:40PM +0000, Wei Liu wrote:
> On Thu, Jan 23, 2014 at 10:11:57AM +0100, Paolo Bonzini wrote:
> > On 22/01/2014 17:09, Wei Liu wrote:
> > >On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
> > >>On 21/01/2014 19:27, Wei Liu wrote:
> > >>>>>
> > >>>>>Googling "disable tcg" would have provided an answer, but the patches
> > >>>>>were old enough to be basically useless.  I'll refresh the current
> > >>>>>version in the next few days.  Currently I am (or try to be) on
> > >>>>>vacation, so I cannot really say when, but I'll do my best. :)
> > >>>>>
> > >>>Hi Paolo, any update?
> > >>
> > >>Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
> > >>branch on my github repository.
> > >>
> > >
> > >Unfortunately your branch didn't work when I enabled TCG support. If I
> > >use "--disable-tcg" with configure then it works fine.
> > 
> > Branch fixed.
> > 
> 
> Yes, it's fixed for the case I reported. Thanks.
> 
> But it is now broken with following rune:
> ./configure --enable-kvm --disable-tcg --target-list=i386-softmmu
> --disable-xen --enable-debug
> 
>   LINK  i386-softmmu/qemu-system-i386
>   cpus.o: In function `cpu_signal':
>   /local/scratch/qemu/cpus.c:569: undefined reference to `exit_request'
>   cpus.o: In function `tcg_cpu_exec':
>   /local/scratch/qemu/cpus.c:1257: undefined reference to `cpu_x86_exec'
>   cpus.o: In function `tcg_exec_all':
>   /local/scratch/qemu/cpus.c:1282: undefined reference to `exit_request'
>   /local/scratch/qemu/cpus.c:1299: undefined reference to `exit_request'
>   exec.o: In function `tlb_reset_dirty_range_all':
>   /local/scratch/qemu/exec.c:736: undefined reference to
>   `cpu_tlb_reset_dirty_all'
>   collect2: error: ld returned 1 exit status
>   make[1]: *** [qemu-system-i386] Error 1
>   make: *** [subdir-i386-softmmu] Error 2
> 
> --enable-debug is the one to blame. Without that it links successfully.
> 
> Wei.
> 

Finally I figured out what was wrong. Your patch series was relying on
the compiler to aggressively optimize away unused code.

So when --enable-debug is set, the compiler won't optimize away the dead
code, hence those undefined references. With any optimization option (-O)
your series compiles successfully.

Feel free to integrate my patch below, or fix those errors in the way
you see appropriate.

Wei.

---8<---
diff --git a/cpus.c b/cpus.c
index 508b26c..2cc841b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -563,6 +563,9 @@ static void cpu_handle_guest_debug(CPUState *cpu)
 
 static void cpu_signal(int sig)
 {
+
+    assert(tcg_enabled());
+
     if (current_cpu) {
         cpu_exit(current_cpu);
     }
@@ -1226,6 +1229,8 @@ static int tcg_cpu_exec(CPUArchState *env)
     int64_t ti;
 #endif
 
+    assert(tcg_enabled());
+
 #ifdef CONFIG_PROFILER
     ti = profile_getclock();
 #endif
@@ -1273,6 +1278,8 @@ static void tcg_exec_all(void)
 {
     int r;
 
+    assert(tcg_enabled());
+
     /* Account partial waits to QEMU_CLOCK_VIRTUAL.  */
     qemu_clock_warp(QEMU_CLOCK_VIRTUAL);
 
diff --git a/tcg-stub.c b/tcg-stub.c
index a06070f..b14a21d 100644
--- a/tcg-stub.c
+++ b/tcg-stub.c
@@ -29,3 +29,7 @@ void tlb_set_page(CPUArchState *env, target_ulong vaddr,
 void tb_invalidate_phys_addr(hwaddr addr)
 {
 }
+
+void cpu_tlb_reset_dirty_all(ram_addr_t start1, ram_addr_t length)
+{
+}


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 16:55:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 16:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6NZ1-0005En-Kk; Thu, 23 Jan 2014 16:55:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0100a54bff=mlabriol@gdeb.com>)
	id 1W6NZ0-0005Ef-35; Thu, 23 Jan 2014 16:55:34 +0000
Received: from [85.158.137.68:52300] by server-14.bemta-3.messagelabs.com id
	65/0D-06105-58941E25; Thu, 23 Jan 2014 16:55:33 +0000
X-Env-Sender: prvs=0100a54bff=mlabriol@gdeb.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390496125!10969009!1
X-Originating-IP: [153.11.250.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MSA9PiAyMDMzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11662 invoked from network); 23 Jan 2014 16:55:31 -0000
Received: from mx2.gd-ms.com (HELO mx2.gd-ms.com) (153.11.250.41)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 16:55:31 -0000
Received: from elbmasnwh002.us-ct-eb01.gdeb.com ([153.11.13.41]
	helo=ebsmtp.gdeb.com) by mx2.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1W6NYI-0002Vw-Lr; Thu, 23 Jan 2014 11:54:50 -0500
In-Reply-To: <20140121215905.GC6363@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
X-KeepSent: 72DA261E:EBD23FAC-85257C69:005C71A2;
 type=4; name=$KeepSent
Message-ID: <OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Thu, 23 Jan 2014 11:54:37 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx2.gd-ms.com  1W6NYI-0002Vw-Lr
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael D Labriola <mlabriol@gdeb.com>, 
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> Date: 01/21/2014 04:59 PM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> Sent by: xen-devel-bounces@lists.xen.org
> 
> On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 
> > 10:38:27 AM:
> > 
> > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > Date: 01/20/2014 10:38 AM
> > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > 
> > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 10:14:36 AM:
> > > > 
> > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > > Date: 01/20/2014 10:14 AM
> > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > 
> > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent
> > > > > > crashes with multiple older R600 series (HD 6470 and HD 6570) and
> > > > > > unusably slow graphics with a newer HD7000 (can see each line
> > > > > > refresh individually on radeonfb tty).  All 3 systems seem to
> > > > > > work fine bare metal.
> > > > > 
> > > > > I hadn't been using DRM, just Xserver. Is that what you mean?
> > > > 
> > > > The R600 problems happen when in X, using OpenGL, on my dom0.  The
> > > > RadeonSI sluggishness is when using the KMS framebuffer device for a
> > > > plain text console login.
> > > 
> > > So sluggish is probably due to the PAT not being enabled. This patch
> > > should be applied:
> > > 
> > > lkml.org/lkml/2011/11/8/406
> > > 
> > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > > 
> > > and these two reverted:
> > > 
> > >  "xen/pat: Disable PAT support for now."
> > >  "xen/pat: Disable PAT using pat_enabled value."
> > > 
> > > Which is to say do:
> > > 
> > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > 
> > Thanks!  I cherry-picked that patch out of your testing tree, reverted
> > those 2 commits, recompiled and installed.  Definitely fixed the HD 7000
> > sluggishness and appears to have fixed the R600 crashes (although it's
> > only been running a few hours).
> > 
> > How come that patch didn't get into mainline?  It looks pretty innocuous
> > to me...
> 
> <Sigh> the x86 maintainers wanted a different route. And I hadn't had
> the chance nor time to implement it.

I see.  Well, I've got a handful of boxes in my lab that need that patch 
to be usable.  If you do come up with a more mainline-able solution, I'd 
gladly test it for you.  ;-)

Thanks again, by the way.

---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx2.gd-ms.com  1W6NYI-0002Vw-Lr
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael D Labriola <mlabriol@gdeb.com>, 
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> Date: 01/21/2014 04:59 PM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> Sent by: xen-devel-bounces@lists.xen.org
> 
> On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 
> > 10:38:27 AM:
> > 
> > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > Date: 01/20/2014 10:38 AM
> > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > 
> > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 
> > > > 10:14:36 AM:
> > > > 
> > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > > Date: 01/20/2014 10:14 AM
> > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > 
> > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola wrote:
> > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having consistent 
> > > > > > crashes with multiple older R600 series (HD 6470 and HD 6570) and 
> > > > > > unusably slow graphics with a newer HD7000 (can see each line 
> > > > > > refresh individually on radeonfb tty).  All 3 systems seem to 
> > > > > > work fine bare metal.
> > > > > 
> > > > > I hadn't been using DRM, just Xserver. Is that what you mean?
> > > > 
> > > > The R600 problems happen when in X, using OpenGL, on my dom0.  The 
> > > > RadeonSI sluggishness is when using the KMS framebuffer device for a 
> > > > plain text console login.
> > > 
> > > So sluggish is probably due to the PAT not being enabled. This patch
> > > should be applied:
> > > 
> > > lkml.org/lkml/2011/11/8/406
> > > 
> > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > > 
> > > and these two reverted:
> > > 
> > >  "xen/pat: Disable PAT support for now."
> > >  "xen/pat: Disable PAT using pat_enabled value."
> > > 
> > > Which is to say do:
> > > 
> > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > 
> > Thanks!  I cherry-picked that patch out of your testing tree, reverted 
> > those 2 commits, recompiled and installed.  Definitely fixed the HD 7000 
> > sluggishness and appears to have fixed the R600 crashes (although it's 
> > only been running a few hours).
> > 
> > How come that patch didn't get into mainline?  It looks pretty innocuous 
> > to me...
> 
> <Sigh> the x86 maintainers wanted a different route. And I hadn't had
> the chance nor time to implement it.

I see.  Well, I've got a handful of boxes in my lab that need that patch 
to be usable.  If you do come up with a more mainline-able solution, I'd 
gladly test it for you.  ;-)

Thanks again, by the way.

---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)



 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 17:17:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 17:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6NuE-0005tI-Mv; Thu, 23 Jan 2014 17:17:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6NuC-0005tD-Ux
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 17:17:29 +0000
Received: from [193.109.254.147:40814] by server-12.bemta-14.messagelabs.com
	id 31/03-13681-8AE41E25; Thu, 23 Jan 2014 17:17:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390497446!12715832!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30108 invoked from network); 23 Jan 2014 17:17:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 17:17:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,706,1384300800"; d="scan'208";a="95833390"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 17:17:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 12:17:25 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6Nu8-000509-Qf;
	Thu, 23 Jan 2014 17:17:24 +0000
Message-ID: <52E14EA4.6010009@citrix.com>
Date: Thu, 23 Jan 2014 17:17:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DE4BD8.7060209@citrix.com> <52E102F4.3060503@citrix.com>
	<52E11EC80200007800116238@nat28.tlf.novell.com>
In-Reply-To: <52E11EC80200007800116238@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 12:53, Jan Beulich wrote:
>>>> On 23.01.14 at 12:54, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 21/01/14 10:28, Andrew Cooper wrote:
>>> The major issue identified is with Windows 8/8.1 and Server 2012/2012r2,
>>> which have problems on live migrate.  Some source of time is
>>> unexpectedly jumping forwards by two days, from the correct time to 2
>>> days in the future.  The observed result is that it loses its DHCP
>>> lease, drops its IP address and networking ceases to work (it appears
>>> that Windows will not attempt to renew the lease itself).
>>>
>> This is caused by commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b
>> "x86/viridian: Time Reference Count MSR"
>>
>> After double checking with the specification, it does appear to be
>> implemented as required (subject to a potential issue with multiple vcpu
>> guests).
>>
>> I am currently experimenting to see whether hvm_get_guest_time() is
>> returning unexpected values, or whether it is returning expected values
>> and Windows is interpreting them differently.
>>
>> At this point in the 4.4 release cycle, reverting the patch should be
>> seriously considered, although I would like to see whether it is
>> possible to work out why it is wrong and whether there is an obvious fix
>> first.
> I suppose you and/or Paul will let us know either way.
>
> Jan
>

The value of time read from hvm_get_guest_time() resets with a new
domid, making it an inappropriate source of time for the described
function of the MSR.

I suspect Windows 8 only notices at first on migration as I believe that
it is the first case where the generation ID is supposed to change and
signal a reset of state.  The detection of the failure is actually
further complicated as there appears to be a race condition between the
guest tools resetting the clock back to the correct value, and the DHCP
lease being flushed.  XenRT only notices the failure if the DHCP lease
is actually lost (thus XenRT can't communicate with its xmlrpc daemon
inside the VM), and doesn't directly notice the forward/backward stepping
in time.

Anyway - please revert the patch - it will be a non-trivial change to
expose an appropriate source of time to be consumed by this MSR.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 17:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 17:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6OJf-0007pz-Oa; Thu, 23 Jan 2014 17:43:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6OJe-0007pu-DL
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 17:43:46 +0000
Received: from [85.158.143.35:40938] by server-1.bemta-4.messagelabs.com id
	A3/8E-02132-1D451E25; Thu, 23 Jan 2014 17:43:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390499020!386808!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14639 invoked from network); 23 Jan 2014 17:43:44 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 17:43:44 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0NHgaBd021602
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 23 Jan 2014 17:42:37 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0NHgZ9g008627
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 23 Jan 2014 17:42:36 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0NHgZjw004310; Thu, 23 Jan 2014 17:42:35 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Jan 2014 09:42:34 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1390475971.24595.55.camel@kazak.uk.xensource.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140122210154.GC9585@phenom.dumpdata.com>
	<1390475971.24595.55.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Jan 2014 12:42:32 -0500
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <8173790b-8731-4fee-9b59-e5753857a654@email.android.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
	sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell <Ian.Campbell@citrix.com> wrote:
>On Wed, 2014-01-22 at 16:01 -0500, Konrad Rzeszutek Wilk wrote:
>> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>> > index 1eac073..b626c79 100644
>> > --- a/drivers/xen/swiotlb-xen.c
>> > +++ b/drivers/xen/swiotlb-xen.c
>> > @@ -77,12 +77,22 @@ static u64 start_dma_addr;
>> >  
>> >  static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
>> >  {
>> > -	return phys_to_machine(XPADDR(paddr)).maddr;
>> 
>> Why not change 'phys_addr_t' to be unsigned long? Wouldn't
>> that solve the problem as well?
>
>It would, but it is fundamentally the wrong thing to do.
>
>If the kernel is configured without LPAE (ARM's PAE extensions) then it
>is configured for a 32-bit physical address space, throughout its page
>table handling and elsewhere. Pretending that physical addresses are
>64-bits would have all sorts of knock-on effects, both in terms of type
>mismatches and the space used by data structures doubling, etc.
>
>Enabling LPAE would also solve this issue but we don't want to force
>that constraint onto Xen guests or dom0, not least because of the
>knock-on effect on distro installers, etc.
>
>There is nothing fundamentally wrong with 32-bit phys addr with 64-bit
>dma addr and it is the correct solution to this configuration.

Right. 

And I presume dma_addr_t is 64bit regardless of PAE and non-PAE?
 (Sorry I am on my phone and it's hard to do SSH and cscope).

Based on your explanation I believe the patch is fine though I have to work out carefully the casting it does in my mind.

>
>> 
>> Or make 'xmaddr_t' and 'xpaddr_t' use 'unsigned long' instead
>> of phys_addr_t?
>
>phys_addr_t is unsigned long already, so that won't help. And you don't
>want to expand those for the same reasons you don't want to expand
>phys_addr_t.
>
>Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6ORg-0008R9-UC; Thu, 23 Jan 2014 17:52:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6ORe-0008R4-6W
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 17:52:03 +0000
Received: from [85.158.143.35:25330] by server-3.bemta-4.messagelabs.com id
	15/0B-32360-1C651E25; Thu, 23 Jan 2014 17:52:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390499519!393154!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
From xen-devel-bounces@lists.xen.org Thu Jan 23 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6ORg-0008R9-UC; Thu, 23 Jan 2014 17:52:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6ORe-0008R4-6W
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 17:52:03 +0000
Received: from [85.158.143.35:25330] by server-3.bemta-4.messagelabs.com id
	15/0B-32360-1C651E25; Thu, 23 Jan 2014 17:52:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390499519!393154!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10463 invoked from network); 23 Jan 2014 17:52:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 17:52:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,707,1384300800"; d="scan'208";a="95848739"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 17:51:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 12:51:33 -0500
Message-ID: <1390499492.2124.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Jan 2014 17:51:32 +0000
In-Reply-To: <8173790b-8731-4fee-9b59-e5753857a654@email.android.com>
References: <1389979493-22670-1-git-send-email-ian.campbell@citrix.com>
	<20140122210154.GC9585@phenom.dumpdata.com>
	<1390475971.24595.55.camel@kazak.uk.xensource.com>
	<8173790b-8731-4fee-9b59-e5753857a654@email.android.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: swiotlb: handle sizeof(dma_addr_t) !=
 sizeof(phys_addr_t)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 12:42 -0500, Konrad Rzeszutek Wilk wrote:
> And I presume dma_addr_t is 64-bit regardless of PAE vs. non-PAE?

Correct. Or rather, correct after this patch, which adds "select
ARCH_DMA_ADDR_T_64BIT" to the Kconfig.

> (Sorry, I am on my phone and it's hard to do SSH and cscope.)

You're doing very well to quote everything properly, so no worries!

> Based on your explanation I believe the patch is fine, though I have to
> carefully work out in my mind the casting it does.

OK, thanks.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 18:14:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 18:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Omw-0001EK-Bx; Thu, 23 Jan 2014 18:14:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6Omu-0001EF-47
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 18:14:00 +0000
Received: from [85.158.139.211:6647] by server-1.bemta-5.messagelabs.com id
	30/D2-21065-7EB51E25; Thu, 23 Jan 2014 18:13:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390500836!11591819!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22255 invoked from network); 23 Jan 2014 18:13:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 18:13:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,707,1384300800"; d="scan'208";a="95857822"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 18:13:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 13:13:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6Omp-0000AU-3V;
	Thu, 23 Jan 2014 18:13:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6Omo-0005aI-T9;
	Thu, 23 Jan 2014 18:13:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24465-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Jan 2014 18:13:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24465: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24465 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24465/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24457

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
baseline version:
 xen                  407a3c00ffe9b283b2bd7e3ae6aa86a54d51ed92

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
+ branch=xen-unstable
+ revision=231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   407a3c0..231d7f4  231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 19:00:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 19:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6PVw-0003fJ-2y; Thu, 23 Jan 2014 19:00:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W6PVu-0003fE-7x
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 19:00:31 +0000
Received: from [193.109.254.147:45432] by server-5.bemta-14.messagelabs.com id
	1F/AE-03510-DC661E25; Thu, 23 Jan 2014 19:00:29 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390503624!12825332!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.7 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	INTERRUPTUS,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27607 invoked from network); 23 Jan 2014 19:00:26 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 19:00:26 -0000
Received: from mail-oa0-f43.google.com ([209.85.219.43]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuFmyEGsglclEKuz4R1jvdKHz8mCCEtv@postini.com;
	Thu, 23 Jan 2014 11:00:26 PST
Received: by mail-oa0-f43.google.com with SMTP id h16so2612363oag.2
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 11:00:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=zjmXoFn7YC5LrFPsknH1GEIzC+mQyZqq+pv5GcSgMxw=;
	b=AwiOJoErRRrhWAcyT3VEH6RRKipQnFxZvkB7ycO91CRSgey+eqFMa8QGVI0YeEwJxL
	sae5ouzGCDmMfvmjRR42KAZ9hzk08lRhbyEBIw9MT2CcLoCNcUx8LFjxKSGy5CEiTtBu
	UcnTT6OiKyVdxjY5kAtru5runw5iqc3R3BkX0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to:cc
	:content-type;
	bh=zjmXoFn7YC5LrFPsknH1GEIzC+mQyZqq+pv5GcSgMxw=;
	b=WFy/xm/cT1DqwQsE1s4fB4BLDcKVant2+V5ZhnCT7jIoDJBrlXNpBa2GWueuQJZoR+
	ptdQTj+LwTnLUVPGUYKryHnscA13cDGEKp+kjJVJPp0ap45tx5BNp3Q7f3rV6f8RQ+MD
	7QSVwRfdh+Xw9r2jidNobs+cN/MFkU/u57Wk31LWpANnmmkVSZhChqoEuY9fshG9fyPq
	4y7dxiFGnhco0ETHmrk9owDSaL65wcHmyNVEYzTF5DKspibdLcC50u1P47iE3tzjdWQ5
	2MduDPt/PY/lPJPlDpTgNeAA6DXHE/Y7NM5TsrQlGPYlf2CqArYLi7sAWxmTxXo5Lcg5
	y6qg==
X-Gm-Message-State: ALoCoQmJKcbxkl6FLlayDGLfufNvgHLwMAy74X7+rfRGexuXW8H5WW3eow9zGQlvaMFyaGaeQfFGhFxglPBjxzvzbnz6fDfLqvFskhQQeaaWKicL3PgHLbeX9YpWTVl3VbUOPsbtPVCD6w0TJzk+Kvue2MGY+Fjjyg9ePeYRw1QmtmKbAoJ1I20=
X-Received: by 10.182.113.195 with SMTP id ja3mr8111500obb.46.1390503623990;
	Thu, 23 Jan 2014 11:00:23 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.113.195 with SMTP id ja3mr8111338obb.46.1390503622351;
	Thu, 23 Jan 2014 11:00:22 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Thu, 23 Jan 2014 11:00:22 -0800 (PST)
Date: Thu, 23 Jan 2014 21:00:22 +0200
Message-ID: <CAE4oM6ztuTgrDTvo65e889LPK9iet0igR4aRBhgPpiTKROstdQ@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: xen-devel@lists.xen.org
Cc: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
Subject: [Xen-devel] epoll_wait() exceeds timeout
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0543485283493194280=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0543485283493194280==
Content-Type: multipart/alternative; boundary=089e013d0db0787c6504f0a7da37

--089e013d0db0787c6504f0a7da37
Content-Type: text/plain; charset=ISO-8859-1

Hi,

our setup looks like this:

Hardware: ARMv7, TI Jacinto6
Xen: 4.4-rc1
dom0: Linux 3.8.13
domU: Android 4.3 with Linux kernel 3.8.13

Network: ethernet over USB. USB is mapped 1:1 to domU. Use case: reading
from the network socket using epoll_wait() with the timeout set to less
than 4 ms (it varies from call to call, but is always less than 4 ms, as
required by the algorithm). What we observe is that on a bare-metal setup
the timeouts are always met:

*1) Android4.3 - without Xen*
epoll_wait 3 ms: timeout = 3 ms
epoll_wait 2 ms: timeout = 2 ms
epoll_wait 3 ms: timeout = 3 ms
epoll_wait 2 ms: timeout = 2 ms
epoll_wait 3 ms: timeout = 3 ms

On the same system running as a domU under Xen, timeouts are exceeded by 4 to 8 ms:

*2) Android4.3 - Xen*
epoll_wait 9 ms: timeout = 1 ms
epoll_wait 8 ms: timeout = 1 ms
epoll_wait 7 ms: timeout = 3 ms
epoll_wait 8 ms: timeout = 1 ms
epoll_wait 7 ms: timeout = 3 ms

epoll_wait() returns with the correct return value (timeout), but it blocks
the calling thread for much longer than it should.
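
For anyone wanting to reproduce the measurement, the idea can be sketched
roughly like this (a minimal stand-alone sketch, not our actual test code:
it polls an fd that never becomes ready, so every call runs to its full
timeout, and then reports the overshoot):

```python
import os
import select
import time

def measure_overshoot_ms(timeout_ms, rounds=5):
    """Time epoll against an fd that never becomes readable, so every
    call runs to its full timeout; return the overshoot per call in ms."""
    r, w = os.pipe()                      # read end stays empty forever
    ep = select.epoll()
    ep.register(r, select.EPOLLIN)
    overshoots = []
    for _ in range(rounds):
        t0 = time.monotonic()
        ep.poll(timeout_ms / 1000.0)      # epoll.poll() takes seconds
        elapsed_ms = (time.monotonic() - t0) * 1000.0
        overshoots.append(elapsed_ms - timeout_ms)
    ep.close()
    os.close(r)
    os.close(w)
    return overshoots

if __name__ == "__main__":
    for o in measure_overshoot_ms(3):
        print("epoll_wait 3 ms: overshoot = %.2f ms" % o)
```

On bare metal the overshoot stays small; under the domU it is the 4 to 8 ms
reported above.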

As we had already seen issues with the scheduler, we reran the tests with
vCPUs pinned to pCPUs, so no Xen scheduling decision should interfere with
the timings (and cyclictest does well in this configuration): the timeouts
are still exceeded.
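
Pinning of this kind is typically done with xl; roughly like this (a sketch
only: the domain name "android" and the CPU numbers are placeholders and
depend on the board):

```shell
# Pin each guest vCPU to its own physical CPU so the Xen scheduler
# cannot migrate it (domain name "android" is a placeholder).
xl vcpu-pin android 0 0
xl vcpu-pin android 1 1

# Confirm the affinity actually took effect.
xl vcpu-list android
```

These commands require a running Xen host, so they are shown for
illustration only.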

Has anyone faced the same problem, or does anyone have a suggestion as to
what might be causing it?

Suikov Pavlo
GlobalLogic
M +38.066.667.1296  S psujkov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--089e013d0db0787c6504f0a7da37--


--===============0543485283493194280==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0543485283493194280==--


From xen-devel-bounces@lists.xen.org Thu Jan 23 19:10:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 19:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Pf3-0003wi-37; Thu, 23 Jan 2014 19:09:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1W6Pf2-0003wd-0u
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 19:09:56 +0000
Received: from [85.158.137.68:16267] by server-16.bemta-3.messagelabs.com id
	1B/C7-26128-30961E25; Thu, 23 Jan 2014 19:09:55 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390504190!7310251!1
X-Originating-IP: [64.18.0.147]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_10_20,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5517 invoked from network); 23 Jan 2014 19:09:54 -0000
Received: from exprod5og116.obsmtp.com (HELO exprod5og116.obsmtp.com)
	(64.18.0.147)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 19:09:54 -0000
Received: from mail-oa0-f47.google.com ([209.85.219.47]) (using TLSv1) by
	exprod5ob116.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuFo/kLvbqUU+suMlOtbaM7KUSmS9du6@postini.com;
	Thu, 23 Jan 2014 11:09:53 PST
Received: by mail-oa0-f47.google.com with SMTP id m1so2615704oag.6
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 11:09:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=1Xh1QK7SCOR0ViIJdpygWbVuiGU8fJl87QoGy26WP70=;
	b=DX/+zivFVBNN9b5CExA7F+F48ADkT9z/XejhKueYAaXU2FYfRtK1c2JcnDznC7Ckwn
	paXHosa9YUO4jCmLRBz0GLf6l6xyR6y4PShnjlm2C5P2UqZTpXQ6pJ/cxOIyJiC4VbT5
	orLznybnQZZx4xAhj6dot0jIv/FepF2JWuQH4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=1Xh1QK7SCOR0ViIJdpygWbVuiGU8fJl87QoGy26WP70=;
	b=cNyssTfGGtHWr0Ef+5EFtj2pDkUivqJi4cGYDotqfIHBIOFmzd3G0RdiXAFFe0GByc
	6a1DkFe/JqtQiNuXNZW3bs6rZOgudvzk20a3SX0Fo/GGX7YC1AUN2MiKnSSTPcIaIwej
	Y9GnO62naDvr3qQjATvk5EPvI7E6XJOtJUfmEIlbWJeJQ5XE3CHZY2U5audvDgywQuGm
	dxy5msjb7fi3ZylvmpNCm6Yi/MndXcm4y5l42y4y2YD9DT605SJAPrzLR4Ase4iuKPMK
	LlKvkWGaKz99xdDieKhE//fCreYI3VlmosYCxSk3V1doq+99qn/0SoSE7Av1Ev14Ygol
	KH4w==
X-Gm-Message-State: ALoCoQl8YWE1u8oegKLcfHYj4h9SNlNOQrAFARTFpXICdQsZfm2b9d0eMhyIwTDi7OeIsK75VGlzryh6O027tq8rKdqw1OFisfaDSBgE0MO+9D72+/EzWtLh7aK0lTwnzZw86dhzGDnaU041Iyw2eRwzLhn8mthyA770JIu7Kb1BOlzDaxyd9fg=
X-Received: by 10.60.62.243 with SMTP id b19mr3880oes.42.1390504189270;
	Thu, 23 Jan 2014 11:09:49 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.62.243 with SMTP id b19mr3870oes.42.1390504189172; Thu,
	23 Jan 2014 11:09:49 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Thu, 23 Jan 2014 11:09:49 -0800 (PST)
In-Reply-To: <1390327005.23576.219.camel@Solace>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
	<1390304761.23576.161.camel@Solace>
	<CAE4oM6y12tNYFihdKbWcBmHcy__2rL570viV7T=Esj4vxHALqg@mail.gmail.com>
	<1390327005.23576.219.camel@Solace>
Date: Thu, 23 Jan 2014 21:09:49 +0200
Message-ID: <CAE4oM6zDKTX_xi6Kz9HhweD4n0bwEBqoqmTDTFsiniQEzKUvpQ@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0788540442848286093=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0788540442848286093==
Content-Type: multipart/alternative; boundary=001a11c24cee4130f004f0a7fca0

--001a11c24cee4130f004f0a7fca0
Content-Type: text/plain; charset=ISO-8859-1

Hi Dario,

> Can I ask how the numbers (for DomU, of course) looks like now?

They are all 31 ms, so the overhead is minimal. However, it looks like we
still have some gremlins in there: from boot to boot this figure can change
to 39 ms. So with the Xen scheduler out of the picture the sleep latency
stabilizes, but not always at the correct value.

> Another thing I'll try, if you haven't done that already, is as follows:
> - get rid of the DomU
> - pin the 2 Dom0's vCPUs each one to one pCPU
> - repeat the experiment

Yeah, we have tried this as well, and it gives almost the same result as in
the previous case: there is no extra sleep latency in dom0, so we get 30 to
31 ms on a 30 ms sleep, with no variation.
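
The measurement here is essentially the following (a rough sketch of the
idea, not our actual test program):

```python
import time

def sleep_latency_ms(sleep_ms, rounds=20):
    """Measure how long a sleep actually blocks versus what was requested,
    in the spirit of the usleep() test discussed in this thread; return
    the worst observed wall-clock duration in ms."""
    worst = 0.0
    for _ in range(rounds):
        t0 = time.monotonic()
        time.sleep(sleep_ms / 1000.0)
        actual_ms = (time.monotonic() - t0) * 1000.0
        worst = max(worst, actual_ms)
    return worst

if __name__ == "__main__":
    print("worst case for a 30 ms sleep: %.2f ms" % sleep_latency_ms(30))
```

In dom0 this stays at 30 to 31 ms for us; in the domU the worst case drifts
up to 39 ms from boot to boot, as described above.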

> If that works, iterate, without the second step, i.e., basically, run
> the experiment with no pinning, but only with Dom0.

Still good results. The problems show up only when a domU is actually
running.

Regards,
  Pavlo

--001a11c24cee4130f004f0a7fca0--


--===============0788540442848286093==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0788540442848286093==--


From xen-devel-bounces@lists.xen.org Thu Jan 23 19:29:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 19:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6PxQ-0004az-C1; Thu, 23 Jan 2014 19:28:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W6PxP-0004au-D5
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 19:28:55 +0000
Received: from [85.158.143.35:28424] by server-3.bemta-4.messagelabs.com id
	67/26-32360-67D61E25; Thu, 23 Jan 2014 19:28:54 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390505332!401658!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10569 invoked from network); 23 Jan 2014 19:28:53 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 19:28:53 -0000
Received: by mail-pd0-f178.google.com with SMTP id y13so2171639pdi.9
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 11:28:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:mime-version:content-type
	:content-transfer-encoding;
	bh=oOjaVHnhKosZ8pv2zx4O7ZNbwhF7b5gPdDaeVAJQGa0=;
	b=iHWxP6tsfdeZXqna9xtKU+CNHl95bgYvctrrIf0CruqDtwQlXMCOQbht75AMLjPW3Z
	Hm7XhsjqX+lwkSECS9IJFm6m015KCKtBAJrO3jQX13rc09wo/0oTGuqtL4f6rJ0v/Hhz
	cL49K4ftOOXjWqy4QIJEgP2D7mp6oWZoQnwe1NVJ9NhMcPPzQ2HJa/XzbUWF5hQ2CDRD
	er/jp+94EgXLOR5T37Y5Lq7+fcWUKpxbijHgVIqv7FJl9jn2H9h3ifBpWW/th3ZY6uG0
	GbV1P7W3tkdqxQJtfvYJQcwd76NBr3ntkUcq9tzZ2nNYbJISpMDF4FFT/FrutrUL014j
	y0Tw==
X-Received: by 10.66.248.227 with SMTP id yp3mr9727861pac.116.1390505331843;
	Thu, 23 Jan 2014 11:28:51 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-186.amazon.com. [54.240.196.186])
	by mx.google.com with ESMTPSA id
	yd4sm40132012pbc.13.2014.01.23.11.28.48 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 11:28:50 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Thu, 23 Jan 2014 11:28:47 -0800
From: Matt Wilson <msw@linux.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 23 Jan 2014 11:28:46 -0800
Message-Id: <1390505326-9368-1-git-send-email-msw@linux.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen-blkback: fix memory leak when persistent
	grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RnJvbTogTWF0dCBSdXNodG9uIDxtcnVzaHRvbkBhbWF6b24uY29tPgoKQ3VycmVudGx5IHNocmlu
a19mcmVlX3BhZ2Vwb29sKCkgaXMgY2FsbGVkIGJlZm9yZSB0aGUgcGFnZXMgdXNlZCBmb3IKcGVy
c2lzdGVudCBncmFudHMgYXJlIHJlbGVhc2VkIHZpYSBmcmVlX3BlcnNpc3RlbnRfZ250cygpLiBU
aGlzCnJlc3VsdHMgaW4gYSBtZW1vcnkgbGVhayB3aGVuIGEgVkJEIHRoYXQgdXNlcyBwZXJzaXN0
ZW50IGdyYW50cyBpcwp0b3JuIGRvd24uCgpDYzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25y
YWQud2lsa0BvcmFjbGUuY29tPgpDYzogIlJvZ2VyIFBhdSBNb25uw6kiIDxyb2dlci5wYXVAY2l0
cml4LmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8SWFuLkNhbXBiZWxsQGNpdHJpeC5jb20+CkNjOiBE
YXZpZCBWcmFiZWwgPGRhdmlkLnZyYWJlbEBjaXRyaXguY29tPgpDYzogbGludXgta2VybmVsQHZn
ZXIua2VybmVsLm9yZwpDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKQ2M6IEFudGhvbnkgTGln
dW9yaSA8YWxpZ3VvcmlAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogTWF0dCBSdXNodG9uIDxt
cnVzaHRvbkBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5OiBNYXR0IFdpbHNvbiA8bXN3QGFtYXpv
bi5jb20+Ci0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgfCAgICA2ICsr
Ky0tLQogMSBmaWxlIGNoYW5nZWQsIDMgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYyBiL2RyaXZlcnMv
YmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCmluZGV4IDY2MjBiNzMuLjMwZWY3YjMgMTAwNjQ0
Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCisrKyBiL2RyaXZlcnMv
YmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCkBAIC02MjUsOSArNjI1LDYgQEAgcHVyZ2VfZ250
X2xpc3Q6CiAJCQlwcmludF9zdGF0cyhibGtpZik7CiAJfQogCi0JLyogU2luY2Ugd2UgYXJlIHNo
dXR0aW5nIGRvd24gcmVtb3ZlIGFsbCBwYWdlcyBmcm9tIHRoZSBidWZmZXIgKi8KLQlzaHJpbmtf
ZnJlZV9wYWdlcG9vbChibGtpZiwgMCAvKiBBbGwgKi8pOwotCiAJLyogRnJlZSBhbGwgcGVyc2lz
dGVudCBncmFudCBwYWdlcyAqLwogCWlmICghUkJfRU1QVFlfUk9PVCgmYmxraWYtPnBlcnNpc3Rl
bnRfZ250cykpCiAJCWZyZWVfcGVyc2lzdGVudF9nbnRzKGJsa2lmLCAmYmxraWYtPnBlcnNpc3Rl
bnRfZ250cywKQEAgLTYzNiw2ICs2MzMsOSBAQCBwdXJnZV9nbnRfbGlzdDoKIAlCVUdfT04oIVJC
X0VNUFRZX1JPT1QoJmJsa2lmLT5wZXJzaXN0ZW50X2dudHMpKTsKIAlibGtpZi0+cGVyc2lzdGVu
dF9nbnRfYyA9IDA7CiAKKwkvKiBTaW5jZSB3ZSBhcmUgc2h1dHRpbmcgZG93biByZW1vdmUgYWxs
IHBhZ2VzIGZyb20gdGhlIGJ1ZmZlciAqLworCXNocmlua19mcmVlX3BhZ2Vwb29sKGJsa2lmLCAw
IC8qIEFsbCAqLyk7CisKIAlpZiAobG9nX3N0YXRzKQogCQlwcmludF9zdGF0cyhibGtpZik7CiAK
LS0gCjEuNy45LjUKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6
Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Thu Jan 23 21:24:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 21:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Rkj-0008Sp-Ty; Thu, 23 Jan 2014 21:23:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6Rkh-0008Sk-3w
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 21:23:55 +0000
Received: from [85.158.143.35:22659] by server-3.bemta-4.messagelabs.com id
	8E/95-32360-A6881E25; Thu, 23 Jan 2014 21:23:54 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390512232!412805!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2647 invoked from network); 23 Jan 2014 21:23:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 21:23:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="93903678"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Jan 2014 21:23:51 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 16:23:51 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Thu, 23 Jan 2014 21:23:44 +0000
Message-ID: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override they either follow the original behaviour or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
  m2p_override false
- new functions gnttab_[un]map_refs_userspace provide the old behaviour

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.
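
[Editorial note: a minimal user-space sketch of the wrapper split described above. The struct, names and values are hypothetical stand-ins, not the kernel implementation; it only illustrates the core-function-plus-thin-wrappers pattern with an m2p_override flag.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct page; invented for this example. */
struct page {
	unsigned long private_mfn;
	int is_private;
};

/* Core helper: always does the cheap per-page bookkeeping, and takes
 * the m2p_override path only when asked to. */
static int __map_refs(struct page **pages, unsigned int count,
		      bool m2p_override)
{
	for (unsigned int i = 0; i < count; i++) {
		/* Common path: record the mapping in the page itself. */
		pages[i]->is_private = 1;
		pages[i]->private_mfn = 1000 + i; /* stand-in mfn value */
		if (m2p_override) {
			/* gntdev-style path: the contended m2p_override
			 * bookkeeping would happen only here. */
		}
	}
	return 0;
}

/* Kernel-only users such as blkback take the cheap path... */
static int map_refs(struct page **pages, unsigned int count)
{
	return __map_refs(pages, count, false);
}

/* ...while gntdev keeps the old behaviour via a _userspace variant. */
static int map_refs_userspace(struct page **pages, unsigned int count)
{
	return __map_refs(pages, count, true);
}
```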

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

v6:
- don't pass pfn to m2p* functions, just get it locally

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/page.h     |    5 +-
 arch/x86/xen/p2m.c                  |   17 +------
 drivers/block/xen-blkback/blkback.c |   15 +++---
 drivers/xen/gntdev.c                |   13 +++--
 drivers/xen/grant-table.c           |   89 ++++++++++++++++++++++++++++++-----
 include/xen/grant_table.h           |    8 +++-
 6 files changed, 101 insertions(+), 46 deletions(-)

diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..ce47243 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -52,7 +52,8 @@ extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +122,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..bd4724b 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -888,13 +888,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -933,19 +926,16 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -959,10 +949,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..e4ddfeb 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
 		if (ret)
 			goto out;
 	}
@@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 		if (!PageHighMem(page)) {
@@ -933,19 +926,16 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -959,10 +949,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..e4ddfeb 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
-	unsigned long mfn;
+	unsigned long mfn, pfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+
+		pages[i]->index = pfn_to_mfn(pfn);
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
 		if (ret)
 			goto out;
 	}
@@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long pfn, mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		pfn = page_to_pfn(pages[i]);
+		mfn = get_phys_to_machine(pfn);
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 21:36:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 21:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6RxD-0000NB-HF; Thu, 23 Jan 2014 21:36:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W6RxC-0000N6-3t
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 21:36:50 +0000
Received: from [85.158.137.68:34289] by server-6.bemta-3.messagelabs.com id
	9B/76-04868-17B81E25; Thu, 23 Jan 2014 21:36:49 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390513006!9819409!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4804 invoked from network); 23 Jan 2014 21:36:48 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 21:36:48 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 23 Jan 2014 14:36:35 -0700
Message-ID: <52E18B62.30809@suse.com>
Date: Thu, 23 Jan 2014 14:36:34 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
In-Reply-To: <21216.62800.746512.422459@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> It appears the timeout_modify callback is invoked on a previously
>> deregistered timeout.  I didn't notice the segfault when running
>> libvirtd under valgrind, but did see
>>     
>
> Hmmm.  This could be a libxl problem.  I'll review the code again and
> maybe think about adding some assertions.
>
> But I've slept on this and I had an idea about libvirt's rescheduling
> timeouts.  Can you point me at the libvirt branch you're using (a git
> tree would be ideal) and I'll take a look at that too ?
>   

Sure:

git@github.com:jfehlig/libvirt.git

I just pushed that from the test machine, which has the 3 commits I'm
testing applied on top of the master branch.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 21:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 21:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6S0A-0000gI-8D; Thu, 23 Jan 2014 21:39:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W6S08-0000g9-2n
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 21:39:52 +0000
Received: from [85.158.137.68:57585] by server-7.bemta-3.messagelabs.com id
	07/18-27599-72C81E25; Thu, 23 Jan 2014 21:39:51 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390513189!11002792!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9999 invoked from network); 23 Jan 2014 21:39:50 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-11.tower-31.messagelabs.com with SMTP;
	23 Jan 2014 21:39:50 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 1D94C5871E4;
	Thu, 23 Jan 2014 13:39:49 -0800 (PST)
Date: Thu, 23 Jan 2014 13:39:48 -0800 (PST)
Message-Id: <20140123.133948.1912867106975845377.davem@davemloft.net>
To: zoltan.kiss@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <52E11563.1090707@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<20140122.175031.873909526743971037.davem@davemloft.net>
	<52E11563.1090707@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Thu, 23 Jan 2014 13:13:07 +0000

> It is already based on two predecessor patches, one which is already
> accepted but not applied yet:
> 
> [PATCH net-next v2] xen-netback: Rework rx_work_todo
> 
> And the other one will hopefully be accepted very soon:
> 
> [PATCH v5] xen/grant-table: Avoid m2p_override during mapping

Changes or small adjustments have been requested for both of these.

Also, you really have to precisely and explicitly mention any
dependencies which exist.

In fact, it's often best to not post a series until the dependent
patches have been accepted.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 21:49:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 21:49:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6S9L-00014t-6u; Thu, 23 Jan 2014 21:49:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6S9K-00014o-3T
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 21:49:22 +0000
Received: from [85.158.143.35:52820] by server-2.bemta-4.messagelabs.com id
	97/99-11386-16E81E25; Thu, 23 Jan 2014 21:49:21 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390513759!416325!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19548 invoked from network); 23 Jan 2014 21:49:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 21:49:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95952738"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 21:49:13 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 16:49:13 -0500
Message-ID: <52E18E57.8040600@citrix.com>
Date: Thu, 23 Jan 2014 21:49:11 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Miller <davem@davemloft.net>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	<20140122.175031.873909526743971037.davem@davemloft.net>	<52E11563.1090707@citrix.com>
	<20140123.133948.1912867106975845377.davem@davemloft.net>
In-Reply-To: <20140123.133948.1912867106975845377.davem@davemloft.net>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 21:39, David Miller wrote:
> From: Zoltan Kiss <zoltan.kiss@citrix.com>
> Date: Thu, 23 Jan 2014 13:13:07 +0000
>
>> It is already based on two predecessor patches, one which is already
>> accepted but not applied yet:
>>
>> [PATCH net-next v2] xen-netback: Rework rx_work_todo
>>
>> And the other one will hopefully be accepted very soon:
>>
>> [PATCH v5] xen/grant-table: Avoid m2p_override during mapping
>
> Changes or small adjustments have been requested for both of these.
AFAIK Wei acked the netback one:

http://www.spinics.net/lists/netdev/msg267800.html

I've just sent the latest version of the grant mapping one.

> Also, you really have to precisely and explicitly mention any
> dependencies which exist.
OK, the grant mapping API dependency is only vaguely mentioned in the patch
history; I'll make it explicit. I haven't mentioned the other one because
it's not related to the grant mapping changes; it's a generic bug.

> In fact, it's often best to not post a series until the dependent
> patches have been accepted.
I posted the first version of this series in November; these two
issues only turned up in recent weeks.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

In-Reply-To: <20140123.133948.1912867106975845377.davem@davemloft.net>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 21:39, David Miller wrote:
> From: Zoltan Kiss <zoltan.kiss@citrix.com>
> Date: Thu, 23 Jan 2014 13:13:07 +0000
>
>> It is already based on two predecessor patches, one which is already
>> accepted but not applied yet:
>>
>> [PATCH net-next v2] xen-netback: Rework rx_work_todo
>>
>> And the other one will hopefully be accepted very soon:
>>
>> [PATCH v5] xen/grant-table: Avoid m2p_override during mapping
>
> These have both been asked for changes or small adjustments to be made.
AFAIK Wei acked the netback one:

http://www.spinics.net/lists/netdev/msg267800.html

I've just sent in the latest version of the grant mapping one.

> Also, you really have to precisely and explicitly mention any
> dependencies which exist.
Ok, the grant mapping API dependency is vaguely mentioned in the patch 
history; I'll move it. I haven't mentioned the other one because it's 
not related to the grant mapping changes; it's a generic bug.

> In fact, it's often best to not post a series until the dependent
> patches have been accepted.
I posted the first version of this series in November; these two 
issues turned up in recent weeks.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:05:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SOt-0001UT-Ed; Thu, 23 Jan 2014 22:05:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W6SOs-0001UO-1n
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:05:26 +0000
Received: from [85.158.139.211:27153] by server-4.bemta-5.messagelabs.com id
	BA/F3-26791-52291E25; Thu, 23 Jan 2014 22:05:25 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390514722!11436489!1
X-Originating-IP: [199.249.25.210]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15541 invoked from network); 23 Jan 2014 22:05:24 -0000
Received: from omzsmtpe01.verizonbusiness.com (HELO
	omzsmtpe01.verizonbusiness.com) (199.249.25.210)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 23 Jan 2014 22:05:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe01.verizonbusiness.com with ESMTP; 23 Jan 2014 22:05:21 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="244754676"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 23 Jan 2014 22:05:20 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Thu, 23 Jan 2014 17:03:40 -0500
Message-ID: <52E191BB.7040904@terremark.com>
Date: Thu, 23 Jan 2014 17:03:39 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Don Slutz
	<dslutz@verizon.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
In-Reply-To: <c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
> Don Slutz <dslutz@verizon.com> wrote:

[snip]

>> WARNING: g.e. still in use!
>> WARNING: g.e. still in use!
>> WARNING: g.e. still in use!
>> pm_op(): platform_pm_resume+0x0/0x50 returns -19
>> PM: Device i8042 failed to resume: error -19
>> INFO: task sadc:22164 blocked for more then 120 seconds.
>> "echo 0 >..."
>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>
>>   
>> [root@dcs-xen-54 ~]# xl des 17
>> [root@dcs-xen-54 ~]# xl restore -V
>> /big/xl-save/centos-6.4-x86_64.0.save
>>
>>
>> Not sure if this is expected or not.
> I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it to be enabled)?

I have not used xend/xe in a long time.  I did need to configure it.

Does not start:


# /etc/init.d/xend start
WARNING: Enabling the xend toolstack.
xend is deprecated and scheduled for removal. Please migrate
to another toolstack ASAP.
Traceback (most recent call last):
   File "/usr/sbin/xend", line 110, in <module>
     sys.exit(main())
   File "/usr/sbin/xend", line 91, in main
     start_blktapctrl()
   File "/usr/sbin/xend", line 77, in start_blktapctrl
     start_daemon("blktapctrl", "")
   File "/usr/sbin/xend", line 74, in start_daemon
     os.execvp(daemon, (daemon,) + args)
   File "/usr/lib64/python2.7/os.py", line 344, in execvp
     _execvpe(file, args)
   File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
     func(fullname, *argrest)
OSError: [Errno 2] No such file or directory
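The traceback above is the generic failure mode of Python's os.execvp() when the target binary (here blktapctrl, which xend tries to spawn) is not installed or not on the PATH. A minimal sketch of the same mechanism, with a deliberately nonexistent daemon name standing in for the missing blktapctrl:

```python
import errno
import os

def start_daemon(daemon, args=""):
    # Rough analogue of xend's start_daemon(): exec the helper binary,
    # searching PATH. On success execvp never returns; if the binary is
    # missing it raises OSError with errno 2 (ENOENT), as in the
    # traceback above.
    os.execvp(daemon, (daemon,) + tuple(args.split()))

try:
    start_daemon("no-such-blktapctrl")  # hypothetical missing binary
except OSError as e:
    assert e.errno == errno.ENOENT
    print("OSError: [Errno %d] %s" % (e.errno, e.strerror))
```

So the fix on the reporter's side would be installing the blktap tools (or building xend without them), not anything in the save/restore path itself.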


How important is it to try this?

     -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SUW-0001pP-QJ; Thu, 23 Jan 2014 22:11:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <faizulbari@gmail.com>) id 1W6SK6-0001TD-5h
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:00:30 +0000
Received: from [193.109.254.147:11109] by server-12.bemta-14.messagelabs.com
	id 47/64-13681-DF091E25; Thu, 23 Jan 2014 22:00:29 +0000
X-Env-Sender: faizulbari@gmail.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390514428!12819885!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4694 invoked from network); 23 Jan 2014 22:00:29 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:00:29 -0000
Received: by mail-we0-f170.google.com with SMTP id u57so1911972wes.1
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 14:00:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=HM8tqMAGdK6UBTPlBwzJwpPyz5Ltoy1KNA/dDuwdH5U=;
	b=C7xkvnJeK0mcPGMoz9EV2wDeKvtx1CPd91bFGHUuxPbrfar/Q1dO3TEcvUx/ZKvm+q
	mnhHwElle8rKCl6X+JNavWg1ACCCSv5LyY6pod96eXIqd2d+M1qxuFiGi3mfwlcX7PIP
	ZLClSvooEGmcmYX0oHGaaGYAvF9jqoe0345vhyf3GxfGYMDy1RApihSwyO+WMazaYYgI
	E+Y151bPWq1vYsonDSMzCJ+F9d5u3Qt7rvwfcqZ2WH5g59KRl9q4G5jAcBbrnxHJ4Rps
	8whg5Ego92Lf9FvUHJ8dOFSfHyMpEke5h0VJMB2FCIJbrLsWRB3vVu67shiSFVtZdqst
	dnCA==
X-Received: by 10.194.185.113 with SMTP id fb17mr8173587wjc.29.1390514428734; 
	Thu, 23 Jan 2014 14:00:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.71.77 with HTTP; Thu, 23 Jan 2014 14:00:08 -0800 (PST)
From: Faizul Bari <faizulbari@gmail.com>
Date: Thu, 23 Jan 2014 17:00:08 -0500
Message-ID: <CAATLjzp63Ao1btG4zCvOnjyT0xtnZhBimjjHaF9KekzunN2HqQ@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Thu, 23 Jan 2014 22:11:16 +0000
Subject: [Xen-devel] Live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7516640658140216496=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7516640658140216496==
Content-Type: multipart/alternative; boundary=047d7bd6bb0e94726604f0aa5e82

--047d7bd6bb0e94726604f0aa5e82
Content-Type: text/plain; charset=UTF-8

Hi,

I am new to xen development. I am trying to add a feature to the live
migration process. Instead of depending on the expected downtime and
transfer rate, I want to be able to manually trigger the stop-copy phase of
a live migration.

I am totally lost in the code. Can someone please point me to where I 
should start looking to implement this feature?
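The decision being asked about lives in the pre-copy loop of the migration code (tools/libxc/xc_domain_save.c in Xen of this era): the loop keeps resending dirtied pages until a convergence test passes, and a manual trigger amounts to one more OR-term in that test. A language-neutral sketch, with all names and thresholds hypothetical rather than Xen's actual ones:

```python
# Hypothetical sketch of a pre-copy live-migration loop with a manual
# stop-and-copy trigger; names and thresholds are illustrative only.

def precopy_loop(dirty_pages, send, max_iters=30, dirty_threshold=50,
                 manual_trigger=lambda: False):
    """Resend dirtied pages until convergence or a manual trigger fires.

    dirty_pages: callable returning the current list of dirty pages.
    send: callable transmitting an iterable of pages.
    Returns the iteration at which stop-and-copy began.
    """
    for it in range(1, max_iters + 1):
        pages = dirty_pages()
        send(pages)
        # Convergence test: few enough dirty pages left, out of
        # iterations, or the operator asked for stop-and-copy.
        if len(pages) < dirty_threshold or it == max_iters or manual_trigger():
            return it
    return max_iters
```

In this framing, exposing the feature means plumbing a flag from the toolstack command layer down to where `manual_trigger()` is checked.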

--
Faiz

--047d7bd6bb0e94726604f0aa5e82--


--===============7516640658140216496==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7516640658140216496==--


From xen-devel-bounces@lists.xen.org Thu Jan 23 22:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SUW-0001pI-Ez; Thu, 23 Jan 2014 22:11:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6HwP-0005fd-GN
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:55:21 +0000
Received: from [193.109.254.147:18657] by server-12.bemta-14.messagelabs.com
	id F4/E2-13681-815F0E25; Thu, 23 Jan 2014 10:55:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390473984!11191760!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30968 invoked from network); 23 Jan 2014 10:46:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:46:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95678049"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 10:46:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:46:22 -0500
Message-ID: <1390473981.24595.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jesse Benedict <jesse.benedict@citrix.com>
Date: Thu, 23 Jan 2014 10:46:21 +0000
In-Reply-To: <B8C24FCCBFC459419478083491FF39E21F10C0@FTLPEX01CL02.citrite.net>
References: <B8C24FCCBFC459419478083491FF39E21F10C0@FTLPEX01CL02.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
X-Mailman-Approved-At: Thu, 23 Jan 2014 22:11:16 +0000
Cc: "'xen-api@lists.xen.org'" <xen-api-request@lists.xen.org>
Subject: Re: [Xen-devel] Request: XenCenter/XE Command To Preserve VM MAC
 Addresses/VIFs when XS NIC Settings Change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gV2VkLCAyMDE0LTAxLTIyIGF0IDE3OjIwICswMDAwLCBKZXNzZSBCZW5lZGljdCB3cm90ZToK
PiBBbGwsCgpJIHRoaW5rIHlvdSB3YW50ZWQgeGVuLWFwaUAgbm90IHhlbi1hcGktcmVxdWVzdEAg
KHRoZSBsYXR0ZXIgYmVpbmcgc29tZQpzb3J0IG9mIGFkbWluIGFkZHJlc3MgSSB0aGluaykuIEkn
dmUgbWFkZSB0aGF0IGNoYW5nZSB0byB0aGlzIHJlcGx5LgoKTm8gWGVuQ2VudGVyL1hlIGRldmVs
b3BtZW50IGhhcHBlbnMgb24geGVuLWRldmVsQCAod2hpY2ggaXMgdGhlIGxpc3QgZm9yCmRldmVs
b3BtZW50IG9mIHRoZSB1cHN0cmVhbSBoeXBlcnZpc29yIGFuZCBhc3NvY2lhdGVkIGxvdyBsZXZl
bCB0b29scykKc28gSSBoYXZlIG1vdmVkIHRoYXQgdG8gQmNjLiBJIHRoaW5rIGZvciBYZW5DZW50
ZXIgc3R1ZmYgeW91IHByb2JhYmx5CndhbnQgb25lIG9mIHRoZSB4ZW5zZXJ2ZXIub3JnIGxpc3Rz
LCBidXQgSSBndWVzcyB4ZW4tYXBpQCBmb2xrcyBjYW4KcmVkaXJlY3QgYXMgbmVjZXNzYXJ5LgoK
UXVvdGVzIHVudHJpbW1lZCBmb3IgdGhlIGJlbmVmaXQgb2YgeGVuLWFwaUAuCgpJYW4uCgo+IAo+
ICAKPiAKPiBUaGlzIHJlcXVlc3QgY29tZXMgZnJvbSBleHBlcmllbmNlIHdpdGggbWFueSBjbGll
bnRzLiAgSXQgaXMgc29tZXdoYXQKPiBlYXN5IHRvIHNjcmlwdCwgYnV0IGhlcmUgaXQgZ29lczoK
PiAKPiAgCj4gCj4gSVNTVUU6Cj4gCj4gV2UgY3VycmVudGx5IGhhdmUgdGhlIGNhcGFiaWxpdHkg
dG8g4oCcQVVUT01BVElDQUxMWeKAnSBhZGQgYSBuZXR3b3JrIHRvCj4gTkVXIFZNcywgYnV0IG5v
IHdheSB0byDigJxQUkVTRVJWReKAnSBleGlzdGluZyBWTSBOZXR3b3JrIElEcyBhbmQgTUFDcwo+
IHdoZW4gWGVuU2VydmVyIGludGVyZmFjZXMgYXJlIGNoYW5nZWQuCj4gCj4gIAo+IAo+IFNDRU5B
UklPOgo+IAo+IFVzZXIgaGFzIHNldmVyYWwgaHVuZHJlZCBWTXMgYW5kIG5lZWRzIHRvIGNoYW5n
ZSBCT05Ecy9JbnRlcmZhY2VzIGluCj4gWGVuU2VydmVyCj4gCj4gQWZ0ZXIgYnJlYWtpbmcgYm9u
ZHMgb3IgbW9kaWZ5aW5nIFhlblNlcnZlciBpbnRlcmZhY2VzLCB0aGUgVk1zIGFyZSBubwo+IGxv
bmdlciBhc3NvY2lhdGVkIHdpdGggYSDigJxOZXR3b3Jr4oCdCj4gCj4gIAo+IAo+IFJFUVVFU1Q6
Cj4gCj4gT2ZmZXIgYSBtZXRob2Qgd2hlcmU6Cj4gCj4gLSAgICAgICAgIE1vZGlmaWNhdGlvbnMg
dG8gYSBuZXR3b3JrIGludGVyZmFjZSBvciBCb25kIG5vdGVzIHRoYXQg4oCcVk1zCj4gYXJlIGFz
c29jaWF0ZWQgd2l0aCB0aGlzIG5ldHdvcmvigJ0KPiAKPiAtICAgICAgICAgQWZ0ZXIgY2hhbmdl
cyBhcmUgbWFkZSB0byBzYWlkIGludGVyZmFjZXMsIGFzayB1c2VyIOKAnFdvdWxkCj4geW91IGxp
a2UgdG8gdXBkYXRlIHlvdXIgVk1zIHRvIHVzZSB0aGlzIGludGVyZmFjZSBhbmQgcmV0YWluIE5l
dHdvcmsKPiBJRCBhbmQgTUFDIEFERFJFU1M/4oCdCj4gCj4gbyAgWWVzIChBbGwpCj4gCj4gbyAg
Tm8gKExpc3Qgb2YgVk1zIHRvIGFwcGx5IHRvIG9yIE5PTkUpCj4gCj4gIAo+IAo+IEkgaGF2ZSBh
IHNjcmlwdCDigJMgbm90IGN1cnJlbnRseSB3aXRoIG1lIOKAkyB0aGF0IEkgd3JvdGUgdG8gaGFu
ZGxlIHRoaXMKPiB0eXBlIG9mIGFjdGl2aXR5IGFzIEJPTkQgY29uZmlndXJhdGlvbiwgYWRkaXRp
b25zLCBldGMgaGFwcGVuIGFsbCB0aGUKPiB0aW1lIGFuZCBJIGhhZCBvbmUgY2xpZW50IChhY2hl
bSksIGhhdmUgdG8gbWFudWFsbHkgYXNzb2NpYXRlIGFsbCBWTXMKPiB3aXRoIGEgc3BlY2lmaWMg
bmV0d29yayBib25kIHRoYXQgaGFkIHRvIGJlIG1vZGlmaWVkLgo+IAo+ICAKPiAKPiBJZiB0aGlz
IHJlcXVlc3QgaXMgbm90IGNsZWFyLCBJIHdpbGwgcmVwbHkgd2l0aCB0aGUgc2NyaXB0IEkgd3Jv
dGUgYW5kCj4gd2lsbCBvZmZlciBzY3JlZW4gc2hvdHMuCj4gCj4gIAo+IAo+ICAKPiAKPiBTaW5j
ZXJlbHksCj4gCj4gIAo+IAo+ICAKPiAKPiBKZXNzZSBCZW5lZGljdCwgQ0NBIHwgQ2l0cml4LCBJ
bmMuIHwgWGVuU2VydmVyLCBYZW5DbGllbnQgU3VwcG9ydCBUZWFtCj4gCj4gV29yayBCZXR0ZXIu
ICBMaXZlIEJldHRlci4gIENhbGwgdXMgYXQgMS04MDAtNENJVFJJWCEKPiAKPiBKb2luIHRoZSBD
b21tdW5pdHk6IHN1cHBvcnQuY2l0cml4LmNvbSB8IGRpc2N1c3Npb25zLmNpdHJpeC5jb20gfAo+
IGJsb2dzLmNpdHJpeC5jb20gfCB0YWFzLmNpdHJpeC5jb20KPiAKPiAgCj4gCj4gQ3VzdG9tZXIg
U2F0aXNmYWN0aW9uIGlzIG91ciBnb2FsLgo+IAo+IElmIHlvdSBoYXZlIGZlZWRiYWNrIHJlZ2Fy
ZGluZyBteSBwZXJmb3JtYW5jZSwgcGxlYXNlIGZlZWwgZnJlZSB0bwo+IGNvbnRhY3QgbXkgTWFu
YWdlcndpbGxpYW0uYXljb2NrQGNpdHJpeC5jb20KPiAKPiAgCj4gCj4gQ09ORklERU5USUFMSVRZ
IE5PVElDRQo+IFRoaXMgZS1tYWlsIG1lc3NhZ2UgYW5kIGFsbCBkb2N1bWVudHMgd2hpY2ggYWNj
b21wYW55IGl0IGFyZSBpbnRlbmRlZAo+IG9ubHkgZm9yIHRoZSB1c2Ugb2YgdGhlIGluZGl2aWR1
YWwgb3IgZW50aXR5IHRvIHdoaWNoIGFkZHJlc3NlZCwgYW5kCj4gbWF5IGNvbnRhaW4gcHJpdmls
ZWdlZCBvciBjb25maWRlbnRpYWwgaW5mb3JtYXRpb24uIEFueSB1bmF1dGhvcml6ZWQKPiBkaXNj
bG9zdXJlIG9yIGRpc3RyaWJ1dGlvbiBvZiB0aGlzIGUtbWFpbCBtZXNzYWdlIGlzIHByb2hpYml0
ZWQuIEFueQo+IHByaXZhdGUgZmlsZXMgb3IgdXRpbGl0aWVzIHRoYXQgYXJlIGluY2x1ZGVkIGlu
IHRoaXMgZS1tYWlsIGFyZQo+IGludGVuZGVkIG9ubHkgZm9yIHRoZSB1c2Ugb2YgdGhlIGluZGl2
aWR1YWwgb3IgZW50aXR5IHRvIHdoaWNoIHRoaXMgaXMKPiBhZGRyZXNzZWQgYW5kIGRpc3RyaWJ1
dGlvbiBvZiB0aGVzZSBmaWxlcyBvciB1dGlsaXRpZXMgaXMgcHJvaGliaXRlZC4KPiBJZiB5b3Ug
aGF2ZSByZWNlaXZlZCB0aGlzIGUtbWFpbCBtZXNzYWdlIGluIGVycm9yLCBwbGVhc2Ugbm90aWZ5
IG1lCj4gaW1tZWRpYXRlbHkuIFRoYW5rIHlvdS4KPiAKPiAgCj4gCj4gIAo+IAo+IAo+IF9fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gWGVuLWRldmVsIG1h
aWxpbmcgbGlzdAo+IFhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCj4gaHR0cDovL2xpc3RzLnhlbi5v
cmcveGVuLWRldmVsCgoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0
cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SUW-0001pI-Ez; Thu, 23 Jan 2014 22:11:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6HwP-0005fd-GN
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 10:55:21 +0000
Received: from [193.109.254.147:18657] by server-12.bemta-14.messagelabs.com
	id F4/E2-13681-815F0E25; Thu, 23 Jan 2014 10:55:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390473984!11191760!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30968 invoked from network); 23 Jan 2014 10:46:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 10:46:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,705,1384300800"; d="scan'208";a="95678049"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 10:46:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 05:46:22 -0500
Message-ID: <1390473981.24595.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jesse Benedict <jesse.benedict@citrix.com>
Date: Thu, 23 Jan 2014 10:46:21 +0000
In-Reply-To: <B8C24FCCBFC459419478083491FF39E21F10C0@FTLPEX01CL02.citrite.net>
References: <B8C24FCCBFC459419478083491FF39E21F10C0@FTLPEX01CL02.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
X-Mailman-Approved-At: Thu, 23 Jan 2014 22:11:16 +0000
Cc: "'xen-api@lists.xen.org'" <xen-api-request@lists.xen.org>
Subject: Re: [Xen-devel] Request: XenCenter/XE Command To Preserve VM MAC
 Addresses/VIFs when XS NIC Settings Change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gV2VkLCAyMDE0LTAxLTIyIGF0IDE3OjIwICswMDAwLCBKZXNzZSBCZW5lZGljdCB3cm90ZToK
PiBBbGwsCgpJIHRoaW5rIHlvdSB3YW50ZWQgeGVuLWFwaUAgbm90IHhlbi1hcGktcmVxdWVzdEAg
KHRoZSBsYXR0ZXIgYmVpbmcgc29tZQpzb3J0IG9mIGFkbWluIGFkZHJlc3MgSSB0aGluaykuIEkn
dmUgbWFkZSB0aGF0IGNoYW5nZSB0byB0aGlzIHJlcGx5LgoKTm8gWGVuQ2VudGVyL1hlIGRldmVs
b3BtZW50IGhhcHBlbnMgb24geGVuLWRldmVsQCAod2hpY2ggaXMgdGhlIGxpc3QgZm9yCmRldmVs
b3BtZW50IG9mIHRoZSB1cHN0cmVhbSBoeXBlcnZpc29yIGFuZCBhc3NvY2lhdGVkIGxvdyBsZXZl
bCB0b29scykKc28gSSBoYXZlIG1vdmVkIHRoYXQgdG8gQmNjLiBJIHRoaW5rIGZvciBYZW5DZW50
ZXIgc3R1ZmYgeW91IHByb2JhYmx5CndhbnQgb25lIG9mIHRoZSB4ZW5zZXJ2ZXIub3JnIGxpc3Rz
LCBidXQgSSBndWVzcyB4ZW4tYXBpQCBmb2xrcyBjYW4KcmVkaXJlY3QgYXMgbmVjZXNzYXJ5LgoK
UXVvdGVzIHVudHJpbW1lZCBmb3IgdGhlIGJlbmVmaXQgb2YgeGVuLWFwaUAuCgpJYW4uCgo+IAo+
ICAKPiAKPiBUaGlzIHJlcXVlc3QgY29tZXMgZnJvbSBleHBlcmllbmNlIHdpdGggbWFueSBjbGll
bnRzLiAgSXQgaXMgc29tZXdoYXQKPiBlYXN5IHRvIHNjcmlwdCwgYnV0IGhlcmUgaXQgZ29lczoK
PiAKPiAgCj4gCj4gSVNTVUU6Cj4gCj4gV2UgY3VycmVudGx5IGhhdmUgdGhlIGNhcGFiaWxpdHkg
dG8g4oCcQVVUT01BVElDQUxMWeKAnSBhZGQgYSBuZXR3b3JrIHRvCj4gTkVXIFZNcywgYnV0IG5v
IHdheSB0byDigJxQUkVTRVJWReKAnSBleGlzdGluZyBWTSBOZXR3b3JrIElEcyBhbmQgTUFDcwo+
IHdoZW4gWGVuU2VydmVyIGludGVyZmFjZXMgYXJlIGNoYW5nZWQuCj4gCj4gIAo+IAo+IFNDRU5B
UklPOgo+IAo+IFVzZXIgaGFzIHNldmVyYWwgaHVuZHJlZCBWTXMgYW5kIG5lZWRzIHRvIGNoYW5n
ZSBCT05Ecy9JbnRlcmZhY2VzIGluCj4gWGVuU2VydmVyCj4gCj4gQWZ0ZXIgYnJlYWtpbmcgYm9u
ZHMgb3IgbW9kaWZ5aW5nIFhlblNlcnZlciBpbnRlcmZhY2VzLCB0aGUgVk1zIGFyZSBubwo+IGxv
bmdlciBhc3NvY2lhdGVkIHdpdGggYSDigJxOZXR3b3Jr4oCdCj4gCj4gIAo+IAo+IFJFUVVFU1Q6
Cj4gCj4gT2ZmZXIgYSBtZXRob2Qgd2hlcmU6Cj4gCj4gLSAgICAgICAgIE1vZGlmaWNhdGlvbnMg
dG8gYSBuZXR3b3JrIGludGVyZmFjZSBvciBCb25kIG5vdGVzIHRoYXQg4oCcVk1zCj4gYXJlIGFz
c29jaWF0ZWQgd2l0aCB0aGlzIG5ldHdvcmvigJ0KPiAKPiAtICAgICAgICAgQWZ0ZXIgY2hhbmdl
cyBhcmUgbWFkZSB0byBzYWlkIGludGVyZmFjZXMsIGFzayB1c2VyIOKAnFdvdWxkCj4geW91IGxp
a2UgdG8gdXBkYXRlIHlvdXIgVk1zIHRvIHVzZSB0aGlzIGludGVyZmFjZSBhbmQgcmV0YWluIE5l
dHdvcmsKPiBJRCBhbmQgTUFDIEFERFJFU1M/4oCdCj4gCj4gbyAgWWVzIChBbGwpCj4gCj4gbyAg
Tm8gKExpc3Qgb2YgVk1zIHRvIGFwcGx5IHRvIG9yIE5PTkUpCj4gCj4gIAo+IAo+IEkgaGF2ZSBh
IHNjcmlwdCDigJMgbm90IGN1cnJlbnRseSB3aXRoIG1lIOKAkyB0aGF0IEkgd3JvdGUgdG8gaGFu
ZGxlIHRoaXMKPiB0eXBlIG9mIGFjdGl2aXR5IGFzIEJPTkQgY29uZmlndXJhdGlvbiwgYWRkaXRp
b25zLCBldGMgaGFwcGVuIGFsbCB0aGUKPiB0aW1lIGFuZCBJIGhhZCBvbmUgY2xpZW50IChhY2hl
bSksIGhhdmUgdG8gbWFudWFsbHkgYXNzb2NpYXRlIGFsbCBWTXMKPiB3aXRoIGEgc3BlY2lmaWMg
bmV0d29yayBib25kIHRoYXQgaGFkIHRvIGJlIG1vZGlmaWVkLgo+IAo+ICAKPiAKPiBJZiB0aGlz
IHJlcXVlc3QgaXMgbm90IGNsZWFyLCBJIHdpbGwgcmVwbHkgd2l0aCB0aGUgc2NyaXB0IEkgd3Jv
dGUgYW5kCj4gd2lsbCBvZmZlciBzY3JlZW4gc2hvdHMuCj4gCj4gIAo+IAo+ICAKPiAKPiBTaW5j
ZXJlbHksCj4gCj4gIAo+IAo+ICAKPiAKPiBKZXNzZSBCZW5lZGljdCwgQ0NBIHwgQ2l0cml4LCBJ
bmMuIHwgWGVuU2VydmVyLCBYZW5DbGllbnQgU3VwcG9ydCBUZWFtCj4gCj4gV29yayBCZXR0ZXIu
ICBMaXZlIEJldHRlci4gIENhbGwgdXMgYXQgMS04MDAtNENJVFJJWCEKPiAKPiBKb2luIHRoZSBD
b21tdW5pdHk6IHN1cHBvcnQuY2l0cml4LmNvbSB8IGRpc2N1c3Npb25zLmNpdHJpeC5jb20gfAo+
IGJsb2dzLmNpdHJpeC5jb20gfCB0YWFzLmNpdHJpeC5jb20KPiAKPiAgCj4gCj4gQ3VzdG9tZXIg
U2F0aXNmYWN0aW9uIGlzIG91ciBnb2FsLgo+IAo+IElmIHlvdSBoYXZlIGZlZWRiYWNrIHJlZ2Fy
ZGluZyBteSBwZXJmb3JtYW5jZSwgcGxlYXNlIGZlZWwgZnJlZSB0bwo+IGNvbnRhY3QgbXkgTWFu
YWdlcndpbGxpYW0uYXljb2NrQGNpdHJpeC5jb20KPiAKPiAgCj4gCj4gQ09ORklERU5USUFMSVRZ
IE5PVElDRQo+IFRoaXMgZS1tYWlsIG1lc3NhZ2UgYW5kIGFsbCBkb2N1bWVudHMgd2hpY2ggYWNj
b21wYW55IGl0IGFyZSBpbnRlbmRlZAo+IG9ubHkgZm9yIHRoZSB1c2Ugb2YgdGhlIGluZGl2aWR1
YWwgb3IgZW50aXR5IHRvIHdoaWNoIGFkZHJlc3NlZCwgYW5kCj4gbWF5IGNvbnRhaW4gcHJpdmls
ZWdlZCBvciBjb25maWRlbnRpYWwgaW5mb3JtYXRpb24uIEFueSB1bmF1dGhvcml6ZWQKPiBkaXNj
bG9zdXJlIG9yIGRpc3RyaWJ1dGlvbiBvZiB0aGlzIGUtbWFpbCBtZXNzYWdlIGlzIHByb2hpYml0
ZWQuIEFueQo+IHByaXZhdGUgZmlsZXMgb3IgdXRpbGl0aWVzIHRoYXQgYXJlIGluY2x1ZGVkIGlu
IHRoaXMgZS1tYWlsIGFyZQo+IGludGVuZGVkIG9ubHkgZm9yIHRoZSB1c2Ugb2YgdGhlIGluZGl2
aWR1YWwgb3IgZW50aXR5IHRvIHdoaWNoIHRoaXMgaXMKPiBhZGRyZXNzZWQgYW5kIGRpc3RyaWJ1
dGlvbiBvZiB0aGVzZSBmaWxlcyBvciB1dGlsaXRpZXMgaXMgcHJvaGliaXRlZC4KPiBJZiB5b3Ug
aGF2ZSByZWNlaXZlZCB0aGlzIGUtbWFpbCBtZXNzYWdlIGluIGVycm9yLCBwbGVhc2Ugbm90aWZ5
IG1lCj4gaW1tZWRpYXRlbHkuIFRoYW5rIHlvdS4KPiAKPiAgCj4gCj4gIAo+IAo+IAo+IF9fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gWGVuLWRldmVsIG1h
aWxpbmcgbGlzdAo+IFhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCj4gaHR0cDovL2xpc3RzLnhlbi5v
cmcveGVuLWRldmVsCgoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0
cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZD-00025y-Nr; Thu, 23 Jan 2014 22:16:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W6SZC-00025s-Bt
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:06 +0000
Received: from [193.109.254.147:15880] by server-9.bemta-14.messagelabs.com id
	C9/D4-13957-5A491E25; Thu, 23 Jan 2014 22:16:05 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390515364!12752636!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32360 invoked from network); 23 Jan 2014 22:16:05 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Jan 2014 22:16:05 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390515364; l=526;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=nFr5Zc9GcGayNyRsStYG4OJ69Po=;
	b=qa3slzXE5Rc9VONXxiqbFXSBPx+Dh2+traetSkh7SI8+1+ewWEMfQrmXiqbG1Wc/qbC
	1eE5nInlD8R0FxQWzcASbQLlWnEIByvJ7RGUZB6Iw+f8NxXmfj5Bh2lwCdXIMq41Q1o9Z
	L81ZzVqm8vZXfIkUs6m1nPJCVMhgL+btZ2o=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.17 AUTH)
	with (TLSv1:DHE-RSA-AES256-SHA encrypted) ESMTPSA id R00cbdq0NMG4CUu ; 
	Thu, 23 Jan 2014 23:16:04 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 3108C50268; Thu, 23 Jan 2014 23:16:04 +0100 (CET)
Date: Thu, 23 Jan 2014 23:16:04 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Faizul Bari <faizulbari@gmail.com>
Message-ID: <20140123221603.GA3550@aepfle.de>
References: <CAATLjzp63Ao1btG4zCvOnjyT0xtnZhBimjjHaF9KekzunN2HqQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAATLjzp63Ao1btG4zCvOnjyT0xtnZhBimjjHaF9KekzunN2HqQ@mail.gmail.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Live migration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, Faizul Bari wrote:

> I am new to xen development. I am trying to add a feature to the live migration
> process. Instead of depending on the expected downtime and transfer rate, I
> want to be able to manually trigger the stop-copy phase of a live migration.
> 
> I am totally lost in the code. Can someone please point me where I should start
> looking to implement this feature.

Have a look at this series of changes:

http://lists.xen.org/archives/html/xen-devel/2013-03/msg02549.html


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZJ-00026w-7b; Thu, 23 Jan 2014 22:16:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZH-00026J-8c
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:11 +0000
Received: from [85.158.143.35:5685] by server-3.bemta-4.messagelabs.com id
	45/2E-32360-AA491E25; Thu, 23 Jan 2014 22:16:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515368!424098!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2523 invoked from network); 23 Jan 2014 22:16:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:07 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZC-0000da-Ti;
	Thu, 23 Jan 2014 22:16:06 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:02 +0000
Message-ID: <1390515366-32236-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 1/5] xen: move Xen PV machine files to
	hw/xenpv
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 hw/i386/Makefile.objs                |    2 +-
 hw/xenpv/Makefile.objs               |    2 ++
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 5 files changed, 3 insertions(+), 1 deletion(-)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 09ac433..0faccd7 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -2,7 +2,7 @@ obj-$(CONFIG_KVM) += kvm/
 obj-y += multiboot.o smbios.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
-obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
+obj-$(CONFIG_XEN) += ../xenpv/
 
 obj-y += kvmvapic.o
 obj-y += acpi-build.o
diff --git a/hw/xenpv/Makefile.objs b/hw/xenpv/Makefile.objs
new file mode 100644
index 0000000..49f6e9e
--- /dev/null
+++ b/hw/xenpv/Makefile.objs
@@ -0,0 +1,2 @@
+# Xen PV machine support
+obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
diff --git a/hw/i386/xen_domainbuild.c b/hw/xenpv/xen_domainbuild.c
similarity index 100%
rename from hw/i386/xen_domainbuild.c
rename to hw/xenpv/xen_domainbuild.c
diff --git a/hw/i386/xen_domainbuild.h b/hw/xenpv/xen_domainbuild.h
similarity index 100%
rename from hw/i386/xen_domainbuild.h
rename to hw/xenpv/xen_domainbuild.h
diff --git a/hw/i386/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
similarity index 100%
rename from hw/i386/xen_machine_pv.c
rename to hw/xenpv/xen_machine_pv.c
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZJ-00027P-Kw; Thu, 23 Jan 2014 22:16:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZI-00026O-2a
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:12 +0000
Received: from [85.158.143.35:30063] by server-3.bemta-4.messagelabs.com id
	D6/2E-32360-BA491E25; Thu, 23 Jan 2014 22:16:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515368!424098!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2580 invoked from network); 23 Jan 2014 22:16:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963564"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:07 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZC-0000da-RO;
	Thu, 23 Jan 2014 22:16:06 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:01 +0000
Message-ID: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As promised, I hacked up a prototype based on Paolo's disable-TCG series.
However, I coded some stubs for TCG anyway, so this series should in
principle work with or without Paolo's series.

The first 3 patches refactor some code to disentangle Xen PV and HVM
guests. The 4th patch has the real meat: it introduces the Xen PV target,
which contains basically a dummy CPU, then hooks this Xen PV CPU up to
QEMU's internal structures.

The last patch introduces xenpv-softmmu, which contains *no* emulation
code. I know that in previous discussion people said that every device
emulation should be included if the target architecture is called null.
But since this target CPU is now called xenpv I don't feel obliged to
include any device emulation in this prototype anymore. :-)

Please note that the existing Xen QEMU build is not affected at all. You
can still use "--disable-tcg --enable-xen --target-list=i386-softmmu"
(or x86_64-softmmu) to build qemu-system-{i386,x86_64} and use it for
both HVM and PV guests. This series adds another option: building QEMU
with "--disable-tcg --enable-xen --target-list=xenpv-softmmu" yields a
QEMU binary tailored for Xen PV guests. The effect is that the binary
size drops from 14MB to 7.3MB.
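[Editor's sketch of the two build configurations described above; this
assumes a QEMU source tree with this RFC series applied, since the
xenpv-softmmu target is introduced here and does not exist upstream.]

```shell
# Existing build: one binary that serves both HVM and PV guests.
./configure --disable-tcg --enable-xen \
    --target-list=i386-softmmu,x86_64-softmmu
make

# Option added by this series: a smaller binary for PV guests only.
./configure --disable-tcg --enable-xen \
    --target-list=xenpv-softmmu
make
```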

What do you think of this idea? I'm all ears.

Wei.

Wei Liu (5):
  xen: move Xen PV machine files to hw/xenpv
  xen: factor out common functions
  exec: guard Xen HVM hooks with CONFIG_XEN_I386
  xen: implement Xen PV target
  xen: introduce xenpv-softmmu.mak

 Makefile.target                      |    3 +-
 arch_init.c                          |    2 +
 configure                            |   12 ++-
 cpu-exec.c                           |    2 +
 default-configs/i386-softmmu.mak     |    1 -
 default-configs/x86_64-softmmu.mak   |    1 -
 default-configs/xenpv-softmmu.mak    |    2 +
 exec.c                               |   16 ++++
 hw/i386/Makefile.objs                |    2 +-
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 include/exec/memory-internal.h       |    2 +
 include/sysemu/arch_init.h           |    1 +
 target-xenpv/Makefile.objs           |    1 +
 target-xenpv/cpu-qom.h               |   64 ++++++++++++++++
 target-xenpv/cpu.h                   |   66 ++++++++++++++++
 target-xenpv/helper.c                |   32 ++++++++
 target-xenpv/translate.c             |   27 +++++++
 xen-common.c                         |  137 ++++++++++++++++++++++++++++++++++
 xen-all.c => xen-hvm.c               |  112 +--------------------------
 xen-stub.c                           |    4 -
 23 files changed, 368 insertions(+), 121 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c
 create mode 100644 xen-common.c
 rename xen-all.c => xen-hvm.c (92%)

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZJ-00027P-Kw; Thu, 23 Jan 2014 22:16:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZI-00026O-2a
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:12 +0000
Received: from [85.158.143.35:30063] by server-3.bemta-4.messagelabs.com id
	D6/2E-32360-BA491E25; Thu, 23 Jan 2014 22:16:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515368!424098!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2580 invoked from network); 23 Jan 2014 22:16:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963564"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:07 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZC-0000da-RO;
	Thu, 23 Jan 2014 22:16:06 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:01 +0000
Message-ID: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As promised, I hacked up a prototype based on Paolo's "disable TCG" series.
However, I coded some stubs for TCG anyway, so this series should in
principle work both with and without Paolo's series.

The first three patches refactor some code to disentangle Xen PV and HVM
guests. The fourth patch has the real meat: it introduces the Xen PV target,
which basically contains a dummy CPU, and hooks this Xen PV CPU up to
QEMU's internal structures.

The last patch introduces xenpv-softmmu, which contains *no* emulation
code. I know that in previous discussions people said that every device
emulation should be included if the target architecture were called "null",
but since this target CPU is now called "xenpv" I no longer feel obliged to
include any device emulation in this prototype. :-)

Please note that the existing Xen QEMU build is not affected at all. You
can still use "--disable-tcg --enable-xen --target-list=i386-softmmu"
(or "x86_64-softmmu") to build qemu-system-{i386,x86_64} and use it for
both HVM and PV guests. This series adds another option: building QEMU with
"--disable-tcg --enable-xen --target-list=xenpv-softmmu" yields a QEMU
binary tailored for Xen PV guests. The effect is that the binary size drops
from 14MB to 7.3MB.
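The two build variants described above would be configured roughly as
follows (a sketch of the command lines quoted in this letter, not a
verified build recipe):

```shell
# Existing build: one binary serving both HVM and PV guests (~14MB)
./configure --disable-tcg --enable-xen --target-list=i386-softmmu
make

# Proposed build: PV-only binary with no emulation code (~7.3MB)
./configure --disable-tcg --enable-xen --target-list=xenpv-softmmu
make
```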

What do you think of this idea? I'm all ears.

Wei.

Wei Liu (5):
  xen: move Xen PV machine files to hw/xenpv
  xen: factor out common functions
  exec: guard Xen HVM hooks with CONFIG_XEN_I386
  xen: implement Xen PV target
  xen: introduce xenpv-softmmu.mak

 Makefile.target                      |    3 +-
 arch_init.c                          |    2 +
 configure                            |   12 ++-
 cpu-exec.c                           |    2 +
 default-configs/i386-softmmu.mak     |    1 -
 default-configs/x86_64-softmmu.mak   |    1 -
 default-configs/xenpv-softmmu.mak    |    2 +
 exec.c                               |   16 ++++
 hw/i386/Makefile.objs                |    2 +-
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 include/exec/memory-internal.h       |    2 +
 include/sysemu/arch_init.h           |    1 +
 target-xenpv/Makefile.objs           |    1 +
 target-xenpv/cpu-qom.h               |   64 ++++++++++++++++
 target-xenpv/cpu.h                   |   66 ++++++++++++++++
 target-xenpv/helper.c                |   32 ++++++++
 target-xenpv/translate.c             |   27 +++++++
 xen-common.c                         |  137 ++++++++++++++++++++++++++++++++++
 xen-all.c => xen-hvm.c               |  112 +--------------------------
 xen-stub.c                           |    4 -
 23 files changed, 368 insertions(+), 121 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c
 create mode 100644 xen-common.c
 rename xen-all.c => xen-hvm.c (92%)

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZK-000284-7D; Thu, 23 Jan 2014 22:16:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZI-00026W-Ox
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:12 +0000
Received: from [85.158.143.35:5721] by server-1.bemta-4.messagelabs.com id
	00/26-02132-CA491E25; Thu, 23 Jan 2014 22:16:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515368!424098!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2596 invoked from network); 23 Jan 2014 22:16:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963565"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZD-0000da-0x;
	Thu, 23 Jan 2014 22:16:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:04 +0000
Message-ID: <1390515366-32236-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 3/5] exec: guard Xen HVM hooks with
	CONFIG_XEN_I386
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These hooks are only useful when building QEMU with HVM support.

We need to expose CONFIG_XEN_I386 to the source code, so modify configure
and the i386/x86_64 softmmu.mak files accordingly.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 configure                          |    1 +
 default-configs/i386-softmmu.mak   |    1 -
 default-configs/x86_64-softmmu.mak |    1 -
 exec.c                             |   16 ++++++++++++++++
 include/exec/memory-internal.h     |    2 ++
 5 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/configure b/configure
index 10a6562..1e515be 100755
--- a/configure
+++ b/configure
@@ -4472,6 +4472,7 @@ echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
 
 if supported_xen_target; then
     echo "CONFIG_XEN=y" >> $config_target_mak
+    echo "CONFIG_XEN_I386=y" >> $config_target_mak
     if test "$xen_pci_passthrough" = yes; then
         echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
     fi
diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak
index 37ef90f..3c89aaa 100644
--- a/default-configs/i386-softmmu.mak
+++ b/default-configs/i386-softmmu.mak
@@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
 CONFIG_PAM=y
 CONFIG_PCI_PIIX=y
 CONFIG_WDT_IB700=y
-CONFIG_XEN_I386=$(CONFIG_XEN)
 CONFIG_ISA_DEBUG=y
 CONFIG_ISA_TESTDEV=y
 CONFIG_VMPORT=y
diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
index 31bddce..1dc1f85 100644
--- a/default-configs/x86_64-softmmu.mak
+++ b/default-configs/x86_64-softmmu.mak
@@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
 CONFIG_PAM=y
 CONFIG_PCI_PIIX=y
 CONFIG_WDT_IB700=y
-CONFIG_XEN_I386=$(CONFIG_XEN)
 CONFIG_ISA_DEBUG=y
 CONFIG_ISA_TESTDEV=y
 CONFIG_VMPORT=y
diff --git a/exec.c b/exec.c
index defe38f..a72efe2 100644
--- a/exec.c
+++ b/exec.c
@@ -1228,7 +1228,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
             fprintf(stderr, "-mem-path not supported with Xen\n");
             exit(1);
         }
+#ifdef CONFIG_XEN_I386
         xen_ram_alloc(new_block->offset, size, mr);
+#endif
     } else {
         if (mem_path) {
             if (phys_mem_alloc != qemu_anon_ram_alloc) {
@@ -1324,7 +1326,9 @@ void qemu_ram_free(ram_addr_t addr)
             if (block->flags & RAM_PREALLOC_MASK) {
                 ;
             } else if (xen_enabled()) {
+#ifdef CONFIG_XEN_I386
                 xen_invalidate_map_cache_entry(block->host);
+#endif
 #ifndef _WIN32
             } else if (block->fd >= 0) {
                 munmap(block->host, block->length);
@@ -1409,6 +1413,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
     RAMBlock *block = qemu_get_ram_block(addr);
 
     if (xen_enabled()) {
+#ifdef CONFIG_XEN_I386
         /* We need to check if the requested address is in the RAM
          * because we don't want to map the entire memory in QEMU.
          * In that case just map until the end of the page.
@@ -1419,6 +1424,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
             block->host =
                 xen_map_cache(block->offset, block->length, 1);
         }
+#endif
     }
     return block->host + (addr - block->offset);
 }
@@ -1431,7 +1437,11 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, hwaddr *size)
         return NULL;
     }
     if (xen_enabled()) {
+#ifdef CONFIG_XEN_I386
         return xen_map_cache(addr, *size, 1);
+#else
+        return NULL;
+#endif
     } else {
         RAMBlock *block;
 
@@ -1456,8 +1466,10 @@ MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
     uint8_t *host = ptr;
 
     if (xen_enabled()) {
+#ifdef CONFIG_XEN_I386
         *ram_addr = xen_ram_addr_from_mapcache(ptr);
         return qemu_get_ram_block(*ram_addr)->mr;
+#endif
     }
 
     block = ram_list.mru_block;
@@ -1921,7 +1933,9 @@ static void invalidate_and_set_dirty(hwaddr addr,
         /* set dirty bit */
         cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
     }
+#ifdef CONFIG_XEN_I386
     xen_modified_memory(addr, length);
+#endif
 }
 
 static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
@@ -2298,7 +2312,9 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
             }
         }
         if (xen_enabled()) {
+#ifdef CONFIG_XEN_I386
             xen_invalidate_map_cache_entry(buffer);
+#endif
         }
         memory_region_unref(mr);
         return;
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index d0e0633..b4e76e2 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -100,7 +100,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
     for (addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
         cpu_physical_memory_set_dirty_flags(addr, dirty_flags);
     }
+#ifdef CONFIG_XEN_I386
     xen_modified_memory(addr, length);
+#endif
 }
 
 static inline void cpu_physical_memory_mask_dirty_range(ram_addr_t start,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZK-00028a-Ml; Thu, 23 Jan 2014 22:16:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZI-00026X-Vi
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:13 +0000
Received: from [85.158.143.35:30108] by server-3.bemta-4.messagelabs.com id
	88/2E-32360-CA491E25; Thu, 23 Jan 2014 22:16:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515370!424103!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2616 invoked from network); 23 Jan 2014 22:16:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963568"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZD-0000da-32;
	Thu, 23 Jan 2014 22:16:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:06 +0000
Message-ID: <1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 5/5] xen: introduce xenpv-softmmu.mak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The modification to configure is rebased on top of Paolo's change.
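For context, the check this patch extends can be sketched as a standalone
shell function (a hypothetical re-creation of configure's
supported_xen_target: the real function reads the $target_name/$cpu globals
and also checks $xen and $target_softmmu, whereas here the pair is passed
as arguments for easy testing):

```shell
# Hypothetical standalone re-creation of configure's supported_xen_target.
supported_xen_target() {
    case "$1:$2" in
        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64 | xenpv:*)
            return 0 ;;
    esac
    return 1
}

# xenpv matches any host CPU thanks to the xenpv:* pattern.
supported_xen_target xenpv arm   && echo "xenpv:arm supported"
supported_xen_target arm x86_64  || echo "arm:x86_64 not supported"
```

The xenpv:* arm is the key addition: the dummy xenpv target has no host-CPU
restriction, unlike the i386/x86_64 pairs.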

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 configure                         |   13 +++++++++----
 default-configs/xenpv-softmmu.mak |    2 ++
 2 files changed, 11 insertions(+), 4 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak

diff --git a/configure b/configure
index 1e515be..324270c 100755
--- a/configure
+++ b/configure
@@ -4283,7 +4283,7 @@ supported_xen_target() {
     test "$xen" = "yes" || return 1
     test "$target_softmmu" = "yes" || return 1
     case "$target_name:$cpu" in
-        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64)
+        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64 | xenpv:*)
             return 0
         ;;
     esac
@@ -4443,6 +4443,9 @@ case "$target_name" in
   ;;
   unicore32)
   ;;
+  xenpv)
+    TARGET_ARCH=xenpv
+  ;;
   xtensa|xtensaeb)
     TARGET_ARCH=xtensa
   ;;
@@ -4472,9 +4475,11 @@ echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
 
 if supported_xen_target; then
     echo "CONFIG_XEN=y" >> $config_target_mak
-    echo "CONFIG_XEN_I386=y" >> $config_target_mak
-    if test "$xen_pci_passthrough" = yes; then
-        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
+    if test "$target_name" != "xenpv"; then
+        echo "CONFIG_XEN_I386=y" >> $config_target_mak
+        if test "$xen_pci_passthrough" = yes; then
+            echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
+        fi
     fi
 fi
 if supported_kvm_target; then
diff --git a/default-configs/xenpv-softmmu.mak b/default-configs/xenpv-softmmu.mak
new file mode 100644
index 0000000..773f128
--- /dev/null
+++ b/default-configs/xenpv-softmmu.mak
@@ -0,0 +1,2 @@
+# Default configuration for xenpv-softmmu
+# Yes it is empty as we don't need to include any device emulation code
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZL-00029i-G1; Thu, 23 Jan 2014 22:16:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZJ-00026p-I4
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:13 +0000
Received: from [85.158.143.35:30129] by server-2.bemta-4.messagelabs.com id
	52/E5-11386-CA491E25; Thu, 23 Jan 2014 22:16:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515368!424098!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2640 invoked from network); 23 Jan 2014 22:16:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963569"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZC-0000da-Uo;
	Thu, 23 Jan 2014 22:16:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:03 +0000
Message-ID: <1390515366-32236-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 2/5] xen: factor out common functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Common functions used by both HVM and PV guests are factored out from
xen-all.c into xen-common.c.

Also extract a QMP function from xen-all.c and xen-stub.c into
xen-common.c.

Finally, rename xen-all.c to xen-hvm.c, as the functions left there are
only useful to HVM guests.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 Makefile.target        |    3 +-
 xen-common.c           |  137 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen-all.c => xen-hvm.c |  112 +--------------------------------------
 xen-stub.c             |    4 --
 4 files changed, 141 insertions(+), 115 deletions(-)
 create mode 100644 xen-common.c
 rename xen-all.c => xen-hvm.c (92%)

diff --git a/Makefile.target b/Makefile.target
index d07cd95..12c6448 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -123,7 +123,8 @@ obj-y += dump.o
 LIBS+=$(libs_softmmu)
 
 # xen support
-obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
+obj-$(CONFIG_XEN) += xen-common.o
+obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
 obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
 
 # Hardware support
diff --git a/xen-common.c b/xen-common.c
new file mode 100644
index 0000000..05a244a
--- /dev/null
+++ b/xen-common.c
@@ -0,0 +1,137 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "hw/xen/xen_backend.h"
+#include "qmp-commands.h"
+#include "sysemu/char.h"
+
+//#define DEBUG_XEN
+
+#ifdef DEBUG_XEN
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static int store_dev_info(int domid, CharDriverState *cs, const char *string)
+{
+    struct xs_handle *xs = NULL;
+    char *path = NULL;
+    char *newpath = NULL;
+    char *pts = NULL;
+    int ret = -1;
+
+    /* Only continue if we're talking to a pty. */
+    if (strncmp(cs->filename, "pty:", 4)) {
+        return 0;
+    }
+    pts = cs->filename + 4;
+
+    /* We now have everything we need to set the xenstore entry. */
+    xs = xs_open(0);
+    if (xs == NULL) {
+        fprintf(stderr, "Could not contact XenStore\n");
+        goto out;
+    }
+
+    path = xs_get_domain_path(xs, domid);
+    if (path == NULL) {
+        fprintf(stderr, "xs_get_domain_path() error\n");
+        goto out;
+    }
+    newpath = realloc(path, (strlen(path) + strlen(string) +
+                strlen("/tty") + 1));
+    if (newpath == NULL) {
+        fprintf(stderr, "realloc error\n");
+        goto out;
+    }
+    path = newpath;
+
+    strcat(path, string);
+    strcat(path, "/tty");
+    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
+        fprintf(stderr, "xs_write for '%s' fail", string);
+        goto out;
+    }
+    ret = 0;
+
+out:
+    free(path);
+    xs_close(xs);
+
+    return ret;
+}
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+    if (i == 0) {
+        store_dev_info(xen_domid, chr, "/console");
+    } else {
+        char buf[32];
+        snprintf(buf, sizeof(buf), "/device/console/%d", i);
+        store_dev_info(xen_domid, chr, buf);
+    }
+}
+
+
+static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
+{
+    char path[50];
+
+    if (xs == NULL) {
+        fprintf(stderr, "xenstore connection not initialized\n");
+        exit(1);
+    }
+
+    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
+    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
+        fprintf(stderr, "error recording dm state\n");
+        exit(1);
+    }
+}
+
+
+static void xen_change_state_handler(void *opaque, int running,
+                                     RunState state)
+{
+    if (running) {
+        /* record state running */
+        xenstore_record_dm_state(xenstore, "running");
+    }
+}
+
+int xen_init(void)
+{
+    xen_xc = xen_xc_interface_open(0, 0, 0);
+    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
+        xen_be_printf(NULL, 0, "can't open xen interface\n");
+        return -1;
+    }
+    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
+
+    return 0;
+}
+
+#ifdef CONFIG_XEN_I386
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    if (enable) {
+        memory_global_dirty_log_start();
+    } else {
+        memory_global_dirty_log_stop();
+    }
+}
+#else
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+}
+#endif
diff --git a/xen-all.c b/xen-hvm.c
similarity index 92%
rename from xen-all.c
rename to xen-hvm.c
index 4a594bd..5b9bc5f 100644
--- a/xen-all.c
+++ b/xen-hvm.c
@@ -26,9 +26,9 @@
 #include <xen/hvm/params.h>
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN
+//#define DEBUG_XEN_HVM
 
-#ifdef DEBUG_XEN
+#ifdef DEBUG_XEN_HVM
 #define DPRINTF(fmt, ...) \
     do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
 #else
@@ -569,15 +569,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-    if (enable) {
-        memory_global_dirty_log_start();
-    } else {
-        memory_global_dirty_log_stop();
-    }
-}
-
 /* get the ioreq packets from share mem */
 static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
 {
@@ -880,82 +871,6 @@ static void cpu_handle_ioreq(void *opaque)
     }
 }
 
-static int store_dev_info(int domid, CharDriverState *cs, const char *string)
-{
-    struct xs_handle *xs = NULL;
-    char *path = NULL;
-    char *newpath = NULL;
-    char *pts = NULL;
-    int ret = -1;
-
-    /* Only continue if we're talking to a pty. */
-    if (strncmp(cs->filename, "pty:", 4)) {
-        return 0;
-    }
-    pts = cs->filename + 4;
-
-    /* We now have everything we need to set the xenstore entry. */
-    xs = xs_open(0);
-    if (xs == NULL) {
-        fprintf(stderr, "Could not contact XenStore\n");
-        goto out;
-    }
-
-    path = xs_get_domain_path(xs, domid);
-    if (path == NULL) {
-        fprintf(stderr, "xs_get_domain_path() error\n");
-        goto out;
-    }
-    newpath = realloc(path, (strlen(path) + strlen(string) +
-                strlen("/tty") + 1));
-    if (newpath == NULL) {
-        fprintf(stderr, "realloc error\n");
-        goto out;
-    }
-    path = newpath;
-
-    strcat(path, string);
-    strcat(path, "/tty");
-    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
-        fprintf(stderr, "xs_write for '%s' fail", string);
-        goto out;
-    }
-    ret = 0;
-
-out:
-    free(path);
-    xs_close(xs);
-
-    return ret;
-}
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-    if (i == 0) {
-        store_dev_info(xen_domid, chr, "/console");
-    } else {
-        char buf[32];
-        snprintf(buf, sizeof(buf), "/device/console/%d", i);
-        store_dev_info(xen_domid, chr, buf);
-    }
-}
-
-static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
-{
-    char path[50];
-
-    if (xs == NULL) {
-        fprintf(stderr, "xenstore connection not initialized\n");
-        exit(1);
-    }
-
-    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
-    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
-        fprintf(stderr, "error recording dm state\n");
-        exit(1);
-    }
-}
-
 static void xen_main_loop_prepare(XenIOState *state)
 {
     int evtchn_fd = -1;
@@ -973,17 +888,6 @@ static void xen_main_loop_prepare(XenIOState *state)
 }
 
 
-/* Initialise Xen */
-
-static void xen_change_state_handler(void *opaque, int running,
-                                     RunState state)
-{
-    if (running) {
-        /* record state running */
-        xenstore_record_dm_state(xenstore, "running");
-    }
-}
-
 static void xen_hvm_change_state_handler(void *opaque, int running,
                                          RunState rstate)
 {
@@ -1001,18 +905,6 @@ static void xen_exit_notifier(Notifier *n, void *data)
     xs_daemon_close(state->xenstore);
 }
 
-int xen_init(void)
-{
-    xen_xc = xen_xc_interface_open(0, 0, 0);
-    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
-        xen_be_printf(NULL, 0, "can't open xen interface\n");
-        return -1;
-    }
-    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
-
-    return 0;
-}
-
 static void xen_read_physmap(XenIOState *state)
 {
     XenPhysmap *physmap = NULL;
diff --git a/xen-stub.c b/xen-stub.c
index ad189a6..4e2e1e8 100644
--- a/xen-stub.c
+++ b/xen-stub.c
@@ -56,10 +56,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
 {
 }
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-}
-
 void xen_modified_memory(ram_addr_t start, ram_addr_t length)
 {
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZM-0002AY-B5; Thu, 23 Jan 2014 22:16:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZJ-00027K-Vh
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:14 +0000
Received: from [85.158.143.35:5767] by server-3.bemta-4.messagelabs.com id
	BA/2E-32360-DA491E25; Thu, 23 Jan 2014 22:16:13 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515370!424103!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2661 invoked from network); 23 Jan 2014 22:16:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963570"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZD-0000da-1z;
	Thu, 23 Jan 2014 22:16:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:05 +0000
Message-ID: <1390515366-32236-5-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 4/5] xen: implement Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Basically it's a dummy CPU that doesn't do anything. This patch contains
the necessary hooks to make QEMU compile.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch_init.c                |    2 ++
 cpu-exec.c                 |    2 ++
From xen-devel-bounces@lists.xen.org Thu Jan 23 22:16:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SZM-0002AY-B5; Thu, 23 Jan 2014 22:16:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6SZJ-00027K-Vh
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:16:14 +0000
Received: from [85.158.143.35:5767] by server-3.bemta-4.messagelabs.com id
	BA/2E-32360-DA491E25; Thu, 23 Jan 2014 22:16:13 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390515370!424103!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2661 invoked from network); 23 Jan 2014 22:16:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:16:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,708,1384300800"; d="scan'208";a="95963570"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:16:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 17:16:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W6SZD-0000da-1z;
	Thu, 23 Jan 2014 22:16:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <qemu-devel@nongnu.org>
Date: Thu, 23 Jan 2014 22:16:05 +0000
Message-ID: <1390515366-32236-5-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC 4/5] xen: implement Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Basically it's a dummy CPU that doesn't do anything. This patch contains
the necessary hooks to make QEMU compile.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch_init.c                |    2 ++
 cpu-exec.c                 |    2 ++
 include/sysemu/arch_init.h |    1 +
 target-xenpv/Makefile.objs |    1 +
 target-xenpv/cpu-qom.h     |   64 ++++++++++++++++++++++++++++++++++++++++++
 target-xenpv/cpu.h         |   66 ++++++++++++++++++++++++++++++++++++++++++++
 target-xenpv/helper.c      |   32 +++++++++++++++++++++
 target-xenpv/translate.c   |   27 ++++++++++++++++++
 8 files changed, 195 insertions(+)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c

diff --git a/arch_init.c b/arch_init.c
index 2935426..41041d5 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -101,6 +101,8 @@ int graphic_depth = 32;
 #define QEMU_ARCH QEMU_ARCH_XTENSA
 #elif defined(TARGET_UNICORE32)
 #define QEMU_ARCH QEMU_ARCH_UNICORE32
+#elif defined(TARGET_XENPV)
+#define QEMU_ARCH QEMU_ARCH_XENPV
 #endif
 
 const uint32_t arch_type = QEMU_ARCH;
diff --git a/cpu-exec.c b/cpu-exec.c
index b744a1f..c86c383 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -255,6 +255,7 @@ int cpu_exec(CPUArchState *env)
 #elif defined(TARGET_CRIS)
 #elif defined(TARGET_S390X)
 #elif defined(TARGET_XTENSA)
+#elif defined(TARGET_XENPV)
     /* XXXXX */
 #else
 #error unsupported target CPU
@@ -710,6 +711,7 @@ int cpu_exec(CPUArchState *env)
 #elif defined(TARGET_CRIS)
 #elif defined(TARGET_S390X)
 #elif defined(TARGET_XTENSA)
+#elif defined(TARGET_XENPV)
     /* XXXXX */
 #else
 #error unsupported target CPU
diff --git a/include/sysemu/arch_init.h b/include/sysemu/arch_init.h
index be71bca..66ea63f 100644
--- a/include/sysemu/arch_init.h
+++ b/include/sysemu/arch_init.h
@@ -22,6 +22,7 @@ enum {
     QEMU_ARCH_OPENRISC = 8192,
     QEMU_ARCH_UNICORE32 = 0x4000,
     QEMU_ARCH_MOXIE = 0x8000,
+    QEMU_ARCH_XENPV = 0x10000,
 };
 
 extern const uint32_t arch_type;
diff --git a/target-xenpv/Makefile.objs b/target-xenpv/Makefile.objs
new file mode 100644
index 0000000..ae3cad5
--- /dev/null
+++ b/target-xenpv/Makefile.objs
@@ -0,0 +1 @@
+obj-$(CONFIG_TCG) += helper.o translate.o
diff --git a/target-xenpv/cpu-qom.h b/target-xenpv/cpu-qom.h
new file mode 100644
index 0000000..61135a6
--- /dev/null
+++ b/target-xenpv/cpu-qom.h
@@ -0,0 +1,64 @@
+/*
+ * QEMU XenPV CPU
+ *
+ * Copyright (c) 2014 Citrix Systems UK Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+#ifndef QEMU_XENPV_CPU_QOM_H
+#define QEMU_XENPV_CPU_QOM_H
+
+#include "qom/cpu.h"
+#include "cpu.h"
+#include "qapi/error.h"
+
+#define TYPE_XENPV_CPU "xenpv-cpu"
+
+/**
+ * XenPVCPUClass:
+ * @parent_realize: The parent class' realize handler.
+ * @parent_reset: The parent class' reset handler.
+ *
+ */
+typedef struct XenPVCPUClass {
+    /*< private >*/
+    CPUClass parent_class;
+    /*< public >*/
+
+    DeviceRealize parent_realize;
+    void (*parent_reset)(CPUState *cpu);
+} XenPVCPUClass;
+
+/**
+ * XenPVCPU:
+ * @env: #CPUXenPVState
+ *
+ */
+typedef struct XenPVCPU {
+    /*< private >*/
+    CPUState parent_obj;
+    /*< public >*/
+    CPUXenPVState env;
+} XenPVCPU;
+
+static inline XenPVCPU *noarch_env_get_cpu(CPUXenPVState *env)
+{
+    return container_of(env, XenPVCPU, env);
+}
+
+#define ENV_GET_CPU(e) CPU(noarch_env_get_cpu(e))
+
+#endif
+
diff --git a/target-xenpv/cpu.h b/target-xenpv/cpu.h
new file mode 100644
index 0000000..0e65707
--- /dev/null
+++ b/target-xenpv/cpu.h
@@ -0,0 +1,66 @@
+/*
+ * XenPV virtual CPU header
+ *
+ *  Copyright (c) 2014 Citrix Systems UK Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef CPU_XENPV_H
+#define CPU_XENPV_H
+
+#include "config.h"
+#include "qemu-common.h"
+
+#define TARGET_LONG_BITS 64
+#define TARGET_PAGE_BITS 12
+#define TARGET_PHYS_ADDR_SPACE_BITS 52
+#define TARGET_VIRT_ADDR_SPACE_BITS 47
+#define NB_MMU_MODES 1
+
+#define CPUArchState struct CPUXenPVState
+
+#include "exec/cpu-defs.h"
+
+typedef struct CPUXenPVState {
+    CPU_COMMON
+} CPUXenPVState;
+
+#include "cpu-qom.h"
+
+static inline int cpu_mmu_index (CPUXenPVState *env)
+{
+    abort();
+}
+
+static inline void cpu_get_tb_cpu_state(CPUXenPVState *env, target_ulong *pc,
+                                        target_ulong *cs_base, int *flags)
+{
+    abort();
+}
+
+static inline bool cpu_has_work(CPUState *cs)
+{
+    abort();
+    return false;
+}
+
+int cpu_xenpv_exec(CPUXenPVState *s);
+#define cpu_exec cpu_xenpv_exec
+
+#include "exec/cpu-all.h"
+
+#include "exec/exec-all.h"
+
+#endif /* CPU_XENPV_H */
+
diff --git a/target-xenpv/helper.c b/target-xenpv/helper.c
new file mode 100644
index 0000000..225a063
--- /dev/null
+++ b/target-xenpv/helper.c
@@ -0,0 +1,32 @@
+#include "cpu.h"
+#include "helper.h"
+#if !defined(CONFIG_USER_ONLY)
+#include "exec/softmmu_exec.h"
+#endif
+
+#if !defined(CONFIG_USER_ONLY)
+
+#define MMUSUFFIX _mmu
+
+#define SHIFT 0
+#include "exec/softmmu_template.h"
+
+#define SHIFT 1
+#include "exec/softmmu_template.h"
+
+#define SHIFT 2
+#include "exec/softmmu_template.h"
+
+#define SHIFT 3
+#include "exec/softmmu_template.h"
+
+#endif
+
+#if !defined(CONFIG_USER_ONLY)
+void tlb_fill(CPUXenPVState *env, target_ulong addr, int is_write,
+              int mmu_idx, uintptr_t retaddr)
+{
+    abort();
+}
+#endif
+
diff --git a/target-xenpv/helper.h b/target-xenpv/helper.h
new file mode 100644
index 0000000..e69de29
diff --git a/target-xenpv/translate.c b/target-xenpv/translate.c
new file mode 100644
index 0000000..4bc84e5
--- /dev/null
+++ b/target-xenpv/translate.c
@@ -0,0 +1,27 @@
+#include <inttypes.h>
+#include "qemu/host-utils.h"
+#include "cpu.h"
+#include "disas/disas.h"
+#include "tcg-op.h"
+
+#include "helper.h"
+#define GEN_HELPER 1
+#include "helper.h"
+
+void gen_intermediate_code(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void gen_intermediate_code_pc(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void restore_state_to_opc(CPUXenPVState *env, TranslationBlock *tb,
+                          int pc_pos)
+{
+    abort();
+}
+
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:31:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6SnJ-0003WT-JO; Thu, 23 Jan 2014 22:30:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W6SnG-0003WO-UJ
	for xen-devel@lists.xen.org; Thu, 23 Jan 2014 22:30:39 +0000
Received: from [85.158.143.35:17422] by server-3.bemta-4.messagelabs.com id
	15/04-32360-E0891E25; Thu, 23 Jan 2014 22:30:38 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390516237!425141!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4412 invoked from network); 23 Jan 2014 22:30:37 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:30:37 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so2010139lan.10
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 14:30:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=wV4hDFmC3i4ygrBEUZ9OaoYPCVXlSX3sNnZA4KtPLjI=;
	b=nN2dVSnJSUAI/1DGywqn8NNo4kYFSwbJRGV4xDidA4Wo3Hbo+NobAQaigdiEt8YkOG
	9bopWU0Zlec8/KP/R6al3Xi0W4MeyNENHN2xuhiGGJuKIhcUtnFCKznOIFy5fb/HcLEI
	y3O7mI/hmSw160vN9FyptN0vs84bW77IQMRezWt/vHvPU8eFBKaJV/p+njsMEwpIOVXf
	ZKRXCWU6N4GnhiFabCIW6LsW5i9qdLt2dXua4v7IfcmAbhqjZHyW7WrRaLq3tn3EXDD1
	8K5e1bnPSM46YbrJJr2YuLjZ790J4Kgd7VU0iXMs/s3pR4vt+h3aXMkUVEKDvSX56/uo
	9ctw==
X-Gm-Message-State: ALoCoQnbCE9AhA/kPdlgTik88xm648TIbekNOCUlkXbqcN9kSRYB+PLUvOm1YhkScPfekfKCGGNM
X-Received: by 10.152.225.129 with SMTP id rk1mr57325lac.74.1390516236735;
	Thu, 23 Jan 2014 14:30:36 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.199.162 with HTTP; Thu, 23 Jan 2014 14:30:16 -0800 (PST)
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 23 Jan 2014 22:30:16 +0000
Message-ID: <CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23 January 2014 22:16, Wei Liu <wei.liu2@citrix.com> wrote:
> As promised I hacked a prototype based on Paolo's disable TCG series.
> However I coded some stubs for TCG anyway. So this series in principle
> should work with / without Paolo's series.

I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and
"the binary is smaller" isn't IMHO sufficient justification for breaking
QEMU's basic structure of "target-* define target CPUs and we have
a lot of compile time constants which are specific to a CPU which
get defined there". How would you support a bigendian Xen CPU,
just to pick one example of where this falls down?

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:58:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:58:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6TDz-0004EV-3T; Thu, 23 Jan 2014 22:58:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6TDx-0004EQ-JD
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 22:58:13 +0000
Received: from [85.158.143.35:42994] by server-3.bemta-4.messagelabs.com id
	84/3F-32360-48E91E25; Thu, 23 Jan 2014 22:58:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390517890!422754!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1217 invoked from network); 23 Jan 2014 22:58:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:58:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,709,1384300800"; d="scan'208";a="95979070"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:58:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 17:58:09 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6TDs-0001gI-KG;
	Thu, 23 Jan 2014 22:58:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6TDs-0002dB-Hf;
	Thu, 23 Jan 2014 22:58:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24469-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Jan 2014 22:58:08 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24469: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24469 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24469/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 24422

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  85c4e39100037fafc4e4c3e517aaef8180ffdde7
baseline version:
 xen                  12d9655b96c702c7a936cefeec27c7fd19ff6d09

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ branch=xen-4.2-testing
+ revision=85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 85c4e39100037fafc4e4c3e517aaef8180ffdde7:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   12d9655..85c4e39  85c4e39100037fafc4e4c3e517aaef8180ffdde7 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 22:58:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 22:58:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6TDz-0004EV-3T; Thu, 23 Jan 2014 22:58:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6TDx-0004EQ-JD
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 22:58:13 +0000
Received: from [85.158.143.35:42994] by server-3.bemta-4.messagelabs.com id
	84/3F-32360-48E91E25; Thu, 23 Jan 2014 22:58:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390517890!422754!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1217 invoked from network); 23 Jan 2014 22:58:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 22:58:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,709,1384300800"; d="scan'208";a="95979070"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 22:58:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 17:58:09 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6TDs-0001gI-KG;
	Thu, 23 Jan 2014 22:58:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6TDs-0002dB-Hf;
	Thu, 23 Jan 2014 22:58:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24469-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Jan 2014 22:58:08 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24469: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24469 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24469/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10       fail   like 24422

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  85c4e39100037fafc4e4c3e517aaef8180ffdde7
baseline version:
 xen                  12d9655b96c702c7a936cefeec27c7fd19ff6d09

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ branch=xen-4.2-testing
+ revision=85c4e39100037fafc4e4c3e517aaef8180ffdde7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 85c4e39100037fafc4e4c3e517aaef8180ffdde7:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   12d9655..85c4e39  85c4e39100037fafc4e4c3e517aaef8180ffdde7 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 23:02:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 23:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6THi-0004bN-5V; Thu, 23 Jan 2014 23:02:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6THg-0004bH-L9
	for xen-devel@lists.xensource.com; Thu, 23 Jan 2014 23:02:04 +0000
Received: from [193.109.254.147:46942] by server-1.bemta-14.messagelabs.com id
	F0/DE-15600-B6F91E25; Thu, 23 Jan 2014 23:02:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390518121!12773219!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18633 invoked from network); 23 Jan 2014 23:02:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 23:02:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,709,1384300800"; d="scan'208";a="95980482"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 23:02:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 18:02:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6THc-0001hS-2x;
	Thu, 23 Jan 2014 23:02:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6THc-00010b-0p;
	Thu, 23 Jan 2014 23:02:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24470-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 23 Jan 2014 23:02:00 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24470: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24470 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24470/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24434

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  4e54949a7baa5d66f1ab36571d6c974c9319a964
baseline version:
 xen                  7261a3fc6e6101293cff232b9423dd41b140fc0f

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=4e54949a7baa5d66f1ab36571d6c974c9319a964
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 4e54949a7baa5d66f1ab36571d6c974c9319a964
+ branch=xen-4.3-testing
+ revision=4e54949a7baa5d66f1ab36571d6c974c9319a964
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 4e54949a7baa5d66f1ab36571d6c974c9319a964:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   7261a3f..4e54949  4e54949a7baa5d66f1ab36571d6c974c9319a964 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=4e54949a7baa5d66f1ab36571d6c974c9319a964
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 4e54949a7baa5d66f1ab36571d6c974c9319a964
+ branch=xen-4.3-testing
+ revision=4e54949a7baa5d66f1ab36571d6c974c9319a964
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 4e54949a7baa5d66f1ab36571d6c974c9319a964:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   7261a3f..4e54949  4e54949a7baa5d66f1ab36571d6c974c9319a964 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 23:20:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 23:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6TZS-0005OX-6t; Thu, 23 Jan 2014 23:20:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1W6TZQ-0005OS-G8
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 23:20:24 +0000
Received: from [85.158.137.68:27089] by server-11.bemta-3.messagelabs.com id
	62/85-19379-7B3A1E25; Thu, 23 Jan 2014 23:20:23 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390519220!10987101!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21940 invoked from network); 23 Jan 2014 23:20:22 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 23:20:22 -0000
Received: by mail-pa0-f41.google.com with SMTP id fa1so2508799pad.0
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 15:20:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=WuLYGAxR48HlVXejX5rE9RrdkqxTWk03Ni4TXlChjVQ=;
	b=PfrChGe0hGBKFlDtXVN3yR0nUTgES2PpwvQfjpivtf1moS7r9UZqysts1BjOTyJ6U3
	D4k/1J0K6eCSivQ/IC5GrrmMT51mt52UwxNyGY/YwOqVYYup+H1GByBQXfIRuWBAUgST
	csCmHcCXtdJP/t0eQN0FIB+FjzWwVTx1dcM9RQI+NUueS9pmTCEeONemdkfaPpKfRrT6
	mpti2gG87YjUu561nKBfCoGCJKgPA8TIjHqICtigC6ryD8JWVt3LLXlmj8Ms+g81mjg6
	Cm774TS3uNWcf2ZIi8uP1IS2Mw7HbIBIqghSdiVUeRuvxE6T/ZeIZHMWZY5WY+zQw5Fh
	8M7A==
X-Gm-Message-State: ALoCoQn2AYsGW1XYsSG84BnhaeBCJDQkJTwJYqNjtX/JYxIERk2cEaKJoHSZxmTBhVwbVXrpl35C
X-Received: by 10.66.175.4 with SMTP id bw4mr10981375pac.56.1390519219532;
	Thu, 23 Jan 2014 15:20:19 -0800 (PST)
Received: from orcus.uplinklabs.net (c-71-231-56-34.hsd1.wa.comcast.net.
	[71.231.56.34]) by mx.google.com with ESMTPSA id
	iu7sm42282227pbc.45.2014.01.23.15.20.17 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 15:20:18 -0800 (PST)
Date: Thu, 23 Jan 2014 15:20:12 -0800
From: Steven Noonan <steven@uplinklabs.net>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140123232012.GA23046@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> > On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
> >> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
> >> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
> >> >>
> >> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
> >> >>>
> >> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
> >> >>>>
> >> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
> >> >>>>>
> >> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
> >> >>>>> <gregkh@linuxfoundation.org> wrote:
> >> >>>
> >> >>>
> >> >>> Adding extra folks to the party.
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Odds are this also shows up in 3.13, right?
> >> >>>>
> >> >>>>
> >> >>>> Reproduced using 3.13 on the PV guest:
> >> >>>>
> >> >>>>         [  368.756763] BUG: Bad page map in process mp
> >> >>>> pte:80000004a67c6165 pmd:e9b706067
> >> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
> >> >>>> mapping:          (null) index:0x0
> >> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
> >> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
> >> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
> >> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
> >> >>>> #1
> >> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
> >> >>>> ffffffff814d8748 00007fd1388b7000
> >> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
> >> >>>> 0000000000000000 0000000000000000
> >> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
> >> >>>> 00007fd1388b8000 ffff880e9eaf3e30
> >> >>>>         [  368.756815] Call Trace:
> >> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
> >> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >> >>>>         [  368.756837]  [<ffffffff8116eae3>]
> >> >>>> unmap_single_vma+0x583/0x890
> >> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
> >> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
> >> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
> >> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
> >> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
> >> >>>>         [  368.756869]  [<ffffffff814e70ed>]
> >> >>>> system_call_fastpath+0x1a/0x1f
> >> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
> >> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
> >> >>>> idx:0 val:-1
> >> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
> >> >>>> idx:1 val:1
> >> >>>>
> >> >>>>>
> >> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
> >> >>>>> interest in setting one up).. And I have a suspicion that it might not
> >> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
> >> >>>>>
> >> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
> >> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
> >> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
> >> >>>>> confused.
> >> >>>>>
> >> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
> >> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
> >> >>>>> _PAGE_PRESENT.
> >> >>>>>
> >> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
> >> >>>>> Putting Steven's test-case here as an attachment for Andrea, maybe
> >> >>>>> that makes him go "Ahh, yes, silly case".
> >> >>>>>
> >> >>>>> Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
> >> >>>>>
> >> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
> >> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
> >> >>>>> attached test-case (but apparently only under Xen PV). There it
> >> >>>>> apparently causes a "BUG: Bad page map .." error.
> >> >>>
> >> >>>
> >> >>> I *think* it is due to the fact that pmd_numa and pte_numa are getting the
> >> >>> _raw_
> >> >>> value of PMDs and PTEs. That is - it does not use the pvops interface
> >> >>> and instead reads the values directly from the page-table. Since the
> >> >>> page-table is also manipulated by the hypervisor - there are certain
> >> >>> flags it also sets to do its business. It might be that it uses
> >> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
> >> >>> pte_flags that would invoke the pvops interface.
> >> >>>
> >> >>> Elena, Dariof and George, you guys had been looking at this a bit deeper
> >> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
> 
> It does use _PAGE_GLOBAL for guest user pages.
> 
> >> >>>
> >> >>> This not-compiled-totally-bad-patch might shed some light on what I was
> >> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
> >> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
> >> >>> for that).
> >> >>
> >> >>
> >> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
> >> >> still able to repro the issue:
> >>
> >> Steven, do you use numa=fake on boot cmd line for pv guest?
> >>
> >> I had similar issue on pv guest. Let me check if the fix that resolved
> >> this for me will help with 3.13.
> >
> > Nope:
> >
> > # cat /proc/cmdline
> > root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
> 
> >
> >>
> >> >
> >> >
> >> > Maybe this one is also related to this BUG here (cc'ed people investigating
> >> > this one) ...
> >> >
> >> >   https://lkml.org/lkml/2014/1/10/427
> >> >
> >> > ... not sure, though.
> >> >
> >> >
> >> >>         [  346.374929] BUG: Bad page map in process mp
> >> >> pte:80000004ae928065 pmd:e993f9067
> >> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
> >> >> (null) index:0x0
> >> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
> >> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
> >> >> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
> >> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
> >> >> #1
> >> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
> >> >> 00007f06a9bbb000
> >> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
> >> >> 0000000000000000
> >> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
> >> >> ffff880e991a3e30
> >> >>         [  346.374979] Call Trace:
> >> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
> >> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> >> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
> >> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
> >> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
> >> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
> >> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
> >> >>         [  346.375034]  [<ffffffff814e712d>]
> >> >> system_call_fastpath+0x1a/0x1f
> >> >>         [  346.375037] Disabling lock debugging due to kernel taint
> >> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
> >> >> idx:0 val:-1
> >> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
> >> >> idx:1 val:1
> >> >>
> >> >> This dump doesn't look dramatically different, either.
> >> >>
> >> >>>
> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
> >> >>> turned on?
> >> >>
> >> >>
> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> >> >> mean not enabled at runtime?
> >> >>
> >> >> [1]
> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
> >>
> >>
> >>
> >> --
> >> Elena
> 
> I was able to reproduce this consistently, also with the latest mm
> patches from yesterday.
> Can you please try this:
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..76dcf96 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 unsigned long pfn = mfn_to_pfn(mfn);
> 
> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
> 
>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 pteval_t flags = val & PTE_FLAGS_MASK;
>                 unsigned long mfn;

Thanks Elena, I just tested that and it does un-break things.

- Steven

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 23:20:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 23:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6TZS-0005OX-6t; Thu, 23 Jan 2014 23:20:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1W6TZQ-0005OS-G8
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 23:20:24 +0000
Received: from [85.158.137.68:27089] by server-11.bemta-3.messagelabs.com id
	62/85-19379-7B3A1E25; Thu, 23 Jan 2014 23:20:23 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390519220!10987101!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21940 invoked from network); 23 Jan 2014 23:20:22 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 23:20:22 -0000
Received: by mail-pa0-f41.google.com with SMTP id fa1so2508799pad.0
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 15:20:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=WuLYGAxR48HlVXejX5rE9RrdkqxTWk03Ni4TXlChjVQ=;
	b=PfrChGe0hGBKFlDtXVN3yR0nUTgES2PpwvQfjpivtf1moS7r9UZqysts1BjOTyJ6U3
	D4k/1J0K6eCSivQ/IC5GrrmMT51mt52UwxNyGY/YwOqVYYup+H1GByBQXfIRuWBAUgST
	csCmHcCXtdJP/t0eQN0FIB+FjzWwVTx1dcM9RQI+NUueS9pmTCEeONemdkfaPpKfRrT6
	mpti2gG87YjUu561nKBfCoGCJKgPA8TIjHqICtigC6ryD8JWVt3LLXlmj8Ms+g81mjg6
	Cm774TS3uNWcf2ZIi8uP1IS2Mw7HbIBIqghSdiVUeRuvxE6T/ZeIZHMWZY5WY+zQw5Fh
	8M7A==
X-Gm-Message-State: ALoCoQn2AYsGW1XYsSG84BnhaeBCJDQkJTwJYqNjtX/JYxIERk2cEaKJoHSZxmTBhVwbVXrpl35C
X-Received: by 10.66.175.4 with SMTP id bw4mr10981375pac.56.1390519219532;
	Thu, 23 Jan 2014 15:20:19 -0800 (PST)
Received: from orcus.uplinklabs.net (c-71-231-56-34.hsd1.wa.comcast.net.
	[71.231.56.34]) by mx.google.com with ESMTPSA id
	iu7sm42282227pbc.45.2014.01.23.15.20.17 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 15:20:18 -0800 (PST)
Date: Thu, 23 Jan 2014 15:20:12 -0800
From: Steven Noonan <steven@uplinklabs.net>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140123232012.GA23046@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> > On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
> >> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
> >> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
> >> >>
> >> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
> >> >>>
> >> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
> >> >>>>
> >> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
> >> >>>>>
> >> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
> >> >>>>> <gregkh@linuxfoundation.org> wrote:
> >> >>>
> >> >>>
> >> >>> Adding extra folks to the party.
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Odds are this also shows up in 3.13, right?
> >> >>>>
> >> >>>>
> >> >>>> Reproduced using 3.13 on the PV guest:
> >> >>>>
> >> >>>>         [  368.756763] BUG: Bad page map in process mp
> >> >>>> pte:80000004a67c6165 pmd:e9b706067
> >> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
> >> >>>> mapping:          (null) index:0x0
> >> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
> >> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
> >> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
> >> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
> >> >>>> #1
> >> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
> >> >>>> ffffffff814d8748 00007fd1388b7000
> >> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
> >> >>>> 0000000000000000 0000000000000000
> >> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
> >> >>>> 00007fd1388b8000 ffff880e9eaf3e30
> >> >>>>         [  368.756815] Call Trace:
> >> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
> >> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >> >>>>         [  368.756837]  [<ffffffff8116eae3>]
> >> >>>> unmap_single_vma+0x583/0x890
> >> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
> >> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
> >> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
> >> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
> >> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
> >> >>>>         [  368.756869]  [<ffffffff814e70ed>]
> >> >>>> system_call_fastpath+0x1a/0x1f
> >> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
> >> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
> >> >>>> idx:0 val:-1
> >> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
> >> >>>> idx:1 val:1
> >> >>>>
> >> >>>>>
> >> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
> >> >>>>> interest in setting one up).. And I have a suspicion that it might not
> >> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
> >> >>>>>
> >> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
> >> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
> >> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
> >> >>>>> confused.
> >> >>>>>
> >> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
> >> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
> >> >>>>> _PAGE_PRESENT.
> >> >>>>>
> >> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
> >> >>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
> >> >>>>> that makes him go "Ahh, yes, silly case".
> >> >>>>>
> >> >>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
> >> >>>>>
> >> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
> >> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
> >> >>>>> attached test-case (but apparently only under Xen PV). There it
> >> >>>>> apparently causes a "BUG: Bad page map .." error.
> >> >>>
> >> >>>
> >> >>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the
> >> >>> _raw_
> >> >>> value of PMDs and PTEs. That is - it does not use the pvops interface
> >> >>> and instead reads the values directly from the page-table. Since the
> >> >>> page-table is also manipulated by the hypervisor - there are certain
> >> >>> flags it also sets to do its business. It might be that it uses
> >> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
> >> >>> pte_flags that would invoke the pvops interface.
> >> >>>
> >> >>> Elena, Dariof and George, you guys had been looking at this a bit deeper
> >> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
> 
> It does use _PAGE_GLOBAL for guest user pages
> 
> >> >>>
> >> >>> This not-compiled-totally-bad-patch might shed some light on what I was
> >> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
> >> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
> >> >>> for that).
> >> >>
> >> >>
> >> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
> >> >> still able to repro the issue:
> >>
> >> Steven, do you use numa=fake on boot cmd line for pv guest?
> >>
> >> I had similar issue on pv guest. Let me check if the fix that resolved
> >> this for me will help with 3.13.
> >
> > Nope:
> >
> > # cat /proc/cmdline
> > root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
> 
> >
> >>
> >> >
> >> >
> >> > Maybe this one is also related to this BUG here (cc'ed people investigating
> >> > this one) ...
> >> >
> >> >   https://lkml.org/lkml/2014/1/10/427
> >> >
> >> > ... not sure, though.
> >> >
> >> >
> >> >>         [  346.374929] BUG: Bad page map in process mp pte:80000004ae928065 pmd:e993f9067
> >> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:          (null) index:0x0
> >> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
> >> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
> >> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
> >> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
> >> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
> >> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
> >> >>         [  346.374979] Call Trace:
> >> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
> >> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
> >> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
> >> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
> >> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
> >> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
> >> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
> >> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
> >> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
> >> >>         [  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
> >> >>         [  346.375037] Disabling lock debugging due to kernel taint
> >> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
> >> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1
> >> >>
> >> >> This dump doesn't look dramatically different, either.
> >> >>
> >> >>>
> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
> >> >>> turned on?
> >> >>
> >> >>
> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> >> >> mean not enabled at runtime?
> >> >>
> >> >> [1]
> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
> >>
> >>
> >>
> >> --
> >> Elena
> 
> I was able to reproduce this consistently, also with the latest mm
> patches from yesterday.
> Can you please try this:
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..76dcf96 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 unsigned long pfn = mfn_to_pfn(mfn);
> 
> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
> 
>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 pteval_t flags = val & PTE_FLAGS_MASK;
>                 unsigned long mfn;

Thanks Elena, I just tested that and it does un-break things.

- Steven

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 23 23:35:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jan 2014 23:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Tnn-0005mj-T9; Thu, 23 Jan 2014 23:35:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6Tnm-0005me-3F
	for xen-devel@lists.xenproject.org; Thu, 23 Jan 2014 23:35:14 +0000
Received: from [85.158.143.35:51391] by server-3.bemta-4.messagelabs.com id
	F6/4F-32360-137A1E25; Thu, 23 Jan 2014 23:35:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390520110!430374!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26082 invoked from network); 23 Jan 2014 23:35:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Jan 2014 23:35:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,709,1384300800"; d="scan'208";a="95990490"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Jan 2014 23:35:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 23 Jan 2014 18:35:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6Tng-0001id-Qo;
	Thu, 23 Jan 2014 23:35:08 +0000
Date: Thu, 23 Jan 2014 23:34:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap, as that is the only possible return value
> there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>



>  arch/x86/include/asm/xen/page.h     |    5 +-
>  arch/x86/xen/p2m.c                  |   17 +------
>  drivers/block/xen-blkback/blkback.c |   15 +++---
>  drivers/xen/gntdev.c                |   13 +++--
>  drivers/xen/grant-table.c           |   89 ++++++++++++++++++++++++++++++-----
>  include/xen/grant_table.h           |    8 +++-
>  6 files changed, 101 insertions(+), 46 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..ce47243 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -52,7 +52,8 @@ extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  extern int m2p_add_override(unsigned long mfn, struct page *page,
>  			    struct gnttab_map_grant_ref *kmap_op);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +122,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..bd4724b 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -888,13 +888,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -933,19 +926,16 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
>  	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
>  	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
>  
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> @@ -959,10 +949,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..e652c0e 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..e4ddfeb 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
>  		if (ret)
>  			goto out;
>  	}
> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..e4ddfeb 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
>  		if (ret)
>  			goto out;
>  	}
> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 00:23:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 00:23:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6UY9-0007Ty-2z; Fri, 24 Jan 2014 00:23:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W6UY7-0007Tq-Hf
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 00:23:07 +0000
Received: from [85.158.139.211:8950] by server-10.bemta-5.messagelabs.com id
	EE/54-01405-A62B1E25; Fri, 24 Jan 2014 00:23:06 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390522984!11629141!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22233 invoked from network); 24 Jan 2014 00:23:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 00:23:06 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0O0Lxn5032412
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 00:22:00 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0O0LwbV024363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 00:21:58 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0O0Lw1h017839; Fri, 24 Jan 2014 00:21:58 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Jan 2014 16:21:57 -0800
Date: Thu, 23 Jan 2014 16:21:56 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140123162156.3fc545b7@mantra.us.oracle.com>
In-Reply-To: <alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xensource.com,
	Qin Li <qin.l.li@oracle.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 16 Jan 2014 12:17:59 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> CC'ing xen-devel and Roger.
> 
> On Wed, 15 Jan 2014, Qin Li wrote:
> > Hi Stefano,
> > 
> > How do you do?
> > 
> > Currently, Solaris only works as PV-on-HVM guest on Xen, but
> > recently,
> 
> Actually now you have a better way of running Solaris on Xen: PVH.
> 
> http://wiki.xen.org/wiki/Xen_Overview#PV_in_an_HVM_Container_.28PVH.29_-_New_in_Xen_4.4
> 
> Roger already ported FreeBSD to Xen as a PVH guest:
> 
> http://marc.info/?l=freebsd-current&m=138971161228874&w=2


Qin,

I will be giving a talk internally on PVH in the next couple of weeks
to the PMs and others, so if you are interested, ping me offline.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 01:13:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 01:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6VKG-0004Av-Mw; Fri, 24 Jan 2014 01:12:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W6Uvo-00083R-MD
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 00:47:37 +0000
Received: from [85.158.143.35:44854] by server-1.bemta-4.messagelabs.com id
	75/D1-02132-728B1E25; Fri, 24 Jan 2014 00:47:35 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390524452!438454!1
X-Originating-IP: [98.139.212.155]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12844 invoked from network); 24 Jan 2014 00:47:33 -0000
Received: from nm25-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm25-vm1.bullet.mail.bf1.yahoo.com) (98.139.212.155)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 00:47:33 -0000
Received: from [66.196.81.174] by nm25.bullet.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 00:47:32 -0000
Received: from [98.139.211.202] by tm20.bullet.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 00:47:32 -0000
Received: from [127.0.0.1] by smtp211.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 00:47:32 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390524452; bh=cVCaBYbmAofstoz0LKAjYxpZt3wX6Sm8vvj7TRkby0w=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=188YaVFByAEEOI1T3znlcmHS9eCevU+NPXKG7RjYFxA7WTt/F1Nou7WwvdbSzlua4eaJfuIoDzUIxpJwIRPudTV10x32f/2XTyaMBRBw7aYS4vRs/f9pIZGycgUmWf5Kn2VVxQDudFQep44NlILYbPmYFuvrhJzqgbcIRn6pA/M=
X-Yahoo-Newman-Id: 695246.35441.bm@smtp211.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 2vZDWz4VM1kHgk97.7KweFEN32NvBRMWFLSOw2KXcvsYAQX
	6CiyCym50.a7h.bbUkdngQEhHj8dyi0gK8ivgQf0gGs8AA.v7JX0N6JN9FO7
	Ts30ZmaMzd3_aucqufTVU9bLqZJ.P1IpjhZa4Kc8FNr7OhsdvSSuJVKllZGO
	B7xdNrLx6ypXpsZRWRUbFW0E4ixGeuok7iXYafj4qMfIL0rI5LsrqyvOCFou
	jCFHMmvpyCmOiakTvt7eqnZurMRwh2xhbbvMp4f9ueCd7K8KbV.TEIcugtQZ
	0pRG_TqTQBCx.Y8.WcN1oGc3uQD6akX8vte5t3237ZvdcsxaZinLbtw3ApMR
	LzysOKEOpfYJas.MiU47xHnLVnyHgHXHvkFwxjMwXLjajsVYeDR4Fo1sFJGD
	QZYWztDfCcdWpn3BjWtwijJL5co_qERvWdX_Fzg6dJ2kIZQ_.RT0cu0ddWvw
	69SZ4S7UnkVGfqtnhCRmdRBToVmR8OV8rKbOpWarRNDS1..AfLDzUcx21Hg0
	DcQkJZM2nLzoxu.KrgnLlG0i4kYyntizjYETrgxAS83FoAbT4UUqUag9PUsY -
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp211.mail.bf1.yahoo.com with SMTP; 23 Jan 2014 16:47:32 -0800 PST
Message-ID: <1390524450.2281.12.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 23 Jan 2014 17:47:30 -0700
In-Reply-To: <52E1083602000078001161D5@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-uaORwp1LPdxPrNvTCTyZ"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Fri, 24 Jan 2014 01:12:51 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-uaORwp1LPdxPrNvTCTyZ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:

> But you didn't turn on interrupt remapping, or it got forcibly
> disabled:
> 
> [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
> AMD-Vi: Disabling interrupt remapping


Jan,

If there is not a specific switch required to enable interrupt remapping
while booting 3.12.7 on bare hardware, then it was forcibly disabled.
Would boot logs from xen 4.1.3 be helpful?  If so, they are attached.
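[Archive note: for reference, Xen does expose an explicit command-line toggle for interrupt remapping. A sketch of a GRUB2 entry that forces it off is below; the `iommu=` sub-option spellings and the paths are illustrative and should be checked against the command-line documentation for the Xen version actually in use.]

```shell
# grub.cfg fragment (sketch -- verify options against your Xen version)
multiboot /boot/xen.gz dom0_mem=2048M iommu=debug,verbose,no-intremap
module /boot/vmlinuz-3.12.7 ...
```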

> If
> the kernel responds to that mentioned firmware bug by forcing
> interrupt remapping off, maybe we would have to do the same...
> 
> Jan



That would be better than xen failing to boot.


Thanks,

-Eric

--=-uaORwp1LPdxPrNvTCTyZ
Content-Disposition: attachment; filename="xl-dmesg413.txt"
Content-Type: text/plain; name="xl-dmesg413.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 __  __            _  _    _   _____   ____     __      _ _____ 
 \ \/ /___ _ __   | || |  / | |___ /  |___ \   / _| ___/ |___  |
  \  // _ \ '_ \  | || |_ | |   |_ \ __ __) | | |_ / __| |  / / 
  /  \  __/ | | | |__   _|| |_ ___) |__/ __/ _|  _| (__| | / /  
 /_/\_\___|_| |_|    |_|(_)_(_)____/  |_____(_)_|  \___|_|/_/   
                                                                
(XEN) Xen version 4.1.3 (mockbuild@phx2.fedoraproject.org) (gcc version 4.7.0 20120507 (Red Hat 4.7.0-5) (GCC) ) Fri Aug 10 23:58:46 UTC 2012
(XEN) Latest ChangeSet: unavailable
(XEN) Bootloader: GRUB 2.00~beta4
(XEN) Command line: dom0_mem=2048m dom0_max_vcpus=1 iommu=debug,verbos
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) Domain heap initialised
(XEN) Processor #0 0:10 APIC version 16
(XEN) Processor #1 0:10 APIC version 16
(XEN) Processor #2 0:10 APIC version 16
(XEN) Processor #3 0:10 APIC version 16
(XEN) Processor #4 0:10 APIC version 16
(XEN) Processor #5 0:10 APIC version 16
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) Table is not found!
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.967 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) AMD-Vi: Enabling global vector map
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer appears to have unexpectedly wrapped 10 or more times.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) do_IRQ: 1.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 2.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 3.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 4.231 No irq handler for vector (irq -1)
(XEN) Brought up 6 CPUs
(XEN) do_IRQ: 5.231 No irq handler for vector (irq -1)
(XEN) Xenoprofile: AMD IBS detected (0x0000001f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23b7000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (496457 pages to be allocated)
(XEN)  Init. ramdisk: 000000024d349000->000000024ffff400
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823b7000
(XEN)  Init. ramdisk: ffffffff823b7000->ffffffff8506d400
(XEN)  Phys-Mach map: ffffffff8506e000->ffffffff8546e000
(XEN)  Start info:    ffffffff8546e000->ffffffff8546e4b4
(XEN)  Page tables:   ffffffff8546f000->ffffffff8549e000
(XEN)  Boot stack:    ffffffff8549e000->ffffffff8549f000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81cf4200
(XEN) Dom0 has maximum 1 VCPUs
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 220kB init memory.
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000fffd6a133789 to 0x000000000000abcd.
(XEN) physdev.c:164: dom0: wrong map_pirq type 3
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88006bd17000.

--=-uaORwp1LPdxPrNvTCTyZ
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-uaORwp1LPdxPrNvTCTyZ--



Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:

> But you didn't turn on interrupt remapping, or it got forcibly
> disabled:
> 
> [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
> AMD-Vi: Disabling interrupt remapping


Jan,

If there is not a specific switch required to enable interrupt remapping
while booting 3.12.7 on bare hardware, then it was forcibly disabled.
Would boot logs from xen 4.1.3 be helpful?  If so, they are attached.

> If
> the kernel responds to that mentioned firmware bug by forcing
> interrupt remapping off, maybe we would have to do the same...
> 
> Jan



That would be better than xen failing to boot.


Thanks,

-Eric

--=-uaORwp1LPdxPrNvTCTyZ
Content-Disposition: attachment; filename="xl-dmesg413.txt"
Content-Type: text/plain; name="xl-dmesg413.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 __  __            _  _    _   _____   ____     __      _ _____ 
 \ \/ /___ _ __   | || |  / | |___ /  |___ \   / _| ___/ |___  |
  \  // _ \ '_ \  | || |_ | |   |_ \ __ __) | | |_ / __| |  / / 
  /  \  __/ | | | |__   _|| |_ ___) |__/ __/ _|  _| (__| | / /  
 /_/\_\___|_| |_|    |_|(_)_(_)____/  |_____(_)_|  \___|_|/_/   
                                                                
(XEN) Xen version 4.1.3 (mockbuild@phx2.fedoraproject.org) (gcc version 4.7.0 20120507 (Red Hat 4.7.0-5) (GCC) ) Fri Aug 10 23:58:46 UTC 2012
(XEN) Latest ChangeSet: unavailable
(XEN) Bootloader: GRUB 2.00~beta4
(XEN) Command line: dom0_mem=2048m dom0_max_vcpus=1 iommu=debug,verbos
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) Domain heap initialised
(XEN) Processor #0 0:10 APIC version 16
(XEN) Processor #1 0:10 APIC version 16
(XEN) Processor #2 0:10 APIC version 16
(XEN) Processor #3 0:10 APIC version 16
(XEN) Processor #4 0:10 APIC version 16
(XEN) Processor #5 0:10 APIC version 16
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) Table is not found!
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.967 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) AMD-Vi: Enabling global vector map
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer appears to have unexpectedly wrapped 10 or more times.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) do_IRQ: 1.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 2.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 3.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 4.231 No irq handler for vector (irq -1)
(XEN) Brought up 6 CPUs
(XEN) do_IRQ: 5.231 No irq handler for vector (irq -1)
(XEN) Xenoprofile: AMD IBS detected (0x0000001f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23b7000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (496457 pages to be allocated)
(XEN)  Init. ramdisk: 000000024d349000->000000024ffff400
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823b7000
(XEN)  Init. ramdisk: ffffffff823b7000->ffffffff8506d400
(XEN)  Phys-Mach map: ffffffff8506e000->ffffffff8546e000
(XEN)  Start info:    ffffffff8546e000->ffffffff8546e4b4
(XEN)  Page tables:   ffffffff8546f000->ffffffff8549e000
(XEN)  Boot stack:    ffffffff8549e000->ffffffff8549f000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81cf4200
(XEN) Dom0 has maximum 1 VCPUs
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 220kB init memory.
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000fffd6a133789 to 0x000000000000abcd.
(XEN) physdev.c:164: dom0: wrong map_pirq type 3
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88006bd17000.

--=-uaORwp1LPdxPrNvTCTyZ
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-uaORwp1LPdxPrNvTCTyZ--



From xen-devel-bounces@lists.xen.org Fri Jan 24 01:41:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 01:41:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6VlZ-0004z7-1J; Fri, 24 Jan 2014 01:41:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1W6VlX-0004z2-70
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 01:41:03 +0000
Received: from [193.109.254.147:4435] by server-14.bemta-14.messagelabs.com id
	AA/C7-12628-EA4C1E25; Fri, 24 Jan 2014 01:41:02 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390527659!12771689!1
X-Originating-IP: [209.85.220.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17660 invoked from network); 24 Jan 2014 01:41:01 -0000
Received: from mail-pa0-f50.google.com (HELO mail-pa0-f50.google.com)
	(209.85.220.50)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 01:41:01 -0000
Received: by mail-pa0-f50.google.com with SMTP id kp14so2630795pab.9
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 17:40:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:subject:message-id:mime-version:content-type
	:content-disposition:user-agent;
	bh=88xNKAmN5GmQ2HJ7wpgE1ty2NX7Wl6Q4iptOT2TusjY=;
	b=Ez7NkX1CeBRSzWMlHkczlKAs3BrQV5JQaTG9ozDiN1BUVCBMkHXDuWzJs6eQCvgsoZ
	NWPuNBelAHAOBRCZ00Z2IgetexBS3uy/vHyjdllzZ9f+3hZZS+tPfj09XE+3OosEASdT
	vEJCAG74RspNhWoXp+HnaRTddNsc+hHAOnY/1M8haitY2xQlqRV7jVDFaHbmn/bZtJqB
	HCm9ddoachlyP/yL7QvHI1VxOE4s2tNRZcqI8le11sUqRfidLV+QgUXNq6hztg3ycuGd
	tuuX/72+vE62JAgRgKFWZJLb1RhJ5U5wnIMCBAqVG05WCK9xVv4iOoyH9WSjoFvaH+hT
	AXng==
X-Received: by 10.68.244.103 with SMTP id xf7mr11461125pbc.50.1390527659095;
	Thu, 23 Jan 2014 17:40:59 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	fm1sm68198387pab.22.2014.01.23.17.40.55 for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 17:40:57 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Thu, 23 Jan 2014 17:40:54 -0800
Date: Thu, 23 Jan 2014 17:40:54 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: xen-devel@lists.xen.org
Message-ID: <20140124014050.GA26624@garbanzo.do-not-panic.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: [Xen-devel] Removing building qemu from xen ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0535858599269749418=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============0535858599269749418==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="cNdxnHkX5QqsyA0e"
Content-Disposition: inline


--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

I see xen by default builds qemu on x86 / x86_64, placing the build in
tools/qemu-xen-dir, and regardless of 32/64-bit architecture the
resulting binary is called qemu-system-i386. I see support was also
recently added to avoid having to build qemu and to let distributions
specify their own binary path for qemu. That's great! I see, though,
that by default xen builds from a different repo version of qemu:

git://xenbits.xen.org/qemu-upstream-unstable.git
tag: qemu-xen-4.4.0-rc1

At first glance I am not seeing much delta between the two; at least
the latest commit of qemu-upstream-unstable.git is already upstream in
the *real* qemu tree [0]. This raises the question of whether xen
needs to keep the qemu build infrastructure as part of its own build
system. Can we rip it out?

The same question applies to SeaBIOS but I haven't looked at that yet.

[0] git://git.qemu.org/qemu.git

  Luis

--cNdxnHkX5QqsyA0e
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAEBAgAGBQJS4cSiAAoJEHoaE3+fqtqovB0P/3IvDt0LRmd3IenyZNdu+oVF
ljR9/BoWIwhF2id+Y0IMyezodZ2CO84LQDkZ+/yVnCHL8dbnTCbDVsEv/536mtaj
epbD4r8DPKhBAFeURzwB2w1Jeu6cxUBOYPwc1XdhNoq+FRV8N2cmCtB2PV/OvQ05
BS0NSYPCwCBDyUUMBiDxIF6Uf5NDc3pNR21PWApMyetP3k4FvEL0EXgKSMC0aD8Z
+qBUkRo1sP4A1DkTZUBlmI33MrWIzZYHjiW7I7IB+20ZN2yTxfNy4ut2EHGDOCNh
nPHCyUF7zONc7QDqg3CgTSiffP2XjTlrVrg3hjLmLWCH1J7tiO3f0Wiab4inMJUC
L98jTAEYF7g9MMDmEAfYTHZI1656++5c9odmADCUXQVk1r1sOq0mk7vfW7u5VOxE
2CNilgHh/QLwuW972mebC7fP7h3gahMGcJmLiJWjiZ1YZpo3aAEMp15Xw/ZqImAZ
cNYYtH4JwfhO0HufAhFLUa5ug/iYQe7IomVm899GS6JvBkOWQpzIK9yzRPzUnc83
g8AIADOVpwIAKT71dBJHX3ngy4KQWX9Lh+5NKsJkZ+3aFmlgaBppymzjS2slY/75
bvUA/e+MvWM7tmuUr6sTN1Thw7qjjDOxoFdL3VCLM3U3SxBiFlX1jZR5KCgJUCY/
K+RdUaL/97QVTlSucW1C
=QX/Z
-----END PGP SIGNATURE-----

--cNdxnHkX5QqsyA0e--


--===============0535858599269749418==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0535858599269749418==--


From xen-devel-bounces@lists.xen.org Fri Jan 24 01:44:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 01:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6VoW-000564-PK; Fri, 24 Jan 2014 01:44:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6VoV-00055s-LR
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 01:44:07 +0000
Received: from [85.158.137.68:10161] by server-15.bemta-3.messagelabs.com id
	67/D0-11556-665C1E25; Fri, 24 Jan 2014 01:44:06 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390527844!9840958!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19439 invoked from network); 24 Jan 2014 01:44:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 01:44:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,709,1384300800"; d="scan'208";a="96020806"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 01:44:04 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 23 Jan 2014 20:44:03 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Fri, 24 Jan 2014
	02:44:02 +0100
Message-ID: <52E1C563.7050406@citrix.com>
Date: Fri, 24 Jan 2014 01:44:03 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <ehouby@yahoo.com>, Jan Beulich <JBeulich@suse.com>
References: <1390244796.2322.6.camel@astar.houby.net>	<52DE4CA1020000780011547D@nat28.tlf.novell.com>	<1390297542.20516.90.camel@kazak.uk.xensource.com>	<000001cf1695$0c960450$25c20cf0$@yahoo.com>	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>	<1390455621.2415.56.camel@astar.houby.net>	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
In-Reply-To: <1390524450.2281.12.camel@astar.houby.net>
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/2014 00:47, Eric Houby wrote:
> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
>
>> But you didn't turn on interrupt remapping, or it got forcibly
>> disabled:
>>
>> [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
>> AMD-Vi: Disabling interrupt remapping
>
> Jan,
>
> If there is not a specific switch required to enable interrupt remapping
> while booting 3.12.7 on bare hardware, then it was forcibly disabled.
> Would boot logs from xen 4.1.3 be helpful?  If so, they are attached.
>
>> If
>> the kernel responds to that mentioned firmware bug by forcing
>> interrupt remapping off, maybe we would have to do the same...
>>
>> Jan
>
>
> That would be better than xen failing to boot.
>
>
> Thanks,
>
> -Eric

You probably want "iommu=debug,verbose" rather than "verbos".

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 03:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 03:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6XPh-0007yU-H4; Fri, 24 Jan 2014 03:26:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W6XPg-0007yP-M9
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 03:26:37 +0000
From xen-devel-bounces@lists.xen.org Fri Jan 24 03:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 03:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6XPh-0007yU-H4; Fri, 24 Jan 2014 03:26:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W6XPg-0007yP-M9
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 03:26:37 +0000
Received: from [85.158.137.68:51201] by server-9.bemta-3.messagelabs.com id
	D9/A5-13104-B6DD1E25; Fri, 24 Jan 2014 03:26:35 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390533983!7366984!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26160 invoked from network); 24 Jan 2014 03:26:34 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 03:26:34 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0O3QJsk018495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 03:26:20 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0O3QIJT024356
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 03:26:18 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0O3QHQ2004910; Fri, 24 Jan 2014 03:26:18 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 23 Jan 2014 19:26:17 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Fri, 24 Jan 2014 11:28:15 +0800
Message-Id: <1390534095-5097-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: annie.li@oracle.com, david.vrabel@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net-next v4] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

This patch removes the grant transfer release code from netfront and uses
gnttab_end_foreign_access to end grant access, since
gnttab_end_foreign_access_ref may fail when the grant entry is
still in use for reading or writing.

* Clean up the grant transfer code kept from the old (2.6.18) netfront, which
granted pages for access/map and transfer. Grant transfer is deprecated in the
current netfront, so remove the corresponding release code for transfer.

* Release grant access (through gnttab_end_foreign_access) and the skb for the
tx/rx paths; use get_page to ensure the page is released only when grant access
has completed successfully.

Xen-blkfront/xen-tpmfront/xen-pcifront have a similar issue; patches
for them will be created separately.

V4: Revert put_page in gnttab_end_foreign_access, and keep the netfront change
in a single patch.

V3: Changes as suggested by David Vrabel; ensure pages are not freed until
grant access is ended.

V2: Improve patch comments.

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   93 ++++++++++++++-----------------------------
 1 files changed, 30 insertions(+), 63 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d7bee8a..a22adaa 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -117,6 +117,7 @@ struct netfront_info {
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
-
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
@@ -1439,8 +1403,11 @@ static int netfront_probe(struct xenbus_device *dev,
 static void xennet_end_access(int ref, void *page)
 {
 	/* This frees the page as a side-effect */
-	if (ref != GRANT_INVALID_REF)
+	if (ref != GRANT_INVALID_REF) {
+		get_page(virt_to_page(page));
 		gnttab_end_foreign_access(ref, 0, (unsigned long)page);
+		free_page((unsigned long)page);
+	}
 }
 
 static void xennet_disconnect_backend(struct netfront_info *info)
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 03:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 03:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6XTg-0008Hz-BI; Fri, 24 Jan 2014 03:30:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6XTe-0008Hm-SW
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 03:30:43 +0000
Received: from [85.158.137.68:48003] by server-3.bemta-3.messagelabs.com id
	E9/CE-10658-16ED1E25; Fri, 24 Jan 2014 03:30:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390534239!11039056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14455 invoked from network); 24 Jan 2014 03:30:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 03:30:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,710,1384300800"; d="scan'208";a="93999581"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 03:30:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 23 Jan 2014 22:30:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6XTZ-000325-Fm;
	Fri, 24 Jan 2014 03:30:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6XTZ-0004sI-9i;
	Fri, 24 Jan 2014 03:30:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24471-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 03:30:37 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24471: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24471 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24471/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24465

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36
baseline version:
 xen                  231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 650fc2f76d0a156e23703683d0c18fa262ecea36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 23 13:55:42 2014 +0100

    x86/irq: avoid use-after-free on error path in pirq_guest_bind()
    
    This is XSA-83.
    
    Coverity-ID: 1146952
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 650fc2f76d0a156e23703683d0c18fa262ecea36
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 23 13:55:42 2014 +0100

    x86/irq: avoid use-after-free on error path in pirq_guest_bind()
    
    This is XSA-83.
    
    Coverity-ID: 1146952
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 04:27:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 04:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6YMT-0001A4-8f; Fri, 24 Jan 2014 04:27:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W6YMR-00019z-L1
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 04:27:19 +0000
Received: from [85.158.139.211:12844] by server-3.bemta-5.messagelabs.com id
	67/F2-04773-6ABE1E25; Fri, 24 Jan 2014 04:27:18 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390537636!11596967!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15941 invoked from network); 24 Jan 2014 04:27:18 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 04:27:18 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 23 Jan 2014 21:27:05 -0700
Message-ID: <52E1EB97.4080007@suse.com>
Date: Thu, 23 Jan 2014 21:27:03 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
In-Reply-To: <21216.62800.746512.422459@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> It appears the timeout_modify callback is invoked on a previously
>> deregistered timeout.  I didn't notice the segfault when running
>> libvirtd under valgrind, but did see
>>     
>
> Hmmm.  This could be a libxl problem.  I'll review the code again and
> maybe think about adding some assertions.
>   

BTW, I only see the crash when the save/restore script is running.  I
stopped the other scripts and domains, running only save/restore on a
single domain, and see the crash rather quickly (within 10 iterations).

> But I've slept on this and I had an idea about libvirt's rescheduling
> timeouts. 

I'm not so thrilled with the timeout handling code in the libvirt libxl
driver.  The driver maintains a list of active timeouts because IIRC,
there were cases when the driver received timeout deregistrations when
calling libxl_ctx_free, at which point some of the associated structures
were freed.  The idea was to call libxl_osevent_occurred_timeout on any
active timeouts before freeing libxlDomainObjPrivate and its contents.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 04:28:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 04:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6YNB-0001DH-PC; Fri, 24 Jan 2014 04:28:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W6YN9-0001D7-Oj
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 04:28:04 +0000
Received: from [85.158.137.68:15942] by server-12.bemta-3.messagelabs.com id
	59/97-20055-2DBE1E25; Fri, 24 Jan 2014 04:28:02 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390537680!11042747!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4671 invoked from network); 24 Jan 2014 04:28:01 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 04:28:01 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so3288899qac.22
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 20:28:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=T9lj1jznn8zK0c/uVLe2jvZYEqqHrjan1/zY0IvqjC4=;
	b=qwwGGnxI4wqml43ILR+GFkTZDj7g00mmai/WaoraT7JP5srDb3/1+nnq4jGW2CSRPF
	z2MNcqJ1cbF+g23maTYUxFA+oMAe9yJ3ywzX9GFmfUUjwIQ1Yl+M3zeenIIW7Rb9TQUP
	aXDmm2uD+kPrlrKrcinpKAZ8F0+rZlr8TLaU/+IM51VHpW7WrtDjLe+xHOJCOXoKHj3d
	jebP/qGlRhGVOCNZF6DueCFs/+2hvddMUKfkzf5RuJ1wc5GS2dlB6uP73USXtvNPbyTg
	/mAOz5Oxk8hjWkpOzLyqa7UD6ddPa0XD/Mox/akbusHXEJILzL1W5Rf65qA8/YebJ7ah
	Yv+g==
MIME-Version: 1.0
X-Received: by 10.140.96.116 with SMTP id j107mr16467880qge.6.1390537680311;
	Thu, 23 Jan 2014 20:28:00 -0800 (PST)
Received: by 10.96.133.33 with HTTP; Thu, 23 Jan 2014 20:28:00 -0800 (PST)
In-Reply-To: <20140123232012.GA23046@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140123232012.GA23046@orcus.uplinklabs.net>
Date: Thu, 23 Jan 2014 23:28:00 -0500
Message-ID: <CAEr7rXieOpz7sYRoAdCkh6j09cee9Bew-FcQ8gJQ9mC6dccgfQ@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 6:20 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
>> > On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>> >> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>> >> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
>> >> >>
>> >> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>> >> >>>
>> >> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>> >> >>>>
>> >> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>> >> >>>>>
>> >> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>> >> >>>>> <gregkh@linuxfoundation.org> wrote:
>> >> >>>
>> >> >>>
>> >> >>> Adding extra folks to the party.
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> Odds are this also shows up in 3.13, right?
>> >> >>>>
>> >> >>>>
>> >> >>>> Reproduced using 3.13 on the PV guest:
>> >> >>>>
>> >> >>>>         [  368.756763] BUG: Bad page map in process mp
>> >> >>>> pte:80000004a67c6165 pmd:e9b706067
>> >> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
>> >> >>>> mapping:          (null) index:0x0
>> >> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>> >> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
>> >> >>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>> >> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
>> >> >>>> #1
>> >> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
>> >> >>>> ffffffff814d8748 00007fd1388b7000
>> >> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
>> >> >>>> 0000000000000000 0000000000000000
>> >> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
>> >> >>>> 00007fd1388b8000 ffff880e9eaf3e30
>> >> >>>>         [  368.756815] Call Trace:
>> >> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>> >> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >> >>>>         [  368.756837]  [<ffffffff8116eae3>]
>> >> >>>> unmap_single_vma+0x583/0x890
>> >> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>> >> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>> >> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>> >> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>> >> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>> >> >>>>         [  368.756869]  [<ffffffff814e70ed>]
>> >> >>>> system_call_fastpath+0x1a/0x1f
>> >> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
>> >> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >> >>>> idx:0 val:-1
>> >> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
>> >> >>>> idx:1 val:1
>> >> >>>>
>> >> >>>>>
>> >> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
>> >> >>>>> interest in setting one up).. And I have a suspicion that it might not
>> >> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>> >> >>>>>
>> >> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>> >> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>> >> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>> >> >>>>> confused.
>> >> >>>>>
>> >> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>> >> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>> >> >>>>> _PAGE_PRESENT.
>> >> >>>>>
>> >> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>> >> >>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>> >> >>>>> that makes him go "Ahh, yes, silly case".
>> >> >>>>>
>> >> >>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>> >> >>>>>
>> >> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>> >> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>> >> >>>>> attached test-case (but apparently only under Xen PV). There it
>> >> >>>>> apparently causes a "BUG: Bad page map .." error.
>> >> >>>
>> >> >>>
>> >> >>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the
>> >> >>> _raw_
>> >> >>> value of PMDs and PTEs. That is - it does not use the pvops interface
>> >> >>> and instead reads the values directly from the page-table. Since the
>> >> >>> page-table is also manipulated by the hypervisor - there are certain
>> >> >>> flags it also sets to do its business. It might be that it uses
>> >> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>> >> >>> pte_flags that would invoke the pvops interface.
>> >> >>>
>> >> >>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>> >> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
>>
>> It does use _PAGE_GLOBAL for guest user pages
>>
>> >> >>>
>> >> >>> This not-compiled-totally-bad-patch might shed some light on what I was
>> >> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> >> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> >> >>> for that).
>> >> >>
>> >> >>
>> >> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
>> >> >> still able to repro the issue:
>> >>
>> >> Steven, do you use numa=fake on boot cmd line for pv guest?
>> >>
>> >> I had similar issue on pv guest. Let me check if the fix that resolved
>> >> this for me will help with 3.13.
>> >
>> > Nope:
>> >
>> > # cat /proc/cmdline
>> > root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
>>
>> >
>> >>
>> >> >
>> >> >
>> >> > Maybe this one is also related to this BUG here (cc'ed people investigating
>> >> > this one) ...
>> >> >
>> >> >   https://lkml.org/lkml/2014/1/10/427
>> >> >
>> >> > ... not sure, though.
>> >> >
>> >> >
>> >> >>         [  346.374929] BUG: Bad page map in process mp
>> >> >> pte:80000004ae928065 pmd:e993f9067
>> >> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
>> >> >> (null) index:0x0
>> >> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>> >> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
>> >> >> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>> >> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
>> >> >> #1
>> >> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
>> >> >> 00007f06a9bbb000
>> >> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
>> >> >> 0000000000000000
>> >> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
>> >> >> ffff880e991a3e30
>> >> >>         [  346.374979] Call Trace:
>> >> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>> >> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>> >> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>> >> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>> >> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>> >> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>> >> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>> >> >>         [  346.375034]  [<ffffffff814e712d>]
>> >> >> system_call_fastpath+0x1a/0x1f
>> >> >>         [  346.375037] Disabling lock debugging due to kernel taint
>> >> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> >> idx:0 val:-1
>> >> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>> >> >> idx:1 val:1
>> >> >>
>> >> >> This dump doesn't look dramatically different, either.
>> >> >>
>> >> >>>
>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >> >>> turned on?
>> >> >>
>> >> >>
>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> >> mean not enabled at runtime?
>> >> >>
>> >> >> [1]
>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>> >>
>> >>
>> >>
>> >> --
>> >> Elena
>>
>> I was able to reproduce this consistently, also with the latest mm
>> patches from yesterday.
>> Can you please try this:
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index ce563be..76dcf96 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
>> *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val &
>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val &
>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>
> Thanks Elena, I just tested that and it does un-break things.
>
> - Steven


Good sign :) OK, I'll format this patch for submission once it is clear
what path is taken here (as it is not a NUMA auto-balancing related
issue).
-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 04:28:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 04:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6YNB-0001DH-PC; Fri, 24 Jan 2014 04:28:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W6YN9-0001D7-Oj
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 04:28:04 +0000
Received: from [85.158.137.68:15942] by server-12.bemta-3.messagelabs.com id
	59/97-20055-2DBE1E25; Fri, 24 Jan 2014 04:28:02 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390537680!11042747!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4671 invoked from network); 24 Jan 2014 04:28:01 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 04:28:01 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so3288899qac.22
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 20:28:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=T9lj1jznn8zK0c/uVLe2jvZYEqqHrjan1/zY0IvqjC4=;
	b=qwwGGnxI4wqml43ILR+GFkTZDj7g00mmai/WaoraT7JP5srDb3/1+nnq4jGW2CSRPF
	z2MNcqJ1cbF+g23maTYUxFA+oMAe9yJ3ywzX9GFmfUUjwIQ1Yl+M3zeenIIW7Rb9TQUP
	aXDmm2uD+kPrlrKrcinpKAZ8F0+rZlr8TLaU/+IM51VHpW7WrtDjLe+xHOJCOXoKHj3d
	jebP/qGlRhGVOCNZF6DueCFs/+2hvddMUKfkzf5RuJ1wc5GS2dlB6uP73USXtvNPbyTg
	/mAOz5Oxk8hjWkpOzLyqa7UD6ddPa0XD/Mox/akbusHXEJILzL1W5Rf65qA8/YebJ7ah
	Yv+g==
MIME-Version: 1.0
X-Received: by 10.140.96.116 with SMTP id j107mr16467880qge.6.1390537680311;
	Thu, 23 Jan 2014 20:28:00 -0800 (PST)
Received: by 10.96.133.33 with HTTP; Thu, 23 Jan 2014 20:28:00 -0800 (PST)
In-Reply-To: <20140123232012.GA23046@orcus.uplinklabs.net>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140123232012.GA23046@orcus.uplinklabs.net>
Date: Thu, 23 Jan 2014 23:28:00 -0500
Message-ID: <CAEr7rXieOpz7sYRoAdCkh6j09cee9Bew-FcQ8gJQ9mC6dccgfQ@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Alex Thorlton <athorlton@sgi.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 6:20 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
>> > On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>> >> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>> >> > On 01/22/2014 08:29 AM, Steven Noonan wrote:
>> >> >>
>> >> >> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>> >> >>>
>> >> >>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>> >> >>>>
>> >> >>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>> >> >>>>>
>> >> >>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>> >> >>>>> <gregkh@linuxfoundation.org> wrote:
>> >> >>>
>> >> >>>
>> >> >>> Adding extra folks to the party.
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> Odds are this also shows up in 3.13, right?
>> >> >>>>
>> >> >>>>
>> >> >>>> Reproduced using 3.13 on the PV guest:
>> >> >>>>
>> >> >>>>         [  368.756763] BUG: Bad page map in process mp  pte:80000004a67c6165 pmd:e9b706067
>> >> >>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1 mapping:          (null) index:0x0
>> >> >>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>> >> >>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071 anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>> >> >>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2 #1
>> >> >>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0 ffffffff814d8748 00007fd1388b7000
>> >> >>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289 0000000000000000 0000000000000000
>> >> >>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180 00007fd1388b8000 ffff880e9eaf3e30
>> >> >>>>         [  368.756815] Call Trace:
>> >> >>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>> >> >>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >> >>>>         [  368.756837]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>> >> >>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >> >>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>> >> >>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>> >> >>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>> >> >>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>> >> >>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>> >> >>>>         [  368.756869]  [<ffffffff814e70ed>] system_call_fastpath+0x1a/0x1f
>> >> >>>>         [  368.756872] Disabling lock debugging due to kernel taint
>> >> >>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:0 val:-1
>> >> >>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:1 val:1
>> >> >>>>
>> >> >>>>>
>> >> >>>>> Probably. I don't have a Xen PV setup to test with (and very little
>> >> >>>>> interest in setting one up).. And I have a suspicion that it might not
>> >> >>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>> >> >>>>>
>> >> >>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>> >> >>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>> >> >>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>> >> >>>>> confused.
>> >> >>>>>
>> >> >>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>> >> >>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>> >> >>>>> _PAGE_PRESENT.
>> >> >>>>>
>> >> >>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>> >> >>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>> >> >>>>> that makes him go "Ahh, yes, silly case".
>> >> >>>>>
>> >> >>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>> >> >>>>>
>> >> >>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>> >> >>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>> >> >>>>> attached test-case (but apparently only under Xen PV). There it
>> >> >>>>> apparently causes a "BUG: Bad page map .." error.
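[Editor's note: the bit-sharing Linus describes above can be made concrete with a small standalone sketch. The flag values below are illustrative only, NOT the real x86 definitions; the point is that _PAGE_NUMA and _PAGE_PROTNONE are the same physical bit, so a NUMA-hinting pte is bit-identical to one produced by mprotect(PROT_NONE).]

```c
#include <stdint.h>

/* Illustrative bit values only -- not the real x86 definitions. */
#define XPAGE_PRESENT  0x001ULL
#define XPAGE_PROTNONE 0x100ULL
#define XPAGE_NUMA     XPAGE_PROTNONE   /* the two flags share one bit */

/* A pte counts as "NUMA" when the shared bit is set but PRESENT is
 * clear -- which is also exactly what mprotect(PROT_NONE) produces,
 * hence the ambiguity being discussed. */
static int pte_numa_sketch(uint64_t val)
{
    return (val & (XPAGE_NUMA | XPAGE_PRESENT)) == XPAGE_NUMA;
}
```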
>> >> >>>
>> >> >>>
>> >> >>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the
>> >> >>> _raw_
>> >> >>> value of PMDs and PTEs. That is - it does not use the pvops interface
>> >> >>> and instead reads the values directly from the page-table. Since the
>> >> >>> page-table is also manipulated by the hypervisor - there are certain
>> >> >>> flags it also sets to do its business. It might be that it uses
>> >> >>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>> >> >>> pte_flags that would invoke the pvops interface.
>> >> >>>
>> >> >>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>> >> >>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
>>
>> It does use _PAGE_GLOBAL for guest user pages.
>>
>> >> >>>
>> >> >>> This not-compiled-totally-bad-patch might shed some light on what I was
>> >> >>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>> >> >>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>> >> >>> for that).
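[Editor's note: a minimal sketch of the pvops indirection Konrad describes: flag reads go through a per-backend hook rather than using the raw page-table value, so a Xen-style backend can filter out bits the hypervisor sets for its own bookkeeping. All names and flag values here are illustrative, not the real Linux paravirt-ops interface.]

```c
#include <stdint.h>

/* Illustrative hypervisor-owned bit, not the real _PAGE_GLOBAL value. */
#define XPAGE_HV_BIT 0x100ULL

typedef uint64_t (*pte_flags_hook_t)(uint64_t raw);

/* Bare metal: the raw value is the truth. */
static uint64_t native_pte_flags(uint64_t raw) { return raw; }

/* A Xen-style backend masks bits the hypervisor manages before Linux
 * interprets them; reading the raw value bypasses this filtering. */
static uint64_t xen_pte_flags(uint64_t raw) { return raw & ~XPAGE_HV_BIT; }

static pte_flags_hook_t pte_flags_op = xen_pte_flags;

static uint64_t pte_flags_sketch(uint64_t raw) { return pte_flags_op(raw); }
```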
>> >> >>
>> >> >>
>> >> >> Unfortunately the Totally Bad Patch seems to make no difference. I am
>> >> >> still able to repro the issue:
>> >>
>> >> Steven, do you use numa=fake on boot cmd line for pv guest?
>> >>
>> >> I had similar issue on pv guest. Let me check if the fix that resolved
>> >> this for me will help with 3.13.
>> >
>> > Nope:
>> >
>> > # cat /proc/cmdline
>> > root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
>>
>> >
>> >>
>> >> >
>> >> >
>> >> > Maybe this one is also related to this BUG here (cc'ed people investigating
>> >> > this one) ...
>> >> >
>> >> >   https://lkml.org/lkml/2014/1/10/427
>> >> >
>> >> > ... not sure, though.
>> >> >
>> >> >
>> >> >>         [  346.374929] BUG: Bad page map in process mp  pte:80000004ae928065 pmd:e993f9067
>> >> >>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:          (null) index:0x0
>> >> >>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>> >> >>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>> >> >>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
>> >> >>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
>> >> >>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
>> >> >>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
>> >> >>         [  346.374979] Call Trace:
>> >> >>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>> >> >>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>> >> >>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>> >> >>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>> >> >>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>> >> >>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>> >> >>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>> >> >>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>> >> >>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>> >> >>         [  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
>> >> >>         [  346.375037] Disabling lock debugging due to kernel taint
>> >> >>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
>> >> >>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1
>> >> >>
>> >> >> This dump doesn't look dramatically different, either.
>> >> >>
>> >> >>>
>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >> >>> turned on?
>> >> >>
>> >> >>
>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> >> mean not enabled at runtime?
>> >> >>
>> >> >> [1]
>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>> >>
>> >>
>> >>
>> >> --
>> >> Elena
>>
>> I was able to reproduce this consistently, also with the latest mm
>> patches from yesterday.
>> Can you please try this:
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index ce563be..76dcf96 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>
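[Editor's note: the quoted patch boils down to widening one predicate. A standalone before/after sketch, with illustrative flag values rather than the real x86 definitions:]

```c
#include <stdint.h>

/* Illustrative values only -- not the real x86 flag definitions. */
#define XPAGE_PRESENT 0x001ULL
#define XPAGE_NUMA    0x100ULL

/* Before the patch: only present ptes get mfn<->pfn translation. */
static int translate_old(uint64_t val)
{
    return (val & XPAGE_PRESENT) != 0;
}

/* After the patch: NUMA-hinting ptes (PRESENT clear, NUMA set) still
 * carry a machine frame number and must be translated as well. */
static int translate_new(uint64_t val)
{
    return (val & XPAGE_PRESENT) ||
           ((val & (XPAGE_NUMA | XPAGE_PRESENT)) == XPAGE_NUMA);
}
```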
> Thanks Elena, I just tested that and it does un-break things.
>
> - Steven


Good sign :) OK, I'll format this patch for submission once it's clear
what path is taken here (as it's not a NUMA auto-balancing related
thing).
-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 05:49:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 05:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6Zd9-0003Wf-4n; Fri, 24 Jan 2014 05:48:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W6Zd7-0003VW-ST
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 05:48:38 +0000
Received: from [193.109.254.147:63450] by server-4.bemta-14.messagelabs.com id
	D8/A2-03916-5BEF1E25; Fri, 24 Jan 2014 05:48:37 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390542514!12887052!1
X-Originating-IP: [209.85.220.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12105 invoked from network); 24 Jan 2014 05:48:36 -0000
Received: from mail-pa0-f50.google.com (HELO mail-pa0-f50.google.com)
	(209.85.220.50)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 05:48:36 -0000
Received: by mail-pa0-f50.google.com with SMTP id kp14so2854743pab.9
	for <xen-devel@lists.xenproject.org>;
	Thu, 23 Jan 2014 21:48:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=3ClvWheMhOmluAaY4bVXjddBRoVTHXFMCVdVW8FkGRk=;
	b=cMMcVEGDtroOhBPJiJkQOdBAvhCErXl61xY901GRwLad44aOVT8zFaxjHu0VQvG5qI
	1Hcb08XMfGL5bDJOmgx5EydsI5xfFlC6GisQzBVMt7PP+IjHNtqawfgI3Ujf8WhlwWzw
	uMCGTr7wkZyfB2k00GdAWqlQhNwn5KNnCmsmdoAFSJEwFvI6azwjuIkAxLj5sWvdHxFl
	/gu2izJqdoEyy+hPBxeGsgw24/+Nc5E3Et0xgkChjhW6kHf4f0thPNG6nzcg+xtqjcBE
	9xRWiB/cAHS3spOZTRnKYtn5XODfGrl8SboRXIqEFeluSapmIruYwXZB3agh+QCpnEtL
	dhbA==
X-Received: by 10.66.217.133 with SMTP id oy5mr12693698pac.46.1390542514040;
	Thu, 23 Jan 2014 21:48:34 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-185.amazon.com. [54.240.196.185])
	by mx.google.com with ESMTPSA id
	jp3sm46006966pbc.36.2014.01.23.21.48.30 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 21:48:32 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Thu, 23 Jan 2014 21:48:29 -0800
Date: Thu, 23 Jan 2014 21:48:29 -0800
From: Matt Wilson <msw@linux.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Anthony Liguori <aliguori@amazon.com>,
	xen-devel@lists.xenproject.org, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 09:23:44PM +0000, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and the future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
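[Editor's note: a minimal sketch of the wrapper split described in the changelog above. Function bodies and parameter types are illustrative; the real functions operate on gnttab map-op arrays and struct page pointers.]

```c
#include <stdbool.h>

/* Illustrative only: the real __gnttab_[un]map_refs take grant map
 * operations, struct page arrays, and a count. */
static int __gnttab_map_refs_sketch(int count, bool m2p_override)
{
    /* ... common mapping work shared by all callers ... */
    if (m2p_override) {
        /* gntdev path: keep the m2p override so userspace-visible
         * mappings still reverse-translate correctly. */
    } else {
        /* blkback/netback path: only set the private flag and call
         * set_phys_to_machine, avoiding the m2p lock contention. */
    }
    return count;   /* stand-in return value */
}

/* Kernel-only users now get the cheap path by default... */
static int gnttab_map_refs_sketch(int count)
{
    return __gnttab_map_refs_sketch(count, false);
}

/* ...while gntdev keeps the old behaviour under a new name. */
static int gnttab_map_refs_userspace_sketch(int count)
{
    return __gnttab_map_refs_sketch(count, true);
}
```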
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap is set, as that is the only possible return
> value there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

Apologies for coming in late on this thread. I'm quite behind on
xen-devel mail that isn't CC'd to me.

It seems to have been forgotten that Anthony and I proposed a similar
change last November.

https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws

Or am I misunderstanding the change?

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 07:36:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 07:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bJ3-0006AR-Bn; Fri, 24 Jan 2014 07:36:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W6bJ1-0006AM-Lp
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 07:36:00 +0000
Received: from [85.158.137.68:34901] by server-13.bemta-3.messagelabs.com id
	51/AF-28603-ED712E25; Fri, 24 Jan 2014 07:35:58 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390548957!9871889!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12444 invoked from network); 24 Jan 2014 07:35:58 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 07:35:58 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so242374eaj.39
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 23:35:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=YGvDA0hyIiE+dTCMBiOE1A8iMC7VYt4zwNpHUvb3vYc=;
	b=KPWj4KTzLKPyiAWoPvnLRMmSW3GlwOIMLlr85Yr/qK78/Pp0QbFSSJfC5WQZGPZxLb
	wGW0KU5jwypq/wddFpRXkovabTRVzvtzgME6yl7hfJnozqoJZyhXLDBGOASI4ZAI1zm1
	DEBMcnoCB9hxxpn2u+3Cseuc34vu+0ZcxmPUNo/Sm/LnVS2G5rWXrCU3bEldsdI+WL1M
	r3NACrQ3HzDqbqQb3xK1hvMZsjePiwQfRm3PBbZuFFtXJCpJHYxqrqVmcPWlmyNCEmgd
	adSgH+vzuFF3nAjDj4OiXutdAz+7fPndrPkt0UrYUpIvWGqrTujvENUWPawM4n5pclRN
	vyKw==
X-Received: by 10.14.100.71 with SMTP id y47mr967299eef.91.1390548957779;
	Thu, 23 Jan 2014 23:35:57 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.dsl.vodafone.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id m47sm251672eey.7.2014.01.23.23.35.55
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 23:35:56 -0800 (PST)
Message-ID: <52E217DA.1020208@redhat.com>
Date: Fri, 24 Jan 2014 08:35:54 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org, 
	qemu-devel@nongnu.org
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<1390515366-32236-4-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390515366-32236-4-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/5] exec: guard Xen HVM hooks with
	CONFIG_XEN_I386
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/2014 23:16, Wei Liu wrote:
> Those are only useful when building QEMU with HVM support.
>
> We need to expose CONFIG_XEN_I386 to source code so we modify configure
> and i386/x86_64-softmmu.mak.

I think the right way is to add a xen-hvm-stub.c file and include it in 
xenpv-softmmu.

Since you are at it, xen_platform.o xen_apic.o xen_pvdevice.o can be 
moved to hw/i386/xen and you can drop CONFIG_XEN_I386 completely.
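[Editor's note: the stub-file approach Paolo suggests replaces per-call-site #ifdefs with link-time no-ops. A hedged sketch of what such a xen-hvm-stub.c could contain; the prototypes and typedefs below are simplified stand-ins, not QEMU's real declarations.]

```c
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t ram_addr_t;   /* simplified stand-in for QEMU's type */

/* No-op hooks for targets built without Xen HVM support, so exec.c can
 * call them unconditionally instead of guarding each call site. */
void xen_ram_alloc(ram_addr_t offset, ram_addr_t size, void *mr)
{
    (void)offset; (void)size; (void)mr;
}

void xen_invalidate_map_cache_entry(uint8_t *buffer)
{
    (void)buffer;
}

void xen_modified_memory(ram_addr_t start, ram_addr_t length)
{
    (void)start; (void)length;
}

/* Without an HVM mapcache there is nothing to map. */
void *xen_map_cache(ram_addr_t addr, ram_addr_t size, int lock)
{
    (void)addr; (void)size; (void)lock;
    return NULL;
}
```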

Paolo

> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  configure                          |    1 +
>  default-configs/i386-softmmu.mak   |    1 -
>  default-configs/x86_64-softmmu.mak |    1 -
>  exec.c                             |   16 ++++++++++++++++
>  include/exec/memory-internal.h     |    2 ++
>  5 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/configure b/configure
> index 10a6562..1e515be 100755
> --- a/configure
> +++ b/configure
> @@ -4472,6 +4472,7 @@ echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
>
>  if supported_xen_target; then
>      echo "CONFIG_XEN=y" >> $config_target_mak
> +    echo "CONFIG_XEN_I386=y" >> $config_target_mak
>      if test "$xen_pci_passthrough" = yes; then
>          echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
>      fi
> diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak
> index 37ef90f..3c89aaa 100644
> --- a/default-configs/i386-softmmu.mak
> +++ b/default-configs/i386-softmmu.mak
> @@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
>  CONFIG_PAM=y
>  CONFIG_PCI_PIIX=y
>  CONFIG_WDT_IB700=y
> -CONFIG_XEN_I386=$(CONFIG_XEN)
>  CONFIG_ISA_DEBUG=y
>  CONFIG_ISA_TESTDEV=y
>  CONFIG_VMPORT=y
> diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
> index 31bddce..1dc1f85 100644
> --- a/default-configs/x86_64-softmmu.mak
> +++ b/default-configs/x86_64-softmmu.mak
> @@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
>  CONFIG_PAM=y
>  CONFIG_PCI_PIIX=y
>  CONFIG_WDT_IB700=y
> -CONFIG_XEN_I386=$(CONFIG_XEN)
>  CONFIG_ISA_DEBUG=y
>  CONFIG_ISA_TESTDEV=y
>  CONFIG_VMPORT=y
> diff --git a/exec.c b/exec.c
> index defe38f..a72efe2 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1228,7 +1228,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>              fprintf(stderr, "-mem-path not supported with Xen\n");
>              exit(1);
>          }
> +#ifdef CONFIG_XEN_I386
>          xen_ram_alloc(new_block->offset, size, mr);
> +#endif
>      } else {
>          if (mem_path) {
>              if (phys_mem_alloc != qemu_anon_ram_alloc) {
> @@ -1324,7 +1326,9 @@ void qemu_ram_free(ram_addr_t addr)
>              if (block->flags & RAM_PREALLOC_MASK) {
>                  ;
>              } else if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>                  xen_invalidate_map_cache_entry(block->host);
> +#endif
>  #ifndef _WIN32
>              } else if (block->fd >= 0) {
>                  munmap(block->host, block->length);
> @@ -1409,6 +1413,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
>      RAMBlock *block = qemu_get_ram_block(addr);
>
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          /* We need to check if the requested address is in the RAM
>           * because we don't want to map the entire memory in QEMU.
>           * In that case just map until the end of the page.
> @@ -1419,6 +1424,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
>              block->host =
>                  xen_map_cache(block->offset, block->length, 1);
>          }
> +#endif
>      }
>      return block->host + (addr - block->offset);
>  }
> @@ -1431,7 +1437,11 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, hwaddr *size)
>          return NULL;
>      }
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          return xen_map_cache(addr, *size, 1);
> +#else
> +        return NULL;
> +#endif
>      } else {
>          RAMBlock *block;
>
> @@ -1456,8 +1466,10 @@ MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
>      uint8_t *host = ptr;
>
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          *ram_addr = xen_ram_addr_from_mapcache(ptr);
>          return qemu_get_ram_block(*ram_addr)->mr;
> +#endif
>      }
>
>      block = ram_list.mru_block;
> @@ -1921,7 +1933,9 @@ static void invalidate_and_set_dirty(hwaddr addr,
>          /* set dirty bit */
>          cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
>      }
> +#ifdef CONFIG_XEN_I386
>      xen_modified_memory(addr, length);
> +#endif
>  }
>
>  static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
> @@ -2298,7 +2312,9 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
>              }
>          }
>          if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>              xen_invalidate_map_cache_entry(buffer);
> +#endif
>          }
>          memory_region_unref(mr);
>          return;
> diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
> index d0e0633..b4e76e2 100644
> --- a/include/exec/memory-internal.h
> +++ b/include/exec/memory-internal.h
> @@ -100,7 +100,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>      for (addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
>          cpu_physical_memory_set_dirty_flags(addr, dirty_flags);
>      }
> +#ifdef CONFIG_XEN_I386
>      xen_modified_memory(addr, length);
> +#endif
>  }
>
>  static inline void cpu_physical_memory_mask_dirty_range(ram_addr_t start,
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 07:36:30 2014
Message-ID: <52E217DA.1020208@redhat.com>
Date: Fri, 24 Jan 2014 08:35:54 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org, 
	qemu-devel@nongnu.org
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<1390515366-32236-4-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390515366-32236-4-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/5] exec: guard Xen HVM hooks with
	CONFIG_XEN_I386

On 23/01/2014 23:16, Wei Liu wrote:
> Those are only useful when building QEMU with HVM support.
>
> We need to expose CONFIG_XEN_I386 to the source code, so we modify
> configure and i386/x86_64-softmmu.mak.

I think the right way is to add a xen-hvm-stub.c file and include it in 
xenpv-softmmu.
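
A minimal sketch of what such a stub file could contain, with placeholder
typedefs standing in for QEMU's real ram_addr_t/hwaddr/MemoryRegion headers
(signatures inferred from the calls guarded in this patch, so treat them as
assumptions, not the final API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder types for illustration only; the real stub file would
 * include QEMU's own headers instead of these typedefs. */
typedef uint64_t ram_addr_t;
typedef uint64_t hwaddr;
typedef struct MemoryRegion MemoryRegion;

/* A xenpv binary still runs under Xen, so xen_enabled() stays true. */
static int xen_enabled(void) { return 1; }

/* PV guests have no HVM mapcache, so the allocation and dirty-tracking
 * hooks become harmless no-ops. */
void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr)
{
    (void)ram_addr; (void)size; (void)mr;
}

void xen_modified_memory(ram_addr_t start, ram_addr_t length)
{
    (void)start; (void)length;
}

void xen_invalidate_map_cache_entry(uint8_t *buffer)
{
    (void)buffer;
}

/* Mapcache lookups must never be reached in a PV-only binary, so these
 * stubs fail loudly rather than return garbage. */
uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size, uint8_t lock)
{
    (void)phys_addr; (void)size; (void)lock;
    fprintf(stderr, "xen_map_cache: unreachable in xenpv build\n");
    abort();
}

ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
{
    (void)ptr;
    fprintf(stderr, "xen_ram_addr_from_mapcache: unreachable in xenpv build\n");
    abort();
}
```

Listing such a file only in xenpv-softmmu's objects would let exec.c keep
calling the hooks unconditionally, with no #ifdef needed at the call sites.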

While you are at it, xen_platform.o, xen_apic.o and xen_pvdevice.o can be 
moved to hw/i386/xen, and you can drop CONFIG_XEN_I386 completely.

Paolo

> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  configure                          |    1 +
>  default-configs/i386-softmmu.mak   |    1 -
>  default-configs/x86_64-softmmu.mak |    1 -
>  exec.c                             |   16 ++++++++++++++++
>  include/exec/memory-internal.h     |    2 ++
>  5 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/configure b/configure
> index 10a6562..1e515be 100755
> --- a/configure
> +++ b/configure
> @@ -4472,6 +4472,7 @@ echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
>
>  if supported_xen_target; then
>      echo "CONFIG_XEN=y" >> $config_target_mak
> +    echo "CONFIG_XEN_I386=y" >> $config_target_mak
>      if test "$xen_pci_passthrough" = yes; then
>          echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
>      fi
> diff --git a/default-configs/i386-softmmu.mak b/default-configs/i386-softmmu.mak
> index 37ef90f..3c89aaa 100644
> --- a/default-configs/i386-softmmu.mak
> +++ b/default-configs/i386-softmmu.mak
> @@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
>  CONFIG_PAM=y
>  CONFIG_PCI_PIIX=y
>  CONFIG_WDT_IB700=y
> -CONFIG_XEN_I386=$(CONFIG_XEN)
>  CONFIG_ISA_DEBUG=y
>  CONFIG_ISA_TESTDEV=y
>  CONFIG_VMPORT=y
> diff --git a/default-configs/x86_64-softmmu.mak b/default-configs/x86_64-softmmu.mak
> index 31bddce..1dc1f85 100644
> --- a/default-configs/x86_64-softmmu.mak
> +++ b/default-configs/x86_64-softmmu.mak
> @@ -33,7 +33,6 @@ CONFIG_MC146818RTC=y
>  CONFIG_PAM=y
>  CONFIG_PCI_PIIX=y
>  CONFIG_WDT_IB700=y
> -CONFIG_XEN_I386=$(CONFIG_XEN)
>  CONFIG_ISA_DEBUG=y
>  CONFIG_ISA_TESTDEV=y
>  CONFIG_VMPORT=y
> diff --git a/exec.c b/exec.c
> index defe38f..a72efe2 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1228,7 +1228,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>              fprintf(stderr, "-mem-path not supported with Xen\n");
>              exit(1);
>          }
> +#ifdef CONFIG_XEN_I386
>          xen_ram_alloc(new_block->offset, size, mr);
> +#endif
>      } else {
>          if (mem_path) {
>              if (phys_mem_alloc != qemu_anon_ram_alloc) {
> @@ -1324,7 +1326,9 @@ void qemu_ram_free(ram_addr_t addr)
>              if (block->flags & RAM_PREALLOC_MASK) {
>                  ;
>              } else if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>                  xen_invalidate_map_cache_entry(block->host);
> +#endif
>  #ifndef _WIN32
>              } else if (block->fd >= 0) {
>                  munmap(block->host, block->length);
> @@ -1409,6 +1413,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
>      RAMBlock *block = qemu_get_ram_block(addr);
>
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          /* We need to check if the requested address is in the RAM
>           * because we don't want to map the entire memory in QEMU.
>           * In that case just map until the end of the page.
> @@ -1419,6 +1424,7 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
>              block->host =
>                  xen_map_cache(block->offset, block->length, 1);
>          }
> +#endif
>      }
>      return block->host + (addr - block->offset);
>  }
> @@ -1431,7 +1437,11 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, hwaddr *size)
>          return NULL;
>      }
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          return xen_map_cache(addr, *size, 1);
> +#else
> +        return NULL;
> +#endif
>      } else {
>          RAMBlock *block;
>
> @@ -1456,8 +1466,10 @@ MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
>      uint8_t *host = ptr;
>
>      if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>          *ram_addr = xen_ram_addr_from_mapcache(ptr);
>          return qemu_get_ram_block(*ram_addr)->mr;
> +#endif
>      }
>
>      block = ram_list.mru_block;
> @@ -1921,7 +1933,9 @@ static void invalidate_and_set_dirty(hwaddr addr,
>          /* set dirty bit */
>          cpu_physical_memory_set_dirty_flags(addr, (0xff & ~CODE_DIRTY_FLAG));
>      }
> +#ifdef CONFIG_XEN_I386
>      xen_modified_memory(addr, length);
> +#endif
>  }
>
>  static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
> @@ -2298,7 +2312,9 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
>              }
>          }
>          if (xen_enabled()) {
> +#ifdef CONFIG_XEN_I386
>              xen_invalidate_map_cache_entry(buffer);
> +#endif
>          }
>          memory_region_unref(mr);
>          return;
> diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
> index d0e0633..b4e76e2 100644
> --- a/include/exec/memory-internal.h
> +++ b/include/exec/memory-internal.h
> @@ -100,7 +100,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>      for (addr = start; addr < end; addr += TARGET_PAGE_SIZE) {
>          cpu_physical_memory_set_dirty_flags(addr, dirty_flags);
>      }
> +#ifdef CONFIG_XEN_I386
>      xen_modified_memory(addr, length);
> +#endif
>  }
>
>  static inline void cpu_physical_memory_mask_dirty_range(ram_addr_t start,
>



From xen-devel-bounces@lists.xen.org Fri Jan 24 07:37:06 2014
Message-ID: <52E2181B.9040002@redhat.com>
Date: Fri, 24 Jan 2014 08:36:59 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org, 
	qemu-devel@nongnu.org
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target

On 23/01/2014 23:16, Wei Liu wrote:
> As promised, I hacked up a prototype based on Paolo's disable-TCG series.
> However, I coded some stubs for TCG anyway, so this series should in
> principle work with or without Paolo's series.
>
> The first 3 patches refactor some code to disentangle Xen PV and HVM
> guests. The 4th patch has the real meat: it introduces the Xen PV target,
> which contains basically a dummy CPU, then hooks this Xen PV CPU up to
> QEMU's internal structures.
>
> The last patch introduces xenpv-softmmu, which contains *no* emulation
> code. I know that in a previous discussion people said that every device
> emulation should be included if the target architecture is called null.
> But since this target CPU is now called xenpv, I no longer feel obliged to
> include any device emulation in this prototype. :-)
>
> Please note that the existing Xen QEMU build is not affected at all. You
> can still use "--disable-tcg --enable-xen --target-list=i386-softmmu"
> (or x86_64-softmmu) to build qemu-system-{i386,x86_64} and use it for
> both HVM and PV guests. This series adds another option: build QEMU with
> "--disable-tcg --enable-xen --target-list=xenpv-softmmu" and get a QEMU
> binary tailored for Xen PV guests. The effect is that we reduce the
> binary size from 14MB to 7.3MB.
>
> What do you think of this idea? I'm all ears.
>
> Wei.
>
> Wei Liu (5):
>   xen: move Xen PV machine files to hw/xenpv
>   xen: factor out common functions
>   exec: guard Xen HVM hooks with CONFIG_XEN_I386
>   xen: implement Xen PV target
>   xen: introduce xenpv-softmmu.mak
>
>  Makefile.target                      |    3 +-
>  arch_init.c                          |    2 +
>  configure                            |   12 ++-
>  cpu-exec.c                           |    2 +
>  default-configs/i386-softmmu.mak     |    1 -
>  default-configs/x86_64-softmmu.mak   |    1 -
>  default-configs/xenpv-softmmu.mak    |    2 +
>  exec.c                               |   16 ++++
>  hw/i386/Makefile.objs                |    2 +-
>  hw/xenpv/Makefile.objs               |    2 +
>  hw/{i386 => xenpv}/xen_domainbuild.c |    0
>  hw/{i386 => xenpv}/xen_domainbuild.h |    0
>  hw/{i386 => xenpv}/xen_machine_pv.c  |    0
>  include/exec/memory-internal.h       |    2 +
>  include/sysemu/arch_init.h           |    1 +
>  target-xenpv/Makefile.objs           |    1 +
>  target-xenpv/cpu-qom.h               |   64 ++++++++++++++++
>  target-xenpv/cpu.h                   |   66 ++++++++++++++++
>  target-xenpv/helper.c                |   32 ++++++++
>  target-xenpv/translate.c             |   27 +++++++
>  xen-common.c                         |  137 ++++++++++++++++++++++++++++++++++
>  xen-all.c => xen-hvm.c               |  112 +--------------------------
>  xen-stub.c                           |    4 -
>  23 files changed, 368 insertions(+), 121 deletions(-)
>  create mode 100644 default-configs/xenpv-softmmu.mak
>  create mode 100644 hw/xenpv/Makefile.objs
>  rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
>  rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
>  rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
>  create mode 100644 target-xenpv/Makefile.objs
>  create mode 100644 target-xenpv/cpu-qom.h
>  create mode 100644 target-xenpv/cpu.h
>  create mode 100644 target-xenpv/helper.c
>  create mode 100644 target-xenpv/helper.h
>  create mode 100644 target-xenpv/translate.c
>  create mode 100644 xen-common.c
>  rename xen-all.c => xen-hvm.c (92%)
>

Mostly looks good!

Feel free to take any initial patches you need from my branch, and post 
them for inclusion together with this series!

Paolo


From xen-devel-bounces@lists.xen.org Fri Jan 24 07:38:27 2014
Message-ID: <52E2186B.6060200@redhat.com>
Date: Fri, 24 Jan 2014 08:38:19 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org, 
	qemu-devel@nongnu.org
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 5/5] xen: introduce xenpv-softmmu.mak

On 23/01/2014 23:16, Wei Liu wrote:
> -        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> +    if test "$target_name" != "xenpv"; then
> +        echo "CONFIG_XEN_I386=y" >> $config_target_mak
> +        if test "$xen_pci_passthrough" = yes; then
> +            echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> +        fi

You can add

CONFIG_XEN_PCI_PASSTHROUGH=$(CONFIG_XEN)

to i386-softmmu.mak and x86_64-softmmu.mak, and drop this setting from 
config-target.mak too.
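
Concretely, the suggestion amounts to something like the following line in
default-configs/i386-softmmu.mak (and the x86_64 equivalent); a sketch of
the idea, not the final patch:

```make
# default-configs/i386-softmmu.mak (sketch)
# Enabled automatically whenever the target is built with Xen support,
# so configure no longer has to write this into config-target.mak.
CONFIG_XEN_PCI_PASSTHROUGH=$(CONFIG_XEN)
```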

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 07:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 07:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bLN-0006NT-C3; Fri, 24 Jan 2014 07:38:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W6bLM-0006KY-9y
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 07:38:24 +0000
Received: from [85.158.143.35:2742] by server-2.bemta-4.messagelabs.com id
	93/22-11386-F6812E25; Fri, 24 Jan 2014 07:38:23 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390549102!470920!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14825 invoked from network); 24 Jan 2014 07:38:23 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 07:38:23 -0000
Received: by mail-ee0-f52.google.com with SMTP id e53so782723eek.39
	for <xen-devel@lists.xen.org>; Thu, 23 Jan 2014 23:38:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=AyG/7TD+myHVTtFz+JftG6wpIKbbX4gxd0H/QqA7pQA=;
	b=NONhri9ehuJ4axOauRjHIBv+UZMXbmxzs6NKkd9BG5QETbYXva7WpqvsTDkwh/aG09
	qfuQ7K4dRcVpMIuodMbYLv2sxu2kMAf51GxWlz8+JB2HgWDor60wqr0/C4mxpfsY9C95
	cz+1A5JDafQvathApKVMddERC0TAGVFTmq8B5aCuHzcc69Ac/X+76SdbGXRw5Cmsn1ib
	pYOQxYIn5g2etyOAi4zQKIkWpQWLm6AhIM0qzag7lRBind5R41Jiu6naGgP2ceaTR91x
	n1sM5z2Jpn2BPwSv244h09pXcHofc9xc6WZT8n+20/+zncMXToUWIaxeCtpW7mf1XyeU
	tUbw==
X-Received: by 10.14.114.70 with SMTP id b46mr6520695eeh.84.1390549102692;
	Thu, 23 Jan 2014 23:38:22 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.dsl.vodafone.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id j46sm220479eew.18.2014.01.23.23.38.20
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 23 Jan 2014 23:38:21 -0800 (PST)
Message-ID: <52E2186B.6060200@redhat.com>
Date: Fri, 24 Jan 2014 08:38:19 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org, 
	qemu-devel@nongnu.org
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 5/5] xen: introduce xenpv-softmmu.mak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/2014 23:16, Wei Liu wrote:
> -        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> +    if test "$target_name" != "xenpv"; then
> +        echo "CONFIG_XEN_I386=y" >> $config_target_mak
> +        if test "$xen_pci_passthrough" = yes; then
> +            echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> +        fi

You can add

CONFIG_XEN_PCI_PASSTHROUGH=$(CONFIG_XEN)

to i386-softmmu.mak and x86_64-softmmu.mak, and drop this setting from 
config-target.mak too.
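
Concretely, that suggestion would amount to something like the following fragment — a sketch only; the exact file paths (`default-configs/i386-softmmu.mak` and `default-configs/x86_64-softmmu.mak`) are assumptions about the tree layout at the time:

```make
# Hypothetical sketch of the change suggested above: instead of
# configure emitting CONFIG_XEN_PCI_PASSTHROUGH into config-target.mak,
# derive it from CONFIG_XEN in the per-target default config, so PCI
# passthrough support follows the Xen build flag automatically.
CONFIG_XEN_PCI_PASSTHROUGH=$(CONFIG_XEN)
```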

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 07:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 07:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bQU-0006kt-Cc; Fri, 24 Jan 2014 07:43:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W6Yii-00021s-HU
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 04:50:20 +0000
Received: from [85.158.139.211:43537] by server-1.bemta-5.messagelabs.com id
	67/20-21065-B01F1E25; Fri, 24 Jan 2014 04:50:19 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390539017!11646103!1
X-Originating-IP: [98.139.212.155]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8211 invoked from network); 24 Jan 2014 04:50:18 -0000
Received: from nm25-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm25-vm1.bullet.mail.bf1.yahoo.com) (98.139.212.155)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 04:50:18 -0000
Received: from [66.196.81.174] by nm25.bullet.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 04:50:16 -0000
Received: from [98.139.211.162] by tm20.bullet.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 04:50:16 -0000
Received: from [127.0.0.1] by smtp219.mail.bf1.yahoo.com with NNFMP;
	24 Jan 2014 04:50:16 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390539016; bh=Qw4IoGpbuQ5gAO+CMCNsWsygw+7OyYItlm+FEFftnJQ=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=AYB/083K9dO9c+WFLVhCa+lGR6rS3bY89NV0L9dLunyKp9CFlrvi9exOM27GXClwKpLHmhXTJ455DmrvNng/PHj1Ca/z/xMuv18NytWxv+OhmYET0b2UCk9t+uuIzekF/wW2+w5hRRWxnr1EQy94DfCONX5nfFglnNV08WzjYt8=
X-Yahoo-Newman-Id: 789887.28080.bm@smtp219.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: d4Ae6GsVM1lR7tfZD2wnPuCmxjJDt0HyIBF.iOGbG58u1Ol
	VlNUv3cKDCPGcpcAJVPUnewXOrtWGDKbx0gPqfVqeHx8hvOLarElwMKSLiPT
	KuRm1m.q0km_.hDkUN00q0jZxuELVEpI.lQF2MJQwZR.FS0hEhUYqlJkAkQr
	N.9sCOWTfRhzLV9NZPDiYAKsYhjOzWrt_ouN5en9v2FBvc0vl8BZQQFeGwnC
	OYI6MIxLoPILHLPKl4OGB76.o_4MRA2vthFDHT1hSAwLH7YsyRkQL2tEF60C
	1sopQ8u6p3.NpZG6rujX.u4Ewi10kc45bbyX4qn8UAmQsj3cYkgLE30V.hfq
	ixL_4771MUTUgiXbGeX1mwc485Qgv0IZ2vu3E4tfsMDvo3PA4pktH5IZ_lEL
	AvbSJCkngVWk87uqyAe5ERK6XFd2GJpOVtN27oH05geTkgK7uKmoAYBu7qGU
	QI.hiaFBcLhpfCY3StRkAbgLT.0g3R3XJCmVDc6pIiFvfEly1LfMyjeq_ZMk
	d.ehz3iEt.VyDkmGqox7iBf0oXKGENhX_qa4WL1wGsUr6_M9vvtnT9G.pTg- -
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp219.mail.bf1.yahoo.com with SMTP; 23 Jan 2014 20:50:16 -0800 PST
Message-ID: <1390539014.2357.6.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu, 23 Jan 2014 21:50:14 -0700
In-Reply-To: <52E1C563.7050406@citrix.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E1C563.7050406@citrix.com>
Content-Type: multipart/mixed; boundary="=-pA6/T9iFyANIn98F+0BY"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Fri, 24 Jan 2014 07:43:41 +0000
Cc: suravee.suthikulpanit@amd.com, Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-pA6/T9iFyANIn98F+0BY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Fri, 2014-01-24 at 01:44 +0000, Andrew Cooper wrote:
> On 24/01/2014 00:47, Eric Houby wrote:
> > On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >
> >> But you didn't turn on interrupt remapping, or it got forcibly
> >> disabled:
> >>
> >> [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
> >> AMD-Vi: Disabling interrupt remapping
> >
> > Jan,
> >
> > If there is not a specific switch required to enable interrupt remapping
> > while booting 3.12.7 on bare hardware, then it was forcibly disabled.
> > Would boot logs from xen 4.1.3 be helpful?  If so, they are attached.
> >
> >> If
> >> the kernel responds to that mentioned firmware bug by forcing
> >> interrupt remapping off, maybe we would have to do the same...
> >>
> >> Jan
> >
> >
> > That would be better than xen failing to boot.
> >
> >
> > Thanks,
> >
> > -Eric
> 
> You probably want "iommu=debug,verbose" rather than verbos
> 
> ~Andrew

oops.

The verbose version without a typo doesn't provide any additional
details.  I also tried iommu=verbose and the log was no different.  Is
the command different in 4.1.3?
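
For anyone reproducing this, the `iommu=` option goes on the hypervisor command line rather than the dom0 kernel line. On a GRUB 2 system that might look like the fragment below — a sketch only; the variable name and the paths are distro-dependent assumptions, not taken from this thread:

```shell
# /etc/default/grub (hypothetical Fedora/Debian-style layout).
# Xen options go on the hypervisor command line, kept separate from
# the dom0 kernel's own parameters:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048m dom0_max_vcpus=1 iommu=debug,verbose"
# Regenerate the config afterwards, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```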


 Eric



--=-pA6/T9iFyANIn98F+0BY
Content-Disposition: attachment; filename="dmesg413v.txt"
Content-Type: text/plain; name="dmesg413v.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 __  __            _  _    _   _____   ____     __      _ _____ 
 \ \/ /___ _ __   | || |  / | |___ /  |___ \   / _| ___/ |___  |
  \  // _ \ '_ \  | || |_ | |   |_ \ __ __) | | |_ / __| |  / / 
  /  \  __/ | | | |__   _|| |_ ___) |__/ __/ _|  _| (__| | / /  
 /_/\_\___|_| |_|    |_|(_)_(_)____/  |_____(_)_|  \___|_|/_/   
                                                                
(XEN) Xen version 4.1.3 (mockbuild@phx2.fedoraproject.org) (gcc version 4.7.0 20120507 (Red Hat 4.7.0-5) (GCC) ) Fri Aug 10 23:58:46 UTC 2012
(XEN) Latest ChangeSet: unavailable
(XEN) Bootloader: GRUB 2.00~beta4
(XEN) Command line: dom0_mem=2048m dom0_max_vcpus=1 iommu=debug,verbose
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) Domain heap initialised
(XEN) Processor #0 0:10 APIC version 16
(XEN) Processor #1 0:10 APIC version 16
(XEN) Processor #2 0:10 APIC version 16
(XEN) Processor #3 0:10 APIC version 16
(XEN) Processor #4 0:10 APIC version 16
(XEN) Processor #5 0:10 APIC version 16
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) Table is not found!
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.962 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) AMD-Vi: Enabling global vector map
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer appears to have unexpectedly wrapped 10 or more times.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 16 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) do_IRQ: 1.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 2.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 3.231 No irq handler for vector (irq -1)
(XEN) do_IRQ: 4.231 No irq handler for vector (irq -1)
(XEN) Brought up 6 CPUs
(XEN) do_IRQ: 5.231 No irq handler for vector (irq -1)
(XEN) Xenoprofile: AMD IBS detected (0x0000001f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23b7000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (496457 pages to be allocated)
(XEN)  Init. ramdisk: 000000024d349000->000000024ffff400
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823b7000
(XEN)  Init. ramdisk: ffffffff823b7000->ffffffff8506d400
(XEN)  Phys-Mach map: ffffffff8506e000->ffffffff8546e000
(XEN)  Start info:    ffffffff8546e000->ffffffff8546e4b4
(XEN)  Page tables:   ffffffff8546f000->ffffffff8549e000
(XEN)  Boot stack:    ffffffff8549e000->ffffffff8549f000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81cf4200
(XEN) Dom0 has maximum 1 VCPUs
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 220kB init memory.
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000abcd.
(XEN) physdev.c:164: dom0: wrong map_pirq type 3
(XEN) traps.c:2488:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88006ce6b000.

--=-pA6/T9iFyANIn98F+0BY
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-pA6/T9iFyANIn98F+0BY--



From xen-devel-bounces@lists.xen.org Fri Jan 24 07:59:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 07:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bfR-0007N4-LB; Fri, 24 Jan 2014 07:59:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W6bfP-0007Mw-OB
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 07:59:07 +0000
Received: from [85.158.137.68:24011] by server-9.bemta-3.messagelabs.com id
	EF/73-13104-A4D12E25; Fri, 24 Jan 2014 07:59:06 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390550345!7383569!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32243 invoked from network); 24 Jan 2014 07:59:06 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-15.tower-31.messagelabs.com with SMTP;
	24 Jan 2014 07:59:06 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 23 Jan 2014 23:59:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,711,1384329600"; d="scan'208";a="443933826"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 23 Jan 2014 23:59:04 -0800
Received: from fmsmsx115.amr.corp.intel.com (10.18.116.19) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 23 Jan 2014 23:59:03 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	fmsmsx115.amr.corp.intel.com (10.18.116.19) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 23 Jan 2014 23:59:04 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Fri, 24 Jan 2014 15:59:02 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: Status of Nested Virt in 4.4 (Was: Re: [Xen-devel] Xen 4.4
	development update)
Thread-Index: AQHPEyoXMu6lo7nxOkWoQ/Ns0loHnJqII+YAgAtpVFA=
Date: Fri, 24 Jan 2014 07:59:02 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C57F1@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
In-Reply-To: <1389951634.6697.43.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote on 2014-01-17:
> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>> As Andrew said, nested is still in an experimental stage, because
>> there are still lots of scenarios not covered in my testing. So it
>> may not be accurate to say it is well supported. But I hope people
>> know that nested is ready to use now, and I encourage them to try it
>> and report bugs to us to push nested virt forward.
> 
> Perhaps we could say it is "tech preview" rather than "experimental"?
> 
> What would really help is some clear guidance (i.e. docs, wiki page, a
> blog post or more than one of these) on:
>       * what configurations (L1 hyp/L2 guest) are supposed to work, are
>         tested and are considered Supported by you guys.
>       * what scenarios are expected to work, but have not been tested
>         (bug/success reports welcome etc)
>       * what scenarios are expected not to work but which we would like
>         to support at some point or are actively working on adding
>         support for
>       * what scenarios are not expected to work and no one is working on
> 
> I think documenting the things toward the top end of that list would
> be most valuable, the last one could be quite a long list ("everything
> else") and is maybe a bit silly to try and enumerate.
> 
> Some docs and a blog post to raise awareness of the feature would
> likely help push things forwards, do you think you could put something
> together? For the docs either send a docs patch or write something on
> the wiki (email me your wiki userid if you need to be granted write
> access), for a blog post contact the publicity@ list.
> 
> You could also consider writing some instructions for test days so that
> the community can do some testing; it's also a good way to gain
> visibility for a feature. It's probably too late to get that done for
> the one on Monday[0], but you should coordinate with Russ & Dario about
> a future one (a test day dedicated to nested virt is even a possibility).
> 

I have a draft document, but it seems I don't have access to add a wiki page. Do you know how to get it?

> Ian.
> 
> [0] http://wiki.xen.org/wiki/Xen_Test_Days
> 
>     http://www.xenproject.org/about/events/viewevent/90-xen-test-day-for-4-4-rc2.html
>     http://wiki.xen.org/wiki/Xen_4.4_RC2_test_instructions


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote on 2014-01-17:
> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>> As Andrew said, nested virtualization is still at an experimental
>> stage, because there are still many scenarios I have not covered in
>> my testing, so it may not be accurate to say it is well supported.
>> But I hope people know that nested virtualization is ready to use
>> now, and I encourage them to try it and report bugs to us to push
>> nested support forward.
> 
> Perhaps we could say it is "tech preview" rather than "experimental"?
> 
> What would really help is some clear guidance (i.e. docs, wiki page, a
> blog post or more than one of these) on:
>       * what configurations (L1 hyp/L2 guest) are supposed to work, are
>         tested and are considered Supported by you guys
>       * what scenarios are expected to work, but have not been tested
>         (bug/success reports welcome etc)
>       * what scenarios are expected not to work but which we would like
>         to support at some point or are actively working on adding
>         support for
>       * what scenarios are not expected to work and no one is working on
> 
> I think documenting the things toward the top end of that list would
> be most valuable, the last one could be quite a long list ("everything
> else") and is maybe a bit silly to try and enumerate.
> 
> Some docs and a blog post to raise awareness of the feature would
> likely help push things forwards, do you think you could put something
> together? For the docs either send a docs patch or write something on
> the wiki (email me your wiki userid if you need to be granted write
> access), for a blog post contact the publicity@ list.
> 
> You could also consider writing some instructions for test days so that
> the community can do some testing; it is also a good way to gain visibility
> for a feature. It's probably too late to get that done for the one on
> Monday[0] but you should coordinate with Russ & Dario about a future one
> (a test day dedicated to nested virt is even a possibility)
> 

I have a draft document, but it seems I don't have access to add a wiki page. Do you know how I can get it?

> Ian.
> 
> [0] http://wiki.xen.org/wiki/Xen_Test_Days
> 
>     http://www.xenproject.org/about/events/viewevent/90-xen-test-day-for-4-4-rc2.html
>     http://wiki.xen.org/wiki/Xen_4.4_RC2_test_instructions


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 08:00:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 08:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bgv-0007m7-Qu; Fri, 24 Jan 2014 08:00:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6bgu-0007lz-Ta
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 08:00:41 +0000
Received: from [85.158.143.35:47187] by server-2.bemta-4.messagelabs.com id
	E2/AA-11386-8AD12E25; Fri, 24 Jan 2014 08:00:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390550439!480052!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19883 invoked from network); 24 Jan 2014 08:00:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 08:00:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 08:01:13 +0000
Message-Id: <52E22BB5020000780011673A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 08:00:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <xen-devel@lists.xenproject.org>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
In-Reply-To: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7043CDB5.1__="
Cc: Matthew Daley <mattjd@gmail.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] xen/unlz4: always set an error return code on
	failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7043CDB5.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"ret", being set to -1 early on, gets cleared by the first invocation
of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
subsequent failures wouldn't be noticed by the caller without setting
it back to -1 right after those calls.

Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728

Reported-by: Matthew Daley <mattjd@gmail.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/unlz4.c
+++ b/xen/common/unlz4.c
@@ -133,6 +133,7 @@ STATIC int INIT unlz4(unsigned char *inp
 			goto exit_2;
 		}
 
+		ret = -1;
 		if (flush && flush(outp, dest_len) != dest_len)
 			goto exit_2;
 		if (output)




--=__Part7043CDB5.1__=
Content-Type: text/plain; name="unlz4-errors-dom0.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="unlz4-errors-dom0.patch"

xen/unlz4: always set an error return code on failures

"ret", being set to -1 early on, gets cleared by the first invocation
of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
subsequent failures wouldn't be noticed by the caller without setting
it back to -1 right after those calls.

Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728

Reported-by: Matthew Daley <mattjd@gmail.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/unlz4.c
+++ b/xen/common/unlz4.c
@@ -133,6 +133,7 @@ STATIC int INIT unlz4(unsigned char *inp
 			goto exit_2;
 		}
 
+		ret = -1;
 		if (flush && flush(outp, dest_len) != dest_len)
 			goto exit_2;
 		if (output)
--=__Part7043CDB5.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7043CDB5.1__=--


From xen-devel-bounces@lists.xen.org Fri Jan 24 08:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 08:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6bhe-0007yC-Fk; Fri, 24 Jan 2014 08:01:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6bhc-0007y5-R3
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 08:01:25 +0000
Received: from [85.158.143.35:20086] by server-2.bemta-4.messagelabs.com id
	77/FB-11386-4DD12E25; Fri, 24 Jan 2014 08:01:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390550483!476257!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13041 invoked from network); 24 Jan 2014 08:01:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 08:01:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 08:02:01 +0000
Message-Id: <52E22BE1020000780011673E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 08:01:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <xen-devel@lists.xenproject.org>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
	<1383870390-9273-2-git-send-email-mattjd@gmail.com>
In-Reply-To: <1383870390-9273-2-git-send-email-mattjd@gmail.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0437B9C1.3__="
Cc: Matthew Daley <mattjd@gmail.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxc/unlz4: always set an error return code on
 failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0437B9C1.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"ret", being set to -1 early on, gets cleared by the first invocation
of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
subsequent failures wouldn't be noticed by the caller without setting
it back to -1 right after those calls.

Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728

Reported-by: Matthew Daley <mattjd@gmail.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxc/xc_dom_decompress_lz4.c
+++ b/tools/libxc/xc_dom_decompress_lz4.c
@@ -98,6 +98,7 @@ int xc_try_lz4_decode(
 			goto exit_2;
 		}
 
+		ret = -1;
 		outp += dest_len;
 		size -= chunksize;
 




--=__Part0437B9C1.3__=
Content-Type: text/plain; name="unlz4-errors-domU.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="unlz4-errors-domU.patch"

libxc/unlz4: always set an error return code on failures

"ret", being set to -1 early on, gets cleared by the first invocation
of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
subsequent failures wouldn't be noticed by the caller without setting
it back to -1 right after those calls.

Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728

Reported-by: Matthew Daley <mattjd@gmail.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libxc/xc_dom_decompress_lz4.c
+++ b/tools/libxc/xc_dom_decompress_lz4.c
@@ -98,6 +98,7 @@ int xc_try_lz4_decode(
 			goto exit_2;
 		}
 
+		ret = -1;
 		outp += dest_len;
 		size -= chunksize;
 
--=__Part0437B9C1.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0437B9C1.3__=--


From xen-devel-bounces@lists.xen.org Fri Jan 24 08:23:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 08:23:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6c2q-0008U4-VI; Fri, 24 Jan 2014 08:23:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6c2p-0008Tz-SA
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 08:23:20 +0000
Received: from [193.109.254.147:59269] by server-4.bemta-14.messagelabs.com id
	02/A2-03916-7F222E25; Fri, 24 Jan 2014 08:23:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390551782!10601360!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1093 invoked from network); 24 Jan 2014 08:23:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 08:23:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,711,1384300800"; d="scan'208";a="96087544"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 08:23:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 03:23:01 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1W6c2W-0008U4-8P;
	Fri, 24 Jan 2014 08:23:00 +0000
Message-ID: <1390551779.10695.117.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Fri, 24 Jan 2014 08:22:59 +0000
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C57F1@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C57F1@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 07:59 +0000, Zhang, Yang Z wrote:
> I have a draft document, but it seems I don't have access to add a
> wiki page. Do you know how I can get it?

Either fill out the form linked from the front page of the wiki, or just
send me your wiki login name privately.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 08:34:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 08:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6cDi-0000a1-HR; Fri, 24 Jan 2014 08:34:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6cDh-0000Zw-Ad
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 08:34:33 +0000
Received: from [193.109.254.147:8423] by server-4.bemta-14.messagelabs.com id
	A2/1F-03916-89522E25; Fri, 24 Jan 2014 08:34:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390552471!12912851!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29741 invoked from network); 24 Jan 2014 08:34:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 08:34:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 08:35:32 +0000
Message-Id: <52E233A60200007800116776@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 08:34:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
In-Reply-To: <1390524450.2281.12.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
>> But you didn't turn on interrupt remapping, or it got forcibly
>> disabled:
>> 
>> [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
>> AMD-Vi: Disabling interrupt remapping
> 
> If there is not a specific switch required to enable interrupt remapping
> while booting 3.12.7 on bare hardware, then it was forcibly disabled.

Indeed it was - I just looked more closely at that code. It seems
the default changed at some point after I last looked.

> Would boot logs from xen 4.1.3 be helpful?  If so, they are attached.

No. The global remapping tables approach is insecure, and hence
was a wrong route in any event. We should actually get rid of the
respective command line option sooner rather than later.

>> If
>> the kernel responds to that mentioned firmware bug by forcing
>> interrupt remapping off, maybe we would have to do the same...
> 
> That would be better than xen failing to boot.

But you realize that, following precedents elsewhere in the
IOMMU code, we would disable the IOMMU as a whole rather
than just interrupt remapping.

But yes, looking at the Linux side code, I guess we need to do
so. It would be nice if you could confirm that the system comes up
fine (and hopefully without IOMMU faults) with
"iommu=no-intremap,debug" as well as with "iommu=off".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 09:02:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 09:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6ce3-0001QL-Od; Fri, 24 Jan 2014 09:01:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6ce3-0001QG-6W
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 09:01:47 +0000
Received: from [85.158.137.68:38819] by server-1.bemta-3.messagelabs.com id
	FD/9B-29598-AFB22E25; Fri, 24 Jan 2014 09:01:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390554105!11032582!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20336 invoked from network); 24 Jan 2014 09:01:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 09:01:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 09:01:44 +0000
Message-Id: <52E23A0502000078001167D0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 09:01:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
In-Reply-To: <1390524450.2281.12.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
>> If
>> the kernel responds to that mentioned firmware bug by forcing
>> interrupt remapping off, maybe we would have to do the same...
> 
> That would be better than xen failing to boot.

You may also want to try booting with "ivrs_ioapic[0]=00:14.0" or
"ivrs_ioapic[1]=00:14.0", but I'm afraid that would still not work
properly, as the base-system-related ACPI tables are also lacking
that second IO-APIC (and I'm not convinced that adding a
respective command line option would help either).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 09:21:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 09:21:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6cxJ-0001zI-14; Fri, 24 Jan 2014 09:21:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6cxH-0001zD-ET
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 09:21:39 +0000
Received: from [85.158.137.68:35860] by server-14.bemta-3.messagelabs.com id
	F7/7E-06105-2A032E25; Fri, 24 Jan 2014 09:21:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390555296!11053246!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1346 invoked from network); 24 Jan 2014 09:21:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 09:21:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,711,1384300800"; d="scan'208";a="96100279"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 09:21:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 04:21:35 -0500
Message-ID: <1390555293.2124.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@linux.com>
Date: Fri, 24 Jan 2014 09:21:33 +0000
In-Reply-To: <1390505326-9368-1-git-send-email-msw@linux.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
> From: Matt Rushton <mrushton@amazon.com>
> 
> Currently shrink_free_pagepool() is called before the pages used for
> persistent grants are released via free_persistent_gnts(). This
> results in a memory leak when a VBD that uses persistent grants is
> torn down.

This may well be the explanation for the memory leak I was observing on
ARM last night. I'll give it a go and let you know.

> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: xen-devel@lists.xen.org
> Cc: Anthony Liguori <aliguori@amazon.com>
> Signed-off-by: Matt Rushton <mrushton@amazon.com>
> Signed-off-by: Matt Wilson <msw@amazon.com>
> ---
>  drivers/block/xen-blkback/blkback.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..30ef7b3 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -625,9 +625,6 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>  
> -	/* Since we are shutting down remove all pages from the buffer */
> -	shrink_free_pagepool(blkif, 0 /* All */);
> -
>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -636,6 +633,9 @@ purge_gnt_list:
>  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
>  	blkif->persistent_gnt_c = 0;
>  
> +	/* Since we are shutting down remove all pages from the buffer */
> +	shrink_free_pagepool(blkif, 0 /* All */);
> +
>  	if (log_stats)
>  		print_stats(blkif);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 09:23:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 09:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6cyb-00023J-Hj; Fri, 24 Jan 2014 09:23:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6cya-00023C-3S
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 09:23:00 +0000
Received: from [85.158.139.211:17154] by server-10.bemta-5.messagelabs.com id
	CC/AB-01405-3F032E25; Fri, 24 Jan 2014 09:22:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390555377!11661454!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9973 invoked from network); 24 Jan 2014 09:22:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 09:22:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,711,1384300800"; d="scan'208";a="96100513"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 09:22:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 04:22:55 -0500
Message-ID: <1390555374.2124.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 24 Jan 2014 09:22:54 +0000
In-Reply-To: <52E22BE1020000780011673E@nat28.tlf.novell.com>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
	<1383870390-9273-2-git-send-email-mattjd@gmail.com>
	<52E22BE1020000780011673E@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Matthew Daley <mattjd@gmail.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc/unlz4: always set an error return
	code on failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 08:01 +0000, Jan Beulich wrote:
> "ret", being set to -1 early on, gets cleared by the first invocation
> of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
> subsequent failures wouldn't be noticed by the caller without setting
> it back to -1 right after those calls.
> 
> Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
> 
> Reported-by: Matthew Daley <mattjd@gmail.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 09:23:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 09:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6czC-00028g-Ew; Fri, 24 Jan 2014 09:23:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6czB-00028V-Vc
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 09:23:38 +0000
Received: from [85.158.137.68:7268] by server-13.bemta-3.messagelabs.com id
	11/1A-28603-91132E25; Fri, 24 Jan 2014 09:23:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390555416!11080161!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24613 invoked from network); 24 Jan 2014 09:23:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 09:23:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 09:23:35 +0000
Message-Id: <52E23F2602000078001167EC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 09:23:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52DE4BD8.7060209@citrix.com> <52E102F4.3060503@citrix.com>
	<52E11EC80200007800116238@nat28.tlf.novell.com>
	<52E14EA4.6010009@citrix.com>
In-Reply-To: <52E14EA4.6010009@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.01.14 at 18:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> The value of time read from hvm_get_guest_time() resets with a new
> domid, making it an inappropriate source of time for the described
> function of the MSR.
> 
> I suspect Windows 8 only notices at first on migration as I believe that
> it is the first case where the generation ID is supposed to change and
> signal a reset of state.  The detection of the failure is actually
> further complicated as there appears to be a race condition between the
> guest tools resetting the clock back to the correct value, and the DHCP
> lease being flushed.  XenRT only notices the failure if the DHCP lease
> is actually lost (thus XenRT can't communicate with its xmlrpc daemon
> inside the VM), and doesn't directly notice the forward/backward stepping
> in time.
> 
> Anyway - please revert the patch - it will be a non-trivial change to
> expose an appropriate source of time to be consumed by this MSR.

Done, albeit not completely - I left the #define-s in place.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 09:27:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 09:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6d3A-0002P8-CJ; Fri, 24 Jan 2014 09:27:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6d39-0002Ow-66
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 09:27:43 +0000
Received: from [85.158.139.211:44300] by server-3.bemta-5.messagelabs.com id
	59/AE-04773-E0232E25; Fri, 24 Jan 2014 09:27:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390555661!11679375!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25219 invoked from network); 24 Jan 2014 09:27:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 09:27:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 09:27:41 +0000
Message-Id: <52E2401C0200007800116802@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 09:27:40 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,<ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E1C563.7050406@citrix.com>
	<1390539014.2357.6.camel@astar.houby.net>
In-Reply-To: <1390539014.2357.6.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 05:50, Eric Houby <ehouby@yahoo.com> wrote:
>> You probably want "iommu=debug,verbose" rather than verbos
>> 
>> ~Andrew
> 
> oops.
> 
> The verbose version without a typo doesn't provide any additional
> details.  I also tried iommu=verbose and the log was no different.  Is
> the command different in 4.1.3?

Right - we made "debug" imply "verbose" a while ago.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:12:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:12:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dkF-0003iR-FA; Fri, 24 Jan 2014 10:12:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1W6dkD-0003iM-Vl
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:12:14 +0000
Received: from [193.109.254.147:37631] by server-4.bemta-14.messagelabs.com id
	72/13-03916-D7C32E25; Fri, 24 Jan 2014 10:12:13 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390558331!12865714!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17290 invoked from network); 24 Jan 2014 10:12:12 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-27.messagelabs.com with SMTP;
	24 Jan 2014 10:12:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384272000"; 
   d="scan'208";a="9451151"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 24 Jan 2014 18:08:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0OAC7ca010554;
	Fri, 24 Jan 2014 18:12:07 +0800
Received: from [10.167.226.103] ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012418104034-1344159 ;
	Fri, 24 Jan 2014 18:10:40 +0800 
Message-ID: <52E23D01.50204@cn.fujitsu.com>
Date: Fri, 24 Jan 2014 18:14:25 +0800
From: Lai Jiangshan <laijs@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc14 Thunderbird/3.1.4
MIME-Version: 1.0
To: Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/24 18:10:40,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/24 18:10:41,
	Serialize complete at 2014/01/24 18:10:41
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/13 V6] Remus/Libxl: Network buffering
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/21/2014 05:05 PM, Lai Jiangshan wrote:

>>
> 
> Changes in V6:
>   Applied Ian Jackson's comments of V5 series.
>   the [PATCH 2/4 V5] is split by small functionalities.
> 
>   [PATCH 4/4 V5] --> [PATCH 13/13] netbuffer is default enabled.
> 

Ping!

Ian Jackson, any suggestion?

Shriram, could you review it?

thanks,
Lai

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:14:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dmP-0003ns-2u; Fri, 24 Jan 2014 10:14:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6dmN-0003nk-ST
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:14:28 +0000
Received: from [85.158.139.211:16907] by server-16.bemta-5.messagelabs.com id
	47/31-11843-30D32E25; Fri, 24 Jan 2014 10:14:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390558464!11704758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12829 invoked from network); 24 Jan 2014 10:14:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:14:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94077628"
From xen-devel-bounces@lists.xen.org Fri Jan 24 10:14:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dmP-0003ns-2u; Fri, 24 Jan 2014 10:14:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6dmN-0003nk-ST
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:14:28 +0000
Received: from [85.158.139.211:16907] by server-16.bemta-5.messagelabs.com id
	47/31-11843-30D32E25; Fri, 24 Jan 2014 10:14:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390558464!11704758!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12829 invoked from network); 24 Jan 2014 10:14:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:14:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94077628"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 10:14:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:14:12 -0500
Message-ID: <1390558450.2124.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Arianna Avanzini <avanzini.arianna@gmail.com>
Date: Fri, 24 Jan 2014 10:14:10 +0000
In-Reply-To: <52E239AB.8040906@gmail.com>
References: <52E239AB.8040906@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Paolo Gai <pj@evidence.eu.com>, Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory
 for dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 11:00 +0100, Arianna Avanzini wrote:
> I noticed that Xen 4.4 fails to boot the dom0 if more than 256MB of RAM are
> assigned to it by providing the dom0_mem boot option. The error message produced
> during the boot process is the following.

This is an unfortunate side effect of the use of the 1:1 mapping. The
threads:
        "Master not working on Allwinner A20"
        "create multiple banks for dom0 in 1:1 mapping if necessary"
have some more details, but in short: the allocation of dom0 memory needs
to be done in a single chunk, and because of the way the Xen allocator
works the chunk will also be aligned to its own size -- this creates some
limitations on the size of the region vs. what memory is free at start of
day.

I'm afraid this probably won't be solved for Xen 4.4, since the change
is likely to be rather intrusive.

As a workaround you could try changing the load addresses of Xen and the
kernel, dtb etc used by u-boot to pack them towards the top of RAM. This
*should* result in the entire lower half of RAM being available which
will make it more likely to achieve the necessary alignment constraints
for a dom0 taking up to half of the system RAM. I've not actually tried
this but I'd recommend trying the following addresses from the top of
RAM:
	-2M: Leave free for Xen to relocate to
	-4M: dom0 kernel
	-6M: DTB
	-8M: Initial Xen load address

If you have an initrd then put it between the dom0 kernel and the dtb and
bump everything else down; likewise, if anything is bigger than 2M, round
its slot up to the next multiple of 2M and push everything below it down
to accommodate it.
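
The suggested layout above can be computed mechanically (a sketch under
the assumption of a cubieboard2 with 1GB of RAM at 0x40000000; the
`pack_from_top` helper is invented for the example):

```python
# Compute top-of-RAM load addresses by counting down in 2MB slots,
# rounding each item's size up to a whole slot as described above.
# Board parameters are assumptions, not taken from the thread.

MB = 1 << 20

def pack_from_top(ram_base, ram_size, items, slot=2 * MB):
    """Return {name: load_address}, packing items downward from RAM top."""
    addr = ram_base + ram_size
    layout = {}
    for name, size in items:
        rounded = -(-size // slot) * slot      # round size up to a slot
        addr -= rounded
        layout[name] = addr
    return layout

items = [("xen-reloc", 2 * MB),   # -2M: left free for Xen to relocate to
         ("dom0-kernel", 2 * MB), # -4M: dom0 kernel
         ("dtb", 2 * MB),         # -6M: DTB
         ("xen-load", 2 * MB)]    # -8M: initial Xen load address
layout = pack_from_top(0x40000000, 1024 * MB, items)
for name, addr in layout.items():
    print(f"{name}: {addr:#x}")
```

With those parameters the initial Xen load address comes out at
0x7F800000 and everything below 0x7F800000 stays free for the dom0
allocation.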

Also consider that for a Xen system it is common to only give dom0 a
fairly small fraction of the system RAM in order to leave as much as
possible for guest domains. What target amount of dom0 memory are you
aiming for? 256M or even 128M is probably plenty if you are going to run
a few domains.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:20:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dsI-0004Bs-1j; Fri, 24 Jan 2014 10:20:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avanzini.arianna@gmail.com>) id 1W6dYn-0003Tr-G2
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:00:28 +0000
Received: from [193.109.254.147:41801] by server-4.bemta-14.messagelabs.com id
	03/A0-03916-8B932E25; Fri, 24 Jan 2014 10:00:24 +0000
X-Env-Sender: avanzini.arianna@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390557618!12840895!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 812 invoked from network); 24 Jan 2014 10:00:18 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:00:18 -0000
Received: by mail-ee0-f45.google.com with SMTP id b15so856121eek.32
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 02:00:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:reply-to:user-agent:mime-version:to:cc:subject
	:content-type; bh=fVkYXSgbY2Axy/QZz0L0zy7TVSVvgpj0sSVJD6S3DwE=;
	b=xCfMKAhWdpGx+8dzMpWJMDRhuIutw9fFIj2jdVbE3LbB81iko5Ahk3fqTEyX51b7lk
	uBI2Dgku0SRmQm/uPNvRIjwbgc1ZcjbpitvP3FYnST6Z3DcKXwDt/cGF+ZyE3aExQWV0
	ZHYG5WYXOHeMOOidbUZhSa2VYls5O5xGOMAoFSyCCSKUPJBsZFVnMsBDoigT4PAIX23E
	vnWDoZQIge/eEMovm/JjERgEBVoxcwG6LKINPN6N1lLeHvxY7Et/jyCrcyiJp0SM8BZP
	DcYGWORff8tjux4x/tNk5y+Sqv6+kajur1fFOPFP2s0hZ6sqkI3HPKKG0d4zFC8wX+vb
	K+Hg==
X-Received: by 10.14.5.67 with SMTP id 43mr1630375eek.93.1390557618473;
	Fri, 24 Jan 2014 02:00:18 -0800 (PST)
Received: from [192.168.0.2]
	(host252-40-dynamic.56-79-r.retail.telecomitalia.it. [79.56.40.252])
	by mx.google.com with ESMTPSA id 4sm1563455eed.14.2014.01.24.02.00.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 02:00:16 -0800 (PST)
Message-ID: <52E239AB.8040906@gmail.com>
Date: Fri, 24 Jan 2014 11:00:11 +0100
From: Arianna Avanzini <avanzini.arianna@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed; boundary="------------050705070304060703090801"
X-Mailman-Approved-At: Fri, 24 Jan 2014 10:20:33 +0000
Cc: Paolo Gai <pj@evidence.eu.com>, Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory for
 dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Arianna Avanzini <avanzini.arianna@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050705070304060703090801
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

my name is Arianna and, for my master's thesis at the University of Modena
(Italy, with Paolo Valente [1]), I'm porting an automotive-grade real-time
OS (Evidence's Erika [2], [3]) to Xen on ARM. I'm running Xen on a
cubieboard2, featuring an Allwinner A20 chip and 1GB of RAM.

I compiled both Xen (from staging; my current head is commit 231d7f4) and
Linux (from linux-sunxi/sunxi-devel merged with torvalds/master; my head is
commit df32e43) from source, the latter as a dom0 kernel. I was also able
to start a PV domU with the same kernel, which I built with both dom0 and
domU options enabled.

I noticed that Xen 4.4 fails to boot the dom0 if more than 256MB of RAM are
assigned to it by providing the dom0_mem boot option. The error message produced
during the boot process is the following.

Xen heap: 0000000076000000-000000007e000000 (32768 pages)
Dom heap: 229376 pages
Looking for UART console serial0
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (ava@) (armv7a-hardfloat-linux-gnueabi-gcc (Gentoo
4.7.3 p1.0, pie-0.5.5) 4.7.3) debug=y Fri Jan 24 10:46:18 CET 2014
(XEN) Latest ChangeSet: Thu Jan 23 10:30:08 2014 +0100 git:231d7f4
(XEN) Console output is synchronous.
(XEN) Processor: 410fc074: "ARM Limited", variant: 0x0, part 0xc07, rev 0x4
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00001131:00011011
(XEN)     Instruction Sets: AArch32 Thumb Thumb-2 ThumbEE Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 02010555
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10101105 40000000 01240000 02102211
(XEN)  ISA Features: 02101110 13112111 21232041 11112131 10011142 00000000
(XEN) Platform: Allwinner A20
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27
(XEN) Using generic timer at 24000 KHz
(XEN) GIC initialization:
(XEN)         gic_dist_addr=0000000001c81000
(XEN)         gic_cpu_addr=0000000001c82000
(XEN)         gic_hyp_addr=0000000001c84000
(XEN)         gic_vcpu_addr=0000000001c86000
(XEN)         gic_maintenance_irq=25
(XEN) GIC: 160 lines, 2 cpus, secure (IID 0100143b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) VFP implementer 0x41 architecture 2 part 0x30 variant 0x7 rev 0x4
(XEN) Bringing up CPU1
(XEN) Failed to bring up CPU1
(XEN) Failed to bring up CPU 1 (error -19)
(XEN) Brought up 1 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Failed to allocate contiguous memory for dom0
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

After a short search on the xen-devel mailing list, I found a discussion
which I believe might be related to this issue:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg02239.html.
However, I wasn't able to extract a solution to the issue from that thread
(nor even to understand whether it is a bug or a feature, actually).
Attached to this mail you can find the complete logs of a successful boot
(with dom0_mem=256M) and of a failing boot (with dom0_mem=512M). If it is
of interest, I'd be glad to help investigate, e.g., by providing further
logs or by testing patches.

Thank you,
Arianna



[1] http://www.unimore.it/
[2] http://www.evidence.eu.com/
[3] http://erika.tuxfamily.org/drupal/


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@gmail.com
 * 73628@studenti.unimore.it
 */

--------------050705070304060703090801
Content-Type: text/x-log;
 name="failing-boot-dom0_mem-512.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="failing-boot-dom0_mem-512.log"

WGVuIGhlYXA6IDAwMDAwMDAwNzYwMDAwMDAtMDAwMDAwMDA3ZTAwMDAwMCAoMzI3NjggcGFn
ZXMpCkRvbSBoZWFwOiAyMjkzNzYgcGFnZXMKTG9va2luZyBmb3IgVUFSVCBjb25zb2xlIHNl
cmlhbDAKIFhlbiA0LjQtcmMyCihYRU4pIFhlbiB2ZXJzaW9uIDQuNC1yYzIgKGF2YUApIChh
cm12N2EtaGFyZGZsb2F0LWxpbnV4LWdudWVhYmktZ2NjIChHZW50b28gNC43LjMgcDEuMCwg
cGllLTAuNS41KSA0LjcuMykgZGVidWc9eSBGcmkgSmFuIDI0IDEwOjQ2OjE4IENFVCAyMDE0
CihYRU4pIExhdGVzdCBDaGFuZ2VTZXQ6IFRodSBKYW4gMjMgMTA6MzA6MDggMjAxNCArMDEw
MCBnaXQ6MjMxZDdmNAooWEVOKSBDb25zb2xlIG91dHB1dCBpcyBzeW5jaHJvbm91cy4KKFhF
TikgUHJvY2Vzc29yOiA0MTBmYzA3NDogIkFSTSBMaW1pdGVkIiwgdmFyaWFudDogMHgwLCBw
YXJ0IDB4YzA3LCByZXYgMHg0CihYRU4pIDMyLWJpdCBFeGVjdXRpb246CihYRU4pICAgUHJv
Y2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMTowMDAxMTAxMQooWEVOKSAgICAgSW5zdHJ1Y3Rp
b24gU2V0czogQUFyY2gzMiBUaHVtYiBUaHVtYi0yIFRodW1iRUUgSmF6ZWxsZQooWEVOKSAg
ICAgRXh0ZW5zaW9uczogR2VuZXJpY1RpbWVyIFNlY3VyaXR5CihYRU4pICAgRGVidWcgRmVh
dHVyZXM6IDAyMDEwNTU1CihYRU4pICAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMAoo
WEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczogMTAxMDExMDUgNDAwMDAwMDAgMDEyNDAw
MDAgMDIxMDIyMTEKKFhFTikgIElTQSBGZWF0dXJlczogMDIxMDExMTAgMTMxMTIxMTEgMjEy
MzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAwMDAKKFhFTikgUGxhdGZvcm06IEFsbHdp
bm5lciBBMjAKKFhFTikgR2VuZXJpYyBUaW1lciBJUlE6IHBoeXM9MzAgaHlwPTI2IHZpcnQ9
MjcKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBhdCAyNDAwMCBLSHoKKFhFTikgR0lDIGlu
aXRpYWxpemF0aW9uOgooWEVOKSAgICAgICAgIGdpY19kaXN0X2FkZHI9MDAwMDAwMDAwMWM4
MTAwMAooWEVOKSAgICAgICAgIGdpY19jcHVfYWRkcj0wMDAwMDAwMDAxYzgyMDAwCihYRU4p
ICAgICAgICAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAwMDFjODQwMDAKKFhFTikgICAgICAgICBn
aWNfdmNwdV9hZGRyPTAwMDAwMDAwMDFjODYwMDAKKFhFTikgICAgICAgICBnaWNfbWFpbnRl
bmFuY2VfaXJxPTI1CihYRU4pIEdJQzogMTYwIGxpbmVzLCAyIGNwdXMsIHNlY3VyZSAoSUlE
IDAxMDAxNDNiKS4KKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxl
ciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDE2IEtpQi4KKFhF
TikgVkZQIGltcGxlbWVudGVyIDB4NDEgYXJjaGl0ZWN0dXJlIDIgcGFydCAweDMwIHZhcmlh
bnQgMHg3IHJldiAweDQKKFhFTikgQnJpbmdpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8g
YnJpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8gYnJpbmcgdXAgQ1BVIDEgKGVycm9yIC0x
OSkKKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAg
KioqCihYRU4pIAooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqCihYRU4pIFBhbmljIG9uIENQVSAwOgooWEVOKSBGYWlsZWQgdG8gYWxsb2NhdGUgY29u
dGlndW91cyBtZW1vcnkgZm9yIGRvbTAKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4u
LgoK
--------------050705070304060703090801
Content-Type: text/x-log;
 name="successful-boot-dom0_mem-256.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="successful-boot-dom0_mem-256.log"

KFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQKIFhlbiA0LjQtcmMyCihYRU4pIFhlbiB2
ZXJzaW9uIDQuNC1yYzIgKGF2YUApIChhcm12N2EtaGFyZGZsb2F0LWxpbnV4LWdudWVhYmkt
Z2NjIChHZW50b28gNC43LjMgcDEuMCwgcGllLTAuNS41KSA0LjcuMykgZGVidWc9eSBGcmkg
SmFuIDI0IDEwOjQwOjM3IENFVCAyMDE0CihYRU4pIExhdGVzdCBDaGFuZ2VTZXQ6IFRodSBK
YW4gMjMgMTA6MzA6MDggMjAxNCArMDEwMCBnaXQ6MjMxZDdmNAooWEVOKSBDb25zb2xlIG91
dHB1dCBpcyBzeW5jaHJvbm91cy4KKFhFTikgUHJvY2Vzc29yOiA0MTBmYzA3NDogIkFSTSBM
aW1pdGVkIiwgdmFyaWFudDogMHgwLCBwYXJ0IDB4YzA3LCByZXYgMHg0CihYRU4pIDMyLWJp
dCBFeGVjdXRpb246CihYRU4pICAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMTowMDAx
MTAxMQooWEVOKSAgICAgSW5zdHJ1Y3Rpb24gU2V0czogQUFyY2gzMiBUaHVtYiBUaHVtYi0y
IFRodW1iRUUgSmF6ZWxsZQooWEVOKSAgICAgRXh0ZW5zaW9uczogR2VuZXJpY1RpbWVyIFNl
Y3VyaXR5CihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAyMDEwNTU1CihYRU4pICAgQXV4aWxp
YXJ5IEZlYXR1cmVzOiAwMDAwMDAwMAooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczog
MTAxMDExMDUgNDAwMDAwMDAgMDEyNDAwMDAgMDIxMDIyMTEKKFhFTikgIElTQSBGZWF0dXJl
czogMDIxMDExMTAgMTMxMTIxMTEgMjEyMzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAw
MDAKKFhFTikgUGxhdGZvcm06IEFsbHdpbm5lciBBMjAKKFhFTikgR2VuZXJpYyBUaW1lciBJ
UlE6IHBoeXM9MzAgaHlwPTI2IHZpcnQ9MjcKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBh
dCAyNDAwMCBLSHoKKFhFTikgR0lDIGluaXRpYWxpemF0aW9uOgooWEVOKSAgICAgICAgIGdp
Y19kaXN0X2FkZHI9MDAwMDAwMDAwMWM4MTAwMAooWEVOKSAgICAgICAgIGdpY19jcHVfYWRk
cj0wMDAwMDAwMDAxYzgyMDAwCihYRU4pICAgICAgICAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAw
MDFjODQwMDAKKFhFTikgICAgICAgICBnaWNfdmNwdV9hZGRyPTAwMDAwMDAwMDFjODYwMDAK
KFhFTikgICAgICAgICBnaWNfbWFpbnRlbmFuY2VfaXJxPTI1CihYRU4pIEdJQzogMTYwIGxp
bmVzLCAyIGNwdXMsIHNlY3VyZSAoSUlEIDAxMDAxNDNiKS4KKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29u
c29sZSByaW5nIG9mIDE2IEtpQi4KKFhFTikgVkZQIGltcGxlbWVudGVyIDB4NDEgYXJjaGl0
ZWN0dXJlIDIgcGFydCAweDMwIHZhcmlhbnQgMHg3IHJldiAweDQKKFhFTikgQnJpbmdpbmcg
dXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8gYnJpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8g
YnJpbmcgdXAgQ1BVIDEgKGVycm9yIC0xOSkKKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhF
TikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pIFBvcHVsYXRlIFAyTSAweDYwMDAw
MDAwLT4weDcwMDAwMDAwICgxOjEgbWFwcGluZyBmb3IgZG9tMCkKKFhFTikgTG9hZGluZyBr
ZXJuZWwgZnJvbSBib290IG1vZHVsZSAyCihYRU4pIExvYWRpbmcgekltYWdlIGZyb20gMDAw
MDAwMDA1MDAwMDAwMCB0byAwMDAwMDAwMDY3YzAwMDAwLTAwMDAwMDAwNjdmMTMzMDAKKFhF
TikgTG9hZGluZyBkb20wIERUQiB0byAweDAwMDAwMDAwNjgwMDAwMDAtMHgwMDAwMDAwMDY4
MDAzYmQ1CihYRU4pIFNjcnViYmluZyBGcmVlIFJBTTogLi4uLi4uLi5kb25lLgooWEVOKSBJ
bml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4K
KFhFTikgU3RkLiBMb2dsZXZlbDogQWxsCihYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGwKKFhF
TikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVO
KSAqKioqKioqIFdBUk5JTkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTCihYRU4p
ICoqKioqKiogVGhpcyBvcHRpb24gaXMgaW50ZW5kZWQgdG8gYWlkIGRlYnVnZ2luZyBvZiBY
ZW4gYnkgZW5zdXJpbmcKKFhFTikgKioqKioqKiB0aGF0IGFsbCBvdXRwdXQgaXMgc3luY2hy
b25vdXNseSBkZWxpdmVyZWQgb24gdGhlIHNlcmlhbCBsaW5lLgooWEVOKSAqKioqKioqIEhv
d2V2ZXIgaXQgY2FuIGludHJvZHVjZSBTSUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVj
dAooWEVOKSAqKioqKioqIHRpbWVrZWVwaW5nLiBJdCBpcyBOT1QgcmVjb21tZW5kZWQgZm9y
IHByb2R1Y3Rpb24gdXNlIQooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCihYRU4pIDMuLi4gMi4uLiAxLi4uIAooWEVOKSAqKiogU2VyaWFs
IGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlu
cHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjY0a0IgaW5pdCBtZW1vcnkuClsgICAgMC4wMDAw
MDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4IChvcmRlcjogNSwg
MTMxMDcyIGJ5dGVzKQpbICAgIDAuMDAwMDAwXSBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDE2Mzg0IChvcmRlcjogNCwgNjU1MzYgYnl0ZXMpClsgICAgMC4wMDAwMDBdIGFs
bG9jYXRlZCA1MjQyODggYnl0ZXMgb2YgcGFnZV9jZ3JvdXAKWyAgICAwLjAwMDAwMF0gcGxl
YXNlIHRyeSAnY2dyb3VwX2Rpc2FibGU9bWVtb3J5JyBvcHRpb24gaWYgeW91IGRvbid0IHdh
bnQgbWVtb3J5IGNncm91cHMKWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAyNTI4NDRLLzI2MjE0
NEsgYXZhaWxhYmxlICg0Njg2SyBrZXJuZWwgY29kZSwgMjI5SyByd2RhdGEsIDk0OEsgcm9k
YXRhLCAyMzVLIGluaXQsIDMyOUsgYnNzLCA5MzAwSyByZXNlcnZlZCwgMEsgaGlnaG1lbSkK
WyAgICAwLjAwMDAwMF0gVmlydHVhbCBrZXJuZWwgbWVtb3J5IGxheW91dDoKICAgIHZlY3Rv
ciAgOiAweGZmZmYwMDAwIC0gMHhmZmZmMTAwMCAgICggICA0IGtCKQogICAgZml4bWFwICA6
IDB4ZmZmMDAwMDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCiAgICB2bWFsbG9jIDogMHhk
MDgwMDAwMCAtIDB4ZmYwMDAwMDAgICAoIDc0NCBNQikKICAgIGxvd21lbSAgOiAweGMwMDAw
MDAwIC0gMHhkMDAwMDAwMCAgICggMjU2IE1CKQogICAgcGttYXAgICA6IDB4YmZlMDAwMDAg
LSAweGMwMDAwMDAwICAgKCAgIDIgTUIpCiAgICAgIC50ZXh0IDogMHhjMDAwODAwMCAtIDB4
YzA1ODhjMjAgICAoNTYzNiBrQikKICAgICAgLmluaXQgOiAweGMwNTg5MDAwIC0gMHhjMDVj
M2YwMCAgICggMjM2IGtCKQogICAgICAuZGF0YSA6IDB4YzA1YzQwMDAgLSAweGMwNWZkNGEw
ICAgKCAyMzAga0IpCiAgICAgICAuYnNzIDogMHhjMDVmZDRhOCAtIDB4YzA2NGZhYTQgICAo
IDMzMCBrQikKWyAgICAwLjAwMDAwMF0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBN
aW5PYmplY3RzPTAsIENQVXM9MSwgTm9kZXM9MQpbICAgIDAuMDAwMDAwXSBIaWVyYXJjaGlj
YWwgUkNVIGltcGxlbWVudGF0aW9uLgpbICAgIDAuMDAwMDAwXSAJUkNVIHJlc3RyaWN0aW5n
IENQVXMgZnJvbSBOUl9DUFVTPTQgdG8gbnJfY3B1X2lkcz0xLgpbICAgIDAuMDAwMDAwXSBS
Q1U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9sZWFmPTE2LCBucl9jcHVf
aWRzPTEKWyAgICAwLjAwMDAwMF0gTlJfSVJRUzoxNiBucl9pcnFzOjE2IDE2ClsgICAgMC4w
MDAwMDBdIEFyY2hpdGVjdGVkIGNwMTUgdGltZXIocykgcnVubmluZyBhdCAyNC4wME1IeiAo
dmlydCkuClsgICAgMy44MDk5MjRdIHNjaGVkX2Nsb2NrOiA1NiBiaXRzIGF0IDI0TUh6LCBy
ZXNvbHV0aW9uIDQxbnMsIHdyYXBzIGV2ZXJ5IDI4NjMzMTE1MTk3NDRucwpbICAgIDAuMDAw
MDAzXSBTd2l0Y2hpbmcgdG8gdGltZXItYmFzZWQgZGVsYXkgbG9vcApbICAxNzUuMTQ3MDUx
XSBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAyNE1IeiwgcmVzb2x1dGlvbiA0MW5zLCB3cmFw
cyBldmVyeSAxNzg5NTY5Njk5NDJucwpbICAgIDAuMDAwMDM4XSBhcmNoX3RpbWVyOiBtdWx0
aXBsZSBub2RlcyBpbiBkdCwgc2tpcHBpbmcKWyAgICAwLjAwMDAwN10gc2NoZWRfY2xvY2s6
IDMyIGJpdHMgYXQgMTYwTUh6LCByZXNvbHV0aW9uIDZucywgd3JhcHMgZXZlcnkgMjY4NDM1
NDU1OTNucwpbICAgIDAuMDAwMDA4XSBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxNjBNSHos
IHJlc29sdXRpb24gNm5zLCB3cmFwcyBldmVyeSAyNjg0MzU0NTU5M25zClsgICAgMC4wMDAx
NzZdIENvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKWyAgICAwLjAwMDIxMV0g
Q2FsaWJyYXRpbmcgZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQgdXNp
bmcgdGltZXIgZnJlcXVlbmN5Li4gNDguMDAgQm9nb01JUFMgKGxwaj0yNDAwMDApClsgICAg
MC4wMDAyMjZdIHBpZF9tYXg6IGRlZmF1bHQ6IDMyNzY4IG1pbmltdW06IDMwMQpbICAgIDAu
MDAwMzk2XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMgpbICAgIDAuMDAz
NzQyXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1vcnkKWyAgICAwLjAwMzc4NF0g
SW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZGV2aWNlcwpbICAgIDAuMDAzNzk1XSBJbml0
aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBibGtpbwpbICAgIDAuMDAzODQ1XSBDUFU6IFRlc3Rp
bmcgd3JpdGUgYnVmZmVyIGNvaGVyZW5jeTogb2sKWyAgICAwLjAwNDI0OF0gL2NwdXMvY3B1
QDAgbWlzc2luZyBjbG9jay1mcmVxdWVuY3kgcHJvcGVydHkKWyAgICAwLjAwNDI3MV0gQ1BV
MDogdGhyZWFkIC0xLCBjcHUgMCwgc29ja2V0IDAsIG1waWRyIDgwMDAwMDAwClsgICAgMC4w
MDQzMDNdIFNldHRpbmcgdXAgc3RhdGljIGlkZW50aXR5IG1hcCBmb3IgMHg2MDQ3MTJlOCAt
IDB4NjA0NzEzNDAKWyAgICAwLjAwNTE4Ml0gQnJvdWdodCB1cCAxIENQVXMKWyAgICAwLjAw
NTE5Nl0gU01QOiBUb3RhbCBvZiAxIHByb2Nlc3NvcnMgYWN0aXZhdGVkLgpbICAgIDAuMDA1
MjAzXSBDUFU6IEFsbCBDUFUocykgc3RhcnRlZCBpbiBTVkMgbW9kZS4KWyAgICAwLjAwNTg1
N10gZGV2dG1wZnM6IGluaXRpYWxpemVkClsgICAgMC4wMDk5ODhdIFZGUCBzdXBwb3J0IHYw
LjM6IGltcGxlbWVudG9yIDQxIGFyY2hpdGVjdHVyZSAyIHBhcnQgMzAgdmFyaWFudCA3IHJl
diA0ClsgICAgMC4wMTAxMzddIFhlbiA0LjQgc3VwcG9ydCBmb3VuZCwgZXZlbnRzX2lycT0z
MSBnbnR0YWJfZnJhbWVfcGZuPTFkMDAKWyAgICAwLjAxMDIyMF0geGVuOmdyYW50X3RhYmxl
OiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dApbICAgIDAuMDEwMjc2XSBH
cmFudCB0YWJsZSBpbml0aWFsaXplZApbICAgIDAuMDEwNjAwXSBwaW5jdHJsIGNvcmU6IGlu
aXRpYWxpemVkIHBpbmN0cmwgc3Vic3lzdGVtClsgICAgMC4wMTA5NzRdIHJlZ3VsYXRvci1k
dW1teTogbm8gcGFyYW1ldGVycwpbICAgIDAuMDExMjkwXSBORVQ6IFJlZ2lzdGVyZWQgcHJv
dG9jb2wgZmFtaWx5IDE2ClsgICAgMC4wMTE3MTJdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTAK
WyAgICAwLjAxMjAzMV0gRE1BOiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBwb29sIGZvciBhdG9t
aWMgY29oZXJlbnQgYWxsb2NhdGlvbnMKWyAgICAwLjAxMzA1NV0geGVuOnN3aW90bGJfeGVu
OiBXYXJuaW5nOiBvbmx5IGFibGUgdG8gYWxsb2NhdGUgNCBNQiBmb3Igc29mdHdhcmUgSU8g
VExCClsgICAgMC4wMTUyMjhdIHNvZnR3YXJlIElPIFRMQiBbbWVtIDB4NmYwMDAwMDAtMHg2
ZjQwMDAwMF0gKDRNQikgbWFwcGVkIGF0IFtjZjAwMDAwMC1jZjNmZmZmZl0KWyAgICAwLjAy
NDQ5OV0gYmlvOiBjcmVhdGUgc2xhYiA8YmlvLTA+IGF0IDAKWyAgICAwLjAyNTE2Nl0geGVu
OmJhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlcgpbICAgIDAuMDI1NDc1XSBy
ZWctZml4ZWQtdm9sdGFnZSBhaGNpLTV2LjU6IGNvdWxkIG5vdCBmaW5kIHBjdGxkZXYgZm9y
IG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIwODAwL2FoY2lfcHdyX3BpbkAwLCBk
ZWZlcnJpbmcgcHJvYmUKWyAgICAwLjAyNTQ5OF0gcGxhdGZvcm0gYWhjaS01di41OiBEcml2
ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMgcHJvYmUgZGVmZXJyYWwKWyAgICAwLjAy
NTUyMV0gcmVnLWZpeGVkLXZvbHRhZ2UgdXNiMS12YnVzLjY6IGNvdWxkIG5vdCBmaW5kIHBj
dGxkZXYgZm9yIG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIwODAwL3VzYjFfdmJ1
c19waW5AMCwgZGVmZXJyaW5nIHByb2JlClsgICAgMC4wMjU1MzRdIHBsYXRmb3JtIHVzYjEt
dmJ1cy42OiBEcml2ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMgcHJvYmUgZGVmZXJy
YWwKWyAgICAwLjAyNTU1M10gcmVnLWZpeGVkLXZvbHRhZ2UgdXNiMi12YnVzLjc6IGNvdWxk
IG5vdCBmaW5kIHBjdGxkZXYgZm9yIG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIw
ODAwL3VzYjJfdmJ1c19waW5AMCwgZGVmZXJyaW5nIHByb2JlClsgICAgMC4wMjU1NjZdIHBs
YXRmb3JtIHVzYjItdmJ1cy43OiBEcml2ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMg
cHJvYmUgZGVmZXJyYWwKWyAgICAwLjAyNjQ1MF0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6
ZWQKWyAgICAwLjAyNjY2M10gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAgMC4w
MjY4NzldIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMK
WyAgICAwLjAyNjkyN10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZl
ciBodWIKWyAgICAwLjAyNzA0N10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRy
aXZlciB1c2IKWyAgICAwLjAyNzMwNV0gcHBzX2NvcmU6IExpbnV4UFBTIEFQSSB2ZXIuIDEg
cmVnaXN0ZXJlZApbICAgIDAuMDI3MzE0XSBwcHNfY29yZTogU29mdHdhcmUgdmVyLiA1LjMu
NiAtIENvcHlyaWdodCAyMDA1LTIwMDcgUm9kb2xmbyBHaW9tZXR0aSA8Z2lvbWV0dGlAbGlu
dXguaXQ+ClsgICAgMC4wMjczMzhdIFBUUCBjbG9jayBzdXBwb3J0IHJlZ2lzdGVyZWQKWyAg
ICAwLjAyNzQwN10gRURBQyBNQzogVmVyOiAzLjAuMApbICAgIDAuMDI4NTkxXSBTd2l0Y2hl
ZCB0byBjbG9ja3NvdXJjZSBhcmNoX3N5c19jb3VudGVyClsgICAgMC4wMzgxMjZdIE5FVDog
UmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMgpbICAgIDAuMDM4ODI1XSBUQ1AgZXN0YWJs
aXNoZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRlcjogMSwgODE5MiBieXRlcykK
WyAgICAwLjAzODg2Ml0gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRl
cjogMiwgMTYzODQgYnl0ZXMpClsgICAgMC4wMzg4OTldIFRDUDogSGFzaCB0YWJsZXMgY29u
ZmlndXJlZCAoZXN0YWJsaXNoZWQgMjA0OCBiaW5kIDIwNDgpClsgICAgMC4wMzg5NzBdIFRD
UDogcmVubyByZWdpc3RlcmVkClsgICAgMC4wMzg5ODZdIFVEUCBoYXNoIHRhYmxlIGVudHJp
ZXM6IDI1NiAob3JkZXI6IDEsIDgxOTIgYnl0ZXMpClsgICAgMC4wMzkwMjFdIFVEUC1MaXRl
IGhhc2ggdGFibGUgZW50cmllczogMjU2IChvcmRlcjogMSwgODE5MiBieXRlcykKWyAgICAw
LjAzOTI1Ml0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxClsgICAgMC4wMzk2
ODldIFJQQzogUmVnaXN0ZXJlZCBuYW1lZCBVTklYIHNvY2tldCB0cmFuc3BvcnQgbW9kdWxl
LgpbICAgIDAuMDM5NzAzXSBSUEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBtb2R1bGUu
ClsgICAgMC4wMzk3MDldIFJQQzogUmVnaXN0ZXJlZCB0Y3AgdHJhbnNwb3J0IG1vZHVsZS4K
WyAgICAwLjAzOTcxNV0gUlBDOiBSZWdpc3RlcmVkIHRjcCBORlN2NC4xIGJhY2tjaGFubmVs
IHRyYW5zcG9ydCBtb2R1bGUuClsgICAgMC4wNDA5NTBdIGZ1dGV4IGhhc2ggdGFibGUgZW50
cmllczogMjU2IChvcmRlcjogMiwgMTYzODQgYnl0ZXMpClsgICAgMC4wNTE1MjBdIE5GUzog
UmVnaXN0ZXJpbmcgdGhlIGlkX3Jlc29sdmVyIGtleSB0eXBlClsgICAgMC4wNTE2MjFdIEtl
eSB0eXBlIGlkX3Jlc29sdmVyIHJlZ2lzdGVyZWQKWyAgICAwLjA1MTYyOV0gS2V5IHR5cGUg
aWRfbGVnYWN5IHJlZ2lzdGVyZWQKWyAgICAwLjA1MTY0OV0gbmZzNGZpbGVsYXlvdXRfaW5p
dDogTkZTdjQgRmlsZSBMYXlvdXQgRHJpdmVyIFJlZ2lzdGVyaW5nLi4uClsgICAgMC4wNTIy
MzhdIEJsb2NrIGxheWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQg
bG9hZGVkIChtYWpvciAyNTApClsgICAgMC4wNTIyNTJdIGlvIHNjaGVkdWxlciBub29wIHJl
Z2lzdGVyZWQKWyAgICAwLjA1MjI1OF0gaW8gc2NoZWR1bGVyIGRlYWRsaW5lIHJlZ2lzdGVy
ZWQKWyAgICAwLjA1MjYxN10gaW8gc2NoZWR1bGVyIGNmcSByZWdpc3RlcmVkIChkZWZhdWx0
KQpbICAgIDAuMDU0Nzc0XSBzdW54aS1waW5jdHJsIDFjMjA4MDAucGluY3RybDogaW5pdGlh
bGl6ZWQgc3VuWGkgUElPIGRyaXZlcgpbICAgIDAuMDU1NTQyXSB4ZW46eGVuX2V2dGNobjog
RXZlbnQtY2hhbm5lbCBkZXZpY2UgaW5zdGFsbGVkClsgICAgMC44MDQ0MDldIGNvbnNvbGUg
W2h2YzBdIGVuYWJsZWQKWyAgICAwLjgwNzkzNV0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZl
ciwgOCBwb3J0cywgSVJRIHNoYXJpbmcgZGlzYWJsZWQKWyAgICAwLjgxNjcwMF0gc2VyaWFs
OiBGcmVlc2NhbGUgbHB1YXJ0IGRyaXZlcgpbICAgIDAuODIyMTYzXSBicmQ6IG1vZHVsZSBs
b2FkZWQKWyAgICAwLjgyOTIyNV0gbG9vcDogbW9kdWxlIGxvYWRlZApbICAgIDAuODMzNTE2
XSAgUmluZyBtb2RlIGVuYWJsZWQKWyAgICAwLjgzNjQ5OF0gIE5vIEhXIERNQSBmZWF0dXJl
IHJlZ2lzdGVyIHN1cHBvcnRlZApbICAgIDAuODQxMDY1XSAgTm9ybWFsIGRlc2NyaXB0b3Jz
ClsgICAgMC44NDQ0NzBdICBXYWtlLVVwIE9uIExhbiBzdXBwb3J0ZWQKWyAgICAwLjg1MTc0
OF0gbGlicGh5OiBzdG1tYWM6IHByb2JlZApbICAgIDAuODU1MDc2XSBldGgwOiBQSFkgSUQg
MDAwMDgyMDEgYXQgMSBJUlEgMCAoc3RtbWFjLTA6MDEpIGFjdGl2ZQpbICAgIDAuODYxMzY2
XSB4ZW5fbmV0ZnJvbnQ6IEluaXRpYWxpc2luZyBYZW4gdmlydHVhbCBldGhlcm5ldCBkcml2
ZXIKWyAgICAwLjg2NzU1N10gZWhjaV9oY2Q6IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENv
bnRyb2xsZXIgKEVIQ0kpIERyaXZlcgpbICAgIDAuODc0MDgzXSBlaGNpLXBsYXRmb3JtOiBF
SENJIGdlbmVyaWMgcGxhdGZvcm0gZHJpdmVyClsgICAgMC44Nzk0NTZdIHN1bnhpLWVoY2k6
IEFsbHdpbm5lciBzdW5YaSBFSENJIGRyaXZlcgpbICAgIDAuODg0NDA0XSBwbGF0Zm9ybSAx
YzE0MDAwLmVoY2kwOiBEcml2ZXIgc3VueGktZWhjaSByZXF1ZXN0cyBwcm9iZSBkZWZlcnJh
bApbICAgIDAuODkxNTU3XSBwbGF0Zm9ybSAxYzFjMDAwLmVoY2kxOiBEcml2ZXIgc3VueGkt
ZWhjaSByZXF1ZXN0cyBwcm9iZSBkZWZlcnJhbApbICAgIDAuODk4OTQ1XSB1c2Jjb3JlOiBy
ZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYi1zdG9yYWdlClsgICAgMC45MDU1
NjJdIG1vdXNlZGV2OiBQUy8yIG1vdXNlIGRldmljZSBjb21tb24gZm9yIGFsbCBtaWNlClsg
ICAgMC45MTE1MzddIGkyYyAvZGV2IGVudHJpZXMgZHJpdmVyClsgICAgMC45MTYyNjldIHN1
bnhpLXdkdCAxYzIwYzkwLndhdGNoZG9nOiBXYXRjaGRvZyBlbmFibGVkICh0aW1lb3V0PTE2
IHNlYywgbm93YXlvdXQ9MCkKWyAgICAwLjkyNDE2Nl0geGVuX3dkdDogWGVuIFdhdGNoRG9n
IFRpbWVyIERyaXZlciB2MC4wMQpbICAgIDAuOTI5MzE0XSB4ZW5fd2R0OiBjYW5ub3QgcmVn
aXN0ZXIgbWlzY2RldiBvbiBtaW5vcj0xMzAgKC0xNikKWyAgICAwLjkzNTE1NF0gd2R0OiBw
cm9iZSBvZiB3ZHQgZmFpbGVkIHdpdGggZXJyb3IgLTE2ClsgICAgMC45NDAxNjNdIHNkaGNp
OiBTZWN1cmUgRGlnaXRhbCBIb3N0IENvbnRyb2xsZXIgSW50ZXJmYWNlIGRyaXZlcgpbICAg
IDAuOTQ2MzIyXSBzZGhjaTogQ29weXJpZ2h0KGMpIFBpZXJyZSBPc3NtYW4KWyAgICAwLjk1
MTM5N10gc3VueGktbWNpIDFjMGYwMDAubW1jOiBiYXNlOjB4ZDA4NWMwMDAgaXJxOjY0Clsg
ICAgMC45NTY4MThdIHNkaGNpLXBsdGZtOiBTREhDSSBwbGF0Zm9ybSBhbmQgT0YgZHJpdmVy
IGhlbHBlcgpbICAgIDAuOTYzNTUwXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZh
Y2UgZHJpdmVyIHVzYmhpZApbICAgIDAuOTY5MTE3XSB1c2JoaWQ6IFVTQiBISUQgY29yZSBk
cml2ZXIKWyAgICAwLjk3MzIwN10gVENQOiBjdWJpYyByZWdpc3RlcmVkClsgICAgMC45Nzcx
OTZdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTAKWyAgICAwLjk4MjQ2M10g
c2l0OiBJUHY2IG92ZXIgSVB2NCB0dW5uZWxpbmcgZHJpdmVyClsgICAgMC45ODc3MTddIE5F
VDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTcKWyAgICAwLjk5MjMyNF0gS2V5IHR5
cGUgZG5zX3Jlc29sdmVyIHJlZ2lzdGVyZWQKWyAgICAwLjk5NjcwOV0gUmVnaXN0ZXJpbmcg
U1dQL1NXUEIgZW11bGF0aW9uIGhhbmRsZXIKWyAgICAxLjAwMjg3NV0gYWhjaS01djogNTAw
MCBtViAKWyAgICAxLjAwNjEyMV0gdXNiMS12YnVzOiA1MDAwIG1WIApbICAgIDEuMDA5NjIx
XSB1c2IyLXZidXM6IDUwMDAgbVYgClsgICAgMS4wMTMwNDNdIHVuYWJsZSB0byBmaW5kIHRy
YW5zY2VpdmVyClsgICAgMS4wMTY3MjZdIHN1bnhpLWVoY2kgMWMxNDAwMC5laGNpMDogRUhD
SSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjAyMjQwMF0gc3VueGktZWhjaSAxYzE0MDAwLmVo
Y2kwOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDEKWyAg
ICAxLjAzMDE2NV0gc3VueGktZWhjaSAxYzE0MDAwLmVoY2kwOiBpcnEgNzEsIGlvIG1lbSAw
eDAxYzE0MDAwClsgICAgMS4wNDg3MTJdIHN1bnhpLWVoY2kgMWMxNDAwMC5laGNpMDogVVNC
IDIuMCBzdGFydGVkLCBFSENJIDEuMDAKWyAgICAxLjA1NTU5MV0gaHViIDEtMDoxLjA6IFVT
QiBodWIgZm91bmQKWyAgICAxLjA1OTQwM10gaHViIDEtMDoxLjA6IDEgcG9ydCBkZXRlY3Rl
ZApbICAgIDEuMDYzOTQzXSB1bmFibGUgdG8gZmluZCB0cmFuc2NlaXZlcgpbICAgIDEuMDY3
NjI5XSBzdW54aS1laGNpIDFjMWMwMDAuZWhjaTE6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsg
ICAgMS4wNzMyNzRdIHN1bnhpLWVoY2kgMWMxYzAwMC5laGNpMTogbmV3IFVTQiBidXMgcmVn
aXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgICAgMS4wODEwNDZdIHN1bnhpLWVo
Y2kgMWMxYzAwMC5laGNpMTogaXJxIDcyLCBpbyBtZW0gMHgwMWMxYzAwMApbICAgIDEuMDk4
Njc1XSBzdW54aS1laGNpIDFjMWMwMDAuZWhjaTE6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAx
LjAwClsgICAgMS4xMDU0NzFdIGh1YiAyLTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS4x
MDkyOTFdIGh1YiAyLTA6MS4wOiAxIHBvcnQgZGV0ZWN0ZWQKWyAgICAxLjExMzU1OV0gZHJp
dmVycy9ydGMvaGN0b3N5cy5jOiB1bmFibGUgdG8gb3BlbiBydGMgZGV2aWNlIChydGMwKQpb
ICAgIDEuMTE5ODE5XSBjbGs6IE5vdCBkaXNhYmxpbmcgdW51c2VkIGNsb2NrcwpbICAgIDEu
MTI0OTczXSBXYWl0aW5nIGZvciByb290IGRldmljZSAvZGV2L3NkYTEuLi4KWyAgICAxLjQy
ODcxMV0gdXNiIDItMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2lu
ZyBzdW54aS1laGNpClsgICAgMS41NDg2NzZdIG1tYzA6IG5ldyBoaWdoIHNwZWVkIFNESEMg
Y2FyZCBhdCBhZGRyZXNzIGIzNjgKWyAgICAxLjU1NDQ5N10gaXNhIGJvdW5jZSBwb29sIHNp
emU6IDE2IHBhZ2VzClsgICAgMS41NTg4MDNdIG1tY2JsazA6IG1tYzA6YjM2OCAgICAgICA3
LjQ1IEdpQiAKWyAgICAxLjU2NDc2NV0gIG1tY2JsazA6IHAxClsgICAgMS41OTYwNTBdIHVz
Yi1zdG9yYWdlIDItMToxLjA6IFVTQiBNYXNzIFN0b3JhZ2UgZGV2aWNlIGRldGVjdGVkClsg
ICAgMS42MDI0MTVdIHNjc2kwIDogdXNiLXN0b3JhZ2UgMi0xOjEuMApbICAgIDIuNjYyODA1
XSBzY3NpIDA6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgICAgICAgICAgIFVTQiBESVNLIDIu
MCAgICAgUE1BUCBQUTogMCBBTlNJOiA0ClsgICAgMy43NjU4NzVdIHNkIDA6MDowOjA6IFtz
ZGFdIDYyNTU0MTEyIDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoMzIuMCBHQi8yOS44IEdp
QikKWyAgICAzLjc3NjEwOF0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUgUHJvdGVjdCBpcyBv
ZmYKWyAgICAzLjc4MDg1NV0gc2QgMDowOjA6MDogW3NkYV0gTW9kZSBTZW5zZTogMjMgMDAg
MDAgMDAKWyAgICAzLjc4ODYzNF0gc2QgMDowOjA6MDogW3NkYV0gTm8gQ2FjaGluZyBtb2Rl
IHBhZ2UgZm91bmQKWyAgICAzLjc5Mzg2NF0gc2QgMDowOjA6MDogW3NkYV0gQXNzdW1pbmcg
ZHJpdmUgY2FjaGU6IHdyaXRlIHRocm91Z2gKWyAgICAzLjgwOTQ3MV0gc2QgMDowOjA6MDog
W3NkYV0gTm8gQ2FjaGluZyBtb2RlIHBhZ2UgZm91bmQKWyAgICAzLjgxNDc0Nl0gc2QgMDow
OjA6MDogW3NkYV0gQXNzdW1pbmcgZHJpdmUgY2FjaGU6IHdyaXRlIHRocm91Z2gKWyAgICAz
Ljg1MzY2Nl0gIHNkYTogc2RhMQpbICAgIDMuODYzOTc1XSBzZCAwOjA6MDowOiBbc2RhXSBO
byBDYWNoaW5nIG1vZGUgcGFnZSBmb3VuZApbICAgIDMuODY5MjY1XSBzZCAwOjA6MDowOiBb
c2RhXSBBc3N1bWluZyBkcml2ZSBjYWNoZTogd3JpdGUgdGhyb3VnaApbICAgIDMuODc1Mzg3
XSBzZCAwOjA6MDowOiBbc2RhXSBBdHRhY2hlZCBTQ1NJIHJlbW92YWJsZSBkaXNrClsgICAg
My44ODM2MThdIEVYVDMtZnMgKHNkYTEpOiBlcnJvcjogY291bGRuJ3QgbW91bnQgYmVjYXVz
ZSBvZiB1bnN1cHBvcnRlZCBvcHRpb25hbCBmZWF0dXJlcyAoMjQwKQpbICAgIDMuODk0Mjc3
XSBFWFQyLWZzIChzZGExKTogZXJyb3I6IGNvdWxkbid0IG1vdW50IGJlY2F1c2Ugb2YgdW5z
dXBwb3J0ZWQgb3B0aW9uYWwgZmVhdHVyZXMgKDI0MCkKWyAgICA1LjIyNTgxMl0gRVhUNC1m
cyAoc2RhMSk6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQgZGF0YSBtb2RlLiBP
cHRzOiAobnVsbCkKWyAgICA1LjIzMzU0N10gVkZTOiBNb3VudGVkIHJvb3QgKGV4dDQgZmls
ZXN5c3RlbSkgb24gZGV2aWNlIDg6MS4KWyAgICA1LjI1MTg2MV0gZGV2dG1wZnM6IG1vdW50
ZWQKWyAgICA1LjI1NTExNF0gRnJlZWluZyB1bnVzZWQga2VybmVsIG1lbW9yeTogMjMySyAo
YzA1ODkwMDAgLSBjMDVjMzAwMCkKWyAgICA3LjA1MDMwOV0gc3lzdGVtZFsxXTogTW91bnRp
bmcgY2dyb3VwIHRvIC9zeXMvZnMvY2dyb3VwL2NwdXNldCBvZiB0eXBlIGNncm91cCB3aXRo
IG9wdGlvbnMgY3B1c2V0LgpbICAgIDcuMDYwMDg1XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBj
Z3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvY3B1LGNwdWFjY3Qgb2YgdHlwZSBjZ3JvdXAgd2l0
aCBvcHRpb25zIGNwdSxjcHVhY2N0LgpbICAgIDcuMDcwNTk0XSBzeXN0ZW1kWzFdOiBNb3Vu
dGluZyBjZ3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvbWVtb3J5IG9mIHR5cGUgY2dyb3VwIHdp
dGggb3B0aW9ucyBtZW1vcnkuClsgICAgNy4wODAyNDldIHN5c3RlbWRbMV06IE1vdW50aW5n
IGNncm91cCB0byAvc3lzL2ZzL2Nncm91cC9kZXZpY2VzIG9mIHR5cGUgY2dyb3VwIHdpdGgg
b3B0aW9ucyBkZXZpY2VzLgpbICAgIDcuMDg5OTUyXSBzeXN0ZW1kWzFdOiBNb3VudGluZyBj
Z3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvYmxraW8gb2YgdHlwZSBjZ3JvdXAgd2l0aCBvcHRp
b25zIGJsa2lvLgpbICAgIDcuMDk5Mjk3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kIDIwOCBydW5u
aW5nIGluIHN5c3RlbSBtb2RlLiAoK1BBTSAtTElCV1JBUCAtQVVESVQgLVNFTElOVVggLUlN
QSAtU1lTVklOSVQgK0xJQkNSWVBUU0VUVVAgK0dDUllQVCArQUNMICtYWikKWyAgICA3LjEy
MTI1MV0gc3lzdGVtZFsxXTogU2V0IGhvc3RuYW1lIHRvIDxhbGFybT4uClsgICAgNy4xMzE4
MDldIHN5c3RlbWRbMV06IFVzaW5nIGNncm91cCBjb250cm9sbGVyIG5hbWU9c3lzdGVtZC4g
RmlsZSBzeXN0ZW0gaGllcmFyY2h5IGlzIGF0IC9zeXMvZnMvY2dyb3VwL3N5c3RlbWQuClsg
ICAgNy4xNDI2NzldIHN5c3RlbWRbMV06IEluc3RhbGxlZCByZWxlYXNlIGFnZW50LgpbICAg
IDcuMTQ4MDc1XSBzeXN0ZW1kWzFdOiBVc2luZyBub3RpZmljYXRpb24gc29ja2V0IEAvb3Jn
L2ZyZWVkZXNrdG9wL3N5c3RlbWQxL25vdGlmeQpbICAgIDcuMTU1OTkwXSBzeXN0ZW1kWzFd
OiBTZXQgdXAgVEZEX1RJTUVSX0NBTkNFTF9PTl9TRVQgdGltZXJmZC4KWyAgICA3LjE3MjA0
Ml0gcmFuZG9tOiBzeXN0ZW1kIHVyYW5kb20gcmVhZCB3aXRoIDU3IGJpdHMgb2YgZW50cm9w
eSBhdmFpbGFibGUKWyAgICA3LjE3OTIzMl0gc3lzdGVtZFsxXTogU3VjY2Vzc2Z1bGx5IGNy
ZWF0ZWQgcHJpdmF0ZSBELUJ1cyBzZXJ2ZXIuClsgICAgNy4xODc3NTldIHN5c3RlbWRbMV06
IFNwYXduZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWdl
dHR5LWdlbmVyYXRvciBhcyA0MApbICAgIDcuMTk4MDgwXSBzeXN0ZW1kWzFdOiBTcGF3bmVk
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1zeXN0ZW0tdXBk
YXRlLWdlbmVyYXRvciBhcyA0MQpbICAgIDcuMjEwMjM0XSBzeXN0ZW1kWzFdOiBTcGF3bmVk
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1jcnlwdHNldHVw
LWdlbmVyYXRvciBhcyA0MgpbICAgIDcuMjI0NzMyXSBzeXN0ZW1kWzFdOiBTcGF3bmVkIC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1ncHQtYXV0by1nZW5l
cmF0b3IgYXMgNDMKWyAgICA3LjIzNjM1NV0gc3lzdGVtZFsxXTogU3Bhd25lZCAvdXNyL2xp
Yi9zeXN0ZW1kL3N5c3RlbS1nZW5lcmF0b3JzL3N5c3RlbWQtZnN0YWItZ2VuZXJhdG9yIGFz
IDQ0ClsgICAgNy4yNDc3NDRdIHN5c3RlbWRbMV06IFNwYXduZWQgL3Vzci9saWIvc3lzdGVt
ZC9zeXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWVmaS1ib290LWdlbmVyYXRvciBhcyA0NQpb
ICAgIDcuMjYwNzI0XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbS1nZW5l
cmF0b3JzL3N5c3RlbWQtZ2V0dHktZ2VuZXJhdG9yIGV4aXRlZCBzdWNjZXNzZnVsbHkuClsg
ICAgNy4yNzA0MzNdIHN5c3RlbWRbMV06IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVy
YXRvcnMvc3lzdGVtZC1zeXN0ZW0tdXBkYXRlLWdlbmVyYXRvciBleGl0ZWQgc3VjY2Vzc2Z1
bGx5LgpbICAgIDcuMzA0MDU5XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3Rl
bS1nZW5lcmF0b3JzL3N5c3RlbWQtY3J5cHRzZXR1cC1nZW5lcmF0b3IgZXhpdGVkIHN1Y2Nl
c3NmdWxseS4KWyAgICA4LjgxNDY4OF0gc3lzdGVtZFsxXTogL3Vzci9saWIvc3lzdGVtZC9z
eXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWdwdC1hdXRvLWdlbmVyYXRvciBleGl0ZWQgc3Vj
Y2Vzc2Z1bGx5LgpbICAgIDguODI0NTQ0XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1k
L3N5c3RlbS1nZW5lcmF0b3JzL3N5c3RlbWQtZnN0YWItZ2VuZXJhdG9yIGV4aXRlZCBzdWNj
ZXNzZnVsbHkuClsgICAgOC44MzM5NzVdIHN5c3RlbWRbMV06IC91c3IvbGliL3N5c3RlbWQv
c3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1lZmktYm9vdC1nZW5lcmF0b3IgZXhpdGVkIHN1
Y2Nlc3NmdWxseS4KWyAgICA4Ljg0Nzg2OV0gc3lzdGVtZFsxXTogTG9va2luZyBmb3IgdW5p
dCBmaWxlcyBpbiAoaGlnaGVyIHByaW9yaXR5IGZpcnN0KToKWyAgICA4Ljg1NDg2OV0gc3lz
dGVtZFsxXTogCS9ldGMvc3lzdGVtZC9zeXN0ZW0KWyAgICA4Ljg1OTEyNF0gc3lzdGVtZFsx
XTogCS9ydW4vc3lzdGVtZC9zeXN0ZW0KWyAgICA4Ljg2MzQxNV0gc3lzdGVtZFsxXTogCS9y
dW4vc3lzdGVtZC9nZW5lcmF0b3IKWyAgICA4Ljg2ODAwN10gc3lzdGVtZFsxXTogCS91c3Iv
bG9jYWwvbGliL3N5c3RlbWQvc3lzdGVtClsgICAgOC44NzMyNDhdIHN5c3RlbWRbMV06IAkv
dXNyL2xpYi9zeXN0ZW1kL3N5c3RlbQpbICAgIDguODc3OTA5XSBzeXN0ZW1kWzFdOiBTeXNW
IGluaXQgc2NyaXB0cyBhbmQgcmNOLmQgbGlua3Mgc3VwcG9ydCBkaXNhYmxlZApbICAgIDgu
OTczMTI5XSBzeXN0ZW1kWzFdOiBGYWlsZWQgdG8gbG9hZCBjb25maWd1cmF0aW9uIGZvciBw
bHltb3V0aC1zdGFydC5zZXJ2aWNlOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5ClsgICAg
OC45OTY1MDddIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9y
IHN5c2xvZy5zZXJ2aWNlOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5ClsgICAgOS4wMjc4
NTNdIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9yIHN5c2xv
Zy50YXJnZXQ6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjA0ODM2M10gc3lz
dGVtZFsxXTogRmFpbGVkIHRvIGxvYWQgY29uZmlndXJhdGlvbiBmb3IgZGlzcGxheS1tYW5h
Z2VyLnNlcnZpY2U6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjA3Njg0M10g
c3lzdGVtZFsxXTogRmFpbGVkIHRvIGxvYWQgY29uZmlndXJhdGlvbiBmb3IgcGx5bW91dGgt
cXVpdC13YWl0LnNlcnZpY2U6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjEy
MDU2MF0gc3lzdGVtZFsxXTogLS5tb3VudCBjaGFuZ2VkIGRlYWQgLT4gbW91bnRlZApbICAg
IDkuMTI2MTQwXSBzeXN0ZW1kWzFdOiBBY3RpdmF0aW5nIGRlZmF1bHQgdW5pdDogZGVmYXVs
dC50YXJnZXQKWyAgICA5LjEzMjIxM10gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUg
am9iIGdyYXBoaWNhbC50YXJnZXQvc3RhcnQvaXNvbGF0ZQpbICAgIDkuMTM5OTM0XSBzeXN0
ZW1kWzFdOiBDYW5ub3QgYWRkIGRlcGVuZGVuY3kgam9iIGZvciB1bml0IGRpc3BsYXktbWFu
YWdlci5zZXJ2aWNlLCBpZ25vcmluZzogVW5pdCBkaXNwbGF5LW1hbmFnZXIuc2VydmljZSBm
YWlsZWQgdG8gbG9hZDogTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeS4KWyAgICA5LjE1NTMz
Nl0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgZ3JhcGhpY2FsLnRhcmdldC9zdGFy
dCBhcyAxClsgICAgOS4xNjE4MzFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIG11
bHRpLXVzZXIudGFyZ2V0L3N0YXJ0IGFzIDIKWyAgICA5LjE2ODMzMV0gc3lzdGVtZFsxXTog
SW5zdGFsbGVkIG5ldyBqb2IgYmFzaWMudGFyZ2V0L3N0YXJ0IGFzIDMKWyAgICA5LjE3NDUz
MV0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzaW5pdC50YXJnZXQvc3RhcnQg
YXMgNApbICAgIDkuMTgwODE4XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBsb2Nh
bC1mcy50YXJnZXQvc3RhcnQgYXMgNQpbICAgIDkuMTg3MTk4XSBzeXN0ZW1kWzFdOiBJbnN0
YWxsZWQgbmV3IGpvYiB0bXAubW91bnQvc3RhcnQgYXMgNgpbICAgIDkuMTkzMTE1XSBzeXN0
ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXJlbW91bnQtZnMuc2VydmljZS9z
dGFydCBhcyA4ClsgICAgOS4yMDA0NzddIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IGxvY2FsLWZzLXByZS50YXJnZXQvc3RhcnQgYXMgOQpbICAgIDkuMjA3MjE2XSBzeXN0ZW1k
WzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzd2FwLnRhcmdldC9zdGFydCBhcyAxMQpbICAgIDku
MjEzMzkyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBrbW9kLXN0YXRpYy1ub2Rl
cy5zZXJ2aWNlL3N0YXJ0IGFzIDEyClsgICAgOS4yMjA3NjBdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIGRldi1odWdlcGFnZXMubW91bnQvc3RhcnQgYXMgMTMKWyAgICA5LjIy
NzU4M10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgcHJvYy1zeXMtZnMtYmluZm10
X21pc2MuYXV0b21vdW50L3N0YXJ0IGFzIDE0ClsgICAgOS4yMzU2NjVdIHN5c3RlbWRbMV06
IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtam91cm5hbGQuc2VydmljZS9zdGFydCBhcyAx
NQpbICAgIDkuMjQyOTQyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1k
LWpvdXJuYWxkLnNvY2tldC9zdGFydCBhcyAxNgpbICAgIDkuMjUwMTMyXSBzeXN0ZW1kWzFd
OiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25zb2xlLnBhdGgv
c3RhcnQgYXMgMTcKWyAgICA5LjI1ODE4N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBq
b2Igc3lzdGVtZC1iaW5mbXQuc2VydmljZS9zdGFydCBhcyAxOApbICAgIDkuMjY1MzA1XSBz
eXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZp
Y2Uvc3RhcnQgYXMgMTkKWyAgICA5LjI3Mjg1N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5l
dyBqb2Igc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudC9zdGFydCBhcyAyMApbICAgIDkuMjc5OTQ2
XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXJhbmRvbS1zZWVkLnNl
cnZpY2Uvc3RhcnQgYXMgMjEKWyAgICA5LjI4NzQ3MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVk
IG5ldyBqb2Igc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlL3N0YXJ0IGFzIDIyClsg
ICAgOS4yOTUyODldIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtdG1w
ZmlsZXMtc2V0dXAtZGV2LnNlcnZpY2Uvc3RhcnQgYXMgMjMKWyAgICA5LjMwMzQ0NV0gc3lz
dGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC11ZGV2ZC5zZXJ2aWNlL3N0YXJ0
IGFzIDI0ClsgICAgOS4zMTA0NTZdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5
c3RlbWQtdWRldmQtY29udHJvbC5zb2NrZXQvc3RhcnQgYXMgMjUKWyAgICA5LjMxODA2NF0g
c3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29j
a2V0L3N0YXJ0IGFzIDI2ClsgICAgOS4zMjU2MjBdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBu
ZXcgam9iIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS9zdGFydCBhcyAyNwpbICAg
IDkuMzMzNDIyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBkZXYtbXF1ZXVlLm1v
dW50L3N0YXJ0IGFzIDI4ClsgICAgOS4zNDAwMDhdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBu
ZXcgam9iIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2Uvc3RhcnQgYXMgMjkKWyAgICA5
LjM0NzYzNF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzLWZzLWZ1c2UtY29u
bmVjdGlvbnMubW91bnQvc3RhcnQgYXMgMzAKWyAgICA5LjM1NTM0N10gc3lzdGVtZFsxXTog
SW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC1qb3VybmFsLWZsdXNoLnNlcnZpY2Uvc3RhcnQg
YXMgMzEKWyAgICA5LjM2MzA2MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lz
LWtlcm5lbC1jb25maWcubW91bnQvc3RhcnQgYXMgMzIKWyAgICA5LjM3MDI1N10gc3lzdGVt
ZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC1tb2R1bGVzLWxvYWQuc2VydmljZS9z
dGFydCBhcyAzMwpbICAgIDkuMzc3ODYxXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpv
YiBjcnlwdHNldHVwLnRhcmdldC9zdGFydCBhcyAzNApbICAgIDkuMzg0NTYzXSBzeXN0ZW1k
WzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlL3N0YXJ0IGFz
IDM1ClsgICAgOS4zOTE2NjRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNvY2tl
dHMudGFyZ2V0L3N0YXJ0IGFzIDM4ClsgICAgOS4zOTgwNThdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIGRidXMuc29ja2V0L3N0YXJ0IGFzIDM5ClsgICAgOS40MDQyMzJdIHN5
c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbS5zbGljZS9zdGFydCBhcyA0MApb
ICAgIDkuNDEwNDY1XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiAtLnNsaWNlL3N0
YXJ0IGFzIDQxClsgICAgOS40MTYyNThdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHN5c3RlbWQtc2h1dGRvd25kLnNvY2tldC9zdGFydCBhcyA0MgpbICAgIDkuNDIzNTU0XSBz
eXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWluaXRjdGwuc29ja2V0L3N0
YXJ0IGFzIDQzClsgICAgOS40MzA2NjFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IGRtZXZlbnRkLnNvY2tldC9zdGFydCBhcyA0NApbICAgIDkuNDM3MTQ0XSBzeXN0ZW1kWzFd
OiBJbnN0YWxsZWQgbmV3IGpvYiBsdm1ldGFkLnNvY2tldC9zdGFydCBhcyA0NQpbICAgIDku
NDQzNTg0XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiB0aW1lcnMudGFyZ2V0L3N0
YXJ0IGFzIDQ2ClsgICAgOS40NDk5MDFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHN5c3RlbWQtdG1wZmlsZXMtY2xlYW4udGltZXIvc3RhcnQgYXMgNDcKWyAgICA5LjQ1NzUx
MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgcGF0aHMudGFyZ2V0L3N0YXJ0IGFz
IDQ4ClsgICAgOS40NjM3NjZdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNsaWNl
cy50YXJnZXQvc3RhcnQgYXMgNDkKWyAgICA5LjQ3MDA5NF0gc3lzdGVtZFsxXTogSW5zdGFs
bGVkIG5ldyBqb2Igc3NoZC5zZXJ2aWNlL3N0YXJ0IGFzIDUwClsgICAgOS40NzYzMTZdIHN5
c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQg
YXMgNTEKWyAgICA5LjQ4MzE5MV0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgaGF2
ZWdlZC5zZXJ2aWNlL3N0YXJ0IGFzIDUyClsgICAgOS40ODk2ODldIHN5c3RlbWRbMV06IElu
c3RhbGxlZCBuZXcgam9iIHJlbW90ZS1mcy50YXJnZXQvc3RhcnQgYXMgNTMKWyAgICA5LjQ5
NjI2N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgbmV0Y3RsLWlmcGx1Z2RAZXRo
MC5zZXJ2aWNlL3N0YXJ0IGFzIDU0ClsgICAgOS41MDM4MDddIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIHN5cy1zdWJzeXN0ZW0tbmV0LWRldmljZXMtZXRoMC5kZXZpY2Uvc3Rh
cnQgYXMgNTUKWyAgICA5LjUxMjI2M10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Ig
c3lzdGVtLW5ldGN0bFx4MmRpZnBsdWdkLnNsaWNlL3N0YXJ0IGFzIDU2ClsgICAgOS41MjAw
MzRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtYXNrLXBhc3N3b3Jk
LXdhbGwucGF0aC9zdGFydCBhcyA1NwpbICAgIDkuNTI3ODE0XSBzeXN0ZW1kWzFdOiBJbnN0
YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWxvZ2luZC5zZXJ2aWNlL3N0YXJ0IGFzIDU4ClsgICAg
OS41MzQ5MjJdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHVzZXIuc2xpY2Uvc3Rh
cnQgYXMgNTkKWyAgICA5LjU0MDk4N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Ig
ZGJ1cy5zZXJ2aWNlL3N0YXJ0IGFzIDYwClsgICAgOS41NDcyMTBdIHN5c3RlbWRbMV06IElu
c3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtdXNlci1zZXNzaW9ucy5zZXJ2aWNlL3N0YXJ0IGFz
IDYxClsgICAgOS41NTQ5NDFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIGdldHR5
LnRhcmdldC9zdGFydCBhcyA2MgpbICAgIDkuNTYxMzA0XSBzeXN0ZW1kWzFdOiBJbnN0YWxs
ZWQgbmV3IGpvYiBnZXR0eUB0dHkxLnNlcnZpY2Uvc3RhcnQgYXMgNjMKWyAgICA5LjU2Nzkz
Nl0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtLWdldHR5LnNsaWNlL3N0
YXJ0IGFzIDY0ClsgICAgOS41NzQ3MzVdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2Uvc3RhcnQgYXMgNjUKWyAgICA5LjU4MjA4M10g
c3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgZGV2LWh2YzAuZGV2aWNlL3N0YXJ0IGFz
IDY2ClsgICAgOS41ODg2MTRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3Rl
bS1zZXJpYWxceDJkZ2V0dHkuc2xpY2Uvc3RhcnQgYXMgNjcKWyAgICA5LjU5NjIzOF0gc3lz
dGVtZFsxXTogRW5xdWV1ZWQgam9iIGdyYXBoaWNhbC50YXJnZXQvc3RhcnQgYXMgMQpbICAg
IDkuNjAyMzEzXSBzeXN0ZW1kWzFdOiBMb2FkZWQgdW5pdHMgYW5kIGRldGVybWluZWQgaW5p
dGlhbCB0cmFuc2FjdGlvbiBpbiAyLjQxNjczMnMuClsgICAgOS42MTA1OThdIHN5c3RlbWRb
MV06IEV4cGVjdGluZyBkZXZpY2UgZGV2LWh2YzAuZGV2aWNlLi4uClsgICAgOS42MjA4ODVd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIEZvcndhcmQgUGFzc3dvcmQgUmVxdWVzdHMgdG8gV2Fs
bCBEaXJlY3RvcnkgV2F0Y2guClsgICAgOS42MjkwODBdIHN5c3RlbWRbMV06IHN5c3RlbWQt
YXNrLXBhc3N3b3JkLXdhbGwucGF0aCBjaGFuZ2VkIGRlYWQgLT4gd2FpdGluZwpbICAgIDku
NjM2MjY3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC1hc2stcGFzc3dvcmQtd2FsbC5wYXRo
L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuNjQ0MzU4XSBzeXN0ZW1kWzFd
OiBTdGFydGVkIEZvcndhcmQgUGFzc3dvcmQgUmVxdWVzdHMgdG8gV2FsbCBEaXJlY3Rvcnkg
V2F0Y2guClsgICAgOS42NTIwMjJdIHN5c3RlbWRbMV06IEV4cGVjdGluZyBkZXZpY2Ugc3lz
LXN1YnN5c3RlbS1uZXQtZGV2aWNlcy1ldGgwLmRldmljZS4uLgpbICAgIDkuNjY2MDcxXSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBSZW1vdGUgRmlsZSBTeXN0ZW1zLgpbICAgIDkuNjcxMTc1
XSBzeXN0ZW1kWzFdOiByZW1vdGUtZnMudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUK
WyAgICA5LjY3NzA1MF0gc3lzdGVtZFsxXTogSm9iIHJlbW90ZS1mcy50YXJnZXQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS42ODkyMDRdIHN5c3RlbWRbMV06IFJlYWNo
ZWQgdGFyZ2V0IFJlbW90ZSBGaWxlIFN5c3RlbXMuClsgICAgOS42OTQ3OTVdIHN5c3RlbWRb
MV06IFN0YXJ0aW5nIExWTTIgbWV0YWRhdGEgZGFlbW9uIHNvY2tldC4KWyAgICA5LjcwMDgz
NV0gc3lzdGVtZFsxXTogbHZtZXRhZC5zb2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3Rlbmlu
ZwpbICAgIDkuNzA2Nzk4XSBzeXN0ZW1kWzFdOiBKb2IgbHZtZXRhZC5zb2NrZXQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS43MTkzNTldIHN5c3RlbWRbMV06IExpc3Rl
bmluZyBvbiBMVk0yIG1ldGFkYXRhIGRhZW1vbiBzb2NrZXQuClsgICAgOS43MjU0NjhdIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIERldmljZS1tYXBwZXIgZXZlbnQgZGFlbW9uIEZJRk9zLgpb
ICAgIDkuNzMxODc2XSBzeXN0ZW1kWzFdOiBkbWV2ZW50ZC5zb2NrZXQgY2hhbmdlZCBkZWFk
IC0+IGxpc3RlbmluZwpbICAgIDkuNzM3OTIwXSBzeXN0ZW1kWzFdOiBKb2IgZG1ldmVudGQu
c29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuNzUwOTU2XSBzeXN0
ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRGV2aWNlLW1hcHBlciBldmVudCBkYWVtb24gRklGT3Mu
ClsgICAgOS43NTc1MDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIC9kZXYvaW5pdGN0bCBDb21w
YXRpYmlsaXR5IE5hbWVkIFBpcGUuClsgICAgOS43NjQzMjldIHN5c3RlbWRbMV06IHN5c3Rl
bWQtaW5pdGN0bC5zb2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgIDkuNzcx
MDQ3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC1pbml0Y3RsLnNvY2tldC9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgICA5Ljc4NTAzNF0gc3lzdGVtZFsxXTogTGlzdGVuaW5n
IG9uIC9kZXYvaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBpcGUuClsgICAgOS43OTIw
OTZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIERlbGF5ZWQgU2h1dGRvd24gU29ja2V0LgpbICAg
IDkuNzk3NTk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXNodXRkb3duZC5zb2NrZXQgY2hhbmdl
ZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgIDkuODA0NDkzXSBzeXN0ZW1kWzFdOiBKb2Igc3lz
dGVtZC1zaHV0ZG93bmQuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
IDkuODE3NDA5XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRGVsYXllZCBTaHV0ZG93biBT
b2NrZXQuClsgICAgOS44MjMyNjVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJvb3QgU2xpY2Uu
ClsgICAgOS44MjgyNjddIHN5c3RlbWRbMV06IC0uc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFj
dGl2ZQpbICAgIDkuODMzNDc4XSBzeXN0ZW1kWzFdOiBKb2IgLS5zbGljZS9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgICA5Ljg0MzkzNl0gc3lzdGVtZFsxXTogQ3JlYXRlZCBz
bGljZSBSb290IFNsaWNlLgpbICAgIDkuODQ4ODAzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBV
c2VyIGFuZCBTZXNzaW9uIFNsaWNlLgpbICAgIDkuODU0NDg2XSBzeXN0ZW1kWzFdOiB1c2Vy
LnNsaWNlIGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgICA5Ljg1OTk0MF0gc3lzdGVtZFsx
XTogSm9iIHVzZXIuc2xpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS44
NzE2NTZdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2UgVXNlciBhbmQgU2Vzc2lvbiBTbGlj
ZS4KWyAgICA5Ljg3NzQyM10gc3lzdGVtZFsxXTogU3RhcnRpbmcgU3lzdGVtIFNsaWNlLgpb
ICAgIDkuODgyMzQ1XSBzeXN0ZW1kWzFdOiBzeXN0ZW0uc2xpY2UgY2hhbmdlZCBkZWFkIC0+
IGFjdGl2ZQpbICAgIDkuODg3ODY4XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtLnNsaWNlL3N0
YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuODk5MDUyXSBzeXN0ZW1kWzFdOiBD
cmVhdGVkIHNsaWNlIFN5c3RlbSBTbGljZS4KWyAgICA5LjkwMzk1N10gc3lzdGVtZFsxXTog
U3RhcnRpbmcgc3lzdGVtLW5ldGN0bFx4MmRpZnBsdWdkLnNsaWNlLgpbICAgIDkuOTEwNTY2
XSBzeXN0ZW1kWzFdOiBzeXN0ZW0tbmV0Y3RsXHgyZGlmcGx1Z2Quc2xpY2UgY2hhbmdlZCBk
ZWFkIC0+IGFjdGl2ZQpbICAgIDkuOTE3NjU5XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtLW5l
dGN0bFx4MmRpZnBsdWdkLnNsaWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
IDkuOTMxOTkzXSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3RlbS1uZXRjdGxceDJk
aWZwbHVnZC5zbGljZS4KWyAgICA5LjkzODQ0OV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lz
dGVtLWdldHR5LnNsaWNlLgpbICAgIDkuOTQ0MDE3XSBzeXN0ZW1kWzFdOiBzeXN0ZW0tZ2V0
dHkuc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgIDkuOTUwMTc0XSBzeXN0ZW1k
WzFdOiBKb2Igc3lzdGVtLWdldHR5LnNsaWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgIDkuOTYyMjI0XSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3RlbS1nZXR0
eS5zbGljZS4KWyAgICA5Ljk2NzY0NV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lzdGVtLXNl
cmlhbFx4MmRnZXR0eS5zbGljZS4KWyAgICA5Ljk3NDAzOF0gc3lzdGVtZFsxXTogc3lzdGVt
LXNlcmlhbFx4MmRnZXR0eS5zbGljZSBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAgOS45
ODEwNjJdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW0tc2VyaWFsXHgyZGdldHR5LnNsaWNlL3N0
YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuOTk0ODgwXSBzeXN0ZW1kWzFdOiBD
cmVhdGVkIHNsaWNlIHN5c3RlbS1zZXJpYWxceDJkZ2V0dHkuc2xpY2UuClsgICAxMC4wMDEy
OThdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNsaWNlcy4KWyAgIDEwLjAwNTE2MV0gc3lzdGVt
ZFsxXTogc2xpY2VzLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAxMC4wMTA5
MzBdIHN5c3RlbWRbMV06IEpvYiBzbGljZXMudGFyZ2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1
bHQ9ZG9uZQpbICAgMTAuMDIxNjA4XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBTbGlj
ZXMuClsgICAxMC4wMjYwNjRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEVuY3J5cHRlZCBWb2x1
bWVzLgpbICAgMTAuMDMxMDE5XSBzeXN0ZW1kWzFdOiBjcnlwdHNldHVwLnRhcmdldCBjaGFu
Z2VkIGRlYWQgLT4gYWN0aXZlClsgICAxMC4wMzcwMDZdIHN5c3RlbWRbMV06IEpvYiBjcnlw
dHNldHVwLnRhcmdldC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjA0OTA5
Nl0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgRW5jcnlwdGVkIFZvbHVtZXMuClsgICAx
MC4wNTQ1NjldIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0vc3lzIHN1
Y2NlZWRlZCBmb3Igc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29ja2V0LgpbICAgMTAuMDYzNDIy
XSBzeXN0ZW1kWzFdOiBTdGFydGluZyB1ZGV2IEtlcm5lbCBTb2NrZXQuClsgICAxMC4wNjg0
NjBdIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldmQta2VybmVsLnNvY2tldCBjaGFuZ2VkIGRl
YWQgLT4gbGlzdGVuaW5nClsgICAxMC4wNzU2MTVdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1k
LXVkZXZkLWtlcm5lbC5zb2NrZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAx
MC4wODgzODNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiB1ZGV2IEtlcm5lbCBTb2NrZXQu
ClsgICAxMC4wOTM4MThdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0v
c3lzIHN1Y2NlZWRlZCBmb3Igc3lzdGVtZC11ZGV2ZC1jb250cm9sLnNvY2tldC4KWyAgIDEw
LjEwMjcxMF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgdWRldiBDb250cm9sIFNvY2tldC4KWyAg
IDEwLjEwNzg5OV0gc3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC1jb250cm9sLnNvY2tldCBj
aGFuZ2VkIGRlYWQgLT4gbGlzdGVuaW5nClsgICAxMC4xMTUxNTJdIHN5c3RlbWRbMV06IEpv
YiBzeXN0ZW1kLXVkZXZkLWNvbnRyb2wuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9
ZG9uZQpbICAgMTAuMTI4MTA0XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gdWRldiBDb250
cm9sIFNvY2tldC4KWyAgIDEwLjEzMzY1Nl0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4
aXN0cz0hL3J1bi9wbHltb3V0aC9waWQgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLWFzay1wYXNz
d29yZC1jb25zb2xlLnBhdGguClsgICAxMC4xNDM3ODNdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IERpc3BhdGNoIFBhc3N3b3JkIFJlcXVlc3RzIHRvIENvbnNvbGUgRGlyZWN0b3J5IFdhdGNo
LgpbICAgMTAuMTUyMDE3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25z
b2xlLnBhdGggY2hhbmdlZCBkZWFkIC0+IHdhaXRpbmcKWyAgIDEwLjE1OTU1NF0gc3lzdGVt
ZFsxXTogSm9iIHN5c3RlbWQtYXNrLXBhc3N3b3JkLWNvbnNvbGUucGF0aC9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjE2NzY5NV0gc3lzdGVtZFsxXTogU3RhcnRlZCBE
aXNwYXRjaCBQYXNzd29yZCBSZXF1ZXN0cyB0byBDb25zb2xlIERpcmVjdG9yeSBXYXRjaC4K
WyAgIDEwLjE3NTc1NF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgUGF0aHMuClsgICAxMC4xNzk2
MzJdIHN5c3RlbWRbMV06IHBhdGhzLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsg
ICAxMC4xODUxODJdIHN5c3RlbWRbMV06IEpvYiBwYXRocy50YXJnZXQvc3RhcnQgZmluaXNo
ZWQsIHJlc3VsdD1kb25lClsgICAxMC4xOTU4MTRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFy
Z2V0IFBhdGhzLgpbICAgMTAuMjAwMzA3XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBKb3VybmFs
IFNvY2tldC4KWyAgIDEwLjIwNTE1MV0gc3lzdGVtZFsxXTogc3lzdGVtZC1qb3VybmFsZC5z
b2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgMTAuMjExOTgxXSBzeXN0ZW1k
WzFdOiBKb2Igc3lzdGVtZC1qb3VybmFsZC5zb2NrZXQvc3RhcnQgZmluaXNoZWQsIHJlc3Vs
dD1kb25lClsgICAxMC4yMjQxMDNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBKb3VybmFs
IFNvY2tldC4KWyAgIDEwLjIyOTI4M10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0
cz0vc3lzL2tlcm5lbC9kZWJ1ZyBzdWNjZWVkZWQgZm9yIHN5cy1rZXJuZWwtZGVidWcubW91
bnQuClsgICAxMC4yMzgyOTJdIHN5c3RlbWRbMV06IE1vdW50aW5nIERlYnVnIEZpbGUgU3lz
dGVtLi4uClsgICAxMC4yNDgxOTBdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC9i
aW4vbW91bnQgZGVidWdmcyAvc3lzL2tlcm5lbC9kZWJ1ZyAtdCBkZWJ1Z2ZzClsgICAxMC4y
NTY5NzZdIHN5c3RlbWRbMV06IEZvcmtlZCAvYmluL21vdW50IGFzIDQ3ClsgICAxMC4yNjQ5
MDNdIHN5c3RlbWRbNDddOiBFeGVjdXRpbmc6IC9iaW4vbW91bnQgZGVidWdmcyAvc3lzL2tl
cm5lbC9kZWJ1ZyAtdCBkZWJ1Z2ZzClsgICAxMC4yNzMxMzBdIHN5c3RlbWRbMV06IHN5cy1r
ZXJuZWwtZGVidWcubW91bnQgY2hhbmdlZCBkZWFkIC0+IG1vdW50aW5nClsgICAxMC4yODAw
NDBdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9L2Rldi90dHkwIHN1Y2NlZWRl
ZCBmb3Igc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlLgpbICAgMTAuMjg5ODM4XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBTZXR1cCBWaXJ0dWFsIENvbnNvbGUuLi4KWyAgIDEwLjMw
MTQ5NF0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9z
eXN0ZW1kLXZjb25zb2xlLXNldHVwClsgICAxMC4zMTAwOTZdIHN5c3RlbWRbMV06IEZvcmtl
ZCAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtdmNvbnNvbGUtc2V0dXAgYXMgNDgKWyAgIDEw
LjMxODA4NV0gc3lzdGVtZFsxXTogc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTAuMzI4NTAxXSBzeXN0ZW1kWzQ4XTogRXhlY3V0
aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtdmNvbnNvbGUtc2V0dXAKWyAgIDEwLjMz
NjIyOV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vcHJvYy9zeXMvZnMvbXF1
ZXVlIGZhaWxlZCBmb3IgZGV2LW1xdWV1ZS5tb3VudC4KWyAgIDEwLjM0NTI1Nl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgb2YgZGV2LW1xdWV1ZS5tb3VudCByZXF1ZXN0ZWQgYnV0IGNvbmRp
dGlvbiBmYWlsZWQuIElnbm9yaW5nLgpbICAgMTAuMzU0OTYyXSBzeXN0ZW1kWzFdOiBKb2Ig
ZGV2LW1xdWV1ZS5tb3VudC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjM2
MTg1NV0gc3lzdGVtZFsxXTogTW91bnRlZCBQT1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lz
dGVtLgpbICAgMTAuMzY5NDkwXSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoSXNSZWFkV3Jp
dGU9L3N5cyBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UuClsg
ICAxMC4zNzg0OTFdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHVkZXYgQ29sZHBsdWcgYWxsIERl
dmljZXMuLi4KWyAgIDEwLjM5NTQwMF0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTog
L3Vzci9iaW4vdWRldmFkbSB0cmlnZ2VyIC0tdHlwZT1zdWJzeXN0ZW1zIC0tYWN0aW9uPWFk
ZApbICAgMTAuNDA0OTE0XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9iaW4vdWRldmFkbSBh
cyA0OQpbICAgMTAuNDE1MDE1XSBzeXN0ZW1kWzQ5XTogRXhlY3V0aW5nOiAvdXNyL2Jpbi91
ZGV2YWRtIHRyaWdnZXIgLS10eXBlPXN1YnN5c3RlbXMgLS1hY3Rpb249YWRkClsgICAxMC40
MjUyOTddIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHN0YXJ0ClsgICAxMC40MzI5MjZdIHN5c3RlbWRbMV06IENvbmRpdGlvbktl
cm5lbENvbW1hbmRMaW5lPXxyZC5tb2R1bGVzLWxvYWQgZmFpbGVkIGZvciBzeXN0ZW1kLW1v
ZHVsZXMtbG9hZC5zZXJ2aWNlLgpbICAgMTAuNDQ0OTc3XSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25LZXJuZWxDb21tYW5kTGluZT18bW9kdWxlcy1sb2FkIGZhaWxlZCBmb3Igc3lzdGVtZC1t
b2R1bGVzLWxvYWQuc2VydmljZS4KWyAgIDEwLjQ1NDk3OV0gc3lzdGVtZFsxXTogQ29uZGl0
aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ydW4vbW9kdWxlcy1sb2FkLmQgZmFpbGVkIGZvciBz
eXN0ZW1kLW1vZHVsZXMtbG9hZC5zZXJ2aWNlLgpbICAgMTAuNDc0ODA5XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy9tb2R1bGVzLWxvYWQuZCBmYWls
ZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2UuClsgICAxMC40ODUyNjRdIHN5
c3RlbWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xvY2FsL2xpYi9t
b2R1bGVzLWxvYWQuZCBmYWlsZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2Uu
ClsgICAxMC40OTk0NDFdIHN5c3RlbWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5
PXwvdXNyL2xpYi9tb2R1bGVzLWxvYWQuZCBmYWlsZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1s
b2FkLnNlcnZpY2UuClsgICAxMC41MTAzMzhdIHN5c3RlbWRbMV06IENvbmRpdGlvbkRpcmVj
dG9yeU5vdEVtcHR5PXwvbGliL21vZHVsZXMtbG9hZC5kIGZhaWxlZCBmb3Igc3lzdGVtZC1t
b2R1bGVzLWxvYWQuc2VydmljZS4KWyAgIDEwLjUyMTg3OV0gc3lzdGVtZFsxXTogQ29uZGl0
aW9uQ2FwYWJpbGl0eT1DQVBfU1lTX01PRFVMRSBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtbW9k
dWxlcy1sb2FkLnNlcnZpY2UuClsgICAxMC41MzE0MjRdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IG9mIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRp
b24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDEwLjU0MjI0MV0gc3lzdGVtZFsxXTogSm9iIHN5
c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25l
ClsgICAxMC41NTAyNTRdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9hZCBLZXJuZWwgTW9kdWxl
cy4KWyAgIDEwLjU1NTM1NF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vc3lz
L2ZzL2Z1c2UvY29ubmVjdGlvbnMgZmFpbGVkIGZvciBzeXMtZnMtZnVzZS1jb25uZWN0aW9u
cy5tb3VudC4KWyAgIDEwLjU2Nzc0Ml0gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2Ygc3lzLWZz
LWZ1c2UtY29ubmVjdGlvbnMubW91bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVk
LiBJZ25vcmluZy4KWyAgIDEwLjU3Nzc0OV0gc3lzdGVtZFsxXTogSm9iIHN5cy1mcy1mdXNl
LWNvbm5lY3Rpb25zLm1vdW50L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTAu
NTg1NjUyXSBzeXN0ZW1kWzFdOiBNb3VudGVkIEZVU0UgQ29udHJvbCBGaWxlIFN5c3RlbS4K
WyAgIDEwLjU5MTc0N10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vc3lzL2tl
cm5lbC9jb25maWcgZmFpbGVkIGZvciBzeXMta2VybmVsLWNvbmZpZy5tb3VudC4KWyAgIDEw
LjYwMDg4MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2Ygc3lzLWtlcm5lbC1jb25maWcubW91
bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDEwLjYx
MDE2NV0gc3lzdGVtZFsxXTogSm9iIHN5cy1rZXJuZWwtY29uZmlnLm1vdW50L3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTAuNjE3NjM1XSBzeXN0ZW1kWzFdOiBNb3VudGVk
IENvbmZpZ3VyYXRpb24gRmlsZSBTeXN0ZW0uClsgICAxMC42MjMyOThdIHN5c3RlbWRbMV06
IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvcnVuL3N5c2N0bC5kIGZhaWxlZCBmb3Ig
c3lzdGVtZC1zeXNjdGwuc2VydmljZS4KWyAgIDEwLjY0MzkzNV0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ldGMvc3lzY3RsLmQgZmFpbGVkIGZvciBzeXN0
ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjUzNDcyXSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9sb2NhbC9saWIvc3lzY3RsLmQgZmFpbGVkIGZv
ciBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjY2MTgwXSBzeXN0ZW1kWzFdOiBD
b25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9saWIvc3lzY3RsLmQgc3VjY2VlZGVk
IGZvciBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjc2MzQ1XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2xpYi9zeXNjdGwuZCBzdWNjZWVkZWQg
Zm9yIHN5c3RlbWQtc3lzY3RsLnNlcnZpY2UuClsgICAxMC42ODU3OTddIHN5c3RlbWRbMV06
IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0vcHJvYy9zeXMvIHN1Y2NlZWRlZCBmb3Igc3lz
dGVtZC1zeXNjdGwuc2VydmljZS4KWyAgIDEwLjY5ODI5N10gc3lzdGVtZFsxXTogU3RhcnRp
bmcgQXBwbHkgS2VybmVsIFZhcmlhYmxlcy4uLgpbICAgMTAuNzA5ODQ2XSBzeXN0ZW1kWzFd
OiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtc3lzY3RsClsg
ICAxMC43MTgwMTRdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2xpYi9zeXN0ZW1kL3N5c3Rl
bWQtc3lzY3RsIGFzIDUwClsgICAxMC43MzI1NDVdIHN5c3RlbWRbNTBdOiBFeGVjdXRpbmc6
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1zeXNjdGwKWyAgIDEwLjc0Nzg2M10gc3lzdGVt
ZFsxXTogc3lzdGVtZC1zeXNjdGwuc2VydmljZSBjaGFuZ2VkIGRlYWQgLT4gc3RhcnQKWyAg
IDEwLjc4NzMzN10gc3lzdGVtZFsxXTogU3RhcnRpbmcgSm91cm5hbCBTZXJ2aWNlLi4uClsg
ICAxMC44MDExMzddIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvbGliL3N5
c3RlbWQvc3lzdGVtZC1qb3VybmFsZApbICAgMTAuODE0NDc0XSBzeXN0ZW1kWzU0XTogRXhl
Y3V0aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtam91cm5hbGQKWyAgIDEwLjgyMTIx
M10gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1qb3VybmFs
ZCBhcyA1NApbICAgMTAuODMzNDk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWxkLnNl
cnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDEwLjg0MTA3MF0gc3lzdGVtZFsx
XTogSm9iIHN5c3RlbWQtam91cm5hbGQuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0
PWRvbmUKWyAgIDEwLjg1NDYzMl0gc3lzdGVtZFsxXTogU3RhcnRlZCBKb3VybmFsIFNlcnZp
Y2UuClsgICAxMC44NjE4NjRdIHN5c3RlbWRbMV06IHN5c3RlbWQtam91cm5hbGQuc29ja2V0
IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDEwLjg2OTQ0NV0gc3lzdGVtZFsx
XTogQ29uZGl0aW9uUGF0aElzUmVhZFdyaXRlPS9wcm9jL3N5cy8gc3VjY2VlZGVkIGZvciBw
cm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5hdXRvbW91bnQuClsgICAxMC44Nzk1MTVdIHN5c3Rl
bWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9L3Byb2Mvc3lzL2ZzL2JpbmZtdF9taXNjLyBm
YWlsZWQgZm9yIHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLmF1dG9tb3VudC4KWyAgIDEwLjg5
MDM5M10gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2YgcHJvYy1zeXMtZnMtYmluZm10X21pc2Mu
YXV0b21vdW50IHJlcXVlc3RlZCBidXQgY29uZGl0aW9uIGZhaWxlZC4gSWdub3JpbmcuClsg
ICAxMC45MDA2MzJdIHN5c3RlbWRbMV06IEpvYiBwcm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5h
dXRvbW91bnQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMC45MDk2MDBdIHN5
c3RlbWRbMV06IFNldCB1cCBhdXRvbW91bnQgQXJiaXRyYXJ5IEV4ZWN1dGFibGUgRmlsZSBG
b3JtYXRzIEZpbGUgU3lzdGVtIEF1dG9tb3VudCBQb2ludC4KWyAgIDEwLjkxOTI4OF0gc3lz
dGVtZFsxXTogQ29uZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ydW4vYmluZm10LmQgZmFp
bGVkIGZvciBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlLgpbICAgMTAuOTM5MDYzXSBzeXN0ZW1k
WzFdOiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy9iaW5mbXQuZCBmYWlsZWQg
Zm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45NDgyNjddIHN5c3RlbWRbMV06
IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xvY2FsL2xpYi9iaW5mbXQuZCBm
YWlsZWQgZm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45NzEwMTldIHN5c3Rl
bWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xpYi9iaW5mbXQuZCBm
YWlsZWQgZm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45ODEzMDRdIHN5c3Rl
bWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvbGliL2JpbmZtdC5kIGZhaWxl
ZCBmb3Igc3lzdGVtZC1iaW5mbXQuc2VydmljZS4KWyAgIDEwLjk5MDU4NV0gc3lzdGVtZFsx
XTogQ29uZGl0aW9uUGF0aElzUmVhZFdyaXRlPS9wcm9jL3N5cy8gc3VjY2VlZGVkIGZvciBz
eXN0ZW1kLWJpbmZtdC5zZXJ2aWNlLgpbICAgMTEuMDAxMzY2XSBzeXN0ZW1kWzFdOiBTdGFy
dGluZyBvZiBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlIHJlcXVlc3RlZCBidXQgY29uZGl0aW9u
IGZhaWxlZC4gSWdub3JpbmcuClsgICAxMS4wMTA3NTddIHN5c3RlbWRbMV06IEpvYiBzeXN0
ZW1kLWJpbmZtdC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTEu
MDE3OTY2XSBzeXN0ZW1kWzFdOiBTdGFydGVkIFNldCBVcCBBZGRpdGlvbmFsIEJpbmFyeSBG
b3JtYXRzLgpbICAgMTEuMDI1MzU4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3Rz
PS9zeXMva2VybmVsL21tL2h1Z2VwYWdlcyBmYWlsZWQgZm9yIGRldi1odWdlcGFnZXMubW91
bnQuClsgICAxMS4wMzQ2NTBdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIGRldi1odWdlcGFn
ZXMubW91bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAg
IDExLjA0MzcwNl0gc3lzdGVtZFsxXTogSm9iIGRldi1odWdlcGFnZXMubW91bnQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4wNTA4NzNdIHN5c3RlbWRbMV06IE1vdW50
ZWQgSHVnZSBQYWdlcyBGaWxlIFN5c3RlbS4KWyAgIDExLjA2ODMwOF0gc3lzdGVtZFsxXTog
Q29uZGl0aW9uUGF0aEV4aXN0cz0vbGliL21vZHVsZXMvMy4xMy4wLTAzNTgzLWdmOWZjODBi
LWRpcnR5L21vZHVsZXMuZGV2bmFtZSBmYWlsZWQgZm9yIGttb2Qtc3RhdGljLW5vZGVzLnNl
cnZpY2UuClsgICAxMS4wODA4NjZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIGttb2Qtc3Rh
dGljLW5vZGVzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25v
cmluZy4KWyAgIDExLjA5MDM0N10gc3lzdGVtZFsxXTogSm9iIGttb2Qtc3RhdGljLW5vZGVz
LnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4wOTc4MTJdIHN5
c3RlbWRbMV06IFN0YXJ0ZWQgQ3JlYXRlIGxpc3Qgb2YgcmVxdWlyZWQgc3RhdGljIGRldmlj
ZSBub2RlcyBmb3IgdGhlIGN1cnJlbnQga2VybmVsLgpbICAgMTEuMTEwNzIxXSBzeXN0ZW1k
WzFdOiBDb25kaXRpb25DYXBhYmlsaXR5PUNBUF9NS05PRCBzdWNjZWVkZWQgZm9yIHN5c3Rl
bWQtdG1wZmlsZXMtc2V0dXAtZGV2LnNlcnZpY2UuClsgICAxMS4xMjQ0OThdIHN5c3RlbWQt
am91cm5hbGRbNTRdOiBWYWN1dW1pbmcgZG9uZSwgZnJlZWQgMCBieXRlcwpbICAgMTEuMTMw
NjcxXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBDcmVhdGUgc3RhdGljIGRldmljZSBub2RlcyBp
biAvZGV2Li4uClsgICAxMS4xNDk0MzZdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6
IC91c3IvYmluL3N5c3RlbWQtdG1wZmlsZXMgLS1wcmVmaXg9L2RldiAtLWNyZWF0ZQpbICAg
MTEuMTU4MjY3XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9iaW4vc3lzdGVtZC10bXBmaWxl
cyBhcyA1NgpbICAgMTEuMTgxOTY0XSBzeXN0ZW1kWzU2XTogRXhlY3V0aW5nOiAvdXNyL2Jp
bi9zeXN0ZW1kLXRtcGZpbGVzIC0tcHJlZml4PS9kZXYgLS1jcmVhdGUKWyAgIDExLjE5OTYx
OV0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxlcy1zZXR1cC1kZXYuc2VydmljZSBjaGFu
Z2VkIGRlYWQgLT4gc3RhcnQKWyAgIDExLjIwNzEzMl0gc3lzdGVtZFsxXTogU3RhcnRpbmcg
U3dhcC4KWyAgIDExLjIxNjU3N10gc3lzdGVtZFsxXTogc3dhcC50YXJnZXQgY2hhbmdlZCBk
ZWFkIC0+IGFjdGl2ZQpbICAgMTEuMjI4ODQ4XSBzeXN0ZW1kWzFdOiBKb2Igc3dhcC50YXJn
ZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4yNTI3MjJdIHN5c3RlbWRb
MV06IFJlYWNoZWQgdGFyZ2V0IFN3YXAuClsgICAxMS4yNTcyMjVdIHN5c3RlbWRbMV06IENv
bmRpdGlvblBhdGhFeGlzdHM9L2V0Yy9mc3RhYiBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtcmVt
b3VudC1mcy5zZXJ2aWNlLgpbICAgMTEuMjc1MDk2XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBS
ZW1vdW50IFJvb3QgYW5kIEtlcm5lbCBGaWxlIFN5c3RlbXMuLi4KWyAgIDExLjMwMDc4Ml0g
c3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1k
LXJlbW91bnQtZnMKWyAgIDExLjMwODU1N10gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGli
L3N5c3RlbWQvc3lzdGVtZC1yZW1vdW50LWZzIGFzIDU3ClsgICAxMS4zMzEzNjBdIHN5c3Rl
bWRbNTddOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1yZW1vdW50LWZz
ClsgICAxMS4zNTk2NTldIHN5c3RlbWRbMV06IHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNl
IGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTEuMzY2NTEzXSBzeXN0ZW1kWzFdOiBNb3Vu
dGluZyBUZW1wb3JhcnkgRGlyZWN0b3J5Li4uClsgICAxMi4yMDUxODZdIHN5c3RlbWRbMV06
IEFib3V0IHRvIGV4ZWN1dGU6IC9iaW4vbW91bnQgdG1wZnMgL3RtcCAtdCB0bXBmcyAtbyBt
b2RlPTE3Nzcsc3RyaWN0YXRpbWUKWyAgIDEyLjIxNTM2NF0gc3lzdGVtZFsxXTogRm9ya2Vk
IC9iaW4vbW91bnQgYXMgNjAKWyAgIDEyLjIyMzkxNV0gc3lzdGVtZFsxXTogdG1wLm1vdW50
IGNoYW5nZWQgZGVhZCAtPiBtb3VudGluZwpbICAgMTIuMjMxMTk1XSBzeXN0ZW1kWzYwXTog
RXhlY3V0aW5nOiAvYmluL21vdW50IHRtcGZzIC90bXAgLXQgdG1wZnMgLW8gbW9kZT0xNzc3
LHN0cmljdGF0aW1lClsgICAxMi4yNDAyODldIHN5c3RlbWRbMV06IFNldCB1cCBqb2JzIHBy
b2dyZXNzIHRpbWVyZmQuClsgICAxMi4yNTQ5NDhdIHN5c3RlbWRbMV06IFJlY2VpdmVkIFNJ
R0NITEQgZnJvbSBQSUQgNDAgKG4vYSkuClsgICAxMi4yNjE0NDBdIHN5c3RlbWRbMV06IEdv
dCBTSUdDSExEIGZvciBwcm9jZXNzIDQ3IChtb3VudCkKWyAgIDEyLjI2OTM2NV0gc3lzdGVt
ZFsxXTogR290IG5vdGlmaWNhdGlvbiBtZXNzYWdlIGZvciB1bml0IHN5c3RlbWQtam91cm5h
bGQuc2VydmljZQpbICAgMTIuMjc2OTMxXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWxk
LnNlcnZpY2U6IEdvdCBtZXNzYWdlClsgICAxMi4yODc5MzhdIHN5c3RlbWRbMV06IHN5c3Rl
bWQtam91cm5hbGQuc2VydmljZTogZ290IFNUQVRVUz1Qcm9jZXNzaW5nIHJlcXVlc3RzLi4u
ClsgICAxMi4yOTY0NzldIHN5c3RlbWRbMV06IENoaWxkIDQ3IGRpZWQgKGNvZGU9ZXhpdGVk
LCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuMzAzMjczXSBzeXN0ZW1kWzFdOiBDaGlsZCA0
NyBiZWxvbmdzIHRvIHN5cy1rZXJuZWwtZGVidWcubW91bnQKWyAgIDEyLjMwOTc5NV0gc3lz
dGVtZFsxXTogc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudCBtb3VudCBwcm9jZXNzIGV4aXRlZCwg
Y29kZT1leGl0ZWQgc3RhdHVzPTAKWyAgIDEyLjMxNzk1NF0gc3lzdGVtZFsxXTogc3lzLWtl
cm5lbC1kZWJ1Zy5tb3VudCBjaGFuZ2VkIG1vdW50aW5nIC0+IG1vdW50ZWQKWyAgIDEyLjMy
NTQ0N10gc3lzdGVtZFsxXTogSm9iIHN5cy1rZXJuZWwtZGVidWcubW91bnQvc3RhcnQgZmlu
aXNoZWQsIHJlc3VsdD1kb25lClsgICAxMi4zMzc2MzddIHN5c3RlbWRbMV06IE1vdW50ZWQg
RGVidWcgRmlsZSBTeXN0ZW0uClsgICAxMi4zNDMzMDJdIHN5c3RlbWRbMV06IEdvdCBTSUdD
SExEIGZvciBwcm9jZXNzIDQ4IChzeXN0ZW1kLXZjb25zb2wpClsgICAxMi4zNTAyNzhdIHN5
c3RlbWRbMV06IENoaWxkIDQ4IGRpZWQgKGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNT
KQpbICAgMTIuMzU2NjYwXSBzeXN0ZW1kWzFdOiBDaGlsZCA0OCBiZWxvbmdzIHRvIHN5c3Rl
bWQtdmNvbnNvbGUtc2V0dXAuc2VydmljZQpbICAgMTIuMzY0NDM3XSBzeXN0ZW1kWzFdOiBz
eXN0ZW1kLXZjb25zb2xlLXNldHVwLnNlcnZpY2U6IG1haW4gcHJvY2VzcyBleGl0ZWQsIGNv
ZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTClsgICAxMi4zNzUxMTNdIHN5c3RlbWRbMV06
IHN5c3RlbWQtdmNvbnNvbGUtc2V0dXAuc2VydmljZSBjaGFuZ2VkIHN0YXJ0IC0+IGV4aXRl
ZApbICAgMTIuMzgyODQ3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC12Y29uc29sZS1zZXR1
cC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuMzk2MTc3XSBz
eXN0ZW1kWzFdOiBTdGFydGVkIFNldHVwIFZpcnR1YWwgQ29uc29sZS4KWyAgIDEyLjQwMjE4
OF0gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNDkgKHVkZXZhZG0pClsg
ICAxMi40MDgwNDNdIHN5c3RlbWRbMV06IENoaWxkIDQ5IGRpZWQgKGNvZGU9ZXhpdGVkLCBz
dGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNDE1MDc3XSBzeXN0ZW1kWzFdOiBDaGlsZCA0OSBi
ZWxvbmdzIHRvIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UKWyAgIDEyLjQyMjExNF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZTogbWFpbiBwcm9jZXNz
IGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAgIDEyLjQzMjE1NF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZSBydW5uaW5nIG5leHQg
bWFpbiBjb21tYW5kIGZvciBzdGF0ZSBzdGFydApbICAgMTIuNDQxMjA5XSBzeXN0ZW1kWzFd
OiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi91ZGV2YWRtIHRyaWdnZXIgLS10eXBlPWRl
dmljZXMgLS1hY3Rpb249YWRkClsgICAxMi40NTA2NzldIHN5c3RlbWRbMV06IEZvcmtlZCAv
dXNyL2Jpbi91ZGV2YWRtIGFzIDYyClsgICAxMi40NTY3MjJdIHN5c3RlbWRbMV06IEdvdCBT
SUdDSExEIGZvciBwcm9jZXNzIDUwIChzeXN0ZW1kLXN5c2N0bCkKWyAgIDEyLjQ2NzA3Nl0g
c3lzdGVtZFs2Ml06IEV4ZWN1dGluZzogL3Vzci9iaW4vdWRldmFkbSB0cmlnZ2VyIC0tdHlw
ZT1kZXZpY2VzIC0tYWN0aW9uPWFkZApbICAgMTIuNDc1ODk4XSBzeXN0ZW1kWzFdOiBDaGls
ZCA1MCBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDEyLjQ4ODk5
OV0gc3lzdGVtZFsxXTogQ2hpbGQgNTAgYmVsb25ncyB0byBzeXN0ZW1kLXN5c2N0bC5zZXJ2
aWNlClsgICAxMi40OTUyMzBdIHN5c3RlbWRbMV06IHN5c3RlbWQtc3lzY3RsLnNlcnZpY2U6
IG1haW4gcHJvY2VzcyBleGl0ZWQsIGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTClsg
ICAxMi41Mjk3MTNdIHN5c3RlbWRbMV06IHN5c3RlbWQtc3lzY3RsLnNlcnZpY2UgY2hhbmdl
ZCBzdGFydCAtPiBleGl0ZWQKWyAgIDEyLjUzNjI0Nl0gc3lzdGVtZFsxXTogSm9iIHN5c3Rl
bWQtc3lzY3RsLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMi41
NjQyMDhdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgQXBwbHkgS2VybmVsIFZhcmlhYmxlcy4KWyAg
IDEyLjU3OTMzM10gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNTYgKHN5
c3RlbWQtdG1wZmlsZSkKWyAgIDEyLjU4NTk0NF0gc3lzdGVtZFsxXTogQ2hpbGQgNTYgZGll
ZCAoY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMi42MDg5NTVdIHN5c3Rl
bWRbMV06IENoaWxkIDU2IGJlbG9uZ3MgdG8gc3lzdGVtZC10bXBmaWxlcy1zZXR1cC1kZXYu
c2VydmljZQpbICAgMTIuNjE2MjQ3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLXNl
dHVwLWRldi5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3Rh
dHVzPTAvU1VDQ0VTUwpbICAgMTIuNjM5NTk1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZp
bGVzLXNldHVwLWRldi5zZXJ2aWNlIGNoYW5nZWQgc3RhcnQgLT4gZXhpdGVkClsgICAxMi42
NDcxNTRdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1kLXRtcGZpbGVzLXNldHVwLWRldi5zZXJ2
aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuNjgxMDg2XSBzeXN0ZW1k
WzFdOiBTdGFydGVkIENyZWF0ZSBzdGF0aWMgZGV2aWNlIG5vZGVzIGluIC9kZXYuClsgICAx
Mi42ODc3NThdIHN5c3RlbWRbMV06IEdvdCBTSUdDSExEIGZvciBwcm9jZXNzIDU3IChzeXN0
ZW1kLXJlbW91bnQpClsgICAxMi42OTk1NjNdIHN5c3RlbWRbMV06IENoaWxkIDU3IGRpZWQg
KGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNzA1OTY0XSBzeXN0ZW1k
WzFdOiBDaGlsZCA1NyBiZWxvbmdzIHRvIHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNlClsg
ICAxMi43MTMyODddIHN5c3RlbWRbMV06IHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNlOiBt
YWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAg
MTIuNzIzNjczXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXJlbW91bnQtZnMuc2VydmljZSBjaGFu
Z2VkIHN0YXJ0IC0+IGV4aXRlZApbICAgMTIuNzMxMDg4XSBzeXN0ZW1kWzFdOiBKb2Igc3lz
dGVtZC1yZW1vdW50LWZzLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsg
ICAxMi43NDUzOTFdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgUmVtb3VudCBSb290IGFuZCBLZXJu
ZWwgRmlsZSBTeXN0ZW1zLgpbICAgMTIuNzUyNzI4XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hM
RCBmb3IgcHJvY2VzcyA2MCAobW91bnQpClsgICAxMi43NTg0MDhdIHN5c3RlbWRbMV06IENo
aWxkIDYwIGRpZWQgKGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNzY1
Njk0XSBzeXN0ZW1kWzFdOiBDaGlsZCA2MCBiZWxvbmdzIHRvIHRtcC5tb3VudApbICAgMTIu
NzcxMDg4XSBzeXN0ZW1kWzFdOiB0bXAubW91bnQgbW91bnQgcHJvY2VzcyBleGl0ZWQsIGNv
ZGU9ZXhpdGVkIHN0YXR1cz0wClsgICAxMi43NzgxMTldIHN5c3RlbWRbMV06IHRtcC5tb3Vu
dCBjaGFuZ2VkIG1vdW50aW5nIC0+IG1vdW50ZWQKWyAgIDEyLjc4NDUyMF0gc3lzdGVtZFsx
XTogSm9iIHRtcC5tb3VudC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEyLjc5
NTc0M10gc3lzdGVtZFsxXTogTW91bnRlZCBUZW1wb3JhcnkgRGlyZWN0b3J5LgpbICAgMTIu
ODAxNTU1XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA2MiAodWRldmFk
bSkKWyAgIDEyLjgwNzQxMV0gc3lzdGVtZFsxXTogQ2hpbGQgNjIgZGllZCAoY29kZT1leGl0
ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMi44MTQ0MzhdIHN5c3RlbWRbMV06IENoaWxk
IDYyIGJlbG9uZ3MgdG8gc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZQpbICAgMTIuODIx
NDI0XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXYtdHJpZ2dlci5zZXJ2aWNlOiBtYWluIHBy
b2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAgMTIuODMy
MTI2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXYtdHJpZ2dlci5zZXJ2aWNlIGNoYW5nZWQg
c3RhcnQgLT4gZXhpdGVkClsgICAxMi44Mzk3MjldIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1k
LXVkZXYtdHJpZ2dlci5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
MTIuODUzMTUzXSBzeXN0ZW1kWzFdOiBTdGFydGVkIHVkZXYgQ29sZHBsdWcgYWxsIERldmlj
ZXMuClsgICAxMi44NTk1MjldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIExvYWQvU2F2ZSBSYW5k
b20gU2VlZC4uLgpbICAgMTIuODcwMjY4XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRl
OiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtcmFuZG9tLXNlZWQgbG9hZApbICAgMTIuODc4
NDc1XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXJhbmRv
bS1zZWVkIGFzIDY0ClsgICAxMi44OTAxMjBdIHN5c3RlbWRbNjRdOiBFeGVjdXRpbmc6IC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1yYW5kb20tc2VlZCBsb2FkClsgICAxMi44OTg3MTBd
IHN5c3RlbWRbMV06IHN5c3RlbWQtcmFuZG9tLXNlZWQuc2VydmljZSBjaGFuZ2VkIGRlYWQg
LT4gc3RhcnQKWyAgIDEyLjkwNjEwNl0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aElzUmVh
ZFdyaXRlPS9zeXMgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLXVkZXZkLnNlcnZpY2UuClsgICAx
Mi45MTYzNThdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHVkZXYgS2VybmVsIERldmljZSBNYW5h
Z2VyLi4uClsgICAxMi45MzI3NzBdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11ZGV2ZApbICAgMTIuOTQxNDQ1XSBzeXN0ZW1kWzFd
OiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVkZXZkIGFzIDY1ClsgICAxMi45
NTI4NjldIHN5c3RlbWRbNjVdOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVt
ZC11ZGV2ZApbICAgMTIuOTYwMzY3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLnNlcnZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHN0YXJ0ClsgICAxMi45Njc5MjJdIHN5c3RlbWRbMV06IFN0
YXJ0aW5nIExvY2FsIEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgIDEyLjk3NTIyOV0gc3lzdGVt
ZFsxXTogbG9jYWwtZnMtcHJlLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAx
Mi45ODIxNTZdIHN5c3RlbWRbMV06IEpvYiBsb2NhbC1mcy1wcmUudGFyZ2V0L3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuOTk2MDY1XSBzeXN0ZW1kWzFdOiBSZWFjaGVk
IHRhcmdldCBMb2NhbCBGaWxlIFN5c3RlbXMgKFByZSkuClsgICAxMy4wMDMwMTRdIHN5c3Rl
bWRbMV06IFN0YXJ0aW5nIExvY2FsIEZpbGUgU3lzdGVtcy4KWyAgIDEzLjAwNzkzNV0gc3lz
dGVtZFsxXTogbG9jYWwtZnMudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgIDEz
LjAxNDQyMF0gc3lzdGVtZFsxXTogSm9iIGxvY2FsLWZzLnRhcmdldC9zdGFydCBmaW5pc2hl
ZCwgcmVzdWx0PWRvbmUKWyAgIDEzLjAyNzg1Ml0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJn
ZXQgTG9jYWwgRmlsZSBTeXN0ZW1zLgpbICAgMTMuMDM1NDEyXSBzeXN0ZW1kWzFdOiBDb25k
aXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3J1bi90bXBmaWxlcy5kIGZhaWxlZCBmb3Igc3lz
dGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlLgpbICAgMTMuMDQ2NzU3XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy90bXBmaWxlcy5kIGZhaWxlZCBm
b3Igc3lzdGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlLgpbICAgMTMuMDU5MDU4XSBzeXN0
ZW1kWzFdOiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9sb2NhbC9saWIvdG1w
ZmlsZXMuZCBmYWlsZWQgZm9yIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS4KWyAg
IDEzLjA4MDUwNV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC91
c3IvbGliL3RtcGZpbGVzLmQgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLXRtcGZpbGVzLXNldHVw
LnNlcnZpY2UuClsgICAxMy4wOTIyMDNdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJlY3JlYXRl
IFZvbGF0aWxlIEZpbGVzIGFuZCBEaXJlY3Rvcmllcy4uLgpbICAgMTMuMTAxNzA0XSBzeXN0
ZW1kLXVkZXZkWzY1XTogc3RhcnRpbmcgdmVyc2lvbiAyMDgKWyAgIDEzLjExNTI0OV0gc3lz
dGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9iaW4vc3lzdGVtZC10bXBmaWxlcyAt
LWNyZWF0ZSAtLXJlbW92ZSAtLWV4Y2x1ZGUtcHJlZml4PS9kZXYKWyAgIDEzLjEyNjE4Ml0g
c3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL3N5c3RlbWQtdG1wZmlsZXMgYXMgNjcKWyAg
IDEzLjE0MDEwMF0gc3lzdGVtZFs2N106IEV4ZWN1dGluZzogL3Vzci9iaW4vc3lzdGVtZC10
bXBmaWxlcyAtLWNyZWF0ZSAtLXJlbW92ZSAtLWV4Y2x1ZGUtcHJlZml4PS9kZXYKWyAgIDEz
LjE1MjA3NV0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTMuMTYwNzc5XSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBUcmlnZ2VyIEZsdXNoaW5nIG9mIEpvdXJuYWwgdG8gUGVyc2lzdGVudCBTdG9yYWdlLi4u
ClsgICAxMy4xOTIzNzNdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvYmlu
L3N5c3RlbWN0bCBraWxsIC0ta2lsbC13aG89bWFpbiAtLXNpZ25hbD1TSUdVU1IxIHN5c3Rl
bWQtam91cm5hbGQuc2VydmljZQpbICAgMTMuMjA2Mzg5XSBzeXN0ZW1kWzFdOiBGb3JrZWQg
L3Vzci9iaW4vc3lzdGVtY3RsIGFzIDY4ClsgICAxMy4yMTY5MDldIHN5c3RlbWRbNjhdOiBF
eGVjdXRpbmc6IC91c3IvYmluL3N5c3RlbWN0bCBraWxsIC0ta2lsbC13aG89bWFpbiAtLXNp
Z25hbD1TSUdVU1IxIHN5c3RlbWQtam91cm5hbGQuc2VydmljZQpbICAgMTMuMjMwNjEyXSBz
eXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2VydmljZSBjaGFuZ2VkIGRlYWQg
LT4gc3RhcnQKWyAgIDEzLjI0OTQzN10gc3lzdGVtZFsxXTogQWNjZXB0ZWQgY29ubmVjdGlv
biBvbiBwcml2YXRlIGJ1cy4KWyAgIDEzLjI1NzUzOF0gc3lzdGVtZFsxXTogSW5jb21pbmcg
dHJhZmZpYyBvbiBzeXN0ZW1kLXVkZXZkLWtlcm5lbC5zb2NrZXQKWyAgIDEzLjI2NjM5MV0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29ja2V0IGNoYW5nZWQgbGlzdGVu
aW5nIC0+IHJ1bm5pbmcKWyAgIDEzLjI3NTYzOF0gc3lzdGVtZFsxXTogR290IG5vdGlmaWNh
dGlvbiBtZXNzYWdlIGZvciB1bml0IHN5c3RlbWQtdWRldmQuc2VydmljZQpbICAgMTMuMjgz
NDUwXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLnNlcnZpY2U6IEdvdCBtZXNzYWdlClsg
ICAxMy4yOTEyMTJdIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldmQuc2VydmljZTogZ290IFJF
QURZPTEKWyAgIDEzLjMwMjg4NV0gc3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC5zZXJ2aWNl
IGNoYW5nZWQgc3RhcnQgLT4gcnVubmluZwpbICAgMTMuMzExMDIxXSBzeXN0ZW1kWzFdOiBK
b2Igc3lzdGVtZC11ZGV2ZC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpb
ICAgMTMuMzI1NzYxXSBzeXN0ZW1kWzFdOiBTdGFydGVkIHVkZXYgS2VybmVsIERldmljZSBN
YW5hZ2VyLgpbICAgMTMuMzMyMTY4XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLWNvbnRy
b2wuc29ja2V0IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDEzLjM0NjUwM10g
c3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJRCA2MCAobi9hKS4KWyAgIDEz
LjM1Mjk2NV0gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNjQgKHN5c3Rl
bWQtcmFuZG9tLSkKWyAgIDEzLjM2MTc3OF0gc3lzdGVtZFsxXTogQ2hpbGQgNjQgZGllZCAo
Y29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMy4zNjgyNzddIHN5c3RlbWRb
MV06IENoaWxkIDY0IGJlbG9uZ3MgdG8gc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlClsg
ICAxMy4zNzU5NjhdIHN5c3RlbWRbMV06IHN5c3RlbWQtcmFuZG9tLXNlZWQuc2VydmljZTog
bWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAg
IDEzLjM4ODI0Ml0gc3lzdGVtZFsxXTogc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlIGNo
YW5nZWQgc3RhcnQgLT4gZXhpdGVkClsgICAxMy4zOTYwODldIHN5c3RlbWRbMV06IEpvYiBz
eXN0ZW1kLXJhbmRvbS1zZWVkLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25l
ClsgICAxMy40MTA4MTldIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9hZC9TYXZlIFJhbmRvbSBT
ZWVkLgpbICAgMTMuNDE2MzY5XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2Vz
cyA2NyAoc3lzdGVtZC10bXBmaWxlKQpbICAgMTMuNDI0OTE2XSBzeXN0ZW1kWzFdOiBDaGls
ZCA2NyBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDEzLjQzMTg4
OV0gc3lzdGVtZFsxXTogQ2hpbGQgNjcgYmVsb25ncyB0byBzeXN0ZW1kLXRtcGZpbGVzLXNl
dHVwLnNlcnZpY2UKWyAgIDEzLjQ0MDM0MF0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxl
cy1zZXR1cC5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3Rh
dHVzPTAvU1VDQ0VTUwpbICAgMTMuNDUxNDI3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZp
bGVzLXNldHVwLnNlcnZpY2UgY2hhbmdlZCBzdGFydCAtPiBleGl0ZWQKWyAgIDEzLjQ2MDgz
M10gc3lzdGVtZFsxXTogSm9iIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS9zdGFy
dCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEzLjQ3NzE3OF0gc3lzdGVtZFsxXTogU3Rh
cnRlZCBSZWNyZWF0ZSBWb2xhdGlsZSBGaWxlcyBhbmQgRGlyZWN0b3JpZXMuClsgICAxMy40
ODU4OTldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFVwZGF0ZSBVVE1QIGFib3V0IFN5c3RlbSBS
ZWJvb3QvU2h1dGRvd24uLi4KWyAgIDEzLjUwMTQyNV0gc3lzdGVtZFsxXTogQWJvdXQgdG8g
ZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVwZGF0ZS11dG1wIHJlYm9vdApb
ICAgMTMuNTEwNzEzXSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0
ZW1kLXVwZGF0ZS11dG1wIGFzIDcwClsgICAxMy41MzE3MTldIHN5c3RlbWRbNzBdOiBFeGVj
dXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11cGRhdGUtdXRtcCByZWJvb3QKWyAg
IDEzLjU0MTUxM10gc3lzdGVtZFsxXTogc3lzdGVtZC11cGRhdGUtdXRtcC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTMuNTUyNjExXSBzeXN0ZW1kWzFdOiBBY2NlcHRl
ZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNTY0OTA3XSBzeXN0ZW1kWzFd
OiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNTc2NDExXSBz
eXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMu
NTg0ODIyXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVz
LgpbICAgMTMuNTk1MDMxXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHBy
aXZhdGUgYnVzLgpbICAgMTMuNjA0Mjg3XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVz
dDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2VkKCkgb24gL29yZy9m
cmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTMuNjI5OTk3XSBzeXN0ZW1kWzFdOiBB
Y2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNjUyMzcyXSBzeXN0
ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFn
ZW50LlJlbGVhc2VkKCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAg
MTMuNzA0NTY0XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNr
dG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVz
L0xvY2FsClsgICAxMy43NTA4MDJdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24g
b24gcHJpdmF0ZSBidXMuClsgICAxMy43NTcwMjRdIHJhbmRvbTogbm9uYmxvY2tpbmcgcG9v
bCBpcyBpbml0aWFsaXplZApbICAgMTMuODAwNTc1XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMg
cmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2VkKCkgb24g
L29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTMuODg4MzczXSBzeXN0ZW1k
WzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlz
Y29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAxMy45ODA1
NTBdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJpdmF0ZSBidXMuClsg
ICAxNC4wMzM4MjddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRl
c2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5
c3RlbWQxL2FnZW50ClsgICAxNC4xMTc3NzNdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1
ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAvb3Jn
L2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE0LjE4NTQ5MV0gc3lzdGVtZFsxXTogZGV2
LXR0eVMwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTQuMjI1MTE0XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4MjUwLXR0eS10dHlTMC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE0LjI3MzI0NF0gc3lzdGVtZFsx
XTogQWNjZXB0ZWQgY29ubmVjdGlvbiBvbiBwcml2YXRlIGJ1cy4KWyAgIDE0LjMxNDUyNF0g
c3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1k
MS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQK
WyAgIDE0LjM2OTgwOF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVl
ZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Av
REJ1cy9Mb2NhbApbICAgMTQuNDIzMzUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5UzEuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNC40NTc5OTVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXBsYXRmb3JtLXNlcmlhbDgyNTAtdHR5LXR0eVMxLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTQuNDk5MjU5XSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBj
b25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTQuNTMxODc0XSBzeXN0ZW1kWzFdOiBH
b3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVh
c2VkKCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTQuNTg0Mzk1
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsg
ICAxNC42MzY0MTBdIHN5c3RlbWRbMV06IGRldi10dHlTMi5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDE0LjY2OTg1M10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxh
dGZvcm0tc2VyaWFsODI1MC10dHktdHR5UzIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxNC43MTU5NTNdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24g
cHJpdmF0ZSBidXMuClsgICAxNC43NDY1NjZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1
ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3Jn
L2ZyZWVkZXNrdG9wL3N5c3RlbWQxL2FnZW50ClsgICAxNC44MDM4OTVdIHN5c3RlbWRbMV06
IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25u
ZWN0ZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE0Ljg2NjM0MF0g
c3lzdGVtZFsxXTogZGV2LXR0eVMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTQuOTAwODgzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4
MjUwLXR0eS10dHlTMy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE0Ljk1
NDA3Ml0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5z
eXN0ZW1kMS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEv
YWdlbnQKWyAgIDE1LjAwOTc2OV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9y
Zy5mcmVlZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRl
c2t0b3AvREJ1cy9Mb2NhbApbICAgMTUuMDc1MjEzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5UzQu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS4xMTI4NDFdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXBsYXRmb3JtLXNlcmlhbDgyNTAtdHR5LXR0eVM0LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuMTYzMjI4XSBzeXN0ZW1kWzFdOiBHb3Qg
RC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2Vk
KCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTUuMjI2OTMyXSBz
eXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9j
YWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAx
NS4yODY3OTZdIHN5c3RlbWRbMV06IGRldi10dHlTNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDE1LjMyNDc2M10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxhdGZv
cm0tc2VyaWFsODI1MC10dHktdHR5UzUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS4zMzcwMTFdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJl
ZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9w
L3N5c3RlbWQxL2FnZW50ClsgICAxNS4zNDk1ODddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyBy
ZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAv
b3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE1LjM2MjA1MF0gc3lzdGVtZFsxXTog
ZGV2LXR0eVM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuMzY4MDUw
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4MjUwLXR0eS10dHlT
Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjM3ODUxNV0gc3lzdGVt
ZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5NYW5h
Z2VyLktpbGxVbml0KCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMQpbICAgMTUuMzg5
NTQyXSBzeXN0ZW1kLWpvdXJuYWxkWzU0XTogUmVjZWl2ZWQgcmVxdWVzdCB0byBmbHVzaCBy
dW50aW1lIGpvdXJuYWwgZnJvbSBQSUQgMQpbICAgMTUuMzk5MzczXSBzeXN0ZW1kWzFdOiBH
b3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVj
dGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAxNS40MTE0ODVdIHN5
c3RlbWRbMV06IGRldi10dHlTNy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDE1LjQxNzQ3Nl0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxhdGZvcm0tc2VyaWFsODI1
MC10dHktdHR5UzcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS40MjY2
ODddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1
cy5Qcm9wZXJ0aWVzLkdldCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEKWyAgIDE1
LjQzODE0MV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3Rv
cC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cy9M
b2NhbApbICAgMTUuNDUxMjc5XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRwYXRoLXBs
YXRmb3JtXHgyZDFjMGYwMDAubW1jLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuNDYwNDU3XSBzeXN0ZW1kWzFdOiBkZXYtbW1jYmxrMC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjQ2NjU5MV0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMt
c29jLjEtMWMwZjAwMC5tbWMtbW1jX2hvc3QtbW1jMC1tbWMwOmIzNjgtYmxvY2stbW1jYmxr
MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjQ4MDk0Ml0gc3lzdGVt
ZFsxXTogZGV2LWRpc2stYnlceDJkdXVpZC1mMmM2ODg3YVx4MmQ0MTZiXHgyZDRiYzhceDJk
YmUzN1x4MmRiOWE3M2JkZTM4ZDkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsg
ICAxNS40OTIxNjddIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHBhdGgtcGxhdGZvcm1c
eDJkMWMwZjAwMC5tbWNceDJkcGFydDEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS41MDE5MzNdIHN5c3RlbWRbMV06IGRldi1tbWNibGswcDEuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41MDgyNjRdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXNvYy4xLTFjMGYwMDAubW1jLW1tY19ob3N0LW1tYzAtbW1jMDpiMzY4LWJsb2NrLW1t
Y2JsazAtbW1jYmxrMHAxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUu
NTIxMzE3XSBzeXN0ZW1kWzFdOiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLWV0aDAuZGV2
aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41MjkyMjldIHN5c3RlbWRbMV06
IEpvYiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLWV0aDAuZGV2aWNlL3N0YXJ0IGZpbmlz
aGVkLCByZXN1bHQ9ZG9uZQpbICAgMTUuNTQ0MDY3XSBzeXN0ZW1kWzFdOiBGb3VuZCBkZXZp
Y2UgL3N5cy9zdWJzeXN0ZW0vbmV0L2RldmljZXMvZXRoMC4KWyAgIDE1LjU1MDY5NF0gc3lz
dGVtZFsxXTogc3lzLWRldmljZXMtc29jLjEtMWM1MDAwMC5ldGhlcm5ldC1uZXQtZXRoMC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjU2MDc0MV0gc3lzdGVtZFsx
XTogZGV2LWh2YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41NjY2
NjFdIHN5c3RlbWRbMV06IEpvYiBkZXYtaHZjMC5kZXZpY2Uvc3RhcnQgZmluaXNoZWQsIHJl
c3VsdD1kb25lClsgICAxNS41Nzc4MTldIHN5c3RlbWRbMV06IEZvdW5kIGRldmljZSAvZGV2
L2h2YzAuClsgICAxNS41ODI0ODddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LWh2YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41OTE2MTNd
IHN5c3RlbWRbMV06IGRldi1odmMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuNTk3NTAxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS1odmMx
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjA2Nzg4XSBzeXN0ZW1k
WzFdOiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLXNpdDAuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNS42MTQ3MDldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtbmV0LXNpdDAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42
MjM3NTNdIHN5c3RlbWRbMV06IGRldi1odmMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTUuNjI5NzY5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS1odmMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjM4ODEwXSBz
eXN0ZW1kWzFdOiBkZXYtaHZjMi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDE1LjY0NDcwM10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtdmlydHVhbC10dHktaHZjMi5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjY1MzgzMl0gc3lzdGVtZFsx
XTogZGV2LWh2YzQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42NTk4
NTNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LWh2YzQuZGV2aWNlIGNo
YW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42Njg4OTddIHN5c3RlbWRbMV06IGRldi1o
dmM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjc0NzkyXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS1odmM2LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMTUuNjgzOTcxXSBzeXN0ZW1kWzFdOiBkZXYtaHZjNS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjY4OTk5NV0gc3lzdGVtZFsxXTog
c3lzLWRldmljZXMtdmlydHVhbC10dHktaHZjNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDE1LjY5OTAwOV0gc3lzdGVtZFsxXTogZGV2LWh2YzcuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43MDQ4OTddIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LWh2YzcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsg
ICAxNS43MTU4NjFdIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHBhdGgtcGxhdGZvcm1c
eDJkMWMxYzAwMC5laGNpMVx4MmR1c2JceDJkMDoxOjEuMFx4MmRzY3NpXHgyZDA6MDowOjAu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43MjgzMzhdIHN5c3RlbWRb
MV06IGRldi1kaXNrLWJ5XHgyZGlkLXVzYlx4MmRfVVNCX0RJU0tfMi4wXzA3QkMwNTAyQ0Y0
MDAwQ0FceDJkMDowLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNzM4
OTU4XSBzeXN0ZW1kWzFdOiBkZXYtc2RhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dl
ZApbICAgMTUuNzQ0Nzc5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1zb2MuMS0xYzFjMDAw
LmVoY2kxLXVzYjItMlx4MmQxLTJceDJkMToxLjAtaG9zdDAtdGFyZ2V0MDowOjAtMDowOjA6
MC1ibG9jay1zZGEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43NjE3
OTRdIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHV1aWQtZGZiODdiZjdceDJkNTJjOFx4
MmQ0NzgwXHgyZDk1ZGVceDJkMmM2YzNjZGMxZGMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMTUuNzczMDM4XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRwYXRo
LXBsYXRmb3JtXHgyZDFjMWMwMDAuZWhjaTFceDJkdXNiXHgyZDA6MToxLjBceDJkc2NzaVx4
MmQwOjA6MDowXHgyZHBhcnQxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MTUuNzg2MTY1XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRpZC11c2JceDJkX1VTQl9E
SVNLXzIuMF8wN0JDMDUwMkNGNDAwMENBXHgyZDA6MFx4MmRwYXJ0MS5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1Ljc5NzczMl0gc3lzdGVtZFsxXTogZGV2LXNkYTEu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44MDM2MzZdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXNvYy4xLTFjMWMwMDAuZWhjaTEtdXNiMi0yXHgyZDEtMlx4MmQx
OjEuMC1ob3N0MC10YXJnZXQwOjA6MC0wOjA6MDowLWJsb2NrLXNkYS1zZGExLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuODE4NTU5XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YTEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44MjQ2NzNdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWExLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuODMzNzIwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTAu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44Mzk4ODFdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWEwLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTUuODQ5MzIyXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44NTUzMDVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTUuODY0NjM0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTMuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44NzA3NzldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWEzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuODc5ODY2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTIuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNS44ODU4NDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWEyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUu
ODk1MDM1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNS45MDExMTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWE3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTEwMjg3
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS45MTYyNjddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWE2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTI1NTIyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YTUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
NS45MzE2MDldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE1LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTQwNzg4XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YWEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NDY3
NjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFhLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTU1OTgyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YTkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NjIwODJdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE5LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTcxMzI4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTgu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NzczMTJdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTUuOTg2NTYxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45OTI2NTFdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMDAxNzk3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWMuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wMDc3NzFdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWFjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMDE3MDcwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWIuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4wMjMxNjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWFiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MDMyMzYzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4wMzgzNDRdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWIwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDQ3NTYx
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4wNTM2ODNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWFmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDYyODIwXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YWUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4wNjg5MjRdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDc3OTQ2XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YjMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wODQw
MjZdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWIzLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDkzMjA4XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YjIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wOTkzMjNdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWIyLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTA4MzMzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjUu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xMTQ0MjJdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI1LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMTIzNTY2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xMjk2NjRdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMTM4ODc0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjEuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xNDQ4NTJdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWIxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMTU0MTA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjguZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4xNjAyMjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWI4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MTY5MzU0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4xNzUzMzFdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWI2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTg0NTk2
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4xOTA3NDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWI3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTk5ODAyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YmIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4yMDU3NzhdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJiLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjE1MDIwXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YjkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yMjEx
MjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjMwMjUyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YmEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yMzYyMjddIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJhLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjQ1NDY5XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmUu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yNTE1NjRdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJlLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMjYwNzI2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmMuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yNjY3MDRdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMjc1OTY0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmQuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yODIwNjFdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWJkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMjkxMjIzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzEuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4yOTcxOTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MzA2NDA3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4zMTI0OTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWJmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzIxNjA4
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4zMjc1ODJdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWMwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzM2ODc2XSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YzQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4zNDI5NzBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM0LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzUyMTczXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YzIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zNTgx
NTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWMyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzY3Mzg3XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YzMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zNzM0ODBdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWMzLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzgyNjA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Yzcu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zODg3MTNdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM3LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMzk3ODA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzUuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40MDM5MDVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNDEzMDM4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzkuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40MTkxNDVdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWM5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNDI4MTE4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzYuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi40MzQxOTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NDQzNzU3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2IuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi40NDk5MzFdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWNiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDU5MTc3
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2EuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi40NjUxNThdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWNhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDc0NDg5XSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YzguZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni40ODA2NDVdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM4LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDg5NzQzXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5Y2UuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40OTU3
MjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTA0OTI5XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5Y2MuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41MTEwMDddIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNjLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTIwMTE1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2Qu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41MjYwOTVdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNkLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuNTM1NDEwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDEuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41NDE0OTVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNTUwNjM2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDAuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41NTY2MDldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWQwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNTY1OTgwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2YuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi41NzIwNzJdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWNmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NTgxMjA3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi41ODcxODBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWQ1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTk2NDY0
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi42MDI1NjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWQ0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjExNzEyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5ZDMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni42MTc2ODldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQzLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjI3Njg5XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5ZDIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42MzM4
ODldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjQzNDkyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5ZDguZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42NDk2MTldIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ4LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjYwODUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDYu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42NjY4OTVdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ2LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuNjc4MDAyXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDcuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42ODQ1MDldIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNjk0MTM1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGIuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi43MDAyNjldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWRiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNzA5NTYwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDkuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi43MTU3MDZdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWQ5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NzI3ODExXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi43NDYxMDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWRhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNzU0NDY3
XSBzeXN0ZW1kWzFdOiBSZWNlaXZlZCBTSUdDSExEIGZyb20gUElEIDY4IChzeXN0ZW1jdGwp
LgpbICAgMTYuNzYwODkwXSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA2
OCAoc3lzdGVtY3RsKQpbICAgMTYuNzY2OTI3XSBzeXN0ZW1kWzFdOiBDaGlsZCA2OCBkaWVk
IChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE2Ljc3MzUzN10gc3lzdGVt
ZFsxXTogQ2hpbGQgNjggYmVsb25ncyB0byBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2Vydmlj
ZQpbICAgMTYuNzgwODk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2Vy
dmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NF
U1MKWyAgIDE2Ljc5MTM0N10gc3lzdGVtZFsxXTogc3lzdGVtZC1qb3VybmFsLWZsdXNoLnNl
cnZpY2UgY2hhbmdlZCBzdGFydCAtPiBkZWFkClsgICAxNi44MDA2NDVdIHN5c3RlbWRbMV06
IEpvYiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVz
dWx0PWRvbmUKWyAgIDE2LjgxNjE2MF0gc3lzdGVtZFsxXTogU3RhcnRlZCBUcmlnZ2VyIEZs
dXNoaW5nIG9mIEpvdXJuYWwgdG8gUGVyc2lzdGVudCBTdG9yYWdlLgpbICAgMTYuODI0MTQ3
XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA3MCAoc3lzdGVtZC11cGRh
dGUtKQpbICAgMTYuODMwOTI4XSBzeXN0ZW1kWzFdOiBDaGlsZCA3MCBkaWVkIChjb2RlPWV4
aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE2LjgzNzMxM10gc3lzdGVtZFsxXTogQ2hp
bGQgNzAgYmVsb25ncyB0byBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZpY2UKWyAgIDE2Ljg0
NDEzNl0gc3lzdGVtZFsxXTogc3lzdGVtZC11cGRhdGUtdXRtcC5zZXJ2aWNlOiBtYWluIHBy
b2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAgMTYuODU0
MjIyXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZpY2UgY2hhbmdlZCBz
dGFydCAtPiBleGl0ZWQKWyAgIDE2Ljg2MTM4N10gc3lzdGVtZFsxXTogSm9iIHN5c3RlbWQt
dXBkYXRlLXV0bXAuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE2
Ljg3NTgwMV0gc3lzdGVtZFsxXTogU3RhcnRlZCBVcGRhdGUgVVRNUCBhYm91dCBTeXN0ZW0g
UmVib290L1NodXRkb3duLgpbICAgMTYuODgyNzI3XSBzeXN0ZW1kWzFdOiBDbG9zZWQgam9i
cyBwcm9ncmVzcyB0aW1lcmZkLgpbICAgMTYuODg3OTY0XSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBTeXN0ZW0gSW5pdGlhbGl6YXRpb24uClsgICAxNi44OTMyMzddIHN5c3RlbWRbMV06IHN5
c2luaXQudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgIDE2Ljg5OTAwMl0gc3lz
dGVtZFsxXTogSm9iIHN5c2luaXQudGFyZ2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgMTYuOTExMTI5XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBTeXN0ZW0gSW5p
dGlhbGl6YXRpb24uClsgICAxNi45MTY5MzRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEQtQnVz
IFN5c3RlbSBNZXNzYWdlIEJ1cyBTb2NrZXQuClsgICAxNi45MjMyNTddIHN5c3RlbWRbMV06
IGRidXMuc29ja2V0IGNoYW5nZWQgZGVhZCAtPiBsaXN0ZW5pbmcKWyAgIDE2LjkyOTAzN10g
c3lzdGVtZFsxXTogSm9iIGRidXMuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgMTYuOTQxNTYzXSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRC1CdXMgU3lzdGVt
IE1lc3NhZ2UgQnVzIFNvY2tldC4KWyAgIDE2Ljk0ODA1Nl0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgU29ja2V0cy4KWyAgIDE2Ljk1MjE0MV0gc3lzdGVtZFsxXTogc29ja2V0cy50YXJnZXQg
Y2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgMTYuOTU3ODMxXSBzeXN0ZW1kWzFdOiBKb2Ig
c29ja2V0cy50YXJnZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNi45Njg4
NDRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFNvY2tldHMuClsgICAxNi45NzM0NDZd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIERhaWx5IENsZWFudXAgb2YgVGVtcG9yYXJ5IERpcmVj
dG9yaWVzLgpbICAgMTYuOTgwMjEyXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLWNs
ZWFuLnRpbWVyOiBNb25vdG9uaWMgdGltZXIgZWxhcHNlcyBpbiAxNG1pbiA0My4wMjg0MDFz
LgpbICAgMTYuOTg5MjEzXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLWNsZWFuLnRp
bWVyIGNoYW5nZWQgZGVhZCAtPiB3YWl0aW5nClsgICAxNi45OTYyMDhdIHN5c3RlbWRbMV06
IEpvYiBzeXN0ZW1kLXRtcGZpbGVzLWNsZWFuLnRpbWVyL3N0YXJ0IGZpbmlzaGVkLCByZXN1
bHQ9ZG9uZQpbICAgMTcuMDA0MDU1XSBzeXN0ZW1kWzFdOiBTdGFydGVkIERhaWx5IENsZWFu
dXAgb2YgVGVtcG9yYXJ5IERpcmVjdG9yaWVzLgpbICAgMTcuMDEwNzY0XSBzeXN0ZW1kWzFd
OiBTdGFydGluZyBUaW1lcnMuClsgICAxNy4wMTQ2NTVdIHN5c3RlbWRbMV06IHRpbWVycy50
YXJnZXQgY2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgMTcuMDIwNTE5XSBzeXN0ZW1kWzFd
OiBKb2IgdGltZXJzLnRhcmdldC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE3
LjAzMTYxNV0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgVGltZXJzLgpbICAgMTcuMDQw
NDQ1XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBCYXNpYyBTeXN0ZW0uClsgICAxNy4wNDQ5MDVd
IHN5c3RlbWRbMV06IGJhc2ljLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAx
Ny4wNTA2NTBdIHN5c3RlbWRbMV06IEpvYiBiYXNpYy50YXJnZXQvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICAxNy4wNjE5MTFdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0
IEJhc2ljIFN5c3RlbS4KWyAgIDE3LjA5NTA1M10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0
aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkucHViIGZhaWxlZCBmb3Igc3No
ZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjEwNTAwM10gc3lzdGVtZFsxXTogQ29uZGl0aW9u
UGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hk
Z2Vua2V5cy5zZXJ2aWNlLgpbICAgMTcuMTE0NDY5XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25Q
YXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleS5wdWIgZmFpbGVkIGZvciBz
c2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMTcuMTI0Mzc0XSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xMzQ4MjBdIHN5c3RlbWRbMV06IENvbmRpdGlv
blBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9lY2RzYV9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjE0NTI1N10gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tleSBmYWlsZWQg
Zm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xNTU2NDhdIHN5c3RlbWRbMV06IENv
bmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjE2NTI5Nl0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xNzQ0MThdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IG9mIHNzaGRnZW5rZXlzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVk
LiBJZ25vcmluZy4KWyAgIDE3LjE4MzUwOF0gc3lzdGVtZFsxXTogSm9iIHNzaGRnZW5rZXlz
LnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNy4xOTA1ODJdIHN5
c3RlbWRbMV06IFN0YXJ0ZWQgU1NIIEtleSBHZW5lcmF0aW9uLgpbICAgMTcuMTk2Mjk3XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBPcGVuU1NIIERhZW1vbi4uLgpbICAgMTcuMjA1MzM1XSBz
eXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9zc2hkIC1EClsgICAxNy4y
MTI2MTRdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jpbi9zc2hkIGFzIDg1ClsgICAxNy4y
MjE3MzldIHN5c3RlbWRbODVdOiBFeGVjdXRpbmc6IC91c3IvYmluL3NzaGQgLUQKWyAgIDE3
LjIyNzk5NV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBydW5u
aW5nClsgICAxNy4yMzQ5NDVdIHN5c3RlbWRbMV06IEpvYiBzc2hkLnNlcnZpY2Uvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNy4yNDYyMzJdIHN5c3RlbWRbMV06IFN0YXJ0
ZWQgT3BlblNTSCBEYWVtb24uClsgICAxNy4yNTE5MDBdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IEVudHJvcHkgSGFydmVzdGluZyBEYWVtb24uLi4KWyAgIDE3LjI2NDY1MV0gc3lzdGVtZFsx
XTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9iaW4vaGF2ZWdlZCAtdyAxMDI0IC12IDEKWyAg
IDE3LjI3MjMyNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL2hhdmVnZWQgYXMgODYK
WyAgIDE3LjI4MjkxMl0gc3lzdGVtZFs4Nl06IEV4ZWN1dGluZzogL3Vzci9iaW4vaGF2ZWdl
ZCAtdyAxMDI0IC12IDEKWyAgIDE3LjI4OTU3NF0gc3lzdGVtZFsxXTogaGF2ZWdlZC5zZXJ2
aWNlIGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTcuMjk1NDI1XSBzeXN0ZW1kWzFdOiBT
dGFydGluZyBBdXRvbWF0aWMgd2lyZWQgbmV0d29yayBjb25uZWN0aW9uIHVzaW5nIG5ldGN0
bCBwcm9maWxlcy4uLgpbICAgMTcuMzEzODQyXSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVj
dXRlOiAvdXNyL2Jpbi9pZnBsdWdkIC1pIGV0aDAgLXIgL2V0Yy9pZnBsdWdkL25ldGN0bC5h
Y3Rpb24gLWJmSW5zClsgICAxNy4zMjQ0MjZdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jp
bi9pZnBsdWdkIGFzIDg3ClsgICAxNy4zMzM1MzhdIHN5c3RlbWRbODddOiBFeGVjdXRpbmc6
IC91c3IvYmluL2lmcGx1Z2QgLWkgZXRoMCAtciAvZXRjL2lmcGx1Z2QvbmV0Y3RsLmFjdGlv
biAtYmZJbnMKWyAgIDE3LjM0NDc0MF0gc3lzdGVtZFsxXTogbmV0Y3RsLWlmcGx1Z2RAZXRo
MC5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBydW5uaW5nClsgICAxNy4zNTE5NjddIHN5c3Rl
bWRbMV06IEpvYiBuZXRjdGwtaWZwbHVnZEBldGgwLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICAxNy4zNjkwNDddIHN5c3RlbWRbMV06IFN0YXJ0ZWQgQXV0b21h
dGljIHdpcmVkIG5ldHdvcmsgY29ubmVjdGlvbiB1c2luZyBuZXRjdGwgcHJvZmlsZXMuClsg
ICAxNy4zNzc1MDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIExvZ2luIFNlcnZpY2UuLi4KWyAg
IDE3LjM4OTM2Ml0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lz
dGVtZC9zeXN0ZW1kLWxvZ2luZApbICAgMTcuMzk3NzkxXSBzeXN0ZW1kWzFdOiBGb3JrZWQg
L3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLWxvZ2luZCBhcyA4OApbICAgMTcuNDEzMDgzXSBz
eXN0ZW1kWzg4XTogRXhlY3V0aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtbG9naW5k
ClsgICAxNy40MzA2MzNdIHN5c3RlbWRbMV06IHN5c3RlbWQtbG9naW5kLnNlcnZpY2UgY2hh
bmdlZCBkZWFkIC0+IHN0YXJ0ClsgICAxNy40NDcwNzZdIGV0aDA6IGRldmljZSBNQUMgYWRk
cmVzcyA4YTo3MDozYzo0MTplNzozZgpbICAgMTcuNDYzMzk0XSAgTm8gTUFDIE1hbmFnZW1l
bnQgQ291bnRlcnMgYXZhaWxhYmxlClsgICAxNy40NjgwMjddIHN0bW1hY19vcGVuOiBmYWls
ZWQgUFRQIGluaXRpYWxpc2F0aW9uClsgICAxNy40NzMzNzVdIHN5c3RlbWRbMV06IFN0YXJ0
aW5nIEQtQnVzIFN5c3RlbSBNZXNzYWdlIEJ1cy4uLgpbICAgMTcuNTAxMjY0XSBzeXN0ZW1k
WzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9kYnVzLWRhZW1vbiAtLXN5c3RlbSAt
LWFkZHJlc3M9c3lzdGVtZDogLS1ub2ZvcmsgLS1ub3BpZGZpbGUgLS1zeXN0ZW1kLWFjdGl2
YXRpb24KWyAgIDE3LjUyOTMxNl0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL2RidXMt
ZGFlbW9uIGFzIDkxClsgICAxNy41MzU5NzNdIHN5c3RlbWRbMV06IGRidXMuc2VydmljZSBj
aGFuZ2VkIGRlYWQgLT4gcnVubmluZwpbICAgMTcuNTYxOTcyXSBzeXN0ZW1kWzkxXTogRXhl
Y3V0aW5nOiAvdXNyL2Jpbi9kYnVzLWRhZW1vbiAtLXN5c3RlbSAtLWFkZHJlc3M9c3lzdGVt
ZDogLS1ub2ZvcmsgLS1ub3BpZGZpbGUgLS1zeXN0ZW1kLWFjdGl2YXRpb24KWyAgIDE3LjU3
NjkwN10gc3lzdGVtZFsxXTogSm9iIGRidXMuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVz
dWx0PWRvbmUKWyAgIDE3LjYwOTI1OV0gc3lzdGVtZFsxXTogU3RhcnRlZCBELUJ1cyBTeXN0
ZW0gTWVzc2FnZSBCdXMuClsgICAxNy42MjUxMDJdIHN5c3RlbWRbMV06IGRidXMuc29ja2V0
IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDE3LjYzOTkxMV0gc3lzdGVtZFsx
XTogU3RhcnRpbmcgUGVybWl0IFVzZXIgU2Vzc2lvbnMuLi4KWyAgIDE3LjY2NDQ3MF0gc3lz
dGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVz
ZXItc2Vzc2lvbnMgc3RhcnQKWyAgIDE3LjY4NDcyNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11c2VyLXNlc3Npb25zIGFzIDkyClsgICAxNy43MDIz
MzNdIHN5c3RlbWRbOTJdOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11
c2VyLXNlc3Npb25zIHN0YXJ0ClsgICAxNy43MTYyOTddIHN5c3RlbWRbMV06IHN5c3RlbWQt
dXNlci1zZXNzaW9ucy5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTcuNzM1
Nzc1XSBzeXN0ZW1kWzFdOiBTZXQgdXAgam9icyBwcm9ncmVzcyB0aW1lcmZkLgpbICAgMTcu
NzU1OTg0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNy43NzUzMjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWRmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTcuNzk5MjQx
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsg
ICAxNy44MTkyODJdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJpdmF0
ZSBidXMuClsgICAxNy44MjUzMjRdIHN5c3RlbWRbMV06IFJlY2VpdmVkIFNJR0NITEQgZnJv
bSBQSUQgODYgKGhhdmVnZWQpLgpbICAgMTcuODQ4ODM2XSBzeXN0ZW1kWzFdOiBHb3QgU0lH
Q0hMRCBmb3IgcHJvY2VzcyA4NiAoaGF2ZWdlZCkKWyAgIDE3Ljg1NDc3OF0gc3lzdGVtZFsx
XTogQ2hpbGQgODYgZGllZCAoY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAx
Ny44Nzg5MjBdIHN5c3RlbWRbMV06IENoaWxkIDg2IGJlbG9uZ3MgdG8gaGF2ZWdlZC5zZXJ2
aWNlClsgICAxNy44ODkxODJdIHN5c3RlbWRbMV06IGhhdmVnZWQuc2VydmljZTogY29udHJv
bCBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQgc3RhdHVzPTAKWyAgIDE3Ljg5ODE0OF0g
c3lzdGVtZFsxXTogaGF2ZWdlZC5zZXJ2aWNlIGdvdCBmaW5hbCBTSUdDSExEIGZvciBzdGF0
ZSBzdGFydApbICAgMTcuOTE5Mzg2XSBzeXN0ZW1kWzFdOiBNYWluIFBJRCBsb2FkZWQ6IDkw
ClsgICAxNy45MjQxNDRdIHN5c3RlbWRbMV06IGhhdmVnZWQuc2VydmljZSBjaGFuZ2VkIHN0
YXJ0IC0+IHJ1bm5pbmcKWyAgIDE3LjkzNTExMV0gc3lzdGVtZFsxXTogSm9iIGhhdmVnZWQu
c2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE3Ljk1OTE3N10gc3lz
dGVtZFsxXTogU3RhcnRlZCBFbnRyb3B5IEhhcnZlc3RpbmcgRGFlbW9uLgpbICAgMTcuOTY1
MTkxXSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA5MiAoc3lzdGVtZC11
c2VyLXNlKQpbICAgMTcuOTg3NDU0XSBzeXN0ZW1kWzFdOiBDaGlsZCA5MiBkaWVkIChjb2Rl
PWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE4LjAwMzU3M10gc3lzdGVtZFsxXTog
Q2hpbGQgOTIgYmVsb25ncyB0byBzeXN0ZW1kLXVzZXItc2Vzc2lvbnMuc2VydmljZQpbICAg
MTguMDIzNDg1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVzZXItc2Vzc2lvbnMuc2VydmljZTog
bWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAg
IDE4LjA0ODQyN10gc3lzdGVtZFsxXTogc3lzdGVtZC11c2VyLXNlc3Npb25zLnNlcnZpY2Ug
Y2hhbmdlZCBzdGFydCAtPiBleGl0ZWQKWyAgIDE4LjA1ODc4Nl0gc3lzdGVtZFsxXTogSm9i
IHN5c3RlbWQtdXNlci1zZXNzaW9ucy5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9
ZG9uZQpbICAgMTguMDg5Mjc3XSBzeXN0ZW1kWzFdOiBTdGFydGVkIFBlcm1pdCBVc2VyIFNl
c3Npb25zLgpbICAgMTguMDk0NzI1XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3Rz
PS9kZXYvdHR5MCBzdWNjZWVkZWQgZm9yIGdldHR5QHR0eTEuc2VydmljZS4KWyAgIDE4LjEx
ODQ0OV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgR2V0dHkgb24gdHR5MS4uLgpbICAgMTguMTMy
Njg2XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvc2Jpbi9hZ2V0dHkgLS1ub2Ns
ZWFyIHR0eTEKWyAgIDE4LjE1NDAyMF0gc3lzdGVtZFsxXTogRm9ya2VkIC9zYmluL2FnZXR0
eSBhcyA5NApbICAgMTguMTY3MjgxXSBzeXN0ZW1kWzFdOiBnZXR0eUB0dHkxLnNlcnZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDE4LjE4OTI0M10gc3lzdGVtZFsxXTogSm9i
IGdldHR5QHR0eTEuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE4
LjIwNTk0OV0gc3lzdGVtZFsxXTogU3RhcnRlZCBHZXR0eSBvbiB0dHkxLgpbICAgMTguMjE5
MTYxXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBTZXJpYWwgR2V0dHkgb24gaHZjMC4uLgpbICAg
MTguMjM5ODA4XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvc2Jpbi9hZ2V0dHkg
LS1rZWVwLWJhdWQgaHZjMCAxMTUyMDAsMzg0MDAsOTYwMApbICAgMTguMjU5MDAxXSBzeXN0
ZW1kWzFdOiBGb3JrZWQgL3NiaW4vYWdldHR5IGFzIDk1ClsgICAxOC4yNjUwMDZdIHN5c3Rl
bWRbMV06IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5p
bmcKWyAgIDE4LjI4OTg1NF0gc3lzdGVtZFsxXTogSm9iIHNlcmlhbC1nZXR0eUBodmMwLnNl
cnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxOC4zMTU0NjRdIHN5c3Rl
bWRbMV06IFN0YXJ0ZWQgU2VyaWFsIEdldHR5IG9uIGh2YzAuClsgICAxOC4zMjgxNDZdIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIExvZ2luIFByb21wdHMuClsgICAxOC4zNTk5NTFdIHN5c3Rl
bWRbMV06IGdldHR5LnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAxOC4zNzkw
MTddIHN5c3RlbWRbMV06IEpvYiBnZXR0eS50YXJnZXQvc3RhcnQgZmluaXNoZWQsIHJlc3Vs
dD1kb25lClsgICAxOC4zOTI5NTZdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IExvZ2lu
IFByb21wdHMuClsgICAxOC4zOTgwNjddIHN5c3RlbWRbMV06IFNldCB1cCBpZGxlX3BpcGUg
d2F0Y2guClsgICAxOC40MjM5MDVdIHN5c3RlbWRbMV06IGRldi10dHlkZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjQzNjgxMV0gc3lzdGVtZFsxXTogc3lzLWRl
dmljZXMtdmlydHVhbC10dHktdHR5ZGUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxOC40NTkzNTFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJp
dmF0ZSBidXMuClsgICAxOC40NzAxODVdIHN5c3RlbWRbMV06IGRldi10dHlkZC5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjQ3NjIzMF0gc3lzdGVtZFsxXTogc3lz
LWRldmljZXMtdmlydHVhbC10dHktdHR5ZGQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxOC41MDY5MzFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24g
cHJpdmF0ZSBidXMuClsgICAxOC41MjgxMzZdIHN5c3RlbWRbMV06IGRldi10dHlkYy5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjU1MjExM10gc3lzdGVtZFsxXTog
c3lzLWRldmljZXMtdmlydHVhbC10dHktdHR5ZGMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxOC41ODIzMTldIHN5c3RlbWRbMV06IGRldi10dHllMi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjU4ODM4NF0gc3lzdGVtZFsxXTogc3lzLWRl
dmljZXMtdmlydHVhbC10dHktdHR5ZTIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxOC42MzY1MTldIHN5c3RlbWRbMV06IGRldi10dHllMC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjY2NzE2MF0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMt
dmlydHVhbC10dHktdHR5ZTAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
OC42ODI4NThdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0
b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3Rl
bWQxL2FnZW50ClsgICAxOC43MTU1OThdIHN5c3RlbWRbMV06IGRldi10dHllMS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjcyOTY5MF0gc3lzdGVtZFsxXTogc3lz
LWRldmljZXMtdmlydHVhbC10dHktdHR5ZTEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxOC43MzkzNDVdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcu
ZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNr
dG9wL3N5c3RlbWQxL2FnZW50ClsgICAxOC43NjU2NzRdIHN5c3RlbWRbMV06IEdvdCBELUJ1
cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBv
biAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE4Ljc5MDYxOF0gc3lzdGVtZFsx
XTogZGV2LXR0eWU1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTguODA2
NjkyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllNS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjgyOTEwNV0gc3lzdGVtZFsxXTogR290
IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5BZ2VudC5SZWxlYXNl
ZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQKWyAgIDE4Ljg0ODQ3MF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11c2VyLXNlc3Npb25zLnNlcnZpY2U6IGNncm91cCBpcyBl
bXB0eQpbICAgMTguODY5MzI2XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3Jn
LmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVz
a3RvcC9EQnVzL0xvY2FsClsgICAxOC44OTUwMjZdIHN5c3RlbWRbMV06IGRldi10dHllNC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjkwODgzMV0gc3lzdGVtZFsx
XTogc3lzLWRldmljZXMtdmlydHVhbC10dHktdHR5ZTQuZGV2aWNlIGNoYW5nZWQgZGVhZCAt
PiBwbHVnZ2VkClsgICAxOC45NDM4NTFdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0
OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAvb3JnL2Zy
ZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE4Ljk3MTk4OF0gc3lzdGVtZFsxXTogZGV2LXR0
eWUzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTguOTc4MDMwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllMy5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE5LjAwOTMyMF0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgbmFt
ZSA6MS4xIGluIHJlcGx5IHRvIEhlbGxvClsgICAxOS4wMTc4OTFdIHN5c3RlbWRbMV06IFN1
Y2Nlc3NmdWxseSBjb25uZWN0ZWQgdG8gc3lzdGVtIEQtQnVzIGJ1cyBmOTEwNzJiYmZlMjNm
YTJmMTMzOTQ5NmUwMDAwMDAxMiBhcyA6MS4xClsgICAxOS4wNTU0NDJdIHN5c3RlbWRbMV06
IFN1Y2Nlc3NmdWxseSBpbml0aWFsaXplZCBBUEkgb24gdGhlIHN5c3RlbSBidXMKWyAgIDE5
LjA2Mjc0MV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3Rv
cC5EQnVzLk5hbWVBY3F1aXJlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpbICAgMTku
MDkwMTAzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZTYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxOS4wOTYyMTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWU2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTkuMTI5NDEz
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpbICAgMTkuMTQ4
OTY3XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRC
dXMuTmFtZUFjcXVpcmVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzClsgICAxOS4xNzkw
MDZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lz
dGVtZDEuTWFuYWdlci5TdWJzY3JpYmUoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQx
ClsgICAxOS4yMTAxMTFdIHN5c3RlbWRbMV06IFN1Y2Nlc3NmdWxseSBhY3F1aXJlZCBuYW1l
LgpbICAgMTkuMjExNzUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZTkuZGV2aWNlIGNoYW5nZWQg
ZGVhZCAtPiBwbHVnZ2VkClsgICAxOS4yMTE3OTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2Vz
LXZpcnR1YWwtdHR5LXR0eWU5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MTkuMjQ2NTAyXSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNr
dG9wLkRCdXMuTmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpb
ICAgMTkuMjk5MzE0XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWxvZ2luZC5zZXJ2aWNlJ3MgRC1C
dXMgbmFtZSBvcmcuZnJlZWRlc2t0b3AubG9naW4xIG5vdyByZWdpc3RlcmVkIGJ5IDoxLjAK
WyAgIDE5LjMzOTA5NV0gc3lzdGVtZFsxXTogc3lzdGVtZC1sb2dpbmQuc2VydmljZSBjaGFu
Z2VkIHN0YXJ0IC0+IHJ1bm5pbmcKWyAgIDE5LjM1ODg0NF0gc3lzdGVtZFsxXTogSm9iIHN5
c3RlbWQtbG9naW5kLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAx
OS40MDE0NTNdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9naW4gU2VydmljZS4KWyAgIDE5LjQy
MjE1MF0gc3lzdGVtZFsxXTogQ2xvc2VkIGpvYnMgcHJvZ3Jlc3MgdGltZXJmZC4KWyAgIDE5
LjQzOTE1NF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgTXVsdGktVXNlciBTeXN0ZW0uClsgICAx
OS40NTg4MjBdIHN5c3RlbWRbMV06IG11bHRpLXVzZXIudGFyZ2V0IGNoYW5nZWQgZGVhZCAt
PiBhY3RpdmUKWyAgIDE5LjQ3ODI3Ml0gc3lzdGVtZFsxXTogSm9iIG11bHRpLXVzZXIudGFy
Z2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTkuNTE1NDQ3XSBzeXN0ZW1k
WzFdOiBSZWFjaGVkIHRhcmdldCBNdWx0aS1Vc2VyIFN5c3RlbS4KWyAgIDE5LjUzMjU3MF0g
c3lzdGVtZFsxXTogU3RhcnRpbmcgR3JhcGhpY2FsIEludGVyZmFjZS4KWyAgIDE5LjU0NDI1
MF0gc3lzdGVtZFsxXTogZ3JhcGhpY2FsLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZl
ClsgICAxOS41NTg5MjBdIHN5c3RlbWRbMV06IEpvYiBncmFwaGljYWwudGFyZ2V0L3N0YXJ0
IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTkuNTg1MzM1XSBzeXN0ZW1kWzFdOiBSZWFj
aGVkIHRhcmdldCBHcmFwaGljYWwgSW50ZXJmYWNlLgpbICAgMTkuNTk5OTU1XSBzeXN0ZW1k
WzFdOiBDbG9zZWQgaWRsZV9waXBlIHdhdGNoLgpbICAgMTkuNjEyODQyXSBzeXN0ZW1kWzk0
XTogRXhlY3V0aW5nOiAvc2Jpbi9hZ2V0dHkgLS1ub2NsZWFyIHR0eTEKWyAgIDE5LjYyMzM2
M10gc3lzdGVtZFs5NV06IEV4ZWN1dGluZzogL3NiaW4vYWdldHR5IC0ta2VlcC1iYXVkIGh2
YzAgMTE1MjAwLDM4NDAwLDk2MDAKWyAgIDE5LjYzOTI2OV0gc3lzdGVtZFsxXTogU3RhcnR1
cCBmaW5pc2hlZCBpbiA2Ljg2MHMgKGtlcm5lbCkgKyAxMi43NjNzICh1c2Vyc3BhY2UpID0g
MTkuNjI0cy4KWyAgIDE5LjY2NTkzNF0gc3lzdGVtZFsxXTogZGV2LXR0eWU3LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTkuNjc4ODczXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllNy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDE5LjcwMDA2OF0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJ
RCA4NSAoc3NoZCkuClsgICAxOS43MTkyNjNdIHN5c3RlbWRbMV06IEdvdCBTSUdDSExEIGZv
ciBwcm9jZXNzIDg1IChzc2hkKQpbICAgMTkuNzI3MDYxXSBzeXN0ZW1kWzFdOiBDaGlsZCA4
NSBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRSkKWyAgIDE5Ljc0MzczOV0g
c3lzdGVtZFsxXTogQ2hpbGQgODUgYmVsb25ncyB0byBzc2hkLnNlcnZpY2UKWyAgIDE5Ljc1
ODkzMF0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBj
b2RlPWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRQpbICAgMTkuNzc5NTE3XSBzeXN0ZW1kWzFd
OiBzc2hkLnNlcnZpY2UgY2hhbmdlZCBydW5uaW5nIC0+IGZhaWxlZApbICAgMTkuODE5NzAw
XSBzeXN0ZW1kWzFdOiBVbml0IHNzaGQuc2VydmljZSBlbnRlcmVkIGZhaWxlZCBzdGF0ZS4K
WyAgIDE5LjgyNTcwOV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgZmFpbGVk
IC0+IGF1dG8tcmVzdGFydApbICAgMTkuODU5ODgxXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBj
onnection on private bus.
[   19.881915] systemd[1]: dev-ttyec.device changed dead -> plugged
[   19.902326] systemd[1]: sys-devices-virtual-tty-ttyec.device changed dead -> plugged
[   19.914398] systemd[1]: dev-ttyeb.device changed dead -> plugged
[   19.928896] systemd[1]: sys-devices-virtual-tty-ttyeb.device changed dead -> plugged
[   19.950595] systemd[1]: Got D-Bus request: org.freedesktop.systemd1.Agent.Released() on /org/freedesktop/systemd1/agent
[   19.999343] systemd[1]: Got D-Bus request: org.freedesktop.DBus.Local.Disconnected() on /org/freedesktop/DBus/Local
[   20.035692] systemd[1]: dev-ttyea.device changed dead -> plugged
[   20.065249] systemd[1]: sys-devices-virtual-tty-ttyea.device changed dead -> plugged
[   20.084145] systemd[1]: sshd.service holdoff time over, scheduling restart.
[   20.097125] systemd[1]: Trying to enqueue job sshd.service/restart/fail
[   20.129460] systemd[1]: Installed new job sshd.service/restart as 70
[   20.135800] systemd[1]: Installed new job sshdgenkeys.service/start as 118
[   20.162262] systemd[1]: Enqueued job sshd.service/restart as 70
[   20.186960] systemd[1]: sshd.service scheduled restart job.
[   20.192731] systemd[1]: Stopping OpenSSH Daemon...
[   20.197553] systemd[1]: sshd.service changed auto-restart -> dead
[   20.228841] systemd[1]: Job sshd.service/restart finished, result=done
[   20.235392] systemd[1]: Converting job sshd.service/restart -> sshd.service/start
[   20.264508] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key.pub failed for sshdgenkeys.service.
[   20.292399] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key failed for sshdgenkeys.service.
[   20.322898] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key.pub failed for sshdgenkeys.service.
[   20.345310] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key failed for sshdgenkeys.service.
[   20.358818] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key.pub failed for sshdgenkeys.service.
[   20.388974] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key failed for sshdgenkeys.service.
[   20.428776] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_key.pub failed for sshdgenkeys.service.
[   20.438180] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_key failed for sshdgenkeys.service.
[   20.478841] systemd[1]: Starting of sshdgenkeys.service requested but condition failed. Ignoring.
[   20.507066] systemd[1]: Job sshdgenkeys.service/start finished, result=done
[   20.523188] systemd[1]: Started SSH Key Generation.
[   20.534800] systemd[1]: Starting OpenSSH Daemon...
[   20.547139] systemd[1]: About to execute: /usr/bin/sshd -D
[   20.563700] systemd[1]: Forked /usr/bin/sshd as 115
[   20.572805] systemd[115]: Executing: /usr/bin/sshd -D
[   20.582224] systemd[1]: sshd.service changed dead -> running
[   20.596951] systemd[1]: Job sshd.service/start finished, result=done
[   20.618901] systemd[1]: Started OpenSSH Daemon.
[   20.646142] systemd[1]: dev-ttye8.device changed dead -> plugged
[   20.670355] systemd[1]: sys-devices-virtual-tty-ttye8.device changed dead -> plugged
[   20.690744] systemd[1]: dev-ttyef.device changed dead -> plugged
[   20.705023] systemd[1]: sys-devices-virtual-tty-ttyef.device changed dead -> plugged
[   20.731418] systemd[1]: Received SIGCHLD from PID 115 (sshd).
[   20.749057] systemd[1]: Got SIGCHLD for process 115 (sshd)
[   20.766894] systemd[1]: Child 115 died (code=exited, status=1/FAILURE)
[   20.775957] systemd[1]: Child 115 belongs to sshd.service
[   20.787987] systemd[1]: sshd.service: main process exited, code=exited, status=1/FAILURE
[   20.807357] systemd[1]: sshd.service changed running -> failed
[   20.816316] systemd[1]: Unit sshd.service entered failed state.
[   20.828775] systemd[1]: sshd.service changed failed -> auto-restart

[... the same restart cycle repeats three more times between 20.845946 and 22.995860: holdoff time over -> sshd.service/restart enqueued (as jobs 119, 168 and 217, with sshdgenkeys.service/start as 167, 216 and 265) -> all sshdgenkeys ConditionPathExists checks fail -> /usr/bin/sshd forked as PIDs 123, 133 and 139 in turn -> each child dies with code=exited, status=1/FAILURE and sshd.service goes failed -> auto-restart; interleaved with further "Accepted connection on private bus", D-Bus Agent.Released()/Local.Disconnected() requests and dev-tty*.device / sys-devices-virtual-tty-tty*.device "changed dead -> plugged" messages ...]

[   23.006763] systemd[1]: sshd.service holdoff time over, scheduling restart.
[   23.022012] systemd[1]: Trying to enqueue job sshd.service/restart/fail
[   23.029938] systemd[1]: Installed new job sshd.service/restart as 266
[   23.036471] systemd[1]: Installed new job sshdgenkeys.service/start as 314
[   23.043621] systemd[1]: Enqueued job sshd.service/restart as 266
[   23.049775] systemd[1]: sshd.service scheduled restart job.
[   23.055601] systemd[1]: Stopping OpenSSH Daemon...
[   23.060524] systemd[1]: sshd.service changed auto-restart -> dead
[   23.066795] systemd[1]: Job sshd.service/restart finished, result=done
[   23.073417] systemd[1]: Converting job sshd.service/restart -> sshd.service/start
[   23.081200] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key.pub failed for sshdgenkeys.service.
[   23.091031] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key failed for sshdgenkeys.service.
[   23.100477] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key.pub failed for sshdgenkeys.service.
[   23.110228] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key failed for sshdgenkeys.service.
[   23.119824] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key.pub failed for sshdgenkeys.service.
[   23.129844] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key failed for sshdgenkeys.service.
[   23.139459] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_key.pub failed for sshdgenkeys.service.
[   23.149079] systemd[1]: ConditionPathExists=|!/etc/ssh/ssh_host_key failed for sshdgenkeys.service.
[   23.158118] systemd[1]: Starting of sshdgenkeys.service requested but condition failed. Ignoring.
[   23.167055] systemd[1]: Job sshdgenkeys.service/start finished, result=done
[   23.174236] systemd[1]: Started SSH Key Generation.
[   23.181310] systemd[1]: Starting OpenSSH Daemon...
[   23.186430] systemd[1]: sshd.service start request repeated too quickly, refusing to start.
[   23.194890] systemd[1]: sshd.service changed dead -> failed
[   23.200570] systemd[1]: Job sshd.service/start finished, result=failed
[   23.207088] systemd[1]: Failed to start OpenSSH Daemon.
[   23.214097] systemd[1]: Unit sshd.service entered failed state.

[... remaining dev-tty*.device / sys-devices-virtual-tty-tty*.device "changed dead -> plugged" messages (ttyqd through ttyt4, 23.226739 onward) trimmed ...]
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjkzNTc3Ml0gc3lzdGVtZFsxXTogZGV2LXR0
eXQ3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTQyMDIwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Ny5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjk1MjAwOV0gc3lzdGVtZFsxXTogZGV2LXR0eXQ1LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTU4MDI3XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjk2ODI2N10gc3lzdGVtZFsxXTogZGV2LXR0eXQ2LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTc0Mzg0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjk4NDQ2MF0gc3lzdGVtZFsxXTogZGV2LXR0eXRhLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTkwNTkxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl0YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjAwMTIwNF0gc3lzdGVtZFsxXTogZGV2LXR0eXQ5LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMDA3MjM2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl0OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjAx
Nzk1OV0gc3lzdGVtZFsxXTogZGV2LXR0eXRjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMDI0MTAxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl0Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjAzNDI5Nl0g
c3lzdGVtZFsxXTogZGV2LXR0eXRiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMDQwODYyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0
Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA1MTU4NV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXQ4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MDU3NjE2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0OC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA2ODI3Nl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXRmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMDc0NDE4
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Zi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA4NDYwNl0gc3lzdGVtZFsxXTogZGV2LXR0
eXRkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMDkwODY1XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0ZC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjEwMTMyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXRlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTA3NTA0XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjEyMDE5NV0gc3lzdGVtZFsxXTogZGV2LXR0eXUwLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTMyMzg5XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjE0NTkxNF0gc3lzdGVtZFsxXTogZGV2LXR0eXUyLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTUyMDc1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl1Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjE2MzExMl0gc3lzdGVtZFsxXTogZGV2LXR0eXUxLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMTY5MzY0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl1MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjE3
OTgzMF0gc3lzdGVtZFsxXTogZGV2LXR0eXU1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMTg2MTc4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl1NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjE5NjQxMV0g
c3lzdGVtZFsxXTogZGV2LXR0eXUzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMjAyNjkzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1
My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjIxMzE1Ml0gc3lzdGVt
ZFsxXTogZGV2LXR0eXU0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MjE5NDYxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1NC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjIzMDE2OF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXU2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjM2NDMz
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Ni5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjI0NjgwNF0gc3lzdGVtZFsxXTogZGV2LXR0
eXU3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjUzMzExXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Ny5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjI2MzUwMl0gc3lzdGVtZFsxXTogZGV2LXR0eXU4LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjcwMDExXSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjI4MDgxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXVjLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjg2OTMzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjI5NzUwOV0gc3lzdGVtZFsxXTogZGV2LXR0eXViLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMzA0MDEyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl1Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjMxNDc3OV0gc3lzdGVtZFsxXTogZGV2LXR0eXVhLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMzIxMTI3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl1YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjMz
MTY3NV0gc3lzdGVtZFsxXTogZGV2LXR0eXU5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMzM3OTM4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl1OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM1MzMyNV0g
c3lzdGVtZFsxXTogZGV2LXR0eXVkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMzY1OTczXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1
ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM3NzM2OV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXYwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MzgzNzQyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2MC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM5NDA0OV0gc3lzdGVtZFsxXTog
ZGV2LXR0eXVlLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDAwNjA1
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1ZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQxMTA1MV0gc3lzdGVtZFsxXTogZGV2LXR0
eXYyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDE3MjA1XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Mi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQyNzc5Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXYxLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDM0MTI2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjQ0NDUwNF0gc3lzdGVtZFsxXTogZGV2LXR0eXVmLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDUxMTEyXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjQ2MTg3N10gc3lzdGVtZFsxXTogZGV2LXR0eXY0LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDY3OTY4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl2NC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjQ3ODY3MF0gc3lzdGVtZFsxXTogZGV2LXR0eXYzLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuNDg0OTU5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl2My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQ5
NTgxMF0gc3lzdGVtZFsxXTogZGV2LXR0eXY1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuNTAyMDY4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl2NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjUxMjcxMl0g
c3lzdGVtZFsxXTogZGV2LXR0eXY4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuNTE5MjUzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2
OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjUzMDAxNF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXY2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
NTM2MzQ1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Ni5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU0NjkzNl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXY3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTUzMzY2
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Ny5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU2MzkwNV0gc3lzdGVtZFsxXTogZGV2LXR0
eXZiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTcwNDkwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Yi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU4NTA4Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXZhLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTk3NjkzXSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjYwOTE1MV0gc3lzdGVtZFsxXTogZGV2LXR0eXY5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNjE1Mjk0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjYyNjMyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXZlLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNjMyNjU1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl2ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjY0Mzc4Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXZjLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuNjUwNzk4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl2Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY2
MjEyMl0gc3lzdGVtZFsxXTogZGV2LXR0eXZkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuNjY4OTk2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl2ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY3OTQzN10g
c3lzdGVtZFsxXTogZGV2LXR0eXcxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuNjg1NjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY5Njc5M10gc3lzdGVt
ZFsxXTogZGV2LXR0eXcwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
NzAzNzAwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3MC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjcxNDA2M10gc3lzdGVtZFsxXTog
ZGV2LXR0eXZmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzIwOTQ5
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Zi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjczMjIwNV0gc3lzdGVtZFsxXTogZGV2LXR0
eXc0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzM4MzU5XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3NC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljc0OTc4OF0gc3lzdGVtZFsxXTogZGV2LXR0eXczLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzU2NDY5XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0Ljc2Nzk0MV0gc3lzdGVtZFsxXTogZGV2LXR0eXcyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzc0ODk0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0Ljc4NTE5OV0gc3lzdGVtZFsxXTogZGV2LXR0eXc1LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzkyMTg1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl3NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjgwMzI0N10gc3lzdGVtZFsxXTogZGV2LXR0eXc4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuODExMTcyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl3OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljgz
MTA2Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXc3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuODM3MjM3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl3Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg0ODQwN10g
c3lzdGVtZFsxXTogZGV2LXR0eXc2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuODU1Mjk2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg2NjQxNF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXdiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
ODczMjUwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3Yi5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg4MzY0N10gc3lzdGVtZFsxXTog
ZGV2LXR0eXdhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuODkwNTk5
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3YS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjkwMDk1NF0gc3lzdGVtZFsxXTogZGV2LXR0
eXc5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTA3NzYwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3OS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjkxODk4NV0gc3lzdGVtZFsxXTogZGV2LXR0eXdlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTI1NzY4XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjkzNzA4OV0gc3lzdGVtZFsxXTogZGV2LXR0eXdkLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTQzODcxXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0Ljk1NTYwMV0gc3lzdGVtZFsxXTogZGV2LXR0eXgxLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTYyNDcyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl4MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0Ljk3MjgyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXdjLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuOTc5MTE4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl3Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljk5
MDIxM10gc3lzdGVtZFsxXTogZGV2LXR0eXgwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuOTk2ODU0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl4MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjAwODI5OV0g
c3lzdGVtZFsxXTogZGV2LXR0eXdmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMDE1MjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjAyNTM3Ml0gc3lzdGVt
ZFsxXTogZGV2LXR0eXg1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MDM0MTQ5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4NS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA1NDMzM10gc3lzdGVtZFsxXTog
ZGV2LXR0eXg0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDYxMTg2
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4NC5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA3MjM4N10gc3lzdGVtZFsxXTogZGV2LXR0
eXgyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDc5MjIxXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Mi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA4OTczNF0gc3lzdGVtZFsxXTogZGV2LXR0eXgzLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDk2NDk2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjEwNzMwMF0gc3lzdGVtZFsxXTogZGV2LXR0eXg5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMTEzNTYwXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjEyNDc0OV0gc3lzdGVtZFsxXTogZGV2LXR0eXg4LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMTMxNTQ2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl4OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjE0MjUzN10gc3lzdGVtZFsxXTogZGV2LXR0eXg2LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuMTQ5MzU4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl4Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE1
OTcxNF0gc3lzdGVtZFsxXTogZGV2LXR0eXg3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuMTY1ODc1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl4Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE3NjgxOF0g
c3lzdGVtZFsxXTogZGV2LXR0eXhjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMTgzNTcwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4
Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE5Mzk0MF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXhhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MjAwMjIzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4YS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjIxMTI5Ml0gc3lzdGVtZFsxXTog
ZGV2LXR0eXhiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjE4MDA0
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Yi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjIyOTIzMF0gc3lzdGVtZFsxXTogZGV2LXR0
eXhmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjM1MzU5XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Zi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjI0NjI3NF0gc3lzdGVtZFsxXTogZGV2LXR0eXhkLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjU1MDY4XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjI3MzMxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXhlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjgwMDkzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjI5MDQ4NF0gc3lzdGVtZFsxXTogZGV2LXR0eXkwLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjk3MjYwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl5MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjMwNzU3OV0gc3lzdGVtZFsxXTogZGV2LXR0eXkzLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuMzEzOTU5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl5My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjMy
NDc3N10gc3lzdGVtZFsxXTogZGV2LXR0eXkxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuMzMxMDU3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl5MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM0MTM2OF0g
c3lzdGVtZFsxXTogZGV2LXR0eXkyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMzQ3OTI4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5
Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM2MDQ3MV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXk2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MzY2NTAzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Ni5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM3NzYyMF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXk1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMzgzODcx
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5NS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM5NDgwMV0gc3lzdGVtZFsxXTogZGV2LXR0
eXk4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDA4ODUwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5OC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjQzMDkwMV0gc3lzdGVtZFsxXTogZGV2LXR0eXk0LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDM2OTQ2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5NC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjQ2MTI0OV0gc3lzdGVtZFsxXTogZGV2LXR0eXk3LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDY3MjUzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjUwMTIxMl0gc3lzdGVtZFsxXTogZGV2LXR0eXliLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNTA3MjExXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl5Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjUzMTMxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXk5LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuNTM3MzIxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl5OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjU3
MTE4NV0gc3lzdGVtZFsxXTogZGV2LXR0eXlhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuNTc3MjA5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl5YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjYwMTIwMF0g
c3lzdGVtZFsxXTogZGV2LXR0eXljLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuNjA3MjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5
Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjYzMTI1OV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXlkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
NjM3MjU4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5ZC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjY2MTQwMF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXllLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNjY3Mzk3
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5ZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjcwMDg0MF0gc3lzdGVtZFsxXTogZGV2LXR0
eXoxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNzA2ODUwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6MS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljc0MDQ5N10gZXRoMDogZGV2aWNlIE1BQyBhZGRyZXNz
IDhhOjcwOjNjOjQxOmU3OjNmClsgICAyNS43NTY1NzBdICBObyBNQUMgTWFuYWdlbWVudCBD
b3VudGVycyBhdmFpbGFibGUKWyAgIDI1Ljc2MTIzNF0gc3RtbWFjX29wZW46IGZhaWxlZCBQ
VFAgaW5pdGlhbGlzYXRpb24KWyAgIDI1Ljc2NjE0Nl0gSVB2NjogQUREUkNPTkYoTkVUREVW
X1VQKTogZXRoMDogbGluayBpcyBub3QgcmVhZHkKWyAgIDI1Ljc4MTMxNF0gc3lzdGVtZFsx
XTogZGV2LXR0eXlmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNzg3
MzE1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Zi5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjgxMTE0OV0gc3lzdGVtZFsxXTogZGV2
LXR0eXowLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODE3MTY1XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6MC5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjgzMzcwN10gc3lzdGVtZFsxXTogZGV2LXR0eXo0
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODM5OTU3XSBzeXN0ZW1k
WzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6NC5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDI1Ljg1MDA0M10gc3lzdGVtZFsxXTogZGV2LXR0eXoyLmRldmlj
ZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODU2MDMxXSBzeXN0ZW1kWzFdOiBz
eXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDI1Ljg2NjE1MF0gc3lzdGVtZFsxXTogZGV2LXR0eXozLmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODcyMjczXSBzeXN0ZW1kWzFdOiBzeXMtZGV2
aWNlcy12aXJ0dWFsLXR0eS10dHl6My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDI1Ljg4MjM0Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXo4LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjUuODg4MzI4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHl6OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1
Ljg5ODQ4MV0gc3lzdGVtZFsxXTogZGV2LXR0eXo3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjUuOTA0NjI2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHl6Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjkxNDY4
Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXo2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dl
ZApbICAgMjUuOTIwODI0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10
dHl6Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjkzMDgwMl0gc3lz
dGVtZFsxXTogZGV2LXR0eXo1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MjUuOTM2NzkyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6NS5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk0NzAyOF0gc3lzdGVtZFsx
XTogZGV2LXR0eXpiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTUz
MjQyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Yi5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk2MzI0OF0gc3lzdGVtZFsxXTogZGV2
LXR0eXphLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTY5Mzg0XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6YS5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk3OTQ4N10gc3lzdGVtZFsxXTogZGV2LXR0eXo5
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTg1NDc3XSBzeXN0ZW1k
WzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6OS5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDI1Ljk5NTY0M10gc3lzdGVtZFsxXTogZGV2LXR0eXpmLmRldmlj
ZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjYuMDAxODQ5XSBzeXN0ZW1kWzFdOiBz
eXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDI2LjAxMTgwNl0gc3lzdGVtZFsxXTogZGV2LXR0eXplLmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjYuMDE3ODIwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2
aWNlcy12aXJ0dWFsLXR0eS10dHl6ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDI2LjAyNzk5OV0gc3lzdGVtZFsxXTogZGV2LXR0eXpkLmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjYuMDM0MTI0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHl6ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI2
LjA0NDIyM10gc3lzdGVtZFsxXTogZGV2LXR0eXpjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjYuMDUwNDEwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHl6Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDUxLjM1NDY5
NF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5EQnVz
Lk5hbWVPd25lckNoYW5nZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMKWyAgIDUxLjM3
MTI4Ml0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5z
eXN0ZW1kMS5NYW5hZ2VyLlN0YXJ0VW5pdCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVt
ZDEKWyAgIDUxLjM4NDA1MV0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHVz
ZXItMC5zbGljZS9zdGFydC9mYWlsClsgICA1MS4zOTE0MDFdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIHVzZXItMC5zbGljZS9zdGFydCBhcyAzMTUKWyAgIDUxLjM5ODAxN10g
c3lzdGVtZFsxXTogRW5xdWV1ZWQgam9iIHVzZXItMC5zbGljZS9zdGFydCBhcyAzMTUKWyAg
IDUxLjQwNjUyNF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgdXNlci0wLnNsaWNlLgpbICAgNTEu
NDEyNzIyXSBzeXN0ZW1kWzFdOiB1c2VyLTAuc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFjdGl2
ZQpbICAgNTEuNDE4MzY1XSBzeXN0ZW1kWzFdOiBKb2IgdXNlci0wLnNsaWNlL3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgNTEuNDI1NzQ0XSBzeXN0ZW1kWzFdOiBDcmVhdGVk
IHNsaWNlIHVzZXItMC5zbGljZS4KWyAgIDUxLjQzMzU0Nl0gc3lzdGVtZFsxXTogR290IEQt
QnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5NYW5hZ2VyLlN0YXJ0VW5p
dCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEKWyAgIDUxLjQ2OTk2NV0gc3lzdGVt
ZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHVzZXJAMC5zZXJ2aWNlL3N0YXJ0L2ZhaWwK
WyAgIDUxLjQ3NzMxM10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgdXNlckAwLnNl
cnZpY2Uvc3RhcnQgYXMgMzE5ClsgICA1MS40ODU4NzVdIHN5c3RlbWRbMV06IEVucXVldWVk
IGpvYiB1c2VyQDAuc2VydmljZS9zdGFydCBhcyAzMTkKWyAgIDUxLjQ5NTgyMl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgVXNlciBNYW5hZ2VyIGZvciAwLi4uClsgICA1MS41MDI1MjNdIHN5
c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZCAt
LXVzZXIKWyAgIDUxLjUxMTEzNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGliL3N5c3Rl
bWQvc3lzdGVtZCBhcyAxNzAKWyAgIDUxLjUyNDY5OV0gc3lzdGVtZFsxXTogdXNlckAwLnNl
cnZpY2UgY2hhbmdlZCBkZWFkIC0+IHN0YXJ0ClsgICA1MS41MzE3NjJdIHN5c3RlbWRbMV06
IFNldCB1cCBqb2JzIHByb2dyZXNzIHRpbWVyZmQuClsgICA1MS41MzY4NTRdIHN5c3RlbWRb
MV06IFNldCB1cCBpZGxlX3BpcGUgd2F0Y2guClsgICA1MS41NDQ0MDFdIHN5c3RlbWRbMV06
IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuTWFuYWdlci5T
dGFydFRyYW5zaWVudFVuaXQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQxClsgICA1
MS41Njc4ODFdIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9y
IHNlc3Npb24tYzEuc2NvcGU6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgIDUxLjU4
MjI4Nl0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHNlc3Npb24tYzEuc2Nv
cGUvc3RhcnQvZmFpbApbICAgNTEuNTkwNDM1XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3
IGpvYiBzZXNzaW9uLWMxLnNjb3BlL3N0YXJ0IGFzIDM2OQpbICAgNTEuNTk3MTE3XSBzeXN0
ZW1kWzFdOiBFbnF1ZXVlZCBqb2Igc2Vzc2lvbi1jMS5zY29wZS9zdGFydCBhcyAzNjkKWyAg
IDUxLjYxMDk5MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgU2Vzc2lvbiBjMSBvZiB1c2VyIHJv
b3QuClsgICA1MS42MzI4OTBdIHN5c3RlbWRbMV06IHNlc3Npb24tYzEuc2NvcGUgY2hhbmdl
ZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDUxLjY0NzQzNl0gc3lzdGVtZFsxNzBdOiBFeGVjdXRp
bmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZCAtLXVzZXIKWyAgIDUxLjY1NTM2Ml0gc3lz
dGVtZFsxXTogSm9iIHNlc3Npb24tYzEuc2NvcGUvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1k
b25lClsgICA1MS42Njg5MzddIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU2Vzc2lvbiBjMSBvZiB1
c2VyIHJvb3QuClsgICA1MS42ODc2NzZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0
OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5OYW1lT3duZXJDaGFuZ2VkKCkgb24gL29yZy9mcmVl
ZGVza3RvcC9EQnVzClsgICA1MS43MDI3MDFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5l
Y3Rpb24gb24gcHJpdmF0ZSBidXMuClsgICA1MS43MzAzNDldIHN5c3RlbWRbMV06IEdvdCBE
LUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQo
KSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQxL2FnZW50ClsgICA1MS43NTU0ODhdIHN5
c3RlbWRbMV06IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2U6IGNncm91cCBpcyBlbXB0eQpb
ICAgNTEuNzY4MDUzXSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVk
ZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9E
QnVzL0xvY2FsClsgICA1Mi4wNjIzNDhdIHN5c3RlbWRbMV06IEdvdCBub3RpZmljYXRpb24g
bWVzc2FnZSBmb3IgdW5pdCB1c2VyQDAuc2VydmljZQpbICAgNTIuMDgwNDI4XSBzeXN0ZW1k
WzFdOiB1c2VyQDAuc2VydmljZTogR290IG1lc3NhZ2UKWyAgIDUyLjA4NTMzM10gc3lzdGVt
ZFsxXTogdXNlckAwLnNlcnZpY2U6IGdvdCBSRUFEWT0xClsgICA1Mi4xMDcxNDddIHN5c3Rl
bWRbMV06IHVzZXJAMC5zZXJ2aWNlIGNoYW5nZWQgc3RhcnQgLT4gcnVubmluZwpbICAgNTIu
MTIwMzMzXSBzeXN0ZW1kWzFdOiBKb2IgdXNlckAwLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICA1Mi4xMzUzMTVdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgVXNlciBN
YW5hZ2VyIGZvciAwLgpbICAgNTIuMTQ4NjgzXSBzeXN0ZW1kWzFdOiBDbG9zZWQgam9icyBw
cm9ncmVzcyB0aW1lcmZkLgpbICAgNTIuMTUzODMxXSBzeXN0ZW1kWzFdOiBDbG9zZWQgaWRs
ZV9waXBlIHdhdGNoLgpbICAgNTIuMTY4ODU4XSBzeXN0ZW1kWzFdOiB1c2VyQDAuc2Vydmlj
ZTogZ290IFNUQVRVUz1TdGFydHVwIGZpbmlzaGVkIGluIDM4NG1zLgpbICAgNTIuMTgwNTEw
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwo=
--------------050705070304060703090801
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050705070304060703090801--


From xen-devel-bounces@lists.xen.org Fri Jan 24 10:20:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dsI-0004Bs-1j; Fri, 24 Jan 2014 10:20:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avanzini.arianna@gmail.com>) id 1W6dYn-0003Tr-G2
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:00:28 +0000
Received: from [193.109.254.147:41801] by server-4.bemta-14.messagelabs.com id
	03/A0-03916-8B932E25; Fri, 24 Jan 2014 10:00:24 +0000
X-Env-Sender: avanzini.arianna@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390557618!12840895!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 812 invoked from network); 24 Jan 2014 10:00:18 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:00:18 -0000
Received: by mail-ee0-f45.google.com with SMTP id b15so856121eek.32
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 02:00:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:reply-to:user-agent:mime-version:to:cc:subject
	:content-type; bh=fVkYXSgbY2Axy/QZz0L0zy7TVSVvgpj0sSVJD6S3DwE=;
	b=xCfMKAhWdpGx+8dzMpWJMDRhuIutw9fFIj2jdVbE3LbB81iko5Ahk3fqTEyX51b7lk
	uBI2Dgku0SRmQm/uPNvRIjwbgc1ZcjbpitvP3FYnST6Z3DcKXwDt/cGF+ZyE3aExQWV0
	ZHYG5WYXOHeMOOidbUZhSa2VYls5O5xGOMAoFSyCCSKUPJBsZFVnMsBDoigT4PAIX23E
	vnWDoZQIge/eEMovm/JjERgEBVoxcwG6LKINPN6N1lLeHvxY7Et/jyCrcyiJp0SM8BZP
	DcYGWORff8tjux4x/tNk5y+Sqv6+kajur1fFOPFP2s0hZ6sqkI3HPKKG0d4zFC8wX+vb
	K+Hg==
X-Received: by 10.14.5.67 with SMTP id 43mr1630375eek.93.1390557618473;
	Fri, 24 Jan 2014 02:00:18 -0800 (PST)
Received: from [192.168.0.2]
	(host252-40-dynamic.56-79-r.retail.telecomitalia.it. [79.56.40.252])
	by mx.google.com with ESMTPSA id 4sm1563455eed.14.2014.01.24.02.00.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 02:00:16 -0800 (PST)
Message-ID: <52E239AB.8040906@gmail.com>
Date: Fri, 24 Jan 2014 11:00:11 +0100
From: Arianna Avanzini <avanzini.arianna@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed; boundary="------------050705070304060703090801"
X-Mailman-Approved-At: Fri, 24 Jan 2014 10:20:33 +0000
Cc: Paolo Gai <pj@evidence.eu.com>, Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory for
 dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Arianna Avanzini <avanzini.arianna@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050705070304060703090801
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

my name's Arianna and, for my master's thesis at the University of Modena (in
Italy, with Paolo Valente [1]), I'm porting an automotive-grade real-time OS
(Evidence's Erika [2], [3]) to Xen on ARM. I'm running Xen on a Cubieboard2,
which features an Allwinner A20 chip and 1GB of RAM.

I compiled both Xen (from staging; my current head is commit 231d7f4) and Linux
(from linux-sunxi/sunxi-devel merged with torvalds/master; my head is commit
df32e43) from source, the latter as the dom0 kernel. I was also able to start a
PV domU with the same kernel, which I built with both dom0 and domU options
enabled.

I noticed that Xen 4.4 fails to boot the dom0 if more than 256MB of RAM is
assigned to it via the dom0_mem boot option. The error message produced during
boot is the following.

Xen heap: 0000000076000000-000000007e000000 (32768 pages)
Dom heap: 229376 pages
Looking for UART console serial0
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (ava@) (armv7a-hardfloat-linux-gnueabi-gcc (Gentoo
4.7.3 p1.0, pie-0.5.5) 4.7.3) debug=y Fri Jan 24 10:46:18 CET 2014
(XEN) Latest ChangeSet: Thu Jan 23 10:30:08 2014 +0100 git:231d7f4
(XEN) Console output is synchronous.
(XEN) Processor: 410fc074: "ARM Limited", variant: 0x0, part 0xc07, rev 0x4
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00001131:00011011
(XEN)     Instruction Sets: AArch32 Thumb Thumb-2 ThumbEE Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 02010555
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10101105 40000000 01240000 02102211
(XEN)  ISA Features: 02101110 13112111 21232041 11112131 10011142 00000000
(XEN) Platform: Allwinner A20
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27
(XEN) Using generic timer at 24000 KHz
(XEN) GIC initialization:
(XEN)         gic_dist_addr=0000000001c81000
(XEN)         gic_cpu_addr=0000000001c82000
(XEN)         gic_hyp_addr=0000000001c84000
(XEN)         gic_vcpu_addr=0000000001c86000
(XEN)         gic_maintenance_irq=25
(XEN) GIC: 160 lines, 2 cpus, secure (IID 0100143b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) VFP implementer 0x41 architecture 2 part 0x30 variant 0x7 rev 0x4
(XEN) Bringing up CPU1
(XEN) Failed to bring up CPU1
(XEN) Failed to bring up CPU 1 (error -19)
(XEN) Brought up 1 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Failed to allocate contiguous memory for dom0
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

After a short search of the xen-devel mailing list, I found a discussion which I
believe might be related to this issue:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg02239.html.
However, I wasn't able to extract a solution from that thread (nor even to tell
whether the behavior is a bug or intended, actually). Attached to this mail you
can find the complete logs of a successful boot (with dom0_mem=256M) and of a
failing boot (with dom0_mem=512M). I'd be glad to help investigate further,
e.g., by providing more logs or by testing patches.

Thank you,
Arianna



[1] http://www.unimore.it/
[2] http://www.evidence.eu.com/
[3] http://erika.tuxfamily.org/drupal/


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@gmail.com
 * 73628@studenti.unimore.it
 */

--------------050705070304060703090801
Content-Type: text/x-log;
 name="failing-boot-dom0_mem-512.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="failing-boot-dom0_mem-512.log"

WGVuIGhlYXA6IDAwMDAwMDAwNzYwMDAwMDAtMDAwMDAwMDA3ZTAwMDAwMCAoMzI3NjggcGFn
ZXMpCkRvbSBoZWFwOiAyMjkzNzYgcGFnZXMKTG9va2luZyBmb3IgVUFSVCBjb25zb2xlIHNl
cmlhbDAKIFhlbiA0LjQtcmMyCihYRU4pIFhlbiB2ZXJzaW9uIDQuNC1yYzIgKGF2YUApIChh
cm12N2EtaGFyZGZsb2F0LWxpbnV4LWdudWVhYmktZ2NjIChHZW50b28gNC43LjMgcDEuMCwg
cGllLTAuNS41KSA0LjcuMykgZGVidWc9eSBGcmkgSmFuIDI0IDEwOjQ2OjE4IENFVCAyMDE0
CihYRU4pIExhdGVzdCBDaGFuZ2VTZXQ6IFRodSBKYW4gMjMgMTA6MzA6MDggMjAxNCArMDEw
MCBnaXQ6MjMxZDdmNAooWEVOKSBDb25zb2xlIG91dHB1dCBpcyBzeW5jaHJvbm91cy4KKFhF
TikgUHJvY2Vzc29yOiA0MTBmYzA3NDogIkFSTSBMaW1pdGVkIiwgdmFyaWFudDogMHgwLCBw
YXJ0IDB4YzA3LCByZXYgMHg0CihYRU4pIDMyLWJpdCBFeGVjdXRpb246CihYRU4pICAgUHJv
Y2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMTowMDAxMTAxMQooWEVOKSAgICAgSW5zdHJ1Y3Rp
b24gU2V0czogQUFyY2gzMiBUaHVtYiBUaHVtYi0yIFRodW1iRUUgSmF6ZWxsZQooWEVOKSAg
ICAgRXh0ZW5zaW9uczogR2VuZXJpY1RpbWVyIFNlY3VyaXR5CihYRU4pICAgRGVidWcgRmVh
dHVyZXM6IDAyMDEwNTU1CihYRU4pICAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMAoo
WEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczogMTAxMDExMDUgNDAwMDAwMDAgMDEyNDAw
MDAgMDIxMDIyMTEKKFhFTikgIElTQSBGZWF0dXJlczogMDIxMDExMTAgMTMxMTIxMTEgMjEy
MzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAwMDAKKFhFTikgUGxhdGZvcm06IEFsbHdp
bm5lciBBMjAKKFhFTikgR2VuZXJpYyBUaW1lciBJUlE6IHBoeXM9MzAgaHlwPTI2IHZpcnQ9
MjcKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBhdCAyNDAwMCBLSHoKKFhFTikgR0lDIGlu
aXRpYWxpemF0aW9uOgooWEVOKSAgICAgICAgIGdpY19kaXN0X2FkZHI9MDAwMDAwMDAwMWM4
MTAwMAooWEVOKSAgICAgICAgIGdpY19jcHVfYWRkcj0wMDAwMDAwMDAxYzgyMDAwCihYRU4p
ICAgICAgICAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAwMDFjODQwMDAKKFhFTikgICAgICAgICBn
aWNfdmNwdV9hZGRyPTAwMDAwMDAwMDFjODYwMDAKKFhFTikgICAgICAgICBnaWNfbWFpbnRl
bmFuY2VfaXJxPTI1CihYRU4pIEdJQzogMTYwIGxpbmVzLCAyIGNwdXMsIHNlY3VyZSAoSUlE
IDAxMDAxNDNiKS4KKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxl
ciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDE2IEtpQi4KKFhF
TikgVkZQIGltcGxlbWVudGVyIDB4NDEgYXJjaGl0ZWN0dXJlIDIgcGFydCAweDMwIHZhcmlh
bnQgMHg3IHJldiAweDQKKFhFTikgQnJpbmdpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8g
YnJpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8gYnJpbmcgdXAgQ1BVIDEgKGVycm9yIC0x
OSkKKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAg
KioqCihYRU4pIAooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqCihYRU4pIFBhbmljIG9uIENQVSAwOgooWEVOKSBGYWlsZWQgdG8gYWxsb2NhdGUgY29u
dGlndW91cyBtZW1vcnkgZm9yIGRvbTAKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKgooWEVOKSAKKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4u
LgoK
--------------050705070304060703090801
Content-Type: text/x-log;
 name="successful-boot-dom0_mem-256.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="successful-boot-dom0_mem-256.log"

KFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQKIFhlbiA0LjQtcmMyCihYRU4pIFhlbiB2
ZXJzaW9uIDQuNC1yYzIgKGF2YUApIChhcm12N2EtaGFyZGZsb2F0LWxpbnV4LWdudWVhYmkt
Z2NjIChHZW50b28gNC43LjMgcDEuMCwgcGllLTAuNS41KSA0LjcuMykgZGVidWc9eSBGcmkg
SmFuIDI0IDEwOjQwOjM3IENFVCAyMDE0CihYRU4pIExhdGVzdCBDaGFuZ2VTZXQ6IFRodSBK
YW4gMjMgMTA6MzA6MDggMjAxNCArMDEwMCBnaXQ6MjMxZDdmNAooWEVOKSBDb25zb2xlIG91
dHB1dCBpcyBzeW5jaHJvbm91cy4KKFhFTikgUHJvY2Vzc29yOiA0MTBmYzA3NDogIkFSTSBM
aW1pdGVkIiwgdmFyaWFudDogMHgwLCBwYXJ0IDB4YzA3LCByZXYgMHg0CihYRU4pIDMyLWJp
dCBFeGVjdXRpb246CihYRU4pICAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMTEzMTowMDAx
MTAxMQooWEVOKSAgICAgSW5zdHJ1Y3Rpb24gU2V0czogQUFyY2gzMiBUaHVtYiBUaHVtYi0y
IFRodW1iRUUgSmF6ZWxsZQooWEVOKSAgICAgRXh0ZW5zaW9uczogR2VuZXJpY1RpbWVyIFNl
Y3VyaXR5CihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAyMDEwNTU1CihYRU4pICAgQXV4aWxp
YXJ5IEZlYXR1cmVzOiAwMDAwMDAwMAooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczog
MTAxMDExMDUgNDAwMDAwMDAgMDEyNDAwMDAgMDIxMDIyMTEKKFhFTikgIElTQSBGZWF0dXJl
czogMDIxMDExMTAgMTMxMTIxMTEgMjEyMzIwNDEgMTExMTIxMzEgMTAwMTExNDIgMDAwMDAw
MDAKKFhFTikgUGxhdGZvcm06IEFsbHdpbm5lciBBMjAKKFhFTikgR2VuZXJpYyBUaW1lciBJ
UlE6IHBoeXM9MzAgaHlwPTI2IHZpcnQ9MjcKKFhFTikgVXNpbmcgZ2VuZXJpYyB0aW1lciBh
dCAyNDAwMCBLSHoKKFhFTikgR0lDIGluaXRpYWxpemF0aW9uOgooWEVOKSAgICAgICAgIGdp
Y19kaXN0X2FkZHI9MDAwMDAwMDAwMWM4MTAwMAooWEVOKSAgICAgICAgIGdpY19jcHVfYWRk
cj0wMDAwMDAwMDAxYzgyMDAwCihYRU4pICAgICAgICAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAw
MDFjODQwMDAKKFhFTikgICAgICAgICBnaWNfdmNwdV9hZGRyPTAwMDAwMDAwMDFjODYwMDAK
KFhFTikgICAgICAgICBnaWNfbWFpbnRlbmFuY2VfaXJxPTI1CihYRU4pIEdJQzogMTYwIGxp
bmVzLCAyIGNwdXMsIHNlY3VyZSAoSUlEIDAxMDAxNDNiKS4KKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBBbGxvY2F0ZWQgY29u
c29sZSByaW5nIG9mIDE2IEtpQi4KKFhFTikgVkZQIGltcGxlbWVudGVyIDB4NDEgYXJjaGl0
ZWN0dXJlIDIgcGFydCAweDMwIHZhcmlhbnQgMHg3IHJldiAweDQKKFhFTikgQnJpbmdpbmcg
dXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8gYnJpbmcgdXAgQ1BVMQooWEVOKSBGYWlsZWQgdG8g
YnJpbmcgdXAgQ1BVIDEgKGVycm9yIC0xOSkKKFhFTikgQnJvdWdodCB1cCAxIENQVXMKKFhF
TikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pIFBvcHVsYXRlIFAyTSAweDYwMDAw
MDAwLT4weDcwMDAwMDAwICgxOjEgbWFwcGluZyBmb3IgZG9tMCkKKFhFTikgTG9hZGluZyBr
ZXJuZWwgZnJvbSBib290IG1vZHVsZSAyCihYRU4pIExvYWRpbmcgekltYWdlIGZyb20gMDAw
MDAwMDA1MDAwMDAwMCB0byAwMDAwMDAwMDY3YzAwMDAwLTAwMDAwMDAwNjdmMTMzMDAKKFhF
TikgTG9hZGluZyBkb20wIERUQiB0byAweDAwMDAwMDAwNjgwMDAwMDAtMHgwMDAwMDAwMDY4
MDAzYmQ1CihYRU4pIFNjcnViYmluZyBGcmVlIFJBTTogLi4uLi4uLi5kb25lLgooWEVOKSBJ
bml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4K
KFhFTikgU3RkLiBMb2dsZXZlbDogQWxsCihYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGwKKFhF
TikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgooWEVO
KSAqKioqKioqIFdBUk5JTkc6IENPTlNPTEUgT1VUUFVUIElTIFNZTkNIUk9OT1VTCihYRU4p
ICoqKioqKiogVGhpcyBvcHRpb24gaXMgaW50ZW5kZWQgdG8gYWlkIGRlYnVnZ2luZyBvZiBY
ZW4gYnkgZW5zdXJpbmcKKFhFTikgKioqKioqKiB0aGF0IGFsbCBvdXRwdXQgaXMgc3luY2hy
b25vdXNseSBkZWxpdmVyZWQgb24gdGhlIHNlcmlhbCBsaW5lLgooWEVOKSAqKioqKioqIEhv
d2V2ZXIgaXQgY2FuIGludHJvZHVjZSBTSUdOSUZJQ0FOVCBsYXRlbmNpZXMgYW5kIGFmZmVj
dAooWEVOKSAqKioqKioqIHRpbWVrZWVwaW5nLiBJdCBpcyBOT1QgcmVjb21tZW5kZWQgZm9y
IHByb2R1Y3Rpb24gdXNlIQooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCihYRU4pIDMuLi4gMi4uLiAxLi4uIAooWEVOKSAqKiogU2VyaWFs
IGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlu
cHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjY0a0IgaW5pdCBtZW1vcnkuClsgICAgMC4wMDAw
MDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4IChvcmRlcjogNSwg
MTMxMDcyIGJ5dGVzKQpbICAgIDAuMDAwMDAwXSBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDE2Mzg0IChvcmRlcjogNCwgNjU1MzYgYnl0ZXMpClsgICAgMC4wMDAwMDBdIGFs
bG9jYXRlZCA1MjQyODggYnl0ZXMgb2YgcGFnZV9jZ3JvdXAKWyAgICAwLjAwMDAwMF0gcGxl
YXNlIHRyeSAnY2dyb3VwX2Rpc2FibGU9bWVtb3J5JyBvcHRpb24gaWYgeW91IGRvbid0IHdh
bnQgbWVtb3J5IGNncm91cHMKWyAgICAwLjAwMDAwMF0gTWVtb3J5OiAyNTI4NDRLLzI2MjE0
NEsgYXZhaWxhYmxlICg0Njg2SyBrZXJuZWwgY29kZSwgMjI5SyByd2RhdGEsIDk0OEsgcm9k
YXRhLCAyMzVLIGluaXQsIDMyOUsgYnNzLCA5MzAwSyByZXNlcnZlZCwgMEsgaGlnaG1lbSkK
WyAgICAwLjAwMDAwMF0gVmlydHVhbCBrZXJuZWwgbWVtb3J5IGxheW91dDoKICAgIHZlY3Rv
ciAgOiAweGZmZmYwMDAwIC0gMHhmZmZmMTAwMCAgICggICA0IGtCKQogICAgZml4bWFwICA6
IDB4ZmZmMDAwMDAgLSAweGZmZmUwMDAwICAgKCA4OTYga0IpCiAgICB2bWFsbG9jIDogMHhk
MDgwMDAwMCAtIDB4ZmYwMDAwMDAgICAoIDc0NCBNQikKICAgIGxvd21lbSAgOiAweGMwMDAw
MDAwIC0gMHhkMDAwMDAwMCAgICggMjU2IE1CKQogICAgcGttYXAgICA6IDB4YmZlMDAwMDAg
LSAweGMwMDAwMDAwICAgKCAgIDIgTUIpCiAgICAgIC50ZXh0IDogMHhjMDAwODAwMCAtIDB4
YzA1ODhjMjAgICAoNTYzNiBrQikKICAgICAgLmluaXQgOiAweGMwNTg5MDAwIC0gMHhjMDVj
M2YwMCAgICggMjM2IGtCKQogICAgICAuZGF0YSA6IDB4YzA1YzQwMDAgLSAweGMwNWZkNGEw
ICAgKCAyMzAga0IpCiAgICAgICAuYnNzIDogMHhjMDVmZDRhOCAtIDB4YzA2NGZhYTQgICAo
IDMzMCBrQikKWyAgICAwLjAwMDAwMF0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBN
aW5PYmplY3RzPTAsIENQVXM9MSwgTm9kZXM9MQpbICAgIDAuMDAwMDAwXSBIaWVyYXJjaGlj
YWwgUkNVIGltcGxlbWVudGF0aW9uLgpbICAgIDAuMDAwMDAwXSAJUkNVIHJlc3RyaWN0aW5n
IENQVXMgZnJvbSBOUl9DUFVTPTQgdG8gbnJfY3B1X2lkcz0xLgpbICAgIDAuMDAwMDAwXSBS
Q1U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9sZWFmPTE2LCBucl9jcHVf
aWRzPTEKWyAgICAwLjAwMDAwMF0gTlJfSVJRUzoxNiBucl9pcnFzOjE2IDE2ClsgICAgMC4w
MDAwMDBdIEFyY2hpdGVjdGVkIGNwMTUgdGltZXIocykgcnVubmluZyBhdCAyNC4wME1IeiAo
dmlydCkuClsgICAgMy44MDk5MjRdIHNjaGVkX2Nsb2NrOiA1NiBiaXRzIGF0IDI0TUh6LCBy
ZXNvbHV0aW9uIDQxbnMsIHdyYXBzIGV2ZXJ5IDI4NjMzMTE1MTk3NDRucwpbICAgIDAuMDAw
MDAzXSBTd2l0Y2hpbmcgdG8gdGltZXItYmFzZWQgZGVsYXkgbG9vcApbICAxNzUuMTQ3MDUx
XSBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAyNE1IeiwgcmVzb2x1dGlvbiA0MW5zLCB3cmFw
cyBldmVyeSAxNzg5NTY5Njk5NDJucwpbICAgIDAuMDAwMDM4XSBhcmNoX3RpbWVyOiBtdWx0
aXBsZSBub2RlcyBpbiBkdCwgc2tpcHBpbmcKWyAgICAwLjAwMDAwN10gc2NoZWRfY2xvY2s6
IDMyIGJpdHMgYXQgMTYwTUh6LCByZXNvbHV0aW9uIDZucywgd3JhcHMgZXZlcnkgMjY4NDM1
NDU1OTNucwpbICAgIDAuMDAwMDA4XSBzY2hlZF9jbG9jazogMzIgYml0cyBhdCAxNjBNSHos
IHJlc29sdXRpb24gNm5zLCB3cmFwcyBldmVyeSAyNjg0MzU0NTU5M25zClsgICAgMC4wMDAx
NzZdIENvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MzAKWyAgICAwLjAwMDIxMV0g
Q2FsaWJyYXRpbmcgZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQgdXNp
bmcgdGltZXIgZnJlcXVlbmN5Li4gNDguMDAgQm9nb01JUFMgKGxwaj0yNDAwMDApClsgICAg
MC4wMDAyMjZdIHBpZF9tYXg6IGRlZmF1bHQ6IDMyNzY4IG1pbmltdW06IDMwMQpbICAgIDAu
MDAwMzk2XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMgpbICAgIDAuMDAz
NzQyXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1vcnkKWyAgICAwLjAwMzc4NF0g
SW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZGV2aWNlcwpbICAgIDAuMDAzNzk1XSBJbml0
aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBibGtpbwpbICAgIDAuMDAzODQ1XSBDUFU6IFRlc3Rp
bmcgd3JpdGUgYnVmZmVyIGNvaGVyZW5jeTogb2sKWyAgICAwLjAwNDI0OF0gL2NwdXMvY3B1
QDAgbWlzc2luZyBjbG9jay1mcmVxdWVuY3kgcHJvcGVydHkKWyAgICAwLjAwNDI3MV0gQ1BV
MDogdGhyZWFkIC0xLCBjcHUgMCwgc29ja2V0IDAsIG1waWRyIDgwMDAwMDAwClsgICAgMC4w
MDQzMDNdIFNldHRpbmcgdXAgc3RhdGljIGlkZW50aXR5IG1hcCBmb3IgMHg2MDQ3MTJlOCAt
IDB4NjA0NzEzNDAKWyAgICAwLjAwNTE4Ml0gQnJvdWdodCB1cCAxIENQVXMKWyAgICAwLjAw
NTE5Nl0gU01QOiBUb3RhbCBvZiAxIHByb2Nlc3NvcnMgYWN0aXZhdGVkLgpbICAgIDAuMDA1
MjAzXSBDUFU6IEFsbCBDUFUocykgc3RhcnRlZCBpbiBTVkMgbW9kZS4KWyAgICAwLjAwNTg1
N10gZGV2dG1wZnM6IGluaXRpYWxpemVkClsgICAgMC4wMDk5ODhdIFZGUCBzdXBwb3J0IHYw
LjM6IGltcGxlbWVudG9yIDQxIGFyY2hpdGVjdHVyZSAyIHBhcnQgMzAgdmFyaWFudCA3IHJl
diA0ClsgICAgMC4wMTAxMzddIFhlbiA0LjQgc3VwcG9ydCBmb3VuZCwgZXZlbnRzX2lycT0z
MSBnbnR0YWJfZnJhbWVfcGZuPTFkMDAKWyAgICAwLjAxMDIyMF0geGVuOmdyYW50X3RhYmxl
OiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dApbICAgIDAuMDEwMjc2XSBH
cmFudCB0YWJsZSBpbml0aWFsaXplZApbICAgIDAuMDEwNjAwXSBwaW5jdHJsIGNvcmU6IGlu
aXRpYWxpemVkIHBpbmN0cmwgc3Vic3lzdGVtClsgICAgMC4wMTA5NzRdIHJlZ3VsYXRvci1k
dW1teTogbm8gcGFyYW1ldGVycwpbICAgIDAuMDExMjkwXSBORVQ6IFJlZ2lzdGVyZWQgcHJv
dG9jb2wgZmFtaWx5IDE2ClsgICAgMC4wMTE3MTJdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTAK
WyAgICAwLjAxMjAzMV0gRE1BOiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBwb29sIGZvciBhdG9t
aWMgY29oZXJlbnQgYWxsb2NhdGlvbnMKWyAgICAwLjAxMzA1NV0geGVuOnN3aW90bGJfeGVu
OiBXYXJuaW5nOiBvbmx5IGFibGUgdG8gYWxsb2NhdGUgNCBNQiBmb3Igc29mdHdhcmUgSU8g
VExCClsgICAgMC4wMTUyMjhdIHNvZnR3YXJlIElPIFRMQiBbbWVtIDB4NmYwMDAwMDAtMHg2
ZjQwMDAwMF0gKDRNQikgbWFwcGVkIGF0IFtjZjAwMDAwMC1jZjNmZmZmZl0KWyAgICAwLjAy
NDQ5OV0gYmlvOiBjcmVhdGUgc2xhYiA8YmlvLTA+IGF0IDAKWyAgICAwLjAyNTE2Nl0geGVu
OmJhbGxvb246IEluaXRpYWxpc2luZyBiYWxsb29uIGRyaXZlcgpbICAgIDAuMDI1NDc1XSBy
ZWctZml4ZWQtdm9sdGFnZSBhaGNpLTV2LjU6IGNvdWxkIG5vdCBmaW5kIHBjdGxkZXYgZm9y
IG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIwODAwL2FoY2lfcHdyX3BpbkAwLCBk
ZWZlcnJpbmcgcHJvYmUKWyAgICAwLjAyNTQ5OF0gcGxhdGZvcm0gYWhjaS01di41OiBEcml2
ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMgcHJvYmUgZGVmZXJyYWwKWyAgICAwLjAy
NTUyMV0gcmVnLWZpeGVkLXZvbHRhZ2UgdXNiMS12YnVzLjY6IGNvdWxkIG5vdCBmaW5kIHBj
dGxkZXYgZm9yIG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIwODAwL3VzYjFfdmJ1
c19waW5AMCwgZGVmZXJyaW5nIHByb2JlClsgICAgMC4wMjU1MzRdIHBsYXRmb3JtIHVzYjEt
dmJ1cy42OiBEcml2ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMgcHJvYmUgZGVmZXJy
YWwKWyAgICAwLjAyNTU1M10gcmVnLWZpeGVkLXZvbHRhZ2UgdXNiMi12YnVzLjc6IGNvdWxk
IG5vdCBmaW5kIHBjdGxkZXYgZm9yIG5vZGUgL3NvY0AwMWMwMDAwMC9waW5jdHJsQDAxYzIw
ODAwL3VzYjJfdmJ1c19waW5AMCwgZGVmZXJyaW5nIHByb2JlClsgICAgMC4wMjU1NjZdIHBs
YXRmb3JtIHVzYjItdmJ1cy43OiBEcml2ZXIgcmVnLWZpeGVkLXZvbHRhZ2UgcmVxdWVzdHMg
cHJvYmUgZGVmZXJyYWwKWyAgICAwLjAyNjQ1MF0gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6
ZWQKWyAgICAwLjAyNjY2M10gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAgMC4w
MjY4NzldIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMK
WyAgICAwLjAyNjkyN10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZl
ciBodWIKWyAgICAwLjAyNzA0N10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRy
aXZlciB1c2IKWyAgICAwLjAyNzMwNV0gcHBzX2NvcmU6IExpbnV4UFBTIEFQSSB2ZXIuIDEg
cmVnaXN0ZXJlZApbICAgIDAuMDI3MzE0XSBwcHNfY29yZTogU29mdHdhcmUgdmVyLiA1LjMu
NiAtIENvcHlyaWdodCAyMDA1LTIwMDcgUm9kb2xmbyBHaW9tZXR0aSA8Z2lvbWV0dGlAbGlu
dXguaXQ+ClsgICAgMC4wMjczMzhdIFBUUCBjbG9jayBzdXBwb3J0IHJlZ2lzdGVyZWQKWyAg
ICAwLjAyNzQwN10gRURBQyBNQzogVmVyOiAzLjAuMApbICAgIDAuMDI4NTkxXSBTd2l0Y2hl
ZCB0byBjbG9ja3NvdXJjZSBhcmNoX3N5c19jb3VudGVyClsgICAgMC4wMzgxMjZdIE5FVDog
UmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMgpbICAgIDAuMDM4ODI1XSBUQ1AgZXN0YWJs
aXNoZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRlcjogMSwgODE5MiBieXRlcykK
WyAgICAwLjAzODg2Ml0gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRl
cjogMiwgMTYzODQgYnl0ZXMpClsgICAgMC4wMzg4OTldIFRDUDogSGFzaCB0YWJsZXMgY29u
ZmlndXJlZCAoZXN0YWJsaXNoZWQgMjA0OCBiaW5kIDIwNDgpClsgICAgMC4wMzg5NzBdIFRD
UDogcmVubyByZWdpc3RlcmVkClsgICAgMC4wMzg5ODZdIFVEUCBoYXNoIHRhYmxlIGVudHJp
ZXM6IDI1NiAob3JkZXI6IDEsIDgxOTIgYnl0ZXMpClsgICAgMC4wMzkwMjFdIFVEUC1MaXRl
IGhhc2ggdGFibGUgZW50cmllczogMjU2IChvcmRlcjogMSwgODE5MiBieXRlcykKWyAgICAw
LjAzOTI1Ml0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxClsgICAgMC4wMzk2
ODldIFJQQzogUmVnaXN0ZXJlZCBuYW1lZCBVTklYIHNvY2tldCB0cmFuc3BvcnQgbW9kdWxl
LgpbICAgIDAuMDM5NzAzXSBSUEM6IFJlZ2lzdGVyZWQgdWRwIHRyYW5zcG9ydCBtb2R1bGUu
ClsgICAgMC4wMzk3MDldIFJQQzogUmVnaXN0ZXJlZCB0Y3AgdHJhbnNwb3J0IG1vZHVsZS4K
WyAgICAwLjAzOTcxNV0gUlBDOiBSZWdpc3RlcmVkIHRjcCBORlN2NC4xIGJhY2tjaGFubmVs
IHRyYW5zcG9ydCBtb2R1bGUuClsgICAgMC4wNDA5NTBdIGZ1dGV4IGhhc2ggdGFibGUgZW50
cmllczogMjU2IChvcmRlcjogMiwgMTYzODQgYnl0ZXMpClsgICAgMC4wNTE1MjBdIE5GUzog
UmVnaXN0ZXJpbmcgdGhlIGlkX3Jlc29sdmVyIGtleSB0eXBlClsgICAgMC4wNTE2MjFdIEtl
eSB0eXBlIGlkX3Jlc29sdmVyIHJlZ2lzdGVyZWQKWyAgICAwLjA1MTYyOV0gS2V5IHR5cGUg
aWRfbGVnYWN5IHJlZ2lzdGVyZWQKWyAgICAwLjA1MTY0OV0gbmZzNGZpbGVsYXlvdXRfaW5p
dDogTkZTdjQgRmlsZSBMYXlvdXQgRHJpdmVyIFJlZ2lzdGVyaW5nLi4uClsgICAgMC4wNTIy
MzhdIEJsb2NrIGxheWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQg
bG9hZGVkIChtYWpvciAyNTApClsgICAgMC4wNTIyNTJdIGlvIHNjaGVkdWxlciBub29wIHJl
Z2lzdGVyZWQKWyAgICAwLjA1MjI1OF0gaW8gc2NoZWR1bGVyIGRlYWRsaW5lIHJlZ2lzdGVy
ZWQKWyAgICAwLjA1MjYxN10gaW8gc2NoZWR1bGVyIGNmcSByZWdpc3RlcmVkIChkZWZhdWx0
KQpbICAgIDAuMDU0Nzc0XSBzdW54aS1waW5jdHJsIDFjMjA4MDAucGluY3RybDogaW5pdGlh
bGl6ZWQgc3VuWGkgUElPIGRyaXZlcgpbICAgIDAuMDU1NTQyXSB4ZW46eGVuX2V2dGNobjog
RXZlbnQtY2hhbm5lbCBkZXZpY2UgaW5zdGFsbGVkClsgICAgMC44MDQ0MDldIGNvbnNvbGUg
W2h2YzBdIGVuYWJsZWQKWyAgICAwLjgwNzkzNV0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZl
ciwgOCBwb3J0cywgSVJRIHNoYXJpbmcgZGlzYWJsZWQKWyAgICAwLjgxNjcwMF0gc2VyaWFs
OiBGcmVlc2NhbGUgbHB1YXJ0IGRyaXZlcgpbICAgIDAuODIyMTYzXSBicmQ6IG1vZHVsZSBs
b2FkZWQKWyAgICAwLjgyOTIyNV0gbG9vcDogbW9kdWxlIGxvYWRlZApbICAgIDAuODMzNTE2
XSAgUmluZyBtb2RlIGVuYWJsZWQKWyAgICAwLjgzNjQ5OF0gIE5vIEhXIERNQSBmZWF0dXJl
IHJlZ2lzdGVyIHN1cHBvcnRlZApbICAgIDAuODQxMDY1XSAgTm9ybWFsIGRlc2NyaXB0b3Jz
ClsgICAgMC44NDQ0NzBdICBXYWtlLVVwIE9uIExhbiBzdXBwb3J0ZWQKWyAgICAwLjg1MTc0
OF0gbGlicGh5OiBzdG1tYWM6IHByb2JlZApbICAgIDAuODU1MDc2XSBldGgwOiBQSFkgSUQg
MDAwMDgyMDEgYXQgMSBJUlEgMCAoc3RtbWFjLTA6MDEpIGFjdGl2ZQpbICAgIDAuODYxMzY2
XSB4ZW5fbmV0ZnJvbnQ6IEluaXRpYWxpc2luZyBYZW4gdmlydHVhbCBldGhlcm5ldCBkcml2
ZXIKWyAgICAwLjg2NzU1N10gZWhjaV9oY2Q6IFVTQiAyLjAgJ0VuaGFuY2VkJyBIb3N0IENv
bnRyb2xsZXIgKEVIQ0kpIERyaXZlcgpbICAgIDAuODc0MDgzXSBlaGNpLXBsYXRmb3JtOiBF
SENJIGdlbmVyaWMgcGxhdGZvcm0gZHJpdmVyClsgICAgMC44Nzk0NTZdIHN1bnhpLWVoY2k6
IEFsbHdpbm5lciBzdW5YaSBFSENJIGRyaXZlcgpbICAgIDAuODg0NDA0XSBwbGF0Zm9ybSAx
YzE0MDAwLmVoY2kwOiBEcml2ZXIgc3VueGktZWhjaSByZXF1ZXN0cyBwcm9iZSBkZWZlcnJh
bApbICAgIDAuODkxNTU3XSBwbGF0Zm9ybSAxYzFjMDAwLmVoY2kxOiBEcml2ZXIgc3VueGkt
ZWhjaSByZXF1ZXN0cyBwcm9iZSBkZWZlcnJhbApbICAgIDAuODk4OTQ1XSB1c2Jjb3JlOiBy
ZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYi1zdG9yYWdlClsgICAgMC45MDU1
NjJdIG1vdXNlZGV2OiBQUy8yIG1vdXNlIGRldmljZSBjb21tb24gZm9yIGFsbCBtaWNlClsg
ICAgMC45MTE1MzddIGkyYyAvZGV2IGVudHJpZXMgZHJpdmVyClsgICAgMC45MTYyNjldIHN1
bnhpLXdkdCAxYzIwYzkwLndhdGNoZG9nOiBXYXRjaGRvZyBlbmFibGVkICh0aW1lb3V0PTE2
IHNlYywgbm93YXlvdXQ9MCkKWyAgICAwLjkyNDE2Nl0geGVuX3dkdDogWGVuIFdhdGNoRG9n
IFRpbWVyIERyaXZlciB2MC4wMQpbICAgIDAuOTI5MzE0XSB4ZW5fd2R0OiBjYW5ub3QgcmVn
aXN0ZXIgbWlzY2RldiBvbiBtaW5vcj0xMzAgKC0xNikKWyAgICAwLjkzNTE1NF0gd2R0OiBw
cm9iZSBvZiB3ZHQgZmFpbGVkIHdpdGggZXJyb3IgLTE2ClsgICAgMC45NDAxNjNdIHNkaGNp
OiBTZWN1cmUgRGlnaXRhbCBIb3N0IENvbnRyb2xsZXIgSW50ZXJmYWNlIGRyaXZlcgpbICAg
IDAuOTQ2MzIyXSBzZGhjaTogQ29weXJpZ2h0KGMpIFBpZXJyZSBPc3NtYW4KWyAgICAwLjk1
MTM5N10gc3VueGktbWNpIDFjMGYwMDAubW1jOiBiYXNlOjB4ZDA4NWMwMDAgaXJxOjY0Clsg
ICAgMC45NTY4MThdIHNkaGNpLXBsdGZtOiBTREhDSSBwbGF0Zm9ybSBhbmQgT0YgZHJpdmVy
IGhlbHBlcgpbICAgIDAuOTYzNTUwXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZh
Y2UgZHJpdmVyIHVzYmhpZApbICAgIDAuOTY5MTE3XSB1c2JoaWQ6IFVTQiBISUQgY29yZSBk
cml2ZXIKWyAgICAwLjk3MzIwN10gVENQOiBjdWJpYyByZWdpc3RlcmVkClsgICAgMC45Nzcx
OTZdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTAKWyAgICAwLjk4MjQ2M10g
c2l0OiBJUHY2IG92ZXIgSVB2NCB0dW5uZWxpbmcgZHJpdmVyClsgICAgMC45ODc3MTddIE5F
VDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTcKWyAgICAwLjk5MjMyNF0gS2V5IHR5
cGUgZG5zX3Jlc29sdmVyIHJlZ2lzdGVyZWQKWyAgICAwLjk5NjcwOV0gUmVnaXN0ZXJpbmcg
U1dQL1NXUEIgZW11bGF0aW9uIGhhbmRsZXIKWyAgICAxLjAwMjg3NV0gYWhjaS01djogNTAw
MCBtViAKWyAgICAxLjAwNjEyMV0gdXNiMS12YnVzOiA1MDAwIG1WIApbICAgIDEuMDA5NjIx
XSB1c2IyLXZidXM6IDUwMDAgbVYgClsgICAgMS4wMTMwNDNdIHVuYWJsZSB0byBmaW5kIHRy
YW5zY2VpdmVyClsgICAgMS4wMTY3MjZdIHN1bnhpLWVoY2kgMWMxNDAwMC5laGNpMDogRUhD
SSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjAyMjQwMF0gc3VueGktZWhjaSAxYzE0MDAwLmVo
Y2kwOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDEKWyAg
ICAxLjAzMDE2NV0gc3VueGktZWhjaSAxYzE0MDAwLmVoY2kwOiBpcnEgNzEsIGlvIG1lbSAw
eDAxYzE0MDAwClsgICAgMS4wNDg3MTJdIHN1bnhpLWVoY2kgMWMxNDAwMC5laGNpMDogVVNC
IDIuMCBzdGFydGVkLCBFSENJIDEuMDAKWyAgICAxLjA1NTU5MV0gaHViIDEtMDoxLjA6IFVT
QiBodWIgZm91bmQKWyAgICAxLjA1OTQwM10gaHViIDEtMDoxLjA6IDEgcG9ydCBkZXRlY3Rl
ZApbICAgIDEuMDYzOTQzXSB1bmFibGUgdG8gZmluZCB0cmFuc2NlaXZlcgpbICAgIDEuMDY3
NjI5XSBzdW54aS1laGNpIDFjMWMwMDAuZWhjaTE6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsg
ICAgMS4wNzMyNzRdIHN1bnhpLWVoY2kgMWMxYzAwMC5laGNpMTogbmV3IFVTQiBidXMgcmVn
aXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyClsgICAgMS4wODEwNDZdIHN1bnhpLWVo
Y2kgMWMxYzAwMC5laGNpMTogaXJxIDcyLCBpbyBtZW0gMHgwMWMxYzAwMApbICAgIDEuMDk4
Njc1XSBzdW54aS1laGNpIDFjMWMwMDAuZWhjaTE6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAx
LjAwClsgICAgMS4xMDU0NzFdIGh1YiAyLTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS4x
MDkyOTFdIGh1YiAyLTA6MS4wOiAxIHBvcnQgZGV0ZWN0ZWQKWyAgICAxLjExMzU1OV0gZHJp
dmVycy9ydGMvaGN0b3N5cy5jOiB1bmFibGUgdG8gb3BlbiBydGMgZGV2aWNlIChydGMwKQpb
ICAgIDEuMTE5ODE5XSBjbGs6IE5vdCBkaXNhYmxpbmcgdW51c2VkIGNsb2NrcwpbICAgIDEu
MTI0OTczXSBXYWl0aW5nIGZvciByb290IGRldmljZSAvZGV2L3NkYTEuLi4KWyAgICAxLjQy
ODcxMV0gdXNiIDItMTogbmV3IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2lu
ZyBzdW54aS1laGNpClsgICAgMS41NDg2NzZdIG1tYzA6IG5ldyBoaWdoIHNwZWVkIFNESEMg
Y2FyZCBhdCBhZGRyZXNzIGIzNjgKWyAgICAxLjU1NDQ5N10gaXNhIGJvdW5jZSBwb29sIHNp
emU6IDE2IHBhZ2VzClsgICAgMS41NTg4MDNdIG1tY2JsazA6IG1tYzA6YjM2OCAgICAgICA3
LjQ1IEdpQiAKWyAgICAxLjU2NDc2NV0gIG1tY2JsazA6IHAxClsgICAgMS41OTYwNTBdIHVz
Yi1zdG9yYWdlIDItMToxLjA6IFVTQiBNYXNzIFN0b3JhZ2UgZGV2aWNlIGRldGVjdGVkClsg
ICAgMS42MDI0MTVdIHNjc2kwIDogdXNiLXN0b3JhZ2UgMi0xOjEuMApbICAgIDIuNjYyODA1
XSBzY3NpIDA6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgICAgICAgICAgIFVTQiBESVNLIDIu
MCAgICAgUE1BUCBQUTogMCBBTlNJOiA0ClsgICAgMy43NjU4NzVdIHNkIDA6MDowOjA6IFtz
ZGFdIDYyNTU0MTEyIDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoMzIuMCBHQi8yOS44IEdp
QikKWyAgICAzLjc3NjEwOF0gc2QgMDowOjA6MDogW3NkYV0gV3JpdGUgUHJvdGVjdCBpcyBv
ZmYKWyAgICAzLjc4MDg1NV0gc2QgMDowOjA6MDogW3NkYV0gTW9kZSBTZW5zZTogMjMgMDAg
MDAgMDAKWyAgICAzLjc4ODYzNF0gc2QgMDowOjA6MDogW3NkYV0gTm8gQ2FjaGluZyBtb2Rl
IHBhZ2UgZm91bmQKWyAgICAzLjc5Mzg2NF0gc2QgMDowOjA6MDogW3NkYV0gQXNzdW1pbmcg
ZHJpdmUgY2FjaGU6IHdyaXRlIHRocm91Z2gKWyAgICAzLjgwOTQ3MV0gc2QgMDowOjA6MDog
W3NkYV0gTm8gQ2FjaGluZyBtb2RlIHBhZ2UgZm91bmQKWyAgICAzLjgxNDc0Nl0gc2QgMDow
OjA6MDogW3NkYV0gQXNzdW1pbmcgZHJpdmUgY2FjaGU6IHdyaXRlIHRocm91Z2gKWyAgICAz
Ljg1MzY2Nl0gIHNkYTogc2RhMQpbICAgIDMuODYzOTc1XSBzZCAwOjA6MDowOiBbc2RhXSBO
byBDYWNoaW5nIG1vZGUgcGFnZSBmb3VuZApbICAgIDMuODY5MjY1XSBzZCAwOjA6MDowOiBb
c2RhXSBBc3N1bWluZyBkcml2ZSBjYWNoZTogd3JpdGUgdGhyb3VnaApbICAgIDMuODc1Mzg3
XSBzZCAwOjA6MDowOiBbc2RhXSBBdHRhY2hlZCBTQ1NJIHJlbW92YWJsZSBkaXNrClsgICAg
My44ODM2MThdIEVYVDMtZnMgKHNkYTEpOiBlcnJvcjogY291bGRuJ3QgbW91bnQgYmVjYXVz
ZSBvZiB1bnN1cHBvcnRlZCBvcHRpb25hbCBmZWF0dXJlcyAoMjQwKQpbICAgIDMuODk0Mjc3
XSBFWFQyLWZzIChzZGExKTogZXJyb3I6IGNvdWxkbid0IG1vdW50IGJlY2F1c2Ugb2YgdW5z
dXBwb3J0ZWQgb3B0aW9uYWwgZmVhdHVyZXMgKDI0MCkKWyAgICA1LjIyNTgxMl0gRVhUNC1m
cyAoc2RhMSk6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRoIG9yZGVyZWQgZGF0YSBtb2RlLiBP
cHRzOiAobnVsbCkKWyAgICA1LjIzMzU0N10gVkZTOiBNb3VudGVkIHJvb3QgKGV4dDQgZmls
ZXN5c3RlbSkgb24gZGV2aWNlIDg6MS4KWyAgICA1LjI1MTg2MV0gZGV2dG1wZnM6IG1vdW50
ZWQKWyAgICA1LjI1NTExNF0gRnJlZWluZyB1bnVzZWQga2VybmVsIG1lbW9yeTogMjMySyAo
YzA1ODkwMDAgLSBjMDVjMzAwMCkKWyAgICA3LjA1MDMwOV0gc3lzdGVtZFsxXTogTW91bnRp
bmcgY2dyb3VwIHRvIC9zeXMvZnMvY2dyb3VwL2NwdXNldCBvZiB0eXBlIGNncm91cCB3aXRo
IG9wdGlvbnMgY3B1c2V0LgpbICAgIDcuMDYwMDg1XSBzeXN0ZW1kWzFdOiBNb3VudGluZyBj
Z3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvY3B1LGNwdWFjY3Qgb2YgdHlwZSBjZ3JvdXAgd2l0
aCBvcHRpb25zIGNwdSxjcHVhY2N0LgpbICAgIDcuMDcwNTk0XSBzeXN0ZW1kWzFdOiBNb3Vu
dGluZyBjZ3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvbWVtb3J5IG9mIHR5cGUgY2dyb3VwIHdp
dGggb3B0aW9ucyBtZW1vcnkuClsgICAgNy4wODAyNDldIHN5c3RlbWRbMV06IE1vdW50aW5n
IGNncm91cCB0byAvc3lzL2ZzL2Nncm91cC9kZXZpY2VzIG9mIHR5cGUgY2dyb3VwIHdpdGgg
b3B0aW9ucyBkZXZpY2VzLgpbICAgIDcuMDg5OTUyXSBzeXN0ZW1kWzFdOiBNb3VudGluZyBj
Z3JvdXAgdG8gL3N5cy9mcy9jZ3JvdXAvYmxraW8gb2YgdHlwZSBjZ3JvdXAgd2l0aCBvcHRp
b25zIGJsa2lvLgpbICAgIDcuMDk5Mjk3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kIDIwOCBydW5u
aW5nIGluIHN5c3RlbSBtb2RlLiAoK1BBTSAtTElCV1JBUCAtQVVESVQgLVNFTElOVVggLUlN
QSAtU1lTVklOSVQgK0xJQkNSWVBUU0VUVVAgK0dDUllQVCArQUNMICtYWikKWyAgICA3LjEy
MTI1MV0gc3lzdGVtZFsxXTogU2V0IGhvc3RuYW1lIHRvIDxhbGFybT4uClsgICAgNy4xMzE4
MDldIHN5c3RlbWRbMV06IFVzaW5nIGNncm91cCBjb250cm9sbGVyIG5hbWU9c3lzdGVtZC4g
RmlsZSBzeXN0ZW0gaGllcmFyY2h5IGlzIGF0IC9zeXMvZnMvY2dyb3VwL3N5c3RlbWQuClsg
ICAgNy4xNDI2NzldIHN5c3RlbWRbMV06IEluc3RhbGxlZCByZWxlYXNlIGFnZW50LgpbICAg
IDcuMTQ4MDc1XSBzeXN0ZW1kWzFdOiBVc2luZyBub3RpZmljYXRpb24gc29ja2V0IEAvb3Jn
L2ZyZWVkZXNrdG9wL3N5c3RlbWQxL25vdGlmeQpbICAgIDcuMTU1OTkwXSBzeXN0ZW1kWzFd
OiBTZXQgdXAgVEZEX1RJTUVSX0NBTkNFTF9PTl9TRVQgdGltZXJmZC4KWyAgICA3LjE3MjA0
Ml0gcmFuZG9tOiBzeXN0ZW1kIHVyYW5kb20gcmVhZCB3aXRoIDU3IGJpdHMgb2YgZW50cm9w
eSBhdmFpbGFibGUKWyAgICA3LjE3OTIzMl0gc3lzdGVtZFsxXTogU3VjY2Vzc2Z1bGx5IGNy
ZWF0ZWQgcHJpdmF0ZSBELUJ1cyBzZXJ2ZXIuClsgICAgNy4xODc3NTldIHN5c3RlbWRbMV06
IFNwYXduZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWdl
dHR5LWdlbmVyYXRvciBhcyA0MApbICAgIDcuMTk4MDgwXSBzeXN0ZW1kWzFdOiBTcGF3bmVk
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1zeXN0ZW0tdXBk
YXRlLWdlbmVyYXRvciBhcyA0MQpbICAgIDcuMjEwMjM0XSBzeXN0ZW1kWzFdOiBTcGF3bmVk
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1jcnlwdHNldHVw
LWdlbmVyYXRvciBhcyA0MgpbICAgIDcuMjI0NzMyXSBzeXN0ZW1kWzFdOiBTcGF3bmVkIC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1ncHQtYXV0by1nZW5l
cmF0b3IgYXMgNDMKWyAgICA3LjIzNjM1NV0gc3lzdGVtZFsxXTogU3Bhd25lZCAvdXNyL2xp
Yi9zeXN0ZW1kL3N5c3RlbS1nZW5lcmF0b3JzL3N5c3RlbWQtZnN0YWItZ2VuZXJhdG9yIGFz
IDQ0ClsgICAgNy4yNDc3NDRdIHN5c3RlbWRbMV06IFNwYXduZWQgL3Vzci9saWIvc3lzdGVt
ZC9zeXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWVmaS1ib290LWdlbmVyYXRvciBhcyA0NQpb
ICAgIDcuMjYwNzI0XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbS1nZW5l
cmF0b3JzL3N5c3RlbWQtZ2V0dHktZ2VuZXJhdG9yIGV4aXRlZCBzdWNjZXNzZnVsbHkuClsg
ICAgNy4yNzA0MzNdIHN5c3RlbWRbMV06IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtLWdlbmVy
YXRvcnMvc3lzdGVtZC1zeXN0ZW0tdXBkYXRlLWdlbmVyYXRvciBleGl0ZWQgc3VjY2Vzc2Z1
bGx5LgpbICAgIDcuMzA0MDU5XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3Rl
bS1nZW5lcmF0b3JzL3N5c3RlbWQtY3J5cHRzZXR1cC1nZW5lcmF0b3IgZXhpdGVkIHN1Y2Nl
c3NmdWxseS4KWyAgICA4LjgxNDY4OF0gc3lzdGVtZFsxXTogL3Vzci9saWIvc3lzdGVtZC9z
eXN0ZW0tZ2VuZXJhdG9ycy9zeXN0ZW1kLWdwdC1hdXRvLWdlbmVyYXRvciBleGl0ZWQgc3Vj
Y2Vzc2Z1bGx5LgpbICAgIDguODI0NTQ0XSBzeXN0ZW1kWzFdOiAvdXNyL2xpYi9zeXN0ZW1k
L3N5c3RlbS1nZW5lcmF0b3JzL3N5c3RlbWQtZnN0YWItZ2VuZXJhdG9yIGV4aXRlZCBzdWNj
ZXNzZnVsbHkuClsgICAgOC44MzM5NzVdIHN5c3RlbWRbMV06IC91c3IvbGliL3N5c3RlbWQv
c3lzdGVtLWdlbmVyYXRvcnMvc3lzdGVtZC1lZmktYm9vdC1nZW5lcmF0b3IgZXhpdGVkIHN1
Y2Nlc3NmdWxseS4KWyAgICA4Ljg0Nzg2OV0gc3lzdGVtZFsxXTogTG9va2luZyBmb3IgdW5p
dCBmaWxlcyBpbiAoaGlnaGVyIHByaW9yaXR5IGZpcnN0KToKWyAgICA4Ljg1NDg2OV0gc3lz
dGVtZFsxXTogCS9ldGMvc3lzdGVtZC9zeXN0ZW0KWyAgICA4Ljg1OTEyNF0gc3lzdGVtZFsx
XTogCS9ydW4vc3lzdGVtZC9zeXN0ZW0KWyAgICA4Ljg2MzQxNV0gc3lzdGVtZFsxXTogCS9y
dW4vc3lzdGVtZC9nZW5lcmF0b3IKWyAgICA4Ljg2ODAwN10gc3lzdGVtZFsxXTogCS91c3Iv
bG9jYWwvbGliL3N5c3RlbWQvc3lzdGVtClsgICAgOC44NzMyNDhdIHN5c3RlbWRbMV06IAkv
dXNyL2xpYi9zeXN0ZW1kL3N5c3RlbQpbICAgIDguODc3OTA5XSBzeXN0ZW1kWzFdOiBTeXNW
IGluaXQgc2NyaXB0cyBhbmQgcmNOLmQgbGlua3Mgc3VwcG9ydCBkaXNhYmxlZApbICAgIDgu
OTczMTI5XSBzeXN0ZW1kWzFdOiBGYWlsZWQgdG8gbG9hZCBjb25maWd1cmF0aW9uIGZvciBw
bHltb3V0aC1zdGFydC5zZXJ2aWNlOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5ClsgICAg
OC45OTY1MDddIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9y
IHN5c2xvZy5zZXJ2aWNlOiBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5ClsgICAgOS4wMjc4
NTNdIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9yIHN5c2xv
Zy50YXJnZXQ6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjA0ODM2M10gc3lz
dGVtZFsxXTogRmFpbGVkIHRvIGxvYWQgY29uZmlndXJhdGlvbiBmb3IgZGlzcGxheS1tYW5h
Z2VyLnNlcnZpY2U6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjA3Njg0M10g
c3lzdGVtZFsxXTogRmFpbGVkIHRvIGxvYWQgY29uZmlndXJhdGlvbiBmb3IgcGx5bW91dGgt
cXVpdC13YWl0LnNlcnZpY2U6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgICA5LjEy
MDU2MF0gc3lzdGVtZFsxXTogLS5tb3VudCBjaGFuZ2VkIGRlYWQgLT4gbW91bnRlZApbICAg
IDkuMTI2MTQwXSBzeXN0ZW1kWzFdOiBBY3RpdmF0aW5nIGRlZmF1bHQgdW5pdDogZGVmYXVs
dC50YXJnZXQKWyAgICA5LjEzMjIxM10gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUg
am9iIGdyYXBoaWNhbC50YXJnZXQvc3RhcnQvaXNvbGF0ZQpbICAgIDkuMTM5OTM0XSBzeXN0
ZW1kWzFdOiBDYW5ub3QgYWRkIGRlcGVuZGVuY3kgam9iIGZvciB1bml0IGRpc3BsYXktbWFu
YWdlci5zZXJ2aWNlLCBpZ25vcmluZzogVW5pdCBkaXNwbGF5LW1hbmFnZXIuc2VydmljZSBm
YWlsZWQgdG8gbG9hZDogTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeS4KWyAgICA5LjE1NTMz
Nl0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgZ3JhcGhpY2FsLnRhcmdldC9zdGFy
dCBhcyAxClsgICAgOS4xNjE4MzFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIG11
bHRpLXVzZXIudGFyZ2V0L3N0YXJ0IGFzIDIKWyAgICA5LjE2ODMzMV0gc3lzdGVtZFsxXTog
SW5zdGFsbGVkIG5ldyBqb2IgYmFzaWMudGFyZ2V0L3N0YXJ0IGFzIDMKWyAgICA5LjE3NDUz
MV0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzaW5pdC50YXJnZXQvc3RhcnQg
YXMgNApbICAgIDkuMTgwODE4XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBsb2Nh
bC1mcy50YXJnZXQvc3RhcnQgYXMgNQpbICAgIDkuMTg3MTk4XSBzeXN0ZW1kWzFdOiBJbnN0
YWxsZWQgbmV3IGpvYiB0bXAubW91bnQvc3RhcnQgYXMgNgpbICAgIDkuMTkzMTE1XSBzeXN0
ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXJlbW91bnQtZnMuc2VydmljZS9z
dGFydCBhcyA4ClsgICAgOS4yMDA0NzddIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IGxvY2FsLWZzLXByZS50YXJnZXQvc3RhcnQgYXMgOQpbICAgIDkuMjA3MjE2XSBzeXN0ZW1k
WzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzd2FwLnRhcmdldC9zdGFydCBhcyAxMQpbICAgIDku
MjEzMzkyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBrbW9kLXN0YXRpYy1ub2Rl
cy5zZXJ2aWNlL3N0YXJ0IGFzIDEyClsgICAgOS4yMjA3NjBdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIGRldi1odWdlcGFnZXMubW91bnQvc3RhcnQgYXMgMTMKWyAgICA5LjIy
NzU4M10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgcHJvYy1zeXMtZnMtYmluZm10
X21pc2MuYXV0b21vdW50L3N0YXJ0IGFzIDE0ClsgICAgOS4yMzU2NjVdIHN5c3RlbWRbMV06
IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtam91cm5hbGQuc2VydmljZS9zdGFydCBhcyAx
NQpbICAgIDkuMjQyOTQyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1k
LWpvdXJuYWxkLnNvY2tldC9zdGFydCBhcyAxNgpbICAgIDkuMjUwMTMyXSBzeXN0ZW1kWzFd
OiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25zb2xlLnBhdGgv
c3RhcnQgYXMgMTcKWyAgICA5LjI1ODE4N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBq
b2Igc3lzdGVtZC1iaW5mbXQuc2VydmljZS9zdGFydCBhcyAxOApbICAgIDkuMjY1MzA1XSBz
eXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZp
Y2Uvc3RhcnQgYXMgMTkKWyAgICA5LjI3Mjg1N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5l
dyBqb2Igc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudC9zdGFydCBhcyAyMApbICAgIDkuMjc5OTQ2
XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXJhbmRvbS1zZWVkLnNl
cnZpY2Uvc3RhcnQgYXMgMjEKWyAgICA5LjI4NzQ3MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVk
IG5ldyBqb2Igc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlL3N0YXJ0IGFzIDIyClsg
ICAgOS4yOTUyODldIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtdG1w
ZmlsZXMtc2V0dXAtZGV2LnNlcnZpY2Uvc3RhcnQgYXMgMjMKWyAgICA5LjMwMzQ0NV0gc3lz
dGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC11ZGV2ZC5zZXJ2aWNlL3N0YXJ0
IGFzIDI0ClsgICAgOS4zMTA0NTZdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5
c3RlbWQtdWRldmQtY29udHJvbC5zb2NrZXQvc3RhcnQgYXMgMjUKWyAgICA5LjMxODA2NF0g
c3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29j
a2V0L3N0YXJ0IGFzIDI2ClsgICAgOS4zMjU2MjBdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBu
ZXcgam9iIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS9zdGFydCBhcyAyNwpbICAg
IDkuMzMzNDIyXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBkZXYtbXF1ZXVlLm1v
dW50L3N0YXJ0IGFzIDI4ClsgICAgOS4zNDAwMDhdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBu
ZXcgam9iIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2Uvc3RhcnQgYXMgMjkKWyAgICA5
LjM0NzYzNF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzLWZzLWZ1c2UtY29u
bmVjdGlvbnMubW91bnQvc3RhcnQgYXMgMzAKWyAgICA5LjM1NTM0N10gc3lzdGVtZFsxXTog
SW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC1qb3VybmFsLWZsdXNoLnNlcnZpY2Uvc3RhcnQg
YXMgMzEKWyAgICA5LjM2MzA2MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lz
LWtlcm5lbC1jb25maWcubW91bnQvc3RhcnQgYXMgMzIKWyAgICA5LjM3MDI1N10gc3lzdGVt
ZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtZC1tb2R1bGVzLWxvYWQuc2VydmljZS9z
dGFydCBhcyAzMwpbICAgIDkuMzc3ODYxXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpv
YiBjcnlwdHNldHVwLnRhcmdldC9zdGFydCBhcyAzNApbICAgIDkuMzg0NTYzXSBzeXN0ZW1k
WzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlL3N0YXJ0IGFz
IDM1ClsgICAgOS4zOTE2NjRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNvY2tl
dHMudGFyZ2V0L3N0YXJ0IGFzIDM4ClsgICAgOS4zOTgwNThdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIGRidXMuc29ja2V0L3N0YXJ0IGFzIDM5ClsgICAgOS40MDQyMzJdIHN5
c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbS5zbGljZS9zdGFydCBhcyA0MApb
ICAgIDkuNDEwNDY1XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiAtLnNsaWNlL3N0
YXJ0IGFzIDQxClsgICAgOS40MTYyNThdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHN5c3RlbWQtc2h1dGRvd25kLnNvY2tldC9zdGFydCBhcyA0MgpbICAgIDkuNDIzNTU0XSBz
eXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWluaXRjdGwuc29ja2V0L3N0
YXJ0IGFzIDQzClsgICAgOS40MzA2NjFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IGRtZXZlbnRkLnNvY2tldC9zdGFydCBhcyA0NApbICAgIDkuNDM3MTQ0XSBzeXN0ZW1kWzFd
OiBJbnN0YWxsZWQgbmV3IGpvYiBsdm1ldGFkLnNvY2tldC9zdGFydCBhcyA0NQpbICAgIDku
NDQzNTg0XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiB0aW1lcnMudGFyZ2V0L3N0
YXJ0IGFzIDQ2ClsgICAgOS40NDk5MDFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHN5c3RlbWQtdG1wZmlsZXMtY2xlYW4udGltZXIvc3RhcnQgYXMgNDcKWyAgICA5LjQ1NzUx
MF0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgcGF0aHMudGFyZ2V0L3N0YXJ0IGFz
IDQ4ClsgICAgOS40NjM3NjZdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNsaWNl
cy50YXJnZXQvc3RhcnQgYXMgNDkKWyAgICA5LjQ3MDA5NF0gc3lzdGVtZFsxXTogSW5zdGFs
bGVkIG5ldyBqb2Igc3NoZC5zZXJ2aWNlL3N0YXJ0IGFzIDUwClsgICAgOS40NzYzMTZdIHN5
c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQg
YXMgNTEKWyAgICA5LjQ4MzE5MV0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgaGF2
ZWdlZC5zZXJ2aWNlL3N0YXJ0IGFzIDUyClsgICAgOS40ODk2ODldIHN5c3RlbWRbMV06IElu
c3RhbGxlZCBuZXcgam9iIHJlbW90ZS1mcy50YXJnZXQvc3RhcnQgYXMgNTMKWyAgICA5LjQ5
NjI2N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgbmV0Y3RsLWlmcGx1Z2RAZXRo
MC5zZXJ2aWNlL3N0YXJ0IGFzIDU0ClsgICAgOS41MDM4MDddIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIHN5cy1zdWJzeXN0ZW0tbmV0LWRldmljZXMtZXRoMC5kZXZpY2Uvc3Rh
cnQgYXMgNTUKWyAgICA5LjUxMjI2M10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Ig
c3lzdGVtLW5ldGN0bFx4MmRpZnBsdWdkLnNsaWNlL3N0YXJ0IGFzIDU2ClsgICAgOS41MjAw
MzRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtYXNrLXBhc3N3b3Jk
LXdhbGwucGF0aC9zdGFydCBhcyA1NwpbICAgIDkuNTI3ODE0XSBzeXN0ZW1kWzFdOiBJbnN0
YWxsZWQgbmV3IGpvYiBzeXN0ZW1kLWxvZ2luZC5zZXJ2aWNlL3N0YXJ0IGFzIDU4ClsgICAg
OS41MzQ5MjJdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHVzZXIuc2xpY2Uvc3Rh
cnQgYXMgNTkKWyAgICA5LjU0MDk4N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Ig
ZGJ1cy5zZXJ2aWNlL3N0YXJ0IGFzIDYwClsgICAgOS41NDcyMTBdIHN5c3RlbWRbMV06IElu
c3RhbGxlZCBuZXcgam9iIHN5c3RlbWQtdXNlci1zZXNzaW9ucy5zZXJ2aWNlL3N0YXJ0IGFz
IDYxClsgICAgOS41NTQ5NDFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIGdldHR5
LnRhcmdldC9zdGFydCBhcyA2MgpbICAgIDkuNTYxMzA0XSBzeXN0ZW1kWzFdOiBJbnN0YWxs
ZWQgbmV3IGpvYiBnZXR0eUB0dHkxLnNlcnZpY2Uvc3RhcnQgYXMgNjMKWyAgICA5LjU2Nzkz
Nl0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3lzdGVtLWdldHR5LnNsaWNlL3N0
YXJ0IGFzIDY0ClsgICAgOS41NzQ3MzVdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2Uvc3RhcnQgYXMgNjUKWyAgICA5LjU4MjA4M10g
c3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgZGV2LWh2YzAuZGV2aWNlL3N0YXJ0IGFz
IDY2ClsgICAgOS41ODg2MTRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHN5c3Rl
bS1zZXJpYWxceDJkZ2V0dHkuc2xpY2Uvc3RhcnQgYXMgNjcKWyAgICA5LjU5NjIzOF0gc3lz
dGVtZFsxXTogRW5xdWV1ZWQgam9iIGdyYXBoaWNhbC50YXJnZXQvc3RhcnQgYXMgMQpbICAg
IDkuNjAyMzEzXSBzeXN0ZW1kWzFdOiBMb2FkZWQgdW5pdHMgYW5kIGRldGVybWluZWQgaW5p
dGlhbCB0cmFuc2FjdGlvbiBpbiAyLjQxNjczMnMuClsgICAgOS42MTA1OThdIHN5c3RlbWRb
MV06IEV4cGVjdGluZyBkZXZpY2UgZGV2LWh2YzAuZGV2aWNlLi4uClsgICAgOS42MjA4ODVd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIEZvcndhcmQgUGFzc3dvcmQgUmVxdWVzdHMgdG8gV2Fs
bCBEaXJlY3RvcnkgV2F0Y2guClsgICAgOS42MjkwODBdIHN5c3RlbWRbMV06IHN5c3RlbWQt
YXNrLXBhc3N3b3JkLXdhbGwucGF0aCBjaGFuZ2VkIGRlYWQgLT4gd2FpdGluZwpbICAgIDku
NjM2MjY3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC1hc2stcGFzc3dvcmQtd2FsbC5wYXRo
L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuNjQ0MzU4XSBzeXN0ZW1kWzFd
OiBTdGFydGVkIEZvcndhcmQgUGFzc3dvcmQgUmVxdWVzdHMgdG8gV2FsbCBEaXJlY3Rvcnkg
V2F0Y2guClsgICAgOS42NTIwMjJdIHN5c3RlbWRbMV06IEV4cGVjdGluZyBkZXZpY2Ugc3lz
LXN1YnN5c3RlbS1uZXQtZGV2aWNlcy1ldGgwLmRldmljZS4uLgpbICAgIDkuNjY2MDcxXSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBSZW1vdGUgRmlsZSBTeXN0ZW1zLgpbICAgIDkuNjcxMTc1
XSBzeXN0ZW1kWzFdOiByZW1vdGUtZnMudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUK
WyAgICA5LjY3NzA1MF0gc3lzdGVtZFsxXTogSm9iIHJlbW90ZS1mcy50YXJnZXQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS42ODkyMDRdIHN5c3RlbWRbMV06IFJlYWNo
ZWQgdGFyZ2V0IFJlbW90ZSBGaWxlIFN5c3RlbXMuClsgICAgOS42OTQ3OTVdIHN5c3RlbWRb
MV06IFN0YXJ0aW5nIExWTTIgbWV0YWRhdGEgZGFlbW9uIHNvY2tldC4KWyAgICA5LjcwMDgz
NV0gc3lzdGVtZFsxXTogbHZtZXRhZC5zb2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3Rlbmlu
ZwpbICAgIDkuNzA2Nzk4XSBzeXN0ZW1kWzFdOiBKb2IgbHZtZXRhZC5zb2NrZXQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS43MTkzNTldIHN5c3RlbWRbMV06IExpc3Rl
bmluZyBvbiBMVk0yIG1ldGFkYXRhIGRhZW1vbiBzb2NrZXQuClsgICAgOS43MjU0NjhdIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIERldmljZS1tYXBwZXIgZXZlbnQgZGFlbW9uIEZJRk9zLgpb
ICAgIDkuNzMxODc2XSBzeXN0ZW1kWzFdOiBkbWV2ZW50ZC5zb2NrZXQgY2hhbmdlZCBkZWFk
IC0+IGxpc3RlbmluZwpbICAgIDkuNzM3OTIwXSBzeXN0ZW1kWzFdOiBKb2IgZG1ldmVudGQu
c29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuNzUwOTU2XSBzeXN0
ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRGV2aWNlLW1hcHBlciBldmVudCBkYWVtb24gRklGT3Mu
ClsgICAgOS43NTc1MDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIC9kZXYvaW5pdGN0bCBDb21w
YXRpYmlsaXR5IE5hbWVkIFBpcGUuClsgICAgOS43NjQzMjldIHN5c3RlbWRbMV06IHN5c3Rl
bWQtaW5pdGN0bC5zb2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgIDkuNzcx
MDQ3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC1pbml0Y3RsLnNvY2tldC9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgICA5Ljc4NTAzNF0gc3lzdGVtZFsxXTogTGlzdGVuaW5n
IG9uIC9kZXYvaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBpcGUuClsgICAgOS43OTIw
OTZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIERlbGF5ZWQgU2h1dGRvd24gU29ja2V0LgpbICAg
IDkuNzk3NTk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXNodXRkb3duZC5zb2NrZXQgY2hhbmdl
ZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgIDkuODA0NDkzXSBzeXN0ZW1kWzFdOiBKb2Igc3lz
dGVtZC1zaHV0ZG93bmQuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
IDkuODE3NDA5XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRGVsYXllZCBTaHV0ZG93biBT
b2NrZXQuClsgICAgOS44MjMyNjVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJvb3QgU2xpY2Uu
ClsgICAgOS44MjgyNjddIHN5c3RlbWRbMV06IC0uc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFj
dGl2ZQpbICAgIDkuODMzNDc4XSBzeXN0ZW1kWzFdOiBKb2IgLS5zbGljZS9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgICA5Ljg0MzkzNl0gc3lzdGVtZFsxXTogQ3JlYXRlZCBz
bGljZSBSb290IFNsaWNlLgpbICAgIDkuODQ4ODAzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBV
c2VyIGFuZCBTZXNzaW9uIFNsaWNlLgpbICAgIDkuODU0NDg2XSBzeXN0ZW1kWzFdOiB1c2Vy
LnNsaWNlIGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgICA5Ljg1OTk0MF0gc3lzdGVtZFsx
XTogSm9iIHVzZXIuc2xpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAgOS44
NzE2NTZdIHN5c3RlbWRbMV06IENyZWF0ZWQgc2xpY2UgVXNlciBhbmQgU2Vzc2lvbiBTbGlj
ZS4KWyAgICA5Ljg3NzQyM10gc3lzdGVtZFsxXTogU3RhcnRpbmcgU3lzdGVtIFNsaWNlLgpb
ICAgIDkuODgyMzQ1XSBzeXN0ZW1kWzFdOiBzeXN0ZW0uc2xpY2UgY2hhbmdlZCBkZWFkIC0+
IGFjdGl2ZQpbICAgIDkuODg3ODY4XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtLnNsaWNlL3N0
YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuODk5MDUyXSBzeXN0ZW1kWzFdOiBD
cmVhdGVkIHNsaWNlIFN5c3RlbSBTbGljZS4KWyAgICA5LjkwMzk1N10gc3lzdGVtZFsxXTog
U3RhcnRpbmcgc3lzdGVtLW5ldGN0bFx4MmRpZnBsdWdkLnNsaWNlLgpbICAgIDkuOTEwNTY2
XSBzeXN0ZW1kWzFdOiBzeXN0ZW0tbmV0Y3RsXHgyZGlmcGx1Z2Quc2xpY2UgY2hhbmdlZCBk
ZWFkIC0+IGFjdGl2ZQpbICAgIDkuOTE3NjU5XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtLW5l
dGN0bFx4MmRpZnBsdWdkLnNsaWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
IDkuOTMxOTkzXSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3RlbS1uZXRjdGxceDJk
aWZwbHVnZC5zbGljZS4KWyAgICA5LjkzODQ0OV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lz
dGVtLWdldHR5LnNsaWNlLgpbICAgIDkuOTQ0MDE3XSBzeXN0ZW1kWzFdOiBzeXN0ZW0tZ2V0
dHkuc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgIDkuOTUwMTc0XSBzeXN0ZW1k
WzFdOiBKb2Igc3lzdGVtLWdldHR5LnNsaWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgIDkuOTYyMjI0XSBzeXN0ZW1kWzFdOiBDcmVhdGVkIHNsaWNlIHN5c3RlbS1nZXR0
eS5zbGljZS4KWyAgICA5Ljk2NzY0NV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgc3lzdGVtLXNl
cmlhbFx4MmRnZXR0eS5zbGljZS4KWyAgICA5Ljk3NDAzOF0gc3lzdGVtZFsxXTogc3lzdGVt
LXNlcmlhbFx4MmRnZXR0eS5zbGljZSBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAgOS45
ODEwNjJdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW0tc2VyaWFsXHgyZGdldHR5LnNsaWNlL3N0
YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgIDkuOTk0ODgwXSBzeXN0ZW1kWzFdOiBD
cmVhdGVkIHNsaWNlIHN5c3RlbS1zZXJpYWxceDJkZ2V0dHkuc2xpY2UuClsgICAxMC4wMDEy
OThdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNsaWNlcy4KWyAgIDEwLjAwNTE2MV0gc3lzdGVt
ZFsxXTogc2xpY2VzLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAxMC4wMTA5
MzBdIHN5c3RlbWRbMV06IEpvYiBzbGljZXMudGFyZ2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1
bHQ9ZG9uZQpbICAgMTAuMDIxNjA4XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBTbGlj
ZXMuClsgICAxMC4wMjYwNjRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEVuY3J5cHRlZCBWb2x1
bWVzLgpbICAgMTAuMDMxMDE5XSBzeXN0ZW1kWzFdOiBjcnlwdHNldHVwLnRhcmdldCBjaGFu
Z2VkIGRlYWQgLT4gYWN0aXZlClsgICAxMC4wMzcwMDZdIHN5c3RlbWRbMV06IEpvYiBjcnlw
dHNldHVwLnRhcmdldC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjA0OTA5
Nl0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgRW5jcnlwdGVkIFZvbHVtZXMuClsgICAx
MC4wNTQ1NjldIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0vc3lzIHN1
Y2NlZWRlZCBmb3Igc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29ja2V0LgpbICAgMTAuMDYzNDIy
XSBzeXN0ZW1kWzFdOiBTdGFydGluZyB1ZGV2IEtlcm5lbCBTb2NrZXQuClsgICAxMC4wNjg0
NjBdIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldmQta2VybmVsLnNvY2tldCBjaGFuZ2VkIGRl
YWQgLT4gbGlzdGVuaW5nClsgICAxMC4wNzU2MTVdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1k
LXVkZXZkLWtlcm5lbC5zb2NrZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAx
MC4wODgzODNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiB1ZGV2IEtlcm5lbCBTb2NrZXQu
ClsgICAxMC4wOTM4MThdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0v
c3lzIHN1Y2NlZWRlZCBmb3Igc3lzdGVtZC11ZGV2ZC1jb250cm9sLnNvY2tldC4KWyAgIDEw
LjEwMjcxMF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgdWRldiBDb250cm9sIFNvY2tldC4KWyAg
IDEwLjEwNzg5OV0gc3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC1jb250cm9sLnNvY2tldCBj
aGFuZ2VkIGRlYWQgLT4gbGlzdGVuaW5nClsgICAxMC4xMTUxNTJdIHN5c3RlbWRbMV06IEpv
YiBzeXN0ZW1kLXVkZXZkLWNvbnRyb2wuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9
ZG9uZQpbICAgMTAuMTI4MTA0XSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gdWRldiBDb250
cm9sIFNvY2tldC4KWyAgIDEwLjEzMzY1Nl0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4
aXN0cz0hL3J1bi9wbHltb3V0aC9waWQgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLWFzay1wYXNz
d29yZC1jb25zb2xlLnBhdGguClsgICAxMC4xNDM3ODNdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IERpc3BhdGNoIFBhc3N3b3JkIFJlcXVlc3RzIHRvIENvbnNvbGUgRGlyZWN0b3J5IFdhdGNo
LgpbICAgMTAuMTUyMDE3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWFzay1wYXNzd29yZC1jb25z
b2xlLnBhdGggY2hhbmdlZCBkZWFkIC0+IHdhaXRpbmcKWyAgIDEwLjE1OTU1NF0gc3lzdGVt
ZFsxXTogSm9iIHN5c3RlbWQtYXNrLXBhc3N3b3JkLWNvbnNvbGUucGF0aC9zdGFydCBmaW5p
c2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjE2NzY5NV0gc3lzdGVtZFsxXTogU3RhcnRlZCBE
aXNwYXRjaCBQYXNzd29yZCBSZXF1ZXN0cyB0byBDb25zb2xlIERpcmVjdG9yeSBXYXRjaC4K
WyAgIDEwLjE3NTc1NF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgUGF0aHMuClsgICAxMC4xNzk2
MzJdIHN5c3RlbWRbMV06IHBhdGhzLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsg
ICAxMC4xODUxODJdIHN5c3RlbWRbMV06IEpvYiBwYXRocy50YXJnZXQvc3RhcnQgZmluaXNo
ZWQsIHJlc3VsdD1kb25lClsgICAxMC4xOTU4MTRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFy
Z2V0IFBhdGhzLgpbICAgMTAuMjAwMzA3XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBKb3VybmFs
IFNvY2tldC4KWyAgIDEwLjIwNTE1MV0gc3lzdGVtZFsxXTogc3lzdGVtZC1qb3VybmFsZC5z
b2NrZXQgY2hhbmdlZCBkZWFkIC0+IGxpc3RlbmluZwpbICAgMTAuMjExOTgxXSBzeXN0ZW1k
WzFdOiBKb2Igc3lzdGVtZC1qb3VybmFsZC5zb2NrZXQvc3RhcnQgZmluaXNoZWQsIHJlc3Vs
dD1kb25lClsgICAxMC4yMjQxMDNdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiBKb3VybmFs
IFNvY2tldC4KWyAgIDEwLjIyOTI4M10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0
cz0vc3lzL2tlcm5lbC9kZWJ1ZyBzdWNjZWVkZWQgZm9yIHN5cy1rZXJuZWwtZGVidWcubW91
bnQuClsgICAxMC4yMzgyOTJdIHN5c3RlbWRbMV06IE1vdW50aW5nIERlYnVnIEZpbGUgU3lz
dGVtLi4uClsgICAxMC4yNDgxOTBdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC9i
aW4vbW91bnQgZGVidWdmcyAvc3lzL2tlcm5lbC9kZWJ1ZyAtdCBkZWJ1Z2ZzClsgICAxMC4y
NTY5NzZdIHN5c3RlbWRbMV06IEZvcmtlZCAvYmluL21vdW50IGFzIDQ3ClsgICAxMC4yNjQ5
MDNdIHN5c3RlbWRbNDddOiBFeGVjdXRpbmc6IC9iaW4vbW91bnQgZGVidWdmcyAvc3lzL2tl
cm5lbC9kZWJ1ZyAtdCBkZWJ1Z2ZzClsgICAxMC4yNzMxMzBdIHN5c3RlbWRbMV06IHN5cy1r
ZXJuZWwtZGVidWcubW91bnQgY2hhbmdlZCBkZWFkIC0+IG1vdW50aW5nClsgICAxMC4yODAw
NDBdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9L2Rldi90dHkwIHN1Y2NlZWRl
ZCBmb3Igc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlLgpbICAgMTAuMjg5ODM4XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBTZXR1cCBWaXJ0dWFsIENvbnNvbGUuLi4KWyAgIDEwLjMw
MTQ5NF0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9z
eXN0ZW1kLXZjb25zb2xlLXNldHVwClsgICAxMC4zMTAwOTZdIHN5c3RlbWRbMV06IEZvcmtl
ZCAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtdmNvbnNvbGUtc2V0dXAgYXMgNDgKWyAgIDEw
LjMxODA4NV0gc3lzdGVtZFsxXTogc3lzdGVtZC12Y29uc29sZS1zZXR1cC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTAuMzI4NTAxXSBzeXN0ZW1kWzQ4XTogRXhlY3V0
aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtdmNvbnNvbGUtc2V0dXAKWyAgIDEwLjMz
NjIyOV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vcHJvYy9zeXMvZnMvbXF1
ZXVlIGZhaWxlZCBmb3IgZGV2LW1xdWV1ZS5tb3VudC4KWyAgIDEwLjM0NTI1Nl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgb2YgZGV2LW1xdWV1ZS5tb3VudCByZXF1ZXN0ZWQgYnV0IGNvbmRp
dGlvbiBmYWlsZWQuIElnbm9yaW5nLgpbICAgMTAuMzU0OTYyXSBzeXN0ZW1kWzFdOiBKb2Ig
ZGV2LW1xdWV1ZS5tb3VudC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEwLjM2
MTg1NV0gc3lzdGVtZFsxXTogTW91bnRlZCBQT1NJWCBNZXNzYWdlIFF1ZXVlIEZpbGUgU3lz
dGVtLgpbICAgMTAuMzY5NDkwXSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoSXNSZWFkV3Jp
dGU9L3N5cyBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UuClsg
ICAxMC4zNzg0OTFdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHVkZXYgQ29sZHBsdWcgYWxsIERl
dmljZXMuLi4KWyAgIDEwLjM5NTQwMF0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTog
L3Vzci9iaW4vdWRldmFkbSB0cmlnZ2VyIC0tdHlwZT1zdWJzeXN0ZW1zIC0tYWN0aW9uPWFk
ZApbICAgMTAuNDA0OTE0XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9iaW4vdWRldmFkbSBh
cyA0OQpbICAgMTAuNDE1MDE1XSBzeXN0ZW1kWzQ5XTogRXhlY3V0aW5nOiAvdXNyL2Jpbi91
ZGV2YWRtIHRyaWdnZXIgLS10eXBlPXN1YnN5c3RlbXMgLS1hY3Rpb249YWRkClsgICAxMC40
MjUyOTddIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHN0YXJ0ClsgICAxMC40MzI5MjZdIHN5c3RlbWRbMV06IENvbmRpdGlvbktl
cm5lbENvbW1hbmRMaW5lPXxyZC5tb2R1bGVzLWxvYWQgZmFpbGVkIGZvciBzeXN0ZW1kLW1v
ZHVsZXMtbG9hZC5zZXJ2aWNlLgpbICAgMTAuNDQ0OTc3XSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25LZXJuZWxDb21tYW5kTGluZT18bW9kdWxlcy1sb2FkIGZhaWxlZCBmb3Igc3lzdGVtZC1t
b2R1bGVzLWxvYWQuc2VydmljZS4KWyAgIDEwLjQ1NDk3OV0gc3lzdGVtZFsxXTogQ29uZGl0
aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ydW4vbW9kdWxlcy1sb2FkLmQgZmFpbGVkIGZvciBz
eXN0ZW1kLW1vZHVsZXMtbG9hZC5zZXJ2aWNlLgpbICAgMTAuNDc0ODA5XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy9tb2R1bGVzLWxvYWQuZCBmYWls
ZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2UuClsgICAxMC40ODUyNjRdIHN5
c3RlbWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xvY2FsL2xpYi9t
b2R1bGVzLWxvYWQuZCBmYWlsZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2Uu
ClsgICAxMC40OTk0NDFdIHN5c3RlbWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5
PXwvdXNyL2xpYi9tb2R1bGVzLWxvYWQuZCBmYWlsZWQgZm9yIHN5c3RlbWQtbW9kdWxlcy1s
b2FkLnNlcnZpY2UuClsgICAxMC41MTAzMzhdIHN5c3RlbWRbMV06IENvbmRpdGlvbkRpcmVj
dG9yeU5vdEVtcHR5PXwvbGliL21vZHVsZXMtbG9hZC5kIGZhaWxlZCBmb3Igc3lzdGVtZC1t
b2R1bGVzLWxvYWQuc2VydmljZS4KWyAgIDEwLjUyMTg3OV0gc3lzdGVtZFsxXTogQ29uZGl0
aW9uQ2FwYWJpbGl0eT1DQVBfU1lTX01PRFVMRSBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtbW9k
dWxlcy1sb2FkLnNlcnZpY2UuClsgICAxMC41MzE0MjRdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IG9mIHN5c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRp
b24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDEwLjU0MjI0MV0gc3lzdGVtZFsxXTogSm9iIHN5
c3RlbWQtbW9kdWxlcy1sb2FkLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25l
ClsgICAxMC41NTAyNTRdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9hZCBLZXJuZWwgTW9kdWxl
cy4KWyAgIDEwLjU1NTM1NF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vc3lz
L2ZzL2Z1c2UvY29ubmVjdGlvbnMgZmFpbGVkIGZvciBzeXMtZnMtZnVzZS1jb25uZWN0aW9u
cy5tb3VudC4KWyAgIDEwLjU2Nzc0Ml0gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2Ygc3lzLWZz
LWZ1c2UtY29ubmVjdGlvbnMubW91bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVk
LiBJZ25vcmluZy4KWyAgIDEwLjU3Nzc0OV0gc3lzdGVtZFsxXTogSm9iIHN5cy1mcy1mdXNl
LWNvbm5lY3Rpb25zLm1vdW50L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTAu
NTg1NjUyXSBzeXN0ZW1kWzFdOiBNb3VudGVkIEZVU0UgQ29udHJvbCBGaWxlIFN5c3RlbS4K
WyAgIDEwLjU5MTc0N10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz0vc3lzL2tl
cm5lbC9jb25maWcgZmFpbGVkIGZvciBzeXMta2VybmVsLWNvbmZpZy5tb3VudC4KWyAgIDEw
LjYwMDg4MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2Ygc3lzLWtlcm5lbC1jb25maWcubW91
bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDEwLjYx
MDE2NV0gc3lzdGVtZFsxXTogSm9iIHN5cy1rZXJuZWwtY29uZmlnLm1vdW50L3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTAuNjE3NjM1XSBzeXN0ZW1kWzFdOiBNb3VudGVk
IENvbmZpZ3VyYXRpb24gRmlsZSBTeXN0ZW0uClsgICAxMC42MjMyOThdIHN5c3RlbWRbMV06
IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvcnVuL3N5c2N0bC5kIGZhaWxlZCBmb3Ig
c3lzdGVtZC1zeXNjdGwuc2VydmljZS4KWyAgIDEwLjY0MzkzNV0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ldGMvc3lzY3RsLmQgZmFpbGVkIGZvciBzeXN0
ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjUzNDcyXSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9sb2NhbC9saWIvc3lzY3RsLmQgZmFpbGVkIGZv
ciBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjY2MTgwXSBzeXN0ZW1kWzFdOiBD
b25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9saWIvc3lzY3RsLmQgc3VjY2VlZGVk
IGZvciBzeXN0ZW1kLXN5c2N0bC5zZXJ2aWNlLgpbICAgMTAuNjc2MzQ1XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2xpYi9zeXNjdGwuZCBzdWNjZWVkZWQg
Zm9yIHN5c3RlbWQtc3lzY3RsLnNlcnZpY2UuClsgICAxMC42ODU3OTddIHN5c3RlbWRbMV06
IENvbmRpdGlvblBhdGhJc1JlYWRXcml0ZT0vcHJvYy9zeXMvIHN1Y2NlZWRlZCBmb3Igc3lz
dGVtZC1zeXNjdGwuc2VydmljZS4KWyAgIDEwLjY5ODI5N10gc3lzdGVtZFsxXTogU3RhcnRp
bmcgQXBwbHkgS2VybmVsIFZhcmlhYmxlcy4uLgpbICAgMTAuNzA5ODQ2XSBzeXN0ZW1kWzFd
OiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtc3lzY3RsClsg
ICAxMC43MTgwMTRdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2xpYi9zeXN0ZW1kL3N5c3Rl
bWQtc3lzY3RsIGFzIDUwClsgICAxMC43MzI1NDVdIHN5c3RlbWRbNTBdOiBFeGVjdXRpbmc6
IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1zeXNjdGwKWyAgIDEwLjc0Nzg2M10gc3lzdGVt
ZFsxXTogc3lzdGVtZC1zeXNjdGwuc2VydmljZSBjaGFuZ2VkIGRlYWQgLT4gc3RhcnQKWyAg
IDEwLjc4NzMzN10gc3lzdGVtZFsxXTogU3RhcnRpbmcgSm91cm5hbCBTZXJ2aWNlLi4uClsg
ICAxMC44MDExMzddIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvbGliL3N5
c3RlbWQvc3lzdGVtZC1qb3VybmFsZApbICAgMTAuODE0NDc0XSBzeXN0ZW1kWzU0XTogRXhl
Y3V0aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtam91cm5hbGQKWyAgIDEwLjgyMTIx
M10gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1qb3VybmFs
ZCBhcyA1NApbICAgMTAuODMzNDk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWxkLnNl
cnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDEwLjg0MTA3MF0gc3lzdGVtZFsx
XTogSm9iIHN5c3RlbWQtam91cm5hbGQuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0
PWRvbmUKWyAgIDEwLjg1NDYzMl0gc3lzdGVtZFsxXTogU3RhcnRlZCBKb3VybmFsIFNlcnZp
Y2UuClsgICAxMC44NjE4NjRdIHN5c3RlbWRbMV06IHN5c3RlbWQtam91cm5hbGQuc29ja2V0
IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDEwLjg2OTQ0NV0gc3lzdGVtZFsx
XTogQ29uZGl0aW9uUGF0aElzUmVhZFdyaXRlPS9wcm9jL3N5cy8gc3VjY2VlZGVkIGZvciBw
cm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5hdXRvbW91bnQuClsgICAxMC44Nzk1MTVdIHN5c3Rl
bWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9L3Byb2Mvc3lzL2ZzL2JpbmZtdF9taXNjLyBm
YWlsZWQgZm9yIHByb2Mtc3lzLWZzLWJpbmZtdF9taXNjLmF1dG9tb3VudC4KWyAgIDEwLjg5
MDM5M10gc3lzdGVtZFsxXTogU3RhcnRpbmcgb2YgcHJvYy1zeXMtZnMtYmluZm10X21pc2Mu
YXV0b21vdW50IHJlcXVlc3RlZCBidXQgY29uZGl0aW9uIGZhaWxlZC4gSWdub3JpbmcuClsg
ICAxMC45MDA2MzJdIHN5c3RlbWRbMV06IEpvYiBwcm9jLXN5cy1mcy1iaW5mbXRfbWlzYy5h
dXRvbW91bnQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMC45MDk2MDBdIHN5
c3RlbWRbMV06IFNldCB1cCBhdXRvbW91bnQgQXJiaXRyYXJ5IEV4ZWN1dGFibGUgRmlsZSBG
b3JtYXRzIEZpbGUgU3lzdGVtIEF1dG9tb3VudCBQb2ludC4KWyAgIDEwLjkxOTI4OF0gc3lz
dGVtZFsxXTogQ29uZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC9ydW4vYmluZm10LmQgZmFp
bGVkIGZvciBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlLgpbICAgMTAuOTM5MDYzXSBzeXN0ZW1k
WzFdOiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy9iaW5mbXQuZCBmYWlsZWQg
Zm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45NDgyNjddIHN5c3RlbWRbMV06
IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xvY2FsL2xpYi9iaW5mbXQuZCBm
YWlsZWQgZm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45NzEwMTldIHN5c3Rl
bWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvdXNyL2xpYi9iaW5mbXQuZCBm
YWlsZWQgZm9yIHN5c3RlbWQtYmluZm10LnNlcnZpY2UuClsgICAxMC45ODEzMDRdIHN5c3Rl
bWRbMV06IENvbmRpdGlvbkRpcmVjdG9yeU5vdEVtcHR5PXwvbGliL2JpbmZtdC5kIGZhaWxl
ZCBmb3Igc3lzdGVtZC1iaW5mbXQuc2VydmljZS4KWyAgIDEwLjk5MDU4NV0gc3lzdGVtZFsx
XTogQ29uZGl0aW9uUGF0aElzUmVhZFdyaXRlPS9wcm9jL3N5cy8gc3VjY2VlZGVkIGZvciBz
eXN0ZW1kLWJpbmZtdC5zZXJ2aWNlLgpbICAgMTEuMDAxMzY2XSBzeXN0ZW1kWzFdOiBTdGFy
dGluZyBvZiBzeXN0ZW1kLWJpbmZtdC5zZXJ2aWNlIHJlcXVlc3RlZCBidXQgY29uZGl0aW9u
IGZhaWxlZC4gSWdub3JpbmcuClsgICAxMS4wMTA3NTddIHN5c3RlbWRbMV06IEpvYiBzeXN0
ZW1kLWJpbmZtdC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTEu
MDE3OTY2XSBzeXN0ZW1kWzFdOiBTdGFydGVkIFNldCBVcCBBZGRpdGlvbmFsIEJpbmFyeSBG
b3JtYXRzLgpbICAgMTEuMDI1MzU4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3Rz
PS9zeXMva2VybmVsL21tL2h1Z2VwYWdlcyBmYWlsZWQgZm9yIGRldi1odWdlcGFnZXMubW91
bnQuClsgICAxMS4wMzQ2NTBdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIGRldi1odWdlcGFn
ZXMubW91bnQgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAg
IDExLjA0MzcwNl0gc3lzdGVtZFsxXTogSm9iIGRldi1odWdlcGFnZXMubW91bnQvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4wNTA4NzNdIHN5c3RlbWRbMV06IE1vdW50
ZWQgSHVnZSBQYWdlcyBGaWxlIFN5c3RlbS4KWyAgIDExLjA2ODMwOF0gc3lzdGVtZFsxXTog
Q29uZGl0aW9uUGF0aEV4aXN0cz0vbGliL21vZHVsZXMvMy4xMy4wLTAzNTgzLWdmOWZjODBi
LWRpcnR5L21vZHVsZXMuZGV2bmFtZSBmYWlsZWQgZm9yIGttb2Qtc3RhdGljLW5vZGVzLnNl
cnZpY2UuClsgICAxMS4wODA4NjZdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIGttb2Qtc3Rh
dGljLW5vZGVzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25v
cmluZy4KWyAgIDExLjA5MDM0N10gc3lzdGVtZFsxXTogSm9iIGttb2Qtc3RhdGljLW5vZGVz
LnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4wOTc4MTJdIHN5
c3RlbWRbMV06IFN0YXJ0ZWQgQ3JlYXRlIGxpc3Qgb2YgcmVxdWlyZWQgc3RhdGljIGRldmlj
ZSBub2RlcyBmb3IgdGhlIGN1cnJlbnQga2VybmVsLgpbICAgMTEuMTEwNzIxXSBzeXN0ZW1k
WzFdOiBDb25kaXRpb25DYXBhYmlsaXR5PUNBUF9NS05PRCBzdWNjZWVkZWQgZm9yIHN5c3Rl
bWQtdG1wZmlsZXMtc2V0dXAtZGV2LnNlcnZpY2UuClsgICAxMS4xMjQ0OThdIHN5c3RlbWQt
am91cm5hbGRbNTRdOiBWYWN1dW1pbmcgZG9uZSwgZnJlZWQgMCBieXRlcwpbICAgMTEuMTMw
NjcxXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBDcmVhdGUgc3RhdGljIGRldmljZSBub2RlcyBp
biAvZGV2Li4uClsgICAxMS4xNDk0MzZdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6
IC91c3IvYmluL3N5c3RlbWQtdG1wZmlsZXMgLS1wcmVmaXg9L2RldiAtLWNyZWF0ZQpbICAg
MTEuMTU4MjY3XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9iaW4vc3lzdGVtZC10bXBmaWxl
cyBhcyA1NgpbICAgMTEuMTgxOTY0XSBzeXN0ZW1kWzU2XTogRXhlY3V0aW5nOiAvdXNyL2Jp
bi9zeXN0ZW1kLXRtcGZpbGVzIC0tcHJlZml4PS9kZXYgLS1jcmVhdGUKWyAgIDExLjE5OTYx
OV0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxlcy1zZXR1cC1kZXYuc2VydmljZSBjaGFu
Z2VkIGRlYWQgLT4gc3RhcnQKWyAgIDExLjIwNzEzMl0gc3lzdGVtZFsxXTogU3RhcnRpbmcg
U3dhcC4KWyAgIDExLjIxNjU3N10gc3lzdGVtZFsxXTogc3dhcC50YXJnZXQgY2hhbmdlZCBk
ZWFkIC0+IGFjdGl2ZQpbICAgMTEuMjI4ODQ4XSBzeXN0ZW1kWzFdOiBKb2Igc3dhcC50YXJn
ZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMS4yNTI3MjJdIHN5c3RlbWRb
MV06IFJlYWNoZWQgdGFyZ2V0IFN3YXAuClsgICAxMS4yNTcyMjVdIHN5c3RlbWRbMV06IENv
bmRpdGlvblBhdGhFeGlzdHM9L2V0Yy9mc3RhYiBzdWNjZWVkZWQgZm9yIHN5c3RlbWQtcmVt
b3VudC1mcy5zZXJ2aWNlLgpbICAgMTEuMjc1MDk2XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBS
ZW1vdW50IFJvb3QgYW5kIEtlcm5lbCBGaWxlIFN5c3RlbXMuLi4KWyAgIDExLjMwMDc4Ml0g
c3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1k
LXJlbW91bnQtZnMKWyAgIDExLjMwODU1N10gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGli
L3N5c3RlbWQvc3lzdGVtZC1yZW1vdW50LWZzIGFzIDU3ClsgICAxMS4zMzEzNjBdIHN5c3Rl
bWRbNTddOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1yZW1vdW50LWZz
ClsgICAxMS4zNTk2NTldIHN5c3RlbWRbMV06IHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNl
IGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTEuMzY2NTEzXSBzeXN0ZW1kWzFdOiBNb3Vu
dGluZyBUZW1wb3JhcnkgRGlyZWN0b3J5Li4uClsgICAxMi4yMDUxODZdIHN5c3RlbWRbMV06
IEFib3V0IHRvIGV4ZWN1dGU6IC9iaW4vbW91bnQgdG1wZnMgL3RtcCAtdCB0bXBmcyAtbyBt
b2RlPTE3Nzcsc3RyaWN0YXRpbWUKWyAgIDEyLjIxNTM2NF0gc3lzdGVtZFsxXTogRm9ya2Vk
IC9iaW4vbW91bnQgYXMgNjAKWyAgIDEyLjIyMzkxNV0gc3lzdGVtZFsxXTogdG1wLm1vdW50
IGNoYW5nZWQgZGVhZCAtPiBtb3VudGluZwpbICAgMTIuMjMxMTk1XSBzeXN0ZW1kWzYwXTog
RXhlY3V0aW5nOiAvYmluL21vdW50IHRtcGZzIC90bXAgLXQgdG1wZnMgLW8gbW9kZT0xNzc3
LHN0cmljdGF0aW1lClsgICAxMi4yNDAyODldIHN5c3RlbWRbMV06IFNldCB1cCBqb2JzIHBy
b2dyZXNzIHRpbWVyZmQuClsgICAxMi4yNTQ5NDhdIHN5c3RlbWRbMV06IFJlY2VpdmVkIFNJ
R0NITEQgZnJvbSBQSUQgNDAgKG4vYSkuClsgICAxMi4yNjE0NDBdIHN5c3RlbWRbMV06IEdv
dCBTSUdDSExEIGZvciBwcm9jZXNzIDQ3IChtb3VudCkKWyAgIDEyLjI2OTM2NV0gc3lzdGVt
ZFsxXTogR290IG5vdGlmaWNhdGlvbiBtZXNzYWdlIGZvciB1bml0IHN5c3RlbWQtam91cm5h
bGQuc2VydmljZQpbICAgMTIuMjc2OTMxXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWxk
LnNlcnZpY2U6IEdvdCBtZXNzYWdlClsgICAxMi4yODc5MzhdIHN5c3RlbWRbMV06IHN5c3Rl
bWQtam91cm5hbGQuc2VydmljZTogZ290IFNUQVRVUz1Qcm9jZXNzaW5nIHJlcXVlc3RzLi4u
ClsgICAxMi4yOTY0NzldIHN5c3RlbWRbMV06IENoaWxkIDQ3IGRpZWQgKGNvZGU9ZXhpdGVk
LCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuMzAzMjczXSBzeXN0ZW1kWzFdOiBDaGlsZCA0
NyBiZWxvbmdzIHRvIHN5cy1rZXJuZWwtZGVidWcubW91bnQKWyAgIDEyLjMwOTc5NV0gc3lz
dGVtZFsxXTogc3lzLWtlcm5lbC1kZWJ1Zy5tb3VudCBtb3VudCBwcm9jZXNzIGV4aXRlZCwg
Y29kZT1leGl0ZWQgc3RhdHVzPTAKWyAgIDEyLjMxNzk1NF0gc3lzdGVtZFsxXTogc3lzLWtl
cm5lbC1kZWJ1Zy5tb3VudCBjaGFuZ2VkIG1vdW50aW5nIC0+IG1vdW50ZWQKWyAgIDEyLjMy
NTQ0N10gc3lzdGVtZFsxXTogSm9iIHN5cy1rZXJuZWwtZGVidWcubW91bnQvc3RhcnQgZmlu
aXNoZWQsIHJlc3VsdD1kb25lClsgICAxMi4zMzc2MzddIHN5c3RlbWRbMV06IE1vdW50ZWQg
RGVidWcgRmlsZSBTeXN0ZW0uClsgICAxMi4zNDMzMDJdIHN5c3RlbWRbMV06IEdvdCBTSUdD
SExEIGZvciBwcm9jZXNzIDQ4IChzeXN0ZW1kLXZjb25zb2wpClsgICAxMi4zNTAyNzhdIHN5
c3RlbWRbMV06IENoaWxkIDQ4IGRpZWQgKGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNT
KQpbICAgMTIuMzU2NjYwXSBzeXN0ZW1kWzFdOiBDaGlsZCA0OCBiZWxvbmdzIHRvIHN5c3Rl
bWQtdmNvbnNvbGUtc2V0dXAuc2VydmljZQpbICAgMTIuMzY0NDM3XSBzeXN0ZW1kWzFdOiBz
eXN0ZW1kLXZjb25zb2xlLXNldHVwLnNlcnZpY2U6IG1haW4gcHJvY2VzcyBleGl0ZWQsIGNv
ZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTClsgICAxMi4zNzUxMTNdIHN5c3RlbWRbMV06
IHN5c3RlbWQtdmNvbnNvbGUtc2V0dXAuc2VydmljZSBjaGFuZ2VkIHN0YXJ0IC0+IGV4aXRl
ZApbICAgMTIuMzgyODQ3XSBzeXN0ZW1kWzFdOiBKb2Igc3lzdGVtZC12Y29uc29sZS1zZXR1
cC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuMzk2MTc3XSBz
eXN0ZW1kWzFdOiBTdGFydGVkIFNldHVwIFZpcnR1YWwgQ29uc29sZS4KWyAgIDEyLjQwMjE4
OF0gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNDkgKHVkZXZhZG0pClsg
ICAxMi40MDgwNDNdIHN5c3RlbWRbMV06IENoaWxkIDQ5IGRpZWQgKGNvZGU9ZXhpdGVkLCBz
dGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNDE1MDc3XSBzeXN0ZW1kWzFdOiBDaGlsZCA0OSBi
ZWxvbmdzIHRvIHN5c3RlbWQtdWRldi10cmlnZ2VyLnNlcnZpY2UKWyAgIDEyLjQyMjExNF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZTogbWFpbiBwcm9jZXNz
IGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAgIDEyLjQzMjE1NF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZSBydW5uaW5nIG5leHQg
bWFpbiBjb21tYW5kIGZvciBzdGF0ZSBzdGFydApbICAgMTIuNDQxMjA5XSBzeXN0ZW1kWzFd
OiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi91ZGV2YWRtIHRyaWdnZXIgLS10eXBlPWRl
dmljZXMgLS1hY3Rpb249YWRkClsgICAxMi40NTA2NzldIHN5c3RlbWRbMV06IEZvcmtlZCAv
dXNyL2Jpbi91ZGV2YWRtIGFzIDYyClsgICAxMi40NTY3MjJdIHN5c3RlbWRbMV06IEdvdCBT
SUdDSExEIGZvciBwcm9jZXNzIDUwIChzeXN0ZW1kLXN5c2N0bCkKWyAgIDEyLjQ2NzA3Nl0g
c3lzdGVtZFs2Ml06IEV4ZWN1dGluZzogL3Vzci9iaW4vdWRldmFkbSB0cmlnZ2VyIC0tdHlw
ZT1kZXZpY2VzIC0tYWN0aW9uPWFkZApbICAgMTIuNDc1ODk4XSBzeXN0ZW1kWzFdOiBDaGls
ZCA1MCBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDEyLjQ4ODk5
OV0gc3lzdGVtZFsxXTogQ2hpbGQgNTAgYmVsb25ncyB0byBzeXN0ZW1kLXN5c2N0bC5zZXJ2
aWNlClsgICAxMi40OTUyMzBdIHN5c3RlbWRbMV06IHN5c3RlbWQtc3lzY3RsLnNlcnZpY2U6
IG1haW4gcHJvY2VzcyBleGl0ZWQsIGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTClsg
ICAxMi41Mjk3MTNdIHN5c3RlbWRbMV06IHN5c3RlbWQtc3lzY3RsLnNlcnZpY2UgY2hhbmdl
ZCBzdGFydCAtPiBleGl0ZWQKWyAgIDEyLjUzNjI0Nl0gc3lzdGVtZFsxXTogSm9iIHN5c3Rl
bWQtc3lzY3RsLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxMi41
NjQyMDhdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgQXBwbHkgS2VybmVsIFZhcmlhYmxlcy4KWyAg
IDEyLjU3OTMzM10gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNTYgKHN5
c3RlbWQtdG1wZmlsZSkKWyAgIDEyLjU4NTk0NF0gc3lzdGVtZFsxXTogQ2hpbGQgNTYgZGll
ZCAoY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMi42MDg5NTVdIHN5c3Rl
bWRbMV06IENoaWxkIDU2IGJlbG9uZ3MgdG8gc3lzdGVtZC10bXBmaWxlcy1zZXR1cC1kZXYu
c2VydmljZQpbICAgMTIuNjE2MjQ3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLXNl
dHVwLWRldi5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3Rh
dHVzPTAvU1VDQ0VTUwpbICAgMTIuNjM5NTk1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZp
bGVzLXNldHVwLWRldi5zZXJ2aWNlIGNoYW5nZWQgc3RhcnQgLT4gZXhpdGVkClsgICAxMi42
NDcxNTRdIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1kLXRtcGZpbGVzLXNldHVwLWRldi5zZXJ2
aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuNjgxMDg2XSBzeXN0ZW1k
WzFdOiBTdGFydGVkIENyZWF0ZSBzdGF0aWMgZGV2aWNlIG5vZGVzIGluIC9kZXYuClsgICAx
Mi42ODc3NThdIHN5c3RlbWRbMV06IEdvdCBTSUdDSExEIGZvciBwcm9jZXNzIDU3IChzeXN0
ZW1kLXJlbW91bnQpClsgICAxMi42OTk1NjNdIHN5c3RlbWRbMV06IENoaWxkIDU3IGRpZWQg
KGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNzA1OTY0XSBzeXN0ZW1k
WzFdOiBDaGlsZCA1NyBiZWxvbmdzIHRvIHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNlClsg
ICAxMi43MTMyODddIHN5c3RlbWRbMV06IHN5c3RlbWQtcmVtb3VudC1mcy5zZXJ2aWNlOiBt
YWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAg
MTIuNzIzNjczXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXJlbW91bnQtZnMuc2VydmljZSBjaGFu
Z2VkIHN0YXJ0IC0+IGV4aXRlZApbICAgMTIuNzMxMDg4XSBzeXN0ZW1kWzFdOiBKb2Igc3lz
dGVtZC1yZW1vdW50LWZzLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsg
ICAxMi43NDUzOTFdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgUmVtb3VudCBSb290IGFuZCBLZXJu
ZWwgRmlsZSBTeXN0ZW1zLgpbICAgMTIuNzUyNzI4XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hM
RCBmb3IgcHJvY2VzcyA2MCAobW91bnQpClsgICAxMi43NTg0MDhdIHN5c3RlbWRbMV06IENo
aWxkIDYwIGRpZWQgKGNvZGU9ZXhpdGVkLCBzdGF0dXM9MC9TVUNDRVNTKQpbICAgMTIuNzY1
Njk0XSBzeXN0ZW1kWzFdOiBDaGlsZCA2MCBiZWxvbmdzIHRvIHRtcC5tb3VudApbICAgMTIu
NzcxMDg4XSBzeXN0ZW1kWzFdOiB0bXAubW91bnQgbW91bnQgcHJvY2VzcyBleGl0ZWQsIGNv
ZGU9ZXhpdGVkIHN0YXR1cz0wClsgICAxMi43NzgxMTldIHN5c3RlbWRbMV06IHRtcC5tb3Vu
dCBjaGFuZ2VkIG1vdW50aW5nIC0+IG1vdW50ZWQKWyAgIDEyLjc4NDUyMF0gc3lzdGVtZFsx
XTogSm9iIHRtcC5tb3VudC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEyLjc5
NTc0M10gc3lzdGVtZFsxXTogTW91bnRlZCBUZW1wb3JhcnkgRGlyZWN0b3J5LgpbICAgMTIu
ODAxNTU1XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA2MiAodWRldmFk
bSkKWyAgIDEyLjgwNzQxMV0gc3lzdGVtZFsxXTogQ2hpbGQgNjIgZGllZCAoY29kZT1leGl0
ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMi44MTQ0MzhdIHN5c3RlbWRbMV06IENoaWxk
IDYyIGJlbG9uZ3MgdG8gc3lzdGVtZC11ZGV2LXRyaWdnZXIuc2VydmljZQpbICAgMTIuODIx
NDI0XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXYtdHJpZ2dlci5zZXJ2aWNlOiBtYWluIHBy
b2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAgMTIuODMy
MTI2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXYtdHJpZ2dlci5zZXJ2aWNlIGNoYW5nZWQg
c3RhcnQgLT4gZXhpdGVkClsgICAxMi44Mzk3MjldIHN5c3RlbWRbMV06IEpvYiBzeXN0ZW1k
LXVkZXYtdHJpZ2dlci5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAg
MTIuODUzMTUzXSBzeXN0ZW1kWzFdOiBTdGFydGVkIHVkZXYgQ29sZHBsdWcgYWxsIERldmlj
ZXMuClsgICAxMi44NTk1MjldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIExvYWQvU2F2ZSBSYW5k
b20gU2VlZC4uLgpbICAgMTIuODcwMjY4XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRl
OiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtcmFuZG9tLXNlZWQgbG9hZApbICAgMTIuODc4
NDc1XSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXJhbmRv
bS1zZWVkIGFzIDY0ClsgICAxMi44OTAxMjBdIHN5c3RlbWRbNjRdOiBFeGVjdXRpbmc6IC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC1yYW5kb20tc2VlZCBsb2FkClsgICAxMi44OTg3MTBd
IHN5c3RlbWRbMV06IHN5c3RlbWQtcmFuZG9tLXNlZWQuc2VydmljZSBjaGFuZ2VkIGRlYWQg
LT4gc3RhcnQKWyAgIDEyLjkwNjEwNl0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aElzUmVh
ZFdyaXRlPS9zeXMgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLXVkZXZkLnNlcnZpY2UuClsgICAx
Mi45MTYzNThdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIHVkZXYgS2VybmVsIERldmljZSBNYW5h
Z2VyLi4uClsgICAxMi45MzI3NzBdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11ZGV2ZApbICAgMTIuOTQxNDQ1XSBzeXN0ZW1kWzFd
OiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVkZXZkIGFzIDY1ClsgICAxMi45
NTI4NjldIHN5c3RlbWRbNjVdOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVt
ZC11ZGV2ZApbICAgMTIuOTYwMzY3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLnNlcnZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHN0YXJ0ClsgICAxMi45Njc5MjJdIHN5c3RlbWRbMV06IFN0
YXJ0aW5nIExvY2FsIEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgIDEyLjk3NTIyOV0gc3lzdGVt
ZFsxXTogbG9jYWwtZnMtcHJlLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAx
Mi45ODIxNTZdIHN5c3RlbWRbMV06IEpvYiBsb2NhbC1mcy1wcmUudGFyZ2V0L3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTIuOTk2MDY1XSBzeXN0ZW1kWzFdOiBSZWFjaGVk
IHRhcmdldCBMb2NhbCBGaWxlIFN5c3RlbXMgKFByZSkuClsgICAxMy4wMDMwMTRdIHN5c3Rl
bWRbMV06IFN0YXJ0aW5nIExvY2FsIEZpbGUgU3lzdGVtcy4KWyAgIDEzLjAwNzkzNV0gc3lz
dGVtZFsxXTogbG9jYWwtZnMudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgIDEz
LjAxNDQyMF0gc3lzdGVtZFsxXTogSm9iIGxvY2FsLWZzLnRhcmdldC9zdGFydCBmaW5pc2hl
ZCwgcmVzdWx0PWRvbmUKWyAgIDEzLjAyNzg1Ml0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJn
ZXQgTG9jYWwgRmlsZSBTeXN0ZW1zLgpbICAgMTMuMDM1NDEyXSBzeXN0ZW1kWzFdOiBDb25k
aXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3J1bi90bXBmaWxlcy5kIGZhaWxlZCBmb3Igc3lz
dGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlLgpbICAgMTMuMDQ2NzU3XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L2V0Yy90bXBmaWxlcy5kIGZhaWxlZCBm
b3Igc3lzdGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlLgpbICAgMTMuMDU5MDU4XSBzeXN0
ZW1kWzFdOiBDb25kaXRpb25EaXJlY3RvcnlOb3RFbXB0eT18L3Vzci9sb2NhbC9saWIvdG1w
ZmlsZXMuZCBmYWlsZWQgZm9yIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS4KWyAg
IDEzLjA4MDUwNV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uRGlyZWN0b3J5Tm90RW1wdHk9fC91
c3IvbGliL3RtcGZpbGVzLmQgc3VjY2VlZGVkIGZvciBzeXN0ZW1kLXRtcGZpbGVzLXNldHVw
LnNlcnZpY2UuClsgICAxMy4wOTIyMDNdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFJlY3JlYXRl
IFZvbGF0aWxlIEZpbGVzIGFuZCBEaXJlY3Rvcmllcy4uLgpbICAgMTMuMTAxNzA0XSBzeXN0
ZW1kLXVkZXZkWzY1XTogc3RhcnRpbmcgdmVyc2lvbiAyMDgKWyAgIDEzLjExNTI0OV0gc3lz
dGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9iaW4vc3lzdGVtZC10bXBmaWxlcyAt
LWNyZWF0ZSAtLXJlbW92ZSAtLWV4Y2x1ZGUtcHJlZml4PS9kZXYKWyAgIDEzLjEyNjE4Ml0g
c3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL3N5c3RlbWQtdG1wZmlsZXMgYXMgNjcKWyAg
IDEzLjE0MDEwMF0gc3lzdGVtZFs2N106IEV4ZWN1dGluZzogL3Vzci9iaW4vc3lzdGVtZC10
bXBmaWxlcyAtLWNyZWF0ZSAtLXJlbW92ZSAtLWV4Y2x1ZGUtcHJlZml4PS9kZXYKWyAgIDEz
LjE1MjA3NV0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxlcy1zZXR1cC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTMuMTYwNzc5XSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBUcmlnZ2VyIEZsdXNoaW5nIG9mIEpvdXJuYWwgdG8gUGVyc2lzdGVudCBTdG9yYWdlLi4u
ClsgICAxMy4xOTIzNzNdIHN5c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvYmlu
L3N5c3RlbWN0bCBraWxsIC0ta2lsbC13aG89bWFpbiAtLXNpZ25hbD1TSUdVU1IxIHN5c3Rl
bWQtam91cm5hbGQuc2VydmljZQpbICAgMTMuMjA2Mzg5XSBzeXN0ZW1kWzFdOiBGb3JrZWQg
L3Vzci9iaW4vc3lzdGVtY3RsIGFzIDY4ClsgICAxMy4yMTY5MDldIHN5c3RlbWRbNjhdOiBF
eGVjdXRpbmc6IC91c3IvYmluL3N5c3RlbWN0bCBraWxsIC0ta2lsbC13aG89bWFpbiAtLXNp
Z25hbD1TSUdVU1IxIHN5c3RlbWQtam91cm5hbGQuc2VydmljZQpbICAgMTMuMjMwNjEyXSBz
eXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2VydmljZSBjaGFuZ2VkIGRlYWQg
LT4gc3RhcnQKWyAgIDEzLjI0OTQzN10gc3lzdGVtZFsxXTogQWNjZXB0ZWQgY29ubmVjdGlv
biBvbiBwcml2YXRlIGJ1cy4KWyAgIDEzLjI1NzUzOF0gc3lzdGVtZFsxXTogSW5jb21pbmcg
dHJhZmZpYyBvbiBzeXN0ZW1kLXVkZXZkLWtlcm5lbC5zb2NrZXQKWyAgIDEzLjI2NjM5MV0g
c3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC1rZXJuZWwuc29ja2V0IGNoYW5nZWQgbGlzdGVu
aW5nIC0+IHJ1bm5pbmcKWyAgIDEzLjI3NTYzOF0gc3lzdGVtZFsxXTogR290IG5vdGlmaWNh
dGlvbiBtZXNzYWdlIGZvciB1bml0IHN5c3RlbWQtdWRldmQuc2VydmljZQpbICAgMTMuMjgz
NDUwXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLnNlcnZpY2U6IEdvdCBtZXNzYWdlClsg
ICAxMy4yOTEyMTJdIHN5c3RlbWRbMV06IHN5c3RlbWQtdWRldmQuc2VydmljZTogZ290IFJF
QURZPTEKWyAgIDEzLjMwMjg4NV0gc3lzdGVtZFsxXTogc3lzdGVtZC11ZGV2ZC5zZXJ2aWNl
IGNoYW5nZWQgc3RhcnQgLT4gcnVubmluZwpbICAgMTMuMzExMDIxXSBzeXN0ZW1kWzFdOiBK
b2Igc3lzdGVtZC11ZGV2ZC5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpb
ICAgMTMuMzI1NzYxXSBzeXN0ZW1kWzFdOiBTdGFydGVkIHVkZXYgS2VybmVsIERldmljZSBN
YW5hZ2VyLgpbICAgMTMuMzMyMTY4XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVkZXZkLWNvbnRy
b2wuc29ja2V0IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDEzLjM0NjUwM10g
c3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJRCA2MCAobi9hKS4KWyAgIDEz
LjM1Mjk2NV0gc3lzdGVtZFsxXTogR290IFNJR0NITEQgZm9yIHByb2Nlc3MgNjQgKHN5c3Rl
bWQtcmFuZG9tLSkKWyAgIDEzLjM2MTc3OF0gc3lzdGVtZFsxXTogQ2hpbGQgNjQgZGllZCAo
Y29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAxMy4zNjgyNzddIHN5c3RlbWRb
MV06IENoaWxkIDY0IGJlbG9uZ3MgdG8gc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlClsg
ICAxMy4zNzU5NjhdIHN5c3RlbWRbMV06IHN5c3RlbWQtcmFuZG9tLXNlZWQuc2VydmljZTog
bWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAg
IDEzLjM4ODI0Ml0gc3lzdGVtZFsxXTogc3lzdGVtZC1yYW5kb20tc2VlZC5zZXJ2aWNlIGNo
YW5nZWQgc3RhcnQgLT4gZXhpdGVkClsgICAxMy4zOTYwODldIHN5c3RlbWRbMV06IEpvYiBz
eXN0ZW1kLXJhbmRvbS1zZWVkLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25l
ClsgICAxMy40MTA4MTldIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9hZC9TYXZlIFJhbmRvbSBT
ZWVkLgpbICAgMTMuNDE2MzY5XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2Vz
cyA2NyAoc3lzdGVtZC10bXBmaWxlKQpbICAgMTMuNDI0OTE2XSBzeXN0ZW1kWzFdOiBDaGls
ZCA2NyBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDEzLjQzMTg4
OV0gc3lzdGVtZFsxXTogQ2hpbGQgNjcgYmVsb25ncyB0byBzeXN0ZW1kLXRtcGZpbGVzLXNl
dHVwLnNlcnZpY2UKWyAgIDEzLjQ0MDM0MF0gc3lzdGVtZFsxXTogc3lzdGVtZC10bXBmaWxl
cy1zZXR1cC5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3Rh
dHVzPTAvU1VDQ0VTUwpbICAgMTMuNDUxNDI3XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZp
bGVzLXNldHVwLnNlcnZpY2UgY2hhbmdlZCBzdGFydCAtPiBleGl0ZWQKWyAgIDEzLjQ2MDgz
M10gc3lzdGVtZFsxXTogSm9iIHN5c3RlbWQtdG1wZmlsZXMtc2V0dXAuc2VydmljZS9zdGFy
dCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDEzLjQ3NzE3OF0gc3lzdGVtZFsxXTogU3Rh
cnRlZCBSZWNyZWF0ZSBWb2xhdGlsZSBGaWxlcyBhbmQgRGlyZWN0b3JpZXMuClsgICAxMy40
ODU4OTldIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFVwZGF0ZSBVVE1QIGFib3V0IFN5c3RlbSBS
ZWJvb3QvU2h1dGRvd24uLi4KWyAgIDEzLjUwMTQyNV0gc3lzdGVtZFsxXTogQWJvdXQgdG8g
ZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVwZGF0ZS11dG1wIHJlYm9vdApb
ICAgMTMuNTEwNzEzXSBzeXN0ZW1kWzFdOiBGb3JrZWQgL3Vzci9saWIvc3lzdGVtZC9zeXN0
ZW1kLXVwZGF0ZS11dG1wIGFzIDcwClsgICAxMy41MzE3MTldIHN5c3RlbWRbNzBdOiBFeGVj
dXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11cGRhdGUtdXRtcCByZWJvb3QKWyAg
IDEzLjU0MTUxM10gc3lzdGVtZFsxXTogc3lzdGVtZC11cGRhdGUtdXRtcC5zZXJ2aWNlIGNo
YW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTMuNTUyNjExXSBzeXN0ZW1kWzFdOiBBY2NlcHRl
ZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNTY0OTA3XSBzeXN0ZW1kWzFd
OiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNTc2NDExXSBz
eXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMu
NTg0ODIyXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVz
LgpbICAgMTMuNTk1MDMxXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBjb25uZWN0aW9uIG9uIHBy
aXZhdGUgYnVzLgpbICAgMTMuNjA0Mjg3XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVz
dDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2VkKCkgb24gL29yZy9m
cmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTMuNjI5OTk3XSBzeXN0ZW1kWzFdOiBB
Y2NlcHRlZCBjb25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTMuNjUyMzcyXSBzeXN0
ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFn
ZW50LlJlbGVhc2VkKCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAg
MTMuNzA0NTY0XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNr
dG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVz
L0xvY2FsClsgICAxMy43NTA4MDJdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24g
b24gcHJpdmF0ZSBidXMuClsgICAxMy43NTcwMjRdIHJhbmRvbTogbm9uYmxvY2tpbmcgcG9v
bCBpcyBpbml0aWFsaXplZApbICAgMTMuODAwNTc1XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMg
cmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2VkKCkgb24g
L29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTMuODg4MzczXSBzeXN0ZW1k
WzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlz
Y29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAxMy45ODA1
NTBdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJpdmF0ZSBidXMuClsg
ICAxNC4wMzM4MjddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRl
c2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5
c3RlbWQxL2FnZW50ClsgICAxNC4xMTc3NzNdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1
ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAvb3Jn
L2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE0LjE4NTQ5MV0gc3lzdGVtZFsxXTogZGV2
LXR0eVMwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTQuMjI1MTE0XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4MjUwLXR0eS10dHlTMC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE0LjI3MzI0NF0gc3lzdGVtZFsx
XTogQWNjZXB0ZWQgY29ubmVjdGlvbiBvbiBwcml2YXRlIGJ1cy4KWyAgIDE0LjMxNDUyNF0g
c3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1k
MS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQK
WyAgIDE0LjM2OTgwOF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVl
ZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Av
REJ1cy9Mb2NhbApbICAgMTQuNDIzMzUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5UzEuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNC40NTc5OTVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXBsYXRmb3JtLXNlcmlhbDgyNTAtdHR5LXR0eVMxLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTQuNDk5MjU5XSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBj
b25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTQuNTMxODc0XSBzeXN0ZW1kWzFdOiBH
b3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVh
c2VkKCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTQuNTg0Mzk1
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsg
ICAxNC42MzY0MTBdIHN5c3RlbWRbMV06IGRldi10dHlTMi5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDE0LjY2OTg1M10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxh
dGZvcm0tc2VyaWFsODI1MC10dHktdHR5UzIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxNC43MTU5NTNdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24g
cHJpdmF0ZSBidXMuClsgICAxNC43NDY1NjZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1
ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3Jn
L2ZyZWVkZXNrdG9wL3N5c3RlbWQxL2FnZW50ClsgICAxNC44MDM4OTVdIHN5c3RlbWRbMV06
IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25u
ZWN0ZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE0Ljg2NjM0MF0g
c3lzdGVtZFsxXTogZGV2LXR0eVMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTQuOTAwODgzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4
MjUwLXR0eS10dHlTMy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE0Ljk1
NDA3Ml0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5z
eXN0ZW1kMS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEv
YWdlbnQKWyAgIDE1LjAwOTc2OV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9y
Zy5mcmVlZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRl
c2t0b3AvREJ1cy9Mb2NhbApbICAgMTUuMDc1MjEzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5UzQu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS4xMTI4NDFdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXBsYXRmb3JtLXNlcmlhbDgyNTAtdHR5LXR0eVM0LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuMTYzMjI4XSBzeXN0ZW1kWzFdOiBHb3Qg
RC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2Vk
KCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTUuMjI2OTMyXSBz
eXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9j
YWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAx
NS4yODY3OTZdIHN5c3RlbWRbMV06IGRldi10dHlTNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDE1LjMyNDc2M10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxhdGZv
cm0tc2VyaWFsODI1MC10dHktdHR5UzUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS4zMzcwMTFdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJl
ZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9w
L3N5c3RlbWQxL2FnZW50ClsgICAxNS4zNDk1ODddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyBy
ZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAv
b3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE1LjM2MjA1MF0gc3lzdGVtZFsxXTog
ZGV2LXR0eVM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuMzY4MDUw
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1wbGF0Zm9ybS1zZXJpYWw4MjUwLXR0eS10dHlT
Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjM3ODUxNV0gc3lzdGVt
ZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5NYW5h
Z2VyLktpbGxVbml0KCkgb24gL29yZy9mcmVlZGVza3RvcC9zeXN0ZW1kMQpbICAgMTUuMzg5
NTQyXSBzeXN0ZW1kLWpvdXJuYWxkWzU0XTogUmVjZWl2ZWQgcmVxdWVzdCB0byBmbHVzaCBy
dW50aW1lIGpvdXJuYWwgZnJvbSBQSUQgMQpbICAgMTUuMzk5MzczXSBzeXN0ZW1kWzFdOiBH
b3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVj
dGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAxNS40MTE0ODVdIHN5
c3RlbWRbMV06IGRldi10dHlTNy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDE1LjQxNzQ3Nl0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtcGxhdGZvcm0tc2VyaWFsODI1
MC10dHktdHR5UzcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS40MjY2
ODddIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1
cy5Qcm9wZXJ0aWVzLkdldCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEKWyAgIDE1
LjQzODE0MV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3Rv
cC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cy9M
b2NhbApbICAgMTUuNDUxMjc5XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRwYXRoLXBs
YXRmb3JtXHgyZDFjMGYwMDAubW1jLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuNDYwNDU3XSBzeXN0ZW1kWzFdOiBkZXYtbW1jYmxrMC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjQ2NjU5MV0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMt
c29jLjEtMWMwZjAwMC5tbWMtbW1jX2hvc3QtbW1jMC1tbWMwOmIzNjgtYmxvY2stbW1jYmxr
MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjQ4MDk0Ml0gc3lzdGVt
ZFsxXTogZGV2LWRpc2stYnlceDJkdXVpZC1mMmM2ODg3YVx4MmQ0MTZiXHgyZDRiYzhceDJk
YmUzN1x4MmRiOWE3M2JkZTM4ZDkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsg
ICAxNS40OTIxNjddIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHBhdGgtcGxhdGZvcm1c
eDJkMWMwZjAwMC5tbWNceDJkcGFydDEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS41MDE5MzNdIHN5c3RlbWRbMV06IGRldi1tbWNibGswcDEuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41MDgyNjRdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXNvYy4xLTFjMGYwMDAubW1jLW1tY19ob3N0LW1tYzAtbW1jMDpiMzY4LWJsb2NrLW1t
Y2JsazAtbW1jYmxrMHAxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUu
NTIxMzE3XSBzeXN0ZW1kWzFdOiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLWV0aDAuZGV2
aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41MjkyMjldIHN5c3RlbWRbMV06
IEpvYiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLWV0aDAuZGV2aWNlL3N0YXJ0IGZpbmlz
aGVkLCByZXN1bHQ9ZG9uZQpbICAgMTUuNTQ0MDY3XSBzeXN0ZW1kWzFdOiBGb3VuZCBkZXZp
Y2UgL3N5cy9zdWJzeXN0ZW0vbmV0L2RldmljZXMvZXRoMC4KWyAgIDE1LjU1MDY5NF0gc3lz
dGVtZFsxXTogc3lzLWRldmljZXMtc29jLjEtMWM1MDAwMC5ldGhlcm5ldC1uZXQtZXRoMC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjU2MDc0MV0gc3lzdGVtZFsx
XTogZGV2LWh2YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41NjY2
NjFdIHN5c3RlbWRbMV06IEpvYiBkZXYtaHZjMC5kZXZpY2Uvc3RhcnQgZmluaXNoZWQsIHJl
c3VsdD1kb25lClsgICAxNS41Nzc4MTldIHN5c3RlbWRbMV06IEZvdW5kIGRldmljZSAvZGV2
L2h2YzAuClsgICAxNS41ODI0ODddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LWh2YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS41OTE2MTNd
IHN5c3RlbWRbMV06IGRldi1odmMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuNTk3NTAxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS1odmMx
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjA2Nzg4XSBzeXN0ZW1k
WzFdOiBzeXMtc3Vic3lzdGVtLW5ldC1kZXZpY2VzLXNpdDAuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNS42MTQ3MDldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtbmV0LXNpdDAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42
MjM3NTNdIHN5c3RlbWRbMV06IGRldi1odmMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTUuNjI5NzY5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS1odmMzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjM4ODEwXSBz
eXN0ZW1kWzFdOiBkZXYtaHZjMi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDE1LjY0NDcwM10gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtdmlydHVhbC10dHktaHZjMi5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjY1MzgzMl0gc3lzdGVtZFsx
XTogZGV2LWh2YzQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42NTk4
NTNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LWh2YzQuZGV2aWNlIGNo
YW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS42Njg4OTddIHN5c3RlbWRbMV06IGRldi1o
dmM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNjc0NzkyXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS1odmM2LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMTUuNjgzOTcxXSBzeXN0ZW1kWzFdOiBkZXYtaHZjNS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1LjY4OTk5NV0gc3lzdGVtZFsxXTog
c3lzLWRldmljZXMtdmlydHVhbC10dHktaHZjNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDE1LjY5OTAwOV0gc3lzdGVtZFsxXTogZGV2LWh2YzcuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43MDQ4OTddIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LWh2YzcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsg
ICAxNS43MTU4NjFdIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHBhdGgtcGxhdGZvcm1c
eDJkMWMxYzAwMC5laGNpMVx4MmR1c2JceDJkMDoxOjEuMFx4MmRzY3NpXHgyZDA6MDowOjAu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43MjgzMzhdIHN5c3RlbWRb
MV06IGRldi1kaXNrLWJ5XHgyZGlkLXVzYlx4MmRfVVNCX0RJU0tfMi4wXzA3QkMwNTAyQ0Y0
MDAwQ0FceDJkMDowLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuNzM4
OTU4XSBzeXN0ZW1kWzFdOiBkZXYtc2RhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dl
ZApbICAgMTUuNzQ0Nzc5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy1zb2MuMS0xYzFjMDAw
LmVoY2kxLXVzYjItMlx4MmQxLTJceDJkMToxLjAtaG9zdDAtdGFyZ2V0MDowOjAtMDowOjA6
MC1ibG9jay1zZGEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS43NjE3
OTRdIHN5c3RlbWRbMV06IGRldi1kaXNrLWJ5XHgyZHV1aWQtZGZiODdiZjdceDJkNTJjOFx4
MmQ0NzgwXHgyZDk1ZGVceDJkMmM2YzNjZGMxZGMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMTUuNzczMDM4XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRwYXRo
LXBsYXRmb3JtXHgyZDFjMWMwMDAuZWhjaTFceDJkdXNiXHgyZDA6MToxLjBceDJkc2NzaVx4
MmQwOjA6MDowXHgyZHBhcnQxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MTUuNzg2MTY1XSBzeXN0ZW1kWzFdOiBkZXYtZGlzay1ieVx4MmRpZC11c2JceDJkX1VTQl9E
SVNLXzIuMF8wN0JDMDUwMkNGNDAwMENBXHgyZDA6MFx4MmRwYXJ0MS5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE1Ljc5NzczMl0gc3lzdGVtZFsxXTogZGV2LXNkYTEu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44MDM2MzZdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXNvYy4xLTFjMWMwMDAuZWhjaTEtdXNiMi0yXHgyZDEtMlx4MmQx
OjEuMC1ob3N0MC10YXJnZXQwOjA6MC0wOjA6MDowLWJsb2NrLXNkYS1zZGExLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuODE4NTU5XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YTEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44MjQ2NzNdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWExLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuODMzNzIwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTAu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44Mzk4ODFdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWEwLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTUuODQ5MzIyXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44NTUzMDVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTUuODY0NjM0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTMuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS44NzA3NzldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWEzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTUuODc5ODY2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTIuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNS44ODU4NDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWEyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUu
ODk1MDM1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNS45MDExMTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWE3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTEwMjg3
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNS45MTYyNjddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWE2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTI1NTIyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YTUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
NS45MzE2MDldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE1LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTQwNzg4XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YWEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NDY3
NjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFhLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTU1OTgyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YTkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NjIwODJdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE5LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTUuOTcxMzI4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YTgu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45NzczMTJdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWE4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTUuOTg2NTYxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNS45OTI2NTFdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMDAxNzk3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWMuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wMDc3NzFdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWFjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMDE3MDcwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWIuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4wMjMxNjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWFiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MDMyMzYzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4wMzgzNDRdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWIwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDQ3NTYx
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YWYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4wNTM2ODNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWFmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDYyODIwXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YWUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4wNjg5MjRdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWFlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDc3OTQ2XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YjMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wODQw
MjZdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWIzLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMDkzMjA4XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YjIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4wOTkzMjNdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWIyLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTA4MzMzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjUu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xMTQ0MjJdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI1LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMTIzNTY2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjQuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xMjk2NjRdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMTM4ODc0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjEuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4xNDQ4NTJdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWIxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMTU0MTA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjguZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4xNjAyMjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWI4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MTY5MzU0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4xNzUzMzFdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWI2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTg0NTk2
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YjcuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4xOTA3NDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWI3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMTk5ODAyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YmIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4yMDU3NzhdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJiLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjE1MDIwXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YjkuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yMjEx
MjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWI5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjMwMjUyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YmEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yMzYyMjddIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJhLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMjQ1NDY5XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmUu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yNTE1NjRdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJlLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMjYwNzI2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmMuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yNjY3MDRdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWJjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuMjc1OTY0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmQuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4yODIwNjFdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWJkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuMjkxMjIzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzEuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi4yOTcxOTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
MzA2NDA3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YmYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi4zMTI0OTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWJmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzIxNjA4
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi4zMjc1ODJdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWMwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzM2ODc2XSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YzQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni4zNDI5NzBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM0LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzUyMTczXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5YzIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zNTgx
NTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWMyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzY3Mzg3XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5YzMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zNzM0ODBdIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWMzLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuMzgyNjA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Yzcu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi4zODg3MTNdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM3LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuMzk3ODA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzUuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40MDM5MDVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNDEzMDM4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzkuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40MTkxNDVdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWM5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNDI4MTE4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5YzYuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi40MzQxOTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NDQzNzU3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2IuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi40NDk5MzFdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWNiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDU5MTc3
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2EuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi40NjUxNThdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWNhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDc0NDg5XSBzeXN0
ZW1kWzFdOiBkZXYtdHR5YzguZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni40ODA2NDVdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWM4LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNDg5NzQzXSBzeXN0ZW1kWzFd
OiBkZXYtdHR5Y2UuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi40OTU3
MjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTA0OTI5XSBzeXN0ZW1kWzFdOiBkZXYt
dHR5Y2MuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41MTEwMDddIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNjLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTIwMTE1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2Qu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41MjYwOTVdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWNkLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuNTM1NDEwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDEuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41NDE0OTVdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNTUwNjM2XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDAuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi41NTY2MDldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWQwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNTY1OTgwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5Y2YuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi41NzIwNzJdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWNmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NTgxMjA3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi41ODcxODBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWQ1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNTk2NDY0
XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxNi42MDI1NjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0
eWQ0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjExNzEyXSBzeXN0
ZW1kWzFdOiBkZXYtdHR5ZDMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
Ni42MTc2ODldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQzLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjI3Njg5XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5ZDIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42MzM4
ODldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjQzNDkyXSBzeXN0ZW1kWzFdOiBkZXYt
dHR5ZDguZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42NDk2MTldIHN5
c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ4LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNjYwODUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDYu
ZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42NjY4OTVdIHN5c3RlbWRb
MV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ2LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMTYuNjc4MDAyXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDcuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi42ODQ1MDldIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWQ3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMTYuNjk0MTM1XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGIuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxNi43MDAyNjldIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eWRiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMTYuNzA5NTYwXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZDkuZGV2aWNlIGNoYW5nZWQgZGVh
ZCAtPiBwbHVnZ2VkClsgICAxNi43MTU3MDZdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZp
cnR1YWwtdHR5LXR0eWQ5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYu
NzI3ODExXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNi43NDYxMDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWRhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTYuNzU0NDY3
XSBzeXN0ZW1kWzFdOiBSZWNlaXZlZCBTSUdDSExEIGZyb20gUElEIDY4IChzeXN0ZW1jdGwp
LgpbICAgMTYuNzYwODkwXSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA2
OCAoc3lzdGVtY3RsKQpbICAgMTYuNzY2OTI3XSBzeXN0ZW1kWzFdOiBDaGlsZCA2OCBkaWVk
IChjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE2Ljc3MzUzN10gc3lzdGVt
ZFsxXTogQ2hpbGQgNjggYmVsb25ncyB0byBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2Vydmlj
ZQpbICAgMTYuNzgwODk2XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2Vy
dmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NF
U1MKWyAgIDE2Ljc5MTM0N10gc3lzdGVtZFsxXTogc3lzdGVtZC1qb3VybmFsLWZsdXNoLnNl
cnZpY2UgY2hhbmdlZCBzdGFydCAtPiBkZWFkClsgICAxNi44MDA2NDVdIHN5c3RlbWRbMV06
IEpvYiBzeXN0ZW1kLWpvdXJuYWwtZmx1c2guc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVz
dWx0PWRvbmUKWyAgIDE2LjgxNjE2MF0gc3lzdGVtZFsxXTogU3RhcnRlZCBUcmlnZ2VyIEZs
dXNoaW5nIG9mIEpvdXJuYWwgdG8gUGVyc2lzdGVudCBTdG9yYWdlLgpbICAgMTYuODI0MTQ3
XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA3MCAoc3lzdGVtZC11cGRh
dGUtKQpbICAgMTYuODMwOTI4XSBzeXN0ZW1kWzFdOiBDaGlsZCA3MCBkaWVkIChjb2RlPWV4
aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE2LjgzNzMxM10gc3lzdGVtZFsxXTogQ2hp
bGQgNzAgYmVsb25ncyB0byBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZpY2UKWyAgIDE2Ljg0
NDEzNl0gc3lzdGVtZFsxXTogc3lzdGVtZC11cGRhdGUtdXRtcC5zZXJ2aWNlOiBtYWluIHBy
b2Nlc3MgZXhpdGVkLCBjb2RlPWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUwpbICAgMTYuODU0
MjIyXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVwZGF0ZS11dG1wLnNlcnZpY2UgY2hhbmdlZCBz
dGFydCAtPiBleGl0ZWQKWyAgIDE2Ljg2MTM4N10gc3lzdGVtZFsxXTogSm9iIHN5c3RlbWQt
dXBkYXRlLXV0bXAuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE2
Ljg3NTgwMV0gc3lzdGVtZFsxXTogU3RhcnRlZCBVcGRhdGUgVVRNUCBhYm91dCBTeXN0ZW0g
UmVib290L1NodXRkb3duLgpbICAgMTYuODgyNzI3XSBzeXN0ZW1kWzFdOiBDbG9zZWQgam9i
cyBwcm9ncmVzcyB0aW1lcmZkLgpbICAgMTYuODg3OTY0XSBzeXN0ZW1kWzFdOiBTdGFydGlu
ZyBTeXN0ZW0gSW5pdGlhbGl6YXRpb24uClsgICAxNi44OTMyMzddIHN5c3RlbWRbMV06IHN5
c2luaXQudGFyZ2V0IGNoYW5nZWQgZGVhZCAtPiBhY3RpdmUKWyAgIDE2Ljg5OTAwMl0gc3lz
dGVtZFsxXTogSm9iIHN5c2luaXQudGFyZ2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgMTYuOTExMTI5XSBzeXN0ZW1kWzFdOiBSZWFjaGVkIHRhcmdldCBTeXN0ZW0gSW5p
dGlhbGl6YXRpb24uClsgICAxNi45MTY5MzRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIEQtQnVz
IFN5c3RlbSBNZXNzYWdlIEJ1cyBTb2NrZXQuClsgICAxNi45MjMyNTddIHN5c3RlbWRbMV06
IGRidXMuc29ja2V0IGNoYW5nZWQgZGVhZCAtPiBsaXN0ZW5pbmcKWyAgIDE2LjkyOTAzN10g
c3lzdGVtZFsxXTogSm9iIGRidXMuc29ja2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9u
ZQpbICAgMTYuOTQxNTYzXSBzeXN0ZW1kWzFdOiBMaXN0ZW5pbmcgb24gRC1CdXMgU3lzdGVt
IE1lc3NhZ2UgQnVzIFNvY2tldC4KWyAgIDE2Ljk0ODA1Nl0gc3lzdGVtZFsxXTogU3RhcnRp
bmcgU29ja2V0cy4KWyAgIDE2Ljk1MjE0MV0gc3lzdGVtZFsxXTogc29ja2V0cy50YXJnZXQg
Y2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgMTYuOTU3ODMxXSBzeXN0ZW1kWzFdOiBKb2Ig
c29ja2V0cy50YXJnZXQvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNi45Njg4
NDRdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFNvY2tldHMuClsgICAxNi45NzM0NDZd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIERhaWx5IENsZWFudXAgb2YgVGVtcG9yYXJ5IERpcmVj
dG9yaWVzLgpbICAgMTYuOTgwMjEyXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLWNs
ZWFuLnRpbWVyOiBNb25vdG9uaWMgdGltZXIgZWxhcHNlcyBpbiAxNG1pbiA0My4wMjg0MDFz
LgpbICAgMTYuOTg5MjEzXSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXRtcGZpbGVzLWNsZWFuLnRp
bWVyIGNoYW5nZWQgZGVhZCAtPiB3YWl0aW5nClsgICAxNi45OTYyMDhdIHN5c3RlbWRbMV06
IEpvYiBzeXN0ZW1kLXRtcGZpbGVzLWNsZWFuLnRpbWVyL3N0YXJ0IGZpbmlzaGVkLCByZXN1
bHQ9ZG9uZQpbICAgMTcuMDA0MDU1XSBzeXN0ZW1kWzFdOiBTdGFydGVkIERhaWx5IENsZWFu
dXAgb2YgVGVtcG9yYXJ5IERpcmVjdG9yaWVzLgpbICAgMTcuMDEwNzY0XSBzeXN0ZW1kWzFd
OiBTdGFydGluZyBUaW1lcnMuClsgICAxNy4wMTQ2NTVdIHN5c3RlbWRbMV06IHRpbWVycy50
YXJnZXQgY2hhbmdlZCBkZWFkIC0+IGFjdGl2ZQpbICAgMTcuMDIwNTE5XSBzeXN0ZW1kWzFd
OiBKb2IgdGltZXJzLnRhcmdldC9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE3
LjAzMTYxNV0gc3lzdGVtZFsxXTogUmVhY2hlZCB0YXJnZXQgVGltZXJzLgpbICAgMTcuMDQw
NDQ1XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBCYXNpYyBTeXN0ZW0uClsgICAxNy4wNDQ5MDVd
IHN5c3RlbWRbMV06IGJhc2ljLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAx
Ny4wNTA2NTBdIHN5c3RlbWRbMV06IEpvYiBiYXNpYy50YXJnZXQvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICAxNy4wNjE5MTFdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0
IEJhc2ljIFN5c3RlbS4KWyAgIDE3LjA5NTA1M10gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0
aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkucHViIGZhaWxlZCBmb3Igc3No
ZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjEwNTAwM10gc3lzdGVtZFsxXTogQ29uZGl0aW9u
UGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hk
Z2Vua2V5cy5zZXJ2aWNlLgpbICAgMTcuMTE0NDY5XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25Q
YXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleS5wdWIgZmFpbGVkIGZvciBz
c2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMTcuMTI0Mzc0XSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xMzQ4MjBdIHN5c3RlbWRbMV06IENvbmRpdGlv
blBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9lY2RzYV9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjE0NTI1N10gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tleSBmYWlsZWQg
Zm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xNTU2NDhdIHN5c3RlbWRbMV06IENv
bmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDE3LjE2NTI5Nl0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAxNy4xNzQ0MThdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IG9mIHNzaGRnZW5rZXlzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVk
LiBJZ25vcmluZy4KWyAgIDE3LjE4MzUwOF0gc3lzdGVtZFsxXTogSm9iIHNzaGRnZW5rZXlz
LnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNy4xOTA1ODJdIHN5
c3RlbWRbMV06IFN0YXJ0ZWQgU1NIIEtleSBHZW5lcmF0aW9uLgpbICAgMTcuMTk2Mjk3XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBPcGVuU1NIIERhZW1vbi4uLgpbICAgMTcuMjA1MzM1XSBz
eXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9zc2hkIC1EClsgICAxNy4y
MTI2MTRdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jpbi9zc2hkIGFzIDg1ClsgICAxNy4y
MjE3MzldIHN5c3RlbWRbODVdOiBFeGVjdXRpbmc6IC91c3IvYmluL3NzaGQgLUQKWyAgIDE3
LjIyNzk5NV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBydW5u
aW5nClsgICAxNy4yMzQ5NDVdIHN5c3RlbWRbMV06IEpvYiBzc2hkLnNlcnZpY2Uvc3RhcnQg
ZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxNy4yNDYyMzJdIHN5c3RlbWRbMV06IFN0YXJ0
ZWQgT3BlblNTSCBEYWVtb24uClsgICAxNy4yNTE5MDBdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IEVudHJvcHkgSGFydmVzdGluZyBEYWVtb24uLi4KWyAgIDE3LjI2NDY1MV0gc3lzdGVtZFsx
XTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9iaW4vaGF2ZWdlZCAtdyAxMDI0IC12IDEKWyAg
IDE3LjI3MjMyNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL2hhdmVnZWQgYXMgODYK
WyAgIDE3LjI4MjkxMl0gc3lzdGVtZFs4Nl06IEV4ZWN1dGluZzogL3Vzci9iaW4vaGF2ZWdl
ZCAtdyAxMDI0IC12IDEKWyAgIDE3LjI4OTU3NF0gc3lzdGVtZFsxXTogaGF2ZWdlZC5zZXJ2
aWNlIGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTcuMjk1NDI1XSBzeXN0ZW1kWzFdOiBT
dGFydGluZyBBdXRvbWF0aWMgd2lyZWQgbmV0d29yayBjb25uZWN0aW9uIHVzaW5nIG5ldGN0
bCBwcm9maWxlcy4uLgpbICAgMTcuMzEzODQyXSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVj
dXRlOiAvdXNyL2Jpbi9pZnBsdWdkIC1pIGV0aDAgLXIgL2V0Yy9pZnBsdWdkL25ldGN0bC5h
Y3Rpb24gLWJmSW5zClsgICAxNy4zMjQ0MjZdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jp
bi9pZnBsdWdkIGFzIDg3ClsgICAxNy4zMzM1MzhdIHN5c3RlbWRbODddOiBFeGVjdXRpbmc6
IC91c3IvYmluL2lmcGx1Z2QgLWkgZXRoMCAtciAvZXRjL2lmcGx1Z2QvbmV0Y3RsLmFjdGlv
biAtYmZJbnMKWyAgIDE3LjM0NDc0MF0gc3lzdGVtZFsxXTogbmV0Y3RsLWlmcGx1Z2RAZXRo
MC5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBydW5uaW5nClsgICAxNy4zNTE5NjddIHN5c3Rl
bWRbMV06IEpvYiBuZXRjdGwtaWZwbHVnZEBldGgwLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICAxNy4zNjkwNDddIHN5c3RlbWRbMV06IFN0YXJ0ZWQgQXV0b21h
dGljIHdpcmVkIG5ldHdvcmsgY29ubmVjdGlvbiB1c2luZyBuZXRjdGwgcHJvZmlsZXMuClsg
ICAxNy4zNzc1MDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIExvZ2luIFNlcnZpY2UuLi4KWyAg
IDE3LjM4OTM2Ml0gc3lzdGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lz
dGVtZC9zeXN0ZW1kLWxvZ2luZApbICAgMTcuMzk3NzkxXSBzeXN0ZW1kWzFdOiBGb3JrZWQg
L3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLWxvZ2luZCBhcyA4OApbICAgMTcuNDEzMDgzXSBz
eXN0ZW1kWzg4XTogRXhlY3V0aW5nOiAvdXNyL2xpYi9zeXN0ZW1kL3N5c3RlbWQtbG9naW5k
ClsgICAxNy40MzA2MzNdIHN5c3RlbWRbMV06IHN5c3RlbWQtbG9naW5kLnNlcnZpY2UgY2hh
bmdlZCBkZWFkIC0+IHN0YXJ0ClsgICAxNy40NDcwNzZdIGV0aDA6IGRldmljZSBNQUMgYWRk
cmVzcyA4YTo3MDozYzo0MTplNzozZgpbICAgMTcuNDYzMzk0XSAgTm8gTUFDIE1hbmFnZW1l
bnQgQ291bnRlcnMgYXZhaWxhYmxlClsgICAxNy40NjgwMjddIHN0bW1hY19vcGVuOiBmYWls
ZWQgUFRQIGluaXRpYWxpc2F0aW9uClsgICAxNy40NzMzNzVdIHN5c3RlbWRbMV06IFN0YXJ0
aW5nIEQtQnVzIFN5c3RlbSBNZXNzYWdlIEJ1cy4uLgpbICAgMTcuNTAxMjY0XSBzeXN0ZW1k
WzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9kYnVzLWRhZW1vbiAtLXN5c3RlbSAt
LWFkZHJlc3M9c3lzdGVtZDogLS1ub2ZvcmsgLS1ub3BpZGZpbGUgLS1zeXN0ZW1kLWFjdGl2
YXRpb24KWyAgIDE3LjUyOTMxNl0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvYmluL2RidXMt
ZGFlbW9uIGFzIDkxClsgICAxNy41MzU5NzNdIHN5c3RlbWRbMV06IGRidXMuc2VydmljZSBj
aGFuZ2VkIGRlYWQgLT4gcnVubmluZwpbICAgMTcuNTYxOTcyXSBzeXN0ZW1kWzkxXTogRXhl
Y3V0aW5nOiAvdXNyL2Jpbi9kYnVzLWRhZW1vbiAtLXN5c3RlbSAtLWFkZHJlc3M9c3lzdGVt
ZDogLS1ub2ZvcmsgLS1ub3BpZGZpbGUgLS1zeXN0ZW1kLWFjdGl2YXRpb24KWyAgIDE3LjU3
NjkwN10gc3lzdGVtZFsxXTogSm9iIGRidXMuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVz
dWx0PWRvbmUKWyAgIDE3LjYwOTI1OV0gc3lzdGVtZFsxXTogU3RhcnRlZCBELUJ1cyBTeXN0
ZW0gTWVzc2FnZSBCdXMuClsgICAxNy42MjUxMDJdIHN5c3RlbWRbMV06IGRidXMuc29ja2V0
IGNoYW5nZWQgbGlzdGVuaW5nIC0+IHJ1bm5pbmcKWyAgIDE3LjYzOTkxMV0gc3lzdGVtZFsx
XTogU3RhcnRpbmcgUGVybWl0IFVzZXIgU2Vzc2lvbnMuLi4KWyAgIDE3LjY2NDQ3MF0gc3lz
dGVtZFsxXTogQWJvdXQgdG8gZXhlY3V0ZTogL3Vzci9saWIvc3lzdGVtZC9zeXN0ZW1kLXVz
ZXItc2Vzc2lvbnMgc3RhcnQKWyAgIDE3LjY4NDcyNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91
c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11c2VyLXNlc3Npb25zIGFzIDkyClsgICAxNy43MDIz
MzNdIHN5c3RlbWRbOTJdOiBFeGVjdXRpbmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZC11
c2VyLXNlc3Npb25zIHN0YXJ0ClsgICAxNy43MTYyOTddIHN5c3RlbWRbMV06IHN5c3RlbWQt
dXNlci1zZXNzaW9ucy5zZXJ2aWNlIGNoYW5nZWQgZGVhZCAtPiBzdGFydApbICAgMTcuNzM1
Nzc1XSBzeXN0ZW1kWzFdOiBTZXQgdXAgam9icyBwcm9ncmVzcyB0aW1lcmZkLgpbICAgMTcu
NzU1OTg0XSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZGYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxNy43NzUzMjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWRmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTcuNzk5MjQx
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsg
ICAxNy44MTkyODJdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJpdmF0
ZSBidXMuClsgICAxNy44MjUzMjRdIHN5c3RlbWRbMV06IFJlY2VpdmVkIFNJR0NITEQgZnJv
bSBQSUQgODYgKGhhdmVnZWQpLgpbICAgMTcuODQ4ODM2XSBzeXN0ZW1kWzFdOiBHb3QgU0lH
Q0hMRCBmb3IgcHJvY2VzcyA4NiAoaGF2ZWdlZCkKWyAgIDE3Ljg1NDc3OF0gc3lzdGVtZFsx
XTogQ2hpbGQgODYgZGllZCAoY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MpClsgICAx
Ny44Nzg5MjBdIHN5c3RlbWRbMV06IENoaWxkIDg2IGJlbG9uZ3MgdG8gaGF2ZWdlZC5zZXJ2
aWNlClsgICAxNy44ODkxODJdIHN5c3RlbWRbMV06IGhhdmVnZWQuc2VydmljZTogY29udHJv
bCBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQgc3RhdHVzPTAKWyAgIDE3Ljg5ODE0OF0g
c3lzdGVtZFsxXTogaGF2ZWdlZC5zZXJ2aWNlIGdvdCBmaW5hbCBTSUdDSExEIGZvciBzdGF0
ZSBzdGFydApbICAgMTcuOTE5Mzg2XSBzeXN0ZW1kWzFdOiBNYWluIFBJRCBsb2FkZWQ6IDkw
ClsgICAxNy45MjQxNDRdIHN5c3RlbWRbMV06IGhhdmVnZWQuc2VydmljZSBjaGFuZ2VkIHN0
YXJ0IC0+IHJ1bm5pbmcKWyAgIDE3LjkzNTExMV0gc3lzdGVtZFsxXTogSm9iIGhhdmVnZWQu
c2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE3Ljk1OTE3N10gc3lz
dGVtZFsxXTogU3RhcnRlZCBFbnRyb3B5IEhhcnZlc3RpbmcgRGFlbW9uLgpbICAgMTcuOTY1
MTkxXSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyA5MiAoc3lzdGVtZC11
c2VyLXNlKQpbICAgMTcuOTg3NDU0XSBzeXN0ZW1kWzFdOiBDaGlsZCA5MiBkaWVkIChjb2Rl
PWV4aXRlZCwgc3RhdHVzPTAvU1VDQ0VTUykKWyAgIDE4LjAwMzU3M10gc3lzdGVtZFsxXTog
Q2hpbGQgOTIgYmVsb25ncyB0byBzeXN0ZW1kLXVzZXItc2Vzc2lvbnMuc2VydmljZQpbICAg
MTguMDIzNDg1XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLXVzZXItc2Vzc2lvbnMuc2VydmljZTog
bWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0wL1NVQ0NFU1MKWyAg
IDE4LjA0ODQyN10gc3lzdGVtZFsxXTogc3lzdGVtZC11c2VyLXNlc3Npb25zLnNlcnZpY2Ug
Y2hhbmdlZCBzdGFydCAtPiBleGl0ZWQKWyAgIDE4LjA1ODc4Nl0gc3lzdGVtZFsxXTogSm9i
IHN5c3RlbWQtdXNlci1zZXNzaW9ucy5zZXJ2aWNlL3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9
ZG9uZQpbICAgMTguMDg5Mjc3XSBzeXN0ZW1kWzFdOiBTdGFydGVkIFBlcm1pdCBVc2VyIFNl
c3Npb25zLgpbICAgMTguMDk0NzI1XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3Rz
PS9kZXYvdHR5MCBzdWNjZWVkZWQgZm9yIGdldHR5QHR0eTEuc2VydmljZS4KWyAgIDE4LjEx
ODQ0OV0gc3lzdGVtZFsxXTogU3RhcnRpbmcgR2V0dHkgb24gdHR5MS4uLgpbICAgMTguMTMy
Njg2XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvc2Jpbi9hZ2V0dHkgLS1ub2Ns
ZWFyIHR0eTEKWyAgIDE4LjE1NDAyMF0gc3lzdGVtZFsxXTogRm9ya2VkIC9zYmluL2FnZXR0
eSBhcyA5NApbICAgMTguMTY3MjgxXSBzeXN0ZW1kWzFdOiBnZXR0eUB0dHkxLnNlcnZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDE4LjE4OTI0M10gc3lzdGVtZFsxXTogSm9i
IGdldHR5QHR0eTEuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDE4
LjIwNTk0OV0gc3lzdGVtZFsxXTogU3RhcnRlZCBHZXR0eSBvbiB0dHkxLgpbICAgMTguMjE5
MTYxXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBTZXJpYWwgR2V0dHkgb24gaHZjMC4uLgpbICAg
MTguMjM5ODA4XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvc2Jpbi9hZ2V0dHkg
LS1rZWVwLWJhdWQgaHZjMCAxMTUyMDAsMzg0MDAsOTYwMApbICAgMTguMjU5MDAxXSBzeXN0
ZW1kWzFdOiBGb3JrZWQgL3NiaW4vYWdldHR5IGFzIDk1ClsgICAxOC4yNjUwMDZdIHN5c3Rl
bWRbMV06IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5p
bmcKWyAgIDE4LjI4OTg1NF0gc3lzdGVtZFsxXTogSm9iIHNlcmlhbC1nZXR0eUBodmMwLnNl
cnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAxOC4zMTU0NjRdIHN5c3Rl
bWRbMV06IFN0YXJ0ZWQgU2VyaWFsIEdldHR5IG9uIGh2YzAuClsgICAxOC4zMjgxNDZdIHN5
c3RlbWRbMV06IFN0YXJ0aW5nIExvZ2luIFByb21wdHMuClsgICAxOC4zNTk5NTFdIHN5c3Rl
bWRbMV06IGdldHR5LnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZlClsgICAxOC4zNzkw
MTddIHN5c3RlbWRbMV06IEpvYiBnZXR0eS50YXJnZXQvc3RhcnQgZmluaXNoZWQsIHJlc3Vs
dD1kb25lClsgICAxOC4zOTI5NTZdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IExvZ2lu
IFByb21wdHMuClsgICAxOC4zOTgwNjddIHN5c3RlbWRbMV06IFNldCB1cCBpZGxlX3BpcGUg
d2F0Y2guClsgICAxOC40MjM5MDVdIHN5c3RlbWRbMV06IGRldi10dHlkZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjQzNjgxMV0gc3lzdGVtZFsxXTogc3lzLWRl
dmljZXMtdmlydHVhbC10dHktdHR5ZGUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxOC40NTkzNTFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24gcHJp
dmF0ZSBidXMuClsgICAxOC40NzAxODVdIHN5c3RlbWRbMV06IGRldi10dHlkZC5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjQ3NjIzMF0gc3lzdGVtZFsxXTogc3lz
LWRldmljZXMtdmlydHVhbC10dHktdHR5ZGQuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxOC41MDY5MzFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5lY3Rpb24gb24g
cHJpdmF0ZSBidXMuClsgICAxOC41MjgxMzZdIHN5c3RlbWRbMV06IGRldi10dHlkYy5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjU1MjExM10gc3lzdGVtZFsxXTog
c3lzLWRldmljZXMtdmlydHVhbC10dHktdHR5ZGMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxOC41ODIzMTldIHN5c3RlbWRbMV06IGRldi10dHllMi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjU4ODM4NF0gc3lzdGVtZFsxXTogc3lzLWRl
dmljZXMtdmlydHVhbC10dHktdHR5ZTIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2Vk
ClsgICAxOC42MzY1MTldIHN5c3RlbWRbMV06IGRldi10dHllMC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjY2NzE2MF0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMt
dmlydHVhbC10dHktdHR5ZTAuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAx
OC42ODI4NThdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0
b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3Rl
bWQxL2FnZW50ClsgICAxOC43MTU1OThdIHN5c3RlbWRbMV06IGRldi10dHllMS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjcyOTY5MF0gc3lzdGVtZFsxXTogc3lz
LWRldmljZXMtdmlydHVhbC10dHktdHR5ZTEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAxOC43MzkzNDVdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcu
ZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQoKSBvbiAvb3JnL2ZyZWVkZXNr
dG9wL3N5c3RlbWQxL2FnZW50ClsgICAxOC43NjU2NzRdIHN5c3RlbWRbMV06IEdvdCBELUJ1
cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBv
biAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE4Ljc5MDYxOF0gc3lzdGVtZFsx
XTogZGV2LXR0eWU1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTguODA2
NjkyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllNS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjgyOTEwNV0gc3lzdGVtZFsxXTogR290
IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5BZ2VudC5SZWxlYXNl
ZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQKWyAgIDE4Ljg0ODQ3MF0g
c3lzdGVtZFsxXTogc3lzdGVtZC11c2VyLXNlc3Npb25zLnNlcnZpY2U6IGNncm91cCBpcyBl
bXB0eQpbICAgMTguODY5MzI2XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3Jn
LmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVz
a3RvcC9EQnVzL0xvY2FsClsgICAxOC44OTUwMjZdIHN5c3RlbWRbMV06IGRldi10dHllNC5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDE4LjkwODgzMV0gc3lzdGVtZFsx
XTogc3lzLWRldmljZXMtdmlydHVhbC10dHktdHR5ZTQuZGV2aWNlIGNoYW5nZWQgZGVhZCAt
PiBwbHVnZ2VkClsgICAxOC45NDM4NTFdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0
OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5Mb2NhbC5EaXNjb25uZWN0ZWQoKSBvbiAvb3JnL2Zy
ZWVkZXNrdG9wL0RCdXMvTG9jYWwKWyAgIDE4Ljk3MTk4OF0gc3lzdGVtZFsxXTogZGV2LXR0
eWUzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTguOTc4MDMwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllMy5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDE5LjAwOTMyMF0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgbmFt
ZSA6MS4xIGluIHJlcGx5IHRvIEhlbGxvClsgICAxOS4wMTc4OTFdIHN5c3RlbWRbMV06IFN1
Y2Nlc3NmdWxseSBjb25uZWN0ZWQgdG8gc3lzdGVtIEQtQnVzIGJ1cyBmOTEwNzJiYmZlMjNm
YTJmMTMzOTQ5NmUwMDAwMDAxMiBhcyA6MS4xClsgICAxOS4wNTU0NDJdIHN5c3RlbWRbMV06
IFN1Y2Nlc3NmdWxseSBpbml0aWFsaXplZCBBUEkgb24gdGhlIHN5c3RlbSBidXMKWyAgIDE5
LjA2Mjc0MV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3Rv
cC5EQnVzLk5hbWVBY3F1aXJlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpbICAgMTku
MDkwMTAzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZTYuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBw
bHVnZ2VkClsgICAxOS4wOTYyMTldIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwt
dHR5LXR0eWU2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTkuMTI5NDEz
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpbICAgMTkuMTQ4
OTY3XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRC
dXMuTmFtZUFjcXVpcmVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzClsgICAxOS4xNzkw
MDZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lz
dGVtZDEuTWFuYWdlci5TdWJzY3JpYmUoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQx
ClsgICAxOS4yMTAxMTFdIHN5c3RlbWRbMV06IFN1Y2Nlc3NmdWxseSBhY3F1aXJlZCBuYW1l
LgpbICAgMTkuMjExNzUxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5ZTkuZGV2aWNlIGNoYW5nZWQg
ZGVhZCAtPiBwbHVnZ2VkClsgICAxOS4yMTE3OTddIHN5c3RlbWRbMV06IHN5cy1kZXZpY2Vz
LXZpcnR1YWwtdHR5LXR0eWU5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MTkuMjQ2NTAyXSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNr
dG9wLkRCdXMuTmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwpb
ICAgMTkuMjk5MzE0XSBzeXN0ZW1kWzFdOiBzeXN0ZW1kLWxvZ2luZC5zZXJ2aWNlJ3MgRC1C
dXMgbmFtZSBvcmcuZnJlZWRlc2t0b3AubG9naW4xIG5vdyByZWdpc3RlcmVkIGJ5IDoxLjAK
WyAgIDE5LjMzOTA5NV0gc3lzdGVtZFsxXTogc3lzdGVtZC1sb2dpbmQuc2VydmljZSBjaGFu
Z2VkIHN0YXJ0IC0+IHJ1bm5pbmcKWyAgIDE5LjM1ODg0NF0gc3lzdGVtZFsxXTogSm9iIHN5
c3RlbWQtbG9naW5kLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAx
OS40MDE0NTNdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgTG9naW4gU2VydmljZS4KWyAgIDE5LjQy
MjE1MF0gc3lzdGVtZFsxXTogQ2xvc2VkIGpvYnMgcHJvZ3Jlc3MgdGltZXJmZC4KWyAgIDE5
LjQzOTE1NF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgTXVsdGktVXNlciBTeXN0ZW0uClsgICAx
OS40NTg4MjBdIHN5c3RlbWRbMV06IG11bHRpLXVzZXIudGFyZ2V0IGNoYW5nZWQgZGVhZCAt
PiBhY3RpdmUKWyAgIDE5LjQ3ODI3Ml0gc3lzdGVtZFsxXTogSm9iIG11bHRpLXVzZXIudGFy
Z2V0L3N0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTkuNTE1NDQ3XSBzeXN0ZW1k
WzFdOiBSZWFjaGVkIHRhcmdldCBNdWx0aS1Vc2VyIFN5c3RlbS4KWyAgIDE5LjUzMjU3MF0g
c3lzdGVtZFsxXTogU3RhcnRpbmcgR3JhcGhpY2FsIEludGVyZmFjZS4KWyAgIDE5LjU0NDI1
MF0gc3lzdGVtZFsxXTogZ3JhcGhpY2FsLnRhcmdldCBjaGFuZ2VkIGRlYWQgLT4gYWN0aXZl
ClsgICAxOS41NTg5MjBdIHN5c3RlbWRbMV06IEpvYiBncmFwaGljYWwudGFyZ2V0L3N0YXJ0
IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMTkuNTg1MzM1XSBzeXN0ZW1kWzFdOiBSZWFj
aGVkIHRhcmdldCBHcmFwaGljYWwgSW50ZXJmYWNlLgpbICAgMTkuNTk5OTU1XSBzeXN0ZW1k
WzFdOiBDbG9zZWQgaWRsZV9waXBlIHdhdGNoLgpbICAgMTkuNjEyODQyXSBzeXN0ZW1kWzk0
XTogRXhlY3V0aW5nOiAvc2Jpbi9hZ2V0dHkgLS1ub2NsZWFyIHR0eTEKWyAgIDE5LjYyMzM2
M10gc3lzdGVtZFs5NV06IEV4ZWN1dGluZzogL3NiaW4vYWdldHR5IC0ta2VlcC1iYXVkIGh2
YzAgMTE1MjAwLDM4NDAwLDk2MDAKWyAgIDE5LjYzOTI2OV0gc3lzdGVtZFsxXTogU3RhcnR1
cCBmaW5pc2hlZCBpbiA2Ljg2MHMgKGtlcm5lbCkgKyAxMi43NjNzICh1c2Vyc3BhY2UpID0g
MTkuNjI0cy4KWyAgIDE5LjY2NTkzNF0gc3lzdGVtZFsxXTogZGV2LXR0eWU3LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTkuNjc4ODczXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllNy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDE5LjcwMDA2OF0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJ
RCA4NSAoc3NoZCkuClsgICAxOS43MTkyNjNdIHN5c3RlbWRbMV06IEdvdCBTSUdDSExEIGZv
ciBwcm9jZXNzIDg1IChzc2hkKQpbICAgMTkuNzI3MDYxXSBzeXN0ZW1kWzFdOiBDaGlsZCA4
NSBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRSkKWyAgIDE5Ljc0MzczOV0g
c3lzdGVtZFsxXTogQ2hpbGQgODUgYmVsb25ncyB0byBzc2hkLnNlcnZpY2UKWyAgIDE5Ljc1
ODkzMF0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlOiBtYWluIHByb2Nlc3MgZXhpdGVkLCBj
b2RlPWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRQpbICAgMTkuNzc5NTE3XSBzeXN0ZW1kWzFd
OiBzc2hkLnNlcnZpY2UgY2hhbmdlZCBydW5uaW5nIC0+IGZhaWxlZApbICAgMTkuODE5NzAw
XSBzeXN0ZW1kWzFdOiBVbml0IHNzaGQuc2VydmljZSBlbnRlcmVkIGZhaWxlZCBzdGF0ZS4K
WyAgIDE5LjgyNTcwOV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgZmFpbGVk
IC0+IGF1dG8tcmVzdGFydApbICAgMTkuODU5ODgxXSBzeXN0ZW1kWzFdOiBBY2NlcHRlZCBj
b25uZWN0aW9uIG9uIHByaXZhdGUgYnVzLgpbICAgMTkuODgxOTE1XSBzeXN0ZW1kWzFdOiBk
ZXYtdHR5ZWMuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxOS45MDIzMjZd
IHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWVjLmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMTkuOTE0Mzk4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5
ZWIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAxOS45Mjg4OTZdIHN5c3Rl
bWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eWViLmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMTkuOTUwNTk1XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVx
dWVzdDogb3JnLmZyZWVkZXNrdG9wLnN5c3RlbWQxLkFnZW50LlJlbGVhc2VkKCkgb24gL29y
Zy9mcmVlZGVza3RvcC9zeXN0ZW1kMS9hZ2VudApbICAgMTkuOTk5MzQzXSBzeXN0ZW1kWzFd
OiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29u
bmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9EQnVzL0xvY2FsClsgICAyMC4wMzU2OTJd
IHN5c3RlbWRbMV06IGRldi10dHllYS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDIwLjA2NTI0OV0gc3lzdGVtZFsxXTogc3lzLWRldmljZXMtdmlydHVhbC10dHktdHR5
ZWEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAyMC4wODQxNDVdIHN5c3Rl
bWRbMV06IHNzaGQuc2VydmljZSBob2xkb2ZmIHRpbWUgb3Zlciwgc2NoZWR1bGluZyByZXN0
YXJ0LgpbICAgMjAuMDk3MTI1XSBzeXN0ZW1kWzFdOiBUcnlpbmcgdG8gZW5xdWV1ZSBqb2Ig
c3NoZC5zZXJ2aWNlL3Jlc3RhcnQvZmFpbApbICAgMjAuMTI5NDYwXSBzeXN0ZW1kWzFdOiBJ
bnN0YWxsZWQgbmV3IGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCBhcyA3MApbICAgMjAuMTM1
ODAwXSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3IGpvYiBzc2hkZ2Vua2V5cy5zZXJ2aWNl
L3N0YXJ0IGFzIDExOApbICAgMjAuMTYyMjYyXSBzeXN0ZW1kWzFdOiBFbnF1ZXVlZCBqb2Ig
c3NoZC5zZXJ2aWNlL3Jlc3RhcnQgYXMgNzAKWyAgIDIwLjE4Njk2MF0gc3lzdGVtZFsxXTog
c3NoZC5zZXJ2aWNlIHNjaGVkdWxlZCByZXN0YXJ0IGpvYi4KWyAgIDIwLjE5MjczMV0gc3lz
dGVtZFsxXTogU3RvcHBpbmcgT3BlblNTSCBEYWVtb24uLi4KWyAgIDIwLjE5NzU1M10gc3lz
dGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgYXV0by1yZXN0YXJ0IC0+IGRlYWQKWyAg
IDIwLjIyODg0MV0gc3lzdGVtZFsxXTogSm9iIHNzaGQuc2VydmljZS9yZXN0YXJ0IGZpbmlz
aGVkLCByZXN1bHQ9ZG9uZQpbICAgMjAuMjM1MzkyXSBzeXN0ZW1kWzFdOiBDb252ZXJ0aW5n
IGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCAtPiBzc2hkLnNlcnZpY2Uvc3RhcnQKWyAgIDIw
LjI2NDUwOF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3Nz
aF9ob3N0X3JzYV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAg
IDIwLjI5MjM5OV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3No
L3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAg
MjAuMzIyODk4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gv
c3NoX2hvc3RfZHNhX2tleS5wdWIgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpb
ICAgMjAuMzQ1MzEwXSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9z
c2gvc3NoX2hvc3RfZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsg
ICAyMC4zNTg4MThdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3Nz
aC9zc2hfaG9zdF9lY2RzYV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2Vydmlj
ZS4KWyAgIDIwLjM4ODk3NF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9l
dGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZp
Y2UuClsgICAyMC40Mjg3NzZdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEv
ZXRjL3NzaC9zc2hfaG9zdF9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2Vydmlj
ZS4KWyAgIDIwLjQzODE4MF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9l
dGMvc3NoL3NzaF9ob3N0X2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsg
ICAyMC40Nzg4NDFdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIHNzaGRnZW5rZXlzLnNlcnZp
Y2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDIwLjUw
NzA2Nl0gc3lzdGVtZFsxXTogSm9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgZmluaXNo
ZWQsIHJlc3VsdD1kb25lClsgICAyMC41MjMxODhdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU1NI
IEtleSBHZW5lcmF0aW9uLgpbICAgMjAuNTM0ODAwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBP
cGVuU1NIIERhZW1vbi4uLgpbICAgMjAuNTQ3MTM5XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBl
eGVjdXRlOiAvdXNyL2Jpbi9zc2hkIC1EClsgICAyMC41NjM3MDBdIHN5c3RlbWRbMV06IEZv
cmtlZCAvdXNyL2Jpbi9zc2hkIGFzIDExNQpbICAgMjAuNTcyODA1XSBzeXN0ZW1kWzExNV06
IEV4ZWN1dGluZzogL3Vzci9iaW4vc3NoZCAtRApbICAgMjAuNTgyMjI0XSBzeXN0ZW1kWzFd
OiBzc2hkLnNlcnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDIwLjU5Njk1MV0g
c3lzdGVtZFsxXTogSm9iIHNzaGQuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRv
bmUKWyAgIDIwLjYxODkwMV0gc3lzdGVtZFsxXTogU3RhcnRlZCBPcGVuU1NIIERhZW1vbi4K
WyAgIDIwLjY0NjE0Ml0gc3lzdGVtZFsxXTogZGV2LXR0eWU4LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjAuNjcwMzU1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHllOC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIw
LjY5MDc0NF0gc3lzdGVtZFsxXTogZGV2LXR0eWVmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjAuNzA1MDIzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHllZi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIwLjczMTQx
OF0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJRCAxMTUgKHNzaGQpLgpb
ICAgMjAuNzQ5MDU3XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyAxMTUg
KHNzaGQpClsgICAyMC43NjY4OTRdIHN5c3RlbWRbMV06IENoaWxkIDExNSBkaWVkIChjb2Rl
PWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRSkKWyAgIDIwLjc3NTk1N10gc3lzdGVtZFsxXTog
Q2hpbGQgMTE1IGJlbG9uZ3MgdG8gc3NoZC5zZXJ2aWNlClsgICAyMC43ODc5ODddIHN5c3Rl
bWRbMV06IHNzaGQuc2VydmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQs
IHN0YXR1cz0xL0ZBSUxVUkUKWyAgIDIwLjgwNzM1N10gc3lzdGVtZFsxXTogc3NoZC5zZXJ2
aWNlIGNoYW5nZWQgcnVubmluZyAtPiBmYWlsZWQKWyAgIDIwLjgxNjMxNl0gc3lzdGVtZFsx
XTogVW5pdCBzc2hkLnNlcnZpY2UgZW50ZXJlZCBmYWlsZWQgc3RhdGUuClsgICAyMC44Mjg3
NzVdIHN5c3RlbWRbMV06IHNzaGQuc2VydmljZSBjaGFuZ2VkIGZhaWxlZCAtPiBhdXRvLXJl
c3RhcnQKWyAgIDIwLjg0NTk0Nl0gc3lzdGVtZFsxXTogQWNjZXB0ZWQgY29ubmVjdGlvbiBv
biBwcml2YXRlIGJ1cy4KWyAgIDIwLjg1OTc2M10gc3lzdGVtZFsxXTogZGV2LXR0eWVlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjAuODcyODk3XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIwLjg5MjI4MF0gc3lzdGVtZFsxXTogZGV2LXR0eWVkLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjAuOTA3MzU3XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHllZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIwLjkzMDI5NF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5m
cmVlZGVza3RvcC5zeXN0ZW1kMS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0
b3Avc3lzdGVtZDEvYWdlbnQKWyAgIDIwLjk1NjcyMV0gc3lzdGVtZFsxXTogR290IEQtQnVz
IHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9u
IC9vcmcvZnJlZWRlc2t0b3AvREJ1cy9Mb2NhbApbICAgMjAuOTk3Njc5XSBzeXN0ZW1kWzFd
OiBkZXYtdHR5cDIuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAyMS4wMDU1
MDNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eXAyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjEuMDI0NTI0XSBzeXN0ZW1kWzFdOiBzc2hk
LnNlcnZpY2UgaG9sZG9mZiB0aW1lIG92ZXIsIHNjaGVkdWxpbmcgcmVzdGFydC4KWyAgIDIx
LjAzODM2NF0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHNzaGQuc2Vydmlj
ZS9yZXN0YXJ0L2ZhaWwKWyAgIDIxLjA1NjgwNV0gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5l
dyBqb2Igc3NoZC5zZXJ2aWNlL3Jlc3RhcnQgYXMgMTE5ClsgICAyMS4wNjgyNjBdIHN5c3Rl
bWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgYXMg
MTY3ClsgICAyMS4wODE4NzFdIHN5c3RlbWRbMV06IEVucXVldWVkIGpvYiBzc2hkLnNlcnZp
Y2UvcmVzdGFydCBhcyAxMTkKWyAgIDIxLjA5ODI0OV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2
aWNlIHNjaGVkdWxlZCByZXN0YXJ0IGpvYi4KWyAgIDIxLjEwNjYyNl0gc3lzdGVtZFsxXTog
U3RvcHBpbmcgT3BlblNTSCBEYWVtb24uLi4KWyAgIDIxLjExNzgwNV0gc3lzdGVtZFsxXTog
c3NoZC5zZXJ2aWNlIGNoYW5nZWQgYXV0by1yZXN0YXJ0IC0+IGRlYWQKWyAgIDIxLjEyOTAx
NF0gc3lzdGVtZFsxXTogSm9iIHNzaGQuc2VydmljZS9yZXN0YXJ0IGZpbmlzaGVkLCByZXN1
bHQ9ZG9uZQpbICAgMjEuMTQzOTkyXSBzeXN0ZW1kWzFdOiBDb252ZXJ0aW5nIGpvYiBzc2hk
LnNlcnZpY2UvcmVzdGFydCAtPiBzc2hkLnNlcnZpY2Uvc3RhcnQKWyAgIDIxLjE2MTcyMF0g
c3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3Jz
YV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIxLjE4MzU1
NF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0
X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMjEuMjA4ODA4
XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3Rf
ZHNhX2tleS5wdWIgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMjEuMjM4
OTA4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hv
c3RfZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMS4yNTYy
NjldIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9z
dF9lY2RzYV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIx
LjI3NzQ3OF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3Nz
aF9ob3N0X2VjZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAy
MS4yOTc1NzRdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9z
c2hfaG9zdF9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIx
LjMwNzE0Ml0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3Nz
aF9ob3N0X2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMS4zMjg3
NjVdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIHNzaGRnZW5rZXlzLnNlcnZpY2UgcmVxdWVz
dGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDIxLjM0NDQ1OF0gc3lz
dGVtZFsxXTogSm9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3Vs
dD1kb25lClsgICAyMS4zNjQ4MDJdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU1NIIEtleSBHZW5l
cmF0aW9uLgpbICAgMjEuMzcyMTMyXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBPcGVuU1NIIERh
ZW1vbi4uLgpbICAgMjEuMzg2MzgwXSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAv
dXNyL2Jpbi9zc2hkIC1EClsgICAyMS4zOTk0OTBdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNy
L2Jpbi9zc2hkIGFzIDEyMwpbICAgMjEuNDE2MDU3XSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDIxLjQyNTA0MF0gc3lzdGVtZFsxMjNd
OiBFeGVjdXRpbmc6IC91c3IvYmluL3NzaGQgLUQKWyAgIDIxLjQ0ODcyOV0gc3lzdGVtZFsx
XTogSm9iIHNzaGQuc2VydmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDIx
LjQ1NTY4M10gc3lzdGVtZFsxXTogU3RhcnRlZCBPcGVuU1NIIERhZW1vbi4KWyAgIDIxLjQ3
MTYxMF0gc3lzdGVtZFsxXTogZGV2LXR0eXAxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjEuNDc3NjkyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlwMS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIxLjUyMDMyMl0g
c3lzdGVtZFsxXTogZGV2LXR0eXAwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjEuNTM4OTU5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlw
MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIxLjU1OTYxM10gc3lzdGVt
ZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJRCAxMjMgKHNzaGQpLgpbICAgMjEuNTY5
MDgwXSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyAxMjMgKHNzaGQpClsg
ICAyMS41ODY4MzFdIHN5c3RlbWRbMV06IENoaWxkIDEyMyBkaWVkIChjb2RlPWV4aXRlZCwg
c3RhdHVzPTEvRkFJTFVSRSkKWyAgIDIxLjU5OTU2MF0gc3lzdGVtZFsxXTogQ2hpbGQgMTIz
IGJlbG9uZ3MgdG8gc3NoZC5zZXJ2aWNlClsgICAyMS42MTM1ODRdIHN5c3RlbWRbMV06IHNz
aGQuc2VydmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0x
L0ZBSUxVUkUKWyAgIDIxLjYyOTE4OF0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5n
ZWQgcnVubmluZyAtPiBmYWlsZWQKWyAgIDIxLjY0MjQ5N10gc3lzdGVtZFsxXTogVW5pdCBz
c2hkLnNlcnZpY2UgZW50ZXJlZCBmYWlsZWQgc3RhdGUuClsgICAyMS42NjI4MjZdIHN5c3Rl
bWRbMV06IHNzaGQuc2VydmljZSBjaGFuZ2VkIGZhaWxlZCAtPiBhdXRvLXJlc3RhcnQKWyAg
IDIxLjY4MTYxMl0gc3lzdGVtZFsxXTogZGV2LXR0eXA1LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjEuNjk4NzEzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlwNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIxLjcx
NDUxNV0gc3lzdGVtZFsxXTogQWNjZXB0ZWQgY29ubmVjdGlvbiBvbiBwcml2YXRlIGJ1cy4K
WyAgIDIxLjcyODg2OF0gc3lzdGVtZFsxXTogZGV2LXR0eXA0LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjEuNzQ3MjUzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHlwNC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIx
Ljc2MzgxN10gc3lzdGVtZFsxXTogZGV2LXR0eXAzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjEuNzc2OTc3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHlwMy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIxLjc5NjUy
OF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0
ZW1kMS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdl
bnQKWyAgIDIxLjgyNjA4MV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5m
cmVlZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0
b3AvREJ1cy9Mb2NhbApbICAgMjEuODQ0MjMxXSBzeXN0ZW1kWzFdOiBkZXYtdHR5cDYuZGV2
aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAyMS44NTg3MDddIHN5c3RlbWRbMV06
IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eXA2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjEuODc3NzgxXSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZpY2UgaG9sZG9m
ZiB0aW1lIG92ZXIsIHNjaGVkdWxpbmcgcmVzdGFydC4KWyAgIDIxLjkwMjIzM10gc3lzdGVt
ZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHNzaGQuc2VydmljZS9yZXN0YXJ0L2ZhaWwK
WyAgIDIxLjkxNjgyM10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3NoZC5zZXJ2
aWNlL3Jlc3RhcnQgYXMgMTY4ClsgICAyMS45MzE5OTddIHN5c3RlbWRbMV06IEluc3RhbGxl
ZCBuZXcgam9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgYXMgMjE2ClsgICAyMS45NDc1
NjBdIHN5c3RlbWRbMV06IEVucXVldWVkIGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCBhcyAx
NjgKWyAgIDIxLjk1ODM0NF0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIHNjaGVkdWxlZCBy
ZXN0YXJ0IGpvYi4KWyAgIDIxLjk3MTM0MF0gc3lzdGVtZFsxXTogU3RvcHBpbmcgT3BlblNT
SCBEYWVtb24uLi4KWyAgIDIxLjk3NjE3OV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNo
YW5nZWQgYXV0by1yZXN0YXJ0IC0+IGRlYWQKWyAgIDIxLjk5MDg3N10gc3lzdGVtZFsxXTog
Sm9iIHNzaGQuc2VydmljZS9yZXN0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMjEu
OTk3NTcwXSBzeXN0ZW1kWzFdOiBDb252ZXJ0aW5nIGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFy
dCAtPiBzc2hkLnNlcnZpY2Uvc3RhcnQKWyAgIDIyLjAwNTg1MV0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkucHViIGZhaWxl
ZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjAxNTcyMl0gc3lzdGVtZFsxXTog
Q29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVk
IGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMjIuMDI1MjE0XSBzeXN0ZW1kWzFdOiBD
b25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleS5wdWIgZmFp
bGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMjIuMDM1MTk5XSBzeXN0ZW1kWzFd
OiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleSBmYWls
ZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi4wNDQ3MDBdIHN5c3RlbWRbMV06
IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9lY2RzYV9rZXkucHVi
IGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjA1NDcyOF0gc3lzdGVt
ZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tl
eSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi4wNjQ0NzZdIHN5c3Rl
bWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9rZXkucHVi
IGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjA3Mzk3MF0gc3lzdGVt
ZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2tleSBmYWls
ZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi4wODMwNThdIHN5c3RlbWRbMV06
IFN0YXJ0aW5nIG9mIHNzaGRnZW5rZXlzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRp
b24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDIyLjA5MjA3NF0gc3lzdGVtZFsxXTogSm9iIHNz
aGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAyMi4w
OTkwNTldIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU1NIIEtleSBHZW5lcmF0aW9uLgpbICAgMjIu
MTA4MDE4XSBzeXN0ZW1kWzFdOiBTdGFydGluZyBPcGVuU1NIIERhZW1vbi4uLgpbICAgMjIu
MTE4MDI2XSBzeXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9zc2hkIC1E
ClsgICAyMi4xMjk2MDldIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jpbi9zc2hkIGFzIDEz
MwpbICAgMjIuMTQwMDAwXSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZpY2UgY2hhbmdlZCBkZWFk
IC0+IHJ1bm5pbmcKWyAgIDIyLjE0NzIzMl0gc3lzdGVtZFsxMzNdOiBFeGVjdXRpbmc6IC91
c3IvYmluL3NzaGQgLUQKWyAgIDIyLjE1MjUzNl0gc3lzdGVtZFsxXTogSm9iIHNzaGQuc2Vy
dmljZS9zdGFydCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDIyLjE1OTY2N10gc3lzdGVt
ZFsxXTogU3RhcnRlZCBPcGVuU1NIIERhZW1vbi4KWyAgIDIyLjE3MDY4OF0gc3lzdGVtZFsx
XTogZGV2LXR0eXA5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuMTc2
OTE5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlwOS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjE4NzcwMF0gc3lzdGVtZFsxXTogZGV2
LXR0eXA3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuMTk0MTIxXSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlwNy5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjIwNDY5OF0gc3lzdGVtZFsxXTogZGV2LXR0eXBi
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuMjExMDYyXSBzeXN0ZW1k
WzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlwYi5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDIyLjIyMTUxOF0gc3lzdGVtZFsxXTogZGV2LXR0eXBhLmRldmlj
ZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuMjI3Njk5XSBzeXN0ZW1kWzFdOiBz
eXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlwYS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDIyLjIzODI4NV0gc3lzdGVtZFsxXTogZGV2LXR0eXA4LmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuMjQ0NjQ2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2
aWNlcy12aXJ0dWFsLXR0eS10dHlwOC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDIyLjI1NTE1Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXBlLmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjIuMjYxNDIxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHlwZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIy
LjI3MTQxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXBkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjIuMjc3NDQ2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHlwZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjMwMTQx
OV0gc3lzdGVtZFsxXTogZGV2LXR0eXBjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dl
ZApbICAgMjIuMzA3NTc5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10
dHlwYy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjMzNzcyOV0gc3lz
dGVtZFsxXTogZGV2LXR0eXExLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MjIuMzU2MTAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxMS5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjM2NTU5Nl0gc3lzdGVtZFsx
XTogUmVjZWl2ZWQgU0lHQ0hMRCBmcm9tIFBJRCAxMzMgKHNzaGQpLgpbICAgMjIuMzcxOTg4
XSBzeXN0ZW1kWzFdOiBHb3QgU0lHQ0hMRCBmb3IgcHJvY2VzcyAxMzMgKHNzaGQpClsgICAy
Mi4zNzc3OTddIHN5c3RlbWRbMV06IENoaWxkIDEzMyBkaWVkIChjb2RlPWV4aXRlZCwgc3Rh
dHVzPTEvRkFJTFVSRSkKWyAgIDIyLjM4NDQ3N10gc3lzdGVtZFsxXTogQ2hpbGQgMTMzIGJl
bG9uZ3MgdG8gc3NoZC5zZXJ2aWNlClsgICAyMi4zOTAxMTZdIHN5c3RlbWRbMV06IHNzaGQu
c2VydmljZTogbWFpbiBwcm9jZXNzIGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0xL0ZB
SUxVUkUKWyAgIDIyLjM5ODk1N10gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQg
cnVubmluZyAtPiBmYWlsZWQKWyAgIDIyLjQwNTU1NF0gc3lzdGVtZFsxXTogVW5pdCBzc2hk
LnNlcnZpY2UgZW50ZXJlZCBmYWlsZWQgc3RhdGUuClsgICAyMi40MTE5NjNdIHN5c3RlbWRb
MV06IHNzaGQuc2VydmljZSBjaGFuZ2VkIGZhaWxlZCAtPiBhdXRvLXJlc3RhcnQKWyAgIDIy
LjQyMTY2NV0gc3lzdGVtZFsxXTogZGV2LXR0eXEwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjIuNDI3NzYzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHlxMC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjQzNzIy
NV0gc3lzdGVtZFsxXTogQWNjZXB0ZWQgY29ubmVjdGlvbiBvbiBwcml2YXRlIGJ1cy4KWyAg
IDIyLjQ0NDQwM10gc3lzdGVtZFsxXTogZGV2LXR0eXBmLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjIuNDUwOTI0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlwZi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjQ2
MTkxMV0gc3lzdGVtZFsxXTogZGV2LXR0eXE0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjIuNDY3OTU4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlxNC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjQ3ODE0M10g
c3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1k
MS5BZ2VudC5SZWxlYXNlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQK
WyAgIDIyLjQ5MTEwMV0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVl
ZGVza3RvcC5EQnVzLkxvY2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Av
REJ1cy9Mb2NhbApbICAgMjIuNTAzNjQzXSBzeXN0ZW1kWzFdOiBkZXYtdHR5cTMuZGV2aWNl
IGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAyMi41MDk5MjNdIHN5c3RlbWRbMV06IHN5
cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eXEzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjIuNTIwNDk3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5cTIuZGV2aWNlIGNoYW5n
ZWQgZGVhZCAtPiBwbHVnZ2VkClsgICAyMi41MjY1NDBdIHN5c3RlbWRbMV06IHN5cy1kZXZp
Y2VzLXZpcnR1YWwtdHR5LXR0eXEyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjIuNTM1NzEwXSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZpY2UgaG9sZG9mZiB0aW1lIG92
ZXIsIHNjaGVkdWxpbmcgcmVzdGFydC4KWyAgIDIyLjU0Mjg0OV0gc3lzdGVtZFsxXTogVHJ5
aW5nIHRvIGVucXVldWUgam9iIHNzaGQuc2VydmljZS9yZXN0YXJ0L2ZhaWwKWyAgIDIyLjU1
MDQ5N10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2Igc3NoZC5zZXJ2aWNlL3Jlc3Rh
cnQgYXMgMjE3ClsgICAyMi41NTkzOTRdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9i
IHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgYXMgMjY1ClsgICAyMi41NzE4NDVdIHN5c3Rl
bWRbMV06IEVucXVldWVkIGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCBhcyAyMTcKWyAgIDIy
LjU4MTMyOV0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIHNjaGVkdWxlZCByZXN0YXJ0IGpv
Yi4KWyAgIDIyLjU4NzM5MF0gc3lzdGVtZFsxXTogU3RvcHBpbmcgT3BlblNTSCBEYWVtb24u
Li4KWyAgIDIyLjU5MjM1MF0gc3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgYXV0
by1yZXN0YXJ0IC0+IGRlYWQKWyAgIDIyLjU5ODQyMl0gc3lzdGVtZFsxXTogSm9iIHNzaGQu
c2VydmljZS9yZXN0YXJ0IGZpbmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgMjIuNjA1MTYzXSBz
eXN0ZW1kWzFdOiBDb252ZXJ0aW5nIGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCAtPiBzc2hk
LnNlcnZpY2Uvc3RhcnQKWyAgIDIyLjYxMzAzNV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0
aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkucHViIGZhaWxlZCBmb3Igc3No
ZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjYyMjkxNF0gc3lzdGVtZFsxXTogQ29uZGl0aW9u
UGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hk
Z2Vua2V5cy5zZXJ2aWNlLgpbICAgMjIuNjMyNDk3XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25Q
YXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleS5wdWIgZmFpbGVkIGZvciBz
c2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAgMjIuNjQyMzQxXSBzeXN0ZW1kWzFdOiBDb25kaXRp
b25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gvc3NoX2hvc3RfZHNhX2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi42NTE4MDRdIHN5c3RlbWRbMV06IENvbmRpdGlv
blBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9lY2RzYV9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjY2MTkxOF0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tleSBmYWlsZWQg
Zm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi42NzE1MzVdIHN5c3RlbWRbMV06IENv
bmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3NzaC9zc2hfaG9zdF9rZXkucHViIGZhaWxlZCBm
b3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAgIDIyLjY4MDk2NF0gc3lzdGVtZFsxXTogQ29u
ZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3NzaF9ob3N0X2tleSBmYWlsZWQgZm9yIHNz
aGRnZW5rZXlzLnNlcnZpY2UuClsgICAyMi42OTAxNThdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IG9mIHNzaGRnZW5rZXlzLnNlcnZpY2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVk
LiBJZ25vcmluZy4KWyAgIDIyLjY5OTA4OV0gc3lzdGVtZFsxXTogSm9iIHNzaGRnZW5rZXlz
LnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1kb25lClsgICAyMi43MDYwOTVdIHN5
c3RlbWRbMV06IFN0YXJ0ZWQgU1NIIEtleSBHZW5lcmF0aW9uLgpbICAgMjIuNzEzMTQ3XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBPcGVuU1NIIERhZW1vbi4uLgpbICAgMjIuNzE4NzQ5XSBz
eXN0ZW1kWzFdOiBBYm91dCB0byBleGVjdXRlOiAvdXNyL2Jpbi9zc2hkIC1EClsgICAyMi43
MjUxNzNdIHN5c3RlbWRbMV06IEZvcmtlZCAvdXNyL2Jpbi9zc2hkIGFzIDEzOQpbICAgMjIu
NzM0MTMyXSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZpY2UgY2hhbmdlZCBkZWFkIC0+IHJ1bm5p
bmcKWyAgIDIyLjc0MTI5OV0gc3lzdGVtZFsxMzldOiBFeGVjdXRpbmc6IC91c3IvYmluL3Nz
aGQgLUQKWyAgIDIyLjc0NzU3Nl0gc3lzdGVtZFsxXTogSm9iIHNzaGQuc2VydmljZS9zdGFy
dCBmaW5pc2hlZCwgcmVzdWx0PWRvbmUKWyAgIDIyLjc2MTU2N10gc3lzdGVtZFsxXTogU3Rh
cnRlZCBPcGVuU1NIIERhZW1vbi4KWyAgIDIyLjc3ODUyN10gc3lzdGVtZFsxXTogZGV2LXR0
eXE3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuODAyMzM3XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxNy5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjgxODMzNV0gc3lzdGVtZFsxXTogZGV2LXR0eXE2LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuODI0NjIwXSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxNi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIyLjgzNDEzOV0gc3lzdGVtZFsxXTogUmVjZWl2ZWQgU0lHQ0hMRCBm
cm9tIFBJRCAxMzkgKHNzaGQpLgpbICAgMjIuODQwMjk2XSBzeXN0ZW1kWzFdOiBHb3QgU0lH
Q0hMRCBmb3IgcHJvY2VzcyAxMzkgKHNzaGQpClsgICAyMi44NDYzMzFdIHN5c3RlbWRbMV06
IENoaWxkIDEzOSBkaWVkIChjb2RlPWV4aXRlZCwgc3RhdHVzPTEvRkFJTFVSRSkKWyAgIDIy
Ljg1MjkxMF0gc3lzdGVtZFsxXTogQ2hpbGQgMTM5IGJlbG9uZ3MgdG8gc3NoZC5zZXJ2aWNl
ClsgICAyMi44NTgzNzFdIHN5c3RlbWRbMV06IHNzaGQuc2VydmljZTogbWFpbiBwcm9jZXNz
IGV4aXRlZCwgY29kZT1leGl0ZWQsIHN0YXR1cz0xL0ZBSUxVUkUKWyAgIDIyLjg2NzE1NF0g
c3lzdGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgcnVubmluZyAtPiBmYWlsZWQKWyAg
IDIyLjg3MzYxOF0gc3lzdGVtZFsxXTogVW5pdCBzc2hkLnNlcnZpY2UgZW50ZXJlZCBmYWls
ZWQgc3RhdGUuClsgICAyMi44Nzk5OTFdIHN5c3RlbWRbMV06IHNzaGQuc2VydmljZSBjaGFu
Z2VkIGZhaWxlZCAtPiBhdXRvLXJlc3RhcnQKWyAgIDIyLjg4ODkzM10gc3lzdGVtZFsxXTog
QWNjZXB0ZWQgY29ubmVjdGlvbiBvbiBwcml2YXRlIGJ1cy4KWyAgIDIyLjg5NjAxOV0gc3lz
dGVtZFsxXTogZGV2LXR0eXE1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MjIuOTAyMzI4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxNS5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjkxMzUyM10gc3lzdGVtZFsx
XTogZGV2LXR0eXE5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuOTE5
ODk3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxOS5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIyLjkzMDIzOF0gc3lzdGVtZFsxXTogR290
IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5BZ2VudC5SZWxlYXNl
ZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEvYWdlbnQKWyAgIDIyLjk0MzE1N10g
c3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5EQnVzLkxv
Y2FsLkRpc2Nvbm5lY3RlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cy9Mb2NhbApbICAg
MjIuOTU1NTA4XSBzeXN0ZW1kWzFdOiBkZXYtdHR5cTguZGV2aWNlIGNoYW5nZWQgZGVhZCAt
PiBwbHVnZ2VkClsgICAyMi45NjIwMTVdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1
YWwtdHR5LXR0eXE4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuOTcy
MjM3XSBzeXN0ZW1kWzFdOiBkZXYtdHR5cWEuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVn
Z2VkClsgICAyMi45Nzg4NjNdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5
LXR0eXFhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjIuOTg5NzMzXSBz
eXN0ZW1kWzFdOiBkZXYtdHR5cWUuZGV2aWNlIGNoYW5nZWQgZGVhZCAtPiBwbHVnZ2VkClsg
ICAyMi45OTU4NjBdIHN5c3RlbWRbMV06IHN5cy1kZXZpY2VzLXZpcnR1YWwtdHR5LXR0eXFl
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMDA2NzYzXSBzeXN0ZW1k
WzFdOiBzc2hkLnNlcnZpY2UgaG9sZG9mZiB0aW1lIG92ZXIsIHNjaGVkdWxpbmcgcmVzdGFy
dC4KWyAgIDIzLjAyMjAxMl0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHNz
aGQuc2VydmljZS9yZXN0YXJ0L2ZhaWwKWyAgIDIzLjAyOTkzOF0gc3lzdGVtZFsxXTogSW5z
dGFsbGVkIG5ldyBqb2Igc3NoZC5zZXJ2aWNlL3Jlc3RhcnQgYXMgMjY2ClsgICAyMy4wMzY0
NzFdIHN5c3RlbWRbMV06IEluc3RhbGxlZCBuZXcgam9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uv
c3RhcnQgYXMgMzE0ClsgICAyMy4wNDM2MjFdIHN5c3RlbWRbMV06IEVucXVldWVkIGpvYiBz
c2hkLnNlcnZpY2UvcmVzdGFydCBhcyAyNjYKWyAgIDIzLjA0OTc3NV0gc3lzdGVtZFsxXTog
c3NoZC5zZXJ2aWNlIHNjaGVkdWxlZCByZXN0YXJ0IGpvYi4KWyAgIDIzLjA1NTYwMV0gc3lz
dGVtZFsxXTogU3RvcHBpbmcgT3BlblNTSCBEYWVtb24uLi4KWyAgIDIzLjA2MDUyNF0gc3lz
dGVtZFsxXTogc3NoZC5zZXJ2aWNlIGNoYW5nZWQgYXV0by1yZXN0YXJ0IC0+IGRlYWQKWyAg
IDIzLjA2Njc5NV0gc3lzdGVtZFsxXTogSm9iIHNzaGQuc2VydmljZS9yZXN0YXJ0IGZpbmlz
aGVkLCByZXN1bHQ9ZG9uZQpbICAgMjMuMDczNDE3XSBzeXN0ZW1kWzFdOiBDb252ZXJ0aW5n
IGpvYiBzc2hkLnNlcnZpY2UvcmVzdGFydCAtPiBzc2hkLnNlcnZpY2Uvc3RhcnQKWyAgIDIz
LjA4MTIwMF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3NoL3Nz
aF9ob3N0X3JzYV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2VydmljZS4KWyAg
IDIzLjA5MTAzMV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9ldGMvc3No
L3NzaF9ob3N0X3JzYV9rZXkgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpbICAg
MjMuMTAwNDc3XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9zc2gv
c3NoX2hvc3RfZHNhX2tleS5wdWIgZmFpbGVkIGZvciBzc2hkZ2Vua2V5cy5zZXJ2aWNlLgpb
ICAgMjMuMTEwMjI4XSBzeXN0ZW1kWzFdOiBDb25kaXRpb25QYXRoRXhpc3RzPXwhL2V0Yy9z
c2gvc3NoX2hvc3RfZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsg
ICAyMy4xMTk4MjRdIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEvZXRjL3Nz
aC9zc2hfaG9zdF9lY2RzYV9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2Vydmlj
ZS4KWyAgIDIzLjEyOTg0NF0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9l
dGMvc3NoL3NzaF9ob3N0X2VjZHNhX2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZp
Y2UuClsgICAyMy4xMzk0NTldIHN5c3RlbWRbMV06IENvbmRpdGlvblBhdGhFeGlzdHM9fCEv
ZXRjL3NzaC9zc2hfaG9zdF9rZXkucHViIGZhaWxlZCBmb3Igc3NoZGdlbmtleXMuc2Vydmlj
ZS4KWyAgIDIzLjE0OTA3OV0gc3lzdGVtZFsxXTogQ29uZGl0aW9uUGF0aEV4aXN0cz18IS9l
dGMvc3NoL3NzaF9ob3N0X2tleSBmYWlsZWQgZm9yIHNzaGRnZW5rZXlzLnNlcnZpY2UuClsg
ICAyMy4xNTgxMThdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIG9mIHNzaGRnZW5rZXlzLnNlcnZp
Y2UgcmVxdWVzdGVkIGJ1dCBjb25kaXRpb24gZmFpbGVkLiBJZ25vcmluZy4KWyAgIDIzLjE2
NzA1NV0gc3lzdGVtZFsxXTogSm9iIHNzaGRnZW5rZXlzLnNlcnZpY2Uvc3RhcnQgZmluaXNo
ZWQsIHJlc3VsdD1kb25lClsgICAyMy4xNzQyMzZdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU1NI
IEtleSBHZW5lcmF0aW9uLgpbICAgMjMuMTgxMzEwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBP
cGVuU1NIIERhZW1vbi4uLgpbICAgMjMuMTg2NDMwXSBzeXN0ZW1kWzFdOiBzc2hkLnNlcnZp
Y2Ugc3RhcnQgcmVxdWVzdCByZXBlYXRlZCB0b28gcXVpY2tseSwgcmVmdXNpbmcgdG8gc3Rh
cnQuClsgICAyMy4xOTQ4OTBdIHN5c3RlbWRbMV06IHNzaGQuc2VydmljZSBjaGFuZ2VkIGRl
YWQgLT4gZmFpbGVkClsgICAyMy4yMDA1NzBdIHN5c3RlbWRbMV06IEpvYiBzc2hkLnNlcnZp
Y2Uvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1mYWlsZWQKWyAgIDIzLjIwNzA4OF0gc3lzdGVt
ZFsxXTogRmFpbGVkIHRvIHN0YXJ0IE9wZW5TU0ggRGFlbW9uLgpbICAgMjMuMjE0MDk3XSBz
eXN0ZW1kWzFdOiBVbml0IHNzaGQuc2VydmljZSBlbnRlcmVkIGZhaWxlZCBzdGF0ZS4KWyAg
IDIzLjIyNjczOV0gc3lzdGVtZFsxXTogZGV2LXR0eXFkLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjMuMjM5NTk5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlxZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjI1
MjU5OV0gc3lzdGVtZFsxXTogZGV2LXR0eXFjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjMuMjU4NzY3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlxYy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjI2OTYyMV0g
c3lzdGVtZFsxXTogZGV2LXR0eXIxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjMuMjc1NzkxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHly
MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjI4NjE2MV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXIwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMu
MjkyNDQ0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyMC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjMwMjYwM10gc3lzdGVtZFsxXTog
ZGV2LXR0eXFiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMzA5MjE1
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxYi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjMxOTYxOF0gc3lzdGVtZFsxXTogZGV2LXR0
eXFmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMzI1ODIyXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlxZi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjMzNjI1NF0gc3lzdGVtZFsxXTogZGV2LXR0eXI0LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMzQyNTk3XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyNC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjM1MzEyNl0gc3lzdGVtZFsxXTogZGV2LXR0eXIyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMzU5NjE1XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyMi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjM3MDc0MF0gc3lzdGVtZFsxXTogZGV2LXR0eXIzLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuMzc2NzczXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHlyMy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDIzLjM4NzAzMl0gc3lzdGVtZFsxXTogZGV2LXR0eXI4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjMuMzkzMTYwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlyOC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQw
MzIzMF0gc3lzdGVtZFsxXTogZGV2LXR0eXI1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjMuNDA5MzY1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlyNS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQxOTQ3OV0g
c3lzdGVtZFsxXTogZGV2LXR0eXI3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjMuNDI1NDc0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHly
Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQzNTYwMF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXI2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMu
NDQxODMyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyNi5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQ1NDA5NV0gc3lzdGVtZFsxXTog
ZGV2LXR0eXJiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNDY1ODA5
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyYi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQ3ODI5MV0gc3lzdGVtZFsxXTogZGV2LXR0
eXI5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNDg0NTY2XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyOS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjQ5NDc4NF0gc3lzdGVtZFsxXTogZGV2LXR0eXJhLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNTAwOTI3XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyYS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjUxMDkzOV0gc3lzdGVtZFsxXTogZGV2LXR0eXJlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNTE2OTM3XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlyZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjUyNzIwMl0gc3lzdGVtZFsxXTogZGV2LXR0eXJjLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNTMzNDE1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHlyYy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDIzLjU0MzM2NV0gc3lzdGVtZFsxXTogZGV2LXR0eXJkLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjMuNTQ5NTAzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlyZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjU1
OTczNF0gc3lzdGVtZFsxXTogZGV2LXR0eXMyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjMuNTY1NzIzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlzMi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjU3NTkxOV0g
c3lzdGVtZFsxXTogZGV2LXR0eXJmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjMuNTgyMTE1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHly
Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjU5MjExMV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXMxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMu
NTk4MDk1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzMS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjYwODE5NF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXMwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNjE0MzM2
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzMC5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjYyNDQxOV0gc3lzdGVtZFsxXTogZGV2LXR0
eXM1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNjMwNTUzXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzNS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjY0MDU2NF0gc3lzdGVtZFsxXTogZGV2LXR0eXMzLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNjQ2NTU0XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzMy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjY1Njc0MF0gc3lzdGVtZFsxXTogZGV2LXR0eXM0LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNjYyOTU4XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzNC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjY3Mjg4OF0gc3lzdGVtZFsxXTogZGV2LXR0eXM5LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNjgwNzU1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHlzOS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDIzLjY5ODg0NV0gc3lzdGVtZFsxXTogZGV2LXR0eXM4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjMuNzA0ODUyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHlzOC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjcx
NTA5M10gc3lzdGVtZFsxXTogZGV2LXR0eXM2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjMuNzIxMjU2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHlzNi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjczMTI5OV0g
c3lzdGVtZFsxXTogZGV2LXR0eXM3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjMuNzM3Mjg3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlz
Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjc0NzQ3MF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXNjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMu
NzUzNjgyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzYy5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjc2MzYzNl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXNhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNzY5Nzc1
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzYS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjc3OTg0MV0gc3lzdGVtZFsxXTogZGV2LXR0
eXNiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuNzg1ODMxXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzYi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjc5NTk3Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXNkLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuODAyMTg0XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjgxMjE4OV0gc3lzdGVtZFsxXTogZGV2LXR0eXNlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuODE4MTc4XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHlzZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjgyODM1Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXNmLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuODM0NDc5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHlzZi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDIzLjg0NDU2MV0gc3lzdGVtZFsxXTogZGV2LXR0eXQzLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjMuODUwNjk3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl0My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjg2
MDY0MV0gc3lzdGVtZFsxXTogZGV2LXR0eXQyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjMuODY2NjMwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl0Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjg3NjkwM10g
c3lzdGVtZFsxXTogZGV2LXR0eXQwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjMuODgzMTE0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0
MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjg5MzE3M10gc3lzdGVt
ZFsxXTogZGV2LXR0eXQxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMu
OTAxMDM3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0MS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjkxOTUxMl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXQ0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTI1NTEz
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0NC5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjkzNTc3Ml0gc3lzdGVtZFsxXTogZGV2LXR0
eXQ3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTQyMDIwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Ny5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDIzLjk1MjAwOV0gc3lzdGVtZFsxXTogZGV2LXR0eXQ1LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTU4MDI3XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDIzLjk2ODI2N10gc3lzdGVtZFsxXTogZGV2LXR0eXQ2LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTc0Mzg0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDIzLjk4NDQ2MF0gc3lzdGVtZFsxXTogZGV2LXR0eXRhLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjMuOTkwNTkxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl0YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjAwMTIwNF0gc3lzdGVtZFsxXTogZGV2LXR0eXQ5LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMDA3MjM2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl0OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjAx
Nzk1OV0gc3lzdGVtZFsxXTogZGV2LXR0eXRjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMDI0MTAxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl0Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjAzNDI5Nl0g
c3lzdGVtZFsxXTogZGV2LXR0eXRiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMDQwODYyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0
Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA1MTU4NV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXQ4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MDU3NjE2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0OC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA2ODI3Nl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXRmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMDc0NDE4
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0Zi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjA4NDYwNl0gc3lzdGVtZFsxXTogZGV2LXR0
eXRkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMDkwODY1XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0ZC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjEwMTMyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXRlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTA3NTA0XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl0ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjEyMDE5NV0gc3lzdGVtZFsxXTogZGV2LXR0eXUwLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTMyMzg5XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjE0NTkxNF0gc3lzdGVtZFsxXTogZGV2LXR0eXUyLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMTUyMDc1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl1Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjE2MzExMl0gc3lzdGVtZFsxXTogZGV2LXR0eXUxLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMTY5MzY0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl1MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjE3
OTgzMF0gc3lzdGVtZFsxXTogZGV2LXR0eXU1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMTg2MTc4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl1NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjE5NjQxMV0g
c3lzdGVtZFsxXTogZGV2LXR0eXUzLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMjAyNjkzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1
My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjIxMzE1Ml0gc3lzdGVt
ZFsxXTogZGV2LXR0eXU0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MjE5NDYxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1NC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjIzMDE2OF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXU2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjM2NDMz
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Ni5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjI0NjgwNF0gc3lzdGVtZFsxXTogZGV2LXR0
eXU3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjUzMzExXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Ny5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjI2MzUwMl0gc3lzdGVtZFsxXTogZGV2LXR0eXU4LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjcwMDExXSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjI4MDgxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXVjLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMjg2OTMzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjI5NzUwOV0gc3lzdGVtZFsxXTogZGV2LXR0eXViLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuMzA0MDEyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl1Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjMxNDc3OV0gc3lzdGVtZFsxXTogZGV2LXR0eXVhLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuMzIxMTI3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl1YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjMz
MTY3NV0gc3lzdGVtZFsxXTogZGV2LXR0eXU5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuMzM3OTM4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl1OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM1MzMyNV0g
c3lzdGVtZFsxXTogZGV2LXR0eXVkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuMzY1OTczXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1
ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM3NzM2OV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXYwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
MzgzNzQyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2MC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjM5NDA0OV0gc3lzdGVtZFsxXTog
ZGV2LXR0eXVlLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDAwNjA1
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1ZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQxMTA1MV0gc3lzdGVtZFsxXTogZGV2LXR0
eXYyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDE3MjA1XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Mi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQyNzc5Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXYxLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDM0MTI2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjQ0NDUwNF0gc3lzdGVtZFsxXTogZGV2LXR0eXVmLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDUxMTEyXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl1Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjQ2MTg3N10gc3lzdGVtZFsxXTogZGV2LXR0eXY0LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNDY3OTY4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl2NC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjQ3ODY3MF0gc3lzdGVtZFsxXTogZGV2LXR0eXYzLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuNDg0OTU5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl2My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjQ5
NTgxMF0gc3lzdGVtZFsxXTogZGV2LXR0eXY1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuNTAyMDY4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl2NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjUxMjcxMl0g
c3lzdGVtZFsxXTogZGV2LXR0eXY4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuNTE5MjUzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2
OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjUzMDAxNF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXY2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
NTM2MzQ1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Ni5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU0NjkzNl0gc3lzdGVtZFsxXTog
ZGV2LXR0eXY3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTUzMzY2
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Ny5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU2MzkwNV0gc3lzdGVtZFsxXTogZGV2LXR0
eXZiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTcwNDkwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Yi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjU4NTA4Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXZhLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNTk3NjkzXSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjYwOTE1MV0gc3lzdGVtZFsxXTogZGV2LXR0eXY5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNjE1Mjk0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0LjYyNjMyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXZlLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNjMyNjU1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl2ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjY0Mzc4Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXZjLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuNjUwNzk4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl2Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY2
MjEyMl0gc3lzdGVtZFsxXTogZGV2LXR0eXZkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuNjY4OTk2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl2ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY3OTQzN10g
c3lzdGVtZFsxXTogZGV2LXR0eXcxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuNjg1NjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjY5Njc5M10gc3lzdGVt
ZFsxXTogZGV2LXR0eXcwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
NzAzNzAwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3MC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjcxNDA2M10gc3lzdGVtZFsxXTog
ZGV2LXR0eXZmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzIwOTQ5
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl2Zi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjczMjIwNV0gc3lzdGVtZFsxXTogZGV2LXR0
eXc0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzM4MzU5XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3NC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljc0OTc4OF0gc3lzdGVtZFsxXTogZGV2LXR0eXczLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzU2NDY5XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0Ljc2Nzk0MV0gc3lzdGVtZFsxXTogZGV2LXR0eXcyLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzc0ODk0XSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0Ljc4NTE5OV0gc3lzdGVtZFsxXTogZGV2LXR0eXc1LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuNzkyMTg1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl3NS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0LjgwMzI0N10gc3lzdGVtZFsxXTogZGV2LXR0eXc4LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuODExMTcyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl3OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljgz
MTA2Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXc3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuODM3MjM3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl3Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg0ODQwN10g
c3lzdGVtZFsxXTogZGV2LXR0eXc2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjQuODU1Mjk2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg2NjQxNF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXdiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQu
ODczMjUwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3Yi5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljg4MzY0N10gc3lzdGVtZFsxXTog
ZGV2LXR0eXdhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuODkwNTk5
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3YS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjkwMDk1NF0gc3lzdGVtZFsxXTogZGV2LXR0
eXc5LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTA3NzYwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3OS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI0LjkxODk4NV0gc3lzdGVtZFsxXTogZGV2LXR0eXdlLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTI1NzY4XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI0LjkzNzA4OV0gc3lzdGVtZFsxXTogZGV2LXR0eXdkLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTQzODcxXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI0Ljk1NTYwMV0gc3lzdGVtZFsxXTogZGV2LXR0eXgxLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjQuOTYyNDcyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl4MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI0Ljk3MjgyMV0gc3lzdGVtZFsxXTogZGV2LXR0eXdjLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjQuOTc5MTE4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl3Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI0Ljk5
MDIxM10gc3lzdGVtZFsxXTogZGV2LXR0eXgwLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjQuOTk2ODU0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl4MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjAwODI5OV0g
c3lzdGVtZFsxXTogZGV2LXR0eXdmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMDE1MjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl3
Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjAyNTM3Ml0gc3lzdGVt
ZFsxXTogZGV2LXR0eXg1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MDM0MTQ5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4NS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA1NDMzM10gc3lzdGVtZFsxXTog
ZGV2LXR0eXg0LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDYxMTg2
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4NC5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA3MjM4N10gc3lzdGVtZFsxXTogZGV2LXR0
eXgyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDc5MjIxXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Mi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjA4OTczNF0gc3lzdGVtZFsxXTogZGV2LXR0eXgzLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMDk2NDk2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjEwNzMwMF0gc3lzdGVtZFsxXTogZGV2LXR0eXg5LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMTEzNTYwXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjEyNDc0OV0gc3lzdGVtZFsxXTogZGV2LXR0eXg4LmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMTMxNTQ2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl4OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjE0MjUzN10gc3lzdGVtZFsxXTogZGV2LXR0eXg2LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuMTQ5MzU4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl4Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE1
OTcxNF0gc3lzdGVtZFsxXTogZGV2LXR0eXg3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuMTY1ODc1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl4Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE3NjgxOF0g
c3lzdGVtZFsxXTogZGV2LXR0eXhjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMTgzNTcwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4
Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjE5Mzk0MF0gc3lzdGVt
ZFsxXTogZGV2LXR0eXhhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MjAwMjIzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4YS5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjIxMTI5Ml0gc3lzdGVtZFsxXTog
ZGV2LXR0eXhiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjE4MDA0
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Yi5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjIyOTIzMF0gc3lzdGVtZFsxXTogZGV2LXR0
eXhmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjM1MzU5XSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4Zi5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjI0NjI3NF0gc3lzdGVtZFsxXTogZGV2LXR0eXhkLmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjU1MDY4XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjI3MzMxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXhlLmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjgwMDkzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl4ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjI5MDQ4NF0gc3lzdGVtZFsxXTogZGV2LXR0eXkwLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMjk3MjYwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl5MC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjMwNzU3OV0gc3lzdGVtZFsxXTogZGV2LXR0eXkzLmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuMzEzOTU5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl5My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjMy
NDc3N10gc3lzdGVtZFsxXTogZGV2LXR0eXkxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuMzMxMDU3XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl5MS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM0MTM2OF0g
c3lzdGVtZFsxXTogZGV2LXR0eXkyLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuMzQ3OTI4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5
Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM2MDQ3MV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXk2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
MzY2NTAzXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Ni5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM3NzYyMF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXk1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuMzgzODcx
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5NS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjM5NDgwMV0gc3lzdGVtZFsxXTogZGV2LXR0
eXk4LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDA4ODUwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5OC5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjQzMDkwMV0gc3lzdGVtZFsxXTogZGV2LXR0eXk0LmRl
dmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDM2OTQ2XSBzeXN0ZW1kWzFd
OiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5NC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+
IHBsdWdnZWQKWyAgIDI1LjQ2MTI0OV0gc3lzdGVtZFsxXTogZGV2LXR0eXk3LmRldmljZSBj
aGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNDY3MjUzXSBzeXN0ZW1kWzFdOiBzeXMt
ZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdn
ZWQKWyAgIDI1LjUwMTIxMl0gc3lzdGVtZFsxXTogZGV2LXR0eXliLmRldmljZSBjaGFuZ2Vk
IGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNTA3MjExXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNl
cy12aXJ0dWFsLXR0eS10dHl5Yi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAg
IDI1LjUzMTMxOV0gc3lzdGVtZFsxXTogZGV2LXR0eXk5LmRldmljZSBjaGFuZ2VkIGRlYWQg
LT4gcGx1Z2dlZApbICAgMjUuNTM3MzIxXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0
dWFsLXR0eS10dHl5OS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjU3
MTE4NV0gc3lzdGVtZFsxXTogZGV2LXR0eXlhLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1
Z2dlZApbICAgMjUuNTc3MjA5XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0
eS10dHl5YS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjYwMTIwMF0g
c3lzdGVtZFsxXTogZGV2LXR0eXljLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApb
ICAgMjUuNjA3MjAyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5
Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjYzMTI1OV0gc3lzdGVt
ZFsxXTogZGV2LXR0eXlkLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUu
NjM3MjU4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5ZC5kZXZp
Y2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjY2MTQwMF0gc3lzdGVtZFsxXTog
ZGV2LXR0eXllLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNjY3Mzk3
XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5ZS5kZXZpY2UgY2hh
bmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjcwMDg0MF0gc3lzdGVtZFsxXTogZGV2LXR0
eXoxLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNzA2ODUwXSBzeXN0
ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6MS5kZXZpY2UgY2hhbmdlZCBk
ZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljc0MDQ5N10gZXRoMDogZGV2aWNlIE1BQyBhZGRyZXNz
IDhhOjcwOjNjOjQxOmU3OjNmClsgICAyNS43NTY1NzBdICBObyBNQUMgTWFuYWdlbWVudCBD
b3VudGVycyBhdmFpbGFibGUKWyAgIDI1Ljc2MTIzNF0gc3RtbWFjX29wZW46IGZhaWxlZCBQ
VFAgaW5pdGlhbGlzYXRpb24KWyAgIDI1Ljc2NjE0Nl0gSVB2NjogQUREUkNPTkYoTkVUREVW
X1VQKTogZXRoMDogbGluayBpcyBub3QgcmVhZHkKWyAgIDI1Ljc4MTMxNF0gc3lzdGVtZFsx
XTogZGV2LXR0eXlmLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuNzg3
MzE1XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl5Zi5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjgxMTE0OV0gc3lzdGVtZFsxXTogZGV2
LXR0eXowLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODE3MTY1XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6MC5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjgzMzcwN10gc3lzdGVtZFsxXTogZGV2LXR0eXo0
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODM5OTU3XSBzeXN0ZW1k
WzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6NC5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDI1Ljg1MDA0M10gc3lzdGVtZFsxXTogZGV2LXR0eXoyLmRldmlj
ZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODU2MDMxXSBzeXN0ZW1kWzFdOiBz
eXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Mi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDI1Ljg2NjE1MF0gc3lzdGVtZFsxXTogZGV2LXR0eXozLmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuODcyMjczXSBzeXN0ZW1kWzFdOiBzeXMtZGV2
aWNlcy12aXJ0dWFsLXR0eS10dHl6My5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDI1Ljg4MjM0Ml0gc3lzdGVtZFsxXTogZGV2LXR0eXo4LmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjUuODg4MzI4XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHl6OC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1
Ljg5ODQ4MV0gc3lzdGVtZFsxXTogZGV2LXR0eXo3LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjUuOTA0NjI2XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHl6Ny5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjkxNDY4
Nl0gc3lzdGVtZFsxXTogZGV2LXR0eXo2LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dl
ZApbICAgMjUuOTIwODI0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10
dHl6Ni5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1LjkzMDgwMl0gc3lz
dGVtZFsxXTogZGV2LXR0eXo1LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAg
MjUuOTM2NzkyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6NS5k
ZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk0NzAyOF0gc3lzdGVtZFsx
XTogZGV2LXR0eXpiLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTUz
MjQyXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Yi5kZXZpY2Ug
Y2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk2MzI0OF0gc3lzdGVtZFsxXTogZGV2
LXR0eXphLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTY5Mzg0XSBz
eXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6YS5kZXZpY2UgY2hhbmdl
ZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI1Ljk3OTQ4N10gc3lzdGVtZFsxXTogZGV2LXR0eXo5
LmRldmljZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjUuOTg1NDc3XSBzeXN0ZW1k
WzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6OS5kZXZpY2UgY2hhbmdlZCBkZWFk
IC0+IHBsdWdnZWQKWyAgIDI1Ljk5NTY0M10gc3lzdGVtZFsxXTogZGV2LXR0eXpmLmRldmlj
ZSBjaGFuZ2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjYuMDAxODQ5XSBzeXN0ZW1kWzFdOiBz
eXMtZGV2aWNlcy12aXJ0dWFsLXR0eS10dHl6Zi5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBs
dWdnZWQKWyAgIDI2LjAxMTgwNl0gc3lzdGVtZFsxXTogZGV2LXR0eXplLmRldmljZSBjaGFu
Z2VkIGRlYWQgLT4gcGx1Z2dlZApbICAgMjYuMDE3ODIwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2
aWNlcy12aXJ0dWFsLXR0eS10dHl6ZS5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQK
WyAgIDI2LjAyNzk5OV0gc3lzdGVtZFsxXTogZGV2LXR0eXpkLmRldmljZSBjaGFuZ2VkIGRl
YWQgLT4gcGx1Z2dlZApbICAgMjYuMDM0MTI0XSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12
aXJ0dWFsLXR0eS10dHl6ZC5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDI2
LjA0NDIyM10gc3lzdGVtZFsxXTogZGV2LXR0eXpjLmRldmljZSBjaGFuZ2VkIGRlYWQgLT4g
cGx1Z2dlZApbICAgMjYuMDUwNDEwXSBzeXN0ZW1kWzFdOiBzeXMtZGV2aWNlcy12aXJ0dWFs
LXR0eS10dHl6Yy5kZXZpY2UgY2hhbmdlZCBkZWFkIC0+IHBsdWdnZWQKWyAgIDUxLjM1NDY5
NF0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5EQnVz
Lk5hbWVPd25lckNoYW5nZWQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL0RCdXMKWyAgIDUxLjM3
MTI4Ml0gc3lzdGVtZFsxXTogR290IEQtQnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5z
eXN0ZW1kMS5NYW5hZ2VyLlN0YXJ0VW5pdCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVt
ZDEKWyAgIDUxLjM4NDA1MV0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHVz
ZXItMC5zbGljZS9zdGFydC9mYWlsClsgICA1MS4zOTE0MDFdIHN5c3RlbWRbMV06IEluc3Rh
bGxlZCBuZXcgam9iIHVzZXItMC5zbGljZS9zdGFydCBhcyAzMTUKWyAgIDUxLjM5ODAxN10g
c3lzdGVtZFsxXTogRW5xdWV1ZWQgam9iIHVzZXItMC5zbGljZS9zdGFydCBhcyAzMTUKWyAg
IDUxLjQwNjUyNF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgdXNlci0wLnNsaWNlLgpbICAgNTEu
NDEyNzIyXSBzeXN0ZW1kWzFdOiB1c2VyLTAuc2xpY2UgY2hhbmdlZCBkZWFkIC0+IGFjdGl2
ZQpbICAgNTEuNDE4MzY1XSBzeXN0ZW1kWzFdOiBKb2IgdXNlci0wLnNsaWNlL3N0YXJ0IGZp
bmlzaGVkLCByZXN1bHQ9ZG9uZQpbICAgNTEuNDI1NzQ0XSBzeXN0ZW1kWzFdOiBDcmVhdGVk
IHNsaWNlIHVzZXItMC5zbGljZS4KWyAgIDUxLjQzMzU0Nl0gc3lzdGVtZFsxXTogR290IEQt
QnVzIHJlcXVlc3Q6IG9yZy5mcmVlZGVza3RvcC5zeXN0ZW1kMS5NYW5hZ2VyLlN0YXJ0VW5p
dCgpIG9uIC9vcmcvZnJlZWRlc2t0b3Avc3lzdGVtZDEKWyAgIDUxLjQ2OTk2NV0gc3lzdGVt
ZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHVzZXJAMC5zZXJ2aWNlL3N0YXJ0L2ZhaWwK
WyAgIDUxLjQ3NzMxM10gc3lzdGVtZFsxXTogSW5zdGFsbGVkIG5ldyBqb2IgdXNlckAwLnNl
cnZpY2Uvc3RhcnQgYXMgMzE5ClsgICA1MS40ODU4NzVdIHN5c3RlbWRbMV06IEVucXVldWVk
IGpvYiB1c2VyQDAuc2VydmljZS9zdGFydCBhcyAzMTkKWyAgIDUxLjQ5NTgyMl0gc3lzdGVt
ZFsxXTogU3RhcnRpbmcgVXNlciBNYW5hZ2VyIGZvciAwLi4uClsgICA1MS41MDI1MjNdIHN5
c3RlbWRbMV06IEFib3V0IHRvIGV4ZWN1dGU6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZCAt
LXVzZXIKWyAgIDUxLjUxMTEzNF0gc3lzdGVtZFsxXTogRm9ya2VkIC91c3IvbGliL3N5c3Rl
bWQvc3lzdGVtZCBhcyAxNzAKWyAgIDUxLjUyNDY5OV0gc3lzdGVtZFsxXTogdXNlckAwLnNl
cnZpY2UgY2hhbmdlZCBkZWFkIC0+IHN0YXJ0ClsgICA1MS41MzE3NjJdIHN5c3RlbWRbMV06
IFNldCB1cCBqb2JzIHByb2dyZXNzIHRpbWVyZmQuClsgICA1MS41MzY4NTRdIHN5c3RlbWRb
MV06IFNldCB1cCBpZGxlX3BpcGUgd2F0Y2guClsgICA1MS41NDQ0MDFdIHN5c3RlbWRbMV06
IEdvdCBELUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuTWFuYWdlci5T
dGFydFRyYW5zaWVudFVuaXQoKSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQxClsgICA1
MS41Njc4ODFdIHN5c3RlbWRbMV06IEZhaWxlZCB0byBsb2FkIGNvbmZpZ3VyYXRpb24gZm9y
IHNlc3Npb24tYzEuc2NvcGU6IE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkKWyAgIDUxLjU4
MjI4Nl0gc3lzdGVtZFsxXTogVHJ5aW5nIHRvIGVucXVldWUgam9iIHNlc3Npb24tYzEuc2Nv
cGUvc3RhcnQvZmFpbApbICAgNTEuNTkwNDM1XSBzeXN0ZW1kWzFdOiBJbnN0YWxsZWQgbmV3
IGpvYiBzZXNzaW9uLWMxLnNjb3BlL3N0YXJ0IGFzIDM2OQpbICAgNTEuNTk3MTE3XSBzeXN0
ZW1kWzFdOiBFbnF1ZXVlZCBqb2Igc2Vzc2lvbi1jMS5zY29wZS9zdGFydCBhcyAzNjkKWyAg
IDUxLjYxMDk5MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgU2Vzc2lvbiBjMSBvZiB1c2VyIHJv
b3QuClsgICA1MS42MzI4OTBdIHN5c3RlbWRbMV06IHNlc3Npb24tYzEuc2NvcGUgY2hhbmdl
ZCBkZWFkIC0+IHJ1bm5pbmcKWyAgIDUxLjY0NzQzNl0gc3lzdGVtZFsxNzBdOiBFeGVjdXRp
bmc6IC91c3IvbGliL3N5c3RlbWQvc3lzdGVtZCAtLXVzZXIKWyAgIDUxLjY1NTM2Ml0gc3lz
dGVtZFsxXTogSm9iIHNlc3Npb24tYzEuc2NvcGUvc3RhcnQgZmluaXNoZWQsIHJlc3VsdD1k
b25lClsgICA1MS42Njg5MzddIHN5c3RlbWRbMV06IFN0YXJ0ZWQgU2Vzc2lvbiBjMSBvZiB1
c2VyIHJvb3QuClsgICA1MS42ODc2NzZdIHN5c3RlbWRbMV06IEdvdCBELUJ1cyByZXF1ZXN0
OiBvcmcuZnJlZWRlc2t0b3AuREJ1cy5OYW1lT3duZXJDaGFuZ2VkKCkgb24gL29yZy9mcmVl
ZGVza3RvcC9EQnVzClsgICA1MS43MDI3MDFdIHN5c3RlbWRbMV06IEFjY2VwdGVkIGNvbm5l
Y3Rpb24gb24gcHJpdmF0ZSBidXMuClsgICA1MS43MzAzNDldIHN5c3RlbWRbMV06IEdvdCBE
LUJ1cyByZXF1ZXN0OiBvcmcuZnJlZWRlc2t0b3Auc3lzdGVtZDEuQWdlbnQuUmVsZWFzZWQo
KSBvbiAvb3JnL2ZyZWVkZXNrdG9wL3N5c3RlbWQxL2FnZW50ClsgICA1MS43NTU0ODhdIHN5
c3RlbWRbMV06IHNlcmlhbC1nZXR0eUBodmMwLnNlcnZpY2U6IGNncm91cCBpcyBlbXB0eQpb
ICAgNTEuNzY4MDUzXSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVk
ZXNrdG9wLkRCdXMuTG9jYWwuRGlzY29ubmVjdGVkKCkgb24gL29yZy9mcmVlZGVza3RvcC9E
QnVzL0xvY2FsClsgICA1Mi4wNjIzNDhdIHN5c3RlbWRbMV06IEdvdCBub3RpZmljYXRpb24g
bWVzc2FnZSBmb3IgdW5pdCB1c2VyQDAuc2VydmljZQpbICAgNTIuMDgwNDI4XSBzeXN0ZW1k
WzFdOiB1c2VyQDAuc2VydmljZTogR290IG1lc3NhZ2UKWyAgIDUyLjA4NTMzM10gc3lzdGVt
ZFsxXTogdXNlckAwLnNlcnZpY2U6IGdvdCBSRUFEWT0xClsgICA1Mi4xMDcxNDddIHN5c3Rl
bWRbMV06IHVzZXJAMC5zZXJ2aWNlIGNoYW5nZWQgc3RhcnQgLT4gcnVubmluZwpbICAgNTIu
MTIwMzMzXSBzeXN0ZW1kWzFdOiBKb2IgdXNlckAwLnNlcnZpY2Uvc3RhcnQgZmluaXNoZWQs
IHJlc3VsdD1kb25lClsgICA1Mi4xMzUzMTVdIHN5c3RlbWRbMV06IFN0YXJ0ZWQgVXNlciBN
YW5hZ2VyIGZvciAwLgpbICAgNTIuMTQ4NjgzXSBzeXN0ZW1kWzFdOiBDbG9zZWQgam9icyBw
cm9ncmVzcyB0aW1lcmZkLgpbICAgNTIuMTUzODMxXSBzeXN0ZW1kWzFdOiBDbG9zZWQgaWRs
ZV9waXBlIHdhdGNoLgpbICAgNTIuMTY4ODU4XSBzeXN0ZW1kWzFdOiB1c2VyQDAuc2Vydmlj
ZTogZ290IFNUQVRVUz1TdGFydHVwIGZpbmlzaGVkIGluIDM4NG1zLgpbICAgNTIuMTgwNTEw
XSBzeXN0ZW1kWzFdOiBHb3QgRC1CdXMgcmVxdWVzdDogb3JnLmZyZWVkZXNrdG9wLkRCdXMu
TmFtZU93bmVyQ2hhbmdlZCgpIG9uIC9vcmcvZnJlZWRlc2t0b3AvREJ1cwo=
--------------050705070304060703090801
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050705070304060703090801--


From xen-devel-bounces@lists.xen.org Fri Jan 24 10:20:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:20:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dsg-0004EI-UE; Fri, 24 Jan 2014 10:20:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6dsf-0004E2-Fe
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:20:57 +0000
Received: from [193.109.254.147:5879] by server-11.bemta-14.messagelabs.com id
	F6/AB-20576-88E32E25; Fri, 24 Jan 2014 10:20:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390558855!9445582!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8286 invoked from network); 24 Jan 2014 10:20:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:20:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96114080"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 10:20:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:20:54 -0500
Message-ID: <1390558852.2124.31.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Date: Fri, 24 Jan 2014 10:20:52 +0000
In-Reply-To: <52E23D01.50204@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<52E23D01.50204@cn.fujitsu.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Dong Eddie <eddie.dong@intel.com>, FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>, Ian
	Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/13 V6] Remus/Libxl: Network buffering
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 18:14 +0800, Lai Jiangshan wrote:
> On 01/21/2014 05:05 PM, Lai Jiangshan wrote:
> 
> >>
> > 
> > Changes in V6:
> >   Applied Ian Jackson's comments on the V5 series.
> >   [PATCH 2/4 V5] has been split into smaller, single-purpose patches.
> > 
> >   [PATCH 4/4 V5] --> [PATCH 13/13] netbuffer is enabled by default.
> > 
> 
> Ping!

This is targeting 4.5 I think? I'm afraid that this means it is likely
to be queued behind anything targeting 4.4 as far as review bandwidth
goes.

Ian.

> 
> Ian Jackson, any suggestion?
> 
> Shriram, could you review it?
> 
> thanks,
> Lai



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:21:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dtU-0004Kz-DQ; Fri, 24 Jan 2014 10:21:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W6dtS-0004Kl-Pa
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:21:47 +0000
Received: from [193.109.254.147:55811] by server-11.bemta-14.messagelabs.com
	id F6/0D-20576-ABE32E25; Fri, 24 Jan 2014 10:21:46 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390558903!12864649!1
X-Originating-IP: [209.85.216.42]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7866 invoked from network); 24 Jan 2014 10:21:44 -0000
Received: from mail-qa0-f42.google.com (HELO mail-qa0-f42.google.com)
	(209.85.216.42)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:21:44 -0000
Received: by mail-qa0-f42.google.com with SMTP id k4so3720222qaq.29
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 02:21:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=zLwZx+dqDWFUMH0+g9ssDNvGmpgMJiVvaPAVMzrAYrg=;
	b=WFss/n4JjUpH1Mc7fNqAFGpKd3Bb/barwFWdA7QizAcNhzlyof7DJPdG/e9UxFGE+E
	NLLdS/3JBqRKVgNOYldslgSdR/JyuVGRNLTs7IgbYvYJ6DzaywOPrVAWJUagSQWU4c+0
	Ze7F7/pJor36MFa1BkD+DWxMG+S/Mo5D48Khv7Nl4NF/AHZDMhct+/ocL1vcpIsWSS48
	5ptmv/YVIpI1Ntvt87g6enDR11CtykapXoa2mhTGA4wFnpw65pJnkLaxi2o+xCNs0Dp9
	EjR/LJrLWYhnxbdmTNMzjzZFZ94hAdAjSCewduGYsnPlyslC3Qk7vFzE/ms9aH0qhzhJ
	r8hg==
MIME-Version: 1.0
X-Received: by 10.140.97.137 with SMTP id m9mr17685410qge.95.1390558903326;
	Fri, 24 Jan 2014 02:21:43 -0800 (PST)
Received: by 10.224.77.8 with HTTP; Fri, 24 Jan 2014 02:21:43 -0800 (PST)
In-Reply-To: <1389696705.9887.52.camel@kazak.uk.xensource.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
	<1389696705.9887.52.camel@kazak.uk.xensource.com>
Date: Fri, 24 Jan 2014 10:21:43 +0000
Message-ID: <CAOTdubv1kXhZ9DdsBa_PCyU2gPaNHUUjw1zov0dUQcwKW0nWmg@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
	mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 10:51 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Sat, 2014-01-11 at 01:58 +0000, karim.allah.ahmed@gmail.com wrote:
>> On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
>> >> Create multiple banks to hold dom0 memory in case of 1:1 mapping
>> >> if we fail to find one large contiguous region of memory to hold the
>> >> whole thing.
>> >
>> > Thanks. With my ARM maintainer hat on I'd love for this to go in for
>> > 4.4, but with my acting release manager hat on I think, if I'm honest,
>> > this is too big a change for 4.4 at this stage, which is a
>> > pity :-(
>> >
>> >
>> >>
>> >> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>> >> ---
>> >>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>> >>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>> >>  2 files changed, 121 insertions(+), 39 deletions(-)
>> >>
>> >> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> >> index faff88e..bb44cdd 100644
>> >> --- a/xen/arch/arm/domain_build.c
>> >> +++ b/xen/arch/arm/domain_build.c
>> >> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>> >>  {
>> >>      paddr_t start;
>> >>      paddr_t size;
>> >> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>> >>      struct page_info *pg = NULL;
>> >> -    unsigned int order = get_order_from_bytes(dom0_mem);
>> >> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>> >>      int res;
>> >>      paddr_t spfn;
>> >>      unsigned int bits;
>> >>
>> >> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> >> +#define MIN_BANK_ORDER 10
>> >
>> > 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
>> > which can be made, but I think in practice the lowest useful bank size
>> > is going to be somewhat larger than that.
>>
>> MIN_BANK_ORDER is in pages, so it's 2^10 pages, not 2^10 bytes, which
>> makes the minimum bank size 4 MByte
>
> My mistake. Please add a comment though.
>
>> >> +
>> >> +    kinfo->mem.nr_banks = 0;
>> >> +
>> >> +    /*
>> >> +     * We always first try to allocate all dom0 memory in 1 bank.
>> >> +     * However, if we failed to allocate physically contiguous memory
>> >> +     * from the allocator, then we try to create more than one bank.
>> >> +     */
>> >> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>> >
>> > I think this can be just
>> >         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
>> > (maybe order >= ?) or
>> >         while (order > MIN_BANK_ORDER )
>> >         {
>> >                 ...
>> >                 order--;
>> >         }
>> > I think the first works better.
>> >
>> > This does away with the need for cur_order vs order. I think order is
>> > mostly unused after this patch, also not renaming cur_order would
>> > hopefully reduce the diff and therefore the "scariness" of the patch wrt
>> > 4.4 (although that may not be sufficient).
>>
>> Yes, that's correct; however, I'm intentionally using a different
>> variable because I thought it would make things more obvious. If you
>> think it's better to use the same variable, then I'll update it.
>
> Personally I found it confusing to read as it is. Although reading
> further down I'm still confused and I'm not sure that my suggestion
> would actually help.
>
>> >
>> >> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>> >> +                             if ( pg != NULL )
>> >> +                                     break;
>> >> +                     }
>> >> +
>> >> +                     if ( !pg )
>> >> +                             break;
>> >> +
>> >> +                     spfn = page_to_mfn(pg);
>> >> +                     start = pfn_to_paddr(spfn);
>> >> +                     size = pfn_to_paddr((1 << cur_order));
>> >> +
>> >> +                 kinfo->mem.bank[index].start = start;
>> >> +                 kinfo->mem.bank[index].size = size;
>> >> +                 index++;
>> >> +                 kinfo->mem.nr_banks++;
>> >> +     }
>> >> +
>> >> +     if( pg )
>> >> +             break;
>> >> +
>> >> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
>> >
>> > <<1 ?
>>
>> * 2 :)
>
> I know that ;-) But why are you doubling the number of banks at all?
>
>> > Is this not just kinfo->mem.nr_banks?
>>
>> No. In this expression I'm calculating how much more memory will be
>> needed to satisfy the memory size of dom0.
>> At the end of the iteration, I check how much memory has been
>> allocated (= cur_bank * cur_order) against how much memory was needed
>> (= nr_banks * cur_order), so nr_unallocated_banks = (nr_banks -
>> cur_bank + 1) * cur_order.
>> cur_order is then decremented and nr_unallocated_banks is doubled
>> (<<1), and we do another iteration to satisfy the rest of the
>> unallocated memory.
>
> Hrm, I'm still a bit confused. I think perhaps choosing better names for
> the variables (e.g. it seems like you are saying nr_banks is actually
> nr_unallocated_banks?) and adding some comments might help.
>
> But is this algorithm not more complex than it needs to be? Why not just
> allocate memory, subtracting it from kinfo->unassigned_mem as you go and
> adding a new bank each time? The result would be the same, wouldn't it?
> e.g.:
>
>         while ( kinfo->unassigned_mem )
>         {
>                 order = get_order_from_bytes(kinfo->unassigned_mem)
>                 do {
>                         pg = alloc_domheap_pages(... order ...)
>                 } while(!pg && --order > MIN_BANK_ORDER)
>
>                 if (!pg)
>                         urk!
>
>                 kinfo->mem.bank[kinfo->mem.nr_banks].{start,size} = (stuff based on pg, order etc)
>                 kinfo->mem.nr_banks++
>                 kinfo->unassigned_mem -= /*whatever it is*/
>
>                 /* maybe do guest_physmap_add_page here too */
>         }
>
> I think that will produce the same result, won't it?
>
> In either algorithm there needs to be a check for NR_MEM_BANKS somewhere
> or else it will run off the end of kinfo->mem.bank[].
>
>>
>> >
>> > The basic structure here is:
>> >
>> > for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>> >         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>> >                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> >
>> > Shouldn't the iteration over bank be the outer one?
>> >
>> > The banks might be different sizes, right?
>> >
>>
>> The outer loop will define the order of the allocated bank[s] while
>> the inner one will define how many banks of that order will be needed.
>>
>> So you try to allocate one big bank, if it succeeds you're done. If
>> not, you double the number of required banks and retry with smaller
>> order (order - 1).
>>
>> The code can indeed allocate banks of different sizes. If we fail to
>> allocate 1 big bank, we will try to allocate two banks of
>> (order = order - 1); in that case we might only allocate the first bank
>> and fail to allocate the second one, so we then try to allocate the
>> memory required for the second one in two banks.
>>
>> So the logic is always: at the end of the outer loop, calculate how
>> many banks we need to allocate for the next iteration as well as the
>> required order. All allocations that occur in the same iteration are
>> the same size, while each iteration changes the number of banks and the
>> order depending on how much memory we still need to allocate.
>
> I think I get it now, but it still seems complicated. Am I missing
> something with the simpler loop I suggested?
>
>> > Also, with either approach, depending on where memory is available,
>> > this may result in the memory not being allocated contiguously and/or
>> > the banks not being in increasing address order (in fact, because Xen
>> > prefers to allocate higher memory first it seems likely that they will
>> > be in reverse order).
>>
>> Yes, there's no restriction whatsoever on the ordering of the
>> addresses. However, each allocation tries to place the bank as low in
>> memory as possible:
>>
>> for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> {
>>     pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>>     if ( pg != NULL )
>>         break;
>> }
>>
>> >
>> > I don't know if either of those things matter. What does ePAPR have to
>> > say on the matter?
>>
>> There is no mention of address range ordering ( at least in section
>> 3.4 "Memory Node" )
>
> OK. It'll be interesting to see whether the implementations match up to
> that!
>
> Since max banks is necessarily small I do wonder if it might be easier
> to just do a quick insertion sort as we go rather than risking it. I
> suppose there will be plenty of time in the 4.5 cycle (which this work
> is now almost certainly targeting) to hit any problem cases.
>
>> > I'd expect that the ordering might matter from the point of view of
>> > putting the kernel in the first bank -- since that may no longer be the
>> > lowest address.
>> >
>> In the patch, when I set the loadaddr of the image I search through
>> the banks for the lowest-address bank, not the first one, anyway.
>
> OK, I think that makes sense since the requirement is wrt position
> relative to the start of RAM, not the first bank.
>
>> > You don't seem to reference kinfo->unassigned_mem anywhere after the
>> > initial order calculation -- I think you need to subtract memory from it
>> > on each iteration, or else I'm not sure you will actually get the right
>> > amount allocated in all cases.
>>
>> It's being properly calculated in the external mapping loop.
>
> Hrm yes with all that doubling etc.
>
>> >>
>> >> -    kinfo->mem.bank[0].start = start;
>> >> -    kinfo->mem.bank[0].size = size;
>> >> -    kinfo->mem.nr_banks = 1;
>> >> +         if ( res )
>> >> +             panic("Unable to add pages in DOM0: %d", res);
>> >>
>> >> -    kinfo->unassigned_mem -= size;
>> >> +         kinfo->unassigned_mem -= size;
>> >> +     }
>> >>  }
>> >>
>> >>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>> >> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> >> index 6a5772b..658c3de 100644
>> >> --- a/xen/arch/arm/kernel.c
>> >> +++ b/xen/arch/arm/kernel.c
>> >> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>> >>      const paddr_t total = initrd_len + dtb_len;
>> >>
>> >>      /* Convenient */
>> >
>> > If you are going to remove all of the following convenience variables
>> > then this comment is no longer correct.
>> >
>> > (Convenient here means a shorter local name for something complex)
>>
>> It still applies to these variables except probably "unsigned int i,
>> min_i = -1;", so I'll push the comment one line down.
>
> No it doesn't, AFAICT. "Convenient" here means "these are shorthand for
> something longer and inconvenient to type"; if the variables aren't
> const then they almost certainly are not a convenience in the intended
> sense. same_bank, mem_* and kernel_size are all just regular variables, I
> think.
>
>> >> -    const paddr_t mem_start = info->mem.bank[0].start;
>> >> -    const paddr_t mem_size = info->mem.bank[0].size;
>> >> -    const paddr_t mem_end = mem_start + mem_size;
>> >> -    const paddr_t kernel_size = kernel_end - kernel_start;
>> >> +    unsigned int i, min_i = -1;
>> >> +    bool_t same_bank = false;
>> >> +
>> >> +    paddr_t mem_start, mem_end, mem_size;
>> >> +    paddr_t kernel_size;
>> >>
>> >>      paddr_t addr;
>> >>
>> >> -    if ( total + kernel_size > mem_size )
>> >> -        panic("Not enough memory in the first bank for the dtb+initrd");
>> >> +    kernel_size = kernel_end - kernel_start;
>> >> +
>> >> +    for ( i = 0; i < info->mem.nr_banks; i++ )
>> >> +    {
>> >> +        mem_start = info->mem.bank[i].start;
>> >> +        mem_size = info->mem.bank[i].size;
>> >> +        mem_end = mem_start + mem_size - 1;
>> >> +
>> >> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
>> >> +            same_bank = true;
>> >> +        else
>> >> +            same_bank = false;
>> >> +
>> >> +        if ( same_bank && ((total + kernel_size) < mem_size) )
>> >> +            min_i = i;
>> >> +        else if ( (!same_bank) && (total < mem_size) )
>> >> +            min_i = i;
>> >> +        else
>> >> +            continue;
>> >
>> > What is all this doing?
>>
>> It searches through the banks for a bank that fits the initrd and dtb.
>> The calculation is slightly different depending on whether I ended up
>> using the same bank as the one used by the kernel or not. (As
>> mentioned previously, I don't just blindly choose the first bank for
>> the kernel; I search through all of the banks for the lowest bank,
>> address-wise.)
>
> Where is the address comparison before assigning min_i?
>
>> In the same_bank case, I just use the old
>> calculation that was already there in the code; however, in the
>> !same_bank case I just start at the end of the bank.
>
> Would it be clearer to do
>         if ( dtb+initrd fits in kernel's bank )
>                 ok
>         else
>                 search
> ?
>
> There is also a requirement that the dtb and initrd are within certain
> ranges of the start of RAM (see booting.txt and "Booting" in
> Documentation/arm*/); does this patch implement that? It doesn't look to
> be considered during the search or when placing in the !same_bank case.
>
> This would all be simpler if we sorted the banks too. Which is another
> argument for doing so IMHO.
>
>> >> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
>> >> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
>> >> +        /*
>> >> +         * Load kernel at the lowest possible bank
>> >> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
>> >
>> > That commit says nothing about loading in the lowest possible bank,
>> > though. If there is some relevant factor which is worth commenting on
>> > please do so directly.
>>
>> Loading at the lowest bank is safer because of this:
>> "The current solution in Linux assuming that the delta physical
>> address - virtual address is always negative".
>> This delta is calculated based on where the kernel was loaded (using
>> "adr" to find the physical address).
>> So loading it as early as possible is a good idea to avoid this problem.
>
> I think this problem is now fixed upstream, the intention was to
> eventually revert the workaround (Julien was going to tell me when it
> had gone into stable etc, but this is now a 4.5 era revert candidate).
>
>> However, I'm not quite sure why we don't just load it at the beginning
>> of memory then! I'll look at booting.txt for that; maybe it's a
>> decompressor limitation or something!
>
> Yes, it's all a bit complicated.
>
>> > Actually now that the kernel is fixed upstream we don't need the
>> > behaviour of that commit at all. Although there are still restrictions
>> > based on load address vs start of RAM (See booting.txt in the kernel
>> > docs)
>> Ah, I see. I'm using an allwinner branch of the kernel (
>> https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
>> the latest thing.
>
> git://github.com/linux-sunxi/linux-sunxi.git either sunxi-next or
> sunxi-devel branches are pretty good baselines nowadays. I've got an
> outstanding TODO to update the wiki...
Sorry for the delay!

Okay, I'm going to update my system with this tree, refresh the patch
based on your comments, and resend this weekend.

>
> Ian.
>



-- 
Karim Allah Ahmed.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:21:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6dtU-0004Kz-DQ; Fri, 24 Jan 2014 10:21:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <karim.allah.ahmed@gmail.com>) id 1W6dtS-0004Kl-Pa
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:21:47 +0000
Received: from [193.109.254.147:55811] by server-11.bemta-14.messagelabs.com
	id F6/0D-20576-ABE32E25; Fri, 24 Jan 2014 10:21:46 +0000
X-Env-Sender: karim.allah.ahmed@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390558903!12864649!1
X-Originating-IP: [209.85.216.42]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7866 invoked from network); 24 Jan 2014 10:21:44 -0000
Received: from mail-qa0-f42.google.com (HELO mail-qa0-f42.google.com)
	(209.85.216.42)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:21:44 -0000
Received: by mail-qa0-f42.google.com with SMTP id k4so3720222qaq.29
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 02:21:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=zLwZx+dqDWFUMH0+g9ssDNvGmpgMJiVvaPAVMzrAYrg=;
	b=WFss/n4JjUpH1Mc7fNqAFGpKd3Bb/barwFWdA7QizAcNhzlyof7DJPdG/e9UxFGE+E
	NLLdS/3JBqRKVgNOYldslgSdR/JyuVGRNLTs7IgbYvYJ6DzaywOPrVAWJUagSQWU4c+0
	Ze7F7/pJor36MFa1BkD+DWxMG+S/Mo5D48Khv7Nl4NF/AHZDMhct+/ocL1vcpIsWSS48
	5ptmv/YVIpI1Ntvt87g6enDR11CtykapXoa2mhTGA4wFnpw65pJnkLaxi2o+xCNs0Dp9
	EjR/LJrLWYhnxbdmTNMzjzZFZ94hAdAjSCewduGYsnPlyslC3Qk7vFzE/ms9aH0qhzhJ
	r8hg==
MIME-Version: 1.0
X-Received: by 10.140.97.137 with SMTP id m9mr17685410qge.95.1390558903326;
	Fri, 24 Jan 2014 02:21:43 -0800 (PST)
Received: by 10.224.77.8 with HTTP; Fri, 24 Jan 2014 02:21:43 -0800 (PST)
In-Reply-To: <1389696705.9887.52.camel@kazak.uk.xensource.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
	<1389696705.9887.52.camel@kazak.uk.xensource.com>
Date: Fri, 24 Jan 2014 10:21:43 +0000
Message-ID: <CAOTdubv1kXhZ9DdsBa_PCyU2gPaNHUUjw1zov0dUQcwKW0nWmg@mail.gmail.com>
From: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: tim@xen.org, Julien Grall <julien.grall@linaro.org>,
	stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
	mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 14, 2014 at 10:51 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Sat, 2014-01-11 at 01:58 +0000, karim.allah.ahmed@gmail.com wrote:
>> On Fri, Jan 10, 2014 at 3:48 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Fri, 2014-01-10 at 04:12 +0000, Karim Raslan wrote:
>> >> Create multiple banks to hold dom0 memory in case of 1:1 mapping
>> >> if we failed to find 1 large contiguous memory to hold the whole
>> >> thing.
>> >
>> > Thanks. While with my ARM maintainer hat on I'd love for this to go in
>> > for 4.4 with my acting release manager hat on I think if I have to be
>> > honest this is too big a change for 4.4 at this stage, which is a
>> > pity :-(
>> >
>> >
>> >>
>> >> Signed-off-by: Karim Raslan <karim.allah.ahmed@gmail.com>
>> >> ---
>> >>  xen/arch/arm/domain_build.c |   74 ++++++++++++++++++++++++++-----------
>> >>  xen/arch/arm/kernel.c       |   86 ++++++++++++++++++++++++++++++++++---------
>> >>  2 files changed, 121 insertions(+), 39 deletions(-)
>> >>
>> >> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> >> index faff88e..bb44cdd 100644
>> >> --- a/xen/arch/arm/domain_build.c
>> >> +++ b/xen/arch/arm/domain_build.c
>> >> @@ -69,39 +69,71 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>> >>  {
>> >>      paddr_t start;
>> >>      paddr_t size;
>> >> +    unsigned int cur_order, cur_bank, nr_banks = 1, index = 0;
>> >>      struct page_info *pg = NULL;
>> >> -    unsigned int order = get_order_from_bytes(dom0_mem);
>> >> +    unsigned int order = get_order_from_bytes(kinfo->unassigned_mem);
>> >>      int res;
>> >>      paddr_t spfn;
>> >>      unsigned int bits;
>> >>
>> >> -    for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> >> +#define MIN_BANK_ORDER 10
>> >
>> > 2^10 is < PAGE_SIZE (PAGE_SHIFT is 12). 12 is the lowest allocation size
>> > which can be made, but I think in practice the lowest useful bank size
>> > is going to be somewhat larger than that.
>>
>> MIN_BANK_ORDER is in pages, so it 2^10 pages not 2^10 bytes which
>> makes the minimum 4 Mbyte
>
> My mistake. Please add a comment though.
>
>> >> +
>> >> +    kinfo->mem.nr_banks = 0;
>> >> +
>> >> +    /*
>> >> +     * We always first try to allocate all dom0 memory in 1 bank.
>> >> +     * However, if we failed to allocate physically contiguous memory
>> >> +     * from the allocator, then we try to create more than one bank.
>> >> +     */
>> >> +    for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>> >
>> > I think this can be just
>> >         for( order = get_order_from_bytes(...) ; order > MIN_BANK_ORDER ; order-- )
>> > (maybe order >= ?) or
>> >         while (order > MIN_BANK_ORDER )
>> >         {
>> >                 ...
>> >                 order--;
>> >         }
>> > I think the first works better.
>> >
>> > This does away with the need for cur_order vs order. I think order is
>> > mostly unused after this patch, also not renaming cur_order would
>> > hopefully reduce the diff and therefore the "scariness" of the patch wrt
>> > 4.4 (although that may not be sufficient).
>>
>> Yes, that's correct, however I'm intentionally using a different
>> variable because I thought that this is going to make things more
>> obvious. If you think it's better to use the same variable, then I'll
>> update it.
>
> Personally I found it confusing to read as it is. Although reading
> further down I'm still confused and I'm not sure that my suggestion
> would actually help.
>
>> >
>> >> +                             pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>> >> +                             if ( pg != NULL )
>> >> +                                     break;
>> >> +                     }
>> >> +
>> >> +                     if ( !pg )
>> >> +                             break;
>> >> +
>> >> +                     spfn = page_to_mfn(pg);
>> >> +                     start = pfn_to_paddr(spfn);
>> >> +                     size = pfn_to_paddr((1 << cur_order));
>> >> +
>> >> +                 kinfo->mem.bank[index].start = start;
>> >> +                 kinfo->mem.bank[index].size = size;
>> >> +                 index++;
>> >> +                 kinfo->mem.nr_banks++;
>> >> +     }
>> >> +
>> >> +     if( pg )
>> >> +             break;
>> >> +
>> >> +     nr_banks = (nr_banks - cur_bank + 1) << 1;
>> >
>> > <<1 ?
>>
>> * 2 :)
>
> I know that ;-) But why are you doubling the number of banks at all?
>
>> > Is this not just kinfo->mem.nr_banks?
>>
>> No, In this equation I'm calculating how much more memory will be
>> needed to satisfy the memory size of dom0.
>> So at the end of the iteration, I check how much memory has been
>> allocated (=cur_bank * cur_order) and how much memory was needed
>> (=nr_banks * cur_order), so nr_unallocated_banks = (nr_banks -
>> cur_bank + 1) * cur_order
>> So cur_order is decremented and nr_unallocated_banks is doubled ( <<1
>> ) and we do another iteration to satisfy the rest of unallocated
>> memory.
>
> Hrm, I'm still a bit confused. I think perhaps choosing better names for
> the variables (e.g. it seems like you are saying nr_banks is actually
> nr_unallocated_banks?) and adding some comments might help.
>
> But is this algorithm not more complex than it needs to be? Why not just
> allocate memory, subtracting it from kinfo->unassigned_mem as you go and
> adding a new bank each time? The result would the same wouldn't it?
> e.g.:
>
>         while ( kinfo->unassigned_mem )
>         {
>                 order = get_order_of_bytes(kinfo->unsigned_mem)
>                 do {
>                         pg = alloc_domheap_pages(... order ...)
>                 } while(!pg && order>>=1 > MIN_BANK_ORDER)
>
>                 if (pg)
>                         urk!
>
>                 kinfo->mem.bank[kinfo->mem.nr_banks].{start,size} = (stuff based on pg, order etc)
>                 kinfo->mem.nr_banks++
>                 kinfo->unassigned_mem -= /*whatever it is*/
>
>                 /* maybe do guest_physmap_add_page here too */
>         }
>
> I think that will produce the same result, won't it?
>
> In either algorithm there needs to be a check for NR_MEM_BANKS somewhere
> or else it will run off the end of kinfo->mem.bank[].
>
>>
>> >
>> > The basic structure here is:
>> >
>> > for ( cur_order = order; cur_order > MIN_BANK_ORDER; cur_order--)
>> >         for( cur_bank = 1; cur_bank <= nr_banks; cur_bank++ )
>> >                 for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> >
>> > Shouldn't the iteration over bank be the outer one?
>> >
>> > The banks might be different sizes, right?
>> >
>>
>> The outer loop will define the order of the allocated bank[s] while
>> the inner one will define how many banks of that order will be needed.
>>
>> So you try to allocate one big bank, if it succeeds you're done. If
>> not, you double the number of required banks and retry with smaller
>> order (order - 1).
>>
>> The code can indeed allocate banks of different sizes. So, if we
>> failed to allocate 1 big bank, we will try to allocate two banks of
>> (order = order -1) in that case we might only allocate the first bank
>> and fail to allocate the second one. So, we try to allocate the
>> required memory for the second one in two banks.
>>
>> So the logic is always: In the end of the outer loop calculate how
>> many banks we need to allocate for next iteration as well as the
>> required order. So all allocation that occur in the same iteration is
>> equal, while each iteration changes the nr banks and order depending
>> on how much memory we still need to allocate
>
> I think I get it now, but it still seems complicated. Am I missing
> something with the simpler loop I suggested?
>
>> > Also with either approach then depending on where memory is available
>> > this may result in the memory not being allocated in and/or that they
>> > are not in increasing order (in fact, because Xen prefer to allocate
>> > higher memory first it seems likely that it will be in reverse order).
>>
>> Yes, there's no restriction what so ever on the order of the
>> addresses. However each allocation is trying to allocate the bank as
>> early as possible
>>
>> for ( bits = PAGE_SHIFT + 1; bits < PADDR_BITS; bits++ )
>> {
>>     pg = alloc_domheap_pages(d, cur_order, MEMF_bits(bits));
>>     if ( pg != NULL )
>>         break;
>> }
>>
>> >
>> > I don't know if either of those things matter. What does ePAPR have to
>> > say on the matter?
>>
>> There is no mention of address range ordering ( at least in section
>> 3.4 "Memory Node" )
>
> OK. It'll be interesting to see whether the implementations match up to
> that!
>
> Since max banks is necessarily small I do wonder if it might be easier
> to just do a quick insertion sort as we go rather than risking it. I
> suppose there will be plenty of time in the 4.5 cycle (which this work
> is now almost certainly targeting) to hit any problem cases.
>
>> > I'd expect that the ordering might matter from the point of view of
>> > putting the kernel in the first bank -- since that may no longer be the
>> > lowest address.
>> >
>> In the patch, when I set the loadaddr of the image I search through
>> the banks for the lowest address bank not the first one anyway.
>
> OK, I think that makes sense since the requirement is wrt position
> relative to the start of RAM, not the first bank.
>
>> > You don't seem to reference kinfo->unassigned_mem anywhere after the
>> > initial order calculation -- I think you need to subtract memory from it
>> > on each iteration, or else I'm not sure you will actually get the right
>> > amount allocated in all cases.
>>
>> It's being properly calculated in the external mapping loop.
>
> Hrm yes with all that doubling etc.
>
>> >>
>> >> -    kinfo->mem.bank[0].start = start;
>> >> -    kinfo->mem.bank[0].size = size;
>> >> -    kinfo->mem.nr_banks = 1;
>> >> +         if ( res )
>> >> +             panic("Unable to add pages in DOM0: %d", res);
>> >>
>> >> -    kinfo->unassigned_mem -= size;
>> >> +         kinfo->unassigned_mem -= size;
>> >> +     }
>> >>  }
>> >>
>> >>  static void allocate_memory(struct domain *d, struct kernel_info *kinfo)
>> >> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> >> index 6a5772b..658c3de 100644
>> >> --- a/xen/arch/arm/kernel.c
>> >> +++ b/xen/arch/arm/kernel.c
>> >> @@ -79,15 +79,43 @@ static void place_modules(struct kernel_info *info,
>> >>      const paddr_t total = initrd_len + dtb_len;
>> >>
>> >>      /* Convenient */
>> >
>> > If you are going to remove all of the following convenience variables
>> > then this comment is no longer correct.
>> >
>> > (Convenient here means a shorter local name for something complex)
>>
>> It still applies to these variables except probably "unsigned int i,
>> min_i = -1;", so I'll push the comment one line down.
>
> No it doesn't AFAICT. "Convenient" here means "these are shorthand for
> something longer and inconvenient to type", if the variables aren't
> const then they almost certainly are not a convenience in the intended
> sense. same_bank, mem_* and kernel_size are all just regular variables I
> think.
>
>> >> -    const paddr_t mem_start = info->mem.bank[0].start;
>> >> -    const paddr_t mem_size = info->mem.bank[0].size;
>> >> -    const paddr_t mem_end = mem_start + mem_size;
>> >> -    const paddr_t kernel_size = kernel_end - kernel_start;
>> >> +    unsigned int i, min_i = -1;
>> >> +    bool_t same_bank = false;
>> >> +
>> >> +    paddr_t mem_start, mem_end, mem_size;
>> >> +    paddr_t kernel_size;
>> >>
>> >>      paddr_t addr;
>> >>
>> >> -    if ( total + kernel_size > mem_size )
>> >> -        panic("Not enough memory in the first bank for the dtb+initrd");
>> >> +    kernel_size = kernel_end - kernel_start;
>> >> +
>> >> +    for ( i = 0; i < info->mem.nr_banks; i++ )
>> >> +    {
>> >> +        mem_start = info->mem.bank[i].start;
>> >> +        mem_size = info->mem.bank[i].size;
>> >> +        mem_end = mem_start + mem_size - 1;
>> >> +
>> >> +        if ( (kernel_end > mem_start) && (kernel_end <= mem_end) )
>> >> +            same_bank = true;
>> >> +        else
>> >> +            same_bank = false;
>> >> +
>> >> +        if ( same_bank && ((total + kernel_size) < mem_size) )
>> >> +            min_i = i;
>> >> +        else if ( (!same_bank) && (total < mem_size) )
>> >> +            min_i = i;
>> >> +        else
>> >> +            continue;
>> >
>> > What is all this doing?
>>
>> Search through the banks for the bank that fits the initrd and dtb.
>> The calculation is slightly different depending on whether I ended up
>> using the same bank as the one used by the kernel or not. (As mentioned
>> previously, I don't just blindly choose the first bank for the kernel;
>> I search through all of the banks for the lowest bank, address-wise.)
>
> Where is the address comparison before assigning min_i?
>
>>  In the same_bank case, I just use the old
>> calculation that was already there in the code; in the
>> !same_bank case I just start at the end of the bank.
>
> Would it be clearer to do
>         if ( dtb+initrd fits in kernel's bank )
>                 ok
>         else
>                 search
> ?
>
> There is also a requirement that the dtb and initrd are within certain
> ranges of the start of RAM (see the booting.txt and Booting in
> Documentation/arm*/), does this implement this? It doesn't look to be
> considered during the search or when placing in the !same_bank case.
>
> This would all be simpler if we sorted the banks too. Which is another
> argument for doing so IMHO.
>
>> >> -        load_end = info->mem.bank[0].start + info->mem.bank[0].size;
>> >> -        load_end = MIN(info->mem.bank[0].start + MB(128), load_end);
>> >> +        /*
>> >> +         * Load kernel at the lowest possible bank
>> >> +         * ( check commit 6c21cb36e263de2db8716b477157a5b6cd531e1e for reason behind that )
>> >
>> > That commit says nothing about loading in the lowest possible bank,
>> > though. If there is some relevant factor which is worth commenting on
>> > please do so directly.
>>
>> Loading at the lowest bank is safer because of:
>> "The current solution in Linux assuming that the delta physical
>> address - virtual address is always negative".
>> This delta is calculated based on where the kernel was loaded
>> (using "adr" to find the physical address).
>> So loading it as low as possible is a good idea to avoid this problem.
>
> I think this problem is now fixed upstream, the intention was to
> eventually revert the workaround (Julien was going to tell me when it
> had gone into stable etc, but this is now a 4.5 era revert candidate).
>
>> However, I'm not quite sure why we don't just load it at the beginning
>> of memory then! I think I'll look at booting.txt for that; maybe it's
>> a decompressor limitation or something!
>
> Yes, it's all a bit complicated.
>
>> > Actually now that the kernel is fixed upstream we don't need the
>> > behaviour of that commit at all. Although there are still restrictions
>> > based on load address vs start of RAM (See booting.txt in the kernel
>> > docs)
>> Ah, I see. I'm using an allwinner branch of the kernel (
>> https://github.com/bjzhang/linux-allwinner.git ). I'll take a look at
>> the latest thing.
>
> git://github.com/linux-sunxi/linux-sunxi.git either sunxi-next or
> sunxi-devel branches are pretty good baselines nowadays. I've got an
> outstanding TODO to update the wiki...
Sorry for the delay!

Okay, I'm going to update my system with this tree and refresh the
patch based on your comments and resend this weekend.

>
> Ian.
>



-- 
Karim Allah Ahmed.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:35:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6e6M-0004s3-6C; Fri, 24 Jan 2014 10:35:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6e6K-0004ry-K2
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:35:05 +0000
Received: from [193.109.254.147:32326] by server-2.bemta-14.messagelabs.com id
	68/B1-00361-7D142E25; Fri, 24 Jan 2014 10:35:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390559701!12919551!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26453 invoked from network); 24 Jan 2014 10:35:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:35:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94082458"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 10:35:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:35:00 -0500
Message-ID: <1390559700.2124.40.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 24 Jan 2014 10:35:00 +0000
In-Reply-To: <20140124014050.GA26624@garbanzo.do-not-panic.com>
References: <20140124014050.GA26624@garbanzo.do-not-panic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Removing building qemu from xen ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 17:40 -0800, Luis R. Rodriguez wrote:
> I see xen by default builds qemu on x86 / x86_64, building it in
> tools/qemu-xen-dir, and regardless of 32/64-bit x86 architecture the
> resulting binary is called qemu-system-i386. I see that recently support
> was also added to avoid having to build qemu and let distributions specify
> their own binary path for qemu. That's great! I see though that by
> default xen builds a different repo version of qemu:
> 
> git://xenbits.xen.org/qemu-upstream-unstable.git
> tag: qemu-xen-4.4.0-rc1
> 
> At first glance I am not seeing much delta from what's upstream;
> at least the latest commit of qemu-upstream-unstable.git is already
> in the *real* qemu upstream tree [0]. This raises the question of
> whether or not xen needs to have the qemu building infrastructure as
> part of its own build system.

The xenbits qemu tree serves a useful purpose as a "stable xen branch",
that is it is the qemu stable branch + patches which are very useful for
Xen but perhaps not suitable for the upstream stable branch for one
reason or another (maybe they are risky for some non-Xen use case).
Obviously such patches are quite rare, which is why the delta is
normally small.

>  Can we rip it out?

At a minimum someone would first need to teach Xen's automated test
system to fetch & build the separate qemu.

Probably more bang for buck short term would be a test flight targeting
the upstream qemu development and stable branches to help us spot
regressions there early.

In fact one of those additional flights would be a prerequisite for the
first thing, because its outputs would need to serve as input to
the main Xen flight (in order to vary only one thing per flight).

But if you are interested in removing build integration cruft from the
Xen tree then the first thing to target would be the Linux kernel build
stuff, which is unused by most people (disabled by default, obsolete
etc) but which osstest.git still uses for some small pieces (at least
xen.git/buildconfigs/enable-xen-config, maybe other bits) which would
need moving to osstest.git.

http://xenbits.xen.org/gitweb/?p=osstest.git is the thing to send
patches against. I'd certainly welcome patches to remove the need for
the kernel build stuff. Probably best to let the conversation re: qemu
play out before digging into that side of things too much.
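
As an aside, the "let distributions specify their own binary path"
support mentioned above is exposed through the tools configure script.
A hypothetical invocation, assuming the --with-system-qemu option and a
distro qemu living at /usr/bin/qemu-system-i386 (both the path and the
option's availability depend on the Xen version in use), might look like:

```shell
# Hedged example: point Xen's tools build at a distribution-provided
# qemu binary instead of cloning and building qemu-upstream in-tree.
./configure --with-system-qemu=/usr/bin/qemu-system-i386
make dist-tools
```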

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:36:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6e86-00050w-Bb; Fri, 24 Jan 2014 10:36:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6e83-00050f-UG
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 10:36:52 +0000
Received: from [193.109.254.147:58349] by server-4.bemta-14.messagelabs.com id
	C8/B6-03916-34242E25; Fri, 24 Jan 2014 10:36:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390559809!12961767!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27450 invoked from network); 24 Jan 2014 10:36:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:36:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94082853"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 10:36:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 05:36:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6e7z-0005Kp-6d;
	Fri, 24 Jan 2014 10:36:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6e7z-00025U-2o;
	Fri, 24 Jan 2014 10:36:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24473-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 10:36:47 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24473: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24473 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24473/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24471
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 24471 pass in 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24465

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24471 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24471 never pass

version targeted for testing:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36
baseline version:
 xen                  231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=650fc2f76d0a156e23703683d0c18fa262ecea36
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 650fc2f76d0a156e23703683d0c18fa262ecea36
+ branch=xen-unstable
+ revision=650fc2f76d0a156e23703683d0c18fa262ecea36
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 650fc2f76d0a156e23703683d0c18fa262ecea36:master
Counting objects: 11, done.
Compressing objects:  20% (1/5)   
Compressing objects:  40% (2/5)   
Compressing objects:  60% (3/5)   
Compressing objects:  80% (4/5)   
Compressing objects: 100% (5/5)   
Compressing objects: 100% (5/5), done.
Writing objects:  16% (1/6)   
Writing objects:  33% (2/6)   
Writing objects:  50% (3/6)   
Writing objects:  66% (4/6)   
Writing objects:  83% (5/6)   
Writing objects: 100% (6/6)   
Writing objects: 100% (6/6), 874 bytes, done.
Total 6 (delta 3), reused 4 (delta 1)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   231d7f4..650fc2f  650fc2f76d0a156e23703683d0c18fa262ecea36 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=650fc2f76d0a156e23703683d0c18fa262ecea36
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 650fc2f76d0a156e23703683d0c18fa262ecea36
+ branch=xen-unstable
+ revision=650fc2f76d0a156e23703683d0c18fa262ecea36
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 650fc2f76d0a156e23703683d0c18fa262ecea36:master
Counting objects: 11, done.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 874 bytes, done.
Total 6 (delta 3), reused 4 (delta 1)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   231d7f4..650fc2f  650fc2f76d0a156e23703683d0c18fa262ecea36 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:38:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:38:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6e9B-000594-4m; Fri, 24 Jan 2014 10:38:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W6e99-00058u-W5
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:38:00 +0000
Received: from [193.109.254.147:18877] by server-7.bemta-14.messagelabs.com id
	5A/3A-15500-78242E25; Fri, 24 Jan 2014 10:37:59 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390559876!12961584!1
X-Originating-IP: [64.18.0.20]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11943 invoked from network); 24 Jan 2014 10:37:58 -0000
Received: from exprod5og110.obsmtp.com (HELO exprod5og110.obsmtp.com)
	(64.18.0.20)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 10:37:58 -0000
Received: from mail-vb0-f46.google.com ([209.85.212.46]) (using TLSv1) by
	exprod5ob110.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuJChMVoFi8UKLKJfu35dHv+lsfRp5PU@postini.com;
	Fri, 24 Jan 2014 02:37:58 PST
Received: by mail-vb0-f46.google.com with SMTP id o19so1739596vbm.19
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 02:37:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=80LHmr9jckNkbk1eMh2v9WJ3oFu1OAXJ6qJ+6VrbrN4=;
	b=Q9nb54nF6gELCILCkSA4YqPd/MLgA7mXPwfDrDowSasNaMnGuJ8MAi4BK8869MJpdm
	5RGngbuuurkouPCgzw38K5eYzS67TKLGgZhHvqtQ2nqfDS5GpFbkYDJ8YuvQTL+aJGqp
	A6egzCY6DkaBAnHi6Alu2CKI4pKIXek7v2UE8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=80LHmr9jckNkbk1eMh2v9WJ3oFu1OAXJ6qJ+6VrbrN4=;
	b=I+7YAU1bq24M5tJxDFRt87IxVFmB95FARzHqqPUyktihXSACKLCn41yP0akZ48Amtj
	0k1G2ib262/ohCLJ8y9LFtXzm50nF63h+4X72HUVty8BNCXo9nRmBRRKxhikt5kZfA6E
	mrU4ljpjhKhP6YZTUc6BNfd/V2ZVKseoW9PVf5F7pnNP/NxtdCPtNeBlOtMEMWPGdc6s
	HKKTAY2rMGaaFKSEfHIbkMPE9TUfzO05EMMXtB0rww8eDcfiPNBTiO+z9bERvg86e0cW
	w1Nt/Ram3ePTPWWFW4gqeZTx0OcdNtOx5KTaqMLdUO2YDVPCRV8SplLXLmMQ/ufxPTbT
	U1Ug==
X-Gm-Message-State: ALoCoQngrOgfxzBb8fgRc01Dw/2APuGOvuK2YepRKtL5N657EccIu8c5j4TACD1oqYSxfS9j8xzqmei/GaciJtSWjNJ71e0T63k3KZtc8j+6TAO4LxVSQOc2nfUrVtnej5CHR7JUczZvPzap2B0bimvvQyZSR9GEI8OMh57yZZcNMjBl46JeRbU=
X-Received: by 10.58.219.1 with SMTP id pk1mr345623vec.49.1390559875917;
	Fri, 24 Jan 2014 02:37:55 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.58.219.1 with SMTP id pk1mr345618vec.49.1390559875797; Fri,
	24 Jan 2014 02:37:55 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Fri, 24 Jan 2014 02:37:55 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401231441470.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-3-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401231441470.15917@kaball.uk.xensource.com>
Date: Fri, 24 Jan 2014 12:37:55 +0200
Message-ID: <CAH_mUMNq7Hdm73en36SbLz9f-oMKqqKsVicxS2-3hWJ8oGjcoA@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping to
 4K pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On Thu, Jan 23, 2014 at 4:52 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
>> The patch introduces the following algorithm:
>> - enumerate all first-level translation entries
>> - for each section, create 256 pages of 4096 bytes each
>> - for each supersection, create 4096 pages of 4096 bytes each
>> - flush the cache to synchronize the Cortex M15 and the IOMMU
>>
>> This algorithm makes it possible to use 4K mappings only.
>>
>> Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>
> I take it that the first patch doesn't actually work without this one?
> In that case it might make sense to just merge them into one.
>

No, each patch in the series works standalone. This patch rewrites the
structure of the pagetable; I did this to increase the granularity of
memory allocations. I'm pretty sure we won't have contiguous 1 MB and
16 MB blocks of memory once several guest domains are active. With 4 KB
granularity, the chances of a successful translation are much higher.

>>               /* "section" 1Mb */
>>               } else if (iopgd_is_section(iopgd)) {
>> -                     da = i << IOSECTION_SHIFT;
>> -                     mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
>> +                     if (likely(translate_sections_to_pages)) {
>> +                             mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
>> +                     } else {
>> +                             da = i << IOSECTION_SHIFT;
>> +                             mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
>> +                     }
>>
>>               /* "table" */
>>               } else if (iopgd_is_table(iopgd)) {
>
> Since the 16MB and 1MB sections might not actually be contiguous in
> machine address space, this patch replaces the guest allocated sections
> with pte tables pointing to the original IPAs. Is that right?

Yes, you are right.

Thank you for the review,

regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:39:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eAb-0005VZ-SW; Fri, 24 Jan 2014 10:39:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W6eAa-0005VE-Kl
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:39:28 +0000
Received: from [85.158.143.35:10948] by server-2.bemta-4.messagelabs.com id
	E5/AE-11386-FD242E25; Fri, 24 Jan 2014 10:39:27 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390559965!517203!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10435 invoked from network); 24 Jan 2014 10:39:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:39:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94083277"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 10:39:13 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:39:12 -0500
Message-ID: <52E242CE.4040400@citrix.com>
Date: Fri, 24 Jan 2014 11:39:10 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
In-Reply-To: <1390505326-9368-1-git-send-email-msw@linux.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 20:28, Matt Wilson wrote:
> From: Matt Rushton <mrushton@amazon.com>
> 
> Currently shrink_free_pagepool() is called before the pages used for
> persistent grants are released via free_persistent_gnts(). This
> results in a memory leak when a VBD that uses persistent grants is
> torn down.

Good catch.

> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: xen-devel@lists.xen.org
> Cc: Anthony Liguori <aliguori@amazon.com>
> Signed-off-by: Matt Rushton <mrushton@amazon.com>
> Signed-off-by: Matt Wilson <msw@amazon.com>
> ---
>  drivers/block/xen-blkback/blkback.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..30ef7b3 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -625,9 +625,6 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>  
> -	/* Since we are shutting down remove all pages from the buffer */
> -	shrink_free_pagepool(blkif, 0 /* All */);
> -
>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -636,6 +633,9 @@ purge_gnt_list:
>  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
>  	blkif->persistent_gnt_c = 0;

I think we should also add:

flush_work(&blkif->persistent_purge_work);

Here in case there's some pending purge work going on, which could also
add pages to the free list after we have cleaned it.

> +	/* Since we are shutting down remove all pages from the buffer */
> +	shrink_free_pagepool(blkif, 0 /* All */);
> +
>  	if (log_stats)
>  		print_stats(blkif);
>  
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:48:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eJa-000652-4c; Fri, 24 Jan 2014 10:48:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6eJY-00064x-M1
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:48:44 +0000
Received: from [85.158.137.68:21424] by server-1.bemta-3.messagelabs.com id
	3C/9F-29598-B0542E25; Fri, 24 Jan 2014 10:48:43 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390560521!9934606!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24588 invoked from network); 24 Jan 2014 10:48:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:48:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94085254"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 10:48:41 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:48:40 -0500
Message-ID: <52E24507.8060402@citrix.com>
Date: Fri, 24 Jan 2014 10:48:39 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1390534095-5097-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1390534095-5097-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v4] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 03:28, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> This patch removes grant transfer releasing code from netfront, and uses
> gnttab_end_foreign_access to end grant access since
> gnttab_end_foreign_access_ref may fail when the grant entry is
> currently used for reading or writing.
> 
> * Clean up grant transfer code kept from the old netfront (2.6.18), which
> grants pages for access/map and transfer. Grant transfer is deprecated in the
> current netfront, so remove the corresponding release code for transfer.
> 
> * Release grant access (through gnttab_end_foreign_access) and the skb for the
> tx/rx paths; use get_page to ensure the page is released only when grant access
> has completed successfully.
> 
> Xen-blkfront/xen-tpmfront/xen-pcifront also have a similar issue, but patches
> for them will be created separately.
...
> @@ -1439,8 +1403,11 @@ static int netfront_probe(struct xenbus_device *dev,
>  static void xennet_end_access(int ref, void *page)
>  {
>  	/* This frees the page as a side-effect */
> -	if (ref != GRANT_INVALID_REF)
> +	if (ref != GRANT_INVALID_REF) {
> +		get_page(virt_to_page(page));
>  		gnttab_end_foreign_access(ref, 0, (unsigned long)page);
> +		free_page((unsigned long)page);
> +	}
>  }

Please drop this hunk.

Otherwise,

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:49:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eKa-00069A-K6; Fri, 24 Jan 2014 10:49:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W6eKY-00068x-Ug
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:49:47 +0000
Received: from [85.158.143.35:50017] by server-3.bemta-4.messagelabs.com id
	81/C2-32360-A4542E25; Fri, 24 Jan 2014 10:49:46 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390560580!515157!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13025 invoked from network); 24 Jan 2014 10:49:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 10:49:42 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OAncPY016770
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 10:49:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OAnbMF024694
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 10:49:37 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OAnbof001569; Fri, 24 Jan 2014 10:49:37 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 02:49:36 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org
Date: Fri, 24 Jan 2014 18:51:34 +0800
Message-Id: <1390560694-2130-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: annie.li@oracle.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2] tools/xl: correctly shows split eventchannel
	for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

After the split event channel feature was added to netback/netfront,
"xl network-list" no longer shows the event channels correctly. Add tx-/rx-evt-ch
fields to show the tx/rx event channels correctly.

V2: keep evtch field without breaking the API

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 tools/libxl/libxl.c         |    8 ++++++++
 tools/libxl/libxl.h         |   12 ++++++++++++
 tools/libxl/libxl_types.idl |    3 +++
 tools/libxl/xl_cmdimpl.c    |   12 ++++++------
 4 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4ce9efc 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3142,6 +3142,14 @@ int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
     nicinfo->state = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel", nicpath));
     nicinfo->evtch = val ? strtoul(val, NULL, 10) : -1;
+    if(nicinfo->evtch > 0)
+        nicinfo->evtch_rx = nicinfo->evtch;
+    else {
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-tx", nicpath));
+        nicinfo->evtch = val ? strtoul(val, NULL, 10) : -1;
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-rx", nicpath));
+        nicinfo->evtch_rx = val ? strtoul(val, NULL, 10) : -1;
+    }
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/tx-ring-ref", nicpath));
     nicinfo->rref_tx = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/rx-ring-ref", nicpath));
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..27ef565 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,18 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
+ * If this is defined, libxl_nicinfo will contain an integer type field: evtch_rx,
+ * containing the value of eventchannel for rx path of netback&netfront which support
+ * split event channel. The original evtch field contains the value of eventchannel
+ * for tx path in such case.
+ *
+ */
+#if LIBXL_API_VERSION > 0x040400
+#define LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL 1
+#endif
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b1b4946 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -489,6 +489,9 @@ libxl_nicinfo = Struct("nicinfo", [
     ("devid", libxl_devid),
     ("state", integer),
     ("evtch", integer),
+#ifdef LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
+    ("evtch_rx", integer),
+#endif
     ("rref_tx", integer),
     ("rref_rx", integer),
     ], dir=DIR_OUT)
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..be1162e 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5842,9 +5842,9 @@ int main_networklist(int argc, char **argv)
         /* No options */
     }
 
-    /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
-    printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
-           "Idx", "BE", "Mac Addr.", "handle", "state", "evt-ch", "tx-", "rx-ring-ref", "BE-path");
+    /*      Idx  BE   MAC   Hdl  Sta  txev/rxev txr/rxr  BE-path */
+    printf("%-3s %-2s %-17s %-6s %-5s %6s/%-6s %5s/%-5s %-30s\n",
+           "Idx", "BE", "Mac Addr.", "handle", "state", "tx-", "rx-evt-ch", "tx-", "rx-ring-ref", "BE-path");
     for (argv += optind, argc -= optind; argc > 0; --argc, ++argv) {
         uint32_t domid = find_domain(*argv);
         nics = libxl_device_nic_list(ctx, domid, &nb);
@@ -5857,9 +5857,9 @@ int main_networklist(int argc, char **argv)
                 printf("%-3d %-2d ", nicinfo.devid, nicinfo.backend_id);
                 /* MAC */
                 printf(LIBXL_MAC_FMT, LIBXL_MAC_BYTES(nics[i].mac));
-                /* Hdl  Sta  evch txr/rxr  BE-path */
-                printf("%6d %5d %6d %5d/%-11d %-30s\n",
-                       nicinfo.devid, nicinfo.state, nicinfo.evtch,
+                /* Hdl  Sta  txev/rxev txr/rxr  BE-path */
+                printf(" %6d %5d %6d/%-9d %5d/%-11d %-30s\n",
+                       nicinfo.devid, nicinfo.state, nicinfo.evtch, nicinfo.evtch_rx,
                        nicinfo.rref_tx, nicinfo.rref_rx, nicinfo.backend);
                 libxl_nicinfo_dispose(&nicinfo);
             }
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eSM-0006Yk-DQ; Fri, 24 Jan 2014 10:57:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6eSL-0006Yd-2r
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 10:57:49 +0000
Received: from [85.158.139.211:14771] by server-4.bemta-5.messagelabs.com id
	C5/E2-26791-C2742E25; Fri, 24 Jan 2014 10:57:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390561066!11537903!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22277 invoked from network); 24 Jan 2014 10:57:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:57:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96121406"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 10:57:45 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:57:45 -0500
Message-ID: <52E24727.6090809@citrix.com>
Date: Fri, 24 Jan 2014 10:57:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org, Matt Wilson <msw@amazon.com>,
	Anthony Liguori <aliguori@amazon.com>
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid
 m2p_override	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 05:48, Matt Wilson wrote:
> On Thu, Jan 23, 2014 at 09:23:44PM +0000, Zoltan Kiss wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
>> for blkback and future netback patches it just causes lock contention, as
>> those pages never go to userspace. Therefore this series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>>   parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or just set
>>   the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>>   m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
>>
>> It also removes a stray space from page.h and change ret to 0 if
>> XENFEAT_auto_translated_physmap, as that is the only possible return value
>> there.
>>
>> v2:
>> - move the storing of the old mfn in page->index to gnttab_map_refs
>> - move the function header update to a separate patch
>>
>> v3:
>> - a new approach to retain old behaviour where it needed
>> - squash the patches into one
>>
>> v4:
>> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
>> - clear page->private before doing anything with the page, so m2p_find_override
>>   won't race with this
>>
>> v5:
>> - change return value handling in __gnttab_[un]map_refs
>> - remove a stray space in page.h
>> - add detail why ret = 0 now at some places
>>
>> v6:
>> - don't pass pfn to m2p* functions, just get it locally
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> 
> Apologies for coming in late on this thread. I'm quite behind on
> xen-devel mail that isn't CC: to me.
> 
> It seems to have been forgotten that Anthony and I proposed a similar
> change last November.
> 
> https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws
> 
> Or am I misunderstanding the change?

Yes, it's equivalent.  There doesn't seem to have been a follow-up patch
posted after Anthony's original, so it's not surprising that it fell
through the cracks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 05:48, Matt Wilson wrote:
> On Thu, Jan 23, 2014 at 09:23:44PM +0000, Zoltan Kiss wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
>> for blkback and future netback patches it just causes lock contention, as
>> those pages never go to userspace. Therefore this series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>>   parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or just set
>>   the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>>   m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
>>
>> It also removes a stray space from page.h and changes ret to 0 if
>> XENFEAT_auto_translated_physmap, as that is the only possible return value
>> there.
>>
>> v2:
>> - move the storing of the old mfn in page->index to gnttab_map_refs
>> - move the function header update to a separate patch
>>
>> v3:
>> - a new approach to retain old behaviour where it needed
>> - squash the patches into one
>>
>> v4:
>> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
>> - clear page->private before doing anything with the page, so m2p_find_override
>>   won't race with this
>>
>> v5:
>> - change return value handling in __gnttab_[un]map_refs
>> - remove a stray space in page.h
>> - add detail why ret = 0 now at some places
>>
>> v6:
>> - don't pass pfn to m2p* functions, just get it locally
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> 
> Apologies for coming in late on this thread. I'm quite behind on
> xen-devel mail that isn't CC: to me.
> 
> It seems to have been forgotten that Anthony and I proposed a similar
> change last November.
> 
> https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws
> 
> Or am I misunderstanding the change?

Yes, it's equivalent.  There doesn't seem to have been a follow-up patch
posted for Anthony's patch so it's not surprising that it's fallen
through the cracks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 10:58:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 10:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eTF-0006kF-TF; Fri, 24 Jan 2014 10:58:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6eTF-0006k8-1b
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 10:58:45 +0000
Received: from [85.158.139.211:23516] by server-6.bemta-5.messagelabs.com id
	05/A0-16310-46742E25; Fri, 24 Jan 2014 10:58:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390561122!11519770!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13576 invoked from network); 24 Jan 2014 10:58:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 10:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96121559"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 10:58:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 05:58:41 -0500
Message-ID: <1390561120.2124.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Annie Li <Annie.li@oracle.com>
Date: Fri, 24 Jan 2014 10:58:40 +0000
In-Reply-To: <1390560694-2130-1-git-send-email-Annie.li@oracle.com>
References: <1390560694-2130-1-git-send-email-Annie.li@oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 18:51 +0800, Annie Li wrote:

> +/*
> + * LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
> + * If this is defined, libxl_nicinfo will contain an integer type field: evtch_rx,
> + * containing the value of eventchannel for rx path of netback&netfront which support
> + * split event channel. The original evtch field contains the value of eventchannel
> + * for tx path in such case.

I think it can be either "event channel" (two words) or
"evtchn" (abbreviation) but not "eventchannel".

> + *
> + */
> +#if LIBXL_API_VERSION > 0x040400
> +#define LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL 1

This should be unconditional. There are several existing examples in
libxl.h which you could have copied.

> +#endif
> +
>  /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
>   * called from within libxl itself. Callers outside libxl, who
>   * do not #include libxl_internal.h, are fine. */
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b1b4946 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -489,6 +489,9 @@ libxl_nicinfo = Struct("nicinfo", [
>      ("devid", libxl_devid),
>      ("state", integer),
>      ("evtch", integer),
> +#ifdef LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
> +    ("evtch_rx", integer),
> +#endif

Did you even build this? Because it can't have worked (libxl_types.idl
is a Python source file which is not preprocessed).

In any case, this ifdef is unnecessary and wrong. The LIBXL_HAVE macro
indicates to the consumer that the field is present, but it should not
actually gate the presence of the field.

Think about it -- if an application is building against libxl version
4.4 (which was therefore built with evtch_rx present) but requests
LIBXL_API_VERSION == 0x040300 then your patch has now created an ABI
mismatch.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:05:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eZl-0007Ee-MK; Fri, 24 Jan 2014 11:05:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6eZj-0007ER-CN
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 11:05:27 +0000
Received: from [193.109.254.147:33446] by server-11.bemta-14.messagelabs.com
	id 0F/B8-20576-6F842E25; Fri, 24 Jan 2014 11:05:26 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390561524!11439309!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5076 invoked from network); 24 Jan 2014 11:05:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:05:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94088928"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 11:05:23 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 06:05:23 -0500
Message-ID: <52E248F0.1060708@citrix.com>
Date: Fri, 24 Jan 2014 11:05:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Elena Ufimtseva <ufimtseva@gmail.com>
References: <20140121232708.GA29787@amazon.com>	<20140122014908.GG18164@kroah.com>	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>	<20140122032045.GA22182@falcon.amazon.com>	<20140122050215.GC9931@konrad-lan.dumpdata.com>	<20140122072914.GA9283@orcus.uplinklabs.net>	<52DFD5DB.6060603@iogearbox.net>	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
In-Reply-To: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@suse.de>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 16:23, Elena Ufimtseva wrote:
> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
>> On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>>> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>>>> On 01/22/2014 08:29 AM, Steven Noonan wrote:
>>>>>
>>>>> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>>>>>>
>>>>>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>>>>>>>
>>>>>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>>>>>>>>
>>>>>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>>>>>>>> <gregkh@linuxfoundation.org> wrote:
>>>>>>
>>>>>>
>>>>>> Adding extra folks to the party.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Odds are this also shows up in 3.13, right?
>>>>>>>
>>>>>>>
>>>>>>> Reproduced using 3.13 on the PV guest:
>>>>>>>
>>>>>>>         [  368.756763] BUG: Bad page map in process mp
>>>>>>> pte:80000004a67c6165 pmd:e9b706067
>>>>>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1
>>>>>>> mapping:          (null) index:0x0
>>>>>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>>>>>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071
>>>>>>> anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>>>>>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2
>>>>>>> #1
>>>>>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0
>>>>>>> ffffffff814d8748 00007fd1388b7000
>>>>>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289
>>>>>>> 0000000000000000 0000000000000000
>>>>>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180
>>>>>>> 00007fd1388b8000 ffff880e9eaf3e30
>>>>>>>         [  368.756815] Call Trace:
>>>>>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>>>>>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>>>>>>         [  368.756837]  [<ffffffff8116eae3>]
>>>>>>> unmap_single_vma+0x583/0x890
>>>>>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>>>>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>>>>>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>>>>>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>>>>>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>>>>>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>>>>>>>         [  368.756869]  [<ffffffff814e70ed>]
>>>>>>> system_call_fastpath+0x1a/0x1f
>>>>>>>         [  368.756872] Disabling lock debugging due to kernel taint
>>>>>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680
>>>>>>> idx:0 val:-1
>>>>>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680
>>>>>>> idx:1 val:1
>>>>>>>
>>>>>>>>
>>>>>>>> Probably. I don't have a Xen PV setup to test with (and very little
>>>>>>>> interest in setting one up).. And I have a suspicion that it might not
>>>>>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>>>>>>>>
>>>>>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>>>>>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>>>>>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>>>>>>>> confused.
>>>>>>>>
>>>>>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>>>>>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>>>>>>>> _PAGE_PRESENT.
>>>>>>>>
>>>>>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>>>>>>>> Putting Steven's test-case here as an attachement for Andrea, maybe
>>>>>>>> that makes him go "Ahh, yes, silly case".
>>>>>>>>
>>>>>>>> Also added Kirill, because he was involved the last _PAGE_NUMA debacle.
>>>>>>>>
>>>>>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>>>>>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>>>>>>>> attached test-case (but apparently only under Xen PV). There it
>>>>>>>> apparently causes a "BUG: Bad page map .." error.
>>>>>>
>>>>>>
>>>>>> I *think* it is due to the fact that pmd_numa and pte_numa is getting the
>>>>>> _raw_
>>>>>> value of PMDs and PTEs. That is - it does not use the pvops interface
>>>>>> and instead reads the values directly from the page-table. Since the
>>>>>> page-table is also manipulated by the hypervisor - there are certain
>>>>>> flags it also sets to do its business. It might be that it uses
>>>>>> _PAGE_GLOBAL as well - and Linux picks up on that. If it was using
>>>>>> pte_flags that would invoke the pvops interface.
>>>>>>
>>>>>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>>>>>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
> 
> It does use _PAGE_GLOBAL for guest user pages
> 
>>>>>>
>>>>>> This not-compiled-totally-bad-patch might shed some light on what I was
>>>>>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>>>>>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>>>>>> for that).
>>>>>
>>>>>
>>>>> Unfortunately the Totally Bad Patch seems to make no difference. I am
>>>>> still able to repro the issue:
>>>
>>> Steven, do you use numa=fake on boot cmd line for pv guest?
>>>
>>> I had similar issue on pv guest. Let me check if the fix that resolved
>>> this for me will help with 3.13.
>>
>> Nope:
>>
>> # cat /proc/cmdline
>> root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
> 
>>
>>>
>>>>
>>>>
>>>> Maybe this one is also related to this BUG here (cc'ed people investigating
>>>> this one) ...
>>>>
>>>>   https://lkml.org/lkml/2014/1/10/427
>>>>
>>>> ... not sure, though.
>>>>
>>>>
>>>>>         [  346.374929] BUG: Bad page map in process mp
>>>>> pte:80000004ae928065 pmd:e993f9067
>>>>>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping:
>>>>> (null) index:0x0
>>>>>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>>>>>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071
>>>>> anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>>>>>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+
>>>>> #1
>>>>>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768
>>>>> 00007f06a9bbb000
>>>>>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000
>>>>> 0000000000000000
>>>>>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000
>>>>> ffff880e991a3e30
>>>>>         [  346.374979] Call Trace:
>>>>>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>>>>>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>>>>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>>>>>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>>>>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>>>>>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>>>>>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>>>>>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>>>>>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>>>>>         [  346.375034]  [<ffffffff814e712d>]
>>>>> system_call_fastpath+0x1a/0x1f
>>>>>         [  346.375037] Disabling lock debugging due to kernel taint
>>>>>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>>>>> idx:0 val:-1
>>>>>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00
>>>>> idx:1 val:1
>>>>>
>>>>> This dump doesn't look dramatically different, either.
>>>>>
>>>>>>
>>>>>> The other question is - how is AutoNUMA running when it is not enabled?
>>>>>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>>>>>> turned on?
>>>>>
>>>>>
>>>>> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>>>>> mean not enabled at runtime?
>>>>>
>>>>> [1]
>>>>> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>>
>>>
>>>
>>> --
>>> Elena
> 
> I was able to reproduce this consistently, also with the latest mm
> patches from yesterday.
> Can you please try this:
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..76dcf96 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
> *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val &
> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {

if (val & (_PAGE_PRESENT | _PAGE_NUMA))

is equivalent.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:05:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eZl-0007Ee-MK; Fri, 24 Jan 2014 11:05:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6eZj-0007ER-CN
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 11:05:27 +0000
Received: from [193.109.254.147:33446] by server-11.bemta-14.messagelabs.com
	id 0F/B8-20576-6F842E25; Fri, 24 Jan 2014 11:05:26 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390561524!11439309!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5076 invoked from network); 24 Jan 2014 11:05:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:05:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94088928"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 11:05:23 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 06:05:23 -0500
Message-ID: <52E248F0.1060708@citrix.com>
Date: Fri, 24 Jan 2014 11:05:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Elena Ufimtseva <ufimtseva@gmail.com>
References: <20140121232708.GA29787@amazon.com>	<20140122014908.GG18164@kroah.com>	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>	<20140122032045.GA22182@falcon.amazon.com>	<20140122050215.GC9931@konrad-lan.dumpdata.com>	<20140122072914.GA9283@orcus.uplinklabs.net>	<52DFD5DB.6060603@iogearbox.net>	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
In-Reply-To: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@suse.de>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/01/14 16:23, Elena Ufimtseva wrote:
> On Wed, Jan 22, 2014 at 3:33 PM, Steven Noonan <steven@uplinklabs.net> wrote:
>> On Wed, Jan 22, 2014 at 03:18:50PM -0500, Elena Ufimtseva wrote:
>>> On Wed, Jan 22, 2014 at 9:29 AM, Daniel Borkmann <borkmann@iogearbox.net> wrote:
>>>> On 01/22/2014 08:29 AM, Steven Noonan wrote:
>>>>>
>>>>> On Wed, Jan 22, 2014 at 12:02:15AM -0500, Konrad Rzeszutek Wilk wrote:
>>>>>>
>>>>>> On Tue, Jan 21, 2014 at 07:20:45PM -0800, Steven Noonan wrote:
>>>>>>>
>>>>>>> On Tue, Jan 21, 2014 at 06:47:07PM -0800, Linus Torvalds wrote:
>>>>>>>>
>>>>>>>> On Tue, Jan 21, 2014 at 5:49 PM, Greg Kroah-Hartman
>>>>>>>> <gregkh@linuxfoundation.org> wrote:
>>>>>>
>>>>>>
>>>>>> Adding extra folks to the party.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Odds are this also shows up in 3.13, right?
>>>>>>>
>>>>>>>
>>>>>>> Reproduced using 3.13 on the PV guest:
>>>>>>>
>>>>>>>         [  368.756763] BUG: Bad page map in process mp pte:80000004a67c6165 pmd:e9b706067
>>>>>>>         [  368.756777] page:ffffea001299f180 count:0 mapcount:-1 mapping:          (null) index:0x0
>>>>>>>         [  368.756781] page flags: 0x2fffff80000014(referenced|dirty)
>>>>>>>         [  368.756786] addr:00007fd1388b7000 vm_flags:00100071 anon_vma:ffff880e9ba15f80 mapping:          (null) index:7fd1388b7
>>>>>>>         [  368.756792] CPU: 29 PID: 618 Comm: mp Not tainted 3.13.0-ec2 #1
>>>>>>>         [  368.756795]  ffff880e9b718958 ffff880e9eaf3cc0 ffffffff814d8748 00007fd1388b7000
>>>>>>>         [  368.756803]  ffff880e9eaf3d08 ffffffff8116d289 0000000000000000 0000000000000000
>>>>>>>         [  368.756809]  ffff880e9b7065b8 ffffea001299f180 00007fd1388b8000 ffff880e9eaf3e30
>>>>>>>         [  368.756815] Call Trace:
>>>>>>>         [  368.756825]  [<ffffffff814d8748>] dump_stack+0x45/0x56
>>>>>>>         [  368.756833]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>>>>>>         [  368.756837]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>>>>>>>         [  368.756842]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>>>>>>         [  368.756847]  [<ffffffff81175dac>] unmap_region+0xac/0x120
>>>>>>>         [  368.756852]  [<ffffffff81176379>] ? vma_rb_erase+0x1c9/0x210
>>>>>>>         [  368.756856]  [<ffffffff81177f10>] do_munmap+0x280/0x370
>>>>>>>         [  368.756860]  [<ffffffff81178041>] vm_munmap+0x41/0x60
>>>>>>>         [  368.756864]  [<ffffffff81178f32>] SyS_munmap+0x22/0x30
>>>>>>>         [  368.756869]  [<ffffffff814e70ed>] system_call_fastpath+0x1a/0x1f
>>>>>>>         [  368.756872] Disabling lock debugging due to kernel taint
>>>>>>>         [  368.760084] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:0 val:-1
>>>>>>>         [  368.760091] BUG: Bad rss-counter state mm:ffff880e9d079680 idx:1 val:1
>>>>>>>
>>>>>>>>
>>>>>>>> Probably. I don't have a Xen PV setup to test with (and very little
>>>>>>>> interest in setting one up).. And I have a suspicion that it might not
>>>>>>>> be so much about Xen PV, as perhaps about the kind of hardware.
>>>>>>>>
>>>>>>>> I suspect the issue has something to do with the magic _PAGE_NUMA
>>>>>>>> tie-in with _PAGE_PRESENT. And then mprotect(PROT_NONE) ends up
>>>>>>>> removing the _PAGE_PRESENT bit, and now the crazy numa code is
>>>>>>>> confused.
>>>>>>>>
>>>>>>>> The whole _PAGE_NUMA thing is a f*cking horrible hack, and shares the
>>>>>>>> bit with _PAGE_PROTNONE, which is why it then has that tie-in to
>>>>>>>> _PAGE_PRESENT.
>>>>>>>>
>>>>>>>> Adding Andrea to the Cc, because he's the author of that horridness.
>>>>>>>> Putting Steven's test-case here as an attachment for Andrea, maybe
>>>>>>>> that makes him go "Ahh, yes, silly case".
>>>>>>>>
>>>>>>>> Also added Kirill, because he was involved in the last _PAGE_NUMA debacle.
>>>>>>>>
>>>>>>>> Andrea, you can find the thread on lkml, but it boils down to commit
>>>>>>>> 1667918b6483 (backported to 3.12.7 as 3d792d616ba4) breaking the
>>>>>>>> attached test-case (but apparently only under Xen PV). There it
>>>>>>>> apparently causes a "BUG: Bad page map .." error.
>>>>>>
>>>>>>
>>>>>> I *think* it is due to the fact that pmd_numa and pte_numa are getting
>>>>>> the _raw_ values of PMDs and PTEs. That is, they do not use the pvops
>>>>>> interface and instead read the values directly from the page table.
>>>>>> Since the page table is also manipulated by the hypervisor, there are
>>>>>> certain flags it sets to do its business. It might be that it uses
>>>>>> _PAGE_GLOBAL as well, and Linux picks up on that. If it were using
>>>>>> pte_flags, that would invoke the pvops interface.
>>>>>>
>>>>>> Elena, Dariof and George, you guys had been looking at this a bit deeper
>>>>>> than I have. Does the Xen hypervisor use the _PAGE_GLOBAL for PV guests?
> 
> It does use _PAGE_GLOBAL for guest user pages
> 
>>>>>>
>>>>>> This not-compiled-totally-bad-patch might shed some light on what I was
>>>>>> thinking _could_ fix this issue - and IS NOT A FIX - JUST A HACK.
>>>>>> It does not fix it for PMDs naturally (as there are no PMD paravirt ops
>>>>>> for that).
>>>>>
>>>>>
>>>>> Unfortunately the Totally Bad Patch seems to make no difference. I am
>>>>> still able to repro the issue:
>>>
>>> Steven, do you use numa=fake on boot cmd line for pv guest?
>>>
>>> I had similar issue on pv guest. Let me check if the fix that resolved
>>> this for me will help with 3.13.
>>
>> Nope:
>>
>> # cat /proc/cmdline
>> root=/dev/xvda1 ro rootwait rootfstype=ext4 nomodeset console=hvc0 earlyprintk=xen,verbose loglevel=7
> 
>>
>>>
>>>>
>>>>
>>>> Maybe this one is also related to this BUG here (cc'ed people investigating
>>>> this one) ...
>>>>
>>>>   https://lkml.org/lkml/2014/1/10/427
>>>>
>>>> ... not sure, though.
>>>>
>>>>
>>>>>         [  346.374929] BUG: Bad page map in process mp pte:80000004ae928065 pmd:e993f9067
>>>>>         [  346.374942] page:ffffea0012ba4a00 count:0 mapcount:-1 mapping: (null) index:0x0
>>>>>         [  346.374946] page flags: 0x2fffff80000014(referenced|dirty)
>>>>>         [  346.374951] addr:00007f06a9bbb000 vm_flags:00100071 anon_vma:ffff880e9939fe00 mapping:          (null) index:7f06a9bbb
>>>>>         [  346.374956] CPU: 29 PID: 609 Comm: mp Not tainted 3.13.0-ec2+ #1
>>>>>         [  346.374960]  ffff880e9cc38da8 ffff880e991a3cc0 ffffffff814d8768 00007f06a9bbb000
>>>>>         [  346.374967]  ffff880e991a3d08 ffffffff8116d289 0000000000000000 0000000000000000
>>>>>         [  346.374972]  ffff880e993f9dd8 ffffea0012ba4a00 00007f06a9bbc000 ffff880e991a3e30
>>>>>         [  346.374979] Call Trace:
>>>>>         [  346.374988]  [<ffffffff814d8768>] dump_stack+0x45/0x56
>>>>>         [  346.374996]  [<ffffffff8116d289>] print_bad_pte+0x229/0x250
>>>>>         [  346.375000]  [<ffffffff8116eae3>] unmap_single_vma+0x583/0x890
>>>>>         [  346.375006]  [<ffffffff8116feb5>] unmap_vmas+0x65/0x90
>>>>>         [  346.375011]  [<ffffffff81175dbc>] unmap_region+0xac/0x120
>>>>>         [  346.375016]  [<ffffffff81176389>] ? vma_rb_erase+0x1c9/0x210
>>>>>         [  346.375021]  [<ffffffff81177f20>] do_munmap+0x280/0x370
>>>>>         [  346.375025]  [<ffffffff81178051>] vm_munmap+0x41/0x60
>>>>>         [  346.375029]  [<ffffffff81178f42>] SyS_munmap+0x22/0x30
>>>>>         [  346.375034]  [<ffffffff814e712d>] system_call_fastpath+0x1a/0x1f
>>>>>         [  346.375037] Disabling lock debugging due to kernel taint
>>>>>         [  346.380082] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:0 val:-1
>>>>>         [  346.380088] BUG: Bad rss-counter state mm:ffff880e9d22bc00 idx:1 val:1
>>>>>
>>>>> This dump doesn't look dramatically different, either.
>>>>>
>>>>>>
>>>>>> The other question is - how is AutoNUMA running when it is not enabled?
>>>>>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>>>>>> turned on?
>>>>>
>>>>>
>>>>> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>>>>> mean not enabled at runtime?
>>>>>
>>>>> [1]
>>>>> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>>
>>>
>>>
>>> --
>>> Elena
> 
> I was able to reproduce this consistently, also with the latest mm
> patches from yesterday.
> Can you please try this:
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..76dcf96 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {

if (val & (_PAGE_PRESENT | _PAGE_NUMA))

is equivalent.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:05:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:05:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eZu-0007G6-3L; Fri, 24 Jan 2014 11:05:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W6eZt-0007Fu-65
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:05:37 +0000
Received: from [193.109.254.147:31100] by server-15.bemta-14.messagelabs.com
	id 4F/AE-22186-00942E25; Fri, 24 Jan 2014 11:05:36 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390561534!12968626!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8239 invoked from network); 24 Jan 2014 11:05:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 11:05:35 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OB5VNj031578
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 11:05:32 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0OB5VnJ005386
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 11:05:31 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OB5URA005361; Fri, 24 Jan 2014 11:05:31 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 03:05:30 -0800
Message-ID: <52E248F6.9080601@oracle.com>
Date: Fri, 24 Jan 2014 19:05:26 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1390534095-5097-1-git-send-email-Annie.li@oracle.com>
	<52E24507.8060402@citrix.com>
In-Reply-To: <52E24507.8060402@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v4] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/24 18:48, David Vrabel wrote:
> On 24/01/14 03:28, Annie Li wrote:
>> From: Annie Li <annie.li@oracle.com>
>>
>> This patch removes grant transfer releasing code from netfront, and uses
>> gnttab_end_foreign_access to end grant access since
>> gnttab_end_foreign_access_ref may fail when the grant entry is
>> currently used for reading or writing.
>>
>> * clean up grant transfer code kept from the old netfront (2.6.18), which
>> grants pages for access/map and transfer. Grant transfer is deprecated in the
>> current netfront, so remove the corresponding release code for transfer.
>>
>> * release grant access (through gnttab_end_foreign_access) and skb for tx/rx path,
>> use get_page to ensure page is released when grant access is completed successfully.
>>
>> Xen-blkfront/xen-tpmfront/xen-pcifront also have similar issue, but patches
>> for them will be created separately.
> ...
>> @@ -1439,8 +1403,11 @@ static int netfront_probe(struct xenbus_device *dev,
>>   static void xennet_end_access(int ref, void *page)
>>   {
>>   	/* This frees the page as a side-effect */
>> -	if (ref != GRANT_INVALID_REF)
>> +	if (ref != GRANT_INVALID_REF) {
>> +		get_page(virt_to_page(page));
>>   		gnttab_end_foreign_access(ref, 0, (unsigned long)page);
>> +		free_page((unsigned long)page);
>> +	}
>>   }

Oh, this code was left over from my debugging; I will remove it.

Thanks
Annie
> Please drop this hunk.
>
> Otherwise,
>
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
>
> David
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:06:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6eaG-0007LQ-It; Fri, 24 Jan 2014 11:06:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W6eaE-0007L4-Vd
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:05:59 +0000
Received: from [85.158.143.35:14913] by server-2.bemta-4.messagelabs.com id
	5F/83-11386-61942E25; Fri, 24 Jan 2014 11:05:58 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390561555!530576!1
X-Originating-IP: [64.18.0.20]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29772 invoked from network); 24 Jan 2014 11:05:56 -0000
Received: from exprod5og110.obsmtp.com (HELO exprod5og110.obsmtp.com)
	(64.18.0.20)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 11:05:56 -0000
Received: from mail-vb0-f41.google.com ([209.85.212.41]) (using TLSv1) by
	exprod5ob110.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuJJEuilG5JZAqRfTAgdptgCqeL+1gh5@postini.com;
	Fri, 24 Jan 2014 03:05:56 PST
Received: by mail-vb0-f41.google.com with SMTP id g10so1785935vbg.0
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:05:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=iE87RNpWJfJJwsSXNfIN3PP9OP389nIvmCIew3hjO94=;
	b=D9p92JuVcEtvvl5oA1A0yCYz15rjIVETdqSIFoMSUDXCIcSDdhPP5LLlEYsIuft1Xl
	kLXpkpZdyzXbCeyHHQZqWYVKeVaqITqzuzJxJhBTpTckEFy879fI974XsVCfPYQPgQER
	uSqoroIlJ+7AHVsimwRlg5sDefThKqqN0BGe8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=iE87RNpWJfJJwsSXNfIN3PP9OP389nIvmCIew3hjO94=;
	b=g6qjZsK86N1VubozmCfkhvv2CcNSuYgrJY8cUH2GzRSMIX7Lnv9Oh8ojK3jCxCwxFM
	A3r2fmouAa3S9fvG3Ej8iWZwZQx/6ljfE8ZU6gIrmx3d06Kxvad0IQH90Ye/CQVQK9DZ
	mZdf3BbDeTDDRY7UKnD/2T9FL+u0dOQTaIgcXD7zJcOjJi5hypzccsEV4SmP4GuMpKak
	Eawy4m3MKapcB+5PEE8fekSh5VxvcxVwyJOW74vcedq72qAXqnmcpkzinGRqgf8vCVpk
	vmpC/l2zqsf2ewoZqfa1ZiTvx8zwx0kvQJwb4tPkagIVAB5EGpPo708+quyBZuRvAtCr
	AbKg==
X-Gm-Message-State: ALoCoQkNP1gv0NOn8pN6eaGkcDV6gcHZAmCuD5Sh9yLn5HmKY1jvKdsHlnXCTjtlZ0FjWdzuu9xVLOgN00lab3Db0RMSUUu510jENkt74OyHf7PKLLVvoPbW6tvB0W8jmR3wDAKtnuAsb+I62rRWCDJY8crsw0nmOwxVRXPG7ynR2PUc6H+Wf2Q=
X-Received: by 10.221.34.211 with SMTP id st19mr7289230vcb.5.1390561554389;
	Fri, 24 Jan 2014 03:05:54 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.221.34.211 with SMTP id st19mr7289227vcb.5.1390561554305;
	Fri, 24 Jan 2014 03:05:54 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Fri, 24 Jan 2014 03:05:54 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
Date: Fri, 24 Jan 2014 13:05:54 +0200
Message-ID: <CAH_mUMMuWard8bQD4f+54tBF1CVSu9Og8wRGsoX6+_68aRghQg@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On Thu, Jan 23, 2014 at 4:39 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
>> omap IOMMU module is designed to handle access to external
>> omap MMUs, connected to the L3 bus.
>>
...
>> @@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
>>  {
>>      &vgic_distr_mmio_handler,
>>      &vuart_mmio_handler,
>> +     &mmu_mmio_handler,
>>  };
>>  #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
>
> I think that omap_iommu should be a platform specific driver, and it
> should hook into a set of platform specific mmio_handlers instead of
> using the generic mmio_handler structure.
>

Agree.

...

>> diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
>> index 8d252c0..acb5dff 100644
>> --- a/xen/arch/arm/io.h
>> +++ b/xen/arch/arm/io.h
>> @@ -42,6 +42,7 @@ struct mmio_handler {
>>
>>  extern const struct mmio_handler vgic_distr_mmio_handler;
>>  extern const struct mmio_handler vuart_mmio_handler;
>> +extern const struct mmio_handler mmu_mmio_handler;
>>
>>  extern int handle_mmio(mmio_info_t *info);
>>
>> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
>> new file mode 100644
>> index 0000000..4dab30f
>> --- /dev/null
>> +++ b/xen/arch/arm/omap_iommu.c
>
> It should probably live under xen/arch/arm/platforms.
>

Agree.

...

>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>> +{
>> +     void __iomem *pagetable = NULL;
>> +     u32 pgaddr;
>> +
>> +     ASSERT(mmu);
>> +
>> +     /* read address where kernel MMU pagetable is stored */
>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>> +     if (!pagetable) {
>
> Xen uses a different coding style from Linux, see CODING_STYLE.
>

Sure, thank you. I will update my editor settings to fit the Xen
coding style.

...
>
>> +             printk("%s: %s failed to map pagetable\n",
>> +                        __func__, mmu->name);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * pagetable can be changed since last time
>> +      * we accessed it therefore we need to copy it each time
>> +      */
>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>> +
>> +     iounmap(pagetable);
>
> Do you need to flush the dcache here?
>

No, this copies from the kernel to Xen. The kernel already has a valid
pagetable in its physical memory when this call happens, and the kernel
makes sure of this before accessing the MMU configuration register. The
only reason to flush the cache here would be if the kernel MMU driver
were modified somehow. That may be a point worth considering.

...

>> +
>> +             /* error */
>> +             } else {
>> +                     printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
>> +                     return -EINVAL;
>> +             }
>> +     }
>> +
>> +     /* force omap IOMMU to use new pagetable */
>> +     if (table_updated) {
>> +             paddr_t maddr;
>> +             flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
>
> So you are flushing the dcache all at once at the end, probably better
> this way.

Right. After all translations complete, I flush the caches. This guarantees
that physical memory will contain valid data for the MMU.

 ...

>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>> +
>> +     res = mmu_trap_write_access(dom, mmu, info);
>> +     if (res)
>> +             return res;
>> +
>> +    return 1;
>> +}
>
> I wonder if we actually need to trap guest accesses in all cases: if we
> leave the GPU/IPU in Dom0, mapped 1:1, then we don't need any traps.
> Maybe we can find a way to detect that so that we can avoid trapping and
> translating in that case.
>

If we leave the GPU/IPU in Dom0 with a 1:1 mapping, we won't need any
translation. But for now we need this in DomU, where a 1:1 mapping will
not be available.

>> +
>> +const struct mmio_handler mmu_mmio_handler = {
>> +     .check_handler = mmu_mmio_check,
>> +     .read_handler  = mmu_mmio_read,
>> +     .write_handler = mmu_mmio_write,
>> +};
>> +
>> +__initcall(mmu_init_all);
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>

Thank you for review,

Regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> --- a/xen/arch/arm/io.h
>> +++ b/xen/arch/arm/io.h
>> @@ -42,6 +42,7 @@ struct mmio_handler {
>>
>>  extern const struct mmio_handler vgic_distr_mmio_handler;
>>  extern const struct mmio_handler vuart_mmio_handler;
>> +extern const struct mmio_handler mmu_mmio_handler;
>>
>>  extern int handle_mmio(mmio_info_t *info);
>>
>> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
>> new file mode 100644
>> index 0000000..4dab30f
>> --- /dev/null
>> +++ b/xen/arch/arm/omap_iommu.c
>
> It should probably live under xen/arch/arm/platforms.
>

Agree.

...

>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>> +{
>> +     void __iomem *pagetable = NULL;
>> +     u32 pgaddr;
>> +
>> +     ASSERT(mmu);
>> +
>> +     /* read address where kernel MMU pagetable is stored */
>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>> +     if (!pagetable) {
>
> Xen uses a different coding style from Linux, see CODING_STYLE.
>

Sure, thank you. I will update my editor settings to conform to the Xen
coding style.

...
>
>> +             printk("%s: %s failed to map pagetable\n",
>> +                        __func__, mmu->name);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * pagetable can be changed since last time
>> +      * we accessed it therefore we need to copy it each time
>> +      */
>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>> +
>> +     iounmap(pagetable);
>
> Do you need to flush the dcache here?
>

No, this copies from the kernel to Xen. The kernel already has a valid
pagetable in its physical memory when this call happens; the kernel
ensures this before accessing the MMU configuration register. The only
reason to flush the cache here would be if the kernel MMU driver were
somehow modified. That may be worth considering.

...

>> +
>> +             /* error */
>> +             } else {
>> +                     printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
>> +                     return -EINVAL;
>> +             }
>> +     }
>> +
>> +     /* force omap IOMMU to use new pagetable */
>> +     if (table_updated) {
>> +             paddr_t maddr;
>> +             flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
>
> So you are flushing the dcache all at once at the end, probably better
> this way.

Right. After all translations complete, I flush the caches. This
guarantees that physical memory will contain valid data for the MMU.

...

>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>> +
>> +     res = mmu_trap_write_access(dom, mmu, info);
>> +     if (res)
>> +             return res;
>> +
>> +    return 1;
>> +}
>
> I wonder if we actually need to trap guest accesses in all cases: if we
> leave the GPU/IPU in Dom0, mapped 1:1, then we don't need any traps.
> Maybe we can find a way to detect that so that we can avoid trapping and
> translating in that case.
>

If we leave the GPU/IPU in Dom0 with a 1:1 mapping, we won't need any
translation. But for now we need this for DomU, where a 1:1 mapping will
not be available.

>> +
>> +const struct mmio_handler mmu_mmio_handler = {
>> +     .check_handler = mmu_mmio_check,
>> +     .read_handler  = mmu_mmio_read,
>> +     .write_handler = mmu_mmio_write,
>> +};
>> +
>> +__initcall(mmu_init_all);
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>

Thank you for review,

Regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:38:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6f52-0000A6-Ju; Fri, 24 Jan 2014 11:37:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6f51-0000A1-Mo
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:37:47 +0000
Received: from [85.158.139.211:34741] by server-9.bemta-5.messagelabs.com id
	5C/82-15098-B8052E25; Fri, 24 Jan 2014 11:37:47 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390563464!11722584!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3037 invoked from network); 24 Jan 2014 11:37:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:37:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94095227"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 11:37:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 06:37:43 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6f4x-0002wC-Fe;
	Fri, 24 Jan 2014 11:37:43 +0000
Message-ID: <52E25087.6050706@citrix.com>
Date: Fri, 24 Jan 2014 11:37:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52DE4BD8.7060209@citrix.com> <52E102F4.3060503@citrix.com>
	<52E11EC80200007800116238@nat28.tlf.novell.com>
	<52E14EA4.6010009@citrix.com>
	<52E23F2602000078001167EC@nat28.tlf.novell.com>
In-Reply-To: <52E23F2602000078001167EC@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 09:23, Jan Beulich wrote:
>>>> On 23.01.14 at 18:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> The value of time read from hvm_get_guest_time() resets with a new
>> domid, making it an inappropriate source of time for the described
>> function of the MSR.
>>
>> I suspect Windows 8 only notices at first on migration as I believe that
>> it is the first case where the generation ID is supposed to change and
>> signal a reset of state.  The detection of the failure is actually
>> further complicated as there appears to be a race condition between the
>> guest tools resetting the clock back to the correct value, and the DHCP
>> lease being flushed.  XenRT only notices the failure if the DHCP lease
>> is actually lost (thus XenRT can't communicate with its xmlrpc daemon
>> inside the VM), and doesn't directly notice the forward/backward stepping
>> in time.
>>
>> Anyway - please revert the patch - it will be a non-trivial change to
>> expose an appropriate source of time to be consumed by this MSR.
> Done, albeit not completely - I left the #define-s in place.
>
> Jan
>

Thanks - I have pulled XenServer's 4.4-rc2 branch forward to current
staging, and the w2k12 vmlifecycle tests are now working without error.

I shall organise another full nightly regression test for some time in
the next few days.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:41:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6f8X-0000TS-Qj; Fri, 24 Jan 2014 11:41:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8W-0000T7-7z; Fri, 24 Jan 2014 11:41:24 +0000
Received: from [85.158.139.211:8934] by server-1.bemta-5.messagelabs.com id
	B6/92-21065-36152E25; Fri, 24 Jan 2014 11:41:23 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390563681!11531387!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23729 invoked from network); 24 Jan 2014 11:41:22 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-3.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	24 Jan 2014 11:41:22 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8M-0002QH-Ic; Fri, 24 Jan 2014 11:41:14 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8L-0004Sl-QT; Fri, 24 Jan 2014 11:41:14 +0000
Date: Fri, 24 Jan 2014 11:41:14 +0000
Message-Id: <E1W6f8L-0004Sl-QT@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 87 - PHYSDEVOP_{prepare,
 release}_msix exposed to unprivileged guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                   Xen Security Advisory XSA-87

     PHYSDEVOP_{prepare,release}_msix exposed to unprivileged guests

ISSUE DESCRIPTION
=================

The PHYSDEVOP_{prepare,release}_msix operations are supposed to be available
to privileged guests (domain 0 in non-disaggregated setups) only, but the
necessary privilege check was missing.

IMPACT
======

Malicious or misbehaving unprivileged guests can cause the host or other
guests to malfunction. This can result in host-wide denial of service.
Privilege escalation, while seeming to be unlikely, cannot be excluded.

VULNERABLE SYSTEMS
==================

Xen 4.1.5 and 4.1.6.1 as well as 4.2.2 and later are vulnerable.
Xen 4.2.1 and 4.2.0 as well as 4.1.4 and earlier are not vulnerable.

Only PV guests can take advantage of this vulnerability.

MITIGATION
==========

Running only HVM guests will avoid this issue.

There is no mitigation available for PV guests.

NOTE REGARDING LACK OF EMBARGO
==============================

This issue was disclosed publicly on the xen-devel mailing list.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa87-unstable-4.3.patch    xen-unstable, Xen 4.3.x
xsa87-4.2.patch             Xen 4.2.x
xsa87-4.1.patch             Xen 4.1.x

$ sha256sum xsa87*.patch
45e5cc892626293067cc088a671a6bbdc18b018f54ff09b6a1cbb1fabbdf114d  xsa87-4.1.patch
df9c1507d7bb0e5266a2fadd992d1e6ed0f7bf5be7466b8a93ed3bd8e3ab8e8d  xsa87-4.2.patch
a13ce270b177d33537d627b85471abaa01215cd458541f4c6524914d7c81eb38  xsa87-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4TtaAAoJEIP+FMlX6CvZd+IH/i2WTmxuMRe4znSrGg2JJE1L
Wx3ioEKGnU/+5n2T94radln7lA85QvQJpIhwK6aA+BrPYhbtLKI5cq+d5LQ+RLmM
4YUvKZuoolyaHUZSs6XZCopExCz537CCW+rAPhUEGYgP6sLr5aGEG0x8AQimDAJX
YwlF1MqhfxYyWWI6xplzBo3ZoKlMQNikGOQN9isBF5J6ygQZYBgyfeK/M8C7PZlp
GAtVfLNYhbMuZLCJpUcrei7QXSERKf++Li7Vfc6WOZ4OzqPysNrJmMVlPwe/k9RZ
ldNznuYNsTV6WNl/SB4u6W1iygvYhXk4t1xyzIDlmVP+GwsHtuFW9IFiV2aZohc=
=ekUq
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa87-4.1.patch"
Content-Disposition: attachment; filename="xsa87-4.1.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vYXJjaC94ODYvcGh5c2Rldi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9waHlzZGV2LmMKQEAgLTU1NCw3ICs1NTQs
OSBAQCByZXRfdCBkb19waHlzZGV2X29wKGludCBjbWQsIFhFTl9HVUVTVF9I
CiAgICAgY2FzZSBQSFlTREVWT1BfcmVsZWFzZV9tc2l4OiB7CiAgICAgICAg
IHN0cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2UgZGV2OwogCi0gICAgICAgIGlm
ICggY29weV9mcm9tX2d1ZXN0KCZkZXYsIGFyZywgMSkgKQorICAgICAgICBp
ZiAoICFJU19QUklWKHYtPmRvbWFpbikgKQorICAgICAgICAgICAgcmV0ID0g
LUVQRVJNOworICAgICAgICBlbHNlIGlmICggY29weV9mcm9tX2d1ZXN0KCZk
ZXYsIGFyZywgMSkgKQogICAgICAgICAgICAgcmV0ID0gLUVGQVVMVDsKICAg
ICAgICAgZWxzZSBpZiAoIGRldi5zZWcgKQogICAgICAgICAgICAgcmV0ID0g
LUVPUE5PVFNVUFA7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa87-4.2.patch"
Content-Disposition: attachment; filename="xsa87-4.2.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vYXJjaC94ODYvcGh5c2Rldi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9waHlzZGV2LmMKQEAgLTYxMiw3ICs2MTIs
OSBAQCByZXRfdCBkb19waHlzZGV2X29wKGludCBjbWQsIFhFTl9HVUVTVF9I
CiAgICAgY2FzZSBQSFlTREVWT1BfcmVsZWFzZV9tc2l4OiB7CiAgICAgICAg
IHN0cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2UgZGV2OwogCi0gICAgICAgIGlm
ICggY29weV9mcm9tX2d1ZXN0KCZkZXYsIGFyZywgMSkgKQorICAgICAgICBp
ZiAoICFJU19QUklWKHYtPmRvbWFpbikgKQorICAgICAgICAgICAgcmV0ID0g
LUVQRVJNOworICAgICAgICBlbHNlIGlmICggY29weV9mcm9tX2d1ZXN0KCZk
ZXYsIGFyZywgMSkgKQogICAgICAgICAgICAgcmV0ID0gLUVGQVVMVDsKICAg
ICAgICAgZWxzZQogICAgICAgICAgICAgcmV0ID0gcGNpX3ByZXBhcmVfbXNp
eChkZXYuc2VnLCBkZXYuYnVzLCBkZXYuZGV2Zm4sCg==

--=separator
Content-Type: application/octet-stream; name="xsa87-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa87-unstable-4.3.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxh
bmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKLS0tIDIwMTQtMDEtMTQub3Jp
Zy94ZW4vYXJjaC94ODYvcGh5c2Rldi5jCTIwMTMtMTEtMTggMTE6MDM6Mzcu
MDAwMDAwMDAwICswMTAwCisrKyAyMDE0LTAxLTE0L3hlbi9hcmNoL3g4Ni9w
aHlzZGV2LmMJMjAxNC0wMS0yMiAxMjo0Nzo0Ny4wMDAwMDAwMDAgKzAxMDAK
QEAgLTY0MCw3ICs2NDAsMTAgQEAgcmV0X3QgZG9fcGh5c2Rldl9vcChpbnQg
Y21kLCBYRU5fR1VFU1RfSAogICAgICAgICBpZiAoIGNvcHlfZnJvbV9ndWVz
dCgmZGV2LCBhcmcsIDEpICkKICAgICAgICAgICAgIHJldCA9IC1FRkFVTFQ7
CiAgICAgICAgIGVsc2UKLSAgICAgICAgICAgIHJldCA9IHBjaV9wcmVwYXJl
X21zaXgoZGV2LnNlZywgZGV2LmJ1cywgZGV2LmRldmZuLAorICAgICAgICAg
ICAgcmV0ID0geHNtX3Jlc291cmNlX3NldHVwX3BjaShYU01fUFJJViwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKGRldi5z
ZWcgPDwgMTYpIHwgKGRldi5idXMgPDwgOCkgfAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBkZXYuZGV2Zm4pID86CisgICAg
ICAgICAgICAgICAgICBwY2lfcHJlcGFyZV9tc2l4KGRldi5zZWcsIGRldi5i
dXMsIGRldi5kZXZmbiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgY21kICE9IFBIWVNERVZPUF9wcmVwYXJlX21zaXgpOwogICAgICAg
ICBicmVhazsKICAgICB9Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Fri Jan 24 11:41:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6f8X-0000TS-Qj; Fri, 24 Jan 2014 11:41:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8W-0000T7-7z; Fri, 24 Jan 2014 11:41:24 +0000
Received: from [85.158.139.211:8934] by server-1.bemta-5.messagelabs.com id
	B6/92-21065-36152E25; Fri, 24 Jan 2014 11:41:23 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390563681!11531387!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23729 invoked from network); 24 Jan 2014 11:41:22 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-3.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	24 Jan 2014 11:41:22 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8M-0002QH-Ic; Fri, 24 Jan 2014 11:41:14 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6f8L-0004Sl-QT; Fri, 24 Jan 2014 11:41:14 +0000
Date: Fri, 24 Jan 2014 11:41:14 +0000
Message-Id: <E1W6f8L-0004Sl-QT@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 87 - PHYSDEVOP_{prepare,
 release}_msix exposed to unprivileged guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                   Xen Security Advisory XSA-87

     PHYSDEVOP_{prepare,release}_msix exposed to unprivileged guests

ISSUE DESCRIPTION
=================

The PHYSDEVOP_{prepare,release}_msix operations are supposed to be available
to privileged guests (domain 0 in non-disaggregated setups) only, but the
necessary privilege check was missing.

IMPACT
======

Malicious or misbehaving unprivileged guests can cause the host or other
guests to malfunction. This can result in host-wide denial of service.
Privilege escalation, while seeming to be unlikely, cannot be excluded.

VULNERABLE SYSTEMS
==================

Xen 4.1.5 and 4.1.6.1 as well as 4.2.2 and later are vulnerable.
Xen 4.2.1 and 4.2.0 as well as 4.1.4 and earlier are not vulnerable.

Only PV guests can take advantage of this vulnerability.

MITIGATION
==========

Running only HVM guests will avoid this issue.

There is no mitigation available for PV guests.

NOTE REGARDING LACK OF EMBARGO
==============================

This issue was disclosed publicly on the xen-devel mailing list.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa87-unstable-4.3.patch    xen-unstable, Xen 4.3.x
xsa87-4.2.patch             Xen 4.2.x
xsa87-4.1.patch             Xen 4.1.x

$ sha256sum xsa87*.patch
45e5cc892626293067cc088a671a6bbdc18b018f54ff09b6a1cbb1fabbdf114d  xsa87-4.1.patch
df9c1507d7bb0e5266a2fadd992d1e6ed0f7bf5be7466b8a93ed3bd8e3ab8e8d  xsa87-4.2.patch
a13ce270b177d33537d627b85471abaa01215cd458541f4c6524914d7c81eb38  xsa87-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4TtaAAoJEIP+FMlX6CvZd+IH/i2WTmxuMRe4znSrGg2JJE1L
Wx3ioEKGnU/+5n2T94radln7lA85QvQJpIhwK6aA+BrPYhbtLKI5cq+d5LQ+RLmM
4YUvKZuoolyaHUZSs6XZCopExCz537CCW+rAPhUEGYgP6sLr5aGEG0x8AQimDAJX
YwlF1MqhfxYyWWI6xplzBo3ZoKlMQNikGOQN9isBF5J6ygQZYBgyfeK/M8C7PZlp
GAtVfLNYhbMuZLCJpUcrei7QXSERKf++Li7Vfc6WOZ4OzqPysNrJmMVlPwe/k9RZ
ldNznuYNsTV6WNl/SB4u6W1iygvYhXk4t1xyzIDlmVP+GwsHtuFW9IFiV2aZohc=
=ekUq
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa87-4.1.patch"
Content-Disposition: attachment; filename="xsa87-4.1.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vYXJjaC94ODYvcGh5c2Rldi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9waHlzZGV2LmMKQEAgLTU1NCw3ICs1NTQs
OSBAQCByZXRfdCBkb19waHlzZGV2X29wKGludCBjbWQsIFhFTl9HVUVTVF9I
CiAgICAgY2FzZSBQSFlTREVWT1BfcmVsZWFzZV9tc2l4OiB7CiAgICAgICAg
IHN0cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2UgZGV2OwogCi0gICAgICAgIGlm
ICggY29weV9mcm9tX2d1ZXN0KCZkZXYsIGFyZywgMSkgKQorICAgICAgICBp
ZiAoICFJU19QUklWKHYtPmRvbWFpbikgKQorICAgICAgICAgICAgcmV0ID0g
LUVQRVJNOworICAgICAgICBlbHNlIGlmICggY29weV9mcm9tX2d1ZXN0KCZk
ZXYsIGFyZywgMSkgKQogICAgICAgICAgICAgcmV0ID0gLUVGQVVMVDsKICAg
ICAgICAgZWxzZSBpZiAoIGRldi5zZWcgKQogICAgICAgICAgICAgcmV0ID0g
LUVPUE5PVFNVUFA7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa87-4.2.patch"
Content-Disposition: attachment; filename="xsa87-4.2.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vYXJjaC94ODYvcGh5c2Rldi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9waHlzZGV2LmMKQEAgLTYxMiw3ICs2MTIs
OSBAQCByZXRfdCBkb19waHlzZGV2X29wKGludCBjbWQsIFhFTl9HVUVTVF9I
CiAgICAgY2FzZSBQSFlTREVWT1BfcmVsZWFzZV9tc2l4OiB7CiAgICAgICAg
IHN0cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2UgZGV2OwogCi0gICAgICAgIGlm
ICggY29weV9mcm9tX2d1ZXN0KCZkZXYsIGFyZywgMSkgKQorICAgICAgICBp
ZiAoICFJU19QUklWKHYtPmRvbWFpbikgKQorICAgICAgICAgICAgcmV0ID0g
LUVQRVJNOworICAgICAgICBlbHNlIGlmICggY29weV9mcm9tX2d1ZXN0KCZk
ZXYsIGFyZywgMSkgKQogICAgICAgICAgICAgcmV0ID0gLUVGQVVMVDsKICAg
ICAgICAgZWxzZQogICAgICAgICAgICAgcmV0ID0gcGNpX3ByZXBhcmVfbXNp
eChkZXYuc2VnLCBkZXYuYnVzLCBkZXYuZGV2Zm4sCg==

--=separator
Content-Type: application/octet-stream; name="xsa87-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa87-unstable-4.3.patch"
Content-Transfer-Encoding: base64

eDg2OiBQSFlTREVWT1Bfe3ByZXBhcmUscmVsZWFzZX1fbXNpeCBhcmUgcHJp
dmlsZWdlZAoKWWV0IHRoaXMgd2Fzbid0IGJlaW5nIGVuZm9yY2VkLgoKVGhp
cyBpcyBYU0EtODcuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxh
bmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKLS0tIDIwMTQtMDEtMTQub3Jp
Zy94ZW4vYXJjaC94ODYvcGh5c2Rldi5jCTIwMTMtMTEtMTggMTE6MDM6Mzcu
MDAwMDAwMDAwICswMTAwCisrKyAyMDE0LTAxLTE0L3hlbi9hcmNoL3g4Ni9w
aHlzZGV2LmMJMjAxNC0wMS0yMiAxMjo0Nzo0Ny4wMDAwMDAwMDAgKzAxMDAK
QEAgLTY0MCw3ICs2NDAsMTAgQEAgcmV0X3QgZG9fcGh5c2Rldl9vcChpbnQg
Y21kLCBYRU5fR1VFU1RfSAogICAgICAgICBpZiAoIGNvcHlfZnJvbV9ndWVz
dCgmZGV2LCBhcmcsIDEpICkKICAgICAgICAgICAgIHJldCA9IC1FRkFVTFQ7
CiAgICAgICAgIGVsc2UKLSAgICAgICAgICAgIHJldCA9IHBjaV9wcmVwYXJl
X21zaXgoZGV2LnNlZywgZGV2LmJ1cywgZGV2LmRldmZuLAorICAgICAgICAg
ICAgcmV0ID0geHNtX3Jlc291cmNlX3NldHVwX3BjaShYU01fUFJJViwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKGRldi5z
ZWcgPDwgMTYpIHwgKGRldi5idXMgPDwgOCkgfAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBkZXYuZGV2Zm4pID86CisgICAg
ICAgICAgICAgICAgICBwY2lfcHJlcGFyZV9tc2l4KGRldi5zZWcsIGRldi5i
dXMsIGRldi5kZXZmbiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgY21kICE9IFBIWVNERVZPUF9wcmVwYXJlX21zaXgpOwogICAgICAg
ICBicmVhazsKICAgICB9Cg==
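
Decoded, the unstable/4.3 attachment gates the operation on xsm_resource_setup_pci(XSM_PRIV, ...) instead, packing segment, bus, and devfn into a single machine SBDF word and falling through to pci_prepare_msix() only when the XSM check returns 0 (via GCC's `?:` conditional with omitted middle operand). The packing can be sketched standalone (pack_sbdf is an illustrative name, not a Xen function):

```c
#include <assert.h>
#include <stdint.h>

/* Layout of the machine SBDF word built in the patch:
 * segment in bits 31:16, bus in bits 15:8, devfn in bits 7:0. */
static uint32_t pack_sbdf(uint16_t seg, uint8_t bus, uint8_t devfn)
{
    return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) | devfn;
}
```

In `a ?: b`, the first operand is returned when non-zero, so a failing (non-zero) XSM verdict becomes the hypercall result and pci_prepare_msix() is skipped.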

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Fri Jan 24 11:49:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fG7-0001Sh-PB; Fri, 24 Jan 2014 11:49:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6fG1-0001SB-37
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:49:12 +0000
Received: from [85.158.137.68:38191] by server-11.bemta-3.messagelabs.com id
	57/7C-19379-43352E25; Fri, 24 Jan 2014 11:49:08 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390564145!11117896!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5191 invoked from network); 24 Jan 2014 11:49:07 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:49:07 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so3177458pbb.3
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:49:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=olKxaI7Im4eNyPbWBTntwcTVwu8gRZ2ep3V/8uEjshs=;
	b=RbGfq4bWonwStkv10WBDASx2WfE9RYi6MKiU14gjesCm3aO3UG0KyC7UVBNSlKsG1m
	GAqIVIazSsyak3rMiytpVA1oxz0v5j7YHav5iUyME1l7vlSAUaVmqeDx1xlsEHkIIEr2
	GFFSulN64xq9OIhROqUbCIJLV4aLzJBCnLQiKITB341STrwnLSz/FdB7zpd3w+kBGfMg
	iZftpRgVSWifEicZcypRF5CYrs3Y2ccft/vjsUD7y6yNVupbv7iRapgKHPuViGcBH9Hq
	59129/MRX89OorNF7RQoZHCfms+Cxuhj/JONL/OZUsGLhuAlM95OztzYVo1POMUEwGML
	JgQg==
X-Gm-Message-State: ALoCoQkVRTzYJ1nyRNTTtXii1KXUhCw5Wm5FpPW+cQ7I4H7xIzQdS5Ddzg68gxoPMJakVOYB7k9T
X-Received: by 10.68.196.226 with SMTP id ip2mr13780413pbc.106.1390564145387; 
	Fri, 24 Jan 2014 03:49:05 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id i10sm5621665pat.11.2014.01.24.03.48.59
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 03:49:04 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri, 24 Jan 2014 17:18:49 +0530
Message-Id: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V3] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

V3:
- Retrieving the reset base address and reset mask from the device tree.
- Removed unnecessary header files included earlier.
V2:
- Removed unnecessary mdelay in code.
- Added iounmap of the base address.
V1:
- Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   70 ++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..986284c 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,8 +20,17 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
+#define DT_MATCH_RESET                      \
+    DT_MATCH_COMPATIBLE("apm,xgene-reboot")
+
+/* Variables to save the reset address of the SoC during platform initialization. */
+static u64 reset_addr, reset_size;
+static u32 reset_mask;
+
 static uint32_t xgene_storm_quirks(void)
 {
     return PLATFORM_QUIRK_GIC_64K_STRIDE;
@@ -107,6 +116,65 @@ err:
     return ret;
 }
 
+/*
+ * The base address and mask are read from the device tree in xgene_storm_init().
+ */
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(reset_addr, reset_size);
+
+    if ( !addr )
+    {
+        printk("Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write the reset mask to the base address */
+    writel(reset_mask, addr);
+
+    iounmap(addr);
+}
+
+static int xgene_storm_init(void)
+{
+    static const struct dt_device_match reset_ids[] __initconst =
+    {
+        DT_MATCH_RESET,
+        {},
+    };
+    struct dt_device_node *dev;
+    int res;
+
+    dev = dt_find_matching_node(NULL, reset_ids);
+    if ( !dev )
+    {
+        printk("Unable to find a compatible reset node in "
+               "the device tree\n");
+        return -ENODEV;
+    }
+
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    /* Retrieve base address and size */
+    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
+    if ( res )
+    {
+        printk("Unable to retrieve the base address for reset\n");
+        return res;
+    }
+
+    /* Get reset mask */
+    res = dt_property_read_u32(dev, "mask", &reset_mask);
+    if ( !res )
+    {
+        printk("Unable to retrieve the reset mask\n");
+        return -ENOENT;
+    }
+
+    return 0;
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +184,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .init = xgene_storm_init,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5
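
One detail worth noting about the hunk above: in Xen's device-tree API, dt_device_get_address() follows the negative-errno convention, while dt_property_read_u32() returns a boolean (true on success), so a failed property read has to be mapped to an errno explicitly rather than propagated as-is. A minimal sketch with a stubbed reader (stub_read_u32 and read_reset_mask are illustrative names, not Xen functions):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Stub standing in for a boolean-returning property reader; the
 * "device tree" is reduced to a presence flag for illustration. */
static bool stub_read_u32(const char *name, uint32_t *out, bool present)
{
    (void)name;
    if ( !present )
        return false;
    *out = 0x1;            /* pretend the "mask" property holds 1 */
    return true;
}

/* The caller must translate the boolean failure into a negative
 * errno; returning the boolean itself would report success (0). */
static int read_reset_mask(bool present, uint32_t *mask)
{
    if ( !stub_read_u32("mask", mask, present) )
        return -ENOENT;
    return 0;
}
```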


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:49:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fGB-0001TO-Db; Fri, 24 Jan 2014 11:49:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W6fG8-0001Sy-FO
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:49:16 +0000
Received: from [85.158.139.211:61586] by server-1.bemta-5.messagelabs.com id
	3A/FF-21065-B3352E25; Fri, 24 Jan 2014 11:49:15 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390564152!11681696!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24549 invoked from network); 24 Jan 2014 11:49:14 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 11:49:14 -0000
Received: from mail-vb0-f54.google.com ([209.85.212.54]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuJTN2ontSHMSbhGB44ZBUQ1BY9r333j@postini.com;
	Fri, 24 Jan 2014 03:49:13 PST
Received: by mail-vb0-f54.google.com with SMTP id w20so1813854vbb.27
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:49:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ETDJqEcewhvpeHmQp8l6jIk5yX4u5iGBj/ooP0mzcOU=;
	b=Leja18fxGvjiDLY+1eqZx2vhdJFsgK60oem/ZRKY9EYIdcnP36lMoBwGGnNx21rUtz
	dH/I5S7IoYhCJemYfK0cocVdUo3jzAK71DffdhAQAU3ii0u8c8g1VRv9of6Ko+O5lNgP
	jTA23dwER2leZCAkWy8pAcVoXMVpTgAGN79vY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ETDJqEcewhvpeHmQp8l6jIk5yX4u5iGBj/ooP0mzcOU=;
	b=GagbQaEKxX0a1sA0Z0ncEZ5CTtxukVE0HcB+F0vqoaOTXmBC1cC53CVP3X+ARMGcql
	0PMqXw4oKDSIdNgzx39cl7WQ1gzvE9nl/4B5gNtBzGKs/udgMCnzrNrnLEH/D5sgTE7A
	QvPwAOOrD2BjbwYreyYCfb/dw9+vjGAJf1DOm1MLH6Ivh7u1RJg9TpcLw+7tky9T1PeF
	dx1sOO/KAEpPLp/nviYwlu3BTn6aAcER4fbTY7k0qhMcaXv+IpMllaykWjAgteQN06GK
	7HpueaeGyPHsDhLUJoIFGK6KkxDgTBm7EVL2x3nbcdTIMQkuQ9wiJ9VFbZhcK4Qhaoib
	6D6g==
X-Gm-Message-State: ALoCoQlGBNKGngfLRgbHBmZ8JeGjKPEthqJcI6P+S4kqWOUQEmZ95lUj4lazjnh3o2A/NGc4zy4Q3TQ6+M8wBYNzi/RcAF+voDLp7Li4emaD48w+XiowHHjYDa4M8JojNkMkH70WOZn8N1KEaOFG3I8kegKiWmP1rXFV8TnQ4iQcjmI60Rd+nSo=
X-Received: by 10.59.6.7 with SMTP id cq7mr2062962ved.14.1390564151032;
	Fri, 24 Jan 2014 03:49:11 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.59.6.7 with SMTP id cq7mr2062955ved.14.1390564150868; Fri,
	24 Jan 2014 03:49:10 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Fri, 24 Jan 2014 03:49:10 -0800 (PST)
In-Reply-To: <52E135E6.7030109@linaro.org>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<52E135E6.7030109@linaro.org>
Date: Fri, 24 Jan 2014 13:49:10 +0200
Message-ID: <CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Julien,

On Thu, Jan 23, 2014 at 5:31 PM, Julien Grall <julien.grall@linaro.org> wrote:
> On 01/22/2014 03:52 PM, Andrii Tseglytskyi wrote:
>> omap IOMMU module is designed to handle access to external
>> omap MMUs, connected to the L3 bus.

[...]

>
>> +struct mmu_info {
>> +     const char                      *name;
>> +     paddr_t                         mem_start;
>> +     u32                                     mem_size;
>> +     u32                                     *pagetable;
>> +     void __iomem            *mem_map;
>> +};
>> +
>> +static struct mmu_info omap_ipu_mmu = {
>
> static const?
>

Unfortunately, no. I like const modifiers very much and try to put
them everywhere I can, but in these structs I need to modify several
fields during MMU configuration.

[...]

>> +     .name           = "IPU_L2_MMU",
>> +     .mem_start      = 0x55082000,
>> +     .mem_size       = 0x1000,
>> +     .pagetable      = NULL,
>> +};
>> +
>> +static struct mmu_info omap_dsp_mmu = {
>
> static const?
>

The same as previous.

>> +     .name           = "DSP_L2_MMU",
>> +     .mem_start      = 0x4a066000,
>> +     .mem_size       = 0x1000,
>> +     .pagetable      = NULL,
>> +};
>> +
>> +static struct mmu_info *mmu_list[] = {
>
> static const?
>

The same as previous.

>> +     &omap_ipu_mmu,
>> +     &omap_dsp_mmu,
>> +};
>> +
>> +#define mmu_for_each(pfunc, data)                                            \
>> +({                                                                                                           \
>> +     u32 __i;                                                                                        \
>> +     int __res = 0;                                                                          \
>> +                                                                                                             \
>> +     for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {      \
>> +             __res |= pfunc(mmu_list[__i], data);                    \
>
> Your res |= will result in a "wrong" errno if you have multiple failures.
> Would it be better to have:
>
> __res = pfunc(...)
> if ( __res )
>   break;
>

I know. I tried both solutions - mine and what you proposed. Agree in
general, will update this.


>> +     }                                                                                                       \
>> +     __res;                                                                                          \
>> +})
>> +
>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>> +{
>> +     if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
>> +             return 1;
>> +
>> +     return 0;
>> +}
>> +
>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>> +{
>> +     u32 i;
>> +
>> +     for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
>> +             if (mmu_check_mem_range(mmu_list[i], addr))
>> +                     return mmu_list[i];
>> +     }
>> +
>> +     return NULL;
>> +}
>> +
>> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
>> +{
>> +     return (reg & mask) | (va & (~mask));
>> +}
>> +
>> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
>> +{
>> +     return (reg & ~mask) | pa;
>> +}
>> +
>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>> +{
>> +     return mmu_for_each(mmu_check_mem_range, addr);
>> +}
>
> As I understand your cover letter, the device (and therefore the MMU) is
> only passthrough to a single guest, right?
>
> If so, your mmu_mmio_check should check if the domain is handling the
> device.
> With your current code any guest can write to this range and rewrite
> the MMU page table.
>

Oh, I knew that someone would catch this :)
That is the next step for this patch series: to make sure that only one
guest can configure / access the MMU.
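
The mask-based helpers quoted earlier (mmu_virt_to_phys / mmu_phys_to_virt) can be exercised standalone; the mask selects the page-frame bits taken from the register, and the low bits are the offset carried over from the virtual address:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone copies of the translation helpers from the quoted patch. */
static uint32_t mmu_virt_to_phys(uint32_t reg, uint32_t va, uint32_t mask)
{
    /* frame bits from the register, offset bits from the VA */
    return (reg & mask) | (va & ~mask);
}

static uint32_t mmu_phys_to_virt(uint32_t reg, uint32_t pa, uint32_t mask)
{
    /* low register bits kept, masked PA merged in */
    return (reg & ~mask) | pa;
}
```

With a 4 KiB page mask (0xfffff000) the forward translation keeps the 12-bit page offset of the VA and takes the frame from the register.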

>> +
>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>> +{
>> +     void __iomem *pagetable = NULL;
>> +     u32 pgaddr;
>> +
>> +     ASSERT(mmu);
>> +
>> +     /* read address where kernel MMU pagetable is stored */
>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>> +     if (!pagetable) {
>> +             printk("%s: %s failed to map pagetable\n",
>> +                        __func__, mmu->name);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * pagetable can be changed since last time
>> +      * we accessed it therefore we need to copy it each time
>> +      */
>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>> +
>> +     iounmap(pagetable);
>> +
>> +     return 0;
>> +}
>
> I'm confused, it should copy from the guest MMU pagetable, right? In
> this case you should use map_domain_page.
> ioremap *MUST* only be used with device memory, otherwise memory
> coherency is not guaranteed.
>

OK. Will try this.

> [..]
>
>> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
>> +{
>> +     struct domain *dom = v->domain;
>> +     struct mmu_info *mmu = NULL;
>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    register_t *r = select_user_reg(regs, info->dabt.reg);
>> +     int res;
>> +
>> +     mmu = mmu_lookup(info->gpa);
>> +     if (!mmu) {
>> +             printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * make sure that user register is written first in this function
>> +      * following calls may expect valid data in it
>> +      */
>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>
> Hmmm ... I think this is very confusing, you should only write to the
> memory if mmu_trap_write_access has not failed. And use "*r" where it's
> needed.
>
> Writing to the device memory could have side effect (for instance
> updating the page table with the wrong translation...).
>

Agreed - it is a bit confusing here, but I need valid data in the
user register.
The following calls use it:
mmu_trap_write_access()->mmu_translate_pagetable()->mmu_copy_pagetable()->pgaddr
= readl(mmu->mem_map + MMU_TTB);
The last read will be from the register written in this function. Taking
your comment into account, I will think about changing this logic.

>> +
>> +     res = mmu_trap_write_access(dom, mmu, info);
>> +     if (res)
>> +             return res;
>> +
>> +    return 1;
>> +}
>> +
>> +static int mmu_init(struct mmu_info *mmu, u32 data)
>> +{
>> +     ASSERT(mmu);
>> +     ASSERT(!mmu->mem_map);
>> +     ASSERT(!mmu->pagetable);
>> +
>> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
>
> Can you use ioremap_nocache instead of ioremap? The behavior is the same
> but the former name is less confusing.
>

Sure.

> --
> Julien Grall

Thank you for review.

Regards,
Andrii


-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:49:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fG7-0001Sh-PB; Fri, 24 Jan 2014 11:49:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6fG1-0001SB-37
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:49:12 +0000
Received: from [85.158.137.68:38191] by server-11.bemta-3.messagelabs.com id
	57/7C-19379-43352E25; Fri, 24 Jan 2014 11:49:08 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390564145!11117896!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5191 invoked from network); 24 Jan 2014 11:49:07 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:49:07 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so3177458pbb.3
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:49:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=olKxaI7Im4eNyPbWBTntwcTVwu8gRZ2ep3V/8uEjshs=;
	b=RbGfq4bWonwStkv10WBDASx2WfE9RYi6MKiU14gjesCm3aO3UG0KyC7UVBNSlKsG1m
	GAqIVIazSsyak3rMiytpVA1oxz0v5j7YHav5iUyME1l7vlSAUaVmqeDx1xlsEHkIIEr2
	GFFSulN64xq9OIhROqUbCIJLV4aLzJBCnLQiKITB341STrwnLSz/FdB7zpd3w+kBGfMg
	iZftpRgVSWifEicZcypRF5CYrs3Y2ccft/vjsUD7y6yNVupbv7iRapgKHPuViGcBH9Hq
	59129/MRX89OorNF7RQoZHCfms+Cxuhj/JONL/OZUsGLhuAlM95OztzYVo1POMUEwGML
	JgQg==
X-Gm-Message-State: ALoCoQkVRTzYJ1nyRNTTtXii1KXUhCw5Wm5FpPW+cQ7I4H7xIzQdS5Ddzg68gxoPMJakVOYB7k9T
X-Received: by 10.68.196.226 with SMTP id ip2mr13780413pbc.106.1390564145387; 
	Fri, 24 Jan 2014 03:49:05 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id i10sm5621665pat.11.2014.01.24.03.48.59
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 03:49:04 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri, 24 Jan 2014 17:18:49 +0530
Message-Id: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V3] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a reset support for xgene arm64 platform.

V3:
- Retriving reset base address and reset mask from device tree.
- Removed unnecssary header files included earlier.
V2:
- Removed unnecssary mdelay in code.
- Adding iounmap of the base address.
V1:
-Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   70 ++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..986284c 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,8 +20,17 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
+#define DT_MATCH_RESET                      \
+    DT_MATCH_COMPATIBLE("apm,xgene-reboot")
+
+/* Variables to save reset address of soc during platform initialization. */
+static u64 reset_addr, reset_size;
+static u32 reset_mask;
+
 static uint32_t xgene_storm_quirks(void)
 {
     return PLATFORM_QUIRK_GIC_64K_STRIDE;
@@ -107,6 +116,65 @@ err:
     return ret;
 }
 
+/*
+ * TODO: Get base address and mask from the device tree
+ */
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(reset_addr, reset_size);
+
+    if ( !addr )
+    {
+        printk("Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write mask 0x1 to base address */
+    writel(reset_mask, addr);
+
+    iounmap(addr);
+}
+
+static int xgene_storm_init(void)
+{
+    static const struct dt_device_match reset_ids[] __initconst =
+    {
+        DT_MATCH_RESET,
+        {},
+    };
+    struct dt_device_node *dev;
+    int res;
+
+    dev = dt_find_matching_node(NULL, reset_ids);
+    if ( !dev )
+    {
+        printk("Unable to find a compatible reset node in "
+               "the device tree");
+        return -ENODEV;
+    }
+
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    /* Retrieve base address and size */
+    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
+    if ( res )
+    {
+        printk("Unable to retrieve the base address for reset\n");
+        return res;
+    }
+
+    /* Get reset mask */
+    res = dt_property_read_u32(dev, "mask", &reset_mask);
+    if ( !res )
+    {
+        printk("Unable to retrieve the reset mask\n");
+        return res;
+    }
+
+    return 0;
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +184,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .init = xgene_storm_init,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:49:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fGB-0001TO-Db; Fri, 24 Jan 2014 11:49:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W6fG8-0001Sy-FO
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:49:16 +0000
Received: from [85.158.139.211:61586] by server-1.bemta-5.messagelabs.com id
	3A/FF-21065-B3352E25; Fri, 24 Jan 2014 11:49:15 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390564152!11681696!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24549 invoked from network); 24 Jan 2014 11:49:14 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 11:49:14 -0000
Received: from mail-vb0-f54.google.com ([209.85.212.54]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuJTN2ontSHMSbhGB44ZBUQ1BY9r333j@postini.com;
	Fri, 24 Jan 2014 03:49:13 PST
Received: by mail-vb0-f54.google.com with SMTP id w20so1813854vbb.27
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:49:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ETDJqEcewhvpeHmQp8l6jIk5yX4u5iGBj/ooP0mzcOU=;
	b=Leja18fxGvjiDLY+1eqZx2vhdJFsgK60oem/ZRKY9EYIdcnP36lMoBwGGnNx21rUtz
	dH/I5S7IoYhCJemYfK0cocVdUo3jzAK71DffdhAQAU3ii0u8c8g1VRv9of6Ko+O5lNgP
	jTA23dwER2leZCAkWy8pAcVoXMVpTgAGN79vY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ETDJqEcewhvpeHmQp8l6jIk5yX4u5iGBj/ooP0mzcOU=;
	b=GagbQaEKxX0a1sA0Z0ncEZ5CTtxukVE0HcB+F0vqoaOTXmBC1cC53CVP3X+ARMGcql
	0PMqXw4oKDSIdNgzx39cl7WQ1gzvE9nl/4B5gNtBzGKs/udgMCnzrNrnLEH/D5sgTE7A
	QvPwAOOrD2BjbwYreyYCfb/dw9+vjGAJf1DOm1MLH6Ivh7u1RJg9TpcLw+7tky9T1PeF
	dx1sOO/KAEpPLp/nviYwlu3BTn6aAcER4fbTY7k0qhMcaXv+IpMllaykWjAgteQN06GK
	7HpueaeGyPHsDhLUJoIFGK6KkxDgTBm7EVL2x3nbcdTIMQkuQ9wiJ9VFbZhcK4Qhaoib
	6D6g==
X-Gm-Message-State: ALoCoQlGBNKGngfLRgbHBmZ8JeGjKPEthqJcI6P+S4kqWOUQEmZ95lUj4lazjnh3o2A/NGc4zy4Q3TQ6+M8wBYNzi/RcAF+voDLp7Li4emaD48w+XiowHHjYDa4M8JojNkMkH70WOZn8N1KEaOFG3I8kegKiWmP1rXFV8TnQ4iQcjmI60Rd+nSo=
X-Received: by 10.59.6.7 with SMTP id cq7mr2062962ved.14.1390564151032;
	Fri, 24 Jan 2014 03:49:11 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.59.6.7 with SMTP id cq7mr2062955ved.14.1390564150868; Fri,
	24 Jan 2014 03:49:10 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Fri, 24 Jan 2014 03:49:10 -0800 (PST)
In-Reply-To: <52E135E6.7030109@linaro.org>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<52E135E6.7030109@linaro.org>
Date: Fri, 24 Jan 2014 13:49:10 +0200
Message-ID: <CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Julien,

On Thu, Jan 23, 2014 at 5:31 PM, Julien Grall <julien.grall@linaro.org> wrote:
> On 01/22/2014 03:52 PM, Andrii Tseglytskyi wrote:
>> omap IOMMU module is designed to handle access to external
>> omap MMUs, connected to the L3 bus.

[...]

>
>> +struct mmu_info {
>> +     const char                      *name;
>> +     paddr_t                         mem_start;
>> +     u32                                     mem_size;
>> +     u32                                     *pagetable;
>> +     void __iomem            *mem_map;
>> +};
>> +
>> +static struct mmu_info omap_ipu_mmu = {
>
> static const?
>

Unfortunately, no. I like const modifiers very much and try to put
them everywhere I can, but in these structs I need to modify several
fields during MMU configuratiion.

[...]

>> +     .name           = "IPU_L2_MMU",
>> +     .mem_start      = 0x55082000,
>> +     .mem_size       = 0x1000,
>> +     .pagetable      = NULL,
>> +};
>> +
>> +static struct mmu_info omap_dsp_mmu = {
>
> static const?
>

The same as previous.

>> +     .name           = "DSP_L2_MMU",
>> +     .mem_start      = 0x4a066000,
>> +     .mem_size       = 0x1000,
>> +     .pagetable      = NULL,
>> +};
>> +
>> +static struct mmu_info *mmu_list[] = {
>
> static const?
>

The same as previous.

>> +     &omap_ipu_mmu,
>> +     &omap_dsp_mmu,
>> +};
>> +
>> +#define mmu_for_each(pfunc, data)                                            \
>> +({                                                                                                           \
>> +     u32 __i;                                                                                        \
>> +     int __res = 0;                                                                          \
>> +                                                                                                             \
>> +     for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {      \
>> +             __res |= pfunc(mmu_list[__i], data);                    \
>
> Your res |= will result in a "wrong" errno if you have multiple failures.
> Would it be better to have:
>
> __res = pfunc(...)
> if ( __res )
>   break;
>

I know. I tried both solutions - mine and the one you proposed. I
agree in general and will update this.


>> +     }                                                                                                       \
>> +     __res;                                                                                          \
>> +})
>> +
>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>> +{
>> +     if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
>> +             return 1;
>> +
>> +     return 0;
>> +}
>> +
>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>> +{
>> +     u32 i;
>> +
>> +     for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
>> +             if (mmu_check_mem_range(mmu_list[i], addr))
>> +                     return mmu_list[i];
>> +     }
>> +
>> +     return NULL;
>> +}
>> +
>> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
>> +{
>> +     return (reg & mask) | (va & (~mask));
>> +}
>> +
>> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
>> +{
>> +     return (reg & ~mask) | pa;
>> +}
>> +
>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>> +{
>> +     return mmu_for_each(mmu_check_mem_range, addr);
>> +}
>
> As I understand your cover letter, the device (and therefore the MMU) is
> only passthrough to a single guest, right?
>
> If so, your mmu_mmio_check should check if the domain is handling the
> device.
> With your current code any guest can write to this range and rewriting
> the MMU page table.
>

Oh, I knew that someone would catch this :)
This is the next step for this patch series - to make sure that only
one guest can configure / access the MMU.

>> +
>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>> +{
>> +     void __iomem *pagetable = NULL;
>> +     u32 pgaddr;
>> +
>> +     ASSERT(mmu);
>> +
>> +     /* read address where kernel MMU pagetable is stored */
>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>> +     if (!pagetable) {
>> +             printk("%s: %s failed to map pagetable\n",
>> +                        __func__, mmu->name);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * pagetable can be changed since last time
>> +      * we accessed it therefore we need to copy it each time
>> +      */
>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>> +
>> +     iounmap(pagetable);
>> +
>> +     return 0;
>> +}
>
> I'm confused, it should copy from the guest MMU pagetable, right? In
> this case you should use map_domain_page.
> ioremap *MUST* only be used with device memory, otherwise memory
> coherency is not guaranteed.
>

OK. Will try this.

> [..]
>
>> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
>> +{
>> +     struct domain *dom = v->domain;
>> +     struct mmu_info *mmu = NULL;
>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>> +    register_t *r = select_user_reg(regs, info->dabt.reg);
>> +     int res;
>> +
>> +     mmu = mmu_lookup(info->gpa);
>> +     if (!mmu) {
>> +             printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * make sure that user register is written first in this function
>> +      * following calls may expect valid data in it
>> +      */
>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>
> Hmmm ... I think this is very confusing, you should only write to the
> memory if mmu_trap_write_access has not failed. And use "*r" where it's
> needed.
>
> Writing to the device memory could have side effect (for instance
> updating the page table with the wrong translation...).
>

Agree - it is a bit confusing here. But I need valid data in the
user register, because the following calls use it:
mmu_trap_write_access()->mmu_translate_pagetable()->mmu_copy_pagetable()->pgaddr
= readl(mmu->mem_map + MMU_TTB);
The last read is from the register written in this function. Taking
your comment into account, I will think about changing this logic.
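To illustrate Julien's point, here is a toy sketch of the suggested ordering: validate/translate the guest value first, and only touch device memory once that has succeeded. All names here (translate, fake_mmio_reg, the error value) are illustrative stand-ins, not the actual Xen/OMAP code:

```c
#include <stdint.h>

/* Stand-in for the device register at mmu->mem_map + offset. */
uint32_t fake_mmio_reg;

/* Toy translation step: pretend 0 is an invalid page-table address
 * and that translation sets an attribute bit on success. */
static int translate(uint32_t val, uint32_t *out)
{
    if (val == 0)
        return -22;      /* -EINVAL */
    *out = val | 0x1;
    return 0;
}

int mmio_write(uint32_t guest_val)
{
    uint32_t maddr;
    int res = translate(guest_val, &maddr);

    if (res)             /* on failure, the device register is untouched */
        return res;

    fake_mmio_reg = maddr;  /* commit only after translation succeeded */
    return 1;
}
```

This way a bad guest value can never reach the device, so a failed trap has no side effects on the real MMU page table.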

>> +
>> +     res = mmu_trap_write_access(dom, mmu, info);
>> +     if (res)
>> +             return res;
>> +
>> +    return 1;
>> +}
>> +
>> +static int mmu_init(struct mmu_info *mmu, u32 data)
>> +{
>> +     ASSERT(mmu);
>> +     ASSERT(!mmu->mem_map);
>> +     ASSERT(!mmu->pagetable);
>> +
>> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
>
> Can you use ioremap_nocache instead of ioremap? The behavior is the same
> but the former name is less confusing.
>

Sure.

> --
> Julien Grall

Thank you for the review.

Regards,
Andrii


-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fI3-0001ft-7V; Fri, 24 Jan 2014 11:51:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6fHx-0001fT-Fs
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:51:13 +0000
Received: from [85.158.137.68:57423] by server-14.bemta-3.messagelabs.com id
	0C/0F-06105-CA352E25; Fri, 24 Jan 2014 11:51:08 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390564266!11119341!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28841 invoked from network); 24 Jan 2014 11:51:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:51:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94098383"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 11:51:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 06:51:04 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6fHs-00036N-9n;
	Fri, 24 Jan 2014 11:51:04 +0000
Date: Fri, 24 Jan 2014 11:49:56 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <CAH_mUMMuWard8bQD4f+54tBF1CVSu9Og8wRGsoX6+_68aRghQg@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401241146490.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
	<CAH_mUMMuWard8bQD4f+54tBF1CVSu9Og8wRGsoX6+_68aRghQg@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Jan 2014, Andrii Tseglytskyi wrote:
> >> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
> >> +
> >> +     res = mmu_trap_write_access(dom, mmu, info);
> >> +     if (res)
> >> +             return res;
> >> +
> >> +    return 1;
> >> +}
> >
> > I wonder if we actually need to trap guest accesses in all cases: if we
> > leave the GPU/IPU in Dom0, mapped 1:1, then we don't need any traps.
> > Maybe we can find a way to detect that so that we can avoid trapping and
> > translating in that case.
> >
> 
> If we leave the GPU/IPU in Dom0 with 1:1 mapping we won't need any
> translation. But for now we need this in DomU, where 1:1 mapping will
> not be available.

Sure, that is fine.

What I meant to say is that in order to make this code more generic, it
would be a good idea to check whether the guest is dom0 or domU and only
trap MMU accesses if the guest is domU (because it is not necessary for
dom0).
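A toy sketch of that check - "struct domain" and the dom0-is-1:1 assumption are simplified stand-ins for the real Xen structures, not the actual API:

```c
typedef unsigned short domid_t;
struct domain { domid_t domain_id; };

/* Simplified stand-in for Xen's dom0 test. */
static int is_dom0(const struct domain *d)
{
    return d->domain_id == 0;
}

/* Return 1 if MMU accesses from this domain need trap-and-translate.
 * dom0 is mapped 1:1, so its MMU programming needs no translation. */
int mmu_needs_trap(const struct domain *d)
{
    return !is_dom0(d);
}
```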

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:57:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fNa-000239-PT; Fri, 24 Jan 2014 11:56:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6fNY-000233-Pn
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:56:57 +0000
Received: from [85.158.137.68:31660] by server-1.bemta-3.messagelabs.com id
	E7/9D-29598-80552E25; Fri, 24 Jan 2014 11:56:56 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390564614!9952348!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22334 invoked from network); 24 Jan 2014 11:56:55 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:56:55 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so3682169qaq.25
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:56:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=w6GGpW7qNooY5N+KyZPIx4awZMJGDreFWhA+NQWgbA8=;
	b=LZtjKUwyvnGwANi47FRYTXLoWMNa921pPiEYTzeqWlIxxTk3fBuTvS5kryN0cbwX0E
	JxrGazKqcZhLEZ495lBydl9tmiSKIZ1nDY764y5mpIynf+5Glm4Tc4YyixywVtBdaQ0g
	8root38rSFQw3p7ZUvCPZTUXPf1WQbnnIxUuFrsjhqpp4yVUr9E61snBUJdebboKvZWx
	YfI4bfKr5ctfBC0nDISUd2CHpiSZJEflAHes8W804DipHyYO1+pYIDl0MFFHX0DxFi0f
	wCnWiReXX08/kNFKvPi971WNNwJ+etsHXxHvo2e6khtP3zxC3Fu2U/WNLMJkJqhF6+wq
	30bg==
X-Gm-Message-State: ALoCoQkHGwU2uZBnMzyNosOGdceQvN/yoeICdr0qS+AoGx/7eylvtXaij/TLkAIuUOMi7XGxt134
MIME-Version: 1.0
X-Received: by 10.140.85.179 with SMTP id n48mr18253489qgd.91.1390564613939;
	Fri, 24 Jan 2014 03:56:53 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Fri, 24 Jan 2014 03:56:53 -0800 (PST)
In-Reply-To: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
References: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
Date: Fri, 24 Jan 2014 17:26:53 +0530
Message-ID: <CAAHg+HgiAJH1KFT_rbAut9NZR7Z_hawyTOgKiAwozYfG1UjYbg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Cc: Ian Campbell <ian.campbell@citrix.com>, Anup Patel <anup.patel@linaro.org>,
	Patch Tracking <patches@linaro.org>, patches@apm.com,
	stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V3] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On 24 January 2014 17:18, Pranavkumar Sawargaonkar
<pranavkumar@linaro.org> wrote:
> This patch adds reset support for the xgene arm64 platform.
>
> V3:
> - Retrieving reset base address and reset mask from the device tree.
> - Removed unnecessary header files included earlier.
> V2:
> - Removed unnecessary mdelay in code.
> - Added iounmap of the base address.
> V1:
> - Initial patch.
>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>  xen/arch/arm/platforms/xgene-storm.c |   70 ++++++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
>
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 5b0bd5f..986284c 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -20,8 +20,17 @@
>
>  #include <xen/config.h>
>  #include <asm/platform.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
>  #include <asm/gic.h>
>
> +#define DT_MATCH_RESET                      \
> +    DT_MATCH_COMPATIBLE("apm,xgene-reboot")
> +
> +/* Variables to save reset address of soc during platform initialization. */
> +static u64 reset_addr, reset_size;
> +static u32 reset_mask;
> +
>  static uint32_t xgene_storm_quirks(void)
>  {
>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> @@ -107,6 +116,65 @@ err:
>      return ret;
>  }
>
> +/*
> + * TODO: Get base address and mask from the device tree
> + */

I will resend the patch with this comment removed.

> +static void xgene_storm_reset(void)
> +{
> +    void __iomem *addr;
> +
> +    addr = ioremap_nocache(reset_addr, reset_size);
> +
> +    if ( !addr )
> +    {
> +        printk("Unable to map xgene reset address\n");
> +        return;
> +    }
> +
> +    /* Write mask 0x1 to base address */
> +    writel(reset_mask, addr);
> +
> +    iounmap(addr);
> +}
> +
> +static int xgene_storm_init(void)
> +{
> +    static const struct dt_device_match reset_ids[] __initconst =
> +    {
> +        DT_MATCH_RESET,
> +        {},
> +    };
> +    struct dt_device_node *dev;
> +    int res;
> +
> +    dev = dt_find_matching_node(NULL, reset_ids);
> +    if ( !dev )
> +    {
> +        printk("Unable to find a compatible reset node in "
> +               "the device tree");
> +        return -ENODEV;
> +    }
> +
> +    dt_device_set_used_by(dev, DOMID_XEN);
> +
> +    /* Retrieve base address and size */
> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> +    if ( res )
> +    {
> +        printk("Unable to retrieve the base address for reset\n");
> +        return res;
> +    }
> +
> +    /* Get reset mask */
> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> +    if ( !res )
> +    {
> +        printk("Unable to retrieve the reset mask\n");
> +        return res;
> +    }
> +
> +    return 0;
> +}
>
>  static const char * const xgene_storm_dt_compat[] __initconst =
>  {
> @@ -116,6 +184,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>
>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>      .compatible = xgene_storm_dt_compat,
> +    .init = xgene_storm_init,
> +    .reset = xgene_storm_reset,
>      .quirks = xgene_storm_quirks,
>      .specific_mapping = xgene_storm_specific_mapping,
>
> --
> 1.7.9.5
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 11:58:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fOv-000297-Iq; Fri, 24 Jan 2014 11:58:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6fOu-00028g-Ib
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:58:20 +0000
Received: from [85.158.137.68:41034] by server-15.bemta-3.messagelabs.com id
	29/37-11556-A5552E25; Fri, 24 Jan 2014 11:58:18 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390564695!11131717!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18926 invoked from network); 24 Jan 2014 11:58:17 -0000
From xen-devel-bounces@lists.xen.org Fri Jan 24 11:58:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 11:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fOv-000297-Iq; Fri, 24 Jan 2014 11:58:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W6fOu-00028g-Ib
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 11:58:20 +0000
Received: from [85.158.137.68:41034] by server-15.bemta-3.messagelabs.com id
	29/37-11556-A5552E25; Fri, 24 Jan 2014 11:58:18 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390564695!11131717!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18926 invoked from network); 24 Jan 2014 11:58:17 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 11:58:17 -0000
Received: by mail-pa0-f41.google.com with SMTP id fa1so3208137pad.0
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 03:58:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=0uioFf4N1wTfMsL/ZjOl5CI9k47sa+kD1goBbMO+rTQ=;
	b=YHL/qU3FvLvkUi46Cl6xstzRLyzs7BxRsNS/JgvX0i8PIeQxrGE2nJa3vze5I7C4O8
	Gm6mENzSJr1sU6R9JT1IYdMfQls2nMVECIcFtGeqo3tLtsM3LuzB3cR2ej4MTPjAoPEf
	E9WyS3eQVS2I6jwkntVNFIxpOIx/lsMVzQZFyTPbkywPg2dKZelhXLXAYCIo2vyjfdL6
	L5bT2EpgZUig/AepOtFxklwcofv6dyNhHE5WlykCsOtNGxtc4fS3j86WCM5VnuiqpOhv
	yRFe7EOEHV1/LgkyvwSr/mGi2ksoO07J67bXpIEQ5cZlST6A0cLJR6YvbZMnf0RTg9Ig
	fFSg==
X-Gm-Message-State: ALoCoQnfOEjVfKjkI2j3m6FEc3pbC+eqwDA6i08mpw9gENK07EUGT92ZAMrIsihI08OV23DW/TY5
X-Received: by 10.66.188.203 with SMTP id gc11mr13821962pac.63.1390564695496; 
	Fri, 24 Jan 2014 03:58:15 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id db3sm2556546pbb.10.2014.01.24.03.58.12
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 03:58:14 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri, 24 Jan 2014 17:28:03 +0530
Message-Id: <1390564683-3905-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V4] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

V4:
- Removed the TODO comment about retrieving the reset base address from
  dts, as that is done now.
V3:
- Retrieving reset base address and reset mask from the device tree.
- Removed unnecessary header files included earlier.
V2:
- Removed unnecessary mdelay in code.
- Added iounmap of the base address.
V1:
- Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   67 ++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..0ad4b4b 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,8 +20,17 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
+#define DT_MATCH_RESET                      \
+    DT_MATCH_COMPATIBLE("apm,xgene-reboot")
+
+/* Reset address, size and mask of the SoC, saved during platform initialization. */
+static u64 reset_addr, reset_size;
+static u32 reset_mask;
+
 static uint32_t xgene_storm_quirks(void)
 {
     return PLATFORM_QUIRK_GIC_64K_STRIDE;
@@ -107,6 +116,62 @@ err:
     return ret;
 }
 
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(reset_addr, reset_size);
+
+    if ( !addr )
+    {
+        printk("Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write reset mask to base address */
+    writel(reset_mask, addr);
+
+    iounmap(addr);
+}
+
+static int xgene_storm_init(void)
+{
+    static const struct dt_device_match reset_ids[] __initconst =
+    {
+        DT_MATCH_RESET,
+        {},
+    };
+    struct dt_device_node *dev;
+    int res;
+
+    dev = dt_find_matching_node(NULL, reset_ids);
+    if ( !dev )
+    {
+        printk("Unable to find a compatible reset node in "
+               "the device tree\n");
+        return -ENODEV;
+    }
+
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    /* Retrieve base address and size */
+    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
+    if ( res )
+    {
+        printk("Unable to retrieve the base address for reset\n");
+        return res;
+    }
+
+    /* Get reset mask */
+    res = dt_property_read_u32(dev, "mask", &reset_mask);
+    if ( !res )
+    {
+        printk("Unable to retrieve the reset mask\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +181,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .init = xgene_storm_init,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:01:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fSH-0002j4-18; Fri, 24 Jan 2014 12:01:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6fSF-0002iq-Ej
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 12:01:47 +0000
Received: from [85.158.137.68:58625] by server-15.bemta-3.messagelabs.com id
	0A/BE-11556-A2652E25; Fri, 24 Jan 2014 12:01:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390564904!9953830!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8407 invoked from network); 24 Jan 2014 12:01:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:01:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96134730"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 12:01:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 07:01:43 -0500
Message-ID: <1390564902.2124.73.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Fri, 24 Jan 2014 12:01:42 +0000
In-Reply-To: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
References: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V3] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 17:18 +0530, Pranavkumar Sawargaonkar wrote:
> This patch adds a reset support for xgene arm64 platform.
> 
> V3:
> - Retriving reset base address and reset mask from device tree.
> - Removed unnecssary header files included earlier.
> V2:
> - Removed unnecssary mdelay in code.
> - Adding iounmap of the base address.
> V1:
> -Initial patch.
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>  xen/arch/arm/platforms/xgene-storm.c |   70 ++++++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 5b0bd5f..986284c 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -20,8 +20,17 @@
>  
>  #include <xen/config.h>
>  #include <asm/platform.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
>  #include <asm/gic.h>
>  
> +#define DT_MATCH_RESET                      \
> +    DT_MATCH_COMPATIBLE("apm,xgene-reboot")

The gic and timer use a #define here because it is needed in multiple
places, for this use case you can just inline it into the array in
xgene_storm_init. i.e.:

> +static int xgene_storm_init(void)
> +{
> +    static const struct dt_device_match reset_ids[] __initconst =
> +    {
> +        DT_MATCH_RESET,

           DT_MATCH_COMPATIBLE("apm,xgene-reboot")

is fine IMHO.

> +        {},
> +    };
> +    struct dt_device_node *dev;
> +    int res;
> +
> +    dev = dt_find_matching_node(NULL, reset_ids);
> +    if ( !dev )
> +    {
> +        printk("Unable to find a compatible reset node in "
> +               "the device tree");

Please don't wrap string constants; it makes them hard to grep, and I'd
rather have a long line (in this case it's not too long either).

Please can you add an xgene: (or whatever is appropriate) prefix too.

> +        return -ENODEV;

I wonder if it is worth returning success here? The system would be
mostly functional after all.

(You could apply this logic to the other returns if you wish, although
if the node is present then an error in the content could be considered
more critical to abort on)

> +    }
> +
> +    dt_device_set_used_by(dev, DOMID_XEN);
> +
> +    /* Retrieve base address and size */
> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> +    if ( res )
> +    {
> +        printk("Unable to retrieve the base address for reset\n");
> +        return res;
> +    }
> +
> +    /* Get reset mask */
> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> +    if ( !res )
> +    {
> +        printk("Unable to retrieve the reset mask\n");
> +        return res;
> +    }

All looks good, thanks.


> +
> +    return 0;
> +}
>  
>  static const char * const xgene_storm_dt_compat[] __initconst =
>  {
> @@ -116,6 +184,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>  
>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>      .compatible = xgene_storm_dt_compat,
> +    .init = xgene_storm_init,
> +    .reset = xgene_storm_reset,
>      .quirks = xgene_storm_quirks,
>      .specific_mapping = xgene_storm_specific_mapping,
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:03:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fTP-0002ra-HR; Fri, 24 Jan 2014 12:02:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6fTO-0002rN-P9
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 12:02:58 +0000
Received: from [85.158.139.211:60832] by server-5.bemta-5.messagelabs.com id
	FE/D8-14928-07652E25; Fri, 24 Jan 2014 12:02:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390564974!11728905!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14516 invoked from network); 24 Jan 2014 12:02:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:02:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96135042"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 12:02:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 07:02:53 -0500
Message-ID: <1390564971.2124.74.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Arianna Avanzini <avanzini.arianna@gmail.com>
Date: Fri, 24 Jan 2014 12:02:51 +0000
In-Reply-To: <1390558450.2124.24.camel@kazak.uk.xensource.com>
References: <52E239AB.8040906@gmail.com>
	<1390558450.2124.24.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Paolo Gai <pj@evidence.eu.com>, Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory
 for dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 10:14 +0000, Ian Campbell wrote:
> As a workaround you could try changing the load addresses of Xen and
> the kernel, dtb etc used by u-boot to pack them towards the top of
> RAM. This *should* result in the entire lower half of RAM being
> available which will make it more likely to achieve the necessary
> alignment constraints for a dom0 taking up to half of the system RAM.
> I've not actually tried this 

I have now. Using the boot script from
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Allwinner with:
        # cubieboard2
        # Top of RAM:        0x80000000
        # Xen relocate addr  0x7fe00000
        setenv kernel_addr_r 0x7fa00000 # 4M
        setenv fdt_addr      0x7f800000 # 2M
        setenv xen_addr_r    0x7f600000 # 2M
allows for dom0_mem=512M.

On a cubietruck with 2GB of RAM addresses 0xbxxxxxxx instead of
0x7xxxxxxx allow for dom0_mem up to 1GB.

I have added a general note on this topic to
http://wiki.xen.org/wiki/Xen_ARM_TODO#Domain_0_memory_limitation_due_to_1:1_mapping

I'm going to update
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Allwinner with the specifics for the A20 platforms in a moment.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:03:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fTP-0002ra-HR; Fri, 24 Jan 2014 12:02:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6fTO-0002rN-P9
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 12:02:58 +0000
Received: from [85.158.139.211:60832] by server-5.bemta-5.messagelabs.com id
	FE/D8-14928-07652E25; Fri, 24 Jan 2014 12:02:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390564974!11728905!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14516 invoked from network); 24 Jan 2014 12:02:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:02:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96135042"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 12:02:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 07:02:53 -0500
Message-ID: <1390564971.2124.74.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Arianna Avanzini <avanzini.arianna@gmail.com>
Date: Fri, 24 Jan 2014 12:02:51 +0000
In-Reply-To: <1390558450.2124.24.camel@kazak.uk.xensource.com>
References: <52E239AB.8040906@gmail.com>
	<1390558450.2124.24.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Paolo Gai <pj@evidence.eu.com>, Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory
 for dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 10:14 +0000, Ian Campbell wrote:
> As a workaround you could try changing the load addresses of Xen and
> the kernel, dtb etc used by u-boot to pack them towards the top of
> RAM. This *should* result in the entire lower half of RAM being
> available which will make it more likely to achieve the necessary
> alignment constraints for a dom0 taking up to half of the system RAM.
> I've not actually tried this 

I have now. Using the boot script from
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Allwinner with:
        # cubieboard2
        # Top of RAM:        0x80000000
        # Xen relocate addr  0x7fe00000
        setenv kernel_addr_r 0x7fa00000 # 4M
        setenv fdt_addr      0x7f800000 # 2M
        setenv xen_addr_r    0x7f600000 # 2M
allows for dom0_mem=512M.

On a cubietruck with 2GB of RAM, addresses 0xbxxxxxxx instead of
0x7xxxxxxx allow for dom0_mem up to 1GB.
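The layout above is just arithmetic against the top of RAM, so the cubietruck variant can be derived rather than guessed. A throwaway C sketch (the region sizes come from the script's comments; the cubietruck top-of-RAM of 0xC0000000, i.e. DRAM base 0x40000000 plus 2GB, is an assumption consistent with the 0xbxxxxxxx addresses mentioned above):

```c
#include <assert.h>
#include <stdint.h>

#define MB (1024u * 1024u)

/* Pack images at the top of RAM: Xen relocates into the top 2M, then
 * the kernel (4M), fdt (2M) and Xen's load address (2M) are stacked
 * below it, leaving the lower part of RAM contiguous for dom0. */
static void pack_top(uint32_t top, uint32_t *xen_reloc,
                     uint32_t *kernel, uint32_t *fdt, uint32_t *xen_load)
{
    *xen_reloc = top - 2 * MB;
    *kernel    = *xen_reloc - 4 * MB;
    *fdt       = *kernel - 2 * MB;
    *xen_load  = *fdt - 2 * MB;
}
```

For top = 0x80000000 (cubieboard2) this reproduces the addresses in the boot script; for top = 0xC0000000 it yields the 0xbxxxxxxx equivalents.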

I have added a general note on this topic to
http://wiki.xen.org/wiki/Xen_ARM_TODO#Domain_0_memory_limitation_due_to_1:1_mapping

I'm going to update
http://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions/Allwinner with the specifics for the A20 platforms in a moment.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:04:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6fVI-000347-57; Fri, 24 Jan 2014 12:04:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1W6fVG-00033t-NN
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 12:04:54 +0000
Received: from [85.158.143.35:43547] by server-2.bemta-4.messagelabs.com id
	AB/38-11386-6E652E25; Fri, 24 Jan 2014 12:04:54 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390565091!543141!1
X-Originating-IP: [64.18.0.147]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6749 invoked from network); 24 Jan 2014 12:04:53 -0000
Received: from exprod5og116.obsmtp.com (HELO exprod5og116.obsmtp.com)
	(64.18.0.147)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 12:04:53 -0000
Received: from mail-ve0-f169.google.com ([209.85.128.169]) (using TLSv1) by
	exprod5ob116.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuJW4+GRzAyGfloGQT3bngWbahSskf9C@postini.com;
	Fri, 24 Jan 2014 04:04:53 PST
Received: by mail-ve0-f169.google.com with SMTP id oy12so1921205veb.28
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 04:04:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=fDVlqgXtEdX1L1wO9cYdYJXKACQn6F+5poedo7QzESk=;
	b=JN7RowAcYmoskqvEBz4qMCQ3XPWkm0Jn2pT1LVzb2ntDfZC5yk71bNGIQwYL1Ohl0J
	C7K0iS3oWIuPiL/AQcTMH0F3axgKzreNe4GETFytlnNjDun6t4VV6PSdE3E3aC1jebSq
	PC/B3Fz8KT6lPYy8vdlnV8OQ0sTNgRpDVEAxk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=fDVlqgXtEdX1L1wO9cYdYJXKACQn6F+5poedo7QzESk=;
	b=k6ezxiOTjR1O3bWKtsd1pKuSa5gZKPHm5IxRISCTdYbyLVCuCsQXgfjbmQc1i9FpL4
	yMcP0xVlOHhondYqda6ZTLuxdI9Gx6Rm/dP1XLZ4p5BA33enUVf52+QJ8z3vVdPnbOn4
	KJ3w/R3Xh6I7vGuU6Avyt+vtLrFBGWaSEkm2NyKLSO/0Ph9aKaGjETWuZM3D1EDQlInt
	sh7klsROziXkuODFuqn7MBUy1MvFD7tSeL+2B8vDITfZiyEL0U3Y+Hkgrbg3/384WTz1
	BXn3105lMlrgGRsmwfFGv1x+dT66NM88G1QXp53rN9trW6AGjaOLDGr6lK9bYDRTCjr6
	oKPA==
X-Received: by 10.220.193.70 with SMTP id dt6mr7391539vcb.17.1390565090576;
	Fri, 24 Jan 2014 04:04:50 -0800 (PST)
X-Gm-Message-State: ALoCoQnY15B1ihjs6QzowMC9uqrzdoOsiPE4y/Mog0OfA8DWPh4veUGGn9iYg6jryJEIHS4BWZw7dwpL2ZNpFKTUwbs+SBGkY/QFdmWw7A6uoAWbDXP03YH29FhGLfdR48LySTHZGMVhFDDiCJIR7UZfqtnfAlPxw4cfvMcnO68dpeJIayQ4gCw=
MIME-Version: 1.0
X-Received: by 10.220.193.70 with SMTP id dt6mr7391534vcb.17.1390565090460;
	Fri, 24 Jan 2014 04:04:50 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Fri, 24 Jan 2014 04:04:50 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401241146490.15917@kaball.uk.xensource.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<alpine.DEB.2.02.1401231429420.15917@kaball.uk.xensource.com>
	<CAH_mUMMuWard8bQD4f+54tBF1CVSu9Og8wRGsoX6+_68aRghQg@mail.gmail.com>
	<alpine.DEB.2.02.1401241146490.15917@kaball.uk.xensource.com>
Date: Fri, 24 Jan 2014 14:04:50 +0200
Message-ID: <CAH_mUMMadHjmVHLUsyEuLMNMRkH6N+2+xVnTJS-AM=Lv6=M7bQ@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,


On Fri, Jan 24, 2014 at 1:49 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Fri, 24 Jan 2014, Andrii Tseglytskyi wrote:
>>
>> If we leave the GPU/IPU in Dom0 with 1:1 mapping we won't need any
>> translation. But for now we need this in DomU, where 1:1 mapping will
>> not be available.
>
> Sure, that is fine.
>
> What I meant to say is that in order to make this code more generic, it
> would be a good idea to check whether the guest is dom0 or domU, and only
> trap MMU accesses if the guest is domU (because it is not necessary
> for dom0).

Oh, got it. Sure, I will add this check.
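The check being discussed is small; a sketch with hypothetical names (the struct and helper below are stand-ins modelled loosely on Xen's domain structure, not taken from the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for Xen's struct domain; only the id matters here. */
struct domain {
    unsigned int domain_id;
};

/* Trap and translate IOMMU/MMU register accesses only for domU guests:
 * dom0 keeps its 1:1 mapping, so no translation (and hence no trap)
 * is needed there. */
static bool iommu_should_trap(const struct domain *d)
{
    return d->domain_id != 0;
}
```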


regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:28:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6frf-0003nb-W0; Fri, 24 Jan 2014 12:28:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6fre-0003nW-9m
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 12:28:03 +0000
Received: from [85.158.139.211:11417] by server-15.bemta-5.messagelabs.com id
	2E/12-08490-15C52E25; Fri, 24 Jan 2014 12:28:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390566479!1434279!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24767 invoked from network); 24 Jan 2014 12:28:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:28:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94109518"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 12:27:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 07:27:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6fWJ-0003Kn-KS;
	Fri, 24 Jan 2014 12:05:59 +0000
Date: Fri, 24 Jan 2014 12:04:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Matt Wilson <msw@linux.com>
In-Reply-To: <20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
Message-ID: <alpine.DEB.2.02.1401241200380.15917@kaball.uk.xensource.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org, Matt Wilson <msw@amazon.com>,
	Anthony Liguori <aliguori@amazon.com>
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 23 Jan 2014, Matt Wilson wrote:
> On Thu, Jan 23, 2014 at 09:23:44PM +0000, Zoltan Kiss wrote:
> > The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> > for blkback and future netback patches it just causes lock contention, as
> > those pages never go to userspace. Therefore this series does the following:
> > - the original functions were renamed to __gnttab_[un]map_refs, with a new
> >   parameter m2p_override
> > - based on m2p_override either they follow the original behaviour, or just set
> >   the private flag and call set_phys_to_machine
> > - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
> >   m2p_override false
> > - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> > 
> > It also removes a stray space from page.h and changes ret to 0 if
> > XENFEAT_auto_translated_physmap, as that is the only possible return value
> > there.
> > 
> > v2:
> > - move the storing of the old mfn in page->index to gnttab_map_refs
> > - move the function header update to a separate patch
> > 
> > v3:
> > - a new approach to retain the old behaviour where it is needed
> > - squash the patches into one
> > 
> > v4:
> > - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> > - clear page->private before doing anything with the page, so m2p_find_override
> >   won't race with this
> > 
> > v5:
> > - change return value handling in __gnttab_[un]map_refs
> > - remove a stray space in page.h
> > - add detail why ret = 0 now at some places
> > 
> > v6:
> > - don't pass pfn to m2p* functions, just get it locally
> > 
> > Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> > Suggested-by: David Vrabel <david.vrabel@citrix.com>
> 
> Apologies for coming in late on this thread. I'm quite behind on
> xen-devel mail that isn't CC: to me.
> 
> It seems to have been forgotten that Anthony and I proposed a similar
> change last November.
> 
> https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws
> 
> Or am I misunderstanding the change?

Matt, you are correct, it is very similar. I had forgotten about
Anthony's patch, otherwise I would have CC'ed him in the discussion.
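The wrapper split described in the changelog above can be illustrated with simplified, hypothetical signatures (the real kernel functions take grant handles and page arrays; here a bare counter stands in for the m2p_override work, purely to show the call structure):

```c
#include <assert.h>
#include <stdbool.h>

static int m2p_override_calls; /* stands in for the m2p override work */

/* Core implementation: performs the mapping, and only touches the
 * m2p override tables when asked to (i.e. for userspace mappings). */
static int __gnttab_map_refs(int nr_refs, bool m2p_override)
{
    if (m2p_override)
        m2p_override_calls += nr_refs;
    return 0; /* success */
}

/* Kernel-only users (blkback, netback): skip the override and its lock. */
static int gnttab_map_refs(int nr_refs)
{
    return __gnttab_map_refs(nr_refs, false);
}

/* gntdev-style userspace mappings keep the old behaviour. */
static int gnttab_map_refs_userspace(int nr_refs)
{
    return __gnttab_map_refs(nr_refs, true);
}
```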

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:42:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6g5E-0004PT-Tc; Fri, 24 Jan 2014 12:42:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6g5C-0004PO-3o
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 12:42:03 +0000
Received: from [85.158.139.211:62042] by server-9.bemta-5.messagelabs.com id
	43/E9-15098-99F52E25; Fri, 24 Jan 2014 12:42:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390567319!11563768!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20391 invoked from network); 24 Jan 2014 12:42:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:42:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94113476"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 12:41:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 07:41:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6g57-0005zd-KF;
	Fri, 24 Jan 2014 12:41:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6g55-0005PP-HS;
	Fri, 24 Jan 2014 12:41:55 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21218.24466.92095.134875@mariner.uk.xensource.com>
Date: Fri, 24 Jan 2014 12:41:54 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52E1EB97.4080007@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> BTW, I only see the crash when the save/restore script is running.  I
> stopped the other scripts and domains, running only save/restore on a
> single domain, and see the crash rather quickly (within 10 iterations).

I'll look at the libvirt code, but:

With a recurring timeout, how can you ever know it's cancelled ?
There might be threads out there, which don't hold any locks, which
are in the process of executing a callback for a timeout.  That might
be arbitrarily delayed from the pov of the rest of the program.

E.g.:

 Thread A                                             Thread B

   invoke some libxl operation
X    do some libxl stuff
X    register timeout (libxl)
XV     record timeout info
X    do some more libxl stuff
     ...
X    do some more libxl stuff
X    deregister timeout (libxl internal)
X     converted to request immediate timeout
XV     record new timeout info
X      release libvirt event loop lock
                                            entering libvirt event loop
                                       V     observe timeout is immediate
                                       V      need to do callback
                                               call libxl driver

      entering libvirt event loop
 V     observe timeout is immediate
 V      need to do callback
         call libxl driver
           call libxl
  X          libxl sees timeout is live
  X          libxl does libxl stuff
         libxl driver deregisters
 V         record lack of timeout
         free driver's timeout struct
                                               call libxl
                                      X          libxl sees timeout is dead
                                      X          libxl does nothing
                                             libxl driver deregisters
                                       V       CRASH due to deregistering
                                       V        already-deregistered timeout

If this is how things are, then I think there is no sane way to use
libvirt's timeouts (!)

In principle I guess the driver could keep its per-timeout structs
around forever and remember whether they've been deregistered or not.
Each one would have to have a lock in it.
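That "keep the struct around and remember deregistration" idea would look roughly like this (a sketch with pthreads; none of these names come from libvirt or libxl):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Per-timeout state kept alive for the life of the driver, so a late
 * callback or a second deregistration can never touch freed memory. */
struct timeout_slot {
    pthread_mutex_t lock;
    bool deregistered;
};

/* Returns true if this call actually performed the deregistration,
 * false if the timeout was already gone (a safe no-op, not a crash). */
static bool timeout_deregister(struct timeout_slot *t)
{
    bool did;
    pthread_mutex_lock(&t->lock);
    did = !t->deregistered;
    t->deregistered = true;
    pthread_mutex_unlock(&t->lock);
    return did;
}
```

In the CRASH scenario in the diagram, the second deregistration would then be an observable no-op rather than a use-after-free.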

But if you think about it, if you have 10 threads all running the
event loop and you set a timeout to zero, doesn't that mean that every
thread's event loop should do the timeout callback as fast as it can ?
That could be a lot of wasted effort.

The best solution would appear to be to provide a non-recurring
callback.

> I'm not so thrilled with the timeout handling code in the libvirt libxl
> driver.  The driver maintains a list of active timeouts because IIRC,
> there were cases when the driver received timeout deregistrations when
> calling libxl_ctx_free, at which point some of the associated structures
> were freed.  The idea was to call libxl_osevent_occurred_timeout on any
> active timeouts before freeing libxlDomainObjPrivate and its contents.

libxl does deregister fd callbacks in libxl_ctx_free.

But libxl doesn't currently "deregister" any timeouts in
libxl_ctx_free; indeed it would be a bit daft for it to do so as at
libxl_ctx_free there are no aos running so there would be nothing to
time out.

But there is a difficulty with timeouts which libxl has set to occur
immediately but which have not yet actually had the callback.  The
application cannot call libxl_ctx_free with such timeouts outstanding,
because that would imply later calling back into libxl with a stale
ctx.

(Looking at the code I see that the "nexi" are never actually freed.
Bah.)

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> BTW, I only see the crash when the save/restore script is running.  I
> stopped the other scripts and domains, running only save/restore on a
> single domain, and see the crash rather quickly (within 10 iterations).

I'll look at the libvirt code, but:

With a recurring timeout, how can you ever know it's cancelled ?
There might be threads out there, which don't hold any locks, which
are in the process of executing a callback for a timeout.  That might
be arbitrarily delayed from the pov of the rest of the program.

E.g.:

 Thread A                                             Thread B

   invoke some libxl operation
X    do some libxl stuff
X    register timeout (libxl)
XV     record timeout info
X    do some more libxl stuff
     ...
X    do some more libxl stuff
X    deregister timeout (libxl internal)
X     converted to request immediate timeout
XV     record new timeout info
X      release libvirt event loop lock
                                            entering libvirt event loop
                                       V     observe timeout is immediate
                                       V      need to do callback
                                               call libxl driver

      entering libvirt event loop
 V     observe timeout is immediate
 V      need to do callback
         call libxl driver
           call libxl
  X          libxl sees timeout is live
  X          libxl does libxl stuff
         libxl driver deregisters
 V         record lack of timeout
         free driver's timeout struct
                                               call libxl
                                      X          libxl sees timeout is dead
                                      X          libxl does nothing
                                             libxl driver deregisters
                                       V       CRASH due to deregistering
                                       V        already-deregistered timeout

If this is how things are, then I think there is no sane way to use
libvirt's timeouts (!)

In principle I guess the driver could keep its per-timeout structs
around forever and remember whether they've been deregistered or not.
Each one would have to have a lock in it.

But if you think about it, if you have 10 threads all running the
event loop and you set a timeout to zero, doesn't that mean that every
thread's event loop should do the timeout callback as fast as it can ?
That could be a lot of wasted effort.

The best solution would appear to be to provide a non-recurring
callback.

> I'm not so thrilled with the timeout handling code in the libvirt libxl
> driver.  The driver maintains a list of active timeouts because IIRC,
> there were cases when the driver received timeout deregistrations when
> calling libxl_ctx_free, at which point some of the associated structures
> were freed.  The idea was to call libxl_osevent_occurred_timeout on any
> active timeouts before freeing libxlDomainObjPrivate and its contents.

libxl does deregister fd callbacks in libxl_ctx_free.

But libxl doesn't currently "deregister" any timeouts in
libxl_ctx_free; indeed it would be a bit daft for it to do so as at
libxl_ctx_free there are no aos running so there would be nothing to
time out.

But there is a difficulty with timeouts which libxl has set to occur
immediately but which have not yet actually had the callback.  The
application cannot call libxl_ctx_free with such timeouts outstanding,
because that would imply later calling back into libxl with a stale
ctx.

(Looking at the code I see that the "nexi" are never actually freed.
Bah.)

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 12:52:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 12:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gFV-0004mf-Cd; Fri, 24 Jan 2014 12:52:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6gFT-0004ma-Uh
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 12:52:40 +0000
Received: from [85.158.137.68:7964] by server-9.bemta-3.messagelabs.com id
	5A/58-13104-71262E25; Fri, 24 Jan 2014 12:52:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390567956!11083042!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25023 invoked from network); 24 Jan 2014 12:52:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 12:52:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94115484"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 12:52:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 07:52:35 -0500
Message-ID: <1390567954.2124.85.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 12:52:34 +0000
In-Reply-To: <21218.24466.92095.134875@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 12:41 +0000, Ian Jackson wrote:

>  Thread A                                             Thread B
> 
>    invoke some libxl operation
> X    do some libxl stuff
> X    register timeout (libxl)
> XV     record timeout info
> X    do some more libxl stuff
>      ...
> X    do some more libxl stuff
> X    deregister timeout (libxl internal)
> X     converted to request immediate timeout
> XV     record new timeout info
> X      release libvirt event loop lock
>                                             entering libvirt event loop
>                                        V     observe timeout is immediate

Is there nothing in this interval which deregisters, pauses, quiesces or
otherwise prevents the timer from going off again until after the
callback (when the lock would be reacquired and whatever was done is
undone)?

(V is the libvirt event loop lock I presume?)

>                                        V      need to do callback
>                                                call libxl driver
> 
>       entering libvirt event loop
>  V     observe timeout is immediate

Given the behaviour I suggest above this would be prevented I think?

> But if you think about it, if you have 10 threads all running the
> event loop and you set a timeout to zero, doesn't that mean that every
> thread's event loop should do the timeout callback as fast as it can ?
> That could be a lot of wasted effort.

It doesn't seem all that likely that triggering the same timeout
multiple times in different threads simultaneously would be a
deliberate design decision, so if the libvirt event core doesn't
already prevent this somehow, then it seems to me that this is just a
bug in the event loop core.

In that case it should be addressed in libvirt. In any case the
libvirt core folks should be involved in the discussion, so they have
the opportunity to tell us how best, if at all, we can use the
provided facilities, and/or whether these issues are deliberate or
things which should be fixed, etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 13:13:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gZJ-0005SW-0p; Fri, 24 Jan 2014 13:13:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=094a346f2=chegger@amazon.de>)
	id 1W6gZH-0005SO-0V
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 13:13:07 +0000
Received: from [85.158.137.68:19401] by server-1.bemta-3.messagelabs.com id
	7F/14-29598-2E662E25; Fri, 24 Jan 2014 13:13:06 +0000
X-Env-Sender: prvs=094a346f2=chegger@amazon.de
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390569183!11097141!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1555 invoked from network); 24 Jan 2014 13:13:05 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 13:13:05 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1390569185; x=1422105185;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=NXCkF9B9vAb7R+0w3kt1PZ6HRoP1/8rz9r3Xke2r3jg=;
	b=rvEuFUb1pJ2T8BnjBnwUeGZNVhXm7WuuRJ0s2cyBIPb4PWMD00m780HI
	vKKtl+iLkatqPUL8iqH7aH6RPpxqpM6d5vBcMK92bQRo6v4UA6OG+VFzu
	YAbAebhra2Li0hR3AE6ERZ4jRD6z65zgWvcoAdvfW1qVepkASZtQROThV k=;
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="67768427"
Received: from smtp-in-5102.iad5.amazon.com ([10.218.9.29])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 24 Jan 2014 13:13:00 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-5102.iad5.amazon.com (8.14.7/8.14.7) with ESMTP id
	s0ODCtSo024224
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Fri, 24 Jan 2014 13:12:59 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.342.3; Fri, 24 Jan 2014 05:12:58 -0800
Message-ID: <52E266D6.20208@amazon.de>
Date: Fri, 24 Jan 2014 14:12:54 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
In-Reply-To: <1390505326-9368-1-git-send-email-msw@linux.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23.01.14 20:28, Matt Wilson wrote:
> From: Matt Rushton <mrushton@amazon.com>
> 
> Currently shrink_free_pagepool() is called before the pages used for
> persistent grants are released via free_persistent_gnts(). This
> results in a memory leak when a VBD that uses persistent grants is
> torn down.

This memory leak was introduced with commit
c6cc142dac52e62e1e8a2aff5de1300202b96c66 xen-blkback: use balloon pages
for all mappings

Christoph

> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: xen-devel@lists.xen.org
> Cc: Anthony Liguori <aliguori@amazon.com>
> Signed-off-by: Matt Rushton <mrushton@amazon.com>
> Signed-off-by: Matt Wilson <msw@amazon.com>
> ---
>  drivers/block/xen-blkback/blkback.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..30ef7b3 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -625,9 +625,6 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>  
> -	/* Since we are shutting down remove all pages from the buffer */
> -	shrink_free_pagepool(blkif, 0 /* All */);
> -
>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -636,6 +633,9 @@ purge_gnt_list:
>  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
>  	blkif->persistent_gnt_c = 0;
>  
> +	/* Since we are shutting down remove all pages from the buffer */
> +	shrink_free_pagepool(blkif, 0 /* All */);
> +
>  	if (log_stats)
>  		print_stats(blkif);
>  


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 13:24:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gkD-0005pG-Gk; Fri, 24 Jan 2014 13:24:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6gkB-0005pB-AK
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 13:24:23 +0000
Received: from [85.158.143.35:57971] by server-2.bemta-4.messagelabs.com id
	C8/9D-11386-68962E25; Fri, 24 Jan 2014 13:24:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390569861!572451!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4885 invoked from network); 24 Jan 2014 13:24:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 13:24:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 13:24:20 +0000
Message-Id: <52E277920200007800116A70@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 13:24:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
In-Reply-To: <52E233A60200007800116776@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part23109E92.2__="
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part23109E92.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 24.01.14 at 09:34, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
>> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
>>> If
>>> the kernel responds to that mentioned firmware bug by forcing
>>> interrupt remapping off, maybe we would have to do the same...
>>=20
>> That would be better than xen failing to boot.
>=20
> But you realize that, following precedents elsewhere in the
> IOMMU code, we would disable the IOMMU as a whole rather
> than just interrupt remapping.
>=20
> But yes, looking at the Linux side code, I guess we need to do
> so. Would be nice if you could confirm that the system comes up
> fine (and hopefully without IOMMU faults) with
> "iommu=3Dno-intremap,debug" as well as with "iommu=3Doff".

Here's a patch attempting to do just that. Please give this a try
without any IOMMU-related override options.

Jan

AMD IOMMU: fail if there is no southbridge IO-APIC

... but interrupt remapping is requested (with per-device remapping
tables). Without it, the timer interrupt is usually not working.

Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
Roedel <joerg.roedel@amd.com>.

Reported-by: Eric Houby <ehouby@yahoo.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
     const struct acpi_ivrs_header *ivrs_block;
     unsigned long length;
     unsigned int apic;
+    bool_t sb_ioapic =3D !iommu_intremap;
     int error =3D 0;
=20
     BUG_ON(!table);
@@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic =3D 0; !error && iommu_intremap && apic < nr_ioapics; =
++apic )
     {
-        if ( !nr_ioapic_entries[apic] ||
-             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
+        if ( !nr_ioapic_entries[apic] )
+            continue;
+
+        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
+             /* SB IO-APIC is always on this device in AMD systems. */
+             ioapic_sbdf[IO_APIC_ID(apic)].bdf =3D=3D PCI_BDF(0, 0x14, 0) =
)
+            sb_ioapic =3D 1;
+
+        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
             continue;
=20
         if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
@@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
         }
     }
=20
+    if ( !error && !sb_ioapic )
+    {
+        if ( amd_iommu_perdev_intremap )
+            error =3D -ENXIO;
+        printk("%sNo southbridge IO-APIC found in IVRS table\n",
+               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
+    }
+
     return error;
 }
=20




--=__Part23109E92.2__=
Content-Type: text/plain; name="AMD-IOMMU-require-SB-IOAPIC.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="AMD-IOMMU-require-SB-IOAPIC.patch"

AMD IOMMU: fail if there is no southbridge IO-APIC=0A=0A... but interrupt =
remapping is requested (with per-device remapping=0Atables). Without it, =
the timer interrupt is usually not working.=0A=0AInspired by Linux'es =
"iommu/amd: Work around wrong IOAPIC device-id in=0AIVRS table" (commit =
c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg=0ARoedel <joerg.roedel@a=
md.com>.=0A=0AReported-by: Eric Houby <ehouby@yahoo.com>=0ASigned-off-by: =
Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/drivers/passthrough/amd/iomm=
u_acpi.c=0A+++ b/xen/drivers/passthrough/amd/iommu_acpi.c=0A@@ -984,6 =
+984,7 @@ static int __init parse_ivrs_table(struc=0A     const struct =
acpi_ivrs_header *ivrs_block;=0A     unsigned long length;=0A     unsigned =
int apic;=0A+    bool_t sb_ioapic =3D !iommu_intremap;=0A     int error =
=3D 0;=0A =0A     BUG_ON(!table);=0A@@ -1017,8 +1018,15 @@ static int =
__init parse_ivrs_table(struc=0A     /* Each IO-APIC must have been =
mentioned in the table. */=0A     for ( apic =3D 0; !error && iommu_intrema=
p && apic < nr_ioapics; ++apic )=0A     {=0A-        if ( !nr_ioapic_entrie=
s[apic] ||=0A-             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )=0A+   =
     if ( !nr_ioapic_entries[apic] )=0A+            continue;=0A+=0A+      =
  if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&=0A+             /* SB IO-APIC =
is always on this device in AMD systems. */=0A+             ioapic_sbdf[IO_=
APIC_ID(apic)].bdf =3D=3D PCI_BDF(0, 0x14, 0) )=0A+            sb_ioapic =
=3D 1;=0A+=0A+        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )=0A    =
         continue;=0A =0A         if ( !test_bit(IO_APIC_ID(apic), =
ioapic_cmdline) )=0A@@ -1041,6 +1049,14 @@ static int __init parse_ivrs_tab=
le(struc=0A         }=0A     }=0A =0A+    if ( !error && !sb_ioapic )=0A+  =
  {=0A+        if ( amd_iommu_perdev_intremap )=0A+            error =3D =
-ENXIO;=0A+        printk("%sNo southbridge IO-APIC found in IVRS =
table\n",=0A+               amd_iommu_perdev_intremap ? XENLOG_ERR : =
XENLOG_WARNING);=0A+    }=0A+=0A     return error;=0A }=0A =0A
--=__Part23109E92.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part23109E92.2__=--


From xen-devel-bounces@lists.xen.org Fri Jan 24 13:31:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gqy-0006B6-KG; Fri, 24 Jan 2014 13:31:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6gqw-0006B1-W3
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 13:31:23 +0000
Received: from [85.158.139.211:29720] by server-11.bemta-5.messagelabs.com id
	93/58-23268-A2B62E25; Fri, 24 Jan 2014 13:31:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390570281!11757096!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8749 invoked from network); 24 Jan 2014 13:31:21 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 13:31:21 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so969573eek.36
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 05:31:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=lIXD1TwlTw5sMI/YQ97HPvHr1SBRmx3B/c32uZcME2A=;
	b=eiD025L+DW9MnQiM+Tn6sNC3OIySl17wpT6Kte07KhkOCUXs1enox3OcSZukt+iiFX
	dh9mpsUiCPasecQipNtc+0ihp97jYRpf3dudv+CBxcziDAjLuwDSUoH+EtMO5pPI1PMs
	gPnjuf/IiIl7OcWdCLfb7OLt16crY+91zMIztIRsVKHVc65QR81uEc5cnMXGbM0p/i4Z
	M3I8CCgWjjHScGUK2jDunb7aJEcGoxOa24MAeV+9cihswLshUyIxCL9Vfgbw8ERH9edW
	Cq6yzvSAbOxpc3BYswtzfkDVutw6zBWuT3FltjRI0ZOK5TOVkRv9on123Q+AndIdbCvh
	WMtA==
X-Gm-Message-State: ALoCoQkje5YJ3iCYmH/b7fBZMd3Rm4F9etNxKwcXpSdIxX50nxCn2bNF6TWbvkwqzKhii9Dckk4H
X-Received: by 10.14.221.4 with SMTP id q4mr12386522eep.47.1390570281158;
	Fri, 24 Jan 2014 05:31:21 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v1sm3666851eef.9.2014.01.24.05.31.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 05:31:20 -0800 (PST)
Message-ID: <52E26B25.3050206@linaro.org>
Date: Fri, 24 Jan 2014 13:31:17 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<52E135E6.7030109@linaro.org>
	<CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
In-Reply-To: <CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 11:49 AM, Andrii Tseglytskyi wrote:
> Hi Julien,
> 
> On Thu, Jan 23, 2014 at 5:31 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> On 01/22/2014 03:52 PM, Andrii Tseglytskyi wrote:
>>> omap IOMMU module is designed to handle access to external
>>> omap MMUs, connected to the L3 bus.
> 
> [...]
> 
>>
>>> +struct mmu_info {
>>> +     const char                      *name;
>>> +     paddr_t                         mem_start;
>>> +     u32                                     mem_size;
>>> +     u32                                     *pagetable;
>>> +     void __iomem            *mem_map;
>>> +};
>>> +
>>> +static struct mmu_info omap_ipu_mmu = {
>>
>> static const?
>>
> 
> Unfortunately, no. I like const modifiers very much and try to put
> them everywhere I can, but in these structs I need to modify several
> fields during MMU configuration.
> 
> [...]
> 
>>> +     .name           = "IPU_L2_MMU",
>>> +     .mem_start      = 0x55082000,
>>> +     .mem_size       = 0x1000,
>>> +     .pagetable      = NULL,
>>> +};
>>> +
>>> +static struct mmu_info omap_dsp_mmu = {
>>
>> static const?
>>
> 
> The same as previous.
> 
>>> +     .name           = "DSP_L2_MMU",
>>> +     .mem_start      = 0x4a066000,
>>> +     .mem_size       = 0x1000,
>>> +     .pagetable      = NULL,
>>> +};
>>> +
>>> +static struct mmu_info *mmu_list[] = {
>>
>> static const?
>>
> 
> The same as previous.
> 
>>> +     &omap_ipu_mmu,
>>> +     &omap_dsp_mmu,
>>> +};
>>> +
>>> +#define mmu_for_each(pfunc, data)                                            \
>>> +({                                                                                                           \
>>> +     u32 __i;                                                                                        \
>>> +     int __res = 0;                                                                          \
>>> +                                                                                                             \
>>> +     for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {      \
>>> +             __res |= pfunc(mmu_list[__i], data);                    \
>>
>> Your res |= will result in a "wrong" errno if you have multiple failures.
>> Would it be better to have:
>>
>> __res = pfunc(...)
>> if ( __res )
>>   break;
>>
> 
> I know. I tried both solutions - mine and what you proposed. Agree in
> general, will update this.
> 
> 
>>> +     }                                                                                                       \
>>> +     __res;                                                                                          \
>>> +})
>>> +
>>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>>> +{
>>> +     if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
>>> +             return 1;
>>> +
>>> +     return 0;
>>> +}
>>> +
>>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>>> +{
>>> +     u32 i;
>>> +
>>> +     for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
>>> +             if (mmu_check_mem_range(mmu_list[i], addr))
>>> +                     return mmu_list[i];
>>> +     }
>>> +
>>> +     return NULL;
>>> +}
>>> +
>>> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
>>> +{
>>> +     return (reg & mask) | (va & (~mask));
>>> +}
>>> +
>>> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
>>> +{
>>> +     return (reg & ~mask) | pa;
>>> +}
>>> +
>>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>>> +{
>>> +     return mmu_for_each(mmu_check_mem_range, addr);
>>> +}
>>
>> As I understand your cover letter, the device (and therefore the MMU) is
>> only passed through to a single guest, right?
>>
>> If so, your mmu_mmio_check should check if the domain is handling the
>> device.
>> With your current code any guest can write to this range and rewrite
>> the MMU page table.
>>
> 
> Oh, I knew that someone would catch this :)
> This is a next step for this patch series - to make sure that only one
> guest can configure / access MMU.
> 
>>> +
>>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>>> +{
>>> +     void __iomem *pagetable = NULL;
>>> +     u32 pgaddr;
>>> +
>>> +     ASSERT(mmu);
>>> +
>>> +     /* read address where kernel MMU pagetable is stored */
>>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>>> +     if (!pagetable) {
>>> +             printk("%s: %s failed to map pagetable\n",
>>> +                        __func__, mmu->name);
>>> +             return -EINVAL;
>>> +     }
>>> +
>>> +     /*
>>> +      * pagetable can be changed since last time
>>> +      * we accessed it therefore we need to copy it each time
>>> +      */
>>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>>> +
>>> +     iounmap(pagetable);
>>> +
>>> +     return 0;
>>> +}
>>
>> I'm confused, it should copy from the guest MMU pagetable, right? In
>> this case you should use map_domain_page.
>> ioremap *MUST* only be used with device memory, otherwise memory
>> coherency is not guaranteed.
>>
> 
> OK. Will try this.

You can look at the __copy_{to,from}* macros. They will do the right job.

> 
>> [..]
>>
>>> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
>>> +{
>>> +     struct domain *dom = v->domain;
>>> +     struct mmu_info *mmu = NULL;
>>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +    register_t *r = select_user_reg(regs, info->dabt.reg);
>>> +     int res;
>>> +
>>> +     mmu = mmu_lookup(info->gpa);
>>> +     if (!mmu) {
>>> +             printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
>>> +             return -EINVAL;
>>> +     }
>>> +
>>> +     /*
>>> +      * make sure that user register is written first in this function
>>> +      * following calls may expect valid data in it
>>> +      */
>>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>>
>> Hmmm ... I think this is very confusing, you should only write to the
>> memory if mmu_trap_write_access has not failed. And use "*r" where it's
>> needed.
>>
>> Writing to the device memory could have side effect (for instance
>> updating the page table with the wrong translation...).
>>
> 
> Agree - it is a bit confusing here. But I need valid data in the
> user register.
> Following calls use it ->
> mmu_trap_write_access()->mmu_translate_pagetable()->mmu_copy_pagetable()->pgaddr
> = readl(mmu->mem_map + MMU_TTB);
> The last read will be from the register written in this function. Taking
> into account your comment - I will think about changing this logic.
> 
>>> +
>>> +     res = mmu_trap_write_access(dom, mmu, info);
>>> +     if (res)
>>> +             return res;
>>> +
>>> +    return 1;
>>> +}
>>> +
>>> +static int mmu_init(struct mmu_info *mmu, u32 data)
>>> +{
>>> +     ASSERT(mmu);
>>> +     ASSERT(!mmu->mem_map);
>>> +     ASSERT(!mmu->pagetable);
>>> +
>>> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
>>
>> Can you use ioremap_nocache instead of ioremap? The behavior is the same
>> but the former name is less confusing.
>>
> 
> Sure.
> 
>> --
>> Julien Grall
> 
> Thank you for review.
> 
> Regards,
> Andrii
> 
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 13:31:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:31:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gqy-0006B6-KG; Fri, 24 Jan 2014 13:31:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6gqw-0006B1-W3
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 13:31:23 +0000
Received: from [85.158.139.211:29720] by server-11.bemta-5.messagelabs.com id
	93/58-23268-A2B62E25; Fri, 24 Jan 2014 13:31:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390570281!11757096!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8749 invoked from network); 24 Jan 2014 13:31:21 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 13:31:21 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so969573eek.36
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 05:31:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=lIXD1TwlTw5sMI/YQ97HPvHr1SBRmx3B/c32uZcME2A=;
	b=eiD025L+DW9MnQiM+Tn6sNC3OIySl17wpT6Kte07KhkOCUXs1enox3OcSZukt+iiFX
	dh9mpsUiCPasecQipNtc+0ihp97jYRpf3dudv+CBxcziDAjLuwDSUoH+EtMO5pPI1PMs
	gPnjuf/IiIl7OcWdCLfb7OLt16crY+91zMIztIRsVKHVc65QR81uEc5cnMXGbM0p/i4Z
	M3I8CCgWjjHScGUK2jDunb7aJEcGoxOa24MAeV+9cihswLshUyIxCL9Vfgbw8ERH9edW
	Cq6yzvSAbOxpc3BYswtzfkDVutw6zBWuT3FltjRI0ZOK5TOVkRv9on123Q+AndIdbCvh
	WMtA==
X-Gm-Message-State: ALoCoQkje5YJ3iCYmH/b7fBZMd3Rm4F9etNxKwcXpSdIxX50nxCn2bNF6TWbvkwqzKhii9Dckk4H
X-Received: by 10.14.221.4 with SMTP id q4mr12386522eep.47.1390570281158;
	Fri, 24 Jan 2014 05:31:21 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v1sm3666851eef.9.2014.01.24.05.31.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 05:31:20 -0800 (PST)
Message-ID: <52E26B25.3050206@linaro.org>
Date: Fri, 24 Jan 2014 13:31:17 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <1390405925-1764-1-git-send-email-andrii.tseglytskyi@globallogic.com>
	<1390405925-1764-2-git-send-email-andrii.tseglytskyi@globallogic.com>
	<52E135E6.7030109@linaro.org>
	<CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
In-Reply-To: <CAH_mUMOwy37twqErP7qEf+mWYvPskwaPV8ZpO9KjVy1TR=hRVQ@mail.gmail.com>
Cc: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 11:49 AM, Andrii Tseglytskyi wrote:
> Hi Julien,
> 
> On Thu, Jan 23, 2014 at 5:31 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> On 01/22/2014 03:52 PM, Andrii Tseglytskyi wrote:
>>> omap IOMMU module is designed to handle access to external
>>> omap MMUs, connected to the L3 bus.
> 
> [...]
> 
>>
>>> +struct mmu_info {
>>> +     const char                      *name;
>>> +     paddr_t                         mem_start;
>>> +     u32                                     mem_size;
>>> +     u32                                     *pagetable;
>>> +     void __iomem            *mem_map;
>>> +};
>>> +
>>> +static struct mmu_info omap_ipu_mmu = {
>>
>> static const?
>>
> 
> Unfortunately, no. I like const modifiers very much and try to put
> them everywhere I can, but in these structs I need to modify several
> fields during MMU configuration.
> 
> [...]
> 
>>> +     .name           = "IPU_L2_MMU",
>>> +     .mem_start      = 0x55082000,
>>> +     .mem_size       = 0x1000,
>>> +     .pagetable      = NULL,
>>> +};
>>> +
>>> +static struct mmu_info omap_dsp_mmu = {
>>
>> static const?
>>
> 
> The same as previous.
> 
>>> +     .name           = "DSP_L2_MMU",
>>> +     .mem_start      = 0x4a066000,
>>> +     .mem_size       = 0x1000,
>>> +     .pagetable      = NULL,
>>> +};
>>> +
>>> +static struct mmu_info *mmu_list[] = {
>>
>> static const?
>>
> 
> The same as previous.
> 
>>> +     &omap_ipu_mmu,
>>> +     &omap_dsp_mmu,
>>> +};
>>> +
>>> +#define mmu_for_each(pfunc, data)                                            \
>>> +({                                                                                                           \
>>> +     u32 __i;                                                                                        \
>>> +     int __res = 0;                                                                          \
>>> +                                                                                                             \
>>> +     for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {      \
>>> +             __res |= pfunc(mmu_list[__i], data);                    \
>>
>> Your res |= will result in a "wrong" errno if you have multiple failures.
>> Would it be better to have:
>>
>> __res = pfunc(...)
>> if ( __res )
>>   break;
>>
> 
> I know. I tried both solutions - mine and what you proposed. Agree in
> general, will update this.
> 
> 
>>> +     }                                                                                                       \
>>> +     __res;                                                                                          \
>>> +})
>>> +
>>> +static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
>>> +{
>>> +     if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
>>> +             return 1;
>>> +
>>> +     return 0;
>>> +}
>>> +
>>> +static inline struct mmu_info *mmu_lookup(u32 addr)
>>> +{
>>> +     u32 i;
>>> +
>>> +     for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
>>> +             if (mmu_check_mem_range(mmu_list[i], addr))
>>> +                     return mmu_list[i];
>>> +     }
>>> +
>>> +     return NULL;
>>> +}
>>> +
>>> +static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
>>> +{
>>> +     return (reg & mask) | (va & (~mask));
>>> +}
>>> +
>>> +static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
>>> +{
>>> +     return (reg & ~mask) | pa;
>>> +}
>>> +
>>> +static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
>>> +{
>>> +     return mmu_for_each(mmu_check_mem_range, addr);
>>> +}
>>
>> As I understand your cover letter, the device (and therefore the MMU) is
>> only passthrough to a single guest, right?
>>
>> If so, your mmu_mmio_check should check if the domain is handling the
>> device.
>> With your current code any guest can write to this range and rewriting
>> the MMU page table.
>>
> 
> Oh, I knew that someone would catch this :)
> This is the next step for this patch series - to make sure that only one
> guest can configure / access MMU.
> 
>>> +
>>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>>> +{
>>> +     void __iomem *pagetable = NULL;
>>> +     u32 pgaddr;
>>> +
>>> +     ASSERT(mmu);
>>> +
>>> +     /* read address where kernel MMU pagetable is stored */
>>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>>> +     if (!pagetable) {
>>> +             printk("%s: %s failed to map pagetable\n",
>>> +                        __func__, mmu->name);
>>> +             return -EINVAL;
>>> +     }
>>> +
>>> +     /*
>>> +      * the pagetable may have changed since the last time
>>> +      * we accessed it, so we need to copy it each time
>>> +      */
>>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>>> +
>>> +     iounmap(pagetable);
>>> +
>>> +     return 0;
>>> +}
>>
>> I'm confused, it should copy from the guest MMU pagetable, right? In
>> this case you should use map_domain_page.
>> ioremap *MUST* only be used with device memory, otherwise memory
>> coherency is not guaranteed.
>>
> 
> OK. Will try this.

You can look at the __copy_{to,from}* macros. They will do the right job.

> 
>> [..]
>>
>>> +static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
>>> +{
>>> +     struct domain *dom = v->domain;
>>> +     struct mmu_info *mmu = NULL;
>>> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> +    register_t *r = select_user_reg(regs, info->dabt.reg);
>>> +     int res;
>>> +
>>> +     mmu = mmu_lookup(info->gpa);
>>> +     if (!mmu) {
>>> +             printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
>>> +             return -EINVAL;
>>> +     }
>>> +
>>> +     /*
>>> +      * make sure that user register is written first in this function
>>> +      * following calls may expect valid data in it
>>> +      */
>>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>>
>> Hmmm ... I think this is very confusing, you should only write to the
>> memory if mmu_trap_write_access has not failed. And use "*r" where it's
>> needed.
>>
>> Writing to the device memory could have side effect (for instance
>> updating the page table with the wrong translation...).
>>
> 
> Agree - it is a bit confusing here. But I need valid data in the
> user register.
> Following calls use it ->
> mmu_trap_write_access()->mmu_translate_pagetable()->mmu_copy_pagetable()->pgaddr
> = readl(mmu->mem_map + MMU_TTB);
> Last read will be from the register written in this function. Taking
> into account your comment - I will think about changing this logic.
> 
>>> +
>>> +     res = mmu_trap_write_access(dom, mmu, info);
>>> +     if (res)
>>> +             return res;
>>> +
>>> +    return 1;
>>> +}
>>> +
>>> +static int mmu_init(struct mmu_info *mmu, u32 data)
>>> +{
>>> +     ASSERT(mmu);
>>> +     ASSERT(!mmu->mem_map);
>>> +     ASSERT(!mmu->pagetable);
>>> +
>>> +    mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
>>
>> Can you use ioremap_nocache instead of ioremap? The behavior is the same
>> but the former name is less confusing.
>>
> 
> Sure.
> 
>> --
>> Julien Grall
> 
> Thank you for review.
> 
> Regards,
> Andrii
> 
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 13:36:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gvZ-0006JN-Ek; Fri, 24 Jan 2014 13:36:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W6gvX-0006JD-D5
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 13:36:07 +0000
Received: from [85.158.143.35:48785] by server-2.bemta-4.messagelabs.com id
	A3/B1-11386-64C62E25; Fri, 24 Jan 2014 13:36:06 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390570565!567633!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30945 invoked from network); 24 Jan 2014 13:36:06 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 24 Jan 2014 13:36:06 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:50503 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W6gjM-0004OR-GB; Fri, 24 Jan 2014 14:23:32 +0100
Date: Fri, 24 Jan 2014 14:36:02 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1889333978.20140124143602@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140110173809.GA19423@pegasus.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 10, 2014, 6:38:10 PM, you wrote:

>> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>> > nonetheless.
>> 
>> As usual ;-)

> Ha!
> ..snip..
>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> 
>> > Yeah, my RFC patchset (the one that does the slot/bus reset) should also fix that bug.
>> > I totally forgot about it !
>> 
>> Got a link to that patchset ?

> https://lkml.org/lkml/2013/12/13/315

>> I could at least give it a spin .. you never know when fortune is on your side :-)

> It is also at this git tree:

> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
> want to merge it in your current Linus tree.

> Thank you!


Hi Konrad,

Just got time to test this some more. Merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
seems to help with my problem; I'm now capable of using:
- xl pci-detach
- xl pci-assignable-remove
- echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind

to remove a PCI device from a running HVM guest and rebind it to a driver in dom0 without those nasty stacktraces :-)
So the first 4 commits seem to be an improvement.

That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to give trouble of its own.

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 13:38:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 13:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6gxy-0006Ul-2y; Fri, 24 Jan 2014 13:38:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1W6gxw-0006UJ-VF
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 13:38:37 +0000
Received: from [193.109.254.147:5088] by server-6.bemta-14.messagelabs.com id
	AD/09-14958-CDC62E25; Fri, 24 Jan 2014 13:38:36 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390570715!12996927!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13430 invoked from network); 24 Jan 2014 13:38:35 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 13:38:35 -0000
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id D31F9AAD1;
	Fri, 24 Jan 2014 13:38:34 +0000 (UTC)
Date: Fri, 24 Jan 2014 13:38:30 +0000
From: Mel Gorman <mgorman@suse.de>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140124133830.GU4963@suse.de>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
> >> >> <SNIP>
> >> >>
> >> >> This dump doesn't look dramatically different, either.
> >> >>
> >> >>>
> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
> >> >>> turned on?
> >> >>
> >> >>
> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
> >> >> mean not enabled at runtime?
> >> >>
> >> >> [1]
> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
> >>
> >>
> >>
> >> --
> >> Elena
> 
> I was able to reproduce this consistently, also with the latest mm
> patches from yesterday.
> Can you please try this:
> 

Thanks Elena,

> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index ce563be..76dcf96 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
> *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val &
> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 unsigned long pfn = mfn_to_pfn(mfn);
> 
> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
> 
>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || ((val &
> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 pteval_t flags = val & PTE_FLAGS_MASK;
>                 unsigned long mfn;

Would reusing pte_present be an option? Ordinarily I expect that
PAGE_NUMA/PAGE_PROTNONE is only set if PAGE_PRESENT is not set and pte_present
is defined as

static inline int pte_present(pte_t a)
{
        return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
                               _PAGE_NUMA);
}

So it looks like it would work. Of course, to reuse it within Xen,
pte_present would need to be split to have a pteval_present helper
like so

static inline int pteval_present(pteval_t val)
{
	/*
	 * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
	 * way clearly states that the intent is that a protnone and numa
	 * hinting ptes are considered present for the purposes of
	 * pagetable operations like zapping, protection changes, gup etc.
	 */
	return val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
}

static inline int pte_present(pte_t pte)
{
	return pteval_present(pte_flags(pte));
}

If Xen is doing some other tricks with _PAGE_PRESENT then it might be
ruled out as an option. If so, then maybe it could still be made a
little clearer for future reference?


diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index c1d406f..ff621de 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-	if (val & _PAGE_PRESENT) {
+	if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
 		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		unsigned long pfn = mfn_to_pfn(mfn);
 
@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 
 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-	if (val & _PAGE_PRESENT) {
+	if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
 		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		pteval_t flags = val & PTE_FLAGS_MASK;
 		unsigned long mfn;
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 8e4f41d..693fe00 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -654,10 +654,14 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
  * (because _PAGE_PRESENT is not set).
  */
 #ifndef pte_numa
+static inline int pteval_numa(pteval_t pteval)
+{
+	return (pteval & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
+}
+
 static inline int pte_numa(pte_t pte)
 {
-	return (pte_flags(pte) &
-		(_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
+	return pteval_numa(pte_flags(pte));
 }
 #endif
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 14:16:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hYR-0007cQ-6Q; Fri, 24 Jan 2014 14:16:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6hYP-0007cI-GV
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:16:17 +0000
Received: from [85.158.137.68:24562] by server-10.bemta-3.messagelabs.com id
	66/3C-23989-0B572E25; Fri, 24 Jan 2014 14:16:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390572974!11139124!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30498 invoked from network); 24 Jan 2014 14:16:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:16:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:16:13 +0000
Message-Id: <52E283BC0200007800116AC0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:16:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-2-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-2-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 01/17] common/symbols: Export hypervisor
 symbols to privileged guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> @@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
>      }
>      break;
>  
> +    case XENPF_get_symbol:
> +    {
> +        char name[XEN_KSYM_NAME_LEN + 1];
> +        XEN_GUEST_HANDLE_64(char) nameh;

Why _64?

> +
> +        guest_from_compat_handle(nameh, op->u.symdata.u.name);
> +
> +        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
> +                           &op->u.symdata.address, name);
> +
> +        if ( !ret && copy_to_guest(nameh, name, XEN_KSYM_NAME_LEN + 1) )

Afaict symbols_expand_symbol() always zero-terminates its output, so
I can't see why you're not using strlen() here. The way you do it now,
you're leaking hypervisor stack contents to the caller.

> +int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
> +{
> +    if ( *symnum > symbols_num_syms )
> +        return -ERANGE;
> +    if ( *symnum == symbols_num_syms )
> +        return 0;
> +
> +    spin_lock(&symbols_mutex);
> +
> +    if ( *symnum == 0 )
> +        next_offset = next_symbol = 0;
> +    if ( next_symbol != *symnum )
> +        /* Non-sequential access */
> +        next_offset = get_symbol_offset(*symnum);
> +
> +    *type = symbols_get_symbol_type(next_offset);
> +    next_offset = symbols_expand_symbol(next_offset, name);
> +    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
> +
> +    next_symbol = ++(*symnum);

Pointless parentheses.

> +#define XENPF_get_symbol   61
> +#define XEN_KSYM_NAME_LEN 127
> +struct xenpf_symdata {
> +    /* IN variables */
> +    uint32_t symnum;
> +
> +    /* OUT variables */
> +    uint32_t type;
> +    uint64_t address;
> +
> +    union {
> +        XEN_GUEST_HANDLE(char) name;
> +        uint64_t pad;
> +    } u;

Since you need to do translation anyway, I don't see what good
the padding field (and hence the union) here does.

> --- a/xen/include/xen/symbols.h
> +++ b/xen/include/xen/symbols.h
> @@ -2,8 +2,8 @@
>  #define _XEN_SYMBOLS_H
>  
>  #include <xen/types.h>
> -
> -#define KSYM_NAME_LEN 127
> +#include <public/xen.h>

I don't think you really need this one.

> +#include <public/platform.h>
>  
>  /* Lookup an address. */
>  const char *symbols_lookup(unsigned long addr,
> @@ -11,4 +11,7 @@ const char *symbols_lookup(unsigned long addr,
>                             unsigned long *offset,
>                             char *namebuf);
>  
> +extern int xensyms_read(uint32_t *symnum, uint32_t *type,
> +                        uint64_t *address, char *name);
> +

Please be consistent at least within individual files: There's no
"extern" in the existing function declaration here, so there
shouldn't be one here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:23:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hf7-000815-Aa; Fri, 24 Jan 2014 14:23:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6hf6-000810-AQ
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:23:12 +0000
Received: from [85.158.143.35:53843] by server-3.bemta-4.messagelabs.com id
	F2/82-32360-F4772E25; Fri, 24 Jan 2014 14:23:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390573389!557469!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3138 invoked from network); 24 Jan 2014 14:23:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:23:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="96175131"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 14:23:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 09:23:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W6hf1-0006YZ-SQ;
	Fri, 24 Jan 2014 14:23:07 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 24 Jan 2014 14:23:07 +0000
Message-ID: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

find_next_bit takes a "const unsigned long *", but forcing a cast of a
"uint32_t *" throws away the alignment constraints and ends up causing an
alignment fault on arm64 if the input happened to be 4- but not 8-byte aligned.

Instead of casting use a temporary variable of the right type.

I've had a look around for similar constructs and the only thing I found
was maintenance_interrupt, which casts a uint64_t down to an unsigned
long; although perhaps not best advised, I think that is safe.

This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
is just coincidental due to subtle changes to the stack layout etc.

Reported-by: Fu Wei <fu.wei@linaro.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/vgic.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 90e9707..553411d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -362,11 +362,12 @@ read_as_zero:
 
 static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
+    const unsigned long mask = r;
     struct pending_irq *p;
     unsigned int irq;
     int i = 0;
 
-    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
+    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
@@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
 static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 {
+    const unsigned long mask = r;
     struct pending_irq *p;
     unsigned int irq;
     int i = 0;
 
-    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
+    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:23:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hfb-00083a-Oh; Fri, 24 Jan 2014 14:23:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6hfa-00083N-7l
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:23:42 +0000
Received: from [85.158.137.68:63267] by server-9.bemta-3.messagelabs.com id
	B8/C6-13104-D6772E25; Fri, 24 Jan 2014 14:23:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390573419!11157762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26313 invoked from network); 24 Jan 2014 14:23:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:23:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,712,1384300800"; d="scan'208";a="94144143"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:23:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:23:38 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6hfV-0005Yr-Km;
	Fri, 24 Jan 2014 14:23:37 +0000
Date: Fri, 24 Jan 2014 14:23:37 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20140124142337.GH24675@zion.uk.xensource.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 10:30:16PM +0000, Peter Maydell wrote:
> On 23 January 2014 22:16, Wei Liu <wei.liu2@citrix.com> wrote:
> > As promised I hacked a prototype based on Paolo's disable TCG series.
> > However I coded some stubs for TCG anyway. So this series in principle
> > should work with / without Paolo's series.
> 
> I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and

Thanks for being blunt. ;-)

> "the binary is smaller" isn't IMHO sufficient justification for breaking
> QEMU's basic structure of "target-* define target CPUs and we have
> a lot of compile time constants which are specific to a CPU which
> get defined there". How would you support a bigendian Xen CPU,
> just to pick one example of where this falls down?
> 

I thought about this more deeply. From Xen's PoV (and I speculate this
applies to other hardware-assisted virtualization solutions as well), only
the native endianness is supported, so does it make sense to have a
target-native thing?

Wei.

> thanks -- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hik-0008Fx-Ek; Fri, 24 Jan 2014 14:26:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W6hij-0008Fr-Tn
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:26:58 +0000
Received: from [85.158.143.35:57441] by server-2.bemta-4.messagelabs.com id
	B0/C6-11386-13872E25; Fri, 24 Jan 2014 14:26:57 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390573615!580901!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31051 invoked from network); 24 Jan 2014 14:26:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 14:26:56 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OEQrTU026495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 14:26:53 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OEQqbl025267
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 14:26:53 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OEQqRp009123; Fri, 24 Jan 2014 14:26:52 GMT
Received: from [10.154.174.229] (/10.154.174.229)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 06:26:52 -0800
Message-ID: <52E27827.6090405@oracle.com>
Date: Fri, 24 Jan 2014 22:26:47 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390560694-2130-1-git-send-email-Annie.li@oracle.com>
	<1390561120.2124.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1390561120.2124.50.camel@kazak.uk.xensource.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] tools/xl: correctly shows split
 eventchannel for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/24 18:58, Ian Campbell wrote:
> On Fri, 2014-01-24 at 18:51 +0800, Annie Li wrote:
>
>> +/*
>> + * LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
>> + * If this is defined, libxl_nicinfo will contain an integer type field: evtch_rx,
>> + * containing the value of eventchannel for rx path of netback&netfront which support
>> + * split event channel. The original evtch field contains the value of eventchannel
>> + * for tx path in such case.
> I think it can be either "event channel" (two words) or
> "evtchn" (abbreviation) but not "eventchannel".

Ok, thanks.

>
>> + *
>> + */
>> +#if LIBXL_API_VERSION > 0x040400
>> +#define LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL 1
> This should be unconditional. There are several existing examples in
> libxl.h which you could have copied.

OK

>
>> +#endif
>> +
>>   /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
>>    * called from within libxl itself. Callers outside libxl, who
>>    * do not #include libxl_internal.h, are fine. */
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index 649ce50..b1b4946 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -489,6 +489,9 @@ libxl_nicinfo = Struct("nicinfo", [
>>       ("devid", libxl_devid),
>>       ("state", integer),
>>       ("evtch", integer),
>> +#ifdef LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
>> +    ("evtch_rx", integer),
>> +#endif
> Did you even build this? Because it can't have worked (Libxl_types.idl
> is a Python source file which is not preprocessed).

My fault: I only built xl from libxl and replaced the current one for 
testing; I'm not familiar with Python. :-(

>
> In any case, this ifdef is unnecessary and wrong. The LIBXL_HAVE
> indicates to the consumer that the field is present, but it should not
> actually gate the presence of the field.
>
> Think about it -- if an application is building against libxl version
> 4.4 (which was therefore built with evtchn_rx present) but requests
> LIBXL_API_VERSION == 0x040300 then your patch has now created an ABI
> mismatch.

I misunderstood LIBXL_HAVE_* in libxl.h, and thought it was used by libxl 
itself. Much clearer now, thanks.

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:28:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hkL-0008MC-13; Fri, 24 Jan 2014 14:28:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6hkK-0008M5-1Z
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:28:36 +0000
Received: from [85.158.139.211:52784] by server-3.bemta-5.messagelabs.com id
	8C/5A-04773-39872E25; Fri, 24 Jan 2014 14:28:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390573714!1464805!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12586 invoked from network); 24 Jan 2014 14:28:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:28:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:28:33 +0000
Message-Id: <52E286A00200007800116AEF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:28:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-4-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-4-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 03/17] x86/VPMU: Minor VPMU cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -236,7 +236,8 @@ static int amd_vpmu_save(struct vcpu *v)
>  
>      context_save(v);
>  
> -    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
> +    if ( !is_pv_domain(v->domain) && 

If I understand the intentions right, this is supposed to be
is_hvm_container_domain(). See the mail archives or talk to George D
if you want to learn about the intended distinction between the two.

Further this is inconsistent with the patch description saying "Make
sure that we only touch MSR bitmap on HVM guests", as that would
exclude PVH ones. With the three models in place now, you have to
be careful to not cause confusion by imprecise statements.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hmL-0008U3-K6; Fri, 24 Jan 2014 14:30:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6hmJ-0008Tr-Il
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:30:39 +0000
Received: from [85.158.139.211:20784] by server-8.bemta-5.messagelabs.com id
	D1/26-29838-E0972E25; Fri, 24 Jan 2014 14:30:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390573836!11772459!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32535 invoked from network); 24 Jan 2014 14:30:38 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:30:38 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OEUXk8031155
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 14:30:34 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OEUUCw004531
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 14:30:32 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OEUUxK011322; Fri, 24 Jan 2014 14:30:30 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 06:30:29 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D388D1BFA72; Fri, 24 Jan 2014 09:30:28 -0500 (EST)
Date: Fri, 24 Jan 2014 09:30:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124143028.GA12946@phenom.dumpdata.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
	<20140122211449.GA10426@phenom.dumpdata.com>
	<52E0DEAC020000780011603E@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E0DEAC020000780011603E@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 08:19:40AM +0000, Jan Beulich wrote:
> >>> On 22.01.14 at 22:14, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Jan 17, 2014 at 03:09:13PM +0000, Jan Beulich wrote:
> >> >>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
> >> > @@ -323,9 +324,14 @@
> >> >   *     For full interoperability, block front and backends should publish
> >> >   *     identical ring parameters, adjusted for unit differences, to the
> >> >   *     XenStore nodes used in both schemes.
> >> > - * (4) Devices that support discard functionality may internally allocate
> >> > - *     space (discardable extents) in units that are larger than the
> >> > - *     exported logical block size.
> >> > + * (4) Devices that support discard functionality may internally allocate 
> >> > space
> >> > + *     (discardable extents) in units that are larger than the exported 
> >> > logical
> >> > + *     block size. If the backing device has such discardable extents the
> >> > + *     backend must provide both discard-granularity and discard-alignment.
> >                     ^^^^ - MAY
> 
> I think the intention is to say that these two should go together,
> i.e. specifying just one of them is a mistake.

The 'and' in that sentence covers that, I think?

My reading of 'must' is that 'feature-discard' MUST come with both
discard-granularity and discard-alignment. But that is not the case:
even if the device does support them, it does not have to
expose them.

> 
> Jan
> 
> >> > + *     Backends supporting discard should include discard-granularity and
> >                                         ^^^^^ - MAY
> >> > + *     discard-alignment even if it supports discarding individual sectors.
> >> > + *     Frontends should assume discard-alignment == 0 and discard-granularity 
> >> > ==
> >> > + *     sector size if these keys are missing.
> >> >   * (5) The discard-alignment parameter allows a physical device to be
> >> >   *     partitioned into virtual devices that do not necessarily begin or
> >> >   *     end on a discardable extent boundary.
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:30:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hmO-0008Ug-0Y; Fri, 24 Jan 2014 14:30:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W6hmM-0008U9-A8
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:30:42 +0000
Received: from [85.158.137.68:28059] by server-12.bemta-3.messagelabs.com id
	D6/17-20055-11972E25; Fri, 24 Jan 2014 14:30:41 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390573840!7496501!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 644 invoked from network); 24 Jan 2014 14:30:41 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:30:41 -0000
Received: by mail-ea0-f177.google.com with SMTP id n15so1031693ead.8
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 06:30:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=xvpUJIEYwNZTWZQH/6qntzCCsYBZO9V0OV0Ih7fJ1yg=;
	b=M867DHh/mOiYhrpPqSJOGmUb87YV/oGel2xA+0N1zMyo0Asw1Kxea4ldpfW+McpUuR
	tNDOusskV6w6hsSDZx+yoxUysgeqZr0AmoVtyJAI3fZOYsQXUGQtNwsY3kYNVTjyy1Se
	S0RccRSGohEzb54WRhe4mNeIBJ+THzQFp77G9kVRlUDTHxvLF2El52UeSCwkG/iJdJcL
	ECj9tUjhG+hy6LiROez227EmiH5tM5Mk0zAEdLVxPw48heEFzLXx9XTJvaMwiQ3MECDX
	E1obgsKfnG4ec3rd60BWkJeeOcSakCGd2CiumZNzxz2xPKqlEfhFHHHqZxVVaGFJyiTl
	gnEA==
X-Received: by 10.15.43.10 with SMTP id w10mr13044368eev.13.1390573840751;
	Fri, 24 Jan 2014 06:30:40 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.dsl.vodafone.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id l4sm4239978een.13.2014.01.24.06.30.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 06:30:39 -0800 (PST)
Message-ID: <52E2790C.20304@redhat.com>
Date: Fri, 24 Jan 2014 15:30:36 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Peter Maydell <peter.maydell@linaro.org>, 
 Wei Liu <wei.liu2@citrix.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
In-Reply-To: <CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
X-Enigmail-Version: 1.6
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 23/01/2014 23:30, Peter Maydell ha scritto:
>> > As promised I hacked a prototype based on Paolo's disable TCG series.
>> > However I coded some stubs for TCG anyway. So this series in principle
>> > should work with / without Paolo's series.
> I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and
> "the binary is smaller" isn't IMHO sufficient justification for breaking
> QEMU's basic structure of "target-* define target CPUs and we have
> a lot of compile time constants which are specific to a CPU which
> get defined there". How would you support a bigendian Xen CPU,
> just to pick one example of where this falls down?

(1) decide that the Xen ring buffers are little-endian even on 
big-endian CPUs

(2) communicate the endianness of the Xen ring buffers via Xenstore, 
just like we do for sizeof(long), and let the guest use either 
endianness on any architecture.
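
Option (1) amounts to fixing the stored byte order of the shared ring
fields. A minimal sketch (hypothetical helper names, not Xen code):
byte-wise accesses make the on-page order little-endian no matter what
the host CPU's endianness is:

```c
#include <stdint.h>

/* Store a 32-bit ring index into the shared page in little-endian
 * order, independent of host endianness. */
static void ring_idx_write_le(uint8_t *slot, uint32_t idx)
{
    slot[0] = (uint8_t)(idx);
    slot[1] = (uint8_t)(idx >> 8);
    slot[2] = (uint8_t)(idx >> 16);
    slot[3] = (uint8_t)(idx >> 24);
}

/* Read it back; a big-endian guest reassembles the same value. */
static uint32_t ring_idx_read_le(const uint8_t *slot)
{
    return (uint32_t)slot[0]
         | (uint32_t)slot[1] << 8
         | (uint32_t)slot[2] << 16
         | (uint32_t)slot[3] << 24;
}
```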

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:36:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hrP-0000PV-4e; Fri, 24 Jan 2014 14:35:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W6hrN-0000PN-53
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:35:53 +0000
Received: from [85.158.143.35:53582] by server-3.bemta-4.messagelabs.com id
	C4/2A-32360-84A72E25; Fri, 24 Jan 2014 14:35:52 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390574151!588563!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18229 invoked from network); 24 Jan 2014 14:35:51 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:35:51 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so2605969lan.38
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 06:35:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=DQMaysH9UNHIepQUuHYJSsfPC3yERv5JT37Eo9LlYIg=;
	b=SpUK9WOUltRZyYAxV1pTTLTOENHTLil8kdJZIlHZXdc6KN/K+8B1GuWb4N6vTAPxZ/
	nfwc+5sydbm6tJu+TkdJLZZPm8aVMNmoV0OXlkEUQy+ZSOrsGaXKSt2JyTlncvONWaaT
	6cN3l9Sna+kCz/ZNXnCM8nynFRMad/Ap4IQB99HFto2SKGkDvLrt4XCVh2y06tg5gzz/
	E5bw/8jPeNNQdsB4muQUiBOnuKqB4+q9gp+iv89xHJ5Oo6Ngr6CkX4WHqbHhhOSJxxRo
	n+mg4sGZiIi6HuQqMa5JZpipRH9QYCdLheZmnwsxTlW9s0V0tw7aCPE9Xbqg6DUMu1bR
	ozcg==
X-Gm-Message-State: ALoCoQkx6k4nHl+nqa1aXr23RTVpRqn18U1y3ZJ/MvnqTJ2FWz0EevbBIJTSinlqcyfHJi9tefh+
X-Received: by 10.112.126.135 with SMTP id my7mr348855lbb.56.1390574150848;
	Fri, 24 Jan 2014 06:35:50 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.199.162 with HTTP; Fri, 24 Jan 2014 06:35:30 -0800 (PST)
In-Reply-To: <52E2790C.20304@redhat.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<52E2790C.20304@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Fri, 24 Jan 2014 14:35:30 +0000
Message-ID: <CAFEAcA8NLu3-LZ9sMqwLPfBOeAMdXqoHXs1tCkkWp_Rxz5YLRg@mail.gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24 January 2014 14:30, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Il 23/01/2014 23:30, Peter Maydell ha scritto:
>> I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and
>> "the binary is smaller" isn't IMHO sufficient justification for breaking
>> QEMU's basic structure of "target-* define target CPUs and we have
>> a lot of compile time constants which are specific to a CPU which
>> get defined there". How would you support a bigendian Xen CPU,
>> just to pick one example of where this falls down?
>
>
> (1) decide that the Xen ring buffers are little-endian even on big-endian
> CPUs
>
> (2) communicate the endianness of the Xen ring buffers via Xenstore, just
> like we do for sizeof(long), and let the guest use either endianness on any
> architecture.

You still have to make a choice about what you think
TARGET_WORDS_BIGENDIAN should be, and it's still going
to be wrong half the time and horribly confusing.
I just think this is completely the wrong solution to
the problem.

If Xen really wants a totally standalone binary of the
smallest possible size with just paravirtualized hardware
and minimal to no dependency on guest CPU architecture then
they should write one, along the lines of kvmtool :-)

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:36:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hrs-0000Rm-Iu; Fri, 24 Jan 2014 14:36:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6hrr-0000RW-4P
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:36:23 +0000
Received: from [85.158.137.68:61714] by server-14.bemta-3.messagelabs.com id
	43/A2-06105-66A72E25; Fri, 24 Jan 2014 14:36:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390574181!11167521!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16129 invoked from network); 24 Jan 2014 14:36:21 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:36:21 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so1015601eei.12
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 06:36:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RI+PIdF0YlKSE4d5LeWljJGOewdCHTEEf4tiFnXyACs=;
	b=CgjhQy1ZP0f6V01ZYBhWDgeaSeDDnhqau/0pgtbKvyXhms8DX/jn75r6SE8wmuywEg
	LbWM4MX0cmLxia+IrPQxEhByXvkGmvdMNLHUNkH/O+IaAE8e4N6PZ92pP1UQD21wiRsA
	UP9e3Bn4fkldSKJiitj1hL/JL4gKWbu6qdeHP2/gANgaz1euj7gSCZC16I8LjwhLuWrN
	WJRCStfQaCgY1KGNxD0cq2ZESvTpI5qnelgEt8KQ/Wdm55kfLdYIE2+fcxVGzqU0cbMe
	NM8YfLIktZR06PpHbRtITvOo2H6jGA6r18ztoehcQbfy7PRRb3UfhHZZIfBaSBuCtp7H
	l1vg==
X-Gm-Message-State: ALoCoQkd/3kIxLlYN/HsIwWu2ylbyWJkfbOdcHzhcJNsGPDy9TbuQCnW6c5EJXz+43eQ1POITCbG
X-Received: by 10.15.45.199 with SMTP id b47mr2651571eew.104.1390574181435;
	Fri, 24 Jan 2014 06:36:21 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v1sm4319759eef.9.2014.01.24.06.36.20
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 06:36:20 -0800 (PST)
Message-ID: <52E27A63.6030903@linaro.org>
Date: Fri, 24 Jan 2014 14:36:19 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 02:23 PM, Ian Campbell wrote:
> find_next_bit takes a "const unsigned long *" but forcing a cast of an
> "uint32_t *" throws away the alignment constraints and ends up causing an
> alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.
> 
> Instead of casting use a temporary variable of the right type.
> 
> I've had a look around for similar constructs and the only thing I found was
> maintenance_interrupt which casts a uint64_t down to an unsigned long, which
> although perhaps not best advised is safe I think.
> 
> This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> is just coincidental due to subtle changes to the stack layout etc.
> 
> Reported-by: Fu Wei <fu.wei@linaro.org>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Good catch! Do you plan to apply this patch for Xen 4.4?

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>  xen/arch/arm/vgic.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 90e9707..553411d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -362,11 +362,12 @@ read_as_zero:
>  
>  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  
>  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> 
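
The hazard the patch fixes can be shown in miniature. first_set_bit32
below is a hypothetical stand-in for find_next_bit over a single 32-bit
word, not the Xen implementation; the point is the widening temporary,
which is naturally aligned for unsigned long, whereas casting a
uint32_t * to unsigned long * can hand a 64-bit find_next_bit a pointer
that is only 4-byte aligned:

```c
#include <stdint.h>

/* Instead of find_next_bit((const unsigned long *)&r, ...), widen the
 * 32-bit value into a temporary of the right type.  The temporary has
 * the natural alignment of unsigned long, so an 8-byte load is safe,
 * and the upper 32 bits are zero. */
static int first_set_bit32(uint32_t r)
{
    const unsigned long mask = r;
    int i;

    for (i = 0; i < 32; i++)
        if (mask & (1UL << i))
            return i;
    return 32;  /* no bit set: return the size, like find_next_bit */
}
```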


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:38:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6htZ-0000aV-4x; Fri, 24 Jan 2014 14:38:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W6htX-0000aH-He
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:38:07 +0000
Received: from [193.109.254.147:45628] by server-4.bemta-14.messagelabs.com id
	F3/8A-03916-ECA72E25; Fri, 24 Jan 2014 14:38:06 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390574285!13007807!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24604 invoked from network); 24 Jan 2014 14:38:05 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:38:05 -0000
Received: by mail-ea0-f174.google.com with SMTP id b10so1037116eae.33
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 06:38:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=t6cpEseCefLD1csnodcRjBnkybXTYMek/pD/YBY4rv4=;
	b=RqnYiCDZfZGQMizjYOkMwBIpAtN0YDZwoOUembw/2tt/0cS5ynWiTE7dvOs3cLN795
	5Lt1KZ31wVzx2mob/F3FSIdhV9DP+YtYZ7S4nXCPK4YYKKxKs67pmd7POBkd/9paVhiO
	gaVVDsH1NC3hLgzOieez0TkthfnUAXC4m6LWtz2bEoxNMLzZJ8csA1y1FilZjm+T7gHV
	YZfk3X0fn/EkhskSnhCf6oh/BwXt2/h1lCAXPBkoAjaAfUGEJpO9SoifsTe3HAo5UzSo
	n9LdQXWr+t+jp52uJSm6oFeBlfQwgYHNr7gx0ZGBg5GywnvwAjWLwM6dMrAyEAjF7xNh
	QbmA==
X-Received: by 10.15.52.136 with SMTP id p8mr12811935eew.11.1390574285626;
	Fri, 24 Jan 2014 06:38:05 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.dsl.vodafone.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id o13sm4291289eex.19.2014.01.24.06.38.03
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 06:38:04 -0800 (PST)
Message-ID: <52E27AC9.8050506@redhat.com>
Date: Fri, 24 Jan 2014 15:38:01 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, 
 Peter Maydell <peter.maydell@linaro.org>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<20140124142337.GH24675@zion.uk.xensource.com>
In-Reply-To: <20140124142337.GH24675@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/2014 15:23, Wei Liu wrote:
> On Thu, Jan 23, 2014 at 10:30:16PM +0000, Peter Maydell wrote:
>> On 23 January 2014 22:16, Wei Liu <wei.liu2@citrix.com> wrote:
>>> As promised I hacked a prototype based on Paolo's disable TCG series.
>>> However I coded some stubs for TCG anyway. So this series in principle
>>> should work with / without Paolo's series.
>>
>> I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and
>
> Thanks for being blunt. ;-)
>
>> "the binary is smaller" isn't IMHO sufficient justification for breaking
>> QEMU's basic structure of "target-* define target CPUs and we have
>> a lot of compile time constants which are specific to a CPU which
>> get defined there". How would you support a bigendian Xen CPU,
>> just to pick one example of where this falls down?
>>
>
> Thinking about this more deeply: from Xen's PoV (and I speculate this
> applies to other hardware-assisted virtualization solutions as well),
> only the native endianness is supported. Does it make sense to have a
> target-native thing?

I think this is wrong, for a few reasons:

(1) xenpv is not hardware assisted virtualization

(2) supporting only native endianness leads to complications when 
systems are bi-endian, as is the case for PPC.  For example, virtio 1.0 
will always be little endian.

(3) there's a precedent in blkback for supporting differences between the 
guest and the host; you can do the same for endianness.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:38:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6htp-0000dO-IR; Fri, 24 Jan 2014 14:38:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6htn-0000cH-Np
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:38:24 +0000
Received: from [193.109.254.147:64654] by server-15.bemta-14.messagelabs.com
	id 8B/B1-22186-FDA72E25; Fri, 24 Jan 2014 14:38:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390574300!12977587!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1695 invoked from network); 24 Jan 2014 14:38:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 14:38:21 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OEbFq4021132
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 14:37:16 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OEbDNQ025175
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 14:37:14 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OEbDP3029089; Fri, 24 Jan 2014 14:37:13 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 06:37:12 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 717BC1BFA72; Fri, 24 Jan 2014 09:37:11 -0500 (EST)
Date: Fri, 24 Jan 2014 09:37:11 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	david.vrabel@citrix.com, boris.ostrovsky@oracle.com
Message-ID: <20140124143711.GB12946@phenom.dumpdata.com>
References: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v5] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 01:07:10PM +0000, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and the future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override they either follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap is set, as that is the only possible return
> value there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

It looks OK to me, and while it is not a bug fix I think it should
go in for v3.14, as it _should_ improve the backends.

David or Boris; Stefano, please Ack/Nack it.

Thank you.
> ---
>  arch/x86/include/asm/xen/page.h     |   12 +++--
>  arch/x86/xen/p2m.c                  |   25 ++--------
>  drivers/block/xen-blkback/blkback.c |   15 +++---
>  drivers/xen/gntdev.c                |   13 +++--
>  drivers/xen/grant-table.c           |   90 ++++++++++++++++++++++++++++++-----
>  include/xen/grant_table.h           |    8 +++-
>  6 files changed, 109 insertions(+), 54 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..68a1438 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> -extern int m2p_add_override(unsigned long mfn, struct page *page,
> -			    struct gnttab_map_grant_ref *kmap_op);
> +extern int m2p_add_override(unsigned long mfn,
> +			    struct page *page,
> +			    struct gnttab_map_grant_ref *kmap_op,
> +			    unsigned long pfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long pfn,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..0060178 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>  
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
>  {
>  	unsigned long flags;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long pfn,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
> -
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..e652c0e 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..2add483 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL, pfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  pfn,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 01:07:10PM +0000, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and the future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> It also removes a stray space from page.h and changes ret to 0 in the
> XENFEAT_auto_translated_physmap case, as that is the only possible return
> value there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

It looks OK to me, and while it is not a bug fix I think it should go
into v3.14, as it _should_ improve the backends.

David, Boris, Stefano: please Ack/Nack it.

Thank you.
> ---
>  arch/x86/include/asm/xen/page.h     |   12 +++--
>  arch/x86/xen/p2m.c                  |   25 ++--------
>  drivers/block/xen-blkback/blkback.c |   15 +++---
>  drivers/xen/gntdev.c                |   13 +++--
>  drivers/xen/grant-table.c           |   90 ++++++++++++++++++++++++++++++-----
>  include/xen/grant_table.h           |    8 +++-
>  6 files changed, 109 insertions(+), 54 deletions(-)
> 
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index b913915..68a1438 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> -extern int m2p_add_override(unsigned long mfn, struct page *page,
> -			    struct gnttab_map_grant_ref *kmap_op);
> +extern int m2p_add_override(unsigned long mfn,
> +			    struct page *page,
> +			    struct gnttab_map_grant_ref *kmap_op,
> +			    unsigned long pfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long pfn,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 2ae8699..0060178 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
>  
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
>  {
>  	unsigned long flags;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long pfn,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
> -	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
> -	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
> -
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>  		ptep = lookup_address(address, &level);
> @@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index e41c79c..e652c0e 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index aa846a4..2add483 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
> -	unsigned long mfn;
> +	unsigned long mfn, pfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +
> +		pages[i]->index = pfn_to_mfn(pfn);
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL, pfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
> +	unsigned long pfn, mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		pfn = page_to_pfn(pages[i]);
> +		mfn = get_phys_to_machine(pfn);
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  pfn,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 694dcaf..9a919b1 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:40:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hvv-000164-Dr; Fri, 24 Jan 2014 14:40:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6hvu-00015t-30
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:40:34 +0000
Received: from [85.158.137.68:35891] by server-8.bemta-3.messagelabs.com id
	C7/F5-31081-16B72E25; Fri, 24 Jan 2014 14:40:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390574432!11161927!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30305 invoked from network); 24 Jan 2014 14:40:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:40:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:40:31 +0000
Message-Id: <52E2896E0200007800116B1B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:40:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: tim@xen.org, julien.grall@linaro.org, xen-devel@lists.xen.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 15:23, Ian Campbell <ian.campbell@citrix.com> wrote:
> find_next_bit takes a "const unsigned long *", but forcing a cast of a
> "uint32_t *" throws away the alignment constraints and ends up causing an
> alignment fault on arm64 if the input happens to be 4- but not 8-byte
> aligned.
> 
> Instead of casting use a temporary variable of the right type.
> 
> I've had a look around for similar constructs and the only thing I found was
> maintenance_interrupt, which casts a uint64_t down to an unsigned long; that,
> although perhaps not best advised, is safe I think.
> 
> This was observed with the AArch64 Linaro toolchain 2013.12, but I think that
> is coincidental, due to subtle changes to the stack layout etc.
> 
> Reported-by: Fu Wei <fu.wei@linaro.org>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/vgic.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 90e9707..553411d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -362,11 +362,12 @@ read_as_zero:
>  
>  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;

Why don't you just change the type of "r" to "unsigned long"?

Jan

>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  
>  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -- 
> 1.7.10.4
> 
> 

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:42:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:42:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hxF-0001Ff-UD; Fri, 24 Jan 2014 14:41:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6hxE-0001FU-KO
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:41:56 +0000
Received: from [85.158.137.68:8758] by server-5.bemta-3.messagelabs.com id
	70/D6-25188-3BB72E25; Fri, 24 Jan 2014 14:41:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390574515!7499580!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14480 invoked from network); 24 Jan 2014 14:41:55 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:41:55 -0000
From xen-devel-bounces@lists.xen.org Fri Jan 24 14:42:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:42:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hxF-0001Ff-UD; Fri, 24 Jan 2014 14:41:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6hxE-0001FU-KO
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:41:56 +0000
Received: from [85.158.137.68:8758] by server-5.bemta-3.messagelabs.com id
	70/D6-25188-3BB72E25; Fri, 24 Jan 2014 14:41:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390574515!7499580!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14480 invoked from network); 24 Jan 2014 14:41:55 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:41:55 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1014371eek.10
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 06:41:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=NFzxanTc+ol31eDkKKUcT578NozGavHvXn5IQMQaF+M=;
	b=gstDFjOa904hNQBuGBZnW4Ypdqi5lmiO7DTRrHmIR04N7PeEqKL4GvHwBoXFM7NGYz
	thaIfoal0EW0ptVGHLD0X42fEmYhJD5TUfv3Uy4K3owe3GPWeIUZdgPo6NDIIw6yFdSt
	Kf7Du2V2K5U58484/17N1in4iqlihUQHTdWHuTqwP8PyvBKH3cjx3+rKIVCzSZj3+El7
	Q/YJUCXbT3tjxtS4W43aQO1/MIqoMD4g049reazLWNgggZTn8W0HGZ/k9iNWhHrL68kw
	colLzqF+1nlk2D1Eyi9BQfTh1PNtV4IgztqVHFWZu1F1vDAmF13VbUQ0ZhCHxqzY0vw/
	eJ/g==
X-Gm-Message-State: ALoCoQn5dvkpYL87oPKL5MeiZLQ4+YEZ4sR7kQAPiRVPMmoW7usg75STm0slf6sOPb0pVNiBXXL8
X-Received: by 10.14.0.6 with SMTP id 6mr2746334eea.99.1390574514923;
	Fri, 24 Jan 2014 06:41:54 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id l4sm4319109een.13.2014.01.24.06.41.53
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 06:41:54 -0800 (PST)
Message-ID: <52E27BB0.5030500@linaro.org>
Date: Fri, 24 Jan 2014 14:41:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389386987-26201-1-git-send-email-julien.grall@linaro.org>
	<1389793490.3793.29.camel@kazak.uk.xensource.com>
	<52D694B2.3040308@linaro.org>
	<1389794845.3793.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1389794845.3793.50.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: setup_dt_irq: don't enable the IRQ
 if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/15/2014 02:07 PM, Ian Campbell wrote:
> 
> I'll put this in my 4.5 queue and consider it again later. Please ping
> me a little while after 4.4 branches if I haven't done so.

I plan to send a bigger patch series today on interrupt management for
ARM. The patch will be included in that series. You can remove the patch
from your 4.5 queue for now.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:43:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:43:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hyr-0001O0-Fc; Fri, 24 Jan 2014 14:43:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W6hyq-0001Nt-78
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:43:36 +0000
Received: from [193.109.254.147:10309] by server-12.bemta-14.messagelabs.com
	id 9E/73-13681-71C72E25; Fri, 24 Jan 2014 14:43:35 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390574614!13012584!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17791 invoked from network); 24 Jan 2014 14:43:34 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	24 Jan 2014 14:43:34 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0OEgSGK001090
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 09:42:28 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-69.ams2.redhat.com
	[10.36.112.69])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0OEgMGg018411; Fri, 24 Jan 2014 09:42:24 -0500
Message-ID: <52E27BCD.70303@redhat.com>
Date: Fri, 24 Jan 2014 15:42:21 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Peter Maydell <peter.maydell@linaro.org>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<52E2790C.20304@redhat.com>
	<CAFEAcA8NLu3-LZ9sMqwLPfBOeAMdXqoHXs1tCkkWp_Rxz5YLRg@mail.gmail.com>
In-Reply-To: <CAFEAcA8NLu3-LZ9sMqwLPfBOeAMdXqoHXs1tCkkWp_Rxz5YLRg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>, QEMU Developers <qemu-devel@nongnu.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 24/01/2014 15:35, Peter Maydell ha scritto:
>> > (1) decide that the Xen ring buffers are little-endian even on big-endian
>> > CPUs
>> >
>> > (2) communicate the endianness of the Xen ring buffers via Xenstore, just
>> > like we do for sizeof(long), and let the guest use either endianness on any
>> > architecture.
> You still have to make a choice about what you think
> TARGET_WORDS_BIGENDIAN should be, and it's still going
> to be wrong half the time and horribly confusing.
> I just think this is completely the wrong solution to
> the problem.

Theoretically the xenpv-softmmu machine shouldn't need any code that 
depends on TARGET_WORDS_BIGENDIAN.

If we changed every #ifdef TARGET_WORDS_BIGENDIAN to if(), we could 
compile it with "#define TARGET_WORDS_BIGENDIAN abort()".
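A minimal sketch of the refactoring Paolo describes: replace compile-time `#ifdef TARGET_WORDS_BIGENDIAN` branches with a runtime predicate, so the code never references the macro at all. The names here (`xen_ring_big_endian`, `ring_read_u32`) are invented for illustration and are not QEMU API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Set once at startup, e.g. from a Xenstore key advertising the
 * guest's ring endianness (hypothetical mechanism, per option (2)). */
static bool xen_ring_big_endian;

/* Before:  #ifdef TARGET_WORDS_BIGENDIAN ... #else ... #endif
 * After:   an ordinary runtime branch on the ring's endianness. */
static uint32_t ring_read_u32(const uint8_t *p)
{
    if (xen_ring_big_endian) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    } else {
        return ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16) |
               ((uint32_t)p[1] << 8)  |  (uint32_t)p[0];
    }
}
```

With every such branch converted, defining `TARGET_WORDS_BIGENDIAN` to `abort()` merely verifies that no compile-time use remains.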

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:44:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hzf-0001TR-V4; Fri, 24 Jan 2014 14:44:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W6hzf-0001TE-5i
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:44:27 +0000
Received: from [85.158.139.211:30491] by server-1.bemta-5.messagelabs.com id
	89/28-21065-A4C72E25; Fri, 24 Jan 2014 14:44:26 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390574664!11776705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14022 invoked from network); 24 Jan 2014 14:44:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:44:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94151539"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:44:23 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:44:23 -0500
Message-ID: <52E27C45.4040706@citrix.com>
Date: Fri, 24 Jan 2014 14:44:21 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
	<20140124143711.GB12946@phenom.dumpdata.com>
In-Reply-To: <20140124143711.GB12946@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v5] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 14:37, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 23, 2014 at 01:07:10PM +0000, Zoltan Kiss wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
>> for blkback and future netback patches it just causes lock contention, as
>> those pages never go to userspace. Therefore this series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>>   parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or just set
>>   the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>>   m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
>>
>> It also removes a stray space from page.h and changes ret to 0 if
>> XENFEAT_auto_translated_physmap, as that is the only possible return value
>> there.
>>
>> v2:
>> - move the storing of the old mfn in page->index to gnttab_map_refs
>> - move the function header update to a separate patch
>>
>> v3:
>> - a new approach to retain old behaviour where it needed
>> - squash the patches into one
>>
>> v4:
>> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
>> - clear page->private before doing anything with the page, so m2p_find_override
>>   won't race with this
>>
>> v5:
>> - change return value handling in __gnttab_[un]map_refs
>> - remove a stray space in page.h
>> - add detail why ret = 0 now at some places
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
> 
> It looks OK to me and while it is not a bug-fix I think it should
> go for v3.14 - as it _should_ improve the backends.
> 
> David or Boris; Stefano, please Ack/Nack it.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David
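The wrapper split described in the quoted changelog can be sketched as follows. This is a compile-only illustration: the signatures are simplified (the real Linux grant-table API also takes a kmap_ops array), and the structs are opaque stand-ins:

```c
#include <stdbool.h>

struct gnttab_map_grant_ref;   /* stand-in for the real hypercall struct */
struct page;                   /* stand-in for struct page */

/* Core implementation: m2p_override selects the old (gntdev) behaviour
 * or the new lightweight path for kernel-only mappings. */
static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                             struct page **pages, unsigned int count,
                             bool m2p_override)
{
    (void)map_ops; (void)pages; (void)count;
    if (m2p_override) {
        /* ... old behaviour: install the m2p override for each page ... */
    } else {
        /* ... just set the private flag and call set_phys_to_machine() ... */
    }
    return 0;
}

/* Kernel backends (blkback, netback): no override, no lock contention. */
int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                    struct page **pages, unsigned int count)
{
    return __gnttab_map_refs(map_ops, pages, count, false);
}

/* gntdev, whose pages do reach userspace: keep the old behaviour. */
int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
                              struct page **pages, unsigned int count)
{
    return __gnttab_map_refs(map_ops, pages, count, true);
}
```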

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:44:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:44:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6hzq-0001VT-BR; Fri, 24 Jan 2014 14:44:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6hzo-0001V4-Vl
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:44:37 +0000
Received: from [193.109.254.147:14516] by server-14.bemta-14.messagelabs.com
	id 00/A0-12628-45C72E25; Fri, 24 Jan 2014 14:44:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390574675!11495817!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16440 invoked from network); 24 Jan 2014 14:44:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:44:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:44:35 +0000
Message-Id: <52E28A620200007800116B34@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:44:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
	<20140122211449.GA10426@phenom.dumpdata.com>
	<52E0DEAC020000780011603E@nat28.tlf.novell.com>
	<20140124143028.GA12946@phenom.dumpdata.com>
In-Reply-To: <20140124143028.GA12946@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, Olaf Hering <olaf@aepfle.de>,
	Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 15:30, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Thu, Jan 23, 2014 at 08:19:40AM +0000, Jan Beulich wrote:
>> >>> On 22.01.14 at 22:14, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> > On Fri, Jan 17, 2014 at 03:09:13PM +0000, Jan Beulich wrote:
>> >> >>> On 14.01.14 at 22:57, Olaf Hering <olaf@aepfle.de> wrote:
>> >> > @@ -323,9 +324,14 @@
>> >> >   *     For full interoperability, block front and backends should publish
>> >> >   *     identical ring parameters, adjusted for unit differences, to the
>> >> >   *     XenStore nodes used in both schemes.
>> >> > - * (4) Devices that support discard functionality may internally allocate
>> >> > - *     space (discardable extents) in units that are larger than the
>> >> > - *     exported logical block size.
>> >> > + * (4) Devices that support discard functionality may internally allocate space
>> >> > + *     (discardable extents) in units that are larger than the exported logical
>> >> > + *     block size. If the backing device has such discardable extents the
>> >> > + *     backend must provide both discard-granularity and discard-alignment.
>> >                     ^^^^ - MAY
>> 
>> I think the intention is to say that these two should go together,
>> i.e. specifying just one of them is a mistake.
> 
> The 'and' in that sentence covers that I think?

Not with my reading of it - when using "must", it's clear that both
need to be present. When using "may", one can read it that
providing just one is fine too. But I agree that the wording with
"must" isn't fully correct either. I'd go for "should", and extend the
sentence by "; providing just one of the two may be considered an
error by the frontend".

Jan

> My reading with 'must' is that 'features-discard' MUST have both
> discard-granularity and discard-alignment. But that is not the case
> - even if the device does support them - it does not have to
> expose them.
> 
>> 
>> Jan
>> 
>> >> > + *     Backends supporting discard should include discard-granularity and
>> >                                         ^^^^^ - MAY
>> >> > + *     discard-alignment even if it supports discarding individual sectors.
>> >> > + *     Frontends should assume discard-alignment == 0 and discard-granularity ==
>> >> > + *     sector size if these keys are missing.
>> >> >   * (5) The discard-alignment parameter allows a physical device to be
>> >> >   *     partitioned into virtual devices that do not necessarily begin or
>> >> >   *     end on a discardable extent boundary.
>> 
>> 
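The frontend defaulting rule in the quoted hunk (assume discard-alignment == 0 and discard-granularity == sector size when the XenStore keys are absent) amounts to the following sketch; the struct and function names are invented for illustration and are not the real blkfront code:

```c
#include <stdint.h>
#include <stdbool.h>

struct discard_params {
    uint32_t granularity;   /* bytes */
    uint32_t alignment;     /* bytes */
};

/* have_* flags model whether the backend published the XenStore key. */
static void discard_apply_defaults(struct discard_params *p,
                                   uint32_t sector_size,
                                   bool have_granularity, uint32_t granularity,
                                   bool have_alignment, uint32_t alignment)
{
    p->granularity = have_granularity ? granularity : sector_size;
    p->alignment   = have_alignment   ? alignment   : 0;
}
```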




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> >> >   *     identical ring parameters, adjusted for unit differences, to the
>> >> >   *     XenStore nodes used in both schemes.
>> >> > - * (4) Devices that support discard functionality may internally allocate
>> >> > - *     space (discardable extents) in units that are larger than the
>> >> > - *     exported logical block size.
>> >> > + * (4) Devices that support discard functionality may internally allocate 
> 
>> >> > space
>> >> > + *     (discardable extents) in units that are larger than the exported 
>> >> > logical
>> >> > + *     block size. If the backing device has such discardable extents the
>> >> > + *     backend must provide both discard-granularity and discard-alignment.
>> >                     ^^^^ - MAY
>> 
>> I think the intention is to say that these two should go together,
>> i.e. specifying just one of them is a mistake.
> 
> The 'and' in that sentence covers that I think?

Not with my reading of it - when using "must", it's clear that both
need to be present. When using "may", one can read it that
providing just one is fine too. But I agree that the wording with
"must" isn't fully correct either. I'd go for "should", and extend the
sentence by "; providing just one of the two may be considered an
error by the frontend".

Jan

> My reading with 'must' is that 'features-discard' MUST have both
> discard-granularity and discard-alignment. But that is not the case
> - even if the device does support them - it does not have to
> expose them.
> 
>> 
>> Jan
>> 
>> >> > + *     Backends supporting discard should include discard-granularity and
>> >                                         ^^^^^ - MAY
>> >> > + *     discard-alignment even if it supports discarding individual 
> sectors.
>> >> > + *     Frontends should assume discard-alignment == 0 and 
> discard-granularity 
>> >> > ==
>> >> > + *     sector size if these keys are missing.
>> >> >   * (5) The discard-alignment parameter allows a physical device to be
>> >> >   *     partitioned into virtual devices that do not necessarily begin or
>> >> >   *     end on a discardable extent boundary.
>> 
>> 
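[For illustration only: the fallback rule being discussed — assume
discard-alignment == 0 and discard-granularity == sector size when the
XenStore keys are absent — could be sketched in a frontend as below.
This is a hypothetical helper, not actual blkfront code; the names
discard_params and discard_params_or_default are made up.]

```c
#include <stdint.h>

struct discard_params {
    unsigned int granularity;   /* bytes */
    unsigned int alignment;     /* bytes */
};

/* Apply note (4)'s fallback: if the backend published neither
 * discard-granularity nor discard-alignment, assume alignment 0 and
 * granularity equal to the sector size. */
static struct discard_params
discard_params_or_default(int keys_present, unsigned int granularity,
                          unsigned int alignment, unsigned int sector_size)
{
    struct discard_params p;

    if ( keys_present )
    {
        p.granularity = granularity;
        p.alignment = alignment;
    }
    else
    {
        p.granularity = sector_size;   /* default per the comment text */
        p.alignment = 0;
    }
    return p;
}
```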




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:45:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i12-0001h4-2u; Fri, 24 Jan 2014 14:45:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W6i0z-0001gk-4x
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 14:45:50 +0000
Received: from [85.158.137.68:26605] by server-14.bemta-3.messagelabs.com id
	DF/23-06105-C9C72E25; Fri, 24 Jan 2014 14:45:48 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390574747!9980376!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27581 invoked from network); 24 Jan 2014 14:45:47 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:45:47 -0000
Received: by mail-ee0-f52.google.com with SMTP id e53so1014103eek.39
	for <xen-devel@lists.xensource.com>;
	Fri, 24 Jan 2014 06:45:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8LGYJBF8pv2MQlpV2ZFPXt6011Qsroa4ZBqcG2ARwVY=;
	b=l5D5nGTLc3jYe1OGKgOw1rWMJztgFsf4wgZp0cVrfOW1yo7hvIAwvt41YaTSDesg+5
	JBARb6KMKa0JpsdwmsC5yVVXZ8EjUfZrb3Q5EYdt6R8kM2auU+u0M09VMl9mR3kkyfxP
	anAXSZcIBaf7qmTTTJUCxJTAbN5BHzbjIxczVcFeRaapgr4070QMUnzqgjCM9wX0Tym6
	LnD5Yg1D0OyW2SyBud7++joDNPpzZW5/CQXmMtamgc815lL/sCa0yzfZi3TjXq/Q4SE1
	1BhnSb1jYkm3Iy/wkgiTYGP0jMtrmO405/zDj9RRsch33Uk6aQA6OH665Mav2Jps+X9I
	AZGg==
X-Gm-Message-State: ALoCoQlfRUWUG6mLCKSou555ZDKO8qJ8v8C/vSHIT2Rj0YywAnW+O1Bxn+dj5w9VOOabEPcv2LHM
X-Received: by 10.14.175.131 with SMTP id z3mr9146359eel.65.1390574747473;
	Fri, 24 Jan 2014 06:45:47 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id z46sm4371269een.1.2014.01.24.06.45.45
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 06:45:46 -0800 (PST)
Message-ID: <52E27CA5.50407@m2r.biz>
Date: Fri, 24 Jan 2014 15:45:57 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
	<1390312565.20516.119.camel@kazak.uk.xensource.com>
In-Reply-To: <1390312565.20516.119.camel@kazak.uk.xensource.com>
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xensource.com,
	Ian.Jackson@eu.citrix.com, Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 21/01/2014 14:56, Ian Campbell ha scritto:
> On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
>> Make rdname function work with xl
>>
>> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Although I would have preferred a slightly more verbose changelog.

This patch fixes this problem:
http://lists.xen.org/archives/html/xen-devel/2014-01/msg01545.html
and perhaps other problems as well.

I have done extensive testing with this patch applied without
encountering errors; can it be added to 4.4?

Thanks for any reply.

>> ---
>>   tools/hotplug/Linux/init.d/xendomains |    2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/hotplug/Linux/init.d/xendomains b/tools/hotplug/Linux/init.d/xendomains
>> index 38371af..59f1e3d 100644
>> --- a/tools/hotplug/Linux/init.d/xendomains
>> +++ b/tools/hotplug/Linux/init.d/xendomains
>> @@ -186,7 +186,7 @@ contains_something()
>>   rdname()
>>   {
>>       NM=$($CMD create --quiet --dryrun --defconfig "$1" |
>> -         sed -n 's/^.*(name \(.*\))$/\1/p')
>> +         sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p')
>>   }
>>   
>>   rdnames()
>
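[For reference, the two sed expressions in the patched rdname() cover
both the xm/SXP dry-run format and xl's JSON dry-run format; a quick
illustration with a made-up domain name "testvm":]

```shell
# xm-style SXP output line: "(name testvm)"
printf '(name testvm)\n' | \
    sed -n 's/^.*(name \(.*\))$/\1/p'
# -> testvm

# xl JSON dry-run output line: '    "name": "testvm",'
printf '    "name": "testvm",\n' | \
    sed -n 's/^.*"name": "\(.*\)",$/\1/p'
# -> testvm

# The patch combines both expressions, so either format yields the name:
printf '    "name": "testvm",\n' | \
    sed -n 's/^.*(name \(.*\))$/\1/p;s/^.*"name": "\(.*\)",$/\1/p'
# -> testvm
```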


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:47:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i2V-0001s2-KN; Fri, 24 Jan 2014 14:47:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6i2U-0001rq-7q
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:47:22 +0000
Received: from [85.158.139.211:14939] by server-2.bemta-5.messagelabs.com id
	58/89-29392-9FC72E25; Fri, 24 Jan 2014 14:47:21 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390574839!11770458!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15330 invoked from network); 24 Jan 2014 14:47:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:47:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96183163"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 14:47:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:47:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6i2P-0005ss-Mn;
	Fri, 24 Jan 2014 14:47:17 +0000
Message-ID: <52E27CEF.2010009@eu.citrix.com>
Date: Fri, 24 Jan 2014 14:47:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, "Zhang, Yang Z"
	<yang.z.zhang@intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>	
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>	
	<1389865519.5190.9.camel@kazak.uk.xensource.com>	
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>	
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
In-Reply-To: <1389951634.6697.43.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/17/2014 09:40 AM, Ian Campbell wrote:
> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>> As Andrew said, nested is still in the experimental stage, because there
>> are still lots of scenarios not covered in my testing. So it may not be
>> accurate to say it is well supported. But I hope people know that
>> nested is ready to use now, and I encourage them to try it and report
>> bugs to us to push nested virt forward.
> Perhaps we could say it is "tech preview" rather than "experimental"?

If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMWare } and Win7 XP 
compatibility mode are tested regularly, and only HyperV, L2 shadow, and 
paging / PoD don't work, I think we should be able to call this a "1.0" 
release for nested virt.  Then we can add in "now works with HyperV", 
"Now works with shadow", "Now works with paging" as those become mature.

I think if we have a wiki page describing what is expected to work, and 
some of the key things that don't work, then we should be able to add 
"Basic nested virt support for VMX no longer experimental (see the 
wiki for details)" to the "4.4 feature" list.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:48:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:48:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i3s-0002Se-CD; Fri, 24 Jan 2014 14:48:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6i3q-0002SS-75
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:48:46 +0000
Received: from [85.158.139.211:54658] by server-2.bemta-5.messagelabs.com id
	61/0C-29392-D4D72E25; Fri, 24 Jan 2014 14:48:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390574923!998308!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3615 invoked from network); 24 Jan 2014 14:48:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:48:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94153007"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:48:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:48:42 -0500
Message-ID: <1390574921.2124.87.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 24 Jan 2014 14:48:41 +0000
In-Reply-To: <52E27A63.6030903@linaro.org>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
	<52E27A63.6030903@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:36 +0000, Julien Grall wrote:
> On 01/24/2014 02:23 PM, Ian Campbell wrote:
> > find_next_bit takes a "const unsigned long *" but forcing a cast of an
> > "uint32_t *" throws away the alignment constraints and ends up causing an
> > alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.
> > 
> > Instead of casting use a temporary variable of the right type.
> > 
> > I've had a look around for similar constructs and the only thing I found was
> > maintenance_interrupt which casts a uint64_t down to an unsigned long, which
> > although perhaps not best advised is safe I think.
> > 
> > This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> > is just coincidental due to subtle changes to the stack layout etc.
> > 
> > Reported-by: Fu Wei <fu.wei@linaro.org>
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Good catch! Do you plan to apply this patch for Xen 4.4?

Yes, I think it is a suitable bug fix.

> 
> Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> > ---
> >  xen/arch/arm/vgic.c |    6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 90e9707..553411d 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -362,11 +362,12 @@ read_as_zero:
> >  
> >  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> > +    const unsigned long mask = r;
> >      struct pending_irq *p;
> >      unsigned int irq;
> >      int i = 0;
> >  
> > -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> > +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> >          p = irq_to_pending(v, irq);
> >          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  
> >  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> > +    const unsigned long mask = r;
> >      struct pending_irq *p;
> >      unsigned int irq;
> >      int i = 0;
> >  
> > -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> > +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> >          p = irq_to_pending(v, irq);
> >          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > 
> 
> 
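[The fix can be illustrated in isolation: copying the uint32_t into a
local unsigned long once, and scanning that, avoids the misaligned
pointer entirely. A minimal user-space sketch, with a simplified
single-word stand-in for Xen's find_next_bit and a hypothetical
count_irq_bits helper standing in for the vgic loops:]

```c
#include <stdint.h>

/* Simplified single-word stand-in for find_next_bit(): return the index
 * of the next set bit at or after 'offset', or 'size' if none. */
static int find_next_bit(const unsigned long *addr, int size, int offset)
{
    int i;

    for ( i = offset; i < size; i++ )
        if ( *addr & (1UL << i) )
            return i;
    return size;
}

/* Scan the set bits of a 32-bit register value the way the patched
 * vgic_{enable,disable}_irqs() do: widen r into a local unsigned long
 * so &mask has the alignment the parameter type implies, instead of
 * casting a uint32_t * (which may be only 4-byte aligned on arm64,
 * where unsigned long is 8 bytes). */
static int count_irq_bits(uint32_t r)
{
    const unsigned long mask = r;   /* widen once; no cast needed */
    int i = 0, count = 0;

    while ( (i = find_next_bit(&mask, 32, i)) < 32 )
    {
        count++;
        i++;
    }
    return count;
}
```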



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:48:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:48:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i3s-0002Se-CD; Fri, 24 Jan 2014 14:48:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6i3q-0002SS-75
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:48:46 +0000
Received: from [85.158.139.211:54658] by server-2.bemta-5.messagelabs.com id
	61/0C-29392-D4D72E25; Fri, 24 Jan 2014 14:48:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390574923!998308!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3615 invoked from network); 24 Jan 2014 14:48:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:48:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94153007"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:48:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:48:42 -0500
Message-ID: <1390574921.2124.87.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Fri, 24 Jan 2014 14:48:41 +0000
In-Reply-To: <52E27A63.6030903@linaro.org>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
	<52E27A63.6030903@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:36 +0000, Julien Grall wrote:
> On 01/24/2014 02:23 PM, Ian Campbell wrote:
> > find_next_bit takes a "const unsigned long *" but forcing a cast of an
> > "uint32_t *" throws away the alignment constraints and ends up causing an
> > alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.
> > 
> > Instead of casting use a temporary variable of the right type.
> > 
> > I've had a look around for similar constructs and the only thing I found was
> > maintenance_interrupt which casts a uint64_t down to an unsigned long, which
> > although perhaps not best advised is safe I think.
> > 
> > This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> > is just coincidental due to subtle changes to the stack layout etc.
> > 
> > Reported-by: Fu Wei <fu.wei@linaro.org>
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Good catch! Do you plan to apply this patch for Xen 4.4?

Yes, I think it is a suitable bug fix.

> 
> Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> > ---
> >  xen/arch/arm/vgic.c |    6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 90e9707..553411d 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -362,11 +362,12 @@ read_as_zero:
> >  
> >  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> > +    const unsigned long mask = r;
> >      struct pending_irq *p;
> >      unsigned int irq;
> >      int i = 0;
> >  
> > -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> > +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> >          p = irq_to_pending(v, irq);
> >          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  
> >  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> > +    const unsigned long mask = r;
> >      struct pending_irq *p;
> >      unsigned int irq;
> >      int i = 0;
> >  
> > -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> > +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> >          irq = i + (32 * n);
> >          p = irq_to_pending(v, irq);
> >          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > 
> 
> 
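[Editorial illustration, not part of the archived patch: a minimal standalone sketch of the pattern the fix adopts. find_next_bit_word is a hypothetical single-word stand-in for Xen's find_next_bit; the point is that widening the uint32_t into a properly typed unsigned long temporary avoids reading a 4-byte-aligned object through an 8-byte unsigned long pointer, which can fault on arm64.]

```c
/* Sketch: on LP64 targets (e.g. arm64) unsigned long is 8 bytes, so
 * casting a uint32_t * to unsigned long * discards the stricter
 * alignment requirement. Widening into a temporary is safe instead.
 */
#include <stdint.h>

/* Hypothetical single-word stand-in for Xen's find_next_bit. */
static int find_next_bit_word(unsigned long word, int size, int offset)
{
    for (int i = offset; i < size; i++)
        if (word & (1UL << i))
            return i;
    return size;   /* no further bit set */
}

/* Mirrors the loop structure of vgic_enable_irqs: widen once, then scan. */
static int count_set_bits(uint32_t r)
{
    const unsigned long mask = r;  /* correctly typed and aligned temporary */
    int i = 0, count = 0;

    while ((i = find_next_bit_word(mask, 32, i)) < 32) {
        count++;
        i++;  /* advance past the bit just handled */
    }
    return count;
}
```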



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:50:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i56-0002cE-T5; Fri, 24 Jan 2014 14:50:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1W6i55-0002by-GR; Fri, 24 Jan 2014 14:50:03 +0000
Received: from [85.158.137.68:31103] by server-14.bemta-3.messagelabs.com id
	FF/CA-06105-A9D72E25; Fri, 24 Jan 2014 14:50:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390575000!9981470!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18535 invoked from network); 24 Jan 2014 14:50:01 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 14:50:01 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OEni7x021249
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 14:49:44 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OEneei000974
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 14:49:41 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OEneBi029996; Fri, 24 Jan 2014 14:49:40 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 06:49:39 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D2A431BFA72; Fri, 24 Jan 2014 09:49:38 -0500 (EST)
Date: Fri, 24 Jan 2014 09:49:38 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael D Labriola <mlabriol@gdeb.com>
Message-ID: <20140124144938.GD12946@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> 
> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > Date: 01/21/2014 04:59 PM
> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > Sent by: xen-devel-bounces@lists.xen.org
> > 
> > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 
> > > 10:38:27 AM:
> > > 
> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > > Date: 01/20/2014 10:38 AM
> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > 
> > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola wrote:
> > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 
> 10:14:36 
> > > AM:
> > > > > 
> > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > > > Date: 01/20/2014 10:14 AM
> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > > 
> > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola 
> wrote:
> > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having 
> consistent 
> > > > > crashes 
> > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and 
> unusably 
> > > 
> > > > > slow 
> > > > > > > graphics with a newer HD7000 (can see each line refresh 
> > > individually on 
> > > > > 
> > > > > > > radeonfb tty).  All 3 systems seem to work fine bare metal.
> > > > > > 
> > > > > > I hadn't been using DRM, just Xserver. Is that what you mean?
> > > > > 
> > > > > The R600 problems happen when in X, using OpenGL, on my dom0.  The 
> 
> > > > > RadeonSI sluggishness is when using the KMS framebuffer device for 
> a 
> > > plain 
> > > > > text console login.
> > > > 
> > > > So the sluggishness is probably due to PAT not being enabled. This patch
> > > > should be applied:
> > > > 
> > > > lkml.org/lkml/2011/11/8/406
> > > > 
> > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > > > 
> > > > and these two reverted:
> > > > 
> > > >  "xen/pat: Disable PAT support for now."
> > > >  "xen/pat: Disable PAT using pat_enabled value."
> > > > 
> > > > Which is to say do:
> > > > 
> > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > > 
> > > Thanks!  I cherry-picked that patch out of your testing tree, reverted 
> 
> > > those 2 commits, recompiled and installed.  Definitely fixed the HD 
> 7000 
> > > sluggishness and appears to have fixed the R600 crashes (although it's 
> 
> > > only been running a few hours).
> > > 
> > > How come that patch didn't get into mainline?  It looks pretty 
> innocuous 
> > > to me...
> > 
> > <Sigh> the x86 maintainers wanted a different route. And I hadn't had
> > the chance nor time to implement it.
> 
> I see.  Well, I've got a handful of boxes in my lab that need that patch 
> to be usable.  If you do come up with a more mainline-able solution, I'd 
> gladly test it for you.  ;-)

Thank you!
> 
> Thanks again, by the way.
> 
> ---
> Michael D Labriola
> Electric Boat
> mlabriol@gdeb.com
> 401-848-8871 (desk)
> 401-848-8513 (lab)
> 401-316-9844 (cell)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:50:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i5I-0002eB-AH; Fri, 24 Jan 2014 14:50:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6i5G-0002ds-Sd
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:50:15 +0000
Received: from [85.158.137.68:36371] by server-6.bemta-3.messagelabs.com id
	B4/88-04868-6AD72E25; Fri, 24 Jan 2014 14:50:14 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390575011!9996678!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1982 invoked from network); 24 Jan 2014 14:50:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:50:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96184102"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 14:50:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:50:11 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6i5C-0005wC-6K;
	Fri, 24 Jan 2014 14:50:10 +0000
Date: Fri, 24 Jan 2014 14:50:09 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140124145009.GJ24675@zion.uk.xensource.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<20140124142337.GH24675@zion.uk.xensource.com>
	<52E27AC9.8050506@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E27AC9.8050506@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 03:38:01PM +0100, Paolo Bonzini wrote:
> Il 24/01/2014 15:23, Wei Liu ha scritto:
> >On Thu, Jan 23, 2014 at 10:30:16PM +0000, Peter Maydell wrote:
> >>On 23 January 2014 22:16, Wei Liu <wei.liu2@citrix.com> wrote:
> >>>As promised I hacked a prototype based on Paolo's disable TCG series.
> >>>However I coded some stubs for TCG anyway. So this series in principle
> >>>should work with / without Paolo's series.
> >>
> >>I'm afraid I still think this is a terrible idea. "Xen" isn't a CPU, and
> >
> >Thanks for being blunt. ;-)
> >
> >>"the binary is smaller" isn't IMHO sufficient justification for breaking
> >>QEMU's basic structure of "target-* define target CPUs and we have
> >>a lot of compile time constants which are specific to a CPU which
> >>get defined there". How would you support a bigendian Xen CPU,
> >>just to pick one example of where this falls down?
> >>
> >
> >I thought about this more deeply. From Xen's PoV (and I speculate this
> >applies to other hardware-assisted virtualization solutions as well) only
> >the native endianness is supported; does it make sense to have a
> >target-native thing?
> 
> I think this is wrong, for a few reasons:
> 
> (1) xenpv is not hardware assisted virtualization
> 

Correct... What I really meant was "those virtualization solutions don't
care much about CPU emulation" (if there's any other than Xen).

> (2) supporting only native endianness leads to complications when
> systems are bi-endian, as is the case for PPC.  For example, virtio
> 1.0 will always be little endian.
> 

OK, good to know.

> (3) there's a precedent for supporting different guests between the
> guest and the host in blkback, you can do the same for endianness.
> 

Yes. But this is still an open question at the moment, as there's no
canonical protocol for it (it hasn't even been discussed).

Your second point is the one I had missed, though. Thanks for the reminder.

Wei.

> Paolo
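[Editorial illustration of Paolo's point (2), not from the thread: device formats such as virtio 1.0 fix their fields as little endian regardless of guest CPU mode, so on a bi-endian host like PPC a byte-order-aware read is needed rather than assuming native endianness. le32_read is a hypothetical helper, not a QEMU or Xen API.]

```c
/* Sketch: reassembling a little-endian field byte by byte gives the
 * correct value on a host of any endianness, with no native-order
 * assumption baked into the target.
 */
#include <stdint.h>

static uint32_t le32_read(const uint8_t *b)
{
    /* Byte 0 is least significant in a little-endian field. */
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```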

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:50:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i5L-0002f0-Sf; Fri, 24 Jan 2014 14:50:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6i5I-0002e8-7X
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:50:17 +0000
Received: from [85.158.137.68:36472] by server-8.bemta-3.messagelabs.com id
	28/F4-31081-7AD72E25; Fri, 24 Jan 2014 14:50:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390575011!11132846!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21002 invoked from network); 24 Jan 2014 14:50:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:50:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94153499"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:50:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:50:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6i58-0005w9-Gw;
	Fri, 24 Jan 2014 14:50:06 +0000
Date: Fri, 24 Jan 2014 14:48:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140124143711.GB12946@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401241448430.15917@kaball.uk.xensource.com>
References: <1390482430-9168-1-git-send-email-zoltan.kiss@citrix.com>
	<20140124143711.GB12946@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH v5] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 23, 2014 at 01:07:10PM +0000, Zoltan Kiss wrote:
> > The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> > for blkback and future netback patches it just causes lock contention, as
> > those pages never go to userspace. Therefore this series does the following:
> > - the original functions were renamed to __gnttab_[un]map_refs, with a new
> >   parameter m2p_override
> > - based on m2p_override either they follow the original behaviour, or just set
> >   the private flag and call set_phys_to_machine
> > - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
> >   m2p_override false
> > - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> > 
> > It also removes a stray space from page.h and changes ret to 0 if
> > XENFEAT_auto_translated_physmap, as that is the only possible return value
> > there.
> > 
> > v2:
> > - move the storing of the old mfn in page->index to gnttab_map_refs
> > - move the function header update to a separate patch
> > 
> > v3:
> > - a new approach to retain old behaviour where it is needed
> > - squash the patches into one
> > 
> > v4:
> > - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> > - clear page->private before doing anything with the page, so m2p_find_override
> >   won't race with this
> > 
> > v5:
> > - change return value handling in __gnttab_[un]map_refs
> > - remove a stray space in page.h
> > - add detail why ret = 0 now at some places
> > 
> > Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> > Suggested-by: David Vrabel <david.vrabel@citrix.com>
> 
> It looks OK to me and while it is not a bug-fix I think it should
> go for v3.14 - as it _should_ improve the backends.
> 
> David or Boris; Stefano, please Ack/Nack it.
> 
> Thank you.

I gave my Reviewed-by on the v6 version of the patch.


> >  arch/x86/include/asm/xen/page.h     |   12 +++--
> >  arch/x86/xen/p2m.c                  |   25 ++--------
> >  drivers/block/xen-blkback/blkback.c |   15 +++---
> >  drivers/xen/gntdev.c                |   13 +++--
> >  drivers/xen/grant-table.c           |   90 ++++++++++++++++++++++++++++++-----
> >  include/xen/grant_table.h           |    8 +++-
> >  6 files changed, 109 insertions(+), 54 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> > index b913915..68a1438 100644
> > --- a/arch/x86/include/asm/xen/page.h
> > +++ b/arch/x86/include/asm/xen/page.h
> > @@ -49,10 +49,14 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
> >  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
> >  					     unsigned long pfn_e);
> >  
> > -extern int m2p_add_override(unsigned long mfn, struct page *page,
> > -			    struct gnttab_map_grant_ref *kmap_op);
> > +extern int m2p_add_override(unsigned long mfn,
> > +			    struct page *page,
> > +			    struct gnttab_map_grant_ref *kmap_op,
> > +			    unsigned long pfn);
> >  extern int m2p_remove_override(struct page *page,
> > -				struct gnttab_map_grant_ref *kmap_op);
> > +			       struct gnttab_map_grant_ref *kmap_op,
> > +			       unsigned long pfn,
> > +			       unsigned long mfn);
> >  extern struct page *m2p_find_override(unsigned long mfn);
> >  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
> >  
> > @@ -121,7 +125,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
> >  		pfn = m2p_find_override_pfn(mfn, ~0);
> >  	}
> >  
> > -	/* 
> > +	/*
> >  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
> >  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
> >  	 * valid entry for it.
> > diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> > index 2ae8699..0060178 100644
> > --- a/arch/x86/xen/p2m.c
> > +++ b/arch/x86/xen/p2m.c
> > @@ -872,15 +872,13 @@ static unsigned long mfn_hash(unsigned long mfn)
> >  
> >  /* Add an MFN override for a particular page */
> >  int m2p_add_override(unsigned long mfn, struct page *page,
> > -		struct gnttab_map_grant_ref *kmap_op)
> > +		struct gnttab_map_grant_ref *kmap_op, unsigned long pfn)
> >  {
> >  	unsigned long flags;
> > -	unsigned long pfn;
> >  	unsigned long uninitialized_var(address);
> >  	unsigned level;
> >  	pte_t *ptep = NULL;
> >  
> > -	pfn = page_to_pfn(page);
> >  	if (!PageHighMem(page)) {
> >  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> >  		ptep = lookup_address(address, &level);
> > @@ -888,13 +886,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
> >  					"m2p_add_override: pfn %lx not mapped", pfn))
> >  			return -EINVAL;
> >  	}
> > -	WARN_ON(PagePrivate(page));
> > -	SetPagePrivate(page);
> > -	set_page_private(page, mfn);
> > -	page->index = pfn_to_mfn(pfn);
> > -
> > -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> > -		return -ENOMEM;
> >  
> >  	if (kmap_op != NULL) {
> >  		if (!PageHighMem(page)) {
> > @@ -933,20 +924,15 @@ int m2p_add_override(unsigned long mfn, struct page *page,
> >  }
> >  EXPORT_SYMBOL_GPL(m2p_add_override);
> >  int m2p_remove_override(struct page *page,
> > -		struct gnttab_map_grant_ref *kmap_op)
> > +			struct gnttab_map_grant_ref *kmap_op,
> > +			unsigned long pfn,
> > +			unsigned long mfn)
> >  {
> >  	unsigned long flags;
> > -	unsigned long mfn;
> > -	unsigned long pfn;
> >  	unsigned long uninitialized_var(address);
> >  	unsigned level;
> >  	pte_t *ptep = NULL;
> >  
> > -	pfn = page_to_pfn(page);
> > -	mfn = get_phys_to_machine(pfn);
> > -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> > -		return -EINVAL;
> > -
> >  	if (!PageHighMem(page)) {
> >  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> >  		ptep = lookup_address(address, &level);
> > @@ -959,10 +945,7 @@ int m2p_remove_override(struct page *page,
> >  	spin_lock_irqsave(&m2p_override_lock, flags);
> >  	list_del(&page->lru);
> >  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> > -	WARN_ON(!PagePrivate(page));
> > -	ClearPagePrivate(page);
> >  
> > -	set_phys_to_machine(pfn, page->index);
> >  	if (kmap_op != NULL) {
> >  		if (!PageHighMem(page)) {
> >  			struct multicall_space mcs;
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 6620b73..875025f 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
> >  
> >  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
> >  			!rb_next(&persistent_gnt->node)) {
> > -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> > -				segs_to_unmap);
> > +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> >  			BUG_ON(ret);
> >  			put_free_pages(blkif, pages, segs_to_unmap);
> >  			segs_to_unmap = 0;
> > @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
> >  		pages[segs_to_unmap] = persistent_gnt->page;
> >  
> >  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> > -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> > -				segs_to_unmap);
> > +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> >  			BUG_ON(ret);
> >  			put_free_pages(blkif, pages, segs_to_unmap);
> >  			segs_to_unmap = 0;
> > @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
> >  		kfree(persistent_gnt);
> >  	}
> >  	if (segs_to_unmap > 0) {
> > -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> > +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
> >  		BUG_ON(ret);
> >  		put_free_pages(blkif, pages, segs_to_unmap);
> >  	}
> > @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
> >  				    GNTMAP_host_map, pages[i]->handle);
> >  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
> >  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> > -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> > -			                        invcount);
> > +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> >  			BUG_ON(ret);
> >  			put_free_pages(blkif, unmap_pages, invcount);
> >  			invcount = 0;
> >  		}
> >  	}
> >  	if (invcount) {
> > -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> > +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
> >  		BUG_ON(ret);
> >  		put_free_pages(blkif, unmap_pages, invcount);
> >  	}
> > @@ -740,7 +737,7 @@ again:
> >  	}
> >  
> >  	if (segs_to_map) {
> > -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> > +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
> >  		BUG_ON(ret);
> >  	}
> >  
> > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> > index e41c79c..e652c0e 100644
> > --- a/drivers/xen/gntdev.c
> > +++ b/drivers/xen/gntdev.c
> > @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
> >  	}
> >  
> >  	pr_debug("map %d+%d\n", map->index, map->count);
> > -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> > -			map->pages, map->count);
> > +	err = gnttab_map_refs_userspace(map->map_ops,
> > +					use_ptemod ? map->kmap_ops : NULL,
> > +					map->pages,
> > +					map->count);
> >  	if (err)
> >  		return err;
> >  
> > @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
> >  		}
> >  	}
> >  
> > -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> > -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> > -			pages);
> > +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> > +					  use_ptemod ? map->kmap_ops + offset : NULL,
> > +					  map->pages + offset,
> > +					  pages);
> >  	if (err)
> >  		return err;
> >  
> > diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> > index aa846a4..2add483 100644
> > --- a/drivers/xen/grant-table.c
> > +++ b/drivers/xen/grant-table.c
> > @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
> >  }
> >  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
> >  
> > -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >  		    struct gnttab_map_grant_ref *kmap_ops,
> > -		    struct page **pages, unsigned int count)
> > +		    struct page **pages, unsigned int count,
> > +		    bool m2p_override)
> >  {
> >  	int i, ret;
> >  	bool lazy = false;
> >  	pte_t *pte;
> > -	unsigned long mfn;
> > +	unsigned long mfn, pfn;
> >  
> > +	BUG_ON(kmap_ops && !m2p_override);
> >  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
> >  	if (ret)
> >  		return ret;
> > @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> >  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> >  		}
> > -		return ret;
> > +		return 0;
> >  	}
> >  
> > -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> > +	if (m2p_override &&
> > +	    !in_interrupt() &&
> > +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> >  		arch_enter_lazy_mmu_mode();
> >  		lazy = true;
> >  	}
> > @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >  		} else {
> >  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> >  		}
> > -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> > -				       &kmap_ops[i] : NULL);
> > +		pfn = page_to_pfn(pages[i]);
> > +
> > +		WARN_ON(PagePrivate(pages[i]));
> > +		SetPagePrivate(pages[i]);
> > +		set_page_private(pages[i], mfn);
> > +
> > +		pages[i]->index = pfn_to_mfn(pfn);
> > +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> > +			ret = -ENOMEM;
> > +			goto out;
> > +		}
> > +		if (m2p_override)
> > +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> > +					       &kmap_ops[i] : NULL, pfn);
> >  		if (ret)
> >  			goto out;
> >  	}
> > @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >  
> >  	return ret;
> >  }
> > +
> > +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > +		    struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> > +}
> >  EXPORT_SYMBOL_GPL(gnttab_map_refs);
> >  
> > -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> > +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> > +			      struct gnttab_map_grant_ref *kmap_ops,
> > +			      struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> > +
> > +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  		      struct gnttab_map_grant_ref *kmap_ops,
> > -		      struct page **pages, unsigned int count)
> > +		      struct page **pages, unsigned int count,
> > +		      bool m2p_override)
> >  {
> >  	int i, ret;
> >  	bool lazy = false;
> > +	unsigned long pfn, mfn;
> >  
> > +	BUG_ON(kmap_ops && !m2p_override);
> >  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
> >  	if (ret)
> >  		return ret;
> > @@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> >  					INVALID_P2M_ENTRY);
> >  		}
> > -		return ret;
> > +		return 0;
> >  	}
> >  
> > -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> > +	if (m2p_override &&
> > +	    !in_interrupt() &&
> > +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> >  		arch_enter_lazy_mmu_mode();
> >  		lazy = true;
> >  	}
> >  
> >  	for (i = 0; i < count; i++) {
> > -		ret = m2p_remove_override(pages[i], kmap_ops ?
> > -				       &kmap_ops[i] : NULL);
> > +		pfn = page_to_pfn(pages[i]);
> > +		mfn = get_phys_to_machine(pfn);
> > +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> > +			ret = -EINVAL;
> > +			goto out;
> > +		}
> > +
> > +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> > +		WARN_ON(!PagePrivate(pages[i]));
> > +		ClearPagePrivate(pages[i]);
> > +		set_phys_to_machine(pfn, pages[i]->index);
> > +		if (m2p_override)
> > +			ret = m2p_remove_override(pages[i],
> > +						  kmap_ops ?
> > +						   &kmap_ops[i] : NULL,
> > +						  pfn,
> > +						  mfn);
> >  		if (ret)
> >  			goto out;
> >  	}
> > @@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  
> >  	return ret;
> >  }
> > +
> > +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> > +		    struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> > +}
> >  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
> >  
> > +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> > +				struct gnttab_map_grant_ref *kmap_ops,
> > +				struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> > +
> >  static unsigned nr_status_frames(unsigned nr_grant_frames)
> >  {
> >  	BUG_ON(grefs_per_grant_frame == 0);
> > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > index 694dcaf..9a919b1 100644
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
> >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> >  
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > -		    struct gnttab_map_grant_ref *kmap_ops,
> >  		    struct page **pages, unsigned int count);
> > +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> > +			      struct gnttab_map_grant_ref *kmap_ops,
> > +			      struct page **pages, unsigned int count);
> >  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> > -		      struct gnttab_map_grant_ref *kunmap_ops,
> >  		      struct page **pages, unsigned int count);
> > +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> > +				struct gnttab_map_grant_ref *kunmap_ops,
> > +				struct page **pages, unsigned int count);
> >  
> >  /* Perform a batch of grant map/copy operations. Retry every batch slot
> >   * for which the hypervisor returns GNTST_eagain. This is typically due
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > +			ret = -ENOMEM;
> > +			goto out;
> > +		}
> > +		if (m2p_override)
> > +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> > +					       &kmap_ops[i] : NULL, pfn);
> >  		if (ret)
> >  			goto out;
> >  	}
> > @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> >  
> >  	return ret;
> >  }
> > +
> > +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > +		    struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> > +}
> >  EXPORT_SYMBOL_GPL(gnttab_map_refs);
> >  
> > -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> > +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> > +			      struct gnttab_map_grant_ref *kmap_ops,
> > +			      struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> > +
> > +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  		      struct gnttab_map_grant_ref *kmap_ops,
> > -		      struct page **pages, unsigned int count)
> > +		      struct page **pages, unsigned int count,
> > +		      bool m2p_override)
> >  {
> >  	int i, ret;
> >  	bool lazy = false;
> > +	unsigned long pfn, mfn;
> >  
> > +	BUG_ON(kmap_ops && !m2p_override);
> >  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
> >  	if (ret)
> >  		return ret;
> > @@ -958,17 +991,34 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> >  					INVALID_P2M_ENTRY);
> >  		}
> > -		return ret;
> > +		return 0;
> >  	}
> >  
> > -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> > +	if (m2p_override &&
> > +	    !in_interrupt() &&
> > +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> >  		arch_enter_lazy_mmu_mode();
> >  		lazy = true;
> >  	}
> >  
> >  	for (i = 0; i < count; i++) {
> > -		ret = m2p_remove_override(pages[i], kmap_ops ?
> > -				       &kmap_ops[i] : NULL);
> > +		pfn = page_to_pfn(pages[i]);
> > +		mfn = get_phys_to_machine(pfn);
> > +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> > +			ret = -EINVAL;
> > +			goto out;
> > +		}
> > +
> > +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> > +		WARN_ON(!PagePrivate(pages[i]));
> > +		ClearPagePrivate(pages[i]);
> > +		set_phys_to_machine(pfn, pages[i]->index);
> > +		if (m2p_override)
> > +			ret = m2p_remove_override(pages[i],
> > +						  kmap_ops ?
> > +						   &kmap_ops[i] : NULL,
> > +						  pfn,
> > +						  mfn);
> >  		if (ret)
> >  			goto out;
> >  	}
> > @@ -979,8 +1029,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> >  
> >  	return ret;
> >  }
> > +
> > +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> > +		    struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> > +}
> >  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
> >  
> > +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> > +				struct gnttab_map_grant_ref *kmap_ops,
> > +				struct page **pages, unsigned int count)
> > +{
> > +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> > +}
> > +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> > +
> >  static unsigned nr_status_frames(unsigned nr_grant_frames)
> >  {
> >  	BUG_ON(grefs_per_grant_frame == 0);
> > diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> > index 694dcaf..9a919b1 100644
> > --- a/include/xen/grant_table.h
> > +++ b/include/xen/grant_table.h
> > @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
> >  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
> >  
> >  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> > -		    struct gnttab_map_grant_ref *kmap_ops,
> >  		    struct page **pages, unsigned int count);
> > +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> > +			      struct gnttab_map_grant_ref *kmap_ops,
> > +			      struct page **pages, unsigned int count);
> >  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> > -		      struct gnttab_map_grant_ref *kunmap_ops,
> >  		      struct page **pages, unsigned int count);
> > +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> > +				struct gnttab_map_grant_ref *kunmap_ops,
> > +				struct page **pages, unsigned int count);
> >  
> >  /* Perform a batch of grant map/copy operations. Retry every batch slot
> >   * for which the hypervisor returns GNTST_eagain. This is typically due
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
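The split in the quoted patch can be sketched in userspace C: one shared helper takes an explicit m2p_override flag, and two thin entry points pin the flag down so in-kernel callers can never pass kmap_ops without the override. All names below are illustrative stand-ins, not the real grant-table API:

```c
#include <stdbool.h>
#include <stddef.h>

struct map_op { int handle; };

static int map_refs_common(struct map_op *ops, struct map_op *kops,
                           size_t count, bool m2p_override)
{
    (void)ops; (void)count;
    /* The real code enforces this invariant with
     * BUG_ON(kmap_ops && !m2p_override). */
    if (kops && !m2p_override)
        return -1;
    return 0; /* pretend the hypercall and p2m updates succeeded */
}

/* Kernel-only path (blkback et al.): no kmap_ops, no M2P override. */
static int map_refs(struct map_op *ops, size_t count)
{
    return map_refs_common(ops, NULL, count, false);
}

/* Userspace (gntdev) path: kmap_ops allowed, override always on. */
static int map_refs_userspace(struct map_op *ops, struct map_op *kops,
                              size_t count)
{
    return map_refs_common(ops, kops, count, true);
}
```

The point of the pattern is that the flag never appears at call sites: each caller picks a wrapper, and the invalid kmap_ops-without-override combination is unreachable through the public entry points.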

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:50:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i5T-0002i7-5v; Fri, 24 Jan 2014 14:50:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6i5R-0002hQ-Pr
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:50:25 +0000
Received: from [85.158.139.211:16362] by server-1.bemta-5.messagelabs.com id
	72/11-21065-0BD72E25; Fri, 24 Jan 2014 14:50:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390575022!11778469!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23745 invoked from network); 24 Jan 2014 14:50:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:50:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94153605"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 14:50:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:50:22 -0500
Message-ID: <1390575020.2124.89.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 24 Jan 2014 14:50:20 +0000
In-Reply-To: <52E2896E0200007800116B1B@nat28.tlf.novell.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
	<52E2896E0200007800116B1B@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: tim@xen.org, julien.grall@linaro.org, xen-devel@lists.xen.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:40 +0000, Jan Beulich wrote:
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 90e9707..553411d 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -362,11 +362,12 @@ read_as_zero:
> >  
> >  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> >  {
> > +    const unsigned long mask = r;
> 
> Why don't you just change the type of "r" to "unsigned long"?

The MMIO register which this function is emulating is a 32-bit register,
so I preferred to keep this wrinkle confined to the internals.
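Concretely, the wrinkle is that find_next_bit() walks arrays of unsigned long, so on a 64-bit build handing it the address of a 32-bit value would scan 32 bits of unrelated adjacent memory. Widening the register into a local unsigned long first avoids that. A userspace sketch (find_next_bit_word is a stand-in for the real helper, restricted to one word):

```c
#include <limits.h>
#include <stdint.h>

#define BITS_PER_ULONG (sizeof(unsigned long) * CHAR_BIT)

/* Returns the index of the next set bit at or after 'start',
 * or BITS_PER_ULONG if there is none. Illustrative only. */
static unsigned int find_next_bit_word(unsigned long word, unsigned int start)
{
    for (unsigned int i = start; i < BITS_PER_ULONG; i++)
        if (word & (1UL << i))
            return i;
    return BITS_PER_ULONG;
}

/* Mirrors the patch: zero-extend the 32-bit register value into a
 * local unsigned long once, then scan that copy. Bits 32..63 of the
 * copy are known zero, so the scan can never read stray memory. */
static unsigned int first_set_irq(uint32_t r)
{
    const unsigned long mask = r;
    return find_next_bit_word(mask, 0);
}
```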

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:53:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:53:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i8I-0003Bf-W7; Fri, 24 Jan 2014 14:53:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6i8H-0003BX-Vv
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 14:53:22 +0000
Received: from [85.158.143.35:30519] by server-2.bemta-4.messagelabs.com id
	AE/65-11386-16E72E25; Fri, 24 Jan 2014 14:53:21 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390575199!595478!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21627 invoked from network); 24 Jan 2014 14:53:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:53:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96185183"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 14:53:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:53:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6i8D-0005yr-Nc;
	Fri, 24 Jan 2014 14:53:17 +0000
Message-ID: <52E27E56.6000209@eu.citrix.com>
Date: Fri, 24 Jan 2014 14:53:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Fabio Fantoni
	<fabio.fantoni@m2r.biz>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
	<1390312565.20516.119.camel@kazak.uk.xensource.com>
In-Reply-To: <1390312565.20516.119.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Ian.Jackson@eu.citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/21/2014 01:56 PM, Ian Campbell wrote:
> On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
>> Make rdname function work with xl
>>
>> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Although I would have preferred a slightly more verbose changelog.

This clearly fixes a bug in xendomains.  The worst it might do is break 
xendomains for xend; if Ian C. is reasonably confident, based on his 
inspection of the patch, that it won't do so, then:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:55:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6i9p-0003Jx-6N; Fri, 24 Jan 2014 14:54:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6i9n-0003Jn-Nm
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:54:55 +0000
Received: from [85.158.143.35:49996] by server-1.bemta-4.messagelabs.com id
	7E/34-02132-FBE72E25; Fri, 24 Jan 2014 14:54:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390575294!586695!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15636 invoked from network); 24 Jan 2014 14:54:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:54:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:54:54 +0000
Message-Id: <52E28CCC0200007800116B7D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:54:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Add xenpmu.h header file,

To me, naming a public Xen header (other than the core one) xen*.h
is redundant. There'd be no information lost if you just called it pmu.h.

Also I think you ought to use plural here.

> --- /dev/null
> +++ b/xen/include/public/arch-x86/xenpmu.h
> @@ -0,0 +1,66 @@
> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
> +
> +/* x86-specific PMU definitions */
> +
> +#include "xen.h"

Why?

> +struct xen_pmu_intel_ctxt {
> +    uint64_t global_ctrl;
> +    uint64_t global_ovf_ctrl;
> +    uint64_t global_status;
> +    uint64_t fixed_ctrl;
> +    uint64_t ds_area;
> +    uint64_t pebs_enable;
> +    uint64_t debugctl;
> +    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
> +    uint64_t arch_counters;   /* Offset to architectural counter MSRs */

I think these last two could easily be uint32_t.

> +/* Shared between hypervisor and PV domain */
> +struct xen_pmu_data {
> +    uint32_t domain_id;
> +    uint32_t vcpu_id;
> +    uint32_t pcpu_id;
> +    uint32_t pmu_flags;
> +
> +    xen_arch_pmu_t pmu;
> +};

So if this got included by an architecture-independent source file
on ARM, how would this build? You at least need a stub definition
there for xen_arch_pmu_t afaict (if you already give the impression -
further up - that you're supporting ARM compilation of this header).
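A minimal sketch of the kind of stub being asked for: the xen_pmu_data field names come from the quoted patch, but the arch-specific contents and the non-x86 stub are assumptions, not the real public header.

```c
#include <stdint.h>

#if defined(__i386__) || defined(__x86_64__)
typedef struct xen_pmu_arch {
    uint64_t regs[4];           /* placeholder for the x86 PMU context */
} xen_arch_pmu_t;
#else
typedef struct xen_pmu_arch {
    uint8_t pad[32];            /* stub so arch-independent code builds */
} xen_arch_pmu_t;
#endif

/* Shared between hypervisor and PV domain (layout from the patch). */
struct xen_pmu_data {
    uint32_t domain_id;
    uint32_t vcpu_id;
    uint32_t pcpu_id;
    uint32_t pmu_flags;

    xen_arch_pmu_t pmu;
};
```

With a stub in place, arch-independent code that only touches the common fields compiles on every architecture, and only the arch-specific consumers need the real definition.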

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:56:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:56:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iBk-0003TK-3b; Fri, 24 Jan 2014 14:56:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W6iBi-0003TC-6U
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 14:56:54 +0000
Received: from [85.158.137.68:6756] by server-9.bemta-3.messagelabs.com id
	CF/91-13104-53F72E25; Fri, 24 Jan 2014 14:56:53 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390575412!11175359!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3941 invoked from network); 24 Jan 2014 14:56:52 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:56:52 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W6iBV-000148-6s; Fri, 24 Jan 2014 14:56:41 +0000
Date: Fri, 24 Jan 2014 15:56:41 +0100
From: Tim Deegan <tim@xen.org>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20140124145641.GA83765@deinos.phlegethon.org>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
	<52E27CEF.2010009@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E27CEF.2010009@eu.citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:47 +0000 on 24 Jan (1390571231), George Dunlap wrote:
> On 01/17/2014 09:40 AM, Ian Campbell wrote:
> > On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
> >> As Andrew said, nested is still in the experimental stage, because
> >> there are still lots of scenarios not covered in my testing. So it
> >> may not be accurate to say it is well supported. But I hope people
> >> know that nested is ready to use now, and I encourage them to try it
> >> and report bugs to us to push nested virt forward.
> > Perhaps we could say it is "tech preview" rather than "experimental"?
> 
> If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMWare } and Win7 XP 
> compatibility mode are tested regularly, and only HyperV, L2 shadow, and 
> paging / PoD don't work, I think we should be able to call this a "1.0" 
> release for nested virt.  Then we can add in "now works with HyperV", 
> "Now works with shadow", "Now works with paging" as those become mature.

That depends on what the failure modes are for the other cases --
esp. given that the L1 guest's choice of hypervisor, shadow vs HAP &c,
are not under the control of the L0 admin.  I think that has to be
clearly understood before we encourage people to turn this on.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:57:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iCJ-0003X4-Op; Fri, 24 Jan 2014 14:57:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6iCH-0003Wn-Rg
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:57:30 +0000
Received: from [85.158.137.68:13531] by server-3.bemta-3.messagelabs.com id
	48/48-10658-95F72E25; Fri, 24 Jan 2014 14:57:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390575446!11111793!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22510 invoked from network); 24 Jan 2014 14:57:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:57:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96187120"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 14:57:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 09:57:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6iCC-00062b-Uu;
	Fri, 24 Jan 2014 14:57:24 +0000
Date: Fri, 24 Jan 2014 14:56:17 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <52E27BCD.70303@redhat.com>
Message-ID: <alpine.DEB.2.02.1401241451070.15917@kaball.uk.xensource.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<52E2790C.20304@redhat.com>
	<CAFEAcA8NLu3-LZ9sMqwLPfBOeAMdXqoHXs1tCkkWp_Rxz5YLRg@mail.gmail.com>
	<52E27BCD.70303@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Jan 2014, Paolo Bonzini wrote:
> Il 24/01/2014 15:35, Peter Maydell ha scritto:
> > > > (1) decide that the Xen ring buffers are little-endian even on
> > > big-endian
> > > > CPUs
> > > >
> > > > (2) communicate the endianness of the Xen ring buffers via Xenstore,
> > > just
> > > > like we do for sizeof(long), and let the guest use either endianness on
> > > any
> > > > architecture.
> > You still have to make a choice about what you think
> > TARGET_WORDS_BIGENDIAN should be, and it's still going
> > to be wrong half the time and horribly confusing.
> > I just think this is completely the wrong solution to
> > the problem.
> 
> Theoretically the xenpv-softmmu machine shouldn't need any code that depends
> on TARGET_WORDS_BIGENDIAN.
> 
> If we changed every #ifdef TARGET_WORDS_BIGENDIAN to if(), we could compile it
> with "#define TARGET_WORDS_BIGENDIAN abort()".

Right. All our PV protocols are little endian by definition.

Besides, I don't know how to state this more clearly, but there are no
big-endian Xen guests. There have never been any big-endian Xen guests.
There are no big-endian Xen guests on the roadmap.
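Since the PV ring format is little-endian by definition, one way to avoid TARGET_WORDS_BIGENDIAN entirely (compile-time or runtime) is to decode ring fields byte-by-byte. The helper below is illustrative only -- it is not QEMU's actual API:

```c
#include <stdint.h>

/* Illustrative sketch, not QEMU code: read a little-endian 32-bit
 * field from a PV ring.  Assembling the value from individual bytes
 * makes the same code correct on any host endianness, so nothing
 * here needs to branch on TARGET_WORDS_BIGENDIAN. */
static inline uint32_t ring_read_le32(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```

A backend that reads all shared-ring fields this way compiles and behaves identically whether the host is little- or big-endian.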

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:58:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iDY-0003j4-Kp; Fri, 24 Jan 2014 14:58:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6iDW-0003ip-Ro
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:58:47 +0000
Received: from [85.158.137.68:27413] by server-15.bemta-3.messagelabs.com id
	D1/D3-11556-6AF72E25; Fri, 24 Jan 2014 14:58:46 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390575523!9998618!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20831 invoked from network); 24 Jan 2014 14:58:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 14:58:45 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OEwgVQ032265
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 14:58:43 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0OEwfYl022684
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 14:58:42 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OEwfOA022674; Fri, 24 Jan 2014 14:58:41 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 06:58:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5EB6E1BFA72; Fri, 24 Jan 2014 09:58:40 -0500 (EST)
Date: Fri, 24 Jan 2014 09:58:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140124145840.GE12946@phenom.dumpdata.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E191BB.7040904@terremark.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slutz wrote:
> On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
> >Don Slutz <dslutz@verizon.com> wrote:
> 
> [snip]
> 
> >>WARNING: g.e. still in use!
> >>WARNING: g.e. still in use!
> >>WARNING: g.e. still in use!
> >>pm_op(): platform_pm_resume+0x0/0x50 returns -19
> >>PM: Device i8042 failed to resume: error -19
> >>INFO: task sadc:22164 blocked for more then 120 seconds.
> >>"echo 0 >..."
> >>INFO: task sadc:22164 blocked for more then 120 seconds.
> >>
> >>[root@dcs-xen-54 ~]# xl des 17
> >>[root@dcs-xen-54 ~]# xl restore -V
> >>/big/xl-save/centos-6.4-x86_64.0.save
> >>
> >>
> >>Not sure if this is expected or not.
> >I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it to be enabled)?
> 
> I have not used xend/xe in a long time.  I did need to configure it.
> 
> Does not start:
> 
> 
> # /etc/init.d/xend start
> WARNING: Enabling the xend toolstack.
> xend is deprecated and scheduled for removal. Please migrate
> to another toolstack ASAP.
> Traceback (most recent call last):
>   File "/usr/sbin/xend", line 110, in <module>
>     sys.exit(main())
>   File "/usr/sbin/xend", line 91, in main
>     start_blktapctrl()
>   File "/usr/sbin/xend", line 77, in start_blktapctrl
>     start_daemon("blktapctrl", "")
>   File "/usr/sbin/xend", line 74, in start_daemon
>     os.execvp(daemon, (daemon,) + args)
>   File "/usr/lib64/python2.7/os.py", line 344, in execvp
>     _execvpe(file, args)
>   File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
>     func(fullname, *argrest)
> OSError: [Errno 2] No such file or directory
> 
> 
> How important is it to try this?

It tells us whether the issue is indeed with the 'fast-cancel' thing.

But, I do recall seeing a patch from Ian Jackson for this - I just
don't remember what it was called - it was posted here and perhaps
applying it would help?

> 
>     -Don Slutz
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 14:58:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 14:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iDf-0003l2-CI; Fri, 24 Jan 2014 14:58:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W6iDd-0003kN-Ng
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:58:53 +0000
Received: from [85.158.139.211:9466] by server-16.bemta-5.messagelabs.com id
	87/8E-11843-DAF72E25; Fri, 24 Jan 2014 14:58:53 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390575532!11764803!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29359 invoked from network); 24 Jan 2014 14:58:52 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 14:58:52 -0000
Received: by mail-wg0-f53.google.com with SMTP id y10so3104971wgg.20
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 06:58:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=iWJUdQGffdsvx4sk4uatKsD/XqJVvrmX22G36sm30X4=;
	b=qapmsAfMnUF0ypk8fks8+9a7J+ur2ZMhnvkrse1Rbb6uQe8pTfGOtA9Q4gDG67PJCS
	gj3kDQ/mTM5mLXGnixBsBKIbYh4BwfIRd81Rp5BCkRLRGq77JOoGPQl4VoNU0yfl/56b
	zc0+atR3hipvblRRGBqCPQ2UaoUI75gOJZP/GmgJJ9lqiXMFuPdcFxWwDeFDERo/t/hg
	CvtsaVaTew3V/kuwoDIXmtI/LapQQeg2HYMFkgIbsOgv4PizPTDQtUmBu8KUzvO+rJEo
	3HbmEUlWLF0xEra0vlDh+/zx/OCUDJ7/5FHYlO1ArUF4kzi8noCsUMaoJtMzBC2bfhbx
	slIA==
MIME-Version: 1.0
X-Received: by 10.194.86.130 with SMTP id p2mr1949wjz.88.1390575532051; Fri,
	24 Jan 2014 06:58:52 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 24 Jan 2014 06:58:51 -0800 (PST)
In-Reply-To: <52DEB887.8070409@citrix.com>
References: <52DEB887.8070409@citrix.com>
Date: Fri, 24 Jan 2014 14:58:51 +0000
X-Google-Sender-Auth: yHA8JhvcVj55EBvFsEEatPDC0rE
Message-ID: <CAFLBxZYPvwtHe93Ee--zhHn89QfM=fjvHEWHnxE+Kav=+O3_nw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Cc'ing the guy working on nested virt...

On Tue, Jan 21, 2014 at 6:12 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> Hello,
>
> I have been giving nested virt a try, and have my first bug to report.
> This is still ongoing, and is by no means complete yet.
>
> Setup:
> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>
> Single Intel Haswell SDP (Grantley platform):
> Native hypervisor: XenServer
>
> Two L1 guests:
>   XenServer (running with EPT)
>   XenServer (running with shadow)
>
>
> When attempting to create an L2 EPT HVM domain under an L1 shadow
> domain, the L1 shadow domain is killed with:
>
> (XEN) <vm_launch_fail> error code 7
> (XEN) domain_crash_sync called from vmcs.c:1293
> (XEN) Domain 16 (vcpu#3) crashed on cpu#2:
> (XEN) ----[ Xen-4.4.0-xs82349-d  x86_64  debug=y  Not tainted ]----
> (XEN) CPU:    2
> (XEN) RIP:    0000:[<0000000000000000>]
> (XEN) RFLAGS: 0000000000000002   CONTEXT: hvm guest
> (XEN) rax: 0000000000000000   rbx: ffff83043cad8000   rcx: ffff83043cadff80
> (XEN) rdx: ffff82d0801d6ea0   rsi: 0000000000000000   rdi: ffff82d0801e2e8c
> (XEN) rbp: ffff82d080105680   rsp: 0000000000000000   r8:  ffff830064100000
> (XEN) r9:  ffff82d0801056ee   r10: ffff83043cadff70   r11: 0000000000000000
> (XEN) r12: ffff83043cadff50   r13: ffff830441e42000   r14: ffff830064100000
> (XEN) r15: ffff82d080189425   cr0: 0000000000000039   cr4: 0000000000002050
> (XEN) cr3: 0000000000000000   cr2: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000
>
>
> I am continuing experiments with different VMs under each L1 hypervisor,
> to see what else breaks.
>
> ~Andrew
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:00:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:00:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iEi-0003zd-Ei; Fri, 24 Jan 2014 15:00:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6iEg-0003zI-SI
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 14:59:58 +0000
Received: from [85.158.139.211:51061] by server-3.bemta-5.messagelabs.com id
	E3/FD-04773-EEF72E25; Fri, 24 Jan 2014 14:59:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390575597!1001532!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32591 invoked from network); 24 Jan 2014 14:59:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 14:59:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 14:59:56 +0000
Message-Id: <52E28DFB0200007800116BA1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 14:59:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-9-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-9-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 08/17] x86/VPMU: Make vpmu not
	HVM-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> -#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
> -#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
> -                                          arch.hvm_vcpu.vpmu))
> +#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.vpmu))
> +#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, arch.vpmu))

If you're already editing this, I'd prefer that you also strip the various
redundant parentheses to make them more readable:

#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:02:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iHA-0004Iw-GO; Fri, 24 Jan 2014 15:02:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6iH9-0004Ii-E6
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 15:02:31 +0000
Received: from [193.109.254.147:49697] by server-3.bemta-14.messagelabs.com id
	BD/55-11000-68082E25; Fri, 24 Jan 2014 15:02:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390575748!13019036!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11598 invoked from network); 24 Jan 2014 15:02:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:02:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96189411"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 15:02:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 10:02:27 -0500
Message-ID: <1390575746.2124.91.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Fri, 24 Jan 2014 15:02:26 +0000
In-Reply-To: <20140124145641.GA83765@deinos.phlegethon.org>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
	<52E27CEF.2010009@eu.citrix.com>
	<20140124145641.GA83765@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, "Zhang,
	Yang Z" <yang.z.zhang@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 15:56 +0100, Tim Deegan wrote:
> At 14:47 +0000 on 24 Jan (1390571231), George Dunlap wrote:
> > On 01/17/2014 09:40 AM, Ian Campbell wrote:
> > > On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
> > >> As Andrew said, nested virt is still in the experimental stage, because
> > >> there are still many scenarios not covered by my testing, so it may not
> > >> be accurate to say it is well supported. But I hope people know that
> > >> nested virt is ready to use now, and I encourage them to try it and
> > >> report bugs to us to push nested virt forward.
> > > Perhaps we could say it is "tech preview" rather than "experimental"?
> > 
> > If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMWare } and Win7 XP 
> > compatibility mode are tested regularly, and only HyperV, L2 shadow, and 
> > paging / PoD don't work, I think we should be able to call this a "1.0" 
> > release for nested virt.  Then we can add in "now works with HyperV", 
> > "Now works with shadow", "Now works with paging" as those become mature.
> 
> That depends on what the failure modes are for the other cases --
> esp. given that the L1 guest's choice of hypervisor, shadow vs HAP &c,
> are not under the control of the L0 admin.  I thikn that has to be
> clearly understood before we encourage people to turn this on.

Especially in the light of the previous two bugs here which let the
guest admin crash the host, in at least one of the two cases even if the
host admin had disabled nested virt for that guest (and I think it was
actually in both cases...)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:02:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iHI-0004KO-2F; Fri, 24 Jan 2014 15:02:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6iHG-0004Jy-A9
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:02:38 +0000
Received: from [85.158.143.35:53203] by server-3.bemta-4.messagelabs.com id
	F7/99-32360-D8082E25; Fri, 24 Jan 2014 15:02:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390575755!592495!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24360 invoked from network); 24 Jan 2014 15:02:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 15:02:36 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OF1U0i017170
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 15:01:31 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OF1Uih027430
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 15:01:30 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OF1Tu2027398; Fri, 24 Jan 2014 15:01:30 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 07:01:29 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id A2FC61BFA72; Fri, 24 Jan 2014 10:01:28 -0500 (EST)
Date: Fri, 24 Jan 2014 10:01:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124150128.GF12946@phenom.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E0DFBB0200007800116041@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 23, 2014 at 08:24:11AM +0000, Jan Beulich wrote:
> >>> On 22.01.14 at 22:40, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Wed, Jan 22, 2014 at 12:08:42PM +0000, Jan Beulich wrote:
> >> "Fixing the wrong thing" presumably, after taking a closer look at
> >> Konrad's second crash: The device in question really appears to
> >> be MSI-X capable, yet alloc_pdev() didn't recognize it as such. I
> >> wonder whether the capability gets displayed or hidden dynamically
> >> based on some other enabling that the driver may be doing on the
> >> device, in which case we'd need to allocate the structure on
> >> demand.
> > 
> > The device in question (02:00.1) is an SR-IOV 82576:
> > 
> > 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > Connection (rev 01)
> > 02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > Connection (rev 01)
> > 
> > -bash-4.1# lspci -s 02:00.1 -v | more
> > 02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > Connection (rev 01)
> >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >         Flags: fast devsel, IRQ 18
> >         Memory at f1400000 (32-bit, non-prefetchable) [disabled] [size=128K]
> >         Memory at f0800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> >         I/O ports at d000 [disabled] [size=32]
> >         Memory at f1440000 (32-bit, non-prefetchable) [disabled] [size=16K]
> >         Expansion ROM at f0400000 [disabled] [size=4M]
> >         Capabilities: [40] Power Management version 3
> >         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> >         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
> >         Capabilities: [a0] Express Endpoint, MSI 00
> >         Capabilities: [100] Advanced Error Reporting
> >         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
> >         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
> >         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
> >         Kernel driver in use: pciback
> >         Kernel modules: igb
> 
> So is this the state with igb never having been bound to the device,
> or was it unbound before the device got handed to pciback? I'm asking
> because I'm trying to understand why alloc_pdev() didn't find the
> MSI-X capability structure, and I continue to suspect that the
> driver may have done something to the device to make the capability
> visible.

I built the kernel without the igb driver just to rule it out as the
culprit. Now I can boot without issues, and this is what lspci
reports:

-bash-4.1# lspci -s 02:00.0 -v
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 10
        Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
        Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
        I/O ports at e020 [size=32]
        Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at f0c00000 [disabled] [size=4M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

-bash-4.1# lspci -s 02:00.1 -v
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
        Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 5
        Memory at f1400000 (32-bit, non-prefetchable) [size=128K]
        Memory at f0800000 (32-bit, non-prefetchable) [size=4M]
        I/O ports at e000 [size=32]
        Memory at f1440000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at f0400000 [disabled] [size=4M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=10 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iOZ-0004xa-D5; Fri, 24 Jan 2014 15:10:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6iOX-0004xU-Ml
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:10:09 +0000
Received: from [85.158.139.211:61707] by server-9.bemta-5.messagelabs.com id
	A8/CE-15098-05282E25; Fri, 24 Jan 2014 15:10:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390576207!11752970!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23415 invoked from network); 24 Jan 2014 15:10:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 15:10:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 15:10:06 +0000
Message-Id: <52E2905D0200007800116BD2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 15:10:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
> +{
> +    int ret = -EINVAL;
> +    xen_pmu_params_t pmu_params;
> +    uint32_t mode;
> +
> +    switch ( op )
> +    {
> +    case XENPMU_mode_set:
> +        if ( !is_control_domain(current->domain) )
> +            return -EPERM;
> +
> +        if ( copy_from_guest(&pmu_params, arg, 1) )
> +            return -EFAULT;
> +
> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
> +        if ( mode & ~XENPMU_MODE_ON )
> +            return -EINVAL;

Please, when you add a new interface, think carefully about room for
future extension: here you ignore the upper 32 bits of .val instead
of making sure they are zero, which makes it impossible to assign
those bits a meaning later on.

> +
> +        vpmu_mode &= ~XENPMU_MODE_MASK;
> +        vpmu_mode |= mode;
> +
> +        ret = 0;
> +        break;
> +
> +    case XENPMU_mode_get:
> +        pmu_params.d.val = vpmu_mode & XENPMU_MODE_MASK;
> +        pmu_params.v.version.maj = XENPMU_VER_MAJ;
> +        pmu_params.v.version.min = XENPMU_VER_MIN;
> +        if ( copy_to_guest(arg, &pmu_params, 1) )

__copy_to_guest().

> +            return -EFAULT;
> +        ret = 0;
> +        break;
> +
> +    case XENPMU_feature_set:
> +        if ( !is_control_domain(current->domain) )
> +            return -EPERM;
> +
> +        if ( copy_from_guest(&pmu_params, arg, 1) )
> +            return -EFAULT;
> +
> +        if ( (uint32_t)pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
> +            return -EINVAL;

See above.

> +
> +        vpmu_mode &= ~XENPMU_FEATURE_MASK;
> +        vpmu_mode |= (uint32_t)pmu_params.d.val << XENPMU_FEATURE_SHIFT;
> +
> +        ret = 0;
> +        break;
> +
> +    case XENPMU_feature_get:
> +        pmu_params.d.val = vpmu_mode & XENPMU_FEATURE_MASK;
> +        if ( copy_to_guest(arg, &pmu_params, 1) )

See above.

> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>  #define __HYPERVISOR_kexec_op             37
>  #define __HYPERVISOR_tmem_op              38
>  #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
> +#define __HYPERVISOR_xenpmu_op            40
>  
>  /* Architecture-specific hypercall definitions. */
>  #define __HYPERVISOR_arch_0               48

Are you certain this wouldn't better be an architecture-specific
hypercall? Just like with Machine Check, I don't think all
architectures are guaranteed to have (or ever get) performance
monitoring capabilities.

> +/* Parameters structure for HYPERVISOR_xenpmu_op call */
> +struct xen_pmu_params {
> +    /* IN/OUT parameters */
> +    union {
> +        struct version {
> +            uint8_t maj;
> +            uint8_t min;
> +        } version;
> +        uint64_t pad;
> +    } v;

Looking at the implementation above I don't see this ever being an
IN parameter.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:15:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iTS-00058x-JX; Fri, 24 Jan 2014 15:15:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6iTQ-00058s-UY
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 15:15:13 +0000
Received: from [85.158.143.35:26613] by server-1.bemta-4.messagelabs.com id
	BE/75-02132-08382E25; Fri, 24 Jan 2014 15:15:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390576509!589841!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4667 invoked from network); 24 Jan 2014 15:15:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:15:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96196565"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 15:14:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 10:14:58 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6iTB-0006nj-5j;
	Fri, 24 Jan 2014 15:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6iT8-0005qM-V4;
	Fri, 24 Jan 2014 15:14:55 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21218.33645.354750.805594@mariner.uk.xensource.com>
Date: Fri, 24 Jan 2014 15:14:53 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390567954.2124.85.camel@kazak.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<1390567954.2124.85.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> On Fri, 2014-01-24 at 12:41 +0000, Ian Jackson wrote:
> >  Thread A                                             Thread B
> > X      release libvirt event loop lock
> >                                             entering libvirt event loop
> >                                        V     observe timeout is immediate
> 
> Is there nothing in this interval which deregisters, pauses, quiesces or
> otherwise prevents the timer from going off again until after the
> callback (when the lock would be reacquired and whatever was done is
> undone)?

No.  libvirt is quite happy to run the same callback in multiple
threads at once.

I have just looked at the code in vireventpoll.c, and there is nothing
there that prevents this from being a problem.

> Given the behaviour I suggest above this would be prevented I think?

Yes, I think so.

> It doesn't seem all that likely that triggering the same timeout
> multiple times in different threads simultaneously would be a
> deliberate design decision, so if the libvirt event core doesn't
> already prevent this somehow, then it seems to me that this is just
> a bug in the event loop core.

Yes.

> In that case it should be addressed in libvirt, and in any case the
> libvirt core folks should be involved in the discussion, so they have
> the opportunity to tell us how best, if at all, we can use the
> provided facilities and/or whether these issues are deliberate or
> things which should be fixed etc.

I'm looking at a way to fix this in libvirt.  Mail to follow ...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> callback (when the lock would be reacquired and whatever was done is
> undone)?

No.  libvirt is quite happy to call the callback multiple times at
once.

I have just looked at the code in vireventpoll.c and there is nothing
that stops this being a problem.

> Given the behaviour I suggest above this would be prevented I think?

Yes, I think so.

> It doesn't seem all that likely that triggering the same timeout
> multiple times in different threads simultaneously would be a
> deliberate design decision, so if the libvirt event core doesn't
> already prevent this somehow then it seems to me that this is just a
> bug in the event loop core.

Yes.

> In that case it should be addressed in libvirt, and in any case the
> libvirt core folks should be involved in the discussion, so they have
> the opportunity to tell us how best, if at all, we can use the
> provided facilities and/or whether these issues are deliberate or
> things which should be fixed, etc.

I'm looking at a way to fix this in libvirt.  Mail to follow ...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:19:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iWy-0005Lh-J2; Fri, 24 Jan 2014 15:18:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6iWw-0005LW-D3
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 15:18:50 +0000
Received: from [85.158.139.211:10997] by server-14.bemta-5.messagelabs.com id
	CA/CA-24200-95482E25; Fri, 24 Jan 2014 15:18:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390576727!11734315!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19486 invoked from network); 24 Jan 2014 15:18:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:18:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94167355"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 15:18:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 10:18:41 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6iWm-0006oy-8O;
	Fri, 24 Jan 2014 15:18:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6iWk-0005re-IK;
	Fri, 24 Jan 2014 15:18:38 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21218.33868.87899.677644@mariner.uk.xensource.com>
Date: Fri, 24 Jan 2014 15:18:36 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	<xen-devel@lists.xensource.com>
In-Reply-To: <21218.33645.354750.805594@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<1390567954.2124.85.camel@kazak.uk.xensource.com>
	<21218.33645.354750.805594@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> I have just looked at the code in vireventpoll.c and there is nothing
> that stops this being a problem.

While looking at this I found:

        /* Add 20ms fuzz so we don't pointlessly spin doing
         * <10ms sleeps, particularly on kernels with low HZ
         * it is fine that a timer expires 20ms earlier than
         * requested
         */
        if (eventLoop.timeouts[i].expiresAt <= (now+20)) {

This is Not Good.  The result is that libvirt may wake up and call the
timeout callback function for libxl; libxl then compares the time with
the timeout it requested, thinks "oh, this must be some previous
callback with the same struct" (due to the essential workaround for
the inherent API race), and does nothing.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:22:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6iaj-0005f6-Sb; Fri, 24 Jan 2014 15:22:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6iai-0005eT-63
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:22:44 +0000
Received: from [85.158.139.211:48906] by server-9.bemta-5.messagelabs.com id
	F2/77-15098-34582E25; Fri, 24 Jan 2014 15:22:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390576961!11735283!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14572 invoked from network); 24 Jan 2014 15:22:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:22:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96200265"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 15:22:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 10:22:40 -0500
Message-ID: <1390576958.13513.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 24 Jan 2014 15:22:38 +0000
In-Reply-To: <alpine.DEB.2.02.1401241451070.15917@kaball.uk.xensource.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<CAFEAcA888Gr7hfq9M_x6Ue9Vt03TJKo5a_heYL_nfYXnL2Sn0g@mail.gmail.com>
	<52E2790C.20304@redhat.com>
	<CAFEAcA8NLu3-LZ9sMqwLPfBOeAMdXqoHXs1tCkkWp_Rxz5YLRg@mail.gmail.com>
	<52E27BCD.70303@redhat.com>
	<alpine.DEB.2.02.1401241451070.15917@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Wei Liu <wei.liu2@citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/5] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:56 +0000, Stefano Stabellini wrote:
> Besides I don't know how to state this more clearly but there are no big
> endian Xen guests. There have never been any big endian Xen guests.
> There are no big endian Xen guests on the roadmap.

That is perhaps a bit strong, but what is 100% true is that none of the
defined Xen PV protocols (listed at [0]) is big endian.

If someone were to build e.g. an armbe guest then they would either
have to do the swapping in the frontend (and continue to claim to be
the little-endian XEN_IO_PROTO_ABI_ARM) or they would have to define
XEN_IO_PROTO_ABI_ARMBE and arrange for the frontend and backend to
negotiate as appropriate; that would be the appropriate point for the
backends to be taught about non-native endianness, IMHO.

[0] http://xenbits.xen.org/docs/unstable-staging/hypercall/x86_64/include,public,io,protocols.h.html

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6io1-0006AR-Re; Fri, 24 Jan 2014 15:36:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6io0-0006AM-SF
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:36:29 +0000
Received: from [85.158.139.211:10705] by server-17.bemta-5.messagelabs.com id
	99/B2-19152-C7882E25; Fri, 24 Jan 2014 15:36:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390577785!11782864!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30524 invoked from network); 24 Jan 2014 15:36:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:36:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96206346"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 15:36:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 10:36:24 -0500
Message-ID: <1390577782.13513.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@linux.com>
Date: Fri, 24 Jan 2014 15:36:22 +0000
In-Reply-To: <1390555293.2124.6.camel@kazak.uk.xensource.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
	<1390555293.2124.6.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 09:21 +0000, Ian Campbell wrote:
> On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
> > From: Matt Rushton <mrushton@amazon.com>
> > 
> > Currently shrink_free_pagepool() is called before the pages used for
> > persistent grants are released via free_persistent_gnts(). This
> > results in a memory leak when a VBD that uses persistent grants is
> > torn down.
> 
> This may well be the explanation for the memory leak I was observing on
> ARM last night. I'll give it a go and let you know.

Results are a bit inconclusive unfortunately, it seems like I am seeing
some other leak too (or instead).

Totally unscientifically it does seem to be leaking more slowly than
before, so perhaps this patch has helped, but nothing conclusive I'm
afraid.

I don't think that quite qualifies for a Tested-by though, sorry.

Ian.

> 
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > Cc: Ian Campbell <Ian.Campbell@citrix.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> > Cc: linux-kernel@vger.kernel.org
> > Cc: xen-devel@lists.xen.org
> > Cc: Anthony Liguori <aliguori@amazon.com>
> > Signed-off-by: Matt Rushton <mrushton@amazon.com>
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > ---
> >  drivers/block/xen-blkback/blkback.c |    6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 6620b73..30ef7b3 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -625,9 +625,6 @@ purge_gnt_list:
> >  			print_stats(blkif);
> >  	}
> >  
> > -	/* Since we are shutting down remove all pages from the buffer */
> > -	shrink_free_pagepool(blkif, 0 /* All */);
> > -
> >  	/* Free all persistent grant pages */
> >  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> >  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> > @@ -636,6 +633,9 @@ purge_gnt_list:
> >  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> >  	blkif->persistent_gnt_c = 0;
> >  
> > +	/* Since we are shutting down remove all pages from the buffer */
> > +	shrink_free_pagepool(blkif, 0 /* All */);
> > +
> >  	if (log_stats)
> >  		print_stats(blkif);
> >  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6io1-0006AR-Re; Fri, 24 Jan 2014 15:36:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6io0-0006AM-SF
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:36:29 +0000
Received: from [85.158.139.211:10705] by server-17.bemta-5.messagelabs.com id
	99/B2-19152-C7882E25; Fri, 24 Jan 2014 15:36:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390577785!11782864!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30524 invoked from network); 24 Jan 2014 15:36:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 15:36:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96206346"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 15:36:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 10:36:24 -0500
Message-ID: <1390577782.13513.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@linux.com>
Date: Fri, 24 Jan 2014 15:36:22 +0000
In-Reply-To: <1390555293.2124.6.camel@kazak.uk.xensource.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
	<1390555293.2124.6.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 09:21 +0000, Ian Campbell wrote:
> On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
> > From: Matt Rushton <mrushton@amazon.com>
> > 
> > Currently shrink_free_pagepool() is called before the pages used for
> > persistent grants are released via free_persistent_gnts(). This
> > results in a memory leak when a VBD that uses persistent grants is
> > torn down.
> 
> This may well be the explanation for the memory leak I was observing on
> ARM last night. I'll give it a go and let you know.

Results are a bit inconclusive unfortunately, it seems like I am seeing
some other leak too (or instead).

Totally unscientifically it does seem to be leaking more slowly than
before, so perhaps this patch has helped, but nothing conclusive I'm
afraid.

I don't think that quite qualifies for a Tested-by though, sorry.

Ian.

> 
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > Cc: Ian Campbell <Ian.Campbell@citrix.com>
> > Cc: David Vrabel <david.vrabel@citrix.com>
> > Cc: linux-kernel@vger.kernel.org
> > Cc: xen-devel@lists.xen.org
> > Cc: Anthony Liguori <aliguori@amazon.com>
> > Signed-off-by: Matt Rushton <mrushton@amazon.com>
> > Signed-off-by: Matt Wilson <msw@amazon.com>
> > ---
> >  drivers/block/xen-blkback/blkback.c |    6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 6620b73..30ef7b3 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -625,9 +625,6 @@ purge_gnt_list:
> >  			print_stats(blkif);
> >  	}
> >  
> > -	/* Since we are shutting down remove all pages from the buffer */
> > -	shrink_free_pagepool(blkif, 0 /* All */);
> > -
> >  	/* Free all persistent grant pages */
> >  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> >  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> > @@ -636,6 +633,9 @@ purge_gnt_list:
> >  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> >  	blkif->persistent_gnt_c = 0;
> >  
> > +	/* Since we are shutting down remove all pages from the buffer */
> > +	shrink_free_pagepool(blkif, 0 /* All */);
> > +
> >  	if (log_stats)
> >  		print_stats(blkif);
> >  
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 15:38:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6ipy-0006Im-Gi; Fri, 24 Jan 2014 15:38:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6ipv-0006Hx-OT; Fri, 24 Jan 2014 15:38:28 +0000
Received: from [85.158.137.68:31994] by server-2.bemta-3.messagelabs.com id
	AA/08-17329-2F882E25; Fri, 24 Jan 2014 15:38:26 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390577904!11121997!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23002 invoked from network); 24 Jan 2014 15:38:25 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-6.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	24 Jan 2014 15:38:25 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6ipm-00053o-OT; Fri, 24 Jan 2014 15:38:18 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1W6ipm-0003Y2-D5; Fri, 24 Jan 2014 15:38:18 +0000
Date: Fri, 24 Jan 2014 15:38:18 +0000
Message-Id: <E1W6ipm-0003Y2-D5@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 87 (CVE-2014-1666) -
 PHYSDEVOP_{prepare, release}_msix exposed to unprivileged guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2014-1666 / XSA-87
                              version 2

     PHYSDEVOP_{prepare,release}_msix exposed to unprivileged guests

UPDATES IN VERSION 2
====================

CVE assigned.

ISSUE DESCRIPTION
=================

The PHYSDEVOP_{prepare,release}_msix operations are supposed to be available
to privileged guests (domain 0 in non-disaggregated setups) only, but the
necessary privilege check was missing.

IMPACT
======

Malicious or misbehaving unprivileged guests can cause the host or other
guests to malfunction. This can result in host-wide denial of service.
Privilege escalation, while seeming to be unlikely, cannot be excluded.

VULNERABLE SYSTEMS
==================

Xen 4.1.5 and 4.1.6.1 as well as 4.2.2 and later are vulnerable.
Xen 4.2.1 and 4.2.0 as well as 4.1.4 and earlier are not vulnerable.

Only PV guests can take advantage of this vulnerability.

MITIGATION
==========

Running only HVM guests will avoid this issue.

There is no mitigation available for PV guests.

NOTE REGARDING LACK OF EMBARGO
==============================

This issue was disclosed publicly on the xen-devel mailing list.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa87-unstable-4.3.patch    xen-unstable, Xen 4.3.x
xsa87-4.2.patch             Xen 4.2.x
xsa87-4.1.patch             Xen 4.1.x

$ sha256sum xsa87*.patch
45e5cc892626293067cc088a671a6bbdc18b018f54ff09b6a1cbb1fabbdf114d  xsa87-4.1.patch
df9c1507d7bb0e5266a2fadd992d1e6ed0f7bf5be7466b8a93ed3bd8e3ab8e8d  xsa87-4.2.patch
a13ce270b177d33537d627b85471abaa01215cd458541f4c6524914d7c81eb38  xsa87-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS4ojJAAoJEIP+FMlX6CvZKpsH/3lVDKRMvFVkaHVPt1uRhqQo
HxBDflm//lR5M8j8364rRSknSv8X2m/JfKJ7DCbX0WQWPrIU/i8MzTHM9fQqLvAR
QYEhXYZC+ctkqk/sUvQaxOkyu8bNszuIOlWM9GuH2OnFN68zSl7kXiX7KZ5dHoYQ
eNAjQeCXNaXTiSo3X3ZIFwZOlpkUj+NxJnZlZx5Hb/m5WH86FeqBNMi/jZB/i53F
LFu7rhJ4rq25jbfuLp1ISBs5GA+71pNRvhukHijQHks1fApKhqmUiDhrBYX21l/Y
5GJLG6L3sYdScjoeHu+QH0akwTC5L+BauMLMWljJOTKvL0p2yU/vDc2JMjXXnzk=
=morx
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa87-4.1.patch"
Content-Disposition: attachment; filename="xsa87-4.1.patch"
Content-Transfer-Encoding: 7bit

x86: PHYSDEVOP_{prepare,release}_msix are privileged

Yet this wasn't being enforced.

This is XSA-87.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -554,7 +554,9 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
     case PHYSDEVOP_release_msix: {
         struct physdev_pci_device dev;
 
-        if ( copy_from_guest(&dev, arg, 1) )
+        if ( !IS_PRIV(v->domain) )
+            ret = -EPERM;
+        else if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
         else if ( dev.seg )
             ret = -EOPNOTSUPP;

--=separator
Content-Type: application/octet-stream; name="xsa87-4.2.patch"
Content-Disposition: attachment; filename="xsa87-4.2.patch"
Content-Transfer-Encoding: 7bit

x86: PHYSDEVOP_{prepare,release}_msix are privileged

Yet this wasn't being enforced.

This is XSA-87.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -612,7 +612,9 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
     case PHYSDEVOP_release_msix: {
         struct physdev_pci_device dev;
 
-        if ( copy_from_guest(&dev, arg, 1) )
+        if ( !IS_PRIV(v->domain) )
+            ret = -EPERM;
+        else if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
         else
             ret = pci_prepare_msix(dev.seg, dev.bus, dev.devfn,

--=separator
Content-Type: application/octet-stream; name="xsa87-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa87-unstable-4.3.patch"
Content-Transfer-Encoding: 7bit

x86: PHYSDEVOP_{prepare,release}_msix are privileged

Yet this wasn't being enforced.

This is XSA-87.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- 2014-01-14.orig/xen/arch/x86/physdev.c	2013-11-18 11:03:37.000000000 +0100
+++ 2014-01-14/xen/arch/x86/physdev.c	2014-01-22 12:47:47.000000000 +0100
@@ -640,7 +640,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
         else
-            ret = pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
+            ret = xsm_resource_setup_pci(XSM_PRIV,
+                                         (dev.seg << 16) | (dev.bus << 8) |
+                                         dev.devfn) ?:
+                  pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
                                     cmd != PHYSDEVOP_prepare_msix);
         break;
     }

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Fri Jan 24 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6j63-0007SL-KU; Fri, 24 Jan 2014 15:55:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6j61-0007S6-Vx
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 15:55:06 +0000
Received: from [193.109.254.147:35577] by server-4.bemta-14.messagelabs.com id
	DB/D1-03916-9DC82E25; Fri, 24 Jan 2014 15:55:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390578902!13043827!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23933 invoked from network); 24 Jan 2014 15:55:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 15:55:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 15:55:02 +0000
Message-Id: <52E29AE40200007800116C82@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 15:55:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
In-Reply-To: <20140124150128.GF12946@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Thu, Jan 23, 2014 at 08:24:11AM +0000, Jan Beulich wrote:
>> So is this state with igb never having been bound to the device,
>> or was it unbound before the device got handed to igb. I'm asking
>> because I'm trying to understand why alloc_pdev() didn't find the
>> MSI-X capability structure, and I continue to suspect that the
>> driver may have done something to the device to make it visible.
> 
> I built the kernel without the igb driver just to eliminate it being
> the culprit. Now I can boot without issues and this is what lspci
> reports:
> 
> -bash-4.1# lspci -s 02:00.0 -v
> 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> Connection (rev 01)
>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>         Flags: bus master, fast devsel, latency 0, IRQ 10
>         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
>         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
>         I/O ports at e020 [size=32]
>         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
>         Expansion ROM at f0c00000 [disabled] [size=4M]
>         Capabilities: [40] Power Management version 3
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
>         Capabilities: [a0] Express Endpoint, MSI 00
>         Capabilities: [100] Advanced Error Reporting
>         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-45-d9-ac
>         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
>         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)

Very odd - I guess I need to hand you a debugging patch for the
hypervisor, unless you want to code one up yourself...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:00:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jB9-0008AA-Uo; Fri, 24 Jan 2014 16:00:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W6jB7-0008A3-JG
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:00:21 +0000
Received: from [85.158.143.35:45883] by server-1.bemta-4.messagelabs.com id
	3B/DB-02132-41E82E25; Fri, 24 Jan 2014 16:00:20 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390579220!611094!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2589 invoked from network); 24 Jan 2014 16:00:20 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:00:20 -0000
Received: by mail-wg0-f49.google.com with SMTP id a1so3114869wgh.28
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 08:00:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=UAF+bSHgx29hMBy/MvpuXBnPKvO/7DrReC8TZG/01aM=;
	b=aAkYvfbNdf9l920phx7GhLRBbJhWNbuxrNJ532iWulsvPrnburu/oHGUc91n1+NCF+
	fBcE2ibebyeSfmh8Wm1mwUGy2VQinhSF91VJusfKIFtJNNEoNqVfcQdKTwpGbH2sXaDj
	xBEW0H2aDZsDD0H13sbQIYjZ667mhIU7Pk6jcZ97f9vxUGSiRYZcK4T3uyu6KQRwkT/b
	4bQY/xorzqLtF+aZW7X6JowoHE+RrpqJf4b6GrpVNy1egCNqcv9XfOIM+rYwpywbBElo
	6MWtqRBIvPZNnNmAHEf1nWTACpsUt3Uwl1bVhWuEg4aQ1eQrv6MTOU0DSf0Wm2+xllGU
	0IkQ==
MIME-Version: 1.0
X-Received: by 10.180.165.15 with SMTP id yu15mr3754308wib.28.1390579220150;
	Fri, 24 Jan 2014 08:00:20 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 24 Jan 2014 08:00:20 -0800 (PST)
In-Reply-To: <52DEB887.8070409@citrix.com>
References: <52DEB887.8070409@citrix.com>
Date: Fri, 24 Jan 2014 16:00:20 +0000
X-Google-Sender-Auth: xsi-KHmtloha5T1cyhmpMXQVnNg
Message-ID: <CAFLBxZZNnbFwjfVkqmB3OqkjjUukbf4AmbRhOSzHnwJDqLCEDQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 6:12 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> Hello,
>
> I have been giving nested virt a try, and have my first bug to report.
> This is still ongoing, and is by no means complete yet.
>
> Setup:
> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>
> Single Intel Haswell SDP (Grantley platform):
> Native hypervisor: XenServer
>
> Two L1 guests:
>   XenServer (running with EPT)
>   XenServer (running with shadow)
>
>
> When attempting to create an L2 EPT HVM domain under an L1 shadow
> domain, the L1 shadow domain is killed with:

Is EPT-on-shadow actually meant to work?  I wouldn't be surprised if
the L2 HAP stuff assumed that L1 was HAP as well.

In which case, if an L1 guest is started in shadow mode, then EPT
should not be advertised.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:04:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jFE-0008Qw-60; Fri, 24 Jan 2014 16:04:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6jFD-0008Qr-Ay
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:04:35 +0000
Received: from [85.158.139.211:62179] by server-9.bemta-5.messagelabs.com id
	B1/98-15098-21F82E25; Fri, 24 Jan 2014 16:04:34 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390579472!11789718!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22620 invoked from network); 24 Jan 2014 16:04:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:04:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94189816"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:04:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:04:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6jEw-0007E6-2G;
	Fri, 24 Jan 2014 16:04:18 +0000
Message-ID: <52E28EFB.3020008@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:04:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Tim Deegan <tim@xen.org>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>	
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>	
	<1389865519.5190.9.camel@kazak.uk.xensource.com>	
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>	
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>	
	<1389951634.6697.43.camel@kazak.uk.xensource.com>	
	<52E27CEF.2010009@eu.citrix.com>	
	<20140124145641.GA83765@deinos.phlegethon.org>
	<1390575746.2124.91.camel@kazak.uk.xensource.com>
In-Reply-To: <1390575746.2124.91.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 03:02 PM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 15:56 +0100, Tim Deegan wrote:
>> At 14:47 +0000 on 24 Jan (1390571231), George Dunlap wrote:
>>> On 01/17/2014 09:40 AM, Ian Campbell wrote:
>>>> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>>>>> As Andrew said, nested virt is still at an experimental stage, because
>>>>> there are still lots of scenarios my testing does not cover, so it may
>>>>> not be accurate to say it is well supported. But I hope people know
>>>>> that nested virt is ready to use now, and I encourage them to try it
>>>>> and report bugs to us to push nested virt forward.
>>>> Perhaps we could say it is "tech preview" rather than "experimental"?
>>> If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMware } and Win7 XP
>>> compatibility mode are tested regularly, and only HyperV, L2 shadow, and
>>> paging / PoD don't work, I think we should be able to call this a "1.0"
>>> release for nested virt.  Then we can add in "now works with HyperV",
>>> "Now works with shadow", "Now works with paging" as those become mature.
>> That depends on what the failure modes are for the other cases --
>> esp. given that the L1 guest's choice of hypervisor, shadow vs HAP &c,
>> are not under the control of the L0 admin.  I think that has to be
>> clearly understood before we encourage people to turn this on.
> Especially in the light of the previous two bugs here which let the
> guest admin crash the host, in at least one of the two cases even if the
> host admin had disabled nested virt for that guest (and I think it was
> actually in both cases...)

Right -- well, I think we then need to try to define some criteria that 
VMX nested virt would need to meet for portions of it to stop being 
considered "experimental" or "tech preview".  Just a couple of angles:

* L1 / L2 guests tested.  What do people think of the mix of L1 / L2 
guests there?  They look like a pretty good combination to me.

* L2 workloads tested

Other than booting, what kinds of workloads are run in the L2 guests?  
Do the L2 guests ever get into heavy swapping scenarios, for instance?

* Minimum subset of functionality

I think it makes sense to explicitly say that we support only certain 
hypervisors, and that we do not support some advanced features in L2 
guests.  Saying we support only L1 HAP with L2 HAP is reasonable, I 
think.  No HyperV, no L2 shadow, and no PoD are reasonable restrictions; 
it should be fine for us to say that if the L1 admin enables those and 
badness ensues, he has only himself to blame.
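
To make that supported subset concrete, a minimal L1 guest configuration along these lines would pin the L1 to HAP and expose nested HVM. This is a hedged sketch in xl.cfg syntax: the name and the memory/vcpu values are illustrative, not taken from this thread; `hap` and `nestedhvm` are the relevant xl.cfg options.

```
# Hypothetical L1 guest config (xl.cfg) -- illustrative values only.
builder   = "hvm"
name      = "l1-xenserver"
memory    = 4096
vcpus     = 4
hap       = 1     # L1 itself runs on hardware-assisted paging (EPT)
nestedhvm = 1     # expose VMX to L1 so it can create L2 HVM guests
```

Under this policy, an L1 started in shadow mode (hap = 0) would simply not get nestedhvm.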

* Security

That said, I think we must assume that some of our users will have L0 
admin != L1 admin.  This means that the L1 admin must not be able to do 
anything to crash L0.  In the PoD case above, for example, if L1 enables 
PoD or paging, it might cause locking issues in L0; that's not acceptable.

Anything else?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:16:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:16:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jQd-0000Nc-1j; Fri, 24 Jan 2014 16:16:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6jQa-0000NX-UI
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 16:16:21 +0000
Received: from [85.158.139.211:36576] by server-6.bemta-5.messagelabs.com id
	C2/0B-16310-4D192E25; Fri, 24 Jan 2014 16:16:20 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390580177!9098541!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28358 invoked from network); 24 Jan 2014 16:16:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:16:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94195505"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:16:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:16:14 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6jQT-0007Oh-O5;
	Fri, 24 Jan 2014 16:16:13 +0000
Message-ID: <52E291C6.80205@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:16:06 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Fabio Fantoni <fabio.fantoni@m2r.biz>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
	<1390312565.20516.119.camel@kazak.uk.xensource.com>
	<52E27CA5.50407@m2r.biz>
In-Reply-To: <52E27CA5.50407@m2r.biz>
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Ian.Jackson@eu.citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 02:45 PM, Fabio Fantoni wrote:
> Il 21/01/2014 14:56, Ian Campbell ha scritto:
>> On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
>>> Make rdname function work with xl
>>>
>>> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> Although I would have preferred a slightly more verbose changelog.
>
> This patch fixes this problem:
> http://lists.xen.org/archives/html/xen-devel/2014-01/msg01545.html
> and perhaps also other problems.
>
> I have done extensive testing with this patch without encountering
> errors; can it be added to 4.4?
>
> Thanks for any reply.

Ian said he was OK with the patch, but that he wished it had a better 
description.

The one-line description is good; but a better body description would 
include:

1) A description of what's wrong
2) How this patch fixes the problem
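
As a purely illustrative template (the bracketed contents are placeholders, not Fabio's actual analysis), a commit message body covering both points might look like:

```
tools/hotplug: fix bug on xendomains using xl

[What's wrong: describe the observed failure, e.g. which xendomains
operation breaks when xl rather than xm is the toolstack.]

[How this patch fixes it: describe the change to the rdname function
and why it now works with xl.]

Signed-off-by: ...
```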

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:17:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jRn-0000Sk-Og; Fri, 24 Jan 2014 16:17:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6jRm-0000Sc-03
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 16:17:34 +0000
Received: from [193.109.254.147:37535] by server-4.bemta-14.messagelabs.com id
	2E/60-03916-D1292E25; Fri, 24 Jan 2014 16:17:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390580251!12962483!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15628 invoked from network); 24 Jan 2014 16:17:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94196020"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:17:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 11:17:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6jRh-00077M-HS;
	Fri, 24 Jan 2014 16:17:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6jRh-0007xZ-E4;
	Fri, 24 Jan 2014 16:17:29 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24478-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:17:29 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24478: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24478 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24478/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-pvops             4 kernel-build              fail REGR. vs. 24470
 build-amd64                   4 xen-build                 fail REGR. vs. 24470
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24470

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e
baseline version:
 xen                  4e54949a7baa5d66f1ab36571d6c974c9319a964

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9612d2948e1637c303e6be68df2168775ac5e97e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:45:03 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:19:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jTc-0000p3-M3; Fri, 24 Jan 2014 16:19:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6jTa-0000ow-Pb
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:19:26 +0000
Received: from [85.158.143.35:3578] by server-2.bemta-4.messagelabs.com id
	6F/DE-11386-E8292E25; Fri, 24 Jan 2014 16:19:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390580365!608297!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31092 invoked from network); 24 Jan 2014 16:19:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 16:19:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 16:19:17 +0000
Message-Id: <52E2A0930200007800116CAE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 16:19:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
In-Reply-To: <20140124150128.GF12946@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartD9EA6493.2__="
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartD9EA6493.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

>>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> I built the kernel without the igb driver just to eliminate it being
> the culprit. Now I can boot without issues and this is what lspci
> reports:
>
> -bash-4.1# lspci -s 02:00.0 -v
> 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network
> Connection (rev 01)
>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>         Flags: bus master, fast devsel, latency 0, IRQ 10
>         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
>         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
>         I/O ports at e020 [size=32]
>         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
>         Expansion ROM at f0c00000 [disabled] [size=4M]
>         Capabilities: [40] Power Management version 3
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>         Capabilities: [70] MSI-X: Enable- Count=10 Masked-

So here's a patch to figure out why we don't find this.

Jan


--=__PartD9EA6493.2__=
Content-Type: text/plain; name="Konrad-MSI-X.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="Konrad-MSI-X.patch"

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -278,7 +278,7 @@ static struct pci_dev *alloc_pdev(struct
     INIT_LIST_HEAD(&pdev->msi_list);
 
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                             PCI_CAP_ID_MSIX) )
+                             PCI_CAP_ID_MSIX | 0x80) )
     {
         struct arch_msix *msix = xzalloc(struct arch_msix);
 
--- a/xen/drivers/pci/pci.c
+++ b/xen/drivers/pci/pci.c
@@ -14,19 +14,24 @@ int pci_find_cap_offset(u16 seg, u8 bus,
     int max_cap = 48;
     u8 pos = PCI_CAPABILITY_LIST;
     u16 status;
+bool_t log = (cap & 0x80) && !seg && bus == 2;//temp
+cap &= ~0x80;//temp
 
     status = pci_conf_read16(seg, bus, dev, func, PCI_STATUS);
+if(log) printk("02:%02x.%u: status=%04x (%ps wants %02x)\n", dev, func, status, __builtin_return_address(0), cap);//temp
     if ( (status & PCI_STATUS_CAP_LIST) == 0 )
         return 0;
 
     while ( max_cap-- )
     {
         pos = pci_conf_read8(seg, bus, dev, func, pos);
+if(log) printk("02:%02x.%u: pos=%02x\n", dev, func, pos);//temp
         if ( pos < 0x40 )
             break;
 
         pos &= ~3;
         id = pci_conf_read8(seg, bus, dev, func, pos + PCI_CAP_LIST_ID);
+if(log) printk("02:%02x.%u: id=%02x\n", dev, func, id);//temp
 
         if ( id == 0xff )
             break;
@@ -35,6 +40,7 @@ int pci_find_cap_offset(u16 seg, u8 bus,
 
         pos += PCI_CAP_LIST_NEXT;
     }
+if(log) printk("02:%02x.%u: no cap %02x\n", dev, func, cap);//temp
 
     return 0;
 }
--=__PartD9EA6493.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartD9EA6493.2__=--


From xen-devel-bounces@lists.xen.org Fri Jan 24 16:20:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jUW-0000v1-5n; Fri, 24 Jan 2014 16:20:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6jUV-0000uu-Dx
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:20:23 +0000
Received: from [85.158.139.211:8530] by server-12.bemta-5.messagelabs.com id
	0D/15-30017-6C292E25; Fri, 24 Jan 2014 16:20:22 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390580420!11792964!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1966 invoked from network); 24 Jan 2014 16:20:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:20:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96228774"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 16:20:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:20:01 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6jU9-0007S9-81;
	Fri, 24 Jan 2014 16:20:01 +0000
Message-ID: <52E292AA.4010107@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:19:54 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <52DE4BD8.7060209@citrix.com> <52E102F4.3060503@citrix.com>
	<52E11EC80200007800116238@nat28.tlf.novell.com>
	<52E14EA4.6010009@citrix.com>
	<52E23F2602000078001167EC@nat28.tlf.novell.com>
	<52E25087.6050706@citrix.com>
In-Reply-To: <52E25087.6050706@citrix.com>
X-DLP: MIA1
Cc: Paul Durrant <paul.durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Results from the Xen 4.4-rc2 test day
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 11:37 AM, Andrew Cooper wrote:
> On 24/01/14 09:23, Jan Beulich wrote:
>>>>> On 23.01.14 at 18:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> The value of time read from hvm_get_guest_time() resets with a new
>>> domid, making it an inappropriate source of time for the described
>>> function of the MSR.
>>>
>>> I suspect Windows 8 only notices at first on migration as I believe that
>>> it is the first case where the generation ID is supposed to change and
>>> signal a reset of state.  The detection of the failure is actually
>>> further complicated as there appears to be a race condition between the
>>> guest tools resetting the clock back to the correct value, and the DHCP
>>> lease being flushed.  XenRT only notices the failure if the DHCP lease
>>> is actually lost (thus XenRT can't communicate with its xmlrpc daemon
>>> inside the VM), and doesn't directly notice the forward/backward stepping
>>> in time.
>>>
>>> Anyway - please revert the patch - it will be a non-trivial change to
>>> expose an appropriate source of time to be consumed by this MSR.
>> Done, albeit not completely - I left the #define-s in place.
>>
>> Jan
>>
> Thanks - I have pulled XenServer's 4.4-rc2 branch forward to current
> staging, and the w2k12 vmlifecycle tests are now working without error.
>
> I shall organise another full nightly regression test for some time in
> the next few days.

Overall, excellent news.  Thanks.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 16:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jjt-0001RO-3H; Fri, 24 Jan 2014 16:36:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6jjr-0001RJ-5A
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 16:36:15 +0000
Received: from [85.158.139.211:20264] by server-2.bemta-5.messagelabs.com id
	89/71-29392-E7692E25; Fri, 24 Jan 2014 16:36:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390581371!11796126!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11667 invoked from network); 24 Jan 2014 16:36:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:36:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96235083"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 16:36:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 11:36:10 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6jjm-0007Dk-1a;
	Fri, 24 Jan 2014 16:36:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6jjk-00063U-Ch;
	Fri, 24 Jan 2014 16:36:08 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21218.38519.34609.370576@mariner.uk.xensource.com>
Date: Fri, 24 Jan 2014 16:36:07 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	<xen-devel@lists.xensource.com>
In-Reply-To: <21218.33645.354750.805594@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<1390567954.2124.85.camel@kazak.uk.xensource.com>
	<21218.33645.354750.805594@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> I have just looked at the code in vireventpoll.c and there is nothing
> that stops this being a problem.

I think it's a problem with fd callbacks too.

fd callbacks aren't always removed by the very same handler; they
might be removed by pretty much any arbitrary code (although it will
be in _some_ handler).

It isn't correct to defer removal of an fd callback because it is
important that by the time the removal callback returns, the fd is no
longer being polled by the event loop.  That includes being polled by
_any_ instance of the event loop perhaps in another thread.  Neither
libxl nor libvirt have any machinery to wake up other instances of the
event loop in other threads to change their fd sets, and to wait until
that's done.

libxl could (and perhaps should) grow such a thing in its own event
loop, but thinking about how to write it, it's going to be quite
tedious![1]

Changing the identity of the file objects referred to by fds being
polled on in another thread has unfortunate behaviours.  In
experiments done by a friend of mine we discovered that:
 * Linux keeps a secret handle onto the old open-file struct
 * FreeBSD honours the change immediately
 * Solaris returns EBADF even if the operation is dup2 to
   dup a new open file object onto the existing fd
(After discussion we agreed that a close reading of SuS permits only
the FreeBSD behaviour if the open file objects refer to plain files,
which in practice means that the Solaris behaviour is buggy.  I think
the Linux behaviour is mad and the FreeBSD behaviour correct.)

And of course if the fd remains being polled on after the internal
deregistration callback has returned, libxl will feel free to close
the fd.  It might then be reused for any arbitrary object, which might
cause malfunctions if polled in the old way.  And in any case poll
might return EBADF, which will probably crash the event loop.

I'm trying to think of a way for libvirt to handle all of this without
duplicating the horrible synchronisation/wakeup thing I'm
contemplating for libxl.

Ian.

[1] Add a ctx-wide list of pollers, one for every libxl thread in
poll.  This list has to be covered by its own lock.

When fd deregistration occurs, we take this lock, wake up all the
pollers, release the lock, and then wait (perhaps with a condition
variable) for the pollers to acknowledge that they have left poll().

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jqq-0001p7-8o; Fri, 24 Jan 2014 16:43:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W6jqp-0001p2-2S
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:43:27 +0000
Received: from [85.158.143.35:50704] by server-1.bemta-4.messagelabs.com id
	FA/5F-02132-E2892E25; Fri, 24 Jan 2014 16:43:26 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390581803!615190!1
X-Originating-IP: [199.249.25.210]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22032 invoked from network); 24 Jan 2014 16:43:24 -0000
Received: from omzsmtpe01.verizonbusiness.com (HELO
	omzsmtpe01.verizonbusiness.com) (199.249.25.210)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 16:43:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe01.verizonbusiness.com with ESMTP; 24 Jan 2014 16:43:22 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; 
	d="scan'208,217";a="245029879"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 24 Jan 2014 16:43:21 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Fri, 24 Jan 2014 11:43:20 -0500
Message-ID: <52E29827.7000001@terremark.com>
Date: Fri, 24 Jan 2014 11:43:19 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Don Slutz
	<dslutz@verizon.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
	<20140124145840.GE12946@phenom.dumpdata.com>
In-Reply-To: <20140124145840.GE12946@phenom.dumpdata.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5968812017067885968=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5968812017067885968==
Content-Type: multipart/alternative;
	boundary="------------000206070401070303060404"

--------------000206070401070303060404
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 01/24/14 09:58, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slutz wrote:
>> On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
>>> Don Slutz <dslutz@verizon.com> wrote:
>> [snip]
>>
>>>> WARNING: g.e. still in use!
>>>> WARNING: g.e. still in use!
>>>> WARNING: g.e. still in use!
>>>> pm_op(): platform_pm_resume+0x0/0x50 returns -19
>>>> PM: Device i8042 failed to resume: error -19
>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>> "echo 0 >..."
>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>>
>>>> [root@dcs-xen-54 ~]# xl des 17
>>>> [root@dcs-xen-54 ~]# xl restore -V
>>>> /big/xl-save/centos-6.4-x86_64.0.save
>>>>
>>>>
>>>> Not sure if this is expected or not.
>>> I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it be enabled)?
>> I have not used xend/xe in a long time.  I did need to configure it.
>>
>> Does not start:
>>
>>
>> # /etc/init.d/xend start
>> WARNING: Enabling the xend toolstack.
>> xend is deprecated and scheduled for removal. Please migrate
>> to another toolstack ASAP.
>> Traceback (most recent call last):
>>    File "/usr/sbin/xend", line 110, in <module>
>>      sys.exit(main())
>>    File "/usr/sbin/xend", line 91, in main
>>      start_blktapctrl()
>>    File "/usr/sbin/xend", line 77, in start_blktapctrl
>>      start_daemon("blktapctrl", "")
>>    File "/usr/sbin/xend", line 74, in start_daemon
>>      os.execvp(daemon, (daemon,) + args)
>>    File "/usr/lib64/python2.7/os.py", line 344, in execvp
>>      _execvpe(file, args)
>>    File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
>>      func(fullname, *argrest)
>> OSError: [Errno 2] No such file or directory
>>
>>
>> How important is it to try this?
> It tells us whether the issue is indeed with the 'fast-cancel' thing.
>
> But, I do recall seeing a patch from Ian Jackson for this - I just
> don't remember what it was called - it was posted here and perhaps
> applying it would help?

I have not found a patch.  The bug #30:


http://bugs.xenproject.org/xen/bug/30

Looks to me like the issue I am seeing.  The guest I was testing this on has an old kernel (as far as this bug goes), so I would expect it might be related.


Here is what I see that may link it to this:

    *From*: Ian Campbell <Ian.Campbell@citrix.com>
    *To*: Ian Jackson <Ian.Jackson@eu.citrix.com>
    *Cc*: xen-devel@lists.xen.org
    *Subject*: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration
    *Date*: Fri, 10 Jan 2014 10:26:31 +0000

...

    Looks like RHEL4 (linux-2.6.9-89.0.16.EL kernel) doesn't have the
    support for the new mode at all.

    It would probably be wise to validate this under xend before chasing
    red-herrings with xl.

    Ian.


So I will continue the fight to get xend running.

    -Don





>>      -Don Slutz
>>


--------------000206070401070303060404
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 01/24/14 09:58, Konrad Rzeszutek
      Wilk wrote:<br>
    </div>
    <blockquote cite="mid:20140124145840.GE12946@phenom.dumpdata.com"
      type="cite">
      <pre wrap="">On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slutz wrote:
</pre>
      <blockquote type="cite">
        <pre wrap="">On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
</pre>
        <blockquote type="cite">
          <pre wrap="">Don Slutz <a class="moz-txt-link-rfc2396E" href="mailto:dslutz@verizon.com">&lt;dslutz@verizon.com&gt;</a> wrote:
</pre>
        </blockquote>
        <pre wrap="">
[snip]

</pre>
        <blockquote type="cite">
          <blockquote type="cite">
            <pre wrap="">WARNING: g.e. still in use!
WARNING: g.e. still in use!
WARNING: g.e. still in use!
pm_op(): platform_pm_resume+0x0/0x50 returns -19
PM: Device i8042 failed to resume: error -19
INFO: task sadc:22164 blocked for more then 120 seconds.
"echo 0 &gt;..."
INFO: task sadc:22164 blocked for more then 120 seconds.

[root@dcs-xen-54 ~]# xl des 17
[root@dcs-xen-54 ~]# xl restore -V
/big/xl-save/centos-6.4-x86_64.0.save


Not sure if this is expected or not.
</pre>
          </blockquote>
          <pre wrap="">I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it be enabled)?
</pre>
        </blockquote>
        <pre wrap="">
I have not used xend/xe in a long time.  I did need to configure it.

Does not start:


# /etc/init.d/xend start
WARNING: Enabling the xend toolstack.
xend is deprecated and scheduled for removal. Please migrate
to another toolstack ASAP.
Traceback (most recent call last):
  File "/usr/sbin/xend", line 110, in &lt;module&gt;
    sys.exit(main())
  File "/usr/sbin/xend", line 91, in main
    start_blktapctrl()
  File "/usr/sbin/xend", line 77, in start_blktapctrl
    start_daemon("blktapctrl", "")
  File "/usr/sbin/xend", line 74, in start_daemon
    os.execvp(daemon, (daemon,) + args)
  File "/usr/lib64/python2.7/os.py", line 344, in execvp
    _execvpe(file, args)
  File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
    func(fullname, *argrest)
OSError: [Errno 2] No such file or directory


How important is it to try this?
</pre>
      </blockquote>
      <pre wrap="">
It tells us whether the issue is indeed with the 'fast-cancel' thing.

But, I do recall seeing a patch from Ian Jackson for this - I just
don't remember what it was called - it was posted here and perhaps
applying it would help?
</pre>
    </blockquote>
    <br>
    I have not found a patch.&nbsp; The bug #30:<br>
    <br>
    <br>
    <a class="moz-txt-link-freetext" href="http://bugs.xenproject.org/xen/bug/30">http://bugs.xenproject.org/xen/bug/30</a><br>
    <br>
    Looks to me like the issue I am seeing.&nbsp; The guest I was testing
    this on runs an old kernel (as far as this bug is concerned), so I
    would expect it might be related.<br>
    <br>
    <br>
    Here is what I see that may link it to this:<br>
    <br>
    <blockquote>
      <pre class="headers"><b>From</b>: Ian Campbell <a class="moz-txt-link-rfc2396E" href="mailto:Ian.Campbell@citrix.com">&lt;Ian.Campbell@citrix.com&gt;</a>
<b>To</b>: Ian Jackson <a class="moz-txt-link-rfc2396E" href="mailto:Ian.Jackson@eu.citrix.com">&lt;Ian.Jackson@eu.citrix.com&gt;</a>
<b>Cc</b>: <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xen.org">xen-devel@lists.xen.org</a>
<b>Subject</b>: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration
<b>Date</b>: Fri, 10 Jan 2014 10:26:31 +0000

</pre>
    </blockquote>
    <pre class="headers">...
</pre>
    <blockquote>
      <pre class="headers">
Looks like RHEL4 (linux-2.6.9-89.0.16.EL kernel) doesn't have the
support for the new mode at all.

It would probably be wise to validate this under xend before chasing
red-herrings with xl.

Ian.
</pre>
    </blockquote>
    <br>
    So I will continue the fight to get xend running.<br>
    <br>
    &nbsp;&nbsp; -Don<br>
    <br>
    <br>
    <br>
    <br>
    <br>
    <blockquote cite="mid:20140124145840.GE12946@phenom.dumpdata.com"
      type="cite">
      <pre wrap="">
</pre>
      <blockquote type="cite">
        <pre wrap="">
    -Don Slutz

</pre>
      </blockquote>
    </blockquote>
    <br>
  </body>
</html>

--------------000206070401070303060404--


--===============5968812017067885968==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5968812017067885968==--


From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jqq-0001p7-8o; Fri, 24 Jan 2014 16:43:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W6jqp-0001p2-2S
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:43:27 +0000
Received: from [85.158.143.35:50704] by server-1.bemta-4.messagelabs.com id
	FA/5F-02132-E2892E25; Fri, 24 Jan 2014 16:43:26 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390581803!615190!1
X-Originating-IP: [199.249.25.210]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22032 invoked from network); 24 Jan 2014 16:43:24 -0000
Received: from omzsmtpe01.verizonbusiness.com (HELO
	omzsmtpe01.verizonbusiness.com) (199.249.25.210)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 16:43:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe01.verizonbusiness.com with ESMTP; 24 Jan 2014 16:43:22 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; 
	d="scan'208,217";a="245029879"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 24 Jan 2014 16:43:21 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Fri, 24 Jan 2014 11:43:20 -0500
Message-ID: <52E29827.7000001@terremark.com>
Date: Fri, 24 Jan 2014 11:43:19 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Don Slutz
	<dslutz@verizon.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
	<20140124145840.GE12946@phenom.dumpdata.com>
In-Reply-To: <20140124145840.GE12946@phenom.dumpdata.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5968812017067885968=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5968812017067885968==
Content-Type: multipart/alternative;
	boundary="------------000206070401070303060404"

--------------000206070401070303060404
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 01/24/14 09:58, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slutz wrote:
>> On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
>>> Don Slutz <dslutz@verizon.com> wrote:
>> [snip]
>>
>>>> WARNING: g.e. still in use!
>>>> WARNING: g.e. still in use!
>>>> WARNING: g.e. still in use!
>>>> pm_op(): platform_pm_resume+0x0/0x50 returns -19
>>>> PM: Device i8042 failed to resume: error -19
>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>> "echo 0 >..."
>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>>
>>>> [root@dcs-xen-54 ~]# xl des 17
>>>> [root@dcs-xen-54 ~]# xl restore -V
>>>> /big/xl-save/centos-6.4-x86_64.0.save
>>>>
>>>>
>>>> Not sure if this is expected or not.
>>> I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it to be enabled)?
>> I have not used xend/xe in a long time.  I did need to configure it.
>>
>> Does not start:
>>
>>
>> # /etc/init.d/xend start
>> WARNING: Enabling the xend toolstack.
>> xend is deprecated and scheduled for removal. Please migrate
>> to another toolstack ASAP.
>> Traceback (most recent call last):
>>    File "/usr/sbin/xend", line 110, in <module>
>>      sys.exit(main())
>>    File "/usr/sbin/xend", line 91, in main
>>      start_blktapctrl()
>>    File "/usr/sbin/xend", line 77, in start_blktapctrl
>>      start_daemon("blktapctrl", "")
>>    File "/usr/sbin/xend", line 74, in start_daemon
>>      os.execvp(daemon, (daemon,) + args)
>>    File "/usr/lib64/python2.7/os.py", line 344, in execvp
>>      _execvpe(file, args)
>>    File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
>>      func(fullname, *argrest)
>> OSError: [Errno 2] No such file or directory
>>
>>
>> How important is it to try this?
> It tells us whether the issue is indeed with the 'fast-cancel' thing.
>
> But, I do recall seeing a patch from Ian Jackson for this - I just
> don't remember what it was called - it was posted here and perhaps
> applying it would help?

I have not found a patch.  The bug #30:


http://bugs.xenproject.org/xen/bug/30

Looks to me like the issue I am seeing.  The guest I was testing this on runs an old kernel (as far as this bug is concerned), so I would expect it might be related.


Here is what I see that may link it to this:

    *From*: Ian Campbell <Ian.Campbell@citrix.com>
    *To*: Ian Jackson <Ian.Jackson@eu.citrix.com>
    *Cc*: xen-devel@lists.xen.org
    *Subject*: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration
    *Date*: Fri, 10 Jan 2014 10:26:31 +0000

...

    Looks like RHEL4 (linux-2.6.9-89.0.16.EL kernel) doesn't have the
    support for the new mode at all.

    It would probably be wise to validate this under xend before chasing
    red-herrings with xl.

    Ian.


So I will continue the fight to get xend running.

    -Don





>>      -Don Slutz
>>


--------------000206070401070303060404--


--===============5968812017067885968==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5968812017067885968==--


From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrB-0001ri-RP; Fri, 24 Jan 2014 16:43:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrA-0001rM-4q
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:48 +0000
Received: from [85.158.137.68:56935] by server-10.bemta-3.messagelabs.com id
	BD/EA-23989-34892E25; Fri, 24 Jan 2014 16:43:47 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390581826!7516599!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24963 invoked from network); 24 Jan 2014 16:43:46 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:46 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so1075767eek.33
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=I+B3ptBBz5Ddu9j6DCH+K27utweqzzKBxJmTzldM9J0=;
	b=NxXnsYs70FFeu0zhNLHAyIxJh5sbc1MeJOJUMGdPDBiQEk97JLsM/BaoIe8t2riA7p
	TFi56AQCdm4bWaqe7srYaAXnTC1VfW+nYxq9FNFoDQIXNT6YESTFMErv6D48L/bit3rw
	rHWTmF/ysR/vcTAG/oYuhWCqVldc37A9JDQXm53D522RKitGmzk5USFdTqeKcO/9DFzZ
	rP3/L6cAoCiZxjFg6i99cRJ0aqMDlsyn0bHYjd5Q4iNojrkBj1VNjgyzccWo4tSKtKg2
	sGb+muadUhAsb8/msdR/Cv06gV6mvPgf2EdzGXrSSLJCFEBtgbqNrQ3QDkYSUVV4SDgJ
	VSkw==
X-Gm-Message-State: ALoCoQlc1wX6+Un8jlkcVkyOmpZr9tB7xDBftOT6n84lew7snpRsH82jOYDAoRWzaWGbouagKss8
X-Received: by 10.14.202.8 with SMTP id c8mr3467644eeo.88.1390581826267;
	Fri, 24 Jan 2014 08:43:46 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.44
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:45 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:34 +0000
Message-Id: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 0/8] Interrupt management reworking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

While I was working on the ARM SMMU driver for Xen, I made some changes
to improve interrupt handling.

The main modifications of this patch series:
    - Add multiple handler support for interrupts
    - Merge route and setup IRQ functions
    - Improve error checking on some functions

This patch series is a requirement to support the ARM SMMU driver.

Sincerely yours,

Julien Grall (8):
  xen/arm: irq: move gic {,un}lock in gic_set_irq_properties
  xen/arm: setup_dt_irq: don't enable the IRQ if the creation has
    failed
  xen/arm: IRQ: Protect IRQ to be shared between domains and XEN
  xen/arm: irq: Don't need to have a specific function to route IRQ to
    Xen
  xen/arm: IRQ: rename release_irq in release_dt_irq
  xen/arm: IRQ: Add lock contrainst for gic_irq_{startup,shutdown}
  xen/irq: Handle multiple action per IRQ
  xen/serial: remove serial_dt_irq

 xen/arch/arm/domain_build.c        |    8 +-
 xen/arch/arm/gic.c                 |  206 +++++++++++++++++++++++-------------
 xen/arch/arm/irq.c                 |    6 +-
 xen/arch/arm/setup.c               |    3 -
 xen/arch/arm/smpboot.c             |    2 -
 xen/arch/arm/time.c                |   11 --
 xen/drivers/char/exynos4210-uart.c |    8 --
 xen/drivers/char/ns16550.c         |   11 --
 xen/drivers/char/omap-uart.c       |    8 --
 xen/drivers/char/pl011.c           |    8 --
 xen/drivers/char/serial.c          |    9 --
 xen/include/asm-arm/gic.h          |    7 --
 xen/include/asm-arm/irq.h          |    1 +
 xen/include/asm-arm/time.h         |    3 -
 xen/include/xen/irq.h              |    1 +
 xen/include/xen/serial.h           |    5 -
 16 files changed, 146 insertions(+), 151 deletions(-)

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrD-0001sM-7c; Fri, 24 Jan 2014 16:43:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrB-0001rY-6k
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:49 +0000
Received: from [85.158.139.211:33438] by server-4.bemta-5.messagelabs.com id
	8E/AD-26791-44892E25; Fri, 24 Jan 2014 16:43:48 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390581827!11787831!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24123 invoked from network); 24 Jan 2014 16:43:47 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:47 -0000
Received: by mail-ea0-f172.google.com with SMTP id g15so1114142eak.17
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=BxfWK/5ZB829nCOWLXPUR+GRiCiyfd0N28Lysj6iNdM=;
	b=Pxw5Gpn++eYmlWCeJrll1eWmJ93XZNxxHCKQHafQCbbHGHFqTUnZ8EVsXgo24x94zl
	JH9/KMq3Z9/4bp1MQaA8KRd35VMWZZI6fSMZ3FRvg7FftywbvhNh1smnVuls830iq2Di
	2aEeytaFU+p1WwpH1lBTmhlvRG9ZohmRQi1D5Cdw/m5ZiYuROFEPdTT7Wtv8sGKaVHHM
	RhsD7xVo6v8S3lq0POJxT5aYXKjAqrXVKf9cwtGJ8RaTML2NIvDfeND7fKNWnxdgYnpS
	Lff4nYKrzRoFUjtSrWsRPOqPK80AQzmK18AlZs6A/APvwDEd0s7IrXxTySV34zFZ33Xd
	imnQ==
X-Gm-Message-State: ALoCoQmgqOXaFAcFP9Pw9ucqlxihClBPpXtjsmhOLGavi3w8JsDT3QnGfgToI3Hm8vFeDRsOi+Qp
X-Received: by 10.15.68.129 with SMTP id w1mr13680197eex.26.1390581827622;
	Fri, 24 Jan 2014 08:43:47 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.46
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:46 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:35 +0000
Message-Id: <1390581822-32624-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 1/8] xen/arm: irq: move gic {,
	un}lock in gic_set_irq_properties
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function gic_set_irq_properties is only called in two places:
    - gic_route_irq: the gic.lock is only taken around the call to
    gic_set_irq_properties.
    - gic_route_irq_to_guest: the gic.lock is taken for the duration of
    the function, but the lock is only needed while gic_set_irq_properties
    runs.

So we can safely move the lock into gic_set_irq_properties and restrict
the critical section for the gic.lock in gic_route_irq_to_guest.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..1943f92 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -228,7 +228,11 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
 {
     volatile unsigned char *bytereg;
     uint32_t cfg, edgebit;
-    unsigned int mask = gic_cpu_mask(cpu_mask);
+    unsigned int mask;
+
+    spin_lock(&gic.lock);
+
+    mask = gic_cpu_mask(cpu_mask);
 
     /* Set edge / level */
     cfg = GICD[GICD_ICFGR + irq / 16];
@@ -247,6 +251,7 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
     bytereg = (unsigned char *) (GICD + GICD_IPRIORITYR);
     bytereg[irq] = priority;
 
+    spin_unlock(&gic.lock);
 }
 
 /* Program the GIC to route an interrupt */
@@ -269,9 +274,7 @@ static int gic_route_irq(unsigned int irq, bool_t level,
 
     desc->handler = &gic_host_irq_type;
 
-    spin_lock(&gic.lock);
     gic_set_irq_properties(irq, level, cpu_mask, priority);
-    spin_unlock(&gic.lock);
 
     spin_unlock_irqrestore(&desc->lock, flags);
     return 0;
@@ -769,7 +772,6 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     action->free_on_release = 1;
 
     spin_lock_irqsave(&desc->lock, flags);
-    spin_lock(&gic.lock);
 
     desc->handler = &gic_guest_irq_type;
     desc->status |= IRQ_GUEST;
@@ -790,7 +792,6 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     p->desc = desc;
 
 out:
-    spin_unlock(&gic.lock);
     spin_unlock_irqrestore(&desc->lock, flags);
     return retval;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 1/8] xen/arm: irq: move gic {,
	un}lock in gic_set_irq_properties
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function gic_set_irq_properties is only called in two places:
    - gic_route_irq: the gic.lock is only taken around the call to
    gic_set_irq_properties.
    - gic_route_irq_to_guest: the gic.lock is held for the duration of
    the function, but it is only needed for the call to
    gic_set_irq_properties.

So we can safely move the locking into gic_set_irq_properties and shrink
the gic.lock critical section in gic_route_irq_to_guest.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..1943f92 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -228,7 +228,11 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
 {
     volatile unsigned char *bytereg;
     uint32_t cfg, edgebit;
-    unsigned int mask = gic_cpu_mask(cpu_mask);
+    unsigned int mask;
+
+    spin_lock(&gic.lock);
+
+    mask = gic_cpu_mask(cpu_mask);
 
     /* Set edge / level */
     cfg = GICD[GICD_ICFGR + irq / 16];
@@ -247,6 +251,7 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
     bytereg = (unsigned char *) (GICD + GICD_IPRIORITYR);
     bytereg[irq] = priority;
 
+    spin_unlock(&gic.lock);
 }
 
 /* Program the GIC to route an interrupt */
@@ -269,9 +274,7 @@ static int gic_route_irq(unsigned int irq, bool_t level,
 
     desc->handler = &gic_host_irq_type;
 
-    spin_lock(&gic.lock);
     gic_set_irq_properties(irq, level, cpu_mask, priority);
-    spin_unlock(&gic.lock);
 
     spin_unlock_irqrestore(&desc->lock, flags);
     return 0;
@@ -769,7 +772,6 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     action->free_on_release = 1;
 
     spin_lock_irqsave(&desc->lock, flags);
-    spin_lock(&gic.lock);
 
     desc->handler = &gic_guest_irq_type;
     desc->status |= IRQ_GUEST;
@@ -790,7 +792,6 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     p->desc = desc;
 
 out:
-    spin_unlock(&gic.lock);
     spin_unlock_irqrestore(&desc->lock, flags);
     return retval;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrD-0001sq-N3; Fri, 24 Jan 2014 16:43:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrC-0001ru-Mi
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:50 +0000
Received: from [85.158.137.68:18957] by server-15.bemta-3.messagelabs.com id
	D2/94-11556-54892E25; Fri, 24 Jan 2014 16:43:49 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390581829!10021455!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15322 invoked from network); 24 Jan 2014 16:43:49 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:49 -0000
Received: by mail-ee0-f42.google.com with SMTP id e49so1079089eek.1
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=RJKrQ79F9jtybYF5o92xt9w2VktARRoorYEHJ1kI+jc=;
	b=QD24jY7VjLs9Q0MezXwv24J4ALlWGZ1ImeSGoPv3uAK0HqGHuEttOm6JHzGk5HaqXm
	Rjd7oypajVNvtci7hd4LCIEFr3gLLxO3kmkHc0rKsIBzvwrVoF2sFM75ecsKg3/JDtdt
	MmFAGpg2nctZT0v+ErvtQjx94DAykLabtcvHcoCjql32e+lpC1jo8AT5muwFNGGdL1kW
	nBqm6ZrZpFfMZEhu+/nA8Q1+ulG8qUutJqG1agMXZLprGsqxvfMrvZnLZIMZjV500Daj
	X8jg+GmOR2iMv41juLEWpGqS/Qdb9BTboLXglTuHBr7iI+4FAN0/x8aGHP77elOZJM2j
	tcjQ==
X-Gm-Message-State: ALoCoQlataXm2gNWWFC/1OqtF2lhzEBjuRQiY40l5hx2LBIm/GkH0me+YViNxI0u7UJp38M/pGgB
X-Received: by 10.14.37.131 with SMTP id y3mr13326727eea.1.1390581828951;
	Fri, 24 Jan 2014 08:43:48 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.47
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:48 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:36 +0000
Message-Id: <1390581822-32624-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 2/8] xen/arm: setup_dt_irq: don't enable
	the IRQ if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For now, __setup_irq can only fail if an action is already set. If the
function is updated in the future, we don't want to enable the IRQ when
the setup has failed.

If the function were to fail with action = NULL, Xen would crash when it
receives the IRQ, because do_IRQ doesn't check whether action is NULL.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 1943f92..55e7622 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -608,8 +608,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     rc = __setup_irq(desc, irq->irq, new);
     spin_unlock_irqrestore(&desc->lock, flags);
 
-    desc->handler->startup(desc);
-
+    if ( !rc )
+        desc->handler->startup(desc);
 
     return rc;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrH-0001uh-81; Fri, 24 Jan 2014 16:43:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrF-0001tg-8T
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:53 +0000
Received: from [85.158.143.35:56662] by server-2.bemta-4.messagelabs.com id
	C9/13-11386-84892E25; Fri, 24 Jan 2014 16:43:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390581831!609753!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24368 invoked from network); 24 Jan 2014 16:43:51 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:51 -0000
Received: by mail-ea0-f169.google.com with SMTP id l9so958816eaj.28
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=YEjqHMBPNUIclsDbTl8UOSt90DKGGrhe3I+hbf14uyw=;
	b=Mxz0hhlnHsDnrfXgCi3Jc8epQNh2sMcUDub8U6r+p741YkH0TYdROkTcAvS2h2Wi88
	BUcLe8/dS2RUVSqDEP9hPkQtZGcu+tonmF7ZtvmmlXVnMl8E3pxH3rL4Mgok0KhR05eW
	jxDo8mQJ5nBU/eQDeDxolNWkLVxA85z6zHKOvp/kNEMw6On8oecK//pVI8jM+x9ayJSK
	0QgZCctyOSEm7i2eaJ3SXwP5AnafhBOJwKYx1Sfy9MH7hs0zQJLlWJgQP5qTQVetxIUz
	cwyR0Q0LrlTa4Dbt2bVMjD5SerzWn1B4+n1iceIGHR6/kU4Nyc4LCzuKQNVvtMfdeOpA
	Vj5w==
X-Gm-Message-State: ALoCoQnBVB5KL18q8tJmcPrWVunGTsJi5xnmcbAWWSQWWaAoMzNhQ1U7XG9Lj8yxCkJS49s7SpLh
X-Received: by 10.14.4.67 with SMTP id 43mr9559231eei.70.1390581831316;
	Fri, 24 Jan 2014 08:43:51 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.48
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:50 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:37 +0000
Message-Id: <1390581822-32624-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 3/8] xen/arm: IRQ: Protect IRQ to be
	shared between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current gic_route_irq_to_guest implementation sets IRQ_GUEST
regardless of whether the IRQ has been correctly set up.

As an IRQ can be shared between devices, if those devices are not all
assigned to the same domain (or to Xen), this could result in the IRQ
being routed to a domain instead of Xen.

Also avoid relying on this wrong behaviour when Xen routes an IRQ to DOM0.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Hopefully, none of the supported platforms share the IRQ of the UART
    (the only device currently used by Xen) with another device. It would
    be nice to have this patch in Xen 4.4 to avoid wasting developers'
    time.

    The downside of this patch is that if someone wants to support such a
    platform (e.g. an IRQ shared between devices assigned to different
    domains/Xen), they will end up with an error message and a panic.
---
 xen/arch/arm/domain_build.c |    8 ++++++--
 xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..1fc359a 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
         }
 
         DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
-        /* Don't check return because the IRQ can be use by multiple device */
-        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
+        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);
+            return res;
+        }
     }
 
     /* Map the address ranges */
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 55e7622..d68bde3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -605,6 +605,21 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     desc = irq_to_desc(irq->irq);
 
     spin_lock_irqsave(&desc->lock, flags);
+
+    if ( desc->status & IRQ_GUEST )
+    {
+        struct domain *d;
+
+        ASSERT(desc->action != NULL);
+
+        d = desc->action->dev_id;
+
+        spin_unlock_irqrestore(&desc->lock, flags);
+        printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
+               irq->irq, d->domain_id);
+        return -EADDRINUSE;
+    }
+
     rc = __setup_irq(desc, irq->irq, new);
     spin_unlock_irqrestore(&desc->lock, flags);
 
@@ -759,7 +774,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     struct irqaction *action;
     struct irq_desc *desc = irq_to_desc(irq->irq);
     unsigned long flags;
-    int retval;
+    int retval = 0;
     bool_t level;
     struct pending_irq *p;
 
@@ -773,6 +788,29 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
 
     spin_lock_irqsave(&desc->lock, flags);
 
+    /* If the IRQ is already used by someone
+     *  - If it's the same domain -> Xen doesn't need to update the IRQ desc
+     *  - Otherwise -> For now, don't allow the IRQ to be shared between
+     *  Xen and domains.
+     */
+    if ( desc->action != NULL )
+    {
+        if ( (desc->status & IRQ_GUEST) && d == desc->action->dev_id )
+            goto out;
+
+        if ( desc->status & IRQ_GUEST )
+        {
+            d = desc->action->dev_id;
+            printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
+                   irq->irq, d->domain_id);
+        }
+        else
+            printk(XENLOG_ERR "ERROR: IRQ %u is already used by Xen\n",
+                   irq->irq);
+        retval = -EADDRINUSE;
+        goto out;
+    }
+
     desc->handler = &gic_guest_irq_type;
     desc->status |= IRQ_GUEST;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrH-0001vE-PA; Fri, 24 Jan 2014 16:43:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrG-0001uJ-Qr
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:55 +0000
Received: from [85.158.137.68:19233] by server-2.bemta-3.messagelabs.com id
	EE/29-17329-A4892E25; Fri, 24 Jan 2014 16:43:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390581833!11190437!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32017 invoked from network); 24 Jan 2014 16:43:53 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:53 -0000
Received: by mail-ee0-f44.google.com with SMTP id c13so1079474eek.3
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Vd6cA+aHxOLodG7CNsn9ayjeDt8YPiuxhzQ8NJnjpJ8=;
	b=Yb3bxlRIMEy+D2NlUh1LbMxU4vTQfDYGxkCLvV4uGBnIUmzjyrErVaL2slptaxl6Ag
	UgAiGao6sIaKjp+RzvKEYe/Q99ShHaQX8saB+XhieCkvX7mLIVE3m81mKWoBstjdW+A5
	QZucUuziVZOGiIeBLx6aq2yZj5KUR1SI8zIuQ35AwruBo++i/48qzjzldzfdX7olNEQL
	veAQl8Ze16p/69lXYtyqOg9TLTnMS99yjKaKsyM8fVw71jS8ArBocEfiUHvTWTCrRChN
	2yopSbLzRpFUhLAeH3beHVzu78JcCw7LQFHwXNNJDArCse6BHXqDjZMs7YnWWsBSWE/O
	W0DA==
X-Gm-Message-State: ALoCoQnNkxj2kWpWz7bsTP9OsxQ9hMuu8PAYmhoXIH/Q8yOys9NJWv9HaM1qBxsKe5/uOZWF35zy
X-Received: by 10.15.41.140 with SMTP id s12mr13562920eev.50.1390581832995;
	Fri, 24 Jan 2014 08:43:52 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.51
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:52 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:38 +0000
Message-Id: <1390581822-32624-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 4/8] xen/arm: irq: Don't need to have a
	specific function to route IRQ to Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently, when an IRQ is handled by Xen, the setup is done in 2 steps:
    - Route the IRQ to the current CPU and set its priority
    - Set up the handler

For PPIs, these steps are performed on every CPU. For SPIs, they are
performed only on the boot CPU.

Dividing the setup into two steps complicates the code when a new driver is
added to Xen (for instance an SMMU driver). Xen can safely route the IRQ
when the driver sets up the interrupt handler.
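
The consolidated flow can be sketched as follows. This is an illustrative,
simplified model (all names are hypothetical, not the actual Xen API): the
point is that routing now happens inside the setup path, only the first time
an action is installed on the IRQ.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for Xen's irqaction / irq_desc structures. */
struct action {
    void (*handler)(void);
};

struct irq_desc {
    struct action *action;  /* NULL until the first handler is installed */
    int routed;             /* nonzero once the GIC has been programmed */
};

/* Stand-in for gic_route_irq(): program routing and priority. */
static void route_irq(struct irq_desc *d)
{
    d->routed = 1;
}

/* Stand-in for setup_dt_irq(): route on first setup, then install the
 * handler.  The real code also checks for conflicting uses of the IRQ;
 * that is omitted here for brevity. */
static int setup_irq(struct irq_desc *d, struct action *a)
{
    bool first = (d->action == NULL);

    if ( first )
        route_irq(d);  /* single entry point: routing happens here now */

    d->action = a;
    return 0;
}
```

With this shape, callers such as a new SMMU driver need only one call to
install their handler; no separate routing pass per CPU type is required.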

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c         |   67 +++++++++++++++-----------------------------
 xen/arch/arm/setup.c       |    3 --
 xen/arch/arm/smpboot.c     |    2 --
 xen/arch/arm/time.c        |   11 --------
 xen/include/asm-arm/gic.h  |    7 -----
 xen/include/asm-arm/time.h |    3 --
 6 files changed, 23 insertions(+), 70 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d68bde3..58bcba3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -254,43 +254,25 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
     spin_unlock(&gic.lock);
 }
 
-/* Program the GIC to route an interrupt */
+/* Program the GIC to route an interrupt to the host (eg Xen)
+ * - needs to be called with desc.lock held
+ */
 static int gic_route_irq(unsigned int irq, bool_t level,
                          const cpumask_t *cpu_mask, unsigned int priority)
 {
     struct irq_desc *desc = irq_to_desc(irq);
-    unsigned long flags;
 
     ASSERT(priority <= 0xff);     /* Only 8 bits of priority */
     ASSERT(irq < gic.lines);      /* Can't route interrupts that don't exist */
-
-    if ( desc->action != NULL )
-        return -EBUSY;
-
-    /* Disable interrupt */
-    desc->handler->shutdown(desc);
-
-    spin_lock_irqsave(&desc->lock, flags);
+    ASSERT(desc->status & IRQ_DISABLED);
 
     desc->handler = &gic_host_irq_type;
 
     gic_set_irq_properties(irq, level, cpu_mask, priority);
 
-    spin_unlock_irqrestore(&desc->lock, flags);
     return 0;
 }
 
-/* Program the GIC to route an interrupt with a dt_irq */
-void gic_route_dt_irq(const struct dt_irq *irq, const cpumask_t *cpu_mask,
-                      unsigned int priority)
-{
-    bool_t level;
-
-    level = dt_irq_is_level_triggered(irq);
-
-    gic_route_irq(irq->irq, level, cpu_mask, priority);
-}
-
 static void __init gic_dist_init(void)
 {
     uint32_t type;
@@ -538,28 +520,6 @@ void gic_disable_cpu(void)
     spin_unlock(&gic.lock);
 }
 
-void gic_route_ppis(void)
-{
-    /* GIC maintenance */
-    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()), 0xa0);
-    /* Route timer interrupt */
-    route_timer_interrupt();
-}
-
-void gic_route_spis(void)
-{
-    int seridx;
-    const struct dt_irq *irq;
-
-    for ( seridx = 0; seridx <= SERHND_IDX; seridx++ )
-    {
-        if ( (irq = serial_dt_irq(seridx)) == NULL )
-            continue;
-
-        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()), 0xa0);
-    }
-}
-
 void __init release_irq(unsigned int irq)
 {
     struct irq_desc *desc;
@@ -601,6 +561,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     int rc;
     unsigned long flags;
     struct irq_desc *desc;
+    bool_t disabled = 0;
 
     desc = irq_to_desc(irq->irq);
 
@@ -620,6 +581,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
         return -EADDRINUSE;
     }
 
+    disabled = (desc->action == NULL);
+
+    /* First time the IRQ is setup */
+    if ( disabled )
+    {
+        bool_t level;
+
+        level = dt_irq_is_level_triggered(irq);
+        /* It's fine to use smp_processor_id() because:
+         * For PPI: irq_desc is banked
+         * For SGI: we don't care for now which CPU will receive the
+         * interrupt
+         * TODO: Handle case where SGI is setup on different CPU than
+         * the targeted CPU and the priority.
+         */
+        gic_route_irq(irq->irq, level, cpumask_of(smp_processor_id()), 0xa0);
+    }
+
     rc = __setup_irq(desc, irq->irq, new);
     spin_unlock_irqrestore(&desc->lock, flags);
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5434784..1f6d713 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -711,9 +711,6 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     init_IRQ();
 
-    gic_route_ppis();
-    gic_route_spis();
-
     init_maintenance_interrupt();
     init_timer_interrupt();
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..807e7d3 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -287,8 +287,6 @@ void __cpuinit start_secondary(unsigned long boot_phys_offset,
 
     init_secondary_IRQ();
 
-    gic_route_ppis();
-
     init_maintenance_interrupt();
     init_timer_interrupt();
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 81e3e28..cd16318 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -218,17 +218,6 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
     vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
 }
 
-/* Route timer's IRQ on this CPU */
-void __cpuinit route_timer_interrupt(void)
-{
-    gic_route_dt_irq(&timer_irq[TIMER_PHYS_NONSECURE_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
-    gic_route_dt_irq(&timer_irq[TIMER_HYP_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
-    gic_route_dt_irq(&timer_irq[TIMER_VIRT_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
-}
-
 /* Set up the timer interrupt on this CPU */
 void __cpuinit init_timer_interrupt(void)
 {
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 87f4298..3fd1024 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -144,13 +144,6 @@ extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-/* Program the GIC to route an interrupt with a dt_irq */
-extern void gic_route_dt_irq(const struct dt_irq *irq,
-                             const cpumask_t *cpu_mask,
-                             unsigned int priority);
-extern void gic_route_ppis(void);
-extern void gic_route_spis(void);
-
 extern void gic_inject(void);
 extern void gic_clear_pending_irqs(struct vcpu *v);
 extern int gic_events_need_delivery(void);
diff --git a/xen/include/asm-arm/time.h b/xen/include/asm-arm/time.h
index 9d302d3..eaa96bc 100644
--- a/xen/include/asm-arm/time.h
+++ b/xen/include/asm-arm/time.h
@@ -28,9 +28,6 @@ enum timer_ppi
 /* Get one of the timer IRQ description */
 const struct dt_irq* timer_dt_irq(enum timer_ppi ppi);
 
-/* Route timer's IRQ on this CPU */
-extern void __cpuinit route_timer_interrupt(void);
-
 /* Set up the timer interrupt on this CPU */
 extern void __cpuinit init_timer_interrupt(void);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:43:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrJ-0001ww-Gd; Fri, 24 Jan 2014 16:43:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrI-0001vB-4s
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:56 +0000
Received: from [85.158.139.211:34361] by server-13.bemta-5.messagelabs.com id
	8F/CC-11357-B4892E25; Fri, 24 Jan 2014 16:43:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390581834!1024752!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21478 invoked from network); 24 Jan 2014 16:43:54 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:54 -0000
Received: by mail-ea0-f169.google.com with SMTP id l9so958835eaj.28
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=eiyArPnmm83JJcB65B+3z8jaUKG7Ko6IEwACZp100JA=;
	b=M8JqkCHw0zIkiYEeDFl/vGRu3wlVgEX5aqseiymPV0BXrAB6m45ZqwsYPJEpKK/n0d
	G2DsBVexS6ByDT9/ZSu8Bcs3liqCn5ria2+BN+RqosDJSBoNTZN4jMTxCCPZtd80qhfG
	fFJbLuENdSqUe7d/5Mb5Xj9xFTbX4Wspr5g/fwU6wDa3jc66ID8ocfVTKwLuq/6Sjb+q
	DqIJmuXN5tDqROBbDKxPBlud3e0ngf6YkkrFWP9V7U4wS20y0g502saFdHra4nF2LcZc
	WOyZXG9gq/5OONjq70Y6O3KV96TYBRcij+mmLaGezUZ4yDMeL4VOTGg2kT0H8bAQgEz8
	eOhw==
X-Gm-Message-State: ALoCoQlV2js2v0eqokv3uUKVSfQfvSwr79ftLiYrwzrWXnI0WgkSbsNbsixPi5YnkJ+isGkik7qU
X-Received: by 10.14.32.132 with SMTP id o4mr13654919eea.14.1390581834435;
	Fri, 24 Jan 2014 08:43:54 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.53
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:53 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:39 +0000
Message-Id: <1390581822-32624-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 5/8] xen/arm: IRQ: rename release_irq in
	release_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rename the function and make its prototype consistent with request_dt_irq.

The new parameter (dev_id) will be used in a later patch to release the
correct action once support for multiple actions per IRQ is added.
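
Why dev_id is needed can be sketched as below. This is an illustrative model
only (the list-walk helper and names are hypothetical, not the Xen code):
once several actions can share one IRQ, release must identify which action to
drop, and dev_id serves as that key.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified irqaction: a singly-linked list of actions
 * sharing one IRQ, each tagged with the dev_id it was registered with. */
struct irqaction {
    void *dev_id;
    struct irqaction *next;
};

/* Unlink and return the action whose dev_id matches, or NULL if none does.
 * Uses a pointer-to-pointer walk so removing the head needs no special case. */
static struct irqaction *release_action(struct irqaction **head,
                                        const void *dev_id)
{
    struct irqaction **pp;

    for ( pp = head; *pp != NULL; pp = &(*pp)->next )
    {
        if ( (*pp)->dev_id == dev_id )
        {
            struct irqaction *found = *pp;

            *pp = found->next;
            found->next = NULL;
            return found;
        }
    }
    return NULL;
}
```

Without dev_id, a shared IRQ would give release no way to tell the caller's
action apart from another driver's, which is why the parameter is added now
even though this patch does not yet use it.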

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c        |    4 ++--
 xen/include/asm-arm/irq.h |    1 +
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 58bcba3..2643b46 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -520,13 +520,13 @@ void gic_disable_cpu(void)
     spin_unlock(&gic.lock);
 }
 
-void __init release_irq(unsigned int irq)
+void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
 {
     struct irq_desc *desc;
     unsigned long flags;
     struct irqaction *action;
 
-    desc = irq_to_desc(irq);
+    desc = irq_to_desc(irq->irq);
 
     desc->handler->shutdown(desc);
 
diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 7c20703..bd8aac1 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -44,6 +44,7 @@ int __init request_dt_irq(const struct dt_irq *irq,
                           void (*handler)(int, void *, struct cpu_user_regs *),
                           const char *devname, void *dev_id);
 int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new);
+void release_dt_irq(const struct dt_irq *irq, const void *dev_id);
 
 #endif /* _ASM_HW_IRQ_H */
 /*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:44:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrM-0001zg-UM; Fri, 24 Jan 2014 16:44:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrL-0001y1-0I
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:43:59 +0000
Received: from [85.158.143.35:18001] by server-3.bemta-4.messagelabs.com id
	01/6D-32360-E4892E25; Fri, 24 Jan 2014 16:43:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390581837!611291!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9997 invoked from network); 24 Jan 2014 16:43:57 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:57 -0000
Received: by mail-ee0-f41.google.com with SMTP id e49so1084971eek.0
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=P7EqNQk683TJWAD5+UoQAz1j571CwBAfNMAFne+g2jc=;
	b=INsmXglA5HyNED9BuzyWu1i7DojZR5HBff4Lg7i3e3IRt2uYIiGoJorcp/QKRgsg6H
	3aa/Xqx7DVgjE0WsrMTdglamLhtUjF9Bl4f80275nRnDyGxn2MX+9xHK9zzzzLdNLJH6
	e3RTeDpZoWe4PwB3pbNr8ZRhd2dH4RHBLtX/zm0dYNH1TK+fNoM7/K2rLCB1cn/8rKaC
	khTdP1jYhui/s7IZBdGi9eb4ryTrivpkWycSkT1rMHjFT53CpFGDXGqbCHWaxMtqu0qz
	3iIlczutfTfrqrLtFOjrCAZyrY9nPH9NDm+T30eV2cGl66vJ7axLrgslYNM33LWU94Bp
	Prfg==
X-Gm-Message-State: ALoCoQlTEnsZ8awznSvNNRFY09GrZTyUf7AtSSKaHiIBxRc1uTPIkVmM0NyPkNXUoFjIEiOsX8Oj
X-Received: by 10.15.110.133 with SMTP id ch5mr42084eeb.112.1390581837123;
	Fri, 24 Jan 2014 08:43:57 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.54
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:55 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:40 +0000
Message-Id: <1390581822-32624-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock constraints
	for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When multiple actions per IRQ are supported, gic_irq_{startup,shutdown} will
have to be called in the same critical section as setup/release.
Otherwise there is a race condition when CPU A calls release_dt_irq at the
same time as CPU B calls setup_dt_irq.

This race can leave the IRQ disabled even though an action is still
registered.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/gic.c |   40 +++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 2643b46..ebd2b5f 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -129,43 +129,53 @@ void gic_restore_state(struct vcpu *v)
     gic_restore_pending_irqs(v);
 }
 
-static void gic_irq_enable(struct irq_desc *desc)
+static unsigned int gic_irq_startup(struct irq_desc *desc)
 {
     int irq = desc->irq;
-    unsigned long flags;
 
-    spin_lock_irqsave(&desc->lock, flags);
+    ASSERT(spin_is_locked(&desc->lock));
+    ASSERT(!local_irq_is_enabled());
+
     spin_lock(&gic.lock);
     desc->status &= ~IRQ_DISABLED;
     dsb();
     /* Enable routing */
     GICD[GICD_ISENABLER + irq / 32] = (1u << (irq % 32));
     spin_unlock(&gic.lock);
-    spin_unlock_irqrestore(&desc->lock, flags);
+
+    return 0;
 }
 
-static void gic_irq_disable(struct irq_desc *desc)
+static void gic_irq_shutdown(struct irq_desc *desc)
 {
     int irq = desc->irq;
 
-    spin_lock(&desc->lock);
+    ASSERT(spin_is_locked(&desc->lock));
+    ASSERT(!local_irq_is_enabled());
+
     spin_lock(&gic.lock);
     /* Disable routing */
     GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
     desc->status |= IRQ_DISABLED;
     spin_unlock(&gic.lock);
-    spin_unlock(&desc->lock);
 }
 
-static unsigned int gic_irq_startup(struct irq_desc *desc)
+static void gic_irq_enable(struct irq_desc *desc)
 {
-    gic_irq_enable(desc);
-    return 0;
+    unsigned long flags;
+
+    spin_lock_irqsave(&desc->lock, flags);
+    gic_irq_startup(desc);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
-static void gic_irq_shutdown(struct irq_desc *desc)
+static void gic_irq_disable(struct irq_desc *desc)
 {
-    gic_irq_disable(desc);
+    unsigned long flags;
+
+    spin_lock_irqsave(&desc->lock, flags);
+    gic_irq_shutdown(desc);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 static void gic_irq_ack(struct irq_desc *desc)
@@ -528,9 +538,8 @@ void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
 
     desc = irq_to_desc(irq->irq);
 
-    desc->handler->shutdown(desc);
-
     spin_lock_irqsave(&desc->lock,flags);
+    desc->handler->shutdown(desc);
     action = desc->action;
     desc->action  = NULL;
     desc->status &= ~IRQ_GUEST;
@@ -600,11 +609,12 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     }
 
     rc = __setup_irq(desc, irq->irq, new);
-    spin_unlock_irqrestore(&desc->lock, flags);
 
     if ( !rc )
         desc->handler->startup(desc);
 
+    spin_unlock_irqrestore(&desc->lock, flags);
+
     return rc;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrO-00020G-Ox; Fri, 24 Jan 2014 16:44:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrM-0001yx-CG
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:44:00 +0000
Received: from [85.158.143.35:57313] by server-2.bemta-4.messagelabs.com id
	42/43-11386-F4892E25; Fri, 24 Jan 2014 16:43:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390581838!611071!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17369 invoked from network); 24 Jan 2014 16:43:58 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:43:58 -0000
Received: by mail-ea0-f174.google.com with SMTP id b10so1106268eae.33
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:43:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=VdC77sRHlEzs0x0xL7GzfQvfshF8EMZo2t9FVrPfKUg=;
	b=Hg67BmDPjIJsFltpoaUs9SCOxoXlBW7AIpGTaJR0hnWtFIHIaglvsTWWVv6lNF5Kt6
	HQHypHND9otZfpnTZl/5ywgppE6C+9kxKA80cChn00tzRDop4P2Mk4LZ180r6CmuXAES
	mGIbkNzDNMXfk1XQktmsQJ/c8BxyScZ2Lx2/qdGcyapGxct1eLhhBblzwISMz2UmkzMA
	26ApYNbgbtU34Xv76qDSv6ur/ZUY4HuIcsueFV2JZp8lKfoYwLpoOjFHt3qhYfNmjcj6
	B+T7XT/Yr6W9moJ4f6MKIXhPVWdYEDmCjH2zYqYiOLznRIVQYFtDEpyRq2uc3bsKZw4E
	UHDA==
X-Gm-Message-State: ALoCoQmHs0ydPQOiBmWRph8yB/lne6patBjc10hqxdNLTnn7nUGQvHzSMYVfArAJV5L6ATFqiEdG
X-Received: by 10.14.184.66 with SMTP id r42mr8914108eem.86.1390581838546;
	Fri, 24 Jan 2014 08:43:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:57 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:41 +0000
Message-Id: <1390581822-32624-8-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com
Subject: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action per
	IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, it may be necessary (e.g. for the ARM SMMU) to set up multiple
handlers for the same interrupt.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
CC: Keir Fraser <keir@xen.org>
---
 xen/arch/arm/gic.c    |   48 ++++++++++++++++++++++++++++++++++++++++--------
 xen/arch/arm/irq.c    |    6 +++++-
 xen/include/xen/irq.h |    1 +
 3 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index ebd2b5f..8ba1de3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -534,32 +534,64 @@ void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
 {
     struct irq_desc *desc;
     unsigned long flags;
-   struct irqaction *action;
+    struct irqaction *action, **action_ptr;
 
     desc = irq_to_desc(irq->irq);
 
     spin_lock_irqsave(&desc->lock,flags);
     desc->handler->shutdown(desc);
     action = desc->action;
-    desc->action  = NULL;
-    desc->status &= ~IRQ_GUEST;
+
+    action_ptr = &desc->action;
+    for ( ;; )
+    {
+        action = *action_ptr;
+
+        if ( !action )
+        {
+            printk(XENLOG_WARNING "Trying to free already-free IRQ %u\n",
+                   irq->irq);
+            return;
+        }
+
+        if ( action->dev_id == dev_id )
+            break;
+
+        action_ptr = &action->next;
+    }
+
+    /* Found it - remove it from the action list */
+    *action_ptr = action->next;
+
+    /* If this was the last action, shut down the IRQ */
+    if ( !desc->action )
+    {
+        desc->handler->shutdown(desc);
+        desc->status &= ~IRQ_GUEST;
+    }
 
     spin_unlock_irqrestore(&desc->lock,flags);
 
     /* Wait to make sure it's not being used on another CPU */
     do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS );
 
-    if (action && action->free_on_release)
+    if ( action && action->free_on_release )
         xfree(action);
 }
 
 static int __setup_irq(struct irq_desc *desc, unsigned int irq,
                        struct irqaction *new)
 {
-    if ( desc->action != NULL )
-        return -EBUSY;
+    struct irqaction *action = desc->action;
+
+    ASSERT(new != NULL);
+
+    /* Check that dev_id is correctly filled if we have multiple actions */
+    if ( action != NULL && ( action->dev_id == NULL || new->dev_id == NULL ) )
+        return -EINVAL;
 
-    desc->action  = new;
+    new->next = desc->action;
+    desc->action = new;
     dsb();
 
     return 0;
@@ -610,7 +642,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
 
     rc = __setup_irq(desc, irq->irq, new);
 
-    if ( !rc )
+    if ( !rc && disabled )
         desc->handler->startup(desc);
 
     spin_unlock_irqrestore(&desc->lock, flags);
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..edf0404 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -179,7 +179,11 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
     {
         desc->status &= ~IRQ_PENDING;
         spin_unlock_irq(&desc->lock);
-        action->handler(irq, action->dev_id, regs);
+        do
+        {
+            action->handler(irq, action->dev_id, regs);
+            action = action->next;
+        } while ( action );
         spin_lock_irq(&desc->lock);
     }
 
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index f2e6215..54314b8 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -11,6 +11,7 @@
 
 struct irqaction {
     void (*handler)(int, void *, struct cpu_user_regs *);
+    struct irqaction *next;
     const char *name;
     void *dev_id;
     bool_t free_on_release;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:44:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jrQ-00022l-Pu; Fri, 24 Jan 2014 16:44:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6jrN-00020H-Pp
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 16:44:02 +0000
Received: from [85.158.143.35:18176] by server-1.bemta-4.messagelabs.com id
	96/10-02132-15892E25; Fri, 24 Jan 2014 16:44:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390581840!611078!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17526 invoked from network); 24 Jan 2014 16:44:00 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:44:00 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so545019eaj.11
	for <xen-devel@lists.xenproject.org>;
	Fri, 24 Jan 2014 08:44:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=IvyzpqJZz1Ss41HeXdwJ+aVKTox/gU2ewnyy2jfjCfA=;
	b=m1LcCHcRjlXHHe0LPpj4KlSMQyGnQIbCek3BXHPA1T6odzkrSbzij+6Mcf71EOzP4h
	1wjv2MkrOHQL5kOL3g4Gqt1R72jUJ0EREo0TZdI47jb7alyD9Uhzqvnd1CpZjr1bdLFm
	5oFL8eJ8+bSQ9t8AghIs8JY/F6mz5iZlRWELXfSercb+TuEh4EgOrZAeaEhLcHgWQYT3
	+SqMbo8a0hJt2Z8/NCKcjxC4MLwbXcCJPosWPqTdeibgMnbMr7lfxPa2QUsg12SOKeSN
	JJLCYsBvMI1TwjjNgGjm7RfElkWySuGe/RtDpso3eJJoejZ5/+L+DSno55Dr7sAWEnhu
	5XNw==
X-Gm-Message-State: ALoCoQle9tTQjmvhzPmlYXfaz7RHQn+7iqNdRXqw31qRBQXVtlQYUa3U3Avurml5Tn2yTlSVYw5R
X-Received: by 10.14.246.194 with SMTP id q42mr399030eer.35.1390581840000;
	Fri, 24 Jan 2014 08:44:00 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm5521285eey.0.2014.01.24.08.43.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 24 Jan 2014 08:43:59 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 24 Jan 2014 16:43:42 +0000
Message-Id: <1390581822-32624-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com
Subject: [Xen-devel] [PATCH for-4.5 8/8] xen/serial: remove serial_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function was only used for IRQ routing, which was removed in an
earlier patch.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
CC: Keir Fraser <keir@xen.org>
---
 xen/drivers/char/exynos4210-uart.c |    8 --------
 xen/drivers/char/ns16550.c         |   11 -----------
 xen/drivers/char/omap-uart.c       |    8 --------
 xen/drivers/char/pl011.c           |    8 --------
 xen/drivers/char/serial.c          |    9 ---------
 xen/include/xen/serial.h           |    5 -----
 6 files changed, 49 deletions(-)

diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 74ac33d..9179138 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -275,13 +275,6 @@ static int __init exynos4210_uart_irq(struct serial_port *port)
     return uart->irq.irq;
 }
 
-static const struct dt_irq __init *exynos4210_uart_dt_irq(struct serial_port *port)
-{
-    struct exynos4210_uart *uart = port->uart;
-
-    return &uart->irq;
-}
-
 static const struct vuart_info *exynos4210_vuart_info(struct serial_port *port)
 {
     struct exynos4210_uart *uart = port->uart;
@@ -299,7 +292,6 @@ static struct uart_driver __read_mostly exynos4210_uart_driver = {
     .putc         = exynos4210_uart_putc,
     .getc         = exynos4210_uart_getc,
     .irq          = exynos4210_uart_irq,
-    .dt_irq_get   = exynos4210_uart_dt_irq,
     .vuart_info   = exynos4210_vuart_info,
 };
 
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index e7cb0ba..ca16d48 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -446,14 +446,6 @@ static int __init ns16550_irq(struct serial_port *port)
     return ((uart->irq > 0) ? uart->irq : -1);
 }
 
-#ifdef HAS_DEVICE_TREE
-static const struct dt_irq __init *ns16550_dt_irq(struct serial_port *port)
-{
-    struct ns16550 *uart = port->uart;
-    return &uart->dt_irq;
-}
-#endif
-
 #ifdef CONFIG_ARM
 static const struct vuart_info *ns16550_vuart_info(struct serial_port *port)
 {
@@ -473,9 +465,6 @@ static struct uart_driver __read_mostly ns16550_driver = {
     .putc         = ns16550_putc,
     .getc         = ns16550_getc,
     .irq          = ns16550_irq,
-#ifdef HAS_DEVICE_TREE
-    .dt_irq_get   = ns16550_dt_irq,
-#endif
 #ifdef CONFIG_ARM
     .vuart_info   = ns16550_vuart_info,
 #endif
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index 7f21f1f..6d882a3 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -262,13 +262,6 @@ static int __init omap_uart_irq(struct serial_port *port)
     return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
 }
 
-static const struct dt_irq __init *omap_uart_dt_irq(struct serial_port *port)
-{
-    struct omap_uart *uart = port->uart;
-
-    return &uart->irq;
-}
-
 static const struct vuart_info *omap_vuart_info(struct serial_port *port)
 {
     struct omap_uart *uart = port->uart;
@@ -286,7 +279,6 @@ static struct uart_driver __read_mostly omap_uart_driver = {
     .putc = omap_uart_putc,
     .getc = omap_uart_getc,
     .irq = omap_uart_irq,
-    .dt_irq_get = omap_uart_dt_irq,
     .vuart_info = omap_vuart_info,
 };
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index 9c2870a..ac9c4f8 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -189,13 +189,6 @@ static int __init pl011_irq(struct serial_port *port)
     return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
 }
 
-static const struct dt_irq __init *pl011_dt_irq(struct serial_port *port)
-{
-    struct pl011 *uart = port->uart;
-
-    return &uart->irq;
-}
-
 static const struct vuart_info *pl011_vuart(struct serial_port *port)
 {
     struct pl011 *uart = port->uart;
@@ -213,7 +206,6 @@ static struct uart_driver __read_mostly pl011_driver = {
     .putc         = pl011_putc,
     .getc         = pl011_getc,
     .irq          = pl011_irq,
-    .dt_irq_get   = pl011_dt_irq,
     .vuart_info   = pl011_vuart,
 };
 
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 9b006f2..44026b1 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -500,15 +500,6 @@ int __init serial_irq(int idx)
     return -1;
 }
 
-const struct dt_irq __init *serial_dt_irq(int idx)
-{
-    if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
-         com[idx].driver && com[idx].driver->dt_irq_get )
-        return com[idx].driver->dt_irq_get(&com[idx]);
-
-    return NULL;
-}
-
 const struct vuart_info *serial_vuart_info(int idx)
 {
     if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index f38c9b7..9f4451b 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -81,8 +81,6 @@ struct uart_driver {
     int  (*getc)(struct serial_port *, char *);
     /* Get IRQ number for this port's serial line: returns -1 if none. */
     int  (*irq)(struct serial_port *);
-    /* Get IRQ device node for this port's serial line: returns NULL if none. */
-    const struct dt_irq *(*dt_irq_get)(struct serial_port *);
     /* Get serial information */
     const struct vuart_info *(*vuart_info)(struct serial_port *);
 };
@@ -135,9 +133,6 @@ void serial_end_log_everything(int handle);
 /* Return irq number for specified serial port (identified by index). */
 int serial_irq(int idx);
 
-/* Return irq device node for specified serial port (identified by index). */
-const struct dt_irq *serial_dt_irq(int idx);
-
 /* Retrieve basic UART information to emulate it (base address, size...) */
 const struct vuart_info* serial_vuart_info(int idx);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:48:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jvj-0003AJ-6E; Fri, 24 Jan 2014 16:48:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W6jvi-00039q-0w
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:48:30 +0000
Received: from [85.158.139.211:6392] by server-4.bemta-5.messagelabs.com id
	B8/15-26791-D5992E25; Fri, 24 Jan 2014 16:48:29 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390582106!11619968!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30601 invoked from network); 24 Jan 2014 16:48:28 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 16:48:28 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OGmLYR021050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 16:48:21 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OGmKK6020387
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 16:48:20 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OGmJRX020364; Fri, 24 Jan 2014 16:48:19 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 08:48:19 -0800
Message-ID: <52E29988.2040503@oracle.com>
Date: Fri, 24 Jan 2014 11:49:12 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
	<52E28CCC0200007800116B7D@nat28.tlf.novell.com>
In-Reply-To: <52E28CCC0200007800116B7D@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 09:54 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> Add xenpmu.h header file,
> To me, naming a public Xen header (other than the core one) xen*.h
> is redundant. There's no information lost if you just called it pmu.h.

I was trying to keep the filename and the top-level data structure names
the same (although now that I have changed the xenpmu_ prefix to
xen_pmu_, they no longer match).

>
> Also I think you ought to use plural here.

I'd prefer to keep the arch-independent and -dependent file names the same.

...

>
>> +struct xen_pmu_intel_ctxt {
>> +    uint64_t global_ctrl;
>> +    uint64_t global_ovf_ctrl;
>> +    uint64_t global_status;
>> +    uint64_t fixed_ctrl;
>> +    uint64_t ds_area;
>> +    uint64_t pebs_enable;
>> +    uint64_t debugctl;
>> +    uint64_t fixed_counters;  /* Offset to fixed counter MSRs */
>> +    uint64_t arch_counters;   /* Offset to architectural counter MSRs */
> I think these last two could easily be uint32_t.
>
>> +/* Shared between hypervisor and PV domain */
>> +struct xen_pmu_data {
>> +    uint32_t domain_id;
>> +    uint32_t vcpu_id;
>> +    uint32_t pcpu_id;
>> +    uint32_t pmu_flags;
>> +
>> +    xen_arch_pmu_t pmu;
>> +};
> So if this got included by an architecture-independent source file
> on ARM, how would this build? You at least need a stub definition
> there for xen_arch_pmu_t afaict (if you already give the impression -
> further up - that you're supporting ARM compilation of this header).

I was supposed to have an entry in arch-arm.h but dropped it somewhere 
along the way. I'll put it back.
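A stub along those lines might look like the following (purely illustrative; the `pad` member and the struct layout are assumptions, not the actual arch-arm.h entry):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical ARM stub so the shared struct builds when this header is
 * included from arch-independent code (illustrative only, not the real
 * definition). */
typedef struct xen_pmu_arch {
    uint64_t pad;               /* placeholder until ARM vPMU support lands */
} xen_arch_pmu_t;

/* Shared between hypervisor and PV domain (copied from the quoted patch). */
struct xen_pmu_data {
    uint32_t domain_id;
    uint32_t vcpu_id;
    uint32_t pcpu_id;
    uint32_t pmu_flags;

    xen_arch_pmu_t pmu;
};
```

With any definition of xen_arch_pmu_t in place, the common xen_pmu_data layout stays identical across architectures, which is the point of Jan's remark.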

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:52:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jzP-0003ZE-RL; Fri, 24 Jan 2014 16:52:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6jzN-0003Z6-Sf
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:52:18 +0000
Received: from [85.158.139.211:56958] by server-11.bemta-5.messagelabs.com id
	64/24-23268-14A92E25; Fri, 24 Jan 2014 16:52:17 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390582334!1499423!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1372 invoked from network); 24 Jan 2014 16:52:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:52:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94209645"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:52:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:52:13 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6jzI-0007yr-Rg;
	Fri, 24 Jan 2014 16:52:12 +0000
Message-ID: <52E29A35.9060205@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:52:05 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
In-Reply-To: <52E132EE.2030101@citrix.com>
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/23/2014 03:19 PM, Andrew Cooper wrote:
> On 23/01/14 15:13, George Dunlap wrote:
>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Keir Fraser <keir@xen.org>
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Tim Deegan <tim@xen.org>
>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>
>>> ---
>>>
>>> George:
>>>     This is just documentation, and it would be nice to include it as
>>>     part of the 4.4 release.
>>> ---
>>>    misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 98 insertions(+)
>>>    create mode 100644 misc/coverity_model.c
>>>
>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>> new file mode 100644
>>> index 0000000..418d25e
>>> --- /dev/null
>>> +++ b/misc/coverity_model.c
>>> @@ -0,0 +1,98 @@
>>> +/* Coverity Scan model
>>> + *
>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>> + * avoid false positives.
>>> + *
>>> + * - A model file can't import any header files.
>>> + * - Therefore only some built-in primitives like int, char and void
>>> + *   are available but not NULL etc.
>>> + * - Modelling doesn't need full structs and typedefs. Rudimentary
>>> + *   structs and similar types are sufficient.
>>> + * - An uninitialized local pointer is not an error. It signifies
>>> + *   that the variable could be either NULL or have some data.
>>> + *
>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>> + * model file must be uploaded by an admin in the analysis.
>> So this file isn't compiled; it's manually uploaded as part of the
>> coverity scanning process; and could be provided out-of-band, but it's
>> just convenient to put it in the tree, particularly if any of these
>> things should change as things go forward.  (Hence comparing it to
>> documentation.)  Is that right?
>>
>>   -George
>>
> Correct.  I believe internally Coverity compiles it (at least to an
> AST), but that is completely opaque to users of Scan.

Right; I have a hard time coming up with a compelling reason to wait for 
this one.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

The name of the file might be a bit confusing though, if people think it
is supposed to be compiled... would it make sense maybe to call it
".txt", and include some instructions at the top with a line that says
"---- cut here 8< ---" or something?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:52:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6jzP-0003ZE-RL; Fri, 24 Jan 2014 16:52:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6jzN-0003Z6-Sf
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:52:18 +0000
Received: from [85.158.139.211:56958] by server-11.bemta-5.messagelabs.com id
	64/24-23268-14A92E25; Fri, 24 Jan 2014 16:52:17 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390582334!1499423!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1372 invoked from network); 24 Jan 2014 16:52:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:52:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94209645"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:52:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:52:13 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6jzI-0007yr-Rg;
	Fri, 24 Jan 2014 16:52:12 +0000
Message-ID: <52E29A35.9060205@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:52:05 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
In-Reply-To: <52E132EE.2030101@citrix.com>
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/23/2014 03:19 PM, Andrew Cooper wrote:
> On 23/01/14 15:13, George Dunlap wrote:
>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Keir Fraser <keir@xen.org>
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Tim Deegan <tim@xen.org>
>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>
>>> ---
>>>
>>> George:
>>>     This is just documentation, and it would be nice to include it as
>>> part of
>>>     the 4.4 release.
>>> ---
>>>    misc/coverity_model.c |   98
>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 98 insertions(+)
>>>    create mode 100644 misc/coverity_model.c
>>>
>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>> new file mode 100644
>>> index 0000000..418d25e
>>> --- /dev/null
>>> +++ b/misc/coverity_model.c
>>> @@ -0,0 +1,98 @@
>>> +/* Coverity Scan model
>>> + *
>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>> avoid false
>>> + * positives.
>>> + *
>>> + * - A model file can't import any header files.
>>> + * - Therefore only some built-in primitives like int, char and void
>>> are
>>> + *   available but not NULL etc.
>>> + * - Modeling doesn't need full structs and typedefs. Rudimentary
>>> structs
>>> + *   and similar types are sufficient.
>>> + * - An uninitialized local pointer is not an error. It signifies
>>> that the
>>> + *   variable could be either NULL or have some data.
>>> + *
>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>> model file
>>> + * must be uploaded by an admin in the analysis.
>> So this file isn't compiled; it's manually uploaded as part of the
>> coverity scanning process; and could be provided out-of-band, but it's
>> just convenient to put it in the tree, particularly if any of these
>> things should change as things go forward.  (Hence comparing it to
>> documentation.)  Is that right?
>>
>>   -George
>>
> Correct.  I believe internally Coverity compiles it (at least to an
> AST), but that is completely opaque to users of Scan.

Right; I have a hard time coming up with a compelling reason to wait for 
this one.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

The name of the file might be a bit confusing though, if people think it 
is supposed to be compiled... would it make sense maybe to call it 
".txt", and include some instructions at the top with a line that says 
"---- cut here 8< ---" or something?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k1Y-0003fx-Dg; Fri, 24 Jan 2014 16:54:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avanzini.arianna@gmail.com>) id 1W6jRv-0000Tw-1L
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:17:43 +0000
Received: from [193.109.254.147:33877] by server-6.bemta-14.messagelabs.com id
	E6/0F-14958-62292E25; Fri, 24 Jan 2014 16:17:42 +0000
X-Env-Sender: avanzini.arianna@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390580261!13036210!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 847 invoked from network); 24 Jan 2014 16:17:41 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:17:41 -0000
Received: by mail-ea0-f169.google.com with SMTP id l9so946450eaj.14
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 08:17:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:reply-to:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=xRt2qdwW4A9Pjiov7ArAux/jehnXD4QdCEz5CpxEHnk=;
	b=AFaydMqLwjTE7de/hFyflaLSQW+QgcwANUVto+wRsKpLwBbfqCaRrCJmvBAOQgBVCP
	afkCJ9Hy3yq8tV8jUTZ5PPzoWH05asYOtpjmvazeO4BtR0vMw04idzMEmf3bwNn2UxP+
	W9MoAq8u/D9QGFXNxdBqaBqTBi85JtOLGiEQSySgaJeIzOMo1qA09Slw0Si9uKiKkzYp
	x+0dNBh2AsZjb/TIAARuk8fV+zQjJdwigCresVrG32Tx9azc0OZirHWJM8oG45dGIyt1
	bg+mui64VYTyZ7DdjtKX9OlmGhQ2NR3IvT6ynestb631qIs/CNKWSkzVSnVtzTVz0gXw
	QgEA==
X-Received: by 10.14.148.138 with SMTP id v10mr13548979eej.37.1390580261317;
	Fri, 24 Jan 2014 08:17:41 -0800 (PST)
Received: from [155.185.131.242] (aruba-131-242.aruba.unimo.it.
	[155.185.131.242])
	by mx.google.com with ESMTPSA id z46sm5282643een.1.2014.01.24.08.17.39
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 08:17:40 -0800 (PST)
Message-ID: <52E29221.2080804@gmail.com>
Date: Fri, 24 Jan 2014 17:17:37 +0100
From: Arianna Avanzini <avanzini.arianna@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52E239AB.8040906@gmail.com>
	<1390558450.2124.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1390558450.2124.24.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Mailman-Approved-At: Fri, 24 Jan 2014 16:54:30 +0000
Cc: Paolo Gai <pj@evidence.eu.com>, Paolo Valente <paolo.valente@unimore.it>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Claudio Scordino <claudio@evidence.eu.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Roger Pau Monne <roger.paumonne@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: "Failed to allocate contiguous memory
 for dom0" with dom0_mem greater than 256MB
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Arianna Avanzini <avanzini.arianna@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 11:14 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 11:00 +0100, Arianna Avanzini wrote:
>> I noticed that Xen 4.4 fails to boot the dom0 if more than 256MB of RAM are
>> assigned to it by providing the dom0_mem boot option. The error message produced
>> during the boot process is the following.
> 
> This is an unfortunate side effect of the use of the 1:1 mapping. The
> threads:
>         "Master not working on Allwinner A20"
>         "create multiple banks for dom0 in 1:1 mapping if necessary"
> Have some more details, but in short the allocation of dom0 memory needs
> to be done in a single chunk and because of the way the Xen allocator
> works it will also be aligned to its own size -- this creates some
> limitations on the size of the region vs what memory is free at start of
> day.
> 
> I'm afraid this probably won't be solved for Xen 4.4, since the change
> is likely to be rather intrusive.
> 

I see, thank you for the explanation.


> As a workaround you could try changing the load addresses of Xen and the
> kernel, dtb etc used by u-boot to pack them towards the top of RAM. This
> *should* result in the entire lower half of RAM being available which
> will make it more likely to achieve the necessary alignment constraints
> for a dom0 taking up to half of the system RAM. I've not actually tried
> this but I'd recommend trying the following addresses from the top of
> RAM:
> 	-2M: Leave free for Xen to relocate to
> 	-4M: dom0 kernel
> 	-6M: DTB
> 	-8M: Initial Xen load address
> 
> If you have an initrd then put it between dom0 kernel and dtb and bump
> everything else down, likewise if anything is bigger than 2M then round
> up to 2M and push everything down to accommodate it.
> 
> Also consider that for a Xen system it is common to only give dom0 a
> fairly small fraction of the system RAM in order to leave as much as
> possible for guest domains. What target amount of dom0 memory are you
> aiming for? 256M or even 128M is probably plenty if you are going to run
> a few domains.
> 

So far, 256MB of RAM are actually enough for a dom0 in my use-case. Thank you
again for the info and hints, and for taking the time to test both the failing
scenario and the workaround.



> Ian.
> 


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@gmail.com
 * 73628@studenti.unimore.it
 */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:57:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k4K-0003pi-34; Fri, 24 Jan 2014 16:57:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6k4J-0003pa-80
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:57:23 +0000
Received: from [85.158.139.211:64163] by server-15.bemta-5.messagelabs.com id
	06/ED-08490-27B92E25; Fri, 24 Jan 2014 16:57:22 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390582639!1500459!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5475 invoked from network); 24 Jan 2014 16:57:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:57:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96243012"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 16:57:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:57:18 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6k4D-00084E-Hv;
	Fri, 24 Jan 2014 16:57:17 +0000
Message-ID: <52E29B6D.6070208@citrix.com>
Date: Fri, 24 Jan 2014 16:57:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com>
In-Reply-To: <52E29A35.9060205@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 16:52, George Dunlap wrote:
> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>> On 23/01/14 15:13, George Dunlap wrote:
>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> CC: Keir Fraser <keir@xen.org>
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Tim Deegan <tim@xen.org>
>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>
>>>> ---
>>>>
>>>> George:
>>>>     This is just documentation, and it would be nice to include it as
>>>> part of
>>>>     the 4.4 release.
>>>> ---
>>>>    misc/coverity_model.c |   98
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 98 insertions(+)
>>>>    create mode 100644 misc/coverity_model.c
>>>>
>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>> new file mode 100644
>>>> index 0000000..418d25e
>>>> --- /dev/null
>>>> +++ b/misc/coverity_model.c
>>>> @@ -0,0 +1,98 @@
>>>> +/* Coverity Scan model
>>>> + *
>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>> avoid false
>>>> + * positives.
>>>> + *
>>>> + * - A model file can't import any header files.
>>>> + * - Therefore only some built-in primitives like int, char and void
>>>> are
>>>> + *   available but not NULL etc.
>>>> + * - Modeling doesn't need full structs and typedefs. Rudimentary
>>>> structs
>>>> + *   and similar types are sufficient.
>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>> that the
>>>> + *   variable could be either NULL or have some data.
>>>> + *
>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>> model file
>>>> + * must be uploaded by an admin in the analysis.
>>> So this file isn't compiled; it's manually uploaded as part of the
>>> coverity scanning process; and could be provided out-of-band, but it's
>>> just convenient to put it in the tree, particularly if any of these
>>> things should change as things go forward.  (Hence comparing it to
>>> documentation.)  Is that right?
>>>
>>>   -George
>>>
>> Correct.  I believe internally Coverity compiles it (at least to an
>> AST), but that is completely opaque to users of Scan.
>
> Right; I have a hard time coming up with a compelling reason to wait
> for this one.
>
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>
> The name of the file might be a bit confusing though, if people think
> it is supposed to be compiled... would it make sense maybe to call it
> ".txt", and include some instructions at the top with a line that says
> "---- cut here 8< ---" or something?
>
>  -George

Not really - Coverity uses the file extension to work out how to
interpret the modelling file.  ".c" is correct here, and will cause
smart text editors to apply proper syntax highlighting.

Alternates are .cpp and .java, depending on the primary language of the
project.

~Andrew
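
For readers unfamiliar with Scan modelling: a model is an ordinary-looking C stub that replaces the analyser's view of a function. A generic illustration of the style (not taken from the Xen patch; `__coverity_panic__` is one of the analyser-provided primitives, and this `panic()` signature is an assumption):

```c
/* Hypothetical model stub: tells the analyser that panic() never
 * returns, so code paths following a panic() call stop producing
 * false positives.  Note the lack of #includes -- model files cannot
 * import headers, so only built-in primitive types are available. */
void panic(const char *fmt, ...)
{
    __coverity_panic__();
}
```

The stub is only an analysis input: Scan interprets it when an admin uploads it, and it is never built as part of the tree.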

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:57:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k4K-0003pi-34; Fri, 24 Jan 2014 16:57:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6k4J-0003pa-80
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:57:23 +0000
Received: from [85.158.139.211:64163] by server-15.bemta-5.messagelabs.com id
	06/ED-08490-27B92E25; Fri, 24 Jan 2014 16:57:22 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390582639!1500459!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5475 invoked from network); 24 Jan 2014 16:57:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:57:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96243012"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 16:57:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:57:18 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6k4D-00084E-Hv;
	Fri, 24 Jan 2014 16:57:17 +0000
Message-ID: <52E29B6D.6070208@citrix.com>
Date: Fri, 24 Jan 2014 16:57:17 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com>
In-Reply-To: <52E29A35.9060205@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 16:52, George Dunlap wrote:
> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>> On 23/01/14 15:13, George Dunlap wrote:
>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> CC: Keir Fraser <keir@xen.org>
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Tim Deegan <tim@xen.org>
>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>
>>>> ---
>>>>
>>>> George:
>>>>     This is just documentation, and it would be nice to include it as
>>>> part of
>>>>     the 4.4 release.
>>>> ---
>>>>    misc/coverity_model.c |   98
>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 98 insertions(+)
>>>>    create mode 100644 misc/coverity_model.c
>>>>
>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>> new file mode 100644
>>>> index 0000000..418d25e
>>>> --- /dev/null
>>>> +++ b/misc/coverity_model.c
>>>> @@ -0,0 +1,98 @@
>>>> +/* Coverity Scan model
>>>> + *
>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>> avoid false
>>>> + * positives.
>>>> + *
>>>> + * - A model file can't import any header files.
>>>> + * - Therefore only some built-in primitives like int, char and void
>>>> are
>>>> + *   available but not NULL etc.
>>>> + * - Mode-ling doesn't need full structs and typedefs. Rudimentary
>>>> structs
>>>> + *   and similar types are sufficient.
>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>> that the
>>>> + *   variable could be either NULL or have some data.
>>>> + *
>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>> model file
>>>> + * must be uploaded by an admin in the analysis.
>>> So this file isn't compiled; it's manually uploaded as part of the
>>> coverity scanning process; and could be provided out-of-band, but it's
>>> just convenient to put it in the tree, particularly if any of these
>>> things should change as things go forward.  (Hence comparing it to
>>> documentation.)  Is that right?
>>>
>>>   -George
>>>
>> Correct.  I believe internally Coverity compiles it (at least to an
>> AST), but that is completely opaque to users of Scan.
>
> Right; I have a hard time coming up with a compelling reason to wait
> for this one.
>
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>
> The name of the file might be a bit confusing though, if people think
> it is supposed to be compiled... would it make sense maybe to call it
> ".txt", and include some instructions at the top with a line that says
> "---- cut here 8< ---" or something?
>
>  -George

Not really - Coverity uses the file extension to work out how to
interpret the modelling file.  ".c" is correct here, and will cause
smart text editors to apply proper syntax highlighting.

Alternates are .cpp and .java, depending on the primary language of the
project.
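For illustration, a model stanza of the kind such a file contains might look like the following. This is a hedged sketch, not Xen's actual model: `__coverity_panic__` is one of Coverity's modelling primitives (normally supplied by the model compiler), declared here only so the fragment stands alone; the `__assert_fail` signature matches glibc's assert handler.

```c
/* Sketch of a Coverity model stanza (illustrative, not Xen's model).
 * Model files may not include headers, so only built-in types are
 * available, and they call Coverity's modelling primitives. */

/* Provided by Coverity's model compiler; declared here only so this
 * sketch is self-contained. */
void __coverity_panic__(void);

/* Teach the analyser that a failed assertion never returns, so code
 * after a passing assert() is not flagged with impossible states. */
void __assert_fail(const char *assertion, const char *file,
                   unsigned int line, const char *function)
{
    __coverity_panic__();
}
```

Such stanzas are interpreted by the Scan service rather than compiled into the project, which is why the file can live anywhere in the tree.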

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:57:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k4a-0003rA-Gv; Fri, 24 Jan 2014 16:57:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W6k4Z-0003qy-1x
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:57:39 +0000
Received: from [85.158.143.35:47099] by server-2.bemta-4.messagelabs.com id
	FA/35-11386-28B92E25; Fri, 24 Jan 2014 16:57:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390582657!618461!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22976 invoked from network); 24 Jan 2014 16:57:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 16:57:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 24 Jan 2014 16:57:37 +0000
Message-Id: <52E2A98F0200007800116D42@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 24 Jan 2014 16:57:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-8-git-send-email-boris.ostrovsky@oracle.com>
	<52E28CCC0200007800116B7D@nat28.tlf.novell.com>
	<52E29988.2040503@oracle.com>
In-Reply-To: <52E29988.2040503@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 17:49, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/24/2014 09:54 AM, Jan Beulich wrote:
>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> Add xenpmu.h header file,
>> To me, naming a public Xen header (other than the core one) xen*.h
>> is redundant. There's no information lost if you just called it pmu.h.
> 
> I was trying to keep filename and top-level data structures the same 
> (although now that I changed xenpmu_ prefix to xen_pmu_ they no longer 
> are).
> 
>>
>> Also I think you ought to use plural here.
> 
> I'd prefer to keep the arch-independent and -dependent file names the same.

Right, that's appreciated. Nevertheless it's two of them, i.e.
"Add pmu.h header files, ..."

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:57:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:57:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k4m-0003u2-Tx; Fri, 24 Jan 2014 16:57:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6k4l-0003te-8z
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 16:57:51 +0000
Received: from [85.158.139.211:15163] by server-14.bemta-5.messagelabs.com id
	C9/F2-24200-E8B92E25; Fri, 24 Jan 2014 16:57:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390582668!11807417!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8511 invoked from network); 24 Jan 2014 16:57:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:57:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94211427"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 16:57:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 11:57:47 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6k4g-0007Ms-QI;
	Fri, 24 Jan 2014 16:57:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W6k4f-00067A-Io;
	Fri, 24 Jan 2014 16:57:45 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21218.39816.608447.189661@mariner.uk.xensource.com>
Date: Fri, 24 Jan 2014 16:57:44 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	<xen-devel@lists.xensource.com>
In-Reply-To: <21218.38519.34609.370576@mariner.uk.xensource.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<1390567954.2124.85.camel@kazak.uk.xensource.com>
	<21218.33645.354750.805594@mariner.uk.xensource.com>
	<21218.38519.34609.370576@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> libxl could (and perhaps should) grow such a thing in its own event
> loop, but thinking about how to write it, it's going to be quite
> tedious![1]
...
> [1] Add a ctx-wide list of pollers, one for every libxl thread in
> poll.  This list has to be covered by its own lock.
> 
> When fd deregistration occurs, we take this lock, wake up all the
> pollers, release the lock, and then wait (perhaps with a condition
> variable) for the pollers to acknowledge that they have left poll().

Something like this (warning, pseudocode).

Ian.

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 93f8fdc..5e3fe87 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -232,6 +232,17 @@ void libxl__ev_fd_deregister(libxl__gc *gc, libxl__ev_fd *ev)
     LIBXL_LIST_REMOVE(ev, entry);
     ev->fd = -1;
 
+    with cohort lock{
+        push new cohort, mark it in use;
+
+        loop waiting for acks{
+            signal all pollers in our cohort and all older;
+            if there are none, hooray (exit the loop);
+            condvar wait;
+        }
+        cleanup;
+    }
+
  out:
     CTX_UNLOCK;
 }
@@ -1436,10 +1447,22 @@ static int eventloop_iteration(libxl__egc *egc, libxl__poller *poller) {
         poller->fd_polls_allocd = nfds;
     }
 
+    with cohort lock{
+        add poller to list in running cohort;
+    }
+
     CTX_UNLOCK;
     rc = poll(poller->fd_polls, nfds, timeout);
     CTX_LOCK;
 
+    with cohort lock{
+        remove poller from whatever cohort's list it was in;
+        clean up old running cohorts:
+            while (oldest cohort is empty and unused)
+                delete it;
+        signal cv;
+    }
+
     if (rc < 0) {
         if (errno == EINTR)
             return 0; /* will go round again if caller requires */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 16:59:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 16:59:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k6N-0004FH-Et; Fri, 24 Jan 2014 16:59:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6k6M-0004EW-Fy
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 16:59:30 +0000
Received: from [85.158.137.68:64358] by server-16.bemta-3.messagelabs.com id
	E3/86-26128-1FB92E25; Fri, 24 Jan 2014 16:59:29 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390582767!11203900!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26854 invoked from network); 24 Jan 2014 16:59:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 16:59:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96243918"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 16:59:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 11:59:26 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6k6H-00086i-PF;
	Fri, 24 Jan 2014 16:59:25 +0000
Message-ID: <52E29BE6.6050601@eu.citrix.com>
Date: Fri, 24 Jan 2014 16:59:18 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
In-Reply-To: <52E29B6D.6070208@citrix.com>
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 04:57 PM, Andrew Cooper wrote:
> On 24/01/14 16:52, George Dunlap wrote:
>> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>>> On 23/01/14 15:13, George Dunlap wrote:
>>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> CC: Keir Fraser <keir@xen.org>
>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>> CC: Tim Deegan <tim@xen.org>
>>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>
>>>>> ---
>>>>>
>>>>> George:
>>>>>      This is just documentation, and it would be nice to include it as
>>>>> part of
>>>>>      the 4.4 release.
>>>>> ---
>>>>>     misc/coverity_model.c |   98
>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>     1 file changed, 98 insertions(+)
>>>>>     create mode 100644 misc/coverity_model.c
>>>>>
>>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>>> new file mode 100644
>>>>> index 0000000..418d25e
>>>>> --- /dev/null
>>>>> +++ b/misc/coverity_model.c
>>>>> @@ -0,0 +1,98 @@
>>>>> +/* Coverity Scan model
>>>>> + *
>>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>>> avoid false
>>>>> + * positives.
>>>>> + *
>>>>> + * - A model file can't import any header files.
>>>>> + * - Therefore only some built-in primitives like int, char and void
>>>>> are
>>>>> + *   available but not NULL etc.
>>>>> + * - Modelling doesn't need full structs and typedefs. Rudimentary
>>>>> structs
>>>>> + *   and similar types are sufficient.
>>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>>> that the
>>>>> + *   variable could be either NULL or have some data.
>>>>> + *
>>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>>> model file
>>>>> + * must be uploaded by an admin in the analysis.
>>>> So this file isn't compiled; it's manually uploaded as part of the
>>>> coverity scanning process; and could be provided out-of-band, but it's
>>>> just convenient to put it in the tree, particularly if any of these
>>>> things should change as things go forward.  (Hence comparing it to
>>>> documentation.)  Is that right?
>>>>
>>>>    -George
>>>>
>>> Correct.  I believe internally Coverity compiles it (at least to an
>>> AST), but that is completely opaque to users of Scan.
>> Right; I have a hard time coming up with a compelling reason to wait
>> for this one.
>>
>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>>
>> The name of the file might be a bit confusing though, if people think
>> it is supposed to be compiled... would it make sense maybe to call it
>> ".txt", and include some instructions at the top with a line that says
>> "---- cut here 8< ---" or something?
>>
>>   -George
> Not really - Coverity uses the file extension to work out how to
> interpret the modelling file.  ".c" is correct here, and will cause
> smart text editors to apply proper syntax highlighting.
>
> Alternates are .cpp and .java, depending on the primary language of the
> project.

Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't need 
to be a .c file in the xen tree -- the instructions could say, "Place 
the text below into a file named coverity_model.c".

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:01:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k7k-0004Yh-66; Fri, 24 Jan 2014 17:00:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W6k7i-0004YU-Ec
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:00:54 +0000
Received: from [85.158.143.35:13573] by server-3.bemta-4.messagelabs.com id
	F6/04-32360-54C92E25; Fri, 24 Jan 2014 17:00:53 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390582844!622806!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13420 invoked from network); 24 Jan 2014 17:00:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:00:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94212501"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:00:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:00:31 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W6k7K-00087b-R9;
	Fri, 24 Jan 2014 17:00:30 +0000
Date: Fri, 24 Jan 2014 17:00:30 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140124170030.GK24675@zion.uk.xensource.com>
References: <1390515366-32236-1-git-send-email-wei.liu2@citrix.com>
	<1390515366-32236-6-git-send-email-wei.liu2@citrix.com>
	<52E2186B.6060200@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E2186B.6060200@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: peter.maydell@linaro.org, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com, qemu-devel@nongnu.org,
	xen-devel@lists.xen.org, anthony.perard@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 5/5] xen: introduce xenpv-softmmu.mak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 08:38:19AM +0100, Paolo Bonzini wrote:
> Il 23/01/2014 23:16, Wei Liu ha scritto:
> >-        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> >+    if test "$target_name" != "xenpv"; then
> >+        echo "CONFIG_XEN_I386=y" >> $config_target_mak
> >+        if test "$xen_pci_passthrough" = yes; then
> >+            echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
> >+        fi
> 
> You can add
> 
> CONFIG_XEN_PCI_PASSTHROUGH=$(CONFIG_XEN)
> 
> to i386-softmmu.mak and x86_64-softmmu.mak, and drop this setting
> from config-target.mak too.
> 

I'm afraid not. CONFIG_XEN is a prerequisite for CONFIG_XEN_PCI_PASSTHROUGH
but doesn't imply the feature is enabled by the user.

Wei.

> Paolo
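
[Archive editor's note: a minimal sketch of the distinction Wei is drawing,
in the style of the configure fragment quoted above. The variable names
`xen`, `xen_pci_passthrough` and `config_target_mak` come from the quoted
patch; the demo output filename is an assumption for illustration. With Xen
support on but passthrough not requested, only CONFIG_XEN is emitted, so a
.mak rule deriving CONFIG_XEN_PCI_PASSTHROUGH from CONFIG_XEN would wrongly
enable it.]

```shell
# Sketch: configure writes CONFIG_XEN_PCI_PASSTHROUGH only when the user
# explicitly enabled it, independent of CONFIG_XEN itself.
config_target_mak=config-target.mak.demo   # assumed demo filename
: > "$config_target_mak"

xen=yes                   # Xen support is compiled in
xen_pci_passthrough=no    # user did NOT pass --enable-xen-pci-passthrough

if test "$xen" = yes; then
    echo "CONFIG_XEN=y" >> "$config_target_mak"
    if test "$xen_pci_passthrough" = yes; then
        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
    fi
fi

cat "$config_target_mak"
```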

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:02:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:02:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k8r-0004jl-S2; Fri, 24 Jan 2014 17:02:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6k8r-0004jZ-00
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:02:05 +0000
Received: from [193.109.254.147:44027] by server-6.bemta-14.messagelabs.com id
	94/75-14958-C8C92E25; Fri, 24 Jan 2014 17:02:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390582922!13058096!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5575 invoked from network); 24 Jan 2014 17:02:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:02:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96245554"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 17:02:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:02:01 -0500
Message-ID: <1390582919.13513.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 24 Jan 2014 17:01:59 +0000
In-Reply-To: <52E29BE6.6050601@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:59 +0000, George Dunlap wrote:
> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
> > On 24/01/14 16:52, George Dunlap wrote:
> >> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
> >>> On 23/01/14 15:13, George Dunlap wrote:
> >>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
> >>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>>> CC: Keir Fraser <keir@xen.org>
> >>>>> CC: Jan Beulich <JBeulich@suse.com>
> >>>>> CC: Tim Deegan <tim@xen.org>
> >>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
> >>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> >>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
> >>>>>
> >>>>> ---
> >>>>>
> >>>>> George:
> >>>>>      This is just documentation, and it would be nice to include it as
> >>>>> part of
> >>>>>      the 4.4 release.
> >>>>> ---
> >>>>>     misc/coverity_model.c |   98
> >>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>     1 file changed, 98 insertions(+)
> >>>>>     create mode 100644 misc/coverity_model.c
> >>>>>
> >>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
> >>>>> new file mode 100644
> >>>>> index 0000000..418d25e
> >>>>> --- /dev/null
> >>>>> +++ b/misc/coverity_model.c
> >>>>> @@ -0,0 +1,98 @@
> >>>>> +/* Coverity Scan model
> >>>>> + *
> >>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
> >>>>> avoid false
> >>>>> + * positives.
> >>>>> + *
> >>>>> + * - A model file can't import any header files.
> >>>>> + * - Therefore only some built-in primitives like int, char and void
> >>>>> are
> >>>>> + *   available but not NULL etc.
> >>>>> + * - Mode-ling doesn't need full structs and typedefs. Rudimentary
> >>>>> structs
> >>>>> + *   and similar types are sufficient.
> >>>>> + * - An uninitialized local pointer is not an error. It signifies
> >>>>> that the
> >>>>> + *   variable could be either NULL or have some data.
> >>>>> + *
> >>>>> + * Coverity Scan doesn't pick up modifications automatically. The
> >>>>> model file
> >>>>> + * must be uploaded by an admin in the analysis.
> >>>> So this file isn't compiled; it's manually uploaded as part of the
> >>>> coverity scanning process; and could be provided out-of-band, but it's
> >>>> just convenient to put it in the tree, particularly if any of these
> >>>> things should change as things go forward.  (Hence comparing it to
> >>>> documentation.)  Is that right?
> >>>>
> >>>>    -George
> >>>>
> >>> Correct.  I believe internally Coverity compiles it (at least to an
> >>> AST), but that is completely opaque to users of Scan.
> >> Right; I have a hard time coming up with a compelling reason to wait
> >> for this one.
> >>
> >> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> >>
> >> The name of the file might be a bit confusing though, if people think
> >> it is supposed to be compliled... would it make sense maybe to call it
> >> ".txt", and include some instructions at the top with a line that says
> >> "---- cut here 8< ---" or something?
> >>
> >>   -George
> > Not really - Coverity uses the file extension to work out how to
> > interpret the modelling file.  ".c" is correct here, and will cause
> > smart text editors to apply proper syntax highlighting.
> >
> > Alternates are .cpp and .java, depending on the primary language of the
> > project.
> 
> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't need 
> to be a .c file in the xen tree -- the instructions could say, "Place 
> the text below into a file named coverity_model.c".

This file is uploaded via a web interface. It'd be far more convenient
to browse with the browser's file dialog to the exact file in the xen
source tree.

Perhaps a README next to the file? Perhaps
misc/coverity/{README,model.c} ?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:02:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k9V-0004po-Cq; Fri, 24 Jan 2014 17:02:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6k9U-0004pW-0K
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:02:44 +0000
Received: from [85.158.143.35:30406] by server-1.bemta-4.messagelabs.com id
	AF/48-02132-3BC92E25; Fri, 24 Jan 2014 17:02:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390582960!626006!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11340 invoked from network); 24 Jan 2014 17:02:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:02:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96245954"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 17:02:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:02:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6k9P-00089b-7E;
	Fri, 24 Jan 2014 17:02:39 +0000
Message-ID: <52E29CAE.8050701@citrix.com>
Date: Fri, 24 Jan 2014 17:02:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
In-Reply-To: <52E29BE6.6050601@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 16:59, George Dunlap wrote:
> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
>> On 24/01/14 16:52, George Dunlap wrote:
>>> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>>>> On 23/01/14 15:13, George Dunlap wrote:
>>>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> CC: Keir Fraser <keir@xen.org>
>>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>>> CC: Tim Deegan <tim@xen.org>
>>>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> George:
>>>>>>      This is just documentation, and it would be nice to include
>>>>>> it as
>>>>>> part of
>>>>>>      the 4.4 release.
>>>>>> ---
>>>>>>     misc/coverity_model.c |   98
>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>     1 file changed, 98 insertions(+)
>>>>>>     create mode 100644 misc/coverity_model.c
>>>>>>
>>>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>>>> new file mode 100644
>>>>>> index 0000000..418d25e
>>>>>> --- /dev/null
>>>>>> +++ b/misc/coverity_model.c
>>>>>> @@ -0,0 +1,98 @@
>>>>>> +/* Coverity Scan model
>>>>>> + *
>>>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>>>> avoid false
>>>>>> + * positives.
>>>>>> + *
>>>>>> + * - A model file can't import any header files.
>>>>>> + * - Therefore only some built-in primitives like int, char and
>>>>>> void
>>>>>> are
>>>>>> + *   available but not NULL etc.
>>>>>> + * - Mode-ling doesn't need full structs and typedefs. Rudimentary
>>>>>> structs
>>>>>> + *   and similar types are sufficient.
>>>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>>>> that the
>>>>>> + *   variable could be either NULL or have some data.
>>>>>> + *
>>>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>>>> model file
>>>>>> + * must be uploaded by an admin in the analysis.
>>>>> So this file isn't compiled; it's manually uploaded as part of the
>>>>> coverity scanning process; and could be provided out-of-band, but
>>>>> it's
>>>>> just convenient to put it in the tree, particularly if any of these
>>>>> things should change as things go forward.  (Hence comparing it to
>>>>> documentation.)  Is that right?
>>>>>
>>>>>    -George
>>>>>
>>>> Correct.  I believe internally Coverity compiles it (at least to an
>>>> AST), but that is completely opaque to users of Scan.
>>> Right; I have a hard time coming up with a compelling reason to wait
>>> for this one.
>>>
>>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>>>
>>> The name of the file might be a bit confusing though, if people think
>>> it is supposed to be compliled... would it make sense maybe to call it
>>> ".txt", and include some instructions at the top with a line that says
>>> "---- cut here 8< ---" or something?
>>>
>>>   -George
>> Not really - Coverity uses the file extension to work out how to
>> interpret the modelling file.  ".c" is correct here, and will cause
>> smart text editors to apply proper syntax highlighting.
>>
>> Alternates are .cpp and .java, depending on the primary language of the
>> project.
>
> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't
> need to be a .c file in the xen tree -- the instructions could say,
> "Place the text below into a file named coverity_model.c".
>
>  -George

This file was deliberately placed in a brand new directory, away from
any Makefiles which might try to compile it, for exactly this reason.

Requiring users to post-process this file just to help prevent someone
from accidentally trying to compile it seems crazy IMO.  The worst that
happens is that someone tries to compile it and it fails to compile.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:02:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6k9V-0004po-Cq; Fri, 24 Jan 2014 17:02:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6k9U-0004pW-0K
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:02:44 +0000
Received: from [85.158.143.35:30406] by server-1.bemta-4.messagelabs.com id
	AF/48-02132-3BC92E25; Fri, 24 Jan 2014 17:02:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390582960!626006!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11340 invoked from network); 24 Jan 2014 17:02:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:02:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96245954"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 17:02:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:02:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6k9P-00089b-7E;
	Fri, 24 Jan 2014 17:02:39 +0000
Message-ID: <52E29CAE.8050701@citrix.com>
Date: Fri, 24 Jan 2014 17:02:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
In-Reply-To: <52E29BE6.6050601@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 16:59, George Dunlap wrote:
> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
>> On 24/01/14 16:52, George Dunlap wrote:
>>> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>>>> On 23/01/14 15:13, George Dunlap wrote:
>>>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> CC: Keir Fraser <keir@xen.org>
>>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>>> CC: Tim Deegan <tim@xen.org>
>>>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> George:
>>>>>>      This is just documentation, and it would be nice to include
>>>>>> it as
>>>>>> part of
>>>>>>      the 4.4 release.
>>>>>> ---
>>>>>>     misc/coverity_model.c |   98
>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>     1 file changed, 98 insertions(+)
>>>>>>     create mode 100644 misc/coverity_model.c
>>>>>>
>>>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>>>> new file mode 100644
>>>>>> index 0000000..418d25e
>>>>>> --- /dev/null
>>>>>> +++ b/misc/coverity_model.c
>>>>>> @@ -0,0 +1,98 @@
>>>>>> +/* Coverity Scan model
>>>>>> + *
>>>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>>>> avoid false
>>>>>> + * positives.
>>>>>> + *
>>>>>> + * - A model file can't import any header files.
>>>>>> + * - Therefore only some built-in primitives like int, char and
>>>>>> void
>>>>>> are
>>>>>> + *   available but not NULL etc.
>>>>>> + * - Modelling doesn't need full structs and typedefs. Rudimentary
>>>>>> structs
>>>>>> + *   and similar types are sufficient.
>>>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>>>> that the
>>>>>> + *   variable could be either NULL or have some data.
>>>>>> + *
>>>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>>>> model file
>>>>>> + * must be uploaded by an admin in the analysis.
>>>>> So this file isn't compiled; it's manually uploaded as part of the
>>>>> coverity scanning process; and could be provided out-of-band, but
>>>>> it's
>>>>> just convenient to put it in the tree, particularly if any of these
>>>>> things should change as things go forward.  (Hence comparing it to
>>>>> documentation.)  Is that right?
>>>>>
>>>>>    -George
>>>>>
>>>> Correct.  I believe internally Coverity compiles it (at least to an
>>>> AST), but that is completely opaque to users of Scan.
>>> Right; I have a hard time coming up with a compelling reason to wait
>>> for this one.
>>>
>>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>>>
>>> The name of the file might be a bit confusing though, if people think
>>> it is supposed to be compiled... would it make sense maybe to call it
>>> ".txt", and include some instructions at the top with a line that says
>>> "---- cut here 8< ---" or something?
>>>
>>>   -George
>> Not really - Coverity uses the file extension to work out how to
>> interpret the modelling file.  ".c" is correct here, and will cause
>> smart text editors to apply proper syntax highlighting.
>>
>> Alternates are .cpp and .java, depending on the primary language of the
>> project.
>
> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't
> need to be a .c file in the xen tree -- the instructions could say,
> "Place the text below into a file named coverity_model.c".
>
>  -George

This file was deliberately placed in a brand new directory, away from
any Makefiles which might try to compile it for this reason.

Requiring users to post-process this file just to help prevent someone
from accidentally trying to compile it seems crazy IMO.  The worst that
happens is that someone tries to compile it and it fails to compile.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:04:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:04:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kAy-000513-Th; Fri, 24 Jan 2014 17:04:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6kAw-00050j-TX
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:04:15 +0000
Received: from [193.109.254.147:16618] by server-11.bemta-14.messagelabs.com
	id 7C/85-20576-E0D92E25; Fri, 24 Jan 2014 17:04:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390583051!11526913!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16556 invoked from network); 24 Jan 2014 17:04:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:04:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94215155"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:04:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:04:11 -0500
Message-ID: <1390583049.13513.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 24 Jan 2014 17:04:09 +0000
In-Reply-To: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 14:28 +0000, Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> ---
> 
> George:
>   This is just documentation, and it would be nice to include it as part of
>   the 4.4 release.
> ---
>  misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 98 insertions(+)
>  create mode 100644 misc/coverity_model.c
> 
> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
> new file mode 100644
> index 0000000..418d25e
> --- /dev/null
> +++ b/misc/coverity_model.c
> @@ -0,0 +1,98 @@
> +/* Coverity Scan model
> + *
> + * This is a modelling file for Coverity Scan. Modelling helps to avoid false
> + * positives.
> + *
> + * - A model file can't import any header files.
> + * - Therefore only some built-in primitives like int, char and void are
> + *   available but not NULL etc.
> + * - Modelling doesn't need full structs and typedefs. Rudimentary structs
> + *   and similar types are sufficient.
> + * - An uninitialized local pointer is not an error. It signifies that the
> + *   variable could be either NULL or have some data.
> + *
> + * Coverity Scan doesn't pick up modifications automatically. The model file
> + * must be uploaded by an admin in the analysis.
> + *
> + * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
> + * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
> + *
> + * The Xen Coverity Scan modelling file used the cpython modelling file as a
> + * reference to get started (suggested by Coverity Scan themselves as a good
> + * example), but all content is Xen specific.

Given that you (I presume?) wrote at least some of the C-like stuff I
think you can include your copyright as well as the Python one. Is
there actually any cpython stuff left?

If there were a link to the docs in this comment that would be good too.
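
For illustration, a model entry written under the constraints that comment
describes might look something like the sketch below. __coverity_panic__()
is one of Coverity's documented modelling primitives; the panic() prototype
itself is just a hypothetical example, not taken from the Xen model file.

```c
/* Sketch of a model entry.  No #include is allowed in a model file, so
 * NULL, size_t and similar names are unavailable; only built-in types
 * and Coverity's modelling primitives can be used. */

/* Tell the analyser that panic() never returns, so it stops reporting
 * false positives on the code paths following a call to it. */
void panic(const char *fmt, ...)
{
    __coverity_panic__();
}
```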



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kBB-00053o-C6; Fri, 24 Jan 2014 17:04:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W6kB9-00053K-VP
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:04:28 +0000
Received: from [193.109.254.147:24615] by server-7.bemta-14.messagelabs.com id
	22/52-15500-B1D92E25; Fri, 24 Jan 2014 17:04:27 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390583066!13058625!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25621 invoked from network); 24 Jan 2014 17:04:26 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 17:04:26 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W6kB3-0003HC-Qg; Fri, 24 Jan 2014 17:04:21 +0000
Date: Fri, 24 Jan 2014 18:04:21 +0100
From: Tim Deegan <tim@xen.org>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20140124170421.GA4909@deinos.phlegethon.org>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E29BE6.6050601@eu.citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:59 +0000 on 24 Jan (1390579158), George Dunlap wrote:
> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
> > On 24/01/14 16:52, George Dunlap wrote:
> >> The name of the file might be a bit confusing though, if people think
> >> it is supposed to be compiled... would it make sense maybe to call it
> >> ".txt", and include some instructions at the top with a line that says
> >> "---- cut here 8< ---" or something?
> >>
> >>   -George
> > Not really - Coverity uses the file extension to work out how to
> > interpret the modelling file.  ".c" is correct here, and will cause
> > smart text editors to apply proper syntax highlighting.
> >
> > Alternates are .cpp and .java, depending on the primary language of the
> > project.
> 
> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't need 
> to be a .c file in the xen tree -- the instructions could say, "Place 
> the text below into a file named coverity_model.c".

+1.  I don't think it's confusing for humans but we don't want
e.g. ctags/cscope or IDEs picking up the coverity versions
of things.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:05:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kCP-0005G3-TJ; Fri, 24 Jan 2014 17:05:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6kCO-0005Fm-B3
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:05:44 +0000
Received: from [85.158.137.68:33841] by server-2.bemta-3.messagelabs.com id
	1C/B5-17329-76D92E25; Fri, 24 Jan 2014 17:05:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390583141!10376193!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11939 invoked from network); 24 Jan 2014 17:05:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:05:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94215823"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:05:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:05:39 -0500
Message-ID: <1390583138.13513.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Fri, 24 Jan 2014 17:05:38 +0000
In-Reply-To: <20140124170421.GA4909@deinos.phlegethon.org>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
	<20140124170421.GA4909@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 18:04 +0100, Tim Deegan wrote:
> At 16:59 +0000 on 24 Jan (1390579158), George Dunlap wrote:
> > On 01/24/2014 04:57 PM, Andrew Cooper wrote:
> > > On 24/01/14 16:52, George Dunlap wrote:
> > >> The name of the file might be a bit confusing though, if people think
> > >> it is supposed to be compiled... would it make sense maybe to call it
> > >> ".txt", and include some instructions at the top with a line that says
> > >> "---- cut here 8< ---" or something?
> > >>
> > >>   -George
> > > Not really - Coverity uses the file extension to work out how to
> > > interpret the modelling file.  ".c" is correct here, and will cause
> > > smart text editors to apply proper syntax highlighting.
> > >
> > > Alternates are .cpp and .java, depending on the primary language of the
> > > project.
> > 
> > Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't need 
> > to be a .c file in the xen tree -- the instructions could say, "Place 
> > the text below into a file named coverity_model.c".
> 
> +1.  I don't think it's confusing for humans but we don't want
> e.g. ctags/cscope or IDEs picking up the coverity versions
> of things.

Do you run tags on the whole of xen.git? I run it individually on the
xen and tools subdirs, because all the other cruft in the tree is just
too noisy otherwise.

Ian.
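[Archive editor's note: for readers unfamiliar with what such a modelling file contains, a Coverity model is ordinary-looking C that the analyzer reads in place of the real implementation, built from special `__coverity_*__` primitives. The fragment below is a purely illustrative sketch, not the file from this patch; the function names are hypothetical.]

```c
/* Illustrative sketch of a Coverity modelling file -- NOT the actual
 * file from this patch.  Coverity parses this as C (hence the .c
 * extension) and substitutes these definitions for the real ones
 * during analysis. */

/* Tell the analyzer that a hypothetical panic() never returns, so
 * code following a call to it is treated as unreachable. */
void panic(const char *fmt, ...)
{
    __coverity_panic__();
}

/* Model a hypothetical allocator wrapper so the analyzer tracks the
 * returned storage for leak checking. */
void *my_alloc(unsigned long size)
{
    return __coverity_alloc__(size);
}
```

This also shows why the .c extension matters: the model must parse as the project's primary language, even though it is never compiled into the build.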



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:08:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kF5-0005Uv-6J; Fri, 24 Jan 2014 17:08:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W6kF3-0005Uh-JB
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:08:29 +0000
Received: from [85.158.137.68:63066] by server-7.bemta-3.messagelabs.com id
	40/BD-27599-C0E92E25; Fri, 24 Jan 2014 17:08:28 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390583306!11148950!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23561 invoked from network); 24 Jan 2014 17:08:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:08:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; 
	d="asc'?scan'208";a="94216888"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:08:26 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:08:25 -0500
Message-ID: <1390583304.3869.182.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
Date: Fri, 24 Jan 2014 18:08:24 +0100
In-Reply-To: <CAE4oM6zDKTX_xi6Kz9HhweD4n0bwEBqoqmTDTFsiniQEzKUvpQ@mail.gmail.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<CAE4oM6xZpkw5BLcK3toYMix=L9gmp89Ww5=biC5+rsncm10A3w@mail.gmail.com>
	<1390304761.23576.161.camel@Solace>
	<CAE4oM6y12tNYFihdKbWcBmHcy__2rL570viV7T=Esj4vxHALqg@mail.gmail.com>
	<1390327005.23576.219.camel@Solace>
	<CAE4oM6zDKTX_xi6Kz9HhweD4n0bwEBqoqmTDTFsiniQEzKUvpQ@mail.gmail.com>
X-Mailer: Evolution 3.8.5 (3.8.5-2.fc19) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2924601876300090027=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2924601876300090027==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-MxdYYiuWOreRh7IytDbq"

--=-MxdYYiuWOreRh7IytDbq
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-01-23 at 21:09 +0200, Pavlo Suikov wrote:
> Hi Dario,
>
> > Can I ask how the numbers (for DomU, of course) look now?
>
>
> They are all 31 ms, so minimal overhead is achieved. However, it looks
> like we still have some gremlins in there: from boot to boot this time
> can change to 39 ms. So without the Xen scheduler being active, sleep
> latency stabilizes, but not always at the correct value.
>
Wow... So you're saying that, with the DomU exclusively pinned to a
specific pCPU, latency is stable, but the value at which it stabilizes
varies from boot to boot?

That's very weird...

> > Another thing I'll try, if you haven't done that already, is as
> > follows:
> > - get rid of the DomU
> > - pin the 2 Dom0 vCPUs each to one pCPU
> > - repeat the experiment
>
>
> Yeah, we have tried this as well and it gives us almost the same
> result as in the previous case: sleep latency in dom0 is not present,
> so we get 30 to 31 ms on a 30 ms sleep without any variations.
>
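[Archive editor's note: the pinning experiment quoted above can be sketched with xl roughly as follows; this transcript is illustrative, and domain/CPU numbers are assumptions.]

```sh
# Pin each of Dom0's two vCPUs to its own physical CPU, then re-run
# the sleep-latency test with no DomU present.
xl vcpu-pin 0 0 0      # Dom0, vCPU 0 -> pCPU 0
xl vcpu-pin 0 1 1      # Dom0, vCPU 1 -> pCPU 1
xl vcpu-list 0         # confirm the affinity actually took effect
```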
Ok. Are you aware of xentrace and xenalyze?

Have, for example, a look here:
http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/

Perhaps, in this Dom0 case, you could start the tracing, start
the DomU and then run the experiments. It's going to be tough to
correlate the actual activity in Dom0 (the test running inside it)
with Xen's traces, but at least you should be able to see when, and
try to figure out why, the DomU, which should be just idle, ends up
getting in your way.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-MxdYYiuWOreRh7IytDbq
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLinggACgkQk4XaBE3IOsQ6mwCdGrPo0COsGGYh2kSMscPFllVW
lRUAn2kAS8842ZKPpPYy75Az+oi0xZ3b
=PLTk
-----END PGP SIGNATURE-----

--=-MxdYYiuWOreRh7IytDbq--


--===============2924601876300090027==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2924601876300090027==--


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kHm-0005ny-HP; Fri, 24 Jan 2014 17:11:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W6kHl-0005nP-2V
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:11:17 +0000
Received: from [85.158.139.211:7127] by server-12.bemta-5.messagelabs.com id
	AF/7C-30017-4BE92E25; Fri, 24 Jan 2014 17:11:16 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390583475!11810132!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9423 invoked from network); 24 Jan 2014 17:11:15 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 17:11:15 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W6kHg-0003Pb-Vs; Fri, 24 Jan 2014 17:11:12 +0000
Date: Fri, 24 Jan 2014 18:11:12 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140124171112.GB4909@deinos.phlegethon.org>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com>
	<20140124170421.GA4909@deinos.phlegethon.org>
	<1390583138.13513.20.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390583138.13513.20.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:05 +0000 on 24 Jan (1390579538), Ian Campbell wrote:
> On Fri, 2014-01-24 at 18:04 +0100, Tim Deegan wrote:
> > At 16:59 +0000 on 24 Jan (1390579158), George Dunlap wrote:
> > > On 01/24/2014 04:57 PM, Andrew Cooper wrote:
> > > > On 24/01/14 16:52, George Dunlap wrote:
> > > >> The name of the file might be a bit confusing though, if people think
> > > >> it is supposed to be compiled... would it make sense maybe to call it
> > > >> ".txt", and include some instructions at the top with a line that says
> > > >> "---- cut here 8< ---" or something?
> > > >>
> > > >>   -George
> > > > Not really - Coverity uses the file extension to work out how to
> > > > interpret the modelling file.  ".c" is correct here, and will cause
> > > > smart text editors to apply proper syntax highlighting.
> > > >
> > > > Alternates are .cpp and .java, depending on the primary language of the
> > > > project.
> > > 
> > > Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't need 
> > > to be a .c file in the xen tree -- the instructions could say, "Place 
> > > the text below into a file named coverity_model.c".
> > 
> > +1.  I don't think it's confusing for humans but we don't want
> > e.g. ctags/cscope or IDEs picking up the coverity versions
> > of things.
> 
> Do you run tags on the whole of xen.git?

Not usually; I guess it's no worse than the multiple architectures,
minios &c where it is.  So, stet as a .c.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:11:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kHv-0005px-Tq; Fri, 24 Jan 2014 17:11:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W6kHu-0005pF-9G
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:11:26 +0000
Received: from [85.158.139.211:64342] by server-5.bemta-5.messagelabs.com id
	FC/0C-14928-CBE92E25; Fri, 24 Jan 2014 17:11:24 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390583483!11810155!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9844 invoked from network); 24 Jan 2014 17:11:23 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:11:23 -0000
Received: by mail-ea0-f182.google.com with SMTP id r15so1120665ead.27
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 09:11:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Y5A/RjnHiCoZOf51jWsz5zXFibKvOld1H1Y2b+7Ude8=;
	b=XBqqPfGE08mtom0L2L0+pvRQQZ1m1kpJNTcD3Vixx1O78ui0kTck11yj855/6ufuW5
	04TeyncjbFSHyyfwIS2cthtHrqQvnZ2hEweHAnpg2p05s+r8vr6FVYQGQfEMzf0fx3hL
	BXYlfqMVMhMP3MWteAHnFQgZIMOWo/pRZY7zeIsbtq5J6lDcqsvD2dXikt4sTacD5SHN
	XeuI9PXLICq9EqIgH2gUR8zfjJdKJBKPV9gOUJDh9lbWiAV9/BVqMvFnZ7j623zgxuFe
	Tvb0hweny5xzp3xXsPUvAM7X8CMwSPJwr8OkF8nm7IGFmrJ2VrHoWGKMIjTJA1gEmSMS
	l4Iw==
X-Gm-Message-State: ALoCoQlIqMSAOeFf/CztwN3KTQSpuW2txOq2g5aUy+n7/CKVEUESY40CqP7RfYFr447LK2coFCAy
X-Received: by 10.15.75.200 with SMTP id l48mr1698762eey.109.1390583483568;
	Fri, 24 Jan 2014 09:11:23 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x1sm5628262een.17.2014.01.24.09.11.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 24 Jan 2014 09:11:22 -0800 (PST)
Message-ID: <52E29EB9.7020906@linaro.org>
Date: Fri, 24 Jan 2014 17:11:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
	<1389696705.9887.52.camel@kazak.uk.xensource.com>
In-Reply-To: <1389696705.9887.52.camel@kazak.uk.xensource.com>
Cc: tim@xen.org, stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
	mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/14/2014 10:51 AM, Ian Campbell wrote:
> I think this problem is now fixed upstream; the intention was to
> eventually revert the workaround (Julien was going to tell me when it
> had gone into stable etc., but this is now a 4.5-era revert candidate).

The patch was merged for 3.13.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:12:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kIX-0005z0-DO; Fri, 24 Jan 2014 17:12:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6kIU-0005xy-2O
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:12:03 +0000
Received: from [85.158.137.68:46834] by server-1.bemta-3.messagelabs.com id
	8B/A7-29598-1EE92E25; Fri, 24 Jan 2014 17:12:01 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390583518!10377305!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4077 invoked from network); 24 Jan 2014 17:12:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:12:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96249715"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 17:11:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:11:58 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6kIP-0008L1-Fg;
	Fri, 24 Jan 2014 17:11:57 +0000
Message-ID: <52E29EDD.6050100@citrix.com>
Date: Fri, 24 Jan 2014 17:11:57 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<1390583049.13513.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1390583049.13513.19.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 17:04, Ian Campbell wrote:
> On Thu, 2014-01-23 at 14:28 +0000, Andrew Cooper wrote:
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>
>> ---
>>
>> George:
>>   This is just documentation, and it would be nice to include it as part of
>>   the 4.4 release.
>> ---
>>  misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 98 insertions(+)
>>  create mode 100644 misc/coverity_model.c
>>
>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>> new file mode 100644
>> index 0000000..418d25e
>> --- /dev/null
>> +++ b/misc/coverity_model.c
>> @@ -0,0 +1,98 @@
>> +/* Coverity Scan model
>> + *
>> + * This is a modelling file for Coverity Scan. Modelling helps to avoid false
>> + * positives.
>> + *
>> + * - A model file can't import any header files.
>> + * - Therefore only some built-in primitives like int, char and void are
>> + *   available but not NULL etc.
>> + * - Modelling doesn't need full structs and typedefs. Rudimentary structs
>> + *   and similar types are sufficient.
>> + * - An uninitialized local pointer is not an error. It signifies that the
>> + *   variable could be either NULL or have some data.
>> + *
>> + * Coverity Scan doesn't pick up modifications automatically. The model file
>> + * must be uploaded by an admin in the analysis.
>> + *
>> + * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
>> + * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
>> + *
>> + * The Xen Coverity Scan modelling file used the cpython modelling file as a
>> + * reference to get started (suggested by Coverity Scan themselves as a good
>> + * example), but all content is Xen specific.
> Given that you (I presume?) wrote at least some of the C-like stuff I
> think you can include your copyright as well as the Python one. Is
> there actually any cpython stuff left?
>
> If there were a link to the docs in this comment that would be good too.
>
>

See "Useful links" in the comment below this one in the file.

Most of this comment is from the cpython file, and I suppose technically
the "#define NULL (void)0" and "#define assert() /*empty*/" were.

But mainly, the cpython file was just an example of an existing modelling
file, which was substantially more useful when working out how to
write the Xen one than the Coverity documentation of what each of the
__coverity_$FOO() functions means and does.
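
[Editor's note: the modelling rules quoted in the patch can be illustrated
with a short, compilable sketch. The type and function names below are
invented for illustration and are not taken from Xen's actual
misc/coverity_model.c; a real model file would additionally call Coverity's
__coverity_*() primitives, which only the analyser understands.]

```c
/* Sketch of a model-file fragment: no header files may be included,
 * so NULL is defined by hand, and only a rudimentary struct with the
 * fields the model actually touches is declared. */
#ifndef NULL
#define NULL ((void *)0)
#endif

struct rudimentary_page { int refcount; };

/* Model-style allocator stub: keeping the failure path explicit tells
 * the analyser this function may return either NULL or valid data,
 * so callers that skip the NULL check get flagged. */
static void *model_alloc(unsigned long size)
{
    static struct rudimentary_page pool;
    if (size == 0 || size > sizeof(pool))
        return NULL;        /* failure: caller must check */
    return &pool;           /* success: some valid data */
}
```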

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:12:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kIx-000651-RU; Fri, 24 Jan 2014 17:12:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W6kIw-00064l-Ux
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:12:31 +0000
Received: from [85.158.139.211:19996] by server-10.bemta-5.messagelabs.com id
	3C/F9-01405-EFE92E25; Fri, 24 Jan 2014 17:12:30 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390583547!11758093!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2522 invoked from network); 24 Jan 2014 17:12:29 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 17:12:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OHCMUF008215
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 17:12:23 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHCKCO024440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 17:12:20 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHCJxX024416; Fri, 24 Jan 2014 17:12:19 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 09:12:19 -0800
Message-ID: <52E29F27.50403@oracle.com>
Date: Fri, 24 Jan 2014 12:13:11 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
	<52E2905D0200007800116BD2@nat28.tlf.novell.com>
In-Reply-To: <52E2905D0200007800116BD2@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 10:10 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>> +{
>> +    int ret = -EINVAL;
>> +    xen_pmu_params_t pmu_params;
>> +    uint32_t mode;
>> +
>> +    switch ( op )
>> +    {
>> +    case XENPMU_mode_set:
>> +        if ( !is_control_domain(current->domain) )
>> +            return -EPERM;
>> +
>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>> +            return -EFAULT;
>> +
>> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
>> +        if ( mode & ~XENPMU_MODE_ON )
>> +            return -EINVAL;
> Please, if you add a new interface, think carefully about future
> extension room: Here you ignore the upper 32 bits of .val instead
> of making sure they're zero, thus making it impossible to assign
> them some meaning later on.

I think I can leave this as is for now: I am storing the VPMU mode and 
VPMU features in the Xen-private vpmu_mode, which is a 64-bit value.

What I probably should do is remove XENPMU_MODE_MASK (and 
XENPMU_FEATURE_SHIFT and XENPMU_FEATURE_MASK) from the public header, 
since Linux passes down the 64-bit pmu_params.d.val without any format 
assumptions anyway.
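
[Editor's note: Jan's point about reserving the upper bits can be sketched
as follows. This is a hedged illustration with made-up mask values, not the
actual Xen code; it contrasts silently masking unknown bits off with
rejecting them so they stay available for future meanings.]

```c
#include <stdint.h>

#define XENPMU_MODE_MASK 0xffu     /* illustrative values only */
#define XENPMU_MODE_ON   (1u << 0) /* the only currently defined bit */

/* Lenient check, as in the quoted patch: the cast drops the upper 32
 * bits, so a guest setting them today gets no error, and those bits
 * can never safely be assigned a meaning later. */
static int mode_check_lenient(uint64_t val)
{
    uint32_t mode = (uint32_t)val & XENPMU_MODE_MASK;
    return (mode & ~XENPMU_MODE_ON) ? -22 /* -EINVAL */ : 0;
}

/* Strict check, as the review suggests: any bit outside the currently
 * defined set, including all upper 32, is rejected now, keeping them
 * free for future extension. */
static int mode_check_strict(uint64_t val)
{
    return (val & ~(uint64_t)XENPMU_MODE_ON) ? -22 /* -EINVAL */ : 0;
}
```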

>
>> --- a/xen/include/public/xen.h
>> +++ b/xen/include/public/xen.h
>> @@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>>   #define __HYPERVISOR_kexec_op             37
>>   #define __HYPERVISOR_tmem_op              38
>>   #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
>> +#define __HYPERVISOR_xenpmu_op            40
>>   
>>   /* Architecture-specific hypercall definitions. */
>>   #define __HYPERVISOR_arch_0               48
> Are you certain this wouldn't better be an architecture-specific
> hypercall? Just like with Machine Check, I don't think all
> architectures are guaranteed to have (or ever get) performance
> monitoring capabilities.

An architecture doesn't necessarily need to have HW performance 
monitoring support. In principle this interface can be used for passing 
any performance-related data (e.g. collected by the hypervisor) to the 
guest.

>> +/* Parameters structure for HYPERVISOR_xenpmu_op call */
>> +struct xen_pmu_params {
>> +    /* IN/OUT parameters */
>> +    union {
>> +        struct version {
>> +            uint8_t maj;
>> +            uint8_t min;
>> +        } version;
>> +        uint64_t pad;
>> +    } v;
> Looking at the implementation above I don't see this ever being an
> IN parameter.

Currently Xen doesn't care about the version, but in the future a guest 
may specify what version of the PMU interface it wants to use (I hope 
this day will never come, though...)

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:16:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kMI-0006TO-SM; Fri, 24 Jan 2014 17:15:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <philip.wernersbach@gmail.com>) id 1W6kMH-0006TG-1D
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:15:57 +0000
Received: from [85.158.139.211:60028] by server-5.bemta-5.messagelabs.com id
	37/32-14928-CCF92E25; Fri, 24 Jan 2014 17:15:56 +0000
X-Env-Sender: philip.wernersbach@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390583753!11612126!1
X-Originating-IP: [209.85.216.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29399 invoked from network); 24 Jan 2014 17:15:54 -0000
Received: from mail-qa0-f47.google.com (HELO mail-qa0-f47.google.com)
	(209.85.216.47)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:15:54 -0000
Received: by mail-qa0-f47.google.com with SMTP id j5so4177443qaq.20
	for <xen-devel@lists.xen.org>; Fri, 24 Jan 2014 09:15:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=XNt4cOh2eniAexNqGlYzSHcV2t4JnNNdHZ78SDg5jO0=;
	b=YWX6lZZDf0qMVmiF5Ik4s+c39UZ6DILKN6swgATe42dFE/TpZBviwvBjlLEiVbS08y
	Edbaz65XmEqB5q9p9RcWZVGYF8CSzDmlQ7w2mgCTfDjJWmBLhj3WwuAaK6XjnH0i5ABW
	2vpmjvUb16gXAI1/8PHfxBnLvF8yMkNrU0D/hyT/DTRIL5fjW3ojlKFFABa2QZb2D7ib
	q7r/4Qo73PgUMPtSOtGgmkhZajdmJGvLa+OEMFZQ4XMfwbILURSF4qAcUXWK4v/ZRLZ1
	k2bh07iUcSvBy2HepPeIzAYm1hFHGibdjcmJn2PbVay7Vm2AgX38giU5+DdmE7tiDqjF
	Ojtg==
MIME-Version: 1.0
X-Received: by 10.140.46.119 with SMTP id j110mr21340754qga.32.1390583753492; 
	Fri, 24 Jan 2014 09:15:53 -0800 (PST)
Received: by 10.96.177.66 with HTTP; Fri, 24 Jan 2014 09:15:53 -0800 (PST)
Date: Fri, 24 Jan 2014 12:15:53 -0500
Message-ID: <CAO5Rg11T5mota05vVY4TYQiMc2-jsd+xHXKa3L-ofTJ9boAWzA@mail.gmail.com>
From: Philip Wernersbach <philip.wernersbach@gmail.com>
To: xen-devel@lists.xen.org
Content-Type: multipart/mixed; boundary=001a11395aaaa85e5604f0ba827c
Cc: Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH][v4] xen: Pass the location of the ACPI RSDP to
	DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11395aaaa85e5604f0ba827c
Content-Type: text/plain; charset=ISO-8859-1

xen: [v4] Pass the location of the ACPI RSDP to DOM0.

Some machines, such as recent IBM servers, only allow the OS to get the
ACPI RSDP from EFI. Since Xen removes DOM0's ability to access EFI, DOM0
cannot get the RSDP on these machines, leading to reduced ACPI
functionality.

Signed-off-by: Philip Wernersbach <philip.wernersbach@gmail.com>

---
Changed since v3:
    * Use standard pointer print for snprintf
    * Add an option to enable or disable ACPI RSDP passthrough
    * Only pass through when running under EFI and acpi_rsdp_passthrough is enabled

Changed since v2:
    * Fix coding style
    * Get rid of extra define
    * Use correct typedef'd type for the ACPI RSDP pointer
    * Better error checking conditional
    * Simplify error message

diff --git a/xen/arch/x86/acpi/boot.c b/xen/arch/x86/acpi/boot.c
index 6d7984f..ce8ffbe 100644
--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -57,6 +57,7 @@ bool_t __initdata acpi_lapic;
 bool_t __initdata acpi_ioapic;

 bool_t acpi_skip_timer_override __initdata;
+bool_t acpi_rsdp_passthrough    __initdata;

 #ifdef CONFIG_X86_LOCAL_APIC
 static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..7827b5f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -75,6 +75,11 @@ custom_param("acpi", parse_acpi_param);
 boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);

 /* **** Linux config option: propagated to domain0. */
+/* acpi_rsdp_passthrough: Explicitly pass the ACPI RSDP pointer to */
+/*                        domain0 via the acpi_rsdp option.        */
+boolean_param("acpi_rsdp_passthrough", acpi_rsdp_passthrough);
+
+/* **** Linux config option: propagated to domain0. */
 /* noapic: Disable IOAPIC setup. */
 boolean_param("noapic", skip_ioapic_setup);

@@ -1378,6 +1383,26 @@ void __init __start_xen(unsigned long mbi_p)
             safe_strcat(dom0_cmdline, " acpi=");
             safe_strcat(dom0_cmdline, acpi_param);
         }
+        if ( efi_enabled && acpi_rsdp_passthrough &&
+             !strstr(dom0_cmdline, "acpi_rsdp=") )
+        {
+            acpi_physical_address rp = acpi_os_get_root_pointer();
+            char rp_str[sizeof(acpi_physical_address)*2 + 3];
+
+            if ( rp )
+            {
+                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 3,
+                         "%p", (void *)rp);
+
+                safe_strcat(dom0_cmdline, " acpi_rsdp=");
+                safe_strcat(dom0_cmdline, rp_str);
+            }
+            else
+            {
+                printk(XENLOG_WARNING
+                       "Failed to get acpi_rsdp to pass to dom0\n");
+            }
+        }

         cmdline = dom0_cmdline;
     }
diff --git a/xen/include/asm-x86/acpi.h b/xen/include/asm-x86/acpi.h
index a7b11b8..e13f873 100644
--- a/xen/include/asm-x86/acpi.h
+++ b/xen/include/asm-x86/acpi.h
@@ -81,6 +81,7 @@ int __acpi_release_global_lock(unsigned int *lock);
 extern bool_t acpi_lapic, acpi_ioapic, acpi_noirq;
 extern bool_t acpi_force, acpi_ht, acpi_disabled;
 extern bool_t acpi_skip_timer_override;
+extern bool_t acpi_rsdp_passthrough;
 extern u32 acpi_smi_cmd;
 extern u8 acpi_enable_value, acpi_disable_value;
 void acpi_pic_sci_set_trigger(unsigned int, u16);

--001a11395aaaa85e5604f0ba827c
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a11395aaaa85e5604f0ba827c--


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:20:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kQi-0006gL-SI; Fri, 24 Jan 2014 17:20:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W6kQi-0006gC-7s
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 17:20:32 +0000
Received: from [85.158.139.211:19505] by server-5.bemta-5.messagelabs.com id
	12/C7-14928-FD0A2E25; Fri, 24 Jan 2014 17:20:31 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390584028!11780072!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13664 invoked from network); 24 Jan 2014 17:20:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:20:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94221208"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:20:09 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:20:09 -0500
Message-ID: <52E2A0C6.4030801@citrix.com>
Date: Fri, 24 Jan 2014 17:20:06 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140124054828.GA18522@u109add4315675089e695.ant.amazon.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Anthony Liguori <aliguori@amazon.com>,
	xen-devel@lists.xenproject.org, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 05:48, Matt Wilson wrote:
> On Thu, Jan 23, 2014 at 09:23:44PM +0000, Zoltan Kiss wrote:
> Apologies for coming in late on this thread. I'm quite behind on
> xen-devel mail that isn't CC: to me.
>
> It seems to have been forgotten that Anthony and I proposed a similar
> change last November.
>
> https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws
>
> Or am I misunderstanding the change?

I didn't know about this patch, but yes, both of them do basically the 
same thing. One subtle difference is that you store the old mfn in 
page->private, while my patch keeps the original behaviour and stores it 
in page->index. page->private is used instead to store the new mfn we got 
from Xen; however, I haven't checked where we use that.
Your approach might be better. We also talked with David about how we 
should stop using page->index: e.g. with the netback grant mapping 
patches I spent a lot of time figuring out a packet drop issue, which 
eventually boiled down to the fact that index is in a union with 
pfmemalloc, and if you don't set mapping, the local IP stack will think 
it is a pfmemalloc page (see the comment in my second patch, 
xenvif_fill_frags).
However, I think that should be a separate patch; I tried to keep the 
original behaviour as much as possible and focus just on avoiding 
m2p_override where possible.

Regards,

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kVy-0006sL-Q9; Fri, 24 Jan 2014 17:25:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W6kVx-0006sE-Jo
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:25:57 +0000
Received: from [85.158.143.35:19817] by server-1.bemta-4.messagelabs.com id
	63/C2-02132-522A2E25; Fri, 24 Jan 2014 17:25:57 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390584354!619369!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12613 invoked from network); 24 Jan 2014 17:25:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:25:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94223127"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:25:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:25:06 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W6kV7-00007C-NZ;
	Fri, 24 Jan 2014 17:25:05 +0000
Message-ID: <52E2A1EA.2020203@eu.citrix.com>
Date: Fri, 24 Jan 2014 17:24:58 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com> <52E29CAE.8050701@citrix.com>
In-Reply-To: <52E29CAE.8050701@citrix.com>
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 05:02 PM, Andrew Cooper wrote:
> On 24/01/14 16:59, George Dunlap wrote:
>> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
>>> On 24/01/14 16:52, George Dunlap wrote:
>>>> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>>>>> On 23/01/14 15:13, George Dunlap wrote:
>>>>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>>> CC: Keir Fraser <keir@xen.org>
>>>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>>>> CC: Tim Deegan <tim@xen.org>
>>>>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>>>
>>>>>>> ---
>>>>>>>
>>>>>>> George:
>>>>>>>       This is just documentation, and it would be nice to include
>>>>>>> it as
>>>>>>> part of
>>>>>>>       the 4.4 release.
>>>>>>> ---
>>>>>>>      misc/coverity_model.c |   98
>>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>      1 file changed, 98 insertions(+)
>>>>>>>      create mode 100644 misc/coverity_model.c
>>>>>>>
>>>>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>>>>> new file mode 100644
>>>>>>> index 0000000..418d25e
>>>>>>> --- /dev/null
>>>>>>> +++ b/misc/coverity_model.c
>>>>>>> @@ -0,0 +1,98 @@
>>>>>>> +/* Coverity Scan model
>>>>>>> + *
>>>>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>>>>> avoid false
>>>>>>> + * positives.
>>>>>>> + *
>>>>>>> + * - A model file can't import any header files.
>>>>>>> + * - Therefore only some built-in primitives like int, char and
>>>>>>> void
>>>>>>> are
>>>>>>> + *   available but not NULL etc.
>>>>>>> + * - Modelling doesn't need full structs and typedefs. Rudimentary
>>>>>>> structs
>>>>>>> + *   and similar types are sufficient.
>>>>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>>>>> that the
>>>>>>> + *   variable could be either NULL or have some data.
>>>>>>> + *
>>>>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>>>>> model file
>>>>>>> + * must be uploaded by an admin in the analysis.
>>>>>> So this file isn't compiled; it's manually uploaded as part of the
>>>>>> coverity scanning process; and could be provided out-of-band, but
>>>>>> it's
>>>>>> just convenient to put it in the tree, particularly if any of these
>>>>>> things should change as things go forward.  (Hence comparing it to
>>>>>> documentation.)  Is that right?
>>>>>>
>>>>>>     -George
>>>>>>
>>>>> Correct.  I believe internally Coverity compiles it (at least to an
>>>>> AST), but that is completely opaque to users of Scan.
>>>> Right; I have a hard time coming up with a compelling reason to wait
>>>> for this one.
>>>>
>>>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>>>>
>>>> The name of the file might be a bit confusing though, if people think
>>>> it is supposed to be compiled... would it make sense maybe to call it
>>>> ".txt", and include some instructions at the top with a line that says
>>>> "---- cut here 8< ---" or something?
>>>>
>>>>    -George
>>> Not really - Coverity uses the file extension to work out how to
>>> interpret the modelling file.  ".c" is correct here, and will cause
>>> smart text editors to apply proper syntax highlighting.
>>>
>>> Alternates are .cpp and .java, depending on the primary language of the
>>> project.
>> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't
>> need to be a .c file in the xen tree -- the instructions could say,
>> "Place the text below into a file named coverity_model.c".
>>
>>   -George
> This file was deliberately placed in a brand new directory, away from
> any Makefiles which might try to compile it for this reason.
>
> Requiring users to post-process this file just to help prevent someone
> from accidentally trying to compile it seems crazy IMO.  The worst that
> happens is that someone tries to compile it and it fails to compile.

Right, well a directory named "misc" probably won't remain empty for 
long. :-)  I'd prefer Ian's suggestion of misc/coverity/, or maybe just 
coverity/; but I'm not too particular about it.

(And the consensus seems to be that .c should be fine, so I'll go along 
with that too.)

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:27:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kWx-0006wG-9T; Fri, 24 Jan 2014 17:26:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6kWv-0006w8-T2
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:26:58 +0000
Received: from [85.158.137.68:35265] by server-8.bemta-3.messagelabs.com id
	68/17-31081-162A2E25; Fri, 24 Jan 2014 17:26:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390584414!10029296!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28815 invoked from network); 24 Jan 2014 17:26:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:26:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="94223821"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 17:26:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:26:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W6kWr-00008e-2D;
	Fri, 24 Jan 2014 17:26:53 +0000
Message-ID: <52E2A25C.5060507@citrix.com>
Date: Fri, 24 Jan 2014 17:26:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<52E131AB.7020603@eu.citrix.com> <52E132EE.2030101@citrix.com>
	<52E29A35.9060205@eu.citrix.com> <52E29B6D.6070208@citrix.com>
	<52E29BE6.6050601@eu.citrix.com> <52E29CAE.8050701@citrix.com>
	<52E2A1EA.2020203@eu.citrix.com>
In-Reply-To: <52E2A1EA.2020203@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 17:24, George Dunlap wrote:
> On 01/24/2014 05:02 PM, Andrew Cooper wrote:
>> On 24/01/14 16:59, George Dunlap wrote:
>>> On 01/24/2014 04:57 PM, Andrew Cooper wrote:
>>>> On 24/01/14 16:52, George Dunlap wrote:
>>>>> On 01/23/2014 03:19 PM, Andrew Cooper wrote:
>>>>>> On 23/01/14 15:13, George Dunlap wrote:
>>>>>>> On 01/23/2014 02:28 PM, Andrew Cooper wrote:
>>>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>>>> CC: Keir Fraser <keir@xen.org>
>>>>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>>>>> CC: Tim Deegan <tim@xen.org>
>>>>>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>>>>>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>>>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>>>>
>>>>>>>> ---
>>>>>>>>
>>>>>>>> George:
>>>>>>>>       This is just documentation, and it would be nice to include
>>>>>>>> it as
>>>>>>>> part of
>>>>>>>>       the 4.4 release.
>>>>>>>> ---
>>>>>>>>      misc/coverity_model.c |   98
>>>>>>>> +++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>>      1 file changed, 98 insertions(+)
>>>>>>>>      create mode 100644 misc/coverity_model.c
>>>>>>>>
>>>>>>>> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
>>>>>>>> new file mode 100644
>>>>>>>> index 0000000..418d25e
>>>>>>>> --- /dev/null
>>>>>>>> +++ b/misc/coverity_model.c
>>>>>>>> @@ -0,0 +1,98 @@
>>>>>>>> +/* Coverity Scan model
>>>>>>>> + *
>>>>>>>> + * This is a modelling file for Coverity Scan. Modelling helps to
>>>>>>>> avoid false
>>>>>>>> + * positives.
>>>>>>>> + *
>>>>>>>> + * - A model file can't import any header files.
>>>>>>>> + * - Therefore only some built-in primitives like int, char and
>>>>>>>> void
>>>>>>>> are
>>>>>>>> + *   available but not NULL etc.
>>>>>>>> + * - Modelling doesn't need full structs and typedefs.
>>>>>>>> Rudimentary
>>>>>>>> structs
>>>>>>>> + *   and similar types are sufficient.
>>>>>>>> + * - An uninitialized local pointer is not an error. It signifies
>>>>>>>> that the
>>>>>>>> + *   variable could be either NULL or have some data.
>>>>>>>> + *
>>>>>>>> + * Coverity Scan doesn't pick up modifications automatically. The
>>>>>>>> model file
>>>>>>>> + * must be uploaded by an admin in the analysis.
>>>>>>> So this file isn't compiled; it's manually uploaded as part of the
>>>>>>> coverity scanning process; and could be provided out-of-band, but
>>>>>>> it's
>>>>>>> just convenient to put it in the tree, particularly if any of these
>>>>>>> things should change as things go forward.  (Hence comparing it to
>>>>>>> documentation.)  Is that right?
>>>>>>>
>>>>>>>     -George
>>>>>>>
>>>>>> Correct.  I believe internally Coverity compiles it (at least to an
>>>>>> AST), but that is completely opaque to users of Scan.
>>>>> Right; I have a hard time coming up with a compelling reason to wait
>>>>> for this one.
>>>>>
>>>>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>
>>>>> The name of the file might be a bit confusing though, if people think
>>>>> it is supposed to be compiled... would it make sense maybe to
>>>>> call it
>>>>> ".txt", and include some instructions at the top with a line that
>>>>> says
>>>>> "---- cut here 8< ---" or something?
>>>>>
>>>>>    -George
>>>> Not really - Coverity uses the file extension to work out how to
>>>> interpret the modelling file.  ".c" is correct here, and will cause
>>>> smart text editors to apply proper syntax highlighting.
>>>>
>>>> Alternates are .cpp and .java, depending on the primary language of
>>>> the
>>>> project.
>>> Yes, I assumed that *coverity* needs it to be a .c.  But it doesn't
>>> need to be a .c file in the xen tree -- the instructions could say,
>>> "Place the text below into a file named coverity_model.c".
>>>
>>>   -George
>> This file was deliberately placed in a brand new directory, away from
>> any Makefiles which might try to compile it for this reason.
>>
>> Requiring users to post-process this file just to help prevent someone
>> from accidentally trying to compile it seems crazy IMO.  The worst that
>> happens is that someone tries to compile it and it fails to compile.
>
> Right, well a directory named "misc" probably won't remain empty for
> long. :-)  I'd prefer Ian's suggestion of misc/coverity/, or maybe
> just coverity/; but I'm not too particular about it.
>
> (And the consensus seems to be that .c should be fine, so I'll go along
> with that too.)
>
>  -George
>

I will send v6 moving it to misc/coverity/, and the result of the
copyright thread.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:36:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6kgH-0007Y4-QN; Fri, 24 Jan 2014 17:36:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W6kgF-0007Xz-NW
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:36:35 +0000
Received: from [193.109.254.147:57455] by server-3.bemta-14.messagelabs.com id
	96/3A-11000-2A4A2E25; Fri, 24 Jan 2014 17:36:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390584992!12973378!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13227 invoked from network); 24 Jan 2014 17:36:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 17:36:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,713,1384300800"; d="scan'208";a="96258815"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 17:36:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 12:36:31 -0500
Message-ID: <1390584989.13513.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 24 Jan 2014 17:36:29 +0000
In-Reply-To: <52E29EDD.6050100@citrix.com>
References: <1390487303-31765-1-git-send-email-andrew.cooper3@citrix.com>
	<1390583049.13513.19.camel@kazak.uk.xensource.com>
	<52E29EDD.6050100@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>, Jan
	Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v5] coverity: Store the modelling file in
 the source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 17:11 +0000, Andrew Cooper wrote:
> On 24/01/14 17:04, Ian Campbell wrote:
> > On Thu, 2014-01-23 at 14:28 +0000, Andrew Cooper wrote:
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> CC: Keir Fraser <keir@xen.org>
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Tim Deegan <tim@xen.org>
> >> CC: Ian Campbell <Ian.Campbell@citrix.com>
> >> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> >> CC: George Dunlap <george.dunlap@eu.citrix.com>
> >>
> >> ---
> >>
> >> George:
> >>   This is just documentation, and it would be nice to include it as part of
> >>   the 4.4 release.
> >> ---
> >>  misc/coverity_model.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 98 insertions(+)
> >>  create mode 100644 misc/coverity_model.c
> >>
> >> diff --git a/misc/coverity_model.c b/misc/coverity_model.c
> >> new file mode 100644
> >> index 0000000..418d25e
> >> --- /dev/null
> >> +++ b/misc/coverity_model.c
> >> @@ -0,0 +1,98 @@
> >> +/* Coverity Scan model
> >> + *
> >> + * This is a modelling file for Coverity Scan. Modelling helps to avoid false
> >> + * positives.
> >> + *
> >> + * - A model file can't import any header files.
> >> + * - Therefore only some built-in primitives like int, char and void are
> >> + *   available but not NULL etc.
> >> + * - Mode-ling doesn't need full structs and typedefs. Rudimentary structs
> >> + *   and similar types are sufficient.

Is "Mode-ling" a word?

> >> + * - An uninitialized local pointer is not an error. It signifies that the
> >> + *   variable could be either NULL or have some data.
> >> + *
> >> + * Coverity Scan doesn't pick up modifications automatically. The model file
> >> + * must be uploaded by an admin in the analysis.
> >> + *
> >> + * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
> >> + * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
> >> + *
> >> + * The Xen Coverity Scan modelling file used the cpython modelling file as a
> >> + * reference to get started (suggested by Coverty Scan themselves as a good
> >> + * example), but all content is Xen specific.
> > Given that you (I presume?) wrote at least some of the C-like stuff I
> > think you can include your copyright too as well as the Python one. Is
> > there actually any cpython stuff left?
> >
> > If there were a link to the docs in this comment that would be good too.
> >
> >
> 
> See "Useful links" in the comment below this one in the file.

Oops, stopped after the first comment ;-)

> Most of this comment is from the cpython file, and I suppose technically
> the "#define NULL (void)0" and "#define assert() /*empty*/" were.
> 
> But mainly, the cpython was just an example of an existing modelling
> file, which was substantially more useful when working out how to
> write the Xen one than the Coverity documentation of what each of the
> __coverity_$FOO() functions means and does.

Might be worth including a link to the python one?
I'd normally do the copyright as:
        Write Copyright (c) 2013-2014 Andrew Cooper (or Citrix etc as you wish/are required)

        Based on <http://linky> which is:
          Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
          2011, 2012, 2013 Python Software Foundation; All Rights
        Reserved

Ian.
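
[Editorial sketch: the model-file constraints discussed above -- no
headers, basics redefined by hand, __coverity_$FOO() primitives -- look
roughly like the following. This is illustrative only, not the actual
Xen model; panic() is a stand-in for whatever functions the real model
covers, and the file is consumed by the Coverity analyser, not compiled.]

```
/* A model file is parsed by Coverity, never compiled, so it may not
 * include any headers; basic names must be (re)defined by hand. */
#define NULL (void)0
#define assert(x) /* empty */

/* Telling the analyser that panic() never returns suppresses false
 * positives on the code paths following a call to it. */
void panic(const char *fmt, ...)
{
    __coverity_panic__();
}
```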


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 17:45:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6koS-00083J-9I; Fri, 24 Jan 2014 17:45:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6koP-00083E-6e
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:45:03 +0000
Received: from [85.158.143.35:12283] by server-1.bemta-4.messagelabs.com id
	A7/93-02132-C96A2E25; Fri, 24 Jan 2014 17:45:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390585497!631684!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29068 invoked from network); 24 Jan 2014 17:44:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 17:44:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OHhrrl021772
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 17:43:53 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHhpC9012407
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 17:43:52 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OHhpiD008361; Fri, 24 Jan 2014 17:43:51 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 09:43:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6B3921BFA72; Fri, 24 Jan 2014 12:43:49 -0500 (EST)
Date: Fri, 24 Jan 2014 12:43:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124174349.GA15472@phenom.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="cNdxnHkX5QqsyA0e"
Content-Disposition: inline
In-Reply-To: <52E2A0930200007800116CAE@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Jan 24, 2014 at 04:19:15PM +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > I built the kernel without the igb driver just to eliminate it being
> > the culprit. Now I can boot without issues and this is what lspci
> > reports:
> > 
> > -bash-4.1# lspci -s 02:00.0 -v
> > 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > Connection (rev 01)
> >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >         Flags: bus master, fast devsel, latency 0, IRQ 10
> >         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
> >         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
> >         I/O ports at e020 [size=32]
> >         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
> >         Expansion ROM at f0c00000 [disabled] [size=4M]
> >         Capabilities: [40] Power Management version 3
> >         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> >         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
> 
> So here's a patch to figure out why we don't find this.

Thank you!

See attached log. The corresponding xen-syms is compressed and
updated at : http://darnok.org/xen/xen-syms.gz

The interesting bit is:

(XEN) 02:00.0: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
(XEN) 02:00.0: pos=40
(XEN) 02:00.0: id=01
(XEN) 02:00.0: pos=50
(XEN) 02:00.0: id=05
(XEN) 02:00.0: pos=70
(XEN) 02:00.0: id=11
(XEN) 02:00.1: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
(XEN) 02:00.1: pos=40
(XEN) 02:00.1: id=01
(XEN) 02:00.1: pos=50
(XEN) 02:00.1: id=05
(XEN) 02:00.1: pos=70
(XEN) 02:00.1: id=11

The diff of the tree is:

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 284042e..3eadf9f 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1033,7 +1033,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
 
     spin_lock(&pcidevs_lock);
     pdev = pci_get_pdev(seg, bus, devfn);
-    if ( !pdev )
+    if ( !pdev  || !pdev->msix )
         rc = -ENODEV;
     else if ( pdev->msix->used_entries != !!off )
         rc = -EBUSY;
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 1040b2c..ff5587b 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -564,7 +564,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
-
+        printk("PHYSDEVOP_manage_pci_add of %x:%x.%x\n", manage_pci.bus, PCI_SLOT(manage_pci.devfn), PCI_FUNC(manage_pci.devfn));
         ret = pci_add_device(0, manage_pci.bus, manage_pci.devfn, NULL);
         break;
     }
@@ -588,6 +588,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         ret = -EINVAL;
+        printk("PHYSDEVOP_manage_pci_add_ext of %x:%x.%x %d,%d\n", manage_pci_ext.bus, PCI_SLOT(manage_pci_ext.devfn), PCI_FUNC(manage_pci_ext.devfn), manage_pci_ext.is_extfn, manage_pci_ext.is_virtfn);
         if ( (manage_pci_ext.is_extfn > 1) || (manage_pci_ext.is_virtfn > 1) )
             break;
 
@@ -609,6 +610,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
 
+        printk("PHYSDEVOP_pci_device_add of %x:%x.%x flags:%x\n", add.bus, PCI_SLOT(add.devfn), PCI_FUNC(add.devfn), add.flags);
         pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
         if ( add.flags & XEN_PCI_DEV_VIRTFN )
         {
@@ -639,9 +641,11 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
-        else
+        else {
+            printk("PHYSDEVOP_prepare/release_msix of %x:%x.%x \n", dev.bus, PCI_SLOT(dev.devfn), PCI_FUNC(dev.devfn));
             ret = pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
                                    cmd != PHYSDEVOP_prepare_msix);
+        }
         break;
     }
 
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..93ba11c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -172,7 +172,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
     INIT_LIST_HEAD(&pdev->msi_list);
 
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                             PCI_CAP_ID_MSIX) )
+                             PCI_CAP_ID_MSIX | 0x80) )
     {
         struct arch_msix *msix = xzalloc(struct arch_msix);
 
diff --git a/xen/drivers/pci/pci.c b/xen/drivers/pci/pci.c
index 25dc5f1..9f9a371 100644
--- a/xen/drivers/pci/pci.c
+++ b/xen/drivers/pci/pci.c
@@ -14,19 +14,24 @@ int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap)
     int max_cap = 48;
     u8 pos = PCI_CAPABILITY_LIST;
     u16 status;
+bool_t log = (cap & 0x80) && !seg && bus == 2;//temp
+cap &= ~0x80;//temp
 
     status = pci_conf_read16(seg, bus, dev, func, PCI_STATUS);
+if(log) printk("02:%02x.%u: status=%04x (%ps wants %02x)\n", dev, func, status, __builtin_return_address(0), cap);//temp
     if ( (status & PCI_STATUS_CAP_LIST) == 0 )
         return 0;
 
     while ( max_cap-- )
     {
         pos = pci_conf_read8(seg, bus, dev, func, pos);
+if(log) printk("02:%02x.%u: pos=%02x\n", dev, func, pos);//temp
         if ( pos < 0x40 )
             break;
 
         pos &= ~3;
         id = pci_conf_read8(seg, bus, dev, func, pos + PCI_CAP_LIST_ID);
+if(log) printk("02:%02x.%u: id=%02x\n", dev, func, id);//temp
 
         if ( id == 0xff )
             break;
@@ -35,6 +40,7 @@ int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap)
 
         pos += PCI_CAP_LIST_NEXT;
     }
+if(log) printk("02:%02x.%u: no cap %02x\n", dev, func, cap);//temp
 
     return 0;
 }

--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="tst035-jan.txt"
Content-Transfer-Encoding: quoted-printable

Trying 192.168.102.15...
Connected to maxsrv2.
Escape character is '^]'.
[Serial-console capture elided: repeated Intel(R) Boot Agent GE v1.3.22
and v1.4.10 PXE initialization screens (PXE 2.1 Build 086 and Build 092,
WfM 2.0, "Press Ctrl+S to enter the Setup Menu"), rendered as
quoted-printable VT100 escape sequences; the capture is truncated here.]
  =1B[17;00HInitializing Intel(R) Boot Agent GE v1.4.10                    =
                 =1B[18;00HPXE 2.1 Build 092 (WfM 2.0)                     =
                                =1B[19;00HPress Ctrl+S to enter the Setup M=
enu.                                           =1B[20;00H                  =
                                                              =1B[21;00H   =
                                                                           =
  =1B[22;00H                                                               =
                 =1B[23;00H                                                =
                                =1B[24;00H                                 =
                                              =1B[24;00H=1B[19;39H=1B[19;00=
HPress Ctrl+S to enter the Setup Menu..                                    =
      =1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39=
H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[=
19;39H=1B[19;39H=80=08 =08=1B[2J=1B[1;1H=1B[1;1H=80=08 =08=1B[01;00H       =
                                                                         =
=80=08 =08=1B[2J=1B[1;1H=1B[1;1H=1B[2J=1B[1;1H=1B[2J=1B[1;1H=80=08 =08=1B[0=
1;00H                                                                      =
          =1B[02;00HIntel(R) Boot Agent GE v1.4.10                         =
                         =1B[03;00HCopyright (C) 1997-2012, Intel Corporati=
on                                      =1B[04;00H                         =
                                                       =1B[05;00HInitializi=
ng and establishing link...                                           =1B[0=
6;00H                                                                      =
          =1B[07;00H                                                       =
                         =1B[08;00H                                        =
                                        =1B[09;00H                         =
                                                       =1B[10;00H          =
                                                                      =1B[1=
1;00H                                                                      =
          =1B[12;00H                                                       =
                         =1B[13;00H                                        =
                                        =1B[14;00H                         =
                                                       =1B[15;00H          =
                                                                      =1B[1=
6;00H                                                                      =
          =1B[17;00H                                                       =
                         =1B[18;00H                                        =
                                        =1B[19;00H                         =
                                                       =1B[20;00H          =
                                                                      =1B[2=
1;00H                                                                      =
          =1B[22;00H                                                       =
                         =1B[23;00H                                        =
                                        =1B[24;00H                         =
                                                      =1B[24;00H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;00H=
CLIENT MAC ADDR: 00 25 90 86 BE F0  GUID: 00000000 0000 0000 0000 00259086B=
EF0  =1B[06;00HDHCP.\                                                      =
                    =1B[06;06H=1B[06;06H=1B[06;00HDHCP.|                   =
                                                       =1B[06;06H=1B[06;00H=
DHCP./                                                                     =
     =1B[06;06H=1B[06;00HDHCP.-                                            =
                              =1B[06;06H=1B[06;00HDHCP.\                   =
                                                       =1B[06;06H=1B[06;00H=
DHCP.|                                                                     =
     =1B[06;06H=1B[06;00HDHCP./                                            =
                              =1B[06;06H=1B[06;00HDHCP.-                   =
                                                       =1B[06;06H=1B[06;00H=
DHCP.\                                                                     =
     =1B[06;06H=1B[06;00HDHCP.|                                            =
                              =1B[06;06H=1B[06;00HDHCP./                   =
                                                       =1B[06;06H=1B[06;00H=
DHCP.-                                                                     =
     =1B[06;06H=1B[06;00HDHCP.\                                            =
                              =1B[06;06H=1B[06;00HDHCP.|                   =
                                                       =1B[06;06H=1B[06;00H=
DHCP./                                                                     =
     =1B[06;06H=1B[06;00HDHCP.-                                            =
                              =1B[06;06H=1B[06;00HDHCP.\                   =
                                                       =1B[06;06H=1B[06;00H=
CLIENT IP: 192.168.102.35  MASK: 255.255.255.0  DHCP IP: 192.168.102.1     =
     =1B[07;00HGATEWAY IP: 192.168.102.1                                   =
                    =1B[08;00HTFTP.                                        =
                                   =1B[08;06H=0D
PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Fri Jan 24 12:22:58
(XEN) Latest ChangeSet: 47a3c0-dirty
(XEN) Console output is synchronous.=0D
(XEN) Bootloader: unknown=0D
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:=0D
(XEN)  VGA is text mode 80x25, font 8x16=0D
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds=0D
(XEN)  EDID info not retrieved because no DDC retrieval method detected=0D
(XEN) Disc information:=0D
(XEN)  Found 1 MBR signatures=0D
(XEN)  Found 1 EDD information structures=0D
(XEN) Xen-e820 RAM map:=0D
(XEN)  0000000000000000 - 0000000000099c00 (usable)=0D
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)=0D
(XEN)  00000000000e0000 - 0000000000100000 (reserved)=0D
(XEN)  0000000000100000 - 00000000a58f1000 (usable)=0D
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)=0D
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)=0D
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)=0D
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)=0D
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)=0D
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)=0D
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)=0D
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)=0D
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)=0D
(XEN)  00000000bc000000 - 00000000be200000 (reserved)=0D
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)=0D
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)=0D
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)=0D
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)=0D
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)=0D
(XEN)  00000000ff000000 - 0000000100000000 (reserved)=0D
(XEN)  0000000100000000 - 000000023fe00000 (usable)=0D
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)=0D
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)=
=0D
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)=
=0D
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)=
=0D
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)=
=0D
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)=
=0D
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)=
=0D
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)=
=0D
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240=
)=0D
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)=
=0D
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)=
=0D
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)=
=0D
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)=
=0D
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)=
=0D
(XEN) System RAM: 8046MB (8239752kB)=0D
(XEN) No NUMA configuration found=0D
(XEN) Faking a node at 0000000000000000-000000023fe00000=0D
(XEN) Domain heap initialised=0D
(XEN) found SMP MP-table at 000fd870=0D
(XEN) DMI 2.7 present.=0D
(XEN) Using APIC driver default=0D
(XEN) ACPI: PM-Timer IO Port: 0x1808=0D
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]=0D
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]=0D
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/000000000000000=
0, using 32=0D
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]=0D
(XEN) ACPI: Local APIC address 0xfee00000=0D
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)=0D
(XEN) Processor #0 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)=0D
(XEN) Processor #2 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)=0D
(XEN) Processor #4 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)=0D
(XEN) Processor #6 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)=0D
(XEN) Processor #1 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)=0D
(XEN) Processor #3 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)=0D
(XEN) Processor #5 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)=0D
(XEN) Processor #7 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])=0D
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])=0D
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23=0D
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)=0D
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)=0D
(XEN) ACPI: IRQ0 used by override.=0D
(XEN) ACPI: IRQ2 used by override.=0D
(XEN) ACPI: IRQ9 used by override.=0D
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs=0D
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000=0D
(XEN) [VT-D]dmar.c:778: Host address width 39=0D
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:=0D
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed90000=0D
(XEN) [VT-D]iommu.c:1157: drhd->address =3D fed90000 iommu->reg =3D ffff82c=
000201000=0D
(XEN) [VT-D]iommu.c:1159: cap =3D c0000020660462 ecap =3D f0101a=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0=0D
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:=0D
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed91000=0D
(XEN) [VT-D]iommu.c:1157: drhd->address =3D fed91000 iommu->reg =3D ffff82c=
000203000=0D
(XEN) [VT-D]iommu.c:1159: cap =3D d2008020660462 ecap =3D f010da=0D
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0=0D
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0=0D
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL=0D
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0=0D
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657=
fff=0D
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0=0D
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1ff=
fff=0D
(XEN) Xen ERST support is initialized.=0D
(XEN) Using ACPI (MADT) for SMP configuration information=0D
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)=0D
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X=0D
(XEN) Switched to APIC driver x2apic_cluster.=0D
(XEN) Using scheduler: SMP Credit Scheduler (credit)=0D
(XEN) Detected 3400.079 MHz processor.=0D
(XEN) Initing memory sharing.=0D
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7=0D
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 ext=
ended MCE MSR 0=0D
(XEN) Intel machine check reporting enabled=0D
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f=0D
(XEN) PCI: MCFG area at f8000000 reserved in E820=0D
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f=0D
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.=0D
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.=0D
(XEN) Intel VT-d Snoop Control not enabled.=0D
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.=0D
(XEN) Intel VT-d Queued Invalidation enabled.=0D
(XEN) Intel VT-d Interrupt Remapping enabled.=0D
(XEN) Intel VT-d Shared EPT tables not enabled.=0D
(XEN) 02:00.0: status=3D0010 (alloc_pdev+0xb4/0x2e9 wants 11)=0D
(XEN) 02:00.0: pos=3D40=0D
(XEN) 02:00.0: id=3D01=0D
(XEN) 02:00.0: pos=3D50=0D
(XEN) 02:00.0: id=3D05=0D
(XEN) 02:00.0: pos=3D70=0D
(XEN) 02:00.0: id=3D11=0D
(XEN) 02:00.1: status=3D0010 (alloc_pdev+0xb4/0x2e9 wants 11)=0D
(XEN) 02:00.1: pos=3D40=0D
(XEN) 02:00.1: id=3D01=0D
(XEN) 02:00.1: pos=3D50=0D
(XEN) 02:00.1: id=3D05=0D
(XEN) 02:00.1: pos=3D70=0D
(XEN) 02:00.1: id=3D11=0D
(XEN) I/O virtualisation enabled=0D
(XEN)  - Dom0 mode: Relaxed=0D
(XEN) Interrupt remapping enabled=0D
(XEN) Enabled directed EOI with ioapic_ack_old on!=0D
(XEN) ENABLING IO-APIC IRQs=0D
(XEN)  -> Using old ACK method=0D
(XEN) ..TIMER: vector=3D0xF0 apic1=3D0 pin1=3D2 apic2=3D-1 pin2=3D-1=0D
(XEN) TSC deadline timer enabled=0D
(XEN) [2014-01-25 01:29:31] Platform timer is 14.318MHz HPET=0D
(XEN) [2014-01-25 01:29:31] Allocated console ring of 1048576 KiB.=0D
(XEN) [2014-01-25 01:29:31] mwait-idle: MWAIT substates: 0x42120=0D
(XEN) [2014-01-25 01:29:31] mwait-idle: v0.4 model 0x3c=0D
(XEN) [2014-01-25 01:29:32] mwait-idle: lapic_timer_reliable_states 0xfffff=
fff=0D
(XEN) [2014-01-25 01:29:32] VMX: Supported advanced features:=0D
(XEN) [2014-01-25 01:29:32]  - APIC MMIO access virtualisation=0D
(XEN) [2014-01-25 01:29:32]  - APIC TPR shadow=0D
(XEN) [2014-01-25 01:29:32]  - Extended Page Tables (EPT)=0D
(XEN) [2014-01-25 01:29:32]  - Virtual-Processor Identifiers (VPID)=0D
(XEN) [2014-01-25 01:29:32]  - Virtual NMI=0D
(XEN) [2014-01-25 01:29:32]  - MSR direct-access bitmap=0D
(XEN) [2014-01-25 01:29:32]  - Unrestricted Guest=0D
(XEN) [2014-01-25 01:29:32]  - VMCS shadowing=0D
(XEN) [2014-01-25 01:29:32] HVM: ASIDs enabled.=0D
(XEN) [2014-01-25 01:29:32] HVM: VMX enabled=0D
(XEN) [2014-01-25 01:29:32] HVM: Hardware Assisted Paging (HAP) detected=0D
(XEN) [2014-01-25 01:29:32] HVM: HAP page sizes: 4kB, 2MB, 1GB=0D
(XEN) [2014-01-25 01:29:32] Brought up 8 CPUs=0D
(XEN) [2014-01-25 01:29:32] ACPI sleep modes: S3=0D
(XEN) [2014-01-25 01:29:32] mcheck_poll: Machine check polling timer starte=
d.=0D
(XEN) [2014-01-25 01:29:32] Multiple initrd candidates, picking module #1=0D
(XEN) [2014-01-25 01:29:32] *** LOADING DOMAIN 0 ***=0D
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=3D0x1000000 memsz=
=3D0xa28000=0D
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc20f0
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=0x1cc3000 memsz=0x14d80
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=3D0x1cd8000 memsz=
=3D0x71f000=0D
(XEN) [2014-01-25 01:29:32] elf_parse_binary: memory: 0x1000000 -> 0x23f700=
0=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: GUEST_OS =3D "linux"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: GUEST_VERSION =3D "2.6"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: XEN_VERSION =3D "xen-3.0"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: VIRT_BASE =3D 0xffffffff800=
00000=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: ENTRY =3D 0xffffffff81cd81e=
0=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: HYPERCALL_PAGE =3D 0xffffff=
ff81001000=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: FEATURES =3D "!writable_pag=
e_tables|pae_pgdir_above_4gb"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: PAE_MODE =3D "yes"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: LOADER =3D "generic"=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: unknown xen elf note (0xd)=
=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: SUSPEND_CANCEL =3D 0x1=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: HV_START_LOW =3D 0xffff8000=
00000000=0D
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: PADDR_OFFSET =3D 0x0=0D
(XEN) [2014-01-25 01:29:32] elf_xen_addr_calc_check: addresses:=0D
(XEN) [2014-01-25 01:29:32]     virt_base        =3D 0xffffffff80000000=0D
(XEN) [2014-01-25 01:29:32]     elf_paddr_offset =3D 0x0=0D
(XEN) [2014-01-25 01:29:32]     virt_offset      =3D 0xffffffff80000000=0D
(XEN) [2014-01-25 01:29:32]     virt_kstart      =3D 0xffffffff81000000=0D
(XEN) [2014-01-25 01:29:32]     virt_kend        =3D 0xffffffff823f7000=0D
(XEN) [2014-01-25 01:29:32]     virt_entry       =3D 0xffffffff81cd81e0=0D
(XEN) [2014-01-25 01:29:32]     p2m_base         =3D 0xffffffffffffffff=0D
(XEN) [2014-01-25 01:29:32]  Xen  kernel: 64-bit, lsb, compat32=0D
(XEN) [2014-01-25 01:29:32]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000=
 -> 0x23f7000=0D
(XEN) [2014-01-25 01:29:32] PHYSICAL MEMORY ARRANGEMENT:=0D
(XEN) [2014-01-25 01:29:32]  Dom0 alloc.:   000000022c000000->0000000230000=
000 (487082 pages to be allocated)=0D
(XEN) [2014-01-25 01:29:32]  Init. ramdisk: 000000023ac31000->000000023fd86=
b1b=0D
(XEN) [2014-01-25 01:29:32] VIRTUAL MEMORY ARRANGEMENT:=0D
(XEN) [2014-01-25 01:29:32]  Loaded kernel: ffffffff81000000->ffffffff823f7=
000=0D
(XEN) [2014-01-25 01:29:32]  Init. ramdisk: ffffffff823f7000->ffffffff8754c=
b1b=0D
(XEN) [2014-01-25 01:29:32]  Phys-Mach map: ffffffff8754d000->ffffffff8794d=
000=0D
(XEN) [2014-01-25 01:29:32]  Start info:    ffffffff8794d000->ffffffff8794d=
4b4=0D
(XEN) [2014-01-25 01:29:32]  Page tables:   ffffffff8794e000->ffffffff8798f=
000=0D
(XEN) [2014-01-25 01:29:32]  Boot stack:    ffffffff8798f000->ffffffff87990=
000=0D
(XEN) [2014-01-25 01:29:32]  TOTAL:         ffffffff80000000->ffffffff87c00=
000=0D
(XEN) [2014-01-25 01:29:32]  ENTRY ADDRESS: ffffffff81cd81e0=0D
(XEN) [2014-01-25 01:29:32] Dom0 has maximum 1 VCPUs=0D
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 0 at 0xffffffff81000000 -=
> 0xffffffff81a28000=0D
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 1 at 0xffffffff81c00000 -=
> 0xffffffff81cc20f0=0D
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 2 at 0xffffffff81cc3000 -=
> 0xffffffff81cd7d80=0D
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 3 at 0xffffffff81cd8000 -=
> 0xffffffff81e7b000=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00=
:00.0 map=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:02.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0=0D
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:750: iommu_enable_translation: io=
mmu->reg =3D ffff82c000201000=0D
(XEN) [2014-01-25 01:29:33] Scrubbing Free RAM: ................................................done.
(XEN) [2014-01-25 01:29:33] Initial low memory virq threshold set at 0x4000=
 pages.=0D
(XEN) [2014-01-25 01:29:33] Std. Loglevel: All
(XEN) [2014-01-25 01:29:33] Guest Loglevel: All
(XEN) [2014-01-25 01:29:33] **********************************************
(XEN) [2014-01-25 01:29:33] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-25 01:29:33] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-25 01:29:33] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-25 01:29:33] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-25 01:29:33] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-25 01:29:33] **********************************************
(XEN) [2014-01-25 01:29:33] 3... 2... 1... 
(XEN) [2014-01-25 01:29:36] *** Serial input -> DOM0 (type 'CTRL-a' three t=
imes to switch input to Xen)=0D
(XEN) [2014-01-25 01:29:36] Freed 272kB init memory.=0D
mapping kernel into physical memory=0D
about to get started...=0D
[    0.000000] Initializing cgroup subsys cpuset=0D
[    0.000000] Initializing cgroup subsys cpu=0D
[    0.000000] Initializing cgroup subsys cpuacct ... xen-pciback.hide=(02:00.*) kgdboc=hvc0=0D
[    0.000000] Freeing 99-100 pfn range: 103 pages freed=0D
[    0.000000] 1-1 mapping on 99->100=0D
[    0.000000] 1-1 mapping on a58f1->a58f8=0D
[    0.000000] 1-1 mapping on a61b1->a6597=0D
[    0.000000] 1-1 mapping on b74b4->b76cb=0D
[    0.000000] 1-1 mapping on b770c->b7fff=0D
[    0.000000] 1-1 mapping on b8000->100000=0D
[    0.000000] Released 103 pages of unused memory=0D
[    0.000000] Set 298846 page(s) to 1-1 mapping=0D
[    0.000000] Populating 80000-80067 pfn range: 103 pages added=0D
[    0.000000] e820: BIOS-provided physical RAM map:=0D
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable=0D
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved=0D
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable=0D
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable=0D
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS=0D
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable=0D
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved=0D
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable=0D
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved=0D
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable=0D
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS=0D
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved=0D
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable=0D
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved=0D
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved=0D
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved=0D
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved=0D
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved=0D
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved=0D
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved=0D
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable=0D
[    0.000000] NX (Execute Disable) protection: active=0D
[    0.000000] SMBIOS 2.7 present.=0D
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013=0D
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable =3D=3D> rese=
rved=0D
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable=0D
[    0.000000] e820: last_pfn =3D 0x80067 max_arch_pfn =3D 0x400000000=0D
[    0.000000] Scanning 1 areas for low memory corruption=0D
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 2457=
6=0D
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]=0D
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k=0D
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]=0D
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k=0D
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE=0D
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE=0D
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]=0D
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k=0D
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE=0D
[    0.000000] BRK [0x01ff2000, 0x01ff2fff] PGTABLE=0D
[    0.000000] BRK [0x01ff3000, 0x01ff3fff] PGTABLE=0D
[    0.000000] BRK [0x01ff4000, 0x01ff4fff] PGTABLE=0D
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]=0D
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k=0D
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]=0D
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k=0D
[    0.000000] RAMDISK: [mem 0x023f7000-0x0754cfff]=0D
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)=0D
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 0107=
2009 AMI  00010013)=0D
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 0107=
2009 AMI  00010013)=0D
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 0000=
0000 INTL 20091112)=0D
[    0.000000] ACPI: FACS 00000000b77b7080 000040=0D
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 0107=
2009 AMI  00010013)=0D
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 0107=
2009 AMI  00010013)=0D
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 0000=
3000 INTL 20051117)=0D
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 0000=
3000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    6.004223] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    6.004224] Policy zone: DMA32
[    6.004225] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAAA.hide=(02:00.*) kgdboc=hvc0
[    6.004539] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    6.004569] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    6.025034] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    6.028115] Memory: 1891592K/2097148K available (7058K kernel code, 773K rwdata, 2208K rodata, 1724K init, 1380K bss, 205556K reserved)
[    6.028345] Hierarchical RCU implementation.
[    6.028345] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    6.028346] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    6.028354] NR_IRQS:33024 nr_irqs:256 16
[    6.028433] xen: sci override: global_irq=9 trigger=0 polarity=0
[    6.028435] xen: registering gsi 9 triggering 0 polarity 0
[    6.028445] xen: --> pirq=9 -> irq=9 (gsi=9)
[    6.028467] xen: acpi sci 9
[    6.028470] xen: --> pirq=1 -> irq=1 (gsi=1)
[    6.028473] xen: --> pirq=2 -> irq=2 (gsi=2)
[    6.028475] xen: --> pirq=3 -> irq=3 (gsi=3)
[    6.028478] xen: --> pirq=4 -> irq=4 (gsi=4)
[    6.028480] xen: --> pirq=5 -> irq=5 (gsi=5)
[    6.028483] xen: --> pirq=6 -> irq=6 (gsi=6)
[    6.028485] xen: --> pirq=7 -> irq=7 (gsi=7)
[    6.028488] xen: --> pirq=8 -> irq=8 (gsi=8)
[    6.028490] xen: --> pirq=10 -> irq=10 (gsi=10)
[    6.028493] xen: --> pirq=11 -> irq=11 (gsi=11)
[    6.028495] xen: --> pirq=12 -> irq=12 (gsi=12)
[    6.028498] xen: --> pirq=13 -> irq=13 (gsi=13)
[    6.028500] xen: --> pirq=14 -> irq=14 (gsi=14)
[    6.028503] xen: --> pirq=15 -> irq=15 (gsi=15)
[    6.030067] Console: colour VGA+ 80x25
[    6.981340] console [hvc0] enabled
[    6.985294] Xen: using vcpuop timer interface
[    6.989644] installing Xen timer for CPU 0
[    6.993827] tsc: Detected 3400.078 MHz processor
[    6.998509] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.15 BogoMIPS (lpj=3400078)
[    7.009143] pid_max: default: 32768 minimum: 301
[    7.013988] Security Framework initialized
[    7.018079] SELinux:  Initializing.
[    7.021655] SELinux:  Starting in permissive mode
[    7.026744] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    7.034202] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    7.041369] Mount-cache hash table entries: 256
[    7.046366] Initializing cgroup subsys freezer
[    7.050877] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    7.050877] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    7.063979] CPU: Physical Processor ID: 0
[    7.068049] CPU: Processor Core ID: 0
[    7.072492] mce: CPU supports 2 MCE banks
[    7.076503] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    7.076503] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    7.076503] tlb_flushall_shift: 6
[    7.113961] Freeing SMP alternatives memory: 32K (ffffffff81e72000 - ffffffff81e7a000)
[    7.122611] ACPI: Core revision 2
[    7.176872] ACPI: All ACPI Tables successfully acquired
[    7.183637] cpu 0 spinlock event irq 41
[    7.187516] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    7.198516] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    7.205985] calling  set_real_mode_per_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    7.233718] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    7.239351] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    7.246803] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    7.252524] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    7.260113] calling  init_hw_perf_events+0x0/0x53b @ 1
[    7.265287] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    7.274128] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    7.281408] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    7.287822] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    7.296053] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    7.301522] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    7.308646] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    7.313440] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    7.320000] calling  init_workqueues+0x0/0x59a @ 1
[    7.325011] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    7.331613] calling  migration_init+0x0/0x72 @ 1
[    7.336292] initcall migration_init+0x0/0x72 returned 0 after 0 usecs
[    7.342792] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    7.347993] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    7.355011] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    7.360904] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    7.368618] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    7.373856] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    7.380843] calling  cpu_stop_init+0x0/0x76 @ 1
[    7.385456] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    7.391845] calling  relay_init+0x0/0x14 @ 1
[    7.396176] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    7.402330] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    7.407639] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    7.414723] calling  init_events+0x0/0x61 @ 1
[    7.419144] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    7.425383] calling  init_trace_printk+0x0/0x12 @ 1
[    7.430322] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    7.437082] calling  event_trace_memsetup+0x0/0x52 @ 1
[    7.442303] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    7.449302] calling  jump_label_init_module+0x0/0x12 @ 1
[    7.454676] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    7.461870] calling  balloon_clear+0x0/0x4f @ 1
[    7.466463] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    7.472875] calling  rand_initialize+0x0/0x30 @ 1
[    7.477664] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    7.484228] calling  mce_amd_init+0x0/0x165 @ 1
[    7.488822] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.495261] x86: Booted up 1 node, 1 CPUs
[    7.500015] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.506658] devtmpfs: initialized
[    7.512564] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.516911] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.523150] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.528177] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.535022] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.541699] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.550191] calling  net_ns_init+0x0/0x104 @ 1
[    7.554753] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.561036] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.566224] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.574118] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.582278] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.589482] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.593901] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.600143] calling  reboot_init+0x0/0x1d @ 1
[    7.604564] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.610804] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.615656] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.622328] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.627877] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.635243] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.640095] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.646770] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.651465] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.658212] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.664734] calling  ksysfs_init+0x0/0x94 @ 1
[    7.669200] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.675394] calling  pm_init+0x0/0x4e @ 1
[    7.679506] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.685361] calling  pm_disk_init+0x0/0x19 @ 1
[    7.689883] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.696195] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.701220] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.708068] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.713613] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.720981] calling  cgroup_wq_init+0x0/0x5c @ 1
[    7.725666] initcall cgroup_wq_init+0x0/0x5c returned 0 after 0 usecs
[    7.732159] calling  event_trace_enable+0x0/0x173 @ 1
[    7.737765] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.744623] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.749214] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.755628] calling  fsnotify_init+0x0/0x26 @ 1
[    7.760222] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.766634] calling  filelock_init+0x0/0x84 @ 1
[    7.771238] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.777640] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.782495] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.789168] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.794193] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.801040] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.805806] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.812393] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.817767] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.824959] calling  debugfs_init+0x0/0x5c @ 1
[    7.829476] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.835791] calling  securityfs_init+0x0/0x53 @ 1
[    7.840568] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.847146] calling  prandom_init+0x0/0xe2 @ 1
[    7.851652] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.857979] calling  virtio_init+0x0/0x30 @ 1
[    7.862504] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.868674] calling  __gnttab_init+0x0/0x30 @ 1
[    7.873269] xen:grant_table: Grant tables using version 2 layout
[    7.879352] Grant table initialized
[    7.882885] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.889559] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.894611] RTC time:  1:29:37, date: 01/25/14
[    7.899091] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.906112] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.911052] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.917984] calling  cpuidle_init+0x0/0x40 @ 1
[    7.922492] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.928993] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.933932] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.940691] calling  sock_init+0x0/0x8b @ 1
[    7.945043] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.951039] calling  net_inuse_init+0x0/0x26 @ 1
[    7.955721] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.962219] calling  netpoll_init+0x0/0x31 @ 1
[    7.966724] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.973051] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.978206] NET: Registered protocol family 16
[    7.982697] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.989791] calling  bdi_class_init+0x0/0x4d @ 1
[    7.994575] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    8.001006] calling  kobject_uevent_init+0x0/0x12 @ 1
[    8.006132] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    8.013048] calling  pcibus_class_init+0x0/0x19 @ 1
[    8.018053] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    8.024748] calling  pci_driver_init+0x0/0x12 @ 1
[    8.029611] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    8.036128] calling  backlight_class_init+0x0/0x85 @ 1
[    8.041386] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    8.048349] calling  video_output_class_init+0x0/0x19 @ 1
[    8.053875] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    8.061086] calling  xenbus_init+0x0/0x26f @ 1
[    8.065687] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    8.071940] calling  tty_class_init+0x0/0x38 @ 1
[    8.076688] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    8.083118] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    8.088489] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    8.095444] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    8.101256] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    8.108876] calling  register_node_type+0x0/0x34 @ 1
[    8.114034] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    8.120807] calling  i2c_init+0x0/0x70 @ 1
[    8.125137] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    8.131042] calling  init_ladder+0x0/0x12 @ 1
[    8.135463] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    8.141873] calling  init_menu+0x0/0x12 @ 1
[    8.146121] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    8.152361] calling  amd_postcore_init+0x0/0x143 @ 1
[    8.157389] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    8.164248] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    8.169800] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    8.177146] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    8.182291] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    8.189193] calling  mtrr_if_init+0x0/0x78 @ 1
[    8.193700] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    8.200200] calling  ffh_cstate_init+0x0/0x2a @ 1
[    8.204969] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    8.211552] calling  activate_jump_labels+0x0/0x32 @ 1
[    8.216752] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    8.223772] calling  acpi_pci_init+0x0/0x61 @ 1
[    8.228366] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    8.235992] ACPI: bus type PCI registered
[    8.240066] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    8.246565] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    8.253238] calling  dma_bus_init+0x0/0xd6 @ 1
[    8.257869] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    8.264601] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    8.271084] calling  dma_channel_table_init+0x0/0xde @ 1
[    8.276471] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    8.283649] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    8.289197] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    8.296560] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    8.302197] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    8.309648] calling  xen_pcpu_init+0x0/0xcc @ 1
[    8.315098] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    8.321452] calling  dmi_id_init+0x0/0x31d @ 1
[    8.326205] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    8.332458] calling  dca_init+0x0/0x20 @ 1
[    8.336617] dca service started, version 1.12.1
[    8.341269] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.347365] calling  iommu_init+0x0/0x58 @ 1
[    8.351707] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.357851] calling  pci_arch_init+0x0/0x69 @ 1
[    8.362460] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.371803] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.386554] PCI: Using configuration type 1 for base access
[    8.392119] initcall pci_arch_init+0x0/0x69 returned 0 after 9765 usecs
[    8.398804] calling  topology_init+0x0/0x98 @ 1
[    8.403804] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.410163] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.415258] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.422192] calling  init_vdso+0x0/0x135 @ 1
[    8.426526] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.432676] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.437444] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.444031] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.465475] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.472513] calling  pm_sysrq_init+0x0/0x19 @ 1
[    8.477103] initcall pm_sysrq_init+0x0/0x19 returned 0 after 0 usecs
[    8.483514] calling  default_bdi_init+0x0/0x65 @ 1
[    8.488672] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.495276] calling  init_bio+0x0/0xe9 @ 1
[    8.499490] bio: create slab <bio-0> at 0
[    8.503557] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.509663] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.514341] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.520842] calling  blk_settings_init+0x0/0x2c @ 1
[    8.525782] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.532542] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.537060] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.543373] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.548228] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.554901] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.559752] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.566427] calling  blk_mq_init+0x0/0x5f @ 1
[    8.570847] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.577086] calling  genhd_device_init+0x0/0x85 @ 1
[    8.582171] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.588858] calling  pci_slot_init+0x0/0x50 @ 1
[    8.593457] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.599861] calling  fbmem_init+0x0/0x98 @ 1
[    8.604266] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.610348] calling  acpi_init+0x0/0x27a @ 1
[    8.614707] ACPI: Added _OSI(Module Device)
[    8.618930] ACPI: Added _OSI(Processor Device)
[    8.623433] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.628200] ACPI: Added _OSI(Processor Aggregator Device)
[    8.637452] ACPI: Executed 1 blocks of module-level executable AML code
[    8.669789] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.677671] \_SB_:_OSC invalid UUID
[    8.681155] _OSC request data:1 1f 
[    8.686851] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.696080] ACPI: Dynamic OEM Table Load:
[    8.700077] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.709930] ACPI: Interpreter enabled
[    8.713597] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.722861] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.732143] ACPI: (supports S0 S1 S4 S5)
[    8.736114] ACPI: Using IOAPIC for interrupt routing
[    8.741516] HEST: Table parsing has been initialized.
[    8.746566] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.756953] ACPI: No dock devices found.
[    8.858976] ACPI: Power Resource [FN00] (off)
[    8.864120] ACPI: Power Resource [FN01] (off)
[    8.869290] ACPI: Power Resource [FN02] (off)
[    8.874421] ACPI: Power Resource [FN03] (off)
[    8.879562] ACPI: Power Resource [FN04] (off)
[    8.889203] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.895378] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    8.906135] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.915140] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.928438] PCI host bridge to bus 0000:00
[    8.932526] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.938073] p0-0xffff]
[    8.950552] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.957484] pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff]
[    8.964418] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.971350] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.978284] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.985217] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.992151] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.999094] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:00.0
[    9.016818] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    9.022973] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    9.029591] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:01.0
[    9.046582] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    9.052648] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1.1 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:01.1
[    9.070468] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    9.076484] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    9.083320] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    9.090597] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:2.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:02.0
[    9.107914] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    9.113933] pci 0000:00:03.0: reg 0x10: [mem 0xf1b34000-0xf1b37fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:3.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:03.0
[    9.132511] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    9.138571] pci 0000:00:14.0: reg 0x10: [mem 0xf1b20000-0xf1b2ffff 64bit]
[    9.145504] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    9.151734] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:14.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:14.0
[    9.168829] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    9.174870] pci 0000:00:16.0: reg 0x10: [mem 0xf1b3f000-0xf1b3f00f 64bit]
[    9.181809] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:16.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:16.0
[    9.199688] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    9.205727] pci 0000:00:19.0: reg 0x10: [mem 0xf1b00000-0xf1b1ffff]
[    9.212022] pci 0000:00:19.0: reg 0x14: [mem 0xf1b3d000-0xf1b3dfff]
[    9.218348] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    9.224110] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    9.230598] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:19.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:19.0
[    9.247696] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    9.253738] pci 0000:00:1a.0: reg 0x10: [mem 0xf1b3c000-0xf1b3c3ff]
[    9.260191] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    9.266798] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1a.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1a.0
[    9.283899] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    9.289930] pci 0000:00:1b.0: reg 0x10: [mem 0xf1b30000-0xf1b33fff 64bit]
[    9.296894] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    9.303381] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1b.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1b.0
[    9.320462] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    9.326626] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    9.333119] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.0
[    9.350208] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    9.356371] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    9.362863] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.3 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.3
[    9.379949] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    9.386112] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    9.392608] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.5 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.5
[    9.409702] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    9.415864] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.422358] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.6 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.6
[    9.439444] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.445607] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.452101] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.7 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.7
[    9.469203] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.475245] pci 0000:00:1d.0: reg 0x10: [mem 0xf1b3b000-0xf1b3b3ff]
[    9.481697] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.488276] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1d.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1d.0
[    9.505372] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.0
[    9.523303] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.529339] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.534941] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.540573] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.546208] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.551841] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.557474] pci 0000:00:1f.2: reg 0x24: [mem 0xf1b3a000-0xf1b3a7ff]
[    9.563881] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.2 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.2
[    9.580892] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.586922] pci 0000:00:1f.3: reg 0x10: [mem 0xf1b39000-0xf1b390ff 64bit]
[    9.593772] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.3 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.3
[    9.611152] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.617194] pci 0000:00:1f.6: reg 0x10: [mem 0xf1b38000-0xf1b38fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.6 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.6
[    9.636130] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.646905] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.652183] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.659048] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.669850] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.675893] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.682214] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.688541] pci 0000:02:00.0: reg 0x18: [io  0xe020-0xe03f]
[    9.694174] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.700519] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.707312] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.713439] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.720353] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 2:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:02:00.0
[    9.738706] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.744718] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.751037] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.757362] pci 0000:02:00.1: reg 0x18: [io  0xe000-0xe01f]
[    9.762996] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.769343] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.776133] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.782259] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.789174] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 2:0.1 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:02:00.1
[    9.809600] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.814816] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[    9.820968] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.827816] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.834852] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.845671] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.851716] pci 0000:04:00.0: reg 0x10: [mem 0xf1aa0000-0xf1abffff]
[    9.858031] pci 0000:04:00.0: reg 0x14: [mem 0xf1a80000-0xf1a9ffff]
[    9.864356] pci 0000:04:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.870074] pci 0000:04:00.0: reg 0x30: [mem 0xf1a60000-0xf1a7ffff pref]
[    9.876901] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.883130] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 4:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:04:00.0
[    9.900194] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.906229] pci 0000:04:00.1: reg 0x10: [mem 0xf1a40000-0xf1a5ffff]
[    9.912543] pci 0000:04:00.1: reg 0x14: [mem 0xf1a20000-0xf1a3ffff]
[    9.918868] pci 0000:04:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.924584] pci 0000:04:00.1: reg 0x30: [mem 0xf1a00000-0xf1a1ffff pref]
[    9.931411] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 4:0.1 flags:0
(XEN) [2014-01-25 01:29:40] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-25 01:29:40] PCI add device 0000:04:00.1
[    9.957555] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.962778] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[    9.968930] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.975782] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.982813] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.993644] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.999677] pci 0000:05:00.0: reg 0x10: [mem 0xf1900000-0xf197ffff]
[   10.006013] pci 0000:05:00.0: reg 0x18: [io  0xc000-0xc01f]
[   10.011627] pci 0000:05:00.0: reg 0x1c: [mem 0xf1980000-0xf1983fff]
[   10.018128] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[   10.024367] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 5:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:05:00.0
[   10.043509] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[   10.048734] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   10.054886] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   10.061734] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[   10.068808] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.079636] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[   10.085873] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[   10.092649] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 6:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:06:00.0
[   10.109661] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[   10.114890] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   10.121752] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[   10.130246] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[   10.136445] pci 0000:07:01.0: supports D1 D2
[   10.140705] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 7:1.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:07:01.0
[   10.158654] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[   10.164692] pci 0000:07:03.0: reg 0x10: [mem 0xf1604000-0xf16047ff]
[   10.170999] pci 0000:07:03.0: reg 0x14: [mem 0xf1600000-0xf1603fff]
[   10.177484] pci 0000:07:03.0: supports D1 D2
[   10.181741] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 7:3.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:07:03.0
[   10.205793] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[   10.212846] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   10.219688] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.228257] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff] (subtractive decode)
[   10.236923] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.245502] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.254084] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[   10.262532] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[   10.268590] pci 0000:08:08.0: reg 0x10: [mem 0xf1507000-0xf1507fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:8.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:08.0
[   10.293383] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[   10.299434] pci 0000:08:08.1: reg 0x10: [mem 0xf1506000-0xf1506fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:8.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:08.1
[   10.324249] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[   10.330296] pci 0000:08:09.0: reg 0x10: [mem 0xf1505000-0xf1505fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:9.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:09.0
[   10.355093] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[   10.361151] pci 0000:08:09.1: reg 0x10: [mem 0xf1504000-0xf1504fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:9.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:09.1
[   10.385981] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[   10.392035] pci 0000:08:0a.0: reg 0x10: [mem 0xf1503000-0xf1503fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:a.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0a.0
[   10.416830] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[   10.422882] pci 0000:08:0a.1: reg 0x10: [mem 0xf1502000-0xf1502fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:a.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0a.1
[   10.447704] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[   10.453757] pci 0000:08:0b.0: reg 0x10: [mem 0xf1501000-0xf1501fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:b.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0b.0
[   10.478555] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[   10.484613] pci 0000:08:0b.1: reg 0x10: [mem 0xf1500000-0xf1500fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:b.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0b.1
[   10.509468] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.514694] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   10.521531] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.528206] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.534877] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.541908] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.552799] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.558907] pci 0000:09:00.0: reg 0x10: [mem 0xf1800000-0xf1801fff 64bit]
[   10.566074] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.572364] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 9:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:09:00.0
[   10.591548] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.596771] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   10.603615] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.610646] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.621449] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.627497] pci 0000:0a:00.0: reg 0x10: [io  0xb050-0xb057]
[   10.633123] pci 0000:0a:00.0: reg 0x14: [io  0xb040-0xb043]
[   10.638754] pci 0000:0a:00.0: reg 0x18: [io  0xb030-0xb037]
[   10.644388] pci 0000:0a:00.0: reg 0x1c: [io  0xb020-0xb023]
[   10.650021] pci 0000:0a:00.0: reg 0x20: [io  0xb000-0xb01f]
[   10.655656] pci 0000:0a:00.0: reg 0x24: [mem 0xf1700000-0xf17001ff]
[   10.662190] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of a:0.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:0a:00.0
[   10.687820] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.693041] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   10.699191] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   10.706043] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.712806] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.724592] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.731909] ACPI: PCI Interrupt Link [LNKB] (pink [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.753851] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.761159] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.769596] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.776910] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.785344] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.790137] ACPI: \_SB_.PCI0: notify handler is installed
[   10.795622] Found 1 acpi root devices
[   10.799423] initcall acpi_init+0x0/0x27a returned 0 after 443359 usecs
[   10.805949] calling  pnp_init+0x0/0x12 @ 1
[   10.810290] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.816206] calling  balloon_init+0x0/0x242 @ 1
[   10.820798] xen:balloon: Initialising balloon driver
[   10.825824] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.832411] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.837955] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.845323] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.851050] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.858438] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.864274] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.871738] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.876753] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.883437] calling  balloon_init+0x0/0xfa @ 1
[   10.887941] xen_balloon: Initialising balloon driver
[   10.893249] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.899682] calling  misc_init+0x0/0xba @ 1
[   10.904021] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.910020] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.915283] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.923358] vgaarb: loaded
[   10.926126] vgaarb: bridge control possible 0000:00:02.0
[   10.931500] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.938694] calling  cn_init+0x0/0xc0 @ 1
[   10.942785] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.948660] calling  dma_buf_init+0x0/0x75 @ 1
[   10.953178] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.959492] calling  phy_init+0x0/0x2e @ 1
[   10.963873] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.969784] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.974525] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.980965] calling  usb_init+0x0/0x169 @ 1
[   10.985222] ACPI: bus type USB registered
[   10.989491] usbcore: registered new interface driver usbfs
[   10.995072] usbcore: registered new interface driver hub
[   11.000485] usbcore: registered new device driver usb
[   11.005534] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   11.011857] calling  serio_init+0x0/0x31 @ 1
[   11.016286] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   11.022372] calling  input_init+0x0/0x103 @ 1
[   11.026858] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   11.033032] calling  rtc_init+0x0/0x5b @ 1
[   11.037256] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   11.043170] calling  pps_init+0x0/0xb7 @ 1
[   11.047391] pps_core: LinuxPPS API ver. 1 registered
[   11.052357] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   11.061541] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   11.067781] calling  ptp_init+0x0/0xa4 @ 1
[   11.072002] PTP clock support registered
[   11.075927] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   11.082080] calling  power_supply_class_init+0x0/0x44 @ 1
[   11.087601] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   11.094824] calling  hwmon_init+0x0/0xe3 @ 1
[   11.099217] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   11.105309] calling  leds_init+0x0/0x40 @ 1
[   11.109616] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   11.115623] calling  efisubsys_init+0x0/0x142 @ 1
[   11.120388] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   11.126975] calling  pci_subsys_init+0x0/0x4f @ 1
[   11.131739] PCI: Using ACPI for IRQ routing
[   11.139418] PCI: pci_cache_line_size set to 64 bytes
[   11.144579] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   11.150573] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   11.156639] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   11.163485] calling  proto_init+0x0/0x12 @ 1
[   11.167823] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   11.173969] calling  net_dev_init+0x0/0x1c6 @ 1
[   11.179199] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   11.185545] calling  neigh_init+0x0/0x80 @ 1
[   11.189874] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   11.196026] calling  fib_rules_init+0x0/0xaf @ 1
[   11.200706] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   11.207206] calling  pktsched_init+0x0/0x10a @ 1
[   11.211891] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   11.218386] calling  tc_filter_init+0x0/0x55 @ 1
[   11.223065] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   11.229566] calling  tc_action_init+0x0/0x55 @ 1
[   11.234245] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   11.240746] calling  genl_init+0x0/0x85 @ 1
[   11.245008] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   11.251059] calling  cipso_v4_init+0x0/0x61 @ 1
[   11.255653] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   11.262065] calling  netlbl_init+0x0/0x81 @ 1
[   11.266509] NetLabel: Initializing
[   11.269976] NetLabel:  domain hash size = 128
[   11.274394] NetLabel:  protocols = UNLABELED CIPSOv4
[   11.279461] NetLabel:  unlabeled traffic allowed by default
[   11.285056] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   11.291556] calling  rfkill_init+0x0/0x79 @ 1
[   11.296155] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   11.302325] calling  xen_mcfg_late+0x0/0xab @ 1
[   11.306915] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   11.313346] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   11.318110] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   11.324680] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   11.330015] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   11.337073] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   11.342192] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   11.349119] calling  hpet_late_init+0x0/0x101 @ 1
[   11.353886] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   11.360644] calling  init_amd_nbs+0x0/0xb8 @ 1
[   11.365154] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   11.371478] calling  clocksource_done_booting+0x0/0x42 @ 1
[   11.377031] Switched to clocksource xen
[   11.380931] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3811 usecs
[   11.388554] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   11.394045] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 284 usecs
[   11.401171] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   11.407503] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   11.415643] calling  event_trace_init+0x0/0x205 @ 1
[   11.435392] initcall event_trace_init+0x0/0x205 returned 0 after 14459 usecs
[   11.442431] calling  init_kprobe_trace+0x0/
[   11.469847] initcall eventpoll_init+0x0/0xda returned 0 after 29 usecs
[   11.476403] calling  anon_inode_init+0x0/0x5b @ 1
[   11.481204] initcall anon_inode_init+0x0/0x5b returned 0 after 34 usecs
[   11.487842] calling  init_ramfs_fs+0x0/0x4d @ 1
[   11.492444] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   11.498849] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.504050] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.511069] calling  acpi_event_init+0x0/0x3a @ 1
[   11.515854] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   11.522510] calling  pnp_system_init+0x0/0x12 @ 1
[   11.527371] initcall pnp_system_init+0x0/0x12 returned 0 after 94 usecs
[   11.533988] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.538481] pnp: PnP ACPI init
[   11.541623] ACPI: bus type PNP registered
[   11.546002] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.552600] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.559491] pnp 00:01: [dma 4]
[   11.562708] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.569395] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.576467] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.584008] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.589924] system 00:04: [io  0xffff] has been reserved
[   11.595296] system 00:04: [io  0xffff] has been reserved
[   11.600668] system 00:04: [io  0xffff] has been reserved
[   11.606043] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.612020] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.618001] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.623981] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.629962] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.635941] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.642267] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.648244] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.655122] xen: registering gsi 8 triggering 1 polarity 0
[   11.660806] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.667649] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.673559] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.681890] kworker/u2:0 (517) used greatest stack depth: 5560 bytes left
[   11.688704] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.694651] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.700625] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.708874] xen: registering gsi 4 triggering 1 polarity 0
[   11.714351] Already setup the GSI :4
[   11.717995] pnp 00:08: [dma 0 disabled]
[   11.722157] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.729876] xen: registering gsi 3 triggering 1 polarity 0
[   11.735371] pnp 00:09: [dma 0 disabled]
[   11.739456] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.746302] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.752217] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.759094] xen: registering gsi 13 triggering 1 polarity 0
[   11.764906] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.774553] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.781163] system 00:0c: [mem 0xfed10000-0xfed00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.794501] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.801176] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.807849] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.814522] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.821195] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.827868] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.834542] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.841214] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.847888] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.854556] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.863457] pnp: PnP ACPI: found 13 devices
[   11.867634] ACPI: bus type PNP unregistered
[   11.871880] initcall pnpacpi_init+0x0/0x8c returned 0 after 325583 usecs
[   11.878641] calling  pcistub_init+0x0/0x29f @ 1
[   11.883902] initcall pcistub_init+0x0/0x29f returned 0 after 653 usecs
[   11.890429] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.904149] initcall chr_dev_init+0x0/0xc6 returned 0 after 9007 usecs
[   11.910667] calling  firmware_class_init+0x0/0xec @ 1
[   11.915868] initcall firmware_class_init+0x0/0xec returned 0 after 87 usecs
[   11.922816] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.927722] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 139 usecs
[   11.934414] calling  thermal_init+0x0/0x8b @ 1
[   11.938996] initcall thermal_init+0x0/0x8b returned 0 after 75 usecs
[   11.945350] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.951240] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.959125] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.967825] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.974251] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9354 usecs
[   11.982047] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.987703] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.992677] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.999601] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.006535] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.013464] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.020399] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.027330] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.034263] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.041199] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.048130] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.055062] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.061997] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.068920] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   12.076303] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.083221] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   12.090690] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.097606] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   12.104989] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.111905] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   12.119366] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   12.124647] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[   12.130801] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   12.137649] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   12.142673] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[   12.148830] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   12.155682] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   12.160700] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   12.166857] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   12.173709] pci 0000:07:01.0: PCI bridge to [bus 08]
[   12.178734] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   12.185590] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   12.190864] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   12.197719] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   12.202996] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   12.209849] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   12.214868] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   12.221722] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   12.226738] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   12.232894] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   12.239748] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   12.245370] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   12.251002] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   12.257331] pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff]
[   12.263703] pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff]
[   12.270006] pci_bus 0000:00: resource 9 [mem 0x000dc000-0x000dffff]
[   12.276332] pci_bus 0000:00: resource 10 [mem 0x000e0000-0x000e3fff]
[   12.282744] pci_bus 0000:00: resource 11 [mem 0x000e4000-0x000e7fff]
[   12.289160] pci_bus 0000:00: resource 12 [mem 0xbe200000-0xfeafffff]
[   12.295572] pci_bus 0000:02: resource 0 [io  0xe000-0xefff]
[   12.301206] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   12.307533] pci_bus 0000:04: resource 0 [io  0xd000-0xdfff]
[   12.313165] pci_bus 0000:04: resource 1 [mem 0xf1a00000-0xf1afffff]
[   12.319492] pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
[   12.325125] pci_bus 0000:05: resource 1 [mem 0xf1900000-0xf19fffff]
[   12.331451] pci_bus 0000:06: resource 1 [mem 0xf1500000-0xf16fffff]
[   12.337779] pci_bus 0000:07: resource 1 [mem 0xf1500000-0xf16fffff]
[   12.344104] pci_bus 0000:07: resource 5 [mem 0xf1500000-0xf16fffff]
[   12.350430] pci_bus 0000:08: resource 1 [mem 0xf1500000-0xf15fffff]
[   12.356758] pci_bus 0000:09: resource 1 [mem 0xf1800000-0xf18fffff]
[   12.363084] pci_bus 0000:0a: resource 0 [io  0xb000-0xbfff]
[   12.368717] pci_bus 0000:0a: resource 1 [mem 0xf1700000-0xf17fffff]
[   12.375045] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 378369 usecs
[   12.382843] calling  sysctl_core_init+0x0/0x2c @ 1
[   12.387711] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   12.394459] calling  inet_init+0x0/0x296 @ 1
[   12.398858] NET: Registered protocol family 2
[   12.403526] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   12.410780] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   12.417420] TCP: Hash tables configured (established 16384 bind 16384)
[   12.424012] TCP: reno registered
[   12.427299] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   12.433365] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   12.439976] initcall inet_init+0x0/0x296 returned 0 after 40220 usecs
[   12.446406] calling  ipv4_offload_init+0x0/0x61 @ 1
[   12.451343] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   12.458103] calling  af_unix_init+0x0/0x55 @ 1
[   12.462623] NET: Registered protocol family 1
[   12.467044] initcall af_unix_init+0x0/0x55 returned 0 after 4330 usecs
[   12.473617] calling  ipv6_offload_init+0x0/0x7f @ 1
[   12.478556] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   12.485317] calling  init_sunrpc+0x0/0x69 @ 1
[   12.489934] RPC: Registered named UNIX socket transport module.
[   12.495842] RPC: Registered udp transport module.
[   12.500606] RPC: Registered tcp transport module.
[   12.505371] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.511871] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.518457] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.523926] pci 0000:00:02.0: Boot video device
[   12.529014] xen: registering gsi 16 triggering 0 polarity 1
[   12.534592] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.539231] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.546970] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.554891] xen: registering gsi 16 triggering 0 polarity 1
[   12.560455] Already setup the GSI :16
[   12.580089] xen: registering gsi 23 triggering 0 polarity 1
[   12.585660] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.606155] xen: registering gsi 18 triggering 0 polarity 1
[   12.611729] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.61664
[   12.630005] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 103599 usecs
[   12.637716] calling  populate_rootfs+0x0/0x112 @ 1
[   12.642683] Unpacking initramfs...
[   13.735361] Freeing initrd memory: 83288K (ffff8800023f7000 - ffff88000754d000)
[   13.742670] initcall populate_rootfs+0x0[   13.749856] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.754534] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.761035] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.766668] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 aft=
er 0 usecs=0D
[   13.774314] calling  register_kernel_offset_dumper+0x0/0x1b @ 1=0D
[   13.780274] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 a=
fter 0 usecs=0D
[   13.788074] calling  i8259A_init_ops+0x0/0x21 @ 1=0D
[   13.792843] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs=0D
[   13.799427] calling  vsyscall_init+0x0/0x27 @ 1=0D
[   13.804027] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs=0D
[   13.810434] calling  sbf_init+0x0/0xf6 @ 1=0D
[   13.814595] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs=0D
[   13.820573] calling  init_tsc_clocksource+0x0/0xc2 @ 1=0D
[   13.825775] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 us=
ecs=0D
[   13.832794] calling  add_rtc_cmos+0x0/0xb4 @ 1=0D
[   13.837303] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs=0D
[   13.843627] calling  i8237A_init_ops+0x0/0x14 @ 1=0D
[   13.848393] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs=0D
[   13.854979] calling  cache_sysfs_init+0x0/0x65 @ 1=0D
[   13.860085] initcall cache_sysfs_init+0x0/0x65 returned 0 after 245 usecs
[   13.866863] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.871713] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.878561] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.883588] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.890607] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.895633] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.902652] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.907349] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.917908] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10327 usecs
[   13.924755] calling  inject_init+0x0/0x30 @ 1
[   13.929172] Machine check injector initialized
[   13.933680] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.940180] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.946072] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.953785] calling  microcode_init+0x0/0x1b1 @ 1
[   13.958738] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.964849] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.973621] initcall microcode_init+0x0/0x1b1 returned 0 after 14714 usecs
[   13.980552] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.985141] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.991729] calling  msr_init+0x0/0x162 @ 1
[   13.996204] initcall msr_init+0x0/0x162 returned 0 after 223 usecs
[   14.002372] calling  cpuid_init+0x0/0x162 @ 1
[   14.006994] initcall cpuid_init+0x0/0x162 returned 0 after 197 usecs
[   14.013335] calling  ioapic_init_ops+0x0/0x14 @ 1
[   14.018100] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.024686] calling  add_pcspkr+0x0/0x40 @ 1
[   14.029126] initcall add_pcspkr+0x0/0x40 returned 0 after 103 usecs
[   14.035388] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   14.041882] Scanning for low memory corruption every 60 seconds
[   14.047862] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5838 usecs
[   14.056440] calling  sysfb_init+0x0/0x9c @ 1
[   14.060886] initcall sysfb_init+0x0/0x9c returned 0 after 109 usecs
[   14.067148] calling  audit_classes_init+0x0/0xaf @ 1
[   14.072187] initcall audit_classes_init+0x0/0xaf returned 0 after 13 usecs
[   14.079104] calling  pt_dump_init+0x0/0x30 @ 1
[   14.083621] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   14.089938] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   14.094799] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   14.101465] calling  proc_execdomains_init+0x0/0x22 @ 1
[   14.106757] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   14.113856] calling  ioresources_init+0x0/0x3c @ 1
[   14.118716] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   14.125382] calling  uid_cache_init+0x0/0x85 @ 1
[   14.130078] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   14.136649] calling  init_posix_timers+0x0/0x240 @ 1
[   14.141688] initcall init_posix_timers+0x0/0x240 returned 0 after 12 usecs
[   14.148608] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   14.153896] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   14.161002] calling  proc_schedstat_init+0x0/0x22 @ 1
[   14.166118] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   14.173048] calling  snapshot_device_init+0x0/0x12 @ 1
[   14.178371] initcall snapshot_device_init+0x0/0x12 returned 0 after 119 usecs
[   14.185494] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   14.190260] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.196845] calling  create_proc_profile+0x0/0x300 @ 1
[   14.202047] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   14.209066] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   14.214267] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.221285] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   14.226877] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 212 usecs
[   14.234177] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   14.239554] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   14.246741] calling  alarmtimer_init+0x0/0x15f @ 1
[   14.251788] initcall alarmtimer_init+0x0/0x15f returned 0 after 190 usecs
[   14.258564] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   14.264313] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 315 usecs
[   14.271611] calling  init_tstats_procfs+0x0/0x2c @ 1
[   14.276640] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   14.283483] calling  futex_init+0x0/0xf6 @ 1
[   14.287832] futex hash table entries: 256 (order: 2, 16384 bytes)
[   14.293973] initcall futex_init+0x0/0xf6 returned 0 after 6013 usecs
[   14.300381] calling  proc_dma_init+0x0/0x22 @ 1
[   14.304979] initcall proc_dma_init+0x0/0x22 returned 0 after 4 usecs
[   14.311389] calling  proc_modules_init+0x0/0x22 @ 1
[   14.316332] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   14.323089] calling  kallsyms_init+0x0/0x25 @ 1
[   14.327685] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   14.334095] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   14.339911] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 9 usecs
[   14.347528] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   14.352990] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   14.360267] calling  pid_namespaces_init+0x0/0x2d @ 1
[   14.365394] initcall pid_namespaces_init+0x0/0x2d returned 0 after 12 usecs
[   14.372400] calling  ikconfig_init+0x0/0x3c @ 1
[   14.376996] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   14.383407] calling  audit_init+0x0/0x141 @ 1
[   14.387826] audit: initializing netlink socket (disabled)
[   14.393314] type=2000 audit(1390613381.925:1): initialized
[   14.398835] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   14.405418] calling  audit_watch_init+0x0/0x3a @ 1
[   14.410274] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   14.416946] calling  audit_tree_init+0x0/0x49 @ 1
[   14.421714] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   14.428297] calling  init_kprobes+0x0/0x16c @ 1
[   14.443274] initcall init_kprobes+0x0/0x16c returned 0 after 10138 usecs
[   14.449967] calling  hung_task_init+0x0/0x56 @ l_init+0x0/0x14 returned 0 after 8 usecs
[   14.473280] calling  init_tracepoints+0x0/0x20 @ 1
[   14.478134] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   14.484804] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.489573] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.496157] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.501879] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.509416] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.515303] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 584 usecs
[   14.522520] calling  init_per_zone_wmark_min+0x0/0xa9 @ 1
[   14.527985] initcall init_per_zone_wmark_min+0x0/0xa9 returned 0 after 10 usecs
[   14.535341] calling  kswapd_init+0x0/0x76 @ 1
[   14.539807] initcall kswapd_init+0x0/0x76 returned 0 after 46 usecs
[   14.546088] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.551132] initcall extfrag_debug_init+0x0/0x7e returned 0 after 19 usecs
[   14.558045] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.562567] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.568965] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.573570] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.580058] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.585345] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.592450] calling  slab_proc_init+0x0/0x25 @ 1
[   14.597136] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.603630] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.608919] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.616025] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.621049] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.627896] calling  init_user_reserve+0x0/0x40 @ 1
[   14.632837] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.639597] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.644541] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.651296] calling  procswaps_init+0x0/0x22 @ 1
[   14.655979] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.662476] calling  init_frontswap+0x0/0x96 @ 1
[   14.667186] initcall init_frontswap+0x0/0x96 returned 0 after 29 usecs
[   14.673743] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.678335] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.684829] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6342 usecs
[   14.691430] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.696373] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.703131] calling  slab_proc_init+0x0/0x8 @ 1
[   14.707723] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.714136] calling  cpucache_init+0x0/0x4b @ 1
[   14.718731] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.725143] calling  hugepage_init+0x0/0x145 @ 1
[   14.729823] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.736497] calling  init_cleancache+0x0/0xbc @ 1
[   14.741290] initcall init_cleancache+0x0/0xbc returned 0 after 27 usecs
[   14.747936] calling  fcntl_init+0x0/0x2a @ 1
[   14.752281] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.758510] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.763799] initcall proc_filesystems_init+0x0/0x22 returned 0 after 3 usecs
[   14.770901] calling  dio_init+0x0/0x2d @ 1
[   14.775074] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.781128] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.786182] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 26 usecs
[   14.793092] calling  dnotify_init+0x0/0x7b @ 1
[   14.797618] initcall dnotify_init+0x0/0x7b returned 0 after 21 usecs
[   14.804011] calling  inotify_user_setup+0x0/0x4b @ 1
[   14.809052] initcall inotify_user_setup+0x0/0x4b returned 0 after 12 usecs
[   14.815968] calling  aio_setup+0x0/0x7d @ 1
[   14.820271] initcall aio_setup+0x0/0x7d returned 0 after 53 usecs
[   14.826369] calling  proc_locks_init+0x0/0x22 @ 1
[   14.831137] initcall proc_locks_init+0x0/0x22 returned 0 after 4 usecs
[   14.837719] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.842619] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 43 usecs
[   14.849332] calling  dquot_init+0x0/0x121 @ 1
[   14.853752] VFS: Disk quotas dquot_6.5.2
[   14.857773] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.864241] initcall dquot_init+0x0/0x121 returned 0 after 10243 usecs
[   14.870826] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.876026] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.883046] calling  quota_init+0x0/0x31 @ 1
[   14.887397] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.893620] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.898563] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.905320] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.910348] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.917191] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.922136] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.928891] calling  proc_devices_init+0x0/0x22 @ 1
[   14.933835] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.940591] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.945794] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.952811] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.957755] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.964510] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.969456] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.976209] calling  proc_stat_init+0x0/0x22 @ 1
[   14.980892] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.987390] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.992246] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.998916] calling  proc_version_init+0x0/0x22 @ 1
[   15.003860] initcall proc_version_init+0x0/0x22 returned 0 after 4 usecs
[   15.010617] calling  proc_softirqs_init+0x0/0x22 @ 1
[   15.015646] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   15.022488] calling  proc_kcore_init+0x0/0xb5 @ 1
[   15.027265] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   15.033931] calling  vmcore_init+0x0/0x5cb @ 1
[   15.038436] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   15.044761] calling  proc_kmsg_init+0x0/0x25 @ 1
[   15.049445] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   15.055942] calling  proc_page_init+0x0/0x42 @ 1
[   15.060628] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   15.067121] calling  init_devpts_fs+0x0/0x62 @ 1
[   15.071845] initcall init_devpts_fs+0x0/0x62 returned 0 after 42 usecs
[   15.078388] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   15.083487] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 71 usecs
[   15.090347] calling  init_fat_fs+0x0/0x4f @ 1
[   15.094788] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   15.101094] calling  init_vfat_fs+0x0/0x12 @ 1
[   15.105600] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   15.111926] calling  init_msdos_fs+0x0/0x12 @ 1
[   15.116521] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   15.122933] calling  init_iso9660_fs+0x0/0x70 @ 1
[   15.127724] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   15.134374] calling  init_nfs_fs+0x0/0x16c @ 1
[   15.139073] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   15.145506] calling  init_nfs_v2+0x0/0x14 @ 1
[   15.149922] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   15.156161] calling  init_nfs_v3+0x0/0x14 @ 1
[   15.160581] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   15.166821] calling  init_nfs_v4+0x0/0x3b @ 1
[   15.171241] NFS: Registering the id_resolver key type
[   15.176365] Key type id_resolver registered
[   15.180599] Key type id_legacy registered
[   15.184682] initcall init_nfs_v4+0x0/0x3b returned 0 after 13125 usecs
[   15.191260] calling  init_nlm+0x0/0x4c @ 1
[   15.195428] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   15.201400] calling  init_nls_cp437+0x0/0x12 @ 1
[   15.206081] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   15.212578] calling  init_nls_ascii+0x0/0x12 @ 1
[   15.217258] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   15.223758] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   15.228786] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   15.235632] calling  init_nls_utf8+0x0/0x2b @ 1
[   15.240226] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   15.246638] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   15.251231] NTFS driver 2.1.30 [Flags: R/W].
[   15.255618] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4283 usecs
[   15.262240] calling  init_autofs4_fs+0x0/0x2a @ 1
[   15.267163] initcall init_autofs4_fs+0x0/0x2a returned 0 after 129 usecs
[   15.273857] calling  init_pstore_fs+0x0/0x53 @ 1
[   15.278541] initcall init_pstore_fs+0x0/0x53 returned 0 after 11 usecs
[   15.285117] calling  ipc_init+0x0/0x2f @ 1
[   15.289282] msgmni has been set to 3857
[   15.293187] initcall ipc_init+0x0/0x2f returned 0 after 3820 usecs
[   15.299414] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   15.304189] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   15.310767] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   15.315507] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 57 usecs
[   15.322036] calling  key_proc_init+0x0/0x5e @ 1
[   15.326635] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   15.333044] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   15.338068] SELinux:  Registering netfilter hooks
[   15.342970] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4786 usecs
[   15.350002] calling  init_sel_fs+0x0/0xa5 @ 1
[   15.354781] initcall init_sel_fs+0x0/0xa5 returned 0 after 350 usecs
[   15.361122] calling  selnl_init+0x0/0x56 @ 1
[   15.365468] initcall selnl_init+0x0/0x56 returned 0 after 13 usecs
[   15.371694] calling  sel_netif_init+0x0/0x5c @ 1
[   15.376376] initcall sel_netif_init+0x0/0x5c returned 0 after 2 usecs
[   15.382874] calling  sel_netnode_init+0x0/0x6a @ 1
[   15.387730] initcall sel_netnode_init+0x0/0x6a returned 0 after 2 usecs
[   15.394401] calling  sel_netport_init+0x0/0x6a @ 1
[   15.399257] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs
[   15.405927] calling  aurule_init+0x0/0x2d @ 1
[   15.410347] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   15.416586] calling  crypto_wq_init+0x0/0x33 @ 1
[   15.421298] initcall crypto_wq_init+0x0/0x33 returned 0 after 30 usecs
[   15.427855] calling  crypto_algapi_init+0x0/0xd @ 1
[   15.432798] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   15.439553] calling  chainiv_module_init+0x0/0x12 @ 1
[   15.444668] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.451599] calling  eseqiv_module_init+0x0/0x12 @ 1
[   15.456625] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.463474] calling  hmac_module_init+0x0/0x12 @ 1
[   15.468326] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.474999] calling  md5_mod_init+0x0/0x12 @ 1
[   15.479539] initcall md5_mod_init+0x0/0x12 returned 0 after 33 usecs
[   15.485922] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   15.491238] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 29 usecs
[   15.498399] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   15.503771] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.510964] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.516217] initcall des_generic_mod_init+0x0/0x17 returned 0 after 51 usecs
[   15.523275] calling  aes_init+0x0/0x12 @ 1
[   15.527459] initcall aes_init+0x0/0x12 returned 0 after 26 usecs
[   15.533500] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.538116] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.544594] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.550312] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.557850] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.563916] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.571803] calling  krng_mod_init+0x0/0x12 @ 1
[   15.576425] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.582902] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.587672] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.594249] calling  bsg_init+0x0/0x12e @ 1
[   15.598574] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.605959] initcall bsg_init+0x0/0x12e returned 0 after 7288 usecs
[   15.612283] calling  noop_init+0x0/0x12 @ 1
[   15.616531] io scheduler noop registered
[   15.620516] initcall noop_init+0x0/0x12 returned 0 after 3891 usecs
[   15.626843] calling  deadline_init+0x0/0x12 @ 1
[   15.631437] io scheduler deadline registered
[   15.635770] initcall deadline_init+0x0/0x12 returned 0 after 4230 usecs
[   15.642445] calling  cfq_init+0x0/0x8b @ 1
[   15.646626] io scheduler cfq registered (default)
[   15.651370] initcall cfq_init+0x0/0x8b returned 0 after 4653 usecs
[   15.657610] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.662985] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.670176] calling  pci_proc_init+0x0/0x6a @ 1
[   15.674952] initcall pci_proc_init+0x0/0x6a returned 0 after 178 usecs
[   15.681469] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.687142] xen: registering gsi 16 triggering 0 polarity 1
[   15.692704] Already setup the GSI :16
[   15.697246] xen: registering gsi 16 triggering 0 polarity 1
[   15.702806] Already setup the GSI :16
[   15.707333] xen: registering gsi 16 triggering 0 polarity 1
[   15.712897] Already setup the GSI :16
[   15.717275] xen: registering gsi 19 triggering 0 polarity 1
[   15.722849] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.728099] xen: registering gsi 17 triggering 0 polarity 1
[   15.733670] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.739000] xen: registering gsi 19 triggering 0 polarity 1
[   15.744563] Already setup the GSI :19
[   15.748482] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60618 usecs
[   15.755518] calling  aer_service_init+0x0/0x2b @ 1
[   15.760444] initcall aer_service_init+0x0/0x2b returned 0 after 72 usecs
[   15.767129] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.771980] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.777614] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5501 usecs
[   15.784546] calling  pcied_init+0x0/0x79 @ 1
[   15.789129] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.795727] initcall pcied_init+0x0/0x79 returned 0 after 6686 usecs
[   15.802141] calling  pcifront_init+0x0/0x3f @ 1
[   15.806733] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.813321] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.818681] initcall genericbl_driver_init+0x0/0x14 returned 0 after 73 usecs
[   15.825804] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.830501] initcall cirrusfb_init+0x0/0xcc returned 0 after 101 usecs
[   15.837020] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.842034] initcall efifb_driver_init+0x0/0x14 returned 0 after 76 usecs
[   15.848813] calling  intel_idle_init+0x0/0x331 @ 1
[   15.853664] intel_idle: MWAIT substates: 0x42120
[   15.858342] intel_idle: v0.4 model 0x3C
[   15.862243] intel_idle: lapic_timer_reliable_states 0xffffffff
[   15.868139] intel_idle: intel_idle yielding to none
[   15.872816] initcall intel_idle_init+0x0/0x331 returned -19 after 18703 usecs
[   15.880268] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.885649] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 usecs
[   15.892834] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.897414] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs
[   15.903757] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.909499] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.917661] ACPI: Power Button [PWRB]
[   15.921648] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.929031] ACPI: Power Button [PWRF]
[   15.932827] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23058 usecs
[   15.940380] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.945818] ACPI: Fan [FAN0] (off)
[   15.949442] ACPI: Fan [FAN1] (off)
[   15.953051] ACPI: Fan [FAN2] (off)
[   15.956659] ACPI: Fan [FAN3] (off)
[   15.960262] ACPI: Fan [FAN4] (off)
[   15.963732] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17726 usecs
[   15.971029] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.989249] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.997672] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] (Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)
[   16.013318] Monitor-Mwait will be used to enter C-1 state
[   16.018712] Monitor-Mwait will be used to enter C-2 state
[   16.024371] Warning: Processor Platform Limit not supported.
[   16.030018] initcall acpi_processor_driver_init+0x0/0x43 returned 0 after 52023 usecs
[   16.037902] calling  acpi_thermal_init+0x0/0x42 @ 1
[   16.046117] thermal LNXTHERM:00: registered as thermal_zone0
[   16.051769] ACPI: Thermal Zone [TZ00] (28 C)
[   16.058277] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25141 usecs
[   16.075621] calling  acpi_battery_init+0x0/0x16 @ 1
[   16.080562] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   16.087320] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   16.092555] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   16.098291] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5635 usecs
[   16.105508] calling  erst_init+0x0/0x2fc @ 1
[   16.109880] ERST: Error Record Serialization Table (ERST) support is initialized.
[   16.117382] pstore: Registered erst as persistent store backend
[   16.123355] initcall erst_init+0x0/0x2fc returned 0 after 13199 usecs
[   16.129854] calling  ghes_init+0x0/0x173 @ 1
[   16.134367] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35363 usecs
[   16.142766] \_SB_:_OSC request failed
[   16.146425] _OSC request data:1 1 0
[   16.150064] \_SB_:_OSC invalid UUID
[   16.153616] _OSC request data:1 1 0
[   16.157257] GHES: APEI firmware first mode is enabled by APEI bit.
[   16.163499] initcall ghes_init+0x0/0x173 returned 0 after 28622 usecs
[   16.169995] calling  einj_init+0x0/0x522 @ 1
[   16.174394] EINJ: Error INJection is initialized.
[   16.179097] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs
[   16.185511] calling  ioat_init_module+0x0/0xb1 @ 1
[   16.190363] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   16.196441] initcall ioat_init_module+0x0/0xb1 returned 0 after 5934 usecs
[   16.203310] calling  virtio_mmio_init+0x0/0x14 @ 1
[   16.208267] initcall virtio_mmio_init+0x0/0x14 returned 0 after 105 usecs
[   16.215038] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   16.220834] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 75 usecs
[   16.228394] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   16.233681] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   16.240784] calling  xenbus_init+0x0/0x3d @ 1
[   16.245346] initcall xenbus_init+0x0/0x3d returned 0 after 135 usecs
[   16.251697] calling  xenbus_backend_init+0x0/0x51 @ 1
[   16.256926] initcall xenbus_backend_init+0x0/0x51 returned 0 after 120 usecs
[   16.263960] calling  gntdev_init+0x0/0x4d @ 1
[   16.268528] initcall gntdev_init+0x0/0x4d returned 0 after 122 usecs
[   16.274872] calling  gntalloc_init+0x0/0x3d @ 1
[   16.279599] initcall gntalloc_init+0x0/0x3d returned 0 after 131 usecs
[   16.286114] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   16.291484] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   16.298676] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   16.303682] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 65 usecs
[   16.310463] calling  platform_pci_module_init+0x0/0x1b @ 1
[   16.316101] initcall platform_pci_module_init+0x0/0x1b returned 0 after 90 usecs
[   16.323483] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   16.328873] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 usecs
[   16.335995] calling  xen_pcibk_init+0x0/0x13f @ 1
[   16.340788] xen_pciback: backend is vpci
[   16.344824] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3967 usecs
[   16.351605] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   16.357913] xen_acpi_processor: Uploading Xen processor PM info
(XEN) [2014-01-25 01:29:43] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-25 01:29:43] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:43] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:43] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:43] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:43] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:43] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:43] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:43] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:43] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:43] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:43] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:43] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:43] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:43] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 0 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(2) cpuid(2) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:44] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:44] CPU2: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:44] CPU 2 initialization completed=0D
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(3) cpuid(4) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:44] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:44] CPU4: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:44] CPU 4 initialization completed=0D
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(4) cpuid(6) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:44] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:44] CPU6: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:44] CPU 6 initialization completed=0D
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(5) cpuid(1) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:44] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:44] CPU1: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:44] CPU 1 initialization completed=0D
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(6) cpuid(3) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:44] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:44] CPU3: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:44] CPU 3 initialization completed=0D
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(7) cpuid(5) Px State info:=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:45] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:45] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:45] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:45] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:45] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:45] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:45] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:45] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:45] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:45] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:45] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:45] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:45] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:45] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:45] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:45] CPU5: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:45] CPU 5 initialization completed=0D
(XEN) [2014-01-25 01:29:45] Set CPU acpi_id(8) cpuid(7) Px State info:=0D
(XEN) [2014-01-25 01:29:45] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:45] 	_PCT: descriptor=3D130, length=3D12, space_id=
=3D127, bit_width=3D0, bit_offset=3D0, reserved=3D0, address=3D0=0D
(XEN) [2014-01-25 01:29:45] 	_PSS: state_count=3D16=0D
(XEN) [2014-01-25 01:29:45] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x260=
0=0D
(XEN) [2014-01-25 01:29:45] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x220=
0=0D
(XEN) [2014-01-25 01:29:45] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x200=
0=0D
(XEN) [2014-01-25 01:29:45] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b0=
0=0D
(XEN) [2014-01-25 01:29:45] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x190=
0=0D
(XEN) [2014-01-25 01:29:45] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x170=
0=0D
(XEN) [2014-01-25 01:29:45] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x150=
0=0D
(XEN) [2014-01-25 01:29:45] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x130=
0=0D
(XEN) [2014-01-25 01:29:45] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x11=
00=0D
(XEN) [2014-01-25 01:29:45] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00=
=0D
(XEN) [2014-01-25 01:29:45] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00=
=0D
(XEN) [2014-01-25 01:29:45] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00=
=0D
(XEN) [2014-01-25 01:29:45] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00=
=0D
(XEN) [2014-01-25 01:29:45] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800=
=0D
(XEN) [2014-01-25 01:29:45] 	_PSD: num_entries=3D5 rev=3D0 domain=3D0 coord=
_type=3D254 num_processors=3D8=0D
(XEN) [2014-01-25 01:29:45] 	_PPC: 0=0D
(XEN) [2014-01-25 01:29:45] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE add=
r space=0D
(XEN) [2014-01-25 01:29:45] max_freq: 3401000    second_max_freq: 3400000=0D
(XEN) [2014-01-25 01:29:45] CPU7: Turbo Mode detected and enabled=0D
(XEN) [2014-01-25 01:29:45] CPU 7 initialization completed=0D
[   17.780119] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after =
1389616 usecs=0D
[   17.788000] calling  pty_init+0x0/0x453 @ 1=0D
[   17.798157] kworker/u2:0 (743) used greatest stack depth: 5488 bytes lef=
t=0D
[   17.854289] initcall pty_init+0x0/0x453 returned 0 after 60584 usecs=0D
[   17.860637] calling  sysrq_init+0x0/0xb0 @ 1=0D
[         ] initcall sysrq_init+0x0/0xb0 returned 0 after 8 usecs=0D
[   17.871122] calling  xen_hvc_init+0x0/0x228 @ 1=0D
[   17.876761] initcall xen_hvc_init+0x0/0x228 returned 0 after 1022 usecs=
=0D
[   17.883361] calling  serial8250_init+0x0/0x1ab @ 1=0D
[   17.888210] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled=0D
[   17.915839] 00:09: ttyS1 at I/O 0x2f8 (irq =3D 3, base_baud =3D 115200) =
is a 16550A=0D
[   17.924254] initcall serial8250_init+0x0/0x1ab returned 0 after [...] usecs=0D
[         ] calling  init_kgdboc+0x0/0x16 @ 1=0D
[        9] kgdb: Registered I/O driver kgdboc.=0D
[   17.953023] initcall init_kgdboc+0x0/0x16 returned 0 after 4515 usecs=0D
[   17.959493] calling  init+0x0/0x10f @ 1=0D
[   17.963613] initcall init+0x0/0x10f returned 0 after 215 usecs=0D
[   17.969434] calling  hpet_init+0x0/0x6a @ 1=0D
[   17.974171] hpet_acpi_add: no address or irqs in _CRS=0D
[   17.979302] initcall hpet_init+0x0/0x6a returned 0 after 5491 usecs=0D
[   17.985553] calling  nvram_init+0x0/0x82 @ 1=0D
[   17.990013] Non-volatile memory driver v1.3=0D
[   17.994187] initcall nvram_init+0x0/0x82 returned 0 after 4200 usecs=0D
[   18.000598] calling  mod_init+0x0/0x5a @ 1=0D
[   18.004757] initcall mod_init+0x0/0x5a returned -19 after 0 usecs=0D
[   18.010911] calling  rng_init+0x0/0x12 @ 1=0D
[   18.015207] initcall rng_init+0x0/0x12 returned 0 after 132 usecs=0D
[   18.021285] calling  agp_init+0x0/0x26 @ 1=0D
[   18.025444] Linux agpgart interface v0.103=0D
[   18.029605] initcall agp_init+0x0/0x26 returned 0 after 4063 usecs=0D
[   18.035844] calling  agp_amd64_mod_init+0x0/0xb @ 1=0D
[   18.040934] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 146 u=
secs=0D
[   18.047969] calling  agp_intel_init+0x0/0x29 @ 1=0D
[   18.052740] initcall agp_intel_init+0x0/0x29 returned 0 after 89 usecs=0D
[   18.059254] calling  agp_sis_init+0x0/0x29 @ 1=0D
[   18.063851] initcall agp_sis_init+0x0/0x29 returned 0 after 89 usecs=0D
[   18.070191] calling  agp_via_init+0x0/0x29 @ 1=0D
[   18.074788] initcall agp_via_init+0x0/0x29 returned 0 after 89 usecs=0D
[   18.081128] calling  drm_core_init+0x0/0x10c @ 1=0D
[   18.085896] [drm] Initialized drm 1.1.0 20060810=0D
[   18.090504] initcall drm_core_init+0x0/0x10c returned 0 after 4585 usecs=
=0D
[   18.097263] calling  cn_proc_init+0x0/0x3d @ 1=0D
[   18.101773] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs=0D
[   18.108098] calling  topology_sysfs_init+0x0/0x70 @ 1=0D
[   18.113242] initcall topology_sysfs_init+0x0/0x70 returned 0 after 32 us=
ecs=0D
[   18.120228] calling  loop_init+0x0/0x14e @ 1=0D
[   18.178369] loop: module loaded=0D
[   18.181530] initcall loop_init+0x0/0x14e returned 0 after 55630 usecs=0D
[   18.188002] calling  xen_blkif_init+0x0/0x22 @ 1=0D
[   18.192784] initcall xen_blkif_init+0x0/0x22 returned 0 after 99 usecs=0D
[   18.199321] calling  mac_hid_init+0x0/0x22 @ 1=0D
[   18.203810] initcall mac_hid_init+0x0/0x22 returned 0 after 9 usecs=0D
[   18.210126] calling  macvlan_init_module+0x0/0x3d @ 1=0D
[   18.215242] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 use=
cs=0D
[   18.222176] calling  macvtap_init+0x0/0x100 @ 1=0D
[   18.226836] initcall macvtap_init+0x0/0x100 returned 0 after 67 usecs=0D
[   18.233277] calling  net_olddevs_init+0x0/0xb5 @ 1=0D
[   18.238121] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs=
=0D
[   18.244793] calling  fixed_mdio_bus_init+0x0/0x105 @ 1=0D
[   18.250218] libphy: Fixed MDIO Bus: probed=0D
[   18.254309] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4215=
 usecs=0D
[   18.261589] calling  tun_init+0x0/0x93 @ 1=0D
[   18.265749] tun: Universal TUN/TAP device driver, 1.6=0D
[   18.270859] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>=0D
[   18.277245] initcall tun_init+0x0/0x93 returned 0 after 11226 usecs=0D
[   18.283501] calling  tg3_driver_init+0x0/0x1b @ 1=0D
[   18.288405] initcall tg3_driver_init+0x0/0x1b returned 0 after 136 usecs=
=0D
[   18.295097] calling  igb_init_module+0x0/0x58 @ 1=0D
[   18.299863] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.=
5-k=0D
[   18.306879] igb: Copyright (c) 2007-2013 Intel Corporation.=0D
[   18.312779] xen: registering gsi 17 triggering 0 polarity 1=0D
[   18.318340] Already setup the GSI :17=0D
[   18.485658] igb 0000:02:00.0: added PHC on eth0=0D
[   18.490189] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection=0D
[   18.509248] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)=0D
[   18.517140] xen: registering gsi 18 triggering 0 polarity 1=0D
[   18.522707] Already setup the GSI :18=0D
[   18.690630] igb 0000:02:00.1: added PHC on eth1=0D
[   18.695157] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection=0D
[   18.709279] igb 0000:02:00.1: eth1: PBA No: Unknown=0D
[   18.714218] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 t=
x queue(s)=0D
[   18.722118] xen: registering gsi 19 triggering 0 polarity 1=0D
[   18.727685] Already setup the GSI :19=0D
(XEN) [2014-01-25 01:29:46] ----[ Xen-4.4-rc2  x86_64  debug=3Dy  Tainted: =
   C ]----=0D
(XEN) [2014-01-25 01:29:46] CPU:    0=0D
(XEN) [2014-01-25 01:29:46] rbx: ffff8302394665b0   rcx: 0000000000000000=0D
(XEN) [2014-01-25 01:29:46] rdx: 00000000f1980000   rsi: 0000000000000200  =
 rdi: ffff82d080281f20=0D
(XEN) [2014-01-25 01:29:46] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08  =
 r8:  000000000000001c=0D
(XEN) [2014-01-25 01:29:46] r9:  00000000ffffffff   r10: ffff82d080238f20  =
 r11: 0000000000000202=0D
(XEN) [2014-01-25 01:29:46] r12: 0000000000000000   r13: ffff83022a085e30  =
 r14: ffff82d0802cfe98=0D
(XEN) [2014-01-25 01:29:46] r15: 0000000000000000   cr0: 0000000080050033  =
 cr4: 00000000001526f0=0D
(XEN) [2014-01-25 01:29:46] cr3: 000000022dc0c000   cr2: 0000000000000004=0D
(XEN) [2014-01-25 01:29:46] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss:=
 e010   cs: e008=0D
(XEN) [2014-01-25 01:29:46] Xen stack trace from rsp=3Dffff82d0802cfc08:=0D
(XEN) [2014-01-25 01:29:46]    0000000500040070 ffff82d0802cfd88 0000007280=
2cfc38 ffff82d0ffffffff=0D
(XEN) [2014-01-25 01:29:46]    0000000000000000 0000000000000000 0000000000=
000005 0000000000000070=0D
(XEN) [2014-01-25 01:29:46]    0000000500000000 0000000000000000 00000000f1=
980000 ffff82d000000005=0D
(XEN) [2014-01-25 01:29:46]    0000000500000003 8005007000000000 ffff82d080=
2cfe98 ffff82d0802cfe98=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd88 ffff8302394665b0 0000000000=
000005 0000000000000000=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd28 ffff82d080168987 0000000000=
000246 ffff82d0802cfcd8=0D
(XEN) [2014-01-25 01:29:46]    ffff82d080129d68 0000000000000000 ffff82d080=
2cfd28 ffff82d0801474f9=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd18 ffff830239463b70 0000000000=
00010f ffff8302337f8000=0D
(XEN) [2014-01-25 01:29:46]    000000000000010f 0000000000000022 00000000ff=
ffffed ffff830239402200=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfdc8 ffff82d08016c65c ffff83022a=
085e00 000000000000010f=0D
(XEN) [2014-01-25 01:29:46]    000000000000010f ffff8302337f80e0 ffff82d080=
2cfd98 ffff82d0801047ed=0D
(XEN) [2014-01-25 01:29:46]    0000010f01402200 ffff82d0802cfe98 ffff830233=
7f80e0 ffff8302394665b0=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe98 ffff83022a085e00 ffff82d080=
2cfdc8 ffff8302337f8000=0D
(XEN) [2014-01-25 01:29:46]    00000000fffffffd 0000000000000000 ffff82d080=
2cfe98 ffff82d0802cfe70=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe48 ffff82d08017f104 ffff82d080=
2cff18 ffffffff8156d7c6=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe98 ffff8302337f80b8 ffff82d000=
00010f ffff82d08018bd40=0D
(XEN) [2014-01-25 01:29:46]    000000220000f800 ffff82d0802cfe74 ffff820040=
004000 000000000000000d=0D
(XEN) [2014-01-25 01:29:46]    ffff880078623b08 ffff8300b7313000 ffff880006=
dbb180 0000000000000000=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfef8 ffff82d08017f814 0000000000=
000000 0000000700000004=0D
(XEN) [2014-01-25 01:29:46]    0000000000007ff0 ffffffffffffffff 0000000000=
000005 0000000000000000=0D
(XEN) [2014-01-25 01:29:46] Xen call trace:=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d0801683a2>] msix_capability_init+0x=
1dc/0x603=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x=
4d7=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0=
x5ad=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08017f104>] physdev_map_pirq+0x507/=
0x5d1=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08017f814>] do_physdev_op+0x646/0x1=
232=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x14=
5=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] Pagetable walk from 0000000000000004:=0D
(XEN) [2014-01-25 01:29:46]  L4[0x000] =3D 0000000000000000 fffffffffffffff=
f=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] ****************************************=0D
(XEN) [2014-01-25 01:29:46] Panic on CPU 0:=0D
(XEN) [2014-01-25 01:29:46] FATAL PAGE FAULT=0D
(XEN) [2014-01-25 01:29:46] [error_code=3D0000]=0D
(XEN) [2014-01-25 01:29:46] Faulting linear address: 0000000000000004=0D
(XEN) [2014-01-25 01:29:46] ****************************************=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] Manual reset required ('noreboot' specified)=0D

--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--cNdxnHkX5QqsyA0e--


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:45:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6koS-00083J-9I; Fri, 24 Jan 2014 17:45:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6koP-00083E-6e
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 17:45:03 +0000
Received: from [85.158.143.35:12283] by server-1.bemta-4.messagelabs.com id
	A7/93-02132-C96A2E25; Fri, 24 Jan 2014 17:45:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390585497!631684!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29068 invoked from network); 24 Jan 2014 17:44:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 17:44:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OHhrrl021772
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 17:43:53 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHhpC9012407
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 17:43:52 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OHhpiD008361; Fri, 24 Jan 2014 17:43:51 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 09:43:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6B3921BFA72; Fri, 24 Jan 2014 12:43:49 -0500 (EST)
Date: Fri, 24 Jan 2014 12:43:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124174349.GA15472@phenom.dumpdata.com>
References: <52DF0F6A.4040309@citrix.com>
	<1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="cNdxnHkX5QqsyA0e"
Content-Disposition: inline
In-Reply-To: <52E2A0930200007800116CAE@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/msi: Validate the guest-identified PCI
 devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Jan 24, 2014 at 04:19:15PM +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > I built the kernel without the igb driver just to eliminate it being
> > the culprit. Now I can boot without issues and this is what lspci
> > reports:
> > 
> > -bash-4.1# lspci -s 02:00.0 -v
> > 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > Connection (rev 01)
> >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >         Flags: bus master, fast devsel, latency 0, IRQ 10
> >         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
> >         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
> >         I/O ports at e020 [size=32]
> >         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
> >         Expansion ROM at f0c00000 [disabled] [size=4M]
> >         Capabilities: [40] Power Management version 3
> >         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> >         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
> 
> So here's a patch to figure out why we don't find this.

Thank you!

See attached log. The corresponding xen-syms is compressed and
uploaded at: http://darnok.org/xen/xen-syms.gz

The interesting bit is:

(XEN) 02:00.0: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
(XEN) 02:00.0: pos=40
(XEN) 02:00.0: id=01
(XEN) 02:00.0: pos=50
(XEN) 02:00.0: id=05
(XEN) 02:00.0: pos=70
(XEN) 02:00.0: id=11
(XEN) 02:00.1: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
(XEN) 02:00.1: pos=40
(XEN) 02:00.1: id=01
(XEN) 02:00.1: pos=50
(XEN) 02:00.1: id=05
(XEN) 02:00.1: pos=70
(XEN) 02:00.1: id=11

The diff of the tree is:

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 284042e..3eadf9f 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1033,7 +1033,7 @@ int pci_prepare_msix(u16 seg, u8 bus, u8 devfn, bool_t off)
 
     spin_lock(&pcidevs_lock);
     pdev = pci_get_pdev(seg, bus, devfn);
-    if ( !pdev )
+    if ( !pdev || !pdev->msix )
         rc = -ENODEV;
     else if ( pdev->msix->used_entries != !!off )
         rc = -EBUSY;
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 1040b2c..ff5587b 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -564,7 +564,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&manage_pci, arg, 1) != 0 )
             break;
-
+        printk("PHYSDEVOP_manage_pci_add of %x:%x.%x\n", manage_pci.bus, PCI_SLOT(manage_pci.devfn), PCI_FUNC(manage_pci.devfn));
         ret = pci_add_device(0, manage_pci.bus, manage_pci.devfn, NULL);
         break;
     }
@@ -588,6 +588,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
 
         ret = -EINVAL;
+        printk("PHYSDEVOP_manage_pci_add_ext of %x:%x.%x %d,%d\n", manage_pci_ext.bus, PCI_SLOT(manage_pci_ext.devfn), PCI_FUNC(manage_pci_ext.devfn), manage_pci_ext.is_extfn, manage_pci_ext.is_virtfn);
         if ( (manage_pci_ext.is_extfn > 1) || (manage_pci_ext.is_virtfn > 1) )
             break;
 
@@ -609,6 +610,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
 
+        printk("PHYSDEVOP_pci_device_add of %x:%x.%x flags:%x\n", add.bus, PCI_SLOT(add.devfn), PCI_FUNC(add.devfn), add.flags);
         pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
         if ( add.flags & XEN_PCI_DEV_VIRTFN )
         {
@@ -639,9 +641,11 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
-        else
+        else {
+            printk("PHYSDEVOP_prepare/release_msix of %x:%x.%x \n", dev.bus, PCI_SLOT(dev.devfn), PCI_FUNC(dev.devfn));
             ret = pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
                                    cmd != PHYSDEVOP_prepare_msix);
+        }
         break;
     }
 
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..93ba11c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -172,7 +172,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
     INIT_LIST_HEAD(&pdev->msi_list);
 
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                             PCI_CAP_ID_MSIX) )
+                             PCI_CAP_ID_MSIX | 0x80) )
     {
         struct arch_msix *msix = xzalloc(struct arch_msix);
 
diff --git a/xen/drivers/pci/pci.c b/xen/drivers/pci/pci.c
index 25dc5f1..9f9a371 100644
--- a/xen/drivers/pci/pci.c
+++ b/xen/drivers/pci/pci.c
@@ -14,19 +14,24 @@ int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap)
     int max_cap = 48;
     u8 pos = PCI_CAPABILITY_LIST;
     u16 status;
+bool_t log = (cap & 0x80) && !seg && bus == 2;//temp
+cap &= ~0x80;//temp
 
     status = pci_conf_read16(seg, bus, dev, func, PCI_STATUS);
+if(log) printk("02:%02x.%u: status=%04x (%ps wants %02x)\n", dev, func, status, __builtin_return_address(0), cap);//temp
     if ( (status & PCI_STATUS_CAP_LIST) == 0 )
         return 0;
 
     while ( max_cap-- )
     {
         pos = pci_conf_read8(seg, bus, dev, func, pos);
+if(log) printk("02:%02x.%u: pos=%02x\n", dev, func, pos);//temp
         if ( pos < 0x40 )
             break;
 
         pos &= ~3;
         id = pci_conf_read8(seg, bus, dev, func, pos + PCI_CAP_LIST_ID);
+if(log) printk("02:%02x.%u: id=%02x\n", dev, func, id);//temp
 
         if ( id == 0xff )
             break;
@@ -35,6 +40,7 @@ int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap)
 
         pos += PCI_CAP_LIST_NEXT;
     }
+if(log) printk("02:%02x.%u: no cap %02x\n", dev, func, cap);//temp
 
     return 0;
 }

--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="tst035-jan.txt"
Content-Transfer-Encoding: quoted-printable

Trying 192.168.102.15...
Connected to maxsrv2.
Escape character is '^]'.
[ANSI screen-redraw output trimmed: Intel(R) Boot Agent GE v1.3.22 / PXE 2.1 Build 086 and Boot Agent GE v1.4.10 / PXE 2.1 Build 092 initialize; "Press Ctrl+S to enter the Setup Menu."]
CLIENT MAC ADDR: 00 25 90 86 BE F0  GUID: 00000000 0000 0000 0000 00259086BEF0
CLIENT IP: 192.168.102.35  MASK: 255.255.255.0  DHCP IP: 192.168.102.1
GATEWAY IP: 192.168.102.1
TFTP.
PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2=0D
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Ha=
t 4.4.4-2)) debug=3Dy Fri Jan 24 12:22:58t:47a3c0-dirty=0D
(XEN) Console output is synchronous.=0D
(XEN) Bootloader: unknown=0D
(XEN) Command line: dom0_max_vcpus=3D1 dom0_mem=3Dmax:2G iommu=3Ddebug,verb=
ose com1=3D115200,8n1 console=3Dcom1 ucode=3Dscan console_timestamps=3D1 co=
nsole_to_ring conring_size=3D2097152 cpufreq=3Dxen:performance,verbose sync=
_console noreboot loglvl=3Dall guest_loglvl=3Dall dom0_mem_max=3Dmax:6GB,2G=
=0D
(XEN) Video information:=0D
(XEN)  VGA is text mode 80x25, font 8x16=0D
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds=0D
(XEN)  EDID info not retrieved because no DDC retrieval method detected=0D
(XEN) Disc information:=0D
(XEN)  Found 1 MBR signatures=0D
(XEN)  Found 1 EDD information structures=0D
(XEN) Xen-e820 RAM map:=0D
(XEN)  0000000000000000 - 0000000000099c00 (usable)=0D
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)=0D
(XEN)  00000000000e0000 - 0000000000100000 (reserved)=0D
(XEN)  0000000000100000 - 00000000a58f1000 (usable)=0D
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)=0D
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)=0D
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)=0D
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)=0D
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)=0D
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)=0D
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)=0D
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)=0D
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)=0D
(XEN)  00000000bc000000 - 00000000be200000 (reserved)=0D
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)=0D
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)=0D
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)=0D
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)=0D
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)=0D
(XEN)  00000000ff000000 - 0000000100000000 (reserved)=0D
(XEN)  0000000100000000 - 000000023fe00000 (usable)=0D
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)=0D
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)=
=0D
(XEN) CPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)=
=0D
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)=
=0D
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)=
=0D
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)=
=0D
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)=
=0D
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)=
=0D
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)=
=0D
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)=
=0D
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240=
)=0D
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)=
=0D
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)=
=0D
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)=
=0D
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)=
=0D
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)=
=0D
(XEN) System RAM: 8046MB (8239752kB)=0D
(XEN) No NUMA configuration found=0D
(XEN) Faking a node at 0000000000000000-000000023fe00000=0D
(XEN) Domain heap initialised=0D
(XEN) found SMP MP-table at 000fd870=0D
(XEN) DMI 2.7 present.=0D
(XEN) Using APIC driver default=0D
(XEN) ACPI: PM-Timer IO Port: 0x1808=0D
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]=0D
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]=0D
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/000000000000000=
0, using 32=0D
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]=0D
(XEN) ACPI: Local APIC address 0xfee00000=0D
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)=0D
(XEN) Processor #0 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)=0D
(XEN) Processor #2 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)=0D
(XEN) Processor #4 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)=0D
(XEN) Processor #6 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)=0D
(XEN) Processor #1 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)=0D
(XEN) Processor #3 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)=0D
(XEN) Processor #5 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)=0D
(XEN) Processor #7 7:12 APIC version 21=0D
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])=0D
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])=0D
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23=0D
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)=0D
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)=0D
(XEN) ACPI: IRQ0 used by override.=0D
(XEN) ACPI: IRQ2 used by override.=0D
(XEN) ACPI: IRQ9 used by override.=0D
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs=0D
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000=0D
(XEN) [VT-D]dmar.c:778: Host address width 39=0D
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:=0D
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed90000=0D
(XEN) [VT-D]iommu.c:1157: drhd->address =3D fed90000 iommu->reg =3D ffff82c=
000201000=0D
(XEN) [VT-D]iommu.c:1159: cap =3D c0000020660462 ecap =3D f0101a=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0=0D
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:=0D
(XEN) [VT-D]dmar.c:472:   dmaru->address =3D fed91000=0D
(XEN) [VT-D]iommu.c:1157: drhd->address =3D fed91000 iommu->reg =3D ffff82c=
000203000=0D
(XEN) [VT-D]iommu.c:1159: cap =3D d2008020660462 ecap =3D f010da=0D
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0=0D
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0=0D
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL=0D
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0=0D
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657=
fff=0D
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:=0D
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0=0D
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1ff=
fff=0D
(XEN) Xen ERST support is initialized.=0D
(XEN) Using ACPI (MADT) for SMP configuration information=0D
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)=0D
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X=0D
(XEN) Switched to APIC driver x2apic_cluster.=0D
(XEN) Using scheduler: SMP Credit Scheduler (credit)=0D
(XEN) Detected 3400.079 MHz processor.=0D
(XEN) Initing memory sharing.=0D
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7=0D
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 ext=
ended MCE MSR 0=0D
(XEN) Intel machine check reporting enabled=0D
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f=0D
(XEN) PCI: MCFG area at f8000000 reserved in E820=0D
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f=0D
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.=0D
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.=0D
(XEN) Intel VT-d Snoop Control not enabled.=0D
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.=0D
(XEN) Intel VT-d Queued Invalidation enabled.=0D
(XEN) Intel VT-d Interrupt Remapping enabled.=0D
(XEN) Intel VT-d Shared EPT tables not enabled.=0D
(XEN) 02:00.0: status=3D0010 (alloc_pdev+0xb4/0x2e9 wants 11)=0D
(XEN) 02:00.0: pos=3D40=0D
(XEN) 02:00.0: id=3D01=0D
(XEN) 02:00.0: pos=3D50=0D
(XEN) 02:00.0: id=3D05=0D
(XEN) 02:00.0: pos=3D70=0D
(XEN) 02:00.0: id=3D11=0D
(XEN) 02:00.1: status=3D0010 (alloc_pdev+0xb4/0x2e9 wants 11)=0D
(XEN) 02:00.1: pos=3D40=0D
(XEN) 02:00.1: id=3D01=0D
(XEN) 02:00.1: pos=3D50=0D
(XEN) 02:00.1: id=3D05=0D
(XEN) 02:00.1: pos=3D70=0D
(XEN) 02:00.1: id=3D11=0D
(XEN) I/O virtualisation enabled=0D
(XEN)  - Dom0 mode: Relaxed=0D
(XEN) Interrupt remapping enabled=0D
(XEN) Enabled directed EOI with ioapic_ack_old on!=0D
(XEN) ENABLING IO-APIC IRQs=0D
(XEN)  -> Using old ACK method=0D
(XEN) ..TIMER: vector=3D0xF0 apic1=3D0 pin1=3D2 apic2=3D-1 pin2=3D-1=0D
(XEN) TSC deadline timer enabled=0D
(XEN) [2014-01-25 01:29:31] Platform timer is 14.318MHz HPET
(XEN) [2014-01-25 01:29:31] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-25 01:29:31] mwait-idle: MWAIT substates: 0x42120
(XEN) [2014-01-25 01:29:31] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-25 01:29:32] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-25 01:29:32] VMX: Supported advanced features:
(XEN) [2014-01-25 01:29:32]  - APIC MMIO access virtualisation
(XEN) [2014-01-25 01:29:32]  - APIC TPR shadow
(XEN) [2014-01-25 01:29:32]  - Extended Page Tables (EPT)
(XEN) [2014-01-25 01:29:32]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-25 01:29:32]  - Virtual NMI
(XEN) [2014-01-25 01:29:32]  - MSR direct-access bitmap
(XEN) [2014-01-25 01:29:32]  - Unrestricted Guest
(XEN) [2014-01-25 01:29:32]  - VMCS shadowing
(XEN) [2014-01-25 01:29:32] HVM: ASIDs enabled.
(XEN) [2014-01-25 01:29:32] HVM: VMX enabled
(XEN) [2014-01-25 01:29:32] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-25 01:29:32] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-25 01:29:32] Brought up 8 CPUs
(XEN) [2014-01-25 01:29:32] ACPI sleep modes: S3
(XEN) [2014-01-25 01:29:32] mcheck_poll: Machine check polling timer started.
(XEN) [2014-01-25 01:29:32] Multiple initrd candidates, picking module #1
(XEN) [2014-01-25 01:29:32] *** LOADING DOMAIN 0 ***
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa28000
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=0x1cc3000 memsz=0x14d80
(XEN) [2014-01-25 01:29:32] elf_parse_binary: phdr: paddr=0x1cd8000 memsz=0x71f000
(XEN) [2014-01-25 01:29:32] elf_parse_binary: memory: 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: ENTRY = 0xffffffff81cd81e0
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-25 01:29:32] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-25 01:29:32] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-25 01:29:32]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-25 01:29:32]     elf_paddr_offset = 0x0
(XEN) [2014-01-25 01:29:32]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-25 01:29:32]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-25 01:29:32]     virt_kend        = 0xffffffff823f7000
(XEN) [2014-01-25 01:29:32]     virt_entry       = 0xffffffff81cd81e0
(XEN) [2014-01-25 01:29:32]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-25 01:29:32]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-25 01:29:32]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 01:29:32] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 01:29:32]  Dom0 alloc.:   000000022c000000->0000000230000000 (487082 pages to be allocated)
(XEN) [2014-01-25 01:29:32]  Init. ramdisk: 000000023ac31000->000000023fd86b1b
(XEN) [2014-01-25 01:29:32] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 01:29:32]  Loaded kernel: ffffffff81000000->ffffffff823f7000
(XEN) [2014-01-25 01:29:32]  Init. ramdisk: ffffffff823f7000->ffffffff8754cb1b
(XEN) [2014-01-25 01:29:32]  Phys-Mach map: ffffffff8754d000->ffffffff8794d000
(XEN) [2014-01-25 01:29:32]  Start info:    ffffffff8794d000->ffffffff8794d4b4
(XEN) [2014-01-25 01:29:32]  Page tables:   ffffffff8794e000->ffffffff8798f000
(XEN) [2014-01-25 01:29:32]  Boot stack:    ffffffff8798f000->ffffffff87990000
(XEN) [2014-01-25 01:29:32]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-25 01:29:32]  ENTRY ADDRESS: ffffffff81cd81e0
(XEN) [2014-01-25 01:29:32] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a28000
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc20f0
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 2 at 0xffffffff81cc3000 -> 0xffffffff81cd7d80
(XEN) [2014-01-25 01:29:32] elf_load_binary: phdr 3 at 0xffffffff81cd8000 -> 0xffffffff81e7b000
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:03.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:14.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:16.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-25 01:29:33] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-25 01:29:33] Free RAM: ................................................done.
(XEN) [2014-01-25 01:29:33] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-25 01:29:33] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-25 01:29:33] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-25 01:29:33] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-25 01:29:33] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-25 01:29:33] **********************************************
(XEN) [2014-01-25 01:29:33] 3... 2... 1... 
(XEN) [2014-01-25 01:29:36] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-25 01:29:36] Freed 272kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAAA.hide=(02:00.*) kgdboc=hvc0
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
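A quick way to sanity-check the Xen-filtered e820 map above is to total the "usable" ranges mechanically. A minimal sketch, assuming the dmesg lines are copied as-is into a list of strings (the helper name `usable_bytes` is illustrative, not from any tool in this thread):

```python
import re

# Match dmesg-style map lines such as:
#   [    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
MAP_RE = re.compile(r"Xen: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(lines):
    """Sum the size of all ranges marked 'usable' (end address is inclusive)."""
    total = 0
    for line in lines:
        m = MAP_RE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total
```

Run against the two "usable" ranges in the map above, this gives 0x99000 + 0x7ff67000 bytes, i.e. roughly the 2 GB handed to dom0.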
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] BRK [0x01ff2000, 0x01ff2fff] PGTABLE
[    0.000000] BRK [0x01ff3000, 0x01ff3fff] PGTABLE
[    0.000000] BRK [0x01ff4000, 0x01ff4fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f7000-0x0754cfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    6.004223] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    6.004224] Policy zone: DMA32
[    6.004225] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAAA.hide=(02:00.*) kgdboc=hvc0
[    6.004539] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    6.004569] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    6.025034] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    6.028115] Memory: 1891592K/2097148K available (7058K kernel code, 773K rwdata, 2208K rodata, 1724K init, 1380K bss, 205556K reserved)
[    6.028345] Hierarchical RCU implementation.
[    6.028345] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    6.028346] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    6.028354] NR_IRQS:33024 nr_irqs:256 16
[    6.028433] xen: sci override: global_irq=9 trigger=0 polarity=0
[    6.028435] xen: registering gsi 9 triggering 0 polarity 0
[    6.028445] xen: --> pirq=9 -> irq=9 (gsi=9)
[    6.028467] xen: acpi sci 9
[    6.028470] xen: --> pirq=1 -> irq=1 (gsi=1)
[    6.028473] xen: --> pirq=2 -> irq=2 (gsi=2)
[    6.028475] xen: --> pirq=3 -> irq=3 (gsi=3)
[    6.028478] xen: --> pirq=4 -> irq=4 (gsi=4)
[    6.028480] xen: --> pirq=5 -> irq=5 (gsi=5)
[    6.028483] xen: --> pirq=6 -> irq=6 (gsi=6)
[    6.028485] xen: --> pirq=7 -> irq=7 (gsi=7)
[    6.028488] xen: --> pirq=8 -> irq=8 (gsi=8)
[    6.028490] xen: --> pirq=10 -> irq=10 (gsi=10)
[    6.028493] xen: --> pirq=11 -> irq=11 (gsi=11)
[    6.028495] xen: --> pirq=12 -> irq=12 (gsi=12)
[    6.028498] xen: --> pirq=13 -> irq=13 (gsi=13)
[    6.028500] xen: --> pirq=14 -> irq=14 (gsi=14)
[    6.028503] xen: --> pirq=15 -> irq=15 (gsi=15)
[    6.030067] Console: colour VGA+ 80x25
[    6.981340] console [hvc0] enabled
[    6.985294] Xen: using vcpuop timer interface
[    6.989644] installing Xen timer for CPU 0
[    6.993827] tsc: Detected 3400.078 MHz processor
[    6.998509] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.15 BogoMIPS (lpj=3400078)
[    7.009143] pid_max: default: 32768 minimum: 301
[    7.013988] Security Framework initialized
[    7.018079] SELinux:  Initializing.
[    7.021655] SELinux:  Starting in permissive mode
[    7.026744] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    7.034202] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    7.041369] Mount-cache hash table entries: 256
[    7.046366] Initializing cgroup subsys freezer
[    7.050877] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    7.050877] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    7.063979] CPU: Physical Processor ID: 0
[    7.068049] CPU: Processor Core ID: 0
[    7.072492] mce: CPU supports 2 MCE banks
[    7.076503] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    7.076503] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    7.076503] tlb_flushall_shift: 6
[    7.113961] Freeing SMP alternatives memory: 32K (ffffffff81e72000 - ffffffff81e7a000)
[    7.122611] ACPI: Core revision 2
[    7.176872] ACPI: All ACPI Tables successfully acquired
[    7.183637] cpu 0 spinlock event irq 41
[    7.187516] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    7.198516] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    7.205985] calling  set_real_mode_per_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    7.233718] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1=0D
[    7.239351] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after=
 0 usecs=0D
[    7.246803] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1=0D
[    7.252524] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 afte=
r 0 usecs=0D
[    7.260113] calling  init_hw_perf_events+0x0/0x53b @ 1=0D
[    7.265287] Performance Events: unsupported p6 CPU model 60 no PMU drive=
r, software events only.=0D
[    7.274128] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929=
 usecs=0D
[    7.281408] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1=0D
[    7.287822] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returne=
d 0 after 0 usecs=0D
[    7.296053] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1=0D
[    7.301522] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 =
usecs=0D
[    7.308646] calling  spawn_ksoftirqd+0x0/0x28 @ 1=0D
[    7.313440] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs=0D
[    7.320000] calling  init_workqueues+0x0/0x59a @ 1=0D
[    7.325011] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs=
=0D
[    7.331613] calling  migration_init+0x0/0x72 @ 1=0D
[    7.336292] initcall migration_init+0x0/0x72 returned 0 after 0 usecs=0D
[    7.342792] calling  check_cpu_stall_init+0x0/0x1b @ 1=0D
[    7.347993] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 us=
ecs=0D
[    7.355011] calling  rcu_scheduler_really_started+0x0/0x12 @ 1=0D
[    7.360904] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 af=
ter 0 usecs=0D
[    7.368618] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1=0D
[    7.373856] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 us=
ecs=0D
[    7.380843] calling  cpu_stop_init+0x0/0x76 @ 1=0D
[    7.385456] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs=0D
[    7.391845] calling  relay_init+0x0/0x14 @ 1=0D
[    7.396176] initcall relay_init+0x0/0x14 returned 0 after 0 usecs=0D
[    7.402330] calling  tracer_alloc_buffers+0x0/0x1bd @ 1=0D
[    7.407639] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 u=
secs=0D
[    7.414723] calling  init_events+0x0/0x61 @ 1=0D
[    7.419144] initcall init_events+0x0/0x61 returned 0 after 0 usecs=0D
[    7.425383] calling  init_trace_printk+0x0/0x12 @ 1=0D
[    7.430322] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs=
=0D
[    7.437082] calling  event_trace_memsetup+0x0/0x52 @ 1=0D
[    7.442303] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 us=
ecs=0D
[    7.449302] calling  jump_label_init_module+0x0/0x12 @ 1=0D
[    7.454676] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 =
usecs=0D
[    7.461870] calling  balloon_clear+0x0/0x4f @ 1=0D
[    7.466463] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs=0D
[    7.472875] calling  rand_initialize+0x0/0x30 @ 1=0D
[    7.477664] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs=0D
[    7.484228] calling  mce_amd_init+0x0/0x165 @ 1=0D
[    7.488822] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs=0D
[    7.495261] x86: Booted up 1 node, 1 CPUs=0D
[    7.500015] NMI watchdog: disabled (cpu0): hardware events not enabled=0D
[    7.506658] devtmpfs: initialized=0D
[    7.512564] calling  ipc_ns_init+0x0/0x14 @ 1=0D
[    7.516911] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs=0D
[    7.523150] calling  init_mmap_min_addr+0x0/0x26 @ 1=0D
[    7.528177] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usec=
s=0D
[    7.535022] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1=
=0D
[    7.541699] initcall init_cpufreq_transition_notifier_list+0x0/0x1b retu=
rned 0 after 0 usecs=0D
[    7.550191] calling  net_ns_init+0x0/0x104 @ 1=0D
[    7.554753] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs=0D
[    7.561036] calling  e820_mark_nvs_memory+0x0/0x41 @ 1=0D
[    7.566224] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] =
(28672 bytes)=0D
[    7.574118] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] =
(708608 bytes)=0D
[    7.582278] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953=
 usecs=0D
[    7.589482] calling  cpufreq_tsc+0x0/0x37 @ 1=0D
[    7.593901] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs=0D
[    7.600143] calling  reboot_init+0x0/0x1d @ 1=0D
[    7.604564] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs=0D
[    7.610804] calling  init_lapic_sysfs+0x0/0x20 @ 1=0D
[    7.615656] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs=
=0D
[    7.622328] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1=0D
[    7.627877] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after =
0 usecs=0D
[    7.635243] calling  alloc_frozen_cpus+0x0/0x8 @ 1=0D
[    7.640095] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs=
=0D
[    7.646770] calling  wq_sysfs_init+0x0/0x14 @ 1=0D
[    7.651465] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left=
=0D
[    7.658212] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs=0D
[    7.664734] calling  ksysfs_init+0x0/0x94 @ 1=0D
[    7.669200] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs=0D
[    7.675394] calling  pm_init+0x0/0x4e @ 1=0D
[    7.679506] initcall pm_init+0x0/0x4e returned 0 after 0 usecs=0D
[    7.685361] calling  pm_disk_init+0x0/0x19 @ 1=0D
[    7.689883] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs=0D
[    7.696195] calling  swsusp_header_init+0x0/0x30 @ 1=0D
[    7.701220] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usec=
s=0D
[    7.708068] calling  init_jiffies_clocksource+0x0/0x12 @ 1=0D
[    7.713613] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after =
0 usecs=0D
[    7.720981] calling  cgroup_wq_init+0x0/0x5c @ 1=0D
[    7.725666] initcall cgroup_wq_init+0x0/0x5c returned 0 after 0 usecs=0D
[    7.732159] calling  event_trace_enable+0x0/0x173 @ 1=0D
[    7.737765] initcall event_trace_enable+0x0/0x173 returned 0 after 0 use=
cs=0D
[    7.744623] calling  init_zero_pfn+0x0/0x35 @ 1=0D
[    7.749214] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs=0D
[    7.755628] calling  fsnotify_init+0x0/0x26 @ 1=0D
[    7.760222] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs=0D
[    7.766634] calling  filelock_init+0x0/0x84 @ 1=0D
[    7.771238] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs=0D
[    7.777640] calling  init_misc_binfmt+0x0/0x31 @ 1=0D
[    7.782495] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs=
=0D
[    7.789168] calling  init_script_binfmt+0x0/0x16 @ 1=0D
[    7.794193] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usec=
s=0D
[    7.801040] calling  init_elf_binfmt+0x0/0x16 @ 1=0D
[    7.805806] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs=0D
[    7.812393] calling  init_compat_elf_binfmt+0x0/0x16 @ 1=0D
[    7.817767] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 =
usecs=0D
[    7.824959] calling  debugfs_init+0x0/0x5c @ 1=0D
[    7.829476] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs=0D
[    7.835791] calling  securityfs_init+0x0/0x53 @ 1=0D
[    7.840568] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs=0D
[    7.847146] calling  prandom_init+0x0/0xe2 @ 1=0D
[    7.851652] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs=0D
[    7.857979] calling  virtio_init+0x0/0x30 @ 1=0D
[    7.862504] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs=0D
[    7.868674] calling  __gnttab_init+0x0/0x30 @ 1=0D
[    7.873269] xen:grant_table: Grant tables using version 2 layout=0D
[    7.879352] Grant table initialized=0D
[    7.882885] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs=
=0D
[    7.889559] calling  early_resume_init+0x0/0x1d0 @ 1=0D
[    7.894611] RTC time:  1:29:37, date: 01/25/14=0D
[    7.899091] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 us=
ecs=0D
[    7.906112] calling  cpufreq_core_init+0x0/0x37 @ 1=0D
[    7.911052] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 use=
cs=0D
[    7.917984] calling  cpuidle_init+0x0/0x40 @ 1=0D
[    7.922492] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs=0D
[    7.928993] calling  bsp_pm_check_init+0x0/0x14 @ 1=0D
[    7.933932] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs=
=0D
[    7.940691] calling  sock_init+0x0/0x8b @ 1=0D
[    7.945043] initcall sock_init+0x0/0x8b returned 0 after 0 usecs=0D
[    7.951039] calling  net_inuse_init+0x0/0x26 @ 1=0D
[    7.955721] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs=0D
[    7.962219] calling  netpoll_init+0x0/0x31 @ 1=0D
[    7.966724] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs=0D
[    7.973051] calling  netlink_proto_init+0x0/0x1f7 @ 1=0D
[    7.978206] NET: Registered protocol family 16=0D
[    7.982697] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 u=
secs=0D
[    7.989791] calling  bdi_class_init+0x0/0x4d @ 1=0D
[    7.994575] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs=0D
[    8.001006] calling  kobject_uevent_init+0x0/0x12 @ 1=0D
[    8.006132] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 use=
cs
[    8.013048] calling  pcibus_class_init+0x0/0x19 @ 1
[    8.018053] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    8.024748] calling  pci_driver_init+0x0/0x12 @ 1
[    8.029611] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    8.036128] calling  backlight_class_init+0x0/0x85 @ 1
[    8.041386] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    8.048349] calling  video_output_class_init+0x0/0x19 @ 1
[    8.053875] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    8.061086] calling  xenbus_init+0x0/0x26f @ 1
[    8.065687] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    8.071940] calling  tty_class_init+0x0/0x38 @ 1
[    8.076688] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    8.083118] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    8.088489] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    8.095444] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    8.101256] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    8.108876] calling  register_node_type+0x0/0x34 @ 1
[    8.114034] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    8.120807] calling  i2c_init+0x0/0x70 @ 1
[    8.125137] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    8.131042] calling  init_ladder+0x0/0x12 @ 1
[    8.135463] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    8.141873] calling  init_menu+0x0/0x12 @ 1
[    8.146121] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    8.152361] calling  amd_postcore_init+0x0/0x143 @ 1
[    8.157389] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    8.164248] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    8.169800] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    8.177146] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    8.182291] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    8.189193] calling  mtrr_if_init+0x0/0x78 @ 1
[    8.193700] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    8.200200] calling  ffh_cstate_init+0x0/0x2a @ 1
[    8.204969] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    8.211552] calling  activate_jump_labels+0x0/0x32 @ 1
[    8.216752] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    8.223772] calling  acpi_pci_init+0x0/0x61 @ 1
[    8.228366] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    8.235992] ACPI: bus type PCI registered
[    8.240066] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    8.246565] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    8.253238] calling  dma_bus_init+0x0/0xd6 @ 1
[    8.257869] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    8.264601] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    8.271084] calling  dma_channel_table_init+0x0/0xde @ 1
[    8.276471] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    8.283649] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    8.289197] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    8.296560] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    8.302197] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    8.309648] calling  xen_pcpu_init+0x0/0xcc @ 1
[    8.315098] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    8.321452] calling  dmi_id_init+0x0/0x31d @ 1
[    8.326205] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    8.332458] calling  dca_init+0x0/0x20 @ 1
[    8.336617] dca service started, version 1.12.1
[    8.341269] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    8.347365] calling  iommu_init+0x0/0x58 @ 1
[    8.351707] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    8.357851] calling  pci_arch_init+0x0/0x69 @ 1
[    8.362460] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    8.371803] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    8.386554] PCI: Using configuration type 1 for base access
[    8.392119] initcall pci_arch_init+0x0/0x69 returned 0 after 9765 usecs
[    8.398804] calling  topology_init+0x0/0x98 @ 1
[    8.403804] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    8.410163] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    8.415258] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    8.422192] calling  init_vdso+0x0/0x135 @ 1
[    8.426526] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    8.432676] calling  sysenter_setup+0x0/0x2dd @ 1
[    8.437444] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    8.444031] calling  param_sysfs_init+0x0/0x194 @ 1
[    8.465475] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    8.472513] calling  pm_sysrq_init+0x0/0x19 @ 1
[    8.477103] initcall pm_sysrq_init+0x0/0x19 returned 0 after 0 usecs
[    8.483514] calling  default_bdi_init+0x0/0x65 @ 1
[    8.488672] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.495276] calling  init_bio+0x0/0xe9 @ 1
[    8.499490] bio: create slab <bio-0> at 0
[    8.503557] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.509663] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.514341] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.520842] calling  blk_settings_init+0x0/0x2c @ 1
[    8.525782] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.532542] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.537060] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.543373] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.548228] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.554901] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.559752] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.566427] calling  blk_mq_init+0x0/0x5f @ 1
[    8.570847] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.577086] calling  genhd_device_init+0x0/0x85 @ 1
[    8.582171] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.588858] calling  pci_slot_init+0x0/0x50 @ 1
[    8.593457] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.599861] calling  fbmem_init+0x0/0x98 @ 1
[    8.604266] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.610348] calling  acpi_init+0x0/0x27a @ 1
[    8.614707] ACPI: Added _OSI(Module Device)
[    8.618930] ACPI: Added _OSI(Processor Device)
[    8.623433] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.628200] ACPI: Added _OSI(Processor Aggregator Device)
[    8.637452] ACPI: Executed 1 blocks of module-level executable AML code
[    8.669789] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.677671] \_SB_:_OSC invalid UUID
[    8.681155] _OSC request data:1 1f
[    8.686851] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.696080] ACPI: Dynamic OEM Table Load:
[    8.700077] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.709930] ACPI: Interpreter enabled
[    8.713597] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.722861] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.732143] ACPI: (supports S0 S1 S4 S5)
[    8.736114] ACPI: Using IOAPIC for interrupt routing
[    8.741516] HEST: Table parsing has been initialized.
[    8.746566] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.756953] ACPI: No dock devices found.
[    8.858976] ACPI: Power Resource [FN00] (off)
[    8.864120] ACPI: Power Resource [FN01] (off)
[    8.869290] ACPI: Power Resource [FN02] (off)
[    8.874421] ACPI: Power Resource [FN03] (off)
[    8.879562] ACPI: Power Resource [FN04] (off)
[    8.889203] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.895378] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
[    8.906135] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.915140] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.928438] PCI host bridge to bus 0000:00
[    8.932526] pci_bus 0000:00: root bus resource [bus 00-3e]
p0-0xffff]
[    8.950552] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    8.957484] pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff]
[    8.964418] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.971350] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.978284] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.985217] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.992151] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.999094] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:00.0
[    9.016818] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    9.022973] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    9.029591] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:01.0
[    9.046582] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    9.052648] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1.1 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:01.1
[    9.070468] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    9.076484] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    9.083320] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    9.090597] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:2.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:02.0
[    9.107914] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    9.113933] pci 0000:00:03.0: reg 0x10: [mem 0xf1b34000-0xf1b37fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:3.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:03.0
[    9.132511] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    9.138571] pci 0000:00:14.0: reg 0x10: [mem 0xf1b20000-0xf1b2ffff 64bit]
[    9.145504] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    9.151734] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:14.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:14.0
[    9.168829] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    9.174870] pci 0000:00:16.0: reg 0x10: [mem 0xf1b3f000-0xf1b3f00f 64bit]
[    9.181809] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:16.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:16.0
[    9.199688] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    9.205727] pci 0000:00:19.0: reg 0x10: [mem 0xf1b00000-0xf1b1ffff]
[    9.212022] pci 0000:00:19.0: reg 0x14: [mem 0xf1b3d000-0xf1b3dfff]
[    9.218348] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    9.224110] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    9.230598] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:19.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:19.0
[    9.247696] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    9.253738] pci 0000:00:1a.0: reg 0x10: [mem 0xf1b3c000-0xf1b3c3ff]
[    9.260191] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    9.266798] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1a.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1a.0
[    9.283899] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    9.289930] pci 0000:00:1b.0: reg 0x10: [mem 0xf1b30000-0xf1b33fff 64bit]
[    9.296894] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    9.303381] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1b.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1b.0
[    9.320462] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    9.326626] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    9.333119] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.0
[    9.350208] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    9.356371] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    9.362863] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.3 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.3
[    9.379949] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    9.386112] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    9.392608] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.5 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.5
[    9.409702] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    9.415864] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    9.422358] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.6 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.6
[    9.439444] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    9.445607] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    9.452101] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1c.7 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1c.7
[    9.469203] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    9.475245] pci 0000:00:1d.0: reg 0x10: [mem 0xf1b3b000-0xf1b3b3ff]
[    9.481697] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    9.488276] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1d.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1d.0
[    9.505372] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.0
[    9.523303] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.529339] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.534941] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.540573] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.546208] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.551841] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.557474] pci 0000:00:1f.2: reg 0x24: [mem 0xf1b3a000-0xf1b3a7ff]
[    9.563881] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.2 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.2
[    9.580892] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.586922] pci 0000:00:1f.3: reg 0x10: [mem 0xf1b39000-0xf1b390ff 64bit]
[    9.593772] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.3 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.3
[    9.611152] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.617194] pci 0000:00:1f.6: reg 0x10: [mem 0xf1b38000-0xf1b38fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 0:1f.6 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:00:1f.6
[    9.636130] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.646905] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.652183] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.659048] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.669850] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.675893] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.682214] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.688541] pci 0000:02:00.0: reg 0x18: [io  0xe020-0xe03f]
[    9.694174] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.700519] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.707312] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.713439] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.720353] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 2:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:02:00.0
[    9.738706] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.744718] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.751037] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.757362] pci 0000:02:00.1: reg 0x18: [io  0xe000-0xe01f]
[    9.762996] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.769343] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.776133] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.782259] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.789174] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 2:0.1 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:02:00.1
[    9.809600] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.814816] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[    9.820968] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.827816] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.834852] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.845671] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.851716] pci 0000:04:00.0: reg 0x10: [mem 0xf1aa0000-0xf1abffff]
[    9.858031] pci 0000:04:00.0: reg 0x14: [mem 0xf1a80000-0xf1a9ffff]
[    9.864356] pci 0000:04:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.870074] pci 0000:04:00.0: reg 0x30: [mem 0xf1a60000-0xf1a7ffff pref]
[    9.876901] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.883130] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 4:0.0 flags:0
(XEN) [2014-01-25 01:29:40] PCI add device 0000:04:00.0
[    9.900194] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.906229] pci 0000:04:00.1: reg 0x10: [mem 0xf1a40000-0xf1a5ffff]
[    9.912543] pci 0000:04:00.1: reg 0x14: [mem 0xf1a20000-0xf1a3ffff]
[    9.918868] pci 0000:04:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.924584] pci 0000:04:00.1: reg 0x30: [mem 0xf1a00000-0xf1a1ffff pref]
[    9.931411] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 01:29:40] PHYSDEVOP_pci_device_add of 4:0.1 flags:0
(XEN) [2014-01-25 01:29:40] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-25 01:29:40] PCI add device 0000:04:00.1
[    9.957555] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.962778] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[    9.968930] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.975782] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.982813] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.993644] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.999677] pci 0000:05:00.0: reg 0x10: [mem 0xf1900000-0xf197ffff]
[   10.006013] pci 0000:05:00.0: reg 0x18: [io  0xc000-0xc01f]
[   10.011627] pci 0000:05:00.0: reg 0x1c: [mem 0xf1980000-0xf1983fff]
[   10.018128] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[   10.024367] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 5:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:05:00.0
[   10.043509] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[   10.048734] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   10.054886] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   10.061734] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[   10.068808] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.079636] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[   10.085873] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[   10.092649] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 6:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:06:00.0
[   10.109661] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[   10.114890] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   10.121752] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[   10.130246] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[   10.136445] pci 0000:07:01.0: supports D1 D2
[   10.140705] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 7:1.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:07:01.0
[   10.158654] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[   10.164692] pci 0000:07:03.0: reg 0x10: [mem 0xf1604000-0xf16047ff]
[   10.170999] pci 0000:07:03.0: reg 0x14: [mem 0xf1600000-0xf1603fff]
[   10.177484] pci 0000:07:03.0: supports D1 D2
[   10.181741] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 7:3.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:07:03.0
[   10.205793] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[   10.212846] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   10.219688] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.228257] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff] (subtractive decode)
[   10.236923] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.245502] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[   10.254084] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[   10.262532] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[   10.268590] pci 0000:08:08.0: reg 0x10: [mem 0xf1507000-0xf1507fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:8.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:08.0
[   10.293383] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[   10.299434] pci 0000:08:08.1: reg 0x10: [mem 0xf1506000-0xf1506fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:8.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:08.1
[   10.324249] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[   10.330296] pci 0000:08:09.0: reg 0x10: [mem 0xf1505000-0xf1505fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:9.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:09.0
[   10.355093] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[   10.361151] pci 0000:08:09.1: reg 0x10: [mem 0xf1504000-0xf1504fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:9.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:09.1
[   10.385981] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[   10.392035] pci 0000:08:0a.0: reg 0x10: [mem 0xf1503000-0xf1503fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:a.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0a.0
[   10.416830] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[   10.422882] pci 0000:08:0a.1: reg 0x10: [mem 0xf1502000-0xf1502fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:a.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0a.1
[   10.447704] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[   10.453757] pci 0000:08:0b.0: reg 0x10: [mem 0xf1501000-0xf1501fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:b.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0b.0
[   10.478555] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[   10.484613] pci 0000:08:0b.1: reg 0x10: [mem 0xf1500000-0xf1500fff pref]
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 8:b.1 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-25 01:29:41] PCI add device 0000:08:0b.1
[   10.509468] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.514694] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   10.521531] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.528206] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.534877] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.541908] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.552799] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.558907] pci 0000:09:00.0: reg 0x10: [mem 0xf1800000-0xf1801fff 64bit]
[   10.566074] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.572364] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of 9:0.0 flags:0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:09:00.0
[   10.591548] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.596771] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   10.603615] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.610646] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.621449] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.627497] pci 0000:0a:00.0: reg 0x10: [io  0xb050-0xb057]
[   10.633123] pci 0000:0a:00.0: reg 0x14: [io  0xb040-0xb043]
[   10.638754] pci 0000:0a:00.0: reg 0x18: [io  0xb030-0xb037]
[   10.644388] pci 0000:0a:00.0: reg 0x1c: [io  0xb020-0xb023]
[   10.650021] pci 0000:0a:00.0: reg 0x20: [io  0xb000-0xb01f]
[   10.655656] pci 0000:0a:00.0: reg 0x24: [mem 0xf1700000-0xf17001ff]
[   10.662190] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 01:29:41] PHYSDEVOP_pci_device_add of a:0.0 flags:0
(XEN) [2014-01-25 01:29:41] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-25 01:29:41] PCI add device 0000:0a:00.0
[   10.687820] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.693041] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   10.699191] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   10.706043] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.712806] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.724592] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.731909] ACPI: PCI Interrupt Link [LNKB] (pink [LNKD] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.753851] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 *10 11 12 14 15)
[   10.761159] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.769596] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.776910] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.785344] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.790137] ACPI: \_SB_.PCI0: notify handler is installed
[   10.795622] Found 1 acpi root devices
[   10.799423] initcall acpi_init+0x0/0x27a returned 0 after 443359 usecs
[   10.805949] calling  pnp_init+0x0/0x12 @ 1
[   10.810290] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.816206] calling  balloon_init+0x0/0x242 @ 1
[   10.820798] xen:balloon: Initialising balloon driver
[   10.825824] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs=0D
[   10.832411] calling  xen_setup_shutdown_event+0x0/0x30 @ 1=0D
[   10.837955] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after =
0 usecs=0D
[   10.845323] calling  xenbus_probe_backend_init+0x0/0x2d @ 1=0D
[   10.851050] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after=
 0 usecs=0D
[   10.858438] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1=0D
[   10.864274] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 afte=
r 0 usecs=0D
[   10.871738] calling  xen_acpi_pad_init+0x0/0x47 @ 1=0D
[   10.876753] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs=
=0D
[   10.883437] calling  balloon_init+0x0/0xfa @ 1=0D
[   10.887941] xen_balloon: Initialising balloon driver=0D
[   10.893249] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs=0D
[   10.899682] calling  misc_init+0x0/0xba @ 1=0D
[   10.904021] initcall misc_init+0x0/0xba returned 0 after 0 usecs=0D
[   10.910020] calling  vga_arb_device_init+0x0/0xde @ 1=0D
[   10.915283] vgaarb: device added: PCI:0000:00:02.0,decodes=3Dio+mem,owns=
=3Dio+mem,locks=3Dnone=0D
[   10.923358] vgaarb: loaded=0D
[   10.926126] vgaarb: bridge control possible 0000:00:02.0=0D
[   10.931500] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 =
usecs=0D
[   10.938694] calling  cn_init+0x0/0xc0 @ 1=0D
[   10.942785] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs=0D
[   10.948660] calling  dma_buf_init+0x0/0x75 @ 1=0D
[   10.953178] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs=0D
[   10.959492] calling  phy_init+0x0/0x2e @ 1=0D
[   10.963873] initcall phy_init+0x0/0x2e returned 0 after 0 usecs=0D
[   10.969784] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.974525] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.980965] calling  usb_init+0x0/0x169 @ 1
[   10.985222] ACPI: bus type USB registered
[   10.989491] usbcore: registered new interface driver usbfs
[   10.995072] usbcore: registered new interface driver hub
[   11.000485] usbcore: registered new device driver usb
[   11.005534] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   11.011857] calling  serio_init+0x0/0x31 @ 1
[   11.016286] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   11.022372] calling  input_init+0x0/0x103 @ 1
[   11.026858] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   11.033032] calling  rtc_init+0x0/0x5b @ 1
[   11.037256] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   11.043170] calling  pps_init+0x0/0xb7 @ 1
[   11.047391] pps_core: LinuxPPS API ver. 1 registered
[   11.052357] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   11.061541] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   11.067781] calling  ptp_init+0x0/0xa4 @ 1
[   11.072002] PTP clock support registered
[   11.075927] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   11.082080] calling  power_supply_class_init+0x0/0x44 @ 1
[   11.087601] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   11.094824] calling  hwmon_init+0x0/0xe3 @ 1
[   11.099217] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   11.105309] calling  leds_init+0x0/0x40 @ 1
[   11.109616] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   11.115623] calling  efisubsys_init+0x0/0x142 @ 1
[   11.120388] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   11.126975] calling  pci_subsys_init+0x0/0x4f @ 1
[   11.131739] PCI: Using ACPI for IRQ routing
[   11.139418] PCI: pci_cache_line_size set to 64 bytes
[   11.144579] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   11.150573] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   11.156639] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   11.163485] calling  proto_init+0x0/0x12 @ 1
[   11.167823] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   11.173969] calling  net_dev_init+0x0/0x1c6 @ 1
[   11.179199] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   11.185545] calling  neigh_init+0x0/0x80 @ 1
[   11.189874] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   11.196026] calling  fib_rules_init+0x0/0xaf @ 1
[   11.200706] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   11.207206] calling  pktsched_init+0x0/0x10a @ 1
[   11.211891] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   11.218386] calling  tc_filter_init+0x0/0x55 @ 1
[   11.223065] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   11.229566] calling  tc_action_init+0x0/0x55 @ 1
[   11.234245] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   11.240746] calling  genl_init+0x0/0x85 @ 1
[   11.245008] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   11.251059] calling  cipso_v4_init+0x0/0x61 @ 1
[   11.255653] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   11.262065] calling  netlbl_init+0x0/0x81 @ 1
[   11.266509] NetLabel: Initializing
[   11.269976] NetLabel:  domain hash size = 128
[   11.274394] NetLabel:  protocols = UNLABELED CIPSOv4
[   11.279461] NetLabel:  unlabeled traffic allowed by default
[   11.285056] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   11.291556] calling  rfkill_init+0x0/0x79 @ 1
[   11.296155] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   11.302325] calling  xen_mcfg_late+0x0/0xab @ 1
[   11.306915] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   11.313346] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   11.318110] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   11.324680] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   11.330015] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   11.337073] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   11.342192] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   11.349119] calling  hpet_late_init+0x0/0x101 @ 1
[   11.353886] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   11.360644] calling  init_amd_nbs+0x0/0xb8 @ 1
[   11.365154] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   11.371478] calling  clocksource_done_booting+0x0/0x42 @ 1
[   11.377031] Switched to clocksource xen
[   11.380931] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3811 usecs
[   11.388554] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   11.394045] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 284 usecs
[   11.401171] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   11.407503] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   11.415643] calling  event_trace_init+0x0/0x205 @ 1
[   11.435392] initcall event_trace_init+0x0/0x205 returned 0 after 14459 usecs
[   11.442431] calling  init_kprobe_trace+0x0/
[   11.469847] initcall eventpoll_init+0x0/0xda returned 0 after 29 usecs
[   11.476403] calling  anon_inode_init+0x0/0x5b @ 1
[   11.481204] initcall anon_inode_init+0x0/0x5b returned 0 after 34 usecs
[   11.487842] calling  init_ramfs_fs+0x0/0x4d @ 1
[   11.492444] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   11.498849] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.504050] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.511069] calling  acpi_event_init+0x0/0x3a @ 1
[   11.515854] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   11.522510] calling  pnp_system_init+0x0/0x12 @ 1
[   11.527371] initcall pnp_system_init+0x0/0x12 returned 0 after 94 usecs
[   11.533988] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.538481] pnp: PnP ACPI init
[   11.541623] ACPI: bus type PNP registered
[   11.546002] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.552600] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.559491] pnp 00:01: [dma 4]
[   11.562708] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.569395] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.576467] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.584008] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.589924] system 00:04: [io  0xffff] has been reserved
[   11.595296] system 00:04: [io  0xffff] has been reserved
[   11.600668] system 00:04: [io  0xffff] has been reserved
[   11.606043] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.612020] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.618001] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.623981] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.629962] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.635941] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.642267] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.648244] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.655122] xen: registering gsi 8 triggering 1 polarity 0
[   11.660806] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.667649] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.673559] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.681890] kworker/u2:0 (517) used greatest stack depth: 5560 bytes left
[   11.688704] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.694651] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.700625] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.708874] xen: registering gsi 4 triggering 1 polarity 0
[   11.714351] Already setup the GSI :4
[   11.717995] pnp 00:08: [dma 0 disabled]
[   11.722157] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.729876] xen: registering gsi 3 triggering 1 polarity 0
[   11.735371] pnp 00:09: [dma 0 disabled]
[   11.739456] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.746302] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.752217] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.759094] xen: registering gsi 13 triggering 1 polarity 0
[   11.764906] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.774553] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.781163] system 00:0c: [mem 0xfed10000-0xfed00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.794501] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.801176] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.807849] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.814522] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.821195] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.827868] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.834542] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.841214] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.847888] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.854556] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.863457] pnp: PnP ACPI: found 13 devices
[   11.867634] ACPI: bus type PNP unregistered
[   11.871880] initcall pnpacpi_init+0x0/0x8c returned 0 after 325583 usecs
[   11.878641] calling  pcistub_init+0x0/0x29f @ 1
[   11.883902] initcall pcistub_init+0x0/0x29f returned 0 after 653 usecs
[   11.890429] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.904149] initcall chr_dev_init+0x0/0xc6 returned 0 after 9007 usecs
[   11.910667] calling  firmware_class_init+0x0/0xec @ 1
[   11.915868] initcall firmware_class_init+0x0/0xec returned 0 after 87 usecs
[   11.922816] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.927722] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 139 usecs
[   11.934414] calling  thermal_init+0x0/0x8b @ 1
[   11.938996] initcall thermal_init+0x0/0x8b returned 0 after 75 usecs
[   11.945350] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.951240] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.959125] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.967825] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.974251] initcall init_acpi_pm_clocksource+0x0/0xec returned -19 after 9354 usecs
[   11.982047] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.987703] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.992677] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.999601] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.006535] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.013464] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.020399] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.027330] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.034263] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.041199] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.048130] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.055062] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.061997] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.068920] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   12.076303] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.083221] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   12.090690] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   12.097606] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   12.104989] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   12.111905] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   12.119366] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   12.124647] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[   12.130801] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   12.137649] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   12.142673] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[   12.148830] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   12.155682] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   12.160700] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   12.166857] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   12.173709] pci 0000:07:01.0: PCI bridge to [bus 08]
[   12.178734] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   12.185590] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   12.190864] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   12.197719] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   12.202996] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   12.209849] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   12.214868] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   12.221722] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   12.226738] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   12.232894] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   12.239748] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   12.245370] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   12.251002] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   12.257331] pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff]
[   12.263703] pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff]
[   12.270006] pci_bus 0000:00: resource 9 [mem 0x000dc000-0x000dffff]
[   12.276332] pci_bus 0000:00: resource 10 [mem 0x000e0000-0x000e3fff]
[   12.282744] pci_bus 0000:00: resource 11 [mem 0x000e4000-0x000e7fff]
[   12.289160] pci_bus 0000:00: resource 12 [mem 0xbe200000-0xfeafffff]
[   12.295572] pci_bus 0000:02: resource 0 [io  0xe000-0xefff]
[   12.301206] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   12.307533] pci_bus 0000:04: resource 0 [io  0xd000-0xdfff]
[   12.313165] pci_bus 0000:04: resource 1 [mem 0xf1a00000-0xf1afffff]
[   12.319492] pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
[   12.325125] pci_bus 0000:05: resource 1 [mem 0xf1900000-0xf19fffff]
[   12.331451] pci_bus 0000:06: resource 1 [mem 0xf1500000-0xf16fffff]
[   12.337779] pci_bus 0000:07: resource 1 [mem 0xf1500000-0xf16fffff]
[   12.344104] pci_bus 0000:07: resource 5 [mem 0xf1500000-0xf16fffff]
[   12.350430] pci_bus 0000:08: resource 1 [mem 0xf1500000-0xf15fffff]
[   12.356758] pci_bus 0000:09: resource 1 [mem 0xf1800000-0xf18fffff]
[   12.363084] pci_bus 0000:0a: resource 0 [io  0xb000-0xbfff]
[   12.368717] pci_bus 0000:0a: resource 1 [mem 0xf1700000-0xf17fffff]
[   12.375045] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 378369 usecs
[   12.382843] calling  sysctl_core_init+0x0/0x2c @ 1
[   12.387711] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   12.394459] calling  inet_init+0x0/0x296 @ 1
[   12.398858] NET: Registered protocol family 2
[   12.403526] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   12.410780] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   12.417420] TCP: Hash tables configured (established 16384 bind 16384)
[   12.424012] TCP: reno registered
[   12.427299] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   12.433365] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   12.439976] initcall inet_init+0x0/0x296 returned 0 after 40220 usecs
[   12.446406] calling  ipv4_offload_init+0x0/0x61 @ 1
[   12.451343] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   12.458103] calling  af_unix_init+0x0/0x55 @ 1
[   12.462623] NET: Registered protocol family 1
[   12.467044] initcall af_unix_init+0x0/0x55 returned 0 after 4330 usecs
[   12.473617] calling  ipv6_offload_init+0x0/0x7f @ 1
[   12.478556] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   12.485317] calling  init_sunrpc+0x0/0x69 @ 1
[   12.489934] RPC: Registered named UNIX socket transport module.
[   12.495842] RPC: Registered udp transport module.
[   12.500606] RPC: Registered tcp transport module.
[   12.505371] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.511871] initcall init_sunrpc+0x0/0x69 returned 0 after 21615 usecs
[   12.518457] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.523926] pci 0000:00:02.0: Boot video device
[   12.529014] xen: registering gsi 16 triggering 0 polarity 1
[   12.534592] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.539231] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.546970] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.554891] xen: registering gsi 16 triggering 0 polarity 1
[   12.560455] Already setup the GSI :16
[   12.580089] xen: registering gsi 23 triggering 0 polarity 1
[   12.585660] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.606155] xen: registering gsi 18 triggering 0 polarity 1
[   12.611729] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.61664
[   12.630005] initcall pci_apply_final_quirks+0x0/0x117 returned 0 after 103599 usecs
[   12.637716] calling  populate_rootfs+0x0/0x112 @ 1
[   12.642683] Unpacking initramfs...
[   13.735361] Freeing initrd memory: 83288K (ffff8800023f7000 - ffff88000754d000)
[   13.742670] initcall populate_rootfs+0x0[   13.749856] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.754534] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.761035] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.766668] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.774314] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.780274] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.788074] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.792843] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.799427] calling  vsyscall_init+0x0/0x27 @ 1
[   13.804027] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.810434] calling  sbf_init+0x0/0xf6 @ 1
[   13.814595] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.820573] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.825775] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   13.832794] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.837303] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.843627] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.848393] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.854979] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.860085] initcall cache_sysfs_init+0x0/0x65 returned 0 after 245 usecs
[   13.866863] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.871713] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.878561] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.883588] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.890607] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.895633] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.902652] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.907349] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.917908] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10327 usecs
[   13.924755] calling  inject_init+0x0/0x30 @ 1
[   13.929172] Machine check injector initialized
[   13.933680] initcall inject_init+0x0/0x30 returned 0 after 4401 usecs
[   13.940180] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.946072] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.953785] calling  microcode_init+0x0/0x1b1 @ 1
[   13.958738] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.964849] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.973621] initcall microcode_init+0x0/0x1b1 returned 0 after 14714 usecs
[   13.980552] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.985141] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.991729] calling  msr_init+0x0/0x162 @ 1
[   13.996204] initcall msr_init+0x0/0x162 returned 0 after 223 usecs
[   14.002372] calling  cpuid_init+0x0/0x162 @ 1
[   14.006994] initcall cpuid_init+0x0/0x162 returned 0 after 197 usecs
[   14.013335] calling  ioapic_init_ops+0x0/0x14 @ 1
[   14.018100] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.024686] calling  add_pcspkr+0x0/0x40 @ 1
[   14.029126] initcall add_pcspkr+0x0/0x40 returned 0 after 103 usecs
[   14.035388] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   14.041882] Scanning for low memory corruption every 60 seconds
[   14.047862] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5838 usecs
[   14.056440] calling  sysfb_init+0x0/0x9c @ 1
[   14.060886] initcall sysfb_init+0x0/0x9c returned 0 after 109 usecs
[   14.067148] calling  audit_classes_init+0x0/0xaf @ 1
[   14.072187] initcall audit_classes_init+0x0/0xaf returned 0 after 13 usecs
[   14.079104] calling  pt_dump_init+0x0/0x30 @ 1
[   14.083621] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   14.089938] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   14.094799] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 7 usecs
[   14.101465] calling  proc_execdomains_init+0x0/0x22 @ 1
[   14.106757] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   14.113856] calling  ioresources_init+0x0/0x3c @ 1
[   14.118716] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   14.125382] calling  uid_cache_init+0x0/0x85 @ 1
[   14.130078] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   14.136649] calling  init_posix_timers+0x0/0x240 @ 1
[   14.141688] initcall init_posix_timers+0x0/0x240 returned 0 after 12 usecs
[   14.148608] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   14.153896] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   14.161002] calling  proc_schedstat_init+0x0/0x22 @ 1
[   14.166118] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   14.173048] calling  snapshot_device_init+0x0/0x12 @ 1
[   14.178371] initcall snapshot_device_init+0x0/0x12 returned 0 after 119 usecs
[   14.185494] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   14.190260] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.196845] calling  create_proc_profile+0x0/0x300 @ 1
[   14.202047] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   14.209066] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   14.214267] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   14.221285] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   14.226877] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 212 usecs
[   14.234177] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   14.239554] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   14.246741] calling  alarmtimer_init+0x0/0x15f @ 1
[   14.251788] initcall alarmtimer_init+0x0/0x15f returned 0 after 190 usecs
[   14.258564] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   14.264313] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 315 usecs
[   14.271611] calling  init_tstats_procfs+0x0/0x2c @ 1
[   14.276640] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   14.283483] calling  futex_init+0x0/0xf6 @ 1
[   14.287832] futex hash table entries: 256 (order: 2, 16384 bytes)
[   14.293973] initcall futex_init+0x0/0xf6 returned 0 after 6013 usecs
[   14.300381] calling  proc_dma_init+0x0/0x22 @ 1
[   14.304979] initcall proc_dma_init+0x0/0x22 returned 0 after 4 usecs
[   14.311389] calling  proc_modules_init+0x0/0x22 @ 1
[   14.316332] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   14.323089] calling  kallsyms_init+0x0/0x25 @ 1
[   14.327685] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   14.334095] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   14.339911] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 9 usecs
[   14.347528] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   14.352990] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   14.360267] calling  pid_namespaces_init+0x0/0x2d @ 1
[   14.365394] initcall pid_namespaces_init+0x0/0x2d returned 0 after 12 usecs
[   14.372400] calling  ikconfig_init+0x0/0x3c @ 1
[   14.376996] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   14.383407] calling  audit_init+0x0/0x141 @ 1
[   14.387826] audit: initializing netlink socket (disabled)
[   14.393314] type=2000 audit(1390613381.925:1): initialized
[   14.398835] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   14.405418] calling  audit_watch_init+0x0/0x3a @ 1
[   14.410274] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   14.416946] calling  audit_tree_init+0x0/0x49 @ 1
[   14.421714] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   14.428297] calling  init_kprobes+0x0/0x16c @ 1
[   14.443274] initcall init_kprobes+0x0/0x16c returned 0 after 10138 usecs
[   14.449967] calling  hung_task_init+0x0/0x56 @ l_init+0x0/0x14 returned 0 after 8 usecs
[   14.473280] calling  init_tracepoints+0x0/0x20 @ 1
[   14.478134] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   14.484804] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.489573] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.496157] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.501879] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.509416] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.515303] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 584 usecs
[   14.522520] calling  init_per_zone_wmark_min+0x0/0xa9 @ 1
[   14.527985] initcall init_per_zone_wmark_min+0x0/0xa9 returned 0 after 10 usecs
[   14.535341] calling  kswapd_init+0x0/0x76 @ 1
[   14.539807] initcall kswapd_init+0x0/0x76 returned 0 after 46 usecs
[   14.546088] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.551132] initcall extfrag_debug_init+0x0/0x7e returned 0 after 19 usecs
[   14.558045] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.562567] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.568965] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.573570] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.580058] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.585345] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.592450] calling  slab_proc_init+0x0/0x25 @ 1
[   14.597136] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.603630] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.608919] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.616025] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.621049] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.627896] calling  init_user_reserve+0x0/0x40 @ 1
[   14.632837] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.639597] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.644541] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 3 usecs
[   14.651296] calling  procswaps_init+0x0/0x22 @ 1
[   14.655979] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.662476] calling  init_frontswap+0x0/0x96 @ 1
[   14.667186] initcall init_frontswap+0x0/0x96 returned 0 after 29 usecs
[   14.673743] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.678335] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.684829] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6342 usecs
[   14.691430] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.696373] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.703131] calling  slab_proc_init+0x0/0x8 @ 1
[   14.707723] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.714136] calling  cpucache_init+0x0/0x4b @ 1
[   14.718731] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.725143] calling  hugepage_init+0x0/0x145 @ 1
[   14.729823] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.736497] calling  init_cleancache+0x0/0xbc @ 1
[   14.741290] initcall init_cleancache+0x0/0xbc returned 0 after 27 usecs
[   14.747936] calling  fcntl_init+0x0/0x2a @ 1
[   14.752281] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.758510] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.763799] initcall proc_filesystems_init+0x0/0x22 returned 0 after 3 usecs
[   14.770901] calling  dio_init+0x0/0x2d @ 1
[   14.775074] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.781128] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.786182] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 26 usecs
[   14.793092] calling  dnotify_init+0x0/0x7b @ 1
[   14.797618] initcall dnotify_init+0x0/0x7b returned 0 after 21 usecs
[   14.804011] calling  inotify_user_setup+0x0/0x4b @ 1
[   14.809052] initcall inotify_user_setup+0x0/0x4b returned 0 after 12 usecs
[   14.815968] calling  aio_setup+0x0/0x7d @ 1
[   14.820271] initcall aio_setup+0x0/0x7d returned 0 after 53 usecs
[   14.826369] calling  proc_locks_init+0x0/0x22 @ 1
[   14.831137] initcall proc_locks_init+0x0/0x22 returned 0 after 4 usecs
[   14.837719] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.842619] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 43 usecs
[   14.849332] calling  dquot_init+0x0/0x121 @ 1
[   14.853752] VFS: Disk quotas dquot_6.5.2
[   14.857773] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.864241] initcall dquot_init+0x0/0x121 returned 0 after 10243 usecs
[   14.870826] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.876026] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.883046] calling  quota_init+0x0/0x31 @ 1
[   14.887397] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.893620] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.898563] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.905320] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.910348] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.917191] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.922136] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.928891] calling  proc_devices_init+0x0/0x22 @ 1
[   14.933835] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.940591] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.945794] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.952811] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.957755] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.964510] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.969456] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.976209] calling  proc_stat_init+0x0/0x22 @ 1
[   14.980892] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.987390] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.992246] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.998916] calling  proc_version_init+0x0/0x22 @ 1
[   15.003860] initcall proc_version_init+0x0/0x22 returned 0 after 4 usecs
[   15.010617] calling  proc_softirqs_init+0x0/0x22 @ 1
[   15.015646] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   15.022488] calling  proc_kcore_init+0x0/0xb5 @ 1
[   15.027265] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   15.033931] calling  vmcore_init+0x0/0x5cb @ 1
[   15.038436] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   15.044761] calling  proc_kmsg_init+0x0/0x25 @ 1
[   15.049445] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   15.055942] calling  proc_page_init+0x0/0x42 @ 1
[   15.060628] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   15.067121] calling  init_devpts_fs+0x0/0x62 @ 1
[   15.071845] initcall init_devpts_fs+0x0/0x62 returned 0 after 42 usecs
[   15.078388] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   15.083487] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 71 usecs
[   15.090347] calling  init_fat_fs+0x0/0x4f @ 1
[   15.094788] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   15.101094] calling  init_vfat_fs+0x0/0x12 @ 1
[   15.105600] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   15.111926] calling  init_msdos_fs+0x0/0x12 @ 1
[   15.116521] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   15.122933] calling  init_iso9660_fs+0x0/0x70 @ 1
[   15.127724] initcall init_iso9660_fs+0x0/0x70 returned 0 after 24 usecs
[   15.134374] calling  init_nfs_fs+0x0/0x16c @ 1
[   15.139073] initcall init_nfs_fs+0x0/0x16c returned 0 after 188 usecs
[   15.145506] calling  init_nfs_v2+0x0/0x14 @ 1
[   15.149922] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   15.156161] calling  init_nfs_v3+0x0/0x14 @ 1
[   15.160581] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   15.166821] calling  init_nfs_v4+0x0/0x3b @ 1
[   15.171241] NFS: Registering the id_resolver key type
[   15.176365] Key type id_resolver registered
[   15.180599] Key type id_legacy registered
[   15.184682] initcall init_nfs_v4+0x0/0x3b returned 0 after 13125 usecs
[   15.191260] calling  init_nlm+0x0/0x4c @ 1
[   15.195428] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   15.201400] calling  init_nls_cp437+0x0/0x12 @ 1
[   15.206081] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   15.212578] calling  init_nls_ascii+0x0/0x12 @ 1
[   15.217258] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   15.223758] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   15.228786] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   15.235632] calling  init_nls_utf8+0x0/0x2b @ 1
[   15.240226] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   15.246638] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   15.251231] NTFS driver 2.1.30 [Flags: R/W].
[   15.255618] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4283 usecs
=0D
[   15.262240] calling  init_autofs4_fs+0x0/0x2a @ 1=0D
[   15.267163] initcall init_autofs4_fs+0x0/0x2a returned 0 after 129 usecs=
=0D
[   15.273857] calling  init_pstore_fs+0x0/0x53 @ 1=0D
[   15.278541] initcall init_pstore_fs+0x0/0x53 returned 0 after 11 usecs=0D
[   15.285117] calling  ipc_init+0x0/0x2f @ 1=0D
[   15.289282] msgmni has been set to 3857=0D
[   15.293187] initcall ipc_init+0x0/0x2f returned 0 after 3820 usecs=0D
[   15.299414] calling  ipc_sysctl_init+0x0/0x14 @ 1=0D
[   15.304189] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs=0D
[   15.310767] calling  init_mqueue_fs+0x0/0xa2 @ 1=0D
[   15.315507] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 57 usecs=0D
[   15.322036] calling  key_proc_init+0x0/0x5e @ 1=0D
[   15.326635] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs=0D
[   15.333044] calling  selinux_nf_ip_init+0x0/0x69 @ 1=0D
[   15.338068] SELinux:  Registering netfilter hooks=0D
[   15.342970] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4786 u=
secs=0D
[   15.350002] calling  init_sel_fs+0x0/0xa5 @ 1=0D
[   15.354781] initcall init_sel_fs+0x0/0xa5 returned 0 after 350 usecs=0D
[   15.361122] calling  selnl_init+0x0/0x56 @ 1=0D
[   15.365468] initcall selnl_init+0x0/0x56 returned 0 after 13 usecs=0D
[   15.371694] calling  sel_netif_init+0x0/0x5c @ 1=0D
[   15.376376] initcall sel_netif_init+0x0/0x5c returned 0 after 2 usecs=0D
[   15.382874] calling  sel_netnode_init+0x0/0x6a @ 1=0D
[   15.387730] initcall sel_netnode_init+0x0/0x6a returned 0 after 2 usecs=
=0D
[   15.394401] calling  sel_netport_init+0x0/0x6a @ 1=0D
[   15.399257] initcall sel_netport_init+0x0/0x6a returned 0 after 1 usecs=
=0D
[   15.405927] calling  aurule_init+0x0/0x2d @ 1=0D
[   15.410347] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs=0D
[   15.416586] calling  crypto_wq_init+0x0/0x33 @ 1=0D
[   15.421298] initcall crypto_wq_init+0x0/0x33 returned 0 after 30 usecs=0D
[   15.427855] calling  crypto_algapi_init+0x0/0xd @ 1=0D
[   15.432798] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs=
=0D
[   15.439553] calling  chainiv_module_init+0x0/0x12 @ 1=0D
[   15.444668] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 use=
cs=0D
[   15.451599] calling  eseqiv_module_init+0x0/0x12 @ 1=0D
[   15.456625] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usec=
s=0D
[   15.463474] calling  hmac_module_init+0x0/0x12 @ 1=0D
[   15.468326] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs=
=0D
[   15.474999] calling  md5_mod_init+0x0/0x12 @ 1=0D
[   15.479539] initcall md5_mod_init+0x0/0x12 returned 0 after 33 usecs=0D
[   15.485922] calling  sha1_generic_mod_init+0x0/0x12 @ 1=0D
[   15.491238] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 29 =
usecs=0D
[   15.498399] calling  crypto_cbc_module_init+0x0/0x12 @ 1=0D
[   15.503771] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 =
usecs=0D
[   15.510964] calling  des_generic_mod_init+0x0/0x17 @ 1=0D
[   15.516217] initcall des_generic_mod_init+0x0/0x17 returned 0 after 51 u=
secs=0D
[   15.523275] calling  aes_init+0x0/0x12 @ 1=0D
[   15.527459] initcall aes_init+0x0/0x12 returned 0 after 26 usecs=0D
[   15.533500] calling  zlib_mod_init+0x0/0x12 @ 1=0D
[   15.538116] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs=0D
[   15.544594] calling  crypto_authenc_module_init+0x0/0x12 @ 1=0D
[   15.550312] initcall crypto_authenc_module_init+0x0/0x12 returned 0 afte=
r 0 usecs=0D
[   15.557850] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1=0D
[   15.563916] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 =
after 0 usecs=0D
[   15.571803] calling  krng_mod_init+0x0/0x12 @ 1=0D
[   15.576425] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs=0D
[   15.582902] calling  proc_genhd_init+0x0/0x3c @ 1=0D
[   15.587672] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs=0D
[   15.594249] calling  bsg_init+0x0/0x12e @ 1=0D
[   15.598574] Block layer SCSI generic (bsg) driver version 0.4 loaded (ma=
jor 251)=0D
[   15.605959] initcall bsg_init+0x0/0x12e returned 0 after 7288 usecs=0D
[   15.612283] calling  noop_init+0x0/0x12 @ 1=0D
[   15.616531] io scheduler noop registered=0D
[   15.620516] initcall noop_init+0x0/0x12 returned 0 after 3891 usecs=0D
[   15.626843] calling  deadline_init+0x0/0x12 @ 1=0D
[   15.631437] io scheduler deadline registered=0D
[   15.635770] initcall deadline_init+0x0/0x12 returned 0 after 4230 usecs=
=0D
[   15.642445] calling  cfq_init+0x0/0x8b @ 1=0D
[   15.646626] io scheduler cfq registered (default)=0D
[   15.651370] initcall cfq_init+0x0/0x8b returned 0 after 4653 usecs=0D
[   15.657610] calling  percpu_counter_startup+0x0/0x38 @ 1=0D
[   15.662985] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 =
usecs=0D
[   15.670176] calling  pci_proc_init+0x0/0x6a @ 1=0D
[   15.674952] initcall pci_proc_init+0x0/0x6a returned 0 after 178 usecs=0D
[   15.681469] calling  pcie_portdrv_init+0x0/0x7a @ 1=0D
[   15.687142] xen: registering gsi 16 triggering 0 polarity 1=0D
[   15.692704] Already setup the GSI :16=0D
[   15.697246] xen: registering gsi 16 triggering 0 polarity 1=0D
[   15.702806] Already setup the GSI :16=0D
[   15.707333] xen: registering gsi 16 triggering 0 polarity 1=0D
[   15.712897] Already setup the GSI :16=0D
[   15.717275] xen: registering gsi 19 triggering 0 polarity 1=0D
[   15.722849] xen: --> pirq=3D19 -> irq=3D19 (gsi=3D19)=0D
[   15.728099] xen: registering gsi 17 triggering 0 polarity 1=0D
[   15.733670] xen: --> pirq=3D17 -> irq=3D17 (gsi=3D17)=0D
[   15.739000] xen: registering gsi 19 triggering 0 polarity 1=0D
[   15.744563] Already setup the GSI :19=0D
[   15.748482] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60618 u=
secs=0D
[   15.755518] calling  aer_service_init+0x0/0x2b @ 1=0D
[   15.760444] initcall aer_service_init+0x0/0x2b returned 0 after 72 usecs=
=0D
[   15.767129] calling  pci_hotplug_init+0x0/0x1d @ 1=0D
[   15.771980] pci_hotplug: PCI Hot Plug PCI Core version: 0.5=0D
[   15.777614] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5501 use=
cs=0D
[   15.784546] calling  pcied_init+0x0/0x79 @ 1=0D
[   15.789129] pciehp: PCI Express Hot Plug Controller Driver version: 0.4=
=0D
[   15.795727] initcall pcied_init+0x0/0x79 returned 0 after 6686 usecs=0D
[   15.802141] calling  pcifront_init+0x0/0x3f @ 1=0D
[   15.806733] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs=0D
[   15.813321] calling  genericbl_driver_init+0x0/0x14 @ 1=0D
[   15.818681] initcall genericbl_driver_init+0x0/0x14 returned 0 after 73 =
usecs=0D
[   15.825804] calling  cirrusfb_init+0x0/0xcc @ 1=0D
[   15.830501] initcall cirrusfb_init+0x0/0xcc returned 0 after 101 usecs=0D
[   15.837020] calling  efifb_driver_init+0x0/0x14 @ 1=0D
[   15.842034] initcall efifb_driver_init+0x0/0x14 returned 0 after 76 usec=
s=0D
[   15.848813] calling  intel_idle_init+0x0/0x331 @ 1=0D
[   15.853664] intel_idle: MWAIT substates: 0x42120=0D
[   15.858342] intel_idle: v0.4 model 0x3C=0D
[   15.862243] intel_idle: lapic_timer_reliable_states 0xffffffff=0D
[   15.868139] intel_idle: intel_idle yielding to none=0D
[   15.872816] initcall intel_idle_init+0x0/0x331 returned -19 after 18703 =
usecs=0D
[   15.880268] calling  acpi_reserve_resources+0x0/0xeb @ 1=0D
[   15.885649] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 7 =
usecs=0D
[   15.892834] calling  acpi_ac_init+0x0/0x2a @ 1=0D
[   15.897414] initcall acpi_ac_init+0x0/0x2a returned 0 after 71 usecs=0D
[   15.903757] calling  acpi_button_driver_init+0x0/0x12 @ 1=0D
[   15.909499] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0=
C:00/input/input0=0D
[   15.917661] ACPI: Power Button [PWRB]=0D
[   15.921648] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/inpu=
t/input1=0D
[   15.929031] ACPI: Power Button [PWRF]=0D
[   15.932827] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 2=
3058 usecs=0D
[   15.940380] calling  acpi_fan_driver_init+0x0/0x12 @ 1=0D
[   15.945818] ACPI: Fan [FAN0] (off)=0D
[   15.949442] ACPI: Fan [FAN1] (off)=0D
[   15.953051] ACPI: Fan [FAN2] (off)=0D
[   15.956659] ACPI: Fan [FAN3] (off)=0D
[   15.960262] ACPI: Fan [FAN4] (off)=0D
[   15.963732] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 1772=
6 usecs=0D
[   15.971029] calling  acpi_processor_driver_init+0x0/0x43 @ 1=0D
[   15.989249] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (=
20131115/psargs-359)=0D
[   15.997672] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] =
(Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)=0D
[   16.013318] Monitor-Mwait will be used to enter C-1 state=0D
[   16.018712] Monitor-Mwait will be used to enter C-2 state=0D
[   16.024371] Warning: Processor Platform Limit not supported.=0D
[   16.030018] initcall acpi_processor_driver_init+0x0/0x43 returned 0 afte=
r 52023 usecs=0D
[   16.037902] calling  acpi_thermal_init+0x0/0x42 @ 1=0D
[   16.046117] thermal LNXTHERM:00: registered as thermal_zone0=0D
[   16.051769] ACPI: Thermal Zone [TZ00] (28 C)=0D
[   16.058277] initcall acpi_thermal_init+0x0/0x42 returned 0 after 25141 u=
secs=0D
[   16.075621] calling  acpi_battery_init+0x0/0x16 @ 1=0D
[   16.080562] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs=
=0D
[   16.087320] calling  acpi_hed_driver_init+0x0/0x12 @ 1=0D
[   16.092555] calling  1_acpi_battery_init_async+0x0/0x35 @ 6=0D
[   16.098291] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5635=
 usecs=0D
[   16.105508] calling  erst_init+0x0/0x2fc @ 1=0D
[   16.109880] ERST: Error Record Serialization Table (ERST) support is ini=
tialized.=0D
[   16.117382] pstore: Registered erst as persistent store backend=0D
[   16.123355] initcall erst_init+0x0/0x2fc returned 0 after 13199 usecs=0D
[   16.129854] calling  ghes_init+0x0/0x173 @ 1=0D
[   16.134367] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after=
 35363 usecs=0D
[   16.142766] \_SB_:_OSC request failed=0D
[   16.146425] _OSC request data:1 1 0 =0D
[   16.150064] \_SB_:_OSC invalid UUID=0D
[   16.153616] _OSC request data:1 1 0 =0D
[   16.157257] GHES: APEI firmware first mode is enabled by APEI bit.=0D
[   16.163499] initcall ghes_init+0x0/0x173 returned 0 after 28622 usecs=0D
[   16.169995] calling  einj_init+0x0/0x522 @ 1=0D
[   16.174394] EINJ: Error INJection is initialized.=0D
[   16.179097] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs=0D
[   16.185511] calling  ioat_init_module+0x0/0xb1 @ 1=0D
[   16.190363] ioatdma: Intel(R) QuickData Technology Driver 4.00=0D
[   16.196441] initcall ioat_init_module+0x0/0xb1 returned 0 after 5934 use=
cs=0D
[   16.203310] calling  virtio_mmio_init+0x0/0x14 @ 1=0D
[   16.208267] initcall virtio_mmio_init+0x0/0x14 returned 0 after 105 usec=
s=0D
[   16.215038] calling  virtio_balloon_driver_init+0x0/0x12 @ 1=0D
[   16.220834] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 afte=
r 75 usecs=0D
[   16.228394] calling  xenbus_probe_initcall+0x0/0x39 @ 1=0D
[   16.233681] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 u=
secs=0D
[   16.240784] calling  xenbus_init+0x0/0x3d @ 1=0D
[   16.245346] initcall xenbus_init+0x0/0x3d returned 0 after 135 usecs=0D
[   16.251697] calling  xenbus_backend_init+0x0/0x51 @ 1=0D
[   16.256926] initcall xenbus_backend_init+0x0/0x51 returned 0 after 120 u=
secs=0D
[   16.263960] calling  gntdev_init+0x0/0x4d @ 1=0D
[   16.268528] initcall gntdev_init+0x0/0x4d returned 0 after 122 usecs=0D
[   16.274872] calling  gntalloc_init+0x0/0x3d @ 1=0D
[   16.279599] initcall gntalloc_init+0x0/0x3d returned 0 after 131 usecs=0D
[   16.286114] calling  hypervisor_subsys_init+0x0/0x25 @ 1=0D
[   16.291484] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 =
usecs=0D
[   16.298676] calling  hyper_sysfs_init+0x0/0x103 @ 1=0D
[   16.303682] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 65 usec=
s=0D
[   16.310463] calling  platform_pci_module_init+0x0/0x1b @ 1=0D
[   16.316101] initcall platform_pci_module_init+0x0/0x1b returned 0 after =
90 usecs=0D
[   16.323483] calling  xen_late_init_mcelog+0x0/0x3d @ 1=0D
[   16.328873] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 =
usecs=0D
[   16.335995] calling  xen_pcibk_init+0x0/0x13f @ 1=0D
[   16.340788] xen_pciback: backend is vpci=0D
[   16.344824] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3967 usec=
s=0D
[   16.351605] calling  xen_acpi_processor_init+0x0/0x24b @ 1=0D
[   16.357913] xen_acpi_processor: Uploading Xen processor PM info=0D
(XEN) [2014-01-25 01:29:43] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-25 01:29:43] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:43] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:43] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:43] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:43] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:43] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:43] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:43] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:43] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:43] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:43] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:43] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:43] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:43] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 0 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU2: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 2 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(3) cpuid(4) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU4: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 4 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(4) cpuid(6) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU6: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 6 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(5) cpuid(1) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU1: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 1 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(6) cpuid(3) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:44] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:44] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:44] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:44] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:44] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:44] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:44] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:44] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:44] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:44] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:44] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:44] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:44] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:44] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:44] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:44] 	_PPC: 0
(XEN) [2014-01-25 01:29:44] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:44] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:44] CPU3: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:44] CPU 3 initialization completed
(XEN) [2014-01-25 01:29:44] Set CPU acpi_id(7) cpuid(5) Px State info:
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:44] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:44] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:44] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:45] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:45] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:45] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:45] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:45] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:45] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:45] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:45] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:45] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:45] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:45] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:45] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:45] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:45] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:45] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:45] 	_PPC: 0
(XEN) [2014-01-25 01:29:45] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:45] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:45] CPU5: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:45] CPU 5 initialization completed
(XEN) [2014-01-25 01:29:45] Set CPU acpi_id(8) cpuid(7) Px State info:
(XEN) [2014-01-25 01:29:45] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:45] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 01:29:45] 	_PSS: state_count=16
(XEN) [2014-01-25 01:29:45] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 01:29:45] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 01:29:45] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 01:29:45] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 01:29:45] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 01:29:45] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 01:29:45] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 01:29:45] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 01:29:45] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 01:29:45] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 01:29:45] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 01:29:45] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 01:29:45] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 01:29:45] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 01:29:45] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 01:29:45] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 01:29:45] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 01:29:45] 	_PPC: 0
(XEN) [2014-01-25 01:29:45] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 01:29:45] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 01:29:45] CPU7: Turbo Mode detected and enabled
(XEN) [2014-01-25 01:29:45] CPU 7 initialization completed
[   17.780119] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 1389616 usecs
[   17.788000] calling  pty_init+0x0/0x453 @ 1
[   17.798157] kworker/u2:0 (743) used greatest stack depth: 5488 bytes left
[   17.854289] initcall pty_init+0x0/0x453 returned 0 after 60584 usecs
[   17.860637] calling  sysrq_init+0x0/0xb0 @ 1
[    0 after 8 usecs
[   17.871122] calling  xen_hvc_init+0x0/0x228 @ 1
[   17.876761] initcall xen_hvc_init+0x0/0x228 returned 0 after 1022 usecs
[   17.883361] calling  serial8250_init+0x0/0x1ab @ 1
[   17.888210] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   17.915839] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   17.924254] initcall serial8250_init+0x09] kgdb: Registered I/O driver kgdboc.
[   17.953023] initcall init_kgdboc+0x0/0x16 returned 0 after 4515 usecs
[   17.959493] calling  init+0x0/0x10f @ 1
[   17.963613] initcall init+0x0/0x10f returned 0 after 215 usecs
[   17.969434] calling  hpet_init+0x0/0x6a @ 1
[   17.974171] hpet_acpi_add: no address or irqs in _CRS
[   17.979302] initcall hpet_init+0x0/0x6a returned 0 after 5491 usecs
[   17.985553] calling  nvram_init+0x0/0x82 @ 1
[   17.990013] Non-volatile memory driver v1.3
[   17.994187] initcall nvram_init+0x0/0x82 returned 0 after 4200 usecs
[   18.000598] calling  mod_init+0x0/0x5a @ 1
[   18.004757] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   18.010911] calling  rng_init+0x0/0x12 @ 1
[   18.015207] initcall rng_init+0x0/0x12 returned 0 after 132 usecs
[   18.021285] calling  agp_init+0x0/0x26 @ 1
[   18.025444] Linux agpgart interface v0.103
[   18.029605] initcall agp_init+0x0/0x26 returned 0 after 4063 usecs
[   18.035844] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   18.040934] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 146 usecs
[   18.047969] calling  agp_intel_init+0x0/0x29 @ 1
[   18.052740] initcall agp_intel_init+0x0/0x29 returned 0 after 89 usecs
[   18.059254] calling  agp_sis_init+0x0/0x29 @ 1
[   18.063851] initcall agp_sis_init+0x0/0x29 returned 0 after 89 usecs=0D
[   18.070191] calling  agp_via_init+0x0/0x29 @ 1=0D
[   18.074788] initcall agp_via_init+0x0/0x29 returned 0 after 89 usecs=0D
[   18.081128] calling  drm_core_init+0x0/0x10c @ 1=0D
[   18.085896] [drm] Initialized drm 1.1.0 20060810=0D
[   18.090504] initcall drm_core_init+0x0/0x10c returned 0 after 4585 usecs=
=0D
[   18.097263] calling  cn_proc_init+0x0/0x3d @ 1=0D
[   18.101773] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs=0D
[   18.108098] calling  topology_sysfs_init+0x0/0x70 @ 1=0D
[   18.113242] initcall topology_sysfs_init+0x0/0x70 returned 0 after 32 us=
ecs=0D
[   18.120228] calling  loop_init+0x0/0x14e @ 1=0D
[   18.178369] loop: module loaded=0D
[   18.181530] initcall loop_init+0x0/0x14e returned 0 after 55630 usecs=0D
[   18.188002] calling  xen_blkif_init+0x0/0x22 @ 1=0D
[   18.192784] initcall xen_blkif_init+0x0/0x22 returned 0 after 99 usecs=0D
[   18.199321] calling  mac_hid_init+0x0/0x22 @ 1=0D
[   18.203810] initcall mac_hid_init+0x0/0x22 returned 0 after 9 usecs=0D
[   18.210126] calling  macvlan_init_module+0x0/0x3d @ 1=0D
[   18.215242] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 use=
cs=0D
[   18.222176] calling  macvtap_init+0x0/0x100 @ 1=0D
[   18.226836] initcall macvtap_init+0x0/0x100 returned 0 after 67 usecs=0D
[   18.233277] calling  net_olddevs_init+0x0/0xb5 @ 1=0D
[   18.238121] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs=
=0D
[   18.244793] calling  fixed_mdio_bus_init+0x0/0x105 @ 1=0D
[   18.250218] libphy: Fixed MDIO Bus: probed=0D
[   18.254309] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4215=
 usecs=0D
[   18.261589] calling  tun_init+0x0/0x93 @ 1=0D
[   18.265749] tun: Universal TUN/TAP device driver, 1.6=0D
[   18.270859] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>=0D
[   18.277245] initcall tun_init+0x0/0x93 returned 0 after 11226 usecs=0D
[   18.283501] calling  tg3_driver_init+0x0/0x1b @ 1=0D
[   18.288405] initcall tg3_driver_init+0x0/0x1b returned 0 after 136 usecs=
=0D
[   18.295097] calling  igb_init_module+0x0/0x58 @ 1=0D
[   18.299863] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.=
5-k=0D
[   18.306879] igb: Copyright (c) 2007-2013 Intel Corporation.=0D
[   18.312779] xen: registering gsi 17 triggering 0 polarity 1=0D
[   18.318340] Already setup the GSI :17=0D
[   18.485658] igb 0000:02:00.0: added PHC on eth0=0D
[   18.490189] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connecti=
509248] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue=
(s)=0D
[   18.517140] xen: registering gsi 18 triggering 0 polarity 1=0D
[   18.522707] Already setup the GSI :18=0D
[   18.690630] igb 0000:02:00.1: added PHC on eth1=0D
[   18.695157] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connecti=
09279] igb 0000:02:00.1: eth1: PBA No: Unknown=0D
[   18.714218] igb 0000:02:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 t=
x queue(s)=0D
[   18.722118] xen: registering gsi 19 triggering 0 polarity 1=0D
[   18.727685] Already setup the GSI :19=0D
(XEN) [2014-01-25 01:29:46] ----[ Xen-4.4-rc2  x86_64  debug=3Dy  Tainted: =
   C ]----=0D
(XEN) [2014-01-25 01:29:46] CPU:    0=0D
(XEN ffff8302394665b0   rcx: 0000000000000000=0D
(XEN) [2014-01-25 01:29:46] rdx: 00000000f1980000   rsi: 0000000000000200  =
 rdi: ffff82d080281f20=0D
(XEN) [2014-01-25 01:29:46] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfc08  =
 r8:  000000000000001c=0D
(XEN) [2014-01-25 01:29:46] r9:  00000000ffffffff   r10: ffff82d080238f20  =
 r11: 0000000000000202=0D
(XEN) [2014-01-25 01:29:46] r12: 0000000000000000   r13: ffff83022a085e30  =
 r14: ffff82d0802cfe98=0D
(XEN) [2014-01-25 01:29:46] r15: 0000000000000000   cr0: 0000000080050033  =
 cr4: 00000000001526f0=0D
(XEN) [2014-01-25 01:29:46] cr3: 000000022dc0c000   cr2: 0000000000000004=0D
(XEN) [2014-01-25 01:29:46] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss:=
 e010   cs: e008=0D
(XEN) [2014-01-25 01:29:46] Xen stack trace from rsp=3Dffff82d0802cfc08:=0D
(XEN) [2014-01-25 01:29:46]    0000000500040070 ffff82d0802cfd88 0000007280=
2cfc38 ffff82d0ffffffff=0D
(XEN) [2014-01-25 01:29:46]    0000000000000000 0000000000000000 0000000000=
000005 0000000000000070=0D
(XEN) [2014-01-25 01:29:46]    0000000500000000 0000000000000000 00000000f1=
980000 ffff82d000000005=0D
(XEN) [2014-01-25 01:29:46]    0000000500000003 8005007000000000 ffff82d080=
2cfe98 ffff82d0802cfe98=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd88 ffff8302394665b0 0000000000=
000005 0000000000000000=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd28 ffff82d080168987 0000000000=
000246 ffff82d0802cfcd8=0D
(XEN) [2014-01-25 01:29:46]    ffff82d080129d68 0000000000000000 ffff82d080=
2cfd28 ffff82d0801474f9=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfd18 ffff830239463b70 0000000000=
00010f ffff8302337f8000=0D
(XEN) [2014-01-25 01:29:46]    000000000000010f 0000000000000022 00000000ff=
ffffed ffff830239402200=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfdc8 ffff82d08016c65c ffff83022a=
085e00 000000000000010f=0D
(XEN) [2014-01-25 01:29:46]    000000000000010f ffff8302337f80e0 ffff82d080=
2cfd98 ffff82d0801047ed=0D
(XEN) [2014-01-25 01:29:46]    0000010f01402200 ffff82d0802cfe98 ffff830233=
7f80e0 ffff8302394665b0=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe98 ffff83022a085e00 ffff82d080=
2cfdc8 ffff8302337f8000=0D
(XEN) [2014-01-25 01:29:46]    00000000fffffffd 0000000000000000 ffff82d080=
2cfe98 ffff82d0802cfe70=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe48 ffff82d08017f104 ffff82d080=
2cff18 ffffffff8156d7c6=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfe98 ffff8302337f80b8 ffff82d000=
00010f ffff82d08018bd40=0D
(XEN) [2014-01-25 01:29:46]    000000220000f800 ffff82d0802cfe74 ffff820040=
004000 000000000000000d=0D
(XEN) [2014-01-25 01:29:46]    ffff880078623b08 ffff8300b7313000 ffff880006=
dbb180 0000000000000000=0D
(XEN) [2014-01-25 01:29:46]    ffff82d0802cfef8 ffff82d08017f814 0000000000=
000000 0000000700000004=0D
(XEN) [2014-01-25 01:29:46]    0000000000007ff0 ffffffffffffffff 0000000000=
000005 0000000000000000=0D
(XEN) [2014-01-25 01:29:46] Xen call trace:=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d0801683a2>] msix_capability_init+0x=
1dc/0x603=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d080168987>] pci_enable_msi+0x1be/0x=
4d7=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08016c65c>] map_domain_pirq+0x222/0=
x5ad=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08017f104>] physdev_map_pirq+0x507/=
0x5d1=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d08017f814>] do_physdev_op+0x646/0x1=
232=0D
(XEN) [2014-01-25 01:29:46]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x14=
5=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] Pagetable walk from 0000000000000004:=0D
(XEN) [2014-01-25 01:29:46]  L4[0x000] =3D 0000000000000000 fffffffffffffff=
f=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] ****************************************=0D
(XEN) [2014-01-25 01:29:46] Panic on CPU 0:=0D
(XEN) [2014-01-25 01:29:46] FATAL PAGE FAULT=0D
(XEN) [2014-01-25 01:29:46] [error_code=3D0000]=0D
(XEN) [2014-01-25 01:29:46] Faulting linear address: 0000000000000004=0D
(XEN) [2014-01-25 01:29:46] ****************************************=0D
(XEN) [2014-01-25 01:29:46] =0D
(XEN) [2014-01-25 01:29:46] Manual reset required ('noreboot' specified)=0D

--cNdxnHkX5QqsyA0e
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--cNdxnHkX5QqsyA0e--


From xen-devel-bounces@lists.xen.org Fri Jan 24 17:48:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 17:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6krf-0008DW-M9; Fri, 24 Jan 2014 17:48:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6kre-0008DP-CU
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 17:48:22 +0000
Received: from [85.158.139.211:31332] by server-1.bemta-5.messagelabs.com id
	FC/A1-21065-567A2E25; Fri, 24 Jan 2014 17:48:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390585699!9114639!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21591 invoked from network); 24 Jan 2014 17:48:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 17:48:20 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OHmE9F018516
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 17:48:15 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHmBj4024488
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 17:48:13 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OHmBMI015916; Fri, 24 Jan 2014 17:48:11 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 09:48:11 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 13F8E1BFA72; Fri, 24 Jan 2014 12:48:06 -0500 (EST)
Date: Fri, 24 Jan 2014 12:48:06 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140124174806.GA15571@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1889333978.20140124143602@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
 pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
> 
> Friday, January 10, 2014, 6:38:10 PM, you wrote:
> 
> >> > Wow. You just walked into a pile of bugs, didn't you? And on Friday
> >> > nonetheless.
> >> 
> >> As usual ;-)
> 
> > Ha!
> > ..snip..
> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
> >> 
> >> > Yeah, my RFC patchset (the one that does the slot/bus reset) should also fix that bug.
> >> > I totally forgot about it!
> >> 
> >> Got a link to that patchset ?
> 
> > https://lkml.org/lkml/2013/12/13/315
> 
> >> I at least could give it a spin .. you never know when fortune is on your side :-)
> 
> > It is also at this git tree:
> 
> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
> > want to merge it in your current Linus tree.
> 
> > Thank you!
> 
> 
> Hi Konrad,
> 
> Just got time to test this some more: merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
> seems to fix my problem. I'm now able to use:
> - xl pci-detach
> - xl pci-assignable-remove
> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
> 
> to remove a PCI device from a running HVM guest and rebind it to a driver in dom0, without those nasty stack traces :-)
> So the first four commits seem to be an improvement.
> 
> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to cause trouble of its own.

Could you email me your lspci output and also which devices you move/switch etc?

Thanks!
> 
> --
> Sander
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:06:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6l8b-0000aP-4X; Fri, 24 Jan 2014 18:05:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6l8Z-0000aK-Ax
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 18:05:51 +0000
Received: from [85.158.143.35:11822] by server-1.bemta-4.messagelabs.com id
	49/A4-02132-E7BA2E25; Fri, 24 Jan 2014 18:05:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390586748!629945!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6288 invoked from network); 24 Jan 2014 18:05:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:05:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OI5juj007110
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:05:46 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OI5iRs008554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 18:05:45 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OI5iN1028757; Fri, 24 Jan 2014 18:05:44 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:05:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D06D41BFA72; Fri, 24 Jan 2014 13:05:42 -0500 (EST)
Date: Fri, 24 Jan 2014 13:05:42 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140124180542.GA15785@phenom.dumpdata.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for
 multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
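The flat vs. hierarchical layouts described above would look roughly like this in XenStore (the frontend path prefix and the values are illustrative; the key names are the ones the patch writes):

```
# Flat layout, single queue (keys directly under the frontend's nodename):
device/vif/0/tx-ring-ref = "8"
device/vif/0/rx-ring-ref = "9"
device/vif/0/event-channel = "17"

# Hierarchical layout, two queues:
device/vif/0/multi-queue-num-queues = "2"
device/vif/0/queue-0/tx-ring-ref = "8"
device/vif/0/queue-0/rx-ring-ref = "9"
device/vif/0/queue-0/event-channel-tx = "17"
device/vif/0/queue-0/event-channel-rx = "18"
device/vif/0/queue-1/tx-ring-ref = "10"
device/vif/0/queue-1/rx-ring-ref = "11"
device/vif/0/queue-1/event-channel-tx = "19"
device/vif/0/queue-1/event-channel-rx = "20"
```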
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb rxhash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netfront.c |  167 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 130 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 508ea96..9b08da5 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,10 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues = 16;

Should this be based on some form of num_online_cpus()?

> +module_param(xennet_max_queues, uint, 0644);


How about just 'max_queues', without the 'xennet' prefix?

> +
>  static const struct ethtool_ops xennet_ethtool_ops;
>  
>  struct netfront_cb {
> @@ -556,8 +560,19 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
>  
>  static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
>  {
> -	/* Stub for later implementation of queue selection */
> -	return 0;
> +	struct netfront_info *info = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_idx;
> +
> +	/* First, check if there is only one queue */
> +	if (info->num_queues == 1)
> +		queue_idx = 0;
> +	else {
> +		hash = skb_get_rxhash(skb);
> +		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
> +	}
> +
> +	return queue_idx;
>  }
>  
>  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
> @@ -1361,7 +1376,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	struct net_device *netdev;
>  	struct netfront_info *np;
>  
> -	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
> +	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
>  	if (!netdev)
>  		return ERR_PTR(-ENOMEM);
>  
> @@ -1731,6 +1746,89 @@ static int xennet_init_queue(struct netfront_queue *queue)
>  	return err;
>  }
>  
> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
> +			   struct xenbus_transaction *xbt, int write_hierarchical)
> +{
> +	/* Write the queue-specific keys into XenStore in the traditional
> +	 * way for a single queue, or in a queue subkeys for multiple
> +	 * queues.
> +	 */
> +	struct xenbus_device *dev = queue->info->xbdev;
> +	int err;
> +	const char *message;
> +	char *path;
> +	size_t pathsize;
> +
> +	/* Choose the correct place to write the keys */
> +	if (write_hierarchical) {
> +		pathsize = strlen(dev->nodename) + 10;
> +		path = kzalloc(pathsize, GFP_KERNEL);
> +		if (!path) {
> +			err = -ENOMEM;
> +			message = "writing ring references";
> +			goto error;
> +		}
> +		snprintf(path, pathsize, "%s/queue-%u",
> +				dev->nodename, queue->number);
> +	}
> +	else
> +		path = (char *)dev->nodename;
> +
> +	/* Write ring references */
> +	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
> +			queue->tx_ring_ref);
> +	if (err) {
> +		message = "writing tx-ring-ref";
> +		goto error;
> +	}
> +
> +	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
> +			queue->rx_ring_ref);
> +	if (err) {
> +		message = "writing rx-ring-ref";
> +		goto error;
> +	}
> +
> +	/* Write event channels; taking into account both shared
> +	 * and split event channel scenarios.
> +	 */
> +	if (queue->tx_evtchn == queue->rx_evtchn) {
> +		/* Shared event channel */
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel", "%u", queue->tx_evtchn);
> +		if (err) {
> +			message = "writing event-channel";
> +			goto error;
> +		}
> +	}
> +	else {
> +		/* Split event channels */
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel-tx", "%u", queue->tx_evtchn);
> +		if (err) {
> +			message = "writing event-channel-tx";
> +			goto error;
> +		}
> +
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel-rx", "%u", queue->rx_evtchn);
> +		if (err) {
> +			message = "writing event-channel-rx";
> +			goto error;
> +		}
> +	}
> +
> +	if (write_hierarchical)
> +		kfree(path);
> +	return 0;
> +
> +error:
> +	if (write_hierarchical)
> +		kfree(path);
> +	xenbus_dev_fatal(dev, err, "%s", message);
> +	return err;
> +}
> +
>  /* Common code used when first setting up, and when resuming. */
>  static int talk_to_netback(struct xenbus_device *dev,
>  			   struct netfront_info *info)
> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	int err;
>  	unsigned int feature_split_evtchn;
>  	unsigned int i = 0;
> +	unsigned int max_queues = 0;
>  	struct netfront_queue *queue = NULL;
>  
>  	info->netdev->irq = 0;
>  
> +	/* Check if backend supports multiple queues */
> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
> +			"multi-queue-max-queues", "%u", &max_queues);
> +	if (err < 0)
> +		max_queues = 1;
> +
>  	/* Check feature-split-event-channels */
>  	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>  			   "feature-split-event-channels", "%u",
> @@ -1759,12 +1864,12 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	}
>  
>  	/* Allocate array of queues */
> -	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
> +	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
>  	if (!info->queues) {
>  		err = -ENOMEM;
>  		goto out;
>  	}
> -	info->num_queues = 1;
> +	info->num_queues = max_queues;
>  
>  	/* Create shared ring, alloc event channel -- for each queue */
>  	for (i = 0; i < info->num_queues; ++i) {
> @@ -1800,49 +1905,36 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	}
>  
>  again:
> -	queue = &info->queues[0]; /* Use first queue only */
> -
>  	err = xenbus_transaction_start(&xbt);
>  	if (err) {
>  		xenbus_dev_fatal(dev, err, "starting transaction");
>  		goto destroy_ring;
>  	}
>  
> -	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
> -			    queue->tx_ring_ref);
> -	if (err) {
> -		message = "writing tx ring-ref";
> -		goto abort_transaction;
> -	}
> -	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
> -			    queue->rx_ring_ref);
> -	if (err) {
> -		message = "writing rx ring-ref";
> -		goto abort_transaction;
> +	if (info->num_queues == 1) {
> +		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
> +		if (err)
> +			goto abort_transaction_no_dev_fatal;
>  	}
> -
> -	if (queue->tx_evtchn == queue->rx_evtchn) {
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel", "%u", queue->tx_evtchn);
> +	else {
> +		/* Write the number of queues */
> +		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
> +				"%u", info->num_queues);
>  		if (err) {
> -			message = "writing event-channel";
> -			goto abort_transaction;
> +			message = "writing multi-queue-num-queues";
> +			goto abort_transaction_no_dev_fatal;
>  		}
> -	} else {
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel-tx", "%u", queue->tx_evtchn);
> -		if (err) {
> -			message = "writing event-channel-tx";
> -			goto abort_transaction;
> -		}
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel-rx", "%u", queue->rx_evtchn);
> -		if (err) {
> -			message = "writing event-channel-rx";
> -			goto abort_transaction;
> +
> +		/* Write the keys for each queue */
> +		for (i = 0; i < info->num_queues; ++i) {
> +			queue = &info->queues[i];
> +			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
> +			if (err)
> +				goto abort_transaction_no_dev_fatal;
>  		}
>  	}
>  
> +	/* The remaining keys are not queue-specific */
>  	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
>  			    1);
>  	if (err) {
> @@ -1879,8 +1971,9 @@ again:
>  	return 0;
>  
>   abort_transaction:
> -	xenbus_transaction_end(xbt, 1);
>  	xenbus_dev_fatal(dev, err, "%s", message);
> +abort_transaction_no_dev_fatal:
> +	xenbus_transaction_end(xbt, 1);
>   destroy_ring:
>  	xennet_disconnect_backend(info);
>  	kfree(info->queues);
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:06:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6l8b-0000aP-4X; Fri, 24 Jan 2014 18:05:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6l8Z-0000aK-Ax
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 18:05:51 +0000
Received: from [85.158.143.35:11822] by server-1.bemta-4.messagelabs.com id
	49/A4-02132-E7BA2E25; Fri, 24 Jan 2014 18:05:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390586748!629945!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6288 invoked from network); 24 Jan 2014 18:05:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:05:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OI5juj007110
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:05:46 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OI5iRs008554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 18:05:45 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OI5iN1028757; Fri, 24 Jan 2014 18:05:44 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:05:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D06D41BFA72; Fri, 24 Jan 2014 13:05:42 -0500 (EST)
Date: Fri, 24 Jan 2014 13:05:42 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140124180542.GA15785@phenom.dumpdata.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for
 multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb rxhash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netfront.c |  167 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 130 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 508ea96..9b08da5 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,10 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues = 16;

Should this be based on some form of num_online_cpus()?

> +module_param(xennet_max_queues, uint, 0644);


How about just 'max_queues', without the 'xennet' prefix?

> +
>  static const struct ethtool_ops xennet_ethtool_ops;
>  
>  struct netfront_cb {
> @@ -556,8 +560,19 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
>  
>  static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
>  {
> -	/* Stub for later implementation of queue selection */
> -	return 0;
> +	struct netfront_info *info = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_idx;
> +
> +	/* First, check if there is only one queue */
> +	if (info->num_queues == 1)
> +		queue_idx = 0;
> +	else {
> +		hash = skb_get_rxhash(skb);
> +		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
> +	}
> +
> +	return queue_idx;
>  }
>  
>  static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
> @@ -1361,7 +1376,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	struct net_device *netdev;
>  	struct netfront_info *np;
>  
> -	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
> +	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
>  	if (!netdev)
>  		return ERR_PTR(-ENOMEM);
>  
> @@ -1731,6 +1746,89 @@ static int xennet_init_queue(struct netfront_queue *queue)
>  	return err;
>  }
>  
> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
> +			   struct xenbus_transaction *xbt, int write_hierarchical)
> +{
> +	/* Write the queue-specific keys into XenStore in the traditional
> +	 * way for a single queue, or in a queue subkeys for multiple
> +	 * queues.
> +	 */
> +	struct xenbus_device *dev = queue->info->xbdev;
> +	int err;
> +	const char *message;
> +	char *path;
> +	size_t pathsize;
> +
> +	/* Choose the correct place to write the keys */
> +	if (write_hierarchical) {
> +		pathsize = strlen(dev->nodename) + 10;
> +		path = kzalloc(pathsize, GFP_KERNEL);
> +		if (!path) {
> +			err = -ENOMEM;
> +			message = "writing ring references";
> +			goto error;
> +		}
> +		snprintf(path, pathsize, "%s/queue-%u",
> +				dev->nodename, queue->number);
> +	}
> +	else
> +		path = (char *)dev->nodename;
> +
> +	/* Write ring references */
> +	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
> +			queue->tx_ring_ref);
> +	if (err) {
> +		message = "writing tx-ring-ref";
> +		goto error;
> +	}
> +
> +	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
> +			queue->rx_ring_ref);
> +	if (err) {
> +		message = "writing rx-ring-ref";
> +		goto error;
> +	}
> +
> +	/* Write event channels; taking into account both shared
> +	 * and split event channel scenarios.
> +	 */
> +	if (queue->tx_evtchn == queue->rx_evtchn) {
> +		/* Shared event channel */
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel", "%u", queue->tx_evtchn);
> +		if (err) {
> +			message = "writing event-channel";
> +			goto error;
> +		}
> +	}
> +	else {
> +		/* Split event channels */
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel-tx", "%u", queue->tx_evtchn);
> +		if (err) {
> +			message = "writing event-channel-tx";
> +			goto error;
> +		}
> +
> +		err = xenbus_printf(*xbt, path,
> +				"event-channel-rx", "%u", queue->rx_evtchn);
> +		if (err) {
> +			message = "writing event-channel-rx";
> +			goto error;
> +		}
> +	}
> +
> +	if (write_hierarchical)
> +		kfree(path);
> +	return 0;
> +
> +error:
> +	if (write_hierarchical)
> +		kfree(path);
> +	xenbus_dev_fatal(dev, err, "%s", message);
> +	return err;
> +}
> +
>  /* Common code used when first setting up, and when resuming. */
>  static int talk_to_netback(struct xenbus_device *dev,
>  			   struct netfront_info *info)
> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	int err;
>  	unsigned int feature_split_evtchn;
>  	unsigned int i = 0;
> +	unsigned int max_queues = 0;
>  	struct netfront_queue *queue = NULL;
>  
>  	info->netdev->irq = 0;
>  
> +	/* Check if backend supports multiple queues */
> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
> +			"multi-queue-max-queues", "%u", &max_queues);
> +	if (err < 0)
> +		max_queues = 1;
> +
>  	/* Check feature-split-event-channels */
>  	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>  			   "feature-split-event-channels", "%u",
> @@ -1759,12 +1864,12 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	}
>  
>  	/* Allocate array of queues */
> -	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
> +	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
>  	if (!info->queues) {
>  		err = -ENOMEM;
>  		goto out;
>  	}
> -	info->num_queues = 1;
> +	info->num_queues = max_queues;
>  
>  	/* Create shared ring, alloc event channel -- for each queue */
>  	for (i = 0; i < info->num_queues; ++i) {
> @@ -1800,49 +1905,36 @@ static int talk_to_netback(struct xenbus_device *dev,
>  	}
>  
>  again:
> -	queue = &info->queues[0]; /* Use first queue only */
> -
>  	err = xenbus_transaction_start(&xbt);
>  	if (err) {
>  		xenbus_dev_fatal(dev, err, "starting transaction");
>  		goto destroy_ring;
>  	}
>  
> -	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
> -			    queue->tx_ring_ref);
> -	if (err) {
> -		message = "writing tx ring-ref";
> -		goto abort_transaction;
> -	}
> -	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
> -			    queue->rx_ring_ref);
> -	if (err) {
> -		message = "writing rx ring-ref";
> -		goto abort_transaction;
> +	if (info->num_queues == 1) {
> +		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
> +		if (err)
> +			goto abort_transaction_no_dev_fatal;
>  	}
> -
> -	if (queue->tx_evtchn == queue->rx_evtchn) {
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel", "%u", queue->tx_evtchn);
> +	else {
> +		/* Write the number of queues */
> +		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
> +				"%u", info->num_queues);
>  		if (err) {
> -			message = "writing event-channel";
> -			goto abort_transaction;
> +			message = "writing multi-queue-num-queues";
> +			goto abort_transaction_no_dev_fatal;
>  		}
> -	} else {
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel-tx", "%u", queue->tx_evtchn);
> -		if (err) {
> -			message = "writing event-channel-tx";
> -			goto abort_transaction;
> -		}
> -		err = xenbus_printf(xbt, dev->nodename,
> -				    "event-channel-rx", "%u", queue->rx_evtchn);
> -		if (err) {
> -			message = "writing event-channel-rx";
> -			goto abort_transaction;
> +
> +		/* Write the keys for each queue */
> +		for (i = 0; i < info->num_queues; ++i) {
> +			queue = &info->queues[i];
> +			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
> +			if (err)
> +				goto abort_transaction_no_dev_fatal;
>  		}
>  	}
>  
> +	/* The remaining keys are not queue-specific */
>  	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
>  			    1);
>  	if (err) {
> @@ -1879,8 +1971,9 @@ again:
>  	return 0;
>  
>   abort_transaction:
> -	xenbus_transaction_end(xbt, 1);
>  	xenbus_dev_fatal(dev, err, "%s", message);
> +abort_transaction_no_dev_fatal:
> +	xenbus_transaction_end(xbt, 1);
>   destroy_ring:
>  	xennet_disconnect_backend(info);
>  	kfree(info->queues);
> -- 
> 1.7.10.4
> 
> 

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:08:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lBG-0000gZ-1g; Fri, 24 Jan 2014 18:08:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6lBE-0000gS-Mp
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 18:08:36 +0000
Received: from [193.109.254.147:40306] by server-3.bemta-14.messagelabs.com id
	55/72-11000-42CA2E25; Fri, 24 Jan 2014 18:08:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390586913!9556343!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16885 invoked from network); 24 Jan 2014 18:08:35 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:08:35 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OI8Wkj010336
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:08:33 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OI8V6t015432
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 18:08:31 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OI8VQp012812; Fri, 24 Jan 2014 18:08:31 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:08:30 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E77301BFA72; Fri, 24 Jan 2014 13:08:29 -0500 (EST)
Date: Fri, 24 Jan 2014 13:08:29 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Message-ID: <20140124180829.GD15785@phenom.dumpdata.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 16, 2014 at 03:13:44PM +0100, Julian Stecklina wrote:
> The paravirtualized clock used in KVM and Xen uses a version field to
> allow the guest to see when the shared data structure is inconsistent.
> The code reads the version field twice (before and after the data
> structure is copied) and checks whether they haven't
> changed and that the lowest bit is not set. As the second access is not
> synchronized, the compiler could generate code that accesses version
> more than two times and you end up with inconsistent data.

Could you paste in the code that a 'bad' compiler generates
vs. the code that a 'good' compiler generates, please?

> 
> An example using pvclock_get_time_values:
> 
> host starts updating data, sets src->version to 1
> guest reads src->version (1) and stores it into dst->version.
> guest copies inconsistent data
> guest reads src->version (1) and computes xor with dst->version.
> host finishes updating data and sets src->version to 2
> guest reads src->version (2) and checks whether lower bit is not set.
> while loop exits with inconsistent data!
> 
> AFAICS the compiler is allowed to optimize the given code this way.
> 
> Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
> ---
>  arch/x86/kernel/pvclock.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> index 42eb330..f62b41c 100644
> --- a/arch/x86/kernel/pvclock.c
> +++ b/arch/x86/kernel/pvclock.c
> @@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
>  static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
>  					struct pvclock_vcpu_time_info *src)
>  {
> +	u32 nversion;
> +
>  	do {
>  		dst->version = src->version;
>  		rmb();		/* fetch version before data */
> @@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
>  		dst->tsc_shift         = src->tsc_shift;
>  		dst->flags             = src->flags;
>  		rmb();		/* test version after fetching data */
> -	} while ((src->version & 1) || (dst->version != src->version));
> +		nversion = ACCESS_ONCE(src->version);
> +	} while ((nversion & 1) || (dst->version != nversion));
>  
>  	return dst->version;
>  }
> @@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
>  			    struct pvclock_vcpu_time_info *vcpu_time,
>  			    struct timespec *ts)
>  {
> -	u32 version;
> +	u32 version, nversion;
>  	u64 delta;
>  	struct timespec now;
>  
> @@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
>  		now.tv_sec  = wall_clock->sec;
>  		now.tv_nsec = wall_clock->nsec;
>  		rmb();		/* fetch time before checking version */
> -	} while ((wall_clock->version & 1) || (version != wall_clock->version));
> +		nversion = ACCESS_ONCE(wall_clock->version);
> +	} while ((nversion & 1) || (version != nversion));
>  
>  	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
>  	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
> -- 
> 1.8.4.2
> 
> 

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:23:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lPU-0001O4-Sf; Fri, 24 Jan 2014 18:23:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6lPT-0001Nz-6T
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 18:23:19 +0000
Received: from [85.158.143.35:61922] by server-3.bemta-4.messagelabs.com id
	BB/91-32360-69FA2E25; Fri, 24 Jan 2014 18:23:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390587796!632408!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6597 invoked from network); 24 Jan 2014 18:23:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 18:23:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; d="scan'208";a="94244091"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 18:23:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 13:23:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6lPO-0007qJ-I6;
	Fri, 24 Jan 2014 18:23:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6lPO-0005xw-Ds;
	Fri, 24 Jan 2014 18:23:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24480-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 18:23:14 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24480: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24480 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24480/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           3 host-build-prep           fail REGR. vs. 22405
 test-amd64-i386-xl           16 guest-start.2             fail REGR. vs. 22405

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:28:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lUL-0001Xu-SZ; Fri, 24 Jan 2014 18:28:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W6lUK-0001Xp-47
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 18:28:20 +0000
Received: from [85.158.139.211:29215] by server-9.bemta-5.messagelabs.com id
	7C/80-15098-3C0B2E25; Fri, 24 Jan 2014 18:28:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390588096!11768774!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17568 invoked from network); 24 Jan 2014 18:28:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 18:28:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; d="scan'208";a="94245732"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 18:28:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 13:28:15 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W6lUF-0000zJ-30; Fri, 24 Jan 2014 18:28:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 24 Jan 2014 18:28:11 +0000
Message-ID: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel]  [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A physdev_op is a two-argument hypercall, taking a command parameter and an
optional pointer to a structure.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
CC: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
 extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
index ef52ecd..dcfbe41 100644
--- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
+++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
@@ -255,9 +255,9 @@ HYPERVISOR_console_io(
 
 static inline int
 HYPERVISOR_physdev_op(
-	void *physdev_op)
+	int cmd, void *physdev_op)
 {
-	return _hypercall1(int, physdev_op, physdev_op);
+	return _hypercall2(int, physdev_op, cmd, physdev_op);
 }
 
 static inline int
diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
index 513d74e..7083763 100644
--- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
+++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
@@ -256,9 +256,9 @@ HYPERVISOR_console_io(
 
 static inline int
 HYPERVISOR_physdev_op(
-	void *physdev_op)
+	int cmd, void *physdev_op)
 {
-	return _hypercall1(int, physdev_op, physdev_op);
+	return _hypercall2(int, physdev_op, cmd, physdev_op);
 }
 
 static inline int
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:32:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lXn-0001lf-JL; Fri, 24 Jan 2014 18:31:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6lXl-0001la-Gb
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 18:31:53 +0000
Received: from [193.109.254.147:24586] by server-3.bemta-14.messagelabs.com id
	B3/41-11000-891B2E25; Fri, 24 Jan 2014 18:31:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390588310!13071587!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32266 invoked from network); 24 Jan 2014 18:31:52 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:31:52 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OIVlRv011465
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:31:48 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIVkEN008115
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 18:31:46 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OIVjJI013680; Fri, 24 Jan 2014 18:31:45 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:31:45 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3E2E41BFA72; Fri, 24 Jan 2014 13:31:44 -0500 (EST)
Date: Fri, 24 Jan 2014 13:31:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20140124183144.GE15785@phenom.dumpdata.com>
References: <52DECA4E.4080004@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52DECA4E.4080004@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 08:28:14PM +0100, Roger Pau Monné wrote:
> Hello,
>
> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
> cpuid feature flags exposed to PVH guests are kind of strange, this is
> the output of the feature flags as seen by an HVM domain:

What about a PV guest? I presume if you ran a NetBSD PV guest it would
give a similar format?

> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
>  Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
> AMD Features2=0x1<LAHF>
>
> And this is what a PVH domain sees when running on the same hardware:
>
> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
> AMD Features=0x20100800<SYSCALL,NX,LM>
> AMD Features2=0x1<LAHF>
>
> I would expect the feature flags to be quite similar between an HVM
> domain and a PVH domain (since they both run inside of an HVM container).
> AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
> guests. Also, is there any reason why PVH guests have the ACPI, SS and
> CLFLUSH feature flags but not HVM?

S5?

ACPI is enabled for PV I think, but Linux PV guests disable it,
as there are no ACPI tables in PV mode:

 429         if (!xen_initial_domain())
 430                 cpuid_leaf1_edx_mask &=
 431                         ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */

CLFLUSH - no idea why it would be disabled.

RDTSCP should be enabled. In the past I think it was related to the
'timer=' option: we would either trap it, emulate it with a constant
value, or pass it through. It should be passing it through, but maybe
there is a bug?

PSE, PGE, PSE36, PG1GB, etc. should all be exposed. Actually PG1GB
is not exposed because of another bug:
http://www.gossamer-threads.com/lists/xen/devel/313596

> Most (if not all) of this probably comes from the fact that we are
> reporting the same feature flags as pure PV guests, but I see no reason
> to do that for PVH guests, we should decide what's supported on PVH and
> set the feature flags accordingly.

Right, and we should also have a clear policy. The problem is that we
set/unset the cpuid flags in the toolstack (in two places, depending on
the architecture) and also in the hypervisor.

Anyhow, these I know we disable:

 425         cpuid_leaf1_edx_mask =
 426                 ~((1 << X86_FEATURE_MTRR) |  /* disable MTRR */
 427                   (1 << X86_FEATURE_ACC));   /* thermal monitoring */

And I think your patch to the Xen hypervisor does it too - it clears
the MTRR by default now. The ACC is (if my memory is correct) for
the Pentium 4 and such - we can disable it as well.

 433         cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));

And this we definitely need to disable. x2APIC should not be exposed,
as we want to use Xen's version of the APIC ops. If the x2APIC bit is
enabled in Linux, there is other code (the NMI handler) that will use
it without going through the APIC ops. And blow up :-(

Then there is MWAIT. Actually we can put that to one side: I know it is
important for dom0, but since we don't have those patches in yet, we can
ignore it. However, the hypervisor (pv_cpuid) disables it.

There is also the XSAVE business:

 439         xsave_mask =
 440                 (1 << (X86_FEATURE_XSAVE % 32)) |
 441                 (1 << (X86_FEATURE_OSXSAVE % 32));
 442
 443         /* Xen will set CR4.OSXSAVE if supported and not disabled by force */
 444         if ((cx & xsave_mask) != xsave_mask)
 445                 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */

Which I am not clear about.

It looks like, to make 'cpuid' uniform in the hypervisor, we need to
glue hvm_cpuid and pv_cpuid together with some form of tables/lookups,
and make sure the same logic is reflected in xc_cpuid_x86.c (the
toolstack).

> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:32:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lXn-0001lf-JL; Fri, 24 Jan 2014 18:31:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6lXl-0001la-Gb
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 18:31:53 +0000
Received: from [193.109.254.147:24586] by server-3.bemta-14.messagelabs.com id
	B3/41-11000-891B2E25; Fri, 24 Jan 2014 18:31:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390588310!13071587!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32266 invoked from network); 24 Jan 2014 18:31:52 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:31:52 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OIVlRv011465
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:31:48 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIVkEN008115
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 18:31:46 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OIVjJI013680; Fri, 24 Jan 2014 18:31:45 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:31:45 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3E2E41BFA72; Fri, 24 Jan 2014 13:31:44 -0500 (EST)
Date: Fri, 24 Jan 2014 13:31:44 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20140124183144.GE15785@phenom.dumpdata.com>
References: <52DECA4E.4080004@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52DECA4E.4080004@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 21, 2014 at 08:28:14PM +0100, Roger Pau Monné wrote:
> Hello,
> 
> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
> cpuid feature flags exposed to PVH guests are kind of strange, this is
> the output of the feature flags as seen by an HVM domain:
> 

What about a PV guest? I presume that if you ran a NetBSD PV guest it would
give a format similar to this?

> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
>  Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
> AMD Features2=0x1<LAHF>
> 
> And this is what a PVH domain sees when running on the same hardware:
> 
> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
> AMD Features=0x20100800<SYSCALL,NX,LM>
> AMD Features2=0x1<LAHF>
> 
> I would expect the feature flags to be quite similar between an HVM
> domain and a PV domain (since they both run inside of a HVM container).
> AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
> guests. Also, is there any reason why PVH guests have the ACPI, SS and
> CLFLUSH feature flags but not HVM?

S5?

ACPI is enabled for PV I think, but Linux PV guests disable it,
as there are no ACPI tables in PV mode:

        if (!xen_initial_domain())
                cpuid_leaf1_edx_mask &=
                        ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */


CLFLUSH - no idea why it would be disabled.


The rdtscp should be enabled. In the past I think it was related to the
'timer=' option. We would either trap it, emulate it with a constant
value, or pass it through. It should be passing through, but maybe there
is a bug?

PSE, PGE, PSE36, PG1GB, etc. should all be exposed. Actually PG1GB
is not exposed because of another bug:
http://www.gossamer-threads.com/lists/xen/devel/313596

> 
> Most (if not all) of this probably comes from the fact that we are
> reporting the same feature flags as pure PV guests, but I see no reason
> to do that for PVH guests, we should decide what's supported on PVH and
> set the feature flags accordingly.

Right, and we should also have a nice policy. The problem is that we set/unset
the cpuid flags in the toolstack (in two places, depending on the
architecture) and also in the hypervisor.

Anyhow, these I know we disable:

        cpuid_leaf1_edx_mask =
                ~((1 << X86_FEATURE_MTRR) |  /* disable MTRR */
                  (1 << X86_FEATURE_ACC));   /* thermal monitoring */


And I think your patch to the Xen hypervisor does it too - it clears
MTRR by default now. ACC is (if my memory is correct) for
the Pentium 4 and such - we can disable it as well.

        cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));


And this we definitely need to disable. The x2APIC should not
be exposed, as we want to use Xen's version of the APIC ops. And
if the x2APIC bit is enabled in Linux, there is some other code
(the NMI handler) that will use it without going through the APIC ops.
And blow up :-(


Then there is MWAIT. Actually we can put that to the side.
I know it is important for dom0, but since those patches are not
in yet, we can ignore it. However, the hypervisor
(pv_cpuid) disables it.



There is also the XSAVE business:

        xsave_mask =
                (1 << (X86_FEATURE_XSAVE % 32)) |
                (1 << (X86_FEATURE_OSXSAVE % 32));

        /* Xen will set CR4.OSXSAVE if supported and not disabled by force */
        if ((cx & xsave_mask) != xsave_mask)
                cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */

Which I am not clear about.


It looks like, to get a uniform 'cpuid' view in the hypervisor,
we need to somehow glue hvm_cpuid and pv_cpuid together with some
form of tables/lookups.

And make sure that the same logic is reflected in
xc_cpuid_x86.c (toolstack).


> 
> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:45:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lkz-0002KO-7w; Fri, 24 Jan 2014 18:45:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6lkx-0002KJ-U0
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 18:45:32 +0000
Received: from [85.158.139.211:20855] by server-8.bemta-5.messagelabs.com id
	EB/79-29838-BC4B2E25; Fri, 24 Jan 2014 18:45:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390589128!11815820!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18084 invoked from network); 24 Jan 2014 18:45:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:45:30 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OIiBBH024607
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:44:12 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIi84O009994
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 18:44:09 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIi8NI006255; Fri, 24 Jan 2014 18:44:08 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:44:07 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EB8E01BFA72; Fri, 24 Jan 2014 13:44:06 -0500 (EST)
Date: Fri, 24 Jan 2014 13:44:06 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140124184406.GB16410@phenom.dumpdata.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 06:28:11PM +0000, Andrew Cooper wrote:
> A physdev_op is a two argument hypercall, taking a command paramter and an
                                                             ^^^^^^^- parameter
> optional pointer to a structure.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>
> ---
>  extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
>  extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> index ef52ecd..dcfbe41 100644
> --- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> +++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> @@ -255,9 +255,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> index 513d74e..7083763 100644
> --- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> +++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> @@ -256,9 +256,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:47:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lmT-0002P7-OC; Fri, 24 Jan 2014 18:47:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6lmR-0002P1-OZ
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 18:47:03 +0000
Received: from [85.158.139.211:57042] by server-10.bemta-5.messagelabs.com id
	F5/29-01405-625B2E25; Fri, 24 Jan 2014 18:47:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390589220!11815972!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21231 invoked from network); 24 Jan 2014 18:47:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 18:47:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OIkwT1020424
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 18:46:59 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIkvTa019265
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 18:46:58 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OIkv8H013749; Fri, 24 Jan 2014 18:46:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 10:46:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E15F11BFA72; Fri, 24 Jan 2014 13:46:55 -0500 (EST)
Date: Fri, 24 Jan 2014 13:46:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Dave Jones <davej@redhat.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Message-ID: <20140124184655.GC16410@phenom.dumpdata.com>
References: <20140123064242.09E68660D05@gitolite.kernel.org>
	<20140124183114.GA31844@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140124183114.GA31844@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: boris.ostrovsky@oracle.com, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com
Subject: Re: [Xen-devel] Fix misplaced kfree from xlated_setup_gnttab_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 01:31:14PM -0500, Dave Jones wrote:
> Passing a freed 'pages' to free_xenballooned_pages will end badly
> on kernels with slub debug enabled.

Ouch.
> 
> This looks out of place between the rc assign and the check, and
> was likely a cut-and-paste error.
> 
> Signed-off-by: Dave Jones <davej@fedoraproject.org>
> 
> diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
> index 103c93f874b2..28990cc97304 100644
> --- a/arch/x86/xen/grant-table.c
> +++ b/arch/x86/xen/grant-table.c
> @@ -161,12 +161,11 @@ static int __init xlated_setup_gnttab_pages(void)
>  
>  	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
>  				    &xen_auto_xlat_grant_frames.vaddr);
> -
> -	kfree(pages);
>  	if (rc) {
>  		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
>  			nr_grant_frames, rc);
>  		free_xenballooned_pages(nr_grant_frames, pages);
> +		kfree(pages);
>  		kfree(pfns);
>  		return rc;
>  	}

Actually it should also be freed on the success path, like so:


I can squash it in, if you are OK with that?

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 103c93f..c985835 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -162,14 +162,15 @@ static int __init xlated_setup_gnttab_pages(void)
 	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
 				    &xen_auto_xlat_grant_frames.vaddr);
 
-	kfree(pages);
 	if (rc) {
 		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
 			nr_grant_frames, rc);
 		free_xenballooned_pages(nr_grant_frames, pages);
+		kfree(pages);
 		kfree(pfns);
 		return rc;
 	}
+	kfree(pages);
 
 	xen_auto_xlat_grant_frames.pfn = pfns;
 	xen_auto_xlat_grant_frames.count = nr_grant_frames;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:49:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:49:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6loW-0002mB-BY; Fri, 24 Jan 2014 18:49:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davej@redhat.com>) id 1W6loU-0002m3-I7
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 18:49:10 +0000
Received: from [85.158.143.35:37226] by server-2.bemta-4.messagelabs.com id
	49/83-11386-5A5B2E25; Fri, 24 Jan 2014 18:49:09 +0000
X-Env-Sender: davej@redhat.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390589348!634738!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10834 invoked from network); 24 Jan 2014 18:49:09 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	24 Jan 2014 18:49:09 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0OIn51e030750
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 13:49:05 -0500
Received: from gelk.kernelslacker.org (ovpn-113-155.phx2.redhat.com
	[10.3.113.155])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0OImx9A018134
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 13:49:05 -0500
Received: from gelk.kernelslacker.org (localhost [127.0.0.1])
	by gelk.kernelslacker.org (8.14.7/8.14.7) with ESMTP id s0OImwsV005128; 
	Fri, 24 Jan 2014 13:48:58 -0500
Received: (from davej@localhost)
	by gelk.kernelslacker.org (8.14.7/8.14.7/Submit) id s0OImvSg005127;
	Fri, 24 Jan 2014 13:48:57 -0500
X-Authentication-Warning: gelk.kernelslacker.org: davej set sender to
	davej@redhat.com using -f
Date: Fri, 24 Jan 2014 13:48:57 -0500
From: Dave Jones <davej@redhat.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140124184857.GA5062@redhat.com>
Mail-Followup-To: Dave Jones <davej@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	xen-devel@lists.xensource.com
References: <20140123064242.09E68660D05@gitolite.kernel.org>
	<20140124183114.GA31844@redhat.com>
	<20140124184655.GC16410@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140124184655.GC16410@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: boris.ostrovsky@oracle.com, xen-devel@lists.xensource.com,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	david.vrabel@citrix.com
Subject: Re: [Xen-devel] Fix misplaced kfree from xlated_setup_gnttab_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 01:46:55PM -0500, Konrad Rzeszutek Wilk wrote:
 > Actually it should also be freed on the success path, as so:
 > 
 > I can squash it in, if you are OK with that?
 
Looks good to me.

thanks,

	Dave

 > diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
 > index 103c93f..c985835 100644
 > --- a/arch/x86/xen/grant-table.c
 > +++ b/arch/x86/xen/grant-table.c
 > @@ -162,14 +162,15 @@ static int __init xlated_setup_gnttab_pages(void)
 >  	rc = arch_gnttab_map_shared(pfns, nr_grant_frames, nr_grant_frames,
 >  				    &xen_auto_xlat_grant_frames.vaddr);
 >  
 > -	kfree(pages);
 >  	if (rc) {
 >  		pr_warn("%s Couldn't map %ld pfns rc:%d\n", __func__,
 >  			nr_grant_frames, rc);
 >  		free_xenballooned_pages(nr_grant_frames, pages);
 > +		kfree(pages);
 >  		kfree(pfns);
 >  		return rc;
 >  	}
 > +	kfree(pages);
 >  
 >  	xen_auto_xlat_grant_frames.pfn = pfns;
 >  	xen_auto_xlat_grant_frames.count = nr_grant_frames;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 18:54:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 18:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6lt5-0002yF-6F; Fri, 24 Jan 2014 18:53:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W6lt4-0002y9-8l
	for xen-devel@lists.xenproject.org; Fri, 24 Jan 2014 18:53:54 +0000
Received: from [193.109.254.147:54763] by server-7.bemta-14.messagelabs.com id
	7C/65-15500-1C6B2E25; Fri, 24 Jan 2014 18:53:53 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390589632!13031286!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3329 invoked from network); 24 Jan 2014 18:53:52 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 24 Jan 2014 18:53:52 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:52740 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W6lgr-0004Ds-Sn; Fri, 24 Jan 2014 19:41:18 +0100
Date: Fri, 24 Jan 2014 19:53:48 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <5410177059.20140124195348@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140124174806.GA15571@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<20140124174806.GA15571@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, January 24, 2014, 6:48:06 PM, you wrote:

> On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
>> 
>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>> 
>> >> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>> >> > nonethless.
>> >> 
>> >> As usual ;-)
>> 
>> > Ha!
>> > ..snip..
>> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> >> 
>> >> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>> >> > I totally forgot about it !
>> >> 
>> >> Got a link to that patchset ?
>> 
>> > https://lkml.org/lkml/2013/12/13/315
>> 
>> >> I at least could give it a spin .. you never know when fortune is on your side :-)
>> 
>> > It is also at this git tree:
>> 
>> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>> > want to merge it in your current Linus tree.
>> 
>> > Thank you!
>> 
>> 
>> Hi Konrad,
>> 
>> Just got time to test this some more, when merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>> seems to help with my problem, i'm no capable of using:
>> - xl pci-detach
>> - xl pci-assignable-remove
>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
>> 
>> to remove a pci device from a running HVM guest and rebinding it to a driver in dom0 without those nasty stacktraces :-)
>> So the first 4 seem to be an improvement.
>> 
>> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to give troubles of it's own.

> Could you email me your lspci output and also which devices you move/switch etc?

Hmm, hope you didn't misunderstand :-) I now spot a missing "w" .. i am noW capable of .. :-)
So it works when the first 4 patches of that branch are applied, I tried with both a NIC and a wireless NIC and had no problems.

The problems with that last commit don't seem to be related to the moving/switching of devices; they also occur
on a more regular create or shutdown of a guest with a device passed through to it.

Just to be complete, here is a stacktrace of the hung task i encounter then:

[  968.600248] INFO: task xenwatch:29 blocked for more than 120 seconds.
[  968.601885]       Not tainted 3.13.020140123-pcireset-p5+ #1
[  968.603086] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  968.605298] xenwatch        D ffff88003fb6c1d0     0    29      2 0x00000000
[  968.607007]  ffff88003fb6c1d0 0000000000000246 ffffffff81a0af60 ffff880038c8a980
[  968.608773]  0000000000013f00 0000000000013f00 ffff88003fb6c1d0 ffff88003fb9bfd8
[  968.609515]  ffff88003fb6c1d0 7fffffffffffffff 7fffffffffffffff 0000000000000002
[  968.610260] Call Trace:
[  968.610493]  [<ffffffff818eda68>] ? genl_set_err.isra.15.constprop.21+0x51/0x51
[  968.611163]  [<ffffffff818eda94>] ? schedule_timeout+0x2c/0x123
[  968.641854]  [<ffffffff813e8bc5>] ? read_reply+0xcc/0xd8
[  968.666733]  [<ffffffff81117c2f>] ? kfree+0x50/0x6f
[  968.691593]  [<ffffffff81116d5b>] ? arch_local_irq_restore+0x7/0x8
[  968.716408]  [<ffffffff813e8bc5>] ? read_reply+0xcc/0xd8
[  968.740956]  [<ffffffff81087ecf>] ? arch_local_irq_disable+0x7/0x8
[  968.765631]  [<ffffffff818eebc3>] ? __wait_for_common+0x123/0x15e
[  968.790148]  [<ffffffff8107d188>] ? try_to_wake_up+0x198/0x198
[  968.814917]  [<ffffffff818f061e>] ? _raw_spin_unlock_irqrestore+0xb/0xc
[  968.839656]  [<ffffffff813f18c1>] ? pcistub_get_pci_dev_by_slot+0xc3/0xd5
[  968.864307]  [<ffffffff813f27f4>] ? xen_pcibk_export_device+0x27/0xfe
[  968.888824]  [<ffffffff813f2a10>] ? xen_pcibk_setup_backend+0x145/0x265
[  968.913094]  [<ffffffff813f300d>] ? xen_pcibk_xenbus_probe+0xeb/0x12a
[  968.937093]  [<ffffffff813ea6b8>] ? xenbus_dev_probe+0x56/0xb5
[  968.961314]  [<ffffffff814cbd9b>] ? __driver_attach+0x73/0x73
[  968.985750]  [<ffffffff814cbc07>] ? driver_probe_device+0x92/0x1b3
[  969.010282]  [<ffffffff814ca434>] ? bus_for_each_drv+0x46/0x80
[  969.034570]  [<ffffffff814cbb40>] ? device_attach+0x68/0x86
[  969.058685]  [<ffffffff814cb22b>] ? bus_probe_device+0x2c/0x9d
[  969.082670]  [<ffffffff814c99b3>] ? device_add+0x371/0x51c
[  969.106462]  [<ffffffff8106339e>] ? init_timer_key+0xe/0x5a
[  969.130155]  [<ffffffff813ea39c>] ? xenbus_probe_node+0x121/0x160
[  969.153793]  [<ffffffff813e9369>] ? xenbus_dev_request_and_reply+0x75/0x75
[  969.177396]  [<ffffffff813ea550>] ? xenbus_dev_changed+0x175/0x1a4
[  969.200938]  [<ffffffff813e9431>] ? xenwatch_thread+0xc8/0xf2
[  969.224678]  [<ffffffff813e9369>] ? xenbus_dev_request_and_reply+0x75/0x75
[  969.248560]  [<ffffffff81085072>] ? bit_waitqueue+0x82/0x82
[  969.272249]  [<ffffffff810728e6>] ? kthread+0x99/0xa1
[  969.296128]  [<ffffffff8100384f>] ? xen_mc_issue.constprop.20+0x27/0x4d
[  969.320029]  [<ffffffff81070000>] ? get_task_pid+0x2a/0x2c
[  969.343706]  [<ffffffff8107284d>] ? __kthread_parkme+0x59/0x59
[  969.367233]  [<ffffffff818f598c>] ? ret_from_fork+0x7c/0xb0
[  969.390736]  [<ffffffff8107284d>] ? __kthread_parkme+0x59/0x59



> Thanks!
>> 
>> --
>> Sander
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 19:06:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 19:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6m5E-0003R6-0B; Fri, 24 Jan 2014 19:06:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W6m5C-0003R1-Am
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 19:06:26 +0000
Received: from [193.109.254.147:35057] by server-9.bemta-14.messagelabs.com id
	69/8A-13957-1B9B2E25; Fri, 24 Jan 2014 19:06:25 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390590382!9563132!1
X-Originating-IP: [199.249.25.208]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7812 invoked from network); 24 Jan 2014 19:06:24 -0000
Received: from omzsmtpe03.verizonbusiness.com (HELO
	omzsmtpe03.verizonbusiness.com) (199.249.25.208)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 19:06:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe03.verizonbusiness.com with ESMTP; 24 Jan 2014 19:06:22 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; 
	d="scan'208,217";a="245075988"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 24 Jan 2014 19:06:20 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Fri, 24 Jan 2014 14:06:20 -0500
Message-ID: <52E2B9AB.6090103@terremark.com>
Date: Fri, 24 Jan 2014 14:06:19 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Don Slutz
	<dslutz@verizon.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
	<20140124145840.GE12946@phenom.dumpdata.com>
	<52E29827.7000001@terremark.com>
In-Reply-To: <52E29827.7000001@terremark.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7875061455415934661=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7875061455415934661==
From xen-devel-bounces@lists.xen.org Fri Jan 24 19:06:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 19:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6m5E-0003R6-0B; Fri, 24 Jan 2014 19:06:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W6m5C-0003R1-Am
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 19:06:26 +0000
Received: from [193.109.254.147:35057] by server-9.bemta-14.messagelabs.com id
	69/8A-13957-1B9B2E25; Fri, 24 Jan 2014 19:06:25 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390590382!9563132!1
X-Originating-IP: [199.249.25.208]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7812 invoked from network); 24 Jan 2014 19:06:24 -0000
Received: from omzsmtpe03.verizonbusiness.com (HELO
	omzsmtpe03.verizonbusiness.com) (199.249.25.208)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 19:06:24 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from omzsmtpi03.vzbi.com ([165.122.46.173])
	by omzsmtpe03.verizonbusiness.com with ESMTP; 24 Jan 2014 19:06:22 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; 
	d="scan'208,217";a="245075988"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by omzsmtpi03.vzbi.com with ESMTP; 24 Jan 2014 19:06:20 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Fri, 24 Jan 2014 14:06:20 -0500
Message-ID: <52E2B9AB.6090103@terremark.com>
Date: Fri, 24 Jan 2014 14:06:19 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Don Slutz
	<dslutz@verizon.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
	<20140124145840.GE12946@phenom.dumpdata.com>
	<52E29827.7000001@terremark.com>
In-Reply-To: <52E29827.7000001@terremark.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7875061455415934661=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7875061455415934661==
Content-Type: multipart/alternative;
	boundary="------------070901010209060000050803"

--------------070901010209060000050803
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit

On 01/24/14 11:43, Don Slutz wrote:
> On 01/24/14 09:58, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slutz wrote:
>>> On 01/20/14 18:35, Konrad Rzeszutek Wilk wrote:
>>>> Don Slutz <dslutz@verizon.com> wrote:
>>> [snip]
>>>
>>>>> WARNING: g.e. still in use!
>>>>> WARNING: g.e. still in use!
>>>>> WARNING: g.e. still in use!
>>>>> pm_op(): platform_pm_resume+0x0/0x50 returns -19
>>>>> PM: Device i8042 failed to resume: error -19
>>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>>> "echo 0 >..."
>>>>> INFO: task sadc:22164 blocked for more then 120 seconds.
>>>>>
>>>>> [root@dcs-xen-54 ~]# xl des 17
>>>>> [root@dcs-xen-54 ~]# xl restore -V
>>>>> /big/xl-save/centos-6.4-x86_64.0.save
>>>>>
>>>>>
>>>>> Not sure if this is expected or not.
>>>> I think Ian saw this with the 'fast-cancel' something resume but I might be incorrect. Did it work if you used xend (you might have to configure it be enabled)?
>>> I have not used xend/xe in a long time.  I did need to configure it.
>>>
>>> Does not start:
>>>
>>>
>>> # /etc/init.d/xend start
>>> WARNING: Enabling the xend toolstack.
>>> xend is deprecated and scheduled for removal. Please migrate
>>> to another toolstack ASAP.
>>> Traceback (most recent call last):
>>>    File "/usr/sbin/xend", line 110, in <module>
>>>      sys.exit(main())
>>>    File "/usr/sbin/xend", line 91, in main
>>>      start_blktapctrl()
>>>    File "/usr/sbin/xend", line 77, in start_blktapctrl
>>>      start_daemon("blktapctrl", "")
>>>    File "/usr/sbin/xend", line 74, in start_daemon
>>>      os.execvp(daemon, (daemon,) + args)
>>>    File "/usr/lib64/python2.7/os.py", line 344, in execvp
>>>      _execvpe(file, args)
>>>    File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
>>>      func(fullname, *argrest)
>>> OSError: [Errno 2] No such file or directory
>>>
>>>
>>> How important is it to try this?
>> It tells us whether the issue is indeed with the 'fast-cancel' thing.
>>
>> But, I do recall seeing a patch from Ian Jackson for this - I just
>> don't remember what it was called - it was posted here and perhaps
>> applying it would help?
>
> I have not found a patch.  The bug #30:
>
>
> http://bugs.xenproject.org/xen/bug/30
>
> Looks to me like the issue I am seeing.  The Guest I was testing this on is an old kernel (as far as this bug) and so I would expect it might be related.
>
>
> Here is what I see that may link it to this:
>
>     *From*: Ian Campbell <Ian.Campbell@citrix.com>
>     *To*: Ian Jackson <Ian.Jackson@eu.citrix.com>
>     *Cc*: xen-devel@lists.xen.org
>     *Subject*: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on failed migration
>     *Date*: Fri, 10 Jan 2014 10:26:31 +0000
>
> ...
>
>     Looks like RHEL4 (linux-2.6.9-89.0.16.EL kernel) doesn't have the
>     support for the new mode at all.
>
>     It would probably be wise to validate this under xend before chasing
>     red-herrings with xl.
>
>     Ian.
>
>
> So I will continue the fight to get xend running.
>
>    -Don
>
>

Ah, I now have xend running, but I still need to convert from an xl.cfg file to an xmdomain.cfg...  Having not used xm in years, this will take a while.  The man page does not say how to build an HVM guest.

I did another test and Fedora 19 (x86_64) saved just fine.  So this looks to be bug #30.

If I understand this all, it means that older Linux kernels will also fail this way if a migration fails and the source guest is restarted.

So, do I continue trying to get xend working, or just say it is an example of bug #30?

     -Don Slutz


>
>
>
>>>      -Don Slutz
>>>
>


--------------070901010209060000050803--


--===============7875061455415934661==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7875061455415934661==--


From xen-devel-bounces@lists.xen.org Fri Jan 24 19:20:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 19:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6mIX-00044w-Nx; Fri, 24 Jan 2014 19:20:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6mIV-00044r-K6
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 19:20:11 +0000
Received: from [85.158.139.211:10913] by server-17.bemta-5.messagelabs.com id
	63/FC-19152-AECB2E25; Fri, 24 Jan 2014 19:20:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390591208!11826264!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14913 invoked from network); 24 Jan 2014 19:20:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Jan 2014 19:20:10 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OJK7JK032118
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 19:20:08 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0OJK77h014847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 24 Jan 2014 19:20:07 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0OJK69o014830; Fri, 24 Jan 2014 19:20:06 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 11:20:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B6D771BFA72; Fri, 24 Jan 2014 14:20:05 -0500 (EST)
Date: Fri, 24 Jan 2014 14:20:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Don Slutz <dslutz@verizon.com>
Message-ID: <20140124192005.GA17156@phenom.dumpdata.com>
References: <52DDA807.2050703@terremark.com>
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>
	<52E191BB.7040904@terremark.com>
	<20140124145840.GE12946@phenom.dumpdata.com>
	<52E29827.7000001@terremark.com> <52E2B9AB.6090103@terremark.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E2B9AB.6090103@terremark.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> >So I will continue the fight to get xend running.
> >
> >   -Don
> >
> >
> 
> Ah, now have xend running, but still need to convert from an xl.cfg file to a xmdomain.cfg...  Having not used xm in years, this will take a while.  The man page does not say how to build an hvm guest.
> 
> I did another test and Fedora 19 (x86_64) saved just fine.  So this looks to be bug #30.
> 
> If I understand this all, this means that the older linux kernels will fail this way also if a migrate fails and the source guest is restarted.
> 
> So, do I continue to get xend working, or just say it is an example of bug #30?

Blame #30 :-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 19:36:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 19:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6mXc-0004UM-NA; Fri, 24 Jan 2014 19:35:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W6mXc-0004UH-0Z
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 19:35:48 +0000
Received: from [85.158.143.35:53307] by server-1.bemta-4.messagelabs.com id
	52/24-02132-390C2E25; Fri, 24 Jan 2014 19:35:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390592144!646143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6449 invoked from network); 24 Jan 2014 19:35:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 19:35:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; d="scan'208";a="94268321"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 19:35:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 24 Jan 2014 14:35:25 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W6mXE-0001tA-PF;
	Fri, 24 Jan 2014 19:35:24 +0000
Date: Fri, 24 Jan 2014 19:34:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401171344400.21510@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401241931270.15917@kaball.uk.xensource.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
	<20140117134315.GB16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171344400.21510@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 17 Jan 2014, Stefano Stabellini wrote:
> On Fri, 17 Jan 2014, Anthony PERARD wrote:
> > On Fri, Jan 17, 2014 at 01:17:55PM +0000, Stefano Stabellini wrote:
> > > On Fri, 17 Jan 2014, Anthony PERARD wrote:
> > > > On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
> > > > > On Thu, 16 Jan 2014, Anthony PERARD wrote:
> > > > > > On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
> > > > > > > On Wed, 15 Jan 2014, Ian Campbell wrote:
> > > > > > > > On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
> > > > > > > > > On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
> > > > > > > > > > On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
> > > > > > > > > > > There is an update of QEMU 1.6, I have done a merge and put that in a tree:
> > > > > > > > > > > git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
> > > > > > > > > > 
> > > > > > > > > > Based on the above I have no idea whether a freeze exception should be
> > > > > > > > > > granted for this, so my default answer is no. I'm not sure what else you
> > > > > > > > > > could have expected.
> > > > > > > > > > 
> > > > > > > > > > If you think there are changes here which should be in 4.4.0 then please
> > > > > > > > > > enumerate all changes included in this merge which have any relation to
> > > > > > > > > > Xen and their potential impact on the release.
> > > > > > > > > 
> > > > > > > > > I have listed the changes here that have a potential impact on Xen, with
> > > > > > > > > the ones that I think are quite important at the beginning. Either the
> > > > > > > > > commit title speaks for itself or I added a small description of what is
> > > > > > > > > affected.
> > > > > > > > 
> > > > > > > > Thanks but there's not a lot here for me to go on WRT making a decision
> > > > > > > > on a freeze exception. Did you refer to 
> > > > > > > > http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
> > > > > > > > like I said? A freeze exception needs an analysis of benefits and risks,
> > > > > > > > not the very briefest words you can possibly manage.
> > > > > > > > 
> > > > > > > > Anyway it appears this is a grab bag of things we might want and misc
> > > > > > > > fixes which are perhaps nice to have, I'm nowhere near comfortable
> > > > > > > > giving it a blanket exemption based on what you've presented here, or
> > > > > > > > even of cherry picking what might be the important ones. If you think
> > > > > > > > any or all of it is actually important for 4.4 please make a proper case
> > > > > > > > for inclusion, either of the aggregate or of the individual changes.
> > > > > > > 
> > > > > > > Anthony, did you simply update the tree by pulling from the upstream 1.6
> > > > > > > stable tree?
> > > > > > 
> > > > > > Yes, a simple merge.
> > > > > > 
> > > > > > > I also assume that you tested at the very least the basic
> > > > > > > PV and HVM configurations?
> > > > > > 
> > > > > > :(, no, I haven't tried PV. But I did try HVM.
> > > > > > 
> > > > > > There is one thing that I may want to try: migration from the
> > > > > > previous version of Xen. There is one patch that changes (fixes?) that.
> > > > > 
> > > > > Please do and let me know if it works as expected.
> > > > 
> > > > I have tried a PV guest; it does work fine.
> > > > 
> > > > I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4
> > > > with both the current qemu-xen tree and with the merge of 1.6.2, but
> > > > the migration fails in both cases because of the same error reported
> > > > by qemu (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0).
> > > > I did not investigate that; it might just be an issue with my compile
> > > > script (using the wrong qemu-xen tree).
> > > 
> > > It is important that we identify what is the cause of the problem.
> > > Especially if you think that it could be a "compile script" issue,
> > > because I imagine that if it is, it might invalidate your previous
> > > positive tests too.
> > 
> > I was always compiling with the master branch of qemu-xen, so I had
> > Xen 4.3 with QEMU 1.6 instead of 1.3. So it only invalidates the
> > migration test.
> 
> OK, good. Can you double-check how migration from Xen 4.3 with QEMU 1.3
> works?

I tested migration myself: although the update fixes a couple of
problems with migration from 1.3, it also introduces a new one, which
is fortunately fixed in QEMU 1.7.0.

I have a branch based on 1.6.2 plus the backported fix from 1.7.0 that
works well, I recommend getting a release exception for it and pushing
it as soon as possible.
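For reference, the cross-version save/restore test discussed above boils
down to the following rough sketch. The guest name, image path and
destination host are hypothetical placeholders, not values from this
thread, and DRY_RUN defaults to printing the commands rather than
executing them:

```shell
#!/bin/sh
# Sketch of the cross-version migration test discussed above:
# save a guest under Xen 4.3, then restore the image under Xen 4.4.
# GUEST, IMAGE and DEST_HOST are hypothetical placeholders.
# DRY_RUN defaults to 1, i.e. commands are printed, not executed.

GUEST=${GUEST:-testvm}
IMAGE=${IMAGE:-/tmp/$GUEST.save}
DEST_HOST=${DEST_HOST:-xen44-host}
: "${DRY_RUN:=1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# On the source (Xen 4.3) host: checkpoint the guest to a file.
run xl save "$GUEST" "$IMAGE"

# Copy the saved image to the destination (Xen 4.4) host.
run scp "$IMAGE" "$DEST_HOST:$IMAGE"

# On the destination host: restore the guest. A device-model state
# mismatch surfaces here, e.g. the "Unknown savevm section or
# instance '0000:02.0/cirrus_vga' 0" error quoted earlier.
run ssh "$DEST_HOST" xl restore "$IMAGE"
```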

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 19:58:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 19:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6msu-00058S-MZ; Fri, 24 Jan 2014 19:57:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6mss-00058N-PI
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 19:57:47 +0000
Received: from [85.158.139.211:48610] by server-4.bemta-5.messagelabs.com id
	AC/7C-26791-9B5C2E25; Fri, 24 Jan 2014 19:57:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390593463!11798697!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22144 invoked from network); 24 Jan 2014 19:57:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 19:57:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,714,1384300800"; d="scan'208";a="94275919"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Jan 2014 19:57:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 14:57:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6msn-0008NB-78;
	Fri, 24 Jan 2014 19:57:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6msm-0000eQ-Sv;
	Fri, 24 Jan 2014 19:57:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24479-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 19:57:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24479: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24479 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24479/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24469
 build-amd64                   4 xen-build                 fail REGR. vs. 24469
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24469

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
baseline version:
 xen                  85c4e39100037fafc4e4c3e517aaef8180ffdde7

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:46:43 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:46:43 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 24 21:58:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 21:58:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6olL-0008JP-OK; Fri, 24 Jan 2014 21:58:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W6olJ-0008JK-42
	for xen-devel@lists.xen.org; Fri, 24 Jan 2014 21:58:05 +0000
Received: from [193.109.254.147:22480] by server-10.bemta-14.messagelabs.com
	id FD/B3-20752-CE1E2E25; Fri, 24 Jan 2014 21:58:04 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390600681!12983567!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9131 invoked from network); 24 Jan 2014 21:58:02 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Jan 2014 21:58:02 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0OLuuPh031352
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Jan 2014 21:56:56 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OLusaK008937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 24 Jan 2014 21:56:55 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0OLusxm008913; Fri, 24 Jan 2014 21:56:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 13:56:53 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 50BF01BFA72; Fri, 24 Jan 2014 16:56:52 -0500 (EST)
Date: Fri, 24 Jan 2014 16:56:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124215652.GA18710@phenom.dumpdata.com>
References: <1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="d6Gm4EdcadzBjdND"
Content-Disposition: inline
In-Reply-To: <20140124174349.GA15472@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re: [PATCH]
 x86/msi: Validate the guest-identified PCI devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--d6Gm4EdcadzBjdND
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Fri, Jan 24, 2014 at 12:43:49PM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 24, 2014 at 04:19:15PM +0000, Jan Beulich wrote:
> > >>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > I built the kernel without the igb driver just to eliminate it being
> > > the culprit. Now I can boot without issues and this is what lspci
> > > reports:
> > > 
> > > -bash-4.1# lspci -s 02:00.0 -v
> > > 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > > Connection (rev 01)
> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > >         Flags: bus master, fast devsel, latency 0, IRQ 10
> > >         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
> > >         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
> > >         I/O ports at e020 [size=32]
> > >         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
> > >         Expansion ROM at f0c00000 [disabled] [size=4M]
> > >         Capabilities: [40] Power Management version 3
> > >         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> > >         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
> > 
> > So here's a patch to figure out why we don't find this.
> 
> Thank you!
> 
> See attached log. The corresponding xen-syms is compressed and
> updated at : http://darnok.org/xen/xen-syms.gz
> 
> The interesting bit is:
> 
> (XEN) 02:00.0: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
> (XEN) 02:00.0: pos=40
> (XEN) 02:00.0: id=01
> (XEN) 02:00.0: pos=50
> (XEN) 02:00.0: id=05
> (XEN) 02:00.0: pos=70
> (XEN) 02:00.0: id=11
> (XEN) 02:00.1: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
> (XEN) 02:00.1: pos=40
> (XEN) 02:00.1: id=01
> (XEN) 02:00.1: pos=50
> (XEN) 02:00.1: id=05
> (XEN) 02:00.1: pos=70
> (XEN) 02:00.1: id=11

You were right on the idea that it might be the device not having
the right capabilities, but it was the wrong BDF. I instrumented
the faulting operation to make sure I knew which BDF it was:

(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff830239467f70,pdev ffff8302394660d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff830239466250,pdev ffff830239466190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff830239466520,pdev ffff830239466460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff830239466eb0,pdev ffff830239466df0

(XEN) [2014-01-25 03:42:08] msix_capability_init:759 for 05:00.0:, msix:0 dev:ffff8302394665b0
(XEN) [2014-01-25 03:42:08] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-25 03:42:08] CPU:    0
(XEN) [2014-01-25 03:42:08] RIP:    e008:[<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
... snip..
(XEN) [2014-01-25 03:42:08] Xen call trace:
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801689c2>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-25 03:42:08]    [<ffff82d08016c68c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f134>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f844>] do_physdev_op+0x646/0x1232
(XEN) [2014-01-25 03:42:08]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x145
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Pagetable walk from 0000000000000004:
(XEN) [2014-01-25 03:42:08]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] Panic on CPU 0:
(XEN) [2014-01-25 03:42:08] FATAL PAGE FAULT
(XEN) [2014-01-25 03:42:08] [error_code=0000]
(XEN) [2014-01-25 03:42:08] Faulting linear address: 0000000000000004
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Manual reset required ('noreboot' specified)

lspci shows (baremetal kernel, with said driver):

bash-4.1# lspci -s 05:00.0 -v 
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
        Subsystem: Super Micro Computer Inc Device 1533
        Flags: bus master, fast devsel, latency 0, IRQ 19
        Memory at f1900000 (32-bit, non-prefetchable) [size=512K]
        I/O ports at c000 [size=32]
        Memory at f1980000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-86-be-f1
        Capabilities: [1a0] #17
        Kernel driver in use: igb

aka, Intel I210 

lspci shows (Xen, kernel does not have igb built-in):

-bash-4.1# lspci -s 05:00.0 -v
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
        Subsystem: Super Micro Computer Inc Device 1533
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at f1900000 (32-bit, non-prefetchable) [size=512K]
        I/O ports at c000 [size=32]
        Memory at f1980000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-86-be-f1
        Capabilities: [1a0] #17

And with -xxx:

bash-4.1# lspci -s 05:00.0 -xxx
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
00: 86 80 33 15 07 00 10 00 03 00 00 02 10 00 00 00
10: 00 00 90 f1 00 00 00 00 01 c0 00 00 00 00 98 f1
20: 00 00 00 00 00 00 00 00 00 00 00 00 d9 15 33 15
30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00
40: 01 50 23 c8 08 20 00 00 00 00 00 00 00 00 00 00
50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 04 00 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff
a0: 10 00 02 00 c2 8c 00 10 07 28 19 00 11 5c 42 00
b0: 40 00 11 10 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Which would imply that we should start with '50' offset, not
'60'!
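For what it's worth, the capability walk can be reproduced by hand from these dumps.
A minimal sketch (my code, not anything in Xen; the hexdump rows are copied from this
mail, and rows the walk never reads are left zeroed) following the standard chain:
the byte at config offset 0x34 points at the first capability, and each capability
is an (id, next) pair:

```python
# Reproduce the PCI capability-list walk over the two config-space dumps
# quoted in this mail. My sketch, not Xen code.

def parse_dump(text):
    """Turn 'xx: aa bb ...' hexdump rows into a 256-byte config space."""
    cfg = bytearray(256)
    for line in text.strip().splitlines():
        off, _, rest = line.partition(":")
        for i, byte in enumerate(rest.split()):
            cfg[int(off, 16) + i] = int(byte, 16)
    return cfg

def walk_caps(cfg):
    """Follow the chain from the capability pointer at config offset 0x34."""
    caps, pos = [], cfg[0x34]
    while pos:
        caps.append((pos, cfg[pos]))   # (position, capability id)
        pos = cfg[pos + 1]             # next-capability pointer
    return caps

# Rows from 'lspci -s 05:00.0 -xxx' above (the live I210 config space):
LIVE = """
30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00
40: 01 50 23 c8 08 20 00 00 00 00 00 00 00 00 00 00
50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 04 00 03 00 00 00 03 20 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 10 07 28 19 00 11 5c 42 00
"""
# Rows from the 'pci=earlydump' output below (what 05:00.0 looked like at
# boot, i.e. what Xen scanned):
STALE = """
30: ff 00 00 00 60 00 00 00 00 00 00 00 ff 00 10 00
60: 0d a0 00 00 d9 15 05 08 00 00 00 00 00 00 00 00
a0: 01 00 03 f8 08 00 00 00 00 00 00 00 00 00 00 00
"""

# Live chain: 40 -> 50 -> 70 -> a0, ids 01 (PM), 05 (MSI), 11 (MSI-X), 10 (PCIe).
print(walk_caps(parse_dump(LIVE)))
# Stale chain: 60 -> a0 -> end, ids 0d then 01. Exactly Xen's "no cap 11".
print(walk_caps(parse_dump(STALE)))
```

Which matches the alloc_pdev debug output above byte for byte.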


If I boot baremetal with 'pci=earlydump' I get:

[    0.000000] pci 0000:05:00.0 config space:
[    0.000000]   00: e3 10 13 81 07 00 10 00 01 01 04 06 00 00 01 00
[    0.000000]   10: 00 00 00 00 00 00 00 00 05 06 07 20 f1 01 a0 22
[    0.000000]   20: 50 f1 60 f1 f1 ff 01 00 00 00 00 00 00 00 00 00
[    0.000000]   30: ff 00 00 00 60 00 00 00 00 00 00 00 ff 00 10 00
[    0.000000]   40: 00 aa 00 00 00 19 90 7d 80 01 00 00 07 03 00 00
[    0.000000]   50: 68 89 09 80 00 1f 00 00 00 01 00 00 00 00 00 00
[    0.000000]   60: 0d a0 00 00 d9 15 05 08 00 00 00 00 00 00 00 00
[    0.000000]   70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   a0: 01 00 03 f8 08 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   b0: 00 00 00 00 40 00 00 00 00 00 00 00 ef fb be 07
[    0.000000]   c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Which does indeed show that at bootup the PCI configuration
space is different. 

<blink>And the device id does not match!

If I look at one that has it:
[    0.000000] pci 0000:04:00.0 config space:
[    0.000000]   00: 86 80 33 15 07 00 10 00 03 00 00 02 10 00 00 00
[    0.000000]   10: 00 00 90 f1 00 00 00 00 01 c0 00 00 00 00 98 f1
[    0.000000]   20: 00 00 00 00 00 00 00 00 00 00 00 00 d9 15 33 15
[    0.000000]   30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00
[    0.000000]   40: 01 50 23 c8 08 20 00 00 00 00 00 00 00 00 00 00
[    0.000000]   50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   70: 11 a0 04 00 03 00 00 00 03 20 00 00 00 00 00 00
[    0.000000]   80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   90: 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff
[    0.000000]   a0: 10 00 02 00 c2 8c 00 10 07 28 19 00 11 5c 42 00
[    0.000000]   b0: 42 00 11 10 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

That matches reality more closely, and 04:00.0 is actually 05:00.0.

The reason that is happening is probably because of:

-bash-4.1# cat /proc/cmdline 
initrd=initramfs.cpio.gz console=ttyS0,115200 kgdboc=ttyS0 pci=assign-busses pci=earlydump BOOT_IMAGE=vmlinuz 
-bash-4.1# 

The 'assign-busses' which is needed for SR-IOV to work.

If I don't use that parameter, the Linux kernel (baremetal and with Xen)
tells me:


-bash-4.1# cat /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_numvfs
0
-bash-4.1# cat /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_totalvfs
7
-bash-4.1# echo 7 > /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_numvfs
-bash: echo: write error: Cannot allocate memory
-bash-4.1# dmesg | tail
[  241.874349] random: sshd urandom read with 63 bits of entropy available
[  242.918267] Loading iSCSI transport class v2.0-870.
[  242.926046] iscsi: registered transport (tcp)
[  244.689798] scsi8 : iSCSI Initiator over TCP/IP
[  244.709799]  connection1:0: detected conn error (1020)
[  244.969450] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[  244.980434] device-mapper: multipath: version 1.6.0 loaded
[  250.027291] random: nonblocking pool is initialized
[  256.282312] switch: port 1(eth0) entered forwarding state
[  365.468641] igb 0000:02:00.0: SR-IOV: bus number out of range


And sure enough if I boot Xen without 'pci=assign-busses' it works just
fine.

Ugh.

I wonder how Xen 4.3 would actually do the PCI passthrough - it booted with
the 'assign-busses' - but I hadn't tried to do PCI passthrough of the
PF device (the I210).

If I do pass in '05:00.0' (the new bus number) I wonder if it will use the IOMMU
context of whatever '05:00.0' was _before_ the bus re-assignment, aka:

05:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=05, secondary=06, subordinate=07, sec-latency=32
        Memory behind bridge: f1500000-f16fffff
        Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3

Which I think would confuse Xen, as this is clearly labeled as a bridge,
not a regular PCI device.


The reason for me using 'pci=assign-busses' is that it looks to be
the only option to use SR-IOV.

Which I suppose makes sense as it tries to create VFs right after its own bus id:


  +-01.1-[02-03]--+-[0000:03]-+-10.0  Intel Corporation 82576 Virtual Function
           |               |           +-10.1  Intel Corporation 82576 Virtual Function
           |               |           +-10.2  Intel Corporation 82576 Virtual Function
           |               |           +-10.3  Intel Corporation 82576 Virtual Function
           |               |           +-10.4  Intel Corporation 82576 Virtual Function
           |               |           +-10.5  Intel Corporation 82576 Virtual Function
           |               |           +-10.6  Intel Corporation 82576 Virtual Function
           |               |           +-10.7  Intel Corporation 82576 Virtual Function
           |               |           +-11.0  Intel Corporation 82576 Virtual Function
           |               |           +-11.1  Intel Corporation 82576 Virtual Function
           |               |           +-11.2  Intel Corporation 82576 Virtual Function
           |               |           +-11.3  Intel Corporation 82576 Virtual Function
           |               |           +-11.4  Intel Corporation 82576 Virtual Function
           |               |           \-11.5  Intel Corporation 82576 Virtual Function
           |               \-[0000:02]-+-00.0  Intel Corporation 82576 Gigabit Network Connection
           |                           \-00.1  Intel Corporation 82576 Gigabit Network Connection


But why does it have to have the bus _right_ after its own? Can't it
use one at the end of its bus space? The bus right after it is occupied
by another card (if I boot without 'pci=assign-busses').
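As far as I understand the SR-IOV spec, the VFs _have_ to land right after the
PF: VF n's routing ID is the PF's routing ID plus First VF Offset plus
(n-1) * VF Stride, and the bus number is just the high byte of that sum, so it
is not freely assignable. A sketch (my code; the offset/stride values are
assumptions inferred from the 03:10.0..03:11.5 layout in the tree above, not
read from the card's SR-IOV capability):

```python
# Derive a VF's BDF from the PF's, per the SR-IOV routing-ID rule.
# First VF Offset (0x180) and VF Stride (1) below are inferred, not
# values read from the 82576's SR-IOV capability.

def vf_bdf(pf_bus, pf_devfn, first_offset, stride, n):
    """(bus, dev, fn) of the n-th VF (1-based) behind the given PF."""
    rid = ((pf_bus << 8) | pf_devfn) + first_offset + stride * (n - 1)
    return rid >> 8, (rid >> 3) & 0x1F, rid & 0x7

# PF 02:00.0: the first and last of the 14 VFs come out as 03:10.0 and
# 03:11.5, matching the tree above.
print(vf_bdf(2, 0x00, 0x180, 1, 1))
print(vf_bdf(2, 0x00, 0x180, 1, 14))
```

And if the last VF's bus exceeds the subordinate bus number of the upstream
bridge, the kernel's sriov_enable() bails out with -ENOMEM, which would line up
with both the 'Cannot allocate memory' from the sysfs write and the
'SR-IOV: bus number out of range' line in dmesg above.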

I do recall using this particular SR-IOV card on different hardware
a year ago or so. And it did work. I think that might be because
there were no PCI cards _after_ the SR-IOV card.

For posterity, with pci=assign-busses under baremetal (with SR-IOV enabled):
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
07:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11)
07:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
08:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
09:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
0a:00.0 SATA controller: Device 1b21:0612 (rev 01)

Without 'pci=assign-busses' under baremetal:
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
03:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
05:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
06:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11)
06:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
07:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
09:00.0 SATA controller: Device 1b21:0612 (rev 01)


This problem with the SR-IOV bus range seems to have been solved in 2009:

commit a28724b0fb909d247229a70761c90bb37b13366a
Author: Yu Zhao <yu.zhao@intel.com>
Date:   Fri Mar 20 11:25:13 2009 +0800

    PCI: reserve bus range for SR-IOV device
    
    Reserve the bus number range used by the Virtual Function when
    pcibios_assign_all_busses() returns true.

And pcibios_assign_all_busses() is the one that returns true if 'pci=assign-busses'
is set.


--d6Gm4EdcadzBjdND
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="tst035-jan-debug-2.txt"
Content-Transfer-Encoding: quoted-printable

Trying 192.168.102.15...
Connected to maxsrv2.
Escape character is '^]'.
[serial capture; ANSI screen-redraw escape sequences elided]
Initializing Intel(R) Boot Agent GE v1.3.22
PXE 2.1 Build 086 (WfM 2.0)
Press Ctrl+S to enter the Setup Menu.
Initializing Intel(R) Boot Agent GE v1.4.10
PXE 2.1 Build 092 (WfM 2.0)
                                                 =1B[11;00HPress Ctrl+S to =
enter the Setup Menu.                                           =1B[12;00H =
                                                                           =
    =1B[13;00H                                                             =
                   =1B[14;00H                                              =
                                  =1B[15;00H                               =
                                                 =1B[16;00H                =
                                                                =1B[17;00H =
                                                                           =
    =1B[18;00H                                                             =
                   =1B[19;00H                                              =
                                  =1B[20;00H                               =
                                                 =1B[21;00H                =
                                                                =1B[22;00H =
                                                                           =
    =1B[23;00H                                                             =
                   =1B[24;00H                                              =
                                 =1B[24;00H=1B[11;39H=1B[11;00HPress Ctrl+S=
 to enter the Setup Menu..                                          =1B[11;=
39H=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=
=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=1B[11;39H=80=08 =08=1B[2J=1B[1;1H=
=1B[1;1H=80=08 =08=1B[01;00HInitializing Intel(R) Boot Agent GE v1.3.22    =
                                 =1B[02;00HPXE 2.1 Build 086 (WfM 2.0)     =
                                                =1B[03;00H                 =
                                                               =1B[04;00H  =
                                                                           =
   =1B[05;00HInitializing Intel(R) Boot Agent GE v1.3.22                   =
                  =1B[06;00HPXE 2.1 Build 086 (WfM 2.0)                    =
                                 =1B[07;00H                                =
                                                =1B[08;00H                 =
                                                               =1B[09;00HIn=
itializing Intel(R) Boot Agent GE v1.4.10                                  =
   =1B[10;00HPXE 2.1 Build 092 (WfM 2.0)                                   =
                  =1B[11;00H                                               =
                                 =1B[12;00H                                =
                                                =1B[13;00HInitializing Inte=
l(R) Boot Agent GE v1.4.10                                     =1B[14;00HPX=
E 2.1 Build 092 (WfM 2.0)                                                  =
   =1B[15;00HPress Ctrl+S to enter the Setup Menu.                         =
                  =1B[16;00H                                               =
                                 =1B[17;00H                                =
                                                =1B[18;00H                 =
                                                               =1B[19;00H  =
                                                                           =
   =1B[20;00H                                                              =
                  =1B[21;00H                                               =
                                 =1B[22;00H                                =
                                                =1B[23;00H                 =
                                                               =1B[24;00H  =
                                                                           =
  =1B[24;00H=1B[15;39H=1B[15;00HPress Ctrl+S to enter the Setup Menu..     =
                                     =1B[15;39H=1B[15;39H=1B[15;39H=1B[15;3=
9H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B=
[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=80=08 =08=1B[2J=1B[1;1H=1B[=
1;1H=80=08 =08=1B[01;00HInitializing Intel(R) Boot Agent GE v1.3.22        =
                             =1B[02;00HPXE 2.1 Build 086 (WfM 2.0)         =
                                            =1B[03;00H                     =
                                                           =1B[04;00H      =
                                                                          =
=1B[05;00HInitializing Intel(R) Boot Agent GE v1.3.22                      =
               =1B[06;00HPXE 2.1 Build 086 (WfM 2.0)                       =
                              =1B[07;00H                                   =
                                             =1B[08;00H                    =
                                                            =1B[09;00HIniti=
alizing Intel(R) Boot Agent GE v1.4.10                                     =
=1B[10;00HPXE 2.1 Build 092 (WfM 2.0)                                      =
               =1B[11;00H                                                  =
                              =1B[12;00H                                   =
                                             =1B[13;00HInitializing Intel(R=
) Boot Agent GE v1.4.10                                     =1B[14;00HPXE 2=
=2E1 Build 092 (WfM 2.0)                                                   =
  =1B[15;00H                                                               =
                 =1B[16;00H                                                =
                                =1B[17;00HInitializing Intel(R) Boot Agent =
GE v1.4.10                                     =1B[18;00HPXE 2.1 Build 092 =
(WfM 2.0)                                                     =1B[19;00HPre=
ss Ctrl+S to enter the Setup Menu.                                         =
  =1B[20;00H                                                               =
                 =1B[21;00H                                                =
                                =1B[22;00H                                 =
                                               =1B[23;00H                  =
                                                              =1B[24;00H   =
                                                                           =
 =1B[24;00H=1B[19;39H=1B[19;00HPress Ctrl+S to enter the Setup Menu..      =
                                    =1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39=
H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[=
19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=80=08 =08=1B[2J=1B[1;1H=1B[1=
;1H=80=08 =08=1B[01;00H                                         =80=08 =08=
=1B[2J=1B[1;1H=1B[1;1H=1B[2J=1B[1;1H=1B[2J=1B[1;1H=80=08 =08=1B[01;00H     =
                                                                           =
=1B[02;00HIntel(R) Boot Agent GE v1.4.10                                   =
               =1B[03;00HCopyright (C) 1997-2012, Intel Corporation        =
                              =1B[04;00H                                   =
                                             =1B[05;00HInitializing and est=
ablishing link...                                           =1B[06;00H     =
                                                                           =
=1B[07;00H                                                                 =
               =1B[08;00H                                                  =
                              =1B[09;00H                                   =
                                             =1B[10;00H                    =
                                                            =1B[11;00H     =
                                                                           =
=1B[12;00H                                                                 =
               =1B[13;00H                                                  =
                              =1B[14;00H                                   =
                                             =1B[15;00H                    =
                                                            =1B[16;00H     =
                                                                           =
=1B[17;00H                                                                 =
               =1B[18;00H                                                  =
                              =1B[19;00H                                   =
                                             =1B[20;00H                    =
                                                            =1B[21;00H     =
                                                                           =
=1B[22;00H                                                                 =
               =1B[23;00H                                                  =
                              =1B[24;00H                                   =
                                            =1B[24;00H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;00HCLIENT MAC ADDR: 00 25 90 86 BE F0  GUID: 00000000 0000=
 0000 0000 00259086BEF0  =1B[06;00HDHCP.|                                  =
                                        =1B[06;06H=1B[06;00HDHCP./         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.-                                                           =
               =1B[06;06H=1B[06;00HDHCP.\                                  =
                                        =1B[06;06H=1B[06;00HDHCP.|         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP./                                                           =
               =1B[06;06H=1B[06;00HDHCP.-                                  =
                                        =1B[06;06H=1B[06;00HDHCP.\         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.|                                                           =
               =1B[06;06H=1B[06;00HDHCP./                                  =
                                        =1B[06;06H=1B[06;00HDHCP.-         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.\                                                           =
               =1B[06;06H=1B[06;00HDHCP.|                                  =
                                        =1B[06;06H=1B[06;00HDHCP./         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.-                                                           =
               =1B[06;06H=1B[06;00HDHCP.\                                  =
                                        =1B[06;06H=1B[06;00HDHCP.|         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP./                                                           =
               =1B[06;06H=1B[06;00HCLIENT IP: 192.168.102.35  MASK: 255.255=
=2E255.0  DHCP IP: 192.168.102.1          =0D
PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Fri Jan 24 14:40:10 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.107 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0xlity: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff830239467f70,pdev ffff8302394660d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff830239466250,pdev ffff830239466190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff830239466520,pdev ffff830239466460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff830239466eb0,pdev ffff830239466df0
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI wit(XEN) TSC deadline timer enabled
(XEN) [2014-01-25 03:41:53] Platform timer is 14.318MHz HPET
(XEN) [2014-01-25 03:41:53] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-25 03:41:53] mwait-idle: MWAIT substates: 0x42120
(XEN) [2014-01-25 03:41:53] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-25 03:41:53] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-25 03:41:53] VMX: Supported advanced features:
(XEN) [2014-01-25 03:41:53]  - APIC MMIO access virtualisation
(XEN) [2014-01-25 03:41:53]  - APIC TPR shadow
(XEN) [2014-01-25 03:41:53]  - Extended Page Tables (EPT)
(XEN) [2014-01-25 03:41:53]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-25 03:41:53]  - Virtual NMI
(XEN) [2014-01-25 03:41:53]  - MSR direct-access bitmap
(XEN) [2014-01-25 03:41:53]  - Unrestricted Guest
(XEN) [2014-01-25 03:41:53]  - VMCS shadowing
(XEN) [2014-01-25 03:41:53] HVM: ASIDs enabled.
(XEN) [2014-01-25 03:41:53] HVM: VMX enabled
(XEN) [2014-01-25 03:41:53] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-25 03:41:53] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-25 03:41:53] Brought up 8 CPUs
(XEN) [2014-01-25 03:41:53] ACPI sleep modes: S3
(XEN) [2014-01-25 03:41:53] mcheck_poll: Machine check polling timer started.*** LOADING DOMAIN 0 ***
(XEN) [2014-01-25 03:41:53] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa28000
(XEN) [2014-01-25 03:41:53] elf_parse_binarymemory: 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: ENTRY = 0xffffffff81cd81e0
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-25 03:41:53] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-25 03:41:53]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53]     elf_paddr_offset = 0x0
(XEN) [2014-01-25 03:41:53]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-25 03:41:53]     virt_kend        = 0xffffffff823f7000
(XEN) [2014-01-25 03:41:53]     virt_entry       = 0xffffffff81cd81e0
(XEN) [2014-01-25 03:41:53]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-25 03:41:53]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-25 03:41:53]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 03:41:53] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 03:41:53]  Dom0 alloc.:   000000022c000000->0000000230000000 (487082 pages to be allocated)
(XEN) [2014-01-25 03:41:53]  Init. ramdisk: 000000023ac31000->000000023fd86dfa
(XEN) [2014-01-25 03:41:53] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 03:41:53]  Loaded kernel: ffffffff81000000->ffffffff823f7000
(XEN) [2014-01-25 03:41:53]  Init. ramdisk: ffffffff823f7000->ffffffff8754cdfa
(XEN) [2014-01-25 03:41:53]  Phys-Mach map: ffffffff8754d000->ffffffff8794d000
(XEN) [2014-01-25 03:41:53]  Start info:    ffffffff8794d000->ffffffff8794d4b4
(XEN) [2014-01-25 03:41:53]  Page tables:   ffffffff8794e000->ffffffff8798f000
(XEN) [2014-01-25 03:41:54]  Boot stack:    ffffffff8798f000->ffffffff87990000
(XEN) [2014-01-25 03:41:54]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-25 03:41:54]  ENTRY ADDRESS: ffffffff81cd81e0
(XEN) [2014-01-25 03:41:54] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a28000
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc20f0
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 2 at 0xffffffff81cc3000 -> 0xffffffff81cd7d80
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 3 at 0xffffffff81cd8000 -> 0xffffffff81e7b000
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:PCI: map 0000:00:16.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-25 03:41:..............................................done.
(XEN) [2014-01-25 03:41:55] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-25 03:41:55] Std. Loglevel: All
(XEN) [2014-01-25 03:41:55] Guest Loglevel: All
(XEN) [2014-01-25 03:41:55] **********************************************
(XEN) [2014-01-25 03:41:55] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-25 03:41:55] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-25 03:41:55] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-25 03:41:55] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-25 03:41:55] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-25 03:41:55] **********************************************
(XEN) [2014-01-25 03:41:55] 3... 2... 1...
(XEN) [2014-01-25 03:41:58] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-25 03:et started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing c.0upstream-03477-gdf32e43 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #5 SMP Fri Jan 24 12:22:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAA.hide=(05:00.*) kgdboc=hvc0
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] BRK [0x01ff2000, 0x01ff2fff] PGTABLE
[    0.000000] BRK [0x01ff3000, 0x01ff3fff] PGTABLE
[    0.000000] BRK [0x01ff4000, 0x01ff4fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f7000-0x0754cfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    5.511514] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.511515] Policy zone: DMA32
[    5.511516] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAA.hide=(05:00.*) kgdboc=hvc0
[    5.511829] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.511858] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.532322] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.535407] Memory: 1891592K/2097148K available (7058K kernel code, 773K rwdata, 2208K rodata, 1724K init, 1380K bss, 205556K reserved)
[    5.535637] Hierarchical RCU implementation.
[    5.535638] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.535638] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.535646] NR_IRQS:33024 nr_irqs:256 16
[    5.535725] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.535727] xen: registering gsi 9 triggering 0 polarity 0
[    5.535738] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.535760] xen: acpi sci 9
[    5.535763] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.535766] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.535769] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.535771] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.535774] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.535776] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.535779] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.535781] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.535784] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.535786] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.535789] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.535791] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.535794] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.535796] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.537360] Console: colour VGA+ 80x25
[    6.488462] console [hvc0] enabled
[    6.492388] Xen: using vcpuop timer interface
[    6.496738] installing Xen timer for CPU 0
[    6.500919] tsc: Detected 3400.106 MHz processor
[    6.505603] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.21 BogoMIPS (lpj=3400106)
[    6.516237] pid_max: default: 32768 minimum: 301
[    6.521075] Security Framework initialized
[    6.525165] SELinux:  Initializing.
[    6.528738] SELinux:  Starting in permissive mode
[    6.533811] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.541261] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.548425] Mount-cache hash table entries: 256
[    6.553402] Initializing cgroup subsys freezer
[    6.557907] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.557907] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.571011] CPU: Physical Processor ID: 0
[    6.575083] CPU: Processor Core ID: 0
[    6.579504] mce: CPU supports 2 MCE banks
[    6.583517] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.583517] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.583517] tlb_flushall_shift: 6
[    6.620866] Freeing SMP alternatives memory: 32K (ffffffff81e72000 - ffffffff81e7a000)
[    6.629543] ACPI: Core revision 2115
[    6.686527] ACPI: All ACPI Tables successfully acquired
[    6.693282] cpu 0 spinlock event irq 41
[    6.697164] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    6.708165] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.715635] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.721275] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.728720] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.735133] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.743366] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.749000] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.756452] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.762173] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.769711] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.774912] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.783784] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.791064] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.797477] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.805709] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.811180] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.818303] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.823096] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.829657] calling  init_workqueues+0x0/0x59a @ 1
[    6.834667] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.841270] calling  migration_init+0x0/0x72 @ 1
[    6.845949] initcall migration_init+0x0/0x72 returned 0 after 0 usecs
[    6.852448] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    6.857649] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    6.864668] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    6.870560] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    6.878274] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    6.883512] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    6.890497] calling  cpu_stop_init+0x0/0x76 @ 1
[    6.895111] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    6.901500] calling  relay_init+0x0/0x14 @ 1
[    6.905833] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    6.911986] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    6.917293] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    6.924378] calling  init_events+0x0/0x61 @ 1
[    6.928800] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    6.935038] calling  init_trace_printk+0x0/0x12 @ 1
[    6.939979] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    6.946738] calling  event_trace_memsetup+0x0/0x52 @ 1
[    6.951958] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    6.958957] calling  jump_label_init_module+0x0/0x12 @ 1
[    6.964331] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    6.971525] calling  balloon_clear+0x0/0x4f @ 1
[    6.976118] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    6.982531] calling  rand_initialize+0x0/0x30 @ 1
[    6.987319] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    6.993884] calling  mce_amd_init+0x0/0x165 @ 1
[    6.998476] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.004915] x86: Booted up 1 node, 1 CPUs
[    7.009659] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.016304] devtmpfs: initialized
[    7.022209] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.026557] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.032797] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.037822] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.044668] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.051343] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.059835] calling  net_ns_init+0x0/0x104 @ 1
[    7.064400] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.070683] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.075869] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.083764] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.091924] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.099128] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.103549] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.109787] calling  reboot_init+0x0/0x1d @ 1
[    7.114209] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.120447] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.125301] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.131974] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.137520] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.144887] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.149739] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.156412] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.161108] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.167854] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.174381] calling  ksysfs_init+0x0/0x94 @ 1
[    7.178844] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.185039] calling  pm_init+0x0/0x4e @ 1
[    7.189151] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.195004] calling  pm_disk_init+0x0/0x19 @ 1
[    7.199527] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.205838] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.210865] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.217710] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.223257] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.230622] calling  cgroup_wq_init+0x0/0x5c @ 1
[    7.235311] initcall cgroup_wq_init+0x0/0x5c returned 0 after 0 usecs
[    7.241803] calling  event_trace_enable+0x0/0x173 @ 1
[    7.247407] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.254267] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.258859] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.265273] calling  fsnotify_init+0x0/0x26 @ 1
[    7.269867] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.276277] calling  filelock_init+0x0/0x84 @ 1
[    7.280883] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.287284] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.292139] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.298811] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.303837] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.310685] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.315449] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.322037] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.327410] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.334603] calling  debugfs_init+0x0/0x5c @ 1
[    7.339121] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.345436] calling  securityfs_init+0x0/0x53 @ 1
[    7.350211] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.356789] calling  prandom_init+0x0/0xe2 @ 1
[    7.361295] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.367623] calling  virtio_init+0x0/0x30 @ 1
[    7.372147] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.378318] calling  __gnttab_init+0x0/0x30 @ 1
[    7.382912] xen:grant_table: Grant tables using version 2 layout
[    7.388993] Grant table initialized
[    7.392530] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.399203] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.404256] RTC time:  3:41:59, date: 01/25/14
[    7.408736] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.415755] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.420695] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.427628] calling  cpuidle_init+0x0/0x40 @ 1
[    7.432135] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.438634] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.443574] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.450333] calling  sock_init+0x0/0x8b @ 1
[    7.454685] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.460683] calling  net_inuse_init+0x0/0x26 @ 1
[    7.465365] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.471860] calling  netpoll_init+0x0/0x31 @ 1
[    7.476367] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.482694] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.487847] NET: Registered protocol family 16
[    7.492338] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.499433] calling  bdi_class_init+0x0/0x4d @ 1
[    7.504218] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.510649] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.515773] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.522693] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.527694] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.534392] calling  pci_driver_init+0x0/0x12 @ 1
[    7.539251] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.545769] calling  backlight_class_init+0x0/0x85 @ 1
[    7.551028] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.557991] calling  video_output_class_init+0x0/0x19 @ 1
[    7.563514] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.570728] calling  xenbus_init+0x0/0x26f @ 1
[    7.575328] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.581581] calling  tty_class_init+0x0/0x38 @ 1
[    7.586328] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.592760] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.598129] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.605075] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.610886] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.618509] calling  register_node_type+0x0/0x34 @ 1
[    7.623667] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.630441] calling  i2c_init+0x0/0x70 @ 1
[    7.634769] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.640676] calling  init_ladder+0x0/0x12 @ 1
[    7.645093] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.651506] calling  init_menu+0x0/0x12 @ 1
[    7.655754] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.661993] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.667019] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.673879] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.679431] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.686778] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.691922] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.698826] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.703333] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.709832] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.714599] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.721183] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.726385] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.733404] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.737997] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.745623] ACPI: bus type PCI registered
[    7.749696] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.756197] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.762870] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.767499] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.774205] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.780726] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.786110] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.793288] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.798837] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.806201] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.811834] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.819289] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.824733] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.831080] calling  dmi_id_init+0x0/0x31d @ 1
[    7.835833] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.842088] calling  dca_init+0x0/0x20 @ 1
[    7.846246] dca service started, version 1.12.1
[    7.850899] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    7.856996] calling  iommu_init+0x0/0x58 @ 1
[    7.861336] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    7.867481] calling  pci_arch_init+0x0/0x69 @ 1
[    7.872091] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    7.881435] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    7.896157] PCI: Using configuration type 1 for base access
[    7.901715] initcall pci_arch_init+0x0/0x69 returned 0 after    7.913398] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    7.919759] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    7.924853] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    7.931788] calling  init_vdso+0x0/0x135 @ 1
[    7.936121] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    7.942274] calling  sysenter_setup+0x0/0x2dd @ 1
[    7.947040] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    7.953626] calling  param_sysfs_init+0x0/0x194 @ 1
[    7.975052] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    7.982090] calling  pm_sysrq_init+0x0/0x19  7.993092] calling  default_bdi_init+0x0/0x65 @ 1
[    7.998253] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.004853] calling  init_bio+0x0/0xe9 @ 1
[    8.009069] bio: create slab <bio-0> at 0
[    8.013134] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.019241] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.023920] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.030419] calling  blk_settings_init+0x0/0x2c @ 1
[    8.035359] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.042118] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.046636] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.052952] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.057805] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.064477] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.069331] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.076003] calling  blk_mq_init+0x0/0x5f @ 1
[    8.080424] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.086665] calling  genhd_device_init+0x0/0x85 @ 1
[    8.091745] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.098435] calling  pci_slot_init+0x0/0x50 @ 1
[    8.103034] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.109437] calling  fbmem_init+0x0/0x98 @ 1
[    8.113843] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.119927] calling  acpi_init+0x0/0x27a @ 1
[    8.124283] ACPI: Added _OSI(Module Device)
[    8.128504] ACPI: Added _OSI(Processor Device)
[    8.133010] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.137774] ACPI: Added _OSI(Processor Aggregator Device)
[    8.147027] ACPI: Executed 1 blocks of module-level executable AML code
[    8.179322] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.187216] \_SB_:_OSC invalid UUID
[    8.190694] _Oata:1 1f 
[    8.196378] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.205616] ACPI: Dynamic OEM Table Load:
[    8.209610] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.219471] ACPI: Interpreter enabled
[    8.223140] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.232400] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.241683] ACPI: (supports S0 S1 S4 S5)
[    8.245654] ACPI: Using IOAPIC for interrupt routing
[    8.251056] HEST: Table parsing has been initialized.
[    8.256107] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.266510] ACPI: No dock devices found.
[    8.368976] ACPI: Power Resource [FN00] (off)
[    8.374125] ACPI: Power Resource [FN01] (off)
[    8.379306] ACPI: Power Resource [FN02] (off)
[    8.384446] ACPI: Power Resource [FN03] (off)
[    8.389594] ACPI: Power Resource [FN04] (off)
[    8.399292] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.405472] acpi PNP0A08:00: _OSC: OS supports [Exten
[    8.416241] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.425254] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.438579] PCI host bridge to bus 0000:00
[    8.442672] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.448218] p0-0x000d7fff]
[    8.474563] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.481496] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.488428] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.495363] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.502295] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.509241] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:0.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:00.0
[    8.526971] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.533130] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.539756] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:01.0
[    8.556754] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.562818] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1.1 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:01.1
[    8.580646] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.586663] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.593500] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.600776] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:2.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:02.0
[    8.618132] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.624151] pci 0000:00:03.0: reg 0x10: [mem 0xf1b34000-0xf1b37fff 64bit]
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:3.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:03.0
[    8.642734] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.648793] pci 0000:00:14.0: reg 0x10: [mem 0xf1b20000-0xf1b2ffff 64bit]
[    8.655724] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.661958] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:14.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:14.0
[    8.679051] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.685090] pci 0000:00:16.0: reg 0x10: [mem 0xf1b3f000-0xf1b3f00f 64bit]
[    8.692031] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:16.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:16.0
[    8.709907] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.715949] pci 0000:00:19.0: reg 0x10: [mem 0xf1b00000-0xf1b1ffff]
[    8.722242] pci 0000:00:19.0: reg 0x14: [mem 0xf1b3d000-0xf1b3dfff]
[    8.728569] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.734328] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.740823] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:19.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:19.0
[    8.757917] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.763959] pci 0000:00:1a.0: reg 0x10: [mem 0xf1b3c000-0xf1b3c3ff]
[    8.770411] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.776995] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1a.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1a.0
[    8.794132] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.800166] pci 0000:00:1b.0: reg 0x10: [mem 0xf1b30000-0xf1b33fff 64bit]
[    8.807131] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.813616] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1b.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1b.0
[    8.830699] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400=0D
[    8.836862] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold=0D
[    8.843355] pci 0000:00:1c.0: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.0 flags:0=0D
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.0=0D
[    8.860448] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400=0D
[    8.866613] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold=0D
[    8.873107] pci 0000:00:1c.3: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.3 flags:0=0D
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.3=0D
[    8.890194] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400=0D
[    8.896358] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold=0D
[    8.902851] pci 0000:00:1c.5: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.5 flags:0=0D
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.5=0D
[    8.919937] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400=0D
[    8.926101] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold=0D
[    8.932593] pci 0000:00:1c.6: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.6 flags:0=0D
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.6=0D
[    8.949670] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400=0D
[    8.955834] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold=0D
[    8.962332] pci 0000:00:1c.7: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.7 flags:0=0D
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.7=0D
[    8.979430] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320=0D
[    8.985470] pci 0000:00:1d.0: reg 0x10: [mem 0xf1b3b000-0xf1b3b3ff]=0D
[    8.991923] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold=0D
[    8.998505] pci 0000:00:1d.0: System wakeup disabled by ACPI=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1d.0 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1d.0=0D
[    9.015607] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.0 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.0=0D
[    9.033538] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601=0D
[    9.039575] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]=0D
[    9.045178] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]=0D
[    9.050811] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]=0D
[    9.056442] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]=0D
[    9.062077] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]=0D
[    9.067708] pci 0000:00:1f.2: reg 0x24: [mem 0xf1b3a000-0xf1b3a7ff]=0D
[    9.074118] pci 0000:00:1f.2: PME# supported from D3hot=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.2 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.2=0D
[    9.091125] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500=0D
[    9.097155] pci 0000:00:1f.3: reg 0x10: [mem 0xf1b39000-0xf1b390ff 64bit=
]=0D
[    9.104006] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.3 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.3=0D
[    9.121387] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000=0D
[    9.127429] pci 0000:00:1f.6: reg 0x10: [mem 0xf1b38000-0xf1b38fff 64bit=
]=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.6 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.6=0D
[    9.146352] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.157128] pci 0000:00:01.0: PCI bridge to [bus 01-ff]
[    9.162409] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    9.169273] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.180074] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000
[    9.186118] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]
[    9.192440] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]
[    9.198767] pci 0000:02:00.0: reg 0x18: [io  0xe020-0xe03f]
[    9.204401] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]
[    9.210745] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]
[    9.217536] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold
[    9.223664] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.230579] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 2:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:02:00.0
[    9.248930] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000
[    9.254942] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]
[    9.261262] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]
[    9.267589] pci 0000:02:00.1: reg 0x18: [io  0xe000-0xe01f]
[    9.273222] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]
[    9.279568] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]
[    9.286359] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
[    9.292483] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[    9.299398] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 2:0.1 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:02:00.1
[    9.319827] pci 0000:00:01.1: PCI bridge to [bus 02-ff]
[    9.325048] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[    9.331200] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[    9.338047] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03
[    9.345085] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.355901] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.361949] pci 0000:04:00.0: reg 0x10: [mem 0xf1aa0000-0xf1abffff]
[    9.368263] pci 0000:04:00.0: reg 0x14: [mem 0xf1a80000-0xf1a9ffff]
[    9.374589] pci 0000:04:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.380305] pci 0000:04:00.0: reg 0x30: [mem 0xf1a60000-0xf1a7ffff pref]
[    9.387132] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.393360] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 4:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:04:00.0
[    9.410427] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.416462] pci 0000:04:00.1: reg 0x10: [mem 0xf1a40000-0xf1a5ffff]
[    9.422774] pci 0000:04:00.1: reg 0x14: [mem 0xf1a20000-0xf1a3ffff]
[    9.429100] pci 0000:04:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.434819] pci 0000:04:00.1: reg 0x30: [mem 0xf1a00000-0xf1a1ffff pref]
[    9.441643] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 4:0.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:04:00.1
[    9.467789] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.473010] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[    9.479161] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.486015] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.493043] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.503865] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.509908] pci 0000:05:00.0: reg 0x10: [mem 0xf1900000-0xf197ffff]
[    9.516246] pci 0000:05:00.0: reg 0x18: [io  0xc000-0xc01f]
[    9.521858] pci 0000:05:00.0: reg 0x1c: [mem 0xf1980000-0xf1983fff]
[    9.528361] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.534597] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 5:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:05:00.0
[    9.553743] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.558966] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[    9.565115] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[    9.571966] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.579040] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.589864] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.596108] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.602879] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 6:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:06:00.0
[    9.619891] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.625122] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[    9.631984] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.640473] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.646667] pci 0000:07:01.0: supports D1 D2
[    9.650926] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 7:1.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:07:01.0
[    9.668875] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.674912] pci 0000:07:03.0: reg 0x10: [mem 0xf1604000-0xf16047ff]
[    9.681221] pci 0000:07:03.0: reg 0x14: [mem 0xf1600000-0xf1603fff]
[    9.687704] pci 0000:07:03.0: supports D1 D2
[    9.691961] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 7:3.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:07:03.0
[    9.716013] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.723066] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[    9.729909] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.738478] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff] (subtractive decode)
[    9.747143] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.755722] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.764303] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.772694] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.778825] pci 0000:08:08.0: reg 0x10: [mem 0xf1507000-0xf1507fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:8.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:08.0
[    9.803612] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.809661] pci 0000:08:08.1: reg 0x10: [mem 0xf1506000-0xf1506fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:8.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:08.1
[    9.834476] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.840523] pci 0000:08:09.0: reg 0x10: [mem 0xf1505000-0xf1505fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:9.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:09.0
[    9.865321] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.871371] pci 0000:08:09.1: reg 0x10: [mem 0xf1504000-0xf1504fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:9.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:09.1
[    9.896201] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.902255] pci 0000:08:0a.0: reg 0x10: [mem 0xf1503000-0xf1503fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:a.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0a.0
[    9.927051] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.933101] pci 0000:08:0a.1: reg 0x10: [mem 0xf1502000-0xf1502fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:a.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0a.1
[    9.957922] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.963976] pci 0000:08:0b.0: reg 0x10: [mem 0xf1501000-0xf1501fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:b.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0b.0
[    9.988773] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.994823] pci 0000:08:0b.1: reg 0x10: [mem 0xf1500000-0xf1500fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:b.1 flags:0
(XEN) [2014-01-25 03:42:03] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-25 03:42:03] PCI add device 0000:08:0b.1
[   10.019675] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.024903] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   10.031739] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.038412] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.045083] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.052117] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.063011] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.069124] pci 0000:09:00.0: reg 0x10: [mem 0xf1800000-0xf1801fff 64bit]
[   10.076291] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.082582] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:03] PHYSDEVOP_pci_device_add of 9:0.0 flags:0
(XEN) [2014-01-25 03:42:03] PCI add device 0000:09:00.0
[   10.101766] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.106995] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   10.113841] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.120878] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.131683] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.137730] pci 0000:0a:00.0: reg 0x10: [io  0xb050-0xb057]
[   10.143357] pci 0000:0a:00.0: reg 0x14: [io  0xb040-0xb043]
[   10.148988] pci 0000:0a:00.0: reg 0x18: [io  0xb030-0xb037]
[   10.154624] pci 0000:0a:00.0: reg 0x1c: [io  0xb020-0xb023]
[   10.160255] pci 0000:0a:00.0: reg 0x20: [io  0xb000-0xb01f]
[   10.165889] pci 0000:0a:00.0: reg 0x24: [mem 0xf1700000-0xf17001ff]
[   10.172424] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:03] PHYSDEVOP_pci_device_add of a:0.0 flags:0
(XEN) [2014-01-25 03:42:03] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-25 03:42:03] PCI add device 0000:0a:00.0
[   10.198063] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.203284] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   10.209434] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   10.216286] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.223047] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.234866] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.242188] ACPI: PCI Interrupt Link [LNKB] ( PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.279883] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.287196] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.295636] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.300430] ACPI: \_SB_.PCI0: notify handler is installed
[   10.305914] Found 1 acpi root devices
[   10.309716] initcall acpi_init+0x0/0x27a returned 0 after 443359 usecs
[   10.316232] calling  pnp_init+0x0/0x12 @ 1
[   10.320575] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.326492] calling  balloon_init+0x0/0x242 @ 1
[   10.331082] xen:balloon: Initialising balloon driver
[   10.336108] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.342694] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.348240] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.355605] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.361335] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.368723] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.374558] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.382022] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.387039] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.393730] calling  balloon_init+0x0/0xfa @ 1
[   10.398234] xen_balloon: Initialising balloon driver
[   10.403540] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.409974] calling  misc_init+0x0/0xba @ 1
[   10.414312] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.420314] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.425575] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.433649] vgaarb: loaded
[   10.436418] vgaarb: bridge control possible 0000:00:02.0
[   10.441792] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.448986] calling  cn_init+0x0/0xc0 @ 1
[   10.453078] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.458952] calling  dma_buf_init+0x0/0x75 @ 1
[   10.463471] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.469785] calling  phy_init+0x0/0x2e @ 1
[   10.474166] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.480076] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.484819] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.491255] calling  usb_init+0x0/0x169 @ 1
[   10.495514] ACPI: bus type USB registered
[   10.499784] usbcore: registered new interface driver usbfs
[   10.505366] usbcore: registered new interface driver hub
[   10.510778] usbcore: registered new device driver usb
[   10.515825] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.522149] calling  serio_init+0x0/0x31 @ 1
[   10.526576] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.532663] calling  input_init+0x0/0x103 @ 1
[   10.537149] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.543323] calling  rtc_init+0x0/0x5b @ 1
[   10.547548] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.553462] calling  pps_init+0x0/0xb7 @ 1
[   10.557683] pps_core: LinuxPPS API ver. 1 registered
[   10.562649] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.571834] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.578073] calling  ptp_init+0x0/0xa4 @ 1
[   10.582293] PTP clock support registered
[   10.586220] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.592373] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.597892] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.605116] calling  hwmon_init+0x0/0xe3 @ 1
[   10.609509] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.615601] calling  leds_init+0x0/0x40 @ 1
[   10.619908] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.625913] calling  efisubsys_init+0x0/0x142 @ 1
[   10.630680] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.637264] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.642029] PCI: Using ACPI for IRQ routing
[   10.649715] PCI: pci_cache_line_size set to 64 bytes
[   10.654869] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.660865] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   10.666931] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   10.673776] calling  proto_init+0x0/0x12 @ 1
[   10.678114] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.684261] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.689488] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.695834] calling  neigh_init+0x0/0x80 @ 1
[   10.700165] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.706316] calling  fib_rules_init+0x0/0xaf @ 1
[   10.710997] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.717496] calling  pktsched_init+0x0/0x10a @ 1
[   10.722182] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.728676] calling  tc_filter_init+0x0/0x55 @ 1
[   10.733356] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.739856] calling  tc_action_init+0x0/0x55 @ 1
[   10.744535] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.751037] calling  genl_init+0x0/0x85 @ 1
[   10.755298] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.761349] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.765942] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.772355] calling  netlbl_init+0x0/0x81 @ 1
[   10.776774] NetLabel: Initializing
[   10.780275] NetLabel:  domain hash size = 128
[   10.784693] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.789759] NetLabel:  unlabeled traffic allowed by default
[   10.795353] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.801854] calling  rfkill_init+0x0/0x79 @ 1
[   10.806450] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.812623] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.817213] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.823642] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.828407] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.834977] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.840312] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.847370] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.852489] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.859415] calling  hpet_late_init+0x0/0x101 @ 1
[   10.864182] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.870942] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.875452] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.881775] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.887329] Switched to clocksource xen
[   10.891229] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.898851] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.904344] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 287 usecs
[   10.911467] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.917800] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.925939] calling  event_trace_init+0x0/0x205 @ 1
[   10.945703] initcall event_trace_init+0x0/0x205 returned 0 after 14473 usecs
[   10.952737] calling  init_kprobe_trace+0x0/0x93 @ 1
[   10.957687] initcall init_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.964524] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.969069] initcall init_pipe_fs+0x0/0x4c returned 0 after 39 usecs
[   10.975442] calling  eventpoll_init+0x0/0xda @ 1
[   10.980152] initcall eventpoll_init+0x0/0xda returned 0 after 29 usecs
[   10.986709] calling  anon_inode_init+0x0/0x5b @ 1
[   10.991508] initcall anon_inode_init+0x0/0x5b returned 0 after 34 usecs
[   10.998148] calling  init_ramfs_fs+0x0/0x4d @ 1
[   11.002749] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   11.009154] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.014354] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.021373] calling  acpi_event_init+0x0/0x3a @ 1
[   11.026159] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   11.032813] calling  pnp_system_init+0x0/0x12 @ 1
[   11.037677] initcall pnp_system_init+0x0/0x12 returned 0 after 94 usecs
[   11.044295] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.048785] pnp: PnP ACPI init
[   11.051928] ACPI: bus type PNP registered
[   11.056303] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.062906] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.069797] pnp 00:01: [dma 4]
[   11.073013] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.079701] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.086773] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.094311] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.100228] system 00:04: [io  0xffff] has been reserved
[   11.105600] system 00:04: [io  0xffff] has been reserved
[   11.110972] system 00:04: [io  0xffff] has been reserved
[   11.116348] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.122324] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.128304] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.134285] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.140267] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.146244] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.152570] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.158545] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.165426] xen: registering gsi 8 triggering 1 polarity 0
[   11.171110] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.177953] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.183863] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.192198] kworker/u2:0 (517) used greatest stack depth: 5560 bytes left
[   11.199015] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.204963] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.210936] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.219186] xen: registering gsi 4 triggering 1 polarity 0
[   11.224662] Already setup the GSI :4
[   11.228306] pnp 00:08: [dma 0 disabled]
[   11.232467] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.240187] xen: registering gsi 3 triggering 1 polarity 0
[   11.245682] pnp 00:09: [dma 0 disabled]
[   11.249768] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.256613] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.262519] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.269396] xen: registering gsi 13 triggering 1 polarity 0
[   11.275208] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.284867] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.291482] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.298152] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.304822] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.311495] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.318168] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.324842] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.331514] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.338187] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.344860] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.351534] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.358207] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.364875] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.373788] pnp: PnP ACPI: found 13 devices
[   11.377963] ACPI: bus type PNP unregistered
[   11.382208] initcall pnpacpi_init+0x0/0x8c returned 0 after 325606 usecs
[   11.388968] calling  pcistub_init+0x0/0x29f @ 1
[   11.394231] initcall pcistub_init+0x0/0x29f returned 0 after 654 usecs
[   11.400756] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.414450] initcall chr_dev_init+0x0/0xc6 returned 0 after 8981 usecs
[   11.420967] calling  firmware_class_init+0x0/0xec   11.433117] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.438023] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 139 usecs
[   11.444715] calling  thermal_init+0x0/0x8b @ 1
[   11.449297] initcall thermal_init+0x0/0x8b returned 0 after 76 usecs
[   11.455642] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.461532] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.469418] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.478109] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.484543] initcall init_acpi_pm_clocksource+0x0/439] calling  pcibios_=
assign_resources+0x0/0xbd @ 1=0D
[   11.497997] pci 0000:00:01.0: PCI bridge to [bus 01]=0D
[   11.502969] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.509893] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.516827] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.523756] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.530691] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.537623] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.544557] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.551489] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.558423] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.565355] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.572289] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.579212] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff=
 64bit]=0D
[   11.586594] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.593512] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487ff=
f 64bit]=0D
[   11.600981] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.607898] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff=
 64bit]=0D
[   11.615281] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[   11.622197] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7ff=
f 64bit]=0D
[   11.629657] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.634939] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[   11.641093] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.647941] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.652965] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[   11.659121] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.665973] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.670991] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   11.677149] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   11.684000] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.689025] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   11.695880] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.701155] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   11.708010] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.713286] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   11.720139] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.725159] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   11.732013] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.737030] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   11.743186] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   11.750038] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.755661] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.761294] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.767619] pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff]
[   11.773944] pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff]
[   11.780329] pci_bus 0000:00: resource 9 [mem 0x000dc000-0x000dffff]
[   11.786630] pci_bus 0000:00: resource 10 [mem 0x000e0000-0x000e3fff]
[   11.793042] pci_bus 0000:00: resource 11 [mem 0x000e4000-0x000e7fff]
[   11.799456] pci_bus 0000:00: resource 12 [mem 0xbe200000-0xfeafffff]
[   11.805870] pci_bus 0000:02: resource 0 [io  0xe000-0xefff]
[   11.811504] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.817831] pci_bus 0000:04: resource 0 [io  0xd000-0xdfff]
[   11.823461] pci_bus 0000:04: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.829789] pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
[   11.835422] pci_bus 0000:05: resource 1 [mem 0xf1900000-0xf19fffff]
[   11.841749] pci_bus 0000:06: resource 1 [mem 0xf1500000-0xf16fffff]
[   11.848075] pci_bus 0000:07: resource 1 [mem 0xf1500000-0xf16fffff]
[   11.854401] pci_bus 0000:07: resource 5 [mem 0xf1500000-0xf16fffff]
[   11.860727] pci_bus 0000:08: resource 1 [mem 0xf1500000-0xf15fffff]
[   11.867056] pci_bus 0000:09: resource 1 [mem 0xf1800000-0xf18fffff]
[   11.873381] pci_bus 0000:0a: resource 0 [io  0xb000-0xbfff]
[   11.879013] pci_bus 0000:0a: resource 1 [mem 0xf1700000-0xf17fffff]
[   11.885342] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 378372 usecs
[   11.893141] calling  sysctl_core_init+0x0/0x2c @ 1
[   11.898009] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs
[   11.904756] calling  inet_init+0x0/0x296 @ 1
[   11.909155] NET: Registered protocol family 2
[   11.913823] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[   11.921078] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[   11.927716] TCP: Hash tables configured (established 16384 bind 16384)
[   11.934309] TCP: reno registered
[   11.937595] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[   11.943660] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[   11.950273] initcall inet_init+0x0/0x296 returned 0 after 40221 usecs
[   11.956702] calling  ipv4_offload_init+0x0/0x61 @ 1
[   11.961639] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs
[   11.968399] calling  af_unix_init+0x0/0x55 @ 1
[   11.972921] NET: Registered protocol family 1
[   11.977341] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs
[   11.983913] calling  ipv6_offload_init+0x0/0x7f @ 1
[   11.988853] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs
[   11.995612] calling  init_sunrpc+0x0/0x69 @ 1
[   12.000231] RPC: Registered named UNIX socket transport module.
[   12.006147] RPC: Registered udp transport module.
[   12.010909] RPC: Registered tcp transport module.
[   12.015676] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   12.022175] initcall init_sunrpc+0x0/0x69 returned 0 after 21623 usecs
[   12.028761] calling  pci_apply_final_quirks+0x0/0x117 @ 1
[   12.034229] pci 0000:00:02.0: Boot video device
[   12.039317] xen: registering gsi 16 triggering 0 polarity 1
[   12.044894] xen: --> pirq=16 -> irq=16 (gsi=16)
[   12.049537] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, defaulting to EHCI.
[   12.057275] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speeds.
[   12.065195] xen: registering gsi 16 triggering 0 polarity 1
[   12.070759] Already setup the GSI :16
[   12.090388] xen: registering gsi 23 triggering 0 polarity 1
[   12.095965] xen: --> pirq=23 -> irq=23 (gsi=23)
[   12.116471] xen: registering gsi 18 triggering 0 polarity 1
[   12.122045] xen: --> pirq=18 -> irq=18 (gsi=18)
[   12.126.153014] Unpacking initramfs...
[   13.250266] Freeing initrd memory: 83288K (ffff8800023f7000 - ffff88000754d000)
[   13.257576] initcall populate_rootfs+0x0/0x112 returned 0 after 1078783 usecs
[   13.264762] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.269442] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.275940] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.281573] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.289219] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.295180] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.302978] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.307745] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.314332] calling  vsyscall_init+0x0/0x27 @ 1
[   13.318930] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.325339] calling  sbf_init+0x0/0xf6 @ 1
[   13.329498] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.335478] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.340678] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   13.347697] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.352206] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.358530] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.363297] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.369884] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.374987] initcall cache_sysfs_init+0x0/0x65 returned 0 after 244 usecs
[   13.381768] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.386620] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.393465] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.398494] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.405512] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.410538] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.417558] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.422254] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.432811] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10325 usecs
[   13.439660] calling  inject_init+0x0/0x30 @ 1
[   13.444075] Machine check injector initialized
[   13.448584] initcall inject_init+0x0/0x30 returned 0 after 4402 usecs
[   13.455082] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.460975] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.468689] calling  microcode_init+0x0/0x1b1 @ 1
[   13.473648] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.479774] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.488550] initcall microcode_init+0x0/0x1b1 returned 0 after 14739 usecs
[   13.495482] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.500069] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.506656] calling  msr_init+0x0/0x162 @ 1
[   13.511125] initcall msr_init+0x0/0x162 returned 0 after 216 usecs
[   13.517296] calling  cpuid_init+0x0/0x162 @ 1
[   13.521909] initcall cpuid_init+0x0/0x162 returned 0 after 195 usecs
[   13.528255] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.533019] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.539607] calling  add_pcspkr+0x0/0x40 @ 1
[   13.544043] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.550298] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.556796] Scanning for low memory corruption every 60 seconds
[   13.562772] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5835 usecs
[   13.571350] calling  sysfb_init+0x0/0x9c @ 1
[   13.575796] initcall sysfb_init+0x0/0x9c returned 0 after 109 usecs
[   13.582060] calling  audit_classes_init+0x0/0xaf @ 1
[   13.587097] initcall audit_classes_init+0x0/0xaf returned 0 after 13 usecs
[   13.594016] calling  pt_dump_init+0x0/0x30 @ 1
[   13.598532] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.604850] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.609709] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 6 usecs
[   13.616374] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.621667] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.628766] calling  ioresources_init+0x0/0x3c @ 1
[   13.633626] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.640294] calling  uid_cache_init+0x0/0x85 @ 1
[   13.644989] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.651561] calling  init_posix_timers+0x0/0x240 @ 1
[   13.656598] initcall init_posix_timers+0x0/0x240 returned 0 after 12 usecs
[   13.663518] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.668806] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.675911] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.681029] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.687957] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.693282] initcall snapshot_device_init+0x0/0x12 returned 0 after 120 usecs
[   13.700404] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.705169] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.711757] calling  create_proc_profile+0x0/0x300 @ 1
[   13.716957] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.723975] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.729175] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.736196] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.741786] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 212 usecs
[   13.749086] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.754464] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.761650] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.766699] initcall alarmtimer_init+0x0/0x15f returned 0 after 191 usecs
[   13.773473] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.779148] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 295 usecs
[   13.786477] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.791507] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.798349] calling  futex_init+0x0/0xf6 @ 1
[   13.802699] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.808840] initcall futex_init+0x0/0xf6 returned 0 after 6012 usecs
[   13.815249] calling  proc_dma_init+0x0/0x22 @ 1
[   13.819848] initcall proc_dma_init+0x0/0x22 returned 0 after 4 usecs
[   13.826253] calling  proc_modules_init+0x0/0x22 @ 1
[   13.831197] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.837954] calling  kallsyms_init+0x0/0x25 @ 1
[   13.842548] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.848961] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.854775] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 9 usecs
[   13.862393] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.867855] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.875132] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.880258] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.887266] calling  ikconfig_init+0x0/0x3c @ 1
[   13.891862] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   13.898271] calling  audit_init+0x0/0x141 @ 1
[   13.902691] audit: initializing netlink socket (disabled)
[   13.908177] type=2000 audit(1390621323.439:1): initialized
[   13.913700] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.920283] calling  audit_watch_init+0x0/0x3a @ 1
[   13.925138] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.931810] calling  audit_tree_init+0x0/0x49 @ 1
[   13.936579] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.943162] calling  init_kprobes+0x0/0x16c @ 1
[   13.958206] initcall init_kprobes+0x0/0x16c returned 0 after 10204 usecs
[   13.964891] calling  hung_task_init+0x0/0x56 @ 3.993058] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.999730] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.004498] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.011083] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.016801] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.024341] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.030158] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 516 usecs
[   14.037374] calling  init_per_zone_wmark_min+0x0/0xa9 @ 1
[   14.042841] initcall init_per_zone_wmark_min+0x0/0xa9 returned 0 after 11 usecs
[   14.050196] calling  kswapd_init+0x0/0x76 @ 1
[   14.054667] initcall kswapd_init+0x0/0x76 returned 0 after 51 usecs
[   14.060941] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.065985] initcall extfrag_debug_init+0x0/0x7e returned 0 after 19 usecs
[   14.072899] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.077421] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.083818] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.088422] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.094912] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.100199] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.107305] calling  slab_proc_init+0x0/0x25 @ 1
[   14.111990] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.118486] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.123771] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.130877] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.135903] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.142749] calling  init_user_reserve+0x0/0x40 @ 1
[   14.147690] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.154451] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.159394] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 4 usecs
[   14.166149] calling  procswaps_init+0x0/0x22 @ 1
[   14.170832] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.177329] calling  init_frontswap+0x0/0x96 @ 1
[   14.182038] initcall init_frontswap+0x0/0x96 returned 0 after 28 usecs
[   14.188597] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.193189] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.199693] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6351 usecs
[   14.206291] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.211236] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.217993] calling  slab_proc_init+0x0/0x8 @ 1
[   14.222585] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.228998] calling  cpucache_init+0x0/0x4b @ 1
[   14.233592] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.240004] calling  hugepage_init+0x0/0x145 @ 1
[   14.244685] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.251356] calling  init_cleancache+0x0/0xbc @ 1
[   14.256151] initcall init_cleancache+0x0/0xbc returned 0 after 27 usecs
[   14.262796] calling  fcntl_init+0x0/0x2a @ 1
[   14.267141] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.273371] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.278661] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.285763] calling  dio_init+0x0/0x2d @ 1
[   14.289935] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.295990] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.301042] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 25 usecs
[   14.307953] calling  dnotify_init+0x0/0x7b @ 1
[   14.312480] initcall dnotify_init+0x0/0x7b returned 0 after 21 usecs
[   14.318870] calling  inotify_user_setup+0x0/0x4b @ 1
[   14.323912] initcall inotify_user_setup+0x0/0x4b returned 0 after 12 usecs
[   14.330832] calling  aio_setup+0x0/0x7d @ 1
[   14.335131] initcall aio_setup+0x0/0x7d returned 0 after 52 usecs
[   14.341231] calling  proc_locks_init+0x0/0x22 @ 1
[   14.346001] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   14.352582] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.357479] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   14.364197] calling  dquot_init+0x0/0x121 @ 1
[   14.368616] VFS: Disk quotas dquot_6.5.2
[   14.372634] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.379103] initcall dquot_init+0x0/0x121 returned 0 after 10240 usecs
[   14.385687] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.390886] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.397906] calling  quota_init+0x0/0x31 @ 1
[   14.402259] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.408479] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.413422] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.420180] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.425209] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.432051] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.436994] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.443752] calling  proc_devices_init+0x0/0x22 @ 1
[   14.448694] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.455451] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.460654] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.467670] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.472615] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.479372] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.484314] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.491069] calling  proc_stat_init+0x0/0x22 @ 1
[   14.495753] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.502249] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.507105] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.513776] calling  proc_version_init+0x0/0x22 @ 1
[   14.518718] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.525476] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.530505] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.537348] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.542125] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.548789] calling  vmcore_init+0x0/0x5cb @ 1
[   14.553294] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.559620] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.564304] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   14.570800] calling  proc_page_init+0x0/0x42 @ 1
[   14.575487] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.581981] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.586705] initcall init_devpts_fs+0x0/0x62 returned 0 after 43 usecs
[   14.593246] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.598346] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 72 usecs
[   14.605207] calling  init_fat_fs+0x0/0x4f @ 1
[   14.609646] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.615953] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.620459] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.626785] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.631380] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.637791] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.642581] initcall init_iso9660_fs+0x0/0x70 returned 0 after 23 usecs
[   14.649233] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.653933] initcall init_nfs_fs+0x0/0x16c returned 0 after 190 usecs
[   14.660363] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.664781] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.671021] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.675439] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.681679] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.686098] NFS: Registering the id_resolver key type
[   14.691225] Key type id_resolver registered
[   14.695457] Key type id_legacy registered
[   14.699535] initcall init_nfs_v4+0x0/0x3b returned 0 after 13121 usecs
[   14.706118] calling  init_nlm+0x0/0x4c @ 1
[   14.710286] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.716257] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.720937] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.727437] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.732117] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.738617] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.743642] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.750489] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.755082] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.761496] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.766090] NTFS driver 2.1.30 [Flags: R/W].
[   14.770476] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4282 usecs
[   14.777097] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.782028] initcall init_autofs4_fs+0x0/0x2a returned 0 after 129 usecs
[   14.788725] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.793406] initcall init_pstore_fs+0x0/0x53 returned 0 after 11 usecs
[   14.799982] calling  ipc_init+0x0/0x2f @ 1
[   14.804149] msgmni has been set to 3857
[   14.808051] initcall ipc_init+0x0/0x2f returned 0 after 3817 usecs
[   14.814281] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.819054] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.825633] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.830373] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 57 usecs
[   14.836900] calling  key_proc_init+0x0/0x5e @ 1
[   14.841500] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.847906] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.852932] SELinux:  Registering netfilter hooks
[   14.857834] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4785 usecs
[   14.864867] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.869649] initcall init_sel_fs+0x0/0xa5 returned 0 after 353 usecs
[   14.875986] calling  selnl_init+0x0/0x56 @ 1
[   14.880333] initcall selnl_init+0x0/0x56 returned 0 after 15 usecs
[   14.886559] calling  sel_netif_init+0x0/0x5c @ 1
[   14.891241] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.897738] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.902593] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.909265] calling  sel_netport_init+0x0/0x6a @ 1
[   14.914122] initcall sel_netport_init+0x0/0x6a returned 0 after 2 usecs
[   14.920791] calling  aurule_init+0x0/0x2d @ 1
[   14.925212] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.931449] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.936159] initcall crypto_wq_init+0x0/0x33 returned 0 after 29 usecs
[   14.942718] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.947661] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.954416] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.959529] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.966463] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.971490] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.978336] calling  hmac_module_init+0x0/0x12 @ 1
[   14.983189] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.989862] calling  md5_mod_init+0x0/0x12 @ 1
[   14.994400] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   15.000782] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   15.006096] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.013261] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   15.018634] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.025827] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.031078] initcall des_generic_mod_init+0x0/0x17 returned 0 after 50 usecs
[   15.038135] calling  aes_init+0x0/0x12 @ 1
[   15.042321] initcall aes_init+0x0/0x12 returned 0 after 27 usecs
[   15.048362] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.052981] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.059454] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.065177] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.072716] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.078780] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.086666] calling  krng_mod_init+0x0/0x12 @ 1
[   15.091288] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.097762] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.102537] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.109113] calling  bsg_init+0x0/0x12e @ 1
[   15.113436] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.120822] initcall bsg_init+0x0/0x12e returned 0 after 7288 usecs
[   15.127146] calling  noop_init+0x0/0x12 @ 1
[   15.131394] io scheduler noop registered
[   15.135379] initcall noop_init+0x0/0x12 returned 0 after 3890 usecs
[   15.141705] calling  deadline_init+0x0/0x12 @ 1
[   15.146298] io scheduler deadline registered
[   15.150632] initcall deadline_init+0x0/0x12 returned 0 after 4231 usecs
[   15.157306] calling  cfq_init+0x0/0x8b @ 1
[   15.161486] io scheduler cfq registered (default)
[   15.166232] initcall cfq_init+0x0/0x8b returned 0 after 4655 usecs
[   15.172471] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.177845] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.185038] calling  pci_proc_init+0x0/0x6a @ 1
[   15.189814] initcall pci_proc_init+0x0/0x6a returned 0 after 179 usecs
[   15.196330] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.202004] xen: registering gsi 16 triggering 0 polarity 1
[   15.207567] Already setup the GSI :16
[   15.212106] xen: registering gsi 16 triggering 0 polarity 1
[   15.217670] Already setup the GSI :16
[   15.222185] xen: registering gsi 16 triggering 0 polarity 1
[   15.227750] Already setup the GSI :16
[   15.232119] xen: registering gsi 19 triggering 0 polarity 1
[   15.237694] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.242940] xen: registering gsi 17 triggering 0 polarity 1
[   15.248516] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.253843] xen: registering gsi 19 triggering 0 polarity 1
[   15.259406] Already setup the GSI :19
[   15.263326] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60600 usecs
[   15.270362] calling  aer_service_init+0x0/0x2b @ 1=0D
[   15.275286] initcall aer_service_init+0x0/0x2b returned 0 after 72 usecs=
=0D
[   15.281972] calling  pci_hotplug_init+0x0/0x1d @ 1=0D
[   15.286826] pci_hotplug: PCI Hot Plug PCI Core version: 0.5=0D
[   15.292458] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5499 use=
cs=0D
[   15.299392] calling  pcied_init+0x0/0x79 @ 1=0D
[   15.303927] pciehp: PCI Express Hot Plug Controller Driver version: 0.4=
=0D
[   15.310533] initcall pcied_init+0x0/0x79 returned 0 after 6648 usecs=0D
[   15.316944] calling  pcifront_init+0x0/0x3f @ 1=0D
[   15.321533] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs=0D
[   15.328120] calling  genericbl_driver_init+0x0/0x14 @ 1=0D
[   15.333520] initcall genericbl_driver_init+0x0/0x14 returned 0 after 109=
 usecs=0D
[   15.340731] calling  cirrusfb_init+0x0/0xcc @ 1=0D
[   15.345415] initcall cirrusfb_init+0x0/0xcc returned 0 after 89 usecs=0D
[   15.351841] calling  efifb_driver_init+0x0/0x14 @ 1=0D
[   15.356856] initcall efifb_driver_init+0x0/0x14 returned 0 after 72 usec=
s=0D
[   15.363630] calling  intel_idle_init+0x0/0x331 @ 1=0D
[   15.368482] intel_idle: MWAIT substates: 0x42120=0D
[   15.373163] intel_idle: v0.4 model 0x3C=0D
[   15.377060] intel_idle: lapic_timer_reliable_states 0xffffffff=0D
[   15.382958] intel_idle: intel_idle yielding to none=0D
[   15.387632] initcall intel_idle_init+0x0/0x331 returned -19 after 18700 =
usecs=0D
[   15.395086] calling  acpi_reserve_resources+0x0/0xeb @ 1=0D
[   15.400467] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 8 =
usecs=0D
[   15.407653] calling  acpi_ac_init+0x0/0x2a @ 1=0D
[   15.412233] initcall acpi_ac_init+0x0/0x2a returned 0 after 73 usecs=0D
[   15.418582] calling  acpi_button_driver_init+0x0/0x12 @ 1=0D
[   15.424317] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0=
C:00/input/input0=0D
[   15.432487] ACPI: Power Button [PWRB]=0D
[   15.436478] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/inpu=
t/input1=0D
[   15.443865] ACPI: Power Button [PWRF]=0D
[   15.447660] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 2=
3074 usecs=0D
[   15.455214] calling  acpi_fan_driver_init+0x0/0x12 @ 1=0D
[   15.460653] ACPI: Fan [FAN0] (off)=0D
[   15.464281] ACPI: Fan [FAN1] (off)=0D
[   15.467894] ACPI: Fan [FAN2] (off)=0D
[   15.471509] ACPI: Fan [FAN3] (off)=0D
[   15.475111] ACPI: Fan [FAN4] (off)=0D
[   15.478582] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 1774=
2 usecs=0D
[   15.485881] calling  acpi_processor_driver_init+0x0/0x43 @ 1=0D
[   15.504111] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (=
20131115/psargs-359)=0D
[   15.512533] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] =
(Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)=0D
[   15.528192] Monitor-Mwait will be used to enter C-1 state=0D
[   15.533579] Monitor-Mwait will be used to enter C-2 state=0D
[   15.539239] Warning: Processor Platform Limit not supported.=0D
[   15.544887] initcall acpi_processor_driver_init+0x0/0x43 returned 0 afte=
r 52040 usecs=0D
[   15.552770] calling  acpi_thermal_init+0x0/0x42 @ 1=0D
[   15.560984] thermal LNXTHERM:00: registered as thermal_zone0=0D
[   15.566629] ACPI: Thermal Zone [TZ00] (28 C)=0D
[   15.573132] thermal LNXTHERM:01: registered as thermal_zone1=0D
[   15.578785] ACPI: Thermal Zone [TZ01] (30 C)=0D
[   15.58345] calling  acpi_battery_init+0x0/0x16 @ 1
[   15.595431] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs
[   15.602188] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.607435] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.613155] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5630 usecs
[   15.620360] calling  erst_init+0x0/0x2fc @ 1
[   15.624736] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.632240] pstore: Registered erst as persistent store backend
[   15.638212] initcall erst_init+0x0/0x2fc returned 0 after 13202 usecs
[   15.644713] calling  ghes_init+0x0/0x173 @ 1
[   15.649195] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35327 usecs
[   15.657638] \_SB_:_OSC request failed
[   15.661294] _OSC request data:1 1 0 
[   15.664932] \_SB_:_OSC invalid UUID
[   15.668484] _OSC request data:1 1 0 
[   15.672121] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.678364] initcall ghes_init+0x0/0x173 returned 0 after 28630 usecs
[   15.684863] calling  einj_init+0x0/0x522 @ 1
[   15.689259] EINJ: Error INJection is initialized.
[   15.693964] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs
[   15.700377] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.705229] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.711278] initcall ioat_init_module+0x0/0xb1 returned 0 after 5906 usecs
[   15.718161] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.723070] initcall virtio_mmio_init+0x0/0x14 returned 0 after 72 usecs
[   15.729756] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.735547] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 69 usecs
[   15.743105] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.748388] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.755494] calling  xenbus_init+0x0/0x3d @ 1
[   15.760053] initcall xenbus_init+0x0/0x3d returned 0 after 134 usecs
[   15.766398] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.771634] initcall xenbus_backend_init+0x0/0x51 returned 0 after 121 usecs
[   15.778671] calling  gntdev_init+0x0/0x4d @ 1
[   15.783281] initcall gntdev_init+0x0/0x4d returned 0 after 155 usecs
[   15.789626] calling  gntalloc_init+0x0/0x3d @ 1
[   15.794346] initcall gntalloc_init+0x0/0x3d returned 0 after 127 usecs
[   15.800865] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.806237] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.813428] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.818434] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 64 usecs
[   15.825214] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.830854] initcall platform_pci_module_init+0x0/0x1b returned 0 after 90 usecs
[   15.838232] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.843625] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 usecs
[   15.850747] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.855539] xen_pciback: backend is vpci
[   15.859576] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3968 usecs
[   15.866358] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.872671] xen_acpi_processor: Uploading Xen processor PM info
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 0 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU2: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 2 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(3) cpuid(4) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU4: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 4 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(4) cpuid(6) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU6: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 6 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(5) cpuid(1) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU1: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 1 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(6) cpuid(3) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU3: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 3 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(7) cpuid(5) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU5: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 5 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(8) cpuid(7) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU7: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 7 initialization completed
[   17.294899] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 1389646 usecs
[   17.302768] calling  pty_init+0x0/0x453 @ 1
[   17.314441] kworker/u2:0 (756) used greatest stack depth: 5488 bytes left
[   17.369193] initcall pty_init+0x0/0x453 returned 0 after 60719 usecs
[   17.375542] calling  sysrq_init+0x0/0xb0 @ 1
[   all xen_hvc_init+0x0/0x228 returned 0 after 1024 usecs
[   17.398275] calling  serial8250_init+0x0/0x1ab @ 1
[   17.403124] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   17.430759] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   17.439166] initcall serial8250_init+0x0/0x1ab returned 0 after 35196 usecs
[   17.446114] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   17.451590] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 104 usecs
[   17.458885] calling  init_kgdboc+0x0/0x16 @ 1
[   17.463304] kgdb: Registered I/O driver kgdboc.
[   17.467927] initcall init_kgdboc+0x0/0x16 returned 0 after 4515 usecs
[   17.474397] calling  init+0x0/0x10f @ 1
[   17.478522] initcall init+0x0/0x10f returned 0 after 220 usecs
[   17.484346] calling  hpet_init+0x0/0x6a @ 1
[   17.489080] hpet_acpi_add: no address or irqs in _CRS
[   17.494215] initcall hpet_init+0x0/0x6a returned 0 after 5490 usecs
[   17.500467] calling  nvram_init+0x0/0x82 @ 1
[   17.504926] Non-volatile memory driver v1.3
[   17.509099] initcall nvram_init+0x0/0x82 returned 0 after 4199 usecs
[   17.515510] calling  mod_init+0x0/0x5a @ 1
[   17.519670] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   17.525822] calling  rng_init+0x0/0x12 @ 1
[   17.530120] initcall rng_init+0x0/0x12 returned 0 after 133 usecs
[   17.536198] calling  agp_init+0x0/0x26 @ 1
[   17.540356] Linux agpgart interface v0.103
[   17.544517] initcall agp_init+0x0/0x26 returned 0 after 4063 usecs
[   17.550757] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   17.555843] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 144 usecs
[   17.562883] calling  agp_intel_init+0x0/0x29 @ 1
[   17.567653] initcall agp_intel_init+0x0/0x29 returned 0 after 90 usecs
[   17.574167] calling  agp_sis_init+0x0/0x29 @ 1
[   17.578762] initcall agp_sis_init+0x0/0x29 returned 0 after 88 usecs
[   17.585102] calling  agp_via_init+0x0/0x29 @ 1
[   17.589698] initcall agp_via_init+0x0/0x29 returned 0 after 88 usecs
[   17.596041] calling  drm_core_init+0x0/0x10c @ 1
[   17.600809] [drm] Initialized drm 1.1.0 20060810
[   17.605416] initcall drm_core_init+0x0/0x10c returned 0 after 4585 usecs
[   17.612175] calling  cn_proc_init+0x0/0x3d @ 1
[   17.616685] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   17.623009] calling  topology_sysfs_init+0x0/0x70 @ 1
[   17.628154] initcall topology_sysfs_init+0x0/0x70 returned 0 after 33 usecs
[   17.635140] calling  loop_init+0x0/0x14e @ 1
[   17.693296] loop: module loaded
[   17.696460] initcall loop_init+0x0/0x14e returned 0 after 55648 usecs
[   17.702930] calling  xen_blkif_init+0x0/0x22 @ 1
[   17.707710] initcall xen_blkif_init+0x0/0x22 returned 0 after 99 usecs
[   17.714248] calling  mac_hid_init+0x0/0x22 @ 1
[   17.718736] initcall mac_hid_init+0x0/0x22 returned 0 after 8 usecs
[   17.725055] calling  macvlan_init_module+0x0/0x3d @ 1
[   17.730169] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 usecs
[   17.737102] calling  macvtap_init+0x0/0x100 @ 1
[   17.741763] initcall macvtap_init+0x0/0x100 returned 0 after 67 usecs
[   17.748219] calling  net_olddevs_init+0x0/0xb5 @ 1
[   17.753050] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   17.759720] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   17.765133] libphy: Fixed MDIO Bus: probed
[   17.769229] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4207 usecs
[   17.776499] calling  tun_init+0x0/0x93 @ 1
[   17.780658] tun: Universal TUN/TAP device driver, 1.6
[   17.785771] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   17.792155] initcall tun_init+0x0/0x93 returned 0 after 11226 usecs
[   17.798418] calling  tg3_driver_init+0x0/0x1b @ 1
[   17.803298] initcall tg3_driver_init+0x0/0x1b returned 0 after 121 usecs
[   17.809988] calling  igb_init_module+0x0/0x58 @ 1
[   17.814753] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
[   17.821774] igb: Copyright (c) 2007-2013 Intel Corporation.
[   17.827672] xen: registering gsi 17 triggering 0 polarity 1
[   17.833243] Already setup the GSI :17
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
[   18.057960] igb 0000:02:00.0: added PHC on eth0
[   18.062482] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   18.076606] igb 0000:02:00.0: eth0: PBA No: Unknown
[   18.081548] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   18.089439] xen: registering gsi 18 triggering 0 polarity 1
[   18.095009] Already setup the GSI :18
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
[   18.319943] igb 0000:02:00.1: added PHC on eth1
[   18.324467] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connecti
istering gsi 19 triggering 0 polarity 1
[   18.356994] Already setup the GSI :19
(XEN) [2014-01-25 03:42:08] msix_capability_init:759 for 05:00.0:, msix:0 dev:ffff8302394665b0
(XEN) [2014-01-25 03:42:08] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-25 03:42:08] CPU:    0
(XEN) [2014-01-25 03:42:08] RIP:    e008:[<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08] RFLAGS: 0000000000010296   CONTEXT: hypervisor
(XEN) [2014-01-25 03:42:08] rax: 0000000000000000   rbx: ffff8302394665b0   rcx: 0000000000000000
(XEN) [2014-01-25 03:42:08] rdx: ffff82d080310e20   rsi: 000000000000000a   rdi: ffff82d0802816c8
(XEN) [2014-01-25 03:42:08] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfbf8   r8:  0000000000000000
(XEN) [2014-01-25 03:42:08] r9:  0000000000000000   r10: 0000000000000000   r11: ffff82d080232040
(XEN) [2014-01-25 03:42:08] r12: 0000000000000000   r13: ffff83022a085e30   r14: ffff82d0802cfe98
(XEN) [2014-01-25 03:42:08] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-25 03:42:08] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-25 03:42:08] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-25 03:42:08] Xen stack trace from rsp=ffff82d0802cfbf8:
(XEN) [2014-01-25 03:42:08]    0000000000000000 ffff8302394665b0 000000050004fc28 ffff82d0802cfd88
(XEN) [2014-01-25 03:42:08]    000000728012a25f ffff8302ffffffff ffff82d000000000 0000000000000000
(XEN) [2014-01-25 03:42:08]    0000000000000005 0000000000000070 0000000500000000 0000000000000000
(XEN) [2014-01-25 03:42:08]    00000000f1980000 ffff82d000000005 0000000500000003 8005007000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfe98 ffff82d0802cfe98 ffff82d0802cfd88 ffff8302394665b0
(XEN) [2014-01-25 03:42:08]    0000000000000005 0000000000000000 ffff82d0802cfd28 ffff82d0801689c2
(XEN) [2014-01-25 03:42:08]    0000000000000246 ffff82d0802cfcd8 ffff82d080129d68 0000000000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfd28 ffff82d080147589 ffff82d0802cfd18 ffff830239463b70
(XEN) [2014-01-25 03:42:08]    000000000000010f ffff8302337f8000 000000000000010f 0000000000000022
(XEN) [2014-01-25 03:42:08]    00000000ffffffed ffff830239402200 ffff82d0802cfdc8 ffff82d08016c68c
(XEN) [2014-01-25 03:42:08]    ffff83022a085e00 000000000000010f 000000000000010f ffff8302337f80e0
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfd98 ffff82d0801047ed 0000010f01402200 ffff82d0802cfe98
(XEN) [2014-01-25 03:42:08]    ffff8302337f80e0 ffff8302394665b0 ffff82d0802cfe98 ffff83022a085e00
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfdc8 ffff8302337f8000 00000000fffffffd 0000000000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfe98 ffff82d0802cfe70 ffff82d0802cfe48 ffff82d08017f134
(XEN) [2014-01-25 03:42:08]    ffff82d0802cff18 ffffffff8156d7c6 ffff82d0802cfe98 ffff8302337f80b8
(XEN) [2014-01-25 03:42:08]    ffff82d00000010f ffff82d08018bd70 000000220000f800 ffff82d0802cfe74
(XEN) [2014-01-25 03:42:08]    ffff820040004000 000000000000000d ffff880078623b08 ffff8300b7313000
(XEN) [2014-01-25 03:42:08]    ffff880006db8180 0000000000000000 ffff82d0802cfef8 ffff82d08017f844
(XEN) [2014-01-25 03:42:08]    0000000000000000 0000000700000004 0000000000007ff0 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] Xen call trace:
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801689c2>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-25 03:42:08]    [<ffff82d08016c68c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f134>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f844>] do_physdev_op+0x646/0x1232
(XEN) [2014-01-25 03:42:08]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x145
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Pagetable walk from 0000000000000004:
(XEN) [2014-01-25 03:42:08]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] Panic on CPU 0:
(XEN) [2014-01-25 03:42:08] FATAL PAGE FAULT
(XEN) [2014-01-25 03:42:08] [error_code=0000]
(XEN) [2014-01-25 03:42:08] Faulting linear address: 0000000000000004
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Manual reset required ('noreboot' specified)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Jan 24 21:58:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 21:58:39 +0000
Date: Fri, 24 Jan 2014 16:56:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140124215652.GA18710@phenom.dumpdata.com>
References: <1390350251-22323-1-git-send-email-andrew.cooper3@citrix.com>
	<20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="d6Gm4EdcadzBjdND"
Content-Disposition: inline
In-Reply-To: <20140124174349.GA15472@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re: [PATCH]
 x86/msi: Validate the guest-identified PCI devices in pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Fri, Jan 24, 2014 at 12:43:49PM -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 24, 2014 at 04:19:15PM +0000, Jan Beulich wrote:
> > >>> On 24.01.14 at 16:01, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > I built the kernel without the igb driver just to eliminate it being
> > > the culprit. Now I can boot without issues and this is what lspci
> > > reports:
> > > 
> > > -bash-4.1# lspci -s 02:00.0 -v
> > > 02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network 
> > > Connection (rev 01)
> > >         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> > >         Flags: bus master, fast devsel, latency 0, IRQ 10
> > >         Memory at f1420000 (32-bit, non-prefetchable) [size=128K]
> > >         Memory at f1000000 (32-bit, non-prefetchable) [size=4M]
> > >         I/O ports at e020 [size=32]
> > >         Memory at f1444000 (32-bit, non-prefetchable) [size=16K]
> > >         Expansion ROM at f0c00000 [disabled] [size=4M]
> > >         Capabilities: [40] Power Management version 3
> > >         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
> > >         Capabilities: [70] MSI-X: Enable- Count=10 Masked-
> > 
> > So here's a patch to figure out why we don't find this.
> 
> Thank you!
> 
> See attached log. The corresponding xen-syms is compressed and
> updated at : http://darnok.org/xen/xen-syms.gz
> 
> The interesting bit is:
> 
> (XEN) 02:00.0: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
> (XEN) 02:00.0: pos=40
> (XEN) 02:00.0: id=01
> (XEN) 02:00.0: pos=50
> (XEN) 02:00.0: id=05
> (XEN) 02:00.0: pos=70
> (XEN) 02:00.0: id=11
> (XEN) 02:00.1: status=0010 (alloc_pdev+0xb4/0x2e9 wants 11)
> (XEN) 02:00.1: pos=40
> (XEN) 02:00.1: id=01
> (XEN) 02:00.1: pos=50
> (XEN) 02:00.1: id=05
> (XEN) 02:00.1: pos=70
> (XEN) 02:00.1: id=11

You were right that it might be the device not having the right
capabilities, but it was the wrong BDF. I instrumented the faulting
operation to make sure I knew which BDF it was:

(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff830239467f70,pdev ffff8302394660d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff830239466250,pdev ffff830239466190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff830239466520,pdev ffff830239466460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff830239466eb0,pdev ffff830239466df0

(XEN) [2014-01-25 03:42:08] msix_capability_init:759 for 05:00.0:, msix:0 dev:ffff8302394665b0
(XEN) [2014-01-25 03:42:08] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-25 03:42:08] CPU:    0
(XEN) [2014-01-25 03:42:08] RIP:    e008:[<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
... snip..
(XEN) [2014-01-25 03:42:08] Xen call trace:
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801689c2>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-25 03:42:08]    [<ffff82d08016c68c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f134>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f844>] do_physdev_op+0x646/0x1232
(XEN) [2014-01-25 03:42:08]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x145
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Pagetable walk from 0000000000000004:
(XEN) [2014-01-25 03:42:08]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] Panic on CPU 0:
(XEN) [2014-01-25 03:42:08] FATAL PAGE FAULT
(XEN) [2014-01-25 03:42:08] [error_code=0000]
(XEN) [2014-01-25 03:42:08] Faulting linear address: 0000000000000004
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Manual reset required ('noreboot' specified)

lspci shows (baremetal kernel, with said driver):

bash-4.1# lspci -s 05:00.0 -v 
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
        Subsystem: Super Micro Computer Inc Device 1533
        Flags: bus master, fast devsel, latency 0, IRQ 19
        Memory at f1900000 (32-bit, non-prefetchable) [size=512K]
        I/O ports at c000 [size=32]
        Memory at f1980000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-86-be-f1
        Capabilities: [1a0] #17
        Kernel driver in use: igb

aka, Intel I210 

lspci shows (Xen, kernel does not have igb built-in):

-bash-4.1# lspci -s 05:00.0 -v
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
        Subsystem: Super Micro Computer Inc Device 1533
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at f1900000 (32-bit, non-prefetchable) [size=512K]
        I/O ports at c000 [size=32]
        Memory at f1980000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-25-90-ff-ff-86-be-f1
        Capabilities: [1a0] #17

And with -xxx:

bash-4.1# lspci -s 05:00.0 -xxx
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
00: 86 80 33 15 07 00 10 00 03 00 00 02 10 00 00 00
10: 00 00 90 f1 00 00 00 00 01 c0 00 00 00 00 98 f1
20: 00 00 00 00 00 00 00 00 00 00 00 00 d9 15 33 15
30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00
40: 01 50 23 c8 08 20 00 00 00 00 00 00 00 00 00 00
50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 04 00 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff
a0: 10 00 02 00 c2 8c 00 10 07 28 19 00 11 5c 42 00
b0: 40 00 11 10 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Which would imply that the capability walk should chain
0x40 -> 0x50 -> 0x70 (the pointer at 0x34 reads 0x40), not start
at '60' as in the walk above!


If I boot baremetal with 'pci=earlydump' I get:

[    0.000000] pci 0000:05:00.0 config space:
[    0.000000]   00: e3 10 13 81 07 00 10 00 01 01 04 06 00 00 01 00
[    0.000000]   10: 00 00 00 00 00 00 00 00 05 06 07 20 f1 01 a0 22
[    0.000000]   20: 50 f1 60 f1 f1 ff 01 00 00 00 00 00 00 00 00 00
[    0.000000]   30: ff 00 00 00 60 00 00 00 00 00 00 00 ff 00 10 00
[    0.000000]   40: 00 aa 00 00 00 19 90 7d 80 01 00 00 07 03 00 00
[    0.000000]   50: 68 89 09 80 00 1f 00 00 00 01 00 00 00 00 00 00
[    0.000000]   60: 0d a0 00 00 d9 15 05 08 00 00 00 00 00 00 00 00
[    0.000000]   70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   a0: 01 00 03 f8 08 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   b0: 00 00 00 00 40 00 00 00 00 00 00 00 ef fb be 07
[    0.000000]   c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Which does indeed show that at bootup the PCI configuration
space is different.

<blink> And the device ID does not match!

If I look at one that has it:
[    0.000000] pci 0000:04:00.0 config space:
[    0.000000]   00: 86 80 33 15 07 00 10 00 03 00 00 02 10 00 00 00
[    0.000000]   10: 00 00 90 f1 00 00 00 00 01 c0 00 00 00 00 98 f1
[    0.000000]   20: 00 00 00 00 00 00 00 00 00 00 00 00 d9 15 33 15
[    0.000000]   30: 00 00 00 00 40 00 00 00 00 00 00 00 0b 01 00 00
[    0.000000]   40: 01 50 23 c8 08 20 00 00 00 00 00 00 00 00 00 00
[    0.000000]   50: 05 70 80 01 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   70: 11 a0 04 00 03 00 00 00 03 20 00 00 00 00 00 00
[    0.000000]   80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   90: 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff
[    0.000000]   a0: 10 00 02 00 c2 8c 00 10 07 28 19 00 11 5c 42 00
[    0.000000]   b0: 42 00 11 10 00 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
[    0.000000]   d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

That matches reality more closely, and pre-reassignment 04:00.0 is what
is now 05:00.0.

The reason that is happening is probably because of:

-bash-4.1# cat /proc/cmdline 
initrd=initramfs.cpio.gz console=ttyS0,115200 kgdboc=ttyS0 pci=assign-busses pci=earlydump BOOT_IMAGE=vmlinuz 
-bash-4.1# 

The 'assign-busses' option is the one needed for SR-IOV to work.

If I don't use that parameter, the Linux kernel (baremetal and with Xen)
tells me:


-bash-4.1# cat /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_numvfs
0
-bash-4.1# cat /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_totalvfs
7
-bash-4.1# echo 7 > /sys/devices/pci0000:00/0000:00:01.1/0000:02:00.0/sriov_numvfs
-bash: echo: write error: Cannot allocate memory
-bash-4.1# dmesg | tail
[  241.874349] random: sshd urandom read with 63 bits of entropy available
[  242.918267] Loading iSCSI transport class v2.0-870.
[  242.926046] iscsi: registered transport (tcp)
[  244.689798] scsi8 : iSCSI Initiator over TCP/IP
[  244.709799]  connection1:0: detected conn error (1020)
[  244.969450] device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
[  244.980434] device-mapper: multipath: version 1.6.0 loaded
[  250.027291] random: nonblocking pool is initialized
[  256.282312] switch: port 1(eth0) entered forwarding state
[  365.468641] igb 0000:02:00.0: SR-IOV: bus number out of range


And sure enough if I boot Xen without 'pci=assign-busses' it works just
fine.

Ugh.

I wonder how Xen 4.3 would actually do the PCI passthrough - it booted with
the 'assign-busses' - but I hadn't tried to do PCI passthrough of the
PF device (the I210).

If I do pass in '05:00.0' (the new bus number) I wonder if it will use the
IOMMU context of whatever '05:00.0' was _before_ the bus re-assignment, aka:

05:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
        Flags: bus master, fast devsel, latency 0
        Bus: primary=05, secondary=06, subordinate=07, sec-latency=32
        Memory behind bridge: f1500000-f16fffff
        Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
        Capabilities: [a0] Power Management version 3

Which I think would confuse Xen, as this is clearly labeled as a bridge,
not an endpoint PCI device.


The reason for me using 'pci=assign-busses' is that it looks to be
the only option to use SR-IOV.

Which I suppose makes sense as it tries to create VFs right after its own bus id:


           +-01.1-[02-03]--+-[0000:03]-+-10.0  Intel Corporation 82576 Virtual Function
           |               |           +-10.1  Intel Corporation 82576 Virtual Function
           |               |           +-10.2  Intel Corporation 82576 Virtual Function
           |               |           +-10.3  Intel Corporation 82576 Virtual Function
           |               |           +-10.4  Intel Corporation 82576 Virtual Function
           |               |           +-10.5  Intel Corporation 82576 Virtual Function
           |               |           +-10.6  Intel Corporation 82576 Virtual Function
           |               |           +-10.7  Intel Corporation 82576 Virtual Function
           |               |           +-11.0  Intel Corporation 82576 Virtual Function
           |               |           +-11.1  Intel Corporation 82576 Virtual Function
           |               |           +-11.2  Intel Corporation 82576 Virtual Function
           |               |           +-11.3  Intel Corporation 82576 Virtual Function
           |               |           +-11.4  Intel Corporation 82576 Virtual Function
           |               |           \-11.5  Intel Corporation 82576 Virtual Function
           |               \-[0000:02]-+-00.0  Intel Corporation 82576 Gigabit Network Connection
           |                           \-00.1  Intel Corporation 82576 Gigabit Network Connection


But why does it have to have the bus _right_ after its own? Can't it
use one at the end of its bus space? The bus right after it is occupied
by another card (if I boot without 'pci=assign-busses').

I do recall using this particular SR-IOV card on different hardware
a year or so ago, and it did work. I think that might be because
there were no PCI cards _after_ the SR-IOV card.

For posterity, with pci=assign-busses under baremetal (with SR-IOV enabled):
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
03:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
05:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
07:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11)
07:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
08:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
08:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
09:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
0a:00.0 SATA controller: Device 1b21:0612 (rev 01)

Without 'pci=assign-busses' under baremetal:
02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
03:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
04:00.0 Ethernet controller: Intel Corporation Device 1533 (rev 03)
05:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
06:01.0 PCI bridge: Hint Corp HB6 Universal PCI-PCI bridge (non-transparent mode) (rev 11)
06:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
07:08.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:08.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:09.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:09.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:0a.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:0a.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
07:0b.0 Multimedia video controller: Brooktree Corporation Bt878 Video Capture (rev 11)
07:0b.1 Multimedia controller: Brooktree Corporation Bt878 Audio Capture (rev 11)
08:00.0 USB Controller: Renesas Technology Corp. Device 0015 (rev 02)
09:00.0 SATA controller: Device 1b21:0612 (rev 01)


This problem with SR-IOV bus seems to have been solved in 2009:

commit a28724b0fb909d247229a70761c90bb37b13366a
Author: Yu Zhao <yu.zhao@intel.com>
Date:   Fri Mar 20 11:25:13 2009 +0800

    PCI: reserve bus range for SR-IOV device
    
    Reserve the bus number range used by the Virtual Function when
    pcibios_assign_all_busses() returns true.

And pcibios_assign_all_busses() is the one that returns true if 'pci=assign-busses'
is set.


--d6Gm4EdcadzBjdND
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="tst035-jan-debug-2.txt"
Content-Transfer-Encoding: quoted-printable

Trying 192.168.102.15...
Connected to maxsrv2.
Escape character is '^]'.
[serial console capture: repeated "Initializing Intel(R) Boot Agent
 GE v1.3.22" / "GE v1.4.10" banners, "PXE 2.1 Build 086 (WfM 2.0)" /
 "Build 092", and "Press Ctrl+S to enter the Setup Menu." redrawn for
 each NIC during POST; ANSI screen-redraw escape sequences omitted]
                                 =1B[22;00H                                =
                                                =1B[23;00H                 =
                                                               =1B[24;00H  =
                                                                           =
  =1B[24;00H=1B[15;39H=1B[15;00HPress Ctrl+S to enter the Setup Menu..     =
                                     =1B[15;39H=1B[15;39H=1B[15;39H=1B[15;3=
9H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B=
[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=1B[15;39H=80=08 =08=1B[2J=1B[1;1H=1B[=
1;1H=80=08 =08=1B[01;00HInitializing Intel(R) Boot Agent GE v1.3.22        =
                             =1B[02;00HPXE 2.1 Build 086 (WfM 2.0)         =
                                            =1B[03;00H                     =
                                                           =1B[04;00H      =
                                                                          =
=1B[05;00HInitializing Intel(R) Boot Agent GE v1.3.22                      =
               =1B[06;00HPXE 2.1 Build 086 (WfM 2.0)                       =
                              =1B[07;00H                                   =
                                             =1B[08;00H                    =
                                                            =1B[09;00HIniti=
alizing Intel(R) Boot Agent GE v1.4.10                                     =
=1B[10;00HPXE 2.1 Build 092 (WfM 2.0)                                      =
               =1B[11;00H                                                  =
                              =1B[12;00H                                   =
                                             =1B[13;00HInitializing Intel(R=
) Boot Agent GE v1.4.10                                     =1B[14;00HPXE 2=
=2E1 Build 092 (WfM 2.0)                                                   =
  =1B[15;00H                                                               =
                 =1B[16;00H                                                =
                                =1B[17;00HInitializing Intel(R) Boot Agent =
GE v1.4.10                                     =1B[18;00HPXE 2.1 Build 092 =
(WfM 2.0)                                                     =1B[19;00HPre=
ss Ctrl+S to enter the Setup Menu.                                         =
  =1B[20;00H                                                               =
                 =1B[21;00H                                                =
                                =1B[22;00H                                 =
                                               =1B[23;00H                  =
                                                              =1B[24;00H   =
                                                                           =
 =1B[24;00H=1B[19;39H=1B[19;00HPress Ctrl+S to enter the Setup Menu..      =
                                    =1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39=
H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[=
19;39H=1B[19;39H=1B[19;39H=1B[19;39H=1B[19;39H=80=08 =08=1B[2J=1B[1;1H=1B[1=
;1H=80=08 =08=1B[01;00H                                         =80=08 =08=
=1B[2J=1B[1;1H=1B[1;1H=1B[2J=1B[1;1H=1B[2J=1B[1;1H=80=08 =08=1B[01;00H     =
                                                                           =
=1B[02;00HIntel(R) Boot Agent GE v1.4.10                                   =
               =1B[03;00HCopyright (C) 1997-2012, Intel Corporation        =
                              =1B[04;00H                                   =
                                             =1B[05;00HInitializing and est=
ablishing link...                                           =1B[06;00H     =
                                                                           =
=1B[07;00H                                                                 =
               =1B[08;00H                                                  =
                              =1B[09;00H                                   =
                                             =1B[10;00H                    =
                                                            =1B[11;00H     =
                                                                           =
=1B[12;00H                                                                 =
               =1B[13;00H                                                  =
                              =1B[14;00H                                   =
                                             =1B[15;00H                    =
                                                            =1B[16;00H     =
                                                                           =
=1B[17;00H                                                                 =
               =1B[18;00H                                                  =
                              =1B[19;00H                                   =
                                             =1B[20;00H                    =
                                                            =1B[21;00H     =
                                                                           =
=1B[22;00H                                                                 =
               =1B[23;00H                                                  =
                              =1B[24;00H                                   =
                                            =1B[24;00H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[0=
5;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=1B[05;38H=
=1B[05;38H=1B[05;00HCLIENT MAC ADDR: 00 25 90 86 BE F0  GUID: 00000000 0000=
 0000 0000 00259086BEF0  =1B[06;00HDHCP.|                                  =
                                        =1B[06;06H=1B[06;00HDHCP./         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.-                                                           =
               =1B[06;06H=1B[06;00HDHCP.\                                  =
                                        =1B[06;06H=1B[06;00HDHCP.|         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP./                                                           =
               =1B[06;06H=1B[06;00HDHCP.-                                  =
                                        =1B[06;06H=1B[06;00HDHCP.\         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.|                                                           =
               =1B[06;06H=1B[06;00HDHCP./                                  =
                                        =1B[06;06H=1B[06;00HDHCP.-         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.\                                                           =
               =1B[06;06H=1B[06;00HDHCP.|                                  =
                                        =1B[06;06H=1B[06;00HDHCP./         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP.-                                                           =
               =1B[06;06H=1B[06;00HDHCP.\                                  =
                                        =1B[06;06H=1B[06;00HDHCP.|         =
                                                                 =1B[06;06H=
=1B[06;00HDHCP./                                                           =
               =1B[06;06H=1B[06;00HCLIENT IP: 192.168.102.35  MASK: 255.255=
=2E255.0  DHCP IP: 192.168.102.1          =0D
PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading xen.gz... ok
Loading vmlinuz... ok
Loading initramfs.cpio.gz... ok
Loading microcode.bin... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Fri Jan 24 14:40:10 EST 2014
(XEN) Latest ChangeSet: Mon Jan 20 09:50:20 2014 +0100 git:407a3c0-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: unknown
(XEN) Command line: dom0_max_vcpus=1 dom0_mem=max:2G iommu=debug,verbose com1=115200,8n1 console=com1 ucode=scan console_timestamps=1 console_to_ring conring_size=2097152 cpufreq=xen:performance,verbose sync_console noreboot loglvl=all guest_loglvl=all dom0_mem_max=max:6GB,2G
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) [VT-D]dmar.c:778: Host address width 39
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed90000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed90000 iommu->reg = ffff82c000201000
(XEN) [VT-D]iommu.c:1159: cap = c0000020660462 ecap = f0101a
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:792: found ACPI_DMAR_DRHD:
(XEN) [VT-D]dmar.c:472:   dmaru->address = fed91000
(XEN) [VT-D]iommu.c:1157: drhd->address = fed91000 iommu->reg = ffff82c000203000
(XEN) [VT-D]iommu.c:1159: cap = d2008020660462 ecap = f010da
(XEN) [VT-D]dmar.c:397:  IOAPIC: 0000:f0:1f.0
(XEN) [VT-D]dmar.c:361:  MSI HPET: 0000:f0:0f.0
(XEN) [VT-D]dmar.c:486:   flags: INCLUDE_ALL
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1d.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:1a.0
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr b764b000 end_address b7657fff
(XEN) [VT-D]dmar.c:797: found ACPI_DMAR_RMRR:
(XEN) [VT-D]dmar.c:383:  endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:666:   RMRR region: base_addr bc000000 end_address be1fffff
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.107 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0xlity: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff830239467f70,pdev ffff8302394660d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff830239466250,pdev ffff830239466190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff830239466520,pdev ffff830239466460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff830239466eb0,pdev ffff830239466df0
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI wit(XEN) TSC deadline timer enabled
(XEN) [2014-01-25 03:41:53] Platform timer is 14.318MHz HPET
(XEN) [2014-01-25 03:41:53] Allocated console ring of 1048576 KiB.
(XEN) [2014-01-25 03:41:53] mwait-idle: MWAIT substates: 0x42120
(XEN) [2014-01-25 03:41:53] mwait-idle: v0.4 model 0x3c
(XEN) [2014-01-25 03:41:53] mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) [2014-01-25 03:41:53] VMX: Supported advanced features:
(XEN) [2014-01-25 03:41:53]  - APIC MMIO access virtualisation
(XEN) [2014-01-25 03:41:53]  - APIC TPR shadow
(XEN) [2014-01-25 03:41:53]  - Extended Page Tables (EPT)
(XEN) [2014-01-25 03:41:53]  - Virtual-Processor Identifiers (VPID)
(XEN) [2014-01-25 03:41:53]  - Virtual NMI
(XEN) [2014-01-25 03:41:53]  - MSR direct-access bitmap
(XEN) [2014-01-25 03:41:53]  - Unrestricted Guest
(XEN) [2014-01-25 03:41:53]  - VMCS shadowing
(XEN) [2014-01-25 03:41:53] HVM: ASIDs enabled.
(XEN) [2014-01-25 03:41:53] HVM: VMX enabled
(XEN) [2014-01-25 03:41:53] HVM: Hardware Assisted Paging (HAP) detected
(XEN) [2014-01-25 03:41:53] HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) [2014-01-25 03:41:53] Brought up 8 CPUs
(XEN) [2014-01-25 03:41:53] ACPI sleep modes: S3
(XEN) [2014-01-25 03:41:53] mcheck_poll: Machine check polling timer started.*** LOADING DOMAIN 0 ***
(XEN) [2014-01-25 03:41:53] elf_parse_binary: phdr: paddr=0x1000000 memsz=0xa28000
(XEN) [2014-01-25 03:41:53] elf_parse_binarymemory: 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: GUEST_OS = "linux"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: ENTRY = 0xffffffff81cd81e0
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: PAE_MODE = "yes"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: LOADER = "generic"
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) [2014-01-25 03:41:53] elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) [2014-01-25 03:41:53] elf_xen_addr_calc_check: addresses:
(XEN) [2014-01-25 03:41:53]     virt_base        = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53]     elf_paddr_offset = 0x0
(XEN) [2014-01-25 03:41:53]     virt_offset      = 0xffffffff80000000
(XEN) [2014-01-25 03:41:53]     virt_kstart      = 0xffffffff81000000
(XEN) [2014-01-25 03:41:53]     virt_kend        = 0xffffffff823f7000
(XEN) [2014-01-25 03:41:53]     virt_entry       = 0xffffffff81cd81e0
(XEN) [2014-01-25 03:41:53]     p2m_base         = 0xffffffffffffffff
(XEN) [2014-01-25 03:41:53]  Xen  kernel: 64-bit, lsb, compat32
(XEN) [2014-01-25 03:41:53]  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f7000
(XEN) [2014-01-25 03:41:53] PHYSICAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 03:41:53]  Dom0 alloc.:   000000022c000000->0000000230000000 (487082 pages to be allocated)
(XEN) [2014-01-25 03:41:53]  Init. ramdisk: 000000023ac31000->000000023fd86dfa
(XEN) [2014-01-25 03:41:53] VIRTUAL MEMORY ARRANGEMENT:
(XEN) [2014-01-25 03:41:53]  Loaded kernel: ffffffff81000000->ffffffff823f7000
(XEN) [2014-01-25 03:41:53]  Init. ramdisk: ffffffff823f7000->ffffffff8754cdfa
(XEN) [2014-01-25 03:41:53]  Phys-Mach map: ffffffff8754d000->ffffffff8794d000
(XEN) [2014-01-25 03:41:53]  Start info:    ffffffff8794d000->ffffffff8794d4b4
(XEN) [2014-01-25 03:41:53]  Page tables:   ffffffff8794e000->ffffffff8798f000
(XEN) [2014-01-25 03:41:54]  Boot stack:    ffffffff8798f000->ffffffff87990000
(XEN) [2014-01-25 03:41:54]  TOTAL:         ffffffff80000000->ffffffff87c00000
(XEN) [2014-01-25 03:41:54]  ENTRY ADDRESS: ffffffff81cd81e0
(XEN) [2014-01-25 03:41:54] Dom0 has maximum 1 VCPUs
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81a28000
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc20f0
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 2 at 0xffffffff81cc3000 -> 0xffffffff81cd7d80
(XEN) [2014-01-25 03:41:54] elf_load_binary: phdr 3 at 0xffffffff81cd8000 -> 0xffffffff81e7b000
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1438: d0:Hostbridge: skip 0000:00:00.0 map
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:PCI: map 0000:00:16.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:19.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1a.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:00:1b.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1d.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.2
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.3
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:00:1f.6
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:02:00.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:03:00.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:06:03.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:08.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:09.1
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.0
(XEN) [2014-01-25 03:41:54] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0a.1
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:0b.1
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1452: d0:PCIe: map 0000:08:00.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:1452: d0:PCIe: map 0000:09:00.0
(XEN) [2014-01-25 03:41:55] [VT-D]iommu.c:750: iommu_enable_translation: iommu->reg = ffff82c000201000
(XEN) [2014-01-25 03:41:..............................................done.
(XEN) [2014-01-25 03:41:55] Initial low memory virq threshold set at 0x4000 pages.
(XEN) [2014-01-25 03:41:55] Std. Loglevel: All
(XEN) [2014-01-25 03:41:55] Guest Loglevel: All
(XEN) [2014-01-25 03:41:55] **********************************************
(XEN) [2014-01-25 03:41:55] ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) [2014-01-25 03:41:55] ******* This option is intended to aid debugging of Xen by ensuring
(XEN) [2014-01-25 03:41:55] ******* that all output is synchronously delivered on the serial line.
(XEN) [2014-01-25 03:41:55] ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) [2014-01-25 03:41:55] ******* timekeeping. It is NOT recommended for production use!
(XEN) [2014-01-25 03:41:55] **********************************************
(XEN) [2014-01-25 03:41:55] 3... 2... 1...
(XEN) [2014-01-25 03:41:58] *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) [2014-01-25 03:et started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing c.0upstream-03477-gdf32e43 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #5 SMP Fri Jan 24 12:22:52 EST 2014
[    0.000000] Command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pcibackAA.hide=(05:00.*) kgdboc=hvc0
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] 1-1 mapping on 99->100
[    0.000000] 1-1 mapping on a58f1->a58f8
[    0.000000] 1-1 mapping on a61b1->a6597
[    0.000000] 1-1 mapping on b74b4->b76cb
[    0.000000] 1-1 mapping on b770c->b7fff
[    0.000000] 1-1 mapping on b8000->100000
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 80000-80067 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x0000000080066fff] usable
[    0.000000] Xen: [mem 0x0000000080067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x80067 max_arch_pfn = 0x400000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff880000093000] 93000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x7fe00000-0x7fffffff]
[    0.000000]  [mem 0x7fe00000-0x7fffffff] page 4k
[    0.000000] BRK [0x01fef000, 0x01feffff] PGTABLE
[    0.000000] BRK [0x01ff0000, 0x01ff0fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x7c000000-0x7fdfffff]
[    0.000000]  [mem 0x7c000000-0x7fdfffff] page 4k
[    0.000000] BRK [0x01ff1000, 0x01ff1fff] PGTABLE
[    0.000000] BRK [0x01ff2000, 0x01ff2fff] PGTABLE
[    0.000000] BRK [0x01ff3000, 0x01ff3fff] PGTABLE
[    0.000000] BRK [0x01ff4000, 0x01ff4fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x7bffffff]
[    0.000000]  [mem 0x00100000-0x7bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x80000000-0x80066fff]
[    0.000000]  [mem 0x80000000-0x80066fff] page 4k
[    0.000000] RAMDISK: [mem 0x023f7000-0x0754cfff]
[    0.000000] ACPI: RSDP 00000000000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT 00000000b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP 00000000b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT 00000000b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS 00000000b77b7080 000040
[    0.000000] ACPI: APIC 00000000b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT 00000000b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT 00000000b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT 00000000b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG 00000000b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET 00000000b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT 00000000b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT 00000000b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! 00000000b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR 00000000b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ 00000000b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST 00000000b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST 00000000b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT 00000000b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at [mem 0x0000000000000000-0x0000000080066fff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x80066fff]
[    0.000000]   NODE_DATA [mem 0x80063000-0x80066fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x80066fff]
[    0.000000] On node 0 totalpages: 524287
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 7114 pages used for memmap
[    0.000000]   DMA32 zone: 520295 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007f600000 s85376 r8192 d21120 u262144
[    0.000000] pcpu-alloc: s85376 r8192 d21120 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
[    5.511514] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 517096
[    5.511515] Policy zone: DMA32
[    5.511516] Kernel command line: debug pci=assign-busses console=hvc0 loglevel=10 initcall_debug loop.max_loop=100 xen-pciback.hide=(05:00.*) kgdboc=hvc0
[    5.511829] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    5.511858] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    5.532322] software IO TLB [mem 0x79200000-0x7d200000] (64MB) mapped at [ffff880079200000-ffff88007d1fffff]
[    5.535407] Memory: 1891592K/2097148K available (7058K kernel code, 773K rwdata, 2208K rodata, 1724K init, 1380K bss, 205556K reserved)
[    5.535637] Hierarchical RCU implementation.
[    5.535638] 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=1.
[    5.535638] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
[    5.535646] NR_IRQS:33024 nr_irqs:256 16
[    5.535725] xen: sci override: global_irq=9 trigger=0 polarity=0
[    5.535727] xen: registering gsi 9 triggering 0 polarity 0
[    5.535738] xen: --> pirq=9 -> irq=9 (gsi=9)
[    5.535760] xen: acpi sci 9
[    5.535763] xen: --> pirq=1 -> irq=1 (gsi=1)
[    5.535766] xen: --> pirq=2 -> irq=2 (gsi=2)
[    5.535769] xen: --> pirq=3 -> irq=3 (gsi=3)
[    5.535771] xen: --> pirq=4 -> irq=4 (gsi=4)
[    5.535774] xen: --> pirq=5 -> irq=5 (gsi=5)
[    5.535776] xen: --> pirq=6 -> irq=6 (gsi=6)
[    5.535779] xen: --> pirq=7 -> irq=7 (gsi=7)
[    5.535781] xen: --> pirq=8 -> irq=8 (gsi=8)
[    5.535784] xen: --> pirq=10 -> irq=10 (gsi=10)
[    5.535786] xen: --> pirq=11 -> irq=11 (gsi=11)
[    5.535789] xen: --> pirq=12 -> irq=12 (gsi=12)
[    5.535791] xen: --> pirq=13 -> irq=13 (gsi=13)
[    5.535794] xen: --> pirq=14 -> irq=14 (gsi=14)
[    5.535796] xen: --> pirq=15 -> irq=15 (gsi=15)
[    5.537360] Console: colour VGA+ 80x25
[    6.488462] console [hvc0] enabled
[    6.492388] Xen: using vcpuop timer interface
[    6.496738] installing Xen timer for CPU 0
[    6.500919] tsc: Detected 3400.106 MHz processor
[    6.505603] Calibrating delay loop (skipped), value calculated using timer frequency.. 6800.21 BogoMIPS (lpj=3400106)
[    6.516237] pid_max: default: 32768 minimum: 301
[    6.521075] Security Framework initialized
[    6.525165] SELinux:  Initializing.
[    6.528738] SELinux:  Starting in permissive mode
[    6.533811] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    6.541261] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    6.548425] Mount-cache hash table entries: 256
[    6.553402] Initializing cgroup subsys freezer
[    6.557907] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    6.557907] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    6.571011] CPU: Physical Processor ID: 0
[    6.575083] CPU: Processor Core ID: 0
[    6.579504] mce: CPU supports 2 MCE banks
[    6.583517] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    6.583517] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    6.583517] tlb_flushall_shift: 6
[    6.620866] Freeing SMP alternatives memory: 32K (ffffffff81e72000 - ffffffff81e7a000)
[    6.629543] ACPI: Core revision 2115
[    6.686527] ACPI: All ACPI Tables successfully acquired
[    6.693282] cpu 0 spinlock event irq 41
[    6.697164] calling  xen_init_spinlocks_jump+0x0/0x1d @ 1
[    6.708165] initcall xen_init_spinlocks_jump+0x0/0x1d returned 0 after 4882 usecs
[    6.715635] calling  set_real_mode_permissions+0x0/0xa9 @ 1
[    6.721275] initcall set_real_mode_permissions+0x0/0xa9 returned 0 after 0 usecs
[    6.728720] calling  trace_init_perf_perm_irq_work_exit+0x0/0x13 @ 1
[    6.735133] initcall trace_init_perf_perm_irq_work_exit+0x0/0x13 returned 0 after 0 usecs
[    6.743366] calling  trace_init_flags_sys_exit+0x0/0x12 @ 1
[    6.749000] initcall trace_init_flags_sys_exit+0x0/0x12 returned 0 after 0 usecs
[    6.756452] calling  trace_init_flags_sys_enter+0x0/0x12 @ 1
[    6.762173] initcall trace_init_flags_sys_enter+0x0/0x12 returned 0 after 0 usecs
[    6.769711] calling  init_hw_perf_events+0x0/0x53b @ 1
[    6.774912] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[    6.783784] initcall init_hw_perf_events+0x0/0x53b returned 0 after 2929 usecs
[    6.791064] calling  register_trigger_all_cpu_backtrace+0x0/0x16 @ 1
[    6.797477] initcall register_trigger_all_cpu_backtrace+0x0/0x16 returned 0 after 0 usecs
[    6.805709] calling  kvm_spinlock_init_jump+0x0/0x5a @ 1
[    6.811180] initcall kvm_spinlock_init_jump+0x0/0x5a returned 0 after 0 usecs
[    6.818303] calling  spawn_ksoftirqd+0x0/0x28 @ 1
[    6.823096] initcall spawn_ksoftirqd+0x0/0x28 returned 0 after 0 usecs
[    6.829657] calling  init_workqueues+0x0/0x59a @ 1
[    6.834667] initcall init_workqueues+0x0/0x59a returned 0 after 0 usecs
[    6.841270] calling  migration_init+0x0/0x72 @ 1
[    6.845949] initcall migration_init+0x0/0x72 returned 0 after 0 usecs
[    6.852448] calling  check_cpu_stall_init+0x0/0x1b @ 1
[    6.857649] initcall check_cpu_stall_init+0x0/0x1b returned 0 after 0 usecs
[    6.864668] calling  rcu_scheduler_really_started+0x0/0x12 @ 1
[    6.870560] initcall rcu_scheduler_really_started+0x0/0x12 returned 0 after 0 usecs
[    6.878274] calling  rcu_spawn_gp_kthread+0x0/0x90 @ 1
[    6.883512] initcall rcu_spawn_gp_kthread+0x0/0x90 returned 0 after 0 usecs
[    6.890497] calling  cpu_stop_init+0x0/0x76 @ 1
[    6.895111] initcall cpu_stop_init+0x0/0x76 returned 0 after 0 usecs
[    6.901500] calling  relay_init+0x0/0x14 @ 1
[    6.905833] initcall relay_init+0x0/0x14 returned 0 after 0 usecs
[    6.911986] calling  tracer_alloc_buffers+0x0/0x1bd @ 1
[    6.917293] initcall tracer_alloc_buffers+0x0/0x1bd returned 0 after 0 usecs
[    6.924378] calling  init_events+0x0/0x61 @ 1
[    6.928800] initcall init_events+0x0/0x61 returned 0 after 0 usecs
[    6.935038] calling  init_trace_printk+0x0/0x12 @ 1
[    6.939979] initcall init_trace_printk+0x0/0x12 returned 0 after 0 usecs
[    6.946738] calling  event_trace_memsetup+0x0/0x52 @ 1
[    6.951958] initcall event_trace_memsetup+0x0/0x52 returned 0 after 0 usecs
[    6.958957] calling  jump_label_init_module+0x0/0x12 @ 1
[    6.964331] initcall jump_label_init_module+0x0/0x12 returned 0 after 0 usecs
[    6.971525] calling  balloon_clear+0x0/0x4f @ 1
[    6.976118] initcall balloon_clear+0x0/0x4f returned 0 after 0 usecs
[    6.982531] calling  rand_initialize+0x0/0x30 @ 1
[    6.987319] initcall rand_initialize+0x0/0x30 returned 0 after 0 usecs
[    6.993884] calling  mce_amd_init+0x0/0x165 @ 1
[    6.998476] initcall mce_amd_init+0x0/0x165 returned 0 after 0 usecs
[    7.004915] x86: Booted up 1 node, 1 CPUs
[    7.009659] NMI watchdog: disabled (cpu0): hardware events not enabled
[    7.016304] devtmpfs: initialized
[    7.022209] calling  ipc_ns_init+0x0/0x14 @ 1
[    7.026557] initcall ipc_ns_init+0x0/0x14 returned 0 after 0 usecs
[    7.032797] calling  init_mmap_min_addr+0x0/0x26 @ 1
[    7.037822] initcall init_mmap_min_addr+0x0/0x26 returned 0 after 0 usecs
[    7.044668] calling  init_cpufreq_transition_notifier_list+0x0/0x1b @ 1
[    7.051343] initcall init_cpufreq_transition_notifier_list+0x0/0x1b returned 0 after 0 usecs
[    7.059835] calling  net_ns_init+0x0/0x104 @ 1
[    7.064400] initcall net_ns_init+0x0/0x104 returned 0 after 0 usecs
[    7.070683] calling  e820_mark_nvs_memory+0x0/0x41 @ 1
[    7.075869] PM: Registering ACPI NVS region [mem 0xa58f1000-0xa58f7fff] (28672 bytes)
[    7.083764] PM: Registering ACPI NVS region [mem 0xb770c000-0xb77b8fff] (708608 bytes)
[    7.091924] initcall e820_mark_nvs_memory+0x0/0x41 returned 0 after 1953 usecs
[    7.099128] calling  cpufreq_tsc+0x0/0x37 @ 1
[    7.103549] initcall cpufreq_tsc+0x0/0x37 returned 0 after 0 usecs
[    7.109787] calling  reboot_init+0x0/0x1d @ 1
[    7.114209] initcall reboot_init+0x0/0x1d returned 0 after 0 usecs
[    7.120447] calling  init_lapic_sysfs+0x0/0x20 @ 1
[    7.125301] initcall init_lapic_sysfs+0x0/0x20 returned 0 after 0 usecs
[    7.131974] calling  cpu_hotplug_pm_sync_init+0x0/0x2f @ 1
[    7.137520] initcall cpu_hotplug_pm_sync_init+0x0/0x2f returned 0 after 0 usecs
[    7.144887] calling  alloc_frozen_cpus+0x0/0x8 @ 1
[    7.149739] initcall alloc_frozen_cpus+0x0/0x8 returned 0 after 0 usecs
[    7.156412] calling  wq_sysfs_init+0x0/0x14 @ 1
[    7.161108] kworker/u2:0 (15) used greatest stack depth: 6168 bytes left
[    7.167854] initcall wq_sysfs_init+0x0/0x14 returned 0 after 976 usecs
[    7.174381] calling  ksysfs_init+0x0/0x94 @ 1
[    7.178844] initcall ksysfs_init+0x0/0x94 returned 0 after 0 usecs
[    7.185039] calling  pm_init+0x0/0x4e @ 1
[    7.189151] initcall pm_init+0x0/0x4e returned 0 after 0 usecs
[    7.195004] calling  pm_disk_init+0x0/0x19 @ 1
[    7.199527] initcall pm_disk_init+0x0/0x19 returned 0 after 0 usecs
[    7.205838] calling  swsusp_header_init+0x0/0x30 @ 1
[    7.210865] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 usecs
[    7.217710] calling  init_jiffies_clocksource+0x0/0x12 @ 1
[    7.223257] initcall init_jiffies_clocksource+0x0/0x12 returned 0 after 0 usecs
[    7.230622] calling  cgroup_wq_init+0x0/0x5c @ 1
[    7.235311] initcall cgroup_wq_init+0x0/0x5c returned 0 after 0 usecs
[    7.241803] calling  event_trace_enable+0x0/0x173 @ 1
[    7.247407] initcall event_trace_enable+0x0/0x173 returned 0 after 0 usecs
[    7.254267] calling  init_zero_pfn+0x0/0x35 @ 1
[    7.258859] initcall init_zero_pfn+0x0/0x35 returned 0 after 0 usecs
[    7.265273] calling  fsnotify_init+0x0/0x26 @ 1
[    7.269867] initcall fsnotify_init+0x0/0x26 returned 0 after 0 usecs
[    7.276277] calling  filelock_init+0x0/0x84 @ 1
[    7.280883] initcall filelock_init+0x0/0x84 returned 0 after 0 usecs
[    7.287284] calling  init_misc_binfmt+0x0/0x31 @ 1
[    7.292139] initcall init_misc_binfmt+0x0/0x31 returned 0 after 0 usecs
[    7.298811] calling  init_script_binfmt+0x0/0x16 @ 1
[    7.303837] initcall init_script_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.310685] calling  init_elf_binfmt+0x0/0x16 @ 1
[    7.315449] initcall init_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.322037] calling  init_compat_elf_binfmt+0x0/0x16 @ 1
[    7.327410] initcall init_compat_elf_binfmt+0x0/0x16 returned 0 after 0 usecs
[    7.334603] calling  debugfs_init+0x0/0x5c @ 1
[    7.339121] initcall debugfs_init+0x0/0x5c returned 0 after 0 usecs
[    7.345436] calling  securityfs_init+0x0/0x53 @ 1
[    7.350211] initcall securityfs_init+0x0/0x53 returned 0 after 0 usecs
[    7.356789] calling  prandom_init+0x0/0xe2 @ 1
[    7.361295] initcall prandom_init+0x0/0xe2 returned 0 after 0 usecs
[    7.367623] calling  virtio_init+0x0/0x30 @ 1
[    7.372147] initcall virtio_init+0x0/0x30 returned 0 after 0 usecs
[    7.378318] calling  __gnttab_init+0x0/0x30 @ 1
[    7.382912] xen:grant_table: Grant tables using version 2 layout
[    7.388993] Grant table initialized
[    7.392530] initcall __gnttab_init+0x0/0x30 returned 0 after 1953 usecs
[    7.399203] calling  early_resume_init+0x0/0x1d0 @ 1
[    7.404256] RTC time:  3:41:59, date: 01/25/14
[    7.408736] initcall early_resume_init+0x0/0x1d0 returned 0 after 976 usecs
[    7.415755] calling  cpufreq_core_init+0x0/0x37 @ 1
[    7.420695] initcall cpufreq_core_init+0x0/0x37 returned -19 after 0 usecs
[    7.427628] calling  cpuidle_init+0x0/0x40 @ 1
[    7.432135] initcall cpuidle_init+0x0/0x40 returned -19 after 0 usecs
[    7.438634] calling  bsp_pm_check_init+0x0/0x14 @ 1
[    7.443574] initcall bsp_pm_check_init+0x0/0x14 returned 0 after 0 usecs
[    7.450333] calling  sock_init+0x0/0x8b @ 1
[    7.454685] initcall sock_init+0x0/0x8b returned 0 after 0 usecs
[    7.460683] calling  net_inuse_init+0x0/0x26 @ 1
[    7.465365] initcall net_inuse_init+0x0/0x26 returned 0 after 0 usecs
[    7.471860] calling  netpoll_init+0x0/0x31 @ 1
[    7.476367] initcall netpoll_init+0x0/0x31 returned 0 after 0 usecs
[    7.482694] calling  netlink_proto_init+0x0/0x1f7 @ 1
[    7.487847] NET: Registered protocol family 16
[    7.492338] initcall netlink_proto_init+0x0/0x1f7 returned 0 after 976 usecs
[    7.499433] calling  bdi_class_init+0x0/0x4d @ 1
[    7.504218] initcall bdi_class_init+0x0/0x4d returned 0 after 0 usecs
[    7.510649] calling  kobject_uevent_init+0x0/0x12 @ 1
[    7.515773] initcall kobject_uevent_init+0x0/0x12 returned 0 after 0 usecs
[    7.522693] calling  pcibus_class_init+0x0/0x19 @ 1
[    7.527694] initcall pcibus_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.534392] calling  pci_driver_init+0x0/0x12 @ 1
[    7.539251] initcall pci_driver_init+0x0/0x12 returned 0 after 0 usecs
[    7.545769] calling  backlight_class_init+0x0/0x85 @ 1
[    7.551028] initcall backlight_class_init+0x0/0x85 returned 0 after 0 usecs
[    7.557991] calling  video_output_class_init+0x0/0x19 @ 1
[    7.563514] initcall video_output_class_init+0x0/0x19 returned 0 after 0 usecs
[    7.570728] calling  xenbus_init+0x0/0x26f @ 1
[    7.575328] initcall xenbus_init+0x0/0x26f returned 0 after 0 usecs
[    7.581581] calling  tty_class_init+0x0/0x38 @ 1
[    7.586328] initcall tty_class_init+0x0/0x38 returned 0 after 0 usecs
[    7.592760] calling  vtconsole_class_init+0x0/0xc2 @ 1
[    7.598129] initcall vtconsole_class_init+0x0/0xc2 returned 0 after 0 usecs
[    7.605075] calling  wakeup_sources_debugfs_init+0x0/0x2b @ 1
[    7.610886] initcall wakeup_sources_debugfs_init+0x0/0x2b returned 0 after 0 usecs
[    7.618509] calling  register_node_type+0x0/0x34 @ 1
[    7.623667] initcall register_node_type+0x0/0x34 returned 0 after 0 usecs
[    7.630441] calling  i2c_init+0x0/0x70 @ 1
[    7.634769] initcall i2c_init+0x0/0x70 returned 0 after 0 usecs
[    7.640676] calling  init_ladder+0x0/0x12 @ 1
[    7.645093] initcall init_ladder+0x0/0x12 returned -19 after 0 usecs
[    7.651506] calling  init_menu+0x0/0x12 @ 1
[    7.655754] initcall init_menu+0x0/0x12 returned -19 after 0 usecs
[    7.661993] calling  amd_postcore_init+0x0/0x143 @ 1
[    7.667019] initcall amd_postcore_init+0x0/0x143 returned 0 after 0 usecs
[    7.673879] calling  boot_params_ksysfs_init+0x0/0x237 @ 1
[    7.679431] initcall boot_params_ksysfs_init+0x0/0x237 returned 0 after 0 usecs
[    7.686778] calling  arch_kdebugfs_init+0x0/0x233 @ 1
[    7.691922] initcall arch_kdebugfs_init+0x0/0x233 returned 0 after 0 usecs
[    7.698826] calling  mtrr_if_init+0x0/0x78 @ 1
[    7.703333] initcall mtrr_if_init+0x0/0x78 returned -19 after 0 usecs
[    7.709832] calling  ffh_cstate_init+0x0/0x2a @ 1
[    7.714599] initcall ffh_cstate_init+0x0/0x2a returned 0 after 0 usecs
[    7.721183] calling  activate_jump_labels+0x0/0x32 @ 1
[    7.726385] initcall activate_jump_labels+0x0/0x32 returned 0 after 0 usecs
[    7.733404] calling  acpi_pci_init+0x0/0x61 @ 1
[    7.737997] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    7.745623] ACPI: bus type PCI registered
[    7.749696] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    7.756197] initcall acpi_pci_init+0x0/0x61 returned 0 after 2929 usecs
[    7.762870] calling  dma_bus_init+0x0/0xd6 @ 1
[    7.767499] kworker/u2:0 (30) used greatest stack depth: 5768 bytes left
[    7.774205] initcall dma_bus_init+0x0/0xd6 returned 0 after 976 usecs
[    7.780726] calling  dma_channel_table_init+0x0/0xde @ 1
[    7.786110] initcall dma_channel_table_init+0x0/0xde returned 0 after 0 usecs
[    7.793288] calling  setup_vcpu_hotplug_event+0x0/0x22 @ 1
[    7.798837] initcall setup_vcpu_hotplug_event+0x0/0x22 returned 0 after 0 usecs
[    7.806201] calling  register_xen_pci_notifier+0x0/0x38 @ 1
[    7.811834] initcall register_xen_pci_notifier+0x0/0x38 returned 0 after 0 usecs
[    7.819289] calling  xen_pcpu_init+0x0/0xcc @ 1
[    7.824733] initcall xen_pcpu_init+0x0/0xcc returned 0 after 0 usecs
[    7.831080] calling  dmi_id_init+0x0/0x31d @ 1
[    7.835833] initcall dmi_id_init+0x0/0x31d returned 0 after 0 usecs
[    7.842088] calling  dca_init+0x0/0x20 @ 1
[    7.846246] dca service started, version 1.12.1
[    7.850899] initcall dca_init+0x0/0x20 returned 0 after 976 usecs
[    7.856996] calling  iommu_init+0x0/0x58 @ 1
[    7.861336] initcall iommu_init+0x0/0x58 returned 0 after 0 usecs
[    7.867481] calling  pci_arch_init+0x0/0x69 @ 1
[    7.872091] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[    7.881435] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[    7.896157] PCI: Using configuration type 1 for base access
[    7.901715] initcall pci_arch_init+0x0/0x69 returned 0 after    7.913398] initcall topology_init+0x0/0x98 returned 0 after 0 usecs
[    7.919759] calling  mtrr_init_finialize+0x0/0x36 @ 1
[    7.924853] initcall mtrr_init_finialize+0x0/0x36 returned 0 after 0 usecs
[    7.931788] calling  init_vdso+0x0/0x135 @ 1
[    7.936121] initcall init_vdso+0x0/0x135 returned 0 after 0 usecs
[    7.942274] calling  sysenter_setup+0x0/0x2dd @ 1
[    7.947040] initcall sysenter_setup+0x0/0x2dd returned 0 after 0 usecs
[    7.953626] calling  param_sysfs_init+0x0/0x194 @ 1
[    7.975052] initcall param_sysfs_init+0x0/0x194 returned 0 after 14648 usecs
[    7.982090] calling  pm_sysrq_init+0x0/0x19  7.993092] calling  default_bdi_init+0x0/0x65 @ 1
[    7.998253] initcall default_bdi_init+0x0/0x65 returned 0 after 0 usecs
[    8.004853] calling  init_bio+0x0/0xe9 @ 1
[    8.009069] bio: create slab <bio-0> at 0
[    8.013134] initcall init_bio+0x0/0xe9 returned 0 after 976 usecs
[    8.019241] calling  cryptomgr_init+0x0/0x12 @ 1
[    8.023920] initcall cryptomgr_init+0x0/0x12 returned 0 after 0 usecs
[    8.030419] calling  blk_settings_init+0x0/0x2c @ 1
[    8.035359] initcall blk_settings_init+0x0/0x2c returned 0 after 0 usecs
[    8.042118] calling  blk_ioc_init+0x0/0x2a @ 1
[    8.046636] initcall blk_ioc_init+0x0/0x2a returned 0 after 0 usecs
[    8.052952] calling  blk_softirq_init+0x0/0x6e @ 1
[    8.057805] initcall blk_softirq_init+0x0/0x6e returned 0 after 0 usecs
[    8.064477] calling  blk_iopoll_setup+0x0/0x6e @ 1
[    8.069331] initcall blk_iopoll_setup+0x0/0x6e returned 0 after 0 usecs
[    8.076003] calling  blk_mq_init+0x0/0x5f @ 1
[    8.080424] initcall blk_mq_init+0x0/0x5f returned 0 after 0 usecs
[    8.086665] calling  genhd_device_init+0x0/0x85 @ 1
[    8.091745] initcall genhd_device_init+0x0/0x85 returned 0 after 0 usecs
[    8.098435] calling  pci_slot_init+0x0/0x50 @ 1
[    8.103034] initcall pci_slot_init+0x0/0x50 returned 0 after 0 usecs
[    8.109437] calling  fbmem_init+0x0/0x98 @ 1
[    8.113843] initcall fbmem_init+0x0/0x98 returned 0 after 0 usecs
[    8.119927] calling  acpi_init+0x0/0x27a @ 1
[    8.124283] ACPI: Added _OSI(Module Device)
[    8.128504] ACPI: Added _OSI(Processor Device)
[    8.133010] ACPI: Added _OSI(3.0 _SCP Extensions)
[    8.137774] ACPI: Added _OSI(Processor Aggregator Device)
[    8.147027] ACPI: Executed 1 blocks of module-level executable AML code
[    8.179322] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[    8.187216] \_SB_:_OSC invalid UUID
[    8.190694] _Oata:1 1f 
[    8.196378] ACPI: SSDT 00000000b76c1c18 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.205616] ACPI: Dynamic OEM Table Load:
[    8.209610] ACPI: SSDT           (null) 0003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20051117)
[    8.219471] ACPI: Interpreter enabled
[    8.223140] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20131115/hwxface-580)
[    8.232400] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S3_] (20131115/hwxface-580)
[    8.241683] ACPI: (supports S0 S1 S4 S5)
[    8.245654] ACPI: Using IOAPIC for interrupt routing
[    8.251056] HEST: Table parsing has been initialized.
[    8.256107] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    8.266510] ACPI: No dock devices found.
[    8.368976] ACPI: Power Resource [FN00] (off)
[    8.374125] ACPI: Power Resource [FN01] (off)
[    8.379306] ACPI: Power Resource [FN02] (off)
[    8.384446] ACPI: Power Resource [FN03] (off)
[    8.389594] ACPI: Power Resource [FN04] (off)
[    8.399292] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[    8.405472] acpi PNP0A08:00: _OSC: OS supports [Exten
[    8.416241] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME]
[    8.425254] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
[    8.438579] PCI host bridge to bus 0000:00
[    8.442672] pci_bus 0000:00: root bus resource [bus 00-3e]
[    8.448218] p0-0x000d7fff]
[    8.474563] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff]
[    8.481496] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff]
[    8.488428] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff]
[    8.495363] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff]
[    8.502295] pci_bus 0000:00: root bus resource [mem 0xbe200000-0xfeafffff]
[    8.509241] pci 0000:00:00.0: [8086:0c08] type 00 class 0x060000
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:0.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:00.0
[    8.526971] pci 0000:00:01.0: [8086:0c01] type 01 class 0x060400
[    8.533130] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    8.539756] pci 0000:00:01.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:01.0
[    8.556754] pci 0000:00:01.1: [8086:0c05] type 01 class 0x060400
[    8.562818] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1.1 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:01.1
[    8.580646] pci 0000:00:02.0: [8086:041a] type 00 class 0x030000
[    8.586663] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[    8.593500] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[    8.600776] pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:2.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:02.0
[    8.618132] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[    8.624151] pci 0000:00:03.0: reg 0x10: [mem 0xf1b34000-0xf1b37fff 64bit]
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:3.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:03.0
[    8.642734] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[    8.648793] pci 0000:00:14.0: reg 0x10: [mem 0xf1b20000-0xf1b2ffff 64bit]
[    8.655724] pci 0000:00:14.0: PME# supported from D3hot D3cold
[    8.661958] pci 0000:00:14.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:14.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:14.0
[    8.679051] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[    8.685090] pci 0000:00:16.0: reg 0x10: [mem 0xf1b3f000-0xf1b3f00f 64bit]
[    8.692031] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:16.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:16.0
[    8.709907] pci 0000:00:19.0: [8086:153a] type 00 class 0x020000
[    8.715949] pci 0000:00:19.0: reg 0x10: [mem 0xf1b00000-0xf1b1ffff]
[    8.722242] pci 0000:00:19.0: reg 0x14: [mem 0xf1b3d000-0xf1b3dfff]
[    8.728569] pci 0000:00:19.0: reg 0x18: [io  0xf080-0xf09f]
[    8.734328] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    8.740823] pci 0000:00:19.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:19.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:19.0
[    8.757917] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[    8.763959] pci 0000:00:1a.0: reg 0x10: [mem 0xf1b3c000-0xf1b3c3ff]
[    8.770411] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    8.776995] pci 0000:00:1a.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1a.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1a.0
[    8.794132] pci 0000:00:1b.0: [8086:8c20] type 00 class 0x040300
[    8.800166] pci 0000:00:1b.0: reg 0x10: [mem 0xf1b30000-0xf1b33fff 64bit]
[    8.807131] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    8.813616] pci 0000:00:1b.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1b.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1b.0
[    8.830699] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[    8.836862] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    8.843355] pci 0000:00:1c.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.0 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.0
[    8.860448] pci 0000:00:1c.3: [8086:8c16] type 01 class 0x060400
[    8.866613] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[    8.873107] pci 0000:00:1c.3: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.3 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.3
[    8.890194] pci 0000:00:1c.5: [8086:8c1a] type 01 class 0x060400
[    8.896358] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
[    8.902851] pci 0000:00:1c.5: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.5 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.5
[    8.919937] pci 0000:00:1c.6: [8086:8c1c] type 01 class 0x060400
[    8.926101] pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
[    8.932593] pci 0000:00:1c.6: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.6 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.6
[    8.949670] pci 0000:00:1c.7: [8086:8c1e] type 01 class 0x060400
[    8.955834] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
[    8.962332] pci 0000:00:1c.7: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:01] PHYSDEVOP_pci_device_add of 0:1c.7 flags:0
(XEN) [2014-01-25 03:42:01] PCI add device 0000:00:1c.7
[    8.979430] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[    8.985470] pci 0000:00:1d.0: reg 0x10: [mem 0xf1b3b000-0xf1b3b3ff]
[    8.991923] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    8.998505] pci 0000:00:1d.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1d.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1d.0
[    9.015607] pci 0000:00:1f.0: [8086:8c56] type 00 class 0x060100
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.0
[    9.033538] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[    9.039575] pci 0000:00:1f.2: reg 0x10: [io  0xf0d0-0xf0d7]
[    9.045178] pci 0000:00:1f.2: reg 0x14: [io  0xf0c0-0xf0c3]
[    9.050811] pci 0000:00:1f.2: reg 0x18: [io  0xf0b0-0xf0b7]
[    9.056442] pci 0000:00:1f.2: reg 0x1c: [io  0xf0a0-0xf0a3]
[    9.062077] pci 0000:00:1f.2: reg 0x20: [io  0xf060-0xf07f]
[    9.067708] pci 0000:00:1f.2: reg 0x24: [mem 0xf1b3a000-0xf1b3a7ff]
[    9.074118] pci 0000:00:1f.2: PME# supported from D3hot
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.2 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.2
[    9.091125] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[    9.097155] pci 0000:00:1f.3: reg 0x10: [mem 0xf1b39000-0xf1b390ff 64bit]
[    9.104006] pci 0000:00:1f.3: reg 0x20: [io  0xf040-0xf05f]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.3 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.3
[    9.121387] pci 0000:00:1f.6: [8086:8c24] type 00 class 0x118000
[    9.127429] pci 0000:00:1f.6: reg 0x10: [mem 0xf1b38000-0xf1b38fff 64bit]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 0:1f.6 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:00:1f.6=0D
[    9.146352] pci_bus 0000:01: busn_res: can not insert [bus 01-ff] under =
[bus 00-3e] (conflicts with (null) [bus 00-3e])=0D
[    9.157128] pci 0000:00:01.0: PCI bridge to [bus 01-ff]=0D
[    9.162409] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01=
=0D
[    9.169273] pci_bus 0000:02: busn_res: can not insert [bus 02-ff] under =
[bus 00-3e] (conflicts with (null) [bus 00-3e])=0D
[    9.180074] pci 0000:02:00.0: [8086:10c9] type 00 class 0x020000=0D
[    9.186118] pci 0000:02:00.0: reg 0x10: [mem 0xf1420000-0xf143ffff]=0D
[    9.192440] pci 0000:02:00.0: reg 0x14: [mem 0xf1000000-0xf13fffff]=0D
[    9.198767] pci 0000:02:00.0: reg 0x18: [io  0xe020-0xe03f]=0D
[    9.204401] pci 0000:02:00.0: reg 0x1c: [mem 0xf1444000-0xf1447fff]=0D
[    9.210745] pci 0000:02:00.0: reg 0x30: [mem 0xf0c00000-0xf0ffffff pref]=
=0D
[    9.217536] pci 0000:02:00.0: PME# supported from D0 D3hot D3cold=0D
[    9.223664] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[    9.230579] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 2:0.0 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:02:00.0=0D
[    9.248930] pci 0000:02:00.1: [8086:10c9] type 00 class 0x020000=0D
[    9.254942] pci 0000:02:00.1: reg 0x10: [mem 0xf1400000-0xf141ffff]=0D
[    9.261262] pci 0000:02:00.1: reg 0x14: [mem 0xf0800000-0xf0bfffff]=0D
[    9.267589] pci 0000:02:00.1: reg 0x18: [io  0xe000-0xe01f]=0D
[    9.273222] pci 0000:02:00.1: reg 0x1c: [mem 0xf1440000-0xf1443fff]=0D
[    9.279568] pci 0000:02:00.1: reg 0x30: [mem 0xf0400000-0xf07fffff pref]=
=0D
[    9.286359] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold=0D
[    9.292483] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bi=
t]=0D
[    9.299398] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bi=
t]=0D
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 2:0.1 flags:0=0D
(XEN) [2014-01-25 03:42:02] PCI add device 0000:02:00.1=0D
[    9.319827] pci 0000:00:01.1: PCI bridge to [bus 02-ff]=0D
[    9.325048] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]=0D
[    9.331200] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff=
]=0D
[    9.338047] pci_bus 0000:02: busn_res: [bus 02-ff] end is updated to 03=
=0D
[    9.345085] pci_bus 0000:04: busn_res: can not insert [bus 04-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.355901] pci 0000:04:00.0: [8086:105e] type 00 class 0x020000
[    9.361949] pci 0000:04:00.0: reg 0x10: [mem 0xf1aa0000-0xf1abffff]
[    9.368263] pci 0000:04:00.0: reg 0x14: [mem 0xf1a80000-0xf1a9ffff]
[    9.374589] pci 0000:04:00.0: reg 0x18: [io  0xd020-0xd03f]
[    9.380305] pci 0000:04:00.0: reg 0x30: [mem 0xf1a60000-0xf1a7ffff pref]
[    9.387132] pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
[    9.393360] pci 0000:04:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 4:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:04:00.0
[    9.410427] pci 0000:04:00.1: [8086:105e] type 00 class 0x020000
[    9.416462] pci 0000:04:00.1: reg 0x10: [mem 0xf1a40000-0xf1a5ffff]
[    9.422774] pci 0000:04:00.1: reg 0x14: [mem 0xf1a20000-0xf1a3ffff]
[    9.429100] pci 0000:04:00.1: reg 0x18: [io  0xd000-0xd01f]
[    9.434819] pci 0000:04:00.1: reg 0x30: [mem 0xf1a00000-0xf1a1ffff pref]
[    9.441643] pci 0000:04:00.1: PME# supported from D0 D3hot D3cold
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 4:0.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1452: d0:PCIe: map 0000:04:00.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:04:00.1
[    9.467789] pci 0000:00:1c.0: PCI bridge to [bus 04-ff]
[    9.473010] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[    9.479161] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[    9.486015] pci_bus 0000:04: busn_res: [bus 04-ff] end is updated to 04
[    9.493043] pci_bus 0000:05: busn_res: can not insert [bus 05-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.503865] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
[    9.509908] pci 0000:05:00.0: reg 0x10: [mem 0xf1900000-0xf197ffff]
[    9.516246] pci 0000:05:00.0: reg 0x18: [io  0xc000-0xc01f]
[    9.521858] pci 0000:05:00.0: reg 0x1c: [mem 0xf1980000-0xf1983fff]
[    9.528361] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[    9.534597] pci 0000:05:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 5:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:05:00.0
[    9.553743] pci 0000:00:1c.3: PCI bridge to [bus 05-ff]
[    9.558966] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[    9.565115] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[    9.571966] pci_bus 0000:05: busn_res: [bus 05-ff] end is updated to 05
[    9.579040] pci_bus 0000:06: busn_res: can not insert [bus 06-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[    9.589864] pci 0000:06:00.0: [10e3:8113] type 01 class 0x060401
[    9.596108] pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[    9.602879] pci 0000:06:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 6:0.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:06:00.0
[    9.619891] pci 0000:00:1c.5: PCI bridge to [bus 06-ff]
[    9.625122] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[    9.631984] pci 0000:06:00.0: bridge configuration invalid ([bus 06-07]), reconfiguring
[    9.640473] pci 0000:07:01.0: [3388:0021] type 01 class 0x060400
[    9.646667] pci 0000:07:01.0: supports D1 D2
[    9.650926] pci 0000:07:01.0: PME# supported from D1 D2 D3hot D3cold
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 7:1.0 flags:0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:07:01.0
[    9.668875] pci 0000:07:03.0: [104c:8023] type 00 class 0x0c0010
[    9.674912] pci 0000:07:03.0: reg 0x10: [mem 0xf1604000-0xf16047ff]
[    9.681221] pci 0000:07:03.0: reg 0x14: [mem 0xf1600000-0xf1603fff]
[    9.687704] pci 0000:07:03.0: supports D1 D2
[    9.691961] pci 0000:07:03.0: PME# supported from D0 D1 D2 D3hot
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 7:3.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:07:03.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:07:03.0
[    9.716013] pci 0000:06:00.0: PCI bridge to [bus 07-ff] (subtractive decode)
[    9.723066] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[    9.729909] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.738478] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff] (subtractive decode)
[    9.747143] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.755722] pci 0000:06:00.0:   bridge window [??? 0x00000000 flags 0x0] (subtractive decode)
[    9.764303] pci 0000:07:01.0: bridge configuration invalid ([bus 07-07]), reconfiguring
[    9.772694] pci 0000:08:08.0: [109e:036e] type 00 class 0x040000
[    9.778825] pci 0000:08:08.0: reg 0x10: [mem 0xf1507000-0xf1507fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:8.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:08.0
[    9.803612] pci 0000:08:08.1: [109e:0878] type 00 class 0x048000
[    9.809661] pci 0000:08:08.1: reg 0x10: [mem 0xf1506000-0xf1506fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:8.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:08.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:08.1
[    9.834476] pci 0000:08:09.0: [109e:036e] type 00 class 0x040000
[    9.840523] pci 0000:08:09.0: reg 0x10: [mem 0xf1505000-0xf1505fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:9.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:09.0
[    9.865321] pci 0000:08:09.1: [109e:0878] type 00 class 0x048000
[    9.871371] pci 0000:08:09.1: reg 0x10: [mem 0xf1504000-0xf1504fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:9.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:09.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:09.1
[    9.896201] pci 0000:08:0a.0: [109e:036e] type 00 class 0x040000
[    9.902255] pci 0000:08:0a.0: reg 0x10: [mem 0xf1503000-0xf1503fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:a.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0a.0
[    9.927051] pci 0000:08:0a.1: [109e:0878] type 00 class 0x048000
[    9.933101] pci 0000:08:0a.1: reg 0x10: [mem 0xf1502000-0xf1502fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:a.1 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0a.1
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0a.1
[    9.957922] pci 0000:08:0b.0: [109e:036e] type 00 class 0x040000
[    9.963976] pci 0000:08:0b.0: reg 0x10: [mem 0xf1501000-0xf1501fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:b.0 flags:0
(XEN) [2014-01-25 03:42:02] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.0
(XEN) [2014-01-25 03:42:02] PCI add device 0000:08:0b.0
[    9.988773] pci 0000:08:0b.1: [109e:0878] type 00 class 0x048000
[    9.994823] pci 0000:08:0b.1: reg 0x10: [mem 0xf1500000-0xf1500fff pref]
(XEN) [2014-01-25 03:42:02] PHYSDEVOP_pci_device_add of 8:b.1 flags:0
(XEN) [2014-01-25 03:42:03] [VT-D]iommu.c:1464: d0:PCI: map 0000:08:0b.1
(XEN) [2014-01-25 03:42:03] PCI add device 0000:08:0b.1
[   10.019675] pci 0000:07:01.0: PCI bridge to [bus 08-ff]
[   10.024903] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   10.031739] pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 08
[   10.038412] pci_bus 0000:07: busn_res: [bus 07-ff] end is updated to 08
[   10.045083] pci_bus 0000:06: busn_res: [bus 06-ff] end is updated to 08
[   10.052117] pci_bus 0000:09: busn_res: can not insert [bus 09-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.063011] pci 0000:09:00.0: [1912:0015] type 00 class 0x0c0330
[   10.069124] pci 0000:09:00.0: reg 0x10: [mem 0xf1800000-0xf1801fff 64bit]
[   10.076291] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[   10.082582] pci 0000:09:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:03] PHYSDEVOP_pci_device_add of 9:0.0 flags:0
(XEN) [2014-01-25 03:42:03] PCI add device 0000:09:00.0
[   10.101766] pci 0000:00:1c.6: PCI bridge to [bus 09-ff]
[   10.106995] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   10.113841] pci_bus 0000:09: busn_res: [bus 09-ff] end is updated to 09
[   10.120878] pci_bus 0000:0a: busn_res: can not insert [bus 0a-ff] under [bus 00-3e] (conflicts with (null) [bus 00-3e])
[   10.131683] pci 0000:0a:00.0: [1b21:0612] type 00 class 0x010601
[   10.137730] pci 0000:0a:00.0: reg 0x10: [io  0xb050-0xb057]
[   10.143357] pci 0000:0a:00.0: reg 0x14: [io  0xb040-0xb043]
[   10.148988] pci 0000:0a:00.0: reg 0x18: [io  0xb030-0xb037]
[   10.154624] pci 0000:0a:00.0: reg 0x1c: [io  0xb020-0xb023]
[   10.160255] pci 0000:0a:00.0: reg 0x20: [io  0xb000-0xb01f]
[   10.165889] pci 0000:0a:00.0: reg 0x24: [mem 0xf1700000-0xf17001ff]
[   10.172424] pci 0000:0a:00.0: System wakeup disabled by ACPI
(XEN) [2014-01-25 03:42:03] PHYSDEVOP_pci_device_add of a:0.0 flags:0
(XEN) [2014-01-25 03:42:03] [VT-D]iommu.c:1452: d0:PCIe: map 0000:0a:00.0
(XEN) [2014-01-25 03:42:03] PCI add device 0000:0a:00.0
[   10.198063] pci 0000:00:1c.7: PCI bridge to [bus 0a-ff]
[   10.203284] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   10.209434] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   10.216286] pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 0a
[   10.223047] acpi PNP0A08:00: Disabling ASPM (FADT indicates it is unsupported)
[   10.234866] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.242188] ACPI: PCI Interrupt Link [LNKB] ( PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
[   10.279883] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 *5 6 10 11 12 14 15)
[   10.287196] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 *11 12 14 15)
[   10.295636] ACPI: Enabled 4 GPEs in block 00 to 3F
[   10.300430] ACPI: \_SB_.PCI0: notify handler is installed
[   10.305914] Found 1 acpi root devices
[   10.309716] initcall acpi_init+0x0/0x27a returned 0 after 443359 usecs
[   10.316232] calling  pnp_init+0x0/0x12 @ 1
[   10.320575] initcall pnp_init+0x0/0x12 returned 0 after 0 usecs
[   10.326492] calling  balloon_init+0x0/0x242 @ 1
[   10.331082] xen:balloon: Initialising balloon driver
[   10.336108] initcall balloon_init+0x0/0x242 returned 0 after 976 usecs
[   10.342694] calling  xen_setup_shutdown_event+0x0/0x30 @ 1
[   10.348240] initcall xen_setup_shutdown_event+0x0/0x30 returned 0 after 0 usecs
[   10.355605] calling  xenbus_probe_backend_init+0x0/0x2d @ 1
[   10.361335] initcall xenbus_probe_backend_init+0x0/0x2d returned 0 after 0 usecs
[   10.368723] calling  xenbus_probe_frontend_init+0x0/0x72 @ 1
[   10.374558] initcall xenbus_probe_frontend_init+0x0/0x72 returned 0 after 0 usecs
[   10.382022] calling  xen_acpi_pad_init+0x0/0x47 @ 1
[   10.387039] initcall xen_acpi_pad_init+0x0/0x47 returned 0 after 0 usecs
[   10.393730] calling  balloon_init+0x0/0xfa @ 1
[   10.398234] xen_balloon: Initialising balloon driver
[   10.403540] initcall balloon_init+0x0/0xfa returned 0 after 976 usecs
[   10.409974] calling  misc_init+0x0/0xba @ 1
[   10.414312] initcall misc_init+0x0/0xba returned 0 after 0 usecs
[   10.420314] calling  vga_arb_device_init+0x0/0xde @ 1
[   10.425575] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[   10.433649] vgaarb: loaded
[   10.436418] vgaarb: bridge control possible 0000:00:02.0
[   10.441792] initcall vga_arb_device_init+0x0/0xde returned 0 after 2929 usecs
[   10.448986] calling  cn_init+0x0/0xc0 @ 1
[   10.453078] initcall cn_init+0x0/0xc0 returned 0 after 0 usecs
[   10.458952] calling  dma_buf_init+0x0/0x75 @ 1
[   10.463471] initcall dma_buf_init+0x0/0x75 returned 0 after 0 usecs
[   10.469785] calling  phy_init+0x0/0x2e @ 1
[   10.474166] initcall phy_init+0x0/0x2e returned 0 after 0 usecs
[   10.480076] calling  init_pcmcia_cs+0x0/0x3d @ 1
[   10.484819] initcall init_pcmcia_cs+0x0/0x3d returned 0 after 0 usecs
[   10.491255] calling  usb_init+0x0/0x169 @ 1
[   10.495514] ACPI: bus type USB registered
[   10.499784] usbcore: registered new interface driver usbfs
[   10.505366] usbcore: registered new interface driver hub
[   10.510778] usbcore: registered new device driver usb
[   10.515825] initcall usb_init+0x0/0x169 returned 0 after 3906 usecs
[   10.522149] calling  serio_init+0x0/0x31 @ 1
[   10.526576] initcall serio_init+0x0/0x31 returned 0 after 0 usecs
[   10.532663] calling  input_init+0x0/0x103 @ 1
[   10.537149] initcall input_init+0x0/0x103 returned 0 after 0 usecs
[   10.543323] calling  rtc_init+0x0/0x5b @ 1
[   10.547548] initcall rtc_init+0x0/0x5b returned 0 after 0 usecs
[   10.553462] calling  pps_init+0x0/0xb7 @ 1
[   10.557683] pps_core: LinuxPPS API ver. 1 registered
[   10.562649] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   10.571834] initcall pps_init+0x0/0xb7 returned 0 after 1953 usecs
[   10.578073] calling  ptp_init+0x0/0xa4 @ 1
[   10.582293] PTP clock support registered
[   10.586220] initcall ptp_init+0x0/0xa4 returned 0 after 976 usecs
[   10.592373] calling  power_supply_class_init+0x0/0x44 @ 1
[   10.597892] initcall power_supply_class_init+0x0/0x44 returned 0 after 0 usecs
[   10.605116] calling  hwmon_init+0x0/0xe3 @ 1
[   10.609509] initcall hwmon_init+0x0/0xe3 returned 0 after 0 usecs
[   10.615601] calling  leds_init+0x0/0x40 @ 1
[   10.619908] initcall leds_init+0x0/0x40 returned 0 after 0 usecs
[   10.625913] calling  efisubsys_init+0x0/0x142 @ 1
[   10.630680] initcall efisubsys_init+0x0/0x142 returned 0 after 0 usecs
[   10.637264] calling  pci_subsys_init+0x0/0x4f @ 1
[   10.642029] PCI: Using ACPI for IRQ routing
[   10.649715] PCI: pci_cache_line_size set to 64 bytes
[   10.654869] e820: reserve RAM buffer [mem 0x00099000-0x0009ffff]
[   10.660865] e820: reserve RAM buffer [mem 0x80067000-0x83ffffff]
[   10.666931] initcall pci_subsys_init+0x0/0x4f returned 0 after 6835 usecs
[   10.673776] calling  proto_init+0x0/0x12 @ 1
[   10.678114] initcall proto_init+0x0/0x12 returned 0 after 0 usecs
[   10.684261] calling  net_dev_init+0x0/0x1c6 @ 1
[   10.689488] initcall net_dev_init+0x0/0x1c6 returned 0 after 0 usecs
[   10.695834] calling  neigh_init+0x0/0x80 @ 1
[   10.700165] initcall neigh_init+0x0/0x80 returned 0 after 0 usecs
[   10.706316] calling  fib_rules_init+0x0/0xaf @ 1
[   10.710997] initcall fib_rules_init+0x0/0xaf returned 0 after 0 usecs
[   10.717496] calling  pktsched_init+0x0/0x10a @ 1
[   10.722182] initcall pktsched_init+0x0/0x10a returned 0 after 0 usecs
[   10.728676] calling  tc_filter_init+0x0/0x55 @ 1
[   10.733356] initcall tc_filter_init+0x0/0x55 returned 0 after 0 usecs
[   10.739856] calling  tc_action_init+0x0/0x55 @ 1
[   10.744535] initcall tc_action_init+0x0/0x55 returned 0 after 0 usecs
[   10.751037] calling  genl_init+0x0/0x85 @ 1
[   10.755298] initcall genl_init+0x0/0x85 returned 0 after 0 usecs
[   10.761349] calling  cipso_v4_init+0x0/0x61 @ 1
[   10.765942] initcall cipso_v4_init+0x0/0x61 returned 0 after 0 usecs
[   10.772355] calling  netlbl_init+0x0/0x81 @ 1
[   10.776774] NetLabel: Initializing
[   10.780275] NetLabel:  domain hash size = 128
[   10.784693] NetLabel:  protocols = UNLABELED CIPSOv4
[   10.789759] NetLabel:  unlabeled traffic allowed by default
[   10.795353] initcall netlbl_init+0x0/0x81 returned 0 after 3906 usecs
[   10.801854] calling  rfkill_init+0x0/0x79 @ 1
[   10.806450] initcall rfkill_init+0x0/0x79 returned 0 after 0 usecs
[   10.812623] calling  xen_mcfg_late+0x0/0xab @ 1
[   10.817213] initcall xen_mcfg_late+0x0/0xab returned 0 after 0 usecs
[   10.823642] calling  xen_p2m_debugfs+0x0/0x4a @ 1
[   10.828407] initcall xen_p2m_debugfs+0x0/0x4a returned 0 after 0 usecs
[   10.834977] calling  xen_spinlock_debugfs+0x0/0x13a @ 1
[   10.840312] initcall xen_spinlock_debugfs+0x0/0x13a returned 0 after 0 usecs
[   10.847370] calling  nmi_warning_debugfs+0x0/0x27 @ 1
[   10.852489] initcall nmi_warning_debugfs+0x0/0x27 returned 0 after 0 usecs
[   10.859415] calling  hpet_late_init+0x0/0x101 @ 1
[   10.864182] initcall hpet_late_init+0x0/0x101 returned -19 after 0 usecs
[   10.870942] calling  init_amd_nbs+0x0/0xb8 @ 1
[   10.875452] initcall init_amd_nbs+0x0/0xb8 returned 0 after 0 usecs
[   10.881775] calling  clocksource_done_booting+0x0/0x42 @ 1
[   10.887329] Switched to clocksource xen
[   10.891229] initcall clocksource_done_booting+0x0/0x42 returned 0 after 3810 usecs
[   10.898851] calling  tracer_init_debugfs+0x0/0x1b2 @ 1
[   10.904344] initcall tracer_init_debugfs+0x0/0x1b2 returned 0 after 287 usecs
[   10.911467] calling  init_trace_printk_function_export+0x0/0x2f @ 1
[   10.917800] initcall init_trace_printk_function_export+0x0/0x2f returned 0 after 5 usecs
[   10.925939] calling  event_trace_init+0x0/0x205 @ 1
[   10.945703] initcall event_trace_init+0x0/0x205 returned 0 after 14473 usecs
[   10.952737] calling  init_kprobe_trace+0x0/0x93 @ 1
[   10.957687] initcall init_kprobe_trace+0x0/0x93 returned 0 after 11 usecs
[   10.964524] calling  init_pipe_fs+0x0/0x4c @ 1
[   10.969069] initcall init_pipe_fs+0x0/0x4c returned 0 after 39 usecs
[   10.975442] calling  eventpoll_init+0x0/0xda @ 1
[   10.980152] initcall eventpoll_init+0x0/0xda returned 0 after 29 usecs
[   10.986709] calling  anon_inode_init+0x0/0x5b @ 1
[   10.991508] initcall anon_inode_init+0x0/0x5b returned 0 after 34 usecs
[   10.998148] calling  init_ramfs_fs+0x0/0x4d @ 1
[   11.002749] initcall init_ramfs_fs+0x0/0x4d returned 0 after 9 usecs
[   11.009154] calling  blk_scsi_ioctl_init+0x0/0x2c5 @ 1
[   11.014354] initcall blk_scsi_ioctl_init+0x0/0x2c5 returned 0 after 0 usecs
[   11.021373] calling  acpi_event_init+0x0/0x3a @ 1
[   11.026159] initcall acpi_event_init+0x0/0x3a returned 0 after 17 usecs
[   11.032813] calling  pnp_system_init+0x0/0x12 @ 1
[   11.037677] initcall pnp_system_init+0x0/0x12 returned 0 after 94 usecs
[   11.044295] calling  pnpacpi_init+0x0/0x8c @ 1
[   11.048785] pnp: PnP ACPI init
[   11.051928] ACPI: bus type PNP registered
[   11.056303] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[   11.062906] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[   11.069797] pnp 00:01: [dma 4]
[   11.073013] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
[   11.079701] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
[   11.086773] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
[   11.094311] system 00:04: [io  0x0680-0x069f] has been reserved
[   11.100228] system 00:04: [io  0xffff] has been reserved
[   11.105600] system 00:04: [io  0xffff] has been reserved
[   11.110972] system 00:04: [io  0xffff] has been reserved
[   11.116348] system 00:04: [io  0x1c00-0x1cfe] has been reserved
[   11.122324] system 00:04: [io  0x1d00-0x1dfe] has been reserved
[   11.128304] system 00:04: [io  0x1e00-0x1efe] has been reserved
[   11.134285] system 00:04: [io  0x1f00-0x1ffe] has been reserved
[   11.140267] system 00:04: [io  0x0ca4-0x0ca7] has been reserved
[   11.146244] system 00:04: [io  0x1800-0x18fe] could not be reserved
[   11.152570] system 00:04: [io  0x164e-0x164f] has been reserved
[   11.158545] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.165426] xen: registering gsi 8 triggering 1 polarity 0
[   11.171110] pnp 00:05: Plug and Play ACPI device, IDs PNP0b00 (active)
[   11.177953] system 00:06: [io  0x1854-0x1857] has been reserved
[   11.183863] system 00:06: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[   11.192198] kworker/u2:0 (517) used greatest stack depth: 5560 bytes left
[   11.199015] system 00:07: [io  0x0a00-0x0a1f] has been reserved
[   11.204963] system 00:07: [io  0x0a30-0x0a3f] has been reserved
[   11.210936] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.219186] xen: registering gsi 4 triggering 1 polarity 0
[   11.224662] Already setup the GSI :4
[   11.228306] pnp 00:08: [dma 0 disabled]
[   11.232467] pnp 00:08: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.240187] xen: registering gsi 3 triggering 1 polarity 0
[   11.245682] pnp 00:09: [dma 0 disabled]
[   11.249768] pnp 00:09: Plug and Play ACPI device, IDs PNP0501 (active)
[   11.256613] system 00:0a: [io  0x04d0-0x04d1] has been reserved
[   11.262519] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.269396] xen: registering gsi 13 triggering 1 polarity 0
[   11.275208] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active)
[   11.284867] system 00:0c: [mem 0xfed1c000-0xfed1ffff] has been reserved
[   11.291482] system 00:0c: [mem 0xfed10000-0xfed17fff] has been reserved
[   11.298152] system 00:0c: [mem 0xfed18000-0xfed18fff] has been reserved
[   11.304822] system 00:0c: [mem 0xfed19000-0xfed19fff] has been reserved
[   11.311495] system 00:0c: [mem 0xf8000000-0xfbffffff] has been reserved
[   11.318168] system 00:0c: [mem 0xfed20000-0xfed3ffff] has been reserved
[   11.324842] system 00:0c: [mem 0xfed90000-0xfed93fff] has been reserved
[   11.331514] system 00:0c: [mem 0xfed45000-0xfed8ffff] has been reserved
[   11.338187] system 00:0c: [mem 0xff000000-0xffffffff] has been reserved
[   11.344860] system 00:0c: [mem 0xfee00000-0xfeefffff] has been reserved
[   11.351534] system 00:0c: [mem 0xf7fef000-0xf7feffff] has been reserved
[   11.358207] system 00:0c: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[   11.364875] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
[   11.373788] pnp: PnP ACPI: found 13 devices
[   11.377963] ACPI: bus type PNP unregistered
[   11.382208] initcall pnpacpi_init+0x0/0x8c returned 0 after 325606 usecs
[   11.388968] calling  pcistub_init+0x0/0x29f @ 1
[   11.394231] initcall pcistub_init+0x0/0x29f returned 0 after 654 usecs
[   11.400756] calling  chr_dev_init+0x0/0xc6 @ 1
[   11.414450] initcall chr_dev_init+0x0/0xc6 returned 0 after 8981 usecs
[   11.420967] calling  firmware_class_init+0x0/0xec   11.433117] calling  init_pcmcia_bus+0x0/0x65 @ 1
[   11.438023] initcall init_pcmcia_bus+0x0/0x65 returned 0 after 139 usecs
[   11.444715] calling  thermal_init+0x0/0x8b @ 1
[   11.449297] initcall thermal_init+0x0/0x8b returned 0 after 76 usecs
[   11.455642] calling  cpufreq_gov_performance_init+0x0/0x12 @ 1
[   11.461532] initcall cpufreq_gov_performance_init+0x0/0x12 returned -19 after 0 usecs
[   11.469418] calling  init_acpi_pm_clocksource+0x0/0xec @ 1
[   11.478109] PM-Timer failed consistency check  (0xffffff) - aborting.
[   11.484543] initcall init_acpi_pm_clocksource+0x0/439] calling  pcibios_assign_resources+0x0/0xbd @ 1
[   11.497997] pci 0000:00:01.0: PCI bridge to [bus 01]
[   11.502969] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.509893] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.516827] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.523756] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.530691] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.537623] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.544557] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.551489] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.558423] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.565355] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.572289] pci 0000:02:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.579212] pci 0000:02:00.0: BAR 7: assigned [mem 0xf1448000-0xf1467fff 64bit]
[   11.586594] pci 0000:02:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.593512] pci 0000:02:00.0: BAR 10: assigned [mem 0xf1468000-0xf1487fff 64bit]
[   11.600981] pci 0000:02:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit]
[   11.607898] pci 0000:02:00.1: BAR 7: assigned [mem 0xf1488000-0xf14a7fff 64bit]
[   11.615281] pci 0000:02:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit]
[   11.622197] pci 0000:02:00.1: BAR 10: assigned [mem 0xf14a8000-0xf14c7fff 64bit]
[   11.629657] pci 0000:00:01.1: PCI bridge to [bus 02-03]
[   11.634939] pci 0000:00:01.1:   bridge window [io  0xe000-0xefff]
[   11.641093] pci 0000:00:01.1:   bridge window [mem 0xf0400000-0xf14fffff]
[   11.647941] pci 0000:00:1c.0: PCI bridge to [bus 04]
[   11.652965] pci 0000:00:1c.0:   bridge window [io  0xd000-0xdfff]
[   11.659121] pci 0000:00:1c.0:   bridge window [mem 0xf1a00000-0xf1afffff]
[   11.665973] pci 0000:00:1c.3: PCI bridge to [bus 05]
[   11.670991] pci 0000:00:1c.3:   bridge window [io  0xc000-0xcfff]
[   11.677149] pci 0000:00:1c.3:   bridge window [mem 0xf1900000-0xf19fffff]
[   11.684000] pci 0000:07:01.0: PCI bridge to [bus 08]
[   11.689025] pci 0000:07:01.0:   bridge window [mem 0xf1500000-0xf15fffff]
[   11.695880] pci 0000:06:00.0: PCI bridge to [bus 07-08]
[   11.701155] pci 0000:06:00.0:   bridge window [mem 0xf1500000-0xf16fffff]
[   11.708010] pci 0000:00:1c.5: PCI bridge to [bus 06-08]
[   11.713286] pci 0000:00:1c.5:   bridge window [mem 0xf1500000-0xf16fffff]
[   11.720139] pci 0000:00:1c.6: PCI bridge to [bus 09]
[   11.725159] pci 0000:00:1c.6:   bridge window [mem 0xf1800000-0xf18fffff]
[   11.732013] pci 0000:00:1c.7: PCI bridge to [bus 0a]
[   11.737030] pci 0000:00:1c.7:   bridge window [io  0xb000-0xbfff]
[   11.743186] pci 0000:00:1c.7:   bridge window [mem 0xf1700000-0xf17fffff]
[   11.750038] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7]
[   11.755661] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff]
[   11.761294] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
[   11.767619] pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff]
[   11.773944] pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff]
[   11.780329] pci_bus 0000:00: resource 9 [mem 0x000dc000-0x000dffff]
[   11.786630] pci_bus 0000:00: resource 10 [mem 0x000e0000-0x000e3fff]
[   11.793042] pci_bus 0000:00: resource 11 [mem 0x000e4000-0x000e7fff]
[   11.799456] pci_bus 0000:00: resource 12 [mem 0xbe200000-0xfeafffff]
[   11.805870] pci_bus 0000:02: resource 0 [io  0xe000-0xefff]
[   11.811504] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf14fffff]
[   11.817831] pci_bus 0000:04: resource 0 [io  0xd000-0xdfff]
[   11.823461] pci_bus 0000:04: resource 1 [mem 0xf1a00000-0xf1afffff]
[   11.829789] pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
[   11.835422] pci_bus 0000:05: resource 1 [mem 0xf1900000-0xf19fffff]
[   11.841749] pci_bus 0000:06: resource 1 [mem 0xf1500000-0xf16fffff]
[   11.848075] pci_bus 0000:07: resource 1 [mem 0xf1500000-0xf16fffff]
[   11.854401] pci_bus 0000:07: resource 5 [mem 0xf1500000-0xf16fffff]
[   11.860727] pci_bus 0000:08: resource 1 [mem 0xf1500000-0xf15fffff]
[   11.867056] pci_bus 0000:09: resource 1 [mem 0xf1800000-0xf18fffff]
[   11.873381] pci_bus 0000:0a: resource 0 [io  0xb000-0xbfff]
[   11.879013] pci_bus 0000:0a: resource 1 [mem 0xf1700000-0xf17fffff]
[   11.885342] initcall pcibios_assign_resources+0x0/0xbd returned 0 after 378372 usecs
[   11.893141] calling  sysctl_core_init+0x0/0x2c @ 1=0D
[   11.898009] initcall sysctl_core_init+0x0/0x2c returned 0 after 13 usecs=
=0D
[   11.904756] calling  inet_init+0x0/0x296 @ 1=0D
[   11.909155] NET: Registered protocol family 2=0D
[   11.913823] TCP established hash table entries: 16384 (order: 5, 131072 =
bytes)=0D
[   11.921078] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)=
=0D
[   11.927716] TCP: Hash tables configured (established 16384 bind 16384)=0D
[   11.934309] TCP: reno registered=0D
[   11.937595] UDP hash table entries: 1024 (order: 3, 32768 bytes)=0D
[   11.943660] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)=0D
[   11.950273] initcall inet_init+0x0/0x296 returned 0 after 40221 usecs=0D
[   11.956702] calling  ipv4_offload_init+0x0/0x61 @ 1=0D
[   11.961639] initcall ipv4_offload_init+0x0/0x61 returned 0 after 0 usecs=
=0D
[   11.968399] calling  af_unix_init+0x0/0x55 @ 1=0D
[   11.972921] NET: Registered protocol family 1=0D
[   11.977341] initcall af_unix_init+0x0/0x55 returned 0 after 4329 usecs=0D
[   11.983913] calling  ipv6_offload_init+0x0/0x7f @ 1=0D
[   11.988853] initcall ipv6_offload_init+0x0/0x7f returned 0 after 0 usecs=
=0D
[   11.995612] calling  init_sunrpc+0x0/0x69 @ 1=0D
[   12.000231] RPC: Registered named UNIX socket transport module.=0D
[   12.006147] RPC: Registered udp transport module.=0D
[   12.010909] RPC: Registered tcp transport module.=0D
[   12.015676] RPC: Registered tcp NFSv4.1 backchannel transport module.=0D
[   12.022175] initcall init_sunrpc+0x0/0x69 returned 0 after 21623 usecs=0D
[   12.028761] calling  pci_apply_final_quirks+0x0/0x117 @ 1=0D
[   12.034229] pci 0000:00:02.0: Boot video device=0D
[   12.039317] xen: registering gsi 16 triggering 0 polarity 1=0D
[   12.044894] xen: --> pirq=3D16 -> irq=3D16 (gsi=3D16)=0D
[   12.049537] pci 0000:00:14.0: CONFIG_USB_XHCI_HCD is turned off, default=
ing to EHCI.=0D
[   12.057275] pci 0000:00:14.0: USB 3.0 devices will work at USB 2.0 speed=
s.=0D
[   12.065195] xen: registering gsi 16 triggering 0 polarity 1=0D
[   12.070759] Already setup the GSI :16=0D
[   12.090388] xen: registering gsi 23 triggering 0 polarity 1=0D
[   12.095965] xen: --> pirq=3D23 -> irq=3D23 (gsi=3D23)=0D
[   12.116471] xen: registering gsi 18 triggering 0 polarity 1=0D
[   12.122045] xen: --> pirq=3D18 -> irq=3D18 (gsi=3D18)=0D
[   12.126.153014] Unpacking initramfs...
[   13.250266] Freeing initrd memory: 83288K (ffff8800023f7000 - ffff88000754d000)
[   13.257576] initcall populate_rootfs+0x0/0x112 returned 0 after 1078783 usecs
[   13.264762] calling  pci_iommu_init+0x0/0x41 @ 1
[   13.269442] initcall pci_iommu_init+0x0/0x41 returned 0 after 0 usecs
[   13.275940] calling  calgary_fixup_tce_spaces+0x0/0x105 @ 1
[   13.281573] initcall calgary_fixup_tce_spaces+0x0/0x105 returned -19 after 0 usecs
[   13.289219] calling  register_kernel_offset_dumper+0x0/0x1b @ 1
[   13.295180] initcall register_kernel_offset_dumper+0x0/0x1b returned 0 after 0 usecs
[   13.302978] calling  i8259A_init_ops+0x0/0x21 @ 1
[   13.307745] initcall i8259A_init_ops+0x0/0x21 returned 0 after 0 usecs
[   13.314332] calling  vsyscall_init+0x0/0x27 @ 1
[   13.318930] initcall vsyscall_init+0x0/0x27 returned 0 after 4 usecs
[   13.325339] calling  sbf_init+0x0/0xf6 @ 1
[   13.329498] initcall sbf_init+0x0/0xf6 returned 0 after 0 usecs
[   13.335478] calling  init_tsc_clocksource+0x0/0xc2 @ 1
[   13.340678] initcall init_tsc_clocksource+0x0/0xc2 returned 0 after 1 usecs
[   13.347697] calling  add_rtc_cmos+0x0/0xb4 @ 1
[   13.352206] initcall add_rtc_cmos+0x0/0xb4 returned 0 after 2 usecs
[   13.358530] calling  i8237A_init_ops+0x0/0x14 @ 1
[   13.363297] initcall i8237A_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.369884] calling  cache_sysfs_init+0x0/0x65 @ 1
[   13.374987] initcall cache_sysfs_init+0x0/0x65 returned 0 after 244 usecs
[   13.381768] calling  amd_uncore_init+0x0/0x130 @ 1
[   13.386620] initcall amd_uncore_init+0x0/0x130 returned -19 after 0 usecs
[   13.393465] calling  amd_iommu_pc_init+0x0/0x150 @ 1
[   13.398494] initcall amd_iommu_pc_init+0x0/0x150 returned -19 after 0 usecs
[   13.405512] calling  intel_uncore_init+0x0/0x3ab @ 1
[   13.410538] initcall intel_uncore_init+0x0/0x3ab returned -19 after 0 usecs
[   13.417558] calling  rapl_pmu_init+0x0/0x1f8 @ 1
[   13.422254] RAPL PMU detected, hw unit 2^-14 Joules, API unit is 2^-32 Joules, 3 fixed counters 655360 ms ovfl timer
[   13.432811] initcall rapl_pmu_init+0x0/0x1f8 returned 0 after 10325 usecs
[   13.439660] calling  inject_init+0x0/0x30 @ 1
[   13.444075] Machine check injector initialized
[   13.448584] initcall inject_init+0x0/0x30 returned 0 after 4402 usecs
[   13.455082] calling  thermal_throttle_init_device+0x0/0x9c @ 1
[   13.460975] initcall thermal_throttle_init_device+0x0/0x9c returned 0 after 0 usecs
[   13.468689] calling  microcode_init+0x0/0x1b1 @ 1
[   13.473648] microcode: CPU0 sig=0x306c3, pf=0x2, revision=0x7
[   13.479774] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   13.488550] initcall microcode_init+0x0/0x1b1 returned 0 after 14739 usecs
[   13.495482] calling  amd_ibs_init+0x0/0x292 @ 1
[   13.500069] initcall amd_ibs_init+0x0/0x292 returned -19 after 0 usecs
[   13.506656] calling  msr_init+0x0/0x162 @ 1
[   13.511125] initcall msr_init+0x0/0x162 returned 0 after 216 usecs
[   13.517296] calling  cpuid_init+0x0/0x162 @ 1
[   13.521909] initcall cpuid_init+0x0/0x162 returned 0 after 195 usecs
[   13.528255] calling  ioapic_init_ops+0x0/0x14 @ 1
[   13.533019] initcall ioapic_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.539607] calling  add_pcspkr+0x0/0x40 @ 1
[   13.544043] initcall add_pcspkr+0x0/0x40 returned 0 after 101 usecs
[   13.550298] calling  start_periodic_check_for_corruption+0x0/0x50 @ 1
[   13.556796] Scanning for low memory corruption every 60 seconds
[   13.562772] initcall start_periodic_check_for_corruption+0x0/0x50 returned 0 after 5835 usecs
[   13.571350] calling  sysfb_init+0x0/0x9c @ 1
[   13.575796] initcall sysfb_init+0x0/0x9c returned 0 after 109 usecs
[   13.582060] calling  audit_classes_init+0x0/0xaf @ 1
[   13.587097] initcall audit_classes_init+0x0/0xaf returned 0 after 13 usecs
[   13.594016] calling  pt_dump_init+0x0/0x30 @ 1
[   13.598532] initcall pt_dump_init+0x0/0x30 returned 0 after 8 usecs
[   13.604850] calling  ia32_binfmt_init+0x0/0x14 @ 1
[   13.609709] initcall ia32_binfmt_init+0x0/0x14 returned 0 after 6 usecs
[   13.616374] calling  proc_execdomains_init+0x0/0x22 @ 1
[   13.621667] initcall proc_execdomains_init+0x0/0x22 returned 0 after 5 usecs
[   13.628766] calling  ioresources_init+0x0/0x3c @ 1
[   13.633626] initcall ioresources_init+0x0/0x3c returned 0 after 6 usecs
[   13.640294] calling  uid_cache_init+0x0/0x85 @ 1
[   13.644989] initcall uid_cache_init+0x0/0x85 returned 0 after 16 usecs
[   13.651561] calling  init_posix_timers+0x0/0x240 @ 1
[   13.656598] initcall init_posix_timers+0x0/0x240 returned 0 after 12 usecs
[   13.663518] calling  init_posix_cpu_timers+0x0/0xbf @ 1
[   13.668806] initcall init_posix_cpu_timers+0x0/0xbf returned 0 after 0 usecs
[   13.675911] calling  proc_schedstat_init+0x0/0x22 @ 1
[   13.681029] initcall proc_schedstat_init+0x0/0x22 returned 0 after 3 usecs
[   13.687957] calling  snapshot_device_init+0x0/0x12 @ 1
[   13.693282] initcall snapshot_device_init+0x0/0x12 returned 0 after 120 usecs
[   13.700404] calling  irq_pm_init_ops+0x0/0x14 @ 1
[   13.705169] initcall irq_pm_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.711757] calling  create_proc_profile+0x0/0x300 @ 1
[   13.716957] initcall create_proc_profile+0x0/0x300 returned 0 after 0 usecs
[   13.723975] calling  timekeeping_init_ops+0x0/0x14 @ 1
[   13.729175] initcall timekeeping_init_ops+0x0/0x14 returned 0 after 0 usecs
[   13.736196] calling  init_clocksource_sysfs+0x0/0x69 @ 1
[   13.741786] initcall init_clocksource_sysfs+0x0/0x69 returned 0 after 212 usecs
[   13.749086] calling  init_timer_list_procfs+0x0/0x2c @ 1
[   13.754464] initcall init_timer_list_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.761650] calling  alarmtimer_init+0x0/0x15f @ 1
[   13.766699] initcall alarmtimer_init+0x0/0x15f returned 0 after 191 usecs
[   13.773473] calling  clockevents_init_sysfs+0x0/0xd2 @ 1
[   13.779148] initcall clockevents_init_sysfs+0x0/0xd2 returned 0 after 295 usecs
[   13.786477] calling  init_tstats_procfs+0x0/0x2c @ 1
[   13.791507] initcall init_tstats_procfs+0x0/0x2c returned 0 after 4 usecs
[   13.798349] calling  futex_init+0x0/0xf6 @ 1
[   13.802699] futex hash table entries: 256 (order: 2, 16384 bytes)
[   13.808840] initcall futex_init+0x0/0xf6 returned 0 after 6012 usecs
[   13.815249] calling  proc_dma_init+0x0/0x22 @ 1
[   13.819848] initcall proc_dma_init+0x0/0x22 returned 0 after 4 usecs
[   13.826253] calling  proc_modules_init+0x0/0x22 @ 1
[   13.831197] initcall proc_modules_init+0x0/0x22 returned 0 after 3 usecs
[   13.837954] calling  kallsyms_init+0x0/0x25 @ 1
[   13.842548] initcall kallsyms_init+0x0/0x25 returned 0 after 3 usecs
[   13.848961] calling  crash_save_vmcoreinfo_init+0x0/0x53f @ 1
[   13.854775] initcall crash_save_vmcoreinfo_init+0x0/0x53f returned 0 after 9 usecs
[   13.862393] calling  crash_notes_memory_init+0x0/0x36 @ 1
[   13.867855] initcall crash_notes_memory_init+0x0/0x36 returned 0 after 2 usecs
[   13.875132] calling  pid_namespaces_init+0x0/0x2d @ 1
[   13.880258] initcall pid_namespaces_init+0x0/0x2d returned 0 after 11 usecs
[   13.887266] calling  ikconfig_init+0x0/0x3c @ 1
[   13.891862] initcall ikconfig_init+0x0/0x3c returned 0 after 3 usecs
[   13.898271] calling  audit_init+0x0/0x141 @ 1
[   13.902691] audit: initializing netlink socket (disabled)
[   13.908177] type=2000 audit(1390621323.439:1): initialized
[   13.913700] initcall audit_init+0x0/0x141 returned 0 after 10750 usecs
[   13.920283] calling  audit_watch_init+0x0/0x3a @ 1
[   13.925138] initcall audit_watch_init+0x0/0x3a returned 0 after 1 usecs
[   13.931810] calling  audit_tree_init+0x0/0x49 @ 1
[   13.936579] initcall audit_tree_init+0x0/0x49 returned 0 after 1 usecs
[   13.943162] calling  init_kprobes+0x0/0x16c @ 1
[   13.958206] initcall init_kprobes+0x0/0x16c returned 0 after 10204 usecs
[   13.964891] calling  hung_task_init+0x0/0x56 @ 3.993058] initcall init_tracepoints+0x0/0x20 returned 0 after 0 usecs
[   13.999730] calling  init_blk_tracer+0x0/0x5a @ 1
[   14.004498] initcall init_blk_tracer+0x0/0x5a returned 0 after 1 usecs
[   14.011083] calling  irq_work_init_cpu_notifier+0x0/0x29 @ 1
[   14.016801] initcall irq_work_init_cpu_notifier+0x0/0x29 returned 0 after 0 usecs
[   14.024341] calling  perf_event_sysfs_init+0x0/0x93 @ 1
[   14.030158] initcall perf_event_sysfs_init+0x0/0x93 returned 0 after 516 usecs
[   14.037374] calling  init_per_zone_wmark_min+0x0/0xa9 @ 1
[   14.042841] initcall init_per_zone_wmark_min+0x0/0xa9 returned 0 after 11 usecs
[   14.050196] calling  kswapd_init+0x0/0x76 @ 1
[   14.054667] initcall kswapd_init+0x0/0x76 returned 0 after 51 usecs
[   14.060941] calling  extfrag_debug_init+0x0/0x7e @ 1
[   14.065985] initcall extfrag_debug_init+0x0/0x7e returned 0 after 19 usecs
[   14.072899] calling  setup_vmstat+0x0/0xf3 @ 1
[   14.077421] initcall setup_vmstat+0x0/0xf3 returned 0 after 15 usecs
[   14.083818] calling  mm_sysfs_init+0x0/0x29 @ 1
[   14.088422] initcall mm_sysfs_init+0x0/0x29 returned 0 after 10 usecs
[   14.094912] calling  mm_compute_batch_init+0x0/0x19 @ 1
[   14.100199] initcall mm_compute_batch_init+0x0/0x19 returned 0 after 0 usecs
[   14.107305] calling  slab_proc_init+0x0/0x25 @ 1
[   14.111990] initcall slab_proc_init+0x0/0x25 returned 0 after 3 usecs
[   14.118486] calling  init_reserve_notifier+0x0/0x26 @ 1
[   14.123771] initcall init_reserve_notifier+0x0/0x26 returned 0 after 0 usecs
[   14.130877] calling  init_admin_reserve+0x0/0x40 @ 1
[   14.135903] initcall init_admin_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.142749] calling  init_user_reserve+0x0/0x40 @ 1
[   14.147690] initcall init_user_reserve+0x0/0x40 returned 0 after 0 usecs
[   14.154451] calling  proc_vmalloc_init+0x0/0x25 @ 1
[   14.159394] initcall proc_vmalloc_init+0x0/0x25 returned 0 after 4 usecs
[   14.166149] calling  procswaps_init+0x0/0x22 @ 1
[   14.170832] initcall procswaps_init+0x0/0x22 returned 0 after 3 usecs
[   14.177329] calling  init_frontswap+0x0/0x96 @ 1
[   14.182038] initcall init_frontswap+0x0/0x96 returned 0 after 28 usecs
[   14.188597] calling  hugetlb_init+0x0/0x4c2 @ 1
[   14.193189] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[   14.199693] initcall hugetlb_init+0x0/0x4c2 returned 0 after 6351 usecs
[   14.206291] calling  mmu_notifier_init+0x0/0x12 @ 1
[   14.211236] initcall mmu_notifier_init+0x0/0x12 returned 0 after 2 usecs
[   14.217993] calling  slab_proc_init+0x0/0x8 @ 1
[   14.222585] initcall slab_proc_init+0x0/0x8 returned 0 after 0 usecs
[   14.228998] calling  cpucache_init+0x0/0x4b @ 1
[   14.233592] initcall cpucache_init+0x0/0x4b returned 0 after 0 usecs
[   14.240004] calling  hugepage_init+0x0/0x145 @ 1
[   14.244685] initcall hugepage_init+0x0/0x145 returned -22 after 0 usecs
[   14.251356] calling  init_cleancache+0x0/0xbc @ 1
[   14.256151] initcall init_cleancache+0x0/0xbc returned 0 after 27 usecs
[   14.262796] calling  fcntl_init+0x0/0x2a @ 1
[   14.267141] initcall fcntl_init+0x0/0x2a returned 0 after 11 usecs
[   14.273371] calling  proc_filesystems_init+0x0/0x22 @ 1
[   14.278661] initcall proc_filesystems_init+0x0/0x22 returned 0 after 4 usecs
[   14.285763] calling  dio_init+0x0/0x2d @ 1
[   14.289935] initcall dio_init+0x0/0x2d returned 0 after 10 usecs
[   14.295990] calling  fsnotify_mark_init+0x0/0x40 @ 1
[   14.301042] initcall fsnotify_mark_init+0x0/0x40 returned 0 after 25 usecs
[   14.307953] calling  dnotify_init+0x0/0x7b @ 1
[   14.312480] initcall dnotify_init+0x0/0x7b returned 0 after 21 usecs
[   14.318870] calling  inotify_user_setup+0x0/0x4b @ 1
[   14.323912] initcall inotify_user_setup+0x0/0x4b returned 0 after 12 usecs
[   14.330832] calling  aio_setup+0x0/0x7d @ 1
[   14.335131] initcall aio_setup+0x0/0x7d returned 0 after 52 usecs
[   14.341231] calling  proc_locks_init+0x0/0x22 @ 1
[   14.346001] initcall proc_locks_init+0x0/0x22 returned 0 after 3 usecs
[   14.352582] calling  init_sys32_ioctl+0x0/0x28 @ 1
[   14.357479] initcall init_sys32_ioctl+0x0/0x28 returned 0 after 44 usecs
[   14.364197] calling  dquot_init+0x0/0x121 @ 1
[   14.368616] VFS: Disk quotas dquot_6.5.2
[   14.372634] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[   14.379103] initcall dquot_init+0x0/0x121 returned 0 after 10240 usecs
[   14.385687] calling  init_v2_quota_format+0x0/0x22 @ 1
[   14.390886] initcall init_v2_quota_format+0x0/0x22 returned 0 after 0 usecs
[   14.397906] calling  quota_init+0x0/0x31 @ 1
[   14.402259] initcall quota_init+0x0/0x31 returned 0 after 17 usecs
[   14.408479] calling  proc_cmdline_init+0x0/0x22 @ 1
[   14.413422] initcall proc_cmdline_init+0x0/0x22 returned 0 after 4 usecs
[   14.420180] calling  proc_consoles_init+0x0/0x22 @ 1
[   14.425209] initcall proc_consoles_init+0x0/0x22 returned 0 after 3 usecs
[   14.432051] calling  proc_cpuinfo_init+0x0/0x22 @ 1
[   14.436994] initcall proc_cpuinfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.443752] calling  proc_devices_init+0x0/0x22 @ 1
[   14.448694] initcall proc_devices_init+0x0/0x22 returned 0 after 3 usecs
[   14.455451] calling  proc_interrupts_init+0x0/0x22 @ 1
[   14.460654] initcall proc_interrupts_init+0x0/0x22 returned 0 after 3 usecs
[   14.467670] calling  proc_loadavg_init+0x0/0x22 @ 1
[   14.472615] initcall proc_loadavg_init+0x0/0x22 returned 0 after 3 usecs
[   14.479372] calling  proc_meminfo_init+0x0/0x22 @ 1
[   14.484314] initcall proc_meminfo_init+0x0/0x22 returned 0 after 3 usecs
[   14.491069] calling  proc_stat_init+0x0/0x22 @ 1
[   14.495753] initcall proc_stat_init+0x0/0x22 returned 0 after 3 usecs
[   14.502249] calling  proc_uptime_init+0x0/0x22 @ 1
[   14.507105] initcall proc_uptime_init+0x0/0x22 returned 0 after 3 usecs
[   14.513776] calling  proc_version_init+0x0/0x22 @ 1
[   14.518718] initcall proc_version_init+0x0/0x22 returned 0 after 3 usecs
[   14.525476] calling  proc_softirqs_init+0x0/0x22 @ 1
[   14.530505] initcall proc_softirqs_init+0x0/0x22 returned 0 after 3 usecs
[   14.537348] calling  proc_kcore_init+0x0/0xb5 @ 1
[   14.542125] initcall proc_kcore_init+0x0/0xb5 returned 0 after 10 usecs
[   14.548789] calling  vmcore_init+0x0/0x5cb @ 1
[   14.553294] initcall vmcore_init+0x0/0x5cb returned 0 after 0 usecs
[   14.559620] calling  proc_kmsg_init+0x0/0x25 @ 1
[   14.564304] initcall proc_kmsg_init+0x0/0x25 returned 0 after 3 usecs
[   14.570800] calling  proc_page_init+0x0/0x42 @ 1
[   14.575487] initcall proc_page_init+0x0/0x42 returned 0 after 6 usecs
[   14.581981] calling  init_devpts_fs+0x0/0x62 @ 1
[   14.586705] initcall init_devpts_fs+0x0/0x62 returned 0 after 43 usecs
[   14.593246] calling  init_hugetlbfs_fs+0x0/0x15d @ 1
[   14.598346] initcall init_hugetlbfs_fs+0x0/0x15d returned 0 after 72 usecs
[   14.605207] calling  init_fat_fs+0x0/0x4f @ 1
[   14.609646] initcall init_fat_fs+0x0/0x4f returned 0 after 20 usecs
[   14.615953] calling  init_vfat_fs+0x0/0x12 @ 1
[   14.620459] initcall init_vfat_fs+0x0/0x12 returned 0 after 0 usecs
[   14.626785] calling  init_msdos_fs+0x0/0x12 @ 1
[   14.631380] initcall init_msdos_fs+0x0/0x12 returned 0 after 0 usecs
[   14.637791] calling  init_iso9660_fs+0x0/0x70 @ 1
[   14.642581] initcall init_iso9660_fs+0x0/0x70 returned 0 after 23 usecs
[   14.649233] calling  init_nfs_fs+0x0/0x16c @ 1
[   14.653933] initcall init_nfs_fs+0x0/0x16c returned 0 after 190 usecs
[   14.660363] calling  init_nfs_v2+0x0/0x14 @ 1
[   14.664781] initcall init_nfs_v2+0x0/0x14 returned 0 after 0 usecs
[   14.671021] calling  init_nfs_v3+0x0/0x14 @ 1
[   14.675439] initcall init_nfs_v3+0x0/0x14 returned 0 after 0 usecs
[   14.681679] calling  init_nfs_v4+0x0/0x3b @ 1
[   14.686098] NFS: Registering the id_resolver key type
[   14.691225] Key type id_resolver registered
[   14.695457] Key type id_legacy registered
[   14.699535] initcall init_nfs_v4+0x0/0x3b returned 0 after 13121 usecs
[   14.706118] calling  init_nlm+0x0/0x4c @ 1
[   14.710286] initcall init_nlm+0x0/0x4c returned 0 after 7 usecs
[   14.716257] calling  init_nls_cp437+0x0/0x12 @ 1
[   14.720937] initcall init_nls_cp437+0x0/0x12 returned 0 after 0 usecs
[   14.727437] calling  init_nls_ascii+0x0/0x12 @ 1
[   14.732117] initcall init_nls_ascii+0x0/0x12 returned 0 after 0 usecs
[   14.738617] calling  init_nls_iso8859_1+0x0/0x12 @ 1
[   14.743642] initcall init_nls_iso8859_1+0x0/0x12 returned 0 after 0 usecs
[   14.750489] calling  init_nls_utf8+0x0/0x2b @ 1
[   14.755082] initcall init_nls_utf8+0x0/0x2b returned 0 after 0 usecs
[   14.761496] calling  init_ntfs_fs+0x0/0x1d1 @ 1
[   14.766090] NTFS driver 2.1.30 [Flags: R/W].
[   14.770476] initcall init_ntfs_fs+0x0/0x1d1 returned 0 after 4282 usecs
[   14.777097] calling  init_autofs4_fs+0x0/0x2a @ 1
[   14.782028] initcall init_autofs4_fs+0x0/0x2a returned 0 after 129 usecs
[   14.788725] calling  init_pstore_fs+0x0/0x53 @ 1
[   14.793406] initcall init_pstore_fs+0x0/0x53 returned 0 after 11 usecs
[   14.799982] calling  ipc_init+0x0/0x2f @ 1
[   14.804149] msgmni has been set to 3857
[   14.808051] initcall ipc_init+0x0/0x2f returned 0 after 3817 usecs
[   14.814281] calling  ipc_sysctl_init+0x0/0x14 @ 1
[   14.819054] initcall ipc_sysctl_init+0x0/0x14 returned 0 after 7 usecs
[   14.825633] calling  init_mqueue_fs+0x0/0xa2 @ 1
[   14.830373] initcall init_mqueue_fs+0x0/0xa2 returned 0 after 57 usecs
[   14.836900] calling  key_proc_init+0x0/0x5e @ 1
[   14.841500] initcall key_proc_init+0x0/0x5e returned 0 after 7 usecs
[   14.847906] calling  selinux_nf_ip_init+0x0/0x69 @ 1
[   14.852932] SELinux:  Registering netfilter hooks
[   14.857834] initcall selinux_nf_ip_init+0x0/0x69 returned 0 after 4785 usecs
[   14.864867] calling  init_sel_fs+0x0/0xa5 @ 1
[   14.869649] initcall init_sel_fs+0x0/0xa5 returned 0 after 353 usecs
[   14.875986] calling  selnl_init+0x0/0x56 @ 1
[   14.880333] initcall selnl_init+0x0/0x56 returned 0 after 15 usecs
[   14.886559] calling  sel_netif_init+0x0/0x5c @ 1
[   14.891241] initcall sel_netif_init+0x0/0x5c returned 0 after 3 usecs
[   14.897738] calling  sel_netnode_init+0x0/0x6a @ 1
[   14.902593] initcall sel_netnode_init+0x0/0x6a returned 0 after 1 usecs
[   14.909265] calling  sel_netport_init+0x0/0x6a @ 1
[   14.914122] initcall sel_netport_init+0x0/0x6a returned 0 after 2 usecs
[   14.920791] calling  aurule_init+0x0/0x2d @ 1
[   14.925212] initcall aurule_init+0x0/0x2d returned 0 after 1 usecs
[   14.931449] calling  crypto_wq_init+0x0/0x33 @ 1
[   14.936159] initcall crypto_wq_init+0x0/0x33 returned 0 after 29 usecs
[   14.942718] calling  crypto_algapi_init+0x0/0xd @ 1
[   14.947661] initcall crypto_algapi_init+0x0/0xd returned 0 after 4 usecs
[   14.954416] calling  chainiv_module_init+0x0/0x12 @ 1
[   14.959529] initcall chainiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.966463] calling  eseqiv_module_init+0x0/0x12 @ 1
[   14.971490] initcall eseqiv_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.978336] calling  hmac_module_init+0x0/0x12 @ 1
[   14.983189] initcall hmac_module_init+0x0/0x12 returned 0 after 0 usecs
[   14.989862] calling  md5_mod_init+0x0/0x12 @ 1
[   14.994400] initcall md5_mod_init+0x0/0x12 returned 0 after 31 usecs
[   15.000782] calling  sha1_generic_mod_init+0x0/0x12 @ 1
[   15.006096] initcall sha1_generic_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.013261] calling  crypto_cbc_module_init+0x0/0x12 @ 1
[   15.018634] initcall crypto_cbc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.025827] calling  des_generic_mod_init+0x0/0x17 @ 1
[   15.031078] initcall des_generic_mod_init+0x0/0x17 returned 0 after 50 usecs
[   15.038135] calling  aes_init+0x0/0x12 @ 1
[   15.042321] initcall aes_init+0x0/0x12 returned 0 after 27 usecs
[   15.048362] calling  zlib_mod_init+0x0/0x12 @ 1
[   15.052981] initcall zlib_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.059454] calling  crypto_authenc_module_init+0x0/0x12 @ 1
[   15.065177] initcall crypto_authenc_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.072716] calling  crypto_authenc_esn_module_init+0x0/0x12 @ 1
[   15.078780] initcall crypto_authenc_esn_module_init+0x0/0x12 returned 0 after 0 usecs
[   15.086666] calling  krng_mod_init+0x0/0x12 @ 1
[   15.091288] initcall krng_mod_init+0x0/0x12 returned 0 after 26 usecs
[   15.097762] calling  proc_genhd_init+0x0/0x3c @ 1
[   15.102537] initcall proc_genhd_init+0x0/0x3c returned 0 after 7 usecs
[   15.109113] calling  bsg_init+0x0/0x12e @ 1
[   15.113436] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
[   15.120822] initcall bsg_init+0x0/0x12e returned 0 after 7288 usecs
[   15.127146] calling  noop_init+0x0/0x12 @ 1
[   15.131394] io scheduler noop registered
[   15.135379] initcall noop_init+0x0/0x12 returned 0 after 3890 usecs
[   15.141705] calling  deadline_init+0x0/0x12 @ 1
[   15.146298] io scheduler deadline registered
[   15.150632] initcall deadline_init+0x0/0x12 returned 0 after 4231 usecs
[   15.157306] calling  cfq_init+0x0/0x8b @ 1
[   15.161486] io scheduler cfq registered (default)
[   15.166232] initcall cfq_init+0x0/0x8b returned 0 after 4655 usecs
[   15.172471] calling  percpu_counter_startup+0x0/0x38 @ 1
[   15.177845] initcall percpu_counter_startup+0x0/0x38 returned 0 after 0 usecs
[   15.185038] calling  pci_proc_init+0x0/0x6a @ 1
[   15.189814] initcall pci_proc_init+0x0/0x6a returned 0 after 179 usecs
[   15.196330] calling  pcie_portdrv_init+0x0/0x7a @ 1
[   15.202004] xen: registering gsi 16 triggering 0 polarity 1
[   15.207567] Already setup the GSI :16
[   15.212106] xen: registering gsi 16 triggering 0 polarity 1
[   15.217670] Already setup the GSI :16
[   15.222185] xen: registering gsi 16 triggering 0 polarity 1
[   15.227750] Already setup the GSI :16
[   15.232119] xen: registering gsi 19 triggering 0 polarity 1
[   15.237694] xen: --> pirq=19 -> irq=19 (gsi=19)
[   15.242940] xen: registering gsi 17 triggering 0 polarity 1
[   15.248516] xen: --> pirq=17 -> irq=17 (gsi=17)
[   15.253843] xen: registering gsi 19 triggering 0 polarity 1
[   15.259406] Already setup the GSI :19
[   15.263326] initcall pcie_portdrv_init+0x0/0x7a returned 0 after 60600 usecs
[   15.270362] calling  aer_service_init+0x0/0x2b @ 1
[   15.275286] initcall aer_service_init+0x0/0x2b returned 0 after 72 usecs
[   15.281972] calling  pci_hotplug_init+0x0/0x1d @ 1
[   15.286826] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[   15.292458] initcall pci_hotplug_init+0x0/0x1d returned 0 after 5499 usecs
[   15.299392] calling  pcied_init+0x0/0x79 @ 1
[   15.303927] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[   15.310533] initcall pcied_init+0x0/0x79 returned 0 after 6648 usecs
[   15.316944] calling  pcifront_init+0x0/0x3f @ 1
[   15.321533] initcall pcifront_init+0x0/0x3f returned -19 after 0 usecs
[   15.328120] calling  genericbl_driver_init+0x0/0x14 @ 1
[   15.333520] initcall genericbl_driver_init+0x0/0x14 returned 0 after 109 usecs
[   15.340731] calling  cirrusfb_init+0x0/0xcc @ 1
[   15.345415] initcall cirrusfb_init+0x0/0xcc returned 0 after 89 usecs
[   15.351841] calling  efifb_driver_init+0x0/0x14 @ 1
[   15.356856] initcall efifb_driver_init+0x0/0x14 returned 0 after 72 usecs
[   15.363630] calling  intel_idle_init+0x0/0x331 @ 1
[   15.368482] intel_idle: MWAIT substates: 0x42120
[   15.373163] intel_idle: v0.4 model 0x3C
[   15.377060] intel_idle: lapic_timer_reliable_states 0xffffffff
[   15.382958] intel_idle: intel_idle yielding to none
[   15.387632] initcall intel_idle_init+0x0/0x331 returned -19 after 18700 usecs
[   15.395086] calling  acpi_reserve_resources+0x0/0xeb @ 1
[   15.400467] initcall acpi_reserve_resources+0x0/0xeb returned 0 after 8 usecs
[   15.407653] calling  acpi_ac_init+0x0/0x2a @ 1
[   15.412233] initcall acpi_ac_init+0x0/0x2a returned 0 after 73 usecs
[   15.418582] calling  acpi_button_driver_init+0x0/0x12 @ 1
[   15.424317] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0
[   15.432487] ACPI: Power Button [PWRB]
[   15.436478] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[   15.443865] ACPI: Power Button [PWRF]
[   15.447660] initcall acpi_button_driver_init+0x0/0x12 returned 0 after 23074 usecs
[   15.455214] calling  acpi_fan_driver_init+0x0/0x12 @ 1
[   15.460653] ACPI: Fan [FAN0] (off)
[   15.464281] ACPI: Fan [FAN1] (off)
[   15.467894] ACPI: Fan [FAN2] (off)
[   15.471509] ACPI: Fan [FAN3] (off)
[   15.475111] ACPI: Fan [FAN4] (off)
[   15.478582] initcall acpi_fan_driver_init+0x0/0x12 returned 0 after 17742 usecs
[   15.485881] calling  acpi_processor_driver_init+0x0/0x43 @ 1
[   15.504111] ACPI Error: [\PETE] Namespace lookup failure, AE_NOT_FOUND (20131115/psargs-359)
[   15.512533] ACPI Error: Method parse/execution failed [\_PR_.CPU0._TPC] (Node ffff8800784b2ce0), AE_NOT_FOUND (20131115/psparse-536)
[   15.528192] Monitor-Mwait will be used to enter C-1 state
[   15.533579] Monitor-Mwait will be used to enter C-2 state
[   15.539239] Warning: Processor Platform Limit not supported.
[   15.544887] initcall acpi_processor_driver_init+0x0/0x43 returned 0 after 52040 usecs
[   15.552770] calling  acpi_thermal_init+0x0/0x42 @ 1
[   15.560984] thermal LNXTHERM:00: registered as thermal_zone0
[   15.566629] ACPI: Thermal Zone [TZ00] (28 C)
[   15.573132] thermal LNXTHERM:01: registered as thermal_zone1
[   15.578785] ACPI: Thermal Zone [TZ01] (30 C)
[   15.58345ry_init+0x0/0x16 @ 1
[   15.595431] initcall acpi_battery_init+0x0/0x16 returned 0 after 2 usecs=
=0D
[   15.602188] calling  acpi_hed_driver_init+0x0/0x12 @ 1
[   15.607435] calling  1_acpi_battery_init_async+0x0/0x35 @ 6
[   15.613155] initcall acpi_hed_driver_init+0x0/0x12 returned 0 after 5630 usecs
[   15.620360] calling  erst_init+0x0/0x2fc @ 1
[   15.624736] ERST: Error Record Serialization Table (ERST) support is initialized.
[   15.632240] pstore: Registered erst as persistent store backend
[   15.638212] initcall erst_init+0x0/0x2fc returned 0 after 13202 usecs
[   15.644713] calling  ghes_init+0x0/0x173 @ 1
[   15.649195] initcall 1_acpi_battery_init_async+0x0/0x35 returned 0 after 35327 usecs
[   15.657638] \_SB_:_OSC request failed
[   15.661294] _OSC request data:1 1 0 
[   15.664932] \_SB_:_OSC invalid UUID
[   15.668484] _OSC request data:1 1 0 
[   15.672121] GHES: APEI firmware first mode is enabled by APEI bit.
[   15.678364] initcall ghes_init+0x0/0x173 returned 0 after 28630 usecs
[   15.684863] calling  einj_init+0x0/0x522 @ 1
[   15.689259] EINJ: Error INJection is initialized.
[   15.693964] initcall einj_init+0x0/0x522 returned 0 after 4655 usecs
[   15.700377] calling  ioat_init_module+0x0/0xb1 @ 1
[   15.705229] ioatdma: Intel(R) QuickData Technology Driver 4.00
[   15.711278] initcall ioat_init_module+0x0/0xb1 returned 0 after 5906 usecs
[   15.718161] calling  virtio_mmio_init+0x0/0x14 @ 1
[   15.723070] initcall virtio_mmio_init+0x0/0x14 returned 0 after 72 usecs
[   15.729756] calling  virtio_balloon_driver_init+0x0/0x12 @ 1
[   15.735547] initcall virtio_balloon_driver_init+0x0/0x12 returned 0 after 69 usecs
[   15.743105] calling  xenbus_probe_initcall+0x0/0x39 @ 1
[   15.748388] initcall xenbus_probe_initcall+0x0/0x39 returned 0 after 0 usecs
[   15.755494] calling  xenbus_init+0x0/0x3d @ 1
[   15.760053] initcall xenbus_init+0x0/0x3d returned 0 after 134 usecs
[   15.766398] calling  xenbus_backend_init+0x0/0x51 @ 1
[   15.771634] initcall xenbus_backend_init+0x0/0x51 returned 0 after 121 usecs
[   15.778671] calling  gntdev_init+0x0/0x4d @ 1
[   15.783281] initcall gntdev_init+0x0/0x4d returned 0 after 155 usecs
[   15.789626] calling  gntalloc_init+0x0/0x3d @ 1
[   15.794346] initcall gntalloc_init+0x0/0x3d returned 0 after 127 usecs
[   15.800865] calling  hypervisor_subsys_init+0x0/0x25 @ 1
[   15.806237] initcall hypervisor_subsys_init+0x0/0x25 returned 0 after 0 usecs
[   15.813428] calling  hyper_sysfs_init+0x0/0x103 @ 1
[   15.818434] initcall hyper_sysfs_init+0x0/0x103 returned 0 after 64 usecs
[   15.825214] calling  platform_pci_module_init+0x0/0x1b @ 1
[   15.830854] initcall platform_pci_module_init+0x0/0x1b returned 0 after 90 usecs
[   15.838232] calling  xen_late_init_mcelog+0x0/0x3d @ 1
[   15.843625] initcall xen_late_init_mcelog+0x0/0x3d returned 0 after 190 usecs
[   15.850747] calling  xen_pcibk_init+0x0/0x13f @ 1
[   15.855539] xen_pciback: backend is vpci
[   15.859576] initcall xen_pcibk_init+0x0/0x13f returned 0 after 3968 usecs
[   15.866358] calling  xen_acpi_processor_init+0x0/0x24b @ 1
[   15.872671] xen_acpi_processor: Uploading Xen processor PM info
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(1) cpuid(0) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU0: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 0 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(2) cpuid(2) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU2: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 2 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(3) cpuid(4) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:05] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:05] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:05] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:05] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:05] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:05] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:05] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:05] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:05] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:05] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:05] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:05] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:05] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:05] 	_PPC: 0
(XEN) [2014-01-25 03:42:05] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:05] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:05] CPU4: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:05] CPU 4 initialization completed
(XEN) [2014-01-25 03:42:05] Set CPU acpi_id(4) cpuid(6) Px State info:
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:05] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:05] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:05] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:05] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:05] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU6: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 6 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(5) cpuid(1) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU1: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 1 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(6) cpuid(3) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU3: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 3 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(7) cpuid(5) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU5: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 5 initialization completed
(XEN) [2014-01-25 03:42:06] Set CPU acpi_id(8) cpuid(7) Px State info:
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PCT: descriptor=130, length=12, space_id=127, bit_width=0, bit_offset=0, reserved=0, address=0
(XEN) [2014-01-25 03:42:06] 	_PSS: state_count=16
(XEN) [2014-01-25 03:42:06] 	State0: 3401MHz 84000mW 10us 10us 0x2600 0x2600
(XEN) [2014-01-25 03:42:06] 	State1: 3400MHz 84000mW 10us 10us 0x2200 0x2200
(XEN) [2014-01-25 03:42:06] 	State2: 3200MHz 77169mW 10us 10us 0x2000 0x2000
(XEN) [2014-01-25 03:42:06] 	State3: 3000MHz 70587mW 10us 10us 0x1e00 0x1e00
(XEN) [2014-01-25 03:42:06] 	State4: 2800MHz 64262mW 10us 10us 0x1c00 0x1c00
(XEN) [2014-01-25 03:42:06] 	State5: 2700MHz 61182mW 10us 10us 0x1b00 0x1b00
(XEN) [2014-01-25 03:42:06] 	State6: 2500MHz 55201mW 10us 10us 0x1900 0x1900
(XEN) [2014-01-25 03:42:06] 	State7: 2300MHz 49464mW 10us 10us 0x1700 0x1700
(XEN) [2014-01-25 03:42:06] 	State8: 2100MHz 43946mW 10us 10us 0x1500 0x1500
(XEN) [2014-01-25 03:42:06] 	State9: 1900MHz 38654mW 10us 10us 0x1300 0x1300
(XEN) [2014-01-25 03:42:06] 	State10: 1700MHz 34277mW 10us 10us 0x1100 0x1100
(XEN) [2014-01-25 03:42:06] 	State11: 1500MHz 29407mW 10us 10us 0xf00 0xf00
(XEN) [2014-01-25 03:42:06] 	State12: 1400MHz 27053mW 10us 10us 0xe00 0xe00
(XEN) [2014-01-25 03:42:06] 	State13: 1200MHz 22509mW 10us 10us 0xc00 0xc00
(XEN) [2014-01-25 03:42:06] 	State14: 1000MHz 18167mW 10us 10us 0xa00 0xa00
(XEN) [2014-01-25 03:42:06] 	State15: 800MHz 14031mW 10us 10us 0x800 0x800
(XEN) [2014-01-25 03:42:06] 	_PSD: num_entries=5 rev=0 domain=0 coord_type=254 num_processors=8
(XEN) [2014-01-25 03:42:06] 	_PPC: 0
(XEN) [2014-01-25 03:42:06] xen_pminfo: @acpi_cpufreq_cpu_init,HARDWARE addr space
(XEN) [2014-01-25 03:42:06] max_freq: 3401000    second_max_freq: 3400000
(XEN) [2014-01-25 03:42:06] CPU7: Turbo Mode detected and enabled
(XEN) [2014-01-25 03:42:06] CPU 7 initialization completed
[   17.294899] initcall xen_acpi_processor_init+0x0/0x24b returned 0 after 1389646 usecs
[   17.302768] calling  pty_init+0x0/0x453 @ 1
[   17.314441] kworker/u2:0 (756) used greatest stack depth: 5488 bytes left
[   17.369193] initcall pty_init+0x0/0x453 returned 0 after 60719 usecs
[   17.375542] calling  sysrq_init+0x0/0xb0 @ 1
[   ...] initcall xen_hvc_init+0x0/0x228 returned 0 after 1024 usecs
[   17.398275] calling  serial8250_init+0x0/0x1ab @ 1
[   17.403124] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   17.430759] 00:09: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[   17.439166] initcall serial8250_init+0x0/0x1ab returned 0 after 35196 usecs
[   17.446114] calling  serial_pci_driver_init+0x0/0x1b @ 1
[   17.451590] initcall serial_pci_driver_init+0x0/0x1b returned 0 after 104 usecs
[   17.458885] calling  init_kgdboc+0x0/0x16 @ 1
[   17.463304] kgdb: Registered I/O driver kgdboc.
[   17.467927] initcall init_kgdboc+0x0/0x16 returned 0 after 4515 usecs
[   17.474397] calling  init+0x0/0x10f @ 1
[   17.478522] initcall init+0x0/0x10f returned 0 after 220 usecs
[   17.484346] calling  hpet_init+0x0/0x6a @ 1
[   17.489080] hpet_acpi_add: no address or irqs in _CRS
[   17.494215] initcall hpet_init+0x0/0x6a returned 0 after 5490 usecs
[   17.500467] calling  nvram_init+0x0/0x82 @ 1
[   17.504926] Non-volatile memory driver v1.3
[   17.509099] initcall nvram_init+0x0/0x82 returned 0 after 4199 usecs
[   17.515510] calling  mod_init+0x0/0x5a @ 1
[   17.519670] initcall mod_init+0x0/0x5a returned -19 after 0 usecs
[   17.525822] calling  rng_init+0x0/0x12 @ 1
[   17.530120] initcall rng_init+0x0/0x12 returned 0 after 133 usecs
[   17.536198] calling  agp_init+0x0/0x26 @ 1
[   17.540356] Linux agpgart interface v0.103
[   17.544517] initcall agp_init+0x0/0x26 returned 0 after 4063 usecs
[   17.550757] calling  agp_amd64_mod_init+0x0/0xb @ 1
[   17.555843] initcall agp_amd64_mod_init+0x0/0xb returned -19 after 144 usecs
[   17.562883] calling  agp_intel_init+0x0/0x29 @ 1
[   17.567653] initcall agp_intel_init+0x0/0x29 returned 0 after 90 usecs
[   17.574167] calling  agp_sis_init+0x0/0x29 @ 1
[   17.578762] initcall agp_sis_init+0x0/0x29 returned 0 after 88 usecs
[   17.585102] calling  agp_via_init+0x0/0x29 @ 1
[   17.589698] initcall agp_via_init+0x0/0x29 returned 0 after 88 usecs
[   17.596041] calling  drm_core_init+0x0/0x10c @ 1
[   17.600809] [drm] Initialized drm 1.1.0 20060810
[   17.605416] initcall drm_core_init+0x0/0x10c returned 0 after 4585 usecs
[   17.612175] calling  cn_proc_init+0x0/0x3d @ 1
[   17.616685] initcall cn_proc_init+0x0/0x3d returned 0 after 2 usecs
[   17.623009] calling  topology_sysfs_init+0x0/0x70 @ 1
[   17.628154] initcall topology_sysfs_init+0x0/0x70 returned 0 after 33 usecs
[   17.635140] calling  loop_init+0x0/0x14e @ 1
[   17.693296] loop: module loaded
[   17.696460] initcall loop_init+0x0/0x14e returned 0 after 55648 usecs
[   17.702930] calling  xen_blkif_init+0x0/0x22 @ 1
[   17.707710] initcall xen_blkif_init+0x0/0x22 returned 0 after 99 usecs
[   17.714248] calling  mac_hid_init+0x0/0x22 @ 1
[   17.718736] initcall mac_hid_init+0x0/0x22 returned 0 after 8 usecs
[   17.725055] calling  macvlan_init_module+0x0/0x3d @ 1
[   17.730169] initcall macvlan_init_module+0x0/0x3d returned 0 after 2 usecs
[   17.737102] calling  macvtap_init+0x0/0x100 @ 1
[   17.741763] initcall macvtap_init+0x0/0x100 returned 0 after 67 usecs
[   17.748219] calling  net_olddevs_init+0x0/0xb5 @ 1
[   17.753050] initcall net_olddevs_init+0x0/0xb5 returned 0 after 1 usecs
[   17.759720] calling  fixed_mdio_bus_init+0x0/0x105 @ 1
[   17.765133] libphy: Fixed MDIO Bus: probed
[   17.769229] initcall fixed_mdio_bus_init+0x0/0x105 returned 0 after 4207 usecs
[   17.776499] calling  tun_init+0x0/0x93 @ 1
[   17.780658] tun: Universal TUN/TAP device driver, 1.6
[   17.785771] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[   17.792155] initcall tun_init+0x0/0x93 returned 0 after 11226 usecs
[   17.798418] calling  tg3_driver_init+0x0/0x1b @ 1
[   17.803298] initcall tg3_driver_init+0x0/0x1b returned 0 after 121 usecs
[   17.809988] calling  igb_init_module+0x0/0x58 @ 1
[   17.814753] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.0.5-k
[   17.821774] igb: Copyright (c) 2007-2013 Intel Corporation.
[   17.827672] xen: registering gsi 17 triggering 0 polarity 1
[   17.833243] Already setup the GSI :17
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.0:, msix:ffff830239467f70 dev:ffff8302394660d0
[   18.057960] igb 0000:02:00.0: added PHC on eth0
[   18.062482] igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection (PCIe:2.5Gb/s:Width x4) 00:1b:21:45:d9:ac
[   18.076606] igb 0000:02:00.0: eth0: PBA No: Unknown
[   18.081548] igb 0000:02:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[   18.089439] xen: registering gsi 18 triggering 0 polarity 1
[   18.095009] Already setup the GSI :18
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
(XEN) [2014-01-25 03:42:07] msix_capability_init:759 for 02:00.1:, msix:ffff830239466250 dev:ffff830239466190
[   18.319943] igb 0000:02:00.1: added PHC on eth1
[   18.324467] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection ...
[   ...] xen: registering gsi 19 triggering 0 polarity 1
[   18.356994] Already setup the GSI :19
(XEN) [2014-01-25 03:42:08] msix_capability_init:759 for 05:00.0:, msix:0 dev:ffff8302394665b0
(XEN) [2014-01-25 03:42:08] ----[ Xen-4.4-rc2  x86_64  debug=y  Tainted:    C ]----
(XEN) [2014-01-25 03:42:08] CPU:    0
(XEN) [2014-01-25 03:42:08] RIP:    e008:[<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08] RFLAGS: 0000000000010296   CONTEXT: hypervisor
(XEN) [2014-01-25 03:42:08] rax: 0000000000000000   rbx: ffff8302394665b0   rcx: 0000000000000000
(XEN) [2014-01-25 03:42:08] rdx: ffff82d080310e20   rsi: 000000000000000a   rdi: ffff82d0802816c8
(XEN) [2014-01-25 03:42:08] rbp: ffff82d0802cfca8   rsp: ffff82d0802cfbf8   r8:  0000000000000000
(XEN) [2014-01-25 03:42:08] r9:  0000000000000000   r10: 0000000000000000   r11: ffff82d080232040
(XEN) [2014-01-25 03:42:08] r12: 0000000000000000   r13: ffff83022a085e30   r14: ffff82d0802cfe98
(XEN) [2014-01-25 03:42:08] r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000001526f0
(XEN) [2014-01-25 03:42:08] cr3: 000000022dc0c000   cr2: 0000000000000004
(XEN) [2014-01-25 03:42:08] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) [2014-01-25 03:42:08] Xen stack trace from rsp=ffff82d0802cfbf8:
(XEN) [2014-01-25 03:42:08]    0000000000000000 ffff8302394665b0 000000050004fc28 ffff82d0802cfd88
(XEN) [2014-01-25 03:42:08]    000000728012a25f ffff8302ffffffff ffff82d000000000 0000000000000000
(XEN) [2014-01-25 03:42:08]    0000000000000005 0000000000000070 0000000500000000 0000000000000000
(XEN) [2014-01-25 03:42:08]    00000000f1980000 ffff82d000000005 0000000500000003 8005007000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfe98 ffff82d0802cfe98 ffff82d0802cfd88 ffff8302394665b0
(XEN) [2014-01-25 03:42:08]    0000000000000005 0000000000000000 ffff82d0802cfd28 ffff82d0801689c2
(XEN) [2014-01-25 03:42:08]    0000000000000246 ffff82d0802cfcd8 ffff82d080129d68 0000000000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfd28 ffff82d080147589 ffff82d0802cfd18 ffff830239463b70
(XEN) [2014-01-25 03:42:08]    000000000000010f ffff8302337f8000 000000000000010f 0000000000000022
(XEN) [2014-01-25 03:42:08]    00000000ffffffed ffff830239402200 ffff82d0802cfdc8 ffff82d08016c68c
(XEN) [2014-01-25 03:42:08]    ffff83022a085e00 000000000000010f 000000000000010f ffff8302337f80e0
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfd98 ffff82d0801047ed 0000010f01402200 ffff82d0802cfe98
(XEN) [2014-01-25 03:42:08]    ffff8302337f80e0 ffff8302394665b0 ffff82d0802cfe98 ffff83022a085e00
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfdc8 ffff8302337f8000 00000000fffffffd 0000000000000000
(XEN) [2014-01-25 03:42:08]    ffff82d0802cfe98 ffff82d0802cfe70 ffff82d0802cfe48 ffff82d08017f134
(XEN) [2014-01-25 03:42:08]    ffff82d0802cff18 ffffffff8156d7c6 ffff82d0802cfe98 ffff8302337f80b8
(XEN) [2014-01-25 03:42:08]    ffff82d00000010f ffff82d08018bd70 000000220000f800 ffff82d0802cfe74
(XEN) [2014-01-25 03:42:08]    ffff820040004000 000000000000000d ffff880078623b08 ffff8300b7313000
(XEN) [2014-01-25 03:42:08]    ffff880006db8180 0000000000000000 ffff82d0802cfef8 ffff82d08017f844
(XEN) [2014-01-25 03:42:08]    0000000000000000 0000000700000004 0000000000007ff0 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] Xen call trace:
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801683d6>] msix_capability_init+0x210/0x63e
(XEN) [2014-01-25 03:42:08]    [<ffff82d0801689c2>] pci_enable_msi+0x1be/0x4d7
(XEN) [2014-01-25 03:42:08]    [<ffff82d08016c68c>] map_domain_pirq+0x222/0x5ad
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f134>] physdev_map_pirq+0x507/0x5d1
(XEN) [2014-01-25 03:42:08]    [<ffff82d08017f844>] do_physdev_op+0x646/0x1232
(XEN) [2014-01-25 03:42:08]    [<ffff82d0802223ab>] syscall_enter+0xeb/0x145
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Pagetable walk from 0000000000000004:
(XEN) [2014-01-25 03:42:08]  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] Panic on CPU 0:
(XEN) [2014-01-25 03:42:08] FATAL PAGE FAULT
(XEN) [2014-01-25 03:42:08] [error_code=0000]
(XEN) [2014-01-25 03:42:08] Faulting linear address: 0000000000000004
(XEN) [2014-01-25 03:42:08] ****************************************
(XEN) [2014-01-25 03:42:08] 
(XEN) [2014-01-25 03:42:08] Manual reset required ('noreboot' specified)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Jan 24 22:03:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jan 2014 22:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6oqi-0000FM-7B; Fri, 24 Jan 2014 22:03:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6oqg-0000FG-P6
	for xen-devel@lists.xensource.com; Fri, 24 Jan 2014 22:03:39 +0000
Received: from [85.158.137.68:34893] by server-14.bemta-3.messagelabs.com id
	90/4F-06105-A33E2E25; Fri, 24 Jan 2014 22:03:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390601015!10408170!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25831 invoked from network); 24 Jan 2014 22:03:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Jan 2014 22:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,715,1384300800"; d="scan'208";a="96346242"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Jan 2014 22:03:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 17:03:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6oqZ-0000Zr-Ne;
	Fri, 24 Jan 2014 22:03:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6oqZ-0000qf-1o;
	Fri, 24 Jan 2014 22:03:31 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24477-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 24 Jan 2014 22:03:31 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24477: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24477 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24477/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-credit2   16 guest-start.2             fail REGR. vs. 24473
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install     fail REGR. vs. 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  1cd4fab14ce25859efa4a2af13475e6650a5506c
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 01:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 01:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6rpz-0000Rn-PR; Sat, 25 Jan 2014 01:15:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W6rpx-0000Rf-T4
	for Xen-devel@lists.xensource.com; Sat, 25 Jan 2014 01:15:06 +0000
Received: from [85.158.137.68:31616] by server-16.bemta-3.messagelabs.com id
	4E/53-26128-81013E25; Sat, 25 Jan 2014 01:15:04 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390612502!11189466!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28943 invoked from network); 25 Jan 2014 01:15:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Jan 2014 01:15:04 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0P1Dw3X031051
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 25 Jan 2014 01:13:59 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0P1DvH2019864
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sat, 25 Jan 2014 01:13:58 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0P1DvEs019845; Sat, 25 Jan 2014 01:13:57 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 17:13:56 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Xen-devel@lists.xensource.com
Date: Fri, 24 Jan 2014 17:13:29 -0800
Message-Id: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH]: feature flags for PVH
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

For pass 1, I came up with these feature flags to be exposed to pvh
domUs.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 01:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 01:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6rq3-0000Rz-5k; Sat, 25 Jan 2014 01:15:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W6rq1-0000Ru-M4
	for Xen-devel@lists.xensource.com; Sat, 25 Jan 2014 01:15:09 +0000
Received: from [85.158.143.35:26625] by server-3.bemta-4.messagelabs.com id
	D0/53-32360-C1013E25; Sat, 25 Jan 2014 01:15:08 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390612503!675021!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32274 invoked from network); 25 Jan 2014 01:15:05 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Jan 2014 01:15:05 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0P1Dww6031130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 25 Jan 2014 01:13:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0P1DvJU023045
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 25 Jan 2014 01:13:58 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0P1DvrZ006010; Sat, 25 Jan 2014 01:13:57 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 24 Jan 2014 17:13:57 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Xen-devel@lists.xensource.com
Date: Fri, 24 Jan 2014 17:13:30 -0800
Message-Id: <1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Expose features for pvh domUs from tools.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
 tools/libxc/xc_domain.c    |    1 +
 tools/libxc/xenctrl.h      |    2 +-
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index bbbf9b8..33f6829 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
 
 static void xc_cpuid_pv_policy(
     xc_interface *xch, domid_t domid,
-    const unsigned int *input, unsigned int *regs)
+    const unsigned int *input, unsigned int *regs, int is_pvh)
 {
     DECLARE_DOMCTL;
     unsigned int guest_width;
@@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
 
     if ( (input[0] & 0x7fffffff) == 0x00000001 )
     {
-        clear_bit(X86_FEATURE_VME, regs[3]);
-        clear_bit(X86_FEATURE_PSE, regs[3]);
-        clear_bit(X86_FEATURE_PGE, regs[3]);
-        clear_bit(X86_FEATURE_MCE, regs[3]);
-        clear_bit(X86_FEATURE_MCA, regs[3]);
+        if ( !is_pvh )
+        {
+            clear_bit(X86_FEATURE_VME, regs[3]);
+            clear_bit(X86_FEATURE_PSE, regs[3]);
+            clear_bit(X86_FEATURE_PGE, regs[3]);
+            clear_bit(X86_FEATURE_MCE, regs[3]);
+            clear_bit(X86_FEATURE_MCA, regs[3]);
+            clear_bit(X86_FEATURE_PSE36, regs[3]);
+        }
         clear_bit(X86_FEATURE_MTRR, regs[3]);
-        clear_bit(X86_FEATURE_PSE36, regs[3]);
     }
 
     switch ( input[0] )
@@ -524,8 +527,11 @@ static void xc_cpuid_pv_policy(
         {
             set_bit(X86_FEATURE_SYSCALL, regs[3]);
         }
-        clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
-        clear_bit(X86_FEATURE_RDTSCP, regs[3]);
+        if ( !is_pvh )
+        {
+            clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
+            clear_bit(X86_FEATURE_RDTSCP, regs[3]);
+        }
 
         clear_bit(X86_FEATURE_SVM, regs[2]);
         clear_bit(X86_FEATURE_OSVW, regs[2]);
@@ -561,7 +567,7 @@ static int xc_cpuid_policy(
     if ( info.hvm )
         xc_cpuid_hvm_policy(xch, domid, input, regs);
     else
-        xc_cpuid_pv_policy(xch, domid, input, regs);
+        xc_cpuid_pv_policy(xch, domid, input, regs, info.pvh);
 
     return 0;
 }
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..f12999a 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -316,6 +316,7 @@ int xc_domain_getinfo(xc_interface *xch,
         info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
         info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
         info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
+        info->pvh      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_pvh_guest);
 
         info->shutdown_reason =
             (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..77d219a 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -404,7 +404,7 @@ typedef struct xc_dominfo {
     uint32_t      ssidref;
     unsigned int  dying:1, crashed:1, shutdown:1,
                   paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1;
+                  hvm:1, debugged:1, pvh:1;
     unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
     unsigned long nr_pages; /* current number, not maximum */
     unsigned long nr_outstanding_pages;
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Expose feature flags for PVH domUs from the tools.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
 tools/libxc/xc_domain.c    |    1 +
 tools/libxc/xenctrl.h      |    2 +-
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index bbbf9b8..33f6829 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
 
 static void xc_cpuid_pv_policy(
     xc_interface *xch, domid_t domid,
-    const unsigned int *input, unsigned int *regs)
+    const unsigned int *input, unsigned int *regs, int is_pvh)
 {
     DECLARE_DOMCTL;
     unsigned int guest_width;
@@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
 
     if ( (input[0] & 0x7fffffff) == 0x00000001 )
     {
-        clear_bit(X86_FEATURE_VME, regs[3]);
-        clear_bit(X86_FEATURE_PSE, regs[3]);
-        clear_bit(X86_FEATURE_PGE, regs[3]);
-        clear_bit(X86_FEATURE_MCE, regs[3]);
-        clear_bit(X86_FEATURE_MCA, regs[3]);
+        if ( !is_pvh )
+        {
+            clear_bit(X86_FEATURE_VME, regs[3]);
+            clear_bit(X86_FEATURE_PSE, regs[3]);
+            clear_bit(X86_FEATURE_PGE, regs[3]);
+            clear_bit(X86_FEATURE_MCE, regs[3]);
+            clear_bit(X86_FEATURE_MCA, regs[3]);
+            clear_bit(X86_FEATURE_PSE36, regs[3]);
+        }
         clear_bit(X86_FEATURE_MTRR, regs[3]);
-        clear_bit(X86_FEATURE_PSE36, regs[3]);
     }
 
     switch ( input[0] )
@@ -524,8 +527,11 @@ static void xc_cpuid_pv_policy(
         {
             set_bit(X86_FEATURE_SYSCALL, regs[3]);
         }
-        clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
-        clear_bit(X86_FEATURE_RDTSCP, regs[3]);
+        if ( !is_pvh )
+        {
+            clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
+            clear_bit(X86_FEATURE_RDTSCP, regs[3]);
+        }
 
         clear_bit(X86_FEATURE_SVM, regs[2]);
         clear_bit(X86_FEATURE_OSVW, regs[2]);
@@ -561,7 +567,7 @@ static int xc_cpuid_policy(
     if ( info.hvm )
         xc_cpuid_hvm_policy(xch, domid, input, regs);
     else
-        xc_cpuid_pv_policy(xch, domid, input, regs);
+        xc_cpuid_pv_policy(xch, domid, input, regs, info.pvh);
 
     return 0;
 }
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..f12999a 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -316,6 +316,7 @@ int xc_domain_getinfo(xc_interface *xch,
         info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
         info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
         info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
+        info->pvh      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_pvh_guest);
 
         info->shutdown_reason =
             (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..77d219a 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -404,7 +404,7 @@ typedef struct xc_dominfo {
     uint32_t      ssidref;
     unsigned int  dying:1, crashed:1, shutdown:1,
                   paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1;
+                  hvm:1, debugged:1, pvh:1;
     unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
     unsigned long nr_pages; /* current number, not maximum */
     unsigned long nr_outstanding_pages;
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 01:51:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 01:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6sPM-0001ZQ-B3; Sat, 25 Jan 2014 01:51:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6sPL-0001ZL-99
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 01:51:39 +0000
Received: from [85.158.137.68:3769] by server-12.bemta-3.messagelabs.com id
	DE/F8-20055-AA813E25; Sat, 25 Jan 2014 01:51:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390614696!10058885!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25825 invoked from network); 25 Jan 2014 01:51:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 01:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,716,1384300800"; d="scan'208";a="94358111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Jan 2014 01:51:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 20:51:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6sP2-0001pH-O0;
	Sat, 25 Jan 2014 01:51:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6sOy-00033a-JK;
	Sat, 25 Jan 2014 01:51:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24483-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 01:51:16 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24483: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24483 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24483/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 10 guest-saverestore.2 fail REGR. vs. 24470
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 24470

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24470

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e
baseline version:
 xen                  4e54949a7baa5d66f1ab36571d6c974c9319a964

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9612d2948e1637c303e6be68df2168775ac5e97e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:45:03 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 03:14:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 03:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6thS-0003xE-0Z; Sat, 25 Jan 2014 03:14:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6thP-0003x9-Oq
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 03:14:24 +0000
Received: from [85.158.139.211:54618] by server-11.bemta-5.messagelabs.com id
	9F/8D-23268-F0C23E25; Sat, 25 Jan 2014 03:14:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390619660!11862281!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5937 invoked from network); 25 Jan 2014 03:14:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 03:14:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,717,1384300800"; d="scan'208";a="96397730"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Jan 2014 03:14:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 22:14:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6thK-0002Gp-It;
	Sat, 25 Jan 2014 03:14:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6thK-00050o-FC;
	Sat, 25 Jan 2014 03:14:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24488-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 03:14:18 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24488: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24488 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           3 host-build-prep  fail in 24480 REGR. vs. 22405

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24480
 test-amd64-i386-xl           16 guest-start.2      fail in 24480 pass in 24488

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 03:14:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 03:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6thS-0003xE-0Z; Sat, 25 Jan 2014 03:14:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6thP-0003x9-Oq
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 03:14:24 +0000
Received: from [85.158.139.211:54618] by server-11.bemta-5.messagelabs.com id
	9F/8D-23268-F0C23E25; Sat, 25 Jan 2014 03:14:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390619660!11862281!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5937 invoked from network); 25 Jan 2014 03:14:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 03:14:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,717,1384300800"; d="scan'208";a="96397730"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Jan 2014 03:14:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 24 Jan 2014 22:14:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6thK-0002Gp-It;
	Sat, 25 Jan 2014 03:14:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6thK-00050o-FC;
	Sat, 25 Jan 2014 03:14:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24488-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 03:14:18 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24488: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24488 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           3 host-build-prep  fail in 24480 REGR. vs. 22405

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24480
 test-amd64-i386-xl           16 guest-start.2      fail in 24480 pass in 24488

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 05:12:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 05:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W6vXM-0007M1-BZ; Sat, 25 Jan 2014 05:12:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W6vXK-0007Lr-Oe
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 05:12:07 +0000
Received: from [85.158.137.68:50585] by server-4.bemta-3.messagelabs.com id
	19/D4-10414-5A743E25; Sat, 25 Jan 2014 05:12:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390626723!11265325!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16920 invoked from network); 25 Jan 2014 05:12:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 05:12:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,717,1384300800"; d="scan'208";a="94384472"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Jan 2014 05:12:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 00:12:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W6vXF-0002tW-HO;
	Sat, 25 Jan 2014 05:12:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W6vXF-0002Jq-6g;
	Sat, 25 Jan 2014 05:12:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24492-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 05:12:01 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24492: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24492 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24492/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     13 guest-localmigrate.2      fail REGR. vs. 24469

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
baseline version:
 xen                  85c4e39100037fafc4e4c3e517aaef8180ffdde7

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
+ branch=xen-4.2-testing
+ revision=f6179b2e3638e1ff3b3f087ce34b0afdb05ed432
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git f6179b2e3638e1ff3b3f087ce34b0afdb05ed432:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   85c4e39..f6179b2  f6179b2e3638e1ff3b3f087ce34b0afdb05ed432 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git f6179b2e3638e1ff3b3f087ce34b0afdb05ed432:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   85c4e39..f6179b2  f6179b2e3638e1ff3b3f087ce34b0afdb05ed432 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 10:24:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 10:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W70P0-0006Uo-VF; Sat, 25 Jan 2014 10:23:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W70Oz-0006Uj-Hz
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 10:23:49 +0000
Received: from [85.158.137.68:16238] by server-12.bemta-3.messagelabs.com id
	7A/AA-20055-4B093E25; Sat, 25 Jan 2014 10:23:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390645426!10093694!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28126 invoked from network); 25 Jan 2014 10:23:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 10:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,719,1384300800"; d="scan'208";a="96447879"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Jan 2014 10:23:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 05:23:44 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W70Ou-0004Vo-5j;
	Sat, 25 Jan 2014 10:23:44 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W70Nf-0005Bq-H4;
	Sat, 25 Jan 2014 10:23:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24496-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 10:22:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24496: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24496 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24496/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24473
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3) broken REGR. vs. 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-qemuu-win7-amd64 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore        fail like 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 14:15:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 14:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W740R-0003Sy-1j; Sat, 25 Jan 2014 14:14:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W740O-0003St-Vx
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 14:14:41 +0000
Received: from [85.158.143.35:18381] by server-2.bemta-4.messagelabs.com id
	56/EE-11386-0D6C3E25; Sat, 25 Jan 2014 14:14:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390659277!733250!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18945 invoked from network); 25 Jan 2014 14:14:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 14:14:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,719,1384300800"; d="scan'208";a="94443996"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Jan 2014 14:14:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 09:14:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W740J-0005p8-T9;
	Sat, 25 Jan 2014 14:14:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W740J-0003R8-PG;
	Sat, 25 Jan 2014 14:14:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24504-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 14:14:35 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24504: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24504 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24504/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1 10 guest-saverestore.2   fail pass in 24483
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 24483
 test-amd64-amd64-xl-qemuu-win7-amd64 10 guest-saverestore.2 fail in 24483 pass in 24504
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24483 pass in 24504

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24470

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24483 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24483 never pass

version targeted for testing:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e
baseline version:
 xen                  4e54949a7baa5d66f1ab36571d6c974c9319a964

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=9612d2948e1637c303e6be68df2168775ac5e97e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 9612d2948e1637c303e6be68df2168775ac5e97e
+ branch=xen-4.3-testing
+ revision=9612d2948e1637c303e6be68df2168775ac5e97e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 9612d2948e1637c303e6be68df2168775ac5e97e:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   4e54949..9612d29  9612d2948e1637c303e6be68df2168775ac5e97e -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 14:15:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 14:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W740R-0003Sy-1j; Sat, 25 Jan 2014 14:14:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W740O-0003St-Vx
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 14:14:41 +0000
Received: from [85.158.143.35:18381] by server-2.bemta-4.messagelabs.com id
	56/EE-11386-0D6C3E25; Sat, 25 Jan 2014 14:14:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390659277!733250!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18945 invoked from network); 25 Jan 2014 14:14:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 14:14:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,719,1384300800"; d="scan'208";a="94443996"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Jan 2014 14:14:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 09:14:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W740J-0005p8-T9;
	Sat, 25 Jan 2014 14:14:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W740J-0003R8-PG;
	Sat, 25 Jan 2014 14:14:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24504-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 14:14:35 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24504: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24504 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24504/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1 10 guest-saverestore.2   fail pass in 24483
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 24483
 test-amd64-amd64-xl-qemuu-win7-amd64 10 guest-saverestore.2 fail in 24483 pass in 24504
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24483 pass in 24504

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24470

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24483 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24483 never pass

version targeted for testing:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e
baseline version:
 xen                  4e54949a7baa5d66f1ab36571d6c974c9319a964

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=9612d2948e1637c303e6be68df2168775ac5e97e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 9612d2948e1637c303e6be68df2168775ac5e97e
+ branch=xen-4.3-testing
+ revision=9612d2948e1637c303e6be68df2168775ac5e97e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 9612d2948e1637c303e6be68df2168775ac5e97e:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   4e54949..9612d29  9612d2948e1637c303e6be68df2168775ac5e97e -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 15:20:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 15:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W751b-00058R-F0; Sat, 25 Jan 2014 15:19:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W751a-00058M-8Q
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 15:19:58 +0000
Received: from [193.109.254.147:61998] by server-1.bemta-14.messagelabs.com id
	AA/2C-15600-D16D3E25; Sat, 25 Jan 2014 15:19:57 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390663195!13069290!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6193 invoked from network); 25 Jan 2014 15:19:56 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-5.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	25 Jan 2014 15:19:56 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W751W-00009h-ID
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 07:19:54 -0800
Date: Sat, 25 Jan 2014 07:19:54 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1390663194541-5720926.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I would like to know how QEMU is integrated with Xen, and what the purpose of
QEMU is in Xen's architecture.

As far as I know, kQEMU is a hardware-assisted virtualization tool; what is
kQEMU's role in the Xen architecture?


Best Regards


--
View this message in context: http://xen.1045712.n5.nabble.com/how-QEMU-is-integrated-with-xen-tp5720926.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 15:36:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 15:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W75HH-0005V9-2e; Sat, 25 Jan 2014 15:36:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W75HF-0005V4-E6
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 15:36:09 +0000
Received: from [193.109.254.147:42993] by server-8.bemta-14.messagelabs.com id
	2F/10-30921-8E9D3E25; Sat, 25 Jan 2014 15:36:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390664166!13087997!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5217 invoked from network); 25 Jan 2014 15:36:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 15:36:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,719,1384300800"; d="scan'208";a="96482545"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Jan 2014 15:36:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 10:36:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W75HA-0006Hd-Ni;
	Sat, 25 Jan 2014 15:36:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W75H9-0004jw-4M;
	Sat, 25 Jan 2014 15:36:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24505-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 15:36:03 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24505: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24505 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24505/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 22405

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 19:54:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 19:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W79JP-00048c-5h; Sat, 25 Jan 2014 19:54:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W79JN-00048X-A9
	for xen-devel@lists.xensource.com; Sat, 25 Jan 2014 19:54:37 +0000
Received: from [85.158.143.35:5818] by server-1.bemta-4.messagelabs.com id
	5E/0A-02132-C7614E25; Sat, 25 Jan 2014 19:54:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390679674!765900!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19569 invoked from network); 25 Jan 2014 19:54:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 19:54:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,720,1384300800"; d="scan'208";a="94481662"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Jan 2014 19:54:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 14:54:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W79JI-0007YO-OK;
	Sat, 25 Jan 2014 19:54:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W79JI-0005TM-EV;
	Sat, 25 Jan 2014 19:54:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24516-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 25 Jan 2014 19:54:32 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24516: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24516 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24516/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 22405
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 22405
 build-amd64-pvops             3 host-build-prep           fail REGR. vs. 22405

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            broken  
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Jan 25 21:07:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jan 2014 21:07:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ARb-000618-LL; Sat, 25 Jan 2014 21:07:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W7ARZ-000613-Bj
	for xen-devel@lists.xen.org; Sat, 25 Jan 2014 21:07:09 +0000
Received: from [85.158.137.68:7560] by server-12.bemta-3.messagelabs.com id
	6E/43-20055-C7724E25; Sat, 25 Jan 2014 21:07:08 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390684025!11271930!1
X-Originating-IP: [209.85.220.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18441 invoked from network); 25 Jan 2014 21:07:07 -0000
Received: from mail-pa0-f42.google.com (HELO mail-pa0-f42.google.com)
	(209.85.220.42)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Jan 2014 21:07:07 -0000
Received: by mail-pa0-f42.google.com with SMTP id kl14so4516576pab.15
	for <xen-devel@lists.xen.org>; Sat, 25 Jan 2014 13:07:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=9tPNpZ7eMFBolNd2eghBYgq7m2ZffvPVB/NATD+hvbs=;
	b=CHN49YJxwFbPXqaQUov5x6ZgsFlql1mvDQJOQsfxx15w5PGghBsqrhIaG73P+1IUWK
	+X6aHcznxznBqfhcoXoxf8BOz1h+q+3Wt0ecNsqe7spWSGamEn5vwKaqVYNCj7WCcLIL
	T46Spv6lHh3aNSi2r+BfM8jPS+NH6YGUjZORQaddt6CEmRQM8L6U1RBe9B/BntQRdqHh
	mH3VTxqVOcbAeIPIXA9S9mHwxsJLhGfxN5ed+kv0Y3EfB4NlpeC3VeF4T9Y3CJ1s9d6o
	zdYYF1K2yhCb38epg9/qnhUBTtQS6y2MsrXOfDCZSN2yiWzJG6yfL769IFR1wZ3fqut9
	2Mzw==
X-Received: by 10.68.129.201 with SMTP id ny9mr21914586pbb.70.1390684025360;
	Sat, 25 Jan 2014 13:07:05 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id sx8sm42090387pab.5.2014.01.25.13.07.02
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Sat, 25 Jan 2014 13:07:04 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Sat, 25 Jan 2014 13:07:00 -0800
Date: Sat, 25 Jan 2014 13:07:00 -0800
From: Matt Wilson <msw@linux.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140125210659.GA15756@u109add4315675089e695.ant.amazon.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
	<1390555293.2124.6.camel@kazak.uk.xensource.com>
	<1390577782.13513.8.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390577782.13513.8.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 03:36:22PM +0000, Ian Campbell wrote:
> On Fri, 2014-01-24 at 09:21 +0000, Ian Campbell wrote:
> > On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
> > > From: Matt Rushton <mrushton@amazon.com>
> > >
> > > Currently shrink_free_pagepool() is called before the pages used for
> > > persistent grants are released via free_persistent_gnts(). This
> > > results in a memory leak when a VBD that uses persistent grants is
> > > torn down.
> >
> > This may well be the explanation for the memory leak I was observing on
> > ARM last night. I'll give it a go and let you know.
>
> Results are a bit inconclusive unfortunately, it seems like I am seeing
> some other leak too (or instead).
>
> Totally unscientifically it does seem to be leaking more slowly than
> before, so perhaps this patch has helped, but nothing conclusive I'm
> afraid.

Testing here looks good. I don't know if perhaps something else is
going on with ARM...

> I don't think that quite qualifies for a Tested-by though, sorry.

How about an Acked-by? ;-)

--msw

> Ian.
>
> >
> > > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > > Cc: Ian Campbell <Ian.Campbell@citrix.com>
> > > Cc: David Vrabel <david.vrabel@citrix.com>
> > > Cc: linux-kernel@vger.kernel.org
> > > Cc: xen-devel@lists.xen.org
> > > Cc: Anthony Liguori <aliguori@amazon.com>
> > > Signed-off-by: Matt Rushton <mrushton@amazon.com>
> > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > > ---
> > >  drivers/block/xen-blkback/blkback.c |    6 +++---
> > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > =

> > > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-=
blkback/blkback.c
> > > index 6620b73..30ef7b3 100644
> > > --- a/drivers/block/xen-blkback/blkback.c
> > > +++ b/drivers/block/xen-blkback/blkback.c
> > > @@ -625,9 +625,6 @@ purge_gnt_list:
> > >  			print_stats(blkif);
> > >  	}
> > >  =

> > > -	/* Since we are shutting down remove all pages from the buffer */
> > > -	shrink_free_pagepool(blkif, 0 /* All */);
> > > -
> > >  	/* Free all persistent grant pages */
> > >  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> > >  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> > > @@ -636,6 +633,9 @@ purge_gnt_list:
> > >  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> > >  	blkif->persistent_gnt_c =3D 0;
> > >  =

> > > +	/* Since we are shutting down remove all pages from the buffer */
> > > +	shrink_free_pagepool(blkif, 0 /* All */);
> > > +
> > >  	if (log_stats)
> > >  		print_stats(blkif);
> > >  =

> > =

> > =

> > =

> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> =

> =


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 00:06:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 00:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7DF6-0002UR-3J; Sun, 26 Jan 2014 00:06:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7DF4-0002UL-BY
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 00:06:26 +0000
Received: from [85.158.139.211:32111] by server-5.bemta-5.messagelabs.com id
	CE/6A-14928-18154E25; Sun, 26 Jan 2014 00:06:25 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390694784!11954122!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29468 invoked from network); 26 Jan 2014 00:06:24 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	26 Jan 2014 00:06:24 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 25 Jan 2014 16:06:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,721,1384329600"; d="scan'208";a="464746114"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 25 Jan 2014 16:05:54 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 16:05:54 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 08:05:52 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Anthony PERARD <anthony.perard@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHPDUs3aA8yi+K0sk+fDGrKfVQCipqWOPww
Date: Sun, 26 Jan 2014 00:05:52 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
In-Reply-To: <20140109145624.GD1696@perard.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>, "Dugger, 
	Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anthony PERARD wrote on 2014-01-09:
> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>>> [...]
>>>>>>> Does Xen report something like:
>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 > 131328
>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46 memflags=0 (62 of 64)
>>>>>>> 
>>>>>>> ?
>>>>>>> 
>>>>>>> (I tried to reproduce the issue by simply adding many emulated
>>>>>>> e1000 devices in QEMU :) )
>>>>>>> 
>>> 
>>>> -bash-4.1# lspci -s 01:00.0 -v
>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>>         Flags: fast devsel, IRQ 16
>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>>         I/O ports at e020 [disabled] [size=32]
>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>> 
>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>> allocate memory for it. We will maybe have to find another way.
>>> qemu-trad does not seem to allocate memory, but I haven't gone
>>> very far in trying to check that.
>> 
>> And indeed that is the case. The "Fix" below fixes it.
>> 
>> 
>> Based on that and this guest config:
>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>> memory = 2048
>> boot="d"
>> maxvcpus=32
>> vcpus=1
>> serial='pty'
>> vnclisten="0.0.0.0"
>> name="latest"
>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>> pci = ["01:00.0"]
>> 
>> I can boot the guest.
> 
> And can you access the ROM from the guest ?
> 
> 
> Also, I have another patch that initializes the PCI ROM BAR like any other BAR.
> In this case, if qemu is involved in an access to the ROM, it will print
> an error, as is the case for the other BARs.
> 
> I tried to test it, but it was with an embedded VGA card. When I dumped
> the ROM, I got the same contents as the emulated card's ROM instead of the
> ROM from the device.
> 
> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 6dd7a68..2bbdb6d 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> 
>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> -                                      "xen-pci-pt-rom", d->rom.size);
> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> +                              "xen-pci-pt-rom", d->rom.size);
>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>                           &s->rom);
>

Hi Anthony,

Is your fix the final solution for this issue? If so, will you push it before the Xen 4.4 release?

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 00:26:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 00:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7DY6-00037Z-4b; Sun, 26 Jan 2014 00:26:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7DY5-00037U-6l
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 00:26:05 +0000
Received: from [193.109.254.147:32126] by server-12.bemta-14.messagelabs.com
	id 4C/59-13681-C1654E25; Sun, 26 Jan 2014 00:26:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390695962!13174715!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3427 invoked from network); 26 Jan 2014 00:26:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 00:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,721,1384300800"; d="scan'208";a="94507636"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 00:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 19:26:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7DY0-0000Sn-IX;
	Sun, 26 Jan 2014 00:26:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7DY0-0000QJ-A0;
	Sun, 26 Jan 2014 00:26:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24512-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 00:26:00 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24512: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24512 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24512/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 00:26:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 00:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7DY6-00037Z-4b; Sun, 26 Jan 2014 00:26:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7DY5-00037U-6l
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 00:26:05 +0000
Received: from [193.109.254.147:32126] by server-12.bemta-14.messagelabs.com
	id 4C/59-13681-C1654E25; Sun, 26 Jan 2014 00:26:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390695962!13174715!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3427 invoked from network); 26 Jan 2014 00:26:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 00:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,721,1384300800"; d="scan'208";a="94507636"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 00:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 25 Jan 2014 19:26:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7DY0-0000Sn-IX;
	Sun, 26 Jan 2014 00:26:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7DY0-0000QJ-A0;
	Sun, 26 Jan 2014 00:26:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24512-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 00:26:00 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24512: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24512 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24512/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 01:14:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 01:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7EIv-0008HF-02; Sun, 26 Jan 2014 01:14:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7EIs-0008H7-UE
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 01:14:27 +0000
Received: from [85.158.139.211:40811] by server-4.bemta-5.messagelabs.com id
	3F/87-26791-27164E25; Sun, 26 Jan 2014 01:14:26 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390698864!11944815!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19219 invoked from network); 26 Jan 2014 01:14:25 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-13.tower-206.messagelabs.com with SMTP;
	26 Jan 2014 01:14:25 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 25 Jan 2014 17:14:08 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,721,1384329600"; d="scan'208";a="470826735"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 25 Jan 2014 17:13:55 -0800
Received: from fmsmsx113.amr.corp.intel.com (10.18.116.7) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 17:13:54 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX113.amr.corp.intel.com (10.18.116.7) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 17:13:54 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 09:13:52 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Ian.Campbell@citrix.com"
	<Ian.Campbell@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "JBeulich@suse.com" <JBeulich@suse.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>
Thread-Topic: [Xen-devel] [PATCH 4/4] xen/xenbus: Avoid synchronous wait on
	XenBus stalling shutdown/restart.
Thread-Index: AQHO3KmE8nzOae4vI0ON33E2bX7xXpqWrgxw
Date: Sun, 26 Jan 2014 01:13:51 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6EC5@SHSMSX104.ccr.corp.intel.com>
References: <1383932286-25080-1-git-send-email-konrad.wilk@oracle.com>
	<1383932286-25080-5-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1383932286-25080-5-git-send-email-konrad.wilk@oracle.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH 4/4] xen/xenbus: Avoid synchronous wait
	on	XenBus stalling shutdown/restart.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote on 2013-11-09:
> 'read_reply' works with 'process_msg' to read a reply from XenBus.
> 'process_msg' runs from within the 'xenbus' thread. Whenever a
> message shows up in XenBus it is put on the xs_state.reply_list list and
> 'read_reply' picks it up.
> 
> The problem is if the backend domain or the xenstored process is killed:
> in that case 'xenbus' is still waiting - and 'read_reply', if called,
> is stuck forever waiting for the reply_list to have some contents.
> 
> This is normally not a problem - as the backend domain can come back
> or the xenstored process can be restarted. However, if the domain is in
> the process of being powered off/restarted/halted, there is no point in
> waiting for it to come back - as we are effectively being terminated and
> should not impede the progress.
> 

Hi, Konrad,

Has this patch been applied to the upstream Linux tree? I couldn't find it in the latest upstream source.

> This patch solves the problem by checking the 'system_state' value to
> see if we are heading towards death. We also make the wait
> mechanism a bit more asynchronous.
> 
> Fixes-Bug: http://bugs.xenproject.org/xen/bug/8
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  drivers/xen/xenbus/xenbus_xs.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
> diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
> index b6d5fff..4f22706 100644
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -148,9 +148,24 @@ static void *read_reply(enum xsd_sockmsg_type *type, unsigned int *len)
> 
>  	while (list_empty(&xs_state.reply_list)) {
>  		spin_unlock(&xs_state.reply_lock);
> -		/* XXX FIXME: Avoid synchronous wait for response here. */
> -		wait_event(xs_state.reply_waitq,
> -			   !list_empty(&xs_state.reply_list));
> +		wait_event_timeout(xs_state.reply_waitq,
> +				   !list_empty(&xs_state.reply_list),
> +				   msecs_to_jiffies(500));
> +
> +		/*
> +		 * If we are in the process of being shut-down there is
> +		 * no point of trying to contact XenBus - it is either
> +		 * killed (xenstored application) or the other domain
> +		 * has been killed or is unreachable.
> +		 */
> +		switch (system_state) {
> +		case SYSTEM_POWER_OFF:
> +		case SYSTEM_RESTART:
> +		case SYSTEM_HALT:
> +			return ERR_PTR(-EIO);
> +		default:
> +			break;
> +		}
>  		spin_lock(&xs_state.reply_lock);
>  	}
> @@ -215,6 +230,9 @@ void *xenbus_dev_request_and_reply(struct xsd_sockmsg *msg)
> 
>  	mutex_unlock(&xs_state.request_mutex);
> +	if (IS_ERR(ret))
> +		return ret;
> +
>  	if ((msg->type == XS_TRANSACTION_END) ||
>  	    ((req_msg.type == XS_TRANSACTION_START) &&
>  	     (msg->type == XS_ERROR)))


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 02:16:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 02:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7FGA-0001ku-EK; Sun, 26 Jan 2014 02:15:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7FG8-0001kp-QD
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 02:15:40 +0000
Received: from [193.109.254.147:35461] by server-1.bemta-14.messagelabs.com id
	7E/6C-15600-BCF64E25; Sun, 26 Jan 2014 02:15:39 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390702538!11695633!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14036 invoked from network); 26 Jan 2014 02:15:38 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-27.messagelabs.com with SMTP;
	26 Jan 2014 02:15:38 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 25 Jan 2014 18:15:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,721,1384329600"; d="scan'208";a="470838099"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 25 Jan 2014 18:15:37 -0800
Received: from fmsmsx118.amr.corp.intel.com (10.18.116.18) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 18:15:36 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx118.amr.corp.intel.com (10.18.116.18) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 18:15:36 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 10:15:32 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Anthony PERARD <anthony.perard@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHPDUs3aA8yi+K0sk+fDGrKfVQCipqWOPwwgAAgNGA=
Date: Sun, 26 Jan 2014 02:15:31 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6F87@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com> 
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Liu, SongtaoX" <songtaox.liu@intel.com>, "Dugger, 
	Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2014-01-26:
> Anthony PERARD wrote on 2014-01-09:
>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>>>> [...]
>>>>>>>> Xen reports something like:
>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 > 131328
>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46 memflags=0 (62 of 64)
>>>>>>>> 
>>>>>>>> ?
>>>>>>>> 
>>>>>>>> (I tried to reproduce the issue by simply adding many emulated
>>>>>>>> e1000 in QEMU :) )
>>>>>>>> 
>>>> 
>>>>> -bash-4.1# lspci -s 01:00.0 -v
>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>>>         Flags: fast devsel, IRQ 16
>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>>>         I/O ports at e020 [disabled] [size=32]
>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>>> 
>>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>>> allocate memory for it. We will maybe have to find another way.
>>>> qemu-trad does not seem to allocate memory, but I haven't gone
>>>> very far in trying to check that.
>>> 
>>> And indeed that is the case. The "Fix" below fixes it.
>>> 
>>> 
>>> Based on that and this guest config:
>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>> memory = 2048
>>> boot="d"
>>> maxvcpus=32
>>> vcpus=1
>>> serial='pty'
>>> vnclisten="0.0.0.0"
>>> name="latest"
>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>>> pci = [ "01:00.0" ]
>>> 
>>> I can boot the guest.
>> 
>> And can you access the ROM from the guest?
>> 
>> 
>> Also, I have another patch; it initializes the PCI ROM BAR like any
>> other BAR. In that case, if QEMU is involved in an access to the ROM,
>> it will print an error, as is the case for the other BARs.
>> 
>> I tried to test it, but it was with an embedded VGA card. When I dumped
>> the ROM, I got the same one as the emulated card instead of the ROM
>> from the device.
>> 
>> 
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 6dd7a68..2bbdb6d 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>> 
>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>> -                                      "xen-pci-pt-rom", d->rom.size);
>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>> +                              "xen-pci-pt-rom", d->rom.size);
>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>                           &s->rom);
> 
> Hi, Anthony,
> 
> Is your fix the final solution for this issue? If yes, will
> you push it before the Xen 4.4 release?
> 
> Best regards,
> Yang
>

CCing Songtao, who still sees this issue with qemu-xen. If the patch is in, please let us know. We hope it will be included in the Xen 4.4 release.

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 02:16:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 02:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7FGA-0001ku-EK; Sun, 26 Jan 2014 02:15:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7FG8-0001kp-QD
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 02:15:40 +0000
Received: from [193.109.254.147:35461] by server-1.bemta-14.messagelabs.com id
	7E/6C-15600-BCF64E25; Sun, 26 Jan 2014 02:15:39 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390702538!11695633!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14036 invoked from network); 26 Jan 2014 02:15:38 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-16.tower-27.messagelabs.com with SMTP;
	26 Jan 2014 02:15:38 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 25 Jan 2014 18:15:37 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,721,1384329600"; d="scan'208";a="470838099"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 25 Jan 2014 18:15:37 -0800
Received: from fmsmsx118.amr.corp.intel.com (10.18.116.18) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 18:15:36 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx118.amr.corp.intel.com (10.18.116.18) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 25 Jan 2014 18:15:36 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 10:15:32 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Anthony PERARD <anthony.perard@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHPDUs3aA8yi+K0sk+fDGrKfVQCipqWOPwwgAAgNGA=
Date: Sun, 26 Jan 2014 02:15:31 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6F87@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com> 
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Liu, SongtaoX" <songtaox.liu@intel.com>, "Dugger, 
	Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2014-01-26:
> Anthony PERARD wrote on 2014-01-09:
>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>>>> [...]
>>>>>>>> Does Xen report something like:
>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 > 131328
>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46 memflags=0 (62 of 64)
>>>>>>>> 
>>>>>>>> ?
>>>>>>>> 
>>>>>>>> (I tried to reproduce the issue by simply adding many emulated
>>>>>>>> e1000 devices in QEMU :) )
>>>>>>>> 
>>>> 
>>>>> -bash-4.1# lspci -s 01:00.0 -v
>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>>>         Flags: fast devsel, IRQ 16
>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>>>         I/O ports at e020 [disabled] [size=32]
>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>>> 
>>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>>> allocate memory for it. We will maybe have to find another way.
>>>> qemu-trad does not seem to allocate memory, but I haven't gone
>>>> very far in trying to check that.
>>> 
>>> And indeed that is the case. The "Fix" below fixes it.
>>> 
>>> 
>>> Based on that and this guest config:
>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>> memory = 2048
>>> boot="d"
>>> maxvcpus=32
>>> vcpus=1
>>> serial='pty'
>>> vnclisten="0.0.0.0"
>>> name="latest"
>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>>> pci = [ "01:00.0" ]
>>> 
>>> I can boot the guest.
>> 
>> And can you access the ROM from the guest ?
>> 
>> 
>> Also, I have another patch that will initialize the PCI ROM BAR like any
>> other BAR. In this case, if qemu is involved in an access to the ROM, it
>> will print an error, as is the case for the other BARs.
>> 
>> I tried to test it, but it was with an embedded VGA card. When I dumped
>> the ROM, I got the same one as the emulated card instead of the ROM
>> from the device.
>> 
>> 
>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>> index 6dd7a68..2bbdb6d 100644
>> --- a/hw/xen/xen_pt.c
>> +++ b/hw/xen/xen_pt.c
>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>> 
>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>> -                                      "xen-pci-pt-rom", d->rom.size);
>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>> +                              "xen-pci-pt-rom", d->rom.size);
>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>                           &s->rom);
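[Editor's note] The distinction the quoted patch relies on can be illustrated with a toy model in plain C. This is not the QEMU MemoryRegion API and every name below is made up: a rom-device-style region needs guest-visible backing memory allocated up front, while an io-style region forwards every access to a callback and allocates nothing, which is what avoids the over-allocation reported earlier in the thread.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy model, NOT the QEMU API: just enough structure to contrast
 * the two initialisation styles discussed above. */
typedef uint32_t (*read_cb)(void *opaque, uint64_t addr);

struct toy_region {
    uint8_t *backing;   /* set only for the rom-device flavour */
    read_cb  read;      /* set only for the io (trap-to-callback) flavour */
    void    *opaque;
};

/* rom-device style: the region owns real backing memory the guest maps,
 * so 'size' bytes must exist up front (here the caller provides them). */
static void toy_init_rom_device(struct toy_region *r, uint8_t *buf, size_t size)
{
    memset(buf, 0, size);
    r->backing = buf;
    r->read = NULL;
    r->opaque = NULL;
}

/* io style: no backing allocation at all; every read traps to 'cb'. */
static void toy_init_io(struct toy_region *r, read_cb cb, void *opaque)
{
    r->backing = NULL;
    r->read = cb;
    r->opaque = opaque;
}

static uint32_t toy_region_read(struct toy_region *r, uint64_t addr)
{
    if (r->backing)
        return r->backing[addr];    /* served from allocated memory */
    return r->read(r->opaque, addr); /* served by the access callback */
}

/* Example callback: an io region whose reads all answer 0xff, roughly
 * what an "unhandled" passthrough read could look like. */
static uint32_t toy_trap_read(void *opaque, uint64_t addr)
{
    (void)opaque; (void)addr;
    return 0xff;
}
```

The actual hunk swaps memory_region_init_rom_device() for memory_region_init_io() at PCI_ROM_SLOT, so ROM BAR accesses go through the same trap path as the other BARs instead of consuming guest memory.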
> 
> Hi, Anthony,
> 
> Is your fix the final solution for this issue? If so, will you push
> it before the Xen 4.4 release?
> 
> Best regards,
> Yang
>

CCing Songtao, who still sees this issue with qemu-xen. If the patch is in, please let us know. We hope it will be included in the Xen 4.4 release.

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 03:45:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 03:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7GeP-0003zb-Vs; Sun, 26 Jan 2014 03:44:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7GeO-0003zW-AW
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 03:44:48 +0000
Received: from [85.158.143.35:8875] by server-3.bemta-4.messagelabs.com id
	09/12-32360-FA484E25; Sun, 26 Jan 2014 03:44:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390707885!798230!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12301 invoked from network); 26 Jan 2014 03:44:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 03:44:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0Q3iWd7000996
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 03:44:33 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0Q3iVkx019580
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sun, 26 Jan 2014 03:44:31 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0Q3iUIu018909; Sun, 26 Jan 2014 03:44:30 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sat, 25 Jan 2014 19:44:30 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6EC5@SHSMSX104.ccr.corp.intel.com>
References: <1383932286-25080-1-git-send-email-konrad.wilk@oracle.com>
	<1383932286-25080-5-git-send-email-konrad.wilk@oracle.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6EC5@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sat, 25 Jan 2014 22:44:21 -0500
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>
Message-ID: <c9180994-668d-4be2-a67f-12ceeb5e4cf5@email.android.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: Re: [Xen-devel] [PATCH 4/4] xen/xenbus: Avoid synchronous wait
	on	XenBus stalling shutdown/restart.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>Konrad Rzeszutek Wilk wrote on 2013-11-09:
>> The 'read_reply' works with 'process_msg' to read a reply from XenBus.
>> 'process_msg' runs from within the 'xenbus' thread. Whenever a
>> message shows up in XenBus it is put on the xs_state.reply_list list
>> and 'read_reply' picks it up.
>> 
>> The problem is if the backend domain or the xenstored process is killed.
>> In that case 'xenbus' is still waiting - and 'read_reply', if called,
>> is stuck forever waiting for the reply_list to have some contents.
>> 
>> This is normally not a problem - as the backend domain can come back
>> or the xenstored process can be restarted. However, if the domain is in
>> the process of being powered off/restarted/halted - there is no point in
>> waiting for it to come back - as we are effectively being terminated and
>> should not impede the progress.
>> 
>
>Hi, Konrad,
>
>Is this patch applied in the upstream Linux tree? I didn't find it in
>the latest upstream source.
>

No. It needs rework.
>> This patch solves the problem by checking the 'system_state' value
>> to see if we are heading towards shutdown. We also make the wait
>> mechanism a bit more asynchronous.
>> 
>> Fixes-Bug: http://bugs.xenproject.org/xen/bug/8
>> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> ---
>>  drivers/xen/xenbus/xenbus_xs.c | 24 +++++++++++++++++++++---
>>  1 file changed, 21 insertions(+), 3 deletions(-)
>> diff --git a/drivers/xen/xenbus/xenbus_xs.c
>> b/drivers/xen/xenbus/xenbus_xs.c index b6d5fff..4f22706 100644
>> --- a/drivers/xen/xenbus/xenbus_xs.c
>> +++ b/drivers/xen/xenbus/xenbus_xs.c
>> @@ -148,9 +148,24 @@ static void *read_reply(enum xsd_sockmsg_type *type, unsigned int *len)
>> 
>>  	while (list_empty(&xs_state.reply_list)) {
>>  		spin_unlock(&xs_state.reply_lock);
>> -		/* XXX FIXME: Avoid synchronous wait for response here. */
>> -		wait_event(xs_state.reply_waitq,
>> -			   !list_empty(&xs_state.reply_list));
>> +		wait_event_timeout(xs_state.reply_waitq,
>> +				   !list_empty(&xs_state.reply_list),
>> +				   msecs_to_jiffies(500));
>> +
>> +		/*
>> +		 * If we are in the process of being shut-down there is
>> +		 * no point of trying to contact XenBus - it is either
>> +		 * killed (xenstored application) or the other domain
>> +		 * has been killed or is unreachable.
>> +		 */
>> +		switch (system_state) {
>> +		case SYSTEM_POWER_OFF:
>> +		case SYSTEM_RESTART:
>> +		case SYSTEM_HALT:
>> +			return ERR_PTR(-EIO);
>> +		default:
>> +			break;
>> +		}
>>  		spin_lock(&xs_state.reply_lock);
>>  	}
>> @@ -215,6 +230,9 @@ void *xenbus_dev_request_and_reply(struct xsd_sockmsg *msg)
>> 
>>  	mutex_unlock(&xs_state.request_mutex);
>> +	if (IS_ERR(ret))
>> +		return ret;
>> +
>>  	if ((msg->type == XS_TRANSACTION_END) ||
>>  	    ((req_msg.type == XS_TRANSACTION_START) &&
>>  	     (msg->type == XS_ERROR)))
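[Editor's note] The control flow of the patched read_reply() loop can be sketched in isolation. This is a toy model under made-up names, not kernel code: wait in bounded slices instead of indefinitely, and after each slice re-check the system state so that a shutdown turns into -EIO rather than a hang.

```c
#include <errno.h>
#include <stdbool.h>

/* Toy model of the patched loop; none of these names are kernel APIs. */
enum toy_state { TOY_RUNNING, TOY_POWER_OFF, TOY_RESTART, TOY_HALT };

/* Stands in for wait_event_timeout(): blocks for up to one slice and
 * returns true once a reply is on the list. Supplied by the caller so
 * the loop itself stays testable. */
typedef bool (*toy_wait_slice)(void *ctx);

static int toy_read_reply(toy_wait_slice wait_slice, void *ctx,
                          enum toy_state (*state)(void))
{
    for (;;) {
        if (wait_slice(ctx))
            return 0;            /* reply arrived */
        /* A slice timed out: bail out if the system is going down,
         * since xenstored or the backend may never answer. */
        switch (state()) {
        case TOY_POWER_OFF:
        case TOY_RESTART:
        case TOY_HALT:
            return -EIO;
        default:
            break;               /* still running: wait another slice */
        }
    }
}

/* Sample environments for exercising the loop. */
static bool toy_reply_ready(void *ctx)   { (void)ctx; return true; }
static bool toy_reply_missing(void *ctx) { (void)ctx; return false; }
static enum toy_state toy_running(void)  { return TOY_RUNNING; }
static enum toy_state toy_halting(void)  { return TOY_HALT; }
```

The companion hunk in xenbus_dev_request_and_reply() then propagates that error pointer to the caller instead of treating it as a valid reply.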
>
>
>Best regards,
>Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 05:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 05:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7IGo-0006sh-7k; Sun, 26 Jan 2014 05:28:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7IGm-0006sc-9e
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 05:28:32 +0000
Received: from [85.158.143.35:56930] by server-3.bemta-4.messagelabs.com id
	5B/F8-32360-FFC94E25; Sun, 26 Jan 2014 05:28:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390714108!808960!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3037 invoked from network); 26 Jan 2014 05:28:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 05:28:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,722,1384300800"; d="scan'208";a="96557293"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Jan 2014 05:28:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 00:28:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7IGg-0001xv-8c;
	Sun, 26 Jan 2014 05:28:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7IGg-0008Gc-8H;
	Sun, 26 Jan 2014 05:28:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24517-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 05:28:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24517: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2211786373701431974=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2211786373701431974==
Content-Type: text/plain

flight 24517 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24517/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  9 guest-start               fail REGR. vs. 24413
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                     fail   like 24413
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24413
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24407
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24413
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 24413
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24413
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24413

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  9 guest-localmigrate fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                020abbc91120ddf052e2c303a8c598c3be4dc459
baseline version:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Morton <akpm@linux-foundation.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bob Peterson <rpeterso@redhat.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  David Rientjes <rientjes@google.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Hugh Dickins <hughd@google.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Jon Medhurst <tixy@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paulo Zanoni <paulo.r.zanoni@intel.com>
  Peter Zijlstra <peterz@infradead.org>
  Qingshuai Tian <qingshuai.tian@intel.com>
  Rob Herring <rob.herring@calxeda.com>
  Robert Richter <rric@kernel.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Serge Hallyn <serge.hallyn@canonical.com>
  Stephen Warren <swarren@wwwdotorg.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Taras Kondratiuk <taras.kondratiuk@linaro.org>
  Tony Lindgren <tony@atomide.com>
  Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Will Deacon <will.deacon@arm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=020abbc91120ddf052e2c303a8c598c3be4dc459
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 020abbc91120ddf052e2c303a8c598c3be4dc459
+ branch=linux-3.10
+ revision=020abbc91120ddf052e2c303a8c598c3be4dc459
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 020abbc91120ddf052e2c303a8c598c3be4dc459:tested/linux-3.10
Counting objects: 196, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (136/136), 26.84 KiB, done.
Total 136 (delta 109), reused 136 (delta 109)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   1071ea6..020abbc  020abbc91120ddf052e2c303a8c598c3be4dc459 -> tested/linux-3.10
+ exit 0


--===============2211786373701431974==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2211786373701431974==--

From xen-devel-bounces@lists.xen.org Sun Jan 26 05:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 05:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7IGo-0006sh-7k; Sun, 26 Jan 2014 05:28:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7IGm-0006sc-9e
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 05:28:32 +0000
Received: from [85.158.143.35:56930] by server-3.bemta-4.messagelabs.com id
	5B/F8-32360-FFC94E25; Sun, 26 Jan 2014 05:28:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390714108!808960!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3037 invoked from network); 26 Jan 2014 05:28:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 05:28:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,722,1384300800"; d="scan'208";a="96557293"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Jan 2014 05:28:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 00:28:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7IGg-0001xv-8c;
	Sun, 26 Jan 2014 05:28:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7IGg-0008Gc-8H;
	Sun, 26 Jan 2014 05:28:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24517-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 05:28:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24517: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2211786373701431974=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2211786373701431974==
Content-Type: text/plain

flight 24517 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24517/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  9 guest-start               fail REGR. vs. 24413
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                     fail   like 24413
 test-amd64-i386-freebsd10-amd64  5 xen-boot                    fail like 24413
 test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install           fail like 24407
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24413
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot                 fail like 24413
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24413
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24413

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  9 guest-localmigrate fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                020abbc91120ddf052e2c303a8c598c3be4dc459
baseline version:
 linux                1071ea6e68ead40df739b223e9013d99c23c19ab

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Morton <akpm@linux-foundation.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bob Peterson <rpeterso@redhat.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  David Rientjes <rientjes@google.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Hugh Dickins <hughd@google.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Jon Medhurst <tixy@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paulo Zanoni <paulo.r.zanoni@intel.com>
  Peter Zijlstra <peterz@infradead.org>
  Qingshuai Tian <qingshuai.tian@intel.com>
  Rob Herring <rob.herring@calxeda.com>
  Robert Richter <rric@kernel.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Serge Hallyn <serge.hallyn@canonical.com>
  Stephen Warren <swarren@wwwdotorg.org>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Taras Kondratiuk <taras.kondratiuk@linaro.org>
  Tony Lindgren <tony@atomide.com>
  Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Will Deacon <will.deacon@arm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=020abbc91120ddf052e2c303a8c598c3be4dc459
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 020abbc91120ddf052e2c303a8c598c3be4dc459
+ branch=linux-3.10
+ revision=020abbc91120ddf052e2c303a8c598c3be4dc459
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 020abbc91120ddf052e2c303a8c598c3be4dc459:tested/linux-3.10
Counting objects: 196, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (136/136), 26.84 KiB, done.
Total 136 (delta 109), reused 136 (delta 109)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   1071ea6..020abbc  020abbc91120ddf052e2c303a8c598c3be4dc459 -> tested/linux-3.10
+ exit 0


--===============2211786373701431974==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2211786373701431974==--

From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbm-000209-GA; Sun, 26 Jan 2014 07:58:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7Ds0-0003hr-Vc
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 00:46:41 +0000
Received: from [85.158.143.35:10740] by server-2.bemta-4.messagelabs.com id
	99/2E-11386-0FA54E25; Sun, 26 Jan 2014 00:46:40 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390697196!787155!1
X-Originating-IP: [216.109.115.62]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 818 invoked from network); 26 Jan 2014 00:46:37 -0000
Received: from nm45-vm3.bullet.mail.bf1.yahoo.com (HELO
	nm45-vm3.bullet.mail.bf1.yahoo.com) (216.109.115.62)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 00:46:37 -0000
Received: from [98.139.212.153] by nm45.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:36 -0000
Received: from [98.139.211.162] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:35 -0000
Received: from [127.0.0.1] by smtp219.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390697195; bh=lh37dkCBzJRdGd02hTpS8HqH2zQMvEEuui+W/YRjpXE=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=JNvAmlUMftjBTfLSka7bZX96v3xPCRMA3iAvvquocZHJK2WujT/SEfXmGnBd9PixppS8n3d+HaoCVzcHnt/xi9YjHS2R6CcSaLWv/jIZLpsGIvTsBS2tjyhHp6dC5jLDiBIxn3nUEgGN3cv1GZKx+Qh0Y1lbu/e2O2MgulabVRE=
X-Yahoo-Newman-Id: 797681.48345.bm@smtp219.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 1dZipnoVM1k_LsqOahfxKceLVLHT7qWQ3clNSZeqhxNqUFB
	xarns2B7KUZaAOzKoudkxdEceYOGB2mhkQmhHLF2nCRB8RvwYL5lzDbQ_tQn
	.JKmVAdPeBilmrct9QHBDhQGc_A4wQREOokHqaJaXbNsUY5p_fSq2mCdnJYE
	s.iQ.Fu7P6rWvBdXhpTsONnxnHWBzH5RG9aHMP1CQEbo8EKiv7I2VkMblFIq
	v.fliZ74fPPpnz3RluNWOSUaV5SBRsGcWtMCnVHQvrKB4_qo2Qfy1pGG76pP
	xAhcfIjE.vnmPXEGWi1jPBLXUMBP6MBHb15DIj2rdRSYounTsJlCAwEDCea8
	Uci0kzu7KJhfWsRfEd414Du5qqbWTB6a4aLMI18Ivgq_vHH2.C7m0vKW7B5d
	cjTcJqWZXMY7lDE4Tc_XXiyrvF4KWYyoRy1TXCrNiiwq9tkS0cvR7wqr6RS.
	1h0v_wF5aNIgYUOrR_Y1cKvSclGaXhlREhanP9y6FKB9UkrXwi623NKiqu7W
	YIWtXIvlFIt22juJF.J1J_WdFqBdTPZRvMWp9sg8LvXvLQFg-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp219.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 16:46:35 -0800 PST
Message-ID: <1390697193.2613.1.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 17:46:33 -0700
In-Reply-To: <52E233A60200007800116776@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-TuKKW+WXQkS/vjObEGDa"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-TuKKW+WXQkS/vjObEGDa
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On Fri, 2014-01-24 at 08:34 +0000, Jan Beulich wrote:

> But you realize that, following precedents elsewhere in the
> IOMMU code, we would disable the IOMMU as a whole rather
> than just interrupt remapping.
> 
> But yes, looking at the Linux side code, I guess we need to do
> so. Would be nice if you could confirm that the system comes up
> fine (and hopefully without IOMMU faults) with
> "iommu=no-intremap,debug" as well as with "iommu=off".
> 
> Jan
> 

The system does come up fine with both of the above command lines.  See
the attached logs.

-Eric


--=-TuKKW+WXQkS/vjObEGDa
Content-Disposition: attachment; filename="no-intremap.txt"
Content-Type: text/plain; name="no-intremap.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet: 
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=no-intremap,debug com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.985 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD  
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD 
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003dab000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003dab000.

--=-TuKKW+WXQkS/vjObEGDa
Content-Disposition: attachment; filename="iommu_off.txt"
Content-Type: text/plain; name="iommu_off.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet: 
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,off com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.982 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880072db2400.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880072db2400.

--=-TuKKW+WXQkS/vjObEGDa
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-TuKKW+WXQkS/vjObEGDa--



From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbm-00020G-SZ; Sun, 26 Jan 2014 07:58:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7E4R-00049j-9w
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 00:59:31 +0000
Received: from [85.158.137.68:20072] by server-15.bemta-3.messagelabs.com id
	62/F7-11556-2FD54E25; Sun, 26 Jan 2014 00:59:30 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390697965!11286000!1
X-Originating-IP: [98.139.212.175]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9718 invoked from network); 26 Jan 2014 00:59:26 -0000
Received: from nm16.bullet.mail.bf1.yahoo.com (HELO
	nm16.bullet.mail.bf1.yahoo.com) (98.139.212.175)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 00:59:26 -0000
Received: from [98.139.212.150] by nm16.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
Received: from [98.139.211.195] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
Received: from [127.0.0.1] by smtp204.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390697964; bh=5u3H/OoCco5r0Kwuol1yCVHp3ZKi/UELaK/uzjWExUQ=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=bmbh9gH4HvtelKT0Nm/Qq+xs4wcQe9SrSKU8COa5tew3g4KHmb02lZKNIBC7ESW9RCRiyoXIHvZm2ybjWC0KQq3z7fYy+S4sQP5NWes6ycCFG5ltEnqirptFQNEXFFnm5YDNhmqcQeI8QS4EpVc9XlPnMMDz1SVGBfv6OqC+EPs=
X-Yahoo-Newman-Id: 378564.55500.bm@smtp204.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: _m3bYToVM1mvyXBAAlJxK.pP8I.DLLYiQeaYu2ccRQYSX00
	fVrZlnPOBgEBxyieU5QBSqHBDTTR51LnJ.YcDgKHnNFyr.hdkiModJRDURhh
	aiG7E4FjzhLl3xH9RowgRX4Ptak7.5lbdhmIfm2aDk.AReXtXFJaSaet26m0
	s5jGcJbkk5URfzsnYERVh0m9hMxnJvvLpibHq3Kh8pvdc0gH2UUFL5JYJye7
	xNKolgg6s78CAan3ioQe7CbHHOjVFKwVBlqMhrvjDhqRxlXlj15oxTTUucrf
	IvrZULpXC79rMeOWH_jJrzLKJr2cGA_gxT5.pnSdC3u2kDvRkeO_V0h6JdmV
	sHQsONDhtsXvtJ20GPvlZBtQ4t0PaIBWB3F_tQhPO74VWyqrnI2QIlqMaVwn
	pBjMJUF0RcpV9_iOxqa_2BqBYIgo3Dy2N5mbNa1C9lUqZOB_FzXDpLrVvO0j
	QsLoPnfG8vnyyxiP18dbk36EoSwQMo1WmTg_2hb1wRXQQ5pivgw6.vh3j9nU
	d4RLk_udDJFV5WdXVHj8x.v2MnQj61fToPRFc_t6Eiqg3.xU-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp204.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 16:59:24 -0800 PST
Message-ID: <1390697961.2613.11.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 17:59:21 -0700
In-Reply-To: <52E23A0502000078001167D0@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E23A0502000078001167D0@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-sosPhbz903Mc1UoWlyyp"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-sosPhbz903Mc1UoWlyyp
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Fri, 2014-01-24 at 09:01 +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> > On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >> If
> >> the kernel responds to that mentioned firmware bug by forcing
> >> interrupt remapping off, maybe we would have to do the same...
> > 
> > That would be better than xen failing to boot.
> 
> You may also want to try booting with "ivrs_ioapic[0]=00:14.0" or
> "ivrs_ioapic[1]=00:14.0", but I'm afraid that would still not work
> properly as the base system related ACPI tables are also lacking
> that second IO-APIC (and I'm not convinced that adding a
> respective command line option would help either).
> 
> Jan
> 

I tried the above ivrs_ioapic commands and didn't see any improvement
in behavior.  Attached are logs of Xen booting four times with different
combinations of ivrs_ioapic.

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,no-intremap ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose ivrs-ioapic[0]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose,no-intremap com1=38400,8n1,pci console=com1

-Eric

--=-sosPhbz903Mc1UoWlyyp
Content-Disposition: attachment; filename="ivrs-ioapic.txt"
Content-Type: text/plain; name="ivrs-ioapic.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.955 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d40
(XEN)    0000000000000005 ffff830000086ee0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,no-intremap ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.971 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000388a000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000388a000.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login: (XEN) Domain 0 shutdown: rebooting machine.
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose ivrs-ioapic[0]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.942 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d40
(XEN)    0000000000000005 ffff830000086ee0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,no-intremap com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.957 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003755800.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003755800.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login:

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24525-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 07:58:20 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24525: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24525 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24525/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 24512 REGR. vs. 24473

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  7 redhat-install              fail pass in 24512
 test-amd64-i386-xl-credit2   13 guest-localmigrate.2        fail pass in 24512

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail blocked in 24473
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24473
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24512 blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24512 like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24512 never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbm-00020G-SZ; Sun, 26 Jan 2014 07:58:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7E4R-00049j-9w
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 00:59:31 +0000
Received: from [85.158.137.68:20072] by server-15.bemta-3.messagelabs.com id
	62/F7-11556-2FD54E25; Sun, 26 Jan 2014 00:59:30 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390697965!11286000!1
X-Originating-IP: [98.139.212.175]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9718 invoked from network); 26 Jan 2014 00:59:26 -0000
Received: from nm16.bullet.mail.bf1.yahoo.com (HELO
	nm16.bullet.mail.bf1.yahoo.com) (98.139.212.175)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 00:59:26 -0000
Received: from [98.139.212.150] by nm16.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
Received: from [98.139.211.195] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
Received: from [127.0.0.1] by smtp204.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:59:24 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390697964; bh=5u3H/OoCco5r0Kwuol1yCVHp3ZKi/UELaK/uzjWExUQ=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=bmbh9gH4HvtelKT0Nm/Qq+xs4wcQe9SrSKU8COa5tew3g4KHmb02lZKNIBC7ESW9RCRiyoXIHvZm2ybjWC0KQq3z7fYy+S4sQP5NWes6ycCFG5ltEnqirptFQNEXFFnm5YDNhmqcQeI8QS4EpVc9XlPnMMDz1SVGBfv6OqC+EPs=
X-Yahoo-Newman-Id: 378564.55500.bm@smtp204.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: _m3bYToVM1mvyXBAAlJxK.pP8I.DLLYiQeaYu2ccRQYSX00
	fVrZlnPOBgEBxyieU5QBSqHBDTTR51LnJ.YcDgKHnNFyr.hdkiModJRDURhh
	aiG7E4FjzhLl3xH9RowgRX4Ptak7.5lbdhmIfm2aDk.AReXtXFJaSaet26m0
	s5jGcJbkk5URfzsnYERVh0m9hMxnJvvLpibHq3Kh8pvdc0gH2UUFL5JYJye7
	xNKolgg6s78CAan3ioQe7CbHHOjVFKwVBlqMhrvjDhqRxlXlj15oxTTUucrf
	IvrZULpXC79rMeOWH_jJrzLKJr2cGA_gxT5.pnSdC3u2kDvRkeO_V0h6JdmV
	sHQsONDhtsXvtJ20GPvlZBtQ4t0PaIBWB3F_tQhPO74VWyqrnI2QIlqMaVwn
	pBjMJUF0RcpV9_iOxqa_2BqBYIgo3Dy2N5mbNa1C9lUqZOB_FzXDpLrVvO0j
	QsLoPnfG8vnyyxiP18dbk36EoSwQMo1WmTg_2hb1wRXQQ5pivgw6.vh3j9nU
	d4RLk_udDJFV5WdXVHj8x.v2MnQj61fToPRFc_t6Eiqg3.xU-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp204.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 16:59:24 -0800 PST
Message-ID: <1390697961.2613.11.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 17:59:21 -0700
In-Reply-To: <52E23A0502000078001167D0@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E23A0502000078001167D0@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-sosPhbz903Mc1UoWlyyp"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-sosPhbz903Mc1UoWlyyp
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Fri, 2014-01-24 at 09:01 +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> > On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >> If
> >> the kernel responds to that mentioned firmware bug by forcing
> >> interrupt remapping off, maybe we would have to do the same...
> > 
> > That would be better than xen failing to boot.
> 
> You may also want to try booting with "ivrs_ioapic[0]=00:14.0" or
> "ivrs_ioapic[1]=00:14.0", but I'm afraid that would still not work
> properly as the base system related ACPI tables are also lacking
> that second IO-APIC (and I'm not convinced that adding a
> respective command line option would help either).
> 
> Jan
> 

I tried the above ivrs_ioapic commands and didn't see any improvement
in behavior.  Attached are logs of Xen booting four times with different
combinations of ivrs_ioapic.
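[Editor's note: for anyone reproducing this, the extra hypervisor options can be added to the Xen (not dom0 kernel) command line on a GRUB2 system such as the Fedora install used here. A minimal sketch, assuming grub.cfg is regenerated from /etc/default/grub by the stock 20_linux_xen helper; paths may differ on other distros:]

```shell
# Sketch: append the ivrs_ioapic override Jan suggested to the Xen
# hypervisor arguments (GRUB_CMDLINE_XEN_DEFAULT is read by GRUB2's
# 20_linux_xen script, separately from the dom0 kernel's arguments).
echo 'GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT ivrs_ioapic[0]=00:14.0"' \
    >> /etc/default/grub

# Regenerate grub.cfg and reboot to pick up the change.
grub2-mkconfig -o /boot/grub2/grub.cfg
```

[Note that the logs below spell the option "ivrs-ioapic" with a hyphen, whereas Jan's suggestion uses an underscore; whichever spelling the hypervisor actually parses should be used.]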

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,no-intremap ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose ivrs-ioapic[0]=00:14.0 com1=38400,8n1,pci
console=com1

(XEN) Command line: placeholder dom0_mem=2048M,max:2048M
dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all
iommu=debug,verbose,no-intremap com1=38400,8n1,pci console=com1

-Eric

--=-sosPhbz903Mc1UoWlyyp
Content-Disposition: attachment; filename="ivrs-ioapic.txt"
Content-Type: text/plain; name="ivrs-ioapic.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.955 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d40
(XEN)    0000000000000005 ffff830000086ee0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,no-intremap ivrs-ioapic[1]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.971 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000388a000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff88000388a000.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login: (XEN) Domain 0 shutdown: rebooting machine.
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose ivrs-ioapic[0]=00:14.0 com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.942 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ...  failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... works.
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) AMD-Vi: IO_PAGE_FAULT: domain = 0, device id = 0xa0, fault address = 0xfdf8010140, flags = 0x8
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ----[ Xen-4.4-rc2  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000015   rcx: ffff830247340000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff830247366004
(XEN) rbp: ffff82d0802e7d98   rsp: ffff82d0802e7ce8   r8:  0000000000000004
(XEN) r9:  ffff82d080287161   r10: 000000000000000f   r11: ffff82d080287960
(XEN) r12: 0000000000000000   r13: 0000000000000001   r14: 0000000000000014
(XEN) r15: 0000000000205000   cr0: 000000008005003b   cr4: 00000000000006f0
(XEN) cr3: 00000000afa98000   cr2: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802e7ce8:
(XEN)    ffff82d000000001 0000000000000086 ffff830200000001 0000000000000046
(XEN)    ffff830247340070 0000000000010000 0000000000000000 0000000000000000
(XEN)    000182d000000014 ffff83024d02b600 0000000147380000 0000000000000000
(XEN)    0000000200000001 ffff83024d0253c0 ffff82d0802e7da8 0100000000010000
(XEN)    0000000000000001 ffff83024733d430 0000000000000004 ffff82cffffff010
(XEN)    ffff82cffffff000 0000000001000000 ffff82d0802e7da8 ffff82d080144f76
(XEN)    ffff82d0802e7df8 ffff82d080175f4b 0000000000000296 ffff830247380000
(XEN)    0000000000000008 0000000000000002 0000000000000000 ffff82d080284ec0
(XEN)    ffff82d080284ec0 ffff830247380000 ffff82d0802e7e38 ffff82d080176fca
(XEN)    ffff82d0802e7e58 0000000000000008 ffff8302473c9820 0000000000000008
(XEN)    ffff82d0802891e0 ffff830000086fb0 ffff82d0802e7e48 ffff82d0802bed37
(XEN)    ffff82d0802e7f08 ffff82d0802be41d 0000000000000000 0000000000100000
(XEN)    ffff83000177fc58 ffff830000086fb0 ffff82d0802d7488 0000000001281000
(XEN)    0000000000250000 0000000000000000 ffff83000000000c ffff830000086d40
(XEN)    0000000000000005 ffff830000086ee0 0000000800000000 000000010000006e
(XEN)    0000000000000003 00000000000002f8 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0801000b5 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d080152dd0>] amd_iommu_ioapic_update_ire+0x41d/0x5b4
(XEN)    [<ffff82d080144f76>] iommu_update_ire_from_apic+0x28/0x2a
(XEN)    [<ffff82d080175f4b>] set_ioapic_affinity_irq+0xa8/0x1e0
(XEN)    [<ffff82d080176fca>] setup_ioapic_dest+0x89/0xc3
(XEN)    [<ffff82d0802bed37>] smp_cpus_done+0x51/0x61
(XEN)    [<ffff82d0802be41d>] __start_xen+0x261a/0x2938
(XEN)    [<ffff82d0801000b5>] __high_start+0xa1/0xa3
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'get_rte_index(rte) == offset' failed at iommu_intr.c:188
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet:
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,no-intremap com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.957 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003755800.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003755800.

Fedora release 20 (Heisenbug)
Kernel 3.12.7-300.fc20.x86_64 on an x86_64 (hvc0)

astar login:

--=-sosPhbz903Mc1UoWlyyp
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-sosPhbz903Mc1UoWlyyp--



From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbs-00029b-1q; Sun, 26 Jan 2014 07:58:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7Kbp-000226-Ki
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 07:58:25 +0000
Received: from [85.158.139.211:2657] by server-16.bemta-5.messagelabs.com id
	17/3E-11843-020C4E25; Sun, 26 Jan 2014 07:58:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390723102!11924495!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14422 invoked from network); 26 Jan 2014 07:58:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 07:58:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,722,1384300800"; d="scan'208";a="94549030"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 07:58:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 02:58:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7Kbk-0002kM-JL;
	Sun, 26 Jan 2014 07:58:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7Kbk-000799-2V;
	Sun, 26 Jan 2014 07:58:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24525-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 07:58:20 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24525: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24525 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24525/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 24512 REGR. vs. 24473

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  7 redhat-install              fail pass in 24512
 test-amd64-i386-xl-credit2   13 guest-localmigrate.2        fail pass in 24512

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail blocked in 24473
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24473
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24512 blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24512 like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24512 never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)


From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbn-00020N-8j; Sun, 26 Jan 2014 07:58:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7EFb-0008BW-Sq
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 01:11:14 +0000
Received: from [193.109.254.147:21983] by server-7.bemta-14.messagelabs.com id
	92/F0-15500-7A064E25; Sun, 26 Jan 2014 01:11:03 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390698661!13175212!1
X-Originating-IP: [98.139.213.150]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26846 invoked from network); 26 Jan 2014 01:11:02 -0000
Received: from nm5-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm5-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.150)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 01:11:02 -0000
Received: from [98.139.212.153] by nm5.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
Received: from [98.139.213.10] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
Received: from [127.0.0.1] by smtp110.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390698660; bh=aLMjuaAHgiXiUG0gc1RxVTCYQU7Zioglyep6hg6ot/Q=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=H6Nk2eI3PD9LiP/QeTT+b+otze2eD7l80PpU4k6Y7H5oBbBTgzR0oFAyW9QbBTtld2fQjPADJOLLfdtBuBi3fqnRGvLJum/mB5tr8i+q9xyJUt3E0CmXIjorNopJvAPjFr4u8ZQICbWeuDg9Yj8K5Q9BDs4V8CYVNcPgNcsoi00=
X-Yahoo-Newman-Id: 503983.52662.bm@smtp110.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: ZJF516kVM1n6kR9FvoZrjGaOIN9u0Wbm7IzQUnyss9FzQz8
	23Lt9geK8WrzapJp9Vis7La39MaXvyH2Rs_PPx8VqQaAvXTQdwhk94iFmdeV
	soGNPwZOr0RD_gNgu1yV9X4gjHb5Tva2YTiKGvBB1sErWMeIKJnBCaZOsPyh
	HhNRU268Tmz338rsVbBbB.BUUVBCPsGWmO2a4vRgAcMj6Xkf6q4KJmWvyit_
	QIc_mmvIdQVE9Gon1a3dNh6pcrcI9Rn1p6Eb6eWxpm6NBXzx8_iLA2qx95Bo
	ZroFKZuFUkJwjF3D_I_19VJupMUnn9ewSbTtUEaU.zGFxPZjbcMp7fOMwW9m
	spEvVHD.o_KzLXbIHscikT960g25mX_gpRwFsNiFuPIV6x.UQm.UQo7oN2_b
	fbS7CIfDq0egna.ugl14J3YVrqRursVloMmIRzj9ZSh4wJpKCEcaJIHTf_Eh
	Lp8XF5ZH_0DNPCD4CC8jnCwvYcepinZLDg2_tGacYj_Of_yUBCgh4xBAq8QN
	xj8dPNBfDdNPQUfo.T_5b0IklWNEACsbuDT3rNPRbk47UaQ--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp110.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 17:11:00 -0800 PST
Message-ID: <1390698658.2613.20.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 18:10:58 -0700
In-Reply-To: <52E277920200007800116A70@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
	<52E277920200007800116A70@nat28.tlf.novell.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 13:24 +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 09:34, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> >> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >>> If
> >>> the kernel responds to that mentioned firmware bug by forcing
> >>> interrupt remapping off, maybe we would have to do the same...
> >> 
> >> That would be better than xen failing to boot.
> > 
> > But you realize that, following precedents elsewhere in the
> > IOMMU code, we would disable the IOMMU as a whole rather
> > than just interrupt remapping.
> > 
> > But yes, looking at the Linux side code, I guess we need to do
> > so. Would be nice if you could confirm that the system comes up
> > fine (and hopefully without IOMMU faults) with
> > "iommu=no-intremap,debug" as well as with "iommu=off".
> 
> Here's a patch attempting to do just that. Please give this a try
> without any IOMMU-related override options.
> 
> Jan
> 
> AMD IOMMU: fail if there is no southbridge IO-APIC
> 
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt usually does not work.
> 
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
> 
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>      const struct acpi_ivrs_header *ivrs_block;
>      unsigned long length;
>      unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>      int error = 0;
>  
>      BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>      /* Each IO-APIC must have been mentioned in the table. */
>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>      {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>              continue;
>  
>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>          }
>      }
>  
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>      return error;
>  }
>  
> 
> 
> 

Jan,

I am currently using the 4.4.0-RC2 RPMs that are available from the Test
Day information site.  With my limited skill set and time constraints,
it will take me a while to set up for testing the patch.  I will let you
know how it goes.

Thanks,

Eric



From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbm-000209-GA; Sun, 26 Jan 2014 07:58:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7Ds0-0003hr-Vc
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 00:46:41 +0000
Received: from [85.158.143.35:10740] by server-2.bemta-4.messagelabs.com id
	99/2E-11386-0FA54E25; Sun, 26 Jan 2014 00:46:40 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390697196!787155!1
X-Originating-IP: [216.109.115.62]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 818 invoked from network); 26 Jan 2014 00:46:37 -0000
Received: from nm45-vm3.bullet.mail.bf1.yahoo.com (HELO
	nm45-vm3.bullet.mail.bf1.yahoo.com) (216.109.115.62)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 00:46:37 -0000
Received: from [98.139.212.153] by nm45.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:36 -0000
Received: from [98.139.211.162] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:35 -0000
Received: from [127.0.0.1] by smtp219.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 00:46:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390697195; bh=lh37dkCBzJRdGd02hTpS8HqH2zQMvEEuui+W/YRjpXE=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=JNvAmlUMftjBTfLSka7bZX96v3xPCRMA3iAvvquocZHJK2WujT/SEfXmGnBd9PixppS8n3d+HaoCVzcHnt/xi9YjHS2R6CcSaLWv/jIZLpsGIvTsBS2tjyhHp6dC5jLDiBIxn3nUEgGN3cv1GZKx+Qh0Y1lbu/e2O2MgulabVRE=
X-Yahoo-Newman-Id: 797681.48345.bm@smtp219.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 1dZipnoVM1k_LsqOahfxKceLVLHT7qWQ3clNSZeqhxNqUFB
	xarns2B7KUZaAOzKoudkxdEceYOGB2mhkQmhHLF2nCRB8RvwYL5lzDbQ_tQn
	.JKmVAdPeBilmrct9QHBDhQGc_A4wQREOokHqaJaXbNsUY5p_fSq2mCdnJYE
	s.iQ.Fu7P6rWvBdXhpTsONnxnHWBzH5RG9aHMP1CQEbo8EKiv7I2VkMblFIq
	v.fliZ74fPPpnz3RluNWOSUaV5SBRsGcWtMCnVHQvrKB4_qo2Qfy1pGG76pP
	xAhcfIjE.vnmPXEGWi1jPBLXUMBP6MBHb15DIj2rdRSYounTsJlCAwEDCea8
	Uci0kzu7KJhfWsRfEd414Du5qqbWTB6a4aLMI18Ivgq_vHH2.C7m0vKW7B5d
	cjTcJqWZXMY7lDE4Tc_XXiyrvF4KWYyoRy1TXCrNiiwq9tkS0cvR7wqr6RS.
	1h0v_wF5aNIgYUOrR_Y1cKvSclGaXhlREhanP9y6FKB9UkrXwi623NKiqu7W
	YIWtXIvlFIt22juJF.J1J_WdFqBdTPZRvMWp9sg8LvXvLQFg-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp219.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 16:46:35 -0800 PST
Message-ID: <1390697193.2613.1.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 17:46:33 -0700
In-Reply-To: <52E233A60200007800116776@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-TuKKW+WXQkS/vjObEGDa"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-TuKKW+WXQkS/vjObEGDa
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit


On Fri, 2014-01-24 at 08:34 +0000, Jan Beulich wrote:

> But you realize that, following precedents elsewhere in the
> IOMMU code, we would disable the IOMMU as a whole rather
> than just interrupt remapping.
> 
> But yes, looking at the Linux side code, I guess we need to do
> so. Would be nice if you could confirm that the system comes up
> fine (and hopefully without IOMMU faults) with
> "iommu=no-intremap,debug" as well as with "iommu=off".
> 
> Jan
> 

The system does come up fine with both of the above command-line options. See attached
logs.

-Eric


--=-TuKKW+WXQkS/vjObEGDa
Content-Disposition: attachment; filename="no-intremap.txt"
Content-Type: text/plain; name="no-intremap.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet: 
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=no-intremap,debug com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.985 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD  
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD 
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) AMD-Vi: Setup I/O page table: device id = 0, type = 0x6, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x10, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x20, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x28, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x30, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x38, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x48, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x58, type = 0x2, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x88, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x90, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x92, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x98, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x9a, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa3, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa4, type = 0x5, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xa5, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb0, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0xb2, type = 0x7, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.0
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.1
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.2
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.3
(XEN) AMD-Vi: Skipping host bridge 0000:00:18.4
(XEN) AMD-Vi: Setup I/O page table: device id = 0x100, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x101, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x200, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x500, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x600, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x700, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) AMD-Vi: Setup I/O page table: device id = 0x701, type = 0x1, root table = 0x2472a2000, domain = 0, paging mode = 3
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) mm.c:809: d0: Forcing read-only access to MFN e0002
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003dab000.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880003dab000.

--=-TuKKW+WXQkS/vjObEGDa
Content-Disposition: attachment; filename="iommu_off.txt"
Content-Type: text/plain; name="iommu_off.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2-0.rc2.1.fc20
(XEN) Xen version 4.4-rc2 (mockbuild@[unknown]) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Jan 16 19:37:57 UTC 2014
(XEN) Latest ChangeSet: 
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose,off com1=38400,8n1,pci console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) XSM Framework v1.0.0 initialized
(XEN) Flask:  No policy file found. Disabling Flask.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.982 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) HVM: ASIDs enabled.
(XEN) SVM: Supported advanced features:
(XEN)  - Nested Page Tables (NPT)
(XEN)  - Last Branch Record (LBR) Virtualisation
(XEN)  - Next-RIP Saved on #VMEXIT
(XEN)  - Pause-Intercept Filter
(XEN) HVM: SVM enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) HVM: PVH mode not supported on this platform
(XEN) spurious 8259A interrupt: IRQ7.
(XEN) CPU1: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU1 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU2: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU2 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU3: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU3 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU4: No irq handler for vector e7 (IRQ -2147483648)
(XEN) microcode: CPU4 collect_cpu_info: patch_id=0x10000bf
(XEN) CPU5: No irq handler for vector e7 (IRQ -2147483648)
(XEN) Brought up 6 CPUs
(XEN) microcode: CPU5 collect_cpu_info: patch_id=0x10000bf
(XEN) HPET: 3 timers usable for broadcast (3 total)
(XEN) ACPI sleep modes: S3
(XEN) MCA: Use hw thresholding to adjust polling frequency
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Xenoprofile: AMD IBS detected (0x1f)
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xae0000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0x1060f0
(XEN) elf_parse_binary: phdr: paddr=0x1d07000 memsz=0x15300
(XEN) elf_parse_binary: phdr: paddr=0x1d1d000 memsz=0x71f000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x243c000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81d1d1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff8243c000
(XEN)     virt_entry       = 0xffffffff81d1d1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x243c000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000240000000->0000000244000000 (501692 pages to be allocated)
(XEN)  Init. ramdisk: 000000024e7bc000->000000024ffffc00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff8243c000
(XEN)  Init. ramdisk: ffffffff8243c000->ffffffff83c7fc00
(XEN)  Phys-Mach map: ffffffff83c80000->ffffffff84080000
(XEN)  Start info:    ffffffff84080000->ffffffff840804b4
(XEN)  Page tables:   ffffffff84081000->ffffffff840a6000
(XEN)  Boot stack:    ffffffff840a6000->ffffffff840a7000
(XEN)  TOTAL:         ffffffff80000000->ffffffff84400000
(XEN)  ENTRY ADDRESS: ffffffff81d1d1e0
(XEN) Dom0 has maximum 2 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81ae0000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d060f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d07000 -> 0xffffffff81d1c300
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d1d000 -> 0xffffffff81e77000
(XEN) Scrubbing Free RAM: ............................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 268kB init memory.
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010004 from 0x0000000000000000 to 0x000000000000ffff.
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:02.0
(XEN) PCI add device 0000:00:04.0
(XEN) PCI add device 0000:00:05.0
(XEN) PCI add device 0000:00:06.0
(XEN) PCI add device 0000:00:07.0
(XEN) PCI add device 0000:00:09.0
(XEN) PCI add device 0000:00:0b.0
(XEN) PCI add device 0000:00:11.0
(XEN) PCI add device 0000:00:12.0
(XEN) PCI add device 0000:00:12.2
(XEN) PCI add device 0000:00:13.0
(XEN) PCI add device 0000:00:13.2
(XEN) PCI add device 0000:00:14.0
(XEN) PCI add device 0000:00:14.2
(XEN) PCI add device 0000:00:14.3
(XEN) PCI add device 0000:00:14.4
(XEN) PCI add device 0000:00:14.5
(XEN) PCI add device 0000:00:16.0
(XEN) PCI add device 0000:00:16.2
(XEN) PCI add device 0000:00:18.0
(XEN) PCI add device 0000:00:18.1
(XEN) PCI add device 0000:00:18.2
(XEN) PCI add device 0000:00:18.3
(XEN) PCI add device 0000:00:18.4
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:01:00.1
(XEN) PCI add device 0000:02:00.0
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:06:00.0
(XEN) PCI add device 0000:07:00.0
(XEN) PCI add device 0000:07:00.1
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880072db2400.
(XEN) traps.c:3074: GPF (0000): ffff82d08019cc5e -> ffff82d080237d51
(XEN) traps.c:2516:d0 Domain attempted WRMSR 00000000c0010020 from 0x0000000000000000 to 0xffff880072db2400.

--=-TuKKW+WXQkS/vjObEGDa
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-TuKKW+WXQkS/vjObEGDa--



From xen-devel-bounces@lists.xen.org Sun Jan 26 07:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Kbn-00020N-8j; Sun, 26 Jan 2014 07:58:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W7EFb-0008BW-Sq
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 01:11:14 +0000
Received: from [193.109.254.147:21983] by server-7.bemta-14.messagelabs.com id
	92/F0-15500-7A064E25; Sun, 26 Jan 2014 01:11:03 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390698661!13175212!1
X-Originating-IP: [98.139.213.150]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26846 invoked from network); 26 Jan 2014 01:11:02 -0000
Received: from nm5-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm5-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.150)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 01:11:02 -0000
Received: from [98.139.212.153] by nm5.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
Received: from [98.139.213.10] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
Received: from [127.0.0.1] by smtp110.mail.bf1.yahoo.com with NNFMP;
	26 Jan 2014 01:11:00 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1390698660; bh=aLMjuaAHgiXiUG0gc1RxVTCYQU7Zioglyep6hg6ot/Q=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=H6Nk2eI3PD9LiP/QeTT+b+otze2eD7l80PpU4k6Y7H5oBbBTgzR0oFAyW9QbBTtld2fQjPADJOLLfdtBuBi3fqnRGvLJum/mB5tr8i+q9xyJUt3E0CmXIjorNopJvAPjFr4u8ZQICbWeuDg9Yj8K5Q9BDs4V8CYVNcPgNcsoi00=
X-Yahoo-Newman-Id: 503983.52662.bm@smtp110.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: ZJF516kVM1n6kR9FvoZrjGaOIN9u0Wbm7IzQUnyss9FzQz8
	23Lt9geK8WrzapJp9Vis7La39MaXvyH2Rs_PPx8VqQaAvXTQdwhk94iFmdeV
	soGNPwZOr0RD_gNgu1yV9X4gjHb5Tva2YTiKGvBB1sErWMeIKJnBCaZOsPyh
	HhNRU268Tmz338rsVbBbB.BUUVBCPsGWmO2a4vRgAcMj6Xkf6q4KJmWvyit_
	QIc_mmvIdQVE9Gon1a3dNh6pcrcI9Rn1p6Eb6eWxpm6NBXzx8_iLA2qx95Bo
	ZroFKZuFUkJwjF3D_I_19VJupMUnn9ewSbTtUEaU.zGFxPZjbcMp7fOMwW9m
	spEvVHD.o_KzLXbIHscikT960g25mX_gpRwFsNiFuPIV6x.UQm.UQo7oN2_b
	fbS7CIfDq0egna.ugl14J3YVrqRursVloMmIRzj9ZSh4wJpKCEcaJIHTf_Eh
	Lp8XF5ZH_0DNPCD4CC8jnCwvYcepinZLDg2_tGacYj_Of_yUBCgh4xBAq8QN
	xj8dPNBfDdNPQUfo.T_5b0IklWNEACsbuDT3rNPRbk47UaQ--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp110.mail.bf1.yahoo.com with SMTP; 25 Jan 2014 17:11:00 -0800 PST
Message-ID: <1390698658.2613.20.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Sat, 25 Jan 2014 18:10:58 -0700
In-Reply-To: <52E277920200007800116A70@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
	<52E277920200007800116A70@nat28.tlf.novell.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Sun, 26 Jan 2014 07:58:20 +0000
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 13:24 +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 09:34, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> >> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >>> If
> >>> the kernel responds to that mentioned firmware bug by forcing
> >>> interrupt remapping off, maybe we would have to do the same...
> >> 
> >> That would be better than xen failing to boot.
> > 
> > But you realize that, following precedents elsewhere in the
> > IOMMU code, we would disable the IOMMU as a whole rather
> > than just interrupt remapping.
> > 
> > But yes, looking at the Linux side code, I guess we need to do
> > so. Would be nice if you could confirm that the system comes up
> > fine (and hopefully without IOMMU faults) with
> > "iommu=no-intremap,debug" as well as with "iommu=off".
> 
> Here's a patch attempting to do just that. Please give this a try
> without any IOMMU-related override options.
> 
> Jan
> 
> AMD IOMMU: fail if there is no southbridge IO-APIC
> 
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt usually does not work.
> 
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
> 
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>      const struct acpi_ivrs_header *ivrs_block;
>      unsigned long length;
>      unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>      int error = 0;
>  
>      BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>      /* Each IO-APIC must have been mentioned in the table. */
>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>      {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>              continue;
>  
>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>          }
>      }
>  
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>      return error;
>  }
>  
> 
> 
> 

Jan,

I am currently using the 4.4.0-RC2 RPMs that are available from the Test
Day information site.  With my limited skill set and time constraints,
it will take me a while to set up for testing the patch.  I will let you
know how it goes.
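For reference, the overrides Jan asked about earlier (iommu=no-intremap,debug and iommu=off) are Xen command-line parameters, passed on the hypervisor's multiboot line. A sketch of a GRUB2 entry, with the kernel paths and menu title purely illustrative:

```
menuentry 'Xen 4.4-rc2 (no intremap, IOMMU debug)' {
    multiboot /boot/xen.gz iommu=no-intremap,debug
    module /boot/vmlinuz root=/dev/sda1 ro
    module /boot/initrd.img
}
```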

Thanks,

Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 08:29:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 08:29:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7L5c-0003uc-3a; Sun, 26 Jan 2014 08:29:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7L5a-0003uX-Lp
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 08:29:10 +0000
Received: from [85.158.143.35:58671] by server-1.bemta-4.messagelabs.com id
	17/06-02132-657C4E25; Sun, 26 Jan 2014 08:29:10 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390724948!816863!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14593 invoked from network); 26 Jan 2014 08:29:09 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-21.messagelabs.com with SMTP;
	26 Jan 2014 08:29:09 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 26 Jan 2014 00:29:08 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,723,1384329600"; d="scan'208";a="470913578"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 26 Jan 2014 00:29:07 -0800
Received: from fmsmsx117.amr.corp.intel.com (10.18.116.17) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 26 Jan 2014 00:29:07 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx117.amr.corp.intel.com (10.18.116.17) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 26 Jan 2014 00:29:07 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 16:29:04 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>
Thread-Topic: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
Thread-Index: AQHPGR1hIrZ/0qGTuke4rIDnPBDI1JqWrMuQ
Date: Sun, 26 Jan 2014 08:29:04 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C7169@SHSMSX104.ccr.corp.intel.com>
References: <52DEB887.8070409@citrix.com>
	<CAFLBxZZNnbFwjfVkqmB3OqkjjUukbf4AmbRhOSzHnwJDqLCEDQ@mail.gmail.com>
In-Reply-To: <CAFLBxZZNnbFwjfVkqmB3OqkjjUukbf4AmbRhOSzHnwJDqLCEDQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-01-25:
> On Tue, Jan 21, 2014 at 6:12 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> Hello,
>> 
>> I have been giving nested virt a try, and have my first bug to report.
>> This is still ongoing, and is by no means complete yet.
>> 
>> Setup:
>> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>> 
>> Single Intel Haswell SDP (Grantley platform):
>> Native hypervisor: XenServer
>> 
>> Two L1 guests:
>>   XenServer (running with EPT)
>>   XenServer (running with shadow)
>> 
>> When attempting to create an L2 EPT HVM domain under an L1 shadow
>> domain, the L1 shadow domain is killed with:
> 
> Is EPT-on-shadow actually meant to work?  I wouldn't be surprised if
> the L2 HAP stuff assumed that L1 was HAP as well.
> 
> In which case, if an L1 guest is started in shadow mode, then EPT
> should not be advertised.

AFAIK, EPT-on-shadow is not supported. Shadow-on-shadow is buggy (actually, I have never gotten it to work, going back to the first day I started on the nested virt work). Shadow-on-EPT and EPT-on-EPT both work on my box, but I recommend using EPT-on-EPT where possible: running an L2 guest in shadow-on-shadow mode is really painful because of the poor performance.
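For anyone wanting to try the combinations described above: whether an L1 guest uses HAP (EPT) or shadow paging is chosen in its domain configuration, and nested exposure is controlled by the nestedhvm option. A minimal L1 guest config sketch for the EPT-on-EPT case (names, sizes and paths are placeholders):

```
# L1 guest that itself runs a hypervisor (EPT-on-EPT case)
builder   = "hvm"
name      = "l1-xen"
memory    = 4096
vcpus     = 4
hap       = 1        # L1 uses EPT; hap=0 would select shadow paging
nestedhvm = 1        # expose VMX to the L1 guest
disk      = [ "/var/lib/xen/images/l1-xen.img,raw,xvda,rw" ]
```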

> 
>  -George


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 08:45:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 08:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7LLH-0004HT-Br; Sun, 26 Jan 2014 08:45:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7LLF-0004HO-Mk
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 08:45:22 +0000
Received: from [85.158.137.68:11601] by server-2.bemta-3.messagelabs.com id
	04/20-17329-02BC4E25; Sun, 26 Jan 2014 08:45:20 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390725919!11377003!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29310 invoked from network); 26 Jan 2014 08:45:19 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-4.tower-31.messagelabs.com with SMTP;
	26 Jan 2014 08:45:19 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 26 Jan 2014 00:45:18 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,723,1384329600"; d="scan'208";a="470917179"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 26 Jan 2014 00:45:17 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 26 Jan 2014 00:45:16 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 26 Jan 2014 00:45:16 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Sun, 26 Jan 2014 16:45:15 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <george.dunlap@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, Tim Deegan <tim@xen.org>
Thread-Topic: Status of Nested Virt in 4.4 (Was: Re: [Xen-devel] Xen 4.4
	development update)
Thread-Index: AQHPEyoXMu6lo7nxOkWoQ/Ns0loHnJqUAA/B//98e4CAAAGbAIAAEUGAgAMsOiA=
Date: Sun, 26 Jan 2014 08:45:15 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C7184@SHSMSX104.ccr.corp.intel.com>
References: <1389186984.4883.67.camel@kazak.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BE1AC@SHSMSX104.ccr.corp.intel.com>
	<1389865519.5190.9.camel@kazak.uk.xensource.com>
	<52D7BC9E02000078001142D4@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9BECCE@SHSMSX104.ccr.corp.intel.com>
	<1389951634.6697.43.camel@kazak.uk.xensource.com>
	<52E27CEF.2010009@eu.citrix.com>
	<20140124145641.GA83765@deinos.phlegethon.org>
	<1390575746.2124.91.camel@kazak.uk.xensource.com>
	<52E28EFB.3020008@eu.citrix.com>
In-Reply-To: <52E28EFB.3020008@eu.citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Status of Nested Virt in 4.4 (Was: Re: Xen 4.4
 development update)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-01-25:
> On 01/24/2014 03:02 PM, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 15:56 +0100, Tim Deegan wrote:
>>> At 14:47 +0000 on 24 Jan (1390571231), George
> Dunlap wrote:
>>>> On 01/17/2014 09:40 AM, Ian Campbell wrote:
>>>>> On Fri, 2014-01-17 at 02:16 +0000, Zhang, Yang Z wrote:
>>>>>> As Andrew said, nested is still in the experimental stage, because there are
>>>>>> still lots of scenarios not yet covered in my testing. So it may not be
>>>>>> accurate to say it is well supported. But I hope people know that
>>>>>> nested is ready to use now, and I encourage them to try it and report
>>>>>> bugs to us to push nested virt forward.
>>>>> Perhaps we could say it is "tech preview" rather than "experimental"?
>>>> If { {xp,win7,win8,rhel6}x{x86,x64} } x { Xen, KVM, VMWare } and Win7 XP
>>>> compatibility mode are tested regularly, and only HyperV, L2 shadow, and
>>>> paging / PoD don't work, I think we should be able to call this a "1.0"
>>>> release for nested virt.  Then we can add in "now works with HyperV",
>>>> "Now works with shadow", "Now works with paging" as those become
> mature.
>>> That depends on what the failure modes are for the other cases --
>>> esp. given that the L1 guest's choice of hypervisor, shadow vs HAP &c,
>>> are not under the control of the L0 admin.  I think that has to be
>>> clearly understood before we encourage people to turn this on.
>> Especially in the light of the previous two bugs here which let the
>> guest admin crash the host, in at least one of the two cases even if the
>> host admin had disabled nested virt for that guest (and I think it was
>> actually in both cases...)
> 
> Right -- well I think then we need to help try to define some criteria
> that VMX nested virt would need to meet for portions of it to stop being
> considered "experimental" or "tech preview".  Just a couple of angles:
> 
> * L1 / L2 guests tested.  What do people think of the mix of L1 / L2
> guests there?  They look like a pretty good combination to me.
> 
> * L2 workloads tested
> 
> Other than booting, what kinds of workloads are run in the L2 guests?
> Do the L2 guests ever get into heavy swapping scenarios, for instance?

Currently, we haven't started the workload testing. I guess more bugs will turn up once workloads are run inside the guest. :)

> 
> * Minimum subset of functionality
> 
> I think it makes sense to explicitly say that we support only certain
> hypervisors, and to not support some advanced features in L2 guests.
>
> Saying only L1 HAP / L2 HAP is reasonable, I think.  No HyperV, no L2
> shadow, and no PoD are reasonable restrictions; it should be fine for us to
> say that if the L1 admin enables any of those and badness ensues, he has only
> himself to blame.

I think Hyper-V should be acceptable. 

>
> 
> * Security
> 
> That said, I think we must assume that some of our users will have L0
> admin != L1 admin.  This means that L1 admin must not be able to do
> anything to crash L0.  In the PoD case above, for example, if L1 enables
> PoD or paging, it might cause locking issues in L0; that's not acceptable.
> 
> Anything else?
> 

Should we also consider save/restore and migration for L1? I believe that doesn't work currently either.

>   -George


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 08:52:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 08:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7LSV-0004cv-C8; Sun, 26 Jan 2014 08:52:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W7LSU-0004cq-6y
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 08:52:50 +0000
Received: from [85.158.137.68:55210] by server-10.bemta-3.messagelabs.com id
	E0/86-23989-1ECC4E25; Sun, 26 Jan 2014 08:52:49 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390726368!11349884!1
X-Originating-IP: [62.142.5.109]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA5ID0+IDk1MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15551 invoked from network); 26 Jan 2014 08:52:48 -0000
Received: from emh03.mail.saunalahti.fi (HELO emh03.mail.saunalahti.fi)
	(62.142.5.109)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 08:52:48 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh03.mail.saunalahti.fi (Postfix) with ESMTP id 092741887D3;
	Sun, 26 Jan 2014 10:52:48 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id DEBB136C01F; Sun, 26 Jan 2014 10:52:47 +0200 (EET)
Date: Sun, 26 Jan 2014 10:52:47 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xennn <openbg@abv.bg>
Message-ID: <20140126085247.GP2924@reaktio.net>
References: <1390663194541-5720926.post@n5.nabble.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390663194541-5720926.post@n5.nabble.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Jan 25, 2014 at 07:19:54AM -0800, xennn wrote:
> hi all 
> 
> i would like to know how qemu is integrated with xen and what is the aim of
> QEMU in Xen's system?
> 

Xen uses QEMU as a 'device model' for HVM guests, i.e. for emulating various legacy x86 hardware in the VM.
Xen also uses QEMU as a VNC server for PV guests' graphical consoles, and possibly as a disk backend in dom0 (qdisk).
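To make that concrete, here is a minimal, hypothetical xl/xm guest configuration sketch (the path and values are examples only, not from this thread) showing where QEMU enters the picture for an HVM guest:

```
# HVM guest: QEMU runs in dom0 as the device model for this VM
builder = "hvm"
memory  = 1024
vnc     = 1                                   # graphical console served by QEMU's VNC server
disk    = [ "file:/path/to/disk.img,hda,w" ]  # file-backed disk handled via QEMU (qdisk)
```

With a PV guest, by contrast, QEMU would not emulate hardware at all; it would only back the graphical console (and possibly disks) as described above.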

> As far as i know kQEMU is hardware assisted virtualization tool and i would
> like to know what is kQEMU's role in xen architecture?
> 

The Xen hypervisor itself is the 'hardware assisted virtualization tool',
so kQEMU plays no role in Xen's architecture; QEMU is only used for the other tasks described above.

Hopefully that helps.

-- Pasi

> 
> Best Regards
> 
> 
> 
> --
> View this message in context: http://xen.1045712.n5.nabble.com/how-QEMU-is-integrated-with-xen-tp5720926.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 09:18:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 09:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7LqU-0005Ht-DX; Sun, 26 Jan 2014 09:17:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7LqT-0005Ho-8Y
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 09:17:37 +0000
Received: from [85.158.139.211:6071] by server-1.bemta-5.messagelabs.com id
	C4/BA-21065-0B2D4E25; Sun, 26 Jan 2014 09:17:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390727854!11797371!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18041 invoked from network); 26 Jan 2014 09:17:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 09:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,723,1384300800"; d="scan'208";a="94558006"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 09:17:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 04:17:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7LqO-000386-9D;
	Sun, 26 Jan 2014 09:17:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7LqN-0002fn-VN;
	Sun, 26 Jan 2014 09:17:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24519-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 09:17:32 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24519: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24519 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24519/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 24505 REGR. vs. 22405

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 13 guest-localmigrate.2        fail pass in 24505
 test-amd64-i386-xl-qemut-win7-amd64  3 host-install(3)    broken pass in 24505
 test-i386-i386-pair           4 host-install/dst_host(4)  broken pass in 24505

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24505 never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          broken  
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          broken  
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 10:11:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 10:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Mfv-0007Dd-Ud; Sun, 26 Jan 2014 10:10:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W7Mfv-0007DU-01
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 10:10:47 +0000
Received: from [85.158.137.68:49897] by server-9.bemta-3.messagelabs.com id
	82/43-13104-62FD4E25; Sun, 26 Jan 2014 10:10:46 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390731043!10205410!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7720 invoked from network); 26 Jan 2014 10:10:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 10:10:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QAAdpt028672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 10:10:40 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QAAZ3M025357
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 26 Jan 2014 10:10:36 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0QAAYLZ022146; Sun, 26 Jan 2014 10:10:34 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 02:10:34 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Sun, 26 Jan 2014 18:12:27 +0800
Message-Id: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Annie Li <annie.li@oracle.com>, david.vrabel@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net-next v5] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

This patch removes grant transfer releasing code from netfront, and uses
gnttab_end_foreign_access to end grant access since
gnttab_end_foreign_access_ref may fail when the grant entry is
currently used for reading or writing.

* Clean up the grant-transfer code kept from the old (2.6.18) netfront, which
granted pages for access/map and transfer. Grant transfer is deprecated in the
current netfront, so the corresponding release code for transfer is removed.

* Release grant access (through gnttab_end_foreign_access) and the skb on the
tx/rx paths; use get_page to ensure the page is only released once grant access
has completed successfully.

Xen-blkfront/xen-tpmfront/xen-pcifront have a similar issue, but patches
for them will be created separately.
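The get_page reasoning above can be sketched as a toy reference-count model. This is plain C, NOT kernel code: the names only mirror the kernel helpers (get_page/put_page/gnttab_end_foreign_access) for illustration, and the struct is a simplified assumption.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct page: a refcount and a "was freed" flag. */
struct toy_page {
    int refcount;
    bool freed;
};

/* Take an extra reference so the page cannot go away under us. */
static void get_page(struct toy_page *p)
{
    p->refcount++;
}

/* Drop a reference; the page is only really freed at refcount zero. */
static void put_page(struct toy_page *p)
{
    if (--p->refcount == 0)
        p->freed = true;
}

/*
 * Models gnttab_end_foreign_access(): ends the grant and drops the
 * reference that was pinning the page for the remote domain's access.
 */
static void end_foreign_access(struct toy_page *p)
{
    put_page(p);
}
```

In this model the release path corresponds to: take an extra reference with get_page before ending the grant, then free the skb; the page only actually disappears once both the grant-side reference and the skb-side reference have been dropped.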

V5: Remove unnecessary change in xennet_end_access.

V4: Revert put_page in gnttab_end_foreign_access, and keep netfront change in
single patch.

V3: Changes as suggested by David Vrabel: ensure pages are not freed until
grant access has ended.

V2: Improve patch comments.

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   88 +++++++++++++-------------------------------
 1 files changed, 26 insertions(+), 62 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d7bee8a..6ddf1e6 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -117,6 +117,7 @@ struct netfront_info {
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
 
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 10:11:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 10:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Mfv-0007Dd-Ud; Sun, 26 Jan 2014 10:10:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W7Mfv-0007DU-01
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 10:10:47 +0000
Received: from [85.158.137.68:49897] by server-9.bemta-3.messagelabs.com id
	82/43-13104-62FD4E25; Sun, 26 Jan 2014 10:10:46 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390731043!10205410!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7720 invoked from network); 26 Jan 2014 10:10:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 10:10:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QAAdpt028672
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 10:10:40 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QAAZ3M025357
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 26 Jan 2014 10:10:36 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0QAAYLZ022146; Sun, 26 Jan 2014 10:10:34 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 02:10:34 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Sun, 26 Jan 2014 18:12:27 +0800
Message-Id: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Annie Li <annie.li@oracle.com>, david.vrabel@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net-next v5] xen-netfront: clean up code in
	xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

This patch removes grant transfer releasing code from netfront, and uses
gnttab_end_foreign_access to end grant access since
gnttab_end_foreign_access_ref may fail when the grant entry is
currently used for reading or writing.

* clean up grant transfer code kept from the old netfront (2.6.18), which
granted pages for access/map and transfer. Grant transfer is deprecated in
current netfront, so the corresponding release code for transfer is removed.

* release grant access (through gnttab_end_foreign_access) and the skb for the
tx/rx path; use get_page to ensure the page is only released once grant access
has actually ended.

xen-blkfront/xen-tpmfront/xen-pcifront have a similar issue, but patches
for them will be created separately.

V5: Remove unnecessary change in xennet_end_access.

V4: Revert put_page in gnttab_end_foreign_access, and keep netfront change in
single patch.

V3: Changes as suggested by David Vrabel: ensure pages are not freed until
grant access is ended.

V2: Improve patch comments.

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   88 +++++++++++++-------------------------------
 1 files changed, 26 insertions(+), 62 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d7bee8a..6ddf1e6 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -117,6 +117,7 @@ struct netfront_info {
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
 
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
-- 
1.7.3.4



From xen-devel-bounces@lists.xen.org Sun Jan 26 10:22:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 10:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Mql-0007l1-Dr; Sun, 26 Jan 2014 10:21:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W7Mqj-0007kw-VD
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 10:21:58 +0000
Received: from [193.109.254.147:11597] by server-11.bemta-14.messagelabs.com
	id DD/25-20576-5C1E4E25; Sun, 26 Jan 2014 10:21:57 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390731715!9749797!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6322 invoked from network); 26 Jan 2014 10:21:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 10:21:56 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QALq8A010167
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 10:21:53 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QALoGj006219
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 26 Jan 2014 10:21:52 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QALord013705; Sun, 26 Jan 2014 10:21:50 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 02:21:50 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org
Date: Sun, 26 Jan 2014 18:23:49 +0800
Message-Id: <1390731829-2469-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: annie.li@oracle.com, wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v3] tools/xl: correctly shows split eventchannel
	for netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

Since the split event channel feature was added to netback/netfront,
"xl network-list" does not show the event channels correctly. Add tx-/rx-evt-ch
fields so that the tx and rx event channels are displayed correctly.

V3: Remove conditional limitation of LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL

V2: keep evtch field without breaking the API

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 tools/libxl/libxl.c         |    8 ++++++++
 tools/libxl/libxl.h         |   10 ++++++++++
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |   12 ++++++------
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4ce9efc 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3142,6 +3142,14 @@ int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
     nicinfo->state = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel", nicpath));
     nicinfo->evtch = val ? strtoul(val, NULL, 10) : -1;
+    if(nicinfo->evtch > 0)
+        nicinfo->evtch_rx = nicinfo->evtch;
+    else {
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-tx", nicpath));
+        nicinfo->evtch = val ? strtoul(val, NULL, 10) : -1;
+        val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/event-channel-rx", nicpath));
+        nicinfo->evtch_rx = val ? strtoul(val, NULL, 10) : -1;
+    }
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/tx-ring-ref", nicpath));
     nicinfo->rref_tx = val ? strtoul(val, NULL, 10) : -1;
     val = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/rx-ring-ref", nicpath));
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..87f1fa4 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,16 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL
+ * If this is defined, libxl_nicinfo will contain an integer type field: evtch_rx,
+ * containing the value of event channel for rx path of netback&netfront which support
+ * split event channel. The original evtch field contains the value of event channel
+ * for tx path in such case.
+ *
+ */
+#define LIBXL_HAVE_NETWORK_SPLIT_EVENTCHANNEL 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..5642cf5 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -489,6 +489,7 @@ libxl_nicinfo = Struct("nicinfo", [
     ("devid", libxl_devid),
     ("state", integer),
     ("evtch", integer),
+    ("evtch_rx", integer),
     ("rref_tx", integer),
     ("rref_rx", integer),
     ], dir=DIR_OUT)
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..be1162e 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -5842,9 +5842,9 @@ int main_networklist(int argc, char **argv)
         /* No options */
     }
 
-    /*      Idx  BE   MAC   Hdl  Sta  evch txr/rxr  BE-path */
-    printf("%-3s %-2s %-17s %-6s %-5s %-6s %5s/%-5s %-30s\n",
-           "Idx", "BE", "Mac Addr.", "handle", "state", "evt-ch", "tx-", "rx-ring-ref", "BE-path");
+    /*      Idx  BE   MAC   Hdl  Sta  txev/rxev txr/rxr  BE-path */
+    printf("%-3s %-2s %-17s %-6s %-5s %6s/%-6s %5s/%-5s %-30s\n",
+           "Idx", "BE", "Mac Addr.", "handle", "state", "tx-", "rx-evt-ch", "tx-", "rx-ring-ref", "BE-path");
     for (argv += optind, argc -= optind; argc > 0; --argc, ++argv) {
         uint32_t domid = find_domain(*argv);
         nics = libxl_device_nic_list(ctx, domid, &nb);
@@ -5857,9 +5857,9 @@ int main_networklist(int argc, char **argv)
                 printf("%-3d %-2d ", nicinfo.devid, nicinfo.backend_id);
                 /* MAC */
                 printf(LIBXL_MAC_FMT, LIBXL_MAC_BYTES(nics[i].mac));
-                /* Hdl  Sta  evch txr/rxr  BE-path */
-                printf("%6d %5d %6d %5d/%-11d %-30s\n",
-                       nicinfo.devid, nicinfo.state, nicinfo.evtch,
+                /* Hdl  Sta  txev/rxev txr/rxr  BE-path */
+                printf(" %6d %5d %6d/%-9d %5d/%-11d %-30s\n",
+                       nicinfo.devid, nicinfo.state, nicinfo.evtch, nicinfo.evtch_rx,
                        nicinfo.rref_tx, nicinfo.rref_rx, nicinfo.backend);
                 libxl_nicinfo_dispose(&nicinfo);
             }
-- 
1.7.3.4



From xen-devel-bounces@lists.xen.org Sun Jan 26 11:26:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 11:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7NrD-0000q6-Py; Sun, 26 Jan 2014 11:26:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W7NrC-0000q1-5K
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 11:26:30 +0000
Received: from [85.158.139.211:30872] by server-6.bemta-5.messagelabs.com id
	4D/94-16310-5E0F4E25; Sun, 26 Jan 2014 11:26:29 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390735587!11991635!1
X-Originating-IP: [209.85.217.180]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18254 invoked from network); 26 Jan 2014 11:26:28 -0000
Received: from mail-lb0-f180.google.com (HELO mail-lb0-f180.google.com)
	(209.85.217.180)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 11:26:28 -0000
Received: by mail-lb0-f180.google.com with SMTP id n15so3740827lbi.11
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 03:26:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:cc:to:mime-version;
	bh=OTmeaWHd+HaaCv+EPioOzobt0kMMc+jA0xv04LngIYc=;
	b=VqWuXx9EYLj1EMGotH6Cxjn3WeizmXX+3nu8JHa2iQvSgBoM1u1lpzI8uIQcK2T+Q6
	BCDnDIb8v4avQhm2s/zZhPaSxWKQs/ndDBLPYnpm5F76sMHhFeHkeXf4cX/HpndunT+L
	VngHGjaiV+OoRdVankZGIdJ7roCKrcDcxlMtD2WJEtXyjgOZsAH1vXz5P6J2TSa2LAUs
	X/98UsYsJ9tk6cwnyESP3YkfNEoG57PhWI570vHk0QZGJl2nFDv4KjkYrwONx0pUCLcm
	am30ETIIQ6woRnpELZKIpSKJi2mvvmFh+zZV9dQjpFDmr5I/bZKJzBq4+PWsxfIxaMIF
	6nSw==
X-Received: by 10.112.205.5 with SMTP id lc5mr115477lbc.40.1390735587650;
	Sun, 26 Jan 2014 03:26:27 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id qe4sm8120780lbb.8.2014.01.26.03.26.26
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 26 Jan 2014 03:26:26 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Sun, 26 Jan 2014 15:26:24 +0400
Message-Id: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
To: dilos-dev@lists.illumos.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

I have good news: I have booted xen-4.3 on DilOS (an illumos-based platform) as dom0 (64-bit).

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2047     4     r-----     288.7

# xl info
host                   : myhost
release                : 5.11
version                : 1.3.6-xen
machine                : i86pc
libxl: error: libxl.c:3963:libxl_get_physinfo: getting sharing freed pages
libxl_physinfo failed.
xen_major              : 4
xen_minor              : 3
xen_extra              : .2-pre-xvm
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p 
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 
xen_commandline        : console=com1 dom0_mem=2047M dom0_vcpus_pin=false watchdog=false
cc_compiler            : gcc (GCC) 4.7.3
cc_compile_by          : root
cc_compile_domain      : 
cc_compile_date        : Mon Jan 20 09:34:23 MSK 2014
xend_config_format     : 4

Yes, it still needs more work and has bugs, but work is in progress :)
I have limited time to work on it and will do what I can in my free time.

Work is needed on both sides: the dilos-illumos sources and the xen sources.
The python scripts also need more work, updating them with patches from xen-3.4.

If anyone is interested in helping, please send email to the dilos-dev@ list; you are also welcome on FreeNode IRC: #dilos

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 13:06:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 13:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7PPB-0003ii-9A; Sun, 26 Jan 2014 13:05:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W7PPA-0003id-NR
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 13:05:40 +0000
Received: from [85.158.137.68:64227] by server-8.bemta-3.messagelabs.com id
	D8/88-31081-32805E25; Sun, 26 Jan 2014 13:05:39 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390741537!10203819!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17139 invoked from network); 26 Jan 2014 13:05:39 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-10.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	26 Jan 2014 13:05:39 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W7PP6-00068I-Uc
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 05:05:36 -0800
Date: Sun, 26 Jan 2014 05:05:36 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1390741536939-5720933.post@n5.nabble.com>
In-Reply-To: <1390663194541-5720926.post@n5.nabble.com>
References: <1390663194541-5720926.post@n5.nabble.com>
MIME-Version: 1.0
Subject: Re: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks :)

How about xen's HVM? As far as I can see, HVM is implemented in the
xen/arch/x86 subtree... right?



--
View this message in context: http://xen.1045712.n5.nabble.com/how-QEMU-is-integrated-with-xen-tp5720926p5720933.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sun Jan 26 13:16:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 13:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7PZs-00044c-HL; Sun, 26 Jan 2014 13:16:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W7PZq-00044X-R9
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 13:16:42 +0000
Received: from [85.158.139.211:19849] by server-6.bemta-5.messagelabs.com id
	14/13-16310-9BA05E25; Sun, 26 Jan 2014 13:16:41 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390742200!12002632!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29552 invoked from network); 26 Jan 2014 13:16:41 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-4.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	26 Jan 2014 13:16:41 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W7PZn-0006Wq-K1
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 05:16:39 -0800
Date: Sun, 26 Jan 2014 05:16:39 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1390742199614-5720934.post@n5.nabble.com>
In-Reply-To: <1390741536939-5720933.post@n5.nabble.com>
References: <1390663194541-5720926.post@n5.nabble.com>
	<1390741536939-5720933.post@n5.nabble.com>
MIME-Version: 1.0
Subject: Re: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just one more question... is it possible for xen to support SMP guests?



--
View this message in context: http://xen.1045712.n5.nabble.com/how-QEMU-is-integrated-with-xen-tp5720926p5720934.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 13:18:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 13:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Pb7-00048z-09; Sun, 26 Jan 2014 13:18:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7Pb4-00048n-Lu
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 13:17:58 +0000
Received: from [85.158.137.68:56710] by server-14.bemta-3.messagelabs.com id
	D5/3A-06105-50B05E25; Sun, 26 Jan 2014 13:17:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390742274!7714950!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32291 invoked from network); 26 Jan 2014 13:17:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 13:17:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,724,1384300800"; d="scan'208";a="94581839"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 13:17:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 08:17:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7Pay-0004KF-E4;
	Sun, 26 Jan 2014 13:17:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7Pay-0007kP-9U;
	Sun, 26 Jan 2014 13:17:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24531-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 13:17:52 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24531: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24531 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24531/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             3 host-build-prep           fail REGR. vs. 24473

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail blocked in 24473
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 14:44:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 14:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Qwb-0006MI-3E; Sun, 26 Jan 2014 14:44:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7QwZ-0006MD-RB
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 14:44:16 +0000
Received: from [85.158.143.35:2744] by server-1.bemta-4.messagelabs.com id
	24/F3-02132-E3F15E25; Sun, 26 Jan 2014 14:44:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390747452!863421!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21373 invoked from network); 26 Jan 2014 14:44:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 14:44:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,724,1384300800"; d="scan'208";a="96613721"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Jan 2014 14:44:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 09:44:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7QwU-0004kW-Iq;
	Sun, 26 Jan 2014 14:44:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7QwU-0001R2-Eb;
	Sun, 26 Jan 2014 14:44:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24532-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 14:44:10 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24532: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24532 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24532/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 22405

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      9 guest-start                 fail pass in 24519
 test-amd64-amd64-xl-sedf-pin 13 guest-localmigrate.2 fail in 24519 pass in 24532
 test-amd64-i386-xl-qemut-win7-amd64 3 host-install(3) broken in 24519 pass in 24532
 test-i386-i386-pair   4 host-install/dst_host(4) broken in 24519 pass in 24532

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail like 22399

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24519 never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit fa1bde94493ee9fc66ce6f33ed434a9d7133c896
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:48:07 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
    master date: 2014-01-24 13:41:36 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 16:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 16:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7SjG-0001C7-AP; Sun, 26 Jan 2014 16:38:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1W7SjE-0001C2-Uz
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 16:38:37 +0000
Received: from [85.158.137.68:20759] by server-3.bemta-3.messagelabs.com id
	89/AE-10658-B0A35E25; Sun, 26 Jan 2014 16:38:35 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390754312!11349147!1
X-Originating-IP: [147.210.8.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31603 invoked from network); 26 Jan 2014 16:38:33 -0000
Received: from iona.labri.fr (HELO iona.labri.fr) (147.210.8.143)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 16:38:33 -0000
Received: from localhost (localhost [127.0.0.1])
	by iona.labri.fr (Postfix) with ESMTP id 61C4F42A5;
	Sun, 26 Jan 2014 17:38:32 +0100 (CET)
X-Virus-Scanned: amavisd-new at labri.fr
Received: from iona.labri.fr ([127.0.0.1])
	by localhost (iona.labri.fr [127.0.0.1]) (amavisd-new, port 10027)
	with LMTP id 6ZsTr8dMTT8N; Sun, 26 Jan 2014 17:38:32 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	by iona.labri.fr (Postfix) with ESMTPSA id C8A7C3DD0;
	Sun, 26 Jan 2014 17:38:31 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1W7Sj6-0008EU-SA; Sun, 26 Jan 2014 17:38:28 +0100
Date: Sun, 26 Jan 2014 17:38:28 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140126163828.GA6096@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper, on Fri 24 Jan 2014 18:28:11 +0000, wrote:
> A physdev_op is a two argument hypercall, taking a command parameter and an
> optional pointer to a structure.

Mmm, this is a remnant of the old hypercall, which took only one
parameter, indeed.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
>  extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> index ef52ecd..dcfbe41 100644
> --- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> +++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> @@ -255,9 +255,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> index 513d74e..7083763 100644
> --- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> +++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> @@ -256,9 +256,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> -- 
> 1.7.10.4
> 

-- 
Samuel
#include <culture.h>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 16:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 16:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7SjG-0001C7-AP; Sun, 26 Jan 2014 16:38:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1W7SjE-0001C2-Uz
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 16:38:37 +0000
Received: from [85.158.137.68:20759] by server-3.bemta-3.messagelabs.com id
	89/AE-10658-B0A35E25; Sun, 26 Jan 2014 16:38:35 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390754312!11349147!1
X-Originating-IP: [147.210.8.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31603 invoked from network); 26 Jan 2014 16:38:33 -0000
Received: from iona.labri.fr (HELO iona.labri.fr) (147.210.8.143)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 16:38:33 -0000
Received: from localhost (localhost [127.0.0.1])
	by iona.labri.fr (Postfix) with ESMTP id 61C4F42A5;
	Sun, 26 Jan 2014 17:38:32 +0100 (CET)
X-Virus-Scanned: amavisd-new at labri.fr
Received: from iona.labri.fr ([127.0.0.1])
	by localhost (iona.labri.fr [127.0.0.1]) (amavisd-new, port 10027)
	with LMTP id 6ZsTr8dMTT8N; Sun, 26 Jan 2014 17:38:32 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(Client did not present a certificate)
	by iona.labri.fr (Postfix) with ESMTPSA id C8A7C3DD0;
	Sun, 26 Jan 2014 17:38:31 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1W7Sj6-0008EU-SA; Sun, 26 Jan 2014 17:38:28 +0100
Date: Sun, 26 Jan 2014 17:38:28 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140126163828.GA6096@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper, on Fri 24 Jan 2014 18:28:11 +0000, wrote:
> A physdev_op is a two-argument hypercall, taking a command parameter and an
> optional pointer to a structure.

Mmm, this is indeed a remnant of the old hypercall, which took only one
parameter.

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
>  extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> index ef52ecd..dcfbe41 100644
> --- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> +++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> @@ -255,9 +255,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> index 513d74e..7083763 100644
> --- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> +++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> @@ -256,9 +256,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> -- 
> 1.7.10.4
> 

-- 
Samuel
#include <culture.h>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 18:02:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 18:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7U1a-0003Dr-QE; Sun, 26 Jan 2014 18:01:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7U1Z-0003Dm-V5
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 18:01:38 +0000
Received: from [193.109.254.147:3697] by server-1.bemta-14.messagelabs.com id
	58/58-15600-18D45E25; Sun, 26 Jan 2014 18:01:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390759295!13285244!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4112 invoked from network); 26 Jan 2014 18:01:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 18:01:36 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QI1TC0029614
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 18:01:30 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QI1Sco005860
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sun, 26 Jan 2014 18:01:29 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QI1SfS005857; Sun, 26 Jan 2014 18:01:28 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 10:01:28 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <20140125210659.GA15756@u109add4315675089e695.ant.amazon.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
	<1390555293.2124.6.camel@kazak.uk.xensource.com>
	<1390577782.13513.8.camel@kazak.uk.xensource.com>
	<20140125210659.GA15756@u109add4315675089e695.ant.amazon.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sun, 26 Jan 2014 13:01:14 -0500
To: Matt Wilson <msw@linux.com>, Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <ecc2d6a2-e9c9-43a0-afe5-abca9ea47b07@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Matt Wilson <msw@linux.com> wrote:
>On Fri, Jan 24, 2014 at 03:36:22PM +0000, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 09:21 +0000, Ian Campbell wrote:
>> > On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
>> > > From: Matt Rushton <mrushton@amazon.com>
>> > > 
>> > > Currently shrink_free_pagepool() is called before the pages used for
>> > > persistent grants are released via free_persistent_gnts(). This
>> > > results in a memory leak when a VBD that uses persistent grants is
>> > > torn down.
>> > 
>> > This may well be the explanation for the memory leak I was observing on
>> > ARM last night. I'll give it a go and let you know.
>> 
>> Results are a bit inconclusive unfortunately, it seems like I am seeing
>> some other leak too (or instead).
>> 
>> Totally unscientifically it does seem to be leaking more slowly than
>> before, so perhaps this patch has helped, but nothing conclusive I'm
>> afraid.
>
>Testing here looks good. I don't know if perhaps something else is
>going on with ARM...
>
>> I don't think that quite qualifies for a Tested-by though, sorry.
>
>How about an Acked-by? ;-)
>
>--msw


How big is the leak that this patch fixes, and how often does it occur?

I think there is one comment from Roger that still needs to be addressed, unless I missed the resolution?

Thanks!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 18:02:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 18:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7U2a-0003Hw-EB; Sun, 26 Jan 2014 18:02:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W7U2Y-0003Hf-TQ
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 18:02:39 +0000
Received: from [85.158.137.68:51497] by server-17.bemta-3.messagelabs.com id
	18/63-15965-DBD45E25; Sun, 26 Jan 2014 18:02:37 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390759355!11386821!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 499 invoked from network); 26 Jan 2014 18:02:36 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 18:02:36 -0000
Received: by mail-qa0-f44.google.com with SMTP id w5so6151546qac.17
	for <xen-devel@lists.xenproject.org>;
	Sun, 26 Jan 2014 10:02:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=T5mg2XrJLKb2WidcXlPbLYV/DCE9aWmrDM3i5h7oYDk=;
	b=0aH6Cx1xFiZeqWGLY+bG1WqTuqoJYq6dXYVOZwf99KHYHRbM7uD9cBr9VIr02THgWh
	6iT8bx8s3rEaU8Qun99+bDx/gFyyYPz9AyMPI48xLgXGmxhuBciRmkWojqxhRXKHhvTK
	Kans8Omn6SQnauf+SQnpAMoLTl67gNvbGvRN2uYDE1nrQgTM34ykkunsqX7ZmxC0p0/R
	t/y8vnxL0b3gQbpviai94EZjVeMZF9BTG9uh9t1O1817QI8/YQuf+gRZJG1IPlJte7wy
	HWqKVvUjujNzQeWI+Bi2IZXFkFAbfMKmOG4p9IGAC+eqwbzKFht7R0bq+5eYs7UqRUZC
	zXdw==
MIME-Version: 1.0
X-Received: by 10.229.196.197 with SMTP id eh5mr36812133qcb.3.1390759355349;
	Sun, 26 Jan 2014 10:02:35 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Sun, 26 Jan 2014 10:02:35 -0800 (PST)
In-Reply-To: <20140124133830.GU4963@suse.de>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
Date: Sun, 26 Jan 2014 13:02:35 -0500
Message-ID: <CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 8:38 AM, Mel Gorman <mgorman@suse.de> wrote:
> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>> >> >> <SNIP>
>> >> >>
>> >> >> This dump doesn't look dramatically different, either.
>> >> >>
>> >> >>>
>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >> >>> turned on?
>> >> >>
>> >> >>
>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> >> mean not enabled at runtime?
>> >> >>
>> >> >> [1]
>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>> >>
>> >>
>> >>
>> >> --
>> >> Elena
>>
>> I was able to reproduce this consistently, also with the latest mm
>> patches from yesterday.
>> Can you please try this:
>>
>
> Thanks Elena,
>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index ce563be..76dcf96 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>
> Would reusing pte_present be an option? Ordinarily I expect that
> PAGE_NUMA/PAGE_PROTNONE is only set if PAGE_PRESENT is not set and pte_present
> is defined as
>
> static inline int pte_present(pte_t a)
> {
>         return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
>                                _PAGE_NUMA);
> }
>
> So it looks like it would work. Of course, to reuse it within Xen,
> pte_present would need to be split to provide a pteval_present
> helper, like so
>
> static inline int pteval_present(pteval_t val)
> {
>         /*
>          * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
>          * way clearly states that the intent is that a protnone and numa
>          * hinting ptes are considered present for the purposes of
>          * pagetable operations like zapping, protection changes, gup etc.
>          */
>         return val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
> }
>
> static inline int pte_present(pte_t pte)
> {
>         return pteval_present(pte_flags(pte));
> }
>
> If Xen is doing some other tricks with _PAGE_PRESENT then it might be
> ruled out as an option. If so, then maybe it could still be made a
> little clearer for future reference?

Yes, sure, it should work; I tried it.
Thank you, Mel.

>
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index c1d406f..ff621de 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 unsigned long pfn = mfn_to_pfn(mfn);
>
> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>
>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 pteval_t flags = val & PTE_FLAGS_MASK;
>                 unsigned long mfn;
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 8e4f41d..693fe00 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -654,10 +654,14 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
>   * (because _PAGE_PRESENT is not set).
>   */
>  #ifndef pte_numa
> +static inline int pteval_numa(pteval_t pteval)
> +{
> +       return (pteval & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
> +}
> +
>  static inline int pte_numa(pte_t pte)
>  {
> -       return (pte_flags(pte) &
> -               (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
> +       return pteval_numa(pte_flags(pte));
>  }
>  #endif
>



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 18:02:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 18:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7U2a-0003Hw-EB; Sun, 26 Jan 2014 18:02:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1W7U2Y-0003Hf-TQ
	for xen-devel@lists.xenproject.org; Sun, 26 Jan 2014 18:02:39 +0000
Received: from [85.158.137.68:51497] by server-17.bemta-3.messagelabs.com id
	18/63-15965-DBD45E25; Sun, 26 Jan 2014 18:02:37 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390759355!11386821!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 499 invoked from network); 26 Jan 2014 18:02:36 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 18:02:36 -0000
Received: by mail-qa0-f44.google.com with SMTP id w5so6151546qac.17
	for <xen-devel@lists.xenproject.org>;
	Sun, 26 Jan 2014 10:02:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=T5mg2XrJLKb2WidcXlPbLYV/DCE9aWmrDM3i5h7oYDk=;
	b=0aH6Cx1xFiZeqWGLY+bG1WqTuqoJYq6dXYVOZwf99KHYHRbM7uD9cBr9VIr02THgWh
	6iT8bx8s3rEaU8Qun99+bDx/gFyyYPz9AyMPI48xLgXGmxhuBciRmkWojqxhRXKHhvTK
	Kans8Omn6SQnauf+SQnpAMoLTl67gNvbGvRN2uYDE1nrQgTM34ykkunsqX7ZmxC0p0/R
	t/y8vnxL0b3gQbpviai94EZjVeMZF9BTG9uh9t1O1817QI8/YQuf+gRZJG1IPlJte7wy
	HWqKVvUjujNzQeWI+Bi2IZXFkFAbfMKmOG4p9IGAC+eqwbzKFht7R0bq+5eYs7UqRUZC
	zXdw==
MIME-Version: 1.0
X-Received: by 10.229.196.197 with SMTP id eh5mr36812133qcb.3.1390759355349;
	Sun, 26 Jan 2014 10:02:35 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Sun, 26 Jan 2014 10:02:35 -0800 (PST)
In-Reply-To: <20140124133830.GU4963@suse.de>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
Date: Sun, 26 Jan 2014 13:02:35 -0500
Message-ID: <CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 8:38 AM, Mel Gorman <mgorman@suse.de> wrote:
> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>> >> >> <SNIP>
>> >> >>
>> >> >> This dump doesn't look dramatically different, either.
>> >> >>
>> >> >>>
>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>> >> >>> turned on?
>> >> >>
>> >> >>
>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>> >> >> mean not enabled at runtime?
>> >> >>
>> >> >> [1]
>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>> >>
>> >>
>> >>
>> >> --
>> >> Elena
>>
>> I was able to reproduce this consistently, also with the latest mm
>> patches from yesterday.
>> Can you please try this:
>>
>
> Thanks Elena,
>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index ce563be..76dcf96 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
>> *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val &
>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>
> Would reusing pte_present be an option? Ordinarily I expect that
> PAGE_NUMA/PAGE_PROTNONE is only set if PAGE_PRESENT is not set and pte_present
> is defined as
>
> static inline int pte_present(pte_t a)
> {
>         return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
>                                _PAGE_NUMA);
> }
>
> So it looks like it would work. Of course it would need to be split to
> reuse it within xen if pte_present was split to have a pteval_present
> helper like so
>
> static inline int pteval_present(pteval_t val)
> {
>         /*
>          * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
> way clearly states the intent that protnone and numa
>          * hinting ptes are considered present for the purposes of
>          * pagetable operations like zapping, protection changes, gup etc.
>          */
>         return val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
> }
>
> static inline int pte_present(pte_t pte)
> {
>         return pteval_present(pte_flags(pte));
> }
>
> If Xen is doing some other tricks with _PAGE_PRESENT then it might be
> ruled out as an option. If so, then maybe it could still be made a
> little clearer for future reference?

Yes, sure, it should work, I tried it.
Thank you Mel.

>
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index c1d406f..ff621de 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 unsigned long pfn = mfn_to_pfn(mfn);
>
> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>
>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>  {
> -       if (val & _PAGE_PRESENT) {
> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>                 pteval_t flags = val & PTE_FLAGS_MASK;
>                 unsigned long mfn;
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 8e4f41d..693fe00 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -654,10 +654,14 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
>   * (because _PAGE_PRESENT is not set).
>   */
>  #ifndef pte_numa
> +static inline int pteval_numa(pteval_t pteval)
> +{
> +       return (pteval & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
> +}
> +
>  static inline int pte_numa(pte_t pte)
>  {
> -       return (pte_flags(pte) &
> -               (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
> +       return pteval_numa(pte_flags(pte));
>  }
>  #endif
>



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 19:50:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 19:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Vi9-0006E3-WE; Sun, 26 Jan 2014 19:49:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7Vi8-0006Dy-Eu
	for xen-devel@lists.xensource.com; Sun, 26 Jan 2014 19:49:40 +0000
Received: from [193.109.254.147:53890] by server-5.bemta-14.messagelabs.com id
	0C/2F-03510-3D665E25; Sun, 26 Jan 2014 19:49:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390765777!13224309!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21825 invoked from network); 26 Jan 2014 19:49:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 19:49:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,724,1384300800"; d="scan'208";a="94622110"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Jan 2014 19:49:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 14:49:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7Vi3-0006G3-Bb;
	Sun, 26 Jan 2014 19:49:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7Vi2-0003jE-Vl;
	Sun, 26 Jan 2014 19:49:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24536-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 26 Jan 2014 19:49:35 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24536: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24536 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24536/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 22:17:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:17:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Y0h-0001Iy-94; Sun, 26 Jan 2014 22:16:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7Y0f-0001It-Cm
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:16:57 +0000
Received: from [193.109.254.147:53279] by server-10.bemta-14.messagelabs.com
	id CE/77-20752-85985E25; Sun, 26 Jan 2014 22:16:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390774612!11795405!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13068 invoked from network); 26 Jan 2014 22:16:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:16:53 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QMGjHp016694
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 22:16:47 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QMGima025560
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sun, 26 Jan 2014 22:16:44 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QMGhW0025551; Sun, 26 Jan 2014 22:16:44 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 14:16:43 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
References: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sun, 26 Jan 2014 17:16:40 -0500
To: Igor Kozhukhov <ikozhukhov@gmail.com>, dilos-dev@lists.illumos.org
Message-ID: <e9a0431c-9a1d-42a8-832d-69a718e55163@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Igor Kozhukhov <ikozhukhov@gmail.com> wrote:
>Hi All,
>
>I have good news: I have loaded xen-4.3 onto DilOS (an illumos-based
>platform) as dom0 (64-bit).

Woot!

Are there instructions on how to compile and load/use it?

Thanks!
>
># xl list
>Name                                        ID   Mem VCPUs      State   Time(s)
>Domain-0                                     0  2047     4     r-----     288.7
>
># xl info
>host                   : myhost
>release                : 5.11
>version                : 1.3.6-xen
>machine                : i86pc
>libxl: error: libxl.c:3963:libxl_get_physinfo: getting sharing freed pages
>libxl_physinfo failed.
>xen_major              : 4
>xen_minor              : 3
>xen_extra              : .2-pre-xvm
>xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p 
>xen_scheduler          : credit
>xen_pagesize           : 4096
>platform_params        : virt_start=0xffff800000000000
>xen_changeset          : 
>xen_commandline        : console=com1 dom0_mem=2047M dom0_vcpus_pin=false watchdog=false
>cc_compiler            : gcc (GCC) 4.7.3
>cc_compile_by          : root
>cc_compile_domain      : 
>cc_compile_date        : Mon Jan 20 09:34:23 MSK 2014
>xend_config_format     : 4
>
>Yes - more work is needed and it is still buggy, but work is in
>progress :)
>I have limited time to work on it and will do what I can in my
>free time.
>
>I have to work on both sides: dilos-illumos sources and xen sources.
>More work is needed to update the python scripts with patches from xen-3.4.
>
>If anyone is interested in helping - please send email to the dilos-dev@ list,
>and you are welcome on FreeNode IRC: #dilos.
>
>--
>Best regards,
>Igor Kozhukhov
>
>
>
>
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Jan 26 22:17:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:17:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Y0h-0001Iy-94; Sun, 26 Jan 2014 22:16:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7Y0f-0001It-Cm
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:16:57 +0000
Received: from [193.109.254.147:53279] by server-10.bemta-14.messagelabs.com
	id CE/77-20752-85985E25; Sun, 26 Jan 2014 22:16:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390774612!11795405!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13068 invoked from network); 26 Jan 2014 22:16:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:16:53 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0QMGjHp016694
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sun, 26 Jan 2014 22:16:47 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QMGima025560
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sun, 26 Jan 2014 22:16:44 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0QMGhW0025551; Sun, 26 Jan 2014 22:16:44 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 26 Jan 2014 14:16:43 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
References: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Sun, 26 Jan 2014 17:16:40 -0500
To: Igor Kozhukhov <ikozhukhov@gmail.com>, dilos-dev@lists.illumos.org
Message-ID: <e9a0431c-9a1d-42a8-832d-69a718e55163@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Igor Kozhukhov <ikozhukhov@gmail.com> wrote:
>Hi All,
>
>I have good news: I have brought up xen-4.3 on DilOS (an illumos-based
>platform) as a 64-bit dom0.

Woot!

Are there instructions on how to compile and load/use it?

Thanks!
>
># xl list
>Name                                        ID   Mem VCPUs      State   Time(s)
>Domain-0                                     0  2047     4     r-----     288.7
>
># xl info
>host                   : myhost
>release                : 5.11
>version                : 1.3.6-xen
>machine                : i86pc
>libxl: error: libxl.c:3963:libxl_get_physinfo: getting sharing freed pages
>libxl_physinfo failed.
>xen_major              : 4
>xen_minor              : 3
>xen_extra              : .2-pre-xvm
>xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p 
>xen_scheduler          : credit
>xen_pagesize           : 4096
>platform_params        : virt_start=0xffff800000000000
>xen_changeset          : 
>xen_commandline        : console=com1 dom0_mem=2047M dom0_vcpus_pin=false watchdog=false
>cc_compiler            : gcc (GCC) 4.7.3
>cc_compile_by          : root
>cc_compile_domain      : 
>cc_compile_date        : Mon Jan 20 09:34:23 MSK 2014
>xend_config_format     : 4
>
>Yes, more work is needed and it is still buggy, but work is in
>progress :)
>My time for this is limited, so I will keep working on it in my
>free time.
>
>Work is needed on both sides: the DilOS/illumos sources and the Xen sources.
>The Python scripts also still need updating with patches from xen-3.4.
>
>If anyone is interested in helping, please send email to the dilos-dev@ list,
>and you are welcome to join #dilos on FreeNode IRC.
>
>--
>Best regards,
>Igor Kozhukhov
>
>
>
>
>

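For reference, the xen_major/xen_minor/xen_extra fields in the `xl info` output above combine into the full hypervisor version string (4 + 3 + ".2-pre-xvm" gives "4.3.2-pre-xvm"). A self-contained sketch of extracting and assembling them; the here-doc text stands in for real `xl info` output, and the helper name is made up for illustration:

```shell
# xl_field: print the value of field $1 from `xl info`-style text on stdin.
# The field name and values below are taken from the output quoted above.
xl_field() {
    awk -F': *' -v k="$1" '$1 ~ "^"k" *$" { print $2 }'
}

info='xen_major              : 4
xen_minor              : 3
xen_extra              : .2-pre-xvm'

major=$(printf '%s\n' "$info" | xl_field xen_major)
minor=$(printf '%s\n' "$info" | xl_field xen_minor)
extra=$(printf '%s\n' "$info" | xl_field xen_extra)

# Assemble the full version string, e.g. 4.3.2-pre-xvm
printf '%s.%s%s\n' "$major" "$minor" "$extra"
```

In a live setup the three values would come from `xl info` directly rather than a here-doc.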



From xen-devel-bounces@lists.xen.org Sun Jan 26 22:27:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YAT-0001fJ-Kl; Sun, 26 Jan 2014 22:27:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YAS-0001fE-79
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:27:04 +0000
Received: from [85.158.137.68:22459] by server-5.bemta-3.messagelabs.com id
	89/FB-25188-7BB85E25; Sun, 26 Jan 2014 22:27:03 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390775220!11438182!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11103 invoked from network); 26 Jan 2014 22:27:02 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:27:02 -0000
Received: from mail-ie0-f178.google.com (mail-ie0-f178.google.com
	[209.85.223.178]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMQwfQ024878
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:26:59 -0800
Received: by mail-ie0-f178.google.com with SMTP id x13so5126752ief.9
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:26:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=MTVUtg/99IQjoebN4n7s8do//jGVfWMvQL6Z0ipsCMk=;
	b=PWDrVnEY5VLeC7qLbGr8ggHPUJPOkuESCZcUImOOlFbiJ0DtVYyYLHvAETE+ku1e4t
	rFikrvAI26oxa0HT506agVm6GnH6X+vN5fqy2bErjTnOJzZINLWadKeVI+d3DGf5WTDp
	3qLFmJSsC/P/n32SzGlB9DgQdksxqrj+jjBOnZExagkTOjZGeRmEGkCgVCevZbbTeVe1
	MXZg/sUO4GzVurdbuaSzbIPFGbqMCXLcksUehfVDE8rlDnDUEXUz01NUMzKCT4iTzKtR
	0eNt+t9bjYX2DJ5jzJT9PkS0N9QZMaczEVhso1dMxWqoy4yuwpbchV951JCujvAivTbD
	1cWg==
X-Received: by 10.43.56.4 with SMTP id wa4mr19363139icb.18.1390775217549; Sun,
	26 Jan 2014 14:26:57 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:26:17 -0800 (PST)
In-Reply-To: <1390558852.2124.31.camel@kazak.uk.xensource.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<52E23D01.50204@cn.fujitsu.com>
	<1390558852.2124.31.camel@kazak.uk.xensource.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:26:17 -0800
Message-ID: <CAP8mzPOyAgFJRCOSJJf-fnF1mZ9cZr8OiqE-WTBg4CaLitfdDA@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/13 V6] Remus/Libxl: Network buffering
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2620672936551860204=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2620672936551860204==
Content-Type: multipart/alternative; boundary=bcaec517aa5acdf94604f0e716e3

--bcaec517aa5acdf94604f0e716e3
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Jan 24, 2014 at 2:20 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Fri, 2014-01-24 at 18:14 +0800, Lai Jiangshan wrote:
> > On 01/21/2014 05:05 PM, Lai Jiangshan wrote:
> >
> > >>
> > >
> > > Changes in V6:
> > >   Applied Ian Jackson's comments on the V5 series.
> > >   [PATCH 2/4 V5] was split into smaller patches by functionality.
> > >
> > >   [PATCH 4/4 V5] --> [PATCH 13/13]: netbuffer is now enabled by default.
> > >
> >
> > Ping!
>
> This is targeting 4.5 I think? I'm afraid that this means it is likely
> to be queued behind anything targeting 4.4 as far as review bandwidth
> goes.
>
> Ian.
>
> >
> > Ian Jackson, any suggestion?
> >
> > Shriram, could you review it?
> >
> > thanks,
> > Lai
>
>
>
Hi,
Thanks for splitting up the patch into smaller ones and incorporating all
the feedback.
Most of the series is fine. I have some minor feedback related to patches
2, 6 & 13.

Shriram

--bcaec517aa5acdf94604f0e716e3--


--===============2620672936551860204==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:28:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YBS-0001jG-6I; Sun, 26 Jan 2014 22:28:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YBQ-0001j0-U6
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:28:05 +0000
Received: from [85.158.137.68:51077] by server-11.bemta-3.messagelabs.com id
	BC/CA-19379-4FB85E25; Sun, 26 Jan 2014 22:28:04 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390775281!11429381!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16923 invoked from network); 26 Jan 2014 22:28:03 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:28:03 -0000
Received: from mail-ig0-f172.google.com (mail-ig0-f172.google.com
	[209.85.213.172]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMS0eE025038
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:28:01 -0800
Received: by mail-ig0-f172.google.com with SMTP id k19so7393250igc.5
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:27:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=NqERA/vsYmV1wGKGgAAh1SqEEW6eEECJfmO5klnKN4o=;
	b=TCVspqCS0PYOjwd10Hs/qvuvnZa5fHkbGlcnoELSGnsr+7YnYq72E3Vi8vJP0SkP+f
	qajz8i+RGEf1mrw0sXkn6Bi+wA0Y7c7X2p1Fk8TgGDOfheJGxkshRuQw84ZrFNemtPim
	wUdgq8j4kI9VCaTueCnwC0UxWztdWjpYBU+/XoIdRpNJAfuyQVwoVS8Ai4iwew2/CKyl
	uE204vE6moBsLnb1nnxugW5SInJAkVJa58RT7b7CZn8SPvPM1LeIOjhvKhb2eSgjMMfV
	5t/S1kuufgVUNSqlUw0w6cK/Onn+27coFozrfEdlBkgN+EJAN1+4A6QsCQ+yOdEWAzMT
	6t4g==
X-Received: by 10.50.79.198 with SMTP id l6mr14609284igx.23.1390775279728;
	Sun, 26 Jan 2014 14:27:59 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:27:19 -0800 (PST)
In-Reply-To: <1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:27:19 -0800
Message-ID: <CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
	hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4982055540856779951=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4982055540856779951==
Content-Type: multipart/alternative; boundary=089e013a110282c24704f0e71a92

--089e013a110282c24704f0e71a92
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> This patch introduces remus-netbuf-setup hotplug script responsible for
> setting up and tearing down the necessary infrastructure required for
> network output buffering in Remus.  This script is intended to be invoked
> by libxl for each guest interface, when starting or stopping Remus.
>
> Apart from returning success/failure indication via the usual hotplug
> entries in xenstore, this script also writes to xenstore, the name of
> the IFB device to be used to control the vif's network output.
>
> The script relies on libnl3 command line utilities to perform various
> setup/teardown functions. The script is confined to Linux platforms only
> since NetBSD does not seem to have libnl3.
>
> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  tools/hotplug/Linux/Makefile           |   1 +
>  tools/hotplug/Linux/remus-netbuf-setup | 183
> +++++++++++++++++++++++++++++++++
>  2 files changed, 184 insertions(+)
>  create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
>
>
The last time I posted this script, the feedback was that the script and
the code invoking it should be in a single patch. So I would suggest doing
the same here.
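As a rough illustration of the flow the patch description lays out, a setup path might look like the sketch below. This is purely illustrative, not the actual remus-netbuf-setup script: the IFB naming scheme and variable names are assumptions, and the privileged device-manipulation commands are shown only as comments since they need root, a real vif, and the libnl3 CLI tools.

```shell
# Illustrative sketch only -- NOT the actual remus-netbuf-setup script.
# Shape of the setup path described above: pick an IFB device for a vif,
# redirect the vif's output into it, install a "plug" qdisc to buffer
# the traffic, and report the chosen IFB back via xenstore.

# Derive a deterministic IFB name from a vif name (vif1.0 -> ifb1_0);
# this naming scheme is an assumption made for the sketch.
vif_to_ifb() {
    printf 'ifb%s\n' "$(printf '%s' "${1#vif}" | tr '.' '_')"
}

setup_netbuf() {
    vif="$1"
    ifb="$(vif_to_ifb "$vif")"
    # The privileged steps, roughly (commented out here):
    #   ip link set dev "$ifb" up
    #   tc qdisc add dev "$vif" ingress
    #   tc filter add dev "$vif" parent ffff: protocol ip u32 \
    #       match u32 0 0 action mirred egress redirect dev "$ifb"
    #   nl-qdisc-add --dev="$ifb" --parent root plug   # libnl3 CLI
    #   xenstore-write "$XENBUS_PATH/ifb" "$ifb"       # report the IFB
    echo "$ifb"
}

setup_netbuf vif1.0    # prints the IFB chosen for vif1.0
```

At checkpoint time, Remus would release the buffered packets by toggling the plug qdisc on the IFB device.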

--089e013a110282c24704f0e71a92--


--===============4982055540856779951==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:28:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YBS-0001jG-6I; Sun, 26 Jan 2014 22:28:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YBQ-0001j0-U6
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:28:05 +0000
Received: from [85.158.137.68:51077] by server-11.bemta-3.messagelabs.com id
	BC/CA-19379-4FB85E25; Sun, 26 Jan 2014 22:28:04 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390775281!11429381!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16923 invoked from network); 26 Jan 2014 22:28:03 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:28:03 -0000
Received: from mail-ig0-f172.google.com (mail-ig0-f172.google.com
	[209.85.213.172]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMS0eE025038
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:28:01 -0800
Received: by mail-ig0-f172.google.com with SMTP id k19so7393250igc.5
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:27:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=NqERA/vsYmV1wGKGgAAh1SqEEW6eEECJfmO5klnKN4o=;
	b=TCVspqCS0PYOjwd10Hs/qvuvnZa5fHkbGlcnoELSGnsr+7YnYq72E3Vi8vJP0SkP+f
	qajz8i+RGEf1mrw0sXkn6Bi+wA0Y7c7X2p1Fk8TgGDOfheJGxkshRuQw84ZrFNemtPim
	wUdgq8j4kI9VCaTueCnwC0UxWztdWjpYBU+/XoIdRpNJAfuyQVwoVS8Ai4iwew2/CKyl
	uE204vE6moBsLnb1nnxugW5SInJAkVJa58RT7b7CZn8SPvPM1LeIOjhvKhb2eSgjMMfV
	5t/S1kuufgVUNSqlUw0w6cK/Onn+27coFozrfEdlBkgN+EJAN1+4A6QsCQ+yOdEWAzMT
	6t4g==
X-Received: by 10.50.79.198 with SMTP id l6mr14609284igx.23.1390775279728;
	Sun, 26 Jan 2014 14:27:59 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:27:19 -0800 (PST)
In-Reply-To: <1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:27:19 -0800
Message-ID: <CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
	hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4982055540856779951=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4982055540856779951==
Content-Type: multipart/alternative; boundary=089e013a110282c24704f0e71a92

--089e013a110282c24704f0e71a92
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> This patch introduces remus-netbuf-setup hotplug script responsible for
> setting up and tearing down the necessary infrastructure required for
> network output buffering in Remus.  This script is intended to be invoked
> by libxl for each guest interface, when starting or stopping Remus.
>
> Apart from returning success/failure indication via the usual hotplug
> entries in xenstore, this script also writes to xenstore, the name of
> the IFB device to be used to control the vif's network output.
>
> The script relies on libnl3 command line utilities to perform various
> setup/teardown functions. The script is confined to Linux platforms only
> since NetBSD does not seem to have libnl3.
>
> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  tools/hotplug/Linux/Makefile           |   1 +
>  tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++++++++++++++++++
>  2 files changed, 184 insertions(+)
>  create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
>
>
The last time I posted this script, the feedback was that the script and the
code invoking the script should be in a single patch. So I would suggest
doing the same here.

--089e013a110282c24704f0e71a92--


--===============4982055540856779951==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4982055540856779951==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:31:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YEr-00028S-4B; Sun, 26 Jan 2014 22:31:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YEp-00028M-SX
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:31:36 +0000
Received: from [85.158.137.68:56154] by server-9.bemta-3.messagelabs.com id
	8E/E4-13104-7CC85E25; Sun, 26 Jan 2014 22:31:35 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390775490!10610055!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1813 invoked from network); 26 Jan 2014 22:31:32 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:31:32 -0000
Received: from mail-ie0-f175.google.com (mail-ie0-f175.google.com
	[209.85.223.175]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMVT2T025544
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:31:30 -0800
Received: by mail-ie0-f175.google.com with SMTP id ar20so5038407iec.34
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:31:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=ilYD3vsHp0HCGNqkKzceewJdzcEjd5dlMHosI4CYkPM=;
	b=TwD0BIR2yFiiqpXNGj36asknYRh4e+eUkQego7wQHQmYy3B6Li7RC3Z8E9My8cQjef
	9ySlXUUrLIUX5dJkR0q7Oo+JjS2eZegqxEHlWuDuWZXEbqQpYZHUV+/jAtetggZ/ti3Q
	oidpRbjq3uVolVv39wpVnS1WJP/62bi6lP2fZJGc0A4iuzxBuJyxRtpdutnYpEJZ8Dk+
	2N3giSGShDF/6pEzCp2zz2vApUOMvl+mXzvqM0YYDKAmmvVmt8XNrUGL9w3nhJcL6lHJ
	02qL7LMe+bRGDUVcX7MlozhyPsAt5Kt760qwztSLxu2QXDjJYRGf00PoD+na6AzBn1hs
	rFZQ==
X-Received: by 10.50.45.33 with SMTP id j1mr14693376igm.32.1390775488721; Sun,
	26 Jan 2014 14:31:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:30:48 -0800 (PST)
In-Reply-To: <1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:30:48 -0800
Message-ID: <CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/13 V6] remus: implement the API to setup
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0260953706865313514=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0260953706865313514==
Content-Type: multipart/alternative; boundary=089e0122f016f7bc9a04f0e72682

--089e0122f016f7bc9a04f0e72682
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> The following steps are taken during setup:
>  a) call the hotplug script for each vif to setup its network buffer
>
>  b) establish a dedicated remus context containing libnl related
>     state (netlink sockets, qdisc caches, etc.,)
>
>  c) Obtain handles to plug qdiscs installed on the IFB devices
>     chosen by the hotplug scripts.
>
> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  docs/misc/xenstore-paths.markdown |   4 +
>  tools/libxl/Makefile              |   2 +
>  tools/libxl/libxl_dom.c           |   7 +-
>  tools/libxl/libxl_internal.h      |  11 +
>  tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++
>  tools/libxl/libxl_nonetbuffer.c   |   6 +
>  tools/libxl/libxl_remus.c         |  35 ++++
>  7 files changed, 479 insertions(+), 5 deletions(-)
>  create mode 100644 tools/libxl/libxl_remus.c
>
> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 70ab7f4..7a0d2c9 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
>
>  The device model version for a domain.
>
> +#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
> +
> +IFB device used by Remus to buffer network output from the associated vif.
> +
>  [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
>  [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
>  [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 84a467c..218f55e 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -52,6 +52,8 @@ else
>  LIBXL_OBJS-y += libxl_nonetbuffer.o
>  endif
>
> +LIBXL_OBJS-y += libxl_remus.o
> +
>  LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
>  LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
>
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 8d63f90..e3e9f6f 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
>
>  /*==================== Domain suspend (save) ====================*/
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                        libxl__domain_suspend_state *dss, int rc);
> -
>  /*----- complicated callback, called by xc_domain_save -----*/
>
>  /*
> @@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
>      dss->save_dm_callback(egc, dss, our_rc);
>  }
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                        libxl__domain_suspend_state *dss, int rc)
> +void domain_suspend_done(libxl__egc *egc,
> +                         libxl__domain_suspend_state *dss, int rc)
>  {
>      STATE_AO_GC(dss->ao);
>
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 2f64382..0430307 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
>
>  _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
>
> +_hidden void domain_suspend_done(libxl__egc *egc,
> +                                 libxl__domain_suspend_state *dss,
> +                                 int rc);
> +
> +_hidden void libxl__remus_setup_done(libxl__egc *egc,
> +                                     libxl__domain_suspend_state *dss,
> +                                     int rc);
> +
> +_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
> +                                       libxl__domain_suspend_state *dss);
> +
>  struct libxl__domain_suspend_state {
>      /* set by caller of libxl__domain_suspend */
>      libxl__ao *ao;
> diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
> index 8e23d75..0be876c 100644
> --- a/tools/libxl/libxl_netbuffer.c
> +++ b/tools/libxl/libxl_netbuffer.c
> @@ -17,11 +17,430 @@
>
>  #include "libxl_internal.h"
>
> +#include <netlink/cache.h>
> +#include <netlink/socket.h>
> +#include <netlink/attr.h>
> +#include <netlink/route/link.h>
> +#include <netlink/route/route.h>
> +#include <netlink/route/qdisc.h>
> +#include <netlink/route/qdisc/plug.h>
> +
> +typedef struct libxl__remus_netbuf_state {
> +    struct rtnl_qdisc **netbuf_qdisc_list;
> +    struct nl_sock *nlsock;
> +    struct nl_cache *qdisc_cache;
> +    const char **vif_list;
> +    const char **ifb_list;
> +    uint32_t num_netbufs;
> +    uint32_t unused;
> +} libxl__remus_netbuf_state;
> +
>  int libxl__netbuffer_enabled(libxl__gc *gc)
>  {
>      return 1;
>  }
>
> +/* If the device has a vifname, then use that instead of
> + * the vifX.Y format.
> + */
> +static const char *get_vifname(libxl__gc *gc, uint32_t domid,
> +                               libxl_device_nic *nic)
> +{
> +    const char *vifname = NULL;
> +    const char *path;
> +    int rc;
> +
> +    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
> +                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
> +    if (rc < 0) {
> +        /* use the default name */
> +        vifname = libxl__device_nic_devname(gc, domid,
> +                                            nic->devid,
> +                                            nic->nictype);
> +    }
> +
> +    return vifname;
>

IanJ's feedback last time was that the error code rc needs to be checked.
If it's a failure, then return NULL to the caller. If it's an ENOENT, then
use the default name.
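A sketch of the suggested control flow (illustrative only: the xenstore read is stubbed out, and the errno-based error reporting stands in for libxl's actual rc conventions and helper names):

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for libxl__xs_read_checked(): 0 on success, -1 with errno set
 * on failure.  Purely illustrative. */
static int xs_read_stub(const char *path, const char **out)
{
    if (strcmp(path, "vifname-present") == 0) { *out = "myvif"; return 0; }
    if (strcmp(path, "vifname-missing") == 0) { errno = ENOENT; return -1; }
    errno = EIO;
    return -1;
}

/* The suggested pattern: fall back to the default vifX.Y name only when
 * the vifname key is absent (ENOENT); propagate any other error as NULL. */
static const char *get_vifname_sketch(const char *path,
                                      unsigned domid, unsigned devid)
{
    static char defname[32];
    const char *vifname = NULL;

    if (xs_read_stub(path, &vifname) < 0) {
        if (errno != ENOENT)
            return NULL;   /* genuine failure: let the caller handle it */
        /* ENOENT: no vifname key, use the default vifX.Y name */
        snprintf(defname, sizeof(defname), "vif%u.%u", domid, devid);
        vifname = defname;
    }
    return vifname;
}
```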

> +}
> +
> +static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
> +                                       int *num_vifs)
> +{
> +    libxl_device_nic *nics = NULL;
> +    int nb, i = 0;
> +    const char **vif_list = NULL;
> +
> +    *num_vifs = 0;
> +    nics = libxl_device_nic_list(CTX, domid, &nb);
> +    if (!nics)
> +        return NULL;
> +
> +    /* Ensure that none of the vifs are backed by driver domains */
> +    for (i = 0; i < nb; i++) {
> +        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
> +            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
> +                "Network buffering is not supported with driver domains",
> +                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);
>

And if the previous feedback were incorporated, then get_vifname's return
value should be assigned to a variable and checked before printing it or
using it for other purposes.


> +            *num_vifs = -1;
> +            goto out;
> +        }
> +    }
> +
> +    GCNEW_ARRAY(vif_list, nb);
> +    for (i = 0; i < nb; ++i) {
> +        vif_list[i] = get_vifname(gc, domid, &nics[i]);
> +        if (!vif_list[i]) {
> +            vif_list = NULL;
> +            goto out;
> +        }
> +    }
> +    *num_vifs = nb;
> +
> + out:
> +    for (i = 0; i < nb; i++)
> +        libxl_device_nic_dispose(&nics[i]);
> +    free(nics);
> +    return vif_list;
> +}
> +
> +static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
> +{
> +    int i;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* free qdiscs */
> +    for (i = 0; i < netbuf_state->num_netbufs; i++) {
> +        qdisc = netbuf_state->netbuf_qdisc_list[i];
> +        if (!qdisc)
> +            break;
> +
> +        nl_object_put((struct nl_object *)qdisc);
> +    }
> +
> +    /* free qdisc cache */
> +    nl_cache_clear(netbuf_state->qdisc_cache);
> +    nl_cache_free(netbuf_state->qdisc_cache);
> +
> +    /* close nlsock */
> +    nl_close(netbuf_state->nlsock);
> +
> +    /* free nlsock */
> +    nl_socket_free(netbuf_state->nlsock);
> +}
> +
>

This code (free_qdiscs) is new. Have you tested it? While the control flow
looks pretty sane, libnl has been evolving a bit ever since the 3.* series.

If init_qdiscs fails, it calls free_qdiscs(). If any other setup stage after
network buffering fails, it would invoke the teardown code, which also calls
free_qdiscs(). This second free may end up in a segfault.

I suggest adding a simple check to see whether nlsock/qdisc_cache are NULL
before attempting to execute the rest of the function, and setting them to
NULL after you free the qdisc_cache & nlsock.
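In other words, something with this shape (a standalone sketch; the struct and plain free() calls stand in for the real libnl3 state and the nl_cache_clear/nl_cache_free/nl_close/nl_socket_free calls):

```c
#include <stdlib.h>

/* Simplified stand-ins for the libnl handles held in
 * libxl__remus_netbuf_state. */
struct netbuf_state {
    void *nlsock;       /* stands in for struct nl_sock * */
    void *qdisc_cache;  /* stands in for struct nl_cache * */
};

/* Suggested shape for free_qdiscs(): bail out early when the state was
 * never (fully) initialised, and NULL the pointers after freeing, so a
 * second call from the teardown path is a harmless no-op. */
static void free_qdiscs_sketch(struct netbuf_state *st)
{
    if (!st->nlsock && !st->qdisc_cache)
        return;                  /* nothing to tear down */

    if (st->qdisc_cache) {
        free(st->qdisc_cache);   /* nl_cache_clear + nl_cache_free here */
        st->qdisc_cache = NULL;
    }
    if (st->nlsock) {
        free(st->nlsock);        /* nl_close + nl_socket_free here */
        st->nlsock = NULL;
    }
}
```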


> +static int init_qdiscs(libxl__gc *gc,
> +                       libxl__remus_state *remus_state)
> +{
> +    int i, ret, ifindex;
> +    struct rtnl_link *ifb = NULL;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* Convenience aliases */
> +    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
> +    const int num_netbufs = netbuf_state->num_netbufs;
> +    const char ** const ifb_list = netbuf_state->ifb_list;
> +
> +    /* Now that we have brought up IFB devices with plug qdisc for
> +     * each vif, lets get a netlink handle on the plug qdisc for use
> +     * during checkpointing.
> +     */
> +    netbuf_state->nlsock = nl_socket_alloc();
> +    if (!netbuf_state->nlsock) {
> +        LOG(ERROR, "cannot allocate nl socket");
> +        goto out;
> +    }
> +
> +    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
> +    if (ret) {
> +        LOG(ERROR, "failed to open netlink socket: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* get list of all qdiscs installed on network devs. */
> +    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
> +                                 &netbuf_state->qdisc_cache);
> +    if (ret) {
> +        LOG(ERROR, "failed to allocate qdisc cache: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* list of handles to plug qdiscs */
> +    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
> +
> +    for (i = 0; i < num_netbufs; ++i) {
> +
> +        /* get a handle to the IFB interface */
> +        ifb = NULL;
> +        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
> +                                   ifb_list[i], &ifb);
> +        if (ret) {
> +            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
> +                nl_geterror(ret));
> +            goto out;
> +        }
> +
> +        ifindex = rtnl_link_get_ifindex(ifb);
> +        if (!ifindex) {
> +            LOG(ERROR, "interface %s has no index", ifb_list[i]);
> +            goto out;
> +        }
> +
> +        /* Get a reference to the root qdisc installed on the IFB, by
> +         * querying the qdisc list we obtained earlier. The netbufscript
> +         * sets up the plug qdisc as the root qdisc, so we don't have to
> +         * search the entire qdisc tree on the IFB dev.
> +
> +         * There is no need to explicitly free this qdisc as its just a
> +         * reference from the qdisc cache we allocated earlier.
> +         */
> +        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
> +                                         TC_H_ROOT);
> +
> +        if (qdisc) {
> +            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
> +            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
> +            if (!tc_kind || strcmp(tc_kind, "plug")) {
> +                nl_object_put((struct nl_object *)qdisc);
> +                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
> +                goto out;
> +            }
> +            netbuf_state->netbuf_qdisc_list[i] = qdisc;
> +        } else {
> +            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
> +            goto out;
> +        }
> +        rtnl_link_put(ifb);
> +    }
> +
> +    return 0;
> +
> + out:
> +    if (ifb)
> +        rtnl_link_put(ifb);
> +    free_qdiscs(netbuf_state);
> +    return ERROR_FAIL;
> +}
> +
> +static void netbuf_setup_timeout_cb(libxl__egc *egc,
> +                                    libxl__ev_time *ev,
> +                                    const struct timeval *requested_abs)
> +{
> +    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
> +
> +    /* Convenience aliases */
> +    const int devid = remus_state->dev_id;
> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
> +    const char *const vif = netbuf_state->vif_list[devid];
> +
> +    STATE_AO_GC(remus_state->dss->ao);
> +
> +    libxl__ev_time_deregister(gc, &remus_state->timeout);
> +    assert(libxl__ev_child_inuse(&remus_state->child));
> +
> +    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
> +        remus_state->netbufscript, vif);
> +
> +    if (kill(remus_state->child.pid, SIGKILL)) {
> +        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
> +              remus_state->netbufscript,
> +              (unsigned long)remus_state->child.pid);
> +    }
> +
> +    return;
> +}
> +
> +/* the script needs the following env & args
> + * $vifname
> + * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
> + * $IFB (for teardown)
> + * setup/teardown as command line arg.
> + * In return, the script writes the name of IFB device (during setup) to be
> + * used for output buffering into XENBUS_PATH/ifb
> + */
> +static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
> +                              char *op, libxl__ev_child_callback *death)
> +{
> +    int arraysize = 7, nr = 0;
> +    char **env = NULL, **args = NULL;
> +    pid_t pid;
> +
> +    /* Convenience aliases */
> +    libxl__ev_child *const child = &remus_state->child;
> +    libxl__ev_time *const timeout = &remus_state->timeout;
> +    char *const script = libxl__strdup(gc, remus_state->netbufscript);
> +    const uint32_t domid = remus_state->dss->domid;
> +    const int devid = remus_state->dev_id;
> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
> +    const char *const vif = netbuf_state->vif_list[devid];
> +    const char *const ifb = netbuf_state->ifb_list[devid];
> +
>

Please set arraysize to 7 here, instead of at the beginning of the function.
It's more readable that way.

> +    GCNEW_ARRAY(env, arraysize);
> +    env[nr++] = "vifname";
> +    env[nr++] = libxl__strdup(gc, vif);
> +    env[nr++] = "XENBUS_PATH";
> +    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
> +                          libxl__xs_libxl_path(gc, domid), devid);
> +    if (!strcmp(op, "teardown")) {
> +        env[nr++] = "IFB";
> +        env[nr++] = libxl__strdup(gc, ifb);
> +    }
> +    env[nr++] = NULL;
> +    assert(nr <= arraysize);
> +
> +    arraysize = 3; nr = 0;
> +    GCNEW_ARRAY(args, arraysize);
> +    args[nr++] = script;
> +    args[nr++] = op;
> +    args[nr++] = NULL;
> +    assert(nr == arraysize);
> +
> +    /* Set hotplug timeout */
> +    if (libxl__ev_time_register_rel(gc, timeout,
> +                                    netbuf_setup_timeout_cb,
> +                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
> +        LOG(ERROR, "unable to register timeout for "
> +            "netbuf setup script %s on vif %s", script, vif);
> +        return ERROR_FAIL;
> +    }
> +
> +    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
> +        script, op, vif);
> +
> +    /* Fork and exec netbuf script */
> +    pid = libxl__ev_child_fork(gc, child, death);
> +    if (pid == -1) {
> +        LOG(ERROR, "unable to fork netbuf script %s", script);
> +        return ERROR_FAIL;
> +    }
> +
> +    if (!pid) {
> +        /* child: Launch netbuf script */
> +        libxl__exec(gc, -1, -1, -1, args[0], args, env);
> +        /* notreached */
> +        abort();
> +    }
> +
> +    return 0;
> +}
> +
>
>

+ =A0 =A0 =A0 =A0if (!qdisc)<br>
+ =A0 =A0 =A0 =A0 =A0 =A0break;<br>
+<br>
+ =A0 =A0 =A0 =A0nl_object_put((struct nl_object *)qdisc);<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0/* free qdisc cache */<br>
+ =A0 =A0nl_cache_clear(netbuf_state-&gt;qdisc_cache);<br>
+ =A0 =A0nl_cache_free(netbuf_state-&gt;qdisc_cache);<br>
+<br>
+ =A0 =A0/* close nlsock */<br>
+ =A0 =A0nl_close(netbuf_state-&gt;nlsock);<br>
+<br>
+ =A0 =A0/* free nlsock */<br>
+ =A0 =A0nl_socket_free(netbuf_state-&gt;nlsock);<br>
+}<br>
+<br></blockquote><div><br></div><div>This code (free_qdiscs) is new. Have =
you tested it?</div><div>While the control flow looks pretty sane, libnl ha=
s been evolving</div><div>a bit ever since the 3.* series.</div><div><br>



</div><div>If init_qdisc fails, it calls free_qdisc(). If any other setup s=
tage after=A0</div><div>network buffering fails, it would invoke the teardo=
wn code, which also=A0</div><div>calls free_qdisc(). This may end up in a s=
egfault.</div>



<div><div><br></div><div>I suggest adding a simple check to see if nlsock/q=
disc_cache are NULL</div><div>before attempting to execute the rest of the =
function. =A0And after you free the=A0</div><div>qdisc_cache &amp; nlsock, =
set them to NULL.=A0</div>



<div>=A0</div></div><blockquote class=3D"gmail_quote" style=3D"margin:0px 0=
px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);borde=
r-left-style:solid;padding-left:1ex">
+static int init_qdiscs(libxl__gc *gc,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 libxl__remus_state *remus_sta=
te)<br>
+{<br>
+ =A0 =A0int i, ret, ifindex;<br>
+ =A0 =A0struct rtnl_link *ifb =3D NULL;<br>
+ =A0 =A0struct rtnl_qdisc *qdisc =3D NULL;<br>
+<br>
+ =A0 =A0/* Convenience aliases */<br>
+ =A0 =A0libxl__remus_netbuf_state * const netbuf_state =3D remus_state-&gt=
;netbuf_state;<br>
+ =A0 =A0const int num_netbufs =3D netbuf_state-&gt;num_netbufs;<br>
+ =A0 =A0const char ** const ifb_list =3D netbuf_state-&gt;ifb_list;<br>
+<br>
+ =A0 =A0/* Now that we have brought up IFB devices with plug qdisc for<br>
+ =A0 =A0 * each vif, lets get a netlink handle on the plug qdisc for use<b=
r>
+ =A0 =A0 * during checkpointing.<br>
+ =A0 =A0 */<br>
+ =A0 =A0netbuf_state-&gt;nlsock =3D nl_socket_alloc();<br>
+ =A0 =A0if (!netbuf_state-&gt;nlsock) {<br>
+ =A0 =A0 =A0 =A0LOG(ERROR, &quot;cannot allocate nl socket&quot;);<br>
+ =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0ret =3D nl_connect(netbuf_state-&gt;nlsock, NETLINK_ROUTE);<br>
+ =A0 =A0if (ret) {<br>
+ =A0 =A0 =A0 =A0LOG(ERROR, &quot;failed to open netlink socket: %s&quot;,<=
br>
+ =A0 =A0 =A0 =A0 =A0 =A0nl_geterror(ret));<br>
+ =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0/* get list of all qdiscs installed on network devs. */<br>
+ =A0 =A0ret =3D rtnl_qdisc_alloc_cache(netbuf_state-&gt;nlsock,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 &amp;netb=
uf_state-&gt;qdisc_cache);<br>
+ =A0 =A0if (ret) {<br>
+ =A0 =A0 =A0 =A0LOG(ERROR, &quot;failed to allocate qdisc cache: %s&quot;,=
<br>
+ =A0 =A0 =A0 =A0 =A0 =A0nl_geterror(ret));<br>
+ =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0/* list of handles to plug qdiscs */<br>
+ =A0 =A0GCNEW_ARRAY(netbuf_state-&gt;netbuf_qdisc_list, num_netbufs);<br>
+<br>
+ =A0 =A0for (i =3D 0; i &lt; num_netbufs; ++i) {<br>
+<br>
+ =A0 =A0 =A0 =A0/* get a handle to the IFB interface */<br>
+ =A0 =A0 =A0 =A0ifb =3D NULL;<br>
+ =A0 =A0 =A0 =A0ret =3D rtnl_link_get_kernel(netbuf_state-&gt;nlsock, 0,<b=
r>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 ifb_l=
ist[i], &amp;ifb);<br>
+ =A0 =A0 =A0 =A0if (ret) {<br>
+ =A0 =A0 =A0 =A0 =A0 =A0LOG(ERROR, &quot;cannot obtain handle for %s: %s&q=
uot;, ifb_list[i],<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0nl_geterror(ret));<br>
+ =A0 =A0 =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0 =A0 =A0}<br>
+<br>
+ =A0 =A0 =A0 =A0ifindex =3D rtnl_link_get_ifindex(ifb);<br>
+ =A0 =A0 =A0 =A0if (!ifindex) {<br>
+ =A0 =A0 =A0 =A0 =A0 =A0LOG(ERROR, &quot;interface %s has no index&quot;, =
ifb_list[i]);<br>
+ =A0 =A0 =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0 =A0 =A0}<br>
+<br>
+ =A0 =A0 =A0 =A0/* Get a reference to the root qdisc installed on the IFB,=
 by<br>
+ =A0 =A0 =A0 =A0 * querying the qdisc list we obtained earlier. The netbuf=
script<br>
+ =A0 =A0 =A0 =A0 * sets up the plug qdisc as the root qdisc, so we don&#39=
;t have to<br>
+ =A0 =A0 =A0 =A0 * search the entire qdisc tree on the IFB dev.<br>
+<br>
+ =A0 =A0 =A0 =A0 * There is no need to explicitly free this qdisc as its j=
ust a<br>
+ =A0 =A0 =A0 =A0 * reference from the qdisc cache we allocated earlier.<br=
>
+ =A0 =A0 =A0 =A0 */<br>
+ =A0 =A0 =A0 =A0qdisc =3D rtnl_qdisc_get_by_parent(netbuf_state-&gt;qdisc_=
cache, ifindex,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 TC_H_ROOT);<br>
+<br>
+ =A0 =A0 =A0 =A0if (qdisc) {<br>
+ =A0 =A0 =A0 =A0 =A0 =A0const char *tc_kind =3D rtnl_tc_get_kind(TC_CAST(q=
disc));<br>
+ =A0 =A0 =A0 =A0 =A0 =A0/* Sanity check: Ensure that the root qdisc is a p=
lug qdisc. */<br>
+ =A0 =A0 =A0 =A0 =A0 =A0if (!tc_kind || strcmp(tc_kind, &quot;plug&quot;))=
 {<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0nl_object_put((struct nl_object *)qdisc);<=
br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0LOG(ERROR, &quot;plug qdisc is not install=
ed on %s&quot;, ifb_list[i]);<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0 =A0 =A0 =A0 =A0}<br>
+ =A0 =A0 =A0 =A0 =A0 =A0netbuf_state-&gt;netbuf_qdisc_list[i] =3D qdisc;<b=
r>
+ =A0 =A0 =A0 =A0} else {<br>
+ =A0 =A0 =A0 =A0 =A0 =A0LOG(ERROR, &quot;Cannot get qdisc handle from ifb =
%s&quot;, ifb_list[i]);<br>
+ =A0 =A0 =A0 =A0 =A0 =A0goto out;<br>
+ =A0 =A0 =A0 =A0}<br>
+ =A0 =A0 =A0 =A0rtnl_link_put(ifb);<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0return 0;<br>
+<br>
+ out:<br>
+ =A0 =A0if (ifb)<br>
+ =A0 =A0 =A0 =A0rtnl_link_put(ifb);<br>
+ =A0 =A0free_qdiscs(netbuf_state);<br>
+ =A0 =A0return ERROR_FAIL;<br>
+}<br>
+<br>
+static void netbuf_setup_timeout_cb(libxl__egc *egc,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0li=
bxl__ev_time *ev,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0co=
nst struct timeval *requested_abs)<br>
+{<br>
+ =A0 =A0libxl__remus_state *remus_state =3D CONTAINER_OF(ev, *remus_state,=
 timeout);<br>
+<br>
+ =A0 =A0/* Convenience aliases */<br>
+ =A0 =A0const int devid =3D remus_state-&gt;dev_id;<br>
+ =A0 =A0libxl__remus_netbuf_state *const netbuf_state =3D remus_state-&gt;=
netbuf_state;<br>
+ =A0 =A0const char *const vif =3D netbuf_state-&gt;vif_list[devid];<br>
+<br>
+ =A0 =A0STATE_AO_GC(remus_state-&gt;dss-&gt;ao);<br>
+<br>
+ =A0 =A0libxl__ev_time_deregister(gc, &amp;remus_state-&gt;timeout);<br>
+ =A0 =A0assert(libxl__ev_child_inuse(&amp;remus_state-&gt;child));<br>
+<br>
+ =A0 =A0LOG(DEBUG, &quot;killing hotplug script %s (on vif %s) because of =
timeout&quot;,<br>
+ =A0 =A0 =A0 =A0remus_state-&gt;netbufscript, vif);<br>
+<br>
+ =A0 =A0if (kill(remus_state-&gt;child.pid, SIGKILL)) {<br>
+ =A0 =A0 =A0 =A0LOGEV(ERROR, errno, &quot;unable to kill hotplug script %s=
 [%ld]&quot;,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0remus_state-&gt;netbufscript,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0(unsigned long)remus_state-&gt;child.pid);<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0return;<br>
+}<br>
+<br>
+/* the script needs the following env &amp; args<br>
+ * $vifname<br>
+ * $XENBUS_PATH (/libxl/&lt;domid&gt;/remus/netbuf/&lt;devid&gt;/)<br>
+ * $IFB (for teardown)<br>
+ * setup/teardown as command line arg.<br>
+ * In return, the script writes the name of IFB device (during setup) to b=
e<br>
+ * used for output buffering into XENBUS_PATH/ifb<br>
+ */<br>
+static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_sta=
te,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0char *op, libx=
l__ev_child_callback *death)<br>
+{<br>
+ =A0 =A0int arraysize =3D 7, nr =3D 0;<br>
+ =A0 =A0char **env =3D NULL, **args =3D NULL;<br>
+ =A0 =A0pid_t pid;<br>
+<br>
+ =A0 =A0/* Convenience aliases */<br>
+ =A0 =A0libxl__ev_child *const child =3D &amp;remus_state-&gt;child;<br>
+ =A0 =A0libxl__ev_time *const timeout =3D &amp;remus_state-&gt;timeout;<br=
>
+ =A0 =A0char *const script =3D libxl__strdup(gc, remus_state-&gt;netbufscr=
ipt);<br>
+ =A0 =A0const uint32_t domid =3D remus_state-&gt;dss-&gt;domid;<br>
+ =A0 =A0const int devid =3D remus_state-&gt;dev_id;<br>
+ =A0 =A0libxl__remus_netbuf_state *const netbuf_state =3D remus_state-&gt;=
netbuf_state;<br>
+ =A0 =A0const char *const vif =3D netbuf_state-&gt;vif_list[devid];<br>
+ =A0 =A0const char *const ifb =3D netbuf_state-&gt;ifb_list[devid];<br>
+<br></blockquote><div><br></div><div>Please set arraysize to 7 here, inste=
ad of the beginning of the function.=A0</div><div>Its more readable that wa=
y.</div><div><br></div><blockquote class=3D"gmail_quote" style=3D"margin:0p=
x 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);bo=
rder-left-style:solid;padding-left:1ex">




+ =A0 =A0GCNEW_ARRAY(env, arraysize);<br>
+ =A0 =A0env[nr++] =3D &quot;vifname&quot;;<br>
+ =A0 =A0env[nr++] =3D libxl__strdup(gc, vif);<br>
+ =A0 =A0env[nr++] =3D &quot;XENBUS_PATH&quot;;<br>
+ =A0 =A0env[nr++] =3D GCSPRINTF(&quot;%s/remus/netbuf/%d&quot;,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0libxl__xs_libxl_path(g=
c, domid), devid);<br>
+ =A0 =A0if (!strcmp(op, &quot;teardown&quot;)) {<br>
+ =A0 =A0 =A0 =A0env[nr++] =3D &quot;IFB&quot;;<br>
+ =A0 =A0 =A0 =A0env[nr++] =3D libxl__strdup(gc, ifb);<br>
+ =A0 =A0}<br>
+ =A0 =A0env[nr++] =3D NULL;<br>
+ =A0 =A0assert(nr &lt;=3D arraysize);<br>
+<br>
+ =A0 =A0arraysize =3D 3; nr =3D 0;<br>
+ =A0 =A0GCNEW_ARRAY(args, arraysize);<br>
+ =A0 =A0args[nr++] =3D script;<br>
+ =A0 =A0args[nr++] =3D op;<br>
+ =A0 =A0args[nr++] =3D NULL;<br>
+ =A0 =A0assert(nr =3D=3D arraysize);<br>
+<br>
+ =A0 =A0/* Set hotplug timeout */<br>
+ =A0 =A0if (libxl__ev_time_register_rel(gc, timeout,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0ne=
tbuf_setup_timeout_cb,<br>
+ =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0LI=
BXL_HOTPLUG_TIMEOUT * 1000)) {<br>
+ =A0 =A0 =A0 =A0LOG(ERROR, &quot;unable to register timeout for &quot;<br>
+ =A0 =A0 =A0 =A0 =A0 =A0&quot;netbuf setup script %s on vif %s&quot;, scri=
pt, vif);<br>
+ =A0 =A0 =A0 =A0return ERROR_FAIL;<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0LOG(DEBUG, &quot;Calling netbuf script: %s %s on vif %s&quot;,<br>
+ =A0 =A0 =A0 =A0script, op, vif);<br>
+<br>
+ =A0 =A0/* Fork and exec netbuf script */<br>
+ =A0 =A0pid =3D libxl__ev_child_fork(gc, child, death);<br>
+ =A0 =A0if (pid =3D=3D -1) {<br>
+ =A0 =A0 =A0 =A0LOG(ERROR, &quot;unable to fork netbuf script %s&quot;, sc=
ript);<br>
+ =A0 =A0 =A0 =A0return ERROR_FAIL;<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0if (!pid) {<br>
+ =A0 =A0 =A0 =A0/* child: Launch netbuf script */<br>
+ =A0 =A0 =A0 =A0libxl__exec(gc, -1, -1, -1, args[0], args, env);<br>
+ =A0 =A0 =A0 =A0/* notreached */<br>
+ =A0 =A0 =A0 =A0abort();<br>
+ =A0 =A0}<br>
+<br>
+ =A0 =A0return 0;<br>
+}<br>
+<br><span><font color=3D"#888888"><br>
</font></span></blockquote></div><br></div></div>

--089e0122f016f7bc9a04f0e72682--


--===============0260953706865313514==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0260953706865313514==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:31:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YEr-00028S-4B; Sun, 26 Jan 2014 22:31:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YEp-00028M-SX
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:31:36 +0000
Received: from [85.158.137.68:56154] by server-9.bemta-3.messagelabs.com id
	8E/E4-13104-7CC85E25; Sun, 26 Jan 2014 22:31:35 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390775490!10610055!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1813 invoked from network); 26 Jan 2014 22:31:32 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:31:32 -0000
Received: from mail-ie0-f175.google.com (mail-ie0-f175.google.com
	[209.85.223.175]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMVT2T025544
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:31:30 -0800
Received: by mail-ie0-f175.google.com with SMTP id ar20so5038407iec.34
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:31:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=ilYD3vsHp0HCGNqkKzceewJdzcEjd5dlMHosI4CYkPM=;
	b=TwD0BIR2yFiiqpXNGj36asknYRh4e+eUkQego7wQHQmYy3B6Li7RC3Z8E9My8cQjef
	9ySlXUUrLIUX5dJkR0q7Oo+JjS2eZegqxEHlWuDuWZXEbqQpYZHUV+/jAtetggZ/ti3Q
	oidpRbjq3uVolVv39wpVnS1WJP/62bi6lP2fZJGc0A4iuzxBuJyxRtpdutnYpEJZ8Dk+
	2N3giSGShDF/6pEzCp2zz2vApUOMvl+mXzvqM0YYDKAmmvVmt8XNrUGL9w3nhJcL6lHJ
	02qL7LMe+bRGDUVcX7MlozhyPsAt5Kt760qwztSLxu2QXDjJYRGf00PoD+na6AzBn1hs
	rFZQ==
X-Received: by 10.50.45.33 with SMTP id j1mr14693376igm.32.1390775488721; Sun,
	26 Jan 2014 14:31:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:30:48 -0800 (PST)
In-Reply-To: <1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:30:48 -0800
Message-ID: <CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/13 V6] remus: implement the API to setup
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0260953706865313514=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0260953706865313514==
Content-Type: multipart/alternative; boundary=089e0122f016f7bc9a04f0e72682

--089e0122f016f7bc9a04f0e72682
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> The following steps are taken during setup:
>  a) call the hotplug script for each vif to setup its network buffer
>
>  b) establish a dedicated remus context containing libnl related
>     state (netlink sockets, qdisc caches, etc.,)
>
>  c) Obtain handles to plug qdiscs installed on the IFB devices
>     chosen by the hotplug scripts.
>
> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  docs/misc/xenstore-paths.markdown |   4 +
>  tools/libxl/Makefile              |   2 +
>  tools/libxl/libxl_dom.c           |   7 +-
>  tools/libxl/libxl_internal.h      |  11 +
>  tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++++++++++
>  tools/libxl/libxl_nonetbuffer.c   |   6 +
>  tools/libxl/libxl_remus.c         |  35 ++++
>  7 files changed, 479 insertions(+), 5 deletions(-)
>  create mode 100644 tools/libxl/libxl_remus.c
>
> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 70ab7f4..7a0d2c9 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
>
>  The device model version for a domain.
>
> +#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
> +
> +IFB device used by Remus to buffer network output from the associated vif.
> +
>  [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
>  [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
>  [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 84a467c..218f55e 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -52,6 +52,8 @@ else
>  LIBXL_OBJS-y += libxl_nonetbuffer.o
>  endif
>
> +LIBXL_OBJS-y += libxl_remus.o
> +
>  LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
>  LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
>
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 8d63f90..e3e9f6f 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
>
>  /*==================== Domain suspend (save) ====================*/
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                        libxl__domain_suspend_state *dss, int rc);
> -
>  /*----- complicated callback, called by xc_domain_save -----*/
>
>  /*
> @@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
>      dss->save_dm_callback(egc, dss, our_rc);
>  }
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                        libxl__domain_suspend_state *dss, int rc)
> +void domain_suspend_done(libxl__egc *egc,
> +                         libxl__domain_suspend_state *dss, int rc)
>  {
>      STATE_AO_GC(dss->ao);
>
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 2f64382..0430307 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
>
>  _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
>
> +_hidden void domain_suspend_done(libxl__egc *egc,
> +                                 libxl__domain_suspend_state *dss,
> +                                 int rc);
> +
> +_hidden void libxl__remus_setup_done(libxl__egc *egc,
> +                                     libxl__domain_suspend_state *dss,
> +                                     int rc);
> +
> +_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
> +                                       libxl__domain_suspend_state *dss);
> +
>  struct libxl__domain_suspend_state {
>      /* set by caller of libxl__domain_suspend */
>      libxl__ao *ao;
> diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
> index 8e23d75..0be876c 100644
> --- a/tools/libxl/libxl_netbuffer.c
> +++ b/tools/libxl/libxl_netbuffer.c
> @@ -17,11 +17,430 @@
>
>  #include "libxl_internal.h"
>
> +#include <netlink/cache.h>
> +#include <netlink/socket.h>
> +#include <netlink/attr.h>
> +#include <netlink/route/link.h>
> +#include <netlink/route/route.h>
> +#include <netlink/route/qdisc.h>
> +#include <netlink/route/qdisc/plug.h>
> +
> +typedef struct libxl__remus_netbuf_state {
> +    struct rtnl_qdisc **netbuf_qdisc_list;
> +    struct nl_sock *nlsock;
> +    struct nl_cache *qdisc_cache;
> +    const char **vif_list;
> +    const char **ifb_list;
> +    uint32_t num_netbufs;
> +    uint32_t unused;
> +} libxl__remus_netbuf_state;
> +
>  int libxl__netbuffer_enabled(libxl__gc *gc)
>  {
>      return 1;
>  }
>
> +/* If the device has a vifname, then use that instead of
> + * the vifX.Y format.
> + */
> +static const char *get_vifname(libxl__gc *gc, uint32_t domid,
> +                               libxl_device_nic *nic)
> +{
> +    const char *vifname = NULL;
> +    const char *path;
> +    int rc;
> +
> +    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
> +                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
> +    if (rc < 0) {
> +        /* use the default name */
> +        vifname = libxl__device_nic_devname(gc, domid,
> +                                            nic->devid,
> +                                            nic->nictype);
> +    }
> +
> +    return vifname;
>

IanJ's feedback last time was that the error code rc needs to be checked.
If it's a failure, then return NULL to the caller. If it's ENOENT, then
use the default name.
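The fix asked for above can be sketched roughly as below. Everything here is a stand-in: `xs_read_stub` and `default_devname` are hypothetical substitutes for `libxl__xs_read_checked()` and `libxl__device_nic_devname()`, and the `-ENOENT`/`ERROR_FAIL` return conventions are illustrative only; the point is the three-way outcome handling.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

#define ERROR_FAIL (-3)   /* stand-in for libxl's generic error code */

/* Hypothetical stub for libxl__xs_read_checked(): returns 0 and a value
 * when the key exists, -ENOENT when it is absent, ERROR_FAIL on a
 * genuine xenstore error. Keyed on the path string so the sketch is
 * self-contained. */
static int xs_read_stub(const char *path, const char **out)
{
    if (strcmp(path, "custom") == 0) { *out = "frontnet0"; return 0; }
    if (strcmp(path, "absent") == 0) return -ENOENT;
    return ERROR_FAIL;
}

/* Stand-in for libxl__device_nic_devname(): the default vifX.Y name. */
static const char *default_devname(void) { return "vif1.0"; }

/* Sketch of the revised get_vifname: only a missing key falls back to
 * the default name; any other failure is propagated as NULL so the
 * caller can abort setup. */
static const char *get_vifname_sketch(const char *path)
{
    const char *vifname = NULL;
    int rc = xs_read_stub(path, &vifname);

    if (rc == -ENOENT)
        return default_devname();  /* no vifname key: use vifX.Y */
    if (rc < 0)
        return NULL;               /* real error: report to caller */
    return vifname;                /* custom name from xenstore */
}
```

With this shape, callers such as get_guest_vif_list can distinguish "use the default" from "setup must fail" by checking for NULL.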

> +}
> +
> +static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
> +                                       int *num_vifs)
> +{
> +    libxl_device_nic *nics = NULL;
> +    int nb, i = 0;
> +    const char **vif_list = NULL;
> +
> +    *num_vifs = 0;
> +    nics = libxl_device_nic_list(CTX, domid, &nb);
> +    if (!nics)
> +        return NULL;
> +
> +    /* Ensure that none of the vifs are backed by driver domains */
> +    for (i = 0; i < nb; i++) {
> +        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
> +            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
> +                "Network buffering is not supported with driver domains",
> +                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);
>

And if the previous feedback were to be incorporated, then get_vifname's
return value should be assigned to a variable and checked before printing
or using it for other purposes.


> +            *num_vifs = -1;
> +            goto out;
> +        }
> +    }
> +
> +    GCNEW_ARRAY(vif_list, nb);
> +    for (i = 0; i < nb; ++i) {
> +        vif_list[i] = get_vifname(gc, domid, &nics[i]);
> +        if (!vif_list[i]) {
> +            vif_list = NULL;
> +            goto out;
> +        }
> +    }
> +    *num_vifs = nb;
> +
> + out:
> +    for (i = 0; i < nb; i++)
> +        libxl_device_nic_dispose(&nics[i]);
> +    free(nics);
> +    return vif_list;
> +}
> +
> +static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
> +{
> +    int i;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* free qdiscs */
> +    for (i = 0; i < netbuf_state->num_netbufs; i++) {
> +        qdisc = netbuf_state->netbuf_qdisc_list[i];
> +        if (!qdisc)
> +            break;
> +
> +        nl_object_put((struct nl_object *)qdisc);
> +    }
> +
> +    /* free qdisc cache */
> +    nl_cache_clear(netbuf_state->qdisc_cache);
> +    nl_cache_free(netbuf_state->qdisc_cache);
> +
> +    /* close nlsock */
> +    nl_close(netbuf_state->nlsock);
> +
> +    /* free nlsock */
> +    nl_socket_free(netbuf_state->nlsock);
> +}
> +
>

This code (free_qdiscs) is new. Have you tested it?
While the control flow looks pretty sane, libnl has been evolving
a bit ever since the 3.* series.

If init_qdiscs() fails, it calls free_qdiscs(). If any other setup stage after
network buffering fails, it would invoke the teardown code, which also
calls free_qdiscs(). Freeing the same handles twice may result in a double
free and a segfault.

I suggest adding a simple check to see if nlsock/qdisc_cache are NULL
before attempting to execute the rest of the function. And after you free
the qdisc_cache & nlsock, set them to NULL.
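The NULL-guard idea might look roughly like this. The types are simplified stand-ins for the libnl handles, and `free()` replaces the real `nl_cache_clear`/`nl_cache_free` and `nl_close`/`nl_socket_free` calls, so this only demonstrates the make-teardown-idempotent pattern, not the actual libnl API.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the libnl handles held in
 * libxl__remus_netbuf_state (hypothetical types, for illustration). */
struct nl_sock_s  { int dummy; };
struct nl_cache_s { int dummy; };

struct netbuf_state_sketch {
    struct nl_sock_s  *nlsock;
    struct nl_cache_s *qdisc_cache;
};

/* Teardown that is safe to call twice: bail out early if nothing was
 * set up, and NULL the pointers after freeing so a second call is a
 * no-op instead of a double free. */
static void free_qdiscs_sketch(struct netbuf_state_sketch *ns)
{
    if (!ns->nlsock && !ns->qdisc_cache)
        return;                 /* already torn down, or never set up */

    if (ns->qdisc_cache) {
        free(ns->qdisc_cache);  /* real code: nl_cache_clear + nl_cache_free */
        ns->qdisc_cache = NULL;
    }
    if (ns->nlsock) {
        free(ns->nlsock);       /* real code: nl_close + nl_socket_free */
        ns->nlsock = NULL;
    }
}
```

Calling the sketch a second time after teardown then simply returns, which is the behaviour the failure paths above need.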


> +static int init_qdiscs(libxl__gc *gc,
> +                       libxl__remus_state *remus_state)
> +{
> +    int i, ret, ifindex;
> +    struct rtnl_link *ifb = NULL;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* Convenience aliases */
> +    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
> +    const int num_netbufs = netbuf_state->num_netbufs;
> +    const char ** const ifb_list = netbuf_state->ifb_list;
> +
> +    /* Now that we have brought up IFB devices with plug qdisc for
> +     * each vif, lets get a netlink handle on the plug qdisc for use
> +     * during checkpointing.
> +     */
> +    netbuf_state->nlsock = nl_socket_alloc();
> +    if (!netbuf_state->nlsock) {
> +        LOG(ERROR, "cannot allocate nl socket");
> +        goto out;
> +    }
> +
> +    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
> +    if (ret) {
> +        LOG(ERROR, "failed to open netlink socket: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* get list of all qdiscs installed on network devs. */
> +    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
> +                                 &netbuf_state->qdisc_cache);
> +    if (ret) {
> +        LOG(ERROR, "failed to allocate qdisc cache: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* list of handles to plug qdiscs */
> +    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
> +
> +    for (i = 0; i < num_netbufs; ++i) {
> +
> +        /* get a handle to the IFB interface */
> +        ifb = NULL;
> +        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
> +                                   ifb_list[i], &ifb);
> +        if (ret) {
> +            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
> +                nl_geterror(ret));
> +            goto out;
> +        }
> +
> +        ifindex = rtnl_link_get_ifindex(ifb);
> +        if (!ifindex) {
> +            LOG(ERROR, "interface %s has no index", ifb_list[i]);
> +            goto out;
> +        }
> +
> +        /* Get a reference to the root qdisc installed on the IFB, by
> +         * querying the qdisc list we obtained earlier. The netbufscript
> +         * sets up the plug qdisc as the root qdisc, so we don't have to
> +         * search the entire qdisc tree on the IFB dev.
> +
> +         * There is no need to explicitly free this qdisc as its just a
> +         * reference from the qdisc cache we allocated earlier.
> +         */
> +        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
> +                                         TC_H_ROOT);
> +
> +        if (qdisc) {
> +            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
> +            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
> +            if (!tc_kind || strcmp(tc_kind, "plug")) {
> +                nl_object_put((struct nl_object *)qdisc);
> +                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
> +                goto out;
> +            }
> +            netbuf_state->netbuf_qdisc_list[i] = qdisc;
> +        } else {
> +            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
> +            goto out;
> +        }
> +        rtnl_link_put(ifb);
> +    }
> +
> +    return 0;
> +
> + out:
> +    if (ifb)
> +        rtnl_link_put(ifb);
> +    free_qdiscs(netbuf_state);
> +    return ERROR_FAIL;
> +}
> +
> +static void netbuf_setup_timeout_cb(libxl__egc *egc,
> +                                    libxl__ev_time *ev,
> +                                    const struct timeval *requested_abs)
> +{
> +    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
> +
> +    /* Convenience aliases */
> +    const int devid = remus_state->dev_id;
> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
> +    const char *const vif = netbuf_state->vif_list[devid];
> +
> +    STATE_AO_GC(remus_state->dss->ao);
> +
> +    libxl__ev_time_deregister(gc, &remus_state->timeout);
> +    assert(libxl__ev_child_inuse(&remus_state->child));
> +
> +    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
> +        remus_state->netbufscript, vif);
> +
> +    if (kill(remus_state->child.pid, SIGKILL)) {
> +        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
> +              remus_state->netbufscript,
> +              (unsigned long)remus_state->child.pid);
> +    }
> +
> +    return;
> +}
> +
> +/* the script needs the following env & args
> + * $vifname
> + * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
> + * $IFB (for teardown)
> + * setup/teardown as command line arg.
> + * In return, the script writes the name of IFB device (during setup) to be
> + * used for output buffering into XENBUS_PATH/ifb
> + */
> +static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
> +                              char *op, libxl__ev_child_callback *death)
> +{
> +    int arraysize = 7, nr = 0;
> +    char **env = NULL, **args = NULL;
> +    pid_t pid;
> +
> +    /* Convenience aliases */
> +    libxl__ev_child *const child = &remus_state->child;
> +    libxl__ev_time *const timeout = &remus_state->timeout;
> +    char *const script = libxl__strdup(gc, remus_state->netbufscript);
> +    const uint32_t domid = remus_state->dss->domid;
> +    const int devid = remus_state->dev_id;
> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
> +    const char *const vif = netbuf_state->vif_list[devid];
> +    const char *const ifb = netbuf_state->ifb_list[devid];
> +
>

Please set arraysize to 7 here, instead of at the beginning of the function.
It's more readable that way.

> +    GCNEW_ARRAY(env, arraysize);
> +    env[nr++] = "vifname";
> +    env[nr++] = libxl__strdup(gc, vif);
> +    env[nr++] = "XENBUS_PATH";
> +    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
> +                          libxl__xs_libxl_path(gc, domid), devid);
> +    if (!strcmp(op, "teardown")) {
> +        env[nr++] = "IFB";
> +        env[nr++] = libxl__strdup(gc, ifb);
> +    }
> +    env[nr++] = NULL;
> +    assert(nr <= arraysize);
> +
> +    arraysize = 3; nr = 0;
> +    GCNEW_ARRAY(args, arraysize);
> +    args[nr++] = script;
> +    args[nr++] = op;
> +    args[nr++] = NULL;
> +    assert(nr == arraysize);
> +
> +    /* Set hotplug timeout */
> +    if (libxl__ev_time_register_rel(gc, timeout,
> +                                    netbuf_setup_timeout_cb,
> +                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
> +        LOG(ERROR, "unable to register timeout for "
> +            "netbuf setup script %s on vif %s", script, vif);
> +        return ERROR_FAIL;
> +    }
> +
> +    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
> +        script, op, vif);
> +
> +    /* Fork and exec netbuf script */
> +    pid = libxl__ev_child_fork(gc, child, death);
> +    if (pid == -1) {
> +        LOG(ERROR, "unable to fork netbuf script %s", script);
> +        return ERROR_FAIL;
> +    }
> +
> +    if (!pid) {
> +        /* child: Launch netbuf script */
> +        libxl__exec(gc, -1, -1, -1, args[0], args, env);
> +        /* notreached */
> +        abort();
> +    }
> +
> +    return 0;
> +}
> +
>
>

--089e0122f016f7bc9a04f0e72682
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> The following steps are taken during setup:
>  a) call the hotplug script for each vif to setup its network buffer
>
>  b) establish a dedicated remus context containing libnl related
>     state (netlink sockets, qdisc caches, etc.,)
>
>  c) Obtain handles to plug qdiscs installed on the IFB devices
>     chosen by the hotplug scripts.
>
> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
>  docs/misc/xenstore-paths.markdown |   4 +
>  tools/libxl/Makefile              |   2 +
>  tools/libxl/libxl_dom.c           |   7 +-
>  tools/libxl/libxl_internal.h      |  11 +
>  tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++++++
>  tools/libxl/libxl_nonetbuffer.c   |   6 +
>  tools/libxl/libxl_remus.c         |  35 ++++
>  7 files changed, 479 insertions(+), 5 deletions(-)
>  create mode 100644 tools/libxl/libxl_remus.c
>
> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 70ab7f4..7a0d2c9 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
>
>  The device model version for a domain.
>
> +#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
> +
> +IFB device used by Remus to buffer network output from the associated vif.
> +
>  [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
>  [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
>  [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 84a467c..218f55e 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -52,6 +52,8 @@ else
>  LIBXL_OBJS-y += libxl_nonetbuffer.o
>  endif
>
> +LIBXL_OBJS-y += libxl_remus.o
> +
>  LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
>  LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
>
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 8d63f90..e3e9f6f 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
>
>  /*==================== Domain suspend (save) ====================*/
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                            libxl__domain_suspend_state *dss, int rc);
> -
>  /*----- complicated callback, called by xc_domain_save -----*/
>
>  /*
> @@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
>      dss->save_dm_callback(egc, dss, our_rc);
>  }
>
> -static void domain_suspend_done(libxl__egc *egc,
> -                            libxl__domain_suspend_state *dss, int rc)
> +void domain_suspend_done(libxl__egc *egc,
> +                         libxl__domain_suspend_state *dss, int rc)
>  {
>      STATE_AO_GC(dss->ao);
>
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 2f64382..0430307 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
>
>  _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
>
> +_hidden void domain_suspend_done(libxl__egc *egc,
> +                                 libxl__domain_suspend_state *dss,
> +                                 int rc);
> +
> +_hidden void libxl__remus_setup_done(libxl__egc *egc,
> +                                     libxl__domain_suspend_state *dss,
> +                                     int rc);
> +
> +_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
> +                                       libxl__domain_suspend_state *dss);
> +
>  struct libxl__domain_suspend_state {
>      /* set by caller of libxl__domain_suspend */
>      libxl__ao *ao;
> diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
> index 8e23d75..0be876c 100644
> --- a/tools/libxl/libxl_netbuffer.c
> +++ b/tools/libxl/libxl_netbuffer.c
> @@ -17,11 +17,430 @@
>
>  #include "libxl_internal.h"
>
> +#include <netlink/cache.h>
> +#include <netlink/socket.h>
> +#include <netlink/attr.h>
> +#include <netlink/route/link.h>
> +#include <netlink/route/route.h>
> +#include <netlink/route/qdisc.h>
> +#include <netlink/route/qdisc/plug.h>
> +
> +typedef struct libxl__remus_netbuf_state {
> +    struct rtnl_qdisc **netbuf_qdisc_list;
> +    struct nl_sock *nlsock;
> +    struct nl_cache *qdisc_cache;
> +    const char **vif_list;
> +    const char **ifb_list;
> +    uint32_t num_netbufs;
> +    uint32_t unused;
> +} libxl__remus_netbuf_state;
> +
>  int libxl__netbuffer_enabled(libxl__gc *gc)
>  {
>      return 1;
>  }
>
> +/* If the device has a vifname, then use that instead of
> + * the vifX.Y format.
> + */
> +static const char *get_vifname(libxl__gc *gc, uint32_t domid,
> +                               libxl_device_nic *nic)
> +{
> +    const char *vifname = NULL;
> +    const char *path;
> +    int rc;
> +
> +    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
> +                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
> +    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
> +    if (rc < 0) {
> +        /* use the default name */
> +        vifname = libxl__device_nic_devname(gc, domid,
> +                                            nic->devid,
> +                                            nic->nictype);
> +    }
> +
> +    return vifname;

IanJ's feedback last time was that the error code rc needs to be checked.
If it's a failure, then return NULL to the caller. If it's ENOENT, then use
the default name.

> +}
> +
> +static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
> +                                       int *num_vifs)
> +{
> +    libxl_device_nic *nics = NULL;
> +    int nb, i = 0;
> +    const char **vif_list = NULL;
> +
> +    *num_vifs = 0;
> +    nics = libxl_device_nic_list(CTX, domid, &nb);
> +    if (!nics)
> +        return NULL;
> +
> +    /* Ensure that none of the vifs are backed by driver domains */
> +    for (i = 0; i < nb; i++) {
> +        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
> +            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
> +                "Network buffering is not supported with driver domains",
> +                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);

And if the previous feedback were to be incorporated, then get_vifname's
return value should be assigned to a variable and checked before printing
or using it for other purposes.

> +            *num_vifs = -1;
> +            goto out;
> +        }
> +    }
> +
> +    GCNEW_ARRAY(vif_list, nb);
> +    for (i = 0; i < nb; ++i) {
> +        vif_list[i] = get_vifname(gc, domid, &nics[i]);
> +        if (!vif_list[i]) {
> +            vif_list = NULL;
> +            goto out;
> +        }
> +    }
> +    *num_vifs = nb;
> +
> + out:
> +    for (i = 0; i < nb; i++)
> +        libxl_device_nic_dispose(&nics[i]);
> +    free(nics);
> +    return vif_list;
> +}
> +
> +static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
> +{
> +    int i;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* free qdiscs */
> +    for (i = 0; i < netbuf_state->num_netbufs; i++) {
> +        qdisc = netbuf_state->netbuf_qdisc_list[i];
> +        if (!qdisc)
> +            break;
> +
> +        nl_object_put((struct nl_object *)qdisc);
> +    }
> +
> +    /* free qdisc cache */
> +    nl_cache_clear(netbuf_state->qdisc_cache);
> +    nl_cache_free(netbuf_state->qdisc_cache);
> +
> +    /* close nlsock */
> +    nl_close(netbuf_state->nlsock);
> +
> +    /* free nlsock */
> +    nl_socket_free(netbuf_state->nlsock);
> +}
> +

This code (free_qdiscs) is new. Have you tested it? While the control flow
looks pretty sane, libnl has been evolving a bit ever since the 3.* series.

If init_qdiscs fails, it calls free_qdiscs(). If any other setup stage after
network buffering fails, it would invoke the teardown code, which also
calls free_qdiscs(). This may end up in a segfault.

I suggest adding a simple check to see if nlsock/qdisc_cache are NULL
before attempting to execute the rest of the function. And after you free
the qdisc_cache & nlsock, set them to NULL.

> +static int init_qdiscs(libxl__gc *gc,
> +                       libxl__remus_state *remus_state)
> +{
> +    int i, ret, ifindex;
> +    struct rtnl_link *ifb = NULL;
> +    struct rtnl_qdisc *qdisc = NULL;
> +
> +    /* Convenience aliases */
> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
> +    const int num_netbufs = netbuf_state->num_netbufs;
> +    const char **const ifb_list = netbuf_state->ifb_list;
> +
> +    /* Now that we have brought up IFB devices with plug qdisc for
> +     * each vif, lets get a netlink handle on the plug qdisc for use
> +     * during checkpointing.
> +     */
> +    netbuf_state->nlsock = nl_socket_alloc();
> +    if (!netbuf_state->nlsock) {
> +        LOG(ERROR, "cannot allocate nl socket");
> +        goto out;
> +    }
> +
> +    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
> +    if (ret) {
> +        LOG(ERROR, "failed to open netlink socket: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* get list of all qdiscs installed on network devs. */
> +    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
> +                                 &netbuf_state->qdisc_cache);
> +    if (ret) {
> +        LOG(ERROR, "failed to allocate qdisc cache: %s",
> +            nl_geterror(ret));
> +        goto out;
> +    }
> +
> +    /* list of handles to plug qdiscs */
> +    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
> +
> +    for (i = 0; i < num_netbufs; ++i) {
> +
> +        /* get a handle to the IFB interface */
> +        ifb = NULL;
> +        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
> +                                   ifb_list[i], &ifb);
> +        if (ret) {
> +            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
> +                nl_geterror(ret));
> +            goto out;
> +        }
> +
> +        ifindex = rtnl_link_get_ifindex(ifb);
> +        if (!ifindex) {
> +            LOG(ERROR, "interface %s has no index", ifb_list[i]);
> +            goto out;
> +        }
> +
> +        /* Get a reference to the root qdisc installed on the IFB, by
> +         * querying the qdisc list we obtained earlier. The netbuf script
> +         * sets up the plug qdisc as the root qdisc, so we don't have to
> +         * search the entire qdisc tree on the IFB dev.
> +         *
> +         * There is no need to explicitly free this qdisc as its just a
> +         * reference from the qdisc cache we allocated earlier.
> +         */
> +        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
> +                                         TC_H_ROOT);
> +
> +        if (qdisc) {
> +            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
> +            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
> +            if (!tc_kind || strcmp(tc_kind, "plug")) {
> +                nl_object_put((struct nl_object *)qdisc);
> +                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
> +                goto out;
> +            }
> +            netbuf_state->netbuf_qdisc_list[i] = qdisc;
> +        } else {
> +            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
> +            goto out;
> +        }
> +        rtnl_link_put(ifb);
> +    }
> +
> +    return 0;
> +
> + out:
> +    if (ifb)
> +        rtnl_link_put(ifb);
> +    free_qdiscs(netbuf_state);
> +    return ERROR_FAIL;
> +}
> +

--089e0122f016f7bc9a04f0e72682--


--===============0260953706865313514==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0260953706865313514==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:32:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YFw-0002GH-RK; Sun, 26 Jan 2014 22:32:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YFu-0002G3-P3
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:32:43 +0000
Received: from [85.158.137.68:57258] by server-8.bemta-3.messagelabs.com id
	CD/E9-31081-A0D85E25; Sun, 26 Jan 2014 22:32:42 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390775559!11385500!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4156 invoked from network); 26 Jan 2014 22:32:41 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:32:41 -0000
Received: from mail-ig0-f182.google.com (mail-ig0-f182.google.com
	[209.85.213.182]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMWcl1025711
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:32:39 -0800
Received: by mail-ig0-f182.google.com with SMTP id uy17so7413446igb.3
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:32:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=x045BAEWWp98mQy+DIcIDzdYXCQpGWMQ0DjAGsQXhiU=;
	b=L0P/P0zxmqc1eOjvApM82iqkkWWzPZ56HddZq8JrqlIPV4/Hkwlkh9GdCmKDdY+f78
	Pv3CSgHFfYVZlrU/RBRD3KUkRy7UhIxOsxoOUYp+DX7c5v0ZCUsfn7H4vtubFw9pn14g
	w+Xy/8NHdJ28sOroTtC+9WKTY8JykszsVgtWcF0mlMl0B3MUX5hkpx4c3ZTjoA/NFVsV
	koxS8Wx0Gnbcn41yyYBJiwnJVS/T/d6yBQjsdyd0Y/s9/7BIBfjd0ZVfiRaQiHym8BXr
	x4nhwLkilP1bbicK97F4o8bzNEx28J4zFhOyY0b8ODexhaMt1sh+r10oHpHDELknKqwT
	g6OA==
X-Received: by 10.50.45.33 with SMTP id j1mr14696056igm.32.1390775557530; Sun,
	26 Jan 2014 14:32:37 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:31:57 -0800 (PST)
In-Reply-To: <1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:31:57 -0800
Message-ID: <CAP8mzPNEiuL9NZdRNjUhuMyGweZJExRE1KdAzU5ZkRnQXodbrg@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 13/13 V6] tools/libxl: network buffering
	cmdline switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7657454765089218757=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7657454765089218757==
Content-Type: multipart/alternative; boundary=089e0122f01611abd104f0e72bc4

--089e0122f01611abd104f0e72bc4
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> Command line switch to 'xl remus' command, to enable network buffering.
> Pass on this flag to libxl so that it can act accordingly.
> Also update man pages to reflect the addition of a new option to
> 'xl remus' command.
>
> Note: the network buffering is enabled as default. If you want to
> disable it, please use -n option.
>
>

> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..f05e07b 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
>        "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
>        "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
>        "-u                      Disable memory checkpoint compression.\n"
> +      "-n                      Enable network output buffering.\n"
>

Since network buffering is enabled by default, this should be "Disable
network output..".

--089e0122f01611abd104f0e72bc4--


--===============7657454765089218757==--


From xen-devel-bounces@lists.xen.org Sun Jan 26 22:32:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jan 2014 22:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7YFw-0002GH-RK; Sun, 26 Jan 2014 22:32:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7YFu-0002G3-P3
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 22:32:43 +0000
Received: from [85.158.137.68:57258] by server-8.bemta-3.messagelabs.com id
	CD/E9-31081-A0D85E25; Sun, 26 Jan 2014 22:32:42 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390775559!11385500!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4156 invoked from network); 26 Jan 2014 22:32:41 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Jan 2014 22:32:41 -0000
Received: from mail-ig0-f182.google.com (mail-ig0-f182.google.com
	[209.85.213.182]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0QMWcl1025711
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:32:39 -0800
Received: by mail-ig0-f182.google.com with SMTP id uy17so7413446igb.3
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 14:32:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=x045BAEWWp98mQy+DIcIDzdYXCQpGWMQ0DjAGsQXhiU=;
	b=L0P/P0zxmqc1eOjvApM82iqkkWWzPZ56HddZq8JrqlIPV4/Hkwlkh9GdCmKDdY+f78
	Pv3CSgHFfYVZlrU/RBRD3KUkRy7UhIxOsxoOUYp+DX7c5v0ZCUsfn7H4vtubFw9pn14g
	w+Xy/8NHdJ28sOroTtC+9WKTY8JykszsVgtWcF0mlMl0B3MUX5hkpx4c3ZTjoA/NFVsV
	koxS8Wx0Gnbcn41yyYBJiwnJVS/T/d6yBQjsdyd0Y/s9/7BIBfjd0ZVfiRaQiHym8BXr
	x4nhwLkilP1bbicK97F4o8bzNEx28J4zFhOyY0b8ODexhaMt1sh+r10oHpHDELknKqwT
	g6OA==
X-Received: by 10.50.45.33 with SMTP id j1mr14696056igm.32.1390775557530; Sun,
	26 Jan 2014 14:32:37 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Sun, 26 Jan 2014 14:31:57 -0800 (PST)
In-Reply-To: <1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Sun, 26 Jan 2014 14:31:57 -0800
Message-ID: <CAP8mzPNEiuL9NZdRNjUhuMyGweZJExRE1KdAzU5ZkRnQXodbrg@mail.gmail.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 13/13 V6] tools/libxl: network buffering
	cmdline switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7657454765089218757=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7657454765089218757==
Content-Type: multipart/alternative; boundary=089e0122f01611abd104f0e72bc4

--089e0122f01611abd104f0e72bc4
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>
> Add a command line switch to the 'xl remus' command to enable network
> buffering. Pass this flag on to libxl so that it can act accordingly.
> Also update the man pages to reflect the addition of a new option to
> the 'xl remus' command.
>
> Note: network buffering is enabled by default. To disable it, use the
> -n option.
>
>

> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..f05e07b 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
>        "-i MS                   Checkpoint domain memory every MS
> milliseconds (def. 200ms).\n"
>        "-b                      Replicate memory checkpoints to /dev/null
> (blackhole)\n"
>        "-u                      Disable memory checkpoint compression.\n"
> +      "-n                      Enable network output buffering.\n"
>

Since network buffering is enabled by default, this should be "Disable
network output...".

--089e0122f01611abd104f0e72bc4--


--===============7657454765089218757==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7657454765089218757==--


From xen-devel-bounces@lists.xen.org Mon Jan 27 00:10:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 00:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7Zlc-0005En-CV; Mon, 27 Jan 2014 00:09:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7Zla-0005Ei-IY
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 00:09:30 +0000
Received: from [193.109.254.147:8815] by server-6.bemta-14.messagelabs.com id
	C1/CA-14958-9B3A5E25; Mon, 27 Jan 2014 00:09:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390781367!13242375!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16545 invoked from network); 27 Jan 2014 00:09:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 00:09:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,725,1384300800"; d="scan'208";a="96667192"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 00:09:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 19:09:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7ZlV-0007YU-Qk;
	Mon, 27 Jan 2014 00:09:25 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7ZlV-0006BD-PB;
	Mon, 27 Jan 2014 00:09:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24537-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 00:09:25 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24537: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24537 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24537/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896
baseline version:
 xen                  684b40eb41c3d5eba55ad94b36fa3702c7720fe1

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=fa1bde94493ee9fc66ce6f33ed434a9d7133c896
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing fa1bde94493ee9fc66ce6f33ed434a9d7133c896
+ branch=xen-4.1-testing
+ revision=fa1bde94493ee9fc66ce6f33ed434a9d7133c896
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.1-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.1-testing
+ xenversion=xen-4.1
+ xenversion=4.1
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git fa1bde94493ee9fc66ce6f33ed434a9d7133c896:stable-4.1
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   684b40e..fa1bde9  fa1bde94493ee9fc66ce6f33ed434a9d7133c896 -> stable-4.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Jan 27 01:06:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 01:06:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7aeZ-00021j-2X; Mon, 27 Jan 2014 01:06:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1W7aeX-00021a-FK
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 01:06:17 +0000
Received: from [85.158.137.68:34785] by server-12.bemta-3.messagelabs.com id
	FE/7F-20055-801B5E25; Mon, 27 Jan 2014 01:06:16 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390784775!11450649!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2853 invoked from network); 27 Jan 2014 01:06:15 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 01:06:15 -0000
Received: from smtphost1.dur.ac.uk (smtphost1.dur.ac.uk [129.234.252.1])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0R161tb027284
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 01:06:05 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0R15wYl022385
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 01:05:58 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s0R15wOD001302
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 01:05:58 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s0R15wET001297
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 01:05:58 GMT
Date: Mon, 27 Jan 2014 01:05:57 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: xen-devel@lists.xen.org
Message-ID: <alpine.DEB.2.00.1401270013590.6358@procyon.dur.ac.uk>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s0R161tb027284
Subject: [Xen-devel] Error after running pygrub with xen 4.4-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I get the following error after running pygrub when launching a guest with 
xen 4.4-rc2.

xenconsole: Could not open tty `': No such file or directory
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: console 
child [0] exited with error status 2

After this the boot continues normally, but once the console exits the tty
settings are wrong and key presses aren't echoed. I believe the error
above comes from a xenconsole session launched at the same time as
pygrub, which exits because the guest generally hasn't started yet. A
second run of xenconsole serves the console output after that. I have
occasionally noticed, with this and earlier Xen versions, that I have to
press ^] after pygrub finishes before the boot continues, so I would
guess that in some cases the first xenconsole run lasts long enough to
get a console connection, which has to be ended before the boot
proceeds. Hence I suspect the behaviour hasn't changed in this version
and only the error message is new.

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 01:15:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 01:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7and-0002Ne-BI; Mon, 27 Jan 2014 01:15:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1W7anc-0002NZ-A7
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 01:15:40 +0000
Received: from [85.158.139.211:18261] by server-12.bemta-5.messagelabs.com id
	76/CE-30017-B33B5E25; Mon, 27 Jan 2014 01:15:39 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390785335!12052393!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.8 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8313 invoked from network); 27 Jan 2014 01:15:37 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 01:15:37 -0000
Received: by mail-pd0-f178.google.com with SMTP id y13so5102469pdi.37
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 17:15:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:references:mime-version:message-id
	:content-type; bh=YfTCSq31BEY3ba9FZjitDBV1rHf+OYKp3ZSm8ZihX6M=;
	b=X5ETGlM7sY1EI63U5Y0Ax/D0BBsaj+FGceKePPTpjtKXC4cu/XNcazL6uD30YfizPv
	qZ/Kq7g2/JDlCLf7HXzO2HGLDRqG11xqAGLDkOe/mZS6fklugR2ps/gn76lWJF3U+hWX
	QMH1oNPD9n+BKAvF7T1dXfnDxeZF2FgDWD84H+tAXvxX7dMwtL8E3ZdiWAxFbk9cJFtg
	y6TuLHFKg7BIoESBm4WVliIn7+HPiFk2HKd+cPdT2SfW8CLU/NBfePPlgfJw39nvplKP
	71VWaXGQxeW2bBdYcn6sdfTWtzF62FR2m8W12ws7cMcQIOKsQgHCdKwddC0dF2nh9wrt
	0q7Q==
X-Received: by 10.68.171.99 with SMTP id at3mr27350364pbc.109.1390785335021;
	Sun, 26 Jan 2014 17:15:35 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id
	yd4sm26276515pbc.13.2014.01.26.17.15.32 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 26 Jan 2014 17:15:34 -0800 (PST)
Date: Mon, 27 Jan 2014 09:15:30 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <52DDA807.2050703@terremark.com>, 
	<c6880ec1-95a8-4c1d-8b10-75534b33f170@email.android.com>, 
	<52E191BB.7040904@terremark.com>, 
	<20140124145840.GE12946@phenom.dumpdata.com>, 
	<52E29827.7000001@terremark.com>, <52E2B9AB.6090103@terremark.com>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <201401270915284579998@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TESTDAY] Test report
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7713829676248012366=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============7713829676248012366==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart016722406740_=----"

This is a multi-part message in MIME format.

------=_001_NextPart016722406740_=----
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: base64

VGhpcyBhIGJvcmluZyBCVUcgd2hpY2ggaGFzIGEgIGxvbmcgaGlzdG9yeSwgIGZyb20geGVuIDMu
KiB+IDQuNHJjMiwgd2hpY2ggaXMgYWx3YXlzIG9jY3VycmVkIHdoZW4gZ3Vlc3Qgb3MgaXMgY2Vu
dG9zIDYrLCAgIGFzIGkga25vdy4NCkJ1dCBpdCBpcyB2ZXJ5IHN0cmFuZ2UgdGhhdCBvbmx5IGEg
ZmV3IGd1eXMgbWVudGlvbiAgaXQgYW5kIGNhcmUgaXQuDQpwbGVhc2UgcmVhZCBmb2xsb3dpbmcg
bGlua3M6DQpodHRwOi8vbGlzdHMueGVuLm9yZy9hcmNoaXZlcy9odG1sL3hlbi1kZXZlbC8yMDEz
LTA4L21zZzAxMzE1Lmh0bWwgDQpodHRwOi8veGVuLjEwNDU3MTIubjUubmFiYmxlLmNvbS94bS1z
YXZlLWNoZWNrcG9pbnQtcHJvYmxlbS10ZDI1MTk3ODkuaHRtbCANCg0KDQogIA0KDQoNCg0KDQpo
ZXJiZXJ0IGNsYW5kDQoNCkZyb206IERvbiBTbHV0eg0KRGF0ZTogMjAxNC0wMS0yNSAwMzowNg0K
VG86IEtvbnJhZCBSemVzenV0ZWsgV2lsazsgRG9uIFNsdXR6DQpDQzogeGVuLWRldmVsQGxpc3Rz
Lnhlbi5vcmcNClN1YmplY3Q6IFJlOiBbWGVuLWRldmVsXSBbVEVTVERBWV0gVGVzdCByZXBvcnQN
Ck9uIDAxLzI0LzE0IDExOjQzLCBEb24gU2x1dHogd3JvdGU6DQoNCk9uIDAxLzI0LzE0IDA5OjU4
LCBLb25yYWQgUnplc3p1dGVrIFdpbGsgd3JvdGU6DQoNCk9uIFRodSwgSmFuIDIzLCAyMDE0IGF0
IDA1OjAzOjM5UE0gLTA1MDAsIERvbiBTbHV0eiB3cm90ZToNCg0KT24gMDEvMjAvMTQgMTg6MzUs
IEtvbnJhZCBSemVzenV0ZWsgV2lsayB3cm90ZToNCg0KRG9uIFNsdXR6IDxkc2x1dHpAdmVyaXpv
bi5jb20+IHdyb3RlOg0KDQpbc25pcF0NCg0KDQpXQVJOSU5HOiBnLmUuIHN0aWxsIGluIHVzZSEN
CldBUk5JTkc6IGcuZS4gc3RpbGwgaW4gdXNlIQ0KV0FSTklORzogZy5lLiBzdGlsbCBpbiB1c2Uh
DQpwbV9vcCgpOiBwbGF0Zm9ybV9wbV9yZXN1bWUrMHgwLzB4NTAgcmV0dXJucyAtMTkNClBNOiBE
ZXZpY2UgaTgwNDIgZmFpbGVkIHRvIHJlc3VtZTogZXJyb3IgLTE5DQpJTkZPOiB0YXNrIHNhZGM6
MjIxNjQgYmxvY2tlZCBmb3IgbW9yZSB0aGVuIDEyMCBzZWNvbmRzLg0KImVjaG8gMCA+Li4uIg0K
SU5GTzogdGFzayBzYWRjOjIyMTY0IGJsb2NrZWQgZm9yIG1vcmUgdGhlbiAxMjAgc2Vjb25kcy4N
Cg0KW3Jvb3RAZGNzLXhlbi01NCB+XSMgeGwgZGVzIDE3DQpbcm9vdEBkY3MteGVuLTU0IH5dIyB4
bCByZXN0b3JlIC1WDQovYmlnL3hsLXNhdmUvY2VudG9zLTYuNC14ODZfNjQuMC5zYXZlDQoNCg0K
Tm90IHN1cmUgaWYgdGhpcyBpcyBleHBlY3RlZCBvciBub3QuDQoNCkkgdGhpbmsgSWFuIHNhdyB0
aGlzIHdpdGggdGhlICdmYXN0LWNhbmNlbCcgc29tZXRoaW5nIHJlc3VtZSBidXQgSSBtaWdodCBi
ZSBpbmNvcnJlY3QuIERpZCBpdCB3b3JrIGlmIHlvdSB1c2VkIHhlbmQgKHlvdSBtaWdodCBoYXZl
IHRvIGNvbmZpZ3VyZSBpdCBiZSBlbmFibGVkKT8NCg0KSSBoYXZlIG5vdCB1c2VkIHhlbmQveGUg
aW4gYSBsb25nIHRpbWUuICBJIGRpZCBuZWVkIHRvIGNvbmZpZ3VyZSBpdC4NCg0KRG9lcyBub3Qg
c3RhcnQ6DQoNCg0KIyAvZXRjL2luaXQuZC94ZW5kIHN0YXJ0DQpXQVJOSU5HOiBFbmFibGluZyB0
aGUgeGVuZCB0b29sc3RhY2suDQp4ZW5kIGlzIGRlcHJlY2F0ZWQgYW5kIHNjaGVkdWxlZCBmb3Ig
cmVtb3ZhbC4gUGxlYXNlIG1pZ3JhdGUNCnRvIGFub3RoZXIgdG9vbHN0YWNrIEFTQVAuDQpUcmFj
ZWJhY2sgKG1vc3QgcmVjZW50IGNhbGwgbGFzdCk6DQogIEZpbGUgIi91c3Ivc2Jpbi94ZW5kIiwg
bGluZSAxMTAsIGluIDxtb2R1bGU+DQogICAgc3lzLmV4aXQobWFpbigpKQ0KICBGaWxlICIvdXNy
L3NiaW4veGVuZCIsIGxpbmUgOTEsIGluIG1haW4NCiAgICBzdGFydF9ibGt0YXBjdHJsKCkNCiAg
RmlsZSAiL3Vzci9zYmluL3hlbmQiLCBsaW5lIDc3LCBpbiBzdGFydF9ibGt0YXBjdHJsDQogICAg
c3RhcnRfZGFlbW9uKCJibGt0YXBjdHJsIiwgIiIpDQogIEZpbGUgIi91c3Ivc2Jpbi94ZW5kIiwg
bGluZSA3NCwgaW4gc3RhcnRfZGFlbW9uDQogICAgb3MuZXhlY3ZwKGRhZW1vbiwgKGRhZW1vbiwp
ICsgYXJncykNCiAgRmlsZSAiL3Vzci9saWI2NC9weXRob24yLjcvb3MucHkiLCBsaW5lIDM0NCwg
aW4gZXhlY3ZwDQogICAgX2V4ZWN2cGUoZmlsZSwgYXJncykNCiAgRmlsZSAiL3Vzci9saWI2NC9w
eXRob24yLjcvb3MucHkiLCBsaW5lIDM4MCwgaW4gX2V4ZWN2cGUNCiAgICBmdW5jKGZ1bGxuYW1l
LCAqYXJncmVzdCkNCk9TRXJyb3I6IFtFcnJubyAyXSBObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5
DQoNCg0KSG93IGltcG9ydGFudCBpcyBpdCB0byB0cnkgdGhpcz8NCg0KSXQgdGVsbHMgdXMgd2hl
dGhlciB0aGUgaXNzdWUgaXMgaW5kZWVkIHdpdGggdGhlICdmYXN0LWNhbmNlbCcgdGhpbmcuDQoN
CkJ1dCwgSSBkbyByZWNhbGwgc2VlaW5nIGEgcGF0Y2ggZnJvbSBJYW4gSmFja3NvbiBmb3IgdGhp
cyAtIEkganVzdA0KZG9uJ3QgcmVtZW1iZXIgd2hhdCBpdCB3YXMgY2FsbGVkIC0gaXQgd2FzIHBv
c3RlZCBoZXJlIGFuZCBwZXJoYXBzDQphcHBseWluZyBpdCB3b3VsZCBoZWxwPw0KDQoNCkkgaGF2
ZSBub3QgZm91bmQgYSBwYXRjaC4gIFRoZSBidWcgIzMwOg0KDQoNCmh0dHA6Ly9idWdzLnhlbnBy
b2plY3Qub3JnL3hlbi9idWcvMzANCg0KTG9va3MgdG8gbWUgbGlrZSB0aGUgaXNzdWUgSSBhbSBz
ZWVpbmcuICBUaGUgR3Vlc3QgSSB3YXMgdGVzdGluZyB0aGlzIG9uIGlzIGFuIG9sZCBrZXJuZWwg
KGFzIGZhciBhcyB0aGlzIGJ1ZykgYW5kIHNvIEkgd291bGQgZXhwZWN0IGl0IG1pZ2h0IGJlIHJl
bGF0ZWQuDQoNCg0KSGVyZSBpcyB3aGF0IEkgc2VlIHRoYXQgbWF5IGxpbmsgaXQgdG8gdGhpczoN
Cg0KDQpGcm9tOiBJYW4gQ2FtcGJlbGwgPElhbi5DYW1wYmVsbEBjaXRyaXguY29tPg0KVG86IElh
biBKYWNrc29uIDxJYW4uSmFja3NvbkBldS5jaXRyaXguY29tPg0KQ2M6IHhlbi1kZXZlbEBsaXN0
cy54ZW4ub3JnDQpTdWJqZWN0OiBSZTogW1hlbi1kZXZlbF0gMy40LjcwKyBrZXJuZWwgV0FSTklO
RyBzcGV3IGR5c2Z1bmN0aW9uIG9uIGZhaWxlZCBtaWdyYXRpb24NCkRhdGU6IEZyaSwgMTAgSmFu
IDIwMTQgMTA6MjY6MzEgKzAwMDANCg0KDQouLi4NCg0KTG9va3MgbGlrZSBSSEVMNCAobGludXgt
Mi42LjktODkuMC4xNi5FTCBrZXJuZWwpIGRvZXNuJ3QgaGF2ZSB0aGUNCnN1cHBvcnQgZm9yIHRo
ZSBuZXcgbW9kZSBhdCBhbGwuDQoNCkl0IHdvdWxkIHByb2JhYmx5IGJlIHdpc2UgdG8gdmFsaWRh
dGUgdGhpcyB1bmRlciB4ZW5kIGJlZm9yZSBjaGFzaW5nDQpyZWQtaGVycmluZ3Mgd2l0aCB4bC4N
Cg0KSWFuLg0KDQoNClNvIEkgd2lsbCBjb250aW51ZSB0aGUgZmlnaHQgdG8gZ2V0IHhlbmQgcnVu
bmluZy4NCg0KICAgLURvbg0KDQoNCg0KDQpBaCwgbm93IGhhdmUgeGVuZCBydW5uaW5nLCBidXQg
c3RpbGwgbmVlZCB0byBjb252ZXJ0IGZyb20gYW4geGwuY2ZnIGZpbGUgdG8gYSB4bWRvbWFpbi5j
ZmcuLi4gIEhhdmluZyBub3QgdXNlZCB4bSBpbiB5ZWFycywgdGhpcyB3aWxsIHRha2UgYSB3aGls
ZS4gIFRoZSBtYW4gcGFnZSBkb2VzIG5vdCBzYXkgaG93IHRvIGJ1aWxkIGFuIGh2bSBndWVzdC4N
Cg0KSSBkaWQgYW5vdGhlciB0ZXN0IGFuZCBGZWRvcmEgMTkgKHg4Nl82NCkgc2F2ZWQganVzdCBm
aW5lLiAgU28gdGhpcyBsb29rcyB0byBiZSBidWcgIzMwLg0KDQpJZiBJIHVuZGVyc3RhbmQgdGhp
cyBhbGwsIHRoaXMgbWVhbnMgdGhhdCB0aGUgb2xkZXIgbGludXgga2VybmVscyB3aWxsIGZhaWwg
dGhpcyB3YXkgYWxzbyBpZiBhIG1pZ3JhdGUgZmFpbHMgYW5kIHRoZSBzb3VyY2UgZ3Vlc3QgaXMg
cmVzdGFydGVkLg0KDQpTbywgZG8gSSBjb250aW51ZSB0byBnZXQgeGVuZCB3b3JraW5nLCBvciBq
dXN0IHNheSBpdCBpcyBhbiBleGFtcGxlIG9mIGJ1ZyAjMzA/DQoNCiAgICAtRG9uIFNsdXR6DQoN
Cg0KDQoNCg0KDQoNCiAgICAtRG9uIFNsdXR6
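The text/plain part above is base64-encoded, per its Content-Transfer-Encoding
header. For readers of the raw archive, a minimal Python sketch recovers the
readable text; the string below is copied from the first line of the encoded
block above:

```python
import base64

# First line of the base64-encoded text/plain part above.
encoded = "VGhpcyBhIGJvcmluZyBCVUcgd2hpY2ggaGFzIGEgIGxvbmcgaGlzdG9yeSwgIGZyb20geGVuIDMu"

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # -> This a boring BUG which has a  long history,  from xen 3.
```

Running it over the whole block (lines concatenated) yields the same text that
appears, quoted-printable-encoded, in the text/html alternative part below.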

------=_001_NextPart016722406740_=----
Content-Type: text/html;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable

=EF=BB=BF<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3Dutf-8" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
DIV.FoxDiv20140127085946801671 {
	COLOR: #000000
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =E5=BE=AE=E8=BD=AF=E9=9B=85=E9=BB=91; COL=
OR: #000000; LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px" text=3D#000000>
<DIV>This a <SPAN class=3Dgt-baf-base>boring BUG which has a &nbsp;long=20
history,&nbsp; from xen&nbsp;3.* ~ 4.4rc2, which is always occurred when g=
uest=20
os is centos 6+,&nbsp;&nbsp; as i know.</SPAN></DIV>
<DIV><SPAN class=3Dgt-baf-base>But it is very strange that&nbsp;only a=20
few&nbsp;guys&nbsp;mention&nbsp; it and care it.</SPAN></DIV>
<DIV><SPAN class=3Dgt-baf-base>please read following links:</SPAN></DIV>
<DIV><SPAN class=3Dgt-baf-base><A=20
href=3D"http://lists.xen.org/archives/html/xen-devel/2013-08/msg01315.html=
">http://lists.xen.org/archives/html/xen-devel/2013-08/msg01315.html</A>=20
</SPAN></DIV>
<DIV><SPAN class=3Dgt-baf-base><A=20
href=3D"http://xen.1045712.n5.nabble.com/xm-save-checkpoint-problem-td2519=
789.html">http://xen.1045712.n5.nabble.com/xm-save-checkpoint-problem-td25=
19789.html</A>=20
</SPAN></DIV>
<DIV><SPAN class=3Dgt-baf-base></SPAN>&nbsp;</DIV>
<DIV><SPAN class=3Dgt-baf-base></SPAN>&nbsp;</DIV>
<DIV><SPAN class=3Dgt-baf-base>&nbsp;&nbsp;</SPAN></DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV=20
style=3D"BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; BORDER-=
BOTTOM: medium none; PADDING-BOTTOM: 0cm; PADDING-TOP: 3pt; PADDING-LEFT: =
0cm; BORDER-LEFT: medium none; PADDING-RIGHT: 0cm">
<DIV=20
style=3D"FONT-SIZE: 12px; FONT-FAMILY: tahoma; BACKGROUND: #efefef; COLOR:=
 #000000; PADDING-BOTTOM: 8px; PADDING-TOP: 8px; PADDING-LEFT: 8px; PADDIN=
G-RIGHT: 8px">
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:dslutz@verizon.com">Don Slutz</A>=
</DIV>
<DIV><B>Date:</B>&nbsp;2014-01-25&nbsp;03:06</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:konrad.wilk@oracle.com">Konrad Rzes=
zutek=20
Wilk</A>; <A href=3D"mailto:dslutz@verizon.com">Don Slutz</A></DIV>
<DIV><B>CC:</B>&nbsp;<A=20
href=3D"mailto:xen-devel@lists.xen.org">xen-devel@lists.xen.org</A></DIV>
<DIV><B>Subject:</B>&nbsp;Re: [Xen-devel] [TESTDAY] Test=20
report</DIV></DIV></DIV>
<DIV>
<DIV class=3DFoxDiv20140127085946801671 style=3D"BACKGROUND-COLOR: white">
<DIV class=3Dmoz-cite-prefix>On 01/24/14 11:43, Don Slutz wrote:<BR></DIV>
<BLOCKQUOTE cite=3Dmid:52E29827.7000001@terremark.com type=3D"cite">
  <DIV class=3Dmoz-cite-prefix>On 01/24/14 09:58, Konrad Rzeszutek Wilk=20
  wrote:<BR></DIV>
  <BLOCKQUOTE cite=3Dmid:20140124145840.GE12946@phenom.dumpdata.com type=
=3D"cite"><PRE wrap=3D"">On Thu, Jan 23, 2014 at 05:03:39PM -0500, Don Slu=
tz wrote:
</PRE>
    <BLOCKQUOTE type=3D"cite"><PRE wrap=3D"">On 01/20/14 18:35, Konrad Rze=
szutek Wilk wrote:
</PRE>
      <BLOCKQUOTE type=3D"cite"><PRE wrap=3D"">Don Slutz <A class=3Dmoz-tx=
t-link-rfc2396E href=3D"mailto:dslutz@verizon.com" moz-do-not-send=3D"true=
">&lt;dslutz@verizon.com&gt;</A> wrote:
</PRE></BLOCKQUOTE><PRE wrap=3D"">[snip]

</PRE>
      <BLOCKQUOTE type=3D"cite">
        <BLOCKQUOTE type=3D"cite"><PRE wrap=3D"">WARNING: g.e. still in us=
e!
WARNING: g.e. still in use!
WARNING: g.e. still in use!
pm_op(): platform_pm_resume+0x0/0x50 returns -19
PM: Device i8042 failed to resume: error -19
INFO: task sadc:22164 blocked for more then 120 seconds.
"echo 0 &gt;..."
INFO: task sadc:22164 blocked for more then 120 seconds.

[root@dcs-xen-54 ~]# xl des 17
[root@dcs-xen-54 ~]# xl restore -V
/big/xl-save/centos-6.4-x86_64.0.save


Not sure if this is expected or not.
</PRE></BLOCKQUOTE><PRE wrap=3D"">I think Ian saw this with the 'fast-canc=
el' something resume but I might be incorrect. Did it work if you used xen=
d (you might have to configure it be enabled)?
</PRE></BLOCKQUOTE><PRE wrap=3D"">I have not used xend/xe in a long time. =
 I did need to configure it.

Does not start:


# /etc/init.d/xend start
WARNING: Enabling the xend toolstack.
xend is deprecated and scheduled for removal. Please migrate
to another toolstack ASAP.
Traceback (most recent call last):
  File "/usr/sbin/xend", line 110, in &lt;module&gt;
    sys.exit(main())
  File "/usr/sbin/xend", line 91, in main
    start_blktapctrl()
  File "/usr/sbin/xend", line 77, in start_blktapctrl
    start_daemon("blktapctrl", "")
  File "/usr/sbin/xend", line 74, in start_daemon
    os.execvp(daemon, (daemon,) + args)
  File "/usr/lib64/python2.7/os.py", line 344, in execvp
    _execvpe(file, args)
  File "/usr/lib64/python2.7/os.py", line 380, in _execvpe
    func(fullname, *argrest)
OSError: [Errno 2] No such file or directory


How important is it to try this?
</PRE></BLOCKQUOTE><PRE wrap=3D"">It tells us whether the issue is indeed =
with the 'fast-cancel' thing.

But, I do recall seeing a patch from Ian Jackson for this - I just
don't remember what it was called - it was posted here and perhaps
applying it would help?
</PRE></BLOCKQUOTE><BR>I have not found a patch.&nbsp; The bug=20
  #30:<BR><BR><BR><A class=3Dmoz-txt-link-freetext=20
  href=3D"http://bugs.xenproject.org/xen/bug/30"=20
  moz-do-not-send=3D"true">http://bugs.xenproject.org/xen/bug/30</A><BR><B=
R>Looks=20
  to me like the issue I am seeing.&nbsp; The Guest I was testing this on =
is an=20
  old kernel (as far as this bug) and so I would expect it might be=20
  related.<BR><BR><BR>Here is what I see that may link it to this:<BR><BR>
  <BLOCKQUOTE><PRE class=3Dheaders><B>From</B>: Ian Campbell <A class=3Dmo=
z-txt-link-rfc2396E href=3D"mailto:Ian.Campbell@citrix.com" moz-do-not-sen=
d=3D"true">&lt;Ian.Campbell@citrix.com&gt;</A>
<B>To</B>: Ian Jackson <A class=3Dmoz-txt-link-rfc2396E href=3D"mailto:Ian=
.Jackson@eu.citrix.com" moz-do-not-send=3D"true">&lt;Ian.Jackson@eu.citrix=
.com&gt;</A>
<B>Cc</B>: <A class=3Dmoz-txt-link-abbreviated href=3D"mailto:xen-devel@li=
sts.xen.org" moz-do-not-send=3D"true">xen-devel@lists.xen.org</A>
<B>Subject</B>: Re: [Xen-devel] 3.4.70+ kernel WARNING spew dysfunction on=
 failed migration
<B>Date</B>: Fri, 10 Jan 2014 10:26:31 +0000

</PRE></BLOCKQUOTE><PRE class=3Dheaders>...
</PRE>
  <BLOCKQUOTE><PRE class=3Dheaders>Looks like RHEL4 (linux-2.6.9-89.0.16.E=
L kernel) doesn't have the
support for the new mode at all.

It would probably be wise to validate this under xend before chasing
red-herrings with xl.

Ian.
</PRE></BLOCKQUOTE><BR>So I will continue the fight to get xend=20
  running.<BR><BR>&nbsp;&nbsp; -Don<BR><BR><BR></BLOCKQUOTE><BR>Ah, now ha=
ve xend=20
running, but still need to convert from an xl.cfg file to a=20
xmdomain.cfg...&nbsp; Having not used xm in years, this will take a while.=
&nbsp;=20
The man page does not say how to build an hvm guest.<BR><BR>I did another =
test=20
and Fedora 19 (x86_64) saved just fine.&nbsp; So this looks to be bug=20
#30.<BR><BR>If I understand this all, this means that the older linux kern=
els=20
will fail this way also if a migrate fails and the source guest is=20
restarted.<BR><BR>So, do I continue to get xend working, or just say it is=
 an=20
example of bug #30?<BR><BR>&nbsp;&nbsp;&nbsp; -Don Slutz<BR><BR><BR>
<BLOCKQUOTE cite=3Dmid:52E29827.7000001@terremark.com type=3D"cite"><BR><B=
R><BR>
  <BLOCKQUOTE cite=3Dmid:20140124145840.GE12946@phenom.dumpdata.com type=
=3D"cite">
    <BLOCKQUOTE type=3D"cite"><PRE wrap=3D"">    -Don Slutz

</PRE></BLOCKQUOTE></BLOCKQUOTE><BR></BLOCKQUOTE><BR></DIV></DIV></BODY></=
HTML>

------=_001_NextPart016722406740_=------



--===============7713829676248012366==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7713829676248012366==--



dGhpcyB3YXkgYWxzbyBpZiBhIG1pZ3JhdGUgZmFpbHMgYW5kIHRoZSBzb3VyY2UgZ3Vlc3QgaXMg
cmVzdGFydGVkLg0KDQpTbywgZG8gSSBjb250aW51ZSB0byBnZXQgeGVuZCB3b3JraW5nLCBvciBq
dXN0IHNheSBpdCBpcyBhbiBleGFtcGxlIG9mIGJ1ZyAjMzA/DQoNCiAgICAtRG9uIFNsdXR6DQoN
Cg0KDQoNCg0KDQoNCiAgICAtRG9uIFNsdXR6

------=_001_NextPart016722406740_=----
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit

This is a boring bug with a long history: it has occurred from Xen 3.* through 4.4-rc2, and as far as I know it is always triggered when the guest OS is CentOS 6+.
But it is very strange that only a few people have mentioned it or cared about it.
Please read the following links:
http://lists.xen.org/archives/html/xen-devel/2013-08/msg01315.html
http://xen.1045712.n5.nabble.com/xm-save-checkpoint-problem-td2519789.html

herbert cland

From: Don Slutz <dslutz@verizon.com>
Date: 2014-01-25 03:06
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Don Slutz <dslutz@verizon.com>
CC: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TESTDAY] Test report

[snip]

------=_001_NextPart016722406740_=------



--===============7713829676248012366==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7713829676248012366==--



From xen-devel-bounces@lists.xen.org Mon Jan 27 03:46:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 03:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7d8m-0006lU-00; Mon, 27 Jan 2014 03:45:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7d8k-0006lP-JP
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 03:45:38 +0000
Received: from [193.109.254.147:32007] by server-1.bemta-14.messagelabs.com id
	8F/4F-15600-166D5E25; Mon, 27 Jan 2014 03:45:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390794335!13303629!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20908 invoked from network); 27 Jan 2014 03:45:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 03:45:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384300800"; d="scan'208";a="96689504"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 03:45:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 22:45:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7d8f-0000BU-7x;
	Mon, 27 Jan 2014 03:45:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7d8e-0000PM-GH;
	Mon, 27 Jan 2014 03:45:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24540-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 03:45:32 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24540: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24540 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep  fail in 24536 REGR. vs. 24473

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24536
 test-amd64-amd64-xl-winxpsp3  7 windows-install             fail pass in 24536

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24507-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24536 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)      blocked in 24536 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)     blocked in 24536 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24536 never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24536 n/a

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 03:46:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 03:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7d8m-0006lU-00; Mon, 27 Jan 2014 03:45:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7d8k-0006lP-JP
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 03:45:38 +0000
Received: from [193.109.254.147:32007] by server-1.bemta-14.messagelabs.com id
	8F/4F-15600-166D5E25; Mon, 27 Jan 2014 03:45:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390794335!13303629!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20908 invoked from network); 27 Jan 2014 03:45:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 03:45:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384300800"; d="scan'208";a="96689504"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 03:45:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 26 Jan 2014 22:45:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7d8f-0000BU-7x;
	Mon, 27 Jan 2014 03:45:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7d8e-0000PM-GH;
	Mon, 27 Jan 2014 03:45:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24540-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 03:45:32 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24540: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24540 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep  fail in 24536 REGR. vs. 24473

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24536
 test-amd64-amd64-xl-winxpsp3  7 windows-install             fail pass in 24536

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24507-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24536 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)      blocked in 24536 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24536 n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)     blocked in 24536 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24536 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24536 n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24536 never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24536 n/a

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 13:41:36 2014 +0100

    x86: PHYSDEVOP_{prepare,release}_msix are privileged
    
    Yet this wasn't being enforced.
    
    This is XSA-87.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 1cd4fab14ce25859efa4a2af13475e6650a5506c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Jan 24 10:19:53 2014 +0100

    Revert "x86/viridian: Time Reference Count MSR"
    
    This mostly reverts commit e36cd2cdc9674a7a4855d21fb7b3e6e17c4bb33b.
    hvm_get_guest_time() is not a suitable time source for this MSR, as
    it resets across migration.
    
    Conflicts:
    	xen/arch/x86/hvm/viridian.c
    	xen/include/asm-x86/perfc_defn.h
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 05:23:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 05:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7een-00010Y-Dj; Mon, 27 Jan 2014 05:22:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W7eem-00010T-8w
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 05:22:48 +0000
Received: from [193.109.254.147:27005] by server-13.bemta-14.messagelabs.com
	id A8/F8-19374-72DE5E25; Mon, 27 Jan 2014 05:22:47 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390800164!13344394!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14307 invoked from network); 27 Jan 2014 05:22:46 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 05:22:46 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Sun, 26 Jan 2014 22:22:34 -0700
Message-ID: <52E5ED14.2090005@suse.com>
Date: Sun, 26 Jan 2014 22:22:28 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>	<21216.62800.746512.422459@mariner.uk.xensource.com>	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
In-Reply-To: <21218.24466.92095.134875@mariner.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> BTW, I only see the crash when the save/restore script is running.  I
>> stopped the other scripts and domains, running only save/restore on a
>> single domain, and see the crash rather quickly (within 10 iterations).
>>     
>
> I'll look at the libvirt code, but:
>
> With a recurring timeout, how can you ever know it's cancelled ?
> There might be threads out there, which don't hold any locks, which
> are in the process of executing a callback for a timeout.  That might
> be arbitrarily delayed from the pov of the rest of the program.
>
> E.g.:
>
>  Thread A                                             Thread B
>
>    invoke some libxl operation
> X    do some libxl stuff
> X    register timeout (libxl)
> XV     record timeout info
> X    do some more libxl stuff
>      ...
> X    do some more libxl stuff
> X    deregister timeout (libxl internal)
> X     converted to request immediate timeout
> XV     record new timeout info
> X      release libvirt event loop lock
>                                             entering libvirt event loop
>                                        V     observe timeout is immediate
>                                        V      need to do callback
>                                                call libxl driver
>
>       entering libvirt event loop
>  V     observe timeout is immediate
>  V      need to do callback
>          call libxl driver
>            call libxl
>   X          libxl sees timeout is live
>   X          libxl does libxl stuff
>          libxl driver deregisters
>  V         record lack of timeout
>          free driver's timeout struct
>                                                call libxl
>                                       X          libxl sees timeout is dead
>                                       X          libxl does nothing
>                                              libxl driver deregisters
>                                        V       CRASH due to deregistering
>                                        V        already-deregistered timeout
>
> If this is how things are, then I think there is no sane way to use
> libvirt's timeouts (!)
>   

Looking at libvirt's default event loop impl, and the current libxl
driver code, I think this is how things are :-/.  But maybe you have
just described a bug in the libxl driver.  In the timer callback,
libxlDomainObjPrivate is locked, the timeout is disabled in the libvirt
event loop, libxlDomainObjPrivate is unlocked, and
libxl_osevent_occurred_timeout is called.  Could the issue be solved by
checking if the timeout is still valid in the callback, while holding a
lock on libxlDomainObjPrivate?  The first thread running the callback
could mark the timeout invalid before releasing the lock and calling
libxl_osevent_occurred_timeout.  After acquiring the
libxlDomainObjPrivate lock, subsequent threads running the callback
would see the timer is invalid and return.
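The check described above could be sketched roughly as follows. This is not
the actual libvirt libxl driver code; the struct and field names
(fake_priv, timer_valid, callbacks_run) are hypothetical stand-ins for the
relevant parts of libxlDomainObjPrivate:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for libxlDomainObjPrivate; only the fields
 * relevant to the proposed fix are shown. */
struct fake_priv {
    pthread_mutex_t lock;
    bool timer_valid;      /* set when the timeout is registered */
    int callbacks_run;     /* how many threads got past the check */
};

/* Timer callback with the proposed validity check: the first thread in
 * marks the timer invalid while holding the lock, so any later thread
 * running the same callback returns early instead of deregistering an
 * already-deregistered timeout. */
static void timer_callback(struct fake_priv *priv)
{
    pthread_mutex_lock(&priv->lock);
    if (!priv->timer_valid) {          /* another thread got here first */
        pthread_mutex_unlock(&priv->lock);
        return;
    }
    priv->timer_valid = false;         /* claim the timeout */
    pthread_mutex_unlock(&priv->lock);

    /* ... disable the timeout in the event loop, then call
     * libxl_osevent_occurred_timeout() outside the lock ... */
    priv->callbacks_run++;
}
```

Even if two threads reach the callback, only the first one that takes the
lock proceeds; the second sees timer_valid cleared and returns.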

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 05:40:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 05:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7evH-0001b4-Fs; Mon, 27 Jan 2014 05:39:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W7evF-0001az-Bk
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 05:39:49 +0000
Received: from [85.158.143.35:45713] by server-2.bemta-4.messagelabs.com id
	73/3E-11386-421F5E25; Mon, 27 Jan 2014 05:39:48 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390801185!928102!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16351 invoked from network); 27 Jan 2014 05:39:47 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 05:39:47 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Sun, 26 Jan 2014 22:39:35 -0700
Message-ID: <52E5F110.1020907@suse.com>
Date: Sun, 26 Jan 2014 22:39:28 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>	
	<21214.37402.648941.864060@mariner.uk.xensource.com>	
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>	
	<21216.62800.746512.422459@mariner.uk.xensource.com>	
	<52E1EB97.4080007@suse.com>	
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<1390567954.2124.85.camel@kazak.uk.xensource.com>
In-Reply-To: <1390567954.2124.85.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote:
> On Fri, 2014-01-24 at 12:41 +0000, Ian Jackson wrote:
>
>   
>>  Thread A                                             Thread B
>>
>>    invoke some libxl operation
>> X    do some libxl stuff
>> X    register timeout (libxl)
>> XV     record timeout info
>> X    do some more libxl stuff
>>      ...
>> X    do some more libxl stuff
>> X    deregister timeout (libxl internal)
>> X     converted to request immediate timeout
>> XV     record new timeout info
>> X      release libvirt event loop lock
>>                                             entering libvirt event loop
>>                                        V     observe timeout is immediate
>>     
>
> Is there nothing in this interval which deregisters, pauses, quiesces or
> otherwise prevents the timer from going off again until after the
> callback (when the lock would be reacquired and whatever was done is
> undone)?
>   

The libvirt libxl driver will disable the timeout in libvirt's event
loop before calling libxl_osevent_occurred_timeout.  But AFAICT, and as
Ian J. describes, another thread running the event loop could invoke the
timeout before it is disabled.  I think this could be handled in the
libxl driver as described in my response to his mail.

> (V is the libvirt event loop lock I presume?)
>
>   
>>                                        V      need to do callback
>>                                                call libxl driver
>>
>>       entering libvirt event loop
>>  V     observe timeout is immediate
>>     
>
> Given the behaviour I suggest above this would be prevented I think?
>
>   
>> But if you think about it, if you have 10 threads all running the
>> event loop and you set a timeout to zero, doesn't that mean that every
>> thread's event loop should do the timeout callback as fast as it can ?
>> That could be a lot of wasted effort.
>>     
>
> It doesn't seem all that likely triggering the same timeout multiple
> times in different threads simultaneously would be a deliberate design
> decision, so if the libvirt event core doesn't already prevent this
> somehow then it seems to me that this is just a bug in the event loop
> core.
>   

If I understand the code correctly, it does seem possible to trigger the
same timeout multiple times.
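The hazard can be illustrated with a toy model. This is not libvirt's
actual event loop, just a sketch of the failure mode: nothing claims an
expired timer, so every thread's pass over an immediate (0 ms) timeout
runs the callback again:

```c
struct toy_timer {
    int timeout_ms;     /* 0 means "fire immediately" */
    int fired;          /* how many event-loop passes ran the callback */
};

/* One pass of a naive event loop, as each thread would run it: the
 * timer is checked but never claimed, so a second thread observing
 * the same expired timer fires it a second time. */
static void event_loop_pass(struct toy_timer *t)
{
    if (t->timeout_ms == 0)   /* timer looks expired to this thread */
        t->fired++;           /* callback runs once per thread */
}
```

Two passes, as by two threads, fire the same timer twice; a real fix
needs the loop (or the callback, as proposed earlier in the thread) to
claim the timer before invoking it.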

> In that case it should be addressed in libvirt, and in any case the libvirt
> core folks should be involved in the discussion, so they have the
> opportunity to tell us how best, if at all, we can use the provided
> facilities and/or whether these issues are deliberate or things which
> should be fixed etc.
>   

Agreed.  I'll mention the issue and this discussion on the libvirt IRC
channel tomorrow.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 06:55:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 06:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7g68-0003Ca-FU; Mon, 27 Jan 2014 06:55:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wency@cn.fujitsu.com>) id 1W7g66-0003CS-LM
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 06:55:07 +0000
Received: from [193.109.254.147:58461] by server-13.bemta-14.messagelabs.com
	id 69/F4-19374-AC206E25; Mon, 27 Jan 2014 06:55:06 +0000
X-Env-Sender: wency@cn.fujitsu.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390805701!2562!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23142 invoked from network); 27 Jan 2014 06:55:03 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-16.tower-27.messagelabs.com with SMTP;
	27 Jan 2014 06:55:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384272000"; 
   d="scan'208";a="9457296"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 27 Jan 2014 14:51:13 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0R6ssbI031394;
	Mon, 27 Jan 2014 14:54:55 +0800
Received: from [10.167.226.164] ([10.167.226.164])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012714532279-1384147 ;
	Mon, 27 Jan 2014 14:53:22 +0800 
Message-ID: <52E60355.3010808@cn.fujitsu.com>
Date: Mon, 27 Jan 2014 14:57:25 +0800
From: Wen Congyang <wency@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: rshriram@cs.ubc.ca, Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>	<1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
In-Reply-To: <CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/27 14:53:22,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/27 14:53:24,
	Serialize complete at 2014/01/27 14:53:24
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/13 V6] remus: implement the API to setup
 network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01/27/2014 06:30 AM, Shriram Rajagopalan Wrote:
> On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> 
>> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>>
>> The following steps are taken during setup:
>>  a) call the hotplug script for each vif to setup its network buffer
>>
>>  b) establish a dedicated remus context containing libnl related
>>     state (netlink sockets, qdisc caches, etc.,)
>>
>>  c) Obtain handles to plug qdiscs installed on the IFB devices
>>     chosen by the hotplug scripts.
>>
>> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
>> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>>  docs/misc/xenstore-paths.markdown |   4 +
>>  tools/libxl/Makefile              |   2 +
>>  tools/libxl/libxl_dom.c           |   7 +-
From xen-devel-bounces@lists.xen.org Mon Jan 27 06:55:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 06:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7g68-0003Ca-FU; Mon, 27 Jan 2014 06:55:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wency@cn.fujitsu.com>) id 1W7g66-0003CS-LM
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 06:55:07 +0000
Received: from [193.109.254.147:58461] by server-13.bemta-14.messagelabs.com
	id 69/F4-19374-AC206E25; Mon, 27 Jan 2014 06:55:06 +0000
X-Env-Sender: wency@cn.fujitsu.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390805701!2562!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23142 invoked from network); 27 Jan 2014 06:55:03 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-16.tower-27.messagelabs.com with SMTP;
	27 Jan 2014 06:55:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384272000"; 
   d="scan'208";a="9457296"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 27 Jan 2014 14:51:13 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0R6ssbI031394;
	Mon, 27 Jan 2014 14:54:55 +0800
Received: from [10.167.226.164] ([10.167.226.164])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012714532279-1384147 ;
	Mon, 27 Jan 2014 14:53:22 +0800 
Message-ID: <52E60355.3010808@cn.fujitsu.com>
Date: Mon, 27 Jan 2014 14:57:25 +0800
From: Wen Congyang <wency@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: rshriram@cs.ubc.ca, Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>	<1390295117-718-7-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
In-Reply-To: <CAP8mzPOyJvoT5-wNXE=x0WAzy=U75vJ2mGWhEruSWVa+P9TLLQ@mail.gmail.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/27 14:53:22,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/27 14:53:24,
	Serialize complete at 2014/01/27 14:53:24
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 06/13 V6] remus: implement the API to setup
 network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01/27/2014 06:30 AM, Shriram Rajagopalan Wrote:
> On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> 
>> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>>
>> The following steps are taken during setup:
>>  a) call the hotplug script for each vif to setup its network buffer
>>
>>  b) establish a dedicated remus context containing libnl related
>>     state (netlink sockets, qdisc caches, etc.,)
>>
>>  c) Obtain handles to plug qdiscs installed on the IFB devices
>>     chosen by the hotplug scripts.
>>
>> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
>> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>>  docs/misc/xenstore-paths.markdown |   4 +
>>  tools/libxl/Makefile              |   2 +
>>  tools/libxl/libxl_dom.c           |   7 +-
>>  tools/libxl/libxl_internal.h      |  11 +
>>  tools/libxl/libxl_netbuffer.c     | 419 ++++++++++++++++++++++++++++++++++++++
>>  tools/libxl/libxl_nonetbuffer.c   |   6 +
>>  tools/libxl/libxl_remus.c         |  35 ++++
>>  7 files changed, 479 insertions(+), 5 deletions(-)
>>  create mode 100644 tools/libxl/libxl_remus.c
>>
>> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
>> index 70ab7f4..7a0d2c9 100644
>> --- a/docs/misc/xenstore-paths.markdown
>> +++ b/docs/misc/xenstore-paths.markdown
>> @@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
>>
>>  The device model version for a domain.
>>
>> +#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
>> +
>> +IFB device used by Remus to buffer network output from the associated vif.
>> +
>>  [BLKIF]:
>> http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
>>  [FBIF]:
>> http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
>>  [HVMPARAMS]:
>> http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
>> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
>> index 84a467c..218f55e 100644
>> --- a/tools/libxl/Makefile
>> +++ b/tools/libxl/Makefile
>> @@ -52,6 +52,8 @@ else
>>  LIBXL_OBJS-y += libxl_nonetbuffer.o
>>  endif
>>
>> +LIBXL_OBJS-y += libxl_remus.o
>> +
>>  LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
>>  LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
>>
>> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
>> index 8d63f90..e3e9f6f 100644
>> --- a/tools/libxl/libxl_dom.c
>> +++ b/tools/libxl/libxl_dom.c
>> @@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
>>
>>  /*==================== Domain suspend (save) ====================*/
>>
>> -static void domain_suspend_done(libxl__egc *egc,
>> -                        libxl__domain_suspend_state *dss, int rc);
>> -
>>  /*----- complicated callback, called by xc_domain_save -----*/
>>
>>  /*
>> @@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
>>      dss->save_dm_callback(egc, dss, our_rc);
>>  }
>>
>> -static void domain_suspend_done(libxl__egc *egc,
>> -                        libxl__domain_suspend_state *dss, int rc)
>> +void domain_suspend_done(libxl__egc *egc,
>> +                         libxl__domain_suspend_state *dss, int rc)
>>  {
>>      STATE_AO_GC(dss->ao);
>>
>> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
>> index 2f64382..0430307 100644
>> --- a/tools/libxl/libxl_internal.h
>> +++ b/tools/libxl/libxl_internal.h
>> @@ -2313,6 +2313,17 @@ typedef struct libxl__remus_state {
>>
>>  _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
>>
>> +_hidden void domain_suspend_done(libxl__egc *egc,
>> +                                 libxl__domain_suspend_state *dss,
>> +                                 int rc);
>> +
>> +_hidden void libxl__remus_setup_done(libxl__egc *egc,
>> +                                     libxl__domain_suspend_state *dss,
>> +                                     int rc);
>> +
>> +_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
>> +                                       libxl__domain_suspend_state *dss);
>> +
>>  struct libxl__domain_suspend_state {
>>      /* set by caller of libxl__domain_suspend */
>>      libxl__ao *ao;
>> diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
>> index 8e23d75..0be876c 100644
>> --- a/tools/libxl/libxl_netbuffer.c
>> +++ b/tools/libxl/libxl_netbuffer.c
>> @@ -17,11 +17,430 @@
>>
>>  #include "libxl_internal.h"
>>
>> +#include <netlink/cache.h>
>> +#include <netlink/socket.h>
>> +#include <netlink/attr.h>
>> +#include <netlink/route/link.h>
>> +#include <netlink/route/route.h>
>> +#include <netlink/route/qdisc.h>
>> +#include <netlink/route/qdisc/plug.h>
>> +
>> +typedef struct libxl__remus_netbuf_state {
>> +    struct rtnl_qdisc **netbuf_qdisc_list;
>> +    struct nl_sock *nlsock;
>> +    struct nl_cache *qdisc_cache;
>> +    const char **vif_list;
>> +    const char **ifb_list;
>> +    uint32_t num_netbufs;
>> +    uint32_t unused;
>> +} libxl__remus_netbuf_state;
>> +
>>  int libxl__netbuffer_enabled(libxl__gc *gc)
>>  {
>>      return 1;
>>  }
>>
>> +/* If the device has a vifname, then use that instead of
>> + * the vifX.Y format.
>> + */
>> +static const char *get_vifname(libxl__gc *gc, uint32_t domid,
>> +                               libxl_device_nic *nic)
>> +{
>> +    const char *vifname = NULL;
>> +    const char *path;
>> +    int rc;
>> +
>> +    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
>> +                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
>> +    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
>> +    if (rc < 0) {
>> +        /* use the default name */
>> +        vifname = libxl__device_nic_devname(gc, domid,
>> +                                            nic->devid,
>> +                                            nic->nictype);
>> +    }
>> +
>> +    return vifname;
>>
> 
> IanJ's feedback last time was that the error code rc needs to be checked.
> If it's a failure, then return NULL to the caller. If it's ENOENT, then use
> the default name.

OK. This should be
if (!rc && !vifname) {
    /* use the default name */
    ....
}
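For illustration, the control flow being agreed on here might look like the following self-contained sketch. The xenstore and libxl helpers are replaced with hypothetical stubs (xs_read_vifname, default_devname) purely to model the behaviour: propagate hard errors as NULL, and fall back to the default name only when the read succeeds but no vifname node exists.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for libxl__xs_read_checked and
 * libxl__device_nic_devname, used only to model the control flow. */
static int xs_read_vifname(int devid, const char **out)
{
    /* devid 0: a custom vifname exists; devid 1: no such node (read
     * succeeds but *out stays NULL); devid 2: hard xenstore error. */
    if (devid == 0) { *out = "myvif0"; return 0; }
    if (devid == 1) { *out = NULL;     return 0; }
    return -1;
}

static const char *default_devname(int devid)
{
    return devid == 1 ? "vif1.0" : "vif?.?";
}

/* Return the configured vifname, the default name if none is set,
 * or NULL on a real error so the caller can abort setup. */
static const char *get_vifname_sketch(int devid)
{
    const char *vifname = NULL;

    if (xs_read_vifname(devid, &vifname) < 0)
        return NULL;                          /* hard failure: propagate */
    if (!vifname)
        vifname = default_devname(devid);     /* no vifname node: fall back */
    return vifname;
}
```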

> 
>> +}
>> +
>> +static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
>> +                                       int *num_vifs)
>> +{
>> +    libxl_device_nic *nics = NULL;
>> +    int nb, i = 0;
>> +    const char **vif_list = NULL;
>> +
>> +    *num_vifs = 0;
>> +    nics = libxl_device_nic_list(CTX, domid, &nb);
>> +    if (!nics)
>> +        return NULL;
>> +
>> +    /* Ensure that none of the vifs are backed by driver domains */
>> +    for (i = 0; i < nb; i++) {
>> +        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
>> +            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
>> +                "Network buffering is not supported with driver domains",
>> +                get_vifname(gc, domid, &nics[i]), nics[i].backend_domid);
>>
> 
> And if the previous feedback were to be incorporated, then get_vifname's
> return value should be assigned to a variable and checked before printing
> or using it for other purposes.

OK

> 
> 
>> +            *num_vifs = -1;
>> +            goto out;
>> +        }
>> +    }
>> +
>> +    GCNEW_ARRAY(vif_list, nb);
>> +    for (i = 0; i < nb; ++i) {
>> +        vif_list[i] = get_vifname(gc, domid, &nics[i]);
>> +        if (!vif_list[i]) {
>> +            vif_list = NULL;
>> +            goto out;
>> +        }
>> +    }
>> +    *num_vifs = nb;
>> +
>> + out:
>> +    for (i = 0; i < nb; i++)
>> +        libxl_device_nic_dispose(&nics[i]);
>> +    free(nics);
>> +    return vif_list;
>> +}
>> +
>> +static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
>> +{
>> +    int i;
>> +    struct rtnl_qdisc *qdisc = NULL;
>> +
>> +    /* free qdiscs */
>> +    for (i = 0; i < netbuf_state->num_netbufs; i++) {
>> +        qdisc = netbuf_state->netbuf_qdisc_list[i];
>> +        if (!qdisc)
>> +            break;
>> +
>> +        nl_object_put((struct nl_object *)qdisc);
>> +    }
>> +
>> +    /* free qdisc cache */
>> +    nl_cache_clear(netbuf_state->qdisc_cache);
>> +    nl_cache_free(netbuf_state->qdisc_cache);
>> +
>> +    /* close nlsock */
>> +    nl_close(netbuf_state->nlsock);
>> +
>> +    /* free nlsock */
>> +    nl_socket_free(netbuf_state->nlsock);
>> +}
>> +
>>
> 
> This code (free_qdiscs) is new. Have you tested it?
> While the control flow looks pretty sane, libnl has been evolving
> a bit ever since the 3.* series.
> 
> If init_qdisc fails, it calls free_qdisc(). If any other setup stage after
> network buffering fails, it would invoke the teardown code, which also
> calls free_qdisc(). This may end up in a segfault.
> 
> I suggest adding a simple check to see if nlsock/qdisc_cache are NULL
> before attempting to execute the rest of the function. And after you free
> the qdisc_cache & nlsock, set them to NULL.

Yes, free_qdisc() may be called twice. Will fix it in the next version.
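The double-free hazard can be avoided with exactly the pattern suggested above: bail out early if the state was never initialised (or was already torn down), and NULL each pointer after freeing it. A minimal stand-alone sketch of the idea, with the libnl release calls replaced by a hypothetical release() helper so the flow can be exercised in isolation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct teardown_state {
    void *nlsock;       /* models netbuf_state->nlsock */
    void *qdisc_cache;  /* models netbuf_state->qdisc_cache */
};

static int frees;       /* counts real release operations */

static void release(void *p) { free(p); frees++; }

/* Safe to call twice: the guards make the second call a no-op. */
static void free_qdiscs_sketch(struct teardown_state *s)
{
    if (!s->nlsock && !s->qdisc_cache)
        return;                      /* never set up, or already freed */

    if (s->qdisc_cache) {
        release(s->qdisc_cache);     /* models nl_cache_clear + nl_cache_free */
        s->qdisc_cache = NULL;
    }
    if (s->nlsock) {
        release(s->nlsock);          /* models nl_close + nl_socket_free */
        s->nlsock = NULL;
    }
}
```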

> 
> 
>> +static int init_qdiscs(libxl__gc *gc,
>> +                       libxl__remus_state *remus_state)
>> +{
>> +    int i, ret, ifindex;
>> +    struct rtnl_link *ifb = NULL;
>> +    struct rtnl_qdisc *qdisc = NULL;
>> +
>> +    /* Convenience aliases */
>> +    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
>> +    const int num_netbufs = netbuf_state->num_netbufs;
>> +    const char ** const ifb_list = netbuf_state->ifb_list;
>> +
>> +    /* Now that we have brought up IFB devices with plug qdisc for
>> +     * each vif, lets get a netlink handle on the plug qdisc for use
>> +     * during checkpointing.
>> +     */
>> +    netbuf_state->nlsock = nl_socket_alloc();
>> +    if (!netbuf_state->nlsock) {
>> +        LOG(ERROR, "cannot allocate nl socket");
>> +        goto out;
>> +    }
>> +
>> +    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
>> +    if (ret) {
>> +        LOG(ERROR, "failed to open netlink socket: %s",
>> +            nl_geterror(ret));
>> +        goto out;
>> +    }
>> +
>> +    /* get list of all qdiscs installed on network devs. */
>> +    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
>> +                                 &netbuf_state->qdisc_cache);
>> +    if (ret) {
>> +        LOG(ERROR, "failed to allocate qdisc cache: %s",
>> +            nl_geterror(ret));
>> +        goto out;
>> +    }
>> +
>> +    /* list of handles to plug qdiscs */
>> +    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
>> +
>> +    for (i = 0; i < num_netbufs; ++i) {
>> +
>> +        /* get a handle to the IFB interface */
>> +        ifb = NULL;
>> +        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
>> +                                   ifb_list[i], &ifb);
>> +        if (ret) {
>> +            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
>> +                nl_geterror(ret));
>> +            goto out;
>> +        }
>> +
>> +        ifindex = rtnl_link_get_ifindex(ifb);
>> +        if (!ifindex) {
>> +            LOG(ERROR, "interface %s has no index", ifb_list[i]);
>> +            goto out;
>> +        }
>> +
>> +        /* Get a reference to the root qdisc installed on the IFB, by
>> +         * querying the qdisc list we obtained earlier. The netbufscript
>> +         * sets up the plug qdisc as the root qdisc, so we don't have to
>> +         * search the entire qdisc tree on the IFB dev.
>> +
>> +         * There is no need to explicitly free this qdisc as its just a
>> +         * reference from the qdisc cache we allocated earlier.
>> +         */
>> +        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
>> +                                         TC_H_ROOT);
>> +
>> +        if (qdisc) {
>> +            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
>> +            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
>> +            if (!tc_kind || strcmp(tc_kind, "plug")) {
>> +                nl_object_put((struct nl_object *)qdisc);
>> +                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
>> +                goto out;
>> +            }
>> +            netbuf_state->netbuf_qdisc_list[i] = qdisc;
>> +        } else {
>> +            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
>> +            goto out;
>> +        }
>> +        rtnl_link_put(ifb);
>> +    }
>> +
>> +    return 0;
>> +
>> + out:
>> +    if (ifb)
>> +        rtnl_link_put(ifb);
>> +    free_qdiscs(netbuf_state);
>> +    return ERROR_FAIL;
>> +}
>> +
>> +static void netbuf_setup_timeout_cb(libxl__egc *egc,
>> +                                    libxl__ev_time *ev,
>> +                                    const struct timeval *requested_abs)
>> +{
>> +    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
>> +
>> +    /* Convenience aliases */
>> +    const int devid = remus_state->dev_id;
>> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
>> +    const char *const vif = netbuf_state->vif_list[devid];
>> +
>> +    STATE_AO_GC(remus_state->dss->ao);
>> +
>> +    libxl__ev_time_deregister(gc, &remus_state->timeout);
>> +    assert(libxl__ev_child_inuse(&remus_state->child));
>> +
>> +    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
>> +        remus_state->netbufscript, vif);
>> +
>> +    if (kill(remus_state->child.pid, SIGKILL)) {
>> +        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
>> +              remus_state->netbufscript,
>> +              (unsigned long)remus_state->child.pid);
>> +    }
>> +
>> +    return;
>> +}
>> +
>> +/* the script needs the following env & args
>> + * $vifname
>> + * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
>> + * $IFB (for teardown)
>> + * setup/teardown as command line arg.
>> + * In return, the script writes the name of IFB device (during setup) to be
>> + * used for output buffering into XENBUS_PATH/ifb
>> + */
>> +static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
>> +                              char *op, libxl__ev_child_callback *death)
>> +{
>> +    int arraysize = 7, nr = 0;
>> +    char **env = NULL, **args = NULL;
>> +    pid_t pid;
>> +
>> +    /* Convenience aliases */
>> +    libxl__ev_child *const child = &remus_state->child;
>> +    libxl__ev_time *const timeout = &remus_state->timeout;
>> +    char *const script = libxl__strdup(gc, remus_state->netbufscript);
>> +    const uint32_t domid = remus_state->dss->domid;
>> +    const int devid = remus_state->dev_id;
>> +    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
>> +    const char *const vif = netbuf_state->vif_list[devid];
>> +    const char *const ifb = netbuf_state->ifb_list[devid];
>> +
>>
> 
> Please set arraysize to 7 here, instead of at the beginning of the function.
> It's more readable that way.

OK.

Thanks
Wen Congyang

> 
>> +    GCNEW_ARRAY(env, arraysize);
>> +    env[nr++] = "vifname";
>> +    env[nr++] = libxl__strdup(gc, vif);
>> +    env[nr++] = "XENBUS_PATH";
>> +    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
>> +                          libxl__xs_libxl_path(gc, domid), devid);
>> +    if (!strcmp(op, "teardown")) {
>> +        env[nr++] = "IFB";
>> +        env[nr++] = libxl__strdup(gc, ifb);
>> +    }
>> +    env[nr++] = NULL;
>> +    assert(nr <= arraysize);
>> +
>> +    arraysize = 3; nr = 0;
>> +    GCNEW_ARRAY(args, arraysize);
>> +    args[nr++] = script;
>> +    args[nr++] = op;
>> +    args[nr++] = NULL;
>> +    assert(nr == arraysize);
>> +
>> +    /* Set hotplug timeout */
>> +    if (libxl__ev_time_register_rel(gc, timeout,
>> +                                    netbuf_setup_timeout_cb,
>> +                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
>> +        LOG(ERROR, "unable to register timeout for "
>> +            "netbuf setup script %s on vif %s", script, vif);
>> +        return ERROR_FAIL;
>> +    }
>> +
>> +    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
>> +        script, op, vif);
>> +
>> +    /* Fork and exec netbuf script */
>> +    pid = libxl__ev_child_fork(gc, child, death);
>> +    if (pid == -1) {
>> +        LOG(ERROR, "unable to fork netbuf script %s", script);
>> +        return ERROR_FAIL;
>> +    }
>> +
>> +    if (!pid) {
>> +        /* child: Launch netbuf script */
>> +        libxl__exec(gc, -1, -1, -1, args[0], args, env);
>> +        /* notreached */
>> +        abort();
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>
>>
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 06:56:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 06:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7g7C-0003Gl-4v; Mon, 27 Jan 2014 06:56:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wency@cn.fujitsu.com>) id 1W7g7A-0003GX-6e
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 06:56:12 +0000
Received: from [85.158.139.211:60659] by server-5.bemta-5.messagelabs.com id
	B0/E5-14928-B0306E25; Mon, 27 Jan 2014 06:56:11 +0000
X-Env-Sender: wency@cn.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390805767!12079179!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4398 invoked from network); 27 Jan 2014 06:56:10 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-206.messagelabs.com with SMTP;
	27 Jan 2014 06:56:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384272000"; 
   d="scan'208";a="9457298"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 27 Jan 2014 14:52:23 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0R6u55g031472;
	Mon, 27 Jan 2014 14:56:05 +0800
Received: from [10.167.226.164] ([10.167.226.164])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012714543306-1384166 ;
	Mon, 27 Jan 2014 14:54:33 +0800 
Message-ID: <52E6039B.6040806@cn.fujitsu.com>
Date: Mon, 27 Jan 2014 14:58:35 +0800
From: Wen Congyang <wency@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: rshriram@cs.ubc.ca, Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>	<1390295117-718-14-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPNEiuL9NZdRNjUhuMyGweZJExRE1KdAzU5ZkRnQXodbrg@mail.gmail.com>
In-Reply-To: <CAP8mzPNEiuL9NZdRNjUhuMyGweZJExRE1KdAzU5ZkRnQXodbrg@mail.gmail.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/27 14:54:33,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/27 14:54:33,
	Serialize complete at 2014/01/27 14:54:33
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 13/13 V6] tools/libxl: network buffering
 cmdline switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01/27/2014 06:31 AM, Shriram Rajagopalan Wrote:
> On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> 
>> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>>
>> Command line switch to 'xl remus' command, to enable network buffering.
>> Pass on this flag to libxl so that it can act accordingly.
>> Also update man pages to reflect the addition of a new option to
>> 'xl remus' command.
>>
>> Note: network buffering is enabled by default. If you want to
>> disable it, please use the -n option.
>>
>>
> 
>> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
>> index ebe0220..f05e07b 100644
>> --- a/tools/libxl/xl_cmdtable.c
>> +++ b/tools/libxl/xl_cmdtable.c
>> @@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
>>        "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
>>        "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
>>        "-u                      Disable memory checkpoint compression.\n"
>> +      "-n                      Enable network output buffering.\n"
>>
> 
> Since network buffering is enabled by default, this should be "Disable
> network output..".

Yes, we forgot to update this description.

Thanks
Wen Congyang

> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 07:03:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 07:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7gE7-0003tg-7L; Mon, 27 Jan 2014 07:03:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1W7gE5-0003tb-Ea
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 07:03:21 +0000
Received: from [85.158.137.68:2949] by server-14.bemta-3.messagelabs.com id
	1F/B6-06105-8B406E25; Mon, 27 Jan 2014 07:03:20 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390806199!10296889!1
X-Originating-IP: [62.142.5.109]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA5ID0+IDk1MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28570 invoked from network); 27 Jan 2014 07:03:20 -0000
Received: from emh03.mail.saunalahti.fi (HELO emh03.mail.saunalahti.fi)
	(62.142.5.109)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 07:03:20 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh03.mail.saunalahti.fi (Postfix) with ESMTP id 5C70A1887EF;
	Mon, 27 Jan 2014 09:03:19 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 4D76936C01F; Mon, 27 Jan 2014 09:03:19 +0200 (EET)
Date: Mon, 27 Jan 2014 09:03:19 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: xennn <openbg@abv.bg>
Message-ID: <20140127070319.GQ2924@reaktio.net>
References: <1390663194541-5720926.post@n5.nabble.com>
	<1390741536939-5720933.post@n5.nabble.com>
	<1390742199614-5720934.post@n5.nabble.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390742199614-5720934.post@n5.nabble.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 26, 2014 at 05:16:39AM -0800, xennn wrote:
> just one more question ... is it possible for xen to support SMP guests?
> 

Xen supports multiple vCPUs, of course; up to 256 or 512 vCPUs per VM:

http://wiki.xenproject.org/wiki/Xen_Release_Features
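For example, an SMP guest is configured by setting the vCPU count in the xl domain config (the values below are illustrative):

```
# xl domain configuration fragment: start the guest with 4 vCPUs,
# allowing hot-plug up to 8.
vcpus = 4
maxvcpus = 8
```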

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 07:03:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 07:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7gEf-0003wG-L7; Mon, 27 Jan 2014 07:03:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wency@cn.fujitsu.com>) id 1W7gEe-0003w8-OY
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 07:03:56 +0000
Received: from [85.158.137.68:52339] by server-13.bemta-3.messagelabs.com id
	3D/95-28603-CD406E25; Mon, 27 Jan 2014 07:03:56 +0000
X-Env-Sender: wency@cn.fujitsu.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390806233!11478926!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2839 invoked from network); 27 Jan 2014 07:03:54 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-4.tower-31.messagelabs.com with SMTP;
	27 Jan 2014 07:03:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,727,1384272000"; 
   d="scan'208";a="9457326"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 27 Jan 2014 15:00:08 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s0R73jhI031899;
	Mon, 27 Jan 2014 15:03:46 +0800
Received: from [10.167.226.164] ([10.167.226.164])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014012715021392-1384373 ;
	Mon, 27 Jan 2014 15:02:13 +0800 
Message-ID: <52E60568.7060305@cn.fujitsu.com>
Date: Mon, 27 Jan 2014 15:06:16 +0800
From: Wen Congyang <wency@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: rshriram@cs.ubc.ca, Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
In-Reply-To: <CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/01/27 15:02:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/01/27 15:02:14,
	Serialize complete at 2014/01/27 15:02:14
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
 hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 01/27/2014 06:27 AM, Shriram Rajagopalan Wrote:
> On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com> wrote:
> 
>> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>>
>> This patch introduces remus-netbuf-setup hotplug script responsible for
>> setting up and tearing down the necessary infrastructure required for
>> network output buffering in Remus.  This script is intended to be invoked
>> by libxl for each guest interface, when starting or stopping Remus.
>>
>> Apart from returning success/failure indication via the usual hotplug
>> entries in xenstore, this script also writes to xenstore the name of
>> the IFB device to be used to control the vif's network output.
>>
>> The script relies on libnl3 command line utilities to perform various
>> setup/teardown functions. The script is confined to Linux platforms only
>> since NetBSD does not seem to have libnl3.
>>
>> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
>> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
>> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
>> ---
>>  tools/hotplug/Linux/Makefile           |   1 +
>>  tools/hotplug/Linux/remus-netbuf-setup | 183 +++++++++++++++++++++++++++++++++
>>  2 files changed, 184 insertions(+)
>>  create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
>>
>>
> The last time I posted this script, the feedback was that the script and
> the code invoking
> the script should be in a single patch. So I would suggest doing the same.

We use the script in patch 6. It adds 479 lines. Both patches are big
(each adds more than 100 lines), so why put them into a single patch?

Thanks
Wen Congyang
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 07:28:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 07:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7gc7-0004qR-AA; Mon, 27 Jan 2014 07:28:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W7gc5-0004qL-Rw
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 07:28:10 +0000
Received: from [193.109.254.147:61598] by server-3.bemta-14.messagelabs.com id
	A0/5A-11000-98A06E25; Mon, 27 Jan 2014 07:28:09 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390807687!8490!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11130 invoked from network); 27 Jan 2014 07:28:08 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 07:28:08 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so7301890qcv.5
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 23:28:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=Gje8cXK1itU5jdrtk+1WgGZL8c2tUAoTc7Trg129tOE=;
	b=GTQnoriDWV4c58VMCm1cTrJ6ghiVFOaPmt+iY2V4SQV4RP5pAzEJRJ1nIzUxMBMFWw
	YwEJKyKQ1H2m1846rn7QDdeWukaz818uXDv7fY0YUfhMFcfXPvNmTDhpA+2rndNdtYau
	FjDyNyenOMVE8w1UohMiemztGXZJOyUq5vEW7atTMTFUUQQmMewKgIKIADWVklkH/wsZ
	TZxRdkDeVVxnYelb/r/1/T2XcCbvBvVtLrsxvTHd4la3z4zLzksqQqexyCqaRzKOLnNH
	wDGIbhgTPZH2sCPwnw6Whge1jLDRz/u6ODNZb9ErreZYsQMEocedK0vWV4e6AGdGILMf
	2vOA==
X-Gm-Message-State: ALoCoQkqvZBmlPSOTwp61Dq+vMD+/zLfrSe4BPCo/5F4yGa3cKzF5DAh1ZBwMs9E0QmtdmkGchdv
MIME-Version: 1.0
X-Received: by 10.224.121.67 with SMTP id g3mr10382960qar.78.1390807686909;
	Sun, 26 Jan 2014 23:28:06 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Sun, 26 Jan 2014 23:28:06 -0800 (PST)
In-Reply-To: <1390564902.2124.73.camel@kazak.uk.xensource.com>
References: <1390564129-32611-1-git-send-email-pranavkumar@linaro.org>
	<1390564902.2124.73.camel@kazak.uk.xensource.com>
Date: Mon, 27 Jan 2014 12:58:06 +0530
Message-ID: <CAAHg+HjoEfPRwG0p+c4G8D-nE_NXa=oiDOEUKnHfM7joyVC2rg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V3] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 24 January 2014 17:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-24 at 17:18 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds a reset support for xgene arm64 platform.
>>
>> V3:
>> - Retrieving reset base address and reset mask from the device tree.
>> - Removed unnecessary header files included earlier.
>> V2:
>> - Removed unnecessary mdelay in code.
>> - Added iounmap of the base address.
>> V1:
>> - Initial patch.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>> ---
>>  xen/arch/arm/platforms/xgene-storm.c |   70 ++++++++++++++++++++++++++++++++++
>>  1 file changed, 70 insertions(+)
>>
>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
>> index 5b0bd5f..986284c 100644
>> --- a/xen/arch/arm/platforms/xgene-storm.c
>> +++ b/xen/arch/arm/platforms/xgene-storm.c
>> @@ -20,8 +20,17 @@
>>
>>  #include <xen/config.h>
>>  #include <asm/platform.h>
>> +#include <xen/vmap.h>
>> +#include <asm/io.h>
>>  #include <asm/gic.h>
>>
>> +#define DT_MATCH_RESET                      \
>> +    DT_MATCH_COMPATIBLE("apm,xgene-reboot")
>
> The gic and timer use a #define here because it is needed in multiple
> places, for this use case you can just inline it into the array in
> xgene_storm_init. i.e.:
>
>> +static int xgene_storm_init(void)
>> +{
>> +    static const struct dt_device_match reset_ids[] __initconst =
>> +    {
>> +        DT_MATCH_RESET,
>
>            DT_MATCH_COMPATIBLE("apm,xgene-reboot")
>
> is fine IMHO.

Sure, I will fix this.

>
>> +        {},
>> +    };
>> +    struct dt_device_node *dev;
>> +    int res;
>> +
>> +    dev = dt_find_matching_node(NULL, reset_ids);
>> +    if ( !dev )
>> +    {
>> +        printk("Unable to find a compatible reset node in "
>> +               "the device tree");
>
> Please don't wrap string constants, it makes it hard to grep and I'd
> rather have a long line (in this case it's not too long either).
>
> Please can you add an xgene: (or whatever is appropriate) prefix too.

Yes, I will do it.
>
>> +        return -ENODEV;
>
> I wonder if it is worth returning success here? The system would be
> mostly functional after all.
>
> (You could apply this logic to the other returns if you wish, although
> if the node is present then an error in the content could be considered
> more critical to abort on)
Yeah, actually I also wondered about the correct return value, since the
system is mostly functional without this.
I will return success here.
>
>> +    }
>> +
>> +    dt_device_set_used_by(dev, DOMID_XEN);
>> +
>> +    /* Retrieve base address and size */
>> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
>> +    if ( res )
>> +    {
>> +        printk("Unable to retrieve the base address for reset\n");
>> +        return res;
>> +    }
>> +
>> +    /* Get reset mask */
>> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
>> +    if ( !res )
>> +    {
>> +        printk("Unable to retrieve the reset mask\n");
>> +        return res;
>> +    }
>
> All looks good, thanks.
Thanks, will send out a new patch today.
>
>
>> +
>> +    return 0;
>> +}
>>
>>  static const char * const xgene_storm_dt_compat[] __initconst =
>>  {
>> @@ -116,6 +184,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>>
>>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>>      .compatible = xgene_storm_dt_compat,
>> +    .init = xgene_storm_init,
>> +    .reset = xgene_storm_reset,
>>      .quirks = xgene_storm_quirks,
>>      .specific_mapping = xgene_storm_specific_mapping,
>>
>
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 08:00:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 08:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7h7E-0005w2-B2; Mon, 27 Jan 2014 08:00:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W7h7A-0005ut-88
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 08:00:17 +0000
Received: from [85.158.139.211:45667] by server-16.bemta-5.messagelabs.com id
	A5/3C-11843-F0216E25; Mon, 27 Jan 2014 08:00:15 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390809613!12089284!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15770 invoked from network); 27 Jan 2014 08:00:14 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 08:00:14 -0000
Received: by mail-la0-f47.google.com with SMTP id hr17so4148935lab.6
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 00:00:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=EXcrrw4//hCpXs8ORCpk3GoZTKjUniKqccGaIWtmSRc=;
	b=w/hCMs418oPQufAQPE4WUVcWCKIRMdTbMjWqFlurbvtnFgjOMgw7Ua9lNRnReOol5O
	6tWqyoI1AlFXHw8Qoic/ME44+TGkT2BPCFljpyp/r9ef/oxtHurbxQ3qXpV7l/v3Bbrv
	PkrQOUIybR/FdnIOU1qNGJx8wl1SOt//6LUtv3ZIKC6OaODVwugVxzey7dUzuSTjS/ca
	8PWWCyKFD1zSFnnkMphnXcfhGJo8MSJmm3gW/fL1ooJUL+F7fOdPS5Un6s4mtZtXMzSW
	96xZaXG2/4fttJsARoAAFofM/4qtHOwk4n3wk1DPNZ3ap88kt/TCTSGVL4ZMO6IZlRpT
	iGMg==
X-Received: by 10.152.5.136 with SMTP id s8mr78161las.55.1390809613197;
	Mon, 27 Jan 2014 00:00:13 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id sv5sm11265268lbb.9.2014.01.27.00.00.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 00:00:12 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <e9a0431c-9a1d-42a8-832d-69a718e55163@email.android.com>
Date: Mon, 27 Jan 2014 12:00:11 +0400
Message-Id: <733CE4A4-66DC-45DC-A9DA-2C254525C163@gmail.com>
References: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
	<e9a0431c-9a1d-42a8-832d-69a718e55163@email.android.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailer: Apple Mail (2.1283)
Cc: dilos-dev@lists.illumos.org,
	illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 27, 2014, at 2:16 AM, Konrad Rzeszutek Wilk wrote:

> Igor Kozhukhov <ikozhukhov@gmail.com> wrote:
>> Hi All,
>> 
>> I have good news: I have loaded xen-4.3 on DilOS (an illumos-based
>> platform) as dom0 (64-bit)
> 
> Woot!
> 
> Are there instructions in how to compile and load/use it?

Hi Konrad,
I need to commit my changes to the tree, and then I'll prepare instructions.

I have build instructions at:
https://dilos-dev.atlassian.net/wiki/display/DS/DilOS+platform+Home

For the dilos-illumos-gate build:
https://dilos-dev.atlassian.net/wiki/pages/viewpage.action?pageId=1343543

For userland packages:
https://dilos-dev.atlassian.net/wiki/pages/viewpage.action?pageId=1343545

How to configure Xen and load guests:
https://dilos-dev.atlassian.net/wiki/display/DS/Xen

For access to the sources you need an account on bitbucket.org.

For Xen builds you need to:
1. Build dilos-illumos-gate - the instructions are the same for xen-3.4 and xen-4.3, with different branches for the sources.
2. Build the components/xen package at dilos-userland-review.
3. Build additional userland packages: libvirt, virtinst, url grabber, vdisk.

For Xen-4.3 I'll add an additional xen-43 component to userland and commit the updated dilos-illumos-gate sources to branch 'xen4', because we need more updates on the kernel side for the new Xen.

At this moment I have tried to load a PV guest, but it failed;
I still have to work on the python script updates.

For development I'm using VMware ESXi 5.5 with Xen as a guest, plus serial output.

I have also fixed the 'xl' output, and now I have:

root@myhost:~# xl info
host                   : myhost
release                : 5.11
version                : 1.3.6-xen
machine                : i86pc
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2842
hw_caps                : 1fabfbff:20100800:00000000:00003b00:84082201:00000000:00000001:00000000
virt_caps              :
total_memory           : 4095
free_memory            : 1998
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .2-pre-xvm
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p 
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Sun Jan 26 22:36:31 2014 +0400 hg:ee626d805a13-dirty
xen_commandline        : console=com1 dom0_mem=2047M dom0_vcpus_pin=false watchdog=false
cc_compiler            : gcc (GCC) 4.7.3
cc_compile_by          : root
cc_compile_domain      : 
cc_compile_date        : Mon Jan 27 00:29:01 MSK 2014
xend_config_format     : 4

-Igor


> Thanks!


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 08:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 08:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7heO-0007CU-LN; Mon, 27 Jan 2014 08:34:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W7heN-0007CP-6E
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 08:34:35 +0000
Received: from [85.158.143.35:41224] by server-3.bemta-4.messagelabs.com id
	5B/04-32360-A1A16E25; Mon, 27 Jan 2014 08:34:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390811673!966830!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29832 invoked from network); 27 Jan 2014 08:34:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 08:34:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 27 Jan 2014 08:34:33 +0000
Message-Id: <52E6281C020000780011716B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 27 Jan 2014 08:34:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
	<52E2905D0200007800116BD2@nat28.tlf.novell.com>
	<52E29F27.50403@oracle.com>
In-Reply-To: <52E29F27.50403@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 18:13, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/24/2014 10:10 AM, Jan Beulich wrote:
>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>> +{
>>> +    int ret = -EINVAL;
>>> +    xen_pmu_params_t pmu_params;
>>> +    uint32_t mode;
>>> +
>>> +    switch ( op )
>>> +    {
>>> +    case XENPMU_mode_set:
>>> +        if ( !is_control_domain(current->domain) )
>>> +            return -EPERM;
>>> +
>>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>>> +            return -EFAULT;
>>> +
>>> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
>>> +        if ( mode & ~XENPMU_MODE_ON )
>>> +            return -EINVAL;
>> Please, if you add a new interface, think carefully about future
>> extension room: Here you ignore the upper 32 bits of .val instead
>> of making sure they're zero, thus making it impossible to assign
>> them some meaning later on.
> 
> I think I can leave this as is for now --- I am storing VPMU mode and 
> VPMU features in the Xen-private vpmu_mode, which is a 64-bit value.

You should drop the cast to a 32-bit value at the very least -
"leave this as is for now" reads like you don't need to make
any changes.

>>> +/* Parameters structure for HYPERVISOR_xenpmu_op call */
>>> +struct xen_pmu_params {
>>> +    /* IN/OUT parameters */
>>> +    union {
>>> +        struct version {
>>> +            uint8_t maj;
>>> +            uint8_t min;
>>> +        } version;
>>> +        uint64_t pad;
>>> +    } v;
>> Looking at the implementation above I don't see this ever being an
>> IN parameter.
> 
> Currently Xen doesn't care about version but in the future a guest may 
> specify what version of PMU it wants to use (I hope this day will never 
> come though...)

At which time you'd need to add something like a set-version sub-op,
which would then also be the time to make this as IN/OUT. Right now
it is just IN and hence should be marked as such.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 08:43:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 08:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7hmi-0007Ym-QU; Mon, 27 Jan 2014 08:43:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W7hmg-0007Yh-O2
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 08:43:10 +0000
Received: from [85.158.143.35:58372] by server-2.bemta-4.messagelabs.com id
	5D/A8-11386-E1C16E25; Mon, 27 Jan 2014 08:43:10 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390812187!969527!1
X-Originating-IP: [209.85.220.43]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18644 invoked from network); 27 Jan 2014 08:43:09 -0000
Received: from mail-pa0-f43.google.com (HELO mail-pa0-f43.google.com)
	(209.85.220.43)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 08:43:09 -0000
Received: by mail-pa0-f43.google.com with SMTP id rd3so5636598pab.16
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 00:43:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=L4mlK/J5Qdoa5WRgA+Z1lGbBDDJdNMwsvnuWvj8gVW0=;
	b=KWwFKqJJu+MQdyJhqNJ9fU8/PpspMdPRKajtBLCSu+rHoC8veZb+CMQqAhTRQHSVPN
	8nadJ/DYXXHjld57REJ1qyrgaw6IbY0jZoQB9bM/MoCEkcRkIl/MpRgYtggDzEdEtRmK
	REVoiPHunNfCnHEOyoEUgsKASpxs2sbMagu3MJgEAc/L2EFFlzyM4pA+mpt//IBwi24w
	wgz1oelHrKtIS/odZJ+2CoMFYT3iV8xzj/BLvzNpzMrQLj/WfBNnfiaMrRt2mnRcGaKn
	2jDjDR/O9IzVXV1BEPvmqsn9G36WUWu4FKiyK4kzrFyeltUC3g/n/jCGG6z/jD38Otw1
	Robg==
X-Gm-Message-State: ALoCoQkEdyCTSyRuPeAcSfAQ4iqsFanzwpTn1syhtZoHz6diFW/oeluCTfSYW5nDearhIHlNKWdI
X-Received: by 10.68.133.138 with SMTP id pc10mr2537129pbb.98.1390812187268;
	Mon, 27 Jan 2014 00:43:07 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id
	sd3sm29616576pbb.42.2014.01.27.00.43.02 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 00:43:06 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Mon, 27 Jan 2014 14:12:52 +0530
Message-Id: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V5] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

V5:
- Incorporating comments received on V4 patch.
V4:
- Removing TODO comment about retrieving reset base address from dts
  as that is done now.
V3:
- Retrieving reset base address and reset mask from device tree.
- Removed unnecessary header files included earlier.
V2:
- Removed unnecessary mdelay in code.
- Adding iounmap of the base address.
V1:
- Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   63 ++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..3825269 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,8 +20,14 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
+/* Variables to save reset address of soc during platform initialization. */
+static u64 reset_addr, reset_size;
+static u32 reset_mask;
+
 static uint32_t xgene_storm_quirks(void)
 {
     return PLATFORM_QUIRK_GIC_64K_STRIDE;
@@ -107,6 +113,61 @@ err:
     return ret;
 }
 
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    addr = ioremap_nocache(reset_addr, reset_size);
+
+    if ( !addr )
+    {
+        printk("XGENE: Unable to map xgene reset address\n");
+        return;
+    }
+
+    /* Write reset mask to base address */
+    writel(reset_mask, addr);
+
+    iounmap(addr);
+}
+
+static int xgene_storm_init(void)
+{
+    static const struct dt_device_match reset_ids[] __initconst =
+    {
+        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
+        {},
+    };
+    struct dt_device_node *dev;
+    int res;
+
+    dev = dt_find_matching_node(NULL, reset_ids);
+    if ( !dev )
+    {
+        printk("XGENE: Unable to find a compatible reset node in the device tree\n");
+        return 0;
+    }
+
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    /* Retrieve base address and size */
+    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
+    if ( res )
+    {
+        printk("XGENE: Unable to retrieve the base address for reset\n");
+        return 0;
+    }
+
+    /* Get reset mask */
+    res = dt_property_read_u32(dev, "mask", &reset_mask);
+    if ( !res )
+    {
+        printk("XGENE: Unable to retrieve the reset mask\n");
+        return 0;
+    }
+
+    return 0;
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +177,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .init = xgene_storm_init,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 09:24:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 09:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7iQ5-0000FD-PL; Mon, 27 Jan 2014 09:23:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W7iQ4-0000F8-V8
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 09:23:53 +0000
Received: from [193.109.254.147:55698] by server-10.bemta-14.messagelabs.com
	id F2/01-20752-8A526E25; Mon, 27 Jan 2014 09:23:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390814631!35727!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10123 invoked from network); 27 Jan 2014 09:23:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 09:23:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 27 Jan 2014 09:23:51 +0000
Message-Id: <52E633B202000078001171C7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 27 Jan 2014 09:23:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Philip Wernersbach" <philip.wernersbach@gmail.com>
References: <CAO5Rg11T5mota05vVY4TYQiMc2-jsd+xHXKa3L-ofTJ9boAWzA@mail.gmail.com>
In-Reply-To: <CAO5Rg11T5mota05vVY4TYQiMc2-jsd+xHXKa3L-ofTJ9boAWzA@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH][v4] xen: Pass the location of the ACPI RSDP
 to DOM0.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.01.14 at 18:15, Philip Wernersbach <philip.wernersbach@gmail.com> wrote:
> xen: [v4] Pass the location of the ACPI RSDP to DOM0.
> 
> Some machines, such as recent IBM servers, only allow the OS to get the
> ACPI RSDP from EFI. Since Xen nukes DOM0's ability to access EFI, DOM0
> cannot get the RSDP on these machines, leading to all sorts of
> functionality reductions.

As said before - the description reads as if Xen did something wrong
here. I think I explained in enough detail why this _has_ to be that
way. Hence this description is at least misleading, and thus not
acceptable.

> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -57,6 +57,7 @@ bool_t __initdata acpi_lapic;
>  bool_t __initdata acpi_ioapic;
> 
>  bool_t acpi_skip_timer_override __initdata;
> +bool_t acpi_rsdp_passthrough    __initdata;

I see absolutely no reason why this option can't be contained to
setup.c - (static, not declaration in any header).

And you surely don't need the multiple successive blanks.

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -75,6 +75,11 @@ custom_param("acpi", parse_acpi_param);
>  boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);
> 
>  /* **** Linux config option: propagated to domain0. */
> +/* acpi_rsdp_passthrough: Explicitly pass the ACPI RSDP pointer to */
> +/*                        domain0 via the acpi_rsdp option.        */
> +boolean_param("acpi_rsdp_passthrough", acpi_rsdp_passthrough);
> +
> +/* **** Linux config option: propagated to domain0. */
>  /* noapic: Disable IOAPIC setup. */
>  boolean_param("noapic", skip_ioapic_setup);
> 
> @@ -1378,6 +1383,26 @@ void __init __start_xen(unsigned long mbi_p)
>              safe_strcat(dom0_cmdline, " acpi=");
>              safe_strcat(dom0_cmdline, acpi_param);
>          }
> +        if ( efi_enabled && acpi_rsdp_passthrough &&
> +             !strstr(dom0_cmdline, "acpi_rsdp=") )
> +        {
> +            acpi_physical_address rp = acpi_os_get_root_pointer();
> +            char rp_str[sizeof(acpi_physical_address)*2 + 3];
> +
> +            if ( rp )
> +            {

If you already restrict the scopes of the newly added variables
(which I appreciate), please move the declaration of rp_str here.

> +                snprintf(rp_str, sizeof(acpi_physical_address)*2 + 3,
> +                         "%p", (void *)rp);

Both the use of %p and the cast to void * seem pretty bogus. I
don't recall earlier reviews having requested you to do so...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 09:34:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 09:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7iZo-0000bt-4R; Mon, 27 Jan 2014 09:33:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W7iZm-0000bo-Fh
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 09:33:54 +0000
Received: from [193.109.254.147:6720] by server-9.bemta-14.messagelabs.com id
	57/4F-13957-10826E25; Mon, 27 Jan 2014 09:33:53 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390815233!36677!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7965 invoked from network); 27 Jan 2014 09:33:53 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 09:33:53 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W7iZb-000GwJ-Ah; Mon, 27 Jan 2014 09:33:43 +0000
Date: Mon, 27 Jan 2014 10:33:43 +0100
From: Tim Deegan <tim@xen.org>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20140127093343.GA64086@deinos.phlegethon.org>
References: <52DEB887.8070409@citrix.com>
	<CAFLBxZZNnbFwjfVkqmB3OqkjjUukbf4AmbRhOSzHnwJDqLCEDQ@mail.gmail.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C7169@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C7169@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:29 +0000 on 26 Jan (1390721344), Zhang, Yang Z wrote:
> George Dunlap wrote on 2014-01-25:
> > On Tue, Jan 21, 2014 at 6:12 PM, Andrew Cooper
> > <andrew.cooper3@citrix.com> wrote:
> >> Hello,
> >> 
> >> I have been giving nested virt a try, and have my first bug to report.
> >> This is still ongoing, and is by no means complete yet.
> >> 
> >> Setup:
> >> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
> >> 
> >> Single Intel Haswell SDP (Grantley platform):
> >> Native hypervisor: XenServer
> >> 
> >> Two L1 guests:
> >>   XenServer (running with EPT)
> >>   XenServer (running with shadow)
> >> 
> >> When attempting to create an L2 EPT HVM domain under an L1 shadow
> >> domain, the L1 shadow domain is killed with:
> > 
> > Is EPT-on-shadow actually meant to work?  I wouldn't be surprised if
> > the L2 HAP stuff assumed that L1 was HAP as well.
> > 
> > In which case, if an L1 guest is started in shadow mode, then EPT
> > should not be advertised.
> 
> AFAIK, EPT-on-shadow is not supported. Shadow-on-shadow is buggy
> (actually, I have never tried it successfully from the first day I
> started working on nested stuff).

Fair enough.  That needs to be documented, and those modes (which I
guess means nested-on-shadow in general) need to be disabled in the
hypervisor, with a sensible error message.

> Shadow-on-EPT and EPT-on-EPT are working
> on my box, but I recommend using EPT-on-EPT if possible, because
> it is really a pain to run an L2 guest in shadow-on-shadow mode
> due to the poor performance.

Yeah, I think it's generally accepted that having shadow pagetables
anywhere in that stack is going to hurt.  Sadly, there's no way for
the L0 admin to stop the L1 hypervisor from using shadow pagetables,
so shadow-on-EPT ought to at least work correctly, even if performance
sucks.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:08:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7j6v-0002Km-QU; Mon, 27 Jan 2014 10:08:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7j6t-0002KK-NR
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 10:08:07 +0000
Received: from [85.158.143.35:5388] by server-1.bemta-4.messagelabs.com id
	4E/7E-02132-70036E25; Mon, 27 Jan 2014 10:08:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390817285!986533!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25150 invoked from network); 27 Jan 2014 10:08:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:08:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94729680"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 10:08:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:08:04 -0500
Message-ID: <1390817282.9890.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Matt Wilson <msw@linux.com>
Date: Mon, 27 Jan 2014 10:08:02 +0000
In-Reply-To: <20140125210659.GA15756@u109add4315675089e695.ant.amazon.com>
References: <1390505326-9368-1-git-send-email-msw@linux.com>
	<1390555293.2124.6.camel@kazak.uk.xensource.com>
	<1390577782.13513.8.camel@kazak.uk.xensource.com>
	<20140125210659.GA15756@u109add4315675089e695.ant.amazon.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>,
	Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-01-25 at 13:07 -0800, Matt Wilson wrote:
> On Fri, Jan 24, 2014 at 03:36:22PM +0000, Ian Campbell wrote:
> > On Fri, 2014-01-24 at 09:21 +0000, Ian Campbell wrote:
> > > On Thu, 2014-01-23 at 11:28 -0800, Matt Wilson wrote:
> > > > From: Matt Rushton <mrushton@amazon.com>
> > > > 
> > > > Currently shrink_free_pagepool() is called before the pages used for
> > > > persistent grants are released via free_persistent_gnts(). This
> > > > results in a memory leak when a VBD that uses persistent grants is
> > > > torn down.
> > > 
> > > This may well be the explanation for the memory leak I was observing on
> > > ARM last night. I'll give it a go and let you know.
> > 
> > Results are a bit inconclusive unfortunately, it seems like I am seeing
> > some other leak too (or instead).
> > 
> > Totally unscientifically it does seem to be leaking more slowly than
> > before, so perhaps this patch has helped, but nothing conclusive I'm
> > afraid.
> 
> Testing here looks good. I don't know if perhaps something else is
> going on with ARM...
> 
> > I don't think that quite qualifies for a Tested-by though, sorry.
> 
> How about an Acked-by? ;-)

I'm not at all familiar with the modern blkback code base so I'm afraid
it would be a pretty hollow Ack.

> 
> --msw
> 
> > Ian. 
> > 
> > > 
> > > > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> > > > Cc: Ian Campbell <Ian.Campbell@citrix.com>
> > > > Cc: David Vrabel <david.vrabel@citrix.com>
> > > > Cc: linux-kernel@vger.kernel.org
> > > > Cc: xen-devel@lists.xen.org
> > > > Cc: Anthony Liguori <aliguori@amazon.com>
> > > > Signed-off-by: Matt Rushton <mrushton@amazon.com>
> > > > Signed-off-by: Matt Wilson <msw@amazon.com>
> > > > ---
> > > >  drivers/block/xen-blkback/blkback.c |    6 +++---
> > > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > > > index 6620b73..30ef7b3 100644
> > > > --- a/drivers/block/xen-blkback/blkback.c
> > > > +++ b/drivers/block/xen-blkback/blkback.c
> > > > @@ -625,9 +625,6 @@ purge_gnt_list:
> > > >  			print_stats(blkif);
> > > >  	}
> > > >  
> > > > -	/* Since we are shutting down remove all pages from the buffer */
> > > > -	shrink_free_pagepool(blkif, 0 /* All */);
> > > > -
> > > >  	/* Free all persistent grant pages */
> > > >  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> > > >  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> > > > @@ -636,6 +633,9 @@ purge_gnt_list:
> > > >  	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> > > >  	blkif->persistent_gnt_c = 0;
> > > >  
> > > > +	/* Since we are shutting down remove all pages from the buffer */
> > > > +	shrink_free_pagepool(blkif, 0 /* All */);
> > > > +
> > > >  	if (log_stats)
> > > >  		print_stats(blkif);
> > > >  
> > > 
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> > 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:14:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jCO-00037w-Rs; Mon, 27 Jan 2014 10:13:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7jCM-00037l-RR
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 10:13:47 +0000
Received: from [85.158.143.35:17235] by server-1.bemta-4.messagelabs.com id
	A4/99-02132-A5136E25; Mon, 27 Jan 2014 10:13:46 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390817624!991705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17601 invoked from network); 27 Jan 2014 10:13:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:13:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94730872"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 10:13:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:13:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W7jCI-00078y-1V;
	Mon, 27 Jan 2014 10:13:42 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Mon, 27 Jan 2014 11:13:41 +0100
Message-ID: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've at least identified two possible memory leaks in blkback, both
related to the shutdown path of a VBD:

- We don't wait for any pending purge work to finish before cleaning
  the list of free_pages. The purge work will call put_free_pages and
  thus we might end up with pages being added to the free_pages list
  after we have emptied it.
- We don't wait for pending requests to end before cleaning persistent
  grants and the list of free_pages. Again this can add pages to the
  free_pages lists or persistent grants to the persistent_gnts
  red-black tree.

Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
This should be applied after the patch:

xen-blkback: fix memory leak when persistent grants are used

From Matt Rushton & Matt Wilson and backported to stable.

I've been able to create and destroy ~4000 guests while doing heavy IO
operations with this patch on a 512M Dom0 without problems.
---
 drivers/block/xen-blkback/blkback.c |   29 ++++++++++++++++++----------
 drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
 2 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 30ef7b3..19925b7 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 				struct pending_req *pending_req);
 static void make_response(struct xen_blkif *blkif, u64 id,
 			  unsigned short op, int st);
+static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
 
 #define foreach_grant_safe(pos, n, rbtree, node) \
 	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
@@ -625,6 +626,12 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
+	/* Drain pending IO */
+	xen_blk_drain_io(blkif, true);
+
+	/* Drain pending purge work */
+	flush_work(&blkif->persistent_purge_work);
+
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
 	return -EIO;
 }
 
-static void xen_blk_drain_io(struct xen_blkif *blkif)
+static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
 {
 	atomic_set(&blkif->drain, 1);
 	do {
@@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
 
 		if (!atomic_read(&blkif->drain))
 			break;
-	} while (!kthread_should_stop());
+	} while (!kthread_should_stop() || force);
 	atomic_set(&blkif->drain, 0);
 }
 
@@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	 * the proper response on the ring.
 	 */
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
-		xen_blkbk_unmap(pending_req->blkif,
+		struct xen_blkif *blkif = pending_req->blkif;
+
+		xen_blkbk_unmap(blkif,
 		                pending_req->segments,
 		                pending_req->nr_pages);
-		make_response(pending_req->blkif, pending_req->id,
+		make_response(blkif, pending_req->id,
 			      pending_req->operation, pending_req->status);
-		xen_blkif_put(pending_req->blkif);
-		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
-			if (atomic_read(&pending_req->blkif->drain))
-				complete(&pending_req->blkif->drain_complete);
+		free_req(blkif, pending_req);
+		xen_blkif_put(blkif);
+		if (atomic_read(&blkif->refcnt) <= 2) {
+			if (atomic_read(&blkif->drain))
+				complete(&blkif->drain_complete);
 		}
-		free_req(pending_req->blkif, pending_req);
 	}
 }
 
@@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	 * issue the WRITE_FLUSH.
 	 */
 	if (drain)
-		xen_blk_drain_io(pending_req->blkif);
+		xen_blk_drain_io(pending_req->blkif, false);
 
 	/*
 	 * If we have failed at this point, we need to undo the M2P override,
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index c2014a0..3c10281 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->persistent_gnts.rb_node = NULL;
 	spin_lock_init(&blkif->free_pages_lock);
 	INIT_LIST_HEAD(&blkif->free_pages);
+	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 
@@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
 	if (!atomic_dec_and_test(&blkif->refcnt))
 		BUG();
 
+	/* Make sure everything is drained before shutting down */
+	BUG_ON(blkif->persistent_gnt_c != 0);
+	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
+	BUG_ON(blkif->free_pages_num != 0);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
+	BUG_ON(!list_empty(&blkif->free_pages));
+	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+
 	/* Check that there is no request in use */
 	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
 		list_del(&req->free_list);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:14:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jCO-00037w-Rs; Mon, 27 Jan 2014 10:13:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7jCM-00037l-RR
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 10:13:47 +0000
Received: from [85.158.143.35:17235] by server-1.bemta-4.messagelabs.com id
	A4/99-02132-A5136E25; Mon, 27 Jan 2014 10:13:46 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390817624!991705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17601 invoked from network); 27 Jan 2014 10:13:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:13:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94730872"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 10:13:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:13:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W7jCI-00078y-1V;
	Mon, 27 Jan 2014 10:13:42 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Mon, 27 Jan 2014 11:13:41 +0100
Message-ID: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SSd2ZSBhdCBsZWFzdCBpZGVudGlmaWVkIHR3byBwb3NzaWJsZSBtZW1vcnkgbGVha3MgaW4gYmxr
YmFjaywgYm90aApyZWxhdGVkIHRvIHRoZSBzaHV0ZG93biBwYXRoIG9mIGEgVkJEOgoKLSBXZSBk
b24ndCB3YWl0IGZvciBhbnkgcGVuZGluZyBwdXJnZSB3b3JrIHRvIGZpbmlzaCBiZWZvcmUgY2xl
YW5pbmcKICB0aGUgbGlzdCBvZiBmcmVlX3BhZ2VzLiBUaGUgcHVyZ2Ugd29yayB3aWxsIGNhbGwg
cHV0X2ZyZWVfcGFnZXMgYW5kCiAgdGh1cyB3ZSBtaWdodCBlbmQgdXAgd2l0aCBwYWdlcyBiZWlu
ZyBhZGRlZCB0byB0aGUgZnJlZV9wYWdlcyBsaXN0CiAgYWZ0ZXIgd2UgaGF2ZSBlbXB0aWVkIGl0
LgotIFdlIGRvbid0IHdhaXQgZm9yIHBlbmRpbmcgcmVxdWVzdHMgdG8gZW5kIGJlZm9yZSBjbGVh
bmluZyBwZXJzaXN0ZW50CiAgZ3JhbnRzIGFuZCB0aGUgbGlzdCBvZiBmcmVlX3BhZ2VzLiBBZ2Fp
biB0aGlzIGNhbiBhZGQgcGFnZXMgdG8gdGhlCiAgZnJlZV9wYWdlcyBsaXN0cyBvciBwZXJzaXN0
ZW50IGdyYW50cyB0byB0aGUgcGVyc2lzdGVudF9nbnRzCiAgcmVkLWJsYWNrIHRyZWUuCgpBbHNv
LCBhZGQgc29tZSBjaGVja3MgaW4geGVuX2Jsa2lmX2ZyZWUgdG8gbWFrZSBzdXJlIHdlIGFyZSBj
bGVhbmluZwpldmVyeXRoaW5nLgoKU2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9n
ZXIucGF1QGNpdHJpeC5jb20+CkNjOiBLb25yYWQgUnplc3p1dGVrIFdpbGsgPGtvbnJhZC53aWxr
QG9yYWNsZS5jb20+CkNjOiBEYXZpZCBWcmFiZWwgPGRhdmlkLnZyYWJlbEBjaXRyaXguY29tPgpD
YzogQm9yaXMgT3N0cm92c2t5IDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT4KQ2M6IE1hdHQg
UnVzaHRvbiA8bXJ1c2h0b25AYW1hem9uLmNvbT4KQ2M6IE1hdHQgV2lsc29uIDxtc3dAYW1hem9u
LmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8SWFuLkNhbXBiZWxsQGNpdHJpeC5jb20+Ci0tLQpUaGlz
IHNob3VsZCBiZSBhcHBsaWVkIGFmdGVyIHRoZSBwYXRjaDoKCnhlbi1ibGtiYWNrOiBmaXggbWVt
b3J5IGxlYWsgd2hlbiBwZXJzaXN0ZW50IGdyYW50cyBhcmUgdXNlZAoKRnJvbSBNYXR0IFJ1c2h0
b24gJiBNYXR0IFdpbHNvbiBhbmQgYmFja3BvcnRlZCB0byBzdGFibGUuCgpJJ3ZlIGJlZW4gYWJs
ZSB0byBjcmVhdGUgYW5kIGRlc3Ryb3kgfjQwMDAgZ3Vlc3RzIHdoaWxlIGRvaW5nIGhlYXZ5IElP
Cm9wZXJhdGlvbnMgd2l0aCB0aGlzIHBhdGNoIG9uIGEgNTEyTSBEb20wIHdpdGhvdXQgcHJvYmxl
bXMuCi0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgfCAgIDI5ICsrKysr
KysrKysrKysrKysrKystLS0tLS0tLS0tCiBkcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1
cy5jICB8ICAgIDkgKysrKysrKysrCiAyIGZpbGVzIGNoYW5nZWQsIDI4IGluc2VydGlvbnMoKyks
IDEwIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sv
YmxrYmFjay5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMKaW5kZXggMzBl
ZjdiMy4uMTk5MjViNyAxMDA2NDQKLS0tIGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGti
YWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMKQEAgLTE2OSw2
ICsxNjksNyBAQCBzdGF0aWMgaW50IGRpc3BhdGNoX3J3X2Jsb2NrX2lvKHN0cnVjdCB4ZW5fYmxr
aWYgKmJsa2lmLAogCQkJCXN0cnVjdCBwZW5kaW5nX3JlcSAqcGVuZGluZ19yZXEpOwogc3RhdGlj
IHZvaWQgbWFrZV9yZXNwb25zZShzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZiwgdTY0IGlkLAogCQkJ
ICB1bnNpZ25lZCBzaG9ydCBvcCwgaW50IHN0KTsKK3N0YXRpYyB2b2lkIHhlbl9ibGtfZHJhaW5f
aW8oc3RydWN0IHhlbl9ibGtpZiAqYmxraWYsIGJvb2wgZm9yY2UpOwogCiAjZGVmaW5lIGZvcmVh
Y2hfZ3JhbnRfc2FmZShwb3MsIG4sIHJidHJlZSwgbm9kZSkgXAogCWZvciAoKHBvcykgPSBjb250
YWluZXJfb2YocmJfZmlyc3QoKHJidHJlZSkpLCB0eXBlb2YoKihwb3MpKSwgbm9kZSksIFwKQEAg
LTYyNSw2ICs2MjYsMTIgQEAgcHVyZ2VfZ250X2xpc3Q6CiAJCQlwcmludF9zdGF0cyhibGtpZik7
CiAJfQogCisJLyogRHJhaW4gcGVuZGluZyBJTyAqLworCXhlbl9ibGtfZHJhaW5faW8oYmxraWYs
IHRydWUpOworCisJLyogRHJhaW4gcGVuZGluZyBwdXJnZSB3b3JrICovCisJZmx1c2hfd29yaygm
YmxraWYtPnBlcnNpc3RlbnRfcHVyZ2Vfd29yayk7CisKIAkvKiBGcmVlIGFsbCBwZXJzaXN0ZW50
IGdyYW50IHBhZ2VzICovCiAJaWYgKCFSQl9FTVBUWV9ST09UKCZibGtpZi0+cGVyc2lzdGVudF9n
bnRzKSkKIAkJZnJlZV9wZXJzaXN0ZW50X2dudHMoYmxraWYsICZibGtpZi0+cGVyc2lzdGVudF9n
bnRzLApAQCAtOTMwLDcgKzkzNyw3IEBAIHN0YXRpYyBpbnQgZGlzcGF0Y2hfb3RoZXJfaW8oc3Ry
dWN0IHhlbl9ibGtpZiAqYmxraWYsCiAJcmV0dXJuIC1FSU87CiB9CiAKLXN0YXRpYyB2b2lkIHhl
bl9ibGtfZHJhaW5faW8oc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCitzdGF0aWMgdm9pZCB4ZW5f
YmxrX2RyYWluX2lvKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmLCBib29sIGZvcmNlKQogewogCWF0
b21pY19zZXQoJmJsa2lmLT5kcmFpbiwgMSk7CiAJZG8gewpAQCAtOTQzLDcgKzk1MCw3IEBAIHN0
YXRpYyB2b2lkIHhlbl9ibGtfZHJhaW5faW8oc3RydWN0IHhlbl9ibGtpZiAqYmxraWYpCiAKIAkJ
aWYgKCFhdG9taWNfcmVhZCgmYmxraWYtPmRyYWluKSkKIAkJCWJyZWFrOwotCX0gd2hpbGUgKCFr
dGhyZWFkX3Nob3VsZF9zdG9wKCkpOworCX0gd2hpbGUgKCFrdGhyZWFkX3Nob3VsZF9zdG9wKCkg
fHwgZm9yY2UpOwogCWF0b21pY19zZXQoJmJsa2lmLT5kcmFpbiwgMCk7CiB9CiAKQEAgLTk3Niwx
NyArOTgzLDE5IEBAIHN0YXRpYyB2b2lkIF9fZW5kX2Jsb2NrX2lvX29wKHN0cnVjdCBwZW5kaW5n
X3JlcSAqcGVuZGluZ19yZXEsIGludCBlcnJvcikKIAkgKiB0aGUgcHJvcGVyIHJlc3BvbnNlIG9u
IHRoZSByaW5nLgogCSAqLwogCWlmIChhdG9taWNfZGVjX2FuZF90ZXN0KCZwZW5kaW5nX3JlcS0+
cGVuZGNudCkpIHsKLQkJeGVuX2Jsa2JrX3VubWFwKHBlbmRpbmdfcmVxLT5ibGtpZiwKKwkJc3Ry
dWN0IHhlbl9ibGtpZiAqYmxraWYgPSBwZW5kaW5nX3JlcS0+YmxraWY7CisKKwkJeGVuX2Jsa2Jr
X3VubWFwKGJsa2lmLAogCQkgICAgICAgICAgICAgICAgcGVuZGluZ19yZXEtPnNlZ21lbnRzLAog
CQkgICAgICAgICAgICAgICAgcGVuZGluZ19yZXEtPm5yX3BhZ2VzKTsKLQkJbWFrZV9yZXNwb25z
ZShwZW5kaW5nX3JlcS0+YmxraWYsIHBlbmRpbmdfcmVxLT5pZCwKKwkJbWFrZV9yZXNwb25zZShi
bGtpZiwgcGVuZGluZ19yZXEtPmlkLAogCQkJICAgICAgcGVuZGluZ19yZXEtPm9wZXJhdGlvbiwg
cGVuZGluZ19yZXEtPnN0YXR1cyk7Ci0JCXhlbl9ibGtpZl9wdXQocGVuZGluZ19yZXEtPmJsa2lm
KTsKLQkJaWYgKGF0b21pY19yZWFkKCZwZW5kaW5nX3JlcS0+YmxraWYtPnJlZmNudCkgPD0gMikg
ewotCQkJaWYgKGF0b21pY19yZWFkKCZwZW5kaW5nX3JlcS0+YmxraWYtPmRyYWluKSkKLQkJCQlj
b21wbGV0ZSgmcGVuZGluZ19yZXEtPmJsa2lmLT5kcmFpbl9jb21wbGV0ZSk7CisJCWZyZWVfcmVx
KGJsa2lmLCBwZW5kaW5nX3JlcSk7CisJCXhlbl9ibGtpZl9wdXQoYmxraWYpOworCQlpZiAoYXRv
bWljX3JlYWQoJmJsa2lmLT5yZWZjbnQpIDw9IDIpIHsKKwkJCWlmIChhdG9taWNfcmVhZCgmYmxr
aWYtPmRyYWluKSkKKwkJCQljb21wbGV0ZSgmYmxraWYtPmRyYWluX2NvbXBsZXRlKTsKIAkJfQot
CQlmcmVlX3JlcShwZW5kaW5nX3JlcS0+YmxraWYsIHBlbmRpbmdfcmVxKTsKIAl9CiB9CiAKQEAg
LTEyMjQsNyArMTIzMyw3IEBAIHN0YXRpYyBpbnQgZGlzcGF0Y2hfcndfYmxvY2tfaW8oc3RydWN0
IHhlbl9ibGtpZiAqYmxraWYsCiAJICogaXNzdWUgdGhlIFdSSVRFX0ZMVVNILgogCSAqLwogCWlm
IChkcmFpbikKLQkJeGVuX2Jsa19kcmFpbl9pbyhwZW5kaW5nX3JlcS0+YmxraWYpOworCQl4ZW5f
YmxrX2RyYWluX2lvKHBlbmRpbmdfcmVxLT5ibGtpZiwgZmFsc2UpOwogCiAJLyoKIAkgKiBJZiB3
ZSBoYXZlIGZhaWxlZCBhdCB0aGlzIHBvaW50LCB3ZSBuZWVkIHRvIHVuZG8gdGhlIE0yUCBvdmVy
cmlkZSwKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMgYi9k
cml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jCmluZGV4IGMyMDE0YTAuLjNjMTAyODEg
MTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMKKysrIGIvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYwpAQCAtMTI1LDYgKzEyNSw3IEBAIHN0YXRp
YyBzdHJ1Y3QgeGVuX2Jsa2lmICp4ZW5fYmxraWZfYWxsb2MoZG9taWRfdCBkb21pZCkKIAlibGtp
Zi0+cGVyc2lzdGVudF9nbnRzLnJiX25vZGUgPSBOVUxMOwogCXNwaW5fbG9ja19pbml0KCZibGtp
Zi0+ZnJlZV9wYWdlc19sb2NrKTsKIAlJTklUX0xJU1RfSEVBRCgmYmxraWYtPmZyZWVfcGFnZXMp
OworCUlOSVRfTElTVF9IRUFEKCZibGtpZi0+cGVyc2lzdGVudF9wdXJnZV9saXN0KTsKIAlibGtp
Zi0+ZnJlZV9wYWdlc19udW0gPSAwOwogCWF0b21pY19zZXQoJmJsa2lmLT5wZXJzaXN0ZW50X2du
dF9pbl91c2UsIDApOwogCkBAIC0yNTksNiArMjYwLDE0IEBAIHN0YXRpYyB2b2lkIHhlbl9ibGtp
Zl9mcmVlKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmKQogCWlmICghYXRvbWljX2RlY19hbmRfdGVz
dCgmYmxraWYtPnJlZmNudCkpCiAJCUJVRygpOwogCisJLyogTWFrZSBzdXJlIGV2ZXJ5dGhpbmcg
aXMgZHJhaW5lZCBiZWZvcmUgc2h1dHRpbmcgZG93biAqLworCUJVR19PTihibGtpZi0+cGVyc2lz
dGVudF9nbnRfYyAhPSAwKTsKKwlCVUdfT04oYXRvbWljX3JlYWQoJmJsa2lmLT5wZXJzaXN0ZW50
X2dudF9pbl91c2UpICE9IDApOworCUJVR19PTihibGtpZi0+ZnJlZV9wYWdlc19udW0gIT0gMCk7
CisJQlVHX09OKCFsaXN0X2VtcHR5KCZibGtpZi0+cGVyc2lzdGVudF9wdXJnZV9saXN0KSk7CisJ
QlVHX09OKCFsaXN0X2VtcHR5KCZibGtpZi0+ZnJlZV9wYWdlcykpOworCUJVR19PTighUkJfRU1Q
VFlfUk9PVCgmYmxraWYtPnBlcnNpc3RlbnRfZ250cykpOworCiAJLyogQ2hlY2sgdGhhdCB0aGVy
ZSBpcyBubyByZXF1ZXN0IGluIHVzZSAqLwogCWxpc3RfZm9yX2VhY2hfZW50cnlfc2FmZShyZXEs
IG4sICZibGtpZi0+cGVuZGluZ19mcmVlLCBmcmVlX2xpc3QpIHsKIAkJbGlzdF9kZWwoJnJlcS0+
ZnJlZV9saXN0KTsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhl
bi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:21:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jJt-0003dd-V3; Mon, 27 Jan 2014 10:21:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7jJs-0003dY-DE
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 10:21:32 +0000
Received: from [85.158.139.211:47922] by server-12.bemta-5.messagelabs.com id
	51/B5-30017-B2336E25; Mon, 27 Jan 2014 10:21:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390818089!9426141!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15695 invoked from network); 27 Jan 2014 10:21:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:21:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96748799"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 10:21:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:21:28 -0500
Message-ID: <1390818087.9890.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Mon, 27 Jan 2014 10:21:27 +0000
In-Reply-To: <20140127070319.GQ2924@reaktio.net>
References: <1390663194541-5720926.post@n5.nabble.com>
	<1390741536939-5720933.post@n5.nabble.com>
	<1390742199614-5720934.post@n5.nabble.com>
	<20140127070319.GQ2924@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xennn <openbg@abv.bg>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] how QEMU is integrated with xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gTW9uLCAyMDE0LTAxLTI3IGF0IDA5OjAzICswMjAwLCBQYXNpIEvDpHJra8OkaW5lbiB3cm90
ZToKPiBPbiBTdW4sIEphbiAyNiwgMjAxNCBhdCAwNToxNjozOUFNIC0wODAwLCB4ZW5ubiB3cm90
ZToKPiA+IGp1c3Qgb25lIG1vcmUgcXVlc3Rpb24gLi4uIGlzIGl0IHBvc3NpYmxlIHhlbiB0byBz
dXBwb3J0IFNNUCBndXN0cyA/Cj4gPiAKPiAKPiBYZW4gc3VwcG9ydHMgbXVsdGlwbGUgdmNwdXMg
b2YgY291cnNlLi4KCnhlbm5uLCBwbGVhc2UgY291bGQgeW91IGRvIHNvbWUgcmVzZWFyY2ggb2Yg
eW91ciBvd24gdXNpbmcgc2VhcmNoCmVuZ2luZXMgYW5kIHRoZSB3aWtpL2RvY3MvZXRjIGJlZm9y
ZSBhc2tpbmcgcXVlc3Rpb25zIGhlcmUuCgpJYW4uCgoKCl9fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVs
QGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:23:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jLP-0003iA-EP; Mon, 27 Jan 2014 10:23:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W7jLN-0003i2-Nf
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 10:23:05 +0000
Received: from [85.158.139.211:50645] by server-14.bemta-5.messagelabs.com id
	2E/05-24200-98336E25; Mon, 27 Jan 2014 10:23:05 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390818181!259225!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5104 invoked from network); 27 Jan 2014 10:23:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:23:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94733075"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 10:23:00 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:22:59 -0500
Message-ID: <52E63382.6090503@citrix.com>
Date: Mon, 27 Jan 2014 10:22:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Annie Li <Annie.li@oracle.com>
References: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
In-Reply-To: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: netdev@vger.kernel.org, wei.liu2@citrix.com, ian.campbell@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net-next v5] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/01/14 10:12, Annie Li wrote:
> From: Annie Li <annie.li@oracle.com>
> 
> This patch removes grant transfer releasing code from netfront, and uses
> gnttab_end_foreign_access to end grant access since
> gnttab_end_foreign_access_ref may fail when the grant entry is
> currently used for reading or writing.
> 
> * clean up grant transfer code kept from old netfront(2.6.18) which grants
> pages for access/map and transfer. But grant transfer is deprecated in current
> netfront, so remove corresponding release code for transfer.
> 
> * release grant access (through gnttab_end_foreign_access) and skb for tx/rx path,
> use get_page to ensure page is released when grant access is completed successfully.
> 
> Xen-blkfront/xen-tpmfront/xen-pcifront also have similar issue, but patches
> for them will be created separately.
> 
> V5: Remove unnecessary change in xennet_end_access.
> 
> V4: Revert put_page in gnttab_end_foreign_access, and keep netfront change in
> single patch.
> 
> V3: Changes as suggested by David Vrabel, ensure pages are not freed until
> grant access is ended.
> 
> V2: Improve patch comments.
> 
> Signed-off-by: Annie Li <annie.li@oracle.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

I think this should be applied to net (and tagged as a stable candidate)
rather than net-next, as this fixes a very big resource leak.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:26:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jOB-0003ry-1L; Mon, 27 Jan 2014 10:25:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W7jO9-0003rs-PE
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 10:25:58 +0000
Received: from [85.158.139.211:44119] by server-11.bemta-5.messagelabs.com id
	C5/DA-23268-53436E25; Mon, 27 Jan 2014 10:25:57 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390818251!11933891!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31090 invoked from network); 27 Jan 2014 10:24:12 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.162)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 10:24:12 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390818251; l=446;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=LJdzdBfhif4eCacSj9kDI8zwSP8=;
	b=GxyQDHWSMA6NL1WGMcDVaR0AMTHxsEGXsHffyCSXJBlnQWd7xoFWjLPAKDErVwtOXwW
	UVCHjFvAv5C75eJDIL3FOMDQjuvp5Ubcq8bAUWkeBl6JXhGqFKPSFUY5jvEM8YevMnOWZ
	IlcMJwjUpyWMSRI08V1BtvaYEul1HHT3WB4=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.21 AUTH) with ESMTPSA id j01c5cq0RAO4TVd
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 27 Jan 2014 11:24:04 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id A490E50266; Mon, 27 Jan 2014 11:24:03 +0100 (CET)
Date: Mon, 27 Jan 2014 11:24:03 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140127102403.GA30713@aepfle.de>
References: <1389736679-15637-1-git-send-email-olaf@aepfle.de>
	<52D955A902000078001149E2@nat28.tlf.novell.com>
	<20140122211449.GA10426@phenom.dumpdata.com>
	<52E0DEAC020000780011603E@nat28.tlf.novell.com>
	<20140124143028.GA12946@phenom.dumpdata.com>
	<52E28A620200007800116B34@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E28A620200007800116B34@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, Jan Beulich wrote:

> Not with my reading of it - when using "must", it's clear that both
> need to be present. When using "may", one can read it that
> providing just one is fine too. But I agree that the wording with
> "must" isn't fully correct either. I'd go for "should", and extend the
> sentence by "; providing just one of the two may be considered an
> error by the frontend".

I will make that change.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:26:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jOP-0003u8-K6; Mon, 27 Jan 2014 10:26:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1W7jOO-0003tn-Gl
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 10:26:12 +0000
Received: from [85.158.139.211:3297] by server-17.bemta-5.messagelabs.com id
	92/B2-19152-34436E25; Mon, 27 Jan 2014 10:26:11 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390818366!12129390!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22312 invoked from network); 27 Jan 2014 10:26:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:26:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96749790"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 10:26:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:26:06 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1W7jOG-0007Ln-UV; Mon, 27 Jan 2014 10:26:04 +0000
Message-ID: <52E6343C.70504@citrix.com>
Date: Mon, 27 Jan 2014 10:26:04 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1389803004-31812-1-git-send-email-andrew.bennieston@citrix.com>
	<1389803004-31812-5-git-send-email-andrew.bennieston@citrix.com>
	<20140124180542.GA15785@phenom.dumpdata.com>
In-Reply-To: <20140124180542.GA15785@phenom.dumpdata.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for
 multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 18:05, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Build on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Check XenStore for multi-queue support, and set up the rings and event
>> channels accordingly.
>>
>> Write ring references and event channels to XenStore in a queue
>> hierarchy if appropriate, or flat when using only one queue.
>>
>> Update the xennet_select_queue() function to choose the queue on which
>> to transmit a packet based on the skb rxhash result.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netfront.c |  167 ++++++++++++++++++++++++++++++++++----------
>>   1 file changed, 130 insertions(+), 37 deletions(-)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index 508ea96..9b08da5 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -57,6 +57,10 @@
>>   #include <xen/interface/memory.h>
>>   #include <xen/interface/grant_table.h>
>>
>> +/* Module parameters */
>> +unsigned int xennet_max_queues = 16;
>
> Should this be based on some form of num_online_cpus()?

Perhaps. This would have the effect of setting the number of queues to
min(num dom0 cpus, num domU cpus). The logical argument behind this is
that if you have more than one queue per CPU you're potentially
overloading the CPUs, but the more queues you have, the more opportunity
there is to split traffic and load balance across the available
resources, with the kernel scheduler taking care of which CPUs to run
which threads on.

Either way, num_online_cpus() certainly makes for a sensible default,
and I'll make that change in the V2 series.

>
>> +module_param(xennet_max_queues, uint, 0644);
>
>
> How about just 'max_queues' ? Without the 'xennet'?

Yes, I can remove the 'xennet' part.

Thanks for the feedback.
-Andrew

>
>> +
>>   static const struct ethtool_ops xennet_ethtool_ops;
>>
>>   struct netfront_cb {
>> @@ -556,8 +560,19 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
>>
>>   static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
>>   {
>> -	/* Stub for later implementation of queue selection */
>> -	return 0;
>> +	struct netfront_info *info = netdev_priv(dev);
>> +	u32 hash;
>> +	u16 queue_idx;
>> +
>> +	/* First, check if there is only one queue */
>> +	if (info->num_queues == 1)
>> +		queue_idx = 0;
>> +	else {
>> +		hash = skb_get_rxhash(skb);
>> +		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
>> +	}
>> +
>> +	return queue_idx;
>>   }
>>
>>   static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>> @@ -1361,7 +1376,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>>   	struct net_device *netdev;
>>   	struct netfront_info *np;
>>
>> -	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
>> +	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
>>   	if (!netdev)
>>   		return ERR_PTR(-ENOMEM);
>>
>> @@ -1731,6 +1746,89 @@ static int xennet_init_queue(struct netfront_queue *queue)
>>   	return err;
>>   }
>>
>> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
>> +			   struct xenbus_transaction *xbt, int write_hierarchical)
>> +{
>> +	/* Write the queue-specific keys into XenStore in the traditional
>> +	 * way for a single queue, or in per-queue subkeys for multiple
>> +	 * queues.
>> +	 */
>> +	struct xenbus_device *dev = queue->info->xbdev;
>> +	int err;
>> +	const char *message;
>> +	char *path;
>> +	size_t pathsize;
>> +
>> +	/* Choose the correct place to write the keys */
>> +	if (write_hierarchical) {
>> +		pathsize = strlen(dev->nodename) + 10;
>> +		path = kzalloc(pathsize, GFP_KERNEL);
>> +		if (!path) {
>> +			err = -ENOMEM;
>> +			message = "writing ring references";
>> +			goto error;
>> +		}
>> +		snprintf(path, pathsize, "%s/queue-%u",
>> +				dev->nodename, queue->number);
>> +	}
>> +	else
>> +		path = (char *)dev->nodename;
>> +
>> +	/* Write ring references */
>> +	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
>> +			queue->tx_ring_ref);
>> +	if (err) {
>> +		message = "writing tx-ring-ref";
>> +		goto error;
>> +	}
>> +
>> +	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
>> +			queue->rx_ring_ref);
>> +	if (err) {
>> +		message = "writing rx-ring-ref";
>> +		goto error;
>> +	}
>> +
>> +	/* Write event channels; taking into account both shared
>> +	 * and split event channel scenarios.
>> +	 */
>> +	if (queue->tx_evtchn == queue->rx_evtchn) {
>> +		/* Shared event channel */
>> +		err = xenbus_printf(*xbt, path,
>> +				"event-channel", "%u", queue->tx_evtchn);
>> +		if (err) {
>> +			message = "writing event-channel";
>> +			goto error;
>> +		}
>> +	}
>> +	else {
>> +		/* Split event channels */
>> +		err = xenbus_printf(*xbt, path,
>> +				"event-channel-tx", "%u", queue->tx_evtchn);
>> +		if (err) {
>> +			message = "writing event-channel-tx";
>> +			goto error;
>> +		}
>> +
>> +		err = xenbus_printf(*xbt, path,
>> +				"event-channel-rx", "%u", queue->rx_evtchn);
>> +		if (err) {
>> +			message = "writing event-channel-rx";
>> +			goto error;
>> +		}
>> +	}
>> +
>> +	if (write_hierarchical)
>> +		kfree(path);
>> +	return 0;
>> +
>> +error:
>> +	if (write_hierarchical)
>> +		kfree(path);
>> +	xenbus_dev_fatal(dev, err, "%s", message);
>> +	return err;
>> +}
>> +
>>   /* Common code used when first setting up, and when resuming. */
>>   static int talk_to_netback(struct xenbus_device *dev,
>>   			   struct netfront_info *info)
>> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
>>   	int err;
>>   	unsigned int feature_split_evtchn;
>>   	unsigned int i = 0;
>> +	unsigned int max_queues = 0;
>>   	struct netfront_queue *queue = NULL;
>>
>>   	info->netdev->irq = 0;
>>
>> +	/* Check if backend supports multiple queues */
>> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>> +			"multi-queue-max-queues", "%u", &max_queues);
>> +	if (err < 0)
>> +		max_queues = 1;
>> +
>>   	/* Check feature-split-event-channels */
>>   	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>>   			   "feature-split-event-channels", "%u",
>> @@ -1759,12 +1864,12 @@ static int talk_to_netback(struct xenbus_device *dev,
>>   	}
>>
>>   	/* Allocate array of queues */
>> -	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
>> +	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
>>   	if (!info->queues) {
>>   		err = -ENOMEM;
>>   		goto out;
>>   	}
>> -	info->num_queues = 1;
>> +	info->num_queues = max_queues;
>>
>>   	/* Create shared ring, alloc event channel -- for each queue */
>>   	for (i = 0; i < info->num_queues; ++i) {
>> @@ -1800,49 +1905,36 @@ static int talk_to_netback(struct xenbus_device *dev,
>>   	}
>>
>>   again:
>> -	queue = &info->queues[0]; /* Use first queue only */
>> -
>>   	err = xenbus_transaction_start(&xbt);
>>   	if (err) {
>>   		xenbus_dev_fatal(dev, err, "starting transaction");
>>   		goto destroy_ring;
>>   	}
>>
>> -	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
>> -			    queue->tx_ring_ref);
>> -	if (err) {
>> -		message = "writing tx ring-ref";
>> -		goto abort_transaction;
>> -	}
>> -	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
>> -			    queue->rx_ring_ref);
>> -	if (err) {
>> -		message = "writing rx ring-ref";
>> -		goto abort_transaction;
>> +	if (info->num_queues == 1) {
>> +		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
>> +		if (err)
>> +			goto abort_transaction_no_dev_fatal;
>>   	}
>> -
>> -	if (queue->tx_evtchn == queue->rx_evtchn) {
>> -		err = xenbus_printf(xbt, dev->nodename,
>> -				    "event-channel", "%u", queue->tx_evtchn);
>> +	else {
>> +		/* Write the number of queues */
>> +		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
>> +				"%u", info->num_queues);
>>   		if (err) {
>> -			message = "writing event-channel";
>> -			goto abort_transaction;
>> +			message = "writing multi-queue-num-queues";
>> +			goto abort_transaction_no_dev_fatal;
>>   		}
>> -	} else {
>> -		err = xenbus_printf(xbt, dev->nodename,
>> -				    "event-channel-tx", "%u", queue->tx_evtchn);
>> -		if (err) {
>> -			message = "writing event-channel-tx";
>> -			goto abort_transaction;
>> -		}
>> -		err = xenbus_printf(xbt, dev->nodename,
>> -				    "event-channel-rx", "%u", queue->rx_evtchn);
>> -		if (err) {
>> -			message = "writing event-channel-rx";
>> -			goto abort_transaction;
>> +
>> +		/* Write the keys for each queue */
>> +		for (i = 0; i < info->num_queues; ++i) {
>> +			queue = &info->queues[i];
>> +			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
>> +			if (err)
>> +				goto abort_transaction_no_dev_fatal;
>>   		}
>>   	}
>>
>> +	/* The remaining keys are not queue-specific */
>>   	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
>>   			    1);
>>   	if (err) {
>> @@ -1879,8 +1971,9 @@ again:
>>   	return 0;
>>
>>    abort_transaction:
>> -	xenbus_transaction_end(xbt, 1);
>>   	xenbus_dev_fatal(dev, err, "%s", message);
>> +abort_transaction_no_dev_fatal:
>> +	xenbus_transaction_end(xbt, 1);
>>    destroy_ring:
>>   	xennet_disconnect_backend(info);
>>   	kfree(info->queues);
>> --
>> 1.7.10.4
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:32:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jUb-0004S8-KN; Mon, 27 Jan 2014 10:32:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7jUZ-0004S3-UA
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 10:32:36 +0000
Received: from [85.158.137.68:50517] by server-6.bemta-3.messagelabs.com id
	E9/AE-04868-3C536E25; Mon, 27 Jan 2014 10:32:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390818752!10349923!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20128 invoked from network); 27 Jan 2014 10:32:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:32:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96751119"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 10:32:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 05:32:32 -0500
Message-ID: <1390818750.12230.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Mon, 27 Jan 2014 10:32:30 +0000
In-Reply-To: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
References: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 14:12 +0530, Pranavkumar Sawargaonkar wrote:
> +    /* Retrieve base address and size */
> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> +    if ( res )
> +    {
> +        printk("XGENE: Unable to retrieve the base address for reset\n");
> +        return 0;
> +    }
> +
> +    /* Get reset mask */
> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> +    if ( !res )
> +    {

At this point reset_addr and reset_size are set and xgene_storm_reset
will try to use them with reset_mask -- which I suppose is 0 at this
point (due to .bss initialisation + dt_prop_read not touching it on
failure).

Is that safe / ok?

In fact, on failure of dt_device_get_address, xgene_storm_reset will
try to use the values too, and they may be uninitialised or may be
DT_BAD_ADDR depending on where the failure occurred.

Could this be potentially harmful? Obviously it is not expected to
successfully reset under these circumstances, but what else could it do?
(e.g. turn off the fan and melt the system?)

I'd suggest setting a flag right at the end which xgene_storm_reset can
check: either an explicit boolean, or initialise reset_addr to ~(u64)0
when it is declared, gather the address into a local variable, and set
the global only after init has succeeded.

(I'd also accept your assurances that writing to random memory locations
is safe, on the off chance that this is true ;-))

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:36:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:36:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jYS-0004ag-GL; Mon, 27 Jan 2014 10:36:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W7jYQ-0004ab-Sd
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 10:36:35 +0000
Received: from [85.158.139.211:43967] by server-8.bemta-5.messagelabs.com id
	00/F5-29838-1B636E25; Mon, 27 Jan 2014 10:36:33 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390818992!1354993!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32543 invoked from network); 27 Jan 2014 10:36:33 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 10:36:33 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390818992; l=2570;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=qJiRAMxXBJnBYoJZq0cRr/mC4i0=;
	b=fNNYs2Fr0SIJffbotfuXH2xMOCX57+8Zo/hudzn9UchYj0uuJDc8A6JB/4XD+vg8/De
	RGSbP5OnhLu87s2Br9Zj3bgnl5jSkwV0CIIpwtxciBzu5VeFsn+ZufUgFJT52ADJsrTAX
	SI61asBAqCmjXD6iARA7T6CeN8gWPaveX3I=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.21 AUTH) with ESMTPSA id y00a88q0RAaNVvp
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 27 Jan 2014 11:36:23 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 00ADA50266; Mon, 27 Jan 2014 11:36:22 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: JBeulich@suse.com,
	konrad.wilk@oracle.com
Date: Mon, 27 Jan 2014 11:36:10 +0100
Message-Id: <1390818970-737-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v2] blkif.h: enhance comments related to the
	discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Also fix the name of the discard-alignment property, add the missing 'n'.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v2:
include changes suggested by Jan and Konrad which make it clearer that
both properties have to be present, if required.

 xen/include/public/io/blkif.h | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 84eb7fd..542f123 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,7 +175,7 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
- * discard-aligment
+ * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0
  *      Notes:          4, 5
@@ -194,6 +194,7 @@
  * discard-secure
  *      Values:         0/1 (boolean)
  *      Default Value:  0
+ *      Notes:          10
  *
  *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
  *      requests with the BLKIF_DISCARD_SECURE flag set.
@@ -323,9 +324,15 @@
  *     For full interoperability, block front and backends should publish
  *     identical ring parameters, adjusted for unit differences, to the
  *     XenStore nodes used in both schemes.
- * (4) Devices that support discard functionality may internally allocate
- *     space (discardable extents) in units that are larger than the
- *     exported logical block size.
+ * (4) Devices that support discard functionality may internally allocate space
+ *     (discardable extents) in units that are larger than the exported logical
+ *     block size. If the backing device has such discardable extents the
+ *     backend should provide both discard-granularity and discard-alignment.
+ *     Providing just one of the two may be considered an error by the frontend.
+ *     Backends supporting discard should include discard-granularity and
+ *     discard-alignment even if they support discarding individual sectors.
+ *     Frontends should assume discard-alignment == 0 and discard-granularity
+ *     == sector size if these keys are missing.
  * (5) The discard-alignment parameter allows a physical device to be
  *     partitioned into virtual devices that do not necessarily begin or
  *     end on a discardable extent boundary.
@@ -344,6 +351,8 @@
  *     grants that can be persistently mapped in the frontend driver, but
  *     due to the frontent driver implementation it should never be bigger
  *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
+ *(10) The discard-secure property may be present and will be set to 1 if the
+ *     backing device supports secure discard.
  */
 
 /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:45:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jgc-00053T-W5; Mon, 27 Jan 2014 10:45:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7jga-00053M-Mw
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 10:45:01 +0000
Received: from [193.109.254.147:14144] by server-2.bemta-14.messagelabs.com id
	22/2C-00361-CA836E25; Mon, 27 Jan 2014 10:45:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390819496!58191!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28486 invoked from network); 27 Jan 2014 10:44:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:44:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96753212"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 10:44:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 05:44:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7jgU-0002Jf-LT;
	Mon, 27 Jan 2014 10:44:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7jgU-0003eR-Fv;
	Mon, 27 Jan 2014 10:44:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24542-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 10:44:54 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24542: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24542 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24542/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24507-bisect
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ branch=xen-unstable
+ revision=9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   650fc2f..9c7e789  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5 -> master
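
[Archive editor's note: the trace above re-execs the whole script under an exclusive lock (with-lock-ex, from chiark-utils), using OSSTEST_REPOS_LOCK_LOCKED to detect that the lock is already held and so avoid re-exec'ing forever. A rough sketch of the same idiom, using flock(1) in place of with-lock-ex; the variable and path names here are illustrative, not osstest's:

```shell
#!/bin/sh
set -e

lock=/tmp/demo-repos-lock

# If we do not yet hold the lock, re-exec ourselves under it.
# DEMO_LOCK_LOCKED plays the role of OSSTEST_REPOS_LOCK_LOCKED:
# once it matches the lock path, the second pass skips the exec.
if [ "x$DEMO_LOCK_LOCKED" != "x$lock" ]; then
    DEMO_LOCK_LOCKED=$lock
    export DEMO_LOCK_LOCKED
    exec flock -x "$lock" sh "$0" "$@"
fi

# From here on we run with the lock held.
echo "locked: $DEMO_LOCK_LOCKED"
```

The exec replaces the unlocked process entirely, so the critical section can never run twice concurrently and the lock is released automatically when the script exits.]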

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 10:45:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 10:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7jgc-00053T-W5; Mon, 27 Jan 2014 10:45:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7jga-00053M-Mw
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 10:45:01 +0000
Received: from [193.109.254.147:14144] by server-2.bemta-14.messagelabs.com id
	22/2C-00361-CA836E25; Mon, 27 Jan 2014 10:45:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390819496!58191!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28486 invoked from network); 27 Jan 2014 10:44:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 10:44:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96753212"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 10:44:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 05:44:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7jgU-0002Jf-LT;
	Mon, 27 Jan 2014 10:44:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7jgU-0003eR-Fv;
	Mon, 27 Jan 2014 10:44:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24542-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 10:44:54 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24542: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24542 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24542/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail blocked in 24473
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24507-bisect
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24473
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24473

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  650fc2f76d0a156e23703683d0c18fa262ecea36

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ branch=xen-unstable
+ revision=9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 9c7e789a1b60b6114e0b1ef16dff95f03f532fb5:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   650fc2f..9c7e789  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:06:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7k0q-0005la-Kn; Mon, 27 Jan 2014 11:05:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W7k0p-0005lV-P5
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 11:05:56 +0000
Received: from [85.158.139.211:12988] by server-4.bemta-5.messagelabs.com id
	94/BF-26791-29D36E25; Mon, 27 Jan 2014 11:05:54 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390820752!12141828!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11934 invoked from network); 27 Jan 2014 11:05:54 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:05:54 -0000
Received: by mail-qa0-f45.google.com with SMTP id ii20so7029902qab.18
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 03:05:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=RTtsWDq6Y2boszQr1GPepLr56PhtdPR0WcGtvio3NIg=;
	b=O8Kt3OUziu2tyj/pP5oF0wMw4CGodD3eDBEGwd8S01kr6N9Z48YpRmU6Dj3qItb/nS
	bwyMePvJI9SeJDPuhCMIrti2tmWTHzmMz4YNxAlRFjcitqncreHknoYoOxAVFnSHHmoZ
	oSzYfqCmKkP0p/6Ai4tT8BWHrH48NSyZMh9FdzP8GMDRpwaAcQ3R6++mvFQo/7hd1Piv
	tIL1i0DqM015Ry9TFRr2paDXzDO37RVaVnG61//hFNpqY+xEsU429PVsuwRLmgcTde+O
	iUkUEipvAtCm3SswLqRxBavFf04G3ZGM/yIMTxBbfoyBtArAIF2ezzAn6a7eQ/7qCZRf
	A9sw==
X-Gm-Message-State: ALoCoQmUaeSnEWUUz7xV7GkhtnIvsWkLCXsh5lOHezALWR5E5GQfqhDc9hTTmGlTDnj6meF9co8Q
MIME-Version: 1.0
X-Received: by 10.140.23.6 with SMTP id 6mr39092857qgo.17.1390820752550; Mon,
	27 Jan 2014 03:05:52 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Mon, 27 Jan 2014 03:05:52 -0800 (PST)
In-Reply-To: <1390818750.12230.7.camel@kazak.uk.xensource.com>
References: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
	<1390818750.12230.7.camel@kazak.uk.xensource.com>
Date: Mon, 27 Jan 2014 16:35:52 +0530
Message-ID: <CAAHg+Hj2drp5RLKRTbgQsa24TrJLe8w-yFJTMzr-x7_teKOyBg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 27 January 2014 16:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-27 at 14:12 +0530, Pranavkumar Sawargaonkar wrote:
>> +    /* Retrieve base address and size */
>> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
>> +    if ( res )
>> +    {
>> +        printk("XGENE: Unable to retrieve the base address for reset\n");
>> +        return 0;
>> +    }
>> +
>> +    /* Get reset mask */
>> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
>> +    if ( !res )
>> +    {
>
> At this point reset_addr and reset_size are set and xgene_storm_reset
> will try to use them with reset_mask -- which I suppose is 0 at this
> point (due to .bss initialisation + dt_prop_read not touching it on
> failure).
I thought that if reset_addr or reset_size is 0 then ioremap will not
return success, so the code will return without writing the register.
But I think the mask can cause an issue if uninitialized, so I will add
a global flag (valid_reset_vals) and set it at the end of
xgene_storm_init; in the function xgene_storm_reset I will first check
whether this flag is set before proceeding with the reset.
Is this fine?

>
> Is that safe / ok?
>
> In fact, on failure of the dt_device_get_address xgene_storm_reset will
> try and use the values too and they may be uninitialised or may be
> DT_BAD_ADDR depending on the location of the failure.
>
> Could this be potentially harmful? Obviously it is not expected to
> successfully reset under these circumstances, but what else could it do?
> (e.g. turn off the fan and melt the system?)
>
> I'd suggest to set a flag right at the end which xgene_storm_reset can
> check. Either an explicit boolean or initialise reset_addr to ~(u64)0
> when it is declared and gather the address into a local variable before
> setting the global only after init has succeeded.

Ok will do that.
>
> (I'd also accept your assurances that writing to random memory locations
> is safe, on the off chance that this is true ;-))
>
> Ian.
>

Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:13:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7k82-000692-PJ; Mon, 27 Jan 2014 11:13:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7k81-00068x-BR
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 11:13:21 +0000
Received: from [85.158.139.211:21439] by server-3.bemta-5.messagelabs.com id
	A2/49-04773-05F36E25; Mon, 27 Jan 2014 11:13:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390821197!1365984!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8564 invoked from network); 27 Jan 2014 11:13:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:13:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94744507"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 11:13:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 06:13:16 -0500
Message-ID: <1390821195.12230.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Mon, 27 Jan 2014 11:13:15 +0000
In-Reply-To: <CAAHg+Hj2drp5RLKRTbgQsa24TrJLe8w-yFJTMzr-x7_teKOyBg@mail.gmail.com>
References: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
	<1390818750.12230.7.camel@kazak.uk.xensource.com>
	<CAAHg+Hj2drp5RLKRTbgQsa24TrJLe8w-yFJTMzr-x7_teKOyBg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 16:35 +0530, Pranavkumar Sawargaonkar wrote:
> Hi Ian,
> 
> On 27 January 2014 16:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-27 at 14:12 +0530, Pranavkumar Sawargaonkar wrote:
> >> +    /* Retrieve base address and size */
> >> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> >> +    if ( res )
> >> +    {
> >> +        printk("XGENE: Unable to retrieve the base address for reset\n");
> >> +        return 0;
> >> +    }
> >> +
> >> +    /* Get reset mask */
> >> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> >> +    if ( !res )
> >> +    {
> >
> > At this point reset_addr and reset_size are set and xgene_storm_reset
> > will try to use them with reset_mask -- which I suppose is 0 at this
> > point (due to .bss initialisation + dt_prop_read not touching it on
> > failure).
> I thought that if reset_addr or reset_size is 0 then ioremap will not
> return success, so the code will return without writing the register.

I'm not sure about the reset_size==0, I'd hope that ioremap does work
for reset_addr == 0 though, since it isn't implausible that something
might live there on some platform.

But in any case after partial success gathering the parameters these may
no longer be zero.

> But I think the mask can cause an issue if uninitialized, so I will add
> a global flag (valid_reset_vals) and set it at the end of
> xgene_storm_init; in the function xgene_storm_reset I will first check
> whether this flag is set before proceeding with the reset.
> Is this fine?

Sounds good to me.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:13:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7k82-000692-PJ; Mon, 27 Jan 2014 11:13:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7k81-00068x-BR
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 11:13:21 +0000
Received: from [85.158.139.211:21439] by server-3.bemta-5.messagelabs.com id
	A2/49-04773-05F36E25; Mon, 27 Jan 2014 11:13:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390821197!1365984!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8564 invoked from network); 27 Jan 2014 11:13:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:13:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94744507"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 11:13:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 06:13:16 -0500
Message-ID: <1390821195.12230.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Mon, 27 Jan 2014 11:13:15 +0000
In-Reply-To: <CAAHg+Hj2drp5RLKRTbgQsa24TrJLe8w-yFJTMzr-x7_teKOyBg@mail.gmail.com>
References: <1390812173-12081-1-git-send-email-pranavkumar@linaro.org>
	<1390818750.12230.7.camel@kazak.uk.xensource.com>
	<CAAHg+Hj2drp5RLKRTbgQsa24TrJLe8w-yFJTMzr-x7_teKOyBg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V5] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 16:35 +0530, Pranavkumar Sawargaonkar wrote:
> Hi Ian,
> 
> On 27 January 2014 16:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-27 at 14:12 +0530, Pranavkumar Sawargaonkar wrote:
> >> +    /* Retrieve base address and size */
> >> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> >> +    if ( res )
> >> +    {
> >> +        printk("XGENE: Unable to retrieve the base address for reset\n");
> >> +        return 0;
> >> +    }
> >> +
> >> +    /* Get reset mask */
> >> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> >> +    if ( !res )
> >> +    {
> >
> > At this point reset_addr and reset_size are set and xgene_storm_reset
> > will try to use them with reset_mask -- which I suppose is 0 at this
> > point (due to .bss initialisation + dt_prop_read not touching it on
> > failure).
> I thought that if reset_addr or reset_size is 0 then ioremap will not
> return success, hence the code will return without writing the reg.

I'm not sure about reset_size == 0; I'd hope that ioremap does work
for reset_addr == 0, though, since it isn't implausible that something
might live there on some platform.

But in any case, after partial success gathering the parameters, these
may no longer be zero.

> But I think the mask can cause an issue if uninitialized, so I will add
> a global flag (valid_reset_vals) and set it at the end of
> xgene_storm_init, and in the function xgene_storm_reset I will first
> check whether this flag is set before proceeding further with the reset.
> Is this fine?

Sounds good to me.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:35:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kT8-0006ne-6z; Mon, 27 Jan 2014 11:35:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W7kT6-0006nZ-TL
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 11:35:09 +0000
Received: from [85.158.137.68:17799] by server-13.bemta-3.messagelabs.com id
	DA/7E-28603-C6446E25; Mon, 27 Jan 2014 11:35:08 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390822502!11550904!1
X-Originating-IP: [209.85.220.49]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8166 invoked from network); 27 Jan 2014 11:35:04 -0000
Received: from mail-pa0-f49.google.com (HELO mail-pa0-f49.google.com)
	(209.85.220.49)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:35:04 -0000
Received: by mail-pa0-f49.google.com with SMTP id hz1so5790492pad.36
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 03:35:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=2p62Br783ACLmzP6HRdwT2/nG8RVUS149dvATghBR/M=;
	b=I0/1SflG1uo7XAp/BLq73JjCmBv5z55Tsj/zEN6g3dbQyNhQtHNy2AN/liUJqA2HF+
	v1dGgkl8kdeazKdcohzstLt2Wm2G298+43NVE38V1GMn6ivs8A75wikbdIIUPrfzxoB/
	IhfyjL6p24+nCZDaF2Es43neaYEKgSA33bYdlH84ebKZR7zAUvmrjMenGOwXvT7iZcVi
	/w/qQIxGzVIP4w6FdLtddr0Fwikd+i+b/pLMthlvxhoGLDUkfZNUEBEad32kT07dJkFV
	vrzmK5IxQLtKl9LHkAfqD+oOqWEJqQV/NyGtuluWA5uAQ3KJoXYirWXRSoasOsMANPi3
	sfVA==
X-Gm-Message-State: ALoCoQnOTcoG1NzMk4InyA1msKzWjEKI7FJ1abPtfg6VfPsJqzybI01BkxJJaL79ibNGwVw/wxPu
X-Received: by 10.66.122.201 with SMTP id lu9mr29441992pab.40.1390822501982;
	Mon, 27 Jan 2014 03:35:01 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id
	vg1sm31092137pbc.44.2014.01.27.03.34.58 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 03:35:01 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Mon, 27 Jan 2014 17:04:48 +0530
Message-Id: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset support
	for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds reset support for the xgene arm64 platform.

V6:
- Incorporating comments received on V5 patch.
V5:
- Incorporating comments received on V4 patch.
V4:
- Removing TODO comment about retrieving reset base address from dts
  as that is done now.
V3:
- Retrieving reset base address and reset mask from device tree.
- Removed unnecessary header files included earlier.
V2:
- Removed unnecessary mdelay in code.
- Adding iounmap of the base address.
V1:
- Initial patch.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 5b0bd5f..4fc185b 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -20,8 +20,16 @@
 
 #include <xen/config.h>
 #include <asm/platform.h>
+#include <xen/stdbool.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/gic.h>
 
+/* Variables to save reset address of soc during platform initialization. */
+static u64 reset_addr, reset_size;
+static u32 reset_mask;
+static bool reset_vals_valid = false;
+
 static uint32_t xgene_storm_quirks(void)
 {
     return PLATFORM_QUIRK_GIC_64K_STRIDE;
@@ -107,6 +115,68 @@ err:
     return ret;
 }
 
+static void xgene_storm_reset(void)
+{
+    void __iomem *addr;
+
+    if ( !reset_vals_valid )
+    {
+        printk("XGENE: Invalid reset values, can not reset XGENE...\n");
+        return;
+    }
+
+    addr = ioremap_nocache(reset_addr, reset_size);
+
+    if ( !addr )
+    {
+        printk("XGENE: Unable to map xgene reset address, can not reset XGENE...\n");
+        return;
+    }
+
+    /* Write reset mask to base address */
+    writel(reset_mask, addr);
+
+    iounmap(addr);
+}
+
+static int xgene_storm_init(void)
+{
+    static const struct dt_device_match reset_ids[] __initconst =
+    {
+        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
+        {},
+    };
+    struct dt_device_node *dev;
+    int res;
+
+    dev = dt_find_matching_node(NULL, reset_ids);
+    if ( !dev )
+    {
+        printk("XGENE: Unable to find a compatible reset node in the device tree");
+        return 0;
+    }
+
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    /* Retrieve base address and size */
+    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
+    if ( res )
+    {
+        printk("XGENE: Unable to retrieve the base address for reset\n");
+        return 0;
+    }
+
+    /* Get reset mask */
+    res = dt_property_read_u32(dev, "mask", &reset_mask);
+    if ( !res )
+    {
+        printk("XGENE: Unable to retrieve the reset mask\n");
+        return 0;
+    }
+
+    reset_vals_valid = true;
+    return 0;
+}
 
 static const char * const xgene_storm_dt_compat[] __initconst =
 {
@@ -116,6 +186,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
 
 PLATFORM_START(xgene_storm, "APM X-GENE STORM")
     .compatible = xgene_storm_dt_compat,
+    .init = xgene_storm_init,
+    .reset = xgene_storm_reset,
     .quirks = xgene_storm_quirks,
     .specific_mapping = xgene_storm_specific_mapping,
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kW2-0006tL-Rv; Mon, 27 Jan 2014 11:38:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7kW1-0006tG-Oz
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 11:38:09 +0000
Received: from [193.109.254.147:5136] by server-2.bemta-14.messagelabs.com id
	9C/26-00361-D1546E25; Mon, 27 Jan 2014 11:38:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390822683!78142!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7365 invoked from network); 27 Jan 2014 11:38:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:38:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94750584"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 11:38:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 06:38:02 -0500
Message-ID: <1390822680.12230.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Mon, 27 Jan 2014 11:38:00 +0000
In-Reply-To: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
> This patch adds a reset support for xgene arm64 platform.
> 
> V6:
> - Incorporating comments received on V5 patch.
> V5:
> - Incorporating comments received on V4 patch.
> V4:
> - Removing TODO comment about retriving reset base address from dts
>   as that is done now.
> V3:
> - Retriving reset base address and reset mask from device tree.
> - Removed unnecssary header files included earlier.
> V2:
> - Removed unnecssary mdelay in code.
> - Adding iounmap of the base address.
> V1:
> -Initial patch.
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>

Looks good, thanks:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

George, I'd like to put this in 4.4. The impact on non-xgene is
non-existent, and I think reset is basic functionality which we should
enable.

> ---
>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
>  1 file changed, 72 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 5b0bd5f..4fc185b 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -20,8 +20,16 @@
>  
>  #include <xen/config.h>
>  #include <asm/platform.h>
> +#include <xen/stdbool.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
>  #include <asm/gic.h>
>  
> +/* Variables to save reset address of soc during platform initialization. */
> +static u64 reset_addr, reset_size;
> +static u32 reset_mask;
> +static bool reset_vals_valid = false;
> +
>  static uint32_t xgene_storm_quirks(void)
>  {
>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> @@ -107,6 +115,68 @@ err:
>      return ret;
>  }
>  
> +static void xgene_storm_reset(void)
> +{
> +    void __iomem *addr;
> +
> +    if ( !reset_vals_valid )
> +    {
> +        printk("XGENE: Invalid reset values, can not reset XGENE...\n");
> +        return;
> +    }
> +
> +    addr = ioremap_nocache(reset_addr, reset_size);
> +
> +    if ( !addr )
> +    {
> +        printk("XGENE: Unable to map xgene reset address, can not reset XGENE...\n");
> +        return;
> +    }
> +
> +    /* Write reset mask to base address */
> +    writel(reset_mask, addr);
> +
> +    iounmap(addr);
> +}
> +
> +static int xgene_storm_init(void)
> +{
> +    static const struct dt_device_match reset_ids[] __initconst =
> +    {
> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> +        {},
> +    };
> +    struct dt_device_node *dev;
> +    int res;
> +
> +    dev = dt_find_matching_node(NULL, reset_ids);
> +    if ( !dev )
> +    {
> +        printk("XGENE: Unable to find a compatible reset node in the device tree");
> +        return 0;
> +    }
> +
> +    dt_device_set_used_by(dev, DOMID_XEN);
> +
> +    /* Retrieve base address and size */
> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> +    if ( res )
> +    {
> +        printk("XGENE: Unable to retrieve the base address for reset\n");
> +        return 0;
> +    }
> +
> +    /* Get reset mask */
> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> +    if ( !res )
> +    {
> +        printk("XGENE: Unable to retrieve the reset mask\n");
> +        return 0;
> +    }
> +
> +    reset_vals_valid = true;
> +    return 0;
> +}
>  
>  static const char * const xgene_storm_dt_compat[] __initconst =
>  {
> @@ -116,6 +186,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>  
>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>      .compatible = xgene_storm_dt_compat,
> +    .init = xgene_storm_init,
> +    .reset = xgene_storm_reset,
>      .quirks = xgene_storm_quirks,
>      .specific_mapping = xgene_storm_specific_mapping,
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 11:47:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 11:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kf0-0007Yh-76; Mon, 27 Jan 2014 11:47:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W7kex-0007YQ-S7
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 11:47:24 +0000
Received: from [85.158.143.35:3391] by server-1.bemta-4.messagelabs.com id
	87/7F-02132-B4746E25; Mon, 27 Jan 2014 11:47:23 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390823242!1029629!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23286 invoked from network); 27 Jan 2014 11:47:22 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 11:47:22 -0000
Received: by mail-we0-f176.google.com with SMTP id t61so5153868wes.35
	for <xen-devel@lists.xensource.com>;
	Mon, 27 Jan 2014 03:47:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=yxCoD/WC6Q1ScLIJxW5JUxjf2TeEykFxQrkF+W4l0w0=;
	b=QdISczpxK+faIiy7kdkRpL/wBRWshmktPKBXN5SsleEQHVLqDoPl2o7fuotuUSEAxJ
	1XkFs0Qc747n4bOXMYvrCqoCh94j41SBcgPhJwdsxG156gN9sy1UEpJIW8ps+KisQMy7
	VO2dDzJ8VFkFb8XDWVdQyk5YsAL13iEbJ8gwEgoq+zdPq8qUa+mTdHQ4Ftbs1cxkTyIh
	phuBc/wF0z0Ztmq3oS2OXIIVaSlh86jNPeCjYTohSP4dTQW9Pw+ZE5Ek8rIg77Z2vaex
	7N+/8G0ZrLg01FlDZhhzRyzY4RJJ/r6f78z2RoT5lrEHNGA3xMxPfhX/3/Dj0XhQ9iaF
	NgTw==
X-Gm-Message-State: ALoCoQmL8haLpgNBDjtXkP4/7w1MPVepeTR3U0ZxxAnBnnbzKDUILSTtxUCNM4a+h2uKJwA9ReSm
X-Received: by 10.181.8.66 with SMTP id di2mr11752047wid.43.1390823241734;
	Mon, 27 Jan 2014 03:47:21 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id fm3sm24678302wib.8.2014.01.27.03.47.20
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 03:47:21 -0800 (PST)
Message-ID: <52E64747.5010207@m2r.biz>
Date: Mon, 27 Jan 2014 12:47:19 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
	<1390312565.20516.119.camel@kazak.uk.xensource.com>
	<52E27CA5.50407@m2r.biz> <52E291C6.80205@eu.citrix.com>
In-Reply-To: <52E291C6.80205@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, Ian.Jackson@eu.citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/2014 17:16, George Dunlap wrote:
> On 01/24/2014 02:45 PM, Fabio Fantoni wrote:
>> On 21/01/2014 14:56, Ian Campbell wrote:
>>> On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
>>>> Make rdname function work with xl
>>>>
>>>> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> Although I would have preferred a slightly more verbose changelog.
>>
>> This patch fixes this problem:
>> http://lists.xen.org/archives/html/xen-devel/2014-01/msg01545.html
>> and perhaps also other problems.
>>
>> I have done extensive testing with this patch applied without 
>> encountering errors; can it be added to 4.4?
>>
>> Thanks for any reply.
>
> Ian said he was OK with the patch, but that he wished it had a better 
> description.
>
> The one-line description is good; but a better body description would 
> include:
>
> 1) A description of what's wrong
> 2) How this patch fixes the problem
>
>  -George

The rdname function does not support the json output of xl commands, and 
this causes problems when using xl. For example, on domU autostart 
xendomains checks whether a domU is already running (because it was 
restored); since the check does not work, it tries to create the domain 
in any case, and if the domU is already running, xl create fails with an 
error instead of skipping it.
This patch adds support for json output to the sed expression in the 
rdname function, solving this problem and probably problems in other 
cases as well.
After applying the patch I tried all the cases that came to mind and 
haven't encountered problems, but I do not know which other situations 
it fixes besides the case described above.
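
The idea above can be sketched as follows. This is a hedged illustration, not the actual patch: an rdname-style helper whose grep/sed logic accepts both the xm config syntax (name = "dom") and the json that xl emits ("name": "dom",). The file name and the NM variable are illustrative assumptions.

```shell
# Illustrative rdname sketch: extract the domain name from either an
# xm-style config line or an xl json line.
rdname() {
    NM=$(grep -E '^[[:space:]]*("name":|name[[:space:]]*=)' "$1" | head -1 \
         | sed -e 's/.*[=:][[:space:]]*//' -e 's/[",]//g' -e "s/'//g")
}

# Example with a json-style line, as xl produces:
printf '"name": "guest1",\n' > /tmp/domcfg.$$
rdname /tmp/domcfg.$$
echo "$NM"    # prints: guest1
rm -f /tmp/domcfg.$$
```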

Would it be good to add something like this to the description?

Thanks for any reply, and sorry for my bad English.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:00:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7krr-00006J-4X; Mon, 27 Jan 2014 12:00:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1W7krp-000067-DQ
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:00:41 +0000
Received: from [85.158.139.211:38878] by server-16.bemta-5.messagelabs.com id
	42/29-11843-86A46E25; Mon, 27 Jan 2014 12:00:40 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390824039!12157504!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25518 invoked from network); 27 Jan 2014 12:00:40 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:00:40 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so5351364wgg.25
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 04:00:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=HdHiw8VdjPaWWwXN32dAFVxyh+UZKZ77frzgNzlTB6I=;
	b=fDrLsmTvwqQxIN4aa7C74a5WXlpnuFH0A2MxxcfOs3kpZ9OPjGMPpM0UhIHcKWOoiZ
	RKjZRPeL7cDeB5MzCBki7DTa6dneFlpoD/jGf7e5KIa9jeMUukL6FDjgto1hMgZLBo89
	1RHNZT/oFzEYA1i4uBeTtVD9ubx+uibxcjqvsybYr4cf5stDdFNWoc4dopvbG52nH8xQ
	iE0887x5FtpQrA4g8x83avaA171I3VGVDRpENaZdGdKD9LcDqrsfYrogHMBB1FGBIHcr
	ZUHOhrZxYE3N66t6uZnl8ZAYjR5PdPHX/fvWhJVLmoxMYkPerYd2GPRFDaQ7im1h9PbU
	MB7A==
X-Gm-Message-State: ALoCoQlwpgIQKS11LoujO5+4DpaPgFVqJE7pY+gqwnUItvFCZ6Fyp/bBZkRwdhJ1iPEuz0yRWzUr
X-Received: by 10.180.104.42 with SMTP id gb10mr11615616wib.51.1390824039671; 
	Mon, 27 Jan 2014 04:00:39 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id
	q15sm24682862wjw.18.2014.01.27.04.00.38 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 04:00:39 -0800 (PST)
Message-ID: <52E64A66.3010608@m2r.biz>
Date: Mon, 27 Jan 2014 13:00:38 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: M A Young <m.a.young@durham.ac.uk>, xen-devel@lists.xen.org
References: <alpine.DEB.2.00.1401270013590.6358@procyon.dur.ac.uk>
In-Reply-To: <alpine.DEB.2.00.1401270013590.6358@procyon.dur.ac.uk>
Subject: Re: [Xen-devel] Error after running pygrub with xen 4.4-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/2014 02:05, M A Young wrote:
> I get the following error after running pygrub when launching a guest 
> with xen 4.4-rc2.
>
> xenconsole: Could not open tty `': No such file or directory
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: console 
> child [0] exited with error status 2
>
> After this the boot continues normally, but once the console exits the 
> tty settings are wrong and key presses aren't echoed. I believe the 
> error above is from a xenconsole session launched when pygrub is, and 
> this exits because the guest generally hasn't started. A second run of 
> xenconsole serves the console output after that. I have occasionally 
> noticed with this and earlier xen versions that I have to press ^] 
> after pygrub finishes before the boot continues, so I would guess that 
> in some cases the first xenconsole run lasts long enough to get a 
> console connection which has to be ended before the boot proceeds. 
> Hence I suspect the behaviour hasn't changed in this version, and only 
> the error message is new.
>

Try checking whether xenconsoled is running.
I had a similar problem some time ago; it turned out that xenconsoled 
was no longer active, perhaps having crashed for unknown reasons, and I 
found nothing in the logs.
After a completely clean rebuild and reinstall, the problem no longer 
appeared.
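
A minimal sketch of that check: before debugging xenconsole itself, confirm the xenconsoled daemon is alive. The restart path in the comment is an assumption; it varies by distribution and install prefix.

```shell
# Check whether the xenconsoled daemon has a running process.
if pgrep -x xenconsoled >/dev/null 2>&1; then
    echo "xenconsoled is running"
else
    echo "xenconsoled is not running"
    # Restart it if missing, e.g. (path is illustrative):
    #   /usr/local/sbin/xenconsoled --log=all
fi
```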

>     Michael Young
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksZ-0000Bt-Mi; Mon, 27 Jan 2014 12:01:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksY-0000BR-E3
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:26 +0000
Received: from [85.158.137.68:37120] by server-5.bemta-3.messagelabs.com id
	78/58-25188-59A46E25; Mon, 27 Jan 2014 12:01:25 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22252 invoked from network); 27 Jan 2014 12:01:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770431"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-VT;
	Mon, 27 Jan 2014 12:01:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:11 +0000
Message-ID: <1390824074-21006-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 3/6] xen: move Xen HVM files under
	hw/i386/xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 hw/i386/Makefile.objs            |    2 +-
 hw/i386/xen/Makefile.objs        |    1 +
 hw/{ => i386}/xen/xen_apic.c     |    0
 hw/{ => i386}/xen/xen_platform.c |    0
 hw/{ => i386}/xen/xen_pvdevice.c |    0
 hw/xen/Makefile.objs             |    1 -
 6 files changed, 2 insertions(+), 2 deletions(-)
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 0faccd7..77dcf06 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -2,7 +2,7 @@ obj-$(CONFIG_KVM) += kvm/
 obj-y += multiboot.o smbios.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
-obj-$(CONFIG_XEN) += ../xenpv/
+obj-$(CONFIG_XEN) += ../xenpv/ xen/
 
 obj-y += kvmvapic.o
 obj-y += acpi-build.o
diff --git a/hw/i386/xen/Makefile.objs b/hw/i386/xen/Makefile.objs
new file mode 100644
index 0000000..801a68d
--- /dev/null
+++ b/hw/i386/xen/Makefile.objs
@@ -0,0 +1 @@
+obj-y += xen_platform.o xen_apic.o xen_pvdevice.o
diff --git a/hw/xen/xen_apic.c b/hw/i386/xen/xen_apic.c
similarity index 100%
rename from hw/xen/xen_apic.c
rename to hw/i386/xen/xen_apic.c
diff --git a/hw/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
similarity index 100%
rename from hw/xen/xen_platform.c
rename to hw/i386/xen/xen_platform.c
diff --git a/hw/xen/xen_pvdevice.c b/hw/i386/xen/xen_pvdevice.c
similarity index 100%
rename from hw/xen/xen_pvdevice.c
rename to hw/i386/xen/xen_pvdevice.c
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index ce640c6..a0ca0aa 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,6 +1,5 @@
 # xen backend driver support
 common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
 
-obj-$(CONFIG_XEN_I386) += xen_platform.o xen_apic.o xen_pvdevice.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksb-0000Cq-3O; Mon, 27 Jan 2014 12:01:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksZ-0000BY-85
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:27 +0000
Received: from [85.158.137.68:37235] by server-2.bemta-3.messagelabs.com id
	39/2E-17329-69A46E25; Mon, 27 Jan 2014 12:01:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22302 invoked from network); 27 Jan 2014 12:01:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770432"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-Uh;
	Mon, 27 Jan 2014 12:01:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:10 +0000
Message-ID: <1390824074-21006-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 2/6] xen: move Xen PV machine files to
	hw/xenpv
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 hw/i386/Makefile.objs                |    2 +-
 hw/xenpv/Makefile.objs               |    2 ++
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 5 files changed, 3 insertions(+), 1 deletion(-)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 09ac433..0faccd7 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -2,7 +2,7 @@ obj-$(CONFIG_KVM) += kvm/
 obj-y += multiboot.o smbios.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
-obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
+obj-$(CONFIG_XEN) += ../xenpv/
 
 obj-y += kvmvapic.o
 obj-y += acpi-build.o
diff --git a/hw/xenpv/Makefile.objs b/hw/xenpv/Makefile.objs
new file mode 100644
index 0000000..49f6e9e
--- /dev/null
+++ b/hw/xenpv/Makefile.objs
@@ -0,0 +1,2 @@
+# Xen PV machine support
+obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
diff --git a/hw/i386/xen_domainbuild.c b/hw/xenpv/xen_domainbuild.c
similarity index 100%
rename from hw/i386/xen_domainbuild.c
rename to hw/xenpv/xen_domainbuild.c
diff --git a/hw/i386/xen_domainbuild.h b/hw/xenpv/xen_domainbuild.h
similarity index 100%
rename from hw/i386/xen_domainbuild.h
rename to hw/xenpv/xen_domainbuild.h
diff --git a/hw/i386/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
similarity index 100%
rename from hw/i386/xen_machine_pv.c
rename to hw/xenpv/xen_machine_pv.c
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksb-0000DQ-Iv; Mon, 27 Jan 2014 12:01:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksZ-0000Bk-FF
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:27 +0000
Received: from [85.158.143.35:43807] by server-3.bemta-4.messagelabs.com id
	B4/F7-32360-69A46E25; Mon, 27 Jan 2014 12:01:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390824085!1023153!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32239 invoked from network); 27 Jan 2014 12:01:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770436"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-SU;
	Mon, 27 Jan 2014 12:01:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:09 +0000
Message-ID: <1390824074-21006-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 1/6] configure: factor out list of
	supported Xen/KVM targets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Paolo Bonzini <pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 configure |   58 +++++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 23 deletions(-)

diff --git a/configure b/configure
index 07b6be3..549b9cc 100755
--- a/configure
+++ b/configure
@@ -4372,6 +4372,32 @@ if test "$linux" = "yes" ; then
     fi
 fi
 
+supported_kvm_target() {
+    test "$kvm" = "yes" || return 1
+    test "$target_softmmu" = "yes" || return 1
+    case "$target_name:$cpu" in
+        arm:arm | aarch64:aarch64 | \
+        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64 | \
+        ppc:ppc | ppcemb:ppc | ppc64:ppc | \
+        ppc:ppc64 | ppcemb:ppc64 | ppc64:ppc64 | \
+        s390x:s390x)
+            return 0
+        ;;
+    esac
+    return 1
+}
+
+supported_xen_target() {
+    test "$xen" = "yes" || return 1
+    test "$target_softmmu" = "yes" || return 1
+    case "$target_name:$cpu" in
+        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64)
+            return 0
+        ;;
+    esac
+    return 1
+}
+
 for target in $target_list; do
 target_dir="$target"
 config_target_mak=$target_dir/config-target.mak
@@ -4538,34 +4564,20 @@ if [ "$TARGET_ABI_DIR" = "" ]; then
   TARGET_ABI_DIR=$TARGET_ARCH
 fi
 echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
-case "$target_name" in
-  i386|x86_64)
-    if test "$xen" = "yes" -a "$target_softmmu" = "yes" ; then
-      echo "CONFIG_XEN=y" >> $config_target_mak
-      if test "$xen_pci_passthrough" = yes; then
+
+if supported_xen_target; then
+    echo "CONFIG_XEN=y" >> $config_target_mak
+    if test "$xen_pci_passthrough" = yes; then
         echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
-      fi
     fi
-    ;;
-  *)
-esac
-case "$target_name" in
-  aarch64|arm|i386|x86_64|ppcemb|ppc|ppc64|s390x)
+fi
+if supported_kvm_target; then
     # Make sure the target and host cpus are compatible
-    if test "$kvm" = "yes" -a "$target_softmmu" = "yes" -a \
-      \( "$target_name" = "$cpu" -o \
-      \( "$target_name" = "ppcemb" -a "$cpu" = "ppc" \) -o \
-      \( "$target_name" = "ppc64"  -a "$cpu" = "ppc" \) -o \
-      \( "$target_name" = "ppc"    -a "$cpu" = "ppc64" \) -o \
-      \( "$target_name" = "ppcemb" -a "$cpu" = "ppc64" \) -o \
-      \( "$target_name" = "x86_64" -a "$cpu" = "i386"   \) -o \
-      \( "$target_name" = "i386"   -a "$cpu" = "x86_64" \) \) ; then
-      echo "CONFIG_KVM=y" >> $config_target_mak
-      if test "$vhost_net" = "yes" ; then
+    echo "CONFIG_KVM=y" >> $config_target_mak
+    if test "$vhost_net" = "yes" ; then
         echo "CONFIG_VHOST_NET=y" >> $config_target_mak
-      fi
     fi
-esac
+fi
 if test "$target_bigendian" = "yes" ; then
   echo "TARGET_WORDS_BIGENDIAN=y" >> $config_target_mak
 fi
-- 
1.7.10.4
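[Archive note: the helper functions above replace a long chain of `test ... -a \( ... \)` expressions with a single `case` on `"$target_name:$cpu"`. A standalone sketch of that pattern, with the function body taken from the diff and the surrounding configure variables set by hand for illustration:]

```shell
# Minimal demo of the supported_xen_target() check from the patch above.
# In the real configure script, $xen, $target_softmmu, $target_name and
# $cpu are set earlier; here they are assigned manually.
supported_xen_target() {
    test "$xen" = "yes" || return 1
    test "$target_softmmu" = "yes" || return 1
    case "$target_name:$cpu" in
        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64)
            return 0
        ;;
    esac
    return 1
}

xen=yes
target_softmmu=yes

# An x86_64 target on an i386 host is in the allowed list:
target_name=x86_64 cpu=i386
supported_xen_target && echo "x86_64 on i386: supported"

# Xen support is x86-only, so an arm/arm pair is rejected:
target_name=arm cpu=arm
supported_xen_target || echo "arm: unsupported"
```

The `case` form makes the host/target compatibility matrix readable at a glance, which is the point of the refactoring.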


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksc-0000E6-4j; Mon, 27 Jan 2014 12:01:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksZ-0000Bm-Ps
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:28 +0000
Received: from [85.158.137.68:54254] by server-5.bemta-3.messagelabs.com id
	C3/68-25188-79A46E25; Mon, 27 Jan 2014 12:01:27 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22351 invoked from network); 27 Jan 2014 12:01:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770434"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:15 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-Rn;
	Mon, 27 Jan 2014 12:01:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:08 +0000
Message-ID: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 0/6] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a prototype based on QEMU's master branch. In fact it's more or
less the same as the last version.

The first 3 patches refactor some code to disentangle Xen PV and HVM
guests in QEMU. They are quite safe to go in.

The 4th patch has the real meat. It introduces a Xen PV target,
which contains basically a dummy CPU, and hooks this Xen PV CPU up to
QEMU's internal structures.

The last patch introduces xenpv-softmmu, which contains *no* emulation
code. I know that in a previous discussion people said that every device
emulation should be included if the target architecture is called null.
But since this target CPU is now called xenpv, I don't feel obliged to
include any device emulation in this prototype anymore. :-)

Please note that the existing Xen QEMU build is not affected at all. You
can still use "--enable-xen --target-list=i386-softmmu" (or
"x86_64-softmmu") to build qemu-system-{i386,x86_64} and use it for both
HVM and PV guests. This series adds another option: build QEMU with
"--enable-xen --target-list=xenpv-softmmu" and get a QEMU binary
tailored for Xen PV guests.  The effect is that we reduce the binary size
from 14MB to 7.3MB.
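[Archive note: the two build modes described above boil down to the
following configure invocations — a sketch only, assuming a QEMU source
tree with this series applied:]

```shell
# Existing combined build: qemu-system-i386 / qemu-system-x86_64,
# each usable for both HVM and PV guests.
./configure --enable-xen --target-list=i386-softmmu,x86_64-softmmu
make

# New option added by this series: a PV-only binary with no device
# emulation, roughly half the size of the combined one.
./configure --enable-xen --target-list=xenpv-softmmu
make
```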

Wei.

Changes in RFC V2:
* more refactoring
* include Paolo's patch to factor out list of Xen / KVM targets

Paolo Bonzini (1):
  configure: factor out list of supported Xen/KVM targets

Wei Liu (5):
  xen: move Xen PV machine files to hw/xenpv
  xen: move Xen HVM files under hw/i386/xen
  xen: factor out common functions
  xen: implement Xen PV target
  xen: introduce xenpv-softmmu.mak

 Makefile.target                      |    6 +-
 arch_init.c                          |    2 +
 configure                            |   61 ++++++++++-------
 cpu-exec.c                           |    2 +
 default-configs/xenpv-softmmu.mak    |    2 +
 hw/i386/Makefile.objs                |    2 +-
 hw/i386/xen/Makefile.objs            |    1 +
 hw/{ => i386}/xen/xen_apic.c         |    0
 hw/{ => i386}/xen/xen_platform.c     |    0
 hw/{ => i386}/xen/xen_pvdevice.c     |    0
 hw/xen/Makefile.objs                 |    1 -
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 include/sysemu/arch_init.h           |    1 +
 target-xenpv/Makefile.objs           |    1 +
 target-xenpv/cpu-qom.h               |   64 ++++++++++++++++++
 target-xenpv/cpu.h                   |   66 ++++++++++++++++++
 target-xenpv/helper.c                |   32 +++++++++
 target-xenpv/translate.c             |   27 ++++++++
 xen-common-stub.c                    |   19 ++++++
 xen-common.c                         |  123 ++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c         |   18 ++---
 xen-all.c => xen-hvm.c               |  121 +++------------------------------
 xen-mapcache-stub.c                  |   39 +++++++++++
 26 files changed, 442 insertions(+), 148 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksc-0000E6-4j; Mon, 27 Jan 2014 12:01:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksZ-0000Bm-Ps
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:28 +0000
Received: from [85.158.137.68:54254] by server-5.bemta-3.messagelabs.com id
	C3/68-25188-79A46E25; Mon, 27 Jan 2014 12:01:27 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22351 invoked from network); 27 Jan 2014 12:01:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770434"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:15 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-Rn;
	Mon, 27 Jan 2014 12:01:14 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:08 +0000
Message-ID: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 0/6] Xen: introduce Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a prototype based on QEMU's master branch. In fact it's more or
less the same as the last version.

The first 3 patches refactor some code to disentangle Xen PV and HVM
guests in QEMU. They are quite safe to go in.

The 4th patch has the real meat. It introduces the Xen PV target, which
basically contains a dummy CPU, and hooks this Xen PV CPU up to QEMU's
internal structures.

The last patch introduces xenpv-softmmu, which contains *no* emulation
code. I know that in previous discussions people said that all device
emulation should be included if the target architecture is called null.
But since this target CPU is now called xenpv I don't feel obliged to
include any device emulation in this prototype anymore. :-)

Please note that the existing Xen QEMU build is not affected at all. You
can still use "--enable-xen --target-list=i386-softmmu" (or
"x86_64-softmmu") to build qemu-system-{i386,x86_64} and use it for both
HVM and PV guests. This series adds another option: build QEMU with
"--enable-xen --target-list=xenpv-softmmu" to get a QEMU binary
tailored for Xen PV guests.  The effect is that we reduce the binary size
from 14MB to 7.3MB.
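
The two build configurations can be sketched as below. This is only an
illustration: the flag names (--enable-xen, --target-list) and the binary
sizes come from the paragraph above, while the variable names and the
echoed command line are hypothetical.

```shell
# Existing build: one QEMU binary that serves both HVM and PV guests
# (roughly 14MB, per the figures above).
full_opts="--enable-xen --target-list=i386-softmmu,x86_64-softmmu"

# New build added by this series: a PV-only binary with no device
# emulation (roughly 7.3MB).
pv_opts="--enable-xen --target-list=xenpv-softmmu"

# In a real QEMU tree one would run: ./configure $pv_opts && make
echo "configure $pv_opts"
```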

Wei.

Changes in RFC V2:
* more refactoring
* include Paolo's patch to factor out list of Xen / KVM targets

Paolo Bonzini (1):
  configure: factor out list of supported Xen/KVM targets

Wei Liu (5):
  xen: move Xen PV machine files to hw/xenpv
  xen: move Xen HVM files under hw/i386/xen
  xen: factor out common functions
  xen: implement Xen PV target
  xen: introduce xenpv-softmmu.mak

 Makefile.target                      |    6 +-
 arch_init.c                          |    2 +
 configure                            |   61 ++++++++++-------
 cpu-exec.c                           |    2 +
 default-configs/xenpv-softmmu.mak    |    2 +
 hw/i386/Makefile.objs                |    2 +-
 hw/i386/xen/Makefile.objs            |    1 +
 hw/{ => i386}/xen/xen_apic.c         |    0
 hw/{ => i386}/xen/xen_platform.c     |    0
 hw/{ => i386}/xen/xen_pvdevice.c     |    0
 hw/xen/Makefile.objs                 |    1 -
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 include/sysemu/arch_init.h           |    1 +
 target-xenpv/Makefile.objs           |    1 +
 target-xenpv/cpu-qom.h               |   64 ++++++++++++++++++
 target-xenpv/cpu.h                   |   66 ++++++++++++++++++
 target-xenpv/helper.c                |   32 +++++++++
 target-xenpv/translate.c             |   27 ++++++++
 xen-common-stub.c                    |   19 ++++++
 xen-common.c                         |  123 ++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c         |   18 ++---
 xen-all.c => xen-hvm.c               |  121 +++------------------------------
 xen-mapcache-stub.c                  |   39 +++++++++++
 26 files changed, 442 insertions(+), 148 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksc-0000Eo-Ko; Mon, 27 Jan 2014 12:01:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksa-0000C2-AJ
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:28 +0000
Received: from [85.158.143.35:59469] by server-2.bemta-4.messagelabs.com id
	85/97-11386-79A46E25; Mon, 27 Jan 2014 12:01:27 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390824085!1023153!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32418 invoked from network); 27 Jan 2014 12:01:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770437"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksN-0000QC-2g;
	Mon, 27 Jan 2014 12:01:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:14 +0000
Message-ID: <1390824074-21006-7-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 6/6] xen: introduce xenpv-softmmu.mak
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 configure                         |    7 +++++--
 default-configs/xenpv-softmmu.mak |    2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)
 create mode 100644 default-configs/xenpv-softmmu.mak

diff --git a/configure b/configure
index 549b9cc..b713d93 100755
--- a/configure
+++ b/configure
@@ -4391,7 +4391,7 @@ supported_xen_target() {
     test "$xen" = "yes" || return 1
     test "$target_softmmu" = "yes" || return 1
     case "$target_name:$cpu" in
-        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64)
+        i386:i386 | i386:x86_64 | x86_64:i386 | x86_64:x86_64 | xenpv:*)
             return 0
         ;;
     esac
@@ -4538,6 +4538,9 @@ case "$target_name" in
   ;;
   unicore32)
   ;;
+  xenpv)
+    TARGET_ARCH=xenpv
+  ;;
   xtensa|xtensaeb)
     TARGET_ARCH=xtensa
   ;;
@@ -4567,7 +4570,7 @@ echo "TARGET_ABI_DIR=$TARGET_ABI_DIR" >> $config_target_mak
 
 if supported_xen_target; then
     echo "CONFIG_XEN=y" >> $config_target_mak
-    if test "$xen_pci_passthrough" = yes; then
+    if test "$target_name" != "xenpv" -a "$xen_pci_passthrough" = yes; then
         echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
     fi
 fi
diff --git a/default-configs/xenpv-softmmu.mak b/default-configs/xenpv-softmmu.mak
new file mode 100644
index 0000000..773f128
--- /dev/null
+++ b/default-configs/xenpv-softmmu.mak
@@ -0,0 +1,2 @@
+# Default configuration for xenpv-softmmu
+# Yes it is empty as we don't need to include any device emulation code
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksd-0000FW-5y; Mon, 27 Jan 2014 12:01:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksb-0000Cg-3K
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:29 +0000
Received: from [85.158.137.68:54382] by server-9.bemta-3.messagelabs.com id
	BB/CB-13104-89A46E25; Mon, 27 Jan 2014 12:01:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22394 invoked from network); 27 Jan 2014 12:01:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770438"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-WA;
	Mon, 27 Jan 2014 12:01:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:12 +0000
Message-ID: <1390824074-21006-5-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 4/6] xen: factor out common functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Common functions used by both HVM and PV guests are factored out from
xen-all.c into xen-common.c.

Finally, rename xen-all.c to xen-hvm.c, as the remaining functions are
only useful to HVM guests.

Create *-stub files and modify Makefile.target to reflect the changes.
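
The object selection that the Makefile.target hunk in this patch sets up
can be sketched in shell as follows. This is a hypothetical illustration:
pick_objs and its arguments are invented names, standing in for the
CONFIG_XEN / CONFIG_XEN_I386 values that configure writes into the
per-target config.

```shell
# Emulates the obj-y selection after this patch: real Xen objects when
# the config flag is set, the matching stub otherwise.
pick_objs() {
    CONFIG_XEN=$1        # y for any Xen-enabled target
    CONFIG_XEN_I386=$2   # y only for i386/x86_64 targets, empty for xenpv
    objs=""
    [ "$CONFIG_XEN" = "y" ]       && objs="$objs xen-common.o"
    [ "$CONFIG_XEN_I386" = "y" ]  && objs="$objs xen-hvm.o xen-mapcache.o"
    [ "$CONFIG_XEN" != "y" ]      && objs="$objs xen-common-stub.o"
    [ "$CONFIG_XEN_I386" != "y" ] && objs="$objs xen-hvm-stub.o xen-mapcache-stub.o"
    echo "$objs"
}

pick_objs y y    # i386/x86_64 softmmu: common code plus real HVM objects
pick_objs y ""   # a PV-only target: common code plus HVM/mapcache stubs
```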

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 Makefile.target              |    6 ++-
 xen-common-stub.c            |   19 +++++++
 xen-common.c                 |  123 ++++++++++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c |   18 +++----
 xen-all.c => xen-hvm.c       |  121 ++++-------------------------------------
 xen-mapcache-stub.c          |   39 ++++++++++++++
 6 files changed, 203 insertions(+), 123 deletions(-)
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

diff --git a/Makefile.target b/Makefile.target
index af6ac7e..9c9e1c7 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -120,8 +120,10 @@ obj-y += dump.o
 LIBS+=$(libs_softmmu)
 
 # xen support
-obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
-obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
+obj-$(CONFIG_XEN) += xen-common.o
+obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
+obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
+obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o xen-mapcache-stub.o
 
 # Hardware support
 ifeq ($(TARGET_NAME), sparc64)
diff --git a/xen-common-stub.c b/xen-common-stub.c
new file mode 100644
index 0000000..3152018
--- /dev/null
+++ b/xen-common-stub.c
@@ -0,0 +1,19 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu-common.h"
+#include "hw/xen/xen.h"
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+}
+
+int xen_init(void)
+{
+    return -ENOSYS;
+}
+
diff --git a/xen-common.c b/xen-common.c
new file mode 100644
index 0000000..5207318
--- /dev/null
+++ b/xen-common.c
@@ -0,0 +1,123 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "hw/xen/xen_backend.h"
+#include "qmp-commands.h"
+#include "sysemu/char.h"
+
+//#define DEBUG_XEN
+
+#ifdef DEBUG_XEN
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static int store_dev_info(int domid, CharDriverState *cs, const char *string)
+{
+    struct xs_handle *xs = NULL;
+    char *path = NULL;
+    char *newpath = NULL;
+    char *pts = NULL;
+    int ret = -1;
+
+    /* Only continue if we're talking to a pty. */
+    if (strncmp(cs->filename, "pty:", 4)) {
+        return 0;
+    }
+    pts = cs->filename + 4;
+
+    /* We now have everything we need to set the xenstore entry. */
+    xs = xs_open(0);
+    if (xs == NULL) {
+        fprintf(stderr, "Could not contact XenStore\n");
+        goto out;
+    }
+
+    path = xs_get_domain_path(xs, domid);
+    if (path == NULL) {
+        fprintf(stderr, "xs_get_domain_path() error\n");
+        goto out;
+    }
+    newpath = realloc(path, (strlen(path) + strlen(string) +
+                strlen("/tty") + 1));
+    if (newpath == NULL) {
+        fprintf(stderr, "realloc error\n");
+        goto out;
+    }
+    path = newpath;
+
+    strcat(path, string);
+    strcat(path, "/tty");
+    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
+        fprintf(stderr, "xs_write for '%s' fail", string);
+        goto out;
+    }
+    ret = 0;
+
+out:
+    free(path);
+    xs_close(xs);
+
+    return ret;
+}
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+    if (i == 0) {
+        store_dev_info(xen_domid, chr, "/console");
+    } else {
+        char buf[32];
+        snprintf(buf, sizeof(buf), "/device/console/%d", i);
+        store_dev_info(xen_domid, chr, buf);
+    }
+}
+
+
+static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
+{
+    char path[50];
+
+    if (xs == NULL) {
+        fprintf(stderr, "xenstore connection not initialized\n");
+        exit(1);
+    }
+
+    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
+    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
+        fprintf(stderr, "error recording dm state\n");
+        exit(1);
+    }
+}
+
+
+static void xen_change_state_handler(void *opaque, int running,
+                                     RunState state)
+{
+    if (running) {
+        /* record state running */
+        xenstore_record_dm_state(xenstore, "running");
+    }
+}
+
+int xen_init(void)
+{
+    xen_xc = xen_xc_interface_open(0, 0, 0);
+    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
+        xen_be_printf(NULL, 0, "can't open xen interface\n");
+        return -1;
+    }
+    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
+
+    return 0;
+}
+
diff --git a/xen-stub.c b/xen-hvm-stub.c
similarity index 91%
rename from xen-stub.c
rename to xen-hvm-stub.c
index ad189a6..00fa9b3 100644
--- a/xen-stub.c
+++ b/xen-hvm-stub.c
@@ -12,10 +12,7 @@
 #include "hw/xen/xen.h"
 #include "exec/memory.h"
 #include "qmp-commands.h"
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-}
+#include "hw/xen/xen_common.h"
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
@@ -47,24 +44,23 @@ qemu_irq *xen_interrupt_controller_init(void)
     return NULL;
 }
 
-int xen_init(void)
+void xen_register_framebuffer(MemoryRegion *mr)
 {
-    return -ENOSYS;
 }
 
-void xen_register_framebuffer(MemoryRegion *mr)
+void xen_modified_memory(ram_addr_t start, ram_addr_t length)
 {
 }
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+int xen_hvm_init(MemoryRegion **ram_memory)
 {
+    return 0;
 }
 
-void xen_modified_memory(ram_addr_t start, ram_addr_t length)
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
 
-int xen_hvm_init(MemoryRegion **ram_memory)
+void xen_shutdown_fatal_error(const char *fmt, ...)
 {
-    return 0;
 }
diff --git a/xen-all.c b/xen-hvm.c
similarity index 93%
rename from xen-all.c
rename to xen-hvm.c
index 4a594bd..0a49055 100644
--- a/xen-all.c
+++ b/xen-hvm.c
@@ -26,9 +26,9 @@
 #include <xen/hvm/params.h>
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN
+//#define DEBUG_XEN_HVM
 
-#ifdef DEBUG_XEN
+#ifdef DEBUG_XEN_HVM
 #define DPRINTF(fmt, ...) \
     do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
 #else
@@ -569,15 +569,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-    if (enable) {
-        memory_global_dirty_log_start();
-    } else {
-        memory_global_dirty_log_stop();
-    }
-}
-
 /* get the ioreq packets from share mem */
 static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
 {
@@ -880,82 +871,6 @@ static void cpu_handle_ioreq(void *opaque)
     }
 }
 
-static int store_dev_info(int domid, CharDriverState *cs, const char *string)
-{
-    struct xs_handle *xs = NULL;
-    char *path = NULL;
-    char *newpath = NULL;
-    char *pts = NULL;
-    int ret = -1;
-
-    /* Only continue if we're talking to a pty. */
-    if (strncmp(cs->filename, "pty:", 4)) {
-        return 0;
-    }
-    pts = cs->filename + 4;
-
-    /* We now have everything we need to set the xenstore entry. */
-    xs = xs_open(0);
-    if (xs == NULL) {
-        fprintf(stderr, "Could not contact XenStore\n");
-        goto out;
-    }
-
-    path = xs_get_domain_path(xs, domid);
-    if (path == NULL) {
-        fprintf(stderr, "xs_get_domain_path() error\n");
-        goto out;
-    }
-    newpath = realloc(path, (strlen(path) + strlen(string) +
-                strlen("/tty") + 1));
-    if (newpath == NULL) {
-        fprintf(stderr, "realloc error\n");
-        goto out;
-    }
-    path = newpath;
-
-    strcat(path, string);
-    strcat(path, "/tty");
-    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
-        fprintf(stderr, "xs_write for '%s' fail", string);
-        goto out;
-    }
-    ret = 0;
-
-out:
-    free(path);
-    xs_close(xs);
-
-    return ret;
-}
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-    if (i == 0) {
-        store_dev_info(xen_domid, chr, "/console");
-    } else {
-        char buf[32];
-        snprintf(buf, sizeof(buf), "/device/console/%d", i);
-        store_dev_info(xen_domid, chr, buf);
-    }
-}
-
-static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
-{
-    char path[50];
-
-    if (xs == NULL) {
-        fprintf(stderr, "xenstore connection not initialized\n");
-        exit(1);
-    }
-
-    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
-    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
-        fprintf(stderr, "error recording dm state\n");
-        exit(1);
-    }
-}
-
 static void xen_main_loop_prepare(XenIOState *state)
 {
     int evtchn_fd = -1;
@@ -973,17 +888,6 @@ static void xen_main_loop_prepare(XenIOState *state)
 }
 
 
-/* Initialise Xen */
-
-static void xen_change_state_handler(void *opaque, int running,
-                                     RunState state)
-{
-    if (running) {
-        /* record state running */
-        xenstore_record_dm_state(xenstore, "running");
-    }
-}
-
 static void xen_hvm_change_state_handler(void *opaque, int running,
                                          RunState rstate)
 {
@@ -1001,18 +905,6 @@ static void xen_exit_notifier(Notifier *n, void *data)
     xs_daemon_close(state->xenstore);
 }
 
-int xen_init(void)
-{
-    xen_xc = xen_xc_interface_open(0, 0, 0);
-    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
-        xen_be_printf(NULL, 0, "can't open xen interface\n");
-        return -1;
-    }
-    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
-
-    return 0;
-}
-
 static void xen_read_physmap(XenIOState *state)
 {
     XenPhysmap *physmap = NULL;
@@ -1226,3 +1118,12 @@ void xen_modified_memory(ram_addr_t start, ram_addr_t length)
         }
     }
 }
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    if (enable) {
+        memory_global_dirty_log_start();
+    } else {
+        memory_global_dirty_log_stop();
+    }
+}
diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c
new file mode 100644
index 0000000..f4ddf53
--- /dev/null
+++ b/xen-mapcache-stub.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2014       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "config.h"
+
+#include <sys/resource.h>
+
+#include "hw/xen/xen_backend.h"
+
+#include <xen/hvm/params.h>
+
+#include "sysemu/xen-mapcache.h"
+#include "trace.h"
+
+uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
+                       uint8_t lock)
+{
+    abort();
+    return NULL;
+}
+
+ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+{
+    abort();
+    return 0;
+}
+
+void xen_invalidate_map_cache_entry(uint8_t *buffer)
+{
+    abort();
+}
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksd-0000FW-5y; Mon, 27 Jan 2014 12:01:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksb-0000Cg-3K
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:29 +0000
Received: from [85.158.137.68:54382] by server-9.bemta-3.messagelabs.com id
	BB/CB-13104-89A46E25; Mon, 27 Jan 2014 12:01:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390824082!10362832!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22394 invoked from network); 27 Jan 2014 12:01:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770438"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksM-0000QC-WA;
	Mon, 27 Jan 2014 12:01:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:12 +0000
Message-ID: <1390824074-21006-5-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 4/6] xen: factor out common functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Common functions used by both HVM and PV guests are factored out from
xen-all.c into xen-common.c.

Finally, rename xen-all.c to xen-hvm.c, as the remaining functions are
only useful to HVM guests.

Create *-stub files and modify Makefile.target to reflect the changes.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 Makefile.target              |    6 ++-
 xen-common-stub.c            |   19 +++++++
 xen-common.c                 |  123 ++++++++++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c |   18 +++----
 xen-all.c => xen-hvm.c       |  121 ++++-------------------------------------
 xen-mapcache-stub.c          |   39 ++++++++++++++
 6 files changed, 203 insertions(+), 123 deletions(-)
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

diff --git a/Makefile.target b/Makefile.target
index af6ac7e..9c9e1c7 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -120,8 +120,10 @@ obj-y += dump.o
 LIBS+=$(libs_softmmu)
 
 # xen support
-obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
-obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
+obj-$(CONFIG_XEN) += xen-common.o
+obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
+obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
+obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o xen-mapcache-stub.o
 
 # Hardware support
 ifeq ($(TARGET_NAME), sparc64)
diff --git a/xen-common-stub.c b/xen-common-stub.c
new file mode 100644
index 0000000..3152018
--- /dev/null
+++ b/xen-common-stub.c
@@ -0,0 +1,19 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu-common.h"
+#include "hw/xen/xen.h"
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+}
+
+int xen_init(void)
+{
+    return -ENOSYS;
+}
+
diff --git a/xen-common.c b/xen-common.c
new file mode 100644
index 0000000..5207318
--- /dev/null
+++ b/xen-common.c
@@ -0,0 +1,123 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "hw/xen/xen_backend.h"
+#include "qmp-commands.h"
+#include "sysemu/char.h"
+
+//#define DEBUG_XEN
+
+#ifdef DEBUG_XEN
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static int store_dev_info(int domid, CharDriverState *cs, const char *string)
+{
+    struct xs_handle *xs = NULL;
+    char *path = NULL;
+    char *newpath = NULL;
+    char *pts = NULL;
+    int ret = -1;
+
+    /* Only continue if we're talking to a pty. */
+    if (strncmp(cs->filename, "pty:", 4)) {
+        return 0;
+    }
+    pts = cs->filename + 4;
+
+    /* We now have everything we need to set the xenstore entry. */
+    xs = xs_open(0);
+    if (xs == NULL) {
+        fprintf(stderr, "Could not contact XenStore\n");
+        goto out;
+    }
+
+    path = xs_get_domain_path(xs, domid);
+    if (path == NULL) {
+        fprintf(stderr, "xs_get_domain_path() error\n");
+        goto out;
+    }
+    newpath = realloc(path, (strlen(path) + strlen(string) +
+                strlen("/tty") + 1));
+    if (newpath == NULL) {
+        fprintf(stderr, "realloc error\n");
+        goto out;
+    }
+    path = newpath;
+
+    strcat(path, string);
+    strcat(path, "/tty");
+    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
+        fprintf(stderr, "xs_write for '%s' fail", string);
+        goto out;
+    }
+    ret = 0;
+
+out:
+    free(path);
+    xs_close(xs);
+
+    return ret;
+}
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+    if (i == 0) {
+        store_dev_info(xen_domid, chr, "/console");
+    } else {
+        char buf[32];
+        snprintf(buf, sizeof(buf), "/device/console/%d", i);
+        store_dev_info(xen_domid, chr, buf);
+    }
+}
+
+
+static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
+{
+    char path[50];
+
+    if (xs == NULL) {
+        fprintf(stderr, "xenstore connection not initialized\n");
+        exit(1);
+    }
+
+    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
+    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
+        fprintf(stderr, "error recording dm state\n");
+        exit(1);
+    }
+}
+
+
+static void xen_change_state_handler(void *opaque, int running,
+                                     RunState state)
+{
+    if (running) {
+        /* record state running */
+        xenstore_record_dm_state(xenstore, "running");
+    }
+}
+
+int xen_init(void)
+{
+    xen_xc = xen_xc_interface_open(0, 0, 0);
+    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
+        xen_be_printf(NULL, 0, "can't open xen interface\n");
+        return -1;
+    }
+    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
+
+    return 0;
+}
+
diff --git a/xen-stub.c b/xen-hvm-stub.c
similarity index 91%
rename from xen-stub.c
rename to xen-hvm-stub.c
index ad189a6..00fa9b3 100644
--- a/xen-stub.c
+++ b/xen-hvm-stub.c
@@ -12,10 +12,7 @@
 #include "hw/xen/xen.h"
 #include "exec/memory.h"
 #include "qmp-commands.h"
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-}
+#include "hw/xen/xen_common.h"
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
@@ -47,24 +44,23 @@ qemu_irq *xen_interrupt_controller_init(void)
     return NULL;
 }
 
-int xen_init(void)
+void xen_register_framebuffer(MemoryRegion *mr)
 {
-    return -ENOSYS;
 }
 
-void xen_register_framebuffer(MemoryRegion *mr)
+void xen_modified_memory(ram_addr_t start, ram_addr_t length)
 {
 }
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+int xen_hvm_init(MemoryRegion **ram_memory)
 {
+    return 0;
 }
 
-void xen_modified_memory(ram_addr_t start, ram_addr_t length)
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
 
-int xen_hvm_init(MemoryRegion **ram_memory)
+void xen_shutdown_fatal_error(const char *fmt, ...)
 {
-    return 0;
 }
diff --git a/xen-all.c b/xen-hvm.c
similarity index 93%
rename from xen-all.c
rename to xen-hvm.c
index 4a594bd..0a49055 100644
--- a/xen-all.c
+++ b/xen-hvm.c
@@ -26,9 +26,9 @@
 #include <xen/hvm/params.h>
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN
+//#define DEBUG_XEN_HVM
 
-#ifdef DEBUG_XEN
+#ifdef DEBUG_XEN_HVM
 #define DPRINTF(fmt, ...) \
     do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
 #else
@@ -569,15 +569,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-    if (enable) {
-        memory_global_dirty_log_start();
-    } else {
-        memory_global_dirty_log_stop();
-    }
-}
-
 /* get the ioreq packets from share mem */
 static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
 {
@@ -880,82 +871,6 @@ static void cpu_handle_ioreq(void *opaque)
     }
 }
 
-static int store_dev_info(int domid, CharDriverState *cs, const char *string)
-{
-    struct xs_handle *xs = NULL;
-    char *path = NULL;
-    char *newpath = NULL;
-    char *pts = NULL;
-    int ret = -1;
-
-    /* Only continue if we're talking to a pty. */
-    if (strncmp(cs->filename, "pty:", 4)) {
-        return 0;
-    }
-    pts = cs->filename + 4;
-
-    /* We now have everything we need to set the xenstore entry. */
-    xs = xs_open(0);
-    if (xs == NULL) {
-        fprintf(stderr, "Could not contact XenStore\n");
-        goto out;
-    }
-
-    path = xs_get_domain_path(xs, domid);
-    if (path == NULL) {
-        fprintf(stderr, "xs_get_domain_path() error\n");
-        goto out;
-    }
-    newpath = realloc(path, (strlen(path) + strlen(string) +
-                strlen("/tty") + 1));
-    if (newpath == NULL) {
-        fprintf(stderr, "realloc error\n");
-        goto out;
-    }
-    path = newpath;
-
-    strcat(path, string);
-    strcat(path, "/tty");
-    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
-        fprintf(stderr, "xs_write for '%s' fail", string);
-        goto out;
-    }
-    ret = 0;
-
-out:
-    free(path);
-    xs_close(xs);
-
-    return ret;
-}
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-    if (i == 0) {
-        store_dev_info(xen_domid, chr, "/console");
-    } else {
-        char buf[32];
-        snprintf(buf, sizeof(buf), "/device/console/%d", i);
-        store_dev_info(xen_domid, chr, buf);
-    }
-}
-
-static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
-{
-    char path[50];
-
-    if (xs == NULL) {
-        fprintf(stderr, "xenstore connection not initialized\n");
-        exit(1);
-    }
-
-    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
-    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
-        fprintf(stderr, "error recording dm state\n");
-        exit(1);
-    }
-}
-
 static void xen_main_loop_prepare(XenIOState *state)
 {
     int evtchn_fd = -1;
@@ -973,17 +888,6 @@ static void xen_main_loop_prepare(XenIOState *state)
 }
 
 
-/* Initialise Xen */
-
-static void xen_change_state_handler(void *opaque, int running,
-                                     RunState state)
-{
-    if (running) {
-        /* record state running */
-        xenstore_record_dm_state(xenstore, "running");
-    }
-}
-
 static void xen_hvm_change_state_handler(void *opaque, int running,
                                          RunState rstate)
 {
@@ -1001,18 +905,6 @@ static void xen_exit_notifier(Notifier *n, void *data)
     xs_daemon_close(state->xenstore);
 }
 
-int xen_init(void)
-{
-    xen_xc = xen_xc_interface_open(0, 0, 0);
-    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
-        xen_be_printf(NULL, 0, "can't open xen interface\n");
-        return -1;
-    }
-    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
-
-    return 0;
-}
-
 static void xen_read_physmap(XenIOState *state)
 {
     XenPhysmap *physmap = NULL;
@@ -1226,3 +1118,12 @@ void xen_modified_memory(ram_addr_t start, ram_addr_t length)
         }
     }
 }
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    if (enable) {
+        memory_global_dirty_log_start();
+    } else {
+        memory_global_dirty_log_stop();
+    }
+}
diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c
new file mode 100644
index 0000000..f4ddf53
--- /dev/null
+++ b/xen-mapcache-stub.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2014       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "config.h"
+
+#include <sys/resource.h>
+
+#include "hw/xen/xen_backend.h"
+
+#include <xen/hvm/params.h>
+
+#include "sysemu/xen-mapcache.h"
+#include "trace.h"
+
+uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
+                       uint8_t lock)
+{
+    abort();
+    return NULL;
+}
+
+ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+{
+    abort();
+    return 0;
+}
+
+void xen_invalidate_map_cache_entry(uint8_t *buffer)
+{
+    abort();
+}
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksd-0000GQ-RU; Mon, 27 Jan 2014 12:01:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7ksb-0000Co-DZ
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:01:29 +0000
Received: from [85.158.143.35:59630] by server-3.bemta-4.messagelabs.com id
	4D/08-32360-89A46E25; Mon, 27 Jan 2014 12:01:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390824085!1023153!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32568 invoked from network); 27 Jan 2014 12:01:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770439"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:16 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7ksN-0000QC-1x;
	Mon, 27 Jan 2014 12:01:15 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Mon, 27 Jan 2014 12:01:13 +0000
Message-ID: <1390824074-21006-6-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
References: <1390824074-21006-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, peter.maydell@linaro.org, pbonzini@redhat.com,
	Wei Liu <wei.liu2@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RFC V2 5/6] xen: implement Xen PV target
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Basically it's a dummy CPU that doesn't do anything. This patch contains
the hooks necessary to make QEMU compile.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 arch_init.c                |    2 ++
 cpu-exec.c                 |    2 ++
 include/sysemu/arch_init.h |    1 +
 target-xenpv/Makefile.objs |    1 +
 target-xenpv/cpu-qom.h     |   64 ++++++++++++++++++++++++++++++++++++++++++
 target-xenpv/cpu.h         |   66 ++++++++++++++++++++++++++++++++++++++++++++
 target-xenpv/helper.c      |   32 +++++++++++++++++++++
 target-xenpv/translate.c   |   27 ++++++++++++++++++
 8 files changed, 195 insertions(+)
 create mode 100644 target-xenpv/Makefile.objs
 create mode 100644 target-xenpv/cpu-qom.h
 create mode 100644 target-xenpv/cpu.h
 create mode 100644 target-xenpv/helper.c
 create mode 100644 target-xenpv/helper.h
 create mode 100644 target-xenpv/translate.c

diff --git a/arch_init.c b/arch_init.c
index e0acbc5..a15de61 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -101,6 +101,8 @@ int graphic_depth = 32;
 #define QEMU_ARCH QEMU_ARCH_XTENSA
 #elif defined(TARGET_UNICORE32)
 #define QEMU_ARCH QEMU_ARCH_UNICORE32
+#elif defined(TARGET_XENPV)
+#define QEMU_ARCH QEMU_ARCH_XENPV
 #endif
 
 const uint32_t arch_type = QEMU_ARCH;
diff --git a/cpu-exec.c b/cpu-exec.c
index 30cfa2a..531e92a 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -258,6 +258,7 @@ int cpu_exec(CPUArchState *env)
 #elif defined(TARGET_CRIS)
 #elif defined(TARGET_S390X)
 #elif defined(TARGET_XTENSA)
+#elif defined(TARGET_XENPV)
     /* XXXXX */
 #else
 #error unsupported target CPU
@@ -713,6 +714,7 @@ int cpu_exec(CPUArchState *env)
 #elif defined(TARGET_CRIS)
 #elif defined(TARGET_S390X)
 #elif defined(TARGET_XTENSA)
+#elif defined(TARGET_XENPV)
     /* XXXXX */
 #else
 #error unsupported target CPU
diff --git a/include/sysemu/arch_init.h b/include/sysemu/arch_init.h
index be71bca..66ea63f 100644
--- a/include/sysemu/arch_init.h
+++ b/include/sysemu/arch_init.h
@@ -22,6 +22,7 @@ enum {
     QEMU_ARCH_OPENRISC = 8192,
     QEMU_ARCH_UNICORE32 = 0x4000,
     QEMU_ARCH_MOXIE = 0x8000,
+    QEMU_ARCH_XENPV = 0x10000,
 };
 
 extern const uint32_t arch_type;
diff --git a/target-xenpv/Makefile.objs b/target-xenpv/Makefile.objs
new file mode 100644
index 0000000..8a60866
--- /dev/null
+++ b/target-xenpv/Makefile.objs
@@ -0,0 +1 @@
+obj-y += helper.o translate.o
diff --git a/target-xenpv/cpu-qom.h b/target-xenpv/cpu-qom.h
new file mode 100644
index 0000000..61135a6
--- /dev/null
+++ b/target-xenpv/cpu-qom.h
@@ -0,0 +1,64 @@
+/*
+ * QEMU XenPV CPU
+ *
+ * Copyright (c) 2014 Citrix Systems UK Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+#ifndef QEMU_XENPV_CPU_QOM_H
+#define QEMU_XENPV_CPU_QOM_H
+
+#include "qom/cpu.h"
+#include "cpu.h"
+#include "qapi/error.h"
+
+#define TYPE_XENPV_CPU "xenpv-cpu"
+
+/**
+ * XenPVCPUClass:
+ * @parent_realize: The parent class' realize handler.
+ * @parent_reset: The parent class' reset handler.
+ *
+ */
+typedef struct XenPVCPUClass {
+    /*< private >*/
+    CPUClass parent_class;
+    /*< public >*/
+
+    DeviceRealize parent_realize;
+    void (*parent_reset)(CPUState *cpu);
+} XenPVCPUClass;
+
+/**
+ * XenPVCPU:
+ * @env: #CPUXenPVState
+ *
+ */
+typedef struct XenPVCPU {
+    /*< private >*/
+    CPUState parent_obj;
+    /*< public >*/
+    CPUXenPVState env;
+} XenPVCPU;
+
+static inline XenPVCPU *noarch_env_get_cpu(CPUXenPVState *env)
+{
+    return container_of(env, XenPVCPU, env);
+}
+
+#define ENV_GET_CPU(e) CPU(noarch_env_get_cpu(e))
+
+#endif
+
diff --git a/target-xenpv/cpu.h b/target-xenpv/cpu.h
new file mode 100644
index 0000000..0e65707
--- /dev/null
+++ b/target-xenpv/cpu.h
@@ -0,0 +1,66 @@
+/*
+ * XenPV virtual CPU header
+ *
+ *  Copyright (c) 2014 Citrix Systems UK Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef CPU_XENPV_H
+#define CPU_XENPV_H
+
+#include "config.h"
+#include "qemu-common.h"
+
+#define TARGET_LONG_BITS 64
+#define TARGET_PAGE_BITS 12
+#define TARGET_PHYS_ADDR_SPACE_BITS 52
+#define TARGET_VIRT_ADDR_SPACE_BITS 47
+#define NB_MMU_MODES 1
+
+#define CPUArchState struct CPUXenPVState
+
+#include "exec/cpu-defs.h"
+
+typedef struct CPUXenPVState {
+    CPU_COMMON
+} CPUXenPVState;
+
+#include "cpu-qom.h"
+
+static inline int cpu_mmu_index (CPUXenPVState *env)
+{
+    abort();
+}
+
+static inline void cpu_get_tb_cpu_state(CPUXenPVState *env, target_ulong *pc,
+                                        target_ulong *cs_base, int *flags)
+{
+    abort();
+}
+
+static inline bool cpu_has_work(CPUState *cs)
+{
+    abort();
+    return false;
+}
+
+int cpu_xenpv_exec(CPUXenPVState *s);
+#define cpu_exec cpu_xenpv_exec
+
+#include "exec/cpu-all.h"
+
+#include "exec/exec-all.h"
+
+#endif /* CPU_XENPV_H */
+
diff --git a/target-xenpv/helper.c b/target-xenpv/helper.c
new file mode 100644
index 0000000..225a063
--- /dev/null
+++ b/target-xenpv/helper.c
@@ -0,0 +1,32 @@
+#include "cpu.h"
+#include "helper.h"
+#if !defined(CONFIG_USER_ONLY)
+#include "exec/softmmu_exec.h"
+#endif
+
+#if !defined(CONFIG_USER_ONLY)
+
+#define MMUSUFFIX _mmu
+
+#define SHIFT 0
+#include "exec/softmmu_template.h"
+
+#define SHIFT 1
+#include "exec/softmmu_template.h"
+
+#define SHIFT 2
+#include "exec/softmmu_template.h"
+
+#define SHIFT 3
+#include "exec/softmmu_template.h"
+
+#endif
+
+#if !defined(CONFIG_USER_ONLY)
+void tlb_fill(CPUXenPVState *env, target_ulong addr, int is_write,
+              int mmu_idx, uintptr_t retaddr)
+{
+    abort();
+}
+#endif
+
diff --git a/target-xenpv/helper.h b/target-xenpv/helper.h
new file mode 100644
index 0000000..e69de29
diff --git a/target-xenpv/translate.c b/target-xenpv/translate.c
new file mode 100644
index 0000000..4bc84e5
--- /dev/null
+++ b/target-xenpv/translate.c
@@ -0,0 +1,27 @@
+#include <inttypes.h>
+#include "qemu/host-utils.h"
+#include "cpu.h"
+#include "disas/disas.h"
+#include "tcg-op.h"
+
+#include "helper.h"
+#define GEN_HELPER 1
+#include "helper.h"
+
+void gen_intermediate_code(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void gen_intermediate_code_pc(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void restore_state_to_opc(CPUXenPVState *env, TranslationBlock *tb,
+                          int pc_pos)
+{
+    abort();
+}
+
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+#include "cpu.h"
+#include "disas/disas.h"
+#include "tcg-op.h"
+
+#include "helper.h"
+#define GEN_HELPER 1
+#include "helper.h"
+
+void gen_intermediate_code(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void gen_intermediate_code_pc(CPUXenPVState *env, TranslationBlock *tb)
+{
+    abort();
+}
+
+void restore_state_to_opc(CPUXenPVState *env, TranslationBlock *tb,
+                          int pc_pos)
+{
+    abort();
+}
+
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
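[Archive note: the env-to-CPU helper in cpu-qom.h above uses the standard container_of upcast. A self-contained sketch of the pattern, with hypothetical stand-in struct names rather than the real QEMU types, looks like this:]

```c
#include <stddef.h>

/* container_of: recover a pointer to the enclosing struct from a
 * pointer to one of its members (same idea as QEMU's macro). */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-ins for CPUState and CPUXenPVState. */
typedef struct { int id; } ParentStub;
typedef struct { int flags; } EnvStub;

/* Mirrors the XenPVCPU layout: parent object first, then env. */
typedef struct DemoCPU {
    ParentStub parent_obj;
    EnvStub env;
} DemoCPU;

/* Same shape as noarch_env_get_cpu() in the patch above. */
static DemoCPU *demo_env_get_cpu(EnvStub *env)
{
    return container_of(env, DemoCPU, env);
}
```

Given a `CPUXenPVState *`, the macro subtracts the member offset to land back at the start of the enclosing XenPVCPU, which is what ENV_GET_CPU() relies on.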

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:01:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:01:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ksi-0000LQ-MN; Mon, 27 Jan 2014 12:01:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7ksh-0000JY-9F
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:01:35 +0000
Received: from [85.158.139.211:14112] by server-8.bemta-5.messagelabs.com id
	96/81-29838-E9A46E25; Mon, 27 Jan 2014 12:01:34 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390824092!11975317!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20358 invoked from network); 27 Jan 2014 12:01:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:01:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96770547"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:01:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:01:30 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7ksb-0000QR-Nv;
	Mon, 27 Jan 2014 12:01:29 +0000
Message-ID: <52E64A95.2050201@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:01:25 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Pranavkumar Sawargaonkar
	<pranavkumar@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390822680.12230.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1390822680.12230.24.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 11:38 AM, Ian Campbell wrote:
> On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds reset support for the xgene arm64 platform.
>>
>> V6:
>> - Incorporating comments received on V5 patch.
>> V5:
>> - Incorporating comments received on V4 patch.
>> V4:
>> - Removing TODO comment about retrieving reset base address from dts
>>    as that is done now.
>> V3:
>> - Retrieving reset base address and reset mask from device tree.
>> - Removed unnecessary header files included earlier.
>> V2:
>> - Removed unnecessary mdelay in code.
>> - Adding iounmap of the base address.
>> V1:
>> - Initial patch.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> Looks good, thanks:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> George, I'd like to put this in 4.4. The impact on non-xgene is
> non-existent and I think reset is basic functionality which we should
> enable.

So without this patch, support for xgene would be considered 
"experimental" (missing basic functionality)?

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:04:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kvs-0001GZ-Ct; Mon, 27 Jan 2014 12:04:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7kvr-0001GL-Ks
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:04:51 +0000
Received: from [85.158.139.211:48043] by server-11.bemta-5.messagelabs.com id
	38/E9-23268-26B46E25; Mon, 27 Jan 2014 12:04:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390824288!9456243!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7108 invoked from network); 27 Jan 2014 12:04:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:04:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96771234"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:04:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:04:20 -0500
Message-ID: <1390824259.12230.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:04:19 +0000
In-Reply-To: <52E64A95.2050201@eu.citrix.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390822680.12230.24.camel@kazak.uk.xensource.com>
	<52E64A95.2050201@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 12:01 +0000, George Dunlap wrote:
> On 01/27/2014 11:38 AM, Ian Campbell wrote:
> > On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
> >> This patch adds reset support for the xgene arm64 platform.
> >>
> >> V6:
> >> - Incorporating comments received on V5 patch.
> >> V5:
> >> - Incorporating comments received on V4 patch.
> >> V4:
> >> - Removing TODO comment about retrieving reset base address from dts
> >>    as that is done now.
> >> V3:
> >> - Retrieving reset base address and reset mask from device tree.
> >> - Removed unnecessary header files included earlier.
> >> V2:
> >> - Removed unnecessary mdelay in code.
> >> - Adding iounmap of the base address.
> >> V1:
> >> - Initial patch.
> >>
> >> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> > Looks good, thanks:
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > George, I'd like to put this in 4.4. The impact on non-xgene is
> > non-existent and I think reset is basic functionality which we should
> > enable.
> 
> So without this patch, support for xgene would be considered 
> "experimental" (missing basic functionality)?

I don't think I would go so far as to say "experimental"; we could
probably live with it, but having to go find the board and press the
tiny little button to reboot it is certainly not "complete".

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:07:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kxz-0001XJ-10; Mon, 27 Jan 2014 12:07:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7kxw-0001Wy-GP
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:07:00 +0000
Received: from [85.158.137.68:12244] by server-17.bemta-3.messagelabs.com id
	20/51-15965-3EB46E25; Mon, 27 Jan 2014 12:06:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390824417!11556271!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13180 invoked from network); 27 Jan 2014 12:06:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:06:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96772277"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:06:57 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:06:56 -0500
Message-ID: <52E64BDF.8040105@citrix.com>
Date: Mon, 27 Jan 2014 13:06:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <52DECA4E.4080004@citrix.com>
	<20140124183144.GE15785@phenom.dumpdata.com>
In-Reply-To: <20140124183144.GE15785@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 19:31, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 21, 2014 at 08:28:14PM +0100, Roger Pau Monné wrote:
>> Hello,
>>
>> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
>> cpuid feature flags exposed to PVH guests are kind of strange; this is
>> the output of the feature flags as seen by an HVM domain:
>>
>
> What about a PV guest? I presume if you ran a NetBSD PV guest it would
> give a format similar to this?

I guess so; the feature flags reported by NetBSD PV will probably be the
same as the ones reported by FreeBSD PVH (unless NetBSD PV also does
some kind of pre-filtering).

>
>> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
>> Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
>> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
>> AMD Features2=0x1<LAHF>
>>
>> And this is what a PVH domain sees when running on the same hardware:
>>
>> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
>> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
>> AMD Features=0x20100800<SYSCALL,NX,LM>
>> AMD Features2=0x1<LAHF>
>>
>> I would expect the feature flags to be quite similar between an HVM
>> domain and a PVH domain (since they both run inside of an HVM container).
>> AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
>> guests. Also, is there any reason why PVH guests have the ACPI, SS and
>> CLFLUSH feature flags but not HVM?
>
> S5?

SS - CPU cache supports self-snoop.

Not sure if that should be enabled or not for PVH.

>
> ACPI is enabled for PV I think, but Linux PV guests disable it
> as there are no ACPI tables in PV mode:
>
>  429         if (!xen_initial_domain())
>  430                 cpuid_leaf1_edx_mask &=
>  431                         ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */
>
> CLFLUSH - no idea why it would be disabled.
>
> The rdtscp should be enabled. In the past I think it was related to the
> 'timer=' option. We would either trap it, or emulate it with a constant
> value, or pass it through. It should be passing it through, but maybe there
> is a bug?
>
> PSE, PGE, PSE36, PG1GB, etc., should all be exposed. Actually the PG1GB
> is not exposed because of another bug:
> http://www.gossamer-threads.com/lists/xen/devel/313596

I think so; now that we run inside of an HVM container we should be able
to make use of all those.

>
>>
>> Most (if not all) of this probably comes from the fact that we are
>> reporting the same feature flags as pure PV guests, but I see no reason
>> to do that for PVH guests; we should decide what's supported on PVH and
>> set the feature flags accordingly.
>
> Right, and also have a nice policy. The problem is that we set/unset
> the cpuid flags in the toolstack (and in two places, depending on the
> architecture) and also in the hypervisor.

Yes, all this cpuid flag stuff seems like a mess to me. There are so
many places where we enable or blacklist certain cpu flags that it makes
me wonder whether it would be more sane to define the set of flags that
an HVM container supports and then blacklist some of them if they are not
actually implemented/usable on the specific kind of guest.
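That approach (a container-level feature set with per-guest-type blacklisting) can be sketched in a few lines of C. The FEAT_* macros and filter functions below are illustrative stand-ins, not the real cpufeature definitions or any actual Xen/Linux code:

```c
#include <stdint.h>

/* Hypothetical leaf-1 feature bits (illustrative names; the real
 * values live in the cpufeature headers). */
#define FEAT_EDX_MTRR   (1u << 12)
#define FEAT_EDX_ACPI   (1u << 22)
#define FEAT_EDX_ACC    (1u << 29)  /* thermal monitoring */
#define FEAT_ECX_X2APIC (1u << 21)

/* Start from everything the HVM container reports, then clear the
 * bits a PVH guest cannot actually use. */
static uint32_t pvh_filter_leaf1_edx(uint32_t edx)
{
    return edx & ~(FEAT_EDX_MTRR | FEAT_EDX_ACPI | FEAT_EDX_ACC);
}

static uint32_t pvh_filter_leaf1_ecx(uint32_t ecx)
{
    return ecx & ~FEAT_ECX_X2APIC;
}
```

The point is that the whitelist lives in one place, so the toolstack and hypervisor cannot drift apart the way the current scattered masks do.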

> Anyhow, these I know we disable:
>
>  425         cpuid_leaf1_edx_mask =
>  426                 ~((1 << X86_FEATURE_MTRR) |  /* disable MTRR */
>  427                   (1 << X86_FEATURE_ACC));   /* thermal monitoring */
>
> And I think your patch to the Xen hypervisor does it too - it clears
> the MTRR by default now. The ACC is (if my memory is correct) for
> the Pentium 4 and such - we can disable it as well.
>
>  433         cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));
>
> And this we definitely need to disable. The x2APIC should not
> be exposed, as we want to use Xen's version of the APIC ops. And
> if the x2APIC bit is enabled in Linux, there is some other code
> (the NMI handler) that will use it without going through the APIC ops.
> And blow up :-(
>
> Then there is MWAIT. Actually we can put that to one side.
> I know it is important for dom0, but since we don't have those
> patches in yet, we can ignore it. However, the hypervisor
> (pv_cpuid) disables it.
>
> There is also the XSAVE business:
>
> xsave_mask =
>  440                 (1 << (X86_FEATURE_XSAVE % 32)) |
>  441                 (1 << (X86_FEATURE_OSXSAVE % 32));
>  442
>  443         /* Xen will set CR4.OSXSAVE if supported and not disabled by force */
>  444         if ((cx & xsave_mask) != xsave_mask)
>  445                 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */
>
> Which I am not clear about.
>
> It looks like, to make 'cpuid' uniform in the hypervisor, we need to
> somehow glue hvm_cpuid and pv_cpuid together with some form of
> tables/lookups.
>
> And make sure that the same logic is reflected in
> xc_cpuid_x86.c (the toolstack).

Agreed. On a slightly different topic, why do we enable the APIC flag for
PV(H) guests?

We certainly have no APIC, and it makes me wonder if we should disable it
now and enable it once we have hardware APIC virtualization in place.
This would allow PVH to either use the traditional PV style, or a
hardware-virtualized APIC if we enable it for PVH guests (and make the
guest aware of it by turning on the flag).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:07:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kxz-0001XJ-10; Mon, 27 Jan 2014 12:07:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7kxw-0001Wy-GP
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:07:00 +0000
Received: from [85.158.137.68:12244] by server-17.bemta-3.messagelabs.com id
	20/51-15965-3EB46E25; Mon, 27 Jan 2014 12:06:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390824417!11556271!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13180 invoked from network); 27 Jan 2014 12:06:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:06:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96772277"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:06:57 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:06:56 -0500
Message-ID: <52E64BDF.8040105@citrix.com>
Date: Mon, 27 Jan 2014 13:06:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <52DECA4E.4080004@citrix.com>
	<20140124183144.GE15785@phenom.dumpdata.com>
In-Reply-To: <20140124183144.GE15785@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/01/14 19:31, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 21, 2014 at 08:28:14PM +0100, Roger Pau Monné wrote:
>> Hello,
>>
>> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
>> cpuid feature flags exposed to PVH guests are kind of strange. This is
>> the output of the feature flags as seen by an HVM domain:
>>
>
> What about a PV guest? I presume that if you ran a NetBSD PV guest it
> would give a format similar to this?

I guess so; the feature flags reported by NetBSD PV will probably be the
same as the ones reported by FreeBSD PVH (unless NetBSD PV also does
some kind of pre-filtering).

>
>> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
>> Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
>> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
>> AMD Features2=0x1<LAHF>
>>
>> And this is what a PVH domain sees when running on the same hardware:
>>
>> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
>> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
>> AMD Features=0x20100800<SYSCALL,NX,LM>
>> AMD Features2=0x1<LAHF>
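For readers not used to this dmesg format: each Features line is just a
32-bit CPUID word rendered one name per set bit. A minimal C sketch of
the decoding, using a made-up subset of the leaf-1 EDX name table (not
FreeBSD's actual table), could look like:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: a few CPUID.1:EDX bit positions per the Intel SDM.
 * This is NOT the full table FreeBSD uses. */
static const char *leaf1_edx_names[32] = {
    [0] = "FPU", [3] = "PSE", [4] = "TSC", [6] = "PAE",
    [9] = "APIC", [13] = "PGE", [23] = "MMX", [25] = "SSE", [26] = "SSE2",
};

/* Render a feature word the way dmesg does: <NAME1,NAME2,...> */
static void render_features(uint32_t edx, char *buf, size_t len)
{
    buf[0] = '\0';
    strncat(buf, "<", len - 1);
    for (int bit = 0; bit < 32; bit++) {
        if ((edx & (1u << bit)) && leaf1_edx_names[bit]) {
            if (buf[1] != '\0')  /* not the first name: add separator */
                strncat(buf, ",", len - strlen(buf) - 1);
            strncat(buf, leaf1_edx_names[bit], len - strlen(buf) - 1);
        }
    }
    strncat(buf, ">", len - strlen(buf) - 1);
}
```

Comparing two guests' Features lines is then just comparing which bits
survived the masking done by the toolstack/hypervisor.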
>>
>> I would expect the feature flags to be quite similar between an HVM
>> domain and a PVH domain (since they both run inside of an HVM container).
>> AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
>> guests. Also, is there any reason why PVH guests have the ACPI, SS and
>> CLFLUSH feature flags but HVM guests don't?
>
> S5?

SS - CPU cache supports self-snoop.

Not sure if that should be enabled or not for PVH.

>
> ACPI is enabled for PV I think, but Linux PV guests disable it,
> as there are no ACPI tables in PV mode:
>
>  429         if (!xen_initial_domain())
>  430                 cpuid_leaf1_edx_mask &=
>  431                         ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */

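As a self-contained sketch of what that Linux snippet is doing (bit
positions 12 and 22 match CPUID.1:EDX's MTRR and ACPI bits; the helper
name and structure are made up for illustration):

```c
#include <stdint.h>

/* Hypothetical names for two CPUID.1:EDX bit positions (per the SDM:
 * MTRR is bit 12, ACPI is bit 22). Illustration only. */
#define FEAT_MTRR 12
#define FEAT_ACPI 22

/* Build the EDX mask a PV guest kernel would apply to CPUID leaf 1:
 * start from all-ones and clear bits for features PV cannot use. */
static uint32_t pv_leaf1_edx_mask(int is_initial_domain)
{
    uint32_t mask = ~0u;

    mask &= ~(1u << FEAT_MTRR);      /* no MTRRs under PV */
    if (!is_initial_domain)
        mask &= ~(1u << FEAT_ACPI);  /* no ACPI tables for a domU */
    return mask;
}
```

The kernel then ANDs this mask into whatever EDX the hypervisor reports,
so cleared bits never reach the guest's feature detection.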
>
> CLFLUSH - no idea why it would be disabled.
>
> The rdtscp should be enabled. In the past I think it was related to
> the 'timer=' option. We would either trap it, emulate it with a constant
> value, or pass it through. It should be passing it through, but maybe
> there is a bug?
>
> PSE, PGE, PSE36, PG1GB, etc. should all be exposed. Actually, PG1GB
> is not exposed because of another bug:
> http://www.gossamer-threads.com/lists/xen/devel/313596

I think so; now that we run inside of an HVM container we should be able
to make use of all of those.

>
>>
>> Most (if not all) of this probably comes from the fact that we are
>> reporting the same feature flags as pure PV guests, but I see no reason
>> to do that for PVH guests; we should decide what's supported on PVH and
>> set the feature flags accordingly.
>
> Right, and we should also have a clear policy. The problem is that we
> set/unset the cpuid flags in the toolstack (and in two places, depending
> on the architecture) and also in the hypervisor.

Yes, all this cpuid flag handling seems like a mess to me. There are so
many places where we enable or blacklist certain cpu flags that it makes
me wonder whether it would be saner to define a set of flags that an HVM
container supports and then blacklist some of them if they are not
actually implemented/usable on the specific kind of guest.

> Anyhow, these I know we disable:
>
>  425         cpuid_leaf1_edx_mask =
>  426                 ~((1 << X86_FEATURE_MTRR) |  /* disable MTRR */
>  427                   (1 << X86_FEATURE_ACC));   /* thermal monitoring */
>
> And I think your patch to the Xen hypervisor does it too - it clears
> the MTRR flag by default now. The ACC flag is (if my memory is correct)
> for the Pentium 4 and such - we can disable it as well.
>
>  433         cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));
>
> And this we definitely need to disable. The x2APIC flag should not
> be exposed, as we want to use Xen's version of the APIC ops. And
> if the x2APIC bit is enabled in Linux, there is some other code
> (the NMI handler) that will use it without going through the APIC
> ops - and blow up :-(
>
> Then there is MWAIT. Actually, we can put that to one side: I know it
> is important for dom0, but since we don't have those patches in yet,
> we can ignore it. However, the hypervisor (pv_cpuid) disables it.
>
> There is also the XSAVE business:
>
>         xsave_mask =
>  440                 (1 << (X86_FEATURE_XSAVE % 32)) |
>  441                 (1 << (X86_FEATURE_OSXSAVE % 32));
>
>  443         /* Xen will set CR4.OSXSAVE if supported and not disabled by force */
>  444         if ((cx & xsave_mask) != xsave_mask)
>  445                 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */
>
> Which I am not clear about.
>
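In isolation, the quoted check is an all-or-nothing test on a two-bit
mask: either both XSAVE and OSXSAVE are reported, or both are hidden,
since advertising one without the other would be inconsistent. A
standalone sketch (helper name hypothetical; bit positions 26/27 are
the real CPUID.1:ECX positions of XSAVE/OSXSAVE):

```c
#include <stdint.h>

#define FEAT_XSAVE   26   /* CPUID.1:ECX bit positions */
#define FEAT_OSXSAVE 27

/* Mirror of the quoted logic: keep XSAVE/OSXSAVE in the guest-visible
 * ECX mask only if the hypervisor-provided ECX (cx) has BOTH bits set;
 * otherwise clear both. */
static uint32_t filter_xsave(uint32_t ecx_mask, uint32_t cx)
{
    uint32_t xsave_mask = (1u << FEAT_XSAVE) | (1u << FEAT_OSXSAVE);

    if ((cx & xsave_mask) != xsave_mask)
        ecx_mask &= ~xsave_mask;  /* disable XSAVE & OSXSAVE */
    return ecx_mask;
}
```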
> It looks like, to get a uniform 'cpuid' view in the hypervisor, we
> need to somehow glue hvm_cpuid and pv_cpuid together with some form
> of tables/lookups.
>
> And make sure that the same logic is reflected in
> xc_cpuid_x86.c (toolstack).
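The tables/lookups idea above could be as simple as one table describing,
per guest type, which leaf-1 EDX bits to force off, consulted by both the
PV and HVM cpuid paths (and, in principle, mirrored by xc_cpuid_x86.c).
All names and bit choices in this sketch are hypothetical, not actual
Xen code:

```c
#include <stdint.h>

/* Guest flavours sharing one cpuid policy table (illustrative). */
enum guest_type { GUEST_PV, GUEST_PVH, GUEST_HVM, GUEST_MAX };

/* Per-guest-type bits to clear from CPUID.1:EDX. The example bits are
 * MTRR (12) and ACPI (22); a real table would cover every leaf. */
static const uint32_t leaf1_edx_clear[GUEST_MAX] = {
    [GUEST_PV]  = (1u << 12) | (1u << 22),  /* e.g. no MTRR, no ACPI */
    [GUEST_PVH] = (1u << 12),               /* e.g. no MTRR only */
    [GUEST_HVM] = 0,                        /* hardware-backed: expose all */
};

/* Single lookup used by every cpuid path instead of ad-hoc masking. */
static uint32_t policy_leaf1_edx(enum guest_type t, uint32_t hw_edx)
{
    return hw_edx & ~leaf1_edx_clear[t];
}
```

The point of the table is that there is exactly one place to audit when
deciding what a PVH guest should see, rather than scattered masks in
pv_cpuid, hvm_cpuid and the toolstack.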

Agreed. On a slightly different topic, why do we enable the APIC flag
for PV(H) guests?

We certainly have no APIC, which makes me wonder if we should disable it
now and enable it once we have hardware APIC virtualization in place.
This would allow PVH to use either the traditional PV style or a
hardware-virtualized APIC, if we enable it for PVH guests (and make the
guest aware of it by setting the flag).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7kyR-0001ak-ES; Mon, 27 Jan 2014 12:07:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7kyQ-0001aT-29
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:07:30 +0000
Received: from [85.158.143.35:64228] by server-2.bemta-4.messagelabs.com id
	4C/D4-11386-10C46E25; Mon, 27 Jan 2014 12:07:29 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390824446!1029522!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27234 invoked from network); 27 Jan 2014 12:07:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:07:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96772400"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:07:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:07:25 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7kyL-0000W2-GD;
	Mon, 27 Jan 2014 12:07:25 +0000
Message-ID: <52E64BF9.8080608@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:07:21 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>	
	<1390822680.12230.24.camel@kazak.uk.xensource.com>	
	<52E64A95.2050201@eu.citrix.com>
	<1390824259.12230.26.camel@kazak.uk.xensource.com>
In-Reply-To: <1390824259.12230.26.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 12:04 PM, Ian Campbell wrote:
> On Mon, 2014-01-27 at 12:01 +0000, George Dunlap wrote:
>> On 01/27/2014 11:38 AM, Ian Campbell wrote:
>>> On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
>>>> This patch adds reset support for the xgene arm64 platform.
>>>>
>>>> V6:
>>>> - Incorporating comments received on V5 patch.
>>>> V5:
>>>> - Incorporating comments received on V4 patch.
>>>> V4:
>>>> - Removing TODO comment about retrieving reset base address from dts
>>>>     as that is done now.
>>>> V3:
>>>> - Retrieving reset base address and reset mask from device tree.
>>>> - Removed unnecessary header files included earlier.
>>>> V2:
>>>> - Removed unnecessary mdelay in code.
>>>> - Adding iounmap of the base address.
>>>> V1:
>>>> -Initial patch.
>>>>
>>>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>>>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>>> Looks good, thanks:
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> George, I'd like to put this in 4.4. The impact on non-xgene is
>>> non-existent, and I think reset is basic functionality which we should
>>> enable.
>> So without this patch, support for xgene would be considered
>> "experimental" (missing basic functionality)?
> I don't think I would go so far as to say "experimental", we could
> probably live with it, but having to go find the board and press the
> tiny little button to reboot it is certainly not "complete".

Yes, "can't reboot via software" is a pretty big lack of functionality; 
it weighs in pretty heavily against whatever potential regressions might 
be introduced.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Jan 27 12:13:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7l3f-000297-FG; Mon, 27 Jan 2014 12:12:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7l3e-00028x-2z
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:12:54 +0000
Received: from [85.158.139.211:14100] by server-11.bemta-5.messagelabs.com id
	FA/6D-23268-34D46E25; Mon, 27 Jan 2014 12:12:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390824769!12157667!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29085 invoked from network); 27 Jan 2014 12:12:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:12:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96774098"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:12:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:12:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7l3X-0000dN-NF;
	Mon, 27 Jan 2014 12:12:47 +0000
Date: Mon, 27 Jan 2014 12:12:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
> Anthony PERARD wrote on 2014-01-09:
> > On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> >> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> >>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> >>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> >>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> >>>>> [...]
> >>>>>>> Does Xen report something like:
> >>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46:
> >>>>>>> 131329 >
> >>>>>>> 131328
> >>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent:
> >>>>>>> id=46
> >>>>>>> memflags=0 (62 of 64)
> >>>>>>> 
> >>>>>>> ?
> >>>>>>> 
> >>>>>>> (I tried to reproduce the issue by simply adding many emulated
> >>>>>>> e1000 devices in QEMU :) )
> >>>>>>> 
> >>> 
> >>>> -bash-4.1# lspci -s 01:00.0 -v
> >>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit
> >>>> Network
> > Connection (rev 01)
> >>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server
> >>>>         Adapter Flags: fast devsel, IRQ 16 Memory at fbc20000
> >>>>         (32-bit, non-prefetchable) [disabled] [size=128K] Memory at
> >>>>         fb800000 (32-bit, non-prefetchable) [disabled] [size=4M] I/O
> >>>>         ports at e020 [disabled] [size=32] Memory at fbc44000
> >>>>         (32-bit, non-prefetchable) [disabled] [size=16K] Expansion
> >>>>         ROM at fb400000 [disabled] [size=4M]
> >>> 
> >>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> >>> allocate memory for it. We will maybe have to find another way.
> >>> qemu-trad does not seem to allocate memory, but I haven't gone
> >>> very far in trying to check that.
> >> 
> >> And indeed that is the case. The "Fix" below fixes it.
> >> 
> >> 
> >> Based on that and this guest config:
> >> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> >> memory = 2048
> >> boot="d"
> >> maxvcpus=32
> >> vcpus=1
> >> serial='pty'
> >> vnclisten="0.0.0.0"
> >> name="latest"
> >> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ] pci = ["01:00.0"]
> >> 
> >> I can boot the guest.
> > 
> > And can you access the ROM from the guest ?
> > 
> > 
> > Also, I have another patch that initializes the PCI ROM BAR like any
> > other BAR. In this case, if qemu is involved in the access to the ROM,
> > it will print an error, as is the case for the other BARs.
> > 
> > I tried to test it, but it was with an embedded VGA card. When I dumped
> > the ROM, I got the same one as the emulated card instead of the ROM
> > from the device.
> > 
> > 
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 6dd7a68..2bbdb6d 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> > 
> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> > -                                      "xen-pci-pt-rom", d->rom.size);
> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> > +                              "xen-pci-pt-rom", d->rom.size);
> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >                           &s->rom);
> >
> 
> Hi, Anthony,
> 
> Is your fix the final solution for this issue? If yes, will you push it before the Xen 4.4 release?

I included this patch in the last pull request I sent to Anthony
Liguori:

http://marc.info/?l=qemu-devel&m=138997319906095

It hasn't been pulled yet, but I would expect it to go upstream soon.

Regarding the 4.4 release, we are trying to fix a couple of other
serious bugs in the qemu-xen tree right now, but it is still conceivable
to have this fix backported to the tree in time for the release.
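As an aside, the idea behind the quoted xen_pt.c change - routing guest
ROM accesses through I/O callbacks so they trap into the emulator,
instead of backing the region with emulator-allocated RAM - can be
modelled with a toy ops table. The types below are stand-ins, not QEMU's
real MemoryRegionOps:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for an "io"-style region: every access is dispatched through
 * a callback, so no host memory has to back the ROM contents. */
struct mmio_ops {
    uint64_t (*read)(void *opaque, uint64_t addr, unsigned size);
};

struct mmio_region {
    const struct mmio_ops *ops;
    void *opaque;
    uint64_t size;
};

/* Guest-side access path: bounds-check, then dispatch via the ops table. */
static uint64_t mmio_read(struct mmio_region *r, uint64_t addr, unsigned size)
{
    if (addr >= r->size)
        return ~0ull;  /* open bus: reads outside the region return all-ones */
    return r->ops->read(r->opaque, addr, size);
}

/* A trivial backend standing in for "forward to the physical ROM". */
static uint64_t rom_backend_read(void *opaque, uint64_t addr, unsigned size)
{
    const uint8_t *rom = opaque;
    (void)size;  /* illustration: byte reads only */
    return rom[addr];
}
```

With a RAM-backed "rom device" region, the emulator would have had to
populate that memory up front (the over-allocation seen in the Xen log);
with the callback style, the read is serviced on demand.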

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:13:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7l3f-000297-FG; Mon, 27 Jan 2014 12:12:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7l3e-00028x-2z
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 12:12:54 +0000
Received: from [85.158.139.211:14100] by server-11.bemta-5.messagelabs.com id
	FA/6D-23268-34D46E25; Mon, 27 Jan 2014 12:12:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390824769!12157667!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29085 invoked from network); 27 Jan 2014 12:12:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:12:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96774098"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:12:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:12:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7l3X-0000dN-NF;
	Mon, 27 Jan 2014 12:12:47 +0000
Date: Mon, 27 Jan 2014 12:12:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
> Anthony PERARD wrote on 2014-01-09:
> > On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> >> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> >>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk wrote:
> >>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> >>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> >>>>> [...]
> >>>>>>> Those Xen report something like:
> >>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46:
> >>>>>>> 131329 >
> >>>>>>> 131328
> >>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent:
> >>>>>>> id=46
> >>>>>>> memflags=0 (62 of 64)
> >>>>>>> 
> >>>>>>> ?
> >>>>>>> 
> >>>>>>> (I tryied to reproduce the issue by simply add many
> >>>>>>> emulated
> >>>>>>> e1000 in QEMU :) )
> >>>>>>> 
> >>> 
> >>>> -bash-4.1# lspci -s 01:00.0 -v
> >>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> >>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >>>>         Flags: fast devsel, IRQ 16
> >>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> >>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> >>>>         I/O ports at e020 [disabled] [size=32]
> >>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> >>>>         Expansion ROM at fb400000 [disabled] [size=4M]
> >>> 
> >>> BTW, I think this is the issue, the Expansion ROM. qemu-xen will
> >>> allocate memory for it. We will maybe have to find another way.
> >>> qemu-trad does not seem to allocate memory, but I haven't gotten
> >>> very far in trying to check that.
> >> 
> >> And indeed that is the case. The "Fix" below fixes it.
> >> 
> >> 
> >> Based on that and this guest config:
> >> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> >> memory = 2048
> >> boot="d"
> >> maxvcpus=32
> >> vcpus=1
> >> serial='pty'
> >> vnclisten="0.0.0.0"
> >> name="latest"
> >> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> >> pci = [ "01:00.0" ]
> >> 
> >> I can boot the guest.
> > 
> > And can you access the ROM from the guest?
> > 
> > 
> > Also, I have another patch; it initializes the PCI ROM BAR like any other BAR.
> > In this case, if qemu is involved in the access to the ROM, it will print
> > an error, as is the case for the other BARs.
> > 
> > I tried to test it, but it was with an embedded VGA card. When I dumped
> > the ROM, I got the same one as the emulated card instead of the ROM from the device.
> > 
> > 
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 6dd7a68..2bbdb6d 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> > 
> >          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> > -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> > -                                      "xen-pci-pt-rom", d->rom.size);
> > +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> > +                              "xen-pci-pt-rom", d->rom.size);
> >          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >                           &s->rom);
> >
> 
> Hi, Anthony,
> 
> Is your fix the final solution for this issue? If yes, will you push it before the Xen 4.4 release?

I included this patch in the last pull request I sent to Anthony
Liguori:

http://marc.info/?l=qemu-devel&m=138997319906095

It hasn't been pulled yet, but I expect it will go upstream soon.
Regarding the 4.4 release, we are trying to fix a couple of other
serious bugs in the qemu-xen tree right now, but it is still conceivable
to have this fix backported to the tree in time for the release.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:15:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7l5l-0002FQ-2b; Mon, 27 Jan 2014 12:15:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7l5j-0002FH-3c
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 12:15:03 +0000
Received: from [85.158.139.211:11005] by server-13.bemta-5.messagelabs.com id
	C5/8D-11357-6CD46E25; Mon, 27 Jan 2014 12:15:02 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390824900!1854580!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7450 invoked from network); 27 Jan 2014 12:15:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:15:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96774594"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:15:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:14:59 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7l5f-0000fW-3b;
	Mon, 27 Jan 2014 12:14:59 +0000
Message-ID: <52E64DBE.5090603@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:14:54 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH v2 0/3] tools: Fixes for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/21/2014 06:45 PM, Ian Jackson wrote:
> I think these three bugfixes are clear 4.4 material.
>
>   a 1/3 libxl: events: Pass correct nfds to poll
>   a 2/3 xl: Free optdata_begin when saving domain config
>   a 3/3 xenstore: xs_suspend_evtchn_port: always free portstr
>
> They have all been acked by Ian Campbell.

All three:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:33:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7lNH-0002mv-EW; Mon, 27 Jan 2014 12:33:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W7lNG-0002mq-L3
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:33:10 +0000
Received: from [85.158.137.68:53738] by server-1.bemta-3.messagelabs.com id
	68/13-29598-50256E25; Mon, 27 Jan 2014 12:33:09 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390825988!11494360!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12471 invoked from network); 27 Jan 2014 12:33:09 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 12:33:09 -0000
Received: from [2002:8d4c:3001:48:ea40:f2ff:fee2:6328]
	by os.inf.tu-dresden.de with esmtpsa (TLSv1:DHE-RSA-AES128-SHA:128)
	(Exim 4.82) id 1W7lNC-000537-L1; Mon, 27 Jan 2014 13:33:06 +0100
Message-ID: <52E651FD.2020608@os.inf.tu-dresden.de>
Date: Mon, 27 Jan 2014 13:33:01 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<20140124180829.GD15785@phenom.dumpdata.com>
In-Reply-To: <20140124180829.GD15785@phenom.dumpdata.com>
X-Enigmail-Version: 1.7a1pre
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4332397328015839535=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============4332397328015839535==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="oItvu1LiUXsqcsoKmqsed0m8NaomuD3ul"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--oItvu1LiUXsqcsoKmqsed0m8NaomuD3ul
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 01/24/2014 07:08 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 16, 2014 at 03:13:44PM +0100, Julian Stecklina wrote:
>> The paravirtualized clock used in KVM and Xen uses a version field to
>> allow the guest to see when the shared data structure is inconsistent.
>> The code reads the version field twice (before and after the data
>> structure is copied) and checks whether they haven't changed and that
>> the lowest bit is not set. As the second access is not synchronized,
>> the compiler could generate code that accesses version more than two
>> times and you end up with inconsistent data.
>
> Could you paste in the code that the 'bad' compiler generates
> vs the compiler that generate 'good' code please?

At least 4.8, and probably older compilers, compile this as intended. The
point is that the standard does not guarantee the intended behavior,
i.e. the code is wrong.

For reference, see this LWN article:
https://lwn.net/Articles/508991/

The whole point of ACCESS_ONCE is to avoid time bombs like that. There
are lots of places where ACCESS_ONCE is used in the kernel:

http://lxr.free-electrons.com/ident?i=ACCESS_ONCE

See for example the check_zero function here:
http://lxr.free-electrons.com/source/arch/x86/kernel/kvm.c#L559

Julian

>
>>
>> An example using pvclock_get_time_values:
>>
>> host starts updating data, sets src->version to 1
>> guest reads src->version (1) and stores it into dst->version.
>> guest copies inconsistent data
>> guest reads src->version (1) and computes xor with dst->version.
>> host finishes updating data and sets src->version to 2
>> guest reads src->version (2) and checks whether lower bit is not set.
>> while loop exits with inconsistent data!
>>
>> AFAICS the compiler is allowed to optimize the given code this way.
>>
>> Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
>> ---
>>  arch/x86/kernel/pvclock.c | 10 +++++++---
>>  1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
>> index 42eb330..f62b41c 100644
>> --- a/arch/x86/kernel/pvclock.c
>> +++ b/arch/x86/kernel/pvclock.c
>> @@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
>>  static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
>>  					struct pvclock_vcpu_time_info *src)
>>  {
>> +	u32 nversion;
>> +
>>  	do {
>>  		dst->version = src->version;
>>  		rmb();		/* fetch version before data */
>> @@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
>>  		dst->tsc_shift         = src->tsc_shift;
>>  		dst->flags             = src->flags;
>>  		rmb();		/* test version after fetching data */
>> -	} while ((src->version & 1) || (dst->version != src->version));
>> +		nversion = ACCESS_ONCE(src->version);
>> +	} while ((nversion & 1) || (dst->version != nversion));
>> 
>>  	return dst->version;
>>  }
>> @@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
>>  			    struct pvclock_vcpu_time_info *vcpu_time,
>>  			    struct timespec *ts)
>>  {
>> -	u32 version;
>> +	u32 version, nversion;
>>  	u64 delta;
>>  	struct timespec now;
>> 
>> @@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
>>  		now.tv_sec  = wall_clock->sec;
>>  		now.tv_nsec = wall_clock->nsec;
>>  		rmb();		/* fetch time before checking version */
>> -	} while ((wall_clock->version & 1) || (version != wall_clock->version));
>> +		nversion = ACCESS_ONCE(wall_clock->version);
>> +	} while ((nversion & 1) || (version != nversion));
>> 
>>  	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
>>  	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
>> -- 
>> 1.8.4.2
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel



--oItvu1LiUXsqcsoKmqsed0m8NaomuD3ul
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEARECAAYFAlLmUf8ACgkQ2EtjUdW3H9kUvwCgyRpGSeIIx8ObUxF4svFk/GUV
aZQAnjcAtZRslv1520WXiWvdxg6rpQtM
=PSZg
-----END PGP SIGNATURE-----

--oItvu1LiUXsqcsoKmqsed0m8NaomuD3ul--


--===============4332397328015839535==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4332397328015839535==--


From xen-devel-bounces@lists.xen.org Mon Jan 27 12:33:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7lNg-0002oP-Ru; Mon, 27 Jan 2014 12:33:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7lNf-0002oF-FI
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:33:35 +0000
Received: from [85.158.143.35:14010] by server-2.bemta-4.messagelabs.com id
	51/D5-11386-E1256E25; Mon, 27 Jan 2014 12:33:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390826012!1042399!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14209 invoked from network); 27 Jan 2014 12:33:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:33:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="96778504"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 12:33:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:33:30 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7lNa-0000xN-3M;
	Mon, 27 Jan 2014 12:33:30 +0000
Date: Mon, 27 Jan 2014 12:33:27 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401271229350.4373@kaball.uk.xensource.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, fu.wei@linaro.org,
	julien.grall@linaro.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Jan 2014, Ian Campbell wrote:
> find_next_bit takes a "const unsigned long *", but forcing a cast of a
> "uint32_t *" throws away the alignment constraints and ends up causing an
> alignment fault on arm64 if the input happened to be 4- but not 8-byte aligned.

I am not opposed to this patch, but for the sake of clarity, isn't the
alignment of (uint32_t*) and (const unsigned long*) the same? It should
be 8 bytes in both cases on ARM64.

It seems to me that the problem is not the cast to (const unsigned
long*), but the usage of &r: maybe the tools aren't able to convert &r to
a properly aligned pointer?

Am I getting this wrong?



> Instead of casting, use a temporary variable of the right type.
> 
> I've had a look around for similar constructs and the only thing I found was
> maintenance_interrupt, which casts a uint64_t down to an unsigned long, which,
> although perhaps not best advised, is safe I think.
> 
> This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> is just coincidental due to subtle changes to the stack layout etc.
> 
> Reported-by: Fu Wei <fu.wei@linaro.org>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/vgic.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 90e9707..553411d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -362,11 +362,12 @@ read_as_zero:
>  
>  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  
>  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Message-ID: <alpine.DEB.2.02.1401271229350.4373@kaball.uk.xensource.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, fu.wei@linaro.org,
	julien.grall@linaro.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 24 Jan 2014, Ian Campbell wrote:
> find_next_bit takes a "const unsigned long *" but forcing a cast of an
> "uint32_t *" throws away the alignment constraints and ends up causing an
> alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.

I am not opposed to this patch, but for the sake of clarity, isn't the
alignment of (uint32_t*) and (const unsigned long*) the same? It should
be 8 bytes in both cases on ARM64.

It seems to me that the problem is not the cast to (const unsigned
long*), but the usage of &r: maybe the tools aren't able to convert &r to
a properly aligned pointer?

Am I getting this wrong?



> Instead of casting use a temporary variable of the right type.
> 
> I've had a look around for similar constructs and the only thing I found was
> maintenance_interrupt which casts a uint64_t down to an unsigned long, which,
> although perhaps not best advised, is safe I think.
> 
> This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> is just coincidental due to subtle changes to the stack layout etc.
> 
> Reported-by: Fu Wei <fu.wei@linaro.org>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/vgic.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 90e9707..553411d 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -362,11 +362,12 @@ read_as_zero:
>  
>  static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> @@ -379,11 +380,12 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>  
>  static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>  {
> +    const unsigned long mask = r;
>      struct pending_irq *p;
>      unsigned int irq;
>      int i = 0;
>  
> -    while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
> +    while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:45:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7lYZ-0003m6-Ru; Mon, 27 Jan 2014 12:44:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7lYY-0003lz-Ar
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:44:50 +0000
Received: from [193.109.254.147:63910] by server-9.bemta-14.messagelabs.com id
	20/CF-13957-1C456E25; Mon, 27 Jan 2014 12:44:49 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390826687!96159!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21606 invoked from network); 27 Jan 2014 12:44:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:44:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94766001"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 12:44:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:44:29 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7lYC-00015r-7E;
	Mon, 27 Jan 2014 12:44:28 +0000
Message-ID: <52E654A8.5070609@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:44:24 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<1389366985.19142.64.camel@kazak.uk.xensource.com>
	<20140110152840.GA20385@phenom.dumpdata.com>
	<1389369373.6423.21.camel@kazak.uk.xensource.com>
	<20140110160536.GB21360@phenom.dumpdata.com>
In-Reply-To: <20140110160536.GB21360@phenom.dumpdata.com>
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/10/2014 04:05 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 03:56:13PM +0000, Ian Campbell wrote:
>> On Fri, 2014-01-10 at 10:28 -0500, Konrad Rzeszutek Wilk wrote:
>>> On Fri, Jan 10, 2014 at 03:16:25PM +0000, Ian Campbell wrote:
>>>> On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
>>>>> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
>>>>> And then it is the responsibility of the balloon driver to give the memory
>>>>> back (and this is where the 'static-max' et al come in play to tell the
>>>>> balloon driver to balloon out).
>>>> PoD exists purely so that we don't need the 'maxmem' amount of memory at
>>>> boot time. It is basically there in order to let the guest get booted
>>>> far enough to load the balloon driver to give the memory back.
>>>>
>>>> It's basically a boot time zero-page sharing mechanism AIUI.
>>> But it does look to gulp up hypervisor memory and return it during
>>> allocation of memory for the guest.
>> It should be less than the maxmem-memory amount though. Perhaps because
>> Wei is using relatively small sizes the pod cache ends up being a
>> similar size to the saving?
>>
>> Or maybe I just don't understand PoD, since the code you quote does seem
>> to contradict that.
>>
>> Or maybe libxl's calculation of pod_target is wrong?
>>
>>>  From reading the code the patch seems correct - we will _need_ that
>>> extra 128MB 'claim' to allocate/free the 128MB extra pages. They
>>> are temporary as we do free them.
>> It does make sense that the PoD cache should be included in the claim,
>> I just don't get why the cache is so big...
> I think it expands and shrinks to make sure that the memory is present
> in the hypervisor. If there is not enough memory it would -ENOMEM and
> the toolstack would know immediately.
>
> But that seems silly - as that memory might be in the future used
> by other guests and then you won't be able to use said cache. But since
> it is a "cache" I guess that is OK.

Sorry, "cache" is a bit of a misnomer.  It really should be "pool".

The basic idea is to allocate memory to the guest *without assigning it 
to specific p2m entries*.  Then the PoD mechanism will shuffle the 
memory around behind the p2m entries as needed until the balloon driver 
comes up.

In the simple case, memory should only ever be allocated, not freed; for 
example:
* Admin sets target=1GiB, maxmem=2GiB
* Domain creation:
  - makes 2GiB of p2m, filling it with PoD entries rather than memory
  - allocates 1GiB of ram for the PoD "cache"
* PoD shuffles memory around to allow guest to boot
* Balloon driver comes up, and balloons down to target.
  - In theory, at this point #PoD entries in p2m == #pages in the "cache"

The basic complication comes in that there is no point at which we can 
be *certain* that all PoD entries have been eliminated.  If the guest 
just doesn't touch some of its memory, there may be PoD entries 
outstanding (with corresponding memory in the "cache") indefinitely.  
Also, the admin may want to be able to change the target before the 
balloon driver hits it.  So every time you change the target, you need
to tell the PoD code that's what you're doing; it has carefully 
thought-out logic inside it to free or allocate more memory appropriately.

For example, in the example above, while the balloon driver has only 
ballooned down to 1.2 GiB, the admin may want to set the target to 
1.5GiB.  In that case, the PoD code would allocate an additional 0.2GiB 
(to cover the outstanding 0.2GiB of PoD entries in the p2m).
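The accounting in the two examples above can be replayed as a toy model. All names here are invented for illustration — this is not Xen's actual p2m/PoD code — and it assumes, as a simplification, that the balloon driver releases only never-touched (still PoD-backed) pfns:

```c
/* Toy model of PoD pool accounting as described in this thread.
 * All identifiers are invented; this is not Xen's implementation. */
struct pod {
    long pod_entries;   /* PoD p2m entries still outstanding */
    long pool;          /* pages allocated to the domain, not yet
                         * assigned to any p2m entry (the "cache") */
    long populated;     /* p2m entries backed by real pages */
};

/* Domain creation: maxmem worth of PoD entries, target pages in the pool. */
static void pod_create(struct pod *p, long maxmem, long target)
{
    p->pod_entries = maxmem;
    p->pool = target;
    p->populated = 0;
}

/* Guest touches a PoD page: shuffle one page from the pool behind it. */
static void pod_fault(struct pod *p)
{
    p->pool--;
    p->pod_entries--;
    p->populated++;
}

/* Balloon driver hands back n never-touched pfns: their PoD entries
 * simply disappear; no pages change hands. */
static void pod_balloon_out(struct pod *p, long n)
{
    p->pod_entries -= n;
}

/* Target change: grow (or shrink) the pool so it can still cover every
 * outstanding PoD entry; returns pages to allocate (or, if negative,
 * to free).  This is the thread's rationale, not Xen's exact formula. */
static long pod_set_target(struct pod *p)
{
    long delta = p->pod_entries - p->pool;
    p->pool += delta;
    return delta;
}

/* Replay the thread's numbers in units of 0.1 GiB: maxmem=2GiB,
 * target=1GiB, guest boot touches 1GiB, balloon driver has reached
 * 1.2GiB, admin then raises the target. */
static long pod_example_delta(void)
{
    struct pod p;
    int i;

    pod_create(&p, 20, 10);
    for (i = 0; i < 10; i++)
        pod_fault(&p);
    pod_balloon_out(&p, 8);       /* 20 - 8 = 12 => guest at 1.2GiB */
    return pod_set_target(&p);    /* pool must cover the 0.2GiB of
                                   * outstanding PoD entries */
}
```

Replaying the first example instead — ballooning all the way down to the original target — leaves `pod_entries == pool` in this model, matching the "in theory" end state above.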

Anyway, if I understand correctly, the problem was that the memory 
allocated to the PoD "cache" wasn't being counted in the claim mode.  
That's the basic problem: memory in the PoD "cache" should be considered 
basically the same as memory in the p2m table for claim purposes.

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 12:53:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 12:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7lh8-0004GF-0o; Mon, 27 Jan 2014 12:53:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7lh6-0004GA-34
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 12:53:40 +0000
Received: from [193.109.254.147:38138] by server-14.bemta-14.messagelabs.com
	id 86/C3-12628-3D656E25; Mon, 27 Jan 2014 12:53:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390827217!99001!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26262 invoked from network); 27 Jan 2014 12:53:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 12:53:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,728,1384300800"; d="scan'208";a="94767688"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 12:53:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 07:53:37 -0500
Message-ID: <1390827215.12230.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 27 Jan 2014 12:53:35 +0000
In-Reply-To: <alpine.DEB.2.02.1401271229350.4373@kaball.uk.xensource.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401271229350.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org, fu.wei@linaro.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 12:33 +0000, Stefano Stabellini wrote:
> On Fri, 24 Jan 2014, Ian Campbell wrote:
> > find_next_bit takes a "const unsigned long *" but forcing a cast of an
> > "uint32_t *" throws away the alignment constraints and ends up causing an
> > alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.
> 
> I am not opposed to this patch, but for the sake of clarity, isn't the
> alignment of (uint32_t*) and (const unsigned long*) the same? It should
> be 8 bytes in both cases on ARM64.

The target of a uint32_t * only needs to be 4 byte aligned on arm64,
since that is the natural alignment of a 32-bit type.
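The pattern the patch introduces can be sketched as follows. Here `next_bit` is a minimal stand-in for Xen's `find_next_bit` (which takes a `const unsigned long *`), and `count_enabled` mirrors the loop shape in `vgic_enable_irqs`; widening `r` into a local `unsigned long` hands the scan a pointee with the full natural alignment the bitmap API assumes, instead of a cast `&r` that may be only 4-byte aligned:

```c
#include <stdint.h>

/* Stand-in for find_next_bit over a single-word, 32-bit-wide bitmap. */
static int next_bit(const unsigned long *mask, int start)
{
    int i;

    for ( i = start; i < 32; i++ )
        if ( *mask & (1UL << i) )
            return i;
    return 32;
}

/* Mirrors the patched loop shape: copy the uint32_t register value
 * into a properly aligned unsigned long temporary before scanning. */
static int count_enabled(uint32_t r)
{
    const unsigned long mask = r;   /* aligned temporary, as in the patch */
    int i = 0, count = 0;

    while ( (i = next_bit(&mask, i)) < 32 ) {
        count++;
        i++;
    }
    return count;
}
```

The old code's `(const unsigned long *) &r` compiles to the same scan, but on arm64 the compiler may emit 8-byte accesses for the `unsigned long` load, faulting when `&r` happens to be 4- but not 8-byte aligned.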

> It seems to me that the problem is not the cast to (const unsigned
> long*), but the usage of &r: maybe the tools aren't able to convert &r to
> a properly aligned pointer?

No, &r behaves exactly as expected and returns the address of r,
whatever the alignment of r is.

> Am I getting this wrong?

I think so, yes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 13:36:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 13:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7mMP-0005a4-Bc; Mon, 27 Jan 2014 13:36:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7mMN-0005Zz-S0
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 13:36:20 +0000
Received: from [85.158.143.35:6492] by server-2.bemta-4.messagelabs.com id
	9F/5A-11386-3D066E25; Mon, 27 Jan 2014 13:36:19 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390829778!1059853!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18465 invoked from network); 27 Jan 2014 13:36:18 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 13:36:18 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so2007910ead.38
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 05:36:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=R8LEbbz+OU+bap+rrtxlGcKJHYtvfvuhsD9NUrq6S48=;
	b=HzvLV6neqXNwLucKUck5M7LxVaiiEmGkYkHRUVLiCaerwir4Is8e5y2vZa8OOhg2L8
	FVsfvvgyFNsPhwijVuE488tVrbZ+ArAYIwbG7LWBbR6FuIf2f3Pa5tl4WuuZ+gDCYWWT
	KW//vxBfWtZB92IsCRry9pTPqmK2LDqMTG7rzzKmmo5+tc2LskXYCgzrYgvSppFkbW7W
	fFvO1CLF7y9X+Aqja3VyaKlbF62nMDM+viO2zyBs5PtyZtzjsZzO+IeRZzSphVDGPmDJ
	Om1l57oFTKtbYsz0ibddbMiwxdFaepVhr4xQ/n9uAdmS/wKx0YAGxYInBiVXCgxZHtDj
	91ew==
X-Gm-Message-State: ALoCoQmdd7rIn3CNYR/BFURXhfd+VkyGGivzE7+fhw4qBBQi8DVX3Eu2mkdfnsSb9y9hA3HXFM8v
X-Received: by 10.15.33.193 with SMTP id c41mr14240567eev.79.1390829777899;
	Mon, 27 Jan 2014 05:36:17 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id l4sm42545861een.13.2014.01.27.05.36.16
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 05:36:17 -0800 (PST)
Message-ID: <52E660CF.4010606@linaro.org>
Date: Mon, 27 Jan 2014 13:36:15 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
In-Reply-To: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, xen-devel@lists.xen.org,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
> This patch adds reset support for the xgene arm64 platform.
> 
> V6:
> - Incorporating comments received on V5 patch.
> V5:
> - Incorporating comments received on V4 patch.
> V4:
> - Removing TODO comment about retrieving reset base address from dts
>   as that is done now.
> V3:
> - Retrieving reset base address and reset mask from device tree.
> - Removed unnecessary header files included earlier.
> V2:
> - Removed unnecessary mdelay in code.
> - Adding iounmap of the base address.
> V1:
> - Initial patch.
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
>  1 file changed, 72 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 5b0bd5f..4fc185b 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -20,8 +20,16 @@
>  
>  #include <xen/config.h>
>  #include <asm/platform.h>
> +#include <xen/stdbool.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
>  #include <asm/gic.h>
>  
> +/* Variables to save reset address of soc during platform initialization. */
> +static u64 reset_addr, reset_size;
> +static u32 reset_mask;
> +static bool reset_vals_valid = false;
> +
>  static uint32_t xgene_storm_quirks(void)
>  {
>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> @@ -107,6 +115,68 @@ err:
>      return ret;
>  }
>  
> +static void xgene_storm_reset(void)
> +{

I'm concerned about the reset function in general. In common code
(arch/arm/shutdown.c, machine_restart) we have:

while (1)
{
   raw_machine_reset(); // which calls platform_reset()
   mdelay(100);
}

If platform_reset fails, with this code the console will be spammed
with "XGENE: ...".

Do we really need to call raw_machine_reset multiple times?
Would it be more suitable to have this code:

raw_machine_reset();
mdelay(100);
printk("Failed to reset\n");
while (1);

Or even better, moving the mdelay into the per-platform reset code...

> +    void __iomem *addr;
> +
> +    if ( !reset_vals_valid )
> +    {
> +        printk("XGENE: Invalid reset values, can not reset XGENE...\n");
> +        return;
> +    }
> +
> +    addr = ioremap_nocache(reset_addr, reset_size);
> +
> +    if ( !addr )
> +    {
> +        printk("XGENE: Unable to map xgene reset address, can not reset XGENE...\n");
> +        return;
> +    }
> +
> +    /* Write reset mask to base address */
> +    writel(reset_mask, addr);
> +
> +    iounmap(addr);
> +}
> +
> +static int xgene_storm_init(void)
> +{
> +    static const struct dt_device_match reset_ids[] __initconst =
> +    {
> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> +        {},

Do you plan to have other IDs in the future? If not, I would directly
use dt_find_compatible_node(NULL, NULL, "apm,xgene-reboot"); instead of
dt_find_matching_node(...).

> +    };
> +    struct dt_device_node *dev;
> +    int res;
> +
> +    dev = dt_find_matching_node(NULL, reset_ids);
> +    if ( !dev )
> +    {
> +        printk("XGENE: Unable to find a compatible reset node in the device tree");
> +        return 0;
> +    }
> +
> +    dt_device_set_used_by(dev, DOMID_XEN);
> +
> +    /* Retrieve base address and size */
> +    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> +    if ( res )
> +    {
> +        printk("XGENE: Unable to retrieve the base address for reset\n");
> +        return 0;
> +    }
> +
> +    /* Get reset mask */
> +    res = dt_property_read_u32(dev, "mask", &reset_mask);
> +    if ( !res )
> +    {
> +        printk("XGENE: Unable to retrieve the reset mask\n");
> +        return 0;
> +    }
> +
> +    reset_vals_valid = true;
> +    return 0;
> +}
>  
>  static const char * const xgene_storm_dt_compat[] __initconst =
>  {
> @@ -116,6 +186,8 @@ static const char * const xgene_storm_dt_compat[] __initconst =
>  
>  PLATFORM_START(xgene_storm, "APM X-GENE STORM")
>      .compatible = xgene_storm_dt_compat,
> +    .init = xgene_storm_init,
> +    .reset = xgene_storm_reset,
>      .quirks = xgene_storm_quirks,
>      .specific_mapping = xgene_storm_specific_mapping,
>  
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 13:52:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 13:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7mbc-0006Fd-Cb; Mon, 27 Jan 2014 13:52:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W7mbb-0006FN-2T; Mon, 27 Jan 2014 13:52:03 +0000
Received: from [85.158.143.35:43261] by server-3.bemta-4.messagelabs.com id
	1C/2F-32360-28466E25; Mon, 27 Jan 2014 13:52:02 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390830720!1069730!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32597 invoked from network); 27 Jan 2014 13:52:01 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 13:52:01 -0000
Received: by mail-la0-f46.google.com with SMTP id b8so4436323lan.5
	for <multiple recipients>; Mon, 27 Jan 2014 05:52:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=vMnuYiRqs9H8ftj71Fqt+goDY1muv+xoMcrcel79nLQ=;
	b=q8QZBHwGf5dIJlKjW6ycvOEpXiDhovgmeeRUBmVRGuTdi2tjyYOWodI66iRMAo0iWi
	msLSeGC95IvW4i5egW9YzJLv7LYo6sh+ig/uqnAC7Y5P5hyu7w8hbM+q2+PjRgGdT++O
	Q+MRQ40blRkgyZXCfO35QmWh1iERgrcbThW3x/TLl7d7fkklarqT8Fg5K8gddZOiWW1C
	UtD8h+EjAjbwVU2h9xNR1tY0jjXpOLvMnF2R9qndzDLUB2Pe0IWd+nnzoUXPOG2Ko5uA
	IIQLM0TPqGazjeALOjICCk4hFVFDg7JRLxdDwoOTR8zRjGRwChoN5lhKzYZUrK75HU/E
	6Kfg==
MIME-Version: 1.0
X-Received: by 10.152.164.166 with SMTP id yr6mr17584163lab.1.1390830720257;
	Mon, 27 Jan 2014 05:52:00 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Mon, 27 Jan 2014 05:52:00 -0800 (PST)
Date: Mon, 27 Jan 2014 08:52:00 -0500
X-Google-Sender-Auth: 5NSolALttpOH3jOAEHdMxnURh7E
Message-ID: <CAHehzX3qhas5MZne3M3hxTje5fLFab1FSo-eb5y4c0Rp1r1stw@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org, xen-api@lists.xen.org, 
	cl-mirage@lists.cam.ac.uk, xs-devel@lists.xenserver.org
Subject: [Xen-devel] TODAY is Xen Document Day!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Today is the day!

Xen Project Document Day is a day to help improve overall Xen
documentation, with emphasis on the Xen Project Wiki.

Never participated in a Document Day before?  All the info you'll need is here:

http://wiki.xenproject.org/wiki/Xen_Document_Days

Not sure what needs attention?  Here is the current TODO list:

http://wiki.xenproject.org/wiki/Xen_Document_Days/TODO

Add any documentation deficiencies you have come across while working
with Xen.  Is there a subject you wrestled with?  That's a perfect
opportunity for you to help shape the documentation into something
more useful for the next person who needs it!

If you haven't requested to be made a Wiki editor, just fill out the form below:

http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html

Hope to see you in #xendocs!

Russ


From xen-devel-bounces@lists.xen.org Mon Jan 27 14:06:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7mpJ-00072g-MK; Mon, 27 Jan 2014 14:06:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7mpI-00072Z-In
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:06:12 +0000
Received: from [85.158.139.211:59592] by server-11.bemta-5.messagelabs.com id
	BD/E1-23268-3D766E25; Mon, 27 Jan 2014 14:06:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390831568!1415542!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3507 invoked from network); 27 Jan 2014 14:06:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:06:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96806064"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 14:06:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 09:06:07 -0500
Message-ID: <1390831566.12230.30.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 27 Jan 2014 14:06:06 +0000
In-Reply-To: <52E29EB9.7020906@linaro.org>
References: <1389327171-3685-1-git-send-email-karim.allah.ahmed@gmail.com>
	<1389368924.6423.17.camel@kazak.uk.xensource.com>
	<CAOTdubsks_Yv91vQEwQkGOuW=2DxOLuLeQJ73c4YHWVL486TTw@mail.gmail.com>
	<1389696705.9887.52.camel@kazak.uk.xensource.com>
	<52E29EB9.7020906@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: tim@xen.org, stefano.stabellini@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] create multiple banks for dom0 in 1:1
 mapping if necessary
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 17:11 +0000, Julien Grall wrote:
> On 01/14/2014 10:51 AM, Ian Campbell wrote:
> > I think this problem is now fixed upstream, the intention was to
> > eventually revert the workaround (Julien was going to tell me when it
> > had gone into stable etc, but this is now a 4.5 era revert candidate).
> 
> The patch was merged for 3.13.

Thanks. I don't think we can revert for 4.4 now, but let's revisit for
4.5.

Ian.


From xen-devel-bounces@lists.xen.org Mon Jan 27 14:14:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:14:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7mwf-0007U2-7b; Mon, 27 Jan 2014 14:13:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7mwc-0007Tx-3V
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:13:47 +0000
Received: from [85.158.137.68:59457] by server-11.bemta-3.messagelabs.com id
	BE/7C-19379-99966E25; Mon, 27 Jan 2014 14:13:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390832022!11595132!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3115 invoked from network); 27 Jan 2014 14:13:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:13:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96809100"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 14:13:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 09:13:41 -0500
Message-ID: <1390832020.12230.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 27 Jan 2014 14:13:40 +0000
In-Reply-To: <52E660CF.4010606@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 13:36 +0000, Julien Grall wrote:
> On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
> > This patch adds reset support for the xgene arm64 platform.
> > 
> > V6:
> > - Incorporating comments received on V5 patch.
> > V5:
> > - Incorporating comments received on V4 patch.
> > V4:
> > - Removing TODO comment about retrieving reset base address from dts
> >   as that is done now.
> > V3:
> > - Retrieving reset base address and reset mask from device tree.
> > - Removed unnecessary header files included earlier.
> > V2:
> > - Removed unnecessary mdelay in code.
> > - Adding iounmap of the base address.
> > V1:
> > -Initial patch.
> > 
> > Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> > Signed-off-by: Anup Patel <anup.patel@linaro.org>
> > ---
> >  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
> >  1 file changed, 72 insertions(+)
> > 
> > diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> > index 5b0bd5f..4fc185b 100644
> > --- a/xen/arch/arm/platforms/xgene-storm.c
> > +++ b/xen/arch/arm/platforms/xgene-storm.c
> > @@ -20,8 +20,16 @@
> >  
> >  #include <xen/config.h>
> >  #include <asm/platform.h>
> > +#include <xen/stdbool.h>
> > +#include <xen/vmap.h>
> > +#include <asm/io.h>
> >  #include <asm/gic.h>
> >  
> > +/* Variables to save reset address of soc during platform initialization. */
> > +static u64 reset_addr, reset_size;
> > +static u32 reset_mask;
> > +static bool reset_vals_valid = false;
> > +
> >  static uint32_t xgene_storm_quirks(void)
> >  {
> >      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> > @@ -107,6 +115,68 @@ err:
> >      return ret;
> >  }
> >  
> > +static void xgene_storm_reset(void)
> > +{
> 
> I'm concerned about the reset function in general; in common code we
> have this code (arch/arm/shutdown.c machine_restart):
> 
> while (1)
> {
>    raw_machine_reset(); // which calls platform_reset()
>    mdelay(100);
> }
> 
> If platform_reset fails, it's possible with this code that the console
> will be spammed with "XGENE: ...".

Hrm, yes, I hadn't thought of this.

> Do we really need to call raw_machine_reset multiple time?

I suppose the logic is that there is no harm in trying again, we aren't
doing anything else and depending on the failure it might eventually
work.

I think it would be reasonable to remove the print from
xgene_storm_reset, or use a static int once construct.
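
The "static int once" construct could look roughly like this (a minimal
standalone sketch, not the actual Xen code; the helper name is hypothetical
and printf stands in for Xen's printk):

```c
#include <stdio.h>

/* Sketch of a print-once construct: machine_restart() calls the platform
 * reset hook in a retry loop, so the failure warning is emitted only on
 * the first attempt instead of spamming the console on every retry.
 *
 * Returns 1 if the warning was printed, 0 if it was suppressed. */
static int warn_reset_failed_once(void)
{
    static int printed;   /* zero-initialised; set after the first print */

    if ( printed )
        return 0;         /* already warned once, stay quiet */

    printed = 1;
    printf("XGENE: reset failed, will keep retrying\n");
    return 1;
}
```

Subsequent retries then fail silently, which would let the common retry
loop in arch/arm/shutdown.c stay as it is.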

> Would it be more suitable to have this code:
> 
> raw_machine_reset();
> mdelay(100);
> printk("Failed to reset\n");
> while (1);
> 
> Or even better, moving the mdelay  per platform...

I don't think that makes sense.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:24:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:24:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7n7H-0007t6-TQ; Mon, 27 Jan 2014 14:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7n7H-0007t1-2Y
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:24:47 +0000
Received: from [85.158.143.35:28873] by server-3.bemta-4.messagelabs.com id
	EA/A1-32360-E2C66E25; Mon, 27 Jan 2014 14:24:46 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390832685!1074377!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10142 invoked from network); 27 Jan 2014 14:24:45 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:24:45 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so2297473eek.33
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 06:24:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=g0LdpYEjOj8cPwSpS14hD1toT8ap0y6WoMcpSgPSTok=;
	b=DrYEmDbYK+uZXu/puklbZq6pmc5BuQFWa2f+ZU42tYzSLwI3HqZ+C4Zr2eKFwosppP
	zvXmEJ2CjqhziocX+F8oTceRi+WT0nzWs35OQjR3PQk3To+0OHSoxXrCzT1DKjc1VDWI
	wQRp4Y4QeIElOI+NUZCpirMreG6qv2vFthXbbCwVhiKqu/u4b9VX1zACuKj1ySijivsx
	INTBLgKx0rRdrQ5ALr8R0YhdzkRcrW9n7eUmEsnd4Xg6z9YT58IYI6eo9OCRxiVlcyzm
	hSOOfG9OtvDYg4CxX61kav7VrQkztv0eIeNwdYQNlVPZ8LJYWFTzuFSF5bvqdb17Or4H
	JShQ==
X-Gm-Message-State: ALoCoQkqySZ1m6njKWjiofVdcuF3fXj0XN6N9IzIXRUdpWR+Ztjoo51P1m3EJ7LhAajHpWU5tB0b
X-Received: by 10.15.110.8 with SMTP id cg8mr25319448eeb.42.1390832685181;
	Mon, 27 Jan 2014 06:24:45 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm43051134ees.4.2014.01.27.06.24.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 06:24:44 -0800 (PST)
Message-ID: <52E66C2B.9020901@linaro.org>
Date: Mon, 27 Jan 2014 14:24:43 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
	<1390832020.12230.32.camel@kazak.uk.xensource.com>
In-Reply-To: <1390832020.12230.32.camel@kazak.uk.xensource.com>
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 02:13 PM, Ian Campbell wrote:
> On Mon, 2014-01-27 at 13:36 +0000, Julien Grall wrote:
>> On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
>>> This patch adds reset support for the xgene arm64 platform.
>>>
>>> V6:
>>> - Incorporating comments received on V5 patch.
>>> V5:
>>> - Incorporating comments received on V4 patch.
>>> V4:
>>> - Removing TODO comment about retrieving reset base address from dts
>>>   as that is done now.
>>> V3:
>>> - Retrieving reset base address and reset mask from device tree.
>>> - Removed unnecessary header files included earlier.
>>> V2:
>>> - Removed unnecessary mdelay in code.
>>> - Adding iounmap of the base address.
>>> V1:
>>> -Initial patch.
>>>
>>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>>> ---
>>>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
>>>  1 file changed, 72 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
>>> index 5b0bd5f..4fc185b 100644
>>> --- a/xen/arch/arm/platforms/xgene-storm.c
>>> +++ b/xen/arch/arm/platforms/xgene-storm.c
>>> @@ -20,8 +20,16 @@
>>>  
>>>  #include <xen/config.h>
>>>  #include <asm/platform.h>
>>> +#include <xen/stdbool.h>
>>> +#include <xen/vmap.h>
>>> +#include <asm/io.h>
>>>  #include <asm/gic.h>
>>>  
>>> +/* Variables to save reset address of soc during platform initialization. */
>>> +static u64 reset_addr, reset_size;
>>> +static u32 reset_mask;
>>> +static bool reset_vals_valid = false;
>>> +
>>>  static uint32_t xgene_storm_quirks(void)
>>>  {
>>>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
>>> @@ -107,6 +115,68 @@ err:
>>>      return ret;
>>>  }
>>>  
>>> +static void xgene_storm_reset(void)
>>> +{
>>
>> I'm concerned about the reset function in general; in common code we
>> have this code (arch/arm/shutdown.c machine_restart):
>>
>> while (1)
>> {
>>    raw_machine_reset(); // which calls platform_reset()
>>    mdelay(100);
>> }
>>
>> If platform_reset fails, it's possible with this code that the console
>> will be spammed with "XGENE: ...".
> 
> Hrm, yes, I hadn't thought of this.
> 
>> Do we really need to call raw_machine_reset multiple time?
> 
> I suppose the logic is that there is no harm in trying again, we aren't
> doing anything else and depending on the failure it might eventually
> work.

If it doesn't work the first time, why should it work the second time?
I think retrying when the "restart" has failed is platform-specific.

> I think it would be reasonable to remove the print from
> xgene_storm_reset, or use a static int once construct.

Prints are useful for debugging.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:27:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7n9y-0007zX-Il; Mon, 27 Jan 2014 14:27:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7n9x-0007zS-C8
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:27:33 +0000
Received: from [85.158.137.68:5240] by server-5.bemta-3.messagelabs.com id
	D0/6E-25188-4DC66E25; Mon, 27 Jan 2014 14:27:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390832850!7914590!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22927 invoked from network); 27 Jan 2014 14:27:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:27:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96813762"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 14:27:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 09:27:29 -0500
Message-ID: <1390832847.12230.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 27 Jan 2014 14:27:27 +0000
In-Reply-To: <52E66C2B.9020901@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
	<1390832020.12230.32.camel@kazak.uk.xensource.com>
	<52E66C2B.9020901@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 14:24 +0000, Julien Grall wrote:
> On 01/27/2014 02:13 PM, Ian Campbell wrote:
> > On Mon, 2014-01-27 at 13:36 +0000, Julien Grall wrote:
> >> On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
> >>> This patch adds reset support for the xgene arm64 platform.
> >>>
> >>> V6:
> >>> - Incorporating comments received on V5 patch.
> >>> V5:
> >>> - Incorporating comments received on V4 patch.
> >>> V4:
> >>> - Removing TODO comment about retrieving reset base address from dts
> >>>   as that is done now.
> >>> V3:
> >>> - Retrieving reset base address and reset mask from device tree.
> >>> - Removed unnecessary header files included earlier.
> >>> V2:
> >>> - Removed unnecessary mdelay in code.
> >>> - Adding iounmap of the base address.
> >>> V1:
> >>> -Initial patch.
> >>>
> >>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> >>> ---
> >>>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
> >>>  1 file changed, 72 insertions(+)
> >>>
> >>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> >>> index 5b0bd5f..4fc185b 100644
> >>> --- a/xen/arch/arm/platforms/xgene-storm.c
> >>> +++ b/xen/arch/arm/platforms/xgene-storm.c
> >>> @@ -20,8 +20,16 @@
> >>>  
> >>>  #include <xen/config.h>
> >>>  #include <asm/platform.h>
> >>> +#include <xen/stdbool.h>
> >>> +#include <xen/vmap.h>
> >>> +#include <asm/io.h>
> >>>  #include <asm/gic.h>
> >>>  
> >>> +/* Variables to save reset address of soc during platform initialization. */
> >>> +static u64 reset_addr, reset_size;
> >>> +static u32 reset_mask;
> >>> +static bool reset_vals_valid = false;
> >>> +
> >>>  static uint32_t xgene_storm_quirks(void)
> >>>  {
> >>>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> >>> @@ -107,6 +115,68 @@ err:
> >>>      return ret;
> >>>  }
> >>>  
> >>> +static void xgene_storm_reset(void)
> >>> +{
> >>
> >> I'm concerned about the reset function in general; in common code we
> >> have this code (arch/arm/shutdown.c machine_restart):
> >>
> >> while (1)
> >> {
> >>    raw_machine_reset(); // which calls platform_reset()
> >>    mdelay(100);
> >> }
> >>
> >> If platform_reset fails, it's possible with this code that the console
> >> will be spammed with "XGENE: ...".
> > 
> > Hrm, yes, I hadn't thought of this.
> > 
> >> Do we really need to call raw_machine_reset multiple time?
> > 
> > I suppose the logic is that there is no harm in trying again, we aren't
> > doing anything else and depending on the failure it might eventually
> > work.
> 
> If it doesn't work the first time, why should it work the second time?

For some platform-specific reason.

> I think retrying when the "restart" has failed is
> platform-specific.

All that does is force every platform to reimplement the loop.

> 
> > I think it would be reasonable to remove the print from
> > xgene_storm_reset, or use a static int once construct.
> 
> Prints are useful for debugging.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:27:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7n9y-0007zX-Il; Mon, 27 Jan 2014 14:27:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7n9x-0007zS-C8
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:27:33 +0000
Received: from [85.158.137.68:5240] by server-5.bemta-3.messagelabs.com id
	D0/6E-25188-4DC66E25; Mon, 27 Jan 2014 14:27:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390832850!7914590!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22927 invoked from network); 27 Jan 2014 14:27:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:27:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96813762"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 14:27:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 09:27:29 -0500
Message-ID: <1390832847.12230.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 27 Jan 2014 14:27:27 +0000
In-Reply-To: <52E66C2B.9020901@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
	<1390832020.12230.32.camel@kazak.uk.xensource.com>
	<52E66C2B.9020901@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 14:24 +0000, Julien Grall wrote:
> On 01/27/2014 02:13 PM, Ian Campbell wrote:
> > On Mon, 2014-01-27 at 13:36 +0000, Julien Grall wrote:
> >> On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
> >>> This patch adds a reset support for xgene arm64 platform.
> >>>
> >>> V6:
> >>> - Incorporating comments received on V5 patch.
> >>> V5:
> >>> - Incorporating comments received on V4 patch.
> >>> V4:
> >>> - Removing TODO comment about retrieving reset base address from dts
> >>>   as that is done now.
> >>> V3:
> >>> - Retrieving reset base address and reset mask from device tree.
> >>> - Removed unnecessary header files included earlier.
> >>> V2:
> >>> - Removed unnecessary mdelay in code.
> >>> - Adding iounmap of the base address.
> >>> V1:
> >>> -Initial patch.
> >>>
> >>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> >>> ---
> >>>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
> >>>  1 file changed, 72 insertions(+)
> >>>
> >>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> >>> index 5b0bd5f..4fc185b 100644
> >>> --- a/xen/arch/arm/platforms/xgene-storm.c
> >>> +++ b/xen/arch/arm/platforms/xgene-storm.c
> >>> @@ -20,8 +20,16 @@
> >>>  
> >>>  #include <xen/config.h>
> >>>  #include <asm/platform.h>
> >>> +#include <xen/stdbool.h>
> >>> +#include <xen/vmap.h>
> >>> +#include <asm/io.h>
> >>>  #include <asm/gic.h>
> >>>  
> >>> +/* Variables to save reset address of soc during platform initialization. */
> >>> +static u64 reset_addr, reset_size;
> >>> +static u32 reset_mask;
> >>> +static bool reset_vals_valid = false;
> >>> +
> >>>  static uint32_t xgene_storm_quirks(void)
> >>>  {
> >>>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
> >>> @@ -107,6 +115,68 @@ err:
> >>>      return ret;
> >>>  }
> >>>  
> >>> +static void xgene_storm_reset(void)
> >>> +{
> >>
> >> I'm concerned about reset function in general, in common code we have
> >> this code (arch/arm/shutdown.c machine_restart).
> >>
> >> while (1)
> >> {
> >>    raw_machine_reset(); // which call platform_reset()
> >>    mdelay(100);
> >> }
> >>
> >> If platform_reset fails, it's possible with this code that the console will
> >> be spammed with "XGENE: ...".
> > 
> > Hrm, yes, I hadn't thought of this.
> > 
> >> Do we really need to call raw_machine_reset multiple time?
> > 
> > I suppose the logic is that there is no harm in trying again, we aren't
> > doing anything else and depending on the failure it might eventually
> > work.
> 
> If it doesn't work the first time, why should it work the second time?

For some platform specific reason.

> I think it's platform specific to retry again if the "restart" has
> failed.

All that does is force every platform to reimplement the loop.

> 
> > I think it would be reasonable to remove the print from
> > xgene_storm_reset, or use a static int once construct.
> 
> Prints are useful for debugging.
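
For illustration, the "static int once construct" mentioned above could look
like this minimal, self-contained C sketch (all names here are invented for
the example; this is not the actual Xen code): a failing reset warns on the
first attempt only, so the machine_restart() retry loop cannot flood the
console.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Counts warnings actually emitted, so the behaviour is checkable. */
static int warnings_printed;

/* Stand-in for xgene_storm_reset(): the reset poke is assumed to fail
 * every time, but the warning is guarded by a static flag so it is
 * printed once, no matter how often the restart loop retries. */
static void xgene_reset_sketch(void)
{
    static bool warned;

    if ( !warned )
    {
        warned = true;
        warnings_printed++;
        printf("XGENE: unable to reset\n");
    }
    /* the (failing) write to the reset register would go here */
}

/* Model of the machine_restart() retry loop, minus the mdelay(). */
static void restart_loop_sketch(int attempts)
{
    for ( int i = 0; i < attempts; i++ )
        xgene_reset_sketch();
}
```

However many times the loop retries, the console sees the message once while
the platform still gets its repeated reset attempts.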
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:36:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nIo-0008Sj-UD; Mon, 27 Jan 2014 14:36:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7nIn-0008Se-GL
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:36:41 +0000
Received: from [85.158.143.35:63103] by server-1.bemta-4.messagelabs.com id
	50/2C-02132-8FE66E25; Mon, 27 Jan 2014 14:36:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390833399!1055084!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23328 invoked from network); 27 Jan 2014 14:36:40 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:36:40 -0000
Received: by mail-ee0-f54.google.com with SMTP id e53so2318139eek.13
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 06:36:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=y+O4M8wyY1RObiyscPA6E/rLWqJp+64L2U6lSoDP0AU=;
	b=SCujjbOxZuUZymwwxJ+wRHMo8FzUkc/Td7ZQQ4fujBhwTdETVU2qcCFtIQOcZQbDEp
	tmO4bYii/QwiAnetUYCrCFKU6375iJP9s6aSfc9UCeLCv9KJ3tKifB1UoyfNkiMhtL0m
	9MlkeV7oVMZhuiArSjnqiOyNjoja6Lj8+viGWLeMJaGgsn7MI1k4fn1+Tk5vuDb2pe2r
	ci1W0J87JfIumKPmeqauxHsdTULx+AdfhYb4P5DGe8Ls1yDSZrzHrRMrC2PSq4/8OHcg
	j7EKbjl13QwpZrivVLjzaZFzBdc/Ozko34UJr1+hZEzfClh/duyjJWnGCWpYpoFcdquV
	+crg==
X-Gm-Message-State: ALoCoQnhg89iC/6OXhNSKVwzgc8R2jIkwOcSl14aLBelzOLJd+43B0CTHabLt+gCsP+LAXJ0KL7P
X-Received: by 10.15.36.65 with SMTP id h41mr25487613eev.0.1390833399761;
	Mon, 27 Jan 2014 06:36:39 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m1sm21557005een.7.2014.01.27.06.36.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 06:36:38 -0800 (PST)
Message-ID: <52E66EF5.7060405@linaro.org>
Date: Mon, 27 Jan 2014 14:36:37 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
	<1390832020.12230.32.camel@kazak.uk.xensource.com>
	<52E66C2B.9020901@linaro.org>
	<1390832847.12230.33.camel@kazak.uk.xensource.com>
In-Reply-To: <1390832847.12230.33.camel@kazak.uk.xensource.com>
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 02:27 PM, Ian Campbell wrote:
> On Mon, 2014-01-27 at 14:24 +0000, Julien Grall wrote:
>> On 01/27/2014 02:13 PM, Ian Campbell wrote:
>>> On Mon, 2014-01-27 at 13:36 +0000, Julien Grall wrote:
>>>> On 01/27/2014 11:34 AM, Pranavkumar Sawargaonkar wrote:
>>>>> This patch adds a reset support for xgene arm64 platform.
>>>>>
>>>>> V6:
>>>>> - Incorporating comments received on V5 patch.
>>>>> V5:
>>>>> - Incorporating comments received on V4 patch.
>>>>> V4:
>>>>> - Removing TODO comment about retrieving reset base address from dts
>>>>>   as that is done now.
>>>>> V3:
>>>>> - Retrieving reset base address and reset mask from device tree.
>>>>> - Removed unnecessary header files included earlier.
>>>>> V2:
>>>>> - Removed unnecessary mdelay in code.
>>>>> - Adding iounmap of the base address.
>>>>> V1:
>>>>> -Initial patch.
>>>>>
>>>>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>>>>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>>>>> ---
>>>>>  xen/arch/arm/platforms/xgene-storm.c |   72 ++++++++++++++++++++++++++++++++++
>>>>>  1 file changed, 72 insertions(+)
>>>>>
>>>>> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
>>>>> index 5b0bd5f..4fc185b 100644
>>>>> --- a/xen/arch/arm/platforms/xgene-storm.c
>>>>> +++ b/xen/arch/arm/platforms/xgene-storm.c
>>>>> @@ -20,8 +20,16 @@
>>>>>  
>>>>>  #include <xen/config.h>
>>>>>  #include <asm/platform.h>
>>>>> +#include <xen/stdbool.h>
>>>>> +#include <xen/vmap.h>
>>>>> +#include <asm/io.h>
>>>>>  #include <asm/gic.h>
>>>>>  
>>>>> +/* Variables to save reset address of soc during platform initialization. */
>>>>> +static u64 reset_addr, reset_size;
>>>>> +static u32 reset_mask;
>>>>> +static bool reset_vals_valid = false;
>>>>> +
>>>>>  static uint32_t xgene_storm_quirks(void)
>>>>>  {
>>>>>      return PLATFORM_QUIRK_GIC_64K_STRIDE;
>>>>> @@ -107,6 +115,68 @@ err:
>>>>>      return ret;
>>>>>  }
>>>>>  
>>>>> +static void xgene_storm_reset(void)
>>>>> +{
>>>>
>>>> I'm concerned about reset function in general, in common code we have
>>>> this code (arch/arm/shutdown.c machine_restart).
>>>>
>>>> while (1)
>>>> {
>>>>    raw_machine_reset(); // which call platform_reset()
>>>>    mdelay(100);
>>>> }
>>>>
>>>> If platform_reset fails, it's possible with this code that the console will
>>>> be spammed with "XGENE: ...".
>>>
>>> Hrm, yes, I hadn't thought of this.
>>>
>>>> Do we really need to call raw_machine_reset multiple time?
>>>
>>> I suppose the logic is that there is no harm in trying again, we aren't
>>> doing anything else and depending on the failure it might eventually
>>> work.
>>
>> If it doesn't work the first time, why should it work the second time?
> 
> For some platform specific reason.
> 
>> I think it's platform specific to retry again if the "restart" has
>> failed.
> 
> All that does is force every platform to reimplement the loop.

Not every platform. For instance on the Arndale and the Versatile
Express we don't need the loop.

After looking at the X-Gene reset code in Linux, it's the same. I'm
surprised that we would need the loop in Xen.

The only ways we can fail are:
	- ioremap returns NULL;
	- the reset address is not set.

Neither will work on the second attempt, nor on the third.
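
Those two failure modes can be sketched in a self-contained model (hypothetical
names, not the actual xgene-storm.c; ioremap_sketch merely stands in for Xen's
ioremap): both early exits depend only on state fixed at platform
initialization, which is why retrying cannot change the outcome.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in for ioremap(): models mapping failure by returning NULL
 * for a zero address; the pointer is never dereferenced. */
static void *ioremap_sketch(uint64_t addr, uint64_t size)
{
    (void)size;
    return addr ? (void *)(uintptr_t)addr : NULL;
}

/* Returns true only if the reset poke could be attempted.  Both early
 * exits depend on values fixed at platform init, so a second or third
 * call with the same inputs fails identically. */
static bool try_reset_sketch(bool reset_vals_valid, uint64_t reset_addr)
{
    void *base;

    if ( !reset_vals_valid )     /* reset address never parsed from the DT */
        return false;

    base = ioremap_sketch(reset_addr, 0x100);
    if ( base == NULL )          /* mapping failed */
        return false;

    /* writel(reset_mask, base) would go here */
    return true;
}
```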

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:40:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nMG-0000Mq-KI; Mon, 27 Jan 2014 14:40:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7nMF-0000Mk-LH
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:40:15 +0000
Received: from [85.158.137.68:56778] by server-10.bemta-3.messagelabs.com id
	82/10-23989-ECF66E25; Mon, 27 Jan 2014 14:40:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390833612!7929618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4402 invoked from network); 27 Jan 2014 14:40:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96818727"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 14:39:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 09:39:57 -0500
Message-ID: <1390833596.12230.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 27 Jan 2014 14:39:56 +0000
In-Reply-To: <52E66EF5.7060405@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<52E660CF.4010606@linaro.org>
	<1390832020.12230.32.camel@kazak.uk.xensource.com>
	<52E66C2B.9020901@linaro.org>
	<1390832847.12230.33.camel@kazak.uk.xensource.com>
	<52E66EF5.7060405@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 14:36 +0000, Julien Grall wrote:

> The only ways we can fail are:
> 	- ioremap returns NULL;
> 	- the reset address is not set.
> 
> Neither will work on the second attempt, nor on the third.


You are missing the fact that the write to the reset address itself may
"succeed" but not do anything, for example because of firmware oddities
or some strange platform specific property. It might succeed on a second
or third attempt, and there is no harm in trying.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:49:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nUd-0000d1-Kh; Mon, 27 Jan 2014 14:48:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7nUb-0000ct-S9
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 14:48:54 +0000
Received: from [85.158.137.68:9953] by server-13.bemta-3.messagelabs.com id
	70/AC-28603-4D176E25; Mon, 27 Jan 2014 14:48:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390834130!10423517!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1190 invoked from network); 27 Jan 2014 14:48:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:48:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94812651"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 14:48:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 09:48:49 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W7nUW-0003Wf-Or;
	Mon, 27 Jan 2014 14:48:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W7nUU-00032w-Mb;
	Mon, 27 Jan 2014 14:48:46 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21222.29132.74941.583455@mariner.uk.xensource.com>
Date: Mon, 27 Jan 2014 14:48:44 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52E5ED14.2090005@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
From xen-devel-bounces@lists.xen.org Mon Jan 27 14:49:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nUd-0000d1-Kh; Mon, 27 Jan 2014 14:48:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7nUb-0000ct-S9
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 14:48:54 +0000
Received: from [85.158.137.68:9953] by server-13.bemta-3.messagelabs.com id
	70/AC-28603-4D176E25; Mon, 27 Jan 2014 14:48:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390834130!10423517!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1190 invoked from network); 27 Jan 2014 14:48:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:48:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94812651"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 14:48:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 09:48:49 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W7nUW-0003Wf-Or;
	Mon, 27 Jan 2014 14:48:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W7nUU-00032w-Mb;
	Mon, 27 Jan 2014 14:48:46 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21222.29132.74941.583455@mariner.uk.xensource.com>
Date: Mon, 27 Jan 2014 14:48:44 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52E5ED14.2090005@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E5ED14.2090005@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> Looking at libvirt's default event loop impl, and the current libxl
> driver code, I think this is how things are :-/.  But maybe you have
> just described a bug in the libxl driver.

I think this is a bug in the libvirt timeout system.  At the very
least it should avoid entering the same timeout callback in multiple
threads.

Something like the attached (UNTESTED).

I'm currently working on a test case to demonstrate the fd race.

Ian.


Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

diff --git a/src/util/vireventpoll.c b/src/util/vireventpoll.c
index 8a4c8bc..41b012d 100644
--- a/src/util/vireventpoll.c
+++ b/src/util/vireventpoll.c
@@ -66,6 +66,7 @@ struct virEventPollTimeout {
     virFreeCallback ff;
     void *opaque;
     int deleted;
+    bool in_callback;
 };
 
 /* Allocate extra slots for virEventPollHandle/virEventPollTimeout
@@ -238,6 +239,7 @@ int virEventPollAddTimeout(int frequency,
     eventLoop.timeouts[eventLoop.timeoutsCount].deleted = 0;
     eventLoop.timeouts[eventLoop.timeoutsCount].expiresAt =
         frequency >= 0 ? frequency + now : 0;
+    eventLoop.timeouts[eventLoop.timeoutsCount].in_callback = 0;
 
     eventLoop.timeoutsCount++;
     ret = nextTimer-1;
@@ -334,6 +336,8 @@ static int virEventPollCalculateTimeout(int *timeout) {
     for (i = 0; i < eventLoop.timeoutsCount; i++) {
         if (eventLoop.timeouts[i].deleted)
             continue;
+        if (eventLoop.timeouts[i].in_callback)
+            continue;
         if (eventLoop.timeouts[i].frequency < 0)
             continue;
 
@@ -429,7 +433,9 @@ static int virEventPollDispatchTimeouts(void)
         return -1;
 
     for (i = 0; i < ntimeouts; i++) {
-        if (eventLoop.timeouts[i].deleted || eventLoop.timeouts[i].frequency < 0)
+        if (eventLoop.timeouts[i].deleted)
+            continue;
+        if (eventLoop.timeouts[i].frequency < 0)
             continue;
 
         /* Add 20ms fuzz so we don't pointlessly spin doing
@@ -447,9 +453,11 @@ static int virEventPollDispatchTimeouts(void)
             PROBE(EVENT_POLL_DISPATCH_TIMEOUT,
                   "timer=%d",
                   timer);
+            eventLoop.timeouts[i].in_callback = 1;
             virMutexUnlock(&eventLoop.lock);
             (cb)(timer, opaque);
             virMutexLock(&eventLoop.lock);
+            eventLoop.timeouts[i].in_callback = 0;
         }
     }
     return 0;
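
The guard in the patch above can be exercised in isolation. Below is a minimal, self-contained sketch (not libvirt code; timeout_slot and dispatch_timeouts are invented names) of the same idea: the in_callback flag is set under the loop lock before the callback runs unlocked and cleared afterwards, so a concurrent or recursive dispatch pass skips timers that are already mid-callback.

```c
/* Minimal sketch of the re-entrancy guard proposed above.  Not libvirt
 * code: timeout_slot and dispatch_timeouts are invented names for
 * illustration.  The point is the in_callback flag, set while the
 * callback runs with the loop lock dropped. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct timeout_slot {
    bool deleted;               /* slot logically removed          */
    bool in_callback;           /* true while cb runs without lock */
    void (*cb)(void *opaque);
    void *opaque;
};

static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER;

/* One dispatch pass; returns the number of callbacks invoked. */
static int dispatch_timeouts(struct timeout_slot *slots, size_t n)
{
    int fired = 0;

    pthread_mutex_lock(&loop_lock);
    for (size_t i = 0; i < n; i++) {
        if (slots[i].deleted || slots[i].in_callback)
            continue;                        /* skip deleted or busy */
        slots[i].in_callback = true;
        pthread_mutex_unlock(&loop_lock);    /* never hold lock in cb */
        slots[i].cb(slots[i].opaque);
        pthread_mutex_lock(&loop_lock);
        slots[i].in_callback = false;
        fired++;
    }
    pthread_mutex_unlock(&loop_lock);
    return fired;
}
```

A callback that triggers another dispatch pass, from another thread or recursively, now finds its own slot flagged and is not entered twice; without the flag, the unlocked cb() call is exactly the window in which a second thread could re-enter it.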

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 14:55:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 14:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7naH-0000y8-IX; Mon, 27 Jan 2014 14:54:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7naF-0000y1-7y
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 14:54:43 +0000
Received: from [85.158.137.68:41327] by server-10.bemta-3.messagelabs.com id
	FC/FC-23989-23376E25; Mon, 27 Jan 2014 14:54:42 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390834481!11611307!1
X-Originating-IP: [74.125.82.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 340 invoked from network); 27 Jan 2014 14:54:41 -0000
Received: from mail-wg0-f48.google.com (HELO mail-wg0-f48.google.com)
	(74.125.82.48)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 14:54:41 -0000
Received: by mail-wg0-f48.google.com with SMTP id x13so5568230wgg.15
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 06:54:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=Ea7yTUWbVL9VaXJcVnhTNXiYMxdGMbqxssyHetBoDa0=;
	b=EP9EgfnHc6+QXJ1n8fn5Ee89ahKNhSWI3j5j7eQ85TQhsAkQoDucwz4eF6M03RO9mb
	nHQ4j7UhyCtEySelgtNIW6x5asecsW5D8mLFc/kna3HF03mJ7k+ExClFVRv5KgOeiC0p
	JgYCm/vSFrG8AwaBHtjUmIWcZj830yWm33xzcPNHyZ8SDfYReGYfIWk0F7HrtAXP6Y8D
	1RqkGcL8p99vO3XViPX8ngTG//Cb5kb/WmGHd7hIjURZDd+RQJxjR+LKBRnVDzMQUaPV
	6N2vkJtLcmfQOGFkP/oMYLZ3coEQidesgWEeffnsn3ssUd2BBycRTvg2jMBmJFAZv3Ly
	4vTQ==
MIME-Version: 1.0
X-Received: by 10.194.79.131 with SMTP id j3mr20464281wjx.17.1390834480851;
	Mon, 27 Jan 2014 06:54:40 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 06:54:40 -0800 (PST)
In-Reply-To: <20140120162943.GD11681@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140120162943.GD11681@zion.uk.xensource.com>
Date: Mon, 27 Jan 2014 14:54:40 +0000
X-Google-Sender-Auth: -irIqhGQSo_7u0kGGQs-FWSTLDs
Message-ID: <CAFLBxZYrL1T93FD=Jjdg_pOGxGgE_Hr4vOasEHWgjB9VmuxxGA@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Wei Liu <wei.liu2@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 20, 2014 at 4:29 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
>> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
>> > create ^
>> > owner Wei Liu <wei.liu2@citrix.com>
>> > thanks
>> >
>> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
>> > > When I have following configuration in HVM config file:
>> > >   memory=128
>> > >   maxmem=256
>> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
>> > >
>> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
>> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
>> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
>> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
>> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
>> > >
>> > > With claim_mode=0, I can successfully create the HVM guest.
>> >
>> > Is it trying to claim 256M instead of 128M? (although the likelihood
>>
>> No. 128MB actually.
>>
>> > that you only have 128-255M free is quite low, or are you
>> > autoballooning?)
>>
>> This patch fixes it for me. It basically sets the amount of pages
>> claimed to be 'maxmem' instead of 'memory' for PoD.
>>
>> I don't know PoD very well, and this claim is only valid during the
>> allocation of the guest's memory - so the 'target_pages' value might be
>> the wrong one. However looking at the hypervisor's
>> 'p2m_pod_set_mem_target' I see this comment:
>>
>>  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>>  317  *   entries.  The balloon driver will deflate the balloon to give back
>>  318  *   the remainder of the ram to the guest OS.
>>
>> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
>> And then it is the responsibility of the balloon driver to give the memory
>> back (and this is where the 'static-max' et al come in play to tell the
>> balloon driver to balloon out).
>>
>>
>> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> index 77bd365..65e9577 100644
>> --- a/tools/libxc/xc_hvm_build_x86.c
>> +++ b/tools/libxc/xc_hvm_build_x86.c
>> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>>
>>      /* try to claim pages for early warning of insufficient memory available */
>>      if ( claim_enabled ) {
>> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> +        unsigned long nr = nr_pages - cur_pages;
>> +
>> +        if ( pod_mode )
>> +            nr = target_pages - 0x20;
>> +
>
> I'm a bit confused, did this work for you? At this point d->tot_pages
> should be (target_pages - 0x20). However in the hypervisor logic if you
> try to claim the exact amount of pages as d->tot_pages it should return
> EINVAL.
>
> Furthermore, the original logic doesn't look right. In PV guest
> creation, xc tries to claim "memory=" pages, while in HVM guest creation
> it tries to claim "maxmem=" pages. I think the HVM code is wrong.
>
> And George shed some light on PoD for me this morning: the "cache" in
> PoD should be the pool of pages that is used to populate guest physical
> memory. In that sense it should be the size of mem_target ("memory=").
>
> So I come up with a fix like this. Any idea?
>
> Wei.
>
> ---8<---
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..472f1df 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -49,6 +49,8 @@
>  #define NR_SPECIAL_PAGES     8
>  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>
> +#define POD_VGA_HOLE_SIZE (0x20)
> +
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
>                          uint64_t *mstart_out, uint64_t *mend_out)
> @@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
>      if ( pod_mode )
>      {
>          /*
> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -         * adjust the PoD cache size so that domain tot_pages will be
> -         * target_pages - 0x20 after this call.
> +         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
> +         * "hole".  Xen will adjust the PoD cache size so that domain
> +         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
> +         * this call.
>           */
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +        rc = xc_domain_set_pod_target(xch, dom,
> +                                      target_pages - POD_VGA_HOLE_SIZE,
>                                        NULL, NULL, NULL);
>          if ( rc != 0 )
>          {
> @@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
>
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = target_pages;
> +
> +        if ( pod_mode )
> +            nr -= POD_VGA_HOLE_SIZE;
> +
> +        rc = xc_domain_claim_pages(xch, dom, nr);

Two things:

1. This is broken because it doesn't claim pages for the PoD "cache".
The PoD "cache" amounts to *all the pages that the domain will have
allocated* -- there will be basically no pages allocated after this
point.

Claim mode is trying to make creation of large guests fail early or be
guaranteed to succeed.  For large guests, it's set_pod_target() that
may take a long time, and it's there that things will fail if
there's not enough memory.  By the time you get to actually setting up
the p2m, you've already made it.

2. I think the VGA_HOLE doesn't have anything to do with PoD.

Actually, it looks like the original code was wrong: correct me if I'm
wrong, but xc_domain_claim_pages() wants the *total number of pages*,
whereas nr_pages-cur_pages would give you the *number of additional
pages*.

I think you need to claim nr_pages-VGA_HOLE_SIZE regardless of whether
you're in PoD mode or not.  The initial code would claim 0xa0 pages
too few; the new code will claim 0x20 pages too many in the non-PoD
case.
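
Those two figures can be sanity-checked with a few lines. This is a back-of-envelope model, not Xen code; cur_pages = 0xc0 is an illustrative value chosen to match the 0xa0 shortfall (the pages already populated before the claim call), and nr_pages is an arbitrary guest size.

```c
/* Back-of-envelope model of the claim arithmetic discussed above.
 * Illustrative only, not Xen code: cur_pages = 0xc0 is chosen to match
 * the 0xa0 figure for pages populated before the claim call. */

#define VGA_HOLE_SIZE 0x20UL

/* What xc_domain_claim_pages() should be given: the TOTAL number of
 * pages the domain will hold, minus the never-populated VGA hole. */
static unsigned long correct_claim(unsigned long nr_pages)
{
    return nr_pages - VGA_HOLE_SIZE;
}

/* Original code: only the pages still to be allocated at that point. */
static unsigned long old_claim(unsigned long nr_pages, unsigned long cur_pages)
{
    return nr_pages - cur_pages;
}

/* The proposed code in the non-PoD case: the full target, hole included. */
static unsigned long new_claim_nonpod(unsigned long target_pages)
{
    return target_pages;
}
```

With nr_pages = 0x20000 and cur_pages = 0xc0, the old claim comes out 0xa0 pages below the correct total and the new non-PoD claim 0x20 pages above it, matching the two discrepancies described above.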

> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..1e44ba3 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>          goto out;
>      }
>
> -    /* disallow a claim not exceeding current tot_pages or above max_pages */
> -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
> +    /* disallow a claim below current tot_pages or above max_pages */
> +    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
>      {

This seems like a good interface change in any case -- there's no
particular reason not to allow multiple calls with the same claim
amount.  (Interface-wise, I don't see a good reason we couldn't allow
the claim to be reduced as well; but that's probably more than we want
to do at this point.)
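
The before/after semantics of that one-character change can be pinned down with a stand-in for the check (claim_ok_old/claim_ok_new are invented names, not Xen's; the only difference is "<=" versus "<" against tot_pages):

```c
/* Stand-in for the tot_pages/max_pages bounds check quoted above.
 * claim_ok_old/claim_ok_new are invented names, not Xen code. */
#include <stdbool.h>

/* Old check: a claim equal to current tot_pages is rejected (EINVAL). */
static bool claim_ok_old(unsigned long pages, unsigned long tot_pages,
                         unsigned long max_pages)
{
    return !(pages <= tot_pages || pages > max_pages);
}

/* New check: only a claim strictly below tot_pages is rejected. */
static bool claim_ok_new(unsigned long pages, unsigned long tot_pages,
                         unsigned long max_pages)
{
    return !(pages < tot_pages || pages > max_pages);
}
```

So a PoD guest whose tot_pages is already target_pages - 0x20 can re-issue a claim for exactly that amount under the new check, where the old check returned EINVAL.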

So it seems like we should at least make the two fixes above:
* Use nr_pages-VGA_HOLE_SIZE for the claim, regardless of whether PoD is enabled
* Allow a claim equal to tot_pages

That will allow PoD guests to boot with claim mode enabled, although
it will effectively be a noop.

The next question is whether we should try to make claim mode actually
do the claim properly for PoD mode for 4.4.  It seems like just moving
the claim call up before the xc_domain_set_target() call should work;
I'm inclined to say that if that works, we should just do it.

Thoughts?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:00:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nfx-00018i-Hn; Mon, 27 Jan 2014 15:00:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <skiselkov.ml@gmail.com>) id 1W7OyQ-0002st-2Y
	for xen-devel@lists.xen.org; Sun, 26 Jan 2014 12:38:03 +0000
Received: from [85.158.137.68:64802] by server-6.bemta-3.messagelabs.com id
	95/8E-04868-9A105E25; Sun, 26 Jan 2014 12:38:01 +0000
X-Env-Sender: skiselkov.ml@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390739880!11393284!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18646 invoked from network); 26 Jan 2014 12:38:00 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Jan 2014 12:38:00 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so4496237wgg.21
	for <xen-devel@lists.xen.org>; Sun, 26 Jan 2014 04:38:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=56NFY/yTwb+3GAuFxro4db8gdXBOMHHfIGG9OJr5FDg=;
	b=K8aTUngW/GNZf6qjdG+O6MvddRTw8hd9DG5bjQjJODVUfOU7DwCRm3883NawCKzW2B
	TrnpQnnky/AlAfnZh3DtIkLnYW1Xn6s5hOQofuArdRaFM5tuiofPQ6KYey9Ia1dLFhsz
	iLfjlrGd8I5FyjHzZ26KcvVlena1jT3nkVqPIKQmg6COve6Nnh/PUyO22y2t9hfY927p
	rpTGPtvdRCdoN+D0w4H0lRjRqoIs+OZqfFKSWScd1jgQW7c5CJ+3hY1PEr37f9D+DOGq
	3Eh/OQIr5TvdWnrhQgrIBoXoiWoRCL5/nEggXaE4XVdsfWaBFak93jrurafmt0SuE0Bg
	kTJA==
X-Received: by 10.180.188.197 with SMTP id gc5mr8764654wic.30.1390739880140;
	Sun, 26 Jan 2014 04:38:00 -0800 (PST)
Received: from [192.168.0.9]
	(cpc64773-cmbg15-2-0-cust209.5-4.cable.virginmedia.com.
	[86.9.84.210])
	by mx.google.com with ESMTPSA id r1sm10404624wia.5.2014.01.26.04.37.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 26 Jan 2014 04:37:59 -0800 (PST)
Message-ID: <52E501A7.5060102@gmail.com>
Date: Sun, 26 Jan 2014 12:37:59 +0000
From: Saso Kiselkov <skiselkov.ml@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>, dilos-dev@lists.illumos.org
References: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
In-Reply-To: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
X-Mailman-Approved-At: Mon, 27 Jan 2014 15:00:36 +0000
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [developer] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pretty neat work. I know a few people who'd welcome Xen on an
Illumos-based platform. Hope you get this to work properly eventually!

Cheers,
-- 
Saso

On 1/26/14, 11:26 AM, Igor Kozhukhov wrote:
> Hi All,
> 
> I have good news: I have xen-4.3 loaded on DilOS (an illumos-based platform) as dom0 (64-bit)
> 
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  2047     4     r-----     288.7
> 
> # xl info
> host                   : myhost
> release                : 5.11
> version                : 1.3.6-xen
> machine                : i86pc
> libxl: error: libxl.c:3963:libxl_get_physinfo: getting sharing freed pages
> libxl_physinfo failed.
> xen_major              : 4
> xen_minor              : 3
> xen_extra              : .2-pre-xvm
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p 
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : 
> xen_commandline        : console=com1 dom0_mem=2047M dom0_vcpus_pin=false watchdog=false
> cc_compiler            : gcc (GCC) 4.7.3
> cc_compile_by          : root
> cc_compile_domain      : 
> cc_compile_date        : Mon Jan 20 09:34:23 MSK 2014
> xend_config_format     : 4
> 
> Yes, more work is needed and it is still buggy, but work is in progress :)
> I have limited time to work on it and will do what I can in my free time.
> 
> I have to work on both sides: the dilos-illumos sources and the xen sources.
> More work is needed to update the Python scripts with patches from xen-3.4.
> 
> If anyone is interested in helping, please email the dilos-dev@ list, and you are welcome on FreeNode IRC: #dilos.
> 
> --
> Best regards,
> Igor Kozhukhov
> 
> 
> 
> 
> 
> 
> -------------------------------------------
> illumos-developer
> Archives: https://www.listbox.com/member/archive/182179/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/182179/22816526-0cda8fed
> Modify Your Subscription: https://www.listbox.com/member/?member_id=22816526&id_secret=22816526-5ad51f11
> Powered by Listbox: http://www.listbox.com
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:08:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nmy-0001K6-Vf; Mon, 27 Jan 2014 15:07:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W7nmx-0001K1-7B
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 15:07:51 +0000
Received: from [85.158.137.68:22756] by server-14.bemta-3.messagelabs.com id
	66/84-06105-64676E25; Mon, 27 Jan 2014 15:07:50 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390835268!10417733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23509 invoked from network); 27 Jan 2014 15:07:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 15:07:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; 
	d="asc'?scan'208";a="94821646"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 15:07:47 +0000
Received: from [127.0.0.1] (10.80.16.66) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 10:07:47 -0500
Message-ID: <1390835266.5660.16.camel@Abyss-unstable>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 27 Jan 2014 15:07:46 +0000
In-Reply-To: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
References: <1CAC49AC-8890-4A1D-BBEA-CC69FF61F30A@gmail.com>
Organization: Citrix Ltd. UK
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: dilos-dev@lists.illumos.org,
	illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port status
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2462510638439059957=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2462510638439059957==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-BIKqt0t0lB11ybHk4a+Z"

--=-BIKqt0t0lB11ybHk4a+Z
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sun, 2014-01-26 at 15:26 +0400, Igor Kozhukhov wrote:
> Hi All,
> 
> I have good news: I have xen-4.3 loaded on DilOS (an illumos-based platform) as dom0 (64-bit)
>
> [..]
>
> Yes, more work is needed and it is still buggy, but work is in progress :)
>
Great!!

The offer to blog about it on the Xen Project's blog still stands...
whenever you think the time is right for it.

> I have limited time to work on it and will do what I can in my free time.
>
> I have to work on both sides: the dilos-illumos sources and the xen sources.
> More work is needed to update the Python scripts with patches from xen-3.4.
> 
> If anyone is interested in helping, please email the dilos-dev@ list, and you are welcome on FreeNode IRC: #dilos.
> 
I see. Actually, that is part of the reason why I suggested that you
blog about it, even if it's early. I can't promise miracles, but
letting people know about the project may pique someone's interest and
get them on board. Well, at least it shouldn't hurt! ;-P

Let me know.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

--=-BIKqt0t0lB11ybHk4a+Z
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLmdkIACgkQk4XaBE3IOsQSwgCeKbDoiJ3AIyHr0VR0q3HojnEb
dowAn2tX/RT/JrvTvdoYaEX4hWqFO2Rj
=sra8
-----END PGP SIGNATURE-----

--=-BIKqt0t0lB11ybHk4a+Z--


--===============2462510638439059957==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2462510638439059957==--


From xen-devel-bounces@lists.xen.org Mon Jan 27 15:20:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nyw-0002JE-51; Mon, 27 Jan 2014 15:20:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W7nyt-0002J7-Un
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 15:20:12 +0000
Received: from [193.109.254.147:50053] by server-5.bemta-14.messagelabs.com id
	DF/0F-03510-B2976E25; Mon, 27 Jan 2014 15:20:11 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390836008!146806!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13821 invoked from network); 27 Jan 2014 15:20:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 15:20:10 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RFK1NT027113
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 15:20:01 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFJwqE021065
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 15:19:58 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFJvaQ029778; Mon, 27 Jan 2014 15:19:57 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 07:19:57 -0800
Message-ID: <52E67956.7030503@oracle.com>
Date: Mon, 27 Jan 2014 10:20:54 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
	<52E2905D0200007800116BD2@nat28.tlf.novell.com>
	<52E29F27.50403@oracle.com>
	<52E6281C020000780011716B@nat28.tlf.novell.com>
In-Reply-To: <52E6281C020000780011716B@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 03:34 AM, Jan Beulich wrote:
>>>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>>> +{
>>>> +    int ret = -EINVAL;
>>>> +    xen_pmu_params_t pmu_params;
>>>> +    uint32_t mode;
>>>> +
>>>> +    switch ( op )
>>>> +    {
>>>> +    case XENPMU_mode_set:
>>>> +        if ( !is_control_domain(current->domain) )
>>>> +            return -EPERM;
>>>> +
>>>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
>>>> +        if ( mode & ~XENPMU_MODE_ON )
>>>> +            return -EINVAL;
>>> Please, if you add a new interface, think carefully about future
>>> extension room: Here you ignore the upper 32 bits of .val instead
>>> of making sure they're zero, thus making it impossible to assign
>>> them some meaning later on.
>> I think I can leave this as is for now --- I am storing VPMU mode and
>> VPMU features in the Xen-private vpmu_mode, which is a 64-bit value.
> You should drop the cast to a 32-bit value at the very least -
> "leave this as is for now" reads like you don't need to make
> any changes.

The mode is stored in the lower 32 bits of the vpmu_mode variable a few lines below:

      vpmu_mode &= ~XENPMU_MODE_MASK; // XENPMU_MODE_MASK - 0xffffffff
      vpmu_mode |= mode;

so the cast needs to happen somewhere. I can move it to the line above,
although I am not sure what difference that would make.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:20:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7nyw-0002JE-51; Mon, 27 Jan 2014 15:20:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W7nyt-0002J7-Un
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 15:20:12 +0000
Received: from [193.109.254.147:50053] by server-5.bemta-14.messagelabs.com id
	DF/0F-03510-B2976E25; Mon, 27 Jan 2014 15:20:11 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390836008!146806!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13821 invoked from network); 27 Jan 2014 15:20:10 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 15:20:10 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RFK1NT027113
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 15:20:01 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFJwqE021065
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 15:19:58 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFJvaQ029778; Mon, 27 Jan 2014 15:19:57 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 07:19:57 -0800
Message-ID: <52E67956.7030503@oracle.com>
Date: Mon, 27 Jan 2014 10:20:54 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
	<52E2905D0200007800116BD2@nat28.tlf.novell.com>
	<52E29F27.50403@oracle.com>
	<52E6281C020000780011716B@nat28.tlf.novell.com>
In-Reply-To: <52E6281C020000780011716B@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 03:34 AM, Jan Beulich wrote:
>>>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>>> +{
>>>> +    int ret = -EINVAL;
>>>> +    xen_pmu_params_t pmu_params;
>>>> +    uint32_t mode;
>>>> +
>>>> +    switch ( op )
>>>> +    {
>>>> +    case XENPMU_mode_set:
>>>> +        if ( !is_control_domain(current->domain) )
>>>> +            return -EPERM;
>>>> +
>>>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>>>> +            return -EFAULT;
>>>> +
>>>> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
>>>> +        if ( mode & ~XENPMU_MODE_ON )
>>>> +            return -EINVAL;
>>> Please, if you add a new interface, think carefully about future
>>> extension room: Here you ignore the upper 32 bits of .val instead
>>> of making sure they're zero, thus making it impossible to assign
>>> them some meaning later on.
>> I think I can leave this as is for now --- I am storing VPMU mode and
>> VPMU features in the Xen-private vpmu_mode, which is a 64-bit value.
> You should drop the cast to a 32-bit value at the very least -
> "leave this as is for now" reads like you don't need to make
> any changes.

mode is stored in the lower 32 bits of vpmu_mode variable a few lines below

      vpmu_mode &= ~XENPMU_MODE_MASK; // XENPMU_MODE_MASK - 0xffffffff
      vpmu_mode |= mode;

so the cast needs to happen somewhere. I can move it to the line above,
although I am not sure what difference that would make.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:29:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7o7X-0002ju-Et; Mon, 27 Jan 2014 15:29:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W7o7V-0002jm-Ks
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 15:29:05 +0000
Received: from [85.158.139.211:47674] by server-3.bemta-5.messagelabs.com id
	67/8C-04773-04B76E25; Mon, 27 Jan 2014 15:29:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390836543!12213781!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11355 invoked from network); 27 Jan 2014 15:29:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 15:29:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 27 Jan 2014 15:29:03 +0000
Message-Id: <52E6894C02000078001173B9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 27 Jan 2014 15:29:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-10-git-send-email-boris.ostrovsky@oracle.com>
	<52E2905D0200007800116BD2@nat28.tlf.novell.com>
	<52E29F27.50403@oracle.com>
	<52E6281C020000780011716B@nat28.tlf.novell.com>
	<52E67956.7030503@oracle.com>
In-Reply-To: <52E67956.7030503@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 09/17] x86/VPMU: Interface for setting
 PMU mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.01.14 at 16:20, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 01/27/2014 03:34 AM, Jan Beulich wrote:
>>>>> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>>>> +{
>>>>> +    int ret = -EINVAL;
>>>>> +    xen_pmu_params_t pmu_params;
>>>>> +    uint32_t mode;
>>>>> +
>>>>> +    switch ( op )
>>>>> +    {
>>>>> +    case XENPMU_mode_set:
>>>>> +        if ( !is_control_domain(current->domain) )
>>>>> +            return -EPERM;
>>>>> +
>>>>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>>>>> +            return -EFAULT;
>>>>> +
>>>>> +        mode = (uint32_t)pmu_params.d.val & XENPMU_MODE_MASK;
>>>>> +        if ( mode & ~XENPMU_MODE_ON )
>>>>> +            return -EINVAL;
>>>> Please, if you add a new interface, think carefully about future
>>>> extension room: Here you ignore the upper 32 bits of .val instead
>>>> of making sure they're zero, thus making it impossible to assign
>>>> them some meaning later on.
>>> I think I can leave this as is for now --- I am storing VPMU mode and
>>> VPMU features in the Xen-private vpmu_mode, which is a 64-bit value.
>> You should drop the cast to a 32-bit value at the very least -
>> "leave this as is for now" reads like you don't need to make
>> any changes.
> 
> mode is stored in the lower 32 bits of vpmu_mode variable a few lines below
> 
>       vpmu_mode &= ~XENPMU_MODE_MASK; // XENPMU_MODE_MASK - 0xffffffff
>       vpmu_mode |= mode;
> 
> so the cast needs to happen somewhere. I can move it to the line above,
> although I am not sure what difference that would make.

I don't really care what you do here, so long as you don't ignore
data passed into the hypercall.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:35:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oDE-00034L-DQ; Mon, 27 Jan 2014 15:35:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7oDC-00034D-HZ
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 15:34:58 +0000
Received: from [85.158.139.211:44224] by server-14.bemta-5.messagelabs.com id
	C3/1D-24200-1AC76E25; Mon, 27 Jan 2014 15:34:57 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390836896!9516092!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27902 invoked from network); 27 Jan 2014 15:34:56 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-7.tower-206.messagelabs.com with SMTP;
	27 Jan 2014 15:34:56 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 27 Jan 2014 07:34:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,729,1384329600"; d="scan'208";a="471471710"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 27 Jan 2014 07:34:51 -0800
Received: from fmsmsx115.amr.corp.intel.com (10.18.116.19) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 27 Jan 2014 07:34:51 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx115.amr.corp.intel.com (10.18.116.19) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 27 Jan 2014 07:34:51 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Mon, 27 Jan 2014 23:34:49 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
Thread-Index: AQHPGR1hIrZ/0qGTuke4rIDnPBDI1JqWrMuQgAEgmoCAAOiWEA==
Date: Mon, 27 Jan 2014 15:34:48 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C835B@SHSMSX104.ccr.corp.intel.com>
References: <52DEB887.8070409@citrix.com>
	<CAFLBxZZNnbFwjfVkqmB3OqkjjUukbf4AmbRhOSzHnwJDqLCEDQ@mail.gmail.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C7169@SHSMSX104.ccr.corp.intel.com>
	<20140127093343.GA64086@deinos.phlegethon.org>
In-Reply-To: <20140127093343.GA64086@deinos.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.4-rc2 - Some Nested Virt testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tim Deegan wrote on 2014-01-27:
> At 08:29 +0000 on 26 Jan (1390721344), Zhang, Yang Z wrote:
>> George Dunlap wrote on 2014-01-25:
>>> On Tue, Jan 21, 2014 at 6:12 PM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>> Hello,
>>>> 
>>>> I have been giving nested virt a try, and have my first bug to report.
>>>> This is still ongoing, and is by no means complete yet.
>>>> 
>>>> Setup:
>>>> Each reference to XenServer is a trunk XenServer based on 4.4-rc2
>>>> 
>>>> Single Intel Haswell SDP (Grantley platform):
>>>> Native hypervisor: XenServer
>>>> 
>>>> Two L1 guests:
>>>>   XenServer (running with EPT)
>>>>   XenServer (running with shadow)
>>>> When attempting to create an L2 EPT HVM domain under an L1 shadow
>>>> domain, the L1 shadow domain is killed with:
>>> 
>>> Is EPT-on-shadow actually meant to work?  I wouldn't be surprised
>>> if the L2 HAP stuff assumed that L1 was HAP as well.
>>> 
>>> In which case, if an L1 guest is started in shadow mode, then EPT
>>> should not be advertised.
>> 
>> AFAIK, EPT-on-shadow is not supported. Shadow-on-shadow is buggy
>> (actually, I have never tried it successfully from the first day I
>> started working on nested stuff).
> 
> Fair enough.  That needs to be documented, and those modes (which I
> guess means nested-on-shadow in general) need to be disabled in the
> hypervisor, with a sensible error message.
> 

Yes, I am working on writing the wiki page.

>> Shadow-on-EPT and EPT-on-EPT are working on my box, but I
>> recommend using EPT-on-EPT if possible, because it is really a
>> pain to run an L2 guest in shadow-on-shadow mode due to the poor performance.
> 
> Yeah, I think it's generally accepted that having shadow pagetables
> anywhere in that stack is going to hurt.  Sadly, there's no way for
> the L0 admin to stop the L1 hypervisor from using shadow pagetables,
> so shadow-on-EPT ought to at least work correctly, even if performance sucks.
> 
> Tim.


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:35:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oDL-00034r-Qs; Mon, 27 Jan 2014 15:35:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W7oDK-00034g-Ns
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 15:35:07 +0000
Received: from [85.158.143.35:44415] by server-3.bemta-4.messagelabs.com id
	23/5E-32360-AAC76E25; Mon, 27 Jan 2014 15:35:06 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390836899!1103801!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20462 invoked from network); 27 Jan 2014 15:35:03 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-14.tower-21.messagelabs.com with SMTP;
	27 Jan 2014 15:35:03 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 27 Jan 2014 07:34:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,729,1384329600"; d="scan'208";a="471471727"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga002.fm.intel.com with ESMTP; 27 Jan 2014 07:34:52 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 27 Jan 2014 07:34:52 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Mon, 27 Jan 2014 23:34:48 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Thread-Topic: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
Thread-Index: AQHPDUs3aA8yi+K0sk+fDGrKfVQCipqWOPwwgAHYewCAALrRsA==
Date: Mon, 27 Jan 2014 15:34:48 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C8354@SHSMSX104.ccr.corp.intel.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini wrote on 2014-01-27:
> On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
>> Anthony PERARD wrote on 2014-01-09:
>>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk
> wrote:
>>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>>>>> [...]
>>>>>>>>> Those Xen report something like:
>>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 > 131328
>>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46 memflags=0 (62 of 64)
>>>>>>>>> 
>>>>>>>>> ?
>>>>>>>>> 
>>>>>>>>> (I tried to reproduce the issue by simply adding many emulated
>>>>>>>>> e1000 NICs in QEMU :) )
>>>>>>>>> 
>>>>> 
>>>>>> -bash-4.1# lspci -s 01:00.0 -v
>>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>>>>         Flags: fast devsel, IRQ 16
>>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>>>>         I/O ports at e020 [disabled] [size=32]
>>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>>>> 
>>>>> BTW, I think this is the issue, the Expansion ROM. qemu-xen will
>>>>> allocate memory for it. We will maybe have to find another way.
>>>>> qemu-trad does not seem to allocate memory, but I haven't gotten
>>>>> very far in trying to check that.
>>>> 
>>>> And indeed that is the case. The "Fix" below fixes it.
>>>> 
>>>> 
>>>> Based on that and this guest config:
>>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>>> memory = 2048
>>>> boot="d"
>>>> maxvcpus=32
>>>> vcpus=1
>>>> serial='pty'
>>>> vnclisten="0.0.0.0"
>>>> name="latest"
>>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>>>> pci = [ "01:00.0" ]
>>>> 
>>>> I can boot the guest.
>>> 
>>> And can you access the ROM from the guest?
>>> 
>>> 
>>> Also, I have another patch; it will initialize the PCI ROM BAR like
>>> any other BAR. In this case, if qemu is involved in the access to the
>>> ROM, it will print an error, as is the case for the other BARs.
>>> 
>>> I tried to test it, but it was with an embedded VGA card. When I dumped
>>> the ROM, I got the same one as the emulated card instead of the ROM
>>> from the device.
>>> 
>>> 
>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>> index 6dd7a68..2bbdb6d 100644
>>> --- a/hw/xen/xen_pt.c
>>> +++ b/hw/xen/xen_pt.c
>>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>>> 
>>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>>> -                                      "xen-pci-pt-rom", d->rom.size);
>>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>>> +                              "xen-pci-pt-rom", d->rom.size);
>>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>>                           &s->rom);
>> 
>> Hi, Anthony,
>> 
>> Is your fix the final solution for this issue? If yes, will
>> you push it before the Xen 4.4 release?
> 
> I included this patch in the last pull request I sent to Anthony Liguori:
> 
> http://marc.info/?l=qemu-devel&m=138997319906095
> 
> It hasn't been pulled yet, but I would expect that it is going to be
> upstream soon.
> Regarding the 4.4 release, we are trying to fix a couple of other
> serious bugs in the qemu-xen tree right now, but it is still
> conceivable to have this fix backported to the tree in time for the release.

I hope it can make the 4.4 release.

BTW: Do you know when Xen 4.4 will be released? The wiki says it should have been Jan 21, but it seems there will still be an RC3 coming out on Feb 9.

Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Stefano Stabellini wrote on 2014-01-27:
> On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
>> Anthony PERARD wrote on 2014-01-09:
>>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
>>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk
> wrote:
>>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
>>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
>>>>>>> [...]
>>>>>>>>> Those Xen report something like:
>>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46:
>>>>>>>>> 131329 >
>>>>>>>>> 131328
>>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent:
>>>>>>>>> id=46
>>>>>>>>> memflags=0 (62 of 64)
>>>>>>>>> 
>>>>>>>>> ?
>>>>>>>>> 
>>>>>>>>> (I tried to reproduce the issue by simply adding many emulated
>>>>>>>>> e1000 devices in QEMU :) )
>>>>>>>>> 
>>>>> 
>>>>>> -bash-4.1# lspci -s 01:00.0 -v
>>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
>>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
>>>>>>         Flags: fast devsel, IRQ 16
>>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
>>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
>>>>>>         I/O ports at e020 [disabled] [size=32]
>>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
>>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
>>>>> 
>>>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
>>>>> allocate memory for it, so we may have to find another way.
>>>>> qemu-trad does not seem to allocate memory, but I haven't gone
>>>>> very far in trying to check that.
>>>> 
>>>> And indeed that is the case. The "Fix" below fixes it.
>>>> 
>>>> 
>>>> Based on that and this guest config:
>>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>>> memory = 2048
>>>> boot="d"
>>>> maxvcpus=32
>>>> vcpus=1
>>>> serial='pty'
>>>> vnclisten="0.0.0.0"
>>>> name="latest"
>>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ] pci =
>>>> ["01:00.0"]
>>>> 
>>>> I can boot the guest.
>>> 
>>> And can you access the ROM from the guest ?
>>> 
>>> 
>>> Also, I have another patch; it initializes the PCI ROM BAR like
>>> any other BAR. In this case, if QEMU is involved in the access to the ROM,
>>> it will print an error, as is the case for other BARs.
>>> 
>>> I tried to test it, but it was with an embedded VGA card. When I dump
>>> the ROM, I got the same one as the emulated card instead of the ROM
>>> from the device.
>>> 
>>> 
>>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
>>> index 6dd7a68..2bbdb6d 100644
>>> --- a/hw/xen/xen_pt.c
>>> +++ b/hw/xen/xen_pt.c
>>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
>>> 
>>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
>>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
>>> -                                      "xen-pci-pt-rom", d->rom.size);
>>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
>>> +                              "xen-pci-pt-rom", d->rom.size);
>>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
>>>                           &s->rom);
>> 
>> Hi, Anthony,
>> 
>> Is your fix the final solution for this issue? If so, will
>> you push it before the Xen 4.4 release?
> 
> I included this patch in the last pull request I sent to Anthony Liguori:
> 
> http://marc.info/?l=qemu-devel&m=138997319906095
> 
> It hasn't been pulled yet, but I would expect that it is going to be
> upstream soon.
> Regarding the 4.4 release, we are trying to fix a couple of other
> serious bugs in the qemu-xen tree right now, but it is still
> conceivable to have this fix backported to the tree in time for the release.

I hope it can make it into the 4.4 release.

BTW: do you know when Xen 4.4 will be released? The wiki says it should have been Jan 21, but it seems RC3 will still be coming out on Feb 9.

Best regards,
Yang
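The two QEMU calls in the xen_pt.c patch quoted above differ in whether the region gets emulator-allocated backing memory. The toy model below (plain C, not the QEMU API; all names are illustrative) sketches that distinction: a rom_device-style region serves reads from a locally allocated copy of the ROM, while an io-style region keeps no backing store and traps every access to a callback.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model (plain C, not the QEMU API) of the two ROM BAR strategies
 * discussed in the thread.  All names here are illustrative. */

typedef uint8_t (*mmio_read_cb)(void *opaque, size_t off);

struct mmio_region {
    const uint8_t *backing; /* non-NULL: rom_device style, local ROM copy */
    mmio_read_cb   read;    /* io style: trap each access to a callback   */
    void          *opaque;
};

/* rom_device style serves reads from emulator-allocated memory; the io
 * style allocates nothing and forwards the access instead. */
static uint8_t region_read(const struct mmio_region *r, size_t off)
{
    if (r->backing)
        return r->backing[off];
    return r->read(r->opaque, off);
}

/* Example io-style callback: forward to the physical device's ROM. */
static uint8_t passthrough_read(void *opaque, size_t off)
{
    const uint8_t *device_rom = opaque;
    return device_rom[off];
}
```

In the bug discussed here, the local ROM copy was the extra allocation that pushed the guest over its reservation ("Over-allocation for domain 46"), which is why registering the ROM BAR as an io-style region avoids it.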


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:38:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:38:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oG6-0003IK-FM; Mon, 27 Jan 2014 15:37:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7oG4-0003IF-IF
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 15:37:56 +0000
Received: from [85.158.137.68:33710] by server-9.bemta-3.messagelabs.com id
	A0/F6-13104-35D76E25; Mon, 27 Jan 2014 15:37:55 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390837071!7946712!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2878 invoked from network); 27 Jan 2014 15:37:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 15:37:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="96844779"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 15:37:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 10:37:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7oFy-0004Cn-3S;
	Mon, 27 Jan 2014 15:37:50 +0000
Date: Mon, 27 Jan 2014 15:37:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C8354@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1401271537140.4373@kaball.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C8354@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Zhang, Yang Z wrote:
> Stefano Stabellini wrote on 2014-01-27:
> > On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
> >> Anthony PERARD wrote on 2014-01-09:
> >>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> >>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> >>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk
> > wrote:
> >>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> >>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> >>>>>>> [...]
> >>>>>>>>> Those Xen report something like:
> >>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46:
> >>>>>>>>> 131329 >
> >>>>>>>>> 131328
> >>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent:
> >>>>>>>>> id=46
> >>>>>>>>> memflags=0 (62 of 64)
> >>>>>>>>> 
> >>>>>>>>> ?
> >>>>>>>>> 
> >>>>>>>>> (I tried to reproduce the issue by simply adding many emulated
> >>>>>>>>> e1000 devices in QEMU :) )
> >>>>>>>>> 
> >>>>> 
> >>>>>> -bash-4.1# lspci -s 01:00.0 -v
> >>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> >>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >>>>>>         Flags: fast devsel, IRQ 16
> >>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> >>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> >>>>>>         I/O ports at e020 [disabled] [size=32]
> >>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> >>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
> >>>>> 
> >>>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> >>>>> allocate memory for it, so we may have to find another way.
> >>>>> qemu-trad does not seem to allocate memory, but I haven't gone
> >>>>> very far in trying to check that.
> >>>> 
> >>>> And indeed that is the case. The "Fix" below fixes it.
> >>>> 
> >>>> 
> >>>> Based on that and this guest config:
> >>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> >>>> memory = 2048
> >>>> boot="d"
> >>>> maxvcpus=32
> >>>> vcpus=1
> >>>> serial='pty'
> >>>> vnclisten="0.0.0.0"
> >>>> name="latest"
> >>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ] pci =
> >>>> ["01:00.0"]
> >>>> 
> >>>> I can boot the guest.
> >>> 
> >>> And can you access the ROM from the guest ?
> >>> 
> >>> 
> >>> Also, I have another patch; it initializes the PCI ROM BAR like
> >>> any other BAR. In this case, if QEMU is involved in the access to the ROM,
> >>> it will print an error, as is the case for other BARs.
> >>> 
> >>> I tried to test it, but it was with an embedded VGA card. When I dump
> >>> the ROM, I got the same one as the emulated card instead of the ROM
> >>> from the device.
> >>> 
> >>> 
> >>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >>> index 6dd7a68..2bbdb6d 100644
> >>> --- a/hw/xen/xen_pt.c
> >>> +++ b/hw/xen/xen_pt.c
> >>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >>> 
> >>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >>> -                                      "xen-pci-pt-rom", d->rom.size);
> >>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >>> +                              "xen-pci-pt-rom", d->rom.size);
> >>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >>>                           &s->rom);
> >> 
> >> Hi, Anthony,
> >> 
> >> Is your fix the final solution for this issue? If so, will
> >> you push it before the Xen 4.4 release?
> > 
> > I included this patch in the last pull request I sent to Anthony Liguori:
> > 
> > http://marc.info/?l=qemu-devel&m=138997319906095
> > 
> > It hasn't been pulled yet, but I would expect that it is going to be
> > upstream soon.
> > Regarding the 4.4 release, we are trying to fix a couple of other
> > serious bugs in the qemu-xen tree right now, but it is still
> > conceivable to have this fix backported to the tree in time for the release.
> 
> I hope it can make it into the 4.4 release.
> 
> BTW: do you know when Xen 4.4 will be released? The wiki says it should have been Jan 21, but it seems RC3 will still be coming out on Feb 9.

When it is ready ;-)
It is probably going to be "soon".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 15:58:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 15:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oZO-00045T-UN; Mon, 27 Jan 2014 15:57:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7oZN-00045O-JJ
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 15:57:53 +0000
Received: from [193.109.254.147:41997] by server-16.bemta-14.messagelabs.com
	id 4E/D1-20600-00286E25; Mon, 27 Jan 2014 15:57:52 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390838269!154798!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26630 invoked from network); 27 Jan 2014 15:57:50 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 15:57:50 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RFvjYL021651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 15:57:46 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFvirT004068
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 27 Jan 2014 15:57:44 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RFvhhg004048; Mon, 27 Jan 2014 15:57:44 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 07:57:43 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 59A821BFA72; Mon, 27 Jan 2014 10:57:42 -0500 (EST)
Date: Mon, 27 Jan 2014 10:57:42 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Bob Liu <lliubbo@gmail.com>
Message-ID: <20140127155742.GB22245@phenom.dumpdata.com>
References: <1390373864-10525-1-git-send-email-bob.liu@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390373864-10525-1-git-send-email-bob.liu@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: james.dingwall@zynstra.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] drivers: xen: deaggressive selfballoon
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 02:57:44PM +0800, Bob Liu wrote:
> The current xen-selfballoon driver is too aggressive, which may cause OOM to be
> triggered more often. E.g. this bug reported by James:
> https://lkml.org/lkml/2013/11/21/158
> 
> There are two main reasons:
> 1) The original goal_page didn't consider some pages used by kernel space, like
> slab pages and pages used by device drivers.
> 
> 2) The balloon driver may not give back memory to the guest OS fast enough when
> the workload suddenly acquires a lot of physical memory.
> 
> In both cases, the guest OS will suffer from memory pressure and OOM may
> be triggered.
> 
> The fix is to make the xen-selfballoon driver less aggressive by adding an extra
> 10% of total ram pages to goal_page.
> It's more valuable to keep the guest system reliable and responsive than to
> balloon out these 10% of pages to Xen.
> 
> Signed-off-by: Bob Liu <bob.liu@oracle.com>

Looks OK to me.
> ---
>  drivers/xen/xen-selfballoon.c |   22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..745ad79 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>  #endif /* CONFIG_FRONTSWAP */
>  
>  #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>  
>  /*
>   * Use current balloon size, the goal (vm_committed_as), and hysteresis
> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>  int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>  {
>  	bool enable = false;
> +	unsigned long reserve_pages;
>  
>  	if (!xen_domain())
>  		return -ENODEV;
> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>  	if (!enable)
>  		return -ENODEV;
>  
> +	/*
> +	 * Give selfballoon_reserved_mb a default value (10% of total ram pages)
> +	 * to make selfballoon less aggressive.
> +	 *
> +	 * There are two main reasons:
> +	 * 1) The original goal_page didn't consider some pages used by kernel
> +	 *    space, like slab pages and memory used by device drivers.
> +	 *
> +	 * 2) The balloon driver may not give back memory to the guest OS fast
> +	 *    enough when the workload suddenly acquires a lot of physical memory.
> +	 *
> +	 * In both cases, the guest OS will suffer from memory pressure and
> +	 * the OOM killer may be triggered.
> +	 * By reserving an extra 10% of total ram pages, we can keep the system
> +	 * much more reliable and responsive in some cases.
> +	 */
> +	if (!selfballoon_reserved_mb) {
> +		reserve_pages = totalram_pages / 10;
> +		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
> +	}
>  	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
>  
>  	return 0;
> -- 
> 1.7.10.4
> 
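For reference, the reservation arithmetic in the patch can be checked standalone. The sketch below assumes 4 KiB pages (PAGE_SHIFT = 12, as on x86); `default_reserved_mb` is an illustrative name, not from the patch. It mirrors the MB2PAGES/PAGES2MB macros and the 10% default: a 2 GiB guest (524288 pages) ends up with selfballoon_reserved_mb = 204.

```c
#include <stdint.h>

/* Assumes 4 KiB pages (PAGE_SHIFT = 12), as on x86. */
#define PAGE_SHIFT 12
#define MB2PAGES(mb)    ((unsigned long)(mb) << (20 - PAGE_SHIFT))
#define PAGES2MB(pages) ((unsigned long)(pages) >> (20 - PAGE_SHIFT))

/* Illustrative mirror of the patch's default: reserve 10% of RAM, in MB. */
static unsigned long default_reserved_mb(unsigned long totalram_pages)
{
    return PAGES2MB(totalram_pages / 10);
}
```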

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Cc: james.dingwall@zynstra.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] drivers: xen: deaggressive selfballoon
	driver
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 02:57:44PM +0800, Bob Liu wrote:
> The current xen-selfballoon driver is too aggressive, which may cause the OOM
> killer to be triggered more often. E.g. this bug reported by James:
> https://lkml.org/lkml/2013/11/21/158
> 
> There are two main reasons:
> 1) The original goal_page didn't account for some pages used by kernel space,
> like slab pages and pages used by device drivers.
> 
> 2) The balloon driver may not give memory back to the guest OS fast enough
> when the workload suddenly acquires a lot of physical memory.
> 
> In both cases, the guest OS will suffer from memory pressure and OOM may
> be triggered.
> 
> The fix is to make the xen-selfballoon driver less aggressive by adding an
> extra 10% of total RAM pages to goal_page.
> It's more valuable to keep the guest system reliable and responsive than to
> balloon out these 10% of pages to Xen.
> 
> Signed-off-by: Bob Liu <bob.liu@oracle.com>

Looks OK to me.
> ---
>  drivers/xen/xen-selfballoon.c |   22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..745ad79 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>  #endif /* CONFIG_FRONTSWAP */
>  
>  #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>  
>  /*
>   * Use current balloon size, the goal (vm_committed_as), and hysteresis
> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>  int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>  {
>  	bool enable = false;
> +	unsigned long reserve_pages;
>  
>  	if (!xen_domain())
>  		return -ENODEV;
> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>  	if (!enable)
>  		return -ENODEV;
>  
> +	/*
> +	 * Give selfballoon_reserved_mb a default value (10% of total RAM
> +	 * pages) to make selfballooning less aggressive.
> +	 *
> +	 * There are two main reasons:
> +	 * 1) The original goal_page didn't account for some pages used by
> +	 *    kernel space, like slab pages and memory used by device drivers.
> +	 *
> +	 * 2) The balloon driver may not give memory back to the guest OS
> +	 *    fast enough when the workload suddenly acquires a lot of
> +	 *    physical memory.
> +	 *
> +	 * In both cases, the guest OS will suffer from memory pressure and
> +	 * the OOM killer may be triggered.
> +	 * By reserving an extra 10% of total RAM pages, we keep the system
> +	 * much more reliable and responsive in some cases.
> +	 */
> +	if (!selfballoon_reserved_mb) {
> +		reserve_pages = totalram_pages / 10;
> +		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
> +	}
>  	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
>  
>  	return 0;
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:09:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7okZ-0005Ba-MA; Mon, 27 Jan 2014 16:09:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7okY-0005BV-73
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 16:09:26 +0000
Received: from [85.158.137.68:6491] by server-1.bemta-3.messagelabs.com id
	5E/94-29598-5B486E25; Mon, 27 Jan 2014 16:09:25 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390838963!11626194!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26167 invoked from network); 27 Jan 2014 16:09:24 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 16:09:24 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RG9IKs028865
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 16:09:19 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RG9HT3014229
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 16:09:17 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0RG9GEW005331; Mon, 27 Jan 2014 16:09:16 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 08:09:15 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 79D0E1BFA72; Mon, 27 Jan 2014 11:09:14 -0500 (EST)
Date: Mon, 27 Jan 2014 11:09:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140127160914.GA23059@phenom.dumpdata.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> I've at least identified two possible memory leaks in blkback, both
> related to the shutdown path of a VBD:
> 
> - We don't wait for any pending purge work to finish before cleaning
>   the list of free_pages. The purge work will call put_free_pages and
>   thus we might end up with pages being added to the free_pages list
>   after we have emptied it.
> - We don't wait for pending requests to end before cleaning persistent
>   grants and the list of free_pages. Again this can add pages to the
>   free_pages lists or persistent grants to the persistent_gnts
>   red-black tree.
> 
> Also, add some checks in xen_blkif_free to make sure we are cleaning
> everything.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Matt Rushton <mrushton@amazon.com>
> Cc: Matt Wilson <msw@amazon.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
> This should be applied after the patch:
> 
> xen-blkback: fix memory leak when persistent grants are used

Could you respin the series with the issues below fixed and
have said patch as part of the series. That way not only does
it have your SoB on it but it makes it easier to apply the patch
for lazy^H^H^Hbusy maintainers and makes it clear that you had
tested both of them.

Also, please CC Jens Axboe on these patches.

Thank you!
> 
> From Matt Rushton & Matt Wilson and backported to stable.
> 
> I've been able to create and destroy ~4000 guests while doing heavy IO
> operations with this patch on a 512M Dom0 without problems.
> ---
>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>  2 files changed, 28 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 30ef7b3..19925b7 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  				struct pending_req *pending_req);
>  static void make_response(struct xen_blkif *blkif, u64 id,
>  			  unsigned short op, int st);
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>  
>  #define foreach_grant_safe(pos, n, rbtree, node) \
>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> @@ -625,6 +626,12 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>  
> +	/* Drain pending IO */
> +	xen_blk_drain_io(blkif, true);
> +
> +	/* Drain pending purge work */
> +	flush_work(&blkif->persistent_purge_work);
> +
>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>  	return -EIO;
>  }
>  
> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>  {
>  	atomic_set(&blkif->drain, 1);
>  	do {
> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>  
>  		if (!atomic_read(&blkif->drain))
>  			break;
> -	} while (!kthread_should_stop());
> +	} while (!kthread_should_stop() || force);
>  	atomic_set(&blkif->drain, 0);
>  }
>  
> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>  	 * the proper response on the ring.
>  	 */
>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> -		xen_blkbk_unmap(pending_req->blkif,
> +		struct xen_blkif *blkif = pending_req->blkif;
> +
> +		xen_blkbk_unmap(blkif,
>  		                pending_req->segments,
>  		                pending_req->nr_pages);
> -		make_response(pending_req->blkif, pending_req->id,
> +		make_response(blkif, pending_req->id,
>  			      pending_req->operation, pending_req->status);
> -		xen_blkif_put(pending_req->blkif);
> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> -			if (atomic_read(&pending_req->blkif->drain))
> -				complete(&pending_req->blkif->drain_complete);
> +		free_req(blkif, pending_req);
> +		xen_blkif_put(blkif);
> +		if (atomic_read(&blkif->refcnt) <= 2) {
> +			if (atomic_read(&blkif->drain))
> +				complete(&blkif->drain_complete);
>  		}
> -		free_req(pending_req->blkif, pending_req);
>  	}
>  }
>  
> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  	 * issue the WRITE_FLUSH.
>  	 */
>  	if (drain)
> -		xen_blk_drain_io(pending_req->blkif);
> +		xen_blk_drain_io(pending_req->blkif, false);
>  
>  	/*
>  	 * If we have failed at this point, we need to undo the M2P override,
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index c2014a0..3c10281 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>  	blkif->persistent_gnts.rb_node = NULL;
>  	spin_lock_init(&blkif->free_pages_lock);
>  	INIT_LIST_HEAD(&blkif->free_pages);
> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);

Hm,
>  	blkif->free_pages_num = 0;
>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>  
> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
>  	if (!atomic_dec_and_test(&blkif->refcnt))
>  		BUG();
>  
> +	/* Make sure everything is drained before shutting down */
> +	BUG_ON(blkif->persistent_gnt_c != 0);
> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
> +	BUG_ON(blkif->free_pages_num != 0);
> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));

You don't seem to put anything on this list? Or even declare this?
Was there another patch in the series?

> +	BUG_ON(!list_empty(&blkif->free_pages));
> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> +
>  	/* Check that there is no request in use */
>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
>  		list_del(&req->free_list);
> -- 
> 1.7.7.5 (Apple Git-26)
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:12:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7onb-0005LO-Kl; Mon, 27 Jan 2014 16:12:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7onZ-0005LH-JT
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:12:33 +0000
Received: from [85.158.137.68:45340] by server-11.bemta-3.messagelabs.com id
	1D/86-19379-07586E25; Mon, 27 Jan 2014 16:12:32 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390839150!11554495!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17441 invoked from network); 27 Jan 2014 16:12:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 16:12:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RGCRC4010019
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 16:12:27 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0RGCPmR013970
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 16:12:26 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RGCPer001223; Mon, 27 Jan 2014 16:12:25 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 08:12:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B42921BFA72; Mon, 27 Jan 2014 11:12:24 -0500 (EST)
Date: Mon, 27 Jan 2014 11:12:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <20140127161224.GB23059@phenom.dumpdata.com>
References: <1390818970-737-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390818970-737-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: JBeulich@suse.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 11:36:10AM +0100, Olaf Hering wrote:
> Also fix the name of the discard-alignment property, add the missing 'n'.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thanks!
> ---
> v2:
> include changes suggested by Jan and Konrad which make it more clear that
> both properties have to be present, if required.
> 
>  xen/include/public/io/blkif.h | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 84eb7fd..542f123 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,7 +175,7 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> - * discard-aligment
> + * discard-alignment
>   *      Values:         <uint32_t>
>   *      Default Value:  0
>   *      Notes:          4, 5
> @@ -194,6 +194,7 @@
>   * discard-secure
>   *      Values:         0/1 (boolean)
>   *      Default Value:  0
> + *      Notes:          10
>   *
>   *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
>   *      requests with the BLKIF_DISCARD_SECURE flag set.
> @@ -323,9 +324,15 @@
>   *     For full interoperability, block front and backends should publish
>   *     identical ring parameters, adjusted for unit differences, to the
>   *     XenStore nodes used in both schemes.
> - * (4) Devices that support discard functionality may internally allocate
> - *     space (discardable extents) in units that are larger than the
> - *     exported logical block size.
> + * (4) Devices that support discard functionality may internally allocate space
> + *     (discardable extents) in units that are larger than the exported logical
> + *     block size. If the backing device has such discardable extents the
> + *     backend should provide both discard-granularity and discard-alignment.
> + *     Providing just one of the two may be considered an error by the frontend.
> + *     Backends supporting discard should include discard-granularity and
> + *     discard-alignment even if they support discarding individual sectors.
> + *     Frontends should assume discard-alignment == 0 and discard-granularity
> + *     == sector size if these keys are missing.
>   * (5) The discard-alignment parameter allows a physical device to be
>   *     partitioned into virtual devices that do not necessarily begin or
>   *     end on a discardable extent boundary.
> @@ -344,6 +351,8 @@
>   *     grants that can be persistently mapped in the frontend driver, but
>   *     due to the frontent driver implementation it should never be bigger
>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if the
> + *     backing device supports secure discard.
>   */
>  
>  /*

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:12:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7onb-0005LO-Kl; Mon, 27 Jan 2014 16:12:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7onZ-0005LH-JT
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:12:33 +0000
Received: from [85.158.137.68:45340] by server-11.bemta-3.messagelabs.com id
	1D/86-19379-07586E25; Mon, 27 Jan 2014 16:12:32 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1390839150!11554495!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17441 invoked from network); 27 Jan 2014 16:12:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 16:12:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RGCRC4010019
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 16:12:27 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0RGCPmR013970
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 16:12:26 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RGCPer001223; Mon, 27 Jan 2014 16:12:25 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 08:12:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B42921BFA72; Mon, 27 Jan 2014 11:12:24 -0500 (EST)
Date: Mon, 27 Jan 2014 11:12:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Olaf Hering <olaf@aepfle.de>
Message-ID: <20140127161224.GB23059@phenom.dumpdata.com>
References: <1390818970-737-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390818970-737-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: JBeulich@suse.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] blkif.h: enhance comments related to the
 discard feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 11:36:10AM +0100, Olaf Hering wrote:
> Also fix the name of the discard-alignment property, add the missing 'n'.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thanks!
> ---
> v2:
> include changes suggested by Jan and Konrad which make it more clear that
> both properties have to be present, if required.
> 
>  xen/include/public/io/blkif.h | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 84eb7fd..542f123 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,7 +175,7 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> - * discard-aligment
> + * discard-alignment
>   *      Values:         <uint32_t>
>   *      Default Value:  0
>   *      Notes:          4, 5
> @@ -194,6 +194,7 @@
>   * discard-secure
>   *      Values:         0/1 (boolean)
>   *      Default Value:  0
> + *      Notes:          10
>   *
>   *      A value of "1" indicates that the backend can process BLKIF_OP_DISCARD
>   *      requests with the BLKIF_DISCARD_SECURE flag set.
> @@ -323,9 +324,15 @@
>   *     For full interoperability, block front and backends should publish
>   *     identical ring parameters, adjusted for unit differences, to the
>   *     XenStore nodes used in both schemes.
> - * (4) Devices that support discard functionality may internally allocate
> - *     space (discardable extents) in units that are larger than the
> - *     exported logical block size.
> + * (4) Devices that support discard functionality may internally allocate space
> + *     (discardable extents) in units that are larger than the exported logical
> + *     block size. If the backing device has such discardable extents the
> + *     backend should provide both discard-granularity and discard-alignment.
> + *     Providing just one of the two may be considered an error by the frontend.
> + *     Backends supporting discard should include discard-granularity and
> + *     discard-alignment even if it supports discarding individual sectors.
> + *     Frontends should assume discard-alignment == 0 and discard-granularity
> + *     == sector size if these keys are missing.
>   * (5) The discard-alignment parameter allows a physical device to be
>   *     partitioned into virtual devices that do not necessarily begin or
>   *     end on a discardable extent boundary.
> @@ -344,6 +351,8 @@
>   *     grants that can be persistently mapped in the frontend driver, but
>   *     due to the frontent driver implementation it should never be bigger
>   *     than RING_SIZE * BLKIF_MAX_SEGMENTS_PER_REQUEST.
> + *(10) The discard-secure property may be present and will be set to 1 if the
> + *     backing device supports secure discard.
>   */
>  
>  /*
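The both-or-neither rule and the fallback defaults in note (4) can be sketched as a small helper a frontend might use. This is a hedged illustration with hypothetical names (`effective_discard_params`, `struct discard_params`), not actual Xen frontend code:

```c
#include <stdint.h>

/*
 * Hypothetical sketch (not frontend code): derive the effective discard
 * parameters per note (4). The backend must publish both keys or
 * neither; when both are missing, the frontend assumes
 * discard-alignment == 0 and discard-granularity == sector size.
 */
struct discard_params {
    uint32_t granularity;   /* bytes */
    uint32_t alignment;     /* bytes */
};

/* Returns 0 on success, -1 if the backend published only one key. */
int effective_discard_params(int have_granularity, uint32_t granularity,
                             int have_alignment, uint32_t alignment,
                             uint32_t sector_size,
                             struct discard_params *out)
{
    if (!have_granularity != !have_alignment)
        return -1;                       /* only one of the two: error */
    if (!have_granularity) {
        out->granularity = sector_size;  /* defaults per note (4) */
        out->alignment = 0;
    } else {
        out->granularity = granularity;
        out->alignment = alignment;
    }
    return 0;
}
```

A frontend following this sketch treats a lone discard-granularity (or lone discard-alignment) key as a backend bug rather than guessing the missing value.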


From xen-devel-bounces@lists.xen.org Mon Jan 27 16:14:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7opa-0005Th-76; Mon, 27 Jan 2014 16:14:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7opZ-0005TZ-6P
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:14:37 +0000
Received: from [85.158.143.35:30227] by server-2.bemta-4.messagelabs.com id
	51/7D-11386-CE586E25; Mon, 27 Jan 2014 16:14:36 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390839274!1114400!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30852 invoked from network); 27 Jan 2014 16:14:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:14:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94856869"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:14:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:14:32 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W7opU-0004nG-7a;
	Mon, 27 Jan 2014 16:14:32 +0000
Date: Mon, 27 Jan 2014 16:14:31 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: George Dunlap <dunlapg@umich.edu>
Message-ID: <20140127161431.GE32713@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140120162943.GD11681@zion.uk.xensource.com>
	<CAFLBxZYrL1T93FD=Jjdg_pOGxGgE_Hr4vOasEHWgjB9VmuxxGA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZYrL1T93FD=Jjdg_pOGxGgE_Hr4vOasEHWgjB9VmuxxGA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 02:54:40PM +0000, George Dunlap wrote:
[...]
> > ---8<---
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..472f1df 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -49,6 +49,8 @@
> >  #define NR_SPECIAL_PAGES     8
> >  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
> >
> > +#define POD_VGA_HOLE_SIZE (0x20)
> > +
> >  static int modules_init(struct xc_hvm_build_args *args,
> >                          uint64_t vend, struct elf_binary *elf,
> >                          uint64_t *mstart_out, uint64_t *mend_out)
> > @@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
> >      if ( pod_mode )
> >      {
> >          /*
> > -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> > -         * adjust the PoD cache size so that domain tot_pages will be
> > -         * target_pages - 0x20 after this call.
> > +         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
> > +         * "hole".  Xen will adjust the PoD cache size so that domain
> > +         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
> > +         * this call.
> >           */
> > -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> > +        rc = xc_domain_set_pod_target(xch, dom,
> > +                                      target_pages - POD_VGA_HOLE_SIZE,
> >                                        NULL, NULL, NULL);
> >          if ( rc != 0 )
> >          {
> > @@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
> >
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = target_pages;
> > +
> > +        if ( pod_mode )
> > +            nr -= POD_VGA_HOLE_SIZE;
> > +
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> 
> Two things:
> 
> 1. This is broken because it doesn't claim pages for the PoD "cache".
> The PoD "cache" amounts to *all the pages that the domain will have
> allocated* -- there will be basically no pages allocated after this
> point.
> 
> Claim mode is trying to make creation of large guests fail early or be
> guaranteed to succeed.  For large guests, it's set_pod_target() that
> may take the long time, and it's there that things will fail if
> there's not enough memory.  By the time you get to actually setting up
> the p2m, you've already made it.
> 
> 2. I think the VGA_HOLE doesn't have anything to do with PoD.
> 
> Actually, it looks like the original code was wrong: correct me if I'm
> wrong, but xc_domain_claim_pages() wants the *total number of pages*,
> whereas nr_pages-cur_pages would give you the *number of additional
> pages*.
> 
> I think you need to claim nr_pages-VGA_HOLE_SIZE regardless of whether
> you're in PoD mode or not.  The initial code would claim 0xa0 pages
> too few; the new code will claim 0x20 pages too many in the non-PoD
> case.
> 
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > index 5f484a2..1e44ba3 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
> >          goto out;
> >      }
> >
> > -    /* disallow a claim not exceeding current tot_pages or above max_pages */
> > -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
> > +    /* disallow a claim below current tot_pages or above max_pages */
> > +    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
> >      {
> 
> This seems like a good interface change in any case -- there's no
> particular reason not to allow multiple calls with the same claim
> amount.  (Interface-wise, I don't see a good reason we couldn't allow
> the claim to be reduced as well; but that's probably more than we want
> to do at this point.)
> 
> So it seems like we should at least make the two fixes above:
> * Use nr_pages-VGA_HOLE_SIZE for the claim, regardless of whether PoD is enabled

You mean target_pages here, right? nr_pages is maxmem= while target_pages
is memory=. And from the face-to-face discussion we had this morning I
had the impression you meant target_pages.

In fact using nr_pages won't work because at that point d->max_pages is
capped to target_memory in the hypervisor.

> * Allow a claim equal to tot_pages
> 
> That will allow PoD guests to boot with claim mode enabled, although
> it will effectively be a noop.
> 
> The next question is whether we should try to make claim mode actually
> do the claim properly for PoD mode for 4.4.  It seems like just moving
> the claim call up before the xc_domain_set_target() call should work;
> I'm inclined to say that if that works, we should just do it.
> 

Agreed.

Wei.

> Thoughts?
> 
>  -George
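The page_alloc.c change under discussion, allowing a claim equal to the current allocation, can be illustrated with a toy standalone version of the check (simplified, hypothetical types; not the actual Xen domain structure or locking):

```c
/*
 * Toy model of domain_set_outstanding_pages() validation. The original
 * check rejected pages <= tot_pages; the proposed check only rejects
 * pages < tot_pages, so re-issuing a claim equal to the pages already
 * allocated becomes a no-op instead of an error.
 */
struct toy_domain {
    unsigned long tot_pages;  /* pages currently allocated */
    unsigned long max_pages;  /* upper bound for this domain */
};

/* Returns 0 if the claim is accepted, -1 otherwise. */
int toy_set_outstanding_pages(const struct toy_domain *d, unsigned long pages)
{
    /* disallow a claim below current tot_pages or above max_pages */
    if (pages < d->tot_pages || pages > d->max_pages)
        return -1;
    return 0;
}
```

With the old `<=` comparison, the first case below would have been rejected, which is exactly why a PoD guest whose claim equals tot_pages failed to boot under claim mode.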


From xen-devel-bounces@lists.xen.org Mon Jan 27 16:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oui-0005wo-2B; Mon, 27 Jan 2014 16:19:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7oug-0005wi-03
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 16:19:54 +0000
Received: from [193.109.254.147:41538] by server-6.bemta-14.messagelabs.com id
	11/4F-14958-92786E25; Mon, 27 Jan 2014 16:19:53 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390839591!162437!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15215 invoked from network); 27 Jan 2014 16:19:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:19:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94860365"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:19:43 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:19:43 -0500
Message-ID: <52E6871D.2050107@citrix.com>
Date: Mon, 27 Jan 2014 17:19:41 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127160914.GA23059@phenom.dumpdata.com>
In-Reply-To: <20140127160914.GA23059@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 17:09, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
>> I've at least identified two possible memory leaks in blkback, both
>> related to the shutdown path of a VBD:
>>
>> - We don't wait for any pending purge work to finish before cleaning
>>   the list of free_pages. The purge work will call put_free_pages and
>>   thus we might end up with pages being added to the free_pages list
>>   after we have emptied it.
>> - We don't wait for pending requests to end before cleaning persistent
>>   grants and the list of free_pages. Again this can add pages to the
>>   free_pages lists or persistent grants to the persistent_gnts
>>   red-black tree.
>>
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: Matt Rushton <mrushton@amazon.com>
>> Cc: Matt Wilson <msw@amazon.com>
>> Cc: Ian Campbell <Ian.Campbell@citrix.com>
>> ---
>> This should be applied after the patch:
>>
>> xen-blkback: fix memory leak when persistent grants are used
>
> Could you respin the series with the issues below fixed and
> have said patch as part of the series. That way not only does
> it have your SoB on it but it makes it easier to apply the patch
> for lazy^H^H^Hbusy maintainers and makes it clear that you had
> tested both of them.
>
> Also, please CC Jens Axboe on these patches.

Ack, will do once the comments below are sorted out.

>>
>> From Matt Rushton & Matt Wilson and backported to stable.
>>
>> I've been able to create and destroy ~4000 guests while doing heavy IO
>> operations with this patch on a 512M Dom0 without problems.
>> ---
>>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>>  2 files changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 30ef7b3..19925b7 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>>  				struct pending_req *pending_req);
>>  static void make_response(struct xen_blkif *blkif, u64 id,
>>  			  unsigned short op, int st);
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>>
>>  #define foreach_grant_safe(pos, n, rbtree, node) \
>>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
>> @@ -625,6 +626,12 @@ purge_gnt_list:
>>  			print_stats(blkif);
>>  	}
>>
>> +	/* Drain pending IO */
>> +	xen_blk_drain_io(blkif, true);
>> +
>> +	/* Drain pending purge work */
>> +	flush_work(&blkif->persistent_purge_work);
>> +
>>  	/* Free all persistent grant pages */
>>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
>> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>>  	return -EIO;
>>  }
>>
>> -static void xen_blk_drain_io(struct xen_blkif *blkif)
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>>  {
>>  	atomic_set(&blkif->drain, 1);
>>  	do {
>> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>>
>>  		if (!atomic_read(&blkif->drain))
>>  			break;
>> -	} while (!kthread_should_stop());
>> +	} while (!kthread_should_stop() || force);
>>  	atomic_set(&blkif->drain, 0);
>>  }
>>
>> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <=3D 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
>>  =

>> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *=
blkif,
>>  	 * issue the WRITE_FLUSH.
>>  	 */
>>  	if (drain)
>> -		xen_blk_drain_io(pending_req->blkif);
>> +		xen_blk_drain_io(pending_req->blkif, false);
>>  =

>>  	/*
>>  	 * If we have failed at this point, we need to undo the M2P override,
>> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkb=
ack/xenbus.c
>> index c2014a0..3c10281 100644
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
>> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t dom=
id)
>>  	blkif->persistent_gnts.rb_node =3D NULL;
>>  	spin_lock_init(&blkif->free_pages_lock);
>>  	INIT_LIST_HEAD(&blkif->free_pages);
>> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
> =

> Hm,

See comment below.

>>  	blkif->free_pages_num =3D 0;
>>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>>  =

>> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
>>  	if (!atomic_dec_and_test(&blkif->refcnt))
>>  		BUG();
>>  =

>> +	/* Make sure everything is drained before shutting down */
>> +	BUG_ON(blkif->persistent_gnt_c !=3D 0);
>> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) !=3D 0);
>> +	BUG_ON(blkif->free_pages_num !=3D 0);
>> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
> =

> You don't seem to put anything on this list? Or even declare this?
> Was there another patch in the series?

No, the list is already used in current code, but it is initialized only
before usage, now I need to make sure it's initialized even if not used, or:

BUG_ON(!list_empty(&blkif->persistent_purge_list));

Is going to fail.

I will resend this and replace the other (now useless) initialization
with a BUG_ON(!list_empty...

> =

>> +	BUG_ON(!list_empty(&blkif->free_pages));
>> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
>> +
>>  	/* Check that there is no request in use */
>>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
>>  		list_del(&req->free_list);
>> -- =

>> 1.7.7.5 (Apple Git-26)
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7oui-0005wo-2B; Mon, 27 Jan 2014 16:19:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W7oug-0005wi-03
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 16:19:54 +0000
Received: from [193.109.254.147:41538] by server-6.bemta-14.messagelabs.com id
	11/4F-14958-92786E25; Mon, 27 Jan 2014 16:19:53 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390839591!162437!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15215 invoked from network); 27 Jan 2014 16:19:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:19:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94860365"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:19:43 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:19:43 -0500
Message-ID: <52E6871D.2050107@citrix.com>
Date: Mon, 27 Jan 2014 17:19:41 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127160914.GA23059@phenom.dumpdata.com>
In-Reply-To: <20140127160914.GA23059@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 17:09, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
>> I've at least identified two possible memory leaks in blkback, both
>> related to the shutdown path of a VBD:
>>
>> - We don't wait for any pending purge work to finish before cleaning
>>   the list of free_pages. The purge work will call put_free_pages and
>>   thus we might end up with pages being added to the free_pages list
>>   after we have emptied it.
>> - We don't wait for pending requests to end before cleaning persistent
>>   grants and the list of free_pages. Again this can add pages to the
>>   free_pages lists or persistent grants to the persistent_gnts
>>   red-black tree.
>>
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
>>
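[Editor's note: the shutdown ordering described above can be modelled in plain
userspace C. This is a hedged sketch with made-up names (vbd_model, purge_work,
shutdown); it is not blkback code, only the race the commit message describes:
if pending purge work can still call put_free_pages() after the free_pages
list has been emptied, the re-added pages are leaked.]

```c
#include <assert.h>

/* Hypothetical model, not the real blkback structures. */
struct vbd_model {
    int free_pages;     /* pages currently on blkif->free_pages */
    int purge_pending;  /* pages a pending purge work would re-add */
};

/* Models persistent_purge_work calling put_free_pages(). */
static void purge_work(struct vbd_model *v)
{
    v->free_pages += v->purge_pending;
    v->purge_pending = 0;
}

/* Returns how many pages actually get freed at shutdown. */
static int shutdown(struct vbd_model *v, int flush_first)
{
    int freed;

    if (flush_first)
        purge_work(v);      /* models flush_work(&persistent_purge_work) */

    freed = v->free_pages;  /* models emptying the free_pages pool */
    v->free_pages = 0;

    if (!flush_first)
        purge_work(v);      /* too late: these pages are never freed */

    return freed;
}
```

With flush_first set, the final shrink sees every page; without it, the pages
re-added by the purge work remain behind after the pool was emptied.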
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: Matt Rushton <mrushton@amazon.com>
>> Cc: Matt Wilson <msw@amazon.com>
>> Cc: Ian Campbell <Ian.Campbell@citrix.com>
>> ---
>> This should be applied after the patch:
>>
>> xen-blkback: fix memory leak when persistent grants are used
>
> Could you respin the series with the issues below fixed and
> have said patch as part of the series. That way not only does
> it have your SoB on it but it makes it easier to apply the patch
> for lazy^H^H^Hbusy maintainers and makes it clear that you had
> tested both of them.
> =

> Also, please CC Jens Axboe on these patches.

Ack, will do once the comments below are sorted out.

>>
>> From Matt Rushton & Matt Wilson and backported to stable.
>>
>> I've been able to create and destroy ~4000 guests while doing heavy IO
>> operations with this patch on a 512M Dom0 without problems.
>> ---
>>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>>  2 files changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 30ef7b3..19925b7 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>>  				struct pending_req *pending_req);
>>  static void make_response(struct xen_blkif *blkif, u64 id,
>>  			  unsigned short op, int st);
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>>
>>  #define foreach_grant_safe(pos, n, rbtree, node) \
>>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
>> @@ -625,6 +626,12 @@ purge_gnt_list:
>>  			print_stats(blkif);
>>  	}
>>
>> +	/* Drain pending IO */
>> +	xen_blk_drain_io(blkif, true);
>> +
>> +	/* Drain pending purge work */
>> +	flush_work(&blkif->persistent_purge_work);
>> +
>>  	/* Free all persistent grant pages */
>>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
>> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>>  	return -EIO;
>>  }
>>
>> -static void xen_blk_drain_io(struct xen_blkif *blkif)
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>>  {
>>  	atomic_set(&blkif->drain, 1);
>>  	do {
>> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>>
>>  		if (!atomic_read(&blkif->drain))
>>  			break;
>> -	} while (!kthread_should_stop());
>> +	} while (!kthread_should_stop() || force);
>>  	atomic_set(&blkif->drain, 0);
>>  }
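[Editor's note: the new `force` parameter only changes the loop's exit
condition. A minimal userspace model of that behaviour follows; the names and
the plain ints standing in for atomics and completions are assumptions, not
the kernel code. With force == true the loop keeps waiting for in-flight
requests even after the thread has been asked to stop, which is what the
shutdown path needs.]

```c
#include <assert.h>
#include <stdbool.h>

struct blkif_model {
    int drain;            /* stands in for the atomic 'drain' flag */
    int inflight;         /* requests still pending */
    bool stop_requested;  /* stands in for kthread_should_stop() */
};

/* Returns the number of wait iterations performed. */
static int drain_io_model(struct blkif_model *b, bool force)
{
    int iterations = 0;

    b->drain = 1;
    do {
        /* One request completes per iteration, modelling the
         * wait_for_completion() wakeup in the real code. */
        if (b->inflight > 0)
            b->inflight--;
        iterations++;

        if (b->inflight == 0)
            break;
    } while (!b->stop_requested || force);
    b->drain = 0;
    return iterations;
}
```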
>>
>> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
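[Editor's note: the hunk above also caches pending_req->blkif in a local
before free_req() runs, since the request may be recycled the moment it is
freed. A hedged userspace sketch of that cache-then-free pattern, with made-up
type names, not the real blkback structures:]

```c
#include <assert.h>
#include <stddef.h>

struct blkif_m { int refcnt; };
struct req_m  { struct blkif_m *blkif; };

/* Models free_req(): the request goes back on a free list and may be
 * reused at once (here its fields are simply cleared). */
static void free_req_m(struct req_m *r)
{
    r->blkif = NULL;
}

static struct blkif_m *complete_req(struct req_m *r)
{
    struct blkif_m *blkif = r->blkif;  /* cache before freeing the request */

    free_req_m(r);
    blkif->refcnt--;  /* safe: only the cached pointer is used from here on */
    return blkif;
}
```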
>>
>> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>>  	 * issue the WRITE_FLUSH.
>>  	 */
>>  	if (drain)
>> -		xen_blk_drain_io(pending_req->blkif);
>> +		xen_blk_drain_io(pending_req->blkif, false);
>>
>>  	/*
>>  	 * If we have failed at this point, we need to undo the M2P override,
>> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
>> index c2014a0..3c10281 100644
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
>> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>>  	blkif->persistent_gnts.rb_node = NULL;
>>  	spin_lock_init(&blkif->free_pages_lock);
>>  	INIT_LIST_HEAD(&blkif->free_pages);
>> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
>
> Hm,

See comment below.

>>  	blkif->free_pages_num = 0;
>>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>>
>> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
>>  	if (!atomic_dec_and_test(&blkif->refcnt))
>>  		BUG();
>>
>> +	/* Make sure everything is drained before shutting down */
>> +	BUG_ON(blkif->persistent_gnt_c != 0);
>> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
>> +	BUG_ON(blkif->free_pages_num != 0);
>> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
>
> You don't seem to put anything on this list? Or even declare this?
> Was there another patch in the series?

No, the list is already used in the current code, but it is initialized
only right before use. Now I need to make sure it is initialized even if
it is never used, or:

BUG_ON(!list_empty(&blkif->persistent_purge_list));

is going to fail.

I will resend this and replace the other (now useless) initialization
with a BUG_ON(!list_empty...
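[Editor's note: to illustrate the point, a zero-filled allocation leaves the
list head's pointers NULL, and the kernel's list_empty() (head->next == head)
then reports "not empty", so the new BUG_ON would fire on a VBD that never
purged anything unless the head is always initialized. A minimal userspace
copy of the two list helpers shows this:]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Minimal userspace copy of the kernel's circular list-head helpers. */
struct list_head {
    struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
    h->next = h;
    h->prev = h;
}

static bool list_empty(const struct list_head *h)
{
    return h->next == h;
}
```

A kzalloc()ed blkif leaves next == NULL != head, so list_empty() stays false
until INIT_LIST_HEAD() runs.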

>
>> +	BUG_ON(!list_empty(&blkif->free_pages));
>> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
>> +
>>  	/* Check that there is no request in use */
>>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
>>  		list_del(&req->free_list);
>> --
>> 1.7.7.5 (Apple Git-26)
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:25:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p0H-00068f-W4; Mon, 27 Jan 2014 16:25:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7p0F-000685-Mh
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:25:39 +0000
Received: from [193.109.254.147:56139] by server-9.bemta-14.messagelabs.com id
	7F/68-13957-38886E25; Mon, 27 Jan 2014 16:25:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390839934!165108!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21733 invoked from network); 27 Jan 2014 16:25:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:25:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94863714"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:25:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W7p03-0004yN-GP; Mon, 27 Jan 2014 16:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:25:23 +0000
Message-ID: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Xen-devel Patch 0/2] Prevent xc_domain_restore()
	returning success despite errors
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When looking through the XenServer patch queue, I noticed two bugfixes which
really should make their way upstream.

Both of these errors have been discovered by xc_domain_restore() returning
success after suffering a fatal error during migration, leading to the
toolstack believing that the VM migrated successfully.

Regarding 4.4:

I know this is quite late in the 4.4 cycle, and I apologise for not
noticing and upstreaming earlier, but I believe that these two fixes should
be considered for inclusion into 4.4.

They are both real errors found by XenRT causing mismanagement of migrated
domains. Both patches are small, concise, and (I believe) well explained.

The use of various '*rc' variables can be easily viewed using `grep "rc "
xc_domain_restore.c`.  The risks are that I have made a mistake which could
result in further migration errors.  However, that risk is mitigated by
functionally-similar fixes being present in XenServer, and hopefully by the
obvious nature of the patches.  Furthermore, the changes themselves are for
error paths.

On the other hand, if they are deemed too risky (or buggy given review), it
will not be the end of the world if not included, although I hope that is not
the case.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:25:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p0F-000683-5T; Mon, 27 Jan 2014 16:25:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7p0D-00067v-VP
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:25:38 +0000
Received: from [193.109.254.147:10233] by server-10.bemta-14.messagelabs.com
	id 50/A7-20752-18886E25; Mon, 27 Jan 2014 16:25:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390839934!165108!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21558 invoked from network); 27 Jan 2014 16:25:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:25:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94863705"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:25:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W7p03-0004yN-H0; Mon, 27 Jan 2014 16:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:25:24 +0000
Message-ID: <1390839925-28088-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Patch 1/2] tools/libxc: goto correct label on error
	paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Both of these "goto finish;" statements are actually errors, and need to "goto
out;" instead, which will correctly destroy the domain and return an error,
rather than trying to finish the migration (and in at least one scenario,
return success).
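[Editor's note: the difference between the two labels can be sketched with a
toy version of the single-exit pattern; restore_model is a made-up name and
the real function is far larger. Errors must jump past the step that declares
success ("finish") straight to the cleanup path ("out").]

```c
#include <assert.h>

static int restore_model(int buffering_fails)
{
    int rc = 1;      /* pessimistic: failure until proven otherwise */

    if (buffering_fails)
        goto out;    /* NOT "goto finish", which would report success */

    /* finish: the tail of the migration completed normally */
    rc = 0;
out:
    return rc;
}
```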

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxc/xc_domain_restore.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index ca2fb51..5ba47d7 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -1778,14 +1778,14 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
 
     if ( pagebuf_get(xch, ctx, &pagebuf, io_fd, dom) ) {
         PERROR("error when buffering batch, finishing");
-        goto finish;
+        goto out;
     }
     memset(&tmptail, 0, sizeof(tmptail));
     tmptail.ishvm = hvm;
     if ( buffer_tail(xch, ctx, &tmptail, io_fd, max_vcpu_id, vcpumap,
                      ext_vcpucontext, vcpuextstate, vcpuextstate_size) < 0 ) {
         ERROR ("error buffering image tail, finishing");
-        goto finish;
+        goto out;
     }
     tailbuf_free(&tailbuf);
     memcpy(&tailbuf, &tmptail, sizeof(tailbuf));
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:25:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p0G-00068N-IY; Mon, 27 Jan 2014 16:25:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7p0F-000681-1y
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:25:39 +0000
Received: from [193.109.254.147:52859] by server-12.bemta-14.messagelabs.com
	id 18/4E-13681-28886E25; Mon, 27 Jan 2014 16:25:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390839934!165108!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21675 invoked from network); 27 Jan 2014 16:25:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94863715"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:25:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W7p03-0004yN-HR; Mon, 27 Jan 2014 16:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:25:25 +0000
Message-ID: <1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success from
	xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
most part is left alone until success, at which point it is set to 0.

There is a separate 'frc' which for the most part is used to check function
calls, keeping errors separate from 'rc'.

For a toolstack which sets callbacks->toolstack_restore(), if that function
returns 0 then 'rc' is overwritten with 0, so any subsequent error which sends
code flow to "out:" results in the migration being declared a success.

For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
'frc', even though their use of 'rc' is currently safe.
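[Editor's note: a minimal sketch of the rc/frc separation; the helpers are
hypothetical and only the control flow matches the description above. Keeping
helper return values in 'frc' means 'rc' still holds the pessimistic value 1
when a later failure jumps to out, so the caller sees an error instead of a
bogus success.]

```c
#include <assert.h>

static int toolstack_restore_ok(void) { return 0; }   /* helper succeeds */
static int later_step_fails(void)     { return -1; }  /* later fatal error */

static int restore_with_frc(void)
{
    int rc = 1;  /* overall status: failure until the very end */
    int frc;     /* per-function-call status */

    frc = toolstack_restore_ok();
    if (frc < 0)
        goto out;

    if (later_step_fails() < 0)
        goto out;         /* rc is still 1: correctly reported as failure */

    rc = 0;               /* success only once everything has passed */
out:
    return rc;
}
```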

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

Regarding 4.4: If the two "for consistency" changes to
xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
without affecting the bugfix nature of the patch, but I would argue that
leaving some examples of "rc = function_call()" leaves a bad precedent which
is likely to lead to similar bugs in the future.
---
 tools/libxc/xc_domain_restore.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 5ba47d7..817054d 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -2240,9 +2240,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         memcpy(ctx->live_p2m, ctx->p2m, dinfo->p2m_size * sizeof(xen_pfn_t));
     munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
 
-    rc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
-                            console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
+                             console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
@@ -2257,16 +2257,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     {
         if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
         {
-            rc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
-                        callbacks->data);
+            frc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
+                                               callbacks->data);
             free(tdata.data);
-            if ( rc < 0 )
+            if ( frc < 0 )
             {
                 PERROR("error calling toolstack_restore");
                 goto out;
             }
         } else {
-            rc = -1;
             ERROR("toolstack data available but no callback provided\n");
             free(tdata.data);
             goto out;
@@ -2326,9 +2325,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         goto out;
     }
 
-    rc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
-                                console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
+                                 console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:25:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p0F-000683-5T; Mon, 27 Jan 2014 16:25:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7p0D-00067v-VP
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:25:38 +0000
Received: from [193.109.254.147:10233] by server-10.bemta-14.messagelabs.com
	id 50/A7-20752-18886E25; Mon, 27 Jan 2014 16:25:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390839934!165108!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21558 invoked from network); 27 Jan 2014 16:25:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:25:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94863705"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:25:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W7p03-0004yN-H0; Mon, 27 Jan 2014 16:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:25:24 +0000
Message-ID: <1390839925-28088-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Patch 1/2] tools/libxc: goto correct label on error
	paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Both of these "goto finish;" statements are on error paths and need to be
"goto out;" instead, which will correctly destroy the domain and return an
error, rather than trying to finish the migration (and, in at least one
scenario, returning success).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxc/xc_domain_restore.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index ca2fb51..5ba47d7 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -1778,14 +1778,14 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
 
     if ( pagebuf_get(xch, ctx, &pagebuf, io_fd, dom) ) {
         PERROR("error when buffering batch, finishing");
-        goto finish;
+        goto out;
     }
     memset(&tmptail, 0, sizeof(tmptail));
     tmptail.ishvm = hvm;
     if ( buffer_tail(xch, ctx, &tmptail, io_fd, max_vcpu_id, vcpumap,
                      ext_vcpucontext, vcpuextstate, vcpuextstate_size) < 0 ) {
         ERROR ("error buffering image tail, finishing");
-        goto finish;
+        goto out;
     }
     tailbuf_free(&tailbuf);
     memcpy(&tailbuf, &tmptail, sizeof(tailbuf));
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:25:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:25:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p0G-00068N-IY; Mon, 27 Jan 2014 16:25:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7p0F-000681-1y
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:25:39 +0000
Received: from [193.109.254.147:52859] by server-12.bemta-14.messagelabs.com
	id 18/4E-13681-28886E25; Mon, 27 Jan 2014 16:25:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390839934!165108!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21675 invoked from network); 27 Jan 2014 16:25:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:25:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,729,1384300800"; d="scan'208";a="94863715"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:25:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1W7p03-0004yN-HR; Mon, 27 Jan 2014 16:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:25:25 +0000
Message-ID: <1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success from
	xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
most part is left alone until success, at which point it is set to 0.

There is a separate 'frc' which for the most part is used to check function
calls, keeping errors separate from 'rc'.

For a toolstack which sets callbacks->toolstack_restore(), and the function
returns 0, any subsequent error will end up with code flow going to "out;",
resulting in the migration being declared a success.

For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
'frc', even though their use of 'rc' is currently safe.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

Regarding 4.4: If the two "for consistency" changes to
xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
without affecting the bugfix nature of the patch, but I would argue that
leaving some examples of "rc = function_call()" sets a bad precedent which
is likely to lead to similar bugs in the future.
---
 tools/libxc/xc_domain_restore.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 5ba47d7..817054d 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -2240,9 +2240,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         memcpy(ctx->live_p2m, ctx->p2m, dinfo->p2m_size * sizeof(xen_pfn_t));
     munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
 
-    rc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
-                            console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
+                             console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
@@ -2257,16 +2257,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     {
         if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
         {
-            rc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
-                        callbacks->data);
+            frc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
+                                               callbacks->data);
             free(tdata.data);
-            if ( rc < 0 )
+            if ( frc < 0 )
             {
                 PERROR("error calling toolstack_restore");
                 goto out;
             }
         } else {
-            rc = -1;
             ERROR("toolstack data available but no callback provided\n");
             free(tdata.data);
             goto out;
@@ -2326,9 +2325,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         goto out;
     }
 
-    rc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
-                                console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
+                                 console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:29:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:29:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7p3y-0006l2-NE; Mon, 27 Jan 2014 16:29:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7p3x-0006ku-9y
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 16:29:29 +0000
Received: from [85.158.143.35:41789] by server-1.bemta-4.messagelabs.com id
	24/27-02132-86986E25; Mon, 27 Jan 2014 16:29:28 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390840167!1119948!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21275 invoked from network); 27 Jan 2014 16:29:27 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:29:27 -0000
Received: by mail-we0-f175.google.com with SMTP id p61so5418325wes.6
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 08:29:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=ZX3hjJIPSN3ldXCF0DpMhnimZfRxRvDxLS9XHgiFEAk=;
	b=p1QGzGVTw6JcZhjPFk7N6Hq1LuB+9UY55AEAp+BOM6ezjizUDyvoQvOVTVzrgO4HKa
	ptr5uXbSRPJxJU8M2+7XD+XkNTxF58/7Wh4GYWcCTKPzNQNVYdBLA7zUP9301QBKizun
	oDdxSXOWus/WJV3dtjjmMK3nnzZFxJJYNV+5IlLIyeygzPpCrVKaApaP0Z1T0qagW12p
	DuvyulJ1+zqT384Q0iSAk9uNfIwHRSyyiGAZDEny7Vpy1b6PBoSM+YAsHKxsb6eCCBsP
	hdachChYPOd6osN6KZQqpgXyOtCr016xtTnqzXN34eXoIwkmtL4n5dEfQ3L8da6qBj1r
	YDBw==
MIME-Version: 1.0
X-Received: by 10.180.19.130 with SMTP id f2mr12644576wie.6.1390840167486;
	Mon, 27 Jan 2014 08:29:27 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 08:29:27 -0800 (PST)
In-Reply-To: <1889333978.20140124143602@eikelenboom.it>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
Date: Mon, 27 Jan 2014 16:29:27 +0000
X-Google-Sender-Auth: Z-hXPokrX-mXc2ICA2BFs45D7Zg
Message-ID: <CAFLBxZY6FRvmnc4fYkWGQjYoiaiZjykj1jNADgbJv61_hosB6g@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Sander Eikelenboom <linux@eikelenboom.it>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 24, 2014 at 1:36 PM, Sander Eikelenboom
<linux@eikelenboom.it> wrote:
>
> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>
>>> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>>> > nonetheless.
>>>
>>> As usual ;-)
>
>> Ha!
>> ..snip..
>>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>>>
>>> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>>> > I totally forgot about it !
>>>
>>> Got a link to that patchset ?
>
>> https://lkml.org/lkml/2013/12/13/315
>
>>> I at least could give it a spin .. you never know when fortune is on your side :-)
>
>> It is also at this git tree:
>
>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>> want to merge it in your current Linus tree.
>
>> Thank you!
>
>
> Hi Konrad,
>
> Just got time to test this some more: merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
> seems to help with my problem; I'm now capable of using:
> - xl pci-detach
> - xl pci-assignable-remove
> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind

Out of curiosity, have you tried adding the -r option to pci-assignable-remove?

xl pci-assignable-add will store the original driver to which the
device was bound in xenstore; if you do "xl pci-assignable-remove -r"
it will attempt to re-bind it to that driver.

See more information here:

http://blog.xen.org/index.php/2012/06/04/xen-4-2-preview-xl-and-pci-pass-through/

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:42:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7pGD-0007JK-PZ; Mon, 27 Jan 2014 16:42:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1W7pGD-0007JF-5n
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 16:42:09 +0000
Received: from [85.158.137.68:10777] by server-17.bemta-3.messagelabs.com id
	27/8C-15965-06C86E25; Mon, 27 Jan 2014 16:42:08 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390840927!10809159!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12451 invoked from network); 27 Jan 2014 16:42:07 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Jan 2014 16:42:07 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55385 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1W7p3j-0004dp-NR; Mon, 27 Jan 2014 17:29:15 +0100
Date: Mon, 27 Jan 2014 17:42:05 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <121674995.20140127174205@eikelenboom.it>
To: George Dunlap <dunlapg@umich.edu>
In-Reply-To: <CAFLBxZY6FRvmnc4fYkWGQjYoiaiZjykj1jNADgbJv61_hosB6g@mail.gmail.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<CAFLBxZY6FRvmnc4fYkWGQjYoiaiZjykj1jNADgbJv61_hosB6g@mail.gmail.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, January 27, 2014, 5:29:27 PM, you wrote:

> On Fri, Jan 24, 2014 at 1:36 PM, Sander Eikelenboom
> <linux@eikelenboom.it> wrote:
>>
>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>>
>>>> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>>>> > nonetheless.
>>>>
>>>> As usual ;-)
>>
>>> Ha!
>>> ..snip..
>>>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>>>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>>>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>>>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>>>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>>>>
>>>> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>>>> > I totally forgot about it !
>>>>
>>>> Got a link to that patchset ?
>>
>>> https://lkml.org/lkml/2013/12/13/315
>>
>>>> I at least could give it a spin .. you never know when fortune is on your side :-)
>>
>>> It is also at this git tree:
>>
>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>>> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>>> want to merge it in your current Linus tree.
>>
>>> Thank you!
>>
>>
>> Hi Konrad,
>>
>> Just got time to test this some more: merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>> seems to help with my problem; I'm now capable of using:
>> - xl pci-detach
>> - xl pci-assignable-remove
>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind

> Out of curiosity, have you tried adding the -r option to pci-assignable-remove?

> xl pci-assignable-add will store the original driver to which the
> device was bound in xenstore; if you do "xl pci-assignable-remove -r"
> it will attempt to re-bind it to that driver.

> See more information here:

> http://blog.xen.org/index.php/2012/06/04/xen-4-2-preview-xl-and-pci-pass-through/

Hi George,

Yes I did, but since I seize the device at boot it was never bound to a
driver, so that isn't useful in this case (it doesn't know what to rebind to).


But this problem seems fixed after applying the first 4 commits in konrad's pci reset branch at:

http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git devel/xen-pciback.slot_and_bus.v0

Applying the last commit creates more problems than it fixes at the moment
(hung tasks), so that one clearly needs more polishing.

So it's clearly a problem in the kernel, and in pciback specifically.

--
Sander


>  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> <linux@eikelenboom.it> wrote:
>>
>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>>
>>>> > Wow. You just walked into a pile of bugs, didn't you? And on Friday,
>>>> > nonetheless.
>>>>
>>>> As usual ;-)
>>
>>> Ha!
>>> ..snip..
>>>> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>>>> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>>>> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>>>> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>>>> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>>>>
>>>> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>>>> > I totally forgot about it !
>>>>
>>>> Got a link to that patchset ?
>>
>>> https://lkml.org/lkml/2013/12/13/315
>>
>>>> I at least could give it a spin .. you never know when fortune is on your side :-)
>>
>>> It is also at this git tree:
>>
>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>>> branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>>> want to merge it in your current Linus tree.
>>
>>> Thank you!
>>
>>
>> Hi Konrad,
>>
>> Just got time to test this some more; merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>> seems to help with my problem. I'm now capable of using:
>> - xl pci-detach
>> - xl pci-assignable-remove
>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind

> Out of curiosity, have you tried adding the -r option to pci-assignable-remove?

> xl pci-assignable-add will store the original driver to which the
> device was bound in xenstore; if you do "xl pci-assignable-remove -r"
> it will attempt to re-bind it to that driver.

> See more information here:

> http://blog.xen.org/index.php/2012/06/04/xen-4-2-preview-xl-and-pci-pass-through/

Hi George,

Yes, I did, but since I seize the device at boot it was never bound to a driver, so
that isn't useful in this case (it doesn't know what to rebind to).
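
[Editor's note: a minimal sketch of the manual rebind path discussed in this thread (detach, make non-assignable, then hand the device back to its real driver via sysfs). The BDF `0000:08:00.0`, the driver `e1000e`, and the domain id are placeholders, not taken from Sander's report; the sysfs root is a parameter only so the sketch can be exercised outside a real Xen host.]

```shell
#!/bin/sh
# Sketch: manually rebind a PCI device that pciback seized at boot,
# the case where `xl pci-assignable-remove -r` has no stored driver
# in xenstore to rebind to. All names below are placeholders.
rebind_pci() {
    sysfs="$1"; bdf="$2"; driver="$3"
    # Release the device from pciback, if pciback still holds it:
    if [ -e "$sysfs/bus/pci/drivers/pciback/$bdf" ]; then
        printf '%s' "$bdf" > "$sysfs/bus/pci/drivers/pciback/unbind"
    fi
    # Bind it to the real driver -- the step that cannot happen
    # automatically when no original driver was recorded:
    printf '%s' "$bdf" > "$sysfs/bus/pci/drivers/$driver/bind"
}

# Typical use on a real host (placeholders):
#   xl pci-detach <domid> 0000:08:00.0
#   xl pci-assignable-remove 0000:08:00.0
#   rebind_pci /sys 0000:08:00.0 e1000e
```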


But this problem seems to be fixed after applying the first 4 commits in Konrad's PCI reset branch at:

http://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git devel/xen-pciback.slot_and_bus.v0

Applying the last commit creates more problems than it fixes at the moment (hung tasks), so that
one clearly needs more polishing. 

So it's clearly a problem in the kernel and pciback specifically.

--
Sander


>  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 16:49:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 16:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7pMo-0007ey-PV; Mon, 27 Jan 2014 16:48:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7pMm-0007du-VV
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 16:48:57 +0000
Received: from [85.158.137.68:21503] by server-2.bemta-3.messagelabs.com id
	2D/D5-17329-8FD86E25; Mon, 27 Jan 2014 16:48:56 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390841332!11638296!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22216 invoked from network); 27 Jan 2014 16:48:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 16:48:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94873469"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 16:48:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 11:48:50 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7pMf-0005La-SQ;
	Mon, 27 Jan 2014 16:48:49 +0000
Message-ID: <52E68DED.2020105@eu.citrix.com>
Date: Mon, 27 Jan 2014 16:48:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <20140114185238.GJ1696@perard.uk.xensource.com>
	<1389778510.12434.118.camel@kazak.uk.xensource.com>
	<20140115143504.GL1696@perard.uk.xensource.com>
	<1389803256.3793.95.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401161539350.21510@kaball.uk.xensource.com>
	<20140116155055.GN1696@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401161551200.21510@kaball.uk.xensource.com>
	<20140117121350.GA16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171313070.21510@kaball.uk.xensource.com>
	<20140117134315.GB16586@perard.uk.xensource.com>
	<alpine.DEB.2.02.1401171344400.21510@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1401241931270.15917@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401241931270.15917@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PULL] qemu-xen stable update to 1.6.2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/24/2014 07:34 PM, Stefano Stabellini wrote:
> On Fri, 17 Jan 2014, Stefano Stabellini wrote:
>> On Fri, 17 Jan 2014, Anthony PERARD wrote:
>>> On Fri, Jan 17, 2014 at 01:17:55PM +0000, Stefano Stabellini wrote:
>>>> On Fri, 17 Jan 2014, Anthony PERARD wrote:
>>>>> On Thu, Jan 16, 2014 at 03:51:42PM +0000, Stefano Stabellini wrote:
>>>>>> On Thu, 16 Jan 2014, Anthony PERARD wrote:
>>>>>>> On Thu, Jan 16, 2014 at 03:42:17PM +0000, Stefano Stabellini wrote:
>>>>>>>> On Wed, 15 Jan 2014, Ian Campbell wrote:
>>>>>>>>> On Wed, 2014-01-15 at 14:35 +0000, Anthony PERARD wrote:
>>>>>>>>>> On Wed, Jan 15, 2014 at 09:35:10AM +0000, Ian Campbell wrote:
>>>>>>>>>>> On Tue, 2014-01-14 at 18:52 +0000, Anthony PERARD wrote:
>>>>>>>>>>>> There is an update of QEMU 1.6, I have done a merge and put that in a tree:
>>>>>>>>>>>> git://xenbits.xen.org/people/aperard/qemu-dm.git  merge-1.6.2
>>>>>>>>>>> Based on the above I have no idea whether a freeze exception should be
>>>>>>>>>>> granted for this, so my default answer is no. I'm not sure what else you
>>>>>>>>>>> could have expected.
>>>>>>>>>>>
>>>>>>>>>>> If you think there are changes here which should be in 4.4.0 then please
>>>>>>>>>>> enumerate all changes included in this merge which have any relation to
>>>>>>>>>>> Xen and their potential impact on the release.
>>>>>>>>>> I have a list of the changes here that have a potential impact on Xen, with
>>>>>>>>>> the ones that I think are quite important at the beginning. Either the
>>>>>>>>>> commit title speaks for itself or I added a small description of what is
>>>>>>>>>> affected.
>>>>>>>>> Thanks but there's not a lot here for me to go on WRT making a decision
>>>>>>>>> on a freeze exception. Did you refer to
>>>>>>>>> http://wiki.xen.org/wiki/Xen_Roadmap/4.4#Exception_guidelines_for_after_the_code_freeze
>>>>>>>>> like I said? A freeze exception needs an analysis of benefits and risks,
>>>>>>>>> not the very briefest words you can possibly manage.
>>>>>>>>>
>>>>>>>>> Anyway, it appears this is a grab bag of things we might want and misc
>>>>>>>>> fixes which are perhaps nice to have; I'm nowhere near comfortable
>>>>>>>>> giving it a blanket exemption based on what you've presented here, or
>>>>>>>>> even of cherry picking what might be the important ones. If you think
>>>>>>>>> any or all of it is actually important for 4.4 please make a proper case
>>>>>>>>> for inclusion, either of the aggregate or of the individual changes.
>>>>>>>> Anthony, did you simply update the tree by pulling from the upstream 1.6
>>>>>>>> stable tree?
>>>>>>> Yes, a simple merge.
>>>>>>>
>>>>>>>> I also assume that you tested at the very least the basic
>>>>>>>> PV and HVM configurations?
>>>>>>> :(, no, I haven't tried PV. But I did try HVM.
>>>>>>>
>>>>>>> There is one thing that I may want to try: migration from the
>>>>>>> previous version of Xen. There is one patch that changes (fixes?) that.
>>>>>> Please do and let me know if it works as expected.
>>>>> I have tried a PV guest; it does work fine.
>>>>>
>>>>> I also tried a migration (xl save/restore) from Xen 4.3.1 to Xen 4.4 with
>>>>> both the current qemu-xen tree and with the merge of 1.6.2, but the
>>>>> migration fails in both cases because of the same error reported by qemu
>>>>> (Unknown savevm section or instance '0000:02.0/cirrus_vga' 0). I did not
>>>>> investigate that. Might just be an issue with my compile script ...
>>>>> (using the wrong qemu-xen tree).
>>>> It is important that we identify what is the cause of the problem.
>>>> Especially if you think that it could be a "compile script" issue,
>>>> because I imagine that if it is, it might invalidate your previous
>>>> positive tests too.
>>> I was always compiling with the master branch of qemu-xen, so I had
>>> Xen 4.3 with QEMU 1.6 instead of 1.3. So it only invalidates the
>>> migration test.
>> OK, good. Can you double-check how migration from Xen 4.3 with QEMU 1.3
>> works?
> I tested migration myself: although the update fixes a couple of
> problems with migration from 1.3, it also introduces a new one, that
> fortunately is fixed in QEMU 1.7.0.
>
> I have a branch based on 1.6.2 plus the backported fix from 1.7.0 that
> works well, I recommend getting a release exception for it and pushing
> it as soon as possible.

Sorry, didn't catch the "I recommend getting a release exception". The 
qemu issues (including the migration one) are definitely the main 
blockers to the release at the moment, so:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:14:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7plK-00006R-8v; Mon, 27 Jan 2014 17:14:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7plI-00006M-Pm
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:14:16 +0000
Received: from [85.158.143.35:11368] by server-3.bemta-4.messagelabs.com id
	8F/13-32360-7E396E25; Mon, 27 Jan 2014 17:14:15 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390842855!1130289!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11554 invoked from network); 27 Jan 2014 17:14:15 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:14:15 -0000
Received: by mail-we0-f174.google.com with SMTP id x55so5651939wes.33
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:14:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=XXN3Brzhm3Usqq5KSMfDutkb69hojlU1uyJ2u3RNBQU=;
	b=GkjrVYtniNormdXuTb34S8Bo3eRfvIi4c1w1ie5W5ByxLSH0nsI4NLOzzfyN/aQNnX
	O3W/gVKF/fVHjbvpoNV+9QdfkuIhPYmyYQWH354wwEqped+agFqCRlju6XjR5q/bGM2m
	ss1SxAD1CDBDh/bVDutxX9tTH6ERLjsg7OUhR7WkzzDen/tg6R++A6XLRhvFJP2bJD0w
	7HVw1orWWFj8RXaI9hPtsVgYXCBIP1nheb6EZN/8cMWEGy5RxEi/ptF0IkJYiBcQ33VJ
	CxEylHNaX6rLC6SxF8S1gkLu26zLEujUoJKo2FkbNx0L+Nb2PVhRBOr4Iq7mq3abmyDj
	foyw==
MIME-Version: 1.0
X-Received: by 10.180.165.15 with SMTP id yu15mr12808998wib.28.1390842855032; 
	Mon, 27 Jan 2014 09:14:15 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 09:14:14 -0800 (PST)
In-Reply-To: <1390244796.2322.6.camel@astar.houby.net>
References: <1390244796.2322.6.camel@astar.houby.net>
Date: Mon, 27 Jan 2014 17:14:14 +0000
X-Google-Sender-Auth: CpoKK3GVDEWiJtEAI2YWDwiB7fM
Message-ID: <CAFLBxZbF5-am6VhWKBD68hna2FxdrbK12f4CG0sA7gYbOyDfDA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: ehouby@yahoo.com
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it Disable IOMMU if no southbridge
thanks

On Mon, Jan 20, 2014 at 7:06 PM, Eric Houby <ehouby@yahoo.com> wrote:
> xen-devel list,
>
> Testing the xen 4.4 RC2 RPMs on Fedora 20 caused xen to crash at boot.
> Screen shot of the crash is attached.  Hardware is a Gigabyte
> GA-890FXA-UD5.  Adding iommu=no-amd-iommu-perdev-intremap to xen command
> line allows the system to boot as expected.
>
> Thanks,
>
> Eric
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
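
[Editor's note: for illustration only, the workaround Eric describes is typically applied on the Xen (multiboot) line of the boot loader entry; the file paths and kernel versions below are placeholders, not taken from his report.]

```
# Illustrative grub.cfg fragment (paths/versions are placeholders):
multiboot /boot/xen.gz iommu=no-amd-iommu-perdev-intremap
module /boot/vmlinuz-3.12.8 root=/dev/sda1 ro
module /boot/initramfs-3.12.8.img
```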

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:19:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ppk-0000PD-I7; Mon, 27 Jan 2014 17:18:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7ppj-0000Mj-77
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:18:51 +0000
Received: from [85.158.137.68:13236] by server-13.bemta-3.messagelabs.com id
	E6/52-28603-AF496E25; Mon, 27 Jan 2014 17:18:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390843128!10455387!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3966 invoked from network); 27 Jan 2014 17:18:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:18:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94886326"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 17:18:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 12:18:47 -0500
Message-ID: <1390843126.12230.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 27 Jan 2014 17:18:46 +0000
In-Reply-To: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 18:28 +0000, Andrew Cooper wrote:
> A physdev_op is a two-argument hypercall, taking a command parameter and an
> optional pointer to a structure.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>

This hypercall is unused in minios and stubdoms I think? (Trying to
gauge how critical the error is).

> ---
>  extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
>  extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> index ef52ecd..dcfbe41 100644
> --- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> +++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
> @@ -255,9 +255,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int
> diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> index 513d74e..7083763 100644
> --- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> +++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
> @@ -256,9 +256,9 @@ HYPERVISOR_console_io(
>  
>  static inline int
>  HYPERVISOR_physdev_op(
> -	void *physdev_op)
> +	int cmd, void *physdev_op)
>  {
> -	return _hypercall1(int, physdev_op, physdev_op);
> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>  }
>  
>  static inline int



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:21:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7ps5-0000bi-1q; Mon, 27 Jan 2014 17:21:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W7ps3-0000bX-4u
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:21:15 +0000
Received: from [193.109.254.147:65218] by server-9.bemta-14.messagelabs.com id
	89/99-13957-A8596E25; Mon, 27 Jan 2014 17:21:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390843272!172875!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20607 invoked from network); 27 Jan 2014 17:21:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:21:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94887168"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 17:21:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 12:21:10 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W7prx-0005q7-CP;
	Mon, 27 Jan 2014 17:21:09 +0000
Message-ID: <52E6956C.6060100@citrix.com>
Date: Mon, 27 Jan 2014 17:20:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
	<1390843126.12230.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1390843126.12230.46.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 17:18, Ian Campbell wrote:
> On Fri, 2014-01-24 at 18:28 +0000, Andrew Cooper wrote:
>> A physdev_op is a two-argument hypercall, taking a command parameter and an
>> optional pointer to a structure.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>
> This hypercall is unused in minios and stubdoms I think? (Trying to
> gauge how critical the error is).

Correct.  I suppose it is more of a "nice to fix" than "must fix" at
this stage, although I was quite surprised that I needed to fix it.

~Andrew

>
>> ---
>>  extras/mini-os/include/x86/x86_32/hypercall-x86_32.h |    4 ++--
>>  extras/mini-os/include/x86/x86_64/hypercall-x86_64.h |    4 ++--
>>  2 files changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
>> index ef52ecd..dcfbe41 100644
>> --- a/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
>> +++ b/extras/mini-os/include/x86/x86_32/hypercall-x86_32.h
>> @@ -255,9 +255,9 @@ HYPERVISOR_console_io(
>>  
>>  static inline int
>>  HYPERVISOR_physdev_op(
>> -	void *physdev_op)
>> +	int cmd, void *physdev_op)
>>  {
>> -	return _hypercall1(int, physdev_op, physdev_op);
>> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>>  }
>>  
>>  static inline int
>> diff --git a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
>> index 513d74e..7083763 100644
>> --- a/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
>> +++ b/extras/mini-os/include/x86/x86_64/hypercall-x86_64.h
>> @@ -256,9 +256,9 @@ HYPERVISOR_console_io(
>>  
>>  static inline int
>>  HYPERVISOR_physdev_op(
>> -	void *physdev_op)
>> +	int cmd, void *physdev_op)
>>  {
>> -	return _hypercall1(int, physdev_op, physdev_op);
>> +	return _hypercall2(int, physdev_op, cmd, physdev_op);
>>  }
>>  
>>  static inline int
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:25:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7pwY-0000me-FP; Mon, 27 Jan 2014 17:25:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W7pwX-0000mZ-M8
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:25:53 +0000
Received: from [85.158.137.68:22716] by server-12.bemta-3.messagelabs.com id
	1D/E6-20055-0A696E25; Mon, 27 Jan 2014 17:25:52 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390843551!10456822!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5748 invoked from network); 27 Jan 2014 17:25:52 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Jan 2014 17:25:52 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1W7q0Y-00032v-Qb; Mon, 27 Jan 2014 17:30:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1390843802.11711@bugs.xenproject.org>
References: <1390244796.2322.6.camel@astar.houby.net>
	<CAFLBxZbF5-am6VhWKBD68hna2FxdrbK12f4CG0sA7gYbOyDfDA@mail.gmail.com>
In-Reply-To: <CAFLBxZbF5-am6VhWKBD68hna2FxdrbK12f4CG0sA7gYbOyDfDA@mail.gmail.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 27 Jan 2014 17:30:02 +0000
Subject: [Xen-devel] Processed: Re:  [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #37 rooted at `<1390244796.2322.6.camel@astar.houby.net>'
Title: `Re: [Xen-devel] [TestDay] Xen crash on boot'
> title it Disable IOMMU if no southbridge
Set title for #37 to `Disable IOMMU if no southbridge'
> thanks
Finished processing.

Modified/created Bugs:
 - 37: http://bugs.xenproject.org/xen/bug/37 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7q49-0001GC-PQ; Mon, 27 Jan 2014 17:33:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7q47-0001G5-UG
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:33:44 +0000
Received: from [85.158.139.211:32930] by server-7.bemta-5.messagelabs.com id
	D2/28-04824-77896E25; Mon, 27 Jan 2014 17:33:43 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390844022!12248377!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20766 invoked from network); 27 Jan 2014 17:33:42 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:33:42 -0000
Received: by mail-we0-f169.google.com with SMTP id u57so5696762wes.28
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:33:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=mhbeUsMqP7MrSpCNKZXKB56tghu+M0YDYMm4AWzLjR0=;
	b=P9IwJBiltd3Xt70EK/eAfrfMwOu4r4SUlMimRR0fliVTxlnMP1H+Nhfo/qw7df4dHJ
	xn6WQ0hQFl6/ZbAG1GIZktcaVV6ZVkC1q3RBfG7doK0kW+PDQ9Rw6ejYuHjypX6P33R+
	3jJOMDc8AZ0amWv9+Z60TyIYNWXZLQXZjPXvVKw8PH8ppz/rG6wB1Kn1USNYIL61LodI
	Bfc2vSf0n0y4/JyheguFj1OFhJ6v1zu5gE92Ekl8KBf03DIrsB0OoeLWFkq1sXTCk8ns
	NhL/Vr/m2YYfUhFBZlgVr/eMx7u6lV/lGJVn252BVETJzqvO/Vb3RoJCi5j/UDTfshcq
	KFUw==
MIME-Version: 1.0
X-Received: by 10.181.13.165 with SMTP id ez5mr12809170wid.56.1390844022014;
	Mon, 27 Jan 2014 09:33:42 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 09:33:41 -0800 (PST)
In-Reply-To: <20140127161431.GE32713@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140120162943.GD11681@zion.uk.xensource.com>
	<CAFLBxZYrL1T93FD=Jjdg_pOGxGgE_Hr4vOasEHWgjB9VmuxxGA@mail.gmail.com>
	<20140127161431.GE32713@zion.uk.xensource.com>
Date: Mon, 27 Jan 2014 17:33:41 +0000
X-Google-Sender-Auth: hwqqLK2Ow2pETRkMzfNuqCnbwkQ
Message-ID: <CAFLBxZbrx1-WofUyp02yWJZVeYQghh99x8J0xFkXWHjNzYp+NQ@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Wei Liu <wei.liu2@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 4:14 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Mon, Jan 27, 2014 at 02:54:40PM +0000, George Dunlap wrote:
> [...]
>> > ---8<---
>> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> > index 77bd365..472f1df 100644
>> > --- a/tools/libxc/xc_hvm_build_x86.c
>> > +++ b/tools/libxc/xc_hvm_build_x86.c
>> > @@ -49,6 +49,8 @@
>> >  #define NR_SPECIAL_PAGES     8
>> >  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>> >
>> > +#define POD_VGA_HOLE_SIZE (0x20)
>> > +
>> >  static int modules_init(struct xc_hvm_build_args *args,
>> >                          uint64_t vend, struct elf_binary *elf,
>> >                          uint64_t *mstart_out, uint64_t *mend_out)
>> > @@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
>> >      if ( pod_mode )
>> >      {
>> >          /*
>> > -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
>> > -         * adjust the PoD cache size so that domain tot_pages will be
>> > -         * target_pages - 0x20 after this call.
>> > +         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
>> > +         * "hole".  Xen will adjust the PoD cache size so that domain
>> > +         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
>> > +         * this call.
>> >           */
>> > -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
>> > +        rc = xc_domain_set_pod_target(xch, dom,
>> > +                                      target_pages - POD_VGA_HOLE_SIZE,
>> >                                        NULL, NULL, NULL);
>> >          if ( rc != 0 )
>> >          {
>> > @@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
>> >
>> >      /* try to claim pages for early warning of insufficient memory available */
>> >      if ( claim_enabled ) {
>> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> > +        unsigned long nr = target_pages;
>> > +
>> > +        if ( pod_mode )
>> > +            nr -= POD_VGA_HOLE_SIZE;
>> > +
>> > +        rc = xc_domain_claim_pages(xch, dom, nr);
>>
>> Two things:
>>
>> 1. This is broken because it doesn't claim pages for the PoD "cache".
>> The PoD "cache" amounts to *all the pages that the domain will have
>> allocated* -- there will be basically no pages allocated after this
>> point.
>>
>> Claim mode is trying to make creation of large guests fail early or be
>> guaranteed to succeed.  For large guests, it's set_pod_target() that
>> may take the long time, and it's there that things will fail if
>> there's not enough memory.  By the time you get to actually setting up
>> the p2m, you've already made it.
>>
>> 2. I think the VGA_HOLE doesn't have anything to do with PoD.
>>
>> Actually, it looks like the original code was wrong: correct me if I'm
>> wrong, but xc_domain_claim_pages() wants the *total number of pages*,
>> whereas nr_pages-cur_pages would give you the *number of additional
>> pages*.
>>
>> I think you need to claim nr_pages-VGA_HOLE_SIZE regardless of whether
>> you're in PoD mode or not.  The initial code would claim 0xa0 pages
>> too few; the new code will claim 0x20 pages too many in the non-PoD
>> case.
>>
>> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> > index 5f484a2..1e44ba3 100644
>> > --- a/xen/common/page_alloc.c
>> > +++ b/xen/common/page_alloc.c
>> > @@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>> >          goto out;
>> >      }
>> >
>> > -    /* disallow a claim not exceeding current tot_pages or above max_pages */
>> > -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
>> > +    /* disallow a claim below current tot_pages or above max_pages */
>> > +    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
>> >      {
>>
>> This seems like a good interface change in any case -- there's no
>> particular reason not to allow multiple calls with the same claim
>> amount.  (Interface-wise, I don't see a good reason we couldn't allow
>> the claim to be reduced as well; but that's probably more than we want
>> to do at this point.)
>>
>> So it seems like we should at least make the two fixes above:
>> * Use nr_pages-VGA_HOLE_SIZE for the claim, regardless of whether PoD is enabled
>
> You mean target_pages here, right? nr_pages is maxmem= while target_pages
> is memory=. And from the face-to-face discussion we had this morning I
> had the impression you meant target_pages.
>
> In fact using nr_pages won't work because at that point d->max_pages is
> capped to target_memory in the hypervisor.

Dur, yes of course.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7q49-0001GC-PQ; Mon, 27 Jan 2014 17:33:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7q47-0001G5-UG
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:33:44 +0000
Received: from [85.158.139.211:32930] by server-7.bemta-5.messagelabs.com id
	D2/28-04824-77896E25; Mon, 27 Jan 2014 17:33:43 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390844022!12248377!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20766 invoked from network); 27 Jan 2014 17:33:42 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:33:42 -0000
Received: by mail-we0-f169.google.com with SMTP id u57so5696762wes.28
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:33:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=mhbeUsMqP7MrSpCNKZXKB56tghu+M0YDYMm4AWzLjR0=;
	b=P9IwJBiltd3Xt70EK/eAfrfMwOu4r4SUlMimRR0fliVTxlnMP1H+Nhfo/qw7df4dHJ
	xn6WQ0hQFl6/ZbAG1GIZktcaVV6ZVkC1q3RBfG7doK0kW+PDQ9Rw6ejYuHjypX6P33R+
	3jJOMDc8AZ0amWv9+Z60TyIYNWXZLQXZjPXvVKw8PH8ppz/rG6wB1Kn1USNYIL61LodI
	Bfc2vSf0n0y4/JyheguFj1OFhJ6v1zu5gE92Ekl8KBf03DIrsB0OoeLWFkq1sXTCk8ns
	NhL/Vr/m2YYfUhFBZlgVr/eMx7u6lV/lGJVn252BVETJzqvO/Vb3RoJCi5j/UDTfshcq
	KFUw==
MIME-Version: 1.0
X-Received: by 10.181.13.165 with SMTP id ez5mr12809170wid.56.1390844022014;
	Mon, 27 Jan 2014 09:33:42 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 09:33:41 -0800 (PST)
In-Reply-To: <20140127161431.GE32713@zion.uk.xensource.com>
References: <20140110115638.GG29180@zion.uk.xensource.com>
	<1389355182.19142.38.camel@kazak.uk.xensource.com>
	<20140110145807.GB19124@phenom.dumpdata.com>
	<20140120162943.GD11681@zion.uk.xensource.com>
	<CAFLBxZYrL1T93FD=Jjdg_pOGxGgE_Hr4vOasEHWgjB9VmuxxGA@mail.gmail.com>
	<20140127161431.GE32713@zion.uk.xensource.com>
Date: Mon, 27 Jan 2014 17:33:41 +0000
X-Google-Sender-Auth: hwqqLK2Ow2pETRkMzfNuqCnbwkQ
Message-ID: <CAFLBxZbrx1-WofUyp02yWJZVeYQghh99x8J0xFkXWHjNzYp+NQ@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Wei Liu <wei.liu2@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Claim mode and HVM PoD interact badly
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 4:14 PM, Wei Liu <wei.liu2@citrix.com> wrote:
> On Mon, Jan 27, 2014 at 02:54:40PM +0000, George Dunlap wrote:
> [...]
>> > ---8<---
>> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> > index 77bd365..472f1df 100644
>> > --- a/tools/libxc/xc_hvm_build_x86.c
>> > +++ b/tools/libxc/xc_hvm_build_x86.c
>> > @@ -49,6 +49,8 @@
>> >  #define NR_SPECIAL_PAGES     8
>> >  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>> >
>> > +#define POD_VGA_HOLE_SIZE (0x20)
>> > +
>> >  static int modules_init(struct xc_hvm_build_args *args,
>> >                          uint64_t vend, struct elf_binary *elf,
>> >                          uint64_t *mstart_out, uint64_t *mend_out)
>> > @@ -305,11 +307,13 @@ static int setup_guest(xc_interface *xch,
>> >      if ( pod_mode )
>> >      {
>> >          /*
>> > -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
>> > -         * adjust the PoD cache size so that domain tot_pages will be
>> > -         * target_pages - 0x20 after this call.
>> > +         * Subtract POD_VGA_HOLE_SIZE from target_pages for the VGA
>> > +         * "hole".  Xen will adjust the PoD cache size so that domain
>> > +         * tot_pages will be target_pages - POD_VGA_HOLE_SIZE after
>> > +         * this call.
>> >           */
>> > -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
>> > +        rc = xc_domain_set_pod_target(xch, dom,
>> > +                                      target_pages - POD_VGA_HOLE_SIZE,
>> >                                        NULL, NULL, NULL);
>> >          if ( rc != 0 )
>> >          {
>> > @@ -335,7 +339,12 @@ static int setup_guest(xc_interface *xch,
>> >
>> >      /* try to claim pages for early warning of insufficient memory available */
>> >      if ( claim_enabled ) {
>> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> > +        unsigned long nr = target_pages;
>> > +
>> > +        if ( pod_mode )
>> > +            nr -= POD_VGA_HOLE_SIZE;
>> > +
>> > +        rc = xc_domain_claim_pages(xch, dom, nr);
>>
>> Two things:
>>
>> 1. This is broken because it doesn't claim pages for the PoD "cache".
>> The PoD "cache" amounts to *all the pages that the domain will have
>> allocated* -- there will be basically no pages allocated after this
>> point.
>>
>> Claim mode is trying to make creation of large guests fail early or be
>> guaranteed to succeed.  For large guests, it's set_pod_target() that
>> may take the long time, and it's there that things will fail if
>> there's not enough memory.  By the time you get to actually setting up
>> the p2m, you've already made it.
>>
>> 2. I think the VGA_HOLE doesn't have anything to do with PoD.
>>
>> Actually, it looks like the original code was wrong: correct me if I'm
>> wrong, but xc_domain_claim_pages() wants the *total number of pages*,
>> whereas nr_pages-cur_pages would give you the *number of additional
>> pages*.
>>
>> I think you need to claim nr_pages-VGA_HOLE_SIZE regardless of whether
>> you're in PoD mode or not.  The initial code would claim 0xa0 pages
>> too few; the new code will claim 0x20 pages too many in the non-PoD
>> case.
>>
>> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> > index 5f484a2..1e44ba3 100644
>> > --- a/xen/common/page_alloc.c
>> > +++ b/xen/common/page_alloc.c
>> > @@ -339,8 +339,8 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
>> >          goto out;
>> >      }
>> >
>> > -    /* disallow a claim not exceeding current tot_pages or above max_pages */
>> > -    if ( (pages <= d->tot_pages) || (pages > d->max_pages) )
>> > +    /* disallow a claim below current tot_pages or above max_pages */
>> > +    if ( (pages < d->tot_pages) || (pages > d->max_pages) )
>> >      {
>>
>> This seems like a good interface change in any case -- there's no
>> particular reason not to allow multiple calls with the same claim
>> amount.  (Interface-wise, I don't see a good reason we couldn't allow
>> the claim to be reduced as well; but that's probably more than we want
>> to do at this point.)
>>
>> So it seems like we should at least make the two fixes above:
>> * Use nr_pages-VGA_HOLE_SIZE for the claim, regardless of whether PoD is enabled
>
> You mean target_pages here, right? nr_pages is maxmem= while target_pages
> is memory=. And from the face-to-face discussion we had this morning I
> had the impression you meant target_pages.
>
> In fact using nr_pages won't work because at that point d->max_pages is
> capped to target_memory in the hypervisor.

Dur, yes of course.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:34:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7q4p-0001K2-70; Mon, 27 Jan 2014 17:34:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W7q4n-0001Jo-EN
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:34:25 +0000
Received: from [85.158.137.68:4539] by server-1.bemta-3.messagelabs.com id
	FE/32-29598-0A896E25; Mon, 27 Jan 2014 17:34:24 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390844061!10458450!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20682 invoked from network); 27 Jan 2014 17:34:23 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 17:34:23 -0000
Received: from mail-la0-f42.google.com ([209.85.215.42]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuaYnbEkwwbnbBb8sauRAwnPpaD7/PFI@postini.com;
	Mon, 27 Jan 2014 09:34:23 PST
Received: by mail-la0-f42.google.com with SMTP id hr13so4798983lab.1
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:34:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google; h=from:to:subject:date:message-id;
	bh=5RvAeUg/Rewn+9Jd+M66/NtDPl1yjHilg7+mUrUltgo=;
	b=V6BxL09rbTnvdx8E292Qfm5qr3zaYjy/5gVDSWX03DUBTVaQ4WDLsxXMKgXjLKhp8+
	2FWQzle7fC5NCnHqCn80+qpYdE+ISY7B9N7nprl4OKvYVT4B6hKlmrka1RPAU+cOQ/0w
	g5Yu7Y0t8ztP1eL9zDzVaMdRY5Vc8qcCJ3lIM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id;
	bh=5RvAeUg/Rewn+9Jd+M66/NtDPl1yjHilg7+mUrUltgo=;
	b=BuwEwurKR3IbgnR+T1pSfz8PLb0Vl6RG24tf2kbuYrweGi/jNP4jX/f1nWCVO9yxU0
	7CfhmVvXNFoo8oPP2vOMKKnTnYeB8BFcYSOrXW5/JnXN4ptsgi9bNjhI4t+Vl3Z8R+9N
	9IFT+j4ZbbkXD9Y4aMLxIdEk+fXHLz34NiBl5EL5SC53EUhDVkAEHnzivhJkMmMppnLS
	7P267Kc7okGUvAl3fQQay+pSQv2M3RRmat5rZgtGIG0WNurIMld7CMnfAXFmlmtYogpM
	5/sca2R58LbvNdDS6Fs+BbnQUfZ0BYXAnQXkCj9C63mGSkmoaI5lPVbs/Z/bTlM9V2yg
	QUsQ==
X-Gm-Message-State: ALoCoQk6hyPDRE/Rlxcqrhb1cGDHAaESfp3DXzXgWK/OVyHLsqgArdEJ2mqIu0oLnGDEltgKDJRsINmV+bnddOM+32uhl+oqYWUlGMRFcR73/pStV+Ib+ckAaPkzbz/BqmvMN+3usAISkT0A3rnA1MVfOPlpLO9tENv7yAj82k1z0Iw3OE2CYqs=
X-Received: by 10.112.40.114 with SMTP id w18mr1092223lbk.20.1390844059647;
	Mon, 27 Jan 2014 09:34:19 -0800 (PST)
X-Received: by 10.112.40.114 with SMTP id w18mr1092219lbk.20.1390844059552;
	Mon, 27 Jan 2014 09:34:19 -0800 (PST)
Received: from uglx0186693.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id e6sm12989024lbs.3.2014.01.27.09.34.18
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 09:34:18 -0800 (PST)
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Jan 2014 19:33:41 +0200
Message-Id: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, all.

We are trying to bring up Xen on the DRA7XX (OMAP5) platform.

We sometimes see hangs in the hypervisor, and these hangs are related to SMP.
We found out that a deadlock takes place in the on_selected_cpus function
when cross-interrupts occur simultaneously.

The issue:

1. We receive irqs on two CPUs (for example CPU0 and CPU1) in parallel.
2. In our case the maintenance_interrupt function for the maintenance irq from CPU0 is executed on CPU1, and
maintenance_interrupt for the irq from CPU1 is executed on CPU0.
3. According to the existing logic, the gic_irq_eoi function has to run on the CPU on which the irq was scheduled.
4. Because of this, in both cases we need to call the on_selected_cpus function to EOI the irqs.
5. For CPU0, the on_selected_cpus function is called; it takes the lock at the beginning of the function
and continues executing.
6. In parallel, the same function is called for CPU1, which stalls while attempting to take the lock because it is already held.
7. For CPU0 we send an IPI and then wait until CPU1 has executed the function and cleared the cpumask.
8. But the mask will never be cleared, because CPU1 is waiting too.

We now have the following situation: CPU0 cannot exit its busy loop, because it is waiting for CPU1 to execute the function and clear the mask, while CPU1 is waiting for the lock to be released.
This is a deadlock.

Since we needed a solution to avoid these hangs, the attached patch was created. The solution is simply to
call the smp_call_function_interrupt function if the lock is already held; this causes the waiting CPU to exit its busy loop and release the lock.
But I am afraid this solution is not complete and may not be enough for stable operation. I would appreciate it if you could explain how to solve this issue properly, or give some advice.

P.S. We use the following software:
1. Hypervisor - Xen 4.4-unstable
2. Dom0 - Kernel 3.8
3. DomU - Kernel 3.8

Oleksandr Tyshchenko (2):
  xen/arm: Add return value to smp_call_function_interrupt function
  xen/arm: Fix deadlock in on_selected_cpus function

 xen/common/smp.c      |   13 ++++++++++---
 xen/include/xen/smp.h |    2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:34:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7q4r-0001Kv-Jm; Mon, 27 Jan 2014 17:34:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W7q4p-0001K1-I4
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:34:27 +0000
Received: from [85.158.143.35:43232] by server-3.bemta-4.messagelabs.com id
	ED/13-32360-2A896E25; Mon, 27 Jan 2014 17:34:26 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390844064!1137216!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15269 invoked from network); 27 Jan 2014 17:34:26 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 17:34:26 -0000
Received: from mail-lb0-f180.google.com ([209.85.217.180]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuaYny1p+aBSf2Gb+5sOxGvU6gJYtlVi@postini.com;
	Mon, 27 Jan 2014 09:34:25 PST
Received: by mail-lb0-f180.google.com with SMTP id n15so4742080lbi.11
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:34:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=Z7AquGoHszCV4Un47HqSC5qBRYBk/2ctjvlL5jqwkUA=;
	b=ExYtusOmaunsKvCBJU3byrwbv2VuwiE/v4aCWCe5TWP9bb/leHF0iq6NMEQ1xo0XCg
	uEkjhtt/+GxYbi6h5cyrq980Z+u/Zh8CAvJJEzPlRg4b82OBD0uYZblfqPUcLEFOvoeU
	+b4i0PxcJVnS1Z37qNATZFjEACBLWNDqPoeww=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=Z7AquGoHszCV4Un47HqSC5qBRYBk/2ctjvlL5jqwkUA=;
	b=M+wrd2rc+mTr9W/EHVhfs/EEr4rxF9JepBqVcr7OMJ57RRtM1/M+DVoTiaHC7wSTik
	Y88sriF86qIYZEZV3XH0RGD28wC62qM0DW7UmTXzhOHMvL/nqydtdyNgiA16rGMSpVPz
	ras8YbbF8jLeziA7zhK38lpYUgFSiSfHB5tMv9t7v3cAZWYsYuAvveMtsec6jHvd7ged
	v5z0SaIFyxX7vsF3S5dAKLxJM2dKU3xeYLnmtuyM4w0VbUFiNBwUz0PpM6/1ZxeY5lR6
	AEthMpSjl2Rz2PrCwbSDSx4M33DGVVHeLuDVEgJyh5vTDdAFZgbTyZVGu7bMrufMIQq9
	hQpA==
X-Gm-Message-State: ALoCoQkIGC1n9C5pbSjGH3bKjh531v2Fx2VsByKSKB1zSuap9itWGIxocAZEJWhZpgeYxrpxtGI1LsW62sbveOwbq8fL008lTj9yjVNdpXrY55F1XXRaQO9syRjJlmGhr+OfzHcS7zMnoAymVzieqXijt53muiZN6T3TbIufDCaz8q6nwmO7zNc=
X-Received: by 10.152.87.228 with SMTP id bb4mr18496184lab.15.1390844062411;
	Mon, 27 Jan 2014 09:34:22 -0800 (PST)
X-Received: by 10.152.87.228 with SMTP id bb4mr18496173lab.15.1390844062282;
	Mon, 27 Jan 2014 09:34:22 -0800 (PST)
Received: from uglx0186693.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id e6sm12989024lbs.3.2014.01.27.09.34.21
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 09:34:21 -0800 (PST)
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Jan 2014 19:33:43 +0200
Message-Id: <1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Subject: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
	on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch is needed to avoid possible deadlocks in the case of simultaneously
occurring cross-interrupts.

Change-Id: I574b496442253a7b67a27e2edd793526c8131284
Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/common/smp.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/smp.c b/xen/common/smp.c
index 2700bd7..46d2fc6 100644
--- a/xen/common/smp.c
+++ b/xen/common/smp.c
@@ -55,7 +55,11 @@ void on_selected_cpus(
 
     ASSERT(local_irq_is_enabled());
 
-    spin_lock(&call_lock);
+    if (!spin_trylock(&call_lock)) {
+        if (smp_call_function_interrupt())
+            return;
+        spin_lock(&call_lock);
+    }
 
     cpumask_copy(&call_data.selected, selected);
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

This patch is needed to avoid a possible deadlock when cross-interrupts
occur simultaneously.

Change-Id: I574b496442253a7b67a27e2edd793526c8131284
Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/common/smp.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/common/smp.c b/xen/common/smp.c
index 2700bd7..46d2fc6 100644
--- a/xen/common/smp.c
+++ b/xen/common/smp.c
@@ -55,7 +55,11 @@ void on_selected_cpus(
 
     ASSERT(local_irq_is_enabled());
 
-    spin_lock(&call_lock);
+    if (!spin_trylock(&call_lock)) {
+        if (smp_call_function_interrupt())
+            return;
+        spin_lock(&call_lock);
+    }
 
     cpumask_copy(&call_data.selected, selected);
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7q4s-0001LQ-3X; Mon, 27 Jan 2014 17:34:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W7q4q-0001KC-6H
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:34:28 +0000
Received: from [85.158.137.68:4759] by server-12.bemta-3.messagelabs.com id
	E3/E2-20055-3A896E25; Mon, 27 Jan 2014 17:34:27 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390844064!10451839!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6172 invoked from network); 27 Jan 2014 17:34:26 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 17:34:26 -0000
Received: from mail-lb0-f181.google.com ([209.85.217.181]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuaYny1p+aBSf2Gb+5sOxGvU6gJYtlVi@postini.com;
	Mon, 27 Jan 2014 09:34:26 PST
Received: by mail-lb0-f181.google.com with SMTP id z5so4638232lbh.40
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:34:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=from:to:subject:date:message-id:in-reply-to:references;
	bh=Yd9apWar7rPuIXucqHNLQXSMrdZH99YXKy7EsMQrFrc=;
	b=XKZu2nhrLGVFcCCW3FLIp9SpUg9ROHSzpPEP7xOBs0woDkfm+eAWJTOf6WPtX0zQZ3
	TiJJKEBLBEHrZq5UjjEsbHWUAv/LQmqaJsP/lYhvef1aU5mBqC7O/82t4EidSWLuiNmx
	N/m0xquMFwlfu5dmbohDHCzLPnGfv3049bs2U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to
	:references;
	bh=Yd9apWar7rPuIXucqHNLQXSMrdZH99YXKy7EsMQrFrc=;
	b=hdm3V9ilmlVVRiFhIm/g5Q3QvZJklckC07lFZjH6Pce9u0FgxBJJRkuq9MzbornAlg
	Hh+c9sirM3J5Vfe9zCWju+fsanGrV0v2nVNiqxG1PQeY3RrnJrZ78unku2+I9ecrpRww
	zzqJKw32/cik5eNHA0GfhehreqEgUFg4DUw0zY6Oac3b3n6efbYTaWJgpYrZZ7mhZfkD
	CxVJgMrTMBA19yE5W64/uvUO5+6tMZ7W10KXrzTt1rrZcWQ+w9fYCkmEg94PAFLYL+ky
	1a6+CJDBrx/K+24EQOfv/Iz4VvEuJwkL4KbNzq+39ZB4+92j4XNKrRDXcEhUCnqiSlhc
	EAfQ==
X-Gm-Message-State: ALoCoQkVfBkqAJSrTkUsbN8J9dxtXlzGNkIsy7KZ204pxk3cOHZb0xPFCZ7PM99t/J7ETuH1UC4QbAKWBtg8HccEqvnckWaTOY9nzwOJ2LWEZFnR7raWE+xueOTBP1+noSl5ivf5SVsVDOhwaeLrSrZ7DaVqKB2PWVCoahqSjktGwZDfeQw9Ebo=
X-Received: by 10.112.180.72 with SMTP id dm8mr8055433lbc.28.1390844061145;
	Mon, 27 Jan 2014 09:34:21 -0800 (PST)
X-Received: by 10.112.180.72 with SMTP id dm8mr8055426lbc.28.1390844061024;
	Mon, 27 Jan 2014 09:34:21 -0800 (PST)
Received: from uglx0186693.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id e6sm12989024lbs.3.2014.01.27.09.34.19
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 09:34:20 -0800 (PST)
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Mon, 27 Jan 2014 19:33:42 +0200
Message-Id: <1390844023-23123-2-git-send-email-oleksandr.tyshchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Subject: [Xen-devel] [PATCH v1 1/2] xen/arm: Add return value to
	smp_call_function_interrupt function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Let the function return an error if the action cannot be executed.

Change-Id: Iace691850024656239326bf0e3c87b57cb1b8ab3
Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
---
 xen/common/smp.c      |    7 +++++--
 xen/include/xen/smp.h |    2 +-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/common/smp.c b/xen/common/smp.c
index 482a203..2700bd7 100644
--- a/xen/common/smp.c
+++ b/xen/common/smp.c
@@ -20,6 +20,7 @@
 #include <asm/processor.h>
 #include <xen/spinlock.h>
 #include <xen/smp.h>
+#include <xen/errno.h>
 
 /*
  * Structure and data for smp_call_function()/on_selected_cpus().
@@ -75,14 +76,14 @@ out:
     spin_unlock(&call_lock);
 }
 
-void smp_call_function_interrupt(void)
+int smp_call_function_interrupt(void)
 {
     void (*func)(void *info) = call_data.func;
     void *info = call_data.info;
     unsigned int cpu = smp_processor_id();
 
     if ( !cpumask_test_cpu(cpu, &call_data.selected) )
-        return;
+        return -EPERM;
 
     irq_enter();
 
@@ -100,6 +101,8 @@ void smp_call_function_interrupt(void)
     }
 
     irq_exit();
+
+    return 0;
 }
 
 /*
diff --git a/xen/include/xen/smp.h b/xen/include/xen/smp.h
index 6febb56..6d05910 100644
--- a/xen/include/xen/smp.h
+++ b/xen/include/xen/smp.h
@@ -61,7 +61,7 @@ static inline void on_each_cpu(
 /*
  * Call a function on the current CPU
  */
-void smp_call_function_interrupt(void);
+int smp_call_function_interrupt(void);
 
 void smp_send_call_function_mask(const cpumask_t *mask);
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:41:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:41:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qB3-0002EP-Rr; Mon, 27 Jan 2014 17:40:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W7qB2-0002EH-BY
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:40:52 +0000
Received: from [193.109.254.147:43097] by server-2.bemta-14.messagelabs.com id
	40/59-00361-32A96E25; Mon, 27 Jan 2014 17:40:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390844449!175338!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14349 invoked from network); 27 Jan 2014 17:40:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:40:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="96901494"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 17:40:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 12:40:30 -0500
Message-ID: <1390844429.20617.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Date: Mon, 27 Jan 2014 17:40:29 +0000
In-Reply-To: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 19:33 +0200, Oleksandr Tyshchenko wrote:
> But I am afraid this solution is not complete and maybe not enough for
> stable work. I would appreciate it if you could explain to me how to
> solve the issue in the right way or give some advice.

I'll look into this properly tomorrow, but my initial reaction is that
this should be fixed by making those SGIs which are used as IPIs have
higher priority in the GIC than normal interrupts.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:48:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qI4-0002R8-0Z; Mon, 27 Jan 2014 17:48:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W7qI2-0002R0-Mp
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:48:06 +0000
Received: from [193.109.254.147:20479] by server-10.bemta-14.messagelabs.com
	id 3D/74-20752-5DB96E25; Mon, 27 Jan 2014 17:48:05 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390844883!180724!1
X-Originating-IP: [209.85.216.177]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31198 invoked from network); 27 Jan 2014 17:48:04 -0000
Received: from mail-qc0-f177.google.com (HELO mail-qc0-f177.google.com)
	(209.85.216.177)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:48:04 -0000
Received: by mail-qc0-f177.google.com with SMTP id i8so8391079qcq.36
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:48:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=QwT5hr6Uarlx1HrSg0tHY99RBvQy6qEFReNsn/hp8uc=;
	b=j1WUC/uOMYPsFf350xHAa5smmXfCXqjkFQxnZp7mP3Q91+3Ooink49Rc8vzKOIhO5V
	wVYd1ZYe8848TUrShd+f7xnzWAX5yMbuvbbZUjhgU+ffhWObOiDqDyr3dUebW4iSoexv
	LLgDKR2rQhE7b31yCWrloQ5SdprAQcKoUSMdzvWtR5oe2FA/+HJHp+CzWSgOcBxsaJd3
	K0FxxKGRqGut8lb8EyJ39pBF9WSJ1I3GnNO7ayzI8MTqXQL7xShk6+9n0CQ1u5MS5LJ4
	bnKPIhLn1TTGOeZ+rFfOP8SocE/ALmJ6xagabgphQG7eyFYtAn+SjetjyxiLIuGfZrxJ
	TpYw==
X-Received: by 10.140.86.116 with SMTP id o107mr41780201qgd.67.1390844883712; 
	Mon, 27 Jan 2014 09:48:03 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.vodafonedsl.it.
	[2.35.197.229])
	by mx.google.com with ESMTPSA id g6sm26893927qag.17.2014.01.27.09.48.00
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 09:48:02 -0800 (PST)
Message-ID: <52E69BCE.1070508@redhat.com>
Date: Mon, 27 Jan 2014 18:47:58 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, 
	Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
	<52D8030E.1050501@os.inf.tu-dresden.de>
	<52D908BF0200007800114782@nat28.tlf.novell.com>
In-Reply-To: <52D908BF0200007800114782@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/2014 10:41, Jan Beulich wrote:
> One half of this doesn't apply here, due to the explicit barriers
> that are there. The half about converting local variable accesses
> back to memory reads (i.e. eliding the local variable), however,
> is only a theoretical issue afaict: If a compiler really did this, I
> think there'd be far more places where this would hurt.

Perhaps.  But for example seqlocks get it right.

> I don't think so - this would only be an issue if the conditions used
> | instead of ||. || implies a sequence point between evaluating the
> left and right sides, and the standard says: "The presence of a
> sequence point between the evaluation of expressions A and B
> implies that every value computation and side effect associated
> with A is sequenced before every value computation and side
> effect associated with B."

I suspect this is widely ignored by compilers if A is not 
side-effecting.  The above wording would imply that

     x = a || b    =>    x = (a | b) != 0

(where "a" and "b" are non-volatile globals) would be an invalid 
change.  The compiler would have to do:

     temp = a;
     barrier();
     x = (temp | b) != 0

and I'm pretty sure that no compiler does it this way unless C11/C++11
atomics are involved (at which point accesses become side-effecting).

The code has changed and pvclock_get_time_values moved to
__pvclock_read_cycles, but I think the problem remains.  Another approach
to fixing this (and one I prefer) is to do the same thing as seqlocks:
turn off the low bit in the return value of __pvclock_read_cycles,
and drop the || altogether.  Untested patch after my name.

Paolo

diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
index d6b078e9fa28..5aec80adaf54 100644
--- a/arch/x86/include/asm/pvclock.h
+++ b/arch/x86/include/asm/pvclock.h
@@ -75,7 +75,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
 	cycle_t ret, offset;
 	u8 ret_flags;
 
-	version = src->version;
+	version = src->version & ~1;
 	/* Note: emulated platforms which do not advertise SSE2 support
 	 * result in kvmclock not using the necessary RDTSC barriers.
 	 * Without barriers, it is possible that RDTSC instruction reads from
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 2f355d229a58..a5052a87d55e 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -66,7 +66,7 @@ u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
 
 	do {
 		version = __pvclock_read_cycles(src, &ret, &flags);
-	} while ((src->version & 1) || version != src->version);
+	} while (version != src->version);
 
 	return flags & valid_flags;
 }
@@ -80,7 +80,7 @@ cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
 
 	do {
 		version = __pvclock_read_cycles(src, &ret, &flags);
-	} while ((src->version & 1) || version != src->version);
+	} while (version != src->version);
 
 	if (unlikely((flags & PVCLOCK_GUEST_STOPPED) != 0)) {
 		src->flags &= ~PVCLOCK_GUEST_STOPPED;
diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
index eb5d7a56f8d4..f09b09bcb515 100644
--- a/arch/x86/vdso/vclock_gettime.c
+++ b/arch/x86/vdso/vclock_gettime.c
@@ -117,7 +117,6 @@ static notrace cycle_t vread_pvclock(int *mode)
 		 */
 		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
 	} while (unlikely(cpu != cpu1 ||
-			  (pvti->pvti.version & 1) ||
 			  pvti->pvti.version != version));
 
 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/01/2014 10:41, Jan Beulich wrote:
> One half of this doesn't apply here, due to the explicit barriers
> that are there. The half about converting local variable accesses
> back to memory reads (i.e. eliding the local variable), however,
> is only a theoretical issue afaict: If a compiler really did this, I
> think there'd be far more places where this would hurt.

Perhaps.  But seqlocks, for example, get this right.

> I don't think so - this would only be an issue if the conditions used
> | instead of ||. || implies a sequence point between evaluating the
> left and right sides, and the standard says: "The presence of a
> sequence point between the evaluation of expressions A and B
> implies that every value computation and side effect associated
> with A is sequenced before every value computation and side
> effect associated with B."

I suspect this is widely ignored by compilers if A is not 
side-effecting.  The above wording would imply that

     x = a || b    =>    x = (a | b) != 0

(where "a" and "b" are non-volatile globals) would be an invalid 
change.  The compiler would have to do:

     temp = a;
     barrier();
     x = (temp | b) != 0

and I'm pretty sure that no compiler does it this way unless C11/C++11
atomics are involved (at which point accesses become side-effecting).

The code has changed and pvclock_get_time_values moved to
__pvclock_read_cycles, but I think the problem remains.  Another approach
to fixing this (and one I prefer) is to do the same thing as seqlocks:
turn off the low bit in the return value of __pvclock_read_cycles,
and drop the || altogether.  Untested patch after my name.

Paolo

diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
index d6b078e9fa28..5aec80adaf54 100644
--- a/arch/x86/include/asm/pvclock.h
+++ b/arch/x86/include/asm/pvclock.h
@@ -75,7 +75,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
 	cycle_t ret, offset;
 	u8 ret_flags;
 
-	version = src->version;
+	version = src->version & ~1;
 	/* Note: emulated platforms which do not advertise SSE2 support
 	 * result in kvmclock not using the necessary RDTSC barriers.
 	 * Without barriers, it is possible that RDTSC instruction reads from
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 2f355d229a58..a5052a87d55e 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -66,7 +66,7 @@ u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
 
 	do {
 		version = __pvclock_read_cycles(src, &ret, &flags);
-	} while ((src->version & 1) || version != src->version);
+	} while (version != src->version);
 
 	return flags & valid_flags;
 }
@@ -80,7 +80,7 @@ cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
 
 	do {
 		version = __pvclock_read_cycles(src, &ret, &flags);
-	} while ((src->version & 1) || version != src->version);
+	} while (version != src->version);
 
 	if (unlikely((flags & PVCLOCK_GUEST_STOPPED) != 0)) {
 		src->flags &= ~PVCLOCK_GUEST_STOPPED;
diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
index eb5d7a56f8d4..f09b09bcb515 100644
--- a/arch/x86/vdso/vclock_gettime.c
+++ b/arch/x86/vdso/vclock_gettime.c
@@ -117,7 +117,6 @@ static notrace cycle_t vread_pvclock(int *mode)
 		 */
 		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
 	} while (unlikely(cpu != cpu1 ||
-			  (pvti->pvti.version & 1) ||
 			  pvti->pvti.version != version));
 
 	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:48:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qIN-0002Sq-EE; Mon, 27 Jan 2014 17:48:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7qIM-0002Sh-Gq
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:48:26 +0000
Received: from [85.158.139.211:56039] by server-8.bemta-5.messagelabs.com id
	AB/1A-29838-9EB96E25; Mon, 27 Jan 2014 17:48:25 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390844903!12248627!1
X-Originating-IP: [207.46.163.26]
X-SpamReason: No, hits=6.8 required=7.0 tests=ratty_date: Date is far 
	from today: Sat, 15 Mar 2008 16:50:23 -0500,
	ratty_date: Date is far from 
	today: Sat, 15 Mar 2008 16:50:23 -0500,DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2413 invoked from network); 27 Jan 2014 17:48:25 -0000
Received: from co9ehsobe003.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.26)
	by server-4.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 17:48:25 -0000
Received: from mail125-co9-R.bigfish.com (10.236.132.227) by
	CO9EHSOBE020.bigfish.com (10.236.130.83) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 17:48:22 +0000
Received: from mail125-co9 (localhost [127.0.0.1])	by
	mail125-co9-R.bigfish.com (Postfix) with ESMTP id B6EA160134;
	Mon, 27 Jan 2014 17:48:22 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchzz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail125-co9 (localhost.localdomain [127.0.0.1]) by mail125-co9
	(MessageSwitch) id 139084490117648_21941;
	Mon, 27 Jan 2014 17:48:21 +0000 (UTC)
Received: from CO9EHSMHS004.bigfish.com (unknown [10.236.132.227])	by
	mail125-co9.bigfish.com (Postfix) with ESMTP id F3245220055;
	Mon, 27 Jan 2014 17:48:20 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS004.bigfish.com
	(10.236.130.14) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 27 Jan 2014 17:48:20 +0000
X-WSS-ID: 0N02MSG-08-IQP-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2D4F1D160A2;	Mon, 27 Jan 2014 11:48:15 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 11:48:26 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	12:48:16 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Sat, 15 Mar 2008 16:50:23 -0500
Message-ID: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 0/2] Fix AMD threshold register definitions and
	activate vmce_amd_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Patch 1: Corrects the AMD threshold register definitions.
Patch 2: Fixes the mask in vmce.c so the vmce_amd_* functions handle accesses
         to the AMD thresholding registers.

Aravind Gopalakrishnan (2):
  hvm,svm: Update AMD Thresholding MSR definitions
  mcheck,vmce: Allow vmce_amd_* functions to handle AMD thresholding
    MSRs

 xen/arch/x86/cpu/mcheck/vmce.c  |    4 ++--
 xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
 xen/include/asm-x86/msr-index.h |    6 +++---
 3 files changed, 11 insertions(+), 9 deletions(-)

-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:48:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qIS-0002Ym-RZ; Mon, 27 Jan 2014 17:48:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7qIR-0002WL-71
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:48:31 +0000
Received: from [193.109.254.147:56103] by server-12.bemta-14.messagelabs.com
	id 72/50-13681-EEB96E25; Mon, 27 Jan 2014 17:48:30 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390844908!183167!1
X-Originating-IP: [65.55.88.11]
X-SpamReason: No, hits=6.8 required=7.0 tests=ratty_date: Date is far 
	from today: Sat, 15 Mar 2008 16:50:24 -0500,
	ratty_date: Date is far from 
	today: Sat, 15 Mar 2008 16:50:24 -0500,DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10945 invoked from network); 27 Jan 2014 17:48:29 -0000
Received: from tx2ehsobe001.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.11)
	by server-6.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 17:48:29 -0000
Received: from mail146-tx2-R.bigfish.com (10.9.14.249) by
	TX2EHSOBE004.bigfish.com (10.9.40.24) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 17:48:27 +0000
Received: from mail146-tx2 (localhost [127.0.0.1])	by
	mail146-tx2-R.bigfish.com (Postfix) with ESMTP id D2DCF4E028F;
	Mon, 27 Jan 2014 17:48:27 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(z37d5kze0eahzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h17326ah8275bh1de097h186068h5eeeKz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail146-tx2 (localhost.localdomain [127.0.0.1]) by mail146-tx2
	(MessageSwitch) id 1390844905387237_13240;
	Mon, 27 Jan 2014 17:48:25 +0000 (UTC)
Received: from TX2EHSMHS020.bigfish.com (unknown [10.9.14.237])	by
	mail146-tx2.bigfish.com (Postfix) with ESMTP id 4999E4C023D;
	Mon, 27 Jan 2014 17:48:25 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by TX2EHSMHS020.bigfish.com
	(10.9.99.120) with Microsoft SMTP Server id 14.1.225.23;
	Mon, 27 Jan 2014 17:48:20 +0000
X-WSS-ID: 0N02MSI-07-GO2-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	25EC912C0098;	Mon, 27 Jan 2014 11:48:17 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 11:48:27 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	12:48:17 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Sat, 15 Mar 2008 16:50:24 -0500
Message-ID: <1205617825-10042-2-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 1/2] hvm,
	svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

MSR 0xC000040A is marked as reserved from Fam15h onwards, and
MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10h BKDG.
So remove the unnecessary definition of the reserved MSR and
use MSR_IA32_MCx_MISC() to define MSR 0x413.

Also, according to the BKDG, MSR 0x413 is the first of the thresholding
registers; MSR 0xC0000408 and MSR 0xC0000409 are the second and third,
respectively. So rework the #define's accordingly.

Fam15h Model 00h-0Fh BKDG reference:
http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
 xen/include/asm-x86/msr-index.h |    6 +++---
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..07c2684 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold registers */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: We report that the threshold register is unavailable
          * for OS use (locked by the BIOS).
@@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         vpmu_do_wrmsr(msr, msr_content);
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold registers */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: Threshold register is reported to be locked, so we ignore
          * all write accesses. This behaviour matches real HW, so guests should
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e5ffbf2 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -219,9 +219,9 @@
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
 
 /* AMD Family10h machine check MSRs */
-#define MSR_F10_MC4_MISC1		0xc0000408
-#define MSR_F10_MC4_MISC2		0xc0000409
-#define MSR_F10_MC4_MISC3		0xc000040A
+#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
+#define MSR_F10_MC4_MISC2		0xc0000408
+#define MSR_F10_MC4_MISC3		0xc0000409
 
 /* AMD Family10h Bus Unit MSRs */
 #define MSR_F10_BU_CFG 		0xc0011023
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:48:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qIU-0002Ze-Ba; Mon, 27 Jan 2014 17:48:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7qIS-0002YX-N2
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:48:32 +0000
Received: from [85.158.139.211:56394] by server-1.bemta-5.messagelabs.com id
	77/2A-21065-FEB96E25; Mon, 27 Jan 2014 17:48:31 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390844909!12048666!1
X-Originating-IP: [216.32.180.31]
X-SpamReason: No, hits=6.8 required=7.0 tests=ratty_date: Date is far 
	from today: Sat, 15 Mar 2008 16:50:25 -0500,
	ratty_date: Date is far from 
	today: Sat, 15 Mar 2008 16:50:25 -0500,DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13193 invoked from network); 27 Jan 2014 17:48:30 -0000
Received: from va3ehsobe005.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.31)
	by server-3.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 17:48:30 -0000
Received: from mail87-va3-R.bigfish.com (10.7.14.233) by
	VA3EHSOBE001.bigfish.com (10.7.40.21) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 17:48:29 +0000
Received: from mail87-va3 (localhost [127.0.0.1])	by mail87-va3-R.bigfish.com
	(Postfix) with ESMTP id 769AC1C0077;
	Mon, 27 Jan 2014 17:48:29 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail87-va3 (localhost.localdomain [127.0.0.1]) by mail87-va3
	(MessageSwitch) id 1390844907458403_31714;
	Mon, 27 Jan 2014 17:48:27 +0000 (UTC)
Received: from VA3EHSMHS029.bigfish.com (unknown [10.7.14.227])	by
	mail87-va3.bigfish.com (Postfix) with ESMTP id 6772E24004C;
	Mon, 27 Jan 2014 17:48:27 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by VA3EHSMHS029.bigfish.com
	(10.7.99.39) with Microsoft SMTP Server id 14.16.227.3; Mon, 27 Jan 2014
	17:48:21 +0000
X-WSS-ID: 0N02MSI-08-IQT-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2BFBAD160A8;	Mon, 27 Jan 2014 11:48:17 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 11:48:28 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	12:48:18 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Sat, 15 Mar 2008 16:50:25 -0500
Message-ID: <1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 2/2] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. However, this statement:

switch ( msr & (MSR_IA32_MC0_CTL | 3) )

wrongly masks off the top two bits and bit 4, which meant the register
accesses never made it to the vmce_amd_* functions.

Fix this by widening the mask so that the AMD thresholding registers
fall through to the 'default' case, which in turn lets the vmce_amd_*
functions handle accesses to those registers.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..cb4fd12 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:52:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qLq-0003Cn-J6; Mon, 27 Jan 2014 17:52:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7qLo-0003Cd-T9
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:52:01 +0000
Received: from [85.158.139.211:15148] by server-5.bemta-5.messagelabs.com id
	D5/D3-14928-0CC96E25; Mon, 27 Jan 2014 17:52:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390845119!12068104!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25879 invoked from network); 27 Jan 2014 17:51:59 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:51:59 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so2455528eek.9
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 09:51:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=UXALXuNKbxq5CvbY6wwIhyhdEq/ryclO2ym+fWsASTA=;
	b=JQaV5GbMUo5/xFecu2YGYbJx5bMspjYZYGcM5Qj4yoCPIM8hOu+tryv0Jvyd38nAPp
	oC6HRXrCtdR53WkuI/YIQ0xdWOJdHLj/QevRne3FuD5F266Y03ZFRjWXm12ZyGqJbjv0
	h290AJsXazReVpaNUD4XFdTkaYHCB8eKw5vGAiHEnwOygli0o9XUSfNdkXtHzWK9rj8K
	ZTyKYm0yCK2lK3uujoz3dQeBPZf2CCf/ei7m0hRSwurZThnEfd0DMzXhsDv4axo/3qKY
	y8ZlJsZqUtpKdS70KZnQctQOEjs+wMC2OHhZVzVN9LeQHGO9KnAPTdNlg+Uv8cru7Am0
	ozFg==
X-Gm-Message-State: ALoCoQnadlc83ktuqarQO4NKhB9xd1WVMvtCgaa9c8PGLoLezJXYNJspKvPsB5AIkb9q3uGnQZv0
X-Received: by 10.14.104.6 with SMTP id h6mr9536711eeg.29.1390845119480;
	Mon, 27 Jan 2014 09:51:59 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v1sm44956870eef.9.2014.01.27.09.51.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 09:51:58 -0800 (PST)
Message-ID: <52E69CBC.3090207@linaro.org>
Date: Mon, 27 Jan 2014 17:51:56 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 05:33 PM, Oleksandr Tyshchenko wrote:
> Hi, all.

Hello Oleksandr,

> We are trying to bring up Xen on the DRA7XX (OMAP5) platform.
> 
> We sometimes see hangs in the hypervisor, and these hangs are related to SMP.
> We found out that a deadlock takes place in the on_selected_cpus function
> when cross-interrupts occur simultaneously.
> 
> The issue:
> 
> 1. We receive IRQs from the first CPU (for example CPU0) and the second CPU (for example CPU1) in parallel.
> 2. In our case the maintenance_interrupt function for the maintenance IRQ from CPU0 is executed on CPU1, and
> maintenance_interrupt for the IRQ from CPU1 is executed on CPU0.
> 3. According to the existing logic, the gic_irq_eoi function runs on the CPU on which it was scheduled.

gic_irq_eoi is only called for physical IRQs routed to the guest (e.g.
hard drive, network, ...). As far as I remember, these IRQs are only
routed to CPU0. Do you pass through PPIs to dom0?

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:53:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qNV-0003LB-Mz; Mon, 27 Jan 2014 17:53:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7qNT-0003L0-OY
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:53:43 +0000
Received: from [193.109.254.147:13960] by server-11.bemta-14.messagelabs.com
	id 53/64-20576-72D96E25; Mon, 27 Jan 2014 17:53:43 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390845221!184086!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32258 invoked from network); 27 Jan 2014 17:53:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:53:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94900166"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 17:53:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 12:53:40 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7qNP-0006L0-As;
	Mon, 27 Jan 2014 17:53:39 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 17:53:38 +0000
Message-ID: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The original code is wrong because:
* claim mode wants to know the total number of pages needed, while the
  original code provides only the additional number of pages needed.
* if PoD is enabled, memory will already have been allocated by the time
  we try to claim memory.

So the fix is to:
* move the claim before the actual memory allocation.
* pass the right number of pages to the hypervisor.

The "right number of pages" is the number of pages of target memory
minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.

This fixes bug #32.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
mode is completely broken. If this patch is deemed too complicated, we
should flip the switch to disable claim mode by default for 4.4.
---
 tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..dd3b522 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -49,6 +49,8 @@
 #define NR_SPECIAL_PAGES     8
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
+#define VGA_HOLE_SIZE (0x20)
+
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
                         uint64_t *mstart_out, uint64_t *mend_out)
@@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    /*
+     * Try to claim pages for early warning of insufficient memory available.
+     * This should go before xc_domain_set_pod_target, because that function
+     * actually allocates memory for the guest. Claiming after memory has been
+     * allocated is pointless.
+     */
+    if ( claim_enabled ) {
+        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
+        if ( rc != 0 )
+        {
+            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
+            goto error_out;
+        }
+    }
+
     if ( pod_mode )
     {
         /*
-         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-         * adjust the PoD cache size so that domain tot_pages will be
-         * target_pages - 0x20 after this call.
+         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
+         * "hole".  Xen will adjust the PoD cache size so that domain
+         * tot_pages will be target_pages - VGA_HOLE_SIZE after
+         * this call.
          */
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+        rc = xc_domain_set_pod_target(xch, dom,
+                                      target_pages - VGA_HOLE_SIZE,
                                       NULL, NULL, NULL);
         if ( rc != 0 )
         {
@@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
 
-    /* try to claim pages for early warning of insufficient memory available */
-    if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
-        if ( rc != 0 )
-        {
-            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
-            goto error_out;
-        }
-    }
     while ( (rc == 0) && (nr_pages > cur_pages) )
     {
         /* Clip count to maximum 1GB extent. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 17:53:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 17:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qNV-0003LB-Mz; Mon, 27 Jan 2014 17:53:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W7qNT-0003L0-OY
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 17:53:43 +0000
Received: from [193.109.254.147:13960] by server-11.bemta-14.messagelabs.com
	id 53/64-20576-72D96E25; Mon, 27 Jan 2014 17:53:43 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390845221!184086!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32258 invoked from network); 27 Jan 2014 17:53:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 17:53:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94900166"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 17:53:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 12:53:40 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W7qNP-0006L0-As;
	Mon, 27 Jan 2014 17:53:39 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 17:53:38 +0000
Message-ID: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The original code is wrong because:
* claim mode wants to know the total number of pages needed, while the
  original code provides only the additional number of pages needed.
* if PoD is enabled, memory will already have been allocated by the time
  we try to claim memory.

So the fix is:
* move the claim call before the actual memory allocation.
* pass the right number of pages to the hypervisor.

The "right number of pages" is the number of pages of target memory
minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.

This fixes bug #32.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
mode is completely broken. If this patch is deemed too complicated, we
should flip the switch to disable claim mode by default for 4.4.
---
 tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..dd3b522 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -49,6 +49,8 @@
 #define NR_SPECIAL_PAGES     8
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
+#define VGA_HOLE_SIZE (0x20)
+
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
                         uint64_t *mstart_out, uint64_t *mend_out)
@@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    /*
+     * Try to claim pages for early warning of insufficient memory available.
+     * This should go before xc_domain_set_pod_target, because that function
+     * actually allocates memory for the guest. Claiming after memory has been
+     * allocated is pointless.
+     */
+    if ( claim_enabled ) {
+        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
+        if ( rc != 0 )
+        {
+            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
+            goto error_out;
+        }
+    }
+
     if ( pod_mode )
     {
         /*
-         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-         * adjust the PoD cache size so that domain tot_pages will be
-         * target_pages - 0x20 after this call.
+         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
+         * "hole".  Xen will adjust the PoD cache size so that domain
+         * tot_pages will be target_pages - VGA_HOLE_SIZE after
+         * this call.
          */
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+        rc = xc_domain_set_pod_target(xch, dom,
+                                      target_pages - VGA_HOLE_SIZE,
                                       NULL, NULL, NULL);
         if ( rc != 0 )
         {
@@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
 
-    /* try to claim pages for early warning of insufficient memory available */
-    if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
-        if ( rc != 0 )
-        {
-            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
-            goto error_out;
-        }
-    }
     while ( (rc == 0) && (nr_pages > cur_pages) )
     {
         /* Clip count to maximum 1GB extent. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:06:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:06:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7qZG-0003wI-7q; Mon, 27 Jan 2014 18:05:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1W7qZF-0003wD-3Z
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 18:05:53 +0000
Received: from [85.158.139.211:23576] by server-3.bemta-5.messagelabs.com id
	E9/F6-04773-000A6E25; Mon, 27 Jan 2014 18:05:52 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390845949!12243594!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29256 invoked from network); 27 Jan 2014 18:05:51 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Jan 2014 18:05:51 -0000
Received: from mail-ig0-f171.google.com (mail-ig0-f171.google.com
	[209.85.213.171]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s0RI5msd011740
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 10:05:48 -0800
Received: by mail-ig0-f171.google.com with SMTP id uy17so9694036igb.4
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 10:05:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=qnExNjO1U1Cmhb4NNfbmEw039k8CBo+G63EDJZLF4rw=;
	b=BmcUx6UTV82HIhGL/kTAz49b+zrg8/YmTS891tOhrTES0xuiTEjhDiEpJqYvMI3EXC
	k7Yg2BVOvITFEi1qEJn5ZnzQpWEksGoEqn+lK5g5kkNiVMxp0IQogFDy8glCr1NapMXA
	vBMg6AltmpshlhrJll+YSov2trV97rfS++hnTPlxFAed9We52OmjFn1I4gPTfQHBtlMV
	aC4FJ5oqjyWDwR6DxuwKyeUcC3vYJUPaxU/Nuovll+imRXjCO3YG6On+p11LElXTb9/C
	zZnQBSBmOmOBSQOnGrZ3e59UHijXEow5COONQ9I+/gz/1sW8r/RR4S9QlEbMoqvl0zQU
	g54Q==
X-Received: by 10.43.79.66 with SMTP id zp2mr1469087icb.76.1390845946843; Mon,
	27 Jan 2014 10:05:46 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Mon, 27 Jan 2014 10:05:06 -0800 (PST)
In-Reply-To: <52E60568.7060305@cn.fujitsu.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
	<52E60568.7060305@cn.fujitsu.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Mon, 27 Jan 2014 10:05:06 -0800
Message-ID: <CAP8mzPOdaTzGPHRzg=v_CUOEJk-40e=Qv6rAyG3joUnoTbeoaA@mail.gmail.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>, Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>, Dong Eddie <eddie.dong@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
 hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5732422296869307815=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5732422296869307815==
Content-Type: multipart/alternative; boundary=001a113322b099593204f0f78e40

--001a113322b099593204f0f78e40
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Jan 26, 2014 at 11:06 PM, Wen Congyang <wency@cn.fujitsu.com> wrote:

> At 01/27/2014 06:27 AM, Shriram Rajagopalan Wrote:
> > On Tue, Jan 21, 2014 at 1:05 AM, Lai Jiangshan <laijs@cn.fujitsu.com>
> wrote:
> >
> >> From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> >>
> >> This patch introduces remus-netbuf-setup hotplug script responsible for
> >> setting up and tearing down the necessary infrastructure required for
> >> network output buffering in Remus.  This script is intended to be
> invoked
> >> by libxl for each guest interface, when starting or stopping Remus.
> >>
> >> Apart from returning success/failure indication via the usual hotplug
> >> entries in xenstore, this script also writes to xenstore, the name of
> >> the IFB device to be used to control the vif's network output.
> >>
> >> The script relies on libnl3 command line utilities to perform various
> >> setup/teardown functions. The script is confined to Linux platforms only
> >> since NetBSD does not seem to have libnl3.
> >>
> >> Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
> >> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> >> Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
> >> ---
> >>  tools/hotplug/Linux/Makefile           |   1 +
> >>  tools/hotplug/Linux/remus-netbuf-setup | 183
> >> +++++++++++++++++++++++++++++++++
> >>  2 files changed, 184 insertions(+)
> >>  create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
> >>
> >>
> > The last time I posted this script, the feedback was that the script and
> > the code invoking
> > the script should be in a single patch. So I would suggest doing the
> same.
>
> We use the script in patch6, which adds 479 lines. These two patches are
> already big (each adds more than 100 lines), so why put them into a single
> patch?
>
>
That is a valid question. IIRC, IanJ was the one who wanted the code and the
script together. IanJ, any thoughts?

--001a113322b099593204f0f78e40--


--===============5732422296869307815==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5732422296869307815==--


From xen-devel-bounces@lists.xen.org Mon Jan 27 18:28:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7quw-0004iX-QL; Mon, 27 Jan 2014 18:28:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7quv-0004iR-AE
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 18:28:17 +0000
Received: from [85.158.137.68:8570] by server-6.bemta-3.messagelabs.com id
	F9/F0-04868-045A6E25; Mon, 27 Jan 2014 18:28:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390847293!11646146!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14095 invoked from network); 27 Jan 2014 18:28:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94915793"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 18:28:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 13:28:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7qup-0006ne-KK;
	Mon, 27 Jan 2014 18:28:11 +0000
Date: Mon, 27 Jan 2014 18:28:08 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1390844023-23123-2-git-send-email-oleksandr.tyshchenko@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401271827230.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-2-git-send-email-oleksandr.tyshchenko@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 1/2] xen/arm: Add return value to
 smp_call_function_interrupt function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
> Let the function return an error if the action cannot be executed.

Unless you make the calling function (do_sgi) check for the return
value, this patch won't change anything.


> Change-Id: Iace691850024656239326bf0e3c87b57cb1b8ab3
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/common/smp.c      |    7 +++++--
>  xen/include/xen/smp.h |    2 +-
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/smp.c b/xen/common/smp.c
> index 482a203..2700bd7 100644
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -20,6 +20,7 @@
>  #include <asm/processor.h>
>  #include <xen/spinlock.h>
>  #include <xen/smp.h>
> +#include <xen/errno.h>
>  
>  /*
>   * Structure and data for smp_call_function()/on_selected_cpus().
> @@ -75,14 +76,14 @@ out:
>      spin_unlock(&call_lock);
>  }
>  
> -void smp_call_function_interrupt(void)
> +int smp_call_function_interrupt(void)
>  {
>      void (*func)(void *info) = call_data.func;
>      void *info = call_data.info;
>      unsigned int cpu = smp_processor_id();
>  
>      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
> -        return;
> +        return -EPERM;
>  
>      irq_enter();
>  
> @@ -100,6 +101,8 @@ void smp_call_function_interrupt(void)
>      }
>  
>      irq_exit();
> +
> +    return 0;
>  }
>  
>  /*
> diff --git a/xen/include/xen/smp.h b/xen/include/xen/smp.h
> index 6febb56..6d05910 100644
> --- a/xen/include/xen/smp.h
> +++ b/xen/include/xen/smp.h
> @@ -61,7 +61,7 @@ static inline void on_each_cpu(
>  /*
>   * Call a function on the current CPU
>   */
> -void smp_call_function_interrupt(void);
> +int smp_call_function_interrupt(void);
>  
>  void smp_send_call_function_mask(const cpumask_t *mask);
>  
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:28:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7quw-0004iX-QL; Mon, 27 Jan 2014 18:28:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7quv-0004iR-AE
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 18:28:17 +0000
Received: from [85.158.137.68:8570] by server-6.bemta-3.messagelabs.com id
	F9/F0-04868-045A6E25; Mon, 27 Jan 2014 18:28:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390847293!11646146!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14095 invoked from network); 27 Jan 2014 18:28:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="94915793"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 18:28:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 13:28:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7qup-0006ne-KK;
	Mon, 27 Jan 2014 18:28:11 +0000
Date: Mon, 27 Jan 2014 18:28:08 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1390844023-23123-2-git-send-email-oleksandr.tyshchenko@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401271827230.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-2-git-send-email-oleksandr.tyshchenko@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 1/2] xen/arm: Add return value to
 smp_call_function_interrupt function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
> Let the function return an error if the action cannot be executed.

Unless you make the calling function (do_sgi) check the return
value, this patch won't change anything.


> Change-Id: Iace691850024656239326bf0e3c87b57cb1b8ab3
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/common/smp.c      |    7 +++++--
>  xen/include/xen/smp.h |    2 +-
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/smp.c b/xen/common/smp.c
> index 482a203..2700bd7 100644
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -20,6 +20,7 @@
>  #include <asm/processor.h>
>  #include <xen/spinlock.h>
>  #include <xen/smp.h>
> +#include <xen/errno.h>
>  
>  /*
>   * Structure and data for smp_call_function()/on_selected_cpus().
> @@ -75,14 +76,14 @@ out:
>      spin_unlock(&call_lock);
>  }
>  
> -void smp_call_function_interrupt(void)
> +int smp_call_function_interrupt(void)
>  {
>      void (*func)(void *info) = call_data.func;
>      void *info = call_data.info;
>      unsigned int cpu = smp_processor_id();
>  
>      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
> -        return;
> +        return -EPERM;
>  
>      irq_enter();
>  
> @@ -100,6 +101,8 @@ void smp_call_function_interrupt(void)
>      }
>  
>      irq_exit();
> +
> +    return 0;
>  }
>  
>  /*
> diff --git a/xen/include/xen/smp.h b/xen/include/xen/smp.h
> index 6febb56..6d05910 100644
> --- a/xen/include/xen/smp.h
> +++ b/xen/include/xen/smp.h
> @@ -61,7 +61,7 @@ static inline void on_each_cpu(
>  /*
>   * Call a function on the current CPU
>   */
> -void smp_call_function_interrupt(void);
> +int smp_call_function_interrupt(void);
>  
>  void smp_send_call_function_mask(const cpumask_t *mask);
>  
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:49:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rF4-0005oD-AD; Mon, 27 Jan 2014 18:49:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7rF2-0005o8-AZ
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 18:49:04 +0000
Received: from [85.158.137.68:36639] by server-11.bemta-3.messagelabs.com id
	D6/A4-19379-F1AA6E25; Mon, 27 Jan 2014 18:49:03 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390848541!11660711!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30791 invoked from network); 27 Jan 2014 18:49:01 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:49:01 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so6052256wgh.5
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 10:49:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=BqTjcRjGbcTlFpPrhNTcmhzf6m0gN93MYW6OKR1F8eU=;
	b=Zhw9Z4fe7bzsiim4cJ/683BJucVsWVkKOUMKR3hBtYIpCSnZn1AK++v13WLZDQTCAF
	h5xNO5dpMfggDX8fh3+CKx7GdOXjo+yY1HBbjrRabGfu3edfnHJJgDN3JzA5KVX7/mKO
	UDRDIPCfBTb4BL1OimWKf0N5RnCzkIy/2ycjVYo1obE8PshFp4te5p5lNIKZvMwHwMSX
	aW11Ibz5VRccL8AMKqNUK71M9PIXLD3czvzVIq18WVYGr1KbChvjRMr+n9B5Us1yEPLX
	PjautwzYyWeC/ZSb6OF9iGufdG8ckcmvuxiwmgx72eQ1ThH+VhqgtstDpbsevRLXQNx9
	hN/w==
MIME-Version: 1.0
X-Received: by 10.180.77.129 with SMTP id s1mr13028937wiw.56.1390848541522;
	Mon, 27 Jan 2014 10:49:01 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 10:49:01 -0800 (PST)
Date: Mon, 27 Jan 2014 18:49:01 +0000
X-Google-Sender-Auth: mfMUFf-JTOAaUjBQUMU44cKhDq4
Message-ID: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This information will be mirrored on the Xen 4.4 Roadmap wiki page:
 http://wiki.xen.org/wiki/Xen_Roadmap/4.4

I've just spent some time going through xen-devel, and it looks like
the main blocker we know about is the Windows install BSOD introduced
with the update to qemu 1.6.

I've divided the list of open bugs into "Open" (might be for 4.4) and
"Open, not for 4.4".  If there are any other important bugs that need
to be considered for this release which are not on the first list,
please let me know.

It's probably about time to start looking at the "cross-compatibility"
test matrix:

* Migrating from 4.3 to 4.4

* Compiling applications written for 4.3's libxl against 4.4's libxl

Anything else I missed?

The next question is: Once the qemu issue is fixed, how much more
testing do we need before we can be confident we've caught all the
necessary bugs?

I'm inclined to say that if we do one more test day and wait a week,
we should be ready.  Thoughts?

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013
* Code freezing point: 18 November 2013
* First RCs: 6 December 2013  <== WE ARE HERE
* Release: When it's ready (Probably by the end of February).

Last updated: 27 January 2014

== Completed ==

* Event channel scalability (FIFO event channels)

* Non-udev scripts for driver domains (non-Linux driver domains)

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support on libxl
 - Added Spice vdagent support
 - Added Spice clipboard sharing support
 - Spice usbredirection support for upstream qemu

* PVH domU (experimental only)

* pvgrub2 checked into grub upstream

* ARM64 guest

* Guest EFI booting (tianocore)

* kexec

* Testing: Xen on ARM

* Update to SeaBIOS 1.7.3.1

* Update to qemu 1.6.2

* SWIOTLB (in Linux 3.13)

* Disk: indirect descriptors (in 3.11)

* Reworked ocaml bindings

== Resolved since last update ==

* xl support for vnc and vnclisten options with PV guests

* xl needs to disallow PoD with PCI passthrough

== Open ==

* osstest windows-install failures
  > http://bugs.xenproject.org/xen/bug/29
  > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/
  Anthony investigating
  Blocker

* qemu-* parses "008" as octal in USB bus.addr format
  > http://bugs.xenproject.org/xen/bug/15
  > just needs documenting
  Anthony Perard to patch docs

* libxl / xl does not handle failure of remote qemu gracefully
  > Related to http://bugs.xenproject.org/xen/bug/30
  > Easiest way to reproduce:
  >  - set "vncunused=0" and do a local migrate
  >  - The "remote" qemu will fail because the vnc port is in use
  > The failure isn't the problem, but everything being stuck afterwards is
 Ian J investigating

* Claim mode and PoD
  > http://bugs.xenproject.org/xen/bug/32
  Probably not a blocker, but easily fixed
  status: Patch posted

* Disable IOMMU if no southbridge
 > http://bugs.xenproject.org/xen/bug/37

* qemu memory leak?
  > http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html

== Open, not for 4.4 ==

* qemu-upstream not freeing pirq
 > http://www.gossamer-threads.com/lists/xen/devel/281498
 > http://marc.info/?l=xen-devel&m=137265766424502
 status: patches posted; latest patches need testing.
 They haven't been tested because of the other passthrough issues.

 Not a blocker.

* Race in PV shutdown between tool detection and shutdown watch
 > http://www.gossamer-threads.com/lists/xen/devel/282467
 > Nothing to do with ACPI
 status: Patches posted, need more work, will be stalled for some time
 The fix is to the Linux side of things.
 Not a blocker.

* xl does not support specifying virtual function for passthrough device
 > http://bugs.xenproject.org/xen/bug/22
 Too much work to be a blocker.

* xl does not handle migrate interruption gracefully
  > If you start a localhost migrate, and press "Ctrl-C" in the middle,
  > you get two hung domains
 Ian J investigated -- can of worms, too big to be a blocker for 4.4

* Win2k3 SP2 RTC infinite loops
   > Regression introduced late in Xen-4.3 development
   owner: andrew.cooper@citrix
   status: patches posted, undergoing review. ( v2 ID
1386241748-9617-1-git-send-email-andrew.cooper3@citrix.com )

  > andyhhp: my proposed RTC fixes break migrate from older versions of
  > Xen, so I have to redesign it from scratch. no way it is going to
  > be ready for 4.4

* HPET interrupt stack overflow (when using hpet_broadcast mode and MSI
capable HPETs)
  owner: andyh@citrix
  status: patches posted, undergoing review iteration.
  > andyhhp: I have more work to do on the HPET series
  > andyhhp: no way it is going to be ready or safe for 4.4

* PCI hole resize support hvmloader/qemu-traditional/qemu-upstream
with PCI/GPU passthrough
  > http://bugs.xenproject.org/xen/bug/28
  > http://lists.xen.org/archives/html/xen-devel/2013-05/msg02813.html
  > Where Stefano writes:
  > 2) for Xen 4.4 rework the two patches above and improve
  > i440fx_update_pci_mem_hole: resizing the pci_hole subregion is not
  > enough, it also needs to be able to resize the system memory region
  > (xen.ram) to make room for the bigger pci_hole

  status: not going to be fixed for 4.4 either. Created bug #28.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rGA-0005tZ-4A; Mon, 27 Jan 2014 18:50:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W7rG8-0005tO-7X
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 18:50:12 +0000
Received: from [85.158.143.35:63954] by server-2.bemta-4.messagelabs.com id
	0D/CE-11386-36AA6E25; Mon, 27 Jan 2014 18:50:11 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390848608!1148890!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5702 invoked from network); 27 Jan 2014 18:50:10 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:50:10 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so6208253pbb.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 10:50:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=xzqm64+BFNwcJDGVIO2M+Qgoe99HC6MJJqEAwmqS0KM=;
	b=k6PKknXI4a34t7/P2zMqCWhvOONAMayL87J5ifO9YjaP1Km1qcPWS8ubbr7DXlixEw
	TKnW6a3Akw9YvdAotza+HO9+RjKG3EXTgf3nXBZ0DTVrpqLzl7OkaL38y+VMOQcNDyNL
	/cqjJcB+PvgoVjXBU170ptbZJ17Nq6ufB2gGWF2k5SEQh3CzSnooNXZcTfXeIxr10z7y
	u8OH9m9yNBOeT238CoRL8uhnCALcwu7jkLATo02IjFfFQYVmRG1ccdUu8badloNQOP++
	9Yi22LQJxyAjDiIC1L3bjPbfDUeRL8IrYAQw9NLGb9TaGt9pNsxT5PM177Sp1kbO3EC1
	9dxQ==
X-Received: by 10.66.13.138 with SMTP id h10mr4368848pac.148.1390848608255;
	Mon, 27 Jan 2014 10:50:08 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-186.amazon.com. [54.240.196.186])
	by mx.google.com with ESMTPSA id
	de3sm34155425pbb.33.2014.01.27.10.50.05 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 10:50:06 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 27 Jan 2014 10:50:03 -0800
Date: Mon, 27 Jan 2014 10:50:03 -0800
From: Matt Wilson <msw@linux.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20140127185001.GA9782@u109add4315675089e695.ant.amazon.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127160914.GA23059@phenom.dumpdata.com>
	<52E6871D.2050107@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E6871D.2050107@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 05:19:41PM +0100, Roger Pau Monné wrote:
> On 27/01/14 17:09, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> >> I've at least identified two possible memory leaks in blkback, both
> >> related to the shutdown path of a VBD:
> >>
> >> - We don't wait for any pending purge work to finish before cleaning
> >>   the list of free_pages. The purge work will call put_free_pages and
> >>   thus we might end up with pages being added to the free_pages list
> >>   after we have emptied it.
> >> - We don't wait for pending requests to end before cleaning persistent
> >>   grants and the list of free_pages. Again this can add pages to the
> >>   free_pages lists or persistent grants to the persistent_gnts
> >>   red-black tree.
> >>
> >> Also, add some checks in xen_blkif_free to make sure we are cleaning
> >> everything.
> >>
> >> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> Cc: David Vrabel <david.vrabel@citrix.com>
> >> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >> Cc: Matt Rushton <mrushton@amazon.com>
> >> Cc: Matt Wilson <msw@amazon.com>
> >> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> >> ---
> >> This should be applied after the patch:
> >>
> >> xen-blkback: fix memory leak when persistent grants are used
> >
> > Could you respin the series with the issues below fixed and
> > have said patch as part of the series. That way not only does
> > it have your SoB on it but it makes it easier to apply the patch
> > for lazy^H^H^Hbusy maintainers and makes it clear that you had
> > tested both of them.
> >
> > Also, please CC Jens Axboe on these patches.
>
> Ack, will do once the comments below are sorted out.

We'll keep an eye out for the resubmitted series and do some testing.

Thanks for taking this the extra step.

--msw

> >> From Matt Rushton & Matt Wilson and backported to stable.
> >>
> >> I've been able to create and destroy ~4000 guests while doing heavy IO
> >> operations with this patch on a 512M Dom0 without problems.
> >> ---
> >>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
> >>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
> >>  2 files changed, 28 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> >> index 30ef7b3..19925b7 100644
> >> --- a/drivers/block/xen-blkback/blkback.c
> >> +++ b/drivers/block/xen-blkback/blkback.c
> >> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
> >>  				struct pending_req *pending_req);
> >>  static void make_response(struct xen_blkif *blkif, u64 id,
> >>  			  unsigned short op, int st);
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
> >>
> >>  #define foreach_grant_safe(pos, n, rbtree, node) \
> >>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> >> @@ -625,6 +626,12 @@ purge_gnt_list:
> >>  			print_stats(blkif);
> >>  	}
> >>
> >> +	/* Drain pending IO */
> >> +	xen_blk_drain_io(blkif, true);
> >> +
> >> +	/* Drain pending purge work */
> >> +	flush_work(&blkif->persistent_purge_work);
> >> +
> >>  	/* Free all persistent grant pages */
> >>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> >>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> >> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
> >>  	return -EIO;
> >>  }
> >>
> >> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
> >>  {
> >>  	atomic_set(&blkif->drain, 1);
> >>  	do {
> >> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
> >>
> >>  		if (!atomic_read(&blkif->drain))
> >>  			break;
> >> -	} while (!kthread_should_stop());
> >> +	} while (!kthread_should_stop() || force);
> >>  	atomic_set(&blkif->drain, 0);
> >>  }
> >>
> >> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
> >>  	 * the proper response on the ring.
> >>  	 */
> >>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> >> -		xen_blkbk_unmap(pending_req->blkif,
> >> +		struct xen_blkif *blkif = pending_req->blkif;
> >> +
> >> +		xen_blkbk_unmap(blkif,
> >>  		                pending_req->segments,
> >>  		                pending_req->nr_pages);
> >> -		make_response(pending_req->blkif, pending_req->id,
> >> +		make_response(blkif, pending_req->id,
> >>  			      pending_req->operation, pending_req->status);
> >> -		xen_blkif_put(pending_req->blkif);
> >> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> >> -			if (atomic_read(&pending_req->blkif->drain))
> >> -				complete(&pending_req->blkif->drain_complete);
> >> +		free_req(blkif, pending_req);
> >> +		xen_blkif_put(blkif);
> >> +		if (atomic_read(&blkif->refcnt) <= 2) {
> >> +			if (atomic_read(&blkif->drain))
> >> +				complete(&blkif->drain_complete);
> >>  		}
> >> -		free_req(pending_req->blkif, pending_req);
> >>  	}
> >>  }
> >>
> >> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
> >>  	 * issue the WRITE_FLUSH.
> >>  	 */
> >>  	if (drain)
> >> -		xen_blk_drain_io(pending_req->blkif);
> >> +		xen_blk_drain_io(pending_req->blkif, false);
> >>
> >>  	/*
> >>  	 * If we have failed at this point, we need to undo the M2P override,
> >> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> >> index c2014a0..3c10281 100644
> >> --- a/drivers/block/xen-blkback/xenbus.c
> >> +++ b/drivers/block/xen-blkback/xenbus.c
> >> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
> >>  	blkif->persistent_gnts.rb_node = NULL;
> >>  	spin_lock_init(&blkif->free_pages_lock);
> >>  	INIT_LIST_HEAD(&blkif->free_pages);
> >> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
> >
> > Hm,
>
> See comment below.
>
> >>  	blkif->free_pages_num = 0;
> >>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
> >>
> >> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
> >>  	if (!atomic_dec_and_test(&blkif->refcnt))
> >>  		BUG();
> >>
> >> +	/* Make sure everything is drained before shutting down */
> >> +	BUG_ON(blkif->persistent_gnt_c != 0);
> >> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
> >> +	BUG_ON(blkif->free_pages_num != 0);
> >> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
> >
> > You don't seem to put anything on this list? Or even declare this?
> > Was there another patch in the series?
>
> No, the list is already used in current code, but it is initialized only
> before usage, now I need to make sure it's initialized even if not used, or:
>
> BUG_ON(!list_empty(&blkif->persistent_purge_list));
>
> Is going to fail.
>
> I will resend this and replace the other (now useless) initialization
> with a BUG_ON(!list_empty...
>
> >
> >> +	BUG_ON(!list_empty(&blkif->free_pages));
> >> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> >> +
> >>  	/* Check that there is no request in use */
> >>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
> >>  		list_del(&req->free_list);
> >>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rGA-0005tZ-4A; Mon, 27 Jan 2014 18:50:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W7rG8-0005tO-7X
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 18:50:12 +0000
Received: from [85.158.143.35:63954] by server-2.bemta-4.messagelabs.com id
	0D/CE-11386-36AA6E25; Mon, 27 Jan 2014 18:50:11 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390848608!1148890!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5702 invoked from network); 27 Jan 2014 18:50:10 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:50:10 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so6208253pbb.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 10:50:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=xzqm64+BFNwcJDGVIO2M+Qgoe99HC6MJJqEAwmqS0KM=;
	b=k6PKknXI4a34t7/P2zMqCWhvOONAMayL87J5ifO9YjaP1Km1qcPWS8ubbr7DXlixEw
	TKnW6a3Akw9YvdAotza+HO9+RjKG3EXTgf3nXBZ0DTVrpqLzl7OkaL38y+VMOQcNDyNL
	/cqjJcB+PvgoVjXBU170ptbZJ17Nq6ufB2gGWF2k5SEQh3CzSnooNXZcTfXeIxr10z7y
	u8OH9m9yNBOeT238CoRL8uhnCALcwu7jkLATo02IjFfFQYVmRG1ccdUu8badloNQOP++
	9Yi22LQJxyAjDiIC1L3bjPbfDUeRL8IrYAQw9NLGb9TaGt9pNsxT5PM177Sp1kbO3EC1
	9dxQ==
X-Received: by 10.66.13.138 with SMTP id h10mr4368848pac.148.1390848608255;
	Mon, 27 Jan 2014 10:50:08 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-186.amazon.com. [54.240.196.186])
	by mx.google.com with ESMTPSA id
	de3sm34155425pbb.33.2014.01.27.10.50.05 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 10:50:06 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 27 Jan 2014 10:50:03 -0800
Date: Mon, 27 Jan 2014 10:50:03 -0800
From: Matt Wilson <msw@linux.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20140127185001.GA9782@u109add4315675089e695.ant.amazon.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127160914.GA23059@phenom.dumpdata.com>
	<52E6871D.2050107@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E6871D.2050107@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 05:19:41PM +0100, Roger Pau Monné wrote:
> On 27/01/14 17:09, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> >> I've at least identified two possible memory leaks in blkback, both
> >> related to the shutdown path of a VBD:
> >>
> >> - We don't wait for any pending purge work to finish before cleaning
> >>   the list of free_pages. The purge work will call put_free_pages and
> >>   thus we might end up with pages being added to the free_pages list
> >>   after we have emptied it.
> >> - We don't wait for pending requests to end before cleaning persistent
> >>   grants and the list of free_pages. Again this can add pages to the
> >>   free_pages lists or persistent grants to the persistent_gnts
> >>   red-black tree.
> >>
> >> Also, add some checks in xen_blkif_free to make sure we are cleaning
> >> everything.
> >>
> >> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> Cc: David Vrabel <david.vrabel@citrix.com>
> >> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >> Cc: Matt Rushton <mrushton@amazon.com>
> >> Cc: Matt Wilson <msw@amazon.com>
> >> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> >> ---
> >> This should be applied after the patch:
> >>
> >> xen-blkback: fix memory leak when persistent grants are used
> >
> > Could you respin the series with the issues below fixed and
> > have said patch as part of the series. That way not only does
> > it have your SoB on it but it makes it easier to apply the patch
> > for lazy^H^H^Hbusy maintainers and makes it clear that you had
> > tested both of them.
> >
> > Also, please CC Jens Axboe on these patches.
>
> Ack, will do once the comments below are sorted out.

We'll keep an eye out for the resubmitted series and do some testing.

Thanks for taking the extra step on this.

--msw

> >> From Matt Rushton & Matt Wilson and backported to stable.
> >>
> >> I've been able to create and destroy ~4000 guests while doing heavy IO
> >> operations with this patch on a 512M Dom0 without problems.
> >> ---
> >>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
> >>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
> >>  2 files changed, 28 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> >> index 30ef7b3..19925b7 100644
> >> --- a/drivers/block/xen-blkback/blkback.c
> >> +++ b/drivers/block/xen-blkback/blkback.c
> >> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
> >>  				struct pending_req *pending_req);
> >>  static void make_response(struct xen_blkif *blkif, u64 id,
> >>  			  unsigned short op, int st);
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
> >>
> >>  #define foreach_grant_safe(pos, n, rbtree, node) \
> >>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> >> @@ -625,6 +626,12 @@ purge_gnt_list:
> >>  			print_stats(blkif);
> >>  	}
> >>
> >> +	/* Drain pending IO */
> >> +	xen_blk_drain_io(blkif, true);
> >> +
> >> +	/* Drain pending purge work */
> >> +	flush_work(&blkif->persistent_purge_work);
> >> +
> >>  	/* Free all persistent grant pages */
> >>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> >>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> >> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
> >>  	return -EIO;
> >>  }
> >>
> >> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
> >>  {
> >>  	atomic_set(&blkif->drain, 1);
> >>  	do {
> >> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
> >>
> >>  		if (!atomic_read(&blkif->drain))
> >>  			break;
> >> -	} while (!kthread_should_stop());
> >> +	} while (!kthread_should_stop() || force);
> >>  	atomic_set(&blkif->drain, 0);
> >>  }
> >>
> >> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
> >>  	 * the proper response on the ring.
> >>  	 */
> >>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> >> -		xen_blkbk_unmap(pending_req->blkif,
> >> +		struct xen_blkif *blkif = pending_req->blkif;
> >> +
> >> +		xen_blkbk_unmap(blkif,
> >>  		                pending_req->segments,
> >>  		                pending_req->nr_pages);
> >> -		make_response(pending_req->blkif, pending_req->id,
> >> +		make_response(blkif, pending_req->id,
> >>  			      pending_req->operation, pending_req->status);
> >> -		xen_blkif_put(pending_req->blkif);
> >> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> >> -			if (atomic_read(&pending_req->blkif->drain))
> >> -				complete(&pending_req->blkif->drain_complete);
> >> +		free_req(blkif, pending_req);
> >> +		xen_blkif_put(blkif);
> >> +		if (atomic_read(&blkif->refcnt) <= 2) {
> >> +			if (atomic_read(&blkif->drain))
> >> +				complete(&blkif->drain_complete);
> >>  		}
> >> -		free_req(pending_req->blkif, pending_req);
> >>  	}
> >>  }
> >>
> >> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
> >>  	 * issue the WRITE_FLUSH.
> >>  	 */
> >>  	if (drain)
> >> -		xen_blk_drain_io(pending_req->blkif);
> >> +		xen_blk_drain_io(pending_req->blkif, false);
> >>
> >>  	/*
> >>  	 * If we have failed at this point, we need to undo the M2P override,
> >> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> >> index c2014a0..3c10281 100644
> >> --- a/drivers/block/xen-blkback/xenbus.c
> >> +++ b/drivers/block/xen-blkback/xenbus.c
> >> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
> >>  	blkif->persistent_gnts.rb_node = NULL;
> >>  	spin_lock_init(&blkif->free_pages_lock);
> >>  	INIT_LIST_HEAD(&blkif->free_pages);
> >> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
> >
> > Hm,
>
> See comment below.
>
> >>  	blkif->free_pages_num = 0;
> >>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
> >>
> >> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
> >>  	if (!atomic_dec_and_test(&blkif->refcnt))
> >>  		BUG();
> >>
> >> +	/* Make sure everything is drained before shutting down */
> >> +	BUG_ON(blkif->persistent_gnt_c != 0);
> >> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
> >> +	BUG_ON(blkif->free_pages_num != 0);
> >> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
> >
> > You don't seem to put anything on this list? Or even declare this?
> > Was there another patch in the series?
>
> No, the list is already used in the current code, but it is initialized only
> before usage; now I need to make sure it's initialized even if not used, or:
>
> BUG_ON(!list_empty(&blkif->persistent_purge_list));
>
> is going to fail.
>
> I will resend this and replace the other (now useless) initialization
> with a BUG_ON(!list_empty...
>
> >
> >> +	BUG_ON(!list_empty(&blkif->free_pages));
> >> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> >> +
> >>  	/* Check that there is no request in use */
> >>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
> >>  		list_del(&req->free_list);
> >>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 18:51:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 18:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rHW-00061X-PJ; Mon, 27 Jan 2014 18:51:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W7rHU-00061J-Pz
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 18:51:36 +0000
Received: from [193.109.254.147:43301] by server-15.bemta-14.messagelabs.com
	id 1E/83-22186-8BAA6E25; Mon, 27 Jan 2014 18:51:36 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390848695!191853!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14942 invoked from network); 27 Jan 2014 18:51:35 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 18:51:35 -0000
Received: by mail-we0-f171.google.com with SMTP id w61so5770582wes.30
	for <xen-devel@lists.xen.org>; Mon, 27 Jan 2014 10:51:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=q9MOFHp66tkiJ7PwrohAtu+DdiY2SFohfZKg6pFiH4k=;
	b=eTdMf3mVg41kxbEJs85DuoX9N0bMANU0uX93fFIXXWbhc4FDl5xQVE7rJ2KMx+WIs5
	7Bg7Qypkxag1tHpHqhp/hQYP63h7/tZrWflI/fwTTRJctpc/L3wQWqYan+p0S0L+FkEC
	xFyfjrgsf0qlK6V6Kp3D6ueZKwg1g822OB4tRDOIWtdXzy1tqV92AZoSwVLMScouJM/K
	NjeIaHgPmVQwCqdSM72FSQNRhNChKiMetq6U68j53hJIIF6Vmld5RT8QLRD1PNXTaS4j
	jozxDzT3zZbmL1rN63Sdl0K4gIvMgR8jav5KGroLy0zhSSFqMZ12AsKWSg6hw2xmvI1C
	l79Q==
MIME-Version: 1.0
X-Received: by 10.180.165.15 with SMTP id yu15mr13096745wib.28.1390848695384; 
	Mon, 27 Jan 2014 10:51:35 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 27 Jan 2014 10:51:35 -0800 (PST)
In-Reply-To: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
References: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
Date: Mon, 27 Jan 2014 18:51:35 +0000
X-Google-Sender-Auth: BZvsvqzvwgSBp__vV3bFwNk0bvE
Message-ID: <CAFLBxZZSKXrWth2BWOUMbkL7N=seA=gUDQPnQDNAV3L6xEfigw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Diana Crisan <dcrisan@flexiant.com>, Alex Bligh <alex@alex.org.uk>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 6:49 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> It's probably about time to start looking at "cross-compatibility"
> test matrix:
>
> * Migrating from 4.3 to 4.4
>
> * Compiling applications written for 4.3's libxl against 4.4's libxl

Diana / Alex:

I think I recall that you guys had your own toolstack that you were
compiling against libxl.  If you guys could spare a few cycles, it
would be really helpful to try the new 4.4 libxl and make sure that
we're being suitably backwards-compatible.

Thanks,
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 19:00:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 19:00:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rPu-0006aZ-1Z; Mon, 27 Jan 2014 19:00:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W7rPs-0006aU-Pc
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 19:00:16 +0000
Received: from [85.158.143.35:3935] by server-1.bemta-4.messagelabs.com id
	DA/9E-02132-0CCA6E25; Mon, 27 Jan 2014 19:00:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390849214!1151497!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18848 invoked from network); 27 Jan 2014 19:00:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 19:00:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="96935990"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 19:00:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 14:00:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W7rPh-0007Eg-Nk;
	Mon, 27 Jan 2014 19:00:05 +0000
Date: Mon, 27 Jan 2014 19:00:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
> This patch is needed to avoid possible deadlocks in case of simultaneous
> occurrence of cross-interrupts.
> 
> Change-Id: I574b496442253a7b67a27e2edd793526c8131284
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/common/smp.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/smp.c b/xen/common/smp.c
> index 2700bd7..46d2fc6 100644
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -55,7 +55,11 @@ void on_selected_cpus(
>  
>      ASSERT(local_irq_is_enabled());
>  
> -    spin_lock(&call_lock);
> +    if (!spin_trylock(&call_lock)) {
> +        if (smp_call_function_interrupt())
> +            return;
> +        spin_lock(&call_lock);
> +    }
>  
>      cpumask_copy(&call_data.selected, selected);

So this is where you check for the return value of
smp_call_function_interrupt.

I think it would be better to move the on_selected_cpus call out of
maintenance_interrupt, after the write to EOIR (caused by
desc->handler->end(desc) at the end of xen/arch/arm/irq.c:do_IRQ).
Maybe to a tasklet.

Alternatively, as Ian suggested, we could increase the priority of SGIs,
but I am a bit wary of making that change at RC2.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 19:10:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 19:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rZE-00072Z-6v; Mon, 27 Jan 2014 19:09:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W7rZC-00072U-SL
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 19:09:55 +0000
Received: from [85.158.139.211:38450] by server-14.bemta-5.messagelabs.com id
	85/6F-24200-20FA6E25; Mon, 27 Jan 2014 19:09:54 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390849792!12262450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20604 invoked from network); 27 Jan 2014 19:09:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 19:09:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,730,1384300800"; d="scan'208";a="96940392"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Jan 2014 19:09:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 27 Jan 2014 14:09:51 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W7rZ8-0007Ny-Fg;
	Mon, 27 Jan 2014 19:09:50 +0000
Message-ID: <52E6AEF9.9050406@eu.citrix.com>
Date: Mon, 27 Jan 2014 19:09:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, <xen-devel@lists.xen.org>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 05:53 PM, Wei Liu wrote:
> The original code is wrong because:
> * claim mode wants to know the total number of pages needed while
>    original code provides the additional number of pages needed.
> * if pod is enabled memory will already be allocated by the time we try
>    to claim memory.
>
> So the fix would be:
> * move claim mode before actual memory allocation.
> * pass the right number of pages to hypervisor.
>
> The "right number of pages" should be number of pages of target memory
> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
>
> This fixes bug #32.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Konrad Wilk <konrad.wilk@oracle.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>

Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
> WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
> mode is completely broken. If this patch is deemed too complicated, we
> should flip the switch to disable claim mode by default for 4.4.

I think a more reasonable mitigation strategy would simply be to ignore 
claim mode when constructing a domain that uses PoD.

I'm inclined to take this one.  Since claim mode is on by default, the 
currently-working path should get exercised well before the release to 
shake out any bugs.  The other path doesn't work at all currently 
(AFAICT) unless you disable claim mode -- which is still available as a 
work-around, even if there is a bug in this patch.

I'll wait a day or two for others to speak up before giving it a formal 
ack, just in case.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 19:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 19:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rfT-0007BZ-7Z; Mon, 27 Jan 2014 19:16:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7rfS-0007BU-Kf
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 19:16:22 +0000
Received: from [193.109.254.147:60628] by server-15.bemta-14.messagelabs.com
	id 06/75-22186-580B6E25; Mon, 27 Jan 2014 19:16:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390850179!191089!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2060 invoked from network); 27 Jan 2014 19:16:21 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 19:16:21 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RJF9xu020458
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 19:15:09 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0RJF8hU013119
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 19:15:08 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RJF7Xa018773; Mon, 27 Jan 2014 19:15:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 11:15:07 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B537C1BFA72; Mon, 27 Jan 2014 14:15:06 -0500 (EST)
Date: Mon, 27 Jan 2014 14:15:06 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20140127191506.GB29967@phenom.dumpdata.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<52E6AEF9.9050406@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E6AEF9.9050406@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 07:09:45PM +0000, George Dunlap wrote:
> On 01/27/2014 05:53 PM, Wei Liu wrote:
> >The original code is wrong because:
> >* claim mode wants to know the total number of pages needed while
> >   original code provides the additional number of pages needed.
> >* if pod is enabled memory will already be allocated by the time we try
> >   to claim memory.
> >
> >So the fix would be:
> >* move claim mode before actual memory allocation.
> >* pass the right number of pages to hypervisor.
> >
> >The "right number of pages" should be number of pages of target memory
> >minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> >
> >This fixes bug #32.
> >
> >Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >Cc: Konrad Wilk <konrad.wilk@oracle.com>
> >Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >Cc: Ian Campbell <ian.campbell@citrix.com>
> >Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

and tomorrow I should be able to test it out as well.
> 
> >---
> >WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
> >mode is completely broken. If this patch is deemed too complicated, we
> >should flip the switch to disable claim mode by default for 4.4.
> 
> I think a more reasonable mitigation strategy would simply be to
> ignore claim mode when constructing a domain that uses PoD.
> 
> I'm inclined to take this one.  Since claim mode is on by default,
> the currently-working path should get exercised well before the
> release to shake out any bugs.  The other path doesn't work at all
> currently (AFAICT) unless you disable claim mode -- which is still
> available as a work-around, even if there is a bug in this patch.
> 
> I'll wait a day or two for others to speak up before giving it a
> formal ack, just in case.
> 
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 19:17:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 19:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7rg3-0007EQ-Mz; Mon, 27 Jan 2014 19:16:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W7rg2-0007ED-AV
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 19:16:58 +0000
Received: from [193.109.254.147:5107] by server-5.bemta-14.messagelabs.com id
	A4/71-03510-9A0B6E25; Mon, 27 Jan 2014 19:16:57 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390850215!194180!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8657 invoked from network); 27 Jan 2014 19:16:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 19:16:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RJGnR0022675
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 19:16:50 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0RJGlLH017965
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 19:16:48 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RJGlem027326; Mon, 27 Jan 2014 19:16:47 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 11:16:47 -0800
Message-ID: <52E6B0DA.8070708@oracle.com>
Date: Mon, 27 Jan 2014 14:17:46 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: jinsong.liu@intel.com, chegger@amazon.de, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/15/2008 05:50 PM, Aravind Gopalakrishnan wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off the top two bits and bit 4, which meant the
> register accesses never made it to the vmce_amd_* functions.
>
> We correct this problem by modifying the mask in this patch to allow
> AMD thresholding registers to fall to 'default' case which in turn
> allows vmce_amd_* functions to handle access to the registers.
>
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>   xen/arch/x86/cpu/mcheck/vmce.c |    4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
> index f6c35db..cb4fd12 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>   
>       *val = 0;
>   
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))

Which MSRs are going to be handled in the non-default cases? 
MSR0000_040[0:3] and MSRC000_040[0:3]? The first four already have 
explicit cases and I think it would be more readable if you explicitly 
created case statements for the latter four and had a simple 'switch(msr)'.

In fact, do MSRC000_040[0:3] even exist?

(You may also want to adjust your clock --- your emails are being sent 
from distant past ;-) )

-boris



>       {
>       case MSR_IA32_MC0_CTL:
>           /* stick all 1's to MCi_CTL */
> @@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>       int ret = 1;
>       unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
>   
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>       {
>       case MSR_IA32_MC0_CTL:
>           /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 19:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 19:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7s0W-00087v-Ec; Mon, 27 Jan 2014 19:38:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1W7s0U-00087n-Jy
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 19:38:06 +0000
Received: from [193.109.254.147:44512] by server-16.bemta-14.messagelabs.com
	id 67/93-20600-D95B6E25; Mon, 27 Jan 2014 19:38:05 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390851485!193964!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA5ODA1MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19439 invoked from network); 27 Jan 2014 19:38:05 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 19:38:05 -0000
Received: from smtphost1.dur.ac.uk (smtphost1.dur.ac.uk [129.234.252.1])
	by hermes2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0RJbuOr011308;
	Mon, 27 Jan 2014 19:38:00 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0RJboqY021344
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 19:37:50 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s0RJboRl004245;
	Mon, 27 Jan 2014 19:37:50 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s0RJboOO004241; Mon, 27 Jan 2014 19:37:50 GMT
Date: Mon, 27 Jan 2014 19:37:50 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
In-Reply-To: <52E64A66.3010608@m2r.biz>
Message-ID: <alpine.DEB.2.00.1401271930450.28622@procyon.dur.ac.uk>
References: <alpine.DEB.2.00.1401270013590.6358@procyon.dur.ac.uk>
	<52E64A66.3010608@m2r.biz>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s0RJbuOr011308
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Error after running pygrub with xen 4.4-rc2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Fabio Fantoni wrote:

> On 27/01/2014 02:05, M A Young wrote:
>> I get the following error after running pygrub when launching a guest with 
>> xen 4.4-rc2.
>> 
>> xenconsole: Could not open tty `': No such file or directory
>> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: console child 
>> [0] exited with error status 2
>> 
>> After this the boot continues normally, but once the console exits the tty 
>> settings are wrong and key presses aren't echoed. I believe the error above 
>> is from a xenconsole session launched when pygrub is, and this exits 
>> because the guest generally hasn't started. A second run of xenconsole 
>> serves the console output after that. I have occasionally noticed with this 
>> and earlier xen versions that I have to press ^] after pygrub finishes 
>> before the boot continues, so I would guess that in some cases the first 
>> xenconsole run lasts long enough to get a console connection which has to 
>> be ended before the boot proceeds. Hence I suspect the behaviour hasn't 
>> changed in this version, and only the error message is new.
>> 
>
> Try to check if xenconsoled is running.
> I had a similar problem some time ago, and it was because xenconsoled was no 
> longer running - it had perhaps crashed for unknown reasons, and I found 
> nothing in the logs. After a completely clean rebuild and reinstall the 
> problem no longer appeared.

I get the error with or without xenconsoled running. The difference is 
that if xenconsoled isn't running, the session exits after the error 
message rather than giving the start-up messages.

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 21:09:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 21:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7tQt-0003AB-2I; Mon, 27 Jan 2014 21:09:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W7tQs-0003A6-2G
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 21:09:26 +0000
Received: from [85.158.137.68:28727] by server-12.bemta-3.messagelabs.com id
	55/D6-20055-50BC6E25; Mon, 27 Jan 2014 21:09:25 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390856963!11666128!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8388 invoked from network); 27 Jan 2014 21:09:24 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-11.tower-31.messagelabs.com with SMTP;
	27 Jan 2014 21:09:24 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 4D37E5824D0;
	Mon, 27 Jan 2014 13:09:22 -0800 (PST)
Date: Mon, 27 Jan 2014 13:09:21 -0800 (PST)
Message-Id: <20140127.130921.7056971883805642.davem@davemloft.net>
To: david.vrabel@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <52E63382.6090503@citrix.com>
References: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
	<52E63382.6090503@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, Annie.li@oracle.com
Subject: Re: [Xen-devel] [PATCH net-next v5] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 27 Jan 2014 10:22:58 +0000

> I think this should be applied to net (and tagged as a stable candidate)
> rather than net-next, as this fixes a very big resource leak.

Then this subject line and commit message must be fixed to make it clear
that this is a BUG fix, rather than just a "clean up".

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 21:22:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 21:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7td1-0003OU-Qf; Mon, 27 Jan 2014 21:21:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7td0-0003OJ-6x
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 21:21:58 +0000
Received: from [85.158.139.211:45739] by server-1.bemta-5.messagelabs.com id
	2A/CE-21065-5FDC6E25; Mon, 27 Jan 2014 21:21:57 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390857715!12076101!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25582 invoked from network); 27 Jan 2014 21:21:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 21:21:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RLLnW3007643
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 21:21:50 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RLLmRA010296
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 21:21:48 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RLLmS4009388; Mon, 27 Jan 2014 21:21:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 13:21:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7588C1BFA72; Mon, 27 Jan 2014 16:21:46 -0500 (EST)
Date: Mon, 27 Jan 2014 16:21:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140127212146.GA32007@phenom.dumpdata.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> I've at least identified two possible memory leaks in blkback, both
> related to the shutdown path of a VBD:
>
> - We don't wait for any pending purge work to finish before cleaning
>   the list of free_pages. The purge work will call put_free_pages and
>   thus we might end up with pages being added to the free_pages list
>   after we have emptied it.
> - We don't wait for pending requests to end before cleaning persistent
>   grants and the list of free_pages. Again this can add pages to the
>   free_pages lists or persistent grants to the persistent_gnts
>   red-black tree.
>
> Also, add some checks in xen_blkif_free to make sure we are cleaning
> everything.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Matt Rushton <mrushton@amazon.com>
> Cc: Matt Wilson <msw@amazon.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
> This should be applied after the patch:
>
> xen-blkback: fix memory leak when persistent grants are used
>
> From Matt Rushton & Matt Wilson and backported to stable.
>
> I've been able to create and destroy ~4000 guests while doing heavy IO
> operations with this patch on a 512M Dom0 without problems.
> ---
>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>  2 files changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 30ef7b3..19925b7 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  				struct pending_req *pending_req);
>  static void make_response(struct xen_blkif *blkif, u64 id,
>  			  unsigned short op, int st);
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>
>  #define foreach_grant_safe(pos, n, rbtree, node) \
>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> @@ -625,6 +626,12 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>
> +	/* Drain pending IO */
> +	xen_blk_drain_io(blkif, true);
> +
> +	/* Drain pending purge work */
> +	flush_work(&blkif->persistent_purge_work);
> +

I think this means we can eliminate the refcnt usage - at least when
it comes to xen_blkif_disconnect, where if we initiate the shutdown
there is

239         atomic_dec(&blkif->refcnt);
240         wait_event(blkif->waiting_to_free, atomic_read(&blkif->refcnt) == 0);
241         atomic_inc(&blkif->refcnt);
242

which is done _after_ the thread is done executing. That check won't
be needed anymore, as xen_blk_drain_io, flush_work, and
free_persistent_gnts have pretty much drained every I/O out - so the
moment the thread exits there should be no need for waiting_to_free.
I think.


>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>  	return -EIO;
>  }
>
> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>  {
>  	atomic_set(&blkif->drain, 1);
>  	do {
> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>
>  		if (!atomic_read(&blkif->drain))
>  			break;
> -	} while (!kthread_should_stop());
> +	} while (!kthread_should_stop() || force);
>  	atomic_set(&blkif->drain, 0);
>  }
>
> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>  	 * the proper response on the ring.
>  	 */
>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> -		xen_blkbk_unmap(pending_req->blkif,
> +		struct xen_blkif *blkif = pending_req->blkif;
> +
> +		xen_blkbk_unmap(blkif,
>  		                pending_req->segments,
>  		                pending_req->nr_pages);
> -		make_response(pending_req->blkif, pending_req->id,
> +		make_response(blkif, pending_req->id,
>  			      pending_req->operation, pending_req->status);
> -		xen_blkif_put(pending_req->blkif);
> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> -			if (atomic_read(&pending_req->blkif->drain))
> -				complete(&pending_req->blkif->drain_complete);
> +		free_req(blkif, pending_req);
> +		xen_blkif_put(blkif);
> +		if (atomic_read(&blkif->refcnt) <= 2) {
> +			if (atomic_read(&blkif->drain))
> +				complete(&blkif->drain_complete);
>  		}
> -		free_req(pending_req->blkif, pending_req);

I keep coming back to this and I am not sure what to think - especially
in the context of WRITE_BARRIER and disconnecting the vbd.

You moved the 'free_req' to be done before you do atomic_read/dec.

Which means that we do:

	list_add(&req->free_list, &blkif->pending_free);
	wake_up(&blkif->pending_free_wq);

	atomic_dec
	if atomic_read <= 2 poke thread that is waiting for drain.


while in the past we did:

	atomic_dec
	if atomic_read <= 2 poke thread that is waiting for drain.

	list_add(&req->free_list, &blkif->pending_free);
	wake_up(&blkif->pending_free_wq);

which means that now we are giving the 'req' back _before_ we decrement
the refcnt.

Could that mean that __do_block_io_op takes it for a spin - oh
wait it won't as it is sitting on a WRITE_BARRIER and waiting:

1226         if (drain)
1227                 xen_blk_drain_io(pending_req->blkif);


But still that feels 'wrong'?

If you think this is right and OK, then perhaps we should
document this behavior in case somebody in the future wants to
rewrite this and starts working it out?


>
> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  	 * issue the WRITE_FLUSH.
>  	 */
>  	if (drain)
> -		xen_blk_drain_io(pending_req->blkif);
> +		xen_blk_drain_io(pending_req->blkif, false);
>
>  	/*
>  	 * If we have failed at this point, we need to undo the M2P override,
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index c2014a0..3c10281 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>  	blkif->persistent_gnts.rb_node = NULL;
>  	spin_lock_init(&blkif->free_pages_lock);
>  	INIT_LIST_HEAD(&blkif->free_pages);
> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
>  	blkif->free_pages_num = 0;
>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>
> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
>  	if (!atomic_dec_and_test(&blkif->refcnt))
>  		BUG();
>
> +	/* Make sure everything is drained before shutting down */
> +	BUG_ON(blkif->persistent_gnt_c != 0);
> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
> +	BUG_ON(blkif->free_pages_num != 0);
> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
> +	BUG_ON(!list_empty(&blkif->free_pages));
> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> +
>  	/* Check that there is no request in use */
>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
>  		list_del(&req->free_list);
> --
> 1.7.7.5 (Apple Git-26)
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 21:22:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 21:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7td1-0003OU-Qf; Mon, 27 Jan 2014 21:21:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7td0-0003OJ-6x
	for xen-devel@lists.xenproject.org; Mon, 27 Jan 2014 21:21:58 +0000
Received: from [85.158.139.211:45739] by server-1.bemta-5.messagelabs.com id
	2A/CE-21065-5FDC6E25; Mon, 27 Jan 2014 21:21:57 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390857715!12076101!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25582 invoked from network); 27 Jan 2014 21:21:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 21:21:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RLLnW3007643
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 21:21:50 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RLLmRA010296
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 21:21:48 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RLLmS4009388; Mon, 27 Jan 2014 21:21:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 13:21:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7588C1BFA72; Mon, 27 Jan 2014 16:21:46 -0500 (EST)
Date: Mon, 27 Jan 2014 16:21:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Message-ID: <20140127212146.GA32007@phenom.dumpdata.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> I've at least identified two possible memory leaks in blkback, both
> related to the shutdown path of a VBD:
> =

> - We don't wait for any pending purge work to finish before cleaning
>   the list of free_pages. The purge work will call put_free_pages and
>   thus we might end up with pages being added to the free_pages list
>   after we have emptied it.
> - We don't wait for pending requests to end before cleaning persistent
>   grants and the list of free_pages. Again this can add pages to the
>   free_pages lists or persistent grants to the persistent_gnts
>   red-black tree.
> =

> Also, add some checks in xen_blkif_free to make sure we are cleaning
> everything.
> =

> Signed-off-by: Roger Pau Monn=E9 <roger.pau@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Matt Rushton <mrushton@amazon.com>
> Cc: Matt Wilson <msw@amazon.com>
> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> ---
> This should be applied after the patch:
>
> xen-blkback: fix memory leak when persistent grants are used
>
> From Matt Rushton & Matt Wilson and backported to stable.
>
> I've been able to create and destroy ~4000 guests while doing heavy IO
> operations with this patch on a 512M Dom0 without problems.
> ---
>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>  2 files changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 30ef7b3..19925b7 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  				struct pending_req *pending_req);
>  static void make_response(struct xen_blkif *blkif, u64 id,
>  			  unsigned short op, int st);
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>
>  #define foreach_grant_safe(pos, n, rbtree, node) \
>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> @@ -625,6 +626,12 @@ purge_gnt_list:
>  			print_stats(blkif);
>  	}
>
> +	/* Drain pending IO */
> +	xen_blk_drain_io(blkif, true);
> +
> +	/* Drain pending purge work */
> +	flush_work(&blkif->persistent_purge_work);
> +

I think this means we can eliminate the refcnt usage - at least when
it comes to xen_blkif_disconnect, where we initiate the shutdown and
then have:

239         atomic_dec(&blkif->refcnt);
240         wait_event(blkif->waiting_to_free, atomic_read(&blkif->refcnt) == 0);
241         atomic_inc(&blkif->refcnt);
242

which is done _after_ the thread is done executing. That check won't
be needed anymore, as xen_blk_drain_io, flush_work, and
free_persistent_gnts have pretty much drained every I/O out - so the
moment the thread exits there should be no need for waiting_to_free.
I think.


>  	/* Free all persistent grant pages */
>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>  	return -EIO;
>  }
>
> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>  {
>  	atomic_set(&blkif->drain, 1);
>  	do {
> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>
>  		if (!atomic_read(&blkif->drain))
>  			break;
> -	} while (!kthread_should_stop());
> +	} while (!kthread_should_stop() || force);
>  	atomic_set(&blkif->drain, 0);
>  }
>
> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>  	 * the proper response on the ring.
>  	 */
>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> -		xen_blkbk_unmap(pending_req->blkif,
> +		struct xen_blkif *blkif = pending_req->blkif;
> +
> +		xen_blkbk_unmap(blkif,
>  		                pending_req->segments,
>  		                pending_req->nr_pages);
> -		make_response(pending_req->blkif, pending_req->id,
> +		make_response(blkif, pending_req->id,
>  			      pending_req->operation, pending_req->status);
> -		xen_blkif_put(pending_req->blkif);
> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> -			if (atomic_read(&pending_req->blkif->drain))
> -				complete(&pending_req->blkif->drain_complete);
> +		free_req(blkif, pending_req);
> +		xen_blkif_put(blkif);
> +		if (atomic_read(&blkif->refcnt) <= 2) {
> +			if (atomic_read(&blkif->drain))
> +				complete(&blkif->drain_complete);
>  		}
> -		free_req(pending_req->blkif, pending_req);

I keep coming back to this and I am not sure what to think - especially
in the context of WRITE_BARRIER and disconnecting the vbd.

You moved the 'free_req' to be done before you do atomic_read/dec.

Which means that we do:

	list_add(&req->free_list, &blkif->pending_free);
	wake_up(&blkif->pending_free_wq);

	atomic_dec
	if atomic_read <= 2 poke thread that is waiting for drain.


while in the past we did:

	atomic_dec
	if atomic_read <= 2 poke thread that is waiting for drain.

	list_add(&req->free_list, &blkif->pending_free);
	wake_up(&blkif->pending_free_wq);

which means that, with your change, we hand the 'req' back _before_
we decrement the refcnt.

Could that mean that __do_block_io_op takes it for a spin - oh
wait, it won't, as it is sitting on a WRITE_BARRIER and waiting:

1226         if (drain)
1227                 xen_blk_drain_io(pending_req->blkif);

But still, that feels 'wrong'?

If you think this is right and OK, then perhaps we should
document this behavior in case somebody in the future wants to
rewrite this and starts working it out?


>
> @@ -1224,7 +1233,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>  	 * issue the WRITE_FLUSH.
>  	 */
>  	if (drain)
> -		xen_blk_drain_io(pending_req->blkif);
> +		xen_blk_drain_io(pending_req->blkif, false);
>
>  	/*
>  	 * If we have failed at this point, we need to undo the M2P override,
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index c2014a0..3c10281 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>  	blkif->persistent_gnts.rb_node = NULL;
>  	spin_lock_init(&blkif->free_pages_lock);
>  	INIT_LIST_HEAD(&blkif->free_pages);
> +	INIT_LIST_HEAD(&blkif->persistent_purge_list);
>  	blkif->free_pages_num = 0;
>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>
> @@ -259,6 +260,14 @@ static void xen_blkif_free(struct xen_blkif *blkif)
>  	if (!atomic_dec_and_test(&blkif->refcnt))
>  		BUG();
>
> +	/* Make sure everything is drained before shutting down */
> +	BUG_ON(blkif->persistent_gnt_c != 0);
> +	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
> +	BUG_ON(blkif->free_pages_num != 0);
> +	BUG_ON(!list_empty(&blkif->persistent_purge_list));
> +	BUG_ON(!list_empty(&blkif->free_pages));
> +	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
> +
>  	/* Check that there is no request in use */
>  	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
>  		list_del(&req->free_list);
> --
> 1.7.7.5 (Apple Git-26)
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 21:24:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 21:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7tey-0003XO-GC; Mon, 27 Jan 2014 21:24:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7tex-0003XJ-H3
	for xen-devel@lists.xensource.com; Mon, 27 Jan 2014 21:23:59 +0000
Received: from [85.158.139.211:54386] by server-8.bemta-5.messagelabs.com id
	5C/81-29838-E6EC6E25; Mon, 27 Jan 2014 21:23:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390857836!12076294!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30374 invoked from network); 27 Jan 2014 21:23:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Jan 2014 21:23:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,731,1384300800"; d="scan'208";a="94989416"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Jan 2014 21:23:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 16:23:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7tes-0005ag-Os;
	Mon, 27 Jan 2014 21:23:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7tes-0001YB-Lq;
	Mon, 27 Jan 2014 21:23:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24546-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 27 Jan 2014 21:23:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24546: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5947064009550562729=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5947064009550562729==
Content-Type: text/plain

flight 24546 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24546/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3  8 guest-saverestore   fail REGR. vs. 22089

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10  fail like 22082
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail like 22091

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
baseline version:
 qemuu                b97307ecaad98360f41ea36cd9674ef810c4f8cf

------------------------------------------------------------
People who touched revisions under test:
  Alex Williamson <alex.williamson@redhat.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Amit Shah <amit.shah@redhat.com>
  Amos Kong <akong@redhat.com>
  Andreas Färber <afaerber@suse.de>
  Anthony Liguori <aliguori@amazon.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Bandan Das <bsd@redhat.com>
  Cole Robinson <crobinso@redhat.com>
  CongLi <coli@redhat.com>
  Eduardo Otubo <otubo@linux.vnet.ibm.com>
  Fam Zheng <famz@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Hans de Goede <hdegoede@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <james.hogan@imgtec.com> [mips]
  Jason Wang <jasowang@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Matthew Daley <mattjd@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Mike Frysinger <vapier@gentoo.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <pmoore@redhat.com>
  Petar Jovanovic <petar.jovanovic@imgtec.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tomoki Sekiyama <tomoki.sekiyama@hds.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Wenchao Xia <xiawenc@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 879 lines long.)


--===============5947064009550562729==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5947064009550562729==--

From xen-devel-bounces@lists.xen.org Mon Jan 27 22:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 22:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7uwZ-0006CO-1G; Mon, 27 Jan 2014 22:46:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7uwX-0006CJ-D2
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 22:46:13 +0000
Received: from [193.109.254.147:64842] by server-4.bemta-14.messagelabs.com id
	14/D3-03916-4B1E6E25; Mon, 27 Jan 2014 22:46:12 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390862770!216036!1
X-Originating-IP: [216.32.181.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12683 invoked from network); 27 Jan 2014 22:46:11 -0000
Received: from ch1ehsobe002.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.182)
	by server-7.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 22:46:11 -0000
Received: from mail50-ch1-R.bigfish.com (10.43.68.242) by
	CH1EHSOBE010.bigfish.com (10.43.70.60) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 22:46:10 +0000
Received: from mail50-ch1 (localhost [127.0.0.1])	by mail50-ch1-R.bigfish.com
	(Postfix) with ESMTP id 03AF91A0069;
	Mon, 27 Jan 2014 22:46:10 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579eh37d5kzbb2dI98dI9371I148cIe0eah1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h1155h)
Received: from mail50-ch1 (localhost.localdomain [127.0.0.1]) by mail50-ch1
	(MessageSwitch) id 1390862767333583_10044;
	Mon, 27 Jan 2014 22:46:07 +0000 (UTC)
Received: from CH1EHSMHS031.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.249])	by mail50-ch1.bigfish.com (Postfix) with ESMTP id
	4282B20050; Mon, 27 Jan 2014 22:46:07 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CH1EHSMHS031.bigfish.com
	(10.43.70.31) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 27 Jan 2014 22:46:04 +0000
X-WSS-ID: 0N030KN-08-856-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	298D0D1607E;	Mon, 27 Jan 2014 16:45:59 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 16:46:11 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 17:46:01 -0500
Message-ID: <52E6E1A9.3030903@amd.com>
Date: Mon, 27 Jan 2014 16:46:01 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
	<52E6B0DA.8070708@oracle.com>
In-Reply-To: <52E6B0DA.8070708@oracle.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, chegger@amazon.de, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/27/2014 1:17 PM, Boris Ostrovsky wrote:
> On 03/15/2008 05:50 PM, Aravind Gopalakrishnan wrote:
>>
>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c 
>> b/xen/arch/x86/cpu/mcheck/vmce.c
>> index f6c35db..cb4fd12 100644
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, 
>> uint32_t msr, uint64_t *val)
>>         *val = 0;
>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>
> Which MSRs are going to be handled in the non-default cases? 
> MSR0000_040[0:3] and MSRC000_040[0:3]? The first four already have 
> explicit cases and I think it would be more readable if you explicitly 
> created case statements for the latter four and had a simple 
> 'switch(msr)'.
>
In the non-default cases, we need to handle MSR0x40[0:7]: MSR0x40[0:3]
are bank 0 and the remaining ones are bank 1.
We can't do a simple switch(msr), since if we did, MSR0x40[4:7] would
fall through to 'default' (which is incorrect).

> In fact, do MSRC000_040[0:3] even exist?
>
Nope, they don't.

We only allow MCE MSRs here anyway:
ret = mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;

'mce_bank_msr' returns 1 only if we see MSR0x40[0:7] or some 'special
MSRs', which in AMD's case are the three thresholding regs
MSR_F10_MC4_MISC1 to MSR_F10_MC4_MISC3.

> (You may also want to adjust your clock --- your emails are being sent 
> from distant past ;-) )
>
>
Yes, thanks for pointing that out :)
I'll correct that and send V2

-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 22:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 22:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7uwZ-0006CO-1G; Mon, 27 Jan 2014 22:46:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7uwX-0006CJ-D2
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 22:46:13 +0000
Received: from [193.109.254.147:64842] by server-4.bemta-14.messagelabs.com id
	14/D3-03916-4B1E6E25; Mon, 27 Jan 2014 22:46:12 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390862770!216036!1
X-Originating-IP: [216.32.181.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12683 invoked from network); 27 Jan 2014 22:46:11 -0000
Received: from ch1ehsobe002.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.182)
	by server-7.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 22:46:11 -0000
Received: from mail50-ch1-R.bigfish.com (10.43.68.242) by
	CH1EHSOBE010.bigfish.com (10.43.70.60) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 22:46:10 +0000
Received: from mail50-ch1 (localhost [127.0.0.1])	by mail50-ch1-R.bigfish.com
	(Postfix) with ESMTP id 03AF91A0069;
	Mon, 27 Jan 2014 22:46:10 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579eh37d5kzbb2dI98dI9371I148cIe0eah1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h1155h)
Received: from mail50-ch1 (localhost.localdomain [127.0.0.1]) by mail50-ch1
	(MessageSwitch) id 1390862767333583_10044;
	Mon, 27 Jan 2014 22:46:07 +0000 (UTC)
Received: from CH1EHSMHS031.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.249])	by mail50-ch1.bigfish.com (Postfix) with ESMTP id
	4282B20050; Mon, 27 Jan 2014 22:46:07 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CH1EHSMHS031.bigfish.com
	(10.43.70.31) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 27 Jan 2014 22:46:04 +0000
X-WSS-ID: 0N030KN-08-856-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	298D0D1607E;	Mon, 27 Jan 2014 16:45:59 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 16:46:11 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 17:46:01 -0500
Message-ID: <52E6E1A9.3030903@amd.com>
Date: Mon, 27 Jan 2014 16:46:01 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
	<52E6B0DA.8070708@oracle.com>
In-Reply-To: <52E6B0DA.8070708@oracle.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, chegger@amazon.de, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/27/2014 1:17 PM, Boris Ostrovsky wrote:
> On 03/15/2008 05:50 PM, Aravind Gopalakrishnan wrote:
>>
>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c 
>> b/xen/arch/x86/cpu/mcheck/vmce.c
>> index f6c35db..cb4fd12 100644
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, 
>> uint32_t msr, uint64_t *val)
>>         *val = 0;
>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>
> Which MSRs are going to be handled in the non-default cases? 
> MSR0000_040[0:3] and MSRC000_040[0:3]? The first four already have 
> explicit cases and I think it would be more readable if you explicitly 
> created case statements for the latter four and had a simple 
> 'switch(msr)'.
>
In the non-default cases, we need to handle MSR 0x40[0:7]. MSR 0x40[0:3] 
is bank 0 and the remaining ones are bank 1.
We can't do a simple switch(msr): if we did, MSR 0x40[4:7] would fall 
through to 'default', which is incorrect.

> In fact, do MSRC000_040[0:3] even exist?
>
Nope. They don't..

We only allow MCE MSRs here anyway:
ret = mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;

'mce_bank_msr' returns 1 only if we see MSR 0x40[0:7] or some 'special 
MSRs', which in AMD's case are the three thresholding registers 
MSR_F10_MC4_MISC1 to MSR_F10_MC4_MISC3.

> (You may also want to adjust your clock --- your emails are being sent 
> from distant past ;-) )
>
>
Yes, thanks for pointing that out :)
I'll correct that and send V2

-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 22:46:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 22:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7uxA-0006FI-JU; Mon, 27 Jan 2014 22:46:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7ux9-0006F1-5z
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 22:46:51 +0000
Received: from [85.158.139.211:15180] by server-16.bemta-5.messagelabs.com id
	6F/3F-11843-AD1E6E25; Mon, 27 Jan 2014 22:46:50 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390862808!12281813!1
X-Originating-IP: [216.32.180.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10002 invoked from network); 27 Jan 2014 22:46:49 -0000
Received: from va3ehsobe002.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.12)
	by server-9.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 22:46:49 -0000
Received: from mail190-va3-R.bigfish.com (10.7.14.227) by
	VA3EHSOBE002.bigfish.com (10.7.40.22) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 22:46:48 +0000
Received: from mail190-va3 (localhost [127.0.0.1])	by
	mail190-va3-R.bigfish.com (Postfix) with ESMTP id 1181434011F;
	Mon, 27 Jan 2014 22:46:48 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(z37d5kze0eahzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h17326ah8275bh1de097h186068h5eeeKz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail190-va3 (localhost.localdomain [127.0.0.1]) by mail190-va3
	(MessageSwitch) id 13908628066678_12853;
	Mon, 27 Jan 2014 22:46:46 +0000 (UTC)
Received: from VA3EHSMHS018.bigfish.com (unknown [10.7.14.245])	by
	mail190-va3.bigfish.com (Postfix) with ESMTP id DDCA5480074;
	Mon, 27 Jan 2014 22:46:45 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS018.bigfish.com
	(10.7.99.28) with Microsoft SMTP Server id 14.16.227.3; Mon, 27 Jan 2014
	22:46:41 +0000
X-WSS-ID: 0N030LR-07-3ME-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	24B1112C0028;	Mon, 27 Jan 2014 16:46:38 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 16:46:48 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	17:46:39 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:44:14 -0600
Message-ID: <1390862655-3461-2-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 1/2 V2] hvm,
	svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

MSR 0xC000040A is marked as reserved from Fam15h onwards, and
MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10h BKDG.
So remove the unnecessary definition of the reserved MSR and
use MSR_IA32_MCx_MISC() to define MSR 0x413.

Also, according to the BKDG, MSR 0x413 is the first of the thresholding
registers; MSR 0xC0000408 and MSR 0xC0000409 are the second and third
respectively. So rework the #defines accordingly.

Fam15 Model 00h-0fh  BKDG reference:
http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
 xen/include/asm-x86/msr-index.h |    6 +++---
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..07c2684 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold registers */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: We report that the threshold register is unavailable
          * for OS use (locked by the BIOS).
@@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         vpmu_do_wrmsr(msr, msr_content);
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold registers */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: Threshold register is reported to be locked, so we ignore
          * all write accesses. This behaviour matches real HW, so guests should
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e5ffbf2 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -219,9 +219,9 @@
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
 
 /* AMD Family10h machine check MSRs */
-#define MSR_F10_MC4_MISC1		0xc0000408
-#define MSR_F10_MC4_MISC2		0xc0000409
-#define MSR_F10_MC4_MISC3		0xc000040A
+#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
+#define MSR_F10_MC4_MISC2		0xc0000408
+#define MSR_F10_MC4_MISC3		0xc0000409
 
 /* AMD Family10h Bus Unit MSRs */
 #define MSR_F10_BU_CFG 		0xc0011023
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 22:46:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 22:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7uxB-0006FW-21; Mon, 27 Jan 2014 22:46:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7ux9-0006F3-CA
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 22:46:51 +0000
Received: from [85.158.143.35:41500] by server-1.bemta-4.messagelabs.com id
	74/B3-02132-AD1E6E25; Mon, 27 Jan 2014 22:46:50 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390862808!1178583!1
X-Originating-IP: [207.46.163.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28842 invoked from network); 27 Jan 2014 22:46:49 -0000
Received: from co9ehsobe001.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.24)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 22:46:49 -0000
Received: from mail208-co9-R.bigfish.com (10.236.132.250) by
	CO9EHSOBE029.bigfish.com (10.236.130.92) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 22:46:47 +0000
Received: from mail208-co9 (localhost [127.0.0.1])	by
	mail208-co9-R.bigfish.com (Postfix) with ESMTP id 4AB5F4C0092;
	Mon, 27 Jan 2014 22:46:47 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail208-co9 (localhost.localdomain [127.0.0.1]) by mail208-co9
	(MessageSwitch) id 1390862804300384_1107;
	Mon, 27 Jan 2014 22:46:44 +0000 (UTC)
Received: from CO9EHSMHS025.bigfish.com (unknown [10.236.132.230])	by
	mail208-co9.bigfish.com (Postfix) with ESMTP id 449D8D4004A;
	Mon, 27 Jan 2014 22:46:44 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS025.bigfish.com
	(10.236.130.35) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 27 Jan 2014 22:46:41 +0000
X-WSS-ID: 0N030LQ-08-869-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	248D1D1608F;	Mon, 27 Jan 2014 16:46:38 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 16:46:49 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	17:46:41 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:44:15 -0600
Message-ID: <1390862655-3461-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 2/2 V2] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we were wrongly masking off the top two bits and bit 4, which meant the
register accesses never made it to the vmce_amd_* functions.

This patch corrects the problem by modifying the mask so that the AMD
thresholding registers fall through to the 'default' case, which in turn
allows the vmce_amd_* functions to handle accesses to those registers.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..cb4fd12 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 22:47:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 22:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7uxT-0006Jy-Hw; Mon, 27 Jan 2014 22:47:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1W7uxS-0006Ja-6J
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 22:47:10 +0000
Received: from [85.158.143.35:42368] by server-2.bemta-4.messagelabs.com id
	6F/FA-11386-DE1E6E25; Mon, 27 Jan 2014 22:47:09 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390862827!1174593!1
X-Originating-IP: [65.55.88.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16615 invoked from network); 27 Jan 2014 22:47:08 -0000
Received: from tx2ehsobe004.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.14)
	by server-2.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Jan 2014 22:47:08 -0000
Received: from mail223-tx2-R.bigfish.com (10.9.14.234) by
	TX2EHSOBE008.bigfish.com (10.9.40.28) with Microsoft SMTP Server id
	14.1.225.22; Mon, 27 Jan 2014 22:47:00 +0000
Received: from mail223-tx2 (localhost [127.0.0.1])	by
	mail223-tx2-R.bigfish.com (Postfix) with ESMTP id B226780020C;
	Mon, 27 Jan 2014 22:47:00 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchzz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h1155h)
Received: from mail223-tx2 (localhost.localdomain [127.0.0.1]) by mail223-tx2
	(MessageSwitch) id 1390862817954842_1453;
	Mon, 27 Jan 2014 22:46:57 +0000 (UTC)
Received: from TX2EHSMHS025.bigfish.com (unknown [10.9.14.229])	by
	mail223-tx2.bigfish.com (Postfix) with ESMTP id E4BA0200056;
	Mon, 27 Jan 2014 22:46:57 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by TX2EHSMHS025.bigfish.com
	(10.9.99.125) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 27 Jan 2014 22:46:48 +0000
X-WSS-ID: 0N030LQ-07-3MD-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2381A12C007C;	Mon, 27 Jan 2014 16:46:37 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 27 Jan 2014 16:46:46 -0600
Received: from sos-bantry0.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 27 Jan 2014
	17:46:37 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<boris.ostrovsky@oracle.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>
Date: Mon, 27 Jan 2014 16:44:13 -0600
Message-ID: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 0/2 V2] Fix AMD threshold register definitions
	and activate vmce_amd_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Patch 1: Corrects the AMD threshold register definitions.
Patch 2: Fixes the mask in vmce.c so the vmce_amd_* functions handle accesses to
         the AMD thresholding registers.

Changes from V1:
    - Corrected the clock skew on the V1 patch.

Aravind Gopalakrishnan (2):
  hvm,svm: Update AMD Thresholding MSR definitions
  mcheck,vmce: Allow vmce_amd_* functions to handle AMD thresholding
    MSRs

 xen/arch/x86/cpu/mcheck/vmce.c  |    4 ++--
 xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
 xen/include/asm-x86/msr-index.h |    6 +++---
 3 files changed, 11 insertions(+), 9 deletions(-)

-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 23:29:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 23:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7vbj-00084L-Up; Mon, 27 Jan 2014 23:28:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W7vbh-00082B-My
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 23:28:45 +0000
Received: from [193.109.254.147:11537] by server-7.bemta-14.messagelabs.com id
	C9/92-15500-CABE6E25; Mon, 27 Jan 2014 23:28:44 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390865322!216344!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14561 invoked from network); 27 Jan 2014 23:28:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 23:28:44 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0RNSbnb010691
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 27 Jan 2014 23:28:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0RNSZGN023613
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 27 Jan 2014 23:28:36 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0RNSZ4c003338; Mon, 27 Jan 2014 23:28:35 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 15:28:35 -0800
Message-ID: <52E6EBDE.7030303@oracle.com>
Date: Mon, 27 Jan 2014 18:29:34 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
	<52E6B0DA.8070708@oracle.com> <52E6E1A9.3030903@amd.com>
In-Reply-To: <52E6E1A9.3030903@amd.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: jinsong.liu@intel.com, chegger@amazon.de, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 05:46 PM, Aravind Gopalakrishnan wrote:
> On 1/27/2014 1:17 PM, Boris Ostrovsky wrote:
>> On 03/15/2008 05:50 PM, Aravind Gopalakrishnan wrote:
>>>
>>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c 
>>> b/xen/arch/x86/cpu/mcheck/vmce.c
>>> index f6c35db..cb4fd12 100644
>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, 
>>> uint32_t msr, uint64_t *val)
>>>         *val = 0;
>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>>
>> Which MSRs are going to be handled in the non-default cases? 
>> MSR0000_040[0:3] and MSRC000_040[0:3]? The first four already have 
>> explicit cases and I think it would be more readable if you 
>> explicitly created case statements for the latter four and had a 
>> simple 'switch(msr)'.

I didn't realize this was common code, so ignore this.

One thing I suggest you change: instead of (0x3<<30), use 0xc0000000, 
because the latter is immediately familiar to anyone used to AMD MSR 
spaces. And as for 0x13, I'd add a comment explaining why you have it.

Also, Intel processors have 0x413+ registers as well and Intel folks 
should probably confirm that your change won't break code there.

-boris

>>
> In non-default cases, we need to handle MSR0x40[0:7]. MSR0x40[0:3] is 
> bank 0 and the remaining ones are bank 1.
> We can't do a simple switch(msr): if we did, MSR0x40[4:7] would fall 
> through to 'default', which is incorrect.
>
>> In fact, do MSRC000_040[0:3] even exist?
>>
> Nope. They don't..
>
> We only allow MCE MSRs here anyway:
> ret = mce_bank_msr(cur, msr) ? bank_mce_rdmsr(cur, msr, val) : 0;
>
> 'mce_bank_msr' returns 1 only if we see MSR0x40[0:7] or some 'special 
> MSRs', which in AMD's case are the three thresholding regs 
> MSR_F10_MC4_MISC1 to MSR_F10_MC4_MISC3.
>
>> (You may also want to adjust your clock --- your emails are being 
>> sent from distant past ;-) )
>>
>>
> Yes, thanks for pointing that out :)
> I'll correct that and send V2
>
> -Aravind.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Jan 27 23:53:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jan 2014 23:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7vyu-0000Td-Ak; Mon, 27 Jan 2014 23:52:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W7vys-0000TW-Oq
	for xen-devel@lists.xen.org; Mon, 27 Jan 2014 23:52:42 +0000
Received: from [85.158.139.211:24047] by server-10.bemta-5.messagelabs.com id
	6E/ED-01405-941F6E25; Mon, 27 Jan 2014 23:52:41 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390866759!12287685!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5181 invoked from network); 27 Jan 2014 23:52:41 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Jan 2014 23:52:41 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 27 Jan 2014 16:52:35 -0700
Message-ID: <52E6F142.5010709@suse.com>
Date: Mon, 27 Jan 2014 16:52:34 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
In-Reply-To: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote:
> It's probably about time to start looking at the "cross-compatibility"
> test matrix:
>
> * Migrating from 4.3 to 4.4
>
> * Compiling applications written for 4.3's libxl against 4.4's libxl
>   

The libvirt libxl driver compiles cleanly with 4.2, 4.3, and 4.4 libxl.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 00:34:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 00:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7wdT-0002Dt-4T; Tue, 28 Jan 2014 00:34:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7wdR-0002Do-TC
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 00:34:38 +0000
Received: from [85.158.143.35:27542] by server-1.bemta-4.messagelabs.com id
	99/11-02132-D1BF6E25; Tue, 28 Jan 2014 00:34:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390869275!1176525!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1435 invoked from network); 28 Jan 2014 00:34:36 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 00:34:36 -0000
Received: by mail-ee0-f41.google.com with SMTP id e49so2597150eek.14
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 16:34:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=wy/CLXDKza9MnN7Yer+QO0f4maX12werXdTKAmL7A44=;
	b=GR6bbYAKpzVIYkWh420G/Jt7ZNxj3tewfFz4btd11Ajjuo4NVzqzp7PMgSxsXRtpDj
	5oyL7sovK6rESzWOikvXOgG7xMIk1+P3CYucE/hJmXpjVF1CNfutIyYnCMxDAy6Hdd49
	/vsYYrIlNBg10tqUKGwhhLAMRlacELabJc2nPUP/1zJSEicYoOu+1vQqpp3HFVINHt+y
	6aT/pbM/Iot1HPCPQQt0l3CmXebCLeEds9Dn3Vjro4707Jz8SEB7N6tGgQu02jKVSzXZ
	YqAyFzKf6vs/7/uQqaaD9GDHShS2K2nCWTm5j2XN2tDjiKicSNKulCOpN4YJQUrBUAjY
	ikuA==
X-Gm-Message-State: ALoCoQmaKUvJ+IoSiezfXoO+Rj9TCqJhDiZItKzegDdO70buQ9gVs+c0ukVQL8XilBYF02pDinXK
X-Received: by 10.14.98.129 with SMTP id v1mr27462032eef.5.1390869275332;
	Mon, 27 Jan 2014 16:34:35 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm48745029ees.4.2014.01.27.16.34.34
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 27 Jan 2014 16:34:34 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org, konrad.wilk@oracle.com,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com
Date: Tue, 28 Jan 2014 00:34:29 +0000
Message-Id: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be called
	very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
all CPUs are already online, which means the CPU notifier will never fire
for them.

Therefore, when a secondary CPU receives an interrupt, Linux crashes
because the event channel structure for that processor was never
initialized.

Fix this by calling the init function on every online CPU when the
event channel FIFO driver is initialized.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 drivers/xen/events/events_fifo.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 1de2a19..15498ab 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -410,12 +410,14 @@ static struct notifier_block evtchn_fifo_cpu_notifier = {
 
 int __init xen_evtchn_fifo_init(void)
 {
-	int cpu = get_cpu();
+	int cpu;
 	int ret;
 
-	ret = evtchn_fifo_init_control_block(cpu);
-	if (ret < 0)
-		goto out;
+	for_each_online_cpu(cpu) {
+		ret = evtchn_fifo_init_control_block(cpu);
+		if (ret < 0)
+			goto out;
+	}
 
 	pr_info("Using FIFO-based ABI\n");
 
@@ -423,6 +425,5 @@ int __init xen_evtchn_fifo_init(void)
 
 	register_cpu_notifier(&evtchn_fifo_cpu_notifier);
 out:
-	put_cpu();
 	return ret;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 00:34:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 00:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7wdT-0002Dt-4T; Tue, 28 Jan 2014 00:34:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7wdR-0002Do-TC
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 00:34:38 +0000
Received: from [85.158.143.35:27542] by server-1.bemta-4.messagelabs.com id
	99/11-02132-D1BF6E25; Tue, 28 Jan 2014 00:34:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390869275!1176525!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1435 invoked from network); 28 Jan 2014 00:34:36 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 00:34:36 -0000
Received: by mail-ee0-f41.google.com with SMTP id e49so2597150eek.14
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 16:34:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=wy/CLXDKza9MnN7Yer+QO0f4maX12werXdTKAmL7A44=;
	b=GR6bbYAKpzVIYkWh420G/Jt7ZNxj3tewfFz4btd11Ajjuo4NVzqzp7PMgSxsXRtpDj
	5oyL7sovK6rESzWOikvXOgG7xMIk1+P3CYucE/hJmXpjVF1CNfutIyYnCMxDAy6Hdd49
	/vsYYrIlNBg10tqUKGwhhLAMRlacELabJc2nPUP/1zJSEicYoOu+1vQqpp3HFVINHt+y
	6aT/pbM/Iot1HPCPQQt0l3CmXebCLeEds9Dn3Vjro4707Jz8SEB7N6tGgQu02jKVSzXZ
	YqAyFzKf6vs/7/uQqaaD9GDHShS2K2nCWTm5j2XN2tDjiKicSNKulCOpN4YJQUrBUAjY
	ikuA==
X-Gm-Message-State: ALoCoQmaKUvJ+IoSiezfXoO+Rj9TCqJhDiZItKzegDdO70buQ9gVs+c0ukVQL8XilBYF02pDinXK
X-Received: by 10.14.98.129 with SMTP id v1mr27462032eef.5.1390869275332;
	Mon, 27 Jan 2014 16:34:35 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm48745029ees.4.2014.01.27.16.34.34
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 27 Jan 2014 16:34:34 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org, konrad.wilk@oracle.com,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com
Date: Tue, 28 Jan 2014 00:34:29 +0000
Message-Id: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be called
	very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
all CPUs are already online, so the CPU hotplug notifier will never be
called for them.

Therefore, when a secondary CPU receives an interrupt, Linux crashes
because the event channel control block for that processor has not been
initialized.

Fix this by initializing the control block for every online CPU when
the event channel FIFO driver is initialized.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 drivers/xen/events/events_fifo.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 1de2a19..15498ab 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -410,12 +410,14 @@ static struct notifier_block evtchn_fifo_cpu_notifier = {
 
 int __init xen_evtchn_fifo_init(void)
 {
-	int cpu = get_cpu();
+	int cpu;
 	int ret;
 
-	ret = evtchn_fifo_init_control_block(cpu);
-	if (ret < 0)
-		goto out;
+	for_each_online_cpu(cpu) {
+		ret = evtchn_fifo_init_control_block(cpu);
+		if (ret < 0)
+			goto out;
+	}
 
 	pr_info("Using FIFO-based ABI\n");
 
@@ -423,6 +425,5 @@ int __init xen_evtchn_fifo_init(void)
 
 	register_cpu_notifier(&evtchn_fifo_cpu_notifier);
 out:
-	put_cpu();
 	return ret;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 01:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 01:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7x5n-0007Az-RE; Tue, 28 Jan 2014 01:03:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W7x5l-0007Au-Tt
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 01:03:54 +0000
Received: from [85.158.139.211:22504] by server-12.bemta-5.messagelabs.com id
	BD/BE-30017-8F107E25; Tue, 28 Jan 2014 01:03:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390871032!9594841!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10218 invoked from network); 28 Jan 2014 01:03:52 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 01:03:52 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so2595773eek.38
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 17:03:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=hgYpr/CIX+QuH3Yk0iTshsrQ4a372p/o8+Jz4egz2xg=;
	b=QZVi/mJGc0hGK9uubQqSBnBrxvBoHzPlwRcd33syms41Lc+DxlGddy1d1lZiGmgA9O
	DQYlzVUBXbIL5TDfhTWZin2BVT1e7WWdagb4oDrqRrNtvQPw4VGOBC4yypuo8InnDJ30
	K9X7eZkMvzJrujjhyIOjRyNLvTJlZ0pL5bS1Zf4ICamw9ogAbqxCJWmxgpY0Qbf6bT7J
	Gjkr5GOaJrkjNZQOn9rswAIKIlPx45N5StfH4NEdYeC7RJ0bQw6MlkTPdgQYRd+rVaM2
	AHPcGprq4eIRKnBUvJnR/4qjXmlw4S1Aq4mcyq+1apB+1eTpuvNCj1umm8qG1W97yS1Q
	vXsg==
X-Gm-Message-State: ALoCoQlXsmcixFmKXx3RAPmnLngfSrmK8CD5HGb8i1RDoR1B1NuKHuPnPBOGs7m7TD94EbFtjlQS
X-Received: by 10.14.39.3 with SMTP id c3mr28087973eeb.4.1390871031823;
	Mon, 27 Jan 2014 17:03:51 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	z49sm48960563eeo.10.2014.01.27.17.03.50 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 27 Jan 2014 17:03:51 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org, konrad.wilk@oracle.com,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com
Date: Tue, 28 Jan 2014 01:03:45 +0000
Message-Id: <1390871025-29079-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, patches@linaro.org, ian.campbell@citrix.com,
	Julien Grall <julien.grall@linaro.org>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen/gnttab: Use phys_addr_t to describe the
	grant frame base address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, the physical address size can be 32 or 64 bits (when
CONFIG_ARCH_PHYS_ADDR_T_64BIT is enabled), so we can't assume that the
grant frame base address always fits in an unsigned long. Use
phys_addr_t instead of unsigned long as the argument type for
gnttab_setup_auto_xlat_frames.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 arch/arm/xen/enlighten.c  |    6 +++---
 drivers/xen/grant-table.c |    6 +++---
 include/xen/grant_table.h |    2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 2162172..293eeea 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -208,7 +208,7 @@ static int __init xen_guest_init(void)
 	const char *version = NULL;
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
-	unsigned long grant_frames;
+	phys_addr_t grant_frames;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -227,8 +227,8 @@ static int __init xen_guest_init(void)
 		return 0;
 	grant_frames = res.start;
 	xen_events_irq = irq_of_parse_and_map(node, 0);
-	pr_info("Xen %s support found, events_irq=%d gnttab_frame_pfn=%lx\n",
-			version, xen_events_irq, (grant_frames >> PAGE_SHIFT));
+	pr_info("Xen %s support found, events_irq=%d gnttab_frame=%pa\n",
+			version, xen_events_irq, &grant_frames);
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 1ce1c40..b84e3ab 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -837,7 +837,7 @@ unsigned int gnttab_max_grant_frames(void)
 }
 EXPORT_SYMBOL_GPL(gnttab_max_grant_frames);
 
-int gnttab_setup_auto_xlat_frames(unsigned long addr)
+int gnttab_setup_auto_xlat_frames(phys_addr_t addr)
 {
 	xen_pfn_t *pfn;
 	unsigned int max_nr_gframes = __max_nr_grant_frames();
@@ -849,8 +849,8 @@ int gnttab_setup_auto_xlat_frames(unsigned long addr)
 
 	vaddr = xen_remap(addr, PAGE_SIZE * max_nr_gframes);
 	if (vaddr == NULL) {
-		pr_warn("Failed to ioremap gnttab share frames (addr=0x%08lx)!\n",
-			addr);
+		pr_warn("Failed to ioremap gnttab share frames (addr=%pa)!\n",
+			&addr);
 		return -ENOMEM;
 	}
 	pfn = kcalloc(max_nr_gframes, sizeof(pfn[0]), GFP_KERNEL);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 5acb1e4..a5af2a2 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -185,7 +185,7 @@ struct grant_frames {
 };
 extern struct grant_frames xen_auto_xlat_grant_frames;
 unsigned int gnttab_max_grant_frames(void);
-int gnttab_setup_auto_xlat_frames(unsigned long addr);
+int gnttab_setup_auto_xlat_frames(phys_addr_t addr);
 void gnttab_free_auto_xlat_frames(void);
 
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 01:40:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 01:40:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7xeZ-0008HF-0I; Tue, 28 Jan 2014 01:39:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W7xeW-0008HA-UP
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 01:39:49 +0000
Received: from [85.158.137.68:16271] by server-7.bemta-3.messagelabs.com id
	2A/2C-27599-46A07E25; Tue, 28 Jan 2014 01:39:48 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390873184!11701268!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14734 invoked from network); 28 Jan 2014 01:39:47 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 01:39:47 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 27 Jan 2014 18:39:38 -0700
Message-ID: <52E70A58.2060002@suse.com>
Date: Mon, 27 Jan 2014 18:39:36 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>	<21216.62800.746512.422459@mariner.uk.xensource.com>	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
In-Reply-To: <21218.24466.92095.134875@mariner.uk.xensource.com>
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[Adding libvirt list...]

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> BTW, I only see the crash when the save/restore script is running.  I
>> stopped the other scripts and domains, running only save/restore on a
>> single domain, and see the crash rather quickly (within 10 iterations).
>>     
>
> I'll look at the libvirt code, but:
>
> With a recurring timeout, how can you ever know it's cancelled ?
> There might be threads out there, which don't hold any locks, which
> are in the process of executing a callback for a timeout.  That might
> be arbitrarily delayed from the pov of the rest of the program.
>
> E.g.:
>
>  Thread A                                             Thread B
>
>    invoke some libxl operation
> X    do some libxl stuff
> X    register timeout (libxl)
> XV     record timeout info
> X    do some more libxl stuff
>      ...
> X    do some more libxl stuff
> X    deregister timeout (libxl internal)
> X     converted to request immediate timeout
> XV     record new timeout info
> X      release libvirt event loop lock
>                                             entering libvirt event loop
>                                        V     observe timeout is immediate
>                                        V      need to do callback
>                                                call libxl driver
>
>       entering libvirt event loop
>  V     observe timeout is immediate
>  V      need to do callback
>          call libxl driver
>            call libxl
>   X          libxl sees timeout is live
>   X          libxl does libxl stuff
>          libxl driver deregisters
>  V         record lack of timeout
>          free driver's timeout struct
>                                                call libxl
>                                       X          libxl sees timeout is dead
>                                       X          libxl does nothing
>                                              libxl driver deregisters
>                                        V       CRASH due to deregistering
>                                        V        already-deregistered timeout
>
> If this is how things are, then I think there is no sane way to use
> libvirt's timeouts (!)
>   

Looking at the libvirt code again, it seems a single thread services the
event loop. See virNetServerRun() in src/util/virnetserver.c. Indeed, I
see the same thread ID in all the timer and fd callbacks. One of the
libvirt core devs can correct me if I'm wrong.

Regards,
Jim


> In principle I guess the driver could keep its per-timeout structs
> around forever and remember whether they've been deregistered or not.
> Each one would have to have a lock in it.
>
> But if you think about it, if you have 10 threads all running the
> event loop and you set a timeout to zero, doesn't that mean that every
> thread's event loop should do the timeout callback as fast as it can ?
> That could be a lot of wasted effort.
>
> The best solution would appear to be to provide a non-recurring
> callback.
>
>   
>> I'm not so thrilled with the timeout handling code in the libvirt libxl
>> driver.  The driver maintains a list of active timeouts because IIRC,
>> there were cases when the driver received timeout deregistrations when
>> calling libxl_ctx_free, at which point some of the associated structures
>> were freed.  The idea was to call libxl_osevent_occurred_timeout on any
>> active timeouts before freeing libxlDomainObjPrivate and its contents.
>>     
>
> libxl does deregister fd callbacks in libxl_ctx_free.
>
> But libxl doesn't currently "deregister" any timeouts in
> libxl_ctx_free; indeed it would be a bit daft for it to do so as at
> libxl_ctx_free there are no aos running so there would be nothing to
> time out.
>
> But there is a difficulty with timeouts which libxl has set to occur
> immediately but which have not yet actually had the callback.  The the
> application cannot call libxl_ctx_free with such timeouts outstanding,
> because that would imply later calling back into libxl with a stale
> ctx.
>
> (Looking at the code I see that the "nexi" are never actually freed.
> Bah.)
>
> Thanks,
> Ian.
>
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 01:40:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 01:40:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7xeZ-0008HF-0I; Tue, 28 Jan 2014 01:39:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W7xeW-0008HA-UP
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 01:39:49 +0000
Received: from [85.158.137.68:16271] by server-7.bemta-3.messagelabs.com id
	2A/2C-27599-46A07E25; Tue, 28 Jan 2014 01:39:48 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390873184!11701268!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14734 invoked from network); 28 Jan 2014 01:39:47 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 01:39:47 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 27 Jan 2014 18:39:38 -0700
Message-ID: <52E70A58.2060002@suse.com>
Date: Mon, 27 Jan 2014 18:39:36 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>	<21216.62800.746512.422459@mariner.uk.xensource.com>	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
In-Reply-To: <21218.24466.92095.134875@mariner.uk.xensource.com>
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[Adding libvirt list...]

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> BTW, I only see the crash when the save/restore script is running.  I
>> stopped the other scripts and domains, running only save/restore on a
>> single domain, and see the crash rather quickly (within 10 iterations).
>>     
>
> I'll look at the libvirt code, but:
>
> With a recurring timeout, how can you ever know it's cancelled ?
> There might be threads out there, which don't hold any locks, which
> are in the process of executing a callback for a timeout.  That might
> be arbitrarily delayed from the pov of the rest of the program.
>
> E.g.:
>
>  Thread A                                             Thread B
>
>    invoke some libxl operation
> X    do some libxl stuff
> X    register timeout (libxl)
> XV     record timeout info
> X    do some more libxl stuff
>      ...
> X    do some more libxl stuff
> X    deregister timeout (libxl internal)
> X     converted to request immediate timeout
> XV     record new timeout info
> X      release libvirt event loop lock
>                                             entering libvirt event loop
>                                        V     observe timeout is immediate
>                                        V      need to do callback
>                                                call libxl driver
>
>       entering libvirt event loop
>  V     observe timeout is immediate
>  V      need to do callback
>          call libxl driver
>            call libxl
>   X          libxl sees timeout is live
>   X          libxl does libxl stuff
>          libxl driver deregisters
>  V         record lack of timeout
>          free driver's timeout struct
>                                                call libxl
>                                       X          libxl sees timeout is dead
>                                       X          libxl does nothing
>                                              libxl driver deregisters
>                                        V       CRASH due to deregistering
>                                        V        already-deregistered timeout
>
> If this is how things are, then I think there is no sane way to use
> libvirt's timeouts (!)
>   

Looking at the libvirt code again, it seems a single thread services the
event loop. See virNetServerRun() in src/util/virnetserver.c. Indeed, I
see the same thread ID in all the timer and fd callbacks. One of the
libvirt core devs can correct me if I'm wrong.

Regards,
Jim


> In principle I guess the driver could keep its per-timeout structs
> around forever and remember whether they've been deregistered or not.
> Each one would have to have a lock in it.
>
> But if you think about it, if you have 10 threads all running the
> event loop and you set a timeout to zero, doesn't that mean that every
> thread's event loop should do the timeout callback as fast as it can ?
> That could be a lot of wasted effort.
>
> The best solution would appear to be to provide a non-recurring
> callback.
>
>   
>> I'm not so thrilled with the timeout handling code in the libvirt libxl
>> driver.  The driver maintains a list of active timeouts because IIRC,
>> there were cases when the driver received timeout deregistrations when
>> calling libxl_ctx_free, at which point some of the associated structures
>> were freed.  The idea was to call libxl_osevent_occurred_timeout on any
>> active timeouts before freeing libxlDomainObjPrivate and its contents.
>>     
>
> libxl does deregister fd callbacks in libxl_ctx_free.
>
> But libxl doesn't currently "deregister" any timeouts in
> libxl_ctx_free; indeed it would be a bit daft for it to do so as at
> libxl_ctx_free there are no aos running so there would be nothing to
> time out.
>
> But there is a difficulty with timeouts which libxl has set to occur
> immediately but which have not yet actually had the callback.  The
> application cannot call libxl_ctx_free with such timeouts outstanding,
> because that would imply later calling back into libxl with a stale
> ctx.
>
> (Looking at the code I see that the "nexi" are never actually freed.
> Bah.)
>
> Thanks,
> Ian.
>
>   

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 01:56:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 01:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7xuB-0000DN-0K; Tue, 28 Jan 2014 01:55:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W7xu9-0000DI-UK
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 01:55:58 +0000
Received: from [85.158.139.211:13523] by server-8.bemta-5.messagelabs.com id
	D7/B3-29838-D2E07E25; Tue, 28 Jan 2014 01:55:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390874155!12302457!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4999 invoked from network); 28 Jan 2014 01:55:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 01:55:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S1trQF006711
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 01:55:54 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S1tq42013129
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 01:55:53 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S1tqxX013120; Tue, 28 Jan 2014 01:55:52 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 17:55:51 -0800
Date: Mon, 27 Jan 2014 17:55:50 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140127175550.4cc67171@mantra.us.oracle.com>
In-Reply-To: <1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Xen-devel@lists.xensource.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Jan,

Because of your changes to XENMEM_add_to_physmap_batch, I had to rework
the non-xsm changes in this patch. Please take a look and let me know if
you are OK with the following.

Thanks
Mukesh

--------

Subject: [PATCH] pvh: xsm changes for add to physmap

In preparation for the next patch, update xsm_add_to_physmap to allow
checking of a foreign domain: the current domain must have the right to
update the mappings of the target domain with pages from the foreign
domain.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/arch/x86/mm.c        |    4 ++--
 xen/common/memory.c      |   20 +++++++++++++++++---
 xen/include/asm-x86/mm.h |    3 +++
 xen/include/xsm/dummy.h  |   10 ++++++++--
 xen/include/xsm/xsm.h    |    6 +++---
 xen/xsm/flask/hooks.c    |    9 +++++++--
 6 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 172c68c..467976c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2794,7 +2794,7 @@ int new_guest_cr3(unsigned long mfn)
     return rc;
 }
 
-static struct domain *get_pg_owner(domid_t domid)
+struct domain *get_pg_owner(domid_t domid)
 {
     struct domain *pg_owner = NULL, *curr = current->domain;
 
@@ -2837,7 +2837,7 @@ static struct domain *get_pg_owner(domid_t domid)
     return pg_owner;
 }
 
-static void put_pg_owner(struct domain *pg_owner)
+void put_pg_owner(struct domain *pg_owner)
 {
     rcu_unlock_domain(pg_owner);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5a0efd5..aff0ebd 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -838,7 +838,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d);
+        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d, NULL);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -860,7 +860,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case XENMEM_add_to_physmap_batch:
     {
         struct xen_add_to_physmap_batch xatpb;
-        struct domain *d;
+        struct domain *d, *fd = NULL;
 
         BUILD_BUG_ON((typeof(xatpb.size))-1 >
                      (UINT_MAX >> MEMOP_EXTENT_SHIFT));
@@ -883,7 +883,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d);
+#ifdef CONFIG_X86
+        if ( xatpb.space == XENMAPSPACE_gmfn_foreign )
+        {
+            if ( (fd = get_pg_owner(xatpb.foreign_domid)) == NULL )
+            {
+                rcu_unlock_domain(d);
+                return -ESRCH;
+            }
+        }
+#endif
+        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d, fd);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -893,6 +903,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
 
         rcu_unlock_domain(d);
+#ifdef CONFIG_X86
+        if ( fd )
+            put_pg_owner(fd);
+#endif
 
         if ( rc > 0 )
             rc = hypercall_create_continuation(
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index c835f76..b5f8faa 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -610,4 +610,7 @@ typedef struct mm_rwlock {
     const char        *locker_function; /* func that took it */
 } mm_rwlock_t;
 
+struct domain *get_pg_owner(domid_t domid);
+void put_pg_owner(struct domain *pg_owner);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index eb9e1a1..1228e52 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -467,10 +467,16 @@ static XSM_INLINE int xsm_pci_config_permission(XSM_DEFAULT_ARG struct domain *d
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_add_to_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
+static XSM_INLINE int xsm_add_to_physmap(XSM_DEFAULT_ARG struct domain *d, struct domain *t, struct domain *f)
 {
+    int rc;
+
     XSM_ASSERT_ACTION(XSM_TARGET);
-    return xsm_default_action(action, d1, d2);
+    rc = xsm_default_action(action, d, t);
+    if ( f && !rc )
+        rc = xsm_default_action(action, d, f);
+
+    return rc;
 }
 
 static XSM_INLINE int xsm_remove_from_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 1939453..9ee9543 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -90,7 +90,7 @@ struct xsm_operations {
     int (*memory_adjust_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_stat_reservation) (struct domain *d1, struct domain *d2);
     int (*memory_pin_page) (struct domain *d1, struct domain *d2, struct page_info *page);
-    int (*add_to_physmap) (struct domain *d1, struct domain *d2);
+    int (*add_to_physmap) (struct domain *d, struct domain *t, struct domain *f);
     int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
     int (*claim_pages) (struct domain *d);
 
@@ -344,9 +344,9 @@ static inline int xsm_memory_pin_page(xsm_default_t def, struct domain *d1, stru
     return xsm_ops->memory_pin_page(d1, d2, page);
 }
 
-static inline int xsm_add_to_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_add_to_physmap(xsm_default_t def, struct domain *d, struct domain *t, struct domain *f)
 {
-    return xsm_ops->add_to_physmap(d1, d2);
+    return xsm_ops->add_to_physmap(d, t, f);
 }
 
 static inline int xsm_remove_from_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..81294b1 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1068,9 +1068,14 @@ static inline int flask_tmem_control(void)
     return domain_has_xen(current->domain, XEN__TMEM_CONTROL);
 }
 
-static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
+static int flask_add_to_physmap(struct domain *d, struct domain *t, struct domain *f)
 {
-    return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
+    int rc;
+
+    rc = domain_has_perm(d, t, SECCLASS_MMU, MMU__PHYSMAP);
+    if ( f && !rc )
+        rc = domain_has_perm(d, f, SECCLASS_MMU, MMU__MAP_READ|MMU__MAP_WRITE);
+    return rc;
 }
 
 static int flask_remove_from_physmap(struct domain *d1, struct domain *d2)
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:17:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yEL-0001Ay-IB; Tue, 28 Jan 2014 02:16:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7yEJ-0001At-Qv
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:16:48 +0000
Received: from [193.109.254.147:58105] by server-11.bemta-14.messagelabs.com
	id 48/A5-20576-F0317E25; Tue, 28 Jan 2014 02:16:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390875404!236896!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20266 invoked from network); 28 Jan 2014 02:16:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 02:16:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,732,1384300800"; d="scan'208";a="97072052"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 02:16:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 21:16:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7yE7-0007Aj-06;
	Tue, 28 Jan 2014 02:16:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7yE7-0006I4-0A;
	Tue, 28 Jan 2014 02:16:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24553-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 02:16:35 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3739737356849696318=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3739737356849696318==
Content-Type: text/plain

flight 24553 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  7 windows-install    fail like 22066
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 22089

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
baseline version:
 qemuu                b97307ecaad98360f41ea36cd9674ef810c4f8cf

------------------------------------------------------------
People who touched revisions under test:
  Alex Williamson <alex.williamson@redhat.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Amit Shah <amit.shah@redhat.com>
  Amos Kong <akong@redhat.com>
  Andreas Färber <afaerber@suse.de>
  Anthony Liguori <aliguori@amazon.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Bandan Das <bsd@redhat.com>
  Cole Robinson <crobinso@redhat.com>
  CongLi <coli@redhat.com>
  Eduardo Otubo <otubo@linux.vnet.ibm.com>
  Fam Zheng <famz@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Hans de Goede <hdegoede@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <james.hogan@imgtec.com> [mips]
  Jason Wang <jasowang@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Matthew Daley <mattjd@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Mike Frysinger <vapier@gentoo.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <pmoore@redhat.com>
  Petar Jovanovic <petar.jovanovic@imgtec.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tomoki Sekiyama <tomoki.sekiyama@hds.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Wenchao Xia <xiawenc@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ branch=qemu-upstream-unstable
+ revision=11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git 11f6a1cedb8d759fd64d7dd5db95b747591f2ca7:master
Counting objects: 253, done.
Compressing objects: 100% (44/44), done.
Writing objects: 100% (185/185), 42.22 KiB, done.
Total 185 (delta 140), reused 185 (delta 140)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   b97307e..11f6a1c  11f6a1cedb8d759fd64d7dd5db95b747591f2ca7 -> master


--===============3739737356849696318==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3739737356849696318==--

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:17:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yEL-0001Ay-IB; Tue, 28 Jan 2014 02:16:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W7yEJ-0001At-Qv
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:16:48 +0000
Received: from [193.109.254.147:58105] by server-11.bemta-14.messagelabs.com
	id 48/A5-20576-F0317E25; Tue, 28 Jan 2014 02:16:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390875404!236896!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20266 invoked from network); 28 Jan 2014 02:16:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 02:16:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,732,1384300800"; d="scan'208";a="97072052"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 02:16:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 27 Jan 2014 21:16:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W7yE7-0007Aj-06;
	Tue, 28 Jan 2014 02:16:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W7yE7-0006I4-0A;
	Tue, 28 Jan 2014 02:16:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24553-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 02:16:35 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3739737356849696318=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3739737356849696318==
Content-Type: text/plain

flight 24553 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  7 windows-install    fail like 22066
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 22089

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
baseline version:
 qemuu                b97307ecaad98360f41ea36cd9674ef810c4f8cf

------------------------------------------------------------
People who touched revisions under test:
  Alex Williamson <alex.williamson@redhat.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Amit Shah <amit.shah@redhat.com>
  Amos Kong <akong@redhat.com>
  Andreas Färber <afaerber@suse.de>
  Anthony Liguori <aliguori@amazon.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Bandan Das <bsd@redhat.com>
  Cole Robinson <crobinso@redhat.com>
  CongLi <coli@redhat.com>
  Eduardo Otubo <otubo@linux.vnet.ibm.com>
  Fam Zheng <famz@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Hans de Goede <hdegoede@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <james.hogan@imgtec.com> [mips]
  Jason Wang <jasowang@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Matthew Daley <mattjd@gmail.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Mike Frysinger <vapier@gentoo.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <pmoore@redhat.com>
  Petar Jovanovic <petar.jovanovic@imgtec.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <rth@twiddle.net>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tomoki Sekiyama <tomoki.sekiyama@hds.com>
  Vlad Yasevich <vyasevic@redhat.com>
  Wenchao Xia <xiawenc@linux.vnet.ibm.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ branch=qemu-upstream-unstable
+ revision=11f6a1cedb8d759fd64d7dd5db95b747591f2ca7
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git 11f6a1cedb8d759fd64d7dd5db95b747591f2ca7:master
Counting objects: 253, done.
Compressing objects: 100% (44/44), done.
Writing objects: 100% (185/185), 42.22 KiB, done.
Total 185 (delta 140), reused 185 (delta 140)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   b97307e..11f6a1c  11f6a1cedb8d759fd64d7dd5db95b747591f2ca7 -> master


--===============3739737356849696318==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3739737356849696318==--

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:19:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yGR-0001L5-Mt; Tue, 28 Jan 2014 02:18:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W7yGQ-0001KP-Kc
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:18:58 +0000
Received: from [85.158.139.211:53271] by server-7.bemta-5.messagelabs.com id
	82/E2-04824-19317E25; Tue, 28 Jan 2014 02:18:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390875533!12307360!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2746 invoked from network); 28 Jan 2014 02:18:54 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 02:18:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S2IoFO025050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 02:18:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InGF024409
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 02:18:50 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InaK022324; Tue, 28 Jan 2014 02:18:49 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 18:18:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Mon, 27 Jan 2014 18:18:38 -0800
Message-Id: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] pvh: disable pse feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,

The following will turn off PSE in Linux until we can get to it. It's
better to turn it off here than in Xen, so if BSD gets there sooner,
they are not dependent on us.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:19:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yGQ-0001Kk-BT; Tue, 28 Jan 2014 02:18:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W7yGP-0001KI-4D
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:18:57 +0000
Received: from [193.109.254.147:48519] by server-6.bemta-14.messagelabs.com id
	50/1F-14958-09317E25; Tue, 28 Jan 2014 02:18:56 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390875534!239258!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30780 invoked from network); 28 Jan 2014 02:18:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 02:18:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S2IpK8002759
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 02:18:52 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2IoVq024424
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 02:18:51 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InIK022344; Tue, 28 Jan 2014 02:18:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 18:18:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Mon, 27 Jan 2014 18:18:39 -0800
Message-Id: <1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Until now, Xen did not expose PSE to PVH guests, but a patch was submitted
to the xen list to enable a bunch of features for PVH guests. PSE has not
been looked into for PVH, so until we can do that and test it to make sure
it works, disable the feature to avoid a flood of bugs.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 arch/x86/xen/enlighten.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..4e952046 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1497,6 +1497,10 @@ static void __init xen_pvh_early_guest_init(void)
 	xen_have_vector_callback = 1;
 	xen_pvh_set_cr_flags(0);
 
+	/* PVH guests are not quite ready for large pages yet. */
+	setup_clear_cpu_cap(X86_FEATURE_PSE);
+	setup_clear_cpu_cap(X86_FEATURE_PSE36);
+
 #ifdef CONFIG_X86_32
 	BUG(); /* PVH: Implement proper support. */
 #endif
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:19:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yGR-0001L5-Mt; Tue, 28 Jan 2014 02:18:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W7yGQ-0001KP-Kc
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:18:58 +0000
Received: from [85.158.139.211:53271] by server-7.bemta-5.messagelabs.com id
	82/E2-04824-19317E25; Tue, 28 Jan 2014 02:18:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390875533!12307360!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2746 invoked from network); 28 Jan 2014 02:18:54 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 02:18:54 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S2IoFO025050
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 02:18:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InGF024409
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 02:18:50 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InaK022324; Tue, 28 Jan 2014 02:18:49 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 18:18:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Mon, 27 Jan 2014 18:18:38 -0800
Message-Id: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] pvh: disable pse feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,

The following will turn off PSE in Linux until we can get to it. It's better
to turn it off here than in Xen, so that if BSD gets there sooner, they are not
dependent on us.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 02:19:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 02:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7yGQ-0001Kk-BT; Tue, 28 Jan 2014 02:18:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W7yGP-0001KI-4D
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 02:18:57 +0000
Received: from [193.109.254.147:48519] by server-6.bemta-14.messagelabs.com id
	50/1F-14958-09317E25; Tue, 28 Jan 2014 02:18:56 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390875534!239258!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30780 invoked from network); 28 Jan 2014 02:18:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 02:18:55 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S2IpK8002759
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 02:18:52 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2IoVq024424
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 02:18:51 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S2InIK022344; Tue, 28 Jan 2014 02:18:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 18:18:49 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Mon, 27 Jan 2014 18:18:39 -0800
Message-Id: <1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Until now, Xen did not expose PSE to a PVH guest, but a patch was submitted
to the Xen list to enable a bunch of features for PVH guests. PSE has not been
looked into for PVH, so until we can do that and test it to make sure it
works, disable the feature to avoid a flood of bugs.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 arch/x86/xen/enlighten.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..4e952046 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1497,6 +1497,11 @@ static void __init xen_pvh_early_guest_init(void)
 	xen_have_vector_callback = 1;
 	xen_pvh_set_cr_flags(0);
 
+	/* pvh guests are not quite ready for large pages yet */
+	setup_clear_cpu_cap(X86_FEATURE_PSE);
+	setup_clear_cpu_cap(X86_FEATURE_PSE36);
+
+
 #ifdef CONFIG_X86_32
 	BUG(); /* PVH: Implement proper support. */
 #endif
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 03:30:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 03:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7zN0-0004lp-QA; Tue, 28 Jan 2014 03:29:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1W7zMy-0004ju-Vv
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 03:29:49 +0000
Received: from [85.158.139.211:8907] by server-15.bemta-5.messagelabs.com id
	CB/2E-08490-C2427E25; Tue, 28 Jan 2014 03:29:48 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390879785!12318615!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19684 invoked from network); 28 Jan 2014 03:29:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 03:29:47 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S3TgIc023278
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 03:29:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0S3TeCj024398
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 03:29:41 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0S3Texo024374; Tue, 28 Jan 2014 03:29:40 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 19:29:40 -0800
Message-ID: <52E72422.1030205@oracle.com>
Date: Tue, 28 Jan 2014 11:29:38 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: David Miller <davem@davemloft.net>
References: <1390731147-2424-1-git-send-email-Annie.li@oracle.com>
	<52E63382.6090503@citrix.com>
	<20140127.130921.7056971883805642.davem@davemloft.net>
In-Reply-To: <20140127.130921.7056971883805642.davem@davemloft.net>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: netdev@vger.kernel.org, xen-devel@lists.xen.org, wei.liu2@citrix.com,
	david.vrabel@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH net-next v5] xen-netfront: clean up code in
 xennet_release_rx_bufs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/1/28 5:09, David Miller wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Mon, 27 Jan 2014 10:22:58 +0000
>
>> I think this should be applied to net (and tagged as a stable candidate)
>> rather than net-next, as this fixes a very big resource leak.
> Then this subject line and commit message must be fixed to make it clear
> that this is a BUG fix, rather than just a "clean up".
>

I will post a new patch with a corrected subject and commit message.

Thanks
Annie

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 03:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 03:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7zQw-0004yK-JM; Tue, 28 Jan 2014 03:33:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Annie.li@oracle.com>) id 1W7zQv-0004yC-0g
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 03:33:53 +0000
Received: from [193.109.254.147:49354] by server-11.bemta-14.messagelabs.com
	id 80/DC-20576-02527E25; Tue, 28 Jan 2014 03:33:52 +0000
X-Env-Sender: Annie.li@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390880028!238935!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7118 invoked from network); 28 Jan 2014 03:33:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 03:33:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S3XkwS026065
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 03:33:47 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S3XjOC028697
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 03:33:46 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0S3XiJV004721; Tue, 28 Jan 2014 03:33:45 GMT
Received: from annie.cn.oracle.com (/10.182.38.163)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 19:33:44 -0800
From: Annie Li <Annie.li@oracle.com>
To: xen-devel@lists.xen.org, netdev@vger.kernel.org
Date: Tue, 28 Jan 2014 11:35:42 +0800
Message-Id: <1390880142-4962-1-git-send-email-Annie.li@oracle.com>
X-Mailer: git-send-email 1.7.6.5
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: annie.li@oracle.com, david.vrabel@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH net v6] xen-netfront: fix resource leak in
	netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <annie.li@oracle.com>

This patch removes the grant transfer releasing code from netfront, and uses
gnttab_end_foreign_access to end grant access, since
gnttab_end_foreign_access_ref may fail when the grant entry is
currently used for reading or writing.

* Clean up grant transfer code kept from the old netfront (2.6.18), which
granted pages for access/map and transfer. Grant transfer is deprecated in the
current netfront, so remove the corresponding release code for transfer.

* Fix a resource leak: release grant access (through gnttab_end_foreign_access)
and the skb on the tx/rx paths, and use get_page to ensure the page is
released only after grant access has completed successfully.

Xen-blkfront/xen-tpmfront/xen-pcifront have a similar issue, but patches
for them will be created separately.

V6: Correct subject line and commit message.

V5: Remove unnecessary change in xennet_end_access.

V4: Revert put_page in gnttab_end_foreign_access, and keep netfront change in
single patch.

V3: Changes as suggested by David Vrabel; ensure pages are not freed until
grant access is ended.

V2: Improve patch comments.

Signed-off-by: Annie Li <annie.li@oracle.com>
---
 drivers/net/xen-netfront.c |   88 +++++++++++++-------------------------------
 1 files changed, 26 insertions(+), 62 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d7bee8a..6ddf1e6 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -117,6 +117,7 @@ struct netfront_info {
 	} tx_skbs[NET_TX_RING_SIZE];
 	grant_ref_t gref_tx_head;
 	grant_ref_t grant_tx_ref[NET_TX_RING_SIZE];
+	struct page *grant_tx_page[NET_TX_RING_SIZE];
 	unsigned tx_skb_freelist;
 
 	spinlock_t   rx_lock ____cacheline_aligned_in_smp;
@@ -396,6 +397,7 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 			gnttab_release_grant_reference(
 				&np->gref_tx_head, np->grant_tx_ref[id]);
 			np->grant_tx_ref[id] = GRANT_INVALID_REF;
+			np->grant_tx_page[id] = NULL;
 			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
@@ -452,6 +454,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
+		np->grant_tx_page[id] = virt_to_page(data);
 		tx->gref = np->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
@@ -497,6 +500,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 							np->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
+			np->grant_tx_page[id] = page;
 			tx->gref = np->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
@@ -596,6 +600,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
 		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	np->grant_tx_page[id] = virt_to_page(data);
 	tx->gref = np->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
@@ -1085,10 +1090,11 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 			continue;
 
 		skb = np->tx_skbs[i].skb;
-		gnttab_end_foreign_access_ref(np->grant_tx_ref[i],
-					      GNTMAP_readonly);
-		gnttab_release_grant_reference(&np->gref_tx_head,
-					       np->grant_tx_ref[i]);
+		get_page(np->grant_tx_page[i]);
+		gnttab_end_foreign_access(np->grant_tx_ref[i],
+					  GNTMAP_readonly,
+					  (unsigned long)page_address(np->grant_tx_page[i]));
+		np->grant_tx_page[i] = NULL;
 		np->grant_tx_ref[i] = GRANT_INVALID_REF;
 		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
@@ -1097,78 +1103,35 @@ static void xennet_release_tx_bufs(struct netfront_info *np)
 
 static void xennet_release_rx_bufs(struct netfront_info *np)
 {
-	struct mmu_update      *mmu = np->rx_mmu;
-	struct multicall_entry *mcl = np->rx_mcl;
-	struct sk_buff_head free_list;
-	struct sk_buff *skb;
-	unsigned long mfn;
-	int xfer = 0, noxfer = 0, unused = 0;
 	int id, ref;
 
-	dev_warn(&np->netdev->dev, "%s: fix me for copying receiver.\n",
-			 __func__);
-	return;
-
-	skb_queue_head_init(&free_list);
-
 	spin_lock_bh(&np->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
-		ref = np->grant_rx_ref[id];
-		if (ref == GRANT_INVALID_REF) {
-			unused++;
-			continue;
-		}
+		struct sk_buff *skb;
+		struct page *page;
 
 		skb = np->rx_skbs[id];
-		mfn = gnttab_end_foreign_transfer_ref(ref);
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
-
-		if (0 == mfn) {
-			skb_shinfo(skb)->nr_frags = 0;
-			dev_kfree_skb(skb);
-			noxfer++;
+		if (!skb)
 			continue;
-		}
 
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Remap the page. */
-			const struct page *page =
-				skb_frag_page(&skb_shinfo(skb)->frags[0]);
-			unsigned long pfn = page_to_pfn(page);
-			void *vaddr = page_address(page);
+		ref = np->grant_rx_ref[id];
+		if (ref == GRANT_INVALID_REF)
+			continue;
 
-			MULTI_update_va_mapping(mcl, (unsigned long)vaddr,
-						mfn_pte(mfn, PAGE_KERNEL),
-						0);
-			mcl++;
-			mmu->ptr = ((u64)mfn << PAGE_SHIFT)
-				| MMU_MACHPHYS_UPDATE;
-			mmu->val = pfn;
-			mmu++;
+		page = skb_frag_page(&skb_shinfo(skb)->frags[0]);
 
-			set_phys_to_machine(pfn, mfn);
-		}
-		__skb_queue_tail(&free_list, skb);
-		xfer++;
-	}
-
-	dev_info(&np->netdev->dev, "%s: %d xfer, %d noxfer, %d unused\n",
-		 __func__, xfer, noxfer, unused);
+		/* gnttab_end_foreign_access() needs a page ref until
+		 * foreign access is ended (which may be deferred).
+		 */
+		get_page(page);
+		gnttab_end_foreign_access(ref, 0,
+					  (unsigned long)page_address(page));
+		np->grant_rx_ref[id] = GRANT_INVALID_REF;
 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 
-	if (xfer) {
-		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-			/* Do all the remapping work and M2P updates. */
-			MULTI_mmu_update(mcl, np->rx_mmu, mmu - np->rx_mmu,
-					 NULL, DOMID_SELF);
-			mcl++;
-			HYPERVISOR_multicall(np->rx_mcl, mcl - np->rx_mcl);
-		}
+		kfree_skb(skb);
 	}
 
-	__skb_queue_purge(&free_list);
-
 	spin_unlock_bh(&np->rx_lock);
 }
 
@@ -1339,6 +1302,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	for (i = 0; i < NET_RX_RING_SIZE; i++) {
 		np->rx_skbs[i] = NULL;
 		np->grant_rx_ref[i] = GRANT_INVALID_REF;
+		np->grant_tx_page[i] = NULL;
 	}
 
 	/* A grant for every tx ring slot */
-- 
1.7.3.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 03:46:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 03:46:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7zdO-0005YC-1o; Tue, 28 Jan 2014 03:46:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W7zdM-0005Y4-LM
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 03:46:44 +0000
Received: from [85.158.137.68:5216] by server-10.bemta-3.messagelabs.com id
	FF/AF-23989-32827E25; Tue, 28 Jan 2014 03:46:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390880801!8022153!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4559 invoked from network); 28 Jan 2014 03:46:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 03:46:43 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0S3kclm002892
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 03:46:38 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S3kbSf012478
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 03:46:37 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0S3kaOQ017367; Tue, 28 Jan 2014 03:46:36 GMT
Received: from pegasus.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 27 Jan 2014 19:46:36 -0800
Date: Mon, 27 Jan 2014 22:46:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140128034634.GC20680@pegasus.dumpdata.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
	<1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 06:18:39PM -0800, Mukesh Rathor wrote:
> Until now, xen did not expose PSE to pvh guests, but a patch was submitted
> to the xen list to enable a bunch of features for a pvh guest. PSE has not been

Which 'patch'?

> looked into for PVH, so until we can do that and test it to make sure it
> works, disable the feature to avoid a flood of bugs.

I think we want a flood of bugs, no?
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..4e952046 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1497,6 +1497,11 @@ static void __init xen_pvh_early_guest_init(void)
>  	xen_have_vector_callback = 1;
>  	xen_pvh_set_cr_flags(0);
>  
> +        /* pvh guests are not quite ready for large pages yet */
> +        setup_clear_cpu_cap(X86_FEATURE_PSE);
> +        setup_clear_cpu_cap(X86_FEATURE_PSE36);
> +
> +
>  #ifdef CONFIG_X86_32
>  	BUG(); /* PVH: Implement proper support. */
>  #endif
> -- 
> 1.7.2.3
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 03:49:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 03:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W7zfx-0005sF-NA; Tue, 28 Jan 2014 03:49:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1W7zfw-0005s5-MI
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 03:49:24 +0000
Received: from [85.158.143.35:16079] by server-1.bemta-4.messagelabs.com id
	D8/1C-02132-3C827E25; Tue, 28 Jan 2014 03:49:23 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390880962!1203449!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30193 invoked from network); 28 Jan 2014 03:49:23 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-13.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 03:49:23 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id A2961580F45;
	Mon, 27 Jan 2014 19:49:21 -0800 (PST)
Date: Mon, 27 Jan 2014 19:49:19 -0800 (PST)
Message-Id: <20140127.194919.2002640193008432431.davem@davemloft.net>
To: Annie.li@oracle.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1390880142-4962-1-git-send-email-Annie.li@oracle.com>
References: <1390880142-4962-1-git-send-email-Annie.li@oracle.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	xen-devel@lists.xen.org, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH net v6] xen-netfront: fix resource leak in
 netfront
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Annie Li <Annie.li@oracle.com>
Date: Tue, 28 Jan 2014 11:35:42 +0800

> This patch removes grant transfer releasing code from netfront, and uses
> gnttab_end_foreign_access to end grant access since
> gnttab_end_foreign_access_ref may fail when the grant entry is
> currently used for reading or writing.

Applied and queued up for -stable, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80IG-0007CX-VK; Tue, 28 Jan 2014 04:29:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80IF-0007CO-Ul
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:29:00 +0000
Received: from [85.158.143.35:23885] by server-3.bemta-4.messagelabs.com id
	0F/01-32360-B0237E25; Tue, 28 Jan 2014 04:28:59 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390883336!1199364!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19479 invoked from network); 28 Jan 2014 04:28:58 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:28:58 -0000
Received: by mail-pb0-f43.google.com with SMTP id md12so6775889pbc.30
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:28:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=8TGFKYceFi06pWji4NMxi8XSFWwYJA4x/QJKi5AukgE=;
	b=FFjZsCsT49OXPUZ2esw/ZZ77Jw/zV89bNm87rbmIGySrKsvJACHGCInCXbfusOkMYW
	MWrCMVpd+NnnDVLVuaj9MJ0nJTQ2A3xMGHd0qsDjWlbu+8REu3eB/tIJnPJXhhl9ByzE
	3Fcj3ypaYlf+fDmWp2ZYk5nQ1Abo57IQrnmnZNhaYNY+d3kcvVpGsR8OzeytS5DfmoS4
	FBpMuYA3wjZ9du/M2nH2Ol16jm+S8cmf7C+iTGb9sAjm78+A0MBMe6/cy1tY/b5BM33M
	cueEW0qqIQtzpYjKhIaEksqUkz77ph8yyNVeIsCf1ZbIcNLNJ5q6owh6n64Gtg36R8vL
	zzAQ==
X-Received: by 10.66.232.40 with SMTP id tl8mr7141807pac.137.1390883336409;
	Mon, 27 Jan 2014 20:28:56 -0800 (PST)
Received: from localhost ([116.227.28.52])
	by mx.google.com with ESMTPSA id qz9sm37338797pbc.3.2014.01.27.20.28.51
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:28:55 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:20 +0800
Message-Id: <1390883313-19313-2-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 01/14] tmem: remove pageshift from struct
	tmem_pool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pagesize is always the same as PAGE_SIZE in tmem, so remove pageshift from
struct tmem_pool and use POOL_PAGESHIFT and PAGE_SIZE directly.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index d9e912b..aa8e6f5 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -88,6 +88,7 @@ struct share_list {
     struct client *client;
 };
 
+#define POOL_PAGESHIFT (PAGE_SHIFT - 12)
 #define OBJ_HASH_BUCKETS 256 /* must be power of two */
 #define OBJ_HASH_BUCKETS_MASK (OBJ_HASH_BUCKETS-1)
 
@@ -95,7 +96,6 @@ struct tmem_pool {
     bool_t shared;
     bool_t persistent;
     bool_t is_dying;
-    int pageshift; /* 0 == 2**12 */
     struct list_head pool_list;
     struct client *client;
     uint64_t uuid[2]; /* 0 for private, non-zero for shared */
@@ -1042,7 +1042,6 @@ static struct tmem_pool * pool_alloc(void)
     pool->objnode_count = pool->objnode_count_max = 0;
     atomic_set(&pool->pgp_count,0);
     pool->obj_count = 0; pool->shared_count = 0;
-    pool->pageshift = PAGE_SHIFT - 12;
     pool->good_puts = pool->puts = pool->dup_puts_flushed = 0;
     pool->dup_puts_replaced = pool->no_mem_puts = 0;
     pool->found_gets = pool->gets = 0;
@@ -2355,7 +2354,7 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
              break;
          rc = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
               (pool->shared ? TMEM_POOL_SHARED : 0) |
-              (pool->pageshift << TMEM_POOL_PAGESIZE_SHIFT) |
+              (POOL_PAGESHIFT << TMEM_POOL_PAGESIZE_SHIFT) |
               (TMEM_SPEC_VERSION << TMEM_POOL_VERSION_SHIFT);
         break;
     case TMEMC_SAVE_GET_POOL_NPAGES:
@@ -2395,13 +2394,11 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     struct oid oid;
     int ret = 0;
     struct tmem_handle h;
-    unsigned int pagesize;
 
     if ( pool == NULL || !is_persistent(pool) )
         return -1;
 
-    pagesize = 1 << (pool->pageshift + 12);
-    if ( bufsize < pagesize + sizeof(struct tmem_handle) )
+    if ( bufsize < PAGE_SIZE + sizeof(struct tmem_handle) )
         return -ENOMEM;
 
     spin_lock(&pers_lists_spinlock);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80I6-0007Aq-Dt; Tue, 28 Jan 2014 04:28:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80I4-00078A-P0
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:28:48 +0000
Received: from [85.158.143.35:3253] by server-1.bemta-4.messagelabs.com id
	68/4B-02132-00237E25; Tue, 28 Jan 2014 04:28:48 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390883325!1214949!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29926 invoked from network); 28 Jan 2014 04:28:47 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:28:47 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so6753729pbb.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:28:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=4uHwR2AD+93eekEehpbjnn+/mm9jdqKMVw4kDhy3gko=;
	b=lSsYMwWCwpoKq/PL010hU6lICzgQrrF8FsOnBtusLqXJ6iawCupUckJgN7dYEEzeH+
	b+Atnc4VD2Lk1GtgT3ejahohi5x9oZv3n4j/M2HpcCVXVqHAwWG48xqD78FYSWVvoCC5
	j0MOb/tHUlRHpoNTF2hSQBjMem3MBMLKifKifL1Sm3lZc1Q042IhNICMwXRPc0E+DSs5
	F7/NJU7auO77OfsxRAWU6e/TmU6PxaL5OqiIWCMZKsrNzgsIjOV7m6K66pDQ3FA1YGAL
	SkkCWH47XOHH7kCQtltLlRZVoDTk/eMpPLTBgwjvbnJZ7widyrwbcFtv7IdjJzUJ7NB1
	dyYA==
X-Received: by 10.66.181.70 with SMTP id du6mr34607155pac.122.1390883325395;
	Mon, 27 Jan 2014 20:28:45 -0800 (PST)
Received: from localhost ([116.227.28.52])
	by mx.google.com with ESMTPSA id g6sm100696242pat.2.2014.01.27.20.28.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:28:44 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:19 +0800
Message-Id: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH RESEND 00/14] xen: new patches for tmem
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi folks,

This is a resend of my tmem patches; they were rebased on 4.4.0-rc2 with some
trivial changes.

These patches continue to clean up tmem, dropping unneeded functions and
parameters and reorganizing functions to make them easier to understand. Some
potential bugs were also fixed.

The series is not urgent to merge, but my further work will be based on top of
it, so it will be easier for me if problems are pointed out early.

Bob Liu (14):
  tmem: remove pageshift from struct tmem_pool
  tmem: refactor function do_tmem_op()
  tmem: cleanup: drop unneeded client/pool initialization
  tmem: bugfix in obj allocate path
  tmem: cleanup: remove unneed parameter from pgp_delist()
  tmem: cleanup: remove unneed parameter from pgp_free()
  tmem: cleanup the pgp free path
  tmem: drop oneline function client_freeze()
  tmem: remove unneeded parameters from obj destroy path
  tmem: cleanup: drop global_pool_list
  tmem: fix the return value of tmemc_set_var()
  tmem: cleanup: refactor function tmemc_shared_pool_auth()
  tmem: reorg the shared pool allocate path
  tmem: remove useless parameter from client and pool flush

 xen/common/tmem.c |  580 +++++++++++++++++++++++++++--------------------------
 1 file changed, 295 insertions(+), 285 deletions(-)

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:29:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80IW-0007Ii-CR; Tue, 28 Jan 2014 04:29:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80IU-0007HL-FK
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:29:14 +0000
Received: from [85.158.137.68:3013] by server-11.bemta-3.messagelabs.com id
	45/0C-19379-91237E25; Tue, 28 Jan 2014 04:29:13 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390883350!11705576!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18241 invoked from network); 28 Jan 2014 04:29:12 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:29:12 -0000
Received: by mail-pb0-f43.google.com with SMTP id md12so6755479pbc.16
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:29:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=LaYD1nugoPJ10paIAdk46/c5HDbHqizmcJeIEcq2g4Q=;
	b=hdXks2xPjQ74vr5bGnddnx4fS2vWprlwlfaY4c4jDi6nQB7EDyzgkFjGmFuVgGb+np
	tLreMBewNlJQoH515jax8gc+sjZ7Y7SrCbHiK3C1kH4MPW/VEkF0yyxFqVjDbdgDHFQk
	PKK+tD7gB0cRP1K3iKMgAO3xYcEkZrXqLh4vR3rVrQJXm1vVUrmqasiTU5waWdI+CLmE
	JT+8vXEO5Z9UDY9lMy8UWs9RQdwfiLs7VEJ150K8JW9VrN3Ryx0Uu3XKA7aL2rKQQm0P
	L8Atctc0i4w8b0/k2jIBdISjgP2yj0wBAs9kpix9jie5wNf53AoPMSHl6yG9S9jN6R3x
	PfPQ==
X-Received: by 10.68.189.132 with SMTP id gi4mr34295193pbc.57.1390883350378;
	Mon, 27 Jan 2014 20:29:10 -0800 (PST)
Received: from localhost ([116.227.28.52])
	by mx.google.com with ESMTPSA id de1sm37326284pbc.7.2014.01.27.20.29.03
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:29:09 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:21 +0800
Message-Id: <1390883313-19313-3-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 02/14] tmem: refactor function do_tmem_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Refactor function do_tmem_op() to make it more readable.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |  172 +++++++++++++++++++++++++----------------------------
 1 file changed, 81 insertions(+), 91 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index aa8e6f5..362e774 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2599,7 +2599,6 @@ long do_tmem_op(tmem_cli_op_t uops)
     bool_t succ_get = 0, succ_put = 0;
     bool_t non_succ_get = 0, non_succ_put = 0;
     bool_t flush = 0, flush_obj = 0;
-    bool_t write_lock_set = 0, read_lock_set = 0;
 
     if ( !tmem_initialized )
         return -ENODEV;
@@ -2624,114 +2623,105 @@ long do_tmem_op(tmem_cli_op_t uops)
         goto simple_error;
     }
 
+    /* Acquire write lock for all commands at first. */
+    write_lock(&tmem_rwlock);
+
     if ( op.cmd == TMEM_CONTROL )
     {
-        write_lock(&tmem_rwlock);
-        write_lock_set = 1;
         rc = do_tmem_control(&op);
-        goto out;
-    } else if ( op.cmd == TMEM_AUTH ) {
-        write_lock(&tmem_rwlock);
-        write_lock_set = 1;
+    }
+    else if ( op.cmd == TMEM_AUTH )
+    {
         rc = tmemc_shared_pool_auth(op.u.creat.arg1,op.u.creat.uuid[0],
                          op.u.creat.uuid[1],op.u.creat.flags);
-        goto out;
-    } else if ( op.cmd == TMEM_RESTORE_NEW ) {
-        write_lock(&tmem_rwlock);
-        write_lock_set = 1;
+    }
+    else if ( op.cmd == TMEM_RESTORE_NEW )
+    {
         rc = do_tmem_new_pool(op.u.creat.arg1, op.pool_id, op.u.creat.flags,
                          op.u.creat.uuid[0], op.u.creat.uuid[1]);
-        goto out;
     }
-
-    /* create per-client tmem structure dynamically on first use by client */
-    if ( client == NULL )
-    {
-        write_lock(&tmem_rwlock);
-        write_lock_set = 1;
-        if ( (client = client_create(current->domain->domain_id)) == NULL )
+    else {
+        /*
+	 * For other commands, create per-client tmem structure dynamically on
+	 * first use by client.
+	 */
+        if ( client == NULL )
         {
-            tmem_client_err("tmem: can't create tmem structure for %s\n",
-                           tmem_client_str);
-            rc = -ENOMEM;
-            goto out;
+            if ( (client = client_create(current->domain->domain_id)) == NULL )
+            {
+                tmem_client_err("tmem: can't create tmem structure for %s\n",
+                               tmem_client_str);
+                rc = -ENOMEM;
+                goto out;
+            }
         }
-    }
 
-    if ( op.cmd == TMEM_NEW_POOL || op.cmd == TMEM_DESTROY_POOL )
-    {
-        if ( !write_lock_set )
+        if ( op.cmd == TMEM_NEW_POOL || op.cmd == TMEM_DESTROY_POOL )
         {
-            write_lock(&tmem_rwlock);
-            write_lock_set = 1;
+	    if ( op.cmd == TMEM_NEW_POOL )
+                rc = do_tmem_new_pool(TMEM_CLI_ID_NULL, 0, op.u.creat.flags,
+                                op.u.creat.uuid[0], op.u.creat.uuid[1]);
+	    else
+                rc = do_tmem_destroy_pool(op.pool_id);
         }
-    }
-    else
-    {
-        if ( !write_lock_set )
+        else
         {
+            if ( ((uint32_t)op.pool_id >= MAX_POOLS_PER_DOMAIN) ||
+                 ((pool = client->pools[op.pool_id]) == NULL) )
+            {
+                tmem_client_err("tmem: operation requested on uncreated pool\n");
+                rc = -ENODEV;
+                goto out;
+            }
+	    /* Commands only need read lock */
+	    write_unlock(&tmem_rwlock);
             read_lock(&tmem_rwlock);
-            read_lock_set = 1;
-        }
-        if ( ((uint32_t)op.pool_id >= MAX_POOLS_PER_DOMAIN) ||
-             ((pool = client->pools[op.pool_id]) == NULL) )
-        {
-            tmem_client_err("tmem: operation requested on uncreated pool\n");
-            rc = -ENODEV;
-            goto out;
-        }
-    }
 
-    oidp = (struct oid *)&op.u.gen.oid[0];
-    switch ( op.cmd )
-    {
-    case TMEM_NEW_POOL:
-        rc = do_tmem_new_pool(TMEM_CLI_ID_NULL, 0, op.u.creat.flags,
-                              op.u.creat.uuid[0], op.u.creat.uuid[1]);
-        break;
-    case TMEM_PUT_PAGE:
-        if (tmem_ensure_avail_pages())
-            rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
-                        tmem_cli_buf_null);
-        else
-            rc = -ENOMEM;
-        if (rc == 1) succ_put = 1;
-        else non_succ_put = 1;
-        break;
-    case TMEM_GET_PAGE:
-        rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
-                        tmem_cli_buf_null);
-        if (rc == 1) succ_get = 1;
-        else non_succ_get = 1;
-        break;
-    case TMEM_FLUSH_PAGE:
-        flush = 1;
-        rc = do_tmem_flush_page(pool, oidp, op.u.gen.index);
-        break;
-    case TMEM_FLUSH_OBJECT:
-        rc = do_tmem_flush_object(pool, oidp);
-        flush_obj = 1;
-        break;
-    case TMEM_DESTROY_POOL:
-        flush = 1;
-        rc = do_tmem_destroy_pool(op.pool_id);
-        break;
-    default:
-        tmem_client_warn("tmem: op %d not implemented\n", op.cmd);
-        rc = -ENOSYS;
-        break;
+            oidp = (struct oid *)&op.u.gen.oid[0];
+            switch ( op.cmd )
+            {
+            case TMEM_NEW_POOL:
+                rc = do_tmem_new_pool(TMEM_CLI_ID_NULL, 0, op.u.creat.flags,
+                                      op.u.creat.uuid[0], op.u.creat.uuid[1]);
+                break;
+            case TMEM_PUT_PAGE:
+                if (tmem_ensure_avail_pages())
+                    rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
+                                tmem_cli_buf_null);
+                else
+                    rc = -ENOMEM;
+                if (rc == 1) succ_put = 1;
+                else non_succ_put = 1;
+                break;
+            case TMEM_GET_PAGE:
+                rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
+                                tmem_cli_buf_null);
+                if (rc == 1) succ_get = 1;
+                else non_succ_get = 1;
+                break;
+            case TMEM_FLUSH_PAGE:
+                flush = 1;
+                rc = do_tmem_flush_page(pool, oidp, op.u.gen.index);
+                break;
+            case TMEM_FLUSH_OBJECT:
+                rc = do_tmem_flush_object(pool, oidp);
+                flush_obj = 1;
+                break;
+            case TMEM_DESTROY_POOL:
+                flush = 1;
+                rc = do_tmem_destroy_pool(op.pool_id);
+                break;
+            default:
+                tmem_client_warn("tmem: op %d not implemented\n", op.cmd);
+                rc = -ENOSYS;
+                break;
+            }
+            read_unlock(&tmem_rwlock);
+	    return rc;
+        }
     }
-
 out:
-    if ( rc < 0 )
-        errored_tmem_ops++;
-    if ( write_lock_set )
-        write_unlock(&tmem_rwlock);
-    else if ( read_lock_set )
-        read_unlock(&tmem_rwlock);
-    else 
-        ASSERT(0);
-
+    write_unlock(&tmem_rwlock);
     return rc;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:29:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:29:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Im-0007OQ-Qt; Tue, 28 Jan 2014 04:29:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Ik-0007Nl-Iq
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:29:30 +0000
Received: from [85.158.137.68:25214] by server-8.bemta-3.messagelabs.com id
	AC/BC-31081-92237E25; Tue, 28 Jan 2014 04:29:29 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1390883367!10886309!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19149 invoked from network); 28 Jan 2014 04:29:28 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:29:28 -0000
Received: by mail-pb0-f52.google.com with SMTP id jt11so6785218pbb.39
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:29:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=LHRGjv66KJljDbQHNgLoIfpUYPaz15J1wXMUf2Beq3c=;
	b=cgokWiL1xT0Ic7aX0gY+BunUliEwqAG3N9OJ+xVJxSdFZgWoFplQZXWCHRBkG2GdAC
	tFHcp7s8ukpFN20t3NESpkMaLaaZfF1XTbqGZxdgRrPCyuDVmxelzihXXvXAiwgbrzwQ
	XLpEAOLf2E9gthOMJqNiiLJ9gMrBL4gl2QxfSabpiRvLIUtP80moXGO5nXT2qdM9Do/x
	xKgj7T6O6cuGCKmfxVSeDOl4xLL3secURvTafGZ50hiOMUyq5HNAfB0lZaxTxMA0gfdz
	0xr6lbPdG3WpN30vMxBo82uh+C0ML6gYuBho3VwdaOEsR1Bx7Z6YF7dM+NLG18LGlWR/
	T1/A==
X-Received: by 10.68.129.5 with SMTP id ns5mr7101649pbb.147.1390883366806;
	Mon, 27 Jan 2014 20:29:26 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	sx8sm100690927pab.5.2014.01.27.20.29.16 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:29:26 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:22 +0800
Message-Id: <1390883313-19313-4-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 03/14] tmem: cleanup: drop unneeded client/pool
	initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use xzalloc() to allocate the client and pool structures, so the explicit
zero-initialization of their fields is unneeded.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 362e774..6ed91b4 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1030,24 +1030,13 @@ static struct tmem_pool * pool_alloc(void)
     struct tmem_pool *pool;
     int i;
 
-    if ( (pool = xmalloc(struct tmem_pool)) == NULL )
+    if ( (pool = xzalloc(struct tmem_pool)) == NULL )
         return NULL;
     for (i = 0; i < OBJ_HASH_BUCKETS; i++)
         pool->obj_rb_root[i] = RB_ROOT;
     INIT_LIST_HEAD(&pool->pool_list);
     INIT_LIST_HEAD(&pool->persistent_page_list);
-    pool->cur_pgp = NULL;
     rwlock_init(&pool->pool_rwlock);
-    pool->pgp_count_max = pool->obj_count_max = 0;
-    pool->objnode_count = pool->objnode_count_max = 0;
-    atomic_set(&pool->pgp_count,0);
-    pool->obj_count = 0; pool->shared_count = 0;
-    pool->good_puts = pool->puts = pool->dup_puts_flushed = 0;
-    pool->dup_puts_replaced = pool->no_mem_puts = 0;
-    pool->found_gets = pool->gets = 0;
-    pool->flushs_found = pool->flushs = 0;
-    pool->flush_objs_found = pool->flush_objs = 0;
-    pool->is_dying = 0;
     return pool;
 }
 
@@ -1216,15 +1205,9 @@ static struct client *client_create(domid_t cli_id)
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
             client->shared_auth_uuid[i][1] = -1L;
-    client->frozen = 0; client->live_migrating = 0;
-    client->weight = 0; client->cap = 0;
     list_add_tail(&client->client_list, &global_client_list);
     INIT_LIST_HEAD(&client->ephemeral_page_list);
     INIT_LIST_HEAD(&client->persistent_invalidated_list);
-    client->cur_pgp = NULL;
-    client->eph_count = client->eph_count_max = 0;
-    client->total_cycles = 0; client->succ_pers_puts = 0;
-    client->succ_eph_gets = 0; client->succ_pers_gets = 0;
     tmem_client_info("ok\n");
     return client;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:29:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80J1-0007Uh-9G; Tue, 28 Jan 2014 04:29:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Iz-0007U1-UN
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:29:46 +0000
Received: from [85.158.143.35:25188] by server-2.bemta-4.messagelabs.com id
	3E/73-11386-93237E25; Tue, 28 Jan 2014 04:29:45 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390883382!1213689!1
X-Originating-IP: [209.85.192.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3662 invoked from network); 28 Jan 2014 04:29:44 -0000
Received: from mail-pd0-f175.google.com (HELO mail-pd0-f175.google.com)
	(209.85.192.175)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:29:44 -0000
Received: by mail-pd0-f175.google.com with SMTP id w10so6546137pde.6
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:29:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=AfaMdxFxMZdv4BBtJtdbnjqjpNW4hHezQnFMqTWPkOc=;
	b=u/Qu9tD7aupQX452ObUDMFP0Gn2zHsFY9lUSiY6ulSrSMz9svlZltC7XRTOL0/VCE8
	pvxOU15qGo56VFR+izFdrgUTxS8Jpd6EInRR+UAvO593Y0pvY/TOpm4iCUeBfq79SzUh
	KeELVHNrG8lpikchr6yPlkIs3HIKSEj8a8W4ejji+LKXURLamC/bjNDcUe+Tr+UhCgq1
	8PZC1XpzW4UWHraJB7+/ikrQF13gVO0JiNaWhMxqGwNABF/Q+aXd86P8h9FvrGsTtOFa
	zs1L/mFrY9L175vtwnM1YPwCxXjK2rkhqLO/qsng3NxW1+vTG70/adcFCgpDjOVQH/9o
	W7QQ==
X-Received: by 10.68.184.194 with SMTP id ew2mr34334925pbc.100.1390883382569; 
	Mon, 27 Jan 2014 20:29:42 -0800 (PST)
Received: from localhost ([116.227.28.52])
	by mx.google.com with ESMTPSA id pq1sm37329932pbc.8.2014.01.27.20.29.35
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:29:41 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:23 +0800
Message-Id: <1390883313-19313-5-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 04/14] tmem: bugfix in obj allocation path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is a potential bug in the obj allocation path. When parallel callers
allocate an obj and insert it into pool->obj_rb_root, an unexpected obj
might be returned.

Caller A:                            Caller B:

obj_find(oidp) == NULL               obj_find(oidp) == NULL

write_lock(&pool->pool_rwlock)
obj_new():
    objA = tmem_malloc()
    obj_rb_insert(objA)
write_unlock()
                                     write_lock(&pool->pool_rwlock)
                                     obj_new():
                                        objB = tmem_malloc()
                                        obj_rb_insert(objB)
                                     write_unlock()

Continue write data to objA
But in future obj_find(), objB
will always be returned.

The root cause is that the allocation path didn't check the return value of
obj_rb_insert(). This patch fixes that and renames obj_new() to the more
accurate obj_alloc().

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 6ed91b4..61dfd62 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -959,12 +959,11 @@ static int obj_rb_insert(struct rb_root *root, struct tmem_object_root *obj)
  * allocate, initialize, and insert an tmem_object_root
  * (should be called only if find failed)
  */
-static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oidp)
+static struct tmem_object_root * obj_alloc(struct tmem_pool *pool, struct oid *oidp)
 {
     struct tmem_object_root *obj;
 
     ASSERT(pool != NULL);
-    ASSERT_WRITELOCK(&pool->pool_rwlock);
     if ( (obj = tmem_malloc(sizeof(struct tmem_object_root), pool)) == NULL )
         return NULL;
     pool->obj_count++;
@@ -979,9 +978,6 @@ static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oid
     obj->objnode_count = 0;
     obj->pgp_count = 0;
     obj->last_client = TMEM_CLI_ID_NULL;
-    spin_lock(&obj->obj_spinlock);
-    obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj);
-    ASSERT_SPINLOCK(&obj->obj_spinlock);
     return obj;
 }
 
@@ -1552,10 +1548,13 @@ static int do_tmem_put(struct tmem_pool *pool,
 
     ASSERT(pool != NULL);
     client = pool->client;
+    ASSERT(client != NULL);
     ret = client->frozen ? -EFROZEN : -ENOMEM;
     pool->puts++;
+
+refind:
     /* does page already exist (dup)?  if so, handle specially */
-    if ( (obj = obj_find(pool,oidp)) != NULL )
+    if ( (obj = obj_find(pool, oidp)) != NULL )
     {
         if ((pgp = pgp_lookup_in_obj(obj, index)) != NULL)
         {
@@ -1573,12 +1572,22 @@ static int do_tmem_put(struct tmem_pool *pool,
         /* no puts allowed into a frozen pool (except dup puts) */
         if ( client->frozen )
             return ret;
+        if ( (obj = obj_alloc(pool, oidp)) == NULL )
+            return -ENOMEM;
+
         write_lock(&pool->pool_rwlock);
-        if ( (obj = obj_new(pool,oidp)) == NULL )
+        /*
+         * Parallel callers may have already allocated the obj and inserted
+         * it into obj_rb_root before us.
+	 */
+        if (!obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj))
         {
+	    tmem_free(obj, pool);
             write_unlock(&pool->pool_rwlock);
-            return -ENOMEM;
+	    goto refind;
         }
+
+        spin_lock(&obj->obj_spinlock);
         newobj = 1;
         write_unlock(&pool->pool_rwlock);
     }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80JX-0007eM-TG; Tue, 28 Jan 2014 04:30:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80JV-0007dw-R0
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:30:18 +0000
Received: from [85.158.137.68:32590] by server-3.bemta-3.messagelabs.com id
	B9/3E-10658-95237E25; Tue, 28 Jan 2014 04:30:17 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390883414!11712086!1
X-Originating-IP: [209.85.220.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30059 invoked from network); 28 Jan 2014 04:30:16 -0000
Received: from mail-pa0-f51.google.com (HELO mail-pa0-f51.google.com)
	(209.85.220.51)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:30:16 -0000
Received: by mail-pa0-f51.google.com with SMTP id ld10so6880160pab.10
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=2p2W7Bh2f+joMhA/cbzyl2R1k5FIpCafgQS6CH+rIpE=;
	b=wrcI4WJmDg9WM3zuVLvC4IAH6qC3bMOmfjQuWVQd5uDjPQWvDz9QsbsFuadyUwAVNk
	+VCORjp2pj69Hb5TW0eqfAFiIywRTzCNUcnjF9U++6FDHxshclO+RXEB2fqJdH3/QMlP
	irt/S07tIWRyAFZ+FuVyQCZzThAYEbS4fBZQKgyieeaSPZhlvs/nOJh0aUqV1701KD95
	W1BMg98089aFiJopTsLi915f7RzYN4lEPAwxC9TiXc6UyHzswE62eMn3LAnLVbKjIYzw
	CjtsTayRLEaC1y9iNxql9ss6KZW+965M4g0bWJTH1WRvivpCCs8I4QGHLan+GSnZEbPe
	0YvA==
X-Received: by 10.66.138.40 with SMTP id qn8mr6758541pab.154.1390883414030;
	Mon, 27 Jan 2014 20:30:14 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	zc6sm100638654pab.18.2014.01.27.20.30.08 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:13 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:24 +0800
Message-Id: <1390883313-19313-6-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 05/14] tmem: cleanup: remove unneeded parameter
	from pgp_delist()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The parameter "no_eph_lock" is only needed by tmem_evict(). Embed the delist
code directly into tmem_evict() so that the parameter can be dropped. With
this change, the ephemeral list lock can also be released a bit earlier.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   55 +++++++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 25 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 61dfd62..8a6ee84 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -694,7 +694,7 @@ static void pgp_free_from_inv_list(struct client *client, struct tmem_page_descr
 }
 
 /* remove the page from appropriate lists but not from parent object */
-static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
+static void pgp_delist(struct tmem_page_descriptor *pgp)
 {
     struct client *client;
 
@@ -705,8 +705,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     ASSERT(client != NULL);
     if ( !is_persistent(pgp->us.obj->pool) )
     {
-        if ( !no_eph_lock )
-            spin_lock(&eph_lists_spinlock);
+        spin_lock(&eph_lists_spinlock);
         if ( !list_empty(&pgp->us.client_eph_pages) )
             client->eph_count--;
         ASSERT(client->eph_count >= 0);
@@ -715,8 +714,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
             global_eph_count--;
         ASSERT(global_eph_count >= 0);
         list_del_init(&pgp->global_eph_pages);
-        if ( !no_eph_lock )
-            spin_unlock(&eph_lists_spinlock);
+        spin_unlock(&eph_lists_spinlock);
     } else {
         if ( client->live_migrating )
         {
@@ -735,7 +733,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
 }
 
 /* remove page from lists (but not from parent object) and free it */
-static void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
+static void pgp_delete(struct tmem_page_descriptor *pgp)
 {
     uint64_t life;
 
@@ -744,7 +742,7 @@ static void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     ASSERT(pgp->us.obj->pool != NULL);
     life = get_cycles() - pgp->timestamp;
     pgp->us.obj->pool->sum_life_cycles += life;
-    pgp_delist(pgp, no_eph_lock);
+    pgp_delist(pgp);
     pgp_free(pgp,1);
 }
 
@@ -754,7 +752,7 @@ static void pgp_destroy(void *v)
     struct tmem_page_descriptor *pgp = (struct tmem_page_descriptor *)v;
 
     ASSERT_SPINLOCK(&pgp->us.obj->obj_spinlock);
-    pgp_delist(pgp,0);
+    pgp_delist(pgp);
     ASSERT(pgp->us.obj != NULL);
     pgp->us.obj->pgp_count--;
     ASSERT(pgp->us.obj->pgp_count >= 0);
@@ -1303,7 +1301,7 @@ obj_unlock:
 static int tmem_evict(void)
 {
     struct client *client = current->domain->tmem_client;
-    struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
+    struct tmem_page_descriptor *pgp = NULL, *pgp_del;
     struct tmem_object_root *obj;
     struct tmem_pool *pool;
     int ret = 0;
@@ -1314,21 +1312,28 @@ static int tmem_evict(void)
     if ( (client != NULL) && client_over_quota(client) &&
          !list_empty(&client->ephemeral_page_list) )
     {
-        list_for_each_entry_safe(pgp,pgp2,&client->ephemeral_page_list,us.client_eph_pages)
-            if ( tmem_try_to_evict_pgp(pgp,&hold_pool_rwlock) )
+        list_for_each_entry(pgp, &client->ephemeral_page_list, us.client_eph_pages)
+            if ( tmem_try_to_evict_pgp(pgp, &hold_pool_rwlock) )
                 goto found;
-    } else if ( list_empty(&global_ephemeral_page_list) ) {
-        goto out;
-    } else {
-        list_for_each_entry_safe(pgp,pgp2,&global_ephemeral_page_list,global_eph_pages)
-            if ( tmem_try_to_evict_pgp(pgp,&hold_pool_rwlock) )
+    }
+    else if ( !list_empty(&global_ephemeral_page_list) )
+    {
+        list_for_each_entry(pgp, &global_ephemeral_page_list, global_eph_pages)
+            if ( tmem_try_to_evict_pgp(pgp, &hold_pool_rwlock) )
                 goto found;
     }
-
-    ret = 0;
+    spin_unlock(&eph_lists_spinlock);
     goto out;
 
 found:
+    list_del_init(&pgp->us.client_eph_pages);
+    client->eph_count--;
+    list_del_init(&pgp->global_eph_pages);
+    global_eph_count--;
+    ASSERT(global_eph_count >= 0);
+    ASSERT(client->eph_count >= 0);
+    spin_unlock(&eph_lists_spinlock);
+
     ASSERT(pgp != NULL);
     obj = pgp->us.obj;
     ASSERT(obj != NULL);
@@ -1343,7 +1348,9 @@ found:
         ASSERT(pgp->pcd->pgp_ref_count == 1 || pgp->eviction_attempted);
         pcd_disassociate(pgp,pool,1);
     }
-    pgp_delete(pgp,1);
+
+    /* pgp is already delisted, so call pgp_free directly. */
+    pgp_free(pgp, 1);
     if ( obj->pgp_count == 0 )
     {
         ASSERT_WRITELOCK(&pool->pool_rwlock);
@@ -1355,9 +1362,7 @@ found:
         write_unlock(&pool->pool_rwlock);
     evicted_pgs++;
     ret = 1;
-
 out:
-    spin_unlock(&eph_lists_spinlock);
     return ret;
 }
 
@@ -1524,7 +1529,7 @@ failed_dup:
 cleanup:
     pgpfound = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgpfound == pgp);
-    pgp_delete(pgpfound,0);
+    pgp_delete(pgpfound);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -1684,7 +1689,7 @@ del_pgp_from_obj:
     pgp_delete_from_obj(obj, pgp->index);
 
 free_pgp:
-    pgp_delete(pgp, 0);
+    pgp_delete(pgp);
 unlock_obj:
     if ( newobj )
     {
@@ -1743,7 +1748,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     {
         if ( !is_shared(pool) )
         {
-            pgp_delete(pgp,0);
+            pgp_delete(pgp);
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
@@ -1793,7 +1798,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
         spin_unlock(&obj->obj_spinlock);
         goto out;
     }
-    pgp_delete(pgp,0);
+    pgp_delete(pgp);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Jk-0007ha-KL; Tue, 28 Jan 2014 04:30:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Jj-0007h0-0I
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:30:31 +0000
Received: from [85.158.139.211:53686] by server-14.bemta-5.messagelabs.com id
	52/27-24200-66237E25; Tue, 28 Jan 2014 04:30:30 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390883427!12314957!1
X-Originating-IP: [209.85.160.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24542 invoked from network); 28 Jan 2014 04:30:29 -0000
Received: from mail-pb0-f42.google.com (HELO mail-pb0-f42.google.com)
	(209.85.160.42)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:30:29 -0000
Received: by mail-pb0-f42.google.com with SMTP id jt11so6795173pbb.29
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=/TINx0TS/eeiXsNTk6yl4IFu3xrM7eg6AdMFVhSlYr0=;
	b=TJldC3J1ras1xoJPl5Ux1p6EoK7I2p8z0plKM6VDO2TrmnN/JB5BLC8dvjclVTYnFl
	c9/mknDp7RGMndPpH4InyPMdkQwXXNlyXOaRzS4e5ebP9/krF9vnKk4LqvSvqR7hik7H
	7EZDd8AD0fkstprC/UNi2GbJc7t/oDJCPGnKw5U7k3mGopBmWquhDggFACXjUk/nMDhx
	rsHBIaW6/TQN97oyrXhc/cS2jF3vi7W4b2ioJ7pTQ0n0G9/2wox1QC+xq1B7F24tG2QD
	9neyCb49O2GS1UAHitjnpfH+KfC3r2d6fhDRZuuGUUgTqdiHIxNBIfiFSap4urntPZur
	uYog==
X-Received: by 10.68.232.132 with SMTP id to4mr6963574pbc.141.1390883427341;
	Mon, 27 Jan 2014 20:30:27 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	qf7sm100671533pac.14.2014.01.27.20.30.19 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:26 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:25 +0800
Message-Id: <1390883313-19313-7-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 06/14] tmem: cleanup: remove unneeded parameter
	from pgp_free()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The only effect of the "from_delete" parameter to pgp_free() is one line of
ASSERT(). This patch removes the parameter and moves the ASSERT() into the
pgp_delete() caller, making the code cleaner.
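
The shape of this refactor can be sketched outside of Xen: a boolean
parameter that only gates a sanity check is dropped, and the check is
hoisted into the one caller that can actually guarantee the invariant.
Everything below (struct item, item_free(), item_delete(), item_destroy(),
the frees counter) is a hypothetical stand-in, not the tmem code:

```c
#include <assert.h>
#include <stdlib.h>

struct item { int in_parent; };

static int frees;                       /* test hook: count completed frees */

/* After the refactor: no "from_delete" flag, so the callee never branches
 * on who is calling it. */
static void item_free(struct item *it)
{
    assert(it != NULL);
    free(it);
    frees++;
}

/* The delete path detaches the item from its parent first, so it can make
 * the stronger assertion itself before freeing. */
static void item_delete(struct item *it)
{
    it->in_parent = 0;                  /* detach from parent structure */
    assert(it->in_parent == 0);         /* check formerly inside item_free() */
    item_free(it);
}

/* The destroy path has no such invariant and just frees. */
static void item_destroy(struct item *it)
{
    item_free(it);
}
```

The callee's interface gets simpler and every caller documents its own
precondition at the call site.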

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 8a6ee84..9cfbca3 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -654,15 +654,14 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
     pgp->size = -1;
 }
 
-static void pgp_free(struct tmem_page_descriptor *pgp, int from_delete)
+static void pgp_free(struct tmem_page_descriptor *pgp)
 {
     struct tmem_pool *pool = NULL;
 
     ASSERT(pgp->us.obj != NULL);
-    ASSERT(pgp->us.obj->pool->client != NULL);
-    if ( from_delete )
-        ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
     ASSERT(pgp->us.obj->pool != NULL);
+    ASSERT(pgp->us.obj->pool->client != NULL);
+
     pool = pgp->us.obj->pool;
     if ( !is_persistent(pool) )
     {
@@ -743,7 +742,8 @@ static void pgp_delete(struct tmem_page_descriptor *pgp)
     life = get_cycles() - pgp->timestamp;
     pgp->us.obj->pool->sum_life_cycles += life;
     pgp_delist(pgp);
-    pgp_free(pgp,1);
+    ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
+    pgp_free(pgp);
 }
 
 /* called only indirectly by radix_tree_destroy */
@@ -756,7 +756,7 @@ static void pgp_destroy(void *v)
     ASSERT(pgp->us.obj != NULL);
     pgp->us.obj->pgp_count--;
     ASSERT(pgp->us.obj->pgp_count >= 0);
-    pgp_free(pgp,0);
+    pgp_free(pgp);
 }
 
 static int pgp_add_to_obj(struct tmem_object_root *obj, uint32_t index, struct tmem_page_descriptor *pgp)
@@ -1350,7 +1350,7 @@ found:
     }
 
     /* pgp is already delisted, so call pgp_free directly. */
-    pgp_free(pgp, 1);
+    pgp_free(pgp);
     if ( obj->pgp_count == 0 )
     {
         ASSERT_WRITELOCK(&pool->pool_rwlock);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Jan 28 04:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80JX-0007eM-TG; Tue, 28 Jan 2014 04:30:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80JV-0007dw-R0
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:30:18 +0000
Received: from [85.158.137.68:32590] by server-3.bemta-3.messagelabs.com id
	B9/3E-10658-95237E25; Tue, 28 Jan 2014 04:30:17 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390883414!11712086!1
X-Originating-IP: [209.85.220.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30059 invoked from network); 28 Jan 2014 04:30:16 -0000
Received: from mail-pa0-f51.google.com (HELO mail-pa0-f51.google.com)
	(209.85.220.51)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:30:16 -0000
Received: by mail-pa0-f51.google.com with SMTP id ld10so6880160pab.10
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=2p2W7Bh2f+joMhA/cbzyl2R1k5FIpCafgQS6CH+rIpE=;
	b=wrcI4WJmDg9WM3zuVLvC4IAH6qC3bMOmfjQuWVQd5uDjPQWvDz9QsbsFuadyUwAVNk
	+VCORjp2pj69Hb5TW0eqfAFiIywRTzCNUcnjF9U++6FDHxshclO+RXEB2fqJdH3/QMlP
	irt/S07tIWRyAFZ+FuVyQCZzThAYEbS4fBZQKgyieeaSPZhlvs/nOJh0aUqV1701KD95
	W1BMg98089aFiJopTsLi915f7RzYN4lEPAwxC9TiXc6UyHzswE62eMn3LAnLVbKjIYzw
	CjtsTayRLEaC1y9iNxql9ss6KZW+965M4g0bWJTH1WRvivpCCs8I4QGHLan+GSnZEbPe
	0YvA==
X-Received: by 10.66.138.40 with SMTP id qn8mr6758541pab.154.1390883414030;
	Mon, 27 Jan 2014 20:30:14 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	zc6sm100638654pab.18.2014.01.27.20.30.08 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:13 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:24 +0800
Message-Id: <1390883313-19313-6-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 05/14] tmem: cleanup: remove unneeded parameter
	from pgp_delist()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The parameter "no_eph_lock" is only needed by tmem_evict(). Embed the delist
code directly into tmem_evict() so that the parameter can be dropped. With
this change, the ephemeral list lock can also be released a bit earlier.
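
The resulting locking pattern — search and delist the victim while holding
the ephemeral list lock, then drop the lock before the free work — can be
sketched with a toy circular list. A pthread mutex stands in for the Xen
spinlock; all names here are illustrative, not the tmem code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct page { struct page *prev, *next; int evictable; };

static pthread_mutex_t eph_lock = PTHREAD_MUTEX_INITIALIZER;
static struct page list_head = { &list_head, &list_head, 0 };
static int eph_count;

static void list_add_page(struct page *p)
{
    pthread_mutex_lock(&eph_lock);
    p->next = list_head.next;
    p->prev = &list_head;
    list_head.next->prev = p;
    list_head.next = p;
    eph_count++;
    pthread_mutex_unlock(&eph_lock);
}

/* Evict one page: delist under the lock, free after dropping it. */
static int evict_one(void)
{
    struct page *victim = NULL;

    pthread_mutex_lock(&eph_lock);
    for ( struct page *p = list_head.next; p != &list_head; p = p->next )
        if ( p->evictable )
        {
            victim = p;
            break;
        }
    if ( victim )
    {
        victim->prev->next = victim->next;   /* delist while locked */
        victim->next->prev = victim->prev;
        eph_count--;
        assert(eph_count >= 0);
    }
    pthread_mutex_unlock(&eph_lock);         /* lock released early */

    if ( !victim )
        return 0;
    free(victim);                            /* free outside the lock */
    return 1;
}
```

Because the list pointers are severed before the lock is dropped, no other
path can observe the victim, and the (possibly slow) free work no longer
extends the lock hold time.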

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   55 +++++++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 25 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 61dfd62..8a6ee84 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -694,7 +694,7 @@ static void pgp_free_from_inv_list(struct client *client, struct tmem_page_descr
 }
 
 /* remove the page from appropriate lists but not from parent object */
-static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
+static void pgp_delist(struct tmem_page_descriptor *pgp)
 {
     struct client *client;
 
@@ -705,8 +705,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     ASSERT(client != NULL);
     if ( !is_persistent(pgp->us.obj->pool) )
     {
-        if ( !no_eph_lock )
-            spin_lock(&eph_lists_spinlock);
+        spin_lock(&eph_lists_spinlock);
         if ( !list_empty(&pgp->us.client_eph_pages) )
             client->eph_count--;
         ASSERT(client->eph_count >= 0);
@@ -715,8 +714,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
             global_eph_count--;
         ASSERT(global_eph_count >= 0);
         list_del_init(&pgp->global_eph_pages);
-        if ( !no_eph_lock )
-            spin_unlock(&eph_lists_spinlock);
+        spin_unlock(&eph_lists_spinlock);
     } else {
         if ( client->live_migrating )
         {
@@ -735,7 +733,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
 }
 
 /* remove page from lists (but not from parent object) and free it */
-static void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
+static void pgp_delete(struct tmem_page_descriptor *pgp)
 {
     uint64_t life;
 
@@ -744,7 +742,7 @@ static void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     ASSERT(pgp->us.obj->pool != NULL);
     life = get_cycles() - pgp->timestamp;
     pgp->us.obj->pool->sum_life_cycles += life;
-    pgp_delist(pgp, no_eph_lock);
+    pgp_delist(pgp);
     pgp_free(pgp,1);
 }
 
@@ -754,7 +752,7 @@ static void pgp_destroy(void *v)
     struct tmem_page_descriptor *pgp = (struct tmem_page_descriptor *)v;
 
     ASSERT_SPINLOCK(&pgp->us.obj->obj_spinlock);
-    pgp_delist(pgp,0);
+    pgp_delist(pgp);
     ASSERT(pgp->us.obj != NULL);
     pgp->us.obj->pgp_count--;
     ASSERT(pgp->us.obj->pgp_count >= 0);
@@ -1303,7 +1301,7 @@ obj_unlock:
 static int tmem_evict(void)
 {
     struct client *client = current->domain->tmem_client;
-    struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
+    struct tmem_page_descriptor *pgp = NULL, *pgp_del;
     struct tmem_object_root *obj;
     struct tmem_pool *pool;
     int ret = 0;
@@ -1314,21 +1312,28 @@ static int tmem_evict(void)
     if ( (client != NULL) && client_over_quota(client) &&
          !list_empty(&client->ephemeral_page_list) )
     {
-        list_for_each_entry_safe(pgp,pgp2,&client->ephemeral_page_list,us.client_eph_pages)
-            if ( tmem_try_to_evict_pgp(pgp,&hold_pool_rwlock) )
+        list_for_each_entry(pgp, &client->ephemeral_page_list, us.client_eph_pages)
+            if ( tmem_try_to_evict_pgp(pgp, &hold_pool_rwlock) )
                 goto found;
-    } else if ( list_empty(&global_ephemeral_page_list) ) {
-        goto out;
-    } else {
-        list_for_each_entry_safe(pgp,pgp2,&global_ephemeral_page_list,global_eph_pages)
-            if ( tmem_try_to_evict_pgp(pgp,&hold_pool_rwlock) )
+    }
+    else if ( !list_empty(&global_ephemeral_page_list) )
+    {
+        list_for_each_entry(pgp, &global_ephemeral_page_list, global_eph_pages)
+            if ( tmem_try_to_evict_pgp(pgp, &hold_pool_rwlock) )
                 goto found;
     }
-
-    ret = 0;
+    spin_unlock(&eph_lists_spinlock);
     goto out;
 
 found:
+    list_del_init(&pgp->us.client_eph_pages);
+    client->eph_count--;
+    list_del_init(&pgp->global_eph_pages);
+    global_eph_count--;
+    ASSERT(global_eph_count >= 0);
+    ASSERT(client->eph_count >= 0);
+    spin_unlock(&eph_lists_spinlock);
+
     ASSERT(pgp != NULL);
     obj = pgp->us.obj;
     ASSERT(obj != NULL);
@@ -1343,7 +1348,9 @@ found:
         ASSERT(pgp->pcd->pgp_ref_count == 1 || pgp->eviction_attempted);
         pcd_disassociate(pgp,pool,1);
     }
-    pgp_delete(pgp,1);
+
+    /* pgp is already delisted, so call pgp_free directly. */
+    pgp_free(pgp, 1);
     if ( obj->pgp_count == 0 )
     {
         ASSERT_WRITELOCK(&pool->pool_rwlock);
@@ -1355,9 +1362,7 @@ found:
         write_unlock(&pool->pool_rwlock);
     evicted_pgs++;
     ret = 1;
-
 out:
-    spin_unlock(&eph_lists_spinlock);
     return ret;
 }
 
@@ -1524,7 +1529,7 @@ failed_dup:
 cleanup:
     pgpfound = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgpfound == pgp);
-    pgp_delete(pgpfound,0);
+    pgp_delete(pgpfound);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -1684,7 +1689,7 @@ del_pgp_from_obj:
     pgp_delete_from_obj(obj, pgp->index);
 
 free_pgp:
-    pgp_delete(pgp, 0);
+    pgp_delete(pgp);
 unlock_obj:
     if ( newobj )
     {
@@ -1743,7 +1748,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     {
         if ( !is_shared(pool) )
         {
-            pgp_delete(pgp,0);
+            pgp_delete(pgp);
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
@@ -1793,7 +1798,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
         spin_unlock(&obj->obj_spinlock);
         goto out;
     }
-    pgp_delete(pgp,0);
+    pgp_delete(pgp);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Jan 28 04:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80K3-0007o5-6O; Tue, 28 Jan 2014 04:30:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80K2-0007nO-0I
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:30:50 +0000
Received: from [193.109.254.147:28484] by server-9.bemta-14.messagelabs.com id
	3B/AC-13957-97237E25; Tue, 28 Jan 2014 04:30:49 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390883446!249135!1
X-Originating-IP: [209.85.192.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21091 invoked from network); 28 Jan 2014 04:30:48 -0000
Received: from mail-pd0-f172.google.com (HELO mail-pd0-f172.google.com)
	(209.85.192.172)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:30:48 -0000
Received: by mail-pd0-f172.google.com with SMTP id p10so6596878pdj.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=eoQ6Mw2M9yOPHearODuZfZrIlZt/7KrlZ7c9lC3hFn0=;
	b=xZn9e7PLSf89Q/w3m9wciVfm0GVur3jfy3aCcKX85MHhHhIa7/n375I8D6R2SAxyqO
	VN/Yz26FDqYRR13Z6D8tWfupyq8qjNB0OCunAgqAk/y350apdstT736TT8UwNVmqui5O
	wiCHwxSTMqzTF+1RhxRBtICQV3vDpt5ZRIerPU8H4nyl7AClH3TyYHlyqJFhXD02ppn+
	UujkzD72/N4KtvOIthrtU8ZMaLeMN78So913faNNEO2lnHPvpNMXmSYFiOXr4mPTv6TI
	wwgPrqLd5NpYQSmZc26UyUS9BDWL7G8QCbwHBYJfIhe6TUGoZgFou9QZ6GT4hkFbKHS+
	22AA==
X-Received: by 10.68.162.131 with SMTP id ya3mr34161339pbb.102.1390883446030; 
	Mon, 27 Jan 2014 20:30:46 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	de3sm37269104pbb.33.2014.01.27.20.30.39 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:45 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:26 +0800
Message-Id: <1390883313-19313-8-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 07/14] tmem: cleanup the pgp free path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There are several functions related to freeing a pgp, but their relationships
are not clear enough to follow. This patch cleans them up by removing
pgp_delist() and pgp_free_from_inv_list().

The call trace is simple now:
pgp_delist_free()
    > pgp_free()
        > __pgp_free()
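
As a rough illustration of that layering — only the three function names
come from the patch; the struct definitions and function bodies below are
toy stand-ins, not the actual tmem implementation:

```c
#include <assert.h>
#include <stdlib.h>

struct pool { int freed; };              /* toy pool: counts raw frees */
struct pgp  { struct pool *pool; int on_list; };

/* Lowest layer: hand the descriptor's memory back, nothing else. */
static void __pgp_free(struct pgp *pgp, struct pool *pool)
{
    pool->freed++;
    free(pgp);
}

/* Middle layer: resolve the pool, check invariants, then raw free. */
static void pgp_free(struct pgp *pgp)
{
    struct pool *pool = pgp->pool;

    assert(pool != NULL);
    __pgp_free(pgp, pool);
}

/* Top layer: unlink from global/pool/client lists, then free. */
static void pgp_delist_free(struct pgp *pgp)
{
    pgp->on_list = 0;                    /* delist */
    pgp_free(pgp);
}
```

Each layer adds exactly one responsibility, so a caller that has already
delisted (as tmem_evict() does) can enter at pgp_free() instead.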

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   66 +++++++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 38 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 9cfbca3..91096eb 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -654,6 +654,13 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
     pgp->size = -1;
 }
 
+static void __pgp_free(struct tmem_page_descriptor *pgp, struct tmem_pool *pool)
+{
+    pgp->us.obj = NULL;
+    pgp->index = -1;
+    tmem_free(pgp, pool);
+}
+
 static void pgp_free(struct tmem_page_descriptor *pgp)
 {
     struct tmem_pool *pool = NULL;
@@ -678,30 +685,22 @@ static void pgp_free(struct tmem_page_descriptor *pgp)
         pgp->pool_id = pool->pool_id;
         return;
     }
-    pgp->us.obj = NULL;
-    pgp->index = -1;
-    tmem_free(pgp, pool);
-}
-
-static void pgp_free_from_inv_list(struct client *client, struct tmem_page_descriptor *pgp)
-{
-    struct tmem_pool *pool = client->pools[pgp->pool_id];
-
-    pgp->us.obj = NULL;
-    pgp->index = -1;
-    tmem_free(pgp, pool);
+    __pgp_free(pgp, pool);
 }
 
-/* remove the page from appropriate lists but not from parent object */
-static void pgp_delist(struct tmem_page_descriptor *pgp)
+/* remove pgp from global/pool/client lists and free it */
+static void pgp_delist_free(struct tmem_page_descriptor *pgp)
 {
     struct client *client;
+    uint64_t life;
 
     ASSERT(pgp != NULL);
     ASSERT(pgp->us.obj != NULL);
     ASSERT(pgp->us.obj->pool != NULL);
     client = pgp->us.obj->pool->client;
     ASSERT(client != NULL);
+
+    /* Delist pgp */
     if ( !is_persistent(pgp->us.obj->pool) )
     {
         spin_lock(&eph_lists_spinlock);
@@ -714,7 +713,9 @@ static void pgp_delist(struct tmem_page_descriptor *pgp)
         ASSERT(global_eph_count >= 0);
         list_del_init(&pgp->global_eph_pages);
         spin_unlock(&eph_lists_spinlock);
-    } else {
+    }
+    else
+    {
         if ( client->live_migrating )
         {
             spin_lock(&pers_lists_spinlock);
@@ -723,26 +724,18 @@ static void pgp_delist(struct tmem_page_descriptor *pgp)
             if ( pgp != pgp->us.obj->pool->cur_pgp )
                 list_del_init(&pgp->us.pool_pers_pages);
             spin_unlock(&pers_lists_spinlock);
-        } else {
+        }
+        else
+        {
             spin_lock(&pers_lists_spinlock);
             list_del_init(&pgp->us.pool_pers_pages);
             spin_unlock(&pers_lists_spinlock);
         }
     }
-}
-
-/* remove page from lists (but not from parent object) and free it */
-static void pgp_delete(struct tmem_page_descriptor *pgp)
-{
-    uint64_t life;
-
-    ASSERT(pgp != NULL);
-    ASSERT(pgp->us.obj != NULL);
-    ASSERT(pgp->us.obj->pool != NULL);
     life = get_cycles() - pgp->timestamp;
     pgp->us.obj->pool->sum_life_cycles += life;
-    pgp_delist(pgp);
-    ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
+
+    /* free pgp */
     pgp_free(pgp);
 }
 
@@ -751,12 +744,8 @@ static void pgp_destroy(void *v)
 {
     struct tmem_page_descriptor *pgp = (struct tmem_page_descriptor *)v;
 
-    ASSERT_SPINLOCK(&pgp->us.obj->obj_spinlock);
-    pgp_delist(pgp);
-    ASSERT(pgp->us.obj != NULL);
     pgp->us.obj->pgp_count--;
-    ASSERT(pgp->us.obj->pgp_count >= 0);
-    pgp_free(pgp);
+    pgp_delist_free(pgp);
 }
 
 static int pgp_add_to_obj(struct tmem_object_root *obj, uint32_t index, struct tmem_page_descriptor *pgp)
@@ -1326,6 +1315,7 @@ static int tmem_evict(void)
     goto out;
 
 found:
+    /* Delist */
     list_del_init(&pgp->us.client_eph_pages);
     client->eph_count--;
     list_del_init(&pgp->global_eph_pages);
@@ -1529,7 +1519,7 @@ failed_dup:
 cleanup:
     pgpfound = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgpfound == pgp);
-    pgp_delete(pgpfound);
+    pgp_delist_free(pgpfound);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -1689,7 +1679,7 @@ del_pgp_from_obj:
     pgp_delete_from_obj(obj, pgp->index);
 
 free_pgp:
-    pgp_delete(pgp);
+    pgp_free(pgp);
 unlock_obj:
     if ( newobj )
     {
@@ -1748,7 +1738,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     {
         if ( !is_shared(pool) )
         {
-            pgp_delete(pgp);
+            pgp_delist_free(pgp);
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
@@ -1798,7 +1788,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
         spin_unlock(&obj->obj_spinlock);
         goto out;
     }
-    pgp_delete(pgp);
+    pgp_delist_free(pgp);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -2373,7 +2363,7 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
         if ( !list_empty(&client->persistent_invalidated_list) )
             list_for_each_entry_safe(pgp,pgp2,
               &client->persistent_invalidated_list, client_inv_pages)
-                pgp_free_from_inv_list(client,pgp);
+                __pgp_free(pgp, client->pools[pgp->pool_id]);
         client->frozen = client->was_frozen;
         rc = 0;
         break;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80K3-0007o5-6O; Tue, 28 Jan 2014 04:30:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80K2-0007nO-0I
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:30:50 +0000
Received: from [193.109.254.147:28484] by server-9.bemta-14.messagelabs.com id
	3B/AC-13957-97237E25; Tue, 28 Jan 2014 04:30:49 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390883446!249135!1
X-Originating-IP: [209.85.192.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21091 invoked from network); 28 Jan 2014 04:30:48 -0000
Received: from mail-pd0-f172.google.com (HELO mail-pd0-f172.google.com)
	(209.85.192.172)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:30:48 -0000
Received: by mail-pd0-f172.google.com with SMTP id p10so6596878pdj.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=eoQ6Mw2M9yOPHearODuZfZrIlZt/7KrlZ7c9lC3hFn0=;
	b=xZn9e7PLSf89Q/w3m9wciVfm0GVur3jfy3aCcKX85MHhHhIa7/n375I8D6R2SAxyqO
	VN/Yz26FDqYRR13Z6D8tWfupyq8qjNB0OCunAgqAk/y350apdstT736TT8UwNVmqui5O
	wiCHwxSTMqzTF+1RhxRBtICQV3vDpt5ZRIerPU8H4nyl7AClH3TyYHlyqJFhXD02ppn+
	UujkzD72/N4KtvOIthrtU8ZMaLeMN78So913faNNEO2lnHPvpNMXmSYFiOXr4mPTv6TI
	wwgPrqLd5NpYQSmZc26UyUS9BDWL7G8QCbwHBYJfIhe6TUGoZgFou9QZ6GT4hkFbKHS+
	22AA==
X-Received: by 10.68.162.131 with SMTP id ya3mr34161339pbb.102.1390883446030; 
	Mon, 27 Jan 2014 20:30:46 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	de3sm37269104pbb.33.2014.01.27.20.30.39 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:45 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:26 +0800
Message-Id: <1390883313-19313-8-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 07/14] tmem: cleanup the pgp free path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There are several functions involved in freeing a pgp, but their relationships are
not clear enough to follow. Clean this up by removing pgp_delist() and
pgp_free_from_inv_list().

The call chain is now simply:
pgp_delist_free()
    > pgp_free()
        > __pgp_free()

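The simplified chain can be illustrated with a stripped-down, self-contained sketch. The struct definitions below are hypothetical stand-ins (the real ones live in xen/common/tmem.c), with the list manipulation and pool lookup reduced to counters:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the tmem structures; the real definitions
 * live in xen/common/tmem.c. */
struct pool {
    int delisted;   /* counts delist operations, for illustration only */
    int freed;      /* counts completed frees */
};

struct pgp {
    struct pool *pool;
    int index;
};

/* __pgp_free(): the final teardown step the patch factors out. */
static void __pgp_free(struct pgp *pgp, struct pool *pool)
{
    pgp->index = -1;
    pool->freed++;
    free(pgp);
}

/* pgp_free(): resolves the owning pool, then delegates to __pgp_free(). */
static void pgp_free(struct pgp *pgp)
{
    struct pool *pool = pgp->pool;
    __pgp_free(pgp, pool);
}

/* pgp_delist_free(): removes the pgp from its bookkeeping lists
 * (modeled here by a counter), then frees it. */
static void pgp_delist_free(struct pgp *pgp)
{
    pgp->pool->delisted++;  /* stands in for the list_del_init() calls */
    pgp_free(pgp);
}
```

Callers that only need the final free (e.g. the free_pgp error label) call pgp_free() directly; paths tearing down a live, listed page go through pgp_delist_free().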
Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   66 +++++++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 38 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 9cfbca3..91096eb 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -654,6 +654,13 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
     pgp->size = -1;
 }
 
+static void __pgp_free(struct tmem_page_descriptor *pgp, struct tmem_pool *pool)
+{
+    pgp->us.obj = NULL;
+    pgp->index = -1;
+    tmem_free(pgp, pool);
+}
+
 static void pgp_free(struct tmem_page_descriptor *pgp)
 {
     struct tmem_pool *pool = NULL;
@@ -678,30 +685,22 @@ static void pgp_free(struct tmem_page_descriptor *pgp)
         pgp->pool_id = pool->pool_id;
         return;
     }
-    pgp->us.obj = NULL;
-    pgp->index = -1;
-    tmem_free(pgp, pool);
-}
-
-static void pgp_free_from_inv_list(struct client *client, struct tmem_page_descriptor *pgp)
-{
-    struct tmem_pool *pool = client->pools[pgp->pool_id];
-
-    pgp->us.obj = NULL;
-    pgp->index = -1;
-    tmem_free(pgp, pool);
+    __pgp_free(pgp, pool);
 }
 
-/* remove the page from appropriate lists but not from parent object */
-static void pgp_delist(struct tmem_page_descriptor *pgp)
+/* remove pgp from global/pool/client lists and free it */
+static void pgp_delist_free(struct tmem_page_descriptor *pgp)
 {
     struct client *client;
+    uint64_t life;
 
     ASSERT(pgp != NULL);
     ASSERT(pgp->us.obj != NULL);
     ASSERT(pgp->us.obj->pool != NULL);
     client = pgp->us.obj->pool->client;
     ASSERT(client != NULL);
+
+    /* Delist pgp */
     if ( !is_persistent(pgp->us.obj->pool) )
     {
         spin_lock(&eph_lists_spinlock);
@@ -714,7 +713,9 @@ static void pgp_delist(struct tmem_page_descriptor *pgp)
         ASSERT(global_eph_count >= 0);
         list_del_init(&pgp->global_eph_pages);
         spin_unlock(&eph_lists_spinlock);
-    } else {
+    }
+    else
+    {
         if ( client->live_migrating )
         {
             spin_lock(&pers_lists_spinlock);
@@ -723,26 +724,18 @@ static void pgp_delist(struct tmem_page_descriptor *pgp)
             if ( pgp != pgp->us.obj->pool->cur_pgp )
                 list_del_init(&pgp->us.pool_pers_pages);
             spin_unlock(&pers_lists_spinlock);
-        } else {
+        }
+        else
+        {
             spin_lock(&pers_lists_spinlock);
             list_del_init(&pgp->us.pool_pers_pages);
             spin_unlock(&pers_lists_spinlock);
         }
     }
-}
-
-/* remove page from lists (but not from parent object) and free it */
-static void pgp_delete(struct tmem_page_descriptor *pgp)
-{
-    uint64_t life;
-
-    ASSERT(pgp != NULL);
-    ASSERT(pgp->us.obj != NULL);
-    ASSERT(pgp->us.obj->pool != NULL);
     life = get_cycles() - pgp->timestamp;
     pgp->us.obj->pool->sum_life_cycles += life;
-    pgp_delist(pgp);
-    ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
+
+    /* free pgp */
     pgp_free(pgp);
 }
 
@@ -751,12 +744,8 @@ static void pgp_destroy(void *v)
 {
     struct tmem_page_descriptor *pgp = (struct tmem_page_descriptor *)v;
 
-    ASSERT_SPINLOCK(&pgp->us.obj->obj_spinlock);
-    pgp_delist(pgp);
-    ASSERT(pgp->us.obj != NULL);
     pgp->us.obj->pgp_count--;
-    ASSERT(pgp->us.obj->pgp_count >= 0);
-    pgp_free(pgp);
+    pgp_delist_free(pgp);
 }
 
 static int pgp_add_to_obj(struct tmem_object_root *obj, uint32_t index, struct tmem_page_descriptor *pgp)
@@ -1326,6 +1315,7 @@ static int tmem_evict(void)
     goto out;
 
 found:
+    /* Delist */
     list_del_init(&pgp->us.client_eph_pages);
     client->eph_count--;
     list_del_init(&pgp->global_eph_pages);
@@ -1529,7 +1519,7 @@ failed_dup:
 cleanup:
     pgpfound = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgpfound == pgp);
-    pgp_delete(pgpfound);
+    pgp_delist_free(pgpfound);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -1689,7 +1679,7 @@ del_pgp_from_obj:
     pgp_delete_from_obj(obj, pgp->index);
 
 free_pgp:
-    pgp_delete(pgp);
+    pgp_free(pgp);
 unlock_obj:
     if ( newobj )
     {
@@ -1748,7 +1738,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     {
         if ( !is_shared(pool) )
         {
-            pgp_delete(pgp);
+            pgp_delist_free(pgp);
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
@@ -1798,7 +1788,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
         spin_unlock(&obj->obj_spinlock);
         goto out;
     }
-    pgp_delete(pgp);
+    pgp_delist_free(pgp);
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
@@ -2373,7 +2363,7 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
         if ( !list_empty(&client->persistent_invalidated_list) )
             list_for_each_entry_safe(pgp,pgp2,
               &client->persistent_invalidated_list, client_inv_pages)
-                pgp_free_from_inv_list(client,pgp);
+                __pgp_free(pgp, client->pools[pgp->pool_id]);
         client->frozen = client->was_frozen;
         rc = 0;
         break;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:31:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80KF-0007tN-MS; Tue, 28 Jan 2014 04:31:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80KE-0007sK-CM
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:31:02 +0000
Received: from [85.158.143.35:30414] by server-3.bemta-4.messagelabs.com id
	41/02-32360-58237E25; Tue, 28 Jan 2014 04:31:01 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390883459!1208933!1
X-Originating-IP: [209.85.220.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5003 invoked from network); 28 Jan 2014 04:31:01 -0000
Received: from mail-pa0-f50.google.com (HELO mail-pa0-f50.google.com)
	(209.85.220.50)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:31:01 -0000
Received: by mail-pa0-f50.google.com with SMTP id kp14so6812121pab.23
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:30:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=NHtrpY+14TEqmJPc6ozbyLN9A06mkVfqYyh2M/fkSOg=;
	b=ZkFugULVvekZt12V4FqZPEHQuY3Pimuqm5VPtoaljs2pke8AoW7EiS/FZPxJiXDTpr
	ObZacpRxDTKKqYKlI77DEGXswdo6Z/hc3FcQUYbqDec/eFYT9v5GSKde4WPI81zAUy/p
	1kzzwy4dD1/5dxFaSQMpPJV7rn5GmiTe3SLLWI+7+vhm/iVZnWMWf/SXs4zgwR8Y+pCV
	KsCKIaRydw9q/iSidSk9QKjhPrQn+SY8NI5aOTcl8Fwoq74nHhxYhFQ1lswue8z6jR8o
	v7YVNnFkhNAy5qsXHbYrjJTOg+9atKzbaSLgqDf/ok8vTyNeRDkV7j/7rdIxxt1krhmx
	cRtA==
X-Received: by 10.68.201.97 with SMTP id jz1mr5354071pbc.26.1390883458857;
	Mon, 27 Jan 2014 20:30:58 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	y9sm100697178pas.10.2014.01.27.20.30.52 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:30:58 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:27 +0800
Message-Id: <1390883313-19313-9-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 08/14] tmem: drop oneline function
	client_freeze()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

client_freeze() only sets client->frozen = freeze; callers can do this
directly, so drop the wrapper.
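As a minimal illustration, here is a sketch of the "freeze all clients" loop in tmemc_freeze_pools() after the change, with a plain singly linked list standing in for Xen's list_for_each_entry() and a hypothetical struct client:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in; the real struct client is in xen/common/tmem.c. */
struct client {
    int frozen;
    struct client *next;
};

/* With the one-line client_freeze() wrapper gone, the field is
 * assigned directly in the loop body. */
static void freeze_all(struct client *head, int freeze)
{
    struct client *c;

    for ( c = head; c != NULL; c = c->next )
        c->frozen = freeze;
}
```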

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |    9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 91096eb..70c80ab 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1236,11 +1236,6 @@ static bool_t client_over_quota(struct client *client)
              ((total*100L) / client->weight) );
 }
 
-static void client_freeze(struct client *client, int freeze)
-{
-    client->frozen = freeze;
-}
-
 /************ MEMORY REVOCATION ROUTINES *******************************/
 
 static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *hold_pool_rwlock)
@@ -1988,14 +1983,14 @@ static int tmemc_freeze_pools(domid_t cli_id, int arg)
     if ( cli_id == TMEM_CLI_ID_NULL )
     {
         list_for_each_entry(client,&global_client_list,client_list)
-            client_freeze(client,freeze);
+            client->frozen = freeze;
         tmem_client_info("tmem: all pools %s for all %ss\n", s, tmem_client_str);
     }
     else
     {
         if ( (client = tmem_client_from_cli_id(cli_id)) == NULL)
             return -1;
-        client_freeze(client,freeze);
+        client->frozen = freeze;
         tmem_client_info("tmem: all pools %s for %s=%d\n",
                          s, tmem_cli_id_str, cli_id);
     }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:31:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:31:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Kc-00083y-3Y; Tue, 28 Jan 2014 04:31:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Kb-00083V-85
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:31:25 +0000
Received: from [193.109.254.147:55922] by server-11.bemta-14.messagelabs.com
	id 78/4F-20576-C9237E25; Tue, 28 Jan 2014 04:31:24 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390883481!251043!1
X-Originating-IP: [209.85.192.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25372 invoked from network); 28 Jan 2014 04:31:23 -0000
Received: from mail-pd0-f182.google.com (HELO mail-pd0-f182.google.com)
	(209.85.192.182)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:31:23 -0000
Received: by mail-pd0-f182.google.com with SMTP id v10so6624704pde.27
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=0JK2z5JP/6lxYfzekKt7Sw3VgSyvkR0FjH60xmZ0oU8=;
	b=fzQfRkn+3RdrggbJqybRHmwzcU2IpwpMndcCyMinlgt5BSKDby7f8wUa+GnWdDm9G8
	zKELrnha9SnM57gNMpBlb7j32GWafbBR+vlwZr9PvXK02ObCaYTyXRIoDtHwLVoHUUur
	LumUFiBvq2UzRL6S8/P6vpCRUrYRS3FanVWnCvtu3l6WbbM0K64F3Kcd7XjFWQj59OVX
	CA9NDg11614ObXJP4JVbwDUy1VvB8mouyICVg6dlbUDCW4OfbBVVlBqIk+jDL3KQKfqI
	Zd9uOsKihbJV0cgx082RoSuKmxyqYx5sZUxWu3ceCEhv9wwdUB+1Aykf6YlVCwE74dWZ
	eWZw==
X-Received: by 10.66.222.234 with SMTP id qp10mr5649118pac.156.1390883481620; 
	Mon, 27 Jan 2014 20:31:21 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	si6sm100658400pab.19.2014.01.27.20.31.14 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:20 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:28 +0800
Message-Id: <1390883313-19313-10-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 09/14] tmem: remove unneeded parameters from obj
	destroy path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The "selective" and "no_rebalance" parameters are unneeded in the obj destroy
path; remove them.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   35 +++++++++++++++--------------------
 1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 70c80ab..0febae1 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -885,7 +885,7 @@ restart_find:
 }
 
 /* free an object that has no more pgps in it */
-static void obj_free(struct tmem_object_root *obj, int no_rebalance)
+static void obj_free(struct tmem_object_root *obj)
 {
     struct tmem_pool *pool;
     struct oid old_oid;
@@ -908,9 +908,7 @@ static void obj_free(struct tmem_object_root *obj, int no_rebalance)
     oid_set_invalid(&obj->oid);
     obj->last_client = TMEM_CLI_ID_NULL;
     atomic_dec_and_assert(global_obj_count);
-    /* use no_rebalance only if all objects are being destroyed anyway */
-    if ( !no_rebalance )
-        rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
+    rb_erase(&obj->rb_tree_node, &pool->obj_rb_root[oid_hash(&old_oid)]);
     spin_unlock(&obj->obj_spinlock);
     tmem_free(obj, pool);
 }
@@ -969,15 +967,15 @@ static struct tmem_object_root * obj_alloc(struct tmem_pool *pool, struct oid *o
 }
 
 /* free an object after destroying any pgps in it */
-static void obj_destroy(struct tmem_object_root *obj, int no_rebalance)
+static void obj_destroy(struct tmem_object_root *obj)
 {
     ASSERT_WRITELOCK(&obj->pool->pool_rwlock);
     radix_tree_destroy(&obj->tree_root, pgp_destroy);
-    obj_free(obj,no_rebalance);
+    obj_free(obj);
 }
 
 /* destroys all objs in a pool, or only if obj->last_client matches cli_id */
-static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t cli_id)
+static void pool_destroy_objs(struct tmem_pool *pool, domid_t cli_id)
 {
     struct rb_node *node;
     struct tmem_object_root *obj;
@@ -993,11 +991,8 @@ static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t
             obj = container_of(node, struct tmem_object_root, rb_tree_node);
             spin_lock(&obj->obj_spinlock);
             node = rb_next(node);
-            if ( !selective )
-                /* FIXME: should be obj,1 but walking/erasing rbtree is racy */
-                obj_destroy(obj,0);
-            else if ( obj->last_client == cli_id )
-                obj_destroy(obj,0);
+            if ( obj->last_client == cli_id )
+                obj_destroy(obj);
             else
                 spin_unlock(&obj->obj_spinlock);
         }
@@ -1088,7 +1083,7 @@ static int shared_pool_quit(struct tmem_pool *pool, domid_t cli_id)
     ASSERT(pool->client != NULL);
     
     ASSERT_WRITELOCK(&tmem_rwlock);
-    pool_destroy_objs(pool,1,cli_id);
+    pool_destroy_objs(pool, cli_id);
     list_for_each_entry(sl,&pool->share_list, share_list)
     {
         if (sl->client->cli_id != cli_id)
@@ -1134,7 +1129,7 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id, bool_t destroy)
                destroy?"destroy":"flush", tmem_client_str);
         return;
     }
-    pool_destroy_objs(pool,0,TMEM_CLI_ID_NULL);
+    pool_destroy_objs(pool, TMEM_CLI_ID_NULL);
     if ( destroy )
     {
         pool->client->pools[pool->pool_id] = NULL;
@@ -1339,7 +1334,7 @@ found:
     if ( obj->pgp_count == 0 )
     {
         ASSERT_WRITELOCK(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
     }
     else
         spin_unlock(&obj->obj_spinlock);
@@ -1518,7 +1513,7 @@ cleanup:
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     } else {
         spin_unlock(&obj->obj_spinlock);
@@ -1679,7 +1674,7 @@ unlock_obj:
     if ( newobj )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj, 0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     }
     else
@@ -1737,7 +1732,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
-                obj_free(obj,0);
+                obj_free(obj);
                 obj = NULL;
                 write_unlock(&pool->pool_rwlock);
             }
@@ -1787,7 +1782,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     } else {
         spin_unlock(&obj->obj_spinlock);
@@ -1810,7 +1805,7 @@ static int do_tmem_flush_object(struct tmem_pool *pool, struct oid *oidp)
     if ( obj == NULL )
         goto out;
     write_lock(&pool->pool_rwlock);
-    obj_destroy(obj,0);
+    obj_destroy(obj);
     pool->flush_objs_found++;
     write_unlock(&pool->pool_rwlock);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:31:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:31:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Kc-00083y-3Y; Tue, 28 Jan 2014 04:31:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Kb-00083V-85
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:31:25 +0000
Received: from [193.109.254.147:55922] by server-11.bemta-14.messagelabs.com
	id 78/4F-20576-C9237E25; Tue, 28 Jan 2014 04:31:24 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390883481!251043!1
X-Originating-IP: [209.85.192.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25372 invoked from network); 28 Jan 2014 04:31:23 -0000
Received: from mail-pd0-f182.google.com (HELO mail-pd0-f182.google.com)
	(209.85.192.182)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:31:23 -0000
Received: by mail-pd0-f182.google.com with SMTP id v10so6624704pde.27
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=0JK2z5JP/6lxYfzekKt7Sw3VgSyvkR0FjH60xmZ0oU8=;
	b=fzQfRkn+3RdrggbJqybRHmwzcU2IpwpMndcCyMinlgt5BSKDby7f8wUa+GnWdDm9G8
	zKELrnha9SnM57gNMpBlb7j32GWafbBR+vlwZr9PvXK02ObCaYTyXRIoDtHwLVoHUUur
	LumUFiBvq2UzRL6S8/P6vpCRUrYRS3FanVWnCvtu3l6WbbM0K64F3Kcd7XjFWQj59OVX
	CA9NDg11614ObXJP4JVbwDUy1VvB8mouyICVg6dlbUDCW4OfbBVVlBqIk+jDL3KQKfqI
	Zd9uOsKihbJV0cgx082RoSuKmxyqYx5sZUxWu3ceCEhv9wwdUB+1Aykf6YlVCwE74dWZ
	eWZw==
X-Received: by 10.66.222.234 with SMTP id qp10mr5649118pac.156.1390883481620; 
	Mon, 27 Jan 2014 20:31:21 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	si6sm100658400pab.19.2014.01.27.20.31.14 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:20 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:28 +0800
Message-Id: <1390883313-19313-10-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 09/14] tmem: remove unneeded parameters from obj
	destroy path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Parameters "selective" and "no_rebalance" are meaningless in the obj destroy
path; this patch removes them.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   35 +++++++++++++++--------------------
 1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 70c80ab..0febae1 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -885,7 +885,7 @@ restart_find:
 }
 
 /* free an object that has no more pgps in it */
-static void obj_free(struct tmem_object_root *obj, int no_rebalance)
+static void obj_free(struct tmem_object_root *obj)
 {
     struct tmem_pool *pool;
     struct oid old_oid;
@@ -908,9 +908,7 @@ static void obj_free(struct tmem_object_root *obj, int no_rebalance)
     oid_set_invalid(&obj->oid);
     obj->last_client = TMEM_CLI_ID_NULL;
     atomic_dec_and_assert(global_obj_count);
-    /* use no_rebalance only if all objects are being destroyed anyway */
-    if ( !no_rebalance )
-        rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
+    rb_erase(&obj->rb_tree_node, &pool->obj_rb_root[oid_hash(&old_oid)]);
     spin_unlock(&obj->obj_spinlock);
     tmem_free(obj, pool);
 }
@@ -969,15 +967,15 @@ static struct tmem_object_root * obj_alloc(struct tmem_pool *pool, struct oid *o
 }
 
 /* free an object after destroying any pgps in it */
-static void obj_destroy(struct tmem_object_root *obj, int no_rebalance)
+static void obj_destroy(struct tmem_object_root *obj)
 {
     ASSERT_WRITELOCK(&obj->pool->pool_rwlock);
     radix_tree_destroy(&obj->tree_root, pgp_destroy);
-    obj_free(obj,no_rebalance);
+    obj_free(obj);
 }
 
 /* destroys all objs in a pool, or only if obj->last_client matches cli_id */
-static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t cli_id)
+static void pool_destroy_objs(struct tmem_pool *pool, domid_t cli_id)
 {
     struct rb_node *node;
     struct tmem_object_root *obj;
@@ -993,11 +991,8 @@ static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t
             obj = container_of(node, struct tmem_object_root, rb_tree_node);
             spin_lock(&obj->obj_spinlock);
             node = rb_next(node);
-            if ( !selective )
-                /* FIXME: should be obj,1 but walking/erasing rbtree is racy */
-                obj_destroy(obj,0);
-            else if ( obj->last_client == cli_id )
-                obj_destroy(obj,0);
+            if ( obj->last_client == cli_id )
+                obj_destroy(obj);
             else
                 spin_unlock(&obj->obj_spinlock);
         }
@@ -1088,7 +1083,7 @@ static int shared_pool_quit(struct tmem_pool *pool, domid_t cli_id)
     ASSERT(pool->client != NULL);
     
     ASSERT_WRITELOCK(&tmem_rwlock);
-    pool_destroy_objs(pool,1,cli_id);
+    pool_destroy_objs(pool, cli_id);
     list_for_each_entry(sl,&pool->share_list, share_list)
     {
         if (sl->client->cli_id != cli_id)
@@ -1134,7 +1129,7 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id, bool_t destroy)
                destroy?"destroy":"flush", tmem_client_str);
         return;
     }
-    pool_destroy_objs(pool,0,TMEM_CLI_ID_NULL);
+    pool_destroy_objs(pool, TMEM_CLI_ID_NULL);
     if ( destroy )
     {
         pool->client->pools[pool->pool_id] = NULL;
@@ -1339,7 +1334,7 @@ found:
     if ( obj->pgp_count == 0 )
     {
         ASSERT_WRITELOCK(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
     }
     else
         spin_unlock(&obj->obj_spinlock);
@@ -1518,7 +1513,7 @@ cleanup:
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     } else {
         spin_unlock(&obj->obj_spinlock);
@@ -1679,7 +1674,7 @@ unlock_obj:
     if ( newobj )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj, 0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     }
     else
@@ -1737,7 +1732,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
             if ( obj->pgp_count == 0 )
             {
                 write_lock(&pool->pool_rwlock);
-                obj_free(obj,0);
+                obj_free(obj);
                 obj = NULL;
                 write_unlock(&pool->pool_rwlock);
             }
@@ -1787,7 +1782,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
     if ( obj->pgp_count == 0 )
     {
         write_lock(&pool->pool_rwlock);
-        obj_free(obj,0);
+        obj_free(obj);
         write_unlock(&pool->pool_rwlock);
     } else {
         spin_unlock(&obj->obj_spinlock);
@@ -1810,7 +1805,7 @@ static int do_tmem_flush_object(struct tmem_pool *pool, struct oid *oidp)
     if ( obj == NULL )
         goto out;
     write_lock(&pool->pool_rwlock);
-    obj_destroy(obj,0);
+    obj_destroy(obj);
     pool->flush_objs_found++;
     write_unlock(&pool->pool_rwlock);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:31:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Kn-00088f-IQ; Tue, 28 Jan 2014 04:31:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Kl-000885-SY
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:31:36 +0000
Received: from [193.109.254.147:56318] by server-10.bemta-14.messagelabs.com
	id E8/08-20752-7A237E25; Tue, 28 Jan 2014 04:31:35 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390883492!251058!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26196 invoked from network); 28 Jan 2014 04:31:34 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:31:34 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so6600032pdj.15
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=BWrl8Da4cp3H0+fwOVci26PJgJ9lpJ4tNK2PhRXrqik=;
	b=ajFrzPeg3pR+FhDbG1BiR+c2K5eL/xZGQosEH18sY3eBBnS7C31XPiCZgARc60pd3J
	q+7cM8Q4rHfdJikL20t/4zJB6xX/gbYNX+q/FY1OnElNo20Mb9duERKt/qGRPa/NbLNX
	CBKSHS9naRnITrWZxdm5QJuXKDEXXZfAe6a7Snypxu5jS8gS4Sy1vAaPAwPsEP3BZ6IM
	YdTt40H/DLUjfqAXlJ2987bVHvZ8NFNsuwzufrUzqXiIYOyGKEweTHNTPe99PEm9jbo7
	M8YB3aUi3nyzW8tkvciN0RwVTS3VcnfLepf0QjX6Y/u9h4Ab11ybkOA/HV0N7dIOXMzv
	UDCQ==
X-Received: by 10.69.30.44 with SMTP id kb12mr427594pbd.87.1390883492397;
	Mon, 27 Jan 2014 20:31:32 -0800 (PST)
Received: from localhost ([116.227.28.52])
	by mx.google.com with ESMTPSA id eb5sm5996129pad.22.2014.01.27.20.31.27
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:31 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:29 +0800
Message-Id: <1390883313-19313-11-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 10/14] tmem: cleanup: drop global_pool_list
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

No need to maintain a global pool list; nobody uses it.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 0febae1..72c3838 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -96,7 +96,6 @@ struct tmem_pool {
     bool_t shared;
     bool_t persistent;
     bool_t is_dying;
-    struct list_head pool_list;
     struct client *client;
     uint64_t uuid[2]; /* 0 for private, non-zero for shared */
     uint32_t pool_id;
@@ -199,7 +198,6 @@ rwlock_t pcd_tree_rwlocks[256]; /* poor man's concurrency for now */
 static LIST_HEAD(global_ephemeral_page_list); /* all pages in ephemeral pools */
 
 static LIST_HEAD(global_client_list);
-static LIST_HEAD(global_pool_list);
 
 static struct tmem_pool *global_shared_pools[MAX_GLOBAL_SHARED_POOLS] = { 0 };
 static bool_t global_shared_auth = 0;
@@ -1012,7 +1010,6 @@ static struct tmem_pool * pool_alloc(void)
         return NULL;
     for (i = 0; i < OBJ_HASH_BUCKETS; i++)
         pool->obj_rb_root[i] = RB_ROOT;
-    INIT_LIST_HEAD(&pool->pool_list);
     INIT_LIST_HEAD(&pool->persistent_page_list);
     rwlock_init(&pool->pool_rwlock);
     return pool;
@@ -1021,7 +1018,6 @@ static struct tmem_pool * pool_alloc(void)
 static void pool_free(struct tmem_pool *pool)
 {
     pool->client = NULL;
-    list_del(&pool->pool_list);
     xfree(pool);
 }
 
@@ -1952,7 +1948,6 @@ static int do_tmem_new_pool(domid_t this_cli_id,
         }
     }
     client->pools[d_poolid] = pool;
-    list_add_tail(&pool->pool_list, &global_pool_list);
     pool->pool_id = d_poolid;
     pool->persistent = persistent;
     pool->uuid[0] = uuid_lo; pool->uuid[1] = uuid_hi;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80Kz-0008F4-5z; Tue, 28 Jan 2014 04:31:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80Kx-0008Dr-As
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:31:47 +0000
Received: from [85.158.143.35:15371] by server-1.bemta-4.messagelabs.com id
	F5/6C-02132-2B237E25; Tue, 28 Jan 2014 04:31:46 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390883504!1212470!1
X-Originating-IP: [209.85.160.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6019 invoked from network); 28 Jan 2014 04:31:45 -0000
Received: from mail-pb0-f47.google.com (HELO mail-pb0-f47.google.com)
	(209.85.160.47)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:31:45 -0000
Received: by mail-pb0-f47.google.com with SMTP id rp16so6764654pbb.34
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=3gg5g000r+mzgO0AZEF2OoNBPVzF+JW/5rBJwApzPQc=;
	b=kJgcOvL7zB3G8Tl5xltvfV1hGEjm3Vqlois9RVKNfwp59Fcy5Nj2nA+dWV5RXMY80L
	+h3WsRCvnQ07XMwsT6QS3A6f8olcqXbAFZ2yFX7MAQaaIX7jYtweaw/cmTX04n8rSFdL
	CcmF5g5j6sPYQ89LiuBpODntnHYzYtZWIcPXJZ8TlZa5oaRlf+1Y8Q6Ml5oxrpFw1Xt6
	5OS3GVxbbVREcopfMbHH/QI5N1HC3AtABV2yJ7ProCbb269GaB6fv0lUEiBpZLiEBvkz
	k9CYr0kU1QQDHy7dynlLd4sHRwwWGGbLG28JlSpIb/SWzmVMwZnJfRMjdsJ9oTqEKSX9
	Gt1A==
X-Received: by 10.68.244.103 with SMTP id xf7mr34241521pbc.50.1390883503852;
	Mon, 27 Jan 2014 20:31:43 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	vg1sm37234167pbc.44.2014.01.27.20.31.38 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:43 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:30 +0800
Message-Id: <1390883313-19313-12-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 11/14] tmem: fix the return value of
	tmemc_set_var()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

tmemc_set_var() calls tmemc_set_var_one() but discards its return value;
this patch fixes that by propagating the first error back to the caller.
Also rename tmemc_set_var_one() to __tmemc_set_var().
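
The "propagate the first error" pattern this patch introduces can be
sketched as follows (a standalone model with illustrative names, not the
Xen code: set_one() stands in for __tmemc_set_var(), and the array walk
stands in for iterating global_client_list):

```c
/* Hypothetical stand-in for __tmemc_set_var(): fail on invalid input. */
static int set_one(int arg)
{
    return (arg < 0) ? -1 : 0;
}

/* Apply set_one() to every entry, stopping at and returning the first
 * error; -1 if the list is empty, mirroring tmemc_set_var()'s new shape. */
static int set_all(const int *args, int n)
{
    int ret = -1;
    for (int i = 0; i < n; i++) {
        ret = set_one(args[i]);
        if (ret)            /* stop at the first failure */
            break;
    }
    return ret;
}
```

With this shape the caller sees the first failing sub-operation instead of
an unconditional 0.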

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 72c3838..eea3cbb 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2176,7 +2176,7 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
     return 0;
 }
 
-static int tmemc_set_var_one(struct client *client, uint32_t subop, uint32_t arg1)
+static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
 {
     domid_t cli_id = client->cli_id;
     uint32_t old_weight;
@@ -2218,15 +2218,24 @@ static int tmemc_set_var_one(struct client *client, uint32_t subop, uint32_t arg
 static int tmemc_set_var(domid_t cli_id, uint32_t subop, uint32_t arg1)
 {
     struct client *client;
+    int ret = -1;
 
     if ( cli_id == TMEM_CLI_ID_NULL )
+    {
         list_for_each_entry(client,&global_client_list,client_list)
-            tmemc_set_var_one(client, subop, arg1);
-    else if ( (client = tmem_client_from_cli_id(cli_id)) == NULL)
-        return -1;
+        {
+            ret =  __tmemc_set_var(client, subop, arg1);
+	    if (ret)
+	        break;
+        }
+    }
     else
-        tmemc_set_var_one(client, subop, arg1);
-    return 0;
+    {
+        client = tmem_client_from_cli_id(cli_id);
+        if ( client )
+            ret = __tmemc_set_var(client, subop, arg1);
+    }
+    return ret;
 }
 
 static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:32:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80LC-0008LW-M1; Tue, 28 Jan 2014 04:32:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80LB-0008Kz-JI
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:32:02 +0000
Received: from [85.158.137.68:39815] by server-7.bemta-3.messagelabs.com id
	54/9C-27599-0C237E25; Tue, 28 Jan 2014 04:32:00 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390883518!10519742!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25669 invoked from network); 28 Jan 2014 04:32:00 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:32:00 -0000
Received: by mail-pd0-f181.google.com with SMTP id y10so6568641pdj.40
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=WNQLbtcqWmVWatvjnNdy4+t+AKbcPc+CokMmTlox1pU=;
	b=leBORVpWYvMqm0BClfWolMAUPGaQIGF9KTUDXz+Nkep77EyOWDimuTVmqVChd+0f4w
	Sk5bFdTIXRrgFP9QmA3c2WYwBzVJ2rxtW1juup9/o0jy4E6mP5HdoAXZRIOUBIWr0kzg
	UM4EvS3A1JrD1JU6imyuYmz87klu5cd4S3MMAYRXnjkVIe1n9G7OkT8JiK7M727dNEFC
	kff5KOF6umtAzMTo6tb6UoTmWtMalyeihzJkQ/Vh7OtTISaH/Mf9xLyb53lmGo9feMo1
	k0QQepIfq/0gyIFr2oHAN7F7cLrBtRXjBNOGNV9+SPLlEJExJ3+oWCXUr16DGAoi9zAz
	Ge8A==
X-Received: by 10.66.164.70 with SMTP id yo6mr34640476pab.85.1390883518007;
	Mon, 27 Jan 2014 20:31:58 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	om6sm37242419pbc.43.2014.01.27.20.31.52 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:57 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:31 +0800
Message-Id: <1390883313-19313-13-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 12/14] tmem: cleanup: refactor function
	tmemc_shared_pool_auth()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make tmemc_shared_pool_auth() more readable by handling the deauthorize
(auth == 0) and authorize cases in separate branches.
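
The control flow after the refactor can be sketched with this simplified
model (not the Xen code: a small table of UUID slots where -1 marks a free
slot; the real function works on client->shared_auth_uuid pairs and
returns -ENOMEM when the table is full):

```c
#define NSLOTS 4

/* Simplified model of the per-client shared-auth table. */
static long uuids[NSLOTS] = { -1L, -1L, -1L, -1L };

/* auth == 0: deauthorize uuid; auth != 0: authorize into a free slot.
 * Returns 1 on success, 0 if a deauth target was not found, -1 if full. */
static int pool_auth(long uuid, int auth)
{
    int free = -1;

    for (int i = 0; i < NSLOTS; i++) {
        if (auth == 0) {
            if (uuids[i] == uuid) {     /* found: invalidate the slot */
                uuids[i] = -1L;
                return 1;
            }
        } else if (uuids[i] == -1L) {   /* first free slot is enough */
            free = i;
            break;
        }
    }
    if (auth == 0)
        return 0;                       /* uuid was not authorized */
    if (free == -1)
        return -1;                      /* table full */
    uuids[free] = uuid;
    return 1;
}
```

Splitting the two cases lets each loop body do one job: the deauth branch
searches for a match, while the auth branch only needs the first free slot
and can stop there.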

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index eea3cbb..37f4e8f 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2252,27 +2252,39 @@ static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
     client = tmem_client_from_cli_id(cli_id);
     if ( client == NULL )
         return -EINVAL;
+
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
     {
-        if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
-             (client->shared_auth_uuid[i][1] == uuid_hi) )
+        if ( auth == 0 )
         {
-            if ( auth == 0 )
-                client->shared_auth_uuid[i][0] =
-                    client->shared_auth_uuid[i][1] = -1L;
-            return 1;
+            if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
+                    (client->shared_auth_uuid[i][1] == uuid_hi) )
+            {
+                client->shared_auth_uuid[i][0] = -1L;
+                client->shared_auth_uuid[i][1] = -1L;
+                return 1;
+            }
         }
-        if ( (auth == 1) && (client->shared_auth_uuid[i][0] == -1L) &&
-                 (client->shared_auth_uuid[i][1] == -1L) && (free == -1) )
-            free = i;
+        else
+        {
+            if ( (client->shared_auth_uuid[i][0] == -1L) &&
+                    (client->shared_auth_uuid[i][1] == -1L) )
+            {
+                free = i;
+                break;
+            }
+	}
     }
     if ( auth == 0 )
         return 0;
-    if ( auth == 1 && free == -1 )
+    else if ( free == -1)
         return -ENOMEM;
-    client->shared_auth_uuid[free][0] = uuid_lo;
-    client->shared_auth_uuid[free][1] = uuid_hi;
-    return 1;
+    else
+    {
+        client->shared_auth_uuid[free][0] = uuid_lo;
+        client->shared_auth_uuid[free][1] = uuid_hi;
+        return 1;
+    }
 }
 
 static int tmemc_save_subop(int cli_id, uint32_t pool_id,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:32:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80LC-0008LW-M1; Tue, 28 Jan 2014 04:32:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80LB-0008Kz-JI
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:32:02 +0000
Received: from [85.158.137.68:39815] by server-7.bemta-3.messagelabs.com id
	54/9C-27599-0C237E25; Tue, 28 Jan 2014 04:32:00 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390883518!10519742!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25669 invoked from network); 28 Jan 2014 04:32:00 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:32:00 -0000
Received: by mail-pd0-f181.google.com with SMTP id y10so6568641pdj.40
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:31:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=WNQLbtcqWmVWatvjnNdy4+t+AKbcPc+CokMmTlox1pU=;
	b=leBORVpWYvMqm0BClfWolMAUPGaQIGF9KTUDXz+Nkep77EyOWDimuTVmqVChd+0f4w
	Sk5bFdTIXRrgFP9QmA3c2WYwBzVJ2rxtW1juup9/o0jy4E6mP5HdoAXZRIOUBIWr0kzg
	UM4EvS3A1JrD1JU6imyuYmz87klu5cd4S3MMAYRXnjkVIe1n9G7OkT8JiK7M727dNEFC
	kff5KOF6umtAzMTo6tb6UoTmWtMalyeihzJkQ/Vh7OtTISaH/Mf9xLyb53lmGo9feMo1
	k0QQepIfq/0gyIFr2oHAN7F7cLrBtRXjBNOGNV9+SPLlEJExJ3+oWCXUr16DGAoi9zAz
	Ge8A==
X-Received: by 10.66.164.70 with SMTP id yo6mr34640476pab.85.1390883518007;
	Mon, 27 Jan 2014 20:31:58 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	om6sm37242419pbc.43.2014.01.27.20.31.52 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:31:57 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:31 +0800
Message-Id: <1390883313-19313-13-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 12/14] tmem: cleanup: refactor function
	tmemc_shared_pool_auth()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make function tmemc_shared_pool_auth() more readable.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index eea3cbb..37f4e8f 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2252,27 +2252,39 @@ static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
     client = tmem_client_from_cli_id(cli_id);
     if ( client == NULL )
         return -EINVAL;
+
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
     {
-        if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
-             (client->shared_auth_uuid[i][1] == uuid_hi) )
+        if ( auth == 0 )
         {
-            if ( auth == 0 )
-                client->shared_auth_uuid[i][0] =
-                    client->shared_auth_uuid[i][1] = -1L;
-            return 1;
+            if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
+                    (client->shared_auth_uuid[i][1] == uuid_hi) )
+            {
+                client->shared_auth_uuid[i][0] = -1L;
+                client->shared_auth_uuid[i][1] = -1L;
+                return 1;
+            }
         }
-        if ( (auth == 1) && (client->shared_auth_uuid[i][0] == -1L) &&
-                 (client->shared_auth_uuid[i][1] == -1L) && (free == -1) )
-            free = i;
+        else
+        {
+            if ( (client->shared_auth_uuid[i][0] == -1L) &&
+                    (client->shared_auth_uuid[i][1] == -1L) )
+            {
+                free = i;
+                break;
+            }
+	}
     }
     if ( auth == 0 )
         return 0;
-    if ( auth == 1 && free == -1 )
+    else if ( free == -1)
         return -ENOMEM;
-    client->shared_auth_uuid[free][0] = uuid_lo;
-    client->shared_auth_uuid[free][1] = uuid_hi;
-    return 1;
+    else
+    {
+        client->shared_auth_uuid[free][0] = uuid_lo;
+        client->shared_auth_uuid[free][1] = uuid_hi;
+        return 1;
+    }
 }
 
 static int tmemc_save_subop(int cli_id, uint32_t pool_id,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:32:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80LP-0008SC-7z; Tue, 28 Jan 2014 04:32:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80LN-0008RE-Hw
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:32:13 +0000
Received: from [85.158.143.35:35091] by server-1.bemta-4.messagelabs.com id
	94/9C-02132-CC237E25; Tue, 28 Jan 2014 04:32:12 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390883530!1213437!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10291 invoked from network); 28 Jan 2014 04:32:11 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:32:11 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so6652668pdj.1
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:32:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=lLZuiw7KxwfINMwdq7J/hTS2zP173Wi3gdphk0usJ7E=;
	b=CS1a6CDgrK6S/rstfJKNpA4jHzvrNRDIBZwvTtMa4Tv4+3JMT03O57osEQTpUHCera
	8CFZFEB7h0MhuFNwvINuCNLEl3VG/ScA6yj1JBcV31T5YcJHGouBxHuT/pC2MQQPqlam
	vD+xS8QwqbgGO/rg76ybqzIbAEkslxcMveg1Y5EH5AKUX/1fuXtc1AazcXpxZJMMhDwg
	pal09XwV7K7MEYzTGSEOMfqguS79eXzkbB3t3tw7tkkZTTL+c2W6grfVgH3/xLGIGh+A
	N2rz40eURNWSKcQMiN62/z+v6T26txp7TGaryykyPk3o5fkCq2mcCEs10GeEbIJFrT6b
	TT5g==
X-Received: by 10.66.216.129 with SMTP id oq1mr33934926pac.75.1390883529769;
	Mon, 27 Jan 2014 20:32:09 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	oa3sm37325584pbb.15.2014.01.27.20.32.04 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:32:09 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:32 +0800
Message-Id: <1390883313-19313-14-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 13/14] tmem: reorg the shared pool allocate path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Reorganize the code to make it more readable.
Check the return value of shared_pool_join() and drop an unneeded call to
it. Move the check that forbids creating a shared and persistent pool to
an earlier point.
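
The calling-convention change can be sketched in isolation. This is a hypothetical sketch under simplified assumptions (struct pool, attach_to_shared_pool, and the join_should_fail test hook are illustrative names, not the Xen code): shared_pool_join() now returns 0 on success and -1 on allocation failure, so the caller must check it instead of ignoring the old "user count" return value.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct pool { int shared_count; };

static int join_should_fail;    /* test hook simulating tmem_malloc() failure */

static int shared_pool_join(struct pool *p)
{
    void *sl = join_should_fail ? NULL : malloc(16);
    if ( sl == NULL )
        return -1;              /* previously this failure was invisible to callers */
    free(sl);                   /* sketch only; the real code links sl into a list */
    ++p->shared_count;
    return 0;
}

/* Caller pattern from the patch: take the failure path on error
 * instead of assuming the join always succeeded. */
static int attach_to_shared_pool(struct pool *shpool)
{
    if ( shared_pool_join(shpool) != 0 )
        return -1;              /* the patch does "goto fail" here */
    return 0;
}
```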

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |  104 +++++++++++++++++++++++++++++++++++------------------
 1 file changed, 70 insertions(+), 34 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 37f4e8f..ab5bba7 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1021,21 +1021,24 @@ static void pool_free(struct tmem_pool *pool)
     xfree(pool);
 }
 
-/* register new_client as a user of this shared pool and return new
-   total number of registered users */
+/*
+ * Register new_client as a user of this shared pool and return 0 on succ.
+ */
 static int shared_pool_join(struct tmem_pool *pool, struct client *new_client)
 {
     struct share_list *sl;
-
     ASSERT(is_shared(pool));
+
     if ( (sl = tmem_malloc(sizeof(struct share_list), NULL)) == NULL )
         return -1;
     sl->client = new_client;
     list_add_tail(&sl->share_list, &pool->share_list);
     if ( new_client->cli_id != pool->client->cli_id )
         tmem_client_info("adding new %s %d to shared pool owned by %s %d\n",
-            tmem_client_str, new_client->cli_id, tmem_client_str, pool->client->cli_id);
-    return ++pool->shared_count;
+                    tmem_client_str, new_client->cli_id, tmem_client_str,
+                    pool->client->cli_id);
+    ++pool->shared_count;
+    return 0;
 }
 
 /* reassign "ownership" of the pool to another client that shares this pool */
@@ -1841,8 +1844,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
     int specversion = (flags >> TMEM_POOL_VERSION_SHIFT)
          & TMEM_POOL_VERSION_MASK;
     struct tmem_pool *pool, *shpool;
-    int s_poolid, first_unused_s_poolid;
-    int i;
+    int i, first_unused_s_poolid;
 
     if ( this_cli_id == TMEM_CLI_ID_NULL )
         cli_id = current->domain->domain_id;
@@ -1856,6 +1858,11 @@ static int do_tmem_new_pool(domid_t this_cli_id,
         tmem_client_err("failed... unsupported spec version\n");
         return -EPERM;
     }
+    if ( shared && persistent )
+    {
+        tmem_client_err("failed... unable to create a shared-persistant pool\n");
+        return -EPERM;
+    }
     if ( pagebits != (PAGE_SHIFT - 12) )
     {
         tmem_client_err("failed... unsupported pagesize %d\n",
@@ -1872,17 +1879,12 @@ static int do_tmem_new_pool(domid_t this_cli_id,
         tmem_client_err("failed... reserved bits must be zero\n");
         return -EPERM;
     }
-    if ( (pool = pool_alloc()) == NULL )
-    {
-        tmem_client_err("failed... out of memory\n");
-        return -ENOMEM;
-    }
     if ( this_cli_id != TMEM_CLI_ID_NULL )
     {
         if ( (client = tmem_client_from_cli_id(this_cli_id)) == NULL
              || d_poolid >= MAX_POOLS_PER_DOMAIN
              || client->pools[d_poolid] != NULL )
-            goto fail;
+            return -EPERM;
     }
     else
     {
@@ -1895,13 +1897,35 @@ static int do_tmem_new_pool(domid_t this_cli_id,
         {
             tmem_client_err("failed... no more pool slots available for this %s\n",
                    tmem_client_str);
-            goto fail;
+            return -EPERM;
         }
     }
+
+    if ( (pool = pool_alloc()) == NULL )
+    {
+        tmem_client_err("failed... out of memory\n");
+        return -ENOMEM;
+    }
+    client->pools[d_poolid] = pool;
+    pool->client = client;
+    pool->pool_id = d_poolid;
+    pool->shared = shared;
+    pool->persistent = persistent;
+    pool->uuid[0] = uuid_lo;
+    pool->uuid[1] = uuid_hi;
+
+    /*
+     * Already created a pool when arrived here, but need some special process
+     * for shared pool.
+     */
     if ( shared )
     {
         if ( uuid_lo == -1L && uuid_hi == -1L )
-            shared = 0;
+        {
+	    tmem_client_info("Invalid uuid, create non shared pool instead!\n");
+            pool->shared = 0;
+	    goto out;
+        }
         if ( client->shared_auth_required && !global_shared_auth )
         {
             for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
@@ -1909,48 +1933,60 @@ static int do_tmem_new_pool(domid_t this_cli_id,
                      (client->shared_auth_uuid[i][1] == uuid_hi) )
                     break;
             if ( i == MAX_GLOBAL_SHARED_POOLS )
-                shared = 0;
+	    {
+                tmem_client_info("Shared auth failed, create non shared pool instead!\n");
+                pool->shared = 0;
+                goto out;
+            }
         }
-    }
-    pool->shared = shared;
-    pool->client = client;
-    if ( shared )
-    {
+
+	/*
+	 * Authorize okay, match a global shared pool or use the newly allocated
+	 * one
+	 */
         first_unused_s_poolid = MAX_GLOBAL_SHARED_POOLS;
-        for ( s_poolid = 0; s_poolid < MAX_GLOBAL_SHARED_POOLS; s_poolid++ )
+        for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++ )
         {
-            if ( (shpool = global_shared_pools[s_poolid]) != NULL )
+            if ( (shpool = global_shared_pools[i]) != NULL )
             {
                 if ( shpool->uuid[0] == uuid_lo && shpool->uuid[1] == uuid_hi )
                 {
+		    /* Succ to match a global shared pool */
                     tmem_client_info("(matches shared pool uuid=%"PRIx64".%"PRIx64") pool_id=%d\n",
                         uuid_hi, uuid_lo, d_poolid);
-                    client->pools[d_poolid] = global_shared_pools[s_poolid];
-                    shared_pool_join(global_shared_pools[s_poolid], client);
-                    pool_free(pool);
-                    return d_poolid;
+                    client->pools[d_poolid] = shpool;
+                    if ( !shared_pool_join(shpool, client) )
+                    {
+                        pool_free(pool);
+                        goto out;
+		    }
+                    else
+                        goto fail;
                 }
             }
-            else if ( first_unused_s_poolid == MAX_GLOBAL_SHARED_POOLS )
-                first_unused_s_poolid = s_poolid;
+            else
+            {
+                if ( first_unused_s_poolid == MAX_GLOBAL_SHARED_POOLS )
+                    first_unused_s_poolid = i;
+            }
         }
+
+	/* Failed to find a global shard pool slot */
         if ( first_unused_s_poolid == MAX_GLOBAL_SHARED_POOLS )
         {
             tmem_client_warn("tmem: failed... no global shared pool slots available\n");
             goto fail;
         }
+	/* Add pool to global shard pool */
         else
         {
             INIT_LIST_HEAD(&pool->share_list);
             pool->shared_count = 0;
             global_shared_pools[first_unused_s_poolid] = pool;
-            (void)shared_pool_join(pool,client);
         }
     }
-    client->pools[d_poolid] = pool;
-    pool->pool_id = d_poolid;
-    pool->persistent = persistent;
-    pool->uuid[0] = uuid_lo; pool->uuid[1] = uuid_hi;
+
+out:
     tmem_client_info("pool_id=%d\n", d_poolid);
     return d_poolid;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 04:32:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 04:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W80LX-00005D-SJ; Tue, 28 Jan 2014 04:32:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1W80LW-0008WH-Uz
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 04:32:23 +0000
Received: from [85.158.143.35:35388] by server-3.bemta-4.messagelabs.com id
	BD/82-32360-6D237E25; Tue, 28 Jan 2014 04:32:22 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390883539!1184980!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26574 invoked from network); 28 Jan 2014 04:32:21 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 04:32:21 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so6754730pbc.18
	for <xen-devel@lists.xenproject.org>;
	Mon, 27 Jan 2014 20:32:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=uNiFpiIBxS7XPJ74jmnASOYDfe3w62O0CIUlb++N/1g=;
	b=tqQep1VoCZW/7IouiRqtX7Kwq/6QEdJKMd0jmz+h03eSzsWcwy/WL5eGnNwMVQjiWo
	lhrBNmpihrP1mIUfk/OB8PNT39to+GEy9Q1LQIhBb1cG6ZoPKg0NCp5PH8kXSwqw+YNT
	6JiFav7BCyroNUoE/TjMFK7pQc2tT/dHQSQUixlBk/OdHQOYxJ8a0MGYS465KCb/i0Cg
	vZP7xUJnBZ0CBGX00sgwmc6sBtpJ69D2Iw4O8m2yIzKWzmv8u6cR7TSPyjL90MO7isVJ
	byAXaUnfqnMG8MQ+6i5xaY3cQZMKAfMbOO9LY113jnuSSjybHYW5FRAWavXQU60/iDNS
	jrQQ==
X-Received: by 10.68.131.202 with SMTP id oo10mr33907536pbb.35.1390883539592; 
	Mon, 27 Jan 2014 20:32:19 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	jn12sm37267567pbd.37.2014.01.27.20.32.14 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 27 Jan 2014 20:32:18 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 28 Jan 2014 12:28:33 +0800
Message-Id: <1390883313-19313-15-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
References: <1390883313-19313-1-git-send-email-bob.liu@oracle.com>
Cc: keir@xen.org, ian.campbell@citrix.com, andrew.cooper3@citrix.com,
	JBeulich@suse.com
Subject: [Xen-devel] [PATCH 14/14] tmem: remove useless parameter from
	client and pool flush
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The "destroy" parameter of client_flush() and pool_flush() is unneeded
because every caller sets it to 1.
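
The simplification amounts to folding away a parameter that is constant at every call site. A minimal sketch, with illustrative names (pool_flush_sketch, pool_free_sketch, freed) rather than the real tmem functions:

```c
#include <assert.h>

static int freed;               /* records that the free path ran */

static void pool_free_sketch(void) { freed = 1; }

/* Before: pool_flush(pool, cli_id, destroy) with destroy always 1,
 * guarding the free with "if ( destroy )".
 * After: the flush unconditionally destroys, matching the only usage. */
static void pool_flush_sketch(void)
{
    /* ... flush the pool's objects ... */
    pool_free_sketch();         /* no longer guarded by "if ( destroy )" */
}
```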

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   30 ++++++++++++------------------
 1 file changed, 12 insertions(+), 18 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index ab5bba7..366dab2 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1108,7 +1108,7 @@ static int shared_pool_quit(struct tmem_pool *pool, domid_t cli_id)
 }
 
 /* flush all data (owned by cli_id) from a pool and, optionally, free it */
-static void pool_flush(struct tmem_pool *pool, domid_t cli_id, bool_t destroy)
+static void pool_flush(struct tmem_pool *pool, domid_t cli_id)
 {
     ASSERT(pool != NULL);
     if ( (is_shared(pool)) && (shared_pool_quit(pool,cli_id) > 0) )
@@ -1117,23 +1117,19 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id, bool_t destroy)
            tmem_cli_id_str, cli_id, pool->pool_id, tmem_cli_id_str,pool->client->cli_id);
         return;
     }
-    tmem_client_info("%s %s-%s tmem pool %s=%d pool_id=%d\n",
-                    destroy ? "destroying" : "flushing",
+    tmem_client_info("Destroying %s-%s tmem pool %s=%d pool_id=%d\n",
                     is_persistent(pool) ? "persistent" : "ephemeral" ,
                     is_shared(pool) ? "shared" : "private",
                     tmem_cli_id_str, pool->client->cli_id, pool->pool_id);
     if ( pool->client->live_migrating )
     {
-        tmem_client_warn("can't %s pool while %s is live-migrating\n",
-               destroy?"destroy":"flush", tmem_client_str);
+        tmem_client_warn("can't destroy pool while %s is live-migrating\n",
+                    tmem_client_str);
         return;
     }
     pool_destroy_objs(pool, TMEM_CLI_ID_NULL);
-    if ( destroy )
-    {
-        pool->client->pools[pool->pool_id] = NULL;
-        pool_free(pool);
-    }
+    pool->client->pools[pool->pool_id] = NULL;
+    pool_free(pool);
 }
 
 /************ CLIENT MANIPULATION OPERATIONS **************************/
@@ -1201,7 +1197,7 @@ static void client_free(struct client *client)
 }
 
 /* flush all data from a client and, optionally, free it */
-static void client_flush(struct client *client, bool_t destroy)
+static void client_flush(struct client *client)
 {
     int i;
     struct tmem_pool *pool;
@@ -1210,12 +1206,10 @@ static void client_flush(struct client *client, bool_t destroy)
     {
         if ( (pool = client->pools[i]) == NULL )
             continue;
-        pool_flush(pool,client->cli_id,destroy);
-        if ( destroy )
-            client->pools[i] = NULL;
+        pool_flush(pool, client->cli_id);
+        client->pools[i] = NULL;
     }
-    if ( destroy )
-        client_free(client);
+    client_free(client);
 }
 
 static bool_t client_over_quota(struct client *client)
@@ -1827,7 +1821,7 @@ static int do_tmem_destroy_pool(uint32_t pool_id)
     if ( (pool = client->pools[pool_id]) == NULL )
         return 0;
     client->pools[pool_id] = NULL;
-    pool_flush(pool,client->cli_id,1);
+    pool_flush(pool, client->cli_id);
     return 1;
 }
 
@@ -2772,7 +2766,7 @@ void tmem_destroy(void *v)
 
     printk("tmem: flushing tmem pools for %s=%d\n",
            tmem_cli_id_str, client->cli_id);
-    client_flush(client, 1);
+    client_flush(client);
 
     write_unlock(&tmem_rwlock);
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 06:25:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 06:25:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W826d-0004CL-S6; Tue, 28 Jan 2014 06:25:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gcvxd-xen-devel@m.gmane.org>) id 1W826c-0004CG-St
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 06:25:07 +0000
Received: from [85.158.143.35:12508] by server-3.bemta-4.messagelabs.com id
	8C/65-32360-24D47E25; Tue, 28 Jan 2014 06:25:06 +0000
X-Env-Sender: gcvxd-xen-devel@m.gmane.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390890305!1223467!1
X-Originating-IP: [80.91.229.3]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	UNPARSEABLE_RELAY,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5766 invoked from network); 28 Jan 2014 06:25:05 -0000
Received: from plane.gmane.org (HELO plane.gmane.org) (80.91.229.3)
	by server-12.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	28 Jan 2014 06:25:05 -0000
Received: from list by plane.gmane.org with local (Exim 4.69)
	(envelope-from <gcvxd-xen-devel@m.gmane.org>) id 1W826Y-0002Rq-S5
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 07:25:03 +0100
Received: from ABTS-KK-static-030.61.166.122.airtelbroadband.in
	([ABTS-KK-static-030.61.166.122.airtelbroadband.in])
	by main.gmane.org with esmtp (Gmexim 0.1 (Debian))
	id 1AlnuQ-0007hv-00
	for <xen-devel@lists.xensource.com>; Tue, 28 Jan 2014 07:25:02 +0100
Received: from kamal.kishi by ABTS-KK-static-030.61.166.122.airtelbroadband.in
	with local (Gmexim 0.1 (Debian)) id 1AlnuQ-0007hv-00
	for <xen-devel@lists.xensource.com>; Tue, 28 Jan 2014 07:25:02 +0100
X-Injected-Via-Gmane: http://gmane.org/
To: xen-devel@lists.xensource.com
From: Kamal Kishore <kamal.kishi@gmail.com>
Date: Tue, 28 Jan 2014 06:23:01 +0000 (UTC)
Lines: 50
Message-ID: <loom.20140128T071643-441@post.gmane.org>
References: <AEC6C66638C05B468B556EA548C1A77D019193E3@trantor>
	<AEC6C66638C05B468B556EA548C1A77D019193E6@trantor>
	<loom.20100429T232637-717@post.gmane.org>
Mime-Version: 1.0
X-Complaints-To: usenet@ger.gmane.org
X-Gmane-NNTP-Posting-Host: sea.gmane.org
User-Agent: Loom/3.14 (http://gmane.org/)
X-Loom-IP: 122.166.61.30 (Mozilla/5.0 (Windows NT 6.1;
	rv:23.0) Gecko/20100101 Firefox/23.0)
Subject: Re: [Xen-devel] drbd: and hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sauro Saltini <saltini <at> shc.it> writes:

> 
> Hi, I've found your patch digging around, in the hope of making HVM DomUs
> work with the drbd: syntax, and successfully applied it in my installation
> (Xen 4.0).
> 
> I've investigated a bit further, and found that "xm create" of such a
> configured DomU on a drbd "Secondary" node (i.e. during initial startup of
> the cluster) doesn't work as expected, apparently only because of timing
> issues...
> 
> In fact the block-drbd script is started and promotes the node to Primary,
> but qemu-dm, which is fired a little earlier in an asynchronous way, checks
> the resource state too early; finding it still not promoted, it stops the
> DomU creation. Then block-drbd is fired again (during DomU stop) and
> demotes the node back to Secondary.
> 
> I've managed to make DomU creation work by simply putting a sleep(5);
> statement in the middle of your conditional block!
> This way the asynchronous block-drbd launch is given time to promote the
> resource while qemu-dm sleeps; then the DomU creation can continue just
> fine.
> 
> I've also noticed that the problem happens only on initial creation
> (xm create) of the DomU, while a migration (even a live one) succeeds
> without the sleep statement, carrying out the correct state transitions of
> the drbd resource on both the source and destination hosts.
> 
> Any ideas on how to elaborate the patch to make it less "hackish" than this?
> 
> Sauro Saltini.
> 
> 

Hi Sauro Saltini,

           I was trying the same scenario you have accomplished. I'm using
Xen 4.1, and as I'm not a developer I'm not sure where I can find the patch
you created, or how to create one myself.

Please point me to your patch, or provide me with the qemu-dm file with the
changes.

Thanks in advance.

Kamal Kishore 
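The race described in the quoted message (qemu-dm checking the drbd resource state before the asynchronous block-drbd script has finished promoting the node) is usually handled more robustly by a bounded poll than by a fixed sleep(5). A minimal sketch under stated assumptions: `wait_for_state` and the example `drbdadm role r0` call are hypothetical illustrations, not the actual patch:

```shell
#!/bin/sh
# Hedged sketch: replace a fixed sleep with a bounded poll.
# wait_for_state CMD WANT TIMEOUT runs CMD once per second until its
# output equals WANT, giving up after TIMEOUT seconds.
wait_for_state() {
    cmd=$1 want=$2 timeout=$3
    while [ "$timeout" -gt 0 ]; do
        [ "$($cmd 2>/dev/null)" = "$want" ] && return 0
        timeout=$((timeout - 1))
        sleep 1
    done
    return 1
}

# Hypothetical use in a block-drbd-style hook, where "drbdadm role r0"
# prints e.g. "Primary/Secondary" once the local node is promoted:
#   wait_for_state "drbdadm role r0" "Primary/Secondary" 30 || exit 1
```

Compared with a hard-coded sleep, this returns as soon as the promotion completes and still fails cleanly if it never does.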




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 08:05:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 08:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W83ev-0007k5-0X; Tue, 28 Jan 2014 08:04:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W83et-0007k0-LD
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 08:04:35 +0000
Received: from [85.158.137.68:50753] by server-2.bemta-3.messagelabs.com id
	D0/3A-17329-29467E25; Tue, 28 Jan 2014 08:04:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390896272!11747353!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3034 invoked from network); 28 Jan 2014 08:04:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 08:04:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,734,1384300800"; d="scan'208";a="95133692"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 08:04:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 03:04:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W83eo-0000XQ-Mw;
	Tue, 28 Jan 2014 08:04:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W83eo-0007Z3-Ia;
	Tue, 28 Jan 2014 08:04:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24568-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 08:04:30 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24568: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24568 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24568/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           9 guest-start                 fail pass in 24542
 test-amd64-i386-xl-credit2    9 guest-start                 fail pass in 24542
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate         fail pass in 24525
 test-amd64-i386-rhel6hvm-amd  7 redhat-install     fail in 24525 pass in 24568
 test-amd64-i386-xl-credit2 13 guest-localmigrate.2 fail in 24525 pass in 24542
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail in 24525 pass in 24568

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  9 guest-localmigrate fail like 24542
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24542
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install fail in 24542 like 24540
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24542 like 24540

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24525 never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  9 guest-localmigrate fail like 24542
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24542
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install fail in 24542 like 24540
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24542 like 24540

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24525 never pass

version targeted for testing:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5
baseline version:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 09:09:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 09:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W84fj-0001IY-0Y; Tue, 28 Jan 2014 09:09:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=09805e8c2=chegger@amazon.de>)
	id 1W84fh-0001IT-VI
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 09:09:30 +0000
Received: from [85.158.137.68:17402] by server-17.bemta-3.messagelabs.com id
	30/A6-15965-9C377E25; Tue, 28 Jan 2014 09:09:29 +0000
X-Env-Sender: prvs=09805e8c2=chegger@amazon.de
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390900166!11766837!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26106 invoked from network); 28 Jan 2014 09:09:28 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 09:09:28 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1390900168; x=1422436168;
	h=message-id:date:from:mime-version:to:subject:references:
	in-reply-to:content-transfer-encoding;
	bh=Fp2QUs6972t2h+SIb7rQER/oPAMEcMizZePeARbuiRw=;
	b=srEzUZneF/CotWzndOmim7nW0CDRSp8F3LWZfNonwLLrKKbeiekc770w
	x05WC90D7ZRhIxngT1SyhPK/fU4M+1vgmD/st9ziyI5VKjBSjV2i5aQdO
	xZ3nilratHZ/Q0AZV2judVLgM6/JmHaQJspYHrlkXa/xCCsrt9PmniRLl k=;
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="69945413"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 28 Jan 2014 09:09:24 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by smtp-in-1101.vdc.amazon.com (8.14.7/8.14.7) with ESMTP id
	s0S99AWc013042
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 28 Jan 2014 09:09:23 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9003.ant.amazon.com (10.185.137.132) with Microsoft SMTP
	Server id 14.2.342.3; Tue, 28 Jan 2014 01:09:05 -0800
Message-ID: <52E773AF.1060500@amazon.de>
Date: Tue, 28 Jan 2014 10:09:03 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	<jinsong.liu@intel.com>, <boris.ostrovsky@oracle.com>,
	<suravee.suthikulpanit@amd.com>, <xen-devel@lists.xen.org>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1205617825-10042-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Subject: Re: [Xen-devel] [PATCH 2/2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15.03.08 22:50, Aravind Gopalakrishnan wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off the top two bits and bit 4, which meant
> the register accesses never made it to the vmce_amd_* functions.
> 
> We correct this by modifying the mask so that the AMD thresholding
> registers fall through to the 'default' case, which in turn allows
> the vmce_amd_* functions to handle accesses to these registers.
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  xen/arch/x86/cpu/mcheck/vmce.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
> index f6c35db..cb4fd12 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  
>      *val = 0;
>  
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))

I would prefer #defines for the two bit masks to make the code more
meaningful to the reader.

>      {
>      case MSR_IA32_MC0_CTL:
>          /* stick all 1's to MCi_CTL */
> @@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>      int ret = 1;
>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
>  
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))

Ditto.

Christoph

>      {
>      case MSR_IA32_MC0_CTL:
>          /*
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 09:13:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 09:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W84jt-0001QC-Qd; Tue, 28 Jan 2014 09:13:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=09805e8c2=chegger@amazon.de>)
	id 1W84js-0001Py-0r
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 09:13:48 +0000
Received: from [85.158.139.211:48682] by server-9.bemta-5.messagelabs.com id
	59/6A-15098-9C477E25; Tue, 28 Jan 2014 09:13:45 +0000
X-Env-Sender: prvs=09805e8c2=chegger@amazon.de
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390900422!49890!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25163 invoked from network); 28 Jan 2014 09:13:44 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 09:13:44 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1390900424; x=1422436424;
	h=message-id:date:from:mime-version:to:subject:references:
	in-reply-to:content-transfer-encoding;
	bh=3ud63ivNyo6ILpGb9fDVws7eCVIKQaZAroQw/H0T+Fo=;
	b=ZodbiGapwrzQtqi1NwaTVsbsmUPtfgZTcIXI6N+B/9njIo+R8Qp5bCfH
	bghbXBIgZwMHdPgF5fktF8eHsZq+AdTILzdULFg0TYkTiggIDzmUlkL47
	rJAwMRvyuu+pXrKDr/lqBDG8IovI3eyYcv+T7NsQKjM2L9vVH/TtqVmvo o=;
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="69947360"
Received: from email-inbound-relay-62040.pdx2.amazon.com ([10.241.21.71])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 28 Jan 2014 09:13:42 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by email-inbound-relay-62040.pdx2.amazon.com (8.14.7/8.14.7) with ESMTP
	id s0S9DfDE031914
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Tue, 28 Jan 2014 09:13:41 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.342.3; Tue, 28 Jan 2014 01:13:37 -0800
Message-ID: <52E774BF.7070501@amazon.de>
Date: Tue, 28 Jan 2014 10:13:35 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	<jinsong.liu@intel.com>, <boris.ostrovsky@oracle.com>,
	<suravee.suthikulpanit@amd.com>, <xen-devel@lists.xen.org>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-2-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1205617825-10042-2-git-send-email-aravind.gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Subject: Re: [Xen-devel] [PATCH 1/2] hvm,
	svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15.03.08 22:50, Aravind Gopalakrishnan wrote:
> MSR 0xC000040A is marked as reserved from Fam15h onwards, and
> MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10h BKDG.
> So remove the unnecessary definition of the reserved MSR and
> use MSR_IA32_MCx_MISC() to define MSR 0x413.
> 
> Also, according to the BKDG, MSR 0x413 is the first of the thresholding
> registers; MSR 0xC0000408 and MSR 0xC0000409 are the second and third
> respectively. So rework the #defines accordingly.
> 
> Fam15 Model 00h-0fh  BKDG reference:
> http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Reviewed-by: Christoph Egger <chegger@amazon.de>

> ---
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 406d394..07c2684 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: We report that the threshold register is unavailable
>           * for OS use (locked by the BIOS).
> @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          vpmu_do_wrmsr(msr, msr_content);
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: Threshold register is reported to be locked, so we ignore
>           * all write accesses. This behaviour matches real HW, so guests should
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index fc9fbc6..e5ffbf2 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -219,9 +219,9 @@
>  #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
>  
>  /* AMD Family10h machine check MSRs */
> -#define MSR_F10_MC4_MISC1		0xc0000408
> -#define MSR_F10_MC4_MISC2		0xc0000409
> -#define MSR_F10_MC4_MISC3		0xc000040A
> +#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
> +#define MSR_F10_MC4_MISC2		0xc0000408
> +#define MSR_F10_MC4_MISC3		0xc0000409
>  
>  /* AMD Family10h Bus Unit MSRs */
>  #define MSR_F10_BU_CFG 		0xc0011023
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 09:29:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 09:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W84zC-00025l-Vw; Tue, 28 Jan 2014 09:29:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W84zB-00025g-ST
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 09:29:38 +0000
Received: from [85.158.139.211:10429] by server-13.bemta-5.messagelabs.com id
	98/64-11357-18877E25; Tue, 28 Jan 2014 09:29:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390901374!52691!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27952 invoked from network); 28 Jan 2014 09:29:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 09:29:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95155946"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 09:29:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 04:29:34 -0500
Message-ID: <1390901372.7753.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <rshriram@cs.ubc.ca>
Date: Tue, 28 Jan 2014 09:29:32 +0000
In-Reply-To: <CAP8mzPOdaTzGPHRzg=v_CUOEJk-40e=Qv6rAyG3joUnoTbeoaA@mail.gmail.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>
	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>
	<CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>
	<52E60568.7060305@cn.fujitsu.com>
	<CAP8mzPOdaTzGPHRzg=v_CUOEJk-40e=Qv6rAyG3joUnoTbeoaA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>, Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dong Eddie <eddie.dong@intel.com>, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
 hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 10:05 -0800, Shriram Rajagopalan wrote:
> On Sun, Jan 26, 2014 at 11:06 PM, Wen Congyang <wency@cn.fujitsu.com>
> wrote:
>         > The last time I posted this script, the feedback was that
>         > the script and the code invoking the script should be in a
>         > single patch. So I would suggest doing the same.
>         
>         We use the script in patch6. It adds 479 lines. These two
>         patches are big patches (each adding more than 100 lines), so
>         why put them into a single patch?

> 
> That is a valid question. IIRC, IanJ was the one who wanted the code
> and the script together. IanJ, any thoughts?

Unless the patches are so big they won't get past the mailing list
filters (which are 100s of Kb I think) the important thing is the
logical separation of functionality into separate patches, not the
individual line count of each patch.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 09:36:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 09:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W855F-0002FA-VO; Tue, 28 Jan 2014 09:35:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W855D-0002F5-Tm
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 09:35:52 +0000
Received: from [85.158.139.211:51389] by server-17.bemta-5.messagelabs.com id
	A4/33-19152-7F977E25; Tue, 28 Jan 2014 09:35:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390901749!44798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9004 invoked from network); 28 Jan 2014 09:35:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 09:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95157608"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 09:35:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 04:35:47 -0500
Message-ID: <1390901746.7753.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 28 Jan 2014 09:35:46 +0000
In-Reply-To: <1390871025-29079-1-git-send-email-julien.grall@linaro.org>
References: <1390871025-29079-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, linux-kernel@vger.kernel.org,
	david.vrabel@citrix.com, patches@linaro.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/gnttab: Use phys_addr_t to describe the
 grant frame base address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 01:03 +0000, Julien Grall wrote:
> On ARM, address size can be 32 bits or 64 bits (if CONFIG_ARCH_PHYS_ADDR_T_64BIT
> is enabled).
> We can't assume that the grant frame base address will always fit in an
> unsigned long. Use phys_addr_t instead of unsigned long as the argument
> to gnttab_setup_auto_xlat_frames.

Strictly speaking I think it would be wrong for the tools to provide a
grant frame address which only kernels with LPAE support enabled could
use, so the issue is more theoretical than a real danger.

The patch is still correct though.

> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 09:37:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 09:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W857B-0002Ll-IO; Tue, 28 Jan 2014 09:37:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W857A-0002Lf-T6
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 09:37:53 +0000
Received: from [85.158.139.211:43593] by server-16.bemta-5.messagelabs.com id
	57/5E-11843-07A77E25; Tue, 28 Jan 2014 09:37:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390901869!55145!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3317 invoked from network); 28 Jan 2014 09:37:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 09:37:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95157924"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 09:37:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 04:37:27 -0500
Message-ID: <1390901845.7753.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 09:37:25 +0000
In-Reply-To: <osstest-24553-mainreport@xen.org>
References: <osstest-24553-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> flight 24553 qemu-upstream-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/
> 
> Failures :-/ but no regressions.

QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
and so will require updating to actually pull this new stuff into the
release.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:04:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W85WH-0003cb-6k; Tue, 28 Jan 2014 10:03:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W85WF-0003cW-Fb
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:03:47 +0000
Received: from [85.158.143.35:44706] by server-1.bemta-4.messagelabs.com id
	24/1C-02132-28087E25; Tue, 28 Jan 2014 10:03:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390903425!1297246!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29822 invoked from network); 28 Jan 2014 10:03:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:03:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97168114"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 10:03:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:03:44 -0500
Message-ID: <1390903423.7753.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 10:03:43 +0000
In-Reply-To: <alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> Alternatively, as Ian suggested, we could increase the priotiry of SGIs
> but I am a bit wary of making that change at RC2.

I'm leaning the other way -- I'm wary of open coding magic locking
primitives to work around this issue on a case by case basis. It's just
too subtle IMHO.

The IPI and cross CPU calling primitives are basically predicated on
those IPIs interrupting normal interrupt handlers.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:04:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W85WH-0003cb-6k; Tue, 28 Jan 2014 10:03:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W85WF-0003cW-Fb
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:03:47 +0000
Received: from [85.158.143.35:44706] by server-1.bemta-4.messagelabs.com id
	24/1C-02132-28087E25; Tue, 28 Jan 2014 10:03:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1390903425!1297246!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29822 invoked from network); 28 Jan 2014 10:03:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:03:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97168114"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 10:03:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:03:44 -0500
Message-ID: <1390903423.7753.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 10:03:43 +0000
In-Reply-To: <alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> Alternatively, as Ian suggested, we could increase the priority of SGIs
> but I am a bit wary of making that change at RC2.

I'm leaning the other way -- I'm wary of open-coding magic locking
primitives to work around this issue on a case-by-case basis. It's just
too subtle IMHO.

The IPI and cross CPU calling primitives are basically predicated on
those IPIs interrupting normal interrupt handlers.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:07:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W85Zi-0003mm-3A; Tue, 28 Jan 2014 10:07:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1W85Zh-0003mg-1s
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 10:07:21 +0000
Received: from [85.158.139.211:23772] by server-15.bemta-5.messagelabs.com id
	EC/73-08490-85187E25; Tue, 28 Jan 2014 10:07:20 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390903638!66293!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12029 invoked from network); 28 Jan 2014 10:07:18 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-206.messagelabs.com with SMTP;
	28 Jan 2014 10:07:18 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0SA6EAt024642
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 05:06:14 -0500
Received: from redhat.com (vpn1-6-167.ams2.redhat.com [10.36.6.167])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0SA69UK003032
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Tue, 28 Jan 2014 05:06:11 -0500
Date: Tue, 28 Jan 2014 10:06:09 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Jim Fehlig <jfehlig@suse.com>
Message-ID: <20140128100608.GE19598@redhat.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E70A58.2060002@suse.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
 flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 06:39:36PM -0700, Jim Fehlig wrote:
> [Adding libvirt list...]
> 
> Ian Jackson wrote:
> > Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> >   
> >> BTW, I only see the crash when the save/restore script is running.  I
> >> stopped the other scripts and domains, running only save/restore on a
> >> single domain, and see the crash rather quickly (within 10 iterations).
> >>     
> >
> > I'll look at the libvirt code, but:
> >
> > With a recurring timeout, how can you ever know it's cancelled ?
> > There might be threads out there, which don't hold any locks, which
> > are in the process of executing a callback for a timeout.  That might
> > be arbitrarily delayed from the pov of the rest of the program.
> >
> > E.g.:
> >
> >  Thread A                                             Thread B
> >
> >    invoke some libxl operation
> > X    do some libxl stuff
> > X    register timeout (libxl)
> > XV     record timeout info
> > X    do some more libxl stuff
> >      ...
> > X    do some more libxl stuff
> > X    deregister timeout (libxl internal)
> > X     converted to request immediate timeout
> > XV     record new timeout info
> > X      release libvirt event loop lock
> >                                             entering libvirt event loop
> >                                        V     observe timeout is immediate
> >                                        V      need to do callback
> >                                                call libxl driver
> >
> >       entering libvirt event loop
> >  V     observe timeout is immediate
> >  V      need to do callback
> >          call libxl driver
> >            call libxl
> >   X          libxl sees timeout is live
> >   X          libxl does libxl stuff
> >          libxl driver deregisters
> >  V         record lack of timeout
> >          free driver's timeout struct
> >                                                call libxl
> >                                       X          libxl sees timeout is dead
> >                                       X          libxl does nothing
> >                                              libxl driver deregisters
> >                                        V       CRASH due to deregistering
> >                                        V        already-deregistered timeout
> >
> > If this is how things are, then I think there is no sane way to use
> > libvirt's timeouts (!)
> >   
> 
> Looking at the libvirt code again, it seems a single thread services the
> event loop. See virNetServerRun() in src/util/virnetserver.c. Indeed, I
> see the same thread ID in all the timer and fd callbacks. One of the
> libvirt core devs can correct me if I'm wrong.

Yes, you are correct. The threading model for libvirtd is that the
process thread leader executes the event loop, dealing with timer
and file descriptor I/O callbacks. There are also 'n' worker threads
which exclusively handle public API calls from libvirt clients. IOW
all your timer callbacks will be in one thread - which also means
you want your timer callbacks to be fast to execute. If you have any
very slow code you'll want to spawn a temporary worker thread from
the timer callback to do the slow work.
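
The hand-off pattern described above can be sketched with plain pthreads.
This is an illustrative sketch only: `on_timer` and `slow_worker` are
hypothetical names, not libvirt's actual timer-callback API.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static atomic_int work_done;

/* Stand-in for the slow work (e.g. a blocking libxl call) that must
 * not run inside the event-loop thread. */
static void *slow_worker(void *arg)
{
    (void)arg;
    atomic_fetch_add(&work_done, 1);
    return NULL;
}

/* Timer callback: does only the cheap part, then hands the slow part
 * to a temporary worker thread so the event loop stays responsive. */
static pthread_t on_timer(void *opaque)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, slow_worker, opaque) != 0) {
        perror("pthread_create");
        exit(EXIT_FAILURE);
    }
    return tid; /* real code would detach or reap this elsewhere */
}
```

The key point is that the callback itself returns almost immediately;
only the spawned thread blocks, so other timers and fd callbacks in the
single event-loop thread are not delayed.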

Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:28:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W85tX-0004RE-GT; Tue, 28 Jan 2014 10:27:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W85tV-0004R9-VC
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:27:50 +0000
Received: from [85.158.137.68:19125] by server-11.bemta-3.messagelabs.com id
	76/10-19379-52687E25; Tue, 28 Jan 2014 10:27:49 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390904867!11764371!1
X-Originating-IP: [209.85.213.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30277 invoked from network); 28 Jan 2014 10:27:48 -0000
Received: from mail-ig0-f172.google.com (HELO mail-ig0-f172.google.com)
	(209.85.213.172)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:27:48 -0000
Received: by mail-ig0-f172.google.com with SMTP id k19so11666443igc.5
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 02:27:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=fVRVUrTb4+OkDgp786eArMxdZpdr3vYH3u7EhtNXLNU=;
	b=eo0lls8z4Xb+Zm5ldsIPycictYV39JDTQh3dFp6y/uaBya5fire5raXGxIBh0IAuKB
	I6FP1GWbdw0GtPIFp05/7v2x1nAQjFOZao9uiwxWxZnOK/5DJdWRAodPySW+TaunrbgT
	pf+DBZHXjwastUnVUdxJSgF6U98gsoFa9YKCeEEGGEchEYkG64Fk7OuKAo8KuF3TYZ6J
	0vlCjim6cGyjwlF8z2GvHmIxA2i+xDxcwqITJWW2mwqwigEiWRt9owRPSO36obWAvL4m
	8cZC4NHaCGE2zcK3hC/F7xNX7WvEsB6KC7o1dpEgg8Y3Nvh3QiinS6ux+roVAiclgjPD
	mKXA==
MIME-Version: 1.0
X-Received: by 10.50.61.137 with SMTP id p9mr22229762igr.45.1390904866799;
	Tue, 28 Jan 2014 02:27:46 -0800 (PST)
Received: by 10.65.15.74 with HTTP; Tue, 28 Jan 2014 02:27:46 -0800 (PST)
In-Reply-To: <1205617825-10042-2-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1205617825-10042-2-git-send-email-aravind.gopalakrishnan@amd.com>
Date: Tue, 28 Jan 2014 10:27:46 +0000
X-Google-Sender-Auth: 0xeCQuMhfZaXsQIxls8aKMRT2lg
Message-ID: <CAFLBxZZ=1qsYnWBAkaMEc4868YtMR6yhgSsAYW7_N+kpLRw+oQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, chegger@amazon.de,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 1/2] hvm,
	svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Mar 15, 2008 at 9:50 PM, Aravind Gopalakrishnan
<aravind.gopalakrishnan@amd.com> wrote:
> MSR 0xC000040A is marked as reserved from Fam15 onwards and
> MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10 BKDG.
> So remove the unnecessary definition of the reserved MSR and
> use MSR_IA32_MCx_MISC() to define MSR 0x413.
>
> Also, according to BKDG, MSR 0x413 is the first of the thresholding
> registers; MSR 0xC0000408 and MSR 0xC0000409 are second and third
> respectively. So rework the #define's accordingly.
>
> Fam15 Model 00h-0fh  BKDG reference:
> http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf
>
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Since you haven't provided a bug description or a justification for a
freeze exception, I take it this is targeted at 4.5?

 -George

> ---
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 406d394..07c2684 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
>          break;
>
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: We report that the threshold register is unavailable
>           * for OS use (locked by the BIOS).
> @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          vpmu_do_wrmsr(msr, msr_content);
>          break;
>
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: Threshold register is reported to be locked, so we ignore
>           * all write accesses. This behaviour matches real HW, so guests should
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index fc9fbc6..e5ffbf2 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -219,9 +219,9 @@
>  #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT        46
>
>  /* AMD Family10h machine check MSRs */
> -#define MSR_F10_MC4_MISC1              0xc0000408
> -#define MSR_F10_MC4_MISC2              0xc0000409
> -#define MSR_F10_MC4_MISC3              0xc000040A
> +#define MSR_F10_MC4_MISC1              MSR_IA32_MCx_MISC(4)
> +#define MSR_F10_MC4_MISC2              0xc0000408
> +#define MSR_F10_MC4_MISC3              0xc0000409
>
>  /* AMD Family10h Bus Unit MSRs */
>  #define MSR_F10_BU_CFG                 0xc0011023
> --
> 1.7.9.5
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:31:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W85xH-0004mN-8w; Tue, 28 Jan 2014 10:31:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W85xF-0004mI-Ev
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 10:31:41 +0000
Received: from [85.158.137.68:38343] by server-15.bemta-3.messagelabs.com id
	25/B4-11556-C0787E25; Tue, 28 Jan 2014 10:31:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390905099!11669279!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15631 invoked from network); 28 Jan 2014 10:31:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 10:31:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 10:31:39 +0000
Message-Id: <52E7951802000078001177F6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 10:31:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
In-Reply-To: <20140127175550.4cc67171@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 02:55, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -838,7 +838,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( d == NULL )
>              return -ESRCH;
>  
> -        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> +        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d, NULL);
>          if ( rc )
>          {
>              rcu_unlock_domain(d);
> @@ -860,7 +860,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case XENMEM_add_to_physmap_batch:
>      {
>          struct xen_add_to_physmap_batch xatpb;
> -        struct domain *d;
> +        struct domain *d, *fd = NULL;
>  
>          BUILD_BUG_ON((typeof(xatpb.size))-1 >
>                       (UINT_MAX >> MEMOP_EXTENT_SHIFT));
> @@ -883,7 +883,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( d == NULL )
>              return -ESRCH;
>  
> -        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> +#ifdef CONFIG_X86
> +        if ( xatpb.space == XENMAPSPACE_gmfn_foreign )
> +        {
> +            if ( (fd = get_pg_owner(xatpb.foreign_domid)) == NULL )
> +            {
> +                rcu_unlock_domain(d);
> +                return -ESRCH;
> +            }
> +        }
> +#endif
> +        rc = xsm_add_to_physmap(XSM_TARGET, current->domain, d, fd);
>          if ( rc )
>          {
>              rcu_unlock_domain(d);
> @@ -893,6 +903,10 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          rc = xenmem_add_to_physmap_batch(d, &xatpb, start_extent);
>  
>          rcu_unlock_domain(d);
> +#ifdef CONFIG_X86
> +        if ( fd )
> +            put_pg_owner(fd);
> +#endif
>  
>          if ( rc > 0 )
>              rc = hypercall_create_continuation(

The only thing x86-specific here is that {get,put}_pg_owner() may
not exist on ARM. But the general operation isn't x86-specific, so
there shouldn't be any CONFIG_X86 dependency here. Instead
you ought to work out with the ARM maintainers whether to stub
out those two functions, or whether the functionality is useful
there too (and hence proper implementations would be needed).

In the latter case I would then also wonder whether the x86
implementation shouldn't be moved into common code.
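[Editor's note: the reference pairing under discussion can be sketched as a
toy model. All names and values below are illustrative stand-ins, not the
real Xen implementations (the x86 originals live in xen/arch/x86/mm.c); the
point is only that the reference must be dropped on every exit path.]

```c
#include <stddef.h>

#define ESRCH 3

struct domain { int id; };
static struct domain foreign_dom = { 1 };
static int owner_refs;                 /* models the foreign domain's refcount */

/* Stand-in for get_pg_owner(): take a reference on the foreign domain,
 * or return NULL for an unknown domid. */
static struct domain *get_pg_owner(int domid)
{
    if (domid < 0)                     /* model: unknown domid -> NULL */
        return NULL;
    owner_refs++;
    return &foreign_dom;
}

/* Stand-in for put_pg_owner(): drop the reference taken above. */
static void put_pg_owner(struct domain *d)
{
    (void)d;
    owner_refs--;
}

/* Sketch of the batch hypercall path: take the reference only for the
 * foreign map space, and release it on every exit so refs stay balanced. */
static int map_batch(int space_is_foreign, int domid)
{
    struct domain *fd = NULL;

    if (space_is_foreign && (fd = get_pg_owner(domid)) == NULL)
        return -ESRCH;
    /* ... XSM check and the actual mapping would go here ... */
    if (fd)
        put_pg_owner(fd);
    return 0;
}
```

Whichever way the ARM question is resolved (stubs or a common-code move),
the invariant is the same: every successful get_pg_owner() must be matched
by a put_pg_owner() on both the success and error paths.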

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:35:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W861A-00050W-Ma; Tue, 28 Jan 2014 10:35:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W8619-00050L-Mz
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 10:35:43 +0000
Received: from [193.109.254.147:31104] by server-9.bemta-14.messagelabs.com id
	82/EB-13957-FF787E25; Tue, 28 Jan 2014 10:35:43 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390905341!297478!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17504 invoked from network); 28 Jan 2014 10:35:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:35:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97177002"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 10:35:40 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:35:40 -0500
Message-ID: <52E787FA.3080105@citrix.com>
Date: Tue, 28 Jan 2014 10:35:38 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be
	called very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/01/14 00:34, Julien Grall wrote:
> On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
> all CPUs are online. It would mean that the notifier will never be called.

Why does ARM call xen_init_IRQ() so late?  Is it possible to call it
earlier when only the boot CPU is online?  There are problems with
attempting to init FIFO event channels after all CPUs are online.

If evtchn_fifo_init_control_block(cpu) fails on anything other than the
first CPU, that CPU will be unable to receive any events.  Xen will have
been switched to FIFO mode and it is not possible to revert back to
2-level mode.

> Therefore, when a secondary CPU will receive an interrupt, Linux will segfault
> because the event channel structure for this processor is not initialized.
> 
> This can be fixed by calling the init function on every online cpu when the
> event channel fifo driver is initialized.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  drivers/xen/events/events_fifo.c |   11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
> index 1de2a19..15498ab 100644
> --- a/drivers/xen/events/events_fifo.c
> +++ b/drivers/xen/events/events_fifo.c
> @@ -410,12 +410,14 @@ static struct notifier_block evtchn_fifo_cpu_notifier = {
>  
>  int __init xen_evtchn_fifo_init(void)
>  {
> -	int cpu = get_cpu();
> +	int cpu;
>  	int ret;
>  
> -	ret = evtchn_fifo_init_control_block(cpu);
> -	if (ret < 0)
> -		goto out;
> +	for_each_online_cpu(cpu) {
> +		ret = evtchn_fifo_init_control_block(cpu);
> +		if (ret < 0)
> +			goto out;

You need to handle this error differently depending on whether the first
call fails or not.

Failure on first CPU: return an error and the caller will fall back to
using 2-level mode.

Failure on second or later CPU: you need to offline that CPU.  It may
not be possible to offline a CPU with standard calls (e.g., cpu_down())
as it won't have working interrupts.
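[Editor's note: the error-handling split David describes can be sketched as
a toy model. The names and the NCPUS value are illustrative, not kernel
APIs: a boot-CPU failure aborts FIFO init entirely (so the caller can fall
back to 2-level mode), while a later-CPU failure only disables that CPU.]

```c
#include <stddef.h>

#define NCPUS 4

static int offlined[NCPUS];            /* CPUs taken down after a late failure */

/* Stand-in for evtchn_fifo_init_control_block(): fails only on the CPU
 * selected by the caller, so both error paths can be exercised. */
static int init_control_block(int cpu, int failing_cpu)
{
    return cpu == failing_cpu ? -1 : 0;
}

/* Returns <0 only when the boot CPU fails; a failure on any later CPU
 * marks that CPU offline instead, since Xen is already in FIFO mode and
 * cannot revert. */
static int fifo_init(int failing_cpu)
{
    for (int cpu = 0; cpu < NCPUS; cpu++) {
        if (init_control_block(cpu, failing_cpu) < 0) {
            if (cpu == 0)
                return -1;             /* boot CPU: fall back to 2-level */
            offlined[cpu] = 1;         /* later CPU: take it offline */
        }
    }
    return 0;
}

static int cpu_offlined(int cpu)
{
    return offlined[cpu];
}
```

In the real driver the "offline" step is the hard part, as noted above: the
failed CPU has no working events, so standard paths like cpu_down() may not
apply.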

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8634-000596-9W; Tue, 28 Jan 2014 10:37:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8633-00058x-DM
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:37:41 +0000
Received: from [85.158.137.68:23842] by server-17.bemta-3.messagelabs.com id
	7D/8B-15965-47887E25; Tue, 28 Jan 2014 10:37:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390905459!8107322!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22440 invoked from network); 28 Jan 2014 10:37:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 10:37:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 10:37:39 +0000
Message-Id: <52E796810200007800117801@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 10:37:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
In-Reply-To: <CAFLBxZYQ15JMe0HFD8HvT-GpLh9DGH96X5_R=x8u2xx+5qwwCQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 development update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.01.14 at 19:49, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> * Disable IOMMU if no southbridge
>  > http://bugs.xenproject.org/xen/bug/37 

Patch posted (for testing by the reporter of the problem), but
certainly not even coming close to being a blocker.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:39:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W864o-0005JX-RT; Tue, 28 Jan 2014 10:39:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W864n-0005JK-8a
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 10:39:29 +0000
Received: from [85.158.137.68:30420] by server-13.bemta-3.messagelabs.com id
	78/33-28603-0E887E25; Tue, 28 Jan 2014 10:39:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390905565!10598978!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20370 invoked from network); 28 Jan 2014 10:39:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 10:39:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 10:39:25 +0000
Message-Id: <52E796EB0200007800117804@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 10:39:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
	<1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 03:18, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> Until now, xen did not expose PSE to pvh guest, but a patch was submitted
> to xen list to enable bunch of features for a pvh guest. PSE has not been
> looked into for PVH, so until we can do that and test it to make sure it
> works, disable the feature to avoid flood of bugs.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..4e952046 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1497,6 +1497,11 @@ static void __init xen_pvh_early_guest_init(void)
>  	xen_have_vector_callback = 1;
>  	xen_pvh_set_cr_flags(0);
>  
> +        /* pvh guests are not quite ready for large pages yet */
> +        setup_clear_cpu_cap(X86_FEATURE_PSE);
> +        setup_clear_cpu_cap(X86_FEATURE_PSE36);

And why would you not want to also turn off 1GB pages then?
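[Editor's note: Jan's point can be sketched with a toy feature mask. The
bit values and names below are illustrative; in the real kernel the flags
are X86_FEATURE_PSE, X86_FEATURE_PSE36 and X86_FEATURE_GBPAGES, and
setup_clear_cpu_cap() operates on the boot CPU's capability bitmap.]

```c
/* Illustrative feature bits; not the kernel's actual values. */
enum { FEAT_PSE = 1 << 0, FEAT_PSE36 = 1 << 1, FEAT_GBPAGES = 1 << 2 };

static unsigned int cpu_caps = FEAT_PSE | FEAT_PSE36 | FEAT_GBPAGES;

/* Stand-in for setup_clear_cpu_cap(): mask a capability bit. */
static void setup_clear_cpu_cap(unsigned int feat)
{
    cpu_caps &= ~feat;
}

/* Mirrors the patch hunk quoted above, plus the 1GB-page clear Jan asks
 * about; returns the remaining large-page capabilities. */
static unsigned int pvh_mask_large_pages(void)
{
    setup_clear_cpu_cap(FEAT_PSE);
    setup_clear_cpu_cap(FEAT_PSE36);
    setup_clear_cpu_cap(FEAT_GBPAGES);
    return cpu_caps;
}
```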

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:39:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W864o-0005JX-RT; Tue, 28 Jan 2014 10:39:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W864n-0005JK-8a
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 10:39:29 +0000
Received: from [85.158.137.68:30420] by server-13.bemta-3.messagelabs.com id
	78/33-28603-0E887E25; Tue, 28 Jan 2014 10:39:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390905565!10598978!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20370 invoked from network); 28 Jan 2014 10:39:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 10:39:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 10:39:25 +0000
Message-Id: <52E796EB0200007800117804@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 10:39:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
	<1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 03:18, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> Until now, xen did not expose PSE to pvh guest, but a patch was submitted
> to xen list to enable bunch of features for a pvh guest. PSE has not been
> looked into for PVH, so until we can do that and test it to make sure it
> works, disable the feature to avoid flood of bugs.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |    5 +++++
>  1 files changed, 5 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..4e952046 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1497,6 +1497,11 @@ static void __init xen_pvh_early_guest_init(void)
>  	xen_have_vector_callback = 1;
>  	xen_pvh_set_cr_flags(0);
>  
> +        /* pvh guests are not quite ready for large pages yet */
> +        setup_clear_cpu_cap(X86_FEATURE_PSE);
> +        setup_clear_cpu_cap(X86_FEATURE_PSE36);

And why would you not want to also turn off 1GB pages then?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:45:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86At-0005nv-QM; Tue, 28 Jan 2014 10:45:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W86As-0005nq-P4
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 10:45:46 +0000
Received: from [85.158.139.211:7156] by server-17.bemta-5.messagelabs.com id
	F2/0B-19152-95A87E25; Tue, 28 Jan 2014 10:45:45 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390905943!76953!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28193 invoked from network); 28 Jan 2014 10:45:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95174994"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 10:45:43 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:45:43 -0500
Message-ID: <52E78A55.3050905@citrix.com>
Date: Tue, 28 Jan 2014 10:45:41 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1390871025-29079-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1390871025-29079-1-git-send-email-julien.grall@linaro.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/gnttab: Use phys_addr_t to describe the
 grant frame base address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/01/14 01:03, Julien Grall wrote:
> On ARM, the address size can be 32 bits or 64 bits (if CONFIG_ARCH_PHYS_ADDR_T_64BIT
> is enabled).
> We can't assume that the grant frame base address will always fit in an
> unsigned long. Use phys_addr_t instead of unsigned long as the argument for
> gnttab_setup_auto_xlat_frames.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:46:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86BX-0005s2-EP; Tue, 28 Jan 2014 10:46:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W86BV-0005rj-Ft
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:46:25 +0000
Received: from [85.158.137.68:23512] by server-3.bemta-3.messagelabs.com id
	EB/BE-10658-08A87E25; Tue, 28 Jan 2014 10:46:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390905981!8123921!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28429 invoked from network); 28 Jan 2014 10:46:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:46:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95175089"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 10:46:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:46:20 -0500
Message-ID: <1390905979.7753.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 10:46:19 +0000
In-Reply-To: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
References: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and
	terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I should have said this yesterday on the docs day, but: Ping

On Mon, 2014-01-20 at 13:20 +0000, Ian Campbell wrote:
> ---
>  README | 202 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 201 insertions(+), 1 deletion(-)
> 
> diff --git a/README b/README
> index 29c9d45..60379c4 100644
> --- a/README
> +++ b/README
> @@ -1,3 +1,197 @@
> +Introduction
> +============
> +
> +OSStest is the Xen Project automated test infrastructure.
> +
> +Terminology
> +===========
> +
> +"flight":
> +
> +    Each run of osstest is referred to as a "flight". Each flight is
> +    given a unique ID (a number or name).
> +
> +"job":
> +
> +    Each flight consists of one or more "jobs". These are a sequence
> +    of test steps run in order and correspond to a column in the test
> +    report grid. They have names like "build-amd64" or
> +    "test-amd64-amd64-pv". A job can depend on the output of another
> +    job in the flight -- e.g. most test-* jobs depend on one or more
> +    build-* jobs.
> +
> +"step":
> +
> +    Each job consists of multiple "steps", each of which is an individual
> +    test operation, such as "build the hypervisor", "install a guest",
> +    "start a guest", "migrate a guest", etc. A step corresponds to a
> +    cell in the results grid. A given step can be reused in multiple
> +    different jobs, e.g. the "xen build" step is used in several
> +    different build-* jobs. This reuse can be seen in the rows of the
> +    results grid.
> +
> +"runvar":
> +
> +    A runvar is a named textual variable associated with each job in a
> +    given flight. They serve as both the inputs and outputs to the
> +    job.
> +
> +    For example, a Xen build job may have input runvars "tree_xen" (the
> +    tree to clone) and "revision_xen" (the version to test). As output
> +    it sets "path_xendist", which is the path to the tarball of the
> +    resulting binary.
> +
> +    As a further example a test job may have an input runvar
> +    "xenbuildjob" which specifies which build job produced the binary
> +    to be tested. The "xen install" step can then read this runvar in
> +    order to find the binary to install.
> +
> +    Other runvars also exist covering things such as:
> +
> +        * constraints on which machines in the test pool a job can be
> +          run on (e.g. the architecture, the need for a particular
> +          processor class, the presence of SR-IOV etc).
> +
> +        * the parameters of the guest to test (e.g. distro, PV vs HVM
> +          etc).
> +
> +Operation
> +=========
> +
> +A flight is constructed by the "make-flight" script.
> +
> +"make-flight" will allocate a new flight number and create a set of jobs
> +with input runvars depending on the configuration (e.g. the branch/version
> +to test).
> +
> +A flight is run by the "mg-execute-flight" script, which in turn calls
> +"sg-execute-flight". "sg-execute-flight" then spawns an instance of
> +"sg-run-job" for each job in the flight.
> +
> +"sg-run-job" encodes various recipes (sequences of steps) which are
> +referenced by each job's configuration. It then runs each of these in
> +turn, taking into account the prerequisites etc, by calling the
> +relevant "ts-*" scripts.
> +
> +When running in standalone mode it is possible to run any of these
> +steps by hand ("mg-execute-flight", "sg-run-job", "ts-*"), although
> +you will need to find the correct inputs (some of which are documented
> +below) and perhaps take care of prerequisites yourself (e.g. running
> +"./sg-run-job test-armhf-armhf-xl" means you must have done
> +"./sg-run-job build-armhf" and "./sg-run-job build-armhf-pvops" first).
> +
> +Results
> +=======
> +
> +For flights run automatically by the infrastructure an email report is
> +produced. For most normal flights this is mailed to the xen-devel
> +mailing list. The report for flight 24438 can be seen at
> +
> +    http://lists.xen.org/archives/html/xen-devel/2014-01/msg01614.html
> +
> +The report will link to a set of complete logs. Since these are
> +regularly expired due to space constraints the logs for flight 24438
> +have been archived to
> +
> +    http://xenbits.xen.org/docs/osstest-output-example/24438/
> +
> +NB: to save space any files larger than 100K have been replaced with a
> +placeholder.
> +
> +The results grid contains an overview of the flight's execution.
> +
> +The results for each job are reached by clicking the header of the
> +column in the results grid which will lead to reports such as:
> +
> +    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/info.html
> +    http://xenbits.xen.org/docs/osstest-output-example/24438/build-amd64/info.html
> +
> +The job report contains all of the logs and build outputs associated
> +with this job.
> +
> +The logs for a step are reached by clicking the individual cells of
> +the results grid, or by clicking the list of steps on the job
> +report. In either case this will lead to a report such as
> +
> +    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/4.ts-xen-install.log
> +
> +Additional details (e.g. serial logs, guest cfg files, etc) will be
> +available in the complete logs associated with the containing job.
> +
> +The runvars are listed in the results page for the job as "Test
> +control variables". e.g. See the end of:
> +
> +    http://xenbits.xen.org/docs/osstest-output-example/24438/test-amd64-amd64-xl/info.html
> +
> +In order to find the binaries which went into a test job you should
> +consult the results page for that job and find the relevant build
> +job. e.g.
> +
> +    http://www.chiark.greenend.org.uk/~xensrcts/logs/24438/test-amd64-amd64-xl/info.html
> +
> +lists "xenbuildjob" as "build-amd64". Therefore the input binaries are
> +found at
> +
> +    http://www.chiark.greenend.org.uk/~xensrcts/logs/24438/build-amd64/info.html
> +
> +which is linked from the top of the relevant column in the overview
> +grid.
> +
> +Script Naming Conventions
> +=========================
> +
> +Most of the scripts follow a naming convention:
> +
> +ap-*:   Adhoc push scripts
> +
> +cr-*:   Cron scripts
> +
> +cri-*:  Cron scripts (internal)
> +
> +cs-*:   Control Scripts
> +
> +mg-*:   Management scripts
> +
> +ms-*:   Management Services
> +
> +sg-*:   ?
> +
> +ts-*:   Test Step scripts.
> +
> +Jobs
> +====
> +
> +The names of jobs follow some common patterns:
> +
> +    build-$ARCH
> +
> +        Build Xen for $ARCH
> +
> +    build-$ARCH-xend
> +
> +        Build Xen for $ARCH, with xend enabled
> +
> +    build-$ARCH-pvops
> +
> +        Build an upstream ("pvops") kernel for $ARCH
> +
> +    build-$ARCH-oldkern
> +
> +        Build the old "linux-2.6.18-xen" tree for $ARCH
> +
> +    test-$XENARCH-$DOM0ARCH-<CASE>
> +
> +        A test <CASE> running a $XENARCH hypervisor with a $DOM0ARCH
> +        dom0.
> +
> +        Some tests also have a -$DOMUARCH suffix indicating the
> +        obvious thing.
> +
> +NB: $ARCH (and $XENARCH etc) are Debian arch names, i386, amd64, armhf.
> +
> +Standalone Mode
> +===============
> +
>  To run osstest in standalone mode:
>  
>   - You need to install
> @@ -18,7 +212,7 @@ To run osstest in standalone mode:
>     gives you the "branch" consisting of tests run for the xen-unstable
>     push gate.  You need to select a job.  The list of available jobs
>     is that shown in the publicly emailed test reports on xen-devel, eg
> -     http://lists.xen.org/archives/html/xen-devel/2013-08/msg02529.html
> +     http://lists.xen.org/archives/html/xen-devel/2014-01/msg01614.html
>  
>     If you don't want to repro one of those and don't know how to
>     choose a job, choose one of
> @@ -26,6 +220,12 @@ To run osstest in standalone mode:
>  
>   - Run ./standalone-reset
>  
> +   This will call "make-flight" for you to create a flight targeting
> +   xen-unstable (this can be adjusted by passing parameters to
> +   standalone-reset). By default the flight identifier is
> +   "standalone". standalone-reset will also make sure that certain
> +   bits of static data are available (e.g. Debian installer images).
> +
>   - Then you can run
>        ./sg-run-job <job>
>     to run that job on the default host.  NB in most cases this will
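[For concreteness, the standalone workflow the README describes amounts to something like the following session. The command names come from the README itself; the job names are examples, and the commands assume you are at the top of an osstest checkout.]

```shell
# Standalone mode, from the top of an osstest checkout:
./standalone-reset                 # create a "standalone" flight for xen-unstable
./sg-run-job build-armhf           # build jobs first...
./sg-run-job build-armhf-pvops
./sg-run-job test-armhf-armhf-xl   # ...then the test job that consumes them
```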



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 10:48:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 10:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86D7-00061Q-00; Tue, 28 Jan 2014 10:48:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W86D5-00061D-Ho
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 10:48:03 +0000
Received: from [193.109.254.147:13179] by server-5.bemta-14.messagelabs.com id
	4A/1B-03510-2EA87E25; Tue, 28 Jan 2014 10:48:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390906080!299396!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14287 invoked from network); 28 Jan 2014 10:48:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 10:48:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95175407"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 10:47:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 05:47:39 -0500
Message-ID: <1390906058.7753.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 10:47:38 +0000
In-Reply-To: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, julien.grall@linaro.org, tim@xen.org,
	george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
	ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
> 
> The correct solution here would be to check for ENOSYS in libxl; unfortunately,
> xc_domain_set_pod_target suffers from the same broken error reporting as the
> rest of libxc and throws away the errno.
> 
> So for now conditionally define xc_domain_set_pod_target to return success
> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
> errno==-1 and returns -1, which matches the broken error reporting of the
> existing function. It appears to have no in-tree callers in any case.
> 
> The conditional should be removed once libxc has been fixed.
> 
> This makes ballooning (xl mem-set) work for ARM domains.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: george.dunlap@citrix.com
> ---
> I'd be generally wary of modifying the error handling in a piecemeal way, but
> certainly doing so for 4.4 now would be inappropriate.
> 
> IIRC Ian J was planning a thorough sweep of the libxc error paths in the 4.5
> time frame, at which point this conditional stuff could be dropped.
> 
> In terms of the 4.4 release, ballooning would obviously be very nice to have
> for ARM guests; on the other hand, I'm aware that while the patch is fairly
> small/contained and safe, it is also pretty skanky and likely wouldn't be
> accepted outside of the rc period.

George -- what do you think of this?

> ---
>  tools/libxc/xc_domain.c |   28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..e1d1bec 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -986,6 +986,12 @@ out:
>      return rc;
>  }
>  
> +/* Currently only implemented on x86. This cannot be handled in the
> + * caller, e.g. by looking for errno==ENOSYS because of the broken
> + * error reporting style. Once this is fixed then this condition can
> + * be removed.
> + */
> +#if defined(__i386__)||defined(__x86_64__)
>  static int xc_domain_pod_target(xc_interface *xch,
>                                  int op,
>                                  uint32_t domid,
> @@ -1055,6 +1061,28 @@ int xc_domain_get_pod_target(xc_interface *xch,
>                                  pod_cache_pages,
>                                  pod_entries);
>  }
> +#else
> +int xc_domain_set_pod_target(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint64_t target_pages,
> +                             uint64_t *tot_pages,
> +                             uint64_t *pod_cache_pages,
> +                             uint64_t *pod_entries)
> +{
> +    return 0;
> +}
> +int xc_domain_get_pod_target(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint64_t *tot_pages,
> +                             uint64_t *pod_cache_pages,
> +                             uint64_t *pod_entries)
> +{
> +    /* On x86 (above) xc_domain_pod_target will incorrectly return -1
> +     * with errno==-1 on error. Do the same for least surprise. */
> +    errno = -1;
> +    return -1;
> +}
> +#endif
>  
>  int xc_domain_max_vcpus(xc_interface *xch, uint32_t domid, unsigned int max)
>  {
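[For context, a minimal sketch of the caller-side ENOSYS check the commit message describes as "the correct solution" once libxc preserves errno. The function names and the stubbed-out libxc call are hypothetical, purely illustrative; the real fix would sit in libxl around xc_domain_set_pod_target.]

```c
/* Hypothetical sketch: once libxc preserves errno, a libxl-side wrapper
 * could treat ENOSYS from the PoD hypercall as "nothing to do" instead
 * of needing the per-architecture #ifdef in the patch above. */
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Stand-in for xc_domain_set_pod_target() on a platform without PoD:
 * fails, but (unlike today's libxc) leaves a meaningful errno behind. */
static int stub_set_pod_target(uint32_t domid, uint64_t target_pages)
{
    (void)domid; (void)target_pages;
    errno = ENOSYS;
    return -1;
}

/* What the fixed caller-side check could look like. */
static int set_pod_target_checked(uint32_t domid, uint64_t target_pages)
{
    if (stub_set_pod_target(domid, target_pages) == 0)
        return 0;
    if (errno == ENOSYS)
        return 0; /* PoD not implemented (e.g. ARM): success, nothing to do */
    return -1;
}
```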



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:15:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86cs-00076s-3L; Tue, 28 Jan 2014 11:14:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W86cq-00076n-Ig
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 11:14:40 +0000
Received: from [193.109.254.147:42157] by server-14.bemta-14.messagelabs.com
	id B8/D7-12628-F1197E25; Tue, 28 Jan 2014 11:14:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390907678!306247!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6631 invoked from network); 28 Jan 2014 11:14:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:14:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97185680"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:14:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:14:37 -0500
Message-ID: <1390907676.7753.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Tue, 28 Jan 2014 11:14:36 +0000
In-Reply-To: <1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Xen-devel@lists.xensource.com, ian.jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 17:13 -0800, Mukesh Rathor wrote:
> Expose features for pvh domUs from tools.
> 

I assume this is targeting 4.5?

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
>  tools/libxc/xc_domain.c    |    1 +
>  tools/libxc/xenctrl.h      |    2 +-
>  3 files changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> index bbbf9b8..33f6829 100644
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
>  
>  static void xc_cpuid_pv_policy(
>      xc_interface *xch, domid_t domid,
> -    const unsigned int *input, unsigned int *regs)
> +    const unsigned int *input, unsigned int *regs, int is_pvh)
>  {
>      DECLARE_DOMCTL;
>      unsigned int guest_width;
> @@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
>  
>      if ( (input[0] & 0x7fffffff) == 0x00000001 )
>      {
> -        clear_bit(X86_FEATURE_VME, regs[3]);
> -        clear_bit(X86_FEATURE_PSE, regs[3]);
> -        clear_bit(X86_FEATURE_PGE, regs[3]);
> -        clear_bit(X86_FEATURE_MCE, regs[3]);
> -        clear_bit(X86_FEATURE_MCA, regs[3]);
> +        if ( !is_pvh )
> +        {
> +            clear_bit(X86_FEATURE_VME, regs[3]);
> +            clear_bit(X86_FEATURE_PSE, regs[3]);
> +            clear_bit(X86_FEATURE_PGE, regs[3]);
> +            clear_bit(X86_FEATURE_MCE, regs[3]);
> +            clear_bit(X86_FEATURE_MCA, regs[3]);
> +            clear_bit(X86_FEATURE_PSE36, regs[3]);
> +        }
>          clear_bit(X86_FEATURE_MTRR, regs[3]);
> -        clear_bit(X86_FEATURE_PSE36, regs[3]);
>      }
>  
>      switch ( input[0] )
> @@ -524,8 +527,11 @@ static void xc_cpuid_pv_policy(
>          {
>              set_bit(X86_FEATURE_SYSCALL, regs[3]);
>          }
> -        clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
> -        clear_bit(X86_FEATURE_RDTSCP, regs[3]);
> +        if ( !is_pvh )
> +        {
> +            clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
> +            clear_bit(X86_FEATURE_RDTSCP, regs[3]);
> +        }
>  
>          clear_bit(X86_FEATURE_SVM, regs[2]);
>          clear_bit(X86_FEATURE_OSVW, regs[2]);
> @@ -561,7 +567,7 @@ static int xc_cpuid_policy(
>      if ( info.hvm )
>          xc_cpuid_hvm_policy(xch, domid, input, regs);
>      else
> -        xc_cpuid_pv_policy(xch, domid, input, regs);
> +        xc_cpuid_pv_policy(xch, domid, input, regs, info.pvh);
>  
>      return 0;
>  }
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f12999a 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -316,6 +316,7 @@ int xc_domain_getinfo(xc_interface *xch,
>          info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
>          info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
>          info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
> +        info->pvh      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_pvh_guest);
>  
>          info->shutdown_reason =
>              (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 13f816b..77d219a 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -404,7 +404,7 @@ typedef struct xc_dominfo {
>      uint32_t      ssidref;
>      unsigned int  dying:1, crashed:1, shutdown:1,
>                    paused:1, blocked:1, running:1,
> -                  hvm:1, debugged:1;
> +                  hvm:1, debugged:1, pvh:1;
>      unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
>      unsigned long nr_pages; /* current number, not maximum */
>      unsigned long nr_outstanding_pages;
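[For context, the `info->pvh` line in the hunk above uses the usual `!!(flags & bit)` idiom to collapse any set bit down to 0/1 so it fits a 1-bit bitfield. A self-contained sketch follows; the flag names and bit positions here are illustrative placeholders, not the real XEN_DOMINF_* values.]

```c
/* Sketch of the !!(flags & bit) decode used in xc_domain_getinfo.
 * Bit positions are hypothetical, not the actual XEN_DOMINF_* layout. */
#include <assert.h>
#include <stdint.h>

#define DOMINF_HVM_GUEST (1u << 5)   /* illustrative bit position */
#define DOMINF_PVH_GUEST (1u << 9)   /* illustrative bit position */

struct dominfo {
    unsigned int hvm:1, pvh:1;       /* 1-bit fields, as in xc_dominfo */
};

static struct dominfo decode_flags(uint32_t flags)
{
    struct dominfo info;
    /* !! forces 0 or 1, so assigning to a 1-bit field never truncates. */
    info.hvm = !!(flags & DOMINF_HVM_GUEST);
    info.pvh = !!(flags & DOMINF_PVH_GUEST);
    return info;
}
```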



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:17:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86fy-0007DI-Rw; Tue, 28 Jan 2014 11:17:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W86fx-0007DC-W9
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:17:54 +0000
Received: from [85.158.143.35:43841] by server-2.bemta-4.messagelabs.com id
	BB/B1-11386-1E197E25; Tue, 28 Jan 2014 11:17:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390907871!1322352!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6802 invoked from network); 28 Jan 2014 11:17:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 11:17:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 11:17:51 +0000
Message-Id: <52E79FEB0200007800117840@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 11:17:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1390862655-3461-2-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1390862655-3461-2-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2 V2] hvm,
 svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> MSR 0xC000040A is marked as reserved from Fam15 onwards and
> MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10 BKDG.
> So remove the unnecessary definition of the reserved MSR and
> use MSR_IA32_MCx_MISC() to define MSR 0x413.

My Fam10 BKDG version doesn't say anything like this. Instead it
says that the low 32 bits of all 4 registers are identical (i.e. all are
aliasing 0x413), whereas the high 32 bits are different among all
four registers (with 0xc000040a having them all zero).

> Also, according to BKDG, MSR 0x413 is the first of the thresholding
> registers; MSR 0xC0000408 and MSR 0xC0000409 are second and third
> respectively. So rework the #define's accordingly.
> 
> Fam15 Model 00h-0fh  BKDG reference:
> http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf 

Higher model numbers appear to also have 0xc0000409 reserved...

Jan

> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 406d394..07c2684 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, 
> uint64_t *msr_content)
>          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: We report that the threshold register is unavailable
>           * for OS use (locked by the BIOS).
> @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, 
> uint64_t msr_content)
>          vpmu_do_wrmsr(msr, msr_content);
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: Threshold register is reported to be locked, so we 
> ignore
>           * all write accesses. This behaviour matches real HW, so guests 
> should
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index fc9fbc6..e5ffbf2 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -219,9 +219,9 @@
>  #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
>  
>  /* AMD Family10h machine check MSRs */
> -#define MSR_F10_MC4_MISC1		0xc0000408
> -#define MSR_F10_MC4_MISC2		0xc0000409
> -#define MSR_F10_MC4_MISC3		0xc000040A
> +#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
> +#define MSR_F10_MC4_MISC2		0xc0000408
> +#define MSR_F10_MC4_MISC3		0xc0000409
>  
>  /* AMD Family10h Bus Unit MSRs */
>  #define MSR_F10_BU_CFG 		0xc0011023
> -- 
> 1.7.9.5
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:17:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86fy-0007DI-Rw; Tue, 28 Jan 2014 11:17:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W86fx-0007DC-W9
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:17:54 +0000
Received: from [85.158.143.35:43841] by server-2.bemta-4.messagelabs.com id
	BB/B1-11386-1E197E25; Tue, 28 Jan 2014 11:17:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390907871!1322352!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6802 invoked from network); 28 Jan 2014 11:17:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 11:17:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 11:17:51 +0000
Message-Id: <52E79FEB0200007800117840@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 11:17:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1390862655-3461-2-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1390862655-3461-2-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2 V2] hvm,
 svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> MSR 0xC000040A is marked as reserved from Fam15 onwards and
> MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10 BKDG.
> So remove the unnecessary definition of the reserved MSR and
> use MSR_IA32_MCx_MISC() to define MSR 0x413.

My Fam10 BKDG version doesn't say anything like this. Instead it
says that the low 32 bits of all 4 registers are identical (i.e. all are
aliasing 0x413), whereas the high 32 bits are different among all
the four registers (with 0xc000040a having them all zero).

> Also, according to BKDG, MSR 0x413 is the first of the thresholding
> registers; MSR 0xC0000408 and MSR 0xC0000409 are second and third
> respectively. So rework the #define's accordingly.
> 
> Fam15 Model 00h-0fh  BKDG reference:
> http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf 

Higher model numbers appear to also have 0xc0000409 reserved...

Jan

> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 406d394..07c2684 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: We report that the threshold register is unavailable
>           * for OS use (locked by the BIOS).
> @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          vpmu_do_wrmsr(msr, msr_content);
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold registers */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: Threshold register is reported to be locked, so we ignore
>           * all write accesses. This behaviour matches real HW, so guests should
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index fc9fbc6..e5ffbf2 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -219,9 +219,9 @@
>  #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
>  
>  /* AMD Family10h machine check MSRs */
> -#define MSR_F10_MC4_MISC1		0xc0000408
> -#define MSR_F10_MC4_MISC2		0xc0000409
> -#define MSR_F10_MC4_MISC3		0xc000040A
> +#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
> +#define MSR_F10_MC4_MISC2		0xc0000408
> +#define MSR_F10_MC4_MISC3		0xc0000409
>  
>  /* AMD Family10h Bus Unit MSRs */
>  #define MSR_F10_BU_CFG 		0xc0011023
> -- 
> 1.7.9.5
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:24:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:24:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86mS-0007cP-T7; Tue, 28 Jan 2014 11:24:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W86mR-0007cH-95
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:24:35 +0000
Received: from [85.158.143.35:33078] by server-2.bemta-4.messagelabs.com id
	06/7E-11386-27397E25; Tue, 28 Jan 2014 11:24:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390908273!1322003!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16524 invoked from network); 28 Jan 2014 11:24:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 11:24:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 11:24:34 +0000
Message-Id: <52E7A17D020000780011784E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 11:24:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1390862655-3461-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1390862655-3461-3-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1390862655-3461-3-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  
>      *val = 0;
>  
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))

As one of the other reviewers already said - 0xC0000000 would
be better recognizable here.

As to the 3 -> 0x13 change - I don't think this is conceptually
correct. While at present we emulate only 2 banks, this had
been different in the past and may become different again.
Hence introducing a dis-contiguity after bank 3 is undesirable.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:28:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86pu-0007ld-Oh; Tue, 28 Jan 2014 11:28:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W86pt-0007lW-91
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:28:09 +0000
Received: from [85.158.137.68:25961] by server-15.bemta-3.messagelabs.com id
	70/3F-11556-84497E25; Tue, 28 Jan 2014 11:28:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390908484!11808308!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17534 invoked from network); 28 Jan 2014 11:28:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:28:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97188743"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:28:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:28:04 -0500
Message-ID: <1390908482.7753.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:28:02 +0000
In-Reply-To: <52E6AEF9.9050406@eu.citrix.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<52E6AEF9.9050406@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 19:09 +0000, George Dunlap wrote:
> On 01/27/2014 05:53 PM, Wei Liu wrote:
> > The original code is wrong because:
> > * claim mode wants to know the total number of pages needed while
> >    original code provides the additional number of pages needed.
> > * if pod is enabled memory will already be allocated by the time we try
> >    to claim memory.
> >
> > So the fix would be:
> > * move claim mode before actual memory allocation.
> > * pass the right number of pages to hypervisor.
> >
> > The "right number of pages" should be number of pages of target memory
> > minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> >
> > This fixes bug #32.
> >
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Cc: Konrad Wilk <konrad.wilk@oracle.com>
> > Cc: George Dunlap <george.dunlap@eu.citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> > ---
> > WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
> > mode is completely broken. If this patch is deemed too complicated, we
> > should flip the switch to disable claim mode by default for 4.4.
> 
> I think a more reasonable mitigation strategy would simply be to ignore 
> claim mode when constructing a domain that uses PoD.
> 
> I'm inclined to take this one.  Since claim mode is on by default, the 
> currently-working path should get exercised well before the release to 
> shake out any bugs.  The other path doesn't work at all currently 
> (AFAICT) unless you disable claim mode -- which is still available as a 
> work-around, even if there is a bug in this patch.
> 
> I'll wait a day or two for others to speak up before giving it a formal 
> ack, just in case.

I agree with this plan.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:37:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W86yZ-0008As-UT; Tue, 28 Jan 2014 11:37:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W86yY-0008Ak-LA
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 11:37:06 +0000
Received: from [193.109.254.147:22120] by server-6.bemta-14.messagelabs.com id
	4C/21-14958-26697E25; Tue, 28 Jan 2014 11:37:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390909024!317904!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14414 invoked from network); 28 Jan 2014 11:37:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:37:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95187896"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 11:37:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:37:03 -0500
Message-ID: <1390909022.7753.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 28 Jan 2014 11:37:02 +0000
In-Reply-To: <1389651599-26562-1-git-send-email-baozich@gmail.com>
References: <1389651599-26562-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm{32,
 64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-14 at 06:19 +0800, Chen Baozi wrote:
>          ldr   r4, =BOOT_FDT_VIRT_START
> -        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
> +        mov   r4, r4, lsr #(SECOND_SHIFT)   /* Slot for BOOT_FDT_VIRT_START */

Comparing the objdump before and after shows:
        @@ -299,7 +299,7 @@
           20041c:	e3822c0e 	orr	r2, r2, #3584	; 0xe00
           200420:	e382207d 	orr	r2, r2, #125	; 0x7d
           200424:	e3a04606 	mov	r4, #6291456	; 0x600000
        -  200428:	e1a04924 	lsr	r4, r4, #18
        +  200428:	e1a04aa4 	lsr	r4, r4, #21
           20042c:	e18120f4 	strd	r2, [r1, r4]
           200430:	f57ff04f 	dsb	sy
           200434:	e28f0004 	add	r0, pc, #4
        
which I think is unexpected/incorrect. I think you wanted #(SECOND_SHIFT
- 3) as elsewhere.

The only other change to the binary was the expected s/20/21/ in both
arm32 and arm64.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:41:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W872d-0008R5-Nd; Tue, 28 Jan 2014 11:41:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W872d-0008R0-0C
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:41:19 +0000
Received: from [85.158.143.35:64454] by server-1.bemta-4.messagelabs.com id
	A7/69-02132-E5797E25; Tue, 28 Jan 2014 11:41:18 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390909274!1314962!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7622 invoked from network); 28 Jan 2014 11:41:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:41:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97191745"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:41:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:41:14 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W872X-000461-4S;
	Tue, 28 Jan 2014 11:41:13 +0000
Message-ID: <52E79757.9090708@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:41:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1390839925-28088-2-git-send-email-andrew.cooper3@citrix.com>
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [Patch 1/2] tools/libxc: goto correct label on
 error paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/27/2014 04:25 PM, Andrew Cooper wrote:
> Both of these "goto finish;" statements are actually errors, and need to "goto
> out;" instead, which will correctly destroy the domain and return an error,
> rather than trying to finish the migration (and in at least one scenario,
> return success).
>
> Signed-off-by: Andrew Cooper<andrew.cooper3@citrix.com>
> CC: Ian Campbell<Ian.Campbell@citrix.com>
> CC: Ian Jackson<Ian.Jackson@eu.citrix.com>
> CC: George Dunlap<george.dunlap@eu.citrix.com>

Right -- I can't imagine any goodness coming from jumping to "finish" in 
those cases...

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
>   tools/libxc/xc_domain_restore.c |    4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index ca2fb51..5ba47d7 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -1778,14 +1778,14 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>   
>       if ( pagebuf_get(xch, ctx, &pagebuf, io_fd, dom) ) {
>           PERROR("error when buffering batch, finishing");
> -        goto finish;
> +        goto out;
>       }
>       memset(&tmptail, 0, sizeof(tmptail));
>       tmptail.ishvm = hvm;
>       if ( buffer_tail(xch, ctx, &tmptail, io_fd, max_vcpu_id, vcpumap,
>                        ext_vcpucontext, vcpuextstate, vcpuextstate_size) < 0 ) {
>           ERROR ("error buffering image tail, finishing");
> -        goto finish;
> +        goto out;
>       }
>       tailbuf_free(&tailbuf);
>       memcpy(&tailbuf, &tmptail, sizeof(tailbuf));


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:47:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W878c-0000FB-5g; Tue, 28 Jan 2014 11:47:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W878Z-0000Ew-L9
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:47:27 +0000
Received: from [193.109.254.147:19145] by server-15.bemta-14.messagelabs.com
	id C2/F5-22186-EC897E25; Tue, 28 Jan 2014 11:47:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390909645!323186!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25170 invoked from network); 28 Jan 2014 11:47:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:47:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95191322"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 11:47:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:47:10 -0500
Message-ID: <1390909629.7753.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 28 Jan 2014 11:47:09 +0000
In-Reply-To: <52E6956C.6060100@citrix.com>
References: <1390588091-17670-1-git-send-email-andrew.cooper3@citrix.com>
	<1390843126.12230.46.camel@kazak.uk.xensource.com>
	<52E6956C.6060100@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] minios: Correct HYPERVISOR_physdev_op()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 17:20 +0000, Andrew Cooper wrote:
> On 27/01/14 17:18, Ian Campbell wrote:
> > On Fri, 2014-01-24 at 18:28 +0000, Andrew Cooper wrote:
> >> A physdev_op is a two-argument hypercall, taking a command parameter and an
> >> optional pointer to a structure.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> CC: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >> CC: Samuel Thibault <samuel.thibault@ens-lyon.org>
> > This hypercall is unused in minios and stubdoms I think? (Trying to
> > gauge how critical the error is).
> 
> Correct.  I suppose it is more of a "nice to fix" than "must fix" at
> this stage, although I was quite surprised that I needed to fix it.

Given that the function is just wrong and that it appears to be unused
in-tree, I think a simple compile test to confirm is sufficient to take
it into 4.4. Since that succeeded, I have acked + applied.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:48:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W879J-0000If-Jx; Tue, 28 Jan 2014 11:48:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8799-0000IC-Rp
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 11:48:09 +0000
Received: from [193.109.254.147:65298] by server-6.bemta-14.messagelabs.com id
	A3/FF-14958-3F897E25; Tue, 28 Jan 2014 11:48:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390909681!317535!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28117 invoked from network); 28 Jan 2014 11:48:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:48:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97193268"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:47:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:47:38 -0500
Message-ID: <1390909657.7753.64.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:47:37 +0000
In-Reply-To: <52E64DBE.5090603@eu.citrix.com>
References: <1390329931-28251-1-git-send-email-ian.jackson@eu.citrix.com>
	<52E64DBE.5090603@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/3] tools: Fixes for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 12:14 +0000, George Dunlap wrote:
> On 01/21/2014 06:45 PM, Ian Jackson wrote:
> > I think these three bugfixes are clear 4.4 material.
> >
> >   a 1/3 libxl: events: Pass correct nfds to poll
> >   a 2/3 xl: Free optdata_begin when saving domain config
> >   a 3/3 xenstore: xs_suspend_evtchn_port: always free portstr
> >
> > They have all been acked by Ian Campbell.
> 
> All three:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:48:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W879w-0000Qg-5I; Tue, 28 Jan 2014 11:48:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W879u-0000Oz-7R
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 11:48:50 +0000
Received: from [85.158.139.211:60408] by server-3.bemta-5.messagelabs.com id
	7E/91-04773-12997E25; Tue, 28 Jan 2014 11:48:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390909727!97855!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17583 invoked from network); 28 Jan 2014 11:48:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:48:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97193568"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:48:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:48:46 -0500
Message-ID: <1390909725.7753.65.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:48:45 +0000
In-Reply-To: <52E27E56.6000209@eu.citrix.com>
References: <1390312268-4468-1-git-send-email-fabio.fantoni@m2r.biz>
	<1390312565.20516.119.camel@kazak.uk.xensource.com>
	<52E27E56.6000209@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Fabio Fantoni <fabio.fantoni@m2r.biz>, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xensource.com, Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/hotplug: fix bug on xendomains using
	xl
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:53 +0000, George Dunlap wrote:
> On 01/21/2014 01:56 PM, Ian Campbell wrote:
> > On Tue, 2014-01-21 at 14:51 +0100, Fabio Fantoni wrote:
> >> Make rdname function work with xl
> >>
> >> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > Although I would have preferred a slightly more verbose changelog.
> 
> This clearly fixes a bug in xendomains.  The worst it might do is break 
> xendomains for xend; if Ian C. is reasonably confident, based on his 
> inspection of the patch, that it won't do so, then:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied; I added a paragraph based on Fabio's extended description from 
<52E64747.5010207@m2r.biz> to the commit message.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> >>
> >> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > Although I would have preferred a slightly more verbose changelog.
> 
> This clearly fixes a bug in xendomains.  The worst it might do is break 
> xendomains for xend; if Ian C. is reasonably confident, based on his 
> inspection of the patch that it won't do so, then:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied, I added a paragraph based on Fabio's extended description from 
<52E64747.5010207@m2r.biz> to the commit message.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:49:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:49:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87AS-0000c7-Pq; Tue, 28 Jan 2014 11:49:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W87AR-0000ah-7Q
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:49:23 +0000
Received: from [85.158.137.68:38490] by server-7.bemta-3.messagelabs.com id
	77/46-27599-14997E25; Tue, 28 Jan 2014 11:49:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390909759!11816543!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11825 invoked from network); 28 Jan 2014 11:49:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:49:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95191802"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 11:49:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:49:18 -0500
Message-ID: <1390909756.7753.66.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:49:16 +0000
In-Reply-To: <52E64BF9.8080608@eu.citrix.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390822680.12230.24.camel@kazak.uk.xensource.com>
	<52E64A95.2050201@eu.citrix.com>
	<1390824259.12230.26.camel@kazak.uk.xensource.com>
	<52E64BF9.8080608@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 12:07 +0000, George Dunlap wrote:
> Yes, "can't reboot via software" is a pretty big lack of functionality; 
> it weighs in pretty heavily against whatever potential regressions might 
> be introduced.
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

I've applied this. I think Julien's concerns about the boot loop
spamming the console if reboot fails are 4.5 material.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:49:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87Av-0000nm-L6; Tue, 28 Jan 2014 11:49:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W87Au-0000nW-56
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 11:49:52 +0000
Received: from [85.158.143.35:34524] by server-2.bemta-4.messagelabs.com id
	55/80-11386-F5997E25; Tue, 28 Jan 2014 11:49:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390909789!1328467!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14609 invoked from network); 28 Jan 2014 11:49:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:49:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95191920"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 11:49:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:49:48 -0500
Message-ID: <1390909787.7753.67.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 28 Jan 2014 11:49:47 +0000
In-Reply-To: <1390555374.2124.7.camel@kazak.uk.xensource.com>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
	<1383870390-9273-2-git-send-email-mattjd@gmail.com>
	<52E22BE1020000780011673E@nat28.tlf.novell.com>
	<1390555374.2124.7.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Matthew Daley <mattjd@gmail.com>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc/unlz4: always set an error return
 code on failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 09:22 +0000, Ian Campbell wrote:
> On Fri, 2014-01-24 at 08:01 +0000, Jan Beulich wrote:
> > "ret", being set to -1 early on, gets cleared by the first invocation
> > of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
> > subsequent failures wouldn't be noticed by the caller without setting
> > it back to -1 right after those calls.
> > 
> > Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
> > 
> > Reported-by: Matthew Daley <mattjd@gmail.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

This is a pretty obvious bug fix, which is already in Linux (where this
code originates), so I've applied it.

I guess you will do the Xen side one yourself.

Ian.
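[The failure mode Jan describes can be illustrated with a minimal, self-contained C sketch. The `fake_decompress_*` helpers below are hypothetical stand-ins for lz4_decompress()/lz4_decompress_unknownoutputsize(), not the real libxc code; only the error-code pattern is the point.]

```c
#include <assert.h>

/* Stand-ins: the first call succeeds (returns 0), a later one fails. */
static int fake_decompress_ok(void)   { return 0; }
static int fake_decompress_fail(void) { return 1; }

/* Buggy pattern: "ret" is armed to -1 once, up front.  The first
 * successful call overwrites it with 0, so a later failure that just
 * jumps to the exit path is reported to the caller as success. */
static int unpack_buggy(void)
{
    int ret = -1;

    ret = fake_decompress_ok();        /* success clears ret to 0 */
    if (ret)
        goto exit;
    if (fake_decompress_fail())        /* later step fails ...      */
        goto exit;                     /* ... but ret is still 0    */
exit:
    return ret;
}

/* Fixed pattern, as in the patch: re-arm ret to -1 right after each
 * decompression call, so only an explicit success path returns 0. */
static int unpack_fixed(void)
{
    int ret = -1;

    ret = fake_decompress_ok();
    if (ret)
        goto exit;
    ret = -1;                          /* reset right after the call */
    if (fake_decompress_fail())
        goto exit;
    ret = 0;
exit:
    return ret;
}
```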


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 11:50:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 11:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87Bc-0000vZ-Nv; Tue, 28 Jan 2014 11:50:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W87Ba-0000v5-Fb
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 11:50:34 +0000
Received: from [85.158.139.211:2097] by server-9.bemta-5.messagelabs.com id
	3D/D6-15098-98997E25; Tue, 28 Jan 2014 11:50:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390909831!98961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31977 invoked from network); 28 Jan 2014 11:50:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 11:50:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="97193796"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 11:50:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 06:50:30 -0500
Message-ID: <1390909829.7753.69.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 11:50:29 +0000
In-Reply-To: <52E0FBBC.1010102@eu.citrix.com>
References: <1390412471-12978-1-git-send-email-xenmail43267@gmail.com>
	<1390475173.24595.49.camel@kazak.uk.xensource.com>
	<52E0FBBC.1010102@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Mike Neilsen <mneilsen@acm.org>, stefano.stabellini@citrix.com,
	samuel.thibault@ens-lyon.org, alex.sharp@orionvm.com,
	xenmail43267@gmail.com
Subject: Re: [Xen-devel] [PATCH v2] mini-os: Fix stubdom build failures on
	gcc 4.8
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-23 at 11:23 +0000, George Dunlap wrote:
> On 01/23/2014 11:06 AM, Ian Campbell wrote:
> > On Wed, 2014-01-22 at 11:41 -0600, xenmail43267@gmail.com wrote:
> >> From: Mike Neilsen <mneilsen@acm.org>
> >>
> >> This is a fix for bug 35:
> >> http://bugs.xenproject.org/xen/bug/35
> >>
> >> This bug report describes several format string mismatches which prevent
> >> building the stubdom target in Xen 4.3 and Xen 4.4-rc2 on gcc 4.8.  This is a
> >> copy of Alex Sharp's original patch with the following modifications:
> >>
> >> * Andrew Cooper's recommendation applied to extras/mini-os/xenbus/xenbus.c to
> >>    avoid stack corruption
> >> * Samuel Thibault's recommendation to make "fun" an unsigned int rather than an
> >>    unsigned long in pcifront_physical_to_virtual and related functions
> >>    (extras/mini-os/include/pcifront.h and extras/mini-os/pcifront.c)
> >>
> >> Tested on x86_64 gcc Ubuntu/Linaro 4.8.1-10ubuntu9.
> >>
> >> Signed-off-by: Mike Neilsen <mneilsen@acm.org>
> >>
> >> ---
> >> Changed since v1:
> >> * Change "fun" arguments into unsigned ints
> >
> > Thanks for shaving that yak! Since you've done it I obviously rescind
> > my previous comments ;-) (I should have read all my mail first).
> >
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > George -- as a build fix for a compiler which is now in the wild I think
> > this should go in for 4.4.
> 
> I agree.
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied, thanks everyone.

Ian.
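[The class of error the bug report describes can be sketched in a few lines of C. gcc 4.8's stricter format checking (with -Werror) turns a conversion-specifier/argument-type mismatch, e.g. "%x" paired with an unsigned long, into a build failure; declaring the value as unsigned int makes specifier and argument agree. `format_bdf` and its field widths are hypothetical, not the actual mini-os code.]

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Hypothetical PCI bus/device/function formatter.  Had "fun" been an
 * unsigned long printed with "%x", gcc 4.8 would reject the build under
 * -Werror=format; as unsigned int, each %x matches its argument. */
static void format_bdf(char *buf, size_t len,
                       unsigned int bus, unsigned int dev, unsigned int fun)
{
    snprintf(buf, len, "%02x:%02x.%01x", bus, dev, fun);
}
```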


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:24:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:24:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87iJ-0002sH-0M; Tue, 28 Jan 2014 12:24:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W87iH-0002sC-D7
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 12:24:21 +0000
Received: from [85.158.137.68:63666] by server-8.bemta-3.messagelabs.com id
	E2/94-31081-471A7E25; Tue, 28 Jan 2014 12:24:20 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390911858!11796990!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1372 invoked from network); 28 Jan 2014 12:24:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:24:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95204154"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 12:24:18 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
From xen-devel-bounces@lists.xen.org Tue Jan 28 12:24:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:24:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87iJ-0002sH-0M; Tue, 28 Jan 2014 12:24:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W87iH-0002sC-D7
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 12:24:21 +0000
Received: from [85.158.137.68:63666] by server-8.bemta-3.messagelabs.com id
	E2/94-31081-471A7E25; Tue, 28 Jan 2014 12:24:20 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390911858!11796990!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1372 invoked from network); 28 Jan 2014 12:24:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:24:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,735,1384300800"; d="scan'208";a="95204154"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 12:24:18 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 07:24:17 -0500
Message-ID: <52E7A16F.6090401@citrix.com>
Date: Tue, 28 Jan 2014 13:24:15 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, <Xen-devel@lists.xensource.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 25/01/14 02:13, Mukesh Rathor wrote:
> Expose features for pvh domUs from tools.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
>  tools/libxc/xc_domain.c    |    1 +
>  tools/libxc/xenctrl.h      |    2 +-
>  3 files changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> index bbbf9b8..33f6829 100644
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
>  
>  static void xc_cpuid_pv_policy(
>      xc_interface *xch, domid_t domid,
> -    const unsigned int *input, unsigned int *regs)
> +    const unsigned int *input, unsigned int *regs, int is_pvh)
>  {
>      DECLARE_DOMCTL;
>      unsigned int guest_width;
> @@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
>  
>      if ( (input[0] & 0x7fffffff) == 0x00000001 )
>      {
> -        clear_bit(X86_FEATURE_VME, regs[3]);
> -        clear_bit(X86_FEATURE_PSE, regs[3]);
> -        clear_bit(X86_FEATURE_PGE, regs[3]);
> -        clear_bit(X86_FEATURE_MCE, regs[3]);
> -        clear_bit(X86_FEATURE_MCA, regs[3]);
> +        if ( !is_pvh )
> +        {
> +            clear_bit(X86_FEATURE_VME, regs[3]);
> +            clear_bit(X86_FEATURE_PSE, regs[3]);
> +            clear_bit(X86_FEATURE_PGE, regs[3]);
> +            clear_bit(X86_FEATURE_MCE, regs[3]);
> +            clear_bit(X86_FEATURE_MCA, regs[3]);

Should we enable MCA/MCE flags for PVH DomUs? It looks to me like Dom0
is the only domain that can make use of MCE/MCA.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:29:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87mg-00030B-Rm; Tue, 28 Jan 2014 12:28:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W87mg-000306-1H
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:28:54 +0000
Received: from [85.158.139.211:42741] by server-9.bemta-5.messagelabs.com id
	69/C0-15098-582A7E25; Tue, 28 Jan 2014 12:28:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390912132!110150!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1625 invoked from network); 28 Jan 2014 12:28:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 12:28:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 12:28:56 +0000
Message-Id: <52E7B09102000078001178A9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 12:28:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartEEDD5691.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/usbback: fix after c/s
	1232:8806dfb939d4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartEEDD5691.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

That c/s ("backends: Check for insane amounts of requests on the ring")
copied from blkback/blktap/scsiback a switch statement that is valid
there, but not here - the return value from usbbk_start_submit_urb()
could be any positive value, not just one.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/xen/usbback/usbback.c
+++ b/drivers/xen/usbback/usbback.c
@@ -1013,7 +1013,7 @@ static int usbbk_start_submit_urb(usbif_
=20
 	RING_FINAL_CHECK_FOR_REQUESTS(&usbif->urb_ring, more_to_do);
=20
-	return more_to_do;
+	return !!more_to_do;
 }
=20
 void usbbk_hotplug_notify(usbif_t *usbif, int portnum, int speed)




--=__PartEEDD5691.0__=
Content-Type: text/plain; name="xen-1232-fix.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xen-1232-fix.patch"

usbback: fix after c/s 1232:8806dfb939d4=0A=0AThat c/s ("backends: Check =
for insane amounts of requests on the ring")=0Acopied from blkback/blktap/s=
csiback a switch statement that is valid=0Athere, but not here - the =
return value from usbbk_start_submit_urb()=0Acould be any positive value, =
not just one.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/drivers/xen/usbback/usbback.c=0A+++ b/drivers/xen/usbback/usbback.c=0A@@ =
-1013,7 +1013,7 @@ static int usbbk_start_submit_urb(usbif_=0A =0A 	=
RING_FINAL_CHECK_FOR_REQUESTS(&usbif->urb_ring, more_to_do);=0A =0A-	=
return more_to_do;=0A+	return !!more_to_do;=0A }=0A =0A void usbbk_hotplug=
_notify(usbif_t *usbif, int portnum, int speed)=0A
--=__PartEEDD5691.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartEEDD5691.0__=--


From xen-devel-bounces@lists.xen.org Tue Jan 28 12:30:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87nn-00036D-D1; Tue, 28 Jan 2014 12:30:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W87nk-000364-H2
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 12:30:00 +0000
Received: from [85.158.139.211:14171] by server-5.bemta-5.messagelabs.com id
	B7/A6-14928-7C2A7E25; Tue, 28 Jan 2014 12:29:59 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390912197!108088!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5647 invoked from network); 28 Jan 2014 12:29:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:29:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97205681"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 12:29:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 07:29:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W87nf-0004or-Sx;
	Tue, 28 Jan 2014 12:29:55 +0000
Date: Tue, 28 Jan 2014 12:29:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390901845.7753.6.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> > flight 24553 qemu-upstream-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/
> > 
> > Failures :-/ but no regressions.
> 
> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> and so will require updating to actually pull this new stuff into the
> release.

OK. But given that the new code is not part of any RCs, should I wait
for the next one? Should we go back to "master"?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87oZ-0003BQ-6l; Tue, 28 Jan 2014 12:30:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W87oX-0003BB-Tt
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:30:50 +0000
Received: from [85.158.139.211:23680] by server-11.bemta-5.messagelabs.com id
	EA/92-23268-9F2A7E25; Tue, 28 Jan 2014 12:30:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390912248!109500!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17053 invoked from network); 28 Jan 2014 12:30:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 12:30:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 12:30:53 +0000
Message-Id: <52E7B10602000078001178AC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 12:30:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
	<1383870390-9273-2-git-send-email-mattjd@gmail.com>
	<52E22BE1020000780011673E@nat28.tlf.novell.com>
	<1390555374.2124.7.camel@kazak.uk.xensource.com>
	<1390909787.7753.67.camel@kazak.uk.xensource.com>
In-Reply-To: <1390909787.7753.67.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, Matthew Daley <mattjd@gmail.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc/unlz4: always set an error return
 code on failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 12:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-24 at 09:22 +0000, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 08:01 +0000, Jan Beulich wrote:
>> > "ret", being set to -1 early on, gets cleared by the first invocation
>> > of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
>> > subsequent failures wouldn't be noticed by the caller without setting
>> > it back to -1 right after those calls.
>> > 
>> > Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
>> > 
>> > Reported-by: Matthew Daley <mattjd@gmail.com>
>> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> This is a pretty obvious bug fix, which is already in Linux (where this
> code originates), so I've applied it.
> 
> I guess you will do the Xen side one yourself.

I was sort of hoping to get an ack from Keir, and wanted to apply
both as a pair. Now that you did the tools side, I guess I'll call it
trivial enough and apply the hypervisor side as is.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:32:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:32:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87q9-0003LN-QZ; Tue, 28 Jan 2014 12:32:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W87q7-0003L4-SE
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:32:28 +0000
Received: from [193.109.254.147:46610] by server-15.bemta-14.messagelabs.com
	id 6E/F8-22186-B53A7E25; Tue, 28 Jan 2014 12:32:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390912345!336797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30411 invoked from network); 28 Jan 2014 12:32:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:32:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97206458"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 12:32:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 07:32:24 -0500
Message-ID: <1390912343.7753.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 28 Jan 2014 12:32:23 +0000
In-Reply-To: <52E7B10602000078001178AC@nat28.tlf.novell.com>
References: <1383870390-9273-1-git-send-email-mattjd@gmail.com>
	<1383870390-9273-2-git-send-email-mattjd@gmail.com>
	<52E22BE1020000780011673E@nat28.tlf.novell.com>
	<1390555374.2124.7.camel@kazak.uk.xensource.com>
	<1390909787.7753.67.camel@kazak.uk.xensource.com>
	<52E7B10602000078001178AC@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Matthew Daley <mattjd@gmail.com>,
	IanJackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc/unlz4: always set an error return
 code on failures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 12:30 +0000, Jan Beulich wrote:
> >>> On 28.01.14 at 12:49, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Fri, 2014-01-24 at 09:22 +0000, Ian Campbell wrote:
> >> On Fri, 2014-01-24 at 08:01 +0000, Jan Beulich wrote:
> >> > "ret", being set to -1 early on, gets cleared by the first invocation
> >> > of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
> >> > subsequent failures wouldn't be noticed by the caller without setting
> >> > it back to -1 right after those calls.
> >> > 
> >> > Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
> >> > 
> >> > Reported-by: Matthew Daley <mattjd@gmail.com>
> >> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> 
> >> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > This is a pretty obvious bug fix, which is already in Linux (where this
> > code originates), so I've applied it.
> > 
> > I guess you will do the Xen side one yourself.
> 
> I was sort of hoping to get an ack from Keir, and wanted to apply
> both as a pair.

Oops, sorry!

>  Now that you did the tools side, I guess I'll call it
> trivial enough and apply the hypervisor side as is.

FWIW: Acked-By: Ian Campbell <ian.campbell@citrix.com> on that...



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:37:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W87ul-0003jN-Nt; Tue, 28 Jan 2014 12:37:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W87uk-0003jI-FE
	for Xen-devel@lists.xensource.com; Tue, 28 Jan 2014 12:37:14 +0000
Received: from [85.158.139.211:32358] by server-13.bemta-5.messagelabs.com id
	6F/A8-11357-974A7E25; Tue, 28 Jan 2014 12:37:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390912630!111329!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7489 invoked from network); 28 Jan 2014 12:37:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 12:37:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 12:37:17 +0000
Message-Id: <52E7B28202000078001178CC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 12:37:06 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "=?UTF-8?Q?Roger=20Pau=20Monn=C3=A9?=" <roger.pau@citrix.com>,
	<Xen-devel@lists.xensource.com>, "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
	<52E7A16F.6090401@citrix.com>
In-Reply-To: <52E7A16F.6090401@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
 domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 13:24, Roger Pau Monné<roger.pau@citrix.com> wrote:
> On 25/01/14 02:13, Mukesh Rathor wrote:
>> @@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
>>  
>>      if ( (input[0] & 0x7fffffff) == 0x00000001 )
>>      {
>> -        clear_bit(X86_FEATURE_VME, regs[3]);
>> -        clear_bit(X86_FEATURE_PSE, regs[3]);
>> -        clear_bit(X86_FEATURE_PGE, regs[3]);
>> -        clear_bit(X86_FEATURE_MCE, regs[3]);
>> -        clear_bit(X86_FEATURE_MCA, regs[3]);
>> +        if ( !is_pvh )
>> +        {
>> +            clear_bit(X86_FEATURE_VME, regs[3]);
>> +            clear_bit(X86_FEATURE_PSE, regs[3]);
>> +            clear_bit(X86_FEATURE_PGE, regs[3]);
>> +            clear_bit(X86_FEATURE_MCE, regs[3]);
>> +            clear_bit(X86_FEATURE_MCA, regs[3]);
> 
> Should we enable MCA/MCE flags for PVH DomUs? It looks to me like Dom0
> is the only domain that can make use of MCE/MCA.

We still have vMCE ...

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:44:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8821-0004E3-IP; Tue, 28 Jan 2014 12:44:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8820-0004Dw-4q
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:44:44 +0000
Received: from [85.158.143.35:57067] by server-1.bemta-4.messagelabs.com id
	70/75-02132-B36A7E25; Tue, 28 Jan 2014 12:44:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390913081!1334387!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5311 invoked from network); 28 Jan 2014 12:44:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:44:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95208687"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 12:44:40 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 07:44:39 -0500
Message-ID: <52E7A635.2090108@citrix.com>
Date: Tue, 28 Jan 2014 13:44:37 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127212146.GA32007@phenom.dumpdata.com>
In-Reply-To: <20140127212146.GA32007@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 22:21, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
>> I've at least identified two possible memory leaks in blkback, both
>> related to the shutdown path of a VBD:
>>
>> - We don't wait for any pending purge work to finish before cleaning
>>   the list of free_pages. The purge work will call put_free_pages and
>>   thus we might end up with pages being added to the free_pages list
>>   after we have emptied it.
>> - We don't wait for pending requests to end before cleaning persistent
>>   grants and the list of free_pages. Again this can add pages to the
>>   free_pages lists or persistent grants to the persistent_gnts
>>   red-black tree.
>>
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: Matt Rushton <mrushton@amazon.com>
>> Cc: Matt Wilson <msw@amazon.com>
>> Cc: Ian Campbell <Ian.Campbell@citrix.com>
>> ---
>> This should be applied after the patch:
>>
>> xen-blkback: fix memory leak when persistent grants are used
>>
>> From Matt Rushton & Matt Wilson and backported to stable.
>>
>> I've been able to create and destroy ~4000 guests while doing heavy IO
>> operations with this patch on a 512M Dom0 without problems.
>> ---
>>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>>  2 files changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 30ef7b3..19925b7 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>>  				struct pending_req *pending_req);
>>  static void make_response(struct xen_blkif *blkif, u64 id,
>>  			  unsigned short op, int st);
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>>  
>>  #define foreach_grant_safe(pos, n, rbtree, node) \
>>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
>> @@ -625,6 +626,12 @@ purge_gnt_list:
>>  			print_stats(blkif);
>>  	}
>>  
>> +	/* Drain pending IO */
>> +	xen_blk_drain_io(blkif, true);
>> +
>> +	/* Drain pending purge work */
>> +	flush_work(&blkif->persistent_purge_work);
>> +
> 
> I think this means we can eliminate the refcnt usage - at least when
> it comes to xen_blkif_disconnect where if we would initiate the shutdown, and
> there is
> 
> 239         atomic_dec(&blkif->refcnt);
> 240         wait_event(blkif->waiting_to_free, atomic_read(&blkif->refcnt) == 0);
> 241         atomic_inc(&blkif->refcnt);
> 242
> 
> which is done _after_ the thread is done executing. That check won't
> be needed anymore as the xen_blk_drain_io, flush_work, and free_persistent_gnts
> has pretty much drained every I/O out - so the moment the thread exits
> there should be no need for waiting_to_free. I think.

I've reworked this patch a bit, so we don't drain the in-flight requests
here, and instead moved all the cleanup code to xen_blkif_free. I've
also split the xen_blkif_put race fix into a separate patch.

> 
>>  	/* Free all persistent grant pages */
>>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
>> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>>  	return -EIO;
>>  }
>>  =

>> -static void xen_blk_drain_io(struct xen_blkif *blkif)
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>>  {
>>  	atomic_set(&blkif->drain, 1);
>>  	do {
>> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>>  
>>  		if (!atomic_read(&blkif->drain))
>>  			break;
>> -	} while (!kthread_should_stop());
>> +	} while (!kthread_should_stop() || force);
>>  	atomic_set(&blkif->drain, 0);
>>  }
>>  
>> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif =3D pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
> 
> I keep coming back to this and I am not sure what to think - especially
> in the context of WRITE_BARRIER and disconnecting the vbd.
> 
> You moved the 'free_req' to be done before you do atomic_read/dec.
> 
> Which means that we do:
> 
> 	list_add(&req->free_list, &blkif->pending_free);
> 	wake_up(&blkif->pending_free_wq);
> 
> 	atomic_dec
> 	if atomic_read <= 2 poke thread that is waiting for drain.
> 
> 
> while in the past we did:
> 
> 	atomic_dec
> 	if atomic_read <= 2 poke thread that is waiting for drain.
> 
> 	list_add(&req->free_list, &blkif->pending_free);
> 	wake_up(&blkif->pending_free_wq);
> 
> which means that we are giving the 'req' _before_ we decrement
> the refcnts.
> 
> Could that mean that __do_block_io_op takes it for a spin - oh
> wait it won't as it is sitting on a WRITE_BARRIER and waiting:
> 
> 1226         if (drain)
> 1227                 xen_blk_drain_io(pending_req->blkif);
> 
> But still that feels 'wrong'?

Mmmm, the wake_up call in free_req in the context of WRITE_BARRIER is
harmless since the thread is waiting on drain_complete as you say, but I
take your point that it's all confusing. Do you think it will feel
better if we gate the call to wake_up in free_req with this condition:

if (was_empty && !atomic_read(&blkif->drain))

Or is this just going to make it even messier?

Maybe just adding a comment in free_req saying that the wake_up call is
going to be ignored in the context of a WRITE_BARRIER, since the thread
is already waiting on drain_complete is enough.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:44:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8821-0004E3-IP; Tue, 28 Jan 2014 12:44:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8820-0004Dw-4q
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:44:44 +0000
Received: from [85.158.143.35:57067] by server-1.bemta-4.messagelabs.com id
	70/75-02132-B36A7E25; Tue, 28 Jan 2014 12:44:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390913081!1334387!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5311 invoked from network); 28 Jan 2014 12:44:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 12:44:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95208687"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 12:44:40 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 07:44:39 -0500
Message-ID: <52E7A635.2090108@citrix.com>
Date: Tue, 28 Jan 2014 13:44:37 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127212146.GA32007@phenom.dumpdata.com>
In-Reply-To: <20140127212146.GA32007@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 22:21, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
>> I've at least identified two possible memory leaks in blkback, both
>> related to the shutdown path of a VBD:
>>
>> - We don't wait for any pending purge work to finish before cleaning
>>   the list of free_pages. The purge work will call put_free_pages and
>>   thus we might end up with pages being added to the free_pages list
>>   after we have emptied it.
>> - We don't wait for pending requests to end before cleaning persistent
>>   grants and the list of free_pages. Again this can add pages to the
>>   free_pages lists or persistent grants to the persistent_gnts
>>   red-black tree.
>>
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: Matt Rushton <mrushton@amazon.com>
>> Cc: Matt Wilson <msw@amazon.com>
>> Cc: Ian Campbell <Ian.Campbell@citrix.com>
>> ---
>> This should be applied after the patch:
>>
>> xen-blkback: fix memory leak when persistent grants are used
>>
>> From Matt Rushton & Matt Wilson and backported to stable.
>>
>> I've been able to create and destroy ~4000 guests while doing heavy IO
>> operations with this patch on a 512M Dom0 without problems.
>> ---
>>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
>>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
>>  2 files changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 30ef7b3..19925b7 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
>>  				struct pending_req *pending_req);
>>  static void make_response(struct xen_blkif *blkif, u64 id,
>>  			  unsigned short op, int st);
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
>>  
>>  #define foreach_grant_safe(pos, n, rbtree, node) \
>>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
>> @@ -625,6 +626,12 @@ purge_gnt_list:
>>  			print_stats(blkif);
>>  	}
>>  
>> +	/* Drain pending IO */
>> +	xen_blk_drain_io(blkif, true);
>> +
>> +	/* Drain pending purge work */
>> +	flush_work(&blkif->persistent_purge_work);
>> +
> 
> I think this means we can eliminate the refcnt usage - at least when
> it comes to xen_blkif_disconnect where if we would initiate the shutdown, and
> there is
> 
> 239         atomic_dec(&blkif->refcnt);
> 240         wait_event(blkif->waiting_to_free, atomic_read(&blkif->refcnt) == 0);
> 241         atomic_inc(&blkif->refcnt);
> 242
> 
> which is done _after_ the thread is done executing. That check won't
> be needed anymore, as xen_blk_drain_io, flush_work, and free_persistent_gnts
> have pretty much drained every I/O out - so the moment the thread exits
> there should be no need for waiting_to_free. I think.

I've reworked this patch a bit, so we don't drain the in-flight requests
here, and instead moved all the cleanup code to xen_blkif_free. I've
also split the xen_blkif_put race fix into a separate patch.

> 
>>  	/* Free all persistent grant pages */
>>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
>>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
>> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
>>  	return -EIO;
>>  }
>>  
>> -static void xen_blk_drain_io(struct xen_blkif *blkif)
>> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
>>  {
>>  	atomic_set(&blkif->drain, 1);
>>  	do {
>> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
>>  
>>  		if (!atomic_read(&blkif->drain))
>>  			break;
>> -	} while (!kthread_should_stop());
>> +	} while (!kthread_should_stop() || force);
>>  	atomic_set(&blkif->drain, 0);
>>  }
>>  
>> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
> 
> I keep coming back to this and I am not sure what to think - especially
> in the context of WRITE_BARRIER and disconnecting the vbd.
> 
> You moved the 'free_req' to be done before you do atomic_read/dec.
> 
> Which means that we do:
> 
> 	list_add(&req->free_list, &blkif->pending_free);
> 	wake_up(&blkif->pending_free_wq);
> 
> 	atomic_dec
> 	if atomic_read <= 2, poke the thread that is waiting for drain.
> 
> while in the past we did:
> 
> 	atomic_dec
> 	if atomic_read <= 2, poke the thread that is waiting for drain.
> 
> 	list_add(&req->free_list, &blkif->pending_free);
> 	wake_up(&blkif->pending_free_wq);
> 
> which means that we are giving out the 'req' _before_ we decrement
> the refcnt.
> 
> Could that mean that __do_block_io_op takes it for a spin - oh
> wait, it won't, as it is sitting on a WRITE_BARRIER and waiting:
> 
> 1226         if (drain)
> 1227                 xen_blk_drain_io(pending_req->blkif);
> 
> But still, that feels 'wrong'?
Mmmm, the wake_up call in free_req in the context of a WRITE_BARRIER is
harmless, since the thread is waiting on drain_complete as you say, but I
take your point that it's all confusing. Do you think it would feel
better if we gated the call to wake_up in free_req with this condition:

if (was_empty && !atomic_read(&blkif->drain))

Or is this just going to make it even messier?

Maybe it's enough to just add a comment in free_req saying that the
wake_up call is going to be ignored in the context of a WRITE_BARRIER,
since the thread is already waiting on drain_complete.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 12:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 12:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W889B-0004dT-Pg; Tue, 28 Jan 2014 12:52:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W889A-0004dO-RH
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 12:52:09 +0000
Received: from [85.158.139.211:52511] by server-1.bemta-5.messagelabs.com id
	6C/CB-21065-7F7A7E25; Tue, 28 Jan 2014 12:52:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390913526!100976!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5545 invoked from network); 28 Jan 2014 12:52:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 12:52:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 12:52:19 +0000
Message-Id: <52E7B61202000078001178E3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 12:52:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part6754DF12.0__="
Subject: [Xen-devel] [PATCH] linux-2.6.18/privcmd: sprinkle around
 cond_resched() calls in mmap ioctl handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part6754DF12.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Many of these operations can be arbitrarily long, which can become a
problem irrespective of them being exposed to privileged users only.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- sle11sp3.orig/drivers/xen/privcmd/privcmd.c	2012-12-12 12:05:51.000000000 +0100
+++ sle11sp3/drivers/xen/privcmd/privcmd.c	2014-01-16 10:01:23.000000000 +0100
@@ -126,6 +126,9 @@ static long privcmd_ioctl(struct file *f
 
 		p = mmapcmd.entry;
 		for (i = 0; i < mmapcmd.num;) {
+			if (i)
+				cond_resched();
+
 			nr = min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
 
 			ret = -ENOMEM;
@@ -158,6 +161,9 @@ static long privcmd_ioctl(struct file *f
 
 		i = 0;
 		list_for_each(l, &pagelist) {
+			if (i)
+				cond_resched();
+
 			nr = i + min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
 
 			msg = (privcmd_mmap_entry_t*)(l + 1);
@@ -186,6 +192,9 @@ static long privcmd_ioctl(struct file *f
 		addr = vma->vm_start;
 		i = 0;
 		list_for_each(l, &pagelist) {
+			if (i)
+				cond_resched();
+
 			nr = i + min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
 
 			msg = (privcmd_mmap_entry_t*)(l + 1);
@@ -209,8 +218,12 @@ static long privcmd_ioctl(struct file *f
 
 	mmap_out:
 		up_write(&mm->mmap_sem);
-		list_for_each_safe(l,l2,&pagelist)
+		i = 0;
+		list_for_each_safe(l, l2, &pagelist) {
+			if (!(++i & 7))
+				cond_resched();
 			free_page((unsigned long)l);
+		}
 	}
 #undef MMAP_NR_PER_PAGE
 	break;
@@ -236,6 +249,9 @@ static long privcmd_ioctl(struct file *f
 
 		p = m.arr;
 		for (i=0; i<nr_pages; )	{
+			if (i)
+				cond_resched();
+
 			nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 
 			ret = -ENOMEM;
@@ -270,6 +286,9 @@ static long privcmd_ioctl(struct file *f
 		ret = 0;
 		paged_out = 0;
 		list_for_each(l, &pagelist) {
+			if (i)
+				cond_resched();
+
 			nr = i + min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 			mfn = (unsigned long *)(l + 1);
 
@@ -302,6 +321,9 @@ static long privcmd_ioctl(struct file *f
 			else
 				ret = 0;
 			list_for_each(l, &pagelist) {
+				if (i)
+					cond_resched();
+
 				nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 				mfn = (unsigned long *)(l + 1);
 				if (copy_to_user(p, mfn, nr*sizeof(*mfn)))
@@ -310,8 +332,12 @@ static long privcmd_ioctl(struct file *f
 			}
 		}
 	mmapbatch_out:
-		list_for_each_safe(l,l2,&pagelist)
+		i = 0;
+		list_for_each_safe(l, l2, &pagelist) {
+			if (!(++i & 7))
+				cond_resched();
 			free_page((unsigned long)l);
+		}
 	}
 	break;
 
@@ -335,6 +361,9 @@ static long privcmd_ioctl(struct file *f
 
 		p = m.arr;
 		for (i = 0; i < nr_pages; i += nr, p += nr) {
+			if (i)
+				cond_resched();
+
 			nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 
 			ret = -ENOMEM;
@@ -367,6 +396,9 @@ static long privcmd_ioctl(struct file *f
 		ret = 0;
 		paged_out = 0;
 		list_for_each(l, &pagelist) {
+			if (i)
+				cond_resched();
+
 			nr = i + min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 			mfn = (void *)(l + 1);
 			err = (void *)(l + 1);
@@ -397,6 +429,9 @@ static long privcmd_ioctl(struct file *f
 			ret = paged_out ? -ENOENT : 0;
 			i = 0;
 			list_for_each(l, &pagelist) {
+				if (i)
+					cond_resched();
+
 				nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
 				err = (void *)(l + 1);
 				if (copy_to_user(p, err, nr * sizeof(*err)))
@@ -407,8 +442,12 @@ static long privcmd_ioctl(struct file *f
 			ret = -EFAULT;
 
 	mmapbatch_v2_out:
-		list_for_each_safe(l, l2, &pagelist)
+		i = 0;
+		list_for_each_safe(l, l2, &pagelist) {
+			if (!(++i & 7))
+				cond_resched();
 			free_page((unsigned long)l);
+		}
 #undef MMAPBATCH_NR_PER_PAGE
 	}
 	break;




--=__Part6754DF12.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part6754DF12.0__=--


From xen-devel-bounces@lists.xen.org Tue Jan 28 13:55:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 13:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W897z-0006UG-Up; Tue, 28 Jan 2014 13:54:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1W897y-0006Tt-Fn; Tue, 28 Jan 2014 13:54:58 +0000
Received: from [85.158.143.35:13578] by server-2.bemta-4.messagelabs.com id
	13/3F-11386-1B6B7E25; Tue, 28 Jan 2014 13:54:57 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390917296!1359153!1
X-Originating-IP: [74.125.82.45]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_SEX,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8882 invoked from network); 28 Jan 2014 13:54:56 -0000
Received: from mail-wg0-f45.google.com (HELO mail-wg0-f45.google.com)
	(74.125.82.45)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 13:54:56 -0000
Received: by mail-wg0-f45.google.com with SMTP id n12so842750wgh.24
	for <multiple recipients>; Tue, 28 Jan 2014 05:54:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=1KZm4BjhqSNlL7j3KoXkcYOsI5joh9Z2wFRDmUrUbnM=;
	b=Gk21K3oqXclgM6IXLhBFefRObLR54Op9UZgYzrde4UnRUiE9vEY9TyefIFHLINocQ4
	qJvZDc66nqJs6IdUSVf6fnMeQCZqzzM64Bog9a9Gms6EZ/HYDmK7qutxxdeuOoCRDgP6
	hUCjM+P7qjp+5qOU/OQ8s1FEqjpkLJ+H3cIjolkvbuhCJphGwUdAj7ETs2mOYHc4PLNM
	MlD1/SlyM6RebilWbtcBw/bqT5INmQVSfcRjOuh7znDY/+vkg2IEjIqEkXlU6B/2CP7z
	tWe5HBC24iJz6xiXEUQFLwkqEhj/6EVWXyX4Xjn99LKLP9EkxZedeq2ZKU4A6+uz117Z
	7M+w==
X-Received: by 10.181.13.40 with SMTP id ev8mr2134573wid.16.1390917296523;
	Tue, 28 Jan 2014 05:54:56 -0800 (PST)
Received: from [172.16.26.11] ([2.122.219.75])
	by mx.google.com with ESMTPSA id fb8sm11574023wic.3.2014.01.28.05.54.55
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 05:54:55 -0800 (PST)
Message-ID: <52E7B6AF.3050604@xen.org>
Date: Tue, 28 Jan 2014 13:54:55 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	mirageos-devel@lists.xenproject.org
References: <52DCE9FA.6010400@xen.org>
In-Reply-To: <52DCE9FA.6010400@xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
I have not gotten any reply to this thread. I saw Wei Liu and Andrés
Lagar-Cavilla make changes to the project list. Please go through the
items below and make changes as suggested. Otherwise, our chances of
getting into GSoC 2014 will be relatively slim.
Lars

On 20/01/2014 09:18, Lars Kurth wrote:
> Hi all,
>
> the GSoC application deadline is coming up: Feb 14, 2014. If we want to
> have any chance of getting accepted this year, we ought to get our
> project list into good shape. The project list and how the project and
> mentors present themselves have a bigger impact on whether we get
> accepted than the actual application.
>
> Also, I would like to add a mentor section this year: a short bio,
> what the mentor cares about and a picture. This will help make the
> project list more real.
>
> We have *4 weeks* to do this. The bar for GSoC has been getting
> increasingly high. I know we are tied down with Xen 4.4, but this is
> something you need to do if you want the Xen Project to participate.
>
> a) Please update
> http://wiki.xenproject.org/wiki/Xen_Development_Projects urgently
> (these need to be in good shape *before* the application). What I need
> you to do is:
> a.1) Remove items that are done
> a.2) Add new work items: we ought to have a few sexy topics on, say,
> real-time, mobile and some of the other segments (assuming we can get HW)
> a.3) All project proposals need to be peer reviewed *and* clear ...
> The peer review process we put in place for projects last year worked
> well: we had past mentors sign off on project proposals that were
> in good enough shape.
>
> b) Anyone who has kernel/linux/bsd/distro/qemu work items should
> get these listed on the respective other programs. And we should link
> to these from our project page.
>
> Best Regards
> Lars
> P.S.: I will also see whether we can participate as Xen Project under
> the LF GSoC program, but last year there was push-back and I don't
> expect this to change
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 13:58:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 13:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89BX-0006gn-Le; Tue, 28 Jan 2014 13:58:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W89BW-0006gg-8T
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 13:58:38 +0000
Received: from [85.158.143.35:48990] by server-3.bemta-4.messagelabs.com id
	F6/C0-32360-D87B7E25; Tue, 28 Jan 2014 13:58:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390917515!1369337!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6300 invoked from network); 28 Jan 2014 13:58:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 13:58:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97234417"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 13:58:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 08:58:32 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W89BQ-00065b-BD;
	Tue, 28 Jan 2014 13:58:32 +0000
Date: Tue, 28 Jan 2014 13:58:28 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
Message-ID: <alpine.DEB.2.02.1401281345521.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
> This patch is needed to avoid possible deadlocks in case of simultaneous
> occurrence of cross-interrupts.
> 
> Change-Id: I574b496442253a7b67a27e2edd793526c8131284
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
> ---
>  xen/common/smp.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/smp.c b/xen/common/smp.c
> index 2700bd7..46d2fc6 100644
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -55,7 +55,11 @@ void on_selected_cpus(
>  
>      ASSERT(local_irq_is_enabled());
>  
> -    spin_lock(&call_lock);
> +    if (!spin_trylock(&call_lock)) {
> +        if (smp_call_function_interrupt())
> +            return;

If smp_call_function_interrupt returns -EPERM, shouldn't we go back to
spinning on call_lock?
Also there is a race condition between spin_lock, cpumask_copy and
smp_call_function_interrupt: smp_call_function_interrupt could be called
on cpu1 after cpu0 acquired the lock, but before cpu0 set
call_data.selected.

I think the correct implementation would be:


while ( unlikely(!spin_trylock(&call_lock)) )
    smp_call_function_interrupt();



> +        spin_lock(&call_lock);
> +    }
>  
>      cpumask_copy(&call_data.selected, selected);
>  
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:00:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89DP-00075T-8X; Tue, 28 Jan 2014 14:00:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W89DN-00075N-Oe
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:00:33 +0000
Received: from [85.158.137.68:47202] by server-13.bemta-3.messagelabs.com id
	59/86-28603-108B7E25; Tue, 28 Jan 2014 14:00:33 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390917631!11854756!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31151 invoked from network); 28 Jan 2014 14:00:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:00:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95238857"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:00:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:00:30 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W89DJ-00067K-BI;
	Tue, 28 Jan 2014 14:00:29 +0000
Date: Tue, 28 Jan 2014 14:00:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390903423.7753.23.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > but I am a bit wary of making that change at RC2.
> 
> I'm leaning the other way -- I'm wary of open coding magic locking
> primitives to work around this issue on a case by case basis. It's just
> too subtle IMHO.
> 
> The IPI and cross CPU calling primitives are basically predicated on
> those IPIs interrupting normal interrupt handlers.

The problem is that we don't know whether we can properly context-switch
nested interrupts. Also, I would need to think harder about whether everything
would work correctly without hitches with multiple SGIs happening
simultaneously (with more than 2 cpus involved).

On the other hand we know that both Oleksandr's and my solution should
work OK with no surprises if implemented correctly.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:01:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89Ej-0007Cj-5Z; Tue, 28 Jan 2014 14:01:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89Eh-0007CY-CR
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:01:55 +0000
Received: from [85.158.137.68:6252] by server-16.bemta-3.messagelabs.com id
	A9/37-26128-258B7E25; Tue, 28 Jan 2014 14:01:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390917713!11844594!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8388 invoked from network); 28 Jan 2014 14:01:53 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-31.messagelabs.com with SMTP;
	28 Jan 2014 14:01:53 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 28 Jan 2014 06:01:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="445808048"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 28 Jan 2014 06:01:12 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:01:12 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:01:11 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:01:08 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring
	feature
From xen-devel-bounces@lists.xen.org Tue Jan 28 14:01:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89Ej-0007Cj-5Z; Tue, 28 Jan 2014 14:01:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89Eh-0007CY-CR
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:01:55 +0000
Received: from [85.158.137.68:6252] by server-16.bemta-3.messagelabs.com id
	A9/37-26128-258B7E25; Tue, 28 Jan 2014 14:01:54 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390917713!11844594!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8388 invoked from network); 28 Jan 2014 14:01:53 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-31.messagelabs.com with SMTP;
	28 Jan 2014 14:01:53 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 28 Jan 2014 06:01:49 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="445808048"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 28 Jan 2014 06:01:12 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:01:12 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:01:11 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:01:08 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring
	feature
Thread-Index: AQHPFd/A3DmK9Hy4sESluDdODQuNw5qaNqjQ
Date: Tue, 28 Jan 2014 14:01:08 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A91192371B@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-2-git-send-email-dongxiao.xu@intel.com>
	<52DD2C17020000780011507E@nat28.tlf.novell.com>
In-Reply-To: <52DD2C17020000780011507E@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 1/7] x86: detect and initialize Cache QoS
 Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:01 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring
> feature
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > --- a/xen/arch/x86/cpu/intel.c
> > +++ b/xen/arch/x86/cpu/intel.c
> > @@ -230,6 +230,12 @@ static void __devinit init_intel(struct cpuinfo_x86 *c)
> >  	     ( c->cpuid_level >= 0x00000006 ) &&
> >  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
> >  		set_bit(X86_FEATURE_ARAT, c->x86_capability);
> > +
> > +	/* Check platform QoS monitoring capability */
> > +	if ((c->cpuid_level >= 0x00000007) &&
> > +	    (cpuid_ebx(0x00000007) & (1u<<12)))
> > +		set_bit(X86_FEATURE_QOSM, c->x86_capability);
> > +
> 
> This is redundant with generic_identify() setting the respective
> c->x86_capability[] element.
> 
> > +struct pqos_cqm *cqm;
> 
> __read_mostly?
cqm->rmid_to_dom will be updated from time to time.

> 
> > +
> > +static void __init init_cqm(void)
> > +{
> > +    unsigned int rmid;
> > +    unsigned int eax, edx;
> > +
> > +    if ( !opt_cqm_max_rmid )
> > +        return;
> > +
> > +    cqm = xzalloc(struct pqos_cqm);
> > +    if ( !cqm )
> > +        return;
> > +
> > +    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid,
> &edx);
> > +    if ( !(edx & QOS_MONITOR_EVTID_L3) )
> > +    {
> > +        xfree(cqm);
> 
> "cqm" is a global variable and - afaict - the only way for other
> entities to tell whether the functionality is enabled. Hence shouldn't
> you clear the variable here (and similarly further down)? Otherwise
> the variable should perhaps be static.

Yes, I will correct it in the following patches.

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:04:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:04:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89HP-0007NI-Qk; Tue, 28 Jan 2014 14:04:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W89HO-0007NB-LJ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:04:42 +0000
Received: from [193.109.254.147:22428] by server-11.bemta-14.messagelabs.com
	id 01/A6-20576-AF8B7E25; Tue, 28 Jan 2014 14:04:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390917880!362688!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12683 invoked from network); 28 Jan 2014 14:04:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 14:04:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 14:06:05 +0000
Message-Id: <52E7C7130200007800117945@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 14:04:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-2-git-send-email-dongxiao.xu@intel.com>
	<52DD2C17020000780011507E@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A91192371B@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A91192371B@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 1/7] x86: detect and initialize Cache QoS
 Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:01, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Monday, January 20, 2014 9:01 PM
>> To: Xu, Dongxiao
>> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
>> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
>> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
>> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org 
>> Subject: Re: [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring
>> feature
>> 
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> > --- a/xen/arch/x86/cpu/intel.c
>> > +++ b/xen/arch/x86/cpu/intel.c
>> > @@ -230,6 +230,12 @@ static void __devinit init_intel(struct cpuinfo_x86 *c)
>> >  	     ( c->cpuid_level >= 0x00000006 ) &&
>> >  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
>> >  		set_bit(X86_FEATURE_ARAT, c->x86_capability);
>> > +
>> > +	/* Check platform QoS monitoring capability */
>> > +	if ((c->cpuid_level >= 0x00000007) &&
>> > +	    (cpuid_ebx(0x00000007) & (1u<<12)))
>> > +		set_bit(X86_FEATURE_QOSM, c->x86_capability);
>> > +
>> 
>> This is redundant with generic_identify() setting the respective
>> c->x86_capability[] element.
>> 
>> > +struct pqos_cqm *cqm;
>> 
>> __read_mostly?
> cqm->rmid_to_dom will be updated time to time.

But the attribute applies to the variable itself, not what it points to.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:10:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89MO-0007fp-US; Tue, 28 Jan 2014 14:09:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89MN-0007fj-8f
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:09:51 +0000
Received: from [85.158.143.35:54691] by server-1.bemta-4.messagelabs.com id
	3D/B4-02132-E2AB7E25; Tue, 28 Jan 2014 14:09:50 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390918189!1362004!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14717 invoked from network); 28 Jan 2014 14:09:49 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-8.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 14:09:49 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 28 Jan 2014 06:09:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="472028056"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 28 Jan 2014 06:09:48 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:09:47 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:09:40 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 2/7] x86: dynamically attach/detach CQM service for
	a guest
Thread-Index: AQHPFeF8+LbRF3elZkSB1+ZG7HiQe5qaNw0g
Date: Tue, 28 Jan 2014 14:09:39 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
In-Reply-To: <52DD2F1A02000078001150A4@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:14 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 2/7] x86: dynamically attach/detach CQM service for a
> guest
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > @@ -1223,6 +1224,45 @@ long arch_do_domctl(
> >      }
> >      break;
> >
> > +    case XEN_DOMCTL_attach_pqos:
> > +    {
> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> > +        {
> > +            if ( !system_supports_cqm() )
> > +                ret = -ENODEV;
> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
> > +                ret = -EEXIST;
> > +            else
> > +            {
> > +                ret = alloc_cqm_rmid(d);
> > +                if ( ret < 0 )
> > +                    ret = -EUSERS;
> 
> Why don't you have the function return a sensible error code
> (which presumably might also end up being other than -EUSERS,
> e.g. -ENOMEM).

For RMID assignment, I don't think an -ENOMEM error can occur, so I think -EUSERS is better here?

> 
> > +            }
> > +        }
> > +        else
> > +            ret = -EINVAL;
> > +    }
> > +    break;
> > +
> > +    case XEN_DOMCTL_detach_pqos:
> > +    {
> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> > +        {
> > +            if ( !system_supports_cqm() )
> > +                ret = -ENODEV;
> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
> > +            {
> > +                free_cqm_rmid(d);
> > +                ret = 0;
> > +            }
> > +            else
> > +                ret = -ENOENT;
> > +        }
> > +        else
> > +            ret = -EINVAL;
> > +    }
> > +    break;
> 
> For consistency, both of the above would better be changed to a
> single series of if()/else if().../else.

Will re-format it in the following patch.

> 
> > +bool_t system_supports_cqm(void)
> > +{
> > +    return !!cqm;
> 
> So here we go (wrt the remark on patch 1).

Yes

> 
> > +}
> > +
> > +int alloc_cqm_rmid(struct domain *d)
> > +{
> > +    int rc = 0;
> > +    unsigned int rmid;
> > +    unsigned long flags;
> > +
> > +    ASSERT(system_supports_cqm());
> > +
> > +    spin_lock_irqsave(&cqm_lock, flags);
> 
> Why not just spin_lock()? Briefly scanning over the following patches
> doesn't point out anything that might require this to be an IRQ-safe
> lock.

Will change it to a simple spin_lock().

> 
> > +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> > +    {
> > +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
> > +            continue;
> > +
> > +        cqm->rmid_to_dom[rmid] = d->domain_id;
> > +        break;
> > +    }
> > +    spin_unlock_irqrestore(&cqm_lock, flags);
> > +
> > +    /* No CQM RMID available, assign RMID=0 by default */
> > +    if ( rmid > cqm->max_rmid )
> > +    {
> > +        rmid = 0;
> > +        rc = -1;
> > +    }
> > +
> > +    d->arch.pqos_cqm_rmid = rmid;
> 
> Is it really safe to do this and the freeing below outside of the
> lock?

Could you elaborate on the race condition here?

As I understand it, there can't be any contention over who owns the entry. Besides, setting it
back to DOMID_INVALID won't race with the allocation loop.

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:10:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89MO-0007fp-US; Tue, 28 Jan 2014 14:09:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89MN-0007fj-8f
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:09:51 +0000
Received: from [85.158.143.35:54691] by server-1.bemta-4.messagelabs.com id
	3D/B4-02132-E2AB7E25; Tue, 28 Jan 2014 14:09:50 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390918189!1362004!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14717 invoked from network); 28 Jan 2014 14:09:49 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-8.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 14:09:49 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 28 Jan 2014 06:09:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="472028056"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 28 Jan 2014 06:09:48 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:09:47 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:09:40 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 2/7] x86: dynamically attach/detach CQM service for
	a guest
Thread-Index: AQHPFeF8+LbRF3elZkSB1+ZG7HiQe5qaNw0g
Date: Tue, 28 Jan 2014 14:09:39 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
In-Reply-To: <52DD2F1A02000078001150A4@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:14 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 2/7] x86: dynamically attach/detach CQM service for a
> guest
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > @@ -1223,6 +1224,45 @@ long arch_do_domctl(
> >      }
> >      break;
> >
> > +    case XEN_DOMCTL_attach_pqos:
> > +    {
> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> > +        {
> > +            if ( !system_supports_cqm() )
> > +                ret = -ENODEV;
> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
> > +                ret = -EEXIST;
> > +            else
> > +            {
> > +                ret = alloc_cqm_rmid(d);
> > +                if ( ret < 0 )
> > +                    ret = -EUSERS;
> 
> Why don't you have the function return a sensible error code
> (which presumably might also end up being other than -EUSERS,
> e.g. -ENOMEM).

For the assignment of RMID, I don't think there will be error of ENOMEM, so I think EUSER will be better here?

> 
> > +            }
> > +        }
> > +        else
> > +            ret = -EINVAL;
> > +    }
> > +    break;
> > +
> > +    case XEN_DOMCTL_detach_pqos:
> > +    {
> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
> > +        {
> > +            if ( !system_supports_cqm() )
> > +                ret = -ENODEV;
> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
> > +            {
> > +                free_cqm_rmid(d);
> > +                ret = 0;
> > +            }
> > +            else
> > +                ret = -ENOENT;
> > +        }
> > +        else
> > +            ret = -EINVAL;
> > +    }
> > +    break;
> 
> For consistency, both of the above would better be changed to a
> single series of if()/else if().../else.

Will re-format in following patch.

> 
> > +bool_t system_supports_cqm(void)
> > +{
> > +    return !!cqm;
> 
> So here we go (wrt the remark on patch 1).

Yes

> 
> > +}
> > +
> > +int alloc_cqm_rmid(struct domain *d)
> > +{
> > +    int rc = 0;
> > +    unsigned int rmid;
> > +    unsigned long flags;
> > +
> > +    ASSERT(system_supports_cqm());
> > +
> > +    spin_lock_irqsave(&cqm_lock, flags);
> 
> Why not just spin_lock()? Briefly scanning over the following patches
> doesn't point out anything that might require this to be an IRQ-safe
> lock.

Will change it to a plain spin_lock().

> 
> > +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> > +    {
> > +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
> > +            continue;
> > +
> > +        cqm->rmid_to_dom[rmid] = d->domain_id;
> > +        break;
> > +    }
> > +    spin_unlock_irqrestore(&cqm_lock, flags);
> > +
> > +    /* No CQM RMID available, assign RMID=0 by default */
> > +    if ( rmid > cqm->max_rmid )
> > +    {
> > +        rmid = 0;
> > +        rc = -1;
> > +    }
> > +
> > +    d->arch.pqos_cqm_rmid = rmid;
> 
> Is it really safe to do this and the freeing below outside of the
> lock?

Could you elaborate on the race condition here?

Per my understanding, there can't be any competition over who owns the entry. Besides, setting it
back to DOMID_INVALID won't race with the allocation loop.

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:12:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89Oe-0007nh-88; Tue, 28 Jan 2014 14:12:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89Oc-0007nX-92
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:12:10 +0000
Received: from [85.158.143.35:22213] by server-3.bemta-4.messagelabs.com id
	40/DE-32360-9BAB7E25; Tue, 28 Jan 2014 14:12:09 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390918326!1365251!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10713 invoked from network); 28 Jan 2014 14:12:07 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-12.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 14:12:07 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 28 Jan 2014 06:07:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="445813820"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 28 Jan 2014 06:12:05 -0800
Received: from fmsmsx157.amr.corp.intel.com (10.18.116.73) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:12:05 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX157.amr.corp.intel.com (10.18.116.73) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:12:04 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:12:02 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 3/7] x86: initialize per socket cpu map
Thread-Index: AQHPFeNQY5uYd3yBTUi/6TGU+xLaQ5qaOW7A
Date: Tue, 28 Jan 2014 14:12:01 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911923775@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-4-git-send-email-dongxiao.xu@intel.com>
	<52DD322F02000078001150C0@nat28.tlf.novell.com>
In-Reply-To: <52DD322F02000078001150C0@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 3/7] x86: initialize per socket cpu map
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:27 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 3/7] x86: initialize per socket cpu map
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > For each socket in the system, we create a separate bitmap to tag its
> > related CPUs. This per socket bitmap will be initialized on system
> > start up, and adjusted when CPU is dynamically online/offline.
> 
> There's no reasoning here at all why cpu_sibling_mask and
> cpu_core_mask aren't sufficient.

The new mask marks the CPUs belonging to each socket, which may differ from cpu_sibling_mask and cpu_core_mask...

> 
> > --- a/xen/arch/x86/smpboot.c
> > +++ b/xen/arch/x86/smpboot.c
> > @@ -59,6 +59,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t,
> cpu_core_mask);
> >  cpumask_t cpu_online_map __read_mostly;
> >  EXPORT_SYMBOL(cpu_online_map);
> >
> > +cpumask_t socket_cpu_map[MAX_NUM_SOCKETS] __read_mostly;
> > +EXPORT_SYMBOL(socket_cpu_map);
> 
> And _if_ we really need it, then it should be done in a better way
> than via a statically sized array, the size of which can't even be
> overridden on the build and/or hypervisor command line.

The current Xen code uses a lot of such static size macros, e.g. NR_CPUS.
That reminds me: could we define MAX_NUM_SOCKETS as NR_CPUS? The number of sockets can never exceed the number of CPUs.

> 
> And there shouldn't be EXPORT_SYMBOL() in new, not directly
> cloned hypervisor code either.

Okay, will remove it in the next patch version.

Thanks,
Dongxiao

> 
> Jan



From xen-devel-bounces@lists.xen.org Tue Jan 28 14:23:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89Zk-0008Uu-Bz; Tue, 28 Jan 2014 14:23:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89Zi-0008Uo-GA
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:23:38 +0000
Received: from [85.158.143.35:22536] by server-3.bemta-4.messagelabs.com id
	58/A5-32360-96DB7E25; Tue, 28 Jan 2014 14:23:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390919016!1352563!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22010 invoked from network); 28 Jan 2014 14:23:36 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-5.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 14:23:36 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 28 Jan 2014 06:19:16 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="445818673"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 28 Jan 2014 06:23:24 -0800
Received: from fmsmsx120.amr.corp.intel.com (10.19.9.29) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:23:23 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx120.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:23:23 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:23:20 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 4/7] x86: collect CQM information from all sockets
Thread-Index: AQHPFebedK9ctTr7n0679vvlcaoAk5qaOg1g
Date: Tue, 28 Jan 2014 14:23:20 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
In-Reply-To: <52DD38240200007800115102@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
	all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:52 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > @@ -126,6 +127,12 @@ bool_t system_supports_cqm(void)
> >      return !!cqm;
> >  }
> >
> > +unsigned int get_cqm_count(void)
> > +{
> > +    ASSERT(system_supports_cqm());
> > +    return cqm->max_rmid + 1;
> > +}
> > +
> >  int alloc_cqm_rmid(struct domain *d)
> >  {
> >      int rc = 0;
> > @@ -170,6 +177,48 @@ void free_cqm_rmid(struct domain *d)
> >      d->arch.pqos_cqm_rmid = 0;
> >  }
> >
> > +static void read_cqm_data(void *arg)
> > +{
> > +    uint64_t cqm_data;
> > +    unsigned int rmid;
> > +    int socket = cpu_to_socket(smp_processor_id());
> > +    struct xen_socket_cqmdata *data = arg;
> > +    unsigned long flags, i;
> 
> Either i can be "unsigned int" ...
> 
> > +
> > +    ASSERT(system_supports_cqm());
> > +
> > +    if ( socket < 0 )
> > +        return;
> > +
> > +    spin_lock_irqsave(&cqm_lock, flags);
> > +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> > +    {
> > +        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
> > +            continue;
> > +
> > +        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
> > +        rdmsrl(MSR_IA32_QMC, cqm_data);
> > +
> > +        i = socket * (cqm->max_rmid + 1) + rmid;
> 
> ... or this calculation needs one of the two operands of * cast
> to "unsigned long".

Will adopt this approach in the next patch version.

> 
> > +        data[i].valid = !(cqm_data & IA32_QM_CTR_ERROR_MASK);
> > +        if ( data[i].valid )
> > +        {
> > +            data[i].l3c_occupancy = cqm_data * cqm->upscaling_factor;
> > +            data[i].socket = socket;
> > +            data[i].domid = cqm->rmid_to_dom[rmid];
> > +        }
> > +    }
> > +    spin_unlock_irqrestore(&cqm_lock, flags);
> > +}
> 
> Also, please clarify why the locking here is necessary: You don't
> seem to be modifying global data, and the only possibly mutable
> thing you read is cqm->rmid_to_dom[]. A race on that one with
> an addition/deletion doesn't appear to be problematic though.

Will use plain spin_lock() in the next patch version.

> 
> > +void get_cqm_info(cpumask_t *cpu_cqmdata_map, struct
> xen_socket_cqmdata *data)
> 
> const cpumask_t *
> 
> > +    case XEN_SYSCTL_getcqminfo:
> > +    {
> > +        struct xen_socket_cqmdata *info;
> > +        uint32_t num_sockets;
> > +        uint32_t num_rmids;
> > +        cpumask_t cpu_cqmdata_map;
> 
> Unless absolutely avoidable, not CPU masks on the stack please.

Okay, will allocate it with xzalloc() instead.

> 
> > +
> > +        if ( !system_supports_cqm() )
> > +        {
> > +            ret = -ENODEV;
> > +            break;
> > +        }
> > +
> > +        select_socket_cpu(&cpu_cqmdata_map);
> > +
> > +        num_sockets = min((unsigned
> int)cpumask_weight(&cpu_cqmdata_map) + 1,
> > +                          sysctl->u.getcqminfo.num_sockets);
> > +        num_rmids = get_cqm_count();
> > +        info = xzalloc_array(struct xen_socket_cqmdata,
> > +                             num_rmids * num_sockets);
> 
> While unlikely right now, you ought to consider the case of this
> multiplication overflowing.
> 
> Also - how does the caller know how big the buffer needs to be?
> Only num_sockets can be restricted by it...
> 
> And what's worse - you allow the caller to limit num_sockets and
> allocate info based on this limited value, but you don't restrict
> cpu_cqmdata_map to just the socket covered, i.e. if the caller
> specified a lower number, then you'll corrupt memory.

Currently the caller (libxc) passes large num_rmids and num_sockets values, i.e. the maximum values that could be available inside the hypervisor.
If you think this approach is not sufficient to guarantee safety, how about having the caller (libxc) first issue a hypercall to obtain the two values from the hypervisor, and then allocate the buffer based on exactly those num_rmids and num_sockets?

> 
> And finally, I think the total size of the buffer here can easily
> exceed a page, i.e. this then ends up being a non-order-0
> allocation, which may _never_ succeed (i.e. the operation is
> then rendered useless). I guess it'd be better to e.g. vmap()
> the MFNs underlying the guest buffer.

Do you mean we should compute the total buffer size, allocate the MFNs one by one, and then vmap() them?

> 
> > +        if ( !info )
> > +        {
> > +            ret = -ENOMEM;
> > +            break;
> > +        }
> > +
> > +        get_cqm_info(&cpu_cqmdata_map, info);
> > +
> > +        if ( copy_to_guest_offset(sysctl->u.getcqminfo.buffer,
> > +                                  0, info, num_rmids * num_sockets) )
> 
> If the offset is zero anyway, why do you use copy_to_guest_offset()
> rather than copy_to_guest()?

Okay.

> 
> > +        {
> > +            ret = -EFAULT;
> > +            xfree(info);
> > +            break;
> > +        }
> > +
> > +        sysctl->u.getcqminfo.num_rmids = num_rmids;
> > +        sysctl->u.getcqminfo.num_sockets = num_sockets;
> > +
> > +        if ( copy_to_guest(u_sysctl, sysctl, 1) )
> 
> __copy_to_guest() is sufficient here.

Okay.

Thanks,
Dongxiao

> 
> Jan


From xen-devel-bounces@lists.xen.org Tue Jan 28 14:25:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89b8-0000AS-TD; Tue, 28 Jan 2014 14:25:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W89b8-0000AG-37
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:25:06 +0000
Received: from [193.109.254.147:58825] by server-16.bemta-14.messagelabs.com
	id 49/F0-20600-1CDB7E25; Tue, 28 Jan 2014 14:25:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390919104!369013!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19270 invoked from network); 28 Jan 2014 14:25:04 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	28 Jan 2014 14:25:04 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 28 Jan 2014 06:25:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="473767175"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 28 Jan 2014 06:25:01 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:25:01 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 06:25:01 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 22:24:59 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH v6 5/7] x86: enable CQM monitoring for each domain RMID
Thread-Index: AQHPFee28keOhk+s20mQn7/a4J47RZqaPU7g
Date: Tue, 28 Jan 2014 14:24:59 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A9119237CA@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-6-git-send-email-dongxiao.xu@intel.com>
	<52DD3993020000780011511F@nat28.tlf.novell.com>
In-Reply-To: <52DD3993020000780011511F@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 5/7] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, January 20, 2014 9:58 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> Ian.Campbell@citrix.com; Ian.Jackson@eu.citrix.com;
> stefano.stabellini@eu.citrix.com; xen-devel@lists.xen.org;
> konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov; keir@xen.org
> Subject: Re: [PATCH v6 5/7] x86: enable CQM monitoring for each domain RMID
> 
> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -1366,6 +1366,8 @@ static void __context_switch(void)
> >      {
> >          memcpy(&p->arch.user_regs, stack_regs,
> CTXT_SWITCH_STACK_BYTES);
> >          vcpu_save_fpu(p);
> > +        if ( system_supports_cqm() )
> > +            cqm_assoc_rmid(0);
> >          p->arch.ctxt_switch_from(p);
> >      }
> >
> > @@ -1390,6 +1392,9 @@ static void __context_switch(void)
> >          }
> >          vcpu_restore_fpu_eager(n);
> >          n->arch.ctxt_switch_to(n);
> > +
> > +        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid >
> 0 )
> > +            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
> >      }
> >
> >      gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
> 
> The two uses here clearly call for system_supports_cqm() to
> be an inline function (the more that the variable checked in that
> function is already global anyway).

Okay.

> 
> Further, cqm_assoc_rmid() being an RDMSR plus WRMSR, you
> surely will want to optimize the case of p's and n's RMIDs being
> identical. Or at the very least make sure you _never_ call that
> function if all domains run with RMID 0.

Okay.
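
Jan's suggestion to skip the MSR access when the outgoing and incoming RMIDs are identical could be sketched with a cached last-written value. This is a standalone model (the MSR write is stood in for by a counter; in the hypervisor it would be per-CPU state plus rdmsrl()/wrmsrl()):

```c
/* Sketch: avoid the RDMSR/WRMSR pair on context switch when the RMID
 * does not actually change. */
static unsigned int cached_rmid;   /* would be this_cpu() state in Xen */
static unsigned int msr_writes;    /* instrumentation for this sketch only */

static void cqm_assoc_rmid_cached(unsigned int rmid)
{
    if ( rmid == cached_rmid )
        return;                    /* p and n share an RMID: no MSR access */
    cached_rmid = rmid;
    msr_writes++;                  /* stands in for rdmsrl() + wrmsrl() */
}
```

With this shape, a system where all domains run with RMID 0 never touches MSR_IA32_PQR_ASSOC at all.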

> 
> > @@ -60,6 +60,8 @@ static void __init parse_pqos_param(char *s)
> >
> >  custom_param("pqos", parse_pqos_param);
> >
> > +static uint64_t rmid_mask;
> 
> __read_mostly?

Okay.

> 
> > +void cqm_assoc_rmid(unsigned int rmid)
> > +{
> > +    uint64_t val;
> > +
> > +    rdmsrl(MSR_IA32_PQR_ASSOC, val);
> > +    wrmsrl(MSR_IA32_PQR_ASSOC, (val & ~(rmid_mask)) | (rmid &
> rmid_mask));
> 
> Stray parentheses around a simple variable.

Okay.

Will reflect your suggestions in next version patch.

Thanks,
Dongxiao

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:28:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89eM-0000OG-Lq; Tue, 28 Jan 2014 14:28:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W89eL-0000O9-6E
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:28:25 +0000
Received: from [85.158.143.35:11195] by server-1.bemta-4.messagelabs.com id
	09/58-02132-88EB7E25; Tue, 28 Jan 2014 14:28:24 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390919303!1370945!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12273 invoked from network); 28 Jan 2014 14:28:23 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:28:23 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so892437wgg.33
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 06:28:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=mON/cps3awwHxS2vipSZ+zpPMYYvYTeA9icHmyBIfnU=;
	b=OhaZ5uOTBpe8yrSw09F/AdcNg3t4lyxR+BlejERpf+USiAv2eU+BcPJO1qGKXgwh8N
	lXBsgduql1D3SMfR6uoNeAyMNsvOsej4bbHJWqXE+eQFmWaAiVqvjczysiH0DVEaVKJw
	qsL9ca2pXsKpUQoQW7n8JAOjECHTSAwJ+Qe0RKC1+FGM2fDNTY6VyAZgmFcwINWSk5DF
	oSG2gdvM290oxnnBRb/NNrmrVYHR+4VSXbez0VXyXLci1nXtqNzku+8OYK/XTukWz90e
	dz39GcfouCmGJ9cM43GyXJ2dONUlu59DRjTjY63zSgcgtpTN5OfV+aWl8HxZx60+zErH
	AtEw==
MIME-Version: 1.0
X-Received: by 10.194.92.109 with SMTP id cl13mr1253269wjb.13.1390919303507;
	Tue, 28 Jan 2014 06:28:23 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 28 Jan 2014 06:28:23 -0800 (PST)
In-Reply-To: <1390906058.7753.37.camel@kazak.uk.xensource.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
Date: Tue, 28 Jan 2014 14:28:23 +0000
X-Google-Sender-Auth: bDxn2Q0_LRUbcSLydXSuZPLBKXc
Message-ID: <CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
	ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
>> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
>>
>> The correct solution here would be to check for ENOSYS in libxl, unfortunately
>> xc_domain_set_pod_target suffers from the same broken error reporting as the
>> rest of libxc and throws away the errno.
>>
>> So for now conditionally define xc_domain_set_pod_target to return success
>> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
>> errno==-1 and returns -1, which matches the broken error reporting of the
>> existing function. It appears to have no in tree callers in any case.
>>
>> The conditional should be removed once libxc has been fixed.
>>
>> This makes ballooning (xl mem-set) work for ARM domains.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: george.dunlap@citrix.com
>> ---
>> I'd be generally wary of modifying the error handling in a piecemeal way, but
>> certainly doing so for 4.4 now would be inappropriate.
>>
>> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
>> frame, at which point this conditional stuff could be dropped.
>>
>> In terms of the 4.4 release, obviously ballooning would be very nice to have
>> for ARM guests, on the other hand I'm aware that while the patch is fairly
>> small/contained and safe it is also pretty skanky and likely wouldn't be
>> accepted outside of the rc period.
>
> George -- what do you think of this?

So is this actually called in the ARM domain build code at the moment?

 -George
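
The "conditionally define xc_domain_set_pod_target to return success" approach Ian describes above could take roughly the following shape (hypothetical stub names; in the real patch these would replace the libxc functions under an ARM-only conditional):

```c
#include <errno.h>

/* Hypothetical ARM stubs mirroring the patch description:
 * set_pod_target succeeds as a no-op (what PoD does when nothing needs
 * doing), while get_pod_target reports failure the same broken way the
 * existing libxc code does (errno == -1, return -1). */
static int xc_domain_set_pod_target_stub(void)
{
    return 0;          /* nothing to do on ARM: report success */
}

static int xc_domain_get_pod_target_stub(void)
{
    errno = -1;        /* matches the existing libxc error reporting */
    return -1;
}
```

The conditional would be dropped once libxc's error paths are cleaned up and ENOSYS can be checked in libxl directly.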

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:31:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89gm-0000p4-Pp; Tue, 28 Jan 2014 14:30:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W89gl-0000oi-1v
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 14:30:55 +0000
Received: from [85.158.143.35:21398] by server-2.bemta-4.messagelabs.com id
	55/65-11386-E1FB7E25; Tue, 28 Jan 2014 14:30:54 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390919452!1384002!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2480 invoked from network); 28 Jan 2014 14:30:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:30:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95255215"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:30:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:30:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W89gg-0006eO-BN;
	Tue, 28 Jan 2014 14:30:50 +0000
Date: Tue, 28 Jan 2014 14:30:46 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52E787FA.3080105@citrix.com>
Message-ID: <alpine.DEB.2.02.1401281406090.4373@kaball.uk.xensource.com>
References: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
	<52E787FA.3080105@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be
 called very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, David Vrabel wrote:
> On 28/01/14 00:34, Julien Grall wrote:
> > On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
> > all CPUs are online. It would mean that the notifier will never be called.
> 
> Why does ARM call xen_init_IRQ() so late?  Is it possible to call it
> earlier when only the boot CPU is online?  There are problems with
> attempting to init FIFO event channels after all CPUs are online.
> 
> If evtchn_fifo_init_control_block(cpu) fails on anything other than the
> first CPU, that CPU will be unable to receive any events.  Xen will have
> been switched to FIFO mode and it is not possible to revert back to
> 2-level mode.

We simply didn't need xen_init_IRQ to be called that early.
Most of xen_guest_init could be moved to an early_initcall, if that is
necessary.



> > Therefore, when a secondary CPU will receive an interrupt, Linux will segfault
> > because the event channel structure for this processor is not initialized.
> > 
> > This can be fixed by calling the init function on every online cpu when the
> > event channel fifo driver is initialized.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > ---
> >  drivers/xen/events/events_fifo.c |   11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
> > index 1de2a19..15498ab 100644
> > --- a/drivers/xen/events/events_fifo.c
> > +++ b/drivers/xen/events/events_fifo.c
> > @@ -410,12 +410,14 @@ static struct notifier_block evtchn_fifo_cpu_notifier = {
> >  
> >  int __init xen_evtchn_fifo_init(void)
> >  {
> > -	int cpu = get_cpu();
> > +	int cpu;
> >  	int ret;
> >  
> > -	ret = evtchn_fifo_init_control_block(cpu);
> > -	if (ret < 0)
> > -		goto out;
> > +	for_each_online_cpu(cpu) {
> > +		ret = evtchn_fifo_init_control_block(cpu);
> > +		if (ret < 0)
> > +			goto out;
> 
> You need to handle this error differently depending on whether the first
> call fails or not.
> 
> Failure on first CPU: return an error and the caller will fallback to
> using 2-level mode.
> 
> Failure on second or later CPU: you need to offline that CPU.  It may
> not be possible to offline a CPU with standard calls (e.g., cpu_down())
> as it won't have working interrupts.
> 
> David
> 
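
David's two-tier error handling could be shaped like this. A standalone sketch with stubbed helpers (not the actual events_fifo.c code): failure on the boot CPU aborts so the caller can fall back to 2-level mode, while failure on a later CPU only takes that CPU offline, since Xen cannot be switched back out of FIFO mode by then.

```c
#define NCPUS 4

/* Stubbed per-CPU init; index 0 is the boot CPU. */
static int init_result[NCPUS];
static int offlined[NCPUS];

static int init_control_block(int cpu) { return init_result[cpu]; }
static void offline_cpu(int cpu) { offlined[cpu] = 1; }

static int fifo_init_all(void)
{
    for (int cpu = 0; cpu < NCPUS; cpu++) {
        int ret = init_control_block(cpu);
        if (ret < 0) {
            if (cpu == 0)
                return ret;      /* boot CPU: caller falls back to 2-level */
            offline_cpu(cpu);    /* later CPU: isolate it and carry on */
        }
    }
    return 0;
}
```

As David notes, offlining in the real kernel may need something weaker than cpu_down(), since the failed CPU has no working event delivery.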

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:31:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89gm-0000p4-Pp; Tue, 28 Jan 2014 14:30:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W89gl-0000oi-1v
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 14:30:55 +0000
Received: from [85.158.143.35:21398] by server-2.bemta-4.messagelabs.com id
	55/65-11386-E1FB7E25; Tue, 28 Jan 2014 14:30:54 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390919452!1384002!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2480 invoked from network); 28 Jan 2014 14:30:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:30:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95255215"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:30:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:30:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W89gg-0006eO-BN;
	Tue, 28 Jan 2014 14:30:50 +0000
Date: Tue, 28 Jan 2014 14:30:46 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52E787FA.3080105@citrix.com>
Message-ID: <alpine.DEB.2.02.1401281406090.4373@kaball.uk.xensource.com>
References: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
	<52E787FA.3080105@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, patches@linaro.org,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be
 called very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, David Vrabel wrote:
> On 28/01/14 00:34, Julien Grall wrote:
> > On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
> > all CPUs are online. This means that the notifier will never be called.
> 
> Why does ARM call xen_init_IRQ() so late?  Is it possible to call it
> earlier when only the boot CPU is online?  There are problems with
> attempting to init FIFO event channels after all CPUs are online.
> 
> If evtchn_fifo_init_control_block(cpu) fails on anything other than the
> first CPU, that CPU will be unable to receive any events.  Xen will have
> been switched to FIFO mode and it is not possible to revert back to
> 2-level mode.

We simply didn't need xen_init_IRQ to be called that early.
Most of xen_guest_init could be moved to an early_initcall, if that is
necessary.



> > Therefore, when a secondary CPU receives an interrupt, Linux will segfault
> > because the event channel structure for this processor is not initialized.
> > 
> > This can be fixed by calling the init function on every online CPU when the
> > event channel FIFO driver is initialized.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > ---
> >  drivers/xen/events/events_fifo.c |   11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
> > index 1de2a19..15498ab 100644
> > --- a/drivers/xen/events/events_fifo.c
> > +++ b/drivers/xen/events/events_fifo.c
> > @@ -410,12 +410,14 @@ static struct notifier_block evtchn_fifo_cpu_notifier = {
> >  
> >  int __init xen_evtchn_fifo_init(void)
> >  {
> > -	int cpu = get_cpu();
> > +	int cpu;
> >  	int ret;
> >  
> > -	ret = evtchn_fifo_init_control_block(cpu);
> > -	if (ret < 0)
> > -		goto out;
> > +	for_each_online_cpu(cpu) {
> > +		ret = evtchn_fifo_init_control_block(cpu);
> > +		if (ret < 0)
> > +			goto out;
> 
> You need to handle this error differently depending on whether the first
> call fails or not.
> 
> Failure on first CPU: return an error and the caller will fall back to
> using 2-level mode.
> 
> Failure on second or later CPU: you need to offline that CPU.  It may
> not be possible to offline a CPU with standard calls (e.g., cpu_down())
> as it won't have working interrupts.
> 
> David
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:31:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89gm-0000ou-EU; Tue, 28 Jan 2014 14:30:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W89gk-0000oe-AX
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 14:30:54 +0000
Received: from [85.158.137.68:9842] by server-1.bemta-3.messagelabs.com id
	98/13-29598-D1FB7E25; Tue, 28 Jan 2014 14:30:53 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390919451!11741119!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12859 invoked from network); 28 Jan 2014 14:30:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97252082"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 14:30:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:30:49 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W89gf-0006eL-31;
	Tue, 28 Jan 2014 14:30:49 +0000
Message-ID: <52E7BF17.3050200@eu.citrix.com>
Date: Tue, 28 Jan 2014 14:30:47 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
>>> flight 24553 qemu-upstream-unstable real [real]
>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/
>>>
>>> Failures :-/ but no regressions.
>>
>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
>> and so will require updating to actually pull this new stuff into the
>> release.
>
> OK. But given that the new code is not part of any RCs, should I wait
> for the next one? Should we go back to "master"?

I guess we should have gone back to "master" after tagging the last RC?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:31:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89h4-0000t3-7U; Tue, 28 Jan 2014 14:31:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W89h2-0000sg-KK
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:31:12 +0000
Received: from [85.158.137.68:23866] by server-17.bemta-3.messagelabs.com id
	20/12-15965-F2FB7E25; Tue, 28 Jan 2014 14:31:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390919469!11863817!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31481 invoked from network); 28 Jan 2014 14:31:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:31:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95255391"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:31:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:31:08 -0500
Message-ID: <1390919466.7753.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 14:31:06 +0000
In-Reply-To: <CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim
	Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 14:28 +0000, George Dunlap wrote:
> On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
> >> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
> >>
> >> The correct solution here would be to check for ENOSYS in libxl; unfortunately,
> >> xc_domain_set_pod_target suffers from the same broken error reporting as the
> >> rest of libxc and throws away errno.
> >>
> >> So for now conditionally define xc_domain_set_pod_target to return success
> >> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
> >> errno==-1 and returns -1, which matches the broken error reporting of the
> >> existing function. It appears to have no in tree callers in any case.
> >>
> >> The conditional should be removed once libxc has been fixed.
> >>
> >> This makes ballooning (xl mem-set) work for ARM domains.
> >>
> >> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> >> Cc: george.dunlap@citrix.com
> >> ---
> >> I'd be generally wary of modifying the error handling in a piecemeal way, but
> >> certainly doing so for 4.4 now would be inappropriate.
> >>
> >> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
> >> frame, at which point this conditional stuff could be dropped.
> >>
> >> In terms of the 4.4 release, obviously ballooning would be very nice to have
> >> for ARM guests, on the other hand I'm aware that while the patch is fairly
> >> small/contained and safe it is also pretty skanky and likely wouldn't be
> >> accepted outside of the rc period.
> >
> > George -- what do you think of this?
> 
> So is this actually called in the ARM domain build code at the moment?

It is common code in libxl which calls into it. I originally had the
ifdef there instead.

(I've just noticed that I forgot to update $subject when I moved the
#ifdef from libxl to libxc)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:32:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89iO-00017H-OK; Tue, 28 Jan 2014 14:32:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W89iM-00016r-Vc
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 14:32:35 +0000
Received: from [85.158.143.35:57852] by server-3.bemta-4.messagelabs.com id
	E8/A7-32360-28FB7E25; Tue, 28 Jan 2014 14:32:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390919551!1381976!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22046 invoked from network); 28 Jan 2014 14:32:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:32:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97252848"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 14:32:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:32:30 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W89iH-0006g2-Os;
	Tue, 28 Jan 2014 14:32:29 +0000
Date: Tue, 28 Jan 2014 14:32:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <52E7BF17.3050200@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1401281432190.4373@kaball.uk.xensource.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, George Dunlap wrote:
> On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> > > > flight 24553 qemu-upstream-unstable real [real]
> > > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/
> > > > 
> > > > Failures :-/ but no regressions.
> > > 
> > > QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> > > and so will require updating to actually pull this new stuff into the
> > > release.
> > 
> > OK. But given that the new code is not part of any RCs, should I wait
> > for the next one? Should we go back to "master"?
> 
> I guess we should have gone back to "master" after tagging the last RC?

I think so.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:34:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89kM-0001Ll-CS; Tue, 28 Jan 2014 14:34:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W89kL-0001Lb-Ks
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:34:37 +0000
Received: from [85.158.139.211:31818] by server-15.bemta-5.messagelabs.com id
	C2/B4-08490-CFFB7E25; Tue, 28 Jan 2014 14:34:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390919674!147677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26689 invoked from network); 28 Jan 2014 14:34:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:34:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97253784"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 14:34:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:34:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W89kG-0006iK-L9;
	Tue, 28 Jan 2014 14:34:32 +0000
Message-ID: <52E7BFF8.1090605@citrix.com>
Date: Tue, 28 Jan 2014 14:34:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
	all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/01/14 14:23, Xu, Dongxiao wrote:
>
>>> +    case XEN_SYSCTL_getcqminfo:
>>> +    {
>>> +        struct xen_socket_cqmdata *info;
>>> +        uint32_t num_sockets;
>>> +        uint32_t num_rmids;
>>> +        cpumask_t cpu_cqmdata_map;
>> Unless absolutely unavoidable, no CPU masks on the stack please.
> Okay, I will allocate it with the "xzalloc" function.

cpumask_var_t mask;
{z}alloc_cpumask_var(&mask);
...
free_cpumask_var(mask);

This will switch between a long on the stack and an allocated structure
depending on whether Xen is compiled with fewer or more than 64 pcpus.

>
>>> +
>>> +        if ( !system_supports_cqm() )
>>> +        {
>>> +            ret = -ENODEV;
>>> +            break;
>>> +        }
>>> +
>>> +        select_socket_cpu(&cpu_cqmdata_map);
>>> +
>>> +        num_sockets = min((unsigned
>> int)cpumask_weight(&cpu_cqmdata_map) + 1,
>>> +                          sysctl->u.getcqminfo.num_sockets);
>>> +        num_rmids = get_cqm_count();
>>> +        info = xzalloc_array(struct xen_socket_cqmdata,
>>> +                             num_rmids * num_sockets);
>> While unlikely right now, you ought to consider the case of this
>> multiplication overflowing.
>>
>> Also - how does the caller know how big the buffer needs to be?
>> Only num_sockets can be restricted by it...
>>
>> And what's worse - you allow the caller to limit num_sockets and
>> allocate info based on this limited value, but you don't restrict
>> cpu_cqmdata_map to just the socket covered, i.e. if the caller
>> specified a lower number, then you'll corrupt memory.
> Currently the caller (libxc) sets num_rmid and num_sockets to the maximum values that could be available inside the hypervisor.
> If you think this approach is not enough to ensure security, what about having the caller (libxc) issue a hypercall to get the two values from the hypervisor, and then allocate a buffer sized for exactly num_rmid and num_sockets?
>
>> And finally, I think the total size of the buffer here can easily
>> exceed a page, i.e. this then ends up being a non-order-0
>> allocation, which may _never_ succeed (i.e. the operation is
>> then rendered useless). I guess it'd be better to e.g. vmap()
>> the MFNs underlying the guest buffer.
> Do you mean we check the total size, allocate MFNs one by one, and then vmap them?

I still think this is barking mad as a method of getting this quantity
of data from Xen to the toolstack in a repeated fashion.

Xen should allocate a per-socket buffer at the start of day (or perhaps
on first use of CQM), and the CQM monitoring tool gets to map those
per-socket buffers read-only.

This way, all processing of the CQM data happens in dom0 userspace, not
in Xen in hypercall context; all Xen has to do is periodically dump the
MSRs into the pages.
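
Independently of the buffer-sharing question, the multiplication-overflow
point quoted above only needs one checked multiplication before sizing the
array. A minimal standalone sketch, where the struct and function names
are invented stand-ins rather than Xen's real definitions:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for struct xen_socket_cqmdata (not the real
 * layout). */
struct cqmdata_model { uint64_t occupancy; };

/* Size the num_rmids * num_sockets array with an explicit overflow
 * check, returning NULL rather than allocating a truncated buffer. */
static void *alloc_cqm_info(uint32_t num_rmids, uint32_t num_sockets)
{
    /* Widen before multiplying so the product itself cannot wrap. */
    uint64_t count = (uint64_t)num_rmids * num_sockets;

    if ( count == 0 || count > SIZE_MAX / sizeof(struct cqmdata_model) )
        return NULL;
    return calloc((size_t)count, sizeof(struct cqmdata_model));
}
```

The same guard would apply whether the records end up in an xzalloc'd
buffer or in preallocated per-socket pages.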

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:36:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89mJ-0001Wf-04; Tue, 28 Jan 2014 14:36:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W89mH-0001WW-7p
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 14:36:37 +0000
Received: from [193.109.254.147:9530] by server-12.bemta-14.messagelabs.com id
	FF/A6-13681-470C7E25; Tue, 28 Jan 2014 14:36:36 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390919794!367163!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12583 invoked from network); 28 Jan 2014 14:36:34 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:36:34 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so276740eei.40
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 06:36:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=+hnfM38S7h/0VzPTldQawZSHzUmhYGq14A53XSx5hlA=;
	b=Uee+aozPVM9IC7JNqlYi608iV7048TtSKN4qU7VEbUiElidJOO4MHwMpjWQ0abS/+A
	KmaJNMCd1vJvFm2zDsPXZB+ex0UdeC/REMAZU5LM7xmXCoQZRzL7bjnkEPLfA83jUxUy
	ZqQRPHI9B7lQ0Bl7WaV7khdJFoWsYF9FXNYIl0FjfwenvumYuphUskc5T9A96pnOCktc
	Zw4Tlqih94njhVquuxhDpAyfA1wCE1aKmuFCduvPGiZgpUuPrwccY90MdBRsGwqxiKM6
	A9Fw38M1HgyLP/qUj8MB0i8z1Sa8FwwuXEeJP4pOH3yXaKVWsXLi/amAn3ssB3CbySgg
	drrQ==
X-Gm-Message-State: ALoCoQn1EegVWlGKtvR+BvsgXlHchvhrK5Tbcvy4MhLKIbH9dFc1VwSr5YZM5m/c1dNl9DeWCR1V
X-Received: by 10.14.107.3 with SMTP id n3mr2104924eeg.67.1390919794017;
	Tue, 28 Jan 2014 06:36:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	d43sm11429482eep.18.2014.01.28.06.36.32 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 06:36:33 -0800 (PST)
Message-ID: <52E7C06F.3010205@linaro.org>
Date: Tue, 28 Jan 2014 14:36:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390869269-12502-1-git-send-email-julien.grall@linaro.org>
	<52E787FA.3080105@citrix.com>
	<alpine.DEB.2.02.1401281406090.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281406090.4373@kaball.uk.xensource.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH] xen/events: xen_evtchn_fifo_init can be
	called very late
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 02:30 PM, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, David Vrabel wrote:
>> On 28/01/14 00:34, Julien Grall wrote:
>>> On ARM, xen_init_IRQ (which calls xen_evtchn_fifo_init) is called after
>>> all CPUs are online. It would mean that the notifier will never be called.
>>
>> Why does ARM call xen_init_IRQ() so late?  Is it possible to call it
>> earlier when only the boot CPU is online?  There are problems with
>> attempting to init FIFO event channels after all CPUs are online.
>>
>> If evtchn_fifo_init_control_block(cpu) fails on anything other than the
>> first CPU, that CPU will be unable to receive any events.  Xen will have
>> been switched to FIFO mode and it is not possible to revert back to
>> 2-level mode.
> 
> We simply didn't need to be called that early.
> Most of xen_guest_init could be moved to an early_initcall, if that is
> necessary.
> 

I'm actually working on a patch to move xen_init_IRQ() into an early_initcall.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:36:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89ma-0001Z5-Cd; Tue, 28 Jan 2014 14:36:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W89mY-0001Yk-RY
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:36:55 +0000
Received: from [193.109.254.147:17507] by server-13.bemta-14.messagelabs.com
	id 9E/AB-19374-680C7E25; Tue, 28 Jan 2014 14:36:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390919813!367282!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15670 invoked from network); 28 Jan 2014 14:36:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 14:36:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 14:36:52 +0000
Message-Id: <52E7CEA00200007800117999@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 14:37:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:09, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> > @@ -1223,6 +1224,45 @@ long arch_do_domctl(
>> >      }
>> >      break;
>> >
>> > +    case XEN_DOMCTL_attach_pqos:
>> > +    {
>> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
>> > +        {
>> > +            if ( !system_supports_cqm() )
>> > +                ret = -ENODEV;
>> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
>> > +                ret = -EEXIST;
>> > +            else
>> > +            {
>> > +                ret = alloc_cqm_rmid(d);
>> > +                if ( ret < 0 )
>> > +                    ret = -EUSERS;
>> 
>> Why don't you have the function return a sensible error code
>> (which presumably might also end up being other than -EUSERS,
>> e.g. -ENOMEM).
> 
> For the assignment of an RMID, I don't think there will be an -ENOMEM error, so 
> I think -EUSERS is better here?

-EUSERS is certainly fine here, but that wasn't my point. My point was
that alloc_cqm_rmid() should return a proper error code (right now
only -EUSERS, but _potentially_ there could be others in the future,
_for example_ -ENOMEM), and that error code should simply be
passed up here.
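
The propagation pattern being asked for, as a tiny standalone model;
the function names and the free_rmids parameter are invented for
illustration, and alloc_cqm_rmid()'s real signature differs:

```c
#include <errno.h>

/* The allocator picks its own error code... */
static int alloc_rmid_model(int free_rmids)
{
    if ( free_rmids <= 0 )
        return -EUSERS;          /* today's only failure mode */
    return 1;                    /* success: some RMID number */
}

/* ...and the hypercall handler passes failures up unchanged, so a
 * future -ENOMEM (or anything else) reaches the caller intact
 * instead of being rewritten to -EUSERS. */
static int attach_pqos_model(int free_rmids)
{
    int ret = alloc_rmid_model(free_rmids);

    return ret < 0 ? ret : 0;
}
```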

>> > +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
>> > +    {
>> > +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
>> > +            continue;
>> > +
>> > +        cqm->rmid_to_dom[rmid] = d->domain_id;
>> > +        break;
>> > +    }
>> > +    spin_unlock_irqrestore(&cqm_lock, flags);
>> > +
>> > +    /* No CQM RMID available, assign RMID=0 by default */
>> > +    if ( rmid > cqm->max_rmid )
>> > +    {
>> > +        rmid = 0;
>> > +        rc = -1;
>> > +    }
>> > +
>> > +    d->arch.pqos_cqm_rmid = rmid;
>> 
>> Is it really safe to do this and the freeing below outside of the
>> lock?
> 
> Could you elaborate on the race condition here?

I wasn't saying there is one. I was asking whether you thought
about whether there might be one. After all, from simply looking
at it I get the impression that two racing calls to this function
might end up leaking one of the two RMIDs (second instance
blindly overwriting what the first instance stored).
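
One way to rule that leak out is to keep both the table update and the
per-domain store inside the lock. A standalone model of that shape,
where the sizes, names and the dom_rmid array are illustrative stand-ins
(not Xen's real CQM code, which stores into d->arch.pqos_cqm_rmid):

```c
#include <errno.h>
#include <pthread.h>
#include <stdint.h>

#define MAX_RMID       4
#define DOMID_INVALID  0xFFFFu

static uint16_t rmid_to_dom[MAX_RMID + 1];
static uint16_t dom_rmid[16];    /* stand-in for d->arch.pqos_cqm_rmid */
static pthread_mutex_t cqm_lock = PTHREAD_MUTEX_INITIALIZER;

static void cqm_init(void)
{
    for ( unsigned int i = 0; i <= MAX_RMID; i++ )
        rmid_to_dom[i] = DOMID_INVALID;
}

/* Returns the RMID (>= 1) now owned by domid, or -EUSERS if none is
 * free.  Both writes happen under the lock, so two racing callers see
 * a consistent table and neither RMID can be orphaned. */
static int alloc_rmid(uint16_t domid)
{
    int ret = -EUSERS;

    pthread_mutex_lock(&cqm_lock);
    for ( unsigned int rmid = 1; rmid <= MAX_RMID; rmid++ )
        if ( rmid_to_dom[rmid] == DOMID_INVALID )
        {
            rmid_to_dom[rmid] = domid;
            dom_rmid[domid] = rmid;   /* written under the same lock */
            ret = rmid;
            break;
        }
    pthread_mutex_unlock(&cqm_lock);
    return ret;
}
```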

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:36:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89ma-0001Z5-Cd; Tue, 28 Jan 2014 14:36:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W89mY-0001Yk-RY
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:36:55 +0000
Received: from [193.109.254.147:17507] by server-13.bemta-14.messagelabs.com
	id 9E/AB-19374-680C7E25; Tue, 28 Jan 2014 14:36:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390919813!367282!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15670 invoked from network); 28 Jan 2014 14:36:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 14:36:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 14:36:52 +0000
Message-Id: <52E7CEA00200007800117999@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 14:37:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-3-git-send-email-dongxiao.xu@intel.com>
	<52DD2F1A02000078001150A4@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923768@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 2/7] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:09, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> > @@ -1223,6 +1224,45 @@ long arch_do_domctl(
>> >      }
>> >      break;
>> >
>> > +    case XEN_DOMCTL_attach_pqos:
>> > +    {
>> > +        if ( domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm )
>> > +        {
>> > +            if ( !system_supports_cqm() )
>> > +                ret = -ENODEV;
>> > +            else if ( d->arch.pqos_cqm_rmid > 0 )
>> > +                ret = -EEXIST;
>> > +            else
>> > +            {
>> > +                ret = alloc_cqm_rmid(d);
>> > +                if ( ret < 0 )
>> > +                    ret = -EUSERS;
>> 
>> Why don't you have the function return a sensible error code
>> (which presumably might also end up being other than -EUSERS,
>> e.g. -ENOMEM)?
> 
> For the assignment of RMIDs, I don't think an ENOMEM error can occur, so
> I think -EUSERS is better here?

-EUSERS is certainly fine here, but that wasn't my point. My point was
that alloc_cqm_rmid() should return a proper error code (right now
only -EUSERS, but _potentially_ there could be others in the future,
_for example_ -ENOMEM), and that error code should simply get
passed up here.

>> > +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
>> > +    {
>> > +        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
>> > +            continue;
>> > +
>> > +        cqm->rmid_to_dom[rmid] = d->domain_id;
>> > +        break;
>> > +    }
>> > +    spin_unlock_irqrestore(&cqm_lock, flags);
>> > +
>> > +    /* No CQM RMID available, assign RMID=0 by default */
>> > +    if ( rmid > cqm->max_rmid )
>> > +    {
>> > +        rmid = 0;
>> > +        rc = -1;
>> > +    }
>> > +
>> > +    d->arch.pqos_cqm_rmid = rmid;
>> 
>> Is it really safe to do this and the freeing below outside of the
>> lock?
> 
> Could you help to elaborate the race condition here?

I wasn't saying there is one. I was asking whether you thought
about whether there might be one. After all, from simply looking
at it I get the impression that two racing calls to this function
might end up leaking one of the two RMIDs (second instance
blindly overwriting what the first instance stored).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:41:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89qz-00025C-Bv; Tue, 28 Jan 2014 14:41:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W89qx-000256-VV
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:41:28 +0000
Received: from [193.109.254.147:60275] by server-7.bemta-14.messagelabs.com id
	C9/CC-15500-791C7E25; Tue, 28 Jan 2014 14:41:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390920086!373611!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22632 invoked from network); 28 Jan 2014 14:41:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 14:41:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 14:41:26 +0000
Message-Id: <52E7CFB002000078001179B9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 14:41:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-4-git-send-email-dongxiao.xu@intel.com>
	<52DD322F02000078001150C0@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923775@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923775@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 3/7] x86: initialize per socket cpu map
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:12, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> > For each socket in the system, we create a separate bitmap to tag its
>> > related CPUs. This per socket bitmap will be initialized on system
>> > start up, and adjusted when CPU is dynamically online/offline.
>> 
>> There's no reasoning here at all why cpu_sibling_mask and
>> cpu_core_mask aren't sufficient.
> 
> The new mask is to mark a socket's CPUs, and it may be different from
> cpu_sibling_mask and cpu_core_mask...

Sorry, I don't follow: cpu_core_mask represents all cores sitting
on the same socket as the "owning" CPU. How's that different
from "marking socket CPUs"?

>> > --- a/xen/arch/x86/smpboot.c
>> > +++ b/xen/arch/x86/smpboot.c
>> > @@ -59,6 +59,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t,
>> cpu_core_mask);
>> >  cpumask_t cpu_online_map __read_mostly;
>> >  EXPORT_SYMBOL(cpu_online_map);
>> >
>> > +cpumask_t socket_cpu_map[MAX_NUM_SOCKETS] __read_mostly;
>> > +EXPORT_SYMBOL(socket_cpu_map);
>> 
>> And _if_ we really need it, then it should be done in a better way
>> than via a statically sized array, the size of which can't even be
>> overridden on the build and/or hypervisor command line.
> 
> I see that current Xen code uses a lot of such static macros, e.g., NR_CPUS.

For one, the number of these has been decreasing over time.

And then NR_CPUS _can_ be controlled from the make command
line.

> This reminds me of one thing: can we define MAX_NUM_SOCKETS as NR_CPUS,
> since the socket number cannot exceed the CPU number?

That might be an option, but only if this construct is really needed
in the first place.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:47:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89x2-0002J7-KH; Tue, 28 Jan 2014 14:47:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W89x1-0002J0-0H
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:47:43 +0000
Received: from [85.158.139.211:62091] by server-5.bemta-5.messagelabs.com id
	F4/53-14928-E03C7E25; Tue, 28 Jan 2014 14:47:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390920461!147723!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25535 invoked from network); 28 Jan 2014 14:47:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 14:47:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 14:47:40 +0000
Message-Id: <52E7D12702000078001179CE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 14:47:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:23, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 05.12.13 at 10:38, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
>> > +    case XEN_SYSCTL_getcqminfo:
>> > +    {
>> > +        struct xen_socket_cqmdata *info;
>> > +        uint32_t num_sockets;
>> > +        uint32_t num_rmids;
>> > +        cpumask_t cpu_cqmdata_map;
>> 
>> Unless absolutely avoidable, not CPU masks on the stack please.
> 
> Okay, will allocate it with the "xzalloc" function.

Hopefully this was just a thinko - zalloc_cpumask_var() is what you
want to use.

>> > +        num_sockets = min((unsigned
>> int)cpumask_weight(&cpu_cqmdata_map) + 1,
>> > +                          sysctl->u.getcqminfo.num_sockets);
>> > +        num_rmids = get_cqm_count();
>> > +        info = xzalloc_array(struct xen_socket_cqmdata,
>> > +                             num_rmids * num_sockets);
>> 
>> While unlikely right now, you ought to consider the case of this
>> multiplication overflowing.
>> 
>> Also - how does the caller know how big the buffer needs to be?
>> Only num_sockets can be restricted by it...
>> 
>> And what's worse - you allow the caller to limit num_sockets and
>> allocate info based on this limited value, but you don't restrict
>> cpu_cqmdata_map to just the socket covered, i.e. if the caller
>> specified a lower number, then you'll corrupt memory.
> 
> Currently the caller (libxc) sets num_rmids and num_sockets to large
> values, the maximum that could be available inside the hypervisor.
> If you think this approach is not enough to ensure security, what about
> having the caller (libxc) issue a hypercall to get the two values from the
> hypervisor, and then allocate a buffer sized for those num_rmids and num_sockets?

Yes, that's the first half of it. And then _both_ values, as they're
determining the array dimensions, need to be passed back into
the "actual" call, and the hypervisor will need to take care not to
exceed these array dimensions.

>> And finally, I think the total size of the buffer here can easily
>> exceed a page, i.e. this then ends up being a non-order-0
>> allocation, which may _never_ succeed (i.e. the operation is
>> then rendered useless). I guess it'd be better to e.g. vmap()
>> the MFNs underlying the guest buffer.
> 
> Do you mean we check the total size, allocate MFNs one by
> one, and then vmap them?

That would also be a possibility, but it isn't what I wrote above (it's
less resource efficient, but easier to implement).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89xS-0002Kx-Da; Tue, 28 Jan 2014 14:48:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W89xR-0002Kl-2A
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:48:09 +0000
Received: from [85.158.139.211:64707] by server-12.bemta-5.messagelabs.com id
	36/69-30017-823C7E25; Tue, 28 Jan 2014 14:48:08 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390920485!148391!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8940 invoked from network); 28 Jan 2014 14:48:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:48:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95263263"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:48:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:48:03 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W89xK-0006wK-9t;
	Tue, 28 Jan 2014 14:48:02 +0000
Message-ID: <52E7C320.7080402@eu.citrix.com>
Date: Tue, 28 Jan 2014 14:48:00 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>	
	<1390906058.7753.37.camel@kazak.uk.xensource.com>	
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
In-Reply-To: <1390919466.7753.97.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim
	Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 02:31 PM, Ian Campbell wrote:
> On Tue, 2014-01-28 at 14:28 +0000, George Dunlap wrote:
>> On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
>>>> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
>>>>
>>>> The correct solution here would be to check for ENOSYS in libxl, unfortunately
>>>> xc_domain_set_pod_target suffers from the same broken error reporting as the
>>>> rest of libxc and throws away the errno.
>>>>
>>>> So for now conditionally define xc_domain_set_pod_target to return success
>>>> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
>>>> errno==-1 and returns -1, which matches the broken error reporting of the
>>>> existing function. It appears to have no in tree callers in any case.
>>>>
>>>> The conditional should be removed once libxc has been fixed.
>>>>
>>>> This makes ballooning (xl mem-set) work for ARM domains.
>>>>
>>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>>>> Cc: george.dunlap@citrix.com
>>>> ---
>>>> I'd be generally wary of modifying the error handling in a piecemeal way, but
>>>> certainly doing so for 4.4 now would be inappropriate.
>>>>
>>>> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
>>>> frame, at which point this conditional stuff could be dropped.
>>>>
>>>> In terms of the 4.4 release, obviously ballooning would be very nice to have
>>>> for ARM guests, on the other hand I'm aware that while the patch is fairly
>>>> small/contained and safe it is also pretty skanky and likely wouldn't be
>>>> accepted outside of the rc period.
>>>
>>> George -- what do you think of this?
>>
>> So is this actually called in the arm domain build code at the moment?
>
> It is common code in libxl which calls into it. I originally had the
> ifdef there instead.
>
> (I've just noticed that I forgot to update $subject when I moved the
> #ifdef from libxl to libxc)

Oh, right -- yes, you normally need to call set_pod_target() every time 
you update the balloon target, just in case PoD mode was activated on 
boot; if it wasn't (or if all the PoD entries have gone away) this will 
be a noop.

The only conceptual issue with putting it here is that 
xc_domain_set_pod_target() is also called during domain creation to fill 
the PoD "cache" with the domain's memory, from which to populate the p2m 
on-demand.  So if "someone" were to try to add PoD to the ARM guest 
creation, and forgot about this hack, they might spend a bit of time 
figuring out why the initial call to fill the PoD cache was succeeding 
but the guest was crashing with "PoD empty cache" anyway.

(xc_domain_set_target behaves differently if there are no entries in the 
p2m than if there are: if the p2m is empty, it will respond to this by 
filling the cache; if it's non-empty, it will ignore changes if there 
are no outstanding p2m entries.  That made sense at the time, but now it 
looks like a bit of an interface trap for the unwary...)

Was there a reason to put this in libxc rather than libxc?  We don't 
expect anyone to call libxc, so having it libxc isn't a big deal, but 
conceptually it would probably be safer in libxl.

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89xS-0002Kx-Da; Tue, 28 Jan 2014 14:48:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W89xR-0002Kl-2A
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:48:09 +0000
Received: from [85.158.139.211:64707] by server-12.bemta-5.messagelabs.com id
	36/69-30017-823C7E25; Tue, 28 Jan 2014 14:48:08 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390920485!148391!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8940 invoked from network); 28 Jan 2014 14:48:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:48:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95263263"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 14:48:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 09:48:03 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W89xK-0006wK-9t;
	Tue, 28 Jan 2014 14:48:02 +0000
Message-ID: <52E7C320.7080402@eu.citrix.com>
Date: Tue, 28 Jan 2014 14:48:00 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>	
	<1390906058.7753.37.camel@kazak.uk.xensource.com>	
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
In-Reply-To: <1390919466.7753.97.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim
	Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 02:31 PM, Ian Campbell wrote:
> On Tue, 2014-01-28 at 14:28 +0000, George Dunlap wrote:
>> On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
>>>> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
>>>>
>>>> The correct solution here would be to check for ENOSYS in libxl, unfortunately
>>>> xc_domain_set_pod_target suffers from the same broken error reporting as the
>>>> rest of libxc and throws away the errno.
>>>>
>>>> So for now conditionally define xc_domain_set_pod_target to return success
>>>> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
>>>> errno==-1 and returns -1, which matches the broken error reporting of the
>>>> existing function. It appears to have no in tree callers in any case.
>>>>
>>>> The conditional should be removed once libxc has been fixed.
>>>>
>>>> This makes ballooning (xl mem-set) work for ARM domains.
>>>>
>>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>>>> Cc: george.dunlap@citrix.com
>>>> ---
>>>> I'd be generally wary of modifying the error handling in a piecemeal way, but
>>>> certainly doing so for 4.4 now would be inappropriate.
>>>>
>>>> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
>>>> frame, at which point this conditional stuff could be dropped.
>>>>
>>>> In terms of the 4.4 release, obviously ballooning would be very nice to have
>>>> for ARM guests, on the other hand I'm aware that while the patch is fairly
>>>> small/contained and safe it is also pretty skanky and likely wouldn't be
>>>> accepted outside of the rc period.
>>>
>>> George -- what do you think of this?
>>
>> So is this actually called in the arm domain build code at the moment?
>
> It is common code in libxl which calls into it. I originally had the
> ifdef there instead.
>
> (I've just noticed that I forgot to update $subject when I moved the
> #ifdef from libxl to libxc)

Oh, right -- yes, you normally need to call set_pod_target() every time 
you update the balloon target, just in case PoD mode was activated on 
boot; if it wasn't (or if all the PoD entries have gone away) this will 
be a noop.

The only conceptual issue with putting it here is that 
xc_domain_set_pod_target() is also called during domain creation to fill 
the PoD "cache" with the domain's memory, from which to populate the p2m 
on-demand.  So if "someone" were to try to add PoD to the ARM guest 
creation, and forgot about this hack, they might spend a bit of time 
figuring out why the initial call to fill the PoD cache was succeeding 
but the guest was crashing with "PoD empty cache" anyway.

(xc_domain_set_pod_target behaves differently if there are no entries in 
the p2m than if there are: if the p2m is empty, it will respond to this 
by filling the cache; if it's non-empty, it will ignore changes if there 
are no outstanding PoD entries.  That made sense at the time, but now it 
looks like a bit of an interface trap for the unwary...)
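The branching behaviour described in that parenthetical can be sketched as a toy model; the struct and logic below are illustrative stand-ins, not the actual hypervisor implementation:

```c
#include <assert.h>

/* Hypothetical model of XENMEM_set_pod_target's two modes: an empty p2m
 * means "domain build" (fill the PoD cache); a populated p2m with no
 * outstanding PoD entries makes the call a no-op. */
struct pod_domain {
    unsigned long p2m_entries;     /* entries currently in the p2m */
    unsigned long pod_entries;     /* outstanding populate-on-demand entries */
    unsigned long pod_cache_pages; /* pages held in the PoD "cache" */
};

static int set_pod_target(struct pod_domain *d, unsigned long target)
{
    if (d->p2m_entries == 0) {
        /* Domain creation: fill the cache with the domain's memory. */
        d->pod_cache_pages = target;
    } else if (d->pod_entries == 0) {
        /* PoD never activated (or all entries resolved): no-op. */
    } else {
        /* Normal balloon-target update while PoD is active. */
        d->pod_cache_pages = target;
    }
    return 0;
}
```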

Was there a reason to put this in libxc rather than libxl?  We don't 
expect anyone to call libxc, so having it in libxc isn't a big deal, but 
conceptually it would probably be safer in libxl.

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:50:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W89zR-0002kQ-J3; Tue, 28 Jan 2014 14:50:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1W89zI-0002kA-ES
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 14:50:11 +0000
Received: from [85.158.137.68:5979] by server-12.bemta-3.messagelabs.com id
	D6/07-20055-B93C7E25; Tue, 28 Jan 2014 14:50:03 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390920602!11869750!1
X-Originating-IP: [74.125.82.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13949 invoked from network); 28 Jan 2014 14:50:03 -0000
Received: from mail-wg0-f44.google.com (HELO mail-wg0-f44.google.com)
	(74.125.82.44)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:50:03 -0000
Received: by mail-wg0-f44.google.com with SMTP id l18so952704wgh.11
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 06:50:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=TloX74mwsB9JfPZO9klkxuQGga1b24cGzqk+CBFcxiY=;
	b=0asKkyKrwBKNtnyvrqV6QBPs6Acb92Z4W+CbV/SmTfhJQ5tnP0MgiSZjK5Qdv12zFo
	+hyp3HzxO5s33UpT6UZdU0V7kGi9v1ixOvYxC/neW3h2pEBFLEQdmj1R+atjYomBHajJ
	3ai1vBipBWJYlb7x+c9102AIISHiBj24Z4P2witN3/v/IH20SkihGmBbdvk6oCTMwh9O
	fsliGh8MvCmrfeeBTaOTpe4R93eQtJGUu7WSNOW9EWiFj4SG95RYucwlHYY9XwWllrXh
	As1zSIbxCqpkDGQUN5OuCPW7w/o/2zBv8SPYRvJYm99TFJRXvXWvW1nwXQFcej/Rp6n3
	ZGJA==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr144156wjy.57.1390920602849; Tue,
	28 Jan 2014 06:50:02 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 28 Jan 2014 06:50:02 -0800 (PST)
In-Reply-To: <52E7C320.7080402@eu.citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
	<52E7C320.7080402@eu.citrix.com>
Date: Tue, 28 Jan 2014 14:50:02 +0000
X-Google-Sender-Auth: 4_mwhETefMi-uHViecXsV2ke9ZQ
Message-ID: <CAFLBxZbN5gSCkJGBhQ9Yd=8nbJxFuHCpv=u3h1MsFgjxQD0pZw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
	ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 2:48 PM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:
> Was there a reason to put this in libxc rather than libxc?  We don't expect
> anyone to call libxc, so having it libxc isn't a big deal, but conceptually
> it would probably be safer in libxl.

Er, change the 2nd "libxc" to "libxl" in this sentence, and it will
probably make a bit more sense...

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 14:54:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 14:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8A3M-0002xk-9y; Tue, 28 Jan 2014 14:54:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8A3L-0002xd-5W
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 14:54:15 +0000
Received: from [193.109.254.147:59097] by server-3.bemta-14.messagelabs.com id
	D0/C6-11000-694C7E25; Tue, 28 Jan 2014 14:54:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390920853!372730!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17321 invoked from network); 28 Jan 2014 14:54:13 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 14:54:13 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so296869eaj.12
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 06:54:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=T6A0TJK8S4lJ8O869KqdWMOhEru/9+3aUGUvAnd1/38=;
	b=OCdNRR4Ao2zsS7Tu/2O52JFLQTXXJqVK0PrkQQIxUQPdixC7dQ9x7JEnXkgJSeGg+1
	/aFo4Pq2aZ5/Cow11hfYHblg0xhkeFerljIUzp6kMpMsuZYwScRPe140dJuISZ2CKGCY
	Q8Bn1Eww1RHBU8X4FTJZYgJUE+VyYFD3+zOl3mo7SCjF2AFkWInhfh2/PJR4Rpa/pHKO
	NJ5GVOWvi4EoCzOn+OQ+it5E8R6y+kixkrUq6x7nCK3nQLqZvW5OKe2bCD9AO95/ZLwW
	BCV31ffLmCVsmIcD/xVWhB0OsRH+68xFdbUCsFwjHMId+FGmmTX8B3bH76PIgx5hGDgD
	m2LQ==
X-Gm-Message-State: ALoCoQmS5StZMKJJREEjO5xNqBY4jWirE0b4Ewv59pA6uIbDxd95m5ZHiQR/rzIwNgiYUZapnuLg
X-Received: by 10.14.32.67 with SMTP id n43mr2301557eea.17.1390920853172;
	Tue, 28 Jan 2014 06:54:13 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm56508424ees.4.2014.01.28.06.54.11
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 28 Jan 2014 06:54:12 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Tue, 28 Jan 2014 14:54:02 +0000
Message-Id: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Julien Grall <julien.grall@linaro.org>, ian.campbell@citrix.com,
	patches@linaro.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The event channels driver needs to be initialized very early. Until now, Xen
initialization was done after all CPUs were brought up.

We can safely move the initialization to an early initcall.

Also use a cpu notifier to:
    - Register the VCPU when the CPU is prepared
    - Enable event channel IRQ when the CPU is running

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 arch/arm/xen/enlighten.c |   84 ++++++++++++++++++++++++++++++----------------
 1 file changed, 55 insertions(+), 29 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 293eeea..39b668e 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -23,6 +23,7 @@
 #include <linux/of_address.h>
 #include <linux/cpuidle.h>
 #include <linux/cpufreq.h>
+#include <linux/cpu.h>
 
 #include <linux/mm.h>
 
@@ -154,12 +155,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
-static void __init xen_percpu_init(void *unused)
+static void xen_percpu_init(int cpu)
 {
 	struct vcpu_register_vcpu_info info;
 	struct vcpu_info *vcpup;
 	int err;
-	int cpu = get_cpu();
 
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
@@ -170,9 +170,11 @@ static void __init xen_percpu_init(void *unused)
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
 	BUG_ON(err);
 	per_cpu(xen_vcpu, cpu) = vcpup;
+}
 
+static void xen_interrupt_init(void)
+{
 	enable_percpu_irq(xen_events_irq, 0);
-	put_cpu();
 }
 
 static void xen_restart(enum reboot_mode reboot_mode, const char *cmd)
@@ -193,6 +195,36 @@ static void xen_power_off(void)
 		BUG();
 }
 
+static irqreturn_t xen_arm_callback(int irq, void *arg)
+{
+	xen_hvm_evtchn_do_upcall();
+	return IRQ_HANDLED;
+}
+
+static int xen_cpu_notification(struct notifier_block *self,
+				unsigned long action,
+				void *hcpu)
+{
+	int cpu = (long)hcpu;
+
+	switch (action) {
+	case CPU_UP_PREPARE:
+		xen_percpu_init(cpu);
+		break;
+	case CPU_STARTING:
+		xen_interrupt_init();
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block xen_cpu_notifier = {
+	.notifier_call = xen_cpu_notification,
+};
+
 /*
  * see Documentation/devicetree/bindings/arm/xen.txt for the
  * documentation of the Xen Device Tree format.
@@ -209,6 +241,7 @@ static int __init xen_guest_init(void)
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
 	phys_addr_t grant_frames;
+	int cpu;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -281,9 +314,27 @@ static int __init xen_guest_init(void)
 	disable_cpuidle();
 	disable_cpufreq();
 
+	xen_init_IRQ();
+
+	if (xen_events_irq < 0)
+		return -ENODEV;
+
+	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
+			       "events", &xen_vcpu)) {
+		pr_err("Error requesting IRQ %d\n", xen_events_irq);
+		return -EINVAL;
+	}
+
+	cpu = get_cpu();
+	xen_percpu_init(cpu);
+	xen_interrupt_init();
+	put_cpu();
+
+	register_cpu_notifier(&xen_cpu_notifier);
+
 	return 0;
 }
-core_initcall(xen_guest_init);
+early_initcall(xen_guest_init);
 
 static int __init xen_pm_init(void)
 {
@@ -297,31 +348,6 @@ static int __init xen_pm_init(void)
 }
 late_initcall(xen_pm_init);
 
-static irqreturn_t xen_arm_callback(int irq, void *arg)
-{
-	xen_hvm_evtchn_do_upcall();
-	return IRQ_HANDLED;
-}
-
-static int __init xen_init_events(void)
-{
-	if (!xen_domain() || xen_events_irq < 0)
-		return -ENODEV;
-
-	xen_init_IRQ();
-
-	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
-			"events", &xen_vcpu)) {
-		pr_err("Error requesting IRQ %d\n", xen_events_irq);
-		return -EINVAL;
-	}
-
-	on_each_cpu(xen_percpu_init, NULL, 0);
-
-	return 0;
-}
-postcore_initcall(xen_init_events);
-
 /* In the hypervisor.S file. */
 EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
 EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:04:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ACb-0003S5-L8; Tue, 28 Jan 2014 15:03:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8ACZ-0003S0-HJ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:03:47 +0000
Received: from [85.158.137.68:63014] by server-1.bemta-3.messagelabs.com id
	A4/89-29598-2D6C7E25; Tue, 28 Jan 2014 15:03:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390921425!10676699!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16692 invoked from network); 28 Jan 2014 15:03:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:03:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:03:45 +0000
Message-Id: <52E7D4EC02000078001179FE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:03:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
	<52E7BFF8.1090605@citrix.com>
In-Reply-To: <52E7BFF8.1090605@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 28/01/14 14:23, Xu, Dongxiao wrote:
>>> And finally, I think the total size of the buffer here can easily
>>> exceed a page, i.e. this then ends up being a non-order-0
>>> allocation, which may _never_ succeed (i.e. the operation is
>>> then rendered useless). I guess it'd be better to e.g. vmap()
>>> the MFNs underlying the guest buffer.
>> Do you mean we check the total size, and allocate MFNs one by
>> one, then vmap them?
> 
> I still think this is barking mad as a method of getting this quantity
> of data from Xen to the toolstack in a repeated fashion.
> 
> Xen should allocate a per-socket buffer at the start of day (or perhaps
> on first use of CQM), and the CQM monitoring tool gets to map those
> per-socket buffers read-only.
> 
> This way, all processing of the CQM data happens in dom0 userspace, not
> in Xen in hypercall context; all Xen has to do is periodically dump the
> MSRs into the pages.

Indeed - if the nature of the data is such that it can be exposed
read-only to suitably privileged entities, then this would be the
much better interface.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:04:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ACl-0003T8-9e; Tue, 28 Jan 2014 15:03:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8ACj-0003Ss-B7
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:03:57 +0000
Received: from [85.158.143.35:31337] by server-2.bemta-4.messagelabs.com id
	3F/35-11386-CD6C7E25; Tue, 28 Jan 2014 15:03:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390921433!1390580!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29058 invoked from network); 28 Jan 2014 15:03:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:03:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97266452"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:03:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:03:38 -0500
Message-ID: <1390921417.7753.106.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:03:37 +0000
In-Reply-To: <52E7C320.7080402@eu.citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
	<52E7C320.7080402@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim
	Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 14:48 +0000, George Dunlap wrote:
> On 01/28/2014 02:31 PM, Ian Campbell wrote:
> > On Tue, 2014-01-28 at 14:28 +0000, George Dunlap wrote:
> >> On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >>> On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
> >>>> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
> >>>>
> >>>> The correct solution here would be to check for ENOSYS in libxl; unfortunately
> >>>> xc_domain_set_pod_target suffers from the same broken error reporting as the
> >>>> rest of libxc and throws away the errno.
> >>>>
> >>>> So for now conditionally define xc_domain_set_pod_target to return success
> >>>> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
> >>>> errno==-1 and returns -1, which matches the broken error reporting of the
> >>>> existing function. It appears to have no in tree callers in any case.
> >>>>
> >>>> The conditional should be removed once libxc has been fixed.
> >>>>
> >>>> This makes ballooning (xl mem-set) work for ARM domains.
> >>>>
> >>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> >>>> Cc: george.dunlap@citrix.com
> >>>> ---
> >>>> I'd be generally wary of modifying the error handling in a piecemeal way, but
> >>>> certainly doing so for 4.4 now would be inappropriate.
> >>>>
> >>>> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
> >>>> frame, at which point this conditional stuff could be dropped.
> >>>>
> >>>> In terms of the 4.4 release, obviously ballooning would be very nice to have
> >>>> for ARM guests, on the other hand I'm aware that while the patch is fairly
> >>>> small/contained and safe it is also pretty skanky and likely wouldn't be
> >>>> accepted outside of the rc period.
> >>>
> >>> George -- what do you think of this?
> >>
> >> So is this actually called in the arm domain build code at the moment?
> >
> > It is common code in libxl which calls into it. I originally had the
> > ifdef there instead.
> >
> > (I've just noticed that I forgot to update $subject when I moved the
> > #ifdef from libxl to libxc)
> 
> Oh, right -- yes, you normally need to call set_pod_target() every time 
> you update the balloon target, just in case PoD mode was activated on 
> boot; if it wasn't (or if all the PoD entries have gone away) this will 
> be a noop.
> 
> The only conceptual issue with putting it here is that 
> xc_domain_set_pod_target() is also called during domain creation to fill 
> the PoD "cache" with the domain's memory, from which to populate the p2m 
> on-demand.  So if "someone" were to try to add PoD to the ARM guest 
> creation, and forgot about this hack, they might spend a bit of time 
> figuring out why the initial call to fill the PoD cache was succeeding 
> but the guest was crashing with "PoD empty cache" anyway.

My hope is that this will get cleaned up in the 4.5 timeframe, as part
of the overdue cleanup of libxc error handling. After that, this
will properly report ENOSYS so that libxl can just DTRT. My hope is that
this will happen before anyone gets to implementing PoD on ARM.

Anyway, if this is the biggest stumbling block someone has while adding
PoD to ARM then they will have done pretty well...

> (xc_domain_set_target behaves differently if there are no entries in the 
> p2m than if there are: if the p2m is empty, it will respond to this by 
> filling the cache; if it's non-empty, it will ignore changes if there 
> are no outstanding p2m entries.  That made sense at the time, but now it 
> looks like a bit of an interface trap for the unwary...)
> 
> Was there a reason to put this in libxc rather than libxl?  We don't 
> expect anyone to call libxc directly, so having it in libxc isn't a big 
> deal, but conceptually it would probably be safer in libxl.

I just thought the hack was more contained in libxc, is all. Also, having
it in libxc means that when the error handling cleanup I mentioned
occurs this will be cleaned up at the same time, whereas if it were an
ifdef in libxl it might get missed. Not a big deal until someone
implements PoD on ARM though...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:05:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AEZ-0003eQ-TT; Tue, 28 Jan 2014 15:05:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8AEX-0003eF-KS
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:05:49 +0000
Received: from [85.158.139.211:4700] by server-9.bemta-5.messagelabs.com id
	4C/66-15098-C47C7E25; Tue, 28 Jan 2014 15:05:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390921546!155602!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31546 invoked from network); 28 Jan 2014 15:05:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:05:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95271888"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:05:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:05:45 -0500
Message-ID: <1390921544.7753.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:05:44 +0000
In-Reply-To: <alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > > but I am a bit wary of making that change at RC2.
> > 
> > I'm leaning the other way -- I'm wary of open coding magic locking
> > primitives to work around this issue on a case by case basis. It's just
> > too subtle IMHO.
> > 
> > The IPI and cross CPU calling primitives are basically predicated on
> > those IPIs interrupting normal interrupt handlers.
> 
> The problem is that we don't know if we can properly context switch
> nested interrupts.

What do you mean? We don't have to context switch an IPI.

> Also I would need to think harder whether everything
> would work correctly without hitches with multiple SGIs happening
> simultaneously (with more than 2 cpus involved).

Since all IPIs would be at the same higher priority only one will be
active on each CPU at a time. If you are worried about multiple CPUs
then that is already an issue today, just at a lower priority.

I have hacked the IPI priority to be higher in the past and it worked
fine; I just never got round to cleaning it up for submission (I hadn't
thought of the locking thing and my use case was low priority).

The interrupt entry and exit paths were written with nested interrupts
in mind, and they have to be in order to handle interrupts which occur
from both guest and hypervisor context.

> On the other hand we know that both Oleksandr's and my solution should
> work OK with no surprises if implemented correctly.

That's a big "if" in my mind; any use of trylock is very subtle IMHO.

AIUI this issue only occurs with the "proto device assignment" patches
added to 4.4, in which case I think the solution can wait until 4.5 and
can be done properly via the IPI priority fix.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:05:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AEZ-0003eQ-TT; Tue, 28 Jan 2014 15:05:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8AEX-0003eF-KS
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:05:49 +0000
Received: from [85.158.139.211:4700] by server-9.bemta-5.messagelabs.com id
	4C/66-15098-C47C7E25; Tue, 28 Jan 2014 15:05:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390921546!155602!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31546 invoked from network); 28 Jan 2014 15:05:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:05:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95271888"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:05:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:05:45 -0500
Message-ID: <1390921544.7753.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:05:44 +0000
In-Reply-To: <alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > > but I am a bit wary of making that change at RC2.
> > 
> > I'm leaning the other way -- I'm wary of open-coding magic locking
> > primitives to work around this issue on a case-by-case basis. It's just
> > too subtle IMHO.
> > 
> > The IPI and cross CPU calling primitives are basically predicated on
> > those IPIs interrupting normal interrupt handlers.
> 
> The problem is that we don't know if we can properly context switch
> nested interrupts.

What do you mean? We don't have to context switch an IPI.

> Also I would need to think harder whether everything
> would work correctly without hitches with multiple SGIs happening
> simultaneously (with more than 2 cpus involved).

Since all IPIs would be at the same higher priority, only one will be
active on each CPU at a time. If you are worried about multiple CPUs,
then that is already an issue today, just at a lower priority.

I have hacked the IPI priority to be higher in the past and it worked
fine; I just never got round to cleaning it up for submission (I hadn't
thought of the locking thing and my use case was low priority).

The interrupt entry and exit paths were written with nested interrupts in
mind, and they have to be in order to handle interrupts which occur
from both guest and hypervisor context.

> On the other hand we know that both Oleksandr's and my solution should
> work OK with no surprises if implemented correctly.

That's a big if in my mind; any use of trylock is very subtle IMHO.

AIUI this issue only occurs with the "proto device assignment" patches added
to 4.4, in which case I think the solution can wait until 4.5 and can be
done properly via the IPI priority fix.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:08:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:08:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AH0-0003oT-MS; Tue, 28 Jan 2014 15:08:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8AGz-0003oH-AF
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:08:21 +0000
Received: from [85.158.143.35:19146] by server-2.bemta-4.messagelabs.com id
	54/9D-11386-4E7C7E25; Tue, 28 Jan 2014 15:08:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390921700!1396200!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10498 invoked from network); 28 Jan 2014 15:08:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:08:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:08:19 +0000
Message-Id: <52E7D6000200007800117A11@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:08:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Stefano Stabellini" <stefano.stabellini@eu.citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
In-Reply-To: <52E7BF17.3050200@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Anthony Perard <anthony.perard@citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
>> On Tue, 28 Jan 2014, Ian Campbell wrote:
>>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
>>>> flight 24553 qemu-upstream-unstable real [real]
>>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
>>>>
>>>> Failures :-/ but no regressions.
>>>
>>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
>>> and so will require updating to actually pull this new stuff into the
>>> release.
>>
>> OK. But given that the new code is not part of any RCs, should I wait
>> for the next one? Should we go back to "master"?
> 
> I guess we should have gone back to "master" after tagging the last RC?

Correct - this should have happened the moment the first new
commit passed the push gate on the qemuu tree. Don't know
whether there would be a way to automate this...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:15:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ANj-0004Jd-RY; Tue, 28 Jan 2014 15:15:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1W8ANi-0004JY-9k
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:15:18 +0000
Received: from [85.158.143.35:31405] by server-2.bemta-4.messagelabs.com id
	6F/CA-11386-589C7E25; Tue, 28 Jan 2014 15:15:17 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390922116!1399514!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11450 invoked from network); 28 Jan 2014 15:15:16 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-9.tower-21.messagelabs.com with SMTP;
	28 Jan 2014 15:15:16 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga102.jf.intel.com with ESMTP; 28 Jan 2014 07:11:06 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,736,1384329600"; d="scan'208";a="465982082"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga001.fm.intel.com with ESMTP; 28 Jan 2014 07:15:13 -0800
Received: from fmsmsx115.amr.corp.intel.com (10.18.116.19) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 07:15:12 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx115.amr.corp.intel.com (10.18.116.19) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 28 Jan 2014 07:15:12 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Tue, 28 Jan 2014 23:15:09 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Thread-Topic: [PATCH v6 4/7] x86: collect CQM information from all sockets
Thread-Index: AQHPFebedK9ctTr7n0679vvlcaoAk5qaOg1ggAAOu9KAAAKMQA==
Date: Tue, 28 Jan 2014 15:15:09 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
	<52E7BFF8.1090605@citrix.com>
	<52E7D4EC02000078001179FE@nat28.tlf.novell.com>
In-Reply-To: <52E7D4EC02000078001179FE@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
	all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, January 28, 2014 11:04 PM
> To: Andrew Cooper; Xu, Dongxiao
> Cc: dario.faggioli@citrix.com; Ian.Campbell@citrix.com;
> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
> keir@xen.org
> Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
> 
> >>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > On 28/01/14 14:23, Xu, Dongxiao wrote:
> >>> And finally, I think the total size of the buffer here can easily
> >>> exceed a page, i.e. this then ends up being a non-order-0
> >>> allocation, which may _never_ succeed (i.e. the operation is
> >>> then rendered useless). I guess it'd be better to e.g. vmap()
> >>> the MFNs underlying the guest buffer.
> >> Do you mean we check the total size, and allocate MFNs one by
> > one, then vmap them?
> >
> > I still think this is barking mad as a method of getting this quantity
> > of data from Xen to the toolstack in a repeated fashion.
> >
> > Xen should allocate a per-socket buffer at the start of day (or perhaps
> > on first use of CQM), and the CQM monitoring tool gets to map those
> > per-socket buffers read-only.
> >
> > This way, all processing of the CQM data happens in dom0 userspace, not
> > in Xen in hypercall context; all Xen has to do is periodically dump the
> > MSRs into the pages.
> 
> Indeed - if the nature of the data is such that it can be exposed
> read-only to suitably privileged entities, then this would be the
> much better interface.

If the data fetching is not hypercall driven, do you have a recommendation on how frequently Xen should dump the MSRs into the shared page?

Thanks,
Dongxiao 

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:16:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8APH-0004OA-Nk; Tue, 28 Jan 2014 15:16:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8APG-0004O2-0P
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:16:54 +0000
Received: from [193.109.254.147:38787] by server-4.bemta-14.messagelabs.com id
	F9/AE-03916-5E9C7E25; Tue, 28 Jan 2014 15:16:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390922211!381076!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13026 invoked from network); 28 Jan 2014 15:16:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 15:16:52 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SFGntR014294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 15:16:50 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFGlQ7023856
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 15:16:48 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFGlGV003123; Tue, 28 Jan 2014 15:16:47 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 07:16:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7B1461BFA73; Tue, 28 Jan 2014 10:16:46 -0500 (EST)
Date: Tue, 28 Jan 2014 10:16:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>, jbeulich@suse.com
Message-ID: <20140128151646.GA4308@phenom.dumpdata.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<20140124180829.GD15785@phenom.dumpdata.com>
	<52E651FD.2020608@os.inf.tu-dresden.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E651FD.2020608@os.inf.tu-dresden.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 01:33:01PM +0100, Julian Stecklina wrote:
> On 01/24/2014 07:08 PM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 16, 2014 at 03:13:44PM +0100, Julian Stecklina wrote:
> >> The paravirtualized clock used in KVM and Xen uses a version field to
> >> allow the guest to see when the shared data structure is inconsistent.
> >> The code reads the version field twice (before and after the data
> >> structure is copied) and checks whether they haven't
> >> changed and that the lowest bit is not set. As the second access is not
> >> synchronized, the compiler could generate code that accesses version
> >> more than two times and you end up with inconsistent data.
> > 
> > Could you paste in the code that the 'bad' compiler generates
> > vs the compiler that generate 'good' code please?
> 
> At least 4.8 and probably older compilers compile this as intended. The
> point is that the standard does not guarantee the intended behavior,
> i.e. the code is wrong.

Perhaps I misunderstood Jan's response, but it sounded to me as if
the compiler was not adhering to the standard?

> 
> I can refer to this lwn article:
> https://lwn.net/Articles/508991/
> 
> The whole point of ACCESS_ONCE is to avoid time bombs like that. There
> are lots of place where ACCESS_ONCE is used in the kernel:
> 
> http://lxr.free-electrons.com/ident?i=ACCESS_ONCE
> 
> See for example the check_zero function here:
> http://lxr.free-electrons.com/source/arch/x86/kernel/kvm.c#L559
> 

In other words, you don't have a sample of 'bad' compiler code.


> Julian
> 
> > 
> >>
> >> An example using pvclock_get_time_values:
> >>
> >> host starts updating data, sets src->version to 1
> >> guest reads src->version (1) and stores it into dst->version.
> >> guest copies inconsistent data
> >> guest reads src->version (1) and computes xor with dst->version.
> >> host finishes updating data and sets src->version to 2
> >> guest reads src->version (2) and checks whether lower bit is not set.
> >> while loop exits with inconsistent data!
> >>
> >> AFAICS the compiler is allowed to optimize the given code this way.
> >>
> >> Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
> >> ---
> >>  arch/x86/kernel/pvclock.c | 10 +++++++---
> >>  1 file changed, 7 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> >> index 42eb330..f62b41c 100644
> >> --- a/arch/x86/kernel/pvclock.c
> >> +++ b/arch/x86/kernel/pvclock.c
> >> @@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
> >>  static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
> >>  					struct pvclock_vcpu_time_info *src)
> >>  {
> >> +	u32 nversion;
> >> +
> >>  	do {
> >>  		dst->version = src->version;
> >>  		rmb();		/* fetch version before data */
> >> @@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
> >>  		dst->tsc_shift         = src->tsc_shift;
> >>  		dst->flags             = src->flags;
> >>  		rmb();		/* test version after fetching data */
> >> -	} while ((src->version & 1) || (dst->version != src->version));
> >> +		nversion = ACCESS_ONCE(src->version);
> >> +	} while ((nversion & 1) || (dst->version != nversion));
> >>  
> >>  	return dst->version;
> >>  }
> >> @@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
> >>  			    struct pvclock_vcpu_time_info *vcpu_time,
> >>  			    struct timespec *ts)
> >>  {
> >> -	u32 version;
> >> +	u32 version, nversion;
> >>  	u64 delta;
> >>  	struct timespec now;
> >>  
> >> @@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
> >>  		now.tv_sec  = wall_clock->sec;
> >>  		now.tv_nsec = wall_clock->nsec;
> >>  		rmb();		/* fetch time before checking version */
> >> -	} while ((wall_clock->version & 1) || (version != wall_clock->version));
> >> +		nversion = ACCESS_ONCE(wall_clock->version);
> >> +	} while ((nversion & 1) || (version != nversion));
> >>  
> >>  	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
> >>  	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
> >> -- 
> >> 1.8.4.2
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 07:16:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7B1461BFA73; Tue, 28 Jan 2014 10:16:46 -0500 (EST)
Date: Tue, 28 Jan 2014 10:16:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>, jbeulich@suse.com
Message-ID: <20140128151646.GA4308@phenom.dumpdata.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<20140124180829.GD15785@phenom.dumpdata.com>
	<52E651FD.2020608@os.inf.tu-dresden.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E651FD.2020608@os.inf.tu-dresden.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 01:33:01PM +0100, Julian Stecklina wrote:
> On 01/24/2014 07:08 PM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jan 16, 2014 at 03:13:44PM +0100, Julian Stecklina wrote:
> >> The paravirtualized clock used in KVM and Xen uses a version field to
> >> allow the guest to see when the shared data structure is inconsistent.
> >> The code reads the version field twice (before and after the data
> >> structure is copied) and checks whether they haven't
> >> changed and that the lowest bit is not set. As the second access is not
> >> synchronized, the compiler could generate code that accesses version
> >> more than two times and you end up with inconsistent data.
> > 
> > Could you paste in the code that the 'bad' compiler generates
> > vs the compiler that generate 'good' code please?
> 
> At least 4.8 and probably older compilers compile this as intended. The
> point is that the standard does not guarantee the intended behavior,
> i.e. the code is wrong.

Perhaps I misunderstood Jan's response, but it sounded to me as if
the compiler was not adhering to the standard?

> 
> I can refer to this lwn article:
> https://lwn.net/Articles/508991/
> 
> The whole point of ACCESS_ONCE is to avoid time bombs like that. There
> are lots of places where ACCESS_ONCE is used in the kernel:
> 
> http://lxr.free-electrons.com/ident?i=ACCESS_ONCE
> 
> See for example the check_zero function here:
> http://lxr.free-electrons.com/source/arch/x86/kernel/kvm.c#L559
> 

In other words, you don't have a sample of 'bad' compiler code.


> Julian
> 
> > 
> >>
> >> An example using pvclock_get_time_values:
> >>
> >> host starts updating data, sets src->version to 1
> >> guest reads src->version (1) and stores it into dst->version.
> >> guest copies inconsistent data
> >> guest reads src->version (1) and computes xor with dst->version.
> >> host finishes updating data and sets src->version to 2
> >> guest reads src->version (2) and checks whether lower bit is not set.
> >> while loop exits with inconsistent data!
> >>
> >> AFAICS the compiler is allowed to optimize the given code this way.
> >>
> >> Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
> >> ---
> >>  arch/x86/kernel/pvclock.c | 10 +++++++---
> >>  1 file changed, 7 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> >> index 42eb330..f62b41c 100644
> >> --- a/arch/x86/kernel/pvclock.c
> >> +++ b/arch/x86/kernel/pvclock.c
> >> @@ -55,6 +55,8 @@ static u64 pvclock_get_nsec_offset(struct pvclock_shadow_time *shadow)
> >>  static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
> >>  					struct pvclock_vcpu_time_info *src)
> >>  {
> >> +	u32 nversion;
> >> +
> >>  	do {
> >>  		dst->version = src->version;
> >>  		rmb();		/* fetch version before data */
> >> @@ -64,7 +66,8 @@ static unsigned pvclock_get_time_values(struct pvclock_shadow_time *dst,
> >>  		dst->tsc_shift         = src->tsc_shift;
> >>  		dst->flags             = src->flags;
> >>  		rmb();		/* test version after fetching data */
> >> -	} while ((src->version & 1) || (dst->version != src->version));
> >> +		nversion = ACCESS_ONCE(src->version);
> >> +	} while ((nversion & 1) || (dst->version != nversion));
> >>  
> >>  	return dst->version;
> >>  }
> >> @@ -135,7 +138,7 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
> >>  			    struct pvclock_vcpu_time_info *vcpu_time,
> >>  			    struct timespec *ts)
> >>  {
> >> -	u32 version;
> >> +	u32 version, nversion;
> >>  	u64 delta;
> >>  	struct timespec now;
> >>  
> >> @@ -146,7 +149,8 @@ void pvclock_read_wallclock(struct pvclock_wall_clock *wall_clock,
> >>  		now.tv_sec  = wall_clock->sec;
> >>  		now.tv_nsec = wall_clock->nsec;
> >>  		rmb();		/* fetch time before checking version */
> >> -	} while ((wall_clock->version & 1) || (version != wall_clock->version));
> >> +		nversion = ACCESS_ONCE(wall_clock->version);
> >> +	} while ((nversion & 1) || (version != nversion));
> >>  
> >>  	delta = pvclock_clocksource_read(vcpu_time);	/* time since system boot */
> >>  	delta += now.tv_sec * (u64)NSEC_PER_SEC + now.tv_nsec;
> >> -- 
> >> 1.8.4.2
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org
> >> http://lists.xen.org/xen-devel
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:22:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AU6-0004oS-AC; Tue, 28 Jan 2014 15:21:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8AU4-0004oL-Pi
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:21:52 +0000
Received: from [85.158.139.211:15688] by server-2.bemta-5.messagelabs.com id
	BF/86-29392-F0BC7E25; Tue, 28 Jan 2014 15:21:51 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390922509!158349!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32265 invoked from network); 28 Jan 2014 15:21:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:21:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95281417"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:21:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:21:48 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8ATz-0007VH-4H;
	Tue, 28 Jan 2014 15:21:47 +0000
Message-ID: <52E7CB0A.3000509@citrix.com>
Date: Tue, 28 Jan 2014 15:21:46 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
	<52E7BFF8.1090605@citrix.com>
	<52E7D4EC02000078001179FE@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
	all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/01/14 15:15, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Tuesday, January 28, 2014 11:04 PM
>> To: Andrew Cooper; Xu, Dongxiao
>> Cc: dario.faggioli@citrix.com; Ian.Campbell@citrix.com;
>> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
>> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
>> keir@xen.org
>> Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
>>
>>>>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 28/01/14 14:23, Xu, Dongxiao wrote:
>>>>> And finally, I think the total size of the buffer here can easily
>>>>> exceed a page, i.e. this then ends up being a non-order-0
>>>>> allocation, which may _never_ succeed (i.e. the operation is
> >>>>> then rendered useless). I guess it'd be better to e.g. vmap()
>>>>> the MFNs underlying the guest buffer.
> >>>> Do you mean we check the total size, and allocate MFNs one by
>>> one, then vmap them?
>>>
>>> I still think this is barking mad as a method of getting this quantity
> >>> of data from Xen to the toolstack in a repeated fashion.
>>>
>>> Xen should allocate a per-socket buffer at the start of day (or perhaps
>>> on first use of CQM), and the CQM monitoring tool gets to map those
>>> per-socket buffers read-only.
>>>
>>> This way, all processing of the CQM data happens in dom0 userspace, not
>>> in Xen in hypercall context; All Xen has to do is periodically dump the
>>> MSRs into the pages.
>> Indeed - if the nature of the data is such that it can be exposed
>> read-only to suitably privileged entities, then this would be the
>> much better interface.
> If the data fetching is not hypercall driven, do you have a recommendation on how frequently Xen should dump the MSRs into the shared page?
>
> Thanks,
> Dongxiao 

There is nothing preventing a hypercall which synchronously prompts Xen
to dump the data right now; that is substantially less overhead than
having the hypercall also rotate a matrix of data so it can be consumed
in a form convenient for userspace.

Other solutions involve having a single read/write control page where
the toolstack could set a bit indicating "please dump the MSR when next
convenient" and the RMID context switching code could do a
test_and_clear_bit() on it, which even solves the problem of "which core
on some other socket do I decide to interrupt".

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AVz-0004x7-1V; Tue, 28 Jan 2014 15:23:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8AVx-0004wv-Nu
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 15:23:49 +0000
Received: from [193.109.254.147:50897] by server-6.bemta-14.messagelabs.com id
	46/82-14958-58BC7E25; Tue, 28 Jan 2014 15:23:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390922626!383297!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3985 invoked from network); 28 Jan 2014 15:23:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:23:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95282077"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:23:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:23:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8AVX-0007X6-Ge;
	Tue, 28 Jan 2014 15:23:23 +0000
Date: Tue, 28 Jan 2014 15:23:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] Update QEMU_UPSTREAM_REVISION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Switch back to master.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/Config.mk b/Config.mk
index 55dce20..484cfdb 100644
--- a/Config.mk
+++ b/Config.mk
@@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
 SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 endif
 OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
-QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc1
+QEMU_UPSTREAM_REVISION ?= master
 SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
 # Fri Aug 2 14:12:09 2013 -0400
 # Fix bug in CBFS file walking with compressed files.
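[Editor's note: as context for the diff, GNU make's '?=' assigns only when the variable is not already defined, so even with the default switched back to master a build can still pin a specific tag. A small demo (editor's sketch; demo.mk is a stand-in for Config.mk):]

```shell
# Create a one-variable stand-in makefile demonstrating '?='.
printf 'QEMU_UPSTREAM_REVISION ?= master\nall:\n\t@echo $(QEMU_UPSTREAM_REVISION)\n' > demo.mk

make -s -f demo.mk                                             # prints: master
make -s -f demo.mk QEMU_UPSTREAM_REVISION=qemu-xen-4.4.0-rc1   # prints: qemu-xen-4.4.0-rc1
```

Command-line assignments override '?=' (and even '=') defaults, which is how a release branch can keep building a pinned tag.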

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:24:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AWK-0004zw-No; Tue, 28 Jan 2014 15:24:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8AWJ-0004zn-Ja
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:24:11 +0000
Received: from [193.109.254.147:31778] by server-9.bemta-14.messagelabs.com id
	70/7C-13957-A9BC7E25; Tue, 28 Jan 2014 15:24:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390922648!387332!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
From xen-devel-bounces@lists.xen.org Tue Jan 28 15:24:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AWK-0004zw-No; Tue, 28 Jan 2014 15:24:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8AWJ-0004zn-Ja
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:24:11 +0000
Received: from [193.109.254.147:31778] by server-9.bemta-14.messagelabs.com id
	70/7C-13957-A9BC7E25; Tue, 28 Jan 2014 15:24:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390922648!387332!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6896 invoked from network); 28 Jan 2014 15:24:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:24:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97279304"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:24:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:24:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8AWF-0007XH-7R;
	Tue, 28 Jan 2014 15:24:07 +0000
Date: Tue, 28 Jan 2014 15:24:03 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52E7D6000200007800117A11@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1401281523210.4373@kaball.uk.xensource.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Jan Beulich wrote:
> >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> >> On Tue, 28 Jan 2014, Ian Campbell wrote:
> >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> >>>> flight 24553 qemu-upstream-unstable real [real]
> >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
> >>>>
> >>>> Failures :-/ but no regressions.
> >>>
> >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> >>> and so will require updating to actually pull this new stuff into the
> >>> release.
> >>
> >> OK. But given that the new code is not part of any RCs, should I wait
> >> for the next one? Should we go back to "master"?
> > 
> > I guess we should have gone back to "master" after tagging the last RC?
> 
> Correct - this should have happened the moment the first new
> commit passed the push gate on the qemuu tree. Don't know
> whether there would be a way to automate this...

I have just sent a patch to go back to master.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
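
For readers following the thread: the change Stefano refers to amounts to pointing xen.git's QEMU tracking variable back at the master branch. A hypothetical sketch (the variable name is from the thread; the file location and `?=` syntax are assumptions, not verified against xen.git):

```make
# Sketch of the change discussed above, in xen.git's Config.mk:
# switch the QEMU tracking variable from the release-candidate tag
# back to the master branch so new qemuu commits are pulled in.
QEMU_UPSTREAM_REVISION ?= master
```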

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AWt-00055h-6K; Tue, 28 Jan 2014 15:24:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W8AWs-00055U-Nd
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:24:46 +0000
Received: from [85.158.143.35:37986] by server-1.bemta-4.messagelabs.com id
	25/75-02132-EBBC7E25; Tue, 28 Jan 2014 15:24:46 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390922684!1396477!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14370 invoked from network); 28 Jan 2014 15:24:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:24:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97279524"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:24:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:24:42 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W8AWo-0007Xt-6y;
	Tue, 28 Jan 2014 15:24:42 +0000
Message-ID: <52E7CBB8.7040904@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:24:40 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>		
	<1390906058.7753.37.camel@kazak.uk.xensource.com>		
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>	
	<1390919466.7753.97.camel@kazak.uk.xensource.com>	
	<52E7C320.7080402@eu.citrix.com>
	<1390921417.7753.106.camel@kazak.uk.xensource.com>
In-Reply-To: <1390921417.7753.106.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 03:03 PM, Ian Campbell wrote:
> On Tue, 2014-01-28 at 14:48 +0000, George Dunlap wrote:
>> On 01/28/2014 02:31 PM, Ian Campbell wrote:
>>> On Tue, 2014-01-28 at 14:28 +0000, George Dunlap wrote:
>>>> On Tue, Jan 28, 2014 at 10:47 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>>> On Thu, 2014-01-16 at 15:27 +0000, Ian Campbell wrote:
> >>>> ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
>>>>>>
> >>>> The correct solution here would be to check for ENOSYS in libxl; unfortunately,
>>>>>> xc_domain_set_pod_target suffers from the same broken error reporting as the
>>>>>> rest of libxc and throws away the errno.
>>>>>>
>>>>>> So for now conditionally define xc_domain_set_pod_target to return success
>>>>>> (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
>>>>>> errno==-1 and returns -1, which matches the broken error reporting of the
>>>>>> existing function. It appears to have no in tree callers in any case.
>>>>>>
>>>>>> The conditional should be removed once libxc has been fixed.
>>>>>>
>>>>>> This makes ballooning (xl mem-set) work for ARM domains.
>>>>>>
>>>>>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>>>>>> Cc: george.dunlap@citrix.com
>>>>>> ---
>>>>>> I'd be generally wary of modifying the error handling in a piecemeal way, but
> >>>> certainly doing so for 4.4 now would be inappropriate.
>>>>>>
>>>>>> IIRC Ian J was planning a thorough sweep of the libxc error paths in 4.5 time
>>>>>> frame, at which point this conditional stuff could be dropped.
>>>>>>
>>>>>> In terms of the 4.4 release, obviously ballooning would be very nice to have
>>>>>> for ARM guests, on the other hand I'm aware that while the patch is fairly
>>>>>> small/contained and safe it is also pretty skanky and likely wouldn't be
>>>>>> accepted outside of the rc period.
>>>>>
>>>>> George -- what do you think of this?
>>>>
>>>> So is this actually called in the arm domain build code at the moment?
>>>
>>> It is common code in libxl which calls into it. I originally had the
>>> ifdef there instead.
>>>
>>> (I've just noticed that I forgot to update $subject when I moved the
>>> #ifdef from libxl to libxc)
>>
>> Oh, right -- yes, you normally need to call set_pod_target() every time
>> you update the balloon target, just in case PoD mode was activated on
>> boot; if it wasn't (or if all the PoD entries have gone away) this will
>> be a noop.
>>
>> The only conceptual issue with putting it here is that
>> xc_domain_set_pod_target() is also called during domain creation to fill
>> the PoD "cache" with the domain's memory, from which to populate the p2m
>> on-demand.  So if "someone" were to try to add PoD to the ARM guest
>> creation, and forgot about this hack, they might spend a bit of time
>> figuring out why the initial call to fill the PoD cache was succeeding
>> but the guest was crashing with "PoD empty cache" anyway.
>
> My hope is that this will get cleaned up in the 4.5 timeframe, as part
> of the overdue cleanup of libxc error handling. After that then this
> will properly report ENOSYS so that libxl can just DTRT. My hope is that
> this will happen before anyone gets to implementing PoD on ARM.
>
> Anyway, if this is the biggest stumbling block someone has while adding
> PoD to ARM then they will have done pretty well...
>
>> (xc_domain_set_target behaves differently if there are no entries in the
>> p2m than if there are: if the p2m is empty, it will respond to this by
>> filling the cache; if it's non-empty, it will ignore changes if there
>> are no outstanding p2m entries.  That made sense at the time, but now it
>> looks like a bit of an interface trap for the unwary...)
>>
>> Was there a reason to put this in libxc rather than libxl?  We don't
>> expect anyone to call libxc, so having it in libxc isn't a big deal, but
>> conceptually it would probably be safer in libxl.
>
> I just thought the hack was more contained in libxc, is all. Also having
> it in libxc means that when the error handling cleanup I mentioned
> occurs, this will be cleaned up at the same time, whereas if it was an ifdef
> in libxl it might get missed. Not a big deal until someone implements
> PoD on ARM though...

OK -- well I guess whatever you want to do then.  It's a bit of a 
bikeshed issue -- I've expressed my preference, go ahead and paint it 
whatever color you think best. :-)

If you want to check in this one:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:25:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AXQ-0005Bb-LU; Tue, 28 Jan 2014 15:25:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8AXP-0005BQ-CK
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:25:19 +0000
Received: from [85.158.143.35:13519] by server-1.bemta-4.messagelabs.com id
	46/56-02132-EDBC7E25; Tue, 28 Jan 2014 15:25:18 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390922716!1392895!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28014 invoked from network); 28 Jan 2014 15:25:18 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 15:25:18 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SFPFcE028092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 15:25:16 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFPACb020925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 15:25:11 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFPAmD005095; Tue, 28 Jan 2014 15:25:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 07:25:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DD1EA1BFA73; Tue, 28 Jan 2014 10:25:08 -0500 (EST)
Date: Tue, 28 Jan 2014 10:25:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20140128152508.GB4308@phenom.dumpdata.com>
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<52D80324020000780011447A@nat28.tlf.novell.com>
	<52D8030E.1050501@os.inf.tu-dresden.de>
	<52D908BF0200007800114782@nat28.tlf.novell.com>
	<52E69BCE.1070508@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E69BCE.1070508@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org, kvm@vger.kernel.org,
	Jan Beulich <JBeulich@suse.com>,
	Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 06:47:58PM +0100, Paolo Bonzini wrote:
> On 17/01/2014 10:41, Jan Beulich wrote:
> > One half of this doesn't apply here, due to the explicit barriers
> > that are there. The half about converting local variable accesses
> > back to memory reads (i.e. eliding the local variable), however,
> > is only a theoretical issue afaict: If a compiler really did this, I
> > think there'd be far more places where this would hurt.
> 
> Perhaps.  But for example seqlocks get it right.
> 
> > I don't think so - this would only be an issue if the conditions used
> > | instead of ||. || implies a sequence point between evaluating the
> > left and right sides, and the standard says: "The presence of a
> > sequence point between the evaluation of expressions A and B
> > implies that every value computation and side effect associated
> > with A is sequenced before every value computation and side
> > effect associated with B."
> 
> I suspect this is widely ignored by compilers if A is not 
> side-effecting.  The above wording would imply that
> 
>      x = a || b    =>    x = (a | b) != 0
> 
> (where "a" and "b" are non-volatile globals) would be an invalid 
> change.  The compiler would have to do:
> 
>      temp = a;
>      barrier();
>      x = (temp | b) != 0
> 
> and I'm pretty sure that no compiler does it this way unless C11/C++11
> atomics are involved (at which point accesses become side-effecting).
> 
> The code has changed and pvclock_get_time_values moved to
> __pvclock_read_cycles, but I think the problem remains.  Another approach
> to fixing this (and one I prefer) is to do the same thing as seqlocks:
> turn off the low bit in the return value of __pvclock_read_cycles,
> and drop the || altogether.  Untested patch after my name.

Is there a good test-case to confirm that this patch does not introduce
any regressions?


> 
> Paolo
> 
> diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
> index d6b078e9fa28..5aec80adaf54 100644
> --- a/arch/x86/include/asm/pvclock.h
> +++ b/arch/x86/include/asm/pvclock.h
> @@ -75,7 +75,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
>  	cycle_t ret, offset;
>  	u8 ret_flags;
>  
> -	version = src->version;
> +	version = src->version & ~1;
>  	/* Note: emulated platforms which do not advertise SSE2 support
>  	 * result in kvmclock not using the necessary RDTSC barriers.
>  	 * Without barriers, it is possible that RDTSC instruction reads from
> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> index 2f355d229a58..a5052a87d55e 100644
> --- a/arch/x86/kernel/pvclock.c
> +++ b/arch/x86/kernel/pvclock.c
> @@ -66,7 +66,7 @@ u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
>  
>  	do {
>  		version = __pvclock_read_cycles(src, &ret, &flags);
> -	} while ((src->version & 1) || version != src->version);
> +	} while (version != src->version);
>  
>  	return flags & valid_flags;
>  }
> @@ -80,7 +80,7 @@ cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
>  
>  	do {
>  		version = __pvclock_read_cycles(src, &ret, &flags);
> -	} while ((src->version & 1) || version != src->version);
> +	} while (version != src->version);
>  
>  	if (unlikely((flags & PVCLOCK_GUEST_STOPPED) != 0)) {
>  		src->flags &= ~PVCLOCK_GUEST_STOPPED;
> diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
> index eb5d7a56f8d4..f09b09bcb515 100644
> --- a/arch/x86/vdso/vclock_gettime.c
> +++ b/arch/x86/vdso/vclock_gettime.c
> @@ -117,7 +117,6 @@ static notrace cycle_t vread_pvclock(int *mode)
>  		 */
>  		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
>  	} while (unlikely(cpu != cpu1 ||
> -			  (pvti->pvti.version & 1) ||
>  			  pvti->pvti.version != version));
>  
>  	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Disposition: inline
In-Reply-To: <52E69BCE.1070508@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org, kvm@vger.kernel.org,
	Jan Beulich <JBeulich@suse.com>,
	Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 06:47:58PM +0100, Paolo Bonzini wrote:
> Il 17/01/2014 10:41, Jan Beulich ha scritto:
> > One half of this doesn't apply here, due to the explicit barriers
> > that are there. The half about converting local variable accesses
> > back to memory reads (i.e. eliding the local variable), however,
> > is only a theoretical issue afaict: If a compiler really did this, I
> > think there'd be far more places where this would hurt.
> 
> Perhaps.  But for example seqlocks get it right.
> 
> > I don't think so - this would only be an issue if the conditions used
> > | instead of ||. || implies a sequence point between evaluating the
> > left and right sides, and the standard says: "The presence of a
> > sequence point between the evaluation of expressions A and B
> > implies that every value computation and side effect associated
> > with A is sequenced before every value computation and side
> > effect associated with B."
> 
> I suspect this is widely ignored by compilers if A is not 
> side-effecting.  The above wording would imply that
> 
>      x = a || b    =>    x = (a | b) != 0
> 
> (where "a" and "b" are non-volatile globals) would be an invalid 
> change.  The compiler would have to do:
> 
>      temp = a;
>      barrier();
>      x = (temp | b) != 0
> 
> and I'm pretty sure that no compiler does it this way unless C11/C++11
> atomics are involved (at which point accesses become side-effecting).
> 
> The code has changed and pvclock_get_time_values moved to
> __pvclock_read_cycles, but I think the problem remains.  Another approach
> to fixing this (and one I prefer) is to do the same thing as seqlocks:
> turn off the low bit in the return value of __pvclock_read_cycles,
> and drop the || altogether.  Untested patch after my name.

Is there a good test-case to confirm that this patch does not introduce
any regressions?


> 
> Paolo
> 
> diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
> index d6b078e9fa28..5aec80adaf54 100644
> --- a/arch/x86/include/asm/pvclock.h
> +++ b/arch/x86/include/asm/pvclock.h
> @@ -75,7 +75,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
>  	cycle_t ret, offset;
>  	u8 ret_flags;
>  
> -	version = src->version;
> +	version = src->version & ~1;
>  	/* Note: emulated platforms which do not advertise SSE2 support
>  	 * result in kvmclock not using the necessary RDTSC barriers.
>  	 * Without barriers, it is possible that RDTSC instruction reads from
> diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
> index 2f355d229a58..a5052a87d55e 100644
> --- a/arch/x86/kernel/pvclock.c
> +++ b/arch/x86/kernel/pvclock.c
> @@ -66,7 +66,7 @@ u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
>  
>  	do {
>  		version = __pvclock_read_cycles(src, &ret, &flags);
> -	} while ((src->version & 1) || version != src->version);
> +	} while (version != src->version);
>  
>  	return flags & valid_flags;
>  }
> @@ -80,7 +80,7 @@ cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
>  
>  	do {
>  		version = __pvclock_read_cycles(src, &ret, &flags);
> -	} while ((src->version & 1) || version != src->version);
> +	} while (version != src->version);
>  
>  	if (unlikely((flags & PVCLOCK_GUEST_STOPPED) != 0)) {
>  		src->flags &= ~PVCLOCK_GUEST_STOPPED;
> diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
> index eb5d7a56f8d4..f09b09bcb515 100644
> --- a/arch/x86/vdso/vclock_gettime.c
> +++ b/arch/x86/vdso/vclock_gettime.c
> @@ -117,7 +117,6 @@ static notrace cycle_t vread_pvclock(int *mode)
>  		 */
>  		cpu1 = __getcpu() & VGETCPU_CPU_MASK;
>  	} while (unlikely(cpu != cpu1 ||
> -			  (pvti->pvti.version & 1) ||
>  			  pvti->pvti.version != version));
>  
>  	if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:26:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AYU-0005LZ-5Z; Tue, 28 Jan 2014 15:26:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W8AYS-0005LF-9L
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 15:26:24 +0000
Received: from [85.158.139.211:11489] by server-2.bemta-5.messagelabs.com id
	AE/3E-29392-F1CC7E25; Tue, 28 Jan 2014 15:26:23 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390922780!146831!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19900 invoked from network); 28 Jan 2014 15:26:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95283407"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:26:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:26:19 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W8AYM-0007Zp-Id;
	Tue, 28 Jan 2014 15:26:18 +0000
Message-ID: <52E7CC19.2040807@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:26:17 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	<xen-devel@lists.xensource.com>
References: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] Update QEMU_UPSTREAM_REVISION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 03:23 PM, Stefano Stabellini wrote:
> Switch back to master.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

I don't think this needs an ack; switching back should be part of the 
normal process of cutting the RC.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Jan 28 15:31:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AdP-0005vj-2w; Tue, 28 Jan 2014 15:31:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8AdO-0005ve-A0
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:31:30 +0000
Received: from [85.158.137.68:61531] by server-17.bemta-3.messagelabs.com id
	56/B2-15965-15DC7E25; Tue, 28 Jan 2014 15:31:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390923087!11848243!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20183 invoked from network); 28 Jan 2014 15:31:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:31:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97283911"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:31:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:31:25 -0500
Message-ID: <1390923084.7753.110.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 28 Jan 2014 15:31:24 +0000
In-Reply-To: <52E7D6000200007800117A11@nat28.tlf.novell.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
> >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> >> On Tue, 28 Jan 2014, Ian Campbell wrote:
> >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> >>>> flight 24553 qemu-upstream-unstable real [real]
> >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
> >>>>
> >>>> Failures :-/ but no regressions.
> >>>
> >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> >>> and so will require updating to actually pull this new stuff into the
> >>> release.
> >>
> >> OK. But given that the new code is not part of any RCs, should I wait
> >> for the next one? Should we go back to "master"?
> > 
> > I guess we should have gone back to "master" after tagging the last RC?
> 
> Correct - this should have happened the moment the first new
> commit passed the push gate on the qemuu tree.

There's no need to wait that long -- this can be done in a commit which
immediately follows the one tagged as the rc.

> Don't know whether there would be a way to automate this...

It sounds like it would be tricky. I suppose a cronjob which verifies
that xen.git/staging always either says "master" or refers to a tag
which is the latest in qemu.git might work, but it sounds subtle and
error-prone to me.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Afi-00062d-My; Tue, 28 Jan 2014 15:33:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Afh-00062U-BM
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:33:53 +0000
Received: from [85.158.139.211:46482] by server-12.bemta-5.messagelabs.com id
	DB/57-30017-0EDC7E25; Tue, 28 Jan 2014 15:33:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390923231!161749!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10295 invoked from network); 28 Jan 2014 15:33:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:33:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:33:51 +0000
Message-Id: <52E7DBFB0200007800117A74@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:34:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
	<52E7BFF8.1090605@citrix.com>
	<52E7D4EC02000078001179FE@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:15, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> > On 28/01/14 14:23, Xu, Dongxiao wrote:
>> >>> And finally, I think the total size of the buffer here can easily
>> >>> exceed a page, i.e. this then ends up being a non-order-0
>> >>> allocation, which may _never_ succeed (i.e. the operation is
>> >>> then rendered useless). I guess it'd be better to e.g. vmap()
>> >>> the MFNs underlying the guest buffer.
>> >> Do you mean we check the total size, and allocate MFNs one by
>> > one, then vmap them?
>> >
>> > I still think this is barking mad as a method of getting this quantity
>> > of data from Xen to the toolstack in a repeated fashion.
>> >
>> > Xen should allocate a per-socket buffer at the start of day (or perhaps
>> > on first use of CQM), and the CQM monitoring tool gets to map those
>> > per-socket buffers read-only.
>> >
>> > This way, all processing of the CQM data happens in dom0 userspace, not
>> > in Xen in hypercall context; all Xen has to do is periodically dump the
>> > MSRs into the pages.
>> 
>> Indeed - if the nature of the data is such that it can be exposed
>> read-only to suitably privileged entities, then this would be the
>> much better interface.
> 
> If the data fetching is not hypercall driven, do you have a recommendation 
> on how frequent Xen dumps the MSRs into the share page?

Perhaps an "update the shared pages" hypercall should then be made
prior to reading the shared space?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Afi-00062d-My; Tue, 28 Jan 2014 15:33:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Afh-00062U-BM
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:33:53 +0000
Received: from [85.158.139.211:46482] by server-12.bemta-5.messagelabs.com id
	DB/57-30017-0EDC7E25; Tue, 28 Jan 2014 15:33:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390923231!161749!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10295 invoked from network); 28 Jan 2014 15:33:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:33:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:33:51 +0000
Message-Id: <52E7DBFB0200007800117A74@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:34:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
	<1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
	<52DD38240200007800115102@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
	<52E7BFF8.1090605@citrix.com>
	<52E7D4EC02000078001179FE@nat28.tlf.novell.com>
	<40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "keir@xen.org" <keir@xen.org>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:15, "Xu, Dongxiao" <dongxiao.xu@intel.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> >>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> > On 28/01/14 14:23, Xu, Dongxiao wrote:
>> >>> And finally, I think the total size of the buffer here can easily
>> >>> exceed a page, i.e. this then ends up being a non-order-0
>> >>> allocation, which may _never_ succeed (i.e. the operation is
>> >>> then rendered useless). I guess it'd be better to e.g. vmap()
>> >>> the MFNs underlying the guest buffer.
>> >> Do you mean we check the total size, and allocate MFNs one by
>> > one, then vmap them?
>> >
>> > I still think this is barking mad as a method of getting this quantity
>> > of data from Xen to the toolstack in a repeated fashion.
>> >
>> > Xen should allocate a per-socket buffer at the start of day (or perhaps
>> > on first use of CQM), and the CQM monitoring tool gets to map those
>> > per-socket buffers read-only.
>> >
>> > This way, all processing of the CQM data happens in dom0 userspace, not
>> > in Xen in hypercall context; All Xen has to do is periodically dump the
>> > MSRs into the pages.
>> 
>> Indeed - if the nature of the data is such that it can be exposed
>> read-only to suitably privileged entities, then this would be the
>> much better interface.
> 
> If the data fetching is not hypercall driven, do you have a recommendation
> on how frequently Xen dumps the MSRs into the shared page?

Perhaps an "update the shared pages" hypercall should then be made
prior to reading the shared space?

Jan
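
The scheme the thread converges on (per-socket buffers that Xen fills, mapped read-only by the toolstack, refreshed by an explicit "update" hypercall before each read) can be sketched as a toy Python model. All names here (CqmState, update_shared_pages, hw_tick, map_read_only) are illustrative stand-ins, not Xen's actual API:

```python
# Toy model of the proposed CQM data path: the hypervisor keeps one
# buffer per socket, an explicit "update" call (standing in for the
# hypercall Jan suggests) snapshots the MSR counters into those
# buffers on demand, and the toolstack only ever sees a read-only
# view of them. None of this is Xen's real interface.

class CqmState:
    def __init__(self, nr_sockets, nr_rmids):
        # Per-socket buffers, allocated once at start of day rather
        # than per-hypercall (avoiding the possibly-failing
        # non-order-0 allocation objected to in the review).
        self.buffers = [[0] * nr_rmids for _ in range(nr_sockets)]
        self._msrs = [[0] * nr_rmids for _ in range(nr_sockets)]  # fake hardware

    def hw_tick(self, socket, rmid, occupancy):
        """Stand-in for the hardware updating an occupancy counter."""
        self._msrs[socket][rmid] = occupancy

    def update_shared_pages(self):
        """The 'update the shared pages' hypercall: dump the MSRs
        into the per-socket buffers when the toolstack asks."""
        for socket, msrs in enumerate(self._msrs):
            self.buffers[socket][:] = msrs

    def map_read_only(self, socket):
        """Toolstack view: an immutable snapshot of one socket's buffer."""
        return tuple(self.buffers[socket])


xen = CqmState(nr_sockets=2, nr_rmids=4)
xen.hw_tick(socket=0, rmid=1, occupancy=4096)
stale = xen.map_read_only(0)        # buffer not refreshed yet
xen.update_shared_pages()           # toolstack requests a refresh first
fresh = xen.map_read_only(0)
```

All per-RMID processing then happens in dom0 userspace on the mapped snapshot; the hypervisor's only per-request work is the MSR dump.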


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:34:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ag0-00065i-Ga; Tue, 28 Jan 2014 15:34:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Afy-00065L-W3
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:34:11 +0000
Received: from [85.158.137.68:64454] by server-16.bemta-3.messagelabs.com id
	65/27-26128-2FDC7E25; Tue, 28 Jan 2014 15:34:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1390923247!10695169!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1396 invoked from network); 28 Jan 2014 15:34:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:34:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95288170"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:33:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:33:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8Afj-0007iK-8z;
	Tue, 28 Jan 2014 15:33:55 +0000
Date: Tue, 28 Jan 2014 15:33:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390923084.7753.110.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281532580.4373@kaball.uk.xensource.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
> > >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> > >> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> > >>>> flight 24553 qemu-upstream-unstable real [real]
> > >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
> > >>>>
> > >>>> Failures :-/ but no regressions.
> > >>>
> > >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> > >>> and so will require updating to actually pull this new stuff into the
> > >>> release.
> > >>
> > >> OK. But given that the new code is not part of any RCs, should I wait
> > >> for the next one? Should we go back to "master"?
> > > 
> > > I guess we should have gone back to "master" after tagging the last RC?
> > 
> > Correct - this should have happened the moment the first new
> > commit passed the push gate on the qemuu tree.
> 
> There's no need to wait that long -- this can be done in a commit which
> immediately follows the one tagged as the rc.

Formally I am not a committer on xen-unstable.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:36:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ahx-0006Hp-5d; Tue, 28 Jan 2014 15:36:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Ahw-0006Hi-5E
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:36:12 +0000
Received: from [193.109.254.147:35827] by server-5.bemta-14.messagelabs.com id
	7F/69-03510-B6EC7E25; Tue, 28 Jan 2014 15:36:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390923369!387106!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21410 invoked from network); 28 Jan 2014 15:36:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:36:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95289601"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:36:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:36:08 -0500
Message-ID: <1390923366.7753.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 15:36:06 +0000
In-Reply-To: <alpine.DEB.2.02.1401281532580.4373@kaball.uk.xensource.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281532580.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 15:33 +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
> > > >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > > > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> > > >> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> > > >>>> flight 24553 qemu-upstream-unstable real [real]
> > > >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
> > > >>>>
> > > >>>> Failures :-/ but no regressions.
> > > >>>
> > > >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> > > >>> and so will require updating to actually pull this new stuff into the
> > > >>> release.
> > > >>
> > > >> OK. But given that the new code is not part of any RCs, should I wait
> > > >> for the next one? Should we go back to "master"?
> > > > 
> > > > I guess we should have gone back to "master" after tagging the last RC?
> > > 
> > > Correct - this should have happened the moment the first new
> > > commit passed the push gate on the qemuu tree.
> > 
> > There's no need to wait that long -- this can be done in a commit which
> > immediately follows the one tagged as the rc.
> 
> Formally I am not a committer on xen-unstable.

I meant that the person cutting the rc should do it.

But that brings up an interesting point. FWIW I think it would be
absolutely fine for the committer responsible for one of the separate
trees to push updates to the bits of xen.git which pull in that code,
specifically the Config.mk bits.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:37:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Aix-0006Nr-0O; Tue, 28 Jan 2014 15:37:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Aiw-0006Nf-GK
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:37:14 +0000
Received: from [85.158.143.35:13133] by server-1.bemta-4.messagelabs.com id
	37/3C-02132-9AEC7E25; Tue, 28 Jan 2014 15:37:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390923432!1404244!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9253 invoked from network); 28 Jan 2014 15:37:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:37:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:37:12 +0000
Message-Id: <52E7DCC50200007800117A93@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:37:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
In-Reply-To: <1390923084.7753.110.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
>> >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
>> >> On Tue, 28 Jan 2014, Ian Campbell wrote:
>> >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
>> >>>> flight 24553 qemu-upstream-unstable real [real]
>> >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
>> >>>>
>> >>>> Failures :-/ but no regressions.
>> >>>
>> >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
>> >>> and so will require updating to actually pull this new stuff into the
>> >>> release.
>> >>
>> >> OK. But given that the new code is not part of any RCs, should I wait
>> >> for the next one? Should we go back to "master"?
>> > 
>> > I guess we should have gone back to "master" after tagging the last RC?
>> 
>> Correct - this should have happened the moment the first new
>> commit passed the push gate on the qemuu tree.
> 
> There's no need to wait that long -- this can be done in a commit which
> immediately follows the one tagged as the rc.

Except that it might end up being pointless if nothing really
changes in qemuu until the next RC (or the final release).

>> Don't know whether there would be a way to automate this...
> 
> It sounds like it would be tricky. I suppose a cronjob which verifies
> that xen.git/staging always either says "master" or refers to a tag
> which is the latest in qemu.git might work, but it sounds subtle and
> error prone to me.

Yes, I realize there would be a number of "special" situations to
take into consideration...

Jan
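
The consistency check Ian sketches (xen.git/staging's QEMU_UPSTREAM_REVISION must either say "master" or name the latest tag in qemu.git) boils down to a small predicate. A hedged Python sketch, where the tag-ordering logic is a deliberate simplification of what a real cronjob would need:

```python
# Sketch of the cronjob check discussed above: the pinned revision is
# acceptable if it is "master" or the newest qemu-xen tag. Tag
# ordering here is a simplification (parsed version tuples, with rc
# releases sorting before the corresponding final release); a real
# check would also need to handle the "special" situations Jan
# alludes to.
import re

def parse_tag(tag):
    """Turn e.g. 'qemu-xen-4.4.0-rc1' into a sortable tuple."""
    m = re.match(r"qemu-xen-(\d+)\.(\d+)\.(\d+)(?:-rc(\d+))?$", tag)
    if not m:
        raise ValueError("unrecognised tag: %s" % tag)
    major, minor, micro, rc = m.groups()
    # A final release outranks any of its release candidates.
    return (int(major), int(minor), int(micro),
            int(rc) if rc is not None else float("inf"))

def revision_ok(revision, qemu_tags):
    """True if xen.git may stay as-is: tracking master, or pinned
    to the latest qemu-xen tag."""
    if revision == "master":
        return True
    return revision == max(qemu_tags, key=parse_tag)

tags = ["qemu-xen-4.3.0", "qemu-xen-4.4.0-rc1"]
```

With the hypothetical tag list above, a tree pinned to qemu-xen-4.4.0-rc1 passes, while one still pinned to qemu-xen-4.3.0 would be flagged.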


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:37:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Aix-0006Nr-0O; Tue, 28 Jan 2014 15:37:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Aiw-0006Nf-GK
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:37:14 +0000
Received: from [85.158.143.35:13133] by server-1.bemta-4.messagelabs.com id
	37/3C-02132-9AEC7E25; Tue, 28 Jan 2014 15:37:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390923432!1404244!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9253 invoked from network); 28 Jan 2014 15:37:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:37:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:37:12 +0000
Message-Id: <52E7DCC50200007800117A93@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:37:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
In-Reply-To: <1390923084.7753.110.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
>> >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
>> >> On Tue, 28 Jan 2014, Ian Campbell wrote:
>> >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
>> >>>> flight 24553 qemu-upstream-unstable real [real]
>> >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
>> >>>>
>> >>>> Failures :-/ but no regressions.
>> >>>
>> >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
>> >>> and so will require updating to actually pull this new stuff into the
>> >>> release.
>> >>
>> >> OK. But given that the new code is not part of any RCs, should I wait
>> >> for the next one? Should we go back to "master"?
>> > 
>> > I guess we should have gone back to "master" after tagging the last RC?
>> 
>> Correct - this should have happened the moment the first new
>> commit passed the push gate on the qemuu tree.
> 
> There's no need to wait that long -- this can be done in a commit which
> immediately follows the one tagged as the rc.

Except that it might end up being pointless if nothing really
changes in qemuu until the next RC (or the final release).

>> Don't know whether there would be a way to automate this...
> 
> It sounds like it would be tricky. I suppose a cronjob which verifies
> that xen.git/staging always either says "master" or refers to a tag
> which is the latest in qemu.git might work, but it sounds subtle and
> error prone to me.

Yes, I realize there would be a number of "special" situations to
take into consideration...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AjA-0006Pj-Dg; Tue, 28 Jan 2014 15:37:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8Aj9-0006PR-Eo
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:37:27 +0000
Received: from [193.109.254.147:60388] by server-13.bemta-14.messagelabs.com
	id 58/F7-19374-6BEC7E25; Tue, 28 Jan 2014 15:37:26 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390923439!390859!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14728 invoked from network); 28 Jan 2014 15:37:20 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 15:37:20 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SFbDlp011808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 15:37:13 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFbCm6001631
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 15:37:12 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFbBh5014355; Tue, 28 Jan 2014 15:37:12 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 07:37:11 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6BF981BFA73; Tue, 28 Jan 2014 10:37:10 -0500 (EST)
Date: Tue, 28 Jan 2014 10:37:10 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <20140128153710.GC4308@phenom.dumpdata.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127212146.GA32007@phenom.dumpdata.com>
	<52E7A635.2090108@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E7A635.2090108@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 01:44:37PM +0100, Roger Pau Monné wrote:
> On 27/01/14 22:21, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
> >> I've at least identified two possible memory leaks in blkback, both
> >> related to the shutdown path of a VBD:
> >>
> >> - We don't wait for any pending purge work to finish before cleaning
> >>   the list of free_pages. The purge work will call put_free_pages and
> >>   thus we might end up with pages being added to the free_pages list
> >>   after we have emptied it.
> >> - We don't wait for pending requests to end before cleaning persistent
> >>   grants and the list of free_pages. Again this can add pages to the
> >>   free_pages lists or persistent grants to the persistent_gnts
> >>   red-black tree.
> >>
> >> Also, add some checks in xen_blkif_free to make sure we are cleaning
> >> everything.
> >>
> >> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> Cc: David Vrabel <david.vrabel@citrix.com>
> >> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >> Cc: Matt Rushton <mrushton@amazon.com>
> >> Cc: Matt Wilson <msw@amazon.com>
> >> Cc: Ian Campbell <Ian.Campbell@citrix.com>
> >> ---
> >> This should be applied after the patch:
> >>
> >> xen-blkback: fix memory leak when persistent grants are used
> >>
> >> From Matt Rushton & Matt Wilson and backported to stable.
> >>
> >> I've been able to create and destroy ~4000 guests while doing heavy IO
> >> operations with this patch on a 512M Dom0 without problems.
> >> ---
> >>  drivers/block/xen-blkback/blkback.c |   29 +++++++++++++++++++----------
> >>  drivers/block/xen-blkback/xenbus.c  |    9 +++++++++
> >>  2 files changed, 28 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> >> index 30ef7b3..19925b7 100644
> >> --- a/drivers/block/xen-blkback/blkback.c
> >> +++ b/drivers/block/xen-blkback/blkback.c
> >> @@ -169,6 +169,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
> >>  				struct pending_req *pending_req);
> >>  static void make_response(struct xen_blkif *blkif, u64 id,
> >>  			  unsigned short op, int st);
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force);
> >>
> >>  #define foreach_grant_safe(pos, n, rbtree, node) \
> >>  	for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
> >> @@ -625,6 +626,12 @@ purge_gnt_list:
> >>  			print_stats(blkif);
> >>  	}
> >>
> >> +	/* Drain pending IO */
> >> +	xen_blk_drain_io(blkif, true);
> >> +
> >> +	/* Drain pending purge work */
> >> +	flush_work(&blkif->persistent_purge_work);
> >> +
> >
> > I think this means we can eliminate the refcnt usage - at least when
> > it comes to xen_blkif_disconnect, where if we would initiate the shutdown,
> > there is
> >
> > 239         atomic_dec(&blkif->refcnt);
> > 240         wait_event(blkif->waiting_to_free, atomic_read(&blkif->refcnt) == 0);
> > 241         atomic_inc(&blkif->refcnt);
> > 242
> >
> > which is done _after_ the thread is done executing. That check won't
> > be needed anymore as the xen_blk_drain_io, flush_work, and free_persistent_gnts
> > have pretty much drained every I/O out - so the moment the thread exits
> > there should be no need for waiting_to_free. I think.
>
> I've reworked this patch a bit, so we don't drain the in-flight requests
> here, and instead moved all the cleanup code to xen_blkif_free. I've
> also split the xen_blkif_put race fix into a separate patch.
>
> >
> >>  	/* Free all persistent grant pages */
> >>  	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
> >>  		free_persistent_gnts(blkif, &blkif->persistent_gnts,
> >> @@ -930,7 +937,7 @@ static int dispatch_other_io(struct xen_blkif *blkif,
> >>  	return -EIO;
> >>  }
> >>
> >> -static void xen_blk_drain_io(struct xen_blkif *blkif)
> >> +static void xen_blk_drain_io(struct xen_blkif *blkif, bool force)
> >>  {
> >>  	atomic_set(&blkif->drain, 1);
> >>  	do {
> >> @@ -943,7 +950,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
> >>
> >>  		if (!atomic_read(&blkif->drain))
> >>  			break;
> >> -	} while (!kthread_should_stop());
> >> +	} while (!kthread_should_stop() || force);
> >>  	atomic_set(&blkif->drain, 0);
> >>  }
> >>
> >> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
> >>  	 * the proper response on the ring.
> >>  	 */
> >>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> >> -		xen_blkbk_unmap(pending_req->blkif,
> >> +		struct xen_blkif *blkif = pending_req->blkif;
> >> +
> >> +		xen_blkbk_unmap(blkif,
> >>  		                pending_req->segments,
> >>  		                pending_req->nr_pages);
> >> -		make_response(pending_req->blkif, pending_req->id,
> >> +		make_response(blkif, pending_req->id,
> >>  			      pending_req->operation, pending_req->status);
> >> -		xen_blkif_put(pending_req->blkif);
> >> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> >> -			if (atomic_read(&pending_req->blkif->drain))
> >> -				complete(&pending_req->blkif->drain_complete);
> >> +		free_req(blkif, pending_req);
> >> +		xen_blkif_put(blkif);
> >> +		if (atomic_read(&blkif->refcnt) <= 2) {
> >> +			if (atomic_read(&blkif->drain))
> >> +				complete(&blkif->drain_complete);
> >>  		}
> >> -		free_req(pending_req->blkif, pending_req);
> >
> > I keep coming back to this and I am not sure what to think - especially
> > in the context of WRITE_BARRIER and disconnecting the vbd.
> >
> > You moved the 'free_req' to be done before the atomic_read/dec.
> >
> > Which means that we now do:
> >
> > 	list_add(&req->free_list, &blkif->pending_free);
> > 	wake_up(&blkif->pending_free_wq);
> >
> > 	atomic_dec
> > 	if atomic_read <= 2, poke the thread that is waiting for drain.
> >
> > while in the past we did:
> >
> > 	atomic_dec
> > 	if atomic_read <= 2, poke the thread that is waiting for drain.
> >
> > 	list_add(&req->free_list, &blkif->pending_free);
> > 	wake_up(&blkif->pending_free_wq);
> >
> > which means that we are giving out the 'req' _before_ we decrement
> > the refcnts.
> >
> > Could that mean that __do_block_io_op takes it for a spin - oh
> > wait, it won't, as it is sitting on a WRITE_BARRIER and waiting:
> >
> > 1226         if (drain)
> > 1227                 xen_blk_drain_io(pending_req->blkif);
> >
> > But still, that feels 'wrong'?
>
> Mmmm, the wake_up call in free_req in the context of WRITE_BARRIER is
> harmless since the thread is waiting on drain_complete as you say, but I
> take your point that it's all confusing. Do you think it would feel
> better if we gated the call to wake_up in free_req with this condition:
>
> if (was_empty && !atomic_read(&blkif->drain))
>
> Or is this just going to make it even messier?

My head spins when thinking about the refcnt, the drain, and the two or
three workqueues.

>
> Maybe just adding a comment in free_req saying that the wake_up call is
> going to be ignored in the context of a WRITE_BARRIER, since the thread
> is already waiting on drain_complete, is enough.

Perhaps. You do pass in the 'force' bool flag and we could piggyback
on that. Meaning you could do:

/* a comment about what we just mentioned */

if (!force) {
	// do it the old way
} else {

	/* A comment mentioning _why_ we need the code reshuffled */

	// do it the new way
}

It would be a bit messy - but:
 - We won't have to worry about breaking WRITE_BARRIER, as the old
   logic would be preserved. So there is less worry about regressions.
 - The bug-fix would be easy to backport, as it would inject code for
   just the usage you want - that is, to drain all I/Os.
 - It would make a nice distinction and allow us to refactor
   this in future patches.
The cons are that:
 - It would add an extra path for just the use-case of shutting down,
   instead of using the existing one.
 - It would be messy.

But I think when it comes to fixes like these that are candidates for
backports, messy is OK, and if they don't have any possibility of
introducing regressions in existing behavior, then we should stick
with that.

Then in the future we can refactor this to use fewer of the
workqueues, refcnts and atomics we have. It is getting confusing.

Thoughts?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:38:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:38:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ajn-0006XG-Su; Tue, 28 Jan 2014 15:38:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Ajm-0006Ww-7N
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:38:06 +0000
Received: from [193.109.254.147:7624] by server-5.bemta-14.messagelabs.com id
	9C/2C-03510-DDEC7E25; Tue, 28 Jan 2014 15:38:05 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390923483!383898!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12760 invoked from network); 28 Jan 2014 15:38:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:38:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97287271"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:38:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:38:02 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8Ajh-0007lr-V5;
	Tue, 28 Jan 2014 15:38:01 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 15:38:01 +0000
Message-ID: <1390923481-3452-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Olaf Hering <olaf@aepfle.de>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH for 4.5] xl: honor more top level vfb options
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SDL and keymap options for a VFB can now also be specified as top
level options. Documentation is updated accordingly.

This fixes bug #31 and other potential problems.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
 docs/man/xl.cfg.pod.5    |    4 ++--
 tools/libxl/xl_cmdimpl.c |   17 ++++++++++++++---
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9941395..26991c0 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -389,8 +389,8 @@ This options does not control the emulated graphics card presented to
 an HVM guest. See L<Emulated VGA Graphics Device> below for how to
 configure the emulated device. If L<Emulated VGA Graphics Device> options
 are used in a PV guest configuration, xl will pick up B<vnc>, B<vnclisten>,
-B<vncpasswd>, B<vncdisplay> and B<vncunused> to construct paravirtual
-framebuffer device for the guest.
+B<vncpasswd>, B<vncdisplay>, B<vncunused>, B<sdl>, B<opengl> and
+B<keymap> to construct paravirtual framebuffer device for the guest.
 
 Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
 settings, from the following list:
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..23d85f8 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -721,6 +721,15 @@ static void parse_top_level_vnc_options(XLU_Config *config,
     xlu_cfg_get_defbool(config, "vncunused", &vnc->findunused, 0);
 }
 
+static void parse_top_level_sdl_options(XLU_Config *config,
+                                        libxl_sdl_info *sdl)
+{
+    xlu_cfg_get_defbool(config, "sdl", &sdl->enable, 0);
+    xlu_cfg_get_defbool(config, "opengl", &sdl->opengl, 0);
+    xlu_cfg_replace_string (config, "display", &sdl->display, 0);
+    xlu_cfg_replace_string (config, "xauthority", &sdl->xauthority, 0);
+}
+
 static void parse_config_data(const char *config_source,
                               const char *config_data,
                               int config_len,
@@ -1657,9 +1666,13 @@ skip_vfb:
                                     libxl_device_vkb_init);
 
             parse_top_level_vnc_options(config, &vfb->vnc);
+            parse_top_level_sdl_options(config, &vfb->sdl);
+            xlu_cfg_replace_string (config, "keymap", &vfb->keymap, 0);
         }
-    } else
+    } else {
         parse_top_level_vnc_options(config, &b_info->u.hvm.vnc);
+        parse_top_level_sdl_options(config, &b_info->u.hvm.sdl);
+    }
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
@@ -1676,8 +1689,6 @@ skip_vfb:
                                          LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
 
         xlu_cfg_replace_string (config, "keymap", &b_info->u.hvm.keymap, 0);
-        xlu_cfg_get_defbool(config, "sdl", &b_info->u.hvm.sdl.enable, 0);
-        xlu_cfg_get_defbool(config, "opengl", &b_info->u.hvm.sdl.opengl, 0);
         xlu_cfg_get_defbool (config, "spice", &b_info->u.hvm.spice.enable, 0);
         if (!xlu_cfg_get_long (config, "spiceport", &l, 0))
             b_info->u.hvm.spice.port = l;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:38:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:38:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ajn-0006XG-Su; Tue, 28 Jan 2014 15:38:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Ajm-0006Ww-7N
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:38:06 +0000
Received: from [193.109.254.147:7624] by server-5.bemta-14.messagelabs.com id
	9C/2C-03510-DDEC7E25; Tue, 28 Jan 2014 15:38:05 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1390923483!383898!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12760 invoked from network); 28 Jan 2014 15:38:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:38:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97287271"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 15:38:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:38:02 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8Ajh-0007lr-V5;
	Tue, 28 Jan 2014 15:38:01 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 15:38:01 +0000
Message-ID: <1390923481-3452-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Olaf Hering <olaf@aepfle.de>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH for 4.5] xl: honor more top level vfb options
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SDL and keymap options for a VFB can now also be specified as top
level options. Documentation is updated accordingly.

This fixes bug #31 and further possible problems.
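
For illustration only (hypothetical guest name and paths), a PV guest
config that relies on these top-level options, with no vfb= line at
all, might look like:

    # xl picks these up and constructs the PV framebuffer device itself
    name      = "pvguest"
    kernel    = "/boot/vmlinuz-guest"
    vnc       = 1
    vnclisten = "0.0.0.0"
    sdl       = 0
    opengl    = 0
    keymap    = "en-gb"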

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
---
 docs/man/xl.cfg.pod.5    |    4 ++--
 tools/libxl/xl_cmdimpl.c |   17 ++++++++++++++---
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9941395..26991c0 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -389,8 +389,8 @@ This options does not control the emulated graphics card presented to
 an HVM guest. See L<Emulated VGA Graphics Device> below for how to
 configure the emulated device. If L<Emulated VGA Graphics Device> options
 are used in a PV guest configuration, xl will pick up B<vnc>, B<vnclisten>,
-B<vncpasswd>, B<vncdisplay> and B<vncunused> to construct paravirtual
-framebuffer device for the guest.
+B<vncpasswd>, B<vncdisplay>, B<vncunused>, B<sdl>, B<opengl> and
+B<keymap> to construct paravirtual framebuffer device for the guest.
 
 Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
 settings, from the following list:
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..23d85f8 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -721,6 +721,15 @@ static void parse_top_level_vnc_options(XLU_Config *config,
     xlu_cfg_get_defbool(config, "vncunused", &vnc->findunused, 0);
 }
 
+static void parse_top_level_sdl_options(XLU_Config *config,
+                                        libxl_sdl_info *sdl)
+{
+    xlu_cfg_get_defbool(config, "sdl", &sdl->enable, 0);
+    xlu_cfg_get_defbool(config, "opengl", &sdl->opengl, 0);
+    xlu_cfg_replace_string (config, "display", &sdl->display, 0);
+    xlu_cfg_replace_string (config, "xauthority", &sdl->xauthority, 0);
+}
+
 static void parse_config_data(const char *config_source,
                               const char *config_data,
                               int config_len,
@@ -1657,9 +1666,13 @@ skip_vfb:
                                     libxl_device_vkb_init);
 
             parse_top_level_vnc_options(config, &vfb->vnc);
+            parse_top_level_sdl_options(config, &vfb->sdl);
+            xlu_cfg_replace_string (config, "keymap", &vfb->keymap, 0);
         }
-    } else
+    } else {
         parse_top_level_vnc_options(config, &b_info->u.hvm.vnc);
+        parse_top_level_sdl_options(config, &b_info->u.hvm.sdl);
+    }
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
@@ -1676,8 +1689,6 @@ skip_vfb:
                                          LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
 
         xlu_cfg_replace_string (config, "keymap", &b_info->u.hvm.keymap, 0);
-        xlu_cfg_get_defbool(config, "sdl", &b_info->u.hvm.sdl.enable, 0);
-        xlu_cfg_get_defbool(config, "opengl", &b_info->u.hvm.sdl.opengl, 0);
         xlu_cfg_get_defbool (config, "spice", &b_info->u.hvm.spice.enable, 0);
         if (!xlu_cfg_get_long (config, "spiceport", &l, 0))
             b_info->u.hvm.spice.port = l;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:39:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ako-0006ja-K6; Tue, 28 Jan 2014 15:39:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8Akn-0006jL-D3
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:39:09 +0000
Received: from [85.158.139.211:30227] by server-7.bemta-5.messagelabs.com id
	D8/37-04824-C1FC7E25; Tue, 28 Jan 2014 15:39:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390923546!166487!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19475 invoked from network); 28 Jan 2014 15:39:07 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:39:07 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SFd5B4014589
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 15:39:05 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFd4II006600
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 15:39:04 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SFd4RK019407; Tue, 28 Jan 2014 15:39:04 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 07:39:03 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0629B1BFA73; Tue, 28 Jan 2014 10:39:03 -0500 (EST)
Date: Tue, 28 Jan 2014 10:39:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140128153902.GD4308@phenom.dumpdata.com>
References: <52E7B61202000078001178E3@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E7B61202000078001178E3@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/privcmd: sprinkle around
 cond_resched() calls in mmap ioctl handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 12:52:18PM +0000, Jan Beulich wrote:
> Many of these operations can be arbitrarily long, which can become a
> problem irrespective of them being exposed to privileged users only.

You probably want to sprinkle that into the balloon driver as well.

Any thoughts on upstreaming this too?

Thanks.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- sle11sp3.orig/drivers/xen/privcmd/privcmd.c	2012-12-12 12:05:51.000000000 +0100
> +++ sle11sp3/drivers/xen/privcmd/privcmd.c	2014-01-16 10:01:23.000000000 +0100
> @@ -126,6 +126,9 @@ static long privcmd_ioctl(struct file *f
>  
>  		p = mmapcmd.entry;
>  		for (i = 0; i < mmapcmd.num;) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
>  
>  			ret = -ENOMEM;
> @@ -158,6 +161,9 @@ static long privcmd_ioctl(struct file *f
>  
>  		i = 0;
>  		list_for_each(l, &pagelist) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = i + min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
>  
>  			msg = (privcmd_mmap_entry_t*)(l + 1);
> @@ -186,6 +192,9 @@ static long privcmd_ioctl(struct file *f
>  		addr = vma->vm_start;
>  		i = 0;
>  		list_for_each(l, &pagelist) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = i + min(mmapcmd.num - i, MMAP_NR_PER_PAGE);
>  
>  			msg = (privcmd_mmap_entry_t*)(l + 1);
> @@ -209,8 +218,12 @@ static long privcmd_ioctl(struct file *f
>  
>  	mmap_out:
>  		up_write(&mm->mmap_sem);
> -		list_for_each_safe(l,l2,&pagelist)
> +		i = 0;
> +		list_for_each_safe(l, l2, &pagelist) {
> +			if (!(++i & 7))
> +				cond_resched();
>  			free_page((unsigned long)l);
> +		}
>  	}
>  #undef MMAP_NR_PER_PAGE
>  	break;
> @@ -236,6 +249,9 @@ static long privcmd_ioctl(struct file *f
>  
>  		p = m.arr;
>  		for (i=0; i<nr_pages; )	{
> +			if (i)
> +				cond_resched();
> +
>  			nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  
>  			ret = -ENOMEM;
> @@ -270,6 +286,9 @@ static long privcmd_ioctl(struct file *f
>  		ret = 0;
>  		paged_out = 0;
>  		list_for_each(l, &pagelist) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = i + min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  			mfn = (unsigned long *)(l + 1);
>  
> @@ -302,6 +321,9 @@ static long privcmd_ioctl(struct file *f
>  			else
>  				ret = 0;
>  			list_for_each(l, &pagelist) {
> +				if (i)
> +					cond_resched();
> +
>  				nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  				mfn = (unsigned long *)(l + 1);
>  				if (copy_to_user(p, mfn, nr*sizeof(*mfn)))
> @@ -310,8 +332,12 @@ static long privcmd_ioctl(struct file *f
>  			}
>  		}
>  	mmapbatch_out:
> -		list_for_each_safe(l,l2,&pagelist)
> +		i = 0;
> +		list_for_each_safe(l, l2, &pagelist) {
> +			if (!(++i & 7))
> +				cond_resched();
>  			free_page((unsigned long)l);
> +		}
>  	}
>  	break;
>  
> @@ -335,6 +361,9 @@ static long privcmd_ioctl(struct file *f
>  
>  		p = m.arr;
>  		for (i = 0; i < nr_pages; i += nr, p += nr) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  
>  			ret = -ENOMEM;
> @@ -367,6 +396,9 @@ static long privcmd_ioctl(struct file *f
>  		ret = 0;
>  		paged_out = 0;
>  		list_for_each(l, &pagelist) {
> +			if (i)
> +				cond_resched();
> +
>  			nr = i + min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  			mfn = (void *)(l + 1);
>  			err = (void *)(l + 1);
> @@ -397,6 +429,9 @@ static long privcmd_ioctl(struct file *f
>  			ret = paged_out ? -ENOENT : 0;
>  			i = 0;
>  			list_for_each(l, &pagelist) {
> +				if (i)
> +					cond_resched();
> +
>  				nr = min(nr_pages - i, MMAPBATCH_NR_PER_PAGE);
>  				err = (void *)(l + 1);
>  				if (copy_to_user(p, err, nr * sizeof(*err)))
> @@ -407,8 +442,12 @@ static long privcmd_ioctl(struct file *f
>  			ret = -EFAULT;
>  
>  	mmapbatch_v2_out:
> -		list_for_each_safe(l, l2, &pagelist)
> +		i = 0;
> +		list_for_each_safe(l, l2, &pagelist) {
> +			if (!(++i & 7))
> +				cond_resched();
>  			free_page((unsigned long)l);
> +		}
>  #undef MMAPBATCH_NR_PER_PAGE
>  	}
>  	break;
> 
> 
> 

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:42:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ao1-00074f-F8; Tue, 28 Jan 2014 15:42:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Ao0-00074Z-Kk
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:42:28 +0000
Received: from [193.109.254.147:17291] by server-1.bemta-14.messagelabs.com id
	49/23-15600-3EFC7E25; Tue, 28 Jan 2014 15:42:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390923744!391496!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24563 invoked from network); 28 Jan 2014 15:42:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:42:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95292672"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:42:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:42:23 -0500
Message-ID: <1390923742.7753.113.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 28 Jan 2014 15:42:22 +0000
In-Reply-To: <52E7DCC50200007800117A93@nat28.tlf.novell.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
	<52E7DCC50200007800117A93@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 15:37 +0000, Jan Beulich wrote:
> >>> On 28.01.14 at 16:31, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-28 at 15:08 +0000, Jan Beulich wrote:
> >> >>> On 28.01.14 at 15:30, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> >> > On 01/28/2014 12:29 PM, Stefano Stabellini wrote:
> >> >> On Tue, 28 Jan 2014, Ian Campbell wrote:
> >> >>> On Tue, 2014-01-28 at 02:16 +0000, xen.org wrote:
> >> >>>> flight 24553 qemu-upstream-unstable real [real]
> >> >>>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24553/ 
> >> >>>>
> >> >>>> Failures :-/ but no regressions.
> >> >>>
> >> >>> QEMU_UPSTREAM_REVISION in xen.git is currently set to qemu-xen-4.4.0-rc1
> >> >>> and so will require updating to actually pull this new stuff into the
> >> >>> release.
> >> >>
> >> >> OK. But given that the new code is not part of any RCs, should I wait
> >> >> for the next one? Should we go back to "master"?
> >> > 
> >> > I guess we should have gone back to "master" after tagging the last RC?
> >> 
> >> Correct - this should have happened the moment the first new
> >> commit passed the push gate on the qemuu tree.
> > 
> > There's no need to wait that long -- this can be done in a commit which
> > immediately follows the one tagged as the rc.
> 
> Except that it might end up being pointless if nothing really
> changes in qemuu until the next RC (or the final release).

The "point" is to do it while we are thinking about it, not at some
random later date when we are more than likely going to forget.

> 
> >> Don't know whether there would be a way to automate this...
> > 
> > It sounds like it would be tricky. I suppose a cronjob which verifies
> > that xen.git/staging always either says "master" or refers to a tag
> > which is the latest in qemu.git might work, but it sounds subtle and
> > error prone to me.
> 
> Yes, I realize there would be a number of "special" situations to
> take into consideration...
> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:43:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Aom-000795-Tp; Tue, 28 Jan 2014 15:43:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Aol-00078r-FK
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:43:15 +0000
Received: from [193.109.254.147:37878] by server-9.bemta-14.messagelabs.com id
	C5/4D-13957-210D7E25; Tue, 28 Jan 2014 15:43:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390923794!387135!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32517 invoked from network); 28 Jan 2014 15:43:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:43:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:43:13 +0000
Message-Id: <52E7DE2D0200007800117ABF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:43:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52E7B61202000078001178E3@nat28.tlf.novell.com>
	<20140128153902.GD4308@phenom.dumpdata.com>
In-Reply-To: <20140128153902.GD4308@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/privcmd: sprinkle around
 cond_resched() calls in mmap ioctl handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:39, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Jan 28, 2014 at 12:52:18PM +0000, Jan Beulich wrote:
>> Many of these operations can be arbitrarily long, which can become a
>> problem irrespective of them being exposed to privileged users only.
> 
> You probably also want to sprinkle that in the balloon driver as well.

At least in that old code I don't think there is a need - the batches
that get processed already get limited by the size of the static
frame_list[] array.

> Any thoughts of upstreaming this as well?

Eventually, yes. But not something I have time for right now.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:48:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8AtK-0007b5-PY; Tue, 28 Jan 2014 15:47:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8AtJ-0007az-NA
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 15:47:57 +0000
Received: from [85.158.143.35:54782] by server-3.bemta-4.messagelabs.com id
	A6/8D-32360-D21D7E25; Tue, 28 Jan 2014 15:47:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390924075!1408984!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18029 invoked from network); 28 Jan 2014 15:47:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 15:47:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95295748"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 15:47:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 10:47:52 -0500
Message-ID: <1390924071.7753.115.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Tue, 28 Jan 2014 15:47:51 +0000
In-Reply-To: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),

I should have asked this sooner -- can you point me to the bindings
documentation for this device?

http://www.gossamer-threads.com/lists/linux/kernel/1845585 suggests it
is not yet agreed, so having Xen depend on it may have been a mistake.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Atj-0007dO-6J; Tue, 28 Jan 2014 15:48:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Ati-0007dB-8l
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:48:22 +0000
Received: from [193.109.254.147:37008] by server-13.bemta-14.messagelabs.com
	id D3/CA-19374-541D7E25; Tue, 28 Jan 2014 15:48:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390924100!393317!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7207 invoked from network); 28 Jan 2014 15:48:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:48:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:48:20 +0000
Message-Id: <52E7DF5F0200007800117AD5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:48:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281532580.4373@kaball.uk.xensource.com>
	<1390923366.7753.112.camel@kazak.uk.xensource.com>
In-Reply-To: <1390923366.7753.112.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:36, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> But that brings up an interesting point. FWIW I think it would be
> absolutely fine for the committer responsible for one of the separate
> trees to push updates to the bits of xen.git which pull in that code,
> specifically the Config.mk bits.

Yet if we didn't refer to specific commit IDs, but instead - like we
now intend to be doing for qemuu - to a branch head, such
updates wouldn't be necessary anymore other than for RC or
release purposes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 15:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 15:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Atj-0007dO-6J; Tue, 28 Jan 2014 15:48:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Ati-0007dB-8l
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 15:48:22 +0000
Received: from [193.109.254.147:37008] by server-13.bemta-14.messagelabs.com
	id D3/CA-19374-541D7E25; Tue, 28 Jan 2014 15:48:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1390924100!393317!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7207 invoked from network); 28 Jan 2014 15:48:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 15:48:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 28 Jan 2014 15:48:20 +0000
Message-Id: <52E7DF5F0200007800117AD5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 28 Jan 2014 15:48:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <osstest-24553-mainreport@xen.org>
	<1390901845.7753.6.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281228100.4373@kaball.uk.xensource.com>
	<52E7BF17.3050200@eu.citrix.com>
	<52E7D6000200007800117A11@nat28.tlf.novell.com>
	<1390923084.7753.110.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281532580.4373@kaball.uk.xensource.com>
	<1390923366.7753.112.camel@kazak.uk.xensource.com>
In-Reply-To: <1390923366.7753.112.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen.org" <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable test] 24553: tolerable FAIL
 - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 16:36, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> But that brings up an interesting point. FWIW I think it would be
> absolutely fine for the committer responsible for one of the separate
> trees to push updates to the bits of xen.git which pull in that code,
> specifically the Config.mk bits.

Yet if we referred not to specific commit IDs but - as we now intend
to do for qemuu - to a branch head, such updates would no longer be
necessary except for RC or release purposes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:02:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8B6y-0000Qy-6w; Tue, 28 Jan 2014 16:02:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8B6u-0000Qt-Ih
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 16:02:00 +0000
Received: from [193.109.254.147:2585] by server-6.bemta-14.messagelabs.com id
	5F/A7-14958-774D7E25; Tue, 28 Jan 2014 16:01:59 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390924917!391603!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21304 invoked from network); 28 Jan 2014 16:01:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:01:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95302164"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:01:57 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:01:56 -0500
Message-ID: <52E7D472.1070007@citrix.com>
Date: Tue, 28 Jan 2014 17:01:54 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1390817621-12031-1-git-send-email-roger.pau@citrix.com>
	<20140127212146.GA32007@phenom.dumpdata.com>
	<52E7A635.2090108@citrix.com>
	<20140128153710.GC4308@phenom.dumpdata.com>
In-Reply-To: <20140128153710.GC4308@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/01/14 16:37, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 01:44:37PM +0100, Roger Pau Monné wrote:
>> On 27/01/14 22:21, Konrad Rzeszutek Wilk wrote:
>>> On Mon, Jan 27, 2014 at 11:13:41AM +0100, Roger Pau Monne wrote:
>>>> @@ -976,17 +983,19 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>>>>  	 * the proper response on the ring.
>>>>  	 */
>>>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>>>> -		xen_blkbk_unmap(pending_req->blkif,
>>>> +		struct xen_blkif *blkif = pending_req->blkif;
>>>> +
>>>> +		xen_blkbk_unmap(blkif,
>>>>  		                pending_req->segments,
>>>>  		                pending_req->nr_pages);
>>>> -		make_response(pending_req->blkif, pending_req->id,
>>>> +		make_response(blkif, pending_req->id,
>>>>  			      pending_req->operation, pending_req->status);
>>>> -		xen_blkif_put(pending_req->blkif);
>>>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>>>> -			if (atomic_read(&pending_req->blkif->drain))
>>>> -				complete(&pending_req->blkif->drain_complete);
>>>> +		free_req(blkif, pending_req);
>>>> +		xen_blkif_put(blkif);
>>>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>>>> +			if (atomic_read(&blkif->drain))
>>>> +				complete(&blkif->drain_complete);
>>>>  		}
>>>> -		free_req(pending_req->blkif, pending_req);
>>>
>>> I keep coming back to this and I am not sure what to think - especially
>>> in the context of WRITE_BARRIER and disconnecting the vbd.
>>>
>>> You moved the 'free_req' to be done before you do atomic_read/dec.
>>>
>>> Which means that we do:
>>>
>>> 	list_add(&req->free_list, &blkif->pending_free);
>>> 	wake_up(&blkif->pending_free_wq);
>>>
>>> 	atomic_dec
>>> 	if atomic_read <= 2 poke thread that is waiting for drain.
>>>
>>> while in the past we did:
>>>
>>> 	atomic_dec
>>> 	if atomic_read <= 2 poke thread that is waiting for drain.
>>>
>>> 	list_add(&req->free_list, &blkif->pending_free);
>>> 	wake_up(&blkif->pending_free_wq);
>>>
>>> which means that we are giving out the 'req' _before_ we decrement
>>> the refcnts.
>>>
>>> Could that mean that __do_block_io_op takes it for a spin - oh
>>> wait, it won't, as it is sitting on a WRITE_BARRIER and waiting:
>>>
>>> 1226         if (drain)
>>> 1227                 xen_blk_drain_io(pending_req->blkif);
>>>
>>> But still, that feels 'wrong'?
>>
>> Mmmm, the wake_up call in free_req in the context of WRITE_BARRIER is
>> harmless since the thread is waiting on drain_complete as you say, but I
>> take your point that it's all confusing. Do you think it would feel
>> better if we gated the call to wake_up in free_req with this condition:
>>
>> if (was_empty && !atomic_read(&blkif->drain))
>>
>> Or is this just going to make it even messier?
>
> My head spins when thinking about the refcnt, drain, and the two or
> three workqueues.
>
>> Maybe just adding a comment in free_req saying that the wake_up call is
>> going to be ignored in the context of a WRITE_BARRIER, since the thread
>> is already waiting on drain_complete, is enough.
>
> Perhaps. You do pass in the 'force' bool flag and we could piggyback
> on that. Meaning you could do

In the new version I'm preparing I'm no longer calling drain_io from
xen_blkif_schedule (so there's no "force" flag); instead I've moved the
cleanup code to xen_blkif_free, where I think it makes more sense.

Also, the force flag was just a local variable in drain_io; I think it
would get even messier if we added yet another variable (force) to the
xen_blkif struct.

> /* a comment about what we just mentioned */
>
> if (!force) {
> 	// do it the old way
> } else {
>
> 	/* A comment mentioning _why_ we need the code reshuffled */
>
> 	// do it the new way
> }
>
> It would be a bit messy - but:
>  - We won't have to worry about breaking WRITE_BARRIER, as the old
>    logic would be preserved. So less worry about regressions.
>  - The bug-fix would be easy to backport, as it would inject code for
>    just the usage you want - that is, to drain all I/Os.
>  - It would make a nice distinction and allow us to refactor
>    this in future patches.
> The cons are that:
>  - It would add an extra path for just the use-case of shutting down,
>    instead of using the existing one.
>  - It would be messy.
>
> But I think when it comes to fixes like these that are candidates for
> backports, messy is OK - as long as they have no possibility of
> introducing regressions in other existing behaviors, we should stick
> with that.
>
> Then in the future we can refactor this to use fewer of the
> workqueues, refcnts and atomics we have. It is getting confusing.
>
> Thoughts?

Let me post the whole series as I have it now, and we can pick it up
again from that.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:02:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8B7e-0000UP-RC; Tue, 28 Jan 2014 16:02:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8B7d-0000UG-6J
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:02:45 +0000
Received: from [193.109.254.147:19121] by server-15.bemta-14.messagelabs.com
	id FD/00-22186-4A4D7E25; Tue, 28 Jan 2014 16:02:44 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1390924962!390188!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11715 invoked from network); 28 Jan 2014 16:02:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:02:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="97299607"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 16:02:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:02:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8B7Y-0008CR-Qs;
	Tue, 28 Jan 2014 16:02:40 +0000
Date: Tue, 28 Jan 2014 16:02:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390921544.7753.108.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
	<1390921544.7753.108.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > > > but I am a bit wary of making that change at RC2.
> > > 
> > > I'm leaning the other way -- I'm wary of open coding magic locking
> > > primitives to work around this issue on a case by case basis. It's just
> > > too subtle IMHO.
> > > 
> > > The IPI and cross CPU calling primitives are basically predicated on
> > > those IPIs interrupting normal interrupt handlers.
> > 
> > The problem is that we don't know if we can context switch properly
> > nested interrupts.
> 
> What do you mean? We don't have to context switch an IPI.

Sorry, I meant save/restore registers, stack pointer, processor mode,
etc, for nested interrupts.


> > Also I would need to think harder whether everything
> > would work correctly without hitches with multiple SGIs happening
> > simultaneously (with more than 2 cpus involved).
> 
> Since all IPIs would be at the same higher priority only one will be
> active on each CPU at a time. If you are worried about multiple CPUs
> then that is already an issue today, just at a lower priority.

That is correct, but if we moved the on_selected_cpus call out of the
interrupt handler I don't think the problem could occur.


> I have hacked the IPI priority to be higher in the past and it worked
> fine, I just never got round to cleaning it up for submission (I hadn't
> thought of the locking thing and my use case was low priority).
> 
> The interrupt entry and exit paths were written with nested interrupt in
> mind and they have to be so in order to handle interrupts which occur
> from both guest and hypervisor context.
>
> > On the other hand we know that both Oleksandr's and my solution should
> > work OK with no surprises if implemented correctly.
> 
> That's a big if in my mind; any use of trylock is very subtle IMHO.

I wouldn't accept that patch for 4.4 either (or for 4.5 given that we
can certainly come up with something better by that time).


> AIUI this issue only occurs with "proto device assignment" patches added
> to 4.4, in which case I think the solution can wait until 4.5 and can be
> done properly via the IPI priority fix.
 
I think this is a pretty significant problem. Even if we don't commit a
fix, we should post a proper patch that we deem acceptable and link it from
the wiki, so that anybody who needs it can find it and be sure that it
works correctly.
In my opinion if we go to this length then we might as well commit it
(if it doesn't touch common code of course), but I'll leave the decision
up to you and George.

Given the constraints, the solution I would feel more comfortable with at
this time is something like the following patch (lightly tested):

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..b00ca73 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -58,6 +58,25 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
 
 static unsigned nr_lrs;
 
+static void gic_irq_eoi(void *info)
+{
+    int virq = (uintptr_t) info;
+    GICC[GICC_DIR] = virq;
+}
+
+static DEFINE_PER_CPU(struct pending_irq*, eoi_irq);
+static void eoi_action(unsigned long unused)
+{
+    struct pending_irq *p = this_cpu(eoi_irq);
+    ASSERT(p != NULL);
+
+    on_selected_cpus(cpumask_of(p->desc->arch.eoi_cpu),
+            gic_irq_eoi, (void*)(uintptr_t)p->desc->irq, 0);
+
+    this_cpu(eoi_irq) = NULL;
+}
+static DECLARE_TASKLET(eoi_tasklet, eoi_action, 0);
+
 /* The GIC mapping of CPU interfaces does not necessarily match the
  * logical CPU numbering. Let's use mapping as returned by the GIC
  * itself
@@ -897,12 +916,6 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     int i = 0, virq, pirq = -1;
@@ -962,8 +975,11 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
             if ( cpu == smp_processor_id() )
                 gic_irq_eoi((void*)(uintptr_t)pirq);
             else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
+            {
+                ASSERT(this_cpu(eoi_irq) == NULL);
+                this_cpu(eoi_irq) = p;
+                tasklet_schedule(&eoi_tasklet);
+            }
         }
 
         i++;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:03:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8B8j-0000bh-I8; Tue, 28 Jan 2014 16:03:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W8B8j-0000bb-2F
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:03:53 +0000
Received: from [193.109.254.147:47970] by server-15.bemta-14.messagelabs.com
	id A0/62-22186-8E4D7E25; Tue, 28 Jan 2014 16:03:52 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390925030!398167!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32317 invoked from network); 28 Jan 2014 16:03:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:03:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95303189"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:03:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:03:20 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W8B8B-0008DP-Q1;
	Tue, 28 Jan 2014 16:03:19 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 16:03:03 +0000
Message-ID: <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
X-Mailer: git-send-email 1.8.5.3
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>
Subject: [Xen-devel] [PATCH] doc: Better documentation about the
	usbdevice=['host:bus.addr'] format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 docs/man/xl.cfg.pod.5 | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9941395..9c0b438 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1251,6 +1251,10 @@ Host devices can also be passed through in this way, by specifying
 host:USBID, where USBID is of the form xxxx:yyyy.  The USBID can
 typically be found by using lsusb or usb-devices.
 
+If you wish to use the "host:bus.addr" format, remove any leading '0' from the
+bus and addr. For example, for the USB device on bus 008 dev 002, you will
+write "host:8.2".
+
 The form usbdevice=DEVICE is also accepted for backwards compatibility.
 
 More valid options can be found in the "usbdevice" section of the qemu
-- 
Anthony PERARD


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:04:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8B94-0000ek-VN; Tue, 28 Jan 2014 16:04:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8B94-0000eP-5n
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 16:04:14 +0000
Received: from [85.158.143.35:44336] by server-2.bemta-4.messagelabs.com id
	1C/13-11386-DF4D7E25; Tue, 28 Jan 2014 16:04:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390925051!1411003!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24177 invoked from network); 28 Jan 2014 16:04:12 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 16:04:12 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SG48Du025516
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 16:04:09 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SG46gT002886
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 16:04:07 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SG465o002850; Tue, 28 Jan 2014 16:04:06 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 08:04:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DB0FF1BFA73; Tue, 28 Jan 2014 11:04:04 -0500 (EST)
Date: Tue, 28 Jan 2014 11:04:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stanislaw Gruszka <sgruszka@redhat.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140128160404.GA5732@phenom.dumpdata.com>
References: <20140128150848.GA1428@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140128150848.GA1428@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>, xen-devel@lists.xensource.com,
	Ben Guthro <benjamin.guthro@citrix.com>, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 04:08:49PM +0100, Stanislaw Gruszka wrote:
> Hi,

Hey!
> 
> We have this bug report on our bugzilla:
> https://bugzilla.redhat.com/show_bug.cgi?id=1058268
> 
> In short:
> WARNING: CPU: 0 PID: 6733 at drivers/base/syscore.c:104 syscore_resume+0x9a/0xe0()
> Interrupts enabled after xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
> 
> Perhaps I'm wrong, but I think the syscore_ops->resume() callback should
> be atomic, i.e. it cannot use mutexes or kmalloc(..., GFP_KERNEL), which
> is not true of xen_acpi_processor_resume(). That callback was introduced
> by commit 3fac10145b766a2244422788f62dc35978613fd8. Fixing that will not
> be easy IMHO, but maybe you have some ideas? :-)

If that is the case, then perhaps the patch below will fix it?

(I didn't compile test it.)

diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index 7231859..7602229 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -46,7 +46,7 @@ module_param_named(off, no_hypercall, int, 0400);
  */
 static unsigned int nr_acpi_bits;
 /* Mutex to protect the acpi_ids_done - for CPU hotplug use. */
-static DEFINE_MUTEX(acpi_ids_mutex);
+static DEFINE_SPINLOCK(acpi_ids_lock);
 /* Which ACPI ID we have processed from 'struct acpi_processor'. */
 static unsigned long *acpi_ids_done;
 /* Which ACPI ID exist in the SSDT/DSDT processor definitions. */
@@ -68,7 +68,7 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
 	int ret = 0;
 
 	dst_cx_states = kcalloc(_pr->power.count,
-				sizeof(struct xen_processor_cx), GFP_KERNEL);
+				sizeof(struct xen_processor_cx), GFP_ATOMIC);
 	if (!dst_cx_states)
 		return -ENOMEM;
 
@@ -149,7 +149,7 @@ xen_copy_pss_data(struct acpi_processor *_pr,
 		     sizeof(struct acpi_processor_px));
 
 	dst_states = kcalloc(_pr->performance->state_count,
-			     sizeof(struct xen_processor_px), GFP_KERNEL);
+			     sizeof(struct xen_processor_px), GFP_ATOMIC);
 	if (!dst_states)
 		return ERR_PTR(-ENOMEM);
 
@@ -275,9 +275,9 @@ static int upload_pm_data(struct acpi_processor *_pr)
 {
 	int err = 0;
 
-	mutex_lock(&acpi_ids_mutex);
+	spin_lock(&acpi_ids_lock);
 	if (__test_and_set_bit(_pr->acpi_id, acpi_ids_done)) {
-		mutex_unlock(&acpi_ids_mutex);
+		spin_unlock(&acpi_ids_lock);
 		return -EBUSY;
 	}
 	if (_pr->flags.power)
@@ -286,7 +286,7 @@ static int upload_pm_data(struct acpi_processor *_pr)
 	if (_pr->performance && _pr->performance->states)
 		err |= push_pxx_to_hypervisor(_pr);
 
-	mutex_unlock(&acpi_ids_mutex);
+	spin_unlock(&acpi_ids_lock);
 	return err;
 }
 static unsigned int __init get_max_acpi_id(void)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:06:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BBB-0000u6-K3; Tue, 28 Jan 2014 16:06:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8BBA-0000tt-0P
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 16:06:24 +0000
Received: from [85.158.143.35:14808] by server-3.bemta-4.messagelabs.com id
	DD/B0-32360-F75D7E25; Tue, 28 Jan 2014 16:06:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390925181!1400306!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31924 invoked from network); 28 Jan 2014 16:06:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 16:06:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SG5Frv026938
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 16:05:16 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SG5E9A006889
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 16:05:14 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SG5Edb006869; Tue, 28 Jan 2014 16:05:14 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 08:05:13 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C305C1BFA73; Tue, 28 Jan 2014 11:05:12 -0500 (EST)
Date: Tue, 28 Jan 2014 11:05:12 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140128160512.GB5732@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] Update QEMU_UPSTREAM_REVISION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 03:23:19PM +0000, Stefano Stabellini wrote:
> Switch back to master.

Could you say why please?
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> diff --git a/Config.mk b/Config.mk
> index 55dce20..484cfdb 100644
> --- a/Config.mk
> +++ b/Config.mk
> @@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
>  SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
>  endif
>  OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
> -QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc1
> +QEMU_UPSTREAM_REVISION ?= master
>  SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
>  # Fri Aug 2 14:12:09 2013 -0400
>  # Fix bug in CBFS file walking with compressed files.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:07:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BBr-0000zG-25; Tue, 28 Jan 2014 16:07:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jsteckli@os.inf.tu-dresden.de>) id 1W8BBo-0000yo-Re
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:07:05 +0000
Received: from [193.109.254.147:3481] by server-6.bemta-14.messagelabs.com id
	7D/2F-14958-8A5D7E25; Tue, 28 Jan 2014 16:07:04 +0000
X-Env-Sender: jsteckli@os.inf.tu-dresden.de
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390925223!398210!1
X-Originating-IP: [141.76.48.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20445 invoked from network); 28 Jan 2014 16:07:03 -0000
Received: from os.inf.tu-dresden.de (HELO os.inf.tu-dresden.de) (141.76.48.99)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 16:07:03 -0000
Received: from [141.76.59.193]
	by os.inf.tu-dresden.de with esmtpsa (TLSv1:DHE-RSA-AES128-SHA:128)
	(Exim 4.82) id 1W8BBg-0004CC-PC; Tue, 28 Jan 2014 17:06:58 +0100
Message-ID: <52E7D598.9030109@os.inf.tu-dresden.de>
Date: Tue, 28 Jan 2014 17:06:48 +0100
From: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, jbeulich@suse.com
References: <1389881624-28144-1-git-send-email-jsteckli@os.inf.tu-dresden.de>
	<20140124180829.GD15785@phenom.dumpdata.com>
	<52E651FD.2020608@os.inf.tu-dresden.de>
	<20140128151646.GA4308@phenom.dumpdata.com>
In-Reply-To: <20140128151646.GA4308@phenom.dumpdata.com>
X-Enigmail-Version: 1.6
Cc: kvm@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 01/28/2014 04:16 PM, Konrad Rzeszutek Wilk wrote:
>> At least 4.8 and probably older compilers compile this as
>> intended. The point is that the standard does not guarantee the
>> intended behavior, i.e. the code is wrong.
> 
> Perhaps I misunderstood Jan's response but it sounded to me like
> the compiler was not adhering to the standard?

Compilers are free to generate code that breaks the current clock code
while staying within the standards. If the compiler ignores the
standard, all bets are off anyway...

>> 
>> I can refer to this LWN article: 
>> https://lwn.net/Articles/508991/
>> 
>> The whole point of ACCESS_ONCE is to avoid time bombs like that.
>> There are lots of places where ACCESS_ONCE is used in the kernel:
>> 
>> http://lxr.free-electrons.com/ident?i=ACCESS_ONCE
>> 
>> See for example the check_zero function here: 
>> http://lxr.free-electrons.com/source/arch/x86/kernel/kvm.c#L559
>> 
> 
> In other words, you don't have a sample of 'bad' compiler code.

Nope.

Julian

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iEUEARECAAYFAlLn1ZgACgkQ2EtjUdW3H9n/DgCSAmrVCyxrs42aFFB3Ug+aw4kN
7wCgmEO4CbOI3BkOXKSorY91td9u7H8=
=fgrl
-----END PGP SIGNATURE-----

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:09:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BE3-0001Id-OM; Tue, 28 Jan 2014 16:09:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8BE3-0001IQ-07
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 16:09:23 +0000
Received: from [85.158.139.211:42513] by server-8.bemta-5.messagelabs.com id
	58/DF-29838-236D7E25; Tue, 28 Jan 2014 16:09:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1390925359!172382!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1539 invoked from network); 28 Jan 2014 16:09:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:09:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95306213"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:09:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:09:18 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8BDx-0008HW-5l;
	Tue, 28 Jan 2014 16:09:17 +0000
Date: Tue, 28 Jan 2014 16:09:13 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140128160512.GB5732@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401281607070.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
	<20140128160512.GB5732@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] Update QEMU_UPSTREAM_REVISION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 03:23:19PM +0000, Stefano Stabellini wrote:
> > Switch back to master.
> 
> Could you say why please?

See this thread: http://marc.info/?l=xen-devel&m=139090192108528

This should just be an automatic commit (which at the moment is
unfortunately manual) to switch the Xen build system to clone the
master branch of qemu-xen instead of the RC1 tag.
We obviously switch to cloning the RC1 tag for the RC1 release;
however, now that RC1 is over, we should go back to master.
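As a quick aside on the mechanism: Config.mk uses `?=`, so the revision is only a default and anyone can override it from the make command line without editing the file. A small standalone demo (hypothetical `demo.mk`, not part of the Xen tree):

```shell
# "?=" assigns only if the variable is not already set, so a user or
# build script can pin a different qemu-xen revision at invocation time.
printf 'QEMU_UPSTREAM_REVISION ?= master\nall:\n\t@echo $(QEMU_UPSTREAM_REVISION)\n' > demo.mk
make -f demo.mk                                            # prints "master"
make -f demo.mk QEMU_UPSTREAM_REVISION=qemu-xen-4.4.0-rc1  # prints "qemu-xen-4.4.0-rc1"
```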

> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > diff --git a/Config.mk b/Config.mk
> > index 55dce20..484cfdb 100644
> > --- a/Config.mk
> > +++ b/Config.mk
> > @@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
> >  SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
> >  endif
> >  OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
> > -QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc1
> > +QEMU_UPSTREAM_REVISION ?= master
> >  SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
> >  # Fri Aug 2 14:12:09 2013 -0400
> >  # Fix bug in CBFS file walking with compressed files.
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:12:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BGs-0001ek-Ep; Tue, 28 Jan 2014 16:12:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8BGq-0001ec-AZ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:12:16 +0000
Received: from [85.158.137.68:32650] by server-6.bemta-3.messagelabs.com id
	5B/E3-04868-FD6D7E25; Tue, 28 Jan 2014 16:12:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390925533!11837492!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2730 invoked from network); 28 Jan 2014 16:12:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:12:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95307994"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:12:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:12:12 -0500
Message-ID: <1390925530.31814.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 16:12:10 +0000
In-Reply-To: <alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
	<1390921544.7753.108.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 16:02 +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> > > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > > > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > > > > but I am a bit wary of making that change at RC2.
> > > > 
> > > > I'm leaning the other way -- I'm wary of open coding magic locking
> > > > primitives to work around this issue on a case by case basis. It's just
> > > > too subtle IMHO.
> > > > 
> > > > The IPI and cross CPU calling primitives are basically predicated on
> > > > those IPIs interrupting normal interrupt handlers.
> > > 
> > > The problem is that we don't know if we can properly context-switch
> > > nested interrupts.
> > 
> > What do you mean? We don't have to context switch an IPI.
> 
> Sorry, I meant save/restore registers, stack pointer, processor mode,
> etc, for nested interrupts.

Right, we do handle that actually since it is the same code as handles
an IRQ during a hypercall (since a hypercall is just another type of
trap).

> > > Also I would need to think harder whether everything
> > > would work correctly without hitches with multiple SGIs happening
> > > simultaneously (with more than 2 cpus involved).
> > 
> > Since all IPIs would be at the same higher priority only one will be
> > active on each CPU at a time. If you are worried about multiple CPUs
> > then that is already an issue today, just at a lower priority.
> 
> That is correct, but if we moved the on_selected_cpus call out of the
> interrupt handler I don't think the problem could occur.

Sure, I'm not opposed to fixing this issue by making an architectural
improvement to the code which happens to not use on_selected_cpus.

> > AIUI this issue only occurs with "proto device assignment" patches added
> > to 4.4, in which case I think the solution can wait until 4.5 and can be
> > done properly via the IPI priority fix.
>  
> I think this is a pretty significant problem, even if we don't commit a
> fix, we should post a proper patch that we deem acceptable and link it to
> the wiki so that anybody that needs it can find it and be sure that it
> works correctly.

FWIW I have a nearly complete fix for the IPI priority issue, which I
intend to land in 4.5 regardless of whether this issue needs it or not.

> In my opinion if we go to this length then we might as well commit it
> (if it doesn't touch common code of course), but I'll leave the decision
> up to you and George.
> 
> Given the constraints, the solution I would feel more comfortable with at
> this time is something like the following patch (lightly tested):

I don't think this buys you anything over just fixing on_selected_cpus
to work as it should, unless there is some reason why deferring this
work to a tasklet is the logically correct thing to do and/or a better
design?

(IOW if you are only doing this to avoid calling on_selected_cpus in
interrupt context then I think this is the wrong fix).

Ian.

> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..b00ca73 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -58,6 +58,25 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
>  
>  static unsigned nr_lrs;
>  
> +static void gic_irq_eoi(void *info)
> +{
> +    int virq = (uintptr_t) info;
> +    GICC[GICC_DIR] = virq;
> +}
> +
> +static DEFINE_PER_CPU(struct pending_irq*, eoi_irq);
> +static void eoi_action(unsigned long unused)
> +{
> +    struct pending_irq *p = this_cpu(eoi_irq);
> +    ASSERT(p != NULL);
> +
> +    on_selected_cpus(cpumask_of(p->desc->arch.eoi_cpu),
> +            gic_irq_eoi, (void*)(uintptr_t)p->desc->irq, 0);
> +
> +    this_cpu(eoi_irq) = NULL;
> +}
> +static DECLARE_TASKLET(eoi_tasklet, eoi_action, 0);
> +
>  /* The GIC mapping of CPU interfaces does not necessarily match the
>   * logical CPU numbering. Let's use mapping as returned by the GIC
>   * itself
> @@ -897,12 +916,6 @@ int gicv_setup(struct domain *d)
>  
>  }
>  
> -static void gic_irq_eoi(void *info)
> -{
> -    int virq = (uintptr_t) info;
> -    GICC[GICC_DIR] = virq;
> -}
> -
>  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>  {
>      int i = 0, virq, pirq = -1;
> @@ -962,8 +975,11 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>              if ( cpu == smp_processor_id() )
>                  gic_irq_eoi((void*)(uintptr_t)pirq);
>              else
> -                on_selected_cpus(cpumask_of(cpu),
> -                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> +            {
> +                ASSERT(this_cpu(eoi_irq) == NULL);
> +                this_cpu(eoi_irq) = p;
> +                tasklet_schedule(&eoi_tasklet);
> +            }
>          }
>  
>          i++;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:12:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BGs-0001ek-Ep; Tue, 28 Jan 2014 16:12:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8BGq-0001ec-AZ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:12:16 +0000
Received: from [85.158.137.68:32650] by server-6.bemta-3.messagelabs.com id
	5B/E3-04868-FD6D7E25; Tue, 28 Jan 2014 16:12:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390925533!11837492!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2730 invoked from network); 28 Jan 2014 16:12:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:12:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95307994"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:12:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:12:12 -0500
Message-ID: <1390925530.31814.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 28 Jan 2014 16:12:10 +0000
In-Reply-To: <alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
	<1390921544.7753.108.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 16:02 +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> > > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > > > Alternatively, as Ian suggested, we could increase the priotiry of SGIs
> > > > > but I am a bit wary of making that change at RC2.
> > > > 
> > > > I'm leaning the other way -- I'm wary of open coding magic locking
> > > > primitives to work around this issue on a case by case basis. It's just
> > > > too subtle IMHO.
> > > > 
> > > > The IPI and cross CPU calling primitives are basically predicated on
> > > > those IPIs interrupting normal interrupt handlers.
> > > 
> > > The problem is that we don't know if we can context switch properly
> > > nested interrupts.
> > 
> > What do you mean? We don't have to context switch an IPI.
> 
> Sorry, I meant save/restore registers, stack pointer, processor mode,
> etc, for nested interrupts.

Right, we do handle that actually since it is the same code as handles
an IRQ during a hypercall (since a hypercall is just another type of
trap).

> > > Also I would need to think harder whether everything
> > > would work correctly without hitches with multiple SGIs happening
> > > simultaneously (with more than 2 cpus involved).
> > 
> > Since all IPIs would be at the same higher priority only one will be
> > active on each CPU at a time. If you are worried about multiple CPUs
> > then that is already an issue today, just at a lower priority.
> 
> That is correct, but if we moved the on_selected_cpus call out of the
> interrupt handler I don't think the problem could occur.

Sure, I'm not opposed to fixing this issue by making an architectural
improvement to the code which happens to not use on_selected_cpus.

> > AIUI this issue only occurs with "proto device assignment" patches added
> to 4.4, in which case I think the solution can wait until 4.5 and can be
> > done properly via the IPI priority fix.
>  
> I think this is a pretty significant problem; even if we don't commit a
> fix, we should post a proper patch that we deem acceptable and link it to
> the wiki so that anybody that needs it can find it and be sure that it
> works correctly.

FWIW I have an almost-finished fix for the IPI priority thing which I
intend to land in 4.5 regardless of whether this issue needs it or not.

> In my opinion if we go to this length then we might as well commit it
> (if it doesn't touch common code of course), but I'll leave the decision
> up to you and George.
> 
> Given the constraints, the solution I would feel more comfortable with at
> this time is something like the following patch (lightly tested):

I don't think this buys you anything over just fixing on_selected_cpus
to work as it should, unless there is some reason why deferring this
work to a tasklet is the logically correct thing to do and/or a better
design?

(IOW if you are only doing this to avoid calling on_selected_cpus in
interrupt context then I think this is the wrong fix).

Ian.

> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..b00ca73 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -58,6 +58,25 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
>  
>  static unsigned nr_lrs;
>  
> +static void gic_irq_eoi(void *info)
> +{
> +    int virq = (uintptr_t) info;
> +    GICC[GICC_DIR] = virq;
> +}
> +
> +static DEFINE_PER_CPU(struct pending_irq*, eoi_irq);
> +static void eoi_action(unsigned long unused)
> +{
> +    struct pending_irq *p = this_cpu(eoi_irq);
> +    ASSERT(p != NULL);
> +
> +    on_selected_cpus(cpumask_of(p->desc->arch.eoi_cpu),
> +            gic_irq_eoi, (void*)(uintptr_t)p->desc->irq, 0);
> +
> +    this_cpu(eoi_irq) = NULL;
> +}
> +static DECLARE_TASKLET(eoi_tasklet, eoi_action, 0);
> +
>  /* The GIC mapping of CPU interfaces does not necessarily match the
>   * logical CPU numbering. Let's use mapping as returned by the GIC
>   * itself
> @@ -897,12 +916,6 @@ int gicv_setup(struct domain *d)
>  
>  }
>  
> -static void gic_irq_eoi(void *info)
> -{
> -    int virq = (uintptr_t) info;
> -    GICC[GICC_DIR] = virq;
> -}
> -
>  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>  {
>      int i = 0, virq, pirq = -1;
> @@ -962,8 +975,11 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>              if ( cpu == smp_processor_id() )
>                  gic_irq_eoi((void*)(uintptr_t)pirq);
>              else
> -                on_selected_cpus(cpumask_of(cpu),
> -                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> +            {
> +                ASSERT(this_cpu(eoi_irq) == NULL);
> +                this_cpu(eoi_irq) = p;
> +                tasklet_schedule(&eoi_tasklet);
> +            }
>          }
>  
>          i++;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:18:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BMy-0001ts-Js; Tue, 28 Jan 2014 16:18:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8BMx-0001tn-Dy
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 16:18:35 +0000
Received: from [85.158.143.35:42223] by server-3.bemta-4.messagelabs.com id
	62/26-32360-A58D7E25; Tue, 28 Jan 2014 16:18:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390925912!1401531!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19369 invoked from network); 28 Jan 2014 16:18:33 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 16:18:33 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SGHTqf011954
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 16:17:29 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SGHRRM018440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 16:17:28 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SGHRBa012935; Tue, 28 Jan 2014 16:17:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 08:17:27 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4663B1BFA73; Tue, 28 Jan 2014 11:17:26 -0500 (EST)
Date: Tue, 28 Jan 2014 11:17:26 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140128161726.GB6094@phenom.dumpdata.com>
References: <alpine.DEB.2.02.1401281522120.4373@kaball.uk.xensource.com>
	<20140128160512.GB5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401281607070.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401281607070.4373@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xensource.com, George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] Update QEMU_UPSTREAM_REVISION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 04:09:13PM +0000, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jan 28, 2014 at 03:23:19PM +0000, Stefano Stabellini wrote:
> > > Switch back to master.
> > 
> > Could you say why please?
> 
> See this thread: http://marc.info/?l=xen-devel&m=139090192108528

Just read it after I sent this email out :-)

I was thinking that the commit message should have at least
some description, e.g. 'Because we are going for rc2 and
it's time to test new code' or 'the rc1 tag had bugs, rc2 does
not.'


> 
> This should just be an automatic commit (that at the moment
> unfortunately is manual) to switch the xen build system to clone the
> master branch of qemu-xen instead of the RC1 tag.
> Obviously we switch to cloning the RC1 tag for the RC1 release. However
> now that RC1 is over we should go back to master.

Or for the pre-rc2 tag.

> 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > 
> > > diff --git a/Config.mk b/Config.mk
> > > index 55dce20..484cfdb 100644
> > > --- a/Config.mk
> > > +++ b/Config.mk
> > > @@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
> > >  SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
> > >  endif
> > >  OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
> > > -QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc1
> > > +QEMU_UPSTREAM_REVISION ?= master
> > >  SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
> > >  # Fri Aug 2 14:12:09 2013 -0400
> > >  # Fix bug in CBFS file walking with compressed files.
> > > 
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:24:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BSf-0002Hw-IC; Tue, 28 Jan 2014 16:24:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8BSd-0002Hr-Gc
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:24:27 +0000
Received: from [85.158.143.35:33801] by server-1.bemta-4.messagelabs.com id
	44/1E-02132-AB9D7E25; Tue, 28 Jan 2014 16:24:26 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390926264!1420779!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13738 invoked from network); 28 Jan 2014 16:24:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95313984"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:23:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:23:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8BS1-0000Lh-By;
	Tue, 28 Jan 2014 16:23:49 +0000
Date: Tue, 28 Jan 2014 16:23:45 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390925530.31814.5.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281612590.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
	<1390921544.7753.108.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
	<1390925530.31814.5.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-28 at 16:02 +0000, Stefano Stabellini wrote:
> > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> > > > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > > > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > > > > Alternatively, as Ian suggested, we could increase the priority of SGIs
> > > > > > but I am a bit wary of making that change at RC2.
> > > > > 
> > > > > I'm leaning the other way -- I'm wary of open coding magic locking
> > > > > primitives to work around this issue on a case by case basis. It's just
> > > > > too subtle IMHO.
> > > > > 
> > > > > The IPI and cross CPU calling primitives are basically predicated on
> > > > > those IPIs interrupting normal interrupt handlers.
> > > > 
> > > > The problem is that we don't know if we can context switch properly
> > > > nested interrupts.
> > > 
> > > What do you mean? We don't have to context switch an IPI.
> > 
> > Sorry, I meant save/restore registers, stack pointer, processor mode,
> > etc, for nested interrupts.
> 
> Right, we do handle that actually since it is the same code that handles
> an IRQ during a hypercall (since a hypercall is just another type of
> trap).

Ah right, good.


> > > > Also I would need to think harder whether everything
> > > > would work correctly without hitches with multiple SGIs happening
> > > > simultaneously (with more than 2 cpus involved).
> > > 
> > > Since all IPIs would be at the same higher priority only one will be
> > > active on each CPU at a time. If you are worried about multiple CPUs
> > > then that is already an issue today, just at a lower priority.
> > 
> > That is correct, but if we moved the on_selected_cpus call out of the
> > interrupt handler I don't think the problem could occur.
> 
> Sure, I'm not opposed to fixing this issue by making an architectural
> improvement to the code which happens to not use on_selected_cpus.
> 
> > > AIUI this issue only occurs with "proto device assignment" patches added
> > > to 4.4, in which case I think the solution can wait until 4.5 and can be
> > > done properly via the IPI priority fix.
> >  
> > I think this is a pretty significant problem; even if we don't commit a
> > fix, we should post a proper patch that we deem acceptable and link it to
> > the wiki so that anybody that needs it can find it and be sure that it
> > works correctly.
> 
> FWIW I have an almost-finished fix for the IPI priority thing which I
> intend to land in 4.5 regardless of whether this issue needs it or not.

Well, in that case if you can test the patch in the specific case where
an SGI interrupts a lower priority interrupt, it might be worth sending
out the patch now.


> > In my opinion if we go to this length then we might as well commit it
> > (if it doesn't touch common code of course), but I'll leave the decision
> > up to you and George.
> > 
> > Given the constraints, the solution I would feel more comfortable with at
> > this time is something like the following patch (lightly tested):
> 
> I don't think this buys you anything over just fixing on_selected_cpus
> to work as it should, unless there is some reason why deferring this
> work to a tasklet is the logically correct thing to do and/or a better
> design?
> 
> (IOW if you are only doing this to avoid calling on_selected_cpus in
> interrupt context then I think this is the wrong fix).

I feel that on_selected_cpus is the kind of work that belongs to the
"bottom half". I was never very happy to have the call where it is now
in the first place. This approach would have the benefit of allowing us
to receive regular interrupts while waiting for other cpus to handle the
SGI. Also I am sure that it would work even in cases where you have
more than one SGI targeting the same cpu simultaneously.


> Ian.
> 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index e6257a7..b00ca73 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -58,6 +58,25 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
> >  
> >  static unsigned nr_lrs;
> >  
> > +static void gic_irq_eoi(void *info)
> > +{
> > +    int virq = (uintptr_t) info;
> > +    GICC[GICC_DIR] = virq;
> > +}
> > +
> > +static DEFINE_PER_CPU(struct pending_irq*, eoi_irq);
> > +static void eoi_action(unsigned long unused)
> > +{
> > +    struct pending_irq *p = this_cpu(eoi_irq);
> > +    ASSERT(p != NULL);
> > +
> > +    on_selected_cpus(cpumask_of(p->desc->arch.eoi_cpu),
> > +            gic_irq_eoi, (void*)(uintptr_t)p->desc->irq, 0);
> > +
> > +    this_cpu(eoi_irq) = NULL;
> > +}
> > +static DECLARE_TASKLET(eoi_tasklet, eoi_action, 0);
> > +
> >  /* The GIC mapping of CPU interfaces does not necessarily match the
> >   * logical CPU numbering. Let's use mapping as returned by the GIC
> >   * itself
> > @@ -897,12 +916,6 @@ int gicv_setup(struct domain *d)
> >  
> >  }
> >  
> > -static void gic_irq_eoi(void *info)
> > -{
> > -    int virq = (uintptr_t) info;
> > -    GICC[GICC_DIR] = virq;
> > -}
> > -
> >  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> >  {
> >      int i = 0, virq, pirq = -1;
> > @@ -962,8 +975,11 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> >              if ( cpu == smp_processor_id() )
> >                  gic_irq_eoi((void*)(uintptr_t)pirq);
> >              else
> > -                on_selected_cpus(cpumask_of(cpu),
> > -                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> > +            {
> > +                ASSERT(this_cpu(eoi_irq) == NULL);
> > +                this_cpu(eoi_irq) = p;
> > +                tasklet_schedule(&eoi_tasklet);
> > +            }
> >          }
> >  
> >          i++;
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:24:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BSf-0002Hw-IC; Tue, 28 Jan 2014 16:24:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8BSd-0002Hr-Gc
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:24:27 +0000
Received: from [85.158.143.35:33801] by server-1.bemta-4.messagelabs.com id
	44/1E-02132-AB9D7E25; Tue, 28 Jan 2014 16:24:26 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1390926264!1420779!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13738 invoked from network); 28 Jan 2014 16:24:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95313984"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:23:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:23:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8BS1-0000Lh-By;
	Tue, 28 Jan 2014 16:23:49 +0000
Date: Tue, 28 Jan 2014 16:23:45 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1390925530.31814.5.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401281612590.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401271828440.4373@kaball.uk.xensource.com>
	<1390903423.7753.23.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281321080.4373@kaball.uk.xensource.com>
	<1390921544.7753.108.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1401281526510.4373@kaball.uk.xensource.com>
	<1390925530.31814.5.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> On Tue, 2014-01-28 at 16:02 +0000, Stefano Stabellini wrote:
> > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > On Tue, 2014-01-28 at 14:00 +0000, Stefano Stabellini wrote:
> > > > On Tue, 28 Jan 2014, Ian Campbell wrote:
> > > > > On Mon, 2014-01-27 at 19:00 +0000, Stefano Stabellini wrote:
> > > > > > Alternatively, as Ian suggested, we could increase the priotiry of SGIs
> > > > > > but I am a bit wary of making that change at RC2.
> > > > > 
> > > > > I'm leaning the other way -- I'm wary of open coding magic locking
> > > > > primitives to work around this issue on a case by case basis. It's just
> > > > > too subtle IMHO.
> > > > > 
> > > > > The IPI and cross CPU calling primitives are basically predicated on
> > > > > those IPIs interrupting normal interrupt handlers.
> > > > 
> > > > The problem is that we don't know if we can context switch properly
> > > > nested interrupts.
> > > 
> > > What do you mean? We don't have to context switch an IPI.
> > 
> > Sorry, I meant save/restore registers, stack pointer, processor mode,
> > etc, for nested interrupts.
> 
> Right, we do handle that actually since it is the same code as handles
> an IRQ during a hypercall (since a hypercall is just another type of
> trap).

Ah right, good.


> > > > Also I would need to think harder whether everything
> > > > would work correctly without hitches with multiple SGIs happening
> > > > simultaneously (with more than 2 cpus involved).
> > > 
> > > Since all IPIs would be at the same higher priority only one will be
> > > active on each CPU at a time. If you are worried about multiple CPUs
> > > then that is already an issue today, just at a lower priority.
> > 
> > That is correct, but if we moved the on_selected_cpus call out of the
> > interrupt handler I don't think the problem could occur.
> 
> Sure, I'm not opposed to fixing this issue by making an architectural
> improvement to the code which happens to not use on_selected_cpus.
> 
> > > AIUI this issue only occurs with "proto device assignment" patches added
> > > to 4.4, n which case I think the solution can wait until 4.5 and can be
> > > done properly via the IPI priority fix.
> >  
> > I think this is a pretty significant problem, even if we don't commit a
> > fix, we should post a proper patch that we deem acceptable and link it to
> > the wiki so that anybody that needs it can find it and be sure that it
> > works correctly.
> 
> FWIW I have an almost fix to the IPI priority thing which I would intend
> to fix in 4.5 regardless of whether this issue needs it or not.

Well, in that case if you can test the patch in the specific case where
an SGI interrupts a lower priority interrupt, it might be worth sending
out the patch now.


> > In my opinion if we go to this length then we might as well commit it
> > (if it doesn't touch common code of course), but I'll leave the decision
> > up to you and George.
> > 
> > Given the constraints, the solution I would feel more comfortable with at
> > this time is something like the following patch (lightly tested):
> 
> I don't think this buys you anything over just fixing on_selected_cpus
> to work as it should, unless there is some reason why deferring this
> work to a tasklet is the logically correct thing to do and/or a better
> designh?
> 
> (IOW if you are only doing this to avoid calling on_selected_cpus in
> interrupt context then I think this is the wrong fix).

I feel that on_selected_cpus is the kind of work that belongs to the
"bottom half". I was never very happy to have the call where it is now
in the first place. This approach would have the benefit of allowing us
to receive regular interrupts while waiting for other cpus to handle the
SGI. Also I am sure that it would work even in cases where you have
more than one SGI targeting the same CPU simultaneously.


> Ian.
> 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index e6257a7..b00ca73 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -58,6 +58,25 @@ static DEFINE_PER_CPU(uint64_t, lr_mask);
> >  
> >  static unsigned nr_lrs;
> >  
> > +static void gic_irq_eoi(void *info)
> > +{
> > +    int virq = (uintptr_t) info;
> > +    GICC[GICC_DIR] = virq;
> > +}
> > +
> > +static DEFINE_PER_CPU(struct pending_irq*, eoi_irq);
> > +static void eoi_action(unsigned long unused)
> > +{
> > +    struct pending_irq *p = this_cpu(eoi_irq);
> > +    ASSERT(p != NULL);
> > +
> > +    on_selected_cpus(cpumask_of(p->desc->arch.eoi_cpu),
> > +            gic_irq_eoi, (void*)(uintptr_t)p->desc->irq, 0);
> > +
> > +    this_cpu(eoi_irq) = NULL;
> > +}
> > +static DECLARE_TASKLET(eoi_tasklet, eoi_action, 0);
> > +
> >  /* The GIC mapping of CPU interfaces does not necessarily match the
> >   * logical CPU numbering. Let's use mapping as returned by the GIC
> >   * itself
> > @@ -897,12 +916,6 @@ int gicv_setup(struct domain *d)
> >  
> >  }
> >  
> > -static void gic_irq_eoi(void *info)
> > -{
> > -    int virq = (uintptr_t) info;
> > -    GICC[GICC_DIR] = virq;
> > -}
> > -
> >  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> >  {
> >      int i = 0, virq, pirq = -1;
> > @@ -962,8 +975,11 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> >              if ( cpu == smp_processor_id() )
> >                  gic_irq_eoi((void*)(uintptr_t)pirq);
> >              else
> > -                on_selected_cpus(cpumask_of(cpu),
> > -                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> > +            {
> > +                ASSERT(this_cpu(eoi_irq) == NULL);
> > +                this_cpu(eoi_irq) = p;
> > +                tasklet_schedule(&eoi_tasklet);
> > +            }
> >          }
> >  
> >          i++;
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 16:51:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Bsi-0003O3-W0; Tue, 28 Jan 2014 16:51:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Bsh-0003Ny-NW
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:51:24 +0000
Received: from [85.158.143.35:13604] by server-3.bemta-4.messagelabs.com id
	CB/AA-32360-B00E7E25; Tue, 28 Jan 2014 16:51:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1390927881!1416787!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7162 invoked from network); 28 Jan 2014 16:51:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:51:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95329217"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:51:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 11:51:19 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8Bsc-0003GL-K1;
	Tue, 28 Jan 2014 16:51:18 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 16:51:18 +0000
Message-ID: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Subject: [Xen-devel] [PATCH] xen: arm: increase priority of SGIs used as IPIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code such as on_selected_cpus expects/requires that an IPI can preempt a
processor which is just handling a normal interrupt. Lacking this property can
result in a deadlock between two CPUs trying to IPI each other from interrupt
context.

For the time being there are only two priorities, IRQ and IPI, although it is
also conceivable that in the future some IPIs might be higher priority than
others. This could be used to implement a better BUG() than we have now, but I
haven't tackled that yet.

Tested with a debug patch which sends a local IPI from a keyhandler, which is
run in serial interrupt context.

This should also fix the issue reported by Oleksandr in "xen/arm:
maintenance_interrupt SMP fix" without resorting to trylock.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
---
I think this is probably 4.5 material at this point.

Tested with "HACK: dump pcpu state keyhandler" which I'll post for
completeness. It gives:
(XEN) Xen call trace:
(XEN)    [<0000000000212048>] dump_pcpus+0x28/0x2c (PC)
(XEN)    [<000000000021256c>] handle_keypress+0x70/0xb0 (LR)
(XEN)    [<000000000023ed00>] __serial_rx+0x20/0x6c
(XEN)    [<000000000023f8ac>] serial_rx+0xb4/0xc4
(XEN)    [<00000000002409ec>] serial_rx_interrupt+0xb0/0xd4
(XEN)    [<00000000002404b4>] ns16550_interrupt+0x6c/0x90
(XEN)    [<0000000000245fc0>] do_IRQ+0x144/0x1b4
(XEN)    [<0000000000245a28>] gic_interrupt+0x60/0xf8
(XEN)    [<000000000024be64>] do_trap_irq+0x10/0x18
(XEN)    [<000000000024e240>] hyp_irq+0x5c/0x60
(XEN)    [<0000000000249324>] init_done+0x10/0x18
(XEN)    [<0000000000000080>] 0000000000000080
---
 xen/arch/arm/gic.c        |   19 +++++++++++++------
 xen/arch/arm/time.c       |    6 +++---
 xen/include/asm-arm/gic.h |   22 ++++++++++++++++++++++
 3 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index dcf9cd4..ee37019 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -319,7 +319,8 @@ static void __init gic_dist_init(void)
 
     /* Default priority for global interrupts */
     for ( i = 32; i < gic.lines; i += 4 )
-        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
+        GICD[GICD_IPRIORITYR + i / 4] =
+            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 | GIC_PRI_IRQ;
 
     /* Disable all global interrupts */
     for ( i = 32; i < gic.lines; i += 32 )
@@ -341,8 +342,12 @@ static void __cpuinit gic_cpu_init(void)
     GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
     GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
     /* Set PPI and SGI priorities */
-    for (i = 0; i < 32; i += 4)
-        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
+    for (i = 0; i < 16; i += 4)
+        GICD[GICD_IPRIORITYR + i / 4] =
+            GIC_PRI_IPI<<24 | GIC_PRI_IPI<<16 | GIC_PRI_IPI<<8 | GIC_PRI_IPI;
+    for (i = 16; i < 32; i += 4)
+        GICD[GICD_IPRIORITYR + i / 4] =
+            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 | GIC_PRI_IRQ;
 
     /* Local settings: interface controller */
     GICC[GICC_PMR] = 0xff;                /* Don't mask by priority */
@@ -538,7 +543,8 @@ void gic_disable_cpu(void)
 void gic_route_ppis(void)
 {
     /* GIC maintenance */
-    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()), 0xa0);
+    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
+                     GIC_PRI_IRQ);
     /* Route timer interrupt */
     route_timer_interrupt();
 }
@@ -553,7 +559,8 @@ void gic_route_spis(void)
         if ( (irq = serial_dt_irq(seridx)) == NULL )
             continue;
 
-        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()), 0xa0);
+        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()),
+                         GIC_PRI_IRQ);
     }
 }
 
@@ -777,7 +784,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
     level = dt_irq_is_level_triggered(irq);
 
     gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+                           GIC_PRI_IRQ);
 
     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 81e3e28..68b939d 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -222,11 +222,11 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 void __cpuinit route_timer_interrupt(void)
 {
     gic_route_dt_irq(&timer_irq[TIMER_PHYS_NONSECURE_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
+                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
     gic_route_dt_irq(&timer_irq[TIMER_HYP_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
+                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
     gic_route_dt_irq(&timer_irq[TIMER_VIRT_PPI],
-                     cpumask_of(smp_processor_id()), 0xa0);
+                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
 }
 
 /* Set up the timer interrupt on this CPU */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 9c6f9bb..25b2b24 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -129,6 +129,28 @@
 #define GICH_LR_CPUID_SHIFT     9
 #define GICH_VTR_NRLRGS         0x3f
 
+/*
+ * The minimum GICC_BPR is required to be in the range 0-3. We set
+ * GICC_BPR to 0 but we must expect that it might be 3. This means we
+ * can rely on preemption between the following ranges:
+ * 0xf0..0xff
+ * 0xe0..0xef
+ * 0xc0..0xcf
+ * 0xb0..0xbf
+ * 0xa0..0xaf
+ * 0x90..0x9f
+ * 0x80..0x8f
+ *
+ * Priorities within a range will not preempt each other.
+ *
+ * A GIC must support a minimum of 16 priority levels.
+ */
+#define GIC_PRI_LOWEST     0xf0
+#define GIC_PRI_IRQ        0xa0
+#define GIC_PRI_IPI        0x90 /* IPIs must preempt normal interrupts */
+#define GIC_PRI_HIGHEST    0x80 /* Higher priorities belong to Secure-World */
+
+
 #ifndef __ASSEMBLY__
 #include <xen/device_tree.h>
 
-- 
1.7.10.4




From xen-devel-bounces@lists.xen.org Tue Jan 28 16:52:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8BtL-0003QH-Fa; Tue, 28 Jan 2014 16:52:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8BtK-0003Q7-Qd
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:52:03 +0000
Received: from [193.109.254.147:44492] by server-4.bemta-14.messagelabs.com id
	D1/70-03916-230E7E25; Tue, 28 Jan 2014 16:52:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1390927918!410229!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23122 invoked from network); 28 Jan 2014 16:52:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:52:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95329558"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:51:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:51:56 -0500
Message-ID: <1390927915.31814.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 28 Jan 2014 16:51:55 +0000
In-Reply-To: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
References: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: increase priority of SGIs used as
	IPIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 16:51 +0000, Ian Campbell wrote:
> Tested with "HACK: dump pcpu state keyhandler" which I'll post for
> completeness. 

>From 7975fb2a9a27e738d3b551fa2258d65a8c4b0d9a Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Tue, 28 Jan 2014 15:57:33 +0000
Subject: [PATCH] HACK: dump pcpu state keyhandler

---
 xen/arch/arm/gic.c        |    3 +++
 xen/common/keyhandler.c   |   20 ++++++++++++++++++++
 xen/include/asm-arm/gic.h |    1 +
 3 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..dcf9cd4 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -811,6 +811,9 @@ static void do_sgi(struct cpu_user_regs *regs, int othercpu, enum gic_sgi sgi)
     case GIC_SGI_CALL_FUNCTION:
         smp_call_function_interrupt();
         break;
+    case GIC_SGI_DUMP_HOST_STATE:
+        show_execution_state(regs);
+        break;
     default:
         panic("Unhandled SGI %d on CPU%d", sgi, smp_processor_id());
         break;
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 8e4b3f8..52f0916 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -20,6 +20,7 @@
 #include <xen/init.h>
 #include <asm/debugger.h>
 #include <asm/div64.h>
+#include <asm/gic.h>
 
 static struct keyhandler *key_table[256];
 static unsigned char keypress_key;
@@ -149,6 +150,24 @@ static struct keyhandler dump_registers_keyhandler = {
     .desc = "dump registers"
 };
 
+static void dump_pcpus(unsigned char key, struct cpu_user_regs *regs)
+{
+    printk("'%c' pressed -> dumping PCPU state\n\n", key);
+
+    send_SGI_self(GIC_SGI_DUMP_HOST_STATE);
+
+    dsb(); /* Wait for SGI write to occur, or else it might be delayed
+            * until later, meaning we would not actually exercise an
+            * IPI interrupting an interrupt. */
+}
+
+static struct keyhandler dump_pcpus_keyhandler = {
+    .irq_callback = 1,
+    .diagnostic = 1,
+    .u.irq_fn = dump_pcpus,
+    .desc = "dump pcpus"
+};
+
 static DECLARE_TASKLET(dump_dom0_tasklet, NULL, 0);
 
 static void dump_dom0_action(unsigned long arg)
@@ -539,6 +558,7 @@ void __init initialize_keytable(void)
     }
     register_keyhandler('A', &toggle_alt_keyhandler);
     register_keyhandler('d', &dump_registers_keyhandler);
+    register_keyhandler('P', &dump_pcpus_keyhandler);
     register_keyhandler('h', &show_handlers_keyhandler);
     register_keyhandler('q', &dump_domains_keyhandler);
     register_keyhandler('r', &dump_runq_keyhandler);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 87f4298..9c6f9bb 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -183,6 +183,7 @@ enum gic_sgi {
     GIC_SGI_EVENT_CHECK = 0,
     GIC_SGI_DUMP_STATE  = 1,
     GIC_SGI_CALL_FUNCTION = 2,
+    GIC_SGI_DUMP_HOST_STATE  = 3,
 };
 extern void send_SGI_mask(const cpumask_t *cpumask, enum gic_sgi sgi);
 extern void send_SGI_one(unsigned int cpu, enum gic_sgi sgi);
-- 
1.7.10.4






From xen-devel-bounces@lists.xen.org Tue Jan 28 16:55:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 16:55:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Bwp-0003cg-8K; Tue, 28 Jan 2014 16:55:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Bwn-0003ca-Jm
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 16:55:37 +0000
Received: from [85.158.139.211:56167] by server-9.bemta-5.messagelabs.com id
	42/F3-15098-801E7E25; Tue, 28 Jan 2014 16:55:36 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1390928134!182440!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5504 invoked from network); 28 Jan 2014 16:55:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 16:55:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,736,1384300800"; d="scan'208";a="95331204"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 16:55:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 11:55:33 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W8Bwj-0000ml-5W;
	Tue, 28 Jan 2014 16:55:33 +0000
Date: Tue, 28 Jan 2014 16:55:32 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140128165532.GI32713@zion.uk.xensource.com>
References: <20140115123045.GL5698@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140115123045.GL5698@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com
Subject: Re: [Xen-devel] [POSSIBLE BUG] Failure to bind event channel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 15, 2014 at 12:30:45PM +0000, Wei Liu wrote:
> Xen: master branch
> Dom0 Linux: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
>                linux-next
> 
> When I tried to start an HVM domain running squeeze with 2.6.32, I got
> 
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22
> (XEN) event_channel.c:271:d0 EVTCHNOP failure: domain 5, error -22

Julien has successfully reproduced this issue. It is actually caused by
two xenconsoled instances running at the same time. Nothing needs fixing. :-)

Wei.

> [... more of this ...]
> [67196.736733] device vif5.0 entered promiscuous mode
> [67196.746221] IPv6: ADDRCONF(NETDEV_UP): vif5.0: link is not ready
> [67196.911973] device vif5.0-emu entered promiscuous mode
> [67196.921890] xenbr0: port 3(vif5.0-emu) entered forwarding state
> [67196.927833] xenbr0: port 3(vif5.0-emu) entered forwarding state
> (d5) HVM Loader
> (d5) Detected Xen v4.4-unstable
> (d5) Xenbus rings @0xfeffc000, event channel 3
> (d5) System requested SeaBIOS
> (d5) CPU speed is 2660 MHz
> (d5) Relocating guest memory for lowmem MMIO space disabled
> (XEN) irq.c:270: Dom5 PCI link 0 changed 0 -> 5
> (d5) PCI-ISA link 0 routed to IRQ5
> (XEN) irq.c:270: Dom5 PCI link 1 changed 0 -> 10
> (d5) PCI-ISA link 1 routed to IRQ10
> 
> The guest eventually came up and seemed to be working fine, so the
> failures in the log were not that harmful after all... But it would be
> nice to figure out what's going on here. I suspect the toolstack was
> trying to set something up, failed, then retried and eventually
> succeeded. Sadly the error log wasn't detailed enough to give direct
> insight into the root cause, and I couldn't tell which side (toolstack,
> kernel, Xen) to blame.
> 
> The failing snippet in Xen common/event_channel.c
> 
> 269     if ( (rchn->state != ECS_UNBOUND) ||
> 270          (rchn->u.unbound.remote_domid != ld->domain_id) )
> 271         ERROR_EXIT_DOM(-EINVAL, rd);
> 
> Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:14:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:14:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CEi-0004Pk-Ld; Tue, 28 Jan 2014 17:14:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8CEg-0004PZ-SK
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:14:07 +0000
Received: from [193.109.254.147:61882] by server-11.bemta-14.messagelabs.com
	id FC/38-20576-E55E7E25; Tue, 28 Jan 2014 17:14:06 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390929243!415539!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10906 invoked from network); 28 Jan 2014 17:14:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:14:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97338192"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:14:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:14:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8CEZ-000122-FF;
	Tue, 28 Jan 2014 17:13:59 +0000
Date: Tue, 28 Jan 2014 17:13:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	patches@linaro.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Julien Grall wrote:
> The event channel driver needs to be initialized very early. Until now,
> Xen initialization was done after all CPUs were brought up.
> 
> We can safely move the initialization to an early initcall.
> 
> Also use a cpu notifier to:
>     - Register the VCPU when the CPU is prepared
>     - Enable the event channel IRQ when the CPU is running
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Did you test this patch in Dom0 as well as in DomUs?


>  arch/arm/xen/enlighten.c |   84 ++++++++++++++++++++++++++++++----------------
>  1 file changed, 55 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 293eeea..39b668e 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -23,6 +23,7 @@
>  #include <linux/of_address.h>
>  #include <linux/cpuidle.h>
>  #include <linux/cpufreq.h>
> +#include <linux/cpu.h>
>  
>  #include <linux/mm.h>
>  
> @@ -154,12 +155,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  }
>  EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
>  
> -static void __init xen_percpu_init(void *unused)
> +static void xen_percpu_init(int cpu)
>  {
>  	struct vcpu_register_vcpu_info info;
>  	struct vcpu_info *vcpup;
>  	int err;
> -	int cpu = get_cpu();
>  
>  	pr_info("Xen: initializing cpu%d\n", cpu);
>  	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
> @@ -170,9 +170,11 @@ static void __init xen_percpu_init(void *unused)
>  	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
>  	BUG_ON(err);
>  	per_cpu(xen_vcpu, cpu) = vcpup;
> +}
>  
> +static void xen_interrupt_init(void)
> +{
>  	enable_percpu_irq(xen_events_irq, 0);
> -	put_cpu();
>  }
>  
>  static void xen_restart(enum reboot_mode reboot_mode, const char *cmd)
> @@ -193,6 +195,36 @@ static void xen_power_off(void)
>  		BUG();
>  }
>  
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return IRQ_HANDLED;
> +}
> +
> +static int xen_cpu_notification(struct notifier_block *self,
> +				unsigned long action,
> +				void *hcpu)
> +{
> +	int cpu = (long)hcpu;
> +
> +	switch (action) {
> +	case CPU_UP_PREPARE:
> +		xen_percpu_init(cpu);
> +		break;
> +	case CPU_STARTING:
> +		xen_interrupt_init();
> +		break;

Is CPU_STARTING guaranteed to be called on the new cpu only?
If so, why not call both xen_percpu_init and xen_interrupt_init on
CPU_STARTING?
As it stands I think you introduced a subtle change (that might be OK
but I think is unintentional): xen_percpu_init might not be called from
the same cpu as its target anymore.


> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block xen_cpu_notifier = {
> +	.notifier_call = xen_cpu_notification,
> +};
> +
>  /*
>   * see Documentation/devicetree/bindings/arm/xen.txt for the
>   * documentation of the Xen Device Tree format.
> @@ -209,6 +241,7 @@ static int __init xen_guest_init(void)
>  	const char *xen_prefix = "xen,xen-";
>  	struct resource res;
>  	phys_addr_t grant_frames;
> +	int cpu;
>  
>  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
>  	if (!node) {
> @@ -281,9 +314,27 @@ static int __init xen_guest_init(void)
>  	disable_cpuidle();
>  	disable_cpufreq();
>  
> +	xen_init_IRQ();
> +
> +	if (xen_events_irq < 0)
> +		return -ENODEV;

Since you are moving this code to xen_guest_init, you can check for
xen_events_irq earlier on, where we parse the IRQ from the device tree.


> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			       "events", &xen_vcpu)) {
> +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	cpu = get_cpu();
> +	xen_percpu_init(cpu);
> +	xen_interrupt_init();
> +	put_cpu();
> +
> +	register_cpu_notifier(&xen_cpu_notifier);
> +
>  	return 0;
>  }
> -core_initcall(xen_guest_init);
> +early_initcall(xen_guest_init);
>  
>  static int __init xen_pm_init(void)
>  {
> @@ -297,31 +348,6 @@ static int __init xen_pm_init(void)
>  }
>  late_initcall(xen_pm_init);
>  
> -static irqreturn_t xen_arm_callback(int irq, void *arg)
> -{
> -	xen_hvm_evtchn_do_upcall();
> -	return IRQ_HANDLED;
> -}
> -
> -static int __init xen_init_events(void)
> -{
> -	if (!xen_domain() || xen_events_irq < 0)
> -		return -ENODEV;
> -
> -	xen_init_IRQ();
> -
> -	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> -			"events", &xen_vcpu)) {
> -		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> -		return -EINVAL;
> -	}
> -
> -	on_each_cpu(xen_percpu_init, NULL, 0);
> -
> -	return 0;
> -}
> -postcore_initcall(xen_init_events);
> -
>  /* In the hypervisor.S file. */
>  EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:15:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CFs-0004Sm-5r; Tue, 28 Jan 2014 17:15:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W8CFp-0004Se-VO
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:15:18 +0000
Received: from [85.158.139.211:58781] by server-13.bemta-5.messagelabs.com id
	93/42-11357-5A5E7E25; Tue, 28 Jan 2014 17:15:17 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390929315!175320!1
X-Originating-IP: [213.199.154.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1533 invoked from network); 28 Jan 2014 17:15:15 -0000
Received: from mail-am1lp0014.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.14)
	by server-13.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	28 Jan 2014 17:15:15 -0000
From xen-devel-bounces@lists.xen.org Tue Jan 28 17:15:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:15:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CFs-0004Sm-5r; Tue, 28 Jan 2014 17:15:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W8CFp-0004Se-VO
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:15:18 +0000
Received: from [85.158.139.211:58781] by server-13.bemta-5.messagelabs.com id
	93/42-11357-5A5E7E25; Tue, 28 Jan 2014 17:15:17 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390929315!175320!1
X-Originating-IP: [213.199.154.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1533 invoked from network); 28 Jan 2014 17:15:15 -0000
Received: from mail-am1lp0014.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.14)
	by server-13.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	28 Jan 2014 17:15:15 -0000
Received: from AMSPRD0710HT002.eurprd07.prod.outlook.com (157.56.249.85) by
	DB3PR03MB217.eurprd03.prod.outlook.com (10.242.131.11) with Microsoft
	SMTP Server (TLS) id 15.0.859.15; Tue, 28 Jan 2014 17:15:14 +0000
Message-ID: <52E7E594.2050104@zynstra.com>
Date: Tue, 28 Jan 2014 17:15:00 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com>
In-Reply-To: <52D7346A.3000300@oracle.com>
X-Originating-IP: [157.56.249.85]
X-ClientProxiedBy: DBXPR03CA004.eurprd03.prod.outlook.com (10.255.191.142)
	To DB3PR03MB217.eurprd03.prod.outlook.com (10.242.131.11)
X-Forefront-PRVS: 0105DAA385
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(6009001)(377454003)(24454002)(479174003)(51704005)(189002)(199002)(51444003)(46102001)(23756003)(56776001)(51856001)(87976001)(4396001)(74662001)(74502001)(47736001)(83506001)(94316002)(47976001)(49866001)(47446002)(81542001)(50986001)(77982001)(64126003)(85306002)(81816001)(31966008)(81342001)(59896001)(76482001)(59766001)(74366001)(79102001)(81686001)(53806001)(42186004)(76796001)(77096001)(54356001)(76786001)(80316001)(83322001)(63696002)(92726001)(36756003)(80022001)(80976001)(33656001)(83072002)(74876001)(54316002)(19580395003)(93136001)(85852003)(86362001)(50466002)(92566001)(93516002)(69226001)(47776003)(66066001)(74706001)(90146001)(56816005);
	DIR:OUT; SFP:1102; SCL:1; SRVR:DB3PR03MB217;
	H:AMSPRD0710HT002.eurprd07.prod.outlook.com; CLIP:157.56.249.85;
	FPR:; InfoNoRecordsMX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
Hi Bob,
> On 01/16/2014 12:35 AM, James Dingwall wrote:
>> Bob Liu wrote:
>>> On 01/15/2014 04:49 PM, James Dingwall wrote:
>>>> Bob Liu wrote:
>>>>> On 01/07/2014 05:21 PM, James Dingwall wrote:
>>>>>> Bob Liu wrote:
>>>>>>> Could you confirm that this problem doesn't exist when loading tmem
>>>>>>> with selfshrinking=0 while compiling gcc? It seems that you are
>>>>>>> compiling different packages during your testing.
>>>>>>> This will help to figure out whether selfshrinking is the root cause.
>>>>>> Got an OOM with selfshrinking=0, again during a gcc compile.
>>>>>> Unfortunately I don't have a single test case which demonstrates the
>>>>>> problem but as I mentioned before it will generally show up under
>>>>>> compiles of large packages such as glibc, kdelibs, gcc etc.
>>>>>>
>>>>> So the root cause is not the enabled selfshrinking.
>>>>> Then what I can think of is that the xen-selfballoon driver was too
>>>>> aggressive: too many pages were ballooned out, which caused heavy
>>>>> memory pressure on the guest OS.
>>>>> And kswapd started to reclaim pages until most pages were
>>>>> unreclaimable (all_unreclaimable=yes for all zones), then the OOM
>>>>> killer was triggered.
>>>>> In theory the balloon driver should give ballooned-out pages back to
>>>>> the guest OS, but I'm afraid this procedure is not fast enough.
>>>>>
>>>>> My suggestion is to reserve a minimum amount of memory for your guest
>>>>> OS so that xen-selfballoon won't be so aggressive.
>>>>> You can do it through the parameters selfballoon_reserved_mb or
>>>>> selfballoon_min_usable_mb.
>>>> I am still getting OOM errors with both of these set to 32 so I'll try
>>>> another bump to 64.  I think that if I do find values which prevent it
>>>> though then it is more of a workaround than a fix because it still
>>>> suggests that swap is not being used when ballooning is no longer
>>> Yes, it's more like a workaround. But I don't think there is a better way.
>>>
>>>   From the recent OOM logs you reported:
>>> [ 8212.940769] Free swap  = 1925576kB
>>> [ 8212.940770] Total swap = 2097148kB
>>>
>>> [504638.442136] Free swap  = 1868108kB
>>> [504638.442137] Total swap = 2097148kB
>>>
>>> 171572KB and 229040KB of data were swapped out to the swap disk; I think
>>> those are already significant amounts for a guest OS with only ~300M of
>>> usable memory.
>>> The guest OS can't find pages suitable for swapping any more after so
>>> many pages have been swapped out, although at that time the swap device
>>> still has enough space.
>>>
>>> The OOM might not be triggered if the balloon driver could give memory
>>> back to the guest OS fast enough, but I think that's unrealistic.
>>> So the best way is to reserve more memory for the guest OS.
>>>
>>>> capable of satisfying the request.  I've also got an Ubuntu Saucy (3.11
>>>> kernel) guest running on the dom0 with tmem activated so I'm going to
>>>> see if I can find a comparable workload to see if I get the same issue
>>>> with a different kernel configuration.
>>>>
>> I've done a bit more testing and seem to have found an extra condition
>> which is affecting the OOM behaviour in my guests.  All my Gentoo guests
>> have swap space backed by a dm-crypt volume and if I remove this layer
>> then things seem to be behaving much more reliably.  In my Ubuntu guests
>> I have plain swap space and so far I haven't been able to trigger an OOM
>> condition.  Is it possible that it is the dm-crypt layer failing to get
>> working memory when swapping something in/out and causing the error?
>>
> One possible reason is that the dm layer and the related dm target driver
> occupy a significant amount of memory and there is no way for
> xen-selfballoon to know this. So the selfballoon driver ballooned out more
> memory than the system really requires.
>
> I have made a patch that reserves an extra 10% of the original total
> memory; this way I think we can make the system much more reliable in all
> cases. Could you please test it? You don't need to set
> selfballoon_reserved_mb yourself any more.
I have to say that with this patch the situation has definitely 
improved.  I have been running it with 3.12.[78] and 3.13 and pushing it 
quite hard for the last 10 days or so.  Unfortunately yesterday I got an 
OOM during a compile (link) of webkit-gtk.  I think your patch is part 
of the solution, but I'm not sure whether the other part is simply to be 
more generous with the guest memory allocation or something else.  Having 
tested with memory = 512 and no tmem I get an OOM with the same 
compile; with memory = 1024 and no tmem the compile completes ok (both 
cases without maxmem).  As my domains are usually started with memory = 
512 and maxmem = 1024, it seems that there should be sufficient memory 
with my default parameters.  Also, as an experiment, I set memory = 1024 
and removed maxmem; when tmem is activated I see "[ 3393.884105] 
xen:balloon: reserve_additional_memory: add_memory() failed: -17" 
printed many times in the guest kernel log.

Regards,
James

[456770.748827] Mem-Info:
[456770.748829] Node 0 DMA per-cpu:
[456770.748833] CPU    0: hi:    0, btch:   1 usd:   0
[456770.748835] CPU    1: hi:    0, btch:   1 usd:   0
[456770.748836] Node 0 DMA32 per-cpu:
[456770.748838] CPU    0: hi:  186, btch:  31 usd: 173
[456770.748840] CPU    1: hi:  186, btch:  31 usd: 120
[456770.748846] active_anon:91431 inactive_anon:96269 isolated_anon:0
  active_file:13286 inactive_file:31256 isolated_file:0
  unevictable:0 dirty:0 writeback:0 unstable:0
  free:1155 slab_reclaimable:7001 slab_unreclaimable:3932
  mapped:2300 shmem:88 pagetables:2576 bounce:0
  free_cma:0 totalram:255578 balloontarget:327320
[456770.748849] Node 0 DMA free:1956kB min:88kB low:108kB high:132kB 
active_anon:3128kB inactive_anon:3328kB active_file:1888kB 
inactive_file:2088kB unevictable:0kB isolated(anon):0kB 
isolated(file):0kB present:15996kB managed:15912kB mlocked:0kB dirty:0kB 
writeback:0kB mapped:32kB shmem:0kB slab_reclaimable:684kB 
slab_unreclaimable:720kB kernel_stack:72kB pagetables:488kB unstable:0kB 
bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:17841 
all_unreclaimable? yes
[456770.748863] lowmem_reserve[]: 0 469 469 469
[456770.748866] Node 0 DMA32 free:2664kB min:2728kB low:3408kB 
high:4092kB active_anon:362596kB inactive_anon:381748kB 
active_file:51256kB inactive_file:122936kB unevictable:0kB 
isolated(anon):0kB isolated(file):0kB present:1032192kB 
managed:1006400kB mlocked:0kB dirty:0kB writeback:0kB mapped:9168kB 
shmem:352kB slab_reclaimable:27320kB slab_unreclaimable:15008kB 
kernel_stack:1784kB pagetables:9816kB unstable:0kB bounce:0kB 
free_cma:0kB writeback_tmp:0kB pages_scanned:1382021 all_unreclaimable? yes
[456770.748874] lowmem_reserve[]: 0 0 0 0
[456770.748877] Node 0 DMA: 1*4kB (R) 0*8kB 0*16kB 5*32kB (R) 2*64kB (R) 
1*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
[456770.748890] Node 0 DMA32: 666*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB 
0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2664kB
[456770.748899] 48556 total pagecache pages
[456770.748901] 35203 pages in swap cache
[456770.748903] Swap cache stats: add 358621, delete 323418, find 
206319/224002
[456770.748904] Free swap  = 1671532kB
[456770.748905] Total swap = 2097148kB
[456770.748906] 262047 pages RAM
[456770.748907] 0 pages HighMem/MovableOnly
[456770.748908] 6448 pages reserved
<snip process list>
[456770.749070] Out of memory: Kill process 28271 (ld) score 110 or 
sacrifice child
[456770.749073] Killed process 28271 (ld) total-vm:358488kB, 
anon-rss:324588kB, file-rss:1456kB

>
>
> xen_selfballoon_deaggressive.patch
>
>
> diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
> index 21e18c1..8f33254 100644
> --- a/drivers/xen/xen-selfballoon.c
> +++ b/drivers/xen/xen-selfballoon.c
> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>   #endif /* CONFIG_FRONTSWAP */
>   
>   #define MB2PAGES(mb)	((mb) << (20 - PAGE_SHIFT))
> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>   
>   /*
>    * Use current balloon size, the goal (vm_committed_as), and hysteresis
> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>   int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   {
>   	bool enable = false;
> +	unsigned long reserve_pages;
>   
>   	if (!xen_domain())
>   		return -ENODEV;
> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning, bool use_frontswap_selfshrink)
>   	if (!enable)
>   		return -ENODEV;
>   
> +	/*
> +	 * Give selfballoon_reserved_mb a default value (10% of total RAM
> +	 * pages) to make selfballoon less aggressive.
> +	 *
> +	 * There are two reasons:
> +	 * 1) The goal_page doesn't contain some pages used by kernel space,
> +	 *    like slab cache and pages used by device drivers.
> +	 *
> +	 * 2) The balloon driver may not give back memory to guest OS fast
> +	 *    enough when the workload suddenly acquires a lot of memory.
> +	 *
> +	 * In both cases, the guest OS will suffer from memory pressure and
> +	 * the OOM killer may be triggered.
> +	 * By reserving an extra 10% of total RAM pages, we can make the
> +	 * system much more reliable and responsive in some cases.
> +	 */
> +	if (!selfballoon_reserved_mb) {
> +		reserve_pages = totalram_pages / 10;
> +		selfballoon_reserved_mb = PAGES2MB(reserve_pages);
> +	}
>   	schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
>   
>   	return 0;


-- 

*James Dingwall*

Script Monkey

zynstra-signature-logo <http://www.zynstra.com/>twitter-black 
<http://www.twitter.com/zynstra>linkedin-black 
<http://www.linkedin.com/company/zynstra>

Zynstra is a private limited company registered in England and Wales 
(registered number 07864369).  Our registered office is 5 New Street 
Square, London, EC4A 3TW and our headquarters are at Bath Ventures, 
Broad Quay, Bath, BA1 1UD.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:36:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CZy-0005IC-2b; Tue, 28 Jan 2014 17:36:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8CZw-0005I6-Ek
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:36:04 +0000
Received: from [85.158.143.35:21238] by server-1.bemta-4.messagelabs.com id
	13/83-02132-38AE7E25; Tue, 28 Jan 2014 17:36:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390930562!1434526!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2767 invoked from network); 28 Jan 2014 17:36:03 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:36:03 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so371287eei.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 09:36:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=QDJsj4jVvN6OXJDElt4Q6PkArzZLhcDdUDxxIOnlWoY=;
	b=O+XkpeVY7+3sN9SymsEPB4EW2je7q+Z1GGNrZashGrYfGPRKpUe3H+J1Qh6dG5Ztmo
	xA99dJMNuKxbgUtGGUgDfOIUq62mvoxUpYH7Hrh+auQxMPwfrUjHvKZwqyAyCSLMF9X3
	swXF1OA5uqNvSmsagVNjjqZNKRDbdlq5hevHxomY49lOSQLoFpBlZF6j6+CHfZ5rj5Sr
	1Fm5/15M/26TpoR4ouPViPJBahN/z1UETdP+5jAJiDOtSSFkW/z41FE0PhzFQTQkRsF6
	0uJ/iOFTH5igu9fftmywDq/ueV1sRxVkk3zGpqhWcUblNFI0VWqcENjpg5cpQ0kP019z
	G0Ow==
X-Gm-Message-State: ALoCoQmuflluBtQfFahlefOEXarT126DdZkM1EJYCd+Ma0sG/fzHFTFdsw9GzaZevQMri0eOmRZ0
X-Received: by 10.14.104.6 with SMTP id h6mr3040748eeg.29.1390930562453;
	Tue, 28 Jan 2014 09:36:02 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	b41sm57957755eef.16.2014.01.28.09.35.54 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 09:36:00 -0800 (PST)
Message-ID: <52E7EA78.5020305@linaro.org>
Date: Tue, 28 Jan 2014 17:35:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 05:13 PM, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Julien Grall wrote:
>> The event channel driver needs to be initialized very early. Until now,
>> Xen initialization was done after all CPUs were brought up.
>>
>> We can safely move the initialization to an early initcall.
>>
>> Also use a cpu notifier to:
>>     - Register the VCPU when the CPU is prepared
>>     - Enable event channel IRQ when the CPU is running
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Did you test this patch in Dom0 as well as in DomUs?
> 

I have only tried dom0. I will try domU.

> 
From xen-devel-bounces@lists.xen.org Tue Jan 28 17:36:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CZv-0005I0-MN; Tue, 28 Jan 2014 17:36:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W8CZs-0005Hv-4E
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:36:00 +0000
Received: from [85.158.137.68:27393] by server-12.bemta-3.messagelabs.com id
	E2/21-20055-F7AE7E25; Tue, 28 Jan 2014 17:35:59 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390930557!11909026!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2761 invoked from network); 28 Jan 2014 17:35:58 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:35:58 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so888537qaq.25
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 09:35:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=HemGpgIpL+MuL42aaL2xX/vafKfBu5Y6Z696YL7ybc4=;
	b=SIuSPpEE1z5Aj8sfW7kgRGeLQVGac0aL4dQwCnrZ0jOCCrS4w8yqkyf67k1aibW2Su
	w1SgvA+zafrfjPejuwL1hd/Hvtsd7FhMWDAnpcwsJYyDuKkJrh6Y8+69E84YIX26rwXV
	gRH4wZGDoccxdCAlQyfSBGZJ1hQfKcKQ3hEbjZo6PFzynzmoAj36MCRAVPqbRvYKWVp7
	YqIxz8NJHi8gNCUtaHzZ0VsiL7c0ikVg/9RYvcAx+sSaommn7IOhiugYVtAN/co+q3X1
	Aqc9urpBGsnuz1puRoeuECV1blZ9suB4Q1cnMn1q/tVBC2tTADSh/+bAfGjc9llkMvJ+
	xZWA==
X-Gm-Message-State: ALoCoQmqQrvxaxTR8LZhJ980tatBVLFTgSid7bxFN9KrZ6ovIx4rVIR++nUHnv7q309F8ho/lSE5
MIME-Version: 1.0
X-Received: by 10.140.46.119 with SMTP id j110mr4355350qga.32.1390930557337;
	Tue, 28 Jan 2014 09:35:57 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Tue, 28 Jan 2014 09:35:57 -0800 (PST)
In-Reply-To: <1390924071.7753.115.camel@kazak.uk.xensource.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
Date: Tue, 28 Jan 2014 23:05:57 +0530
Message-ID: <CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 28 January 2014 21:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
>> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
>
> I should have asked this sooner -- can you point me to the bindings
> documentation for this device?
>
> http://www.gossamer-threads.com/lists/linux/kernel/1845585 suggests it
> is not yet agreed, so having Xen depend on it may have been a mistake.

The above patch is still under discussion, so I cannot take changes from
it into the Xen driver immediately.

For now I have added the Xen reset code based on the
"drivers/power/reset/xgene-reboot.c" driver, which is already merged in
Linux.

http://www.spinics.net/lists/arm-kernel/msg266039.html
For now the DTS bindings for Xen are similar to those described in the
above link.

Actually, if you compare the new patch and the old one (from a reboot
point of view), the only difference between the two DTS bindings is the
"mask" field. In the old patch it was read from the DTS; in the latest
it is hard-coded to 1 in the code and removed from the DTS in the new
patch.

If you want this fixed, I can quickly submit a V7 in which the mask
field is simply hard-coded to 1, so the Xen code will keep working even
if the Linux code changes.
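As a rough sketch of what hard-coding the mask amounts to (hypothetical
function and register names — the real driver writes to an MMIO CSR mapped
from the DTS "reg" property, which this user-space model only imitates):

```c
#include <stdint.h>

/* Illustrative stand-in for the mapped reboot CSR. */
static uint32_t reboot_csr;

/* With the mask hard-coded to 1, the reboot path no longer depends on a
 * "mask" property in the DTS, so it keeps working even if that property
 * is dropped from the Linux binding. */
static void xgene_reboot(volatile uint32_t *csr)
{
	const uint32_t mask = 1;	/* hard-coded, not read from the DTS */

	*csr = mask;			/* writing the mask triggers the reset */
}
```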

Thanks,
Pranav

>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:36:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CZy-0005IC-2b; Tue, 28 Jan 2014 17:36:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8CZw-0005I6-Ek
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:36:04 +0000
Received: from [85.158.143.35:21238] by server-1.bemta-4.messagelabs.com id
	13/83-02132-38AE7E25; Tue, 28 Jan 2014 17:36:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390930562!1434526!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2767 invoked from network); 28 Jan 2014 17:36:03 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:36:03 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so371287eei.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 09:36:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=QDJsj4jVvN6OXJDElt4Q6PkArzZLhcDdUDxxIOnlWoY=;
	b=O+XkpeVY7+3sN9SymsEPB4EW2je7q+Z1GGNrZashGrYfGPRKpUe3H+J1Qh6dG5Ztmo
	xA99dJMNuKxbgUtGGUgDfOIUq62mvoxUpYH7Hrh+auQxMPwfrUjHvKZwqyAyCSLMF9X3
	swXF1OA5uqNvSmsagVNjjqZNKRDbdlq5hevHxomY49lOSQLoFpBlZF6j6+CHfZ5rj5Sr
	1Fm5/15M/26TpoR4ouPViPJBahN/z1UETdP+5jAJiDOtSSFkW/z41FE0PhzFQTQkRsF6
	0uJ/iOFTH5igu9fftmywDq/ueV1sRxVkk3zGpqhWcUblNFI0VWqcENjpg5cpQ0kP019z
	G0Ow==
X-Gm-Message-State: ALoCoQmuflluBtQfFahlefOEXarT126DdZkM1EJYCd+Ma0sG/fzHFTFdsw9GzaZevQMri0eOmRZ0
X-Received: by 10.14.104.6 with SMTP id h6mr3040748eeg.29.1390930562453;
	Tue, 28 Jan 2014 09:36:02 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	b41sm57957755eef.16.2014.01.28.09.35.54 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 09:36:00 -0800 (PST)
Message-ID: <52E7EA78.5020305@linaro.org>
Date: Tue, 28 Jan 2014 17:35:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 05:13 PM, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Julien Grall wrote:
>> The event channel driver needs to be initialized very early. Until now,
>> Xen initialization was done after all CPUs were brought up.
>>
>> We can safely move the initialization to an early initcall.
>>
>> Also use a cpu notifier to:
>>     - Register the VCPU when the CPU is prepared
>>     - Enable event channel IRQ when the CPU is running
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Did you test this patch in Dom0 as well as in DomUs?
> 

Only tried dom0 so far. I will try domU.

> 
>>  arch/arm/xen/enlighten.c |   84 ++++++++++++++++++++++++++++++----------------
>>  1 file changed, 55 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>> index 293eeea..39b668e 100644
>> --- a/arch/arm/xen/enlighten.c
>> +++ b/arch/arm/xen/enlighten.c
>> @@ -23,6 +23,7 @@
>>  #include <linux/of_address.h>
>>  #include <linux/cpuidle.h>
>>  #include <linux/cpufreq.h>
>> +#include <linux/cpu.h>
>>  
>>  #include <linux/mm.h>
>>  
>> @@ -154,12 +155,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>>  }
>>  EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
>>  
>> -static void __init xen_percpu_init(void *unused)
>> +static void xen_percpu_init(int cpu)
>>  {
>>  	struct vcpu_register_vcpu_info info;
>>  	struct vcpu_info *vcpup;
>>  	int err;
>> -	int cpu = get_cpu();
>>  
>>  	pr_info("Xen: initializing cpu%d\n", cpu);
>>  	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>> @@ -170,9 +170,11 @@ static void __init xen_percpu_init(void *unused)
>>  	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
>>  	BUG_ON(err);
>>  	per_cpu(xen_vcpu, cpu) = vcpup;
>> +}
>>  
>> +static void xen_interrupt_init(void)
>> +{
>>  	enable_percpu_irq(xen_events_irq, 0);
>> -	put_cpu();
>>  }
>>  
>>  static void xen_restart(enum reboot_mode reboot_mode, const char *cmd)
>> @@ -193,6 +195,36 @@ static void xen_power_off(void)
>>  		BUG();
>>  }
>>  
>> +static irqreturn_t xen_arm_callback(int irq, void *arg)
>> +{
>> +	xen_hvm_evtchn_do_upcall();
>> +	return IRQ_HANDLED;
>> +}
>> +
>> +static int xen_cpu_notification(struct notifier_block *self,
>> +				unsigned long action,
>> +				void *hcpu)
>> +{
>> +	int cpu = (long)hcpu;
>> +
>> +	switch (action) {
>> +	case CPU_UP_PREPARE:
>> +		xen_percpu_init(cpu);
>> +		break;
>> +	case CPU_STARTING:
>> +		xen_interrupt_init();
>> +		break;
> 
> Is CPU_STARTING guaranteed to be called on the new cpu only?

Yes.

> If so, why not call both xen_percpu_init and xen_interrupt_init on
> CPU_STARTING?

Just in case xen_vcpu is used somewhere else by another CPU_STARTING
notifier callback. We don't know which callback is called first.

> As it stands I think you introduced a subtle change (that might be OK
> but I think is unintentional): xen_percpu_init might not be called from
> the same cpu as its target anymore.

No, xen_percpu_init and xen_interrupt_init are called on the boot cpu at
the end of xen_guest_init.

> 
> 
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return NOTIFY_OK;
>> +}
>> +
>> +static struct notifier_block xen_cpu_notifier = {
>> +	.notifier_call = xen_cpu_notification,
>> +};
>> +
>>  /*
>>   * see Documentation/devicetree/bindings/arm/xen.txt for the
>>   * documentation of the Xen Device Tree format.
>> @@ -209,6 +241,7 @@ static int __init xen_guest_init(void)
>>  	const char *xen_prefix = "xen,xen-";
>>  	struct resource res;
>>  	phys_addr_t grant_frames;
>> +	int cpu;
>>  
>>  	node = of_find_compatible_node(NULL, NULL, "xen,xen");
>>  	if (!node) {
>> @@ -281,9 +314,27 @@ static int __init xen_guest_init(void)
>>  	disable_cpuidle();
>>  	disable_cpufreq();
>>  
>> +	xen_init_IRQ();
>> +
>> +	if (xen_events_irq < 0)
>> +		return -ENODEV;
> 
> Since you are moving this code to xen_guest_init, you can check for
> xen_events_irq earlier on, where we parse the irq from device tree.

Will do.
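The suggested change amounts to bailing out right where the IRQ is parsed.
A minimal sketch (stubbed DT parsing and simplified flow — not the actual
enlighten.c code):

```c
#include <errno.h>

/* Stub standing in for parsing the event-channel IRQ from the
 * "xen,xen" device tree node. */
static int parse_xen_events_irq(int dt_value)
{
	return dt_value;
}

/* Check xen_events_irq immediately after parsing, instead of later in
 * xen_guest_init, so an invalid IRQ fails fast with -ENODEV. */
static int xen_guest_init_sketch(int dt_value)
{
	int xen_events_irq = parse_xen_events_irq(dt_value);

	if (xen_events_irq < 0)
		return -ENODEV;

	/* ... grant-table setup, notifier registration, etc. ... */
	return 0;
}
```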


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chf-0005ro-Dx; Tue, 28 Jan 2014 17:44:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chd-0005rh-Q3
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:01 +0000
Received: from [85.158.139.211:41346] by server-12.bemta-5.messagelabs.com id
	C5/32-30017-16CE7E25; Tue, 28 Jan 2014 17:44:01 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390931038!190767!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14910 invoked from network); 28 Jan 2014 17:44:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353079"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:39 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChG-0001UQ-Qp;
	Tue, 28 Jan 2014 17:43:38 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:32 +0100
Message-ID: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA1
Subject: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Blkback bug fixes for memory leaks (patches 1 and 2) and a race
(patch 3).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chi-0005tF-Ec; Tue, 28 Jan 2014 17:44:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chh-0005sT-Jm
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:44:05 +0000
Received: from [85.158.137.68:12851] by server-16.bemta-3.messagelabs.com id
	26/BD-26128-46CE7E25; Tue, 28 Jan 2014 17:44:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390931041!11874724!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24102 invoked from network); 28 Jan 2014 17:44:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353081"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:40 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChH-0001UQ-GI;
	Tue, 28 Jan 2014 17:43:39 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:33 +0100
Message-ID: <1390931015-5490-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RnJvbTogTWF0dCBSdXNodG9uIDxtcnVzaHRvbkBhbWF6b24uY29tPgoKQ3VycmVudGx5IHNocmlu
a19mcmVlX3BhZ2Vwb29sKCkgaXMgY2FsbGVkIGJlZm9yZSB0aGUgcGFnZXMgdXNlZCBmb3IKcGVy
c2lzdGVudCBncmFudHMgYXJlIHJlbGVhc2VkIHZpYSBmcmVlX3BlcnNpc3RlbnRfZ250cygpLiBU
This results in a memory leak when a VBD that uses persistent grants is
torn down.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xen.org
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: Matt Rushton <mrushton@amazon.com>
Signed-off-by: Matt Wilson <msw@amazon.com>
---
 drivers/block/xen-blkback/blkback.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..30ef7b3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -625,9 +625,6 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
-	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(blkif, 0 /* All */);
-
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -636,6 +633,9 @@ purge_gnt_list:
 	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
 	blkif->persistent_gnt_c = 0;
 
+	/* Since we are shutting down remove all pages from the buffer */
+	shrink_free_pagepool(blkif, 0 /* All */);
+
 	if (log_stats)
 		print_stats(blkif);
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
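[Editorial note] The ordering constraint in the patch above can be modeled in a few lines of userspace C: if the free-page pool is emptied before the persistent grants hand their pages back, those pages are never freed. This is only an illustrative sketch; all names (`pool_count`, `release_persistent_grants`, `teardown`) are hypothetical stand-ins for the driver's real state.

```c
#include <stdbool.h>
#include <stddef.h>

static size_t pool_count;        /* pages sitting in the free-page pool */
static size_t persistent_count;  /* pages still held by persistent grants */

/* Model of shrink_free_pagepool(blkif, 0): frees every page in the pool. */
static void shrink_free_pagepool(void) { pool_count = 0; }

/* Model of free_persistent_gnts(): grants return their pages to the pool. */
static void release_persistent_grants(void) {
    pool_count += persistent_count;
    persistent_count = 0;
}

/* Runs teardown in either order and returns how many pages leak. */
size_t teardown(bool shrink_last) {
    pool_count = 5;
    persistent_count = 3;
    if (shrink_last) {               /* order after the fix */
        release_persistent_grants();
        shrink_free_pagepool();
    } else {                         /* order before the fix */
        shrink_free_pagepool();
        release_persistent_grants(); /* repopulates the emptied pool */
    }
    return pool_count;               /* whatever is left here is leaked */
}
```

With the fixed order, `teardown(true)` leaves nothing behind; with the old order, the pages returned by the grants sit in the pool with nothing left to free them.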

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chg-0005sH-Rf; Tue, 28 Jan 2014 17:44:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chf-0005rm-5t
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:03 +0000
Received: from [85.158.139.211:41451] by server-4.bemta-5.messagelabs.com id
	6D/D6-26791-26CE7E25; Tue, 28 Jan 2014 17:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390931038!190767!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15251 invoked from network); 28 Jan 2014 17:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353082"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:41 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChI-0001UQ-41;
	Tue, 28 Jan 2014 17:43:40 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:34 +0100
Message-ID: <1390931015-5490-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've at least identified two possible memory leaks in blkback, both
related to the shutdown path of a VBD:

- blkback doesn't wait for any pending purge work to finish before
  cleaning the list of free_pages. The purge work will call
  put_free_pages and thus we might end up with pages being added to
  the free_pages list after we have emptied it. Fix this by making
  sure there's no pending purge work before exiting
  xen_blkif_schedule, and moving the free_page cleanup code to
  xen_blkif_free.
- blkback doesn't wait for pending requests to end before cleaning
  persistent grants and the list of free_pages. Again this can add
  pages to the free_pages list or persistent grants to the
  persistent_gnts red-black tree. Fixed by moving the persistent
  grants and free_pages cleanup code to xen_blkif_free.

Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   27 ++++++++++++++++++---------
 drivers/block/xen-blkback/common.h  |    1 +
 drivers/block/xen-blkback/xenbus.c  |   12 ++++++++++++
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 30ef7b3..dcfe49f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -375,7 +375,7 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
 
 	pr_debug(DRV_PFX "Going to purge %u persistent grants\n", num_clean);
 
-	INIT_LIST_HEAD(&blkif->persistent_purge_list);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
 	root = &blkif->persistent_gnts;
 purge_list:
 	foreach_grant_safe(persistent_gnt, n, root, node) {
@@ -625,6 +625,23 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
+	/* Drain pending purge work */
+	flush_work(&blkif->persistent_purge_work);
+
+	if (log_stats)
+		print_stats(blkif);
+
+	blkif->xenblkd = NULL;
+	xen_blkif_put(blkif);
+
+	return 0;
+}
+
+/*
+ * Remove persistent grants and empty the pool of free pages
+ */
+void xen_blkbk_free_caches(struct xen_blkif *blkif)
+{
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -635,14 +652,6 @@ purge_gnt_list:
 
 	/* Since we are shutting down remove all pages from the buffer */
 	shrink_free_pagepool(blkif, 0 /* All */);
-
-	if (log_stats)
-		print_stats(blkif);
-
-	blkif->xenblkd = NULL;
-	xen_blkif_put(blkif);
-
-	return 0;
 }
 
 /*
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 8d88075..f733d76 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -376,6 +376,7 @@ int xen_blkif_xenbus_init(void);
 irqreturn_t xen_blkif_be_int(int irq, void *dev_id);
 int xen_blkif_schedule(void *arg);
 int xen_blkif_purge_persistent(void *arg);
+void xen_blkbk_free_caches(struct xen_blkif *blkif);
 
 int xen_blkbk_flush_diskcache(struct xenbus_transaction xbt,
 			      struct backend_info *be, int state);
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index c2014a0..8afef67 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->persistent_gnts.rb_node = NULL;
 	spin_lock_init(&blkif->free_pages_lock);
 	INIT_LIST_HEAD(&blkif->free_pages);
+	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 
@@ -259,6 +260,17 @@ static void xen_blkif_free(struct xen_blkif *blkif)
 	if (!atomic_dec_and_test(&blkif->refcnt))
 		BUG();
 
+	/* Remove all persistent grants and the cache of ballooned pages. */
+	xen_blkbk_free_caches(blkif);
+
+	/* Make sure everything is drained before shutting down */
+	BUG_ON(blkif->persistent_gnt_c != 0);
+	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
+	BUG_ON(blkif->free_pages_num != 0);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
+	BUG_ON(!list_empty(&blkif->free_pages));
+	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+
 	/* Check that there is no request in use */
 	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
 		list_del(&req->free_list);
-- 
1.7.7.5 (Apple Git-26)
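[Editorial note] The first leak described above is an instance of a general shutdown rule: drain any deferred work that can repopulate a cache before you empty that cache for the last time. A hedged userspace sketch of that rule, with `flush_work` modeled as running whatever work is still queued (all names here are illustrative, not the driver's real API):

```c
#include <stddef.h>

typedef void (*work_fn)(void);

static size_t pool;          /* pages in the free-page pool */
static work_fn pending_work; /* at most one queued work item */

/* Model of the purge work: put_free_pages() returns 3 pages to the pool. */
static void purge_work(void) { pool += 3; }

/* Model of flush_work(): run anything still queued, then forget it. */
static void flush_work(void) {
    if (pending_work) {
        pending_work();
        pending_work = NULL;
    }
}

/* Tear down with purge work still queued; returns the pages leaked. */
size_t teardown(int flush_first) {
    pool = 5;
    pending_work = purge_work;
    if (flush_first) {   /* order after the fix */
        flush_work();    /* queued work repopulates the pool first... */
        pool = 0;        /* ...so emptying it really empties it */
    } else {             /* order before the fix */
        pool = 0;        /* pool emptied too early */
        flush_work();    /* late work adds pages nobody will free */
    }
    return pool;
}
```

Flushing first makes the final cleanup see every page; flushing afterwards leaves the work's pages stranded, which is exactly why the patch calls flush_work before the cleanup moved into xen_blkif_free.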

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chh-0005sj-Ko; Tue, 28 Jan 2014 17:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chg-0005rz-8X
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:04 +0000
Received: from [85.158.139.211:41547] by server-11.bemta-5.messagelabs.com id
	5C/6E-23268-36CE7E25; Tue, 28 Jan 2014 17:44:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390931038!190767!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15409 invoked from network); 28 Jan 2014 17:44:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353081"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:40 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChH-0001UQ-GI;
	Tue, 28 Jan 2014 17:43:39 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:33 +0100
Message-ID: <1390931015-5490-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Matt Rushton <mrushton@amazon.com>

Currently shrink_free_pagepool() is called before the pages used for
persistent grants are released via free_persistent_gnts(). This
results in a memory leak when a VBD that uses persistent grants is
torn down.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xen.org
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: Matt Rushton <mrushton@amazon.com>
Signed-off-by: Matt Wilson <msw@amazon.com>
---
 drivers/block/xen-blkback/blkback.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..30ef7b3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -625,9 +625,6 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
-	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(blkif, 0 /* All */);
-
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -636,6 +633,9 @@ purge_gnt_list:
 	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
 	blkif->persistent_gnt_c = 0;
 
+	/* Since we are shutting down remove all pages from the buffer */
+	shrink_free_pagepool(blkif, 0 /* All */);
+
 	if (log_stats)
 		print_stats(blkif);
 
-- 
1.7.7.5 (Apple Git-26)

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chi-0005t1-1V; Tue, 28 Jan 2014 17:44:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Chg-0005s4-HQ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:44:04 +0000
Received: from [85.158.137.68:62663] by server-17.bemta-3.messagelabs.com id
	6B/FD-15965-36CE7E25; Tue, 28 Jan 2014 17:44:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390931041!11874724!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24016 invoked from network); 28 Jan 2014 17:44:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353094"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:44 -0500
Message-ID: <1390931022.31814.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Tue, 28 Jan 2014 17:43:42 +0000
In-Reply-To: <CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 23:05 +0530, Pranavkumar Sawargaonkar wrote:
> Hi Ian,
> 
> On 28 January 2014 21:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
> >> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> >
> > I should have asked this sooner -- can you point me to the bindings
> > documentation for this device?
> >
> > http://www.gossamer-threads.com/lists/linux/kernel/1845585 suggests it
> > is not yet agreed, so having Xen depend on it may have been a mistake.
> 
> The above patch is still under discussion, so I cannot take changes from
> it into the Xen driver immediately.
> 
> For now I have added the Xen reset code based on the
> "drivers/power/reset/xgene-reboot.c" driver, which is already merged
> into Linux.
> 
> http://www.spinics.net/lists/arm-kernel/msg266039.html
> For now the DTS bindings for Xen are similar to those mentioned at the above link.
> 
> Actually, if you compare the new patch and the old one (from the reboot
> point of view), the only difference between the two DTS bindings is the
> "mask" field. In the old patch it was read from the DTS, but in the
> latest it is hard-coded to 1 in the code and removed from the DTS in
> the new patch.

Do you have a ref for that new patch?

I also don't see any patch to linux/Documentation/devicetree/bindings,
as was requested in that posting from 6 months ago. Where can I find
that?

It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
hasn't landed?

> Now if you want this to be fixed, I can quickly submit a V7 in which the
> mask field is simply hard-coded to 1, so the Xen code will keep working
> even if the Linux code changes.

Looks like the Linux driver uses 0xffffffff if the mask isn't given --
that seems like a good approach.

I think we'll just have to accept that, until the binding is specified
and documented (in linux/Documentation/devicetree/bindings), we may
have to be prepared to change the Xen implementation to match the final
spec without regard to backwards compatibility. If we aren't happy with
that, then I should revert the patch now and we will have to live
without reboot support in the meantime.

Ian
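[Editorial note] The fallback Ian suggests (treat "mask" as optional and default to all bits set) is a common device-tree pattern. A minimal sketch, with `dt_read_u32` as a hypothetical stand-in for the real device-tree accessor (the actual Xen and Linux helpers differ):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DT lookup: only the "with-mask" node defines mask = 1. */
static bool dt_read_u32(const char *node, const char *prop, uint32_t *out)
{
    if (strcmp(node, "with-mask") == 0 && strcmp(prop, "mask") == 0) {
        *out = 1;
        return true;
    }
    return false;  /* property absent */
}

/* Use the "mask" property when present; otherwise default to writing
 * all bits, mirroring what the Linux xgene-reboot driver reportedly
 * does when the property is missing. */
uint32_t reboot_mask(const char *node)
{
    uint32_t mask;
    if (!dt_read_u32(node, "mask", &mask))
        mask = 0xffffffff;
    return mask;
}
```

The point of the pattern is that an older DTS lacking the property keeps working, so the binding can drop "mask" later without breaking existing code.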


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chh-0005sW-8n; Tue, 28 Jan 2014 17:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chf-0005rn-J9
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:03 +0000
Received: from [85.158.139.211:29439] by server-16.bemta-5.messagelabs.com id
	C9/9C-11843-26CE7E25; Tue, 28 Jan 2014 17:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390931040!191634!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23692 invoked from network); 28 Jan 2014 17:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353085"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:41 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChI-0001UQ-OY;
	Tue, 28 Jan 2014 17:43:40 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:35 +0100
Message-ID: <1390931015-5490-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move the call to xen_blkif_put after we have freed the request,
otherwise we have a race between the release of the request and the
cleanup done in xen_blkif_free.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   28 +++++++++++++++++++++++-------
 1 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index dcfe49f..8200aa0 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	 * the proper response on the ring.
 	 */
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
-		xen_blkbk_unmap(pending_req->blkif,
+		struct xen_blkif *blkif = pending_req->blkif;
+
+		xen_blkbk_unmap(blkif,
 		                pending_req->segments,
 		                pending_req->nr_pages);
-		make_response(pending_req->blkif, pending_req->id,
+		make_response(blkif, pending_req->id,
 			      pending_req->operation, pending_req->status);
-		xen_blkif_put(pending_req->blkif);
-		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
-			if (atomic_read(&pending_req->blkif->drain))
-				complete(&pending_req->blkif->drain_complete);
+		free_req(blkif, pending_req);
+		/*
+		 * Make sure the request is freed before releasing blkif,
+		 * or there could be a race between free_req and the
+		 * cleanup done in xen_blkif_free during shutdown.
+		 *
+		 * NB: The fact that we might try to wake up pending_free_wq
+		 * before drain_complete (in case there's a drain going on)
+		 * it's not a problem with our current implementation
+		 * because we can assure there's no thread waiting on
+		 * pending_free_wq if there's a drain going on, but it has
+		 * to be taken into account if the current model is changed.
+		 */
+		xen_blkif_put(blkif);
+		if (atomic_read(&blkif->refcnt) <= 2) {
+			if (atomic_read(&blkif->drain))
+				complete(&blkif->drain_complete);
 		}
-		free_req(pending_req->blkif, pending_req);
 	}
 }
 
-- 
1.7.7.5 (Apple Git-26)

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chg-0005sH-Rf; Tue, 28 Jan 2014 17:44:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chf-0005rm-5t
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:03 +0000
Received: from [85.158.139.211:41451] by server-4.bemta-5.messagelabs.com id
	6D/D6-26791-26CE7E25; Tue, 28 Jan 2014 17:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390931038!190767!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15251 invoked from network); 28 Jan 2014 17:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353082"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:41 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChI-0001UQ-41;
	Tue, 28 Jan 2014 17:43:40 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:34 +0100
Message-ID: <1390931015-5490-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've at least identified two possible memory leaks in blkback, both
related to the shutdown path of a VBD:

- blkback doesn't wait for any pending purge work to finish before
  cleaning the list of free_pages. The purge work will call
  put_free_pages and thus we might end up with pages being added to
  the free_pages list after we have emptied it. Fix this by making
  sure there's no pending purge work before exiting
  xen_blkif_schedule, and moving the free_page cleanup code to
  xen_blkif_free.
- blkback doesn't wait for pending requests to end before cleaning
  persistent grants and the list of free_pages. Again this can add
  pages to the free_pages list or persistent grants to the
  persistent_gnts red-black tree. Fixed by moving the persistent
  grants and free_pages cleanup code to xen_blkif_free.

Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   27 ++++++++++++++++++---------
 drivers/block/xen-blkback/common.h  |    1 +
 drivers/block/xen-blkback/xenbus.c  |   12 ++++++++++++
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 30ef7b3..dcfe49f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -375,7 +375,7 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
 
 	pr_debug(DRV_PFX "Going to purge %u persistent grants\n", num_clean);
 
-	INIT_LIST_HEAD(&blkif->persistent_purge_list);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
 	root = &blkif->persistent_gnts;
 purge_list:
 	foreach_grant_safe(persistent_gnt, n, root, node) {
@@ -625,6 +625,23 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
+	/* Drain pending purge work */
+	flush_work(&blkif->persistent_purge_work);
+
+	if (log_stats)
+		print_stats(blkif);
+
+	blkif->xenblkd = NULL;
+	xen_blkif_put(blkif);
+
+	return 0;
+}
+
+/*
+ * Remove persistent grants and empty the pool of free pages
+ */
+void xen_blkbk_free_caches(struct xen_blkif *blkif)
+{
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -635,14 +652,6 @@ purge_gnt_list:
 
 	/* Since we are shutting down remove all pages from the buffer */
 	shrink_free_pagepool(blkif, 0 /* All */);
-
-	if (log_stats)
-		print_stats(blkif);
-
-	blkif->xenblkd = NULL;
-	xen_blkif_put(blkif);
-
-	return 0;
 }
 
 /*
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 8d88075..f733d76 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -376,6 +376,7 @@ int xen_blkif_xenbus_init(void);
 irqreturn_t xen_blkif_be_int(int irq, void *dev_id);
 int xen_blkif_schedule(void *arg);
 int xen_blkif_purge_persistent(void *arg);
+void xen_blkbk_free_caches(struct xen_blkif *blkif);
 
 int xen_blkbk_flush_diskcache(struct xenbus_transaction xbt,
 			      struct backend_info *be, int state);
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index c2014a0..8afef67 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->persistent_gnts.rb_node = NULL;
 	spin_lock_init(&blkif->free_pages_lock);
 	INIT_LIST_HEAD(&blkif->free_pages);
+	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 
@@ -259,6 +260,17 @@ static void xen_blkif_free(struct xen_blkif *blkif)
 	if (!atomic_dec_and_test(&blkif->refcnt))
 		BUG();
 
+	/* Remove all persistent grants and the cache of ballooned pages. */
+	xen_blkbk_free_caches(blkif);
+
+	/* Make sure everything is drained before shutting down */
+	BUG_ON(blkif->persistent_gnt_c != 0);
+	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
+	BUG_ON(blkif->free_pages_num != 0);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
+	BUG_ON(!list_empty(&blkif->free_pages));
+	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+
 	/* Check that there is no request in use */
 	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
 		list_del(&req->free_list);
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chi-0005tF-Ec; Tue, 28 Jan 2014 17:44:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chh-0005sT-Jm
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:44:05 +0000
Received: from [85.158.137.68:12851] by server-16.bemta-3.messagelabs.com id
	26/BD-26128-46CE7E25; Tue, 28 Jan 2014 17:44:04 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390931041!11874724!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24102 invoked from network); 28 Jan 2014 17:44:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353081"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:40 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChH-0001UQ-GI;
	Tue, 28 Jan 2014 17:43:39 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:33 +0100
Message-ID: <1390931015-5490-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Matt Rushton <mrushton@amazon.com>

Currently shrink_free_pagepool() is called before the pages used for
persistent grants are released via free_persistent_gnts(). This
results in a memory leak when a VBD that uses persistent grants is
torn down.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xen.org
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: Matt Rushton <mrushton@amazon.com>
Signed-off-by: Matt Wilson <msw@amazon.com>
---
 drivers/block/xen-blkback/blkback.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..30ef7b3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -625,9 +625,6 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
-	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(blkif, 0 /* All */);
-
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -636,6 +633,9 @@ purge_gnt_list:
 	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
 	blkif->persistent_gnt_c = 0;
 
+	/* Since we are shutting down remove all pages from the buffer */
+	shrink_free_pagepool(blkif, 0 /* All */);
+
 	if (log_stats)
 		print_stats(blkif);
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chh-0005sW-8n; Tue, 28 Jan 2014 17:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8Chf-0005rn-J9
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:44:03 +0000
Received: from [85.158.139.211:29439] by server-16.bemta-5.messagelabs.com id
	C9/9C-11843-26CE7E25; Tue, 28 Jan 2014 17:44:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1390931040!191634!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23692 invoked from network); 28 Jan 2014 17:44:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353085"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:41 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1W8ChI-0001UQ-OY;
	Tue, 28 Jan 2014 17:43:40 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 28 Jan 2014 18:43:35 +0100
Message-ID: <1390931015-5490-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move the call to xen_blkif_put after we have freed the request,
otherwise we have a race between the release of the request and the
cleanup done in xen_blkif_free.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   28 +++++++++++++++++++++-------
 1 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index dcfe49f..8200aa0 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	 * the proper response on the ring.
 	 */
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
-		xen_blkbk_unmap(pending_req->blkif,
+		struct xen_blkif *blkif = pending_req->blkif;
+
+		xen_blkbk_unmap(blkif,
 		                pending_req->segments,
 		                pending_req->nr_pages);
-		make_response(pending_req->blkif, pending_req->id,
+		make_response(blkif, pending_req->id,
 			      pending_req->operation, pending_req->status);
-		xen_blkif_put(pending_req->blkif);
-		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
-			if (atomic_read(&pending_req->blkif->drain))
-				complete(&pending_req->blkif->drain_complete);
+		free_req(blkif, pending_req);
+		/*
+		 * Make sure the request is freed before releasing blkif,
+		 * or there could be a race between free_req and the
+		 * cleanup done in xen_blkif_free during shutdown.
+		 *
+		 * NB: The fact that we might try to wake up pending_free_wq
+		 * before drain_complete (in case there's a drain going on)
+		 * it's not a problem with our current implementation
+		 * because we can assure there's no thread waiting on
+		 * pending_free_wq if there's a drain going on, but it has
+		 * to be taken into account if the current model is changed.
+		 */
+		xen_blkif_put(blkif);
+		if (atomic_read(&blkif->refcnt) <= 2) {
+			if (atomic_read(&blkif->drain))
+				complete(&blkif->drain_complete);
 		}
-		free_req(pending_req->blkif, pending_req);
 	}
 }
 
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:44:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Chi-0005t1-1V; Tue, 28 Jan 2014 17:44:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Chg-0005s4-HQ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:44:04 +0000
Received: from [85.158.137.68:62663] by server-17.bemta-3.messagelabs.com id
	6B/FD-15965-36CE7E25; Tue, 28 Jan 2014 17:44:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390931041!11874724!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24016 invoked from network); 28 Jan 2014 17:44:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:44:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97353094"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:43:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:43:44 -0500
Message-ID: <1390931022.31814.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Tue, 28 Jan 2014 17:43:42 +0000
In-Reply-To: <CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 23:05 +0530, Pranavkumar Sawargaonkar wrote:
> Hi Ian,
> 
> On 28 January 2014 21:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
> >> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> >
> > I should have asked this sooner -- can you point me to the bindings
> > documentation for this device?
> >
> > http://www.gossamer-threads.com/lists/linux/kernel/1845585 suggests it
> > is not yet agreed, so having Xen depend on it may have been a mistake.
> 
> The above patch is still under discussion, so I cannot take changes
> from it into the Xen driver immediately.
> 
> For now I have added the Xen reset code based on the
> "drivers/power/reset/xgene-reboot.c" driver, which is already merged
> in Linux.
> 
> http://www.spinics.net/lists/arm-kernel/msg266039.html
> For now the DTS bindings for Xen are similar to those mentioned in
> the above link.
> 
> Actually, if you compare the new patch and the old one (from the
> reboot point of view), the only difference between the two DTS
> bindings is the "mask" field. In the old patch it used to be read
> from the DTS, but in the latest one it is hard-coded to 1 in the code
> and removed from the DTS.

Do you have a ref for that new patch?

I also don't see any patch to linux/Documentation/devicetree/bindings,
as was requested in that posting from 6 months ago. Where can I find
that?

It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
hasn't landed?

> Now if you want this to be fixed, I can quickly submit a V7 in which
> the mask field will just be hard-coded to 1, so the Xen code will
> always work even if the Linux code does get changed.

Looks like the Linux driver uses 0xffffffff if the mask isn't given --
that seems like a good approach.

I think we'll just have to accept that until the binding is specified
and documented (in linux/Documentation/devicetree/bindings) then we may
have to be prepared to change the Xen implementation to match the final
spec without regard to backwards compat. If we aren't happy with that
then I should revert the patch now and we will have to live without
reboot support in the meantime.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:46:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CkQ-0006VP-Jy; Tue, 28 Jan 2014 17:46:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8CkP-0006VH-2H
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 17:46:53 +0000
Received: from [85.158.139.211:65490] by server-7.bemta-5.messagelabs.com id
	D2/DB-04824-C0DE7E25; Tue, 28 Jan 2014 17:46:52 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1390931210!191288!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32127 invoked from network); 28 Jan 2014 17:46:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:46:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97355130"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:46:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:46:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8CkK-0001Wz-9L;
	Tue, 28 Jan 2014 17:46:48 +0000
Date: Tue, 28 Jan 2014 17:46:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52E7EA78.5020305@linaro.org>
Message-ID: <alpine.DEB.2.02.1401281742080.4373@kaball.uk.xensource.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
	<52E7EA78.5020305@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Julien Grall wrote:
> >> +static int xen_cpu_notification(struct notifier_block *self,
> >> +				unsigned long action,
> >> +				void *hcpu)
> >> +{
> >> +	int cpu = (long)hcpu;
> >> +
> >> +	switch (action) {
> >> +	case CPU_UP_PREPARE:
> >> +		xen_percpu_init(cpu);
> >> +		break;
> >> +	case CPU_STARTING:
> >> +		xen_interrupt_init();
> >> +		break;
> > 
> > Is CPU_STARTING guaranteed to be called on the new cpu only?
> 
> Yes.
> 
> > If so, why not call both xen_percpu_init and xen_interrupt_init on
> > CPU_STARTING?
> 
> Just in case xen_vcpu is used somewhere else by another CPU_STARTING
> notifier callback; we don't know which callback is called first.

Could you please elaborate a bit more on the problem you are trying to
describe?


> > As it stands I think you introduced a subtle change (that might be OK
> > but I think is unintentional): xen_percpu_init might not be called from
> > the same cpu as its target anymore.
> 
> No, xen_percpu_init and xen_interrupt_init are called on the boot cpu at
> the end of xen_guest_init.
 
Is CPU_UP_PREPARE guaranteed to be called on the target cpu? I think
not, therefore you would be executing xen_percpu_init for cpu1 on cpu0.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:50:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Cnb-0006vD-HH; Tue, 28 Jan 2014 17:50:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Cna-0006v5-61
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:50:10 +0000
Received: from [85.158.143.35:24429] by server-2.bemta-4.messagelabs.com id
	96/1D-11386-1DDE7E25; Tue, 28 Jan 2014 17:50:09 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1390931407!1437125!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22733 invoked from network); 28 Jan 2014 17:50:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:50:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97356869"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:50:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:50:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8CnV-0001ZZ-30;
	Tue, 28 Jan 2014 17:50:05 +0000
Date: Tue, 28 Jan 2014 17:50:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1401281749140.4373@kaball.uk.xensource.com>
References: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, julien.grall@linaro.org, tim@xen.org,
	george.dunlap@citrix.com, xen-devel@lists.xen.org,
	Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Subject: Re: [Xen-devel] [PATCH] xen: arm: increase priority of SGIs used as
	IPIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Ian Campbell wrote:
> Code such as on_selected_cpus expects/requires that an IPI can preempt a
> processor which is just handling a normal interrupt. Lacking this property can
> result in a deadlock between two CPUs trying to IPI each other from interrupt
> context.
> 
> For the time being there are only two priorities, IRQ and IPI, although it is
> also conceivable that in the future some IPIs might be higher priority than
> others. This could be used to implement a better BUG() than we have now, but I
> haven't tackled that yet.
> 
> Tested with a debug patch which sends a local IPI from a keyhandler, which is
> run in serial interrupt context.
> 
> This should also fix the issue reported by Oleksandr in "xen/arm:
> maintenance_interrupt SMP fix" without resorting to trylock.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>

It looks simple enough.
Oleksandr, I would appreciate it if you could test the patch and tell us
whether it works well for you.


> I think this is probably 4.5 material at this point.
> 
> Tested with "HACK: dump pcpu state keyhandler" which I'll post for
> completeness. It gives:
> (XEN) Xen call trace:
> (XEN)    [<0000000000212048>] dump_pcpus+0x28/0x2c (PC)
> (XEN)    [<000000000021256c>] handle_keypress+0x70/0xb0 (LR)
> (XEN)    [<000000000023ed00>] __serial_rx+0x20/0x6c
> (XEN)    [<000000000023f8ac>] serial_rx+0xb4/0xc4
> (XEN)    [<00000000002409ec>] serial_rx_interrupt+0xb0/0xd4
> (XEN)    [<00000000002404b4>] ns16550_interrupt+0x6c/0x90
> (XEN)    [<0000000000245fc0>] do_IRQ+0x144/0x1b4
> (XEN)    [<0000000000245a28>] gic_interrupt+0x60/0xf8
> (XEN)    [<000000000024be64>] do_trap_irq+0x10/0x18
> (XEN)    [<000000000024e240>] hyp_irq+0x5c/0x60
> (XEN)    [<0000000000249324>] init_done+0x10/0x18
> (XEN)    [<0000000000000080>] 0000000000000080
> ---
>  xen/arch/arm/gic.c        |   19 +++++++++++++------
>  xen/arch/arm/time.c       |    6 +++---
>  xen/include/asm-arm/gic.h |   22 ++++++++++++++++++++++
>  3 files changed, 38 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index dcf9cd4..ee37019 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -319,7 +319,8 @@ static void __init gic_dist_init(void)
>  
>      /* Default priority for global interrupts */
>      for ( i = 32; i < gic.lines; i += 4 )
> -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> +        GICD[GICD_IPRIORITYR + i / 4] =
> +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 | GIC_PRI_IRQ;
>  
>      /* Disable all global interrupts */
>      for ( i = 32; i < gic.lines; i += 32 )
> @@ -341,8 +342,12 @@ static void __cpuinit gic_cpu_init(void)
>      GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
>      GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
>      /* Set PPI and SGI priorities */
> -    for (i = 0; i < 32; i += 4)
> -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> +    for (i = 0; i < 16; i += 4)
> +        GICD[GICD_IPRIORITYR + i / 4] =
> +            GIC_PRI_IPI<<24 | GIC_PRI_IPI<<16 | GIC_PRI_IPI<<8 | GIC_PRI_IPI;
> +    for (i = 16; i < 32; i += 4)
> +        GICD[GICD_IPRIORITYR + i / 4] =
> +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 | GIC_PRI_IRQ;
>  
>      /* Local settings: interface controller */
>      GICC[GICC_PMR] = 0xff;                /* Don't mask by priority */
> @@ -538,7 +543,8 @@ void gic_disable_cpu(void)
>  void gic_route_ppis(void)
>  {
>      /* GIC maintenance */
> -    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()), 0xa0);
> +    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
> +                     GIC_PRI_IRQ);
>      /* Route timer interrupt */
>      route_timer_interrupt();
>  }
> @@ -553,7 +559,8 @@ void gic_route_spis(void)
>          if ( (irq = serial_dt_irq(seridx)) == NULL )
>              continue;
>  
> -        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()), 0xa0);
> +        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()),
> +                         GIC_PRI_IRQ);
>      }
>  }
>  
> @@ -777,7 +784,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      level = dt_irq_is_level_triggered(irq);
>  
>      gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +                           GIC_PRI_IRQ);
>  
>      retval = __setup_irq(desc, irq->irq, action);
>      if (retval) {
> diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
> index 81e3e28..68b939d 100644
> --- a/xen/arch/arm/time.c
> +++ b/xen/arch/arm/time.c
> @@ -222,11 +222,11 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>  void __cpuinit route_timer_interrupt(void)
>  {
>      gic_route_dt_irq(&timer_irq[TIMER_PHYS_NONSECURE_PPI],
> -                     cpumask_of(smp_processor_id()), 0xa0);
> +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
>      gic_route_dt_irq(&timer_irq[TIMER_HYP_PPI],
> -                     cpumask_of(smp_processor_id()), 0xa0);
> +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
>      gic_route_dt_irq(&timer_irq[TIMER_VIRT_PPI],
> -                     cpumask_of(smp_processor_id()), 0xa0);
> +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
>  }
>  
>  /* Set up the timer interrupt on this CPU */
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 9c6f9bb..25b2b24 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -129,6 +129,28 @@
>  #define GICH_LR_CPUID_SHIFT     9
>  #define GICH_VTR_NRLRGS         0x3f
>  
> +/*
> + * The minimum GICC_BPR is required to be in the range 0-3. We set
> + * GICC_BPR to 0 but we must expect that it might be 3. This means we
> + * can rely on preemption between the following ranges:
> + * 0xf0..0xff
> + * 0xe0..0xef
> + * 0xd0..0xdf
> + * 0xc0..0xcf
> + * 0xb0..0xbf
> + * 0xa0..0xaf
> + * 0x90..0x9f
> + * 0x80..0x8f
> + *
> + * Priorities within a range will not preempt each other.
> + *
> + * A GIC must support a minimum of 16 priority levels.
> + */
> +#define GIC_PRI_LOWEST     0xf0
> +#define GIC_PRI_IRQ        0xa0
> +#define GIC_PRI_IPI        0x90 /* IPIs must preempt normal interrupts */
> +#define GIC_PRI_HIGHEST    0x80 /* Higher priorities belong to Secure-World */
> +
> +
>  #ifndef __ASSEMBLY__
>  #include <xen/device_tree.h>
>  
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:51:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Cp2-0007AN-3C; Tue, 28 Jan 2014 17:51:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Cp0-0007AH-To
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:51:39 +0000
Received: from [193.109.254.147:56655] by server-6.bemta-14.messagelabs.com id
	FE/2B-14958-A2EE7E25; Tue, 28 Jan 2014 17:51:38 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390931496!421208!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4795 invoked from network); 28 Jan 2014 17:51:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97357782"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:51:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:51:35 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W8Cow-0001b0-S2;
	Tue, 28 Jan 2014 17:51:34 +0000
Date: Tue, 28 Jan 2014 17:51:34 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <konrad.wilk@oracle.com>, <david.vrabel@citrix.com>,
	<boris.ostrovsky@oracle.com>
Message-ID: <20140128175134.GJ32713@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [BUG] Xen kbdfront,
	Xen platform PCI and grant table initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad, David and Boris

I discovered a bug. If you disable Xen platform PCI device and have
xen-kbdfront compiled in the kernel, you won't be able to boot the
guest.

The cause is that the Xen platform PCI device initializes the grant
table before xen-kbdfront kicks in, so everything works fine.

If you disable the Xen platform PCI device, the grant table is
initialized later than xen-kbdfront. In that case, when xen-kbdfront
tries to make use of the grant table, it triggers a BUG_ON in
grant-table.c. With the Xen platform PCI device enabled, everything
works fine again.

The fix would be to move the grant table initialization before
xen-kbdfront's.

Wei.

---8<---
(No sign of grant table initialization before this point)
[    3.813406] input: Xen Virtual Keyboard as /devices/virtual/input/input1
[    3.829622] input: Xen Virtual Pointer as /devices/virtual/input/input2
[    3.846052] ------------[ cut here ]------------
[    3.849948] kernel BUG at drivers/xen/grant-table.c:1192!
[    3.849948] invalid opcode: 0000 [#1] SMP 
[    3.849948] Modules linked in:
[    3.849948] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 3.10.28-stable-for-h16 #19
[    3.849948] Hardware name: Xen HVM domU, BIOS 4.4-unstable 11/26/2013
[    3.849948] task: ffff88003da62750 ti: ffff88003da82000 task.ti: ffff88003da82000
[    3.849948] RIP: 0010:[<ffffffff81331ff7>]  [<ffffffff81331ff7>] get_free_entries+0x39/0x21a
[    3.849948] RSP: 0000:ffff88003da83c88  EFLAGS: 00010046
[    3.849948] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
[    3.849948] RDX: 0000000000000000 RSI: 0000000000000296 RDI: ffffffff81ea5d78
[    3.849948] RBP: 000000000003c12c R08: 0000000000000000 R09: 0000000000000000
[    3.849948] R10: 0000000000000000 R11: ffff88003c12e800 R12: 0000000000000000
[    3.849948] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    3.849948] FS:  0000000000000000(0000) GS:ffff88003a620000(0000) knlGS:0000000000000000
[    3.849948] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[    3.849948] CR2: 0000000000000000 CR3: 0000000002c0c000 CR4: 00000000000006e0
[    3.849948] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    3.849948] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    3.849948] Stack:
[    3.849948]  ffff88003c12e800 ffff88003c1448a8 0000000000000000 ffffffff815cc40f
[    3.849948]  ffffffff818c6920 ffff88003c12e800 ffff88003c144800 0000000000000296
[    3.849948]  0000000000000292 0000000000000000 000000000003c12c 0000000000000000
[    3.849948] Call Trace:
[    3.849948]  [<ffffffff815cc40f>] ? mousedev_create+0x1ea/0x24c
[    3.849948]  [<ffffffff8133224b>] ? gnttab_grant_foreign_access+0x1a/0x46
[    3.849948]  [<ffffffff815ceefe>] ? xenkbd_connect_backend+0x64/0x22e
[    3.849948]  [<ffffffff815cf37a>] ? xenkbd_probe+0x27c/0x2be
[    3.849948]  [<ffffffff813378ca>] ? xenbus_dev_probe+0x56/0xb8
[    3.849948]  [<ffffffff81376e2a>] ? driver_probe_device+0x1b3/0x1b3
[    3.849948]  [<ffffffff81376d09>] ? driver_probe_device+0x92/0x1b3
[    3.849948]  [<ffffffff81376e7d>] ? __driver_attach+0x53/0x73
[    3.849948]  [<ffffffff813755b4>] ? bus_for_each_dev+0x4b/0x7c
[    3.849948]  [<ffffffff8137651d>] ? bus_add_driver+0xd5/0x1f4
[    3.849948]  [<ffffffff813773b0>] ? driver_register+0x89/0x101
[    3.849948]  [<ffffffff81338cc9>] ? xenbus_register_frontend+0x1f/0x39
[    3.849948]  [<ffffffff81d32e0b>] ? atkbd_init+0x23/0x23
[    3.849948]  [<ffffffff8100209f>] ? do_one_initcall+0x75/0x10a
[    3.849948]  [<ffffffff81cebe78>] ? kernel_init_freeable+0x139/0x1ca
[    3.849948]  [<ffffffff81ceb723>] ? do_early_param+0x83/0x83
[    3.849948]  [<ffffffff8172b24c>] ? rest_init+0x70/0x70
[    3.849948]  [<ffffffff8172b252>] ? kernel_init+0x6/0xd3
[    3.849948]  [<ffffffff8174cb3c>] ? ret_from_fork+0x7c/0xb0
[    3.849948]  [<ffffffff8172b24c>] ? rest_init+0x70/0x70
[    3.849948] Code: c7 78 5d ea 81 48 83 ec 48 e8 15 58 41 00 48 89 44 24 38 8b 05 97 3d b7 00 39 d8 0f 83 8d 01 00 00 8b 0d 91 3d b7 00 85 c9 75 02 <0f> 0b 44 8d 6c 19 ff 31 d2 44 8b 25 81 3d b7 00 41 29 c5 44 89 
[    3.849948] RIP  [<ffffffff81331ff7>] get_free_entries+0x39/0x21a
[    3.849948]  RSP <ffff88003da83c88>
[    3.849948] ---[ end trace bb00725567f6ff1c ]---
[    4.582819] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:51:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Cp2-0007AN-3C; Tue, 28 Jan 2014 17:51:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Cp0-0007AH-To
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:51:39 +0000
Received: from [193.109.254.147:56655] by server-6.bemta-14.messagelabs.com id
	FE/2B-14958-A2EE7E25; Tue, 28 Jan 2014 17:51:38 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390931496!421208!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4795 invoked from network); 28 Jan 2014 17:51:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97357782"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 17:51:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:51:35 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W8Cow-0001b0-S2;
	Tue, 28 Jan 2014 17:51:34 +0000
Date: Tue, 28 Jan 2014 17:51:34 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <konrad.wilk@oracle.com>, <david.vrabel@citrix.com>,
	<boris.ostrovsky@oracle.com>
Message-ID: <20140128175134.GJ32713@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [BUG] Xen kbdfront,
	Xen platform PCI and grant table initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad, David and Boris

I discovered a bug. If you disable the Xen platform PCI device and have
xen-kbdfront compiled into the kernel, you won't be able to boot the
guest.

The cause is ordering: normally the Xen platform PCI driver initializes
the grant table before xen-kbdfront kicks in, so everything works fine.

If you disable the Xen platform PCI device, the grant table is
initialized after xen-kbdfront. In that case, when xen-kbdfront tries
to use the grant table it triggers a BUG_ON in grant-table.c.
Re-enabling the Xen platform PCI device makes everything work again.

The fix would be to move grant table initialization before
xen-kbdfront's.

Wei.

---8<---
(No sign of grant table initialization before this point)
[    3.813406] input: Xen Virtual Keyboard as /devices/virtual/input/input1
[    3.829622] input: Xen Virtual Pointer as /devices/virtual/input/input2
[    3.846052] ------------[ cut here ]------------
[    3.849948] kernel BUG at drivers/xen/grant-table.c:1192!
[    3.849948] invalid opcode: 0000 [#1] SMP 
[    3.849948] Modules linked in:
[    3.849948] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 3.10.28-stable-for-h16 #19
[    3.849948] Hardware name: Xen HVM domU, BIOS 4.4-unstable 11/26/2013
[    3.849948] task: ffff88003da62750 ti: ffff88003da82000 task.ti: ffff88003da82000
[    3.849948] RIP: 0010:[<ffffffff81331ff7>]  [<ffffffff81331ff7>] get_free_entries+0x39/0x21a
[    3.849948] RSP: 0000:ffff88003da83c88  EFLAGS: 00010046
[    3.849948] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
[    3.849948] RDX: 0000000000000000 RSI: 0000000000000296 RDI: ffffffff81ea5d78
[    3.849948] RBP: 000000000003c12c R08: 0000000000000000 R09: 0000000000000000
[    3.849948] R10: 0000000000000000 R11: ffff88003c12e800 R12: 0000000000000000
[    3.849948] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[    3.849948] FS:  0000000000000000(0000) GS:ffff88003a620000(0000) knlGS:0000000000000000
[    3.849948] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[    3.849948] CR2: 0000000000000000 CR3: 0000000002c0c000 CR4: 00000000000006e0
[    3.849948] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    3.849948] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    3.849948] Stack:
[    3.849948]  ffff88003c12e800 ffff88003c1448a8 0000000000000000 ffffffff815cc40f
[    3.849948]  ffffffff818c6920 ffff88003c12e800 ffff88003c144800 0000000000000296
[    3.849948]  0000000000000292 0000000000000000 000000000003c12c 0000000000000000
[    3.849948] Call Trace:
[    3.849948]  [<ffffffff815cc40f>] ? mousedev_create+0x1ea/0x24c
[    3.849948]  [<ffffffff8133224b>] ? gnttab_grant_foreign_access+0x1a/0x46
[    3.849948]  [<ffffffff815ceefe>] ? xenkbd_connect_backend+0x64/0x22e
[    3.849948]  [<ffffffff815cf37a>] ? xenkbd_probe+0x27c/0x2be
[    3.849948]  [<ffffffff813378ca>] ? xenbus_dev_probe+0x56/0xb8
[    3.849948]  [<ffffffff81376e2a>] ? driver_probe_device+0x1b3/0x1b3
[    3.849948]  [<ffffffff81376d09>] ? driver_probe_device+0x92/0x1b3
[    3.849948]  [<ffffffff81376e7d>] ? __driver_attach+0x53/0x73
[    3.849948]  [<ffffffff813755b4>] ? bus_for_each_dev+0x4b/0x7c
[    3.849948]  [<ffffffff8137651d>] ? bus_add_driver+0xd5/0x1f4
[    3.849948]  [<ffffffff813773b0>] ? driver_register+0x89/0x101
[    3.849948]  [<ffffffff81338cc9>] ? xenbus_register_frontend+0x1f/0x39
[    3.849948]  [<ffffffff81d32e0b>] ? atkbd_init+0x23/0x23
[    3.849948]  [<ffffffff8100209f>] ? do_one_initcall+0x75/0x10a
[    3.849948]  [<ffffffff81cebe78>] ? kernel_init_freeable+0x139/0x1ca
[    3.849948]  [<ffffffff81ceb723>] ? do_early_param+0x83/0x83
[    3.849948]  [<ffffffff8172b24c>] ? rest_init+0x70/0x70
[    3.849948]  [<ffffffff8172b252>] ? kernel_init+0x6/0xd3
[    3.849948]  [<ffffffff8174cb3c>] ? ret_from_fork+0x7c/0xb0
[    3.849948]  [<ffffffff8172b24c>] ? rest_init+0x70/0x70
[    3.849948] Code: c7 78 5d ea 81 48 83 ec 48 e8 15 58 41 00 48 89 44 24 38 8b 05 97 3d b7 00 39 d8 0f 83 8d 01 00 00 8b 0d 91 3d b7 00 85 c9 75 02 <0f> 0b 44 8d 6c 19 ff 31 d2 44 8b 25 81 3d b7 00 41 29 c5 44 89 
[    3.849948] RIP  [<ffffffff81331ff7>] get_free_entries+0x39/0x21a
[    3.849948]  RSP <ffff88003da83c88>
[    3.849948] ---[ end trace bb00725567f6ff1c ]---
[    4.582819] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:56:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ctn-0007Mt-0G; Tue, 28 Jan 2014 17:56:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8Ctl-0007Mi-0K
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:56:33 +0000
Received: from [85.158.137.68:56254] by server-3.bemta-3.messagelabs.com id
	04/BA-10658-E4FE7E25; Tue, 28 Jan 2014 17:56:30 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390931788!11913826!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28158 invoked from network); 28 Jan 2014 17:56:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:56:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="95362243"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 17:56:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 12:56:27 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W8Cte-0001f7-SG;
	Tue, 28 Jan 2014 17:56:26 +0000
Date: Tue, 28 Jan 2014 17:56:26 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <konrad.wilk@oracle.com>, <david.vrabel@citrix.com>,
	<boris.ostrovsky@oracle.com>
Message-ID: <20140128175626.GK32713@zion.uk.xensource.com>
References: <20140128175134.GJ32713@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140128175134.GJ32713@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: wei.liu2@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG] Xen kbdfront,
 Xen platform PCI and grant table initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Never mind, this is already fixed in xen tip!

Thanks!
Wei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 17:57:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 17:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Cum-0007RB-MW; Tue, 28 Jan 2014 17:57:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W8Cuk-0007Qt-Ne
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 17:57:35 +0000
Received: from [85.158.139.211:27157] by server-10.bemta-5.messagelabs.com id
	89/BA-01405-E8FE7E25; Tue, 28 Jan 2014 17:57:34 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1390931852!193419!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9846 invoked from network); 28 Jan 2014 17:57:33 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 17:57:33 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so926892qac.22
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 09:57:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ayCqApDsi/HgAE8rebXKIyT8jYBWmvHMf53/6Qndp9g=;
	b=NCLj0Mwve4950X/CJx1iF0yn+51sw9PxZkWQwX9RXDAnPz5dtvr0g08AS8YWvGfySh
	s60JDwHUnFcafz0fAWS45aEkcvj9ov28WBGKZ7U98CyNC1UQVwVweB5uKxo9hLFoAn4R
	Yc614If008XeMTMseQEvDWOx8sR3e7817iDb4qpxLVTgZXz3Y+r+B4weAEEA1GtME6pN
	u5K/S72OSCoqc2wj0m1/ppC4DjtacCflitWC9iWbCeSUVmEUH0qrdiBzEZdq7RoZ6TW7
	zDbiQQEEP6Pa04yxpwMMjlyvQlFCt2LzJ+wO+OsGYqgzIOtzVh7Rr7U0IXcv0MSSMvBS
	fSfg==
X-Gm-Message-State: ALoCoQkrJ13FQwhOuj7rli48L6e7X4LgRf0sIKDppYtFv4tvT+0qtO+Dr9lrjeueaAJ41ZK8r/Xe
MIME-Version: 1.0
X-Received: by 10.224.121.67 with SMTP id g3mr4453195qar.78.1390931851498;
	Tue, 28 Jan 2014 09:57:31 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Tue, 28 Jan 2014 09:57:31 -0800 (PST)
In-Reply-To: <1390931022.31814.32.camel@kazak.uk.xensource.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
Date: Tue, 28 Jan 2014 23:27:31 +0530
Message-ID: <CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 28 January 2014 23:13, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-28 at 23:05 +0530, Pranavkumar Sawargaonkar wrote:
>> Hi Ian,
>>
>> On 28 January 2014 21:17, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Mon, 2014-01-27 at 17:04 +0530, Pranavkumar Sawargaonkar wrote:
>> >> +        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
>> >
>> > I should have asked this sooner -- can you point me to the bindings
>> > documentation for this device?
>> >
>> > http://www.gossamer-threads.com/lists/linux/kernel/1845585 suggests it
>> > is not yet agreed, so having Xen depend on it may have been a mistake.
>>
>> The above patch is still under discussion, so I cannot take changes
>> from it into the Xen driver immediately.
>>
>> For now I have added the Xen reset code based on the
>> "drivers/power/reset/xgene-reboot.c" driver, which is already merged
>> in Linux.
>>
>> http://www.spinics.net/lists/arm-kernel/msg266039.html
>> For now the DTS bindings for Xen are the same as those mentioned in
>> the above link.
>>
>> Actually, if you compare the new patch and the old one (from the
>> reboot point of view), the only difference in the DTS bindings is
>> the "mask" field. In the old patch it was read from the DTS, but in
>> the latest it is hard-coded to 1 in the code and removed from the
>> DTS.
>
> Do you have a ref for that new patch?
The new patch is the one you mentioned, i.e.
http://www.gossamer-threads.com/lists/linux/kernel/1845585
It has the new DT bindings.
>

> I also don't see any patch to linux/Documentation/devicetree/bindings,
> as was requested in that posting from 6 months ago. Where can I find
> that?
>
> It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
> hasn't landed?
Yeah, it is dangling, and since the new patch is already posted I think
we can wait for the final DT bindings.
>
>> Now, if you want this fixed, I can quickly submit a V7 in which the
>> mask field is simply hard-coded to 1, so the Xen code will keep
>> working even if the Linux code changes.
>
> Looks like the Linux driver uses 0xffffffff if the mask isn't given --
> that seems like a good approach.
>
> I think we'll just have to accept that, until the binding is
> specified and documented (in linux/Documentation/devicetree/bindings),
> we may have to be prepared to change the Xen implementation to match
> the final spec without regard to backwards compat. If we aren't happy
> with that then I should revert the patch now and we will have to live
> without reboot support in the meantime.
Please do not revert the patch; I think we can go ahead with the
current one. Once the Linux side is concluded I will make the minor
changes to the Xen code based on the new DT bindings.
>
> Ian
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:00:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8CxS-0007lx-9m; Tue, 28 Jan 2014 18:00:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8CxL-0007kB-EQ
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 18:00:18 +0000
Received: from [193.109.254.147:31071] by server-3.bemta-14.messagelabs.com id
	96/DA-11000-E20F7E25; Tue, 28 Jan 2014 18:00:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1390932013!416077!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10602 invoked from network); 28 Jan 2014 18:00:13 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 18:00:13 -0000
Received: by mail-ee0-f52.google.com with SMTP id e53so386536eek.39
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 10:00:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=OI3Sy17PfQink+yPpT71TPNxmcTHdhXjYOFzdaX3/Kk=;
	b=fyZrPzNslMG8dGlsbyYoqluyN1Ga/jyp496Cg4OpHfhD5YCAxchiIMb25ZP/9o8W6o
	kUrNDKs20yBxOpyRHNv+jsdM3/y1q4f50LuH4L1ZxfI82qSgB/kqzGgF3j6aVq1NJ3Fj
	hR3dJEwlUmefbnPiQOoE/5SvvX2i1YUfH/R+1Mld8Pj5js2Y9TaNX6Lks2h+p7Ki0QAT
	MNA8SGp7cdxG+C22urMu6qP4uXTnecKaGnVoZubAcidGguW5gAZP4Tm+hzXSbqcOifyj
	HTv3RAbX/4twMP8ic4OfNGF/nTCBXGFgfAZoSoGWVbGKHN0q/8cg+jV3tG5tYIkd/9q2
	Y7GQ==
X-Gm-Message-State: ALoCoQlQLs6VqKWLYzEW2uW0xHsraQN9OG9ZEP3f7FuH5p/+0vuYO6oIGNQNA8NGCBD3MXeZoIjz
X-Received: by 10.15.56.132 with SMTP id y4mr18760eew.61.1390932013025;
	Tue, 28 Jan 2014 10:00:13 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o13sm58258078eex.19.2014.01.28.10.00.11 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 10:00:12 -0800 (PST)
Message-ID: <52E7F02A.7010508@linaro.org>
Date: Tue, 28 Jan 2014 18:00:10 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
	<52E7EA78.5020305@linaro.org>
	<alpine.DEB.2.02.1401281742080.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401281742080.4373@kaball.uk.xensource.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 05:46 PM, Stefano Stabellini wrote:
> On Tue, 28 Jan 2014, Julien Grall wrote:
>>>> +static int xen_cpu_notification(struct notifier_block *self,
>>>> +				unsigned long action,
>>>> +				void *hcpu)
>>>> +{
>>>> +	int cpu = (long)hcpu;
>>>> +
>>>> +	switch (action) {
>>>> +	case CPU_UP_PREPARE:
>>>> +		xen_percpu_init(cpu);
>>>> +		break;
>>>> +	case CPU_STARTING:
>>>> +		xen_interrupt_init();
>>>> +		break;
>>>
>>> Is CPU_STARTING guaranteed to be called on the new cpu only?
>>
>> Yes.
>>
>>> If so, why not call both xen_percpu_init and xen_interrupt_init on
>>> CPU_STARTING?
>>
>> Just in case xen_vcpu is used somewhere else by another CPU_STARTING
>> notifier callback. We don't know which callback is called first.
> 
> Could you please elaborate a bit more on the problem you are trying to
> describe?

We want to make sure that the vcpu is registered correctly. If it isn't, we
want to be able to skip that CPU so Xen doesn't have a "dead" VCPU to
schedule; with the BUG_ON we can't.

I agree that we currently have a BUG_ON in the middle of xen_percpu_init, but
it is possible to return an error instead. In that case Linux will skip this
CPU and continue to boot.

>>> As it stands I think you introduced a subtle change (that might be OK
>>> but I think is unintentional): xen_percpu_init might not be called from
>>> the same cpu as its target anymore.
>>
>> No, xen_percpu_init and xen_interrupt_init are called on the boot cpu at
>> the end of xen_guest_init.
>  
> Is CPU_UP_PREPARE guaranteed to be called on the target cpu? I think
> not, therefore you would be executing xen_percpu_init for cpu1 on cpu0.
>

I don't see any issue with executing xen_percpu_init for cpu1 on cpu0: all
the code takes the vcpu ID to initialize directly.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:08:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8D4h-00088g-E1; Tue, 28 Jan 2014 18:07:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8D4f-00088b-HS
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 18:07:50 +0000
Received: from [85.158.137.68:47882] by server-4.bemta-3.messagelabs.com id
	33/4C-10414-4F1F7E25; Tue, 28 Jan 2014 18:07:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390932465!8242592!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31493 invoked from network); 28 Jan 2014 18:07:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:07:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SI7bGo021597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 18:07:38 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SI7arh013296
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 18:07:37 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SI7ZrY011549; Tue, 28 Jan 2014 18:07:36 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 10:07:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 407BF1BFA73; Tue, 28 Jan 2014 13:07:34 -0500 (EST)
Date: Tue, 28 Jan 2014 13:07:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: akpm@linux-foundation.org, santosh.shilimkar@ti.com, yinghai@kernel.org,
	boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xensource.com
Message-ID: <20140128180734.GA8158@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BXVAT5kNtrzKuDFl"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] Linux 3.14-rc0 + Merge branch 'akpm' (incoming from
 Andrew - Mon Jan 27) + Xen + i386 dom0 = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable


If I boot a kernel that is built on
commit 1b17366d695c8ab03f98d0155357e97a427e1dce
Merge: d12de1e 7179ba5
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jan 27 21:11:26 2014 -0800

    Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc

it boots .

But if I boot with

commit 54c0a4b46150db1571d955d598cd342c9f1d9657
Merge: 1b17366 c2218e2
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jan 27 21:17:55 2014 -0800

    Merge branch 'akpm' (incoming from Andrew)

it blows up.

Here is the juicy bit (attached is the full serial log)


mapping kernel into physical memory
about to get started...
[    0.000000] Reserving virtual address space above 0xff400000
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-08800-g3bf6f37 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 28 11:29:28 EST 2014
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 3e700-3e767 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000004d066fff] usable
[    0.000000] Xen: [mem 0x000000004d067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x4d067 max_arch_pfn = 0x1000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] initial memory mapped: [mem 0x00000000-0x077fefff]
[    0.000000] Base memory trampoline at [c0095000] 95000 size 16384
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36c00000-0x36dfffff]
[    0.000000]  [mem 0x36c00000-0x36dfffff] page 4k
[    0.000000] BRK [0x01a33000, 0x01a33fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x34000000-0x36bfffff]
[    0.000000]  [mem 0x34000000-0x36bfffff] page 4k
[    0.000000] BRK [0x01a34000, 0x01a34fff] PGTABLE
[    0.000000] BRK [0x01a35000, 0x01a35fff] PGTABLE
[    0.000000] BRK [0x01a36000, 0x01a36fff] PGTABLE
[    0.000000] BRK [0x01a37000, 0x01a37fff] PGTABLE
[    0.000000] BRK [0x01a38000, 0x01a38fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
[    0.000000]  [mem 0x00100000-0x33ffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36e00000-0x36ffdfff]
[    0.000000]  [mem 0x36e00000-0x36ffdfff] page 4k
[    0.000000] RAMDISK: [mem 0x01c7f000-0x06bf8fff]
[    0.000000] ACPI: RSDP 000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS b77b7080 000040
[    0.000000] ACPI: APIC b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] 352MB HIGHMEM available.
[    0.000000] 879MB LOWMEM available.
[    0.000000]   mapped low ram: 0 - 36ffe000
[    0.000000]   low ram: 0 - 36ffe000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   Normal   [mem 0x01000000-0x36ffdfff]
[    0.000000]   HighMem  [mem 0x36ffe000-0x4d066fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x4d066fff]
[    0.000000] On node 0 totalpages: 315391
[    0.000000] free_area_init_node: node 0, pgdat c180da80, node_mem_map f62e2020
[    0.000000]   DMA zone: 32 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   Normal zone: 1728 pages used for memmap
[    0.000000]   Normal zone: 221182 pages, LIFO batch:31
[    0.000000]   HighMem zone: 705 pages used for memmap
[    0.000000]   HighMem zone: 90217 pages, LIFO batch:31
[    0.000000] Using APIC driver default
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2 already used, trying 8
[    0.000000] IOAPIC[0]: Unable to change apic_id!
[    0.000000] IOAPIC[0]: apic_id 255, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 14 pages/cpu @f6f85000 s35456 r0 d21888 u57344
[    0.000000] pcpu-alloc: s35456 r0 d21888 u57344 alloc=14*4096
[    0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 [0] 6 [0] 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 313631
[    0.000000] Kernel command line: earlyprintk=xen debug nofb console=tty console=hvc0 xen-pciback.hide=(02:00.0)(02:00.1)(03:00.*)(04:00.0) loglevel=10
[    0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
[    0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[    0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[    0.000000] Initializing CPU#0
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] BUG: unable to handle kernel paging request at fe75f000
[    0.000000] IP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] *pdpt = 0000000001a2c027 *pde = 0000000000000000
[    0.000000] Oops: 0002 [#1] SMP
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0upstream-08800-g3bf6f37 #1
[    0.000000] Hardware name: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] task: c17c69e0 ti: c17ba000 task.ti: c17ba000
[    0.000000] EIP: e019:[<c184b741>] EFLAGS: 00010046 CPU: 0
[    0.000000] EIP is at memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] EAX: 00000000 EBX: fe75f000 ECX: 00008000 EDX: c18e1c48
[    0.000000] ESI: 00000000 EDI: fe75f000 EBP: c17bbdb8 ESP: c17bbd80
[    0.000000]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: e021
[    0.000000] CR0: 80050033 CR2: fe75f000 CR3: 01916000 CR4: 00042660
[    0.000000] Stack:
[    0.000000]  00008000 00000000 00000000 00000000 ffffffff 00000000 ffffffff 3e75f000
[    0.000000]  00000000 00008000 00000000 00001000 00000000 ffffffff c17bbe04 c184b91d
[    0.000000]  00001000 00000000 00000000 00000000 ffffffff 00000000 ffffffff c103b896
[    0.000000] Call Trace:
[    0.000000]  [<c184b91d>] memblock_virt_alloc_try_nid_nopanic+0xac/0xb4
[    0.000000]  [<c103b896>] ? xen_remap_exchanged_ptes+0x176/0x1f0
[    0.000000]  [<c15f5bb0>] ? _raw_spin_unlock_irqrestore+0x20/0x90
[    0.000000]  [<c185850d>] swiotlb_init_with_tbl+0x7d/0x18a
[    0.000000]  [<c15f0915>] xen_swiotlb_init+0x265/0x3d0
[    0.000000]  [<c182947d>] pci_xen_swiotlb_init+0x1b/0x2e
[    0.000000]  [<c182dd00>] pci_iommu_alloc+0x46/0x5a
[    0.000000]  [<c183a9d9>] mem_init+0xe/0x20f
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c1821aa9>] start_kernel+0x21a/0x40e
[    0.000000]  [<c1821700>] ? repair_env_string+0x5b/0x5b
[    0.000000]  [<c182136b>] i386_start_kernel+0x12e/0x131
[    0.000000]  [<c1826000>] xen_start_kernel+0x65c/0x667
[    0.000000] Code: ff ff 89 c1 8b 45 e8 8b 7d ec 89 4d e4 89 44 24 04 89 c8 89 3c 24 e8 1c 66 02 00 8b 4d e4 31 c0 8d 99 00 00 00 c0 8b 4d ec 89 df <f3> aa 89 d8 c7 04 24 00 00 00 00 8b 55 ec e8 3c 32 da ff 83 c4
[    0.000000] EIP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188 SS:ESP e021:c17bbd80
[    0.000000] CR2: 00000000fe75f000
[    0.000000] ---[ end trace 54b16dbd029168a1 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
[terminal escape-sequence screen redraws omitted; recoverable text from the PXE console:]
Initializing Intel(R) Boot Agent GE v1.3.22
PXE 2.1 Build 086 (WfM 2.0)
Press Ctrl+S to enter the Setup Menu.

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=memblock
Content-Transfer-Encoding: quoted-printable

PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading latest/xen.gz... ok
Loading latest/vmlinuz... ok
Loading latest/initramfs.cpio.gz... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 28 10:58:43 EST 2014
(XEN) Latest ChangeSet: Tue Jan 28 12:28:33 2014 +0800 git:85a50e1-dirty
(XEN) Bootloader: unknown
(XEN) Command line: com1=115200,8n1 console=com1,vga guest_loglvl=all tmem tmem_compress tmem_dedup dom0_mem=999M,max:1232M dom0_max_vcpus=3 cpufreq=performance,verbose loglvl=all apic=debug
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FACP B779F0B8, 010C (r5 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.099 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff83023456cf70,pdev ffff8302345ef0d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff8302345ef250,pdev ffff8302345ef190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff8302345ef520,pdev ffff8302345ef460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff8302345efeb0,pdev ffff8302345efdf0
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) mwait-idle: MWAIT substates: 0x42120
(XEN) mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - VMCS shadowing
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) Brought up 8 CPUs
(XEN) tmem: initialized comp=1 dedup=1 tze=0
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x7ba000
(XEN) elf_parse_binary: phdr: paddr=0x17ba000 memsz=0x4c5000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x1c7f000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xc0000000
(XEN) elf_xen_parse_note: ENTRY = 0xc182122c
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xc1001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: SUPPORTED_FEATURES = 0x801
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xf5800000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xc0000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xc0000000
(XEN)     virt_kstart      = 0xc1000000
(XEN)     virt_kend        = 0xc1c7f000
(XEN)     virt_entry       = 0xc182122c
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 32-bit, PAE, lsb, paddr 0x1000000 -> 0x1c7f000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000230000000->0000000234000000 (219014 pages to be allocated)
(XEN)  Init. ramdisk: 000000023ae86000->000000023fdffbc9
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: 00000000c1000000->00000000c1c7f000
(XEN)  Init. ramdisk: 00000000c1c7f000->00000000c6bf8bc9
(XEN)  Phys-Mach map: 00000000c6bf9000->00000000c6cf2c00
(XEN)  Start info:    00000000c6cf3000->00000000c6cf34b4
(XEN)  Page tables:   00000000c6cf4000->00000000c6d31000
(XEN)  Boot stack:    00000000c6d31000->00000000c6d32000
(XEN)  TOTAL:         00000000c0000000->00000000c7000000
(XEN)  ENTRY ADDRESS: 00000000c182122c
(XEN) Dom0 has maximum 3 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xc1000000 -> 0xc17ba000
(XEN) elf_load_binary: phdr 1 at 0xc17ba000 -> 0xc1913000
(XEN) Scrubbing Free RAM: .....................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 272kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Reserving virtual address space above 0xff400000
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-08800-g3bf6f37 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 28 11:29:28 EST 2014
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 3e700-3e767 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000004d066fff] usable
[    0.000000] Xen: [mem 0x000000004d067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x4d067 max_arch_pfn = 0x1000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] initial memory mapped: [mem 0x00000000-0x077fefff]
[    0.000000] Base memory trampoline at [c0095000] 95000 size 16384
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36c00000-0x36dfffff]
[    0.000000]  [mem 0x36c00000-0x36dfffff] page 4k
[    0.000000] BRK [0x01a33000, 0x01a33fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x34000000-0x36bfffff]
[    0.000000]  [mem 0x34000000-0x36bfffff] page 4k
[    0.000000] BRK [0x01a34000, 0x01a34fff] PGTABLE
[    0.000000] BRK [0x01a35000, 0x01a35fff] PGTABLE
[    0.000000] BRK [0x01a36000, 0x01a36fff] PGTABLE
[    0.000000] BRK [0x01a37000, 0x01a37fff] PGTABLE
[    0.000000] BRK [0x01a38000, 0x01a38fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
[    0.000000]  [mem 0x00100000-0x33ffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36e00000-0x36ffdfff]
[    0.000000]  [mem 0x36e00000-0x36ffdfff] page 4k
[    0.000000] RAMDISK: [mem 0x01c7f000-0x06bf8fff]
[    0.000000] ACPI: RSDP 000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS b77b7080 000040
[    0.000000] ACPI: APIC b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] 352MB HIGHMEM available.
[    0.000000] 879MB LOWMEM available.
[    0.000000]   mapped low ram: 0 - 36ffe000
[    0.000000]   low ram: 0 - 36ffe000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   Normal   [mem 0x01000000-0x36ffdfff]
[    0.000000]   HighMem  [mem 0x36ffe000-0x4d066fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x4d066fff]
[    0.000000] On node 0 totalpages: 315391
[    0.000000] free_area_init_node: node 0, pgdat c180da80, node_mem_map f62e2020
[    0.000000]   DMA zone: 32 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   Normal zone: 1728 pages used for memmap
[    0.000000]   Normal zone: 221182 pages, LIFO batch:31
[    0.000000]   HighMem zone: 705 pages used for memmap
[    0.000000]   HighMem zone: 90217 pages, LIFO batch:31
[    0.000000] Using APIC driver default
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2 already used, trying 8
[    0.000000] IOAPIC[0]: Unable to change apic_id!
[    0.000000] IOAPIC[0]: apic_id 255, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 14 pages/cpu @f6f85000 s35456 r0 d21888 u57344
[    0.000000] pcpu-alloc: s35456 r0 d21888 u57344 alloc=14*4096
[    0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 [0] 6 [0] 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 313631
[    0.000000] Kernel command line: earlyprintk=xen debug nofb console=tty console=hvc0 xen-pciback.hide=(02:00.0)(02:00.1)(03:00.*)(04:00.0) loglevel=10
[    0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
[    0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[    0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[    0.000000] Initializing CPU#0
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] BUG: unable to handle kernel paging request at fe75f000
[    0.000000] IP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] *pdpt = 0000000001a2c027 *pde = 0000000000000000
[    0.000000] Oops: 0002 [#1] SMP
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0upstream-08800-g3bf6f37 #1
[    0.000000] Hardware name: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] task: c17c69e0 ti: c17ba000 task.ti: c17ba000
[    0.000000] EIP: e019:[<c184b741>] EFLAGS: 00010046 CPU: 0
[    0.000000] EIP is at memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] EAX: 00000000 EBX: fe75f000 ECX: 00008000 EDX: c18e1c48
[    0.000000] ESI: 00000000 EDI: fe75f000 EBP: c17bbdb8 ESP: c17bbd80
[    0.000000]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: e021
[    0.000000] CR0: 80050033 CR2: fe75f000 CR3: 01916000 CR4: 00042660
[    0.000000] Stack:
[    0.000000]  00008000 00000000 00000000 00000000 ffffffff 00000000 ffffffff 3e75f000
[    0.000000]  00000000 00008000 00000000 00001000 00000000 ffffffff c17bbe04 c184b91d
[    0.000000]  00001000 00000000 00000000 00000000 ffffffff 00000000 ffffffff c103b896
[    0.000000] Call Trace:
[    0.000000]  [<c184b91d>] memblock_virt_alloc_try_nid_nopanic+0xac/0xb4
[    0.000000]  [<c103b896>] ? xen_remap_exchanged_ptes+0x176/0x1f0
[    0.000000]  [<c15f5bb0>] ? _raw_spin_unlock_irqrestore+0x20/0x90
[    0.000000]  [<c185850d>] swiotlb_init_with_tbl+0x7d/0x18a
[    0.000000]  [<c15f0915>] xen_swiotlb_init+0x265/0x3d0
[    0.000000]  [<c182947d>] pci_xen_swiotlb_init+0x1b/0x2e
[    0.000000]  [<c182dd00>] pci_iommu_alloc+0x46/0x5a
[    0.000000]  [<c183a9d9>] mem_init+0xe/0x20f
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c1821aa9>] start_kernel+0x21a/0x40e
[    0.000000]  [<c1821700>] ? repair_env_string+0x5b/0x5b
[    0.000000]  [<c182136b>] i386_start_kernel+0x12e/0x131
[    0.000000]  [<c1826000>] xen_start_kernel+0x65c/0x667
[    0.000000] Code: ff ff 89 c1 8b 45 e8 8b 7d ec 89 4d e4 89 44 24 04 89 c8 89 3c 24 e8 1c 66 02 00 8b 4d e4 31 c0 8d 99 00 00 00 c0 8b 4d ec 89 df <f3> aa 89 d8 c7 04 24 00 00 00 00 8b 55 ec e8 3c 32 da ff 83 c4
[    0.000000] EIP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188 SS:ESP e021:c17bbd80
[    0.000000] CR2: 00000000fe75f000
[    0.000000] ---[ end trace 54b16dbd029168a1 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--BXVAT5kNtrzKuDFl--


From xen-devel-bounces@lists.xen.org Tue Jan 28 18:08:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8D4h-00088g-E1; Tue, 28 Jan 2014 18:07:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8D4f-00088b-HS
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 18:07:50 +0000
Received: from [85.158.137.68:47882] by server-4.bemta-3.messagelabs.com id
	33/4C-10414-4F1F7E25; Tue, 28 Jan 2014 18:07:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1390932465!8242592!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31493 invoked from network); 28 Jan 2014 18:07:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:07:46 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SI7bGo021597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 18:07:38 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SI7arh013296
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 18:07:37 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SI7ZrY011549; Tue, 28 Jan 2014 18:07:36 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 10:07:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 407BF1BFA73; Tue, 28 Jan 2014 13:07:34 -0500 (EST)
Date: Tue, 28 Jan 2014 13:07:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: akpm@linux-foundation.org, santosh.shilimkar@ti.com, yinghai@kernel.org,
	boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org,
	xen-devel@lists.xensource.com
Message-ID: <20140128180734.GA8158@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BXVAT5kNtrzKuDFl"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] Linux 3.14-rc0 + Merge branch 'akpm' (incoming from
 Andrew - Mon Jan 27) + Xen + i386 dom0 = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable


If I boot a kernel that is built on
commit 1b17366d695c8ab03f98d0155357e97a427e1dce
Merge: d12de1e 7179ba5
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jan 27 21:11:26 2014 -0800

    Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc

it boots.

But if I boot with

commit 54c0a4b46150db1571d955d598cd342c9f1d9657
Merge: 1b17366 c2218e2
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jan 27 21:17:55 2014 -0800

    Merge branch 'akpm' (incoming from Andrew)

it blows up.
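For reference, the regression window above can be narrowed mechanically with git bisect. A minimal sketch follows; the two hashes are copied from the commits quoted above, and everything else (tree location, build/boot procedure) is assumed rather than taken from this report:

```shell
#!/bin/sh
# Bisection sketch for the i386 dom0 boot regression described above.
# Endpoints are the two merge commits quoted in this report.
GOOD=1b17366d695c8ab03f98d0155357e97a427e1dce   # 'powerpc' merge: dom0 boots
BAD=54c0a4b46150db1571d955d598cd342c9f1d9657    # 'akpm' merge: dom0 oopses

# In a Linux checkout one would run (not executed here):
#   git bisect start "$BAD" "$GOOD"
#   ... rebuild, boot as 32-bit Xen dom0, then mark each step with
#   "git bisect good" (boots) or "git bisect bad" (memblock oops),
#   and finish with "git bisect reset".
echo "bisect window: $GOOD..$BAD"
```

With roughly a few hundred commits in an -rc merge window, this converges in about ten build-and-boot cycles.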

Here is the juicy bit (attached is the full serial log)


mapping kernel into physical memory
about to get started...
[    0.000000] Reserving virtual address space above 0xff400000
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-08800-g3bf6f37 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 28 11:29:28 EST 2014
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 3e700-3e767 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000004d066fff] usable
[    0.000000] Xen: [mem 0x000000004d067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x4d067 max_arch_pfn = 0x1000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] initial memory mapped: [mem 0x00000000-0x077fefff]
[    0.000000] Base memory trampoline at [c0095000] 95000 size 16384
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36c00000-0x36dfffff]
[    0.000000]  [mem 0x36c00000-0x36dfffff] page 4k
[    0.000000] BRK [0x01a33000, 0x01a33fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x34000000-0x36bfffff]
[    0.000000]  [mem 0x34000000-0x36bfffff] page 4k
[    0.000000] BRK [0x01a34000, 0x01a34fff] PGTABLE
[    0.000000] BRK [0x01a35000, 0x01a35fff] PGTABLE
[    0.000000] BRK [0x01a36000, 0x01a36fff] PGTABLE
[    0.000000] BRK [0x01a37000, 0x01a37fff] PGTABLE
[    0.000000] BRK [0x01a38000, 0x01a38fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
[    0.000000]  [mem 0x00100000-0x33ffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36e00000-0x36ffdfff]
[    0.000000]  [mem 0x36e00000-0x36ffdfff] page 4k
[    0.000000] RAMDISK: [mem 0x01c7f000-0x06bf8fff]
[    0.000000] ACPI: RSDP 000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS b77b7080 000040
[    0.000000] ACPI: APIC b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] 352MB HIGHMEM available.
[    0.000000] 879MB LOWMEM available.
[    0.000000]   mapped low ram: 0 - 36ffe000
[    0.000000]   low ram: 0 - 36ffe000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   Normal   [mem 0x01000000-0x36ffdfff]
[    0.000000]   HighMem  [mem 0x36ffe000-0x4d066fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x4d066fff]
[    0.000000] On node 0 totalpages: 315391
[    0.000000] free_area_init_node: node 0, pgdat c180da80, node_mem_map f62e2020
[    0.000000]   DMA zone: 32 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   Normal zone: 1728 pages used for memmap
[    0.000000]   Normal zone: 221182 pages, LIFO batch:31
[    0.000000]   HighMem zone: 705 pages used for memmap
[    0.000000]   HighMem zone: 90217 pages, LIFO batch:31
[    0.000000] Using APIC driver default
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2 already used, trying 8
[    0.000000] IOAPIC[0]: Unable to change apic_id!
[    0.000000] IOAPIC[0]: apic_id 255, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 14 pages/cpu @f6f85000 s35456 r0 d21888 u57344
[    0.000000] pcpu-alloc: s35456 r0 d21888 u57344 alloc=14*4096
[    0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 [0] 6 [0] 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 313631
[    0.000000] Kernel command line: earlyprintk=xen debug nofb console=tty console=hvc0 xen-pciback.hide=(02:00.0)(02:00.1)(03:00.*)(04:00.0) loglevel=10
[    0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
[    0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[    0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[    0.000000] Initializing CPU#0
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] BUG: unable to handle kernel paging request at fe75f000
[    0.000000] IP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] *pdpt = 0000000001a2c027 *pde = 0000000000000000
[    0.000000] Oops: 0002 [#1] SMP
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0upstream-08800-g3bf6f37 #1
[    0.000000] Hardware name: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] task: c17c69e0 ti: c17ba000 task.ti: c17ba000
[    0.000000] EIP: e019:[<c184b741>] EFLAGS: 00010046 CPU: 0
[    0.000000] EIP is at memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] EAX: 00000000 EBX: fe75f000 ECX: 00008000 EDX: c18e1c48
[    0.000000] ESI: 00000000 EDI: fe75f000 EBP: c17bbdb8 ESP: c17bbd80
[    0.000000]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: e021
[    0.000000] CR0: 80050033 CR2: fe75f000 CR3: 01916000 CR4: 00042660
[    0.000000] Stack:
[    0.000000]  00008000 00000000 00000000 00000000 ffffffff 00000000 ffffffff 3e75f000
[    0.000000]  00000000 00008000 00000000 00001000 00000000 ffffffff c17bbe04 c184b91d
[    0.000000]  00001000 00000000 00000000 00000000 ffffffff 00000000 ffffffff c103b896
[    0.000000] Call Trace:
[    0.000000]  [<c184b91d>] memblock_virt_alloc_try_nid_nopanic+0xac/0xb4
[    0.000000]  [<c103b896>] ? xen_remap_exchanged_ptes+0x176/0x1f0
[    0.000000]  [<c15f5bb0>] ? _raw_spin_unlock_irqrestore+0x20/0x90
[    0.000000]  [<c185850d>] swiotlb_init_with_tbl+0x7d/0x18a
[    0.000000]  [<c15f0915>] xen_swiotlb_init+0x265/0x3d0
[    0.000000]  [<c182947d>] pci_xen_swiotlb_init+0x1b/0x2e
[    0.000000]  [<c182dd00>] pci_iommu_alloc+0x46/0x5a
[    0.000000]  [<c183a9d9>] mem_init+0xe/0x20f
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c1821aa9>] start_kernel+0x21a/0x40e
[    0.000000]  [<c1821700>] ? repair_env_string+0x5b/0x5b
[    0.000000]  [<c182136b>] i386_start_kernel+0x12e/0x131
[    0.000000]  [<c1826000>] xen_start_kernel+0x65c/0x667
[    0.000000] Code: ff ff 89 c1 8b 45 e8 8b 7d ec 89 4d e4 89 44 24 04 89 c8 89 3c 24 e8 1c 66 02 00 8b 4d e4 31 c0 8d 99 00 00 00 c0 8b 4d ec 89 df <f3> aa 89 d8 c7 04 24 00 00 00 00 8b 55 ec e8 3c 32 da ff 83 c4
[    0.000000] EIP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188 SS:ESP e021:c17bbd80
[    0.000000] CR2: 00000000fe75f000
[    0.000000] ---[ end trace 54b16dbd029168a1 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.
[serial capture of BIOS/PXE boot screens (ANSI control sequences) elided; visible text was:
Initializing Intel(R) Boot Agent GE v1.3.22
PXE 2.1 Build 086 (WfM 2.0)
Press Ctrl+S to enter the Setup Menu.]

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=memblock
Content-Transfer-Encoding: quoted-printable

PXELINUX 3.82 2009-06-09  Copyright (C) 1994-2009 H. Peter Anvin et al
Loading latest/xen.gz... ok
Loading latest/vmlinuz... ok
Loading latest/initramfs.cpio.gz... ok
 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (konrad@(none)) (gcc (GCC) 4.4.4 20100503 (Red Hat 4.4.4-2)) debug=y Tue Jan 28 10:58:43 EST 2014
(XEN) Latest ChangeSet: Tue Jan 28 12:28:33 2014 +0800 git:85a50e1-dirty
(XEN) Bootloader: unknown
(XEN) Command line: com1=115200,8n1 console=com1,vga guest_loglvl=all tmem tmem_compress tmem_dedup dom0_mem=999M,max:1232M dom0_max_vcpus=3 cpufreq=performance,verbose loglvl=all apic=debug
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000099c00 (usable)
(XEN)  0000000000099c00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000a58f1000 (usable)
(XEN)  00000000a58f1000 - 00000000a58f8000 (ACPI NVS)
(XEN)  00000000a58f8000 - 00000000a61b1000 (usable)
(XEN)  00000000a61b1000 - 00000000a6597000 (reserved)
(XEN)  00000000a6597000 - 00000000b74b4000 (usable)
(XEN)  00000000b74b4000 - 00000000b76cb000 (reserved)
(XEN)  00000000b76cb000 - 00000000b770c000 (usable)
(XEN)  00000000b770c000 - 00000000b77b9000 (ACPI NVS)
(XEN)  00000000b77b9000 - 00000000b7fff000 (reserved)
(XEN)  00000000b7fff000 - 00000000b8000000 (usable)
(XEN)  00000000bc000000 - 00000000be200000 (reserved)
(XEN)  00000000f8000000 - 00000000fc000000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fed00000 - 00000000fed04000 (reserved)
(XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000023fe00000 (usable)
(XEN) ACPI: RSDP 000F0490, 0024 (r2 ALASKA)
(XEN) ACPI: XSDT B7794098, 00AC (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: DSDT B77941D8, AEDD (r2 ALASKA    A M I        0 INTL 20091112)
(XEN) ACPI: FACS B77B7080, 0040
(XEN) ACPI: APIC B779F1C8, 0092 (r3 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: FPDT B779F260, 0044 (r1 ALASKA    A M I  1072009 AMI     10013)
(XEN) ACPI: SSDT B779F2A8, 0540 (r1  PmRef  Cpu0Ist     3000 INTL 20051117)
(XEN) ACPI: SSDT B779F7E8, 0AD8 (r1  PmRef    CpuPm     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A02C0, 02F2 (r1  PmRef  Cpu0Tst     3000 INTL 20051117)
(XEN) ACPI: SSDT B77A05B8, 0348 (r1  PmRef    ApTst     3000 INTL 20051117)
(XEN) ACPI: MCFG B77A0900, 003C (r1 ALASKA    A M I  1072009 MSFT       97)
(XEN) ACPI: HPET B77A0940, 0038 (r1 ALASKA    A M I  1072009 AMI.        5)
(XEN) ACPI: SSDT B77A0978, 036D (r1 SataRe SataTabl     1000 INTL 20091112)
(XEN) ACPI: SSDT B77A0CE8, 327D (r1 SaSsdt  SaSsdt      3000 INTL 20091112)
(XEN) ACPI: ASF! B77A3F68, 00A5 (r32 INTEL       HCG        1 TFSM    F4240)
(XEN) ACPI: DMAR B77A4010, 00B8 (r1 INTEL      HSW         1 INTL        1)
(XEN) ACPI: EINJ B77A40C8, 0130 (r1    AMI AMI EINJ        0             0)
(XEN) ACPI: ERST B77A41F8, 0230 (r1  AMIER AMI ERST        0             0)
(XEN) ACPI: HEST B77A4428, 00A8 (r1    AMI AMI HEST        0             0)
(XEN) ACPI: BERT B77A44D0, 0030 (r1    AMI AMI BERT        0             0)
(XEN) System RAM: 8046MB (8239752kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-000000023fe00000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000fd870
(XEN) DMI 2.7 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x1808
(XEN) ACPI: v5 SLEEP INFO: control[0:0], status[0:0]
(XEN) ACPI: SLEEP INFO: pm1x_cnt[1804,0], pm1x_evt[1800,0]
(XEN) ACPI: 32/64X FACS address mismatch in FADT - b77b7080/0000000000000000, using 32
(XEN) ACPI:             wakeup_vec[b77b708c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:12 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:12 APIC version 21
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a701 base: 0xfed00000
(XEN) Xen ERST support is initialized.
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (0 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1528 MSI/MSI-X
(XEN) Switched to APIC driver x2apic_cluster.
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3400.099 MHz processor.
(XEN) Initing memory sharing.
(XEN) xstate_init: using cntxt_size: 0x340 and states: 0x7
(XEN) MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) 02:00.0: alloced (179)
(XEN) 02:00.0: alloced (189) ffff83023456cf70,pdev ffff8302345ef0d0
(XEN) 02:00.1: alloced (179)
(XEN) 02:00.1: alloced (189) ffff8302345ef250,pdev ffff8302345ef190
(XEN) 04:00.0: alloced (179)
(XEN) 04:00.0: alloced (189) ffff8302345ef520,pdev ffff8302345ef460
(XEN) 05:00.0: status=0010 (alloc_pdev+0xb7/0x360 wants 11)
(XEN) 05:00.0: pos=60
(XEN) 05:00.0: id=0d
(XEN) 05:00.0: pos=a0
(XEN) 05:00.0: id=01
(XEN) 05:00.0: pos=00
(XEN) 05:00.0: no cap 11
(XEN) 08:00.0: alloced (179)
(XEN) 08:00.0: alloced (189) ffff8302345efeb0,pdev ffff8302345efdf0
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) Enabled directed EOI with IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) TSC deadline timer enabled
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) mwait-idle: MWAIT substates: 0x42120
(XEN) mwait-idle: lapic_timer_reliable_states 0xffffffff
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - VMCS shadowing
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
(XEN) Brought up 8 CPUs
(XEN) tmem: initialized comp=1 dedup=1 tze=0
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0x7ba000
(XEN) elf_parse_binary: phdr: paddr=0x17ba000 memsz=0x4c5000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x1c7f000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xc0000000
(XEN) elf_xen_parse_note: ENTRY = 0xc182122c
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xc1001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
(XEN) elf_xen_parse_note: SUPPORTED_FEATURES = 0x801
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xf5800000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xc0000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xc0000000
(XEN)     virt_kstart      = 0xc1000000
(XEN)     virt_kend        = 0xc1c7f000
(XEN)     virt_entry       = 0xc182122c
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 32-bit, PAE, lsb, paddr 0x1000000 -> 0x1c7f000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000230000000->0000000234000000 (219014 pages to be allocated)
(XEN)  Init. ramdisk: 000000023ae86000->000000023fdffbc9
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: 00000000c1000000->00000000c1c7f000
(XEN)  Init. ramdisk: 00000000c1c7f000->00000000c6bf8bc9
(XEN)  Phys-Mach map: 00000000c6bf9000->00000000c6cf2c00
(XEN)  Start info:    00000000c6cf3000->00000000c6cf34b4
(XEN)  Page tables:   00000000c6cf4000->00000000c6d31000
(XEN)  Boot stack:    00000000c6d31000->00000000c6d32000
(XEN)  TOTAL:         00000000c0000000->00000000c7000000
(XEN)  ENTRY ADDRESS: 00000000c182122c
(XEN) Dom0 has maximum 3 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xc1000000 -> 0xc17ba000
(XEN) elf_load_binary: phdr 1 at 0xc17ba000 -> 0xc1913000
(XEN) Scrubbing Free RAM: ....................................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 272kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Reserving virtual address space above 0xff400000
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.13.0upstream-08800-g3bf6f37 (konrad@build-external.dumpdata.com) (gcc version 4.4.4 20100503 (Red Hat 4.4.4-2) (GCC) ) #1 SMP Tue Jan 28 11:29:28 EST 2014
[    0.000000] Freeing 99-100 pfn range: 103 pages freed
[    0.000000] Released 103 pages of unused memory
[    0.000000] Set 298846 page(s) to 1-1 mapping
[    0.000000] Populating 3e700-3e767 pfn range: 103 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000098fff] usable
[    0.000000] Xen: [mem 0x0000000000099c00-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x000000004d066fff] usable
[    0.000000] Xen: [mem 0x000000004d067000-0x00000000a58f0fff] unusable
[    0.000000] Xen: [mem 0x00000000a58f1000-0x00000000a58f7fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000a58f8000-0x00000000a61b0fff] unusable
[    0.000000] Xen: [mem 0x00000000a61b1000-0x00000000a6596fff] reserved
[    0.000000] Xen: [mem 0x00000000a6597000-0x00000000b74b3fff] unusable
[    0.000000] Xen: [mem 0x00000000b74b4000-0x00000000b76cafff] reserved
[    0.000000] Xen: [mem 0x00000000b76cb000-0x00000000b770bfff] unusable
[    0.000000] Xen: [mem 0x00000000b770c000-0x00000000b77b8fff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000b77b9000-0x00000000b7ffefff] reserved
[    0.000000] Xen: [mem 0x00000000b7fff000-0x00000000b7ffffff] unusable
[    0.000000] Xen: [mem 0x00000000bc000000-0x00000000be1fffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000023fdfffff] unusable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.7 present.
[    0.000000] DMI: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x4d067 max_arch_pfn = 0x1000000
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] initial memory mapped: [mem 0x00000000-0x077fefff]
[    0.000000] Base memory trampoline at [c0095000] 95000 size 16384
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36c00000-0x36dfffff]
[    0.000000]  [mem 0x36c00000-0x36dfffff] page 4k
[    0.000000] BRK [0x01a33000, 0x01a33fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x34000000-0x36bfffff]
[    0.000000]  [mem 0x34000000-0x36bfffff] page 4k
[    0.000000] BRK [0x01a34000, 0x01a34fff] PGTABLE
[    0.000000] BRK [0x01a35000, 0x01a35fff] PGTABLE
[    0.000000] BRK [0x01a36000, 0x01a36fff] PGTABLE
[    0.000000] BRK [0x01a37000, 0x01a37fff] PGTABLE
[    0.000000] BRK [0x01a38000, 0x01a38fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
[    0.000000]  [mem 0x00100000-0x33ffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x36e00000-0x36ffdfff]
[    0.000000]  [mem 0x36e00000-0x36ffdfff] page 4k
[    0.000000] RAMDISK: [mem 0x01c7f000-0x06bf8fff]
[    0.000000] ACPI: RSDP 000f0490 000024 (v02 ALASKA)
[    0.000000] ACPI: XSDT b7794098 0000AC (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FACP b779f0b8 00010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: DSDT b77941d8 00AEDD (v02 ALASKA    A M I 00000000 INTL 20091112)
[    0.000000] ACPI: FACS b77b7080 000040
[    0.000000] ACPI: APIC b779f1c8 000092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: FPDT b779f260 000044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[    0.000000] ACPI: SSDT b779f2a8 000540 (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b779f7e8 000AD8 (v01  PmRef    CpuPm 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a02c0 0002F2 (v01  PmRef  Cpu0Tst 00003000 INTL 20051117)
[    0.000000] ACPI: SSDT b77a05b8 000348 (v01  PmRef    ApTst 00003000 INTL 20051117)
[    0.000000] ACPI: MCFG b77a0900 00003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[    0.000000] ACPI: HPET b77a0940 000038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[    0.000000] ACPI: SSDT b77a0978 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[    0.000000] ACPI: SSDT b77a0ce8 00327D (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[    0.000000] ACPI: ASF! b77a3f68 0000A5 (v32 INTEL       HCG 00000001 TFSM 000F4240)
[    0.000000] ACPI: XMAR b77a4010 0000B8 (v01 INTEL      HSW  00000001 INTL 00000001)
[    0.000000] ACPI: EINJ b77a40c8 000130 (v01    AMI AMI EINJ 00000000      00000000)
[    0.000000] ACPI: ERST b77a41f8 000230 (v01  AMIER AMI ERST 00000000      00000000)
[    0.000000] ACPI: HEST b77a4428 0000A8 (v01    AMI AMI HEST 00000000      00000000)
[    0.000000] ACPI: BERT b77a44d0 000030 (v01    AMI AMI BERT 00000000      00000000)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] 352MB HIGHMEM available.
[    0.000000] 879MB LOWMEM available.
[    0.000000]   mapped low ram: 0 - 36ffe000
[    0.000000]   low ram: 0 - 36ffe000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   Normal   [mem 0x01000000-0x36ffdfff]
[    0.000000]   HighMem  [mem 0x36ffe000-0x4d066fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00098fff]
[    0.000000]   node   0: [mem 0x00100000-0x4d066fff]
[    0.000000] On node 0 totalpages: 315391
[    0.000000] free_area_init_node: node 0, pgdat c180da80, node_mem_map f62e2020
[    0.000000]   DMA zone: 32 pages used for memmap
[    0.000000]   DMA zone: 0 pages reserved
[    0.000000]   DMA zone: 3992 pages, LIFO batch:0
[    0.000000]   Normal zone: 1728 pages used for memmap
[    0.000000]   Normal zone: 221182 pages, LIFO batch:31
[    0.000000]   HighMem zone: 705 pages used for memmap
[    0.000000]   HighMem zone: 90217 pages, LIFO batch:31
[    0.000000] Using APIC driver default
[    0.000000] ACPI: PM-Timer IO Port: 0x1808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 2 already used, trying 8
[    0.000000] IOAPIC[0]: Unable to change apic_id!
[    0.000000] IOAPIC[0]: apic_id 255, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: [mem 0x00099000-0x00099fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009a000-0x000fffff]
[    0.000000] e820: [mem 0xbe200000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.4-rc2 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 14 pages/cpu @f6f85000 s35456 r0 d21888 u57344
[    0.000000] pcpu-alloc: s35456 r0 d21888 u57344 alloc=14*4096
[    0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 [0] 6 [0] 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 313631
[    0.000000] Kernel command line: earlyprintk=xen debug nofb console=tty console=hvc0 xen-pciback.hide=(02:00.0)(02:00.1)(03:00.*)(04:00.0) loglevel=10
[    0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
[    0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[    0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[    0.000000] Initializing CPU#0
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340
[    0.000000] BUG: unable to handle kernel paging request at fe75f000
[    0.000000] IP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] *pdpt = 0000000001a2c027 *pde = 0000000000000000
[    0.000000] Oops: 0002 [#1] SMP
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0upstream-08800-g3bf6f37 #1
[    0.000000] Hardware name: Supermicro X10SAE/X10SAE, BIOS 1.00 05/03/2013
[    0.000000] task: c17c69e0 ti: c17ba000 task.ti: c17ba000
[    0.000000] EIP: e019:[<c184b741>] EFLAGS: 00010046 CPU: 0
[    0.000000] EIP is at memblock_virt_alloc_internal+0x14e/0x188
[    0.000000] EAX: 00000000 EBX: fe75f000 ECX: 00008000 EDX: c18e1c48
[    0.000000] ESI: 00000000 EDI: fe75f000 EBP: c17bbdb8 ESP: c17bbd80
[    0.000000]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: e021
[    0.000000] CR0: 80050033 CR2: fe75f000 CR3: 01916000 CR4: 00042660
[    0.000000] Stack:
[    0.000000]  00008000 00000000 00000000 00000000 ffffffff 00000000 ffffffff 3e75f000
[    0.000000]  00000000 00008000 00000000 00001000 00000000 ffffffff c17bbe04 c184b91d
[    0.000000]  00001000 00000000 00000000 00000000 ffffffff 00000000 ffffffff c103b896
[    0.000000] Call Trace:
[    0.000000]  [<c184b91d>] memblock_virt_alloc_try_nid_nopanic+0xac/0xb4
[    0.000000]  [<c103b896>] ? xen_remap_exchanged_ptes+0x176/0x1f0
[    0.000000]  [<c15f5bb0>] ? _raw_spin_unlock_irqrestore+0x20/0x90
[    0.000000]  [<c185850d>] swiotlb_init_with_tbl+0x7d/0x18a
[    0.000000]  [<c15f0915>] xen_swiotlb_init+0x265/0x3d0
[    0.000000]  [<c182947d>] pci_xen_swiotlb_init+0x1b/0x2e
[    0.000000]  [<c182dd00>] pci_iommu_alloc+0x46/0x5a
[    0.000000]  [<c183a9d9>] mem_init+0xe/0x20f
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c15f8e00>] ? vmalloc_fault+0x30/0x1c0
[    0.000000]  [<c1821aa9>] start_kernel+0x21a/0x40e
[    0.000000]  [<c1821700>] ? repair_env_string+0x5b/0x5b
[    0.000000]  [<c182136b>] i386_start_kernel+0x12e/0x131
[    0.000000]  [<c1826000>] xen_start_kernel+0x65c/0x667
[    0.000000] Code: ff ff 89 c1 8b 45 e8 8b 7d ec 89 4d e4 89 44 24 04 89 c8 89 3c 24 e8 1c 66 02 00 8b 4d e4 31 c0 8d 99 00 00 00 c0 8b 4d ec 89 df <f3> aa 89 d8 c7 04 24 00 00 00 00 8b 55 ec e8 3c 32 da ff 83 c4
[    0.000000] EIP: [<c184b741>] memblock_virt_alloc_internal+0x14e/0x188 SS:ESP e021:c17bbd80
[    0.000000] CR2: 00000000fe75f000
[    0.000000] ---[ end trace 54b16dbd029168a1 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.
(XEN) Resetting with ACPI MEMORY or I/O RESET_REG.

--BXVAT5kNtrzKuDFl
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--BXVAT5kNtrzKuDFl--


From xen-devel-bounces@lists.xen.org Tue Jan 28 18:12:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8D9B-00009A-Fz; Tue, 28 Jan 2014 18:12:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8D9A-000095-Hk
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:12:28 +0000
Received: from [85.158.137.68:32619] by server-2.bemta-3.messagelabs.com id
	5C/8A-17329-B03F7E25; Tue, 28 Jan 2014 18:12:27 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-31.messagelabs.com!1390932747!10718963!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26733 invoked from network); 28 Jan 2014 18:12:27 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:12:27 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390932747; l=1855;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yMjy1hJmDnOjYNB+tuIc0zeY8q0=;
	b=o7zeKDzT2wa/jKULThmUkLBpMA1PGYz4psTJlQygpcpIFcEgAsMm4SbH5TtkFfAw5dn
	YdV/7OTGc7FcPnwvluxwSMKCzJkq+2y9Ue8c3twXOk8EFXWHeuEv5pDhm1frPzGOkU8Z+
	OBYcHnX1qVfh67iacwSVPghKGjG+LWJjAPw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id Y067aaq0SICQJ7g
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 28 Jan 2014 19:12:26 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 7936750266; Tue, 28 Jan 2014 19:12:26 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 28 Jan 2014 19:12:16 +0100
Message-Id: <1390932736-10568-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] xl: update check-xl-disk-parse to handle
	backend_domname
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxl/check-xl-disk-parse | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
index 41fb7af..797277c 100755
--- a/tools/libxl/check-xl-disk-parse
+++ b/tools/libxl/check-xl-disk-parse
@@ -53,6 +53,7 @@ one $e foo
 expected <<END
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "/dev/vg/guest-volume",
     "vdev": "hda",
     "backend": "unknown",
@@ -73,6 +74,7 @@ one 0 raw:/dev/vg/guest-volume,hda,w
 expected <<END
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "/root/image.iso",
     "vdev": "hdc",
     "backend": "unknown",
@@ -94,6 +96,7 @@ one 0 raw:/root/image.iso,hdc:cdrom,ro
 expected <<EOF
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "/dev/vg/guest-volume",
     "vdev": "xvdb",
     "backend": "phy",
@@ -110,6 +113,7 @@ one 0 backendtype=phy,vdev=xvdb,access=w,target=/dev/vg/guest-volume
 expected <<EOF
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "",
     "vdev": "hdc",
     "backend": "unknown",
@@ -130,6 +134,7 @@ one 0 ,empty,hdc:cdrom,r
 expected <<EOF
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": null,
     "vdev": "hdc",
     "backend": "unknown",
@@ -147,6 +152,7 @@ one 0 vdev=hdc,access=r,devtype=cdrom
 expected <<EOF
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
     "vdev": "xvda",
     "backend": "unknown",
@@ -166,6 +172,7 @@ one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-
 expected <<EOF
 disk: {
     "backend_domid": 0,
+    "backend_domname": null,
     "pdev_path": "app01",
     "vdev": "hda",
     "backend": "unknown",

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:20:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DGb-0000ZY-60; Tue, 28 Jan 2014 18:20:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8DGZ-0000ZQ-CE
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:20:07 +0000
Received: from [193.109.254.147:45331] by server-11.bemta-14.messagelabs.com
	id 2D/2B-20576-6D4F7E25; Tue, 28 Jan 2014 18:20:06 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390933202!424367!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.6 required=7.0 tests=HOT_NASTY,HTML_30_40,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13827 invoked from network); 28 Jan 2014 18:20:04 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:20:04 -0000
Received: from mail-vb0-f43.google.com ([209.85.212.43]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuf00dncJ6hbTZUvIwCTR6WGJK62PMZT@postini.com;
	Tue, 28 Jan 2014 10:20:03 PST
Received: by mail-vb0-f43.google.com with SMTP id p5so490466vbn.30
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 10:20:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=90HDtY0w4o30S3qd2RcuGpbkEUHNg9vRoAJ7ZI1kNNY=;
	b=iXEUI/UspHG8tb5uV0IaFxFWacLE9cGpevZGnol3Cot9XWwMe2YPvpJ5pejgpUrZQx
	QSVeK0/cM9RnO3EZOvO/+E/Doob6TS0VJfyLjkT5Td+vxK2fz1dml9tiHlyxMQOdbYBo
	SUqNpL8ZcemtqS1fWPN3esN3TKVsoB7JbOA4Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=90HDtY0w4o30S3qd2RcuGpbkEUHNg9vRoAJ7ZI1kNNY=;
	b=Q4r7eNA0fIBOcHoP0jL3dL3CbNX+nNoV08DZDBMjFtx0kLPVZI+HrC1yWZSyJUtMiz
	sdOb0vZEvVkQYoj8cU0t3GU+/9+Y06qExB9HmZoy8G6UMDn7tiWSYZ0Zl7VajQXJfmCm
	nK1BhtnJmFNfEjcVgE4k8IxWvUalGMR8XgHBIOCv9DAjuCvJ4HBJJPI2NISZnwr8bR2J
	NgGedwrcKKLnf1yLBupGb/Tp5FzrS0dONPa2jCZCnAnNab6BfckKDSjxuDc5NcMxKh8q
	3MdfXYVW9P4i+7GnA4eZTzp8emAIaBBc+DowEw4BLXYXv6yqN4qR6Ppn/Fe7OkuG/Bx0
	qcQQ==
X-Gm-Message-State: ALoCoQm1F9CinE9uXaGfOlf9tLKbcop95f9X8V3cAUBIN/jYx7W1eoroA36ka6FJlGk0UsA5R6rSjjxMWUFSUtE9bw8+JTM+VohQAJD3NE/RpLXOnhvbnP54xylNO0W++kUZ1c5qthGWzbWzBWvRhXHM6MiTtfCpeYYf2N/zxTx+hGk7xhErQFs=
X-Received: by 10.58.200.168 with SMTP id jt8mr90591vec.30.1390933201254;
	Tue, 28 Jan 2014 10:20:01 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.58.200.168 with SMTP id jt8mr90550vec.30.1390933200869; Tue,
	28 Jan 2014 10:20:00 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Tue, 28 Jan 2014 10:20:00 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401281749140.4373@kaball.uk.xensource.com>
References: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401281749140.4373@kaball.uk.xensource.com>
Date: Tue, 28 Jan 2014 20:20:00 +0200
Message-ID: <CAJEb2DFfqbKUn26GrdDGw2wcKq7nqnpJ-9onrZdFPssthgGdkw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: george.dunlap@citrix.com, julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: increase priority of SGIs used as
	IPIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3196472893408834412=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3196472893408834412==
Content-Type: multipart/alternative; boundary=047d7bd6bd765833a904f10bdf64

--047d7bd6bd765833a904f10bdf64
Content-Type: text/plain; charset=ISO-8859-1

Hi, all.
Sorry for the late response.
Many thanks to everyone for the advice and complete solutions.
I will check the current patch and the earlier solution (with tasklet_schedule)
as well.


On Tue, Jan 28, 2014 at 7:50 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > Code such as on_selected_cpus expects/requires that an IPI can preempt a
> > processor which is just handling a normal interrupt. Lacking this
> property can
> > result in a deadlock between two CPUs trying to IPI each other from
> interrupt
> > context.
> >
> > For the time being there are only two priorities, IRQ and IPI, although
> it is
> > also conceivable that in the future some IPIs might be higher priority
> than
> > others. This could be used to implement a better BUG() than we have now,
> but I
> > haven't tackled that yet.
> >
> > Tested with a debug patch which sends a local IPI from a keyhandler,
> which is
> > run in serial interrupt context.
> >
> > This should also fix the issue reported by Oleksandr in "xen/arm:
> > maintenance_interrupt SMP fix" without resorting to trylock.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>
> It looks simple enough.
> Oleksandr, I would appreciate it if you could test the patch and tell us
> whether it works well for you.
>
>
> > I think this is probably 4.5 material at this point.
> >
> > Tested with "HACK: dump pcpu state keyhandler" which I'll post for
> > completeness. It gives:
> > (XEN) Xen call trace:
> > (XEN)    [<0000000000212048>] dump_pcpus+0x28/0x2c (PC)
> > (XEN)    [<000000000021256c>] handle_keypress+0x70/0xb0 (LR)
> > (XEN)    [<000000000023ed00>] __serial_rx+0x20/0x6c
> > (XEN)    [<000000000023f8ac>] serial_rx+0xb4/0xc4
> > (XEN)    [<00000000002409ec>] serial_rx_interrupt+0xb0/0xd4
> > (XEN)    [<00000000002404b4>] ns16550_interrupt+0x6c/0x90
> > (XEN)    [<0000000000245fc0>] do_IRQ+0x144/0x1b4
> > (XEN)    [<0000000000245a28>] gic_interrupt+0x60/0xf8
> > (XEN)    [<000000000024be64>] do_trap_irq+0x10/0x18
> > (XEN)    [<000000000024e240>] hyp_irq+0x5c/0x60
> > (XEN)    [<0000000000249324>] init_done+0x10/0x18
> > (XEN)    [<0000000000000080>] 0000000000000080
> > ---
> >  xen/arch/arm/gic.c        |   19 +++++++++++++------
> >  xen/arch/arm/time.c       |    6 +++---
> >  xen/include/asm-arm/gic.h |   22 ++++++++++++++++++++++
> >  3 files changed, 38 insertions(+), 9 deletions(-)
> >
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index dcf9cd4..ee37019 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -319,7 +319,8 @@ static void __init gic_dist_init(void)
> >
> >      /* Default priority for global interrupts */
> >      for ( i = 32; i < gic.lines; i += 4 )
> > -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 |
> GIC_PRI_IRQ;
> >
> >      /* Disable all global interrupts */
> >      for ( i = 32; i < gic.lines; i += 32 )
> > @@ -341,8 +342,12 @@ static void __cpuinit gic_cpu_init(void)
> >      GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
> >      GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
> >      /* Set PPI and SGI priorities */
> > -    for (i = 0; i < 32; i += 4)
> > -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> > +    for (i = 0; i < 16; i += 4)
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IPI<<24 | GIC_PRI_IPI<<16 | GIC_PRI_IPI<<8 |
> GIC_PRI_IPI;
> > +    for (i = 16; i < 32; i += 4)
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 |
> GIC_PRI_IRQ;
> >
> >      /* Local settings: interface controller */
> >      GICC[GICC_PMR] = 0xff;                /* Don't mask by priority */
> > @@ -538,7 +543,8 @@ void gic_disable_cpu(void)
> >  void gic_route_ppis(void)
> >  {
> >      /* GIC maintenance */
> > -    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
> 0xa0);
> > +    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
> > +                     GIC_PRI_IRQ);
> >      /* Route timer interrupt */
> >      route_timer_interrupt();
> >  }
> > @@ -553,7 +559,8 @@ void gic_route_spis(void)
> >          if ( (irq = serial_dt_irq(seridx)) == NULL )
> >              continue;
> >
> > -        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()), 0xa0);
> > +        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()),
> > +                         GIC_PRI_IRQ);
> >      }
> >  }
> >
> > @@ -777,7 +784,7 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> >      level = dt_irq_is_level_triggered(irq);
> >
> >      gic_set_irq_properties(irq->irq, level,
> cpumask_of(smp_processor_id()),
> > -                           0xa0);
> > +                           GIC_PRI_IRQ);
> >
> >      retval = __setup_irq(desc, irq->irq, action);
> >      if (retval) {
> > diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
> > index 81e3e28..68b939d 100644
> > --- a/xen/arch/arm/time.c
> > +++ b/xen/arch/arm/time.c
> > @@ -222,11 +222,11 @@ static void vtimer_interrupt(int irq, void
> *dev_id, struct cpu_user_regs *regs)
> >  void __cpuinit route_timer_interrupt(void)
> >  {
> >      gic_route_dt_irq(&timer_irq[TIMER_PHYS_NONSECURE_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >      gic_route_dt_irq(&timer_irq[TIMER_HYP_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >      gic_route_dt_irq(&timer_irq[TIMER_VIRT_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >  }
> >
> >  /* Set up the timer interrupt on this CPU */
> > diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> > index 9c6f9bb..25b2b24 100644
> > --- a/xen/include/asm-arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -129,6 +129,28 @@
> >  #define GICH_LR_CPUID_SHIFT     9
> >  #define GICH_VTR_NRLRGS         0x3f
> >
> > +/*
> > + * The minimum GICC_BPR is required to be in the range 0-3. We set
> > + * GICC_BPR to 0 but we must expect that it might be 3. This means we
> > + * can rely on preemption between the following ranges:
> > + * 0xf0..0xff
> > + * 0xe0..0xef
> > + * 0xc0..0xcf
> > + * 0xb0..0xbf
> > + * 0xa0..0xaf
> > + * 0x90..0x9f
> > + * 0x80..0x8f
> > + *
> > + * Priorities within a range will not preempt each other.
> > + *
> > + * A GIC must support a minimum of 16 priority levels.
> > + */
> > +#define GIC_PRI_LOWEST     0xf0
> > +#define GIC_PRI_IRQ        0xa0
> > +#define GIC_PRI_IPI        0x90 /* IPIs must preempt normal interrupts
> */
> > +#define GIC_PRI_HIGHEST    0x80 /* Higher priorities belong to
> Secure-World */
> > +
> > +
> >  #ifndef __ASSEMBLY__
> >  #include <xen/device_tree.h>
> >
> > --
> > 1.7.10.4
> >
>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

)), 0xa0);<br>
&gt; + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 cpumask_of(smp_processor_id(=
)), GIC_PRI_IRQ);<br>
&gt; =A0}<br>
&gt;<br>
&gt; =A0/* Set up the timer interrupt on this CPU */<br>
&gt; diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h<br>
&gt; index 9c6f9bb..25b2b24 100644<br>
&gt; --- a/xen/include/asm-arm/gic.h<br>
&gt; +++ b/xen/include/asm-arm/gic.h<br>
&gt; @@ -129,6 +129,28 @@<br>
&gt; =A0#define GICH_LR_CPUID_SHIFT =A0 =A0 9<br>
&gt; =A0#define GICH_VTR_NRLRGS =A0 =A0 =A0 =A0 0x3f<br>
&gt;<br>
&gt; +/*<br>
&gt; + * The minimum GICC_BPR is required to be in the range 0-3. We set<br=
>
&gt; + * GICC_BPR to 0 but we must expect that it might be 3. This means we=
<br>
&gt; + * can rely on premption between the following ranges:<br>
&gt; + * 0xf0..0xff<br>
&gt; + * 0xe0..0xdf<br>
&gt; + * 0xc0..0xcf<br>
&gt; + * 0xb0..0xbf<br>
&gt; + * 0xa0..0xaf<br>
&gt; + * 0x90..0x9f<br>
&gt; + * 0x80..0x8f<br>
&gt; + *<br>
&gt; + * Priorities within a range will not preempt each other.<br>
&gt; + *<br>
&gt; + * A GIC must support a mimimum of 16 priority levels.<br>
&gt; + */<br>
&gt; +#define GIC_PRI_LOWEST =A0 =A0 0xf0<br>
&gt; +#define GIC_PRI_IRQ =A0 =A0 =A0 =A00xa0<br>
&gt; +#define GIC_PRI_IPI =A0 =A0 =A0 =A00x90 /* IPIs must preempt normal i=
nterrupts */<br>
&gt; +#define GIC_PRI_HIGHEST =A0 =A00x80 /* Higher priorities belong to Se=
cure-World */<br>
&gt; +<br>
&gt; +<br>
&gt; =A0#ifndef __ASSEMBLY__<br>
&gt; =A0#include &lt;xen/device_tree.h&gt;<br>
&gt;<br>
&gt; --<br>
&gt; 1.7.10.4<br>
&gt;<br>
</div></div></blockquote></div><br><br clear=3D"all"><br>-- <br><font size=
=3D"-1"><br><span style=3D"vertical-align:baseline;font-variant:normal;font=
-style:normal;font-size:12px;background-color:transparent;text-decoration:n=
one;font-family:Arial;font-weight:bold">Name | Title</span><br>
<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span><br><span style=3D"vertical=
-align:baseline;font-variant:normal;font-style:normal;font-size:12px;backgr=
ound-color:transparent;text-decoration:none;font-family:Arial;font-weight:n=
ormal">P +x.xxx.xxx.xxxx=A0=A0M +x.xxx.xxx.xxxx=A0=A0S=A0skype</span><br>
<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:#1155cc;background-color:transparent;f=
ont-weight:normal;font-style:normal;font-variant:normal;text-decoration:und=
erline;vertical-align:baseline">www.globallogic.com</span></a><span style=
=3D"vertical-align:baseline;font-variant:normal;font-style:normal;font-size=
:12px;background-color:transparent;text-decoration:none;font-family:Arial;f=
ont-weight:normal"></span><br>
<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:#1155cc;background-color:transparent;f=
ont-weight:normal;font-style:normal;font-variant:normal;text-decoration:und=
erline;vertical-align:baseline"></span></a><br>
<a href=3D"http://www.globallogic.com/email_disclaimer.txt" target=3D"_blan=
k"><span style=3D"font-size:11px;font-family:Arial;color:#1155cc;background=
-color:transparent;font-weight:normal;font-style:normal;font-variant:normal=
;text-decoration:underline;vertical-align:baseline">http://www.globallogic.=
com/email_disclaimer.txt</span></a><span style=3D"vertical-align:baseline;f=
ont-variant:normal;font-style:normal;font-size:11px;background-color:transp=
arent;text-decoration:none;font-family:Arial;font-weight:normal"></span></f=
ont>
</div>

--047d7bd6bd765833a904f10bdf64--


--===============3196472893408834412==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3196472893408834412==--


From xen-devel-bounces@lists.xen.org Tue Jan 28 18:20:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DGb-0000ZY-60; Tue, 28 Jan 2014 18:20:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8DGZ-0000ZQ-CE
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:20:07 +0000
Received: from [193.109.254.147:45331] by server-11.bemta-14.messagelabs.com
	id 2D/2B-20576-6D4F7E25; Tue, 28 Jan 2014 18:20:06 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390933202!424367!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.6 required=7.0 tests=HOT_NASTY,HTML_30_40,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13827 invoked from network); 28 Jan 2014 18:20:04 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:20:04 -0000
Received: from mail-vb0-f43.google.com ([209.85.212.43]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuf00dncJ6hbTZUvIwCTR6WGJK62PMZT@postini.com;
	Tue, 28 Jan 2014 10:20:03 PST
Received: by mail-vb0-f43.google.com with SMTP id p5so490466vbn.30
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 10:20:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=90HDtY0w4o30S3qd2RcuGpbkEUHNg9vRoAJ7ZI1kNNY=;
	b=iXEUI/UspHG8tb5uV0IaFxFWacLE9cGpevZGnol3Cot9XWwMe2YPvpJ5pejgpUrZQx
	QSVeK0/cM9RnO3EZOvO/+E/Doob6TS0VJfyLjkT5Td+vxK2fz1dml9tiHlyxMQOdbYBo
	SUqNpL8ZcemtqS1fWPN3esN3TKVsoB7JbOA4Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=90HDtY0w4o30S3qd2RcuGpbkEUHNg9vRoAJ7ZI1kNNY=;
	b=Q4r7eNA0fIBOcHoP0jL3dL3CbNX+nNoV08DZDBMjFtx0kLPVZI+HrC1yWZSyJUtMiz
	sdOb0vZEvVkQYoj8cU0t3GU+/9+Y06qExB9HmZoy8G6UMDn7tiWSYZ0Zl7VajQXJfmCm
	nK1BhtnJmFNfEjcVgE4k8IxWvUalGMR8XgHBIOCv9DAjuCvJ4HBJJPI2NISZnwr8bR2J
	NgGedwrcKKLnf1yLBupGb/Tp5FzrS0dONPa2jCZCnAnNab6BfckKDSjxuDc5NcMxKh8q
	3MdfXYVW9P4i+7GnA4eZTzp8emAIaBBc+DowEw4BLXYXv6yqN4qR6Ppn/Fe7OkuG/Bx0
	qcQQ==
X-Gm-Message-State: ALoCoQm1F9CinE9uXaGfOlf9tLKbcop95f9X8V3cAUBIN/jYx7W1eoroA36ka6FJlGk0UsA5R6rSjjxMWUFSUtE9bw8+JTM+VohQAJD3NE/RpLXOnhvbnP54xylNO0W++kUZ1c5qthGWzbWzBWvRhXHM6MiTtfCpeYYf2N/zxTx+hGk7xhErQFs=
X-Received: by 10.58.200.168 with SMTP id jt8mr90591vec.30.1390933201254;
	Tue, 28 Jan 2014 10:20:01 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.58.200.168 with SMTP id jt8mr90550vec.30.1390933200869; Tue,
	28 Jan 2014 10:20:00 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Tue, 28 Jan 2014 10:20:00 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401281749140.4373@kaball.uk.xensource.com>
References: <1390927878-7048-1-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1401281749140.4373@kaball.uk.xensource.com>
Date: Tue, 28 Jan 2014 20:20:00 +0200
Message-ID: <CAJEb2DFfqbKUn26GrdDGw2wcKq7nqnpJ-9onrZdFPssthgGdkw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: george.dunlap@citrix.com, julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: increase priority of SGIs used as
	IPIs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3196472893408834412=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3196472893408834412==
Content-Type: multipart/alternative; boundary=047d7bd6bd765833a904f10bdf64

--047d7bd6bd765833a904f10bdf64
Content-Type: text/plain; charset=ISO-8859-1

Hi all.
Sorry for the late response.
Many thanks to everyone for your advice and complete solutions.
I will test both the current patch and the earlier solution (with
tasklet_schedule).


On Tue, Jan 28, 2014 at 7:50 PM, Stefano Stabellini <
stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 28 Jan 2014, Ian Campbell wrote:
> > Code such as on_selected_cpus expects/requires that an IPI can preempt a
> > processor which is just handling a normal interrupt. Lacking this
> property can
> > result in a deadlock between two CPUs trying to IPI each other from
> interrupt
> > context.
> >
> > For the time being there are only two priorities, IRQ and IPI, although
> it is
> > also conceivable that in the future some IPIs might be higher priority
> than
> > others. This could be used to implement a better BUG() than we have now,
> but I
> > haven't tackled that yet.
> >
> > Tested with a debug patch which sends a local IPI from a keyhandler,
> which is
> > run in serial interrupt context.
> >
> > This should also fix the issue reported by Oleksandr in "xen/arm:
> > maintenance_interrupt SMP fix" without resorting to trylock.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>
> It looks simple enough.
> Oleksandr, I would appreciate it if you could test the patch and tell us if it
> is working well for you.
>
>
> > I think this is probably 4.5 material at this point.
> >
> > Tested with "HACK: dump pcpu state keyhandler" which I'll post for
> > completeness. It gives:
> > (XEN) Xen call trace:
> > (XEN)    [<0000000000212048>] dump_pcpus+0x28/0x2c (PC)
> > (XEN)    [<000000000021256c>] handle_keypress+0x70/0xb0 (LR)
> > (XEN)    [<000000000023ed00>] __serial_rx+0x20/0x6c
> > (XEN)    [<000000000023f8ac>] serial_rx+0xb4/0xc4
> > (XEN)    [<00000000002409ec>] serial_rx_interrupt+0xb0/0xd4
> > (XEN)    [<00000000002404b4>] ns16550_interrupt+0x6c/0x90
> > (XEN)    [<0000000000245fc0>] do_IRQ+0x144/0x1b4
> > (XEN)    [<0000000000245a28>] gic_interrupt+0x60/0xf8
> > (XEN)    [<000000000024be64>] do_trap_irq+0x10/0x18
> > (XEN)    [<000000000024e240>] hyp_irq+0x5c/0x60
> > (XEN)    [<0000000000249324>] init_done+0x10/0x18
> > (XEN)    [<0000000000000080>] 0000000000000080
> > ---
> >  xen/arch/arm/gic.c        |   19 +++++++++++++------
> >  xen/arch/arm/time.c       |    6 +++---
> >  xen/include/asm-arm/gic.h |   22 ++++++++++++++++++++++
> >  3 files changed, 38 insertions(+), 9 deletions(-)
> >
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index dcf9cd4..ee37019 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -319,7 +319,8 @@ static void __init gic_dist_init(void)
> >
> >      /* Default priority for global interrupts */
> >      for ( i = 32; i < gic.lines; i += 4 )
> > -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 |
> GIC_PRI_IRQ;
> >
> >      /* Disable all global interrupts */
> >      for ( i = 32; i < gic.lines; i += 32 )
> > @@ -341,8 +342,12 @@ static void __cpuinit gic_cpu_init(void)
> >      GICD[GICD_ICENABLER] = 0xffff0000; /* Disable all PPI */
> >      GICD[GICD_ISENABLER] = 0x0000ffff; /* Enable all SGI */
> >      /* Set PPI and SGI priorities */
> > -    for (i = 0; i < 32; i += 4)
> > -        GICD[GICD_IPRIORITYR + i / 4] = 0xa0a0a0a0;
> > +    for (i = 0; i < 16; i += 4)
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IPI<<24 | GIC_PRI_IPI<<16 | GIC_PRI_IPI<<8 |
> GIC_PRI_IPI;
> > +    for (i = 16; i < 32; i += 4)
> > +        GICD[GICD_IPRIORITYR + i / 4] =
> > +            GIC_PRI_IRQ<<24 | GIC_PRI_IRQ<<16 | GIC_PRI_IRQ<<8 |
> GIC_PRI_IRQ;
> >
> >      /* Local settings: interface controller */
> >      GICC[GICC_PMR] = 0xff;                /* Don't mask by priority */
> > @@ -538,7 +543,8 @@ void gic_disable_cpu(void)
> >  void gic_route_ppis(void)
> >  {
> >      /* GIC maintenance */
> > -    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
> 0xa0);
> > +    gic_route_dt_irq(&gic.maintenance, cpumask_of(smp_processor_id()),
> > +                     GIC_PRI_IRQ);
> >      /* Route timer interrupt */
> >      route_timer_interrupt();
> >  }
> > @@ -553,7 +559,8 @@ void gic_route_spis(void)
> >          if ( (irq = serial_dt_irq(seridx)) == NULL )
> >              continue;
> >
> > -        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()), 0xa0);
> > +        gic_route_dt_irq(irq, cpumask_of(smp_processor_id()),
> > +                         GIC_PRI_IRQ);
> >      }
> >  }
> >
> > @@ -777,7 +784,7 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> >      level = dt_irq_is_level_triggered(irq);
> >
> >      gic_set_irq_properties(irq->irq, level,
> cpumask_of(smp_processor_id()),
> > -                           0xa0);
> > +                           GIC_PRI_IRQ);
> >
> >      retval = __setup_irq(desc, irq->irq, action);
> >      if (retval) {
> > diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
> > index 81e3e28..68b939d 100644
> > --- a/xen/arch/arm/time.c
> > +++ b/xen/arch/arm/time.c
> > @@ -222,11 +222,11 @@ static void vtimer_interrupt(int irq, void
> *dev_id, struct cpu_user_regs *regs)
> >  void __cpuinit route_timer_interrupt(void)
> >  {
> >      gic_route_dt_irq(&timer_irq[TIMER_PHYS_NONSECURE_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >      gic_route_dt_irq(&timer_irq[TIMER_HYP_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >      gic_route_dt_irq(&timer_irq[TIMER_VIRT_PPI],
> > -                     cpumask_of(smp_processor_id()), 0xa0);
> > +                     cpumask_of(smp_processor_id()), GIC_PRI_IRQ);
> >  }
> >
> >  /* Set up the timer interrupt on this CPU */
> > diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> > index 9c6f9bb..25b2b24 100644
> > --- a/xen/include/asm-arm/gic.h
> > +++ b/xen/include/asm-arm/gic.h
> > @@ -129,6 +129,28 @@
> >  #define GICH_LR_CPUID_SHIFT     9
> >  #define GICH_VTR_NRLRGS         0x3f
> >
> > +/*
> > + * The minimum GICC_BPR is required to be in the range 0-3. We set
> > + * GICC_BPR to 0 but we must expect that it might be 3. This means we
> > + * can rely on preemption between the following ranges:
> > + * 0xf0..0xff
> > + * 0xe0..0xef
> > + * 0xd0..0xdf
> > + * 0xc0..0xcf
> > + * 0xb0..0xbf
> > + * 0xa0..0xaf
> > + * 0x90..0x9f
> > + * 0x80..0x8f
> > + *
> > + * Priorities within a range will not preempt each other.
> > + *
> > + * A GIC must support a minimum of 16 priority levels.
> > + */
> > +#define GIC_PRI_LOWEST     0xf0
> > +#define GIC_PRI_IRQ        0xa0
> > +#define GIC_PRI_IPI        0x90 /* IPIs must preempt normal interrupts
> */
> > +#define GIC_PRI_HIGHEST    0x80 /* Higher priorities belong to
> Secure-World */
> > +
> > +
> >  #ifndef __ASSEMBLY__
> >  #include <xen/device_tree.h>
> >
> > --
> > 1.7.10.4
> >
>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--047d7bd6bd765833a904f10bdf64--


--===============3196472893408834412==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3196472893408834412==--


From xen-devel-bounces@lists.xen.org Tue Jan 28 18:20:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DH7-0000d9-LJ; Tue, 28 Jan 2014 18:20:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8DH5-0000cw-Pd
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 18:20:39 +0000
Received: from [193.109.254.147:41159] by server-15.bemta-14.messagelabs.com
	id 5E/6E-22186-7F4F7E25; Tue, 28 Jan 2014 18:20:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390933236!423216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13421 invoked from network); 28 Jan 2014 18:20:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 18:20:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97371312"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 18:20:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 13:20:35 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8D8j-0001uB-0w;
	Tue, 28 Jan 2014 18:12:01 +0000
Date: Tue, 28 Jan 2014 18:11:57 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52E7F02A.7010508@linaro.org>
Message-ID: <alpine.DEB.2.02.1401281809560.4373@kaball.uk.xensource.com>
References: <1390920842-21886-1-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1401281651400.4373@kaball.uk.xensource.com>
	<52E7EA78.5020305@linaro.org>
	<alpine.DEB.2.02.1401281742080.4373@kaball.uk.xensource.com>
	<52E7F02A.7010508@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Julien Grall wrote:
> On 01/28/2014 05:46 PM, Stefano Stabellini wrote:
> > On Tue, 28 Jan 2014, Julien Grall wrote:
> >>>> +static int xen_cpu_notification(struct notifier_block *self,
> >>>> +				unsigned long action,
> >>>> +				void *hcpu)
> >>>> +{
> >>>> +	int cpu = (long)hcpu;
> >>>> +
> >>>> +	switch (action) {
> >>>> +	case CPU_UP_PREPARE:
> >>>> +		xen_percpu_init(cpu);
> >>>> +		break;
> >>>> +	case CPU_STARTING:
> >>>> +		xen_interrupt_init();
> >>>> +		break;
> >>>
> >>> Is CPU_STARTING guaranteed to be called on the new cpu only?
> >>
> >> Yes.
> >>
> >>> If so, why not call both xen_percpu_init and xen_interrupt_init on
> >>> CPU_STARTING?
> >>
> >> Just in case xen_vcpu is used somewhere else by another CPU_STARTING
> >> cpu notifier callback. We don't know which callback is called first.
> > 
> > Could you please elaborate a bit more on the problem you are trying to
> > describe?
> 
> We want to make sure that the vcpu is registered correctly. If it isn't,
> we currently can't skip that cpu, so the BUG_ON is what keeps Xen from
> being left with a "dead" VCPU to schedule.
> 
> I agree that now we have a BUG_ON in the middle of xen_percpu_init, but
> it's possible to return an error. In this case Linux will skip this cpu
> and continue to boot.

I think there is no benefit in having two separate functions
(xen_percpu_init and xen_interrupt_init) called at two different points
(CPU_UP_PREPARE and CPU_STARTING).
I would simply have one. Simpler is better.


> >>> As it stands I think you introduced a subtle change (that might be OK
> >>> but I think is unintentional): xen_percpu_init might not be called from
> >>> the same cpu as its target anymore.
> >>
> >> No, xen_percpu_init and xen_interrupt_init are called on the boot cpu at
> >> the end of xen_guest_init.
> >  
> > Is CPU_UP_PREPARE guaranteed to be called on the target cpu? I think
> > not, therefore you would be executing xen_percpu_init for cpu1 on cpu0.
> >
> 
> I don't see any issue with executing xen_percpu_init for cpu1 on cpu0;
> all the code takes the vcpu ID to initialize directly.

Me neither, I was simply pointing out that you made this change without
writing it in the commit message (therefore I assume it might be
unintended).

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:22:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DIw-0000oW-EN; Tue, 28 Jan 2014 18:22:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8DIv-0000oN-OM
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:22:33 +0000
Received: from [85.158.143.35:8524] by server-3.bemta-4.messagelabs.com id
	20/CB-32360-965F7E25; Tue, 28 Jan 2014 18:22:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390933349!1444439!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15441 invoked from network); 28 Jan 2014 18:22:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:22:30 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SIMQoO006701
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 18:22:26 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0SIMPu0019917
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 28 Jan 2014 18:22:25 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SIMPqS011546; Tue, 28 Jan 2014 18:22:25 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 10:22:24 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E900B1BFA73; Tue, 28 Jan 2014 13:22:23 -0500 (EST)
Date: Tue, 28 Jan 2014 13:22:23 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140128182223.GA8567@phenom.dumpdata.com>
References: <20140128175134.GJ32713@zion.uk.xensource.com>
	<20140128175626.GK32713@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140128175626.GK32713@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG] Xen kbdfront,
 Xen platform PCI and grant table initialization
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 05:56:26PM +0000, Wei Liu wrote:
> NM, this is already fixed in xen tip!

And already in Linux as well :-)

> 
> Thanks!
> Wei

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:25:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DLM-0000z5-1X; Tue, 28 Jan 2014 18:25:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8DLL-0000yv-1H
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:25:03 +0000
Received: from [85.158.137.68:9246] by server-14.bemta-3.messagelabs.com id
	25/C9-06105-EF5F7E25; Tue, 28 Jan 2014 18:25:02 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390933501!11875377!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9414 invoked from network); 28 Jan 2014 18:25:01 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:25:01 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390933501; l=6624;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=+/dQca49hqvh58yVvvXqK//MjYw=;
	b=cNRj70ejDT20Xx3k88bZ4+2QjRmFv0My1A3aWUTgQDFqEj4TpfNC2W/7O8HDzO1KeGL
	iJ2jplwdAlRrC2eT+Xgh4jG+NdFYZJZGoI3GeiFdfDdWqlnj/axrKF/vK5dMjMsio9oux
	r6UVqCN5xO2PIEyPBG8jYp7brm/7zDooRys=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id 602054q0SIP0DzH
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 28 Jan 2014 19:25:00 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9005450266; Tue, 28 Jan 2014 19:25:00 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 28 Jan 2014 19:24:57 +0100
Message-Id: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] libxl: add option for discard support to xl
	disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handle the new option discard=on|off for disk configuration. It is meant
to disable discard support when file-based backing storage was
intentionally created non-sparse to avoid fragmentation of the file.

The option is a boolean and intended for the backend driver. A new
boolean property "discard_enable" is written to the backend node. An
upcoming patch for qemu will make use of this property. The kernel
blkback driver may be updated as well to disable discard for phy based
backing storage.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
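
As a usage note (not part of the patch): with this applied, a deliberately
non-sparse raw image could be configured along these lines; the path and
vdev below are made-up examples:

    disk = [ 'format=raw, vdev=xvda, access=rw, discard=off, target=/var/lib/xen/images/guest.img' ]

xl then writes discard_enable = "0" into the disk's backend xenstore node
for the backend (qemu or blkback) to consult.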
 docs/misc/xl-disk-configuration.txt | 13 +++++++++++++
 tools/libxl/check-xl-disk-parse     | 21 ++++++++++++++-------
 tools/libxl/libxl.c                 |  2 ++
 tools/libxl/libxl_types.idl         |  1 +
 tools/libxl/libxlu_disk.c           |  1 +
 tools/libxl/libxlu_disk_i.h         |  2 +-
 tools/libxl/libxlu_disk_l.l         |  8 ++++++++
 xen/include/public/io/blkif.h       |  8 ++++++++
 8 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 5bd456d..4f81394 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -178,6 +178,19 @@ information to be interpreted by the executable program <script>,
 These scripts are normally called "block-<script>".
 
 
+discard=<boolean>
+-----------------
+
+Description:           Instruct backend to advertise discard support to frontend
+Supported values:      on, off, 0, 1
+Mandatory:             No
+Default value:         on
+
+This option instructs the backend driver, depending on the value, to advertise
+discard support (TRIM, UNMAP) to the frontend. It allows disabling "hole
+punching" for file based backends which were intentionally created non-sparse.
+
+
 
 ============================================
 DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
index 797277c..485b8c6 100755
--- a/tools/libxl/check-xl-disk-parse
+++ b/tools/libxl/check-xl-disk-parse
@@ -61,7 +61,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 END
@@ -82,7 +83,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 END
@@ -104,7 +106,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
@@ -121,7 +124,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 EOF
@@ -142,7 +146,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 EOF
@@ -160,7 +165,8 @@ disk: {
     "script": "block-iscsi",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
@@ -180,7 +186,8 @@ disk: {
     "script": "block-drbd",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..3633a7d 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        flexarray_append(back, "discard_enable");
+        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b58b198 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
     ("removable", integer),
     ("readwrite", integer),
     ("is_cdrom", integer),
+    ("discard_enable", integer),
     ])
 
 libxl_device_nic = Struct("device_nic", [
diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
index 18fe386..ee82a8d 100644
--- a/tools/libxl/libxlu_disk.c
+++ b/tools/libxl/libxlu_disk.c
@@ -58,6 +58,7 @@ int xlu_disk_parse(XLU_Config *cfg,
     dpc.disk = disk;
 
     disk->readwrite = 1;
+    disk->discard_enable = 1; /* Doing it twice?! */
 
     for (i=0; i<nspecs; i++) {
         e = dpc_prep(&dpc, specs[i]);
diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
index 4fccd4a..c002d02 100644
--- a/tools/libxl/libxlu_disk_i.h
+++ b/tools/libxl/libxlu_disk_i.h
@@ -10,7 +10,7 @@ typedef struct {
     void *scanner;
     YY_BUFFER_STATE buf;
     libxl_device_disk *disk;
-    int access_set, had_depr_prefix;
+    int access_set, discard_set, had_depr_prefix;
     const char *spec;
 } DiskParseContext;
 
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..2afd5e7 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
+discard=1,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
+discard=off,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
+discard=0,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
 
  /* the target magic parameter, eats the rest of the string */
 
@@ -244,6 +248,10 @@ phy:/.*		{ DPC->had_depr_prefix=1; DEPRECATE(0); }
         xlu__disk_err(DPC,yytext,"too many positional parameters");
         return 0; /* don't print any more errors */
     }
+    if (!DPC->discard_set) {
+        DPC->discard_set = 1;
+	DPC->disk->discard_enable = 1;
+    }
 }
 
 . {
diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 542f123..0121e19 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,6 +175,14 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
+ * discard_enable
+ *      Values:         0/1 (boolean)
+ *      Default Value:  1
+ *
+ *      This optional property, set by the toolstack, instructs the backend to
+ *      offer discard to the frontend. If the property is missing the backend
+ *      should offer discard if the backing storage actually supports it.
+ *
  * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:25:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DLM-0000z5-1X; Tue, 28 Jan 2014 18:25:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8DLL-0000yv-1H
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:25:03 +0000
Received: from [85.158.137.68:9246] by server-14.bemta-3.messagelabs.com id
	25/C9-06105-EF5F7E25; Tue, 28 Jan 2014 18:25:02 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390933501!11875377!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9414 invoked from network); 28 Jan 2014 18:25:01 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:25:01 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390933501; l=6624;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=+/dQca49hqvh58yVvvXqK//MjYw=;
	b=cNRj70ejDT20Xx3k88bZ4+2QjRmFv0My1A3aWUTgQDFqEj4TpfNC2W/7O8HDzO1KeGL
	iJ2jplwdAlRrC2eT+Xgh4jG+NdFYZJZGoI3GeiFdfDdWqlnj/axrKF/vK5dMjMsio9oux
	r6UVqCN5xO2PIEyPBG8jYp7brm/7zDooRys=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id 602054q0SIP0DzH
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 28 Jan 2014 19:25:00 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9005450266; Tue, 28 Jan 2014 19:25:00 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 28 Jan 2014 19:24:57 +0100
Message-Id: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] libxl: add option for discard support to xl
	disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handle new option discard=on|off for disk configuration. It is supposed
to disable discard support if file based backing storage was
intentionally created non-sparse to avoid fragmentation of the file.

The option is a boolean and intended for the backend driver. A new
boolean property "discard_enable" is written to the backend node. An
upcoming patch for qemu will make use of this property. The kernel
blkback driver may be updated as well to disable discard for phy based
backing storage.
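
As a hypothetical illustration of the resulting xl syntax (the image path
and vdev name below are made up), a domain configuration could disable
discard for a deliberately pre-allocated raw image like this:

```
disk = [ 'format=raw, vdev=xvda, access=rw, discard=off, target=/path/to/preallocated.img' ]
```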

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/misc/xl-disk-configuration.txt | 13 +++++++++++++
 tools/libxl/check-xl-disk-parse     | 21 ++++++++++++++-------
 tools/libxl/libxl.c                 |  2 ++
 tools/libxl/libxl_types.idl         |  1 +
 tools/libxl/libxlu_disk.c           |  1 +
 tools/libxl/libxlu_disk_i.h         |  2 +-
 tools/libxl/libxlu_disk_l.l         |  8 ++++++++
 xen/include/public/io/blkif.h       |  8 ++++++++
 8 files changed, 48 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 5bd456d..4f81394 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -178,6 +178,19 @@ information to be interpreted by the executable program <script>,
 These scripts are normally called "block-<script>".
 
 
+discard=<boolean>
+-----------------
+
+Description:           Instruct backend to advertise discard support to frontend
+Supported values:      on, off, 0, 1
+Mandatory:             No
+Default value:         on
+
+This option instructs the backend driver, depending on the value, to advertise
+discard support (TRIM, UNMAP) to the frontend. It allows disabling "hole
+punching" for file-based backends which were intentionally created non-sparse.
+
+
 
 ============================================
 DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
index 797277c..485b8c6 100755
--- a/tools/libxl/check-xl-disk-parse
+++ b/tools/libxl/check-xl-disk-parse
@@ -61,7 +61,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 END
@@ -82,7 +83,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 END
@@ -104,7 +106,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
@@ -121,7 +124,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 EOF
@@ -142,7 +146,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": 1
 }
 
 EOF
@@ -160,7 +165,8 @@ disk: {
     "script": "block-iscsi",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
@@ -180,7 +186,8 @@ disk: {
     "script": "block-drbd",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": 1
 }
 
 EOF
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..3633a7d 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        flexarray_append(back, "discard_enable");
+        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b58b198 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
     ("removable", integer),
     ("readwrite", integer),
     ("is_cdrom", integer),
+    ("discard_enable", integer),
     ])
 
 libxl_device_nic = Struct("device_nic", [
diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
index 18fe386..ee82a8d 100644
--- a/tools/libxl/libxlu_disk.c
+++ b/tools/libxl/libxlu_disk.c
@@ -58,6 +58,7 @@ int xlu_disk_parse(XLU_Config *cfg,
     dpc.disk = disk;
 
     disk->readwrite = 1;
+    disk->discard_enable = 1;
 
     for (i=0; i<nspecs; i++) {
         e = dpc_prep(&dpc, specs[i]);
diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
index 4fccd4a..c002d02 100644
--- a/tools/libxl/libxlu_disk_i.h
+++ b/tools/libxl/libxlu_disk_i.h
@@ -10,7 +10,7 @@ typedef struct {
     void *scanner;
     YY_BUFFER_STATE buf;
     libxl_device_disk *disk;
-    int access_set, had_depr_prefix;
+    int access_set, discard_set, had_depr_prefix;
     const char *spec;
 } DiskParseContext;
 
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..2afd5e7 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
+discard=1,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
+discard=off,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
+discard=0,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
 
  /* the target magic parameter, eats the rest of the string */
 
@@ -244,6 +248,10 @@ phy:/.*		{ DPC->had_depr_prefix=1; DEPRECATE(0); }
         xlu__disk_err(DPC,yytext,"too many positional parameters");
         return 0; /* don't print any more errors */
     }
+    if (!DPC->discard_set) {
+        DPC->discard_set = 1;
+	DPC->disk->discard_enable = 1;
+    }
 }
 
 . {
diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 542f123..0121e19 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,6 +175,14 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
+ * discard_enable
+ *      Values:         0/1 (boolean)
+ *      Default Value:  1
+ *
+ *      This optional property, set by the toolstack, instructs the backend to
+ *      offer discard to the frontend. If the property is missing the backend
+ *      should offer discard if the backing storage actually supports it.
+ *
  * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8DNC-000177-Kw; Tue, 28 Jan 2014 18:26:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8DNA-00016t-Na
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 18:26:56 +0000
Received: from [193.109.254.147:39038] by server-3.bemta-14.messagelabs.com id
	C1/E1-11000-076F7E25; Tue, 28 Jan 2014 18:26:56 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390933614!424542!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14182 invoked from network); 28 Jan 2014 18:26:55 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:26:55 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390933614; l=4208;
	s=domk; d=aepfle.de;
	h=References:In-Reply-To:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=5mKLGhCYZstXeny8//TuIyL04cA=;
	b=bs2QycbsXeuDy1mRudWvXpMgRdwWnZr8gokE+6D5n1QUyhxyM2cg178kcQn7/76RIap
	TwlurX6ISYx+cxdr3fPrueMDfLvzMwnI+WGyierH/vBWTMETqszsNRxQp2XHJ7k1jtANh
	Rj6xaNMkqK8uiNKMwsAsrEMh0I3N550s8Dc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id C029b5q0SIQsFOw
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 28 Jan 2014 19:26:54 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5470F50266; Tue, 28 Jan 2014 19:26:54 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 28 Jan 2014 19:26:43 +0100
Message-Id: <1390933603-13353-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] qemu-upstream: add discard support for xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Implement discard support for xen_disk. It makes use of the existing
discard code in qemu.

Discard support is enabled by default. The tool stack may provide a boolean
property "discard_enable" in the backend node to disable it. This is helpful
in case the backing file was intentionally created non-sparse to avoid
fragmentation.
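
The default-on-unless-overridden behaviour of blk_parse_discard() can be
sketched in Python (a standalone illustration, not qemu code; read_be_int
stands in for xenstore_read_be_int and returns None when the key is absent):

```python
def parse_discard(read_be_int):
    """Sketch of qemu's blk_parse_discard(): discard defaults to on,
    and an explicit "discard_enable" backend property overrides it."""
    feature_discard = True
    val = read_be_int("discard_enable")  # None models a missing xenstore key
    if val is not None:
        feature_discard = bool(val)
    return feature_discard

# Missing property: discard stays enabled.
print(parse_discard(lambda key: None))
# Toolstack wrote discard_enable=0: discard is disabled.
print(parse_discard(lambda key: 0))
```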

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 hw/block/xen_blkif.h | 12 ++++++++++++
 hw/block/xen_disk.c  | 28 ++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index c0f4136..711b692 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
@@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 03e30d7..539f2ed 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -114,6 +114,7 @@ struct XenBlkDev {
     int                 requests_finished;
 
     /* Persistent grants extension */
+    gboolean            feature_discard;
     gboolean            feature_persistent;
     GTree               *persistent_gnts;
     unsigned int        persistent_gnt_count;
@@ -253,6 +254,8 @@ static int ioreq_parse(struct ioreq *ioreq)
     case BLKIF_OP_WRITE:
         ioreq->prot = PROT_READ; /* from memory */
         break;
+    case BLKIF_OP_DISCARD:
+        return 0;
     default:
         xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
                       ioreq->req.operation);
@@ -490,6 +493,7 @@ static void qemu_aio_complete(void *opaque, int ret)
 static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
 {
     struct XenBlkDev *blkdev = ioreq->blkdev;
+    struct blkif_request_discard *discard_req = (void *)&ioreq->req;
 
     if (ioreq->req.nr_segments && ioreq_map(ioreq) == -1) {
         goto err_no_map;
@@ -521,6 +525,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
                         &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                         qemu_aio_complete, ioreq);
         break;
+    case BLKIF_OP_DISCARD:
+        bdrv_acct_start(blkdev->bs, &ioreq->acct, discard_req->nr_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
+        ioreq->aio_inflight++;
+        bdrv_aio_discard(blkdev->bs,
+                        discard_req->sector_number, discard_req->nr_sectors,
+                        qemu_aio_complete, ioreq);
+        break;
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
         goto err;
@@ -699,6 +710,19 @@ static void blk_alloc(struct XenDevice *xendev)
     }
 }
 
+static void blk_parse_discard(struct XenBlkDev *blkdev)
+{
+    int enable;
+
+    blkdev->feature_discard = true;
+
+    if (xenstore_read_be_int(&blkdev->xendev, "discard_enable", &enable) == 0)
+	    blkdev->feature_discard = !!enable;
+
+    if (blkdev->feature_discard)
+	    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
+}
+
 static int blk_init(struct XenDevice *xendev)
 {
     struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
@@ -766,6 +790,8 @@ static int blk_init(struct XenDevice *xendev)
     xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
+    blk_parse_discard(blkdev);
+
     g_free(directiosafe);
     return 0;
 
@@ -801,6 +827,8 @@ static int blk_connect(struct XenDevice *xendev)
         qflags |= BDRV_O_RDWR;
         readonly = false;
     }
+    if (blkdev->feature_discard)
+        qflags |= BDRV_O_UNMAP;
 
     /* init qemu block driver */
     index = (blkdev->xendev.dev - 202 * 256) / 16;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:54:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:54:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Dnw-0002EH-Gb; Tue, 28 Jan 2014 18:54:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8Dnv-0002EC-QT
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 18:54:36 +0000
Received: from [85.158.143.35:42794] by server-2.bemta-4.messagelabs.com id
	08/C4-11386-BECF7E25; Tue, 28 Jan 2014 18:54:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390935274!1443738!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32655 invoked from network); 28 Jan 2014 18:54:34 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 18:54:34 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so427486eaj.40
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 10:54:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=hJVBy/uQeS2r2AO3t2oVtmovOeG0ggHDK5uEaqWXP2c=;
	b=Bzr+2IwlLMZDy1uqYmvk6/9qlAT73ss65epLwQRb4jqGIj5UkidXAOUpeFLt//F/1k
	Mqieb5gu73AdKfZMEpLQznUBg3M+nsHypNYUpMiaBhE2MEB1+btpPf+WRqZwfNEym2wH
	giWJ6gH50lCm21801Ps9HcSuzfRX1nQmFGPQSWgJGro9Fm07DfXIQkd1y+Ky7R1saHfE
	Q0Hetpf6EbJVbSYatPuFBshSFstTXZHv8m1EFLqKKaUkxZGt7zogyBv2ARqnB2Sfarum
	OSqEIksiIAyt5L3OHNHaJrAkwasCP4hEN9YQAyFc2D4ZqRHDGm+CsyfUsEnaHYixIx9h
	cFqg==
X-Gm-Message-State: ALoCoQmDPmyWZ0KHr0h4k+ByuGrftawN31uwVS5mJ+Ea4D3KcxoFTI3RGTQSsP0JUzvcEdFhwIMm
X-Received: by 10.15.21.2 with SMTP id c2mr164989eeu.77.1390935273636;
	Tue, 28 Jan 2014 10:54:33 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	o13sm58852942eex.19.2014.01.28.10.54.30 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 28 Jan 2014 10:54:31 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Tue, 28 Jan 2014 18:54:23 +0000
Message-Id: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Julien Grall <julien.grall@linaro.org>, ian.campbell@citrix.com,
	patches@linaro.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The event channel driver needs to be initialized very early. Until now, Xen
initialization was done after all CPUs were brought up.

We can safely move the initialization to an early initcall.

Also use a cpu notifier to:
    - Register the VCPU when the CPU is prepared
    - Enable event channel IRQ when the CPU is running
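
The notifier flow described above can be sketched in Python, purely as an
illustration (the class and function names here are made up; the real code
is the C notifier_block in the patch): a callback is registered once and is
then invoked for each secondary CPU as it reaches CPU_STARTING:

```python
CPU_STARTING = "CPU_STARTING"  # stands in for the kernel's CPU_STARTING action

class NotifierChain:
    """Minimal stand-in for the kernel's CPU notifier chain."""
    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        self._callbacks.append(callback)

    def notify(self, action, cpu):
        for callback in self._callbacks:
            callback(action, cpu)

initialized_cpus = []

def xen_cpu_notification(action, cpu):
    # Per-CPU Xen setup happens only on CPU_STARTING, as in the patch.
    if action == CPU_STARTING:
        initialized_cpus.append(cpu)  # stands in for xen_percpu_init()

chain = NotifierChain()
chain.register(xen_cpu_notification)
# The boot CPU is initialized directly in xen_guest_init(); secondary
# CPUs are initialized via the notifier as they come online.
for cpu in (1, 2, 3):
    chain.notify(CPU_STARTING, cpu)
```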

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Check earlier if the event IRQ is valid
        - We can safely register the VCPU when the cpu is booting
---
 arch/arm/xen/enlighten.c |   71 ++++++++++++++++++++++++++++------------------
 1 file changed, 44 insertions(+), 27 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 293eeea..b96723e 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -23,6 +23,7 @@
 #include <linux/of_address.h>
 #include <linux/cpuidle.h>
 #include <linux/cpufreq.h>
+#include <linux/cpu.h>
 
 #include <linux/mm.h>
 
@@ -154,7 +155,7 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
-static void __init xen_percpu_init(void *unused)
+static void xen_percpu_init(void)
 {
 	struct vcpu_register_vcpu_info info;
 	struct vcpu_info *vcpup;
@@ -193,6 +194,31 @@ static void xen_power_off(void)
 		BUG();
 }
 
+static int xen_cpu_notification(struct notifier_block *self,
+				unsigned long action,
+				void *hcpu)
+{
+	switch (action) {
+	case CPU_STARTING:
+		xen_percpu_init();
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block xen_cpu_notifier = {
+	.notifier_call = xen_cpu_notification,
+};
+
+static irqreturn_t xen_arm_callback(int irq, void *arg)
+{
+	xen_hvm_evtchn_do_upcall();
+	return IRQ_HANDLED;
+}
+
 /*
  * see Documentation/devicetree/bindings/arm/xen.txt for the
  * documentation of the Xen Device Tree format.
@@ -229,6 +255,10 @@ static int __init xen_guest_init(void)
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame=%pa\n",
 			version, xen_events_irq, &grant_frames);
+
+	if (xen_events_irq < 0)
+		return -ENODEV;
+
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -281,9 +311,21 @@ static int __init xen_guest_init(void)
 	disable_cpuidle();
 	disable_cpufreq();
 
+	xen_init_IRQ();
+
+	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
+			       "events", &xen_vcpu)) {
+		pr_err("Error requesting IRQ %d\n", xen_events_irq);
+		return -EINVAL;
+	}
+
+	xen_percpu_init();
+
+	register_cpu_notifier(&xen_cpu_notifier);
+
 	return 0;
 }
-core_initcall(xen_guest_init);
+early_initcall(xen_guest_init);
 
 static int __init xen_pm_init(void)
 {
@@ -297,31 +339,6 @@ static int __init xen_pm_init(void)
 }
 late_initcall(xen_pm_init);
 
-static irqreturn_t xen_arm_callback(int irq, void *arg)
-{
-	xen_hvm_evtchn_do_upcall();
-	return IRQ_HANDLED;
-}
-
-static int __init xen_init_events(void)
-{
-	if (!xen_domain() || xen_events_irq < 0)
-		return -ENODEV;
-
-	xen_init_IRQ();
-
-	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
-			"events", &xen_vcpu)) {
-		pr_err("Error requesting IRQ %d\n", xen_events_irq);
-		return -EINVAL;
-	}
-
-	on_each_cpu(xen_percpu_init, NULL, 0);
-
-	return 0;
-}
-postcore_initcall(xen_init_events);
-
 /* In the hypervisor.S file. */
 EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
 EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 18:54:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:54:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Dnw-0002EH-Gb; Tue, 28 Jan 2014 18:54:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8Dnv-0002EC-QT
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 18:54:36 +0000
Received: from [85.158.143.35:42794] by server-2.bemta-4.messagelabs.com id
	08/C4-11386-BECF7E25; Tue, 28 Jan 2014 18:54:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390935274!1443738!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32655 invoked from network); 28 Jan 2014 18:54:34 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 18:54:34 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so427486eaj.40
	for <xen-devel@lists.xenproject.org>;
	Tue, 28 Jan 2014 10:54:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=hJVBy/uQeS2r2AO3t2oVtmovOeG0ggHDK5uEaqWXP2c=;
	b=Bzr+2IwlLMZDy1uqYmvk6/9qlAT73ss65epLwQRb4jqGIj5UkidXAOUpeFLt//F/1k
	Mqieb5gu73AdKfZMEpLQznUBg3M+nsHypNYUpMiaBhE2MEB1+btpPf+WRqZwfNEym2wH
	giWJ6gH50lCm21801Ps9HcSuzfRX1nQmFGPQSWgJGro9Fm07DfXIQkd1y+Ky7R1saHfE
	Q0Hetpf6EbJVbSYatPuFBshSFstTXZHv8m1EFLqKKaUkxZGt7zogyBv2ARqnB2Sfarum
	OSqEIksiIAyt5L3OHNHaJrAkwasCP4hEN9YQAyFc2D4ZqRHDGm+CsyfUsEnaHYixIx9h
	cFqg==
X-Gm-Message-State: ALoCoQmDPmyWZ0KHr0h4k+ByuGrftawN31uwVS5mJ+Ea4D3KcxoFTI3RGTQSsP0JUzvcEdFhwIMm
X-Received: by 10.15.21.2 with SMTP id c2mr164989eeu.77.1390935273636;
	Tue, 28 Jan 2014 10:54:33 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	o13sm58852942eex.19.2014.01.28.10.54.30 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 28 Jan 2014 10:54:31 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Tue, 28 Jan 2014 18:54:23 +0000
Message-Id: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Julien Grall <julien.grall@linaro.org>, ian.campbell@citrix.com,
	patches@linaro.org, david.vrabel@citrix.com,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH v2] arm/xen: Initialize event channels earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The event channel driver needs to be initialized very early. Until now, Xen
initialization was done after all CPUs were brought up.

We can safely move the initialization to an early initcall.

Also use a CPU notifier to:
    - Register the VCPU when the CPU is prepared
    - Enable the event channel IRQ when the CPU is running
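
[Editor's note: the notifier flow described above can be modelled in plain C.
This is a minimal userspace sketch of the register/notify pattern, not kernel
code; the CPU_STARTING and NOTIFY_OK values and the notify_cpu_starting()
driver below are illustrative stand-ins for the <linux/cpu.h> machinery.]

```c
#include <assert.h>
#include <stddef.h>

#define CPU_STARTING 1
#define NOTIFY_OK    0

struct notifier_block {
    int (*notifier_call)(struct notifier_block *self,
                         unsigned long action, void *hcpu);
    struct notifier_block *next;
};

static struct notifier_block *chain;   /* singly linked notifier chain */
static int initialized_cpus;           /* stands in for per-CPU Xen state */

static void register_cpu_notifier(struct notifier_block *nb)
{
    nb->next = chain;
    chain = nb;
}

/* Invoked on each CPU as it comes online, analogous to the kernel
 * calling the chain with CPU_STARTING from the hotplug path. */
static void notify_cpu_starting(unsigned int cpu)
{
    for (struct notifier_block *nb = chain; nb; nb = nb->next)
        nb->notifier_call(nb, CPU_STARTING, &cpu);
}

static int xen_cpu_notification(struct notifier_block *self,
                                unsigned long action, void *hcpu)
{
    (void)self; (void)hcpu;
    if (action == CPU_STARTING)
        initialized_cpus++;            /* stands in for xen_percpu_init() */
    return NOTIFY_OK;
}

static struct notifier_block xen_cpu_notifier = {
    .notifier_call = xen_cpu_notification,
};
```

With this shape, the boot CPU initializes itself directly and every secondary
CPU is caught by the notifier as it starts, which is exactly why the patch can
drop the later on_each_cpu() pass.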

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Check earlier if the event IRQ is valid
        - We can safely register the VCPU when the cpu is booting
---
 arch/arm/xen/enlighten.c |   71 ++++++++++++++++++++++++++++------------------
 1 file changed, 44 insertions(+), 27 deletions(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 293eeea..b96723e 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -23,6 +23,7 @@
 #include <linux/of_address.h>
 #include <linux/cpuidle.h>
 #include <linux/cpufreq.h>
+#include <linux/cpu.h>
 
 #include <linux/mm.h>
 
@@ -154,7 +155,7 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
 
-static void __init xen_percpu_init(void *unused)
+static void xen_percpu_init(void)
 {
 	struct vcpu_register_vcpu_info info;
 	struct vcpu_info *vcpup;
@@ -193,6 +194,31 @@ static void xen_power_off(void)
 		BUG();
 }
 
+static int xen_cpu_notification(struct notifier_block *self,
+				unsigned long action,
+				void *hcpu)
+{
+	switch (action) {
+	case CPU_STARTING:
+		xen_percpu_init();
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block xen_cpu_notifier = {
+	.notifier_call = xen_cpu_notification,
+};
+
+static irqreturn_t xen_arm_callback(int irq, void *arg)
+{
+	xen_hvm_evtchn_do_upcall();
+	return IRQ_HANDLED;
+}
+
 /*
  * see Documentation/devicetree/bindings/arm/xen.txt for the
  * documentation of the Xen Device Tree format.
@@ -229,6 +255,10 @@ static int __init xen_guest_init(void)
 	xen_events_irq = irq_of_parse_and_map(node, 0);
 	pr_info("Xen %s support found, events_irq=%d gnttab_frame=%pa\n",
 			version, xen_events_irq, &grant_frames);
+
+	if (xen_events_irq < 0)
+		return -ENODEV;
+
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -281,9 +311,21 @@ static int __init xen_guest_init(void)
 	disable_cpuidle();
 	disable_cpufreq();
 
+	xen_init_IRQ();
+
+	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
+			       "events", &xen_vcpu)) {
+		pr_err("Error requesting IRQ %d\n", xen_events_irq);
+		return -EINVAL;
+	}
+
+	xen_percpu_init();
+
+	register_cpu_notifier(&xen_cpu_notifier);
+
 	return 0;
 }
-core_initcall(xen_guest_init);
+early_initcall(xen_guest_init);
 
 static int __init xen_pm_init(void)
 {
@@ -297,31 +339,6 @@ static int __init xen_pm_init(void)
 }
 late_initcall(xen_pm_init);
 
-static irqreturn_t xen_arm_callback(int irq, void *arg)
-{
-	xen_hvm_evtchn_do_upcall();
-	return IRQ_HANDLED;
-}
-
-static int __init xen_init_events(void)
-{
-	if (!xen_domain() || xen_events_irq < 0)
-		return -ENODEV;
-
-	xen_init_IRQ();
-
-	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
-			"events", &xen_vcpu)) {
-		pr_err("Error requesting IRQ %d\n", xen_events_irq);
-		return -EINVAL;
-	}
-
-	on_each_cpu(xen_percpu_init, NULL, 0);
-
-	return 0;
-}
-postcore_initcall(xen_init_events);
-
 /* In the hypervisor.S file. */
 EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
 EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Jan 28 18:57:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 18:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Dqf-0002Kz-5x; Tue, 28 Jan 2014 18:57:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8Dqd-0002Kt-AI
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 18:57:23 +0000
Received: from [85.158.143.35:58409] by server-1.bemta-4.messagelabs.com id
	0A/DD-02132-29DF7E25; Tue, 28 Jan 2014 18:57:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390935439!1450646!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26491 invoked from network); 28 Jan 2014 18:57:20 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 18:57:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SIuw2O016014
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 18:56:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0SIusf2020792
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 18:56:55 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SIusm6005830; Tue, 28 Jan 2014 18:56:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 10:56:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9475A1BFA73; Tue, 28 Jan 2014 13:56:52 -0500 (EST)
Date: Tue, 28 Jan 2014 13:56:52 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Santosh Shilimkar <santosh.shilimkar@ti.com>
Message-ID: <20140128185652.GA9362@phenom.dumpdata.com>
References: <1390590670-25901-1-git-send-email-yinghai@kernel.org>
	<1390590670-25901-4-git-send-email-yinghai@kernel.org>
	<CAOesGMh7+SzLTfyPomnB_Q_wft+RC+F3tx-Ow1TdSmHiSwrKcw@mail.gmail.com>
	<CAGa+x87jJep_TSG70wAr5M-Ce7soPyr4SBdPqyo2ENZ2DYcqkA@mail.gmail.com>
	<CAE9FiQU_vZo40cje_XJXSxuKW-JL6H-8rAazwOw1O-ewcV+C5g@mail.gmail.com>
	<52E7E776.30909@ti.com>
	<20140128182230.GO15937@n2100.arm.linux.org.uk>
	<52E7F8AC.9090909@ti.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E7F8AC.9090909@ti.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xensource.com, Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>, "Strashko,
	Grygorii" <grygorii.strashko@ti.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, Andrew Morton <akpm@linux-foundation.org>,
	Yinghai Lu <yinghai@kernel.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH 1/3] memblock,
	nobootmem: Add memblock_virt_alloc_low()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 01:36:28PM -0500, Santosh Shilimkar wrote:
> + Grygorii,
> On Tuesday 28 January 2014 01:22 PM, Russell King - ARM Linux wrote:
> > On Tue, Jan 28, 2014 at 12:23:02PM -0500, Santosh Shilimkar wrote:
> >> On Tuesday 28 January 2014 12:12 PM, Yinghai Lu wrote:
> >>> Index: linux-2.6/include/linux/bootmem.h
> >>> ===================================================================
> >>> --- linux-2.6.orig/include/linux/bootmem.h
> >>> +++ linux-2.6/include/linux/bootmem.h
> >>> @@ -179,6 +179,9 @@ static inline void * __init memblock_vir
> >>>                                                     NUMA_NO_NODE);
> >>>  }
> >>>
> >>> +/* Take arch's ARCH_LOW_ADDRESS_LIMIT at first*/
> >>> +#include <asm/processor.h>
> >>> +
> >>>  #ifndef ARCH_LOW_ADDRESS_LIMIT
> >>>  #define ARCH_LOW_ADDRESS_LIMIT  0xffffffffUL
> >>>  #endif
> >>
> >> This won't help much, since 32-bit ARM doesn't set ARCH_LOW_ADDRESS_LIMIT.
> >> Sorry I couldn't respond to the thread earlier because of travel, and I
> >> don't have access to my board to try out the patches.
> > 
> > Let's think about this for a moment, shall we...
> > 
> > What does memblock_alloc_virt*() return?  It returns a virtual address.
> > 
> > How is that virtual address obtained?  ptr = phys_to_virt(alloc);
> > 
> > What is the valid address range for passing into phys_to_virt() ?  Only
> > lowmem addresses.
> > 
> > Hence, having ARCH_LOW_ADDRESS_LIMIT set to 4GB-1 by default seems to be
> > completely ridiculous - and presumably this also fails on x86_32 if it
> > returns memory up at 4GB.
> > 
> > So... yes, I think reverting the arch/arm part of this patch is the right
> > solution, whether the rest of it should be reverted is something I can't
> > comment on.
> > 
> Grygorii mentioned an alternative: update memblock_find_in_range_node() so
> that it takes the limit into account.

This patch also breaks Xen and 32-bit guests (see
http://lists.xen.org/archives/html/xen-devel/2014-01/msg02476.html)

Reverting it fixes it.
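
[Editor's note: Russell's lowmem argument above can be sketched as a toy model.
The PAGE_OFFSET and lowmem-limit values here are illustrative assumptions, not
taken from any real configuration; the point is only that phys_to_virt() is
defined solely for direct-mapped (lowmem) physical addresses, so a default
allocation limit of 4GB-1 hands the allocator addresses it cannot convert.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET   0xC0000000UL  /* assumed 3G/1G 32-bit split */
#define LOWMEM_LIMIT  0x30000000UL  /* assumed amount of direct-mapped RAM */

static int phys_is_lowmem(uintptr_t phys)
{
    return phys < LOWMEM_LIMIT;
}

/* Returns the linear-map virtual address, or 0 when phys_to_virt()
 * would be invalid because the page lies above the lowmem limit. */
static uintptr_t phys_to_virt_checked(uintptr_t phys)
{
    if (!phys_is_lowmem(phys))
        return 0;                    /* highmem: no linear mapping exists */
    return phys + PAGE_OFFSET;
}
```

An allocator that respects ARCH_LOW_ADDRESS_LIMIT only ever returns addresses
for which the check above succeeds; one that defaults to 4GB-1 can return
addresses in the failing branch, which is the breakage being reported.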

> 
> Regards,
> Santosh 


From xen-devel-bounces@lists.xen.org Tue Jan 28 19:22:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EE9-0003O3-3V; Tue, 28 Jan 2014 19:21:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W8EE7-0003Ny-CQ
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 19:21:39 +0000
Received: from [85.158.143.35:32587] by server-2.bemta-4.messagelabs.com id
	F9/99-11386-24308E25; Tue, 28 Jan 2014 19:21:38 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390936897!1438729!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10813 invoked from network); 28 Jan 2014 19:21:37 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 19:21:37 -0000
Received: by mail-la0-f43.google.com with SMTP id pv20so755234lab.30
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 11:21:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=poJ/O50GKtZbNx4N20+5iNh0QBqoqrRdHXL51c5HFHY=;
	b=bie4EmyiRFjxEqNGktDMIHFLbZYKWcH8cDODyrIPRPMLPEoWIJe3Le/hBhYmS+oA9X
	sR5D1ZsylCNCsQPUrjv11W3/9UP2Pubi9FWmXSlJYXOyJowZKwNawLEjhel/S+evlgH/
	dhaFYLr1co2DW5i0S09ROCpiL2CrtWQxN8txehkNxYeeOlTQhyHRXZZnR1sWLUZnDW8j
	jVEcI7fe7vAe6/+fIbrHvDls2pdEqy5f/nWzRzY0xOEBd5jxElSZ2BKRzPma7G2Yyoaw
	92Lpv/8UD2ljik5RRQ8NIJTT6jdQrL56uzVdckdqblWA2YG6jZfSeZljxiTiwbkkiXn0
	Ae1w==
X-Received: by 10.112.22.196 with SMTP id g4mr1289099lbf.47.1390936897078;
	Tue, 28 Jan 2014 11:21:37 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id gi5sm18194316lbc.4.2014.01.28.11.21.35
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 11:21:36 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Tue, 28 Jan 2014 23:21:34 +0400
Message-Id: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I'm working on porting xen-4.3 to DilOS (an illumos-based platform).

I have problems loading a PV guest.
dom0 starts, and I can see info via 'xl info'.

First: I see platform ID=38, but I couldn't find it in xen/public/platform.h

Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38

Could you please let me know what the 38 platform hypercall is?

Next problem: when I try to create a new PV guest with 'xm create dilos.cfg',
I see requests in the dom0 logs to hypercall ID=38, which appears to be __HYPERVISOR_tmem_op.

Do I need to implement it first?

I get this error:
root@myhost:/xen# xm create dilos.cfg 
Using config file "./dilos.cfg".
Error: (1, 'Internal error', 'panic: xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages 0x0+0x826 [mmap, errno=6 (No such device or address)]')

Could you please let me know how to identify, from the Xen sources, what I need to implement/fix in my dom0 sources?

Also, how can I get more debug info?
I have output from Xen on the serial console and I can get statistics, but I have no further information about the problem with PV guest creation.

--
Best regards,
Igor Kozhukhov






From xen-devel-bounces@lists.xen.org Tue Jan 28 19:26:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EIM-0003Ve-Jv; Tue, 28 Jan 2014 19:26:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8EIK-0003VZ-5g
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 19:26:00 +0000
Received: from [85.158.143.35:29244] by server-2.bemta-4.messagelabs.com id
	9B/BC-11386-74408E25; Tue, 28 Jan 2014 19:25:59 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390937154!1452874!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=1.6 required=7.0 tests=HOT_NASTY,HTML_30_40,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22155 invoked from network); 28 Jan 2014 19:25:56 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 19:25:56 -0000
Received: from mail-ve0-f170.google.com ([209.85.128.170]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUugEQpHAv0kMqvzYKkk0a5tNtgLTwNhH@postini.com;
	Tue, 28 Jan 2014 11:25:56 PST
Received: by mail-ve0-f170.google.com with SMTP id cz12so577344veb.1
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 11:25:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LxYYGQsYzXNL8AZnTOhNs6XwYxPbXJN711vmAccPNuY=;
	b=NeZGvK/JuOmY94fErOY6QyUazpP2wNtyEQ/CAiMZ3Rc7u3FkXBdt3xkROxKH8ksQTd
	40LR2aBRpjiy26EAfsyzrjmg70/GDN6LfRjDY98tpN4MklGXmRqrG8mPe7jrzVB5IET9
	GsWjT/9pjAA1is0jKZUwU3GmDY8IQ6NRMJAtc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=LxYYGQsYzXNL8AZnTOhNs6XwYxPbXJN711vmAccPNuY=;
	b=CSAEPb5psargH9HjptpQSQjXY9yogVvGE+0jm9ov8o59HZH3VXTmXRnfCTnZCQlaKp
	wuRL7uF8AcBq6Myo7qJVYU8Uyia/2MnoEnVEdPjJbVJDqEiMEWvxB1lh6dTXRLtbYX+R
	TWNqmgVyPFvVkfga8H9JT60RH12qU5CaXYoZQC2yvRn6vh/zf4saUZcV3AXEAdoooMMZ
	vB3r82xqcOdBGHbUKixVJmblSB45hBanPS+BmEESz/inn+jEdDAA0Z9Ajt72z6O3pCQ3
	S0utiHBsSv+RMpZJpgTc1gtxwU41mhT14iMW6FSrTym9e0Vu8G09tuKxDK7L5zmcNuYJ
	ngKQ==
X-Gm-Message-State: ALoCoQm4n4R5AsfwJsXoAuireGih6VjbX7i5KOGbwlmYklGuCiM2LaecsvrclMUc80b2oQxlSNg5OoClNDCWbbY2YhenTeZwf7aHplnjJUxgXXqMg2GzVR49/E3evlRWPicgOjsUv9M+/a6xI5TNM+J32lGP869lTEKbpDiRM4Pa5vhFqh1dXGg=
X-Received: by 10.220.159.4 with SMTP id h4mr2462894vcx.1.1390937153691;
	Tue, 28 Jan 2014 11:25:53 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.159.4 with SMTP id h4mr2462865vcx.1.1390937153317; Tue,
	28 Jan 2014 11:25:53 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Tue, 28 Jan 2014 11:25:53 -0800 (PST)
In-Reply-To: <52E69CBC.3090207@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
Date: Tue, 28 Jan 2014 21:25:53 +0200
Message-ID: <CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix

Hello Julien,

Please see inline

> gic_irq_eoi is only called for physical IRQ routed to the guest (eg:
> hard drive, network, ...). As far as I remember, these IRQs are only
> routed to CPU0.


I understand.

But I have created a debug patch to show the issue:

diff --git a/xen/common/smp.c b/xen/common/smp.c
index 46d2fc6..6123561 100644
--- a/xen/common/smp.c
+++ b/xen/common/smp.c
@@ -22,6 +22,8 @@
 #include <xen/smp.h>
 #include <xen/errno.h>

+int locked = 0;
+
 /*
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
@@ -53,11 +55,19 @@ void on_selected_cpus(
 {
     unsigned int nr_cpus;

+    locked = 0;
+
     ASSERT(local_irq_is_enabled());

     if (!spin_trylock(&call_lock)) {
+
+    locked = 1;
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+                 cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
+
         if (smp_call_function_interrupt())
             return;
+
         spin_lock(&call_lock);
     }

@@ -78,6 +88,10 @@ void on_selected_cpus(

 out:
     spin_unlock(&call_lock);
+
+    if (locked)
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
 }

 int smp_call_function_interrupt(void)
@@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
     void *info = call_data.info;
     unsigned int cpu = smp_processor_id();

+     if (locked)
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+            cpumask_of(smp_processor_id())->bits[0], call_data.selected.bits[0]);
+
     if ( !cpumask_test_cpu(cpu, &call_data.selected) )
         return -EPERM;

Our issue (simultaneous cross-interrupts) occurred while booting domU:

[    7.507812] oom_adj 2 => oom_score_adj 117
[    7.507812] oom_adj 4 => oom_score_adj 235
[    7.507812] oom_adj 9 => oom_score_adj 529
[    7.507812] oom_adj 15 => oom_score_adj 1000
[    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching index 0 [0, ]
(XEN)
(XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000002
(XEN)
(XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001, cpu_mask_sel: 00000002
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000000
[   11.023437] usbcore: registered new interface driver usbfs
[   11.023437] usbcore: registered new interface driver hub
[   11.023437] usbcore: registered new device driver usb
[   11.039062] usbcore: registered new interface driver usbhid
[   11.039062] usbhid: USB HID core driver


> Do you pass-through PPIs to dom0?
>

If I understand correctly, PPIs are IRQs 16 to 31.
So yes, I do. I see the timer IRQs and the maintenance IRQ, which are routed
to both CPUs.

And I have printed all IRQs that pass through the gic_route_irq_to_guest and
gic_route_irq functions.
...
(XEN) GIC initialization:
(XEN)         gic_dist_addr=0000000048211000
(XEN)         gic_cpu_addr=0000000048212000
(XEN)         gic_hyp_addr=0000000048214000
(XEN)         gic_vcpu_addr=0000000048216000
(XEN)         gic_maintenance_irq=25
(XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
(XEN)
(XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
(XEN) Bringing up CPU1
(XEN)
(XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
(XEN) CPU 1 booted.
(XEN) Brought up 2 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
(XEN) Loading kernel from boot module 2
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
(XEN) Loading zImage from 00000000c0000040 to 00000000c8008000-00000000c8304eb0
(XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 252kB init memory.
[    0.000000] /cpus/cpu@0 missing clock-frequency property
[    0.000000] /cpus/cpu@1 missing clock-frequency property
[    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
[    0.265625] ahci ahci.0.auto: can't get clock
[    0.867187] Freeing init memory: 224K
Parsing config from /xen/images/DomUAndroid.cfg
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
Daemon running with PID 569
...

>
> --
> Julien Grall
>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com
 <http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt



(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 140, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 141, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 49, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 54, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 55, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 144, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 32, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 33, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 34, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 35, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 36, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 39, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 43, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 52, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 59, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 120, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 90, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 107, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 119, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 121, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 122, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 129, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 130, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 151, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 154, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 155, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 156, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 160, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 162, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 163, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 157, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 173, cpu: 1<br>Daemon running with PID 569<br>...</font><b=
r>

</div><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;b=
order-left:1px solid rgb(204,204,204);padding-left:1ex">
<span><font color=3D"#888888"><br>
--<br>
Julien Grall<br>
</font></span></blockquote></div><br><br clear=3D"all"><br>-- <br><font siz=
e=3D"-1"><br><span style=3D"vertical-align:baseline;font-variant:normal;fon=
t-style:normal;font-size:12px;background-color:transparent;text-decoration:=
none;font-family:Arial;font-weight:bold">Name | Title</span><br>



<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span><br><span style=3D"vertical=
-align:baseline;font-variant:normal;font-style:normal;font-size:12px;backgr=
ound-color:transparent;text-decoration:none;font-family:Arial;font-weight:n=
ormal">P +x.xxx.xxx.xxxx=A0=A0M +x.xxx.xxx.xxxx=A0=A0S=A0skype</span><br>



<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:rgb(17,85,204);background-color:transp=
arent;font-weight:normal;font-style:normal;font-variant:normal;text-decorat=
ion:underline;vertical-align:baseline">www.globallogic.com</span></a><span =
style=3D"vertical-align:baseline;font-variant:normal;font-style:normal;font=
-size:12px;background-color:transparent;text-decoration:none;font-family:Ar=
ial;font-weight:normal"></span><br>



<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:rgb(17,85,204);background-color:transp=
arent;font-weight:normal;font-style:normal;font-variant:normal;text-decorat=
ion:underline;vertical-align:baseline"></span></a><br>



<a href=3D"http://www.globallogic.com/email_disclaimer.txt" target=3D"_blan=
k"><span style=3D"font-size:11px;font-family:Arial;color:rgb(17,85,204);bac=
kground-color:transparent;font-weight:normal;font-style:normal;font-variant=
:normal;text-decoration:underline;vertical-align:baseline">http://www.globa=
llogic.com/email_disclaimer.txt</span></a><span style=3D"vertical-align:bas=
eline;font-variant:normal;font-style:normal;font-size:11px;background-color=
:transparent;text-decoration:none;font-family:Arial;font-weight:normal"></s=
pan></font>
</div></div></div>

--001a11c2ca0cedb91a04f10ccafb--


--===============4318767929090731965==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4318767929090731965==--


From xen-devel-bounces@lists.xen.org Tue Jan 28 19:26:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EIM-0003Ve-Jv; Tue, 28 Jan 2014 19:26:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8EIK-0003VZ-5g
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 19:26:00 +0000
Received: from [85.158.143.35:29244] by server-2.bemta-4.messagelabs.com id
	9B/BC-11386-74408E25; Tue, 28 Jan 2014 19:25:59 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390937154!1452874!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=1.6 required=7.0 tests=HOT_NASTY,HTML_30_40,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22155 invoked from network); 28 Jan 2014 19:25:56 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 19:25:56 -0000
Received: from mail-ve0-f170.google.com ([209.85.128.170]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUugEQpHAv0kMqvzYKkk0a5tNtgLTwNhH@postini.com;
	Tue, 28 Jan 2014 11:25:56 PST
Received: by mail-ve0-f170.google.com with SMTP id cz12so577344veb.1
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 11:25:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LxYYGQsYzXNL8AZnTOhNs6XwYxPbXJN711vmAccPNuY=;
	b=NeZGvK/JuOmY94fErOY6QyUazpP2wNtyEQ/CAiMZ3Rc7u3FkXBdt3xkROxKH8ksQTd
	40LR2aBRpjiy26EAfsyzrjmg70/GDN6LfRjDY98tpN4MklGXmRqrG8mPe7jrzVB5IET9
	GsWjT/9pjAA1is0jKZUwU3GmDY8IQ6NRMJAtc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=LxYYGQsYzXNL8AZnTOhNs6XwYxPbXJN711vmAccPNuY=;
	b=CSAEPb5psargH9HjptpQSQjXY9yogVvGE+0jm9ov8o59HZH3VXTmXRnfCTnZCQlaKp
	wuRL7uF8AcBq6Myo7qJVYU8Uyia/2MnoEnVEdPjJbVJDqEiMEWvxB1lh6dTXRLtbYX+R
	TWNqmgVyPFvVkfga8H9JT60RH12qU5CaXYoZQC2yvRn6vh/zf4saUZcV3AXEAdoooMMZ
	vB3r82xqcOdBGHbUKixVJmblSB45hBanPS+BmEESz/inn+jEdDAA0Z9Ajt72z6O3pCQ3
	S0utiHBsSv+RMpZJpgTc1gtxwU41mhT14iMW6FSrTym9e0Vu8G09tuKxDK7L5zmcNuYJ
	ngKQ==
X-Gm-Message-State: ALoCoQm4n4R5AsfwJsXoAuireGih6VjbX7i5KOGbwlmYklGuCiM2LaecsvrclMUc80b2oQxlSNg5OoClNDCWbbY2YhenTeZwf7aHplnjJUxgXXqMg2GzVR49/E3evlRWPicgOjsUv9M+/a6xI5TNM+J32lGP869lTEKbpDiRM4Pa5vhFqh1dXGg=
X-Received: by 10.220.159.4 with SMTP id h4mr2462894vcx.1.1390937153691;
	Tue, 28 Jan 2014 11:25:53 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.159.4 with SMTP id h4mr2462865vcx.1.1390937153317; Tue,
	28 Jan 2014 11:25:53 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Tue, 28 Jan 2014 11:25:53 -0800 (PST)
In-Reply-To: <52E69CBC.3090207@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
Date: Tue, 28 Jan 2014 21:25:53 +0200
Message-ID: <CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4318767929090731965=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4318767929090731965==
Content-Type: multipart/alternative; boundary=001a11c2ca0cedb91a04f10ccafb

--001a11c2ca0cedb91a04f10ccafb
Content-Type: text/plain; charset=ISO-8859-1

Hello Julien,

Please see inline

> gic_irq_eoi is only called for physical IRQ routed to the guest (eg:
> hard drive, network, ...). As far as I remember, these IRQs are only
> routed to CPU0.


I understand.

But I have created a debug patch to show the issue:

diff --git a/xen/common/smp.c b/xen/common/smp.c
index 46d2fc6..6123561 100644
--- a/xen/common/smp.c
+++ b/xen/common/smp.c
@@ -22,6 +22,8 @@
 #include <xen/smp.h>
 #include <xen/errno.h>

+int locked = 0;
+
 /*
  * Structure and data for smp_call_function()/on_selected_cpus().
  */
@@ -53,11 +55,19 @@ void on_selected_cpus(
 {
     unsigned int nr_cpus;

+    locked = 0;
+
     ASSERT(local_irq_is_enabled());

     if (!spin_trylock(&call_lock)) {
+
+    locked = 1;
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+                 cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
+
         if (smp_call_function_interrupt())
             return;
+
         spin_lock(&call_lock);
     }

@@ -78,6 +88,10 @@ void on_selected_cpus(

 out:
     spin_unlock(&call_lock);
+
+    if (locked)
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
 }

 int smp_call_function_interrupt(void)
@@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
     void *info = call_data.info;
     unsigned int cpu = smp_processor_id();

+     if (locked)
+        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
+            cpumask_of(smp_processor_id())->bits[0], call_data.selected.bits[0]);
+
     if ( !cpumask_test_cpu(cpu, &call_data.selected) )
         return -EPERM;

Our issue (simultaneous cross-interrupts) occurred during domU boot:

[    7.507812] oom_adj 2 => oom_score_adj 117
[    7.507812] oom_adj 4 => oom_score_adj 235
[    7.507812] oom_adj 9 => oom_score_adj 529
[    7.507812] oom_adj 15 => oom_score_adj 1000
[    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching index 0 [0, ]
(XEN)
(XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000002
(XEN)
(XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001, cpu_mask_sel: 00000002
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
(XEN)
(XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000000
[   11.023437] usbcore: registered new interface driver usbfs
[   11.023437] usbcore: registered new interface driver hub
[   11.023437] usbcore: registered new device driver usb
[   11.039062] usbcore: registered new interface driver usbhid
[   11.039062] usbhid: USB HID core driver


> Do you pass-through PPIs to dom0?
>

If I understand correctly, PPIs are IRQs 16 to 31.
So yes, I do. I see the timer IRQs and the maintenance IRQ, which are routed
to both CPUs.

And I have printed all IRQs that pass through the gic_route_irq_to_guest and
gic_route_irq functions.
...
(XEN) GIC initialization:
(XEN)         gic_dist_addr=0000000048211000
(XEN)         gic_cpu_addr=0000000048212000
(XEN)         gic_hyp_addr=0000000048214000
(XEN)         gic_vcpu_addr=0000000048216000
(XEN)         gic_maintenance_irq=25
(XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
(XEN)
(XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
(XEN)
(XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
(XEN) Bringing up CPU1
(XEN)
(XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
(XEN)
(XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
(XEN) CPU 1 booted.
(XEN) Brought up 2 CPUs
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
(XEN) Loading kernel from boot module 2
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
(XEN) Loading zImage from 00000000c0000040 to 00000000c8008000-00000000c8304eb0
(XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 252kB init memory.
[    0.000000] /cpus/cpu@0 missing clock-frequency property
[    0.000000] /cpus/cpu@1 missing clock-frequency property
[    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
[    0.265625] ahci ahci.0.auto: can't get clock
[    0.867187] Freeing init memory: 224K
Parsing config from /xen/images/DomUAndroid.cfg
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
(XEN)
(XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
Daemon running with PID 569
...

>
> --
> Julien Grall
>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com
http://www.globallogic.com/email_disclaimer.txt

0002<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq: irq: 26, cpu_ma=
sk: 00000002<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq: irq: 27=
, cpu_mask: 00000002<br>


(XEN) CPU 1 booted.<br>(XEN) Brought up 2 CPUs<br>(XEN) *** LOADING DOMAIN =
0 ***<br>(XEN) Populate P2M 0xc8000000-&gt;0xd0000000 (1:1 mapping for dom0=
)<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0,=
 irq: 61, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 62, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 63, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 64, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 66, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 67, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 153, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 105, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 0, irq: 106, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 0, irq: 102, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 137, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 0, irq: 138, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 0, irq: 113, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 69, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 70, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 71, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 72, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 73, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 74, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 75, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 76, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 77, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 78, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 79, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 112, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 145, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 0, irq: 158, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 0, irq: 86, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 82, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 83, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 0, irq: 84, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 85, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 0, irq: 187, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 0, irq: 186, cpu: 0<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 188, cpu: 0<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 0, irq: 189, cpu: 0<br>(XEN) Loading kernel from boot module 2<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 0, irq:=
 57, cpu: 0<br>(XEN) Loading zImage from 00000000c0000040 to 00000000c80080=
00-00000000c8304eb0<br>(XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000=
000cfe03978<br>


(XEN) Std. Loglevel: All<br>(XEN) Guest Loglevel: All<br>(XEN) *** Serial i=
nput -&gt; DOM0 (type &#39;CTRL-a&#39; three times to switch input to Xen)<=
br>(XEN) Freed 252kB init memory.<br>[=A0=A0=A0 0.000000] /cpus/cpu@0 missi=
ng clock-frequency property<br>


[=A0=A0=A0 0.000000] /cpus/cpu@1 missing clock-frequency property<br>[=A0=
=A0=A0 0.093750] omap_l3_noc ocp.2: couldn&#39;t find resource 2<br>[=A0=A0=
=A0 0.265625] ahci ahci.0.auto: can&#39;t get clock<br>[=A0=A0=A0 0.867187]=
 Freeing init memory: 224K<br>


Parsing config from /xen/images/DomUAndroid.cfg<br>(XEN) <br>(XEN) &gt;&gt;=
&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1<br>(XEN) <b=
r>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq: 61, cpu=
: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 62, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 63, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 64, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 65, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 66, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 67, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 153, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 69, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 70, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 71, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 72, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 73, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 74, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 75, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 76, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 77, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 78, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 79, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 102, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 137, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 138, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 88, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 89, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 93, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 94, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 92, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 152, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 97, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 98, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 123, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 80, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 115, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 118, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 126, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 128, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 91, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 41, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 42, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 48, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 131, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 44, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 45, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 46, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 47, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 40, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 158, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 146, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 60, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 85, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 87, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 133, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 142, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 143, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 53, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 164, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 51, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 134, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 50, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 108, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 109, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 124, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 125, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 110, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 112, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 68, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 101, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 99, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 100, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 103, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 132, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 56, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 135, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 136, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 139, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 58, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 140, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 141, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 49, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 54, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 55, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 144, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 32, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 33, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 34, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 35, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 36, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 39, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 43, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest:=
 domid: 1, irq: 52, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rout=
e_irq_to_guest: domid: 1, irq: 59, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 120, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 90, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_rou=
te_irq_to_guest: domid: 1, irq: 107, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 119, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 121, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 122, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 129, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 130, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 151, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 154, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 155, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 156, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 160, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 162, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_ro=
ute_irq_to_guest: domid: 1, irq: 163, cpu: 1<br>


(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest: domid: 1, irq:=
 157, cpu: 1<br>(XEN) <br>(XEN) &gt;&gt;&gt;&gt;&gt; gic_route_irq_to_guest=
: domid: 1, irq: 173, cpu: 1<br>Daemon running with PID 569<br>...</font><b=
r>

</div><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;b=
order-left:1px solid rgb(204,204,204);padding-left:1ex">
<span><font color=3D"#888888"><br>
--<br>
Julien Grall<br>
</font></span></blockquote></div><br><br clear=3D"all"><br>-- <br><font siz=
e=3D"-1"><br><span style=3D"vertical-align:baseline;font-variant:normal;fon=
t-style:normal;font-size:12px;background-color:transparent;text-decoration:=
none;font-family:Arial;font-weight:bold">Name | Title</span><br>



<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span><br><span style=3D"vertical=
-align:baseline;font-variant:normal;font-style:normal;font-size:12px;backgr=
ound-color:transparent;text-decoration:none;font-family:Arial;font-weight:n=
ormal">P +x.xxx.xxx.xxxx=A0=A0M +x.xxx.xxx.xxxx=A0=A0S=A0skype</span><br>



<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:rgb(17,85,204);background-color:transp=
arent;font-weight:normal;font-style:normal;font-variant:normal;text-decorat=
ion:underline;vertical-align:baseline">www.globallogic.com</span></a><span =
style=3D"vertical-align:baseline;font-variant:normal;font-style:normal;font=
-size:12px;background-color:transparent;text-decoration:none;font-family:Ar=
ial;font-weight:normal"></span><br>



<a href=3D"http://www.globallogic.com/" target=3D"_blank"><span style=3D"fo=
nt-size:12px;font-family:Arial;color:rgb(17,85,204);background-color:transp=
arent;font-weight:normal;font-style:normal;font-variant:normal;text-decorat=
ion:underline;vertical-align:baseline"></span></a><br>



<a href=3D"http://www.globallogic.com/email_disclaimer.txt" target=3D"_blan=
k"><span style=3D"font-size:11px;font-family:Arial;color:rgb(17,85,204);bac=
kground-color:transparent;font-weight:normal;font-style:normal;font-variant=
:normal;text-decoration:underline;vertical-align:baseline">http://www.globa=
llogic.com/email_disclaimer.txt</span></a><span style=3D"vertical-align:bas=
eline;font-variant:normal;font-style:normal;font-size:11px;background-color=
:transparent;text-decoration:none;font-family:Arial;font-weight:normal"></s=
pan></font>
</div></div></div>

--001a11c2ca0cedb91a04f10ccafb--


--===============4318767929090731965==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4318767929090731965==--


From xen-devel-bounces@lists.xen.org Tue Jan 28 19:34:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EQi-0003y5-5J; Tue, 28 Jan 2014 19:34:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8EQg-0003xx-J3
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 19:34:38 +0000
Received: from [85.158.143.35:12515] by server-2.bemta-4.messagelabs.com id
	CF/E2-11386-D4608E25; Tue, 28 Jan 2014 19:34:37 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1390937675!1456053!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27320 invoked from network); 28 Jan 2014 19:34:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 19:34:36 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SJYXQs028907
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 19:34:33 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0SJYWt4028023
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 19:34:32 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SJYWc3027993; Tue, 28 Jan 2014 19:34:32 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 11:34:31 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 11D491BFA73; Tue, 28 Jan 2014 14:34:31 -0500 (EST)
Date: Tue, 28 Jan 2014 14:34:31 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Message-ID: <20140128193430.GB9842@phenom.dumpdata.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
> Hello All,
> 
> I'm working on porting xen-4.3 to DilOS (an illumos-based platform).
> 
> I have problems with PV guest load:
> dom0 starts, and I can see info via 'xl info'.
> 
> First: I see platform op ID=38, but I couldn't find it in xen/public/platform.h.
> 
> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
> 
> Could you please let me know: what is hypercall 38?

tmem_op
> 
> Next problem: I try to create a new PV guest with 'xm create dilos.cfg'.
> In the dom0 logs I see requests to hypercall ID=38; I see it is __HYPERVISOR_tmem_op.

You sure it is dom0? Not the guest?

> 
> Do I need to implement it first?

No. But you should have stub functions in your hypercall page to at least
return -ENOSYS for everything you don't implement.

How do you construct your hyperpage?
> 
> I get this error:
> root@myhost:/xen# xm create dilos.cfg 
> Using config file "./dilos.cfg".
> Error: (1, 'Internal error', 'panic: xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages 0x0+0x826 [mmap, errno=6 (No such device or address)]')
> 
> Could you please let me know how to identify, from the Xen sources, what I need to implement/fix in my dom0 sources?
> 
> Also, how can I get more debug info?
> I have Xen output on the serial console and can get statistics, but no further information about the problem with PV guest creation.
> 
> --
> Best regards,
> Igor Kozhukhov
> 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Jan 28 19:38:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EUg-00048D-07; Tue, 28 Jan 2014 19:38:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1W8EUe-000487-DG
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 19:38:44 +0000
Received: from [193.109.254.147:20740] by server-7.bemta-14.messagelabs.com id
	EA/97-15500-34708E25; Tue, 28 Jan 2014 19:38:43 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390937921!433816!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15190 invoked from network); 28 Jan 2014 19:38:42 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Jan 2014 19:38:42 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s0SJcbDW032156
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Tue, 28 Jan 2014 14:38:37 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s0SJcbuf032154;
	Tue, 28 Jan 2014 14:38:37 -0500
Date: Tue, 28 Jan 2014 15:38:37 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Roger Pau Monne <roger.pau@citrix.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140128193837.GA32072@andromeda.dapyr.net>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
> blkback bug fixes for memory leaks (patches 1 and 2) and a race 
> (patch 3).

They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
branch and am testing them now (haven't pushed it yet).

Matt and Matt,

Could you take a look at the other two patches as well?

David, Boris,

Are you OK with pushing those patches out to Jens Axboe if nobody
gives a NACK by Friday?

> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 19:41:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EXX-0004UJ-Ov; Tue, 28 Jan 2014 19:41:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W8EXW-0004U0-10; Tue, 28 Jan 2014 19:41:42 +0000
Received: from [85.158.139.211:31548] by server-13.bemta-5.messagelabs.com id
	3D/DB-11357-5F708E25; Tue, 28 Jan 2014 19:41:41 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390938100!214129!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23573 invoked from network); 28 Jan 2014 19:41:40 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 19:41:40 -0000
Received: by mail-lb0-f175.google.com with SMTP id p9so741474lbv.6
	for <multiple recipients>; Tue, 28 Jan 2014 11:41:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=Mv+r0FlS87lKIyw8LkLOBsLqUVF5h5qOsM2brwqdlWM=;
	b=O1TxlKd2PsfuLM/82R7Vmg5/NTsf2xhh+RDXZyr+IgEWwNBKqryc51TMC7fT372h7u
	5wzcqwbKkBKvvasePbaH0BKP97BodKeDn0TdvX0iWjSad89fxz+I1fPjTH9Mel13t3uE
	1pSqSGT2RGmnCt/Lt4Dx4dr9ly2yt+RrscKPgSh6J14aK5q6jCaXRr2WtY4/GStV/0Cx
	m4J2U0sJtD1BKJDmO7iKv4E0wO4o/MzesGnGviYTVc7rVZ6lZKMNQ9dMRWK/7vvrBWar
	ZncDf50dkDBmEdX263XnkPWTTdskGCayuOYG+hsRmAMIDHijY30UF/CRDskgHI5tiuej
	7L9g==
MIME-Version: 1.0
X-Received: by 10.112.242.134 with SMTP id wq6mr2023624lbc.24.1390938099762;
	Tue, 28 Jan 2014 11:41:39 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Tue, 28 Jan 2014 11:41:39 -0800 (PST)
Date: Tue, 28 Jan 2014 14:41:39 -0500
X-Google-Sender-Auth: hrSngXopcGAG5SM2Ip-mwZ1fMy4
Message-ID: <CAHehzX1mNGdHqnoHOKsLT9HXXbgJFuYP34Yrr_yY3ma1=nxPCw@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org, xen-api@lists.xen.org, 
	cl-mirage@lists.cam.ac.uk, xs-devel@lists.xenserver.org
Subject: [Xen-devel] Thank you for participating in Xen Project Document Day!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thank you to everyone who pitched in during the Xen Project Document
Day on Monday.

Special thanks to Mneilsen and Ijc for the amount of work they
performed during the day.  You've helped move our project forward in
the area of documentation!

Our next Document Day is scheduled for February 24.  In the meantime,
if you see pages which need improvements, please add them to:

http://wiki.xenproject.org/wiki/Xen_Document_Days/TODO

I hope to see you all in #xendocs next month!

Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 19:45:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8EbY-0004oz-CG; Tue, 28 Jan 2014 19:45:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W8EbX-0004or-8a
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 19:45:51 +0000
Received: from [85.158.143.35:39285] by server-1.bemta-4.messagelabs.com id
	01/61-02132-EE808E25; Tue, 28 Jan 2014 19:45:50 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390938349!1456949!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6251 invoked from network); 28 Jan 2014 19:45:49 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 19:45:49 -0000
Received: by mail-la0-f46.google.com with SMTP id b8so766658lan.19
	for <xen-devel@lists.xen.org>; Tue, 28 Jan 2014 11:45:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=STeFjBT75ZkZitbHxT4U63jvO1pyC92O0bVmErDM02Y=;
	b=WOzp3hAoG9B52cQipLgM/7hJxw7diozJKD7R+GJ+UNCEXOHtElMiIMKzySK2PClLev
	B8ieGTMovgkmn0y1EvElQDlwjpMy8wHSkJfuucIFIAfmYOm1Jd+LwUyXYWn0vallip2V
	McGJ41blIXyhyoPgVdDwOYLRmgHZHCZBiul2Wtd3hWqT3Gbi17wquUMA0U5pDS3jynxF
	QOdgSBBOmw28/5v0Q/UYVP/S3mi5pdgPD/ZmC1LYJPoeaJCTZYidVVTlnCab6XtqscZQ
	D27lot66Nxk6kAO0I6EMgFoPnAvWTQCy7KAXRdjIFkpHhwHXWpRXbSRsGSbEG3bvgwZi
	to9w==
X-Received: by 10.112.172.8 with SMTP id ay8mr1974856lbc.41.1390938348990;
	Tue, 28 Jan 2014 11:45:48 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44]) by mx.google.com with ESMTPSA id
	mx3sm18239253lbc.14.2014.01.28.11.45.47 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 11:45:48 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <20140128193430.GB9842@phenom.dumpdata.com>
Date: Tue, 28 Jan 2014 23:45:46 +0400
Message-Id: <BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 28, 2014, at 11:34 PM, Konrad Rzeszutek Wilk wrote:

> On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
>> Hello All,
>> 
>> i'm working on port xen-4.3 to DilOS (illumos based platform).
>> 
>> i have problems with PV guest load.
>> dom0 started, i can see info by 'xl info'.
>> 
>> first: i see platform ID=38, but i couldn't found it in xen/public/platform.h
>> 
>> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
>> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
>> 
>> could you please let me know - what is it the 38 platform hypercall ?
> 
> tmem_op

tmem_op is defined in xen/public/xen.h, but ID 38 is not defined in xen/public/platform.h.

>> 
>> next problem: i try to create a new PV guest by 'xm create dilos.cfg'
>> i see on dom0 logs requests to hypercall ID=38 - i see it is __HYPERVISOR_tmem_op
> 
> You sure it is dom0? Not the guest?
Yes - it is in the dom0 log.

> 
>> 
>> do i need implement it first ?
> 
> No. But you should have stub functions in your hypercall page to at least
> return -ENOSYS for everything you don't implement.

Based on the current code I see:
return -X_EINVAL;
Will it be correct to return this if the ID is not implemented?


> How do you construct your hyperpage?
>> 

An example is here: https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/i86xpv/io/privcmd_hcall.c (from line 379).


>> i have error with :
>> root@myhost:/xen# xm create dilos.cfg 
>> Using config file "./dilos.cfg".
>> Error: (1, 'Internal error', 'panic: xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages 0x0+0x826 [mmap, errno=6 (No such device or address)]')
>> 
>> could you please let me know - how to identify by Xen sources - what i need implement/fix on my dom0 sources ? 
>> 
>> maybe - how to get more debug info ?
>> i have output from Xen to serial console and i can get statistic, but have no more information about problem with PV guest creation.
>> 
>> --
>> Best regards,
>> Igor Kozhukhov
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 19:55:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ekm-0005On-My; Tue, 28 Jan 2014 19:55:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Ekl-0005Oi-Gg
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 19:55:23 +0000
Received: from [85.158.143.35:49229] by server-1.bemta-4.messagelabs.com id
	C5/77-02132-A2B08E25; Tue, 28 Jan 2014 19:55:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390938920!1444329!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20506 invoked from network); 28 Jan 2014 19:55:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 19:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="95416572"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 19:55:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 14:55:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8Ekh-0003UD-0F;
	Tue, 28 Jan 2014 19:55:19 +0000
Date: Tue, 28 Jan 2014 19:55:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1401281954560.4373@kaball.uk.xensource.com>
References: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	patches@linaro.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2] arm/xen: Initialize event channels
	earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Julien Grall wrote:
> Event channels driver needs to be initialized very early. Until now, Xen
> initialization was done after all CPUs was bring up.
> 
> We can safely move the initialization to an early initcall.
> 
> Also use a cpu notifier to:
>     - Register the VCPU when the CPU is prepared
>     - Enable event channel IRQ when the CPU is running
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> ---
>     Changes in v2:
>         - Check earlier if the event IRQ is valid
>         - We can safely register the VCPU when the cpu is booting
> ---
>  arch/arm/xen/enlighten.c |   71 ++++++++++++++++++++++++++++------------------
>  1 file changed, 44 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 293eeea..b96723e 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -23,6 +23,7 @@
>  #include <linux/of_address.h>
>  #include <linux/cpuidle.h>
>  #include <linux/cpufreq.h>
> +#include <linux/cpu.h>
>  
>  #include <linux/mm.h>
>  
> @@ -154,7 +155,7 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  }
>  EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
>  
> -static void __init xen_percpu_init(void *unused)
> +static void xen_percpu_init(void)
>  {
>  	struct vcpu_register_vcpu_info info;
>  	struct vcpu_info *vcpup;
> @@ -193,6 +194,31 @@ static void xen_power_off(void)
>  		BUG();
>  }
>  
> +static int xen_cpu_notification(struct notifier_block *self,
> +				unsigned long action,
> +				void *hcpu)
> +{
> +	switch (action) {
> +	case CPU_STARTING:
> +		xen_percpu_init();
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block xen_cpu_notifier = {
> +	.notifier_call = xen_cpu_notification,
> +};
> +
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return IRQ_HANDLED;
> +}
> +
>  /*
>   * see Documentation/devicetree/bindings/arm/xen.txt for the
>   * documentation of the Xen Device Tree format.
> @@ -229,6 +255,10 @@ static int __init xen_guest_init(void)
>  	xen_events_irq = irq_of_parse_and_map(node, 0);
>  	pr_info("Xen %s support found, events_irq=%d gnttab_frame=%pa\n",
>  			version, xen_events_irq, &grant_frames);
> +
> +	if (xen_events_irq < 0)
> +		return -ENODEV;
> +
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -281,9 +311,21 @@ static int __init xen_guest_init(void)
>  	disable_cpuidle();
>  	disable_cpufreq();
>  
> +	xen_init_IRQ();
> +
> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			       "events", &xen_vcpu)) {
> +		pr_err("Error request IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	xen_percpu_init();
> +
> +	register_cpu_notifier(&xen_cpu_notifier);
> +
>  	return 0;
>  }
> -core_initcall(xen_guest_init);
> +early_initcall(xen_guest_init);
>  
>  static int __init xen_pm_init(void)
>  {
> @@ -297,31 +339,6 @@ static int __init xen_pm_init(void)
>  }
>  late_initcall(xen_pm_init);
>  
> -static irqreturn_t xen_arm_callback(int irq, void *arg)
> -{
> -	xen_hvm_evtchn_do_upcall();
> -	return IRQ_HANDLED;
> -}
> -
> -static int __init xen_init_events(void)
> -{
> -	if (!xen_domain() || xen_events_irq < 0)
> -		return -ENODEV;
> -
> -	xen_init_IRQ();
> -
> -	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> -			"events", &xen_vcpu)) {
> -		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> -		return -EINVAL;
> -	}
> -
> -	on_each_cpu(xen_percpu_init, NULL, 0);
> -
> -	return 0;
> -}
> -postcore_initcall(xen_init_events);
> -
>  /* In the hypervisor.S file. */
>  EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 19:55:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 19:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ekm-0005On-My; Tue, 28 Jan 2014 19:55:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Ekl-0005Oi-Gg
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 19:55:23 +0000
Received: from [85.158.143.35:49229] by server-1.bemta-4.messagelabs.com id
	C5/77-02132-A2B08E25; Tue, 28 Jan 2014 19:55:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1390938920!1444329!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20506 invoked from network); 28 Jan 2014 19:55:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 19:55:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="95416572"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 19:55:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 14:55:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8Ekh-0003UD-0F;
	Tue, 28 Jan 2014 19:55:19 +0000
Date: Tue, 28 Jan 2014 19:55:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1401281954560.4373@kaball.uk.xensource.com>
References: <1390935264-23155-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	linux-kernel@vger.kernel.org, david.vrabel@citrix.com,
	patches@linaro.org, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v2] arm/xen: Initialize event channels
	earlier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Julien Grall wrote:
> The event channel driver needs to be initialized very early. Until now, Xen
> initialization was done after all CPUs were brought up.
> 
> We can safely move the initialization to an early initcall.
> 
> Also use a cpu notifier to:
>     - Register the VCPU when the CPU is prepared
>     - Enable event channel IRQ when the CPU is running
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> ---
>     Changes in v2:
>         - Check earlier if the event IRQ is valid
>         - We can safely register the VCPU when the CPU is booting
> ---
>  arch/arm/xen/enlighten.c |   71 ++++++++++++++++++++++++++++------------------
>  1 file changed, 44 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 293eeea..b96723e 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -23,6 +23,7 @@
>  #include <linux/of_address.h>
>  #include <linux/cpuidle.h>
>  #include <linux/cpufreq.h>
> +#include <linux/cpu.h>
>  
>  #include <linux/mm.h>
>  
> @@ -154,7 +155,7 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  }
>  EXPORT_SYMBOL_GPL(xen_unmap_domain_mfn_range);
>  
> -static void __init xen_percpu_init(void *unused)
> +static void xen_percpu_init(void)
>  {
>  	struct vcpu_register_vcpu_info info;
>  	struct vcpu_info *vcpup;
> @@ -193,6 +194,31 @@ static void xen_power_off(void)
>  		BUG();
>  }
>  
> +static int xen_cpu_notification(struct notifier_block *self,
> +				unsigned long action,
> +				void *hcpu)
> +{
> +	switch (action) {
> +	case CPU_STARTING:
> +		xen_percpu_init();
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +
> +static struct notifier_block xen_cpu_notifier = {
> +	.notifier_call = xen_cpu_notification,
> +};
> +
> +static irqreturn_t xen_arm_callback(int irq, void *arg)
> +{
> +	xen_hvm_evtchn_do_upcall();
> +	return IRQ_HANDLED;
> +}
> +
>  /*
>   * see Documentation/devicetree/bindings/arm/xen.txt for the
>   * documentation of the Xen Device Tree format.
> @@ -229,6 +255,10 @@ static int __init xen_guest_init(void)
>  	xen_events_irq = irq_of_parse_and_map(node, 0);
>  	pr_info("Xen %s support found, events_irq=%d gnttab_frame=%pa\n",
>  			version, xen_events_irq, &grant_frames);
> +
> +	if (xen_events_irq < 0)
> +		return -ENODEV;
> +
>  	xen_domain_type = XEN_HVM_DOMAIN;
>  
>  	xen_setup_features();
> @@ -281,9 +311,21 @@ static int __init xen_guest_init(void)
>  	disable_cpuidle();
>  	disable_cpufreq();
>  
> +	xen_init_IRQ();
> +
> +	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> +			       "events", &xen_vcpu)) {
> +		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> +		return -EINVAL;
> +	}
> +
> +	xen_percpu_init();
> +
> +	register_cpu_notifier(&xen_cpu_notifier);
> +
>  	return 0;
>  }
> -core_initcall(xen_guest_init);
> +early_initcall(xen_guest_init);
>  
>  static int __init xen_pm_init(void)
>  {
> @@ -297,31 +339,6 @@ static int __init xen_pm_init(void)
>  }
>  late_initcall(xen_pm_init);
>  
> -static irqreturn_t xen_arm_callback(int irq, void *arg)
> -{
> -	xen_hvm_evtchn_do_upcall();
> -	return IRQ_HANDLED;
> -}
> -
> -static int __init xen_init_events(void)
> -{
> -	if (!xen_domain() || xen_events_irq < 0)
> -		return -ENODEV;
> -
> -	xen_init_IRQ();
> -
> -	if (request_percpu_irq(xen_events_irq, xen_arm_callback,
> -			"events", &xen_vcpu)) {
> -		pr_err("Error requesting IRQ %d\n", xen_events_irq);
> -		return -EINVAL;
> -	}
> -
> -	on_each_cpu(xen_percpu_init, NULL, 0);
> -
> -	return 0;
> -}
> -postcore_initcall(xen_init_events);
> -
>  /* In the hypervisor.S file. */
>  EXPORT_SYMBOL_GPL(HYPERVISOR_event_channel_op);
>  EXPORT_SYMBOL_GPL(HYPERVISOR_grant_table_op);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 20:08:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Eww-0005w3-EV; Tue, 28 Jan 2014 20:07:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8Ewu-0005vy-W6
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 20:07:57 +0000
Received: from [85.158.143.35:21529] by server-3.bemta-4.messagelabs.com id
	42/6A-32360-C1E08E25; Tue, 28 Jan 2014 20:07:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390939674!1449546!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1353 invoked from network); 28 Jan 2014 20:07:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 20:07:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97419156"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 20:07:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 15:07:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8Ewq-0004Hj-6Y;
	Tue, 28 Jan 2014 20:07:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8EwU-000430-VM;
	Tue, 28 Jan 2014 20:07:31 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24570-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 20:07:31 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24570: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24570 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24570/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24568

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321
baseline version:
 xen                  9c7e789a1b60b6114e0b1ef16dff95f03f532fb5

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Mike Neilsen <mneilsen@acm.org>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3c80caee183124b2a0d1f7e0dae062fd794d6321
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3c80caee183124b2a0d1f7e0dae062fd794d6321
+ branch=xen-unstable
+ revision=3c80caee183124b2a0d1f7e0dae062fd794d6321
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 3c80caee183124b2a0d1f7e0dae062fd794d6321:master
Counting objects: 85, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (54/54), 85.06 KiB, done.
Total 54 (delta 21), reused 49 (delta 16)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   9c7e789..3c80cae  3c80caee183124b2a0d1f7e0dae062fd794d6321 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 3c80caee183124b2a0d1f7e0dae062fd794d6321:master
Counting objects: 85, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (54/54), 85.06 KiB, done.
Total 54 (delta 21), reused 49 (delta 16)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   9c7e789..3c80cae  3c80caee183124b2a0d1f7e0dae062fd794d6321 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 20:11:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ezv-0006Lh-Nm; Tue, 28 Jan 2014 20:11:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Ezt-0006LZ-MU
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 20:11:02 +0000
Received: from [85.158.137.68:17382] by server-9.bemta-3.messagelabs.com id
	ED/84-13104-4DE08E25; Tue, 28 Jan 2014 20:11:00 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390939857!11810075!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14249 invoked from network); 28 Jan 2014 20:10:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 20:10:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,737,1384300800"; d="scan'208";a="97420107"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 20:10:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 15:10:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8Ezn-0003mG-UP;
	Tue, 28 Jan 2014 20:10:55 +0000
Date: Tue, 28 Jan 2014 20:10:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <1390933603-13353-1-git-send-email-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401282003210.4373@kaball.uk.xensource.com>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390933603-13353-1-git-send-email-olaf@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Olaf Hering wrote:
> Implement discard support for xen_disk. It makes use of the existing
> discard code in qemu.
> 
> The discard support is enabled unconditionally. The tool stack may provide a
> property "discard_enable" in the backend node to optionally disable discard
> support.  This is helpful in case the backing file was intentionally created
> non-sparse to avoid fragmentation.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I think that the patch is fine, thank you.
Just one small comment below, but it is purely a matter of taste.


>  hw/block/xen_blkif.h | 12 ++++++++++++
>  hw/block/xen_disk.c  | 28 ++++++++++++++++++++++++++++
>  2 files changed, 40 insertions(+)
> 
> diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
> index c0f4136..711b692 100644
> --- a/hw/block/xen_blkif.h
> +++ b/hw/block/xen_blkif.h
> @@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> @@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index 03e30d7..539f2ed 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -114,6 +114,7 @@ struct XenBlkDev {
>      int                 requests_finished;
>  
>      /* Persistent grants extension */
> +    gboolean            feature_discard;
>      gboolean            feature_persistent;
>      GTree               *persistent_gnts;
>      unsigned int        persistent_gnt_count;
> @@ -253,6 +254,8 @@ static int ioreq_parse(struct ioreq *ioreq)
>      case BLKIF_OP_WRITE:
>          ioreq->prot = PROT_READ; /* from memory */
>          break;
> +    case BLKIF_OP_DISCARD:
> +        return 0;
>      default:
>          xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
>                        ioreq->req.operation);
> @@ -490,6 +493,7 @@ static void qemu_aio_complete(void *opaque, int ret)
>  static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>  {
>      struct XenBlkDev *blkdev = ioreq->blkdev;
> +    struct blkif_request_discard *discard_req = (void *)&ioreq->req;

Given that ioreq->req might not be a struct blkif_request_discard*, I
would rather make the assignment under the case BLKIF_OP_DISCARD below.
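The suggestion above — performing the reinterpreting cast only inside the discard branch, where it is known to be valid — can be sketched roughly as follows. The struct layouts and helper below are simplified, hypothetical stand-ins for the real blkif headers and the qemu code, for illustration only:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the real blkif request structures
 * (hypothetical layouts, not the actual wire format). */
enum { BLKIF_OP_READ, BLKIF_OP_WRITE, BLKIF_OP_DISCARD };

struct blkif_request {
    uint8_t  operation;
    uint8_t  nr_segments;
    uint64_t sector_number;
};

struct blkif_request_discard {
    uint8_t  operation;
    uint8_t  pad;
    uint64_t sector_number;
    uint64_t nr_sectors;
};

/* The request slot is large enough for either variant, as on the
 * real shared ring. */
union blkif_any_request {
    struct blkif_request         generic;
    struct blkif_request_discard discard;
};

/* Reviewer's point: take the discard view of the request only inside
 * the BLKIF_OP_DISCARD case, rather than unconditionally at the top
 * of the function where the request may be an ordinary read/write. */
static uint64_t discard_sectors(union blkif_any_request *req)
{
    switch (req->generic.operation) {
    case BLKIF_OP_DISCARD: {
        struct blkif_request_discard *discard_req = &req->discard;
        return discard_req->nr_sectors;
    }
    default:
        return 0; /* not a discard request */
    }
}
```

This keeps the scope of the cast as narrow as its validity, which is the stylistic improvement being requested.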


>      if (ioreq->req.nr_segments && ioreq_map(ioreq) == -1) {
>          goto err_no_map;
> @@ -521,6 +525,13 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>                          &ioreq->v, ioreq->v.size / BLOCK_SIZE,
>                          qemu_aio_complete, ioreq);
>          break;
> +    case BLKIF_OP_DISCARD:
> +        bdrv_acct_start(blkdev->bs, &ioreq->acct, discard_req->nr_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
> +        ioreq->aio_inflight++;
> +        bdrv_aio_discard(blkdev->bs,
> +                        discard_req->sector_number, discard_req->nr_sectors,
> +                        qemu_aio_complete, ioreq);
> +        break;
>      default:
>          /* unknown operation (shouldn't happen -- parse catches this) */
>          goto err;
> @@ -699,6 +710,19 @@ static void blk_alloc(struct XenDevice *xendev)
>      }
>  }
>  
> +static void blk_parse_discard(struct XenBlkDev *blkdev)
> +{
> +    int enable;
> +
> +    blkdev->feature_discard = true;
> +
> +    if (xenstore_read_be_int(&blkdev->xendev, "discard_enable", &enable) == 0)
> +	    blkdev->feature_discard = !!enable;
> +
> +    if (blkdev->feature_discard)
> +	    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
> +}
> +
>  static int blk_init(struct XenDevice *xendev)
>  {
>      struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
> @@ -766,6 +790,8 @@ static int blk_init(struct XenDevice *xendev)
>      xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
>      xenstore_write_be_int(&blkdev->xendev, "info", info);
>  
> +    blk_parse_discard(blkdev);
> +
>      g_free(directiosafe);
>      return 0;
>  
> @@ -801,6 +827,8 @@ static int blk_connect(struct XenDevice *xendev)
>          qflags |= BDRV_O_RDWR;
>          readonly = false;
>      }
> +    if (blkdev->feature_discard)
> +        qflags |= BDRV_O_UNMAP;
>  
>      /* init qemu block driver */
>      index = (blkdev->xendev.dev - 202 * 256) / 16;
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 20:20:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8F95-00070b-US; Tue, 28 Jan 2014 20:20:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <grygorii.strashko@ti.com>) id 1W8F7R-0006oI-6X
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 20:18:49 +0000
Received: from [193.109.254.147:35545] by server-6.bemta-14.messagelabs.com id
	E6/1D-14958-8A018E25; Tue, 28 Jan 2014 20:18:48 +0000
X-Env-Sender: grygorii.strashko@ti.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390940326!439500!1
X-Originating-IP: [192.94.94.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjk0Ljk0LjQxID0+IDE2NDY4Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5115 invoked from network); 28 Jan 2014 20:18:47 -0000
Received: from bear.ext.ti.com (HELO bear.ext.ti.com) (192.94.94.41)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 20:18:47 -0000
Received: from dflxv15.itg.ti.com ([128.247.5.124])
	by bear.ext.ti.com (8.13.7/8.13.7) with ESMTP id s0SKGTIE000856;
	Tue, 28 Jan 2014 14:16:29 -0600
Received: from DFRE70.ent.ti.com (dncmailx.itg.ti.com [10.167.188.34])
	by dflxv15.itg.ti.com (8.14.3/8.13.8) with ESMTP id s0SKGSFg012134;
	Tue, 28 Jan 2014 14:16:28 -0600
Received: from DFRE01.ent.ti.com ([fe80::b027:5293:c8d8:d82a]) by
	DFRE70.ent.ti.com ([fe80::645e:109b:13d3:dfa7%25]) with mapi id
	14.02.0342.003; Tue, 28 Jan 2014 21:16:27 +0100
From: "Strashko, Grygorii" <grygorii.strashko@ti.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Shilimkar, Santosh"
	<santosh.shilimkar@ti.com>
Thread-Topic: [PATCH 1/3] memblock, nobootmem: Add memblock_virt_alloc_low()
Thread-Index: AQHPHFfdxt7WG3vYDU6baCQUblPPbpqaa/UAgAAUuqQ=
Date: Tue, 28 Jan 2014 20:16:26 +0000
Message-ID: <902E09E6452B0E43903E4F2D568737AB0B98EE9A@DFRE01.ent.ti.com>
References: <1390590670-25901-1-git-send-email-yinghai@kernel.org>
	<1390590670-25901-4-git-send-email-yinghai@kernel.org>
	<CAOesGMh7+SzLTfyPomnB_Q_wft+RC+F3tx-Ow1TdSmHiSwrKcw@mail.gmail.com>
	<CAGa+x87jJep_TSG70wAr5M-Ce7soPyr4SBdPqyo2ENZ2DYcqkA@mail.gmail.com>
	<CAE9FiQU_vZo40cje_XJXSxuKW-JL6H-8rAazwOw1O-ewcV+C5g@mail.gmail.com>
	<52E7E776.30909@ti.com> <20140128182230.GO15937@n2100.arm.linux.org.uk>
	<52E7F8AC.9090909@ti.com>,<20140128185652.GA9362@phenom.dumpdata.com>
In-Reply-To: <20140128185652.GA9362@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [128.247.5.92]
x-exclaimer-md-config: f9c360f5-3d1e-4c3c-8703-f45bf52eff6b
Content-Type: multipart/mixed;
	boundary="_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_"
MIME-Version: 1.0
X-Mailman-Approved-At: Tue, 28 Jan 2014 20:20:30 +0000
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "H. Peter
	Anvin" <hpa@zytor.com>, Olof Johansson <olof@lixom.net>, Andrew
	Morton <akpm@linux-foundation.org>, Yinghai Lu <yinghai@kernel.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH 1/3] memblock,
	nobootmem: Add memblock_virt_alloc_low()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi all,

Sorry, for the invalid mail & patch format - have no way to send it properly now.

Suppose there is another way to fix this issue (more generic)
- Correct memblock_virt_allocX() API to limit allocations below memblock.current_limit
(patch attached).

Then the code behavior will become more similar to _alloc_memory_core_early.

Not tested.


Best regards,
- grygorii

________________________________________
From: Konrad Rzeszutek Wilk [konrad.wilk@oracle.com]
Sent: Tuesday, January 28, 2014 8:56 PM
To: Shilimkar, Santosh
Cc: Russell King - ARM Linux; Yinghai Lu; Kevin Hilman; Olof Johansson; Linus Torvalds; Andrew Morton; Ingo Molnar; H. Peter Anvin; Dave Hansen; linux-kernel@vger.kernel.org; Strashko, Grygorii; xen-devel@lists.xensource.com
Subject: Re: [PATCH 1/3] memblock, nobootmem: Add memblock_virt_alloc_low()

On Tue, Jan 28, 2014 at 01:36:28PM -0500, Santosh Shilimkar wrote:
> + Gryagorii,
> On Tuesday 28 January 2014 01:22 PM, Russell King - ARM Linux wrote:
> > On Tue, Jan 28, 2014 at 12:23:02PM -0500, Santosh Shilimkar wrote:
> >> On Tuesday 28 January 2014 12:12 PM, Yinghai Lu wrote:
> >>> Index: linux-2.6/include/linux/bootmem.h
> >>> ===================================================================
> >>> --- linux-2.6.orig/include/linux/bootmem.h
> >>> +++ linux-2.6/include/linux/bootmem.h
> >>> @@ -179,6 +179,9 @@ static inline void * __init memblock_vir
> >>>                                                     NUMA_NO_NODE);
> >>>  }
> >>>
> >>> +/* Take arch's ARCH_LOW_ADDRESS_LIMIT at first*/
> >>> +#include <asm/processor.h>
> >>> +
> >>>  #ifndef ARCH_LOW_ADDRESS_LIMIT
> >>>  #define ARCH_LOW_ADDRESS_LIMIT  0xffffffffUL
> >>>  #endif
> >>
> >> This won't help mostly since the ARM 32 arch don't set ARCH_LOW_ADDRESS_LIMIT.
> >> Sorry i couldn't respond to the thread earlier because of travel and
> >> don't have access to my board to try out the patches.
> >
> > Let's think about this for a moment, shall we...
> >
> > What does memblock_alloc_virt*() return?  It returns a virtual address.
> >
> > How is that virtual address obtained?  ptr = phys_to_virt(alloc);
> >
> > What is the valid address range for passing into phys_to_virt() ?  Only
> > lowmem addresses.
> >
> > Hence, having ARCH_LOW_ADDRESS_LIMIT set to 4GB-1 by default seems to be
> > completely rediculous - and presumably this also fails on x86_32 if it
> > returns memory up at 4GB.
> >
> > So... yes, I think reverting the arch/arm part of this patch is the right
> > solution, whether the rest of it should be reverted is something I can't
> > comment on.
> >
> Grygorri mentioned an alternate to update the memblock_find_in_range_node() so
> that it takes into account the limit.

This patch breaks also Xen and 32-bit guests (see
http://lists.xen.org/archives/html/xen-devel/2014-01/msg02476.html)

Reverting it fixes it.

>
> Regards,
> Santosh
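The attached patch amounts to clamping the upper bound of memblock_virt_alloc*() requests to memblock.current_limit, so that the returned physical range is always safe to pass to phys_to_virt(). A minimal sketch of that clamp, with a mocked-up memblock global standing in for the kernel's real state (names and values here are illustrative assumptions, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

/* Mock of the kernel's global memblock state (illustrative only). */
static struct {
    phys_addr_t current_limit;
} memblock = { .current_limit = 0x20000000ULL }; /* e.g. 512 MiB of lowmem */

/* The proposed fix inserts essentially this clamp into
 * memblock_virt_alloc_internal() before the range search, so a caller
 * passing an over-wide max_addr (such as the 4GB-1 default of
 * ARCH_LOW_ADDRESS_LIMIT) can never be handed non-lowmem memory. */
static phys_addr_t clamp_to_current_limit(phys_addr_t max_addr)
{
    if (max_addr > memblock.current_limit)
        max_addr = memblock.current_limit;
    return max_addr;
}
```

Requests below the limit pass through unchanged; only the out-of-range upper bounds are pulled down.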

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/x-patch;
	name="0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch"
Content-Description: 0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch
Content-Disposition: attachment;
	filename="0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch";
	size=901; creation-date="Tue, 28 Jan 2014 20:10:27 GMT";
	modification-date="Tue, 28 Jan 2014 20:10:27 GMT"
Content-Transfer-Encoding: 7bit

From ee31afb9c5c0e78819ce624e3a930d31b97527cd Mon Sep 17 00:00:00 2001
From: grygoriis <grygorii.strashko@globallogic.com>
Date: Tue, 28 Jan 2014 21:59:30 +0200
Subject: [PATCH] mm/memblock: fix upper boundary of allocating region

Correct memblock_virt_allocX() API to limit allocations below
memblock.current_limit.

Signed-off-by: grygoriis <grygorii.strashko@globallogic.com>
---
 mm/memblock.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 87d21a6..e93d669 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1077,6 +1077,9 @@ static void * __init memblock_virt_alloc_internal(
 	if (!align)
 		align = SMP_CACHE_BYTES;
 
+        if (max_addr > memblock.current_limit)
+                max_addr = memblock.current_limit;
+
 again:
 	alloc = memblock_find_in_range_node(size, align, min_addr, max_addr,
 					    nid);
-- 
1.7.4.1

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_--


From xen-devel-bounces@lists.xen.org Tue Jan 28 20:20:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8F95-00070b-US; Tue, 28 Jan 2014 20:20:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <grygorii.strashko@ti.com>) id 1W8F7R-0006oI-6X
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 20:18:49 +0000
Received: from [193.109.254.147:35545] by server-6.bemta-14.messagelabs.com id
	E6/1D-14958-8A018E25; Tue, 28 Jan 2014 20:18:48 +0000
X-Env-Sender: grygorii.strashko@ti.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1390940326!439500!1
X-Originating-IP: [192.94.94.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjk0Ljk0LjQxID0+IDE2NDY4Ng==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5115 invoked from network); 28 Jan 2014 20:18:47 -0000
Received: from bear.ext.ti.com (HELO bear.ext.ti.com) (192.94.94.41)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 20:18:47 -0000
Received: from dflxv15.itg.ti.com ([128.247.5.124])
	by bear.ext.ti.com (8.13.7/8.13.7) with ESMTP id s0SKGTIE000856;
	Tue, 28 Jan 2014 14:16:29 -0600
Received: from DFRE70.ent.ti.com (dncmailx.itg.ti.com [10.167.188.34])
	by dflxv15.itg.ti.com (8.14.3/8.13.8) with ESMTP id s0SKGSFg012134;
	Tue, 28 Jan 2014 14:16:28 -0600
Received: from DFRE01.ent.ti.com ([fe80::b027:5293:c8d8:d82a]) by
	DFRE70.ent.ti.com ([fe80::645e:109b:13d3:dfa7%25]) with mapi id
	14.02.0342.003; Tue, 28 Jan 2014 21:16:27 +0100
From: "Strashko, Grygorii" <grygorii.strashko@ti.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Shilimkar, Santosh"
	<santosh.shilimkar@ti.com>
Thread-Topic: [PATCH 1/3] memblock, nobootmem: Add memblock_virt_alloc_low()
Thread-Index: AQHPHFfdxt7WG3vYDU6baCQUblPPbpqaa/UAgAAUuqQ=
Date: Tue, 28 Jan 2014 20:16:26 +0000
Message-ID: <902E09E6452B0E43903E4F2D568737AB0B98EE9A@DFRE01.ent.ti.com>
References: <1390590670-25901-1-git-send-email-yinghai@kernel.org>
	<1390590670-25901-4-git-send-email-yinghai@kernel.org>
	<CAOesGMh7+SzLTfyPomnB_Q_wft+RC+F3tx-Ow1TdSmHiSwrKcw@mail.gmail.com>
	<CAGa+x87jJep_TSG70wAr5M-Ce7soPyr4SBdPqyo2ENZ2DYcqkA@mail.gmail.com>
	<CAE9FiQU_vZo40cje_XJXSxuKW-JL6H-8rAazwOw1O-ewcV+C5g@mail.gmail.com>
	<52E7E776.30909@ti.com> <20140128182230.GO15937@n2100.arm.linux.org.uk>
	<52E7F8AC.9090909@ti.com>,<20140128185652.GA9362@phenom.dumpdata.com>
In-Reply-To: <20140128185652.GA9362@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [128.247.5.92]
x-exclaimer-md-config: f9c360f5-3d1e-4c3c-8703-f45bf52eff6b
Content-Type: multipart/mixed;
	boundary="_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_"
MIME-Version: 1.0
X-Mailman-Approved-At: Tue, 28 Jan 2014 20:20:30 +0000
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>, "H. Peter
	Anvin" <hpa@zytor.com>, Olof Johansson <olof@lixom.net>, Andrew
	Morton <akpm@linux-foundation.org>, Yinghai Lu <yinghai@kernel.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH 1/3] memblock,
	nobootmem: Add memblock_virt_alloc_low()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi all,

Sorry for the invalid mail & patch format - I have no way to send it
properly now.

I suppose there is another, more generic way to fix this issue:
correct the memblock_virt_allocX() API to limit allocations below
memblock.current_limit (patch attached).

Then the code behavior will become more similar to
_alloc_memory_core_early.

Not tested.


Best regards,
- grygorii

________________________________________
From: Konrad Rzeszutek Wilk [konrad.wilk@oracle.com]
Sent: Tuesday, January 28, 2014 8:56 PM
To: Shilimkar, Santosh
Cc: Russell King - ARM Linux; Yinghai Lu; Kevin Hilman; Olof Johansson;
Linus Torvalds; Andrew Morton; Ingo Molnar; H. Peter Anvin; Dave Hansen;
linux-kernel@vger.kernel.org; Strashko, Grygorii;
xen-devel@lists.xensource.com
Subject: Re: [PATCH 1/3] memblock, nobootmem: Add memblock_virt_alloc_low()

On Tue, Jan 28, 2014 at 01:36:28PM -0500, Santosh Shilimkar wrote:
> + Gryagorii,
> On Tuesday 28 January 2014 01:22 PM, Russell King - ARM Linux wrote:
> > On Tue, Jan 28, 2014 at 12:23:02PM -0500, Santosh Shilimkar wrote:
> >> On Tuesday 28 January 2014 12:12 PM, Yinghai Lu wrote:
> >>> Index: linux-2.6/include/linux/bootmem.h
> >>> ===================================================================
> >>> --- linux-2.6.orig/include/linux/bootmem.h
> >>> +++ linux-2.6/include/linux/bootmem.h
> >>> @@ -179,6 +179,9 @@ static inline void * __init memblock_vir
> >>>                                                     NUMA_NO_NODE);
> >>>  }
> >>>
> >>> +/* Take arch's ARCH_LOW_ADDRESS_LIMIT at first*/
> >>> +#include <asm/processor.h>
> >>> +
> >>>  #ifndef ARCH_LOW_ADDRESS_LIMIT
> >>>  #define ARCH_LOW_ADDRESS_LIMIT  0xffffffffUL
> >>>  #endif
> >>
> >> This won't help mostly since the ARM 32 arch don't set ARCH_LOW_ADDRESS_LIMIT.
> >> Sorry i couldn't respond to the thread earlier because of travel and
> >> don't have access to my board to try out the patches.
> >
> > Let's think about this for a moment, shall we...
> >
> > What does memblock_alloc_virt*() return?  It returns a virtual address.
> >
> > How is that virtual address obtained?  ptr = phys_to_virt(alloc);
> >
> > What is the valid address range for passing into phys_to_virt() ?  Only
> > lowmem addresses.
> >
> > Hence, having ARCH_LOW_ADDRESS_LIMIT set to 4GB-1 by default seems to be
> > completely rediculous - and presumably this also fails on x86_32 if it
> > returns memory up at 4GB.
> >
> > So... yes, I think reverting the arch/arm part of this patch is the right
> > solution, whether the rest of it should be reverted is something I can't
> > comment on.
> >
> Grygorri mentioned an alternate to update the memblock_find_in_range_node() so
> that it takes into account the limit.

This patch also breaks Xen and 32-bit guests (see
http://lists.xen.org/archives/html/xen-devel/2014-01/msg02476.html)

Reverting it fixes it.

>
> Regards,
> Santosh

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/x-patch;
	name="0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch"
Content-Description: 0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch
Content-Disposition: attachment;
	filename="0001-mm-memblock-fix-upper-boundary-of-allocating-region.patch";
	size=901; creation-date="Tue, 28 Jan 2014 20:10:27 GMT";
	modification-date="Tue, 28 Jan 2014 20:10:27 GMT"
Content-Transfer-Encoding: 7bit

From ee31afb9c5c0e78819ce624e3a930d31b97527cd Mon Sep 17 00:00:00 2001
From: grygoriis <grygorii.strashko@globallogic.com>
Date: Tue, 28 Jan 2014 21:59:30 +0200
Subject: [PATCH] mm/memblock: fix upper boundary of allocating region

Correct memblock_virt_allocX() API to limit allocations below
memblock.current_limit.

Signed-off-by: grygoriis <grygorii.strashko@globallogic.com>
---
 mm/memblock.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 87d21a6..e93d669 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1077,6 +1077,9 @@ static void * __init memblock_virt_alloc_internal(
 	if (!align)
 		align = SMP_CACHE_BYTES;
 
+        if (max_addr > memblock.current_limit)
+                max_addr = memblock.current_limit;
+
 again:
 	alloc = memblock_find_in_range_node(size, align, min_addr, max_addr,
 					    nid);
-- 
1.7.4.1

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_902E09E6452B0E43903E4F2D568737AB0B98EE9ADFRE01entticom_--


From xen-devel-bounces@lists.xen.org Tue Jan 28 20:25:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8FDw-00078P-RT; Tue, 28 Jan 2014 20:25:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8FDv-00078J-BW
	for xen-devel@lists.xen.org; Tue, 28 Jan 2014 20:25:31 +0000
Received: from [193.109.254.147:56807] by server-13.bemta-14.messagelabs.com
	id 91/73-19374-A3218E25; Tue, 28 Jan 2014 20:25:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390940729!441046!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24221 invoked from network); 28 Jan 2014 20:25:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 20:25:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,738,1384300800"; d="scan'208";a="95428867"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 20:25:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 28 Jan 2014 15:25:28 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8FDr-0003xH-Hr;
	Tue, 28 Jan 2014 20:25:27 +0000
Message-ID: <52E81237.3010302@citrix.com>
Date: Tue, 28 Jan 2014 20:25:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Last night, XenRT discovered an interesting host crash.  The crash
itself is somewhat concerning, but the lack of information does
highlight an area which could do with easier debuggability.

Here are the results from the serial console.  The server in question is
a Supermicro Xeon X5376 system which has not exhibited stability issues
in the past, and seems fine for tests today.

I have linearised the stack and added notes beside each entry.

----[ Xen-4.3.1-xs82408-d  x86_64  debug=y  Not tainted ]----
CPU:    4
RIP:    e008:[<ffff82c4c0235a92>] compat_create_bounce_frame+0x8/0xec
RFLAGS: 0000000000010046   CONTEXT: hypervisor
rax: 0000000000000061   rbx: ffff8300cfafa000   rcx: ffff82c4c02ffd80
rdx: ffff8300cfafa570   rsi: ffff83022eacfd00   rdi: ffff8300cfafa000
rbp: ffff83022eacfd60   rsp: ffff83022eacff08   r8:  0000000000000000
r9:  0000000000000000   r10: ffff83022ead32e8   r11: 00001ac42042804f
r12: ffff8300cfafa000   r13: 0000000000000004   r14: ffff8300cfd3f000
r15: 0000000000000001   cr0: 000000008005003b   cr4: 00000000000026f0
cr3: 0000000228dde000   cr2: 00000000b74e4f10
ds: 007b   es: 007b   fs: 00d8   gs: 00e0   ss: 0000   cs: e008
Xen stack trace from rsp=ffff83022eacff08:
    0000000000000093 | rflags from pushfq in ASSERT_INTERRUPTS_ENABLED
    ffff82c4c02358d8 | RA? compat/entry.S:123 in compat_test_all_events()
    0000000000000001 | r15
    ffff8300cfd3f000 | r14
    0000000000000004 | r13
    ffff8300cfafa000 | r12
    00000000c1695ec0 | ebp
    00000000deadbeef | ebx
    0000000000000000 | r11
    00000000deadbeef | r10
    ffff8300cfafa060 | r9
    0000000000000000 | r8
    0000000000000000 | eax
    00000000deadbeef | ecx
    00000000ee8507a0 | edx
    00000000c23a7000 | esi
    0000000000000000 | edi
    0002010000000000 | TRAP_syscall | TRAP_regs_dirty
    00000000c10013a7 + (hypercall page) __HYPERCALL_sched_op
    0000000000000061 |
    0000000000000246 | Exception frame from ring1 kernel
    00000000c1695eb0 |
    0000000000000069 +
    0000000000000000 | es
    0000000000000000 | ds
    0000000000000000 | fs
    0000000000000000 | gs
    0000000000000004 | cpu_info.processor_id
    ffff8300cfafa000 | cpu_info.current_vcpu
    0000003d6e797180 | cpu_info.per_cpu_offset
    0000000000000000 +

Xen call trace:
   [<ffff82c4c0235a92>] compat_create_bounce_frame+0x8/0xec


Xen has failed the ASSERT_INTERRUPTS_ENABLED check at the very top of
compat_create_bounce_frame, which itself lacks a bugframe which is why
it is not automatically recognised as an assertion.

Following the code back using what I presume to be a return address as
the penultimate word on the stack, the codeflow looks like:

compat_test_all_events:
  ...
  sti
  leaq ...
  5x mov ...
  call compat_create_bounce_frame
  jmp  compat_test_all_events

compat_create_bounce_frame:
  pushfq
  testb
  jnz
  ud2


What I presume has happened is that after 'sti', Xen has taken an
interrupt, which has caused some form of corruption.  Judging from the
top word on the stack, rflags looks quite corrupt.  Unfortunately, this
is all the available information.  (The crash kernel failed to boot
which is another issue I am looking into).

For crashes like this, particularly when attempting to leave Xen context
and return to a guest, the information provided by the stack trace is
quite lacking; the interesting information is what has just been popped
off the stack (which I am hoping would have been another exception
frame).

Would it be sensible to have some indication that we are on the way out
of Xen, so errors in situations like this get a chance to print some of
the recently popped stack values?  I know it won't be heavily used
debugging, but I think it is probably worth the effort for situations
like this where there is simply not enough information to diagnose the
issue.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Jan 28 20:34:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8FMU-0007ZQ-2V; Tue, 28 Jan 2014 20:34:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W8FMR-0007ZL-V5
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 20:34:20 +0000
Received: from [85.158.137.68:18185] by server-12.bemta-3.messagelabs.com id
	AA/4F-20055-B4418E25; Tue, 28 Jan 2014 20:34:19 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390941257!11898091!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15622 invoked from network); 28 Jan 2014 20:34:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 20:34:18 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SKY4qv005483
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 20:34:05 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SKY3lG023257
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 20:34:04 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SKY2Ch024581; Tue, 28 Jan 2014 20:34:03 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 12:34:01 -0800
Message-ID: <52E81474.8020304@oracle.com>
Date: Tue, 28 Jan 2014 15:35:00 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<20140128193837.GA32072@andromeda.dapyr.net>
In-Reply-To: <20140128193837.GA32072@andromeda.dapyr.net>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, david.vrabel@citrix.com,
	linux-kernel@vger.kernel.org, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 02:38 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
>> blkback bug fixes for memory leaks (patches 1 and 2) and a race
>> (patch 3).
> They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
> branch and are testing them now (hadn't pushed it yet).
>
> Matt and Matt,
>
> Could you take a look at the other two patches as well?
>
> David, Boris,
>
> Are you OK with pushing those patches out to Jens Axboe if nobody
> gives an NACK by Friday?

The patches look reasonable to me, so I don't have any objections (but
I am not particularly familiar with this code).

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 20:34:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 20:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8FMU-0007ZQ-2V; Tue, 28 Jan 2014 20:34:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1W8FMR-0007ZL-V5
	for xen-devel@lists.xenproject.org; Tue, 28 Jan 2014 20:34:20 +0000
Received: from [85.158.137.68:18185] by server-12.bemta-3.messagelabs.com id
	AA/4F-20055-B4418E25; Tue, 28 Jan 2014 20:34:19 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1390941257!11898091!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15622 invoked from network); 28 Jan 2014 20:34:18 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Jan 2014 20:34:18 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0SKY4qv005483
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 28 Jan 2014 20:34:05 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0SKY3lG023257
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 28 Jan 2014 20:34:04 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0SKY2Ch024581; Tue, 28 Jan 2014 20:34:03 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 12:34:01 -0800
Message-ID: <52E81474.8020304@oracle.com>
Date: Tue, 28 Jan 2014 15:35:00 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<20140128193837.GA32072@andromeda.dapyr.net>
In-Reply-To: <20140128193837.GA32072@andromeda.dapyr.net>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, david.vrabel@citrix.com,
	linux-kernel@vger.kernel.org, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 02:38 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
>> blkback bug fixes for memory leaks (patches 1 and 2) and a race
>> (patch 3).
> They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
> branch and am testing them now (hadn't pushed it yet).
>
> Matt and Matt,
>
> Could you take a look at the other two patches as well?
>
> David, Boris,
>
> Are you OK with pushing those patches out to Jens Axboe if nobody
> gives a NACK by Friday?

The patches look reasonable to me so I don't have any objections (but
I am not particularly familiar with this code.)

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 21:25:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 21:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8G9p-0000mG-Iy; Tue, 28 Jan 2014 21:25:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8G9o-0000mB-88
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 21:25:20 +0000
Received: from [85.158.143.35:11104] by server-1.bemta-4.messagelabs.com id
	73/CD-31661-F3028E25; Tue, 28 Jan 2014 21:25:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390944317!1463873!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11033 invoked from network); 28 Jan 2014 21:25:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 21:25:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,738,1384300800"; d="scan'208";a="95454607"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Jan 2014 21:25:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 16:25:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8G9j-0004i0-LI;
	Tue, 28 Jan 2014 21:25:15 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8G9j-0003TS-99;
	Tue, 28 Jan 2014 21:25:15 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24571-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 21:25:15 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24571: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24571 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24571/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10    fail REGR. vs. 24492

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602
baseline version:
 xen                  f6179b2e3638e1ff3b3f087ce34b0afdb05ed432

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=b06c0fd1e6be40843084442ebdb377b16f110602
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing b06c0fd1e6be40843084442ebdb377b16f110602
+ branch=xen-4.2-testing
+ revision=b06c0fd1e6be40843084442ebdb377b16f110602
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git b06c0fd1e6be40843084442ebdb377b16f110602:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   f6179b2..b06c0fd  b06c0fd1e6be40843084442ebdb377b16f110602 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 22:23:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 22:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8H3A-0002lP-Ps; Tue, 28 Jan 2014 22:22:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8H39-0002lK-6E
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 22:22:31 +0000
Received: from [85.158.143.35:33077] by server-1.bemta-4.messagelabs.com id
	FA/1B-31661-6AD28E25; Tue, 28 Jan 2014 22:22:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390947748!1446392!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26872 invoked from network); 28 Jan 2014 22:22:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 22:22:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,738,1384300800"; d="scan'208";a="97474145"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 22:22:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 17:22:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8H34-00050X-5N;
	Tue, 28 Jan 2014 22:22:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8H34-0007ej-1b;
	Tue, 28 Jan 2014 22:22:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24572-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 22:22:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24572: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24572 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24572/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail REGR. vs. 24504

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24504

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20
baseline version:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0ac5c121734c5055ba2b500b7f515a71800c7b20
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 15:51:06 2014 +0100

    update Xen version to 4.3.2-rc1
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Jan 28 22:23:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jan 2014 22:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8H3A-0002lP-Ps; Tue, 28 Jan 2014 22:22:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8H39-0002lK-6E
	for xen-devel@lists.xensource.com; Tue, 28 Jan 2014 22:22:31 +0000
Received: from [85.158.143.35:33077] by server-1.bemta-4.messagelabs.com id
	FA/1B-31661-6AD28E25; Tue, 28 Jan 2014 22:22:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1390947748!1446392!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26872 invoked from network); 28 Jan 2014 22:22:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Jan 2014 22:22:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,738,1384300800"; d="scan'208";a="97474145"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Jan 2014 22:22:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 28 Jan 2014 17:22:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8H34-00050X-5N;
	Tue, 28 Jan 2014 22:22:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8H34-0007ej-1b;
	Tue, 28 Jan 2014 22:22:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24572-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 28 Jan 2014 22:22:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24572: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24572 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24572/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-winxpsp3 10 guest-saverestore.2 fail REGR. vs. 24504

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24504

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20
baseline version:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0ac5c121734c5055ba2b500b7f515a71800c7b20
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 15:51:06 2014 +0100

    update Xen version to 4.3.2-rc1
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 01:51:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 01:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8KJL-00056h-IF; Wed, 29 Jan 2014 01:51:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8KJJ-00056a-6l
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 01:51:25 +0000
Received: from [85.158.137.68:4024] by server-11.bemta-3.messagelabs.com id
	E7/39-04255-C9E58E25; Wed, 29 Jan 2014 01:51:24 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390960282!11843907!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22847 invoked from network); 29 Jan 2014 01:51:23 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 01:51:23 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T1owSE004706
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 01:50:59 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T1otUb010883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 29 Jan 2014 01:50:56 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T1os4c010860; Wed, 29 Jan 2014 01:50:55 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 17:50:54 -0800
Date: Tue, 28 Jan 2014 20:50:48 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Yinghai Lu <yinghai@kernel.org>, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com, david.vrabel@citrix.com
Message-ID: <20140129015048.GA14629@konrad-lan.dumpdata.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Olof Johansson <olof@lixom.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 02:47:57PM -0800, Yinghai Lu wrote:
> On Tue, Jan 28, 2014 at 2:08 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> > On 01/28/2014 02:04 PM, Yinghai Lu wrote:
> >> In the original bootmem wrapper for memblock, we had limit checking.
> >>
> >> Add it to memblock_virt_alloc, to address the arm and x86 boot crashes.
> >>
> >> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
> >
> > Do you have a git tree or cumulative set of patches that you'd like us
> > to all test?  I'm happy to boot it on my system, I just want to make
> > sure I've got the same set that you're testing.
> 
> This one should only affect arm and x86 32-bit kernels with more
> than 4G RAM.

Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

It fixes the issue I saw with Xen and 32-bit dom0 blowing up.

> 
> thanks
> 
> Yinghai

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 02:09:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 02:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Kae-0006Ds-GC; Wed, 29 Jan 2014 02:09:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8Kac-0006Dn-UJ
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 02:09:19 +0000
Received: from [85.158.139.211:53279] by server-3.bemta-5.messagelabs.com id
	63/CC-13671-EC268E25; Wed, 29 Jan 2014 02:09:18 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1390961355!236825!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12497 invoked from network); 29 Jan 2014 02:09:17 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 02:09:17 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T285df005442
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 02:08:06 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T2842H008205
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 02:08:05 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T284N1024553; Wed, 29 Jan 2014 02:08:04 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 18:08:03 -0800
Date: Tue, 28 Jan 2014 18:08:02 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20140128180802.152b3f8d@mantra.us.oracle.com>
In-Reply-To: <52E7951802000078001177F6@nat28.tlf.novell.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014 10:31:36 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 28.01.14 at 02:55, Mukesh Rathor <mukesh.rathor@oracle.com>
> >>> wrote:
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
....
> The only thing x86-specific here is that {get,put}_pg_owner() may
> not exist on ARM. But the general operation isn't x86-specific, so
> there shouldn't be any CONFIG_X86 dependency here. Instead
> you ought to work out with the ARM maintainers whether to stub
> out those two functions, or whether the functionality is useful
> there too (and hence proper implementations would be needed).
> 
> In the latter case I would then also wonder whether the x86
> implementation shouldn't be moved into common code.

Stefano/Ian:

If you have a use for get_pg_owner(), I can stub it out for now and
have it return 1, as NULL would result in an error. Otherwise, I can
change the function prototype to return rc, with ARM always returning
0 and not doing anything, like:

        if ( xatpb.space == XENMAPSPACE_gmfn_foreign )
        {
            if ( (rc = get_pg_owner(xatpb.foreign_domid, &fd)) )
            {
                rcu_unlock_domain(d);
                return rc;
            }
        }

which on ARM would always return 0, setting fd to NULL.

If you think it will be needed on ARM, I can just leave the function
prototype the same and you can implement it whenever, as I don't have
much insight into ARM; if it ends up looking the same as x86 you can
commonise it too.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 02:28:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 02:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Kt9-0006kd-6v; Wed, 29 Jan 2014 02:28:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8Kt7-0006jf-GA
	for Xen-devel@lists.xensource.com; Wed, 29 Jan 2014 02:28:25 +0000
Received: from [193.109.254.147:42717] by server-8.bemta-14.messagelabs.com id
	74/05-18529-84768E25; Wed, 29 Jan 2014 02:28:24 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390962502!479267!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3244 invoked from network); 29 Jan 2014 02:28:24 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 02:28:24 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T2SJ7e021891
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 02:28:20 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T2SHEh022581
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 02:28:19 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T2SH7Q016814; Wed, 29 Jan 2014 02:28:17 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 18:28:17 -0800
Date: Tue, 28 Jan 2014 18:28:16 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140128182816.3ef321d1@mantra.us.oracle.com>
In-Reply-To: <20140128034634.GC20680@pegasus.dumpdata.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
	<1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
	<20140128034634.GC20680@pegasus.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014 22:46:34 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Mon, Jan 27, 2014 at 06:18:39PM -0800, Mukesh Rathor wrote:
> > Until now, xen did not expose PSE to pvh guests, but a patch was
> > submitted to the xen list to enable a bunch of features for a pvh
> > guest. PSE has not been
> 
> Which 'patch'?
> 
> > looked into for PVH, so until we can do that and test it to make
> > sure it works, disable the feature to avoid a flood of bugs.
> 
> I think we want a flood of bugs, no?

Ok, but let's document (via this email :)) that they are not tested.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 02:30:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 02:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Kv7-000761-Jb; Wed, 29 Jan 2014 02:30:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8Kv6-00075o-17
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 02:30:28 +0000
Received: from [85.158.137.68:47598] by server-5.bemta-3.messagelabs.com id
	7D/98-04712-3C768E25; Wed, 29 Jan 2014 02:30:27 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1390962625!11957005!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1568 invoked from network); 29 Jan 2014 02:30:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 02:30:26 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T2UMGb005874
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 02:30:23 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T2ULgt010690
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 02:30:22 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0T2ULJZ022246; Wed, 29 Jan 2014 02:30:21 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 18:30:20 -0800
Date: Tue, 28 Jan 2014 18:30:19 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20140128183019.5b765f85@mantra.us.oracle.com>
In-Reply-To: <52E796EB0200007800117804@nat28.tlf.novell.com>
References: <1390875519-8667-1-git-send-email-mukesh.rathor@oracle.com>
	<1390875519-8667-2-git-send-email-mukesh.rathor@oracle.com>
	<52E796EB0200007800117804@nat28.tlf.novell.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: Disable PSE feature for now
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014 10:39:23 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 28.01.14 at 03:18, Mukesh Rathor <mukesh.rathor@oracle.com>
> >>> wrote:
> > Until now, xen did not expose PSE to pvh guests, but a patch was
> > submitted to the xen list to enable a bunch of features for a pvh
> > guest. PSE has not been looked into for PVH, so until we can do
> > that and test it to make sure it works, disable the feature to
> > avoid a flood of bugs.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > ---
> >  arch/x86/xen/enlighten.c |    5 +++++
> >  1 files changed, 5 insertions(+), 0 deletions(-)
> > 
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index a4d7b64..4e952046 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1497,6 +1497,11 @@ static void __init
> > xen_pvh_early_guest_init(void) xen_have_vector_callback = 1;
> >  	xen_pvh_set_cr_flags(0);
> >  
> > +        /* pvh guests are not quite ready for large pages yet */
> > +        setup_clear_cpu_cap(X86_FEATURE_PSE);
> > +        setup_clear_cpu_cap(X86_FEATURE_PSE36);
> 
> And why would you not want to also turn off 1Gb pages then?

Right, that should be turned off too, but Konrad thinks we should
leave them on in linux and deal with issues as they come. I've not
tested them, or looked/thought about them, so I thought it would be
better to turn them on only after I or someone else gets to test them.

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 02:32:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 02:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8KxP-0007EX-4c; Wed, 29 Jan 2014 02:32:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8KxN-0007EP-I4
	for Xen-devel@lists.xensource.com; Wed, 29 Jan 2014 02:32:49 +0000
Received: from [193.109.254.147:30559] by server-12.bemta-14.messagelabs.com
	id 58/4A-17220-05868E25; Wed, 29 Jan 2014 02:32:48 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1390962766!479586!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20856 invoked from network); 29 Jan 2014 02:32:48 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 02:32:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T2VhA2024605
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 02:31:44 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0T2VgiZ024180
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 02:31:42 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0T2VfOI026235; Wed, 29 Jan 2014 02:31:41 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 18:31:41 -0800
Date: Tue, 28 Jan 2014 18:31:39 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140128183139.23397437@mantra.us.oracle.com>
In-Reply-To: <1390907676.7753.48.camel@kazak.uk.xensource.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
	<1390907676.7753.48.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, ian.jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014 11:14:36 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Fri, 2014-01-24 at 17:13 -0800, Mukesh Rathor wrote:
> > Expose features for pvh domUs from tools.
> > 
> 
> I assume this is targeting 4.5?

Yes. Unless it says "checks in the mail", it's 4.5 :).

-Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 02:34:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 02:34:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Kyt-0007M5-Lf; Wed, 29 Jan 2014 02:34:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8Kys-0007Lx-7z
	for Xen-devel@lists.xensource.com; Wed, 29 Jan 2014 02:34:22 +0000
Received: from [85.158.143.35:27691] by server-3.bemta-4.messagelabs.com id
	AC/D8-11539-DA868E25; Wed, 29 Jan 2014 02:34:21 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1390962859!1498145!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6819 invoked from network); 29 Jan 2014 02:34:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 02:34:20 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0T2XH3s008259
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 02:33:17 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0T2XGPH028299
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 02:33:16 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0T2XGhG028288; Wed, 29 Jan 2014 02:33:16 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 28 Jan 2014 18:33:15 -0800
Date: Tue, 28 Jan 2014 18:33:14 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Message-ID: <20140128183314.0460c078@mantra.us.oracle.com>
In-Reply-To: <52E7A16F.6090401@citrix.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
	<52E7A16F.6090401@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, ian.jackson@eu.citrix.com,
	ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014 13:24:15 +0100
Roger Pau Monné <roger.pau@citrix.com> wrote:

> On 25/01/14 02:13, Mukesh Rathor wrote:
> > Expose features for pvh domUs from tools.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > ---
> >  tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
> >  tools/libxc/xc_domain.c    |    1 +
> >  tools/libxc/xenctrl.h      |    2 +-
> >  3 files changed, 18 insertions(+), 11 deletions(-)
> > 
> > diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> > index bbbf9b8..33f6829 100644
> > --- a/tools/libxc/xc_cpuid_x86.c
> > +++ b/tools/libxc/xc_cpuid_x86.c
> > @@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
> >  
> >  static void xc_cpuid_pv_policy(
> >      xc_interface *xch, domid_t domid,
> > -    const unsigned int *input, unsigned int *regs)
> > +    const unsigned int *input, unsigned int *regs, int is_pvh)
> >  {
> >      DECLARE_DOMCTL;
> >      unsigned int guest_width;
> > @@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
> >  
> >      if ( (input[0] & 0x7fffffff) == 0x00000001 )
> >      {
> > -        clear_bit(X86_FEATURE_VME, regs[3]);
> > -        clear_bit(X86_FEATURE_PSE, regs[3]);
> > -        clear_bit(X86_FEATURE_PGE, regs[3]);
> > -        clear_bit(X86_FEATURE_MCE, regs[3]);
> > -        clear_bit(X86_FEATURE_MCA, regs[3]);
> > +        if ( !is_pvh )
> > +        {
> > +            clear_bit(X86_FEATURE_VME, regs[3]);
> > +            clear_bit(X86_FEATURE_PSE, regs[3]);
> > +            clear_bit(X86_FEATURE_PGE, regs[3]);
> > +            clear_bit(X86_FEATURE_MCE, regs[3]);
> > +            clear_bit(X86_FEATURE_MCA, regs[3]);
> 
> Should we enable MCA/MCE flags for PVH DomUs? It looks to me like Dom0
> is the only domain that can make use of MCE/MCA.

Yes, PVH, like any other guest, may setup MCE handlers if they are
enabled, and xen would inject any MCE to guest if it belongs to the
guest.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 05:25:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 05:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Ne4-0004Qw-Hn; Wed, 29 Jan 2014 05:25:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rientjes@google.com>) id 1W8Ne2-0004Qr-PP
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 05:25:02 +0000
Received: from [85.158.143.35:57747] by server-2.bemta-4.messagelabs.com id
	1C/0C-10891-DA098E25; Wed, 29 Jan 2014 05:25:01 +0000
X-Env-Sender: rientjes@google.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390973099!1517287!1
X-Originating-IP: [209.85.220.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22250 invoked from network); 29 Jan 2014 05:25:00 -0000
Received: from mail-pa0-f53.google.com (HELO mail-pa0-f53.google.com)
	(209.85.220.53)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 05:25:00 -0000
Received: by mail-pa0-f53.google.com with SMTP id lj1so1311167pab.26
	for <xen-devel@lists.xensource.com>;
	Tue, 28 Jan 2014 21:24:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=date:from:to:cc:subject:in-reply-to:message-id:references
	:user-agent:mime-version:content-type;
	bh=2+o1dNoIstrtv3SZyOacf3z7sDPX/Sak/T551kO80Yw=;
	b=Wvim4z8pVc6YhtbkbsgbDMJP7npyg/ia6gpxBjjleolkdG2DrwlkDtnvlwSH5RdLov
	lcnQTO6HLDbqbsvEiKnbh+MqY0Z0NE8eaEIuWLWVNszC45ZsnDsy6sAQksE1vVjfLYPm
	jcpdoPPL5R1rik0znDgYfQHOxDNIBo1sZXEzbrlRJ1SsB5DzHbOlWfBrUimHcjvtYPwd
	aHRQN1WzuVdwGOkqQTQ/RmYSc2KTN1vNlKA4jLEe8MllLHjP3NfLLYJr7G2lmZwE1Ahq
	hZ3bpD/6CSr2/jvcSZ9aTcgtlVgdKFp6b7rIN+EgXwCMaAA2K40RDdlulyfrnGjm7H37
	bWFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type;
	bh=2+o1dNoIstrtv3SZyOacf3z7sDPX/Sak/T551kO80Yw=;
	b=ZFC8TOkxMwWXski4Lap/o7+8EYSJd8YGtFmR8sFFWYAqQ7nkN9MHPM3zbyXiJEAh+D
	H6p6gu8Fyh+r3Sy0WpyV4hD5XO6Z50BqVcSfEjJdUAqVBYcrW8IIV9DYPIZgbmedOcNV
	fzgVceXWqc4eI5u/e+yuW96ebllvuiMboT/VoCjATbED+l10L3r4FeZNZ8TogzwwtZjf
	MnX90H/NRTVt55HTURk/GMgulKbM5YvprIVGef0sObvxzNZj+HsPLC/iXUMiA74qBUeO
	7cEtu2SFWkWRwSNJcFayk+EaaE3M5ad0uRa8p/LPA3vlR3rBWof67aGFhlFCJ20fpefT
	bRxw==
X-Gm-Message-State: ALoCoQkCXqP+JjphgNOl8Nt07u546lqmUinECZHJcaxzgbesE1Gtafc02GUfqOuTOAE8zP/oWv6H2o4F3l2mYU2E/cX1Odu0NODs6VtYVHdzLF19f27fy5CRpAwRPeYYV6ICY1gRrSfzUWLZwJEO+OuhRM2RnWbjQVrDtWpJmhqZ/FyCu8knLVxYz2dejGTDo++kbhBLK3PDGPnbSKxwbzbe0knWYRWCDQ==
X-Received: by 10.66.240.4 with SMTP id vw4mr5887299pac.26.1390973098716;
	Tue, 28 Jan 2014 21:24:58 -0800 (PST)
Received: from [2620:0:1008:1101:be30:5bff:fed8:5e64]
	([2620:0:1008:1101:be30:5bff:fed8:5e64])
	by mx.google.com with ESMTPSA id
	sy10sm7792898pac.15.2014.01.28.21.24.57 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 28 Jan 2014 21:24:58 -0800 (PST)
Date: Tue, 28 Jan 2014 21:24:57 -0800 (PST)
From: David Rientjes <rientjes@google.com>
X-X-Sender: rientjes@chino.kir.corp.google.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140128160404.GA5732@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	Stanislaw Gruszka <sgruszka@redhat.com>,
	linux-kernel@vger.kernel.org, "Rafael J. Wysocki" <rjw@sisk.pl>,
	xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 28 Jan 2014, Konrad Rzeszutek Wilk wrote:

> diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
> index 7231859..7602229 100644
> --- a/drivers/xen/xen-acpi-processor.c
> +++ b/drivers/xen/xen-acpi-processor.c
> @@ -46,7 +46,7 @@ module_param_named(off, no_hypercall, int, 0400);
>   */
>  static unsigned int nr_acpi_bits;
>  /* Mutex to protect the acpi_ids_done - for CPU hotplug use. */
> -static DEFINE_MUTEX(acpi_ids_mutex);
> +static DEFINE_SPINLOCK(acpi_ids_lock);
>  /* Which ACPI ID we have processed from 'struct acpi_processor'. */
>  static unsigned long *acpi_ids_done;
>  /* Which ACPI ID exist in the SSDT/DSDT processor definitions. */
> @@ -68,7 +68,7 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
>  	int ret = 0;
>  
>  	dst_cx_states = kcalloc(_pr->power.count,
> -				sizeof(struct xen_processor_cx), GFP_KERNEL);
> +				sizeof(struct xen_processor_cx), GFP_ATOMIC);
>  	if (!dst_cx_states)
>  		return -ENOMEM;
>  
> @@ -149,7 +149,7 @@ xen_copy_pss_data(struct acpi_processor *_pr,
>  		     sizeof(struct acpi_processor_px));
>  
>  	dst_states = kcalloc(_pr->performance->state_count,
> -			     sizeof(struct xen_processor_px), GFP_KERNEL);
> +			     sizeof(struct xen_processor_px), GFP_ATOMIC);
>  	if (!dst_states)
>  		return ERR_PTR(-ENOMEM);
>  
> @@ -275,9 +275,9 @@ static int upload_pm_data(struct acpi_processor *_pr)
>  {
>  	int err = 0;
>  
> -	mutex_lock(&acpi_ids_mutex);
> +	spin_lock(&acpi_ids_lock);
>  	if (__test_and_set_bit(_pr->acpi_id, acpi_ids_done)) {
> -		mutex_unlock(&acpi_ids_mutex);
> +		spin_unlock(&acpi_ids_lock);
>  		return -EBUSY;
>  	}
>  	if (_pr->flags.power)
> @@ -286,7 +286,7 @@ static int upload_pm_data(struct acpi_processor *_pr)
>  	if (_pr->performance && _pr->performance->states)
>  		err |= push_pxx_to_hypervisor(_pr);
>  
> -	mutex_unlock(&acpi_ids_mutex);
> +	spin_unlock(&acpi_ids_lock);
>  	return err;
>  }
>  static unsigned int __init get_max_acpi_id(void)

Looks incomplete, what about the kzalloc() in 
xen_upload_processor_pm_data() and kcalloc()s in check_acpi_ids()?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 06:02:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 06:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ODy-0005oW-JU; Wed, 29 Jan 2014 06:02:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8ODx-0005oR-Cg
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 06:02:09 +0000
Received: from [85.158.137.68:22012] by server-16.bemta-3.messagelabs.com id
	63/0F-29917-06998E25; Wed, 29 Jan 2014 06:02:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390975325!8296620!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1137 invoked from network); 29 Jan 2014 06:02:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 06:02:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,740,1384300800"; d="scan'208";a="95577722"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 06:02:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 01:02:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8ODs-0007H4-2y;
	Wed, 29 Jan 2014 06:02:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8ODr-0002zG-Vz;
	Wed, 29 Jan 2014 06:02:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24589-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 06:02:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24589: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24589 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24589/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24570
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24570

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 06:02:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 06:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ODy-0005oW-JU; Wed, 29 Jan 2014 06:02:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8ODx-0005oR-Cg
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 06:02:09 +0000
Received: from [85.158.137.68:22012] by server-16.bemta-3.messagelabs.com id
	63/0F-29917-06998E25; Wed, 29 Jan 2014 06:02:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1390975325!8296620!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1137 invoked from network); 29 Jan 2014 06:02:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 06:02:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,740,1384300800"; d="scan'208";a="95577722"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 06:02:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 01:02:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8ODs-0007H4-2y;
	Wed, 29 Jan 2014 06:02:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8ODr-0002zG-Vz;
	Wed, 29 Jan 2014 06:02:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24589-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 06:02:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24589: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24589 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24589/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24570
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24570

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 07:31:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 07:31:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Pbs-0000Mq-2I; Wed, 29 Jan 2014 07:30:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8Pbq-0000Mk-Lc
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 07:30:54 +0000
Received: from [193.109.254.147:39299] by server-14.bemta-14.messagelabs.com
	id 33/D3-29228-D2EA8E25; Wed, 29 Jan 2014 07:30:53 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390980650!512817!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30057 invoked from network); 29 Jan 2014 07:30:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 07:30:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,740,1384300800"; d="scan'208";a="97589861"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 07:30:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 02:30:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8PbO-0007l5-Mb;
	Wed, 29 Jan 2014 07:30:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8PbO-00012D-4Z;
	Wed, 29 Jan 2014 07:30:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24591-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 07:30:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24591: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24591 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24591/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24504

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20
baseline version:
 xen                  9612d2948e1637c303e6be68df2168775ac5e97e

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=0ac5c121734c5055ba2b500b7f515a71800c7b20
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 0ac5c121734c5055ba2b500b7f515a71800c7b20
+ branch=xen-4.3-testing
+ revision=0ac5c121734c5055ba2b500b7f515a71800c7b20
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 0ac5c121734c5055ba2b500b7f515a71800c7b20:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   9612d29..0ac5c12  0ac5c121734c5055ba2b500b7f515a71800c7b20 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   9612d29..0ac5c12  0ac5c121734c5055ba2b500b7f515a71800c7b20 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 08:37:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 08:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Qe7-0002p8-6t; Wed, 29 Jan 2014 08:37:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sgruszka@redhat.com>) id 1W8QTS-0002Vo-Og
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 08:26:18 +0000
Received: from [85.158.137.68:62058] by server-9.bemta-3.messagelabs.com id
	5E/2A-10184-A2BB8E25; Wed, 29 Jan 2014 08:26:18 +0000
X-Env-Sender: sgruszka@redhat.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1390983976!12010428!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25691 invoked from network); 29 Jan 2014 08:26:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-4.tower-31.messagelabs.com with SMTP;
	29 Jan 2014 08:26:17 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0T8QCF7014406
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 03:26:12 -0500
Received: from localhost (dhcp-27-235.brq.redhat.com [10.34.27.235])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0T8QAZU016490; Wed, 29 Jan 2014 03:26:11 -0500
Date: Wed, 29 Jan 2014 09:25:22 +0100
From: Stanislaw Gruszka <sgruszka@redhat.com>
To: David Rientjes <rientjes@google.com>
Message-ID: <20140129082521.GA1362@redhat.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
X-Mailman-Approved-At: Wed, 29 Jan 2014 08:37:18 +0000
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Cc: correct Rafael email)

On Tue, Jan 28, 2014 at 09:24:57PM -0800, David Rientjes wrote:
> On Tue, 28 Jan 2014, Konrad Rzeszutek Wilk wrote:
> 
> > diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
> > index 7231859..7602229 100644
> > --- a/drivers/xen/xen-acpi-processor.c
> > +++ b/drivers/xen/xen-acpi-processor.c
> > @@ -46,7 +46,7 @@ module_param_named(off, no_hypercall, int, 0400);
> >   */
> >  static unsigned int nr_acpi_bits;
> >  /* Mutex to protect the acpi_ids_done - for CPU hotplug use. */
> > -static DEFINE_MUTEX(acpi_ids_mutex);
> > +static DEFINE_SPINLOCK(acpi_ids_lock);
> >  /* Which ACPI ID we have processed from 'struct acpi_processor'. */
> >  static unsigned long *acpi_ids_done;
> >  /* Which ACPI ID exist in the SSDT/DSDT processor definitions. */
> > @@ -68,7 +68,7 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
> >  	int ret = 0;
> >  
> >  	dst_cx_states = kcalloc(_pr->power.count,
> > -				sizeof(struct xen_processor_cx), GFP_KERNEL);
> > +				sizeof(struct xen_processor_cx), GFP_ATOMIC);
> >  	if (!dst_cx_states)
> >  		return -ENOMEM;
> >  
> > @@ -149,7 +149,7 @@ xen_copy_pss_data(struct acpi_processor *_pr,
> >  		     sizeof(struct acpi_processor_px));
> >  
> >  	dst_states = kcalloc(_pr->performance->state_count,
> > -			     sizeof(struct xen_processor_px), GFP_KERNEL);
> > +			     sizeof(struct xen_processor_px), GFP_ATOMIC);
> >  	if (!dst_states)
> >  		return ERR_PTR(-ENOMEM);
> >  
> > @@ -275,9 +275,9 @@ static int upload_pm_data(struct acpi_processor *_pr)
> >  {
> >  	int err = 0;
> >  
> > -	mutex_lock(&acpi_ids_mutex);
> > +	spin_lock(&acpi_ids_lock);
> >  	if (__test_and_set_bit(_pr->acpi_id, acpi_ids_done)) {
> > -		mutex_unlock(&acpi_ids_mutex);
> > +		spin_unlock(&acpi_ids_lock);
> >  		return -EBUSY;
> >  	}
> >  	if (_pr->flags.power)
> > @@ -286,7 +286,7 @@ static int upload_pm_data(struct acpi_processor *_pr)
> >  	if (_pr->performance && _pr->performance->states)
> >  		err |= push_pxx_to_hypervisor(_pr);
> >  
> > -	mutex_unlock(&acpi_ids_mutex);
> > +	spin_unlock(&acpi_ids_lock);
> >  	return err;
> >  }
> >  static unsigned int __init get_max_acpi_id(void)
> 
> Looks incomplete, what about the kzalloc() in 
> xen_upload_processor_pm_data() and kcalloc()s in check_acpi_ids()?

Indeed, and additionally from check_acpi_ids() we call
acpi_walk_namespace(), which also takes mutexes. Hence unfortunately
making xen_upload_processor_pm_data() atomic is not easy, though it
could possibly be done by caching the needed data in memory after
initialization.

Or perhaps this problem can be solved differently, by not using
syscore_ops->resume() but some other resume callback from the core,
one which is allowed to sleep. That may require registering a dummy
device or sysfs class, but maybe there are simpler solutions.
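As an illustration of that second idea (purely a hypothetical sketch, not a
tested patch; the driver wiring and names below are invented), a dummy
platform driver could hang the upload off a dev_pm_ops resume callback,
which runs in process context and is allowed to sleep:

```c
/* Hypothetical sketch only: a dev_pm_ops .resume callback, unlike
 * syscore_ops->resume, runs with interrupts enabled and may sleep,
 * so the existing mutex/GFP_KERNEL/acpi_walk_namespace() code could
 * stay as-is. Names here are made up for illustration. */
#include <linux/platform_device.h>
#include <linux/pm.h>

static int xap_pm_resume(struct device *dev)
{
	/* Safe to take acpi_ids_mutex, allocate with GFP_KERNEL and
	 * call acpi_walk_namespace() from here. */
	return xen_upload_processor_pm_data();
}

static SIMPLE_DEV_PM_OPS(xap_pm_ops, NULL, xap_pm_resume);

static struct platform_driver xap_driver = {
	.driver = {
		.name = "xen-acpi-processor",
		.pm   = &xap_pm_ops,
	},
};
```

The trade-off noted above still applies: this registers an otherwise
meaningless device purely to obtain a sleepable resume hook.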

Thanks
Stanislaw 
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 08:43:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 08:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8QkI-0003Gq-CX; Wed, 29 Jan 2014 08:43:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8QkG-0003Gl-JE
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 08:43:40 +0000
Received: from [85.158.139.211:58733] by server-6.bemta-5.messagelabs.com id
	B0/96-14342-B3FB8E25; Wed, 29 Jan 2014 08:43:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1390985019!301189!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20290 invoked from network); 29 Jan 2014 08:43:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 08:43:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 08:43:39 +0000
Message-Id: <52E8CD580200007800117D7E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 08:43:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52E81237.3010302@citrix.com>
In-Reply-To: <52E81237.3010302@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 21:25, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>     0000000000000093 | rflags from pushfq in ASSERT_INTERRUPTS_ENABLED
>     ffff82c4c02358d8 | RA? compat/entry.S:123 in compat_test_all_events()
>     0000000000000001 | r15
>     ffff8300cfd3f000 | r14
>     0000000000000004 | r13
>     ffff8300cfafa000 | r12
>     00000000c1695ec0 | ebp
>     00000000deadbeef | ebx
>     0000000000000000 | r11
>     00000000deadbeef | r10
>     ffff8300cfafa060 | r9
>     0000000000000000 | r8
>     0000000000000000 | eax
>     00000000deadbeef | ecx
>     00000000ee8507a0 | edx
>     00000000c23a7000 | esi
>     0000000000000000 | edi
>     0002010000000000 | TRAP_syscall | TRAP_regs_dirty
>     00000000c10013a7 + (hypercall page) __HYPERCALL_sched_op
>     0000000000000061 |
>     0000000000000246 | Exception frame from ring1 kernel
>     00000000c1695eb0 |
>     0000000000000069 +
>     0000000000000000 | es
>     0000000000000000 | ds
>     0000000000000000 | fs
>     0000000000000000 | gs
>     0000000000000004 | cpu_info.processor_id
>     ffff8300cfafa000 | cpu_info.current_vcpu
>     0000003d6e797180 | cpu_info.per_cpu_offset
>     0000000000000000 +
> 
> Xen call trace:
>    [<ffff82c4c0235a92>] compat_create_bounce_frame+0x8/0xec
> 
> 
> Xen has failed the ASSERT_INTERRUPTS_ENABLED check at the very top of
> compat_create_bounce_frame, which itself lacks a bugframe which is why
> it is not automatically recognised as an assertion.
> 
> Following the code back using what I presume to be a return address as
> the penultimate word on the stack, the codeflow looks like:
> 
> compat_test_all_events:
>   ...
>   sti
>   leaq ...
>   5x mov ...
>   call compat_create_bounce_frame
>   jmp  compat_test_all_events
> 
> compat_create_bounce_frame:
>   pushfq
>   testb
>   jnz
>   ud2
> 
> 
> What I presume has happened is that after 'sti', Xen has taken an
> interrupt, which has caused some form of corruption.  Judging from the
> top word on the stack, rflags looks quite corrupt.

Other than IF being clear, I see no other obvious corruption:
CF, AF, and SF (and the reserved bit 1) are set, and all other flags
are clear. That seems quite a reasonable state after the
"cmpl  $0xfe,%eax" (the most recent instruction to affect the flags).
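That reading of the pushed value can be checked mechanically. A short
script (bit positions taken from the x86 EFLAGS layout) decoding both
rflags values that appear in Andrew's dump:

```python
# Decode the two rflags values from the stack dump above.
# Bit positions per the x86 EFLAGS layout; bit 1 is the always-set
# reserved bit.
FLAGS = {0: "CF", 1: "reserved", 2: "PF", 4: "AF", 6: "ZF",
         7: "SF", 8: "TF", 9: "IF", 10: "DF", 11: "OF"}

def decode(rflags):
    return [name for bit, name in sorted(FLAGS.items())
            if (rflags >> bit) & 1]

print(decode(0x93))   # value pushed by pushfq in ASSERT_INTERRUPTS_ENABLED
print(decode(0x246))  # rflags in the ring1 exception frame
```

0x93 decodes to CF, the reserved bit, AF and SF, with IF (bit 9) clear;
the guest frame's 0x246 has IF set, as expected.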

An interrupt not properly restoring EFLAGS.IF (or actually one not
properly restoring all of EFLAGS) would be very odd. About as odd
as a cosmic radiation induced bit flip resulting in some other
misbehavior. This hasn't been seen more than once I suppose?

> For crashes like this, particularly when attempting to leave Xen context
> and return back to a guest, the information provided by the stack trace
> is quite lacking; the interesting information is what has just
> been popped off the stack (which I am hoping would have been another
> exception frame).
> 
> Would it be sensible to have some indication that we are on the way out
> of Xen, so errors in situations like this can take a chance to print
> some of the recently popped stack values? I know it won't be terribly
> heavily used debugging, but I think it is probably worth the effort for
> situations like this where there is simply not enough information to
> diagnose the issue.

While I realize that in a case like this seeing stack contents below the
stack pointer may be useful (but there's no guarantee it would be), I
don't think it is reasonable to get the code prepared for all kinds of
extremely unlikely scenarios to be debuggable. If the issue here is
reproducible, I'm sure you'll be able to instrument the code such that
you can get further information out of the system (and that's not
necessarily just stack contents - presumably you'd want to track
other state or state changes in some kind of static buffer, which
you'd then also want to dump out at the point of the crash).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 08:51:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 08:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Qrj-0003h5-Gz; Wed, 29 Jan 2014 08:51:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Qri-0003h0-HX
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 08:51:22 +0000
Received: from [85.158.143.35:63814] by server-1.bemta-4.messagelabs.com id
	02/66-31661-901C8E25; Wed, 29 Jan 2014 08:51:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390985480!1545629!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10569 invoked from network); 29 Jan 2014 08:51:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 08:51:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95611168"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 08:51:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 03:51:19 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1W8Qre-0005fZ-G2;
	Wed, 29 Jan 2014 08:51:18 +0000
Message-ID: <1390985477.15103.4.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 08:51:17 +0000
In-Reply-To: <52E8CD580200007800117D7E@nat28.tlf.novell.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
> An interrupt not properly restoring EFLAGS.IF (or actually one not
> properly restoring all of EFLAGS) would be very odd. About as odd
> as a cosmic radiation induced bit flip resulting in some other
> misbehavior.

Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
something else catch that first?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 08:51:17 +0000
In-Reply-To: <52E8CD580200007800117D7E@nat28.tlf.novell.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
> An interrupt not properly restoring EFLAGS.IF (or actually one not
> properly restoring all of EFLAGS) would be very odd. About as odd
> as a cosmic radiation induced bit flip resulting in some other
> misbehavior.

Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
something else catch that first?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 08:52:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 08:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8QsX-0003ki-Uq; Wed, 29 Jan 2014 08:52:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8QsW-0003kU-7Q
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 08:52:12 +0000
Received: from [85.158.137.68:28471] by server-12.bemta-3.messagelabs.com id
	D9/F6-01674-B31C8E25; Wed, 29 Jan 2014 08:52:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1390985530!12017144!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22126 invoked from network); 29 Jan 2014 08:52:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 08:52:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 08:52:12 +0000
Message-Id: <52E8CF540200007800117D88@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 08:52:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Roger Pau Monne" <roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1390931015-5490-4-git-send-email-roger.pau@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
>  	 * the proper response on the ring.
>  	 */
>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
> -		xen_blkbk_unmap(pending_req->blkif,
> +		struct xen_blkif *blkif = pending_req->blkif;
> +
> +		xen_blkbk_unmap(blkif,
>  		                pending_req->segments,
>  		                pending_req->nr_pages);
> -		make_response(pending_req->blkif, pending_req->id,
> +		make_response(blkif, pending_req->id,
>  			      pending_req->operation, pending_req->status);
> -		xen_blkif_put(pending_req->blkif);
> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
> -			if (atomic_read(&pending_req->blkif->drain))
> -				complete(&pending_req->blkif->drain_complete);
> +		free_req(blkif, pending_req);
> +		/*
> +		 * Make sure the request is freed before releasing blkif,
> +		 * or there could be a race between free_req and the
> +		 * cleanup done in xen_blkif_free during shutdown.
> +		 *
> +		 * NB: The fact that we might try to wake up pending_free_wq
> +		 * before drain_complete (in case there's a drain going on)
> +		 * is not a problem with our current implementation
> +		 * because we can assure there's no thread waiting on
> +		 * pending_free_wq if there's a drain going on, but it has
> +		 * to be taken into account if the current model is changed.
> +		 */
> +		xen_blkif_put(blkif);
> +		if (atomic_read(&blkif->refcnt) <= 2) {
> +			if (atomic_read(&blkif->drain))
> +				complete(&blkif->drain_complete);
>  		}
> -		free_req(pending_req->blkif, pending_req);
>  	}
>  }

The put is still too early imo - you're explicitly accessing fields in the
structure immediately afterwards. This may not be an issue at
present, but I think it's at least a latent one.

Apart from that, the two if()s would - at least to me - be more
clear if combined into one.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8R1Q-0004I1-Bh; Wed, 29 Jan 2014 09:01:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8R1P-0004Hw-5D
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:01:23 +0000
Received: from [193.109.254.147:64294] by server-13.bemta-14.messagelabs.com
	id 28/94-01226-263C8E25; Wed, 29 Jan 2014 09:01:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390986081!537597!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 810 invoked from network); 29 Jan 2014 09:01:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 09:01:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 09:01:25 +0000
Message-Id: <52E8D17E0200007800117D9F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 09:01:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
	<1390985477.15103.4.camel@dagon.hellion.org.uk>
In-Reply-To: <1390985477.15103.4.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 09:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
>> An interrupt not properly restoring EFLAGS.IF (or actually one not
>> properly restoring all of EFLAGS) would be very odd. About as odd
>> as a cosmic radiation induced bit flip resulting in some other
>> misbehavior.
> 
> Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
> something else catch that first?

A missing plain spin_unlock() wouldn't have any effect on IF. And
a missing spin_unlock_irqrestore() would have an effect on IF in
the interrupt handler, but with the return being through an IRET
something would need to actively modify the flags on the stack
that IRET uses in order to affect the interrupted code's EFLAGS.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:20:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8RJc-00051u-5D; Wed, 29 Jan 2014 09:20:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8RJb-00051p-BJ
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:20:11 +0000
Received: from [85.158.137.68:35165] by server-13.bemta-3.messagelabs.com id
	D9/94-26923-AC7C8E25; Wed, 29 Jan 2014 09:20:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1390987208!11970849!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26873 invoked from network); 29 Jan 2014 09:20:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 09:20:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95619731"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 09:20:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 04:20:07 -0500
Message-ID: <1390987206.31814.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 29 Jan 2014 09:20:06 +0000
In-Reply-To: <1390932736-10568-1-git-send-email-olaf@aepfle.de>
References: <1390932736-10568-1-git-send-email-olaf@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: update check-xl-disk-parse to handle
 backend_domname
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 19:12 +0100, Olaf Hering wrote:
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Thanks.

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I think it doesn't need a freeze exception, so I'll commit it when I
next go through my queue.

As an aside, if anyone is looking for a project, wiring up this and other
tests into a toplevel "make test" would be awesome. I'm not sure if it
would be complicated by some of the tests needing to run on an active
Xen host (as opposed to a build machine). Perhaps many of the tests
could be made to not rely on running under Xen, or perhaps there is an
easy way to create a tarball of the tests which could be deployed and
run?

Once something like that is in place then adding it to the push gate
should be relatively simple, I think. It would also be possible to ask
people to run them before submitting patches etc.

If it didn't require running under Xen then I'd also add it to my
pre-commit test scripts.

Ian.

> ---
>  tools/libxl/check-xl-disk-parse | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
> index 41fb7af..797277c 100755
> --- a/tools/libxl/check-xl-disk-parse
> +++ b/tools/libxl/check-xl-disk-parse
> @@ -53,6 +53,7 @@ one $e foo
>  expected <<END
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "/dev/vg/guest-volume",
>      "vdev": "hda",
>      "backend": "unknown",
> @@ -73,6 +74,7 @@ one 0 raw:/dev/vg/guest-volume,hda,w
>  expected <<END
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "/root/image.iso",
>      "vdev": "hdc",
>      "backend": "unknown",
> @@ -94,6 +96,7 @@ one 0 raw:/root/image.iso,hdc:cdrom,ro
>  expected <<EOF
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "/dev/vg/guest-volume",
>      "vdev": "xvdb",
>      "backend": "phy",
> @@ -110,6 +113,7 @@ one 0 backendtype=phy,vdev=xvdb,access=w,target=/dev/vg/guest-volume
>  expected <<EOF
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "",
>      "vdev": "hdc",
>      "backend": "unknown",
> @@ -130,6 +134,7 @@ one 0 ,empty,hdc:cdrom,r
>  expected <<EOF
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": null,
>      "vdev": "hdc",
>      "backend": "unknown",
> @@ -147,6 +152,7 @@ one 0 vdev=hdc,access=r,devtype=cdrom
>  expected <<EOF
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "iqn.2001-05.com.equallogic:0-8a0906-23fe93404-c82797962054a96d-examplehost",
>      "vdev": "xvda",
>      "backend": "unknown",
> @@ -166,6 +172,7 @@ one 0 vdev=xvda,access=w,script=block-iscsi,target=iqn.2001-05.com.equallogic:0-
>  expected <<EOF
>  disk: {
>      "backend_domid": 0,
> +    "backend_domname": null,
>      "pdev_path": "app01",
>      "vdev": "hda",
>      "backend": "unknown",
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:25:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ROj-0005Ai-1o; Wed, 29 Jan 2014 09:25:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8ROh-0005Ad-Ew
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:25:27 +0000
Received: from [193.109.254.147:60315] by server-12.bemta-14.messagelabs.com
	id 3C/42-17220-609C8E25; Wed, 29 Jan 2014 09:25:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390987524!543564!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7063 invoked from network); 29 Jan 2014 09:25:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 09:25:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97614287"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 09:25:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 04:25:23 -0500
Message-ID: <1390987522.31814.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 09:25:22 +0000
In-Reply-To: <52E8D17E0200007800117D9F@nat28.tlf.novell.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
	<1390985477.15103.4.camel@dagon.hellion.org.uk>
	<52E8D17E0200007800117D9F@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 09:01 +0000, Jan Beulich wrote:
> >>> On 29.01.14 at 09:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
> >> An interrupt not properly restoring EFLAGS.IF (or actually one not
> >> properly restoring all of EFLAGS) would be very odd. About as odd
> >> as a cosmic radiation induced bit flip resulting in some other
> >> misbehavior.
> > 
> > Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
> > something else catch that first?
> 
> A missing plain spin_unlock() wouldn't have any effect on IF. And
> a missing spin_unlock_irqrestore() would have an effect on IF in
> the interrupt handler, but with the return being through an IRET
> something would need to actively modify the flags on the stack
> that IRET uses in order to affect the interrupted code's EFLAGS.

Ah, I mistakenly thought that this issue was happening on that return
path (i.e. before the IRET).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:34:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8RWc-0005bp-6n; Wed, 29 Jan 2014 09:33:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8RWa-0005bk-MZ
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:33:36 +0000
Received: from [85.158.139.211:18333] by server-16.bemta-5.messagelabs.com id
	EB/D6-05060-FEAC8E25; Wed, 29 Jan 2014 09:33:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390988013!315437!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1372 invoked from network); 29 Jan 2014 09:33:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 09:33:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95622834"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 09:33:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 04:33:32 -0500
Message-ID: <1390988011.31814.40.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 29 Jan 2014 09:33:31 +0000
In-Reply-To: <20140128193430.GB9842@phenom.dumpdata.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Igor Kozhukhov <ikozhukhov@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 14:34 -0500, Konrad Rzeszutek Wilk wrote:
> > do i need implement it first ?
> 
> No. But you should have stub functions in your hypercall page to at least
> return -ENOSYS for everything you don't implement.
> 
> How do you construct your hyperpage?

Note that nobody should be constructing a hypercall page themselves;
they should rely on the hypervisor to do it (for PVH guests they will
need to request it, but for a PV guest it is just there).

If a guest calls the hypercall page entry associated with an
unimplemented hypercall then the effect will be an -ENOSYS return.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:43:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:43:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8RfM-0005qj-FG; Wed, 29 Jan 2014 09:42:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8RfL-0005qe-GQ
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:42:39 +0000
Received: from [193.109.254.147:37044] by server-4.bemta-14.messagelabs.com id
	DD/8F-32066-E0DC8E25; Wed, 29 Jan 2014 09:42:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1390988558!546315!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5200 invoked from network); 29 Jan 2014 09:42:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 09:42:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 09:42:59 +0000
Message-Id: <52E8DB2B0200007800117DCA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 09:42:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
	<1390985477.15103.4.camel@dagon.hellion.org.uk>
	<52E8D17E0200007800117D9F@nat28.tlf.novell.com>
	<1390987522.31814.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1390987522.31814.38.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 10:25, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 09:01 +0000, Jan Beulich wrote:
>> >>> On 29.01.14 at 09:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
>> >> An interrupt not properly restoring EFLAGS.IF (or actually one not
>> >> properly restoring all of EFLAGS) would be very odd. About as odd
>> >> as a cosmic radiation induced bit flip resulting in some other
>> >> misbehavior.
>> > 
>> > Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
>> > something else catch that first?
>> 
>> A missing plain spin_unlock() wouldn't have any effect on IF. And
>> a missing spin_unlock_irqrestore() would have an effect on IF in
>> the interrupt handler, but with the return being through an IRET
>> something would need to actively modify the flags on the stack
>> that IRET uses in order to affect the interrupted code's EFLAGS.
> 
> Ah, I mistakenly thought that this issue was happening on that return
> path (i.e. before the IRET).

Right - the problem is that we have two return paths to
consider here: the outer one (wanting to return to the guest)
explicitly used STI a few instructions before the crash, and it
would need to be an inner one (a hardware interrupt) that would
have to fail to restore IF properly; for that to happen the
EFLAGS image used by that exit path's IRET would need to get
corrupted.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:47:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8RjZ-0006Cw-Ow; Wed, 29 Jan 2014 09:47:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8RjX-0006Cp-Vp
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:47:00 +0000
Received: from [85.158.139.211:47751] by server-15.bemta-5.messagelabs.com id
	96/39-24395-31EC8E25; Wed, 29 Jan 2014 09:46:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390988817!311130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14484 invoked from network); 29 Jan 2014 09:46:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 09:46:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95625716"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 09:46:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 04:46:55 -0500
Message-ID: <1390988815.31814.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Wed, 29 Jan 2014 09:46:55 +0000
In-Reply-To: <BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
	<BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 23:45 +0400, Igor Kozhukhov wrote:
> On Jan 28, 2014, at 11:34 PM, Konrad Rzeszutek Wilk wrote:
> 
> > On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
> >> Hello All,
> >> 
> >> i'm working on port xen-4.3 to DilOS (illumos based platform).
> >> 
> >> i have problems with PV guest load.
> >> dom0 started, i can see info by 'xl info'.
> >> 
> >> first: i see platform ID=38, but i couldn't found it in xen/public/platform.h
> >> 
> >> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
> >> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
> >> 
> >> could you please let me know - what is it the 38 platform hypercall ?
> > 
> > tmem_op
> 
> tmem_op defined at xen/public/xen.h, but 38 ID not defined at xen/public/platform.h

platform.h only declares one subset of hypercalls, the XENPF interfaces.
tmem_op is not one of those interfaces. You want include/public/tmem.h

> > 
> >> 
> >> do i need implement it first ?
> > 
> > No. But you should have stub functions in your hypercall page to at least
> > return -ENOSYS for everything you don't implement.
> 
> based on current code i see:
> return -X_EINVAL;
> will it be correct to return it if ID not implemented ?

This appears to be an illumos return code. It is of course up to the OS
to decide what to return from an unimplemented ioctl, but the
hypervisor itself will return -ENOSYS for an unimplemented hypercall.

> > How do you construct your hyperpage?
> >> 
> 
> example here : https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/i86xpv/io/privcmd_hcall.c
> 
> from line: 379

It seems like illumos has chosen to implement the privcmd hypercall
piecemeal, on a hypercall-by-hypercall basis (in fact on a subop-by-subop
basis). This is up to you, but you might find it easier to just do as
Linux does and mirror all hypercalls made via this path through to the
hypervisor.

One downside of your approach is that you end up hardcoding
non-stable ABIs into your kernel -- e.g. XEN_SYSCTL and XEN_DOMCTL.
These are not considered stable across Xen releases, which means that you
will need to update your kernel whenever you update Xen. If you just
mirror the hypercalls through without inspection then when upgrading Xen
you only need to update the Xen tools and the hypervisor, but not the
kernel.

I suppose you could also take the middle ground and only pass through
the non-stable interfaces but continue to check the rest.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 09:47:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 09:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8RjZ-0006Cw-Ow; Wed, 29 Jan 2014 09:47:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8RjX-0006Cp-Vp
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 09:47:00 +0000
Received: from [85.158.139.211:47751] by server-15.bemta-5.messagelabs.com id
	96/39-24395-31EC8E25; Wed, 29 Jan 2014 09:46:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1390988817!311130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14484 invoked from network); 29 Jan 2014 09:46:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 09:46:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95625716"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 09:46:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 04:46:55 -0500
Message-ID: <1390988815.31814.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Wed, 29 Jan 2014 09:46:55 +0000
In-Reply-To: <BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
	<BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 23:45 +0400, Igor Kozhukhov wrote:
> On Jan 28, 2014, at 11:34 PM, Konrad Rzeszutek Wilk wrote:
> 
> > On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
> >> Hello All,
> >> 
> >> i'm working on port xen-4.3 to DilOS (illumos based platform).
> >> 
> >> i have problems with PV guest load.
> >> dom0 started, i can see info by 'xl info'.
> >> 
> >> first: i see platform ID=38, but i couldn't found it in xen/public/platform.h
> >> 
> >> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
> >> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
> >> 
> >> could you please let me know - what is it the 38 platform hypercall ?
> > 
> > tmem_op
> 
> tmem_op defined at xen/public/xen.h, but 38 ID not defined at xen/public/platform.h

platform.h only declares one subset of hypercalls, the XENPF interfaces.
tmem_op is not one of those interfaces. You want include/public/tmem.h.

> > 
> >> 
> >> do i need implement it first ?
> > 
> > No. But you should have stub functions in your hypercall page to at least
> > return -ENOSYS for everything you don't implement.
> 
> based on current code i see:
> return -X_EINVAL;
> will it be correct to return it if ID not implemented ?

This appears to be an illumos return code. It is of course up to the OS
to decide what to return from an unimplemented ioctl, but the
hypervisor itself will return -ENOSYS to an unimplemented hypercall.
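A minimal sketch of the stub approach Konrad describes: a dispatch
table where every unpopulated slot fails with -ENOSYS. The table layout
and function names here are illustrative, not Xen's actual dispatch
code; the hypercall numbers are taken from xen/public/xen.h (tmem_op is
38, the op from the log above).

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypercall numbers as in xen/public/xen.h; everything else here is
 * an illustrative mock, not real Xen or illumos code. */
#define __HYPERVISOR_platform_op  7   /* formerly __HYPERVISOR_dom0_op */
#define __HYPERVISOR_tmem_op     38   /* the "unrecognized" op from the log */
#define HYPERCALL_MAX            64

typedef long (*hcall_fn_t)(void *arg);

static long do_platform_op(void *arg) { (void)arg; return 0; }

/* Dispatch table: slots left NULL are unimplemented. */
static hcall_fn_t hcall_table[HYPERCALL_MAX] = {
    [__HYPERVISOR_platform_op] = do_platform_op,
    /* deliberately no entry for __HYPERVISOR_tmem_op */
};

/* Stub dispatcher: unknown or unimplemented ops fail with -ENOSYS,
 * matching what the hypervisor itself returns, rather than -EINVAL. */
static long dispatch_hypercall(unsigned int op, void *arg)
{
    if (op >= HYPERCALL_MAX || hcall_table[op] == NULL)
        return -ENOSYS;
    return hcall_table[op](arg);
}
```

With this shape, adding support for a new hypercall is a one-line table
entry, and everything else degrades gracefully instead of logging
"unrecognized" per-op errors.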

> > How do you construct your hyperpage?
> >> 
> 
> example here : https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/i86xpv/io/privcmd_hcall.c
> 
> from line: 379

It seems like illumos has chosen to implement the privcmd hypercall
piecemeal on a hypercall-by-hypercall basis (in fact on a
subop-by-subop basis). This is up to you, but you might find it easier
to just do as Linux does and mirror all hypercalls made via this path
through to the hypervisor.

One downside of your approach is that you end up hardcoding
non-stable ABIs into your kernel -- e.g. XEN_SYSCTL and XEN_DOMCTL.
These are not considered stable across Xen releases, which means that
you will need to update your kernel whenever you update Xen. If you
just mirror the hypercalls through without inspection, then when
upgrading Xen you only need to update the Xen tools and the hypervisor
but not the kernel.

I suppose you could also take the middle ground and only pass through
the non-stable interfaces but continue to check the rest.
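The pass-through design can be sketched as below. This is a simplified
mock -- the struct name, field layout, and do_xen_hypercall() stand-in
are illustrative, not the real Linux privcmd driver or the actual trap
into Xen -- but it shows why no kernel change is needed when Xen adds
or changes a sub-ABI.

```c
#include <assert.h>
#include <errno.h>

/* Illustrative privcmd-style pass-through: instead of decoding every
 * hypercall and subop in the kernel, forward the op and its argument
 * words straight to the hypervisor. */
struct privcmd_hypercall { unsigned int op; unsigned long arg[5]; };

/* Stand-in for the real trap into Xen: pretend only ops below 40
 * exist, so unimplemented ones come back as -ENOSYS from Xen itself. */
static long do_xen_hypercall(unsigned int op, unsigned long *arg)
{
    (void)arg;
    return op < 40 ? 0 : -ENOSYS;
}

/* Pass-through ioctl handler: no per-op switch, so unstable interfaces
 * (XEN_SYSCTL, XEN_DOMCTL, tmem_op, ...) need no kernel changes when
 * the Xen tools and hypervisor are upgraded in lockstep. */
static long privcmd_ioctl_hypercall(struct privcmd_hypercall *hc)
{
    return do_xen_hypercall(hc->op, hc->arg);
}
```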

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:12:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8S7a-0007M6-Mp; Wed, 29 Jan 2014 10:11:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8S7Z-0007Ly-FS; Wed, 29 Jan 2014 10:11:49 +0000
Received: from [85.158.143.35:45674] by server-2.bemta-4.messagelabs.com id
	98/B9-10891-4E3D8E25; Wed, 29 Jan 2014 10:11:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1390990306!1572780!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16588 invoked from network); 29 Jan 2014 10:11:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:11:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97622846"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 10:11:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:11:45 -0500
Message-ID: <1390990304.31814.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Yun Wang <bimingery@gmail.com>, Anthony Perard <anthony.perard@citrix.com>
Date: Wed, 29 Jan 2014 10:11:44 +0000
In-Reply-To: <CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-users@lists.xen.org, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> Sorry for the late reply.
> I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50

And are you using the version of qemu-xen which ships with those
releases or your own version, perhaps from upstream?

ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
4.3.x but I thought it had been added during the 4.4.x development
cycle. Adding Anthony + xen-devel to confirm.

> Here is the /var/log/xen/
> Waiting for domain centos65.pv (domid 1) to die [pid 8116]
> 
> 
> Here is the output of "xl -vvv vcpu-set"

Is this from 4.3 or 4.4? I think at this point we should focus on the
issue with 4.4.

I also asked for your guest cfg file -- please can you show it to us.

> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-1
> 
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> 
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> 
>     "execute": "qmp_capabilities",
> 
>     "id": 1
> 
> }
> 
> '
> 
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> 
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> 
>     "execute": "cpu-add",
> 
>     "id": 2,
> 
>     "arguments": {
> 
>         "id": 0
> 
>     }
> 
> }
> 
> '
> 
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: error
> 
> libxl: error: libxl_qmp.c:289:qmp_handle_error_response: received an
> error message from QMP server: Not supported
> 
> xc: debug: hypercall buffer: total allocations:9 total releases:9
> 
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> 
> xc: debug: hypercall buffer: cache current size:2
> 
> xc: debug: hypercall buffer: cache hits:7 misses:2 toobig:0
> 
> On Tue, Oct 1, 2013 at 3:24 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2013-09-30 at 10:52 -0600, Yun Wang wrote:
> >> Hi all,
> >>
> >> I tried to use "xl vcpu-set" to change the vCPU number of VMs in Xen
> >> 4.4-unstable and had the following errors.
> >
> > What was the full command line which you used?
> >
> > Which exact version of 4.4-unstable (i.e. git commit) were you using?
> >
> >> libxl: error: libxl_qmp.c:289:qmp_handle_error_response: received an
> >> error message from QMP server: Unable to add CPU: 0, it already
> >> exists.
> >>
> >> CPU hotplug is enabled in the guest OS.
> >
> > Please can you provide your guest cfg file and any relevant logs from
> > under /var/log/xen (especially the ones with the guest name in them).
> >
> > Please also try "xl -vvv vcpu-set ..." and provide the full logs of that
> > attempt.
> >
> > Ian.
> >



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:29:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SO7-0008Fp-9c; Wed, 29 Jan 2014 10:28:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8SO6-0008Fe-Dx
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 10:28:54 +0000
Received: from [85.158.139.211:7059] by server-13.bemta-5.messagelabs.com id
	97/A6-18801-5E7D8E25; Wed, 29 Jan 2014 10:28:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1390991331!324465!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17026 invoked from network); 29 Jan 2014 10:28:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:28:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208,217";a="97625971"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 10:28:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:28:49 -0500
Message-ID: <1390991329.31814.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 29 Jan 2014 10:28:49 +0000
In-Reply-To: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 19:24 +0100, Olaf Hering wrote:
> Handle new option discard=on|off for disk configuration. It is supposed
> to disable discard support if file based backing storage was
> intentionally created non-sparse to avoid fragmentation of the file.
> 
> The option is a boolean and intended for the backend driver. A new
> boolean property "discard_enable" is written to the backend node. An
> upcoming patch for qemu will make use of this property. The kernel
> blkback driver may be updated as well to disable discard for phy based
> backing storage.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  docs/misc/xl-disk-configuration.txt | 13 +++++++++++++
>  tools/libxl/check-xl-disk-parse     | 21 ++++++++++++++-------
>  tools/libxl/libxl.c                 |  2 ++
>  tools/libxl/libxl_types.idl         |  1 +
>  tools/libxl/libxlu_disk.c           |  1 +
>  tools/libxl/libxlu_disk_i.h         |  2 +-
>  tools/libxl/libxlu_disk_l.l         |  8 ++++++++
>  xen/include/public/io/blkif.h       |  8 ++++++++
>  8 files changed, 48 insertions(+), 8 deletions(-)
> 
> diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> index 5bd456d..4f81394 100644
> --- a/docs/misc/xl-disk-configuration.txt
> +++ b/docs/misc/xl-disk-configuration.txt
> @@ -178,6 +178,19 @@ information to be interpreted by the executable program <script>,
>  These scripts are normally called "block-<script>".
>  
> 
> +discard=<boolean>
> +---------------
> +
> +Description:           Instruct backend to advertise discard support to frontend
> +Supported values:      on, off, 0, 1
> +Mandatory:             No
> +Default value:         on

I think this default should be "on, if available for that backend type".

What happens if the backend does not support discard?

> +This option instructs the backend driver, depending of the value, to advertise
> +discard support (TRIM, UNMAP) to the frontend. It allows to disable "hole
> +punching" for file based backends which were intentionally created non-sparse.
> +
> +
>  
>  ============================================
>  DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES

[...]
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b58b198 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
>      ("removable", integer),
>      ("readwrite", integer),
>      ("is_cdrom", integer),
> +    ("discard_enable", integer),

I have a feeling this should be a libxl_defbool, to allow for the
possibility of "libxl does what is best/lets the backend decide".

> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 2845ca4..3633a7d 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
>          flexarray_append(back, disk->readwrite ? "w" : "r");
>          flexarray_append(back, "device-type");
>          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> +        flexarray_append(back, "discard_enable");
> +        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));


And if this were a defbool then you'd want to use libxl_defbool_is_default: i.e.
	if (!libxl_defbool_is_default(disk->discard_enable))
		flexarray_append(back, ..., libxl_defbool_val(...) ? "1" : "0"))

(note the lack of libxl_sprintf here too).
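The tri-state pattern Ian is suggesting can be sketched with a mock
type. The real libxl_defbool and its helpers live in tools/libxl; the
struct, helper names, and discard_node_value() below are simplified
stand-ins, not the libxl API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for libxl_defbool: 0 means "default" (the admin
 * expressed no preference), positive means true, negative means false. */
typedef struct { int val; } defbool;

static bool defbool_is_default(defbool db)   { return db.val == 0; }
static bool defbool_val(defbool db)          { assert(db.val != 0); return db.val > 0; }
static void defbool_set(defbool *db, bool b) { db->val = b ? 1 : -1; }

/* Only produce a xenstore value when a preference was set; returning
 * NULL means "omit the key entirely and let the backend decide". */
static const char *discard_node_value(defbool discard_enable)
{
    if (defbool_is_default(discard_enable))
        return NULL;
    return defbool_val(discard_enable) ? "1" : "0";
}
```

This is the point of the defbool suggestion: a plain integer cannot
distinguish "off" from "never configured", so the backend-specific
default gets clobbered.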

>  
>          flexarray_append(front, "backend-id");
>          flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));

>      ])
>  
>  libxl_device_nic = Struct("device_nic", [
> diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
> index 18fe386..ee82a8d 100644
> --- a/tools/libxl/libxlu_disk.c
> +++ b/tools/libxl/libxlu_disk.c
> @@ -58,6 +58,7 @@ int xlu_disk_parse(XLU_Config *cfg,
>      dpc.disk = disk;
>  
>      disk->readwrite = 1;
> +    disk->discard_enable = 1; /* Doing it twice?! */

Why?

> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index 7c4e7f1..2afd5e7 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
>  
>  vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>  script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
> +discard=on,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
> +discard=1,?	{ DPC->disk->discard_enable = 1; DPC->discard_set = 1; }
> +discard=off,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
> +discard=0,?	{ DPC->disk->discard_enable = 0; DPC->discard_set = 1; }
>  
>   /* the target magic parameter, eats the rest of the string */
>  
> @@ -244,6 +248,10 @@ phy:/.*		{ DPC->had_depr_prefix=1; DEPRECATE(0); }
>          xlu__disk_err(DPC,yytext,"too many positional parameters");
>          return 0; /* don't print any more errors */
>      }
> +    if (!DPC->discard_set) {
> +        DPC->discard_set = 1;
> +	DPC->disk->discard_enable = 1;

Indentation is messed up here.

> +    }
>  }
>  
>  . {
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 542f123..0121e19 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,6 +175,14 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> + * discard_enable

All of the existing properties use - not _.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  
>   /* the target magic parameter, eats the rest of the string */
>  
> @@ -244,6 +248,10 @@ phy:/.*		{ DPC->had_depr_prefix=1; DEPRECATE(0); }
>          xlu__disk_err(DPC,yytext,"too many positional parameters");
>          return 0; /* don't print any more errors */
>      }
> +    if (!DPC->discard_set) {
> +        DPC->discard_set = 1;
> +	DPC->disk->discard_enable = 1;

Indentation is messed up here.

> +    }
>  }
>  
>  . {
> diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
> index 542f123..0121e19 100644
> --- a/xen/include/public/io/blkif.h
> +++ b/xen/include/public/io/blkif.h
> @@ -175,6 +175,14 @@
>   *
>   *------------------------- Backend Device Properties -------------------------
>   *
> + * discard_enable

All of the existing properties use - not _.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:30:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SPM-0008Q2-Qe; Wed, 29 Jan 2014 10:30:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8SPL-0008Pq-2D
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 10:30:11 +0000
Received: from [193.109.254.147:34417] by server-3.bemta-14.messagelabs.com id
	D4/1F-00432-238D8E25; Wed, 29 Jan 2014 10:30:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1390991407!564512!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25881 invoked from network); 29 Jan 2014 10:30:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:30:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97626160"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 10:30:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:30:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8SPG-0007Db-S9;
	Wed, 29 Jan 2014 10:30:06 +0000
Message-ID: <52E8D82E.4060604@citrix.com>
Date: Wed, 29 Jan 2014 10:30:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E81237.3010302@citrix.com>
	<52E8CD580200007800117D7E@nat28.tlf.novell.com>
	<1390985477.15103.4.camel@dagon.hellion.org.uk>
	<52E8D17E0200007800117D9F@nat28.tlf.novell.com>
	<1390987522.31814.38.camel@kazak.uk.xensource.com>
	<52E8DB2B0200007800117DCA@nat28.tlf.novell.com>
In-Reply-To: <52E8DB2B0200007800117DCA@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen-4.3 - curious crash
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 09:42, Jan Beulich wrote:
>>>> On 29.01.14 at 10:25, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Wed, 2014-01-29 at 09:01 +0000, Jan Beulich wrote:
>>>>>> On 29.01.14 at 09:51, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>>> On Wed, 2014-01-29 at 08:43 +0000, Jan Beulich wrote:
>>>>> An interrupt not properly restoring EFLAGS.IF (or actually one not
>>>>> properly restoring all of EFLAGS) would be very odd. About as odd
>>>>> as a cosmic radiation induced bit flip resulting in some other
>>>>> misbehavior.
>>>> Isn't it also the effect of a missing spin_unlock(_irqrestore)? Or does
>>>> something else catch that first?
>>> A missing plain spin_unlock() wouldn't have any effect on IF. And
>>> a missing spin_unlock_irqrestore() would have an effect on IF in
>>> the interrupt handler, but with the return being through an IRET
>>> something would need to actively modify the flags on the stack
>>> that IRET uses in order to affect the interrupted code's EFLAGS.
>> Ah, I mistakenly thought that this issue was happening on that return
>> path (i.e. before the IRET).
> Right - the problem is that we're having two return paths to
> consider here: The outer one (wanting to return to the guest)
> explicitly used STI a few instructions before the crash. And it
> would need to be an inner one (hardware interrupt) that would
> have to fail to restore IF properly, and for that to happen the
> EFLAGS image used by that exit path's IRET would need to get
> corrupted.
>
> Jan
>

This issue has been seen exactly once, on an otherwise perfectly stable
server, which has been running stably since.  I certainly have no evidence to
rule out cosmic radiation.

I suppose all that can be done at this point is to wait and see whether
it reoccurs.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:30:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SPv-0008UQ-0o; Wed, 29 Jan 2014 10:30:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8SPt-0008U8-Bz
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 10:30:45 +0000
Received: from [85.158.143.35:12515] by server-2.bemta-4.messagelabs.com id
	23/40-10891-458D8E25; Wed, 29 Jan 2014 10:30:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390991443!1589010!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25920 invoked from network); 29 Jan 2014 10:30:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:30:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97626324"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 10:30:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:30:41 -0500
Message-ID: <1390991440.31814.59.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Wed, 29 Jan 2014 10:30:40 +0000
In-Reply-To: <alpine.DEB.2.02.1401282003210.4373@kaball.uk.xensource.com>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390933603-13353-1-git-send-email-olaf@aepfle.de>
	<alpine.DEB.2.02.1401282003210.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 20:10 +0000, Stefano Stabellini wrote:

> > +    if (xenstore_read_be_int(&blkdev->xendev, "discard_enable", &enable) == 0)

If this is the first implementation (i.e. there are no others in the
wild yet) then please can we use "discard-enable" for consistency with
all the other properties.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SZN-0000da-D3; Wed, 29 Jan 2014 10:40:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8SZL-0000dV-0I
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 10:40:31 +0000
Received: from [193.109.254.147:40450] by server-15.bemta-14.messagelabs.com
	id 04/8D-10839-E9AD8E25; Wed, 29 Jan 2014 10:40:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390992028!575868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10873 invoked from network); 29 Jan 2014 10:40:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:40:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95637654"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 10:40:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:40:26 -0500
Message-ID: <1390992026.31814.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Wed, 29 Jan 2014 10:40:26 +0000
In-Reply-To: <20140128180802.152b3f8d@mantra.us.oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> On Tue, 28 Jan 2014 10:31:36 +0000
> "Jan Beulich" <JBeulich@suse.com> wrote:
> 
> > >>> On 28.01.14 at 02:55, Mukesh Rathor <mukesh.rathor@oracle.com>
> > >>> wrote:
> > > --- a/xen/common/memory.c
> > > +++ b/xen/common/memory.c
> ....
> > The only thing x86-specific here is that {get,put}_pg_owner() may
> > not exist on ARM. But the general operation isn't x86-specific, so
> > there shouldn't be any CONFIG_X86 dependency here. Instead
> > you ought to work out with the ARM maintainers whether to stub
> > out those two functions, or whether the functionality is useful
> > there too (and hence proper implementations would be needed).
> > 
> > In the latter case I would then also wonder whether the x86
> > implementation shouldn't be moved into common code.
> 
> Stefano/Ian:
> 
> If you have use for get_pg_owner() I can stub it out for now and
> have it return 1, as NULL would result in error. Otherwise, I can
> change the function prototype to return rc with ARM always returning 
> 0 and not doing anything, like:
> 
>         if ( xatpb.space == XENMAPSPACE_gmfn_foreign )
>         {
>             if ( (rc = get_pg_owner(xatpb.foreign_domid, &fd)) )
>             {
>                 rcu_unlock_domain(d);
>                 return rc;
>             }
>         }
> 
> which on ARM would always return 0, setting fd to NULL.
> 
> If you think it would be needed in ARM, I can just leave the function
> prototype the same and you guys can implement whenever as I don't have the
> insight into ARM, and if it looks the same as x86 you can commonise it too.

Yes, please just make get/put_pg_owner common.

The only required change would be to:
    if ( unlikely(paging_mode_translate(curr)) )
    {
        MEM_LOG("Cannot mix foreign mappings with translated domains");
        goto out;
    }

which is not needed for ARM, and I suspect needs adjusting for PVH too
(ah, there it is in the next patch). I think the best solution there
would be a new predicate, e.g. paging_mode_supports_foreign(curr) (or
some better name; I don't especially like my suggestion).

on ARM:

#define paging_mode_supports_foreign(d) (1)

on x86:

#define paging_mode_supports_foreign(d) (is_pvh_domain(d) || !paging_mode_translate(d))

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:40:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SZN-0000da-D3; Wed, 29 Jan 2014 10:40:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8SZL-0000dV-0I
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 10:40:31 +0000
Received: from [193.109.254.147:40450] by server-15.bemta-14.messagelabs.com
	id 04/8D-10839-E9AD8E25; Wed, 29 Jan 2014 10:40:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390992028!575868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10873 invoked from network); 29 Jan 2014 10:40:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 10:40:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95637654"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 10:40:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 05:40:26 -0500
Message-ID: <1390992026.31814.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Wed, 29 Jan 2014 10:40:26 +0000
In-Reply-To: <20140128180802.152b3f8d@mantra.us.oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> On Tue, 28 Jan 2014 10:31:36 +0000
> "Jan Beulich" <JBeulich@suse.com> wrote:
> 
> > >>> On 28.01.14 at 02:55, Mukesh Rathor <mukesh.rathor@oracle.com>
> > >>> wrote:
> > > --- a/xen/common/memory.c
> > > +++ b/xen/common/memory.c
> ....
> > The only think x86-specific here is that {get,put}_pg_owner() may
> > not exist on ARM. But the general operation isn't x86-specific, so
> > there shouldn't be any CONFIG_X86 dependency here. Instead
> > you ought to work out with the ARM maintainers whether to stub
> > out those two functions, or whether the functionality is useful
> > there too (and hence proper implementations would be needed).
> > 
> > In the latter case I would then also wonder whether the x86
> > implementation shouldn't be moved into common code.
> 
> Stefano/Ian:
> 
> If you have use for get_pg_owner() I can stub it out for now and
> have it return 1, as NULL would result in error. Otherwise, I can
> change the function prototype to return rc with ARM always returning 
> 0 and not doing anything, like:
> 
>         if ( xatpb.space == XENMAPSPACE_gmfn_foreign )
>         {
>             if ( (rc = get_pg_owner(xatpb.foreign_domid, &fd)) )
>             {
>                 rcu_unlock_domain(d);
>                 return rc;
>             }
>         }
> 
> which on ARM would always return 0, setting fd to NULL.
> 
> If you think it would be needed in ARM, I can just leave the function
> prototype the same and you guys can implement whenever as I don't have the
> insight into ARM, and if it looks the same as x86 you can commonise it too.

Yes, please just make get/put_pg_owner common.

The only required change would be to:
    if ( unlikely(paging_mode_translate(curr)) )
    {
        MEM_LOG("Cannot mix foreign mappings with translated domains");
        goto out;
    }

which is not needed for ARM, and I suspect needs adjusting for PVH too
(ah, there it is in the next patch). I think the best solution there
would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
some better name, I don't especially like my suggestion)

on ARM:

#define paging_mode_supports_foreign(d) (1)

on x86:

#define paging_mode_supports_foreign(d) (is_pvh_domain(d) || !paging_mode_translate(d))

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 10:56:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 10:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8SoU-0001En-FF; Wed, 29 Jan 2014 10:56:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8SoR-0001Eb-B9
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 10:56:09 +0000
Received: from [85.158.137.68:15103] by server-6.bemta-3.messagelabs.com id
	4D/1E-09180-64ED8E25; Wed, 29 Jan 2014 10:56:06 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1390992962!12056048!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=1.5 required=7.0 tests=HOT_NASTY,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1867 invoked from network); 29 Jan 2014 10:56:04 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 10:56:04 -0000
Received: from mail-ve0-f182.google.com ([209.85.128.182]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUujeQsoKIdf05Q/rotxUGGs8+ox65637@postini.com;
	Wed, 29 Jan 2014 02:56:04 PST
Received: by mail-ve0-f182.google.com with SMTP id jy13so1053218veb.41
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 02:56:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=m4tHU1Nd50WtFn5ddDnOK567DcSZOl2SFhIveW8QQ48=;
	b=ivFEpbSdHc7vHjoXlSL8FrKGq9ko62tFlzJbWmzMhEtpIE35HG9lzyfUQlaqPCkE19
	yQSHD0aMx4l5JGOd+dL7pZUW+6bdwX6KdutVFJi0w3h1zHvmn1FzQZ27lyvp6ObCHk/O
	ijasTCMbwgEXVJzmVwDYQ3tU9nAoODygYBh3o=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=m4tHU1Nd50WtFn5ddDnOK567DcSZOl2SFhIveW8QQ48=;
	b=HEbcLfdECoGrrnvWIkScUaN5FtGGBYfA9cXp3Q1VK3NVmA8V5iK/u2afR2v4LLWMsB
	UvHlC7MNgpSjfQzgR8Om2NaK0K5uDBD9z49Hc+bno6a6l82YNW3fx7IQjv1+5Y0Y0Wes
	Ua4MH2SufGMb2SIm3lTEoQFiK6Zq6q+dwzCCKha6tzphy0dE/c++7cpzppoAfMzWLCcu
	RDDVdPnFSE2KnaMB8fl4glOGFapB13tPR2cC7Qob0PlctPrCbrUCW9Diqx8s/N0cnNld
	UiQXsdFKXIgA41EhqVI8Joe92r0oWwkgwahuLNfZNRQm2fLGk9NCz8U9QQn/E6ESA3QK
	OyDg==
X-Gm-Message-State: ALoCoQkNOaWlewKznoWtzLIMTcZW1kILf8i1kpL6vj2YSdbp+qAfhQpNDUgJ50BKEXuik72FXBzbMkV7mp2nrVhTqN1RZ4U3rN7/KYaRXbkUozQKcyyE+WJs9GpL8RxuKxxGIeb8ssemxfVQZ7Ruqu3GUYsyQTCm9RQnHIYgMZT8gvd4YV3I1D0=
X-Received: by 10.52.189.33 with SMTP id gf1mr1384990vdc.26.1390992961446;
	Wed, 29 Jan 2014 02:56:01 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.189.33 with SMTP id gf1mr1384975vdc.26.1390992961269;
	Wed, 29 Jan 2014 02:56:01 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 02:56:01 -0800 (PST)
In-Reply-To: <CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
Date: Wed, 29 Jan 2014 12:56:01 +0200
Message-ID: <CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

I just recalled a hack we created when we needed
to route a HW IRQ to domU.

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d793ba..d0227b9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,

         LOG(DEBUG, "dom%d irq %d", domid, irq);

-        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
-                       : -EOVERFLOW;
         if (!ret)
             ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
         if (ret < 0) {
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2e4b11f..b54c08e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
     if ( d->domain_id == 0 )
         d->arch.vgic.nr_lines = gic_number_lines() - 32;
     else
-        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
+        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */

     d->arch.vgic.shared_irqs =
         xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 75e2df3..ba88901 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
 #include <asm/page.h>
 #include <public/domctl.h>
 #include <xsm/xsm.h>
+#include <asm/gic.h>

 static DEFINE_SPINLOCK(domctl_lock);
 DEFINE_SPINLOCK(vcpu_alloc_lock);
@@ -782,8 +783,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             ret = -EINVAL;
         else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
             ret = -EPERM;
-        else if ( allow )
-            ret = pirq_permit_access(d, pirq);
+        else if ( allow ) {
+            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
+            ret = pirq_permit_access(d, irq.irq);
+            gic_route_irq_to_guest(d, &irq, "");
+        }
         else
             ret = pirq_deny_access(d, pirq);
     }

It seems the above patch can violate the logic of routing physical
IRQs only to CPU0.
In gic_route_irq_to_guest() we call gic_set_irq_properties(), where
one of the parameters is cpumask_of(smp_processor_id()).
But this part of the code can be executed on CPU1, and as a result
the wrong value could be set as the target CPU mask.

Please confirm my assumption.
If I am right, we need to add basic HW IRQ routing to domU in the right way.

On Tue, Jan 28, 2014 at 9:25 PM, Oleksandr Tyshchenko
<oleksandr.tyshchenko@globallogic.com> wrote:
> Hello Julien,
>
> Please see inline
>
>> gic_irq_eoi is only called for physical IRQ routed to the guest (eg:
>> hard drive, network, ...). As far as I remember, these IRQs are only
>> routed to CPU0.
>
>
> I understand.
>
> But I have created a debug patch to show the issue:
>
> diff --git a/xen/common/smp.c b/xen/common/smp.c
> index 46d2fc6..6123561 100644
> --- a/xen/common/smp.c
> +++ b/xen/common/smp.c
> @@ -22,6 +22,8 @@
>  #include <xen/smp.h>
>  #include <xen/errno.h>
>
> +int locked = 0;
> +
>  /*
>   * Structure and data for smp_call_function()/on_selected_cpus().
>   */
> @@ -53,11 +55,19 @@ void on_selected_cpus(
>  {
>      unsigned int nr_cpus;
>
> +    locked = 0;
> +
>      ASSERT(local_irq_is_enabled());
>
>      if (!spin_trylock(&call_lock)) {
> +
> +    locked = 1;
> +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> +                 cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
> +
>          if (smp_call_function_interrupt())
>              return;
> +
>          spin_lock(&call_lock);
>      }
>
> @@ -78,6 +88,10 @@ void on_selected_cpus(
>
>  out:
>      spin_unlock(&call_lock);
> +
> +    if (locked)
> +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> +            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
>  }
>
>  int smp_call_function_interrupt(void)
> @@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
>      void *info = call_data.info;
>      unsigned int cpu = smp_processor_id();
>
> +     if (locked)
> +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> +            cpumask_of(smp_processor_id())->bits[0], call_data.selected.bits[0]);
> +
>      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
>          return -EPERM;
>
> Our issue (simultaneous cross-interrupts) occurred while booting domU:
>
> [    7.507812] oom_adj 2 => oom_score_adj 117
> [    7.507812] oom_adj 4 => oom_score_adj 235
> [    7.507812] oom_adj 9 => oom_score_adj 529
> [    7.507812] oom_adj 15 => oom_score_adj 1000
> [    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching
> index 0 [0, ]
> (XEN)
> (XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000001
> (XEN)
> (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000002
> (XEN)
> (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001,
> cpu_mask_sel: 00000002
> (XEN)
> (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001,
> cpu_mask_sel: 00000001
> (XEN)
> (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000001
> (XEN)
> (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000000
> [   11.023437] usbcore: registered new interface driver usbfs
> [   11.023437] usbcore: registered new interface driver hub
> [   11.023437] usbcore: registered new device driver usb
> [   11.039062] usbcore: registered new interface driver usbhid
> [   11.039062] usbhid: USB HID core driver
>
>>
>> Do you pass-through PPIs to dom0?
>
>
> If I understand correctly, PPIs are IRQs 16 to 31.
> So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> routed to both CPUs.
>
> And I have printed all IRQs that go through the gic_route_irq_to_guest()
> and gic_route_irq() functions.
> ...
> (XEN) GIC initialization:
> (XEN)         gic_dist_addr=0000000048211000
> (XEN)         gic_cpu_addr=0000000048212000
> (XEN)         gic_hyp_addr=0000000048214000
> (XEN)         gic_vcpu_addr=0000000048216000
> (XEN)         gic_maintenance_irq=25
> (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Allocated console ring of 16 KiB.
> (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> (XEN) Bringing up CPU1
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> (XEN) CPU 1 booted.
> (XEN) Brought up 2 CPUs
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
> (XEN) Loading kernel from boot module 2
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
> (XEN) Loading zImage from 00000000c0000040 to
> 00000000c8008000-00000000c8304eb0
> (XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
> Xen)
> (XEN) Freed 252kB init memory.
> [    0.000000] /cpus/cpu@0 missing clock-frequency property
> [    0.000000] /cpus/cpu@1 missing clock-frequency property
> [    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
> [    0.265625] ahci ahci.0.auto: can't get clock
> [    0.867187] Freeing init memory: 224K
> Parsing config from /xen/images/DomUAndroid.cfg
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
> Daemon running with PID 569
> ...
>>
>>
>> --
>> Julien Grall
>
>
>
>
> --
>
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
>
> http://www.globallogic.com/email_disclaimer.txt



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001,
> cpu_mask_sel: 00000001
> (XEN)
> (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000001
> (XEN)
> (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> cpu_mask_sel: 00000000
> [   11.023437] usbcore: registered new interface driver usbfs
> [   11.023437] usbcore: registered new interface driver hub
> [   11.023437] usbcore: registered new device driver usb
> [   11.039062] usbcore: registered new interface driver usbhid
> [   11.039062] usbhid: USB HID core driver
>
>>
>> Do you pass-through PPIs to dom0?
>
>
> If I understand correctly, PPIs are IRQs 16 to 31.
> So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> routed to both CPUs.
>
> And I have printed all IRQs that reach the gic_route_irq_to_guest() and
> gic_route_irq() functions.
> ...
> (XEN) GIC initialization:
> (XEN)         gic_dist_addr=0000000048211000
> (XEN)         gic_cpu_addr=0000000048212000
> (XEN)         gic_hyp_addr=0000000048214000
> (XEN)         gic_vcpu_addr=0000000048216000
> (XEN)         gic_maintenance_irq=25
> (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Allocated console ring of 16 KiB.
> (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> (XEN) Bringing up CPU1
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> (XEN) CPU 1 booted.
> (XEN) Brought up 2 CPUs
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
> (XEN) Loading kernel from boot module 2
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
> (XEN) Loading zImage from 00000000c0000040 to
> 00000000c8008000-00000000c8304eb0
> (XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
> Xen)
> (XEN) Freed 252kB init memory.
> [    0.000000] /cpus/cpu@0 missing clock-frequency property
> [    0.000000] /cpus/cpu@1 missing clock-frequency property
> [    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
> [    0.265625] ahci ahci.0.auto: can't get clock
> [    0.867187] Freeing init memory: 224K
> Parsing config from /xen/images/DomUAndroid.cfg
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
> Daemon running with PID 569
> ...
>>
>>
>> --
>> Julien Grall
>
>
>
>
> --
>
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
>
> http://www.globallogic.com/email_disclaimer.txt



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:20:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TBH-0002M2-DD; Wed, 29 Jan 2014 11:19:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8TBF-0002Lx-GM
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:19:41 +0000
Received: from [193.109.254.147:53995] by server-16.bemta-14.messagelabs.com
	id EB/5E-21945-CC3E8E25; Wed, 29 Jan 2014 11:19:40 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-27.messagelabs.com!1390994379!582165!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12679 invoked from network); 29 Jan 2014 11:19:40 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 11:19:40 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1390994379; l=1297;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=8fuTeRD62Ha26Pc+Htpu90WqE8Q=;
	b=NaESEdZf7j7MgpTUi7Cn6zqBGGmw/BFSpdGTuSsKDTL0Vk9McsueuuAAO8p5w67F9Di
	5HHPObrK87ezPRspCFgs3N1YlDb/TutBCH8CMqcAqlJ1pMBtMBDvUKZs2JRLXxFsKNohh
	+iPD7cUC+SahoOJuaYkEOGz8vwHML8K/T1U=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id J07991q0TBJdQIw
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 29 Jan 2014 12:19:39 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 4CB8950266; Wed, 29 Jan 2014 12:19:39 +0100 (CET)
Date: Wed, 29 Jan 2014 12:19:39 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129111939.GA26899@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390991329.31814.58.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, Ian Campbell wrote:

> > +Default value:         on
> 
> I think this default should be "on if, available for that backend type".

Ok, will make this change.

> What happens if the backed does not support discard?

The toolstack just does not know whether a phy device supports it, or
whether file-backed storage can do hole punching. If feature-discard is
set and the frontend sends a discard request, the backend would return an
error (like ENOTSUPPORTED) and the frontend internally disables the
discard flag. That's how it is done in pvops and in the forward-ported
xenlinux tree.

So far I have not prepared a change for the backend drivers. They could
either force feature-discard to true, so that the error paths get
exercised, or ignore discard-enable if the backing storage does not
support discard.

> >      disk->readwrite = 1;
> > +    disk->discard_enable = 1; /* Doing it twice?! */
> 
> Why?

That's what I'm asking you. Why is readwrite set here, and later on also
in the .l file? Setting it only here did not unconditionally enable it
when no discard= was specified; I have not traced the code to see why
that happens.

> > + * discard_enable
> 
> All of the existing properties use - not _.

I will make this change.

Thanks.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:31:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TMB-0002ne-Vd; Wed, 29 Jan 2014 11:30:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8TMA-0002nZ-79
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:30:58 +0000
Received: from [85.158.137.68:32559] by server-1.bemta-3.messagelabs.com id
	D8/5E-17293-176E8E25; Wed, 29 Jan 2014 11:30:57 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1390995054!11939231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19226 invoked from network); 29 Jan 2014 11:30:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:30:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95648151"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:30:54 +0000
Received: from dhcp-3-227.uk.xensource.com (10.80.3.227) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 06:30:53 -0500
Message-ID: <52E8E66F.4040009@citrix.com>
Date: Wed, 29 Jan 2014 11:30:55 +0000
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
In-Reply-To: <52E8CF540200007800117D88@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.80.3.227]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 08:52, Jan Beulich wrote:
>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req 
>> *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		/*
>> +		 * Make sure the request is freed before releasing blkif,
>> +		 * or there could be a race between free_req and the
>> +		 * cleanup done in xen_blkif_free during shutdown.
>> +		 *
>> +		 * NB: The fact that we might try to wake up pending_free_wq
>> +		 * before drain_complete (in case there's a drain going on)
>> +		 * it's not a problem with our current implementation
>> +		 * because we can assure there's no thread waiting on
>> +		 * pending_free_wq if there's a drain going on, but it has
>> +		 * to be taken into account if the current model is changed.
>> +		 */
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
> 
> The put is still too early imo - you're explicitly accessing field in the
> structure immediately afterwards. This may not be an issue at
> present, but I think it's at least a latent one.

Yes, thanks for catching that one; it's an issue we should solve now, in
this patch, or else I would just be fixing one race by introducing
another.

> Apart from that, the two if()s would - at least to me - be more
> clear if combined into one.

Ack, will see how the patch ends up looking after getting rid of the new
race.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Originating-IP: [10.80.3.227]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 08:52, Jan Beulich wrote:
>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req 
>> *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		/*
>> +		 * Make sure the request is freed before releasing blkif,
>> +		 * or there could be a race between free_req and the
>> +		 * cleanup done in xen_blkif_free during shutdown.
>> +		 *
>> +		 * NB: we might try to wake up pending_free_wq before
>> +		 * drain_complete (in case a drain is in progress). That is
>> +		 * not a problem with the current implementation, because
>> +		 * we can assure there's no thread waiting on
>> +		 * pending_free_wq while a drain is in progress, but it has
>> +		 * to be taken into account if the current model changes.
>> +		 */
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
> 
> The put is still too early imo - you're explicitly accessing fields in the
> structure immediately afterwards. This may not be an issue at
> present, but I think it's at least a latent one.

Yes, thanks for catching that one; it's an issue we should solve now in
this patch, or else I would just be fixing one race by introducing
another.
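For illustration only, here is a minimal user-space sketch of the ordering being discussed (hypothetical names, not the actual xen-blkback code): read any blkif fields first, and make the put the very last access to the structure. Note that because the drain check here runs *before* the put, the refcount threshold shifts from the patch's `<= 2` to `<= 3`.

```c
/* Sketch of "put last": no field access after dropping our reference.
 * All names here are stand-ins for the real driver's xen_blkif_put()/
 * drain_complete machinery. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct blkif {
	atomic_int refcnt;
	atomic_int drain;
	bool drain_completed;	/* stands in for complete(&drain_complete) */
};

static void blkif_put(struct blkif *b)
{
	atomic_fetch_sub(&b->refcnt, 1);
}

static void end_io(struct blkif *b)
{
	/* 1. free_req(blkif, pending_req) would go here. */

	/* 2. Inspect blkif fields while we still hold our reference;
	 * the threshold is <= 3 because our own ref is not yet dropped. */
	if (atomic_load(&b->refcnt) <= 3 && atomic_load(&b->drain))
		b->drain_completed = true;

	/* 3. Drop our reference last: no field access after this point. */
	blkif_put(b);
}
```

The point of the reordering is simply that once `blkif_put()` may have released the last reference, `b` can be freed by another thread, so every dereference must happen before it.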

> Apart from that, the two if()s would - at least to me - be more
> clear if combined into one.

Ack, will see how the patch ends up looking after getting rid of the new
race.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:39:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TTp-0003Fj-9l; Wed, 29 Jan 2014 11:38:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W8TTn-0003Ev-M8
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:38:51 +0000
Received: from [85.158.139.211:10785] by server-7.bemta-5.messagelabs.com id
	0C/A9-14867-A48E8E25; Wed, 29 Jan 2014 11:38:50 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-206.messagelabs.com!1390995530!347596!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19713 invoked from network); 29 Jan 2014 11:38:50 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 11:38:50 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W8TTi-000FOb-1Q; Wed, 29 Jan 2014 11:38:46 +0000
Date: Wed, 29 Jan 2014 12:38:46 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129113846.GA54797@deinos.phlegethon.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390992026.31814.63.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > On Tue, 28 Jan 2014 10:31:36 +0000
> > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > The only thing x86-specific here is that {get,put}_pg_owner() may
> > > not exist on ARM. But the general operation isn't x86-specific, so
> > > there shouldn't be any CONFIG_X86 dependency here. Instead
> > > you ought to work out with the ARM maintainers whether to stub
> > > out those two functions, or whether the functionality is useful
> > > there too (and hence proper implementations would be needed).
[...]
> Yes, please just make get/put_pg_owner common.
> 
> The only required change would be to:
>     if ( unlikely(paging_mode_translate(curr)) )
>     {
>         MEM_LOG("Cannot mix foreign mappings with translated domains");
>         goto out;
>     }
> 
> which is not needed for ARM, and I suspect needs adjusting for PVH too
> (ah, there it is in the next patch). I think the best solution there
> would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
> some better name, I don't especially like my suggestion)
> 
> on ARM:
> 
> #define paging_mode_supports_foreign(d) (1)
> 
> on x86:
> 
> #define paging_mode_supports_foreign(d) (is_pvh_domain(d) || !paging_mode_translate(d))
> 

Hmmm.  That's likely to have unintended consequences somewhere.  (And
if that check is really not needed for PVH maybe it's not needed for
HVM either, given that they share all their paging support code).

But I don't think we need to tinker with it anyway - AFAICS,
get_pg_owner() isn't really what's wanted in the XATP code.  All the
other uses of get_pg_owner() are in the x86 PV MMU code, which this is
definitely not, and it handles cases (like mmio) that we don't want
here anyway.  How about using rcu_lock_live_remote_domain_by_id()?
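Ian's predicate suggestion above can be sketched as a self-contained macro pair (the struct fields and the compile-time switch below are hypothetical stand-ins for illustration, not existing Xen APIs): ARM always permits foreign mappings, while x86 permits them for PVH guests or for non-translated (classic PV) guests.

```c
/* Hypothetical sketch of the proposed predicate. In real Xen the
 * inputs would be is_pvh_domain(d) and paging_mode_translate(d);
 * here plain struct fields stand in for them. */
#include <assert.h>
#include <stdbool.h>

struct domain {
	bool is_pvh;		/* stand-in for is_pvh_domain(d) */
	bool translated;	/* stand-in for paging_mode_translate(d) */
};

#ifdef CONFIG_ARM
/* ARM: foreign mappings are always supported. */
#define paging_mode_supports_foreign(d) (true)
#else
/* x86: PVH domains, or PV domains without translated paging. */
#define paging_mode_supports_foreign(d) \
	((d)->is_pvh || !(d)->translated)
#endif
```

Under this sketch a classic PV domain (not translated) and a PVH domain both pass the check, while a plain HVM-style translated domain does not, which matches the behaviour of the original open-coded `paging_mode_translate()` test plus the PVH exception.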

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:40:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TV4-0003Kn-RB; Wed, 29 Jan 2014 11:40:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8TV3-0003Kh-Ra
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:40:09 +0000
Received: from [85.158.139.211:42285] by server-10.bemta-5.messagelabs.com id
	DD/2D-08578-998E8E25; Wed, 29 Jan 2014 11:40:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1390995608!355243!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5195 invoked from network); 29 Jan 2014 11:40:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 11:40:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 11:40:07 +0000
Message-Id: <52E8F6B30200007800117E35@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 11:40:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52E7D3BB02000078001179F5@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] Xen 4.3.2-rc1 and 4.2.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All,

we are aiming at releases with, as before, presumably just one more RC
for each of them, so please test!

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:41:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TWO-0003S0-Oj; Wed, 29 Jan 2014 11:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TWN-0003Ro-HE
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:41:31 +0000
Received: from [85.158.143.35:44703] by server-1.bemta-4.messagelabs.com id
	E9/4F-31661-AE8E8E25; Wed, 29 Jan 2014 11:41:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390995688!1612837!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18344 invoked from network); 29 Jan 2014 11:41:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:41:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97639996"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 11:41:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:41:04 -0500
Message-ID: <1390995662.31814.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 29 Jan 2014 11:41:02 +0000
In-Reply-To: <20140129113846.GA54797@deinos.phlegethon.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:38 +0100, Tim Deegan wrote:
> At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> > On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > > On Tue, 28 Jan 2014 10:31:36 +0000
> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > The only thing x86-specific here is that {get,put}_pg_owner() may
> > > > not exist on ARM. But the general operation isn't x86-specific, so
> > > > there shouldn't be any CONFIG_X86 dependency here. Instead
> > > > you ought to work out with the ARM maintainers whether to stub
> > > > out those two functions, or whether the functionality is useful
> > > > there too (and hence proper implementations would be needed).
> [...]
> > Yes, please just make get/put_pg_owner common.
> > 
> > The only required change would be to:
> >     if ( unlikely(paging_mode_translate(curr)) )
> >     {
> >         MEM_LOG("Cannot mix foreign mappings with translated domains");
> >         goto out;
> >     }
> > 
> > which is not needed for ARM, and I suspect needs adjusting for PVH too
> > (ah, there it is in the next patch). I think the best solution there
> > would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
> > some better name, I don't especially like my suggestion)
> > 
> > on ARM:
> > 
> > #define paging_mode_supports_foreign(d) (1)
> > 
> > on x86:
> > 
> > #define paging_mode_supports_foreign(d) (is_pvh_domain(d) || !paging_mode_translate(d))
> > 
> 
> Hmmm.  That's likely to have unintended consequences somewhere.  (And
> if that check is really not needed for PVH maybe it's not needed for
> HVM either, given that they share all their paging support code).
> 
> But I don't think we need to tinker with it anyway - AFAICS,
> get_pg_owner() isn't really what's wanted in the XATP code.  All the
> other uses of get_pg_owner() are in the x86 PV MMU code, which this is
> definitely not, and it handles cases (like mmio) that we don't want
> here anyway.  How about using rcu_lock_live_remote_domain_by_id()?

We have a struct page in our hand -- don't we need to lookup the owner
and lock it somewhat atomically?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:41:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TWO-0003S0-Oj; Wed, 29 Jan 2014 11:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TWN-0003Ro-HE
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:41:31 +0000
Received: from [85.158.143.35:44703] by server-1.bemta-4.messagelabs.com id
	E9/4F-31661-AE8E8E25; Wed, 29 Jan 2014 11:41:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390995688!1612837!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18344 invoked from network); 29 Jan 2014 11:41:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:41:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97639996"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 11:41:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:41:04 -0500
Message-ID: <1390995662.31814.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 29 Jan 2014 11:41:02 +0000
In-Reply-To: <20140129113846.GA54797@deinos.phlegethon.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:38 +0100, Tim Deegan wrote:
> At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> > On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > > On Tue, 28 Jan 2014 10:31:36 +0000
> > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > The only think x86-specific here is that {get,put}_pg_owner() may
> > > > not exist on ARM. But the general operation isn't x86-specific, so
> > > > there shouldn't be any CONFIG_X86 dependency here. Instead
> > > > you ought to work out with the ARM maintainers whether to stub
> > > > out those two functions, or whether the functionality is useful
> > > > there too (and hence proper implementations would be needed).
> [...]
> > Yes, please just make get/put_pg_owner common.
> > 
> > The only required change would be to:
> >     if ( unlikely(paging_mode_translate(curr)) )
> >     {
> >         MEM_LOG("Cannot mix foreign mappings with translated domains");
> >         goto out;
> >     }
> > 
> > which is not needed for ARM, and I suspect needs adjusting for PVH too
> > (ah, there it is in the next patch). I think the best solution there
> > would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
> > some better name, I don't especially like my suggestion)
> > 
> > on ARM:
> > 
> > #define paging_mode_supports_foreign(d) (1)
> > 
> > on x86:
> > 
> > #define paging_mode_support_foreign(d) (is_pvh_domain(curr) || !(paging_mode_translate(curr))
> > 
> 
> Hmmm.  That's likely to have unintended consequences somewhere.  (And
> if that check is really not needed for PVH maybe it's not needed for
> HVM either, given that they share all their paging support code).
> 
> But I don't think we need to tinker with it anyway - AFAICS,
> get_pg_owner() isn't really what's wanted in the XATP code.  All the
> other uses of get_pg_owner() are in the x86 PV MMU code, which this is
> definitely not, and it handles cases (like mmio) that we don't want
> here anyway.  How about using rcu_lock_live_remote_domain_by_id()?

We have a struct page in our hand -- don't we need to look up the owner
and lock it somewhat atomically?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:42:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TXH-0003YE-84; Wed, 29 Jan 2014 11:42:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8TXE-0003Xu-Ql
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:42:25 +0000
Received: from [85.158.143.35:62357] by server-1.bemta-4.messagelabs.com id
	EF/01-31661-029E8E25; Wed, 29 Jan 2014 11:42:24 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390995741!1613070!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25297 invoked from network); 29 Jan 2014 11:42:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:42:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95650538"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:42:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:42:20 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8TXA-0008IU-Hu;
	Wed, 29 Jan 2014 11:42:20 +0000
Date: Wed, 29 Jan 2014 11:42:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
> Hello all,
> 
> I just remembered a hack that we created
> when we needed to route a HW IRQ in domU.
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d793ba..d0227b9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
> 
>          LOG(DEBUG, "dom%d irq %d", domid, irq);
> 
> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> -                       : -EOVERFLOW;
>          if (!ret)
>              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>          if (ret < 0) {
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..b54c08e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>      if ( d->domain_id == 0 )
>          d->arch.vgic.nr_lines = gic_number_lines() - 32;
>      else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */
> 
>      d->arch.vgic.shared_irqs =
>          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 75e2df3..ba88901 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -29,6 +29,7 @@
>  #include <asm/page.h>
>  #include <public/domctl.h>
>  #include <xsm/xsm.h>
> +#include <asm/gic.h>
> 
>  static DEFINE_SPINLOCK(domctl_lock);
>  DEFINE_SPINLOCK(vcpu_alloc_lock);
> @@ -782,8 +783,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              ret = -EINVAL;
>          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>              ret = -EPERM;
> -        else if ( allow )
> -            ret = pirq_permit_access(d, pirq);
> +        else if ( allow ) {
> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> +            ret = pirq_permit_access(d, irq.irq);
> +            gic_route_irq_to_guest(d, &irq, "");
> +        }
>          else
>              ret = pirq_deny_access(d, pirq);
>      }
> (END)
> 
> It seems the following patch can violate the logic of routing
> physical IRQs only to CPU0.
> In gic_route_irq_to_guest() we need to call gic_set_irq_properties(),
> where one of the parameters is cpumask_of(smp_processor_id()).
> But in this part of the code the function can be executed on CPU1,
> and as a result the wrong value can be set as the target CPU mask.
> 
> Please, confirm my assumption.

That is correct.


> If I am right, we have to add basic HW IRQ routing to DomU in the right way.

We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe
for now we could just hardcode the cpumask of cpu0 in
gic_route_irq_to_guest.

However, keep in mind that if you plan on routing SPIs to guests other
than dom0, receiving all the interrupts on cpu0 might not be great for
performance.

It is impressive how small this patch is, if this is all that is needed
to get IRQ routing to guests working.



> On Tue, Jan 28, 2014 at 9:25 PM, Oleksandr Tyshchenko
> <oleksandr.tyshchenko@globallogic.com> wrote:
> > Hello Julien,
> >
> > Please see inline
> >
> >> gic_irq_eoi is only called for physical IRQs routed to the guest (eg:
> >> hard drive, network, ...). As far as I remember, these IRQs are only
> >> routed to CPU0.
> >
> >
> > I understand.
> >
> > But I have created a debug patch to show the issue:
> >
> > diff --git a/xen/common/smp.c b/xen/common/smp.c
> > index 46d2fc6..6123561 100644
> > --- a/xen/common/smp.c
> > +++ b/xen/common/smp.c
> > @@ -22,6 +22,8 @@
> >  #include <xen/smp.h>
> >  #include <xen/errno.h>
> >
> > +int locked = 0;
> > +
> >  /*
> >   * Structure and data for smp_call_function()/on_selected_cpus().
> >   */
> > @@ -53,11 +55,19 @@ void on_selected_cpus(
> >  {
> >      unsigned int nr_cpus;
> >
> > +    locked = 0;
> > +
> >      ASSERT(local_irq_is_enabled());
> >
> >      if (!spin_trylock(&call_lock)) {
> > +
> > +    locked = 1;
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +                 cpumask_of(smp_processor_id())->bits[0],
> > selected->bits[0]);
> > +
> >          if (smp_call_function_interrupt())
> >              return;
> > +
> >          spin_lock(&call_lock);
> >      }
> >
> > @@ -78,6 +88,10 @@ void on_selected_cpus(
> >
> >  out:
> >      spin_unlock(&call_lock);
> > +
> > +    if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
> >  }
> >
> >  int smp_call_function_interrupt(void)
> > @@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
> >      void *info = call_data.info;
> >      unsigned int cpu = smp_processor_id();
> >
> > +     if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0],
> > call_data.selected.bits[0]);
> > +
> >      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
> >          return -EPERM;
> >
> > Our issue (simultaneous cross-interrupts) occurred during domU boot:
> >
> > [    7.507812] oom_adj 2 => oom_score_adj 117
> > [    7.507812] oom_adj 4 => oom_score_adj 235
> > [    7.507812] oom_adj 9 => oom_score_adj 529
> > [    7.507812] oom_adj 15 => oom_score_adj 1000
> > [    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching
> > index 0 [0, ]
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001,
> > cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000000
> > [   11.023437] usbcore: registered new interface driver usbfs
> > [   11.023437] usbcore: registered new interface driver hub
> > [   11.023437] usbcore: registered new device driver usb
> > [   11.039062] usbcore: registered new interface driver usbhid
> > [   11.039062] usbhid: USB HID core driver
> >
> >>
> >> Do you pass-through PPIs to dom0?
> >
> >
> > If I understand correctly, PPIs are IRQs 16 to 31.
> > So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> > routed to both CPUs.
> >
> > And I have printed all IRQs that reach the gic_route_irq_to_guest()
> > and gic_route_irq() functions.
> > ...
> > (XEN) GIC initialization:
> > (XEN)         gic_dist_addr=0000000048211000
> > (XEN)         gic_cpu_addr=0000000048212000
> > (XEN)         gic_hyp_addr=0000000048214000
> > (XEN)         gic_vcpu_addr=0000000048216000
> > (XEN)         gic_maintenance_irq=25
> > (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> > (XEN) Using scheduler: SMP Credit Scheduler (credit)
> > (XEN) Allocated console ring of 16 KiB.
> > (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> > (XEN) Bringing up CPU1
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> > (XEN) CPU 1 booted.
> > (XEN) Brought up 2 CPUs
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
> > (XEN) Loading kernel from boot module 2
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
> > (XEN) Loading zImage from 00000000c0000040 to
> > 00000000c8008000-00000000c8304eb0
> > (XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
> > Xen)
> > (XEN) Freed 252kB init memory.
> > [    0.000000] /cpus/cpu@0 missing clock-frequency property
> > [    0.000000] /cpus/cpu@1 missing clock-frequency property
> > [    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
> > [    0.265625] ahci ahci.0.auto: can't get clock
> > [    0.867187] Freeing init memory: 224K
> > Parsing config from /xen/images/DomUAndroid.cfg
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
> > Daemon running with PID 569
> > ...
> >>
> >>
> >> --
> >> Julien Grall
> >
> >
> >
> >
> > --
> >
> > Name | Title
> > GlobalLogic
> > P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> > www.globallogic.com
> >
> > http://www.globallogic.com/email_disclaimer.txt
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:42:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TXH-0003YE-84; Wed, 29 Jan 2014 11:42:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8TXE-0003Xu-Ql
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:42:25 +0000
Received: from [85.158.143.35:62357] by server-1.bemta-4.messagelabs.com id
	EF/01-31661-029E8E25; Wed, 29 Jan 2014 11:42:24 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1390995741!1613070!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25297 invoked from network); 29 Jan 2014 11:42:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:42:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95650538"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:42:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:42:20 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8TXA-0008IU-Hu;
	Wed, 29 Jan 2014 11:42:20 +0000
Date: Wed, 29 Jan 2014 11:42:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
> Hello all,
> 
> I just recollected about one hack which we created
> as we needed to route HW IRQ in domU.
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d793ba..d0227b9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> libxl__multidev *multidev,
> 
>          LOG(DEBUG, "dom%d irq %d", domid, irq);
> 
> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> -                       : -EOVERFLOW;
>          if (!ret)
>              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>          if (ret < 0) {
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..b54c08e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>      if ( d->domain_id == 0 )
>          d->arch.vgic.nr_lines = gic_number_lines() - 32;
>      else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> need SPIs for the guest */
> 
>      d->arch.vgic.shared_irqs =
>          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 75e2df3..ba88901 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -29,6 +29,7 @@
>  #include <asm/page.h>
>  #include <public/domctl.h>
>  #include <xsm/xsm.h>
> +#include <asm/gic.h>
> 
>  static DEFINE_SPINLOCK(domctl_lock);
>  DEFINE_SPINLOCK(vcpu_alloc_lock);
> @@ -782,8 +783,11 @@ long
> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              ret = -EINVAL;
>          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>              ret = -EPERM;
> -        else if ( allow )
> -            ret = pirq_permit_access(d, pirq);
> +        else if ( allow ) {
> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> +            ret = pirq_permit_access(d, irq.irq);
> +            gic_route_irq_to_guest(d, &irq, "");
> +        }
>          else
>              ret = pirq_deny_access(d, pirq);
>      }
> (END)
> 
> It seems, the following patch can violate the logic about routing
> physical IRQs only to CPU0.
> In gic_route_irq_to_guest() we need to call gic_set_irq_properties()
> where the one of the parameters is cpumask_of(smp_processor_id()).
> But in this part of code this function can be executed on CPU1. And as
> result this can cause to the fact that the wrong value would set to
> target CPU mask.
> 
> Please, confirm my assumption.

That is correct.


> If I am right we have to add a basic HW IRQ routing to DomU in a right way.

We could add the cpumask parameter to gic_route_irq_to_guest. Or maybe
for now we could just hardcode the cpumask of cpu0
gic_route_irq_to_guest.

However keep in mind that if you plan on routing SPIs to guests other
than dom0, receiving all the interrupts on cpu0 might not be great for
performances.

It is impressive how small this patch is, if this is all that is needed
to get IRQ routing to guests working.



> On Tue, Jan 28, 2014 at 9:25 PM, Oleksandr Tyshchenko
> <oleksandr.tyshchenko@globallogic.com> wrote:
> > Hello Julien,
> >
> > Please see inline
> >
> >> gic_irq_eoi is only called for physical IRQ routed to the guest (eg:
> >> hard drive, network, ...). As far as I remember, these IRQs are only
> >> routed to CPU0.
> >
> >
> > I understand.
> >
> > But I have created a debug patch to show the issue:
> >
> > diff --git a/xen/common/smp.c b/xen/common/smp.c
> > index 46d2fc6..6123561 100644
> > --- a/xen/common/smp.c
> > +++ b/xen/common/smp.c
> > @@ -22,6 +22,8 @@
> >  #include <xen/smp.h>
> >  #include <xen/errno.h>
> >
> > +int locked = 0;
> > +
> >  /*
> >   * Structure and data for smp_call_function()/on_selected_cpus().
> >   */
> > @@ -53,11 +55,19 @@ void on_selected_cpus(
> >  {
> >      unsigned int nr_cpus;
> >
> > +    locked = 0;
> > +
> >      ASSERT(local_irq_is_enabled());
> >
> >      if (!spin_trylock(&call_lock)) {
> > +
> > +    locked = 1;
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +                 cpumask_of(smp_processor_id())->bits[0],
> > selected->bits[0]);
> > +
> >          if (smp_call_function_interrupt())
> >              return;
> > +
> >          spin_lock(&call_lock);
> >      }
> >
> > @@ -78,6 +88,10 @@ void on_selected_cpus(
> >
> >  out:
> >      spin_unlock(&call_lock);
> > +
> > +    if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
> >  }
> >
> >  int smp_call_function_interrupt(void)
> > @@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
> >      void *info = call_data.info;
> >      unsigned int cpu = smp_processor_id();
> >
> > +     if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel:
> > %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0],
> > call_data.selected.bits[0]);
> > +
> >      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
> >          return -EPERM;
> >
> > Our issue (simultaneous cross-interrupts) occurred during domU boot:
> >
> > [    7.507812] oom_adj 2 => oom_score_adj 117
> > [    7.507812] oom_adj 4 => oom_score_adj 235
> > [    7.507812] oom_adj 9 => oom_score_adj 529
> > [    7.507812] oom_adj 15 => oom_score_adj 1000
> > [    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching
> > index 0 [0, ]
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001,
> > cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002,
> > cpu_mask_sel: 00000000
> > [   11.023437] usbcore: registered new interface driver usbfs
> > [   11.023437] usbcore: registered new interface driver hub
> > [   11.023437] usbcore: registered new device driver usb
> > [   11.039062] usbcore: registered new interface driver usbhid
> > [   11.039062] usbhid: USB HID core driver
> >
> >>
> >> Do you pass-through PPIs to dom0?
> >
> >
> > If I understand correctly, PPIs are IRQs 16 to 31.
> > So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> > routed to both CPUs.
> >
> > And I have printed all IRQs which go through the gic_route_irq_to_guest
> > and gic_route_irq functions.
> > ...
> > (XEN) GIC initialization:
> > (XEN)         gic_dist_addr=0000000048211000
> > (XEN)         gic_cpu_addr=0000000048212000
> > (XEN)         gic_hyp_addr=0000000048214000
> > (XEN)         gic_vcpu_addr=0000000048216000
> > (XEN)         gic_maintenance_irq=25
> > (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> > (XEN) Using scheduler: SMP Credit Scheduler (credit)
> > (XEN) Allocated console ring of 16 KiB.
> > (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> > (XEN) Bringing up CPU1
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> > (XEN) CPU 1 booted.
> > (XEN) Brought up 2 CPUs
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
> > (XEN) Loading kernel from boot module 2
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
> > (XEN) Loading zImage from 00000000c0000040 to
> > 00000000c8008000-00000000c8304eb0
> > (XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
> > Xen)
> > (XEN) Freed 252kB init memory.
> > [    0.000000] /cpus/cpu@0 missing clock-frequency property
> > [    0.000000] /cpus/cpu@1 missing clock-frequency property
> > [    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
> > [    0.265625] ahci ahci.0.auto: can't get clock
> > [    0.867187] Freeing init memory: 224K
> > Parsing config from /xen/images/DomUAndroid.cfg
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
> > Daemon running with PID 569
> > ...
> >>
> >>
> >> --
> >> Julien Grall
> >
> >
> >
> >
> > --
> >
> > Name | Title
> > GlobalLogic
> > P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> > www.globallogic.com
> >
> > http://www.globallogic.com/email_disclaimer.txt
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:46:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TbT-0003pI-Ae; Wed, 29 Jan 2014 11:46:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8TbR-0003pB-Iy
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:46:45 +0000
Received: from [85.158.137.68:14712] by server-10.bemta-3.messagelabs.com id
	04/6E-07302-42AE8E25; Wed, 29 Jan 2014 11:46:44 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390996002!12041592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32701 invoked from network); 29 Jan 2014 11:46:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:46:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95651745"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:46:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:46:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8TbM-0008Mb-MM;
	Wed, 29 Jan 2014 11:46:40 +0000
Date: Wed, 29 Jan 2014 11:46:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Stefano Stabellini wrote:
> On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
> > Hello all,
> > 
> > I just recalled one hack which we created
> > when we needed to route HW IRQs in domU.
> > 
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 9d793ba..d0227b9 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> > libxl__multidev *multidev,
> > 
> >          LOG(DEBUG, "dom%d irq %d", domid, irq);
> > 
> > -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> > -                       : -EOVERFLOW;
> >          if (!ret)
> >              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
> >          if (ret < 0) {
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..b54c08e 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
> >      if ( d->domain_id == 0 )
> >          d->arch.vgic.nr_lines = gic_number_lines() - 32;
> >      else
> > -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */
> > 
> >      d->arch.vgic.shared_irqs =
> >          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> > diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> > index 75e2df3..ba88901 100644
> > --- a/xen/common/domctl.c
> > +++ b/xen/common/domctl.c
> > @@ -29,6 +29,7 @@
> >  #include <asm/page.h>
> >  #include <public/domctl.h>
> >  #include <xsm/xsm.h>
> > +#include <asm/gic.h>
> > 
> >  static DEFINE_SPINLOCK(domctl_lock);
> >  DEFINE_SPINLOCK(vcpu_alloc_lock);
> > @@ -782,8 +783,11 @@ long
> > do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >              ret = -EINVAL;
> >          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
> >              ret = -EPERM;
> > -        else if ( allow )
> > -            ret = pirq_permit_access(d, pirq);
> > +        else if ( allow ) {
> > +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> > +            ret = pirq_permit_access(d, irq.irq);
> > +            gic_route_irq_to_guest(d, &irq, "");
> > +        }
> >          else
> >              ret = pirq_deny_access(d, pirq);
> >      }
> > (END)
> > 
> > It seems the following patch can violate the logic of routing
> > physical IRQs only to CPU0.
> > In gic_route_irq_to_guest() we need to call gic_set_irq_properties(),
> > where one of the parameters is cpumask_of(smp_processor_id()).
> > But in this part of the code the function can be executed on CPU1, and
> > as a result the wrong value may be set in the target CPU mask.
> > 
> > Please confirm my assumption.
> 
> That is correct.
> 
> 
> > If I am right, we have to add basic HW IRQ routing to DomU in the right way.
> 
> We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe
> for now we could just hardcode the cpumask of cpu0 in
> gic_route_irq_to_guest.
> 
> However keep in mind that if you plan on routing SPIs to guests other
> than dom0, receiving all the interrupts on cpu0 might not be great for
> performance.

Thinking twice about it, it might be the only acceptable change for 4.4.


diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..af96a31 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
 
     level = dt_irq_is_level_triggered(irq);
 
-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
 
     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:46:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TbT-0003pI-Ae; Wed, 29 Jan 2014 11:46:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8TbR-0003pB-Iy
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:46:45 +0000
Received: from [85.158.137.68:14712] by server-10.bemta-3.messagelabs.com id
	04/6E-07302-42AE8E25; Wed, 29 Jan 2014 11:46:44 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1390996002!12041592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32701 invoked from network); 29 Jan 2014 11:46:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:46:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95651745"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:46:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:46:41 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8TbM-0008Mb-MM;
	Wed, 29 Jan 2014 11:46:40 +0000
Date: Wed, 29 Jan 2014 11:46:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Stefano Stabellini wrote:
> On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
> > Hello all,
> > 
> > I just recollected about one hack which we created
> > as we needed to route HW IRQ in domU.
> > 
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 9d793ba..d0227b9 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> > libxl__multidev *multidev,
> > 
> >          LOG(DEBUG, "dom%d irq %d", domid, irq);
> > 
> > -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> > -                       : -EOVERFLOW;
> >          if (!ret)
> >              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
> >          if (ret < 0) {
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..b54c08e 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
> >      if ( d->domain_id == 0 )
> >          d->arch.vgic.nr_lines = gic_number_lines() - 32;
> >      else
> > -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> > need SPIs for the guest */
> > 
> >      d->arch.vgic.shared_irqs =
> >          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> > diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> > index 75e2df3..ba88901 100644
> > --- a/xen/common/domctl.c
> > +++ b/xen/common/domctl.c
> > @@ -29,6 +29,7 @@
> >  #include <asm/page.h>
> >  #include <public/domctl.h>
> >  #include <xsm/xsm.h>
> > +#include <asm/gic.h>
> > 
> >  static DEFINE_SPINLOCK(domctl_lock);
> >  DEFINE_SPINLOCK(vcpu_alloc_lock);
> > @@ -782,8 +783,11 @@ long
> > do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >              ret = -EINVAL;
> >          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
> >              ret = -EPERM;
> > -        else if ( allow )
> > -            ret = pirq_permit_access(d, pirq);
> > +        else if ( allow ) {
> > +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> > +            ret = pirq_permit_access(d, irq.irq);
> > +            gic_route_irq_to_guest(d, &irq, "");
> > +        }
> >          else
> >              ret = pirq_deny_access(d, pirq);
> >      }
> > (END)
> > 
> > It seems, the following patch can violate the logic about routing
> > physical IRQs only to CPU0.
> > In gic_route_irq_to_guest() we need to call gic_set_irq_properties()
> > where the one of the parameters is cpumask_of(smp_processor_id()).
> > But in this part of code this function can be executed on CPU1. And as
> > result this can cause to the fact that the wrong value would set to
> > target CPU mask.
> > 
> > Please, confirm my assumption.
> 
> That is correct.
> 
> 
> > If I am right we have to add a basic HW IRQ routing to DomU in a right way.
> 
> We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe
> for now we could just hardcode the cpumask of cpu0 in
> gic_route_irq_to_guest.
> 
> However keep in mind that if you plan on routing SPIs to guests other
> than dom0, receiving all the interrupts on cpu0 might not be great for
> performance.

Thinking twice about it, it might be the only acceptable change for 4.4.


diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..af96a31 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
 
     level = dt_irq_is_level_triggered(irq);
 
-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
 
     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {
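
The target-CPU choice above can be illustrated with a small standalone model (a sketch only, not Xen code; it assumes the GICv2 convention that each SPI's ITARGETSR field holds one target bit per CPU):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone model, not Xen source: in the GICv2 distributor, each SPI's
 * target field is a per-CPU bitmask.  The original code programmed
 * cpumask_of(smp_processor_id()), which varies with whichever CPU happens
 * to run gic_route_irq_to_guest(); the proposed change always programs
 * cpumask_of(0), pinning the SPI to CPU0. */
static uint8_t cpumask_of(unsigned int cpu)
{
    return (uint8_t)(1u << cpu);        /* one target bit per CPU */
}

/* Mask the original code would program when run on 'current_cpu'. */
static uint8_t target_mask_current(unsigned int current_cpu)
{
    return cpumask_of(current_cpu);
}

/* Mask the hardcoded variant programs, independent of the executing CPU. */
static uint8_t target_mask_cpu0(void)
{
    return cpumask_of(0);
}
```

Run from CPU1, the original code would program 0x02 while the routing logic assumes SPIs land on CPU0 (0x01); hardcoding cpumask_of(0) removes that dependence on the executing CPU.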


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 11:48:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TdL-00044x-T4; Wed, 29 Jan 2014 11:48:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1W8TdI-00044D-VS
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:48:42 +0000
Received: from [193.109.254.147:48722] by server-4.bemta-14.messagelabs.com id
	A0/A8-32066-89AE8E25; Wed, 29 Jan 2014 11:48:40 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1390996119!585637!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28736 invoked from network); 29 Jan 2014 11:48:39 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 11:48:39 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1W8TdF-000FZC-1E; Wed, 29 Jan 2014 11:48:37 +0000
Date: Wed, 29 Jan 2014 12:48:37 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129114837.GB54797@deinos.phlegethon.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390995662.31814.76.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:41 +0000 on 29 Jan (1390992062), Ian Campbell wrote:
> On Wed, 2014-01-29 at 12:38 +0100, Tim Deegan wrote:
> > At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> > > On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > > > On Tue, 28 Jan 2014 10:31:36 +0000
> > > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > > The only thing x86-specific here is that {get,put}_pg_owner() may
> > > > > not exist on ARM. But the general operation isn't x86-specific, so
> > > > > there shouldn't be any CONFIG_X86 dependency here. Instead
> > > > > you ought to work out with the ARM maintainers whether to stub
> > > > > out those two functions, or whether the functionality is useful
> > > > > there too (and hence proper implementations would be needed).
> > [...]
> > > Yes, please just make get/put_pg_owner common.
> > > 
> > > The only required change would be to:
> > >     if ( unlikely(paging_mode_translate(curr)) )
> > >     {
> > >         MEM_LOG("Cannot mix foreign mappings with translated domains");
> > >         goto out;
> > >     }
> > > 
> > > which is not needed for ARM, and I suspect needs adjusting for PVH too
> > > (ah, there it is in the next patch). I think the best solution there
> > > would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
> > > some better name, I don't especially like my suggestion)
> > > 
> > > on ARM:
> > > 
> > > #define paging_mode_supports_foreign(d) (1)
> > > 
> > > on x86:
> > > 
> > > #define paging_mode_supports_foreign(d) (is_pvh_domain(curr) || !paging_mode_translate(curr))
> > > 
> > 
> > Hmmm.  That's likely to have unintended consequences somewhere.  (And
> > if that check is really not needed for PVH maybe it's not needed for
> > HVM either, given that they share all their paging support code).
> > 
> > But I don't think we need to tinker with it anyway - AFAICS,
> > get_pg_owner() isn't really what's wanted in the XATP code.  All the
> > other uses of get_pg_owner() are in the x86 PV MMU code, which this is
> > definitely not, and it handles cases (like mmio) that we don't want
> > here anyway.  How about using rcu_lock_live_remote_domain_by_id()?
> 
> We have a struct page in our hand -- don't we need to lookup the owner
> and lock it somewhat atomically?

I'm not sure what you mean: 
 - the code that Mukesh is adding doesn't have a struct page, it's
   just grabbing the foreign domid from the hypercall arg;
 - if we did have a struct page, we'd just need to take a ref to 
   stop the owner changing underfoot; and
 - get_pg_owner() takes a domid anyway.

Tim.
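
Tim's suggestion can be sketched as follows. This is a toy model with a stubbed lookup table, not the real Xen API; only the calling convention mirrors Xen's rcu_lock_live_remote_domain_by_id()/rcu_unlock_domain(), which in Xen also take and drop an RCU read lock:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the lookup pattern under discussion; the structures and
 * table are stand-ins for illustration only. */
struct domain {
    unsigned int domain_id;
    int is_dying;
};

static struct domain domain_table[] = {
    { 0, 0 },   /* dom0, alive */
    { 1, 1 },   /* dom1, dying */
};

/* Return the live domain with the given id, or NULL.  In Xen, the real
 * function additionally holds an RCU read lock so the domain cannot
 * disappear underfoot, and refuses to return the current domain. */
static struct domain *rcu_lock_live_remote_domain_by_id(unsigned int domid)
{
    for (size_t i = 0; i < sizeof(domain_table) / sizeof(domain_table[0]); i++)
        if (domain_table[i].domain_id == domid && !domain_table[i].is_dying)
            return &domain_table[i];
    return NULL;
}

static void rcu_unlock_domain(struct domain *d)
{
    (void)d;    /* stub: the real function drops the RCU read lock */
}

/* XATP-style use: resolve the foreign domid taken from the hypercall
 * argument, with no struct page or PV MMU machinery involved. */
static int xatp_check_foreign(unsigned int foreign_domid)
{
    struct domain *fd = rcu_lock_live_remote_domain_by_id(foreign_domid);

    if (fd == NULL)
        return -3;      /* stand-in for -ESRCH */
    rcu_unlock_domain(fd);
    return 0;
}
```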


From xen-devel-bounces@lists.xen.org Wed Jan 29 11:51:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TgE-0004LW-Jk; Wed, 29 Jan 2014 11:51:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TgD-0004LQ-0H
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 11:51:41 +0000
Received: from [85.158.139.211:60823] by server-8.bemta-5.messagelabs.com id
	80/5B-05298-C4BE8E25; Wed, 29 Jan 2014 11:51:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1390996297!350683!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25120 invoked from network); 29 Jan 2014 11:51:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:51:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97642438"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 11:51:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:51:36 -0500
Message-ID: <1390996295.31814.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Tim Deegan <tim@xen.org>
Date: Wed, 29 Jan 2014 11:51:35 +0000
In-Reply-To: <20140129114837.GB54797@deinos.phlegethon.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:48 +0100, Tim Deegan wrote:
> At 11:41 +0000 on 29 Jan (1390992062), Ian Campbell wrote:
> > On Wed, 2014-01-29 at 12:38 +0100, Tim Deegan wrote:
> > > At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> > > > On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > > > > On Tue, 28 Jan 2014 10:31:36 +0000
> > > > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > > > The only thing x86-specific here is that {get,put}_pg_owner() may
> > > > > > not exist on ARM. But the general operation isn't x86-specific, so
> > > > > > there shouldn't be any CONFIG_X86 dependency here. Instead
> > > > > > you ought to work out with the ARM maintainers whether to stub
> > > > > > out those two functions, or whether the functionality is useful
> > > > > > there too (and hence proper implementations would be needed).
> > > [...]
> > > > Yes, please just make get/put_pg_owner common.
> > > > 
> > > > The only required change would be to:
> > > >     if ( unlikely(paging_mode_translate(curr)) )
> > > >     {
> > > >         MEM_LOG("Cannot mix foreign mappings with translated domains");
> > > >         goto out;
> > > >     }
> > > > 
> > > > which is not needed for ARM, and I suspect needs adjusting for PVH too
> > > > (ah, there it is in the next patch). I think the best solution there
> > > > would be a new predicate e.g. paging_mode_supports_foreign(curr) (or
> > > > some better name, I don't especially like my suggestion)
> > > > 
> > > > on ARM:
> > > > 
> > > > #define paging_mode_supports_foreign(d) (1)
> > > > 
> > > > on x86:
> > > > 
> > > > #define paging_mode_supports_foreign(d) (is_pvh_domain(curr) || !paging_mode_translate(curr))
> > > > 
> > > 
> > > Hmmm.  That's likely to have unintended consequences somewhere.  (And
> > > if that check is really not needed for PVH maybe it's not needed for
> > > HVM either, given that they share all their paging support code).
> > > 
> > > But I don't think we need to tinker with it anyway - AFAICS,
> > > get_pg_owner() isn't really what's wanted in the XATP code.  All the
> > > other uses of get_pg_owner() are in the x86 PV MMU code, which this is
> > > definitely not, and it handles cases (like mmio) that we don't want
> > > here anyway.  How about using rcu_lock_live_remote_domain_by_id()?
> > 
> > We have a struct page in our hand -- don't we need to lookup the owner
> > and lock it somewhat atomically?
> 
> I'm not sure what you mean: 
>  - the code that Mukesh is adding doesn't have a struct page, it's
>    just grabbing the foreign domid from the hypercall arg;
>  - if we did have a struct page, we'd just need to take a ref to 
>    stop the owner changing underfoot; and
>  - get_pg_owner() takes a domid anyway.

Sorry, I was confused/misled by the name...

rcu_lock_live_remote_domain_by_id does look like what is needed.

Ian.



From xen-devel-bounces@lists.xen.org Wed Jan 29 11:56:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 11:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TkL-0004WG-Cn; Wed, 29 Jan 2014 11:55:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TkK-0004W9-8K
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 11:55:56 +0000
Received: from [85.158.143.35:44185] by server-2.bemta-4.messagelabs.com id
	31/48-10891-B4CE8E25; Wed, 29 Jan 2014 11:55:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1390995949!1607909!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26512 invoked from network); 29 Jan 2014 11:45:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 11:45:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95651471"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 11:45:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 06:45:48 -0500
Message-ID: <1390995947.31814.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 29 Jan 2014 11:45:47 +0000
In-Reply-To: <20140129111939.GA26899@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
	<20140129111939.GA26899@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:19 +0100, Olaf Hering wrote:
> On Wed, Jan 29, Ian Campbell wrote:
> 
> > > +Default value:         on
> > 
> > I think this default should be "on if, available for that backend type".
> 
> Ok, will make this change.
> 
> > What happens if the backed does not support discard?
> 
> The toolstack just does not know if a phy device supports it, or if file
> backed storage can do hole punching. If feature-discard is set and the
> frontend sends a discard request, the backend would return an error
> (like ENOTSUPPORTED) and the frontend internally disables the discard
> flag. That's how it is done in pvops and the forward-ported xenlinux
> tree.

That sounds good.

Is it worth noting that enable-discard=1 is only advisory and will be
ignored if the underlying storage and/or backend doesn't understand it?
The real benefit of this option is to be able to force it off rather
than on I think.
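
For reference, the option under discussion would appear in an xl disk line roughly like this. The syntax here is hypothetical -- the option name ("discard" vs. "enable-discard") and its default were still being debated in this thread:

```
# Hypothetical xl disk specification; the discard knob's final name may
# differ from what this patch proposed:
disk = [ 'format=raw, vdev=xvda, access=rw, target=/dev/vg/guest, discard=0' ]
```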

> Up to now I have not prepared a change for the backend drivers. They
> could either force feature-discard to be true so that the error paths
> will be executed. Or they could ignore the discard-enable if the backing
> storage does not support discard.
> 
> > >      disk->readwrite = 1;
> > > +    disk->discard_enable = 1; /* Doing it twice?! */
> > 
> > Why?
> 
> That's what I'm asking you. Why is readwrite set here, and later on also
> in the .l file? At least just setting it here did not unconditionally
> enable it if no discard= was specified. I have not traced the code to
> see why that happens.

One for Ian J I think. Perhaps it is just setting the default?

readwrite is on unless you say "ro" in the config, so I suppose that
makes sense. I don't know about discard_enable -- if this were a defbool
it would probably go away anyway.

Ian.



Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:19 +0100, Olaf Hering wrote:
> On Wed, Jan 29, Ian Campbell wrote:
> 
> > > +Default value:         on
> > 
> > I think this default should be "on, if available for that backend type".
> 
> Ok, will make this change.
> 
> > What happens if the backend does not support discard?
> 
> The toolstack just does not know if a phy device supports it, or if file
> backed storage can do hole punching. If feature-discard is set and the
> frontend sends a discard request, the backend would return an error
> (like ENOTSUPPORTED) and the frontend internally disables the discard
> flag. That's how it is done in pvops and the forward-ported xenlinux
> tree.

That sounds good.

Is it worth noting that enable-discard=1 is only advisory and will be
ignored if the underlying storage and/or backend doesn't understand it?
The real benefit of this option is to be able to force it off rather
than on, I think.
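For illustration, a guest config exercising the new option might look like
the sketch below. The discard= keyword is taken from this thread; combining
it with the positional phy: disk syntax is an assumption about the patch
under review, not final syntax:

```
# Discard left at its default (on, where the backend can honour it):
disk = [ 'phy:/dev/vg/guest,xvda,w' ]

# Force discard off for this disk, whatever the backend supports:
disk = [ 'phy:/dev/vg/guest,xvda,w,discard=0' ]
```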

> Up to now I have not prepared a change for the backend drivers. They
> could either force feature-discard to be true so that the error paths
> will be exercised, or ignore discard-enable if the backing storage
> does not support discard.
> 
> > >      disk->readwrite = 1;
> > > +    disk->discard_enable = 1; /* Doing it twice?! */
> > 
> > Why?
> 
> That's what I'm asking you. Why is readwrite set here, and later on also
> in the .l file? At least just setting it here did not unconditionally
> enable it if no discard= was specified. I have not traced the code to
> see why that happens.

One for Ian J I think. Perhaps it is just setting the default?

readwrite is on unless you say "ro" in the config, so I suppose that
makes sense. I don't know about discard_enable -- if this were a defbool
it would probably go away anyway.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:00:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:00:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tod-00050Z-Of; Wed, 29 Jan 2014 12:00:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8Tob-000506-GY
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 12:00:21 +0000
Received: from [193.109.254.147:57146] by server-15.bemta-14.messagelabs.com
	id 58/4D-10839-35DE8E25; Wed, 29 Jan 2014 12:00:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1390996816!595131!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30921 invoked from network); 29 Jan 2014 12:00:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:00:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97644780"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:00:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:00:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8ToW-00007g-2L;
	Wed, 29 Jan 2014 12:00:16 +0000
Date: Wed, 29 Jan 2014 12:00:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <alpine.DEB.2.02.1401291152120.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony Perard <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [BUG] BSoD on Windows XP installation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Paolo,
we have been trying to fix a BSOD that would happen during the Windows
XP installation, once every ten times on average.
After many days of bisection, we found out that commit

commit 149f54b53b7666a3facd45e86eece60ce7d3b114
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Fri May 24 12:59:37 2013 +0200

    memory: add address_space_translate
 
breaks Xen support in QEMU, in particular the Xen mapcache.
The reason is that after this commit, the length l computed in
address_space_rw can span a page boundary; however, qemu_get_ram_ptr
still calls xen_map_cache asking it to map a single page (if
block->offset == 0). The appended patch works around the issue by
reverting to the old behaviour.

What do you think is the right fix for this?
Maybe we need to add a size parameter to qemu_get_ram_ptr?

I should add that this problem is time sensitive because it is a
blocker for the Xen 4.4 release (Xen is in RC2 right now).

Thanks for your feedback,

Stefano


diff --git a/exec.c b/exec.c
index 667a718..15edb69 100644
--- a/exec.c
+++ b/exec.c
@@ -1948,10 +1948,15 @@ bool address_space_rw(AddressSpace *as, hwaddr addr, uint8_t *buf,
     hwaddr addr1;
     MemoryRegion *mr;
     bool error = false;
+    hwaddr page;
 
     while (len > 0) {
         l = len;
         mr = address_space_translate(as, addr, &addr1, &l, is_write);
+        page = addr & TARGET_PAGE_MASK;
+        l = (page + TARGET_PAGE_SIZE) - addr;
+        if (l > len)
+            l = len;
 
         if (is_write) {
             if (!memory_access_is_direct(mr, is_write)) {
@@ -2057,11 +2062,16 @@ void cpu_physical_memory_write_rom(hwaddr addr,
     uint8_t *ptr;
     hwaddr addr1;
     MemoryRegion *mr;
+    hwaddr page;
 
     while (len > 0) {
         l = len;
         mr = address_space_translate(&address_space_memory,
                                      addr, &addr1, &l, true);
+        page = addr & TARGET_PAGE_MASK;
+        l = (page + TARGET_PAGE_SIZE) - addr;
+        if (l > len)
+            l = len;
 
         if (!(memory_region_is_ram(mr) ||
               memory_region_is_romd(mr))) {
@@ -2164,6 +2174,7 @@ void *address_space_map(AddressSpace *as,
     hwaddr l, xlat, base;
     MemoryRegion *mr, *this_mr;
     ram_addr_t raddr;
+    hwaddr page;
 
     if (len == 0) {
         return NULL;
@@ -2171,6 +2182,10 @@ void *address_space_map(AddressSpace *as,
 
     l = len;
     mr = address_space_translate(as, addr, &xlat, &l, is_write);
+    page = addr & TARGET_PAGE_MASK;
+    l = (page + TARGET_PAGE_SIZE) - addr;
+    if (l > len)
+        l = len;
     if (!memory_access_is_direct(mr, is_write)) {
         if (bounce.buffer) {
             return NULL;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:07:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8TvT-0005GG-02; Wed, 29 Jan 2014 12:07:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1W8TvR-0005GB-Ae
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:07:25 +0000
Received: from [85.158.143.35:5441] by server-2.bemta-4.messagelabs.com id
	9F/21-10891-BFEE8E25; Wed, 29 Jan 2014 12:07:23 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390997242!1605719!1
X-Originating-IP: [74.125.82.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26102 invoked from network); 29 Jan 2014 12:07:22 -0000
Received: from mail-wg0-f43.google.com (HELO mail-wg0-f43.google.com)
	(74.125.82.43)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:07:22 -0000
Received: by mail-wg0-f43.google.com with SMTP id y10so3237542wgg.34
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 04:07:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type;
	bh=FHIRzmsv/Phv/ygftfUdzuwvCYKRQy7Sf5kkSkOM7AE=;
	b=GuslM6PlN7JVHNVIQC9am7NdGCEBtevA6bLtMzQtK2QSud5AL+i0XvfaqbsNkNUuua
	CYlSqUWSEHgDcmC/ltkVKOV/A3//wBS7oh36qfwxxIuW+/Xn48oDlDfRBozF2PI+umP5
	8WGVNPQkuv2wAmYala3603IPYj8kRjpxU4ume/+z8JHbg3OyKPYTscT7kRkS0L5ACa1e
	HIC4H1PpW3hqySnP5P8xUSqD3XAwnVU9JdOnQsL2csqB/a9k88pYrvHmhI8NPEYQFGTl
	XqgXx4J6YbTEb1d1X+wFcRfH/RE02Vch+i/aQbyQP7tq6ypmoHJEb8I51+mu8V4JgzJg
	/7Hw==
X-Received: by 10.194.90.144 with SMTP id bw16mr4724661wjb.1.1390997242178;
	Wed, 29 Jan 2014 04:07:22 -0800 (PST)
Received: from [172.16.26.11] ([2.122.219.75])
	by mx.google.com with ESMTPSA id jw4sm4375449wjc.20.2014.01.29.04.07.20
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 04:07:21 -0800 (PST)
Message-ID: <52E8EEF8.5060107@xen.org>
Date: Wed, 29 Jan 2014 12:07:20 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <52E7EF70.4000000@karan.org>
In-Reply-To: <52E7EF70.4000000@karan.org>
X-Forwarded-Message-Id: <52E7EF70.4000000@karan.org>
Subject: [Xen-devel] Fwd: Re: [CentOS-devel] Cloud Instance SIG Hackathon @
 CentOS Dojo 31st Jan 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8496877513894171273=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============8496877513894171273==
Content-Type: multipart/alternative;
 boundary="------------020006070405010203060401"

This is a multi-part message in MIME format.
--------------020006070405010203060401
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit


Please get back to KB off-list if interested
Lars

-------- Original Message --------
Subject: 	Re: [CentOS-devel] Cloud Instance SIG Hackathon @ CentOS Dojo 
31st Jan 2014
Date: 	Tue, 28 Jan 2014 17:57:04 +0000
From: 	Karanbir Singh <mail-lists@karan.org>
Reply-To: 	The CentOS developers mailing list. <centos-devel@centos.org>
To: 	centos-devel@centos.org



Hi Lars,

On 01/28/2014 12:52 PM, Lars Kurth wrote:
> KB,
> I wouldn't be able to come myself. I was asking because I was
> going to forward the thread to the Xen user and dev community.
> Lars

I know that Jaime from OpenNebula is going to attempt delivering Xen
hypervisor controller scripts that work for xen4centos on the day, and
we will attempt to build PV images that work on xen4centos as well. So
if someone representing the Xen project were to come along, it would be
great.

Let me know offlist if someone is keen, and we can add them - but it
needs to happen before noon on Wednesday; we need to send the attendee
list over to the venue security guys by then.

regards


-- 
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
_______________________________________________
CentOS-devel mailing list
CentOS-devel@centos.org
http://lists.centos.org/mailman/listinfo/centos-devel





--------------020006070405010203060401--


--===============8496877513894171273==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8496877513894171273==--


From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tyt-0005g1-7O; Wed, 29 Jan 2014 12:10:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Tyr-0005fu-Sy
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:10:58 +0000
Received: from [85.158.143.35:47821] by server-1.bemta-4.messagelabs.com id
	43/CB-31661-1DFE8E25; Wed, 29 Jan 2014 12:10:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1390997455!1607011!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25232 invoked from network); 29 Jan 2014 12:10:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95660549"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 12:10:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:10:54 -0500
Message-ID: <1390997452.31814.90.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:10:52 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/4] xen/arm: fix guest builder cache coherency
 (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan/Ian/Keir -- the final patch involves tools changes and a new domctl,
which is why you are copied (although the domctl is marked as arm
specific you might have opinions on it).

On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.

Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in-guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on).
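The constraint described above can be sketched in a few lines of C. This is an illustrative model, not Xen code: it only captures the rule that once a trapped guest write sets SCTLR.M, the hypervisor must drop HCR.DC (and HCR.TVM, whose only purpose was to catch that transition). The bit positions match the ARMv8 encodings, but the function name and structure are invented for this sketch; a real hypervisor would also need a TLB flush for the guest's VMID at that point.

```c
#include <stdint.h>

#define SCTLR_M  (1u  << 0)    /* guest MMU enable bit in SCTLR */
#define HCR_DC   (1ull << 12)  /* HCR_EL2.DC: default-cacheable for the guest */
#define HCR_TVM  (1ull << 26)  /* HCR_EL2.TVM: trap guest VM-register writes */

/* Hypothetical handler for a trapped guest write to SCTLR: if the guest
 * is enabling its MMU, DC becomes UNPREDICTABLE and must be cleared, and
 * TVM is no longer needed since it existed only to catch this write. */
static uint64_t on_sctlr_write(uint64_t hcr, uint32_t new_sctlr)
{
    if (new_sctlr & SCTLR_M)
        hcr &= ~(HCR_DC | HCR_TVM);   /* real code would flush the TLB here */
    return hcr;
}
```

With this model, a freshly built guest runs with DC and TVM set; a write that leaves the MMU off changes nothing, while the first MMU-enabling write clears both bits, which is exactly the transition FreeBSD's boot path exercises.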

There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).

As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.

This has survived 20000 bootloops on arm32 and 9700 (ongoing) on arm64.
(I've been seeing a sporadic issue on arm64 but I believe it is
unrelated, although I also cannot reproduce it at the moment).

As well as being more correct (and secure!) this strategy is IMHO
simpler than the existing HCR.DC based thing and I'd like to push it to
4.4.

I'm aware that this is the third attempt to get this right and the
second one requiring a freeze exception :-(

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tzb-0005jf-Pl; Wed, 29 Jan 2014 12:11:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TzZ-0005j5-Lh
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:41 +0000
Received: from [85.158.143.35:15303] by server-3.bemta-4.messagelabs.com id
	3D/1A-11539-DFFE8E25; Wed, 29 Jan 2014 12:11:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390997497!1622469!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4621 invoked from network); 29 Jan 2014 12:11:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97649452"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:27 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-Gk;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:25 +0000
Message-ID: <1390997486-3986-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 3/4] xen: arm: rename p2m next_gfn_to_relinquish
	to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has uses other than relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
+     * preemptible manner this is updated to record where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tzd-0005k2-Ev; Wed, 29 Jan 2014 12:11:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Tza-0005jI-U5
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:43 +0000
Received: from [85.158.143.35:54686] by server-3.bemta-4.messagelabs.com id
	C9/2A-11539-EFFE8E25; Wed, 29 Jan 2014 12:11:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390997497!1622469!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4792 invoked from network); 29 Jan 2014 12:11:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97649453"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:27 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-9f;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:23 +0000
Message-ID: <1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 1/4] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first or at
the same time. It turns out that FreeBSD does this.

A follow-up patch will fix this (yet) another way (third time is the charm!).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c           |    7 --
 xen/arch/arm/traps.c            |  163 +--------------------------------------
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 -
 xen/include/asm-arm/domain.h    |    2 -
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 +----
 7 files changed, 9 insertions(+), 194 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..124cccf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -475,7 +469,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..377a1e3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,13 +1393,11 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case HSR_SYSREG_CNTP_CTL_EL0:
-    case HSR_SYSREG_CNTP_TVAL_EL0:
+    case CNTP_CTL_EL0:
+    case CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
@@ -1635,6 +1477,7 @@ done:
     if (first) unmap_domain_page(first);
 }
 
+
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..433ad55 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case HSR_SYSREG_CNTP_CTL_EL0:
+    case CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case HSR_SYSREG_CNTP_TVAL_EL0:
+    case CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_SYSREG_CNTPCT_EL0:
+    case HSR_CPREG64(CNTPCT):
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 508467a..f0f1d53 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,8 +121,6 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
-#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
-#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -262,9 +260,7 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
-#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
-#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 06e638f..dfe807d 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003c00)
+#define HSR_SYSREG_CRN_MASK (0x00003800)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 0cee0e9..48ad07e 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,23 +40,8 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
-#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
-#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
-#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
-#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
-#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
-#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
-#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
-#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
-#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
-#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
-
-#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
-#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
-#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
-
-
+#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
+#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tzb-0005jf-Pl; Wed, 29 Jan 2014 12:11:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TzZ-0005j5-Lh
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:41 +0000
Received: from [85.158.143.35:15303] by server-3.bemta-4.messagelabs.com id
	3D/1A-11539-DFFE8E25; Wed, 29 Jan 2014 12:11:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390997497!1622469!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4621 invoked from network); 29 Jan 2014 12:11:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97649452"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:27 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-Gk;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:25 +0000
Message-ID: <1390997486-3986-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 3/4] xen: arm: rename p2m next_gfn_to_relinquish
	to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has uses other than relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+     * preemptible manner this is updated to record where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tza-0005jK-OL; Wed, 29 Jan 2014 12:11:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8TzZ-0005j1-12
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:41 +0000
Received: from [85.158.143.35:54428] by server-2.bemta-4.messagelabs.com id
	86/48-10891-CFFE8E25; Wed, 29 Jan 2014 12:11:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390997497!1622469!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4436 invoked from network); 29 Jan 2014 12:11:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97649451"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:27 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-CS;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:24 +0000
Message-ID: <1390997486-3986-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 2/4] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function has not been solely about creating entries for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tzd-0005k2-Ev; Wed, 29 Jan 2014 12:11:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Tza-0005jI-U5
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:43 +0000
Received: from [85.158.143.35:54686] by server-3.bemta-4.messagelabs.com id
	C9/2A-11539-EFFE8E25; Wed, 29 Jan 2014 12:11:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1390997497!1622469!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4792 invoked from network); 29 Jan 2014 12:11:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97649453"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:27 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-9f;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:23 +0000
Message-ID: <1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 1/4] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first or at
the same time. It turns out that FreeBSD does this.

A follow-up patch will fix this in (yet) another way (third time is the charm!).

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/domain.c           |    7 --
 xen/arch/arm/traps.c            |  163 +--------------------------------------
 xen/arch/arm/vtimer.c           |    6 +-
 xen/include/asm-arm/cpregs.h    |    4 -
 xen/include/asm-arm/domain.h    |    2 -
 xen/include/asm-arm/processor.h |    2 +-
 xen/include/asm-arm/sysregs.h   |   19 +----
 7 files changed, 9 insertions(+), 194 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..124cccf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -475,7 +469,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..377a1e3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,13 +1393,11 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case HSR_SYSREG_CNTP_CTL_EL0:
-    case HSR_SYSREG_CNTP_TVAL_EL0:
+    case CNTP_CTL_EL0:
+    case CNTP_TVAL_EL0:
         if ( !vtimer_emulate(regs, hsr) )
         {
             dprintk(XENLOG_ERR,
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
@@ -1635,6 +1477,7 @@ done:
     if (first) unmap_domain_page(first);
 }
 
+
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..433ad55 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -240,18 +240,18 @@ static int vtimer_emulate_sysreg(struct cpu_user_regs *regs, union hsr hsr)
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
-    case HSR_SYSREG_CNTP_CTL_EL0:
+    case CNTP_CTL_EL0:
         vtimer_cntp_ctl(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
-    case HSR_SYSREG_CNTP_TVAL_EL0:
+    case CNTP_TVAL_EL0:
         vtimer_cntp_tval(regs, &r, sysreg.read);
         if ( sysreg.read )
             *x = r;
         return 1;
 
-    case HSR_SYSREG_CNTPCT_EL0:
+    case HSR_CPREG64(CNTPCT):
         return vtimer_cntpct(regs, x, sysreg.read);
 
     default:
diff --git a/xen/include/asm-arm/cpregs.h b/xen/include/asm-arm/cpregs.h
index 508467a..f0f1d53 100644
--- a/xen/include/asm-arm/cpregs.h
+++ b/xen/include/asm-arm/cpregs.h
@@ -121,8 +121,6 @@
 #define TTBR0           p15,0,c2        /* Translation Table Base Reg. 0 */
 #define TTBR1           p15,1,c2        /* Translation Table Base Reg. 1 */
 #define HTTBR           p15,4,c2        /* Hyp. Translation Table Base Register */
-#define TTBR0_32        p15,0,c2,c0,0   /* 32-bit access to TTBR0 */
-#define TTBR1_32        p15,0,c2,c0,1   /* 32-bit access to TTBR1 */
 #define HTCR            p15,4,c2,c0,2   /* Hyp. Translation Control Register */
 #define VTCR            p15,4,c2,c1,2   /* Virtualization Translation Control Register */
 #define VTTBR           p15,6,c2        /* Virtualization Translation Table Base Register */
@@ -262,9 +260,7 @@
 #define CPACR_EL1               CPACR
 #define CSSELR_EL1              CSSELR
 #define DACR32_EL2              DACR
-#define ESR_EL1                 DFSR
 #define ESR_EL2                 HSR
-#define FAR_EL1                 HIFAR
 #define FAR_EL2                 HIFAR
 #define HCR_EL2                 HCR
 #define HPFAR_EL2               HPFAR
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 06e638f..dfe807d 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -342,7 +342,7 @@ union hsr {
 #define HSR_SYSREG_OP0_SHIFT (20)
 #define HSR_SYSREG_OP1_MASK (0x0001c000)
 #define HSR_SYSREG_OP1_SHIFT (14)
-#define HSR_SYSREG_CRN_MASK (0x00003c00)
+#define HSR_SYSREG_CRN_MASK (0x00003800)
 #define HSR_SYSREG_CRN_SHIFT (10)
 #define HSR_SYSREG_CRM_MASK (0x0000001e)
 #define HSR_SYSREG_CRM_SHIFT (1)
diff --git a/xen/include/asm-arm/sysregs.h b/xen/include/asm-arm/sysregs.h
index 0cee0e9..48ad07e 100644
--- a/xen/include/asm-arm/sysregs.h
+++ b/xen/include/asm-arm/sysregs.h
@@ -40,23 +40,8 @@
     ((__HSR_SYSREG_##crm) << HSR_SYSREG_CRM_SHIFT) | \
     ((__HSR_SYSREG_##op2) << HSR_SYSREG_OP2_SHIFT)
 
-#define HSR_SYSREG_SCTLR_EL1      HSR_SYSREG(3,0,c1, c0,0)
-#define HSR_SYSREG_TTBR0_EL1      HSR_SYSREG(3,0,c2, c0,0)
-#define HSR_SYSREG_TTBR1_EL1      HSR_SYSREG(3,0,c2, c0,1)
-#define HSR_SYSREG_TCR_EL1        HSR_SYSREG(3,0,c2, c0,2)
-#define HSR_SYSREG_AFSR0_EL1      HSR_SYSREG(3,0,c5, c1,0)
-#define HSR_SYSREG_AFSR1_EL1      HSR_SYSREG(3,0,c5, c1,1)
-#define HSR_SYSREG_ESR_EL1        HSR_SYSREG(3,0,c5, c2,0)
-#define HSR_SYSREG_FAR_EL1        HSR_SYSREG(3,0,c6, c0,0)
-#define HSR_SYSREG_MAIR_EL1       HSR_SYSREG(3,0,c10,c2,0)
-#define HSR_SYSREG_AMAIR_EL1      HSR_SYSREG(3,0,c10,c3,0)
-#define HSR_SYSREG_CONTEXTIDR_EL1 HSR_SYSREG(3,0,c13,c0,1)
-
-#define HSR_SYSREG_CNTPCT_EL0     HSR_SYSREG(3,3,c14,c0,0)
-#define HSR_SYSREG_CNTP_CTL_EL0   HSR_SYSREG(3,3,c14,c2,1)
-#define HSR_SYSREG_CNTP_TVAL_EL0  HSR_SYSREG(3,3,c14,c2,0)
-
-
+#define CNTP_CTL_EL0  HSR_SYSREG(3,3,c14,c2,1)
+#define CNTP_TVAL_EL0 HSR_SYSREG(3,3,c14,c2,0)
 #endif
 
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:11:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Tzl-0005nu-Oi; Wed, 29 Jan 2014 12:11:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Tzi-0005mJ-Ac
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:11:51 +0000
Received: from [85.158.139.211:61614] by server-7.bemta-5.messagelabs.com id
	8D/B3-14867-500F8E25; Wed, 29 Jan 2014 12:11:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1390997506!355886!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18343 invoked from network); 29 Jan 2014 12:11:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:11:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95660775"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 12:11:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 07:11:28 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8TzK-0000ow-Kw;
	Wed, 29 Jan 2014 12:11:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 29 Jan 2014 12:11:26 +0000
Message-ID: <1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, george.dunlap@citrix.com,
	jbeulich@suse.com
Subject: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled, so we must ensure both that
they see consistent data in RAM (requiring a cache clean) and that old stale
data does not suddenly reappear when they later enable their caches (requiring
the invalidate).

We need to clean all caches in order to catch both pages dirtied by the domain
builder and those which have been scrubbed but not yet flushed. Separating the
two and flushing scrubbed pages at scrub time and only builder-dirtied pages
here would require tracking the dirtiness state in the guest's p2m, perhaps
via a new p2m type.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
---
 tools/libxc/xc_domain.c     |    8 ++++++
 tools/libxc/xenctrl.h       |    1 +
 tools/libxl/libxl_create.c  |    3 ++
 xen/arch/arm/domctl.c       |   12 ++++++++
 xen/arch/arm/p2m.c          |   64 +++++++++++++++++++++++++++++++++++++------
 xen/include/public/domctl.h |    9 ++++++
 6 files changed, 89 insertions(+), 8 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..e6fa4ff 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_mfn = 0;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..43dae5c 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,6 +453,7 @@ int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..55c86f0 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1364,7 +1364,10 @@ static void domain_create_cb(libxl__egc *egc,
     STATE_AO_GC(cdcs->dcs.ao);
 
     if (!rc)
+    {
         *cdcs->domid_out = domid;
+        xc_domain_cacheflush(CTX->xch, domid);
+    }
 
     libxl__ao_complete(egc, ao, rc);
 }
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..9e3b37d 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,12 +11,24 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <xen/guest_access.h>
+
+extern long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn);
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
+        if ( __copy_to_guest(u_domctl, domctl, 1) )
+            rc = -EFAULT;
+
+        return rc;
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..18bd500 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -228,15 +228,26 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
+static void do_one_cacheflush(paddr_t mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     xen_pfn_t *last_mfn)
 {
     int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -381,18 +392,42 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                    {
+                        count++;
+                        break;
+                    }
+
+                    count += 0x10;
+
+                    do_one_cacheflush(pte.p2m.base);
+                }
+                break;
         }
 
+        if ( last_mfn )
+            *last_mfn = addr >> PAGE_SHIFT;
+
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
-        if ( op == RELINQUISH && count >= 0x2000 )
+        switch ( op )
         {
-            if ( hypercall_preempt_check() )
+        case RELINQUISH:
+        case CACHEFLUSH:
+            if (count >= 0x2000 && hypercall_preempt_check() )
             {
                 p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
             count = 0;
+            break;
+        case INSERT:
+        case ALLOCATE:
+        case REMOVE:
+            /* No preemption */
+            break;
         }
 
         /* Got the next page */
@@ -438,7 +473,7 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return apply_p2m_changes(d, ALLOCATE, start, end,
-                             0, MATTR_MEM, p2m_ram_rw);
+                             0, MATTR_MEM, p2m_ram_rw, NULL);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,7 +482,7 @@ int map_mmio_regions(struct domain *d,
                      paddr_t maddr)
 {
     return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
-                             maddr, MATTR_DEV, p2m_mmio_direct);
+                             maddr, MATTR_DEV, p2m_mmio_direct, NULL);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -459,7 +494,7 @@ int guest_physmap_add_entry(struct domain *d,
     return apply_p2m_changes(d, INSERT,
                              pfn_to_paddr(gpfn),
                              pfn_to_paddr(gpfn + (1 << page_order)),
-                             pfn_to_paddr(mfn), MATTR_MEM, t);
+                             pfn_to_paddr(mfn), MATTR_MEM, t, NULL);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -469,7 +504,7 @@ void guest_physmap_remove_page(struct domain *d,
     apply_p2m_changes(d, REMOVE,
                       pfn_to_paddr(gpfn),
                       pfn_to_paddr(gpfn + (1<<page_order)),
-                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid, NULL);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -621,7 +656,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid, NULL);
+}
+
+long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    *start_mfn = MAX(*start_mfn, p2m->next_gfn_to_relinquish);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(*start_mfn),
+                             pfn_to_paddr(p2m->max_mapped_gfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid, start_mfn);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d7b22c3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,13 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_cacheflush {
+    /* Updated for progress */
+    xen_pfn_t start_mfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +961,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1020,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
-- 
1.7.10.4



--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,13 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_cacheflush {
+    /* Updated for progress */
+    xen_pfn_t start_mfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +961,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1020,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
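
[Editor's note] The resume pattern the patch above introduces for XEN_DOMCTL_cacheflush (do a bounded batch of work, record progress in start_mfn, return -EAGAIN so the caller re-issues the operation) can be sketched in isolation. The following is a hypothetical, self-contained simulation for illustration only; fake_cacheflush and cacheflush_loop are made-up names, and a real caller would issue the domctl through libxc rather than this stand-in.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/*
 * Illustrative model of the preemptible-domctl pattern: flush at most a
 * bounded batch of pages per call, record progress in *start_mfn, and
 * return -EAGAIN so the caller can restart from where it left off.
 */
#define BATCH 0x2000u  /* mirrors the preemption threshold in the patch */

static int fake_cacheflush(uint64_t *start_mfn, uint64_t end_mfn)
{
    uint64_t done = 0;

    while (*start_mfn < end_mfn) {
        /* ... flush the cache lines backing *start_mfn here ... */
        (*start_mfn)++;
        if (++done >= BATCH && *start_mfn < end_mfn)
            return -EAGAIN;  /* preempted: caller must re-issue */
    }
    return 0;                /* whole range flushed */
}

/* Caller-side loop: keep re-issuing until the range is complete. */
static int cacheflush_loop(uint64_t start_mfn, uint64_t end_mfn)
{
    int rc;

    do {
        rc = fake_cacheflush(&start_mfn, end_mfn);
    } while (rc == -EAGAIN);
    return rc;
}
```

In the hypervisor-side patch the same shape appears as the RELINQUISH/CACHEFLUSH cases of the preemption switch, with the updated last_mfn copied back through struct xen_domctl_cacheflush so userspace can restart the hypercall.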

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U30-0006K0-Be; Wed, 29 Jan 2014 12:15:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8U2y-0006JZ-Fi
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 12:15:12 +0000
Received: from [85.158.139.211:19545] by server-5.bemta-5.messagelabs.com id
	73/25-32749-FC0F8E25; Wed, 29 Jan 2014 12:15:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390997708!357626!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18157 invoked from network); 29 Jan 2014 12:15:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:15:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97650864"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:15:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:15:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8U2t-0000KE-LF;
	Wed, 29 Jan 2014 12:15:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Wed, 29 Jan 2014 12:15:05 +0000
Message-ID: <1390997707-28607-2-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
References: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 1/3] xen: move Xen PV machine files to hw/xenpv
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 hw/i386/Makefile.objs                |    2 +-
 hw/xenpv/Makefile.objs               |    2 ++
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 5 files changed, 3 insertions(+), 1 deletion(-)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 09ac433..0faccd7 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -2,7 +2,7 @@ obj-$(CONFIG_KVM) += kvm/
 obj-y += multiboot.o smbios.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
-obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
+obj-$(CONFIG_XEN) += ../xenpv/
 
 obj-y += kvmvapic.o
 obj-y += acpi-build.o
diff --git a/hw/xenpv/Makefile.objs b/hw/xenpv/Makefile.objs
new file mode 100644
index 0000000..49f6e9e
--- /dev/null
+++ b/hw/xenpv/Makefile.objs
@@ -0,0 +1,2 @@
+# Xen PV machine support
+obj-$(CONFIG_XEN) += xen_domainbuild.o xen_machine_pv.o
diff --git a/hw/i386/xen_domainbuild.c b/hw/xenpv/xen_domainbuild.c
similarity index 100%
rename from hw/i386/xen_domainbuild.c
rename to hw/xenpv/xen_domainbuild.c
diff --git a/hw/i386/xen_domainbuild.h b/hw/xenpv/xen_domainbuild.h
similarity index 100%
rename from hw/i386/xen_domainbuild.h
rename to hw/xenpv/xen_domainbuild.h
diff --git a/hw/i386/xen_machine_pv.c b/hw/xenpv/xen_machine_pv.c
similarity index 100%
rename from hw/i386/xen_machine_pv.c
rename to hw/xenpv/xen_machine_pv.c
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U30-0006KX-S7; Wed, 29 Jan 2014 12:15:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8U2z-0006Jg-HK
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 12:15:13 +0000
Received: from [85.158.139.211:13780] by server-8.bemta-5.messagelabs.com id
	55/E5-05298-0D0F8E25; Wed, 29 Jan 2014 12:15:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390997708!357626!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18239 invoked from network); 29 Jan 2014 12:15:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:15:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97650865"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:15:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:15:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8U2t-0000KE-Ls;
	Wed, 29 Jan 2014 12:15:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Wed, 29 Jan 2014 12:15:06 +0000
Message-ID: <1390997707-28607-3-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
References: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 2/3] xen: move Xen HVM files under hw/i386/xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 hw/i386/Makefile.objs            |    2 +-
 hw/i386/xen/Makefile.objs        |    1 +
 hw/{ => i386}/xen/xen_apic.c     |    0
 hw/{ => i386}/xen/xen_platform.c |    0
 hw/{ => i386}/xen/xen_pvdevice.c |    0
 hw/xen/Makefile.objs             |    1 -
 6 files changed, 2 insertions(+), 2 deletions(-)
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 0faccd7..77dcf06 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -2,7 +2,7 @@ obj-$(CONFIG_KVM) += kvm/
 obj-y += multiboot.o smbios.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
-obj-$(CONFIG_XEN) += ../xenpv/
+obj-$(CONFIG_XEN) += ../xenpv/ xen/
 
 obj-y += kvmvapic.o
 obj-y += acpi-build.o
diff --git a/hw/i386/xen/Makefile.objs b/hw/i386/xen/Makefile.objs
new file mode 100644
index 0000000..801a68d
--- /dev/null
+++ b/hw/i386/xen/Makefile.objs
@@ -0,0 +1 @@
+obj-y += xen_platform.o xen_apic.o xen_pvdevice.o
diff --git a/hw/xen/xen_apic.c b/hw/i386/xen/xen_apic.c
similarity index 100%
rename from hw/xen/xen_apic.c
rename to hw/i386/xen/xen_apic.c
diff --git a/hw/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
similarity index 100%
rename from hw/xen/xen_platform.c
rename to hw/i386/xen/xen_platform.c
diff --git a/hw/xen/xen_pvdevice.c b/hw/i386/xen/xen_pvdevice.c
similarity index 100%
rename from hw/xen/xen_pvdevice.c
rename to hw/i386/xen/xen_pvdevice.c
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index ce640c6..a0ca0aa 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,6 +1,5 @@
 # xen backend driver support
 common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
 
-obj-$(CONFIG_XEN_I386) += xen_platform.o xen_apic.o xen_pvdevice.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U2y-0006Jh-VD; Wed, 29 Jan 2014 12:15:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8U2x-0006JO-KS
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 12:15:11 +0000
Received: from [85.158.139.211:53782] by server-2.bemta-5.messagelabs.com id
	17/BB-23037-EC0F8E25; Wed, 29 Jan 2014 12:15:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390997708!357626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18035 invoked from network); 29 Jan 2014 12:15:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:15:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97650860"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:15:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:15:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8U2t-0000KE-KY;
	Wed, 29 Jan 2014 12:15:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Wed, 29 Jan 2014 12:15:04 +0000
Message-ID: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 0/3] QEMU/Xen: disentangle PV and HVM in QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This small series disentangles the Xen-specific files in QEMU. Files related to
PV and HVM guests are moved to their corresponding locations, and the build
system is updated to reflect those changes.

These patches are taken from my previous series. I think they are quite safe
to go in, so I have dropped the RFC tag and reposted them for inclusion.

Tested with the usual Xen build runes; everything works as before.

Wei.

Wei Liu (3):
  xen: move Xen PV machine files to hw/xenpv
  xen: move Xen HVM files under hw/i386/xen
  xen: factor out common functions

 Makefile.target                      |    6 +-
 hw/i386/Makefile.objs                |    2 +-
 hw/i386/xen/Makefile.objs            |    1 +
 hw/{ => i386}/xen/xen_apic.c         |    0
 hw/{ => i386}/xen/xen_platform.c     |    0
 hw/{ => i386}/xen/xen_pvdevice.c     |    0
 hw/xen/Makefile.objs                 |    1 -
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 xen-common-stub.c                    |   19 ++++++
 xen-common.c                         |  123 ++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c         |   18 ++---
 xen-all.c => xen-hvm.c               |  121 +++------------------------------
 xen-mapcache-stub.c                  |   39 +++++++++++
 16 files changed, 207 insertions(+), 125 deletions(-)
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U32-0006Kq-Ak; Wed, 29 Jan 2014 12:15:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8U30-0006Jr-4X
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 12:15:14 +0000
Received: from [85.158.143.35:29367] by server-3.bemta-4.messagelabs.com id
	B5/B1-11539-1D0F8E25; Wed, 29 Jan 2014 12:15:13 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1390997710!1620567!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30161 invoked from network); 29 Jan 2014 12:15:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:15:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97650867"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:15:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:15:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8U2t-0000KE-MW;
	Wed, 29 Jan 2014 12:15:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Wed, 29 Jan 2014 12:15:07 +0000
Message-ID: <1390997707-28607-4-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
References: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 3/3] xen: factor out common functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some common functions used by both HVM and PV are factored out from
xen-all.c to xen-common.c.

Finally, rename xen-all.c to xen-hvm.c, as the remaining functions are only
useful to HVM guests.

Create *-stub files and modify Makefile.target to reflect the changes.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 Makefile.target              |    6 ++-
 xen-common-stub.c            |   19 +++++++
 xen-common.c                 |  123 ++++++++++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c |   18 +++----
 xen-all.c => xen-hvm.c       |  121 ++++-------------------------------------
 xen-mapcache-stub.c          |   39 ++++++++++++++
 6 files changed, 203 insertions(+), 123 deletions(-)
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

diff --git a/Makefile.target b/Makefile.target
index af6ac7e..9c9e1c7 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -120,8 +120,10 @@ obj-y += dump.o
 LIBS+=$(libs_softmmu)
 
 # xen support
-obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
-obj-$(call lnot,$(CONFIG_XEN)) += xen-stub.o
+obj-$(CONFIG_XEN) += xen-common.o
+obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
+obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
+obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o xen-mapcache-stub.o
 
 # Hardware support
 ifeq ($(TARGET_NAME), sparc64)
diff --git a/xen-common-stub.c b/xen-common-stub.c
new file mode 100644
index 0000000..3152018
--- /dev/null
+++ b/xen-common-stub.c
@@ -0,0 +1,19 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu-common.h"
+#include "hw/xen/xen.h"
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+}
+
+int xen_init(void)
+{
+    return -ENOSYS;
+}
+
diff --git a/xen-common.c b/xen-common.c
new file mode 100644
index 0000000..5207318
--- /dev/null
+++ b/xen-common.c
@@ -0,0 +1,123 @@
+/*
+ * Copyright (C) 2014       Citrix Systems UK Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "hw/xen/xen_backend.h"
+#include "qmp-commands.h"
+#include "sysemu/char.h"
+
+//#define DEBUG_XEN
+
+#ifdef DEBUG_XEN
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+static int store_dev_info(int domid, CharDriverState *cs, const char *string)
+{
+    struct xs_handle *xs = NULL;
+    char *path = NULL;
+    char *newpath = NULL;
+    char *pts = NULL;
+    int ret = -1;
+
+    /* Only continue if we're talking to a pty. */
+    if (strncmp(cs->filename, "pty:", 4)) {
+        return 0;
+    }
+    pts = cs->filename + 4;
+
+    /* We now have everything we need to set the xenstore entry. */
+    xs = xs_open(0);
+    if (xs == NULL) {
+        fprintf(stderr, "Could not contact XenStore\n");
+        goto out;
+    }
+
+    path = xs_get_domain_path(xs, domid);
+    if (path == NULL) {
+        fprintf(stderr, "xs_get_domain_path() error\n");
+        goto out;
+    }
+    newpath = realloc(path, (strlen(path) + strlen(string) +
+                strlen("/tty") + 1));
+    if (newpath == NULL) {
+        fprintf(stderr, "realloc error\n");
+        goto out;
+    }
+    path = newpath;
+
+    strcat(path, string);
+    strcat(path, "/tty");
+    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
+        fprintf(stderr, "xs_write for '%s' fail", string);
+        goto out;
+    }
+    ret = 0;
+
+out:
+    free(path);
+    xs_close(xs);
+
+    return ret;
+}
+
+void xenstore_store_pv_console_info(int i, CharDriverState *chr)
+{
+    if (i == 0) {
+        store_dev_info(xen_domid, chr, "/console");
+    } else {
+        char buf[32];
+        snprintf(buf, sizeof(buf), "/device/console/%d", i);
+        store_dev_info(xen_domid, chr, buf);
+    }
+}
+
+
+static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
+{
+    char path[50];
+
+    if (xs == NULL) {
+        fprintf(stderr, "xenstore connection not initialized\n");
+        exit(1);
+    }
+
+    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
+    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
+        fprintf(stderr, "error recording dm state\n");
+        exit(1);
+    }
+}
+
+
+static void xen_change_state_handler(void *opaque, int running,
+                                     RunState state)
+{
+    if (running) {
+        /* record state running */
+        xenstore_record_dm_state(xenstore, "running");
+    }
+}
+
+int xen_init(void)
+{
+    xen_xc = xen_xc_interface_open(0, 0, 0);
+    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
+        xen_be_printf(NULL, 0, "can't open xen interface\n");
+        return -1;
+    }
+    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
+
+    return 0;
+}
+
diff --git a/xen-stub.c b/xen-hvm-stub.c
similarity index 91%
rename from xen-stub.c
rename to xen-hvm-stub.c
index ad189a6..00fa9b3 100644
--- a/xen-stub.c
+++ b/xen-hvm-stub.c
@@ -12,10 +12,7 @@
 #include "hw/xen/xen.h"
 #include "exec/memory.h"
 #include "qmp-commands.h"
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-}
+#include "hw/xen/xen_common.h"
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
@@ -47,24 +44,23 @@ qemu_irq *xen_interrupt_controller_init(void)
     return NULL;
 }
 
-int xen_init(void)
+void xen_register_framebuffer(MemoryRegion *mr)
 {
-    return -ENOSYS;
 }
 
-void xen_register_framebuffer(MemoryRegion *mr)
+void xen_modified_memory(ram_addr_t start, ram_addr_t length)
 {
 }
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+int xen_hvm_init(MemoryRegion **ram_memory)
 {
+    return 0;
 }
 
-void xen_modified_memory(ram_addr_t start, ram_addr_t length)
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
 {
 }
 
-int xen_hvm_init(MemoryRegion **ram_memory)
+void xen_shutdown_fatal_error(const char *fmt, ...)
 {
-    return 0;
 }
diff --git a/xen-all.c b/xen-hvm.c
similarity index 93%
rename from xen-all.c
rename to xen-hvm.c
index 4a594bd..0a49055 100644
--- a/xen-all.c
+++ b/xen-hvm.c
@@ -26,9 +26,9 @@
 #include <xen/hvm/params.h>
 #include <xen/hvm/e820.h>
 
-//#define DEBUG_XEN
+//#define DEBUG_XEN_HVM
 
-#ifdef DEBUG_XEN
+#ifdef DEBUG_XEN_HVM
 #define DPRINTF(fmt, ...) \
     do { fprintf(stderr, "xen: " fmt, ## __VA_ARGS__); } while (0)
 #else
@@ -569,15 +569,6 @@ static MemoryListener xen_memory_listener = {
     .priority = 10,
 };
 
-void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
-{
-    if (enable) {
-        memory_global_dirty_log_start();
-    } else {
-        memory_global_dirty_log_stop();
-    }
-}
-
 /* get the ioreq packets from share mem */
 static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
 {
@@ -880,82 +871,6 @@ static void cpu_handle_ioreq(void *opaque)
     }
 }
 
-static int store_dev_info(int domid, CharDriverState *cs, const char *string)
-{
-    struct xs_handle *xs = NULL;
-    char *path = NULL;
-    char *newpath = NULL;
-    char *pts = NULL;
-    int ret = -1;
-
-    /* Only continue if we're talking to a pty. */
-    if (strncmp(cs->filename, "pty:", 4)) {
-        return 0;
-    }
-    pts = cs->filename + 4;
-
-    /* We now have everything we need to set the xenstore entry. */
-    xs = xs_open(0);
-    if (xs == NULL) {
-        fprintf(stderr, "Could not contact XenStore\n");
-        goto out;
-    }
-
-    path = xs_get_domain_path(xs, domid);
-    if (path == NULL) {
-        fprintf(stderr, "xs_get_domain_path() error\n");
-        goto out;
-    }
-    newpath = realloc(path, (strlen(path) + strlen(string) +
-                strlen("/tty") + 1));
-    if (newpath == NULL) {
-        fprintf(stderr, "realloc error\n");
-        goto out;
-    }
-    path = newpath;
-
-    strcat(path, string);
-    strcat(path, "/tty");
-    if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
-        fprintf(stderr, "xs_write for '%s' fail", string);
-        goto out;
-    }
-    ret = 0;
-
-out:
-    free(path);
-    xs_close(xs);
-
-    return ret;
-}
-
-void xenstore_store_pv_console_info(int i, CharDriverState *chr)
-{
-    if (i == 0) {
-        store_dev_info(xen_domid, chr, "/console");
-    } else {
-        char buf[32];
-        snprintf(buf, sizeof(buf), "/device/console/%d", i);
-        store_dev_info(xen_domid, chr, buf);
-    }
-}
-
-static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
-{
-    char path[50];
-
-    if (xs == NULL) {
-        fprintf(stderr, "xenstore connection not initialized\n");
-        exit(1);
-    }
-
-    snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
-    if (!xs_write(xs, XBT_NULL, path, state, strlen(state))) {
-        fprintf(stderr, "error recording dm state\n");
-        exit(1);
-    }
-}
-
 static void xen_main_loop_prepare(XenIOState *state)
 {
     int evtchn_fd = -1;
@@ -973,17 +888,6 @@ static void xen_main_loop_prepare(XenIOState *state)
 }
 
 
-/* Initialise Xen */
-
-static void xen_change_state_handler(void *opaque, int running,
-                                     RunState state)
-{
-    if (running) {
-        /* record state running */
-        xenstore_record_dm_state(xenstore, "running");
-    }
-}
-
 static void xen_hvm_change_state_handler(void *opaque, int running,
                                          RunState rstate)
 {
@@ -1001,18 +905,6 @@ static void xen_exit_notifier(Notifier *n, void *data)
     xs_daemon_close(state->xenstore);
 }
 
-int xen_init(void)
-{
-    xen_xc = xen_xc_interface_open(0, 0, 0);
-    if (xen_xc == XC_HANDLER_INITIAL_VALUE) {
-        xen_be_printf(NULL, 0, "can't open xen interface\n");
-        return -1;
-    }
-    qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
-
-    return 0;
-}
-
 static void xen_read_physmap(XenIOState *state)
 {
     XenPhysmap *physmap = NULL;
@@ -1226,3 +1118,12 @@ void xen_modified_memory(ram_addr_t start, ram_addr_t length)
         }
     }
 }
+
+void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
+{
+    if (enable) {
+        memory_global_dirty_log_start();
+    } else {
+        memory_global_dirty_log_stop();
+    }
+}
diff --git a/xen-mapcache-stub.c b/xen-mapcache-stub.c
new file mode 100644
index 0000000..f4ddf53
--- /dev/null
+++ b/xen-mapcache-stub.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2014       Citrix Ltd.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "config.h"
+
+#include <sys/resource.h>
+
+#include "hw/xen/xen_backend.h"
+
+#include <xen/hvm/params.h>
+
+#include "sysemu/xen-mapcache.h"
+#include "trace.h"
+
+uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
+                       uint8_t lock)
+{
+    abort();
+    return NULL;
+}
+
+ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
+{
+    abort();
+    return 0;
+}
+
+void xen_invalidate_map_cache_entry(uint8_t *buffer)
+{
+    abort();
+}
+
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Jan 29 12:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U2y-0006Jh-VD; Wed, 29 Jan 2014 12:15:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W8U2x-0006JO-KS
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 12:15:11 +0000
Received: from [85.158.139.211:53782] by server-2.bemta-5.messagelabs.com id
	17/BB-23037-EC0F8E25; Wed, 29 Jan 2014 12:15:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1390997708!357626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18035 invoked from network); 29 Jan 2014 12:15:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:15:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97650860"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:15:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:15:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1W8U2t-0000KE-KY;
	Wed, 29 Jan 2014 12:15:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
Date: Wed, 29 Jan 2014 12:15:04 +0000
Message-ID: <1390997707-28607-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH 0/3] QEMU/Xen: disentangle PV and HVM in QEMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This small series disentangles the Xen-specific files in QEMU. Files related to
PV and HVM guests are moved to their corresponding locations, and the build
system is updated to reflect those changes.

These patches are taken from my previous series. I think they are quite safe
to go in, so I have dropped the RFC tag and reposted them for inclusion.

Tested with the usual Xen build runes; everything works as before.

Wei.

Wei Liu (3):
  xen: move Xen PV machine files to hw/xenpv
  xen: move Xen HVM files under hw/i386/xen
  xen: factor out common functions

 Makefile.target                      |    6 +-
 hw/i386/Makefile.objs                |    2 +-
 hw/i386/xen/Makefile.objs            |    1 +
 hw/{ => i386}/xen/xen_apic.c         |    0
 hw/{ => i386}/xen/xen_platform.c     |    0
 hw/{ => i386}/xen/xen_pvdevice.c     |    0
 hw/xen/Makefile.objs                 |    1 -
 hw/xenpv/Makefile.objs               |    2 +
 hw/{i386 => xenpv}/xen_domainbuild.c |    0
 hw/{i386 => xenpv}/xen_domainbuild.h |    0
 hw/{i386 => xenpv}/xen_machine_pv.c  |    0
 xen-common-stub.c                    |   19 ++++++
 xen-common.c                         |  123 ++++++++++++++++++++++++++++++++++
 xen-stub.c => xen-hvm-stub.c         |   18 ++---
 xen-all.c => xen-hvm.c               |  121 +++------------------------------
 xen-mapcache-stub.c                  |   39 +++++++++++
 16 files changed, 207 insertions(+), 125 deletions(-)
 create mode 100644 hw/i386/xen/Makefile.objs
 rename hw/{ => i386}/xen/xen_apic.c (100%)
 rename hw/{ => i386}/xen/xen_platform.c (100%)
 rename hw/{ => i386}/xen/xen_pvdevice.c (100%)
 create mode 100644 hw/xenpv/Makefile.objs
 rename hw/{i386 => xenpv}/xen_domainbuild.c (100%)
 rename hw/{i386 => xenpv}/xen_domainbuild.h (100%)
 rename hw/{i386 => xenpv}/xen_machine_pv.c (100%)
 create mode 100644 xen-common-stub.c
 create mode 100644 xen-common.c
 rename xen-stub.c => xen-hvm-stub.c (91%)
 rename xen-all.c => xen-hvm.c (93%)
 create mode 100644 xen-mapcache-stub.c

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:18:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U6D-00072g-D9; Wed, 29 Jan 2014 12:18:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>)
	id 1W8U6C-00071u-Fw; Wed, 29 Jan 2014 12:18:32 +0000
Received: from [193.109.254.147:44688] by server-6.bemta-14.messagelabs.com id
	51/DE-03396-791F8E25; Wed, 29 Jan 2014 12:18:31 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1390997909!604821!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16408 invoked from network); 29 Jan 2014 12:18:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:18:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="95663116"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 12:18:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:18:28 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W8U68-0000Mq-6f;
	Wed, 29 Jan 2014 12:18:28 +0000
Date: Wed, 29 Jan 2014 12:18:28 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129121827.GA1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390990304.31814.50.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > Sorry for the so late reply.
> > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> 
> And are you using the version of qemu-xen which ships with those
> releases or your own version, perhaps from upstream?
> 
> ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> 4.3.x but I thought it had been added during the 4.4.x development
> cycle. Adding Anthony + xen-devel to confirm.

We've added vcpu hotplug to our tree in Xen 4.3.

Upstream QEMU is able to do vcpu hotplug with Xen only as of the latest
release, 1.7. The two previous releases (1.5 and 1.6) are missing two
patches, and any QEMU release before that does not support cpu hotplug.

> > Here is the /var/log/xen/
> > Waiting for domain centos65.pv (domid 1) to die [pid 8116]
> > 
> > 
> > Here is the output of "xl -vvv vcpu-set"
> 
> Is this from 4.3 or 4.4? I think at this point we should focus on the
> issue with 4.4.
> 
> I also asked for your guest cfg file -- please can you show it to us.
> 

[...]

> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "cpu-add",
> >     "id": 2,
> >     "arguments": {
> >         "id": 0
> >     }
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: error
> > libxl: error: libxl_qmp.c:289:qmp_handle_error_response: received an
> > error message from QMP server: Not supported

This right here means that xl supports vcpu-set with qemu-xen, but the
QEMU being used does not.

> > On Tue, Oct 1, 2013 at 3:24 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Mon, 2013-09-30 at 10:52 -0600, Yun Wang wrote:
> > >> Hi all,
> > >>
> > >> I tried to use "xl vcpu-set" to change the vCPU number of VMs in Xen
> > >> 4.4-unstable and had the following errors.
> > >
> > > What was the full command line which you used?
> > >
> > > Which exact version of 4.4-unstable (i.e. git commit) were you using?
> > >
> > >> libxl: error: libxl_qmp.c:289:qmp_handle_error_response: received an
> > >> error message from QMP server: Unable to add CPU: 0, it already
> > >> exists.

This is just a message that I don't know how to remove.

Since libxl does not know which CPUs are already plugged in, it asks QEMU
to plug every requested CPU. QEMU simply replies with an error for each
CPU that is already plugged, and libxl prints every error that QEMU sends.
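To make the exchange concrete, here is an illustrative Python sketch of the
QMP messages involved. This is not libxl's actual code: the helper names and
the "already exists" error-matching heuristic are assumptions for
illustration only.

```python
import json


def build_cpu_add(seq_id, cpu_index):
    """Build the QMP 'cpu-add' command seen in the libxl debug log above."""
    return json.dumps({
        "execute": "cpu-add",
        "id": seq_id,
        "arguments": {"id": cpu_index},
    })


def is_already_plugged_error(reply):
    """Heuristic (an assumption, not libxl's real logic): treat an
    'already exists' QMP error as harmless, since the toolstack asks
    QEMU to plug every CPU up to the requested count."""
    error = reply.get("error")
    if error is None:
        return False
    return "already exists" in error.get("desc", "")
```

In a real session these JSON strings would be written to QEMU's QMP socket
after the capabilities handshake; the sketch only shows message construction
and error classification.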

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:22:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8U9v-0007OF-9m; Wed, 29 Jan 2014 12:22:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8U9t-0007O6-Nk; Wed, 29 Jan 2014 12:22:21 +0000
Received: from [193.109.254.147:63814] by server-4.bemta-14.messagelabs.com id
	FA/98-32066-C72F8E25; Wed, 29 Jan 2014 12:22:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1390998139!600053!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13676 invoked from network); 29 Jan 2014 12:22:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:22:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,741,1384300800"; d="scan'208";a="97652479"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:22:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:22:18 -0500
Message-ID: <1390998137.31814.92.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 29 Jan 2014 12:22:17 +0000
In-Reply-To: <20140129121827.GA1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > Sorry for the so late reply.
> > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > 
> > And are you using the version of qemu-xen which ships with those
> > releases or your own version, perhaps from upstream?
> > 
> > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > 4.3.x but I thought it had been added during the 4.4.x development
> > cycle. Adding Anthony + xen-devel to confirm.
> 
> We've added vcpu hotplug to our tree in Xen 4.3.
> 
> Upstream QEMU is able to do vcpu hotplug with Xen only as of the latest
> release, 1.7. The two previous releases (1.5 and 1.6) are missing two
> patches, and any QEMU release before that does not support cpu hotplug.

OK. IIRC Xen 4.3 included qemu 1.3 or so?

4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct? So it
also misses those patches.

Are we planning to backport those two patches for 4.4?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:37:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UNx-00087a-OK; Wed, 29 Jan 2014 12:36:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>)
	id 1W8UNw-00087R-Pp; Wed, 29 Jan 2014 12:36:52 +0000
Received: from [193.109.254.147:42557] by server-15.bemta-14.messagelabs.com
	id A7/4E-10839-4E5F8E25; Wed, 29 Jan 2014 12:36:52 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1390999010!575146!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22339 invoked from network); 29 Jan 2014 12:36:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:36:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97656137"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:36:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:36:49 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W8UNt-0000bm-8d;
	Wed, 29 Jan 2014 12:36:49 +0000
Date: Wed, 29 Jan 2014 12:36:49 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129123648.GB1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390998137.31814.92.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA1
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
> On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > > Sorry for the so late reply.
> > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > > 
> > > And are you using the version of qemu-xen which ships with those
> > > releases or your own version, perhaps from upstream?
> > > 
> > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > > 4.3.x but I thought it had been added during the 4.4.x development
> > > cycle. Adding Anthony + xen-devel to confirm.
> > 
> > We've added vcpu hotplug to our tree in Xen 4.3.
> > 
> > Upstream QEMU is able to do vcpu hotplug with Xen only as of the latest
> > release, 1.7. The two previous releases (1.5 and 1.6) are missing two
> > patches, and any QEMU release before that does not support cpu hotplug.
> 
> OK. IIRC Xen 4.3 included qemu 1.3 or so?

That's right, qemu 1.3.

> 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?

That's correct.

> So it also misses those patches.

No, I've put those patches on top of the merge; otherwise there would
have been a regression.

> Are we planning to backport those two patches for 4.4?

No :), it's not needed.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:38:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UPq-0008K9-Dn; Wed, 29 Jan 2014 12:38:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8UPp-0008IT-1l
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 12:38:49 +0000
Received: from [85.158.137.68:45016] by server-17.bemta-3.messagelabs.com id
	0A/B1-22569-856F8E25; Wed, 29 Jan 2014 12:38:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1390999126!12042717!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10751 invoked from network); 29 Jan 2014 12:38:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:38:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97656876"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 12:38:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:38:45 -0500
Message-ID: <1390999123.31814.96.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Wed, 29 Jan 2014 12:38:43 +0000
In-Reply-To: <CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:

> > I also don't see any patch to linux/Documentation/devicetree/bindings,
> > as was requested in that posting from 6 months ago. Where can I find
> > that?
> >
> > It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
> > hasn't landed?
> Yeah, it is dangling, and since a new patch is already posted I think we
> can wait for the final DT bindings.

It seems from the thread that the final bindings are going to differ
significantly from what is implemented in Xen and proposed in the above
thread (with a syscon driver that the reset driver references).

> >> Now if you want this to be fixed, I can quickly submit a V7 in which the
> >> mask field will just be hard-coded to 1, so the Xen code will always
> >> work even if the Linux code does get changed.
> >
> > Looks like the Linux driver uses 0xffffffff if the mask isn't given --
> > that seems like a good approach.
> >
> > I think we'll just have to accept that until the binding is specified
> > and documented (in linux/Documentation/devicetree/bindings) then we may
> > have to be prepared to change the Xen implementation to match the final
> > spec without regard to backwards compat. If we aren't happy with that
> > then I should revert the patch now and we will have to live without
> > reboot support in the meantime.
> Please do not revert the patch; I think we can go ahead with the current patch.
> Once the Linux side is concluded I will make the minor changes in the Xen code
> based on the new DT bindings.

It doesn't sound to me like these are going to be minor changes.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 12:39:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 12:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UQs-0000Ao-4J; Wed, 29 Jan 2014 12:39:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8UQq-0000AV-3J; Wed, 29 Jan 2014 12:39:52 +0000
Received: from [85.158.139.211:50197] by server-2.bemta-5.messagelabs.com id
	EA/F7-23037-796F8E25; Wed, 29 Jan 2014 12:39:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1390999189!372594!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 499 invoked from network); 29 Jan 2014 12:39:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 12:39:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="95668468"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 12:39:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 07:39:48 -0500
Message-ID: <1390999187.31814.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 29 Jan 2014 12:39:47 +0000
In-Reply-To: <20140129123648.GB1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
> On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
> > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > > > Sorry for the very late reply.
> > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > > > 
> > > > And are you using the version of qemu-xen which ships with those
> > > > releases or your own version, perhaps from upstream?
> > > > 
> > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > > > 4.3.x but I thought it had been added during the 4.4.x development
> > > > cycle. Adding Anthony + xen-devel to confirm.
> > > 
> > > We've added vcpu hotplug to our tree in Xen 4.3.
> > > 
> > > QEMU upstream is able to do vcpu hotplug with Xen only as of the latest
> > > release, 1.7. The two previous releases (1.5 and 1.6) are missing two
> > > patches, and any QEMU release before that does not support CPU hotplug.
> > 
> > OK. IIRC Xen 4.3 included qemu 1.3 or so?
> 
> That's right, qemu 1.3.
> 
> > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
> 
> That's correct.
> 
> > So it also misses those patches.
> 
> No, I've put those patches on top of the merge; otherwise there would
> have been a regression.

So this functionality should work with Xen 4.4.0-rc1 (using whatever
version was referenced by xen.git/Config.mk), or not?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:01:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Uks-0001dA-Uu; Wed, 29 Jan 2014 13:00:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Ukr-0001cv-7L
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:00:33 +0000
Received: from [85.158.137.68:58893] by server-13.bemta-3.messagelabs.com id
	72/F8-26923-07BF8E25; Wed, 29 Jan 2014 13:00:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391000431!12018592!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17749 invoked from network); 29 Jan 2014 13:00:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 13:00:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 13:00:30 +0000
Message-Id: <52E9098B0200007800117EC5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 13:00:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;

Why can't the function parameter be domid_t right away?

> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
> +        if ( __copy_to_guest(u_domctl, domctl, 1) )

While you certainly say so in the public header change, I think you
recall that we pretty recently changed another hypercall to not be
the only inconsistent one modifying the input structure in order to
handle hypercall preemption.

Further - who's responsible for initiating the resume after a
preemption? p2m_cache_flush() returning -EAGAIN isn't being
handled here, and also not in libxc (which would be awkward
anyway).
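
[Editorial note: the resume question above is about the usual continuation
convention, where the hypercall returns -EAGAIN with its progress written
back into the input structure and some caller reissues it until it
succeeds. A minimal userspace sketch of that loop follows; fake_cacheflush
is a purely hypothetical stand-in for the real domctl, and the names and
chunk size here are illustrative, not the actual libxc API.]

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define CHUNK 0x2000  /* illustrative batch size, echoing the 2MiB preempt limit */

static uint64_t flushed;  /* mfns "flushed" so far by the fake hypercall */

/* Stand-in for the preemptible operation: make at most CHUNK mfns of
 * progress, record the resume point in *start_mfn, and return -EAGAIN
 * while work remains (mimicking a preempted hypercall). */
static int fake_cacheflush(uint64_t end_mfn, uint64_t *start_mfn)
{
    uint64_t n = end_mfn - *start_mfn;

    if ( n > CHUNK )
    {
        flushed += CHUNK;
        *start_mfn += CHUNK;
        return -EAGAIN;     /* preempted: caller must reissue */
    }

    flushed += n;
    *start_mfn = end_mfn;
    return 0;
}

/* Caller-side continuation loop: reissue until the operation completes.
 * start_mfn carries the progress between invocations. */
static int cacheflush_all(uint64_t end_mfn)
{
    uint64_t start_mfn = 0;
    int rc;

    do {
        rc = fake_cacheflush(end_mfn, &start_mfn);
    } while ( rc == -EAGAIN );

    return rc;
}
```

[Whether that retry loop belongs in libxc or inside the hypervisor's
domctl dispatcher is exactly the open question in the review.]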

> +static void do_one_cacheflush(paddr_t mfn)
> +{
> +    void *v = map_domain_page(mfn);
> +
> +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> +
> +    unmap_domain_page(v);
> +}

Sort of odd that you have to map a page in order to flush cache
(which I very much hope is physically indexed, or else this
operation wouldn't have the intended effect anyway). Can this
not be done based on the machine address?

>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> -        if ( op == RELINQUISH && count >= 0x2000 )
> +        switch ( op )
>          {
> -            if ( hypercall_preempt_check() )
> +        case RELINQUISH:
> +        case CACHEFLUSH:
> +            if (count >= 0x2000 && hypercall_preempt_check() )
>              {
>                  p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
>                  rc = -EAGAIN;
>                  goto out;
>              }
>              count = 0;
> +            break;
> +        case INSERT:
> +        case ALLOCATE:
> +        case REMOVE:
> +            /* No preemption */
> +            break;
>          }

Unrelated to the patch here, but don't you have a problem if you
don't preempt _at all_ here for certain operation types? Or is a
limit on the number of iterations being enforced elsewhere for
those?
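
[Editorial note: the batching pattern under discussion can be isolated as
follows. This is a hypothetical, self-contained sketch: process_range and
the preempt() callback are illustrative stand-ins for the real loop, which
calls hypercall_preempt_check() and saves its resume point in
lowest_mapped_gfn.]

```c
#include <assert.h>
#include <stdint.h>

#define PREEMPT_BATCH 0x2000  /* 0x2000 4K pages = 2MiB, as in the quoted patch */

/* Walk [start, end), checking for preemption once per batch.  Returns the
 * first unprocessed index (the resume point) if preempted, or end when
 * the whole range has been handled. */
static uint64_t process_range(uint64_t start, uint64_t end,
                              int (*preempt)(void))
{
    uint64_t count = 0;
    uint64_t i;

    for ( i = start; i < end; i++ )
    {
        if ( count >= PREEMPT_BATCH )
        {
            if ( preempt() )
                return i;   /* analogous to saving lowest_mapped_gfn */
            count = 0;      /* not preempted: start a new batch */
        }
        /* ... one page's worth of work would go here ... */
        count++;
    }

    return end;
}

static int never_preempt(void)  { return 0; }
static int always_preempt(void) { return 1; }
```

[Jan's point is that for INSERT/ALLOCATE/REMOVE the loop effectively runs
with never_preempt, which is only safe if the iteration count is bounded
elsewhere.]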

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UlR-0001i5-Eq; Wed, 29 Jan 2014 13:01:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8UlP-0001hm-C1
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:01:07 +0000
Received: from [85.158.137.68:11101] by server-7.bemta-3.messagelabs.com id
	01/D4-13775-29BF8E25; Wed, 29 Jan 2014 13:01:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391000464!10892934!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29565 invoked from network); 29 Jan 2014 13:01:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:01:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97661713"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:01:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 08:01:03 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52]
	helo=cosworth.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1W8UlJ-00013p-SB;
	Wed, 29 Jan 2014 13:01:01 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <linux-kernel@vger.kernel.org>
Date: Wed, 29 Jan 2014 13:01:01 +0000
Message-ID: <1391000461-20713-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Ian Campbell <ian.campbell@citrix.com>, Pawel Moll <pawel.moll@arm.com>,
	Stephen Warren <swarren@wwwdotorg.org>, xen-devel@lists.xen.org,
	Rob Herring <robh+dt@kernel.org>, Kumar Gala <galak@codeaurora.org>
Subject: [Xen-devel] [PATCH REPOST] of: Add vendor prefix for Xen hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I suppose vendors of virtual hardware ought to be listed here as well.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Acked-by: Stephen Warren <swarren@nvidia>
Cc: devicetree@vger.kernel.org
---
v2: rebased for repost.
---
 Documentation/devicetree/bindings/vendor-prefixes.txt |    1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index 3f900cd..a30e3b7 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -90,4 +90,5 @@ via	VIA Technologies, Inc.
 winbond Winbond Electronics corp.
 wlf	Wolfson Microelectronics
 wm	Wondermedia Technologies, Inc.
+xen	Xen Hypervisor
 xlnx	Xilinx
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:04:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Uor-0001xT-Bp; Wed, 29 Jan 2014 13:04:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>)
	id 1W8Uop-0001xI-MH; Wed, 29 Jan 2014 13:04:40 +0000
Received: from [85.158.139.211:56442] by server-2.bemta-5.messagelabs.com id
	D3/15-23037-66CF8E25; Wed, 29 Jan 2014 13:04:38 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391000676!373767!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22746 invoked from network); 29 Jan 2014 13:04:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:04:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97663293"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:04:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:04:35 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W8Uok-000144-Uh;
	Wed, 29 Jan 2014 13:04:34 +0000
Date: Wed, 29 Jan 2014 13:04:34 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129130434.GD1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390999187.31814.97.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 12:39:47PM +0000, Ian Campbell wrote:
> On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
> > On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
> > > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> > > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > > > > Sorry for the so late reply.
> > > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > > > > 
> > > > > And are you using the version of qemu-xen which ships with those
> > > > > releases or your own version, perhaps from upstream?
> > > > > 
> > > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > > > > 4.3.x but I thought it had been added during the 4.4.x development
> > > > > cycle. Adding Anthony + xen-devel to confirm.
> > > > 
> > > > We've added vcpu hotplug to our tree in Xen 4.3.
> > > > 
> > > > Upstream QEMU is able to do vcpu hotplug with Xen only as of the latest
> > > > release, 1.7. The two previous releases (1.5 and 1.6) are missing two
> > > > patches, and any QEMU release before that does not support cpu hotplug.
> > > 
> > > OK. IIRC Xen 4.3 included qemu 1.3 or so?
> > 
> > That's right, qemu 1.3.
> > 
> > > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
> > 
> > That's correct.
> > 
> > > So it also misses those patches.
> > 
> > No, I've put those patches on top of the merge; otherwise there would
> > have been a regression.
> 
> So this functionality should work with xen 4.4.0-rc1 (using whatever
> version was referenced by xen.git/Config.mk) or not?

Yes, it should. You can go back to my first reply to this thread and
read the whole mail ;-). I was replying to what appeared to me to be the
"error" that made someone think that something was going wrong.

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:07:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UrQ-00026K-4Y; Wed, 29 Jan 2014 13:07:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8UrO-00026C-NK
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:07:18 +0000
Received: from [85.158.137.68:7884] by server-12.bemta-3.messagelabs.com id
	CC/E0-01674-50DF8E25; Wed, 29 Jan 2014 13:07:17 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391000836!10891301!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9977 invoked from network); 29 Jan 2014 13:07:17 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:07:17 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so3380129wgg.1
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:07:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=9WBij9OUI/IJqpNNFBrdONYCvtuns1ZvKUDHZLjkyH4=;
	b=PnOmvtzoi5+GhRr0Z5PEoKuxRJWA2PXw19FsfeUbOe9FwsvaoV/BylI7IagLAttPbm
	sxrVS7j/IdmU1P7+vPbTyw0VFNWqtMNm2/mWNT65nRdP75LdWbpZJDP7mOv1xbnvonTP
	cxGzwhuqT1rZeWynrOFUBKR7ZW4lrghApcANRn4LLakYMOZni/xO69kdK1GZEKeH9Iog
	yZcEnleaAr+1wACFZNdrn0hS+jXKxVNJMxAgIRi7xWHdaqZvMqbSU0N4DKZLNlrrqk8R
	QTlX5y6eYrbMrLqN4zo9XuVZfl2427zpmjWCZf5Tku6EmZHTThTmwlKv1WELbOsZePS6
	Tm2g==
X-Gm-Message-State: ALoCoQlaY/FXxVFDdoSBaKLb01h5c0XgEvqbyjN4h2HcFCOcfofrwk8ZYTTst2apQNWJtKwoe5Km
X-Received: by 10.194.83.9 with SMTP id m9mr1711947wjy.39.1391000836695;
	Wed, 29 Jan 2014 05:07:16 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm4805340wjc.5.2014.01.29.05.07.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:07:15 -0800 (PST)
Message-ID: <52E8FD02.2060601@linaro.org>
Date: Wed, 29 Jan 2014 13:07:14 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	<52E69CBC.3090207@linaro.org>	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
In-Reply-To: <CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 10:56, Oleksandr Tyshchenko wrote:
> Hello all,
>
> I just recollected about one hack which we created
> as we needed to route HW IRQ in domU.
>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d793ba..d0227b9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> libxl__multidev *multidev,
>
>           LOG(DEBUG, "dom%d irq %d", domid, irq);
>
> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> -                       : -EOVERFLOW;
>           if (!ret)
>               ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>           if (ret < 0) {
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..b54c08e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>       if ( d->domain_id == 0 )
>           d->arch.vgic.nr_lines = gic_number_lines() - 32;
>       else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> need SPIs for the guest */
>
>       d->arch.vgic.shared_irqs =
>           xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 75e2df3..ba88901 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -29,6 +29,7 @@
>   #include <asm/page.h>
>   #include <public/domctl.h>
>   #include <xsm/xsm.h>
> +#include <asm/gic.h>
>
>   static DEFINE_SPINLOCK(domctl_lock);
>   DEFINE_SPINLOCK(vcpu_alloc_lock);
> @@ -782,8 +783,11 @@ long
> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>               ret = -EINVAL;
>           else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>               ret = -EPERM;
> -        else if ( allow )
> -            ret = pirq_permit_access(d, pirq);
> +        else if ( allow ) {
> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> +            ret = pirq_permit_access(d, irq.irq);
> +            gic_route_irq_to_guest(d, &irq, "");
> +        }
>           else
>               ret = pirq_deny_access(d, pirq);
>       }
> (END)
>
> It seems, the following patch can violate the logic about routing
> physical IRQs only to CPU0.

I forgot about the smp_processor_id() in gic_route_irq_to_guest(). As this
function is only called (in upstream) while dom0 is being built, only CPU0
is used.

> In gic_route_irq_to_guest() we need to call gic_set_irq_properties(),
> where one of the parameters is cpumask_of(smp_processor_id()).
> But in this part of the code the function can be executed on CPU1, and
> as a result the wrong value could be set as the target CPU mask.
> Please confirm my assumption.
> If I am right, we have to add basic HW IRQ routing to DomU in the right way.

With the current implementation, the IRQ will be routed to the CPU which
made the hypercall. This CPU could be different from the CPU where the
domU (assuming we have only 1 VCPU) is running.
I think for both dom0 and domU, in case the VCPU is pinned, we should
use the cpumask of that VCPU.
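[Editor's note: the selection rule suggested above can be sketched in
standalone C. The names and the single-word cpumask are illustrative mocks,
not the Xen cpumask API: prefer the pinned VCPU's CPU, else fall back to the
CPU executing the hypercall.]

```c
/* Pick the physical CPU an IRQ should target for a given VCPU affinity. */
#include <assert.h>

typedef unsigned long cpumask_t;   /* mock: one bit per CPU */

static int cpumask_weight(cpumask_t m)
{
    int w = 0;
    for ( ; m; m &= m - 1 )        /* clear lowest set bit each pass */
        w++;
    return w;
}

static int first_cpu(cpumask_t m)
{
    int c = 0;
    for ( ; m && !(m & 1); m >>= 1 )
        c++;
    return c;
}

static int irq_target_cpu(cpumask_t vcpu_affinity, int current_cpu)
{
    if ( cpumask_weight(vcpu_affinity) == 1 )  /* VCPU is pinned */
        return first_cpu(vcpu_affinity);
    return current_cpu;  /* today's behaviour: CPU doing the hypercall */
}
```

With a VCPU pinned to CPU3, the IRQ would target CPU3 regardless of which
CPU issued the hypercall; an unpinned VCPU keeps the current behaviour.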

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:07:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UrQ-00026K-4Y; Wed, 29 Jan 2014 13:07:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8UrO-00026C-NK
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:07:18 +0000
Received: from [85.158.137.68:7884] by server-12.bemta-3.messagelabs.com id
	CC/E0-01674-50DF8E25; Wed, 29 Jan 2014 13:07:17 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391000836!10891301!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9977 invoked from network); 29 Jan 2014 13:07:17 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:07:17 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so3380129wgg.1
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:07:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=9WBij9OUI/IJqpNNFBrdONYCvtuns1ZvKUDHZLjkyH4=;
	b=PnOmvtzoi5+GhRr0Z5PEoKuxRJWA2PXw19FsfeUbOe9FwsvaoV/BylI7IagLAttPbm
	sxrVS7j/IdmU1P7+vPbTyw0VFNWqtMNm2/mWNT65nRdP75LdWbpZJDP7mOv1xbnvonTP
	cxGzwhuqT1rZeWynrOFUBKR7ZW4lrghApcANRn4LLakYMOZni/xO69kdK1GZEKeH9Iog
	yZcEnleaAr+1wACFZNdrn0hS+jXKxVNJMxAgIRi7xWHdaqZvMqbSU0N4DKZLNlrrqk8R
	QTlX5y6eYrbMrLqN4zo9XuVZfl2427zpmjWCZf5Tku6EmZHTThTmwlKv1WELbOsZePS6
	Tm2g==
X-Gm-Message-State: ALoCoQlaY/FXxVFDdoSBaKLb01h5c0XgEvqbyjN4h2HcFCOcfofrwk8ZYTTst2apQNWJtKwoe5Km
X-Received: by 10.194.83.9 with SMTP id m9mr1711947wjy.39.1391000836695;
	Wed, 29 Jan 2014 05:07:16 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm4805340wjc.5.2014.01.29.05.07.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:07:15 -0800 (PST)
Message-ID: <52E8FD02.2060601@linaro.org>
Date: Wed, 29 Jan 2014 13:07:14 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	<52E69CBC.3090207@linaro.org>	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
In-Reply-To: <CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 10:56, Oleksandr Tyshchenko wrote:
> Hello all,
>
> I just recollected about one hack which we created
> as we needed to route HW IRQ in domU.
>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d793ba..d0227b9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> libxl__multidev *multidev,
>
>           LOG(DEBUG, "dom%d irq %d", domid, irq);
>
> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> -                       : -EOVERFLOW;
>           if (!ret)
>               ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>           if (ret < 0) {
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..b54c08e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>       if ( d->domain_id == 0 )
>           d->arch.vgic.nr_lines = gic_number_lines() - 32;
>       else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> need SPIs for the guest */
>
>       d->arch.vgic.shared_irqs =
>           xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 75e2df3..ba88901 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -29,6 +29,7 @@
>   #include <asm/page.h>
>   #include <public/domctl.h>
>   #include <xsm/xsm.h>
> +#include <asm/gic.h>
>
>   static DEFINE_SPINLOCK(domctl_lock);
>   DEFINE_SPINLOCK(vcpu_alloc_lock);
> @@ -782,8 +783,11 @@ long
> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>               ret = -EINVAL;
>           else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>               ret = -EPERM;
> -        else if ( allow )
> -            ret = pirq_permit_access(d, pirq);
> +        else if ( allow ) {
> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> +            ret = pirq_permit_access(d, irq.irq);
> +            gic_route_irq_to_guest(d, &irq, "");
> +        }
>           else
>               ret = pirq_deny_access(d, pirq);
>       }
> (END)
>
> It seems the patch above can violate the logic of routing
> physical IRQs only to CPU0.

I forgot about the smp_processor_id() in gic_route_irq_to_guest(). As 
this function is only called (in upstream) while dom0 is being built, 
only CPU0 is used.

> In gic_route_irq_to_guest() we need to call gic_set_irq_properties(),
> where one of the parameters is cpumask_of(smp_processor_id()).
> But in this part of the code the function can be executed on CPU1, and
> as a result the wrong value could be set in the target CPU mask.
> Please confirm my assumption.
> If I am right, we have to add basic HW IRQ routing to DomU in the right way.

With the current implementation, the IRQ will be routed to the CPU that 
has done the hypercall. This CPU could be different from the CPU where 
the domU (assuming we have only 1 VCPU) is running.
I think for both dom0 and domU, in case the VCPU is pinned, we should 
use the cpumask of that VCPU.
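That idea can be sketched as follows. This is a toy model only, not
Xen's API: `vcpu_t`, `pinned_cpu` and `irq_target_mask` are made-up
names for illustration. The point is simply to prefer the CPU the
target VCPU is pinned to, and fall back to the CPU issuing the
hypercall only when there is no pinning.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the routing decision.  Today the mask comes from
 * cpumask_of(smp_processor_id()), i.e. the CPU running the hypercall;
 * the suggestion is to use the pinned VCPU's CPU when there is one. */
typedef struct {
    int pinned_cpu;             /* CPU the VCPU is pinned to, or -1 */
} vcpu_t;

static uint32_t irq_target_mask(const vcpu_t *v, int hypercall_cpu)
{
    int cpu = (v->pinned_cpu >= 0) ? v->pinned_cpu : hypercall_cpu;
    return 1u << cpu;           /* one-hot CPU mask */
}
```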

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:09:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UtN-0002LV-N0; Wed, 29 Jan 2014 13:09:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1W8UtM-0002LL-D9
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 13:09:20 +0000
Received: from [193.109.254.147:27836] by server-6.bemta-14.messagelabs.com id
	A4/C4-03396-F7DF8E25; Wed, 29 Jan 2014 13:09:19 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391000957!615887!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12701 invoked from network); 29 Jan 2014 13:09:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-27.messagelabs.com with SMTP;
	29 Jan 2014 13:09:17 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0TD8Dch028925
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 08:08:13 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-60.ams2.redhat.com
	[10.36.112.60])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0TD89Dl021847; Wed, 29 Jan 2014 08:08:10 -0500
Message-ID: <52E8FD39.8030307@redhat.com>
Date: Wed, 29 Jan 2014 14:08:09 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1401291152120.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401291152120.4373@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: Anthony Perard <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org
Subject: Re: [Xen-devel] [BUG] BSoD on Windows XP installation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 29/01/2014 13:00, Stefano Stabellini ha scritto:
> Hi Paolo,
> we have been trying to fix a BSOD that would happen during the Windows
> XP installation, once every ten times on average.
> After many days of bisection, we found out that commit
>
> commit 149f54b53b7666a3facd45e86eece60ce7d3b114
> Author: Paolo Bonzini <pbonzini@redhat.com>
> Date:   Fri May 24 12:59:37 2013 +0200
>
>     memory: add address_space_translate
>
> breaks Xen support in QEMU, in particular the Xen mapcache.
> The reason is that after this commit, l in address_space_rw can span a
> page boundary, however qemu_get_ram_ptr still calls xen_map_cache asking
> to map a single page (if block->offset == 0).
> The appended patch works around the issue by reverting to the old
> behaviour.
>
> What do you think is the right fix for this?
> Maybe we need to add a size parameter to qemu_get_ram_ptr?

Yeah, that would be best, but the patch you attached is fine too with a 
FIXME comment.
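For reference, the condition at the heart of the bug can be sketched
like this (a self-contained illustration, not QEMU code; a 4 KiB page
size and the helper name `crosses_page` are assumptions): once `l` in
address_space_rw may describe a range that straddles a page boundary,
mapping a single page via xen_map_cache is no longer sufficient.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page size */

/* Does the byte range [addr, addr + len) cross a page boundary?
 * Ranges for which this holds cannot be served by a single-page
 * mapping, which is what breaks the Xen mapcache path. */
static int crosses_page(uint64_t addr, uint64_t len)
{
    return len != 0 &&
           (addr / PAGE_SIZE) != ((addr + len - 1) / PAGE_SIZE);
}
```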

Paolo

> I should add that this problem is time-sensitive because it is a blocker
> for the Xen 4.4 release (Xen is in RC2 right now).
>
> Thanks for your feedback,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:12:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Uw7-0002gz-F7; Wed, 29 Jan 2014 13:12:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8Uw6-0002gq-6G
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:12:10 +0000
Received: from [85.158.137.68:54426] by server-17.bemta-3.messagelabs.com id
	D2/ED-22569-92EF8E25; Wed, 29 Jan 2014 13:12:09 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391001127!12064871!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27083 invoked from network); 29 Jan 2014 13:12:07 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:12:07 -0000
Received: by mail-wg0-f53.google.com with SMTP id y10so3530267wgg.32
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:12:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2ooQCalTGEzO9+UI1NPHGNvqOMR3+z5oCMmnSDgBODs=;
	b=Mm1YdGRfB5tuBTovIBCg+gU1+g2csfLmsRB2z/9zwGHOY/UU/SWT0QnGrrkdKKzzl4
	0ER5U9bT/N9WIOraHNunxTBzdBDJ6LDsBwIleXpns17GHY6Wb9p8MNZYEK8LhsJXAceV
	DPU2y96Z/zMOyHOTAWL7LGNV0R3+lB8VSal+j1LMQcBZZzaHmBFdiXvGNojMba8BLMCR
	zeTjFJDlSxOI74msWL0qvr8oeeyeyOASZKAvxUXZDNhr74fp4qPMrK9U/M4BvyWJsxlX
	OV//xKWaP4krc9Vj56vfcKt/OWPWwla8/BoY5G91xvkr+SgRqtDMWocAVzuQ4m9EY1wA
	GPRA==
X-Gm-Message-State: ALoCoQmn4OMibVmTh7fmoDlWcWhpYQAczTXxXri1O57Zd0jyoHjmmTk7vsxtS+kiELj9uNO5e1U4
X-Received: by 10.180.165.15 with SMTP id yu15mr5696670wib.28.1391001127252;
	Wed, 29 Jan 2014 05:12:07 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id cx3sm49836618wib.0.2014.01.29.05.12.05
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:12:06 -0800 (PST)
Message-ID: <52E8FE25.1050108@linaro.org>
Date: Wed, 29 Jan 2014 13:12:05 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
In-Reply-To: <CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Oleksandr,

On 28/01/14 19:25, Oleksandr Tyshchenko wrote:

[..]

>
>     Do you pass-through PPIs to dom0?
>
> If I understand correctly, PPIs are IRQs 16 to 31.
> So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> routed to both CPUs.

These IRQs are used by Xen, therefore they are emulated for dom0 and 
domU. Xen won't EOI these IRQs in maintenance_interrupt.
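For context, the GICv2 interrupt ID ranges in question can be sketched
as follows (the helper name `classify_gic_irq` is illustrative, not
Xen's code):

```c
#include <assert.h>

/* GICv2 interrupt ID ranges:
 *   0-15  SGIs (software-generated, per CPU)
 *  16-31  PPIs (private peripheral, per CPU: timers, maintenance IRQ)
 *  32+    SPIs (shared peripheral, routable between CPUs) */
enum gic_irq_kind { GIC_SGI, GIC_PPI, GIC_SPI };

static enum gic_irq_kind classify_gic_irq(int irq)
{
    if (irq < 16)
        return GIC_SGI;
    if (irq < 32)
        return GIC_PPI;
    return GIC_SPI;
}
```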

>
> And I have printed all IRQs which go through the gic_route_irq_to_guest
> and gic_route_irq functions.
> ...
> (XEN) GIC initialization:
> (XEN)         gic_dist_addr=0000000048211000
> (XEN)         gic_cpu_addr=0000000048212000
> (XEN)         gic_hyp_addr=0000000048214000
> (XEN)         gic_vcpu_addr=0000000048216000
> (XEN)         gic_maintenance_irq=25
> (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Allocated console ring of 16 KiB.
> (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> (XEN) Bringing up CPU1
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> (XEN)
> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> (XEN) CPU 1 booted.
> (XEN) Brought up 2 CPUs
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> (XEN)
> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0

[..]

> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1

Not related to this patch series, but is it normal that you pass through 
the same interrupt to both dom0 and domU?

There are a few other cases like that.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UyL-0002pS-3I; Wed, 29 Jan 2014 13:14:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8UyI-0002pE-W8; Wed, 29 Jan 2014 13:14:27 +0000
Received: from [85.158.137.68:26133] by server-10.bemta-3.messagelabs.com id
	AA/01-07302-2BEF8E25; Wed, 29 Jan 2014 13:14:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391001263!12022800!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6270 invoked from network); 29 Jan 2014 13:14:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:14:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97666495"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:14:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:14:22 -0500
Message-ID: <1391001261.31814.115.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 29 Jan 2014 13:14:21 +0000
In-Reply-To: <20140129130434.GD1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 13:04 +0000, Anthony PERARD wrote:
> On Wed, Jan 29, 2014 at 12:39:47PM +0000, Ian Campbell wrote:
> > On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
> > > On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
> > > > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> > > > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > > > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > > > > > Sorry for the so late reply.
> > > > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > > > > > 
> > > > > > And are you using the version of qemu-xen which ships with those
> > > > > > releases or your own version, perhaps from upstream?
> > > > > > 
> > > > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > > > > > 4.3.x but I thought it had been added during the 4.4.x development
> > > > > > cycle. Adding Anthony + xen-devel to confirm.
> > > > > 
> > > > > We've added vcpu hotplug to our tree in Xen 4.3.
> > > > > 
> > > > > QEMU upstream is able to do vcpu hotplug with Xen only with the latest
> > > > > release, 1.7. The two previous releases (1.5 and 1.6) miss two patches,
> > > > > and any QEMU release before those does not support cpu hotplug.
> > > > 
> > > > OK. IIRC Xen 4.3 included qemu 1.3 or so?
> > > 
> > > That's right, qemu 1.3.
> > > 
> > > > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
> > > 
> > > That's correct.
> > > 
> > > > So it also misses those patches.
> > > 
> > > No, I've put those patches on top of the merge, otherwise, there would
> > > have been regression.
> > 
> > So this functionality should work with xen 4.4.0-rc1 (using whatever
> > version was referenced by xen.git/Config.mk) or not?
> 
> Yes, it should. You can go back to my first reply to this thread and
> read the whole mail ;-).

Sorry, but this wasn't unambiguously clear to me from your initial
reply, which spoke about broken releases and a fix which was in a
version we don't include yet.

>  I replied to what appeared to me to be the "error" that
> made someone think that something had gone wrong.

So it sounds like Yun Wang is using an upstream qemu and not our branch
and is therefore missing some fixes?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UyL-0002pS-3I; Wed, 29 Jan 2014 13:14:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8UyI-0002pE-W8; Wed, 29 Jan 2014 13:14:27 +0000
Received: from [85.158.137.68:26133] by server-10.bemta-3.messagelabs.com id
	AA/01-07302-2BEF8E25; Wed, 29 Jan 2014 13:14:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391001263!12022800!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6270 invoked from network); 29 Jan 2014 13:14:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:14:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97666495"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:14:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:14:22 -0500
Message-ID: <1391001261.31814.115.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Wed, 29 Jan 2014 13:14:21 +0000
In-Reply-To: <20140129130434.GD1775@perard.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-users@lists.xen.org, Yun Wang <bimingery@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 13:04 +0000, Anthony PERARD wrote:
> On Wed, Jan 29, 2014 at 12:39:47PM +0000, Ian Campbell wrote:
> > On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
> > > On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
> > > > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
> > > > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
> > > > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
> > > > > > > Sorry for the very late reply.
> > > > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
> > > > > > 
> > > > > > And are you using the version of qemu-xen which ships with those
> > > > > > releases or your own version, perhaps from upstream?
> > > > > > 
> > > > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
> > > > > > 4.3.x but I thought it had been added during the 4.4.x development
> > > > > > cycle. Adding Anthony + xen-devel to confirm.
> > > > > 
> > > > > We've added vcpu hotplug to our tree in Xen 4.3.
> > > > > 
> > > > > QEMU upstream is able to do vcpu hotplug with Xen only as of the latest
> > > > > release, 1.7. The two previous releases (1.5 and 1.6) are missing two
> > > > > patches, and any QEMU release before those does not support cpu hotplug.
> > > > 
> > > > OK. IIRC Xen 4.3 included qemu 1.3 or so?
> > > 
> > > That's right, qemu 1.3.
> > > 
> > > > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
> > > 
> > > That's correct.
> > > 
> > > > So it also misses those patches.
> > > 
> > > No, I've put those patches on top of the merge; otherwise there would
> > > have been a regression.
> > 
> > So this functionality should work with xen 4.4.0-rc1 (using whatever
> > version was referenced by xen.git/Config.mk) or not?
> 
> Yes, it should. You can go back to my first reply to this thread and
> read the whole mail ;-).

Sorry, but this wasn't unambiguously clear to me from your initial
reply, which spoke about broken releases and a fix which was in a
version we don't include yet.

>  I replied to what appeared to me to be the "error" that
> made someone think that something was going wrong.

So it sounds like Yun Wang is using an upstream qemu and not our branch
and is therefore missing some fixes?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:15:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8UzP-0002y2-53; Wed, 29 Jan 2014 13:15:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8UzN-0002xk-4z
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:15:33 +0000
Received: from [85.158.143.35:2375] by server-2.bemta-4.messagelabs.com id
	5C/3E-10891-4FEF8E25; Wed, 29 Jan 2014 13:15:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391001331!1649749!1
X-Originating-IP: [74.125.82.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11455 invoked from network); 29 Jan 2014 13:15:31 -0000
Received: from mail-wg0-f44.google.com (HELO mail-wg0-f44.google.com)
	(74.125.82.44)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:15:31 -0000
Received: by mail-wg0-f44.google.com with SMTP id l18so3414773wgh.23
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:15:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=o6oFcRclHisUGQj4AJUFiFKBzGeWEco7QkbLQeKiPQA=;
	b=ImQptW35YC8rNW1V7X2DkY+CP7cXYaXk24Je1QxevF6o0BPnHqrwzUJMqZuN2UvwhD
	9YpZmx69PV/VTZT69S1GqVseLCxPFdk7IOs8kXbiv/7hDuSQhFnZd9irKout/JNRGa1G
	vW/13EDMplPIALtpLOCpsuhXEZWuw+24biDLOe9mcvfIphtINvaM2KW8U7bv5kxoJGFk
	YfIL53mUwRuHBn0lesue+uOtXf0BRjFQ7r0X1RZobVND0qIdxy5EzIdNDsDltXzGR7Td
	peHFcOTDvw66XDuZoAa0VrvTmXx09SlqvAUHpb8unlY8/WmLqHE23HxAb1tVHAr08XPq
	jDSw==
X-Gm-Message-State: ALoCoQlCQGkGIrTP+GVKWWcgGZFGlshDr+gsGlSqMSaWhODLrVabspXgQevvp60Ld6G31brvTYBe
X-Received: by 10.180.99.39 with SMTP id en7mr19678850wib.10.1391001331158;
	Wed, 29 Jan 2014 05:15:31 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm4818166wjc.5.2014.01.29.05.15.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:15:30 -0800 (PST)
Message-ID: <52E8FEF1.2080409@linaro.org>
Date: Wed, 29 Jan 2014 13:15:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 11:46, Stefano Stabellini wrote:
> On Wed, 29 Jan 2014, Stefano Stabellini wrote:
>> On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
>>> Hello all,
>>>
>>> I just remembered a hack which we created
>>> because we needed to route HW IRQs in domU.
>>>
>>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>> index 9d793ba..d0227b9 100644
>>> --- a/tools/libxl/libxl_create.c
>>> +++ b/tools/libxl/libxl_create.c
>>> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
>>> libxl__multidev *multidev,
>>>
>>>           LOG(DEBUG, "dom%d irq %d", domid, irq);
>>>
>>> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
>>> -                       : -EOVERFLOW;
>>>           if (!ret)
>>>               ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>>>           if (ret < 0) {
>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>> index 2e4b11f..b54c08e 100644
>>> --- a/xen/arch/arm/vgic.c
>>> +++ b/xen/arch/arm/vgic.c
>>> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>>>       if ( d->domain_id == 0 )
>>>           d->arch.vgic.nr_lines = gic_number_lines() - 32;
>>>       else
>>> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
>>> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
>>> need SPIs for the guest */
>>>
>>>       d->arch.vgic.shared_irqs =
>>>           xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
>>> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
>>> index 75e2df3..ba88901 100644
>>> --- a/xen/common/domctl.c
>>> +++ b/xen/common/domctl.c
>>> @@ -29,6 +29,7 @@
>>>   #include <asm/page.h>
>>>   #include <public/domctl.h>
>>>   #include <xsm/xsm.h>
>>> +#include <asm/gic.h>
>>>
>>>   static DEFINE_SPINLOCK(domctl_lock);
>>>   DEFINE_SPINLOCK(vcpu_alloc_lock);
>>> @@ -782,8 +783,11 @@ long
>>> do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>               ret = -EINVAL;
>>>           else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>>>               ret = -EPERM;
>>> -        else if ( allow )
>>> -            ret = pirq_permit_access(d, pirq);
>>> +        else if ( allow ) {
>>> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
>>> +            ret = pirq_permit_access(d, irq.irq);
>>> +            gic_route_irq_to_guest(d, &irq, "");
>>> +        }
>>>           else
>>>               ret = pirq_deny_access(d, pirq);
>>>       }
>>>
>>> It seems the following patch can violate the logic of routing
>>> physical IRQs only to CPU0.
>>> In gic_route_irq_to_guest() we need to call gic_set_irq_properties(),
>>> where one of the parameters is cpumask_of(smp_processor_id()).
>>> But in this part of the code the function can be executed on CPU1,
>>> and as a result the wrong value could end up being set as the
>>> target CPU mask.
>>>
>>> Please, confirm my assumption.
>>
>> That is correct.
>>
>>
>>> If I am right, we have to add basic HW IRQ routing to DomU in the right way.
>>
>> We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe
>> for now we could just hardcode the cpumask of cpu0 in
>> gic_route_irq_to_guest.
>>
>> However keep in mind that if you plan on routing SPIs to guests other
>> than dom0, receiving all the interrupts on cpu0 might not be great for
>> performance.
>
> Thinking twice about it, it might be the only acceptable change for 4.4.

In Xen upstream, gic_route_irq_to_guest is only called when the dom0 is 
built (so on CPU0). I don't think we need this patch for Xen 4.4.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:19:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V2W-0003Of-1j; Wed, 29 Jan 2014 13:18:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8V2U-0003OV-Gi
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:18:46 +0000
Received: from [85.158.137.68:38147] by server-17.bemta-3.messagelabs.com id
	6F/49-22569-5BFF8E25; Wed, 29 Jan 2014 13:18:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391001524!11970398!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26074 invoked from network); 29 Jan 2014 13:18:45 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:18:45 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so3411775wgh.31
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:18:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=tlYLN3UKjHmbKaKwdoFC6qiKagPhtkllXT12A5RKzmk=;
	b=Gv6qEvVyrAZbSa1hwRHkK86JHEfpP/liWf4jOJPDLgAz7m9ldl2cEDFOI2Lfjpdtqr
	9myjwJJcTMpVE9xGH0oILsE/yPa7Z/gjk1ycKnowbzcqZGW/sZXsa3L2QG5VsppeXWya
	MGr+DntV5kyen9G97EDTWE23bhPTcgOEuGMieB5YyvzItmmU7zZo6EB1XnNiqSsxzcND
	IbzJ106f7jWgmPpXEwPMdU92G3L9GHUV7i+LVhN8zxPiotZDVyhWL7HulxuDwuy0KqTQ
	zsAc8ynxCro9J1HPlDPA0NvCojd8cLr+DuiruvHbaD14pxwS8dFEh7VGXvqI1UIvGas3
	k4Vw==
X-Gm-Message-State: ALoCoQl41c9oeA7owK9FWUQYLeEfQcs1e8SEy311IbmXYndc+OxupvaL3EUU2itzPeFdWkS16bCX
X-Received: by 10.180.210.171 with SMTP id mv11mr5764419wic.44.1391001524711; 
	Wed, 29 Jan 2014 05:18:44 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id dm2sm5314553wib.8.2014.01.29.05.18.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:18:44 -0800 (PST)
Message-ID: <52E8FFB1.1060408@linaro.org>
Date: Wed, 29 Jan 2014 13:18:41 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-2-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390997486-3986-2-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH 2/4] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 12:11, Ian Campbell wrote:
> This function hasn't been only about creating for quite a while.
>
> This is purely a rename.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>   xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
>   1 file changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 85ca330..ace3c54 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -230,7 +230,7 @@ enum p2m_operation {
>       RELINQUISH,
>   };
>
> -static int create_p2m_entries(struct domain *d,
> +static int apply_p2m_changes(struct domain *d,
>                        enum p2m_operation op,
>                        paddr_t start_gpaddr,
>                        paddr_t end_gpaddr,
> @@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
>                        paddr_t start,
>                        paddr_t end)
>   {
> -    return create_p2m_entries(d, ALLOCATE, start, end,
> -                              0, MATTR_MEM, p2m_ram_rw);
> +    return apply_p2m_changes(d, ALLOCATE, start, end,
> +                             0, MATTR_MEM, p2m_ram_rw);
>   }
>
>   int map_mmio_regions(struct domain *d,
> @@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
>                        paddr_t end_gaddr,
>                        paddr_t maddr)
>   {
> -    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
> -                              maddr, MATTR_DEV, p2m_mmio_direct);
> +    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
> +                             maddr, MATTR_DEV, p2m_mmio_direct);
>   }
>
>   int guest_physmap_add_entry(struct domain *d,
> @@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
>                               unsigned long page_order,
>                               p2m_type_t t)
>   {
> -    return create_p2m_entries(d, INSERT,
> -                              pfn_to_paddr(gpfn),
> -                              pfn_to_paddr(gpfn + (1 << page_order)),
> -                              pfn_to_paddr(mfn), MATTR_MEM, t);
> +    return apply_p2m_changes(d, INSERT,
> +                             pfn_to_paddr(gpfn),
> +                             pfn_to_paddr(gpfn + (1 << page_order)),
> +                             pfn_to_paddr(mfn), MATTR_MEM, t);
>   }
>
>   void guest_physmap_remove_page(struct domain *d,
>                                  unsigned long gpfn,
>                                  unsigned long mfn, unsigned int page_order)
>   {
> -    create_p2m_entries(d, REMOVE,
> -                       pfn_to_paddr(gpfn),
> -                       pfn_to_paddr(gpfn + (1<<page_order)),
> -                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
> +    apply_p2m_changes(d, REMOVE,
> +                      pfn_to_paddr(gpfn),
> +                      pfn_to_paddr(gpfn + (1<<page_order)),
> +                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
>   }
>
>   int p2m_alloc_table(struct domain *d)
> @@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
>   {
>       struct p2m_domain *p2m = &d->arch.p2m;
>
> -    return create_p2m_entries(d, RELINQUISH,
> +    return apply_p2m_changes(d, RELINQUISH,
>                                 pfn_to_paddr(p2m->next_gfn_to_relinquish),
>                                 pfn_to_paddr(p2m->max_mapped_gfn),
>                                 pfn_to_paddr(INVALID_MFN),
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:20:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V3Z-0003bK-J8; Wed, 29 Jan 2014 13:19:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8V3Y-0003b6-Cm
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:19:52 +0000
Received: from [85.158.137.68:48647] by server-8.bemta-3.messagelabs.com id
	A3/B8-16039-7FFF8E25; Wed, 29 Jan 2014 13:19:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391001590!12085079!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11441 invoked from network); 29 Jan 2014 13:19:51 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:19:51 -0000
Received: by mail-wg0-f42.google.com with SMTP id l18so8106949wgh.5
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:19:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=dJwwyokWeth5HKOBZnoj/EzehPdVb/BJ6orkSjhf6Fw=;
	b=KMnmPhh1R2oirqimfoj/TG5v2EinK4OSzMRveBZbNx0hHBUeP28JFuzrlVrWbO1Ulb
	NsX4ZO+IvQiXa3XWfK3K6ZLyYbROow+7SJ2kGdJTxTy1xdILIxlpOhW6A/J9pFfpmMMh
	z3IVBXSyYcXPOCd7O5p9IwJd7ny5rtmFvfOg4U6RDrFDM/CTvXj40HOUaQfuBjAqKJaW
	Pvzeo1rh9Z9omgB+B6XnP2smN1NU0E8accQCRTMPVU/F1sYX6awt33hY1O2uieqqbR4O
	HVAzboV5rgjzgOkB2GydZdc/i9YH9pfaTrTTI+OY6jruLut/fRxg7os5yZ22pZDglb2D
	bzVA==
X-Gm-Message-State: ALoCoQkTIKmZHVw1mVUBoF91tLe9s46L0LdrDS5LTCe0mPXE0EaXcx2C+jy31bjRz7cgkTmZoNKm
X-Received: by 10.194.123.201 with SMTP id mc9mr1737632wjb.43.1391001590789;
	Wed, 29 Jan 2014 05:19:50 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id q15sm4846436wjw.18.2014.01.29.05.19.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:19:50 -0800 (PST)
Message-ID: <52E8FFF3.7020805@linaro.org>
Date: Wed, 29 Jan 2014 13:19:47 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390997486-3986-3-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH 3/4] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 12:11, Ian Campbell wrote:
> This has uses other than during relinquish, so rename it for clarity.
>
> This is a pure rename.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>   xen/arch/arm/p2m.c        |    9 ++++-----
>   xen/include/asm-arm/p2m.h |    8 +++++---
>   2 files changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ace3c54..a61edeb 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
>           {
>               if ( hypercall_preempt_check() )
>               {
> -                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
> +                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
>                   rc = -EAGAIN;
>                   goto out;
>               }
> @@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
>           unsigned long egfn = paddr_to_pfn(end_gpaddr);
>
>           p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
> -        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
> -        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
> +        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
>       }
>
>       rc = 0;
> @@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
>       p2m->first_level = NULL;
>
>       p2m->max_mapped_gfn = 0;
> -    p2m->next_gfn_to_relinquish = ULONG_MAX;
> +    p2m->lowest_mapped_gfn = ULONG_MAX;
>
>   err:
>       spin_unlock(&p2m->lock);
> @@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
>       struct p2m_domain *p2m = &d->arch.p2m;
>
>       return apply_p2m_changes(d, RELINQUISH,
> -                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
> +                              pfn_to_paddr(p2m->lowest_mapped_gfn),
>                                 pfn_to_paddr(p2m->max_mapped_gfn),
>                                 pfn_to_paddr(INVALID_MFN),
>                                 MATTR_MEM, p2m_invalid);
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53b3266..e9c884a 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -24,9 +24,11 @@ struct p2m_domain {
>        */
>       unsigned long max_mapped_gfn;
>
> -    /* When releasing mapped gfn's in a preemptible manner, recall where
> -     * to resume the search */
> -    unsigned long next_gfn_to_relinquish;
> +    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
> +     * preemptible manner this is updated to record where to
> +     * resume the search. Apart from during teardown this can only
> +     * decrease. */
> +    unsigned long lowest_mapped_gfn;
>   };
>
>   /* List of possible type for each page in the p2m entry.
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:23:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V6R-0003vJ-QR; Wed, 29 Jan 2014 13:22:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8V6Q-0003uy-15
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:22:50 +0000
Received: from [85.158.143.35:34036] by server-1.bemta-4.messagelabs.com id
	34/FD-31661-9A009E25; Wed, 29 Jan 2014 13:22:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391001767!1645189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31935 invoked from network); 29 Jan 2014 13:22:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:22:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97668592"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:22:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:22:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8V6L-0001Ln-SX;
	Wed, 29 Jan 2014 13:22:45 +0000
Date: Wed, 29 Jan 2014 13:22:41 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52E8FD02.2060601@linaro.org>
Message-ID: <alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Julien Grall wrote:
> On 29/01/14 10:56, Oleksandr Tyshchenko wrote:
> > Hello all,
> > 
> > I just remembered a hack we created when we needed to route a
> > HW IRQ to a domU.
> > 
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 9d793ba..d0227b9 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> > libxl__multidev *multidev,
> > 
> >           LOG(DEBUG, "dom%d irq %d", domid, irq);
> > 
> > -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> > -                       : -EOVERFLOW;
> >           if (!ret)
> >               ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
> >           if (ret < 0) {
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..b54c08e 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
> >       if ( d->domain_id == 0 )
> >           d->arch.vgic.nr_lines = gic_number_lines() - 32;
> >       else
> > -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> > need SPIs for the guest */
> > 
> >       d->arch.vgic.shared_irqs =
> >           xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> > diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> > index 75e2df3..ba88901 100644
> > --- a/xen/common/domctl.c
> > +++ b/xen/common/domctl.c
> > @@ -29,6 +29,7 @@
> >   #include <asm/page.h>
> >   #include <public/domctl.h>
> >   #include <xsm/xsm.h>
> > +#include <asm/gic.h>
> > 
> >   static DEFINE_SPINLOCK(domctl_lock);
> >   DEFINE_SPINLOCK(vcpu_alloc_lock);
> > @@ -782,8 +783,11 @@ long
> > do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >               ret = -EINVAL;
> >           else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
> >               ret = -EPERM;
> > -        else if ( allow )
> > -            ret = pirq_permit_access(d, pirq);
> > +        else if ( allow ) {
> > +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> > +            ret = pirq_permit_access(d, irq.irq);
> > +            gic_route_irq_to_guest(d, &irq, "");
> > +        }
> >           else
> >               ret = pirq_deny_access(d, pirq);
> >       }
> > (END)
> > 
> > It seems the following patch can violate the logic of routing
> > physical IRQs only to CPU0.
> 
> I forgot the smp_processor_id() in gic_route_irq_to_guest(). As this function
> is only called (in upstream) while dom0 is being built, only CPU0 is used.

Right, that's why changing it to cpumask_of(0) shouldn't make any
difference for xen-unstable (it should make things clearer, if nothing
else) but it should fix things for Oleksandr.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:23:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V6R-0003vC-FV; Wed, 29 Jan 2014 13:22:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8V6P-0003uw-JY
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:22:50 +0000
Received: from [193.109.254.147:47366] by server-2.bemta-14.messagelabs.com id
	3A/9D-01236-8A009E25; Wed, 29 Jan 2014 13:22:48 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391001766!608559!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19045 invoked from network); 29 Jan 2014 13:22:46 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:22:46 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so3422971wgh.17
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:22:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=tzfZvuZJgNTlqFWrhAqH4QQSzrRZzqbhaGK6fO5ZWnQ=;
	b=SU3GNCUWVsFddrwzdNJE9EB7SHwxRHRi2FR9ChLI4FVL3hVUOdKUcLS8D7WYRqZK45
	wQrTt+PQrALzs2IPQUpDF+BH5EDSwhODhOEbDknlVAJ49qqOmJidQzKRNa3EfILTuu+X
	lYc1iUJrQFg+LfVoZL+TWW0MP923lRpu82f7NPsndlsqggN4VvjraeBuFExQYdIMR1Ut
	1LxwPOYMOhgZKWR3rh3jVDELC6YkGb/cDwKZEAeEkxKV+ModbMN3StZA9YHQSRhv10Q2
	koQzdo9FM5oIOUXKPG205y0ZFkcc6WX5wFdFiZFcvQ+HuZ1RT0y8pAgasCDgeOcT3kfP
	TzdQ==
X-Gm-Message-State: ALoCoQmlLj9pQDlDYYwpuuMgW1/ps+vKl0Rm2Q//mTTSXxyMQr3yWopImNVEbDFSxg9Si4xAUG1y
X-Received: by 10.195.13.17 with SMTP id eu17mr5062076wjd.24.1391001766172;
	Wed, 29 Jan 2014 05:22:46 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ua8sm4904431wjc.4.2014.01.29.05.22.44
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:22:45 -0800 (PST)
Message-ID: <52E900A3.6070307@linaro.org>
Date: Wed, 29 Jan 2014 13:22:43 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH 1/4] Revert "xen: arm: force guest memory
 accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 12:11, Ian Campbell wrote:
> This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
>
> This approach has a shortcoming in that it breaks when a guest enables its
> MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
> same time. It turns out that FreeBSD does this.

By reverting this patch, you also revert some interesting fixes:
   - Fixing HSR_SYSREG_CRN_MASK
   - Use of HSR_SYSREG_

I think both of these changes should be kept.

Otherwise, I would move this patch to the end of the series in case we 
need to bisect the code.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:23:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V6R-0003vJ-QR; Wed, 29 Jan 2014 13:22:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8V6Q-0003uy-15
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:22:50 +0000
Received: from [85.158.143.35:34036] by server-1.bemta-4.messagelabs.com id
	34/FD-31661-9A009E25; Wed, 29 Jan 2014 13:22:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391001767!1645189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31935 invoked from network); 29 Jan 2014 13:22:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:22:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97668592"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:22:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:22:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8V6L-0001Ln-SX;
	Wed, 29 Jan 2014 13:22:45 +0000
Date: Wed, 29 Jan 2014 13:22:41 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52E8FD02.2060601@linaro.org>
Message-ID: <alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Julien Grall wrote:
> On 29/01/14 10:56, Oleksandr Tyshchenko wrote:
> > Hello all,
> > 
> > I just recollected about one hack which we created
> > as we needed to route HW IRQ in domU.
> > 
> > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > index 9d793ba..d0227b9 100644
> > --- a/tools/libxl/libxl_create.c
> > +++ b/tools/libxl/libxl_create.c
> > @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc,
> > libxl__multidev *multidev,
> > 
> >           LOG(DEBUG, "dom%d irq %d", domid, irq);
> > 
> > -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> > -                       : -EOVERFLOW;
> >           if (!ret)
> >               ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
> >           if (ret < 0) {
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..b54c08e 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
> >       if ( d->domain_id == 0 )
> >           d->arch.vgic.nr_lines = gic_number_lines() - 32;
> >       else
> > -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do
> > need SPIs for the guest */
> > 
> >       d->arch.vgic.shared_irqs =
> >           xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> > diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> > index 75e2df3..ba88901 100644
> > --- a/xen/common/domctl.c
> > +++ b/xen/common/domctl.c
> > @@ -29,6 +29,7 @@
> >   #include <asm/page.h>
> >   #include <public/domctl.h>
> >   #include <xsm/xsm.h>
> > +#include <asm/gic.h>
> > 
> >   static DEFINE_SPINLOCK(domctl_lock);
> >   DEFINE_SPINLOCK(vcpu_alloc_lock);
> > @@ -782,8 +783,11 @@ long
> > do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >               ret = -EINVAL;
> >           else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
> >               ret = -EPERM;
> > -        else if ( allow )
> > -            ret = pirq_permit_access(d, pirq);
> > +        else if ( allow ) {
> > +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> > +            ret = pirq_permit_access(d, irq.irq);
> > +            gic_route_irq_to_guest(d, &irq, "");
> > +        }
> >           else
> >               ret = pirq_deny_access(d, pirq);
> >       }
> > (END)
> > 
> > It seems the following patch can violate the logic of routing
> > physical IRQs only to CPU0.
> 
> I forgot the smp_processor_id() in gic_route_irq_to_guest(). As this function
> is only called (for upstream) when dom0 is building, only CPU0 is used.

Right, that's why changing it to cpumask_of(0) shouldn't make any
difference for xen-unstable (it should make things clearer, if nothing
else) but it should fix things for Oleksandr.
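(Archive note: the `pirq + NR_LOCAL_IRQS` offset in the hack above reflects the GIC interrupt numbering, where IRQs 0-31 are per-CPU SGIs/PPIs and shared interrupts (SPIs) start at 32. A minimal, self-contained sketch of that arithmetic follows; the helper name `pirq_to_spi` is ours for illustration, not a Xen function.)

```c
#include <assert.h>

/* On a GIC, IRQs 0-15 are SGIs and 16-31 are PPIs, both per-CPU;
 * shared interrupts (SPIs) start at 32. This mirrors the Xen
 * constant of the same name. */
#define NR_LOCAL_IRQS 32

/* Hypothetical helper mirroring the hack quoted above: the domctl's
 * "pirq" counts SPIs from 0, so the hardware IRQ number is offset
 * by the 32 local interrupts. */
static int pirq_to_spi(int pirq)
{
    return pirq + NR_LOCAL_IRQS;
}
```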

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:23:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8V6R-0003vC-FV; Wed, 29 Jan 2014 13:22:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8V6P-0003uw-JY
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:22:50 +0000
Received: from [193.109.254.147:47366] by server-2.bemta-14.messagelabs.com id
	3A/9D-01236-8A009E25; Wed, 29 Jan 2014 13:22:48 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391001766!608559!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19045 invoked from network); 29 Jan 2014 13:22:46 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:22:46 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so3422971wgh.17
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:22:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=tzfZvuZJgNTlqFWrhAqH4QQSzrRZzqbhaGK6fO5ZWnQ=;
	b=SU3GNCUWVsFddrwzdNJE9EB7SHwxRHRi2FR9ChLI4FVL3hVUOdKUcLS8D7WYRqZK45
	wQrTt+PQrALzs2IPQUpDF+BH5EDSwhODhOEbDknlVAJ49qqOmJidQzKRNa3EfILTuu+X
	lYc1iUJrQFg+LfVoZL+TWW0MP923lRpu82f7NPsndlsqggN4VvjraeBuFExQYdIMR1Ut
	1LxwPOYMOhgZKWR3rh3jVDELC6YkGb/cDwKZEAeEkxKV+ModbMN3StZA9YHQSRhv10Q2
	koQzdo9FM5oIOUXKPG205y0ZFkcc6WX5wFdFiZFcvQ+HuZ1RT0y8pAgasCDgeOcT3kfP
	TzdQ==
X-Gm-Message-State: ALoCoQmlLj9pQDlDYYwpuuMgW1/ps+vKl0Rm2Q//mTTSXxyMQr3yWopImNVEbDFSxg9Si4xAUG1y
X-Received: by 10.195.13.17 with SMTP id eu17mr5062076wjd.24.1391001766172;
	Wed, 29 Jan 2014 05:22:46 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ua8sm4904431wjc.4.2014.01.29.05.22.44
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:22:45 -0800 (PST)
Message-ID: <52E900A3.6070307@linaro.org>
Date: Wed, 29 Jan 2014 13:22:43 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, george.dunlap@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH 1/4] Revert "xen: arm: force guest memory
 accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 12:11, Ian Campbell wrote:
> This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
>
> This approach has a short coming in that it breaks when a guest enables its
> MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
> same time. It turns out that FreeBSD does this.

By reverting this patch, you also revert some interesting fixes:
   - Fixing HSR_SYSREG_CRN_MASK
   - Use of HSR_SYSREG_

I think both of these changes should be kept.

Otherwise, I would move this patch to the end of the series if we need 
to bisect the code.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:26:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VA7-0004F8-Ly; Wed, 29 Jan 2014 13:26:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8VA6-0004Ez-7b
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:26:38 +0000
Received: from [85.158.139.211:39014] by server-3.bemta-5.messagelabs.com id
	26/79-13671-D8109E25; Wed, 29 Jan 2014 13:26:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391001996!382674!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26364 invoked from network); 29 Jan 2014 13:26:36 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:26:36 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so3523417wgh.29
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 05:26:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=eqmOpJKg7qT5ws3/cWX09g0H3rvUrm0+NauopLJXr9s=;
	b=k2wrYaMIJ/d+8JfE8gGqHA5GjGg0i+ZSNfIlRySPMM+B8DrVLdkV+GIkZsNg1/6Kdi
	Lz6ormmGgs4QZnof0RZFw3wWfHzbMIDAnR+LmRVTcClYqGZg5SeX0P+QRjFXf3rTGm5u
	utqF+/JNHRQAHS8uKOHrIMtd5eVKGEj0vJ3nNmA0qD1rJr0XCg2GVllKOKqb24NMy+4e
	qswZjdLWY7IH0UeetCAXEHttWzYqrOzLVfpGBqKAvfg8uon/RVW5srSBII2JaBqwx4vB
	a8/Tt6GUwycIa8XPxGKf54QEpZbXk6r2sXtOUYoo69SI7y/sll5OoAhratsrIcY6aGTD
	9rTw==
X-Gm-Message-State: ALoCoQkE6foZFJBzEIPvVaDmh+0WiI/4fu50r/AATkWnfK4dxu0mNRrMefHOrj+trrlHsvX01+cT
X-Received: by 10.180.211.101 with SMTP id nb5mr22945598wic.0.1391001996530;
	Wed, 29 Jan 2014 05:26:36 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uc9sm42549676wib.2.2014.01.29.05.26.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 05:26:35 -0800 (PST)
Message-ID: <52E9018A.5090908@linaro.org>
Date: Wed, 29 Jan 2014 13:26:34 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	xen-devel <xen-devel@lists.xen.org>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
In-Reply-To: <1390997452.31814.90.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 0/4] xen/arm: fix guest builder cache
 cohenrency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Hi Ian,

Thanks for the patch series.

On 29/01/14 12:10, Ian Campbell wrote:
> Jan/Ian/Keir -- the final patch involves tools changes and a new domctl,
> which is why you are copied (although the domctl is marked as arm
> specific you might have opinions on it).
>
> On ARM we need to take care of cache coherency for guests which we have
> just built because they start with their caches disabled.
>
> Our current strategy for dealing with this, which is to make guest
> memory default to cacheable regardless of the in guest configuration
> (the HCR.DC bit), is flawed because it doesn't handle guests which
> enable their MMU before enabling their caches, which at least FreeBSD
> does. (NB: Setting HCR.DC while the guest MMU is enabled is
> UNPREDICTABLE, hence we must disable it when the guest turns its MMU
> on).

Enabling the cache earlier wouldn't change the issue :). The main 
problem is the page table attributes. Using Write-Through (with the 
cache enabled/disabled) can randomly crash the guest.
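(Archive note: the constraint discussed above can be modelled with a few bits: HCR.DC is only usable while the guest's stage-1 MMU is off, so the hypervisor must drop it as soon as SCTLR.M is set, whether or not the guest also set SCTLR.C — the FreeBSD case. A self-contained sketch follows; bit positions are the ARMv7 SCTLR layout, and `hcr_dc_allowed` is our name, not Xen's.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SCTLR_M (1u << 0)   /* stage-1 MMU enable */
#define SCTLR_C (1u << 2)   /* data cache enable */

/* Hypothetical check: HCR.DC ("default cacheable") is UNPREDICTABLE
 * once the guest MMU is on, so it must be cleared on SCTLR.M being
 * set, regardless of SCTLR.C. */
static bool hcr_dc_allowed(uint32_t sctlr)
{
    return !(sctlr & SCTLR_M);
}
```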

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:41:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VNy-0004pf-Ik; Wed, 29 Jan 2014 13:40:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>) id 1W8VNx-0004pV-5X
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:40:57 +0000
Received: from [85.158.139.211:42917] by server-6.bemta-5.messagelabs.com id
	A4/76-14342-8E409E25; Wed, 29 Jan 2014 13:40:56 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391002854!368258!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13772 invoked from network); 29 Jan 2014 13:40:54 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:40:54 -0000
Received: by mail-lb0-f181.google.com with SMTP id z5so1483944lbh.12
	for <multiple recipients>; Wed, 29 Jan 2014 05:40:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=oVlhzGWv1nUc6WNU3ekkxrJYSUH3PvCLWo8Ay372EvM=;
	b=mGT1cB6N+/cRhaxD84NtMPKX8WQ3eGNGS81kruVVXA37mgoH38ZcsGtaUTr7a6cypU
	cvFEWiIVz57+zklVoYWpXwdlI63AhfzR8bJ89NxAhTyGbLHSduAm1BH1Dxh9WKhvWz+S
	kM04xcgV1f5Cnp5tkDGH7YP4wRNEo26Wbl74NQBAQBeDaIDGIheA3peUJqPmE/fcmP/k
	kKI9gYnBrG1ZK7MDZJxTduQ4sLl7lzKFIQdko9blqvDg2XMtORZKGhBoYclWPV27O7+m
	KfONOm64yvpec4FNXxwDfENZ1BY5Ki9WZOIupil5hE3RLsC80tw/Ba/nOwV5+vEfn5cH
	ppVA==
MIME-Version: 1.0
X-Received: by 10.152.219.97 with SMTP id pn1mr4470752lac.9.1391002853640;
	Wed, 29 Jan 2014 05:40:53 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Wed, 29 Jan 2014 05:40:53 -0800 (PST)
Date: Wed, 29 Jan 2014 08:40:53 -0500
X-Google-Sender-Auth: tNlOIhhRwLX0olAOqpN6wr1mrWI
Message-ID: <CAHehzX38FrH8-rMgOJf3vYA=Hu6H7r-Pbf8iX-+1KNzPxdoqZg@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] VGA Passthrough r9 270x
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

XenProject.org user "Jonas Osborn" writes:

"Would an r9 270x be suitable for VGA passthrough with xen? It's not
listed on the wiki/XenVGAPassthroughTestedAdapters page but the 7870
is, and it's my understanding that the r9 270x is an upgrade/rebrand
of the 7870."

I will crosspost replies, or you can answer yourself on the website:

http://www.xenproject.org/help/questions-and-answers/vga-passthrough-r9-270x.html

Thanks,

Russ

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:42:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VPo-00051Z-T3; Wed, 29 Jan 2014 13:42:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8VPn-00051S-Fr
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:42:51 +0000
Received: from [85.158.143.35:62742] by server-1.bemta-4.messagelabs.com id
	35/A1-31661-A5509E25; Wed, 29 Jan 2014 13:42:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391002969!1658457!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7262 invoked from network); 29 Jan 2014 13:42:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:42:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="95688477"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 13:42:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:42:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8VCR-0001Rf-8F;
	Wed, 29 Jan 2014 13:29:03 +0000
Date: Wed, 29 Jan 2014 13:28:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52E9098B0200007800117EC5@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1401291327130.4373@kaball.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Jan Beulich wrote:
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
> 
> Sort of odd that you have to map a page in order to flush cache
> (which I very much hope is physically indexed, or else this
> operation wouldn't have the intended effect anyway). Can this
> not be done based on the machine address?

Unfortunately no. I asked for a similar change when Ian sent the RFC
because I was concerned about performance, but it turns out it is not
possible. A pity.
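(Archive note: since the flush primitive works on virtual addresses, a by-VMID flush ends up walking the guest's frames one at a time, mapping each before flushing. A stand-alone sketch of that loop follows, with the map/flush/unmap collapsed into a counting stub; `cacheflush_range` is our name for illustration, not the Xen function.)

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef uint64_t paddr_t;

static unsigned long pages_flushed;

/* Stub for the quoted do_one_cacheflush(): in Xen it maps the frame
 * with map_domain_page(), flushes PAGE_SIZE bytes of dcache by VA,
 * then unmaps. Here we only count invocations. */
static void do_one_cacheflush(paddr_t mfn)
{
    (void)mfn;
    pages_flushed++;
}

/* Walk the machine-address range [start, end) in page-sized steps,
 * flushing each frame in turn. */
static void cacheflush_range(paddr_t start, paddr_t end)
{
    paddr_t mfn;

    for ( mfn = start >> PAGE_SHIFT;
          mfn < ((end + PAGE_SIZE - 1) >> PAGE_SHIFT);
          mfn++ )
        do_one_cacheflush(mfn);
}
```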

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:42:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VPo-00051Z-T3; Wed, 29 Jan 2014 13:42:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8VPn-00051S-Fr
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 13:42:51 +0000
Received: from [85.158.143.35:62742] by server-1.bemta-4.messagelabs.com id
	35/A1-31661-A5509E25; Wed, 29 Jan 2014 13:42:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391002969!1658457!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7262 invoked from network); 29 Jan 2014 13:42:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:42:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="95688477"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 13:42:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 08:42:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8VCR-0001Rf-8F;
	Wed, 29 Jan 2014 13:29:03 +0000
Date: Wed, 29 Jan 2014 13:28:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52E9098B0200007800117EC5@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1401291327130.4373@kaball.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Jan Beulich wrote:
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
> 
> Sort of odd that you have to map a page in order to flush cache
> (which I very much hope is physically indexed, or else this
> operation wouldn't have the intended effect anyway). Can this
> not be done based on the machine address?

Unfortunately no. I asked for a similar change when Ian sent the RFC
because I was concerned about performance, but it turns out it is not
possible. A pity.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 13:52:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 13:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VYh-0005mr-8H; Wed, 29 Jan 2014 13:52:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8VYe-0005mm-Ke
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 13:52:00 +0000
Received: from [85.158.143.35:11804] by server-2.bemta-4.messagelabs.com id
	07/AD-10891-08709E25; Wed, 29 Jan 2014 13:52:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391003517!1649702!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31768 invoked from network); 29 Jan 2014 13:51:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 13:51:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97677729"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 13:51:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 08:51:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8VYZ-0001K9-59;
	Wed, 29 Jan 2014 13:51:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8VYY-0008UK-7u;
	Wed, 29 Jan 2014 13:51:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24593-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 13:51:54 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24593: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24593 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24593/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail REGR. vs. 24570
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate       fail REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           7 debian-install              fail pass in 24589
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)    broken pass in 24589
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24589 pass in 24593
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24589 pass in 24596-bisect
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 24589 pass in 24593

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24594-bisect
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24589 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24589 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24589 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24596 never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:13:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VsU-0006uu-Md; Wed, 29 Jan 2014 14:12:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dietmar.hahn@ts.fujitsu.com>) id 1W8VsT-0006up-3G
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 14:12:29 +0000
Received: from [85.158.143.35:50298] by server-1.bemta-4.messagelabs.com id
	34/38-31661-C4C09E25; Wed, 29 Jan 2014 14:12:28 +0000
X-Env-Sender: dietmar.hahn@ts.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391004747!1659362!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9499 invoked from network); 29 Jan 2014 14:12:27 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 14:12:27 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Received:From:To:Cc:Subject:Date:Message-ID:
	User-Agent:In-Reply-To:References:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	b=awMJ9rL6y/NB2zpSIrud01aRJnOBAX1O029rAo7iHrgwxbyN6kT/CrNU
	ErCNjc5YK/VixTjxKnxP902+l/xUOTcrCIr0pvaVv69f9scD9H0iLfV81
	FevzYXdJ/teEFub7AOrhO9RLZVdK5R5W5GeY8Ze3162RxVGCWHX/n8fxT
	L0J3O5wwGPQhhqOfe6RQSsMPHvJyIlwYR3wkLsve8NAc5tvyghFPXm46f
	AwARDNbR7jF4CpHyc+4umd+n/n8f+;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1391004748; x=1422540748;
	h=from:to:cc:subject:date:message-id:in-reply-to:
	references:mime-version:content-transfer-encoding;
	bh=t2ehrIWV+w+lS6WbOfD7tDLCWtOPDk7WxKorV/P6UL4=;
	b=A77l47irtdclKyCDLQCkR2tdQMJYmA7DYiAOEkJY0Ln+MDBeOyQb2Sth
	p6YMxiRyDGPibjW6MD/d6gtUNa3xbve48JcjW/EVXPt2zeCViUIFeWmbg
	n6dYIB3dg1l9qabHZjGPD+QV9J7dlBq714h5ATqck2TWbfWjS/eLBGmuU
	7voxld7PRnKR29QQwoHdHEAtuM3D0MImk8eGDwuzDMlqN5+MI1iSUdy1q
	1GDksB0BV6KMdGKU5kP1oa+NFlb+P;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,742,1384297200"; d="scan'208";a="184047723"
Received: from unknown (HELO abgdgate60u.abg.fsc.net) ([172.25.138.90])
	by dgate10u.abg.fsc.net with ESMTP; 29 Jan 2014 15:12:27 +0100
X-IronPort-AV: E=Sophos;i="4.95,742,1384297200"; d="scan'208";a="79007136"
Received: from sanpedro.mch.fsc.net ([172.17.20.6])
	by abgdgate60u.abg.fsc.net with ESMTP; 29 Jan 2014 15:12:27 +0100
Received: from amur.localnet (amur.mch.fsc.net [10.172.102.141])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id E65009E8E62;
	Wed, 29 Jan 2014 15:12:26 +0100 (CET)
From: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 15:12:26 +0100
Message-ID: <9938784.nn6jIk1DgB@amur>
User-Agent: KMail/4.11.3 (Linux/3.11.6-4-desktop; KDE/4.11.3; x86_64; ; )
In-Reply-To: <52D7CC3E020000780011435C@nat28.tlf.novell.com>
References: <1538524.5AKIkpF9LB@amur>
	<52D7CC3E020000780011435C@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

sorry for the delay.

On Thursday, 16 January 2014, 11:10:38, Jan Beulich wrote:
> >>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> > when creating a larger (> 50 GB) HVM guest with maxmem > memory, we get
> > soft lockups from time to time.
> > 
> > kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
> > 
> > I tracked this down to the call of xc_domain_set_pod_target() and further
> > p2m_pod_set_mem_target().
> > 
> > Unfortunately I can check this only with xen-4.2.2, as I don't have a machine
> > with enough memory for current hypervisors. But the code seems to be nearly
> > the same.
> 
> While I still haven't seen a formal report of this against SLE11,
> attached is a draft patch against the SP3 code base adding manual
> preemption to the hypercall path of privcmd. This is only lightly
> tested, and therefore still has a little bit of debugging code left in
> there. Mind giving this a try (perhaps together with the patch
> David had sent for the other issue; there may still be a need for
> further preemption points in the IOCTL_PRIVCMD_MMAP*
> handling, but without knowing for sure whether that matters to
> you I didn't want to add it right away)?
> 
> Jan

Today I did some tests with the patch. As the debug part didn't compile, I
changed the per-CPU variables to local variables.

OK, it works! I tried several times to start a domU with
memory=100GB and maxmem=230GB and never got a soft lockup.
The following messages appeared in /var/log/messages on the first start:
Jan 29 14:14:45 gut1 kernel: [  178.976373] psi[03] 00000000:1 #2
Jan 29 14:14:46 gut1 kernel: [  179.008774] psi[03] 00000000:1 #4
Jan 29 14:14:46 gut1 kernel: [  179.073048] psi[03] 00000000:1 #8
Jan 29 14:14:46 gut1 kernel: [  179.219272] psi[03] 00000000:1 #10
Jan 29 14:14:47 gut1 kernel: [  180.220803] psi[03] 00000000:1 #20
Jan 29 14:14:48 gut1 kernel: [  181.844153] psi[03] 00000000:1 #40
Jan 29 14:14:51 gut1 kernel: [  184.769331] psi[03] 00000000:1 #80
Jan 29 14:14:56 gut1 kernel: [  189.169159] psi[03] 00000000:1 #100
Jan 29 14:14:57 gut1 kernel: [  190.178545] psi[03] 00000000:1 #200
Jan 29 14:15:03 gut1 kernel: [  196.256353] psi[00] 00000000:1 #1
Jan 29 14:15:03 gut1 kernel: [  196.260928] psi[00] 00000000:1 #2
Jan 29 14:15:03 gut1 kernel: [  196.497156] psi[00] 00000000:1 #4
Jan 29 14:15:03 gut1 kernel: [  196.552303] psi[00] 00000000:1 #8
Jan 29 14:15:04 gut1 kernel: [  197.035527] psi[00] 00000000:1 #10
Jan 29 14:15:04 gut1 kernel: [  197.060626] psi[01] 00000000:1 #1
Jan 29 14:15:04 gut1 kernel: [  197.064101] psi[01] 00000000:1 #2
Jan 29 14:15:04 gut1 kernel: [  197.096719] psi[01] 00000000:1 #4
Jan 29 14:15:04 gut1 kernel: [  197.148756] psi[01] 00000000:1 #8
Jan 29 14:15:04 gut1 kernel: [  197.517184] psi[01] 00000000:1 #10
Jan 29 14:15:05 gut1 kernel: [  198.153211] psi[01] 00000000:1 #20
Jan 29 14:15:06 gut1 kernel: [  199.162541] psi[02] 00000000:1 #1
Jan 29 14:15:06 gut1 kernel: [  199.164895] psi[02] 00000000:1 #2
Jan 29 14:15:06 gut1 kernel: [  199.169576] psi[02] 00000000:1 #4
Jan 29 14:15:06 gut1 kernel: [  199.178073] psi[02] 00000000:1 #8
Jan 29 14:15:06 gut1 kernel: [  199.195693] psi[02] 00000000:1 #10
Jan 29 14:15:06 gut1 kernel: [  199.335857] psi[02] 00000000:1 #20
Jan 29 14:15:06 gut1 kernel: [  199.805027] psi[02] 00000000:1 #40
Jan 29 14:15:07 gut1 kernel: [  200.753118] psi[00] 00000000:1 #20
Jan 29 14:15:08 gut1 kernel: [  201.524368] psi[01] 00000000:1 #40
Jan 29 14:15:09 gut1 kernel: [  202.692159] psi[01] 00000000:1 #80
Jan 29 14:15:11 gut1 kernel: [  204.968433] psi[01] 00000000:1 #100
Jan 29 14:15:16 gut1 kernel: [  209.712892] psi[01] 00000000:1 #200
Jan 29 14:15:32 gut1 kernel: [  225.940798] psi[01] 00000000:1 #400
Jan 29 14:15:38 gut1 kernel: [  231.360556] psi[00] 00000000:1 #40

And on the second start:
Jan 29 14:49:19 gut1 kernel: [ 2250.788926] psi[02] 00000000:1 #80
Jan 29 14:49:26 gut1 kernel: [ 2257.360767] psi[02] 00000000:1 #100
Jan 29 14:49:37 gut1 kernel: [ 2268.912916] psi[02] 00000000:1 #200
Jan 29 14:50:09 gut1 kernel: [ 2300.804211] psi[01] 00000000:1 #800

Thanks.

Dietmar.

-- 
Company details: http://ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from amur.localnet (amur.mch.fsc.net [10.172.102.141])
	by sanpedro.mch.fsc.net (Postfix) with ESMTP id E65009E8E62;
	Wed, 29 Jan 2014 15:12:26 +0100 (CET)
From: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 15:12:26 +0100
Message-ID: <9938784.nn6jIk1DgB@amur>
User-Agent: KMail/4.11.3 (Linux/3.11.6-4-desktop; KDE/4.11.3; x86_64; ; )
In-Reply-To: <52D7CC3E020000780011435C@nat28.tlf.novell.com>
References: <1538524.5AKIkpF9LB@amur>
	<52D7CC3E020000780011435C@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] POD: soft lockups in dom0 kernel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

sorry for the delay.

On Thursday, 16 January 2014, 11:10:38, Jan Beulich wrote:
> >>> On 05.12.13 at 14:55, Dietmar Hahn <dietmar.hahn@ts.fujitsu.com> wrote:
> > when creating a larger (> 50 GB) HVM guest with maxmem > memory, we get
> > soft lockups from time to time.
> > 
> > kernel: [  802.084335] BUG: soft lockup - CPU#1 stuck for 22s! [xend:31351]
> > 
> > I tracked this down to the call of xc_domain_set_pod_target() and further
> > p2m_pod_set_mem_target().
> > 
> > Unfortunately I can check this only with xen-4.2.2, as I don't have a machine
> > with enough memory for current hypervisors. But it seems the code is nearly
> > the same.
> 
> While I still haven't seen a formal report of this against SLE11,
> attached is a draft patch against the SP3 code base adding manual
> preemption to the hypercall path of privcmd. This is only lightly
> tested, and therefore still has a little bit of debugging code left in
> there. Mind giving this a try (perhaps together with the patch
> David had sent for the other issue - there may still be a need for
> further preemption points in the IOCTL_PRIVCMD_MMAP*
> handling, but without knowing for sure whether that matters to
> you I didn't want to add this right away)?
> 
> Jan
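The manual-preemption approach described above can be modelled in plain C. This is only an illustrative sketch of the pattern (split a long-running operation into batches, check between batches whether to yield, and record a resume point); the names `op_state`, `process_some`, `need_preempt`, and `BATCH_SIZE` are invented here and are not the actual SLE11 privcmd symbols:

```c
#include <stdbool.h>
#include <stddef.h>

#define BATCH_SIZE 0x2000

struct op_state {
    size_t next;      /* first unprocessed item, saved across restarts */
    size_t total;
    int restarts;     /* how often we bailed out with -EAGAIN */
};

/* Stand-in for hypercall_preempt_check(): yield after every batch. */
static bool need_preempt(size_t done)
{
    return done != 0 && (done % BATCH_SIZE) == 0;
}

/* Returns 0 when complete, -11 (-EAGAIN) when preempted. */
static int process_some(struct op_state *s)
{
    size_t done = 0;
    while (s->next < s->total) {
        /* ... one unit of work for item s->next would go here ... */
        s->next++;
        if (need_preempt(++done)) {
            s->restarts++;
            return -11;   /* caller re-invokes to resume from s->next */
        }
    }
    return 0;
}

/* Caller side: keep re-issuing until the operation completes.  In the
 * real kernel path the return to this loop is where scheduling (and
 * hence soft-lockup avoidance) happens. */
static int run_to_completion(struct op_state *s)
{
    int rc;
    while ((rc = process_some(s)) == -11)
        ;
    return rc;
}
```

Because progress lives in the state structure rather than on the stack, the operation can be restarted any number of times without redoing work, which is what makes the watchdog-visible stalls go away.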

Today I did some tests with the patch. As the debug part didn't compile, I
changed the per-CPU variables to local variables.

OK, it works! I tried several times to start a domU with
memory=100GB and maxmem=230GB and never got a soft lockup.
The following messages appeared in /var/log/messages on the first start:
Jan 29 14:14:45 gut1 kernel: [  178.976373] psi[03] 00000000:1 #2
Jan 29 14:14:46 gut1 kernel: [  179.008774] psi[03] 00000000:1 #4
Jan 29 14:14:46 gut1 kernel: [  179.073048] psi[03] 00000000:1 #8
Jan 29 14:14:46 gut1 kernel: [  179.219272] psi[03] 00000000:1 #10
Jan 29 14:14:47 gut1 kernel: [  180.220803] psi[03] 00000000:1 #20
Jan 29 14:14:48 gut1 kernel: [  181.844153] psi[03] 00000000:1 #40
Jan 29 14:14:51 gut1 kernel: [  184.769331] psi[03] 00000000:1 #80
Jan 29 14:14:56 gut1 kernel: [  189.169159] psi[03] 00000000:1 #100
Jan 29 14:14:57 gut1 kernel: [  190.178545] psi[03] 00000000:1 #200
Jan 29 14:15:03 gut1 kernel: [  196.256353] psi[00] 00000000:1 #1
Jan 29 14:15:03 gut1 kernel: [  196.260928] psi[00] 00000000:1 #2
Jan 29 14:15:03 gut1 kernel: [  196.497156] psi[00] 00000000:1 #4
Jan 29 14:15:03 gut1 kernel: [  196.552303] psi[00] 00000000:1 #8
Jan 29 14:15:04 gut1 kernel: [  197.035527] psi[00] 00000000:1 #10
Jan 29 14:15:04 gut1 kernel: [  197.060626] psi[01] 00000000:1 #1
Jan 29 14:15:04 gut1 kernel: [  197.064101] psi[01] 00000000:1 #2
Jan 29 14:15:04 gut1 kernel: [  197.096719] psi[01] 00000000:1 #4
Jan 29 14:15:04 gut1 kernel: [  197.148756] psi[01] 00000000:1 #8
Jan 29 14:15:04 gut1 kernel: [  197.517184] psi[01] 00000000:1 #10
Jan 29 14:15:05 gut1 kernel: [  198.153211] psi[01] 00000000:1 #20
Jan 29 14:15:06 gut1 kernel: [  199.162541] psi[02] 00000000:1 #1
Jan 29 14:15:06 gut1 kernel: [  199.164895] psi[02] 00000000:1 #2
Jan 29 14:15:06 gut1 kernel: [  199.169576] psi[02] 00000000:1 #4
Jan 29 14:15:06 gut1 kernel: [  199.178073] psi[02] 00000000:1 #8
Jan 29 14:15:06 gut1 kernel: [  199.195693] psi[02] 00000000:1 #10
Jan 29 14:15:06 gut1 kernel: [  199.335857] psi[02] 00000000:1 #20
Jan 29 14:15:06 gut1 kernel: [  199.805027] psi[02] 00000000:1 #40
Jan 29 14:15:07 gut1 kernel: [  200.753118] psi[00] 00000000:1 #20
Jan 29 14:15:08 gut1 kernel: [  201.524368] psi[01] 00000000:1 #40
Jan 29 14:15:09 gut1 kernel: [  202.692159] psi[01] 00000000:1 #80
Jan 29 14:15:11 gut1 kernel: [  204.968433] psi[01] 00000000:1 #100
Jan 29 14:15:16 gut1 kernel: [  209.712892] psi[01] 00000000:1 #200
Jan 29 14:15:32 gut1 kernel: [  225.940798] psi[01] 00000000:1 #400
Jan 29 14:15:38 gut1 kernel: [  231.360556] psi[00] 00000000:1 #40

Second:
Jan 29 14:49:19 gut1 kernel: [ 2250.788926] psi[02] 00000000:1 #80
Jan 29 14:49:26 gut1 kernel: [ 2257.360767] psi[02] 00000000:1 #100
Jan 29 14:49:37 gut1 kernel: [ 2268.912916] psi[02] 00000000:1 #200
Jan 29 14:50:09 gut1 kernel: [ 2300.804211] psi[01] 00000000:1 #800

Thanks.

Dietmar.

-- 
Company details: http://ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:15:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Vvm-00071C-GA; Wed, 29 Jan 2014 14:15:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Vvk-000716-LK
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:15:52 +0000
Received: from [85.158.143.35:13581] by server-1.bemta-4.messagelabs.com id
	BB/7E-31661-71D09E25; Wed, 29 Jan 2014 14:15:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391004949!1649640!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26671 invoked from network); 29 Jan 2014 14:15:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:15:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97687168"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 14:15:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 09:15:49 -0500
Message-ID: <1391004947.31814.119.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 14:15:47 +0000
In-Reply-To: <52E9098B0200007800117EC5@nat28.tlf.novell.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
> >      return 0;
> >  }
> >  
> > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > +{
> > +    DECLARE_DOMCTL;
> > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > +    domctl.domain = (domid_t)domid;
> 
> Why can't the function parameter be domid_t right away?

It seemed that the vast majority of the current libxc functions were
using uint32_t for whatever reason.

> 
> > +    case XEN_DOMCTL_cacheflush:
> > +    {
> > +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
> > +        if ( __copy_to_guest(u_domctl, domctl, 1) )
> 
> While you certainly say so in the public header change, I think you
> recall that we pretty recently changed another hypercall to not be
> the only inconsistent one modifying the input structure in order to
> handle hypercall preemption.

That was a XENMEM op though, IIRC -- does the same requirement also hold
for domctls?

How/where would you recommend saving the progress here?

> 
> Further - who's responsible for initiating the resume after a
> preemption? p2m_cache_flush() returning -EAGAIN isn't being
> handled here, and also not in libxc (which would be awkward
> anyway).

I've once again fallen into the trap of thinking the common domctl code
would do it for me.

> 
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
> 
> Sort of odd that you have to map a page in order to flush cache
> (which I very much hope is physically indexed, or else this
> operation wouldn't have the intended effect anyway). Can this
> not be done based on the machine address?

Sadly not; yes, it is very annoying.

Yes, the cache is required to be physically indexed from ARMv7 onwards.

> 
> >          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> > -        if ( op == RELINQUISH && count >= 0x2000 )
> > +        switch ( op )
> >          {
> > -            if ( hypercall_preempt_check() )
> > +        case RELINQUISH:
> > +        case CACHEFLUSH:
> > +            if ( count >= 0x2000 && hypercall_preempt_check() )
> >              {
> >                  p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
> >                  rc = -EAGAIN;
> >                  goto out;
> >              }
> >              count = 0;
> > +            break;
> > +        case INSERT:
> > +        case ALLOCATE:
> > +        case REMOVE:
> > +            /* No preemption */
> > +            break;
> >          }
> 
> Unrelated to the patch here, but don't you have a problem if you
> don't preempt _at all_ here for certain operation types? Or is a
> limit on the number of iterations being enforced elsewhere for
> those?

Good question.

The tools/guest-accessible paths here are through
guest_physmap_add/remove_page. I think the only exposed paths
that pass a non-zero order are XENMEM_populate_physmap and
XENMEM_exchange, both of which restrict the maximum order.

I don't think those guest_physmap_* are preemptible on x86 either?

It's possible that we should nevertheless handle preemption on those
code paths as well, but I don't think it is critical right now (or at
least not critical enough to warrant a freeze exception for 4.4).
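The progress-saving pattern under discussion (record how far the loop got in `p2m->lowest_mapped_gfn` and return `-EAGAIN` so the hypercall can be restarted) can be sketched as below. The `switch` mirrors the quoted patch hunk, but the surrounding scaffolding is invented for illustration and simplifies the real code (e.g. the batch counter handling, and a real restart would begin from the saved gfn):

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#ifndef EAGAIN
#define EAGAIN 11
#endif

enum p2m_op { INSERT, ALLOCATE, REMOVE, RELINQUISH, CACHEFLUSH };

struct p2m {
    uint64_t lowest_mapped_gfn;   /* resume point saved on preemption */
};

/* Stand-in for hypercall_preempt_check(); pretend we must always yield. */
static int hypercall_preempt_check(void) { return 1; }

static int apply_op(struct p2m *p2m, enum p2m_op op,
                    uint64_t start_addr, uint64_t end_addr)
{
    uint64_t addr;
    unsigned long count = 0;

    for (addr = start_addr; addr < end_addr; addr += 1UL << PAGE_SHIFT) {
        /* ... per-page work for 'op' would go here ... */
        count++;

        switch (op) {
        case RELINQUISH:
        case CACHEFLUSH:
            /* Unbounded operations get preemption points. */
            if ( count >= 0x2000 && hypercall_preempt_check() )
            {
                /* Record the last gfn handled so a restart can resume. */
                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                return -EAGAIN;
            }
            break;
        case INSERT:
        case ALLOCATE:
        case REMOVE:
            /* No preemption: callers bound the order/iteration count. */
            break;
        }
    }
    return 0;
}
```

The open question in the thread (who re-invokes after `-EAGAIN`?) is exactly the part this sketch leaves to the caller: something above this function has to notice the error and restart the operation.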

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:18:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8VyJ-0007B4-57; Wed, 29 Jan 2014 14:18:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8VyH-0007AE-Ni
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:18:29 +0000
Received: from [85.158.139.211:12059] by server-14.bemta-5.messagelabs.com id
	F7/DC-27598-4BD09E25; Wed, 29 Jan 2014 14:18:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391005106!378539!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28817 invoked from network); 29 Jan 2014 14:18:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:18:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97687923"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 14:18:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 09:18:25 -0500
Message-ID: <1391005104.31814.121.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 29 Jan 2014 14:18:24 +0000
In-Reply-To: <52E900A3.6070307@linaro.org>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-1-git-send-email-ian.campbell@citrix.com>
	<52E900A3.6070307@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/4] Revert "xen: arm: force guest memory
 accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 13:22 +0000, Julien Grall wrote:
> 
> On 29/01/14 12:11, Ian Campbell wrote:
> > This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
> >
> > This approach has a shortcoming in that it breaks when a guest enables its
> > MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
> > same time. It turns out that FreeBSD does this.
> 
> By reverting this patch, you also revert some interesting fixes:
>    - Fixing HSR_SYSREG_CRN_MASK
>    - Use of HSR_SYSREG_
> 
> I think both of these changes should be kept.

Good point. The issues are not actual problems with the main change
being reverted, but we should keep the fixes anyway since they are correct.

> Otherwise, I would move this patch to the end of the series if we need
> to bisect the code.

Yes, for some reason I had thought this one needed to come first, but I
think I was wrong.

Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:24:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:24:15 +0000
From xen-devel-bounces@lists.xen.org Wed Jan 29 14:24:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8W3e-0007gW-BJ; Wed, 29 Jan 2014 14:24:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8W3b-0007gR-UC
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:24:00 +0000
Received: from [85.158.139.211:29043] by server-11.bemta-5.messagelabs.com id
	0F/56-23886-FFE09E25; Wed, 29 Jan 2014 14:23:59 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391005438!394159!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1ODk3MjY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1ODk3MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24132 invoked from network); 29 Jan 2014 14:23:58 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.161)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 14:23:58 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391005438; l=1358;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=2LcG5mk5pclM7BbYts/a/IKXWfs=;
	b=kRN/nxjMdWv2KmXgv8ET98r1oZ9NFJo0p9eBOhrZr+PqWfYE6aU1gp5Unc8fqhn77oc
	PIUdK6pVRK8vwj7HGVIcdaTM8tVPEUoEABA6gAXW9wr/zOM4/kA7MIPmPLXwBvLxHaRpZ
	u/FGUe/CKkj1ECKduHFiH0Pe1acmbPXBFPg=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id w03acbq0TENwSiR
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 29 Jan 2014 15:23:58 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9315550269; Wed, 29 Jan 2014 15:23:57 +0100 (CET)
Date: Wed, 29 Jan 2014 15:23:57 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129142357.GA27051@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
	<20140129111939.GA26899@aepfle.de>
	<1390995947.31814.79.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390995947.31814.79.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, Ian Campbell wrote:

> On Wed, 2014-01-29 at 12:19 +0100, Olaf Hering wrote:
> > The toolstack just does not know if a phy device supports it, or if file
> > backed storage can do hole punching. If feature-discard is set and the
> > frontend sends a discard request, the backend would return an error
> > (like ENOTSUPPORTED) and the frontend internally disables the discard
> > flag. That's how it is done in pvops and the forward-ported xenlinux
> > tree.
> 
> That sounds good.
> 
> Is it worth noting that enable-discard=1 is only advisory and will be
> ignored if the underlying storage and/or backend doesn't understand it?
> The real benefit of this option is to be able to force it off rather
> than on I think.

Yes, the purpose is to turn it off. I will extend the description.
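
For illustration only: with the patch as posted, a disk line that forces
discard off might look something like the following (the exact key
spelling, discard= vs enable-discard=, is still being settled in this
thread, so treat this as a sketch):

```
disk = [ 'format=raw, vdev=xvda, access=rw, discard=0, target=/dev/vg/guest-disk' ]
```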

> > That's what I'm asking you. Why is readwrite set here, and later on also
> > in the .l file? At least just setting it here did not unconditionally
> > enable it if no discard= was specified. I have not traced the code why
> > that happens.
> 
> One for Ian J I think. Perhaps it is just setting the default?

For some reason this setting is lost, at least for my discard flag.  The
readwrite part may suffer from the same issue; otherwise access_set
could be removed.  I will trace the code to understand how it works.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:33:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WC9-0008A3-P9; Wed, 29 Jan 2014 14:32:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1W8WC8-00089y-R4
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 14:32:49 +0000
Received: from [85.158.137.68:3533] by server-9.bemta-3.messagelabs.com id
	93/90-10184-01119E25; Wed, 29 Jan 2014 14:32:48 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391005957!12093015!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjYyNzIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22422 invoked from network); 29 Jan 2014 14:32:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:32:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; 
	d="asc'?scan'208";a="95706597"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 14:32:37 +0000
Received: from [127.0.0.1] (10.80.16.67) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 09:32:36 -0500
Message-ID: <1391005955.21756.7.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Wed, 29 Jan 2014 14:32:35 +0000
Organization: Citrix Ltd. UK
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f' option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7406744020036818592=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7406744020036818592==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-l+686buVh8C/R7V9YaVr"

--=-l+686buVh8C/R7V9YaVr
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

standalone-reset's usage says:

  usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
   branch and xenbranch default, separately, to xen-unstable
  options:
   -f<flight>     generate flight "flight", default is "standalone"

but then there is no place where '-f' is processed, and hence
no real way to pass a specific flight name to make-flight.

Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>

diff --git a/standalone-reset b/standalone-reset
index 8be7e86..846561d 100755
--- a/standalone-reset
+++ b/standalone-reset
@@ -27,6 +27,15 @@ options:
 END
 }
 
+flight="standalone"
+while getopts "f:" opt; do
+    case "$opt" in
+        f) flight=${OPTARG};;
+        *) usage; exit 1;;
+    esac
+done
+shift $((OPTIND-1))
+
 if [ -f standalone.config ] ; then
     . standalone.config
 fi

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
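
The option parsing added by the patch can be exercised on its own. A
minimal sketch (not part of the patch; the argument values are made up
for illustration):

```shell
#!/bin/sh
# Stand-alone reproduction of the getopts loop the patch adds.
# Simulated command line: -f myflight xen-unstable
set -- -f myflight xen-unstable

flight="standalone"              # default used when no -f is given
while getopts "f:" opt; do
    case "$opt" in
        f) flight=${OPTARG};;
        *) exit 1;;
    esac
done
shift $((OPTIND-1))              # drop the parsed options from $@

echo "flight=$flight branch=$1"
```

This prints `flight=myflight branch=xen-unstable`: getopts consumes
`-f myflight`, and the shift leaves the positional branch arguments in
place.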


--=-l+686buVh8C/R7V9YaVr
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLpEQMACgkQk4XaBE3IOsQ9UgCaAjps9TPYuQlfErsp7j2fzG9k
X1kAoISPY+jf2CQJkfiYRCWoYY1WpKBH
=e/ke
-----END PGP SIGNATURE-----

--=-l+686buVh8C/R7V9YaVr--


--===============7406744020036818592==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7406744020036818592==--


From xen-devel-bounces@lists.xen.org Wed Jan 29 14:35:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:35:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WEV-0008G8-Dq; Wed, 29 Jan 2014 14:35:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W8WEU-0008G2-0M
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:35:14 +0000
Received: from [85.158.143.35:24230] by server-2.bemta-4.messagelabs.com id
	89/CD-10891-1A119E25; Wed, 29 Jan 2014 14:35:13 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391006112!1663866!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8145 invoked from network); 29 Jan 2014 14:35:12 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:35:12 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so1508122lan.26
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 06:35:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=MAXZuZz+rDFvYeeRBfgiyOnuNY5G7QY6sm2PfguoMpQ=;
	b=ILuz8Lt+7LxTnsFANTSx4Im4N87pOUJzF+Y9KeFHseh03IK0iYOU3iHc51Qw2aRl2D
	E4U1sOpoG5nRf9F/S7VDRGxQb0GveReofGaMgancbKf6BnQCj4R2DelWAkKKQGhvaLFo
	iQmm0nu9d/Pddv2IC726r5SHd27g4Qu1hHIqBUD4JDlzILm0LSMc9J7w/aO7oTiDHEVe
	a7+T+fumkGsbwT6NptS9yE85cR1h+6q22JMbwT5RMJm5ZYRdDhTmLF8Phde1q7nrlxFt
	OC4mycW5GGBLibijEiVaFWND3x4w+D9gvgdrnOOlqywMM3FppWc+gjhTZvXo591EysVo
	eEBQ==
X-Received: by 10.112.135.233 with SMTP id pv9mr71944lbb.69.1391006110647;
	Wed, 29 Jan 2014 06:35:10 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id gb8sm2694018lbc.13.2014.01.29.06.35.03
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 06:35:04 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1390988815.31814.46.camel@kazak.uk.xensource.com>
Date: Wed, 29 Jan 2014 18:35:02 +0400
Message-Id: <7862F173-E728-4F01-BD74-A8A59EED2D1F@gmail.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
	<BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
	<1390988815.31814.46.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Jan 29, 2014, at 1:46 PM, Ian Campbell wrote:

> On Tue, 2014-01-28 at 23:45 +0400, Igor Kozhukhov wrote:
>> On Jan 28, 2014, at 11:34 PM, Konrad Rzeszutek Wilk wrote:
>> 
>>> On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
>>>> Hello All,
>>>> 
>>>> i'm working on porting xen-4.3 to DilOS (an illumos-based platform).
>>>> 
>>>> i have problems with PV guest load.
>>>> dom0 started, i can see info by 'xl info'.
>>>> 
>>>> first: i see platform ID=38, but i couldn't find it in xen/public/platform.h
>>>> 
>>>> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
>>>> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
>>>> 
>>>> could you please let me know - what is the 38 platform hypercall?
>>> 
>>> tmem_op
>> 
>> tmem_op is defined in xen/public/xen.h, but ID 38 is not defined in xen/public/platform.h
> 
> platform.h only declares one subset of hypercalls, the XENPF interfaces.
> tmem_op is not one of those interfaces. You want include/public/tmem.h
Am i right here - do i have to add to platform.h:
#define XENPF_tmem_op 38
?
i have not found ID 38 in xen-unstable at xen/include/public/platform.h

>>> 
>>>> 
>>>> do i need implement it first ?
>>> 
>>> No. But you should have stub functions in your hypercall page to at least
>>> return -ENOSYS for everything you don't implement.
>> 
>> based on current code i see:
>> return -X_EINVAL;
>> will it be correct to return it if ID not implemented ?
> 
> This appears to be an illumos return code. It is of course up to the OS
> to decide what to return from an unimplemented ioctl, but the
> hypervisor itself will return -ENOSYS to an unimplemented hypercall.
you mean - hypervisor = dom0 ?
or hypervisor = xen.gz + xen-syms ?

> 
>>> How do you construct your hyperpage?
>>>> 
>> 
>> example here : https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/i86xpv/io/privcmd_hcall.c
>> 
>> from line: 379
> 
> It seems like illumos has chosen to implement the privcmd hypercall
> piecemeal on a hypercall-by-hypercall basis (in fact on a subop by subop
> basis). This is up to you but you might find it easier to just do as
> Linux does and mirror all hypercalls made via this path through to the
> hypervisor.
> 
> One downside of your approach is that you end up hardcoding
> non-stable-ABIs into your kernel -- e.g. XEN_SYSCTL and XEN_DOMCTL.
> These are not considered stable across Xen releases which means that you
> will need to update your kernel whenever you update Xen. If you just
> mirror the hypercalls through without inspection then when upgrading Xen
> you only need to update the Xen tools and the hypervisor but not the
> kernel.
if it is possible - could you please point me at some files with the implementation on Linux?
where is it located?
thanks for this info.

> I suppose you could also take the middle ground and only pass through
> the non-stable interfaces but continue to check the rest.
thanks - i'll take a look.

> Ian.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:36:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WFn-0008MR-VV; Wed, 29 Jan 2014 14:36:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1W8WFm-0008MH-FC
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:36:34 +0000
Received: from [85.158.137.68:26493] by server-16.bemta-3.messagelabs.com id
	9E/B3-29917-1F119E25; Wed, 29 Jan 2014 14:36:33 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391006191!12106915!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18230 invoked from network); 29 Jan 2014 14:36:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 14:36:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0TEaF28022323
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 14:36:16 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0TEaEI3006226
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 29 Jan 2014 14:36:15 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0TEaEsG028048; Wed, 29 Jan 2014 14:36:14 GMT
Received: from [192.168.0.100] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 06:36:13 -0800
Message-ID: <52E911CA.9020700@oracle.com>
Date: Wed, 29 Jan 2014 22:35:54 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: James Dingwall <james.dingwall@zynstra.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com> <52E7E594.2050104@zynstra.com>
In-Reply-To: <52E7E594.2050104@zynstra.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 01/29/2014 01:15 AM, James Dingwall wrote:
> Bob Liu wrote:
>>
>> I have made a patch that reserves an extra 10% of the original total
>> memory; this way I think we can make the system much more reliable in
>> all cases. Could you please test it? You no longer need to set
>> selfballoon_reserved_mb yourself.
> I have to say that with this patch the situation has definitely
> improved.  I have been running it with 3.12.[78] and 3.13 and pushing it
> quite hard for the last 10 days or so.  Unfortunately yesterday I got an

Good news!

> OOM during a compile (link) of webkit-gtk.  I think your patch is part
> of the solution but I'm not sure if the other bit is simply to be more
> generous with the guest memory allocation or something else.  Having
> tested with memory = 512  and no tmem I get an OOM with the same
> compile, with memory = 1024 and no tmem the compile completes ok (both
> cases without maxmem).  As my domains are usually started with memory =
> 512 and maxmem = 1024 it seems that there should be sufficient with my

But I think the tmem/balloon driver has never been able to expand guest
memory from 'memory' to 'maxmem' automatically.

> default parameters. Also for an experiment I set memory=1024 and removed
> maxmem and when tmem is activated I see "[ 3393.884105] xen:balloon:
> reserve_additional_memory: add_memory() failed: -17" printed many times
> in the guest kernel log.
> 

I'll take a look at it.

-- 
Regards,
-Bob

> Regards,
> James
> 
> [456770.748827] Mem-Info:
> [456770.748829] Node 0 DMA per-cpu:
> [456770.748833] CPU    0: hi:    0, btch:   1 usd:   0
> [456770.748835] CPU    1: hi:    0, btch:   1 usd:   0
> [456770.748836] Node 0 DMA32 per-cpu:
> [456770.748838] CPU    0: hi:  186, btch:  31 usd: 173
> [456770.748840] CPU    1: hi:  186, btch:  31 usd: 120
> [456770.748846] active_anon:91431 inactive_anon:96269 isolated_anon:0
>  active_file:13286 inactive_file:31256 isolated_file:0
>  unevictable:0 dirty:0 writeback:0 unstable:0
>  free:1155 slab_reclaimable:7001 slab_unreclaimable:3932
>  mapped:2300 shmem:88 pagetables:2576 bounce:0
>  free_cma:0 totalram:255578 balloontarget:327320
> [456770.748849] Node 0 DMA free:1956kB min:88kB low:108kB high:132kB
> active_anon:3128kB inactive_anon:3328kB active_file:1888kB
> inactive_file:2088kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:15996kB managed:15912kB mlocked:0kB dirty:0kB
> writeback:0kB mapped:32kB shmem:0kB slab_reclaimable:684kB
> slab_unreclaimable:720kB kernel_stack:72kB pagetables:488kB unstable:0kB
> bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:17841
> all_unreclaimable? yes
> [456770.748863] lowmem_reserve[]: 0 469 469 469
> [456770.748866] Node 0 DMA32 free:2664kB min:2728kB low:3408kB
> high:4092kB active_anon:362596kB inactive_anon:381748kB
> active_file:51256kB inactive_file:122936kB unevictable:0kB
> isolated(anon):0kB isolated(file):0kB present:1032192kB
> managed:1006400kB mlocked:0kB dirty:0kB writeback:0kB mapped:9168kB
> shmem:352kB slab_reclaimable:27320kB slab_unreclaimable:15008kB
> kernel_stack:1784kB pagetables:9816kB unstable:0kB bounce:0kB
> free_cma:0kB writeback_tmp:0kB pages_scanned:1382021 all_unreclaimable? yes
> [456770.748874] lowmem_reserve[]: 0 0 0 0
> [456770.748877] Node 0 DMA: 1*4kB (R) 0*8kB 0*16kB 5*32kB (R) 2*64kB (R)
> 1*128kB (R) 0*256kB 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 1956kB
> [456770.748890] Node 0 DMA32: 666*4kB (U) 0*8kB 0*16kB 0*32kB 0*64kB
> 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2664kB
> [456770.748899] 48556 total pagecache pages
> [456770.748901] 35203 pages in swap cache
> [456770.748903] Swap cache stats: add 358621, delete 323418, find
> 206319/224002
> [456770.748904] Free swap  = 1671532kB
> [456770.748905] Total swap = 2097148kB
> [456770.748906] 262047 pages RAM
> [456770.748907] 0 pages HighMem/MovableOnly
> [456770.748908] 6448 pages reserved
> <snip process list>
> [456770.749070] Out of memory: Kill process 28271 (ld) score 110 or
> sacrifice child
> [456770.749073] Killed process 28271 (ld) total-vm:358488kB,
> anon-rss:324588kB, file-rss:1456kB
> 
>>
>>
>> xen_selfballoon_deaggressive.patch
>>
>>
>> diff --git a/drivers/xen/xen-selfballoon.c
>> b/drivers/xen/xen-selfballoon.c
>> index 21e18c1..8f33254 100644
>> --- a/drivers/xen/xen-selfballoon.c
>> +++ b/drivers/xen/xen-selfballoon.c
>> @@ -175,6 +175,7 @@ static void frontswap_selfshrink(void)
>>   #endif /* CONFIG_FRONTSWAP */
>>     #define MB2PAGES(mb)    ((mb) << (20 - PAGE_SHIFT))
>> +#define PAGES2MB(pages) ((pages) >> (20 - PAGE_SHIFT))
>>     /*
>>    * Use current balloon size, the goal (vm_committed_as), and hysteresis
>> @@ -525,6 +526,7 @@ EXPORT_SYMBOL(register_xen_selfballooning);
>>   int xen_selfballoon_init(bool use_selfballooning, bool
>> use_frontswap_selfshrink)
>>   {
>>       bool enable = false;
>> +    unsigned long reserve_pages;
>>         if (!xen_domain())
>>           return -ENODEV;
>> @@ -549,6 +551,26 @@ int xen_selfballoon_init(bool use_selfballooning,
>> bool use_frontswap_selfshrink)
>>       if (!enable)
>>           return -ENODEV;
>>   +    /*
>> +     * Give selfballoon_reserved_mb a default value (10% of total RAM
>> +     * pages) to make selfballoon less aggressive.
>> +     *
>> +     * There are two reasons:
>> +     * 1) The goal_page doesn't contain some pages used by kernel space,
>> +     *    like the slab cache and pages used by device drivers.
>> +     *
>> +     * 2) The balloon driver may not give memory back to the guest OS
>> +     *    fast enough when the workload suddenly acquires a lot of
>> +     *    memory.
>> +     *
>> +     * In both cases, the guest OS will suffer from memory pressure and
>> +     * the OOM killer may be triggered.
>> +     * By reserving an extra 10% of total RAM pages, we can keep the
>> +     * system much more reliable and responsive in some cases.
>> +     */
>> +    if (!selfballoon_reserved_mb) {
>> +        reserve_pages = totalram_pages / 10;
>> +        selfballoon_reserved_mb = PAGES2MB(reserve_pages);
>> +    }
>>       schedule_delayed_work(&selfballoon_worker, selfballoon_interval
>> * HZ);
>>         return 0;
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:40:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WJE-0000UE-Fd; Wed, 29 Jan 2014 14:40:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>)
	id 1W8WHx-00007B-0n; Wed, 29 Jan 2014 14:38:49 +0000
Received: from [85.158.139.211:23248] by server-17.bemta-5.messagelabs.com id
	5A/D2-31975-87219E25; Wed, 29 Jan 2014 14:38:48 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391006325!403215!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9299 invoked from network); 29 Jan 2014 14:38:47 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:38:47 -0000
Received: by mail-ob0-f173.google.com with SMTP id vb8so2037016obc.32
	for <multiple recipients>; Wed, 29 Jan 2014 06:38:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=RCJSoIX+pwVrAi2YjOVSxX0pL6q0PfyrcbAKwKKWmO0=;
	b=sW+Y9i50tqLQKLJPrUzIm2j7Mhl/grdYVDYyN9i8s64ir7derjgcjir6KW//dCqIsA
	9KFifMLyUQG+jYSbNMFmuelUnPz6OL8AiEEazEtY1k5/difB1LMH/KSfyotdgZ0AoChy
	kV7DSWEgN7Ak8xjXgAvlBci0AIz1HXdtJmuJAsg3GmAKpKqoLLdob1+Dn+OaeN7sF/jU
	aKCkyXTS2r/VH7L7gedlIK7ngAvAZqnQ0L1oOJRKAPI6i4HWHlkdOqHp5JzuISpqecbo
	/ksNEdbV/usB9hWoxtNSclxwFK8Ycj8NrzcKCynt97JGp76Vhl6612IyVyQWKjobvOp1
	DRBQ==
MIME-Version: 1.0
X-Received: by 10.182.204.41 with SMTP id kv9mr621271obc.78.1391006325199;
	Wed, 29 Jan 2014 06:38:45 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Wed, 29 Jan 2014 06:38:45 -0800 (PST)
In-Reply-To: <1391001261.31814.115.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
Date: Wed, 29 Jan 2014 07:38:45 -0700
Message-ID: <CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Wed, 29 Jan 2014 14:40:07 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So to fix the problem, I need to update qemu to version 1.7 or later?
BTW, I had this problem in both PVHVM and PV guests.
Does a PV guest also rely on qemu?

Thanks for all the replies!

On Wed, Jan 29, 2014 at 6:14 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 13:04 +0000, Anthony PERARD wrote:
>> On Wed, Jan 29, 2014 at 12:39:47PM +0000, Ian Campbell wrote:
>> > On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
>> > > On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
>> > > > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
>> > > > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
>> > > > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
>> > > > > > > Sorry for the so late reply.
>> > > > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
>> > > > > >
>> > > > > > And are you using the version of qemu-xen which ships with those
>> > > > > > releases or your own version, perhaps from upstream?
>> > > > > >
>> > > > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
>> > > > > > 4.3.x but I thought it had been added during the 4.4.x development
>> > > > > > cycle. Adding Anthony + xen-devel to confirm.
>> > > > >
>> > > > > We've added vcpu hotplug to our tree in Xen 4.3.
>> > > > >
>> > > > > Upstream QEMU is able to do vcpu hotplug with Xen only as of the latest
>> > > > > release, 1.7. The two previous releases (1.5 and 1.6) are missing two
>> > > > > patches, and any QEMU release before that does not support cpu hotplug.
>> > > >
>> > > > OK. IIRC Xen 4.3 included qemu 1.3 or so?
>> > >
>> > > That's right, qemu 1.3.
>> > >
>> > > > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
>> > >
>> > > That's correct.
>> > >
>> > > > So it also misses those patches.
>> > >
>> > > No, I've put those patches on top of the merge; otherwise there would
>> > > have been a regression.
>> >
>> > So this functionality should work with xen 4.4.0-rc1 (using whatever
>> > version was referenced by xen.git/Config.mk) or not?
>>
>> Yes, it should. You can go back to my first reply to this thread and
>> read the whole mail ;-).
>
> Sorry, but this wasn't unambiguously clear to me from your initial
> reply, which spoke about broken releases and a fix which was in a
> version we don't include yet.
>
>> I replied to what appeared to me to be the "error" that
>> made someone think something was going wrong.
>
> So it sounds like Yun Wang is using an upstream qemu and not our branch
> and is therefore missing some fixes?
>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:40:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WJE-0000UE-Fd; Wed, 29 Jan 2014 14:40:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>)
	id 1W8WHx-00007B-0n; Wed, 29 Jan 2014 14:38:49 +0000
Received: from [85.158.139.211:23248] by server-17.bemta-5.messagelabs.com id
	5A/D2-31975-87219E25; Wed, 29 Jan 2014 14:38:48 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391006325!403215!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9299 invoked from network); 29 Jan 2014 14:38:47 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:38:47 -0000
Received: by mail-ob0-f173.google.com with SMTP id vb8so2037016obc.32
	for <multiple recipients>; Wed, 29 Jan 2014 06:38:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=RCJSoIX+pwVrAi2YjOVSxX0pL6q0PfyrcbAKwKKWmO0=;
	b=sW+Y9i50tqLQKLJPrUzIm2j7Mhl/grdYVDYyN9i8s64ir7derjgcjir6KW//dCqIsA
	9KFifMLyUQG+jYSbNMFmuelUnPz6OL8AiEEazEtY1k5/difB1LMH/KSfyotdgZ0AoChy
	kV7DSWEgN7Ak8xjXgAvlBci0AIz1HXdtJmuJAsg3GmAKpKqoLLdob1+Dn+OaeN7sF/jU
	aKCkyXTS2r/VH7L7gedlIK7ngAvAZqnQ0L1oOJRKAPI6i4HWHlkdOqHp5JzuISpqecbo
	/ksNEdbV/usB9hWoxtNSclxwFK8Ycj8NrzcKCynt97JGp76Vhl6612IyVyQWKjobvOp1
	DRBQ==
MIME-Version: 1.0
X-Received: by 10.182.204.41 with SMTP id kv9mr621271obc.78.1391006325199;
	Wed, 29 Jan 2014 06:38:45 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Wed, 29 Jan 2014 06:38:45 -0800 (PST)
In-Reply-To: <1391001261.31814.115.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
Date: Wed, 29 Jan 2014 07:38:45 -0700
Message-ID: <CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Wed, 29 Jan 2014 14:40:07 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So to fix the problem, I need to update the qemu version to version
1.7 or later?
BTW. I had this problem in both pvhvm and pv guest.
Does pv guest rely on qemu also?

Thanks for all the reply!

On Wed, Jan 29, 2014 at 6:14 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 13:04 +0000, Anthony PERARD wrote:
>> On Wed, Jan 29, 2014 at 12:39:47PM +0000, Ian Campbell wrote:
>> > On Wed, 2014-01-29 at 12:36 +0000, Anthony PERARD wrote:
>> > > On Wed, Jan 29, 2014 at 12:22:17PM +0000, Ian Campbell wrote:
>> > > > On Wed, 2014-01-29 at 12:18 +0000, Anthony PERARD wrote:
>> > > > > On Wed, Jan 29, 2014 at 10:11:44AM +0000, Ian Campbell wrote:
>> > > > > > On Tue, 2014-01-28 at 18:44 -0700, Yun Wang wrote:
>> > > > > > > Sorry for the so late reply.
>> > > > > > > I had this issue in Xen-4.3.0 (official release) and Xen-4.4.0-rc1-25-g9a80d50
>> > > > > >
>> > > > > > And are you using the version of qemu-xen which ships with those
>> > > > > > releases or your own version, perhaps from upstream?
>> > > > > >
>> > > > > > ISTR that vcpu hotplug for HVM guests was missing from qemu-xen in Xen
>> > > > > > 4.3.x but I thought it had been added during the 4.4.x development
>> > > > > > cycle. Adding Anthony + xen-devel to confirm.
>> > > > >
>> > > > > We've added vcpu hotplug to our tree in Xen 4.3.
>> > > > >
>> > > > > QEMU upstream is able to do vcpu hotplug with Xen only as of the
>> > > > > latest release, 1.7. The two previous releases (1.5 and 1.6) are
>> > > > > missing two patches, and any QEMU release before that does not
>> > > > > support cpu hotplug.
>> > > >
>> > > > OK. IIRC Xen 4.3 included qemu 1.3 or so?
>> > >
>> > > That's right, qemu 1.3.
>> > >
>> > > > 4.4-rc1 should have had the 1.6 merge in it, but not 1.7, correct?
>> > >
>> > > That's correct.
>> > >
>> > > > So it also misses those patches.
>> > >
>> > > No, I've put those patches on top of the merge; otherwise there would
>> > > have been a regression.
>> >
>> > So this functionality should work with xen 4.4.0-rc1 (using whatever
>> > version was referenced by xen.git/Config.mk) or not?
>>
>> Yes, it should. You can go back to my first reply to this thread and
>> read the whole mail ;-).
>
> Sorry, but this wasn't unambiguously clear to me from your initial
> reply, which spoke about broken releases and a fix which was in a
> version we don't include yet.
>
>>  I replied to what appeared to me to be the "error" that
>> made someone think that something was going wrong.
>
> So it sounds like Yun Wang is using an upstream qemu and not our branch
> and is therefore missing some fixes?
>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:43:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WMI-0000kn-4U; Wed, 29 Jan 2014 14:43:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8WMG-0000kh-Hf
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:43:16 +0000
Received: from [85.158.143.35:57356] by server-2.bemta-4.messagelabs.com id
	08/BD-10891-38319E25; Wed, 29 Jan 2014 14:43:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391006594!1661045!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17204 invoked from network); 29 Jan 2014 14:43:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:43:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="95710956"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 14:43:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 09:43:12 -0500
Message-ID: <1391006591.31814.127.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Wed, 29 Jan 2014 14:43:11 +0000
In-Reply-To: <7862F173-E728-4F01-BD74-A8A59EED2D1F@gmail.com>
References: <29A82C69-DB0A-46EB-B11F-A7B535CD90AE@gmail.com>
	<20140128193430.GB9842@phenom.dumpdata.com>
	<BCD4CE71-FA73-4CF8-976F-4F1EE785654F@gmail.com>
	<1390988815.31814.46.camel@kazak.uk.xensource.com>
	<7862F173-E728-4F01-BD74-A8A59EED2D1F@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 18:35 +0400, Igor Kozhukhov wrote:
> On Jan 29, 2014, at 1:46 PM, Ian Campbell wrote:
> 
> > On Tue, 2014-01-28 at 23:45 +0400, Igor Kozhukhov wrote:
> >> On Jan 28, 2014, at 11:34 PM, Konrad Rzeszutek Wilk wrote:
> >> 
> >>> On Tue, Jan 28, 2014 at 11:21:34PM +0400, Igor Kozhukhov wrote:
> >>>> Hello All,
> >>>> 
> >>>> I'm working on porting xen-4.3 to DilOS (an illumos-based platform).
> >>>> 
> >>>> I have problems with PV guest load.
> >>>> dom0 starts, and I can see info via 'xl info'.
> >>>> 
> >>>> First: I see platform ID=38, but I couldn't find it in xen/public/platform.h
> >>>> 
> >>>> Jan 28 01:16:44 myhost privcmd: == HYPERVISOR_platform_op 38
> >>>> Jan 28 01:16:44 myhost privcmd: unrecognized HYPERVISOR_platform_op 38
> >>>> 
> >>>> Could you please let me know - what is platform hypercall 38?
> >>> 
> >>> tmem_op
> >> 
> >> tmem_op is defined in xen/public/xen.h, but ID 38 is not defined in xen/public/platform.h
> > 
> > platform.h only declares one subset of hypercalls, the XENPF interfaces.
> > tmem_op is not one of those interfaces. You want include/public/tmem.h
> Am I right here - do I have to add to platform.h:
> #define XENPF_tmem_op 38
> ?
> I have not found ID 38 in xen-unstable in xen/include/public/platform.h

No. tmem is not a platform op suboperation, it is a top level hypercall
in its own right.

I think Konrad & I may have misunderstood what you are seeing though: If
you are seeing hypercall 7 (== __HYPERVISOR_platform_op) with subop 38
then I'm afraid I don't know where this has come from. I can see no
reference to it in the Xen history.

As a debugging aid, I would be inclined to inject an error (e.g. a
SIGSEGV) into the calling process at this point, so that it can be
trapped at the place making the call.
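A minimal userspace illustration of the idea above (everything here is a mock with invented names; no real privcmd driver is involved): if the driver delivers SIGSEGV to the process that issued the unknown hypercall, the process stops right at the call site, where a debugger or core dump reveals the caller.

```c
#include <signal.h>

/* Mock of the debugging aid: the driver's "unrecognized op" branch
 * injects SIGSEGV into the caller instead of silently failing. */

static volatile sig_atomic_t trapped;

static void on_segv(int sig)
{
    (void)sig;
    trapped = 1;            /* a debugger would stop here instead */
}

/* Stand-in for the driver's dispatch of an unknown subop. */
static void mock_privcmd(int op)
{
    if (op == 38)           /* the mystery subop from the log above */
        raise(SIGSEGV);     /* deliver the fault to the caller */
}

int demo(void)
{
    signal(SIGSEGV, on_segv);
    mock_privcmd(38);       /* raise() runs the handler synchronously */
    return trapped;
}
```

Returning from the handler is safe here because the signal comes from raise(), not a hardware fault; in the real scenario one would let the default action produce a core dump.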

> 
> >>> 
> >>>> 
> >>>> do I need to implement it first?
> >>> 
> >>> No. But you should have stub functions in your hypercall page to at least
> >>> return -ENOSYS for everything you don't implement.
> >> 
> >> based on the current code i see:
> >> return -X_EINVAL;
> >> will it be correct to return this if an ID is not implemented?
> > 
> > This appears to be an illumos return code. It is of course up to the OS
> > to decide what to return from an unimplemented ioctl, but the
> > hypervisor itself will return -ENOSYS for an unimplemented hypercall.
> you mean - hypervisor = dom0 ?
> or hypervisor = xen.gz + xen-syms ?

hypervisor == xen.
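A small sketch of the stub-table advice from earlier in this thread, with invented names (NR_HYPERCALLS, the table, the stub): prefill every hypercall slot with a stub returning -ENOSYS (Xen's convention for "not implemented"), then install real handlers over the stubs. Returning -EINVAL instead would be indistinguishable from "known op, bad arguments".

```c
#include <errno.h>
#include <stddef.h>

#define NR_HYPERCALLS 64              /* illustrative table size */

typedef long (*hypercall_fn)(void *arg);

/* Default stub: the op exists as a slot but is not implemented. */
static long stub_enosys(void *arg)
{
    (void)arg;
    return -ENOSYS;
}

static hypercall_fn hypercall_table[NR_HYPERCALLS];

void init_hypercall_table(void)
{
    for (int i = 0; i < NR_HYPERCALLS; i++)
        hypercall_table[i] = stub_enosys;  /* every slot answers ENOSYS */
    /* implemented hypercalls would then be installed over the stubs */
}

long dispatch(unsigned int op, void *arg)
{
    if (op >= NR_HYPERCALLS)
        return -ENOSYS;               /* out-of-range ops too */
    return hypercall_table[op](arg);
}
```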

> >>> How do you construct your hyperpage?
> >>>> 
> >> 
> >> example here : https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/i86xpv/io/privcmd_hcall.c
> >> 
> >> from line: 379
> > 
> > It seems like illumos has chosen to implement the privcmd hypercall
> > piecemeal on a hypercall-by-hypercall basis (in fact on a subop by subop
> > basis). This is up to you but you might find it easier to just do as
> > Linux does and mirror all hypercalls made via this path through to the
> > hypervisor.
> > 
> > One downside of your approach is that you end up hardcoding
> > non-stable-ABIs into your kernel -- e.g. XEN_SYSCTL and XEN_DOMCTL.
> > These are not considered stable across Xen releases which means that you
> > will need to update your kernel whenever you update Xen. If you just
> > mirror the hypercalls through without inspection then when upgrading Xen
> > you only need to update the Xen tools and the hypervisor but not the
> > kernel.
> if it is possible - could you please point me to the files with the
> Linux implementation? where is it located?
> thanks for this info.

drivers/xen/privcmd.c in the upstream Linux source.
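A sketch (mock names throughout) of the pass-through design Ian describes: the privcmd ioctl treats the hypercall as opaque data, an op number plus a handful of argument words, much like Linux's privcmd_hypercall structure, and forwards it untouched. Because nothing in this layer decodes SYSCTL or DOMCTL payloads, upgrading Xen requires new tools and hypervisor but no kernel change.

```c
#include <stdint.h>
#include <errno.h>

/* Opaque hypercall request: op number plus argument words,
 * mirroring the shape of Linux's privcmd_hypercall. */
struct mock_hypercall {
    uint64_t op;
    uint64_t arg[5];
};

/* Stand-in for the hypervisor entry point; pretend only ops 0-2
 * exist in this toy model. */
static long mock_hypervisor(uint64_t op, const uint64_t *arg)
{
    (void)arg;
    return op <= 2 ? 0 : -ENOSYS;   /* Xen answers ENOSYS for unknown ops */
}

/* Kernel side: no switch over op, no per-subop marshalling -
 * everything is mirrored straight through. */
long mock_privcmd_ioctl(const struct mock_hypercall *hc)
{
    return mock_hypervisor(hc->op, hc->arg);
}
```

The contrast with the per-subop approach in privcmd_hcall.c is that unknown or newly added ops here reach the hypervisor, which gives the authoritative -ENOSYS, rather than being rejected in the kernel.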

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:45:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WOX-0000up-6U; Wed, 29 Jan 2014 14:45:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1W8WOV-0000uj-Tp
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 14:45:36 +0000
Received: from [85.158.139.211:57625] by server-14.bemta-5.messagelabs.com id
	52/19-27598-F0419E25; Wed, 29 Jan 2014 14:45:35 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391006734!400045!1
X-Originating-IP: [213.199.154.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1465 invoked from network); 29 Jan 2014 14:45:34 -0000
Received: from mail-am1lp0015.outbound.protection.outlook.com (HELO
	emea01-am1-obe.outbound.protection.outlook.com) (213.199.154.15)
	by server-12.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	29 Jan 2014 14:45:34 -0000
Received: from DBXPR03MB222.eurprd03.prod.outlook.com (10.242.141.140) by
	DBXPR03MB144.eurprd03.prod.outlook.com (10.242.141.23) with Microsoft
	SMTP Server (TLS) id 15.0.851.15; Wed, 29 Jan 2014 14:45:33 +0000
Received: from AMSPRD0710HT002.eurprd07.prod.outlook.com (157.56.249.85) by
	DBXPR03MB222.eurprd03.prod.outlook.com (10.242.141.140) with Microsoft
	SMTP Server (TLS) id 15.0.859.15; Wed, 29 Jan 2014 14:45:32 +0000
Message-ID: <52E91404.30602@zynstra.com>
Date: Wed, 29 Jan 2014 14:45:24 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Bob Liu <bob.liu@oracle.com>
References: <52A602E5.3080300@zynstra.com>
	<20131209214816.GA3000@phenom.dumpdata.com>
	<52A72AB8.9060707@zynstra.com>
	<20131210152746.GF3184@phenom.dumpdata.com>
	<52A812B0.6060607@oracle.com> <52A89334.3090007@zynstra.com>
	<52B18F44.2030500@oracle.com> <52B3443F.5060704@zynstra.com>
	<52B3B6D7.50606@oracle.com> <52BBEBEF.8040509@zynstra.com>
	<52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com> <52E7E594.2050104@zynstra.com>
	<52E911CA.9020700@oracle.com>
In-Reply-To: <52E911CA.9020700@oracle.com>
X-Originating-IP: [157.56.249.85]
X-ClientProxiedBy: DB3PR03CA006.eurprd03.prod.outlook.com (10.242.134.16) To
	DBXPR03MB222.eurprd03.prod.outlook.com (10.242.141.140)
X-Forefront-PRVS: 01068D0A20
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(6009001)(377454003)(189002)(199002)(24454002)(479174003)(164054003)(51704005)(83506001)(23756003)(77982001)(59766001)(66066001)(31966008)(56776001)(87266001)(64126003)(47976001)(15202345003)(79102001)(15975445006)(50986001)(74662001)(80022001)(85306002)(50466002)(54316002)(74502001)(81686001)(94946001)(76482001)(47446002)(90146001)(19580395003)(83072002)(80976001)(74366001)(81816001)(93136001)(81542001)(54356001)(47776003)(74876001)(81342001)(63696002)(42186004)(76796001)(74706001)(83322001)(47736001)(56816005)(4396001)(49866001)(53806001)(76786001)(92726001)(36756003)(87976001)(51856001)(85852003)(33656001)(77096001)(69226001)(93516002)(94316002)(86362001)(46102001)(92566001);
	DIR:OUT; SFP:1102; SCL:1; SRVR:DBXPR03MB222;
	H:AMSPRD0710HT002.eurprd07.prod.outlook.com; CLIP:157.56.249.85;
	FPR:; InfoNoRecordsMX:1; A:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Bob Liu wrote:
> On 01/29/2014 01:15 AM, James Dingwall wrote:
>> Bob Liu wrote:
>>> I have made a patch reserving an extra 10% of the original total memory;
>>> this way I think we can make the system much more reliable in all cases.
>>> Could you please test it? You don't need to set
>>> selfballoon_reserved_mb yourself any more.
>> I have to say that with this patch the situation has definitely
>> improved.  I have been running it with 3.12.[78] and 3.13 and pushing it
>> quite hard for the last 10 days or so.  Unfortunately yesterday I got an
> Good news!
>
>> OOM during a compile (link) of webkit-gtk.  I think your patch is part
>> of the solution but I'm not sure if the other bit is simply to be more
>> generous with the guest memory allocation or something else.  Having
>> tested with memory = 512  and no tmem I get an OOM with the same
>> compile, with memory = 1024 and no tmem the compile completes ok (both
>> cases without maxmem).  As my domains are usually started with memory =
>> 512 and maxmem = 1024 it seems that there should be sufficient with my
> But I think the tmem/balloon driver has never been able to expand guest
> memory from 'memory' to 'maxmem' automatically.
I am carrying this patch for libxl (4.3.1) because maxmem wasn't being 
honoured.

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 356f920..fb7965d 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -235,7 +235,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
 
-    xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
+    xc_domain_setmaxmem(ctx->xch, domid, info->max_memkb + LIBXL_MAXMEM_CONSTANT);
     xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
     state->store_domid = xs_domid ? atoi(xs_domid) : 0;
     free(xs_domid);

>
>> default parameters. Also for an experiment I set memory=1024 and removed
>> maxmem and when tmem is activated I see "[ 3393.884105] xen:balloon:
>> reserve_additional_memory: add_memory() failed: -17" printed many times
>> in the guest kernel log.
>>
> I'll take a look at it.
It seems possible that this has the same cause as the message being 
printed in dom0, which I reported in 
http://lists.xen.org/archives/html/xen-devel/2012-12/msg01607.html and 
for which no fix seems to have made it into the kernel.  I'm still 
working around this with:

#!/bin/sh

CURRENT_KB="/sys/bus/xen_memory/devices/xen_memory0/info/current_kb"
TARGET_KB="/sys/bus/xen_memory/devices/xen_memory0/target_kb"

CKB=$(cat "${CURRENT_KB}")
TKB=$(cat "${TARGET_KB}")

if [ "${TKB}" -gt "${CKB}" ] ; then
         echo "Resizing dom0 memory balloon target to ${CKB}"
         echo "${CKB}" > "${TARGET_KB}"
fi

Thanks,
James


-- 

*James Dingwall*

Script Monkey


Zynstra is a private limited company registered in England and Wales 
(registered number 07864369).  Our registered office is 5 New Street 
Square, London, EC4A 3TW and our headquarters are at Bath Ventures, 
Broad Quay, Bath, BA1 1UD.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 14:46:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 14:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8WOy-0000y2-K7; Wed, 29 Jan 2014 14:46:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8WOv-0000xT-5f; Wed, 29 Jan 2014 14:46:01 +0000
Received: from [85.158.139.211:20883] by server-11.bemta-5.messagelabs.com id
	42/88-23886-82419E25; Wed, 29 Jan 2014 14:46:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391006757!397978!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18554 invoked from network); 29 Jan 2014 14:45:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 14:45:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97698591"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 14:45:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 09:45:57 -0500
Message-ID: <1391006755.31814.129.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Yun Wang <bimingery@gmail.com>
Date: Wed, 29 Jan 2014 14:45:55 +0000
In-Reply-To: <CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 07:38 -0700, Yun Wang wrote:
> So to fix the problem, I need to update the qemu version to version
> 1.7 or later?

Yes. Or, AIUI, you can use the version of Qemu which is bundled with the
Xen releases.

> BTW. I had this problem in both pvhvm and pv guest.
> Does pv guest rely on qemu also?

It does not.

If you are having an issue with a PV guest vcpu hotplug then it is not
the same underlying issue. Please report it separately with full logs,
config info etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 15:01:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 15:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Wdt-0002z3-0s; Wed, 29 Jan 2014 15:01:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8Wdr-0002yu-Uk
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 15:01:28 +0000
Received: from [85.158.143.35:50155] by server-1.bemta-4.messagelabs.com id
	A7/92-31661-7C719E25; Wed, 29 Jan 2014 15:01:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391007686!1663463!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29316 invoked from network); 29 Jan 2014 15:01:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 15:01:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 15:01:26 +0000
Message-Id: <52E925E20200007800117FA3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 15:01:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
In-Reply-To: <1391004947.31814.119.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 15:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
>> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
>> > +    case XEN_DOMCTL_cacheflush:
>> > +    {
>> > +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
>> > +        if ( __copy_to_guest(u_domctl, domctl, 1) )
>> 
>> While you certainly say so in the public header change, I think you
>> recall that we pretty recently changed another hypercall to not be
>> the only inconsistent one modifying the input structure in order to
>> handle hypercall preemption.
> 
> That was XENMEM though IIRC -- is the same requirement also true of
> domctl's?

Not necessarily - I was just trying to point out the issue to
avoid needing to fix it later on.

> How/where would you recommend saving the progress here?

Depending on the nature, a per-domain or per-vCPU field that
gets acted upon before issuing any new, similar operation. I.e.
something along the lines of x86's old_guest_table. It's ugly, I
know. But with exposing domctls to semi-trusted guests in
mind, you may use state modifiable by the caller here only if
tampering with that state isn't going to harm the whole system
(if the guest being started is affected in the case here that
obviously wouldn't be a problem).

>> >          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
>> > -        if ( op == RELINQUISH && count >= 0x2000 )
>> > +        switch ( op )
>> >          {
>> > -            if ( hypercall_preempt_check() )
>> > +        case RELINQUISH:
>> > +        case CACHEFLUSH:
>> > +            if ( count >= 0x2000 && hypercall_preempt_check() )
>> >              {
>> >                  p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
>> >                  rc = -EAGAIN;
>> >                  goto out;
>> >              }
>> >              count = 0;
>> > +            break;
>> > +        case INSERT:
>> > +        case ALLOCATE:
>> > +        case REMOVE:
>> > +            /* No preemption */
>> > +            break;
>> >          }
>> 
>> Unrelated to the patch here, but don't you have a problem if you
>> don't preempt _at all_ here for certain operation types? Or is a
>> limit on the number of iterations being enforced elsewhere for
>> those?
> 
> Good question.
> 
> The tools/guest accessible paths here are through
> guest_physmap_add/remove_page. I think the only paths which are exposed
> that pass a non-zero order are XENMEM_populate_physmap and
> XENMEM_exchange, both of which restrict the maximum order.
> 
> I don't think those guest_physmap_* are preemptible on x86 either?

They aren't, but they have a strict upper limit of at most dealing
with a 1Gb page at a time. If that's similar for ARM, I don't see
an immediate issue.

> It's possible that we should nevertheless handle preemption on those
> code paths as well, but I don't think it is critical right now (or at
> least not critical enough to warrant a freeze exception for 4.4).

Indeed. Of course the 1Gb limit mentioned above, while perhaps
acceptable to process without preemption right now, is still pretty
high for achieving really good responsiveness, so we may need to
do something about that going forward.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 15:06:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 15:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Wiq-0003FR-0U; Wed, 29 Jan 2014 15:06:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8Wio-0003FM-OX
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 15:06:34 +0000
Received: from [85.158.143.35:9727] by server-3.bemta-4.messagelabs.com id
	42/57-11539-AF819E25; Wed, 29 Jan 2014 15:06:34 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391007993!1650169!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6127 invoked from network); 29 Jan 2014 15:06:33 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 15:06:33 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391007993; l=899;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=k5ds8o3onIt/scWmdzeArQaOJ6I=;
	b=AAiZNSZBqi/Ku9LGiG3bK4rhq47koFtW7LwK6pJZnnzH0NC6hZ+cOB3XGpnx9oza40U
	4JbUdjXjO77rC3c4rDe/k4HbniiaoweYxvn1WcwFouXg0MsVQaWd1CgWssx4T/7JD4slR
	n/tbwnQ8+XydJWU0VRoaIQ6zUsy9LLTG1dE=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id h021b0q0TF6XRgK
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 29 Jan 2014 16:06:33 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 94AB050269; Wed, 29 Jan 2014 16:06:32 +0100 (CET)
Date: Wed, 29 Jan 2014 16:06:32 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140129150632.GA31958@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, Olaf Hering wrote:

> Handle new option discard=on|off for disk configuration. It is supposed
> to disable discard support if file based backing storage was
> intentionally created non-sparse to avoid fragmentation of the file.

> +++ b/tools/libxl/libxl_types.idl
> @@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
>      ("removable", integer),
>      ("readwrite", integer),
>      ("is_cdrom", integer),
> +    ("discard_enable", integer),

This new field changes the API, _libxl_types.h:struct libxl_device_disk
gets a new member. How should code using this new flag recognize if it's
present? If it is supposed to be part of a new libxl-4.5 API then
out-of-tree code could put the code into #ifdef LIBXL_API_VERSION >= X.
If not, how should it be done?

For my own purpose I will overload ->readwrite to carry the discard flag
and to preserve the ABI.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 15:34:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 15:34:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8X8v-0004SV-Ln; Wed, 29 Jan 2014 15:33:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8X8t-0004SQ-1O
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 15:33:31 +0000
Received: from [85.158.143.35:6084] by server-2.bemta-4.messagelabs.com id
	C1/3B-10891-A4F19E25; Wed, 29 Jan 2014 15:33:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391009608!1687975!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26729 invoked from network); 29 Jan 2014 15:33:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 15:33:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 15:33:14 +0000
Message-Id: <52E92D580200007800117FC1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 15:33:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part94A72258.0__="
Subject: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part94A72258.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
(due to them running in ring 3), I drafted a mostly equivalent PV
solution, at this point mainly to see how useful people think this
would be.

It is based on switching page tables: alongside the two page tables
we have right now - one containing user mappings only, the other
containing both kernel and user mappings - a third category gets
added, containing kernel mappings only. Linux would have such a
table readily available and hence would presumably require only
modestly intrusive changes. This of course makes clear that the
approach comes with quite a bit of a performance cost. Furthermore,
the state management requires a couple of extra instructions to be
added to reasonably hot hypervisor code paths.
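In other words, the hypervisor picks one of three page table bases
depending on the vCPU's current mode. A minimal sketch of that
selection logic (names here are illustrative only; in the patch the
actual decision lives in update_cr3()):

```c
/* Simplified sketch of the three-way page-table selection described
 * above. TF_kernel_mode/TF_smap_mode mirror the flags used in the
 * attached patch; pick_table() and the enum are illustrative. */

#define TF_kernel_mode 1
#define TF_smap_mode   2

enum table {
    TABLE_USER,            /* user mappings only */
    TABLE_KERNEL_AND_USER, /* kernel + user mappings */
    TABLE_KERNEL_ONLY      /* kernel mappings only (SMAP mode) */
};

static enum table pick_table(unsigned int flags)
{
    if ( !(flags & TF_kernel_mode) )
        return TABLE_USER;             /* guest user space */
    if ( !(flags & TF_smap_mode) )
        return TABLE_KERNEL_AND_USER;  /* kernel, user mappings visible */
    return TABLE_KERNEL_ONLY;          /* kernel, user mappings hidden */
}
```

A guest kernel entering SMAP mode thus pays a full page table switch,
which is where the performance cost mentioned above comes from.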

Hence, before going further with this approach (for now I have only
got it to the point that an unpatched Linux is unaffected, i.e. I
haven't coded up the Linux side yet), I would be interested to hear
people's opinions on whether the performance cost is worth it, or
whether we should instead consider PVH the one and only route towards
gaining that extra level of security.

And if it is considered worthwhile, comments on the actual
implementation (including the notes at the top of the attached
patch) would of course be welcome too.
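For illustration, a capable guest kernel might use the two new MMUEXT
commands roughly as below. This is only a sketch of the intended flow:
hypervisor_mmuext_op() is a stand-in for the real hypercall wrapper
(mocked here), and the command values are the ones defined in the
attached patch's public/xen.h hunk.

```c
#define MMUEXT_NEW_SMAP_BASEPTR 21  /* from the patch's public/xen.h hunk */
#define MMUEXT_SET_SMAP_MODE    22

struct mmuext_op {
    unsigned int cmd;
    union { unsigned long mfn; unsigned int val; } arg1;
};

/* Mock standing in for the real HYPERVISOR_mmuext_op() wrapper. */
static int hypervisor_mmuext_op(struct mmuext_op *op)
{
    (void)op;
    return 0;
}

static int enable_pv_smap(unsigned long kernel_only_mfn)
{
    struct mmuext_op op;

    /* 1. Register the kernel-only page table with Xen. */
    op.cmd = MMUEXT_NEW_SMAP_BASEPTR;
    op.arg1.mfn = kernel_only_mfn;
    if ( hypervisor_mmuext_op(&op) )
        return -1;

    /* 2. Enter SMAP mode (val: 1 = enable, 0 = disable). */
    op.cmd = MMUEXT_SET_SMAP_MODE;
    op.arg1.val = 1;
    return hypervisor_mmuext_op(&op);
}
```

Once the SMAP baseptr is registered, guest_smap_enabled() in the patch
becomes true and CLAC/STAC emulation as well as the trap-bounce TBF_SMAP
handling take effect.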

Jan


--=__Part94A72258.0__=
Content-Type: text/plain; name="x86-PV-64bit-SMAP.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-PV-64bit-SMAP.patch"

x86: PV SMAP for 64-bit guests=0A=0ATODO? Apart from TOGGLE_MODE(), should =
we enforce SMAP mode for other=0A      implied-supervisor guest memory =
accesses?=0ATODO? MMUEXT_SET_SMAP_MODE may better be replaced by a =
standalone=0A      hypercall (with just a single parameter); maybe by =
extending=0A      fpu_taskswitch.=0A=0ANote that the new state isn't being =
saved/restored. That's mainly=0Abecause a capable kernel, when migrated =
from an incapable hypervisor to=0Aa capable one, would likely want to take =
advantage of the capability,=0Aand hence would need to set up all state =
anyway. This also implies that=0Aa capable kernel ought to be prepared to =
get migrated to an incapable=0Ahypervisor (as the loss of functionality =
isn't essential, it just=0Aresults in security getting weakened).=0A=0A--- =
a/xen/arch/x86/domain.c=0A+++ b/xen/arch/x86/domain.c=0A@@ -1149,6 +1149,7 =
@@ static void load_segments(struct vcpu *n=0A             (unsigned long =
*)regs->rsp :=0A             (unsigned long *)pv->kernel_sp;=0A         =
unsigned long cs_and_mask, rflags;=0A+        int smap_mode =3D -1;=0A =0A =
        if ( is_pv_32on64_domain(n->domain) )=0A         {=0A@@ -1199,9 =
+1200,17 @@ static void load_segments(struct vcpu *n=0A         }=0A =0A   =
      if ( !(n->arch.flags & TF_kernel_mode) )=0A+        {=0A+            =
n->arch.flags |=3D TF_smap_mode;=0A             toggle_guest_mode(n);=0A+  =
      }=0A         else=0A+        {=0A             regs->cs &=3D ~3;=0A+  =
          smap_mode =3D guest_smap_mode(n);=0A+            if ( !set_smap_m=
ode(n, 1) )=0A+                smap_mode =3D -1;=0A+        }=0A =0A       =
  /* CS longword also contains full evtchn_upcall_mask. */=0A         =
cs_and_mask =3D (unsigned long)regs->cs |=0A@@ -1210,6 +1219,11 @@ static =
void load_segments(struct vcpu *n=0A         /* Fold upcall mask into =
RFLAGS.IF. */=0A         rflags  =3D regs->rflags & ~X86_EFLAGS_IF;=0A     =
    rflags |=3D !vcpu_info(n, evtchn_upcall_mask) << 9;=0A+        if ( =
smap_mode >=3D 0 )=0A+        {=0A+            rflags &=3D ~X86_EFLAGS_AC;=
=0A+            rflags |=3D !smap_mode << 18;=0A+        }=0A =0A         =
if ( put_user(regs->ss,            rsp- 1) |=0A              put_user(regs-=
>rsp,           rsp- 2) |=0A--- a/xen/arch/x86/domctl.c=0A+++ b/xen/arch/x8=
6/domctl.c=0A@@ -827,7 +827,7 @@ long arch_do_domctl(=0A                 =
evc->sysenter_callback_eip     =3D=0A                     v->arch.pv_vcpu.s=
ysenter_callback_eip;=0A                 evc->sysenter_disables_events  =
=3D=0A-                    v->arch.pv_vcpu.sysenter_disables_events;=0A+   =
                 !!(v->arch.pv_vcpu.sysenter_tbf & TBF_INTERRUPT);=0A      =
           evc->syscall32_callback_cs     =3D=0A                     =
v->arch.pv_vcpu.syscall32_callback_cs;=0A                 evc->syscall32_ca=
llback_eip    =3D=0A@@ -863,8 +863,9 @@ long arch_do_domctl(=0A            =
         evc->sysenter_callback_cs;=0A                 v->arch.pv_vcpu.syse=
nter_callback_eip     =3D=0A                     evc->sysenter_callback_eip=
;=0A-                v->arch.pv_vcpu.sysenter_disables_events  =3D=0A-     =
               evc->sysenter_disables_events;=0A+                v->arch.pv=
_vcpu.sysenter_tbf              =3D 0;=0A+                if ( evc->sysente=
r_disables_events )=0A+                    v->arch.pv_vcpu.sysenter_tbf =
|=3D TBF_INTERRUPT;=0A                 fixup_guest_code_selector(d, =
evc->syscall32_callback_cs);=0A                 v->arch.pv_vcpu.syscall32_c=
allback_cs     =3D=0A                     evc->syscall32_callback_cs;=0A---=
 a/xen/arch/x86/mm.c=0A+++ b/xen/arch/x86/mm.c=0A@@ -488,7 +488,7 @@ void =
write_ptbase(struct vcpu *v)=0A /*=0A  * Should be called after CR3 is =
updated.=0A  * =0A- * Uses values found in vcpu->arch.(guest_table and =
guest_table_user), and=0A+ * Uses values found in vcpu->arch.guest_table{,_=
user,_kernel}, and=0A  * for HVM guests, arch.monitor_table and hvm's =
guest CR3.=0A  *=0A  * Update ref counts to shadow tables appropriately.=0A=
@@ -505,8 +505,10 @@ void update_cr3(struct vcpu *v)=0A =0A     if ( =
!(v->arch.flags & TF_kernel_mode) )=0A         cr3_mfn =3D pagetable_get_pf=
n(v->arch.guest_table_user);=0A-    else=0A+    else if ( !guest_smap_mode(=
v) )=0A         cr3_mfn =3D pagetable_get_pfn(v->arch.guest_table);=0A+    =
else=0A+        cr3_mfn =3D pagetable_get_pfn(v->arch.pv_vcpu.guest_table_s=
map);=0A =0A     make_cr3(v, cr3_mfn);=0A }=0A@@ -2687,7 +2689,22 @@ int =
vcpu_destroy_pagetables(struct vcpu =0A                 rc =3D put_page_and=
_type_preemptible(page);=0A         }=0A         if ( !rc )=0A+        =
{=0A             v->arch.guest_table_user =3D pagetable_null();=0A+=0A+    =
        /* Drop ref to guest_table_smap (from MMUEXT_NEW_SMAP_BASEPTR). =
*/=0A+            mfn =3D pagetable_get_pfn(v->arch.pv_vcpu.guest_table_sma=
p);=0A+            if ( mfn )=0A+            {=0A+                page =3D =
mfn_to_page(mfn);=0A+                if ( paging_mode_refcounts(v->domain) =
)=0A+                    put_page(page);=0A+                else=0A+       =
             rc =3D put_page_and_type_preemptible(page);=0A+            =
}=0A+        }=0A+        if ( !rc )=0A+            v->arch.pv_vcpu.guest_t=
able_smap =3D pagetable_null();=0A     }=0A =0A     v->arch.cr3 =3D =
0;=0A@@ -3086,7 +3103,11 @@ long do_mmuext_op(=0A             }=0A         =
    break;=0A =0A-        case MMUEXT_NEW_USER_BASEPTR: {=0A+        case =
MMUEXT_NEW_USER_BASEPTR:=0A+        case MMUEXT_NEW_SMAP_BASEPTR: {=0A+    =
        pagetable_t *ppt =3D op.cmd =3D=3D MMUEXT_NEW_USER_BASEPTR=0A+     =
                          ? &curr->arch.guest_table_user=0A+               =
                : &curr->arch.pv_vcpu.guest_table_smap;=0A             =
unsigned long old_mfn;=0A =0A             if ( paging_mode_translate(curren=
t->domain) )=0A@@ -3095,7 +3116,7 @@ long do_mmuext_op(=0A                 =
break;=0A             }=0A =0A-            old_mfn =3D pagetable_get_pfn(cu=
rr->arch.guest_table_user);=0A+            old_mfn =3D pagetable_get_pfn(*p=
pt);=0A             /*=0A              * This is particularly important =
when getting restarted after the=0A              * previous attempt got =
preempted in the put-old-MFN phase.=0A@@ -3124,7 +3145,7 @@ long do_mmuext_=
op(=0A                 }=0A             }=0A =0A-            curr->arch.gue=
st_table_user =3D pagetable_from_pfn(op.arg1.mfn);=0A+            *ppt =3D =
pagetable_from_pfn(op.arg1.mfn);=0A =0A             if ( old_mfn !=3D 0 =
)=0A             {=0A@@ -3249,6 +3270,15 @@ long do_mmuext_op(=0A          =
   break;=0A         }=0A =0A+        case MMUEXT_SET_SMAP_MODE:=0A+       =
     if ( unlikely(is_pv_32bit_domain(d)) )=0A+                rc =3D =
-ENOSYS, okay =3D 0;=0A+            else if ( unlikely(op.arg1.val & ~1) =
)=0A+                okay =3D 0;=0A+            else if ( unlikely(!set_sma=
p_mode(curr, op.arg1.val)) )=0A+                rc =3D -EOPNOTSUPP, okay =
=3D 0;=0A+            break;=0A+=0A         case MMUEXT_CLEAR_PAGE: {=0A   =
          struct page_info *page;=0A =0A--- a/xen/arch/x86/traps.c=0A+++ =
b/xen/arch/x86/traps.c=0A@@ -451,6 +451,8 @@ static void do_guest_trap(=0A =
=0A     if ( TI_GET_IF(ti) )=0A         tb->flags |=3D TBF_INTERRUPT;=0A+  =
  if ( !TI_GET_AC(ti) )=0A+        tb->flags |=3D TBF_SMAP;=0A =0A     if =
( unlikely(null_trap_bounce(v, tb)) )=0A         gdprintk(XENLOG_WARNING, =
"Unhandled %s fault/trap [#%d] "=0A@@ -1089,6 +1091,8 @@ struct trap_bounce=
 *propagate_page_fault=0A         tb->eip        =3D ti->address;=0A       =
  if ( TI_GET_IF(ti) )=0A             tb->flags |=3D TBF_INTERRUPT;=0A+    =
    if ( !TI_GET_AC(ti) )=0A+            tb->flags |=3D TBF_SMAP;=0A       =
  return tb;=0A     }=0A =0A@@ -1109,6 +1113,8 @@ struct trap_bounce =
*propagate_page_fault=0A     tb->eip        =3D ti->address;=0A     if ( =
TI_GET_IF(ti) )=0A         tb->flags |=3D TBF_INTERRUPT;=0A+    if ( =
!TI_GET_AC(ti) )=0A+        tb->flags |=3D TBF_SMAP;=0A     if ( unlikely(n=
ull_trap_bounce(v, tb)) )=0A     {=0A         printk("d%d:v%d: unhandled =
page fault (ec=3D%04X)\n",=0A@@ -1598,23 +1604,21 @@ static int guest_io_ok=
ay(=0A     unsigned int port, unsigned int bytes,=0A     struct vcpu *v, =
struct cpu_user_regs *regs)=0A {=0A-    /* If in user mode, switch to =
kernel mode just to read I/O bitmap. */=0A-    int user_mode =3D !(v->arch.=
flags & TF_kernel_mode);=0A-#define TOGGLE_MODE() if ( user_mode ) =
toggle_guest_mode(v)=0A-=0A     if ( !vm86_mode(regs) &&=0A          =
(v->arch.pv_vcpu.iopl >=3D (guest_kernel_mode(v, regs) ? 1 : 3)) )=0A      =
   return 1;=0A =0A     if ( v->arch.pv_vcpu.iobmp_limit > (port + bytes) =
)=0A     {=0A+        unsigned int mode;=0A         union { uint8_t =
bytes[2]; uint16_t mask; } x;=0A =0A         /*=0A          * Grab =
permission bytes from guest space. Inaccessible bytes are=0A          * =
read as 0xff (no access allowed).=0A+         * If in user mode, switch to =
kernel mode just to read I/O bitmap.=0A          */=0A-        TOGGLE_MODE(=
);=0A+        TOGGLE_MODE(v, mode, 1);=0A         switch ( __copy_from_gues=
t_offset(x.bytes, v->arch.pv_vcpu.iobmp,=0A                                =
           port>>3, 2) )=0A         {=0A@@ -1622,7 +1626,7 @@ static int =
guest_io_okay(=0A         case 1:  x.bytes[1] =3D ~0;=0A         case 0:  =
break;=0A         }=0A-        TOGGLE_MODE();=0A+        TOGGLE_MODE(v, =
mode, 0);=0A =0A         if ( (x.mask & (((1<<bytes)-1) << (port&7))) =
=3D=3D 0 )=0A             return 1;=0A@@ -2188,7 +2192,7 @@ static int =
emulate_privileged_op(struct =0A         goto fail;=0A     switch ( opcode =
)=0A     {=0A-    case 0x1: /* RDTSCP and XSETBV */=0A+    case 0x1: /* =
RDTSCP, XSETBV, CLAC, and STAC */=0A         switch ( insn_fetch(u8, =
code_base, eip, code_limit) )=0A         {=0A         case 0xf9: /* RDTSCP =
*/=0A@@ -2216,6 +2220,20 @@ static int emulate_privileged_op(struct =0A =
=0A             break;=0A         }=0A+        case 0xcb: /* STAC */=0A+   =
         if ( unlikely(!guest_kernel_mode(v, regs)) ||=0A+                 =
unlikely(is_pv_32bit_vcpu(v)) ||=0A+                 unlikely(!guest_smap_e=
nabled(v)) )=0A+                goto fail;=0A+            set_smap_mode(v, =
0);=0A+            break;=0A+        case 0xca: /* CLAC */=0A+            =
if ( unlikely(!guest_kernel_mode(v, regs)) ||=0A+                 =
unlikely(is_pv_32bit_vcpu(v)) ||=0A+                 unlikely(!guest_smap_e=
nabled(v)) )=0A+                goto fail;=0A+            set_smap_mode(v, =
1);=0A+            break;=0A         default:=0A             goto fail;=0A =
        }=0A--- a/xen/arch/x86/x86_64/asm-offsets.c=0A+++ b/xen/arch/x86/x8=
6_64/asm-offsets.c=0A@@ -80,8 +80,7 @@ void __dummy__(void)=0A            =
arch.pv_vcpu.sysenter_callback_eip);=0A     OFFSET(VCPU_sysenter_sel, =
struct vcpu,=0A            arch.pv_vcpu.sysenter_callback_cs);=0A-    =
OFFSET(VCPU_sysenter_disables_events, struct vcpu,=0A-           arch.pv_vc=
pu.sysenter_disables_events);=0A+    OFFSET(VCPU_sysenter_tbf, struct =
vcpu, arch.pv_vcpu.sysenter_tbf);=0A     OFFSET(VCPU_trap_ctxt, struct =
vcpu, arch.pv_vcpu.trap_ctxt);=0A     OFFSET(VCPU_kernel_sp, struct vcpu, =
arch.pv_vcpu.kernel_sp);=0A     OFFSET(VCPU_kernel_ss, struct vcpu, =
arch.pv_vcpu.kernel_ss);=0A@@ -95,6 +94,7 @@ void __dummy__(void)=0A     =
DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);=0A     DEFINE(_VGCF_failsafe_disables=
_events, _VGCF_failsafe_disables_events);=0A     DEFINE(_VGCF_syscall_disab=
les_events,  _VGCF_syscall_disables_events);=0A+    DEFINE(_VGCF_syscall_cl=
ac,             _VGCF_syscall_clac);=0A     BLANK();=0A =0A     OFFSET(VCPU=
_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);=0A--- a/xen/arch/x86/x86_=
64/compat/traps.c=0A+++ b/xen/arch/x86/x86_64/compat/traps.c=0A@@ -205,8 =
+205,8 @@ static long compat_register_guest_callba=0A     case CALLBACKTYPE=
_sysenter:=0A         v->arch.pv_vcpu.sysenter_callback_cs     =3D =
reg->address.cs;=0A         v->arch.pv_vcpu.sysenter_callback_eip    =3D =
reg->address.eip;=0A-        v->arch.pv_vcpu.sysenter_disables_events =
=3D=0A-            (reg->flags & CALLBACKF_mask_events) !=3D 0;=0A+        =
v->arch.pv_vcpu.sysenter_tbf =3D=0A+            (reg->flags & CALLBACKF_mas=
k_events ? TBF_INTERRUPT : 0);=0A         break;=0A =0A     case CALLBACKTY=
PE_nmi:=0A--- a/xen/arch/x86/x86_64/entry.S=0A+++ b/xen/arch/x86/x86_64/ent=
ry.S=0A@@ -28,7 +28,12 @@ switch_to_kernel:=0A         /* TB_flags =3D =
VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */=0A         btl   =
$_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)=0A         =
setc  %cl=0A+        /* TB_flags |=3D VGCF_syscall_clac ? TBF_SMAP : 0 =
*/=0A+        btl   $_VGCF_syscall_clac,VCPU_guest_context_flags(%rbx)=0A+ =
       setc  %al=0A         leal  (,%rcx,TBF_INTERRUPT),%ecx=0A+        =
leal  (,%rax,TBF_SMAP),%eax=0A+        orl   %eax,%ecx=0A         movb  =
%cl,TRAPBOUNCE_flags(%rdx)=0A         call  create_bounce_frame=0A         =
andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)=0A@@ -87,7 +92,7 @@ failsafe_callb=
ack:=0A         leaq  VCPU_trap_bounce(%rbx),%rdx=0A         movq  =
VCPU_failsafe_addr(%rbx),%rax=0A         movq  %rax,TRAPBOUNCE_eip(%rdx)=0A=
-        movb  $TBF_FAILSAFE,TRAPBOUNCE_flags(%rdx)=0A+        movb  =
$TBF_FAILSAFE|TBF_SMAP,TRAPBOUNCE_flags(%rdx)=0A         bt    $_VGCF_fails=
afe_disables_events,VCPU_guest_context_flags(%rbx)=0A         jnc   1f=0A  =
       orb   $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)=0A@@ -215,7 +220,7 @@ =
test_guest_events:=0A         leaq  VCPU_trap_bounce(%rbx),%rdx=0A         =
movq  VCPU_event_addr(%rbx),%rax=0A         movq  %rax,TRAPBOUNCE_eip(%rdx)=
=0A-        movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)=0A+        movb  =
$TBF_INTERRUPT|TBF_SMAP,TRAPBOUNCE_flags(%rdx)=0A         call  create_boun=
ce_frame=0A         jmp   test_all_events=0A =0A@@ -278,9 +283,8 @@ =
GLOBAL(sysenter_eflags_saved)=0A         pushq $0=0A         SAVE_VOLATILE =
TRAP_syscall=0A         GET_CURRENT(%rbx)=0A-        cmpb  $0,VCPU_sysenter=
_disables_events(%rbx)=0A+        movzbl VCPU_sysenter_tbf(%rbx),%ecx=0A   =
      movq  VCPU_sysenter_addr(%rbx),%rax=0A-        setne %cl=0A         =
testl $X86_EFLAGS_NT,UREGS_eflags(%rsp)=0A         leaq  VCPU_trap_bounce(%=
rbx),%rdx=0A UNLIKELY_START(nz, sysenter_nt_set)=0A@@ -290,7 +294,6 @@ =
UNLIKELY_START(nz, sysenter_nt_set)=0A         xorl  %eax,%eax=0A =
UNLIKELY_END(sysenter_nt_set)=0A         testq %rax,%rax=0A-        leal  =
(,%rcx,TBF_INTERRUPT),%ecx=0A UNLIKELY_START(z, sysenter_gpf)=0A         =
movq  VCPU_trap_ctxt(%rbx),%rsi=0A         SAVE_PRESERVED=0A@@ -299,7 =
+302,11 @@ UNLIKELY_START(z, sysenter_gpf)=0A         movq  TRAP_gp_fault =
* TRAPINFO_sizeof + TRAPINFO_eip(%rsi),%rax=0A         testb $4,TRAP_gp_fau=
lt * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)=0A         setnz %cl=0A+       =
 testb $8,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)=0A+       =
 setnz %sil=0A         leal  TBF_EXCEPTION|TBF_EXCEPTION_ERRCODE(,%rcx,TBF_=
INTERRUPT),%ecx=0A+        leal  (,%rsi,TBF_SMAP),%esi=0A+        orl   =
%esi,%ecx=0A UNLIKELY_END(sysenter_gpf)=0A         movq  VCPU_domain(%rbx),=
%rdi=0A         movq  %rax,TRAPBOUNCE_eip(%rdx)=0A@@ -351,19 +358,38 @@ =
int80_slow_path:=0A /* On return only %rbx and %rdx are guaranteed =
non-clobbered.            */=0A create_bounce_frame:=0A         ASSERT_INTE=
RRUPTS_ENABLED=0A-        testb $TF_kernel_mode,VCPU_thread_flags(%rbx)=0A-=
        jnz   1f=0A-        /* Push new frame at registered guest-OS stack =
base. */=0A+        xorl  %esi,%esi=0A+        testb $TBF_SMAP,TRAPBOUNCE_f=
lags(%rdx)=0A+        movl  VCPU_thread_flags(%rbx),%eax=0A+        setnz =
%sil=0A+        testb $TF_kernel_mode,%al=0A         pushq %rdx=0A         =
movq  %rbx,%rdi=0A+        jnz   1f=0A+        /* Push new frame at =
registered guest-OS stack base. */=0A+        andl  $~TF_smap_mode,VCPU_thr=
ead_flags(%rbx)=0A+        shll  $_TF_smap_mode,%esi=0A+        orl   =
%esi,VCPU_thread_flags(%rbx)=0A         call  toggle_guest_mode=0A-        =
popq  %rdx=0A         movq  VCPU_kernel_sp(%rbx),%rsi=0A+        movl  =
$~0,%edi=0A         jmp   2f=0A 1:      /* In kernel context already: push =
new frame at existing %rsp. */=0A-        movq  UREGS_rsp+8(%rsp),%rsi=0A- =
       andb  $0xfc,UREGS_cs+8(%rsp)    # Indicate kernel context to =
guest.=0A+        pushq %rax=0A+        call  set_smap_mode=0A+        =
test  %al,%al=0A+        movl  $~0,%edi=0A+        popq  %rax              =
        # old VCPU_thread_flags(%rbx)=0A+UNLIKELY_START(nz, cbf_smap)=0A+  =
      movl  $~X86_EFLAGS_AC,%edi=0A+        testb $TF_smap_mode,%al=0A+    =
    UNLIKELY_DONE(nz, cbf_smap)=0A+        btsq  $18+32,%rdi               =
# LOG2(X86_EFLAGS_AC)+32=0A+UNLIKELY_END(cbf_smap)=0A+        movq  =
UREGS_rsp+2*8(%rsp),%rsi=0A+        andl  $~3,UREGS_cs+2*8(%rsp)    # =
Indicate kernel context to guest.=0A 2:      andq  $~0xf,%rsi              =
  # Stack frames are 16-byte aligned.=0A+        popq  %rdx=0A         =
movq  $HYPERVISOR_VIRT_START,%rax=0A         cmpq  %rax,%rsi=0A         =
movq  $HYPERVISOR_VIRT_END+60,%rax=0A@@ -394,7 +420,10 @@ __UNLIKELY_END(cr=
eate_bounce_frame_bad_s=0A         setz  %ch                       # %ch =
=3D=3D !saved_upcall_mask=0A         movl  UREGS_eflags+8(%rsp),%eax=0A    =
     andl  $~X86_EFLAGS_IF,%eax=0A+        andl  %edi,%eax                 =
# Clear EFLAGS.AC if needed=0A+        shrq  $32,%rdi=0A         addb  =
%ch,%ch                   # Bit 9 (EFLAGS.IF)=0A+        orl   %edi,%eax   =
              # Set EFLAGS.AC if needed=0A         orb   %ch,%ah           =
        # Fold EFLAGS.IF into %eax=0A .Lft5:  movq  %rax,16(%rsi)          =
   # RFLAGS=0A         movq  UREGS_rip+8(%rsp),%rax=0A--- a/xen/arch/x86/x8=
6_64/traps.c=0A+++ b/xen/arch/x86/x86_64/traps.c=0A@@ -153,9 +153,11 @@ =
void vcpu_show_registers(const struct vc=0A =0A     crs[0] =3D v->arch.pv_v=
cpu.ctrlreg[0];=0A     crs[2] =3D arch_get_cr2(v);=0A-    crs[3] =3D =
pagetable_get_paddr(guest_kernel_mode(v, regs) ?=0A+    crs[3] =3D =
pagetable_get_paddr(!guest_kernel_mode(v, regs) ?=0A+                      =
           v->arch.guest_table_user :=0A+                                 =
!guest_smap_enabled(v) || !guest_smap_mode(v) ?=0A                         =
         v->arch.guest_table :=0A-                                 =
v->arch.guest_table_user);=0A+                                 v->arch.pv_v=
cpu.guest_table_smap);=0A     crs[4] =3D v->arch.pv_vcpu.ctrlreg[4];=0A =
=0A     _show_registers(regs, crs, CTXT_pv_guest, v);=0A@@ -258,14 +260,19 =
@@ void toggle_guest_mode(struct vcpu *v)=0A     if ( is_pv_32bit_vcpu(v) =
)=0A         return;=0A     v->arch.flags ^=3D TF_kernel_mode;=0A+    if ( =
!guest_smap_enabled(v) )=0A+        v->arch.flags &=3D ~TF_smap_mode;=0A   =
  asm volatile ( "swapgs" );=0A     update_cr3(v);=0A #ifdef USER_MAPPINGS_=
ARE_GLOBAL=0A-    /* Don't flush user global mappings from the TLB. Don't =
tick TLB clock. */=0A-    asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.c=
r3) : "memory" );=0A-#else=0A-    write_ptbase(v);=0A+    if ( !(v->arch.fl=
ags & TF_kernel_mode) || !guest_smap_mode(v) )=0A+    {=0A+        /* =
Don't flush user global mappings from the TLB. Don't tick TLB clock. =
*/=0A+        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : =
"memory" );=0A+    }=0A+    else=0A #endif=0A+        write_ptbase(v);=0A =
=0A     if ( !(v->arch.flags & TF_kernel_mode) )=0A         return;=0A@@ =
-280,6 +287,35 @@ void toggle_guest_mode(struct vcpu *v)=0A         =
v->arch.pv_vcpu.pending_system_time.version =3D 0;=0A }=0A =0A+bool_t =
set_smap_mode(struct vcpu *v, bool_t on)=0A+{=0A+    ASSERT(!is_pv_32bit_vc=
pu(v));=0A+    ASSERT(v->arch.flags & TF_kernel_mode);=0A+=0A+    if ( =
!guest_smap_enabled(v) )=0A+        return 0;=0A+    if ( !on =3D=3D =
!guest_smap_mode(v) )=0A+        return 1;=0A+=0A+    if ( on )=0A+        =
v->arch.flags |=3D TF_smap_mode;=0A+    else=0A+        v->arch.flags &=3D =
~TF_smap_mode;=0A+=0A+    update_cr3(v);=0A+#ifdef USER_MAPPINGS_ARE_GLOBAL=
=0A+    if ( !guest_smap_mode(v) )=0A+    {=0A+        /* Don't flush user =
global mappings from the TLB. Don't tick TLB clock. */=0A+        asm =
volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );=0A+    =
}=0A+    else=0A+#endif=0A+        write_ptbase(v);=0A+=0A+    return =
1;=0A+}=0A+=0A unsigned long do_iret(void)=0A {=0A     struct cpu_user_regs=
 *regs =3D guest_cpu_user_regs();=0A@@ -305,6 +341,8 @@ unsigned long =
do_iret(void)=0A         }=0A         toggle_guest_mode(v);=0A     }=0A+   =
 else if ( set_smap_mode(v, !(iret_saved.rflags & X86_EFLAGS_AC)) )=0A+    =
    iret_saved.rflags &=3D ~X86_EFLAGS_AC;=0A =0A     regs->rip    =3D =
iret_saved.rip;=0A     regs->cs     =3D iret_saved.cs | 3; /* force guest =
privilege */=0A@@ -480,6 +518,10 @@ static long register_guest_callback(str=
u=0A         else=0A             clear_bit(_VGCF_syscall_disables_events,=
=0A                       &v->arch.vgc_flags);=0A+        if ( reg->flags =
& CALLBACKF_clac )=0A+            set_bit(_VGCF_syscall_clac, &v->arch.vgc_=
flags);=0A+        else=0A+            clear_bit(_VGCF_syscall_clac, =
&v->arch.vgc_flags);=0A         break;=0A =0A     case CALLBACKTYPE_syscall=
32:=0A@@ -490,8 +532,9 @@ static long register_guest_callback(stru=0A =0A  =
   case CALLBACKTYPE_sysenter:=0A         v->arch.pv_vcpu.sysenter_callback=
_eip =3D reg->address;=0A-        v->arch.pv_vcpu.sysenter_disables_events =
=3D=0A-            !!(reg->flags & CALLBACKF_mask_events);=0A+        =
v->arch.pv_vcpu.sysenter_tbf =3D=0A+            (reg->flags & CALLBACKF_mas=
k_events ? TBF_INTERRUPT : 0) |=0A+            (reg->flags & CALLBACKF_clac=
 ? TBF_SMAP : 0);=0A         break;=0A =0A     case CALLBACKTYPE_nmi:=0A---=
 a/xen/include/asm-x86/domain.h=0A+++ b/xen/include/asm-x86/domain.h=0A@@ =
-75,6 +75,8 @@ void mapcache_override_current(struct vc=0A =0A /* x86/64: =
toggle guest between kernel and user modes. */=0A void toggle_guest_mode(st=
ruct vcpu *);=0A+/* x86/64: switch guest between SMAP and "normal" modes. =
*/=0A+bool_t set_smap_mode(struct vcpu *, bool_t);=0A =0A /*=0A  * =
Initialise a hypercall-transfer page. The given pointer must be mapped=0A@@=
 -354,13 +356,16 @@ struct pv_vcpu=0A     unsigned short syscall32_callback=
_cs;=0A     unsigned short sysenter_callback_cs;=0A     bool_t syscall32_di=
sables_events;=0A-    bool_t sysenter_disables_events;=0A+    u8 sysenter_t=
bf;=0A =0A     /* Segment base addresses. */=0A     unsigned long =
fs_base;=0A     unsigned long gs_base_kernel;=0A     unsigned long =
gs_base_user;=0A =0A+    /* x86/64 kernel-only (SMAP) pagetable */=0A+    =
pagetable_t guest_table_smap;=0A+=0A     /* Bounce information for =
propagating an exception to guest OS. */=0A     struct trap_bounce =
trap_bounce;=0A     struct trap_bounce int80_bounce;=0A@@ -471,6 +476,10 =
@@ unsigned long pv_guest_cr4_fixup(const s=0A     ((c) & ~(X86_CR4_PGE | =
X86_CR4_PSE | X86_CR4_TSD |      \=0A              X86_CR4_OSXSAVE | =
X86_CR4_SMEP | X86_CR4_FSGSBASE))=0A =0A+#define guest_smap_enabled(v) =
\=0A+    (!pagetable_is_null((v)->arch.pv_vcpu.guest_table_smap))=0A+#defin=
e guest_smap_mode(v) ((v)->arch.flags & TF_smap_mode)=0A+=0A void =
domain_cpuid(struct domain *d,=0A                   unsigned int  =
input,=0A                   unsigned int  sub_input,=0A--- a/xen/include/as=
m-x86/paging.h=0A+++ b/xen/include/asm-x86/paging.h=0A@@ -405,17 +405,34 =
@@ guest_get_eff_l1e(struct vcpu *v, unsign=0A     paging_get_hostmode(v)->=
guest_get_eff_l1e(v, addr, eff_l1e);=0A }=0A =0A+#define TOGGLE_MODE(v, m, =
in) do { \=0A+    if ( in ) \=0A+        (m) =3D (v)->arch.flags; \=0A+    =
if ( (m) & TF_kernel_mode ) \=0A+    { \=0A+        set_smap_mode(v, (in) =
|| ((m) & TF_smap_mode) ); \=0A+        break; \=0A+    } \=0A+    if ( in =
) \=0A+        (v)->arch.flags |=3D TF_smap_mode; \=0A+    else \=0A+    { =
\=0A+        (v)->arch.flags &=3D ~TF_smap_mode; \=0A+        (v)->arch.fla=
gs |=3D (m) & TF_smap_mode; \=0A+    } \=0A+    toggle_guest_mode(v); =
\=0A+} while ( 0 )=0A+=0A /* Read the guest's l1e that maps this address, =
from the kernel-mode=0A  * pagetables. */=0A static inline void=0A =
guest_get_eff_kern_l1e(struct vcpu *v, unsigned long addr, void =
*eff_l1e)=0A {=0A-    int user_mode =3D !(v->arch.flags & TF_kernel_mode);=
=0A-#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)=0A+    =
unsigned int mode;=0A =0A-    TOGGLE_MODE();=0A+    TOGGLE_MODE(v, mode, =
1);=0A     guest_get_eff_l1e(v, addr, eff_l1e);=0A-    TOGGLE_MODE();=0A+  =
  TOGGLE_MODE(v, mode, 0);=0A }=0A =0A =0A--- a/xen/include/asm-x86/process=
or.h=0A+++ b/xen/include/asm-x86/processor.h=0A@@ -125,12 +125,15 @@=0A /* =
'trap_bounce' flags values */=0A #define TBF_EXCEPTION          1=0A =
#define TBF_EXCEPTION_ERRCODE  2=0A+#define TBF_SMAP               4=0A =
#define TBF_INTERRUPT          8=0A #define TBF_FAILSAFE          16=0A =
=0A /* 'arch_vcpu' flags values */=0A #define _TF_kernel_mode        0=0A =
#define TF_kernel_mode         (1<<_TF_kernel_mode)=0A+#define _TF_smap_mod=
e          1=0A+#define TF_smap_mode           (1<<_TF_smap_mode)=0A =0A =
/* #PF error code values. */=0A #define PFEC_page_present   (1U<<0)=0A--- =
a/xen/include/public/arch-x86/xen.h=0A+++ b/xen/include/public/arch-x86/xen=
.h=0A@@ -138,6 +138,7 @@ typedef unsigned long xen_ulong_t;=0A  */=0A =
#define TI_GET_DPL(_ti)      ((_ti)->flags & 3)=0A #define TI_GET_IF(_ti)  =
     ((_ti)->flags & 4)=0A+#define TI_GET_AC(_ti)       ((_ti)->flags & =
8)=0A #define TI_SET_DPL(_ti,_dpl) ((_ti)->flags |=3D (_dpl))=0A #define =
TI_SET_IF(_ti,_if)   ((_ti)->flags |=3D ((!!(_if))<<2))=0A struct =
trap_info {=0A@@ -179,6 +180,8 @@ struct vcpu_guest_context {=0A #define =
VGCF_syscall_disables_events   (1<<_VGCF_syscall_disables_events)=0A =
#define _VGCF_online                   5=0A #define VGCF_online            =
        (1<<_VGCF_online)=0A+#define _VGCF_syscall_clac             =
6=0A+#define VGCF_syscall_clac              (1<<_VGCF_syscall_clac)=0A     =
unsigned long flags;                    /* VGCF_* flags                 =
*/=0A     struct cpu_user_regs user_regs;         /* User-level CPU =
registers     */=0A     struct trap_info trap_ctxt[256];        /* Virtual =
IDT                  */=0A--- a/xen/include/public/callback.h=0A+++ =
b/xen/include/public/callback.h=0A@@ -76,6 +76,13 @@=0A  */=0A #define =
_CALLBACKF_mask_events             0=0A #define CALLBACKF_mask_events      =
        (1U << _CALLBACKF_mask_events)=0A+/*=0A+ * Effect CLAC upon =
callback entry? This flag is ignored for event,=0A+ * failsafe, and NMI =
callbacks: user space gets unconditionally hidden if=0A+ * respective =
functionality was enabled by the kernel.=0A+ */=0A+#define _CALLBACKF_clac =
                   0=0A+#define CALLBACKF_clac                     (1U << =
_CALLBACKF_clac)=0A =0A /*=0A  * Register a callback.=0A--- a/xen/include/p=
ublic/xen.h=0A+++ b/xen/include/public/xen.h=0A@@ -341,6 +341,10 @@ =
DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);=0A  * mfn: Machine frame number of =
new page-table base to install in MMU=0A  *      when in user space.=0A  =
*=0A+ * cmd: MMUEXT_NEW_SMAP_BASEPTR [x86/64 only]=0A+ * mfn: Machine =
frame number of new page-table base to install in MMU=0A+ *      when in =
kernel-only (SMAP) mode.=0A+ *=0A  * cmd: MMUEXT_TLB_FLUSH_LOCAL=0A  * No =
additional arguments. Flushes local TLB.=0A  *=0A@@ -371,6 +375,9 @@ =
DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);=0A  * linear_addr: Linear address of =
LDT base (NB. must be page-aligned).=0A  * nr_ents: Number of entries in =
LDT.=0A  *=0A+ * cmd: MMUEXT_SET_SMAP_MODE=0A+ * val: 0 - disable, 1 - =
enable (other values reserved)=0A+ *=0A  * cmd: MMUEXT_CLEAR_PAGE=0A  * =
mfn: Machine frame number to be cleared.=0A  *=0A@@ -402,17 +409,21 @@ =
DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);=0A #define MMUEXT_FLUSH_CACHE_GLOBAL =
18=0A #define MMUEXT_MARK_SUPER       19=0A #define MMUEXT_UNMARK_SUPER    =
 20=0A+#define MMUEXT_NEW_SMAP_BASEPTR 21=0A+#define MMUEXT_SET_SMAP_MODE  =
  22=0A /* ` } */=0A =0A #ifndef __ASSEMBLY__=0A struct mmuext_op {=0A     =
unsigned int cmd; /* =3D> enum mmuext_cmd */=0A     union {=0A-        /* =
[UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR=0A+        /* [UN]PIN_TABLE, =
NEW_BASEPTR, NEW_USER_BASEPTR, NEW_SMAP_BASEPTR=0A          * CLEAR_PAGE, =
COPY_PAGE, [UN]MARK_SUPER */=0A         xen_pfn_t     mfn;=0A         /* =
INVLPG_LOCAL, INVLPG_ALL, SET_LDT */=0A         unsigned long linear_addr;=
=0A+        /* SET_SMAP_MODE */=0A+        unsigned int  val;=0A     } =
arg1;=0A     union {=0A         /* SET_LDT */=0A
--=__Part94A72258.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part94A72258.0__=--



Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
(due to them running in ring 3), I drafted a mostly equivalent PV
solution, at this point mainly to see how useful people think this
would be.

It is based on switching page tables: alongside the two page tables
we have right now - one containing user mappings only, the other
containing both kernel and user mappings - a third category gets
added, containing kernel mappings only. (Linux would have such a
page table readily available, and hence presumably would require
not too intrusive changes.) This of course makes clear that the
approach would come with quite a bit of a performance cost.
Furthermore, the state management obviously requires a couple of
extra instructions in reasonably hot hypervisor code paths.

Hence, before going further with this approach (for now I have only
got it to the point that an unpatched Linux is unaffected, i.e. I
didn't code up the Linux side yet), I would be interested to hear
people's opinions on whether the performance cost is worth it, or
whether we should instead consider PVH the one and only route
towards gaining that extra level of security.

And if considering it worthwhile, comments on the actual
implementation (including the notes at the top of the attached
patch) would of course be welcome too.

Jan


--=__Part94A72258.0__=
Content-Type: text/plain; name="x86-PV-64bit-SMAP.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="x86-PV-64bit-SMAP.patch"

x86: PV SMAP for 64-bit guests

TODO? Apart from TOGGLE_MODE(), should we enforce SMAP mode for other
      implied-supervisor guest memory accesses?
TODO? MMUEXT_SET_SMAP_MODE may better be replaced by a standalone
      hypercall (with just a single parameter); maybe by extending
      fpu_taskswitch.

Note that the new state isn't being saved/restored. That's mainly
because a capable kernel, when migrated from an incapable hypervisor to
a capable one, would likely want to take advantage of the capability,
and hence would need to set up all state anyway. This also implies that
a capable kernel ought to be prepared to get migrated to an incapable
hypervisor (as the loss of functionality isn't essential, it just
results in security getting weakened).

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1149,6 +1149,7 @@ static void load_segments(struct vcpu *n
             (unsigned long *)regs->rsp :
             (unsigned long *)pv->kernel_sp;
         unsigned long cs_and_mask, rflags;
+        int smap_mode = -1;
 
         if ( is_pv_32on64_domain(n->domain) )
         {
@@ -1199,9 +1200,17 @@ static void load_segments(struct vcpu *n
         }
 
         if ( !(n->arch.flags & TF_kernel_mode) )
+        {
+            n->arch.flags |= TF_smap_mode;
             toggle_guest_mode(n);
+        }
         else
+        {
             regs->cs &= ~3;
+            smap_mode = guest_smap_mode(n);
+            if ( !set_smap_mode(n, 1) )
+                smap_mode = -1;
+        }
 
         /* CS longword also contains full evtchn_upcall_mask. */
         cs_and_mask = (unsigned long)regs->cs |
@@ -1210,6 +1219,11 @@ static void load_segments(struct vcpu *n
         /* Fold upcall mask into RFLAGS.IF. */
         rflags  = regs->rflags & ~X86_EFLAGS_IF;
         rflags |= !vcpu_info(n, evtchn_upcall_mask) << 9;
+        if ( smap_mode >= 0 )
+        {
+            rflags &= ~X86_EFLAGS_AC;
+            rflags |= !smap_mode << 18;
+        }
 
         if ( put_user(regs->ss,            rsp- 1) |
              put_user(regs->rsp,           rsp- 2) |
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -827,7 +827,7 @@ long arch_do_domctl(
                 evc->sysenter_callback_eip     =
                     v->arch.pv_vcpu.sysenter_callback_eip;
                 evc->sysenter_disables_events  =
-                    v->arch.pv_vcpu.sysenter_disables_events;
+                    !!(v->arch.pv_vcpu.sysenter_tbf & TBF_INTERRUPT);
                 evc->syscall32_callback_cs     =
                     v->arch.pv_vcpu.syscall32_callback_cs;
                 evc->syscall32_callback_eip    =
@@ -863,8 +863,9 @@ long arch_do_domctl(
                     evc->sysenter_callback_cs;
                 v->arch.pv_vcpu.sysenter_callback_eip     =
                     evc->sysenter_callback_eip;
-                v->arch.pv_vcpu.sysenter_disables_events  =
-                    evc->sysenter_disables_events;
+                v->arch.pv_vcpu.sysenter_tbf              = 0;
+                if ( evc->sysenter_disables_events )
+                    v->arch.pv_vcpu.sysenter_tbf |= TBF_INTERRUPT;
                 fixup_guest_code_selector(d, evc->syscall32_callback_cs);
                 v->arch.pv_vcpu.syscall32_callback_cs     =
                     evc->syscall32_callback_cs;
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -488,7 +488,7 @@ void write_ptbase(struct vcpu *v)
 /*
  * Should be called after CR3 is updated.
  * 
- * Uses values found in vcpu->arch.(guest_table and guest_table_user), and
+ * Uses values found in vcpu->arch.guest_table{,_user,_kernel}, and
  * for HVM guests, arch.monitor_table and hvm's guest CR3.
  *
  * Update ref counts to shadow tables appropriately.
@@ -505,8 +505,10 @@ void update_cr3(struct vcpu *v)
 
     if ( !(v->arch.flags & TF_kernel_mode) )
         cr3_mfn = pagetable_get_pfn(v->arch.guest_table_user);
-    else
+    else if ( !guest_smap_mode(v) )
         cr3_mfn = pagetable_get_pfn(v->arch.guest_table);
+    else
+        cr3_mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
 
     make_cr3(v, cr3_mfn);
 }
@@ -2687,7 +2689,22 @@ int vcpu_destroy_pagetables(struct vcpu 
                 rc = put_page_and_type_preemptible(page);
         }
         if ( !rc )
+        {
             v->arch.guest_table_user = pagetable_null();
+
+            /* Drop ref to guest_table_smap (from MMUEXT_NEW_SMAP_BASEPTR). */
+            mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
+            if ( mfn )
+            {
+                page = mfn_to_page(mfn);
+                if ( paging_mode_refcounts(v->domain) )
+                    put_page(page);
+                else
+                    rc = put_page_and_type_preemptible(page);
+            }
+        }
+        if ( !rc )
+            v->arch.pv_vcpu.guest_table_smap = pagetable_null();
     }
 
     v->arch.cr3 = 0;
@@ -3086,7 +3103,11 @@ long do_mmuext_op(
             }
             break;
 
-        case MMUEXT_NEW_USER_BASEPTR: {
+        case MMUEXT_NEW_USER_BASEPTR:
+        case MMUEXT_NEW_SMAP_BASEPTR: {
+            pagetable_t *ppt = op.cmd == MMUEXT_NEW_USER_BASEPTR
+                               ? &curr->arch.guest_table_user
+                               : &curr->arch.pv_vcpu.guest_table_smap;
             unsigned long old_mfn;
 
             if ( paging_mode_translate(current->domain) )
@@ -3095,7 +3116,7 @@ long do_mmuext_op(
                 break;
             }
 
-            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
+            old_mfn = pagetable_get_pfn(*ppt);
             /*
              * This is particularly important when getting restarted after the
              * previous attempt got preempted in the put-old-MFN phase.
@@ -3124,7 +3145,7 @@ long do_mmuext_op(
                 }
             }
 
-            curr->arch.guest_table_user = pagetable_from_pfn(op.arg1.mfn);
+            *ppt = pagetable_from_pfn(op.arg1.mfn);
 
             if ( old_mfn != 0 )
             {
@@ -3249,6 +3270,15 @@ long do_mmuext_op(
             break;
         }
 
+        case MMUEXT_SET_SMAP_MODE:
+            if ( unlikely(is_pv_32bit_domain(d)) )
+                rc = -ENOSYS, okay = 0;
+            else if ( unlikely(op.arg1.val & ~1) )
+                okay = 0;
+            else if ( unlikely(!set_smap_mode(curr, op.arg1.val)) )
+                rc = -EOPNOTSUPP, okay = 0;
+            break;
+
         case MMUEXT_CLEAR_PAGE: {
             struct page_info *page;
 
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -451,6 +451,8 @@ static void do_guest_trap(
 
     if ( TI_GET_IF(ti) )
         tb->flags |= TBF_INTERRUPT;
+    if ( !TI_GET_AC(ti) )
+        tb->flags |= TBF_SMAP;
 
     if ( unlikely(null_trap_bounce(v, tb)) )
         gdprintk(XENLOG_WARNING, "Unhandled %s fault/trap [#%d] "
@@ -1089,6 +1091,8 @@ struct trap_bounce *propagate_page_fault
         tb->eip        = ti->address;
         if ( TI_GET_IF(ti) )
             tb->flags |= TBF_INTERRUPT;
+        if ( !TI_GET_AC(ti) )
+            tb->flags |= TBF_SMAP;
         return tb;
     }
 
@@ -1109,6 +1113,8 @@ struct trap_bounce *propagate_page_fault
     tb->eip        = ti->address;
     if ( TI_GET_IF(ti) )
         tb->flags |= TBF_INTERRUPT;
+    if ( !TI_GET_AC(ti) )
+        tb->flags |= TBF_SMAP;
     if ( unlikely(null_trap_bounce(v, tb)) )
     {
         printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
@@ -1598,23 +1604,21 @@ static int guest_io_okay(
     unsigned int port, unsigned int bytes,
     struct vcpu *v, struct cpu_user_regs *regs)
 {
-    /* If in user mode, switch to kernel mode just to read I/O bitmap. */
-    int user_mode = !(v->arch.flags & TF_kernel_mode);
-#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
-
     if ( !vm86_mode(regs) &&
          (v->arch.pv_vcpu.iopl >= (guest_kernel_mode(v, regs) ? 1 : 3)) )
         return 1;
 
     if ( v->arch.pv_vcpu.iobmp_limit > (port + bytes) )
     {
+        unsigned int mode;
         union { uint8_t bytes[2]; uint16_t mask; } x;
 
         /*
          * Grab permission bytes from guest space. Inaccessible bytes are
         * read as 0xff (no access allowed).
+         * If in user mode, switch to kernel mode just to read I/O bitmap.
          */
-        TOGGLE_MODE();
+        TOGGLE_MODE(v, mode, 1);
         switch ( __copy_from_guest_offset(x.bytes, v->arch.pv_vcpu.iobmp,
                                           port>>3, 2) )
         {
@@ -1622,7 +1626,7 @@ static int guest_io_okay(
         case 1:  x.bytes[1] = ~0;
         case 0:  break;
         }
-        TOGGLE_MODE();
+        TOGGLE_MODE(v, mode, 0);
 
         if ( (x.mask & (((1<<bytes)-1) << (port&7))) == 0 )
             return 1;
@@ -2188,7 +2192,7 @@ static int emulate_privileged_op(struct 
         goto fail;
     switch ( opcode )
     {
-    case 0x1: /* RDTSCP and XSETBV */
+    case 0x1: /* RDTSCP, XSETBV, CLAC, and STAC */
         switch ( insn_fetch(u8, code_base, eip, code_limit) )
         {
         case 0xf9: /* RDTSCP */
@@ -2216,6 +2220,20 @@ static int emulate_privileged_op(struct 
 
             break;
         }
+        case 0xcb: /* STAC */
+            if ( unlikely(!guest_kernel_mode(v, regs)) ||
+                 unlikely(is_pv_32bit_vcpu(v)) ||
+                 unlikely(!guest_smap_enabled(v)) )
+                goto fail;
+            set_smap_mode(v, 0);
+            break;
+        case 0xca: /* CLAC */
+            if ( unlikely(!guest_kernel_mode(v, regs)) ||
+                 unlikely(is_pv_32bit_vcpu(v)) ||
+                 unlikely(!guest_smap_enabled(v)) )
+                goto fail;
+            set_smap_mode(v, 1);
+            break;
         default:
             goto fail;
         }
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -80,8 +80,7 @@ void __dummy__(void)
            arch.pv_vcpu.sysenter_callback_eip);
     OFFSET(VCPU_sysenter_sel, struct vcpu,
            arch.pv_vcpu.sysenter_callback_cs);
-    OFFSET(VCPU_sysenter_disables_events, struct vcpu,
-           arch.pv_vcpu.sysenter_disables_events);
+    OFFSET(VCPU_sysenter_tbf, struct vcpu, arch.pv_vcpu.sysenter_tbf);
     OFFSET(VCPU_trap_ctxt, struct vcpu, arch.pv_vcpu.trap_ctxt);
     OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
     OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
@@ -95,6 +94,7 @@ void __dummy__(void)
     DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
     DEFINE(_VGCF_failsafe_disables_events, _VGCF_failsafe_disables_events);
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
+    DEFINE(_VGCF_syscall_clac,             _VGCF_syscall_clac);
     BLANK();
 
     OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
--- a/xen/arch/x86/x86_64/compat/traps.c
+++ b/xen/arch/x86/x86_64/compat/traps.c
@@ -205,8 +205,8 @@ static long compat_register_guest_callba
     case CALLBACKTYPE_sysenter:
         v->arch.pv_vcpu.sysenter_callback_cs     = reg->address.cs;
         v->arch.pv_vcpu.sysenter_callback_eip    = reg->address.eip;
-        v->arch.pv_vcpu.sysenter_disables_events =
-            (reg->flags & CALLBACKF_mask_events) != 0;
+        v->arch.pv_vcpu.sysenter_tbf =
+            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0);
         break;
 
     case CALLBACKTYPE_nmi:
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -28,7 +28,12 @@ switch_to_kernel:
         /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
         btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
         setc  %cl
+        /* TB_flags |= VGCF_syscall_clac ? TBF_SMAP : 0 */
+        btl   $_VGCF_syscall_clac,VCPU_guest_context_flags(%rbx)
+        setc  %al
         leal  (,%rcx,TBF_INTERRUPT),%ecx
+        leal  (,%rax,TBF_SMAP),%eax
+        orl   %eax,%ecx
         movb  %cl,TRAPBOUNCE_flags(%rdx)
         call  create_bounce_frame
         andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
@@ -87,7 +92,7 @@ failsafe_callback:
         leaq  VCPU_trap_bounce(%rbx),%rdx
         movq  VCPU_failsafe_addr(%rbx),%rax
         movq  %rax,TRAPBOUNCE_eip(%rdx)
-        movb  $TBF_FAILSAFE,TRAPBOUNCE_flags(%rdx)
+        movb  $TBF_FAILSAFE|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
         bt    $_VGCF_failsafe_disables_events,VCPU_guest_context_flags(%rbx)
         jnc   1f
         orb   $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
@@ -215,7 +220,7 @@ test_guest_events:
         leaq  VCPU_trap_bounce(%rbx),%rdx
         movq  VCPU_event_addr(%rbx),%rax
         movq  %rax,TRAPBOUNCE_eip(%rdx)
-        movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
+        movb  $TBF_INTERRUPT|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
         call  create_bounce_frame
         jmp   test_all_events
 
@@ -278,9 +283,8 @@ GLOBAL(sysenter_eflags_saved)
         pushq $0
         SAVE_VOLATILE TRAP_syscall
         GET_CURRENT(%rbx)
-        cmpb  $0,VCPU_sysenter_disables_events(%rbx)
+        movzbl VCPU_sysenter_tbf(%rbx),%ecx
         movq  VCPU_sysenter_addr(%rbx),%rax
-        setne %cl
         testl $X86_EFLAGS_NT,UREGS_eflags(%rsp)
         leaq  VCPU_trap_bounce(%rbx),%rdx
 UNLIKELY_START(nz, sysenter_nt_set)
@@ -290,7 +294,6 @@ UNLIKELY_START(nz, sysenter_nt_set)
         xorl  %eax,%eax
 UNLIKELY_END(sysenter_nt_set)
         testq %rax,%rax
-        leal  (,%rcx,TBF_INTERRUPT),%ecx
 UNLIKELY_START(z, sysenter_gpf)
         movq  VCPU_trap_ctxt(%rbx),%rsi
         SAVE_PRESERVED
@@ -299,7 +302,11 @@ UNLIKELY_START(z, sysenter_gpf)
         movq  TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_eip(%rsi),%rax
         testb $4,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
         setnz %cl
+        testb $8,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
+        setnz %sil
         leal  TBF_EXCEPTION|TBF_EXCEPTION_ERRCODE(,%rcx,TBF_INTERRUPT),%ecx
+        leal  (,%rsi,TBF_SMAP),%esi
+        orl   %esi,%ecx
 UNLIKELY_END(sysenter_gpf)
         movq  VCPU_domain(%rbx),%rdi
         movq  %rax,TRAPBOUNCE_eip(%rdx)
@@ -351,19 +358,38 @@ int80_slow_path:
 /* On return only %rbx and %rdx are guaranteed non-clobbered.            */
 create_bounce_frame:
         ASSERT_INTERRUPTS_ENABLED
-        testb $TF_kernel_mode,VCPU_thread_flags(%rbx)
-        jnz   1f
-        /* Push new frame at registered guest-OS stack base. */
+        xorl  %esi,%esi
+        testb $TBF_SMAP,TRAPBOUNCE_flags(%rdx)
+        movl  VCPU_thread_flags(%rbx),%eax
+        setnz %sil
+        testb $TF_kernel_mode,%al
         pushq %rdx
         movq  %rbx,%rdi
+        jnz   1f
+        /* Push new frame at registered guest-OS stack base. */
+        andl  $~TF_smap_mode,VCPU_thread_flags(%rbx)
+        shll  $_TF_smap_mode,%esi
+        orl   %esi,VCPU_thread_flags(%rbx)
         call  toggle_guest_mode
-        popq  %rdx
         movq  VCPU_kernel_sp(%rbx),%rsi
+        movl  $~0,%edi
         jmp   2f
 1:      /* In kernel context already: push new frame at existing %rsp. */
-        movq  UREGS_rsp+8(%rsp),%rsi
-        andb  $0xfc,UREGS_cs+8(%rsp)    # Indicate kernel context to guest.
+        pushq %rax
+        call  set_smap_mode
+        test  %al,%al
+        movl  $~0,%edi
+        popq  %rax                      # old VCPU_thread_flags(%rbx)
+UNLIKELY_START(nz, cbf_smap)
+        movl  $~X86_EFLAGS_AC,%edi
+        testb $TF_smap_mode,%al
+        UNLIKELY_DONE(nz, cbf_smap)
+        btsq  $18+32,%rdi               # LOG2(X86_EFLAGS_AC)+32
+UNLIKELY_END(cbf_smap)
+        movq  UREGS_rsp+2*8(%rsp),%rsi
+        andl  $~3,UREGS_cs+2*8(%rsp)    # Indicate kernel context to guest.
 2:      andq  $~0xf,%rsi                # Stack frames are 16-byte aligned.
+        popq  %rdx
         movq  $HYPERVISOR_VIRT_START,%rax
         cmpq  %rax,%rsi
         movq  $HYPERVISOR_VIRT_END+60,%rax
@@ -394,7 +420,10 @@ __UNLIKELY_END(create_bounce_frame_bad_s
         setz  %ch                       # %ch == !saved_upcall_mask
         movl  UREGS_eflags+8(%rsp),%eax
         andl  $~X86_EFLAGS_IF,%eax
+        andl  %edi,%eax                 # Clear EFLAGS.AC if needed
+        shrq  $32,%rdi
         addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
+        orl   %edi,%eax                 # Set EFLAGS.AC if needed
         orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
.Lft5:  movq  %rax,16(%rsi)             # RFLAGS
         movq  UREGS_rip+8(%rsp),%rax
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -153,9 +153,11 @@ void vcpu_show_registers(const struct vc
 
     crs[0] = v->arch.pv_vcpu.ctrlreg[0];
     crs[2] = arch_get_cr2(v);
-    crs[3] = pagetable_get_paddr(guest_kernel_mode(v, regs) ?
+    crs[3] = pagetable_get_paddr(!guest_kernel_mode(v, regs) ?
+                                 v->arch.guest_table_user :
+                                 !guest_smap_enabled(v) || !guest_smap_mode(v) ?
                                  v->arch.guest_table :
-                                 v->arch.guest_table_user);
+                                 v->arch.pv_vcpu.guest_table_smap);
     crs[4] = v->arch.pv_vcpu.ctrlreg[4];
 
     _show_registers(regs, crs, CTXT_pv_guest, v);
@@ -258,14 +260,19 @@ void toggle_guest_mode(struct vcpu *v)
     if ( is_pv_32bit_vcpu(v) )
         return;
     v->arch.flags ^= TF_kernel_mode;
+    if ( !guest_smap_enabled(v) )
+        v->arch.flags &= ~TF_smap_mode;
     asm volatile ( "swapgs" );
     update_cr3(v);
 #ifdef USER_MAPPINGS_ARE_GLOBAL
-    /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
-    asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
-#else
-    write_ptbase(v);
+    if ( !(v->arch.flags & TF_kernel_mode) || !guest_smap_mode(v) )
+    {
+        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
+        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
+    }
+    else
 #endif
+        write_ptbase(v);
 
     if ( !(v->arch.flags & TF_kernel_mode) )
         return;
@@ -280,6 +287,35 @@ void toggle_guest_mode(struct vcpu *v)
         v->arch.pv_vcpu.pending_system_time.version = 0;
 }
 
+bool_t set_smap_mode(struct vcpu *v, bool_t on)
+{
+    ASSERT(!is_pv_32bit_vcpu(v));
+    ASSERT(v->arch.flags & TF_kernel_mode);
+
+    if ( !guest_smap_enabled(v) )
+        return 0;
+    if ( !on == !guest_smap_mode(v) )
+        return 1;
+
+    if ( on )
+        v->arch.flags |= TF_smap_mode;
+    else
+        v->arch.flags &= ~TF_smap_mode;
+
+    update_cr3(v);
+#ifdef USER_MAPPINGS_ARE_GLOBAL
+    if ( !guest_smap_mode(v) )
+    {
+        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
+        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
+    }
+    else
+#endif
+        write_ptbase(v);
+
+    return 1;
+}
+
 unsigned long do_iret(void)
 {
     struct cpu_user_regs *regs = guest_cpu_user_regs();
@@ -305,6 +341,8 @@ unsigned long do_iret(void)
         }
         toggle_guest_mode(v);
     }
+    else if ( set_smap_mode(v, !(iret_saved.rflags & X86_EFLAGS_AC)) )
+        iret_saved.rflags &= ~X86_EFLAGS_AC;
 
     regs->rip    = iret_saved.rip;
     regs->cs     = iret_saved.cs | 3; /* force guest privilege */
@@ -480,6 +518,10 @@ static long register_guest_callback(stru
         else
             clear_bit(_VGCF_syscall_disables_events,
                       &v->arch.vgc_flags);
+        if ( reg->flags & CALLBACKF_clac )
+            set_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
+        else
+            clear_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
         break;
 
     case CALLBACKTYPE_syscall32:
@@ -490,8 +532,9 @@ static long register_guest_callback(stru
 
     case CALLBACKTYPE_sysenter:
         v->arch.pv_vcpu.sysenter_callback_eip = reg->address;
-        v->arch.pv_vcpu.sysenter_disables_events =
-            !!(reg->flags & CALLBACKF_mask_events);
+        v->arch.pv_vcpu.sysenter_tbf =
+            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0) |
+            (reg->flags & CALLBACKF_clac ? TBF_SMAP : 0);
         break;
 
     case CALLBACKTYPE_nmi:
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -75,6 +75,8 @@ void mapcache_override_current(struct vc
 
 /* x86/64: toggle guest between kernel and user modes. */
 void toggle_guest_mode(struct vcpu *);
+/* x86/64: switch guest between SMAP and "normal" modes. */
+bool_t set_smap_mode(struct vcpu *, bool_t);
 
 /*
  * Initialise a hypercall-transfer page. The given pointer must be mapped
@@ -354,13 +356,16 @@ struct pv_vcpu
     unsigned short syscall32_callback_cs;
     unsigned short sysenter_callback_cs;
     bool_t syscall32_disables_events;
-    bool_t sysenter_disables_events;
+    u8 sysenter_tbf;
 
     /* Segment base addresses. */
     unsigned long fs_base;
     unsigned long gs_base_kernel;
     unsigned long gs_base_user;
 
+    /* x86/64 kernel-only (SMAP) pagetable */
+    pagetable_t guest_table_smap;
+
     /* Bounce information for propagating an exception to guest OS. */
     struct trap_bounce trap_bounce;
     struct trap_bounce int80_bounce;
@@ -471,6 +476,10 @@ unsigned long pv_guest_cr4_fixup(const s
     ((c) & ~(X86_CR4_PGE | X86_CR4_PSE | X86_CR4_TSD |      \
             X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE))
 
+#define guest_smap_enabled(v) \
+    (!pagetable_is_null((v)->arch.pv_vcpu.guest_table_smap))
+#define guest_smap_mode(v) ((v)->arch.flags & TF_smap_mode)
+
 void domain_cpuid(struct domain *d,
                   unsigned int  input,
                   unsigned int  sub_input,
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -405,17 +405,34 @@ guest_get_eff_l1e(struct vcpu *v, unsign
     paging_get_hostmode(v)->guest_get_eff_l1e(v, addr, eff_l1e);
 }
 
+#define TOGGLE_MODE(v, m, in) do { \
+    if ( in ) \
+        (m) = (v)->arch.flags; \
+    if ( (m) & TF_kernel_mode ) \
+    { \
+        set_smap_mode(v, (in) || ((m) & TF_smap_mode) ); \
+        break; \
+    } \
+    if ( in ) \
+        (v)->arch.flags |= TF_smap_mode; \
+    else \
+    { \
+        (v)->arch.flags &= ~TF_smap_mode; \
+        (v)->arch.flags |= (m) & TF_smap_mode; \
+    } \
+    toggle_guest_mode(v); \
+} while ( 0 )
+
 /* Read the guest's l1e that maps this address, from the kernel-mode
  * pagetables. */
 static inline void
 guest_get_eff_kern_l1e(struct vcpu *v, unsigned long addr, void *eff_l1e)
 {
-    int user_mode = !(v->arch.flags & TF_kernel_mode);
-#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
+    unsigned int mode;
 
-    TOGGLE_MODE();
+    TOGGLE_MODE(v, mode, 1);
     guest_get_eff_l1e(v, addr, eff_l1e);
-    TOGGLE_MODE();
+    TOGGLE_MODE(v, mode, 0);
 }
 
 
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -125,12 +125,15 @@
 /* 'trap_bounce' flags values */
 #define TBF_EXCEPTION          1
 #define TBF_EXCEPTION_ERRCODE  2
+#define TBF_SMAP               4
 #define TBF_INTERRUPT          8
 #define TBF_FAILSAFE          16
 
 /* 'arch_vcpu' flags values */
 #define _TF_kernel_mode        0
 #define TF_kernel_mode         (1<<_TF_kernel_mode)
+#define _TF_smap_mode          1
+#define TF_smap_mode           (1<<_TF_smap_mode)
 
 /* #PF error code values. */
 #define PFEC_page_present   (1U<<0)
--- a/xen/include/public/arch-x86/xen.h
+++ b/xen/include/public/arch-x86/xen.h
@@ -138,6 +138,7 @@ typedef unsigned long xen_ulong_t;
  */
 #define TI_GET_DPL(_ti)      ((_ti)->flags & 3)
 #define TI_GET_IF(_ti)       ((_ti)->flags & 4)
+#define TI_GET_AC(_ti)       ((_ti)->flags & 8)
 #define TI_SET_DPL(_ti,_dpl) ((_ti)->flags |= (_dpl))
 #define TI_SET_IF(_ti,_if)   ((_ti)->flags |= ((!!(_if))<<2))
 struct trap_info {
@@ -179,6 +180,8 @@ struct vcpu_guest_context {
 #define VGCF_syscall_disables_events   (1<<_VGCF_syscall_disables_events)
 #define _VGCF_online                   5
 #define VGCF_online                    (1<<_VGCF_online)
+#define _VGCF_syscall_clac             6
+#define VGCF_syscall_clac              (1<<_VGCF_syscall_clac)
     unsigned long flags;                    /* VGCF_* flags                 */
     struct cpu_user_regs user_regs;         /* User-level CPU registers     */
     struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
--- a/xen/include/public/callback.h
+++ b/xen/include/public/callback.h
@@ -76,6 +76,13 @@
  */
 #define _CALLBACKF_mask_events             0
 #define CALLBACKF_mask_events              (1U << _CALLBACKF_mask_events)
+/*
+ * Effect CLAC upon callback entry? This flag is ignored for event,
+ * failsafe, and NMI callbacks: user space gets unconditionally hidden if
+ * respective functionality was enabled by the kernel.
+ */
+#define _CALLBACKF_clac                    1
+#define CALLBACKF_clac                     (1U << _CALLBACKF_clac)
 
 /*
  * Register a callback.
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -341,6 +341,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
  * mfn: Machine frame number of new page-table base to install in MMU
 *      when in user space.
  *
+ * cmd: MMUEXT_NEW_SMAP_BASEPTR [x86/64 only]
+ * mfn: Machine frame number of new page-table base to install in MMU
+ *      when in kernel-only (SMAP) mode.
+ *
 * cmd: MMUEXT_TLB_FLUSH_LOCAL
 * No additional arguments. Flushes local TLB.
 *
@@ -371,6 +375,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 * linear_addr: Linear address of LDT base (NB. must be page-aligned).
 * nr_ents: Number of entries in LDT.
 *
+ * cmd: MMUEXT_SET_SMAP_MODE
+ * val: 0 - disable, 1 - enable (other values reserved)
+ *
 * cmd: MMUEXT_CLEAR_PAGE
 * mfn: Machine frame number to be cleared.
 *
@@ -402,17 +409,21 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define MMUEXT_FLUSH_CACHE_GLOBAL 18
 #define MMUEXT_MARK_SUPER       19
 #define MMUEXT_UNMARK_SUPER     20
+#define MMUEXT_NEW_SMAP_BASEPTR 21
+#define MMUEXT_SET_SMAP_MODE    22
 /* ` } */
 
 #ifndef __ASSEMBLY__
 struct mmuext_op {
     unsigned int cmd; /* => enum mmuext_cmd */
     union {
-        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR
+        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR, NEW_SMAP_BASEPTR
          * CLEAR_PAGE, COPY_PAGE, [UN]MARK_SUPER */
         xen_pfn_t     mfn;
         /* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
         unsigned long linear_addr;
+        /* SET_SMAP_MODE */
+        unsigned int  val;
     } arg1;
     union {
         /* SET_LDT */
--=__Part94A72258.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part94A72258.0__=--


From xen-devel-bounces@lists.xen.org Wed Jan 29 15:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 15:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8XW7-0005LF-PK; Wed, 29 Jan 2014 15:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8XW6-0005LA-8R
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 15:57:30 +0000
Received: from [85.158.143.35:42112] by server-3.bemta-4.messagelabs.com id
	40/E6-11539-9E429E25; Wed, 29 Jan 2014 15:57:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391011048!1694782!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8058 invoked from network); 29 Jan 2014 15:57:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 15:57:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 15:57:28 +0000
Message-Id: <52E933050200007800117FE2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 15:57:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1221A4E5.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86/domctl: don't ignore errors from
 vmce_restore_vcpu()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1221A4E5.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

What started out as a simple cleanup patch (eliminating the redundant
check of domctl->cmd before setting "copyback", which as a result
turned the "ext_vcpucontext_out" label useless) revealed a bug in the
handling of XEN_DOMCTL_set_ext_vcpucontext.

Fix this, retaining the cleanup, and at once dropping a stale comment
and an accompanying formatting issue.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -815,7 +815,7 @@ long arch_do_domctl(
         ret =3D -ESRCH;
         if ( (evc->vcpu >=3D d->max_vcpus) ||
              ((v =3D d->vcpu[evc->vcpu]) =3D=3D NULL) )
-            goto ext_vcpucontext_out;
+            break;
=20
         if ( domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext )
         {
@@ -847,17 +847,20 @@ long arch_do_domctl(
             evc->vmce.caps =3D v->arch.vmce.mcg_cap;
             evc->vmce.mci_ctl2_bank0 =3D v->arch.vmce.bank[0].mci_ctl2;
             evc->vmce.mci_ctl2_bank1 =3D v->arch.vmce.bank[1].mci_ctl2;
+
+            ret =3D 0;
+            copyback =3D 1;
         }
         else
         {
             ret =3D -EINVAL;
             if ( evc->size < offsetof(typeof(*evc), vmce) )
-                goto ext_vcpucontext_out;
+                break;
             if ( is_pv_domain(d) )
             {
                 if ( !is_canonical_address(evc->sysenter_callback_eip) ||
                      !is_canonical_address(evc->syscall32_callback_eip) )
-                    goto ext_vcpucontext_out;
+                    break;
                 fixup_guest_code_selector(d, evc->sysenter_callback_cs);
                 v->arch.pv_vcpu.sysenter_callback_cs      =3D
                     evc->sysenter_callback_cs;
@@ -873,13 +876,11 @@ long arch_do_domctl(
                 v->arch.pv_vcpu.syscall32_disables_events =3D
                     evc->syscall32_disables_events;
             }
-            else
-            /* We do not support syscall/syscall32/sysenter on 32-bit =
Xen. */
-            if ( (evc->sysenter_callback_cs & ~3) ||
-                 evc->sysenter_callback_eip ||
-                 (evc->syscall32_callback_cs & ~3) ||
-                 evc->syscall32_callback_eip )
-                goto ext_vcpucontext_out;
+            else if ( (evc->sysenter_callback_cs & ~3) ||
+                      evc->sysenter_callback_eip ||
+                      (evc->syscall32_callback_cs & ~3) ||
+                      evc->syscall32_callback_eip )
+                break;
=20
             BUILD_BUG_ON(offsetof(struct xen_domctl_ext_vcpucontext,
                                   mcg_cap) !=3D
@@ -896,13 +897,9 @@ long arch_do_domctl(
=20
                 ret =3D vmce_restore_vcpu(v, &vmce);
             }
+            else
+                ret =3D 0;
         }
-
-        ret =3D 0;
-
-    ext_vcpucontext_out:
-        if ( domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontext )
-            copyback =3D 1;
     }
     break;
=20




--=__Part1221A4E5.1__=
Content-Type: text/plain; name="x86-domctl-evc-cleanup.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-domctl-evc-cleanup.patch"

x86/domctl: don't ignore errors from vmce_restore_vcpu()=0A=0AWhat started =
out as a simple cleanup patch (eliminating the redundant=0Acheck of =
domctl->cmd before setting "copyback", which as a result=0Aturned the =
"ext_vcpucontext_out" label useless) revealed a bug in the=0Ahandling of =
XEN_DOMCTL_set_ext_vcpucontext.=0A=0AFix this, retaining the cleanup, and =
at once dropping a stale comment=0Aand an accompanying formatting =
issue.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/xen/arch/x86/domctl.c=0A+++ b/xen/arch/x86/domctl.c=0A@@ -815,7 +815,7 =
@@ long arch_do_domctl(=0A         ret =3D -ESRCH;=0A         if ( =
(evc->vcpu >=3D d->max_vcpus) ||=0A              ((v =3D d->vcpu[evc->vcpu]=
) =3D=3D NULL) )=0A-            goto ext_vcpucontext_out;=0A+            =
break;=0A =0A         if ( domctl->cmd =3D=3D XEN_DOMCTL_get_ext_vcpucontex=
t )=0A         {=0A@@ -847,17 +847,20 @@ long arch_do_domctl(=0A           =
  evc->vmce.caps =3D v->arch.vmce.mcg_cap;=0A             evc->vmce.mci_ctl=
2_bank0 =3D v->arch.vmce.bank[0].mci_ctl2;=0A             evc->vmce.mci_ctl=
2_bank1 =3D v->arch.vmce.bank[1].mci_ctl2;=0A+=0A+            ret =3D =
0;=0A+            copyback =3D 1;=0A         }=0A         else=0A         =
{=0A             ret =3D -EINVAL;=0A             if ( evc->size < =
offsetof(typeof(*evc), vmce) )=0A-                goto ext_vcpucontext_out;=
=0A+                break;=0A             if ( is_pv_domain(d) )=0A        =
     {=0A                 if ( !is_canonical_address(evc->sysenter_callback=
_eip) ||=0A                      !is_canonical_address(evc->syscall32_callb=
ack_eip) )=0A-                    goto ext_vcpucontext_out;=0A+            =
        break;=0A                 fixup_guest_code_selector(d, evc->sysente=
r_callback_cs);=0A                 v->arch.pv_vcpu.sysenter_callback_cs    =
  =3D=0A                     evc->sysenter_callback_cs;=0A@@ -873,13 =
+876,11 @@ long arch_do_domctl(=0A                 v->arch.pv_vcpu.syscall3=
2_disables_events =3D=0A                     evc->syscall32_disables_events=
;=0A             }=0A-            else=0A-            /* We do not support =
syscall/syscall32/sysenter on 32-bit Xen. */=0A-            if ( (evc->syse=
nter_callback_cs & ~3) ||=0A-                 evc->sysenter_callback_eip =
||=0A-                 (evc->syscall32_callback_cs & ~3) ||=0A-            =
     evc->syscall32_callback_eip )=0A-                goto ext_vcpucontext_=
out;=0A+            else if ( (evc->sysenter_callback_cs & ~3) ||=0A+      =
                evc->sysenter_callback_eip ||=0A+                      =
(evc->syscall32_callback_cs & ~3) ||=0A+                      evc->syscall3=
2_callback_eip )=0A+                break;=0A =0A             BUILD_BUG_ON(=
offsetof(struct xen_domctl_ext_vcpucontext,=0A                             =
      mcg_cap) !=3D=0A@@ -896,13 +897,9 @@ long arch_do_domctl(=0A =0A     =
            ret =3D vmce_restore_vcpu(v, &vmce);=0A             }=0A+      =
      else=0A+                ret =3D 0;=0A         }=0A-=0A-        ret =
=3D 0;=0A-=0A-    ext_vcpucontext_out:=0A-        if ( domctl->cmd =3D=3D =
XEN_DOMCTL_get_ext_vcpucontext )=0A-            copyback =3D 1;=0A     =
}=0A     break;=0A =0A
--=__Part1221A4E5.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part1221A4E5.1__=--


From xen-devel-bounces@lists.xen.org Wed Jan 29 16:01:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8XZg-00069U-Jm; Wed, 29 Jan 2014 16:01:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8XZe-00069K-Iy
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:01:10 +0000
Received: from [85.158.137.68:33006] by server-7.bemta-3.messagelabs.com id
	29/7A-13775-5C529E25; Wed, 29 Jan 2014 16:01:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391011267!12069261!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4811 invoked from network); 29 Jan 2014 16:01:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:01:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="97733527"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 16:01:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 11:01:06 -0500
Message-ID: <1391011264.31814.134.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 29 Jan 2014 16:01:04 +0000
In-Reply-To: <20140129150632.GA31958@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<20140129150632.GA31958@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 16:06 +0100, Olaf Hering wrote:
> On Tue, Jan 28, Olaf Hering wrote:
> 
> > Handle new option discard=on|off for disk configuration. It is supposed
> > to disable discard support if file based backing storage was
> > intentionally created non-sparse to avoid fragmentation of the file.
> 
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
> >      ("removable", integer),
> >      ("readwrite", integer),
> >      ("is_cdrom", integer),
> > +    ("discard_enable", integer),
> 
> This new field changes the API, _libxl_types.h:struct libxl_device_disk
> gets a new member. How should code using this new flag recognize if it's
> present? If it is supposed to be part of a new libxl-4.5 API then
> out-of-tree code could put the code into #ifdef LIBXL_API_VERSION >= X.
> If not, how should it be done?

You should add a #define LIBXL_HAVE_FOO to libxl.h, there are a few
examples in there already.

There is no need to make the actual field conditional -- that would
actually be wrong since it would modify the ABI depending on what the
application asked for, meaning it would differ from how libxl was
actually built. An application which is using an ABI before 4.5 simply
won't think to touch this field.

> 
> For my own purpose I will overload ->readwrite to carry the discard flag
> and to preserve the ABI.
> 
> Olaf



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:07:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Xfd-0006Mg-KR; Wed, 29 Jan 2014 16:07:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8Xfb-0006Mb-NK
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:07:19 +0000
Received: from [85.158.139.211:16354] by server-14.bemta-5.messagelabs.com id
	F2/DD-27598-73729E25; Wed, 29 Jan 2014 16:07:19 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391011638!423832!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
From xen-devel-bounces@lists.xen.org Wed Jan 29 16:07:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Xfd-0006Mg-KR; Wed, 29 Jan 2014 16:07:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8Xfb-0006Mb-NK
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:07:19 +0000
Received: from [85.158.139.211:16354] by server-14.bemta-5.messagelabs.com id
	F2/DD-27598-73729E25; Wed, 29 Jan 2014 16:07:19 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391011638!423832!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22516 invoked from network); 29 Jan 2014 16:07:18 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 16:07:18 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391011638; l=1032;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=dWQYGhgifC7Xse89WnzFtu5yxF8=;
	b=dTCyuMczB+YMGjUxOI9cvNc4qQYkoMI9PI5+X34R5no5kqMr7abQhn8vlPqqADLsCWk
	ybtm1Yc1QpYDyfApRvLNEeRcY9vcrQpVJxNr9GRetec674rUW/RjMhxqQ0q9eIw80UOnt
	+705jZ6R8ECjvWprmCVSkY+A9LIlcqc+qno=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id d06c63q0TG7HUmz
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 29 Jan 2014 17:07:17 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 8986C50269; Wed, 29 Jan 2014 17:07:17 +0100 (CET)
Date: Wed, 29 Jan 2014 17:07:17 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129160717.GA11100@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<20140129150632.GA31958@aepfle.de>
	<1391011264.31814.134.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391011264.31814.134.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, Ian Campbell wrote:

> On Wed, 2014-01-29 at 16:06 +0100, Olaf Hering wrote:
> > This new field changes the API, _libxl_types.h:struct libxl_device_disk
> > gets a new member. How should code using this new flag recognize if it's
> > present? If it is supposed to be part of a new libxl-4.5 API then
> > out-of-tree code could put the code into #ifdef LIBXL_API_VERSION >= X.
> > If not, how should it be done?
> You should add a #define LIBXL_HAVE_FOO to libxl.h, there are a few
> examples in there already.

I will add such a define.

> There is no need to make the actual field conditional -- that would
> actually be wrong since it would modify the ABI depending on what the
> application asked for, meaning it would differ from how libxl was
> actually built. An application which is using an ABI before 4.5 simply
> won't think to touch this field.

I meant the access of the field in libvirt, like "p->discard_enable = val;".
Putting such code into #ifdef LIBXL_HAVE_FOO is fine.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:20:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Xrl-0006zY-5U; Wed, 29 Jan 2014 16:19:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Xrj-0006zR-NL
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:19:51 +0000
Received: from [193.109.254.147:45372] by server-16.bemta-14.messagelabs.com
	id A7/AE-21945-72A29E25; Wed, 29 Jan 2014 16:19:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391012387!674072!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22534 invoked from network); 29 Jan 2014 16:19:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:19:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,742,1384300800"; d="scan'208";a="95760362"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 16:19:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 11:19:46 -0500
Message-ID: <1391012385.31814.138.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 29 Jan 2014 16:19:45 +0000
In-Reply-To: <20140129160717.GA11100@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<20140129150632.GA31958@aepfle.de>
	<1391011264.31814.134.camel@kazak.uk.xensource.com>
	<20140129160717.GA11100@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 17:07 +0100, Olaf Hering wrote:
> On Wed, Jan 29, Ian Campbell wrote:
> 
> > On Wed, 2014-01-29 at 16:06 +0100, Olaf Hering wrote:
> > > This new field changes the API, _libxl_types.h:struct libxl_device_disk
> > > gets a new member. How should code using this new flag recognize if it's
> > > present? If it is supposed to be part of a new libxl-4.5 API then
> > > out-of-tree code could put the code into #ifdef LIBXL_API_VERSION >= X.
> > > If not, how should it be done?
> > You should add a #define LIBXL_HAVE_FOO to libxl.h, there are a few
> > examples in there already.
> 
> I will add such a define.

Thanks.

> > There is no need to make the actual field conditional -- that would
> > actually be wrong since it would modify the ABI depending on what the
> > application asked for, meaning it would differ from how libxl was
> > actually built. An application which is using an ABI before 4.5 simply
> > won't think to touch this field.
> 
> I meant the access of the field in libvirt, like "p->discard_enable = val;".
> Putting such code into #ifdef LIBXL_HAVE_FOO is fine.

Yes, I misunderstood what you meant.

The application's choices are to #define LIBXL_API_VERSION to a big
enough number or to make things conditional on the appropriate
LIBXL_HAVE_FOO. AIUI libvirt has chosen to use the LIBXL_HAVE option.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:23:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Xv1-0007Cw-TQ; Wed, 29 Jan 2014 16:23:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8Xv0-0007Co-Gd
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 16:23:14 +0000
Received: from [85.158.139.211:59743] by server-15.bemta-5.messagelabs.com id
	7C/0C-24395-1FA29E25; Wed, 29 Jan 2014 16:23:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391012591!424657!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20501 invoked from network); 29 Jan 2014 16:23:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:23:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="95761715"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 16:23:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 11:23:10 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8Xuv-0002AV-Al;
	Wed, 29 Jan 2014 16:23:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8Xuv-0006XW-27;
	Wed, 29 Jan 2014 16:23:09 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21225.10988.728464.567330@mariner.uk.xensource.com>
Date: Wed, 29 Jan 2014 16:23:08 +0000
To: "Daniel P. Berrange" <berrange@redhat.com>
In-Reply-To: <20140128100608.GE19598@redhat.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com> <20140128100608.GE19598@redhat.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: LibVir <libvir-list@redhat.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
 flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel P. Berrange writes ("Re: [libvirt] [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> Yes, you are correct. The threading model for libvirtd is that the
> process thread leader executes the event loop, dealing with timer
> and file descriptor I/O callbacks. There are also 'n' worker threads
> which exclusively handle public API calls from libvirt clients. IOW
> all your timer callbacks will be in one thread - which also means
> you want your timer callbacks to be fast to execute.

Right.  Good.  All of libxl's callbacks should be fast.

There is still the problem that libxl functions on other threads may
make an fd deregistration call, while libvirt's event loop is still
polling (or selecting) on the fd in the main loop.

libxl might then close the fd, dup2 something onto it, or leave the fd
to be reused for some other object.

Depending on the underlying kernel, that can cause side effects.  I
have reproduced the analogous bug with libxl's event loop, but I had
to use an fd connected to the controlling tty, from a background
process group, when the tty has stty tostop.  Polling such a thing for
POLLOUT raises SIGTTOU.

Of course the libvirt xl driver still needs to cope with being told by
libxl to register or modify a timeout, or to register, deregister, or
modify an fd, from other than the master thread.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:31:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Y2i-0007mZ-40; Wed, 29 Jan 2014 16:31:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>)
	id 1W8Xqu-0006sp-Kb; Wed, 29 Jan 2014 16:19:00 +0000
Received: from [85.158.137.68:42693] by server-9.bemta-3.messagelabs.com id
	D6/AE-10184-3F929E25; Wed, 29 Jan 2014 16:18:59 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391012337!11314532!1
X-Originating-IP: [209.85.214.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13192 invoked from network); 29 Jan 2014 16:18:58 -0000
Received: from mail-ob0-f181.google.com (HELO mail-ob0-f181.google.com)
	(209.85.214.181)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:18:58 -0000
Received: by mail-ob0-f181.google.com with SMTP id va2so2132543obc.26
	for <multiple recipients>; Wed, 29 Jan 2014 08:18:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=vrwalMO0tuMkpaQ3vRqLDrkRb1S3ErYzb1jYsu5E9rM=;
	b=h2hl+TuDxGlutAbxJQI5DSbBGQWPsUCC5l3zsp2XKZ4pMpb4ydYWiUCHsRkD3sSAjb
	Fo9TWdzi/2+gx7g3yX/C7KFFLBhex9tEeIQPxIf37Ayk1pvB57HbsPm/bx9UpeBVdr+B
	bnduENQFkhrvAe/8xU6nmWVoavCc0TMJ1BgA8hkZYgOUULwFMumaKwAqgIfRBW0EBKEG
	1CPMmyxhVPWK9Xvj04gPpNl4u1wCYg01no6GJ+eA4jkI1Y5whhes2io2qDlf73r4jVNN
	BkEd7HtGjHOsi30RD1O7ba45p2mY/41GDEVeJrXOL1NyWqxZP4MtXzbO6kPUwBAYjdyN
	Uuig==
MIME-Version: 1.0
X-Received: by 10.182.157.114 with SMTP id wl18mr1970184obb.52.1391012336890; 
	Wed, 29 Jan 2014 08:18:56 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Wed, 29 Jan 2014 08:18:56 -0800 (PST)
In-Reply-To: <1391006755.31814.129.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
Date: Wed, 29 Jan 2014 09:18:56 -0700
Message-ID: <CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Wed, 29 Jan 2014 16:31:10 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the PV guest config
#########################
name = "centos65.pv"
bootloader = "/usr/local/bin/pygrub"
extra = "root=/dev/xvda"
memory = 4096
vcpus = 2
vfb=[ "type=vnc, vncpass=123456" ]
vif = [ 'mac=00:16:3e:54:02:01, bridge=xenbr0' ]
disk = [ 'file:/vms/centos65_pv.img, xvda, w']
######################################


The output of "xl -vvv vcpu-set"
########################

libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-38
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "cpu-add",
    "id": 2,
    "arguments": {
        "id": 0
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Not supported
xc: debug: hypercall buffer: total allocations:9 total releases:9
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:7 misses:2 toobig:0
######################################
Again, this issue exists both on Xen-4.3.0 (official release) and
Xen-4.4.0-rc1-25-g9a80d50

On Wed, Jan 29, 2014 at 7:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 07:38 -0700, Yun Wang wrote:
>> So to fix the problem, I need to update the qemu version to version
>> 1.7 or later?
>
> Yes. Or, AIUI, you can use the version of Qemu which is bundled with the
> Xen releases.
>
>> BTW. I had this problem in both pvhvm and pv guest.
>> Does pv guest rely on qemu also?
>
> It does not.
>
> If you are having an issue with a PV guest vcpu hotplug then it is not
> the same underlying issue. Please report it separately with full logs,
> config info etc.
>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:31:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Y2i-0007mZ-40; Wed, 29 Jan 2014 16:31:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>)
	id 1W8Xqu-0006sp-Kb; Wed, 29 Jan 2014 16:19:00 +0000
Received: from [85.158.137.68:42693] by server-9.bemta-3.messagelabs.com id
	D6/AE-10184-3F929E25; Wed, 29 Jan 2014 16:18:59 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391012337!11314532!1
X-Originating-IP: [209.85.214.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13192 invoked from network); 29 Jan 2014 16:18:58 -0000
Received: from mail-ob0-f181.google.com (HELO mail-ob0-f181.google.com)
	(209.85.214.181)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:18:58 -0000
Received: by mail-ob0-f181.google.com with SMTP id va2so2132543obc.26
	for <multiple recipients>; Wed, 29 Jan 2014 08:18:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=vrwalMO0tuMkpaQ3vRqLDrkRb1S3ErYzb1jYsu5E9rM=;
	b=h2hl+TuDxGlutAbxJQI5DSbBGQWPsUCC5l3zsp2XKZ4pMpb4ydYWiUCHsRkD3sSAjb
	Fo9TWdzi/2+gx7g3yX/C7KFFLBhex9tEeIQPxIf37Ayk1pvB57HbsPm/bx9UpeBVdr+B
	bnduENQFkhrvAe/8xU6nmWVoavCc0TMJ1BgA8hkZYgOUULwFMumaKwAqgIfRBW0EBKEG
	1CPMmyxhVPWK9Xvj04gPpNl4u1wCYg01no6GJ+eA4jkI1Y5whhes2io2qDlf73r4jVNN
	BkEd7HtGjHOsi30RD1O7ba45p2mY/41GDEVeJrXOL1NyWqxZP4MtXzbO6kPUwBAYjdyN
	Uuig==
MIME-Version: 1.0
X-Received: by 10.182.157.114 with SMTP id wl18mr1970184obb.52.1391012336890; 
	Wed, 29 Jan 2014 08:18:56 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Wed, 29 Jan 2014 08:18:56 -0800 (PST)
In-Reply-To: <1391006755.31814.129.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
Date: Wed, 29 Jan 2014 09:18:56 -0700
Message-ID: <CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Wed, 29 Jan 2014 16:31:10 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the PV guest config
#########################
name = "centos65.pv"
bootloader = "/usr/local/bin/pygrub"
extra = "root=/dev/xvda"
memory = 4096
vcpus = 2
vfb=[ "type=vnc, vncpass=123456" ]
vif = [ 'mac=00:16:3e:54:02:01, bridge=xenbr0' ]
disk = [ 'file:/vms/centos65_pv.img, xvda, w']
######################################


The output of "xl -vvv vcpu-set"
########################

libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-38
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "cpu-add",
    "id": 2,
    "arguments": {
        "id": 0
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an
error message from QMP server: Not supported
xc: debug: hypercall buffer: total allocations:9 total releases:9
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:7 misses:2 toobig:0
######################################
Again, this issue exists on both Xen-4.3.0 (official release) and
Xen-4.4.0-rc1-25-g9a80d50.

On Wed, Jan 29, 2014 at 7:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 07:38 -0700, Yun Wang wrote:
>> So to fix the problem, I need to update the qemu version to version
>> 1.7 or later?
>
> Yes. Or, AIUI, you can use the version of Qemu which is bundled with the
> Xen releases.
>
>> BTW. I had this problem in both pvhvm and pv guest.
>> Does pv guest rely on qemu also?
>
> It does not.
>
> If you are having an issue with a PV guest vcpu hotplug then it is not
> the same underlying issue. Please report it separately with full logs,
> config info etc.
>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:34:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Y61-0008Cv-8K; Wed, 29 Jan 2014 16:34:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8Y5y-0008Ch-W8
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 16:34:35 +0000
Received: from [85.158.139.211:18057] by server-4.bemta-5.messagelabs.com id
	4B/2B-08092-A9D29E25; Wed, 29 Jan 2014 16:34:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391013271!430647!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21899 invoked from network); 29 Jan 2014 16:34:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:34:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208,217";a="95766974"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 16:34:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 11:34:30 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8Y5u-0004Ud-3C;
	Wed, 29 Jan 2014 16:34:30 +0000
Message-ID: <52E92D96.2070703@citrix.com>
Date: Wed, 29 Jan 2014 16:34:30 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E933050200007800117FE2@nat28.tlf.novell.com>
In-Reply-To: <52E933050200007800117FE2@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/domctl: don't ignore errors from
	vmce_restore_vcpu()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3016993606684747766=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3016993606684747766==
Content-Type: multipart/alternative;
	boundary="------------010905040200080009030108"

--------------010905040200080009030108
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 29/01/14 15:57, Jan Beulich wrote:
> What started out as a simple cleanup patch (eliminating the redundant
> check of domctl->cmd before setting "copyback", which as a result
> turned the "ext_vcpucontext_out" label useless) revealed a bug in the
> handling of XEN_DOMCTL_set_ext_vcpucontext.
>
> Fix this, retaining the cleanup, and at once dropping a stale comment
> and an accompanying formatting issue.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -815,7 +815,7 @@ long arch_do_domctl(
>          ret = -ESRCH;
>          if ( (evc->vcpu >= d->max_vcpus) ||
>               ((v = d->vcpu[evc->vcpu]) == NULL) )
> -            goto ext_vcpucontext_out;
> +            break;
>  
>          if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
>          {
> @@ -847,17 +847,20 @@ long arch_do_domctl(
>              evc->vmce.caps = v->arch.vmce.mcg_cap;
>              evc->vmce.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
>              evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> +
> +            ret = 0;
> +            copyback = 1;
>          }
>          else
>          {
>              ret = -EINVAL;
>              if ( evc->size < offsetof(typeof(*evc), vmce) )
> -                goto ext_vcpucontext_out;
> +                break;
>              if ( is_pv_domain(d) )
>              {
>                  if ( !is_canonical_address(evc->sysenter_callback_eip) ||
>                       !is_canonical_address(evc->syscall32_callback_eip) )
> -                    goto ext_vcpucontext_out;
> +                    break;
>                  fixup_guest_code_selector(d, evc->sysenter_callback_cs);
>                  v->arch.pv_vcpu.sysenter_callback_cs      =
>                      evc->sysenter_callback_cs;
> @@ -873,13 +876,11 @@ long arch_do_domctl(
>                  v->arch.pv_vcpu.syscall32_disables_events =
>                      evc->syscall32_disables_events;
>              }
> -            else
> -            /* We do not support syscall/syscall32/sysenter on 32-bit Xen. */
> -            if ( (evc->sysenter_callback_cs & ~3) ||
> -                 evc->sysenter_callback_eip ||
> -                 (evc->syscall32_callback_cs & ~3) ||
> -                 evc->syscall32_callback_eip )
> -                goto ext_vcpucontext_out;
> +            else if ( (evc->sysenter_callback_cs & ~3) ||
> +                      evc->sysenter_callback_eip ||
> +                      (evc->syscall32_callback_cs & ~3) ||
> +                      evc->syscall32_callback_eip )
> +                break;
>  
>              BUILD_BUG_ON(offsetof(struct xen_domctl_ext_vcpucontext,
>                                    mcg_cap) !=
> @@ -896,13 +897,9 @@ long arch_do_domctl(
>  
>                  ret = vmce_restore_vcpu(v, &vmce);
>              }
> +            else
> +                ret = 0;
>          }
> -
> -        ret = 0;
> -
> -    ext_vcpucontext_out:
> -        if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
> -            copyback = 1;
>      }
>      break;
>  
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------010905040200080009030108--


--===============3016993606684747766==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3016993606684747766==--


From xen-devel-bounces@lists.xen.org Wed Jan 29 16:35:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Y6V-0008HD-Nd; Wed, 29 Jan 2014 16:35:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8Y6V-0008H1-3Z
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:35:07 +0000
Received: from [85.158.137.68:42913] by server-16.bemta-3.messagelabs.com id
	5F/F6-29917-ABD29E25; Wed, 29 Jan 2014 16:35:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391013304!12118500!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21528 invoked from network); 29 Jan 2014 16:35:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 16:35:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="95767259"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 16:35:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 11:35:03 -0500
Message-ID: <1391013301.31814.147.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 29 Jan 2014 16:35:01 +0000
In-Reply-To: <52E925E20200007800117FA3@nat28.tlf.novell.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
	<52E925E20200007800117FA3@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 15:01 +0000, Jan Beulich wrote:
> >>> On 29.01.14 at 15:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
> >> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> >> > +    case XEN_DOMCTL_cacheflush:
> >> > +    {
> >> > +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
> >> > +        if ( __copy_to_guest(u_domctl, domctl, 1) )
> >> 
> >> While you certainly say so in the public header change, I think you
> >> recall that we pretty recently changed another hypercall to not be
> >> the only inconsistent one modifying the input structure in order to
> >> handle hypercall preemption.
> > 
> > That was a XENMEM though IIRC -- is the same requirement also true of
> > domctls?
> 
> Not necessarily - I was just trying to point out the issue to
> avoid needing to fix it later on.

OK, but you do think it should be fixed "transparently" rather than made
an explicit part of the API?

> > How/where would you recommend saving the progress here?
> 
> Depending on the nature, a per-domain or per-vCPU field that
> gets acted upon before issuing any new, similar operation. I.e.
> something along the lines of x86's old_guest_table. It's ugly, I
> know. But with exposing domctls to semi-trusted guests in
> mind, you may use state modifiable by the caller here only if
> tampering with that state isn't going to harm the whole system
> (if the guest being started is affected in the case here that
> obviously wouldn't be a problem).

Hrm, thanks for raising this -- it made me realise that we cannot
necessarily rely on the disaggregated domain builder to even issue this
call at all.

That would be OK from the point of view of not flushing the things which
the builder touched (as you say, it can only harm the domain it is
building). But it is not OK from the PoV of flushing scrubbed data from
the cache, ensuring that the scrubbed bytes reach RAM (i.e. it can leak
old data).

So I think I need an approach which flushes the scrubbed pages as it
does the scrubbing (this makes a certain logical sense anyway) and have
the toolstack issue hypercalls to flush the stuff it has written. (The
first approach to this issue tried to do this, but used a system call
provided by Linux which didn't have quite the correct semantics; using
a version of this hypercall with a range should work.)

Before I get too deep into that, do you think that
        struct xen_domctl_cacheflush {
            /* start_mfn is updated for progress over preemption. */
            xen_pfn_t start_mfn;
            xen_pfn_t end_mfn;
        };
        
is acceptable or do you want me to try and find a way to do preemption
without the write back?

The blobs written by the toolstack aren't likely to be >1GB in size, so
rejecting over large ranges would be a potential option, but it's not
totally satisfactory.

> >> >          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> >> > -        if ( op == RELINQUISH && count >= 0x2000 )
> >> > +        switch ( op )
> >> >          {
> >> > -            if ( hypercall_preempt_check() )
> >> > +        case RELINQUISH:
> >> > +        case CACHEFLUSH:
> >> > +            if (count >= 0x2000 && hypercall_preempt_check() )
> >> >              {
> >> >                  p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
> >> >                  rc = -EAGAIN;
> >> >                  goto out;
> >> >              }
> >> >              count = 0;
> >> > +            break;
> >> > +        case INSERT:
> >> > +        case ALLOCATE:
> >> > +        case REMOVE:
> >> > +            /* No preemption */
> >> > +            break;
> >> >          }
> >> 
> >> Unrelated to the patch here, but don't you have a problem if you
> >> don't preempt _at all_ here for certain operation types? Or is a
> >> limit on the number of iterations being enforced elsewhere for
> >> those?
> > 
> > Good question.
> > 
> > The tools/guest accessible paths here are through
> > guest_physmap_add/remove_page. I think the only paths which are exposed
> > that pass a non-zero order are XENMEM_populate_physmap and
> > XENMEM_exchange, both of which restrict the maximum order.
> > 
> > I don't think those guest_physmap_* are preemptible on x86 either?
> 
> They aren't, but they have a strict upper limit of at most dealing
> with a 1Gb page at a time. If that's similar for ARM, I don't see
> an immediate issue.

Same on ARM (through common code using MAX_ORDER == 20).

> > It's possible that we should nevertheless handle preemption on those
> > code paths as well, but I don't think it is critical right now (or at
> > least not critical enough to warrant a freeze exception for 4.4).
> 
> Indeed. Of course the 1Gb limit mentioned above, while perhaps
> acceptable to process without preemption right now, is still pretty
> high for achieving really good responsiveness, so we may need to
> do something about that going forward.

Right. I won't worry about it now though.

> 
> Jan
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 16:50:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 16:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8YKs-0000Fa-H1; Wed, 29 Jan 2014 16:49:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8YKq-0000FV-PW
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 16:49:56 +0000
Received: from [85.158.139.211:48122] by server-9.bemta-5.messagelabs.com id
	56/0E-11237-23139E25; Wed, 29 Jan 2014 16:49:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391014193!416528!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9986 invoked from network); 29 Jan 2014 16:49:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 16:49:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 29 Jan 2014 16:49:53 +0000
Message-Id: <52E93F4E0200007800118054@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 29 Jan 2014 16:50:06 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
	<52E925E20200007800117FA3@nat28.tlf.novell.com>
	<1391013301.31814.147.camel@kazak.uk.xensource.com>
In-Reply-To: <1391013301.31814.147.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com,
	george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 17:35, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Wed, 2014-01-29 at 15:01 +0000, Jan Beulich wrote:
>> >>> On 29.01.14 at 15:15, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
>> >> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
>> >> > +    case XEN_DOMCTL_cacheflush:
>> >> > +    {
>> >> > +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
>> >> > +        if ( __copy_to_guest(u_domctl, domctl, 1) )
>> >> 
>> >> While you certainly say so in the public header change, I think you
>> >> recall that we pretty recently changed another hypercall to not be
>> >> the only inconsistent one modifying the input structure in order to
>> >> handle hypercall preemption.
>> > 
>> > That was a XENMEM though IIRC -- is the same requirement also true of
>> > domctls?
>> 
>> Not necessarily - I was just trying to point out the issue to
>> avoid needing to fix it later on.
> 
> OK, but you do think it should be fixed "transparently" rather than made
> an explicit part of the API?

I'd prefer it to be done that way. Otherwise we'd need to keep in
mind that we're allowing exceptions for domctls, but not for other
hypercalls.

> Before I get too deep into that, do you think that
>         struct xen_domctl_cacheflush {
>             /* start_mfn is updated for progress over preemption. */
>             xen_pfn_t start_mfn;
>             xen_pfn_t end_mfn;
>         };
>         
> is acceptable or do you want me to try and find a way to do preemption
> without the write back?

As per above - if possible, I'd prefer you avoiding the write back.

> The blobs written by the toolstack aren't likely to be >1GB in size, so
> rejecting over large ranges would be a potential option, but it's not
> totally satisfactory.

Agreed, but good enough for now (considering the various other
cases that already have similar "issues").

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ZRG-0003He-7K; Wed, 29 Jan 2014 18:00:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W8ZRE-0003HZ-Pt
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 18:00:37 +0000
Received: from [85.158.139.211:44412] by server-2.bemta-5.messagelabs.com id
	46/DE-23037-3C149E25; Wed, 29 Jan 2014 18:00:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391018433!443838!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25586 invoked from network); 29 Jan 2014 18:00:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 18:00:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="97786860"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 18:00:33 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 13:00:32 -0500
Message-ID: <52E941BF.3070308@citrix.com>
Date: Wed, 29 Jan 2014 18:00:31 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 15:33, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to see how useful people think this
> would be.
> 
> It being based on switching page tables (along with the two page
> tables we have right now - one containing user mappings only, the
> other containing both kernel and user mappings - a third category
> gets added containing kernel mappings only; Linux would have such
> a thing readily available and hence presumably would require not
> too intrusive changes) of course makes clear that this would come
> with quite a bit of a performance cost. Furthermore the state
> management obviously requires a couple of extra instructions to be
> added into reasonably hot hypervisor code paths.
> 
> Hence before going further with this approach (for now I only got
> it to the point that an un-patched Linux is unaffected, i.e. I didn't
> code up the Linux side yet) I would be interested to hear people's
> opinions on whether the performance cost is worth it, or whether
> instead we should consider PVH the one and only route towards
> gaining that extra level of security.

If I'm understanding this correctly, in upstream Linux this would
require two new pv-ops for clac and stac?  This might make upstreaming
support for this in Linux tricky, but I wouldn't suggest blocking a Xen
feature for this reason.

Each copy_from_user() and copy_to_user() and get_user()/put_user() would
thus require two hypercalls, at least one of which would do a TLB flush?

This does sound rather expensive and thus not something we (XenServer)
would be especially interested in using.

Do you have any figures for the performance impact on guests not using
this feature?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ZRG-0003He-7K; Wed, 29 Jan 2014 18:00:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1W8ZRE-0003HZ-Pt
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 18:00:37 +0000
Received: from [85.158.139.211:44412] by server-2.bemta-5.messagelabs.com id
	46/DE-23037-3C149E25; Wed, 29 Jan 2014 18:00:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391018433!443838!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25586 invoked from network); 29 Jan 2014 18:00:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 18:00:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="97786860"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 18:00:33 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 13:00:32 -0500
Message-ID: <52E941BF.3070308@citrix.com>
Date: Wed, 29 Jan 2014 18:00:31 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 15:33, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to see what people think about how useful
> this would be.
> 
> It being based on switching page tables (along with the two page
> tables we have right now - one containing user mappings only, the
> other containing both kernel and user mappings - a third category
> gets added containing kernel mappings only; Linux would have such
> a thing readily available and hence presumably would require not
> too intrusive changes) of course makes clear that this would come
> with quite a bit of a performance cost. Furthermore the state
> management obviously requires a couple of extra instructions to be
> added into reasonably hot hypervisor code paths.
> 
> Hence before going further with this approach (for now I only got
> it to the point that an un-patched Linux is unaffected, i.e. I didn't
> code up the Linux side yet) I would be interested to hear people's
> opinions on whether the performance cost is worth it, or whether
> instead we should consider PVH the one and only route towards
> gaining that extra level of security.

If I'm understanding this correctly, in upstream Linux this would
require two new pv-ops for clac and stac?  This might make upstreaming
support for this in Linux tricky, but I wouldn't suggest blocking a Xen
feature for this reason.

Each copy_from_user(), copy_to_user(), and get_user()/put_user() would
thus require two hypercalls, at least one of which would do a TLB flush?

This does sound rather expensive and thus not something we (XenServer)
would be especially interested in using.

Do you have any figures for the performance impact on guests not using
this feature?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ZV6-0003VQ-0E; Wed, 29 Jan 2014 18:04:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8ZV5-0003VL-A4
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 18:04:35 +0000
Received: from [85.158.139.211:62958] by server-9.bemta-5.messagelabs.com id
	47/9C-11237-2B249E25; Wed, 29 Jan 2014 18:04:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391018672!449377!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6185 invoked from network); 29 Jan 2014 18:04:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 18:04:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="97788900"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 18:04:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 29 Jan 2014 13:04:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8ZV1-0005nb-El;
	Wed, 29 Jan 2014 18:04:31 +0000
Message-ID: <52E942AF.1090002@citrix.com>
Date: Wed, 29 Jan 2014 18:04:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 15:33, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to see what people think about how useful
> this would be.

In principle, I think this is a good idea.  We should certainly take any
opportunity we can to improve the security of the VMs.

>
> It being based on switching page tables (along with the two page
> tables we have right now - one containing user mappings only, the
> other containing both kernel and user mappings - a third category
> gets added containing kernel mappings only; Linux would have such
> a thing readily available and hence presumably would require not
> too intrusive changes) of course makes clear that this would come
> with quite a bit of a performance cost. Furthermore the state
> management obviously requires a couple of extra instructions to be
> added into reasonably hot hypervisor code paths.

This appears to be hardware independent, so it looks as if it would still
work fine on 64-bit hardware lacking explicit SMAP/SMEP support?
(although there may be problems with emulating {ST,CL}AC)

>
> Hence before going further with this approach (for now I only got
> it to the point that an un-patched Linux is unaffected, i.e. I didn't
> code up the Linux side yet) I would be interested to hear people's
> opinions on whether the performance cost is worth it, or whether
> instead we should consider PVH the one and only route towards
> gaining that extra level of security.
>
> And if considering it worthwhile, comments on the actual
> implementation (including the notes at the top of the attached
> patch) would of course be welcome too.
>
> Jan


At a glance, it doesn't appear to add too much code to hot-paths, but
the performance overhead from the point of view of the PV guest looks
substantial, requiring two hypercalls/traps on each
copy_{to,from}_user(), which themselves cause a local TLB flush.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:25:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:25:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8Zoc-0004YV-1p; Wed, 29 Jan 2014 18:24:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8Zoa-0004YM-As
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:24:44 +0000
Received: from [85.158.137.68:56842] by server-12.bemta-3.messagelabs.com id
	CC/18-01674-B6749E25; Wed, 29 Jan 2014 18:24:43 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391019882!8481493!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28972 invoked from network); 29 Jan 2014 18:24:42 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 18:24:42 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391019882; l=896;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=YymWmQJ4P+scK+sTQFOnEii9hF0=;
	b=eSCQLlo1v00zDaS+S2ymPRtv7LUtDXlVx4+ck9sK+zC0bBMJJX54q86seN+bd/fiI+P
	J+vBIjsxDZ2f1pF4AXo3uox+2VHD4njwN5yhBjyFqOqumR9zU+omVdhvbcKNJv97Cn3Dm
	oSeLsWijNS2WNU9UbwW40XljQ1bGGd9UWI4=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id 9068a6q0TIOgVQn
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 29 Jan 2014 19:24:42 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 33BC950269; Wed, 29 Jan 2014 19:24:42 +0100 (CET)
Date: Wed, 29 Jan 2014 19:24:42 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129182442.GA12903@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
	<20140129111939.GA26899@aepfle.de>
	<1390995947.31814.79.camel@kazak.uk.xensource.com>
	<20140129142357.GA27051@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140129142357.GA27051@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, Olaf Hering wrote:

> On Wed, Jan 29, Ian Campbell wrote:
> > On Wed, 2014-01-29 at 12:19 +0100, Olaf Hering wrote:
> > > That's what I'm asking you. Why is readwrite set here, and later on also
> > > in the .l file? At least just setting it here did not unconditionally
> > > enable it if no discard= was specified. I have not traced the code why
> > > that happens.
> > 
> > One for Ian J I think. Perhaps it is just setting the default?
> 
> For some reason this setting is lost, at least for my discard flag.  The
> readwrite part may suffer from the same issue, otherwise access_set
> could be removed.  I will trace the code to understand how it works.

I think the failure was caused by an incompatibility between libxl.so and
xl, because I had just copied libs around.

It's working fine with a fresh RPM package. I will remove the discard_set
variable.


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:35:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ZyW-000532-J7; Wed, 29 Jan 2014 18:35:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W8ZyU-00052x-Cw
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:34:58 +0000
Received: from [193.109.254.147:26897] by server-7.bemta-14.messagelabs.com id
	A9/74-23424-1D949E25; Wed, 29 Jan 2014 18:34:57 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391020495!690792!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22428 invoked from network); 29 Jan 2014 18:34:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 18:34:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0TIXpkT016497
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 29 Jan 2014 18:33:52 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0TIXp4s010731
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 29 Jan 2014 18:33:51 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0TIXogj010712; Wed, 29 Jan 2014 18:33:50 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 10:33:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2552A1BFA73; Wed, 29 Jan 2014 13:33:49 -0500 (EST)
Date: Wed, 29 Jan 2014 13:33:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140129183349.GA14312@phenom.dumpdata.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 05:53:38PM +0000, Wei Liu wrote:
> The original code is wrong because:
> * claim mode wants to know the total number of pages needed while
>   original code provides the additional number of pages needed.
> * if pod is enabled memory will already be allocated by the time we try
>   to claim memory.
> 
> So the fix would be:
> * move claim mode before actual memory allocation.
> * pass the right number of pages to hypervisor.
> 
> The "right number of pages" should be number of pages of target memory
> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> 
> This fixes bug #32.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Konrad Wilk <konrad.wilk@oracle.com>

And also 'Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'

Thank you!
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
> mode is completely broken. If this patch is deemed too complicated, we
> should flip the switch to disable claim mode by default for 4.4.
> ---
>  tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
>  1 file changed, 23 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..dd3b522 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -49,6 +49,8 @@
>  #define NR_SPECIAL_PAGES     8
>  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>  
> +#define VGA_HOLE_SIZE (0x20)
> +
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
>                          uint64_t *mstart_out, uint64_t *mend_out)
> @@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
>      for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>          page_array[i] += mmio_size >> PAGE_SHIFT;
>  
> +    /*
> +     * Try to claim pages for early warning of insufficient memory available.
> +     * This should go before xc_domain_set_pod_target, because that function
> +     * actually allocates memory for the guest. Claiming after memory has been
> +     * allocated is pointless.
> +     */
> +    if ( claim_enabled ) {
> +        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> +            goto error_out;
> +        }
> +    }
> +
>      if ( pod_mode )
>      {
>          /*
> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -         * adjust the PoD cache size so that domain tot_pages will be
> -         * target_pages - 0x20 after this call.
> +         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
> +         * "hole".  Xen will adjust the PoD cache size so that domain
> +         * tot_pages will be target_pages - VGA_HOLE_SIZE after
> +         * this call.
>           */
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +        rc = xc_domain_set_pod_target(xch, dom,
> +                                      target_pages - VGA_HOLE_SIZE,
>                                        NULL, NULL, NULL);
>          if ( rc != 0 )
>          {
> @@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
>      cur_pages = 0xc0;
>      stat_normal_pages = 0xc0;
>  
> -    /* try to claim pages for early warning of insufficient memory available */
> -    if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> -        if ( rc != 0 )
> -        {
> -            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> -            goto error_out;
> -        }
> -    }
>      while ( (rc == 0) && (nr_pages > cur_pages) )
>      {
>          /* Clip count to maximum 1GB extent. */
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 05:53:38PM +0000, Wei Liu wrote:
> The original code is wrong because:
> * claim mode wants to know the total number of pages needed, while the
>   original code provided only the additional number of pages needed.
> * if PoD is enabled, memory will already have been allocated by the time
>   we try to claim memory.
> 
> So the fix is to:
> * move the claim before the actual memory allocation.
> * pass the right number of pages to the hypervisor.
> 
> The "right number of pages" should be number of pages of target memory
> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> 
> This fixes bug #32.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Konrad Wilk <konrad.wilk@oracle.com>

And also 'Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'

Thank you!
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> WRT the 4.4 release: this patch should be accepted; otherwise PoD + claim
> mode is completely broken. If this patch is deemed too complicated, we
> should flip the switch to disable claim mode by default for 4.4.
> ---
>  tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
>  1 file changed, 23 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..dd3b522 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -49,6 +49,8 @@
>  #define NR_SPECIAL_PAGES     8
>  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>  
> +#define VGA_HOLE_SIZE (0x20)
> +
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
>                          uint64_t *mstart_out, uint64_t *mend_out)
> @@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
>      for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>          page_array[i] += mmio_size >> PAGE_SHIFT;
>  
> +    /*
> +     * Try to claim pages for early warning of insufficient memory available.
> +     * This should go before xc_domain_set_pod_target, because that function
> +     * actually allocates memory for the guest. Claiming after memory has been
> +     * allocated is pointless.
> +     */
> +    if ( claim_enabled ) {
> +        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> +            goto error_out;
> +        }
> +    }
> +
>      if ( pod_mode )
>      {
>          /*
> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -         * adjust the PoD cache size so that domain tot_pages will be
> -         * target_pages - 0x20 after this call.
> +         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
> +         * "hole".  Xen will adjust the PoD cache size so that domain
> +         * tot_pages will be target_pages - VGA_HOLE_SIZE after
> +         * this call.
>           */
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +        rc = xc_domain_set_pod_target(xch, dom,
> +                                      target_pages - VGA_HOLE_SIZE,
>                                        NULL, NULL, NULL);
>          if ( rc != 0 )
>          {
> @@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
>      cur_pages = 0xc0;
>      stat_normal_pages = 0xc0;
>  
> -    /* try to claim pages for early warning of insufficient memory available */
> -    if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> -        if ( rc != 0 )
> -        {
> -            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> -            goto error_out;
> -        }
> -    }
>      while ( (rc == 0) && (nr_pages > cur_pages) )
>      {
>          /* Clip count to maximum 1GB extent. */
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:41:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8a4D-0005XG-JG; Wed, 29 Jan 2014 18:40:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8a4C-0005XA-84
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:40:52 +0000
Received: from [85.158.139.211:31411] by server-14.bemta-5.messagelabs.com id
	E1/14-27598-33B49E25; Wed, 29 Jan 2014 18:40:51 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391020848!434457!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27907 invoked from network); 29 Jan 2014 18:40:50 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 18:40:50 -0000
Received: from mail-vb0-f52.google.com ([209.85.212.52]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUulLL+BpgzVwCa+QJdY7Bd+DAqkRWfuN@postini.com;
	Wed, 29 Jan 2014 10:40:50 PST
Received: by mail-vb0-f52.google.com with SMTP id p14so1375523vbm.39
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 10:40:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=zpg+3IuVw6UzrETBThTbr8895vWUDiH22sNU1V+rdfA=;
	b=fyeRP1CCTcnFaruWgXx09vA2C8XFvBsqXnAmthp2uAxzbDvYSr8R9L5GgwEV3/Rlrk
	UqF2nMoWU+BDV0CZoVzZR8FT60R/aymJA/CBRAYOix2R5vHemG+s1k5Gi+aTEJrS2lRH
	7LmtFSt8KNH60WOCjal4/TPeibP7SahNR5Elg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=zpg+3IuVw6UzrETBThTbr8895vWUDiH22sNU1V+rdfA=;
	b=E11/Nq0wxYDxL2w9v7KuD5t/rFuuMAw4ymG/J89eSsKgCjUirVeEw0UuCG0lDey9Gt
	fNWYxWtlL8LaJvVPzxs788fnyre7AyYsStIhb/6UKA2uqRMarGg/vvKkqJzOd3eGjppG
	KNYfRSx77VwUbFG3gWhO6E8Y3b3R8oEjaFgsHXPwQGJSlK1Jjf+wND1xay5iYrfbYvxv
	SLpxvEIy6kpJpFmHf5Wc39g1efnK0qMBFoVlNUcNfcJe94NyOxo3PZAPnvA4arTvbjQZ
	XzSNxbiAtTKynT+5IDwKqaGuDQ7TaODLJ2G+K8VXIviPqhT+K328YmgsY0WTQLNhqSeO
	IZxQ==
X-Gm-Message-State: ALoCoQkdtsmBfKY2e47FOgxNpdtM/FFRiktleN9N5MOcmr4YwQBBPTEorMFn8j9I9bpCvtCkemY1KMV+XFtXlIGlslmUIhssI6XhpN/0IDXfGHv70y+srMoQblheiwlmvBP1i6VHiLRuJhotct7iuDMku/YwBemlwmu+p4BCONmM38Lsrg7lFK8=
X-Received: by 10.220.99.7 with SMTP id s7mr7691653vcn.19.1391020847182;
	Wed, 29 Jan 2014 10:40:47 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.99.7 with SMTP id s7mr7691640vcn.19.1391020846988; Wed,
	29 Jan 2014 10:40:46 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 10:40:46 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
Date: Wed, 29 Jan 2014 20:40:46 +0200
Message-ID: <CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> Right, that's why changing it to cpumask_of(0) shouldn't make any
> difference for xen-unstable (it should make things clearer, if nothing
> else) but it should fix things for Oleksandr.

Unfortunately, it is not enough for stable operation.

I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
gic_route_irq_to_guest(). As a result, I no longer see the situation
which leads to the deadlock in on_selected_cpus() (as expected).
But the hypervisor sometimes hangs somewhere else (I have not yet
identified where this happens), or I sometimes see traps like the one
below (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):

(XEN) CPU1: Unexpected Trap: Undefined Instruction
(XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
(XEN) CPU:    1
(XEN) PC:     00242c1c __warn+0x20/0x28
(XEN) CPSR:   200001da MODE:Hypervisor
(XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
(XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
(XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
(XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
(XEN)
(XEN)   VTCR_EL2: 80002558
(XEN)  VTTBR_EL2: 00020000dec6a000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)    HCR_EL2: 00000000000028b5
(XEN)  TTBR0_EL2: 00000000d2014000
(XEN)
(XEN)    ESR_EL2: 00000000
(XEN)  HPFAR_EL2: 0000000000482110
(XEN)      HDFAR: fa211190
(XEN)      HIFAR: 00000000
(XEN)
(XEN) Xen stack trace from sp=4bfd7eb4:
(XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
(XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
(XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
(XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
(XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
(XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
(XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
(XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
(XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
(XEN)    ffeffbfe fedeefff fffd5ffe
(XEN) Xen call trace:
(XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
(XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
(XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
(XEN)    [<00248e60>] do_IRQ+0x138/0x198
(XEN)    [<00248978>] gic_interrupt+0x58/0xc0
(XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
(XEN)    [<00251830>] return_from_trap+0/0x4
(XEN)

I am also posting maintenance_interrupt() from my tree:

static void maintenance_interrupt(int irq, void *dev_id, struct
cpu_user_regs *regs)
{
    int i = 0, virq, pirq;
    uint32_t lr;
    struct vcpu *v = current;
    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);

    while ((i = find_next_bit((const long unsigned int *) &eisr,
                              64, i)) < 64) {
        struct pending_irq *p, *n;
        int cpu, eoi;

        cpu = -1;
        eoi = 0;

        spin_lock_irq(&gic.lock);
        lr = GICH[GICH_LR + i];
        virq = lr & GICH_LR_VIRTUAL_MASK;

        p = irq_to_pending(v, virq);
        if ( p->desc != NULL ) {
            p->desc->status &= ~IRQ_INPROGRESS;
            /* Assume only one pcpu needs to EOI the irq */
            cpu = p->desc->arch.eoi_cpu;
            eoi = 1;
            pirq = p->desc->irq;
        }
        if ( !atomic_dec_and_test(&p->inflight_cnt) )
        {
            /* A physical IRQ can't be reinjected */
            WARN_ON(p->desc != NULL);
            gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
            spin_unlock_irq(&gic.lock);
            i++;
            continue;
        }

        GICH[GICH_LR + i] = 0;
        clear_bit(i, &this_cpu(lr_mask));

        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
            n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
            gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
            list_del_init(&n->lr_queue);
            set_bit(i, &this_cpu(lr_mask));
        } else {
            gic_inject_irq_stop();
        }
        spin_unlock_irq(&gic.lock);

        spin_lock_irq(&v->arch.vgic.lock);
        list_del_init(&p->inflight);
        spin_unlock_irq(&v->arch.vgic.lock);

        if ( eoi ) {
            /* this is not racy because we can't receive another irq of the
             * same type until we EOI it.  */
            if ( cpu == smp_processor_id() )
                gic_irq_eoi((void*)(uintptr_t)pirq);
            else
                on_selected_cpus(cpumask_of(cpu),
                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
        }

        i++;
    }
}


Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8a7G-0005e4-BK; Wed, 29 Jan 2014 18:44:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8a7F-0005dv-1r
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:44:01 +0000
Received: from [85.158.143.35:64305] by server-3.bemta-4.messagelabs.com id
	8E/07-11539-0FB49E25; Wed, 29 Jan 2014 18:44:00 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391021038!1730848!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6877 invoked from network); 29 Jan 2014 18:43:59 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 18:43:59 -0000
Received: from mail-ve0-f179.google.com ([209.85.128.179]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUulL7WkzpAS8fNSG0xD0780Mu4nFKVvo@postini.com;
	Wed, 29 Jan 2014 10:43:59 PST
Received: by mail-ve0-f179.google.com with SMTP id jx11so1453158veb.24
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 10:43:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=BXfEse3d+5W+PITwJCK7dA0bjtw7tJQUCDIbdZRHD6g=;
	b=QY/TLzmcYJoj6eCsduOUZIGiv+1zQKcM2x8KE0Nk8D41+S8G0kSByMXHWxKVi6HJ07
	0z2FwvBkCrGqpSZxI4Szxsx6TgPZTwBmVCIunAN+Em0U8uLbEPAzG7y6bPjL5h0qLd6y
	BYCnpm7Kgh+eix6Ze7XVcBNHstoIP0ul86zLc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=BXfEse3d+5W+PITwJCK7dA0bjtw7tJQUCDIbdZRHD6g=;
	b=OYCzorCdsEORtd9ynfaDBtcDptuDk+FzLkHuyUn7vPSkO9XlwJDMS90WIPpvihzeFh
	2OLbkeO+y6UHk4mukINOsCzcNRQ3M2s2WRe5whCMUoRNonlyUnsQh7GbsiaHfMAwMXcG
	ciNQKLIT7XXBdzQgwiyT+OHrYFc6U4Rgwvh66RAkxvBeFfNqm77O3g11DDc/OAaMDHAv
	kB/crSykdqS5Z3uvDqgvWY+fYWAt3O97uRxSkJP8A89CkyHq9kYOeifpsdPpsUvB8vyo
	F3CjoarMUHy2EJbdreCoGclH340JtHS5axNJjOLyjRDdlDnMIdvN0c42tuYReV6zM1SP
	nSXw==
X-Gm-Message-State: ALoCoQllujiFvusuHIBsfHXDM54ZwNeT6iAxHnjj9mxYEy/a3OD4zY7xlJeD41SQ0eIMIAMbzXXoHsuKRBl76zHQZ6p98fk9GIuxqRca9wFXO8oR/4pZUYNA+QVjUP+y7pvMaKIaSZYIZr6KhoRUrIvQGws+Vv6OTokdePMP2eUo6vbB7DcMsc0=
X-Received: by 10.52.108.232 with SMTP id hn8mr2077492vdb.29.1391021037266;
	Wed, 29 Jan 2014 10:43:57 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.108.232 with SMTP id hn8mr2077488vdb.29.1391021037212;
	Wed, 29 Jan 2014 10:43:57 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 10:43:57 -0800 (PST)
In-Reply-To: <CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
Date: Wed, 29 Jan 2014 20:43:57 +0200
Message-ID: <CAJEb2DGSXfD2Sk0=aiYs-8hLPnQtqkDqd7Xs6bouuNJs6Zfxmw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, just a small correction:

> I was tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in

cpumask_of(0) instead of cpumask_of(smp_processor_id())

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	F3CjoarMUHy2EJbdreCoGclH340JtHS5axNJjOLyjRDdlDnMIdvN0c42tuYReV6zM1SP
	nSXw==
X-Gm-Message-State: ALoCoQllujiFvusuHIBsfHXDM54ZwNeT6iAxHnjj9mxYEy/a3OD4zY7xlJeD41SQ0eIMIAMbzXXoHsuKRBl76zHQZ6p98fk9GIuxqRca9wFXO8oR/4pZUYNA+QVjUP+y7pvMaKIaSZYIZr6KhoRUrIvQGws+Vv6OTokdePMP2eUo6vbB7DcMsc0=
X-Received: by 10.52.108.232 with SMTP id hn8mr2077492vdb.29.1391021037266;
	Wed, 29 Jan 2014 10:43:57 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.108.232 with SMTP id hn8mr2077488vdb.29.1391021037212;
	Wed, 29 Jan 2014 10:43:57 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 10:43:57 -0800 (PST)
In-Reply-To: <CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
Date: Wed, 29 Jan 2014 20:43:57 +0200
Message-ID: <CAJEb2DGSXfD2Sk0=aiYs-8hLPnQtqkDqd7Xs6bouuNJs6Zfxmw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, just a small correction:

> I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in

It should read: cpumask_of(0) instead of cpumask_of(smp_processor_id())

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 18:49:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8aCV-0005pf-C7; Wed, 29 Jan 2014 18:49:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8aCU-0005pa-79
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:49:26 +0000
Received: from [85.158.143.35:33398] by server-3.bemta-4.messagelabs.com id
	00/4B-11539-53D49E25; Wed, 29 Jan 2014 18:49:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391021362!1722582!1
X-Originating-IP: [209.85.212.42]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5668 invoked from network); 29 Jan 2014 18:49:23 -0000
Received: from mail-vb0-f42.google.com (HELO mail-vb0-f42.google.com)
	(209.85.212.42)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 18:49:23 -0000
Received: by mail-vb0-f42.google.com with SMTP id i3so1457475vbh.29
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=9Z7GwkLVcCQoFcZqvdfDPCVkdFxoxyesx05q/Uuf3NE=;
	b=BzyDo6UuZH+PfEWiMlaULf3lYd3Ta+802xwM9LwRsfEhleQ5OGdbVQTuvC8HJ1pfZa
	gjFOA9GqiCD4rbCySRGS99beVwZcCjpaSIBoSejhx733jSRLk58Pb5+cccdckNDvmoL0
	/k2CydJRk90D9hUdMrNMgTTUpoJ7M1fG6vE7hb+tUCuGwj7gZ3diFNWpVspYZWppwHjf
	RLEvrjt93qiehq5WdKwwTFyexcE6cpIuVho5ak3TjIzo5Z5l3W4wloIlES7KnC0BrGMO
	ftY9QJkz6a2hSxn5BO7Q5SCKxkUwSUqAqL7Dmg8bln2lheo5rCp4QOvOJa7597/9xeGM
	j9WA==
X-Gm-Message-State: ALoCoQmyP3u2xL8g1RFYfXcYN2nkuZVjwpge4kZN9hQ8kxhMyHf73vFq4p+nseV3S6Enkh604pQ7
MIME-Version: 1.0
X-Received: by 10.52.38.33 with SMTP id d1mr6588975vdk.4.1391021361767; Wed,
	29 Jan 2014 10:49:21 -0800 (PST)
Received: by 10.58.96.46 with HTTP; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
Received: by 10.58.96.46 with HTTP; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
In-Reply-To: <CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
Date: Wed, 29 Jan 2014 18:49:21 +0000
Message-ID: <CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
From: Julien Grall <julien.grall@linaro.org>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6665189090076775384=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6665189090076775384==
Content-Type: multipart/alternative; boundary=bcaec51ddba924c02804f120666f

--bcaec51ddba924c02804f120666f
Content-Type: text/plain; charset=UTF-8

Hi,

That's odd; a physical IRQ should not be injected twice.
Were you able to print the IRQ number?

In any case, you are using an old version of the interrupt patch series.
Your new error may come from a race condition in this code.

Can you try the newest version?
 On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:

> > Right, that's why changing it to cpumask_of(0) shouldn't make any
> > difference for xen-unstable (it should make things clearer, if nothing
> > else) but it should fix things for Oleksandr.
>
> Unfortunately, it is not enough for stable operation.
>
> I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0)
> in gic_route_irq_to_guest(). As a result, I no longer see the situation
> that caused the deadlock in on_selected_cpus() (as expected).
> But the hypervisor sometimes hangs somewhere else (I have not yet
> identified where), or I sometimes see traps like the one below
> (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
>
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> (XEN) CPU:    1
> (XEN) PC:     00242c1c __warn+0x20/0x28
> (XEN) CPSR:   200001da MODE:Hypervisor
> (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> (XEN)
> (XEN)   VTCR_EL2: 80002558
> (XEN)  VTTBR_EL2: 00020000dec6a000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd187f
> (XEN)    HCR_EL2: 00000000000028b5
> (XEN)  TTBR0_EL2: 00000000d2014000
> (XEN)
> (XEN)    ESR_EL2: 00000000
> (XEN)  HPFAR_EL2: 0000000000482110
> (XEN)      HDFAR: fa211190
> (XEN)      HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=4bfd7eb4:
> (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097
> 00000001
> (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019
> 00000000
> (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58
> 00000000
> (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097
> 00000097
> (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8
> 4bfd7f58
> (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097
> 00000000
> (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff
> b6efbca3
> (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8
> c007680c
> (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000
> 00000000
> (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193
> 00000000
> (XEN)    ffeffbfe fedeefff fffd5ffe
> (XEN) Xen call trace:
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> (XEN)    [<00251830>] return_from_trap+0/0x4
> (XEN)
>
> Also I am posting maintenance_interrupt() from my tree:
>
> static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> {
>     int i = 0, virq, pirq;
>     uint32_t lr;
>     struct vcpu *v = current;
>     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>
>     while ((i = find_next_bit((const long unsigned int *) &eisr,
>                               64, i)) < 64) {
>         struct pending_irq *p, *n;
>         int cpu, eoi;
>
>         cpu = -1;
>         eoi = 0;
>
>         spin_lock_irq(&gic.lock);
>         lr = GICH[GICH_LR + i];
>         virq = lr & GICH_LR_VIRTUAL_MASK;
>
>         p = irq_to_pending(v, virq);
>         if ( p->desc != NULL ) {
>             p->desc->status &= ~IRQ_INPROGRESS;
>             /* Assume only one pcpu needs to EOI the irq */
>             cpu = p->desc->arch.eoi_cpu;
>             eoi = 1;
>             pirq = p->desc->irq;
>         }
>         if ( !atomic_dec_and_test(&p->inflight_cnt) )
>         {
>             /* Physical IRQ can't be reinjected */
>             WARN_ON(p->desc != NULL);
>             gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>             spin_unlock_irq(&gic.lock);
>             i++;
>             continue;
>         }
>
>         GICH[GICH_LR + i] = 0;
>         clear_bit(i, &this_cpu(lr_mask));
>
>         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>             n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>             gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>             list_del_init(&n->lr_queue);
>             set_bit(i, &this_cpu(lr_mask));
>         } else {
>             gic_inject_irq_stop();
>         }
>         spin_unlock_irq(&gic.lock);
>
>         spin_lock_irq(&v->arch.vgic.lock);
>         list_del_init(&p->inflight);
>         spin_unlock_irq(&v->arch.vgic.lock);
>
>         if ( eoi ) {
>             /* this is not racy because we can't receive another irq of the
>              * same type until we EOI it.  */
>             if ( cpu == smp_processor_id() )
>                 gic_irq_eoi((void*)(uintptr_t)pirq);
>             else
>                 on_selected_cpus(cpumask_of(cpu),
>                                  gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>         }
>
>         i++;
>     }
> }
>
>
> Oleksandr Tyshchenko | Embedded Developer
> GlobalLogic
>

--bcaec51ddba924c02804f120666f--


--===============6665189090076775384==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6665189090076775384==--


From xen-devel-bounces@lists.xen.org Wed Jan 29 18:49:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8aCV-0005pf-C7; Wed, 29 Jan 2014 18:49:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8aCU-0005pa-79
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:49:26 +0000
Received: from [85.158.143.35:33398] by server-3.bemta-4.messagelabs.com id
	00/4B-11539-53D49E25; Wed, 29 Jan 2014 18:49:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391021362!1722582!1
X-Originating-IP: [209.85.212.42]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5668 invoked from network); 29 Jan 2014 18:49:23 -0000
Received: from mail-vb0-f42.google.com (HELO mail-vb0-f42.google.com)
	(209.85.212.42)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 18:49:23 -0000
Received: by mail-vb0-f42.google.com with SMTP id i3so1457475vbh.29
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=9Z7GwkLVcCQoFcZqvdfDPCVkdFxoxyesx05q/Uuf3NE=;
	b=BzyDo6UuZH+PfEWiMlaULf3lYd3Ta+802xwM9LwRsfEhleQ5OGdbVQTuvC8HJ1pfZa
	gjFOA9GqiCD4rbCySRGS99beVwZcCjpaSIBoSejhx733jSRLk58Pb5+cccdckNDvmoL0
	/k2CydJRk90D9hUdMrNMgTTUpoJ7M1fG6vE7hb+tUCuGwj7gZ3diFNWpVspYZWppwHjf
	RLEvrjt93qiehq5WdKwwTFyexcE6cpIuVho5ak3TjIzo5Z5l3W4wloIlES7KnC0BrGMO
	ftY9QJkz6a2hSxn5BO7Q5SCKxkUwSUqAqL7Dmg8bln2lheo5rCp4QOvOJa7597/9xeGM
	j9WA==
X-Gm-Message-State: ALoCoQmyP3u2xL8g1RFYfXcYN2nkuZVjwpge4kZN9hQ8kxhMyHf73vFq4p+nseV3S6Enkh604pQ7
MIME-Version: 1.0
X-Received: by 10.52.38.33 with SMTP id d1mr6588975vdk.4.1391021361767; Wed,
	29 Jan 2014 10:49:21 -0800 (PST)
Received: by 10.58.96.46 with HTTP; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
Received: by 10.58.96.46 with HTTP; Wed, 29 Jan 2014 10:49:21 -0800 (PST)
In-Reply-To: <CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
Date: Wed, 29 Jan 2014 18:49:21 +0000
Message-ID: <CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
From: Julien Grall <julien.grall@linaro.org>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6665189090076775384=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6665189090076775384==
Content-Type: multipart/alternative; boundary=bcaec51ddba924c02804f120666f

--bcaec51ddba924c02804f120666f
Content-Type: text/plain; charset=UTF-8

Hi,

It's weird, physical IRQ should not be injected twice ...
Were you able to print the IRQ number?

In any case, you are using the old version of the interrupt patch series.
Your new error may come of race condition in this code.

Can you try to use a newest version?
 On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <
oleksandr.tyshchenko@globallogic.com> wrote:

> > Right, that's why changing it to cpumask_of(0) shouldn't make any
> > difference for xen-unstable (it should make things clearer, if nothing
> > else) but it should fix things for Oleksandr.
>
> Unfortunately, it is not enough for stable work.
>
> I was tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0)
> in
> gic_route_irq_to_guest(). And as result, I don't see our situation
> which cause to deadlock in on_selected_cpus function (expected).
> But, hypervisor sometimes hangs somewhere else (I have not identified
> yet where this is happening) or I sometimes see traps, like that:
> ("WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them)
>
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> (XEN) CPU:    1
> (XEN) PC:     00242c1c __warn+0x20/0x28
> (XEN) CPSR:   200001da MODE:Hypervisor
> (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> (XEN)
> (XEN)   VTCR_EL2: 80002558
> (XEN)  VTTBR_EL2: 00020000dec6a000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd187f
> (XEN)    HCR_EL2: 00000000000028b5
> (XEN)  TTBR0_EL2: 00000000d2014000
> (XEN)
> (XEN)    ESR_EL2: 00000000
> (XEN)  HPFAR_EL2: 0000000000482110
> (XEN)      HDFAR: fa211190
> (XEN)      HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=4bfd7eb4:
> (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097
> 00000001
> (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019
> 00000000
> (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58
> 00000000
> (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097
> 00000097
> (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8
> 4bfd7f58
> (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097
> 00000000
> (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff
> b6efbca3
> (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8
> c007680c
> (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000
> 00000000
> (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193
> 00000000
> (XEN)    ffeffbfe fedeefff fffd5ffe
> (XEN) Xen call trace:
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> (XEN)    [<00251830>] return_from_trap+0/0x4
> (XEN)
>
> Also I am posting maintenance_interrupt() from my tree:
>
> static void maintenance_interrupt(int irq, void *dev_id, struct
> cpu_user_regs *regs)
> {
>     int i = 0, virq, pirq;
>     uint32_t lr;
>     struct vcpu *v = current;
>     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) <<
> 32);
>
>     while ((i = find_next_bit((const long unsigned int *) &eisr,
>                               64, i)) < 64) {
>         struct pending_irq *p, *n;
>         int cpu, eoi;
>
>         cpu = -1;
>         eoi = 0;
>
>         spin_lock_irq(&gic.lock);
>         lr = GICH[GICH_LR + i];
>         virq = lr & GICH_LR_VIRTUAL_MASK;
>
>         p = irq_to_pending(v, virq);
>         if ( p->desc != NULL ) {
>             p->desc->status &= ~IRQ_INPROGRESS;
>             /* Assume only one pcpu needs to EOI the irq */
>             cpu = p->desc->arch.eoi_cpu;
>             eoi = 1;
>             pirq = p->desc->irq;
>         }
>         if ( !atomic_dec_and_test(&p->inflight_cnt) )
>         {
>             /* Physical IRQ can't be reinject */
>             WARN_ON(p->desc != NULL);
>             gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>             spin_unlock_irq(&gic.lock);
>             i++;
>             continue;
>         }
>
>         GICH[GICH_LR + i] = 0;
>         clear_bit(i, &this_cpu(lr_mask));
>
>         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>             n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n),
> lr_queue);
>             gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>             list_del_init(&n->lr_queue);
>             set_bit(i, &this_cpu(lr_mask));
>         } else {
>             gic_inject_irq_stop();
>         }
>         spin_unlock_irq(&gic.lock);
>
>         spin_lock_irq(&v->arch.vgic.lock);
>         list_del_init(&p->inflight);
>         spin_unlock_irq(&v->arch.vgic.lock);
>
>         if ( eoi ) {
>             /* this is not racy because we can't receive another irq of the
>              * same type until we EOI it.  */
>             if ( cpu == smp_processor_id() )
>                 gic_irq_eoi((void*)(uintptr_t)pirq);
>             else
>                 on_selected_cpus(cpumask_of(cpu),
>                                  gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>         }
>
>         i++;
>     }
> }
>
>
> Oleksandr Tyshchenko | Embedded Developer
> GlobalLogic
>

--bcaec51ddba924c02804f120666f--


--===============6665189090076775384==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6665189090076775384==--


From xen-devel-bounces@lists.xen.org Wed Jan 29 18:56:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 18:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8aIo-0006Ll-KO; Wed, 29 Jan 2014 18:55:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8aIm-0006Lg-Hz
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 18:55:56 +0000
Received: from [85.158.139.211:30395] by server-7.bemta-5.messagelabs.com id
	1A/94-14867-BBE49E25; Wed, 29 Jan 2014 18:55:55 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391021752!453034!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19787 invoked from network); 29 Jan 2014 18:55:54 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 29 Jan 2014 18:55:54 -0000
Received: from mail-vb0-f43.google.com ([209.85.212.43]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUulOuH47RovJRF4AL9bwZ3HOAlCDeyIy@postini.com;
	Wed, 29 Jan 2014 10:55:54 PST
Received: by mail-vb0-f43.google.com with SMTP id p5so1395037vbn.2
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 10:55:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LkqzNZt21YThAsEFQkP/TVf4dYAnWqgvICwiHYBhyiA=;
	b=LiXdXr4vD6UzVlhjGtLuqd/DMCG8El9/dqWkIAJEmtPrnZpJN8OtWRO3BBn8fvGYMb
	KRAkGhqQpagJuQBwXzLLgAmQLolplzawsCupQowZxUjGzjlH9YP4X8CHKQJvLFdkshdh
	QEdws3g60Vc3RIyMDp89wi7nUgfebMhU+UO4M=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=LkqzNZt21YThAsEFQkP/TVf4dYAnWqgvICwiHYBhyiA=;
	b=F6IjzIMdWVY8XysFO8gNnWJMAPg343HrWXH+pxiEgRUFAFlGIV2CYkct27cGrSFuFP
	5/FNzivpmTKTt2n7bjPxlJ1i158gyyel5a0D+wadqeXqxqtjJi2MjCr5nWBDjfd5ANBR
	JeRuF+3WoD3tQMdT3x6qJM0GPDDjsXQ3mmO6rHxGof61bjUOdm2uzgNeiMH/+4oyn4p/
	KQMnED962UWowLFNxdfeX6f2hQU4l38kjMxOXqx+K3HPCuV6HBsD5K952rBlFQDUb3Ge
	b7HTToHoHCwqtcnwmjD4DHER2CTSpeiP7K6GAcNec9OjAPE9y1daCaiki2wfdhZBlntn
	FBXg==
X-Gm-Message-State: ALoCoQngz8LO1XnoBFOFuVBPhXZ6hLfWOnJwF5OBXBFzKWerLon+mkv+8/zFoLvYDUf22i8Yegd2TIxxSl4mJPg2+3iZC9CcIr5c4pYGjREYBQgo/uj9b/SHgoZF9Zys7InfDvrO3t/VJJCyp0HcLhbV+lA0WiT8awhYoxVUBIsnASzMbCPYDWg=
X-Received: by 10.58.208.130 with SMTP id me2mr8097579vec.13.1391021751943;
	Wed, 29 Jan 2014 10:55:51 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.58.208.130 with SMTP id me2mr8097561vec.13.1391021751766;
	Wed, 29 Jan 2014 10:55:51 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 10:55:51 -0800 (PST)
In-Reply-To: <52E8FE25.1050108@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<52E8FE25.1050108@linaro.org>
Date: Wed, 29 Jan 2014 20:55:51 +0200
Message-ID: <CAJEb2DH3HC7oUDpEi2zZpCsdVJowEWMWqrPgejFSkwGtwo5WsQ@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 3:12 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Hello Oleksandr,
>
> On 28/01/14 19:25, Oleksandr Tyshchenko wrote:
>
> [..]
>
>
>>
>>     Do you pass-through PPIs to dom0?
>>
>> If I understand correctly, PPIs are IRQs 16 to 31.
>> So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
>> routed to both CPUs.
>
>
> These IRQs are used by Xen, therefore they are emulated for dom0 and domU.
> Xen won't EOI these IRQs in maintenance_interrupt.
>
>
>>
>> And I have printed all IRQs that go through the gic_route_irq_to_guest
>> and gic_route_irq functions.
>> ...
>> (XEN) GIC initialization:
>> (XEN)         gic_dist_addr=0000000048211000
>> (XEN)         gic_cpu_addr=0000000048212000
>> (XEN)         gic_hyp_addr=0000000048214000
>> (XEN)         gic_vcpu_addr=0000000048216000
>> (XEN)         gic_maintenance_irq=25
>> (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
>> (XEN) Using scheduler: SMP Credit Scheduler (credit)
>> (XEN) Allocated console ring of 16 KiB.
>> (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
>> (XEN) Bringing up CPU1
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
>> (XEN)
>> (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
>> (XEN) CPU 1 booted.
>> (XEN) Brought up 2 CPUs
>> (XEN) *** LOADING DOMAIN 0 ***
>> (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
>> (XEN)
>> (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
>
>
> [..]
>
>
>> (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
>
>
> Not related to this patch series, but is it normal that you pass through
> the same interrupt to both dom0 and domU?
>
No, it isn't. These interrupts are not actually used in either domain,
but we need to do some cleanup. Thank you for your attention.
> There are a few other cases like that.
>
> --
> Julien Grall

-- 
Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 19:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 19:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8akI-0007cj-Un; Wed, 29 Jan 2014 19:24:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W8akH-0007cF-EU; Wed, 29 Jan 2014 19:24:21 +0000
Received: from [193.109.254.147:48562] by server-7.bemta-14.messagelabs.com id
	7B/B3-23424-46559E25; Wed, 29 Jan 2014 19:24:20 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391023457!701851!1
X-Originating-IP: [209.85.217.170]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14823 invoked from network); 29 Jan 2014 19:24:18 -0000
Received: from mail-lb0-f170.google.com (HELO mail-lb0-f170.google.com)
	(209.85.217.170)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 19:24:18 -0000
Received: by mail-lb0-f170.google.com with SMTP id u14so1843445lbd.29
	for <multiple recipients>; Wed, 29 Jan 2014 11:24:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=qkUf5Z9nqOptmoVdn8M9Ha3nW02XWTc1w9HiGCp+2Yc=;
	b=MjdbB48ZBhKYEcfrePxwm9OofDMU9ZiOtTdO3kd6TTzJShW8xGtSXn2yEQwI4h26z0
	ymKVUfINRuMlFKqNdT3J7khBHIm3XFuAQYvi85wNRRsNWPewCo8i6Rbr9LOHHjF1HLx3
	uX8RBgpbNCOe564+fSLq5DEFq8jQA2vdTLRe9WJpoDNwkk2jjnKbUqPJJyLY2sJfL+2u
	GZqRJaYGsAI8lYZnP/afzxteh8IMKabseMswWjGjUc4s7fpiRSQisETcbnDxYIqChJpU
	xPksWKfrcEH0/chnEoaDxroacPCUGofVH4xEe7Q7Aj7rY6S7hD+6t8EotkP/x0q4sjDf
	d2fg==
MIME-Version: 1.0
X-Received: by 10.112.154.202 with SMTP id vq10mr6258691lbb.3.1391023457531;
	Wed, 29 Jan 2014 11:24:17 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Wed, 29 Jan 2014 11:24:17 -0800 (PST)
Date: Wed, 29 Jan 2014 14:24:17 -0500
X-Google-Sender-Auth: vEYCqxoYv1b4IzElmMrYaeKyyiY
Message-ID: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-devel@lists.xen.org, 
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk, 
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>
Subject: [Xen-devel] REMINDER: Feb 3 is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Next Monday, February 3, is the Test Day for Xen 4.4 Release Candidate 3.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions

XEN 4.4 FEATURE DEVELOPERS:

If you have a new feature which is cooked and ready for testing in
RC3, we need to know about it and how to test it.  Please edit the
instructions page this week to reflect suggested configuration and
testing instructions.

EVERYONE:

Please join us on Monday, February 3, to flush out any potential bugs
before the next release.

Thank you!

Russ Pavlicek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 19:41:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 19:41:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8b0n-0000LZ-F4; Wed, 29 Jan 2014 19:41:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8b0l-0000LU-DG
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 19:41:23 +0000
Received: from [85.158.139.211:14551] by server-17.bemta-5.messagelabs.com id
	6B/63-31975-26959E25; Wed, 29 Jan 2014 19:41:22 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391024479!467450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21649 invoked from network); 29 Jan 2014 19:41:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 19:41:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,743,1384300800"; d="scan'208";a="97829242"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 29 Jan 2014 19:41:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 14:41:18 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8b0g-0003Gr-4z;
	Wed, 29 Jan 2014 19:41:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8b0e-0002Ci-Ew;
	Wed, 29 Jan 2014 19:41:17 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24598-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 19:41:16 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24598: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24598 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24598/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386  3 host-install(3)       broken REGR. vs. 24591

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
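[Editorial note: the deferral pattern this commit describes can be sketched as below. All names and fields are illustrative stand-ins, not the actual Xen toggle_guest_mode() code: when the guest kernel's area is not accessible (vCPU in user mode), the update is recorded as pending and replayed on the next switch to kernel mode instead of being dropped.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the deferred-update pattern (not Xen's code). */
struct vcpu_sketch {
    bool in_user_mode;      /* 64-bit PV: user-mode pagetables active */
    bool runstate_pending;  /* update deferred until kernel mode */
    int  runstate_writes;   /* stand-in for writes to the guest area */
};

static void update_runstate(struct vcpu_sketch *v)
{
    if (v->in_user_mode) {
        v->runstate_pending = true; /* kernel address not mapped: defer */
        return;
    }
    v->runstate_writes++;           /* safe: kernel pagetables in use */
}

static void toggle_guest_mode_sketch(struct vcpu_sketch *v)
{
    v->in_user_mode = !v->in_user_mode;
    if (!v->in_user_mode && v->runstate_pending) {
        v->runstate_pending = false;
        update_runstate(v);         /* replay the deferred update */
    }
}
```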

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and it may get X86EMUL_RETRY while handling that IO
    request. If a virtual vmexit is pending at the same time (for example,
    an interrupt waiting to be injected into L1), the hypervisor switches
    the VCPU context from L2 to L1. The hypervisor will retry the IO
    request later, but the retry then happens in L1's context, which is
    wrong. The fix: while an IO request is pending, no virtual
    vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
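[Editorial note: the rule the commit above adds can be reduced to a one-line guard. The enum and function below are hypothetical illustrations, not the actual Xen nested-VMX code: a virtual vmentry/vmexit is refused while an emulated IO request is in flight, so an X86EMUL_RETRY completes in L2's context rather than L1's.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical IO states; Xen tracks this differently internally. */
enum io_state { IO_NONE, IO_PENDING };

/* Block the L2<->L1 switch while an emulated IO request is pending. */
static bool nested_switch_allowed(enum io_state s)
{
    return s == IO_NONE;
}
```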

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For
    such accesses the shadow EPT table should point at the device's MMIO.
    But the current logic does not distinguish MMIO backed by qemu from
    MMIO backed by a physical device when building the shadow EPT table,
    which is wrong. This patch sets up the correct shadow EPT entries for
    such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1, inserting on the committed list,
    and T2, trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, re-reads the prev pointer from memory and compares it
       with the result of the cmpxchg, which succeeded; but in the
       meantime prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), because after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
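[Editorial note: the fixed head-exchange pattern can be sketched with C11 atomics as below. This is an illustration of the pattern, not the actual mctelem code: the old head is read once into a local variable, the new element is linked through that local, and a single compare-and-swap against the same local publishes the element, so the head value is never re-read from memory for the comparison.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct elem { struct elem *prev; int id; };

/* Push e onto a lock-free stack headed by *head (sketch, not Xen code). */
static void push(struct elem *_Atomic *head, struct elem *e)
{
    struct elem *old = atomic_load(head);
    do {
        /* Link before the CAS: once published, e may be consumed and
         * reinitialized immediately. Use the local 'old', never a
         * fresh read of *head, for both the link and the comparison. */
        e->prev = old;
    } while (!atomic_compare_exchange_weak(head, &old, e));
    /* On failure, atomic_compare_exchange_weak reloads 'old' for us. */
}
```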

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    The above session reproduces this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first read worked because the p2m lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
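[Editorial note: the shape of this fix is the classic get/put pairing. The sketch below uses hypothetical names (a counter standing in for the p2m lock), not the actual dbg_rw_guest_mem code: every successful get must be matched by a put on every exit path, including the error path, typically via a common exit label.]

```c
#include <assert.h>
#include <stdbool.h>

static int lock_depth;                   /* stand-in for the p2m lock */
static void get_gfn_sketch(void) { lock_depth++; }
static void put_gfn_sketch(void) { lock_depth--; }

/* The bug shape: returning early on error without put_gfn leaves the
 * lock held. The fix routes errors through a common exit that always
 * drops the reference. */
static int dbg_rw_sketch(bool fail)
{
    int ret = 0;

    get_gfn_sketch();
    if (fail) {
        ret = -1;
        goto out;                        /* error path still unwinds */
    }
    /* ... access the mapped guest page ... */
out:
    put_gfn_sketch();
    return ret;
}
```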
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 19:54:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 19:54:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8bDc-0000sR-5g; Wed, 29 Jan 2014 19:54:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8bDa-0000sM-VU
	for xen-devel@lists.xen.org; Wed, 29 Jan 2014 19:54:39 +0000
Received: from [85.158.139.211:26449] by server-3.bemta-5.messagelabs.com id
	67/F9-13671-E7C59E25; Wed, 29 Jan 2014 19:54:38 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391025274!459349!1
X-Originating-IP: [64.18.0.147]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13660 invoked from network); 29 Jan 2014 19:54:36 -0000
Received: from exprod5og116.obsmtp.com (HELO exprod5og116.obsmtp.com)
	(64.18.0.147)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 29 Jan 2014 19:54:36 -0000
Received: from mail-ve0-f182.google.com ([209.85.128.182]) (using TLSv1) by
	exprod5ob116.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUulcep3LHS/98O9euwVlsDFg0aPPmYsj@postini.com;
	Wed, 29 Jan 2014 11:54:36 PST
Received: by mail-ve0-f182.google.com with SMTP id jy13so1509856veb.41
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 11:54:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=k1syVKjsN8LqHER98uMr/JIcs7aXWGlI+4Y5O0mueI0=;
	b=KqPVtTJz4xi0dggzNbv032xWqJ2hTE2yRquBesOwzQxevGAlLdM/oDOooVKgmvEW7e
	o2YHT+Azhk6HAmUJ7A3rHuQNA/d1y+HkQRl+b8nc9McnzjYg5ZuZt/pSReBQnjF3/r6T
	BfytgGIOH8/YL1Ny8Je8Z7KbqnNCW5y1o8sLc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=k1syVKjsN8LqHER98uMr/JIcs7aXWGlI+4Y5O0mueI0=;
	b=Qu4sVpLh2lAVoxTuVLaC1L6jFth3RCrQwCFNZDr5xCceAj+pGgckrRbGgEkIuoeZqK
	ShEz7A/qZeQm4KCnWCD8aCFS04mtoy/i1Uvvubl1YufxePU2uzV6IsmjqUfeQn6o38Ii
	s6fnoWe2weYzYvKtlvX9mqWrDs5qplyjVRvFpLNnftZbiw9K0ux5HZn14Yb/Hga42epg
	rsso1y2VMhlkYKNd9mbKHmWCIeHll2hxqm/Ykqxock7h4/O3IN7sgkVu2ThALHsce6Cq
	/Tss12tNrtb8yaTKenoT5GGHibwggVEZ7UhRUqd8tP5G35f8RUJ48BjFkWYr6cXv64aJ
	+pZg==
X-Gm-Message-State: ALoCoQmmbZWyaE8TYy29j8AZPgE04i0gVmeUJk5pmuzsZxxDDNXAuACERwRLC6kXNzTv5Wct1ct89NKCS3af1GdvurlMlGn1bRNH4ob3jpsui20+CM4C1tzE3wt1MS1QzWojaTIxI3MJRqo7rlT/khhxvupbM5M1p8D6UNqrEAJ4Qau4E5frtfg=
X-Received: by 10.52.163.132 with SMTP id yi4mr2231762vdb.30.1391025273666;
	Wed, 29 Jan 2014 11:54:33 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.163.132 with SMTP id yi4mr2231744vdb.30.1391025273552;
	Wed, 29 Jan 2014 11:54:33 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Wed, 29 Jan 2014 11:54:33 -0800 (PST)
In-Reply-To: <CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
Date: Wed, 29 Jan 2014 21:54:33 +0200
Message-ID: <CAJEb2DGUfA8qHMBCKSaguA8D1yH+wS4rHaJmHb3+5SUU9Y6wkw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 8:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Hi,
>
> It's weird, physical IRQ should not be injected twice ...
> Were you able to print the IRQ number?

p->irq: 151, p->desc->irq: 151 (it is touchscreen irq)

>
> In any case, you are using the old version of the interrupt patch series.
> Your new error may come from a race condition in this code.
>
> Can you try to use the newest version?

Yes, I can, but not today or tomorrow; I need time to apply our local
changes, which our system needs in order to work, to the newest version.
And do you mean trying the newest version of Xen as a whole, or only the
parts of the code related to interrupt handling?

>
> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko"
> <oleksandr.tyshchenko@globallogic.com> wrote:
>>
>> > Right, that's why changing it to cpumask_of(0) shouldn't make any
>> > difference for xen-unstable (it should make things clearer, if nothing
>> > else) but it should fix things for Oleksandr.
>>
>> Unfortunately, it is not enough for stable operation.
>>
>> I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0)
>> in gic_route_irq_to_guest(). As a result, I no longer see the situation
>> which leads to the deadlock in the on_selected_cpus function (as
>> expected). But the hypervisor sometimes hangs somewhere else (I have not
>> yet identified where this is happening), or I sometimes see traps like
>> the following (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt()
>> leads to them):
>>
>> (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> (XEN) CPU:    1
>> (XEN) PC:     00242c1c __warn+0x20/0x28
>> (XEN) CPSR:   200001da MODE:Hypervisor
>> (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>> (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>> (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc
>> R12:00000002
>> (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>> (XEN)
>> (XEN)   VTCR_EL2: 80002558
>> (XEN)  VTTBR_EL2: 00020000dec6a000
>> (XEN)
>> (XEN)  SCTLR_EL2: 30cd187f
>> (XEN)    HCR_EL2: 00000000000028b5
>> (XEN)  TTBR0_EL2: 00000000d2014000
>> (XEN)
>> (XEN)    ESR_EL2: 00000000
>> (XEN)  HPFAR_EL2: 0000000000482110
>> (XEN)      HDFAR: fa211190
>> (XEN)      HIFAR: 00000000
>> (XEN)
>> (XEN) Xen stack trace from sp=4bfd7eb4:
>> (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097
>> 00000001
>> (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019
>> 00000000
>> (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58
>> 00000000
>> (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097
>> 00000097
>> (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8
>> 4bfd7f58
>> (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097
>> 00000000
>> (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff
>> b6efbca3
>> (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8
>> c007680c
>> (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000
>> 00000000
>> (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193
>> 00000000
>> (XEN)    ffeffbfe fedeefff fffd5ffe
>> (XEN) Xen call trace:
>> (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>> (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>> (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>> (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>> (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>> (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>> (XEN)    [<00251830>] return_from_trap+0/0x4
>> (XEN)
>>
>> I am also posting maintenance_interrupt() from my tree:
>>
>> static void maintenance_interrupt(int irq, void *dev_id,
>>                                   struct cpu_user_regs *regs)
>> {
>>     int i = 0, virq, pirq;
>>     uint32_t lr;
>>     struct vcpu *v = current;
>>     uint64_t eisr = GICH[GICH_EISR0] |
>>                     (((uint64_t) GICH[GICH_EISR1]) << 32);
>>
>>     while ((i = find_next_bit((const long unsigned int *) &eisr,
>>                               64, i)) < 64) {
>>         struct pending_irq *p, *n;
>>         int cpu, eoi;
>>
>>         cpu = -1;
>>         eoi = 0;
>>
>>         spin_lock_irq(&gic.lock);
>>         lr = GICH[GICH_LR + i];
>>         virq = lr & GICH_LR_VIRTUAL_MASK;
>>
>>         p = irq_to_pending(v, virq);
>>         if ( p->desc != NULL ) {
>>             p->desc->status &= ~IRQ_INPROGRESS;
>>             /* Assume only one pcpu needs to EOI the irq */
>>             cpu = p->desc->arch.eoi_cpu;
>>             eoi = 1;
>>             pirq = p->desc->irq;
>>         }
>>         if ( !atomic_dec_and_test(&p->inflight_cnt) )
>>         {
>>             /* A physical IRQ can't be reinjected */
>>             WARN_ON(p->desc != NULL);
>>             gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>             spin_unlock_irq(&gic.lock);
>>             i++;
>>             continue;
>>         }
>>
>>         GICH[GICH_LR + i] = 0;
>>         clear_bit(i, &this_cpu(lr_mask));
>>
>>         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>>             n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n),
>>                            lr_queue);
>>             gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>>             list_del_init(&n->lr_queue);
>>             set_bit(i, &this_cpu(lr_mask));
>>         } else {
>>             gic_inject_irq_stop();
>>         }
>>         spin_unlock_irq(&gic.lock);
>>
>>         spin_lock_irq(&v->arch.vgic.lock);
>>         list_del_init(&p->inflight);
>>         spin_unlock_irq(&v->arch.vgic.lock);
>>
>>         if ( eoi ) {
>>             /* This is not racy because we can't receive another irq
>>              * of the same type until we EOI it. */
>>             if ( cpu == smp_processor_id() )
>>                 gic_irq_eoi((void*)(uintptr_t)pirq);
>>             else
>>                 on_selected_cpus(cpumask_of(cpu),
>>                                  gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>>         }
>>
>>         i++;
>>     }
>> }
>>
>>
>> Oleksandr Tyshchenko | Embedded Developer
>> GlobalLogic

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 21:10:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 21:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8cOG-0003os-1d; Wed, 29 Jan 2014 21:09:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8cOE-0003ol-9Y
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 21:09:42 +0000
Received: from [193.109.254.147:59389] by server-3.bemta-14.messagelabs.com id
	49/99-00432-51E69E25; Wed, 29 Jan 2014 21:09:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391029775!711616!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1916 invoked from network); 29 Jan 2014 21:09:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 21:09:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,744,1384300800"; d="scan'208";a="95879256"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 21:09:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 16:09:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8cO5-0003jI-3T;
	Wed, 29 Jan 2014 21:09:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8cO4-00062Z-Vw;
	Wed, 29 Jan 2014 21:09:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24604-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 21:09:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24604: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24604 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24604/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              4 kernel-build              fail REGR. vs. 24397
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 24387

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 24387
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24397

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 21:10:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 21:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8cOG-0003os-1d; Wed, 29 Jan 2014 21:09:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8cOE-0003ol-9Y
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 21:09:42 +0000
Received: from [193.109.254.147:59389] by server-3.bemta-14.messagelabs.com id
	49/99-00432-51E69E25; Wed, 29 Jan 2014 21:09:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391029775!711616!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1916 invoked from network); 29 Jan 2014 21:09:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 21:09:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,744,1384300800"; d="scan'208";a="95879256"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 21:09:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 16:09:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8cO5-0003jI-3T;
	Wed, 29 Jan 2014 21:09:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8cO4-00062Z-Vw;
	Wed, 29 Jan 2014 21:09:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24604-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 21:09:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24604: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24604 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24604/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              4 kernel-build              fail REGR. vs. 24397
 test-amd64-amd64-xl-win7-amd64  9 guest-localmigrate      fail REGR. vs. 24387

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 24387
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24397

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 21:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 21:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8d4L-0005Fp-Sr; Wed, 29 Jan 2014 21:53:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8d4J-0005Fk-VO
	for xen-devel@lists.xensource.com; Wed, 29 Jan 2014 21:53:12 +0000
Received: from [85.158.139.211:47890] by server-2.bemta-5.messagelabs.com id
	9C/E6-23037-74879E25; Wed, 29 Jan 2014 21:53:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391032383!475752!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20059 invoked from network); 29 Jan 2014 21:53:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 21:53:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,744,1384300800"; d="scan'208";a="95900312"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 29 Jan 2014 21:52:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 16:52:58 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8d45-0003xr-SF;
	Wed, 29 Jan 2014 21:52:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8d44-0002Ba-67;
	Wed, 29 Jan 2014 21:52:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24605-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 29 Jan 2014 21:52:56 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24605: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24605 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24605/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 24591
 build-amd64                   3 host-build-prep           fail REGR. vs. 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
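
    [Editor's note: a minimal, self-contained sketch of the deferral
    pattern this commit describes. When the registered guest-kernel
    address is not mapped (vCPU in user mode), the update is recorded
    as pending and flushed on the next switch to kernel mode. All
    names below are illustrative, not Xen's actual API.]

    ```c
    #include <stdbool.h>

    struct vcpu_sketch {
        bool in_kernel_mode;       /* user vs kernel page tables */
        bool update_pending;       /* update owed but not yet written */
        unsigned long runstate;    /* stand-in for the runstate data */
        unsigned long guest_area;  /* stand-in for the registered area */
    };

    void update_runstate(struct vcpu_sketch *v, unsigned long val)
    {
        v->runstate = val;
        if (!v->in_kernel_mode) {
            /* kernel address not accessible now: defer */
            v->update_pending = true;
            return;
        }
        v->guest_area = val;       /* directly writable: copy out */
    }

    void toggle_guest_mode_sketch(struct vcpu_sketch *v)
    {
        v->in_kernel_mode = !v->in_kernel_mode;
        if (v->in_kernel_mode && v->update_pending) {
            v->guest_area = v->runstate;  /* flush deferred update */
            v->update_pending = false;
        }
    }
    ```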

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode L2's instruction to handle an IO access
    directly, and may get X86EMUL_RETRY while handling the IO request. If a
    virtual vmexit is pending at the same time (for example, an interrupt to
    inject into L1), the hypervisor switches the VCPU context from L2 to L1.
    X86EMUL_RETRY means the hypervisor will retry the IO request later, but
    that retry now unfortunately happens in L1's context, which causes the
    problem. The fix: while an IO request is pending, no virtual
    vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point at the device's MMIO. But in
    the current logic, when building the shadow EPT table, L0 does not
    distinguish MMIO backed by qemu from MMIO backed by a physical device.
    This is wrong. This patch sets up the correct shadow EPT entries for such
    MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer from memory again and compares
       it with the result of the cmpxchg, which succeeded; but in the
       meantime prev has changed in memory.
    5. T1 concludes the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
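
    [Editor's note: a minimal sketch of the fixed pattern, using C11
    atomics rather than Xen's cmpxchgptr(). The key point matches the
    commit: the expected old head lives in a local variable, and on a
    failed compare-exchange that local is refreshed atomically, so a
    concurrent consumer changing memory cannot make a successful
    exchange look like a failure. Names are illustrative.]

    ```c
    #include <stdatomic.h>
    #include <stddef.h>

    struct elem {
        struct elem *next;
    };

    /* Push e onto the list at *headp, lock-free. */
    void xchg_push(_Atomic(struct elem *) *headp, struct elem *e)
    {
        /* Keep the expected value in a local: never re-read prev from
         * memory to judge whether the cmpxchg succeeded. */
        struct elem *oldhead = atomic_load(headp);
        do {
            e->next = oldhead;  /* link before the compare-exchange */
            /* On failure, oldhead is reloaded from *headp for us. */
        } while (!atomic_compare_exchange_weak(headp, &oldhead, e));
    }
    ```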

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)
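
[Editor's note: the dbg_rw_guest_mem fix above is an instance of a
common bug class: an early error return that skips the put_gfn()
paired with get_gfn(), leaving a reference (and the recursive p2m
lock) held. A minimal sketch of the balanced-exit pattern, with a
counter standing in for the real reference; all names here are
hypothetical, not Xen's API.]

```c
/* Stand-in for the reference/lock taken by get_gfn(). */
int gfn_refs;

void get_gfn_sketch(void) { gfn_refs++; }
void put_gfn_sketch(void) { gfn_refs--; }

/* Returns 0 on success, -1 on error; the reference is dropped on
 * every path, which is what the error path previously failed to do. */
int dbg_rw_sketch(int mfn_valid)
{
    int ret = 0;

    get_gfn_sketch();
    if (!mfn_valid) {
        ret = -1;           /* error path must still reach cleanup */
        goto out;
    }
    /* ... access the guest page ... */
out:
    put_gfn_sketch();       /* balanced on success and error alike */
    return ret;
}
```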

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
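
The deferral described in the commit above can be sketched as follows. This is a minimal illustration of the pattern only, with hypothetical names (`vcpu_sketch`, `update_runstate_area`); it is not Xen's actual code, which writes through real guest kernel mappings:

```c
/* Sketch: defer guest-visible state updates until the vCPU is back in
 * kernel mode, where the guest kernel page tables (and hence the
 * registered kernel-address area) are accessible.
 * All names here are illustrative, not Xen's internals. */
struct vcpu_sketch {
    int in_kernel_mode;              /* 64-bit PV: kernel vs user page tables */
    int runstate_update_pending;     /* set when an update could not land */
};

/* Called whenever an update is due; defers if user page tables are live. */
static void update_runstate_area(struct vcpu_sketch *v)
{
    if (!v->in_kernel_mode) {
        v->runstate_update_pending = 1;  /* kernel address not reachable now */
        return;
    }
    /* ... write the runstate/time area through the kernel mapping ... */
    v->runstate_update_pending = 0;
}

/* Called from the user->kernel transition (cf. toggle_guest_mode()). */
static void toggle_to_kernel(struct vcpu_sketch *v)
{
    v->in_kernel_mode = 1;
    if (v->runstate_update_pending)
        update_runstate_area(v);         /* flush the deferred update */
}
```

The key point is that the update is never dropped: it is flagged and replayed on the next mode switch.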

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and may get X86EMUL_RETRY while handling that IO
    request. If a virtual vmexit is pending at the same time (for example,
    an interrupt waiting to be injected into L1), the hypervisor switches
    the vCPU context from L2 to L1. The X86EMUL_RETRY means the hypervisor
    will retry the IO request later, but that retry now happens in L1's
    context, which causes the problem. The fix: while an IO request is
    pending, no virtual vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For
    such accesses the shadow EPT table should point at the device's MMIO.
    But in the current logic, L0 does not distinguish MMIO backed by qemu
    from MMIO backed by a physical device when building the shadow EPT
    table, which is wrong. This patch sets up the correct shadow EPT
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A, and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg; the cmpxchg had succeeded, but in the
       meantime prev changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
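
The race and the fix described in the commit above can be sketched with C11 atomics. This is an illustrative reimplementation of the pattern under simplified types, not the code in Xen's mctelem.c:

```c
#include <stdatomic.h>
#include <stddef.h>

struct mctelem_ent {
    struct mctelem_ent *mcte_prev;   /* link to the previous list head */
};

/* Fixed pattern: snapshot the observed head in a local ('old') and let
 * the cmpxchg result alone decide success.  The racy version re-read
 * the prev pointer after the cmpxchg, so a successful exchange could be
 * mistaken for a failure once another CPU consumed the element and
 * moved the head (steps 2-5 above). */
static void mctelem_xchg_head(struct mctelem_ent *_Atomic *headp,
                              struct mctelem_ent **linkp,
                              struct mctelem_ent *new_ent)
{
    for (;;) {
        struct mctelem_ent *old = atomic_load(headp);
        /* Publish the link BEFORE the cmpxchg: right after a successful
         * exchange the element may be consumed and reinitialized. */
        *linkp = old;
        if (atomic_compare_exchange_strong(headp, &old, new_ent))
            break;   /* decided by the snapshot, not by a re-read */
    }
}
```

Note also that `atomic_compare_exchange_strong` is a full barrier here, which is why the preceding wmb() could be dropped.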

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 23:07:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 23:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8eDz-0007qL-RU; Wed, 29 Jan 2014 23:07:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tony.luck@gmail.com>) id 1W8eDy-0007qG-M9
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 23:07:14 +0000
Received: from [85.158.139.211:59449] by server-17.bemta-5.messagelabs.com id
	1F/5A-31975-1A989E25; Wed, 29 Jan 2014 23:07:13 +0000
X-Env-Sender: tony.luck@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391036831!488523!1
X-Originating-IP: [209.85.128.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13870 invoked from network); 29 Jan 2014 23:07:12 -0000
Received: from mail-ve0-f173.google.com (HELO mail-ve0-f173.google.com)
	(209.85.128.173)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 23:07:12 -0000
Received: by mail-ve0-f173.google.com with SMTP id oz11so1694647veb.32
	for <xen-devel@lists.xenproject.org>;
	Wed, 29 Jan 2014 15:07:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=xaapgCkj/i51BmQEt6Mbh2lg8pVroldK7Cs9iukpbD0=;
	b=XHtZg1Q+1AzXdkCmCWhPMtJNiD32d7EpJLKbE1emD9srMk0XG1NA8YxkFrYZ9/WOun
	u+SlWZywoe8fMy+3TnjkwRJGyKYgg756tollChg4kSxqzjvQE4gwts8HRwxYjim5xDdJ
	h/f/98umTqvhG6+8MNmLv/GdwsbSPcvWKF/5MbJ44qIx1fprS5IBwQ3rQMdHUvGFifue
	y90nTNi1PRIo7a1y4YMmmZTJPQCJsHCh3bBkscGlwryNvKy00c27MGJ4pcMZoLN/n4ho
	fb1ac3kerpO3dYnI9Uv6KiQjHv5LR6hYTKcrIhVEB0lhrVg7JCmoFoXE67mk3yZ5yDe1
	4oXA==
MIME-Version: 1.0
X-Received: by 10.52.76.105 with SMTP id j9mr58930vdw.52.1391036831550; Wed,
	29 Jan 2014 15:07:11 -0800 (PST)
Received: by 10.59.7.9 with HTTP; Wed, 29 Jan 2014 15:07:11 -0800 (PST)
In-Reply-To: <20140129015048.GA14629@konrad-lan.dumpdata.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
Date: Wed, 29 Jan 2014 15:07:11 -0800
Message-ID: <CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
From: Tony Luck <tony.luck@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Ingo Molnar <mingo@elte.hu>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	david.vrabel@citrix.com, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Yinghai Lu <yinghai@kernel.org>, Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hmmph.  ia64 is broken too.  git bisect says:

commit ad6492b80f60a2139fa9bf8fd79b182fe5e3647c
Author: Yinghai Lu <yinghai@kernel.org>
Date:   Mon Jan 27 17:06:49 2014 -0800

    memblock, nobootmem: add memblock_virt_alloc_low()

is to blame.  But this patch doesn't fix it.  Still dies with:

PID hash table entries: 4096 (order: -1, 32768 bytes)
Sorting __ex_table...
kernel BUG at mm/bootmem.c:504!
swapper[0]: bugcheck! 0 [1]
Modules linked in:

CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0-bisect-09219-g5e345db #10
task: a000000101040000 ti: a000000101040f10 task.ti: a000000101040f10
psr : 00001010084a2018 ifs : 8000000000000797 ip  :
[<a000000100eea530>]    Not tainted (3.13.0-bisect-09219-g5e345db)
ip is at alloc_bootmem_bdata+0x230/0x790
unat: 0000000000000000 pfs : 0000000000000797 rsc : 0000000000000003
rnat: a000000100dff0ee bsps: 0000000000000000 pr  : 96669601598a95a9
ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70433f
csd : 0930ffff00063000 ssd : 0930ffff00063000
b0  : a000000100eea530 b6  : a0000001005f4040 b7  : a0000001005f3e00
f6  : 000000000000000000000 f7  : 1003e0000000000000000
f8  : 1003e0044b82fa09b5a53 f9  : 1003e0000000000001680
f10 : 1003e0000000000000008 f11 : 1003e00000000000002d0
r1  : a0000001017ce170 r2  : a00000010158e0a0 r3  : a0000001015ced80
r8  : 000000000000001f r9  : 0000000000000664 r10 : 0000000000000000
r11 : 0000000000000000 r12 : a00000010104fe20 r13 : a000000101040000
r14 : a00000010158e0a8 r15 : a00000010158e0a8 r16 : 0000000006640332
r17 : 0000000000000000 r18 : 0000000000007fff r19 : 0000000000000000
r20 : a000000101727e98 r21 : 0000000000000058 r22 : a000000101616c98
r23 : a000000101616c00 r24 : a000000101616c00 r25 : 00000000000003f8
r26 : a000000100e901a8 r27 : a000000100f59480 r28 : 0000000000000058
r29 : 0000000000000057 r30 : 00000000000016d0 r31 : 0000000000000030
Unable to handle kernel NULL pointer dereference (address 0000000000000000)
swapper[0]: Oops 11012296146944 [2]

-Tony

On Tue, Jan 28, 2014 at 5:50 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Jan 28, 2014 at 02:47:57PM -0800, Yinghai Lu wrote:
>> On Tue, Jan 28, 2014 at 2:08 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>> > On 01/28/2014 02:04 PM, Yinghai Lu wrote:
>> >> In original bootmem wrapper for memblock, we have limit checking.
>> >>
>> >> Add it to memblock_virt_alloc, to address arm and x86 booting crash.
>> >>
>> >> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
>> >
>> > Do you have a git tree or cumulative set of patches that you'd like us
>> > to all test?  I'm happy to boot it on my system, I just want to make
>> > sure I've got the same set that you're testing.
>>
>> This one should only affect arm and the x86 32-bit kernel with more
>> than 4G RAM.
>
> Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>
> It fixes the issue I saw with Xen and 32-bit dom0 blowing up.
>
>>
>> thanks
>>
>> Yinghai
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
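
The limit check being discussed can be sketched as below. This is an illustrative simplification with hypothetical names, not the actual mm/memblock.c patch: the idea is simply that the upper bound of a boot-time allocation must be clamped to the current memblock limit, as the old bootmem wrapper did:

```c
/* Sketch: clamp the requested upper bound of a boot-time allocation
 * to the current memblock limit.  Names/types are illustrative. */
typedef unsigned long long phys_addr_t;

#define ALLOC_ANYWHERE ((phys_addr_t)~0ULL)

static phys_addr_t memblock_current_limit = ALLOC_ANYWHERE;

static phys_addr_t clamp_to_limit(phys_addr_t max_addr)
{
    /* 0 is treated as "no explicit upper bound requested" */
    if (max_addr == 0 || max_addr > memblock_current_limit)
        return memblock_current_limit;
    return max_addr;
}
```

Without such a clamp, a 32-bit kernel with more than 4G RAM can be handed boot memory above what it can address, matching the crashes reported in this thread.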

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 23:40:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 23:40:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ejS-0000rZ-1A; Wed, 29 Jan 2014 23:39:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yhlu.kernel@gmail.com>) id 1W8ejQ-0000rU-Qq
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 23:39:45 +0000
Received: from [85.158.143.35:42334] by server-2.bemta-4.messagelabs.com id
	F4/2E-10891-04199E25; Wed, 29 Jan 2014 23:39:44 +0000
X-Env-Sender: yhlu.kernel@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391038782!1755475!1
X-Originating-IP: [209.85.213.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28734 invoked from network); 29 Jan 2014 23:39:43 -0000
Received: from mail-ig0-f172.google.com (HELO mail-ig0-f172.google.com)
	(209.85.213.172)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 23:39:43 -0000
Received: by mail-ig0-f172.google.com with SMTP id k19so16350791igc.5
	for <xen-devel@lists.xenproject.org>;
	Wed, 29 Jan 2014 15:39:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=Q0caeMniVrCyVm86VZMaouSYPKueQ118tr4HYJe7s+g=;
	b=OFrdLR5kbMfPVoSwXTEq2NvUu+ItYNAslmP6yRLklzkKHhsnsv6GtE7dVQ7iiI1Eu+
	b53gtISdeXIBBDw6Wk9wRl/+nrBcDi+1nmrweX9KruFTWKIsEZ59BGP7W7/euGWTqHxz
	QhzFiWAuv7RcS3od/7Ipv9W8gK5xEcwXD6VmfQVW7fJZiOg4sX8Sxd/mr323NwpiNC9g
	MetBo+GeBAlpzSk/e875iTZ3sRiy/PIxi9Yj3zV3LG+z7r4KAh5FHlp0Zf8tmuBl1ibV
	/JhKoV8qSik8oUNHr5NP7SrMg0HU3g0V8+IbgBzAVCg6MOavjWHujpxWwFkOVLETwgt9
	JPGA==
MIME-Version: 1.0
X-Received: by 10.43.45.138 with SMTP id uk10mr2965294icb.75.1391038782058;
	Wed, 29 Jan 2014 15:39:42 -0800 (PST)
Received: by 10.64.235.70 with HTTP; Wed, 29 Jan 2014 15:39:41 -0800 (PST)
In-Reply-To: <CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
	<CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
Date: Wed, 29 Jan 2014 15:39:41 -0800
X-Google-Sender-Auth: RreNVxhwTAfQRuE-fHHiHcnSw6g
Message-ID: <CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
From: Yinghai Lu <yinghai@kernel.org>
To: Tony Luck <tony.luck@gmail.com>
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ingo Molnar <mingo@elte.hu>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 3:07 PM, Tony Luck <tony.luck@gmail.com> wrote:
> Hmmph.  ia64 is broken too.  git bisect says:
>
> commit ad6492b80f60a2139fa9bf8fd79b182fe5e3647c
> Author: Yinghai Lu <yinghai@kernel.org>
> Date:   Mon Jan 27 17:06:49 2014 -0800
>
>     memblock, nobootmem: add memblock_virt_alloc_low()
>
> is to blame.  But this patch doesn't fix it.  Still dies with:
>
> PID hash table entries: 4096 (order: -1, 32768 bytes)
> Sorting __ex_table...
> kernel BUG at mm/bootmem.c:504!

That's a different code path: the memblock_virt wrapper for bootmem.

Let me check it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Jan 29 23:53:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jan 2014 23:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ewX-0001Ka-Hs; Wed, 29 Jan 2014 23:53:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yhlu.kernel@gmail.com>) id 1W8ewV-0001KV-AE
	for xen-devel@lists.xenproject.org; Wed, 29 Jan 2014 23:53:15 +0000
Received: from [85.158.143.35:25732] by server-3.bemta-4.messagelabs.com id
	D1/66-11539-A6499E25; Wed, 29 Jan 2014 23:53:14 +0000
X-Env-Sender: yhlu.kernel@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391039592!1768003!1
X-Originating-IP: [209.85.213.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32619 invoked from network); 29 Jan 2014 23:53:13 -0000
Received: from mail-ig0-f172.google.com (HELO mail-ig0-f172.google.com)
	(209.85.213.172)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	29 Jan 2014 23:53:13 -0000
Received: by mail-ig0-f172.google.com with SMTP id k19so16373362igc.5
	for <xen-devel@lists.xenproject.org>;
	Wed, 29 Jan 2014 15:53:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=f14xzwNWouxLfFQYpGH3hMpWgkYddt/48OpVmrtuwqM=;
	b=HSixufLxbE3SQNciJs/EEUXD7GsTJn0u9MjEa2bYAwVgWCE/vROVJfAvqaU/7r76u9
	fBJ9eXrIkUZlavptVDPRzEf+ob7NS0jB8+MBC+yV7IiCAI39lcNvIkadUBqM1nCsu4Pi
	FSXTZxLpxS+pMamnEiqMAeHdY8NfAUC181IqxjWUA3DYs1Yj+9iUVN0XBGSk51xxGNwB
	fUlOCeiPCXHHVVbyXzGNTvh4KzEtmyDLYrE/PfNPXNKf9Ei6Faxff2T89kBsLLRhNExc
	aOVndmzq+HlyirT7cdqHPGpohUdqvNg/I0+VyVzzKhaSSGQndbDlR7PvcpmdJ+PVhnl8
	4XQg==
MIME-Version: 1.0
X-Received: by 10.43.60.139 with SMTP id ws11mr8575622icb.12.1391039592229;
	Wed, 29 Jan 2014 15:53:12 -0800 (PST)
Received: by 10.64.235.70 with HTTP; Wed, 29 Jan 2014 15:53:11 -0800 (PST)
In-Reply-To: <CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
	<CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
	<CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
Date: Wed, 29 Jan 2014 15:53:11 -0800
X-Google-Sender-Auth: 4rmk2XMTgL7QPPMxeBqR59G2EdE
Message-ID: <CAE9FiQX6CiumfOi2cgU1ODrKuaTHCZxo3zNmW01kn+GT1CAT3w@mail.gmail.com>
From: Yinghai Lu <yinghai@kernel.org>
To: Tony Luck <tony.luck@gmail.com>
Content-Type: multipart/mixed; boundary=bcaec51a8946c3945d04f124a44f
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ingo Molnar <mingo@elte.hu>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--bcaec51a8946c3945d04f124a44f
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Jan 29, 2014 at 3:39 PM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Wed, Jan 29, 2014 at 3:07 PM, Tony Luck <tony.luck@gmail.com> wrote:
>> Hmmph.  ia64 is broken too.  git bisect says:
>>
>> commit ad6492b80f60a2139fa9bf8fd79b182fe5e3647c
>> Author: Yinghai Lu <yinghai@kernel.org>
>> Date:   Mon Jan 27 17:06:49 2014 -0800
>>
>>     memblock, nobootmem: add memblock_virt_alloc_low()
>>
>> is to blame.  But this patch doesn't fix it.  Still dies with:
>>
>> PID hash table entries: 4096 (order: -1, 32768 bytes)
>> Sorting __ex_table...
>> kernel BUG at mm/bootmem.c:504!
>
> That's a different code path: the memblock_virt wrapper for bootmem.

Please check attached patch.

Thanks

Yinghai

--bcaec51a8946c3945d04f124a44f
Content-Type: text/x-patch; charset=US-ASCII; name="fix_memblock_virt_alloc_ia64.patch"
Content-Disposition: attachment; 
	filename="fix_memblock_virt_alloc_ia64.patch"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr193iau0

U3ViamVjdDogW1BBVENIXSBtZW1ibG9jaywgYm9vdG1lbTogUmVzdG9yZSBnb2FsIGZvciBhbGxv
Y19sb3cKCk5vdyB3ZSBoYXZlIG1lbWJsb2NrX3ZpcnRfYWxsb2NfbG93IHRvIHJlcGxhY2Ugb3Jp
Z2luYWwgYm9vdG1lbSBhcGkKaW4gc3dpb3RsYi4KCkJ1dCB3ZSBzaG91bGQgbm90IHVzZSBCT09U
TUVNX0xPV19MSU1JVCBmb3IgYXJjaCB0aGF0IGRvZXMgbm90IHN1cHBvcnQKQ09ORklHX05PQk9P
VE1FTSwgYXMgb2xkIGFwaSB0YWtlIDAuCgp8ICNkZWZpbmUgYWxsb2NfYm9vdG1lbV9sb3coeCkg
XAp8ICAgICAgICBfX2FsbG9jX2Jvb3RtZW1fbG93KHgsIFNNUF9DQUNIRV9CWVRFUywgMCkKfCNk
ZWZpbmUgYWxsb2NfYm9vdG1lbV9sb3dfcGFnZXNfbm9wYW5pYyh4KSBcCnwgICAgICAgIF9fYWxs
b2NfYm9vdG1lbV9sb3dfbm9wYW5pYyh4LCBQQUdFX1NJWkUsIDApCgphbmQgd2UgaGF2ZQogI2Rl
ZmluZSBCT09UTUVNX0xPV19MSU1JVCBfX3BhKE1BWF9ETUFfQUREUkVTUykKZm9yIENPTkZJR19O
T0JPT1RNRU0uCgpSZXN0b3JlIGdvYWwgdG8gMCB0byBmaXggaWE2NCBjcmFzaCwgdGhhdCBUb255
IGZvdW5kLgoKClJlcG9ydGVkLWJ5OiBUb255IEx1Y2sgPHRvbnkubHVja0BnbWFpbC5jb20+ClNp
Z25lZC1vZmYtYnk6IFlpbmdoYWkgTHUgPHlpbmdoYWlAa2VybmVsLm9yZz4KCi0tLQogaW5jbHVk
ZS9saW51eC9ib290bWVtLmggfCAgICA0ICsrLS0KIDEgZmlsZSBjaGFuZ2VkLCAyIGluc2VydGlv
bnMoKyksIDIgZGVsZXRpb25zKC0pCgpJbmRleDogbGludXgtMi42L2luY2x1ZGUvbGludXgvYm9v
dG1lbS5oCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT0KLS0tIGxpbnV4LTIuNi5vcmlnL2luY2x1ZGUvbGludXgvYm9vdG1l
bS5oCisrKyBsaW51eC0yLjYvaW5jbHVkZS9saW51eC9ib290bWVtLmgKQEAgLTI2NCw3ICsyNjQs
NyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgKiBfX2luaXQgbWVtYmxvY2tfdmlyCiB7CiAJaWYgKCFh
bGlnbikKIAkJYWxpZ24gPSBTTVBfQ0FDSEVfQllURVM7Ci0JcmV0dXJuIF9fYWxsb2NfYm9vdG1l
bV9sb3coc2l6ZSwgYWxpZ24sIEJPT1RNRU1fTE9XX0xJTUlUKTsKKwlyZXR1cm4gX19hbGxvY19i
b290bWVtX2xvdyhzaXplLCBhbGlnbiwgMCk7CiB9CiAKIHN0YXRpYyBpbmxpbmUgdm9pZCAqIF9f
aW5pdCBtZW1ibG9ja192aXJ0X2FsbG9jX2xvd19ub3BhbmljKApAQCAtMjcyLDcgKzI3Miw3IEBA
IHN0YXRpYyBpbmxpbmUgdm9pZCAqIF9faW5pdCBtZW1ibG9ja192aXIKIHsKIAlpZiAoIWFsaWdu
KQogCQlhbGlnbiA9IFNNUF9DQUNIRV9CWVRFUzsKLQlyZXR1cm4gX19hbGxvY19ib290bWVtX2xv
d19ub3BhbmljKHNpemUsIGFsaWduLCBCT09UTUVNX0xPV19MSU1JVCk7CisJcmV0dXJuIF9fYWxs
b2NfYm9vdG1lbV9sb3dfbm9wYW5pYyhzaXplLCBhbGlnbiwgMCk7CiB9CiAKIHN0YXRpYyBpbmxp
bmUgdm9pZCAqIF9faW5pdCBtZW1ibG9ja192aXJ0X2FsbG9jX2Zyb21fbm9wYW5pYygK
--bcaec51a8946c3945d04f124a44f
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--bcaec51a8946c3945d04f124a44f--


From xen-devel-bounces@lists.xen.org Thu Jan 30 00:09:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fBZ-0002Kh-4g; Thu, 30 Jan 2014 00:08:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8fBY-0002Jm-0l
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 00:08:48 +0000
Received: from [85.158.139.211:11621] by server-10.bemta-5.messagelabs.com id
	8D/04-08578-F0899E25; Thu, 30 Jan 2014 00:08:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391040524!489862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12350 invoked from network); 30 Jan 2014 00:08:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 00:08:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,745,1384300800"; d="scan'208";a="95939376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 00:08:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 19:08:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8fBS-0004gu-Re;
	Thu, 30 Jan 2014 00:08:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8fBB-0007rT-SC;
	Thu, 30 Jan 2014 00:08:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24603-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 00:08:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24603: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24603 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24603/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24593 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail pass in 24593
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24589
 test-armhf-armhf-xl           7 debian-install     fail in 24593 pass in 24603
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 24593 pass in 24603
 test-amd64-i386-xl-win7-amd64 9 guest-localmigrate fail in 24593 pass in 24603
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24589 pass in 24596-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24593 like 24594-bisect
 test-amd64-i386-xl-win7-amd64 7 windows-install fail in 24589 like 24599-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24589 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24596 never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:09:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fBZ-0002Kh-4g; Thu, 30 Jan 2014 00:08:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8fBY-0002Jm-0l
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 00:08:48 +0000
Received: from [85.158.139.211:11621] by server-10.bemta-5.messagelabs.com id
	8D/04-08578-F0899E25; Thu, 30 Jan 2014 00:08:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391040524!489862!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12350 invoked from network); 30 Jan 2014 00:08:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 00:08:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,745,1384300800"; d="scan'208";a="95939376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 00:08:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 19:08:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8fBS-0004gu-Re;
	Thu, 30 Jan 2014 00:08:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8fBB-0007rT-SC;
	Thu, 30 Jan 2014 00:08:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24603-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 00:08:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24603: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24603 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24603/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24593 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail pass in 24593
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24589
 test-armhf-armhf-xl           7 debian-install     fail in 24593 pass in 24603
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 24593 pass in 24603
 test-amd64-i386-xl-win7-amd64 9 guest-localmigrate fail in 24593 pass in 24603
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24589 pass in 24596-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24593 like 24594-bisect
 test-amd64-i386-xl-win7-amd64 7 windows-install fail in 24589 like 24599-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24589 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24596 never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:12:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fEv-0002fG-RY; Thu, 30 Jan 2014 00:12:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tony.luck@gmail.com>) id 1W8fEu-0002f9-Cj
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 00:12:16 +0000
Received: from [85.158.139.211:12236] by server-14.bemta-5.messagelabs.com id
	87/21-27598-FD899E25; Thu, 30 Jan 2014 00:12:15 +0000
X-Env-Sender: tony.luck@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391040734!491615!1
X-Originating-IP: [209.85.212.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27143 invoked from network); 30 Jan 2014 00:12:15 -0000
Received: from mail-vb0-f41.google.com (HELO mail-vb0-f41.google.com)
	(209.85.212.41)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 00:12:15 -0000
Received: by mail-vb0-f41.google.com with SMTP id g10so1684427vbg.0
	for <xen-devel@lists.xenproject.org>;
	Wed, 29 Jan 2014 16:12:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=Winnr/8uhoOVPeVNNR9StBKjMA7kZKWi5OZtZCWCg3o=;
	b=NVxtBBmgcZBZUi5H4mfOjQQsBdQnQ6trwrKzFtWZg65e6fd2fXjl8e3OXwMHhe41VZ
	/6Y6/r3P33TEc4a01NYOC5tTx6EEwWYhNNUOGcRnkD4AXFzN6FWdlBiia+5KXH+LoDDk
	fPY85urPvKCxM0+eVVu7zil3bTDhW24JQXXPMEfLLzyYxrJ94+8X+XiDS/XmVgBmBK+e
	z23RAVZwEufLnSE5TtxwLjv/Ronv13Maj8dX8jrO5HuibNjWf18xUN4EsHmG0cicEWoB
	rteIZteze9vTd0Lwzj66XKgE3ED9m2nOG89PzZlIQ97dk84sXKY9cWIc9gxK2/jm3BHJ
	ZQbg==
MIME-Version: 1.0
X-Received: by 10.58.191.38 with SMTP id gv6mr173255vec.33.1391040733710; Wed,
	29 Jan 2014 16:12:13 -0800 (PST)
Received: by 10.59.7.9 with HTTP; Wed, 29 Jan 2014 16:12:13 -0800 (PST)
In-Reply-To: <CAE9FiQX6CiumfOi2cgU1ODrKuaTHCZxo3zNmW01kn+GT1CAT3w@mail.gmail.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
	<CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
	<CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
	<CAE9FiQX6CiumfOi2cgU1ODrKuaTHCZxo3zNmW01kn+GT1CAT3w@mail.gmail.com>
Date: Wed, 29 Jan 2014 16:12:13 -0800
Message-ID: <CA+8MBbLggLtM6ZQhmS0qZQaRG0b+HxtawxXSu5bUzHS-rMVcYg@mail.gmail.com>
From: Tony Luck <tony.luck@gmail.com>
To: Yinghai Lu <yinghai@kernel.org>
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ingo Molnar <mingo@elte.hu>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applying on top of Linus' tree (commit dda68a8c1707b4011dc3c656fa1b2c6de6f7f304) I just get:

patching file include/linux/bootmem.h
Hunk #1 FAILED at 264.
Hunk #2 FAILED at 272.
2 out of 2 hunks FAILED -- saving rejects to file include/linux/bootmem.h.rej

- not a promising start :-(

-Tony

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 30 00:15:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fI9-0002oS-08; Thu, 30 Jan 2014 00:15:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8fI6-0002o9-T6
	for Xen-devel@lists.xensource.com; Thu, 30 Jan 2014 00:15:35 +0000
Received: from [85.158.137.68:15064] by server-5.bemta-3.messagelabs.com id
	3B/1B-04712-6A999E25; Thu, 30 Jan 2014 00:15:34 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391040931!12208002!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15403 invoked from network); 30 Jan 2014 00:15:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 00:15:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0U0FSRW032421
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 00:15:29 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0U0FRtZ025823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Jan 2014 00:15:28 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0U0FR1M026028; Thu, 30 Jan 2014 00:15:27 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 16:15:26 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Wed, 29 Jan 2014 16:15:17 -0800
Message-Id: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [PATCH V0] linux PVH: Set CR4 flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,

The CR4 settings were dropped from my earlier patch because you didn't
want to enable them. But since you do now, we need to set them on the APs
also. If you decide not to again, please apply my previous patch
"pvh: disable pse feature for now".

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:15:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fI8-0002oK-Gn; Thu, 30 Jan 2014 00:15:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8fI6-0002o8-DD
	for Xen-devel@lists.xensource.com; Thu, 30 Jan 2014 00:15:34 +0000
Received: from [85.158.137.68:15071] by server-2.bemta-3.messagelabs.com id
	9E/97-06531-5A999E25; Thu, 30 Jan 2014 00:15:33 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391040931!8521167!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4789 invoked from network); 30 Jan 2014 00:15:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 00:15:32 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0U0FSK1032420
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 00:15:29 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0U0FRFW026047
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 30 Jan 2014 00:15:28 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0U0FRQt027984; Thu, 30 Jan 2014 00:15:27 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 16:15:26 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Wed, 29 Jan 2014 16:15:18 -0800
Message-Id: <1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We need to set cr4 flags for APs that are already set for BSP.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 arch/x86/xen/enlighten.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..201d09a 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
 	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
 	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
 	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+
+	if (!cpu)
+		return;
+	/*
+	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
+	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
+	*/
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	if (cpu_has_pge)
+		set_in_cr4(X86_CR4_PGE);
 }
 
 /*
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We need to set the cr4 flags for APs that are already set for the BSP.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 arch/x86/xen/enlighten.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..201d09a 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
 	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
 	 * (with which PVH shares codepaths), while X86_CR0_PG is for PVH. */
 	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+
+	if (!cpu)
+		return;
+	/*
+	 * For the BSP, PSE and PGE are set in probe_page_size_mask();
+	 * for APs set them here. OSFXSR and OSXMMEXCPT are set in fpu_init.
+	 */
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	if (cpu_has_pge)
+		set_in_cr4(X86_CR4_PGE);
 }
 
 /*
-- 
1.7.2.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:15:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fI9-0002oS-08; Thu, 30 Jan 2014 00:15:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8fI6-0002o9-T6
	for Xen-devel@lists.xensource.com; Thu, 30 Jan 2014 00:15:35 +0000
Received: from [85.158.137.68:15064] by server-5.bemta-3.messagelabs.com id
	3B/1B-04712-6A999E25; Thu, 30 Jan 2014 00:15:34 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391040931!12208002!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15403 invoked from network); 30 Jan 2014 00:15:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 00:15:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0U0FSRW032421
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 00:15:29 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0U0FRtZ025823
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Jan 2014 00:15:28 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0U0FR1M026028; Thu, 30 Jan 2014 00:15:27 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 16:15:26 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: konrad.wilk@oracle.com
Date: Wed, 29 Jan 2014 16:15:17 -0800
Message-Id: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.7.2.3
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: [Xen-devel] [PATCH V0] linux PVH: Set CR4 flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad,

The CR4 settings were dropped from my earlier patch because you didn't
want to enable them. But since you do now, we need to set them on the
APs also. If you decide not to again, please apply my previous patch
"pvh: disable pse feature for now".

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:30:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fWK-0003lr-KO; Thu, 30 Jan 2014 00:30:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <akpm@linux-foundation.org>) id 1W8fWH-0003lm-Va
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 00:30:14 +0000
Received: from [85.158.139.211:30843] by server-11.bemta-5.messagelabs.com id
	71/20-23886-51D99E25; Thu, 30 Jan 2014 00:30:13 +0000
X-Env-Sender: akpm@linux-foundation.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391041811!473195!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15168 invoked from network); 30 Jan 2014 00:30:12 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-13.tower-206.messagelabs.com with SMTP;
	30 Jan 2014 00:30:12 -0000
Received: from localhost (c-67-161-9-76.hsd1.ca.comcast.net [67.161.9.76])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 19F42947;
	Thu, 30 Jan 2014 00:30:10 +0000 (UTC)
Date: Wed, 29 Jan 2014 16:34:07 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: Tony Luck <tony.luck@gmail.com>
Message-Id: <20140129163407.a46408a4.akpm@linux-foundation.org>
In-Reply-To: <CA+8MBbLggLtM6ZQhmS0qZQaRG0b+HxtawxXSu5bUzHS-rMVcYg@mail.gmail.com>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
	<CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
	<CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
	<CAE9FiQX6CiumfOi2cgU1ODrKuaTHCZxo3zNmW01kn+GT1CAT3w@mail.gmail.com>
	<CA+8MBbLggLtM6ZQhmS0qZQaRG0b+HxtawxXSu5bUzHS-rMVcYg@mail.gmail.com>
X-Mailer: Sylpheed 2.7.1 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Yinghai Lu <yinghai@kernel.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014 16:12:13 -0800 Tony Luck <tony.luck@gmail.com> wrote:

> Applying on top of Linus' tree (commit
> dda68a8c1707b4011dc3c656fa1b2c6de6f7f304) I just get:
> 
> patching file include/linux/bootmem.h
> Hunk #1 FAILED at 264.
> Hunk #2 FAILED at 272.
> 2 out of 2 hunks FAILED -- saving rejects to file include/linux/bootmem.h.rej
> 
> - not a promising start :-(
> 

It applies for me.  MIME getting you down?


From: Yinghai Lu <yinghai@kernel.org>
Subject: memblock, bootmem: restore goal for alloc_low

Now we have memblock_virt_alloc_low() to replace the original bootmem API
in swiotlb.

But we should not use BOOTMEM_LOW_LIMIT on architectures that do not
support CONFIG_NOBOOTMEM, as the old API takes 0:

| #define alloc_bootmem_low(x) \
|        __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0)
| #define alloc_bootmem_low_pages_nopanic(x) \
|        __alloc_bootmem_low_nopanic(x, PAGE_SIZE, 0)

and for CONFIG_NOBOOTMEM we have
 #define BOOTMEM_LOW_LIMIT __pa(MAX_DMA_ADDRESS)

Restore the goal to 0 to fix the ia64 crash that Tony found.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Reported-by: Tony Luck <tony.luck@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/bootmem.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN include/linux/bootmem.h~memblock-bootmem-restore-goal-for-alloc_low include/linux/bootmem.h
--- a/include/linux/bootmem.h~memblock-bootmem-restore-goal-for-alloc_low
+++ a/include/linux/bootmem.h
@@ -264,7 +264,7 @@ static inline void * __init memblock_vir
 {
 	if (!align)
 		align = SMP_CACHE_BYTES;
-	return __alloc_bootmem_low(size, align, BOOTMEM_LOW_LIMIT);
+	return __alloc_bootmem_low(size, align, 0);
 }
 
 static inline void * __init memblock_virt_alloc_low_nopanic(
@@ -272,7 +272,7 @@ static inline void * __init memblock_vir
 {
 	if (!align)
 		align = SMP_CACHE_BYTES;
-	return __alloc_bootmem_low_nopanic(size, align, BOOTMEM_LOW_LIMIT);
+	return __alloc_bootmem_low_nopanic(size, align, 0);
 }
 
 static inline void * __init memblock_virt_alloc_from_nopanic(
_


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:42:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fi4-0004F3-0G; Thu, 30 Jan 2014 00:42:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8fi2-0004Ey-AU
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 00:42:22 +0000
Received: from [85.158.143.35:22322] by server-2.bemta-4.messagelabs.com id
	3D/7A-10891-DEF99E25; Thu, 30 Jan 2014 00:42:21 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391042540!1770816!1
X-Originating-IP: [74.125.82.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28524 invoked from network); 30 Jan 2014 00:42:21 -0000
Received: from mail-wg0-f48.google.com (HELO mail-wg0-f48.google.com)
	(74.125.82.48)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 00:42:21 -0000
Received: by mail-wg0-f48.google.com with SMTP id x13so4943582wgg.15
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 16:42:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=kL7xeAHB2ZgHjyyr3IFQOROZ6lK1AE6cVYOxoUUW+ZQ=;
	b=YMD/PFdgtCSuiv1y6eoVCtHMbZnvGUo9ZV9d7uoVmd6o26cHAdyAeUqzeyM5IVybXt
	9BjcL25jz+nMDWskP13wUXxNwNazWQd8LHrDMxS56WB7PWccm9B+LgOlYyaKezM0vu+Q
	pWN+OJ8BW8Jv+O2YNC7ZxdSNPErcG12pLHn2gjkmz6nbDyIlmbmZrJONZzjBDubltUAy
	xSen0PaxHwxoH2OE5BVsgB9CNveDDXHsKgfGlN5czaDUk0QpozKPHGiAfqBtDVPzQGLU
	1MZRoWPbNCdwE/hmF4keFl5sbscBEBc/eirC5C2kPbiAQTcdDIsKeGV7w4Yx0VeF0j14
	JuCw==
X-Gm-Message-State: ALoCoQkmzKziElE5ntw959sc+hqFb3LDZXib/MJ0NTuLrKpjwomBkJjcCUoYIifP+Z1NBzr19vqw
X-Received: by 10.180.10.105 with SMTP id h9mr7684329wib.11.1391042540586;
	Wed, 29 Jan 2014 16:42:20 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id pl7sm8480221wjc.16.2014.01.29.16.42.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 29 Jan 2014 16:42:19 -0800 (PST)
Message-ID: <52E99FEA.1030606@linaro.org>
Date: Thu, 30 Jan 2014 00:42:18 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	<52E69CBC.3090207@linaro.org>	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>	<52E8FD02.2060601@linaro.org>	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<CAJEb2DGUfA8qHMBCKSaguA8D1yH+wS4rHaJmHb3+5SUU9Y6wkw@mail.gmail.com>
In-Reply-To: <CAJEb2DGUfA8qHMBCKSaguA8D1yH+wS4rHaJmHb3+5SUU9Y6wkw@mail.gmail.com>
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 29/01/14 19:54, Oleksandr Tyshchenko wrote:
> On Wed, Jan 29, 2014 at 8:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> Hi,
>>
>> It's weird, a physical IRQ should not be injected twice ...
>> Were you able to print the IRQ number?
>
> p->irq: 151, p->desc->irq: 151 (it is touchscreen irq)
>
>>
>> In any case, you are using the old version of the interrupt patch series.
>> Your new error may come from a race condition in this code.
>>
>> Can you try a newer version?
>
> Yes, I can, but not today or tomorrow; I need time to apply our local
> changes to the newest version for our system to work.
> And do you mean trying the newest version of Xen as a whole, or only
> the parts of code related to interrupt handling?
>

I don't think you need the newest version. Which Xen commit are you using?

This range of commits should be enough: 82064f0 - 88eb95e.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 00:46:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 00:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8fls-0004MS-M7; Thu, 30 Jan 2014 00:46:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tony.luck@gmail.com>) id 1W8fls-0004MN-2c
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 00:46:20 +0000
Received: from [85.158.139.211:52539] by server-6.bemta-5.messagelabs.com id
	37/57-14342-BD0A9E25; Thu, 30 Jan 2014 00:46:19 +0000
X-Env-Sender: tony.luck@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391042777!490526!1
X-Originating-IP: [209.85.128.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13007 invoked from network); 30 Jan 2014 00:46:18 -0000
Received: from mail-ve0-f171.google.com (HELO mail-ve0-f171.google.com)
	(209.85.128.171)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 00:46:18 -0000
Received: by mail-ve0-f171.google.com with SMTP id pa12so1773210veb.30
	for <xen-devel@lists.xenproject.org>;
	Wed, 29 Jan 2014 16:46:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=kSF7F7eFN1LBn85Sppi+ASttEuoNexcA84sJEQs27ws=;
	b=TAnFIX8HDU9iMby7/Hn2RWDeyAZElnYqhohqUgt/5YDL9Dtm+vmfQ36Zvy9mH8BQcs
	SBPZEn4+uMB8Ft3NT6Hke5uNlDKIA5Dj7csgEPERGYYpz6k2MmDMn58+GIC20qjPQmr4
	eUhF4fc1g983soheGgM0KSjy7CQ8Hrav896wQESpoRQLC7+tpTslprdjcRkTRodp/hvM
	cEN3xJnhsFs7vKRflwwhb/v9y7H4GxiQI29/GgnlpPgIsnD5latgtoQlMK7UtMvbxDWZ
	Zvqn3JaHkcDGowrRr65k2nmq3RPKFLQacPGchxJgoBS10LtsLQYQ/fji35GkAy0h4kSU
	MPSw==
MIME-Version: 1.0
X-Received: by 10.220.29.200 with SMTP id r8mr9174280vcc.9.1391042777162; Wed,
	29 Jan 2014 16:46:17 -0800 (PST)
Received: by 10.59.7.9 with HTTP; Wed, 29 Jan 2014 16:46:17 -0800 (PST)
In-Reply-To: <20140129163407.a46408a4.akpm@linux-foundation.org>
References: <1390946665-2967-1-git-send-email-yinghai@kernel.org>
	<52E82A40.3040104@intel.com>
	<CAE9FiQVGuN0f5xo9fbfsurmO3E=45A3m0oS75HYPV7BSEwrvsg@mail.gmail.com>
	<20140129015048.GA14629@konrad-lan.dumpdata.com>
	<CA+8MBbJjEUJKo2ip8bDzDRVbBT+yhzqZXV0BYP3X52OJ3zA6Pg@mail.gmail.com>
	<CAE9FiQWfZk7ogmORm7euBy1bncUhFW0Ms2hV_g8_9uBVU71RRQ@mail.gmail.com>
	<CAE9FiQX6CiumfOi2cgU1ODrKuaTHCZxo3zNmW01kn+GT1CAT3w@mail.gmail.com>
	<CA+8MBbLggLtM6ZQhmS0qZQaRG0b+HxtawxXSu5bUzHS-rMVcYg@mail.gmail.com>
	<20140129163407.a46408a4.akpm@linux-foundation.org>
Date: Wed, 29 Jan 2014 16:46:17 -0800
Message-ID: <CA+8MBb+=uN+=imrAAK+=CXQf3LjZov3JKT2rVKyD90d4HgM0Uw@mail.gmail.com>
From: Tony Luck <tony.luck@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Kevin Hilman <khilman@linaro.org>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Santosh Shilimkar <santosh.shilimkar@ti.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Olof Johansson <olof@lixom.net>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Yinghai Lu <yinghai@kernel.org>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [Xen-devel] [PATCH] memblock: Add limit checking to
	memblock_virt_alloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 4:34 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> It applies for me.  MIME getting you down?

Perhaps - Gmail's web client showed me a downward pointing arrow when I
moused over the attachment icon in Yinghai's e-mail. So I clicked it ... and
it downloaded what looked like a patch file. So I ran
"patch -p1 < file"
and whoops - that wasn't so great after all :-(

Your plain text e-mail version applied just fine.

GUI: 0 CLI: 1

Patch works - my ia64 boots again.  Thanks.

Tested-by: Tony Luck <tony.luck@intel.com>

-Tony

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 01:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 01:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8gWU-0001de-JR; Thu, 30 Jan 2014 01:34:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W8gWS-0001dZ-FQ
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 01:34:28 +0000
Received: from [85.158.137.68:23120] by server-12.bemta-3.messagelabs.com id
	A9/8A-01674-32CA9E25; Thu, 30 Jan 2014 01:34:27 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391045665!12144544!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10822 invoked from network); 30 Jan 2014 01:34:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 01:34:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0U1XL8F031828
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 01:33:22 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0U1XIA7019903
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Jan 2014 01:33:18 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0U1XHpn001403; Thu, 30 Jan 2014 01:33:17 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 29 Jan 2014 17:33:16 -0800
Date: Wed, 29 Jan 2014 17:33:15 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140129173315.592e593e@mantra.us.oracle.com>
In-Reply-To: <1390996295.31814.84.camel@kazak.uk.xensource.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014 11:51:35 +0000
Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Wed, 2014-01-29 at 12:48 +0100, Tim Deegan wrote:
> > At 11:41 +0000 on 29 Jan (1390992062), Ian Campbell wrote:
> > > On Wed, 2014-01-29 at 12:38 +0100, Tim Deegan wrote:
> > > > At 10:40 +0000 on 29 Jan (1390988426), Ian Campbell wrote:
> > > > > On Tue, 2014-01-28 at 18:08 -0800, Mukesh Rathor wrote:
> > > > > > On Tue, 28 Jan 2014 10:31:36 +0000
> > > > > > "Jan Beulich" <JBeulich@suse.com> wrote:
> > > > > > > The only thing x86-specific here is that
> > > > > > > {get,put}_pg_owner() may not exist on ARM. But the
> > > > > > > general operation isn't x86-specific, so there shouldn't
> > > > > > > be any CONFIG_X86 dependency here. Instead you ought to
> > > > > > > work out with the ARM maintainers whether to stub out
> > > > > > > those two functions, or whether the functionality is
> > > > > > > useful there too (and hence proper implementations would
> > > > > > > be needed).
> > > > [...]
> > > > > Yes, please just make get/put_pg_owner common.
> > > > > 
> > > > > The only required change would be to:
> > > > >     if ( unlikely(paging_mode_translate(curr)) )
> > > > >     {
> > > > >         MEM_LOG("Cannot mix foreign mappings with translated domains");
> > > > >         goto out;
> > > > >     }
> > > > > 
> > > > > which is not needed for ARM, and I suspect needs adjusting
> > > > > for PVH too (ah, there it is in the next patch). I think the
> > > > > best solution there would be a new predicate e.g.
> > > > > paging_mode_supports_foreign(curr) (or some better name, I
> > > > > don't especially like my suggestion)
> > > > > 
> > > > > on ARM:
> > > > > 
> > > > > #define paging_mode_supports_foreign(d) (1)
> > > > > 
> > > > > on x86:
> > > > > 
> > > > > #define paging_mode_supports_foreign(d) (is_pvh_domain(curr)
> > > > > || !(paging_mode_translate(curr)))
> > > > > 
> > > > 
> > > > Hmmm.  That's likely to have unintended consequences
> > > > somewhere.  (And if that check is really not needed for PVH
> > > > maybe it's not needed for HVM either, given that they share all
> > > > their paging support code).
> > > > 
> > > > But I don't think we need to tinker with it anyway - AFAICS,
> > > > get_pg_owner() isn't really what's wanted in the XATP code.
> > > > All the other uses of get_pg_owner() are in the x86 PV MMU
> > > > code, which this is definitely not, and it handles cases (like
> > > > mmio) that we don't want here anyway.  How about using
> > > > rcu_lock_live_remote_domain_by_id()?
> > > 
> > > We have a struct page in our hand -- don't we need to lookup the
> > > owner and lock it somewhat atomically?
> > 
> > I'm not sure what you mean: 
> >  - the code that Mukesh is adding doesn't have a struct page, it's
> >    just grabbing the foreign domid from the hypercall arg;
> >  - if we did have a struct page, we'd just need to take a ref to 
> >    stop the owner changing underfoot; and
> >  - get_pg_owner() takes a domid anyway.
> 
> Sorry, I was confused/misled by the name...
> 
> rcu_lock_live_remote_domain_by_id does look like what is needed.

Yup, that will do it. Thanks Tim.

Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 02:49:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 02:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8hgE-0004iM-SA; Thu, 30 Jan 2014 02:48:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8hgD-0004hX-6N
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 02:48:37 +0000
Received: from [85.158.139.211:4159] by server-5.bemta-5.messagelabs.com id
	DC/36-32749-48DB9E25; Thu, 30 Jan 2014 02:48:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391050113!495380!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27530 invoked from network); 30 Jan 2014 02:48:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 02:48:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,746,1384300800"; d="scan'208";a="97955781"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 02:48:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 21:48:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8hfn-0005Uy-7o;
	Thu, 30 Jan 2014 02:48:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8hfn-0003l6-40;
	Thu, 30 Jan 2014 02:48:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24610-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 02:48:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24610: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24610 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24598
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24598
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 24598 pass in 24610

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24598 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
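
The deferral described in the commit message above can be sketched as follows. This is an illustrative model only, not Xen's actual code: all structure and function names here are hypothetical stand-ins for the real toggle_guest_mode() / runstate-area machinery.

```c
#include <assert.h>

/* Hypothetical sketch: while a 64-bit PV guest runs in user mode, its
 * kernel addresses (e.g. the registered runstate area) are not mapped,
 * so an update attempted then is recorded as pending and applied on
 * the next switch to kernel mode. */
struct pv_vcpu {
    int in_kernel;              /* 1 while guest is in kernel mode */
    int runstate_dirty;         /* update deferred until kernel mode */
    unsigned long runstate;     /* stands in for the guest-visible area */
    unsigned long pending_val;
};

static void update_runstate(struct pv_vcpu *v, unsigned long val)
{
    if (!v->in_kernel) {        /* kernel address not accessible now */
        v->pending_val = val;
        v->runstate_dirty = 1;
        return;                 /* don't drop the update: defer it */
    }
    v->runstate = val;
}

static void toggle_guest_mode(struct pv_vcpu *v)
{
    v->in_kernel = !v->in_kernel;
    if (v->in_kernel && v->runstate_dirty) {
        v->runstate = v->pending_val;   /* flush the deferred update */
        v->runstate_dirty = 0;
    }
}
```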

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an
    IO access directly, and L0 may get X86EMUL_RETRY when handling this
    IO request. If at the same time a virtual vmexit is pending (for
    example, an interrupt to inject into L1), the hypervisor switches
    the vCPU context from L2 to L1. The hypervisor will later retry the
    IO request, but unfortunately the retry then happens in L1's
    context, which causes the problem. The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
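
The gating rule from the commit message above can be sketched like this. All names are illustrative, not Xen's real nested-VMX structures: the point is only that a pending IO request blocks the virtual vmexit, so the IO retry completes in L2's context.

```c
#include <assert.h>

enum io_state { IO_NONE, IO_PENDING };

struct nvcpu {
    enum io_state io;
    int in_l2;          /* 1 while the vCPU runs the L2 guest */
};

/* Returns 1 if the virtual vmexit was performed, 0 if deferred. */
static int try_virtual_vmexit(struct nvcpu *v)
{
    if (v->io == IO_PENDING)
        return 0;       /* defer: the IO retry must run in L2 context */
    v->in_l2 = 0;       /* switch the vCPU context from L2 to L1 */
    return 1;
}
```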

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses a physical device directly.
    For such accesses, the shadow EPT table should point at the device's
    MMIO. But in the current logic, when building the shadow EPT table,
    L0 does not distinguish whether an MMIO range comes from qemu or
    from a physical device. This is wrong. This patch sets up the
    correct shadow EPT table entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A, and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again, and compares it with
       the result of the cmpxchg; the cmpxchg succeeded, but in the
       meantime prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated
    before the cmpxchg(), since after a successful cmpxchg the element
    might be immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
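
The fixed pattern described in the commit above can be sketched with C11 atomics. This is a minimal single-threaded model with hypothetical names (the real code uses cmpxchgptr() on mctelem_ent lists): the key point is keeping the expected head in a local variable and comparing the CAS result against that local copy, never re-reading the possibly changed pointer from memory.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct elem { struct elem *next; };

/* Push 'e' onto a lock-free singly linked list at *headp. */
static void push(struct elem *_Atomic *headp, struct elem *e)
{
    /* 'oldp' is the temporary variable for the prev pointer. */
    struct elem *oldp = atomic_load(headp);
    for (;;) {
        e->next = oldp;   /* link before the CAS: after a successful
                             CAS the element may be consumed at once */
        if (atomic_compare_exchange_weak(headp, &oldp, e))
            break;        /* on failure, oldp was refreshed for retry */
    }
}
```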

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first read worked because the p2m.lock is recursive and the
    PCPU had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
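
The error-path pattern named in the commit title above can be sketched as follows. This is an illustrative model, not Xen's real get_gfn()/put_gfn() API: the point is that the reference taken on entry must be dropped on every exit path, including the error return that previously leaked it.

```c
#include <assert.h>

static int refcount;                    /* stands in for the gfn ref */

static void get_gfn_sketch(void) { refcount++; }
static void put_gfn_sketch(void) { refcount--; }

/* Returns 0 on success, -1 on error; must never leak the reference. */
static int dbg_rw_sketch(int valid)
{
    get_gfn_sketch();
    if (!valid) {
        put_gfn_sketch();               /* the call missing in the
                                           buggy error path */
        return -1;
    }
    /* ... access the guest page ... */
    put_gfn_sketch();
    return 0;
}
```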
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 02:49:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 02:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8hgE-0004iM-SA; Thu, 30 Jan 2014 02:48:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8hgD-0004hX-6N
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 02:48:37 +0000
Received: from [85.158.139.211:4159] by server-5.bemta-5.messagelabs.com id
	DC/36-32749-48DB9E25; Thu, 30 Jan 2014 02:48:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391050113!495380!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27530 invoked from network); 30 Jan 2014 02:48:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 02:48:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,746,1384300800"; d="scan'208";a="97955781"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 02:48:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 21:48:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8hfn-0005Uy-7o;
	Thu, 30 Jan 2014 02:48:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8hfn-0003l6-40;
	Thu, 30 Jan 2014 02:48:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24610-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 02:48:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24610: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24610 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24598
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24598
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 24598 pass in 24610

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24598 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an
    IO access directly, and L0 may get X86EMUL_RETRY when handling this
    IO request. If at the same time a virtual vmexit is pending (for
    example, an interrupt to inject into L1), the hypervisor switches
    the vCPU context from L2 to L1. The hypervisor will later retry the
    IO request, but unfortunately the retry then happens in L1's
    context, which causes the problem. The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses a physical device directly.
    For such accesses, the shadow EPT table should point at the device's
    MMIO. But in the current logic, when building the shadow EPT table,
    L0 does not distinguish whether an MMIO range comes from qemu or
    from a physical device. This is wrong. This patch sets up the
    correct shadow EPT table entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A, and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again, and compares it with
       the result of the cmpxchg; the cmpxchg succeeded, but in the
       meantime prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated
    before the cmpxchg(), since after a successful cmpxchg the element
    might be immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first read worked because the p2m.lock is recursive and the
    PCPU had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 04:37:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 04:37:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8jNL-0008GP-7j; Thu, 30 Jan 2014 04:37:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8jNK-0008GK-EH
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 04:37:14 +0000
Received: from [193.109.254.147:5528] by server-12.bemta-14.messagelabs.com id
	78/B2-17220-8F6D9E25; Thu, 30 Jan 2014 04:37:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391056628!752946!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15681 invoked from network); 30 Jan 2014 04:37:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 04:37:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,747,1384300800"; d="scan'208";a="95999759"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 04:37:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 23:37:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8jNC-00064n-Tc;
	Thu, 30 Jan 2014 04:37:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8jNB-0003bu-MF;
	Thu, 30 Jan 2014 04:37:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24613-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 04:37:05 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24613: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24613 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24397
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 04:37:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 04:37:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8jNL-0008GP-7j; Thu, 30 Jan 2014 04:37:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8jNK-0008GK-EH
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 04:37:14 +0000
Received: from [193.109.254.147:5528] by server-12.bemta-14.messagelabs.com id
	78/B2-17220-8F6D9E25; Thu, 30 Jan 2014 04:37:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391056628!752946!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15681 invoked from network); 30 Jan 2014 04:37:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 04:37:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,747,1384300800"; d="scan'208";a="95999759"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 04:37:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 29 Jan 2014 23:37:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8jNC-00064n-Tc;
	Thu, 30 Jan 2014 04:37:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8jNB-0003bu-MF;
	Thu, 30 Jan 2014 04:37:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24613-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 04:37:05 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24613: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24613 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24397
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 06:09:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 06:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8knv-0003TK-BP; Thu, 30 Jan 2014 06:08:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1W8knu-0003TF-5W
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 06:08:46 +0000
Received: from [85.158.143.35:14828] by server-2.bemta-4.messagelabs.com id
	DC/83-10891-D6CE9E25; Thu, 30 Jan 2014 06:08:45 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391062123!1813723!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21436 invoked from network); 30 Jan 2014 06:08:44 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 06:08:44 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so4275455qcx.23
	for <xen-devel@lists.xen.org>; Wed, 29 Jan 2014 22:08:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=HhQlnFWlyiXU4gCD7PLwDkXiSNoZuk/M7kSBGAJC5yA=;
	b=cxjVUb3X3ShZ8/Kc9qICrXEbvJkdJJh5LWiCzgiiMBrj67hbWvV659Wp8SMDe+oE0g
	IS45ltNEPQnrVECe0ExL+4Ep0XFB1LTgAs/6QjA+TQ25On1USlcw1jc/jfIvFkFk/6x3
	H92byyRhG8u7hkKXjktzuiBhPBh4ydyFKMADiSiq7PnfdhslttAyURt1YeH1uKimdb+A
	CQkBypm4/hn+Fj2SmicyH6+gy6GmFFczY52g7CJYJxsHBHXCsG8+UN7uVrR684xy83Ds
	qr2RNaF9rKpHna9n8Vz3G45l3wPMmE7olNdCV1+SS1IVeCH6NGnvoyNvXWY+kHh3D+SS
	jT4A==
X-Gm-Message-State: ALoCoQnUpf49X3AaIH3h5aN6AmVHUZH2qpgSlgO6D7OkS0J8KNxZDPjU0wN5C++ONvQuedGiZFyX
MIME-Version: 1.0
X-Received: by 10.224.88.3 with SMTP id y3mr18965801qal.80.1391062123111; Wed,
	29 Jan 2014 22:08:43 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Wed, 29 Jan 2014 22:08:42 -0800 (PST)
In-Reply-To: <1390999123.31814.96.camel@kazak.uk.xensource.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
	<1390999123.31814.96.camel@kazak.uk.xensource.com>
Date: Thu, 30 Jan 2014 11:38:42 +0530
Message-ID: <CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 29 January 2014 18:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:
>
>> > I also don't see any patch to linux/Documentation/devicetree/bindings,
>> > as was requested in that posting from 6 months ago. Where can I find
>> > that?
>> >
>> > It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
>> > hasn't landed?
>> Yeah, it is dangling, and since a new patch has already been posted I
>> think we can wait for the final DT bindings.
>
> It seems from the thread that the final bindings are going to differ
> significantly from what is implemented in Xen and proposed in the above
> thread. (with a syscon driver that the reset driver references).
>
>> >> Now if you want this to be fixed, I can quickly submit a V7 in which
>> >> the mask field is simply hard-coded to 1, so the Xen code will always
>> >> work even if the Linux code does get changed.
>> >
>> > Looks like the Linux driver uses 0xffffffff if the mask isn't given --
>> > that seems like a good approach.
>> >
>> > I think we'll just have to accept that until the binding is specified
>> > and documented (in linux/Documentation/devicetree/bindings) then we may
>> > have to be prepared to change the Xen implementation to match the final
>> > spec without regard to backwards compat. If we aren't happy with that
>> > then I should revert the patch now and we will have to live without
>> > reboot support in the meantime.
>> Please do not revert the patch; I think we can go ahead with the current patch.
>> Once the Linux side is concluded I will fix minor changes in the Xen code
>> based on the new DT bindings.
>
> It doesn't sound to me like these are going to be minor changes.
Yes, the bindings have changed in the new driver, but the question now is
what to do in the current state, where the new driver has not been submitted?

My take is that we have three options:
1. Keep the current reboot driver in Xen as it is and use it with the old
bindings (since those are the ones merged in Linux).
2. I will send a new patch (it will take me at most an hour) with the
addresses hardcoded instead of read from the DTS.
    This will let Xen keep a reboot driver for X-Gene.
3. Remove this driver from Xen entirely for now.

>
> Ian.
>

Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 07:37:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 07:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8mBb-0006zm-1g; Thu, 30 Jan 2014 07:37:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8mBZ-0006zh-C8
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 07:37:17 +0000
Received: from [85.158.137.68:33821] by server-2.bemta-3.messagelabs.com id
	C1/FC-06531-C210AE25; Thu, 30 Jan 2014 07:37:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391067435!11426300!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25519 invoked from network); 30 Jan 2014 07:37:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 07:37:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 30 Jan 2014 07:37:14 +0000
Message-Id: <52EA0F4802000078001181F7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 30 Jan 2014 07:37:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
	<52E941BF.3070308@citrix.com>
In-Reply-To: <52E941BF.3070308@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 19:00, David Vrabel <david.vrabel@citrix.com> wrote:
> Each copy_from_user() and copy_to_user() and get_user()/put_user() would
> thus require two hypercalls, at least one of which would do a TLB flush?

Right.

> This does sound rather expensive and thus not something we (XenServer)
> would be especially interested in using.

And I'd consider this only of interest to the very security conscious.
The plan would certainly be to have this disabled by default in Linux.

> Do you have any figures for the performance impact on guests not using
> this feature?

Guests not using this feature would only suffer from the extra
instructions added to Xen's entry.S paths. I'm not sure that would
be directly measurable in anything other than micro-benchmarks, but as
I said I'm still somewhat concerned: I especially wouldn't want to add
this if people don't think it is of at least reasonable use in some
environments.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 07:43:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 07:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8mHI-0007RO-Sz; Thu, 30 Jan 2014 07:43:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8mHI-0007RJ-91
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 07:43:12 +0000
Received: from [85.158.137.68:38594] by server-16.bemta-3.messagelabs.com id
	B6/53-29917-F820AE25; Thu, 30 Jan 2014 07:43:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391067790!11427678!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23964 invoked from network); 30 Jan 2014 07:43:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 07:43:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 30 Jan 2014 07:43:09 +0000
Message-Id: <52EA10AA0200007800118206@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 30 Jan 2014 07:43:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
	<52E942AF.1090002@citrix.com>
In-Reply-To: <52E942AF.1090002@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 29.01.14 at 19:04, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This appears to be hardware independent, so looks as if it would still
> work fine on 64bit hardware lacking explicit SMAP/SMEP support?

Correct.

> (although possibly problems with emulating {ST,CL}AC)

Yeah, I already knew that in order to work on non-SMAP hardware
the #UD handler would also need to be enabled (not done in the
draft patch yet). Now that I've checked again, I see that the code
in the #GP handler is actually pointless altogether - according to
the spec, #UD gets raised instead of #GP when CPL > 0. But then
again, a guest should avoid relying on the emulation path anyway,
as the hypercall path is clearly faster.

> At a glance, it doesn't appear to add too much code to hot-paths, but

But it's also not so little that one could consider it completely
negligible.

> the performance overhead from the point of view of the PV guest looks
> substantial, requiring two hypercalls/traps on each
> copy_{to,from}_user(), which themselves cause a local TLB flush.

Right. Hence - as said in the response to David - the intention
would be for this to require explicit enabling on the kernel
command line.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 09:21:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 09:21:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8nnd-0002v5-Lc; Thu, 30 Jan 2014 09:20:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W8nnb-0002v0-Oy
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 09:20:40 +0000
Received: from [85.158.143.35:18660] by server-3.bemta-4.messagelabs.com id
	F8/40-11539-6691AE25; Thu, 30 Jan 2014 09:20:38 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391073636!1840778!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31636 invoked from network); 30 Jan 2014 09:20:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 09:20:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,748,1384300800"; d="scan'208";a="98021702"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 09:20:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 04:20:35 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W8nnX-0001kf-Hb;
	Thu, 30 Jan 2014 09:20:35 +0000
Message-ID: <1391073630.18185.0.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 30 Jan 2014 09:20:30 +0000
In-Reply-To: <1390411039.32296.8.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping 

On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
> These lines (in mctelem_reserve)
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After you read the newhead pointer, another flow (a thread
> or a recursive invocation) can change the whole list yet leave the
> head with the same value. oldhead then still equals *freelp, but you
> end up installing a new head that may point to an arbitrary element
> (possibly one already in use).
> 
> This patch uses a bit array and atomic bit operations instead.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
>  1 file changed, 30 insertions(+), 51 deletions(-)
> 
> Changes from v1:
> - Use bitmap to allow any number of items to be used;
> - Use a single bitmap to simplify reserve loop;
> - Remove HOME flags as they are no longer used.
> 
> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
> index 895ce1a..ed8e8d2 100644
> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -37,24 +37,19 @@ struct mctelem_ent {
>  	void *mcte_data;		/* corresponding data payload */
>  };
>  
> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>  
> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>  				MCTE_F_STATE_UNCOMMITTED | \
>  				MCTE_F_STATE_COMMITTED | \
>  				MCTE_F_STATE_PROCESSING)
>  
> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
> -
>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>  #define	MCTE_SET_CLASS(tep, new) do { \
>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
> @@ -69,6 +64,8 @@ struct mctelem_ent {
>  #define	MC_URGENT_NENT		10
>  #define	MC_NONURGENT_NENT	20
>  
> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
> +
>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>  
>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> @@ -77,11 +74,9 @@ struct mctelem_ent {
>  static struct mc_telem_ctl {
>  	/* Linked lists that thread the array members together.
>  	 *
> -	 * The free lists are singly-linked via mcte_next, and we allocate
> -	 * from them by atomically unlinking an element from the head.
> -	 * Consumed entries are returned to the head of the free list.
> -	 * When an entry is reserved off the free list it is not linked
> -	 * on any list until it is committed or dismissed.
> +	 * The free list is now a bit array, where a set bit means free.
> +	 * Since the number of elements is quite small, it is easy to
> +	 * allocate from it atomically this way.
>  	 *
>  	 * The committed list grows at the head and we do not maintain a
>  	 * tail pointer; insertions are performed atomically.  The head
> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>  	 * we can lock it for updates.  The head of the processing list
>  	 * always has the oldest telemetry, and we append (as above)
>  	 * at the tail of the processing list. */
> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>   */
>  static void mctelem_free(struct mctelem_ent *tep)
>  {
> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
> -	    MC_URGENT : MC_NONURGENT;
> -
>  	BUG_ON(tep->mcte_refcnt != 0);
>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>  
>  	tep->mcte_prev = NULL;
> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> +	tep->mcte_next = NULL;
> +
> +	/* set free in array */
> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>  }
>  
>  /* Increment the reference count of an entry that is not linked on to
> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>  	}
>  
>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
> -	    datasz)) == NULL) {
> +	    MC_NENT)) == NULL ||
> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>  		if (mctctl.mctc_elems)
>  			xfree(mctctl.mctc_elems);
>  		printk("Allocations for MCA telemetry failed\n");
>  		return;
>  	}
>  
> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> -		struct mctelem_ent *tep, **tepp;
> +	for (i = 0; i < MC_NENT; i++) {
> +		struct mctelem_ent *tep;
>  
>  		tep = mctctl.mctc_elems + i;
>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>  		tep->mcte_refcnt = 0;
>  		tep->mcte_data = datarr + i * datasz;
>  
> -		if (i < MC_URGENT_NENT) {
> -			tepp = &mctctl.mctc_free[MC_URGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> -		} else {
> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> -		}
> -
> -		tep->mcte_next = *tepp;
> +		__set_bit(i, mctctl.mctc_free);
> +		tep->mcte_next = NULL;
>  		tep->mcte_prev = NULL;
> -		*tepp = tep;
>  	}
>  }
>  
> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>  
>  /* Reserve a telemetry entry, or return NULL if none available.
>   * If we return an entry then the caller must subsequently call exactly one of
> - * mctelem_unreserve or mctelem_commit for that entry.
> + * mctelem_dismiss or mctelem_commit for that entry.
>   */
>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>  {
> -	struct mctelem_ent **freelp;
> -	struct mctelem_ent *oldhead, *newhead;
> -	mctelem_class_t target = (which == MC_URGENT) ?
> -	    MC_URGENT : MC_NONURGENT;
> +	unsigned bit;
> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>  
> -	freelp = &mctctl.mctc_free[target];
>  	for (;;) {
> -		if ((oldhead = *freelp) == NULL) {
> -			if (which == MC_URGENT && target == MC_URGENT) {
> -				/* raid the non-urgent freelist */
> -				target = MC_NONURGENT;
> -				freelp = &mctctl.mctc_free[target];
> -				continue;
> -			} else {
> -				mctelem_drop_count++;
> -				return (NULL);
> -			}
> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
> +
> +		if (bit >= MC_NENT) {
> +			mctelem_drop_count++;
> +			return (NULL);
>  		}
>  
> -		newhead = oldhead->mcte_next;
> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> -			struct mctelem_ent *tep = oldhead;
> +		/* try to allocate, atomically clear free bit */
> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
> +			/* return element we got */
> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>  
>  			mctelem_hold(tep);
>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
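To illustrate the idea of the fix, here is a minimal single-threaded C
sketch of bitmap-based allocation. The names (NENT, free_bits,
reserve_entry, free_entry) are hypothetical, not the Xen identifiers,
and GCC's __atomic builtins stand in for Xen's test_and_clear_bit()
and set_bit():

```c
#include <stdint.h>

/* Sketch of the patch's bitmap allocator, not the Xen code itself.
 * Claiming an entry via an atomic read-modify-write on a single bit
 * avoids the ABA hazard of the old cmpxchg-on-head freelist: the
 * outcome depends only on that one bit, not on a head pointer that
 * may have been recycled to the same value. */
#define NENT 30u                    /* MC_URGENT_NENT + MC_NONURGENT_NENT */

static uint64_t free_bits = (1ull << NENT) - 1;  /* all entries start free */

/* Claim the first free entry at index >= start; return -1 if none. */
static int reserve_entry(unsigned int start)
{
    for (unsigned int bit = start; bit < NENT; bit++) {
        uint64_t mask = 1ull << bit;
        /* Atomically clear the bit; the old value tells us who won. */
        uint64_t old = __atomic_fetch_and(&free_bits, ~mask, __ATOMIC_SEQ_CST);
        if (old & mask)
            return (int)bit;        /* bit was set: the entry is ours */
        /* bit was already clear: taken by someone else, try the next */
    }
    return -1;
}

/* Return an entry to the pool by atomically setting its bit again. */
static void free_entry(unsigned int bit)
{
    __atomic_fetch_or(&free_bits, 1ull << bit, __ATOMIC_SEQ_CST);
}
```

Unlike the cmpxchg loop, a concurrent free/re-reserve cycle cannot make
a stale head look current here: ownership of each entry is decided by a
single atomic bit operation.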



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 09:25:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 09:25:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8nrr-000349-HD; Thu, 30 Jan 2014 09:25:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8nro-000344-Qw
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 09:25:01 +0000
Received: from [193.109.254.147:53607] by server-6.bemta-14.messagelabs.com id
	79/4B-03396-C6A1AE25; Thu, 30 Jan 2014 09:25:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391073897!793798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25635 invoked from network); 30 Jan 2014 09:24:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 09:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,748,1384300800"; d="scan'208";a="96052578"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 09:24:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 04:24:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8nrk-0007jC-KM;
	Thu, 30 Jan 2014 09:24:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8nrk-0003Eu-GY;
	Thu, 30 Jan 2014 09:24:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24617-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 09:24:56 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24617: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24617 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24617/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24593 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64 14 guest-stop               fail pass in 24603
 test-amd64-i386-xl            3 host-install(3)           broken pass in 24603
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 8 guest-saverestore fail pass in 24603
 test-amd64-i386-xl-win7-amd64 10 guest-saverestore.2        fail pass in 24603
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24603 pass in 24593
 test-amd64-i386-xl-winxpsp3-vcpus1 8 guest-saverestore fail in 24603 pass in 24617
 test-armhf-armhf-xl           7 debian-install     fail in 24593 pass in 24617
 test-amd64-i386-xend-qemut-winxpsp3 3 host-install(3) broken in 24593 pass in 24617
 test-amd64-i386-xl-win7-amd64 9 guest-localmigrate fail in 24593 pass in 24617

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail blocked in 24570
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24603 like 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24593 like 24594-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24603 never pass

version targeted for testing:
 xen                  7754fb8cab292dfb2047b1cb38004d7290f8b6aa
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           broken  
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 09:55:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 09:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8oLP-0004CK-Bb; Thu, 30 Jan 2014 09:55:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8oLO-0004CF-Ir
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 09:55:34 +0000
Received: from [193.109.254.147:51001] by server-1.bemta-14.messagelabs.com id
	F0/7F-15438-5912AE25; Thu, 30 Jan 2014 09:55:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391075732!829191!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3262 invoked from network); 30 Jan 2014 09:55:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 09:55:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 30 Jan 2014 09:55:31 +0000
Message-Id: <52EA2FB00200007800118248@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 30 Jan 2014 09:55:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391073630.18185.0.camel@hamster.uk.xensource.com>
In-Reply-To: <1391073630.18185.0.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.01.14 at 10:20, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> Ping 

Your ping should go to the MCE maintainers - I'm merely
waiting for their comments/ack.

Jan

> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
>> These lines (in mctelem_reserve)
>> 
>>         newhead = oldhead->mcte_next;
>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> 
>> are racy. After you read the newhead pointer, another flow (a thread or
>> a recursive invocation) can change the whole list yet leave the head at
>> the same value. oldhead then still equals *freelp, but the newhead you
>> install may point to any element (even one already in use) - the
>> classic ABA problem.
>> 
>> This patch instead uses a bit array and atomic bit operations.
>> 
>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>> ---
>>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 
> ++++++++++++++-----------------------
>>  1 file changed, 30 insertions(+), 51 deletions(-)
>> 
>> Changes from v1:
>> - Use bitmap to allow any number of items to be used;
>> - Use a single bitmap to simplify reserve loop;
>> - Remove HOME flags as was not used anymore.
>> 
>> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c 
> b/xen/arch/x86/cpu/mcheck/mctelem.c
>> index 895ce1a..ed8e8d2 100644
>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>> @@ -37,24 +37,19 @@ struct mctelem_ent {
>>  	void *mcte_data;		/* corresponding data payload */
>>  };
>>  
>> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
>> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
>> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
>> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
>> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
>> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>>  
>> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>>  				MCTE_F_STATE_UNCOMMITTED | \
>>  				MCTE_F_STATE_COMMITTED | \
>>  				MCTE_F_STATE_PROCESSING)
>>  
>> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
>> -
>>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>>  #define	MCTE_SET_CLASS(tep, new) do { \
>>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
>> @@ -69,6 +64,8 @@ struct mctelem_ent {
>>  #define	MC_URGENT_NENT		10
>>  #define	MC_NONURGENT_NENT	20
>>  
>> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
>> +
>>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>>  
>>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>> @@ -77,11 +74,9 @@ struct mctelem_ent {
>>  static struct mc_telem_ctl {
>>  	/* Linked lists that thread the array members together.
>>  	 *
>> -	 * The free lists are singly-linked via mcte_next, and we allocate
>> -	 * from them by atomically unlinking an element from the head.
>> -	 * Consumed entries are returned to the head of the free list.
>> -	 * When an entry is reserved off the free list it is not linked
>> -	 * on any list until it is committed or dismissed.
>> +	 * The free list is a bitmap where a set bit means free.
>> +	 * As the number of elements is quite small, it is easy
>> +	 * to allocate atomically this way.
>>  	 *
>>  	 * The committed list grows at the head and we do not maintain a
>>  	 * tail pointer; insertions are performed atomically.  The head
>> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>>  	 * we can lock it for updates.  The head of the processing list
>>  	 * always has the oldest telemetry, and we append (as above)
>>  	 * at the tail of the processing list. */
>> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>>   */
>>  static void mctelem_free(struct mctelem_ent *tep)
>>  {
>> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
>> -	    MC_URGENT : MC_NONURGENT;
>> -
>>  	BUG_ON(tep->mcte_refcnt != 0);
>>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>>  
>>  	tep->mcte_prev = NULL;
>> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>> +	tep->mcte_next = NULL;
>> +
>> +	/* set free in array */
>> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>>  }
>>  
>>  /* Increment the reference count of an entry that is not linked on to
>> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>>  	}
>>  
>>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
>> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
>> -	    datasz)) == NULL) {
>> +	    MC_NENT)) == NULL ||
>> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>>  		if (mctctl.mctc_elems)
>>  			xfree(mctctl.mctc_elems);
>>  		printk("Allocations for MCA telemetry failed\n");
>>  		return;
>>  	}
>>  
>> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>> -		struct mctelem_ent *tep, **tepp;
>> +	for (i = 0; i < MC_NENT; i++) {
>> +		struct mctelem_ent *tep;
>>  
>>  		tep = mctctl.mctc_elems + i;
>>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>>  		tep->mcte_refcnt = 0;
>>  		tep->mcte_data = datarr + i * datasz;
>>  
>> -		if (i < MC_URGENT_NENT) {
>> -			tepp = &mctctl.mctc_free[MC_URGENT];
>> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>> -		} else {
>> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>> -		}
>> -
>> -		tep->mcte_next = *tepp;
>> +		__set_bit(i, mctctl.mctc_free);
>> +		tep->mcte_next = NULL;
>>  		tep->mcte_prev = NULL;
>> -		*tepp = tep;
>>  	}
>>  }
>>  
>> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>>  
>>  /* Reserve a telemetry entry, or return NULL if none available.
>>   * If we return an entry then the caller must subsequently call exactly one 
> of
>> - * mctelem_unreserve or mctelem_commit for that entry.
>> + * mctelem_dismiss or mctelem_commit for that entry.
>>   */
>>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>>  {
>> -	struct mctelem_ent **freelp;
>> -	struct mctelem_ent *oldhead, *newhead;
>> -	mctelem_class_t target = (which == MC_URGENT) ?
>> -	    MC_URGENT : MC_NONURGENT;
>> +	unsigned bit;
>> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>>  
>> -	freelp = &mctctl.mctc_free[target];
>>  	for (;;) {
>> -		if ((oldhead = *freelp) == NULL) {
>> -			if (which == MC_URGENT && target == MC_URGENT) {
>> -				/* raid the non-urgent freelist */
>> -				target = MC_NONURGENT;
>> -				freelp = &mctctl.mctc_free[target];
>> -				continue;
>> -			} else {
>> -				mctelem_drop_count++;
>> -				return (NULL);
>> -			}
>> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
>> +
>> +		if (bit >= MC_NENT) {
>> +			mctelem_drop_count++;
>> +			return (NULL);
>>  		}
>>  
>> -		newhead = oldhead->mcte_next;
>> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> -			struct mctelem_ent *tep = oldhead;
>> +		/* try to allocate, atomically clear free bit */
>> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
>> +			/* return element we got */
>> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>>  
>>  			mctelem_hold(tep);
>>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 09:55:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 09:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8oLP-0004CK-Bb; Thu, 30 Jan 2014 09:55:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8oLO-0004CF-Ir
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 09:55:34 +0000
Received: from [193.109.254.147:51001] by server-1.bemta-14.messagelabs.com id
	F0/7F-15438-5912AE25; Thu, 30 Jan 2014 09:55:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391075732!829191!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3262 invoked from network); 30 Jan 2014 09:55:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 09:55:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 30 Jan 2014 09:55:31 +0000
Message-Id: <52EA2FB00200007800118248@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 30 Jan 2014 09:55:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Frediano Ziglio" <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391073630.18185.0.camel@hamster.uk.xensource.com>
In-Reply-To: <1391073630.18185.0.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.01.14 at 10:20, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> Ping 

Your ping should be to the MCE maintainers - I'm merely
waiting for their comments/ack.

Jan

> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
>> These lines (in mctelem_reserve)
>> 
>>         newhead = oldhead->mcte_next;
>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> 
>> are racy. After you read the newhead pointer, another flow (thread or
>> recursive invocation) can change the whole list yet leave the head with
>> the same value. So oldhead is the same as *freelp, but you are setting
>> a new head that could point to any element (even one already in use).
>> 
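The interleaving described above can be replayed deterministically in plain C. This is a single-threaded sketch, not Xen code: cmpxchg_sim() is a hypothetical stand-in for cmpxchgptr(), and A/B/C model three free-list entries.

```c
#include <stdbool.h>
#include <stddef.h>

struct ent { struct ent *next; };

static struct ent *freelist;
static struct ent A, B, C;

/* Plain stand-in for cmpxchgptr(): swap *p to new only if *p == old,
 * returning the value seen.  Good enough for a single-threaded replay. */
static struct ent *cmpxchg_sim(struct ent **p, struct ent *old,
			       struct ent *new)
{
	struct ent *seen = *p;

	if (seen == old)
		*p = new;
	return seen;
}

/* Replay the interleaving: T1 reads oldhead/newhead from A -> B -> C,
 * another flow then pops A and B (B is now in use) and frees A back,
 * so the list is A -> C.  T1's cmpxchg still sees A at the head and
 * "succeeds", installing the in-use element B as the new head. */
static bool aba_demo(void)
{
	struct ent *oldhead, *newhead;

	A.next = &B; B.next = &C; C.next = NULL;
	freelist = &A;

	oldhead = freelist;		/* T1: reads A... */
	newhead = oldhead->next;	/* ...and B, then is interrupted */

	freelist = &B;			/* other flow: pop A */
	freelist = &C;			/* pop B; B is now allocated */
	A.next = freelist;		/* free A back: list is A -> C */
	freelist = &A;

	/* T1 resumes: the head compares equal, so the update goes through */
	return cmpxchg_sim(&freelist, oldhead, newhead) == oldhead
	       && freelist == &B;	/* head now points at in-use B */
}
```

The comparison on the head alone cannot tell that the list behind it was rebuilt in the meantime, which is exactly why the patch drops the linked free list in favour of per-entry bits.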
>> This patch instead uses a bit array and atomic bit operations.
>> 
>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>> ---
>>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 
> ++++++++++++++-----------------------
>>  1 file changed, 30 insertions(+), 51 deletions(-)
>> 
>> Changes from v1:
>> - Use bitmap to allow any number of items to be used;
>> - Use a single bitmap to simplify reserve loop;
>> - Remove HOME flags as they were not used anymore.
>> 
>> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c 
> b/xen/arch/x86/cpu/mcheck/mctelem.c
>> index 895ce1a..ed8e8d2 100644
>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>> @@ -37,24 +37,19 @@ struct mctelem_ent {
>>  	void *mcte_data;		/* corresponding data payload */
>>  };
>>  
>> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
>> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
>> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
>> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
>> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
>> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>>  
>> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>>  				MCTE_F_STATE_UNCOMMITTED | \
>>  				MCTE_F_STATE_COMMITTED | \
>>  				MCTE_F_STATE_PROCESSING)
>>  
>> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
>> -
>>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>>  #define	MCTE_SET_CLASS(tep, new) do { \
>>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
>> @@ -69,6 +64,8 @@ struct mctelem_ent {
>>  #define	MC_URGENT_NENT		10
>>  #define	MC_NONURGENT_NENT	20
>>  
>> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
>> +
>>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>>  
>>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>> @@ -77,11 +74,9 @@ struct mctelem_ent {
>>  static struct mc_telem_ctl {
>>  	/* Linked lists that thread the array members together.
>>  	 *
>> -	 * The free lists are singly-linked via mcte_next, and we allocate
>> -	 * from them by atomically unlinking an element from the head.
>> -	 * Consumed entries are returned to the head of the free list.
>> -	 * When an entry is reserved off the free list it is not linked
>> -	 * on any list until it is committed or dismissed.
>> +	 * The free list is a bit array where a set bit means free.
>> +	 * As the number of elements is quite small, it is easy to
>> +	 * allocate atomically this way.
>>  	 *
>>  	 * The committed list grows at the head and we do not maintain a
>>  	 * tail pointer; insertions are performed atomically.  The head
>> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>>  	 * we can lock it for updates.  The head of the processing list
>>  	 * always has the oldest telemetry, and we append (as above)
>>  	 * at the tail of the processing list. */
>> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>>   */
>>  static void mctelem_free(struct mctelem_ent *tep)
>>  {
>> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
>> -	    MC_URGENT : MC_NONURGENT;
>> -
>>  	BUG_ON(tep->mcte_refcnt != 0);
>>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>>  
>>  	tep->mcte_prev = NULL;
>> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>> +	tep->mcte_next = NULL;
>> +
>> +	/* set free in array */
>> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>>  }
>>  
>>  /* Increment the reference count of an entry that is not linked on to
>> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>>  	}
>>  
>>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
>> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
>> -	    datasz)) == NULL) {
>> +	    MC_NENT)) == NULL ||
>> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>>  		if (mctctl.mctc_elems)
>>  			xfree(mctctl.mctc_elems);
>>  		printk("Allocations for MCA telemetry failed\n");
>>  		return;
>>  	}
>>  
>> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>> -		struct mctelem_ent *tep, **tepp;
>> +	for (i = 0; i < MC_NENT; i++) {
>> +		struct mctelem_ent *tep;
>>  
>>  		tep = mctctl.mctc_elems + i;
>>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>>  		tep->mcte_refcnt = 0;
>>  		tep->mcte_data = datarr + i * datasz;
>>  
>> -		if (i < MC_URGENT_NENT) {
>> -			tepp = &mctctl.mctc_free[MC_URGENT];
>> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>> -		} else {
>> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>> -		}
>> -
>> -		tep->mcte_next = *tepp;
>> +		__set_bit(i, mctctl.mctc_free);
>> +		tep->mcte_next = NULL;
>>  		tep->mcte_prev = NULL;
>> -		*tepp = tep;
>>  	}
>>  }
>>  
>> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>>  
>>  /* Reserve a telemetry entry, or return NULL if none available.
>>   * If we return an entry then the caller must subsequently call exactly one 
> of
>> - * mctelem_unreserve or mctelem_commit for that entry.
>> + * mctelem_dismiss or mctelem_commit for that entry.
>>   */
>>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>>  {
>> -	struct mctelem_ent **freelp;
>> -	struct mctelem_ent *oldhead, *newhead;
>> -	mctelem_class_t target = (which == MC_URGENT) ?
>> -	    MC_URGENT : MC_NONURGENT;
>> +	unsigned bit;
>> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>>  
>> -	freelp = &mctctl.mctc_free[target];
>>  	for (;;) {
>> -		if ((oldhead = *freelp) == NULL) {
>> -			if (which == MC_URGENT && target == MC_URGENT) {
>> -				/* raid the non-urgent freelist */
>> -				target = MC_NONURGENT;
>> -				freelp = &mctctl.mctc_free[target];
>> -				continue;
>> -			} else {
>> -				mctelem_drop_count++;
>> -				return (NULL);
>> -			}
>> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
>> +
>> +		if (bit >= MC_NENT) {
>> +			mctelem_drop_count++;
>> +			return (NULL);
>>  		}
>>  
>> -		newhead = oldhead->mcte_next;
>> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> -			struct mctelem_ent *tep = oldhead;
>> +		/* try to allocate, atomically clear free bit */
>> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
>> +			/* return element we got */
>> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>>  
>>  			mctelem_hold(tep);
>>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
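Stripped of the telemetry bookkeeping, the reserve path above boils down to a find-a-set-bit / test-and-clear retry loop. Below is a minimal single-file sketch of that pattern using C11 atomics and a GCC/Clang builtin in place of Xen's find_next_bit()/test_and_clear_bit(), and ignoring the class-based start_bit; NENT, reserve() and release() are illustrative names, not Xen's.

```c
#include <stdatomic.h>
#include <stddef.h>

#define NENT 30				/* MC_URGENT_NENT + MC_NONURGENT_NENT */

static atomic_ulong free_map = (1UL << NENT) - 1;  /* set bit => entry free */

/* Allocate the lowest free entry, or -1 if none.  Losing the
 * test-and-clear race just means another pass around the loop. */
static int reserve(void)
{
	for (;;) {
		unsigned long map = atomic_load(&free_map);

		if (map == 0)
			return -1;		/* nothing free: drop it */

		int bit = __builtin_ctzl(map);	/* lowest set bit */

		/* test_and_clear_bit() analogue: clear the bit, and check
		 * it really was set in the value we cleared it from. */
		if (atomic_fetch_and(&free_map, ~(1UL << bit)) & (1UL << bit))
			return bit;
	}
}

/* set_bit() analogue, as in the new mctelem_free(). */
static void release(int bit)
{
	atomic_fetch_or(&free_map, 1UL << bit);
}
```

Unlike the cmpxchg-on-head scheme, a stale view here is harmless: the atomic fetch-and either clears a bit that was still set (success) or reports it already clear (retry), with no pointer to go stale.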



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 10:22:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 10:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8okw-0005Rh-93; Thu, 30 Jan 2014 10:21:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8okt-0005RV-Nk
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 10:21:56 +0000
Received: from [85.158.143.35:51509] by server-2.bemta-4.messagelabs.com id
	B4/8E-10891-2C72AE25; Thu, 30 Jan 2014 10:21:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391077312!1880130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14318 invoked from network); 30 Jan 2014 10:21:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 10:21:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,748,1384300800"; d="scan'208";a="96065220"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 10:21:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 05:21:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8oko-000827-Td;
	Thu, 30 Jan 2014 10:21:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8oko-000406-OG;
	Thu, 30 Jan 2014 10:21:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24618-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 10:21:50 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24618: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24618 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24618/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 24610 REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    8 debian-fixup                fail pass in 24610
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24598
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 24610 pass in 24618
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 24598 pass in 24618

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24598 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode L2's instruction to handle an IO access
    directly, and may get X86EMUL_RETRY while handling that IO request. If at
    the same time a virtual vmexit is pending (for example, an interrupt to
    inject into L1), the hypervisor switches the VCPU context from L2 to L1.
    We are then in L1's context, but the X86EMUL_RETRY means the hypervisor
    will retry the IO request later, and unfortunately the retry will happen
    in L1's context, which causes the problem. The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For such
    accesses the shadow EPT table should point at the device's MMIO. But in
    the current logic, L0 does not distinguish whether the MMIO comes from
    qemu or from a physical device when building the shadow EPT table. This
    is wrong. This patch sets up the correct shadow EPT table for such MMIO
    ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list, and
    T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded, but in the meantime prev
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
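The fix described above, keeping the expected head in a local that is only refreshed by the compare itself, can be sketched with C11 atomics: atomic_compare_exchange_weak() reloads the expected value on failure, so the shared field is never compared against a separately re-read copy. This xchg_head() is a simplified stand-in for Xen's mctelem_xchg_head(), assuming cmpxchgptr() behaves like a full-barrier compare-and-swap.

```c
#include <stdatomic.h>
#include <stddef.h>

struct ent { struct ent *next; };

/* Push 'item' onto the list at *headp; 'linkp' points at the field in
 * the item that should link to the old head (e.g. &item->next).  The
 * expected head lives in the local 'old', so a concurrent change to
 * memory between the load and the compare cannot fool the loop: on
 * failure the CAS itself refreshes 'old'.  *linkp is written before
 * the CAS, since a successful CAS publishes the item immediately. */
static void xchg_head(struct ent *_Atomic *headp, struct ent **linkp,
		      struct ent *item)
{
	struct ent *old = atomic_load(headp);

	do {
		*linkp = old;
	} while (!atomic_compare_exchange_weak(headp, &old, item));
}
```

Note there is no barrier between setting *linkp and the CAS: as the commit message says, the compare-and-swap is already a full memory barrier.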
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and it may get X86EMUL_RETRY while handling that IO
    request. If at the same time a virtual vmexit is pending (for example,
    an interrupt to be injected into L1), the hypervisor switches the vCPU
    context from L2 to L1. The X86EMUL_RETRY means the hypervisor will
    retry the IO request later, but that retry now unfortunately happens in
    L1's context, which causes the problem. The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fix wrong handling of L2 guest's direct MMIO access
    
    An L2 guest can access a physical device directly (nested VT-d). For
    such accesses, the shadow EPT table should point at the device's MMIO.
    But the current logic, when building the shadow EPT table, does not
    distinguish whether the MMIO belongs to qemu or to a physical device.
    This is wrong. This patch sets up correct shadow EPT entries for such
    MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element on the committed list,
    and T2 trying to consume the list.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer from memory again, and compares
       it with the (successful) cmpxchg result; but in the meantime prev
       has changed in memory.
    5. T1 wrongly concludes the cmpxchg failed and goes around the loop
       again, linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), because after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier. The wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 10:57:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 10:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pIk-0006o4-Fu; Thu, 30 Jan 2014 10:56:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8pIj-0006nz-A4
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 10:56:53 +0000
Received: from [193.109.254.147:36663] by server-1.bemta-14.messagelabs.com id
	E4/D4-15438-4FF2AE25; Thu, 30 Jan 2014 10:56:52 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391079411!841703!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11184 invoked from network); 30 Jan 2014 10:56:52 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 10:56:52 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391079411; l=2980;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=o5wyuYwGc92HNw45p/QH8J9ZXOs=;
	b=stg0nXKZsUq1GBYrX278Zb6PqU6bIZ8c9PfwqchO0+TZwKsUlP1yIo/Wo/9BZ0aiBVI
	fVeBDLrehseGFM4av+2eXmniMDfXc9NBiAOb/laJyEaFUzdlEcjimK5HAo2W/oIpaNSeH
	7TmTMWQloNa1fCU3aAHt6XIGg1cuPK3U1jg=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id L03582q0UAupa3F
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 11:56:51 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 2F83250269; Thu, 30 Jan 2014 11:56:51 +0100 (CET)
Date: Thu, 30 Jan 2014 11:56:51 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140130105651.GA20496@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1390991329.31814.58.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, Ian Campbell wrote:

> On Tue, 2014-01-28 at 19:24 +0100, Olaf Hering wrote:
> > +    ("discard_enable", integer),
> I have a feeling this should be a libxl_defbool, to allow for the
> possibility of "libxl does what is best/lets the backend decide".
> 
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index 2845ca4..3633a7d 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
> >          flexarray_append(back, disk->readwrite ? "w" : "r");
> >          flexarray_append(back, "device-type");
> >          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> > +        flexarray_append(back, "discard_enable");
> > +        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));
> And if this were a defbool then you'd want to use libxl_defbool_is_default: i.e.
> 	if (!libxl_defbool_is_default(disk->discard_enable))
> 		flexarray_append(back, ..., libxl_defbool_val(...) ? "1" : "0"))
> 
> (note the lack of libxl_sprintf here too).

Did you have something like this in mind? It's all it takes.

Olaf


diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..bbaf450 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        if (!libxl_defbool_is_default(disk->discard_enable))
+            flexarray_append_pair(back, "discard-enable", libxl_defbool_val(disk->discard_enable) ? "1" : "0");
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..6575515 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
     ("removable", integer),
     ("readwrite", integer),
     ("is_cdrom", integer),
+    ("discard_enable", libxl_defbool),
     ])
 
 libxl_device_nic = Struct("device_nic", [
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..2585bee 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=1,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=off,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+discard=0,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 
  /* the target magic parameter, eats the rest of the string */
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:07:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pSl-0007NP-6g; Thu, 30 Jan 2014 11:07:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8pSj-0007NA-2E
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:07:13 +0000
Received: from [85.158.139.211:23391] by server-8.bemta-5.messagelabs.com id
	9D/6A-05298-0623AE25; Thu, 30 Jan 2014 11:07:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391080030!562822!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8600 invoked from network); 30 Jan 2014 11:07:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:07:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98044986"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:07:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 06:07:09 -0500
Message-ID: <1391080028.29487.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 30 Jan 2014 11:07:08 +0000
In-Reply-To: <20140130105651.GA20496@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
	<20140130105651.GA20496@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:56 +0100, Olaf Hering wrote:
> On Wed, Jan 29, Ian Campbell wrote:
> 
> > On Tue, 2014-01-28 at 19:24 +0100, Olaf Hering wrote:
> > > +    ("discard_enable", integer),
> > I have a feeling this should be a libxl_defbool, to allow for the
> > possibility of "libxl does what is best/lets the backend decide".
> > 
> > > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > > index 2845ca4..3633a7d 100644
> > > --- a/tools/libxl/libxl.c
> > > +++ b/tools/libxl/libxl.c
> > > @@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
> > >          flexarray_append(back, disk->readwrite ? "w" : "r");
> > >          flexarray_append(back, "device-type");
> > >          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> > > +        flexarray_append(back, "discard_enable");
> > > +        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));
> > And if this were a defbool then you'd want to use libxl_defbool_is_default: i.e.
> > 	if (!libxl_defbool_is_default(disk->discard_enable))
> > 		flexarray_append(back, ..., libxl_defbool_val(...) ? "1" : "0"))
> > 
> > (note the lack of libxl_sprintf here too).
> 
> Did you have something like this in mind? It's all it takes.

Looks about right, yes (modulo the over-long line)

Thanks.
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:07:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pSl-0007NP-6g; Thu, 30 Jan 2014 11:07:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8pSj-0007NA-2E
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:07:13 +0000
Received: from [85.158.139.211:23391] by server-8.bemta-5.messagelabs.com id
	9D/6A-05298-0623AE25; Thu, 30 Jan 2014 11:07:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391080030!562822!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8600 invoked from network); 30 Jan 2014 11:07:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:07:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98044986"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:07:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 06:07:09 -0500
Message-ID: <1391080028.29487.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 30 Jan 2014 11:07:08 +0000
In-Reply-To: <20140130105651.GA20496@aepfle.de>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<1390991329.31814.58.camel@kazak.uk.xensource.com>
	<20140130105651.GA20496@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:56 +0100, Olaf Hering wrote:
> On Wed, Jan 29, Ian Campbell wrote:
> 
> > On Tue, 2014-01-28 at 19:24 +0100, Olaf Hering wrote:
> > > +    ("discard_enable", integer),
> > I have a feeling this should be a libxl_defbool, to allow for the
> > possibility of "libxl does what is best/lets the backend decide".
> > 
> > > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > > index 2845ca4..3633a7d 100644
> > > --- a/tools/libxl/libxl.c
> > > +++ b/tools/libxl/libxl.c
> > > @@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
> > >          flexarray_append(back, disk->readwrite ? "w" : "r");
> > >          flexarray_append(back, "device-type");
> > >          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> > > +        flexarray_append(back, "discard_enable");
> > > +        flexarray_append(back, libxl__sprintf(gc, "%d", (disk->discard_enable) ? 1 : 0));
> > And if this were a defbool then you'd want to use libxl_defbool_is_default: i.e.
> > 	if (!libxl_defbool_is_default(disk->discard_enable))
> > 		flexarray_append(back, ..., libxl_defbool_val(...) ? "1" : "0"))
> > 
> > (note the lack of libxl_sprintf here too).
> 
> Did you have something like this in mind? It's all it takes.

Looks about right, yes (modulo the over long line)

Thanks.
Ian.
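The tri-state behaviour discussed above (a libxl_defbool is unset by default, meaning "let libxl/the backend decide", and the xenstore key is written only when the application expressed a choice) can be sketched in Python. This is an illustrative model of the pattern only, not the actual libxl C API; the class and function names are hypothetical:

```python
class DefBool:
    """Tri-state boolean: default (unset), or explicitly True/False.

    Illustrative model of the libxl_defbool pattern; not the real C API.
    """
    _DEFAULT = object()

    def __init__(self):
        self._val = DefBool._DEFAULT

    def set(self, b):
        self._val = bool(b)

    def is_default(self):
        return self._val is DefBool._DEFAULT

    def val(self):
        # Mirrors libxl_defbool_val(): only meaningful once explicitly set.
        assert not self.is_default()
        return self._val


def discard_backend_entries(discard_enable):
    # Mirrors the suggested flexarray writes: emit the backend key only
    # when the application explicitly chose a value; when left at the
    # default, write nothing and let the backend pick its own behaviour.
    if discard_enable.is_default():
        return []
    return ["discard_enable", "1" if discard_enable.val() else "0"]
```

With this shape, an application that never touches the field gets no key written at all, which is the "libxl does what is best/lets the backend decide" case.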



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:27:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pla-0008RE-10; Thu, 30 Jan 2014 11:26:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8plZ-0008R9-3V
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:26:41 +0000
Received: from [85.158.137.68:27828] by server-11.bemta-3.messagelabs.com id
	80/A7-04255-0F63AE25; Thu, 30 Jan 2014 11:26:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391081198!12320710!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30041 invoked from network); 30 Jan 2014 11:26:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:26:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98048891"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:26:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 06:26:36 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8plU-0008Nl-Rv;
	Thu, 30 Jan 2014 11:26:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8plU-0008Gw-9a;
	Thu, 30 Jan 2014 11:26:36 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21226.14059.736402.79784@mariner.uk.xensource.com>
Date: Thu, 30 Jan 2014 11:26:35 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391004947.31814.119.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 4/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
>>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > +    domctl.domain = (domid_t)domid;
> > 
> > Why can't the function parameter be domid_t right away?
> 
> It seemed that the vast majority of the current libxc functions were
> using uint32_t for whatever reason.

What's the point of the cast, though ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pqT-0000RA-QE; Thu, 30 Jan 2014 11:31:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8pqS-0000R5-Lf
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:31:44 +0000
Received: from [85.158.139.211:7040] by server-9.bemta-5.messagelabs.com id
	C7/43-11237-F183AE25; Thu, 30 Jan 2014 11:31:43 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391081501!570100!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3154 invoked from network); 30 Jan 2014 11:31:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:31:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98050254"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:31:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 06:31:41 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8pqO-0003fg-WA; Thu, 30 Jan 2014 11:31:40 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8pqO-0000tO-N5; Thu, 30 Jan 2014
	11:31:40 +0000
Date: Thu, 30 Jan 2014 11:31:40 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140130113126.GA3326@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has new
 commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
boot after the installation.

In addition to this, menuentry has some options as well
(--class red, --class gnu, etc).

Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
---
 tools/pygrub/src/GrubConf.py |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
index cb853c9..974cded 100644
--- a/tools/pygrub/src/GrubConf.py
+++ b/tools/pygrub/src/GrubConf.py
@@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
                 
     commands = {'set:root': 'root',
                 'linux': 'kernel',
+                'linux16': 'kernel',
                 'initrd': 'initrd',
+                'initrd16': 'initrd',
                 'echo': None,
                 'insmod': None,
                 'search': None}
@@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
                 continue
 
             # new image
-            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
+            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
             if title_match:
                 if img is not None:
                     raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
-- 
1.7.10.4
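The regex change in the hunk above swaps a greedy `(.*)` for a non-greedy `(.*?)` in the title capture. On RHEL 7 style menuentry lines, where a second quoted string can appear later among the options, the greedy form runs to the last quote on the line and swallows the options into the "title". A short sketch (the sample line is constructed for illustration):

```python
import re

# A grub2 menuentry line in the RHEL 7 style: a quoted title followed by
# options, one of which contains a second quoted string.
line = ("menuentry 'Red Hat Enterprise Linux Server' "
        "--class red --class gnu $menuentry_id_option 'gnulinux-simple' {")

# Old (greedy) pattern: (.*) backtracks only to the LAST quote, so the
# captured "title" spans the options and the second quoted string.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)

# Patched (non-greedy) pattern: (.*?) stops at the FIRST closing quote,
# capturing only the real title.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

print(greedy.group(1))  # wrong: ends with ...'gnulinux-simple
print(lazy.group(1))    # Red Hat Enterprise Linux Server
```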


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:32:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8prV-0000XP-8a; Thu, 30 Jan 2014 11:32:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8prT-0000XE-QS
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:32:48 +0000
Received: from [85.158.139.211:21879] by server-4.bemta-5.messagelabs.com id
	19/44-08092-F583AE25; Thu, 30 Jan 2014 11:32:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391081564!559457!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27804 invoked from network); 30 Jan 2014 11:32:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:32:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98050556"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:32:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 06:32:43 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8prP-0008PZ-J0;
	Thu, 30 Jan 2014 11:32:43 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8prP-0008Hm-Ag;
	Thu, 30 Jan 2014 11:32:43 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21226.14427.150.677204@mariner.uk.xensource.com>
Date: Thu, 30 Jan 2014 11:32:42 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391011264.31814.134.camel@kazak.uk.xensource.com>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<20140129150632.GA31958@aepfle.de>
	<1391011264.31814.134.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] libxl: add option for discard support to xl disk configuration"):
> There is no need to make the actual field conditional -- that would
> actually be wrong since it would modify the ABI depending on what the
> application asked for, meaning it would differ from how libxl was
> actually built. An application which is using an ABI before 4.5 simply
> won't think to touch this field.

You mean an application which is using an API (P not B) before 4.5.

I.e. the API is forward-compatible but the ABI is compatible only
within a Xen release.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pva-0000jz-U0; Thu, 30 Jan 2014 11:37:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8pva-0000jt-2t
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:37:02 +0000
Received: from [85.158.137.68:49350] by server-2.bemta-3.messagelabs.com id
	D0/06-06531-D593AE25; Thu, 30 Jan 2014 11:37:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391081819!12196741!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2893 invoked from network); 30 Jan 2014 11:37:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:37:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96083271"
From xen-devel-bounces@lists.xen.org Thu Jan 30 11:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pva-0000jz-U0; Thu, 30 Jan 2014 11:37:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8pva-0000jt-2t
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:37:02 +0000
Received: from [85.158.137.68:49350] by server-2.bemta-3.messagelabs.com id
	D0/06-06531-D593AE25; Thu, 30 Jan 2014 11:37:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391081819!12196741!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2893 invoked from network); 30 Jan 2014 11:37:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:37:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96083271"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 11:36:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 06:36:57 -0500
Message-ID: <1391081816.29487.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 11:36:56 +0000
In-Reply-To: <21226.14427.150.677204@mariner.uk.xensource.com>
References: <1390933497-12819-1-git-send-email-olaf@aepfle.de>
	<20140129150632.GA31958@aepfle.de>
	<1391011264.31814.134.camel@kazak.uk.xensource.com>
	<21226.14427.150.677204@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxl: add option for discard support to xl
 disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:32 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] libxl: add option for discard support to xl disk configuration"):
> > There is no need to make the actual field conditional -- that would
> > actually be wrong since it would modify the ABI depending on what the
> > application asked for, meaning it would differ from how libxl was
> > actually built. An application which is using an ABI before 4.5 simply
> > won't think to touch this field.
> 
> You mean an application which is using an API (P not B) before 4.5.

Yes, sorry.

> I.e. the API is forward-compatible but the ABI is compatible only
> within a Xen release.

Correct.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:40:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8pzE-0001Cj-JU; Thu, 30 Jan 2014 11:40:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1W8pzD-0001Cc-54
	for Xen-devel@lists.xensource.com; Thu, 30 Jan 2014 11:40:47 +0000
Received: from [85.158.139.211:23136] by server-7.bemta-5.messagelabs.com id
	10/61-14867-E3A3AE25; Thu, 30 Jan 2014 11:40:46 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391082044!586704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11633 invoked from network); 30 Jan 2014 11:40:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:40:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96084099"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 11:40:43 +0000
Received: from dhcp-3-227.uk.xensource.com (10.80.3.227) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 06:40:43 -0500
Message-ID: <52EA3A3C.4030209@citrix.com>
Date: Thu, 30 Jan 2014 11:40:44 +0000
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, <konrad.wilk@oracle.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.80.3.227]
X-DLP: MIA2
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V0] linux PVH: Set CR4 flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 00:15, Mukesh Rathor wrote:
> Konrad,
> 
> The CR4 settings were dropped from my earlier patch because you didn't
> wanna enable them. But since you do now, we need to set them in the APs
> also. If you decide not to again, please apply my prev patch
> "pvh: disable pse feature for now".

Hello Mukesh,

Could you push your CR-related patches to a git repo branch? I'm
currently having a bit of trouble figuring out which ones should be
applied and in which order.

Thanks, Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:46:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8q49-0001Ly-Bl; Thu, 30 Jan 2014 11:45:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8q48-0001Lt-5G
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:45:52 +0000
Received: from [85.158.137.68:35117] by server-8.bemta-3.messagelabs.com id
	7A/C5-16039-F6B3AE25; Thu, 30 Jan 2014 11:45:51 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391082344!12302553!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15724 invoked from network); 30 Jan 2014 11:45:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 11:45:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98053614"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 11:45:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 06:45:43 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8q3z-0003sJ-23;
	Thu, 30 Jan 2014 11:45:43 +0000
Message-ID: <52EA3B67.2070707@citrix.com>
Date: Thu, 30 Jan 2014 11:45:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Joby Poriyath <joby.poriyath@citrix.com>
References: <20140130113126.GA3326@citrix.com>
In-Reply-To: <20140130113126.GA3326@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 11:31, Joby Poriyath wrote:
> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
> instead of linux and initrd. Due to this, a RHEL 7 (beta) guest failed to
> boot after the installation.
>
> In addition to this, menuentry has some options as well
> (--class red, --class gnu, etc).
>
> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>

It is typical to put an example in tools/pygrub/examples.

Also, you will need to CC George Dunlap and specify why this change
might want a freeze exception to be included in 4.4.

> ---
>  tools/pygrub/src/GrubConf.py |    4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>                  
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>  
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)

Why is this necessary? fedora-19 also has the aforementioned "--class
red, --class gnu" options, yet it is parsed happily.
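For reference, the behavioural difference between the two patterns shows up on a menuentry line that contains a second quoted string after the title (e.g. a `$menuentry_id_option '...'` argument, which RHEL 7-style configs emit); the sample line below is invented but follows that shape:

```python
import re

# A RHEL 7-style menuentry line: note the SECOND quoted string
# ($menuentry_id_option '...') after the title.
line = ("menuentry 'Red Hat Enterprise Linux Server' --class red "
        "--class gnu $menuentry_id_option 'gnulinux-simple' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy   = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy (.*) runs on to the LAST quote, swallowing the options
# into the title; lazy (.*?) stops at the first closing quote.
print(greedy.group(1))
print(lazy.group(1))   # -> Red Hat Enterprise Linux Server
```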

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:59:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qGv-0002B2-Vw; Thu, 30 Jan 2014 11:59:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8qGu-0002Ax-Dp
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:59:05 +0000
Received: from [85.158.137.68:35341] by server-10.bemta-3.messagelabs.com id
	81/2C-07302-78E3AE25; Thu, 30 Jan 2014 11:59:03 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391083140!8645412!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1335 invoked from network); 30 Jan 2014 11:59:02 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 11:59:02 -0000
Received: from mail-ve0-f176.google.com ([209.85.128.176]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuo+hGlihFbwWylQh1l24iDHFE8M4CqA@postini.com;
	Thu, 30 Jan 2014 03:59:02 PST
Received: by mail-ve0-f176.google.com with SMTP id oz11so2014907veb.21
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 03:58:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=GjFiOonNfqD1cmz8h7XbJA+MMAGo/wDLMQHKDIMRfw8=;
	b=A0StgPY3wD+rQpTZwWoJbqZA75y84fYMvmHupeYXOXzV6SID55W4fvnpkVz/6vgd7O
	Zjz2ITbpzdBYYM1tRl1dbHv5JRwsYcX29HW49eba9larb5BdqSY5Z0PgP5H26BP91pHZ
	GjdcKEHACHmgNP7hRfmJaxRMdZaR3XIFx/QLk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=GjFiOonNfqD1cmz8h7XbJA+MMAGo/wDLMQHKDIMRfw8=;
	b=Jil07i7zhnvPWO6QvYwh1EVTdFKrsEAKvFQf3qQw65NhyPZJLviYjfB72AdI53Lc9q
	RGHrIRMDVoQC+RYgjPmNl5J/29iCrp1pfXA1AS890SNdTXWTODLrov1T65L88ISVvB1s
	i7apvJDoOC0QTHqWz4QKBUkEYc9sffc+VqGn/11vJOQxmK4/yIpyyyHF7xUOdfOJUlMK
	x3wK6vC3MKSlxCqrLQJzjY/pdph3RIXayc0abT6cxZwJn/6y+vIxzInqD5lU43nqqAyY
	RfY6vteWKB2/23BRx9aLouAwo/MtmpNxKJz02hr9BDxW/S7PWwgLmAVZAY/DNh+ILo/n
	LThQ==
X-Gm-Message-State: ALoCoQnFLPc3xMp8bQ6efdgCpDPwlYqsXBmTj1OVk/4iOUv2EJzfsl1EzQyHe46CCDTxrkUa91t30XAlwlEuGsXrw1wX+qTp9vXsPIK2wEsrgUk2WPQV+/6pXiph25c8aFRxHDwGgIwcly31z3wo4YduP2/c5ATZp31qjLXQc5ULxrDRD9j08Mg=
X-Received: by 10.52.170.3 with SMTP id ai3mr192011vdc.35.1391083139442;
	Thu, 30 Jan 2014 03:58:59 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.170.3 with SMTP id ai3mr191994vdc.35.1391083139234; Thu,
	30 Jan 2014 03:58:59 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 03:58:59 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401281345521.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401281345521.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 13:58:59 +0200
Message-ID: <CAJEb2DHntD6dCF-o5UiBL=oZzTLg=Hr1Y5TzY5VjroLg8CRW9g@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 3:58 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
>> This patch is needed to avoid possible deadlocks in case of simultaneous
>> occurrence of cross-interrupts.
>>
>> Change-Id: I574b496442253a7b67a27e2edd793526c8131284
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>> ---
>>  xen/common/smp.c |    6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/smp.c b/xen/common/smp.c
>> index 2700bd7..46d2fc6 100644
>> --- a/xen/common/smp.c
>> +++ b/xen/common/smp.c
>> @@ -55,7 +55,11 @@ void on_selected_cpus(
>>
>>      ASSERT(local_irq_is_enabled());
>>
>> -    spin_lock(&call_lock);
>> +    if (!spin_trylock(&call_lock)) {
>> +        if (smp_call_function_interrupt())
>> +            return;
>
> If smp_call_function_interrupt returns -EPERM, shouldn't we go back to
> spinning on call_lock?
> Also there is a race condition between spin_lock, cpumask_copy and
> smp_call_function_interrupt: smp_call_function_interrupt could be called
> on cpu1 after cpu0 acquired the lock, but before cpu0 set
> call_data.selected.
>
> I think the correct implemention would be:
>
>
> while ( unlikely(!spin_trylock(&call_lock)) )
>     smp_call_function_interrupt();
I completely agree
>
>
>
>> +        spin_lock(&call_lock);
>> +    }
>>
>>      cpumask_copy(&call_data.selected, selected);
>>
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 11:59:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 11:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qGv-0002B2-Vw; Thu, 30 Jan 2014 11:59:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8qGu-0002Ax-Dp
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 11:59:05 +0000
Received: from [85.158.137.68:35341] by server-10.bemta-3.messagelabs.com id
	81/2C-07302-78E3AE25; Thu, 30 Jan 2014 11:59:03 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391083140!8645412!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1335 invoked from network); 30 Jan 2014 11:59:02 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 11:59:02 -0000
Received: from mail-ve0-f176.google.com ([209.85.128.176]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuo+hGlihFbwWylQh1l24iDHFE8M4CqA@postini.com;
	Thu, 30 Jan 2014 03:59:02 PST
Received: by mail-ve0-f176.google.com with SMTP id oz11so2014907veb.21
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 03:58:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=GjFiOonNfqD1cmz8h7XbJA+MMAGo/wDLMQHKDIMRfw8=;
	b=A0StgPY3wD+rQpTZwWoJbqZA75y84fYMvmHupeYXOXzV6SID55W4fvnpkVz/6vgd7O
	Zjz2ITbpzdBYYM1tRl1dbHv5JRwsYcX29HW49eba9larb5BdqSY5Z0PgP5H26BP91pHZ
	GjdcKEHACHmgNP7hRfmJaxRMdZaR3XIFx/QLk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=GjFiOonNfqD1cmz8h7XbJA+MMAGo/wDLMQHKDIMRfw8=;
	b=Jil07i7zhnvPWO6QvYwh1EVTdFKrsEAKvFQf3qQw65NhyPZJLviYjfB72AdI53Lc9q
	RGHrIRMDVoQC+RYgjPmNl5J/29iCrp1pfXA1AS890SNdTXWTODLrov1T65L88ISVvB1s
	i7apvJDoOC0QTHqWz4QKBUkEYc9sffc+VqGn/11vJOQxmK4/yIpyyyHF7xUOdfOJUlMK
	x3wK6vC3MKSlxCqrLQJzjY/pdph3RIXayc0abT6cxZwJn/6y+vIxzInqD5lU43nqqAyY
	RfY6vteWKB2/23BRx9aLouAwo/MtmpNxKJz02hr9BDxW/S7PWwgLmAVZAY/DNh+ILo/n
	LThQ==
X-Gm-Message-State: ALoCoQnFLPc3xMp8bQ6efdgCpDPwlYqsXBmTj1OVk/4iOUv2EJzfsl1EzQyHe46CCDTxrkUa91t30XAlwlEuGsXrw1wX+qTp9vXsPIK2wEsrgUk2WPQV+/6pXiph25c8aFRxHDwGgIwcly31z3wo4YduP2/c5ATZp31qjLXQc5ULxrDRD9j08Mg=
X-Received: by 10.52.170.3 with SMTP id ai3mr192011vdc.35.1391083139442;
	Thu, 30 Jan 2014 03:58:59 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.170.3 with SMTP id ai3mr191994vdc.35.1391083139234; Thu,
	30 Jan 2014 03:58:59 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 03:58:59 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401281345521.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<1390844023-23123-3-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1401281345521.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 13:58:59 +0200
Message-ID: <CAJEb2DHntD6dCF-o5UiBL=oZzTLg=Hr1Y5TzY5VjroLg8CRW9g@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 2/2] xen/arm: Fix deadlock in
 on_selected_cpus function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 3:58 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 27 Jan 2014, Oleksandr Tyshchenko wrote:
>> This patch is needed to avoid possible deadlocks in case of simultaneous
>> occurrence of cross-interrupts.
>>
>> Change-Id: I574b496442253a7b67a27e2edd793526c8131284
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
>> ---
>>  xen/common/smp.c |    6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/smp.c b/xen/common/smp.c
>> index 2700bd7..46d2fc6 100644
>> --- a/xen/common/smp.c
>> +++ b/xen/common/smp.c
>> @@ -55,7 +55,11 @@ void on_selected_cpus(
>>
>>      ASSERT(local_irq_is_enabled());
>>
>> -    spin_lock(&call_lock);
>> +    if (!spin_trylock(&call_lock)) {
>> +        if (smp_call_function_interrupt())
>> +            return;
>
> If smp_call_function_interrupt returns -EPERM, shouldn't we go back to
> spinning on call_lock?
> Also there is a race condition between spin_lock, cpumask_copy and
> smp_call_function_interrupt: smp_call_function_interrupt could be called
> on cpu1 after cpu0 acquired the lock, but before cpu0 set
> call_data.selected.
>
> I think the correct implementation would be:
>
>
> while ( unlikely(!spin_trylock(&call_lock)) )
>     smp_call_function_interrupt();
I completely agree
>
>
>
>> +        spin_lock(&call_lock);
>> +    }
>>
>>      cpumask_copy(&call_data.selected, selected);
>>
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
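For reference, the agreed-upon approach would make the top of on_selected_cpus()
look roughly like this (a sketch based on the discussion above, not the final
patch; the exact signature may differ):

```c
void on_selected_cpus(
    const cpumask_t *selected, void (*func)(void *info), void *info, int wait)
{
    ASSERT(local_irq_is_enabled());

    /* Instead of blocking on call_lock, keep servicing any pending
     * cross-call while another CPU holds the lock.  That way two CPUs
     * issuing cross-interrupts at each other cannot deadlock, and we
     * never enter the critical section before the lock is actually held,
     * avoiding the race with cpumask_copy() of call_data.selected. */
    while ( unlikely(!spin_trylock(&call_lock)) )
        smp_call_function_interrupt();

    cpumask_copy(&call_data.selected, selected);
    /* ... rest of the function unchanged ... */
}
```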

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:01:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:01:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qJ1-0002KS-8c; Thu, 30 Jan 2014 12:01:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8qJ0-0002KI-Jq
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:01:14 +0000
Received: from [85.158.143.35:31643] by server-3.bemta-4.messagelabs.com id
	12/21-11539-90F3AE25; Thu, 30 Jan 2014 12:01:13 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391083271!1897059!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20088 invoked from network); 30 Jan 2014 12:01:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:01:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96089223"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:01:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:01:08 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8qIu-00048s-By; Thu, 30 Jan 2014 12:01:08 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8qIu-00014d-5E; Thu, 30 Jan 2014
	12:01:08 +0000
Date: Thu, 30 Jan 2014 12:01:07 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140130120107.GA3441@citrix.com>
References: <20140130113126.GA3326@citrix.com>
 <52EA3B67.2070707@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EA3B67.2070707@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 11:45:43AM +0000, Andrew Cooper wrote:
> On 30/01/14 11:31, Joby Poriyath wrote:
> > menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
> > instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
> > boot after the installation.
> >
> > In addition to this, menuentry has some options as well
> > (--class red, --class gnu, etc).
> >
> > Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> 
> It is typical to put an example in tools/pygrub/examples
> 
> Also, you will need to CC George Dunlap and specify why this change
> might want a freeze exception to be included in 4.4
> 
Alright. I'll CC George.

> > ---
> >  tools/pygrub/src/GrubConf.py |    4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> > index cb853c9..974cded 100644
> > --- a/tools/pygrub/src/GrubConf.py
> > +++ b/tools/pygrub/src/GrubConf.py
> > @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
> >                  
> >      commands = {'set:root': 'root',
> >                  'linux': 'kernel',
> > +                'linux16': 'kernel',
> >                  'initrd': 'initrd',
> > +                'initrd16': 'initrd',
> >                  'echo': None,
> >                  'insmod': None,
> >                  'search': None}
> > @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
> >                  continue
> >  
> >              # new image
> > -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> > +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
> 
> Why is this necessary? fedora-19 also has the aforementioned "--class
> red, --class gnu" yet is parsed happily.

A menuentry from RHEL 7 looks like this...

menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' 

So we need a 'lazy' match with '.*?'.
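The difference is easy to demonstrate (the menuentry line below is a shortened,
illustrative version of the RHEL 7 one quoted above):

```python
import re

# Hypothetical RHEL 7-style menuentry line (identifiers shortened for clarity).
line = ("menuentry 'Red Hat Enterprise Linux Everything' "
        "--class red --class gnu-linux --class os "
        "$menuentry_id_option 'gnulinux-0-rescue-advanced' {")

# Greedy (.*) backtracks from the end of the line, so the title capture
# swallows everything up to the LAST quote, including the --class options
# and the $menuentry_id_option string.
greedy = re.match('^menuentry ["\'](.*)["\'] (.*){', line)

# Lazy (.*?) stops at the FIRST closing quote, capturing only the title.
lazy = re.match('^menuentry ["\'](.*?)["\'] (.*){', line)

print(greedy.group(1))  # title polluted with --class options
print(lazy.group(1))    # just the title
```

With two quoted strings on the line, only the lazy form recovers the intended
title.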

-Joby



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:02:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qKZ-0002Ss-On; Thu, 30 Jan 2014 12:02:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8qKX-0002Sh-QT
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:02:50 +0000
Received: from [85.158.139.211:48022] by server-9.bemta-5.messagelabs.com id
	86/3D-11237-96F3AE25; Thu, 30 Jan 2014 12:02:49 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391083367!583746!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6162 invoked from network); 30 Jan 2014 12:02:47 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 12:02:47 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391083367; l=6508;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=Qhy1Jh145nwAJw7Vef60s51sncA=;
	b=Dss1lxqdvenqGcmuX2hKMcUpbwd+pKYhWrg9q3u3w0aBai91NsiODGKfAg2ivLAbs0k
	AxbwYvL5DdsdGYmflXXajzirbPYpTdUOySBjtGef+j1fcC+UOdBZ27/+3dDNsReC/8Tv+
	G8MIoq20dN1q52argDTcjLuLtVt8wt3VshU=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id j045aaq0UC2ldSk
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 13:02:47 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D0A2650269; Thu, 30 Jan 2014 13:02:46 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu, 30 Jan 2014 13:02:44 +0100
Message-Id: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] libxl: add option for discard support to xl
	disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handle new option discard=on|off for disk configuration. It is supposed
to disable discard support if file based backing storage was
intentionally created non-sparse to avoid fragmentation of the file.

The option is a boolean and intended for the backend driver. A new
boolean property "discard_enable" is written to the backend node. An
upcoming patch for qemu will make use of this property. The kernel
blkback driver may be updated as well to disable discard for phy based
backing storage.

v2:
rename xenstore property from discard_enable to discard-enable
update description in xl-disk-configuration.txt
use libxl_defbool as type for discard_enable
update check-xl-disk-parse to use "<default>"
add LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_ENABLE to libxl.h

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/misc/xl-disk-configuration.txt | 15 +++++++++++++++
 tools/libxl/check-xl-disk-parse     | 21 ++++++++++++++-------
 tools/libxl/libxl.c                 |  3 +++
 tools/libxl/libxl.h                 |  5 +++++
 tools/libxl/libxl_types.idl         |  1 +
 tools/libxl/libxlu_disk_l.l         |  4 ++++
 xen/include/public/io/blkif.h       |  8 ++++++++
 7 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 5bd456d..c9fd9bd 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -178,6 +178,21 @@ information to be interpreted by the executable program <script>,
 These scripts are normally called "block-<script>".
 
 
+discard=<boolean>
+---------------
+
+Description:           Instruct backend to advertise discard support to frontend
+Supported values:      on, off, 0, 1
+Mandatory:             No
+Default value:         on, if available for that backend type
+
+This option is an advisory setting for the backend driver: depending on the
+value, the backend may advertise discard support (TRIM, UNMAP) to the
+frontend. The real benefit of this option is to be able to force it off
+rather than on. It allows disabling "hole punching" for file based backends
+which were intentionally created non-sparse to avoid fragmentation of the file.
+
+
 
 ============================================
 DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
index 797277c..4d26ca3 100755
--- a/tools/libxl/check-xl-disk-parse
+++ b/tools/libxl/check-xl-disk-parse
@@ -61,7 +61,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 END
@@ -82,7 +83,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 END
@@ -104,7 +106,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -121,7 +124,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -142,7 +146,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -160,7 +165,8 @@ disk: {
     "script": "block-iscsi",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -180,7 +186,8 @@ disk: {
     "script": "block-drbd",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..64d081b 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,9 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        if (!libxl_defbool_is_default(disk->discard_enable))
+            flexarray_append_pair(back, "discard-enable",
+                                  libxl_defbool_val(disk->discard_enable) ? "1" : "0");
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f26ef3c 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -95,6 +95,11 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * The libxl_device_disk has the discard_enable field.
+ */
+#define LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_ENABLE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..6575515 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
     ("removable", integer),
     ("readwrite", integer),
     ("is_cdrom", integer),
+    ("discard_enable", libxl_defbool),
     ])
 
 libxl_device_nic = Struct("device_nic", [
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..2585bee 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=1,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=off,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+discard=0,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 
  /* the target magic parameter, eats the rest of the string */
 
diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 542f123..4704fe6 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,6 +175,14 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
+ * discard-enable
+ *      Values:         0/1 (boolean)
+ *      Default Value:  1
+ *
+ *      This optional property, set by the toolstack, instructs the backend to
+ *      offer discard to the frontend. If the property is missing the backend
+ *      should offer discard if the backing storage actually supports it.
+ *
  * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
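With the patch applied, a guest could opt out of discard for a deliberately
non-sparse image like this (hypothetical xl config fragment; the path is
illustrative, and target is kept last since the lexer treats it as the
rest-of-string parameter):

```
disk = [ 'format=raw, vdev=xvda, access=rw, discard=off, target=/var/lib/xen/images/nonsparse.img' ]
```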

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:02:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qKZ-0002Ss-On; Thu, 30 Jan 2014 12:02:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8qKX-0002Sh-QT
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:02:50 +0000
Received: from [85.158.139.211:48022] by server-9.bemta-5.messagelabs.com id
	86/3D-11237-96F3AE25; Thu, 30 Jan 2014 12:02:49 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391083367!583746!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6162 invoked from network); 30 Jan 2014 12:02:47 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 12:02:47 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391083367; l=6508;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=Qhy1Jh145nwAJw7Vef60s51sncA=;
	b=Dss1lxqdvenqGcmuX2hKMcUpbwd+pKYhWrg9q3u3w0aBai91NsiODGKfAg2ivLAbs0k
	AxbwYvL5DdsdGYmflXXajzirbPYpTdUOySBjtGef+j1fcC+UOdBZ27/+3dDNsReC/8Tv+
	G8MIoq20dN1q52argDTcjLuLtVt8wt3VshU=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id j045aaq0UC2ldSk
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 13:02:47 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D0A2650269; Thu, 30 Jan 2014 13:02:46 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu, 30 Jan 2014 13:02:44 +0100
Message-Id: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] libxl: add option for discard support to xl
	disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handle new option discard=on|off for disk configuration. It is supposed
to disable discard support if file based backing storage was
intentionally created non-sparse to avoid fragmentation of the file.

The option is a boolean and intended for the backend driver. A new
boolean property "discard_enable" is written to the backend node. An
upcoming patch for qemu will make use of this property. The kernel
blkback driver may be updated as well to disable discard for phy based
backing storage.

v2:
rename xenstore property from discard_enable to discard-enable
update description in xl-disk-configuration.txt
use libxl_defbool as type for discard_enable
update check-xl-disk-parse to use "<default>"
add LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_ENABLE to libxl.h

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/misc/xl-disk-configuration.txt | 15 +++++++++++++++
 tools/libxl/check-xl-disk-parse     | 21 ++++++++++++++-------
 tools/libxl/libxl.c                 |  3 +++
 tools/libxl/libxl.h                 |  5 +++++
 tools/libxl/libxl_types.idl         |  1 +
 tools/libxl/libxlu_disk_l.l         |  4 ++++
 xen/include/public/io/blkif.h       |  8 ++++++++
 7 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 5bd456d..c9fd9bd 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -178,6 +178,21 @@ information to be interpreted by the executable program <script>,
 These scripts are normally called "block-<script>".
 
 
+discard=<boolean>
+---------------
+
+Description:           Instruct backend to advertise discard support to frontend
+Supported values:      on, off, 0, 1
+Mandatory:             No
+Default value:         on if, available for that backend typ
+
+This option is an advisory setting for the backend driver, depending of the
+value, to advertise discard support (TRIM, UNMAP) to the frontend. The real
+benefit of this option is to be able to force it off rather than on. It allows
+to disable "hole punching" for file based backends which were intentionally
+created non-sparse to avoid fragmentation of the file.
+
+
 
 ============================================
 DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
diff --git a/tools/libxl/check-xl-disk-parse b/tools/libxl/check-xl-disk-parse
index 797277c..4d26ca3 100755
--- a/tools/libxl/check-xl-disk-parse
+++ b/tools/libxl/check-xl-disk-parse
@@ -61,7 +61,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 END
@@ -82,7 +83,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 END
@@ -104,7 +106,8 @@ disk: {
     "script": null,
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -121,7 +124,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -142,7 +146,8 @@ disk: {
     "script": null,
     "removable": 1,
     "readwrite": 0,
-    "is_cdrom": 1
+    "is_cdrom": 1,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -160,7 +165,8 @@ disk: {
     "script": "block-iscsi",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
@@ -180,7 +186,8 @@ disk: {
     "script": "block-drbd",
     "removable": 0,
     "readwrite": 1,
-    "is_cdrom": 0
+    "is_cdrom": 0,
+    "discard_enable": "<default>"
 }
 
 EOF
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..64d081b 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,9 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        if (!libxl_defbool_is_default(disk->discard_enable))
+            flexarray_append_pair(back, "discard-enable",
+                                  libxl_defbool_val(disk->discard_enable) ? "1" : "0");
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f26ef3c 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -95,6 +95,11 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * The libxl_device_disk structure has the discard_enable field.
+ */
+#define LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_ENABLE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..6575515 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -415,6 +415,7 @@ libxl_device_disk = Struct("device_disk", [
     ("removable", integer),
     ("readwrite", integer),
     ("is_cdrom", integer),
+    ("discard_enable", libxl_defbool),
     ])
 
 libxl_device_nic = Struct("device_nic", [
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..2585bee 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=1,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
+discard=off,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+discard=0,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
 
  /* the target magic parameter, eats the rest of the string */
 
diff --git a/xen/include/public/io/blkif.h b/xen/include/public/io/blkif.h
index 542f123..4704fe6 100644
--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -175,6 +175,14 @@
  *
  *------------------------- Backend Device Properties -------------------------
  *
+ * discard-enable
+ *      Values:         0/1 (boolean)
+ *      Default Value:  1
+ *
+ *      This optional property, set by the toolstack, instructs the backend to
+ *      offer discard to the frontend. If the property is missing the backend
+ *      should offer discard if the backing storage actually supports it.
+ *
  * discard-alignment
  *      Values:         <uint32_t>
  *      Default Value:  0

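With the disk-spec lexer additions above, discard can be requested or refused per disk in the xl configuration; an illustrative fragment (the device path and layout are hypothetical, and `target` comes last because the lexer treats it as the magic parameter that eats the rest of the string):

```
disk = [ 'vdev=xvda, format=raw, access=rw, discard=on, target=/dev/vg0/guest' ]
```

Per the lexer rules above, `discard=off` or `discard=0` forces discard off, and omitting the key leaves libxl's defbool at its default.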
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:03:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qLa-0002ZL-7P; Thu, 30 Jan 2014 12:03:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8qLY-0002Z2-4s
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:03:52 +0000
Received: from [85.158.137.68:2273] by server-8.bemta-3.messagelabs.com id
	EC/44-16039-7AF3AE25; Thu, 30 Jan 2014 12:03:51 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391083430!12268596!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16702 invoked from network); 30 Jan 2014 12:03:50 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.160)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 12:03:50 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391083430; l=4078;
	s=domk; d=aepfle.de;
	h=References:In-Reply-To:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=w3oU3p/nyaD+Zw/HaOS2uFtzP3o=;
	b=tGflAILsWSkMgO5HG80TjKngr51MVPu4cuK54UPaEeoSPcnX1Vb7soPXCfIWASukAtp
	u2eP9IgVrwrSwJd6L5SFZO6oSMcSiuM1Lzu8AUYKqg+qzyRlMqoCqLYqCLlkA/j/RJzxg
	kmpRyv2pN/xTPoGdLPiuoVCWKiq2BvKkJkg=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id 502954q0UC3lgpz
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 13:03:47 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id AAE7650269; Thu, 30 Jan 2014 13:03:46 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu, 30 Jan 2014 13:03:45 +0100
Message-Id: <1391083425-29574-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
Cc: anthony.perard@citrix.com, Olaf Hering <olaf@aepfle.de>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Implement discard support for xen_disk. It makes use of the existing
discard code in qemu.

Discard support is enabled unconditionally. The toolstack may provide a
"discard-enable" property in the backend node to disable it. This is
helpful in case the backing file was intentionally created non-sparse to
avoid fragmentation.

v2:
rename xenstore property from discard_enable to discard-enable
move discard_req to case BLKIF_OP_DISCARD

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 hw/block/xen_blkif.h | 12 ++++++++++++
 hw/block/xen_disk.c  | 30 ++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
index c0f4136..711b692 100644
--- a/hw/block/xen_blkif.h
+++ b/hw/block/xen_blkif.h
@@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
@@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
 	dst->handle = src->handle;
 	dst->id = src->id;
 	dst->sector_number = src->sector_number;
+	if (src->operation == BLKIF_OP_DISCARD) {
+		struct blkif_request_discard *s = (void *)src;
+		struct blkif_request_discard *d = (void *)dst;
+		d->nr_sectors = s->nr_sectors;
+		return;
+	}
 	if (n > src->nr_segments)
 		n = src->nr_segments;
 	for (i = 0; i < n; i++)
diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 03e30d7..a1f1c7e 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -114,6 +114,7 @@ struct XenBlkDev {
     int                 requests_finished;
 
     /* Persistent grants extension */
+    gboolean            feature_discard;
     gboolean            feature_persistent;
     GTree               *persistent_gnts;
     unsigned int        persistent_gnt_count;
@@ -253,6 +254,8 @@ static int ioreq_parse(struct ioreq *ioreq)
     case BLKIF_OP_WRITE:
         ioreq->prot = PROT_READ; /* from memory */
         break;
+    case BLKIF_OP_DISCARD:
+        return 0;
     default:
         xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
                       ioreq->req.operation);
@@ -521,6 +524,16 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
                         &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                         qemu_aio_complete, ioreq);
         break;
+    case BLKIF_OP_DISCARD:
+    {
+        struct blkif_request_discard *discard_req = (void *)&ioreq->req;
+        bdrv_acct_start(blkdev->bs, &ioreq->acct, discard_req->nr_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
+        ioreq->aio_inflight++;
+        bdrv_aio_discard(blkdev->bs,
+                        discard_req->sector_number, discard_req->nr_sectors,
+                        qemu_aio_complete, ioreq);
+        break;
+    }
     default:
         /* unknown operation (shouldn't happen -- parse catches this) */
         goto err;
@@ -699,6 +712,19 @@ static void blk_alloc(struct XenDevice *xendev)
     }
 }
 
+static void blk_parse_discard(struct XenBlkDev *blkdev)
+{
+    int enable;
+
+    blkdev->feature_discard = true;
+
+    if (xenstore_read_be_int(&blkdev->xendev, "discard-enable", &enable) == 0)
+	    blkdev->feature_discard = !!enable;
+
+    if (blkdev->feature_discard)
+	    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
+}
+
 static int blk_init(struct XenDevice *xendev)
 {
     struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
@@ -766,6 +792,8 @@ static int blk_init(struct XenDevice *xendev)
     xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
     xenstore_write_be_int(&blkdev->xendev, "info", info);
 
+    blk_parse_discard(blkdev);
+
     g_free(directiosafe);
     return 0;
 
@@ -801,6 +829,8 @@ static int blk_connect(struct XenDevice *xendev)
         qflags |= BDRV_O_RDWR;
         readonly = false;
     }
+    if (blkdev->feature_discard)
+        qflags |= BDRV_O_UNMAP;
 
     /* init qemu block driver */
     index = (blkdev->xendev.dev - 202 * 256) / 16;

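The backend-side rule that blk_parse_discard implements can be restated compactly: discard defaults to on, and an explicit discard-enable key in the backend node overrides it. A sketch of that precedence (hypothetical helper, not the qemu code):

```python
def feature_discard(backend_node):
    """discard defaults to on; an explicit 'discard-enable' key (0/1) wins."""
    val = backend_node.get("discard-enable")
    if val is None:
        return True          # key absent: offer discard
    return bool(int(val))    # key present: the toolstack's choice wins

print(feature_discard({}))                        # True: default on
print(feature_discard({"discard-enable": "0"}))   # False: toolstack opted out
```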
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:07:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qPF-0002mt-TL; Thu, 30 Jan 2014 12:07:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8qPE-0002mk-Tp
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:07:41 +0000
Received: from [193.109.254.147:60543] by server-13.bemta-14.messagelabs.com
	id 4B/87-01226-C804AE25; Thu, 30 Jan 2014 12:07:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391083657!871772!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7892 invoked from network); 30 Jan 2014 12:07:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:07:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96092056"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:07:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:07:36 -0500
Message-ID: <1391083654.29487.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Thu, 30 Jan 2014 12:07:34 +0000
In-Reply-To: <20140130120107.GA3441@citrix.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
> > > @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
> > >                  continue
> > >  
> > >              # new image
> > > -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> > > +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
> > 
> > Why is this necessary? fedora-19 also has the aforementioned "--class
> > red, --class gnu" yet is parsed happily.
> 
> A menuentry from RHEL 7 looks like this...
> 
> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' 
> 
> So we need 'lazy' match with '.*?'.

".*" already matches zero or more characters, so I'm not sure what ".*?"
means in addition to that. Do you have a reference?

Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
the name itself, although you might have to split into handling " and '
separately to be more correct).

Have you run this new regex over tools/pygrub/examples?

Ian.
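As an aside on the question above: in Python regular expressions, appending `?` to a quantifier makes it non-greedy, so `.*?` matches as few characters as possible (documented in the `re` module). A short demonstration against a RHEL 7-style menuentry line (shortened, illustrative):

```python
import re

# A RHEL 7-style menuentry line (shortened, illustrative):
line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-rescue' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy .* runs to the last quote on the line, so the title group
# swallows the option list and the second quoted string:
print(greedy.group(1))
# Non-greedy .*? stops at the first closing quote, keeping just the title:
print(lazy.group(1))
```

The non-greedy group stops at the first closing quote, which is why the RHEL 7 entries with a second quoted `$menuentry_id_option` argument parse correctly.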


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:19:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qaY-0003bC-6f; Thu, 30 Jan 2014 12:19:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1W8qaX-0003b7-Di
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 12:19:21 +0000
Received: from [85.158.143.35:57802] by server-1.bemta-4.messagelabs.com id
	24/3B-31661-8434AE25; Thu, 30 Jan 2014 12:19:20 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391084359!1904800!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24122 invoked from network); 30 Jan 2014 12:19:19 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-21.messagelabs.com with SMTP;
	30 Jan 2014 12:19:19 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0UCIGFg021616
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 07:18:16 -0500
Received: from redhat.com (vpn1-7-27.ams2.redhat.com [10.36.7.27])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s0UCICj7017259
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Thu, 30 Jan 2014 07:18:14 -0500
Date: Thu, 30 Jan 2014 12:18:05 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Jim Fehlig <jfehlig@suse.com>
Message-ID: <20140130121805.GI3139@redhat.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E70A58.2060002@suse.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
 flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 06:39:36PM -0700, Jim Fehlig wrote:
> [Adding libvirt list...]
> 
> Ian Jackson wrote:
> > Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):

BTW, I meant to ask before - what is the SIGCHLD reference about in the
subject line?

libvirt drivers that live inside libvirtd should never use or rely on
the SIGCHLD signal at all. All VM processes started by libvirtd ought
to be fully daemonized so that their parent is pid 1 / init. This ensures
that the libvirtd daemon can be restarted without all the VMs getting
reaped. Once the VMs are reparented to init, neither libvirt nor the
library code it uses has any way of receiving SIGCHLD.

I'm not sure whether the libvirt libxl driver currently daemonizes the
processes associated with the VMs it starts, but if not, it really
should.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
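The daemonization described above is the classic double fork; a minimal sketch in Python (illustrative only, not libvirt's actual implementation):

```python
import os

def spawn_daemonized(run):
    """Classic double fork: the intermediate child exits at once, so the
    grandchild is reparented to init (pid 1) and the original process
    never receives a SIGCHLD for it. Illustrative sketch, not libvirt code."""
    pid = os.fork()
    if pid == 0:
        os.setsid()          # new session, detach from the controlling tty
        if os.fork() != 0:
            os._exit(0)      # intermediate child exits immediately
        run()                # grandchild: the long-lived "VM" process
        os._exit(0)
    os.waitpid(pid, 0)       # reap the intermediate child synchronously

spawn_daemonized(lambda: None)
print("spawned and detached")
```

Because the parent reaps the short-lived intermediate child synchronously, no SIGCHLD arrives later for the detached grandchild, which is exactly the property a restartable daemon like libvirtd wants.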

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qfa-0003iz-Uc; Thu, 30 Jan 2014 12:24:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W8qfZ-0003it-9j
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:24:33 +0000
Received: from [193.109.254.147:26441] by server-16.bemta-14.messagelabs.com
	id 89/C5-21945-0844AE25; Thu, 30 Jan 2014 12:24:32 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391084671!876099!1
X-Originating-IP: [209.85.215.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22972 invoked from network); 30 Jan 2014 12:24:31 -0000
Received: from mail-la0-f41.google.com (HELO mail-la0-f41.google.com)
	(209.85.215.41)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:24:31 -0000
Received: by mail-la0-f41.google.com with SMTP id mc6so2545828lab.0
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 04:24:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=4HrOfwxkk8UxCouI+8cEhl19bb0KcKFWM+0BHNpK3qQ=;
	b=WTyk/ZLZhv7O1s3joRHWwmStMdw8djJhSH/ZrGxbhv2CQmtqZ0OWdkgCewzN9/k0rZ
	v4N/NoyY34xwcQ2p0GAKZFjX28LMHUjYhdc6j2FzwnRGTr5tUdaTfzMwa5TydadmbBKo
	Rbdx9O2ZHRITLk58r2f+j0sBIJpLNO7Pb+x78EhCZZIhGMiK3UU98MZJUY9w++Lv5alq
	gZlRBrDYR9OY3w45oElN6gAKiDQ84kMUXWe5dyzqx+rXG5ojNm5Q2PNkSL8tDz5J62AC
	BYip9c4+AWmUd75Wb4MSHrbxc5TwWlQwWM0WmCWqb69KLjLiR0lE7Ul+OW7tuq96ysy0
	AMMw==
X-Received: by 10.152.201.197 with SMTP id kc5mr890lac.77.1391084670965;
	Thu, 30 Jan 2014 04:24:30 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id o10sm8705763laj.2.2014.01.30.04.24.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 30 Jan 2014 04:24:29 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <20140130120107.GA3441@citrix.com>
Date: Thu, 30 Jan 2014 16:24:26 +0400
Message-Id: <122E3FD9-A322-4209-9F71-3127E78974CE@gmail.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
	new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Is it possible to fix this for Oracle Solaris 11.1?

-Igor

On Jan 30, 2014, at 4:01 PM, Joby Poriyath wrote:

> On Thu, Jan 30, 2014 at 11:45:43AM +0000, Andrew Cooper wrote:
>> On 30/01/14 11:31, Joby Poriyath wrote:
>>> menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
>>> instead of linux and initrd. Due to this, a RHEL 7 (beta) guest failed
>>> to boot after installation.
>>> 
>>> In addition to this, menuentry has some options as well
>>> (--class red, --class gnu, etc).
>>> 
>>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
>> 
>> It is typical to put an example in tools/pygrub/examples
>> 
>> Also, you will need to CC George Dunlap and specify why this change
>> might want a freeze exception to be included in 4.4
>> 
> Alright. I'll CC George.
> 
>>> ---
>>> tools/pygrub/src/GrubConf.py |    4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
>>> index cb853c9..974cded 100644
>>> --- a/tools/pygrub/src/GrubConf.py
>>> +++ b/tools/pygrub/src/GrubConf.py
>>> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>>> 
>>>     commands = {'set:root': 'root',
>>>                 'linux': 'kernel',
>>> +                'linux16': 'kernel',
>>>                 'initrd': 'initrd',
>>> +                'initrd16': 'initrd',
>>>                 'echo': None,
>>>                 'insmod': None,
>>>                 'search': None}
>>> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>>>                 continue
>>> 
>>>             # new image
>>> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
>>> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>> 
>> Why is this necessary? fedora-19 also has the aforementioned "--class
>> red, --class gnu" yet is parsed happily.
> 
> A menuentry from RHEL 7 looks like this...
> 
> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' 
> 
> So we need 'lazy' match with '.*?'.
> 
> -Joby
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
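
The lazy-match fix quoted above can be demonstrated on a shortened,
hypothetical menuentry line shaped like the RHEL 7 example (title,
--class options, then a second quoted identifier before the brace):

```python
import re

line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-rescue' {")

# The old greedy pattern runs group 1 out to the *last* quote,
# swallowing the options and the identifier into the title; the lazy
# .*? stops at the first closing quote and captures only the title.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy   = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

title_greedy = greedy.group(1)   # title polluted with --class options
title_lazy = lazy.group(1)       # "Red Hat Enterprise Linux"
```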


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:25:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qg2-0003mP-Mo; Thu, 30 Jan 2014 12:25:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8qg0-0003m8-3f
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:25:00 +0000
Received: from [85.158.137.68:35381] by server-10.bemta-3.messagelabs.com id
	A4/F0-07302-B944AE25; Thu, 30 Jan 2014 12:24:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391084697!8665454!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27506 invoked from network); 30 Jan 2014 12:24:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:24:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98065969"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 12:24:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:24:36 -0500
Message-ID: <1391084675.29487.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 12:24:35 +0000
In-Reply-To: <21226.14059.736402.79784@mariner.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
	<21226.14059.736402.79784@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:26 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH 4/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> > On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > > +    domctl.domain = (domid_t)domid;
> > > 
> > > Why can't the function parameter be domid_t right away?
> > 
> > It seemed that the vast majority of the current libxc functions were
> > using uint32_t for whatever reason.
> 
> What's the point of the cast, though ?

Apparently all the cool kids in this file are doing it and I followed
suit ;-)

domid_t is a uint16_t; I kind of expected gcc to warn about an
assignment of a uint32_t to a uint16_t, but apparently not...
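
The silent narrowing being discussed can be illustrated with a Python
ctypes stand-in for the C assignment (the libxc code itself is C;
gcc only diagnoses this narrowing with -Wconversion, which is not
enabled by default):

```python
import ctypes

# domid_t is a uint16_t: the high bits of a uint32_t value are
# silently dropped on assignment, with or without the explicit cast.
domid32 = 0x0001_0005                     # uint32_t with bits above 15 set
domid16 = ctypes.c_uint16(domid32).value  # what domctl.domain receives
# domid16 keeps only the low 16 bits (0x0005)
```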

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qjp-00049S-F7; Thu, 30 Jan 2014 12:28:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8qjn-00048u-ME
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:28:55 +0000
Received: from [85.158.139.211:5544] by server-5.bemta-5.messagelabs.com id
	BC/73-32749-6854AE25; Thu, 30 Jan 2014 12:28:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391084931!588802!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1233 invoked from network); 30 Jan 2014 12:28:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:28:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96099711"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:28:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:28:50 -0500
Message-ID: <1391084929.29487.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 30 Jan 2014 12:28:49 +0000
In-Reply-To: <122E3FD9-A322-4209-9F71-3127E78974CE@gmail.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<122E3FD9-A322-4209-9F71-3127E78974CE@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 16:24 +0400, Igor Kozhukhov wrote:
> it is possible to fix it for Oracle Solaris 11.1 ?

If you send a patch then sure. At the very minimum please patch an
example of the syntax used by Solaris into tools/pygrub/examples.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:29:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qjp-00049S-F7; Thu, 30 Jan 2014 12:28:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8qjn-00048u-ME
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:28:55 +0000
Received: from [85.158.139.211:5544] by server-5.bemta-5.messagelabs.com id
	BC/73-32749-6854AE25; Thu, 30 Jan 2014 12:28:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391084931!588802!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1233 invoked from network); 30 Jan 2014 12:28:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:28:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96099711"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:28:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:28:50 -0500
Message-ID: <1391084929.29487.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 30 Jan 2014 12:28:49 +0000
In-Reply-To: <122E3FD9-A322-4209-9F71-3127E78974CE@gmail.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<122E3FD9-A322-4209-9F71-3127E78974CE@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 16:24 +0400, Igor Kozhukhov wrote:
> is it possible to fix it for Oracle Solaris 11.1?

If you send a patch, then sure. At the very minimum, please add an
example of the syntax used by Solaris to tools/pygrub/examples.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:32:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:32:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8qnA-0004U3-5i; Thu, 30 Jan 2014 12:32:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8qn8-0004Tw-Er
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 12:32:22 +0000
Received: from [85.158.139.211:61044] by server-13.bemta-5.messagelabs.com id
	FA/54-18801-5564AE25; Thu, 30 Jan 2014 12:32:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391085139!590884!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17496 invoked from network); 30 Jan 2014 12:32:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:32:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96101319"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:32:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:32:18 -0500
Message-ID: <1391085137.29487.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 12:32:17 +0000
In-Reply-To: <1391084675.29487.24.camel@kazak.uk.xensource.com>
References: <1390997452.31814.90.camel@kazak.uk.xensource.com>
	<1390997486-3986-4-git-send-email-ian.campbell@citrix.com>
	<52E9098B0200007800117EC5@nat28.tlf.novell.com>
	<1391004947.31814.119.camel@kazak.uk.xensource.com>
	<21226.14059.736402.79784@mariner.uk.xensource.com>
	<1391084675.29487.24.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, george.dunlap@citrix.com, xen-devel@lists.xen.org,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest
 caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 12:24 +0000, Ian Campbell wrote:
> On Thu, 2014-01-30 at 11:26 +0000, Ian Jackson wrote:
> > Ian Campbell writes ("Re: [PATCH 4/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> > > On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
> > >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > > > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > > > +    domctl.domain = (domid_t)domid;
> > > > 
> > > > Why can't the function parameter be domid_t right away?
> > > 
> > > It seemed that the vast majority of the current libxc functions were
> > > using uint32_t for whatever reason.
> > 
> > What's the point of the cast, though ?
> 
> Apparently all the cool kids in this file are doing it and I followed
> suit ;-)
> 
> domid_t is a uint16_t; I kind of expected gcc to warn about an
> assignment of a uint32_t to a uint16_t, but apparently not...

Just for completeness: -Wconversion is the option to make it do this.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:46:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8r0Z-0004yW-Ih; Thu, 30 Jan 2014 12:46:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8r0Y-0004yR-Pe
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 12:46:15 +0000
Received: from [85.158.137.68:4020] by server-12.bemta-3.messagelabs.com id
	2B/A0-01674-5994AE25; Thu, 30 Jan 2014 12:46:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391085971!12341921!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24809 invoked from network); 30 Jan 2014 12:46:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:46:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96104410"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:46:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:46:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Stefano.Stabellini@eu.citrix.com>)	id 1W8r0U-0004nO-Mi;
	Thu, 30 Jan 2014 12:46:10 +0000
Date: Thu, 30 Jan 2014 12:46:05 +0000
From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <alpine.DEB.2.02.1401301223350.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>, george.dunlap@eu.citrix.com,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH] address_space_translate: do not cross page
	boundaries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The following commit:

commit 149f54b53b7666a3facd45e86eece60ce7d3b114
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Fri May 24 12:59:37 2013 +0200

    memory: add address_space_translate
 
breaks Xen support in QEMU, in particular the Xen mapcache. The effect
is that one Windows XP installation out of ten would end up with BSOD.

The reason is that after this commit the length l in address_space_rw
can span a page boundary; however, qemu_get_ram_ptr still calls
xen_map_cache asking it to map a single page (if block->offset == 0).

Fix the issue by reverting to the previous behaviour: do not return a
length from address_space_translate_internal that can span a page
boundary.

Also in address_space_translate do not ignore the length returned by
address_space_translate_internal.

This patch should be backported to QEMU 1.6.x.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Perard <anthony.perard@citrix.com>

---
 exec.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/exec.c b/exec.c
index 667a718..f3797b7 100644
--- a/exec.c
+++ b/exec.c
@@ -251,7 +251,7 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
                                  hwaddr *plen, bool resolve_subpage)
 {
     MemoryRegionSection *section;
-    Int128 diff;
+    Int128 diff, diff_page;
 
     section = address_space_lookup_region(d, addr, resolve_subpage);
     /* Compute offset within MemoryRegionSection */
@@ -260,7 +260,9 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
     /* Compute offset within MemoryRegion */
     *xlat = addr + section->offset_within_region;
 
+    diff_page = int128_make64(((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr);
     diff = int128_sub(section->mr->size, int128_make64(addr));
+    diff = int128_min(diff, diff_page);
     *plen = int128_get64(int128_min(diff, int128_make64(*plen)));
     return section;
 }
@@ -275,7 +277,7 @@ MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
     hwaddr len = *plen;
 
     for (;;) {
-        section = address_space_translate_internal(as->dispatch, addr, &addr, plen, true);
+        section = address_space_translate_internal(as->dispatch, addr, &addr, &len, true);
         mr = section->mr;
 
         if (!mr->iommu_ops) {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 12:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 12:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8r2Y-00053z-5o; Thu, 30 Jan 2014 12:48:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8r2Q-00053o-5d
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 12:48:16 +0000
Received: from [85.158.143.35:14687] by server-2.bemta-4.messagelabs.com id
	48/90-10891-90A4AE25; Thu, 30 Jan 2014 12:48:09 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391086087!1914237!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 901 invoked from network); 30 Jan 2014 12:48:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 12:48:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96105341"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 12:48:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 07:48:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8r2M-0004oz-Ky;
	Thu, 30 Jan 2014 12:48:06 +0000
Date: Thu, 30 Jan 2014 12:48:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <52E8FD39.8030307@redhat.com>
Message-ID: <alpine.DEB.2.02.1401301246370.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401291152120.4373@kaball.uk.xensource.com>
	<52E8FD39.8030307@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony Perard <anthony.perard@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [BUG] BSoD on Windows XP installation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 29 Jan 2014, Paolo Bonzini wrote:
> Il 29/01/2014 13:00, Stefano Stabellini ha scritto:
> > Hi Paolo,
> > we have been trying to fix a BSOD that would happen during the Windows
> > XP installation, once every ten times on average.
> > After many days of bisection, we found out that commit
> > 
> > commit 149f54b53b7666a3facd45e86eece60ce7d3b114
> > Author: Paolo Bonzini <pbonzini@redhat.com>
> > Date:   Fri May 24 12:59:37 2013 +0200
> > 
> >     memory: add address_space_translate
> > 
> > breaks Xen support in QEMU, in particular the Xen mapcache.
> > The reason is that after this commit, l in address_space_rw can span a
> > page boundary, however qemu_get_ram_ptr still calls xen_map_cache asking
> > to map a single page (if block->offset == 0).
> > The appended patch works around the issue by reverting to the old
> > behaviour.
> > 
> > What do you think is the right fix for this?
> > Maybe we need to add a size parameter to qemu_get_ram_ptr?
> 
> Yeah, that would be best but the patch you attached is fine too with a FIXME
> comment.

Thanks for the quick reply. I have just sent a better and cleaner
version of this patch with a proper commit message and signed-off-by
lines:

http://marc.info/?l=qemu-devel&m=139108598630562&w=2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 13:03:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rGZ-0005eB-6G; Thu, 30 Jan 2014 13:02:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8rGX-0005e6-Jb
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 13:02:45 +0000
Received: from [85.158.139.211:53866] by server-15.bemta-5.messagelabs.com id
	04/DB-24395-47D4AE25; Thu, 30 Jan 2014 13:02:44 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391086962!592913!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6105 invoked from network); 30 Jan 2014 13:02:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:02:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="96109500"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 13:02:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 08:02:41 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8rGT-00051s-QW; Thu, 30 Jan 2014 13:02:41 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8rGT-0001GV-Jm; Thu, 30 Jan 2014
	13:02:41 +0000
Date: Thu, 30 Jan 2014 13:02:41 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140130130241.GB3441@citrix.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391083654.29487.21.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
> On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
> > > > @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
> > > >                  continue
> > > >  
> > > >              # new image
> > > > -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> > > > +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
> > > 
> > > Why is this necessary? fedora-19 also has the aforementioned "--class
> > > red, --class gnu", yet it is parsed happily.
> > 
> > A menuentry from RHEL 7 looks like this...
> > 
> > menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' 
> > 
> > So we need 'lazy' match with '.*?'.
> 
> ".*" already matches zero or more characters, so I'm not sure what ".*?"
> means in addition to that, do you have a reference?

http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy
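A minimal Python sketch of the difference, using a shortened, hypothetical form of the RHEL 7 menuentry line quoted above (with the trailing '{' that grub2 actually emits):

```python
import re

line = ("menuentry 'Red Hat Enterprise Linux Everything' "
        "--class red --class os $menuentry_id_option 'gnulinux-rescue' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy .* runs to the LAST quote, so the --class options and the
# $menuentry_id_option argument end up inside the "title" group;
# lazy .*? stops at the first closing quote and captures only the title.
print(greedy.group(1))
print(lazy.group(1))   # -> Red Hat Enterprise Linux Everything
```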

> 
> Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> the name itself, although you might have to split it into handling " and '
> separately to be more correct).
> 
> Have you run this new regex over tools/pygrub/examples?

I ran this new regex over the examples.

Thanks,
Joby

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 13:16:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rU1-000681-J2; Thu, 30 Jan 2014 13:16:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8rU0-00067w-Mk
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 13:16:40 +0000
Received: from [85.158.139.211:45895] by server-16.bemta-5.messagelabs.com id
	00/9D-05060-7B05AE25; Thu, 30 Jan 2014 13:16:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391087797!614743!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1003 invoked from network); 30 Jan 2014 13:16:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:16:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98081756"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 13:16:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 08:16:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8rTw-0005EL-Ks;
	Thu, 30 Jan 2014 13:16:36 +0000
Date: Thu, 30 Jan 2014 13:16:31 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <1391083425-29574-1-git-send-email-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.02.1401301315180.4373@kaball.uk.xensource.com>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<1391083425-29574-1-git-send-email-olaf@aepfle.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, qemu-devel@nongnu.org,
	xen-devel@lists.xen.org, anthony.perard@citrix.com
Subject: Re: [Xen-devel] [PATCH v2] qemu-upstream: add discard support for
	xen_disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Olaf Hering wrote:
> Implement discard support for xen_disk. It makes use of the existing
> discard code in qemu.
> 
> Discard support is enabled by default. The tool stack may provide a
> property "discard-enable" in the backend node to optionally disable discard
> support.  This is helpful in case the backing file was intentionally created
> non-sparse to avoid fragmentation.
> 
> v2:
> rename xenstore property from discard_enable to discard-enable
> move discard_req to case BLKIF_OP_DISCARD
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

The patch is fine by me but it has a few simple style issues.
Please run it through scripts/checkpatch.pl in QEMU and resend (CC'ing
qemu-devel too).
Thanks!


>  hw/block/xen_blkif.h | 12 ++++++++++++
>  hw/block/xen_disk.c  | 30 ++++++++++++++++++++++++++++++
>  2 files changed, 42 insertions(+)
> 
> diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
> index c0f4136..711b692 100644
> --- a/hw/block/xen_blkif.h
> +++ b/hw/block/xen_blkif.h
> @@ -79,6 +79,12 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> @@ -94,6 +100,12 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
>  	dst->handle = src->handle;
>  	dst->id = src->id;
>  	dst->sector_number = src->sector_number;
> +	if (src->operation == BLKIF_OP_DISCARD) {
> +		struct blkif_request_discard *s = (void *)src;
> +		struct blkif_request_discard *d = (void *)dst;
> +		d->nr_sectors = s->nr_sectors;
> +		return;
> +	}
>  	if (n > src->nr_segments)
>  		n = src->nr_segments;
>  	for (i = 0; i < n; i++)
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index 03e30d7..a1f1c7e 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -114,6 +114,7 @@ struct XenBlkDev {
>      int                 requests_finished;
>  
>      /* Persistent grants extension */
> +    gboolean            feature_discard;
>      gboolean            feature_persistent;
>      GTree               *persistent_gnts;
>      unsigned int        persistent_gnt_count;
> @@ -253,6 +254,8 @@ static int ioreq_parse(struct ioreq *ioreq)
>      case BLKIF_OP_WRITE:
>          ioreq->prot = PROT_READ; /* from memory */
>          break;
> +    case BLKIF_OP_DISCARD:
> +        return 0;
>      default:
>          xen_be_printf(&blkdev->xendev, 0, "error: unknown operation (%d)\n",
>                        ioreq->req.operation);
> @@ -521,6 +524,16 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>                          &ioreq->v, ioreq->v.size / BLOCK_SIZE,
>                          qemu_aio_complete, ioreq);
>          break;
> +    case BLKIF_OP_DISCARD:
> +    {
> +        struct blkif_request_discard *discard_req = (void *)&ioreq->req;
> +        bdrv_acct_start(blkdev->bs, &ioreq->acct, discard_req->nr_sectors * BLOCK_SIZE, BDRV_ACCT_WRITE);
> +        ioreq->aio_inflight++;
> +        bdrv_aio_discard(blkdev->bs,
> +                        discard_req->sector_number, discard_req->nr_sectors,
> +                        qemu_aio_complete, ioreq);
> +        break;
> +    }
>      default:
>          /* unknown operation (shouldn't happen -- parse catches this) */
>          goto err;
> @@ -699,6 +712,19 @@ static void blk_alloc(struct XenDevice *xendev)
>      }
>  }
>  
> +static void blk_parse_discard(struct XenBlkDev *blkdev)
> +{
> +    int enable;
> +
> +    blkdev->feature_discard = true;
> +
> +    if (xenstore_read_be_int(&blkdev->xendev, "discard-enable", &enable) == 0)
> +	    blkdev->feature_discard = !!enable;
> +
> +    if (blkdev->feature_discard)
> +	    xenstore_write_be_int(&blkdev->xendev, "feature-discard", 1);
> +}
> +
>  static int blk_init(struct XenDevice *xendev)
>  {
>      struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
> @@ -766,6 +792,8 @@ static int blk_init(struct XenDevice *xendev)
>      xenstore_write_be_int(&blkdev->xendev, "feature-persistent", 1);
>      xenstore_write_be_int(&blkdev->xendev, "info", info);
>  
> +    blk_parse_discard(blkdev);
> +
>      g_free(directiosafe);
>      return 0;
>  
> @@ -801,6 +829,8 @@ static int blk_connect(struct XenDevice *xendev)
>          qflags |= BDRV_O_RDWR;
>          readonly = false;
>      }
> +    if (blkdev->feature_discard)
> +        qflags |= BDRV_O_UNMAP;
>  
>      /* init qemu block driver */
>      index = (blkdev->xendev.dev - 202 * 256) / 16;
> 
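As a sketch of the negotiation the patch's commit message describes (discard on by default, toolstack override via "discard-enable", advertised as "feature-discard"), here is a hypothetical Python model; a plain dict stands in for the xenstore backend node:

```python
def parse_discard(backend_node):
    # Discard defaults to on; the toolstack may write discard-enable=0
    # into the backend node to turn it off.  When it stays enabled, the
    # backend advertises it to the frontend as feature-discard=1.
    feature_discard = True
    if "discard-enable" in backend_node:
        feature_discard = bool(int(backend_node["discard-enable"]))
    if feature_discard:
        backend_node["feature-discard"] = 1
    return feature_discard

node = {}
print(parse_discard(node), node)               # -> True {'feature-discard': 1}
print(parse_discard({"discard-enable": "0"}))  # -> False
```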

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 13:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rcK-00071s-0k; Thu, 30 Jan 2014 13:25:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8rcI-00071j-FI
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 13:25:14 +0000
Received: from [85.158.143.35:31406] by server-1.bemta-4.messagelabs.com id
	1D/39-31661-9B25AE25; Thu, 30 Jan 2014 13:25:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391088311!1939020!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11018 invoked from network); 30 Jan 2014 13:25:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:25:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98083838"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 13:25:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 08:24:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8rc3-0005M6-Mp;
	Thu, 30 Jan 2014 13:24:59 +0000
Date: Thu, 30 Jan 2014 13:24:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-521672814-1391088294=:4373"
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


From xen-devel-bounces@lists.xen.org Thu Jan 30 13:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rcK-00071s-0k; Thu, 30 Jan 2014 13:25:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8rcI-00071j-FI
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 13:25:14 +0000
Received: from [85.158.143.35:31406] by server-1.bemta-4.messagelabs.com id
	1D/39-31661-9B25AE25; Thu, 30 Jan 2014 13:25:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391088311!1939020!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11018 invoked from network); 30 Jan 2014 13:25:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:25:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,749,1384300800"; d="scan'208";a="98083838"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 13:25:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 08:24:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8rc3-0005M6-Mp;
	Thu, 30 Jan 2014 13:24:59 +0000
Date: Thu, 30 Jan 2014 13:24:54 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-521672814-1391088294=:4373"
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-521672814-1391088294=:4373
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Is it a level or an edge irq?

On Wed, 29 Jan 2014, Julien Grall wrote:
> Hi,
> 
> It's weird, a physical IRQ should not be injected twice...
> Were you able to print the IRQ number?
> 
> In any case, you are using an old version of the interrupt patch series.
> Your new error may come from a race condition in this code.
> 
> Can you try a newer version?
> 
> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>       > difference for xen-unstable (it should make things clearer, if nothing
>       > else) but it should fix things for Oleksandr.
> 
>       Unfortunately, it is not enough for stable operation.
> 
>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>       that leads to the deadlock in the on_selected_cpus function (as expected).
>       But the hypervisor sometimes hangs somewhere else (I have not yet
>       identified where this is happening), or I sometimes see traps like the
>       following (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt()
>       leads to them):
> 
>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>       (XEN) CPU:    1
>       (XEN) PC:     00242c1c __warn+0x20/0x28
>       (XEN) CPSR:   200001da MODE:Hypervisor
>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>       (XEN)
>       (XEN)   VTCR_EL2: 80002558
>       (XEN)  VTTBR_EL2: 00020000dec6a000
>       (XEN)
>       (XEN)  SCTLR_EL2: 30cd187f
>       (XEN)    HCR_EL2: 00000000000028b5
>       (XEN)  TTBR0_EL2: 00000000d2014000
>       (XEN)
>       (XEN)    ESR_EL2: 00000000
>       (XEN)  HPFAR_EL2: 0000000000482110
>       (XEN)      HDFAR: fa211190
>       (XEN)      HIFAR: 00000000
>       (XEN)
>       (XEN) Xen stack trace from sp=4bfd7eb4:
>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>       (XEN)    ffeffbfe fedeefff fffd5ffe
>       (XEN) Xen call trace:
>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>       (XEN)    [<00251830>] return_from_trap+0/0x4
>       (XEN)
> 
>       Also I am posting maintenance_interrupt() from my tree:
> 
>       static void maintenance_interrupt(int irq, void *dev_id,
>                                         struct cpu_user_regs *regs)
>       {
>           int i = 0, virq, pirq;
>           uint32_t lr;
>           struct vcpu *v = current;
>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> 
>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>                                     64, i)) < 64) {
>               struct pending_irq *p, *n;
>               int cpu, eoi;
> 
>               cpu = -1;
>               eoi = 0;
> 
>               spin_lock_irq(&gic.lock);
>               lr = GICH[GICH_LR + i];
>               virq = lr & GICH_LR_VIRTUAL_MASK;
> 
>               p = irq_to_pending(v, virq);
>               if ( p->desc != NULL ) {
>                   p->desc->status &= ~IRQ_INPROGRESS;
>                   /* Assume only one pcpu needs to EOI the irq */
>                   cpu = p->desc->arch.eoi_cpu;
>                   eoi = 1;
>                   pirq = p->desc->irq;
>               }
>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>               {
>                   /* A physical IRQ can't be reinjected */
>                   WARN_ON(p->desc != NULL);
>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>                   spin_unlock_irq(&gic.lock);
>                   i++;
>                   continue;
>               }
> 
>               GICH[GICH_LR + i] = 0;
>               clear_bit(i, &this_cpu(lr_mask));
> 
>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>                   list_del_init(&n->lr_queue);
>                   set_bit(i, &this_cpu(lr_mask));
>               } else {
>                   gic_inject_irq_stop();
>               }
>               spin_unlock_irq(&gic.lock);
> 
>               spin_lock_irq(&v->arch.vgic.lock);
>               list_del_init(&p->inflight);
>               spin_unlock_irq(&v->arch.vgic.lock);
> 
>               if ( eoi ) {
>                   /* this is not racy because we can't receive another irq of the
>                    * same type until we EOI it.  */
>                   if ( cpu == smp_processor_id() )
>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>                   else
>                       on_selected_cpus(cpumask_of(cpu),
>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>               }
> 
>               i++;
>           }
>       }
> 
> 
>       Oleksandr Tyshchenko | Embedded Developer
>       GlobalLogic
> 
> 
> 
--1342847746-521672814-1391088294=:4373
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-521672814-1391088294=:4373--


From xen-devel-bounces@lists.xen.org Thu Jan 30 13:39:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rpZ-0007mu-Dd; Thu, 30 Jan 2014 13:38:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W8rpX-0007mp-Fu
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 13:38:55 +0000
Received: from [85.158.137.68:43577] by server-15.bemta-3.messagelabs.com id
	53/50-19263-EE55AE25; Thu, 30 Jan 2014 13:38:54 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391089133!12283073!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5054 invoked from network); 30 Jan 2014 13:38:54 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:38:54 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so1609711eei.40
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Jan 2014 05:38:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=0JBNUxdaii7N0MKE82Rl7KF9kBSp8C9gZzBWD+xtLX0=;
	b=YGYxxQWV9WnoAy9Mt+AC74poJfJ6ijtiLxVbCfuAJHTWqFwKlbMfhZuSDd3JzsCP6V
	qN+yFVKGMLxNHHL/RKHyahlEbg54wOdSeW8KkyXmZWy2ZiAcAh4tYo83D8OWMxJ4NAnU
	XW7g82ibfP7QLQZRTZj3wl7jodwCrxlTE8m/7W3k4BrFF61T50KfgceKwG+CErLUtmhw
	FId9G6ZjuUwbsIKXeU1HWYqjZ616eiTg+psLPb0kCvVX69Yot7zNk7moGwhcYaFWoDeA
	odK1mBPo0/MqHH1StBjGGVOvvLaz51bNOWSm29ovH8xqaHE2eAfLT95gWWNzHjZVeRFB
	IPkQ==
X-Received: by 10.14.88.5 with SMTP id z5mr1811760eee.101.1391089133461;
	Thu, 30 Jan 2014 05:38:53 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.vodafonedsl.it.
	[2.35.197.229]) by mx.google.com with ESMTPSA id
	d43sm22654156eep.18.2014.01.30.05.38.50 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 30 Jan 2014 05:38:51 -0800 (PST)
Message-ID: <52EA55E6.90708@redhat.com>
Date: Thu, 30 Jan 2014 14:38:46 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1401301223350.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401301223350.4373@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Anthony PERARD <anthony.perard@citrix.com>, george.dunlap@eu.citrix.com,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	qemu-stable@nongnu.org
Subject: Re: [Xen-devel] [PATCH] address_space_translate: do not cross page
	boundaries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 30/01/2014 13:46, Stefano Stabellini ha scritto:
> The following commit:
>
> commit 149f54b53b7666a3facd45e86eece60ce7d3b114
> Author: Paolo Bonzini <pbonzini@redhat.com>
> Date:   Fri May 24 12:59:37 2013 +0200
>
>     memory: add address_space_translate
>
> breaks Xen support in QEMU, in particular the Xen mapcache. The effect
> is that one Windows XP installation out of ten would end up with BSOD.
>
> The reason is that after this commit l in address_space_rw can span a
> page boundary, however qemu_get_ram_ptr still calls xen_map_cache asking
> to map a single page (if block->offset == 0).
>
> Fix the issue by reverting to the previous behaviour: do not return a
> length from address_space_translate_internal that can span a page
> boundary.
>
> Also in address_space_translate do not ignore the length returned by
> address_space_translate_internal.
>
> This patch should be backported to QEMU 1.6.x.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Anthony Perard <anthony.perard@citrix.com>

Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org

> ---
>  exec.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 667a718..f3797b7 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -251,7 +251,7 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
>                                   hwaddr *plen, bool resolve_subpage)
>  {
>      MemoryRegionSection *section;
> -    Int128 diff;
> +    Int128 diff, diff_page;
>
>      section = address_space_lookup_region(d, addr, resolve_subpage);
>      /* Compute offset within MemoryRegionSection */
> @@ -260,7 +260,9 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
>      /* Compute offset within MemoryRegion */
>      *xlat = addr + section->offset_within_region;
>
> +    diff_page = int128_make64(((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr);
>      diff = int128_sub(section->mr->size, int128_make64(addr));
> +    diff = int128_min(diff, diff_page);
>      *plen = int128_get64(int128_min(diff, int128_make64(*plen)));
>      return section;
>  }
> @@ -275,7 +277,7 @@ MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
>      hwaddr len = *plen;
>
>      for (;;) {
> -        section = address_space_translate_internal(as->dispatch, addr, &addr, plen, true);
> +        section = address_space_translate_internal(as->dispatch, addr, &addr, &len, true);
>          mr = section->mr;
>
>          if (!mr->iommu_ops) {
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 13:39:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rpZ-0007mu-Dd; Thu, 30 Jan 2014 13:38:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1W8rpX-0007mp-Fu
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 13:38:55 +0000
Received: from [85.158.137.68:43577] by server-15.bemta-3.messagelabs.com id
	53/50-19263-EE55AE25; Thu, 30 Jan 2014 13:38:54 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391089133!12283073!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5054 invoked from network); 30 Jan 2014 13:38:54 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:38:54 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so1609711eei.40
	for <xen-devel@lists.xensource.com>;
	Thu, 30 Jan 2014 05:38:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=0JBNUxdaii7N0MKE82Rl7KF9kBSp8C9gZzBWD+xtLX0=;
	b=YGYxxQWV9WnoAy9Mt+AC74poJfJ6ijtiLxVbCfuAJHTWqFwKlbMfhZuSDd3JzsCP6V
	qN+yFVKGMLxNHHL/RKHyahlEbg54wOdSeW8KkyXmZWy2ZiAcAh4tYo83D8OWMxJ4NAnU
	XW7g82ibfP7QLQZRTZj3wl7jodwCrxlTE8m/7W3k4BrFF61T50KfgceKwG+CErLUtmhw
	FId9G6ZjuUwbsIKXeU1HWYqjZ616eiTg+psLPb0kCvVX69Yot7zNk7moGwhcYaFWoDeA
	odK1mBPo0/MqHH1StBjGGVOvvLaz51bNOWSm29ovH8xqaHE2eAfLT95gWWNzHjZVeRFB
	IPkQ==
X-Received: by 10.14.88.5 with SMTP id z5mr1811760eee.101.1391089133461;
	Thu, 30 Jan 2014 05:38:53 -0800 (PST)
Received: from yakj.usersys.redhat.com (net-2-35-197-229.cust.vodafonedsl.it.
	[2.35.197.229]) by mx.google.com with ESMTPSA id
	d43sm22654156eep.18.2014.01.30.05.38.50 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 30 Jan 2014 05:38:51 -0800 (PST)
Message-ID: <52EA55E6.90708@redhat.com>
Date: Thu, 30 Jan 2014 14:38:46 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1401301223350.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401301223350.4373@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Anthony PERARD <anthony.perard@citrix.com>, george.dunlap@eu.citrix.com,
	xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	qemu-stable@nongnu.org
Subject: Re: [Xen-devel] [PATCH] address_space_translate: do not cross page
	boundaries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 30/01/2014 13:46, Stefano Stabellini ha scritto:
> The following commit:
>
> commit 149f54b53b7666a3facd45e86eece60ce7d3b114
> Author: Paolo Bonzini <pbonzini@redhat.com>
> Date:   Fri May 24 12:59:37 2013 +0200
>
>     memory: add address_space_translate
>
> breaks Xen support in QEMU, in particular the Xen mapcache. The effect
> is that one Windows XP installation out of ten would end up with BSOD.
>
> The reason is that after this commit l in address_space_rw can span a
> page boundary, however qemu_get_ram_ptr still calls xen_map_cache asking
> to map a single page (if block->offset == 0).
>
> Fix the issue by reverting to the previous behaviour: do not return a
> length from address_space_translate_internal that can span a page
> boundary.
>
> Also in address_space_translate do not ignore the length returned by
> address_space_translate_internal.
>
> This patch should be backported to QEMU 1.6.x.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Signed-off-by: Anthony Perard <anthony.perard@citrix.com>

Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org

> ---
>  exec.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 667a718..f3797b7 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -251,7 +251,7 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
>                                   hwaddr *plen, bool resolve_subpage)
>  {
>      MemoryRegionSection *section;
> -    Int128 diff;
> +    Int128 diff, diff_page;
>
>      section = address_space_lookup_region(d, addr, resolve_subpage);
>      /* Compute offset within MemoryRegionSection */
> @@ -260,7 +260,9 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
>      /* Compute offset within MemoryRegion */
>      *xlat = addr + section->offset_within_region;
>
> +    diff_page = int128_make64(((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr);
>      diff = int128_sub(section->mr->size, int128_make64(addr));
> +    diff = int128_min(diff, diff_page);
>      *plen = int128_get64(int128_min(diff, int128_make64(*plen)));
>      return section;
>  }
> @@ -275,7 +277,7 @@ MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
>      hwaddr len = *plen;
>
>      for (;;) {
> -        section = address_space_translate_internal(as->dispatch, addr, &addr, plen, true);
> +        section = address_space_translate_internal(as->dispatch, addr, &addr, &len, true);
>          mr = section->mr;
>
>          if (!mr->iommu_ops) {
>
>
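
The clamping the patch performs can be sketched as follows (a hypothetical Python `clamp_len` helper standing in for the Int128 arithmetic in address_space_translate_internal(); 4 KiB pages assumed, and `addr` taken as the offset within the region):

```python
# Hypothetical sketch (not QEMU code): clamp a translated length so it
# never crosses a target-page boundary, mirroring what the patch does
# with Int128 arithmetic in address_space_translate_internal().
TARGET_PAGE_SIZE = 4096
TARGET_PAGE_MASK = ~(TARGET_PAGE_SIZE - 1)

def clamp_len(addr, plen, region_size):
    """Largest length starting at addr that stays within both the
    MemoryRegion (region_size bytes) and the current target page."""
    # Bytes remaining until the next page boundary.
    diff_page = ((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr
    # Bytes remaining in the memory region.
    diff = region_size - addr
    return min(diff, diff_page, plen)

# A request that would cross a page boundary is truncated at the boundary:
print(clamp_len(0x1ff0, 0x100, 0x10000))  # 16 bytes left in the page
```

With this clamp the caller (address_space_rw) may loop, but each mapping request stays within one page, which is what xen_map_cache expects.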


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 13:47:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 13:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8rxn-0007vx-Dg; Thu, 30 Jan 2014 13:47:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8rxl-0007vs-Sd
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 13:47:26 +0000
Received: from [85.158.143.35:44312] by server-1.bemta-4.messagelabs.com id
	D0/F0-31661-DE75AE25; Thu, 30 Jan 2014 13:47:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391089642!1945509!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1533 invoked from network); 30 Jan 2014 13:47:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 13:47:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98090329"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 13:47:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 08:47:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8rxh-0000gQ-PO;
	Thu, 30 Jan 2014 13:47:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8rxh-0003FV-J9;
	Thu, 30 Jan 2014 13:47:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24629-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 13:47:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24629: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24629 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24629/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24571
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. vs. 24571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg (which succeeded), but in the meantime prev
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated
    before the cmpxchg(), as after a successful cmpxchg the element
    might be immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
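
The fixed insertion loop described above can be sketched in Python (a hypothetical illustration; `cmpxchg` is simulated with a lock, whereas the real code uses the hardware compare-and-swap instruction):

```python
# Hypothetical sketch of the fixed mctelem-style insertion loop.
import threading

_lock = threading.Lock()

def cmpxchg(cell, old, new):
    """If cell[0] is old, store new; always return the value seen
    before the operation (x86 cmpxchg semantics)."""
    with _lock:
        seen = cell[0]
        if seen is old:
            cell[0] = new
        return seen

def push(head, elem):
    """Insert elem at the list head.  The snapshot 'old' is a local,
    so a concurrent consumer changing head[0] after our successful
    cmpxchg cannot make the success test fail and re-link elem."""
    while True:
        old = head[0]
        elem['next'] = old   # *linkp set before the CAS: elem may be
                             # consumed right after the CAS succeeds
        if cmpxchg(head, old, elem) is old:
            return
```

The buggy version compared the cmpxchg result against a fresh re-read of the head rather than the local snapshot, so a consumer running between the CAS and the comparison could make a successful CAS look like a failure, sending T1 around the loop again.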

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:07:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sGo-0000Ne-1x; Thu, 30 Jan 2014 14:07:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8sGl-0000NZ-Q3
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:07:04 +0000
Received: from [85.158.137.68:44436] by server-10.bemta-3.messagelabs.com id
	EC/65-07302-68C5AE25; Thu, 30 Jan 2014 14:07:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391090820!12291843!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18007 invoked from network); 30 Jan 2014 14:07:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:07:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98098057"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:06:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:06:59 -0500
Message-ID: <1391090818.29487.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Thu, 30 Jan 2014 14:06:58 +0000
In-Reply-To: <20140130130241.GB3441@citrix.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 13:02 +0000, Joby Poriyath wrote:
> On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
> > On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
> > > > > @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
> > > > >                  continue
> > > > >  
> > > > >              # new image
> > > > > -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> > > > > +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
> > > > 
> > > > Why is this necessary? fedora-19 also have the aformentioned "--class
> > > > red, --class gnu" yet is parsed happily.
> > > 
> > > A menuentry from RHEL 7 looks like this...
> > > 
> > > menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' 
> > > 
> > > So we need 'lazy' match with '.*?'.
> > 
> > ".*" already matches zero or more characters, so I'm not sure what ".*?"
> > means in addition to that, do you have a reference?
> 
> http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy

Thanks, pure punctuation is a bit tricky for a search engine...
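For the record, a minimal sketch of the difference, run against a shortened
form of the RHEL 7 line quoted above:

```python
import re

# Shortened form of the RHEL 7 menuentry line: two quoted strings,
# the title and the $menuentry_id_option argument.
l = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
     "$menuentry_id_option 'gnulinux-rescue-advanced' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', l)
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', l)

# Greedy .* runs to the *last* quote, so the captured "title" swallows
# the --class options and the id string; lazy .*? stops at the first.
assert greedy.group(1) == ("Red Hat Enterprise Linux' --class red --class os "
                           "$menuentry_id_option 'gnulinux-rescue-advanced")
assert lazy.group(1) == "Red Hat Enterprise Linux"
```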

> > Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> > the name itself, although you might have to split into handling " and '
> > separately to be more correct

Any thoughts on this?

I suppose it depends a bit on the rules for mixing quotes in grub, e.g. is

	menuentry "Ian's super cool Linux"

allowed?

On the other hand pygrub is very much best-effort, so as long as it works
with the current set of inputs we are aware of, .*? is fine.
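As a quick check (illustrative, not exhaustive), the tighter [^"\'] class
does reject a title containing the other quote character, while the lazy
pattern happens to cope with that case:

```python
import re

line = 'menuentry "Ian\'s super cool Linux" {'

# Disallowing both quote characters inside the title rejects a
# double-quoted title that contains an apostrophe...
strict = re.match(r'^menuentry ["\']([^"\']*)["\'] (.*){', line)
assert strict is None

# ...whereas the non-greedy pattern still matches: it keeps extending
# past the apostrophe until the pattern as a whole succeeds.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)
assert lazy.group(1) == "Ian's super cool Linux"
```

Handling " and ' as separate cases, as suggested above, would accept such
titles while still forbidding the matching quote inside them.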

> > 
> > Have you run this new regex over tools/pygrub/examples?
> 
> I ran this regex over examples. 

Great.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTg-0000xZ-Rb; Thu, 30 Jan 2014 14:20:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTf-0000xD-F1
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:23 +0000
Received: from [85.158.139.211:16135] by server-13.bemta-5.messagelabs.com id
	6E/CF-18801-6AF5AE25; Thu, 30 Jan 2014 14:20:22 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391091619!611812!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11034 invoked from network); 30 Jan 2014 14:20:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96136115"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-Qa	for
	xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:45 +0000
Message-ID: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [RFC PATCH 1/5] Support for running secondary emulators
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds the ioreq server interface which I mentioned in
my talk at the Xen developer summit in Edinburgh at the end of last year.
The code is based on work originally done by Julien Grall but has been
re-written to allow existing versions of QEMU to work unmodified.

The code is available in my xen.git [1] repo on xenbits, under the 'savannah'
branch, and I have also written a demo emulator to test the code, which can
be found in my demu.git [2] repo.

The modifications are broken down as follows:


Patch #1 basically just moves some code around to make subsequent patches
more obvious. The patch also removes the has_dm flag in hvmemul_do_io() as
it is no longer necessary to special-case PVH domains in this way. (The I/O
can be completed by hvm_send_assist_req() later, when it is discovered that
there is no shared ioreq page.)

Patch #2 again is largely code movement, from various places into a new
hvm_ioreq_server structure. There should be no functional change at this
stage as the ioreq server is still created at domain initialisation time (as
were its contents prior to this patch).

Patch #3 is the first functional change. The ioreq server struct
initialisation is now deferred until something actually tries to play with
the HVM parameters which reference it. In practice this is QEMU, which
needs to read the ioreq pfns so it can map them.

Patch #4 is the big one. This moves from a single ioreq server per domain
to a list. The server that is created when the HVM parameters are referenced
is given id 0 and is considered to be the 'catch all' server which is, after
all, how QEMU is used. Any secondary emulator, created using the new API
in xenctrl.h, will have id 1 or above and only gets ioreqs when I/O hits one
of its registered IO ranges or PCI devices.
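To make the routing concrete, here is a toy sketch of the catch-all versus
ranged selection described above (Python for brevity; all names are made up
for illustration and do not correspond to the actual hypervisor code):

```python
# Toy model of the Patch #4 dispatch rule: server 0 is the 'catch all'
# (QEMU); secondary emulators only see I/O that hits a registered range.

class IoreqServer:
    def __init__(self, sid):
        self.sid = sid
        self.ranges = []              # registered (start, end) port ranges

    def register_range(self, start, end):
        self.ranges.append((start, end))

    def claims(self, addr):
        return any(s <= addr <= e for s, e in self.ranges)

def select_server(servers, addr):
    # A secondary emulator (id >= 1) gets the ioreq only if the address
    # hits one of its registered ranges; otherwise id 0 handles it.
    for s in servers:
        if s.sid != 0 and s.claims(addr):
            return s
    return next(s for s in servers if s.sid == 0)

qemu = IoreqServer(0)                 # catch-all, created via HVM params
demu = IoreqServer(1)                 # secondary emulator via the new API
demu.register_range(0xc000, 0xc0ff)
servers = [qemu, demu]

assert select_server(servers, 0xc010).sid == 1   # inside demu's range
assert select_server(servers, 0x03f8).sid == 0   # unclaimed, catch-all
```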

Patch #5 pulls the PCI hotplug controller emulation into Xen. This is
necessary to allow a secondary emulator to hotplug a PCI device into the VM.
The code implements the controller in the same way as upstream QEMU and thus
the variant of the DSDT ASL used for upstream QEMU is retained.


There are no modifications to libxl to actually invoke a secondary emulator
at this stage. The only changes made are simply to increase the number of
special pages reserved for a VM to allow the use of more than one emulator
and call the new PCI hotplug API when attaching or detaching PCI devices.
The demo emulator can simply be invoked from a shell and will hotplug its
device onto the PCI bus (and remove it again when it's killed). The emulated
device is not an awful lot of use at this stage - it appears as a SCSI
controller with one IO BAR and one MEM BAR and has no intrinsic
functionality... but then it is only supposed to be a demo :-)

  Paul

[1] http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git
[2] http://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTi-0000xy-L5; Thu, 30 Jan 2014 14:20:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTh-0000xW-8v
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:25 +0000
Received: from [193.109.254.147:18501] by server-6.bemta-14.messagelabs.com id
	AA/54-03396-8AF5AE25; Thu, 30 Jan 2014 14:20:24 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391091620!907916!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17760 invoked from network); 30 Jan 2014 14:20:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98103493"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-US;
	Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:47 +0000
Message-ID: <1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq server
	abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect data structures concerning device emulation together into
a new struct hvm_ioreq_server.

Code that deals with the shared and buffered ioreq pages is extracted from
functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
and consolidated into a set of hvm_ioreq_server_XXX functions.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |    9 +-
 xen/include/asm-x86/hvm/vcpu.h   |    2 +-
 3 files changed, 229 insertions(+), 100 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 71a44db..a0eaadb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
-static ioreq_t *get_ioreq(struct vcpu *v)
+static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
 {
-    struct domain *d = v->domain;
-    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
-    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
-    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
+    shared_iopage_t *p = s->ioreq.va;
+    ASSERT(p != NULL);
+    return &p->vcpu_ioreq[id];
 }
 
 void hvm_do_resume(struct vcpu *v)
 {
+    struct hvm_ioreq_server *s;
     ioreq_t *p;
 
     check_wakeup_from_wait();
@@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
     if ( is_hvm_vcpu(v) )
         pt_restore_timer(v);
 
-    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
-    if ( !(p = get_ioreq(v)) )
+    s = v->arch.hvm_vcpu.ioreq_server;
+    v->arch.hvm_vcpu.ioreq_server = NULL;
+
+    if ( !s )
         goto check_inject_trap;
 
+    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
+    p = get_ioreq(s, v->vcpu_id);
     while ( p->state != STATE_IOREQ_NONE )
     {
         switch ( p->state )
@@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
             break;
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
+            wait_on_xen_event_channel(p->vp_eport,
                                       (p->state != STATE_IOREQ_READY) &&
                                       (p->state != STATE_IOREQ_INPROCESS));
             break;
@@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
 static void hvm_init_ioreq_page(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
-    memset(iorp, 0, sizeof(*iorp));
     spin_lock_init(&iorp->lock);
     domain_pause(d);
 }
@@ -541,6 +544,167 @@ static int handle_pvh_io(
     return X86EMUL_OKAY;
 }
 
+static int hvm_init_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    int i;
+
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        return -ENOMEM;
+
+    s->domain = d;
+
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+        s->ioreq_evtchn[i] = -1;
+    s->buf_ioreq_evtchn = -1;
+
+    hvm_init_ioreq_page(d, &s->ioreq);
+    hvm_init_ioreq_page(d, &s->buf_ioreq);
+
+    d->arch.hvm_domain.ioreq_server = s;
+    return 0;
+}
+
+static void hvm_deinit_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+
+    hvm_destroy_ioreq_page(d, &s->ioreq);
+    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
+
+    xfree(s);
+}
+
+static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
+{
+    struct domain *d = s->domain;
+
+    if ( s->ioreq.va != NULL )
+    {
+        shared_iopage_t *p = s->ioreq.va;
+        struct vcpu *v;
+
+        for_each_vcpu ( d, v )
+            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
+    }
+}
+
+static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    int rc;
+
+    /* Create ioreq event channel. */
+    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+    if ( rc < 0 )
+        goto done;
+
+    /* Register ioreq event channel. */
+    s->ioreq_evtchn[v->vcpu_id] = rc;
+
+    if ( v->vcpu_id == 0 )
+    {
+        /* Create bufioreq event channel. */
+        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+        if ( rc < 0 )
+            goto done;
+
+        s->buf_ioreq_evtchn = rc;
+    }
+
+    hvm_update_ioreq_server_evtchn(s);
+    rc = 0;
+
+done:
+    return rc;
+}
+
+static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    if ( v->vcpu_id == 0 )
+    {
+        if ( s->buf_ioreq_evtchn >= 0 )
+        {
+            free_xen_event_channel(v, s->buf_ioreq_evtchn);
+            s->buf_ioreq_evtchn = -1;
+        }
+    }
+
+    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
+    {
+        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
+        s->ioreq_evtchn[v->vcpu_id] = -1;
+    }
+}
+
+static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
+                                     int *p_port)
+{
+    int old_port, new_port;
+
+    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
+    if ( new_port < 0 )
+        return new_port;
+
+    /* xchg() ensures that only we call free_xen_event_channel(). */
+    old_port = xchg(p_port, new_port);
+    free_xen_event_channel(v, old_port);
+    return 0;
+}
+
+static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
+{
+    struct domain *d = s->domain;
+    struct vcpu *v;
+    int rc = 0;
+
+    domain_pause(d);
+
+    if ( d->vcpu[0] )
+    {
+        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
+        if ( rc < 0 )
+            goto done;
+    }
+
+    for_each_vcpu ( d, v )
+    {
+        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
+        if ( rc < 0 )
+            goto done;
+    }
+
+    hvm_update_ioreq_server_evtchn(s);
+
+    s->domid = domid;
+
+done:
+    domain_unpause(d);
+
+    return rc;
+}
+
+static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
+{
+    struct domain *d = s->domain;
+    int rc;
+
+    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
+    if ( rc < 0 )
+        return rc;
+
+    hvm_update_ioreq_server_evtchn(s);
+
+    return 0;
+}
+
+static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
+{
+    struct domain *d = s->domain;
+
+    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
+}
+
 int hvm_domain_initialise(struct domain *d)
 {
     int rc;
@@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    rc = hvm_init_ioreq_server(d);
+    if ( rc != 0 )
+        goto fail2;
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail2;
+        goto fail3;
 
     return 0;
 
+ fail3:
+    hvm_deinit_ioreq_server(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_deinit_ioreq_server(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
     struct domain *d = v->domain;
-    domid_t dm_domid;
+    struct hvm_ioreq_server *s;
 
     hvm_asid_flush_vcpu(v);
 
@@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
-    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
+    s = d->arch.hvm_domain.ioreq_server;
 
-    /* Create ioreq event channel. */
-    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
+    rc = hvm_ioreq_server_add_vcpu(s, v);
     if ( rc < 0 )
         goto fail6;
 
-    /* Register ioreq event channel. */
-    v->arch.hvm_vcpu.xen_port = rc;
-
-    if ( v->vcpu_id == 0 )
-    {
-        /* Create bufioreq event channel. */
-        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
-        if ( rc < 0 )
-            goto fail6;
-        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
-    }
-
-    spin_lock(&d->arch.hvm_domain.ioreq.lock);
-    if ( d->arch.hvm_domain.ioreq.va != NULL )
-        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
-
     if ( v->vcpu_id == 0 )
     {
         /* NB. All these really belong in hvm_domain_initialise(). */
@@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+
+    hvm_ioreq_server_remove_vcpu(s, v);
+
     nestedhvm_vcpu_destroy(v);
 
     free_compat_arg_xlat(v);
@@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
         vlapic_destroy(v);
 
     hvm_funcs.vcpu_destroy(v);
-
-    /* Event channel is already freed by evtchn_destroy(). */
-    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
 }
 
 void hvm_vcpu_down(struct vcpu *v)
@@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
 int hvm_buffered_io_send(ioreq_t *p)
 {
     struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
     buf_ioreq_t bp;
     /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
     int qw = 0;
@@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
+        return 0;
+
+    iorp = &s->buf_ioreq;
+    pg = iorp->va;
+
     /*
      * Return 0 for the cases we can't deal with:
      *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
@@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
     wmb();
     pg->write_pointer += qw ? 2 : 1;
 
-    notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
     spin_unlock(&iorp->lock);
     
     return 1;
@@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
 {
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
     ioreq_t *p;
 
     if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
         return 0; /* implicitly bins the i/o operation */
 
-    if ( !(p = get_ioreq(v)) )
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
         return 0;
 
+    p = get_ioreq(s, v->vcpu_id);
+
     if ( unlikely(p->state != STATE_IOREQ_NONE) )
     {
         /* This indicates a bug in the device model. Crash the domain. */
         gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p->state);
-        domain_crash(v->domain);
+        domain_crash(d);
         return 0;
     }
 
+    v->arch.hvm_vcpu.ioreq_server = s;
+
     p->dir = proto_p->dir;
     p->data_is_ptr = proto_p->data_is_ptr;
     p->type = proto_p->type;
@@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
     p->df = proto_p->df;
     p->data = proto_p->data;
 
-    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
+    prepare_wait_on_xen_event_channel(p->vp_eport);
 
     /*
      * Following happens /after/ blocking and setting up ioreq contents.
      * prepare_wait_on_xen_event_channel() is an implicit barrier.
      */
     p->state = STATE_IOREQ_READY;
-    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
+    notify_via_xen_event_channel(d, p->vp_eport);
 
     return 1;
 }
@@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
-                                     int *p_port)
-{
-    int old_port, new_port;
-
-    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
-    if ( new_port < 0 )
-        return new_port;
-
-    /* xchg() ensures that only we call free_xen_event_channel(). */
-    old_port = xchg(p_port, new_port);
-    free_xen_event_channel(v, old_port);
-    return 0;
-}
-
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     case HVMOP_get_param:
     {
         struct xen_hvm_param a;
-        struct hvm_ioreq_page *iorp;
+        struct hvm_ioreq_server *s;
         struct domain *d;
         struct vcpu *v;
 
@@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( rc )
             goto param_fail;
 
+        s = d->arch.hvm_domain.ioreq_server;
+
         if ( op == HVMOP_set_param )
         {
             rc = 0;
@@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             switch ( a.index )
             {
             case HVM_PARAM_IOREQ_PFN:
-                iorp = &d->arch.hvm_domain.ioreq;
-                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
-                    break;
-                spin_lock(&iorp->lock);
-                if ( iorp->va != NULL )
-                    /* Initialise evtchn port info if VCPUs already created. */
-                    for_each_vcpu ( d, v )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                spin_unlock(&iorp->lock);
+                rc = hvm_set_ioreq_server_pfn(s, a.value);
                 break;
             case HVM_PARAM_BUFIOREQ_PFN: 
-                iorp = &d->arch.hvm_domain.buf_ioreq;
-                rc = hvm_set_ioreq_page(d, iorp, a.value);
+                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
                 break;
             case HVM_PARAM_CALLBACK_IRQ:
                 hvm_set_callback_via(d, a.value);
@@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = 0;
-                domain_pause(d); /* safe to change per-vcpu xen_port */
-                if ( d->vcpu[0] )
-                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
-                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
-                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
-                if ( rc )
-                {
-                    domain_unpause(d);
-                    break;
-                }
-                iorp = &d->arch.hvm_domain.ioreq;
-                for_each_vcpu ( d, v )
-                {
-                    rc = hvm_replace_event_channel(v, a.value,
-                                                   &v->arch.hvm_vcpu.xen_port);
-                    if ( rc )
-                        break;
-
-                    spin_lock(&iorp->lock);
-                    if ( iorp->va != NULL )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                    spin_unlock(&iorp->lock);
-                }
-                domain_unpause(d);
+                rc = hvm_set_ioreq_server_domid(s, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4241,6 +4360,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         {
             switch ( a.index )
             {
+            case HVM_PARAM_BUFIOREQ_EVTCHN:
+                a.value = s->buf_ioreq_evtchn;
+                break;
             case HVM_PARAM_ACPI_S_STATE:
                 a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
                 break;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index b1e3187..4c039f8 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -41,10 +41,17 @@ struct hvm_ioreq_page {
     void *va;
 };
 
-struct hvm_domain {
+struct hvm_ioreq_server {
+    struct domain          *domain;
+    domid_t                domid;
     struct hvm_ioreq_page  ioreq;
+    int                    ioreq_evtchn[MAX_HVM_VCPUS];
     struct hvm_ioreq_page  buf_ioreq;
+    int                    buf_ioreq_evtchn;
+};
 
+struct hvm_domain {
+    struct hvm_ioreq_server *ioreq_server;
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..4c9d7ee 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -138,7 +138,7 @@ struct hvm_vcpu {
     spinlock_t          tm_lock;
     struct list_head    tm_list;
 
-    int                 xen_port;
+    struct hvm_ioreq_server *ioreq_server;
 
     bool_t              flag_dr_dirty;
     bool_t              debug_state_latch;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:49 +0000
Message-ID: <1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for multiple
	servers

The legacy 'catch-all' server is always created with id 0. Secondary
servers have ids ranging from 1 up to a limit set by the toolstack via
the 'max_emulators' build info field. This defaults to 1, so ordinarily
no extra special pages are reserved for secondary emulators. It may be
increased using the secondary_device_emulators parameter in xl.cfg(5).

Because of the re-arrangement of the special pages in a previous patch,
only the new HVM_PARAM_NR_IOREQ_SERVERS parameter is needed to determine
the layout of the shared pages for multiple emulators. Guests migrated
in from hosts without this patch will lack the save record which stores
the new parameter, and so are assumed to have had only a single
emulator.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 docs/man/xl.cfg.pod.5            |    7 +
 tools/libxc/xc_domain.c          |  175 ++++++++
 tools/libxc/xc_domain_restore.c  |   20 +
 tools/libxc/xc_domain_save.c     |   12 +
 tools/libxc/xc_hvm_build_x86.c   |   25 +-
 tools/libxc/xenctrl.h            |   41 ++
 tools/libxc/xenguest.h           |    2 +
 tools/libxc/xg_save_restore.h    |    1 +
 tools/libxl/libxl.h              |    8 +
 tools/libxl/libxl_create.c       |    3 +
 tools/libxl/libxl_dom.c          |    1 +
 tools/libxl/libxl_types.idl      |    1 +
 tools/libxl/xl_cmdimpl.c         |    3 +
 xen/arch/x86/hvm/hvm.c           |  916 +++++++++++++++++++++++++++++++++++---
 xen/arch/x86/hvm/io.c            |    2 +-
 xen/include/asm-x86/hvm/domain.h |   21 +-
 xen/include/asm-x86/hvm/hvm.h    |    1 +
 xen/include/asm-x86/hvm/vcpu.h   |    2 +-
 xen/include/public/hvm/hvm_op.h  |   70 +++
 xen/include/public/hvm/ioreq.h   |    1 +
 xen/include/public/hvm/params.h  |    4 +-
 21 files changed, 1230 insertions(+), 86 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9941395..9aa9958 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1277,6 +1277,13 @@ specified, enabling the use of XenServer PV drivers in the guest.
 This parameter only takes effect when device_model_version=qemu-xen.
 See F<docs/misc/pci-device-reservations.txt> for more information.
 
+=item B<secondary_device_emulators=NUMBER>
+
+If secondary device emulators (i.e. in addition to qemu-xen or
+qemu-xen-traditional) are to be invoked to support the guest then
+this parameter can be set to the number of such emulators. The
+default value is zero.
+
 =back
 
 =head2 Device-Model Options
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..c64d15a 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1246,6 +1246,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
     return rc;
 }
 
+int xc_hvm_create_ioreq_server(xc_interface *xch,
+                               domid_t domid,
+                               ioservid_t *id)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_create_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    rc = do_xen_hypercall(xch, &hypercall);
+    *id = arg->id;
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_get_ioreq_server_info(xc_interface *xch,
+                                 domid_t domid,
+                                 ioservid_t id,
+                                 xen_pfn_t *pfn,
+                                 xen_pfn_t *buf_pfn,
+                                 evtchn_port_t *buf_port)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_get_ioreq_server_info;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    rc = do_xen_hypercall(xch, &hypercall);
+    if ( rc != 0 )
+        goto done;
+
+    if ( pfn )
+        *pfn = arg->pfn;
+
+    if ( buf_pfn )
+        *buf_pfn = arg->buf_pfn;
+
+    if ( buf_port )
+        *buf_port = arg->buf_port;
+
+done:
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
+                                        ioservid_t id, int is_mmio,
+                                        uint64_t start, uint64_t end)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_io_range_to_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->start = start;
+    arg->end = end;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
+                                            ioservid_t id, int is_mmio,
+                                            uint64_t start)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_io_range_from_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_unmap_io_range_from_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->start = start;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
+                                      ioservid_t id, uint16_t bdf)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_pcidev_to_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_map_pcidev_to_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->bdf = bdf;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
+                                          ioservid_t id, uint16_t bdf)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_pcidev_from_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_unmap_pcidev_from_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->bdf = bdf;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_destroy_ioreq_server(xc_interface *xch,
+                                domid_t domid,
+                                ioservid_t id)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_destroy_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
 int xc_domain_setdebugging(xc_interface *xch,
                            uint32_t domid,
                            unsigned int enable)
diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index ca2fb51..305e4b8 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -746,6 +746,7 @@ typedef struct {
     uint64_t acpi_ioport_location;
     uint64_t viridian;
     uint64_t vm_generationid_addr;
+    uint64_t nr_ioreq_servers;
 
     struct toolstack_data_t tdata;
 } pagebuf_t;
@@ -996,6 +997,16 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
         DPRINTF("read generation id buffer address");
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
+    case XC_SAVE_ID_HVM_NR_IOREQ_SERVERS:
+        /* Skip padding 4 bytes then read the number of IOREQ servers. */
+        if ( RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint32_t)) ||
+             RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint64_t)) )
+        {
+            PERROR("error reading the number of IOREQ servers");
+            return -1;
+        }
+        return pagebuf_get_one(xch, ctx, buf, fd, dom);
+
     default:
         if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
             ERROR("Max batch size exceeded (%d). Giving up.", count);
@@ -1755,6 +1766,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     if (pagebuf.viridian != 0)
         xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
 
+    if ( hvm ) {
+        int nr_ioreq_servers = pagebuf.nr_ioreq_servers;
+
+        if ( nr_ioreq_servers == 0 )
+            nr_ioreq_servers = 1;
+
+        xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS, nr_ioreq_servers);
+    }
+
     if (pagebuf.acpi_ioport_location == 1) {
         DBGPRINTF("Use new firmware ioport from the checkpoint\n");
         xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 42c4752..3293e29 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -1731,6 +1731,18 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             PERROR("Error when writing the viridian flag");
             goto out;
         }
+
+        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVERS;
+        chunk.data = 0;
+        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
+                         (unsigned long *)&chunk.data);
+
+        if ( (chunk.data != 0) &&
+             wrexact(io_fd, &chunk, sizeof(chunk)) )
+        {
+            PERROR("Error when writing the number of IOREQ servers");
+            goto out;
+        }
     }
 
     if ( callbacks != NULL && callbacks->toolstack_save != NULL )
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index f24f2a1..bbe5def 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -45,7 +45,7 @@
 #define SPECIALPAGE_IDENT_PT 4
 #define SPECIALPAGE_CONSOLE  5
 #define SPECIALPAGE_IOREQ    6
-#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
+#define NR_SPECIAL_PAGES(n)  SPECIALPAGE_IOREQ + (2 * n) /* ioreq server needs 2 pages */
 #define special_pfn(x) (0xff000u - (x))
 
 static int modules_init(struct xc_hvm_build_args *args,
@@ -83,7 +83,8 @@ static int modules_init(struct xc_hvm_build_args *args,
 }
 
 static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
-                           uint64_t mmio_start, uint64_t mmio_size)
+                           uint64_t mmio_start, uint64_t mmio_size,
+                           int max_emulators)
 {
     struct hvm_info_table *hvm_info = (struct hvm_info_table *)
         (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
@@ -111,7 +112,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
     /* Memory parameters. */
     hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
-    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
+    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES(max_emulators);
 
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
@@ -254,6 +255,10 @@ static int setup_guest(xc_interface *xch,
         stat_1gb_pages = 0;
     int pod_mode = 0;
     int claim_enabled = args->claim_enabled;
+    int max_emulators = args->max_emulators;
+
+    if ( max_emulators < 1 )
+        goto error_out;
 
     if ( nr_pages > target_pages )
         pod_mode = XENMEMF_populate_on_demand;
@@ -458,7 +463,8 @@ static int setup_guest(xc_interface *xch,
               xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
               HVM_INFO_PFN)) == NULL )
         goto error_out;
-    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
+    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size,
+                   max_emulators);
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Allocate and clear special pages. */
@@ -470,17 +476,18 @@ static int setup_guest(xc_interface *xch,
             "  STORE:     %"PRI_xen_pfn"\n"
             "  IDENT_PT:  %"PRI_xen_pfn"\n"
             "  CONSOLE:   %"PRI_xen_pfn"\n"
-            "  IOREQ:     %"PRI_xen_pfn"\n",
-            NR_SPECIAL_PAGES,
+            "  IOREQ(%02d): %"PRI_xen_pfn"\n",
+            NR_SPECIAL_PAGES(max_emulators),
             (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
             (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
             (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
             (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
             (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
             (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
+            max_emulators * 2,
             (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
 
-    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
+    for ( i = 0; i < NR_SPECIAL_PAGES(max_emulators); i++ )
     {
         xen_pfn_t pfn = special_pfn(i);
         rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
@@ -506,7 +513,9 @@ static int setup_guest(xc_interface *xch,
     xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
                      special_pfn(SPECIALPAGE_IOREQ));
     xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
-                     special_pfn(SPECIALPAGE_IOREQ) - 1);
+                     special_pfn(SPECIALPAGE_IOREQ) - max_emulators);
+    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
+                     max_emulators);
 
     /*
      * Identity-map page table is required for running with CR0.PG=0 when
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..142aaea 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1801,6 +1801,47 @@ void xc_clear_last_error(xc_interface *xch);
 int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
 int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
 
+/*
+ * IOREQ server API
+ */
+int xc_hvm_create_ioreq_server(xc_interface *xch,
+			       domid_t domid,
+			       ioservid_t *id);
+
+int xc_hvm_get_ioreq_server_info(xc_interface *xch,
+				 domid_t domid,
+				 ioservid_t id,
+				 xen_pfn_t *pfn,
+				 xen_pfn_t *buf_pfn,
+				 evtchn_port_t *buf_port);
+
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch,
+					domid_t domid,
+                                        ioservid_t id,
+					int is_mmio,
+                                        uint64_t start,
+					uint64_t end);
+
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
+					    domid_t domid,
+                                            ioservid_t id,
+					    int is_mmio,
+                                            uint64_t start);
+
+int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch,
+				      domid_t domid,
+                                      ioservid_t id,
+				      uint16_t bdf);
+
+int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch,
+					  domid_t domid,
+					  ioservid_t id,
+					  uint16_t bdf);
+
+int xc_hvm_destroy_ioreq_server(xc_interface *xch,
+				domid_t domid,
+				ioservid_t id);
+
 /* HVM guest pass-through */
 int xc_assign_device(xc_interface *xch,
                      uint32_t domid,
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index a0e30e1..8930ac0 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -234,6 +234,8 @@ struct xc_hvm_build_args {
     struct xc_hvm_firmware_module smbios_module;
     /* Whether to use claim hypercall (1 - enable, 0 - disable). */
     int claim_enabled;
+    /* Maximum number of emulators for VM */
+    int max_emulators;
 };
 
 /**
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index f859621..5170b7f 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -259,6 +259,7 @@
 #define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
 #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
 #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
+#define XC_SAVE_ID_HVM_NR_IOREQ_SERVERS -19
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..b679957 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -95,6 +95,14 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS indicates that the
+ * max_emulators field is present in the hvm sections of
+ * libxl_domain_build_info. This field can be used to reserve
+ * extra special pages for secondary device emulators.
+ */
+#define LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..cce93d9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -330,6 +330,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 
         libxl_defbool_setdefault(&b_info->u.hvm.gfx_passthru, false);
 
+        if (b_info->u.hvm.max_emulators < 1)
+            b_info->u.hvm.max_emulators = 1;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         libxl_defbool_setdefault(&b_info->u.pv.e820_host, false);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..9de06f9 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -637,6 +637,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     args.mem_size = (uint64_t)(info->max_memkb - info->video_memkb) << 10;
     args.mem_target = (uint64_t)(info->target_memkb - info->video_memkb) << 10;
     args.claim_enabled = libxl_defbool_val(info->claim_mode);
+    args.max_emulators = info->u.hvm.max_emulators;
     if (libxl__domain_firmware(gc, info, &args)) {
         LOG(ERROR, "initializing domain firmware failed");
         goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b707159 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -372,6 +372,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("xen_platform_pci", libxl_defbool),
                                        ("usbdevice_list",   libxl_string_list),
                                        ("vendor_device",    libxl_vendor_device),
+                                       ("max_emulators",    integer),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..c65f4f4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1750,6 +1750,9 @@ skip_vfb:
 
             b_info->u.hvm.vendor_device = d;
         }
+
+        if (!xlu_cfg_get_long (config, "secondary_device_emulators", &l, 0))
+            b_info->u.hvm.max_emulators = l + 1;
     }
 
     xlu_cfg_destroy(config);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d9874fb..5f9e728 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -379,21 +379,23 @@ static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
 void hvm_do_resume(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct list_head *entry, *next;
 
     check_wakeup_from_wait();
 
     if ( is_hvm_vcpu(v) )
         pt_restore_timer(v);
 
-    s = v->arch.hvm_vcpu.ioreq_server;
-    v->arch.hvm_vcpu.ioreq_server = NULL;
-
-    if ( s )
+    list_for_each_safe ( entry, next, &v->arch.hvm_vcpu.ioreq_server_list )
     {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                vcpu_list_entry[v->vcpu_id]);
         ioreq_t *p = get_ioreq(s, v->vcpu_id);
 
         hvm_wait_on_io(d, p);
+
+        list_del_init(entry);
     }
 
     /* Inject pending hw/sw trap */
@@ -531,6 +533,83 @@ static int hvm_print_line(
     return X86EMUL_OKAY;
 }
 
+static int hvm_access_cf8(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *curr = current;
+    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
+    int rc;
+
+    BUG_ON(port < 0xcf8);
+    port -= 0xcf8;
+
+    spin_lock(&hd->pci_lock);
+
+    if ( dir == IOREQ_WRITE )
+    {
+        switch ( bytes )
+        {
+        case 4:
+            hd->pci_cf8 = *val;
+            break;
+
+        case 2:
+        {
+            uint32_t mask = 0xffff << (port * 8);
+            uint32_t subval = *val << (port * 8);
+
+            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
+                          (subval & mask);
+            break;
+        }
+
+        case 1:
+        {
+            uint32_t mask = 0xff << (port * 8);
+            uint32_t subval = *val << (port * 8);
+
+            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
+                          (subval & mask);
+            break;
+        }
+
+        default:
+            break;
+        }
+
+        /* We always need to fall through to the catch-all emulator */
+        rc = X86EMUL_UNHANDLEABLE;
+    }
+    else
+    {
+        switch ( bytes )
+        {
+        case 4:
+            *val = hd->pci_cf8;
+            rc = X86EMUL_OKAY;
+            break;
+
+        case 2:
+            *val = (hd->pci_cf8 >> (port * 8)) & 0xffff;
+            rc = X86EMUL_OKAY;
+            break;
+
+        case 1:
+            *val = (hd->pci_cf8 >> (port * 8)) & 0xff;
+            rc = X86EMUL_OKAY;
+            break;
+
+        default:
+            rc = X86EMUL_UNHANDLEABLE;
+            break;
+        }
+    }
+
+    spin_unlock(&hd->pci_lock);
+
+    return rc;
+}
+
 static int handle_pvh_io(
     int dir, uint32_t port, uint32_t bytes, uint32_t *val)
 {
@@ -590,6 +669,8 @@ done:
 
 static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
 {
+    list_del_init(&s->vcpu_list_entry[v->vcpu_id]);
+
     if ( v->vcpu_id == 0 )
     {
         if ( s->buf_ioreq_evtchn >= 0 )
@@ -606,7 +687,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
     }
 }
 
-static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
+static int hvm_create_ioreq_server(struct domain *d, ioservid_t id, domid_t domid)
 {
     struct hvm_ioreq_server *s;
     int i;
@@ -614,34 +695,47 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
     struct vcpu *v;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -EEXIST;
-    if ( d->arch.hvm_domain.ioreq_server != NULL )
-        goto fail_exist;
+    list_for_each_entry ( s, 
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto fail_exist;
+    }
 
-    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
 
     rc = -ENOMEM;
     s = xzalloc(struct hvm_ioreq_server);
     if ( !s )
         goto fail_alloc;
 
+    s->id = id;
     s->domain = d;
     s->domid = domid;
+    INIT_LIST_HEAD(&s->domain_list_entry);
 
     for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+    {
         s->ioreq_evtchn[i] = -1;
+        INIT_LIST_HEAD(&s->vcpu_list_entry[i]);
+    }
     s->buf_ioreq_evtchn = -1;
 
     /* Initialize shared pages */
-    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
 
     hvm_init_ioreq_page(s, 0);
     if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
         goto fail_set_ioreq;
 
-    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
 
     hvm_init_ioreq_page(s, 1);
     if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
@@ -653,7 +747,8 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
             goto fail_add_vcpu;
     }
 
-    d->arch.hvm_domain.ioreq_server = s;
+    list_add(&s->domain_list_entry,
+             &d->arch.hvm_domain.ioreq_server_list);
 
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
@@ -673,22 +768,30 @@ fail_exist:
     return rc;
 }
 
-static void hvm_destroy_ioreq_server(struct domain *d)
+static void hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_server *s, *next;
     struct vcpu *v;
 
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+    list_for_each_entry_safe ( s,
+                               next,
+                               &d->arch.hvm_domain.ioreq_server_list,
+                               domain_list_entry)
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    s = d->arch.hvm_domain.ioreq_server;
-    if ( !s )
-        goto done;
+    goto done;
+
+found:
+    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
 
     domain_pause(d);
 
-    d->arch.hvm_domain.ioreq_server = NULL;
+    list_del_init(&s->domain_list_entry);
 
     for_each_vcpu ( d, v )
         hvm_ioreq_server_remove_vcpu(s, v);
@@ -704,21 +807,186 @@ done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 }
 
-static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
+static int hvm_get_ioreq_server_buf_port(struct domain *d, ioservid_t id, evtchn_port_t *port)
+{
+    struct list_head *entry;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -ENOENT;
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( s->id == id )
+        {
+            *port = s->buf_ioreq_evtchn;
+            rc = 0;
+            break;
+        }
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_get_ioreq_server_pfn(struct domain *d, ioservid_t id, int buf, xen_pfn_t *pfn)
+{
+    struct list_head *entry;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -ENOENT;
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( s->id == id )
+        {
+            if ( buf )
+                *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
+            else
+                *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
+
+            rc = 0;
+            break;
+        }
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                            int is_mmio, uint64_t start, uint64_t end)
 {
     struct hvm_ioreq_server *s;
+    struct hvm_io_range *x;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    x = xmalloc(struct hvm_io_range);
+    if ( x == NULL )
+        return -ENOMEM;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    rc = -ENOENT;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
+
+    goto fail;
+
+found:
+    x->start = start;
+    x->end = end;
+
+    if ( is_mmio )
+    {
+        x->next = s->mmio_range_list;
+        s->mmio_range_list = x;
+    }
+    else
+    {
+        x->next = s->portio_range_list;
+        s->portio_range_list = x;
+    }
+
+    gdprintk(XENLOG_DEBUG, "%d:%u: +%s %"PRIX64" - %"PRIX64"\n",
+             d->domain_id,
+             s->id,
+             ( is_mmio ) ? "MMIO" : "PORTIO",
+             x->start,
+             x->end);
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    xfree(x);
+
+    return rc;
+}
+
+static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                                int is_mmio, uint64_t start)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_io_range *x, **xp;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    *port = s->buf_ioreq_evtchn;
-    rc = 0;
+    goto done;
+
+found:
+    if ( is_mmio )
+    {
+        x = s->mmio_range_list;
+        xp = &s->mmio_range_list;
+    }
+    else
+    {
+        x = s->portio_range_list;
+        xp = &s->portio_range_list;
+    }
+
+    while ( (x != NULL) && (start != x->start) )
+    {
+        xp = &x->next;
+        x = x->next;
+    }
+
+    if ( x != NULL )
+    {
+        gdprintk(XENLOG_DEBUG, "%d:%u: -%s %"PRIX64" - %"PRIX64"\n",
+                 d->domain_id,
+                 s->id,
+                 ( is_mmio ) ? "MMIO" : "PORTIO",
+                 x->start,
+                 x->end);
+
+        *xp = x->next;
+        xfree(x);
+        rc = 0;
+    }
 
 done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
@@ -726,25 +994,98 @@ done:
     return rc;
 }
 
-static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
+static int hvm_map_pcidev_to_ioreq_server(struct domain *d, ioservid_t id,
+                                          uint16_t bdf)
 {
     struct hvm_ioreq_server *s;
+    struct hvm_pcidev *x;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    x = xmalloc(struct hvm_pcidev);
+    if ( x == NULL )
+        return -ENOMEM;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    rc = -ENOENT;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
+
+    goto fail;
+
+found:
+    x->bdf = bdf;
+
+    x->next = s->pcidev_list;
+    s->pcidev_list = x;
+
+    gdprintk(XENLOG_DEBUG, "%d:%u: +PCIDEV %04X\n",
+             d->domain_id,
+             s->id,
+             x->bdf);
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    xfree(x);
+
+    return rc;
+}
+
+static int hvm_unmap_pcidev_from_ioreq_server(struct domain *d, ioservid_t id,
+                                              uint16_t bdf)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_pcidev *x, **xp;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    if ( buf )
-        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
-    else
-        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    goto done;
 
-    rc = 0;
+found:
+    x = s->pcidev_list;
+    xp = &s->pcidev_list;
+
+    while ( (x != NULL) && (bdf != x->bdf) )
+    {
+        xp = &x->next;
+        x = x->next;
+    }
+    if ( x != NULL )
+    {
+        gdprintk(XENLOG_DEBUG, "%d:%u: -PCIDEV %04X\n",
+                 d->domain_id,
+                 s->id,
+                 x->bdf);
+
+        *xp = x->next;
+        xfree(x);
+        rc = 0;
+    }
 
 done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
@@ -752,6 +1093,73 @@ done:
     return rc;
 }
 
+static int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct list_head *entry;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
+            goto fail;
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct list_head *entry;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
+static void hvm_destroy_all_ioreq_servers(struct domain *d)
+{
+    ioservid_t id;
+
+    for ( id = 0;
+          id < d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
+          id++ )
+        hvm_destroy_ioreq_server(d, id);
+}
+
 static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
                                      int *p_port)
 {
@@ -767,21 +1175,30 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
+static int hvm_set_ioreq_server_domid(struct domain *d, ioservid_t id, domid_t domid)
 {
     struct hvm_ioreq_server *s;
     struct vcpu *v;
     int rc = 0;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
     domain_pause(d);
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    goto done;
 
+found:
     rc = 0;
     if ( s->domid == domid )
         goto done;
@@ -838,7 +1255,9 @@ int hvm_domain_initialise(struct domain *d)
 
     }
 
+    INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server_list);
     spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
+    spin_lock_init(&d->arch.hvm_domain.pci_lock);
     spin_lock_init(&d->arch.hvm_domain.irq_lock);
     spin_lock_init(&d->arch.hvm_domain.uc_lock);
 
@@ -880,6 +1299,7 @@ int hvm_domain_initialise(struct domain *d)
     rtc_init(d);
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
@@ -910,7 +1330,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_destroy_ioreq_server(d);
+    hvm_destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1422,13 +1842,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
 
     hvm_asid_flush_vcpu(v);
 
     spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
     INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
 
+    INIT_LIST_HEAD(&v->arch.hvm_vcpu.ioreq_server_list);
+
     rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
     if ( rc != 0 )
         goto fail1;
@@ -1465,16 +1886,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
-    if ( s )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc < 0 )
-            goto fail6;
-    }
+    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    if ( rc < 0 )
+        goto fail6;
 
     if ( v->vcpu_id == 0 )
     {
@@ -1510,14 +1924,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
 void hvm_vcpu_destroy(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    if ( s )
-        hvm_ioreq_server_remove_vcpu(s, v);
+    hvm_all_ioreq_servers_remove_vcpu(d, v);
 
     nestedhvm_vcpu_destroy(v);
 
@@ -1556,6 +1964,101 @@ void hvm_vcpu_down(struct vcpu *v)
     }
 }
 
+static struct hvm_ioreq_server *hvm_select_ioreq_server(struct vcpu *v, ioreq_t *p)
+{
+#define BDF(cf8) (((cf8) & 0x00ffff00) >> 8)
+
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc )
+    {
+        /* PCI config data cycle */
+        type = IOREQ_TYPE_PCI_CONFIG;
+
+        spin_lock(&d->arch.hvm_domain.pci_lock);
+        addr = d->arch.hvm_domain.pci_cf8 + (p->addr & 3);
+        spin_unlock(&d->arch.hvm_domain.pci_lock);
+    }
+    else
+    {
+        type = p->type;
+        addr = p->addr;
+    }
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    switch ( type )
+    {
+    case IOREQ_TYPE_COPY:
+    case IOREQ_TYPE_PIO:
+    case IOREQ_TYPE_PCI_CONFIG:
+        break;
+    default:
+        goto done;
+    }
+
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        switch ( type )
+        {
+            case IOREQ_TYPE_COPY:
+            case IOREQ_TYPE_PIO: {
+                struct hvm_io_range *x;
+
+                x = (type == IOREQ_TYPE_COPY) ?
+                    s->mmio_range_list :
+                    s->portio_range_list;
+
+                for ( ; x; x = x->next )
+                {
+                    if ( (addr >= x->start) && (addr <= x->end) )
+                        goto found;
+                }
+                break;
+            }
+            case IOREQ_TYPE_PCI_CONFIG: {
+                struct hvm_pcidev *x;
+
+                x = s->pcidev_list;
+
+                for ( ; x; x = x->next )
+                {
+                    if ( BDF(addr) == x->bdf ) {
+                        p->type = type;
+                        p->addr = addr;
+                        goto found;
+                    }
+                }
+                break;
+            }
+        }
+    }
+
+done:
+    /* The catch-all server has id 0 */
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == 0 )
+            goto found;
+    }
+
+    s = NULL;
+
+found:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    return s;
+
+#undef BDF
+}
+
 int hvm_buffered_io_send(ioreq_t *p)
 {
     struct vcpu *v = current;
@@ -1570,10 +2073,7 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
+    s = hvm_select_ioreq_server(v, p);
     if ( !s )
         return 0;
 
@@ -1661,7 +2161,9 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
         return 0;
     }
 
-    v->arch.hvm_vcpu.ioreq_server = s;
+    ASSERT(list_empty(&s->vcpu_list_entry[v->vcpu_id]));
+    list_add(&s->vcpu_list_entry[v->vcpu_id],
+             &v->arch.hvm_vcpu.ioreq_server_list);
 
     p->dir = proto_p->dir;
     p->data_is_ptr = proto_p->data_is_ptr;
@@ -1686,24 +2188,42 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
 {
-    struct domain *d = v->domain;
     struct hvm_ioreq_server *s;
 
-    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
+    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
 
     if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
         return 0;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
+    s = hvm_select_ioreq_server(v, p);
     if ( !s )
         return 0;
 
     return hvm_send_assist_req_to_server(s, v, p);
 }
 
+void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p)
+{
+    struct domain *d = v->domain;
+    struct list_head *entry;
+
+    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        (void) hvm_send_assist_req_to_server(s, v, p);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
 void hvm_hlt(unsigned long rflags)
 {
     struct vcpu *curr = current;
@@ -4286,6 +4806,215 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
+static int hvmop_create_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
+{
+    struct domain *curr_d = current->domain;
+    xen_hvm_create_ioreq_server_t op;
+    struct domain *d;
+    ioservid_t id;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = -ENOSPC;
+    for ( id = 1;
+          id <  d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
+          id++ )
+    {
+        rc = hvm_create_ioreq_server(d, id, curr_d->domain_id);
+        if ( rc == -EEXIST )
+            continue;
+
+        break;
+    }
+
+    if ( rc == -EEXIST )
+        rc = -ENOSPC;
+
+    if ( rc < 0 )
+        goto out;
+
+    op.id = id;
+
+    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_get_ioreq_server_info(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
+{
+    xen_hvm_get_ioreq_server_info_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 0, &op.pfn)) < 0 )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 1, &op.buf_pfn)) < 0 )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_buf_port(d, op.id, &op.buf_port)) < 0 )
+        goto out;
+
+    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_map_io_range_to_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_io_range_to_ioreq_server_t) uop)
+{
+    xen_hvm_map_io_range_to_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.is_mmio,
+                                          op.start, op.end);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_unmap_io_range_from_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_io_range_from_ioreq_server_t) uop)
+{
+    xen_hvm_unmap_io_range_from_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.is_mmio,
+                                              op.start);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_map_pcidev_to_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_pcidev_to_ioreq_server_t) uop)
+{
+    xen_hvm_map_pcidev_to_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_map_pcidev_to_ioreq_server(d, op.id, op.bdf);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_unmap_pcidev_from_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_pcidev_from_ioreq_server_t) uop)
+{
+    xen_hvm_unmap_pcidev_from_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_unmap_pcidev_from_ioreq_server(d, op.id, op.bdf);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_destroy_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
+{
+    xen_hvm_destroy_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    hvm_destroy_ioreq_server(d, op.id);
+    rc = 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -4294,6 +5023,41 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( op )
     {
+    case HVMOP_create_ioreq_server:
+        rc = hvmop_create_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
+        break;
+
+    case HVMOP_get_ioreq_server_info:
+        rc = hvmop_get_ioreq_server_info(
+            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
+        break;
+
+    case HVMOP_map_io_range_to_ioreq_server:
+        rc = hvmop_map_io_range_to_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_map_io_range_to_ioreq_server_t));
+        break;
+
+    case HVMOP_unmap_io_range_from_ioreq_server:
+        rc = hvmop_unmap_io_range_from_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_unmap_io_range_from_ioreq_server_t));
+        break;
+
+    case HVMOP_map_pcidev_to_ioreq_server:
+        rc = hvmop_map_pcidev_to_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_map_pcidev_to_ioreq_server_t));
+        break;
+
+    case HVMOP_unmap_pcidev_from_ioreq_server:
+        rc = hvmop_unmap_pcidev_from_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_unmap_pcidev_from_ioreq_server_t));
+        break;
+
+    case HVMOP_destroy_ioreq_server:
+        rc = hvmop_destroy_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
+        break;
+
     case HVMOP_set_param:
     case HVMOP_get_param:
     {
@@ -4382,9 +5146,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = hvm_create_ioreq_server(d, a.value);
+                rc = hvm_create_ioreq_server(d, 0, a.value);
                 if ( rc == -EEXIST )
-                    rc = hvm_set_ioreq_server_domid(d, a.value);
+                    rc = hvm_set_ioreq_server_domid(d, 0, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4449,6 +5213,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value > SHUTDOWN_MAX )
                     rc = -EINVAL;
                 break;
+            case HVM_PARAM_NR_IOREQ_SERVERS:
+                if ( d == current->domain )
+                    rc = -EPERM;
+                break;
             }
 
             if ( rc == 0 ) 
@@ -4483,7 +5251,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             case HVM_PARAM_BUFIOREQ_PFN:
             case HVM_PARAM_BUFIOREQ_EVTCHN:
                 /* May need to create server */
-                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
+                rc = hvm_create_ioreq_server(d, 0, curr_d->domain_id);
                 if ( rc != 0 && rc != -EEXIST )
                     goto param_fail;
 
@@ -4492,7 +5260,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_IOREQ_PFN: {
                     xen_pfn_t pfn;
 
-                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 0, &pfn)) < 0 )
                         goto param_fail;
 
                     a.value = pfn;
@@ -4501,7 +5269,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_BUFIOREQ_PFN: {
                     xen_pfn_t pfn;
 
-                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 1, &pfn)) < 0 )
                         goto param_fail;
 
                     a.value = pfn;
@@ -4510,7 +5278,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_BUFIOREQ_EVTCHN: {
                     evtchn_port_t port;
 
-                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_buf_port(d, 0, &port)) < 0 )
                         goto param_fail;
 
                     a.value = port;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 576641c..a0d76b2 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -78,7 +78,7 @@ void send_invalidate_req(void)
     p->dir = IOREQ_WRITE;
     p->data = ~0UL; /* flush all */
 
-    (void)hvm_send_assist_req(v, p);
+    hvm_broadcast_assist_req(v, p);
 }
 
 int handle_mmio(void)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index e750ef0..93dcec1 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -41,19 +41,38 @@ struct hvm_ioreq_page {
     void *va;
 };
 
+struct hvm_io_range {
+    struct hvm_io_range *next;
+    uint64_t            start, end;
+};
+
+struct hvm_pcidev {
+    struct hvm_pcidev *next;
+    uint16_t          bdf;
+};
+
 struct hvm_ioreq_server {
+    struct list_head       domain_list_entry;
+    struct list_head       vcpu_list_entry[MAX_HVM_VCPUS];
+    ioservid_t             id;
     struct domain          *domain;
     domid_t                domid;
     struct hvm_ioreq_page  ioreq;
     int                    ioreq_evtchn[MAX_HVM_VCPUS];
     struct hvm_ioreq_page  buf_ioreq;
     int                    buf_ioreq_evtchn;
+    struct hvm_io_range    *mmio_range_list;
+    struct hvm_io_range    *portio_range_list;
+    struct hvm_pcidev      *pcidev_list;
 };
 
 struct hvm_domain {
-    struct hvm_ioreq_server *ioreq_server;
+    struct list_head        ioreq_server_list;
     spinlock_t              ioreq_server_lock;
 
+    uint32_t                pci_cf8;
+    spinlock_t              pci_lock;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 4e8fee8..1c3854f 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -225,6 +225,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
 void destroy_ring_for_helper(void **_va, struct page_info *page);
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
+void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p);
 
 void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
 int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 4c9d7ee..211ebfd 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -138,7 +138,7 @@ struct hvm_vcpu {
     spinlock_t          tm_lock;
     struct list_head    tm_list;
 
-    struct hvm_ioreq_server *ioreq_server;
+    struct list_head    ioreq_server_list;
 
     bool_t              flag_dr_dirty;
     bool_t              debug_state_latch;
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index a9aab4b..6b31189 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -23,6 +23,7 @@
 
 #include "../xen.h"
 #include "../trace.h"
+#include "../event_channel.h"
 
 /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
 #define HVMOP_set_param           0
@@ -270,6 +271,75 @@ struct xen_hvm_inject_msi {
 typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
 
+typedef uint32_t ioservid_t;
+
+DEFINE_XEN_GUEST_HANDLE(ioservid_t);
+
+#define HVMOP_create_ioreq_server 17
+struct xen_hvm_create_ioreq_server {
+    domid_t domid;  /* IN - domain to be serviced */
+    ioservid_t id;  /* OUT - server id */
+};
+typedef struct xen_hvm_create_ioreq_server xen_hvm_create_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_create_ioreq_server_t);
+
+#define HVMOP_get_ioreq_server_info 18
+struct xen_hvm_get_ioreq_server_info {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - server id */
+    xen_pfn_t pfn;          /* OUT - ioreq pfn */
+    xen_pfn_t buf_pfn;      /* OUT - buf ioreq pfn */
+    evtchn_port_t buf_port; /* OUT - buf ioreq port */
+};
+typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
+
+#define HVMOP_map_io_range_to_ioreq_server 19
+struct xen_hvm_map_io_range_to_ioreq_server {
+    domid_t domid;                  /* IN - domain to be serviced */
+    ioservid_t id;                  /* IN - handle from HVMOP_register_ioreq_server */
+    uint8_t is_mmio;                /* IN - MMIO or port IO? */
+    uint64_aligned_t start, end;    /* IN - inclusive start and end of range */
+};
+typedef struct xen_hvm_map_io_range_to_ioreq_server xen_hvm_map_io_range_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_io_range_to_ioreq_server_t);
+
+#define HVMOP_unmap_io_range_from_ioreq_server 20
+struct xen_hvm_unmap_io_range_from_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - handle from HVMOP_register_ioreq_server */
+    uint8_t is_mmio;        /* IN - MMIO or port IO? */
+    uint64_aligned_t start; /* IN - start address of the range to remove */
+};
+typedef struct xen_hvm_unmap_io_range_from_ioreq_server xen_hvm_unmap_io_range_from_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_io_range_from_ioreq_server_t);
+
+#define HVMOP_map_pcidev_to_ioreq_server 21
+struct xen_hvm_map_pcidev_to_ioreq_server {
+    domid_t domid;      /* IN - domain to be serviced */
+    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
+    uint16_t bdf;       /* IN - PCI bus/dev/func */
+};
+typedef struct xen_hvm_map_pcidev_to_ioreq_server xen_hvm_map_pcidev_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_pcidev_to_ioreq_server_t);
+
+#define HVMOP_unmap_pcidev_from_ioreq_server 22
+struct xen_hvm_unmap_pcidev_from_ioreq_server {
+    domid_t domid;      /* IN - domain to be serviced */
+    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
+    uint16_t bdf;       /* IN - PCI bus/dev/func */
+};
+typedef struct xen_hvm_unmap_pcidev_from_ioreq_server xen_hvm_unmap_pcidev_from_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_pcidev_from_ioreq_server_t);
+
+#define HVMOP_destroy_ioreq_server 23
+struct xen_hvm_destroy_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - server id */
+};
+typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index f05d130..e84fa75 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -34,6 +34,7 @@
 
 #define IOREQ_TYPE_PIO          0 /* pio */
 #define IOREQ_TYPE_COPY         1 /* mmio ops */
+#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config ops */
 #define IOREQ_TYPE_TIMEOFFSET   7
 #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
 
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 517a184..4109b11 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -145,6 +145,8 @@
 /* SHUTDOWN_* action in case of a triple fault */
 #define HVM_PARAM_TRIPLE_FAULT_REASON 31
 
-#define HVM_NR_PARAMS          32
+#define HVM_PARAM_NR_IOREQ_SERVERS 32
+
+#define HVM_NR_PARAMS          33
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:49 +0000
Message-ID: <1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for multiple
	servers

The legacy 'catch-all' server is always created with id 0. Secondary
servers have ids ranging from 1 up to a limit set by the toolstack via
the 'max_emulators' build info field. This defaults to 1, so ordinarily
no extra special pages are reserved for secondary emulators. It may be
increased using the secondary_device_emulators parameter in xl.cfg(5).

Because of the rearrangement of the special pages in a previous patch,
only the new HVM_PARAM_NR_IOREQ_SERVERS parameter is needed to determine
the layout of the shared pages for multiple emulators. Guests migrated
in from hosts without this patch will lack the save record which stores
the new parameter, so such guests are assumed to have had only a single
emulator.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 docs/man/xl.cfg.pod.5            |    7 +
 tools/libxc/xc_domain.c          |  175 ++++++++
 tools/libxc/xc_domain_restore.c  |   20 +
 tools/libxc/xc_domain_save.c     |   12 +
 tools/libxc/xc_hvm_build_x86.c   |   25 +-
 tools/libxc/xenctrl.h            |   41 ++
 tools/libxc/xenguest.h           |    2 +
 tools/libxc/xg_save_restore.h    |    1 +
 tools/libxl/libxl.h              |    8 +
 tools/libxl/libxl_create.c       |    3 +
 tools/libxl/libxl_dom.c          |    1 +
 tools/libxl/libxl_types.idl      |    1 +
 tools/libxl/xl_cmdimpl.c         |    3 +
 xen/arch/x86/hvm/hvm.c           |  916 +++++++++++++++++++++++++++++++++++---
 xen/arch/x86/hvm/io.c            |    2 +-
 xen/include/asm-x86/hvm/domain.h |   21 +-
 xen/include/asm-x86/hvm/hvm.h    |    1 +
 xen/include/asm-x86/hvm/vcpu.h   |    2 +-
 xen/include/public/hvm/hvm_op.h  |   70 +++
 xen/include/public/hvm/ioreq.h   |    1 +
 xen/include/public/hvm/params.h  |    4 +-
 21 files changed, 1230 insertions(+), 86 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 9941395..9aa9958 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1277,6 +1277,13 @@ specified, enabling the use of XenServer PV drivers in the guest.
 This parameter only takes effect when device_model_version=qemu-xen.
 See F<docs/misc/pci-device-reservations.txt> for more information.
 
+=item B<secondary_device_emulators=NUMBER>
+
+If secondary device emulators (i.e. emulators in addition to
+qemu-xen or qemu-xen-traditional) are to be invoked to support the
+guest then set this parameter to the number of such emulators
+required. The default value is zero.
+
 =back
 
 =head2 Device-Model Options
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..c64d15a 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1246,6 +1246,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
     return rc;
 }
 
+int xc_hvm_create_ioreq_server(xc_interface *xch,
+                               domid_t domid,
+                               ioservid_t *id)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_create_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    rc = do_xen_hypercall(xch, &hypercall);
+    *id = arg->id;
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_get_ioreq_server_info(xc_interface *xch,
+                                 domid_t domid,
+                                 ioservid_t id,
+                                 xen_pfn_t *pfn,
+                                 xen_pfn_t *buf_pfn,
+                                 evtchn_port_t *buf_port)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_get_ioreq_server_info;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    rc = do_xen_hypercall(xch, &hypercall);
+    if ( rc != 0 )
+        goto done;
+
+    if ( pfn )
+        *pfn = arg->pfn;
+
+    if ( buf_pfn )
+        *buf_pfn = arg->buf_pfn;
+
+    if ( buf_port )
+        *buf_port = arg->buf_port;
+
+done:
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
+                                        ioservid_t id, int is_mmio,
+                                        uint64_t start, uint64_t end)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_io_range_to_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->start = start;
+    arg->end = end;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
+                                            ioservid_t id, int is_mmio,
+                                            uint64_t start)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_io_range_from_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_unmap_io_range_from_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->is_mmio = is_mmio;
+    arg->start = start;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
+                                      ioservid_t id, uint16_t bdf)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_pcidev_to_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_map_pcidev_to_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->bdf = bdf;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
+                                          ioservid_t id, uint16_t bdf)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_pcidev_from_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_unmap_pcidev_from_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    arg->bdf = bdf;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_destroy_ioreq_server(xc_interface *xch,
+                                domid_t domid,
+                                ioservid_t id)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_destroy_ioreq_server;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->id = id;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
 int xc_domain_setdebugging(xc_interface *xch,
                            uint32_t domid,
                            unsigned int enable)
diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index ca2fb51..305e4b8 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -746,6 +746,7 @@ typedef struct {
     uint64_t acpi_ioport_location;
     uint64_t viridian;
     uint64_t vm_generationid_addr;
+    uint64_t nr_ioreq_servers;
 
     struct toolstack_data_t tdata;
 } pagebuf_t;
@@ -996,6 +997,16 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
         DPRINTF("read generation id buffer address");
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
+    case XC_SAVE_ID_HVM_NR_IOREQ_SERVERS:
+        /* Skip the 4 padding bytes then read the number of IOREQ servers. */
+        if ( RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint32_t)) ||
+             RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint64_t)) )
+        {
+            PERROR("error reading the number of IOREQ servers");
+            return -1;
+        }
+        return pagebuf_get_one(xch, ctx, buf, fd, dom);
+
     default:
         if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
             ERROR("Max batch size exceeded (%d). Giving up.", count);
@@ -1755,6 +1766,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     if (pagebuf.viridian != 0)
         xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
 
+    if ( hvm ) {
+        int nr_ioreq_servers = pagebuf.nr_ioreq_servers;
+
+        if ( nr_ioreq_servers == 0 )
+            nr_ioreq_servers = 1;
+
+        xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS, nr_ioreq_servers);
+    }
+
     if (pagebuf.acpi_ioport_location == 1) {
         DBGPRINTF("Use new firmware ioport from the checkpoint\n");
         xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 42c4752..3293e29 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -1731,6 +1731,18 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             PERROR("Error when writing the viridian flag");
             goto out;
         }
+
+        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVERS;
+        chunk.data = 0;
+        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
+                         (unsigned long *)&chunk.data);
+
+        if ( (chunk.data != 0) &&
+             wrexact(io_fd, &chunk, sizeof(chunk)) )
+        {
+            PERROR("Error when writing the number of IOREQ servers");
+            goto out;
+        }
     }
 
     if ( callbacks != NULL && callbacks->toolstack_save != NULL )
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index f24f2a1..bbe5def 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -45,7 +45,7 @@
 #define SPECIALPAGE_IDENT_PT 4
 #define SPECIALPAGE_CONSOLE  5
 #define SPECIALPAGE_IOREQ    6
-#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
+#define NR_SPECIAL_PAGES(n)  (SPECIALPAGE_IOREQ + 2 * (n)) /* each ioreq server needs 2 pages */
 #define special_pfn(x) (0xff000u - (x))
 
 static int modules_init(struct xc_hvm_build_args *args,
@@ -83,7 +83,8 @@ static int modules_init(struct xc_hvm_build_args *args,
 }
 
 static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
-                           uint64_t mmio_start, uint64_t mmio_size)
+                           uint64_t mmio_start, uint64_t mmio_size,
+                           int max_emulators)
 {
     struct hvm_info_table *hvm_info = (struct hvm_info_table *)
         (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
@@ -111,7 +112,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
     /* Memory parameters. */
     hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
-    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
+    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES(max_emulators);
 
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
@@ -254,6 +255,10 @@ static int setup_guest(xc_interface *xch,
         stat_1gb_pages = 0;
     int pod_mode = 0;
     int claim_enabled = args->claim_enabled;
+    int max_emulators = args->max_emulators;
+
+    if ( max_emulators < 1 )
+        goto error_out;
 
     if ( nr_pages > target_pages )
         pod_mode = XENMEMF_populate_on_demand;
@@ -458,7 +463,8 @@ static int setup_guest(xc_interface *xch,
               xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
               HVM_INFO_PFN)) == NULL )
         goto error_out;
-    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
+    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size,
+                   max_emulators);
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Allocate and clear special pages. */
@@ -470,17 +476,18 @@ static int setup_guest(xc_interface *xch,
             "  STORE:     %"PRI_xen_pfn"\n"
             "  IDENT_PT:  %"PRI_xen_pfn"\n"
             "  CONSOLE:   %"PRI_xen_pfn"\n"
-            "  IOREQ:     %"PRI_xen_pfn"\n",
-            NR_SPECIAL_PAGES,
+            "  IOREQ(%02d): %"PRI_xen_pfn"\n",
+            NR_SPECIAL_PAGES(max_emulators),
             (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
             (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
             (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
             (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
             (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
             (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
+            max_emulators * 2,
             (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
 
-    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
+    for ( i = 0; i < NR_SPECIAL_PAGES(max_emulators); i++ )
     {
         xen_pfn_t pfn = special_pfn(i);
         rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
@@ -506,7 +513,9 @@ static int setup_guest(xc_interface *xch,
     xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
                      special_pfn(SPECIALPAGE_IOREQ));
     xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
-                     special_pfn(SPECIALPAGE_IOREQ) - 1);
+                     special_pfn(SPECIALPAGE_IOREQ) - max_emulators);
+    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
+                     max_emulators);
 
     /*
      * Identity-map page table is required for running with CR0.PG=0 when
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..142aaea 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1801,6 +1801,47 @@ void xc_clear_last_error(xc_interface *xch);
 int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
 int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
 
+/*
+ * IOREQ server API
+ */
+int xc_hvm_create_ioreq_server(xc_interface *xch,
+			       domid_t domid,
+			       ioservid_t *id);
+
+int xc_hvm_get_ioreq_server_info(xc_interface *xch,
+				 domid_t domid,
+				 ioservid_t id,
+				 xen_pfn_t *pfn,
+				 xen_pfn_t *buf_pfn,
+				 evtchn_port_t *buf_port);
+
+int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch,
+					domid_t domid,
+                                        ioservid_t id,
+					int is_mmio,
+                                        uint64_t start,
+					uint64_t end);
+
+int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
+					    domid_t domid,
+                                            ioservid_t id,
+					    int is_mmio,
+                                            uint64_t start);
+
+int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch,
+				      domid_t domid,
+                                      ioservid_t id,
+				      uint16_t bdf);
+
+int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch,
+					  domid_t domid,
+					  ioservid_t id,
+					  uint16_t bdf);
+
+int xc_hvm_destroy_ioreq_server(xc_interface *xch,
+				domid_t domid,
+				ioservid_t id);
+
 /* HVM guest pass-through */
 int xc_assign_device(xc_interface *xch,
                      uint32_t domid,
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index a0e30e1..8930ac0 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -234,6 +234,8 @@ struct xc_hvm_build_args {
     struct xc_hvm_firmware_module smbios_module;
     /* Whether to use claim hypercall (1 - enable, 0 - disable). */
     int claim_enabled;
+    /* Maximum number of emulators for VM */
+    int max_emulators;
 };
 
 /**
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index f859621..5170b7f 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -259,6 +259,7 @@
 #define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
 #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
 #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
+#define XC_SAVE_ID_HVM_NR_IOREQ_SERVERS -19
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..b679957 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -95,6 +95,14 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS indicates that the
+ * max_emulators field is present in the hvm sections of
+ * libxl_domain_build_info. This field can be used to reserve
+ * extra special pages for secondary device emulators.
+ */
+#define LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..cce93d9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -330,6 +330,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 
         libxl_defbool_setdefault(&b_info->u.hvm.gfx_passthru, false);
 
+        if (b_info->u.hvm.max_emulators < 1)
+            b_info->u.hvm.max_emulators = 1;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         libxl_defbool_setdefault(&b_info->u.pv.e820_host, false);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..9de06f9 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -637,6 +637,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     args.mem_size = (uint64_t)(info->max_memkb - info->video_memkb) << 10;
     args.mem_target = (uint64_t)(info->target_memkb - info->video_memkb) << 10;
     args.claim_enabled = libxl_defbool_val(info->claim_mode);
+    args.max_emulators = info->u.hvm.max_emulators;
     if (libxl__domain_firmware(gc, info, &args)) {
         LOG(ERROR, "initializing domain firmware failed");
         goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b707159 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -372,6 +372,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("xen_platform_pci", libxl_defbool),
                                        ("usbdevice_list",   libxl_string_list),
                                        ("vendor_device",    libxl_vendor_device),
+                                       ("max_emulators",    integer),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..c65f4f4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1750,6 +1750,9 @@ skip_vfb:
 
             b_info->u.hvm.vendor_device = d;
         }
+
+        if (!xlu_cfg_get_long (config, "secondary_device_emulators", &l, 0))
+            b_info->u.hvm.max_emulators = l + 1;
     }
 
     xlu_cfg_destroy(config);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d9874fb..5f9e728 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -379,21 +379,23 @@ static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
 void hvm_do_resume(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct list_head *entry, *next;
 
     check_wakeup_from_wait();
 
     if ( is_hvm_vcpu(v) )
         pt_restore_timer(v);
 
-    s = v->arch.hvm_vcpu.ioreq_server;
-    v->arch.hvm_vcpu.ioreq_server = NULL;
-
-    if ( s )
+    list_for_each_safe ( entry, next, &v->arch.hvm_vcpu.ioreq_server_list )
     {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                vcpu_list_entry[v->vcpu_id]);
         ioreq_t *p = get_ioreq(s, v->vcpu_id);
 
         hvm_wait_on_io(d, p);
+
+        list_del_init(entry);
     }
 
     /* Inject pending hw/sw trap */
@@ -531,6 +533,83 @@ static int hvm_print_line(
     return X86EMUL_OKAY;
 }
 
+static int hvm_access_cf8(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *curr = current;
+    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
+    int rc;
+
+    BUG_ON(port < 0xcf8);
+    port -= 0xcf8;
+
+    spin_lock(&hd->pci_lock);
+
+    if ( dir == IOREQ_WRITE )
+    {
+        switch ( bytes )
+        {
+        case 4:
+            hd->pci_cf8 = *val;
+            break;
+
+        case 2:
+        {
+            uint32_t mask = 0xffff << (port * 8);
+            uint32_t subval = *val << (port * 8);
+
+            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
+                          (subval & mask);
+            break;
+        }
+
+        case 1:
+        {
+            uint32_t mask = 0xff << (port * 8);
+            uint32_t subval = *val << (port * 8);
+
+            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
+                          (subval & mask);
+            break;
+        }
+
+        default:
+            break;
+        }
+
+        /* We always need to fall through to the catch all emulator */
+        rc = X86EMUL_UNHANDLEABLE;
+    }
+    else
+    {
+        switch ( bytes )
+        {
+        case 4:
+            *val = hd->pci_cf8;
+            rc = X86EMUL_OKAY;
+            break;
+
+        case 2:
+            *val = (hd->pci_cf8 >> (port * 8)) & 0xffff;
+            rc = X86EMUL_OKAY;
+            break;
+
+        case 1:
+            *val = (hd->pci_cf8 >> (port * 8)) & 0xff;
+            rc = X86EMUL_OKAY;
+            break;
+
+        default:
+            rc = X86EMUL_UNHANDLEABLE;
+            break;
+        }
+    }
+
+    spin_unlock(&hd->pci_lock);
+
+    return rc;
+}
+
 static int handle_pvh_io(
     int dir, uint32_t port, uint32_t bytes, uint32_t *val)
 {
@@ -590,6 +669,8 @@ done:
 
 static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
 {
+    list_del_init(&s->vcpu_list_entry[v->vcpu_id]);
+
     if ( v->vcpu_id == 0 )
     {
         if ( s->buf_ioreq_evtchn >= 0 )
@@ -606,7 +687,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
     }
 }
 
-static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
+static int hvm_create_ioreq_server(struct domain *d, ioservid_t id, domid_t domid)
 {
     struct hvm_ioreq_server *s;
     int i;
@@ -614,34 +695,47 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
     struct vcpu *v;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -EEXIST;
-    if ( d->arch.hvm_domain.ioreq_server != NULL )
-        goto fail_exist;
+    list_for_each_entry ( s, 
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto fail_exist;
+    }
 
-    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
 
     rc = -ENOMEM;
     s = xzalloc(struct hvm_ioreq_server);
     if ( !s )
         goto fail_alloc;
 
+    s->id = id;
     s->domain = d;
     s->domid = domid;
+    INIT_LIST_HEAD(&s->domain_list_entry);
 
     for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+    {
         s->ioreq_evtchn[i] = -1;
+        INIT_LIST_HEAD(&s->vcpu_list_entry[i]);
+    }
     s->buf_ioreq_evtchn = -1;
 
     /* Initialize shared pages */
-    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
 
     hvm_init_ioreq_page(s, 0);
     if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
         goto fail_set_ioreq;
 
-    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
 
     hvm_init_ioreq_page(s, 1);
     if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
@@ -653,7 +747,8 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
             goto fail_add_vcpu;
     }
 
-    d->arch.hvm_domain.ioreq_server = s;
+    list_add(&s->domain_list_entry,
+             &d->arch.hvm_domain.ioreq_server_list);
 
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
@@ -673,22 +768,30 @@ fail_exist:
     return rc;
 }
 
-static void hvm_destroy_ioreq_server(struct domain *d)
+static void hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_server *s, *next;
     struct vcpu *v;
 
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+    list_for_each_entry_safe ( s,
+                               next,
+                               &d->arch.hvm_domain.ioreq_server_list,
+                               domain_list_entry)
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    s = d->arch.hvm_domain.ioreq_server;
-    if ( !s )
-        goto done;
+    goto done;
+
+found:
+    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
 
     domain_pause(d);
 
-    d->arch.hvm_domain.ioreq_server = NULL;
+    list_del_init(&s->domain_list_entry);
 
     for_each_vcpu ( d, v )
         hvm_ioreq_server_remove_vcpu(s, v);
@@ -704,21 +807,186 @@ done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 }
 
-static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
+static int hvm_get_ioreq_server_buf_port(struct domain *d, ioservid_t id, evtchn_port_t *port)
+{
+    struct list_head *entry;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -ENOENT;
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( s->id == id )
+        {
+            *port = s->buf_ioreq_evtchn;
+            rc = 0;
+            break;
+        }
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_get_ioreq_server_pfn(struct domain *d, ioservid_t id, int buf, xen_pfn_t *pfn)
+{
+    struct list_head *entry;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -ENOENT;
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( s->id == id )
+        {
+            if ( buf )
+                *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
+            else
+                *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
+
+            rc = 0;
+            break;
+        }
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                            int is_mmio, uint64_t start, uint64_t end)
 {
     struct hvm_ioreq_server *s;
+    struct hvm_io_range *x;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    x = xmalloc(struct hvm_io_range);
+    if ( x == NULL )
+        return -ENOMEM;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    rc = -ENOENT;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
+
+    goto fail;
+
+found:
+    x->start = start;
+    x->end = end;
+
+    if ( is_mmio )
+    {
+        x->next = s->mmio_range_list;
+        s->mmio_range_list = x;
+    }
+    else
+    {
+        x->next = s->portio_range_list;
+        s->portio_range_list = x;
+    }
+
+    gdprintk(XENLOG_DEBUG, "%d:%d: +%s %"PRIX64" - %"PRIX64"\n",
+             d->domain_id,
+             s->id,
+             ( is_mmio ) ? "MMIO" : "PORTIO",
+             x->start,
+             x->end);
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    xfree(x);
+
+    return rc;
+}
+
+static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                                int is_mmio, uint64_t start)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_io_range *x, **xp;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    *port = s->buf_ioreq_evtchn;
-    rc = 0;
+    goto done;
+
+found:
+    if ( is_mmio )
+    {
+        x = s->mmio_range_list;
+        xp = &s->mmio_range_list;
+    }
+    else
+    {
+        x = s->portio_range_list;
+        xp = &s->portio_range_list;
+    }
+
+    while ( (x != NULL) && (start != x->start) )
+    {
+        xp = &x->next;
+        x = x->next;
+    }
+
+    if ( x != NULL )
+    {
+        gdprintk(XENLOG_DEBUG, "%d:%d: -%s %"PRIX64" - %"PRIX64"\n",
+                 d->domain_id,
+                 s->id,
+                 ( is_mmio ) ? "MMIO" : "PORTIO",
+                 x->start,
+                 x->end);
+
+        *xp = x->next;
+        xfree(x);
+        rc = 0;
+    }
 
 done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
@@ -726,25 +994,98 @@ done:
     return rc;
 }
 
-static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
+static int hvm_map_pcidev_to_ioreq_server(struct domain *d, ioservid_t id,
+                                          uint16_t bdf)
 {
     struct hvm_ioreq_server *s;
+    struct hvm_pcidev *x;
     int rc;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    x = xmalloc(struct hvm_pcidev);
+    if ( x == NULL )
+        return -ENOMEM;
+
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    rc = -ENOENT;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
+
+    goto fail;
+
+found:
+    x->bdf = bdf;
+
+    x->next = s->pcidev_list;
+    s->pcidev_list = x;
+
+    gdprintk(XENLOG_DEBUG, "%d:%d: +PCIDEV %04X\n",
+             d->domain_id,
+             s->id,
+             x->bdf);
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    xfree(x);
+
+    return rc;
+}
+
+static int hvm_unmap_pcidev_from_ioreq_server(struct domain *d, ioservid_t id,
+                                              uint16_t bdf)
+{
+    struct hvm_ioreq_server *s;
+    struct hvm_pcidev *x, **xp;
+    int rc;
+
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
-    if ( buf )
-        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
-    else
-        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    goto done;
 
-    rc = 0;
+found:
+    x = s->pcidev_list;
+    xp = &s->pcidev_list;
+
+    while ( (x != NULL) && (bdf != x->bdf) )
+    {
+        xp = &x->next;
+        x = x->next;
+    }
+    if ( x != NULL )
+    {
+        gdprintk(XENLOG_DEBUG, "%d:%d: -PCIDEV %04X\n",
+                 d->domain_id,
+                 s->id,
+                 x->bdf);
+
+        *xp = x->next;
+        xfree(x);
+        rc = 0;
+    }
 
 done:
     spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
@@ -752,6 +1093,73 @@ done:
     return rc;
 }
 
+static int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct list_head *entry;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
+            goto fail;
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail:
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct list_head *entry;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
+static void hvm_destroy_all_ioreq_servers(struct domain *d)
+{
+    ioservid_t id;
+
+    for ( id = 0;
+          id < d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
+          id++ )
+        hvm_destroy_ioreq_server(d, id);
+}
+
 static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
                                      int *p_port)
 {
@@ -767,21 +1175,30 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
+static int hvm_set_ioreq_server_domid(struct domain *d, ioservid_t id, domid_t domid)
 {
     struct hvm_ioreq_server *s;
     struct vcpu *v;
     int rc = 0;
 
+    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
+        return -EINVAL;
+
     domain_pause(d);
     spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    s = d->arch.hvm_domain.ioreq_server;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == id )
+            goto found;
+    }
 
     rc = -ENOENT;
-    if ( !s )
-        goto done;
+    goto done;
 
+found:
     rc = 0;
     if ( s->domid == domid )
         goto done;
@@ -838,7 +1255,9 @@ int hvm_domain_initialise(struct domain *d)
 
     }
 
+    INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server_list);
     spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
+    spin_lock_init(&d->arch.hvm_domain.pci_lock);
     spin_lock_init(&d->arch.hvm_domain.irq_lock);
     spin_lock_init(&d->arch.hvm_domain.uc_lock);
 
@@ -880,6 +1299,7 @@ int hvm_domain_initialise(struct domain *d)
     rtc_init(d);
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
@@ -910,7 +1330,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_destroy_ioreq_server(d);
+    hvm_destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1422,13 +1842,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
 
     hvm_asid_flush_vcpu(v);
 
     spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
     INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
 
+    INIT_LIST_HEAD(&v->arch.hvm_vcpu.ioreq_server_list);
+
     rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
     if ( rc != 0 )
         goto fail1;
@@ -1465,16 +1886,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
-    if ( s )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc < 0 )
-            goto fail6;
-    }
+    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    if ( rc < 0 )
+        goto fail6;
 
     if ( v->vcpu_id == 0 )
     {
@@ -1510,14 +1924,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
 void hvm_vcpu_destroy(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    if ( s )
-        hvm_ioreq_server_remove_vcpu(s, v);
+    hvm_all_ioreq_servers_remove_vcpu(d, v);
 
     nestedhvm_vcpu_destroy(v);
 
@@ -1556,6 +1964,101 @@ void hvm_vcpu_down(struct vcpu *v)
     }
 }
 
+static struct hvm_ioreq_server *hvm_select_ioreq_server(struct vcpu *v, ioreq_t *p)
+{
+#define BDF(cf8) (((cf8) & 0x00ffff00) >> 8)
+
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc )
+    {
+        /* PCI config data cycle */
+        type = IOREQ_TYPE_PCI_CONFIG;
+
+        spin_lock(&d->arch.hvm_domain.pci_lock);
+        addr = d->arch.hvm_domain.pci_cf8 + (p->addr & 3);
+        spin_unlock(&d->arch.hvm_domain.pci_lock);
+    }
+    else
+    {
+        type = p->type;
+        addr = p->addr;
+    }
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    switch ( type )
+    {
+    case IOREQ_TYPE_COPY:
+    case IOREQ_TYPE_PIO:
+    case IOREQ_TYPE_PCI_CONFIG:
+        break;
+    default:
+        goto done;
+    }
+
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        switch ( type )
+        {
+            case IOREQ_TYPE_COPY:
+            case IOREQ_TYPE_PIO: {
+                struct hvm_io_range *x;
+
+                x = (type == IOREQ_TYPE_COPY) ?
+                    s->mmio_range_list :
+                    s->portio_range_list;
+
+                for ( ; x; x = x->next )
+                {
+                    if ( (addr >= x->start) && (addr <= x->end) )
+                        goto found;
+                }
+                break;
+            }
+            case IOREQ_TYPE_PCI_CONFIG: {
+                struct hvm_pcidev *x;
+
+                x = s->pcidev_list;
+
+                for ( ; x; x = x->next )
+                {
+                    if ( BDF(addr) == x->bdf ) {
+                        p->type = type;
+                        p->addr = addr;
+                        goto found;
+                    }
+                }
+                break;
+            }
+        }
+    }
+
+done:
+    /* The catch-all server has id 0 */
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server_list,
+                          domain_list_entry )
+    {
+        if ( s->id == 0 )
+            goto found;
+    }
+
+    s = NULL;
+
+found:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    return s;
+
+#undef BDF
+}
+
 int hvm_buffered_io_send(ioreq_t *p)
 {
     struct vcpu *v = current;
@@ -1570,10 +2073,7 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
+    s = hvm_select_ioreq_server(v, p);
     if ( !s )
         return 0;
 
@@ -1661,7 +2161,9 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
         return 0;
     }
 
-    v->arch.hvm_vcpu.ioreq_server = s;
+    ASSERT(list_empty(&s->vcpu_list_entry[v->vcpu_id]));
+    list_add(&s->vcpu_list_entry[v->vcpu_id],
+             &v->arch.hvm_vcpu.ioreq_server_list);
 
     p->dir = proto_p->dir;
     p->data_is_ptr = proto_p->data_is_ptr;
@@ -1686,24 +2188,42 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
 {
-    struct domain *d = v->domain;
     struct hvm_ioreq_server *s;
 
-    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
+    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
 
     if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
         return 0;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
-    s = d->arch.hvm_domain.ioreq_server;
-    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
-
+    s = hvm_select_ioreq_server(v, p);
     if ( !s )
         return 0;
 
     return hvm_send_assist_req_to_server(s, v, p);
 }
 
+void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p)
+{
+    struct domain *d = v->domain;
+    struct list_head *entry;
+
+    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    list_for_each ( entry,
+                    &d->arch.hvm_domain.ioreq_server_list )
+    {
+        struct hvm_ioreq_server *s = list_entry(entry,
+                                                struct hvm_ioreq_server,
+                                                domain_list_entry);
+
+        (void) hvm_send_assist_req_to_server(s, v, p);
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
 void hvm_hlt(unsigned long rflags)
 {
     struct vcpu *curr = current;
@@ -4286,6 +4806,215 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
+static int hvmop_create_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
+{
+    struct domain *curr_d = current->domain;
+    xen_hvm_create_ioreq_server_t op;
+    struct domain *d;
+    ioservid_t id;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = -ENOSPC;
+    for ( id = 1;
+          id <  d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
+          id++ )
+    {
+        rc = hvm_create_ioreq_server(d, id, curr_d->domain_id);
+        if ( rc == -EEXIST )
+            continue;
+
+        break;
+    }
+
+    if ( rc == -EEXIST )
+        rc = -ENOSPC;
+
+    if ( rc < 0 )
+        goto out;
+
+    op.id = id;
+
+    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_get_ioreq_server_info(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
+{
+    xen_hvm_get_ioreq_server_info_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 0, &op.pfn)) < 0 )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 1, &op.buf_pfn)) < 0 )
+        goto out;
+
+    if ( (rc = hvm_get_ioreq_server_buf_port(d, op.id, &op.buf_port)) < 0 )
+        goto out;
+
+    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_map_io_range_to_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_io_range_to_ioreq_server_t) uop)
+{
+    xen_hvm_map_io_range_to_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.is_mmio,
+                                          op.start, op.end);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_unmap_io_range_from_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_io_range_from_ioreq_server_t) uop)
+{
+    xen_hvm_unmap_io_range_from_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.is_mmio,
+                                              op.start);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_map_pcidev_to_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_pcidev_to_ioreq_server_t) uop)
+{
+    xen_hvm_map_pcidev_to_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_map_pcidev_to_ioreq_server(d, op.id, op.bdf);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_unmap_pcidev_from_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_pcidev_from_ioreq_server_t) uop)
+{
+    xen_hvm_unmap_pcidev_from_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_unmap_pcidev_from_ioreq_server(d, op.id, op.bdf);
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
+static int hvmop_destroy_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
+{
+    xen_hvm_destroy_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    hvm_destroy_ioreq_server(d, op.id);
+    rc = 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -4294,6 +5023,41 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( op )
     {
+    case HVMOP_create_ioreq_server:
+        rc = hvmop_create_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
+        break;
+
+    case HVMOP_get_ioreq_server_info:
+        rc = hvmop_get_ioreq_server_info(
+            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
+        break;
+
+    case HVMOP_map_io_range_to_ioreq_server:
+        rc = hvmop_map_io_range_to_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_map_io_range_to_ioreq_server_t));
+        break;
+
+    case HVMOP_unmap_io_range_from_ioreq_server:
+        rc = hvmop_unmap_io_range_from_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_unmap_io_range_from_ioreq_server_t));
+        break;
+
+    case HVMOP_map_pcidev_to_ioreq_server:
+        rc = hvmop_map_pcidev_to_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_map_pcidev_to_ioreq_server_t));
+        break;
+
+    case HVMOP_unmap_pcidev_from_ioreq_server:
+        rc = hvmop_unmap_pcidev_from_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_unmap_pcidev_from_ioreq_server_t));
+        break;
+
+    case HVMOP_destroy_ioreq_server:
+        rc = hvmop_destroy_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
+        break;
+
     case HVMOP_set_param:
     case HVMOP_get_param:
     {
@@ -4382,9 +5146,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = hvm_create_ioreq_server(d, a.value);
+                rc = hvm_create_ioreq_server(d, 0, a.value);
                 if ( rc == -EEXIST )
-                    rc = hvm_set_ioreq_server_domid(d, a.value);
+                    rc = hvm_set_ioreq_server_domid(d, 0, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4449,6 +5213,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value > SHUTDOWN_MAX )
                     rc = -EINVAL;
                 break;
+            case HVM_PARAM_NR_IOREQ_SERVERS:
+                if ( d == current->domain )
+                    rc = -EPERM;
+                break;
             }
 
             if ( rc == 0 ) 
@@ -4483,7 +5251,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             case HVM_PARAM_BUFIOREQ_PFN:
             case HVM_PARAM_BUFIOREQ_EVTCHN:
                 /* May need to create server */
-                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
+                rc = hvm_create_ioreq_server(d, 0, curr_d->domain_id);
                 if ( rc != 0 && rc != -EEXIST )
                     goto param_fail;
 
@@ -4492,7 +5260,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_IOREQ_PFN: {
                     xen_pfn_t pfn;
 
-                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 0, &pfn)) < 0 )
                         goto param_fail;
 
                     a.value = pfn;
@@ -4501,7 +5269,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_BUFIOREQ_PFN: {
                     xen_pfn_t pfn;
 
-                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 1, &pfn)) < 0 )
                         goto param_fail;
 
                     a.value = pfn;
@@ -4510,7 +5278,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 case HVM_PARAM_BUFIOREQ_EVTCHN: {
                     evtchn_port_t port;
 
-                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
+                    if ( (rc = hvm_get_ioreq_server_buf_port(d, 0, &port)) < 0 )
                         goto param_fail;
 
                     a.value = port;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 576641c..a0d76b2 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -78,7 +78,7 @@ void send_invalidate_req(void)
     p->dir = IOREQ_WRITE;
     p->data = ~0UL; /* flush all */
 
-    (void)hvm_send_assist_req(v, p);
+    hvm_broadcast_assist_req(v, p);
 }
 
 int handle_mmio(void)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index e750ef0..93dcec1 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -41,19 +41,38 @@ struct hvm_ioreq_page {
     void *va;
 };
 
+struct hvm_io_range {
+    struct hvm_io_range *next;
+    uint64_t            start, end;
+};
+
+struct hvm_pcidev {
+    struct hvm_pcidev *next;
+    uint16_t          bdf;
+};
+
 struct hvm_ioreq_server {
+    struct list_head       domain_list_entry;
+    struct list_head       vcpu_list_entry[MAX_HVM_VCPUS];
+    ioservid_t             id;
     struct domain          *domain;
     domid_t                domid;
     struct hvm_ioreq_page  ioreq;
     int                    ioreq_evtchn[MAX_HVM_VCPUS];
     struct hvm_ioreq_page  buf_ioreq;
     int                    buf_ioreq_evtchn;
+    struct hvm_io_range    *mmio_range_list;
+    struct hvm_io_range    *portio_range_list;
+    struct hvm_pcidev      *pcidev_list;
 };
 
 struct hvm_domain {
-    struct hvm_ioreq_server *ioreq_server;
+    struct list_head        ioreq_server_list;
     spinlock_t              ioreq_server_lock;
 
+    uint32_t                pci_cf8;
+    spinlock_t              pci_lock;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 4e8fee8..1c3854f 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -225,6 +225,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
 void destroy_ring_for_helper(void **_va, struct page_info *page);
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
+void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p);
 
 void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
 int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 4c9d7ee..211ebfd 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -138,7 +138,7 @@ struct hvm_vcpu {
     spinlock_t          tm_lock;
     struct list_head    tm_list;
 
-    struct hvm_ioreq_server *ioreq_server;
+    struct list_head    ioreq_server_list;
 
     bool_t              flag_dr_dirty;
     bool_t              debug_state_latch;
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index a9aab4b..6b31189 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -23,6 +23,7 @@
 
 #include "../xen.h"
 #include "../trace.h"
+#include "../event_channel.h"
 
 /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
 #define HVMOP_set_param           0
@@ -270,6 +271,75 @@ struct xen_hvm_inject_msi {
 typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
 
+typedef uint32_t ioservid_t;
+
+DEFINE_XEN_GUEST_HANDLE(ioservid_t);
+
+#define HVMOP_create_ioreq_server 17
+struct xen_hvm_create_ioreq_server {
+    domid_t domid;  /* IN - domain to be serviced */
+    ioservid_t id;  /* OUT - server id */
+};
+typedef struct xen_hvm_create_ioreq_server xen_hvm_create_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_create_ioreq_server_t);
+
+#define HVMOP_get_ioreq_server_info 18
+struct xen_hvm_get_ioreq_server_info {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - server id */
+    xen_pfn_t pfn;          /* OUT - ioreq pfn */
+    xen_pfn_t buf_pfn;      /* OUT - buf ioreq pfn */
+    evtchn_port_t buf_port; /* OUT - buf ioreq port */
+};
+typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
+
+#define HVMOP_map_io_range_to_ioreq_server 19
+struct xen_hvm_map_io_range_to_ioreq_server {
+    domid_t domid;                  /* IN - domain to be serviced */
+    ioservid_t id;                  /* IN - handle from HVMOP_create_ioreq_server */
+    uint8_t is_mmio;                /* IN - MMIO or port IO? */
+    uint64_aligned_t start, end;    /* IN - inclusive start and end of range */
+};
+typedef struct xen_hvm_map_io_range_to_ioreq_server xen_hvm_map_io_range_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_io_range_to_ioreq_server_t);
+
+#define HVMOP_unmap_io_range_from_ioreq_server 20
+struct xen_hvm_unmap_io_range_from_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - handle from HVMOP_create_ioreq_server */
+    uint8_t is_mmio;        /* IN - MMIO or port IO? */
+    uint64_aligned_t start; /* IN - start address of the range to remove */
+};
+typedef struct xen_hvm_unmap_io_range_from_ioreq_server xen_hvm_unmap_io_range_from_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_io_range_from_ioreq_server_t);
+
+#define HVMOP_map_pcidev_to_ioreq_server 21
+struct xen_hvm_map_pcidev_to_ioreq_server {
+    domid_t domid;      /* IN - domain to be serviced */
+    ioservid_t id;      /* IN - handle from HVMOP_create_ioreq_server */
+    uint16_t bdf;       /* IN - PCI bus/dev/func */
+};
+typedef struct xen_hvm_map_pcidev_to_ioreq_server xen_hvm_map_pcidev_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_pcidev_to_ioreq_server_t);
+
+#define HVMOP_unmap_pcidev_from_ioreq_server 22
+struct xen_hvm_unmap_pcidev_from_ioreq_server {
+    domid_t domid;      /* IN - domain to be serviced */
+    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
+    uint16_t bdf;       /* IN - PCI bus/dev/func */
+};
+typedef struct xen_hvm_unmap_pcidev_from_ioreq_server xen_hvm_unmap_pcidev_from_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_pcidev_from_ioreq_server_t);
+
+#define HVMOP_destroy_ioreq_server 23
+struct xen_hvm_destroy_ioreq_server {
+    domid_t domid;          /* IN - domain to be serviced */
+    ioservid_t id;          /* IN - server id */
+};
+typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index f05d130..e84fa75 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -34,6 +34,7 @@
 
 #define IOREQ_TYPE_PIO          0 /* pio */
 #define IOREQ_TYPE_COPY         1 /* mmio ops */
+#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config ops */
 #define IOREQ_TYPE_TIMEOFFSET   7
 #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
 
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 517a184..4109b11 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -145,6 +145,8 @@
 /* SHUTDOWN_* action in case of a triple fault */
 #define HVM_PARAM_TRIPLE_FAULT_REASON 31
 
-#define HVM_NR_PARAMS          32
+#define HVM_PARAM_NR_IOREQ_SERVERS 32
+
+#define HVM_NR_PARAMS          33
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTg-0000xZ-Rb; Thu, 30 Jan 2014 14:20:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTf-0000xD-F1
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:23 +0000
Received: from [85.158.139.211:16135] by server-13.bemta-5.messagelabs.com id
	6E/CF-18801-6AF5AE25; Thu, 30 Jan 2014 14:20:22 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391091619!611812!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11034 invoked from network); 30 Jan 2014 14:20:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96136115"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-Qa	for
	xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:45 +0000
Message-ID: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [RFC PATCH 1/5] Support for running secondary emulators
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds the ioreq server interface which I mentioned in
my talk at the Xen developer summit in Edinburgh at the end of last year.
The code is based on work originally done by Julien Grall but has been
re-written to allow existing versions of QEMU to work unmodified.

The code is available in my xen.git [1] repo on xenbits, under the 'savannah'
branch, and I have also written a demo emulator to test the code, which can
be found in my demu.git [2] repo.

The modifications are broken down as follows:


Patch #1 basically just moves some code around to make subsequent patches
clearer. The patch also removes the has_dm flag from hvmemul_do_io() as
it is no longer necessary to special-case PVH domains in this way. (The I/O
can be completed by hvm_send_assist_req() later, when it is discovered that
there is no shared ioreq page.)

Patch #2 again is largely code movement, from various places into a new
hvm_ioreq_server structure. There should be no functional change at this
stage as the ioreq server is still created at domain initialisation time (as
were its contents prior to this patch).

Patch #3 is the first functional change. The ioreq server struct
initialisation is now deferred until something actually tries to play with
the HVM parameters which reference it. In practice this is QEMU, which
needs to read the ioreq pfns so it can map them.
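
This deferred-initialisation pattern can be sketched in isolation. The
following is a simplified userspace illustration only; the struct contents
and the get_ioreq_server() helper are invented stand-ins, not the
hypervisor code from the patch:

```c
#include <stdlib.h>

/* Invented stand-ins for illustration only. */
struct hvm_ioreq_server { unsigned long ioreq_pfn; };
struct hvm_domain { struct hvm_ioreq_server *ioreq_server; };

/* The server is only allocated on first use, i.e. when something
 * (in practice QEMU, via the HVM params) first touches it. */
struct hvm_ioreq_server *get_ioreq_server(struct hvm_domain *d)
{
    if ( !d->ioreq_server )
        d->ioreq_server = calloc(1, sizeof(*d->ioreq_server));
    return d->ioreq_server;
}
```

Repeated calls hand back the same server, so callers need not care
whether they triggered the allocation or not.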

Patch #4 is the big one. This moves from a single ioreq server per domain
to a list. The server that is created when the HVM parameters are referenced
is given id 0 and is considered to be the 'catch all' server which is, after
all, how QEMU is used. Any secondary emulator, created using the new API in
xenctrl.h, will have id 1 or above and will only receive ioreqs when I/O hits
one of its registered IO ranges or PCI devices.
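
As a rough illustration, this is how a secondary emulator might build the
argument for the range-mapping hypercall. The types below are local
mirrors of the public-header additions in this series (simplified to be
self-contained), and the make_mmio_map() helper name is invented for this
example:

```c
#include <stdint.h>

typedef uint16_t domid_t;
typedef uint16_t ioservid_t;

/* Simplified mirror of xen_hvm_map_io_range_to_ioreq_server
 * (hvm_op.h in this series). */
struct xen_hvm_map_io_range_to_ioreq_server {
    domid_t domid;       /* domain to be serviced */
    ioservid_t id;       /* handle from server registration */
    int is_mmio;         /* MMIO or port IO? */
    uint64_t start, end; /* inclusive range, per the header comment */
};

/* Fill in the argument an emulator would hand to
 * HVMOP_map_io_range_to_ioreq_server to claim an MMIO range. */
struct xen_hvm_map_io_range_to_ioreq_server
make_mmio_map(domid_t domid, ioservid_t id, uint64_t start, uint64_t end)
{
    struct xen_hvm_map_io_range_to_ioreq_server m = {
        .domid = domid, .id = id, .is_mmio = 1,
        .start = start, .end = end,
    };
    return m;
}
```

Note that both ends of the range are inclusive, so a single 4K BAR page
at 0xfe000000 would be mapped as [0xfe000000, 0xfe000fff].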

Patch #5 pulls the PCI hotplug controller emulation into Xen. This is
necessary to allow a secondary emulator to hotplug a PCI device into the VM.
The code implements the controller in the same way as upstream QEMU and thus
the variant of the DSDT ASL used for upstream QEMU is retained.
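
The slot-tracking side of such a hotplug controller can be sketched like
this. The state layout and function names here are invented for
illustration; the actual patch follows upstream QEMU's controller
interface:

```c
#include <stdint.h>

/* Minimal per-domain hotplug state: one bit per PCI slot. */
struct hotplug_ctl {
    uint32_t slot_up;   /* slots with a device currently inserted */
};

void slot_insert(struct hotplug_ctl *c, unsigned int slot)
{
    c->slot_up |= 1u << slot;   /* in Xen this would also raise an SCI
                                 * so the guest re-scans the bus */
}

void slot_eject(struct hotplug_ctl *c, unsigned int slot)
{
    c->slot_up &= ~(1u << slot);
}
```

The guest's ACPI methods then only need to read the bitmap to discover
which slots changed, which is why the same DSDT can serve all emulators.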


There are no modifications to libxl to actually invoke a secondary emulator
at this stage. The only changes are to increase the number of special pages
reserved for a VM, to allow the use of more than one emulator, and to call
the new PCI hotplug API when attaching or detaching PCI devices. The demo
emulator can simply be invoked from a shell and will hotplug its device onto
the PCI bus (and remove it again when it is killed). The emulated device is
not an awful lot of use at this stage - it appears as a SCSI controller with
one IO BAR and one MEM BAR and has no intrinsic functionality... but then it
is only supposed to be a demo :-)

  Paul

[1] http://xenbits.xen.org/gitweb/?p=people/pauldu/xen.git
[2] http://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git



From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTi-0000xy-L5; Thu, 30 Jan 2014 14:20:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTh-0000xW-8v
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:25 +0000
Received: from [193.109.254.147:18501] by server-6.bemta-14.messagelabs.com id
	AA/54-03396-8AF5AE25; Thu, 30 Jan 2014 14:20:24 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391091620!907916!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17760 invoked from network); 30 Jan 2014 14:20:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98103493"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-US;
	Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:47 +0000
Message-ID: <1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq server
	abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect the data structures concerning device emulation together into
a new struct hvm_ioreq_server.

Code that deals with the shared and buffered ioreq pages is extracted from
functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
and consolidated into a set of hvm_ioreq_server_XXX functions.
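
The shape of that consolidation can be sketched in userspace. This is a
simplified stand-in loosely mirroring hvm_init_ioreq_server() in the
patch, with xzalloc() replaced by calloc() and the ioreq pages omitted:

```c
#include <stdlib.h>

#define MAX_HVM_VCPUS 128   /* illustrative bound, not Xen's value */

/* Per-server state grouped in one place: one event channel per vcpu
 * plus one for the buffered ioreq ring. */
struct ioreq_server {
    int ioreq_evtchn[MAX_HVM_VCPUS];
    int buf_ioreq_evtchn;
};

struct ioreq_server *ioreq_server_create(void)
{
    struct ioreq_server *s = calloc(1, sizeof(*s));
    int i;

    if ( !s )
        return NULL;

    /* All event channels start out invalid, as in the patch. */
    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
        s->ioreq_evtchn[i] = -1;
    s->buf_ioreq_evtchn = -1;

    return s;
}
```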

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |    9 +-
 xen/include/asm-x86/hvm/vcpu.h   |    2 +-
 3 files changed, 229 insertions(+), 100 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 71a44db..a0eaadb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
-static ioreq_t *get_ioreq(struct vcpu *v)
+static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
 {
-    struct domain *d = v->domain;
-    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
-    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
-    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
+    shared_iopage_t *p = s->ioreq.va;
+    ASSERT(p != NULL);
+    return &p->vcpu_ioreq[id];
 }
 
 void hvm_do_resume(struct vcpu *v)
 {
+    struct hvm_ioreq_server *s;
     ioreq_t *p;
 
     check_wakeup_from_wait();
@@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
     if ( is_hvm_vcpu(v) )
         pt_restore_timer(v);
 
-    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
-    if ( !(p = get_ioreq(v)) )
+    s = v->arch.hvm_vcpu.ioreq_server;
+    v->arch.hvm_vcpu.ioreq_server = NULL;
+
+    if ( !s )
         goto check_inject_trap;
 
+    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
+    p = get_ioreq(s, v->vcpu_id);
     while ( p->state != STATE_IOREQ_NONE )
     {
         switch ( p->state )
@@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
             break;
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
+            wait_on_xen_event_channel(p->vp_eport,
                                       (p->state != STATE_IOREQ_READY) &&
                                       (p->state != STATE_IOREQ_INPROCESS));
             break;
@@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
 static void hvm_init_ioreq_page(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
-    memset(iorp, 0, sizeof(*iorp));
     spin_lock_init(&iorp->lock);
     domain_pause(d);
 }
@@ -541,6 +544,167 @@ static int handle_pvh_io(
     return X86EMUL_OKAY;
 }
 
+static int hvm_init_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    int i;
+
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        return -ENOMEM;
+
+    s->domain = d;
+
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+        s->ioreq_evtchn[i] = -1;
+    s->buf_ioreq_evtchn = -1;
+
+    hvm_init_ioreq_page(d, &s->ioreq);
+    hvm_init_ioreq_page(d, &s->buf_ioreq);
+
+    d->arch.hvm_domain.ioreq_server = s;
+    return 0;
+}
+
+static void hvm_deinit_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+
+    hvm_destroy_ioreq_page(d, &s->ioreq);
+    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
+
+    xfree(s);
+}
+
+static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
+{
+    struct domain *d = s->domain;
+
+    if ( s->ioreq.va != NULL )
+    {
+        shared_iopage_t *p = s->ioreq.va;
+        struct vcpu *v;
+
+        for_each_vcpu ( d, v )
+            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
+    }
+}
+
+static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    int rc;
+
+    /* Create ioreq event channel. */
+    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+    if ( rc < 0 )
+        goto done;
+
+    /* Register ioreq event channel. */
+    s->ioreq_evtchn[v->vcpu_id] = rc;
+
+    if ( v->vcpu_id == 0 )
+    {
+        /* Create bufioreq event channel. */
+        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
+        if ( rc < 0 )
+            goto done;
+
+        s->buf_ioreq_evtchn = rc;
+    }
+
+    hvm_update_ioreq_server_evtchn(s);
+    rc = 0;
+
+done:
+    return rc;
+}
+
+static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    if ( v->vcpu_id == 0 )
+    {
+        if ( s->buf_ioreq_evtchn >= 0 )
+        {
+            free_xen_event_channel(v, s->buf_ioreq_evtchn);
+            s->buf_ioreq_evtchn = -1;
+        }
+    }
+
+    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
+    {
+        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
+        s->ioreq_evtchn[v->vcpu_id] = -1;
+    }
+}
+
+static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
+                                     int *p_port)
+{
+    int old_port, new_port;
+
+    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
+    if ( new_port < 0 )
+        return new_port;
+
+    /* xchg() ensures that only we call free_xen_event_channel(). */
+    old_port = xchg(p_port, new_port);
+    free_xen_event_channel(v, old_port);
+    return 0;
+}
+
+static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
+{
+    struct domain *d = s->domain;
+    struct vcpu *v;
+    int rc = 0;
+
+    domain_pause(d);
+
+    if ( d->vcpu[0] )
+    {
+        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
+        if ( rc < 0 )
+            goto done;
+    }
+
+    for_each_vcpu ( d, v )
+    {
+        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
+        if ( rc < 0 )
+            goto done;
+    }
+
+    hvm_update_ioreq_server_evtchn(s);
+
+    s->domid = domid;
+
+done:
+    domain_unpause(d);
+
+    return rc;
+}
+
+static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
+{
+    struct domain *d = s->domain;
+    int rc;
+
+    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
+    if ( rc < 0 )
+        return rc;
+
+    hvm_update_ioreq_server_evtchn(s);
+
+    return 0;
+}
+
+static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
+{
+    struct domain *d = s->domain;
+
+    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
+}
+
 int hvm_domain_initialise(struct domain *d)
 {
     int rc;
@@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    rc = hvm_init_ioreq_server(d);
+    if ( rc != 0 )
+        goto fail2;
 
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail2;
+        goto fail3;
 
     return 0;
 
+ fail3:
+    hvm_deinit_ioreq_server(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
-    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
+    hvm_deinit_ioreq_server(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
     struct domain *d = v->domain;
-    domid_t dm_domid;
+    struct hvm_ioreq_server *s;
 
     hvm_asid_flush_vcpu(v);
 
@@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
-    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
+    s = d->arch.hvm_domain.ioreq_server;
 
-    /* Create ioreq event channel. */
-    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
+    rc = hvm_ioreq_server_add_vcpu(s, v);
     if ( rc < 0 )
         goto fail6;
 
-    /* Register ioreq event channel. */
-    v->arch.hvm_vcpu.xen_port = rc;
-
-    if ( v->vcpu_id == 0 )
-    {
-        /* Create bufioreq event channel. */
-        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
-        if ( rc < 0 )
-            goto fail6;
-        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
-    }
-
-    spin_lock(&d->arch.hvm_domain.ioreq.lock);
-    if ( d->arch.hvm_domain.ioreq.va != NULL )
-        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
-
     if ( v->vcpu_id == 0 )
     {
         /* NB. All these really belong in hvm_domain_initialise(). */
@@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+
+    hvm_ioreq_server_remove_vcpu(s, v);
+
     nestedhvm_vcpu_destroy(v);
 
     free_compat_arg_xlat(v);
@@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
         vlapic_destroy(v);
 
     hvm_funcs.vcpu_destroy(v);
-
-    /* Event channel is already freed by evtchn_destroy(). */
-    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
 }
 
 void hvm_vcpu_down(struct vcpu *v)
@@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
 int hvm_buffered_io_send(ioreq_t *p)
 {
     struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
     buf_ioreq_t bp;
     /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
     int qw = 0;
@@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
+        return 0;
+
+    iorp = &s->buf_ioreq;
+    pg = iorp->va;
+
     /*
      * Return 0 for the cases we can't deal with:
      *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
@@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
     wmb();
     pg->write_pointer += qw ? 2 : 1;
 
-    notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
     spin_unlock(&iorp->lock);
     
     return 1;
@@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
 
 bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
 {
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
     ioreq_t *p;
 
     if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
         return 0; /* implicitly bins the i/o operation */
 
-    if ( !(p = get_ioreq(v)) )
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
         return 0;
 
+    p = get_ioreq(s, v->vcpu_id);
+
     if ( unlikely(p->state != STATE_IOREQ_NONE) )
     {
         /* This indicates a bug in the device model. Crash the domain. */
         gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p->state);
-        domain_crash(v->domain);
+        domain_crash(d);
         return 0;
     }
 
+    v->arch.hvm_vcpu.ioreq_server = s;
+
     p->dir = proto_p->dir;
     p->data_is_ptr = proto_p->data_is_ptr;
     p->type = proto_p->type;
@@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
     p->df = proto_p->df;
     p->data = proto_p->data;
 
-    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
+    prepare_wait_on_xen_event_channel(p->vp_eport);
 
     /*
      * Following happens /after/ blocking and setting up ioreq contents.
      * prepare_wait_on_xen_event_channel() is an implicit barrier.
      */
     p->state = STATE_IOREQ_READY;
-    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
+    notify_via_xen_event_channel(d, p->vp_eport);
 
     return 1;
 }
@@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
-                                     int *p_port)
-{
-    int old_port, new_port;
-
-    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
-    if ( new_port < 0 )
-        return new_port;
-
-    /* xchg() ensures that only we call free_xen_event_channel(). */
-    old_port = xchg(p_port, new_port);
-    free_xen_event_channel(v, old_port);
-    return 0;
-}
-
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     case HVMOP_get_param:
     {
         struct xen_hvm_param a;
-        struct hvm_ioreq_page *iorp;
+        struct hvm_ioreq_server *s;
         struct domain *d;
         struct vcpu *v;
 
@@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( rc )
             goto param_fail;
 
+        s = d->arch.hvm_domain.ioreq_server;
+
         if ( op == HVMOP_set_param )
         {
             rc = 0;
@@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             switch ( a.index )
             {
             case HVM_PARAM_IOREQ_PFN:
-                iorp = &d->arch.hvm_domain.ioreq;
-                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
-                    break;
-                spin_lock(&iorp->lock);
-                if ( iorp->va != NULL )
-                    /* Initialise evtchn port info if VCPUs already created. */
-                    for_each_vcpu ( d, v )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                spin_unlock(&iorp->lock);
+                rc = hvm_set_ioreq_server_pfn(s, a.value);
                 break;
             case HVM_PARAM_BUFIOREQ_PFN: 
-                iorp = &d->arch.hvm_domain.buf_ioreq;
-                rc = hvm_set_ioreq_page(d, iorp, a.value);
+                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
                 break;
             case HVM_PARAM_CALLBACK_IRQ:
                 hvm_set_callback_via(d, a.value);
@@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = 0;
-                domain_pause(d); /* safe to change per-vcpu xen_port */
-                if ( d->vcpu[0] )
-                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
-                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
-                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
-                if ( rc )
-                {
-                    domain_unpause(d);
-                    break;
-                }
-                iorp = &d->arch.hvm_domain.ioreq;
-                for_each_vcpu ( d, v )
-                {
-                    rc = hvm_replace_event_channel(v, a.value,
-                                                   &v->arch.hvm_vcpu.xen_port);
-                    if ( rc )
-                        break;
-
-                    spin_lock(&iorp->lock);
-                    if ( iorp->va != NULL )
-                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
-                    spin_unlock(&iorp->lock);
-                }
-                domain_unpause(d);
+                rc = hvm_set_ioreq_server_domid(s, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4241,6 +4360,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         {
             switch ( a.index )
             {
+            case HVM_PARAM_BUFIOREQ_EVTCHN:
+                a.value = s->buf_ioreq_evtchn;
+                break;
             case HVM_PARAM_ACPI_S_STATE:
                 a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
                 break;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index b1e3187..4c039f8 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -41,10 +41,17 @@ struct hvm_ioreq_page {
     void *va;
 };
 
-struct hvm_domain {
+struct hvm_ioreq_server {
+    struct domain          *domain;
+    domid_t                domid;
     struct hvm_ioreq_page  ioreq;
+    int                    ioreq_evtchn[MAX_HVM_VCPUS];
     struct hvm_ioreq_page  buf_ioreq;
+    int                    buf_ioreq_evtchn;
+};
 
+struct hvm_domain {
+    struct hvm_ioreq_server *ioreq_server;
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..4c9d7ee 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -138,7 +138,7 @@ struct hvm_vcpu {
     spinlock_t          tm_lock;
     struct list_head    tm_list;
 
-    int                 xen_port;
+    struct hvm_ioreq_server *ioreq_server;
 
     bool_t              flag_dr_dirty;
     bool_t              debug_state_latch;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTi-0000xl-8G; Thu, 30 Jan 2014 14:20:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTg-0000xI-2m
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:24 +0000
Received: from [193.109.254.147:21820] by server-12.bemta-14.messagelabs.com
	id 60/35-17220-7AF5AE25; Thu, 30 Jan 2014 14:20:23 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391091620!907916!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17625 invoked from network); 30 Jan 2014 14:20:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98103495"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTD-0006GQ-0m;
	Thu, 30 Jan 2014 14:19:55 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:50 +0000
Message-ID: <1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
	controller implementation into Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because we may now have more than one emulator, the PCI hotplug
controller must be implemented by Xen itself. Happily the code is short
and simple, and it also removes the need for a different ACPI DSDT when
using different variants of QEMU.
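
For illustration, a minimal stand-alone sketch of the slot bookkeeping the
in-Xen controller performs (names such as `hotplug_model` are hypothetical,
not from the patch): slots pending insertion and removal are tracked in two
bitmasks, and any change latches the GPE PCI hotplug status bit that drives
the SCI.

```c
#include <stdint.h>
#include <strings.h>  /* ffs() */

#define GPE_PCI_HOTPLUG_STATUS 2

/* Hypothetical user-space model of the controller state kept in Xen. */
struct hotplug_model {
    uint32_t slot_up;    /* slots pending insertion (guest reads via PCIU) */
    uint32_t slot_down;  /* slots pending removal (guest reads via PCID) */
    uint8_t  gpe_sts;    /* GPE0 status byte; bit 1 raises the SCI */
};

static void model_plug(struct hotplug_model *hp, int slot, int enable)
{
    if (enable)
        hp->slot_up |= 1u << slot;
    else
        hp->slot_down |= 1u << slot;
    hp->gpe_sts |= GPE_PCI_HOTPLUG_STATUS;  /* latch status; SCI follows */
}

static void model_eject(struct hotplug_model *hp, uint32_t mask)
{
    int slot = ffs(mask) - 1;  /* lowest set bit, as in the patch */

    hp->slot_down &= ~(1u << slot);
    hp->slot_up &= ~(1u << slot);
}
```

The guest's `_E01` method reads the PCIU/PCID masks to decide which slots
to `Notify`, and writes the eject register to clear a slot, mirroring
`model_eject` above.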

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 tools/firmware/hvmloader/acpi/mk_dsdt.c |  147 ++++------------------
 tools/libxc/xc_domain.c                 |   46 +++++++
 tools/libxc/xenctrl.h                   |   11 ++
 tools/libxl/libxl_pci.c                 |   15 +++
 xen/arch/x86/hvm/Makefile               |    1 +
 xen/arch/x86/hvm/hotplug.c              |  207 +++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c                  |   40 +++++-
 xen/include/asm-x86/hvm/domain.h        |   12 ++
 xen/include/asm-x86/hvm/io.h            |    6 +
 xen/include/public/hvm/hvm_op.h         |    9 ++
 xen/include/public/hvm/ioreq.h          |    2 +
 11 files changed, 373 insertions(+), 123 deletions(-)
 create mode 100644 xen/arch/x86/hvm/hotplug.c

diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index a4b693b..6408b44 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -58,28 +58,6 @@ static void pop_block(void)
     printf("}\n");
 }
 
-static void pci_hotplug_notify(unsigned int slt)
-{
-    stmt("Notify", "\\_SB.PCI0.S%02X, EVT", slt);
-}
-
-static void decision_tree(
-    unsigned int s, unsigned int e, char *var, void (*leaf)(unsigned int))
-{
-    if ( s == (e-1) )
-    {
-        (*leaf)(s);
-        return;
-    }
-
-    push_block("If", "And(%s, 0x%02x)", var, (e-s)/2);
-    decision_tree((s+e)/2, e, var, leaf);
-    pop_block();
-    push_block("Else", NULL);
-    decision_tree(s, (s+e)/2, var, leaf);
-    pop_block();
-}
-
 static struct option options[] = {
     { "maxcpu", 1, 0, 'c' },
     { "dm-version", 1, 0, 'q' },
@@ -322,64 +300,21 @@ int main(int argc, char **argv)
                    dev, intx, ((dev*4+dev/8+intx)&31)+16);
     printf("})\n");
 
-    /*
-     * Each PCI hotplug slot needs at least two methods to handle
-     * the ACPI event:
-     *  _EJ0: eject a device
-     *  _STA: return a device's status, e.g. enabled or removed
-     * 
-     * Eject button would generate a general-purpose event, then the
-     * control method for this event uses Notify() to inform OSPM which
-     * action happened and on which device.
-     *
-     * Pls. refer "6.3 Device Insertion, Removal, and Status Objects"
-     * in ACPI spec 3.0b for details.
-     *
-     * QEMU provides a simple hotplug controller with some I/O to handle
-     * the hotplug action and status, which is beyond the ACPI scope.
-     */
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        for ( slot = 0; slot < 0x100; slot++ )
-        {
-            push_block("Device", "S%02X", slot);
-            /* _ADR == dev:fn (16:16) */
-            stmt("Name", "_ADR, 0x%08x", ((slot & ~7) << 13) | (slot & 7));
-            /* _SUN == dev */
-            stmt("Name", "_SUN, 0x%08x", slot >> 3);
-            push_block("Method", "_EJ0, 1");
-            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
-            stmt("Store", "0x88, \\_GPE.DPT2");
-            stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
-                 (slot & 1) ? 0x10 : 0x01, slot & ~1);
-            pop_block();
-            push_block("Method", "_STA, 0");
-            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
-            stmt("Store", "0x89, \\_GPE.DPT2");
-            if ( slot & 1 )
-                stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
-            else
-                stmt("And", "\\_GPE.PH%02X, 0x0f, Local1", slot & ~1);
-            stmt("Return", "Local1"); /* IN status as the _STA */
-            pop_block();
-            pop_block();
-        }
-    } else {
-        stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
-        push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
-        indent(); printf("B0EJ, 32,\n");
-        pop_block();
+    stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
+    push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
+    indent(); printf("B0EJ, 32,\n");
+    pop_block();
 
-        /* hotplug_slot */
-        for (slot = 1; slot <= 31; slot++) {
-            push_block("Device", "S%i", slot); {
-                stmt("Name", "_ADR, %#06x0000", slot);
-                push_block("Method", "_EJ0,1"); {
-                    stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
-                    stmt("Return", "0x0");
-                } pop_block();
-                stmt("Name", "_SUN, %i", slot);
+    /* hotplug_slot */
+    for (slot = 1; slot <= 31; slot++) {
+        push_block("Device", "S%i", slot); {
+            stmt("Name", "_ADR, %#06x0000", slot);
+            push_block("Method", "_EJ0,1"); {
+                stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
+                stmt("Return", "0x0");
             } pop_block();
-        }
+            stmt("Name", "_SUN, %i", slot);
+        } pop_block();
     }
 
     pop_block();
@@ -389,26 +324,11 @@ int main(int argc, char **argv)
     /**** GPE start ****/
     push_block("Scope", "\\_GPE");
 
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        stmt("OperationRegion", "PHP, SystemIO, 0x10c0, 0x82");
-
-        push_block("Field", "PHP, ByteAcc, NoLock, Preserve");
-        indent(); printf("PSTA, 8,\n"); /* hotplug controller event reg */
-        indent(); printf("PSTB, 8,\n"); /* hotplug controller slot reg */
-        for ( slot = 0; slot < 0x100; slot += 2 )
-        {
-            indent();
-            /* Each hotplug control register manages a pair of pci functions. */
-            printf("PH%02X, 8,\n", slot);
-        }
-        pop_block();
-    } else {
-        stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
-        push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
-        indent(); printf("PCIU, 32,\n");
-        indent(); printf("PCID, 32,\n");
-        pop_block();
-    }
+    stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
+    push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
+    indent(); printf("PCIU, 32,\n");
+    indent(); printf("PCID, 32,\n");
+    pop_block();
 
     stmt("OperationRegion", "DG1, SystemIO, 0xb044, 0x04");
 
@@ -416,33 +336,16 @@ int main(int argc, char **argv)
     indent(); printf("DPT1, 8, DPT2, 8\n");
     pop_block();
 
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        push_block("Method", "_L03, 0, Serialized");
-        /* Detect slot and event (remove/add). */
-        stmt("Name", "SLT, 0x0");
-        stmt("Name", "EVT, 0x0");
-        stmt("Store", "PSTA, Local1");
-        stmt("And", "Local1, 0xf, EVT");
-        stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
-        stmt("And", "Local1, 0xff, SLT");
-        /* Debug */
-        stmt("Store", "SLT, DPT1");
-        stmt("Store", "EVT, DPT2");
-        /* Decision tree */
-        decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
+    push_block("Method", "_E01");
+    for (slot = 1; slot <= 31; slot++) {
+        push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
+        stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
         pop_block();
-    } else {
-        push_block("Method", "_E01");
-        for (slot = 1; slot <= 31; slot++) {
-            push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
-            stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
-            pop_block();
-            push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
-            stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
-            pop_block();
-        }
+        push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
+        stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
         pop_block();
     }
+    pop_block();
 
     pop_block();
     /**** GPE end ****/
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c64d15a..c89068e 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1421,6 +1421,52 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
     return rc;
 }
 
+int xc_hvm_pci_hotplug_enable(xc_interface *xch,
+                              domid_t domid,
+                              uint32_t slot)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_pci_hotplug;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->enable = 1;
+    arg->slot = slot;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_pci_hotplug_disable(xc_interface *xch,
+                               domid_t domid,
+                               uint32_t slot)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_pci_hotplug;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->enable = 0;
+    arg->slot = slot;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
 int xc_domain_setdebugging(xc_interface *xch,
                            uint32_t domid,
                            unsigned int enable)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 142aaea..c3e35a9 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1842,6 +1842,17 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
 				domid_t domid,
 				ioservid_t id);
 
+/*
+ * PCI hotplug API
+ */
+int xc_hvm_pci_hotplug_enable(xc_interface *xch,
+			      domid_t domid,
+			      uint32_t slot);
+
+int xc_hvm_pci_hotplug_disable(xc_interface *xch,
+			       domid_t domid,
+			       uint32_t slot);
+
 /* HVM guest pass-through */
 int xc_assign_device(xc_interface *xch,
                      uint32_t domid,
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 2e52470..4176440 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
         }
         if ( rc )
             return ERROR_FAIL;
+
+        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
+        if (rc < 0) {
+            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
+            return ERROR_FAIL;
+        }
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
@@ -1182,6 +1189,14 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
                                          NULL, NULL, NULL) < 0)
             goto out_fail;
 
+        rc = xc_hvm_pci_hotplug_disable(ctx->xch, domid, pcidev->dev);
+        if (rc < 0) {
+            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
+                             "Error: xc_hvm_pci_hotplug_disable failed");
+            rc = ERROR_FAIL;
+            goto out_fail;
+        }
+
         switch (libxl__device_model_version_running(gc, domid)) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             rc = qemu_pci_remove_xenstore(gc, domid, pcidev, force);
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..48efddb 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -3,6 +3,7 @@ subdir-y += vmx
 
 obj-y += asid.o
 obj-y += emulate.o
+obj-y += hotplug.o
 obj-y += hpet.o
 obj-y += hvm.o
 obj-y += i8254.o
diff --git a/xen/arch/x86/hvm/hotplug.c b/xen/arch/x86/hvm/hotplug.c
new file mode 100644
index 0000000..253d435
--- /dev/null
+++ b/xen/arch/x86/hvm/hotplug.c
@@ -0,0 +1,207 @@
+/*
+ * hvm/hotplug.c
+ *
+ * Copyright (c) 2013, Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/types.h>
+#include <xen/spinlock.h>
+#include <xen/xmalloc.h>
+#include <asm/hvm/io.h>
+#include <asm/hvm/support.h>
+
+#define SCI_IRQ 9
+
+#define GPE_BASE            (ACPI_GPE0_BLK_ADDRESS_V1)
+#define GPE_LEN             (ACPI_GPE0_BLK_LEN_V1)
+
+#define GPE_PCI_HOTPLUG_STATUS  2
+
+#define PCI_HOTPLUG_BASE    (ACPI_PCI_HOTPLUG_ADDRESS_V1)
+#define PCI_HOTPLUG_LEN     (ACPI_PCI_HOTPLUG_LEN_V1)
+
+#define PCI_UP      0
+#define PCI_DOWN    4
+#define PCI_EJECT   8
+
+static void gpe_update_sci(struct hvm_hotplug *hp)
+{
+    if ( (hp->gpe_sts[0] & hp->gpe_en[0]) & GPE_PCI_HOTPLUG_STATUS )
+        hvm_isa_irq_assert(hp->domain, SCI_IRQ);
+    else
+        hvm_isa_irq_deassert(hp->domain, SCI_IRQ);
+}
+
+static int handle_gpe_io(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    if ( bytes != 1 )
+    {
+        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
+        goto done;
+    }
+
+    port -= GPE_BASE;
+
+    if ( dir == IOREQ_READ )
+    {
+        if ( port < GPE_LEN / 2 )
+        {
+            *val = hp->gpe_sts[port];
+        }
+        else
+        {
+            port -= GPE_LEN / 2;
+            *val = hp->gpe_en[port];
+        }
+    } else {
+        if ( port < GPE_LEN / 2 )
+        {
+            hp->gpe_sts[port] &= ~*val;
+        }
+        else
+        {
+            port -= GPE_LEN / 2;
+            hp->gpe_en[port] = *val;
+        }
+
+        gpe_update_sci(hp);
+    }
+
+done:
+    return X86EMUL_OKAY;
+}
+
+static void pci_hotplug_eject(struct hvm_hotplug *hp, uint32_t mask)
+{
+    int slot = ffs(mask) - 1;
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, slot);
+
+    hp->slot_down &= ~(1u << slot);
+    hp->slot_up &= ~(1u << slot);
+}
+
+static int handle_pci_hotplug_io(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    if ( bytes != 4 )
+    {
+        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
+        goto done;
+    }
+
+    port -= PCI_HOTPLUG_BASE;
+
+    if ( dir == IOREQ_READ )
+    {
+        switch ( port )
+        {
+        case PCI_UP:
+            *val = hp->slot_up;
+            break;
+        case PCI_DOWN:
+            *val = hp->slot_down;
+            break;
+        default:
+            break;
+        }
+    }
+    else
+    {   
+        switch ( port )
+        {
+        case PCI_EJECT:
+            pci_hotplug_eject(hp, *val);
+            break;
+        default:
+            break;
+        }
+    }
+
+done:
+    return X86EMUL_OKAY;
+}
+
+void pci_hotplug(struct domain *d, int slot, bool_t enable)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    gdprintk(XENLOG_INFO, "%s: %s %d\n", __func__,
+             ( enable ) ? "enable" : "disable", slot);
+
+    if ( enable )
+        hp->slot_up |= (1u << slot);
+    else
+        hp->slot_down |= (1u << slot);
+
+    hp->gpe_sts[0] |= GPE_PCI_HOTPLUG_STATUS;
+    gpe_update_sci(hp);
+}
+
+int gpe_init(struct domain *d)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    hp->domain = d;
+
+    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);
+    if ( hp->gpe_sts == NULL )
+        goto fail1;
+
+    hp->gpe_en = xzalloc_array(uint8_t, GPE_LEN / 2);
+    if ( hp->gpe_en == NULL )
+        goto fail2;
+
+    register_portio_handler(d, GPE_BASE, GPE_LEN, handle_gpe_io);
+    register_portio_handler(d, PCI_HOTPLUG_BASE, PCI_HOTPLUG_LEN,
+                            handle_pci_hotplug_io);
+
+    return 0;
+
+fail2:
+    xfree(hp->gpe_sts);
+
+fail1:
+    return -ENOMEM;
+}
+
+void gpe_deinit(struct domain *d)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    xfree(hp->gpe_en);
+    xfree(hp->gpe_sts);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * c-tab-always-indent: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5f9e728..ff7b259 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1298,15 +1298,21 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
+    rc = gpe_init(d);
+    if ( rc != 0 )
+        goto fail2;
+
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail2;
+        goto fail3;
 
     return 0;
 
+ fail3:
+    gpe_deinit(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -1352,6 +1358,7 @@ void hvm_domain_destroy(struct domain *d)
         return;
 
     hvm_funcs.domain_destroy(d);
+    gpe_deinit(d);
     rtc_deinit(d);
     stdvga_deinit(d);
     vioapic_deinit(d);
@@ -5015,6 +5022,32 @@ out:
     return rc;
 }
 
+static int hvmop_pci_hotplug(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_pci_hotplug_t) uop)
+{
+    xen_hvm_pci_hotplug_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    pci_hotplug(d, op.slot, op.enable);
+    rc = 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -5058,6 +5091,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
         break;
     
+    case HVMOP_pci_hotplug:
+        rc = hvmop_pci_hotplug(
+            guest_handle_cast(arg, xen_hvm_pci_hotplug_t));
+        break;
+
     case HVMOP_set_param:
     case HVMOP_get_param:
     {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 93dcec1..13dd24d 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -66,6 +66,16 @@ struct hvm_ioreq_server {
     struct hvm_pcidev      *pcidev_list;
 };
 
+struct hvm_hotplug {
+    struct domain   *domain;
+    uint8_t         *gpe_sts;
+    uint8_t         *gpe_en;
+
+    /* PCI hotplug */
+    uint32_t        slot_up;
+    uint32_t        slot_down;
+};
+
 struct hvm_domain {
     struct list_head        ioreq_server_list;
     spinlock_t              ioreq_server_lock;
@@ -73,6 +83,8 @@ struct hvm_domain {
     uint32_t                pci_cf8;
     spinlock_t              pci_lock;
 
+    struct hvm_hotplug      hotplug;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 86db58d..072bfe7 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
 void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
+
+int gpe_init(struct domain *d);
+void gpe_deinit(struct domain *d);
+
+void pci_hotplug(struct domain *d, int slot, bool_t enable);
+
 #endif /* __ASM_X86_HVM_IO_H__ */
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 6b31189..20a53ab 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
 typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
 
+#define HVMOP_pci_hotplug 24
+struct xen_hvm_pci_hotplug {
+    domid_t domid;          /* IN - domain to be serviced */
+    uint8_t enable;         /* IN - enable or disable? */
+    uint32_t slot;          /* IN - slot to enable/disable */
+};
+typedef struct xen_hvm_pci_hotplug xen_hvm_pci_hotplug_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_pci_hotplug_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index e84fa75..40bfa61 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
 #define ACPI_PM_TMR_BLK_ADDRESS_V1   (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
 #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
 #define ACPI_GPE0_BLK_LEN_V1         0x04
+#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
+#define ACPI_PCI_HOTPLUG_LEN_V1      0x10
 
 /* Compatibility definitions for the default location (version 0). */
 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTl-0000z0-8u; Thu, 30 Jan 2014 14:20:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTj-0000xe-4h
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:27 +0000
Received: from [85.158.139.211:60132] by server-16.bemta-5.messagelabs.com id
	15/C5-05060-8AF5AE25; Thu, 30 Jan 2014 14:20:24 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391091619!611812!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11239 invoked from network); 30 Jan 2014 14:20:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96136117"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-VD;
	Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:48 +0000
Message-ID: <1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
	ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch defers creation of the ioreq server until the legacy HVM
parameters are first touched by an emulator. It also lays some groundwork
for supporting multiple ioreq servers. For instance, it introduces ioreq
server reference counting, which is not strictly necessary at this stage
but will become so when ioreq servers can be destroyed prior to the
domain being destroyed.

There is a significant change in the layout of the special pages reserved
in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards without
moving pages such as the xenstore page when building a domain that can
support more than one emulator.
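
As an illustrative model of the new layout arithmetic (constants copied
from the patch; this is a sketch, not the build code itself): the fixed
pages keep the low indices while the ioreq pages take the highest ones, so
further per-emulator pages can later be added below them without moving
the xenstore or console pages.

```c
/* Model of the reworked special-page layout in xc_hvm_build_x86.c:
 * pages are allocated downwards from PFN 0xff000. */
#define SPECIALPAGE_PAGING   0
#define SPECIALPAGE_ACCESS   1
#define SPECIALPAGE_SHARING  2
#define SPECIALPAGE_XENSTORE 3
#define SPECIALPAGE_IDENT_PT 4
#define SPECIALPAGE_CONSOLE  5
#define SPECIALPAGE_IOREQ    6
/* The ioreq server needs two pages: ioreq and buffered ioreq. */
#define NR_SPECIAL_PAGES     (SPECIALPAGE_IOREQ + 2)
#define special_pfn(x)       (0xff000u - (x))
```

With this scheme the synchronous ioreq page sits at `special_pfn(6)`,
the buffered ioreq page immediately below it at `special_pfn(6) - 1`,
and `reserved_mem_pgstart` becomes `special_pfn(0) - NR_SPECIAL_PAGES`.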

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 tools/libxc/xc_hvm_build_x86.c   |   41 ++--
 xen/arch/x86/hvm/hvm.c           |  409 ++++++++++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |    3 +-
 3 files changed, 314 insertions(+), 139 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..f24f2a1 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -41,13 +41,12 @@
 #define SPECIALPAGE_PAGING   0
 #define SPECIALPAGE_ACCESS   1
 #define SPECIALPAGE_SHARING  2
-#define SPECIALPAGE_BUFIOREQ 3
-#define SPECIALPAGE_XENSTORE 4
-#define SPECIALPAGE_IOREQ    5
-#define SPECIALPAGE_IDENT_PT 6
-#define SPECIALPAGE_CONSOLE  7
-#define NR_SPECIAL_PAGES     8
-#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
+#define SPECIALPAGE_XENSTORE 3
+#define SPECIALPAGE_IDENT_PT 4
+#define SPECIALPAGE_CONSOLE  5
+#define SPECIALPAGE_IOREQ    6
+#define NR_SPECIAL_PAGES     (SPECIALPAGE_IOREQ + 2) /* ioreq server needs 2 pages */
+#define special_pfn(x) (0xff000u - (x))
 
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
@@ -112,7 +111,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
     /* Memory parameters. */
     hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
-    hvm_info->reserved_mem_pgstart = special_pfn(0);
+    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
 
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
@@ -463,6 +462,24 @@ static int setup_guest(xc_interface *xch,
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Allocate and clear special pages. */
+
+    DPRINTF("%d SPECIAL PAGES:\n"
+            "  PAGING:    %"PRI_xen_pfn"\n"
+            "  ACCESS:    %"PRI_xen_pfn"\n"
+            "  SHARING:   %"PRI_xen_pfn"\n"
+            "  STORE:     %"PRI_xen_pfn"\n"
+            "  IDENT_PT:  %"PRI_xen_pfn"\n"
+            "  CONSOLE:   %"PRI_xen_pfn"\n"
+            "  IOREQ:     %"PRI_xen_pfn"\n",
+            NR_SPECIAL_PAGES,
+            (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
+
     for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
     {
         xen_pfn_t pfn = special_pfn(i);
@@ -478,10 +495,6 @@ static int setup_guest(xc_interface *xch,
 
     xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
                      special_pfn(SPECIALPAGE_XENSTORE));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
-                     special_pfn(SPECIALPAGE_BUFIOREQ));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
-                     special_pfn(SPECIALPAGE_IOREQ));
     xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
                      special_pfn(SPECIALPAGE_CONSOLE));
     xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
@@ -490,6 +503,10 @@ static int setup_guest(xc_interface *xch,
                      special_pfn(SPECIALPAGE_ACCESS));
     xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
                      special_pfn(SPECIALPAGE_SHARING));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
+                     special_pfn(SPECIALPAGE_IOREQ));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
+                     special_pfn(SPECIALPAGE_IOREQ) - 1);
 
     /*
      * Identity-map page table is required for running with CR0.PG=0 when
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a0eaadb..d9874fb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -352,24 +352,9 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
     return &p->vcpu_ioreq[id];
 }
 
-void hvm_do_resume(struct vcpu *v)
+static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
-    ioreq_t *p;
-
-    check_wakeup_from_wait();
-
-    if ( is_hvm_vcpu(v) )
-        pt_restore_timer(v);
-
-    s = v->arch.hvm_vcpu.ioreq_server;
-    v->arch.hvm_vcpu.ioreq_server = NULL;
-
-    if ( !s )
-        goto check_inject_trap;
-
     /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
-    p = get_ioreq(s, v->vcpu_id);
     while ( p->state != STATE_IOREQ_NONE )
     {
         switch ( p->state )
@@ -385,12 +370,32 @@ void hvm_do_resume(struct vcpu *v)
             break;
         default:
             gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
-            domain_crash(v->domain);
+            domain_crash(d);
             return; /* bail */
         }
     }
+}
+
+void hvm_do_resume(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+
+    check_wakeup_from_wait();
+
+    if ( is_hvm_vcpu(v) )
+        pt_restore_timer(v);
+
+    s = v->arch.hvm_vcpu.ioreq_server;
+    v->arch.hvm_vcpu.ioreq_server = NULL;
+
+    if ( s )
+    {
+        ioreq_t *p = get_ioreq(s, v->vcpu_id);
+
+        hvm_wait_on_io(d, p);
+    }
 
- check_inject_trap:
     /* Inject pending hw/sw trap */
     if ( v->arch.hvm_vcpu.inject_trap.vector != -1 ) 
     {
@@ -399,11 +404,13 @@ void hvm_do_resume(struct vcpu *v)
     }
 }
 
-static void hvm_init_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, int buf)
 {
+    struct hvm_ioreq_page *iorp;
+
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
+
     spin_lock_init(&iorp->lock);
-    domain_pause(d);
 }
 
 void destroy_ring_for_helper(
@@ -419,16 +426,13 @@ void destroy_ring_for_helper(
     }
 }
 
-static void hvm_destroy_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, int buf)
 {
-    spin_lock(&iorp->lock);
+    struct hvm_ioreq_page *iorp;
 
-    ASSERT(d->is_dying);
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
 
     destroy_ring_for_helper(&iorp->va, iorp->page);
-
-    spin_unlock(&iorp->lock);
 }
 
 int prepare_ring_for_helper(
@@ -476,8 +480,10 @@ int prepare_ring_for_helper(
 }
 
 static int hvm_set_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
+    struct hvm_ioreq_server *s, int buf, unsigned long gmfn)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp;
     struct page_info *page;
     void *va;
     int rc;
@@ -485,22 +491,17 @@ static int hvm_set_ioreq_page(
     if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
         return rc;
 
-    spin_lock(&iorp->lock);
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
 
     if ( (iorp->va != NULL) || d->is_dying )
     {
-        destroy_ring_for_helper(&iorp->va, iorp->page);
-        spin_unlock(&iorp->lock);
+        destroy_ring_for_helper(&va, page);
         return -EINVAL;
     }
 
     iorp->va = va;
     iorp->page = page;
 
-    spin_unlock(&iorp->lock);
-
-    domain_unpause(d);
-
     return 0;
 }
 
@@ -544,38 +545,6 @@ static int handle_pvh_io(
     return X86EMUL_OKAY;
 }
 
-static int hvm_init_ioreq_server(struct domain *d)
-{
-    struct hvm_ioreq_server *s;
-    int i;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    s->domain = d;
-
-    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
-        s->ioreq_evtchn[i] = -1;
-    s->buf_ioreq_evtchn = -1;
-
-    hvm_init_ioreq_page(d, &s->ioreq);
-    hvm_init_ioreq_page(d, &s->buf_ioreq);
-
-    d->arch.hvm_domain.ioreq_server = s;
-    return 0;
-}
-
-static void hvm_deinit_ioreq_server(struct domain *d)
-{
-    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
-
-    hvm_destroy_ioreq_page(d, &s->ioreq);
-    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
-
-    xfree(s);
-}
-
 static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->domain;
@@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
     }
 }
 
+static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
+{
+    struct hvm_ioreq_server *s;
+    int i;
+    unsigned long pfn;
+    struct vcpu *v;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -EEXIST;
+    if ( d->arch.hvm_domain.ioreq_server != NULL )
+        goto fail_exist;
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+
+    rc = -ENOMEM;
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        goto fail_alloc;
+
+    s->domain = d;
+    s->domid = domid;
+
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+        s->ioreq_evtchn[i] = -1;
+    s->buf_ioreq_evtchn = -1;
+
+    /* Initialize shared pages */
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+
+    hvm_init_ioreq_page(s, 0);
+    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
+        goto fail_set_ioreq;
+
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+
+    hvm_init_ioreq_page(s, 1);
+    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
+        goto fail_set_buf_ioreq;
+
+    for_each_vcpu ( d, v )
+    {
+        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
+            goto fail_add_vcpu;
+    }
+
+    d->arch.hvm_domain.ioreq_server = s;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail_add_vcpu:
+    for_each_vcpu ( d, v )
+        hvm_ioreq_server_remove_vcpu(s, v);
+    hvm_destroy_ioreq_page(s, 1);
+fail_set_buf_ioreq:
+    hvm_destroy_ioreq_page(s, 0);
+fail_set_ioreq:
+    xfree(s);
+fail_alloc:
+fail_exist:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    return rc;
+}
+
+static void hvm_destroy_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    struct vcpu *v;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
+        goto done;
+
+    domain_pause(d);
+
+    d->arch.hvm_domain.ioreq_server = NULL;
+
+    for_each_vcpu ( d, v )
+        hvm_ioreq_server_remove_vcpu(s, v);
+
+    hvm_destroy_ioreq_page(s, 1);
+    hvm_destroy_ioreq_page(s, 0);
+
+    xfree(s);
+
+    domain_unpause(d);
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
+static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    *port = s->buf_ioreq_evtchn;
+    rc = 0;
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    if ( buf )
+        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+    else
+        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+
+    rc = 0;
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
 static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
                                      int *p_port)
 {
@@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
+static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
 {
-    struct domain *d = s->domain;
+    struct hvm_ioreq_server *s;
     struct vcpu *v;
     int rc = 0;
 
     domain_pause(d);
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    rc = 0;
+    if ( s->domid == domid )
+        goto done;
 
     if ( d->vcpu[0] )
     {
@@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
 
 done:
     domain_unpause(d);
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
     return rc;
 }
 
-static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
-{
-    struct domain *d = s->domain;
-    int rc;
-
-    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
-    if ( rc < 0 )
-        return rc;
-
-    hvm_update_ioreq_server_evtchn(s);
-
-    return 0;
-}
-
-static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
-{
-    struct domain *d = s->domain;
-
-    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
-}
-
 int hvm_domain_initialise(struct domain *d)
 {
     int rc;
@@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
 
     }
 
+    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
     spin_lock_init(&d->arch.hvm_domain.irq_lock);
     spin_lock_init(&d->arch.hvm_domain.uc_lock);
 
@@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
-    rc = hvm_init_ioreq_server(d);
-    if ( rc != 0 )
-        goto fail2;
-
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail3;
+        goto fail2;
 
     return 0;
 
- fail3:
-    hvm_deinit_ioreq_server(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_deinit_ioreq_server(d);
+    hvm_destroy_ioreq_server(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
     s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    rc = hvm_ioreq_server_add_vcpu(s, v);
-    if ( rc < 0 )
-        goto fail6;
+    if ( s )
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc < 0 )
+            goto fail6;
+    }
 
     if ( v->vcpu_id == 0 )
     {
@@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
 void hvm_vcpu_destroy(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+    struct hvm_ioreq_server *s;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    hvm_ioreq_server_remove_vcpu(s, v);
+    if ( s )
+        hvm_ioreq_server_remove_vcpu(s, v);
 
     nestedhvm_vcpu_destroy(v);
 
@@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
     s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
     if ( !s )
         return 0;
 
@@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
     return 1;
 }
 
-bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
+static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
+                                            struct vcpu *v,
+                                            ioreq_t *proto_p)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    ioreq_t *p;
-
-    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
-        return 0; /* implicitly bins the i/o operation */
-
-    s = d->arch.hvm_domain.ioreq_server;
-    if ( !s )
-        return 0;
-
-    p = get_ioreq(s, v->vcpu_id);
+    ioreq_t *p = get_ioreq(s, v->vcpu_id);
 
     if ( unlikely(p->state != STATE_IOREQ_NONE) )
     {
@@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
     return 1;
 }
 
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+
+    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
+
+    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
+        return 0;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    if ( !s )
+        return 0;
+
+    return hvm_send_assist_req_to_server(s, v, p);
+}
+
 void hvm_hlt(unsigned long rflags)
 {
     struct vcpu *curr = current;
@@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     case HVMOP_get_param:
     {
         struct xen_hvm_param a;
-        struct hvm_ioreq_server *s;
         struct domain *d;
         struct vcpu *v;
 
@@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( rc )
             goto param_fail;
 
-        s = d->arch.hvm_domain.ioreq_server;
-
         if ( op == HVMOP_set_param )
         {
             rc = 0;
 
             switch ( a.index )
             {
-            case HVM_PARAM_IOREQ_PFN:
-                rc = hvm_set_ioreq_server_pfn(s, a.value);
-                break;
-            case HVM_PARAM_BUFIOREQ_PFN: 
-                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
-                break;
             case HVM_PARAM_CALLBACK_IRQ:
                 hvm_set_callback_via(d, a.value);
                 hvm_latch_shinfo_size(d);
@@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = hvm_set_ioreq_server_domid(s, a.value);
+                rc = hvm_create_ioreq_server(d, a.value);
+                if ( rc == -EEXIST )
+                    rc = hvm_set_ioreq_server_domid(d, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         {
             switch ( a.index )
             {
+            case HVM_PARAM_IOREQ_PFN:
+            case HVM_PARAM_BUFIOREQ_PFN:
             case HVM_PARAM_BUFIOREQ_EVTCHN:
-                a.value = s->buf_ioreq_evtchn;
+                /* May need to create server */
+                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
+                if ( rc != 0 && rc != -EEXIST )
+                    goto param_fail;
+
+                switch ( a.index )
+                {
+                case HVM_PARAM_IOREQ_PFN: {
+                    xen_pfn_t pfn;
+
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
+                        goto param_fail;
+
+                    a.value = pfn;
+                    break;
+                }
+                case HVM_PARAM_BUFIOREQ_PFN: {
+                    xen_pfn_t pfn;
+
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
+                        goto param_fail;
+
+                    a.value = pfn;
+                    break;
+                }
+                case HVM_PARAM_BUFIOREQ_EVTCHN: {
+                    evtchn_port_t port;
+
+                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
+                        goto param_fail;
+
+                    a.value = port;
+                    break;
+                }
+                default:
+                    BUG();
+                }
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 4c039f8..e750ef0 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,6 +52,8 @@ struct hvm_ioreq_server {
 
 struct hvm_domain {
     struct hvm_ioreq_server *ioreq_server;
+    spinlock_t              ioreq_server_lock;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
@@ -106,4 +108,3 @@ struct hvm_domain {
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
-
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:46 +0000
Message-ID: <1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
	ioreq structures

To simplify creation of the ioreq server abstraction in a
subsequent patch, this patch centralizes all use of the shared
ioreq structure and the buffered ioreq ring in the source module
xen/arch/x86/hvm/hvm.c.

Also, re-work hvm_send_assist_req() slightly so that I/O is completed
immediately in the case where there is no emulator (i.e. the shared
IOREQ ring has not been set up). This handles the case currently
covered by has_dm in hvmemul_do_io().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
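[The following is a standalone sketch, not part of the patch: it models the
control-flow change described above. The caller now always builds the
ioreq_t on its own stack and hands it to hvm_send_assist_req(); a zero
return means "no emulator present", so the access is completed (binned)
immediately instead of being special-cased via a has_dm flag. All names
ending in _sim, and the have_server_sim flag, are invented for this sketch.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };
enum { X86EMUL_OKAY = 0, X86EMUL_RETRY = 1 };

/* Cut-down stand-in for ioreq_t. */
typedef struct { int state, dir, type; unsigned long addr, data; } ioreq_sim_t;

/* Stands in for d->arch.hvm_domain.ioreq_server being non-NULL. */
static bool have_server_sim;

/* Returns true if the request was forwarded to an emulator. */
static bool send_assist_req_sim(ioreq_sim_t *p)
{
    if ( !have_server_sim )
        return false;             /* implicitly bins the I/O operation */
    p->state = STATE_IOREQ_READY; /* the real code notifies the emulator */
    return true;
}

static int do_io_sim(void)
{
    ioreq_sim_t p;                /* always on the caller's stack now */

    memset(&p, 0, sizeof(p));
    p.state = STATE_IOREQ_NONE;

    if ( !send_assist_req_sim(&p) )
        return X86EMUL_OKAY;      /* no emulator: complete immediately */
    return X86EMUL_RETRY;         /* wait for the emulator to respond */
}
```

With no server registered, do_io_sim() completes with X86EMUL_OKAY; with
one registered it returns X86EMUL_RETRY, mirroring the two arms of the
X86EMUL_UNHANDLEABLE case in hvmemul_do_io() after this patch.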
 xen/arch/x86/hvm/emulate.c        |   40 +++------------
 xen/arch/x86/hvm/hvm.c            |   98 ++++++++++++++++++++++++++++++++++++-
 xen/arch/x86/hvm/io.c             |   94 +----------------------------------
 xen/include/asm-x86/hvm/hvm.h     |    3 +-
 xen/include/asm-x86/hvm/support.h |    9 ----
 5 files changed, 108 insertions(+), 136 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 868aa1d..d1d3a6f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -57,24 +57,11 @@ static int hvmemul_do_io(
     int value_is_ptr = (p_data == NULL);
     struct vcpu *curr = current;
     struct hvm_vcpu_io *vio;
-    ioreq_t *p = get_ioreq(curr);
-    ioreq_t _ioreq;
+    ioreq_t p[1];
     unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
     p2m_type_t p2mt;
     struct page_info *ram_page;
     int rc;
-    bool_t has_dm = 1;
-
-    /*
-     * Domains without a backing DM, don't have an ioreq page.  Just
-     * point to a struct on the stack, initialising the state as needed.
-     */
-    if ( !p )
-    {
-        has_dm = 0;
-        p = &_ioreq;
-        p->state = STATE_IOREQ_NONE;
-    }
 
     /* Check for paged out page */
     ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
@@ -173,15 +160,6 @@ static int hvmemul_do_io(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    if ( p->state != STATE_IOREQ_NONE )
-    {
-        gdprintk(XENLOG_WARNING, "WARNING: io already pending (%d)?\n",
-                 p->state);
-        if ( ram_page )
-            put_page(ram_page);
-        return X86EMUL_UNHANDLEABLE;
-    }
-
     vio->io_state =
         (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
     vio->io_size = size;
@@ -193,6 +171,7 @@ static int hvmemul_do_io(
     if ( vio->mmio_retrying )
         *reps = 1;
 
+    p->state = STATE_IOREQ_NONE;
     p->dir = dir;
     p->data_is_ptr = value_is_ptr;
     p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
@@ -232,20 +211,15 @@ static int hvmemul_do_io(
             vio->io_state = HVMIO_handle_mmio_awaiting_completion;
         break;
     case X86EMUL_UNHANDLEABLE:
-        /* If there is no backing DM, just ignore accesses */
-        if ( !has_dm )
+        rc = X86EMUL_RETRY;
+        if ( !hvm_send_assist_req(curr, p) )
         {
             rc = X86EMUL_OKAY;
             vio->io_state = HVMIO_none;
         }
-        else
-        {
-            rc = X86EMUL_RETRY;
-            if ( !hvm_send_assist_req(curr) )
-                vio->io_state = HVMIO_none;
-            else if ( p_data == NULL )
-                rc = X86EMUL_OKAY;
-        }
+        else if ( p_data == NULL )
+            rc = X86EMUL_OKAY;
+
         break;
     default:
         BUG();
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..71a44db 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
+static ioreq_t *get_ioreq(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
+    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
+    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
+}
+
 void hvm_do_resume(struct vcpu *v)
 {
     ioreq_t *p;
@@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
     }
 }
 
-bool_t hvm_send_assist_req(struct vcpu *v)
+int hvm_buffered_io_send(ioreq_t *p)
+{
+    struct vcpu *v = current;
+    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
+    buffered_iopage_t *pg = iorp->va;
+    buf_ioreq_t bp;
+    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
+    int qw = 0;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    bp.type = p->type;
+    bp.dir  = p->dir;
+    switch ( p->size )
+    {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
+        return 0;
+    }
+    
+    bp.data = p->data;
+    bp.addr = p->addr;
+    
+    spin_lock(&iorp->lock);
+
+    if ( (pg->write_pointer - pg->read_pointer) >=
+         (IOREQ_BUFFER_SLOT_NUM - qw) )
+    {
+        /* The queue is full: send the iopacket through the normal path. */
+        spin_unlock(&iorp->lock);
+        return 0;
+    }
+    
+    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
+           &bp, sizeof(bp));
+    
+    if ( qw )
+    {
+        bp.data = p->data >> 32;
+        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
+               &bp, sizeof(bp));
+    }
+
+    /* Make the ioreq_t visible /before/ write_pointer. */
+    wmb();
+    pg->write_pointer += qw ? 2 : 1;
+
+    notify_via_xen_event_channel(v->domain,
+            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+    spin_unlock(&iorp->lock);
+    
+    return 1;
+}
+
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
 {
     ioreq_t *p;
 
@@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
         return 0;
     }
 
+    p->dir = proto_p->dir;
+    p->data_is_ptr = proto_p->data_is_ptr;
+    p->type = proto_p->type;
+    p->size = proto_p->size;
+    p->addr = proto_p->addr;
+    p->count = proto_p->count;
+    p->df = proto_p->df;
+    p->data = proto_p->data;
+
     prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
 
     /*
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..576641c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -46,85 +46,6 @@
 #include <xen/iocap.h>
 #include <public/hvm/ioreq.h>
 
-int hvm_buffered_io_send(ioreq_t *p)
-{
-    struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
-    buf_ioreq_t bp;
-    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
-    int qw = 0;
-
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
-
-    bp.type = p->type;
-    bp.dir  = p->dir;
-    switch ( p->size )
-    {
-    case 1:
-        bp.size = 0;
-        break;
-    case 2:
-        bp.size = 1;
-        break;
-    case 4:
-        bp.size = 2;
-        break;
-    case 8:
-        bp.size = 3;
-        qw = 1;
-        break;
-    default:
-        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return 0;
-    }
-    
-    bp.data = p->data;
-    bp.addr = p->addr;
-    
-    spin_lock(&iorp->lock);
-
-    if ( (pg->write_pointer - pg->read_pointer) >=
-         (IOREQ_BUFFER_SLOT_NUM - qw) )
-    {
-        /* The queue is full: send the iopacket through the normal path. */
-        spin_unlock(&iorp->lock);
-        return 0;
-    }
-    
-    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
-           &bp, sizeof(bp));
-    
-    if ( qw )
-    {
-        bp.data = p->data >> 32;
-        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
-               &bp, sizeof(bp));
-    }
-
-    /* Make the ioreq_t visible /before/ write_pointer. */
-    wmb();
-    pg->write_pointer += qw ? 2 : 1;
-
-    notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
-    spin_unlock(&iorp->lock);
-    
-    return 1;
-}
-
 void send_timeoffset_req(unsigned long timeoff)
 {
     ioreq_t p[1];
@@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
 void send_invalidate_req(void)
 {
     struct vcpu *v = current;
-    ioreq_t *p = get_ioreq(v);
-
-    if ( !p )
-        return;
-
-    if ( p->state != STATE_IOREQ_NONE )
-    {
-        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with something "
-                 "already pending (%d)?\n", p->state);
-        domain_crash(v->domain);
-        return;
-    }
+    ioreq_t p[1];
 
     p->type = IOREQ_TYPE_INVALIDATE;
     p->size = 4;
     p->dir = IOREQ_WRITE;
     p->data = ~0UL; /* flush all */
 
-    (void)hvm_send_assist_req(v);
+    (void)hvm_send_assist_req(v, p);
 }
 
 int handle_mmio(void)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index ccca5df..4e8fee8 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -26,6 +26,7 @@
 #include <asm/hvm/asid.h>
 #include <public/domctl.h>
 #include <public/hvm/save.h>
+#include <public/hvm/ioreq.h>
 #include <asm/mm.h>
 
 /* Interrupt acknowledgement sources. */
@@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
                             struct page_info **_page, void **_va);
 void destroy_ring_for_helper(void **_va, struct page_info *page);
 
-bool_t hvm_send_assist_req(struct vcpu *v);
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
 
 void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
 int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index 3529499..b6af3c5 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -22,19 +22,10 @@
 #define __ASM_X86_HVM_SUPPORT_H__
 
 #include <xen/types.h>
-#include <public/hvm/ioreq.h>
 #include <xen/sched.h>
 #include <xen/hvm/save.h>
 #include <asm/processor.h>
 
-static inline ioreq_t *get_ioreq(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
-    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
-    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
-}
-
 #define HVM_DELIVER_NO_ERROR_CODE  -1
 
 #ifndef NDEBUG
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:50 +0000
Message-ID: <1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
	controller implementation into Xen

Because we may now have more than one emulator, the implementation of the
PCI hotplug controller needs to be done by Xen. Happily, the code is short
and simple, and it also removes the need for a different ACPI DSDT when
using different variants of QEMU.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 tools/firmware/hvmloader/acpi/mk_dsdt.c |  147 ++++------------------
 tools/libxc/xc_domain.c                 |   46 +++++++
 tools/libxc/xenctrl.h                   |   11 ++
 tools/libxl/libxl_pci.c                 |   15 +++
 xen/arch/x86/hvm/Makefile               |    1 +
 xen/arch/x86/hvm/hotplug.c              |  207 +++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c                  |   40 +++++-
 xen/include/asm-x86/hvm/domain.h        |   12 ++
 xen/include/asm-x86/hvm/io.h            |    6 +
 xen/include/public/hvm/hvm_op.h         |    9 ++
 xen/include/public/hvm/ioreq.h          |    2 +
 11 files changed, 373 insertions(+), 123 deletions(-)
 create mode 100644 xen/arch/x86/hvm/hotplug.c

diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
index a4b693b..6408b44 100644
--- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
+++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
@@ -58,28 +58,6 @@ static void pop_block(void)
     printf("}\n");
 }
 
-static void pci_hotplug_notify(unsigned int slt)
-{
-    stmt("Notify", "\\_SB.PCI0.S%02X, EVT", slt);
-}
-
-static void decision_tree(
-    unsigned int s, unsigned int e, char *var, void (*leaf)(unsigned int))
-{
-    if ( s == (e-1) )
-    {
-        (*leaf)(s);
-        return;
-    }
-
-    push_block("If", "And(%s, 0x%02x)", var, (e-s)/2);
-    decision_tree((s+e)/2, e, var, leaf);
-    pop_block();
-    push_block("Else", NULL);
-    decision_tree(s, (s+e)/2, var, leaf);
-    pop_block();
-}
-
 static struct option options[] = {
     { "maxcpu", 1, 0, 'c' },
     { "dm-version", 1, 0, 'q' },
@@ -322,64 +300,21 @@ int main(int argc, char **argv)
                    dev, intx, ((dev*4+dev/8+intx)&31)+16);
     printf("})\n");
 
-    /*
-     * Each PCI hotplug slot needs at least two methods to handle
-     * the ACPI event:
-     *  _EJ0: eject a device
-     *  _STA: return a device's status, e.g. enabled or removed
-     * 
-     * Eject button would generate a general-purpose event, then the
-     * control method for this event uses Notify() to inform OSPM which
-     * action happened and on which device.
-     *
-     * Pls. refer "6.3 Device Insertion, Removal, and Status Objects"
-     * in ACPI spec 3.0b for details.
-     *
-     * QEMU provides a simple hotplug controller with some I/O to handle
-     * the hotplug action and status, which is beyond the ACPI scope.
-     */
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        for ( slot = 0; slot < 0x100; slot++ )
-        {
-            push_block("Device", "S%02X", slot);
-            /* _ADR == dev:fn (16:16) */
-            stmt("Name", "_ADR, 0x%08x", ((slot & ~7) << 13) | (slot & 7));
-            /* _SUN == dev */
-            stmt("Name", "_SUN, 0x%08x", slot >> 3);
-            push_block("Method", "_EJ0, 1");
-            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
-            stmt("Store", "0x88, \\_GPE.DPT2");
-            stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
-                 (slot & 1) ? 0x10 : 0x01, slot & ~1);
-            pop_block();
-            push_block("Method", "_STA, 0");
-            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
-            stmt("Store", "0x89, \\_GPE.DPT2");
-            if ( slot & 1 )
-                stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
-            else
-                stmt("And", "\\_GPE.PH%02X, 0x0f, Local1", slot & ~1);
-            stmt("Return", "Local1"); /* IN status as the _STA */
-            pop_block();
-            pop_block();
-        }
-    } else {
-        stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
-        push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
-        indent(); printf("B0EJ, 32,\n");
-        pop_block();
+    stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
+    push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
+    indent(); printf("B0EJ, 32,\n");
+    pop_block();
 
-        /* hotplug_slot */
-        for (slot = 1; slot <= 31; slot++) {
-            push_block("Device", "S%i", slot); {
-                stmt("Name", "_ADR, %#06x0000", slot);
-                push_block("Method", "_EJ0,1"); {
-                    stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
-                    stmt("Return", "0x0");
-                } pop_block();
-                stmt("Name", "_SUN, %i", slot);
+    /* hotplug_slot */
+    for (slot = 1; slot <= 31; slot++) {
+        push_block("Device", "S%i", slot); {
+            stmt("Name", "_ADR, %#06x0000", slot);
+            push_block("Method", "_EJ0,1"); {
+                stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
+                stmt("Return", "0x0");
             } pop_block();
-        }
+            stmt("Name", "_SUN, %i", slot);
+        } pop_block();
     }
 
     pop_block();
@@ -389,26 +324,11 @@ int main(int argc, char **argv)
     /**** GPE start ****/
     push_block("Scope", "\\_GPE");
 
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        stmt("OperationRegion", "PHP, SystemIO, 0x10c0, 0x82");
-
-        push_block("Field", "PHP, ByteAcc, NoLock, Preserve");
-        indent(); printf("PSTA, 8,\n"); /* hotplug controller event reg */
-        indent(); printf("PSTB, 8,\n"); /* hotplug controller slot reg */
-        for ( slot = 0; slot < 0x100; slot += 2 )
-        {
-            indent();
-            /* Each hotplug control register manages a pair of pci functions. */
-            printf("PH%02X, 8,\n", slot);
-        }
-        pop_block();
-    } else {
-        stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
-        push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
-        indent(); printf("PCIU, 32,\n");
-        indent(); printf("PCID, 32,\n");
-        pop_block();
-    }
+    stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
+    push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
+    indent(); printf("PCIU, 32,\n");
+    indent(); printf("PCID, 32,\n");
+    pop_block();
 
     stmt("OperationRegion", "DG1, SystemIO, 0xb044, 0x04");
 
@@ -416,33 +336,16 @@ int main(int argc, char **argv)
     indent(); printf("DPT1, 8, DPT2, 8\n");
     pop_block();
 
-    if (dm_version == QEMU_XEN_TRADITIONAL) {
-        push_block("Method", "_L03, 0, Serialized");
-        /* Detect slot and event (remove/add). */
-        stmt("Name", "SLT, 0x0");
-        stmt("Name", "EVT, 0x0");
-        stmt("Store", "PSTA, Local1");
-        stmt("And", "Local1, 0xf, EVT");
-        stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
-        stmt("And", "Local1, 0xff, SLT");
-        /* Debug */
-        stmt("Store", "SLT, DPT1");
-        stmt("Store", "EVT, DPT2");
-        /* Decision tree */
-        decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
+    push_block("Method", "_E01");
+    for (slot = 1; slot <= 31; slot++) {
+        push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
+        stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
         pop_block();
-    } else {
-        push_block("Method", "_E01");
-        for (slot = 1; slot <= 31; slot++) {
-            push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
-            stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
-            pop_block();
-            push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
-            stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
-            pop_block();
-        }
+        push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
+        stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
         pop_block();
     }
+    pop_block();
 
     pop_block();
     /**** GPE end ****/
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c64d15a..c89068e 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1421,6 +1421,52 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
     return rc;
 }
 
+int xc_hvm_pci_hotplug_enable(xc_interface *xch,
+                              domid_t domid,
+                              uint32_t slot)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_pci_hotplug;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->enable = 1;
+    arg->slot = slot;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
+int xc_hvm_pci_hotplug_disable(xc_interface *xch,
+                               domid_t domid,
+                               uint32_t slot)
+{
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
+    int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
+
+    hypercall.op     = __HYPERVISOR_hvm_op;
+    hypercall.arg[0] = HVMOP_pci_hotplug;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = domid;
+    arg->enable = 0;
+    arg->slot = slot;
+    rc = do_xen_hypercall(xch, &hypercall);
+    xc_hypercall_buffer_free(xch, arg);
+    return rc;
+}
+
 int xc_domain_setdebugging(xc_interface *xch,
                            uint32_t domid,
                            unsigned int enable)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 142aaea..c3e35a9 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1842,6 +1842,17 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
 				domid_t domid,
 				ioservid_t id);
 
+/*
+ * PCI hotplug API
+ */
+int xc_hvm_pci_hotplug_enable(xc_interface *xch,
+			      domid_t domid,
+			      uint32_t slot);
+
+int xc_hvm_pci_hotplug_disable(xc_interface *xch,
+			       domid_t domid,
+			       uint32_t slot);
+
 /* HVM guest pass-through */
 int xc_assign_device(xc_interface *xch,
                      uint32_t domid,
diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
index 2e52470..4176440 100644
--- a/tools/libxl/libxl_pci.c
+++ b/tools/libxl/libxl_pci.c
@@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
         }
         if ( rc )
             return ERROR_FAIL;
+
+        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
+        if (rc < 0) {
+            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
+            return ERROR_FAIL;
+        }
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
@@ -1182,6 +1189,14 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
                                          NULL, NULL, NULL) < 0)
             goto out_fail;
 
+        rc = xc_hvm_pci_hotplug_disable(ctx->xch, domid, pcidev->dev);
+        if (rc < 0) {
+            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
+                             "Error: xc_hvm_pci_hotplug_disable failed");
+            rc = ERROR_FAIL;
+            goto out_fail;
+        }
+
         switch (libxl__device_model_version_running(gc, domid)) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             rc = qemu_pci_remove_xenstore(gc, domid, pcidev, force);
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..48efddb 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -3,6 +3,7 @@ subdir-y += vmx
 
 obj-y += asid.o
 obj-y += emulate.o
+obj-y += hotplug.o
 obj-y += hpet.o
 obj-y += hvm.o
 obj-y += i8254.o
diff --git a/xen/arch/x86/hvm/hotplug.c b/xen/arch/x86/hvm/hotplug.c
new file mode 100644
index 0000000..253d435
--- /dev/null
+++ b/xen/arch/x86/hvm/hotplug.c
@@ -0,0 +1,207 @@
+/*
+ * hvm/hotplug.c
+ *
+ * Copyright (c) 2013, Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/types.h>
+#include <xen/spinlock.h>
+#include <xen/xmalloc.h>
+#include <asm/hvm/io.h>
+#include <asm/hvm/support.h>
+
+#define SCI_IRQ 9
+
+#define GPE_BASE            (ACPI_GPE0_BLK_ADDRESS_V1)
+#define GPE_LEN             (ACPI_GPE0_BLK_LEN_V1)
+
+#define GPE_PCI_HOTPLUG_STATUS  2
+
+#define PCI_HOTPLUG_BASE    (ACPI_PCI_HOTPLUG_ADDRESS_V1)
+#define PCI_HOTPLUG_LEN     (ACPI_PCI_HOTPLUG_LEN_V1)
+
+#define PCI_UP      0
+#define PCI_DOWN    4
+#define PCI_EJECT   8
+
+static void gpe_update_sci(struct hvm_hotplug *hp)
+{
+    if ( (hp->gpe_sts[0] & hp->gpe_en[0]) & GPE_PCI_HOTPLUG_STATUS )
+        hvm_isa_irq_assert(hp->domain, SCI_IRQ);
+    else
+        hvm_isa_irq_deassert(hp->domain, SCI_IRQ);
+}
+
+static int handle_gpe_io(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    if ( bytes != 1 )
+    {
+        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
+        goto done;
+    }
+
+    port -= GPE_BASE;
+
+    if ( dir == IOREQ_READ )
+    {
+        if ( port < GPE_LEN / 2 )
+        {
+            *val = hp->gpe_sts[port];
+        }
+        else
+        {
+            port -= GPE_LEN / 2;
+            *val = hp->gpe_en[port];
+        }
+    } else {
+        if ( port < GPE_LEN / 2 )
+        {
+            hp->gpe_sts[port] &= ~*val;
+        }
+        else
+        {
+            port -= GPE_LEN / 2;
+            hp->gpe_en[port] = *val;
+        }
+
+        gpe_update_sci(hp);
+    }
+
+done:
+    return X86EMUL_OKAY;
+}
+
+static void pci_hotplug_eject(struct hvm_hotplug *hp, uint32_t mask)
+{
+    int slot = ffs(mask) - 1;
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, slot);
+
+    hp->slot_down &= ~(1u  << slot);
+    hp->slot_up &= ~(1u  << slot);
+}
+
+static int handle_pci_hotplug_io(
+    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    if ( bytes != 4 )
+    {
+        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
+        goto done;
+    }
+
+    port -= PCI_HOTPLUG_BASE;
+
+    if ( dir == IOREQ_READ )
+    {
+        switch ( port )
+        {
+        case PCI_UP:
+            *val = hp->slot_up;
+            break;
+        case PCI_DOWN:
+            *val = hp->slot_down;
+            break;
+        default:
+            break;
+        }
+    }
+    else
+    {
+        switch ( port )
+        {
+        case PCI_EJECT:
+            pci_hotplug_eject(hp, *val);
+            break;
+        default:
+            break;
+        }
+    }
+
+done:
+    return X86EMUL_OKAY;
+}
+
+void pci_hotplug(struct domain *d, int slot, bool_t enable)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    gdprintk(XENLOG_INFO, "%s: %s %d\n", __func__,
+             ( enable ) ? "enable" : "disable", slot);
+
+    if ( enable )
+        hp->slot_up |= (1u << slot);
+    else
+        hp->slot_down |= (1u << slot);
+
+    hp->gpe_sts[0] |= GPE_PCI_HOTPLUG_STATUS;
+    gpe_update_sci(hp);
+}
+
+int gpe_init(struct domain *d)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    hp->domain = d;
+
+    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);
+    if ( hp->gpe_sts == NULL )
+        goto fail1;
+
+    hp->gpe_en = xzalloc_array(uint8_t, GPE_LEN / 2);
+    if ( hp->gpe_en == NULL )
+        goto fail2;
+
+    register_portio_handler(d, GPE_BASE, GPE_LEN, handle_gpe_io);
+    register_portio_handler(d, PCI_HOTPLUG_BASE, PCI_HOTPLUG_LEN,
+                            handle_pci_hotplug_io);
+
+    return 0;
+
+fail2:
+    xfree(hp->gpe_sts);
+
+fail1:
+    return -ENOMEM;
+}
+
+void gpe_deinit(struct domain *d)
+{
+    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
+
+    xfree(hp->gpe_en);
+    xfree(hp->gpe_sts);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * c-tab-always-indent: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5f9e728..ff7b259 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1298,15 +1298,21 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
+    rc = gpe_init(d);
+    if ( rc != 0 )
+        goto fail2;
+
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail2;
+        goto fail3;
 
     return 0;
 
+ fail3:
+    gpe_deinit(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -1352,6 +1358,7 @@ void hvm_domain_destroy(struct domain *d)
         return;
 
     hvm_funcs.domain_destroy(d);
+    gpe_deinit(d);
     rtc_deinit(d);
     stdvga_deinit(d);
     vioapic_deinit(d);
@@ -5015,6 +5022,32 @@ out:
     return rc;
 }
 
+static int hvmop_pci_hotplug(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_pci_hotplug_t) uop)
+{
+    xen_hvm_pci_hotplug_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    pci_hotplug(d, op.slot, op.enable);
+    rc = 0;
+
+out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
 {
@@ -5058,6 +5091,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
         break;
     
+    case HVMOP_pci_hotplug:
+        rc = hvmop_pci_hotplug(
+            guest_handle_cast(arg, xen_hvm_pci_hotplug_t));
+        break;
+
     case HVMOP_set_param:
     case HVMOP_get_param:
     {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 93dcec1..13dd24d 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -66,6 +66,16 @@ struct hvm_ioreq_server {
     struct hvm_pcidev      *pcidev_list;
 };
 
+struct hvm_hotplug {
+    struct domain   *domain;
+    uint8_t         *gpe_sts;
+    uint8_t         *gpe_en;
+
+    /* PCI hotplug */
+    uint32_t        slot_up;
+    uint32_t        slot_down;
+};
+
 struct hvm_domain {
     struct list_head        ioreq_server_list;
     spinlock_t              ioreq_server_lock;
@@ -73,6 +83,8 @@ struct hvm_domain {
     uint32_t                pci_cf8;
     spinlock_t              pci_lock;
 
+    struct hvm_hotplug      hotplug;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 86db58d..072bfe7 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
 void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
+
+int gpe_init(struct domain *d);
+void gpe_deinit(struct domain *d);
+
+void pci_hotplug(struct domain *d, int slot, bool_t enable);
+
 #endif /* __ASM_X86_HVM_IO_H__ */
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 6b31189..20a53ab 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
 typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
 
+#define HVMOP_pci_hotplug 24
+struct xen_hvm_pci_hotplug {
+    domid_t domid;          /* IN - domain to be serviced */
+    uint8_t enable;         /* IN - enable or disable? */
+    uint32_t slot;          /* IN - slot to enable/disable */
+};
+typedef struct xen_hvm_pci_hotplug xen_hvm_pci_hotplug_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_pci_hotplug_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
index e84fa75..40bfa61 100644
--- a/xen/include/public/hvm/ioreq.h
+++ b/xen/include/public/hvm/ioreq.h
@@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
 #define ACPI_PM_TMR_BLK_ADDRESS_V1   (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
 #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
 #define ACPI_GPE0_BLK_LEN_V1         0x04
+#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
+#define ACPI_PCI_HOTPLUG_LEN_V1      0x10
 
 /* Compatibility definitions for the default location (version 0). */
 #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sTl-0000z0-8u; Thu, 30 Jan 2014 14:20:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sTj-0000xe-4h
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:20:27 +0000
Received: from [85.158.139.211:60132] by server-16.bemta-5.messagelabs.com id
	15/C5-05060-8AF5AE25; Thu, 30 Jan 2014 14:20:24 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391091619!611812!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11239 invoked from network); 30 Jan 2014 14:20:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:20:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96136117"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:19:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:19:54 -0500
Received: from etemp.uk.xensource.com ([10.80.228.66]
	helo=etemp.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <paul.durrant@citrix.com>)	id 1W8sTC-0006GQ-VD;
	Thu, 30 Jan 2014 14:19:54 +0000
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:48 +0000
Message-ID: <1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
	ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch only creates the ioreq server when the legacy HVM parameters
are touched by an emulator. It also lays some groundwork for supporting
multiple IOREQ servers. For instance, it introduces ioreq server reference
counting, which is not strictly necessary at this stage but will become so
when ioreq servers can be destroyed prior to the domain dying.

There is a significant change in the layout of the special pages reserved
in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards without
moving pages such as the xenstore page when building a domain that can
support more than one emulator.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 tools/libxc/xc_hvm_build_x86.c   |   41 ++--
 xen/arch/x86/hvm/hvm.c           |  409 ++++++++++++++++++++++++++------------
 xen/include/asm-x86/hvm/domain.h |    3 +-
 3 files changed, 314 insertions(+), 139 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..f24f2a1 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -41,13 +41,12 @@
 #define SPECIALPAGE_PAGING   0
 #define SPECIALPAGE_ACCESS   1
 #define SPECIALPAGE_SHARING  2
-#define SPECIALPAGE_BUFIOREQ 3
-#define SPECIALPAGE_XENSTORE 4
-#define SPECIALPAGE_IOREQ    5
-#define SPECIALPAGE_IDENT_PT 6
-#define SPECIALPAGE_CONSOLE  7
-#define NR_SPECIAL_PAGES     8
-#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
+#define SPECIALPAGE_XENSTORE 3
+#define SPECIALPAGE_IDENT_PT 4
+#define SPECIALPAGE_CONSOLE  5
+#define SPECIALPAGE_IOREQ    6
+#define NR_SPECIAL_PAGES     (SPECIALPAGE_IOREQ + 2) /* ioreq server needs 2 pages */
+#define special_pfn(x) (0xff000u - (x))
 
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
@@ -112,7 +111,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
     /* Memory parameters. */
     hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
-    hvm_info->reserved_mem_pgstart = special_pfn(0);
+    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
 
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
@@ -463,6 +462,24 @@ static int setup_guest(xc_interface *xch,
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Allocate and clear special pages. */
+
+     DPRINTF("%d SPECIAL PAGES:\n"
+            "  PAGING:    %"PRI_xen_pfn"\n"
+            "  ACCESS:    %"PRI_xen_pfn"\n"
+            "  SHARING:   %"PRI_xen_pfn"\n"
+            "  STORE:     %"PRI_xen_pfn"\n"
+            "  IDENT_PT:  %"PRI_xen_pfn"\n"
+            "  CONSOLE:   %"PRI_xen_pfn"\n"
+            "  IOREQ:     %"PRI_xen_pfn"\n",
+            NR_SPECIAL_PAGES,
+            (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
+            (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
+
     for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
     {
         xen_pfn_t pfn = special_pfn(i);
@@ -478,10 +495,6 @@ static int setup_guest(xc_interface *xch,
 
     xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
                      special_pfn(SPECIALPAGE_XENSTORE));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
-                     special_pfn(SPECIALPAGE_BUFIOREQ));
-    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
-                     special_pfn(SPECIALPAGE_IOREQ));
     xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
                      special_pfn(SPECIALPAGE_CONSOLE));
     xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
@@ -490,6 +503,10 @@ static int setup_guest(xc_interface *xch,
                      special_pfn(SPECIALPAGE_ACCESS));
     xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
                      special_pfn(SPECIALPAGE_SHARING));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
+                     special_pfn(SPECIALPAGE_IOREQ));
+    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
+                     special_pfn(SPECIALPAGE_IOREQ) - 1);
 
     /*
      * Identity-map page table is required for running with CR0.PG=0 when
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a0eaadb..d9874fb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -352,24 +352,9 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
     return &p->vcpu_ioreq[id];
 }
 
-void hvm_do_resume(struct vcpu *v)
+static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
-    ioreq_t *p;
-
-    check_wakeup_from_wait();
-
-    if ( is_hvm_vcpu(v) )
-        pt_restore_timer(v);
-
-    s = v->arch.hvm_vcpu.ioreq_server;
-    v->arch.hvm_vcpu.ioreq_server = NULL;
-
-    if ( !s )
-        goto check_inject_trap;
-
     /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
-    p = get_ioreq(s, v->vcpu_id);
     while ( p->state != STATE_IOREQ_NONE )
     {
         switch ( p->state )
@@ -385,12 +370,32 @@ void hvm_do_resume(struct vcpu *v)
             break;
         default:
             gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
-            domain_crash(v->domain);
+            domain_crash(d);
             return; /* bail */
         }
     }
+}
+
+void hvm_do_resume(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+
+    check_wakeup_from_wait();
+
+    if ( is_hvm_vcpu(v) )
+        pt_restore_timer(v);
+
+    s = v->arch.hvm_vcpu.ioreq_server;
+    v->arch.hvm_vcpu.ioreq_server = NULL;
+
+    if ( s )
+    {
+        ioreq_t *p = get_ioreq(s, v->vcpu_id);
+
+        hvm_wait_on_io(d, p);
+    }
 
- check_inject_trap:
     /* Inject pending hw/sw trap */
     if ( v->arch.hvm_vcpu.inject_trap.vector != -1 ) 
     {
@@ -399,11 +404,13 @@ void hvm_do_resume(struct vcpu *v)
     }
 }
 
-static void hvm_init_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, int buf)
 {
+    struct hvm_ioreq_page *iorp;
+
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
+
     spin_lock_init(&iorp->lock);
-    domain_pause(d);
 }
 
 void destroy_ring_for_helper(
@@ -419,16 +426,13 @@ void destroy_ring_for_helper(
     }
 }
 
-static void hvm_destroy_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, int buf)
 {
-    spin_lock(&iorp->lock);
+    struct hvm_ioreq_page *iorp;
 
-    ASSERT(d->is_dying);
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
 
     destroy_ring_for_helper(&iorp->va, iorp->page);
-
-    spin_unlock(&iorp->lock);
 }
 
 int prepare_ring_for_helper(
@@ -476,8 +480,10 @@ int prepare_ring_for_helper(
 }
 
 static int hvm_set_ioreq_page(
-    struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
+    struct hvm_ioreq_server *s, int buf, unsigned long gmfn)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp;
     struct page_info *page;
     void *va;
     int rc;
@@ -485,22 +491,17 @@ static int hvm_set_ioreq_page(
     if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
         return rc;
 
-    spin_lock(&iorp->lock);
+    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
 
     if ( (iorp->va != NULL) || d->is_dying )
     {
-        destroy_ring_for_helper(&iorp->va, iorp->page);
-        spin_unlock(&iorp->lock);
+        destroy_ring_for_helper(&va, page);
         return -EINVAL;
     }
 
     iorp->va = va;
     iorp->page = page;
 
-    spin_unlock(&iorp->lock);
-
-    domain_unpause(d);
-
     return 0;
 }
 
@@ -544,38 +545,6 @@ static int handle_pvh_io(
     return X86EMUL_OKAY;
 }
 
-static int hvm_init_ioreq_server(struct domain *d)
-{
-    struct hvm_ioreq_server *s;
-    int i;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    s->domain = d;
-
-    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
-        s->ioreq_evtchn[i] = -1;
-    s->buf_ioreq_evtchn = -1;
-
-    hvm_init_ioreq_page(d, &s->ioreq);
-    hvm_init_ioreq_page(d, &s->buf_ioreq);
-
-    d->arch.hvm_domain.ioreq_server = s;
-    return 0;
-}
-
-static void hvm_deinit_ioreq_server(struct domain *d)
-{
-    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
-
-    hvm_destroy_ioreq_page(d, &s->ioreq);
-    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
-
-    xfree(s);
-}
-
 static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->domain;
@@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
     }
 }
 
+static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
+{
+    struct hvm_ioreq_server *s;
+    int i;
+    unsigned long pfn;
+    struct vcpu *v;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    rc = -EEXIST;
+    if ( d->arch.hvm_domain.ioreq_server != NULL )
+        goto fail_exist;
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+
+    rc = -ENOMEM;
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        goto fail_alloc;
+
+    s->domain = d;
+    s->domid = domid;
+
+    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
+        s->ioreq_evtchn[i] = -1;
+    s->buf_ioreq_evtchn = -1;
+
+    /* Initialize shared pages */
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+
+    hvm_init_ioreq_page(s, 0);
+    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
+        goto fail_set_ioreq;
+
+    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+
+    hvm_init_ioreq_page(s, 1);
+    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
+        goto fail_set_buf_ioreq;
+
+    for_each_vcpu ( d, v )
+    {
+        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
+            goto fail_add_vcpu;
+    }
+
+    d->arch.hvm_domain.ioreq_server = s;
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return 0;
+
+fail_add_vcpu:
+    for_each_vcpu ( d, v )
+        hvm_ioreq_server_remove_vcpu(s, v);
+    hvm_destroy_ioreq_page(s, 1);
+fail_set_buf_ioreq:
+    hvm_destroy_ioreq_page(s, 0);
+fail_set_ioreq:
+    xfree(s);
+fail_alloc:
+fail_exist:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+    return rc;
+}
+
+static void hvm_destroy_ioreq_server(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    struct vcpu *v;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
+
+    s = d->arch.hvm_domain.ioreq_server;
+    if ( !s )
+        goto done;
+
+    domain_pause(d);
+
+    d->arch.hvm_domain.ioreq_server = NULL;
+
+    for_each_vcpu ( d, v )
+        hvm_ioreq_server_remove_vcpu(s, v);
+
+    hvm_destroy_ioreq_page(s, 1);
+    hvm_destroy_ioreq_page(s, 0);
+
+    xfree(s);
+
+    domain_unpause(d);
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+}
+
+static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    *port = s->buf_ioreq_evtchn;
+    rc = 0;
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
+static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    if ( buf )
+        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
+    else
+        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+
+    rc = 0;
+
+done:
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
 static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
                                      int *p_port)
 {
@@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
     return 0;
 }
 
-static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
+static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
 {
-    struct domain *d = s->domain;
+    struct hvm_ioreq_server *s;
     struct vcpu *v;
     int rc = 0;
 
     domain_pause(d);
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    s = d->arch.hvm_domain.ioreq_server;
+
+    rc = -ENOENT;
+    if ( !s )
+        goto done;
+
+    rc = 0;
+    if ( s->domid == domid )
+        goto done;
 
     if ( d->vcpu[0] )
     {
@@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
 
 done:
     domain_unpause(d);
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
     return rc;
 }
 
-static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
-{
-    struct domain *d = s->domain;
-    int rc;
-
-    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
-    if ( rc < 0 )
-        return rc;
-
-    hvm_update_ioreq_server_evtchn(s);
-
-    return 0;
-}
-
-static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
-{
-    struct domain *d = s->domain;
-
-    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
-}
-
 int hvm_domain_initialise(struct domain *d)
 {
     int rc;
@@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
 
     }
 
+    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
     spin_lock_init(&d->arch.hvm_domain.irq_lock);
     spin_lock_init(&d->arch.hvm_domain.uc_lock);
 
@@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
 
     rtc_init(d);
 
-    rc = hvm_init_ioreq_server(d);
-    if ( rc != 0 )
-        goto fail2;
-
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
-        goto fail3;
+        goto fail2;
 
     return 0;
 
- fail3:
-    hvm_deinit_ioreq_server(d);
  fail2:
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
-    hvm_deinit_ioreq_server(d);
+    hvm_destroy_ioreq_server(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
          && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
         goto fail5;
 
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
     s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    rc = hvm_ioreq_server_add_vcpu(s, v);
-    if ( rc < 0 )
-        goto fail6;
+    if ( s )
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc < 0 )
+            goto fail6;
+    }
 
     if ( v->vcpu_id == 0 )
     {
@@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
 void hvm_vcpu_destroy(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
+    struct hvm_ioreq_server *s;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
 
-    hvm_ioreq_server_remove_vcpu(s, v);
+    if ( s )
+        hvm_ioreq_server_remove_vcpu(s, v);
 
     nestedhvm_vcpu_destroy(v);
 
@@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
     /* Ensure buffered_iopage fits in a page */
     BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
 
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
     s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
     if ( !s )
         return 0;
 
@@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
     return 1;
 }
 
-bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
+static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
+                                            struct vcpu *v,
+                                            ioreq_t *proto_p)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    ioreq_t *p;
-
-    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
-        return 0; /* implicitly bins the i/o operation */
-
-    s = d->arch.hvm_domain.ioreq_server;
-    if ( !s )
-        return 0;
-
-    p = get_ioreq(s, v->vcpu_id);
+    ioreq_t *p = get_ioreq(s, v->vcpu_id);
 
     if ( unlikely(p->state != STATE_IOREQ_NONE) )
     {
@@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
     return 1;
 }
 
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+
+    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
+
+    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
+        return 0;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
+    s = d->arch.hvm_domain.ioreq_server;
+    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
+
+    if ( !s )
+        return 0;
+
+    return hvm_send_assist_req_to_server(s, v, p);
+}
+
 void hvm_hlt(unsigned long rflags)
 {
     struct vcpu *curr = current;
@@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     case HVMOP_get_param:
     {
         struct xen_hvm_param a;
-        struct hvm_ioreq_server *s;
         struct domain *d;
         struct vcpu *v;
 
@@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( rc )
             goto param_fail;
 
-        s = d->arch.hvm_domain.ioreq_server;
-
         if ( op == HVMOP_set_param )
         {
             rc = 0;
 
             switch ( a.index )
             {
-            case HVM_PARAM_IOREQ_PFN:
-                rc = hvm_set_ioreq_server_pfn(s, a.value);
-                break;
-            case HVM_PARAM_BUFIOREQ_PFN: 
-                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
-                break;
             case HVM_PARAM_CALLBACK_IRQ:
                 hvm_set_callback_via(d, a.value);
                 hvm_latch_shinfo_size(d);
@@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
                 if ( a.value == DOMID_SELF )
                     a.value = curr_d->domain_id;
 
-                rc = hvm_set_ioreq_server_domid(s, a.value);
+                rc = hvm_create_ioreq_server(d, a.value);
+                if ( rc == -EEXIST )
+                    rc = hvm_set_ioreq_server_domid(d, a.value);
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 /* Not reflexive, as we must domain_pause(). */
@@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         {
             switch ( a.index )
             {
+            case HVM_PARAM_IOREQ_PFN:
+            case HVM_PARAM_BUFIOREQ_PFN:
             case HVM_PARAM_BUFIOREQ_EVTCHN:
-                a.value = s->buf_ioreq_evtchn;
+                /* May need to create server */
+                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
+                if ( rc != 0 && rc != -EEXIST )
+                    goto param_fail;
+
+                switch ( a.index )
+                {
+                case HVM_PARAM_IOREQ_PFN: {
+                    xen_pfn_t pfn;
+
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
+                        goto param_fail;
+
+                    a.value = pfn;
+                    break;
+                }
+                case HVM_PARAM_BUFIOREQ_PFN: {
+                    xen_pfn_t pfn;
+
+                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
+                        goto param_fail;
+
+                    a.value = pfn;
+                    break;
+                }
+                case HVM_PARAM_BUFIOREQ_EVTCHN: {
+                    evtchn_port_t port;
+
+                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
+                        goto param_fail;
+
+                    a.value = port;
+                    break;
+                }
+                default:
+                    BUG();
+                }
                 break;
             case HVM_PARAM_ACPI_S_STATE:
                 a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 4c039f8..e750ef0 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,6 +52,8 @@ struct hvm_ioreq_server {
 
 struct hvm_domain {
     struct hvm_ioreq_server *ioreq_server;
+    spinlock_t              ioreq_server_lock;
+
     struct pl_time         pl_time;
 
     struct hvm_io_handler *io_handler;
@@ -106,4 +108,3 @@ struct hvm_domain {
 #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
-
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:20:35 2014
From: Paul Durrant <paul.durrant@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 30 Jan 2014 14:19:46 +0000
Message-ID: <1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Cc: Paul Durrant <paul.durrant@citrix.com>
Subject: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
	ioreq structures

To simplify creation of the ioreq server abstraction in a
subsequent patch, this patch centralizes all use of the shared
ioreq structure and the buffered ioreq ring to the source module
xen/arch/x86/hvm/hvm.c.

Also, re-work hvm_send_assist_req() slightly to complete IO
immediately in the case where there is no emulator (i.e. the shared
IOREQ ring has not been set). This should handle the case currently
covered by has_dm in hvmemul_do_io().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 xen/arch/x86/hvm/emulate.c        |   40 +++------------
 xen/arch/x86/hvm/hvm.c            |   98 ++++++++++++++++++++++++++++++++++++-
 xen/arch/x86/hvm/io.c             |   94 +----------------------------------
 xen/include/asm-x86/hvm/hvm.h     |    3 +-
 xen/include/asm-x86/hvm/support.h |    9 ----
 5 files changed, 108 insertions(+), 136 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 868aa1d..d1d3a6f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -57,24 +57,11 @@ static int hvmemul_do_io(
     int value_is_ptr = (p_data == NULL);
     struct vcpu *curr = current;
     struct hvm_vcpu_io *vio;
-    ioreq_t *p = get_ioreq(curr);
-    ioreq_t _ioreq;
+    ioreq_t p[1];
     unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
     p2m_type_t p2mt;
     struct page_info *ram_page;
     int rc;
-    bool_t has_dm = 1;
-
-    /*
-     * Domains without a backing DM, don't have an ioreq page.  Just
-     * point to a struct on the stack, initialising the state as needed.
-     */
-    if ( !p )
-    {
-        has_dm = 0;
-        p = &_ioreq;
-        p->state = STATE_IOREQ_NONE;
-    }
 
     /* Check for paged out page */
     ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
@@ -173,15 +160,6 @@ static int hvmemul_do_io(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    if ( p->state != STATE_IOREQ_NONE )
-    {
-        gdprintk(XENLOG_WARNING, "WARNING: io already pending (%d)?\n",
-                 p->state);
-        if ( ram_page )
-            put_page(ram_page);
-        return X86EMUL_UNHANDLEABLE;
-    }
-
     vio->io_state =
         (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
     vio->io_size = size;
@@ -193,6 +171,7 @@ static int hvmemul_do_io(
     if ( vio->mmio_retrying )
         *reps = 1;
 
+    p->state = STATE_IOREQ_NONE;
     p->dir = dir;
     p->data_is_ptr = value_is_ptr;
     p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
@@ -232,20 +211,15 @@ static int hvmemul_do_io(
             vio->io_state = HVMIO_handle_mmio_awaiting_completion;
         break;
     case X86EMUL_UNHANDLEABLE:
-        /* If there is no backing DM, just ignore accesses */
-        if ( !has_dm )
+        rc = X86EMUL_RETRY;
+        if ( !hvm_send_assist_req(curr, p) )
         {
             rc = X86EMUL_OKAY;
             vio->io_state = HVMIO_none;
         }
-        else
-        {
-            rc = X86EMUL_RETRY;
-            if ( !hvm_send_assist_req(curr) )
-                vio->io_state = HVMIO_none;
-            else if ( p_data == NULL )
-                rc = X86EMUL_OKAY;
-        }
+        else if ( p_data == NULL )
+            rc = X86EMUL_OKAY;
+
         break;
     default:
         BUG();
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..71a44db 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
     spin_unlock(&d->event_lock);
 }
 
+static ioreq_t *get_ioreq(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
+    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
+    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
+}
+
 void hvm_do_resume(struct vcpu *v)
 {
     ioreq_t *p;
@@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
     }
 }
 
-bool_t hvm_send_assist_req(struct vcpu *v)
+int hvm_buffered_io_send(ioreq_t *p)
+{
+    struct vcpu *v = current;
+    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
+    buffered_iopage_t *pg = iorp->va;
+    buf_ioreq_t bp;
+    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
+    int qw = 0;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    bp.type = p->type;
+    bp.dir  = p->dir;
+    switch ( p->size )
+    {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
+        return 0;
+    }
+    
+    bp.data = p->data;
+    bp.addr = p->addr;
+    
+    spin_lock(&iorp->lock);
+
+    if ( (pg->write_pointer - pg->read_pointer) >=
+         (IOREQ_BUFFER_SLOT_NUM - qw) )
+    {
+        /* The queue is full: send the iopacket through the normal path. */
+        spin_unlock(&iorp->lock);
+        return 0;
+    }
+    
+    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
+           &bp, sizeof(bp));
+    
+    if ( qw )
+    {
+        bp.data = p->data >> 32;
+        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
+               &bp, sizeof(bp));
+    }
+
+    /* Make the ioreq_t visible /before/ write_pointer. */
+    wmb();
+    pg->write_pointer += qw ? 2 : 1;
+
+    notify_via_xen_event_channel(v->domain,
+            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+    spin_unlock(&iorp->lock);
+    
+    return 1;
+}
+
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
 {
     ioreq_t *p;
 
@@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
         return 0;
     }
 
+    p->dir = proto_p->dir;
+    p->data_is_ptr = proto_p->data_is_ptr;
+    p->type = proto_p->type;
+    p->size = proto_p->size;
+    p->addr = proto_p->addr;
+    p->count = proto_p->count;
+    p->df = proto_p->df;
+    p->data = proto_p->data;
+
     prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
 
     /*
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..576641c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -46,85 +46,6 @@
 #include <xen/iocap.h>
 #include <public/hvm/ioreq.h>
 
-int hvm_buffered_io_send(ioreq_t *p)
-{
-    struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
-    buf_ioreq_t bp;
-    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
-    int qw = 0;
-
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
-
-    bp.type = p->type;
-    bp.dir  = p->dir;
-    switch ( p->size )
-    {
-    case 1:
-        bp.size = 0;
-        break;
-    case 2:
-        bp.size = 1;
-        break;
-    case 4:
-        bp.size = 2;
-        break;
-    case 8:
-        bp.size = 3;
-        qw = 1;
-        break;
-    default:
-        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return 0;
-    }
-    
-    bp.data = p->data;
-    bp.addr = p->addr;
-    
-    spin_lock(&iorp->lock);
-
-    if ( (pg->write_pointer - pg->read_pointer) >=
-         (IOREQ_BUFFER_SLOT_NUM - qw) )
-    {
-        /* The queue is full: send the iopacket through the normal path. */
-        spin_unlock(&iorp->lock);
-        return 0;
-    }
-    
-    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
-           &bp, sizeof(bp));
-    
-    if ( qw )
-    {
-        bp.data = p->data >> 32;
-        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
-               &bp, sizeof(bp));
-    }
-
-    /* Make the ioreq_t visible /before/ write_pointer. */
-    wmb();
-    pg->write_pointer += qw ? 2 : 1;
-
-    notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
-    spin_unlock(&iorp->lock);
-    
-    return 1;
-}
-
 void send_timeoffset_req(unsigned long timeoff)
 {
     ioreq_t p[1];
@@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
 void send_invalidate_req(void)
 {
     struct vcpu *v = current;
-    ioreq_t *p = get_ioreq(v);
-
-    if ( !p )
-        return;
-
-    if ( p->state != STATE_IOREQ_NONE )
-    {
-        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with something "
-                 "already pending (%d)?\n", p->state);
-        domain_crash(v->domain);
-        return;
-    }
+    ioreq_t p[1];
 
     p->type = IOREQ_TYPE_INVALIDATE;
     p->size = 4;
     p->dir = IOREQ_WRITE;
     p->data = ~0UL; /* flush all */
 
-    (void)hvm_send_assist_req(v);
+    (void)hvm_send_assist_req(v, p);
 }
 
 int handle_mmio(void)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index ccca5df..4e8fee8 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -26,6 +26,7 @@
 #include <asm/hvm/asid.h>
 #include <public/domctl.h>
 #include <public/hvm/save.h>
+#include <public/hvm/ioreq.h>
 #include <asm/mm.h>
 
 /* Interrupt acknowledgement sources. */
@@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
                             struct page_info **_page, void **_va);
 void destroy_ring_for_helper(void **_va, struct page_info *page);
 
-bool_t hvm_send_assist_req(struct vcpu *v);
+bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
 
 void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
 int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index 3529499..b6af3c5 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -22,19 +22,10 @@
 #define __ASM_X86_HVM_SUPPORT_H__
 
 #include <xen/types.h>
-#include <public/hvm/ioreq.h>
 #include <xen/sched.h>
 #include <xen/hvm/save.h>
 #include <asm/processor.h>
 
-static inline ioreq_t *get_ioreq(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
-    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
-    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
-}
-
 #define HVM_DELIVER_NO_ERROR_CODE  -1
 
 #ifndef NDEBUG
-- 
1.7.10.4
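
The two-slot (`qw`) handling that the patch moves verbatim into hvm.c can be modelled as follows; this is a hypothetical sketch with invented names (`buf_ring_t`, `buf_send`, `SLOT_NUM`), not the Xen `buffered_iopage_t` layout. It shows why an 8-byte request needs `SLOT_NUM - 1` free slots and how the data is split across two consecutive entries before the write pointer is bumped.

```c
#include <stdint.h>
#include <assert.h>

#define SLOT_NUM 8  /* illustrative; the real IOREQ_BUFFER_SLOT_NUM differs */

typedef struct { uint32_t size; uint64_t data; } buf_slot_t;
typedef struct {
    uint32_t read_pointer, write_pointer;
    buf_slot_t slot[SLOT_NUM];
} buf_ring_t;

/* Returns 1 on success, 0 if the request cannot be buffered (ring full),
 * mirroring the queue-full fallback in hvm_buffered_io_send(). */
static int buf_send(buf_ring_t *pg, uint32_t size, uint64_t data)
{
    int qw = (size == 8);  /* 64-bit data needs two consecutive slots */

    if ( (pg->write_pointer - pg->read_pointer) >= (uint32_t)(SLOT_NUM - qw) )
        return 0;  /* full: caller falls back to the synchronous path */

    pg->slot[pg->write_pointer % SLOT_NUM].size = size;
    pg->slot[pg->write_pointer % SLOT_NUM].data = (uint32_t)data;
    if ( qw )
    {
        pg->slot[(pg->write_pointer + 1) % SLOT_NUM].size = size;
        pg->slot[(pg->write_pointer + 1) % SLOT_NUM].data = data >> 32;
    }

    /* The real code issues wmb() here so the slots are visible to the
     * consumer before the pointer moves. */
    pg->write_pointer += qw ? 2 : 1;
    return 1;
}
```

Because pointers only grow (modulo arithmetic happens at indexing time), the fullness test is a simple unsigned subtraction, and a two-slot request is refused one slot earlier than a one-slot request.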



From xen-devel-bounces@lists.xen.org Thu Jan 30 14:23:59 2014
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] [RFC PATCH 1/5] Support for running secondary
	emulators
Thread-Index: AQHPHca4oYe/WApr9kCTDRHl4nTDRpqdUg2Q
Date: Thu, 30 Jan 2014 14:23:47 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02176D2@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
Subject: Re: [Xen-devel] [RFC PATCH 1/5] Support for running secondary
 emulators
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-
> bounces@lists.xen.org] On Behalf Of Paul Durrant
> Sent: 30 January 2014 14:20
> To: xen-devel@lists.xen.org
> Subject: [Xen-devel] [RFC PATCH 1/5] Support for running secondary
> emulators
> 

That was, of course, supposed to read RFC PATCH 0/5.

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sYY-0001oJ-Bb; Thu, 30 Jan 2014 14:25:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8sYW-0001o5-EI
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 14:25:24 +0000
Received: from [85.158.139.211:50749] by server-10.bemta-5.messagelabs.com id
	E2/F0-08578-3D06AE25; Thu, 30 Jan 2014 14:25:23 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391091921!606570!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5804 invoked from network); 30 Jan 2014 14:25:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:25:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96138016"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:24:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:24:50 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8sXy-0006Kg-8C;
	Thu, 30 Jan 2014 14:24:50 +0000
Date: Thu, 30 Jan 2014 14:24:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony Liguori <anthony@codemonkey.ws>
Message-ID: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	qemu-stable@nongnu.org, qemu-devel@nongnu.org,
	Anthony.Perard@citrix.com, Paolo Bonzini <pbonzini@redhat.com>
Subject: [Xen-devel] [PULL 0/1] xen-140130
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The following changes since commit 0169c511554cb0014a00290b0d3d26c31a49818f:

  Merge remote-tracking branch 'qemu-kvm/uq/master' into staging (2014-01-24 15:52:44 -0800)

are available in the git repository at:


  git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-140130

for you to fetch changes up to 360e607b88a23d378f6efaa769c76d26f538234d:

  address_space_translate: do not cross page boundaries (2014-01-30 14:20:45 +0000)

----------------------------------------------------------------
Stefano Stabellini (1):
      address_space_translate: do not cross page boundaries

 exec.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:26:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sZg-0001xJ-Qc; Thu, 30 Jan 2014 14:26:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8sZg-0001x7-2g
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 14:26:36 +0000
Received: from [193.109.254.147:63847] by server-9.bemta-14.messagelabs.com id
	3F/E8-24895-B116AE25; Thu, 30 Jan 2014 14:26:35 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391091938!902035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18239 invoked from network); 30 Jan 2014 14:25:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:25:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96138351"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:25:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:25:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8sYZ-0006LU-14;
	Thu, 30 Jan 2014 14:25:27 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <anthony@codemonkey.ws>
Date: Thu, 30 Jan 2014 14:25:21 +0000
Message-ID: <1391091921-7306-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-stable@nongnu.org, qemu-devel@nongnu.org,
	Anthony Perard <anthony.perard@citrix.com>, pbonzini@redhat.com
Subject: [Xen-devel] [PULL 1/1] address_space_translate: do not cross page
	boundaries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>

The following commit:

commit 149f54b53b7666a3facd45e86eece60ce7d3b114
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Fri May 24 12:59:37 2013 +0200

    memory: add address_space_translate

breaks Xen support in QEMU, in particular the Xen mapcache. The effect
is that roughly one Windows XP installation in ten would end up with a BSOD.

The reason is that, after this commit, the length l in address_space_rw
can span a page boundary; however, qemu_get_ram_ptr still calls
xen_map_cache asking it to map a single page (if block->offset == 0).

Fix the issue by reverting to the previous behaviour: do not return a
length from address_space_translate_internal that can span a page
boundary.

Also in address_space_translate do not ignore the length returned by
address_space_translate_internal.
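The page-boundary clamp that the fix below reinstates can be sketched in a
few lines of standalone C (an illustration only, not the exact QEMU code;
a 4 KiB TARGET_PAGE_SIZE is assumed here, and the helper name is made up):

```c
#include <stdint.h>

#define TARGET_PAGE_SIZE 4096ULL
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

/* Clamp len so that [addr, addr + len) never crosses a page boundary,
 * mirroring the diff_page computation added by the patch below. */
static uint64_t clamp_to_page(uint64_t addr, uint64_t len)
{
    /* Bytes remaining from addr to the end of its page. */
    uint64_t diff_page = ((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr;

    return len < diff_page ? len : diff_page;
}
```

With this clamp, a 10-byte access starting 2 bytes before a page boundary
is truncated to 2 bytes, so callers such as the Xen mapcache only ever see
single-page ranges.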

This patch should be backported to QEMU 1.6.x.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Anthony Perard <anthony.perard@citrix.com>
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org
---
 exec.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/exec.c b/exec.c
index 2435d9e..9ad0a4b 100644
--- a/exec.c
+++ b/exec.c
@@ -325,7 +325,7 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
                                  hwaddr *plen, bool resolve_subpage)
 {
     MemoryRegionSection *section;
-    Int128 diff;
+    Int128 diff, diff_page;
 
     section = address_space_lookup_region(d, addr, resolve_subpage);
     /* Compute offset within MemoryRegionSection */
@@ -334,7 +334,9 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
     /* Compute offset within MemoryRegion */
     *xlat = addr + section->offset_within_region;
 
+    diff_page = int128_make64(((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr);
     diff = int128_sub(section->mr->size, int128_make64(addr));
+    diff = int128_min(diff, diff_page);
     *plen = int128_get64(int128_min(diff, int128_make64(*plen)));
     return section;
 }
@@ -349,7 +351,7 @@ MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
     hwaddr len = *plen;
 
     for (;;) {
-        section = address_space_translate_internal(as->dispatch, addr, &addr, plen, true);
+        section = address_space_translate_internal(as->dispatch, addr, &addr, &len, true);
         mr = section->mr;
 
         if (!mr->iommu_ops) {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:32:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sf7-0002Sh-Ke; Thu, 30 Jan 2014 14:32:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8sf6-0002Sc-3j
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:32:12 +0000
Received: from [193.109.254.147:30076] by server-5.bemta-14.messagelabs.com id
	C9/2C-16688-B626AE25; Thu, 30 Jan 2014 14:32:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391092328!911814!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22536 invoked from network); 30 Jan 2014 14:32:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:32:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98108126"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:32:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:32:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8sf1-0006Sk-Am;
	Thu, 30 Jan 2014 14:32:07 +0000
Message-ID: <52EA6267.8020303@citrix.com>
Date: Thu, 30 Jan 2014 14:32:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> To simplify creation of the ioreq server abstraction in a
> subsequent patch, this patch centralizes all use of the shared
> ioreq structure and the buffered ioreq ring to the source module
> xen/arch/x86/hvm/hvm.c.
> Also, re-work hvm_send_assist_req() slightly to complete IO
> immediately in the case where there is no emulator (i.e. the shared
> IOREQ ring has not been set). This should handle the case currently
> covered by has_dm in hvmemul_do_io().
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c        |   40 +++------------
>  xen/arch/x86/hvm/hvm.c            |   98 ++++++++++++++++++++++++++++++++++++-
>  xen/arch/x86/hvm/io.c             |   94 +----------------------------------
>  xen/include/asm-x86/hvm/hvm.h     |    3 +-
>  xen/include/asm-x86/hvm/support.h |    9 ----
>  5 files changed, 108 insertions(+), 136 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 868aa1d..d1d3a6f 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -57,24 +57,11 @@ static int hvmemul_do_io(
>      int value_is_ptr = (p_data == NULL);
>      struct vcpu *curr = current;
>      struct hvm_vcpu_io *vio;
> -    ioreq_t *p = get_ioreq(curr);
> -    ioreq_t _ioreq;
> +    ioreq_t p[1];

I know it will make the patch slightly larger by modifying the
indirection of p, but having an array of 1 item on the stack seems silly.
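For illustration, the two shapes under discussion look like this in a
minimal standalone sketch (the struct is a stand-in, not the real ioreq_t):

```c
typedef struct { int state; } ioreq_t;

static int demo(void)
{
    /* Shape used by the patch: an array of one element, so "p" decays
     * to a pointer and existing "p->state" code compiles unchanged. */
    ioreq_t p[1];
    p->state = 7;

    /* Shape the review suggests: a plain struct, taking its address
     * where a pointer is needed. */
    ioreq_t q;
    ioreq_t *pq = &q;
    pq->state = 7;

    return p->state + pq->state;
}
```

Both compile to the same thing; the objection is purely that the
array-of-one spelling obscures intent.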

>      unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
>      p2m_type_t p2mt;
>      struct page_info *ram_page;
>      int rc;
> -    bool_t has_dm = 1;
> -
> -    /*
> -     * Domains without a backing DM, don't have an ioreq page.  Just
> -     * point to a struct on the stack, initialising the state as needed.
> -     */
> -    if ( !p )
> -    {
> -        has_dm = 0;
> -        p = &_ioreq;
> -        p->state = STATE_IOREQ_NONE;
> -    }
>  
>      /* Check for paged out page */
>      ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
> @@ -173,15 +160,6 @@ static int hvmemul_do_io(
>          return X86EMUL_UNHANDLEABLE;
>      }
>  
> -    if ( p->state != STATE_IOREQ_NONE )
> -    {
> -        gdprintk(XENLOG_WARNING, "WARNING: io already pending (%d)?\n",
> -                 p->state);
> -        if ( ram_page )
> -            put_page(ram_page);
> -        return X86EMUL_UNHANDLEABLE;
> -    }
> -
>      vio->io_state =
>          (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
>      vio->io_size = size;
> @@ -193,6 +171,7 @@ static int hvmemul_do_io(
>      if ( vio->mmio_retrying )
>          *reps = 1;
>  
> +    p->state = STATE_IOREQ_NONE;
>      p->dir = dir;
>      p->data_is_ptr = value_is_ptr;
>      p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
> @@ -232,20 +211,15 @@ static int hvmemul_do_io(
>              vio->io_state = HVMIO_handle_mmio_awaiting_completion;
>          break;
>      case X86EMUL_UNHANDLEABLE:
> -        /* If there is no backing DM, just ignore accesses */
> -        if ( !has_dm )
> +        rc = X86EMUL_RETRY;
> +        if ( !hvm_send_assist_req(curr, p) )
>          {
>              rc = X86EMUL_OKAY;
>              vio->io_state = HVMIO_none;
>          }
> -        else
> -        {
> -            rc = X86EMUL_RETRY;
> -            if ( !hvm_send_assist_req(curr) )
> -                vio->io_state = HVMIO_none;
> -            else if ( p_data == NULL )
> -                rc = X86EMUL_OKAY;
> -        }
> +        else if ( p_data == NULL )
> +            rc = X86EMUL_OKAY;
> +
>          break;
>      default:
>          BUG();
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..71a44db 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
>      spin_unlock(&d->event_lock);
>  }
>  
> +static ioreq_t *get_ioreq(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;

newline here...

> +    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));

.. and here.  (I realise that this is just code motion, but might as
well take the opportunity to fix the style.)

> +    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> +}
> +
>  void hvm_do_resume(struct vcpu *v)
>  {
>      ioreq_t *p;
> @@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
>      }
>  }
>  
> -bool_t hvm_send_assist_req(struct vcpu *v)
> +int hvm_buffered_io_send(ioreq_t *p)
> +{
> +    struct vcpu *v = current;
> +    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> +    buffered_iopage_t *pg = iorp->va;
> +    buf_ioreq_t bp;
> +    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> +    int qw = 0;
> +
> +    /* Ensure buffered_iopage fits in a page */
> +    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> +
> +    /*
> +     * Return 0 for the cases we can't deal with:
> +     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> +     *  - we cannot buffer accesses to guest memory buffers, as the guest
> +     *    may expect the memory buffer to be synchronously accessed
> +     *  - the count field is usually used with data_is_ptr and since we don't
> +     *    support data_is_ptr we do not waste space for the count field either
> +     */
> +    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> +        return 0;
> +
> +    bp.type = p->type;
> +    bp.dir  = p->dir;
> +    switch ( p->size )
> +    {
> +    case 1:
> +        bp.size = 0;
> +        break;
> +    case 2:
> +        bp.size = 1;
> +        break;
> +    case 4:
> +        bp.size = 2;
> +        break;
> +    case 8:
> +        bp.size = 3;
> +        qw = 1;
> +        break;
> +    default:
> +        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> +        return 0;
> +    }
> +    
> +    bp.data = p->data;
> +    bp.addr = p->addr;
> +    
> +    spin_lock(&iorp->lock);
> +
> +    if ( (pg->write_pointer - pg->read_pointer) >=
> +         (IOREQ_BUFFER_SLOT_NUM - qw) )
> +    {
> +        /* The queue is full: send the iopacket through the normal path. */
> +        spin_unlock(&iorp->lock);
> +        return 0;
> +    }
> +    
> +    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> +           &bp, sizeof(bp));
> +    
> +    if ( qw )
> +    {
> +        bp.data = p->data >> 32;
> +        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> +               &bp, sizeof(bp));
> +    }
> +
> +    /* Make the ioreq_t visible /before/ write_pointer. */
> +    wmb();
> +    pg->write_pointer += qw ? 2 : 1;
> +
> +    notify_via_xen_event_channel(v->domain,
> +            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> +    spin_unlock(&iorp->lock);
> +    
> +    return 1;
> +}
> +
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>  {
>      ioreq_t *p;
>  
> @@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
>          return 0;
>      }
>  
> +    p->dir = proto_p->dir;
> +    p->data_is_ptr = proto_p->data_is_ptr;
> +    p->type = proto_p->type;
> +    p->size = proto_p->size;
> +    p->addr = proto_p->addr;
> +    p->count = proto_p->count;
> +    p->df = proto_p->df;
> +    p->data = proto_p->data;
> +
>      prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
>  
>      /*
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index bf6309d..576641c 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -46,85 +46,6 @@
>  #include <xen/iocap.h>
>  #include <public/hvm/ioreq.h>
>  
> -int hvm_buffered_io_send(ioreq_t *p)
> -{
> -    struct vcpu *v = current;
> -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> -    buffered_iopage_t *pg = iorp->va;
> -    buf_ioreq_t bp;
> -    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> -    int qw = 0;
> -
> -    /* Ensure buffered_iopage fits in a page */
> -    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> -
> -    /*
> -     * Return 0 for the cases we can't deal with:
> -     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> -     *  - we cannot buffer accesses to guest memory buffers, as the guest
> -     *    may expect the memory buffer to be synchronously accessed
> -     *  - the count field is usually used with data_is_ptr and since we don't
> -     *    support data_is_ptr we do not waste space for the count field either
> -     */
> -    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> -        return 0;
> -
> -    bp.type = p->type;
> -    bp.dir  = p->dir;
> -    switch ( p->size )
> -    {
> -    case 1:
> -        bp.size = 0;
> -        break;
> -    case 2:
> -        bp.size = 1;
> -        break;
> -    case 4:
> -        bp.size = 2;
> -        break;
> -    case 8:
> -        bp.size = 3;
> -        qw = 1;
> -        break;
> -    default:
> -        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> -        return 0;
> -    }
> -    
> -    bp.data = p->data;
> -    bp.addr = p->addr;
> -    
> -    spin_lock(&iorp->lock);
> -
> -    if ( (pg->write_pointer - pg->read_pointer) >=
> -         (IOREQ_BUFFER_SLOT_NUM - qw) )
> -    {
> -        /* The queue is full: send the iopacket through the normal path. */
> -        spin_unlock(&iorp->lock);
> -        return 0;
> -    }
> -    
> -    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> -           &bp, sizeof(bp));
> -    
> -    if ( qw )
> -    {
> -        bp.data = p->data >> 32;
> -        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> -               &bp, sizeof(bp));
> -    }
> -
> -    /* Make the ioreq_t visible /before/ write_pointer. */
> -    wmb();
> -    pg->write_pointer += qw ? 2 : 1;
> -
> -    notify_via_xen_event_channel(v->domain,
> -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> -    spin_unlock(&iorp->lock);
> -    
> -    return 1;
> -}
> -
>  void send_timeoffset_req(unsigned long timeoff)
>  {
>      ioreq_t p[1];
> @@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
>  void send_invalidate_req(void)
>  {
>      struct vcpu *v = current;
> -    ioreq_t *p = get_ioreq(v);
> -
> -    if ( !p )
> -        return;
> -
> -    if ( p->state != STATE_IOREQ_NONE )
> -    {
> -        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with something "
> -                 "already pending (%d)?\n", p->state);
> -        domain_crash(v->domain);
> -        return;
> -    }
> +    ioreq_t p[1];

This can all be reduced to a single item, and even using C structure
initialisation rather than 4 explicit assignments.

~Andrew

>  
>      p->type = IOREQ_TYPE_INVALIDATE;
>      p->size = 4;
>      p->dir = IOREQ_WRITE;
>      p->data = ~0UL; /* flush all */
>  
> -    (void)hvm_send_assist_req(v);
> +    (void)hvm_send_assist_req(v, p);
>  }
>  
>  int handle_mmio(void)
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index ccca5df..4e8fee8 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -26,6 +26,7 @@
>  #include <asm/hvm/asid.h>
>  #include <public/domctl.h>
>  #include <public/hvm/save.h>
> +#include <public/hvm/ioreq.h>
>  #include <asm/mm.h>
>  
>  /* Interrupt acknowledgement sources. */
> @@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
>                              struct page_info **_page, void **_va);
>  void destroy_ring_for_helper(void **_va, struct page_info *page);
>  
> -bool_t hvm_send_assist_req(struct vcpu *v);
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
>  
>  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
>  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
> index 3529499..b6af3c5 100644
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -22,19 +22,10 @@
>  #define __ASM_X86_HVM_SUPPORT_H__
>  
>  #include <xen/types.h>
> -#include <public/hvm/ioreq.h>
>  #include <xen/sched.h>
>  #include <xen/hvm/save.h>
>  #include <asm/processor.h>
>  
> -static inline ioreq_t *get_ioreq(struct vcpu *v)
> -{
> -    struct domain *d = v->domain;
> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> -}
> -
>  #define HVM_DELIVER_NO_ERROR_CODE  -1
>  
>  #ifndef NDEBUG


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:32:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sf7-0002Sh-Ke; Thu, 30 Jan 2014 14:32:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8sf6-0002Sc-3j
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:32:12 +0000
Received: from [193.109.254.147:30076] by server-5.bemta-14.messagelabs.com id
	C9/2C-16688-B626AE25; Thu, 30 Jan 2014 14:32:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391092328!911814!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22536 invoked from network); 30 Jan 2014 14:32:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:32:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98108126"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:32:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:32:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8sf1-0006Sk-Am;
	Thu, 30 Jan 2014 14:32:07 +0000
Message-ID: <52EA6267.8020303@citrix.com>
Date: Thu, 30 Jan 2014 14:32:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> To simplify creation of the ioreq server abstraction in a
> subsequent patch, this patch centralizes all use of the shared
> ioreq structure and the buffered ioreq ring to the source module
> xen/arch/x86/hvm/hvm.c.
> Also, re-work hvm_send_assist_req() slightly to complete IO
> immediately in the case where there is no emulator (i.e. the shared
> IOREQ ring has not been set). This should handle the case currently
> covered by has_dm in hvmemul_do_io().
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  xen/arch/x86/hvm/emulate.c        |   40 +++------------
>  xen/arch/x86/hvm/hvm.c            |   98 ++++++++++++++++++++++++++++++++++++-
>  xen/arch/x86/hvm/io.c             |   94 +----------------------------------
>  xen/include/asm-x86/hvm/hvm.h     |    3 +-
>  xen/include/asm-x86/hvm/support.h |    9 ----
>  5 files changed, 108 insertions(+), 136 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 868aa1d..d1d3a6f 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -57,24 +57,11 @@ static int hvmemul_do_io(
>      int value_is_ptr = (p_data == NULL);
>      struct vcpu *curr = current;
>      struct hvm_vcpu_io *vio;
> -    ioreq_t *p = get_ioreq(curr);
> -    ioreq_t _ioreq;
> +    ioreq_t p[1];

I know it will make the patch slightly larger by modifying the
indirection of p, but having an array of 1 item on the stack seems silly.

>      unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
>      p2m_type_t p2mt;
>      struct page_info *ram_page;
>      int rc;
> -    bool_t has_dm = 1;
> -
> -    /*
> -     * Domains without a backing DM, don't have an ioreq page.  Just
> -     * point to a struct on the stack, initialising the state as needed.
> -     */
> -    if ( !p )
> -    {
> -        has_dm = 0;
> -        p = &_ioreq;
> -        p->state = STATE_IOREQ_NONE;
> -    }
>  
>      /* Check for paged out page */
>      ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
> @@ -173,15 +160,6 @@ static int hvmemul_do_io(
>          return X86EMUL_UNHANDLEABLE;
>      }
>  
> -    if ( p->state != STATE_IOREQ_NONE )
> -    {
> -        gdprintk(XENLOG_WARNING, "WARNING: io already pending (%d)?\n",
> -                 p->state);
> -        if ( ram_page )
> -            put_page(ram_page);
> -        return X86EMUL_UNHANDLEABLE;
> -    }
> -
>      vio->io_state =
>          (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
>      vio->io_size = size;
> @@ -193,6 +171,7 @@ static int hvmemul_do_io(
>      if ( vio->mmio_retrying )
>          *reps = 1;
>  
> +    p->state = STATE_IOREQ_NONE;
>      p->dir = dir;
>      p->data_is_ptr = value_is_ptr;
>      p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
> @@ -232,20 +211,15 @@ static int hvmemul_do_io(
>              vio->io_state = HVMIO_handle_mmio_awaiting_completion;
>          break;
>      case X86EMUL_UNHANDLEABLE:
> -        /* If there is no backing DM, just ignore accesses */
> -        if ( !has_dm )
> +        rc = X86EMUL_RETRY;
> +        if ( !hvm_send_assist_req(curr, p) )
>          {
>              rc = X86EMUL_OKAY;
>              vio->io_state = HVMIO_none;
>          }
> -        else
> -        {
> -            rc = X86EMUL_RETRY;
> -            if ( !hvm_send_assist_req(curr) )
> -                vio->io_state = HVMIO_none;
> -            else if ( p_data == NULL )
> -                rc = X86EMUL_OKAY;
> -        }
> +        else if ( p_data == NULL )
> +            rc = X86EMUL_OKAY;
> +
>          break;
>      default:
>          BUG();
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..71a44db 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
>      spin_unlock(&d->event_lock);
>  }
>  
> +static ioreq_t *get_ioreq(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;

newline here...

> +    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));

.. and here.  (I realise that this is just code motion, but might as
well take the opportunity to fix the style.)

> +    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> +}
> +
>  void hvm_do_resume(struct vcpu *v)
>  {
>      ioreq_t *p;
> @@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
>      }
>  }
>  
> -bool_t hvm_send_assist_req(struct vcpu *v)
> +int hvm_buffered_io_send(ioreq_t *p)
> +{
> +    struct vcpu *v = current;
> +    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> +    buffered_iopage_t *pg = iorp->va;
> +    buf_ioreq_t bp;
> +    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> +    int qw = 0;
> +
> +    /* Ensure buffered_iopage fits in a page */
> +    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> +
> +    /*
> +     * Return 0 for the cases we can't deal with:
> +     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> +     *  - we cannot buffer accesses to guest memory buffers, as the guest
> +     *    may expect the memory buffer to be synchronously accessed
> +     *  - the count field is usually used with data_is_ptr and since we don't
> +     *    support data_is_ptr we do not waste space for the count field either
> +     */
> +    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> +        return 0;
> +
> +    bp.type = p->type;
> +    bp.dir  = p->dir;
> +    switch ( p->size )
> +    {
> +    case 1:
> +        bp.size = 0;
> +        break;
> +    case 2:
> +        bp.size = 1;
> +        break;
> +    case 4:
> +        bp.size = 2;
> +        break;
> +    case 8:
> +        bp.size = 3;
> +        qw = 1;
> +        break;
> +    default:
> +        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> +        return 0;
> +    }
> +    
> +    bp.data = p->data;
> +    bp.addr = p->addr;
> +    
> +    spin_lock(&iorp->lock);
> +
> +    if ( (pg->write_pointer - pg->read_pointer) >=
> +         (IOREQ_BUFFER_SLOT_NUM - qw) )
> +    {
> +        /* The queue is full: send the iopacket through the normal path. */
> +        spin_unlock(&iorp->lock);
> +        return 0;
> +    }
> +    
> +    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> +           &bp, sizeof(bp));
> +    
> +    if ( qw )
> +    {
> +        bp.data = p->data >> 32;
> +        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> +               &bp, sizeof(bp));
> +    }
> +
> +    /* Make the ioreq_t visible /before/ write_pointer. */
> +    wmb();
> +    pg->write_pointer += qw ? 2 : 1;
> +
> +    notify_via_xen_event_channel(v->domain,
> +            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> +    spin_unlock(&iorp->lock);
> +    
> +    return 1;
> +}
> +
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>  {
>      ioreq_t *p;
>  
> @@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
>          return 0;
>      }
>  
> +    p->dir = proto_p->dir;
> +    p->data_is_ptr = proto_p->data_is_ptr;
> +    p->type = proto_p->type;
> +    p->size = proto_p->size;
> +    p->addr = proto_p->addr;
> +    p->count = proto_p->count;
> +    p->df = proto_p->df;
> +    p->data = proto_p->data;
> +
>      prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
>  
>      /*
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index bf6309d..576641c 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -46,85 +46,6 @@
>  #include <xen/iocap.h>
>  #include <public/hvm/ioreq.h>
>  
> -int hvm_buffered_io_send(ioreq_t *p)
> -{
> -    struct vcpu *v = current;
> -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> -    buffered_iopage_t *pg = iorp->va;
> -    buf_ioreq_t bp;
> -    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> -    int qw = 0;
> -
> -    /* Ensure buffered_iopage fits in a page */
> -    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> -
> -    /*
> -     * Return 0 for the cases we can't deal with:
> -     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> -     *  - we cannot buffer accesses to guest memory buffers, as the guest
> -     *    may expect the memory buffer to be synchronously accessed
> -     *  - the count field is usually used with data_is_ptr and since we don't
> -     *    support data_is_ptr we do not waste space for the count field either
> -     */
> -    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> -        return 0;
> -
> -    bp.type = p->type;
> -    bp.dir  = p->dir;
> -    switch ( p->size )
> -    {
> -    case 1:
> -        bp.size = 0;
> -        break;
> -    case 2:
> -        bp.size = 1;
> -        break;
> -    case 4:
> -        bp.size = 2;
> -        break;
> -    case 8:
> -        bp.size = 3;
> -        qw = 1;
> -        break;
> -    default:
> -        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> -        return 0;
> -    }
> -    
> -    bp.data = p->data;
> -    bp.addr = p->addr;
> -    
> -    spin_lock(&iorp->lock);
> -
> -    if ( (pg->write_pointer - pg->read_pointer) >=
> -         (IOREQ_BUFFER_SLOT_NUM - qw) )
> -    {
> -        /* The queue is full: send the iopacket through the normal path. */
> -        spin_unlock(&iorp->lock);
> -        return 0;
> -    }
> -    
> -    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> -           &bp, sizeof(bp));
> -    
> -    if ( qw )
> -    {
> -        bp.data = p->data >> 32;
> -        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> -               &bp, sizeof(bp));
> -    }
> -
> -    /* Make the ioreq_t visible /before/ write_pointer. */
> -    wmb();
> -    pg->write_pointer += qw ? 2 : 1;
> -
> -    notify_via_xen_event_channel(v->domain,
> -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> -    spin_unlock(&iorp->lock);
> -    
> -    return 1;
> -}
> -
>  void send_timeoffset_req(unsigned long timeoff)
>  {
>      ioreq_t p[1];
> @@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
>  void send_invalidate_req(void)
>  {
>      struct vcpu *v = current;
> -    ioreq_t *p = get_ioreq(v);
> -
> -    if ( !p )
> -        return;
> -
> -    if ( p->state != STATE_IOREQ_NONE )
> -    {
> -        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with something "
> -                 "already pending (%d)?\n", p->state);
> -        domain_crash(v->domain);
> -        return;
> -    }
> +    ioreq_t p[1];

This can all be reduced to a single item, and even using C structure
initialisation rather than 4 explicit assignments.
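
As a sketch of that suggestion (illustrative only: the struct below is a
minimal stand-in for a subset of the public ioreq_t, and the two constants
are placeholder values, not Xen's real ones):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the public ioreq_t -- a subset of fields only. */
typedef struct {
    uint8_t  type;
    uint8_t  dir;
    uint32_t size;
    uint64_t data;
} ioreq_t;

#define IOREQ_TYPE_INVALIDATE 2  /* placeholder value */
#define IOREQ_WRITE           1  /* placeholder value */

/* One object with C99 designated initialisers: all four fields are set in
 * a single declaration, and any field not named is implicitly zeroed. */
static ioreq_t build_invalidate_req(void)
{
    ioreq_t p = {
        .type = IOREQ_TYPE_INVALIDATE,
        .size = 4,
        .dir  = IOREQ_WRITE,
        .data = ~0UL, /* flush all */
    };
    return p;
}
```

With this shape, send_invalidate_req() could declare the ioreq_t directly
on the stack and pass its address, dropping the one-element array.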

~Andrew

>  
>      p->type = IOREQ_TYPE_INVALIDATE;
>      p->size = 4;
>      p->dir = IOREQ_WRITE;
>      p->data = ~0UL; /* flush all */
>  
> -    (void)hvm_send_assist_req(v);
> +    (void)hvm_send_assist_req(v, p);
>  }
>  
>  int handle_mmio(void)
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index ccca5df..4e8fee8 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -26,6 +26,7 @@
>  #include <asm/hvm/asid.h>
>  #include <public/domctl.h>
>  #include <public/hvm/save.h>
> +#include <public/hvm/ioreq.h>
>  #include <asm/mm.h>
>  
>  /* Interrupt acknowledgement sources. */
> @@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
>                              struct page_info **_page, void **_va);
>  void destroy_ring_for_helper(void **_va, struct page_info *page);
>  
> -bool_t hvm_send_assist_req(struct vcpu *v);
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
>  
>  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
>  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
> index 3529499..b6af3c5 100644
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -22,19 +22,10 @@
>  #define __ASM_X86_HVM_SUPPORT_H__
>  
>  #include <xen/types.h>
> -#include <public/hvm/ioreq.h>
>  #include <xen/sched.h>
>  #include <xen/hvm/save.h>
>  #include <asm/processor.h>
>  
> -static inline ioreq_t *get_ioreq(struct vcpu *v)
> -{
> -    struct domain *d = v->domain;
> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> -}
> -
>  #define HVM_DELIVER_NO_ERROR_CODE  -1
>  
>  #ifndef NDEBUG


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:32:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sfb-0002Uq-7k; Thu, 30 Jan 2014 14:32:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1W8sfZ-0002UX-OE
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:32:41 +0000
Received: from [85.158.143.35:31911] by server-3.bemta-4.messagelabs.com id
	4B/F0-11539-9826AE25; Thu, 30 Jan 2014 14:32:41 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391092359!1945993!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16017 invoked from network); 30 Jan 2014 14:32:40 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 14:32:40 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0UEWLg1026625;
	Thu, 30 Jan 2014 14:32:26 GMT
Received: from algedi.dur.ac.uk (algedi.dur.ac.uk [129.234.2.28])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0UEWHGH031130
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Jan 2014 14:32:17 GMT
Received: from algedi.dur.ac.uk (localhost [127.0.0.1])
	by algedi.dur.ac.uk (8.14.5+Sun/8.11.1) with ESMTP id s0UEWHW4019009;
	Thu, 30 Jan 2014 14:32:17 GMT
Received: from localhost (dcl0may@localhost)
	by algedi.dur.ac.uk (8.14.5+Sun/8.14.5/Submit) with ESMTP id
	s0UEWHMr019006; Thu, 30 Jan 2014 14:32:17 GMT
Date: Thu, 30 Jan 2014 14:32:16 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391090818.29487.36.camel@kazak.uk.xensource.com>
Message-ID: <alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (GSO 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s0UEWLg1026625
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Ian Campbell wrote:

> On Thu, 2014-01-30 at 13:02 +0000, Joby Poriyath wrote:
>> On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
>>> On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
>>>>>> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>>>>>>                  continue
>>>>>>
>>>>>>              # new image
>>>>>> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
>>>>>> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>>>>>
>>>>> Why is this necessary? fedora-19 also have the aformentioned "--class
>>>>> red, --class gnu" yet is parsed happily.
>>>>
>>>> A menuentry from RHEL 7 looks like this...
>>>>
>>>> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163'
>>>>
>>>> So we need 'lazy' match with '.*?'.
>>>
>>> ".*" already matches zero or more characters, so I'm not sure what ".*?"
>>> means in addition to that, do you have a reference?
>>
>> http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy
>
> Thanks, pure punctuation is a bit tricky for a search engine...
>
>>> Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
>>> the name itself, although you might have to split into handling " and '
>>> separately to be more correct
>
> Any thoughts on this?

I went for ["\']([^"\']*)["\'] in a patch I added to Fedora pygrub in May 
last year. That seems to work fine for recent Fedora versions which will 
be somewhat similar to RHEL 7.
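
The greedy/non-greedy difference can be seen with a short self-contained
sketch (the menuentry line below is shortened from the RHEL 7 example
earlier in the thread):

```python
import re

# RHEL 7 style menuentry line (shortened): the title is quoted, and more
# quoted strings follow later on the same line.
l = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
     "$menuentry_id_option 'gnulinux-0-rescue' {")

# Greedy: .* runs as far as possible, so the title group swallows
# everything up to the last quote on the line.
greedy = re.match('^menuentry ["\'](.*)["\'] (.*){', l)

# Non-greedy: .*? stops at the first closing quote.
lazy = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)

# Ian's alternative: disallow quotes inside the title itself.
nclass = re.match('^menuentry ["\']([^"\']*)["\'] (.*){', l)

print(greedy.group(1))  # includes the --class options, past the title
print(lazy.group(1))
print(nclass.group(1))
```

Both the non-greedy and negated-class patterns recover just the title; the
greedy one captures through to the last quoted string on the line.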

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Ian Campbell wrote:

> On Thu, 2014-01-30 at 13:02 +0000, Joby Poriyath wrote:
>> On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
>>> On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
>>>>>> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>>>>>>                  continue
>>>>>>
>>>>>>              # new image
>>>>>> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
>>>>>> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>>>>>
>>>>> Why is this necessary? fedora-19 also have the aformentioned "--class
>>>>> red, --class gnu" yet is parsed happily.
>>>>
>>>> A menuentry from RHEL 7 looks like this...
>>>>
>>>> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163'
>>>>
>>>> So we need a 'lazy' match with '.*?'.
>>>
>>> ".*" already matches zero or more characters, so I'm not sure what ".*?"
>>> means in addition to that, do you have a reference?
>>
>> http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy
>
> Thanks, pure punctuation is a bit tricky for a search engine...
>
>>> Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
>>> the name itself, although you might have to split into handling " and '
>>> separately to be more correct)
>
> Any thoughts on this?

I went for ["\']([^"\']*)["\'] in a patch I added to Fedora pygrub in May 
last year. That seems to work fine for recent Fedora versions, which will 
be somewhat similar to RHEL 7.
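
For anyone following along, the difference between the three regexes 
discussed in this thread can be checked in a few lines of Python. The 
menuentry line below is a shortened stand-in for the RHEL 7 one quoted 
above, not the literal grub.cfg contents:

```python
import re

# RHEL 7-style menuentry: a quoted title, then --class options, then a
# second quoted identifier, then the opening brace.
l = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
     "$menuentry_id_option 'gnulinux-rescue-advanced' {")

# Greedy: (.*) backtracks from the end, so the title capture runs all
# the way to the LAST quote, swallowing the --class options.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', l)

# Non-greedy: (.*?) stops at the FIRST closing quote.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', l)

# Negated class: disallow quotes inside the title; same effect here.
negated = re.match(r'^menuentry ["\']([^"\']*)["\'] (.*){', l)
```

With the sample line above, `lazy.group(1)` and `negated.group(1)` are 
both the title alone, while `greedy.group(1)` also contains the --class 
options and the second quoted identifier.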

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:34:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8shJ-0002gj-V4; Thu, 30 Jan 2014 14:34:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8shI-0002gS-Cr
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 14:34:28 +0000
Received: from [193.109.254.147:13468] by server-11.bemta-14.messagelabs.com
	id 68/E5-24604-3F26AE25; Thu, 30 Jan 2014 14:34:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391092465!912665!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7163 invoked from network); 30 Jan 2014 14:34:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:34:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96142203"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:34:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 09:34:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8shD-0000xr-QH;
	Thu, 30 Jan 2014 14:34:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8shD-0002hT-M6;
	Thu, 30 Jan 2014 14:34:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24637-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 14:34:23 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24637: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24637 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24637/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   4 xen-build                 fail REGR. vs. 24591
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24591
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode L2's instruction to handle an IO access
    directly, and may get X86EMUL_RETRY while handling this IO request. If at
    the same time a virtual vmexit is pending (for example, an interrupt
    waiting to be injected into L1), the hypervisor will switch the vCPU
    context from L2 to L1. We are then already in L1's context, but the
    X86EMUL_RETRY means the hypervisor will retry the IO request later, and
    unfortunately the retry will happen in L1's context, causing the problem.
    The fix is to disallow any virtual vmexit/vmentry while an IO request is
    pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses the physical device directly. For
    such accesses, the shadow EPT table should point to the device's MMIO.
    But in the current logic, L0 doesn't distinguish whether MMIO comes from
    qemu or from a physical device when building the shadow EPT table, which
    is wrong. This patch sets up the correct shadow EPT table for such MMIO
    ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2 trying
    to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg that succeeded; but in the meantime prev has
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
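
    The race described in this commit message hinges on re-reading the
    shared prev pointer after a successful cmpxchg. A toy Python model of
    the corrected insert loop follows; CAS, Node, and xchg_head_fixed are
    illustrative names standing in for the actual Xen code, which is in C:

```python
import threading

class Node:
    """Toy telemetry element; only the link field matters here."""
    def __init__(self):
        self.prev = None

class CAS:
    """Toy atomic cell standing in for cmpxchg on the list head."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def cmpxchg(self, expected, new):
        # Atomically: if the cell still holds 'expected', store 'new'.
        # Either way, return the value observed -- like x86 cmpxchg.
        with self._lock:
            old = self._value
            if old is expected:
                self._value = new
            return old

def xchg_head_fixed(head, elem):
    # Retry loop in the spirit of the fix: compare the cmpxchg result
    # against the LOCAL copy 'old', never against a fresh re-read of
    # shared memory.  Re-reading prev is what let the buggy version
    # mistake a successful cmpxchg for a failure and link the head to
    # the element a second time.
    while True:
        old = head.load()
        elem.prev = old              # *linkp updated before the cmpxchg
        if head.cmpxchg(old, elem) is old:
            return
```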

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:34:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8shJ-0002gj-V4; Thu, 30 Jan 2014 14:34:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8shI-0002gS-Cr
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 14:34:28 +0000
Received: from [193.109.254.147:13468] by server-11.bemta-14.messagelabs.com
	id 68/E5-24604-3F26AE25; Thu, 30 Jan 2014 14:34:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391092465!912665!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7163 invoked from network); 30 Jan 2014 14:34:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:34:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96142203"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:34:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 09:34:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8shD-0000xr-QH;
	Thu, 30 Jan 2014 14:34:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8shD-0002hT-M6;
	Thu, 30 Jan 2014 14:34:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24637-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 14:34:23 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24637: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24637 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24637/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   4 xen-build                 fail REGR. vs. 24591
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24591
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
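
[Editor's note: the deferral described in the commit above can be sketched
as a pending-flag pattern: when the guest-visible area is not accessible,
record that an update is owed and replay it when the vCPU next enters
kernel mode. This is an illustrative user-space sketch with hypothetical
names (`pending_update`, `runstate_shadow`, `guest_area`), not the actual
toggle_guest_mode() code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for per-vCPU state. */
static bool in_kernel_mode = true;
static bool pending_update;      /* an update is owed to the guest area  */
static long runstate_shadow;     /* last value we tried to publish       */
static long guest_area;          /* stands in for the registered area    */

/* Publish a value, or defer if kernel addresses are inaccessible. */
static void update_runstate(long value)
{
    runstate_shadow = value;
    if ( !in_kernel_mode )
    {
        pending_update = true;   /* user-mode page tables active: defer */
        return;
    }
    guest_area = value;
}

/* Mirrors the idea added to toggle_guest_mode(): replay the dropped
 * update on the next switch into kernel mode. */
static void toggle_guest_mode(void)
{
    in_kernel_mode = !in_kernel_mode;
    if ( in_kernel_mode && pending_update )
    {
        guest_area = runstate_shadow;
        pending_update = false;
    }
}
```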

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and may get X86EMUL_RETRY while handling that IO
    request. If a virtual vmexit is pending at the same time (for example,
    an interrupt to be injected into L1), the hypervisor will switch the
    VCPU context from L2 to L1. The pending IO request is then retried
    later, but unfortunately in L1's context, which causes the problem.
    The fix is to allow no virtual vmexit/vmentry while an IO request is
    pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fix wrong handling of L2 guest's direct MMIO access
    
    An L2 guest may access a physical device directly (nested VT-d). For
    such accesses, the shadow EPT table should point at the device's MMIO.
    However, the current logic does not distinguish MMIO backed by qemu
    from MMIO backed by a physical device when building the shadow EPT
    table, which is wrong. This patch sets up the correct shadow EPT table
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element onto the committed list,
    and T2 trying to consume that list.
    
    1. T1 starts inserting an element (A) and sets its prev pointer
       (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 takes the list, changes element A and updates the committed
       list head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg. The cmpxchg succeeded, but in the meantime
       prev has changed in memory.
    5. T1 concludes that the cmpxchg failed and goes around the loop
       again, linking the head to A a second time.
    
    To fix the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), because after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier. This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
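
[Editor's note: the fixed pattern is a standard lock-free push loop: keep
the head you observed in a local variable and compare the cmpxchg result
only against that snapshot, never against a fresh read of memory. Below is
an illustrative sketch using GCC's __sync_val_compare_and_swap builtin and
hypothetical names; it is not the actual mctelem code.]

```c
#include <assert.h>
#include <stddef.h>

struct elem { struct elem *next; };

/* Push e onto the list at *headp.  The buggy version re-read *headp
 * after the cmpxchg; if a consumer changed the head in the meantime, a
 * successful cmpxchg looked like a failure and the element was linked
 * a second time. */
static void push(struct elem **headp, struct elem *e)
{
    struct elem *old = *headp;

    for ( ;; )
    {
        struct elem *seen;

        e->next = old;   /* link before the cmpxchg: on success the
                          * element may be consumed immediately */
        seen = __sync_val_compare_and_swap(headp, old, e);
        if ( seen == old )   /* compare against the snapshot only */
            break;
        old = seen;          /* retry with the head we actually saw */
    }
}
```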

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)
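
[Editor's note: the bug class fixed by the last commit above (a reference
taken but not released on an error path) is commonly avoided with a
single-exit pattern. A minimal sketch with a hypothetical get/put pair,
not the real get_gfn()/put_gfn() signatures:]

```c
#include <assert.h>

static int refs;                     /* hypothetical reference count */

static int  get_ref(void) { refs++; return 0; }
static void put_ref(void) { refs--; }

/* Every path after a successful get_ref() funnels through 'out', so
 * the reference is dropped on success and on error alike. */
static int do_op(int fail)
{
    int rc = get_ref();

    if ( rc )
        return rc;                   /* nothing acquired yet */

    if ( fail )
    {
        rc = -1;
        goto out;                    /* the error path the patch fixes */
    }

    rc = 0;
 out:
    put_ref();
    return rc;
}
```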

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:36:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sir-0002og-F4; Thu, 30 Jan 2014 14:36:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sip-0002oX-Rr
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:36:04 +0000
Received: from [193.109.254.147:31496] by server-5.bemta-14.messagelabs.com id
	48/52-16688-3536AE25; Thu, 30 Jan 2014 14:36:03 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391092560!890597!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17040 invoked from network); 30 Jan 2014 14:36:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:36:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96142777"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:36:00 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 09:35:59 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 15:35:57 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
	ioreq structures
Thread-Index: AQHPHcZZpkBP0yklNEimIf65ti+4fpqdQ8qAgAARD7A=
Date: Thu, 30 Jan 2014 14:35:57 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD021772B@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
	<52EA6267.8020303@citrix.com>
In-Reply-To: <52EA6267.8020303@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 14:32
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
> ioreq structures
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > To simplify creation of the ioreq server abstraction in a
> > subsequent patch, this patch centralizes all use of the shared
> > ioreq structure and the buffered ioreq ring to the source module
> > xen/arch/x86/hvm/hvm.c.
> > Also, re-work hvm_send_assist_req() slightly to complete IO
> > immediately in the case where there is no emulator (i.e. the shared
> > IOREQ ring has not been set). This should handle the case currently
> > covered by has_dm in hvmemul_do_io().
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  xen/arch/x86/hvm/emulate.c        |   40 +++------------
> >  xen/arch/x86/hvm/hvm.c            |   98
> ++++++++++++++++++++++++++++++++++++-
> >  xen/arch/x86/hvm/io.c             |   94 +----------------------------------
> >  xen/include/asm-x86/hvm/hvm.h     |    3 +-
> >  xen/include/asm-x86/hvm/support.h |    9 ----
> >  5 files changed, 108 insertions(+), 136 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> > index 868aa1d..d1d3a6f 100644
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -57,24 +57,11 @@ static int hvmemul_do_io(
> >      int value_is_ptr = (p_data == NULL);
> >      struct vcpu *curr = current;
> >      struct hvm_vcpu_io *vio;
> > -    ioreq_t *p = get_ioreq(curr);
> > -    ioreq_t _ioreq;
> > +    ioreq_t p[1];
> 
> I know it will make the patch slightly larger by modifying the
> indirection of p, but having an array of 1 item on the stack seems silly.
> 

I'm following the style adopted in io.c, entirely to keep the patch as small as possible :-)
I agree it's a bit silly, but I think such a change is better kept in a separate patch; I can add it to the series when I come to submit the patches for real.

  Paul

> >      unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
> >      p2m_type_t p2mt;
> >      struct page_info *ram_page;
> >      int rc;
> > -    bool_t has_dm = 1;
> > -
> > -    /*
> > -     * Domains without a backing DM, don't have an ioreq page.  Just
> > -     * point to a struct on the stack, initialising the state as needed.
> > -     */
> > -    if ( !p )
> > -    {
> > -        has_dm = 0;
> > -        p = &_ioreq;
> > -        p->state = STATE_IOREQ_NONE;
> > -    }
> >
> >      /* Check for paged out page */
> >      ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt,
> P2M_UNSHARE);
> > @@ -173,15 +160,6 @@ static int hvmemul_do_io(
> >          return X86EMUL_UNHANDLEABLE;
> >      }
> >
> > -    if ( p->state != STATE_IOREQ_NONE )
> > -    {
> > -        gdprintk(XENLOG_WARNING, "WARNING: io already pending
> (%d)?\n",
> > -                 p->state);
> > -        if ( ram_page )
> > -            put_page(ram_page);
> > -        return X86EMUL_UNHANDLEABLE;
> > -    }
> > -
> >      vio->io_state =
> >          (p_data == NULL) ? HVMIO_dispatched :
> HVMIO_awaiting_completion;
> >      vio->io_size = size;
> > @@ -193,6 +171,7 @@ static int hvmemul_do_io(
> >      if ( vio->mmio_retrying )
> >          *reps = 1;
> >
> > +    p->state = STATE_IOREQ_NONE;
> >      p->dir = dir;
> >      p->data_is_ptr = value_is_ptr;
> >      p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
> > @@ -232,20 +211,15 @@ static int hvmemul_do_io(
> >              vio->io_state = HVMIO_handle_mmio_awaiting_completion;
> >          break;
> >      case X86EMUL_UNHANDLEABLE:
> > -        /* If there is no backing DM, just ignore accesses */
> > -        if ( !has_dm )
> > +        rc = X86EMUL_RETRY;
> > +        if ( !hvm_send_assist_req(curr, p) )
> >          {
> >              rc = X86EMUL_OKAY;
> >              vio->io_state = HVMIO_none;
> >          }
> > -        else
> > -        {
> > -            rc = X86EMUL_RETRY;
> > -            if ( !hvm_send_assist_req(curr) )
> > -                vio->io_state = HVMIO_none;
> > -            else if ( p_data == NULL )
> > -                rc = X86EMUL_OKAY;
> > -        }
> > +        else if ( p_data == NULL )
> > +            rc = X86EMUL_OKAY;
> > +
> >          break;
> >      default:
> >          BUG();
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 69f7e74..71a44db 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >      spin_unlock(&d->event_lock);
> >  }
> >
> > +static ioreq_t *get_ioreq(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> 
> newline here...
> 
> > +    ASSERT((v == current) || spin_is_locked(&d-
> >arch.hvm_domain.ioreq.lock));
> 
> .. and here.  (I realise that this is just code motion, but might as
> well take the opportunity to fix the style.)
> 
> > +    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > +}
> > +
> >  void hvm_do_resume(struct vcpu *v)
> >  {
> >      ioreq_t *p;
> > @@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
> >      }
> >  }
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v)
> > +int hvm_buffered_io_send(ioreq_t *p)
> > +{
> > +    struct vcpu *v = current;
> > +    struct hvm_ioreq_page *iorp = &v->domain-
> >arch.hvm_domain.buf_ioreq;
> > +    buffered_iopage_t *pg = iorp->va;
> > +    buf_ioreq_t bp;
> > +    /* Timeoffset sends 64b data, but no address. Use two consecutive
> slots. */
> > +    int qw = 0;
> > +
> > +    /* Ensure buffered_iopage fits in a page */
> > +    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> > +
> > +    /*
> > +     * Return 0 for the cases we can't deal with:
> > +     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > +     *  - we cannot buffer accesses to guest memory buffers, as the guest
> > +     *    may expect the memory buffer to be synchronously accessed
> > +     *  - the count field is usually used with data_is_ptr and since we don't
> > +     *    support data_is_ptr we do not waste space for the count field
> either
> > +     */
> > +    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> > +        return 0;
> > +
> > +    bp.type = p->type;
> > +    bp.dir  = p->dir;
> > +    switch ( p->size )
> > +    {
> > +    case 1:
> > +        bp.size = 0;
> > +        break;
> > +    case 2:
> > +        bp.size = 1;
> > +        break;
> > +    case 4:
> > +        bp.size = 2;
> > +        break;
> > +    case 8:
> > +        bp.size = 3;
> > +        qw = 1;
> > +        break;
> > +    default:
> > +        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p-
> >size);
> > +        return 0;
> > +    }
> > +
> > +    bp.data = p->data;
> > +    bp.addr = p->addr;
> > +
> > +    spin_lock(&iorp->lock);
> > +
> > +    if ( (pg->write_pointer - pg->read_pointer) >=
> > +         (IOREQ_BUFFER_SLOT_NUM - qw) )
> > +    {
> > +        /* The queue is full: send the iopacket through the normal path. */
> > +        spin_unlock(&iorp->lock);
> > +        return 0;
> > +    }
> > +
> > +    memcpy(&pg->buf_ioreq[pg->write_pointer %
> IOREQ_BUFFER_SLOT_NUM],
> > +           &bp, sizeof(bp));
> > +
> > +    if ( qw )
> > +    {
> > +        bp.data = p->data >> 32;
> > +        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) %
> IOREQ_BUFFER_SLOT_NUM],
> > +               &bp, sizeof(bp));
> > +    }
> > +
> > +    /* Make the ioreq_t visible /before/ write_pointer. */
> > +    wmb();
> > +    pg->write_pointer += qw ? 2 : 1;
> > +
> > +    notify_via_xen_event_channel(v->domain,
> > +            v->domain-
> >arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > +    spin_unlock(&iorp->lock);
> > +
> > +    return 1;
> > +}
> > +
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >  {
> >      ioreq_t *p;
> >
> > @@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
> >          return 0;
> >      }
> >
> > +    p->dir = proto_p->dir;
> > +    p->data_is_ptr = proto_p->data_is_ptr;
> > +    p->type = proto_p->type;
> > +    p->size = proto_p->size;
> > +    p->addr = proto_p->addr;
> > +    p->count = proto_p->count;
> > +    p->df = proto_p->df;
> > +    p->data = proto_p->data;
> > +
> >      prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> >
> >      /*
> > diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> > index bf6309d..576641c 100644
> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -46,85 +46,6 @@
> >  #include <xen/iocap.h>
> >  #include <public/hvm/ioreq.h>
> >
> > -int hvm_buffered_io_send(ioreq_t *p)
> > -{
> > -    struct vcpu *v = current;
> > -    struct hvm_ioreq_page *iorp = &v->domain-
> >arch.hvm_domain.buf_ioreq;
> > -    buffered_iopage_t *pg = iorp->va;
> > -    buf_ioreq_t bp;
> > -    /* Timeoffset sends 64b data, but no address. Use two consecutive
> slots. */
> > -    int qw = 0;
> > -
> > -    /* Ensure buffered_iopage fits in a page */
> > -    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> > -
> > -    /*
> > -     * Return 0 for the cases we can't deal with:
> > -     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > -     *  - we cannot buffer accesses to guest memory buffers, as the guest
> > -     *    may expect the memory buffer to be synchronously accessed
> > -     *  - the count field is usually used with data_is_ptr and since we don't
> > -     *    support data_is_ptr we do not waste space for the count field either
> > -     */
> > -    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> > -        return 0;
> > -
> > -    bp.type = p->type;
> > -    bp.dir  = p->dir;
> > -    switch ( p->size )
> > -    {
> > -    case 1:
> > -        bp.size = 0;
> > -        break;
> > -    case 2:
> > -        bp.size = 1;
> > -        break;
> > -    case 4:
> > -        bp.size = 2;
> > -        break;
> > -    case 8:
> > -        bp.size = 3;
> > -        qw = 1;
> > -        break;
> > -    default:
> > -        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p-
> >size);
> > -        return 0;
> > -    }
> > -
> > -    bp.data = p->data;
> > -    bp.addr = p->addr;
> > -
> > -    spin_lock(&iorp->lock);
> > -
> > -    if ( (pg->write_pointer - pg->read_pointer) >=
> > -         (IOREQ_BUFFER_SLOT_NUM - qw) )
> > -    {
> > -        /* The queue is full: send the iopacket through the normal path. */
> > -        spin_unlock(&iorp->lock);
> > -        return 0;
> > -    }
> > -
> > -    memcpy(&pg->buf_ioreq[pg->write_pointer %
> IOREQ_BUFFER_SLOT_NUM],
> > -           &bp, sizeof(bp));
> > -
> > -    if ( qw )
> > -    {
> > -        bp.data = p->data >> 32;
> > -        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) %
> IOREQ_BUFFER_SLOT_NUM],
> > -               &bp, sizeof(bp));
> > -    }
> > -
> > -    /* Make the ioreq_t visible /before/ write_pointer. */
> > -    wmb();
> > -    pg->write_pointer += qw ? 2 : 1;
> > -
> > -    notify_via_xen_event_channel(v->domain,
> > -            v->domain-
> >arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > -    spin_unlock(&iorp->lock);
> > -
> > -    return 1;
> > -}
> > -
> >  void send_timeoffset_req(unsigned long timeoff)
> >  {
> >      ioreq_t p[1];
> > @@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
> >  void send_invalidate_req(void)
> >  {
> >      struct vcpu *v = current;
> > -    ioreq_t *p = get_ioreq(v);
> > -
> > -    if ( !p )
> > -        return;
> > -
> > -    if ( p->state != STATE_IOREQ_NONE )
> > -    {
> > -        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with
> something "
> > -                 "already pending (%d)?\n", p->state);
> > -        domain_crash(v->domain);
> > -        return;
> > -    }
> > +    ioreq_t p[1];
> 
> This can all be reduced to a single item, and even using C structure
> initialisation rather than 4 explicit assignments.
> 
> ~Andrew
> 
> >
> >      p->type = IOREQ_TYPE_INVALIDATE;
> >      p->size = 4;
> >      p->dir = IOREQ_WRITE;
> >      p->data = ~0UL; /* flush all */
> >
> > -    (void)hvm_send_assist_req(v);
> > +    (void)hvm_send_assist_req(v, p);
> >  }
> >
> >  int handle_mmio(void)
> > diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-
> x86/hvm/hvm.h
> > index ccca5df..4e8fee8 100644
> > --- a/xen/include/asm-x86/hvm/hvm.h
> > +++ b/xen/include/asm-x86/hvm/hvm.h
> > @@ -26,6 +26,7 @@
> >  #include <asm/hvm/asid.h>
> >  #include <public/domctl.h>
> >  #include <public/hvm/save.h>
> > +#include <public/hvm/ioreq.h>
> >  #include <asm/mm.h>
> >
> >  /* Interrupt acknowledgement sources. */
> > @@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d,
> unsigned long gmfn,
> >                              struct page_info **_page, void **_va);
> >  void destroy_ring_for_helper(void **_va, struct page_info *page);
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v);
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
> >
> >  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
> >  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> > diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-
> x86/hvm/support.h
> > index 3529499..b6af3c5 100644
> > --- a/xen/include/asm-x86/hvm/support.h
> > +++ b/xen/include/asm-x86/hvm/support.h
> > @@ -22,19 +22,10 @@
> >  #define __ASM_X86_HVM_SUPPORT_H__
> >
> >  #include <xen/types.h>
> > -#include <public/hvm/ioreq.h>
> >  #include <xen/sched.h>
> >  #include <xen/hvm/save.h>
> >  #include <asm/processor.h>
> >
> > -static inline ioreq_t *get_ioreq(struct vcpu *v)
> > -{
> > -    struct domain *d = v->domain;
> > -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > -    ASSERT((v == current) || spin_is_locked(&d-
> >arch.hvm_domain.ioreq.lock));
> > -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > -}
> > -
> >  #define HVM_DELIVER_NO_ERROR_CODE  -1
> >
> >  #ifndef NDEBUG


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:36:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sir-0002og-F4; Thu, 30 Jan 2014 14:36:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8sip-0002oX-Rr
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:36:04 +0000
Received: from [193.109.254.147:31496] by server-5.bemta-14.messagelabs.com id
	48/52-16688-3536AE25; Thu, 30 Jan 2014 14:36:03 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391092560!890597!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17040 invoked from network); 30 Jan 2014 14:36:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:36:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96142777"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:36:00 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 09:35:59 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 15:35:57 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
	ioreq structures
Thread-Index: AQHPHcZZpkBP0yklNEimIf65ti+4fpqdQ8qAgAARD7A=
Date: Thu, 30 Jan 2014 14:35:57 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD021772B@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
	<52EA6267.8020303@citrix.com>
In-Reply-To: <52EA6267.8020303@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 14:32
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
> ioreq structures
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > To simplify creation of the ioreq server abstraction in a
> > subsequent patch, this patch centralizes all use of the shared
> > ioreq structure and the buffered ioreq ring to the source module
> > xen/arch/x86/hvm/hvm.c.
> > Also, re-work hvm_send_assist_req() slightly to complete IO
> > immediately in the case where there is no emulator (i.e. the shared
> > IOREQ ring has not been set). This should handle the case currently
> > covered by has_dm in hvmemul_do_io().
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  xen/arch/x86/hvm/emulate.c        |   40 +++------------
> >  xen/arch/x86/hvm/hvm.c            |   98
> ++++++++++++++++++++++++++++++++++++-
> >  xen/arch/x86/hvm/io.c             |   94 +----------------------------------
> >  xen/include/asm-x86/hvm/hvm.h     |    3 +-
> >  xen/include/asm-x86/hvm/support.h |    9 ----
> >  5 files changed, 108 insertions(+), 136 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> > index 868aa1d..d1d3a6f 100644
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -57,24 +57,11 @@ static int hvmemul_do_io(
> >      int value_is_ptr = (p_data == NULL);
> >      struct vcpu *curr = current;
> >      struct hvm_vcpu_io *vio;
> > -    ioreq_t *p = get_ioreq(curr);
> > -    ioreq_t _ioreq;
> > +    ioreq_t p[1];
> 
> I know it will make the patch sightly larger by modifying the
> indirection of p, but having an array of 1 item on the stack is seems silly.
> 

I'm following the style adopted in io.c and it is entirely to keep the patch as small as possible :-)
I agree it's a bit silly but I guess it would be better to keep such a change in a separate patch. I can add that to the sequence when I come to submit the patches for real.

  Paul

> >      unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
> >      p2m_type_t p2mt;
> >      struct page_info *ram_page;
> >      int rc;
> > -    bool_t has_dm = 1;
> > -
> > -    /*
> > -     * Domains without a backing DM, don't have an ioreq page.  Just
> > -     * point to a struct on the stack, initialising the state as needed.
> > -     */
> > -    if ( !p )
> > -    {
> > -        has_dm = 0;
> > -        p = &_ioreq;
> > -        p->state = STATE_IOREQ_NONE;
> > -    }
> >
> >      /* Check for paged out page */
> >      ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
> > @@ -173,15 +160,6 @@ static int hvmemul_do_io(
> >          return X86EMUL_UNHANDLEABLE;
> >      }
> >
> > -    if ( p->state != STATE_IOREQ_NONE )
> > -    {
> > -        gdprintk(XENLOG_WARNING, "WARNING: io already pending (%d)?\n",
> > -                 p->state);
> > -        if ( ram_page )
> > -            put_page(ram_page);
> > -        return X86EMUL_UNHANDLEABLE;
> > -    }
> > -
> >      vio->io_state =
> >          (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
> >      vio->io_size = size;
> > @@ -193,6 +171,7 @@ static int hvmemul_do_io(
> >      if ( vio->mmio_retrying )
> >          *reps = 1;
> >
> > +    p->state = STATE_IOREQ_NONE;
> >      p->dir = dir;
> >      p->data_is_ptr = value_is_ptr;
> >      p->type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
> > @@ -232,20 +211,15 @@ static int hvmemul_do_io(
> >              vio->io_state = HVMIO_handle_mmio_awaiting_completion;
> >          break;
> >      case X86EMUL_UNHANDLEABLE:
> > -        /* If there is no backing DM, just ignore accesses */
> > -        if ( !has_dm )
> > +        rc = X86EMUL_RETRY;
> > +        if ( !hvm_send_assist_req(curr, p) )
> >          {
> >              rc = X86EMUL_OKAY;
> >              vio->io_state = HVMIO_none;
> >          }
> > -        else
> > -        {
> > -            rc = X86EMUL_RETRY;
> > -            if ( !hvm_send_assist_req(curr) )
> > -                vio->io_state = HVMIO_none;
> > -            else if ( p_data == NULL )
> > -                rc = X86EMUL_OKAY;
> > -        }
> > +        else if ( p_data == NULL )
> > +            rc = X86EMUL_OKAY;
> > +
> >          break;
> >      default:
> >          BUG();
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 69f7e74..71a44db 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -345,6 +345,14 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >      spin_unlock(&d->event_lock);
> >  }
> >
> > +static ioreq_t *get_ioreq(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> 
> newline here...
> 
> > +    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> 
> .. and here.  (I realise that this is just code motion, but might as
> well take the opportunity to fix the style.)
> 
> > +    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > +}
> > +
> >  void hvm_do_resume(struct vcpu *v)
> >  {
> >      ioreq_t *p;
> > @@ -1287,7 +1295,86 @@ void hvm_vcpu_down(struct vcpu *v)
> >      }
> >  }
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v)
> > +int hvm_buffered_io_send(ioreq_t *p)
> > +{
> > +    struct vcpu *v = current;
> > +    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> > +    buffered_iopage_t *pg = iorp->va;
> > +    buf_ioreq_t bp;
> > +    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> > +    int qw = 0;
> > +
> > +    /* Ensure buffered_iopage fits in a page */
> > +    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> > +
> > +    /*
> > +     * Return 0 for the cases we can't deal with:
> > +     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > +     *  - we cannot buffer accesses to guest memory buffers, as the guest
> > +     *    may expect the memory buffer to be synchronously accessed
> > +     *  - the count field is usually used with data_is_ptr and since we don't
> > +     *    support data_is_ptr we do not waste space for the count field either
> > +     */
> > +    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> > +        return 0;
> > +
> > +    bp.type = p->type;
> > +    bp.dir  = p->dir;
> > +    switch ( p->size )
> > +    {
> > +    case 1:
> > +        bp.size = 0;
> > +        break;
> > +    case 2:
> > +        bp.size = 1;
> > +        break;
> > +    case 4:
> > +        bp.size = 2;
> > +        break;
> > +    case 8:
> > +        bp.size = 3;
> > +        qw = 1;
> > +        break;
> > +    default:
> > +        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> > +        return 0;
> > +    }
> > +
> > +    bp.data = p->data;
> > +    bp.addr = p->addr;
> > +
> > +    spin_lock(&iorp->lock);
> > +
> > +    if ( (pg->write_pointer - pg->read_pointer) >=
> > +         (IOREQ_BUFFER_SLOT_NUM - qw) )
> > +    {
> > +        /* The queue is full: send the iopacket through the normal path. */
> > +        spin_unlock(&iorp->lock);
> > +        return 0;
> > +    }
> > +
> > +    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> > +           &bp, sizeof(bp));
> > +
> > +    if ( qw )
> > +    {
> > +        bp.data = p->data >> 32;
> > +        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> > +               &bp, sizeof(bp));
> > +    }
> > +
> > +    /* Make the ioreq_t visible /before/ write_pointer. */
> > +    wmb();
> > +    pg->write_pointer += qw ? 2 : 1;
> > +
> > +    notify_via_xen_event_channel(v->domain,
> > +            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > +    spin_unlock(&iorp->lock);
> > +
> > +    return 1;
> > +}
> > +
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >  {
> >      ioreq_t *p;
> >
> > @@ -1305,6 +1392,15 @@ bool_t hvm_send_assist_req(struct vcpu *v)
> >          return 0;
> >      }
> >
> > +    p->dir = proto_p->dir;
> > +    p->data_is_ptr = proto_p->data_is_ptr;
> > +    p->type = proto_p->type;
> > +    p->size = proto_p->size;
> > +    p->addr = proto_p->addr;
> > +    p->count = proto_p->count;
> > +    p->df = proto_p->df;
> > +    p->data = proto_p->data;
> > +
> >      prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> >
> >      /*
> > diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> > index bf6309d..576641c 100644
> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -46,85 +46,6 @@
> >  #include <xen/iocap.h>
> >  #include <public/hvm/ioreq.h>
> >
> > -int hvm_buffered_io_send(ioreq_t *p)
> > -{
> > -    struct vcpu *v = current;
> > -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> > -    buffered_iopage_t *pg = iorp->va;
> > -    buf_ioreq_t bp;
> > -    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> > -    int qw = 0;
> > -
> > -    /* Ensure buffered_iopage fits in a page */
> > -    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> > -
> > -    /*
> > -     * Return 0 for the cases we can't deal with:
> > -     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > -     *  - we cannot buffer accesses to guest memory buffers, as the guest
> > -     *    may expect the memory buffer to be synchronously accessed
> > -     *  - the count field is usually used with data_is_ptr and since we don't
> > -     *    support data_is_ptr we do not waste space for the count field either
> > -     */
> > -    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> > -        return 0;
> > -
> > -    bp.type = p->type;
> > -    bp.dir  = p->dir;
> > -    switch ( p->size )
> > -    {
> > -    case 1:
> > -        bp.size = 0;
> > -        break;
> > -    case 2:
> > -        bp.size = 1;
> > -        break;
> > -    case 4:
> > -        bp.size = 2;
> > -        break;
> > -    case 8:
> > -        bp.size = 3;
> > -        qw = 1;
> > -        break;
> > -    default:
> > -        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> > -        return 0;
> > -    }
> > -
> > -    bp.data = p->data;
> > -    bp.addr = p->addr;
> > -
> > -    spin_lock(&iorp->lock);
> > -
> > -    if ( (pg->write_pointer - pg->read_pointer) >=
> > -         (IOREQ_BUFFER_SLOT_NUM - qw) )
> > -    {
> > -        /* The queue is full: send the iopacket through the normal path. */
> > -        spin_unlock(&iorp->lock);
> > -        return 0;
> > -    }
> > -
> > -    memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> > -           &bp, sizeof(bp));
> > -
> > -    if ( qw )
> > -    {
> > -        bp.data = p->data >> 32;
> > -        memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> > -               &bp, sizeof(bp));
> > -    }
> > -
> > -    /* Make the ioreq_t visible /before/ write_pointer. */
> > -    wmb();
> > -    pg->write_pointer += qw ? 2 : 1;
> > -
> > -    notify_via_xen_event_channel(v->domain,
> > -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > -    spin_unlock(&iorp->lock);
> > -
> > -    return 1;
> > -}
> > -
> >  void send_timeoffset_req(unsigned long timeoff)
> >  {
> >      ioreq_t p[1];
> > @@ -150,25 +71,14 @@ void send_timeoffset_req(unsigned long timeoff)
> >  void send_invalidate_req(void)
> >  {
> >      struct vcpu *v = current;
> > -    ioreq_t *p = get_ioreq(v);
> > -
> > -    if ( !p )
> > -        return;
> > -
> > -    if ( p->state != STATE_IOREQ_NONE )
> > -    {
> > -        gdprintk(XENLOG_ERR, "WARNING: send invalidate req with something "
> > -                 "already pending (%d)?\n", p->state);
> > -        domain_crash(v->domain);
> > -        return;
> > -    }
> > +    ioreq_t p[1];
> 
> This can all be reduced to a single item, and even using C structure
> initialisation rather than 4 explicit assignments.
> 
> ~Andrew
> 
> >
> >      p->type = IOREQ_TYPE_INVALIDATE;
> >      p->size = 4;
> >      p->dir = IOREQ_WRITE;
> >      p->data = ~0UL; /* flush all */
> >
> > -    (void)hvm_send_assist_req(v);
> > +    (void)hvm_send_assist_req(v, p);
> >  }
> >
> >  int handle_mmio(void)
> > diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> > index ccca5df..4e8fee8 100644
> > --- a/xen/include/asm-x86/hvm/hvm.h
> > +++ b/xen/include/asm-x86/hvm/hvm.h
> > @@ -26,6 +26,7 @@
> >  #include <asm/hvm/asid.h>
> >  #include <public/domctl.h>
> >  #include <public/hvm/save.h>
> > +#include <public/hvm/ioreq.h>
> >  #include <asm/mm.h>
> >
> >  /* Interrupt acknowledgement sources. */
> > @@ -223,7 +224,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
> >                              struct page_info **_page, void **_va);
> >  void destroy_ring_for_helper(void **_va, struct page_info *page);
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v);
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
> >
> >  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
> >  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> > diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
> > index 3529499..b6af3c5 100644
> > --- a/xen/include/asm-x86/hvm/support.h
> > +++ b/xen/include/asm-x86/hvm/support.h
> > @@ -22,19 +22,10 @@
> >  #define __ASM_X86_HVM_SUPPORT_H__
> >
> >  #include <xen/types.h>
> > -#include <public/hvm/ioreq.h>
> >  #include <xen/sched.h>
> >  #include <xen/hvm/save.h>
> >  #include <asm/processor.h>
> >
> > -static inline ioreq_t *get_ioreq(struct vcpu *v)
> > -{
> > -    struct domain *d = v->domain;
> > -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> > -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > -}
> > -
> >  #define HVM_DELIVER_NO_ERROR_CODE  -1
> >
> >  #ifndef NDEBUG
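
[Archive note] For readers tracing the ring arithmetic in hvm_buffered_io_send() above: the read and write pointers are free-running counters, reduced modulo the slot count only when indexing the array, and a 64-bit ("quadword") request consumes two consecutive slots. A minimal Python sketch of that logic follows; the class name is illustrative and the slot count is assumed here, not taken from the Xen headers.

```python
IOREQ_BUFFER_SLOT_NUM = 511  # slot count assumed for illustration


class BufferedRing:
    """Sketch of the buffered-ioreq ring: free-running read/write
    pointers, reduced modulo the slot count only on array access."""

    def __init__(self):
        self.read_pointer = 0
        self.write_pointer = 0
        self.slots = [None] * IOREQ_BUFFER_SLOT_NUM

    def send(self, data, qw=0):
        # A quadword request needs two consecutive slots, hence the
        # occupancy check leaves room for `qw` extra entries.
        if (self.write_pointer - self.read_pointer) >= (IOREQ_BUFFER_SLOT_NUM - qw):
            return 0  # queue full: caller falls back to the synchronous path
        self.slots[self.write_pointer % IOREQ_BUFFER_SLOT_NUM] = data & 0xffffffff
        if qw:
            self.slots[(self.write_pointer + 1) % IOREQ_BUFFER_SLOT_NUM] = data >> 32
        # In the hypervisor a write barrier and an event-channel notify
        # sit here, before the pointer becomes visible to the consumer.
        self.write_pointer += 2 if qw else 1
        return 1
```

When the occupancy check fails the function returns 0 and the request takes the normal synchronous ioreq path, matching the "queue is full" branch in the patch.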


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8slF-0003DN-On; Thu, 30 Jan 2014 14:38:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1W8slE-00038E-K4
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:38:32 +0000
Received: from [85.158.143.35:19430] by server-3.bemta-4.messagelabs.com id
	5E/8B-11539-6E36AE25; Thu, 30 Jan 2014 14:38:30 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391092709!1962752!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5509 invoked from network); 30 Jan 2014 14:38:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:38:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96143744"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:38:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:38:27 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1W8sl9-0006Y3-Pw;
	Thu, 30 Jan 2014 14:38:27 +0000
Message-ID: <52EA63E2.9020603@eu.citrix.com>
Date: Thu, 30 Jan 2014 14:38:26 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Wei Liu
	<wei.liu2@citrix.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<20140129183349.GA14312@phenom.dumpdata.com>
In-Reply-To: <20140129183349.GA14312@phenom.dumpdata.com>
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/29/2014 06:33 PM, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 27, 2014 at 05:53:38PM +0000, Wei Liu wrote:
>> The original code is wrong because:
>> * claim mode wants to know the total number of pages needed while
>>    original code provides the additional number of pages needed.
>> * if pod is enabled memory will already be allocated by the time we try
>>    to claim memory.
>>
>> So the fix would be:
>> * move claim mode before actual memory allocation.
>> * pass the right number of pages to hypervisor.
>>
>> The "right number of pages" should be number of pages of target memory
>> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
>>
>> This fixes bug #32.
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Cc: Konrad Wilk <konrad.wilk@oracle.com>
>
> And also 'Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'

Right, well nobody has shouted, so:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> Thank you!
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> ---
>> WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
>> mode is completely broken. If this patch is deemed too complicated, we
>> should flip the switch to disable claim mode by default for 4.4.
>> ---
>>   tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
>>   1 file changed, 23 insertions(+), 13 deletions(-)
>>
>> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
>> index 77bd365..dd3b522 100644
>> --- a/tools/libxc/xc_hvm_build_x86.c
>> +++ b/tools/libxc/xc_hvm_build_x86.c
>> @@ -49,6 +49,8 @@
>>   #define NR_SPECIAL_PAGES     8
>>   #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>>
>> +#define VGA_HOLE_SIZE (0x20)
>> +
>>   static int modules_init(struct xc_hvm_build_args *args,
>>                           uint64_t vend, struct elf_binary *elf,
>>                           uint64_t *mstart_out, uint64_t *mend_out)
>> @@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
>>       for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>>           page_array[i] += mmio_size >> PAGE_SHIFT;
>>
>> +    /*
>> +     * Try to claim pages for early warning of insufficient memory available.
>> +     * This should go before xc_domain_set_pod_target, because that function
>> +     * actually allocates memory for the guest. Claiming after memory has been
>> +     * allocated is pointless.
>> +     */
>> +    if ( claim_enabled ) {
>> +        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
>> +        if ( rc != 0 )
>> +        {
>> +            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
>> +            goto error_out;
>> +        }
>> +    }
>> +
>>       if ( pod_mode )
>>       {
>>           /*
>> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
>> -         * adjust the PoD cache size so that domain tot_pages will be
>> -         * target_pages - 0x20 after this call.
>> +         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
>> +         * "hole".  Xen will adjust the PoD cache size so that domain
>> +         * tot_pages will be target_pages - VGA_HOLE_SIZE after
>> +         * this call.
>>            */
>> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
>> +        rc = xc_domain_set_pod_target(xch, dom,
>> +                                      target_pages - VGA_HOLE_SIZE,
>>                                         NULL, NULL, NULL);
>>           if ( rc != 0 )
>>           {
>> @@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
>>       cur_pages = 0xc0;
>>       stat_normal_pages = 0xc0;
>>
>> -    /* try to claim pages for early warning of insufficient memory available */
>> -    if ( claim_enabled ) {
>> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
>> -        if ( rc != 0 )
>> -        {
>> -            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
>> -            goto error_out;
>> -        }
>> -    }
>>       while ( (rc == 0) && (nr_pages > cur_pages) )
>>       {
>>           /* Clip count to maximum 1GB extent. */
>> --
>> 1.7.10.4
>>
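
[Archive note] The computation the patch changes can be sketched in a few lines (constant and helper names are illustrative; only the arithmetic mirrors the patch). The old code claimed only the pages still to be populated, measured after PoD may already have allocated everything; the fix claims the full target minus the VGA hole, before any allocation.

```python
VGA_HOLE_SIZE = 0x20  # pages; mirrors the constant the patch introduces


def pages_to_claim(target_pages):
    """Fixed computation: claim the total target memory minus the VGA
    hole, before any (PoD or normal) allocation has happened."""
    return target_pages - VGA_HOLE_SIZE


def old_pages_to_claim(nr_pages, cur_pages):
    # Buggy original: only the *additional* pages still to be populated,
    # computed after PoD may already have allocated the whole target.
    return nr_pages - cur_pages
```

For a PoD guest, by the time the old claim ran, cur_pages could already cover the PoD cache, so the claimed amount no longer reflected the guest's real memory footprint.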


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8slj-0003O5-6Q; Thu, 30 Jan 2014 14:39:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8slh-0003Nk-N5
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:39:01 +0000
Received: from [85.158.143.35:8641] by server-1.bemta-4.messagelabs.com id
	E2/24-31661-5046AE25; Thu, 30 Jan 2014 14:39:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391092739!1962928!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10545 invoked from network); 30 Jan 2014 14:39:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:39:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98111143"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:38:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:38:58 -0500
Message-ID: <1391092737.5650.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Thu, 30 Jan 2014 14:38:57 +0000
In-Reply-To: <alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
	<alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8slj-0003O5-6Q; Thu, 30 Jan 2014 14:39:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8slh-0003Nk-N5
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:39:01 +0000
Received: from [85.158.143.35:8641] by server-1.bemta-4.messagelabs.com id
	E2/24-31661-5046AE25; Thu, 30 Jan 2014 14:39:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391092739!1962928!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10545 invoked from network); 30 Jan 2014 14:39:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:39:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98111143"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:38:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:38:58 -0500
Message-ID: <1391092737.5650.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Thu, 30 Jan 2014 14:38:57 +0000
In-Reply-To: <alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
	<alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 14:32 +0000, M A Young wrote:
> On Thu, 30 Jan 2014, Ian Campbell wrote:
> 
> > On Thu, 2014-01-30 at 13:02 +0000, Joby Poriyath wrote:
> >> On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
> >>> On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
> >>>>>> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
> >>>>>>                  continue
> >>>>>>
> >>>>>>              # new image
> >>>>>> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> >>>>>> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
> >>>>>
> >>>>> Why is this necessary? fedora-19 also has the aforementioned "--class
> >>>>> red, --class gnu" yet is parsed happily.
> >>>>
> >>>> A menuentry from RHEL 7 looks like this...
> >>>>
> >>>> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163'
> >>>>
> >>>> So we need a 'lazy' match with '.*?'.
> >>>
> >>> ".*" already matches zero or more characters, so I'm not sure what ".*?"
> >>> means in addition to that, do you have a reference?
> >>
> >> http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy
> >
> > Thanks, pure punctuation is a bit tricky for a search engine...
> >
> >>> Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> >>> the name itself, although you might have to split into handling " and '
> >>> separately to be more correct
> >
> > Any thoughts on this?
> 
> I went for ["\']([^"\']*)["\'] in a patch I added to Fedora pygrub in May 
> last year. That seems to work fine for recent Fedora versions which will 
> be somewhat similar to RHEL 7.
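
For reference, a quick sketch (not pygrub's actual code; the title is
shortened and the line is a hypothetical stand-in for the RHEL 7 entry
quoted above) of how the three patterns under discussion behave:

```python
import re

# A RHEL 7-style menuentry line, with the title shortened for the example.
l = ("menuentry 'Red Hat Enterprise Linux' --class red --class gnu "
     "$menuentry_id_option 'gnulinux-0-rescue' {")

# Greedy: (.*) runs to the LAST quote on the line, so the captured "title"
# swallows the --class options and the $menuentry_id_option value.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', l)

# Non-greedy: (.*?) stops at the FIRST closing quote.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', l)

# Character class: forbid quote characters inside the title, as suggested.
strict = re.match(r'^menuentry ["\']([^"\']*)["\'] (.*){', l)

print(greedy.group(1))  # the wrong, over-long capture
print(lazy.group(1))    # Red Hat Enterprise Linux
print(strict.group(1))  # Red Hat Enterprise Linux
```

Both the non-greedy and the character-class variants recover the intended
title; the character class additionally disallows quote characters inside
the captured name.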

Oops, sorry, did I manage to drop that patch on the ground?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:41:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8snc-0003ed-Qq; Thu, 30 Jan 2014 14:41:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8snb-0003eQ-D5
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:40:59 +0000
Received: from [85.158.139.211:49126] by server-6.bemta-5.messagelabs.com id
	52/94-14342-A746AE25; Thu, 30 Jan 2014 14:40:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391092855!626582!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10838 invoked from network); 30 Jan 2014 14:40:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:40:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98111832"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:40:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:40:50 -0500
Message-ID: <1391092849.5650.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Giovanni Bellac <giovannib1979@ymail.com>
Date: Thu, 30 Jan 2014 14:40:49 +0000
In-Reply-To: <1391079941.30719.YahooMailNeo@web171502.mail.ir2.yahoo.com>
References: <1390489176.13566.YahooMailNeo@web171502.mail.ir2.yahoo.com>
	<1390490156.24595.101.camel@kazak.uk.xensource.com>
	<1390729101.16602.YahooMailNeo@web171506.mail.ir2.yahoo.com>
	<1390816692.9890.15.camel@kazak.uk.xensource.com>
	<1390910385.49046.YahooMailNeo@web171506.mail.ir2.yahoo.com>
	<1390910704.7753.73.camel@kazak.uk.xensource.com>
	<1391079941.30719.YahooMailNeo@web171502.mail.ir2.yahoo.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN 4.3 sxp options ip, netmask,
 gateway no more available ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:05 +0000, Giovanni Bellac wrote:
> Hello Ian,
> 
> yes, this is from a console output of the domU, when starting with
> "ip", "netmask" and "gateway" parameters in the sxp file.
> 
> Here is a complete "dmesg" output of a domU running on XEN 4.0.
> 
> The "Command line" line is interesting:
> 
> [    0.000000] Command line: root=/dev/xvda1 ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXX:eth0:off 

Hrm, so it seems that xend actually contains code to take various
generic-looking networking settings and fabricate a Linux-specific
command line out of them.
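
That ip= string is the kernel's colon-separated format; a minimal sketch
of how the quoted value decomposes (field names taken from the kernel's
nfsroot documentation, not from the xend code):

```python
# The ip= value from the dmesg above, split on the kernel's documented
# field order: client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf
fields = ["client-ip", "server-ip", "gw-ip", "netmask",
          "hostname", "device", "autoconf"]
value = ("213.160.XX.XX:127.0.255.255:213.160.XX.1:"
         "255.255.255.0:vsrv70428.XXXX:eth0:off")
parsed = dict(zip(fields, value.split(":")))

print(parsed["server-ip"])  # the "bootserver" in the IP-Config output
print(parsed["device"])     # eth0
```

This matches the IP-Config lines later in the log (addr, mask, gw,
host, bootserver).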

If I'm honest I think this sort of Linux-specific behaviour[0] was a
misfeature of xend and I'm rather reluctant to carry such behaviour over
into xl-land. (Copying xen-devel so that folks on both -users and -devel
have a chance to comment on that.)

You could add the necessary bits to the Linux command line using the
"extra" option manually, which would work with both xend and xl. Is that
an acceptable compromise for you?
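
For instance (a sketch only, reusing the ip= string from the quoted
dmesg; adjust to taste):

```
# domU config: pass the ip= string to the kernel by hand via "extra"
extra = "ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXX:eth0:off"
```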

Ian.

[0] Not just Linux-specific but AFAIAA dependent on the kernel .config
having nfsroot support enabled. I think there have even been mutterings
on lkml about one day making this functionality be provided by the
initramfs and not by the kernel directly (see klibc etc.).

> 
> Kind regards
> Giovanni
> 
> 
> 
> 
> 
> 
> root@vsrv70428:~# cat dmesg.txt 
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.0.101 (root@backer64) (gcc version
> 4.4.5 (Debian 4.4.5-8) ) #2 SMP Mon Dec 30 13:15:49 CET 2013
> [    0.000000] Command line: root=/dev/xvda1
> ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXX:eth0:off 
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] released 0 pages of unused memory
> [    0.000000] Set 0 page(s) to 1-1 mapping.
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000200800000 (usable)
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] e820 update range: 0000000000000000 - 0000000000010000
> (usable) ==> (reserved)
> [    0.000000] e820 remove range: 00000000000a0000 - 0000000000100000
> (usable)
> [    0.000000] No AGP bridge found
> [    0.000000] last_pfn = 0x200800 max_arch_pfn = 0x400000000
> [    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
> [    0.000000] initial memory mapped : 0 - 02858000
> [    0.000000] Base memory trampoline at [ffff88000009b000] 9b000 size
> 20480
> [    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
> [    0.000000]  0000000000 - 0100000000 page 4k
> [    0.000000] kernel direct mapping tables up to 0xffffffff @ [mem
> 0x007fb000-0x00ffffff]
> [    0.000000] xen: setting RW the range fdc000 - 1000000
> [    0.000000] init_memory_mapping: 0000000100000000-0000000200800000
> [    0.000000]  0100000000 - 0200800000 page 4k
> [    0.000000] kernel direct mapping tables up to 0x2007fffff @ [mem
> 0xff7f6000-0xffffffff]
> [    0.000000] xen: setting RW the range fffff000 - 100000000
> [    0.000000] NUMA turned off
> [    0.000000] Faking a node at 0000000000000000-0000000200800000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000200800000
> [    0.000000]   NODE_DATA [00000001ffffb000 - 00000001ffffffff]
> [    0.000000] Zone PFN ranges:
> [    0.000000]   DMA      0x00000010 -> 0x00001000
> [    0.000000]   DMA32    0x00001000 -> 0x00100000
> [    0.000000]   Normal   0x00100000 -> 0x00200800
> [    0.000000] Movable zone start PFN for each node
> [    0.000000] early_node_map[2] active PFN ranges
> [    0.000000]     0: 0x00000010 -> 0x000000a0
> [    0.000000]     0: 0x00000100 -> 0x00200800
> [    0.000000] On node 0 totalpages: 2099088
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 2022 pages reserved
> [    0.000000]   DMA zone: 1906 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 14280 pages used for memmap
> [    0.000000]   DMA32 zone: 1030200 pages, LIFO batch:31
> [    0.000000]   Normal zone: 14364 pages used for memmap
> [    0.000000]   Normal zone: 1036260 pages, LIFO batch:31
> [    0.000000] SMP: Allowing 4 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] nr_irqs_gsi: 16
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 -
> 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address
> range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers
> may break!
> [    0.000000] Allocating PCI resources starting at 200900000 (gap:
> 200900000:400000)
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.0.3 (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64
> nr_cpu_ids:4 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 27 pages/cpu @ffff8801ffc00000 s79552
> r8192 d22848 u524288
> [    0.000000] pcpu-alloc: s79552 r8192 d22848 u524288 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 0 1 2 3 
> [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.
> Total pages: 2068366
> [    0.000000] Policy zone: Normal
> [    0.000000] Kernel command line: root=/dev/xvda1
> ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXXX:eth0:off 
> [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [    0.000000] Checking aperture...
> [    0.000000] No AGP bridge found
> [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
> [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA -
> bailing!
> [    0.000000] Memory: 8148136k/8396800k available (13024k kernel
> code, 448k absent, 248216k reserved, 8399k data, 808k init)
> [    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
> CPUs=4, Nodes=1
> [    0.000000] Hierarchical RCU implementation.
> [    0.000000] NR_IRQS:4352 nr_irqs:304 16
> [    0.000000] Console: colour dummy device 80x25
> [    0.000000] console [tty0] enabled
> [    0.000000] console [hvc0] enabled
> [    0.000000] Xen: using vcpuop timer interface
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] Detected 2400.154 MHz processor.
> [    0.000999] Calibrating delay loop (skipped), value calculated
> using timer frequency.. 4800.30 BogoMIPS (lpj=2400154)
> [    0.000999] pid_max: default: 32768 minimum: 301
> [    0.000999] Security Framework initialized
> [    0.000999] SELinux:  Initializing.
> [    0.000999] SELinux:  Starting in permissive mode
> [    0.001915] Dentry cache hash table entries: 1048576 (order: 11,
> 8388608 bytes)
> [    0.004217] Inode-cache hash table entries: 524288 (order: 10,
> 4194304 bytes)
> [    0.004919] Mount-cache hash table entries: 256
> [    0.005088] Initializing cgroup subsys cpuacct
> [    0.005098] Initializing cgroup subsys devices
> [    0.005103] Initializing cgroup subsys freezer
> [    0.005107] Initializing cgroup subsys net_cls
> [    0.005111] Initializing cgroup subsys blkio
> [    0.005180] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> [    0.005181] ENERGY_PERF_BIAS: View and update with
> x86_energy_perf_policy(8)
> [    0.005202] CPU: Physical Processor ID: 0
> [    0.005206] CPU: Processor Core ID: 0
> [    0.005311] SMP alternatives: switching to UP code
> [    0.076491] Performance Events: unsupported p6 CPU model 44 no PMU
> driver, software events only.
> [    0.076649] installing Xen timer for CPU 1
> [    0.076679] SMP alternatives: switching to SMP code
> [    0.145292] installing Xen timer for CPU 2
> [    0.145452] installing Xen timer for CPU 3
> [    0.145559] Brought up 4 CPUs
> [    0.145605] devtmpfs: initialized
> [    0.145605] xor: automatically using best checksumming function:
> generic_sse
> [    0.150600]    generic_sse:  2656.000 MB/sec
> [    0.150606] xor: using function: generic_sse (2656.000 MB/sec)
> [    0.150623] Grant table initialized
> [    0.170276] Time: 165:165:165  Date: 165/165/65
> [    0.170321] NET: Registered protocol family 16
> [    0.171432] PCI: setting up Xen PCI frontend stub
> [    0.171439] PCI: pci_cache_line_size set to 64 bytes
> [    0.190008] bio: create slab <bio-0> at 0
> [    0.206499] raid6: int64x1   2261 MB/s
> [    0.223529] raid6: int64x2   2910 MB/s
> [    0.240547] raid6: int64x4   1964 MB/s
> [    0.257585] raid6: int64x8   2039 MB/s
> [    0.274653] raid6: sse2x1    5902 MB/s
> [    0.291698] raid6: sse2x2    6894 MB/s
> [    0.308712] raid6: sse2x4    7335 MB/s
> [    0.308717] raid6: using algorithm sse2x4 (7335 MB/s)
> [    0.308977] ACPI: Interpreter disabled.
> [    0.308986] xen/balloon: Initialising balloon driver.
> [    0.308986] last_pfn = 0x200800 max_arch_pfn = 0x400000000
> [    0.310966] xen-balloon: Initialising balloon driver.
> [    0.310989] vgaarb: loaded
> [    0.311023] SCSI subsystem initialized
> [    0.311023] libata version 3.00 loaded.
> [    0.311023] usbcore: registered new interface driver usbfs
> [    0.311023] usbcore: registered new interface driver hub
> [    0.311968] usbcore: registered new device driver usb
> [    0.311991] Advanced Linux Sound Architecture Driver Version
> 1.0.24.
> [    0.311991] PCI: System does not support PCI
> [    0.311991] PCI: System does not support PCI
> [    0.312004] cfg80211: Calling CRDA to update world regulatory
> domain
> [    0.312124] NetLabel: Initializing
> [    0.312129] NetLabel:  domain hash size = 128
> [    0.312133] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.312145] NetLabel:  unlabeled traffic allowed by default
> [    0.312159] Switching to clocksource xen
> [    0.312159] Switched to NOHz mode on CPU #0
> [    0.312224] Switched to NOHz mode on CPU #1
> [    0.312296] Switched to NOHz mode on CPU #2
> [    0.312308] Switched to NOHz mode on CPU #3
> [    0.316881] pnp: PnP ACPI: disabled
> [    0.321899] PCI: max bus depth: 0 pci_try_num: 1
> [    0.321921] NET: Registered protocol family 2
> [    0.322445] IP route cache hash table entries: 262144 (order: 9,
> 2097152 bytes)
> [    0.324691] TCP established hash table entries: 524288 (order: 11,
> 8388608 bytes)
> [    0.326244] TCP bind hash table entries: 65536 (order: 8, 1048576
> bytes)
> [    0.326397] TCP: Hash tables configured (established 524288 bind
> 65536)
> [    0.326403] TCP reno registered
> [    0.326442] UDP hash table entries: 4096 (order: 5, 131072 bytes)
> [    0.326500] UDP-Lite hash table entries: 4096 (order: 5, 131072
> bytes)
> [    0.326599] NET: Registered protocol family 1
> [    0.326701] RPC: Registered named UNIX socket transport module.
> [    0.326707] RPC: Registered udp transport module.
> [    0.326711] RPC: Registered tcp transport module.
> [    0.326714] RPC: Registered tcp NFSv4.1 backchannel transport
> module.
> [    0.326721] PCI: CLS 0 bytes, default 64
> [    0.326760] PCI-DMA: Using software bounce buffering for IO
> (SWIOTLB)
> [    0.326767] Placing 64MB software IO TLB between ffff8800fb7f6000 -
> ffff8800ff7f6000
> [    0.326773] software IO TLB at phys 0xfb7f6000 - 0xff7f6000
> [    0.326872] platform rtc_cmos: registered platform RTC device (no
> PNP device found)
> [    0.328713] microcode: CPU0 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328732] microcode: CPU1 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328750] microcode: CPU2 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328765] microcode: CPU3 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328839] microcode: Microcode Update Driver: v2.00
> <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> [    0.329156] audit: initializing netlink socket (disabled)
> [    0.329168] type=2000 audit(1390300158.090:1): initialized
> [    0.357317] HugeTLB registered 2 MB page size, pre-allocated 0
> pages
> [    0.361374] VFS: Disk quotas dquot_6.5.2
> [    0.361461] Dquot-cache hash table entries: 512 (order 0, 4096
> bytes)
> [    0.361812] DLM installed
> [    0.362991] NTFS driver 2.1.30 [Flags: R/W].
> [    0.363268] fuse init (API version 7.16)
> [    0.363757] JFS: nTxBlock = 8192, nTxLock = 65536
> [    0.366079] SGI XFS with ACLs, security attributes, realtime, large
> block/inode numbers, no debug enabled
> [    0.392866] SGI XFS Quota Management subsystem
> [    0.392877] OCFS2 1.5.0
> [    0.393101] ocfs2: Registered cluster interface o2cb
> [    0.393168] ocfs2: Registered cluster interface user
> [    0.393174] OCFS2 DLMFS 1.5.0
> [    0.393279] OCFS2 User DLM kernel interface loaded
> [    0.393285] OCFS2 Node Manager 1.5.0
> [    0.393400] OCFS2 DLM 1.5.0
> [    0.393857] GFS2 installed
> [    0.393867] msgmni has been set to 15914
> [    0.393976] SELinux:  Registering netfilter hooks
> [    0.394566] Block layer SCSI generic (bsg) driver version 0.4
> loaded (major 253)
> [    0.394607] io scheduler noop registered
> [    0.394612] io scheduler deadline registered
> [    0.394694] io scheduler cfq registered (default)
> [    0.394898] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.395180] Event-channel device installed.
> [    0.395842] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> [    0.457386] Non-volatile memory driver v1.3
> [    0.457392] Linux agpgart interface v0.103
> [    0.457583] [drm] Initialized drm 1.1.0 20060810
> [    0.457590] [drm:i915_init] *ERROR* drm/i915 can't work without
> intel_agp module!
> [    0.459632] brd: module loaded
> [    0.460674] loop: module loaded
> [    0.460814] nbd: registered device at major 43
> [    0.465569] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96)
> [    0.465577] drbd: built-in
> [    0.465580] drbd: registered as block device major 147
> [    0.465584] drbd: minor_table @ 0xffff8801f6c91100
> [    0.465848] Loading iSCSI transport class v2.0-870.
> [    0.466415] st: Version 20101219, fixed bufsize 32768, s/g segs 256
> [    0.466731] SCSI Media Changer driver v0.25 
> [    0.467322] e1000: Intel(R) PRO/1000 Network Driver - version
> 7.3.21-k8-NAPI
> [    0.467329] e1000: Copyright (c) 1999-2006 Intel Corporation.
> [    0.467391] e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10-k2
> [    0.467397] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
> [    0.467460] Intel(R) Gigabit Ethernet Network Driver - version
> 3.0.6-k2
> [    0.467466] Copyright (c) 2007-2011 Intel Corporation.
> [    0.467519] Intel(R) Virtual Function Network Driver - version
> 1.0.8-k0
> [    0.467524] Copyright (c) 2009 - 2010 Intel Corporation.
> [    0.467589] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver -
> version 3.3.8-k2
> [    0.467595] ixgbe: Copyright (c) 1999-2011 Intel Corporation.
> [    0.467655] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual
> Function Network Driver - version 2.0.0-k2
> [    0.467663] Copyright (c) 2009 - 2010 Intel Corporation.
> [    0.467715] ixgb: Intel(R) PRO/10GbE Network Driver - version
> 1.0.135-k2-NAPI
> [    0.467721] ixgb: Copyright (c) 1999-2008 Intel Corporation.
> [    0.468044] bonding: Ethernet Channel Bonding Driver: v3.7.1 (April
> 27, 2011)
> [    0.468051] bonding: Warning: either miimon or arp_interval and
> arp_ip_target module parameters must be specified, otherwise bonding
> will not detect link failures! see bonding.txt for details.
> [    0.468640] blkfront: xvda1: barrier or flush: disabled
> [    0.469969] Atheros(R) L2 Ethernet Driver - version 2.2.3
> [    0.469975] Copyright (c) 2007 Atheros Corporation.
> [    0.470037] tehuti: Tehuti Networks(R) Network Driver, 7.29.3
> [    0.470042] tehuti: Options: hw_csum 
> [    0.470093] enic: Cisco VIC Ethernet NIC Driver, ver 2.1.1.13
> [    0.470147] jme: JMicron JMC2XX ethernet driver version 1.0.8
> [    0.470250] VMware vmxnet3 virtual NIC driver - version
> 1.1.29.0-k-NAPI
> [    0.470301] Brocade 10G Ethernet driver
> [    0.470655] pcnet32: pcnet32.c:v1.35 21.Apr.2008
> tsbogend@alpha.franken.de
> [    0.470707] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
> [    0.470712] e100: Copyright(c) 1999-2006 Intel Corporation
> [    0.470765] tlan: ThunderLAN driver v1.17
> [    0.470815] tlan: 0 devices installed, PCI: 0  EISA: 0
> [    0.471215] ns83820.c: National Semiconductor DP83820 10/100/1000
> driver.
> [    0.592776] cnic: Broadcom NetXtreme II CNIC Driver cnic v2.2.14
> (Mar 30, 2011)
> [    0.592817] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver
> bnx2x 1.62.12-0 (2011/03/20)
> [    0.592950] sky2: driver version 1.28
> [    0.593655] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver
> v5.0.18
> [    0.593787] PPP generic driver version 2.4.2
> [    0.593904] PPP Deflate Compression module registered
> [    0.593910] PPP BSD Compression module registered
> [    0.593915] Initialising Xen virtual ethernet driver.
> [    0.595294] eql: Equalizer2002: Simon Janes (simon@ncm.com) and
> David S. Miller (davem@redhat.com)
> [    0.595581] tun: Universal TUN/TAP device driver, 1.6
> [    0.595588] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.595868] vxge: Copyright(c) 2002-2010 Exar Corp.
> [    0.595874] vxge: Driver version: 2.5.3.22640-k
> [    0.595941] myri10ge: Version 1.5.2-1.459
> [    0.596173] console [netcon0] enabled
> [    0.596178] netconsole: network logging started
> [    0.596182] QLogic/NetXen Network Driver v4.0.75
> [    0.596298] Solarflare NET driver v3.1
> [    0.596651] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI)
> Driver
> [    0.596658] ehci_hcd: block sizes: qh 112 qtd 96 itd 192 sitd 96
> [    0.596721] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.596728] ohci_hcd: block sizes: ed 80 td 96
> [    0.596787] uhci_hcd: USB Universal Host Controller Interface
> driver
> [    0.596938] usbcore: registered new interface driver usblp
> [    0.596947] Initializing USB Mass Storage driver...
> [    0.597035] usbcore: registered new interface driver usb-storage
> [    0.597042] USB Mass Storage support registered.
> [    0.597114] usbcore: registered new interface driver libusual
> [    0.597344] i8042: PNP: No PS/2 controller found. Probing ports
> directly.
> [    1.606875] i8042: No controller found
> [    1.606986] mousedev: PS/2 mouse device common for all mice
> [    1.647359] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as
> rtc0
> [    1.647433] rtc_cmos: probe of rtc_cmos failed with error -38
> [    1.647611] md: linear personality registered for level -1
> [    1.647618] md: raid0 personality registered for level 0
> [    1.647622] md: raid1 personality registered for level 1
> [    1.647627] md: raid10 personality registered for level 10
> [    1.647631] md: raid6 personality registered for level 6
> [    1.647634] md: raid5 personality registered for level 5
> [    1.647638] md: raid4 personality registered for level 4
> [    1.647643] md: multipath personality registered for level -4
> [    1.647648] md: faulty personality registered for level -5
> [    1.647885] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02)
> initialised: dm-devel@redhat.com
> [    1.647963] device-mapper: multipath: version 1.3.0 loaded
> [    1.647970] device-mapper: multipath round-robin: version 1.0.0
> loaded
> [    1.647975] device-mapper: multipath queue-length: version 0.1.0
> loaded
> [    1.647979] device-mapper: multipath service-time: version 0.2.0
> loaded
> [    1.648093] cpuidle: using governor ladder
> [    1.648099] cpuidle: using governor menu
> [    1.648102] EFI Variables Facility v0.08 2004-May-17
> [    1.649120] usbcore: registered new interface driver usbhid
> [    1.649127] usbhid: USB HID core driver
> [    1.649608] ALSA device list:
> [    1.649614]   No soundcards found.
> [    1.649648] Netfilter messages via NETLINK v0.30.
> [    1.649670] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> [    1.649965] ctnetlink v0.93: registering with nfnetlink.
> [    1.650163] xt_time: kernel timezone is -0000
> [    1.650171] ip_set: protocol 6
> [    1.650187] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
> [    1.650215] IPVS: Connection hash table configured (size=4096,
> memory=64Kbytes)
> [    1.650281] IPVS: Creating netns size=2056 id=0
> [    1.650287] IPVS: ipvs loaded.
> [    1.650291] IPVS: [rr] scheduler registered.
> [    1.650295] IPVS: [wrr] scheduler registered.
> [    1.650299] IPVS: [lc] scheduler registered.
> [    1.650302] IPVS: [wlc] scheduler registered.
> [    1.650311] IPVS: [lblc] scheduler registered.
> [    1.650321] IPVS: [lblcr] scheduler registered.
> [    1.650325] IPVS: [dh] scheduler registered.
> [    1.650329] IPVS: [sh] scheduler registered.
> [    1.650333] IPVS: [sed] scheduler registered.
> [    1.650337] IPVS: [nq] scheduler registered.
> [    1.650342] IPVS: ftp: loaded support on port[0] = 21
> [    1.650346] IPVS: [sip] pe registered.
> [    1.650569] IPv4 over IPv4 tunneling driver
> [    1.650877] GRE over IPv4 demultiplexor driver
> [    1.650883] GRE over IPv4 tunneling driver
> [    1.651415] ip_tables: (C) 2000-2006 Netfilter Core Team
> [    1.651460] arp_tables: (C) 2002 David S. Miller
> [    1.651474] TCP cubic registered
> [    1.651477] Initializing XFRM netlink socket
> [    1.651760] NET: Registered protocol family 10
> [    1.653337] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [    1.653400] IPv6 over IPv4 tunneling driver
> [    1.654673] NET: Registered protocol family 17
> [    1.654685] NET: Registered protocol family 15
> [    1.654729] Bridge firewalling registered
> [    1.654735] Ebtables v2.0 registered
> [    1.654767] L2TP core driver, V2.0
> [    1.654771] 802.1Q VLAN Support v1.8
> [    1.655121] sctp: Hash tables configured (established 65536 bind
> 65536)
> [    1.655401] Registering the dns_resolver key type
> [    1.655585] PM: Hibernation image not present or could not be
> loaded.
> [    1.655595] registered taskstats version 1
> [    1.655617] XENBUS: Device with no driver: device/console/0
> [    1.655621]   Magic number: 1:252:3141
> [    1.667063] IP-Config: Complete:
> [    1.667072]      device=eth0, addr=213.160.XX.XX,
> mask=255.255.255.0, gw=213.160.XX.XX,
> [    1.667084]      host=vsrv70428, domain=, nis-domain=XXXXX
> [    1.667094]      bootserver=127.0.255.255,
> rootserver=127.0.255.255, rootpath=
> [    1.667162] md: Waiting for all devices to be available before
> autodetect
> [    1.667168] md: If you don't use raid, use raid=noautodetect
> [    1.667362] md: Autodetecting RAID arrays.
> [    1.667367] md: Scanned 0 and added 0 devices.
> [    1.667371] md: autorun ...
> [    1.667373] md: ... autorun DONE.
> [    1.687595] EXT3-fs: barriers not enabled
> [    1.687789] kjournald starting.  Commit interval 5 seconds
> [    1.687956] EXT3-fs (xvda1): using internal journal
> [    1.687967] EXT3-fs (xvda1): mounted filesystem with writeback data
> mode
> [    1.687984] VFS: Mounted root (ext3 filesystem) on device 202:1.
> [    1.694482] devtmpfs: mounted
> [    1.694805] Freeing unused kernel memory: 808k freed
> [    1.694994] Write protecting the kernel read-only data: 20480k
> [    1.702908] Freeing unused kernel memory: 1288k freed
> [    1.703715] Freeing unused kernel memory: 1304k freed
> [    1.999674] udev[1230]: starting version 164
> [    3.042545] Adding 524284k swap on /swap.  Priority:-1 extents:133
> across:541292k SS
> [    4.078247] sshd (1850): /proc/1850/oom_adj is deprecated, please
> use /proc/1850/oom_score_adj instead.
> [   11.970025] eth0: no IPv6 routers present
> 
> Ian Campbell <Ian.Campbell@citrix.com> schrieb am 13:07 Dienstag,
> 28.Januar 2014:
> 
> On Tue, 2014-01-28 at 11:59 +0000, Giovanni Bellac wrote:
> > The "ip", "netmask" and "gateway" options in the domU sxp file are
> > forwarded to the kernel of the domU when it starts:
> > 
> > Output of console when starting the domU with "-c" (console):
> > ...
> > [    1.667063] IP-Config: Complete:
> > [    1.667072]      device=eth0, addr=213.160.XX.XX,
> mask=255.255.255.0, gw=213.160.XX.XX,
> > [    1.667084]      host=vsrv70428, domain=, nis-domain=XXXXX.tld,
> > [    1.667094]      bootserver=127.0.255.255,
> rootserver=127.0.255.255, rootpath=
> > ...
> 
> This is the domU console? I wonder if perhaps this stuff is causing
> things to get added to the guest command line -- can you post a full
> guest dmesg please?
> 
> 
> Ian.
> 
> 
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xen.org
> http://lists.xen.org/xen-users
> 
> 
> 
> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:41:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8snc-0003ed-Qq; Thu, 30 Jan 2014 14:41:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8snb-0003eQ-D5
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:40:59 +0000
Received: from [85.158.139.211:49126] by server-6.bemta-5.messagelabs.com id
	52/94-14342-A746AE25; Thu, 30 Jan 2014 14:40:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391092855!626582!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10838 invoked from network); 30 Jan 2014 14:40:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:40:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98111832"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 14:40:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:40:50 -0500
Message-ID: <1391092849.5650.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Giovanni Bellac <giovannib1979@ymail.com>
Date: Thu, 30 Jan 2014 14:40:49 +0000
In-Reply-To: <1391079941.30719.YahooMailNeo@web171502.mail.ir2.yahoo.com>
References: <1390489176.13566.YahooMailNeo@web171502.mail.ir2.yahoo.com>
	<1390490156.24595.101.camel@kazak.uk.xensource.com>
	<1390729101.16602.YahooMailNeo@web171506.mail.ir2.yahoo.com>
	<1390816692.9890.15.camel@kazak.uk.xensource.com>
	<1390910385.49046.YahooMailNeo@web171506.mail.ir2.yahoo.com>
	<1390910704.7753.73.camel@kazak.uk.xensource.com>
	<1391079941.30719.YahooMailNeo@web171502.mail.ir2.yahoo.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] XEN 4.3 sxp options ip, netmask,
 gateway no more available ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:05 +0000, Giovanni Bellac wrote:
> Hello Ian,
> 
> yes, this is the console output of the domU when it is started with the
> "ip", "netmask" and "gateway" parameters in the sxp file.
> 
> Here is a complete "dmesg" output of a domU running on XEN 4.0.
> 
> The "Command line" line is interesting:
> 
> [    0.000000] Command line: root=/dev/xvda1 ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXX:eth0:off 

Hrm, so it seems that xend actually contains code that takes various
generic-looking networking settings and fabricates a Linux-specific
command line out of them.
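
For illustration only, a minimal sketch of the kind of string xend appears to fabricate here. The `build_ip_param` helper and its defaults are hypothetical; the field order is the kernel's documented nfsroot `ip=` syntax (`ip=<client>:<server>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>`), which matches the "Command line" shown above:

```python
# Hypothetical sketch (not xend's actual code): assemble the Linux "ip="
# kernel parameter from generic per-domain networking settings.
# Field order per Documentation/filesystems/nfs/nfsroot.txt:
#   ip=<client>:<server>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>

def build_ip_param(ip, netmask, gateway, hostname="", device="eth0",
                   server="", autoconf="off"):
    """Join the settings into a single ip= kernel argument."""
    return "ip=" + ":".join([ip, server, gateway, netmask,
                             hostname, device, autoconf])

print(build_ip_param("192.0.2.10", "255.255.255.0", "192.0.2.1",
                     hostname="guest1"))
# ip=192.0.2.10::192.0.2.1:255.255.255.0:guest1:eth0:off
```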

If I'm honest I think this sort of Linux-specific behaviour[0] was a
misfeature of xend, and I'm rather reluctant to carry such behaviour over
into xl-land. (Copying xen-devel so that folks on both -users and -devel
have a chance to comment on that.)

You could add the necessary bits to the Linux command line manually using
the "extra" option, which works with both xend and xl. Is that an
acceptable compromise for you?
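
As a concrete example of that workaround (illustrative values only; the exact `kernel`/`extra` lines depend on the guest), the same `ip=` string can be supplied verbatim in the domain config:

```
# domU config fragment -- hypothetical paths and addresses
kernel = "/boot/vmlinuz-guest"
extra  = "root=/dev/xvda1 ip=192.0.2.10::192.0.2.1:255.255.255.0:guest1:eth0:off"
```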

Ian.

[0] Not just Linux-specific but AFAIAA dependent on the kernel .config
having nfsroot support enabled. I think there have even been mutterings
on lkml about one day making this functionality be provided by the
initramfs rather than by the kernel directly (see klibc etc).

> 
> Kind regards
> Giovanni
> 
> 
> 
> 
> 
> 
> root@vsrv70428:~# cat dmesg.txt 
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.0.101 (root@backer64) (gcc version
> 4.4.5 (Debian 4.4.5-8) ) #2 SMP Mon Dec 30 13:15:49 CET 2013
> [    0.000000] Command line: root=/dev/xvda1
> ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXX:eth0:off 
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] released 0 pages of unused memory
> [    0.000000] Set 0 page(s) to 1-1 mapping.
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000200800000 (usable)
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] e820 update range: 0000000000000000 - 0000000000010000
> (usable) ==> (reserved)
> [    0.000000] e820 remove range: 00000000000a0000 - 0000000000100000
> (usable)
> [    0.000000] No AGP bridge found
> [    0.000000] last_pfn = 0x200800 max_arch_pfn = 0x400000000
> [    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
> [    0.000000] initial memory mapped : 0 - 02858000
> [    0.000000] Base memory trampoline at [ffff88000009b000] 9b000 size
> 20480
> [    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
> [    0.000000]  0000000000 - 0100000000 page 4k
> [    0.000000] kernel direct mapping tables up to 0xffffffff @ [mem
> 0x007fb000-0x00ffffff]
> [    0.000000] xen: setting RW the range fdc000 - 1000000
> [    0.000000] init_memory_mapping: 0000000100000000-0000000200800000
> [    0.000000]  0100000000 - 0200800000 page 4k
> [    0.000000] kernel direct mapping tables up to 0x2007fffff @ [mem
> 0xff7f6000-0xffffffff]
> [    0.000000] xen: setting RW the range fffff000 - 100000000
> [    0.000000] NUMA turned off
> [    0.000000] Faking a node at 0000000000000000-0000000200800000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000200800000
> [    0.000000]   NODE_DATA [00000001ffffb000 - 00000001ffffffff]
> [    0.000000] Zone PFN ranges:
> [    0.000000]   DMA      0x00000010 -> 0x00001000
> [    0.000000]   DMA32    0x00001000 -> 0x00100000
> [    0.000000]   Normal   0x00100000 -> 0x00200800
> [    0.000000] Movable zone start PFN for each node
> [    0.000000] early_node_map[2] active PFN ranges
> [    0.000000]     0: 0x00000010 -> 0x000000a0
> [    0.000000]     0: 0x00000100 -> 0x00200800
> [    0.000000] On node 0 totalpages: 2099088
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 2022 pages reserved
> [    0.000000]   DMA zone: 1906 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 14280 pages used for memmap
> [    0.000000]   DMA32 zone: 1030200 pages, LIFO batch:31
> [    0.000000]   Normal zone: 14364 pages used for memmap
> [    0.000000]   Normal zone: 1036260 pages, LIFO batch:31
> [    0.000000] SMP: Allowing 4 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] nr_irqs_gsi: 16
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 -
> 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address
> range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers
> may break!
> [    0.000000] Allocating PCI resources starting at 200900000 (gap:
> 200900000:400000)
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.0.3 (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64
> nr_cpu_ids:4 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 27 pages/cpu @ffff8801ffc00000 s79552
> r8192 d22848 u524288
> [    0.000000] pcpu-alloc: s79552 r8192 d22848 u524288 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 0 1 2 3 
> [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.
> Total pages: 2068366
> [    0.000000] Policy zone: Normal
> [    0.000000] Kernel command line: root=/dev/xvda1
> ip=213.160.XX.XX:127.0.255.255:213.160.XX.1:255.255.255.0:vsrv70428.XXXXX:eth0:off 
> [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [    0.000000] Checking aperture...
> [    0.000000] No AGP bridge found
> [    0.000000] Calgary: detecting Calgary via BIOS EBDA area
> [    0.000000] Calgary: Unable to locate Rio Grande table in EBDA -
> bailing!
> [    0.000000] Memory: 8148136k/8396800k available (13024k kernel
> code, 448k absent, 248216k reserved, 8399k data, 808k init)
> [    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
> CPUs=4, Nodes=1
> [    0.000000] Hierarchical RCU implementation.
> [    0.000000] NR_IRQS:4352 nr_irqs:304 16
> [    0.000000] Console: colour dummy device 80x25
> [    0.000000] console [tty0] enabled
> [    0.000000] console [hvc0] enabled
> [    0.000000] Xen: using vcpuop timer interface
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] Detected 2400.154 MHz processor.
> [    0.000999] Calibrating delay loop (skipped), value calculated
> using timer frequency.. 4800.30 BogoMIPS (lpj=2400154)
> [    0.000999] pid_max: default: 32768 minimum: 301
> [    0.000999] Security Framework initialized
> [    0.000999] SELinux:  Initializing.
> [    0.000999] SELinux:  Starting in permissive mode
> [    0.001915] Dentry cache hash table entries: 1048576 (order: 11,
> 8388608 bytes)
> [    0.004217] Inode-cache hash table entries: 524288 (order: 10,
> 4194304 bytes)
> [    0.004919] Mount-cache hash table entries: 256
> [    0.005088] Initializing cgroup subsys cpuacct
> [    0.005098] Initializing cgroup subsys devices
> [    0.005103] Initializing cgroup subsys freezer
> [    0.005107] Initializing cgroup subsys net_cls
> [    0.005111] Initializing cgroup subsys blkio
> [    0.005180] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
> [    0.005181] ENERGY_PERF_BIAS: View and update with
> x86_energy_perf_policy(8)
> [    0.005202] CPU: Physical Processor ID: 0
> [    0.005206] CPU: Processor Core ID: 0
> [    0.005311] SMP alternatives: switching to UP code
> [    0.076491] Performance Events: unsupported p6 CPU model 44 no PMU
> driver, software events only.
> [    0.076649] installing Xen timer for CPU 1
> [    0.076679] SMP alternatives: switching to SMP code
> [    0.145292] installing Xen timer for CPU 2
> [    0.145452] installing Xen timer for CPU 3
> [    0.145559] Brought up 4 CPUs
> [    0.145605] devtmpfs: initialized
> [    0.145605] xor: automatically using best checksumming function:
> generic_sse
> [    0.150600]    generic_sse:  2656.000 MB/sec
> [    0.150606] xor: using function: generic_sse (2656.000 MB/sec)
> [    0.150623] Grant table initialized
> [    0.170276] Time: 165:165:165  Date: 165/165/65
> [    0.170321] NET: Registered protocol family 16
> [    0.171432] PCI: setting up Xen PCI frontend stub
> [    0.171439] PCI: pci_cache_line_size set to 64 bytes
> [    0.190008] bio: create slab <bio-0> at 0
> [    0.206499] raid6: int64x1   2261 MB/s
> [    0.223529] raid6: int64x2   2910 MB/s
> [    0.240547] raid6: int64x4   1964 MB/s
> [    0.257585] raid6: int64x8   2039 MB/s
> [    0.274653] raid6: sse2x1    5902 MB/s
> [    0.291698] raid6: sse2x2    6894 MB/s
> [    0.308712] raid6: sse2x4    7335 MB/s
> [    0.308717] raid6: using algorithm sse2x4 (7335 MB/s)
> [    0.308977] ACPI: Interpreter disabled.
> [    0.308986] xen/balloon: Initialising balloon driver.
> [    0.308986] last_pfn = 0x200800 max_arch_pfn = 0x400000000
> [    0.310966] xen-balloon: Initialising balloon driver.
> [    0.310989] vgaarb: loaded
> [    0.311023] SCSI subsystem initialized
> [    0.311023] libata version 3.00 loaded.
> [    0.311023] usbcore: registered new interface driver usbfs
> [    0.311023] usbcore: registered new interface driver hub
> [    0.311968] usbcore: registered new device driver usb
> [    0.311991] Advanced Linux Sound Architecture Driver Version
> 1.0.24.
> [    0.311991] PCI: System does not support PCI
> [    0.311991] PCI: System does not support PCI
> [    0.312004] cfg80211: Calling CRDA to update world regulatory
> domain
> [    0.312124] NetLabel: Initializing
> [    0.312129] NetLabel:  domain hash size = 128
> [    0.312133] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.312145] NetLabel:  unlabeled traffic allowed by default
> [    0.312159] Switching to clocksource xen
> [    0.312159] Switched to NOHz mode on CPU #0
> [    0.312224] Switched to NOHz mode on CPU #1
> [    0.312296] Switched to NOHz mode on CPU #2
> [    0.312308] Switched to NOHz mode on CPU #3
> [    0.316881] pnp: PnP ACPI: disabled
> [    0.321899] PCI: max bus depth: 0 pci_try_num: 1
> [    0.321921] NET: Registered protocol family 2
> [    0.322445] IP route cache hash table entries: 262144 (order: 9,
> 2097152 bytes)
> [    0.324691] TCP established hash table entries: 524288 (order: 11,
> 8388608 bytes)
> [    0.326244] TCP bind hash table entries: 65536 (order: 8, 1048576
> bytes)
> [    0.326397] TCP: Hash tables configured (established 524288 bind
> 65536)
> [    0.326403] TCP reno registered
> [    0.326442] UDP hash table entries: 4096 (order: 5, 131072 bytes)
> [    0.326500] UDP-Lite hash table entries: 4096 (order: 5, 131072
> bytes)
> [    0.326599] NET: Registered protocol family 1
> [    0.326701] RPC: Registered named UNIX socket transport module.
> [    0.326707] RPC: Registered udp transport module.
> [    0.326711] RPC: Registered tcp transport module.
> [    0.326714] RPC: Registered tcp NFSv4.1 backchannel transport
> module.
> [    0.326721] PCI: CLS 0 bytes, default 64
> [    0.326760] PCI-DMA: Using software bounce buffering for IO
> (SWIOTLB)
> [    0.326767] Placing 64MB software IO TLB between ffff8800fb7f6000 -
> ffff8800ff7f6000
> [    0.326773] software IO TLB at phys 0xfb7f6000 - 0xff7f6000
> [    0.326872] platform rtc_cmos: registered platform RTC device (no
> PNP device found)
> [    0.328713] microcode: CPU0 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328732] microcode: CPU1 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328750] microcode: CPU2 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328765] microcode: CPU3 sig=0x206c2, pf=0x1, revision=0x13
> [    0.328839] microcode: Microcode Update Driver: v2.00
> <tigran@aivazian.fsnet.co.uk>, Peter Oruba
> [    0.329156] audit: initializing netlink socket (disabled)
> [    0.329168] type=2000 audit(1390300158.090:1): initialized
> [    0.357317] HugeTLB registered 2 MB page size, pre-allocated 0
> pages
> [    0.361374] VFS: Disk quotas dquot_6.5.2
> [    0.361461] Dquot-cache hash table entries: 512 (order 0, 4096
> bytes)
> [    0.361812] DLM installed
> [    0.362991] NTFS driver 2.1.30 [Flags: R/W].
> [    0.363268] fuse init (API version 7.16)
> [    0.363757] JFS: nTxBlock = 8192, nTxLock = 65536
> [    0.366079] SGI XFS with ACLs, security attributes, realtime, large
> block/inode numbers, no debug enabled
> [    0.392866] SGI XFS Quota Management subsystem
> [    0.392877] OCFS2 1.5.0
> [    0.393101] ocfs2: Registered cluster interface o2cb
> [    0.393168] ocfs2: Registered cluster interface user
> [    0.393174] OCFS2 DLMFS 1.5.0
> [    0.393279] OCFS2 User DLM kernel interface loaded
> [    0.393285] OCFS2 Node Manager 1.5.0
> [    0.393400] OCFS2 DLM 1.5.0
> [    0.393857] GFS2 installed
> [    0.393867] msgmni has been set to 15914
> [    0.393976] SELinux:  Registering netfilter hooks
> [    0.394566] Block layer SCSI generic (bsg) driver version 0.4
> loaded (major 253)
> [    0.394607] io scheduler noop registered
> [    0.394612] io scheduler deadline registered
> [    0.394694] io scheduler cfq registered (default)
> [    0.394898] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.395180] Event-channel device installed.
> [    0.395842] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
> [    0.457386] Non-volatile memory driver v1.3
> [    0.457392] Linux agpgart interface v0.103
> [    0.457583] [drm] Initialized drm 1.1.0 20060810
> [    0.457590] [drm:i915_init] *ERROR* drm/i915 can't work without
> intel_agp module!
> [    0.459632] brd: module loaded
> [    0.460674] loop: module loaded
> [    0.460814] nbd: registered device at major 43
> [    0.465569] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96)
> [    0.465577] drbd: built-in
> [    0.465580] drbd: registered as block device major 147
> [    0.465584] drbd: minor_table @ 0xffff8801f6c91100
> [    0.465848] Loading iSCSI transport class v2.0-870.
> [    0.466415] st: Version 20101219, fixed bufsize 32768, s/g segs 256
> [    0.466731] SCSI Media Changer driver v0.25 
> [    0.467322] e1000: Intel(R) PRO/1000 Network Driver - version
> 7.3.21-k8-NAPI
> [    0.467329] e1000: Copyright (c) 1999-2006 Intel Corporation.
> [    0.467391] e1000e: Intel(R) PRO/1000 Network Driver - 1.3.10-k2
> [    0.467397] e1000e: Copyright(c) 1999 - 2011 Intel Corporation.
> [    0.467460] Intel(R) Gigabit Ethernet Network Driver - version
> 3.0.6-k2
> [    0.467466] Copyright (c) 2007-2011 Intel Corporation.
> [    0.467519] Intel(R) Virtual Function Network Driver - version
> 1.0.8-k0
> [    0.467524] Copyright (c) 2009 - 2010 Intel Corporation.
> [    0.467589] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver -
> version 3.3.8-k2
> [    0.467595] ixgbe: Copyright (c) 1999-2011 Intel Corporation.
> [    0.467655] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual
> Function Network Driver - version 2.0.0-k2
> [    0.467663] Copyright (c) 2009 - 2010 Intel Corporation.
> [    0.467715] ixgb: Intel(R) PRO/10GbE Network Driver - version
> 1.0.135-k2-NAPI
> [    0.467721] ixgb: Copyright (c) 1999-2008 Intel Corporation.
> [    0.468044] bonding: Ethernet Channel Bonding Driver: v3.7.1 (April
> 27, 2011)
> [    0.468051] bonding: Warning: either miimon or arp_interval and
> arp_ip_target module parameters must be specified, otherwise bonding
> will not detect link failures! see bonding.txt for details.
> [    0.468640] blkfront: xvda1: barrier or flush: disabled
> [    0.469969] Atheros(R) L2 Ethernet Driver - version 2.2.3
> [    0.469975] Copyright (c) 2007 Atheros Corporation.
> [    0.470037] tehuti: Tehuti Networks(R) Network Driver, 7.29.3
> [    0.470042] tehuti: Options: hw_csum 
> [    0.470093] enic: Cisco VIC Ethernet NIC Driver, ver 2.1.1.13
> [    0.470147] jme: JMicron JMC2XX ethernet driver version 1.0.8
> [    0.470250] VMware vmxnet3 virtual NIC driver - version
> 1.1.29.0-k-NAPI
> [    0.470301] Brocade 10G Ethernet driver
> [    0.470655] pcnet32: pcnet32.c:v1.35 21.Apr.2008
> tsbogend@alpha.franken.de
> [    0.470707] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
> [    0.470712] e100: Copyright(c) 1999-2006 Intel Corporation
> [    0.470765] tlan: ThunderLAN driver v1.17
> [    0.470815] tlan: 0 devices installed, PCI: 0  EISA: 0
> [    0.471215] ns83820.c: National Semiconductor DP83820 10/100/1000
> driver.
> [    0.592776] cnic: Broadcom NetXtreme II CNIC Driver cnic v2.2.14
> (Mar 30, 2011)
> [    0.592817] Broadcom NetXtreme II 5771x 10Gigabit Ethernet Driver
> bnx2x 1.62.12-0 (2011/03/20)
> [    0.592950] sky2: driver version 1.28
> [    0.593655] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver
> v5.0.18
> [    0.593787] PPP generic driver version 2.4.2
> [    0.593904] PPP Deflate Compression module registered
> [    0.593910] PPP BSD Compression module registered
> [    0.593915] Initialising Xen virtual ethernet driver.
> [    0.595294] eql: Equalizer2002: Simon Janes (simon@ncm.com) and
> David S. Miller (davem@redhat.com)
> [    0.595581] tun: Universal TUN/TAP device driver, 1.6
> [    0.595588] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.595868] vxge: Copyright(c) 2002-2010 Exar Corp.
> [    0.595874] vxge: Driver version: 2.5.3.22640-k
> [    0.595941] myri10ge: Version 1.5.2-1.459
> [    0.596173] console [netcon0] enabled
> [    0.596178] netconsole: network logging started
> [    0.596182] QLogic/NetXen Network Driver v4.0.75
> [    0.596298] Solarflare NET driver v3.1
> [    0.596651] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI)
> Driver
> [    0.596658] ehci_hcd: block sizes: qh 112 qtd 96 itd 192 sitd 96
> [    0.596721] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.596728] ohci_hcd: block sizes: ed 80 td 96
> [    0.596787] uhci_hcd: USB Universal Host Controller Interface
> driver
> [    0.596938] usbcore: registered new interface driver usblp
> [    0.596947] Initializing USB Mass Storage driver...
> [    0.597035] usbcore: registered new interface driver usb-storage
> [    0.597042] USB Mass Storage support registered.
> [    0.597114] usbcore: registered new interface driver libusual
> [    0.597344] i8042: PNP: No PS/2 controller found. Probing ports
> directly.
> [    1.606875] i8042: No controller found
> [    1.606986] mousedev: PS/2 mouse device common for all mice
> [    1.647359] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as
> rtc0
> [    1.647433] rtc_cmos: probe of rtc_cmos failed with error -38
> [    1.647611] md: linear personality registered for level -1
> [    1.647618] md: raid0 personality registered for level 0
> [    1.647622] md: raid1 personality registered for level 1
> [    1.647627] md: raid10 personality registered for level 10
> [    1.647631] md: raid6 personality registered for level 6
> [    1.647634] md: raid5 personality registered for level 5
> [    1.647638] md: raid4 personality registered for level 4
> [    1.647643] md: multipath personality registered for level -4
> [    1.647648] md: faulty personality registered for level -5
> [    1.647885] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02)
> initialised: dm-devel@redhat.com
> [    1.647963] device-mapper: multipath: version 1.3.0 loaded
> [    1.647970] device-mapper: multipath round-robin: version 1.0.0
> loaded
> [    1.647975] device-mapper: multipath queue-length: version 0.1.0
> loaded
> [    1.647979] device-mapper: multipath service-time: version 0.2.0
> loaded
> [    1.648093] cpuidle: using governor ladder
> [    1.648099] cpuidle: using governor menu
> [    1.648102] EFI Variables Facility v0.08 2004-May-17
> [    1.649120] usbcore: registered new interface driver usbhid
> [    1.649127] usbhid: USB HID core driver
> [    1.649608] ALSA device list:
> [    1.649614]   No soundcards found.
> [    1.649648] Netfilter messages via NETLINK v0.30.
> [    1.649670] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
> [    1.649965] ctnetlink v0.93: registering with nfnetlink.
> [    1.650163] xt_time: kernel timezone is -0000
> [    1.650171] ip_set: protocol 6
> [    1.650187] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP)
> [    1.650215] IPVS: Connection hash table configured (size=4096,
> memory=64Kbytes)
> [    1.650281] IPVS: Creating netns size=2056 id=0
> [    1.650287] IPVS: ipvs loaded.
> [    1.650291] IPVS: [rr] scheduler registered.
> [    1.650295] IPVS: [wrr] scheduler registered.
> [    1.650299] IPVS: [lc] scheduler registered.
> [    1.650302] IPVS: [wlc] scheduler registered.
> [    1.650311] IPVS: [lblc] scheduler registered.
> [    1.650321] IPVS: [lblcr] scheduler registered.
> [    1.650325] IPVS: [dh] scheduler registered.
> [    1.650329] IPVS: [sh] scheduler registered.
> [    1.650333] IPVS: [sed] scheduler registered.
> [    1.650337] IPVS: [nq] scheduler registered.
> [    1.650342] IPVS: ftp: loaded support on port[0] = 21
> [    1.650346] IPVS: [sip] pe registered.
> [    1.650569] IPv4 over IPv4 tunneling driver
> [    1.650877] GRE over IPv4 demultiplexor driver
> [    1.650883] GRE over IPv4 tunneling driver
> [    1.651415] ip_tables: (C) 2000-2006 Netfilter Core Team
> [    1.651460] arp_tables: (C) 2002 David S. Miller
> [    1.651474] TCP cubic registered
> [    1.651477] Initializing XFRM netlink socket
> [    1.651760] NET: Registered protocol family 10
> [    1.653337] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [    1.653400] IPv6 over IPv4 tunneling driver
> [    1.654673] NET: Registered protocol family 17
> [    1.654685] NET: Registered protocol family 15
> [    1.654729] Bridge firewalling registered
> [    1.654735] Ebtables v2.0 registered
> [    1.654767] L2TP core driver, V2.0
> [    1.654771] 802.1Q VLAN Support v1.8
> [    1.655121] sctp: Hash tables configured (established 65536 bind
> 65536)
> [    1.655401] Registering the dns_resolver key type
> [    1.655585] PM: Hibernation image not present or could not be
> loaded.
> [    1.655595] registered taskstats version 1
> [    1.655617] XENBUS: Device with no driver: device/console/0
> [    1.655621]   Magic number: 1:252:3141
> [    1.667063] IP-Config: Complete:
> [    1.667072]      device=eth0, addr=213.160.XX.XX,
> mask=255.255.255.0, gw=213.160.XX.XX,
> [    1.667084]      host=vsrv70428, domain=, nis-domain=XXXXX
> [    1.667094]      bootserver=127.0.255.255,
> rootserver=127.0.255.255, rootpath=
> [    1.667162] md: Waiting for all devices to be available before
> autodetect
> [    1.667168] md: If you don't use raid, use raid=noautodetect
> [    1.667362] md: Autodetecting RAID arrays.
> [    1.667367] md: Scanned 0 and added 0 devices.
> [    1.667371] md: autorun ...
> [    1.667373] md: ... autorun DONE.
> [    1.687595] EXT3-fs: barriers not enabled
> [    1.687789] kjournald starting.  Commit interval 5 seconds
> [    1.687956] EXT3-fs (xvda1): using internal journal
> [    1.687967] EXT3-fs (xvda1): mounted filesystem with writeback data
> mode
> [    1.687984] VFS: Mounted root (ext3 filesystem) on device 202:1.
> [    1.694482] devtmpfs: mounted
> [    1.694805] Freeing unused kernel memory: 808k freed
> [    1.694994] Write protecting the kernel read-only data: 20480k
> [    1.702908] Freeing unused kernel memory: 1288k freed
> [    1.703715] Freeing unused kernel memory: 1304k freed
> [    1.999674] udev[1230]: starting version 164
> [    3.042545] Adding 524284k swap on /swap.  Priority:-1 extents:133
> across:541292k SS
> [    4.078247] sshd (1850): /proc/1850/oom_adj is deprecated, please
> use /proc/1850/oom_score_adj instead.
> [   11.970025] eth0: no IPv6 routers present
> 
> Ian Campbell <Ian.Campbell@citrix.com> schrieb am 13:07 Dienstag,
> 28.Januar 2014:
> 
> On Tue, 2014-01-28 at 11:59 +0000, Giovanni Bellac wrote:
> > The "ip", "netmask" and "gateway" options in the domU sxp file are
> > forwarded to the kernel of the domU when it starts:
> > 
> > Output of console when starting the domU with "-c" (console):
> > ...
> > [    1.667063] IP-Config: Complete:
> > [    1.667072]      device=eth0, addr=213.160.XX.XX,
> mask=255.255.255.0, gw=213.160.XX.XX,
> > [    1.667084]      host=vsrv70428, domain=, nis-domain=XXXXX.tld,
> > [    1.667094]      bootserver=127.0.255.255,
> rootserver=127.0.255.255, rootpath=
> > ...
> 
> This is the domU console? I wonder if perhaps this stuff is causing
> things to get added to the guest command line -- can you post a full
> guest dmesg please?
> 
> 
> Ian.
> 
> 
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xen.org
> http://lists.xen.org/xen-users
> 
> 
> 
> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:43:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8sqF-00040v-8T; Thu, 30 Jan 2014 14:43:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1W8sqE-00040o-IM
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:43:42 +0000
Received: from [85.158.139.211:38147] by server-7.bemta-5.messagelabs.com id
	B1/54-14867-C156AE25; Thu, 30 Jan 2014 14:43:40 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391093019!625010!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA5ODA1MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7816 invoked from network); 30 Jan 2014 14:43:40 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 14:43:40 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0UEhGIp027041;
	Thu, 30 Jan 2014 14:43:20 GMT
Received: from algedi.dur.ac.uk (algedi.dur.ac.uk [129.234.2.28])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s0UEhDII004558
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 30 Jan 2014 14:43:13 GMT
Received: from algedi.dur.ac.uk (localhost [127.0.0.1])
	by algedi.dur.ac.uk (8.14.5+Sun/8.11.1) with ESMTP id s0UEhDeh019030;
	Thu, 30 Jan 2014 14:43:13 GMT
Received: from localhost (dcl0may@localhost)
	by algedi.dur.ac.uk (8.14.5+Sun/8.14.5/Submit) with ESMTP id
	s0UEhCvt019027; Thu, 30 Jan 2014 14:43:12 GMT
Date: Thu, 30 Jan 2014 14:43:12 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391092737.5650.7.camel@kazak.uk.xensource.com>
Message-ID: <alpine.GSO.2.00.1401301439340.18476@algedi.dur.ac.uk>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com> 
	<alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
	<1391092737.5650.7.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (GSO 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s0UEhGIp027041
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Ian Campbell wrote:

> On Thu, 2014-01-30 at 14:32 +0000, M A Young wrote:
>> On Thu, 30 Jan 2014, Ian Campbell wrote:
>>
>>> On Thu, 2014-01-30 at 13:02 +0000, Joby Poriyath wrote:
>>>> On Thu, Jan 30, 2014 at 12:07:34PM +0000, Ian Campbell wrote:
>>>>> On Thu, 2014-01-30 at 12:01 +0000, Joby Poriyath wrote:
>>>>>>>> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>>>>>>>>                  continue
>>>>>>>>
>>>>>>>>              # new image
>>>>>>>> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
>>>>>>>> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>>>>>>>
>>>>>>> Why is this necessary? fedora-19 also has the aforementioned "--class
>>>>>>> red, --class gnu" yet is parsed happily.
>>>>>>
>>>>>> A menuentry from RHEL 7 looks like this...
>>>>>>
>>>>>> menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163'
>>>>>>
>>>>>> So we need 'lazy' match with '.*?'.
>>>>>
>>>>> ".*" already matches zero or more characters, so I'm not sure what ".*?"
>>>>> means in addition to that, do you have a reference?
>>>>
>>>> http://docs.python.org/2/howto/regex.html#greedy-versus-non-greedy
>>>
>>> Thanks, pure punctuation is a bit tricky for a search engine...
>>>
>>>>> Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
>>>>> the name itself, although you might have to split into handling " and '
>>>>> separately to be more correct
>>>
>>> Any thoughts on this?
>>
>> I went for ["\']([^"\']*)["\'] in a patch I added to Fedora pygrub in May
>> last year. That seems to work fine for recent Fedora versions which will
>> be somewhat similar to RHEL 7.
>
> Oops, sorry, did I manage to drop that patch on the ground?

I don't think I ever got around to submitting it upstream (there may be 
other patches in my Fedora build that could go upstream as well).

 	Michael Young
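
The greedy vs. non-greedy distinction debated above can be reproduced with a
short snippet. The menuentry line is the RHEL 7 example quoted earlier in the
thread; a real grub.cfg menuentry line ends with " {", which the quoted mail
omits, so it is restored here as an assumption:

```python
import re

# RHEL 7 menuentry line from the thread, with the trailing "{" restored
# (real grub.cfg menuentry lines open a brace block).
line = ("menuentry 'Red Hat Enterprise Linux Everything, with Linux "
        "0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red "
        "--class gnu-linux --class gnu --class os $menuentry_id_option "
        "'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-"
        "d23b8b49-4cfe-4900-8ef1-ec80bc633163' {")

# Greedy (.*) backtracks from the end of the line, so the title group
# runs to the LAST quote, swallowing the --class options and the
# $menuentry_id_option argument.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)

# Non-greedy (.*?) stops at the FIRST quote that lets the rest of the
# pattern match, yielding just the title.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

title = ("Red Hat Enterprise Linux Everything, with Linux "
         "0-rescue-af34f0b8cf364cdbbe6d093f8228a37f")
assert lazy.group(1) == title
assert greedy.group(1).startswith(title + "' --class")
```

This is why the patch's one-character change from (.*) to (.*?) is enough to
fix title extraction for the RHEL 7 config.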

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:45:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ss1-0004Co-Qs; Thu, 30 Jan 2014 14:45:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8ss0-0004Cf-Fa
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:45:32 +0000
Received: from [85.158.137.68:64846] by server-14.bemta-3.messagelabs.com id
	B6/41-08196-B856AE25; Thu, 30 Jan 2014 14:45:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391093129!12333576!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25233 invoked from network); 30 Jan 2014 14:45:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:45:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96145847"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:45:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:45:04 -0500
Message-ID: <1391093102.5650.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Thu, 30 Jan 2014 14:45:02 +0000
In-Reply-To: <alpine.GSO.2.00.1401301439340.18476@algedi.dur.ac.uk>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
	<alpine.GSO.2.00.1401301426280.18476@algedi.dur.ac.uk>
	<1391092737.5650.7.camel@kazak.uk.xensource.com>
	<alpine.GSO.2.00.1401301439340.18476@algedi.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 14:43 +0000, M A Young wrote:
> I don't think I ever got around to submitting it upstream

oh ok.

>  (there may be 
> other patches in my Fedora build that could go upstream as well).

Please do, especially if there are any which you think would be
important for 4.4 (rc3 due soon, so judge the criticality as
appropriate...)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 14:56:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 14:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8t22-0004k6-5c; Thu, 30 Jan 2014 14:55:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8t20-0004k1-LU
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 14:55:52 +0000
Received: from [85.158.137.68:49230] by server-9.bemta-3.messagelabs.com id
	4F/0D-10184-7F76AE25; Thu, 30 Jan 2014 14:55:51 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391093749!12254878!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7152 invoked from network); 30 Jan 2014 14:55:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 14:55:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96150821"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 14:55:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 09:55:48 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8t1w-0006yo-Ao; Thu, 30 Jan 2014 14:55:48 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8t1w-0001Tj-49; Thu, 30 Jan 2014
	14:55:48 +0000
Date: Thu, 30 Jan 2014 14:55:47 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140130145547.GA5100@citrix.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391090818.29487.36.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 02:06:58PM +0000, Ian Campbell wrote:
> > > Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> > > the name itself, although you might have to split into handling " and '
> > > separately to be more correct
> 
> Any thoughts on this?

The two regexes seem to be equivalent. My only worry with '.*?' was 
compatibility with older Python. Luckily, it's supported in Python 2.2 
and later.

> 
> I suppose it depends a bit on the rules for mixing quotes in grub, e.g.
> is
> 	menuentry "Ian's super cool Linux"
> 
> allowed.
> 
> On the other hand pygrub is very much best effort so as long as it works
> with the current set of inputs which we are aware of then .*? is fine.
> 

Ok.

Should I send an updated patch along with an example of RHEL 7 grub.cfg
or is this patch acceptable as it is?

Thanks,
Joby
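
The two candidate patterns from the thread diverge only on titles containing a
quote character, as a small comparison shows. The "Plain Title" line below is a
hypothetical sample; the mixed-quote line is Ian's example from earlier in the
thread:

```python
import re

# Ian's suggested character-class pattern vs. the patch's non-greedy one.
CHAR_CLASS = r'^menuentry ["\']([^"\']*)["\'] (.*){'
LAZY = r'^menuentry ["\'](.*?)["\'] (.*){'

# A title with no embedded quotes (hypothetical) parses identically.
simple = "menuentry 'Plain Title' --class os {"
assert re.match(CHAR_CLASS, simple).group(1) == "Plain Title"
assert re.match(LAZY, simple).group(1) == "Plain Title"

# Ian's mixed-quote example: an apostrophe inside a double-quoted title.
mixed = "menuentry \"Ian's super cool Linux\" --class os {"
# The character class forbids both quote characters inside the title,
# so the whole line fails to match ...
assert re.match(CHAR_CLASS, mixed) is None
# ... while the non-greedy pattern recovers the full title (at the cost
# of not checking that the opening and closing quotes are the same kind).
assert re.match(LAZY, mixed).group(1) == "Ian's super cool Linux"
```

Handling " and ' separately, as suggested above, would combine both
properties, but for pygrub's best-effort parsing either form covers the
known inputs.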

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:01:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8t6x-0005JH-LI; Thu, 30 Jan 2014 15:00:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8t6v-0005J5-Lj
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:00:58 +0000
Received: from [85.158.137.68:43005] by server-8.bemta-3.messagelabs.com id
	7A/6C-16039-8296AE25; Thu, 30 Jan 2014 15:00:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391094054!12353403!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16179 invoked from network); 30 Jan 2014 15:00:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:00:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96152652"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:00:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:00:53 -0500
Message-ID: <1391094052.9495.0.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Thu, 30 Jan 2014 15:00:52 +0000
In-Reply-To: <20140130145547.GA5100@citrix.com>
References: <20140130113126.GA3326@citrix.com> <52EA3B67.2070707@citrix.com>
	<20140130120107.GA3441@citrix.com>
	<1391083654.29487.21.camel@kazak.uk.xensource.com>
	<20140130130241.GB3441@citrix.com>
	<1391090818.29487.36.camel@kazak.uk.xensource.com>
	<20140130145547.GA5100@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 14:55 +0000, Joby Poriyath wrote:
> On Thu, Jan 30, 2014 at 02:06:58PM +0000, Ian Campbell wrote:
> > > > Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> > > > the name itself, although you might have to split into handling " and '
> > > > separately to be more correct
> > 
> > Any thoughts on this?
> 
> The two regexes seem to be equivalent.

OK

> Should I send an updated patch along with an example of RHEL 7 grub.cfg
> or is this patch acceptable as it is?

Yes, please send an updated patch with the example added and CC George +
make a case for a release exception.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 14:55 +0000, Joby Poriyath wrote:
> On Thu, Jan 30, 2014 at 02:06:58PM +0000, Ian Campbell wrote:
> > > > Perhaps ["\']([^"\']*)["\'] is more accurate (i.e. disallow quotes in
> > > > the name itself, although you might have to split into handling " and '
> > > > separately to be more correct)
> > 
> > Any thoughts on this?
> 
> The two regexes seem to be equivalent.

OK
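For the record, the suggested pattern can be exercised in a small standalone sketch. pygrub itself uses Python's re module, so the POSIX-regex C version below is purely illustrative, and extract_title is a made-up helper name:

```c
#include <regex.h>
#include <stdio.h>
#include <string.h>

/* Extract a quoted menuentry title into 'out' using the pattern
 * ["']([^"']*)["'] suggested above; returns 0 on success. */
int extract_title(const char *line, char *out, size_t outlen)
{
    regex_t re;
    regmatch_t m[2];
    int rc = -1;

    /* Opening quote, capture a run of non-quote characters, closing quote. */
    if (regcomp(&re, "[\"']([^\"']*)[\"']", REG_EXTENDED) != 0)
        return -1;

    if (regexec(&re, line, 2, m, 0) == 0) {
        snprintf(out, outlen, "%.*s",
                 (int)(m[1].rm_eo - m[1].rm_so), line + m[1].rm_so);
        rc = 0;
    }

    regfree(&re);
    return rc;
}
```

Note that a single character class still accepts a mismatched pair such as 'Foo", which is why handling " and ' separately, as suggested above, would be stricter.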

> Should I send an updated patch along with an example of RHEL 7 grub.cfg
> or is this patch acceptable as it is?

Yes, please send an updated patch with the example added and CC George +
make a case for a release exception.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:06:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:06:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tC0-0005ao-Gx; Thu, 30 Jan 2014 15:06:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tBy-0005ah-QJ
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:06:11 +0000
Received: from [85.158.139.211:45232] by server-4.bemta-5.messagelabs.com id
	C3/E1-08092-16A6AE25; Thu, 30 Jan 2014 15:06:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391094228!625680!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12804 invoked from network); 30 Jan 2014 15:03:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:03:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96154358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:03:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:03:46 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8t9f-00078f-1t;
	Thu, 30 Jan 2014 15:03:47 +0000
Message-ID: <52EA69D3.4070600@citrix.com>
Date: Thu, 30 Jan 2014 15:03:47 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
 server abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> Collect together the data structures concerning device emulation into
> a new struct hvm_ioreq_server.
>
> Code that deals with the shared and buffered ioreq pages is extracted from
> functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
> and consolidated into a set of hvm_ioreq_server_XXX functions.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++------------
>  xen/include/asm-x86/hvm/domain.h |    9 +-
>  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
>  3 files changed, 229 insertions(+), 100 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 71a44db..a0eaadb 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
>      spin_unlock(&d->event_lock);
>  }
>  
> -static ioreq_t *get_ioreq(struct vcpu *v)
> +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
>  {
> -    struct domain *d = v->domain;
> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> +    shared_iopage_t *p = s->ioreq.va;
> +    ASSERT(p != NULL);
> +    return &p->vcpu_ioreq[id];
>  }
>  
>  void hvm_do_resume(struct vcpu *v)
>  {
> +    struct hvm_ioreq_server *s;
>      ioreq_t *p;
>  
>      check_wakeup_from_wait();
> @@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
>      if ( is_hvm_vcpu(v) )
>          pt_restore_timer(v);
>  
> -    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> -    if ( !(p = get_ioreq(v)) )
> +    s = v->arch.hvm_vcpu.ioreq_server;

This assignment can be part of the declaration of 's' (and likewise in
most later examples).
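The style being suggested amounts to the following (with hypothetical stand-in types rather than the real Xen structures):

```c
#include <stddef.h>

/* Hypothetical minimal stand-ins, only to illustrate the style. */
struct hvm_ioreq_server { int dummy; };
struct vcpu { struct hvm_ioreq_server *ioreq_server; };

/* Take and clear the per-vcpu server pointer, folding the assignment
 * into the declaration instead of a separate statement. */
struct hvm_ioreq_server *take_ioreq_server(struct vcpu *v)
{
    struct hvm_ioreq_server *s = v->ioreq_server;

    v->ioreq_server = NULL;
    return s;
}
```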

> +    v->arch.hvm_vcpu.ioreq_server = NULL;
> +
> +    if ( !s )
>          goto check_inject_trap;
>  
> +    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> +    p = get_ioreq(s, v->vcpu_id);
>      while ( p->state != STATE_IOREQ_NONE )
>      {
>          switch ( p->state )
> @@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
>              break;
>          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
>          case STATE_IOREQ_INPROCESS:
> -            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
> +            wait_on_xen_event_channel(p->vp_eport,
>                                        (p->state != STATE_IOREQ_READY) &&
>                                        (p->state != STATE_IOREQ_INPROCESS));
>              break;
> @@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
>  static void hvm_init_ioreq_page(
>      struct domain *d, struct hvm_ioreq_page *iorp)
>  {
> -    memset(iorp, 0, sizeof(*iorp));

Is it worth keeping this function?  The two back-to-back
domain_pause() calls from the callers are redundant.

>      spin_lock_init(&iorp->lock);
>      domain_pause(d);
>  }
> @@ -541,6 +544,167 @@ static int handle_pvh_io(
>      return X86EMUL_OKAY;
>  }
>  
> +static int hvm_init_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s;
> +    int i;
> +
> +    s = xzalloc(struct hvm_ioreq_server);
> +    if ( !s )
> +        return -ENOMEM;
> +
> +    s->domain = d;
> +
> +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +        s->ioreq_evtchn[i] = -1;
> +    s->buf_ioreq_evtchn = -1;
> +
> +    hvm_init_ioreq_page(d, &s->ioreq);
> +    hvm_init_ioreq_page(d, &s->buf_ioreq);
> +
> +    d->arch.hvm_domain.ioreq_server = s;
> +    return 0;
> +}
> +
> +static void hvm_deinit_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> +    hvm_destroy_ioreq_page(d, &s->ioreq);
> +    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> +
> +    xfree(s);
> +}
> +
> +static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> +{
> +    struct domain *d = s->domain;
> +
> +    if ( s->ioreq.va != NULL )
> +    {
> +        shared_iopage_t *p = s->ioreq.va;
> +        struct vcpu *v;
> +
> +        for_each_vcpu ( d, v )
> +            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
> +    }
> +}
> +
> +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    int rc;
> +
> +    /* Create ioreq event channel. */
> +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> +    if ( rc < 0 )
> +        goto done;
> +
> +    /* Register ioreq event channel. */
> +    s->ioreq_evtchn[v->vcpu_id] = rc;
> +
> +    if ( v->vcpu_id == 0 )
> +    {
> +        /* Create bufioreq event channel. */
> +        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> +        if ( rc < 0 )
> +            goto done;

This error path skips hvm_update_ioreq_server_evtchn() even though the
ioreq event channel itself was allocated successfully?

> +
> +        s->buf_ioreq_evtchn = rc;
> +    }
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +    rc = 0;
> +
> +done:
> +    return rc;
> +}
> +
> +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    if ( v->vcpu_id == 0 )
> +    {
> +        if ( s->buf_ioreq_evtchn >= 0 )
> +        {
> +            free_xen_event_channel(v, s->buf_ioreq_evtchn);
> +            s->buf_ioreq_evtchn = -1;
> +        }
> +    }
> +
> +    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
> +    {
> +        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
> +        s->ioreq_evtchn[v->vcpu_id] = -1;
> +    }
> +}
> +
> +static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> +                                     int *p_port)
> +{
> +    int old_port, new_port;
> +
> +    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> +    if ( new_port < 0 )
> +        return new_port;
> +
> +    /* xchg() ensures that only we call free_xen_event_channel(). */
> +    old_port = xchg(p_port, new_port);
> +    free_xen_event_channel(v, old_port);
> +    return 0;
> +}
> +
> +static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> +{
> +    struct domain *d = s->domain;
> +    struct vcpu *v;
> +    int rc = 0;
> +
> +    domain_pause(d);
> +
> +    if ( d->vcpu[0] )
> +    {
> +        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +
> +    s->domid = domid;
> +
> +done:
> +    domain_unpause(d);
> +
> +    return rc;
> +}
> +
> +static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> +{
> +    struct domain *d = s->domain;
> +    int rc;
> +
> +    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> +    if ( rc < 0 )
> +        return rc;
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +
> +    return 0;
> +}
> +
> +static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> +{
> +    struct domain *d = s->domain;
> +
> +    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);

Double space.

> +}
> +
>  int hvm_domain_initialise(struct domain *d)
>  {
>      int rc;
> @@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> +    rc = hvm_init_ioreq_server(d);
> +    if ( rc != 0 )
> +        goto fail2;
>  
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail2;
> +        goto fail3;
>  
>      return 0;
>  
> + fail3:
> +    hvm_deinit_ioreq_server(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> +    hvm_deinit_ioreq_server(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  {
>      int rc;
>      struct domain *d = v->domain;
> -    domid_t dm_domid;
> +    struct hvm_ioreq_server *s;
>  
>      hvm_asid_flush_vcpu(v);
>  
> @@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> +    s = d->arch.hvm_domain.ioreq_server;
>  
> -    /* Create ioreq event channel. */
> -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> +    rc = hvm_ioreq_server_add_vcpu(s, v);
>      if ( rc < 0 )
>          goto fail6;
>  
> -    /* Register ioreq event channel. */
> -    v->arch.hvm_vcpu.xen_port = rc;
> -
> -    if ( v->vcpu_id == 0 )
> -    {
> -        /* Create bufioreq event channel. */
> -        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> -        if ( rc < 0 )
> -            goto fail6;
> -        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
> -    }
> -
> -    spin_lock(&d->arch.hvm_domain.ioreq.lock);
> -    if ( d->arch.hvm_domain.ioreq.va != NULL )
> -        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
> -
>      if ( v->vcpu_id == 0 )
>      {
>          /* NB. All these really belong in hvm_domain_initialise(). */
> @@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> +    hvm_ioreq_server_remove_vcpu(s, v);
> +
>      nestedhvm_vcpu_destroy(v);
>  
>      free_compat_arg_xlat(v);
> @@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
>          vlapic_destroy(v);
>  
>      hvm_funcs.vcpu_destroy(v);
> -
> -    /* Event channel is already freed by evtchn_destroy(). */
> -    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
>  }
>  
>  void hvm_vcpu_down(struct vcpu *v)
> @@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
>  int hvm_buffered_io_send(ioreq_t *p)
>  {
>      struct vcpu *v = current;
> -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> -    buffered_iopage_t *pg = iorp->va;
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +    struct hvm_ioreq_page *iorp;
> +    buffered_iopage_t *pg;
>      buf_ioreq_t bp;
>      /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
>      int qw = 0;
> @@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
> +        return 0;
> +
> +    iorp = &s->buf_ioreq;
> +    pg = iorp->va;
> +
>      /*
>       * Return 0 for the cases we can't deal with:
>       *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> @@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
>      wmb();
>      pg->write_pointer += qw ? 2 : 1;
>  
> -    notify_via_xen_event_channel(v->domain,
> -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> +    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
>      spin_unlock(&iorp->lock);
>      
>      return 1;
> @@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>  {
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
>      ioreq_t *p;
>  
>      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
>          return 0; /* implicitly bins the i/o operation */
>  
> -    if ( !(p = get_ioreq(v)) )
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
>          return 0;
>  
> +    p = get_ioreq(s, v->vcpu_id);
> +
>      if ( unlikely(p->state != STATE_IOREQ_NONE) )
>      {
>          /* This indicates a bug in the device model. Crash the domain. */
>          gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p->state);
> -        domain_crash(v->domain);
> +        domain_crash(d);
>          return 0;
>      }
>  
> +    v->arch.hvm_vcpu.ioreq_server = s;
> +
>      p->dir = proto_p->dir;
>      p->data_is_ptr = proto_p->data_is_ptr;
>      p->type = proto_p->type;
> @@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>      p->df = proto_p->df;
>      p->data = proto_p->data;
>  
> -    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> +    prepare_wait_on_xen_event_channel(p->vp_eport);
>  
>      /*
>       * Following happens /after/ blocking and setting up ioreq contents.
>       * prepare_wait_on_xen_event_channel() is an implicit barrier.
>       */
>      p->state = STATE_IOREQ_READY;
> -    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
> +    notify_via_xen_event_channel(d, p->vp_eport);
>  
>      return 1;
>  }
> @@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
>      return 0;
>  }
>  
> -static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> -                                     int *p_port)
> -{
> -    int old_port, new_port;
> -
> -    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> -    if ( new_port < 0 )
> -        return new_port;
> -
> -    /* xchg() ensures that only we call free_xen_event_channel(). */
> -    old_port = xchg(p_port, new_port);
> -    free_xen_event_channel(v, old_port);
> -    return 0;
> -}
> -
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case HVMOP_get_param:
>      {
>          struct xen_hvm_param a;
> -        struct hvm_ioreq_page *iorp;
> +        struct hvm_ioreq_server *s;
>          struct domain *d;
>          struct vcpu *v;
>  
> @@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( rc )
>              goto param_fail;
>  
> +        s = d->arch.hvm_domain.ioreq_server;
> +

This should be reduced in lexical scope, and I would have said that it
can just be 'inlined' into each of the 4 uses later.
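A toy sketch of that suggestion, with made-up stand-in types and a stand-in param index rather than the real Xen ones:

```c
#include <stddef.h>

/* Toy scaffolding (not the real Xen types) to make the scope point
 * concrete: rather than assigning 's' once at function scope, fetch
 * the pointer at each of the few call sites that actually use it. */
struct hvm_ioreq_server { unsigned long pfn; };
struct hvm_domain { struct hvm_ioreq_server *ioreq_server; };
struct domain { struct hvm_domain hvm; };

static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s,
                                    unsigned long pfn)
{
    s->pfn = pfn;
    return 0;
}

int set_param(struct domain *d, int index, unsigned long value)
{
    int rc = -1;

    switch (index) {
    case 0: /* stand-in for HVM_PARAM_IOREQ_PFN */
        /* 'Inlined' use of the server pointer: no function-scope 's'. */
        rc = hvm_set_ioreq_server_pfn(d->hvm.ioreq_server, value);
        break;
    default:
        break;
    }
    return rc;
}
```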

>          if ( op == HVMOP_set_param )
>          {
>              rc = 0;
> @@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              switch ( a.index )
>              {
>              case HVM_PARAM_IOREQ_PFN:
> -                iorp = &d->arch.hvm_domain.ioreq;
> -                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
> -                    break;
> -                spin_lock(&iorp->lock);
> -                if ( iorp->va != NULL )
> -                    /* Initialise evtchn port info if VCPUs already created. */
> -                    for_each_vcpu ( d, v )
> -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -                spin_unlock(&iorp->lock);
> +                rc = hvm_set_ioreq_server_pfn(s, a.value);
>                  break;
>              case HVM_PARAM_BUFIOREQ_PFN: 
> -                iorp = &d->arch.hvm_domain.buf_ioreq;
> -                rc = hvm_set_ioreq_page(d, iorp, a.value);
> +                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
>                  break;
>              case HVM_PARAM_CALLBACK_IRQ:
>                  hvm_set_callback_via(d, a.value);
> @@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = 0;
> -                domain_pause(d); /* safe to change per-vcpu xen_port */
> -                if ( d->vcpu[0] )
> -                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
> -                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
> -                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
> -                if ( rc )
> -                {
> -                    domain_unpause(d);
> -                    break;
> -                }
> -                iorp = &d->arch.hvm_domain.ioreq;
> -                for_each_vcpu ( d, v )
> -                {
> -                    rc = hvm_replace_event_channel(v, a.value,
> -                                                   &v->arch.hvm_vcpu.xen_port);
> -                    if ( rc )
> -                        break;
> -
> -                    spin_lock(&iorp->lock);
> -                    if ( iorp->va != NULL )
> -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -                    spin_unlock(&iorp->lock);
> -                }
> -                domain_unpause(d);
> +                rc = hvm_set_ioreq_server_domid(s, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4241,6 +4360,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          {
>              switch ( a.index )
>              {
> +            case HVM_PARAM_BUFIOREQ_EVTCHN:
> +                a.value = s->buf_ioreq_evtchn;
> +                break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
>                  break;
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index b1e3187..4c039f8 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -41,10 +41,17 @@ struct hvm_ioreq_page {
>      void *va;
>  };
>  
> -struct hvm_domain {
> +struct hvm_ioreq_server {
> +    struct domain          *domain;
> +    domid_t                domid;
>      struct hvm_ioreq_page  ioreq;
> +    int                    ioreq_evtchn[MAX_HVM_VCPUS];
>      struct hvm_ioreq_page  buf_ioreq;
> +    int                    buf_ioreq_evtchn;
> +};
>  
> +struct hvm_domain {
> +    struct hvm_ioreq_server *ioreq_server;
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
> index 122ab0d..4c9d7ee 100644
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -138,7 +138,7 @@ struct hvm_vcpu {
>      spinlock_t          tm_lock;
>      struct list_head    tm_list;
>  
> -    int                 xen_port;
> +    struct hvm_ioreq_server *ioreq_server;
>  

Why do both hvm_vcpu and hvm_domain need ioreq_server pointers?  I can't
spot anything which actually uses the vcpu one.

~Andrew

>      bool_t              flag_dr_dirty;
>      bool_t              debug_state_latch;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:06:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:06:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tC0-0005ao-Gx; Thu, 30 Jan 2014 15:06:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tBy-0005ah-QJ
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:06:11 +0000
Received: from [85.158.139.211:45232] by server-4.bemta-5.messagelabs.com id
	C3/E1-08092-16A6AE25; Thu, 30 Jan 2014 15:06:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391094228!625680!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12804 invoked from network); 30 Jan 2014 15:03:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:03:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96154358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:03:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:03:46 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8t9f-00078f-1t;
	Thu, 30 Jan 2014 15:03:47 +0000
Message-ID: <52EA69D3.4070600@citrix.com>
Date: Thu, 30 Jan 2014 15:03:47 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
 server abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> Collect together data structures concerning device emulation together into
> a new struct hvm_ioreq_server.
>
> Code that deals with the shared and buffered ioreq pages is extracted from
> functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
> and consolidated into a set of hvm_ioreq_server_XXX functions.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++------------
>  xen/include/asm-x86/hvm/domain.h |    9 +-
>  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
>  3 files changed, 229 insertions(+), 100 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 71a44db..a0eaadb 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
>      spin_unlock(&d->event_lock);
>  }
>  
> -static ioreq_t *get_ioreq(struct vcpu *v)
> +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
>  {
> -    struct domain *d = v->domain;
> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> +    shared_iopage_t *p = s->ioreq.va;
> +    ASSERT(p != NULL);
> +    return &p->vcpu_ioreq[id];
>  }
>  
>  void hvm_do_resume(struct vcpu *v)
>  {
> +    struct hvm_ioreq_server *s;
>      ioreq_t *p;
>  
>      check_wakeup_from_wait();
> @@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
>      if ( is_hvm_vcpu(v) )
>          pt_restore_timer(v);
>  
> -    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> -    if ( !(p = get_ioreq(v)) )
> +    s = v->arch.hvm_vcpu.ioreq_server;

This assignment can be part of the declaration of 's' (and likewise in
most later examples).

> +    v->arch.hvm_vcpu.ioreq_server = NULL;
> +
> +    if ( !s )
>          goto check_inject_trap;
>  
> +    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> +    p = get_ioreq(s, v->vcpu_id);
>      while ( p->state != STATE_IOREQ_NONE )
>      {
>          switch ( p->state )
> @@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
>              break;
>          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
>          case STATE_IOREQ_INPROCESS:
> -            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
> +            wait_on_xen_event_channel(p->vp_eport,
>                                        (p->state != STATE_IOREQ_READY) &&
>                                        (p->state != STATE_IOREQ_INPROCESS));
>              break;
> @@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
>  static void hvm_init_ioreq_page(
>      struct domain *d, struct hvm_ioreq_page *iorp)
>  {
> -    memset(iorp, 0, sizeof(*iorp));

Is it worth keeping this function?  The two back-to-back
domain_pause()s from the callers are redundant.

>      spin_lock_init(&iorp->lock);
>      domain_pause(d);
>  }
> @@ -541,6 +544,167 @@ static int handle_pvh_io(
>      return X86EMUL_OKAY;
>  }
>  
> +static int hvm_init_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s;
> +    int i;
> +
> +    s = xzalloc(struct hvm_ioreq_server);
> +    if ( !s )
> +        return -ENOMEM;
> +
> +    s->domain = d;
> +
> +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +        s->ioreq_evtchn[i] = -1;
> +    s->buf_ioreq_evtchn = -1;
> +
> +    hvm_init_ioreq_page(d, &s->ioreq);
> +    hvm_init_ioreq_page(d, &s->buf_ioreq);
> +
> +    d->arch.hvm_domain.ioreq_server = s;
> +    return 0;
> +}
> +
> +static void hvm_deinit_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> +    hvm_destroy_ioreq_page(d, &s->ioreq);
> +    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> +
> +    xfree(s);
> +}
> +
> +static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> +{
> +    struct domain *d = s->domain;
> +
> +    if ( s->ioreq.va != NULL )
> +    {
> +        shared_iopage_t *p = s->ioreq.va;
> +        struct vcpu *v;
> +
> +        for_each_vcpu ( d, v )
> +            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
> +    }
> +}
> +
> +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    int rc;
> +
> +    /* Create ioreq event channel. */
> +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> +    if ( rc < 0 )
> +        goto done;
> +
> +    /* Register ioreq event channel. */
> +    s->ioreq_evtchn[v->vcpu_id] = rc;
> +
> +    if ( v->vcpu_id == 0 )
> +    {
> +        /* Create bufioreq event channel. */
> +        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> +        if ( rc < 0 )
> +            goto done;

This skips hvm_update_ioreq_server_evtchn() even when the ioreq event
channel itself was allocated successfully?

> +
> +        s->buf_ioreq_evtchn = rc;
> +    }
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +    rc = 0;
> +
> +done:
> +    return rc;
> +}
> +
> +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    if ( v->vcpu_id == 0 )
> +    {
> +        if ( s->buf_ioreq_evtchn >= 0 )
> +        {
> +            free_xen_event_channel(v, s->buf_ioreq_evtchn);
> +            s->buf_ioreq_evtchn = -1;
> +        }
> +    }
> +
> +    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
> +    {
> +        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
> +        s->ioreq_evtchn[v->vcpu_id] = -1;
> +    }
> +}
> +
> +static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> +                                     int *p_port)
> +{
> +    int old_port, new_port;
> +
> +    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> +    if ( new_port < 0 )
> +        return new_port;
> +
> +    /* xchg() ensures that only we call free_xen_event_channel(). */
> +    old_port = xchg(p_port, new_port);
> +    free_xen_event_channel(v, old_port);
> +    return 0;
> +}
> +
> +static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> +{
> +    struct domain *d = s->domain;
> +    struct vcpu *v;
> +    int rc = 0;
> +
> +    domain_pause(d);
> +
> +    if ( d->vcpu[0] )
> +    {
> +        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
> +        if ( rc < 0 )
> +            goto done;
> +    }
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +
> +    s->domid = domid;
> +
> +done:
> +    domain_unpause(d);
> +
> +    return rc;
> +}
> +
> +static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> +{
> +    struct domain *d = s->domain;
> +    int rc;
> +
> +    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> +    if ( rc < 0 )
> +        return rc;
> +
> +    hvm_update_ioreq_server_evtchn(s);
> +
> +    return 0;
> +}
> +
> +static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> +{
> +    struct domain *d = s->domain;
> +
> +    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);

Double space.

> +}
> +
>  int hvm_domain_initialise(struct domain *d)
>  {
>      int rc;
> @@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> +    rc = hvm_init_ioreq_server(d);
> +    if ( rc != 0 )
> +        goto fail2;
>  
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail2;
> +        goto fail3;
>  
>      return 0;
>  
> + fail3:
> +    hvm_deinit_ioreq_server(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> +    hvm_deinit_ioreq_server(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  {
>      int rc;
>      struct domain *d = v->domain;
> -    domid_t dm_domid;
> +    struct hvm_ioreq_server *s;
>  
>      hvm_asid_flush_vcpu(v);
>  
> @@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> +    s = d->arch.hvm_domain.ioreq_server;
>  
> -    /* Create ioreq event channel. */
> -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> +    rc = hvm_ioreq_server_add_vcpu(s, v);
>      if ( rc < 0 )
>          goto fail6;
>  
> -    /* Register ioreq event channel. */
> -    v->arch.hvm_vcpu.xen_port = rc;
> -
> -    if ( v->vcpu_id == 0 )
> -    {
> -        /* Create bufioreq event channel. */
> -        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> -        if ( rc < 0 )
> -            goto fail6;
> -        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
> -    }
> -
> -    spin_lock(&d->arch.hvm_domain.ioreq.lock);
> -    if ( d->arch.hvm_domain.ioreq.va != NULL )
> -        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
> -
>      if ( v->vcpu_id == 0 )
>      {
>          /* NB. All these really belong in hvm_domain_initialise(). */
> @@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +
> +    hvm_ioreq_server_remove_vcpu(s, v);
> +
>      nestedhvm_vcpu_destroy(v);
>  
>      free_compat_arg_xlat(v);
> @@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
>          vlapic_destroy(v);
>  
>      hvm_funcs.vcpu_destroy(v);
> -
> -    /* Event channel is already freed by evtchn_destroy(). */
> -    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
>  }
>  
>  void hvm_vcpu_down(struct vcpu *v)
> @@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
>  int hvm_buffered_io_send(ioreq_t *p)
>  {
>      struct vcpu *v = current;
> -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> -    buffered_iopage_t *pg = iorp->va;
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +    struct hvm_ioreq_page *iorp;
> +    buffered_iopage_t *pg;
>      buf_ioreq_t bp;
>      /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
>      int qw = 0;
> @@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
> +        return 0;
> +
> +    iorp = &s->buf_ioreq;
> +    pg = iorp->va;
> +
>      /*
>       * Return 0 for the cases we can't deal with:
>       *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> @@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
>      wmb();
>      pg->write_pointer += qw ? 2 : 1;
>  
> -    notify_via_xen_event_channel(v->domain,
> -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> +    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
>      spin_unlock(&iorp->lock);
>      
>      return 1;
> @@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>  {
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
>      ioreq_t *p;
>  
>      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
>          return 0; /* implicitly bins the i/o operation */
>  
> -    if ( !(p = get_ioreq(v)) )
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
>          return 0;
>  
> +    p = get_ioreq(s, v->vcpu_id);
> +
>      if ( unlikely(p->state != STATE_IOREQ_NONE) )
>      {
>          /* This indicates a bug in the device model. Crash the domain. */
>          gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p->state);
> -        domain_crash(v->domain);
> +        domain_crash(d);
>          return 0;
>      }
>  
> +    v->arch.hvm_vcpu.ioreq_server = s;
> +
>      p->dir = proto_p->dir;
>      p->data_is_ptr = proto_p->data_is_ptr;
>      p->type = proto_p->type;
> @@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>      p->df = proto_p->df;
>      p->data = proto_p->data;
>  
> -    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> +    prepare_wait_on_xen_event_channel(p->vp_eport);
>  
>      /*
>       * Following happens /after/ blocking and setting up ioreq contents.
>       * prepare_wait_on_xen_event_channel() is an implicit barrier.
>       */
>      p->state = STATE_IOREQ_READY;
> -    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
> +    notify_via_xen_event_channel(d, p->vp_eport);
>  
>      return 1;
>  }
> @@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
>      return 0;
>  }
>  
> -static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> -                                     int *p_port)
> -{
> -    int old_port, new_port;
> -
> -    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> -    if ( new_port < 0 )
> -        return new_port;
> -
> -    /* xchg() ensures that only we call free_xen_event_channel(). */
> -    old_port = xchg(p_port, new_port);
> -    free_xen_event_channel(v, old_port);
> -    return 0;
> -}
> -
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case HVMOP_get_param:
>      {
>          struct xen_hvm_param a;
> -        struct hvm_ioreq_page *iorp;
> +        struct hvm_ioreq_server *s;
>          struct domain *d;
>          struct vcpu *v;
>  
> @@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( rc )
>              goto param_fail;
>  
> +        s = d->arch.hvm_domain.ioreq_server;
> +

This should be reduced in lexical scope; I would say it can simply be
'inlined' into each of the 4 later uses.
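E.g. (untested sketch, reusing the accessor already present in this patch):

```c
case HVM_PARAM_IOREQ_PFN:
    rc = hvm_set_ioreq_server_pfn(d->arch.hvm_domain.ioreq_server, a.value);
    break;
```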

>          if ( op == HVMOP_set_param )
>          {
>              rc = 0;
> @@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              switch ( a.index )
>              {
>              case HVM_PARAM_IOREQ_PFN:
> -                iorp = &d->arch.hvm_domain.ioreq;
> -                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
> -                    break;
> -                spin_lock(&iorp->lock);
> -                if ( iorp->va != NULL )
> -                    /* Initialise evtchn port info if VCPUs already created. */
> -                    for_each_vcpu ( d, v )
> -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -                spin_unlock(&iorp->lock);
> +                rc = hvm_set_ioreq_server_pfn(s, a.value);
>                  break;
>              case HVM_PARAM_BUFIOREQ_PFN: 
> -                iorp = &d->arch.hvm_domain.buf_ioreq;
> -                rc = hvm_set_ioreq_page(d, iorp, a.value);
> +                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
>                  break;
>              case HVM_PARAM_CALLBACK_IRQ:
>                  hvm_set_callback_via(d, a.value);
> @@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = 0;
> -                domain_pause(d); /* safe to change per-vcpu xen_port */
> -                if ( d->vcpu[0] )
> -                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
> -                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
> -                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
> -                if ( rc )
> -                {
> -                    domain_unpause(d);
> -                    break;
> -                }
> -                iorp = &d->arch.hvm_domain.ioreq;
> -                for_each_vcpu ( d, v )
> -                {
> -                    rc = hvm_replace_event_channel(v, a.value,
> -                                                   &v->arch.hvm_vcpu.xen_port);
> -                    if ( rc )
> -                        break;
> -
> -                    spin_lock(&iorp->lock);
> -                    if ( iorp->va != NULL )
> -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> -                    spin_unlock(&iorp->lock);
> -                }
> -                domain_unpause(d);
> +                rc = hvm_set_ioreq_server_domid(s, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4241,6 +4360,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          {
>              switch ( a.index )
>              {
> +            case HVM_PARAM_BUFIOREQ_EVTCHN:
> +                a.value = s->buf_ioreq_evtchn;
> +                break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
>                  break;
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index b1e3187..4c039f8 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -41,10 +41,17 @@ struct hvm_ioreq_page {
>      void *va;
>  };
>  
> -struct hvm_domain {
> +struct hvm_ioreq_server {
> +    struct domain          *domain;
> +    domid_t                domid;
>      struct hvm_ioreq_page  ioreq;
> +    int                    ioreq_evtchn[MAX_HVM_VCPUS];
>      struct hvm_ioreq_page  buf_ioreq;
> +    int                    buf_ioreq_evtchn;
> +};
>  
> +struct hvm_domain {
> +    struct hvm_ioreq_server *ioreq_server;
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
> index 122ab0d..4c9d7ee 100644
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -138,7 +138,7 @@ struct hvm_vcpu {
>      spinlock_t          tm_lock;
>      struct list_head    tm_list;
>  
> -    int                 xen_port;
> +    struct hvm_ioreq_server *ioreq_server;
>  

Why do both hvm_vcpu and hvm_domain need ioreq_server pointers?  I can't
spot anything which actually uses the vcpu one.

~Andrew

>      bool_t              flag_dr_dirty;
>      bool_t              debug_state_latch;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:06:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tCS-0005eZ-3n; Thu, 30 Jan 2014 15:06:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8tCP-0005eB-Re
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:06:38 +0000
Received: from [85.158.139.211:60112] by server-3.bemta-5.messagelabs.com id
	14/F8-13671-C7A6AE25; Thu, 30 Jan 2014 15:06:36 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391094393!646054!1
X-Originating-IP: [64.18.0.20]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,HOT_NASTY,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9769 invoked from network); 30 Jan 2014 15:06:35 -0000
Received: from exprod5og110.obsmtp.com (HELO exprod5og110.obsmtp.com)
	(64.18.0.20)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 15:06:35 -0000
Received: from mail-vb0-f52.google.com ([209.85.212.52]) (using TLSv1) by
	exprod5ob110.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUupqeSA58fqOZ8zwfb2EBDR/jvxvXrbj@postini.com;
	Thu, 30 Jan 2014 07:06:35 PST
Received: by mail-vb0-f52.google.com with SMTP id p14so2119983vbm.11
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 07:06:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=nCx2V+JO82T/CfxUH2xdE2K7D7lqMQrJhav59PD0/XY=;
	b=UZwgzLVPs7Q4HnA2tOxot0jn1jhNfzN9kpiFLNveInOvwpIwG3yPKGYDU4uXeYEfU2
	8LeXCkcx4jpAGflRXJi2rM/osXck10/2+xLGj+9bUJDmiNVqdsyk5LIAnJhnXSVJweEs
	kyIkA/W8P1zpsdBlw5lnWlr0Q6iINZLk5FDI8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=nCx2V+JO82T/CfxUH2xdE2K7D7lqMQrJhav59PD0/XY=;
	b=T4esXDYmGMPENY+AzWS5sQMRjIyQmrl6WjeJEy/fxEZSxjnfqetsfej90mdqfubMkP
	lxJK1ZxP+Kkz5Ei+viTL0qrvpL0iFbwC5Yac+neVeBs/kPTt2Fj4GXYDkQKAQSJBDPbG
	nbLggwIGRmV/yGagRaHFrdfkyYVZjH+5Ctft75tPTWTyIby6MkuIw7dNz+4QKty8dndC
	y2+oMDEpQwdW5hgt2fmffDBKVw5vseKvjH4xutM7IKgmelNO6jbxxd0+q7NtUt4RcX9+
	EXBFvPnEyoOGDgr67pEYwbeN67ELirA1k8BuDowFwVmi1ePhBsSLStPEiMaB2EnW2JVy
	UpuA==
X-Gm-Message-State: ALoCoQkw55miwMU53CU6LsQ/kAo7NrLE2dLRAHiiEGScknIfvw/LUGk9zkWvb2tvLf5PaOsSMReI07a98qLBvOblLgZEj5nE0UKirRS2W7QUcrtFVc0nimI4mRoVHdK3RxamVQCPp1ftEE1BzwgZUbUgZawJJdVxYqZA8/oCtwbMQFqkFYOu3JI=
X-Received: by 10.220.97.145 with SMTP id l17mr265412vcn.35.1391094392725;
	Thu, 30 Jan 2014 07:06:32 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.97.145 with SMTP id l17mr265385vcn.35.1391094392351;
	Thu, 30 Jan 2014 07:06:32 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 07:06:32 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 17:06:32 +0200
Message-ID: <CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)

On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> Is it a level or an edge irq?
>
> On Wed, 29 Jan 2014, Julien Grall wrote:
>> Hi,
>>
>> It's weird: a physical IRQ should not be injected twice...
>> Were you able to print the IRQ number?
>>
>> In any case, you are using an old version of the interrupt patch series.
>> Your new error may come from a race condition in this code.
>>
>> Can you try the newest version?
>>
>> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>>       > difference for xen-unstable (it should make things clearer, if nothing
>>       > else) but it should fix things for Oleksandr.
>>
>>       Unfortunately, it is not enough for stable work.
>>
>>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>>       which causes the deadlock in the on_selected_cpus function (as expected).
>>       But the hypervisor sometimes hangs somewhere else (I have not yet
>>       identified where this happens), or I sometimes see traps like the one
>>       below (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
>>
>>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>>       (XEN) CPU:    1
>>       (XEN) PC:     00242c1c __warn+0x20/0x28
>>       (XEN) CPSR:   200001da MODE:Hypervisor
>>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>>       (XEN)
>>       (XEN)   VTCR_EL2: 80002558
>>       (XEN)  VTTBR_EL2: 00020000dec6a000
>>       (XEN)
>>       (XEN)  SCTLR_EL2: 30cd187f
>>       (XEN)    HCR_EL2: 00000000000028b5
>>       (XEN)  TTBR0_EL2: 00000000d2014000
>>       (XEN)
>>       (XEN)    ESR_EL2: 00000000
>>       (XEN)  HPFAR_EL2: 0000000000482110
>>       (XEN)      HDFAR: fa211190
>>       (XEN)      HIFAR: 00000000
>>       (XEN)
>>       (XEN) Xen stack trace from sp=4bfd7eb4:
>>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>>       (XEN)    ffeffbfe fedeefff fffd5ffe
>>       (XEN) Xen call trace:
>>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>>       (XEN)    [<00251830>] return_from_trap+0/0x4
>>       (XEN)
>>
>>       Also I am posting maintenance_interrupt() from my tree:
>>
>>       static void maintenance_interrupt(int irq, void *dev_id, struct
>>       cpu_user_regs *regs)
>>       {
>>           int i = 0, virq, pirq;
>>           uint32_t lr;
>>           struct vcpu *v = current;
>>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>>
>>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>>                                     64, i)) < 64) {
>>               struct pending_irq *p, *n;
>>               int cpu, eoi;
>>
>>               cpu = -1;
>>               eoi = 0;
>>
>>               spin_lock_irq(&gic.lock);
>>               lr = GICH[GICH_LR + i];
>>               virq = lr & GICH_LR_VIRTUAL_MASK;
>>
>>               p = irq_to_pending(v, virq);
>>               if ( p->desc != NULL ) {
>>                   p->desc->status &= ~IRQ_INPROGRESS;
>>                   /* Assume only one pcpu needs to EOI the irq */
>>                   cpu = p->desc->arch.eoi_cpu;
>>                   eoi = 1;
>>                   pirq = p->desc->irq;
>>               }
>>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>>               {
>>               /* Physical IRQs can't be reinjected */
>>                   WARN_ON(p->desc != NULL);
>>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>                   spin_unlock_irq(&gic.lock);
>>                   i++;
>>                   continue;
>>               }
>>
>>               GICH[GICH_LR + i] = 0;
>>               clear_bit(i, &this_cpu(lr_mask));
>>
>>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>>                   list_del_init(&n->lr_queue);
>>                   set_bit(i, &this_cpu(lr_mask));
>>               } else {
>>                   gic_inject_irq_stop();
>>               }
>>               spin_unlock_irq(&gic.lock);
>>
>>               spin_lock_irq(&v->arch.vgic.lock);
>>               list_del_init(&p->inflight);
>>               spin_unlock_irq(&v->arch.vgic.lock);
>>
>>               if ( eoi ) {
>>                   /* this is not racy because we can't receive another irq of the
>>                    * same type until we EOI it.  */
>>                   if ( cpu == smp_processor_id() )
>>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>>                   else
>>                       on_selected_cpus(cpumask_of(cpu),
>>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>>               }
>>
>>               i++;
>>           }
>>       }
>>
>>
>>       Oleksandr Tyshchenko | Embedded Developer
>>       GlobalLogic
>>
>>
>>





From xen-devel-bounces@lists.xen.org Thu Jan 30 15:06:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tCS-0005eZ-3n; Thu, 30 Jan 2014 15:06:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8tCP-0005eB-Re
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:06:38 +0000
Received: from [85.158.139.211:60112] by server-3.bemta-5.messagelabs.com id
	14/F8-13671-C7A6AE25; Thu, 30 Jan 2014 15:06:36 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391094393!646054!1
X-Originating-IP: [64.18.0.20]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,HOT_NASTY,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9769 invoked from network); 30 Jan 2014 15:06:35 -0000
Received: from exprod5og110.obsmtp.com (HELO exprod5og110.obsmtp.com)
	(64.18.0.20)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 15:06:35 -0000
Received: from mail-vb0-f52.google.com ([209.85.212.52]) (using TLSv1) by
	exprod5ob110.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUupqeSA58fqOZ8zwfb2EBDR/jvxvXrbj@postini.com;
	Thu, 30 Jan 2014 07:06:35 PST
Received: by mail-vb0-f52.google.com with SMTP id p14so2119983vbm.11
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 07:06:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=nCx2V+JO82T/CfxUH2xdE2K7D7lqMQrJhav59PD0/XY=;
	b=UZwgzLVPs7Q4HnA2tOxot0jn1jhNfzN9kpiFLNveInOvwpIwG3yPKGYDU4uXeYEfU2
	8LeXCkcx4jpAGflRXJi2rM/osXck10/2+xLGj+9bUJDmiNVqdsyk5LIAnJhnXSVJweEs
	kyIkA/W8P1zpsdBlw5lnWlr0Q6iINZLk5FDI8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=nCx2V+JO82T/CfxUH2xdE2K7D7lqMQrJhav59PD0/XY=;
	b=T4esXDYmGMPENY+AzWS5sQMRjIyQmrl6WjeJEy/fxEZSxjnfqetsfej90mdqfubMkP
	lxJK1ZxP+Kkz5Ei+viTL0qrvpL0iFbwC5Yac+neVeBs/kPTt2Fj4GXYDkQKAQSJBDPbG
	nbLggwIGRmV/yGagRaHFrdfkyYVZjH+5Ctft75tPTWTyIby6MkuIw7dNz+4QKty8dndC
	y2+oMDEpQwdW5hgt2fmffDBKVw5vseKvjH4xutM7IKgmelNO6jbxxd0+q7NtUt4RcX9+
	EXBFvPnEyoOGDgr67pEYwbeN67ELirA1k8BuDowFwVmi1ePhBsSLStPEiMaB2EnW2JVy
	UpuA==
X-Gm-Message-State: ALoCoQkw55miwMU53CU6LsQ/kAo7NrLE2dLRAHiiEGScknIfvw/LUGk9zkWvb2tvLf5PaOsSMReI07a98qLBvOblLgZEj5nE0UKirRS2W7QUcrtFVc0nimI4mRoVHdK3RxamVQCPp1ftEE1BzwgZUbUgZawJJdVxYqZA8/oCtwbMQFqkFYOu3JI=
X-Received: by 10.220.97.145 with SMTP id l17mr265412vcn.35.1391094392725;
	Thu, 30 Jan 2014 07:06:32 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.97.145 with SMTP id l17mr265385vcn.35.1391094392351;
	Thu, 30 Jan 2014 07:06:32 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 07:06:32 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 17:06:32 +0200
Message-ID: <CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

According to the DT it is a level IRQ (DT_IRQ_TYPE_LEVEL_HIGH).

On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> Is it a level or an edge irq?
>
> On Wed, 29 Jan 2014, Julien Grall wrote:
>> Hi,
>>
>> It's weird; a physical IRQ should not be injected twice ...
>> Were you able to print the IRQ number?
>>
>> In any case, you are using an old version of the interrupt patch series.
>> Your new error may come from a race condition in this code.
>>
>> Can you try the newest version?
>>
>> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>>       > difference for xen-unstable (it should make things clearer, if nothing
>>       > else) but it should fix things for Oleksandr.
>>
>>       Unfortunately, it is not enough for stable operation.
>>
>>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>>       that led to the deadlock in the on_selected_cpus function (as expected).
>>       However, the hypervisor sometimes hangs somewhere else (I have not yet
>>       identified where this happens), or I sometimes see traps like the
>>       following (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt()
>>       triggers them):
>>
>>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>>       (XEN) CPU:    1
>>       (XEN) PC:     00242c1c __warn+0x20/0x28
>>       (XEN) CPSR:   200001da MODE:Hypervisor
>>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>>       (XEN)
>>       (XEN)   VTCR_EL2: 80002558
>>       (XEN)  VTTBR_EL2: 00020000dec6a000
>>       (XEN)
>>       (XEN)  SCTLR_EL2: 30cd187f
>>       (XEN)    HCR_EL2: 00000000000028b5
>>       (XEN)  TTBR0_EL2: 00000000d2014000
>>       (XEN)
>>       (XEN)    ESR_EL2: 00000000
>>       (XEN)  HPFAR_EL2: 0000000000482110
>>       (XEN)      HDFAR: fa211190
>>       (XEN)      HIFAR: 00000000
>>       (XEN)
>>       (XEN) Xen stack trace from sp=4bfd7eb4:
>>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>>       (XEN)    ffeffbfe fedeefff fffd5ffe
>>       (XEN) Xen call trace:
>>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>>       (XEN)    [<00251830>] return_from_trap+0/0x4
>>       (XEN)
>>
>>       I am also posting maintenance_interrupt() from my tree:
>>
>>       static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>>       {
>>           int i = 0, virq, pirq;
>>           uint32_t lr;
>>           struct vcpu *v = current;
>>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>>
>>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>>                                     64, i)) < 64) {
>>               struct pending_irq *p, *n;
>>               int cpu, eoi;
>>
>>               cpu = -1;
>>               eoi = 0;
>>
>>               spin_lock_irq(&gic.lock);
>>               lr = GICH[GICH_LR + i];
>>               virq = lr & GICH_LR_VIRTUAL_MASK;
>>
>>               p = irq_to_pending(v, virq);
>>               if ( p->desc != NULL ) {
>>                   p->desc->status &= ~IRQ_INPROGRESS;
>>                   /* Assume only one pcpu needs to EOI the irq */
>>                   cpu = p->desc->arch.eoi_cpu;
>>                   eoi = 1;
>>                   pirq = p->desc->irq;
>>               }
>>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>>               {
>>                   /* A physical IRQ can't be reinjected */
>>                   WARN_ON(p->desc != NULL);
>>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>>                   spin_unlock_irq(&gic.lock);
>>                   i++;
>>                   continue;
>>               }
>>
>>               GICH[GICH_LR + i] = 0;
>>               clear_bit(i, &this_cpu(lr_mask));
>>
>>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>>                   list_del_init(&n->lr_queue);
>>                   set_bit(i, &this_cpu(lr_mask));
>>               } else {
>>                   gic_inject_irq_stop();
>>               }
>>               spin_unlock_irq(&gic.lock);
>>
>>               spin_lock_irq(&v->arch.vgic.lock);
>>               list_del_init(&p->inflight);
>>               spin_unlock_irq(&v->arch.vgic.lock);
>>
>>               if ( eoi ) {
>>                   /* this is not racy because we can't receive another irq of the
>>                    * same type until we EOI it.  */
>>                   if ( cpu == smp_processor_id() )
>>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>>                   else
>>                       on_selected_cpus(cpumask_of(cpu),
>>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>>               }
>>
>>               i++;
>>           }
>>       }
>>
>>
>>       Oleksandr Tyshchenko | Embedded Developer
>>       GlobalLogic
>>
>>
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:18:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tNP-0006CH-Ik; Thu, 30 Jan 2014 15:17:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8tNM-0006CC-VR
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:17:57 +0000
Received: from [85.158.139.211:45555] by server-3.bemta-5.messagelabs.com id
	74/0E-13671-42D6AE25; Thu, 30 Jan 2014 15:17:56 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391095071!638049!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16583 invoked from network); 30 Jan 2014 15:17:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:17:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96162324"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:17:51 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 10:17:50 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 16:17:49 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
	server abstraction.
Thread-Index: AQHPHcZaQkXrYw5nnkO937nxcase55qdTKOAgAARfhA=
Date: Thu, 30 Jan 2014 15:17:48 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02178CB@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
	<52EA69D3.4070600@citrix.com>
In-Reply-To: <52EA69D3.4070600@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
 server abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 15:04
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
> server abstraction.
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > Collect together data structures concerning device emulation together into
> > a new struct hvm_ioreq_server.
> >
> > Code that deals with the shared and buffered ioreq pages is extracted from
> > functions such as hvm_domain_initialise, hvm_vcpu_initialise and
> do_hvm_op
> > and consolidated into a set of hvm_ioreq_server_XXX functions.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++----
> --------
> >  xen/include/asm-x86/hvm/domain.h |    9 +-
> >  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
> >  3 files changed, 229 insertions(+), 100 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 71a44db..a0eaadb 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >      spin_unlock(&d->event_lock);
> >  }
> >
> > -static ioreq_t *get_ioreq(struct vcpu *v)
> > +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
> >  {
> > -    struct domain *d = v->domain;
> > -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > -    ASSERT((v == current) || spin_is_locked(&d-
> >arch.hvm_domain.ioreq.lock));
> > -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > +    shared_iopage_t *p = s->ioreq.va;
> > +    ASSERT(p != NULL);
> > +    return &p->vcpu_ioreq[id];
> >  }
> >
> >  void hvm_do_resume(struct vcpu *v)
> >  {
> > +    struct hvm_ioreq_server *s;
> >      ioreq_t *p;
> >
> >      check_wakeup_from_wait();
> > @@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
> >      if ( is_hvm_vcpu(v) )
> >          pt_restore_timer(v);
> >
> > -    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE).
> */
> > -    if ( !(p = get_ioreq(v)) )
> > +    s = v->arch.hvm_vcpu.ioreq_server;
> 
> This assignment can be part of the declaration of 's' (and likewise in
> most later examples).
> 

Whilst that's true, it would make the subsequent patch, where we move to using lists, less obvious.

> > +    v->arch.hvm_vcpu.ioreq_server = NULL;
> > +
> > +    if ( !s )
> >          goto check_inject_trap;
> >
> > +    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE).
> */
> > +    p = get_ioreq(s, v->vcpu_id);
> >      while ( p->state != STATE_IOREQ_NONE )
> >      {
> >          switch ( p->state )
> > @@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
> >              break;
> >          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} ->
> IORESP_READY */
> >          case STATE_IOREQ_INPROCESS:
> > -            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
> > +            wait_on_xen_event_channel(p->vp_eport,
> >                                        (p->state != STATE_IOREQ_READY) &&
> >                                        (p->state != STATE_IOREQ_INPROCESS));
> >              break;
> > @@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
> >  static void hvm_init_ioreq_page(
> >      struct domain *d, struct hvm_ioreq_page *iorp)
> >  {
> > -    memset(iorp, 0, sizeof(*iorp));
> 
> Is it worth keeping this function? The two back-to-back
> domain_pause() calls from the callers are redundant.
> 

It actually becomes just the spin_lock_init() in a subsequent patch. I left it this way as I did not want to take too many steps in one go.

> >      spin_lock_init(&iorp->lock);
> >      domain_pause(d);
> >  }
> > @@ -541,6 +544,167 @@ static int handle_pvh_io(
> >      return X86EMUL_OKAY;
> >  }
> >
> > +static int hvm_init_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int i;
> > +
> > +    s = xzalloc(struct hvm_ioreq_server);
> > +    if ( !s )
> > +        return -ENOMEM;
> > +
> > +    s->domain = d;
> > +
> > +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > +        s->ioreq_evtchn[i] = -1;
> > +    s->buf_ioreq_evtchn = -1;
> > +
> > +    hvm_init_ioreq_page(d, &s->ioreq);
> > +    hvm_init_ioreq_page(d, &s->buf_ioreq);
> > +
> > +    d->arch.hvm_domain.ioreq_server = s;
> > +    return 0;
> > +}
> > +
> > +static void hvm_deinit_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    hvm_destroy_ioreq_page(d, &s->ioreq);
> > +    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> > +
> > +    xfree(s);
> > +}
> > +
> > +static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server
> *s)
> > +{
> > +    struct domain *d = s->domain;
> > +
> > +    if ( s->ioreq.va != NULL )
> > +    {
> > +        shared_iopage_t *p = s->ioreq.va;
> > +        struct vcpu *v;
> > +
> > +        for_each_vcpu ( d, v )
> > +            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v-
> >vcpu_id];
> > +    }
> > +}
> > +
> > +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> struct vcpu *v)
> > +{
> > +    int rc;
> > +
> > +    /* Create ioreq event channel. */
> > +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> > +    if ( rc < 0 )
> > +        goto done;
> > +
> > +    /* Register ioreq event channel. */
> > +    s->ioreq_evtchn[v->vcpu_id] = rc;
> > +
> > +    if ( v->vcpu_id == 0 )
> > +    {
> > +        /* Create bufioreq event channel. */
> > +        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> > +        if ( rc < 0 )
> > +            goto done;
> 
> Skipping hvm_update_ioreq_server_evtchn() even in the case of a
> successful ioreq event channel allocation?
> 

Yes, because the vcpu creation will fail.

> > +
> > +        s->buf_ioreq_evtchn = rc;
> > +    }
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +    rc = 0;
> > +
> > +done:
> > +    return rc;
> > +}
> > +
> > +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
> struct vcpu *v)
> > +{
> > +    if ( v->vcpu_id == 0 )
> > +    {
> > +        if ( s->buf_ioreq_evtchn >= 0 )
> > +        {
> > +            free_xen_event_channel(v, s->buf_ioreq_evtchn);
> > +            s->buf_ioreq_evtchn = -1;
> > +        }
> > +    }
> > +
> > +    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
> > +    {
> > +        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
> > +        s->ioreq_evtchn[v->vcpu_id] = -1;
> > +    }
> > +}
> > +
> > +static int hvm_replace_event_channel(struct vcpu *v, domid_t
> remote_domid,
> > +                                     int *p_port)
> > +{
> > +    int old_port, new_port;
> > +
> > +    new_port = alloc_unbound_xen_event_channel(v, remote_domid,
> NULL);
> > +    if ( new_port < 0 )
> > +        return new_port;
> > +
> > +    /* xchg() ensures that only we call free_xen_event_channel(). */
> > +    old_port = xchg(p_port, new_port);
> > +    free_xen_event_channel(v, old_port);
> > +    return 0;
> > +}
> > +
> > +static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s,
> domid_t domid)
> > +{
> > +    struct domain *d = s->domain;
> > +    struct vcpu *v;
> > +    int rc = 0;
> > +
> > +    domain_pause(d);
> > +
> > +    if ( d->vcpu[0] )
> > +    {
> > +        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s-
> >buf_ioreq_evtchn);
> > +        if ( rc < 0 )
> > +            goto done;
> > +    }
> > +
> > +    for_each_vcpu ( d, v )
> > +    {
> > +        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v-
> >vcpu_id]);
> > +        if ( rc < 0 )
> > +            goto done;
> > +    }
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +
> > +    s->domid = domid;
> > +
> > +done:
> > +    domain_unpause(d);
> > +
> > +    return rc;
> > +}
> > +
> > +static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s,
> unsigned long pfn)
> > +{
> > +    struct domain *d = s->domain;
> > +    int rc;
> > +
> > +    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> > +    if ( rc < 0 )
> > +        return rc;
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +
> > +    return 0;
> > +}
> > +
> > +static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s,
> unsigned long pfn)
> > +{
> > +    struct domain *d = s->domain;
> > +
> > +    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> 
> Double space.
> 
> > +}
> > +
> >  int hvm_domain_initialise(struct domain *d)
> >  {
> >      int rc;
> > @@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      rtc_init(d);
> >
> > -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> > -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> > +    rc = hvm_init_ioreq_server(d);
> > +    if ( rc != 0 )
> > +        goto fail2;
> >
> >      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> >
> >      rc = hvm_funcs.domain_initialise(d);
> >      if ( rc != 0 )
> > -        goto fail2;
> > +        goto fail3;
> >
> >      return 0;
> >
> > + fail3:
> > +    hvm_deinit_ioreq_server(d);
> >   fail2:
> >      rtc_deinit(d);
> >      stdvga_deinit(d);
> > @@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct
> domain *d)
> >      if ( hvm_funcs.nhvm_domain_relinquish_resources )
> >          hvm_funcs.nhvm_domain_relinquish_resources(d);
> >
> > -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> > -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> > +    hvm_deinit_ioreq_server(d);
> >
> >      msixtbl_pt_cleanup(d);
> >
> > @@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >  {
> >      int rc;
> >      struct domain *d = v->domain;
> > -    domid_t dm_domid;
> > +    struct hvm_ioreq_server *s;
> >
> >      hvm_asid_flush_vcpu(v);
> >
> > @@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown:
> nestedhvm_vcpu_destroy */
> >          goto fail5;
> >
> > -    dm_domid = d-
> >arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> > +    s = d->arch.hvm_domain.ioreq_server;
> >
> > -    /* Create ioreq event channel. */
> > -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /*
> teardown: none */
> > +    rc = hvm_ioreq_server_add_vcpu(s, v);
> >      if ( rc < 0 )
> >          goto fail6;
> >
> > -    /* Register ioreq event channel. */
> > -    v->arch.hvm_vcpu.xen_port = rc;
> > -
> > -    if ( v->vcpu_id == 0 )
> > -    {
> > -        /* Create bufioreq event channel. */
> > -        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /*
> teardown: none */
> > -        if ( rc < 0 )
> > -            goto fail6;
> > -        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
> rc;
> > -    }
> > -
> > -    spin_lock(&d->arch.hvm_domain.ioreq.lock);
> > -    if ( d->arch.hvm_domain.ioreq.va != NULL )
> > -        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> > -    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
> > -
> >      if ( v->vcpu_id == 0 )
> >      {
> >          /* NB. All these really belong in hvm_domain_initialise(). */
> > @@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >
> >  void hvm_vcpu_destroy(struct vcpu *v)
> >  {
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    hvm_ioreq_server_remove_vcpu(s, v);
> > +
> >      nestedhvm_vcpu_destroy(v);
> >
> >      free_compat_arg_xlat(v);
> > @@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
> >          vlapic_destroy(v);
> >
> >      hvm_funcs.vcpu_destroy(v);
> > -
> > -    /* Event channel is already freed by evtchn_destroy(). */
> > -    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
> >  }
> >
> >  void hvm_vcpu_down(struct vcpu *v)
> > @@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
> >  int hvm_buffered_io_send(ioreq_t *p)
> >  {
> >      struct vcpu *v = current;
> > -    struct hvm_ioreq_page *iorp = &v->domain-
> >arch.hvm_domain.buf_ioreq;
> > -    buffered_iopage_t *pg = iorp->va;
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +    struct hvm_ioreq_page *iorp;
> > +    buffered_iopage_t *pg;
> >      buf_ioreq_t bp;
> >      /* Timeoffset sends 64b data, but no address. Use two consecutive
> slots. */
> >      int qw = 0;
> > @@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      /* Ensure buffered_iopage fits in a page */
> >      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> >
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> > +        return 0;
> > +
> > +    iorp = &s->buf_ioreq;
> > +    pg = iorp->va;
> > +
> >      /*
> >       * Return 0 for the cases we can't deal with:
> >       *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > @@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      wmb();
> >      pg->write_pointer += qw ? 2 : 1;
> >
> > -    notify_via_xen_event_channel(v->domain,
> > -            v->domain-
> >arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > +    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
> >      spin_unlock(&iorp->lock);
> >
> >      return 1;
> > @@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
> >
> >  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >  {
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> >      ioreq_t *p;
> >
> >      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> >          return 0; /* implicitly bins the i/o operation */
> >
> > -    if ( !(p = get_ioreq(v)) )
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> >          return 0;
> >
> > +    p = get_ioreq(s, v->vcpu_id);
> > +
> >      if ( unlikely(p->state != STATE_IOREQ_NONE) )
> >      {
> >          /* This indicates a bug in the device model. Crash the domain. */
> >          gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p-
> >state);
> > -        domain_crash(v->domain);
> > +        domain_crash(d);
> >          return 0;
> >      }
> >
> > +    v->arch.hvm_vcpu.ioreq_server = s;
> > +
> >      p->dir = proto_p->dir;
> >      p->data_is_ptr = proto_p->data_is_ptr;
> >      p->type = proto_p->type;
> > @@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v,
> ioreq_t *proto_p)
> >      p->df = proto_p->df;
> >      p->data = proto_p->data;
> >
> > -    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> > +    prepare_wait_on_xen_event_channel(p->vp_eport);
> >
> >      /*
> >       * Following happens /after/ blocking and setting up ioreq contents.
> >       * prepare_wait_on_xen_event_channel() is an implicit barrier.
> >       */
> >      p->state = STATE_IOREQ_READY;
> > -    notify_via_xen_event_channel(v->domain, v-
> >arch.hvm_vcpu.xen_port);
> > +    notify_via_xen_event_channel(d, p->vp_eport);
> >
> >      return 1;
> >  }
> > @@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
> >      return 0;
> >  }
> >
> > -static int hvm_replace_event_channel(struct vcpu *v, domid_t
> remote_domid,
> > -                                     int *p_port)
> > -{
> > -    int old_port, new_port;
> > -
> > -    new_port = alloc_unbound_xen_event_channel(v, remote_domid,
> NULL);
> > -    if ( new_port < 0 )
> > -        return new_port;
> > -
> > -    /* xchg() ensures that only we call free_xen_event_channel(). */
> > -    old_port = xchg(p_port, new_port);
> > -    free_xen_event_channel(v, old_port);
> > -    return 0;
> > -}
> > -
> >  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void)
> arg)
> >
> >  {
> > @@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >      case HVMOP_get_param:
> >      {
> >          struct xen_hvm_param a;
> > -        struct hvm_ioreq_page *iorp;
> > +        struct hvm_ioreq_server *s;
> >          struct domain *d;
> >          struct vcpu *v;
> >
> > @@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( rc )
> >              goto param_fail;
> >
> > +        s = d->arch.hvm_domain.ioreq_server;
> > +
> 
> This should be reduced in lexical scope, and I would have said that it
> can just be 'inlined' into each of the 4 uses later.
>

Again, it's done this way to make the patch sequencing work better.
 
> >          if ( op == HVMOP_set_param )
> >          {
> >              rc = 0;
> > @@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >              switch ( a.index )
> >              {
> >              case HVM_PARAM_IOREQ_PFN:
> > -                iorp = &d->arch.hvm_domain.ioreq;
> > -                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
> > -                    break;
> > -                spin_lock(&iorp->lock);
> > -                if ( iorp->va != NULL )
> > -                    /* Initialise evtchn port info if VCPUs already created. */
> > -                    for_each_vcpu ( d, v )
> > -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> > -                spin_unlock(&iorp->lock);
> > +                rc = hvm_set_ioreq_server_pfn(s, a.value);
> >                  break;
> >              case HVM_PARAM_BUFIOREQ_PFN:
> > -                iorp = &d->arch.hvm_domain.buf_ioreq;
> > -                rc = hvm_set_ioreq_page(d, iorp, a.value);
> > +                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> >                  break;
> >              case HVM_PARAM_CALLBACK_IRQ:
> >                  hvm_set_callback_via(d, a.value);
> > @@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  if ( a.value == DOMID_SELF )
> >                      a.value = curr_d->domain_id;
> >
> > -                rc = 0;
> > -                domain_pause(d); /* safe to change per-vcpu xen_port */
> > -                if ( d->vcpu[0] )
> > -                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
> > -                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
> > -                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
> > -                if ( rc )
> > -                {
> > -                    domain_unpause(d);
> > -                    break;
> > -                }
From xen-devel-bounces@lists.xen.org Thu Jan 30 15:18:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tNP-0006CH-Ik; Thu, 30 Jan 2014 15:17:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8tNM-0006CC-VR
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:17:57 +0000
Received: from [85.158.139.211:45555] by server-3.bemta-5.messagelabs.com id
	74/0E-13671-42D6AE25; Thu, 30 Jan 2014 15:17:56 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391095071!638049!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16583 invoked from network); 30 Jan 2014 15:17:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:17:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96162324"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:17:51 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 10:17:50 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 16:17:49 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
	server abstraction.
Thread-Index: AQHPHcZaQkXrYw5nnkO937nxcase55qdTKOAgAARfhA=
Date: Thu, 30 Jan 2014 15:17:48 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02178CB@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-3-git-send-email-paul.durrant@citrix.com>
	<52EA69D3.4070600@citrix.com>
In-Reply-To: <52EA69D3.4070600@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
 server abstraction.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 15:04
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 2/5] ioreq-server: create basic ioreq
> server abstraction.
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > Collect together data structures concerning device emulation into
> > a new struct hvm_ioreq_server.
> >
> > Code that deals with the shared and buffered ioreq pages is extracted from
> > functions such as hvm_domain_initialise, hvm_vcpu_initialise and do_hvm_op
> > and consolidated into a set of hvm_ioreq_server_XXX functions.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  xen/arch/x86/hvm/hvm.c           |  318 ++++++++++++++++++++++++++------------
> >  xen/include/asm-x86/hvm/domain.h |    9 +-
> >  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
> >  3 files changed, 229 insertions(+), 100 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 71a44db..a0eaadb 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -345,16 +345,16 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >      spin_unlock(&d->event_lock);
> >  }
> >
> > -static ioreq_t *get_ioreq(struct vcpu *v)
> > +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
> >  {
> > -    struct domain *d = v->domain;
> > -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> > -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > +    shared_iopage_t *p = s->ioreq.va;
> > +    ASSERT(p != NULL);
> > +    return &p->vcpu_ioreq[id];
> >  }
> >
> >  void hvm_do_resume(struct vcpu *v)
> >  {
> > +    struct hvm_ioreq_server *s;
> >      ioreq_t *p;
> >
> >      check_wakeup_from_wait();
> > @@ -362,10 +362,14 @@ void hvm_do_resume(struct vcpu *v)
> >      if ( is_hvm_vcpu(v) )
> >          pt_restore_timer(v);
> >
> > -    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> > -    if ( !(p = get_ioreq(v)) )
> > +    s = v->arch.hvm_vcpu.ioreq_server;
> 
> This assignment can be part of the declaration of 's' (and likewise in
> most later examples).
> 

Whilst that's true, it would make the subsequent patch, where we move to using lists, less obvious.

> > +    v->arch.hvm_vcpu.ioreq_server = NULL;
> > +
> > +    if ( !s )
> >          goto check_inject_trap;
> >
> > +    /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> > +    p = get_ioreq(s, v->vcpu_id);
> >      while ( p->state != STATE_IOREQ_NONE )
> >      {
> >          switch ( p->state )
> > @@ -375,7 +379,7 @@ void hvm_do_resume(struct vcpu *v)
> >              break;
> >          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
> >          case STATE_IOREQ_INPROCESS:
> > -            wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port,
> > +            wait_on_xen_event_channel(p->vp_eport,
> >                                        (p->state != STATE_IOREQ_READY) &&
> >                                        (p->state != STATE_IOREQ_INPROCESS));
> >              break;
> > @@ -398,7 +402,6 @@ void hvm_do_resume(struct vcpu *v)
> >  static void hvm_init_ioreq_page(
> >      struct domain *d, struct hvm_ioreq_page *iorp)
> >  {
> > -    memset(iorp, 0, sizeof(*iorp));
> 
> Is it worth keeping this function?  the two back to back
> domain_pause()'s from the callers are redundant.
> 

It actually becomes just the spin_lock_init() in a subsequent patch. I left it this way as I did not want to make too many steps in one go.

> >      spin_lock_init(&iorp->lock);
> >      domain_pause(d);
> >  }
> > @@ -541,6 +544,167 @@ static int handle_pvh_io(
> >      return X86EMUL_OKAY;
> >  }
> >
> > +static int hvm_init_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int i;
> > +
> > +    s = xzalloc(struct hvm_ioreq_server);
> > +    if ( !s )
> > +        return -ENOMEM;
> > +
> > +    s->domain = d;
> > +
> > +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > +        s->ioreq_evtchn[i] = -1;
> > +    s->buf_ioreq_evtchn = -1;
> > +
> > +    hvm_init_ioreq_page(d, &s->ioreq);
> > +    hvm_init_ioreq_page(d, &s->buf_ioreq);
> > +
> > +    d->arch.hvm_domain.ioreq_server = s;
> > +    return 0;
> > +}
> > +
> > +static void hvm_deinit_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    hvm_destroy_ioreq_page(d, &s->ioreq);
> > +    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> > +
> > +    xfree(s);
> > +}
> > +
> > +static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> > +{
> > +    struct domain *d = s->domain;
> > +
> > +    if ( s->ioreq.va != NULL )
> > +    {
> > +        shared_iopage_t *p = s->ioreq.va;
> > +        struct vcpu *v;
> > +
> > +        for_each_vcpu ( d, v )
> > +            p->vcpu_ioreq[v->vcpu_id].vp_eport = s->ioreq_evtchn[v->vcpu_id];
> > +    }
> > +}
> > +
> > +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> > +{
> > +    int rc;
> > +
> > +    /* Create ioreq event channel. */
> > +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> > +    if ( rc < 0 )
> > +        goto done;
> > +
> > +    /* Register ioreq event channel. */
> > +    s->ioreq_evtchn[v->vcpu_id] = rc;
> > +
> > +    if ( v->vcpu_id == 0 )
> > +    {
> > +        /* Create bufioreq event channel. */
> > +        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
> > +        if ( rc < 0 )
> > +            goto done;
> 
> skipping hvm_update_ioreq_server_evtchn() even in the case of a
> successful ioreq event channel?
> 

Yes, because the error is propagated back to the caller, so the vcpu creation will fail anyway.

> > +
> > +        s->buf_ioreq_evtchn = rc;
> > +    }
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +    rc = 0;
> > +
> > +done:
> > +    return rc;
> > +}
> > +
> > +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
> > +{
> > +    if ( v->vcpu_id == 0 )
> > +    {
> > +        if ( s->buf_ioreq_evtchn >= 0 )
> > +        {
> > +            free_xen_event_channel(v, s->buf_ioreq_evtchn);
> > +            s->buf_ioreq_evtchn = -1;
> > +        }
> > +    }
> > +
> > +    if ( s->ioreq_evtchn[v->vcpu_id] >= 0 )
> > +    {
> > +        free_xen_event_channel(v, s->ioreq_evtchn[v->vcpu_id]);
> > +        s->ioreq_evtchn[v->vcpu_id] = -1;
> > +    }
> > +}
> > +
> > +static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> > +                                     int *p_port)
> > +{
> > +    int old_port, new_port;
> > +
> > +    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> > +    if ( new_port < 0 )
> > +        return new_port;
> > +
> > +    /* xchg() ensures that only we call free_xen_event_channel(). */
> > +    old_port = xchg(p_port, new_port);
> > +    free_xen_event_channel(v, old_port);
> > +    return 0;
> > +}
> > +
> > +static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> > +{
> > +    struct domain *d = s->domain;
> > +    struct vcpu *v;
> > +    int rc = 0;
> > +
> > +    domain_pause(d);
> > +
> > +    if ( d->vcpu[0] )
> > +    {
> > +        rc = hvm_replace_event_channel(d->vcpu[0], domid, &s->buf_ioreq_evtchn);
> > +        if ( rc < 0 )
> > +            goto done;
> > +    }
> > +
> > +    for_each_vcpu ( d, v )
> > +    {
> > +        rc = hvm_replace_event_channel(v, domid, &s->ioreq_evtchn[v->vcpu_id]);
> > +        if ( rc < 0 )
> > +            goto done;
> > +    }
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +
> > +    s->domid = domid;
> > +
> > +done:
> > +    domain_unpause(d);
> > +
> > +    return rc;
> > +}
> > +
> > +static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > +{
> > +    struct domain *d = s->domain;
> > +    int rc;
> > +
> > +    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> > +    if ( rc < 0 )
> > +        return rc;
> > +
> > +    hvm_update_ioreq_server_evtchn(s);
> > +
> > +    return 0;
> > +}
> > +
> > +static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > +{
> > +    struct domain *d = s->domain;
> > +
> > +    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> 
> Double space.
> 
> > +}
> > +
> >  int hvm_domain_initialise(struct domain *d)
> >  {
> >      int rc;
> > @@ -608,17 +772,20 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      rtc_init(d);
> >
> > -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> > -    hvm_init_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> > +    rc = hvm_init_ioreq_server(d);
> > +    if ( rc != 0 )
> > +        goto fail2;
> >
> >      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> >
> >      rc = hvm_funcs.domain_initialise(d);
> >      if ( rc != 0 )
> > -        goto fail2;
> > +        goto fail3;
> >
> >      return 0;
> >
> > + fail3:
> > +    hvm_deinit_ioreq_server(d);
> >   fail2:
> >      rtc_deinit(d);
> >      stdvga_deinit(d);
> > @@ -642,8 +809,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
> >      if ( hvm_funcs.nhvm_domain_relinquish_resources )
> >          hvm_funcs.nhvm_domain_relinquish_resources(d);
> >
> > -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.ioreq);
> > -    hvm_destroy_ioreq_page(d, &d->arch.hvm_domain.buf_ioreq);
> > +    hvm_deinit_ioreq_server(d);
> >
> >      msixtbl_pt_cleanup(d);
> >
> > @@ -1155,7 +1321,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >  {
> >      int rc;
> >      struct domain *d = v->domain;
> > -    domid_t dm_domid;
> > +    struct hvm_ioreq_server *s;
> >
> >      hvm_asid_flush_vcpu(v);
> >
> > @@ -1198,30 +1364,12 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
> >          goto fail5;
> >
> > -    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> > +    s = d->arch.hvm_domain.ioreq_server;
> >
> > -    /* Create ioreq event channel. */
> > -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> > +    rc = hvm_ioreq_server_add_vcpu(s, v);
> >      if ( rc < 0 )
> >          goto fail6;
> >
> > -    /* Register ioreq event channel. */
> > -    v->arch.hvm_vcpu.xen_port = rc;
> > -
> > -    if ( v->vcpu_id == 0 )
> > -    {
> > -        /* Create bufioreq event channel. */
> > -        rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
> > -        if ( rc < 0 )
> > -            goto fail6;
> > -        d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] = rc;
> > -    }
> > -
> > -    spin_lock(&d->arch.hvm_domain.ioreq.lock);
> > -    if ( d->arch.hvm_domain.ioreq.va != NULL )
> > -        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> > -    spin_unlock(&d->arch.hvm_domain.ioreq.lock);
> > -
> >      if ( v->vcpu_id == 0 )
> >      {
> >          /* NB. All these really belong in hvm_domain_initialise(). */
> > @@ -1255,6 +1403,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >
> >  void hvm_vcpu_destroy(struct vcpu *v)
> >  {
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    hvm_ioreq_server_remove_vcpu(s, v);
> > +
> >      nestedhvm_vcpu_destroy(v);
> >
> >      free_compat_arg_xlat(v);
> > @@ -1266,9 +1419,6 @@ void hvm_vcpu_destroy(struct vcpu *v)
> >          vlapic_destroy(v);
> >
> >      hvm_funcs.vcpu_destroy(v);
> > -
> > -    /* Event channel is already freed by evtchn_destroy(). */
> > -    /*free_xen_event_channel(v, v->arch.hvm_vcpu.xen_port);*/
> >  }
> >
> >  void hvm_vcpu_down(struct vcpu *v)
> > @@ -1298,8 +1448,10 @@ void hvm_vcpu_down(struct vcpu *v)
> >  int hvm_buffered_io_send(ioreq_t *p)
> >  {
> >      struct vcpu *v = current;
> > -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> > -    buffered_iopage_t *pg = iorp->va;
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +    struct hvm_ioreq_page *iorp;
> > +    buffered_iopage_t *pg;
> >      buf_ioreq_t bp;
> >      /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> >      int qw = 0;
> > @@ -1307,6 +1459,13 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      /* Ensure buffered_iopage fits in a page */
> >      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> >
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> > +        return 0;
> > +
> > +    iorp = &s->buf_ioreq;
> > +    pg = iorp->va;
> > +
> >      /*
> >       * Return 0 for the cases we can't deal with:
> >       *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> > @@ -1367,8 +1526,7 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      wmb();
> >      pg->write_pointer += qw ? 2 : 1;
> >
> > -    notify_via_xen_event_channel(v->domain,
> > -            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> > +    notify_via_xen_event_channel(d, s->buf_ioreq_evtchn);
> >      spin_unlock(&iorp->lock);
> >
> >      return 1;
> > @@ -1376,22 +1534,29 @@ int hvm_buffered_io_send(ioreq_t *p)
> >
> >  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >  {
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> >      ioreq_t *p;
> >
> >      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> >          return 0; /* implicitly bins the i/o operation */
> >
> > -    if ( !(p = get_ioreq(v)) )
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> >          return 0;
> >
> > +    p = get_ioreq(s, v->vcpu_id);
> > +
> >      if ( unlikely(p->state != STATE_IOREQ_NONE) )
> >      {
> >          /* This indicates a bug in the device model. Crash the domain. */
> >          gdprintk(XENLOG_ERR, "Device model set bad IO state %d.\n", p->state);
> > -        domain_crash(v->domain);
> > +        domain_crash(d);
> >          return 0;
> >      }
> >
> > +    v->arch.hvm_vcpu.ioreq_server = s;
> > +
> >      p->dir = proto_p->dir;
> >      p->data_is_ptr = proto_p->data_is_ptr;
> >      p->type = proto_p->type;
> > @@ -1401,14 +1566,14 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >      p->df = proto_p->df;
> >      p->data = proto_p->data;
> >
> > -    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> > +    prepare_wait_on_xen_event_channel(p->vp_eport);
> >
> >      /*
> >       * Following happens /after/ blocking and setting up ioreq contents.
> >       * prepare_wait_on_xen_event_channel() is an implicit barrier.
> >       */
> >      p->state = STATE_IOREQ_READY;
> > -    notify_via_xen_event_channel(v->domain, v->arch.hvm_vcpu.xen_port);
> > +    notify_via_xen_event_channel(d, p->vp_eport);
> >
> >      return 1;
> >  }
> > @@ -3995,21 +4160,6 @@ static int hvmop_flush_tlb_all(void)
> >      return 0;
> >  }
> >
> > -static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> > -                                     int *p_port)
> > -{
> > -    int old_port, new_port;
> > -
> > -    new_port = alloc_unbound_xen_event_channel(v, remote_domid, NULL);
> > -    if ( new_port < 0 )
> > -        return new_port;
> > -
> > -    /* xchg() ensures that only we call free_xen_event_channel(). */
> > -    old_port = xchg(p_port, new_port);
> > -    free_xen_event_channel(v, old_port);
> > -    return 0;
> > -}
> > -
> >  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >
> >  {
> > @@ -4022,7 +4172,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >      case HVMOP_get_param:
> >      {
> >          struct xen_hvm_param a;
> > -        struct hvm_ioreq_page *iorp;
> > +        struct hvm_ioreq_server *s;
> >          struct domain *d;
> >          struct vcpu *v;
> >
> > @@ -4048,6 +4198,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( rc )
> >              goto param_fail;
> >
> > +        s = d->arch.hvm_domain.ioreq_server;
> > +
> 
> This should be reduced in lexical scope, and I would have said that it
> can just be 'inlined' into each of the 4 uses later.
>

Again, it's done this way to make the patch sequencing work better.
 
> >          if ( op == HVMOP_set_param )
> >          {
> >              rc = 0;
> > @@ -4055,19 +4207,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >              switch ( a.index )
> >              {
> >              case HVM_PARAM_IOREQ_PFN:
> > -                iorp = &d->arch.hvm_domain.ioreq;
> > -                if ( (rc = hvm_set_ioreq_page(d, iorp, a.value)) != 0 )
> > -                    break;
> > -                spin_lock(&iorp->lock);
> > -                if ( iorp->va != NULL )
> > -                    /* Initialise evtchn port info if VCPUs already created. */
> > -                    for_each_vcpu ( d, v )
> > -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> > -                spin_unlock(&iorp->lock);
> > +                rc = hvm_set_ioreq_server_pfn(s, a.value);
> >                  break;
> >              case HVM_PARAM_BUFIOREQ_PFN:
> > -                iorp = &d->arch.hvm_domain.buf_ioreq;
> > -                rc = hvm_set_ioreq_page(d, iorp, a.value);
> > +                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> >                  break;
> >              case HVM_PARAM_CALLBACK_IRQ:
> >                  hvm_set_callback_via(d, a.value);
> > @@ -4122,31 +4265,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  if ( a.value == DOMID_SELF )
> >                      a.value = curr_d->domain_id;
> >
> > -                rc = 0;
> > -                domain_pause(d); /* safe to change per-vcpu xen_port */
> > -                if ( d->vcpu[0] )
> > -                    rc = hvm_replace_event_channel(d->vcpu[0], a.value,
> > -                             (int *)&d->vcpu[0]->domain->arch.hvm_domain.params
> > -                                     [HVM_PARAM_BUFIOREQ_EVTCHN]);
> > -                if ( rc )
> > -                {
> > -                    domain_unpause(d);
> > -                    break;
> > -                }
> > -                iorp = &d->arch.hvm_domain.ioreq;
> > -                for_each_vcpu ( d, v )
> > -                {
> > -                    rc = hvm_replace_event_channel(v, a.value,
> > -                                                   &v->arch.hvm_vcpu.xen_port);
> > -                    if ( rc )
> > -                        break;
> > -
> > -                    spin_lock(&iorp->lock);
> > -                    if ( iorp->va != NULL )
> > -                        get_ioreq(v)->vp_eport = v->arch.hvm_vcpu.xen_port;
> > -                    spin_unlock(&iorp->lock);
> > -                }
> > -                domain_unpause(d);
> > +                rc = hvm_set_ioreq_server_domid(s, a.value);
> >                  break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  /* Not reflexive, as we must domain_pause(). */
> > @@ -4241,6 +4360,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          {
> >              switch ( a.index )
> >              {
> > +            case HVM_PARAM_BUFIOREQ_EVTCHN:
> > +                a.value = s->buf_ioreq_evtchn;
> > +                break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> >                  break;
> > diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> > index b1e3187..4c039f8 100644
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -41,10 +41,17 @@ struct hvm_ioreq_page {
> >      void *va;
> >  };
> >
> > -struct hvm_domain {
> > +struct hvm_ioreq_server {
> > +    struct domain          *domain;
> > +    domid_t                domid;
> >      struct hvm_ioreq_page  ioreq;
> > +    int                    ioreq_evtchn[MAX_HVM_VCPUS];
> >      struct hvm_ioreq_page  buf_ioreq;
> > +    int                    buf_ioreq_evtchn;
> > +};
> >
> > +struct hvm_domain {
> > +    struct hvm_ioreq_server *ioreq_server;
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
> > index 122ab0d..4c9d7ee 100644
> > --- a/xen/include/asm-x86/hvm/vcpu.h
> > +++ b/xen/include/asm-x86/hvm/vcpu.h
> > @@ -138,7 +138,7 @@ struct hvm_vcpu {
> >      spinlock_t          tm_lock;
> >      struct list_head    tm_list;
> >
> > -    int                 xen_port;
> > +    struct hvm_ioreq_server *ioreq_server;
> >
> 
> Why do both hvm_vcpu and hvm_domain need ioreq_server pointers?  I can't
> spot anything which actually uses the vcpu one.
> 

Your first comment is about one of those uses!

To explain... The reference is copied into the vcpu struct when the ioreq is sent to the emulator and removed when the response comes back. Strictly speaking this is not necessary when dealing with a single emulator per domain, but once we move to a list of emulators, usually only one of them is in use by a particular vcpu at any time (the one exception being a broadcast ioreq, for mapcache invalidate). This is why we need to track ioreq servers per vcpu as well as per domain.

  Paul

> ~Andrew
> 
> >      bool_t              flag_dr_dirty;
> >      bool_t              debug_state_latch;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:21:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tRE-0006Ym-EH; Thu, 30 Jan 2014 15:21:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tRD-0006Yh-1T
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:21:55 +0000
Received: from [85.158.143.35:45859] by server-3.bemta-4.messagelabs.com id
	B8/52-11539-21E6AE25; Thu, 30 Jan 2014 15:21:54 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391095312!1978424!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5210 invoked from network); 30 Jan 2014 15:21:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:21:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98131530"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 15:21:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:21:51 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8tR8-0007Pi-QE;
	Thu, 30 Jan 2014 15:21:50 +0000
Message-ID: <52EA6E0E.8020200@citrix.com>
Date: Thu, 30 Jan 2014 15:21:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
 ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> This patch only creates the ioreq server when the legacy HVM parameters
> are touched by an emulator. It also lays some groundwork for supporting
> multiple IOREQ servers. For instance, it introduces ioreq server reference
> counting which is not strictly necessary at this stage but will become so
> when ioreq servers can be destroyed prior to the domain dying.
>
> There is a significant change in the layout of the special pages reserved
> in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards without
> moving pages such as the xenstore page when building a domain that can
> support more than one emulator.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  tools/libxc/xc_hvm_build_x86.c   |   41 ++--
>  xen/arch/x86/hvm/hvm.c           |  409 ++++++++++++++++++++++++++------------
>  xen/include/asm-x86/hvm/domain.h |    3 +-
>  3 files changed, 314 insertions(+), 139 deletions(-)
>
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..f24f2a1 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -41,13 +41,12 @@
>  #define SPECIALPAGE_PAGING   0
>  #define SPECIALPAGE_ACCESS   1
>  #define SPECIALPAGE_SHARING  2
> -#define SPECIALPAGE_BUFIOREQ 3
> -#define SPECIALPAGE_XENSTORE 4
> -#define SPECIALPAGE_IOREQ    5
> -#define SPECIALPAGE_IDENT_PT 6
> -#define SPECIALPAGE_CONSOLE  7
> -#define NR_SPECIAL_PAGES     8
> -#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
> +#define SPECIALPAGE_XENSTORE 3
> +#define SPECIALPAGE_IDENT_PT 4
> +#define SPECIALPAGE_CONSOLE  5
> +#define SPECIALPAGE_IOREQ    6
> +#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
> +#define special_pfn(x) (0xff000u - (x))
>  
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
> @@ -112,7 +111,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
>      /* Memory parameters. */
>      hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
>      hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> -    hvm_info->reserved_mem_pgstart = special_pfn(0);
> +    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
>  
>      /* Finish with the checksum. */
>      for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> @@ -463,6 +462,24 @@ static int setup_guest(xc_interface *xch,
>      munmap(hvm_info_page, PAGE_SIZE);
>  
>      /* Allocate and clear special pages. */
> +
> +     DPRINTF("%d SPECIAL PAGES:\n"
> +            "  PAGING:    %"PRI_xen_pfn"\n"
> +            "  ACCESS:    %"PRI_xen_pfn"\n"
> +            "  SHARING:   %"PRI_xen_pfn"\n"
> +            "  STORE:     %"PRI_xen_pfn"\n"
> +            "  IDENT_PT:  %"PRI_xen_pfn"\n"
> +            "  CONSOLE:   %"PRI_xen_pfn"\n"
> +            "  IOREQ:     %"PRI_xen_pfn"\n",
> +            NR_SPECIAL_PAGES,
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> +            (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
> +

I realise I am being quite picky here, but from the point of view of a
daemon trying to log to facilities like syslog, single multi-line
debugging messages are a pain.  Would it be possible to do this as 8
DPRINTF()s, one per line?
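
For illustration only, something shaped like the following -- DPRINTF here is a
hypothetical printf-style stand-in, not libxc's real macro, and the table-driven
form is just one way to get one complete record per call:

```c
#include <stdio.h>

/* Hypothetical stand-in for libxc's DPRINTF(); any printf-style sink
 * shows the shape. */
#define DPRINTF(fmt, ...) fprintf(stderr, fmt, __VA_ARGS__)

struct special_page { const char *name; unsigned long pfn; };

/* One DPRINTF() per line: a header record plus one record per page, so a
 * syslog-style sink never sees an embedded newline.  Returns the number
 * of records emitted (8 for the 7 pages above). */
static int log_special_pages(const struct special_page *pages, int nr)
{
    int i;

    DPRINTF("%d SPECIAL PAGES:\n", nr);
    for ( i = 0; i < nr; i++ )
        DPRINTF("  %-9s 0x%lx\n", pages[i].name, pages[i].pfn);

    return nr + 1;
}
```
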

>      for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
>      {
>          xen_pfn_t pfn = special_pfn(i);
> @@ -478,10 +495,6 @@ static int setup_guest(xc_interface *xch,
>  
>      xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
>                       special_pfn(SPECIALPAGE_XENSTORE));
> -    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> -                     special_pfn(SPECIALPAGE_BUFIOREQ));
> -    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> -                     special_pfn(SPECIALPAGE_IOREQ));
>      xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
>                       special_pfn(SPECIALPAGE_CONSOLE));
>      xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
> @@ -490,6 +503,10 @@ static int setup_guest(xc_interface *xch,
>                       special_pfn(SPECIALPAGE_ACCESS));
>      xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
>                       special_pfn(SPECIALPAGE_SHARING));
> +    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> +                     special_pfn(SPECIALPAGE_IOREQ));
> +    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> +                     special_pfn(SPECIALPAGE_IOREQ) - 1);
>  
>      /*
>       * Identity-map page table is required for running with CR0.PG=0 when
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index a0eaadb..d9874fb 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -352,24 +352,9 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, int id)
>      return &p->vcpu_ioreq[id];
>  }
>  
> -void hvm_do_resume(struct vcpu *v)
> +static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
>  {
> -    struct hvm_ioreq_server *s;
> -    ioreq_t *p;
> -
> -    check_wakeup_from_wait();
> -
> -    if ( is_hvm_vcpu(v) )
> -        pt_restore_timer(v);
> -
> -    s = v->arch.hvm_vcpu.ioreq_server;
> -    v->arch.hvm_vcpu.ioreq_server = NULL;
> -
> -    if ( !s )
> -        goto check_inject_trap;
> -
>      /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
> -    p = get_ioreq(s, v->vcpu_id);
>      while ( p->state != STATE_IOREQ_NONE )
>      {
>          switch ( p->state )
> @@ -385,12 +370,32 @@ void hvm_do_resume(struct vcpu *v)
>              break;
>          default:
>              gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
> -            domain_crash(v->domain);
> +            domain_crash(d);
>              return; /* bail */
>          }
>      }
> +}
> +
> +void hvm_do_resume(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +
> +    check_wakeup_from_wait();
> +
> +    if ( is_hvm_vcpu(v) )
> +        pt_restore_timer(v);
> +
> +    s = v->arch.hvm_vcpu.ioreq_server;
> +    v->arch.hvm_vcpu.ioreq_server = NULL;
> +
> +    if ( s )
> +    {
> +        ioreq_t *p = get_ioreq(s, v->vcpu_id);
> +
> +        hvm_wait_on_io(d, p);
> +    }
>  
> - check_inject_trap:
>      /* Inject pending hw/sw trap */
>      if ( v->arch.hvm_vcpu.inject_trap.vector != -1 ) 
>      {
> @@ -399,11 +404,13 @@ void hvm_do_resume(struct vcpu *v)
>      }
>  }
>  
> -static void hvm_init_ioreq_page(
> -    struct domain *d, struct hvm_ioreq_page *iorp)
> +static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, int buf)
>  {
> +    struct hvm_ioreq_page *iorp;
> +
> +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> +

Brackets are redundant.

>      spin_lock_init(&iorp->lock);
> -    domain_pause(d);
>  }
>  
>  void destroy_ring_for_helper(
> @@ -419,16 +426,13 @@ void destroy_ring_for_helper(
>      }
>  }
>  
> -static void hvm_destroy_ioreq_page(
> -    struct domain *d, struct hvm_ioreq_page *iorp)
> +static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, int buf)
>  {
> -    spin_lock(&iorp->lock);
> +    struct hvm_ioreq_page *iorp;
>  
> -    ASSERT(d->is_dying);
> +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
>  
>      destroy_ring_for_helper(&iorp->va, iorp->page);
> -
> -    spin_unlock(&iorp->lock);
>  }
>  
>  int prepare_ring_for_helper(
> @@ -476,8 +480,10 @@ int prepare_ring_for_helper(
>  }
>  
>  static int hvm_set_ioreq_page(
> -    struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
> +    struct hvm_ioreq_server *s, int buf, unsigned long gmfn)
>  {
> +    struct domain *d = s->domain;
> +    struct hvm_ioreq_page *iorp;
>      struct page_info *page;
>      void *va;
>      int rc;
> @@ -485,22 +491,17 @@ static int hvm_set_ioreq_page(
>      if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
>          return rc;
>  
> -    spin_lock(&iorp->lock);
> +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
>  
>      if ( (iorp->va != NULL) || d->is_dying )
>      {
> -        destroy_ring_for_helper(&iorp->va, iorp->page);
> -        spin_unlock(&iorp->lock);
> +        destroy_ring_for_helper(&va, page);
>          return -EINVAL;
>      }
>  
>      iorp->va = va;
>      iorp->page = page;
>  
> -    spin_unlock(&iorp->lock);
> -
> -    domain_unpause(d);
> -
>      return 0;
>  }
>  
> @@ -544,38 +545,6 @@ static int handle_pvh_io(
>      return X86EMUL_OKAY;
>  }
>  
> -static int hvm_init_ioreq_server(struct domain *d)
> -{
> -    struct hvm_ioreq_server *s;
> -    int i;
> -
> -    s = xzalloc(struct hvm_ioreq_server);
> -    if ( !s )
> -        return -ENOMEM;
> -
> -    s->domain = d;
> -
> -    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> -        s->ioreq_evtchn[i] = -1;
> -    s->buf_ioreq_evtchn = -1;
> -
> -    hvm_init_ioreq_page(d, &s->ioreq);
> -    hvm_init_ioreq_page(d, &s->buf_ioreq);
> -
> -    d->arch.hvm_domain.ioreq_server = s;
> -    return 0;
> -}
> -
> -static void hvm_deinit_ioreq_server(struct domain *d)
> -{
> -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> -
> -    hvm_destroy_ioreq_page(d, &s->ioreq);
> -    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> -
> -    xfree(s);
> -}
> -
>  static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
>  {
>      struct domain *d = s->domain;
> @@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
>      }
>  }
>  
> +static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> +{
> +    struct hvm_ioreq_server *s;
> +    int i;
> +    unsigned long pfn;
> +    struct vcpu *v;
> +    int rc;

i and rc can be declared together.

> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -EEXIST;
> +    if ( d->arch.hvm_domain.ioreq_server != NULL )
> +        goto fail_exist;
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +
> +    rc = -ENOMEM;
> +    s = xzalloc(struct hvm_ioreq_server);
> +    if ( !s )
> +        goto fail_alloc;
> +
> +    s->domain = d;
> +    s->domid = domid;
> +
> +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +        s->ioreq_evtchn[i] = -1;
> +    s->buf_ioreq_evtchn = -1;
> +
> +    /* Initialize shared pages */
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +
> +    hvm_init_ioreq_page(s, 0);
> +    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
> +        goto fail_set_ioreq;
> +
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +
> +    hvm_init_ioreq_page(s, 1);
> +    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> +        goto fail_set_buf_ioreq;
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> +            goto fail_add_vcpu;
> +    }
> +
> +    d->arch.hvm_domain.ioreq_server = s;
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail_add_vcpu:
> +    for_each_vcpu ( d, v )
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    hvm_destroy_ioreq_page(s, 1);
> +fail_set_buf_ioreq:
> +    hvm_destroy_ioreq_page(s, 0);
> +fail_set_ioreq:
> +    xfree(s);
> +fail_alloc:
> +fail_exist:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    return rc;
> +}
> +
> +static void hvm_destroy_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct vcpu *v;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
> +        goto done;
> +
> +    domain_pause(d);
> +
> +    d->arch.hvm_domain.ioreq_server = NULL;
> +
> +    for_each_vcpu ( d, v )
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +
> +    hvm_destroy_ioreq_page(s, 1);
> +    hvm_destroy_ioreq_page(s, 0);
> +
> +    xfree(s);
> +
> +    domain_unpause(d);
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
> +
> +static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    *port = s->buf_ioreq_evtchn;
> +    rc = 0;
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    if ( buf )
> +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +    else
> +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];

This can be reduced and use "params[buf ? HVM_PARAM_BUFIOREQ_PFN :
HVM_PARAM_IOREQ_PFN]", although that is perhaps not as clear.
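
i.e. a sketch along these lines (the parameter numbering and array size here
are illustrative, not taken from params.h):

```c
/* Toy model of the suggestion: select the params[] slot with a ternary. */
enum { HVM_PARAM_IOREQ_PFN = 5, HVM_PARAM_BUFIOREQ_PFN = 6, HVM_NR_PARAMS = 8 };

static unsigned long get_ioreq_gmfn(const unsigned long *params, int buf)
{
    /* One expression instead of an if/else over two assignments. */
    return params[buf ? HVM_PARAM_BUFIOREQ_PFN : HVM_PARAM_IOREQ_PFN];
}
```
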

> +
> +    rc = 0;
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
>  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>                                       int *p_port)
>  {
> @@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>      return 0;
>  }
>  
> -static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> +static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
>  {
> -    struct domain *d = s->domain;
> +    struct hvm_ioreq_server *s;
>      struct vcpu *v;
>      int rc = 0;
>  
>      domain_pause(d);
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    rc = 0;
> +    if ( s->domid == domid )
> +        goto done;
>  
>      if ( d->vcpu[0] )
>      {
> @@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
>  
>  done:
>      domain_unpause(d);
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);

Mismatched order of pause/unpause and lock/unlock pairs.  The unlock
should ideally be before the unpause.
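
To spell out the nesting being asked for -- stub primitives below track depth
so the pairing can be checked; they are not Xen's APIs, just a sketch of the
exit-path shape:

```c
#include <assert.h>

/* Depth counters for the two paired operations. */
static int pause_depth, lock_depth;

static void domain_pause_stub(void)   { pause_depth++; }
static void spin_lock_stub(void)      { assert(pause_depth > 0); lock_depth++; }
static void spin_unlock_stub(void)    { lock_depth--; }
static void domain_unpause_stub(void) { assert(lock_depth == 0); pause_depth--; }

/* Exit path with properly nested pairs:
 * pause -> lock -> ... -> unlock -> unpause. */
static int exit_path_nested(void)
{
    domain_pause_stub();
    spin_lock_stub();
    /* ... body ... */
    spin_unlock_stub();      /* release the inner lock first ...     */
    domain_unpause_stub();   /* ... then undo the outer pause        */

    return pause_depth == 0 && lock_depth == 0;
}
```
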

>  
>      return rc;
>  }
>  
> -static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> -{
> -    struct domain *d = s->domain;
> -    int rc;
> -
> -    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> -    if ( rc < 0 )
> -        return rc;
> -
> -    hvm_update_ioreq_server_evtchn(s);
> -
> -    return 0;
> -}
> -
> -static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> -{
> -    struct domain *d = s->domain;
> -
> -    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> -}
> -
>  int hvm_domain_initialise(struct domain *d)
>  {
>      int rc;
> @@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
>  
>      }
>  
> +    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>  
> @@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> -    rc = hvm_init_ioreq_server(d);
> -    if ( rc != 0 )
> -        goto fail2;
> -
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail3;
> +        goto fail2;
>  
>      return 0;
>  
> - fail3:
> -    hvm_deinit_ioreq_server(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_deinit_ioreq_server(d);
> +    hvm_destroy_ioreq_server(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>      s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    rc = hvm_ioreq_server_add_vcpu(s, v);
> -    if ( rc < 0 )
> -        goto fail6;
> +    if ( s )
> +    {
> +        rc = hvm_ioreq_server_add_vcpu(s, v);
> +        if ( rc < 0 )
> +            goto fail6;
> +    }
>  
>      if ( v->vcpu_id == 0 )
>      {
> @@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +    struct hvm_ioreq_server *s;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +    s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    hvm_ioreq_server_remove_vcpu(s, v);
> +    if ( s )
> +        hvm_ioreq_server_remove_vcpu(s, v);
>  
>      nestedhvm_vcpu_destroy(v);
>  
> @@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>      s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
>      if ( !s )
>          return 0;
>  
> @@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
>      return 1;
>  }
>  
> -bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> +static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
> +                                            struct vcpu *v,
> +                                            ioreq_t *proto_p)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> -    ioreq_t *p;
> -
> -    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> -        return 0; /* implicitly bins the i/o operation */
> -
> -    s = d->arch.hvm_domain.ioreq_server;
> -    if ( !s )
> -        return 0;
> -
> -    p = get_ioreq(s, v->vcpu_id);
> +    ioreq_t *p = get_ioreq(s, v->vcpu_id);
>  
>      if ( unlikely(p->state != STATE_IOREQ_NONE) )
>      {
> @@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>      return 1;
>  }
>  
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
> +{
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +
> +    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> +
> +    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> +        return 0;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +    s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);

What is the purpose of taking the server lock just to read the
ioreq_server pointer?
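
Put differently: copying the pointer under the lock does nothing to keep the
object alive once the lock is dropped.  What would help is taking a reference
while the lock is held, along the lines of the sketch below (hypothetical
types, not the real hvm_ioreq_server; the commit message's refcounting
groundwork presumably heads this way):

```c
/* Minimal refcounted-get sketch. */
struct server { int refcnt; };

/* Caller must hold the lock protecting *slot. */
static struct server *get_server_ref(struct server *const *slot)
{
    struct server *s = *slot;

    if ( s )
        s->refcnt++;      /* pin the object before the lock is released */

    return s;
}

static int put_server_ref(struct server *s)
{
    return --s->refcnt;   /* 0 => last reference gone */
}
```
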

> +
> +    if ( !s )
> +        return 0;
> +
> +    return hvm_send_assist_req_to_server(s, v, p);
> +}
> +
>  void hvm_hlt(unsigned long rflags)
>  {
>      struct vcpu *curr = current;
> @@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case HVMOP_get_param:
>      {
>          struct xen_hvm_param a;
> -        struct hvm_ioreq_server *s;
>          struct domain *d;
>          struct vcpu *v;
>  
> @@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( rc )
>              goto param_fail;
>  
> -        s = d->arch.hvm_domain.ioreq_server;
> -
>          if ( op == HVMOP_set_param )
>          {
>              rc = 0;
>  
>              switch ( a.index )
>              {
> -            case HVM_PARAM_IOREQ_PFN:
> -                rc = hvm_set_ioreq_server_pfn(s, a.value);
> -                break;
> -            case HVM_PARAM_BUFIOREQ_PFN: 
> -                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> -                break;
>              case HVM_PARAM_CALLBACK_IRQ:
>                  hvm_set_callback_via(d, a.value);
>                  hvm_latch_shinfo_size(d);
> @@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = hvm_set_ioreq_server_domid(s, a.value);
> +                rc = hvm_create_ioreq_server(d, a.value);
> +                if ( rc == -EEXIST )
> +                    rc = hvm_set_ioreq_server_domid(d, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          {
>              switch ( a.index )
>              {
> +            case HVM_PARAM_IOREQ_PFN:
> +            case HVM_PARAM_BUFIOREQ_PFN:
>              case HVM_PARAM_BUFIOREQ_EVTCHN:
> -                a.value = s->buf_ioreq_evtchn;
> +                /* May need to create server */
> +                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> +                if ( rc != 0 && rc != -EEXIST )
> +                    goto param_fail;
> +
> +                switch ( a.index )
> +                {
> +                case HVM_PARAM_IOREQ_PFN: {
> +                    xen_pfn_t pfn;
> +
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = pfn;
> +                    break;
> +                }
> +                case HVM_PARAM_BUFIOREQ_PFN: {
> +                    xen_pfn_t pfn;
> +
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = pfn;
> +                    break;
> +                }
> +                case HVM_PARAM_BUFIOREQ_EVTCHN: {
> +                    evtchn_port_t port;
> +
> +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = port;
> +                    break;
> +                }
> +                default:
> +                    BUG();
> +                }
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 4c039f8..e750ef0 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -52,6 +52,8 @@ struct hvm_ioreq_server {
>  
>  struct hvm_domain {
>      struct hvm_ioreq_server *ioreq_server;
> +    spinlock_t              ioreq_server_lock;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> @@ -106,4 +108,3 @@ struct hvm_domain {
>  #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
>  
>  #endif /* __ASM_X86_HVM_DOMAIN_H__ */
> -

Spurious whitespace change

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:21:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tRE-0006Ym-EH; Thu, 30 Jan 2014 15:21:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tRD-0006Yh-1T
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:21:55 +0000
Received: from [85.158.143.35:45859] by server-3.bemta-4.messagelabs.com id
	B8/52-11539-21E6AE25; Thu, 30 Jan 2014 15:21:54 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391095312!1978424!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5210 invoked from network); 30 Jan 2014 15:21:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:21:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98131530"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 15:21:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:21:51 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8tR8-0007Pi-QE;
	Thu, 30 Jan 2014 15:21:50 +0000
Message-ID: <52EA6E0E.8020200@citrix.com>
Date: Thu, 30 Jan 2014 15:21:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
 ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -
> -    hvm_destroy_ioreq_page(d, &s->ioreq);
> -    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> -
> -    xfree(s);
> -}
> -
>  static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
>  {
>      struct domain *d = s->domain;
> @@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
>      }
>  }
>  
> +static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> +{
> +    struct hvm_ioreq_server *s;
> +    int i;
> +    unsigned long pfn;
> +    struct vcpu *v;
> +    int rc;

i and rc can be declared together.

> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -EEXIST;
> +    if ( d->arch.hvm_domain.ioreq_server != NULL )
> +        goto fail_exist;
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +
> +    rc = -ENOMEM;
> +    s = xzalloc(struct hvm_ioreq_server);
> +    if ( !s )
> +        goto fail_alloc;
> +
> +    s->domain = d;
> +    s->domid = domid;
> +
> +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +        s->ioreq_evtchn[i] = -1;
> +    s->buf_ioreq_evtchn = -1;
> +
> +    /* Initialize shared pages */
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +
> +    hvm_init_ioreq_page(s, 0);
> +    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
> +        goto fail_set_ioreq;
> +
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +
> +    hvm_init_ioreq_page(s, 1);
> +    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> +        goto fail_set_buf_ioreq;
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> +            goto fail_add_vcpu;
> +    }
> +
> +    d->arch.hvm_domain.ioreq_server = s;
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail_add_vcpu:
> +    for_each_vcpu ( d, v )
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    hvm_destroy_ioreq_page(s, 1);
> +fail_set_buf_ioreq:
> +    hvm_destroy_ioreq_page(s, 0);
> +fail_set_ioreq:
> +    xfree(s);
> +fail_alloc:
> +fail_exist:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    return rc;
> +}
> +
> +static void hvm_destroy_ioreq_server(struct domain *d)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct vcpu *v;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +    if ( !s )
> +        goto done;
> +
> +    domain_pause(d);
> +
> +    d->arch.hvm_domain.ioreq_server = NULL;
> +
> +    for_each_vcpu ( d, v )
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +
> +    hvm_destroy_ioreq_page(s, 1);
> +    hvm_destroy_ioreq_page(s, 0);
> +
> +    xfree(s);
> +
> +    domain_unpause(d);
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
> +
> +static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    *port = s->buf_ioreq_evtchn;
> +    rc = 0;
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    if ( buf )
> +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +    else
> +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];

This can be reduced to "params[buf ? HVM_PARAM_BUFIOREQ_PFN :
HVM_PARAM_IOREQ_PFN]", although that is perhaps not as clear.

> +
> +    rc = 0;
> +
> +done:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
>  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>                                       int *p_port)
>  {
> @@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>      return 0;
>  }
>  
> -static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> +static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
>  {
> -    struct domain *d = s->domain;
> +    struct hvm_ioreq_server *s;
>      struct vcpu *v;
>      int rc = 0;
>  
>      domain_pause(d);
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    s = d->arch.hvm_domain.ioreq_server;
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto done;
> +
> +    rc = 0;
> +    if ( s->domid == domid )
> +        goto done;
>  
>      if ( d->vcpu[0] )
>      {
> @@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
>  
>  done:
>      domain_unpause(d);
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);

Mismatched order of pause/unpause and lock/unlock pairs.  The unlock
should ideally be before the unpause.

>  
>      return rc;
>  }
>  
> -static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> -{
> -    struct domain *d = s->domain;
> -    int rc;
> -
> -    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> -    if ( rc < 0 )
> -        return rc;
> -
> -    hvm_update_ioreq_server_evtchn(s);
> -
> -    return 0;
> -}
> -
> -static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> -{
> -    struct domain *d = s->domain;
> -
> -    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> -}
> -
>  int hvm_domain_initialise(struct domain *d)
>  {
>      int rc;
> @@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
>  
>      }
>  
> +    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>  
> @@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> -    rc = hvm_init_ioreq_server(d);
> -    if ( rc != 0 )
> -        goto fail2;
> -
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail3;
> +        goto fail2;
>  
>      return 0;
>  
> - fail3:
> -    hvm_deinit_ioreq_server(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_deinit_ioreq_server(d);
> +    hvm_destroy_ioreq_server(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>      s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    rc = hvm_ioreq_server_add_vcpu(s, v);
> -    if ( rc < 0 )
> -        goto fail6;
> +    if ( s )
> +    {
> +        rc = hvm_ioreq_server_add_vcpu(s, v);
> +        if ( rc < 0 )
> +            goto fail6;
> +    }
>  
>      if ( v->vcpu_id == 0 )
>      {
> @@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> +    struct hvm_ioreq_server *s;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +    s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    hvm_ioreq_server_remove_vcpu(s, v);
> +    if ( s )
> +        hvm_ioreq_server_remove_vcpu(s, v);
>  
>      nestedhvm_vcpu_destroy(v);
>  
> @@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>      s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
>      if ( !s )
>          return 0;
>  
> @@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
>      return 1;
>  }
>  
> -bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> +static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
> +                                            struct vcpu *v,
> +                                            ioreq_t *proto_p)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> -    ioreq_t *p;
> -
> -    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> -        return 0; /* implicitly bins the i/o operation */
> -
> -    s = d->arch.hvm_domain.ioreq_server;
> -    if ( !s )
> -        return 0;
> -
> -    p = get_ioreq(s, v->vcpu_id);
> +    ioreq_t *p = get_ioreq(s, v->vcpu_id);
>  
>      if ( unlikely(p->state != STATE_IOREQ_NONE) )
>      {
> @@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
>      return 1;
>  }
>  
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
> +{
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +
> +    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> +
> +    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> +        return 0;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +    s = d->arch.hvm_domain.ioreq_server;
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);

What is the purpose of taking the server lock just to read the
ioreq_server pointer?

> +
> +    if ( !s )
> +        return 0;
> +
> +    return hvm_send_assist_req_to_server(s, v, p);
> +}
> +
>  void hvm_hlt(unsigned long rflags)
>  {
>      struct vcpu *curr = current;
> @@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>      case HVMOP_get_param:
>      {
>          struct xen_hvm_param a;
> -        struct hvm_ioreq_server *s;
>          struct domain *d;
>          struct vcpu *v;
>  
> @@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          if ( rc )
>              goto param_fail;
>  
> -        s = d->arch.hvm_domain.ioreq_server;
> -
>          if ( op == HVMOP_set_param )
>          {
>              rc = 0;
>  
>              switch ( a.index )
>              {
> -            case HVM_PARAM_IOREQ_PFN:
> -                rc = hvm_set_ioreq_server_pfn(s, a.value);
> -                break;
> -            case HVM_PARAM_BUFIOREQ_PFN: 
> -                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> -                break;
>              case HVM_PARAM_CALLBACK_IRQ:
>                  hvm_set_callback_via(d, a.value);
>                  hvm_latch_shinfo_size(d);
> @@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = hvm_set_ioreq_server_domid(s, a.value);
> +                rc = hvm_create_ioreq_server(d, a.value);
> +                if ( rc == -EEXIST )
> +                    rc = hvm_set_ioreq_server_domid(d, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          {
>              switch ( a.index )
>              {
> +            case HVM_PARAM_IOREQ_PFN:
> +            case HVM_PARAM_BUFIOREQ_PFN:
>              case HVM_PARAM_BUFIOREQ_EVTCHN:
> -                a.value = s->buf_ioreq_evtchn;
> +                /* May need to create server */
> +                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> +                if ( rc != 0 && rc != -EEXIST )
> +                    goto param_fail;
> +
> +                switch ( a.index )
> +                {
> +                case HVM_PARAM_IOREQ_PFN: {
> +                    xen_pfn_t pfn;
> +
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = pfn;
> +                    break;
> +                }
> +                case HVM_PARAM_BUFIOREQ_PFN: {
> +                    xen_pfn_t pfn;
> +
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = pfn;
> +                    break;
> +                }
> +                case HVM_PARAM_BUFIOREQ_EVTCHN: {
> +                    evtchn_port_t port;
> +
> +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> +                        goto param_fail;
> +
> +                    a.value = port;
> +                    break;
> +                }
> +                default:
> +                    BUG();
> +                }
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 4c039f8..e750ef0 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -52,6 +52,8 @@ struct hvm_ioreq_server {
>  
>  struct hvm_domain {
>      struct hvm_ioreq_server *ioreq_server;
> +    spinlock_t              ioreq_server_lock;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> @@ -106,4 +108,3 @@ struct hvm_domain {
>  #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
>  
>  #endif /* __ASM_X86_HVM_DOMAIN_H__ */
> -

Spurious whitespace change

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:26:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tVA-0006hy-4U; Thu, 30 Jan 2014 15:26:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8tV9-0006hs-4b
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:25:59 +0000
Received: from [85.158.137.68:21776] by server-3.bemta-3.messagelabs.com id
	76/C1-14520-60F6AE25; Thu, 30 Jan 2014 15:25:58 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391095554!12360990!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31288 invoked from network); 30 Jan 2014 15:25:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:25:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96165581"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:25:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:25:53 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8tV3-0007TR-7d; Thu, 30 Jan 2014 15:25:53 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8tV3-0001iE-0o; Thu, 30 Jan 2014
	15:25:53 +0000
Date: Thu, 30 Jan 2014 15:25:52 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140130152538.GA6429@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH V2] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The menuentry blocks in grub2/grub.cfg use the linux16 and initrd16
commands instead of linux and initrd. Because of this, a RHEL 7 (beta)
guest failed to boot after installation.

In addition, menuentry now takes options as well
(--class red, --class gnu, etc.).
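
These options are why the title regex must match non-greedily: with the
greedy pattern, the capture runs past the title to the last quote on the
line. A standalone sketch of the difference, using an abbreviated sample
menuentry line (the quoted id string is a made-up example):

```python
import re

# A RHEL 7 style menuentry line: a quoted title, then --class options,
# then a second quoted id string before the opening brace.
line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-advanced-d23b8b49' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy (.*) runs to the LAST quote on the line, so the options and
# the id string are swallowed into the captured "title".
print(greedy.group(1))

# Non-greedy (.*?) stops at the first closing quote, capturing only
# the actual title.
print(lazy.group(1))
```

With the old greedy pattern the captured title includes the `--class`
options and the id string; the non-greedy pattern recovers just
`Red Hat Enterprise Linux`.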

V2: Added RHEL 7 grub.cfg in pygrub/examples

Kindly consider this patch for xen-4.4-RC3 as this breaks
RHEL 7 guest boot.

Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
---
 tools/pygrub/examples/rhel-7.grub2 |  118 ++++++++++++++++++++++++++++++++++++
 tools/pygrub/src/GrubConf.py       |    4 +-
 2 files changed, 121 insertions(+), 1 deletion(-)
 create mode 100644 tools/pygrub/examples/rhel-7.grub2

diff --git a/tools/pygrub/examples/rhel-7.grub2 b/tools/pygrub/examples/rhel-7.grub2
new file mode 100644
index 0000000..88f0f99
--- /dev/null
+++ b/tools/pygrub/examples/rhel-7.grub2
@@ -0,0 +1,118 @@
+#
+# DO NOT EDIT THIS FILE
+#
+# It is automatically generated by grub2-mkconfig using templates
+# from /etc/grub.d and settings from /etc/default/grub
+#
+
+### BEGIN /etc/grub.d/00_header ###
+set pager=1
+
+if [ -s $prefix/grubenv ]; then
+  load_env
+fi
+if [ "${next_entry}" ] ; then
+   set default="${next_entry}"
+   set next_entry=
+   save_env next_entry
+   set boot_once=true
+else
+   set default="${saved_entry}"
+fi
+
+if [ x"${feature_menuentry_id}" = xy ]; then
+  menuentry_id_option="--id"
+else
+  menuentry_id_option=""
+fi
+
+export menuentry_id_option
+
+if [ "${prev_saved_entry}" ]; then
+  set saved_entry="${prev_saved_entry}"
+  save_env saved_entry
+  set prev_saved_entry=
+  save_env prev_saved_entry
+  set boot_once=true
+fi
+
+function savedefault {
+  if [ -z "${boot_once}" ]; then
+    saved_entry="${chosen}"
+    save_env saved_entry
+  fi
+}
+
+function load_video {
+  if [ x$feature_all_video_module = xy ]; then
+    insmod all_video
+  else
+    insmod efi_gop
+    insmod efi_uga
+    insmod ieee1275_fb
+    insmod vbe
+    insmod vga
+    insmod video_bochs
+    insmod video_cirrus
+  fi
+}
+
+terminal_output console
+set timeout=5
+### END /etc/grub.d/00_header ###
+
+### BEGIN /etc/grub.d/10_linux ###
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	set gfxpayload=keep
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
+	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
+}
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
+	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
+}
+
+### END /etc/grub.d/10_linux ###
+
+### BEGIN /etc/grub.d/20_linux_xen ###
+### END /etc/grub.d/20_linux_xen ###
+
+### BEGIN /etc/grub.d/20_ppc_terminfo ###
+### END /etc/grub.d/20_ppc_terminfo ###
+
+### BEGIN /etc/grub.d/30_os-prober ###
+### END /etc/grub.d/30_os-prober ###
+
+### BEGIN /etc/grub.d/40_custom ###
+# This file provides an easy way to add custom menu entries.  Simply type the
+# menu entries you want to add after this comment.  Be careful not to change
+# the 'exec tail' line above.
+### END /etc/grub.d/40_custom ###
+
+### BEGIN /etc/grub.d/41_custom ###
+if [ -f  ${config_directory}/custom.cfg ]; then
+  source ${config_directory}/custom.cfg
+elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
+  source $prefix/custom.cfg;
+fi
+### END /etc/grub.d/41_custom ###
diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
index cb853c9..974cded 100644
--- a/tools/pygrub/src/GrubConf.py
+++ b/tools/pygrub/src/GrubConf.py
@@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
                 
     commands = {'set:root': 'root',
                 'linux': 'kernel',
+                'linux16': 'kernel',
                 'initrd': 'initrd',
+                'initrd16': 'initrd',
                 'echo': None,
                 'insmod': None,
                 'search': None}
@@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
                 continue
 
             # new image
-            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
+            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
             if title_match:
                 if img is not None:
                     raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
-- 
1.7.10.4



-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:31:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tZt-00075V-2E; Thu, 30 Jan 2014 15:30:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tZr-00075Q-1C
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:30:51 +0000
Received: from [85.158.143.35:7607] by server-1.bemta-4.messagelabs.com id
	5C/67-31661-A207AE25; Thu, 30 Jan 2014 15:30:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391095848!1978626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16673 invoked from network); 30 Jan 2014 15:30:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:30:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98135271"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 15:30:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:30:47 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8tZn-0007Xv-5g;
	Thu, 30 Jan 2014 15:30:47 +0000
Message-ID: <52EA7027.9090208@citrix.com>
Date: Thu, 30 Jan 2014 15:30:47 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Joby Poriyath <joby.poriyath@citrix.com>
References: <20140130152538.GA6429@citrix.com>
In-Reply-To: <20140130152538.GA6429@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 15:25, Joby Poriyath wrote:
> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
> boot after the installation.
>
> In addition to this, menuentry has some options as well
> (--class red, --class gnu, etc).

This is not the reason for the change to the regex.  The problem with
the regex is that RHEL7 menu entries have two different single-quote
delimited strings on the same line, and the greedy grouping captures
both strings, along with the options in between.
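To illustrate the difference, here is a standalone sketch (not part of
the patch) run against a line of the same shape as the RHEL 7
menuentry, with the content shortened:

```python
import re

# Two single-quoted strings on one line, with --class options between
# them, mimicking the RHEL 7 menuentry shape.
line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-advanced-d23b8b49' {")

# Old, greedy pattern: (.*) runs to the LAST quote on the line, so the
# title group swallows the options and the second quoted string too.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)

# Patched, non-greedy pattern: (.*?) stops at the FIRST closing quote,
# capturing only the title.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

print(greedy.group(1))  # spills past the title into the options
print(lazy.group(1))    # the title only: Red Hat Enterprise Linux
```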

>
> V2: Added RHEL 7 grub.cfg in pygrub/examples

This line would traditionally go down ...

>
> Kindly consider this patch for xen-4.4-RC3 as this breaks
> RHEL 7 guest boot.

RC3 is pending an OSS test success, so there is no chance for this patch
to get in.

I presume you mean that it should be considered for Xen-4.4?

>
> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> ---

... here, so it is omitted from the final commit message when the patch
is accepted.
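
For illustration, a hypothetical layout with the version note placed
below the '---' separator, where git am discards it:

```text
Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
---
V2: Added RHEL 7 grub.cfg in pygrub/examples

 tools/pygrub/examples/rhel-7.grub2 |  118 +++++++++++++++++++++++
 tools/pygrub/src/GrubConf.py       |    4 +-
```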


Having said all of that, the actual content:

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>  tools/pygrub/examples/rhel-7.grub2 |  118 ++++++++++++++++++++++++++++++++++++
>  tools/pygrub/src/GrubConf.py       |    4 +-
>  2 files changed, 121 insertions(+), 1 deletion(-)
>  create mode 100644 tools/pygrub/examples/rhel-7.grub2
>
> diff --git a/tools/pygrub/examples/rhel-7.grub2 b/tools/pygrub/examples/rhel-7.grub2
> new file mode 100644
> index 0000000..88f0f99
> --- /dev/null
> +++ b/tools/pygrub/examples/rhel-7.grub2
> @@ -0,0 +1,118 @@
> +#
> +# DO NOT EDIT THIS FILE
> +#
> +# It is automatically generated by grub2-mkconfig using templates
> +# from /etc/grub.d and settings from /etc/default/grub
> +#
> +
> +### BEGIN /etc/grub.d/00_header ###
> +set pager=1
> +
> +if [ -s $prefix/grubenv ]; then
> +  load_env
> +fi
> +if [ "${next_entry}" ] ; then
> +   set default="${next_entry}"
> +   set next_entry=
> +   save_env next_entry
> +   set boot_once=true
> +else
> +   set default="${saved_entry}"
> +fi
> +
> +if [ x"${feature_menuentry_id}" = xy ]; then
> +  menuentry_id_option="--id"
> +else
> +  menuentry_id_option=""
> +fi
> +
> +export menuentry_id_option
> +
> +if [ "${prev_saved_entry}" ]; then
> +  set saved_entry="${prev_saved_entry}"
> +  save_env saved_entry
> +  set prev_saved_entry=
> +  save_env prev_saved_entry
> +  set boot_once=true
> +fi
> +
> +function savedefault {
> +  if [ -z "${boot_once}" ]; then
> +    saved_entry="${chosen}"
> +    save_env saved_entry
> +  fi
> +}
> +
> +function load_video {
> +  if [ x$feature_all_video_module = xy ]; then
> +    insmod all_video
> +  else
> +    insmod efi_gop
> +    insmod efi_uga
> +    insmod ieee1275_fb
> +    insmod vbe
> +    insmod vga
> +    insmod video_bochs
> +    insmod video_cirrus
> +  fi
> +}
> +
> +terminal_output console
> +set timeout=5
> +### END /etc/grub.d/00_header ###
> +
> +### BEGIN /etc/grub.d/10_linux ###
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	set gfxpayload=keep
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
> +	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
> +}
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
> +	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
> +}
> +
> +### END /etc/grub.d/10_linux ###
> +
> +### BEGIN /etc/grub.d/20_linux_xen ###
> +### END /etc/grub.d/20_linux_xen ###
> +
> +### BEGIN /etc/grub.d/20_ppc_terminfo ###
> +### END /etc/grub.d/20_ppc_terminfo ###
> +
> +### BEGIN /etc/grub.d/30_os-prober ###
> +### END /etc/grub.d/30_os-prober ###
> +
> +### BEGIN /etc/grub.d/40_custom ###
> +# This file provides an easy way to add custom menu entries.  Simply type the
> +# menu entries you want to add after this comment.  Be careful not to change
> +# the 'exec tail' line above.
> +### END /etc/grub.d/40_custom ###
> +
> +### BEGIN /etc/grub.d/41_custom ###
> +if [ -f  ${config_directory}/custom.cfg ]; then
> +  source ${config_directory}/custom.cfg
> +elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
> +  source $prefix/custom.cfg;
> +fi
> +### END /etc/grub.d/41_custom ###
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>                  
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>  
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>              if title_match:
>                  if img is not None:
>                      raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:32:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tbB-00079g-HT; Thu, 30 Jan 2014 15:32:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8tbA-00079b-Gm
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:32:13 +0000
Received: from [85.158.137.68:31815] by server-9.bemta-3.messagelabs.com id
	D6/97-10184-B707AE25; Thu, 30 Jan 2014 15:32:11 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391095928!12389897!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14782 invoked from network); 30 Jan 2014 15:32:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:32:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96168369"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:32:07 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 10:32:06 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 16:32:05 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation
	of ioreq server
Thread-Index: AQHPHcZanTzHwQ/6o0WY9Jgbf8Cx0JqdUa4AgAASntA=
Date: Thu, 30 Jan 2014 15:32:05 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD021796E@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
	<52EA6E0E.8020200@citrix.com>
In-Reply-To: <52EA6E0E.8020200@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
 ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 15:22
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation
> of ioreq server
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > This patch only creates the ioreq server when the legacy HVM parameters
> > are touched by an emulator. It also lays some groundwork for supporting
> > multiple IOREQ servers. For instance, it introduces ioreq server reference
> > counting which is not strictly necessary at this stage but will become so
> > when ioreq servers can be destroyed prior to the domain dying.
> >
> > There is a significant change in the layout of the special pages reserved
> > in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards
> without
> > moving pages such as the xenstore page when building a domain that can
> > support more than one emulator.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  tools/libxc/xc_hvm_build_x86.c   |   41 ++--
> >  xen/arch/x86/hvm/hvm.c           |  409 ++++++++++++++++++++++++++----
> --------
> >  xen/include/asm-x86/hvm/domain.h |    3 +-
> >  3 files changed, 314 insertions(+), 139 deletions(-)
> >
> > diff --git a/tools/libxc/xc_hvm_build_x86.c
> b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..f24f2a1 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -41,13 +41,12 @@
> >  #define SPECIALPAGE_PAGING   0
> >  #define SPECIALPAGE_ACCESS   1
> >  #define SPECIALPAGE_SHARING  2
> > -#define SPECIALPAGE_BUFIOREQ 3
> > -#define SPECIALPAGE_XENSTORE 4
> > -#define SPECIALPAGE_IOREQ    5
> > -#define SPECIALPAGE_IDENT_PT 6
> > -#define SPECIALPAGE_CONSOLE  7
> > -#define NR_SPECIAL_PAGES     8
> > -#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
> > +#define SPECIALPAGE_XENSTORE 3
> > +#define SPECIALPAGE_IDENT_PT 4
> > +#define SPECIALPAGE_CONSOLE  5
> > +#define SPECIALPAGE_IOREQ    6
> > +#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server
> needs 2 pages */
> > +#define special_pfn(x) (0xff000u - (x))
> >
> >  static int modules_init(struct xc_hvm_build_args *args,
> >                          uint64_t vend, struct elf_binary *elf,
> > @@ -112,7 +111,7 @@ static void build_hvm_info(void *hvm_info_page,
> uint64_t mem_size,
> >      /* Memory parameters. */
> >      hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
> >      hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> > -    hvm_info->reserved_mem_pgstart = special_pfn(0);
> > +    hvm_info->reserved_mem_pgstart = special_pfn(0) -
> NR_SPECIAL_PAGES;
> >
> >      /* Finish with the checksum. */
> >      for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> > @@ -463,6 +462,24 @@ static int setup_guest(xc_interface *xch,
> >      munmap(hvm_info_page, PAGE_SIZE);
> >
> >      /* Allocate and clear special pages. */
> > +
> > +     DPRINTF("%d SPECIAL PAGES:\n"
> > +            "  PAGING:    %"PRI_xen_pfn"\n"
> > +            "  ACCESS:    %"PRI_xen_pfn"\n"
> > +            "  SHARING:   %"PRI_xen_pfn"\n"
> > +            "  STORE:     %"PRI_xen_pfn"\n"
> > +            "  IDENT_PT:  %"PRI_xen_pfn"\n"
> > +            "  CONSOLE:   %"PRI_xen_pfn"\n"
> > +            "  IOREQ:     %"PRI_xen_pfn"\n",
> > +            NR_SPECIAL_PAGES,
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
> > +
> 
> I realise I am being quite picky here, but from a daemon point of view
> trying to log to facilities like syslog, single multi-line debugging
> messages are a pain.  Would it be possible to do this as 8 DPRINTF()s?
> 

Yes, of course.

> >      for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> >      {
> >          xen_pfn_t pfn = special_pfn(i);
> > @@ -478,10 +495,6 @@ static int setup_guest(xc_interface *xch,
> >
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
> >                       special_pfn(SPECIALPAGE_XENSTORE));
> > -    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> > -                     special_pfn(SPECIALPAGE_BUFIOREQ));
> > -    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> > -                     special_pfn(SPECIALPAGE_IOREQ));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
> >                       special_pfn(SPECIALPAGE_CONSOLE));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
> > @@ -490,6 +503,10 @@ static int setup_guest(xc_interface *xch,
> >                       special_pfn(SPECIALPAGE_ACCESS));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
> >                       special_pfn(SPECIALPAGE_SHARING));
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> > +                     special_pfn(SPECIALPAGE_IOREQ));
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> > +                     special_pfn(SPECIALPAGE_IOREQ) - 1);
> >
> >      /*
> >       * Identity-map page table is required for running with CR0.PG=0 when
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index a0eaadb..d9874fb 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -352,24 +352,9 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server
> *s, int id)
> >      return &p->vcpu_ioreq[id];
> >  }
> >
> > -void hvm_do_resume(struct vcpu *v)
> > +static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
> >  {
> > -    struct hvm_ioreq_server *s;
> > -    ioreq_t *p;
> > -
> > -    check_wakeup_from_wait();
> > -
> > -    if ( is_hvm_vcpu(v) )
> > -        pt_restore_timer(v);
> > -
> > -    s = v->arch.hvm_vcpu.ioreq_server;
> > -    v->arch.hvm_vcpu.ioreq_server = NULL;
> > -
> > -    if ( !s )
> > -        goto check_inject_trap;
> > -
> >      /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE).
> */
> > -    p = get_ioreq(s, v->vcpu_id);
> >      while ( p->state != STATE_IOREQ_NONE )
> >      {
> >          switch ( p->state )
> > @@ -385,12 +370,32 @@ void hvm_do_resume(struct vcpu *v)
> >              break;
> >          default:
> >              gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p->state);
> > -            domain_crash(v->domain);
> > +            domain_crash(d);
> >              return; /* bail */
> >          }
> >      }
> > +}
> > +
> > +void hvm_do_resume(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    check_wakeup_from_wait();
> > +
> > +    if ( is_hvm_vcpu(v) )
> > +        pt_restore_timer(v);
> > +
> > +    s = v->arch.hvm_vcpu.ioreq_server;
> > +    v->arch.hvm_vcpu.ioreq_server = NULL;
> > +
> > +    if ( s )
> > +    {
> > +        ioreq_t *p = get_ioreq(s, v->vcpu_id);
> > +
> > +        hvm_wait_on_io(d, p);
> > +    }
> >
> > - check_inject_trap:
> >      /* Inject pending hw/sw trap */
> >      if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> >      {
> > @@ -399,11 +404,13 @@ void hvm_do_resume(struct vcpu *v)
> >      }
> >  }
> >
> > -static void hvm_init_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp)
> > +static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, int buf)
> >  {
> > +    struct hvm_ioreq_page *iorp;
> > +
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> > +
> 
> Brackets are redundant.
> 

...but good style IMO.

> >      spin_lock_init(&iorp->lock);
> > -    domain_pause(d);
> >  }
> >
> >  void destroy_ring_for_helper(
> > @@ -419,16 +426,13 @@ void destroy_ring_for_helper(
> >      }
> >  }
> >
> > -static void hvm_destroy_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp)
> > +static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, int buf)
> >  {
> > -    spin_lock(&iorp->lock);
> > +    struct hvm_ioreq_page *iorp;
> >
> > -    ASSERT(d->is_dying);
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> >
> >      destroy_ring_for_helper(&iorp->va, iorp->page);
> > -
> > -    spin_unlock(&iorp->lock);
> >  }
> >
> >  int prepare_ring_for_helper(
> > @@ -476,8 +480,10 @@ int prepare_ring_for_helper(
> >  }
> >
> >  static int hvm_set_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
> > +    struct hvm_ioreq_server *s, int buf, unsigned long gmfn)
> >  {
> > +    struct domain *d = s->domain;
> > +    struct hvm_ioreq_page *iorp;
> >      struct page_info *page;
> >      void *va;
> >      int rc;
> > @@ -485,22 +491,17 @@ static int hvm_set_ioreq_page(
> >      if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
> >          return rc;
> >
> > -    spin_lock(&iorp->lock);
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> >
> >      if ( (iorp->va != NULL) || d->is_dying )
> >      {
> > -        destroy_ring_for_helper(&iorp->va, iorp->page);
> > -        spin_unlock(&iorp->lock);
> > +        destroy_ring_for_helper(&va, page);
> >          return -EINVAL;
> >      }
> >
> >      iorp->va = va;
> >      iorp->page = page;
> >
> > -    spin_unlock(&iorp->lock);
> > -
> > -    domain_unpause(d);
> > -
> >      return 0;
> >  }
> >
> > @@ -544,38 +545,6 @@ static int handle_pvh_io(
> >      return X86EMUL_OKAY;
> >  }
> >
> > -static int hvm_init_ioreq_server(struct domain *d)
> > -{
> > -    struct hvm_ioreq_server *s;
> > -    int i;
> > -
> > -    s = xzalloc(struct hvm_ioreq_server);
> > -    if ( !s )
> > -        return -ENOMEM;
> > -
> > -    s->domain = d;
> > -
> > -    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > -        s->ioreq_evtchn[i] = -1;
> > -    s->buf_ioreq_evtchn = -1;
> > -
> > -    hvm_init_ioreq_page(d, &s->ioreq);
> > -    hvm_init_ioreq_page(d, &s->buf_ioreq);
> > -
> > -    d->arch.hvm_domain.ioreq_server = s;
> > -    return 0;
> > -}
> > -
> > -static void hvm_deinit_ioreq_server(struct domain *d)
> > -{
> > -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > -
> > -    hvm_destroy_ioreq_page(d, &s->ioreq);
> > -    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> > -
> > -    xfree(s);
> > -}
> > -
> >  static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> >  {
> >      struct domain *d = s->domain;
> > @@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
> >      }
> >  }
> >
> > +static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int i;
> > +    unsigned long pfn;
> > +    struct vcpu *v;
> > +    int rc;
> 
> i and rc can be declared together.
> 
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    rc = -EEXIST;
> > +    if ( d->arch.hvm_domain.ioreq_server != NULL )
> > +        goto fail_exist;
> > +
> > +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> > +
> > +    rc = -ENOMEM;
> > +    s = xzalloc(struct hvm_ioreq_server);
> > +    if ( !s )
> > +        goto fail_alloc;
> > +
> > +    s->domain = d;
> > +    s->domid = domid;
> > +
> > +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > +        s->ioreq_evtchn[i] = -1;
> > +    s->buf_ioreq_evtchn = -1;
> > +
> > +    /* Initialize shared pages */
> > +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> > +
> > +    hvm_init_ioreq_page(s, 0);
> > +    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
> > +        goto fail_set_ioreq;
> > +
> > +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> > +
> > +    hvm_init_ioreq_page(s, 1);
> > +    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> > +        goto fail_set_buf_ioreq;
> > +
> > +    for_each_vcpu ( d, v )
> > +    {
> > +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> > +            goto fail_add_vcpu;
> > +    }
> > +
> > +    d->arch.hvm_domain.ioreq_server = s;
> > +
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return 0;
> > +
> > +fail_add_vcpu:
> > +    for_each_vcpu ( d, v )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> > +    hvm_destroy_ioreq_page(s, 1);
> > +fail_set_buf_ioreq:
> > +    hvm_destroy_ioreq_page(s, 0);
> > +fail_set_ioreq:
> > +    xfree(s);
> > +fail_alloc:
> > +fail_exist:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    return rc;
> > +}
> > +
> > +static void hvm_destroy_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    struct vcpu *v;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    domain_pause(d);
> > +
> > +    d->arch.hvm_domain.ioreq_server = NULL;
> > +
> > +    for_each_vcpu ( d, v )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> > +
> > +    hvm_destroy_ioreq_page(s, 1);
> > +    hvm_destroy_ioreq_page(s, 0);
> > +
> > +    xfree(s);
> > +
> > +    domain_unpause(d);
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +}
> > +
> > +static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    *port = s->buf_ioreq_evtchn;
> > +    rc = 0;
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return rc;
> > +}
> > +
> > +static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    if ( buf )
> > +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> > +    else
> > +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> 
> This can be reduced and use "params[buf ? HVM_PARAM_BUFIOREQ_PFN :
> HVM_PARAM_IOREQ_PFN]", although that is perhaps not as clear.
> 

Indeed. Yuck.

> > +
> > +    rc = 0;
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return rc;
> > +}
> > +
> >  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> >                                       int *p_port)
> >  {
> > @@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> >      return 0;
> >  }
> >
> > -static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> > +static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
> >  {
> > -    struct domain *d = s->domain;
> > +    struct hvm_ioreq_server *s;
> >      struct vcpu *v;
> >      int rc = 0;
> >
> >      domain_pause(d);
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    rc = 0;
> > +    if ( s->domid == domid )
> > +        goto done;
> >
> >      if ( d->vcpu[0] )
> >      {
> > @@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> >
> >  done:
> >      domain_unpause(d);
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> 
> Mismatched order of pause/unpause and lock/unlock pairs.  The unlock
> should ideally be before the unpause.
> 

Ok.

> >
> >      return rc;
> >  }
> >
> > -static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > -{
> > -    struct domain *d = s->domain;
> > -    int rc;
> > -
> > -    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> > -    if ( rc < 0 )
> > -        return rc;
> > -
> > -    hvm_update_ioreq_server_evtchn(s);
> > -
> > -    return 0;
> > -}
> > -
> > -static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > -{
> > -    struct domain *d = s->domain;
> > -
> > -    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> > -}
> > -
> >  int hvm_domain_initialise(struct domain *d)
> >  {
> >      int rc;
> > @@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      }
> >
> > +    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
> >      spin_lock_init(&d->arch.hvm_domain.irq_lock);
> >      spin_lock_init(&d->arch.hvm_domain.uc_lock);
> >
> > @@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      rtc_init(d);
> >
> > -    rc = hvm_init_ioreq_server(d);
> > -    if ( rc != 0 )
> > -        goto fail2;
> > -
> >      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> >
> >      rc = hvm_funcs.domain_initialise(d);
> >      if ( rc != 0 )
> > -        goto fail3;
> > +        goto fail2;
> >
> >      return 0;
> >
> > - fail3:
> > -    hvm_deinit_ioreq_server(d);
> >   fail2:
> >      rtc_deinit(d);
> >      stdvga_deinit(d);
> > @@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
> >      if ( hvm_funcs.nhvm_domain_relinquish_resources )
> >          hvm_funcs.nhvm_domain_relinquish_resources(d);
> >
> > -    hvm_deinit_ioreq_server(d);
> > +    hvm_destroy_ioreq_server(d);
> >
> >      msixtbl_pt_cleanup(d);
> >
> > @@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
> >          goto fail5;
> >
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> >      s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> >
> > -    rc = hvm_ioreq_server_add_vcpu(s, v);
> > -    if ( rc < 0 )
> > -        goto fail6;
> > +    if ( s )
> > +    {
> > +        rc = hvm_ioreq_server_add_vcpu(s, v);
> > +        if ( rc < 0 )
> > +            goto fail6;
> > +    }
> >
> >      if ( v->vcpu_id == 0 )
> >      {
> > @@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >  void hvm_vcpu_destroy(struct vcpu *v)
> >  {
> >      struct domain *d = v->domain;
> > -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> >
> > -    hvm_ioreq_server_remove_vcpu(s, v);
> > +    if ( s )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> >
> >      nestedhvm_vcpu_destroy(v);
> >
> > @@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      /* Ensure buffered_iopage fits in a page */
> >      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> >
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> >      s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> >      if ( !s )
> >          return 0;
> >
> > @@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      return 1;
> >  }
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> > +static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server
> *s,
> > +                                            struct vcpu *v,
> > +                                            ioreq_t *proto_p)
> >  {
> >      struct domain *d = v->domain;
> > -    struct hvm_ioreq_server *s;
> > -    ioreq_t *p;
> > -
> > -    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> > -        return 0; /* implicitly bins the i/o operation */
> > -
> > -    s = d->arch.hvm_domain.ioreq_server;
> > -    if ( !s )
> > -        return 0;
> > -
> > -    p = get_ioreq(s, v->vcpu_id);
> > +    ioreq_t *p = get_ioreq(s, v->vcpu_id);
> >
> >      if ( unlikely(p->state != STATE_IOREQ_NONE) )
> >      {
> > @@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >      return 1;
> >  }
> >
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> > +
> > +    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> > +        return 0;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> 
> What is the purpose of taking the server lock just to read the
> ioreq_server pointer?
> 

The lock is supposed to be there to eventually wrap a list walk, but as that's done in a separate function the lock is probably not particularly illustrative here - I'll ditch it.

> > +
> > +    if ( !s )
> > +        return 0;
> > +
> > +    return hvm_send_assist_req_to_server(s, v, p);
> > +}
> > +
> >  void hvm_hlt(unsigned long rflags)
> >  {
> >      struct vcpu *curr = current;
> > @@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >      case HVMOP_get_param:
> >      {
> >          struct xen_hvm_param a;
> > -        struct hvm_ioreq_server *s;
> >          struct domain *d;
> >          struct vcpu *v;
> >
> > @@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( rc )
> >              goto param_fail;
> >
> > -        s = d->arch.hvm_domain.ioreq_server;
> > -
> >          if ( op == HVMOP_set_param )
> >          {
> >              rc = 0;
> >
> >              switch ( a.index )
> >              {
> > -            case HVM_PARAM_IOREQ_PFN:
> > -                rc = hvm_set_ioreq_server_pfn(s, a.value);
> > -                break;
> > -            case HVM_PARAM_BUFIOREQ_PFN:
> > -                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> > -                break;
> >              case HVM_PARAM_CALLBACK_IRQ:
> >                  hvm_set_callback_via(d, a.value);
> >                  hvm_latch_shinfo_size(d);
> > @@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  if ( a.value == DOMID_SELF )
> >                      a.value = curr_d->domain_id;
> >
> > -                rc = hvm_set_ioreq_server_domid(s, a.value);
> > +                rc = hvm_create_ioreq_server(d, a.value);
> > +                if ( rc == -EEXIST )
> > +                    rc = hvm_set_ioreq_server_domid(d, a.value);
> >                  break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  /* Not reflexive, as we must domain_pause(). */
> > @@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          {
> >              switch ( a.index )
> >              {
> > +            case HVM_PARAM_IOREQ_PFN:
> > +            case HVM_PARAM_BUFIOREQ_PFN:
> >              case HVM_PARAM_BUFIOREQ_EVTCHN:
> > -                a.value = s->buf_ioreq_evtchn;
> > +                /* May need to create server */
> > +                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> > +                if ( rc != 0 && rc != -EEXIST )
> > +                    goto param_fail;
> > +
> > +                switch ( a.index )
> > +                {
> > +                case HVM_PARAM_IOREQ_PFN: {
> > +                    xen_pfn_t pfn;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = pfn;
> > +                    break;
> > +                }
> > +                case HVM_PARAM_BUFIOREQ_PFN: {
> > +                    xen_pfn_t pfn;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = pfn;
> > +                    break;
> > +                }
> > +                case HVM_PARAM_BUFIOREQ_EVTCHN: {
> > +                    evtchn_port_t port;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = port;
> > +                    break;
> > +                }
> > +                default:
> > +                    BUG();
> > +                }
> >                  break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> > diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> > index 4c039f8..e750ef0 100644
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -52,6 +52,8 @@ struct hvm_ioreq_server {
> >
> >  struct hvm_domain {
> >      struct hvm_ioreq_server *ioreq_server;
> > +    spinlock_t              ioreq_server_lock;
> > +
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > @@ -106,4 +108,3 @@ struct hvm_domain {
> >  #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
> >
> >  #endif /* __ASM_X86_HVM_DOMAIN_H__ */
> > -
> 
> Spurious whitespace change
> 

Ok.

  Paul

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:32:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tbB-00079g-HT; Thu, 30 Jan 2014 15:32:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8tbA-00079b-Gm
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:32:13 +0000
Received: from [85.158.137.68:31815] by server-9.bemta-3.messagelabs.com id
	D6/97-10184-B707AE25; Thu, 30 Jan 2014 15:32:11 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391095928!12389897!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14782 invoked from network); 30 Jan 2014 15:32:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:32:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96168369"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:32:07 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 10:32:06 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 16:32:05 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation
	of ioreq server
Thread-Index: AQHPHcZanTzHwQ/6o0WY9Jgbf8Cx0JqdUa4AgAASntA=
Date: Thu, 30 Jan 2014 15:32:05 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD021796E@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-4-git-send-email-paul.durrant@citrix.com>
	<52EA6E0E.8020200@citrix.com>
In-Reply-To: <52EA6E0E.8020200@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation of
 ioreq server
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 30 January 2014 15:22
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 3/5] ioreq-server: on-demand creation
> of ioreq server
> 
> On 30/01/14 14:19, Paul Durrant wrote:
> > This patch only creates the ioreq server when the legacy HVM parameters
> > are touched by an emulator. It also lays some groundwork for supporting
> > multiple IOREQ servers. For instance, it introduces ioreq server reference
> > counting, which is not strictly necessary at this stage but will become so
> > when ioreq servers can be destroyed prior to the domain dying.
> >
> > There is a significant change in the layout of the special pages reserved
> > in xc_hvm_build_x86.c. This is so that we can 'grow' them downwards without
> > moving pages such as the xenstore page when building a domain that can
> > support more than one emulator.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> >  tools/libxc/xc_hvm_build_x86.c   |   41 ++--
> >  xen/arch/x86/hvm/hvm.c           |  409 ++++++++++++++++++++++++++------------
> >  xen/include/asm-x86/hvm/domain.h |    3 +-
> >  3 files changed, 314 insertions(+), 139 deletions(-)
> >
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..f24f2a1 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -41,13 +41,12 @@
> >  #define SPECIALPAGE_PAGING   0
> >  #define SPECIALPAGE_ACCESS   1
> >  #define SPECIALPAGE_SHARING  2
> > -#define SPECIALPAGE_BUFIOREQ 3
> > -#define SPECIALPAGE_XENSTORE 4
> > -#define SPECIALPAGE_IOREQ    5
> > -#define SPECIALPAGE_IDENT_PT 6
> > -#define SPECIALPAGE_CONSOLE  7
> > -#define NR_SPECIAL_PAGES     8
> > -#define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
> > +#define SPECIALPAGE_XENSTORE 3
> > +#define SPECIALPAGE_IDENT_PT 4
> > +#define SPECIALPAGE_CONSOLE  5
> > +#define SPECIALPAGE_IOREQ    6
> > +#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
> > +#define special_pfn(x) (0xff000u - (x))
> >
> >  static int modules_init(struct xc_hvm_build_args *args,
> >                          uint64_t vend, struct elf_binary *elf,
> > @@ -112,7 +111,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
> >      /* Memory parameters. */
> >      hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
> >      hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> > -    hvm_info->reserved_mem_pgstart = special_pfn(0);
> > -    hvm_info->reserved_mem_pgstart = special_pfn(0);
> > +    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
> >
> >      /* Finish with the checksum. */
> >      for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> > @@ -463,6 +462,24 @@ static int setup_guest(xc_interface *xch,
> >      munmap(hvm_info_page, PAGE_SIZE);
> >
> >      /* Allocate and clear special pages. */
> > +
> > +     DPRINTF("%d SPECIAL PAGES:\n"
> > +            "  PAGING:    %"PRI_xen_pfn"\n"
> > +            "  ACCESS:    %"PRI_xen_pfn"\n"
> > +            "  SHARING:   %"PRI_xen_pfn"\n"
> > +            "  STORE:     %"PRI_xen_pfn"\n"
> > +            "  IDENT_PT:  %"PRI_xen_pfn"\n"
> > +            "  CONSOLE:   %"PRI_xen_pfn"\n"
> > +            "  IOREQ:     %"PRI_xen_pfn"\n",
> > +            NR_SPECIAL_PAGES,
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> > +            (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
> > +
> 
> I realise I am being quite picky here, but from a daemon point of view
> trying to log to facilities like syslog, multi-line single debugging
> messages are a pain.  Would it be possible to do this as 8 DPRINTF()s?
> 

Yes, of course.

> >      for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> >      {
> >          xen_pfn_t pfn = special_pfn(i);
> > @@ -478,10 +495,6 @@ static int setup_guest(xc_interface *xch,
> >
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
> >                       special_pfn(SPECIALPAGE_XENSTORE));
> > -    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> > -                     special_pfn(SPECIALPAGE_BUFIOREQ));
> > -    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> > -                     special_pfn(SPECIALPAGE_IOREQ));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
> >                       special_pfn(SPECIALPAGE_CONSOLE));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_PAGING_RING_PFN,
> > @@ -490,6 +503,10 @@ static int setup_guest(xc_interface *xch,
> >                       special_pfn(SPECIALPAGE_ACCESS));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_SHARING_RING_PFN,
> >                       special_pfn(SPECIALPAGE_SHARING));
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> > +                     special_pfn(SPECIALPAGE_IOREQ));
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> > +                     special_pfn(SPECIALPAGE_IOREQ) - 1);
> >
> >      /*
> >       * Identity-map page table is required for running with CR0.PG=0 when
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index a0eaadb..d9874fb 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -352,24 +352,9 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server
> *s, int id)
> >      return &p->vcpu_ioreq[id];
> >  }
> >
> > -void hvm_do_resume(struct vcpu *v)
> > +static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
> >  {
> > -    struct hvm_ioreq_server *s;
> > -    ioreq_t *p;
> > -
> > -    check_wakeup_from_wait();
> > -
> > -    if ( is_hvm_vcpu(v) )
> > -        pt_restore_timer(v);
> > -
> > -    s = v->arch.hvm_vcpu.ioreq_server;
> > -    v->arch.hvm_vcpu.ioreq_server = NULL;
> > -
> > -    if ( !s )
> > -        goto check_inject_trap;
> > -
> >      /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE).
> */
> > -    p = get_ioreq(s, v->vcpu_id);
> >      while ( p->state != STATE_IOREQ_NONE )
> >      {
> >          switch ( p->state )
> > @@ -385,12 +370,32 @@ void hvm_do_resume(struct vcpu *v)
> >              break;
> >          default:
> >              gdprintk(XENLOG_ERR, "Weird HVM iorequest state %d.\n", p-
> >state);
> > -            domain_crash(v->domain);
> > +            domain_crash(d);
> >              return; /* bail */
> >          }
> >      }
> > +}
> > +
> > +void hvm_do_resume(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    check_wakeup_from_wait();
> > +
> > +    if ( is_hvm_vcpu(v) )
> > +        pt_restore_timer(v);
> > +
> > +    s = v->arch.hvm_vcpu.ioreq_server;
> > +    v->arch.hvm_vcpu.ioreq_server = NULL;
> > +
> > +    if ( s )
> > +    {
> > +        ioreq_t *p = get_ioreq(s, v->vcpu_id);
> > +
> > +        hvm_wait_on_io(d, p);
> > +    }
> >
> > - check_inject_trap:
> >      /* Inject pending hw/sw trap */
> >      if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> >      {
> > @@ -399,11 +404,13 @@ void hvm_do_resume(struct vcpu *v)
> >      }
> >  }
> >
> > -static void hvm_init_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp)
> > +static void hvm_init_ioreq_page(struct hvm_ioreq_server *s, int buf)
> >  {
> > +    struct hvm_ioreq_page *iorp;
> > +
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> > +
> 
> Brackets are redundant.
> 

...but good style IMO.

> >      spin_lock_init(&iorp->lock);
> > -    domain_pause(d);
> >  }
> >
> >  void destroy_ring_for_helper(
> > @@ -419,16 +426,13 @@ void destroy_ring_for_helper(
> >      }
> >  }
> >
> > -static void hvm_destroy_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp)
> > +static void hvm_destroy_ioreq_page(struct hvm_ioreq_server *s, int buf)
> >  {
> > -    spin_lock(&iorp->lock);
> > +    struct hvm_ioreq_page *iorp;
> >
> > -    ASSERT(d->is_dying);
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> >
> >      destroy_ring_for_helper(&iorp->va, iorp->page);
> > -
> > -    spin_unlock(&iorp->lock);
> >  }
> >
> >  int prepare_ring_for_helper(
> > @@ -476,8 +480,10 @@ int prepare_ring_for_helper(
> >  }
> >
> >  static int hvm_set_ioreq_page(
> > -    struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
> > +    struct hvm_ioreq_server *s, int buf, unsigned long gmfn)
> >  {
> > +    struct domain *d = s->domain;
> > +    struct hvm_ioreq_page *iorp;
> >      struct page_info *page;
> >      void *va;
> >      int rc;
> > @@ -485,22 +491,17 @@ static int hvm_set_ioreq_page(
> >      if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
> >          return rc;
> >
> > -    spin_lock(&iorp->lock);
> > +    iorp = ( buf ) ? &s->buf_ioreq : &s->ioreq;
> >
> >      if ( (iorp->va != NULL) || d->is_dying )
> >      {
> > -        destroy_ring_for_helper(&iorp->va, iorp->page);
> > -        spin_unlock(&iorp->lock);
> > +        destroy_ring_for_helper(&va, page);
> >          return -EINVAL;
> >      }
> >
> >      iorp->va = va;
> >      iorp->page = page;
> >
> > -    spin_unlock(&iorp->lock);
> > -
> > -    domain_unpause(d);
> > -
> >      return 0;
> >  }
> >
> > @@ -544,38 +545,6 @@ static int handle_pvh_io(
> >      return X86EMUL_OKAY;
> >  }
> >
> > -static int hvm_init_ioreq_server(struct domain *d)
> > -{
> > -    struct hvm_ioreq_server *s;
> > -    int i;
> > -
> > -    s = xzalloc(struct hvm_ioreq_server);
> > -    if ( !s )
> > -        return -ENOMEM;
> > -
> > -    s->domain = d;
> > -
> > -    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > -        s->ioreq_evtchn[i] = -1;
> > -    s->buf_ioreq_evtchn = -1;
> > -
> > -    hvm_init_ioreq_page(d, &s->ioreq);
> > -    hvm_init_ioreq_page(d, &s->buf_ioreq);
> > -
> > -    d->arch.hvm_domain.ioreq_server = s;
> > -    return 0;
> > -}
> > -
> > -static void hvm_deinit_ioreq_server(struct domain *d)
> > -{
> > -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > -
> > -    hvm_destroy_ioreq_page(d, &s->ioreq);
> > -    hvm_destroy_ioreq_page(d, &s->buf_ioreq);
> > -
> > -    xfree(s);
> > -}
> > -
> >  static void hvm_update_ioreq_server_evtchn(struct hvm_ioreq_server *s)
> >  {
> >      struct domain *d = s->domain;
> > @@ -637,6 +606,152 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
> >      }
> >  }
> >
> > +static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int i;
> > +    unsigned long pfn;
> > +    struct vcpu *v;
> > +    int rc;
> 
> i and rc can be declared together.
> 
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    rc = -EEXIST;
> > +    if ( d->arch.hvm_domain.ioreq_server != NULL )
> > +        goto fail_exist;
> > +
> > +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> > +
> > +    rc = -ENOMEM;
> > +    s = xzalloc(struct hvm_ioreq_server);
> > +    if ( !s )
> > +        goto fail_alloc;
> > +
> > +    s->domain = d;
> > +    s->domid = domid;
> > +
> > +    for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> > +        s->ioreq_evtchn[i] = -1;
> > +    s->buf_ioreq_evtchn = -1;
> > +
> > +    /* Initialize shared pages */
> > +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> > +
> > +    hvm_init_ioreq_page(s, 0);
> > +    if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
> > +        goto fail_set_ioreq;
> > +
> > +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> > +
> > +    hvm_init_ioreq_page(s, 1);
> > +    if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> > +        goto fail_set_buf_ioreq;
> > +
> > +    for_each_vcpu ( d, v )
> > +    {
> > +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> > +            goto fail_add_vcpu;
> > +    }
> > +
> > +    d->arch.hvm_domain.ioreq_server = s;
> > +
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return 0;
> > +
> > +fail_add_vcpu:
> > +    for_each_vcpu ( d, v )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> > +    hvm_destroy_ioreq_page(s, 1);
> > +fail_set_buf_ioreq:
> > +    hvm_destroy_ioreq_page(s, 0);
> > +fail_set_ioreq:
> > +    xfree(s);
> > +fail_alloc:
> > +fail_exist:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    return rc;
> > +}
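For what it's worth, the fail_* labels above follow the standard C goto-unwind idiom: each label releases only the resources acquired before the failing step, in reverse order of acquisition. A minimal self-contained sketch of the pattern (the step tags and function names are hypothetical, not the Xen code itself):

```c
#include <assert.h>

static int log_len;
static char log_buf[8];

/* "Acquire" one resource, or fail. */
static int step(char tag, int fail)
{
    if ( fail )
        return -1;
    log_buf[log_len++] = tag;
    return 0;
}

/* "Release" it again; must happen in reverse order of acquisition. */
static void undo(char tag)
{
    assert(log_buf[--log_len] == tag);
}

/* Mirrors the fail_* structure: each label unwinds only what succeeded. */
static int create(int fail_at)
{
    int rc = -1;

    if ( step('a', fail_at == 0) < 0 )
        goto fail_a;
    if ( step('b', fail_at == 1) < 0 )
        goto fail_b;
    if ( step('c', fail_at == 2) < 0 )
        goto fail_c;
    return 0;

fail_c:
    undo('b');
fail_b:
    undo('a');
fail_a:
    return rc;
}
```

The empty fail_alloc/fail_exist labels in the patch correspond to failures before anything was acquired, so they fall straight through to the unlock.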
> > +
> > +static void hvm_destroy_ioreq_server(struct domain *d)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    struct vcpu *v;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    domain_pause(d);
> > +
> > +    d->arch.hvm_domain.ioreq_server = NULL;
> > +
> > +    for_each_vcpu ( d, v )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> > +
> > +    hvm_destroy_ioreq_page(s, 1);
> > +    hvm_destroy_ioreq_page(s, 0);
> > +
> > +    xfree(s);
> > +
> > +    domain_unpause(d);
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +}
> > +
> > +static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    *port = s->buf_ioreq_evtchn;
> > +    rc = 0;
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return rc;
> > +}
> > +
> > +static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    if ( buf )
> > +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> > +    else
> > +        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> 
> This can be reduced to use "params[buf ? HVM_PARAM_BUFIOREQ_PFN :
> HVM_PARAM_IOREQ_PFN]", although that is perhaps not as clear.
> 

Indeed. Yuck.
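For reference, the suggested reduction looks like this in isolation (a sketch with a plain array standing in for the HVM params; the enum values are illustrative, not the real HVM_PARAM_* numbers):

```c
#include <assert.h>

/* Illustrative parameter indices only; the real HVM_PARAM_* values differ. */
enum { HVM_PARAM_IOREQ_PFN = 5, HVM_PARAM_BUFIOREQ_PFN = 6, HVM_NR_PARAMS = 16 };

static unsigned long params[HVM_NR_PARAMS] = {
    [HVM_PARAM_IOREQ_PFN]    = 100,
    [HVM_PARAM_BUFIOREQ_PFN] = 200,
};

/* The if/else pair collapses into one indexed load. */
static unsigned long get_pfn(int buf)
{
    return params[buf ? HVM_PARAM_BUFIOREQ_PFN : HVM_PARAM_IOREQ_PFN];
}
```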

> > +
> > +    rc = 0;
> > +
> > +done:
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    return rc;
> > +}
> > +
> >  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> >                                       int *p_port)
> >  {
> > @@ -652,13 +767,24 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
> >      return 0;
> >  }
> >
> > -static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> > +static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
> >  {
> > -    struct domain *d = s->domain;
> > +    struct hvm_ioreq_server *s;
> >      struct vcpu *v;
> >      int rc = 0;
> >
> >      domain_pause(d);
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto done;
> > +
> > +    rc = 0;
> > +    if ( s->domid == domid )
> > +        goto done;
> >
> >      if ( d->vcpu[0] )
> >      {
> > @@ -680,31 +806,11 @@ static int hvm_set_ioreq_server_domid(struct hvm_ioreq_server *s, domid_t domid)
> >
> >  done:
> >      domain_unpause(d);
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> 
> Mismatched order of pause/unpause and lock/unlock pairs.  The unlock
> should ideally be before the unpause.
> 

Ok.
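The general rule being applied here: acquire/release pairs should nest like brackets rather than interleave. A self-contained sketch of the corrected ordering, with stub primitives that just record the sequence (names modelled on the Xen ones, bodies hypothetical):

```c
#include <assert.h>
#include <string.h>

static char order[64];

static void record(const char *ev)
{
    strcat(order, ev);
    strcat(order, " ");
}

/* Stubs standing in for the real primitives. */
static void domain_pause(void)   { record("pause"); }
static void spin_lock(void)      { record("lock"); }
static void spin_unlock(void)    { record("unlock"); }
static void domain_unpause(void) { record("unpause"); }

/* Corrected shape: pairs nest, so release happens in reverse order. */
static void set_domid(void)
{
    domain_pause();
    spin_lock();
    /* ... look up and update the ioreq server ... */
    spin_unlock();
    domain_unpause();
}
```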

> >
> >      return rc;
> >  }
> >
> > -static int hvm_set_ioreq_server_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > -{
> > -    struct domain *d = s->domain;
> > -    int rc;
> > -
> > -    rc = hvm_set_ioreq_page(d, &s->ioreq, pfn);
> > -    if ( rc < 0 )
> > -        return rc;
> > -
> > -    hvm_update_ioreq_server_evtchn(s);
> > -
> > -    return 0;
> > -}
> > -
> > -static int hvm_set_ioreq_server_buf_pfn(struct hvm_ioreq_server *s, unsigned long pfn)
> > -{
> > -    struct domain *d = s->domain;
> > -
> > -    return  hvm_set_ioreq_page(d, &s->buf_ioreq, pfn);
> > -}
> > -
> >  int hvm_domain_initialise(struct domain *d)
> >  {
> >      int rc;
> > @@ -732,6 +838,7 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      }
> >
> > +    spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
> >      spin_lock_init(&d->arch.hvm_domain.irq_lock);
> >      spin_lock_init(&d->arch.hvm_domain.uc_lock);
> >
> > @@ -772,20 +879,14 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      rtc_init(d);
> >
> > -    rc = hvm_init_ioreq_server(d);
> > -    if ( rc != 0 )
> > -        goto fail2;
> > -
> >      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> >
> >      rc = hvm_funcs.domain_initialise(d);
> >      if ( rc != 0 )
> > -        goto fail3;
> > +        goto fail2;
> >
> >      return 0;
> >
> > - fail3:
> > -    hvm_deinit_ioreq_server(d);
> >   fail2:
> >      rtc_deinit(d);
> >      stdvga_deinit(d);
> > @@ -809,7 +910,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
> >      if ( hvm_funcs.nhvm_domain_relinquish_resources )
> >          hvm_funcs.nhvm_domain_relinquish_resources(d);
> >
> > -    hvm_deinit_ioreq_server(d);
> > +    hvm_destroy_ioreq_server(d);
> >
> >      msixtbl_pt_cleanup(d);
> >
> > @@ -1364,11 +1465,16 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
> >          goto fail5;
> >
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> >      s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> >
> > -    rc = hvm_ioreq_server_add_vcpu(s, v);
> > -    if ( rc < 0 )
> > -        goto fail6;
> > +    if ( s )
> > +    {
> > +        rc = hvm_ioreq_server_add_vcpu(s, v);
> > +        if ( rc < 0 )
> > +            goto fail6;
> > +    }
> >
> >      if ( v->vcpu_id == 0 )
> >      {
> > @@ -1404,9 +1510,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
> >  void hvm_vcpu_destroy(struct vcpu *v)
> >  {
> >      struct domain *d = v->domain;
> > -    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> >
> > -    hvm_ioreq_server_remove_vcpu(s, v);
> > +    if ( s )
> > +        hvm_ioreq_server_remove_vcpu(s, v);
> >
> >      nestedhvm_vcpu_destroy(v);
> >
> > @@ -1459,7 +1570,10 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      /* Ensure buffered_iopage fits in a page */
> >      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> >
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> >      s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> >      if ( !s )
> >          return 0;
> >
> > @@ -1532,20 +1646,12 @@ int hvm_buffered_io_send(ioreq_t *p)
> >      return 1;
> >  }
> >
> > -bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> > +static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
> > +                                            struct vcpu *v,
> > +                                            ioreq_t *proto_p)
> >  {
> >      struct domain *d = v->domain;
> > -    struct hvm_ioreq_server *s;
> > -    ioreq_t *p;
> > -
> > -    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> > -        return 0; /* implicitly bins the i/o operation */
> > -
> > -    s = d->arch.hvm_domain.ioreq_server;
> > -    if ( !s )
> > -        return 0;
> > -
> > -    p = get_ioreq(s, v->vcpu_id);
> > +    ioreq_t *p = get_ioreq(s, v->vcpu_id);
> >
> >      if ( unlikely(p->state != STATE_IOREQ_NONE) )
> >      {
> > @@ -1578,6 +1684,26 @@ bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
> >      return 1;
> >  }
> >
> > +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct hvm_ioreq_server *s;
> > +
> > +    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> > +
> > +    if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
> > +        return 0;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +    s = d->arch.hvm_domain.ioreq_server;
> > +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> 
> What is the purpose of taking the server lock just to read the
> ioreq_server pointer?
> 

The lock is supposed to be there to eventually wrap a list walk, but as that's done in a separate function the lock is probably not particularly illustrative here - I'll ditch it.
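With the lock dropped, the function reduces to a single pointer read plus a NULL check. A self-contained model of that control flow (the types and helpers here are stand-ins, not the real Xen structures):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types; not the real Xen structures. */
struct ioreq_server { int id; };
struct domain { struct ioreq_server *ioreq_server; };
struct vcpu { struct domain *domain; };

static int forwarded;

static int send_to_server(struct ioreq_server *s)
{
    forwarded = s->id;
    return 1;
}

/* A single aligned pointer load; no lock is needed just to read it. */
static int send_assist_req(struct vcpu *v)
{
    struct ioreq_server *s = v->domain->ioreq_server;

    if ( !s )
        return 0; /* no emulator attached: bin the request */

    return send_to_server(s);
}
```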

> > +
> > +    if ( !s )
> > +        return 0;
> > +
> > +    return hvm_send_assist_req_to_server(s, v, p);
> > +}
> > +
> >  void hvm_hlt(unsigned long rflags)
> >  {
> >      struct vcpu *curr = current;
> > @@ -4172,7 +4298,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >      case HVMOP_get_param:
> >      {
> >          struct xen_hvm_param a;
> > -        struct hvm_ioreq_server *s;
> >          struct domain *d;
> >          struct vcpu *v;
> >
> > @@ -4198,20 +4323,12 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( rc )
> >              goto param_fail;
> >
> > -        s = d->arch.hvm_domain.ioreq_server;
> > -
> >          if ( op == HVMOP_set_param )
> >          {
> >              rc = 0;
> >
> >              switch ( a.index )
> >              {
> > -            case HVM_PARAM_IOREQ_PFN:
> > -                rc = hvm_set_ioreq_server_pfn(s, a.value);
> > -                break;
> > -            case HVM_PARAM_BUFIOREQ_PFN:
> > -                rc = hvm_set_ioreq_server_buf_pfn(s, a.value);
> > -                break;
> >              case HVM_PARAM_CALLBACK_IRQ:
> >                  hvm_set_callback_via(d, a.value);
> >                  hvm_latch_shinfo_size(d);
> > @@ -4265,7 +4382,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  if ( a.value == DOMID_SELF )
> >                      a.value = curr_d->domain_id;
> >
> > -                rc = hvm_set_ioreq_server_domid(s, a.value);
> > +                rc = hvm_create_ioreq_server(d, a.value);
> > +                if ( rc == -EEXIST )
> > +                    rc = hvm_set_ioreq_server_domid(d, a.value);
> >                  break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  /* Not reflexive, as we must domain_pause(). */
> > @@ -4360,8 +4479,46 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          {
> >              switch ( a.index )
> >              {
> > +            case HVM_PARAM_IOREQ_PFN:
> > +            case HVM_PARAM_BUFIOREQ_PFN:
> >              case HVM_PARAM_BUFIOREQ_EVTCHN:
> > -                a.value = s->buf_ioreq_evtchn;
> > +                /* May need to create server */
> > +                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> > +                if ( rc != 0 && rc != -EEXIST )
> > +                    goto param_fail;
> > +
> > +                switch ( a.index )
> > +                {
> > +                case HVM_PARAM_IOREQ_PFN: {
> > +                    xen_pfn_t pfn;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = pfn;
> > +                    break;
> > +                }
> > +                case HVM_PARAM_BUFIOREQ_PFN: {
> > +                    xen_pfn_t pfn;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = pfn;
> > +                    break;
> > +                }
> > +                case HVM_PARAM_BUFIOREQ_EVTCHN: {
> > +                    evtchn_port_t port;
> > +
> > +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> > +                        goto param_fail;
> > +
> > +                    a.value = port;
> > +                    break;
> > +                }
> > +                default:
> > +                    BUG();
> > +                }
> >                  break;
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> > diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> > index 4c039f8..e750ef0 100644
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -52,6 +52,8 @@ struct hvm_ioreq_server {
> >
> >  struct hvm_domain {
> >      struct hvm_ioreq_server *ioreq_server;
> > +    spinlock_t              ioreq_server_lock;
> > +
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > @@ -106,4 +108,3 @@ struct hvm_domain {
> >  #define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
> >
> >  #endif /* __ASM_X86_HVM_DOMAIN_H__ */
> > -
> 
> Spurious whitespace change
> 

Ok.

  Paul

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:32:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tbX-0007Cr-5S; Thu, 30 Jan 2014 15:32:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8tbV-0007Cd-PQ
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:32:34 +0000
Received: from [85.158.143.35:32944] by server-3.bemta-4.messagelabs.com id
	79/26-11539-1907AE25; Thu, 30 Jan 2014 15:32:33 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391095952!1981550!1
X-Originating-IP: [81.169.146.219]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21534 invoked from network); 30 Jan 2014 15:32:32 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.219)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 15:32:32 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391095952; l=248;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=zKn1BGX19MwJXWPcSG5l6Q4sgIo=;
	b=Jjp/ZS6aUwSEN49VaFDleaDRQR+ERC/dK92J8TBnNc/pTw4/eBM+L7FYjmQHl6PUNhe
	U4g4FByV+SN4qTv2wZMI3MhRNYDOZebxo1yVsSJKOq1Ce7BLy/nWdSliGTQHIHNTaAu1U
	iQvc8J3trl+xzpbPOnGNwWaYdI3WEbgPsY8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id n041f5q0UFWWjer
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 16:32:32 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id BB42850269; Thu, 30 Jan 2014 16:32:31 +0100 (CET)
Date: Thu, 30 Jan 2014 16:32:31 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140130153231.GA32756@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v2] libxl: add option for discard support to
 xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, Olaf Hering wrote:

> boolean property "discard_enable" is written to the backend node. An

This should have been "discard-enable". Will resend with updated commit
message once the tree is open again for new features.

Olaf


From xen-devel-bounces@lists.xen.org Thu Jan 30 15:35:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tej-0007Tz-Q4; Thu, 30 Jan 2014 15:35:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8tei-0007Ti-Dv
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:35:52 +0000
Received: from [85.158.139.211:61858] by server-11.bemta-5.messagelabs.com id
	ED/51-23886-5517AE25; Thu, 30 Jan 2014 15:35:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391096143!642768!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29749 invoked from network); 30 Jan 2014 15:35:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:35:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96170312"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:35:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:35:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8teM-0007de-VC;
	Thu, 30 Jan 2014 15:35:30 +0000
Date: Thu, 30 Jan 2014 15:35:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Given that we don't deactivate the interrupt (writing to GICC_DIR) until
the guest EOIs it, I can't understand how you manage to get a second
interrupt notification before the guest EOIs the first one.

Do you set GICC_CTL_EOI in GICC_CTLR?

On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
> 
> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > Is it a level or an edge irq?
> >
> > On Wed, 29 Jan 2014, Julien Grall wrote:
> >> Hi,
> >>
> >> It's weird, a physical IRQ should not be injected twice ...
> >> Were you able to print the IRQ number?
> >>
> >> In any case, you are using the old version of the interrupt patch series.
> >> Your new error may come from a race condition in this code.
> >>
> >> Can you try to use a newer version?
> >>
> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
> >>       > difference for xen-unstable (it should make things clearer, if nothing
> >>       > else) but it should fix things for Oleksandr.
> >>
> >>       Unfortunately, it is not enough for stable operation.
> >>
> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
> >>       gic_route_irq_to_guest(). And as a result, I don't see the situation
> >>       which causes the deadlock in the on_selected_cpus function (expected).
> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
> >>       identified where this is happening), or I sometimes see traps like these:
> >>       ("WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them)
> >>
> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> >>       (XEN) CPU:    1
> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
> >>       (XEN) CPSR:   200001da MODE:Hypervisor
> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> >>       (XEN)
> >>       (XEN)   VTCR_EL2: 80002558
> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
> >>       (XEN)
> >>       (XEN)  SCTLR_EL2: 30cd187f
> >>       (XEN)    HCR_EL2: 00000000000028b5
> >>       (XEN)  TTBR0_EL2: 00000000d2014000
> >>       (XEN)
> >>       (XEN)    ESR_EL2: 00000000
> >>       (XEN)  HPFAR_EL2: 0000000000482110
> >>       (XEN)      HDFAR: fa211190
> >>       (XEN)      HIFAR: 00000000
> >>       (XEN)
> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
> >>       (XEN) Xen call trace:
> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
> >>       (XEN)
> >>
> >>       I am also posting maintenance_interrupt() from my tree:
> >>
> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
> >>       cpu_user_regs *regs)
> >>       {
> >>           int i = 0, virq, pirq;
> >>           uint32_t lr;
> >>           struct vcpu *v = current;
> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >>
> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
> >>                                     64, i)) < 64) {
> >>               struct pending_irq *p, *n;
> >>               int cpu, eoi;
> >>
> >>               cpu = -1;
> >>               eoi = 0;
> >>
> >>               spin_lock_irq(&gic.lock);
> >>               lr = GICH[GICH_LR + i];
> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
> >>
> >>               p = irq_to_pending(v, virq);
> >>               if ( p->desc != NULL ) {
> >>                   p->desc->status &= ~IRQ_INPROGRESS;
> >>                   /* Assume only one pcpu needs to EOI the irq */
> >>                   cpu = p->desc->arch.eoi_cpu;
> >>                   eoi = 1;
> >>                   pirq = p->desc->irq;
> >>               }
> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
> >>               {
> >>                   /* A physical IRQ can't be reinjected */
> >>                   WARN_ON(p->desc != NULL);
> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> >>                   spin_unlock_irq(&gic.lock);
> >>                   i++;
> >>                   continue;
> >>               }
> >>
> >>               GICH[GICH_LR + i] = 0;
> >>               clear_bit(i, &this_cpu(lr_mask));
> >>
> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
> >>                   list_del_init(&n->lr_queue);
> >>                   set_bit(i, &this_cpu(lr_mask));
> >>               } else {
> >>                   gic_inject_irq_stop();
> >>               }
> >>               spin_unlock_irq(&gic.lock);
> >>
> >>               spin_lock_irq(&v->arch.vgic.lock);
> >>               list_del_init(&p->inflight);
> >>               spin_unlock_irq(&v->arch.vgic.lock);
> >>
> >>               if ( eoi ) {
> >>                   /* this is not racy because we can't receive another irq of the
> >>                    * same type until we EOI it.  */
> >>                   if ( cpu == smp_processor_id() )
> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
> >>                   else
> >>                       on_selected_cpus(cpumask_of(cpu),
> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> >>               }
> >>
> >>               i++;
> >>           }
> >>       }
> >>
> >>
> >>       Oleksandr Tyshchenko | Embedded Developer
> >>       GlobalLogic
> >>
> >>
> >>
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:35:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:35:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tej-0007Tz-Q4; Thu, 30 Jan 2014 15:35:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8tei-0007Ti-Dv
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:35:52 +0000
Received: from [85.158.139.211:61858] by server-11.bemta-5.messagelabs.com id
	ED/51-23886-5517AE25; Thu, 30 Jan 2014 15:35:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391096143!642768!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29749 invoked from network); 30 Jan 2014 15:35:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:35:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96170312"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:35:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:35:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8teM-0007de-VC;
	Thu, 30 Jan 2014 15:35:30 +0000
Date: Thu, 30 Jan 2014 15:35:25 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Given that we don't deactivate the interrupt (writing to GICC_DIR) until
the guest EOIs it, I can't understand how you manage to get a second
interrupt notification before the guest EOIs the first one.

Do you set GICC_CTL_EOI in GICC_CTLR?
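
In GICv2, setting the EOImode bit in GICC_CTLR splits interrupt completion
into a priority drop (a GICC_EOIR write) and a separate deactivation (a
GICC_DIR write), which is the behaviour the question above hinges on. A
minimal model of that split, as a sketch: the fake_gicc struct and the two
helpers are invented for illustration, and the bit position follows Xen's
GICC_CTL_EOI define of the time (treat it as an assumption).

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of GICv2 completion with EOImode set: a write to GICC_EOIR only
 * drops the running priority, and the interrupt stays active until
 * GICC_DIR is written (in Xen's scheme, when the guest EOIs it).
 * Struct and bit positions are assumptions for illustration. */
#define GICC_CTL_ENABLE  (1u << 0)
#define GICC_CTL_EOI     (1u << 9)   /* EOImode: split priority drop / deactivate */

struct fake_gicc {
    uint32_t ctlr;     /* modelled GICC_CTLR */
    int active;        /* interrupt still active? */
    int prio_dropped;  /* running priority dropped? */
};

static void write_eoir(struct fake_gicc *g)
{
    g->prio_dropped = 1;
    if (!(g->ctlr & GICC_CTL_EOI))
        g->active = 0;           /* combined mode: EOIR also deactivates */
}

static void write_dir(struct fake_gicc *g)
{
    g->active = 0;               /* explicit deactivation */
}
```

Without GICC_CTL_EOI set, write_eoir() alone deactivates the interrupt,
which is exactly the condition under which a second notification could
arrive before the guest's EOI.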

On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
> 
> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > Is it a level or an edge irq?
> >
> > On Wed, 29 Jan 2014, Julien Grall wrote:
> >> Hi,
> >>
> >> It's weird: a physical IRQ should not be injected twice...
> >> Were you able to print the IRQ number?
> >>
> >> In any case, you are using an old version of the interrupt patch series.
> >> Your new error may come from a race condition in this code.
> >>
> >> Can you try the newest version?
> >>
> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
> >>       > difference for xen-unstable (it should make things clearer, if nothing
> >>       > else) but it should fix things for Oleksandr.
> >>
> >>       Unfortunately, it is not enough for stable operation.
> >>
> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
> >>       gic_route_irq_to_guest(). As a result, I no longer see the situation
> >>       that led to the deadlock in on_selected_cpus() (as expected).
> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
> >>       identified where), or I sometimes see traps like the following
> >>       (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
> >>
> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> >>       (XEN) CPU:    1
> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
> >>       (XEN) CPSR:   200001da MODE:Hypervisor
> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> >>       (XEN)
> >>       (XEN)   VTCR_EL2: 80002558
> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
> >>       (XEN)
> >>       (XEN)  SCTLR_EL2: 30cd187f
> >>       (XEN)    HCR_EL2: 00000000000028b5
> >>       (XEN)  TTBR0_EL2: 00000000d2014000
> >>       (XEN)
> >>       (XEN)    ESR_EL2: 00000000
> >>       (XEN)  HPFAR_EL2: 0000000000482110
> >>       (XEN)      HDFAR: fa211190
> >>       (XEN)      HIFAR: 00000000
> >>       (XEN)
> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
> >>       (XEN) Xen call trace:
> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
> >>       (XEN)
> >>
> >>       I am also posting maintenance_interrupt() from my tree:
> >>
> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
> >>       cpu_user_regs *regs)
> >>       {
> >>           int i = 0, virq, pirq;
> >>           uint32_t lr;
> >>           struct vcpu *v = current;
> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >>
> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
> >>                                     64, i)) < 64) {
> >>               struct pending_irq *p, *n;
> >>               int cpu, eoi;
> >>
> >>               cpu = -1;
> >>               eoi = 0;
> >>
> >>               spin_lock_irq(&gic.lock);
> >>               lr = GICH[GICH_LR + i];
> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
> >>
> >>               p = irq_to_pending(v, virq);
> >>               if ( p->desc != NULL ) {
> >>                   p->desc->status &= ~IRQ_INPROGRESS;
> >>                   /* Assume only one pcpu needs to EOI the irq */
> >>                   cpu = p->desc->arch.eoi_cpu;
> >>                   eoi = 1;
> >>                   pirq = p->desc->irq;
> >>               }
> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
> >>               {
> >>                   /* A physical IRQ can't be reinjected */
> >>                   WARN_ON(p->desc != NULL);
> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> >>                   spin_unlock_irq(&gic.lock);
> >>                   i++;
> >>                   continue;
> >>               }
> >>
> >>               GICH[GICH_LR + i] = 0;
> >>               clear_bit(i, &this_cpu(lr_mask));
> >>
> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
> >>                   list_del_init(&n->lr_queue);
> >>                   set_bit(i, &this_cpu(lr_mask));
> >>               } else {
> >>                   gic_inject_irq_stop();
> >>               }
> >>               spin_unlock_irq(&gic.lock);
> >>
> >>               spin_lock_irq(&v->arch.vgic.lock);
> >>               list_del_init(&p->inflight);
> >>               spin_unlock_irq(&v->arch.vgic.lock);
> >>
> >>               if ( eoi ) {
> >>                   /* this is not racy because we can't receive another irq of the
> >>                    * same type until we EOI it.  */
> >>                   if ( cpu == smp_processor_id() )
> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
> >>                   else
> >>                       on_selected_cpus(cpumask_of(cpu),
> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> >>               }
> >>
> >>               i++;
> >>           }
> >>       }
> >>
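
The loop above scans a local 64-bit snapshot of EISR one set bit at a
time. A portable stand-in for find_next_bit() (a simplified sketch, not
the real Xen/Linux helper) makes the shape of that scan concrete, and
shows why the explicit i++ after each hit matters: the snapshot's bits are
never cleared, so without advancing past each hit the handler would rescan
the same LR forever.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for find_next_bit() over a 64-bit mask: return the
 * index of the next set bit at or after 'from', or 64 when none remains
 * (mirroring find_next_bit's "return size when exhausted" convention). */
static int next_set_bit(uint64_t mask, int from)
{
    for (int i = from; i < 64; i++)
        if (mask & (1ULL << i))
            return i;
    return 64;
}
```

Scanning 0x5 from index 0 yields bit 0; restarting from 1 (after the
caller's i++) yields bit 2; restarting from 3 yields 64 and ends the loop.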
> >>
> >>       Oleksandr Tyshchenko | Embedded Developer
> >>       GlobalLogic
> >>
> >>
> >>
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:36:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tfM-0007Xq-8J; Thu, 30 Jan 2014 15:36:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8tfL-0007Xh-JR
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:36:31 +0000
Received: from [85.158.139.211:8510] by server-7.bemta-5.messagelabs.com id
	43/B2-14867-E717AE25; Thu, 30 Jan 2014 15:36:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391096188!627748!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26699 invoked from network); 30 Jan 2014 15:36:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:36:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98138388"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 15:36:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:36:28 -0500
Message-ID: <1391096187.9495.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Thu, 30 Jan 2014 15:36:27 +0000
In-Reply-To: <20140130152538.GA6429@citrix.com>
References: <20140130152538.GA6429@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 15:25 +0000, Joby Poriyath wrote:
> menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
> instead of linux and initrd. Because of this, a RHEL 7 (beta) guest failed
> to boot after installation.
> 
> In addition, menuentry now takes some options as well
> (--class red, --class gnu, etc.).
> 
> V2: Added RHEL 7 grub.cfg in pygrub/examples
> 
> Kindly consider this patch for xen-4.4-RC3 as this breaks
> RHEL 7 guest boot.
> 
> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> ---
>  tools/pygrub/examples/rhel-7.grub2 |  118 ++++++++++++++++++++++++++++++++++++
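
For readers without a RHEL 7 tree at hand, a hypothetical grub2 menuentry
in the shape the patch has to parse; the title, kernel path, and root
device below are invented placeholders, not taken from the example file:

```
menuentry 'Red Hat Enterprise Linux (example)' --class red --class gnu-linux --class gnu --class os {
        linux16 /vmlinuz-example root=/dev/mapper/rhel-root ro
        initrd16 /initramfs-example.img
}
```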

If this is actually rhel 7 beta then I am inclined to rename this as
appropriate while I commit (or if you need to resend then please do so).

If you and Andrew can agree on a suitable description then the patch
itself looks good to me.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:37:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tgJ-0007fC-Mt; Thu, 30 Jan 2014 15:37:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8tgI-0007em-ER
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:37:30 +0000
Received: from [85.158.143.35:16082] by server-2.bemta-4.messagelabs.com id
	8E/DF-10891-8B17AE25; Thu, 30 Jan 2014 15:37:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391096246!1982602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3491 invoked from network); 30 Jan 2014 15:37:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:37:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98138672"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 15:37:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:37:25 -0500
Message-ID: <1391096244.9495.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 30 Jan 2014 15:37:24 +0000
In-Reply-To: <20140130153231.GA32756@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130153231.GA32756@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] libxl: add option for discard support to
 xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 16:32 +0100, Olaf Hering wrote:
> On Thu, Jan 30, Olaf Hering wrote:
> 
> > boolean property "discard_enable" is written to the backend node. An
> 
> This should have been "discard-enable". Will resend with updated commit
> message

I think I can probably remember to do this on commit...

>  once the tree is open again for new features.

...but since it would be worth pinging then anyway I guess you may as
well resend the patch then too.

Ian.
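
As a concrete illustration of what the feature under discussion looks like
from the xl side, a hypothetical guest-config fragment; the `discard`
keyword follows the patch's naming and the device/path are placeholders,
so treat the whole line as an assumption:

```
# illustrative xl disk spec enabling discard (names are placeholders)
disk = [ 'format=raw, vdev=xvda, target=/dev/vg0/guest-disk, discard' ]
```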


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:44:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tnB-00088o-Lf; Thu, 30 Jan 2014 15:44:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8tnA-00088g-Nl; Thu, 30 Jan 2014 15:44:36 +0000
Received: from [193.109.254.147:40074] by server-7.bemta-14.messagelabs.com id
	96/4F-23424-3637AE25; Thu, 30 Jan 2014 15:44:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391096673!937745!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30450 invoked from network); 30 Jan 2014 15:44:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:44:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96173874"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:44:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:44:32 -0500
Message-ID: <1391096671.9495.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Yun Wang <bimingery@gmail.com>
Date: Thu, 30 Jan 2014 15:44:31 +0000
In-Reply-To: <CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
	<CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 09:18 -0700, Yun Wang wrote:
> Here is the PV guest config
> #########################
> name = "centos65.pv"
> bootloader = "/usr/local/bin/pygrub"
> extra = "root=/dev/xvda"
> memory = 4096
> vcpus = 2
> vfb=[ "type=vnc, vncpass=123456" ]
> vif = [ 'mac=00:16:3e:54:02:01, bridge=xenbr0' ]
> disk = [ 'file:/vms/centos65_pv.img, xvda, w']

This will most likely use a qdisk backend, so this guest will have a
qemu instance running even though it is PV.

> ######################################
> 
> 
> The output of "xl -vvv vcpu-set"
> ########################
> 
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-38

But we certainly shouldn't be doing this for such a guest, regardless of
whether a qemu is running for it.

Anthony -- it looks like you forgot about the PV case in 62fe11fb --
libxl_set_vcpuonline should use the xenstored version for such guests.
Can you send a fix, please?

Ian.

> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> 
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
> 
>     "execute": "qmp_capabilities",
> 
>     "id": 1
> 
> }
> 
> '
> 
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> 
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
> 
>     "execute": "cpu-add",
> 
>     "id": 2,
> 
>     "arguments": {
> 
>         "id": 0
> 
>     }
> 
> }
> 
> '
> 
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> 
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an
> error message from QMP server: Not supported
> 
> xc: debug: hypercall buffer: total allocations:9 total releases:9
> 
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> 
> xc: debug: hypercall buffer: cache current size:2
> 
> xc: debug: hypercall buffer: cache hits:7 misses:2 toobig:0
> ######################################
> Again, this issue exists both on Xen-4.3.0 (official release) and
> Xen-4.4.0-rc1-25-g9a80d50
> 
> On Wed, Jan 29, 2014 at 7:45 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Wed, 2014-01-29 at 07:38 -0700, Yun Wang wrote:
> >> So to fix the problem, I need to update the qemu version to version
> >> 1.7 or later?
> >
> > Yes. Or, AIUI, you can use the version of Qemu which is bundled with the
> > Xen releases.
> >
> >> BTW. I had this problem in both pvhvm and pv guest.
> >> Does pv guest rely on qemu also?
> >
> > It does not.
> >
> > If you are having an issue with a PV guest vcpu hotplug then it is not
> > the same underlying issue. Please report it separately with full logs,
> > config info etc.
> >
> > Ian.
> >



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:47:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tpX-0008Ke-Pu; Thu, 30 Jan 2014 15:47:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W8tpV-0008KU-8Z
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:47:02 +0000
Received: from [85.158.137.68:56814] by server-3.bemta-3.messagelabs.com id
	5E/94-14520-4F37AE25; Thu, 30 Jan 2014 15:47:00 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391096816!12353695!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11337 invoked from network); 30 Jan 2014 15:46:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:46:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96174910"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:46:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:46:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1W8tpO-0007qA-N2;
	Thu, 30 Jan 2014 15:46:54 +0000
Message-ID: <52EA73EE.90103@citrix.com>
Date: Thu, 30 Jan 2014 15:46:54 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for
 multiple servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> The legacy 'catch-all' server is always created with id 0. Secondary
> servers will have an id ranging from 1 to a limit set by the toolstack
> via the 'max_emulators' build info field. This defaults to 1 so ordinarily
> no extra special pages are reserved for secondary emulators. It may be
> increased using the secondary_device_emulators parameter in xl.cfg(5).
>
> Because of the re-arrangement of the special pages in a previous patch we
> only need the addition of parameter HVM_PARAM_NR_IOREQ_SERVERS to determine
> the layout of the shared pages for multiple emulators. Guests migrated in
> from hosts without this patch will be lacking the save record which stores
> the new parameter and so the guest is assumed to only have had a single
> emulator.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  docs/man/xl.cfg.pod.5            |    7 +
>  tools/libxc/xc_domain.c          |  175 ++++++++
>  tools/libxc/xc_domain_restore.c  |   20 +
>  tools/libxc/xc_domain_save.c     |   12 +
>  tools/libxc/xc_hvm_build_x86.c   |   25 +-
>  tools/libxc/xenctrl.h            |   41 ++
>  tools/libxc/xenguest.h           |    2 +
>  tools/libxc/xg_save_restore.h    |    1 +
>  tools/libxl/libxl.h              |    8 +
>  tools/libxl/libxl_create.c       |    3 +
>  tools/libxl/libxl_dom.c          |    1 +
>  tools/libxl/libxl_types.idl      |    1 +
>  tools/libxl/xl_cmdimpl.c         |    3 +
>  xen/arch/x86/hvm/hvm.c           |  916 +++++++++++++++++++++++++++++++++++---
>  xen/arch/x86/hvm/io.c            |    2 +-
>  xen/include/asm-x86/hvm/domain.h |   21 +-
>  xen/include/asm-x86/hvm/hvm.h    |    1 +
>  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
>  xen/include/public/hvm/hvm_op.h  |   70 +++
>  xen/include/public/hvm/ioreq.h   |    1 +
>  xen/include/public/hvm/params.h  |    4 +-
>  21 files changed, 1230 insertions(+), 86 deletions(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 9941395..9aa9958 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1277,6 +1277,13 @@ specified, enabling the use of XenServer PV drivers in the guest.
>  This parameter only takes effect when device_model_version=qemu-xen.
>  See F<docs/misc/pci-device-reservations.txt> for more information.
>  
> +=item B<secondary_device_emulators=NUMBER>
> +
> +If a number of secondary device emulators (i.e. in addition to
> +qemu-xen or qemu-xen-traditional) are to be invoked to support the
> +guest then this parameter can be set with the count of how many are
> +to be used. The default value is zero.
> +
>  =back
>  
>  =head2 Device-Model Options
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..c64d15a 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -1246,6 +1246,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
>      return rc;
>  }
>  
> +int xc_hvm_create_ioreq_server(xc_interface *xch,
> +                               domid_t domid,
> +                               ioservid_t *id)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_create_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    *id = arg->id;
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> +                                 domid_t domid,
> +                                 ioservid_t id,
> +                                 xen_pfn_t *pfn,
> +                                 xen_pfn_t *buf_pfn,
> +                                 evtchn_port_t *buf_port)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_get_ioreq_server_info;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    if ( rc != 0 )
> +        goto done;
> +
> +    if ( pfn )
> +        *pfn = arg->pfn;
> +
> +    if ( buf_pfn )
> +        *buf_pfn = arg->buf_pfn;
> +
> +    if ( buf_port )
> +        *buf_port = arg->buf_port;
> +
> +done:
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
> +                                        ioservid_t id, int is_mmio,
> +                                        uint64_t start, uint64_t end)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_io_range_to_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->is_mmio = is_mmio;
> +    arg->start = start;
> +    arg->end = end;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
> +                                            ioservid_t id, int is_mmio,
> +                                            uint64_t start)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_io_range_from_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_unmap_io_range_from_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->is_mmio = is_mmio;
> +    arg->start = start;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
> +                                      ioservid_t id, uint16_t bdf)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_pcidev_to_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_map_pcidev_to_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->bdf = bdf;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
> +                                          ioservid_t id, uint16_t bdf)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_pcidev_from_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_unmap_pcidev_from_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->bdf = bdf;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_destroy_ioreq_server(xc_interface *xch,
> +                                domid_t domid,
> +                                ioservid_t id)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_destroy_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
>  int xc_domain_setdebugging(xc_interface *xch,
>                             uint32_t domid,
>                             unsigned int enable)
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index ca2fb51..305e4b8 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -746,6 +746,7 @@ typedef struct {
>      uint64_t acpi_ioport_location;
>      uint64_t viridian;
>      uint64_t vm_generationid_addr;
> +    uint64_t nr_ioreq_servers;
>  
>      struct toolstack_data_t tdata;
>  } pagebuf_t;
> @@ -996,6 +997,16 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
>          DPRINTF("read generation id buffer address");
>          return pagebuf_get_one(xch, ctx, buf, fd, dom);
>  
> +    case XC_SAVE_ID_HVM_NR_IOREQ_SERVERS:
> +        /* Skip padding 4 bytes then read the number of IOREQ servers. */
> +        if ( RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint32_t)) ||
> +             RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint64_t)) )
> +        {
> +            PERROR("error reading the number of IOREQ servers");
> +            return -1;
> +        }
> +        return pagebuf_get_one(xch, ctx, buf, fd, dom);
> +
>      default:
>          if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
>              ERROR("Max batch size exceeded (%d). Giving up.", count);
> @@ -1755,6 +1766,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>      if (pagebuf.viridian != 0)
>          xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
>  
> +    if ( hvm ) {
> +        int nr_ioreq_servers = pagebuf.nr_ioreq_servers;
> +
> +        if ( nr_ioreq_servers == 0 )
> +            nr_ioreq_servers = 1;
> +
> +        xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS, nr_ioreq_servers);
> +    }
> +
>      if (pagebuf.acpi_ioport_location == 1) {
>          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
>          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
> index 42c4752..3293e29 100644
> --- a/tools/libxc/xc_domain_save.c
> +++ b/tools/libxc/xc_domain_save.c
> @@ -1731,6 +1731,18 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>              PERROR("Error when writing the viridian flag");
>              goto out;
>          }
> +
> +        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVERS;
> +        chunk.data = 0;
> +        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> +                         (unsigned long *)&chunk.data);
> +
> +        if ( (chunk.data != 0) &&
> +             wrexact(io_fd, &chunk, sizeof(chunk)) )
> +        {
> +            PERROR("Error when writing the number of IOREQ servers");
> +            goto out;
> +        }
>      }
>  
>      if ( callbacks != NULL && callbacks->toolstack_save != NULL )
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index f24f2a1..bbe5def 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -45,7 +45,7 @@
>  #define SPECIALPAGE_IDENT_PT 4
>  #define SPECIALPAGE_CONSOLE  5
>  #define SPECIALPAGE_IOREQ    6
> -#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
> +#define NR_SPECIAL_PAGES(n)  SPECIALPAGE_IOREQ + (2 * n) /* ioreq server needs 2 pages */
>  #define special_pfn(x) (0xff000u - (x))
>  
>  static int modules_init(struct xc_hvm_build_args *args,
> @@ -83,7 +83,8 @@ static int modules_init(struct xc_hvm_build_args *args,
>  }
>  
>  static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
> -                           uint64_t mmio_start, uint64_t mmio_size)
> +                           uint64_t mmio_start, uint64_t mmio_size,
> +                           int max_emulators)
>  {
>      struct hvm_info_table *hvm_info = (struct hvm_info_table *)
>          (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
> @@ -111,7 +112,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
>      /* Memory parameters. */
>      hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
>      hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> -    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
> +    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES(max_emulators);
>  
>      /* Finish with the checksum. */
>      for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> @@ -254,6 +255,10 @@ static int setup_guest(xc_interface *xch,
>          stat_1gb_pages = 0;
>      int pod_mode = 0;
>      int claim_enabled = args->claim_enabled;
> +    int max_emulators = args->max_emulators;
> +
> +    if ( max_emulators < 1 )
> +        goto error_out;

Is there a sane upper bound for emulators?

>  
>      if ( nr_pages > target_pages )
>          pod_mode = XENMEMF_populate_on_demand;
> @@ -458,7 +463,8 @@ static int setup_guest(xc_interface *xch,
>                xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
>                HVM_INFO_PFN)) == NULL )
>          goto error_out;
> -    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
> +    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size,
> +                   max_emulators);
>      munmap(hvm_info_page, PAGE_SIZE);
>  
>      /* Allocate and clear special pages. */
> @@ -470,17 +476,18 @@ static int setup_guest(xc_interface *xch,
>              "  STORE:     %"PRI_xen_pfn"\n"
>              "  IDENT_PT:  %"PRI_xen_pfn"\n"
>              "  CONSOLE:   %"PRI_xen_pfn"\n"
> -            "  IOREQ:     %"PRI_xen_pfn"\n",
> -            NR_SPECIAL_PAGES,
> +            "  IOREQ(%02d): %"PRI_xen_pfn"\n",
> +            NR_SPECIAL_PAGES(max_emulators),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> +            max_emulators * 2,
>              (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
>  
> -    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> +    for ( i = 0; i < NR_SPECIAL_PAGES(max_emulators); i++ )
>      {
>          xen_pfn_t pfn = special_pfn(i);
>          rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
> @@ -506,7 +513,9 @@ static int setup_guest(xc_interface *xch,
>      xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
>                       special_pfn(SPECIALPAGE_IOREQ));
>      xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> -                     special_pfn(SPECIALPAGE_IOREQ) - 1);
> +                     special_pfn(SPECIALPAGE_IOREQ) - max_emulators);
> +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> +                     max_emulators);
>  
>      /*
>       * Identity-map page table is required for running with CR0.PG=0 when
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 13f816b..142aaea 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1801,6 +1801,47 @@ void xc_clear_last_error(xc_interface *xch);
>  int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
>  int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
>  
> +/*
> + * IOREQ server API
> + */
> +int xc_hvm_create_ioreq_server(xc_interface *xch,
> +			       domid_t domid,
> +			       ioservid_t *id);
> +
> +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> +				 domid_t domid,
> +				 ioservid_t id,
> +				 xen_pfn_t *pfn,
> +				 xen_pfn_t *buf_pfn,
> +				 evtchn_port_t *buf_port);
> +
> +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch,
> +					domid_t domid,
> +                                        ioservid_t id,
> +					int is_mmio,
> +                                        uint64_t start,
> +					uint64_t end);
> +
> +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
> +					    domid_t domid,
> +                                            ioservid_t id,
> +					    int is_mmio,
> +                                            uint64_t start);
> +
> +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch,
> +				      domid_t domid,
> +                                      ioservid_t id,
> +				      uint16_t bdf);
> +
> +int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch,
> +					  domid_t domid,
> +					  ioservid_t id,
> +					  uint16_t bdf);
> +
> +int xc_hvm_destroy_ioreq_server(xc_interface *xch,
> +				domid_t domid,
> +				ioservid_t id);
> +

There are tab/space issues in this hunk.

>  /* HVM guest pass-through */
>  int xc_assign_device(xc_interface *xch,
>                       uint32_t domid,
> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> index a0e30e1..8930ac0 100644
> --- a/tools/libxc/xenguest.h
> +++ b/tools/libxc/xenguest.h
> @@ -234,6 +234,8 @@ struct xc_hvm_build_args {
>      struct xc_hvm_firmware_module smbios_module;
>      /* Whether to use claim hypercall (1 - enable, 0 - disable). */
>      int claim_enabled;
> +    /* Maximum number of emulators for VM */
> +    int max_emulators;
>  };
>  
>  /**
> diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
> index f859621..5170b7f 100644
> --- a/tools/libxc/xg_save_restore.h
> +++ b/tools/libxc/xg_save_restore.h
> @@ -259,6 +259,7 @@
>  #define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
>  #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
>  #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
> +#define XC_SAVE_ID_HVM_NR_IOREQ_SERVERS -19
>  
>  /*
>  ** We process save/restore/migrate in batches of pages; the below
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..b679957 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -95,6 +95,14 @@
>  #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
>  
>  /*
> + * LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS indicates that the
> + * max_emulators field is present in the hvm sections of
> + * libxl_domain_build_info. This field can be used to reserve
> + * extra special pages for secondary device emulators.
> + */
> +#define LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS 1
> +
> +/*
>   * libxl ABI compatibility
>   *
>   * The only guarantee which libxl makes regarding ABI compatibility
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..cce93d9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -330,6 +330,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>  
>          libxl_defbool_setdefault(&b_info->u.hvm.gfx_passthru, false);
>  
> +        if (b_info->u.hvm.max_emulators < 1)
> +            b_info->u.hvm.max_emulators = 1;
> +
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>          libxl_defbool_setdefault(&b_info->u.pv.e820_host, false);
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 55f74b2..9de06f9 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -637,6 +637,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
>      args.mem_size = (uint64_t)(info->max_memkb - info->video_memkb) << 10;
>      args.mem_target = (uint64_t)(info->target_memkb - info->video_memkb) << 10;
>      args.claim_enabled = libxl_defbool_val(info->claim_mode);
> +    args.max_emulators = info->u.hvm.max_emulators;
>      if (libxl__domain_firmware(gc, info, &args)) {
>          LOG(ERROR, "initializing domain firmware failed");
>          goto out;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b707159 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -372,6 +372,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>                                         ("xen_platform_pci", libxl_defbool),
>                                         ("usbdevice_list",   libxl_string_list),
>                                         ("vendor_device",    libxl_vendor_device),
> +                                       ("max_emulators",    integer),
>                                         ])),
>                   ("pv", Struct(None, [("kernel", string),
>                                        ("slack_memkb", MemKB),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index aff6f90..c65f4f4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1750,6 +1750,9 @@ skip_vfb:
>  
>              b_info->u.hvm.vendor_device = d;
>          }
> + 
> +        if (!xlu_cfg_get_long (config, "secondary_device_emulators", &l, 0))
> +            b_info->u.hvm.max_emulators = l + 1;
>      }
>  
>      xlu_cfg_destroy(config);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d9874fb..5f9e728 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -379,21 +379,23 @@ static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
>  void hvm_do_resume(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> +    struct list_head *entry, *next;
>  
>      check_wakeup_from_wait();
>  
>      if ( is_hvm_vcpu(v) )
>          pt_restore_timer(v);
>  
> -    s = v->arch.hvm_vcpu.ioreq_server;
> -    v->arch.hvm_vcpu.ioreq_server = NULL;
> -
> -    if ( s )
> +    list_for_each_safe ( entry, next, &v->arch.hvm_vcpu.ioreq_server_list )
>      {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                vcpu_list_entry[v->vcpu_id]);
>          ioreq_t *p = get_ioreq(s, v->vcpu_id);
>  
>          hvm_wait_on_io(d, p);
> +
> +        list_del_init(entry);
>      }
>  
>      /* Inject pending hw/sw trap */
> @@ -531,6 +533,83 @@ static int hvm_print_line(
>      return X86EMUL_OKAY;
>  }
>  
> +static int hvm_access_cf8(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *curr = current;
> +    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
> +    int rc;
> +
> +    BUG_ON(port < 0xcf8);
> +    port -= 0xcf8;
> +
> +    spin_lock(&hd->pci_lock);
> +
> +    if ( dir == IOREQ_WRITE )
> +    {
> +        switch ( bytes )
> +        {
> +        case 4:
> +            hd->pci_cf8 = *val;
> +            break;
> +
> +        case 2:
> +        {
> +            uint32_t mask = 0xffff << (port * 8);
> +            uint32_t subval = *val << (port * 8);
> +
> +            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
> +                          (subval & mask);
> +            break;
> +        }
> +            
> +        case 1:
> +        {
> +            uint32_t mask = 0xff << (port * 8);
> +            uint32_t subval = *val << (port * 8);
> +
> +            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
> +                          (subval & mask);
> +            break;
> +        }
> +
> +        default:
> +            break;
> +        }
> +
> +        /* We always need to fall through to the catch all emulator */
> +        rc = X86EMUL_UNHANDLEABLE;
> +    }
> +    else
> +    {
> +        switch ( bytes )
> +        {
> +        case 4:
> +            *val = hd->pci_cf8;
> +            rc = X86EMUL_OKAY;
> +            break;
> +
> +        case 2:
> +            *val = (hd->pci_cf8 >> (port * 8)) & 0xffff;
> +            rc = X86EMUL_OKAY;
> +            break;
> +            
> +        case 1:
> +            *val = (hd->pci_cf8 >> (port * 8)) & 0xff;
> +            rc = X86EMUL_OKAY;
> +            break;
> +
> +        default:
> +            rc = X86EMUL_UNHANDLEABLE;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&hd->pci_lock);
> +
> +    return rc;
> +}
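
As an aside on the sub-dword cases above: for byte offsets >= 2 the signed
0xffff/0xff constants get shifted into the sign bit, which is arguably
undefined behaviour for int; unsigned constants would be safer. The
mask/shift merge itself can be checked in isolation. A minimal standalone
sketch (function names are mine, not part of the patch), using unsigned
constants:

```c
#include <assert.h>
#include <stdint.h>

/* Merge a 1-, 2- or 4-byte write at byte offset 'off' into the 32-bit
 * CF8 latch, mirroring the mask/shift logic in hvm_access_cf8(). */
static uint32_t cf8_write(uint32_t cf8, unsigned int off,
                          unsigned int bytes, uint32_t val)
{
    uint32_t mask = ((bytes == 4) ? 0xffffffffu :
                     (bytes == 2) ? 0xffffu : 0xffu) << (off * 8);

    return (cf8 & ~mask) | ((val << (off * 8)) & mask);
}

/* Extract a 1-, 2- or 4-byte read at byte offset 'off'. */
static uint32_t cf8_read(uint32_t cf8, unsigned int off, unsigned int bytes)
{
    uint32_t mask = (bytes == 4) ? 0xffffffffu :
                    (bytes == 2) ? 0xffffu : 0xffu;

    return (cf8 >> (off * 8)) & mask;
}
```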
> +
>  static int handle_pvh_io(
>      int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>  {
> @@ -590,6 +669,8 @@ done:
>  
>  static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
>  {
> +    list_del_init(&s->vcpu_list_entry[v->vcpu_id]);
> +
>      if ( v->vcpu_id == 0 )
>      {
>          if ( s->buf_ioreq_evtchn >= 0 )
> @@ -606,7 +687,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
>      }
>  }
>  
> -static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> +static int hvm_create_ioreq_server(struct domain *d, ioservid_t id, domid_t domid)
>  {
>      struct hvm_ioreq_server *s;
>      int i;
> @@ -614,34 +695,47 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
>      struct vcpu *v;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -EEXIST;
> -    if ( d->arch.hvm_domain.ioreq_server != NULL )
> -        goto fail_exist;
> +    list_for_each_entry ( s, 
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto fail_exist;
> +    }
>  
> -    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
>  
>      rc = -ENOMEM;
>      s = xzalloc(struct hvm_ioreq_server);
>      if ( !s )
>          goto fail_alloc;
>  
> +    s->id = id;
>      s->domain = d;
>      s->domid = domid;
> +    INIT_LIST_HEAD(&s->domain_list_entry);
>  
>      for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +    {
>          s->ioreq_evtchn[i] = -1;
> +        INIT_LIST_HEAD(&s->vcpu_list_entry[i]);
> +    }
>      s->buf_ioreq_evtchn = -1;
>  
>      /* Initialize shared pages */
> -    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
>  
>      hvm_init_ioreq_page(s, 0);
>      if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
>          goto fail_set_ioreq;
>  
> -    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
>  
>      hvm_init_ioreq_page(s, 1);
>      if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> @@ -653,7 +747,8 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
>              goto fail_add_vcpu;
>      }
>  
> -    d->arch.hvm_domain.ioreq_server = s;
> +    list_add(&s->domain_list_entry,
> +             &d->arch.hvm_domain.ioreq_server_list);
>  
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> @@ -673,22 +768,30 @@ fail_exist:
>      return rc;
>  }
>  
> -static void hvm_destroy_ioreq_server(struct domain *d)
> +static void hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>  {
> -    struct hvm_ioreq_server *s;
> +    struct hvm_ioreq_server *s, *next;
>      struct vcpu *v;
>  
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +    list_for_each_entry_safe ( s,
> +                               next,
> +                               &d->arch.hvm_domain.ioreq_server_list,
> +                               domain_list_entry)
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> -    if ( !s )
> -        goto done;
> +    goto done;
> +
> +found:
> +    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
>  
>      domain_pause(d);
>  
> -    d->arch.hvm_domain.ioreq_server = NULL;
> +    list_del_init(&s->domain_list_entry);
>  
>      for_each_vcpu ( d, v )
>          hvm_ioreq_server_remove_vcpu(s, v);
> @@ -704,21 +807,186 @@ done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  }
>  
> -static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> +static int hvm_get_ioreq_server_buf_port(struct domain *d, ioservid_t id, evtchn_port_t *port)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -ENOENT;
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( s->id == id )
> +        {
> +            *port = s->buf_ioreq_evtchn;
> +            rc = 0;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_get_ioreq_server_pfn(struct domain *d, ioservid_t id, int buf, xen_pfn_t *pfn)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -ENOENT;
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( s->id == id )
> +        {
> +            if ( buf )
> +                *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
> +            else
> +                *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
> +
> +            rc = 0;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
> +                                            int is_mmio, uint64_t start, uint64_t end)
>  {
>      struct hvm_ioreq_server *s;
> +    struct hvm_io_range *x;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    x = xmalloc(struct hvm_io_range);
> +    if ( x == NULL )
> +        return -ENOMEM;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    rc = -ENOENT;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
> +
> +    goto fail;
> +
> +found:
> +    x->start = start;
> +    x->end = end;
> +
> +    if ( is_mmio )
> +    {
> +        x->next = s->mmio_range_list;
> +        s->mmio_range_list = x;
> +    }
> +    else
> +    {
> +        x->next = s->portio_range_list;
> +        s->portio_range_list = x;
> +    }
> +
> +    gdprintk(XENLOG_DEBUG, "%d:%d: +%s %"PRIX64" - %"PRIX64"\n",
> +             d->domain_id,
> +             s->id,
> +             ( is_mmio ) ? "MMIO" : "PORTIO",
> +             x->start,
> +             x->end);
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    xfree(x);
> +
> +    return rc;
> +}
> +
> +static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
> +                                                int is_mmio, uint64_t start)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct hvm_io_range *x, **xp;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    *port = s->buf_ioreq_evtchn;
> -    rc = 0;
> +    goto done;
> +
> +found:
> +    if ( is_mmio )
> +    {
> +        x = s->mmio_range_list;
> +        xp = &s->mmio_range_list;
> +    }
> +    else
> +    {
> +        x = s->portio_range_list;
> +        xp = &s->portio_range_list;
> +    }
> +
> +    while ( (x != NULL) && (start != x->start) )
> +    {
> +        xp = &x->next;
> +        x = x->next;
> +    }
> +
> +    if ( (x != NULL) )
> +    {
> +        gdprintk(XENLOG_DEBUG, "%d:%d: -%s %"PRIX64" - %"PRIX64"\n",
> +                 d->domain_id,
> +                 s->id,
> +                 ( is_mmio ) ? "MMIO" : "PORTIO",
> +                 x->start,
> +                 x->end);
> +
> +        *xp = x->next;
> +        xfree(x);
> +        rc = 0;
> +    }
>  
>  done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> @@ -726,25 +994,98 @@ done:
>      return rc;
>  }
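
For what it's worth, the unmap path above walks the singly-linked range
list with a pointer-to-pointer, so unlinking the head needs no special
case. A standalone sketch of the idiom (the struct is a simplified
stand-in for struct hvm_io_range, and -1 stands in for -ENOENT):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct io_range {
    uint64_t start, end;
    struct io_range *next;
};

static struct io_range *range_push(struct io_range *head,
                                   uint64_t start, uint64_t end)
{
    struct io_range *x = malloc(sizeof(*x));

    x->start = start;
    x->end = end;
    x->next = head;
    return x;
}

/* Unlink and free the entry whose start matches, as in
 * hvm_unmap_io_range_from_ioreq_server(). */
static int range_list_remove(struct io_range **head, uint64_t start)
{
    struct io_range **xp = head, *x = *head;

    while ( (x != NULL) && (start != x->start) )
    {
        xp = &x->next;
        x = x->next;
    }

    if ( x == NULL )
        return -1;

    *xp = x->next;  /* works for head and interior nodes alike */
    free(x);
    return 0;
}
```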
>  
> -static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> +static int hvm_map_pcidev_to_ioreq_server(struct domain *d, ioservid_t id,
> +                                          uint16_t bdf)
>  {
>      struct hvm_ioreq_server *s;
> +    struct hvm_pcidev *x;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    x = xmalloc(struct hvm_pcidev);
> +    if ( x == NULL )
> +        return -ENOMEM;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    rc = -ENOENT;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
> +
> +    goto fail;
> +
> +found:
> +    x->bdf = bdf;
> +
> +    x->next = s->pcidev_list;
> +    s->pcidev_list = x;
> +
> +    gdprintk(XENLOG_DEBUG, "%d:%d: +PCIDEV %04X\n",
> +             d->domain_id,
> +             s->id,
> +             x->bdf);
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    xfree(x);
> +
> +    return rc;
> +}
> +
> +static int hvm_unmap_pcidev_from_ioreq_server(struct domain *d, ioservid_t id,
> +                                              uint16_t bdf)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct hvm_pcidev *x, **xp;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    if ( buf )
> -        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> -    else
> -        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +    goto done;
>  
> -    rc = 0;
> +found:
> +    x = s->pcidev_list;
> +    xp = &s->pcidev_list;
> +
> +    while ( (x != NULL) && (bdf != x->bdf) )
> +    {
> +        xp = &x->next;
> +        x = x->next;
> +    }
> +    if ( (x != NULL) )
> +    {
> +        gdprintk(XENLOG_DEBUG, "%d:%d: -PCIDEV %04X\n",
> +                 d->domain_id,
> +                 s->id,
> +                 x->bdf);
> +
> +         *xp = x->next;
> +        xfree(x);
> +        rc = 0;
> +    }
>  
>  done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> @@ -752,6 +1093,73 @@ done:
>      return rc;
>  }
>  
> +static int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> +            goto fail;
> +    }
> +        
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
> +{
> +    struct list_head *entry;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
> +
> +static void hvm_destroy_all_ioreq_servers(struct domain *d)
> +{
> +    ioservid_t id;
> +
> +    for ( id = 0;
> +          id < d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
> +          id++ )
> +        hvm_destroy_ioreq_server(d, id);
> +}
> +
>  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>                                       int *p_port)
>  {
> @@ -767,21 +1175,30 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>      return 0;
>  }
>  
> -static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
> +static int hvm_set_ioreq_server_domid(struct domain *d, ioservid_t id, domid_t domid)
>  {
>      struct hvm_ioreq_server *s;
>      struct vcpu *v;
>      int rc = 0;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
>      domain_pause(d);
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    goto done;
>  
> +found:
>      rc = 0;
>      if ( s->domid == domid )
>          goto done;
> @@ -838,7 +1255,9 @@ int hvm_domain_initialise(struct domain *d)
>  
>      }
>  
> +    INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server_list);
>      spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
> +    spin_lock_init(&d->arch.hvm_domain.pci_lock);
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>  
> @@ -880,6 +1299,7 @@ int hvm_domain_initialise(struct domain *d)
>      rtc_init(d);
>  
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> +    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> @@ -910,7 +1330,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_destroy_ioreq_server(d);
> +    hvm_destroy_all_ioreq_servers(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1422,13 +1842,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  {
>      int rc;
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
>  
>      hvm_asid_flush_vcpu(v);
>  
>      spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
>      INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
>  
> +    INIT_LIST_HEAD(&v->arch.hvm_vcpu.ioreq_server_list);
> +
>      rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
>      if ( rc != 0 )
>          goto fail1;
> @@ -1465,16 +1886,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> -    if ( s )
> -    {
> -        rc = hvm_ioreq_server_add_vcpu(s, v);
> -        if ( rc < 0 )
> -            goto fail6;
> -    }
> +    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
> +    if ( rc < 0 )
> +        goto fail6;
>  
>      if ( v->vcpu_id == 0 )
>      {
> @@ -1510,14 +1924,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> -
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    if ( s )
> -        hvm_ioreq_server_remove_vcpu(s, v);
> +    hvm_all_ioreq_servers_remove_vcpu(d, v);
>  
>      nestedhvm_vcpu_destroy(v);
>  
> @@ -1556,6 +1964,101 @@ void hvm_vcpu_down(struct vcpu *v)
>      }
>  }
>  
> +static struct hvm_ioreq_server *hvm_select_ioreq_server(struct vcpu *v, ioreq_t *p)
> +{
> +#define BDF(cf8) (((cf8) & 0x00ffff00) >> 8)
> +
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +    uint8_t type;
> +    uint64_t addr;
> +
> +    if ( p->type == IOREQ_TYPE_PIO &&
> +         (p->addr & ~3) == 0xcfc )
> +    { 
> +        /* PCI config data cycle */
> +        type = IOREQ_TYPE_PCI_CONFIG;
> +
> +        spin_lock(&d->arch.hvm_domain.pci_lock);
> +        addr = d->arch.hvm_domain.pci_cf8 + (p->addr & 3);
> +        spin_unlock(&d->arch.hvm_domain.pci_lock);
> +    }
> +    else
> +    {
> +        type = p->type;
> +        addr = p->addr;
> +    }
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    switch ( type )
> +    {
> +    case IOREQ_TYPE_COPY:
> +    case IOREQ_TYPE_PIO:
> +    case IOREQ_TYPE_PCI_CONFIG:
> +        break;
> +    default:
> +        goto done;
> +    }
> +
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        switch ( type )
> +        {
> +            case IOREQ_TYPE_COPY:
> +            case IOREQ_TYPE_PIO: {
> +                struct hvm_io_range *x;
> +
> +                x = (type == IOREQ_TYPE_COPY) ?
> +                    s->mmio_range_list :
> +                    s->portio_range_list;
> +
> +                for ( ; x; x = x->next )
> +                {
> +                    if ( (addr >= x->start) && (addr <= x->end) )
> +                        goto found;
> +                }
> +                break;
> +            }
> +            case IOREQ_TYPE_PCI_CONFIG: {
> +                struct hvm_pcidev *x;
> +
> +                x = s->pcidev_list;
> +
> +                for ( ; x; x = x->next )
> +                {
> +                    if ( BDF(addr) == x->bdf ) {
> +                        p->type = type;
> +                        p->addr = addr;
> +                        goto found;
> +                    }
> +                }
> +                break;
> +            }
> +        }
> +    }
> +
> +done:
> +    /* The catch-all server has id 0 */
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == 0 )
> +            goto found;
> +    }
> +
> +    s = NULL;
> +
> +found:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    return s;
> +
> +#undef BDF
> +}
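
The BDF() macro above takes bits 23:8 of the latched CF8 value, i.e. the
bus/device/function of a Configuration Mechanism #1 address. A small
sketch of the decode (helper names are mine), assuming the standard CF8
layout of enable bit 31, bus 23:16, device 15:11, function 10:8:

```c
#include <assert.h>
#include <stdint.h>

/* Bits 23:8 of CF8, packed bus:dev:fn, as in the patch's BDF() macro. */
static uint16_t cf8_to_bdf(uint32_t cf8)
{
    return (cf8 & 0x00ffff00u) >> 8;
}

static uint8_t bdf_bus(uint16_t bdf) { return bdf >> 8; }
static uint8_t bdf_dev(uint16_t bdf) { return (bdf >> 3) & 0x1f; }
static uint8_t bdf_fn(uint16_t bdf)  { return bdf & 0x07; }
```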
> +
>  int hvm_buffered_io_send(ioreq_t *p)
>  {
>      struct vcpu *v = current;
> @@ -1570,10 +2073,7 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> +    s = hvm_select_ioreq_server(v, p);
>      if ( !s )
>          return 0;
>  
> @@ -1661,7 +2161,9 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
>          return 0;
>      }
>  
> -    v->arch.hvm_vcpu.ioreq_server = s;
> +    ASSERT(list_empty(&s->vcpu_list_entry[v->vcpu_id]));
> +    list_add(&s->vcpu_list_entry[v->vcpu_id],
> +             &v->arch.hvm_vcpu.ioreq_server_list); 
>  
>      p->dir = proto_p->dir;
>      p->data_is_ptr = proto_p->data_is_ptr;
> @@ -1686,24 +2188,42 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
>  {
> -    struct domain *d = v->domain;
>      struct hvm_ioreq_server *s;
>  
> -    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> +    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
>  
>      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
>          return 0;
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> +    s = hvm_select_ioreq_server(v, p);
>      if ( !s )
>          return 0;
>  
>      return hvm_send_assist_req_to_server(s, v, p);
>  }
>  
> +void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p)
> +{
> +    struct domain *d = v->domain;
> +    struct list_head *entry;
> +
> +    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        (void) hvm_send_assist_req_to_server(s, v, p);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
> +
>  void hvm_hlt(unsigned long rflags)
>  {
>      struct vcpu *curr = current;
> @@ -4286,6 +4806,215 @@ static int hvmop_flush_tlb_all(void)
>      return 0;
>  }
>  
> +static int hvmop_create_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
> +{
> +    struct domain *curr_d = current->domain;
> +    xen_hvm_create_ioreq_server_t op;
> +    struct domain *d;
> +    ioservid_t id;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = -ENOSPC;
> +    for ( id = 1;
> +          id <  d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
> +          id++ )
> +    {
> +        rc = hvm_create_ioreq_server(d, id, curr_d->domain_id);
> +        if ( rc == -EEXIST )
> +            continue;
> +
> +        break;
> +    }
> +
> +    if ( rc == -EEXIST )
> +        rc = -ENOSPC;
> +
> +    if ( rc < 0 )
> +        goto out;
> +
> +    op.id = id;
> +
> +    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
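
The allocation loop above claims the first free id in
[1, HVM_PARAM_NR_IOREQ_SERVERS), keeping id 0 for the default
(catch-all) server, and collapses an -EEXIST from every slot into
-ENOSPC. A table-based sketch of that policy (the array and error
constants are stand-ins, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_SERVERS 4   /* stand-in for HVM_PARAM_NR_IOREQ_SERVERS */
#define ERR_EXIST  (-1)
#define ERR_NOSPC  (-2)

/* Claim the lowest free secondary id; id 0 is reserved for the
 * catch-all emulator, mirroring hvmop_create_ioreq_server(). */
static int claim_server_id(bool in_use[NR_SERVERS])
{
    int id, rc = ERR_NOSPC;

    for ( id = 1; id < NR_SERVERS; id++ )
    {
        if ( in_use[id] )
        {
            rc = ERR_EXIST;  /* slot taken: try the next one */
            continue;
        }

        in_use[id] = true;
        return id;
    }

    /* every slot was taken: report no space rather than -EEXIST */
    return (rc == ERR_EXIST) ? ERR_NOSPC : rc;
}
```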
> +
> +static int hvmop_get_ioreq_server_info(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
> +{
> +    xen_hvm_get_ioreq_server_info_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 0, &op.pfn)) < 0 )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 1, &op.buf_pfn)) < 0 )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_buf_port(d, op.id, &op.buf_port)) < 0 )
> +        goto out;
> +
> +    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_map_io_range_to_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_io_range_to_ioreq_server_t) uop)
> +{
> +    xen_hvm_map_io_range_to_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.is_mmio,
> +                                          op.start, op.end);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_unmap_io_range_from_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_io_range_from_ioreq_server_t) uop)
> +{
> +    xen_hvm_unmap_io_range_from_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.is_mmio,
> +                                              op.start);
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_map_pcidev_to_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_pcidev_to_ioreq_server_t) uop)
> +{
> +    xen_hvm_map_pcidev_to_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_map_pcidev_to_ioreq_server(d, op.id, op.bdf);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_unmap_pcidev_from_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_pcidev_from_ioreq_server_t) uop)
> +{
> +    xen_hvm_unmap_pcidev_from_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_unmap_pcidev_from_ioreq_server(d, op.id, op.bdf);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_destroy_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
> +{
> +    xen_hvm_destroy_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    hvm_destroy_ioreq_server(d, op.id);
> +    rc = 0;
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -4294,6 +5023,41 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      switch ( op )
>      {
> +    case HVMOP_create_ioreq_server:
> +        rc = hvmop_create_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_get_ioreq_server_info:
> +        rc = hvmop_get_ioreq_server_info(
> +            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
> +        break;
> +    
> +    case HVMOP_map_io_range_to_ioreq_server:
> +        rc = hvmop_map_io_range_to_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_map_io_range_to_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_unmap_io_range_from_ioreq_server:
> +        rc = hvmop_unmap_io_range_from_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_unmap_io_range_from_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_map_pcidev_to_ioreq_server:
> +        rc = hvmop_map_pcidev_to_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_map_pcidev_to_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_unmap_pcidev_from_ioreq_server:
> +        rc = hvmop_unmap_pcidev_from_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_unmap_pcidev_from_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_destroy_ioreq_server:
> +        rc = hvmop_destroy_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
> +        break;
> +    
>      case HVMOP_set_param:
>      case HVMOP_get_param:
>      {
> @@ -4382,9 +5146,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = hvm_create_ioreq_server(d, a.value);
> +                rc = hvm_create_ioreq_server(d, 0, a.value);
>                  if ( rc == -EEXIST )
> -                    rc = hvm_set_ioreq_server_domid(d, a.value);
> +                    rc = hvm_set_ioreq_server_domid(d, 0, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4449,6 +5213,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value > SHUTDOWN_MAX )
>                      rc = -EINVAL;
>                  break;
> +            case HVM_PARAM_NR_IOREQ_SERVERS:
> +                if ( d == current->domain )
> +                    rc = -EPERM;
> +                break;

Is this correct? Security-wise, it should be restricted more.

Having said that, I can't see anything good to come from being able to
change this value on the fly.  Is it possible to make this a domain
creation parameter instead?
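
For illustration, a write-once, toolstack-only restriction might look
like the sketch below. This is a standalone model, not actual Xen code;
the struct, function name, and error-code choices are mine:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the proposed restriction; not Xen's real state. */
struct dom_model {
    bool is_self;            /* caller is the target domain itself */
    bool creation_finished;  /* set once the domain starts running */
    uint64_t nr_ioreq_servers;
};

/* Only a remote (toolstack) caller may set the value, and only during
 * domain creation - effectively making it a creation-time parameter. */
static int set_nr_ioreq_servers(struct dom_model *d, uint64_t v)
{
    if (d->is_self)
        return -EPERM;       /* the guest must not raise its own limit */
    if (d->creation_finished)
        return -EBUSY;       /* no changing the value on the fly */
    d->nr_ioreq_servers = v;
    return 0;
}
```

That keeps the -EPERM check above but also closes off post-creation
changes, which is the part I can't see a use for.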

>              }
>  
>              if ( rc == 0 ) 
> @@ -4483,7 +5251,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              case HVM_PARAM_BUFIOREQ_PFN:
>              case HVM_PARAM_BUFIOREQ_EVTCHN:
>                  /* May need to create server */
> -                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> +                rc = hvm_create_ioreq_server(d, 0, curr_d->domain_id);
>                  if ( rc != 0 && rc != -EEXIST )
>                      goto param_fail;
>  
> @@ -4492,7 +5260,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_IOREQ_PFN: {
>                      xen_pfn_t pfn;
>  
> -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 0, &pfn)) < 0 )
>                          goto param_fail;
>  
>                      a.value = pfn;
> @@ -4501,7 +5269,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_BUFIOREQ_PFN: {
>                      xen_pfn_t pfn;
>  
> -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 1, &pfn)) < 0 )
>                          goto param_fail;
>  
>                      a.value = pfn;
> @@ -4510,7 +5278,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_BUFIOREQ_EVTCHN: {
>                      evtchn_port_t port;
>  
> -                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, 0, &port)) < 0 )
>                          goto param_fail;
>  
>                      a.value = port;
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index 576641c..a0d76b2 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -78,7 +78,7 @@ void send_invalidate_req(void)
>      p->dir = IOREQ_WRITE;
>      p->data = ~0UL; /* flush all */
>  
> -    (void)hvm_send_assist_req(v, p);
> +    hvm_broadcast_assist_req(v, p);
>  }
>  
>  int handle_mmio(void)
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index e750ef0..93dcec1 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -41,19 +41,38 @@ struct hvm_ioreq_page {
>      void *va;
>  };
>  
> +struct hvm_io_range {
> +    struct hvm_io_range *next;
> +    uint64_t            start, end;
> +};	
> +
> +struct hvm_pcidev {
> +    struct hvm_pcidev *next;
> +    uint16_t          bdf;
> +};	
> +
>  struct hvm_ioreq_server {
> +    struct list_head       domain_list_entry;
> +    struct list_head       vcpu_list_entry[MAX_HVM_VCPUS];

Given that this has to be initialised anyway, would it be better to have
it dynamically sized based on d->max_vcpus, which is almost always far
smaller than MAX_HVM_VCPUS?
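
Something along these lines is what I mean - a standalone sketch with
stubbed list primitives and calloc standing in for Xen's allocator, not
actual Xen code:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-ins for Xen's list primitives, for illustration only. */
struct list_head { struct list_head *next, *prev; };
static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

/* Instead of embedding MAX_HVM_VCPUS entries, size the per-vcpu array
 * by the domain's actual vcpu count using a flexible array member. */
struct ioreq_server_model {
    unsigned int nr_vcpus;
    struct list_head vcpu_list_entry[];  /* one entry per actual vcpu */
};

/* Allocate and initialise with max_vcpus entries (d->max_vcpus in Xen). */
static struct ioreq_server_model *server_alloc(unsigned int max_vcpus)
{
    struct ioreq_server_model *s =
        calloc(1, sizeof(*s) + max_vcpus * sizeof(s->vcpu_list_entry[0]));

    if (!s)
        return NULL;

    s->nr_vcpus = max_vcpus;
    for (unsigned int i = 0; i < max_vcpus; i++)
        INIT_LIST_HEAD(&s->vcpu_list_entry[i]);

    return s;
}
```

For a typical small guest that shrinks the per-server footprint from
MAX_HVM_VCPUS entries down to a handful.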

~Andrew

> +    ioservid_t             id;
>      struct domain          *domain;
>      domid_t                domid;
>      struct hvm_ioreq_page  ioreq;
>      int                    ioreq_evtchn[MAX_HVM_VCPUS];
>      struct hvm_ioreq_page  buf_ioreq;
>      int                    buf_ioreq_evtchn;
> +    struct hvm_io_range    *mmio_range_list;
> +    struct hvm_io_range    *portio_range_list;
> +    struct hvm_pcidev      *pcidev_list;
>  };
>  
>  struct hvm_domain {
> -    struct hvm_ioreq_server *ioreq_server;
> +    struct list_head        ioreq_server_list;
>      spinlock_t              ioreq_server_lock;
>  
> +    uint32_t                pci_cf8;
> +    spinlock_t              pci_lock;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 4e8fee8..1c3854f 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -225,6 +225,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
>  void destroy_ring_for_helper(void **_va, struct page_info *page);
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
> +void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p);
>  
>  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
>  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
> index 4c9d7ee..211ebfd 100644
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -138,7 +138,7 @@ struct hvm_vcpu {
>      spinlock_t          tm_lock;
>      struct list_head    tm_list;
>  
> -    struct hvm_ioreq_server *ioreq_server;
> +    struct list_head    ioreq_server_list;
>  
>      bool_t              flag_dr_dirty;
>      bool_t              debug_state_latch;
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index a9aab4b..6b31189 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -23,6 +23,7 @@
>  
>  #include "../xen.h"
>  #include "../trace.h"
> +#include "../event_channel.h"
>  
>  /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
>  #define HVMOP_set_param           0
> @@ -270,6 +271,75 @@ struct xen_hvm_inject_msi {
>  typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
>  
> +typedef uint32_t ioservid_t;
> +
> +DEFINE_XEN_GUEST_HANDLE(ioservid_t);
> +
> +#define HVMOP_create_ioreq_server 17
> +struct xen_hvm_create_ioreq_server {
> +    domid_t domid;  /* IN - domain to be serviced */
> +    ioservid_t id;  /* OUT - server id */
> +};
> +typedef struct xen_hvm_create_ioreq_server xen_hvm_create_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_create_ioreq_server_t);
> +
> +#define HVMOP_get_ioreq_server_info 18
> +struct xen_hvm_get_ioreq_server_info {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - server id */
> +    xen_pfn_t pfn;          /* OUT - ioreq pfn */
> +    xen_pfn_t buf_pfn;      /* OUT - buf ioreq pfn */
> +    evtchn_port_t buf_port; /* OUT - buf ioreq port */
> +};
> +typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
> +
> +#define HVMOP_map_io_range_to_ioreq_server 19
> +struct xen_hvm_map_io_range_to_ioreq_server {
> +    domid_t domid;                  /* IN - domain to be serviced */
> +    ioservid_t id;                  /* IN - handle from HVMOP_register_ioreq_server */
> +    int is_mmio;                    /* IN - MMIO or port IO? */
> +    uint64_aligned_t start, end;    /* IN - inclusive start and end of range */
> +};
> +typedef struct xen_hvm_map_io_range_to_ioreq_server xen_hvm_map_io_range_to_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_io_range_to_ioreq_server_t);
> +
> +#define HVMOP_unmap_io_range_from_ioreq_server 20
> +struct xen_hvm_unmap_io_range_from_ioreq_server {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - handle from HVMOP_register_ioreq_server */
> +    uint8_t is_mmio;        /* IN - MMIO or port IO? */
> +    uint64_aligned_t start; /* IN - start address of the range to remove */
> +};
> +typedef struct xen_hvm_unmap_io_range_from_ioreq_server xen_hvm_unmap_io_range_from_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_io_range_from_ioreq_server_t);
> +
> +#define HVMOP_map_pcidev_to_ioreq_server 21
> +struct xen_hvm_map_pcidev_to_ioreq_server {
> +    domid_t domid;      /* IN - domain to be serviced */
> +    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
> +    uint16_t bdf;       /* IN - PCI bus/dev/func */
> +};
> +typedef struct xen_hvm_map_pcidev_to_ioreq_server xen_hvm_map_pcidev_to_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_pcidev_to_ioreq_server_t);
> +
> +#define HVMOP_unmap_pcidev_from_ioreq_server 22
> +struct xen_hvm_unmap_pcidev_from_ioreq_server {
> +    domid_t domid;      /* IN - domain to be serviced */
> +    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
> +    uint16_t bdf;       /* IN - PCI bus/dev/func */
> +};
> +typedef struct xen_hvm_unmap_pcidev_from_ioreq_server xen_hvm_unmap_pcidev_from_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_pcidev_from_ioreq_server_t);
> +
> +#define HVMOP_destroy_ioreq_server 23
> +struct xen_hvm_destroy_ioreq_server {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - server id */
> +};
> +typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
>  #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index f05d130..e84fa75 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -34,6 +34,7 @@
>  
>  #define IOREQ_TYPE_PIO          0 /* pio */
>  #define IOREQ_TYPE_COPY         1 /* mmio ops */
> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config ops */
>  #define IOREQ_TYPE_TIMEOFFSET   7
>  #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
>  
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index 517a184..4109b11 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -145,6 +145,8 @@
>  /* SHUTDOWN_* action in case of a triple fault */
>  #define HVM_PARAM_TRIPLE_FAULT_REASON 31
>  
> -#define HVM_NR_PARAMS          32
> +#define HVM_PARAM_NR_IOREQ_SERVERS 32
> +
> +#define HVM_NR_PARAMS          33
>  
>  #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:47:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:47:05 +0000
Message-ID: <52EA73EE.90103@citrix.com>
Date: Thu, 30 Jan 2014 15:46:54 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for
 multiple servers
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
Sender: xen-devel-bounces@lists.xen.org

On 30/01/14 14:19, Paul Durrant wrote:
> The legacy 'catch-all' server is always created with id 0. Secondary
> servers will have an id ranging from 1 to a limit set by the toolstack
> via the 'max_emulators' build info field. This defaults to 1 so ordinarily
> no extra special pages are reserved for secondary emulators. It may be
> increased using the secondary_device_emulators parameter in xl.cfg(5).
>
> Because of the re-arrangement of the special pages in a previous patch we
> only need the addition of parameter HVM_PARAM_NR_IOREQ_SERVERS to determine
> the layout of the shared pages for multiple emulators. Guests migrated in
> from hosts without this patch will be lacking the save record which stores
> the new parameter and so the guest is assumed to only have had a single
> emulator.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  docs/man/xl.cfg.pod.5            |    7 +
>  tools/libxc/xc_domain.c          |  175 ++++++++
>  tools/libxc/xc_domain_restore.c  |   20 +
>  tools/libxc/xc_domain_save.c     |   12 +
>  tools/libxc/xc_hvm_build_x86.c   |   25 +-
>  tools/libxc/xenctrl.h            |   41 ++
>  tools/libxc/xenguest.h           |    2 +
>  tools/libxc/xg_save_restore.h    |    1 +
>  tools/libxl/libxl.h              |    8 +
>  tools/libxl/libxl_create.c       |    3 +
>  tools/libxl/libxl_dom.c          |    1 +
>  tools/libxl/libxl_types.idl      |    1 +
>  tools/libxl/xl_cmdimpl.c         |    3 +
>  xen/arch/x86/hvm/hvm.c           |  916 +++++++++++++++++++++++++++++++++++---
>  xen/arch/x86/hvm/io.c            |    2 +-
>  xen/include/asm-x86/hvm/domain.h |   21 +-
>  xen/include/asm-x86/hvm/hvm.h    |    1 +
>  xen/include/asm-x86/hvm/vcpu.h   |    2 +-
>  xen/include/public/hvm/hvm_op.h  |   70 +++
>  xen/include/public/hvm/ioreq.h   |    1 +
>  xen/include/public/hvm/params.h  |    4 +-
>  21 files changed, 1230 insertions(+), 86 deletions(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 9941395..9aa9958 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1277,6 +1277,13 @@ specified, enabling the use of XenServer PV drivers in the guest.
>  This parameter only takes effect when device_model_version=qemu-xen.
>  See F<docs/misc/pci-device-reservations.txt> for more information.
>  
> +=item B<secondary_device_emulators=NUMBER>
> +
> +If a number of secondary device emulators (i.e. in addition to
> +qemu-xen or qemu-xen-traditional) are to be invoked to support the
> +guest then this parameter can be set with the count of how many are
> +to be used. The default value is zero.
> +
>  =back
>  
>  =head2 Device-Model Options
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..c64d15a 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -1246,6 +1246,181 @@ int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long
>      return rc;
>  }
>  
> +int xc_hvm_create_ioreq_server(xc_interface *xch,
> +                               domid_t domid,
> +                               ioservid_t *id)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_create_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    *id = arg->id;
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> +                                 domid_t domid,
> +                                 ioservid_t id,
> +                                 xen_pfn_t *pfn,
> +                                 xen_pfn_t *buf_pfn,
> +                                 evtchn_port_t *buf_port)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_get_ioreq_server_info;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    if ( rc != 0 )
> +        goto done;
> +
> +    if ( pfn )
> +        *pfn = arg->pfn;
> +
> +    if ( buf_pfn )
> +        *buf_pfn = arg->buf_pfn;
> +
> +    if ( buf_port )
> +        *buf_port = arg->buf_port;
> +
> +done:
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
> +                                        ioservid_t id, int is_mmio,
> +                                        uint64_t start, uint64_t end)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_io_range_to_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->is_mmio = is_mmio;
> +    arg->start = start;
> +    arg->end = end;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
> +                                            ioservid_t id, int is_mmio,
> +                                            uint64_t start)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_io_range_from_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_unmap_io_range_from_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->is_mmio = is_mmio;
> +    arg->start = start;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
> +                                      ioservid_t id, uint16_t bdf)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_map_pcidev_to_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_map_pcidev_to_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->bdf = bdf;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
> +                                          ioservid_t id, uint16_t bdf)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_unmap_pcidev_from_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_unmap_pcidev_from_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    arg->bdf = bdf;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_destroy_ioreq_server(xc_interface *xch,
> +                                domid_t domid,
> +                                ioservid_t id)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_destroy_ioreq_server;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->id = id;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
>  int xc_domain_setdebugging(xc_interface *xch,
>                             uint32_t domid,
>                             unsigned int enable)
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index ca2fb51..305e4b8 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -746,6 +746,7 @@ typedef struct {
>      uint64_t acpi_ioport_location;
>      uint64_t viridian;
>      uint64_t vm_generationid_addr;
> +    uint64_t nr_ioreq_servers;
>  
>      struct toolstack_data_t tdata;
>  } pagebuf_t;
> @@ -996,6 +997,16 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
>          DPRINTF("read generation id buffer address");
>          return pagebuf_get_one(xch, ctx, buf, fd, dom);
>  
> +    case XC_SAVE_ID_HVM_NR_IOREQ_SERVERS:
> +        /* Skip padding 4 bytes then read the number of IOREQ servers. */
> +        if ( RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint32_t)) ||
> +             RDEXACT(fd, &buf->nr_ioreq_servers, sizeof(uint64_t)) )
> +        {
> +            PERROR("error reading the number of IOREQ servers");
> +            return -1;
> +        }
> +        return pagebuf_get_one(xch, ctx, buf, fd, dom);
> +
>      default:
>          if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
>              ERROR("Max batch size exceeded (%d). Giving up.", count);
> @@ -1755,6 +1766,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>      if (pagebuf.viridian != 0)
>          xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
>  
> +    if ( hvm ) {
> +        int nr_ioreq_servers = pagebuf.nr_ioreq_servers;
> +
> +        if ( nr_ioreq_servers == 0 )
> +            nr_ioreq_servers = 1;
> +
> +        xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS, nr_ioreq_servers);
> +    }
> +
>      if (pagebuf.acpi_ioport_location == 1) {
>          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
>          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
> index 42c4752..3293e29 100644
> --- a/tools/libxc/xc_domain_save.c
> +++ b/tools/libxc/xc_domain_save.c
> @@ -1731,6 +1731,18 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>              PERROR("Error when writing the viridian flag");
>              goto out;
>          }
> +
> +        chunk.id = XC_SAVE_ID_HVM_NR_IOREQ_SERVERS;
> +        chunk.data = 0;
> +        xc_get_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> +                         (unsigned long *)&chunk.data);
> +
> +        if ( (chunk.data != 0) &&
> +             wrexact(io_fd, &chunk, sizeof(chunk)) )
> +        {
> +            PERROR("Error when writing the number of IOREQ servers");
> +            goto out;
> +        }
>      }
>  
>      if ( callbacks != NULL && callbacks->toolstack_save != NULL )
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index f24f2a1..bbe5def 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -45,7 +45,7 @@
>  #define SPECIALPAGE_IDENT_PT 4
>  #define SPECIALPAGE_CONSOLE  5
>  #define SPECIALPAGE_IOREQ    6
> -#define NR_SPECIAL_PAGES     SPECIALPAGE_IOREQ + 2 /* ioreq server needs 2 pages */
> +#define NR_SPECIAL_PAGES(n)  SPECIALPAGE_IOREQ + (2 * n) /* ioreq server needs 2 pages */
>  #define special_pfn(x) (0xff000u - (x))
>  
>  static int modules_init(struct xc_hvm_build_args *args,
> @@ -83,7 +83,8 @@ static int modules_init(struct xc_hvm_build_args *args,
>  }
>  
>  static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
> -                           uint64_t mmio_start, uint64_t mmio_size)
> +                           uint64_t mmio_start, uint64_t mmio_size,
> +                           int max_emulators)
>  {
>      struct hvm_info_table *hvm_info = (struct hvm_info_table *)
>          (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
> @@ -111,7 +112,7 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
>      /* Memory parameters. */
>      hvm_info->low_mem_pgend = lowmem_end >> PAGE_SHIFT;
>      hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
> -    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES;
> +    hvm_info->reserved_mem_pgstart = special_pfn(0) - NR_SPECIAL_PAGES(max_emulators);
>  
>      /* Finish with the checksum. */
>      for ( i = 0, sum = 0; i < hvm_info->length; i++ )
> @@ -254,6 +255,10 @@ static int setup_guest(xc_interface *xch,
>          stat_1gb_pages = 0;
>      int pod_mode = 0;
>      int claim_enabled = args->claim_enabled;
> +    int max_emulators = args->max_emulators;
> +
> +    if ( max_emulators < 1 )
> +        goto error_out;

Is there a sane upper bound for emulators? As it stands any large positive value is accepted, and each extra emulator takes two more pages out of the special-page region below special_pfn(0).

>  
>      if ( nr_pages > target_pages )
>          pod_mode = XENMEMF_populate_on_demand;
> @@ -458,7 +463,8 @@ static int setup_guest(xc_interface *xch,
>                xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
>                HVM_INFO_PFN)) == NULL )
>          goto error_out;
> -    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
> +    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size,
> +                   max_emulators);
>      munmap(hvm_info_page, PAGE_SIZE);
>  
>      /* Allocate and clear special pages. */
> @@ -470,17 +476,18 @@ static int setup_guest(xc_interface *xch,
>              "  STORE:     %"PRI_xen_pfn"\n"
>              "  IDENT_PT:  %"PRI_xen_pfn"\n"
>              "  CONSOLE:   %"PRI_xen_pfn"\n"
> -            "  IOREQ:     %"PRI_xen_pfn"\n",
> -            NR_SPECIAL_PAGES,
> +            "  IOREQ(%02d): %"PRI_xen_pfn"\n",
> +            NR_SPECIAL_PAGES(max_emulators),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
>              (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> +            max_emulators * 2,
>              (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
>  
> -    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> +    for ( i = 0; i < NR_SPECIAL_PAGES(max_emulators); i++ )
>      {
>          xen_pfn_t pfn = special_pfn(i);
>          rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
> @@ -506,7 +513,9 @@ static int setup_guest(xc_interface *xch,
>      xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
>                       special_pfn(SPECIALPAGE_IOREQ));
>      xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> -                     special_pfn(SPECIALPAGE_IOREQ) - 1);
> +                     special_pfn(SPECIALPAGE_IOREQ) - max_emulators);
> +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> +                     max_emulators);
>  
>      /*
>       * Identity-map page table is required for running with CR0.PG=0 when
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 13f816b..142aaea 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1801,6 +1801,47 @@ void xc_clear_last_error(xc_interface *xch);
>  int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value);
>  int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value);
>  
> +/*
> + * IOREQ server API
> + */
> +int xc_hvm_create_ioreq_server(xc_interface *xch,
> +			       domid_t domid,
> +			       ioservid_t *id);
> +
> +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> +				 domid_t domid,
> +				 ioservid_t id,
> +				 xen_pfn_t *pfn,
> +				 xen_pfn_t *buf_pfn,
> +				 evtchn_port_t *buf_port);
> +
> +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch,
> +					domid_t domid,
> +                                        ioservid_t id,
> +					int is_mmio,
> +                                        uint64_t start,
> +					uint64_t end);
> +
> +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
> +					    domid_t domid,
> +                                            ioservid_t id,
> +					    int is_mmio,
> +                                            uint64_t start);
> +
> +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch,
> +				      domid_t domid,
> +                                      ioservid_t id,
> +				      uint16_t bdf);
> +
> +int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch,
> +					  domid_t domid,
> +					  ioservid_t id,
> +					  uint16_t bdf);
> +
> +int xc_hvm_destroy_ioreq_server(xc_interface *xch,
> +				domid_t domid,
> +				ioservid_t id);
> +

This hunk mixes hard tabs and spaces in the continuation-line indentation of the new prototypes; please pick one style and use it consistently (the surrounding declarations in xenctrl.h use spaces).

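For reference, aligned with spaces throughout the declarations would look like:

```c
int xc_hvm_create_ioreq_server(xc_interface *xch,
                               domid_t domid,
                               ioservid_t *id);
```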
>  /* HVM guest pass-through */
>  int xc_assign_device(xc_interface *xch,
>                       uint32_t domid,
> diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> index a0e30e1..8930ac0 100644
> --- a/tools/libxc/xenguest.h
> +++ b/tools/libxc/xenguest.h
> @@ -234,6 +234,8 @@ struct xc_hvm_build_args {
>      struct xc_hvm_firmware_module smbios_module;
>      /* Whether to use claim hypercall (1 - enable, 0 - disable). */
>      int claim_enabled;
> +    /* Maximum number of emulators for VM */
> +    int max_emulators;
>  };
>  
>  /**
> diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
> index f859621..5170b7f 100644
> --- a/tools/libxc/xg_save_restore.h
> +++ b/tools/libxc/xg_save_restore.h
> @@ -259,6 +259,7 @@
>  #define XC_SAVE_ID_HVM_ACCESS_RING_PFN  -16
>  #define XC_SAVE_ID_HVM_SHARING_RING_PFN -17
>  #define XC_SAVE_ID_TOOLSTACK          -18 /* Optional toolstack specific info */
> +#define XC_SAVE_ID_HVM_NR_IOREQ_SERVERS -19
>  
>  /*
>  ** We process save/restore/migrate in batches of pages; the below
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..b679957 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -95,6 +95,14 @@
>  #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
>  
>  /*
> + * LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS indicates that the
> + * max_emulators field is present in the hvm sections of
> + * libxl_domain_build_info. This field can be used to reserve
> + * extra special pages for secondary device emulators.
> + */
> +#define LIBXL_HAVE_BUILDINFO_HVM_MAX_EMULATORS 1
> +
> +/*
>   * libxl ABI compatibility
>   *
>   * The only guarantee which libxl makes regarding ABI compatibility
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..cce93d9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -330,6 +330,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>  
>          libxl_defbool_setdefault(&b_info->u.hvm.gfx_passthru, false);
>  
> +        if (b_info->u.hvm.max_emulators < 1)
> +            b_info->u.hvm.max_emulators = 1;
> +
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>          libxl_defbool_setdefault(&b_info->u.pv.e820_host, false);
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 55f74b2..9de06f9 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -637,6 +637,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
>      args.mem_size = (uint64_t)(info->max_memkb - info->video_memkb) << 10;
>      args.mem_target = (uint64_t)(info->target_memkb - info->video_memkb) << 10;
>      args.claim_enabled = libxl_defbool_val(info->claim_mode);
> +    args.max_emulators = info->u.hvm.max_emulators;
>      if (libxl__domain_firmware(gc, info, &args)) {
>          LOG(ERROR, "initializing domain firmware failed");
>          goto out;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b707159 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -372,6 +372,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>                                         ("xen_platform_pci", libxl_defbool),
>                                         ("usbdevice_list",   libxl_string_list),
>                                         ("vendor_device",    libxl_vendor_device),
> +                                       ("max_emulators",    integer),
>                                         ])),
>                   ("pv", Struct(None, [("kernel", string),
>                                        ("slack_memkb", MemKB),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index aff6f90..c65f4f4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1750,6 +1750,9 @@ skip_vfb:
>  
>              b_info->u.hvm.vendor_device = d;
>          }
> + 
> +        if (!xlu_cfg_get_long (config, "secondary_device_emulators", &l, 0))
> +            b_info->u.hvm.max_emulators = l + 1;
>      }
>  
>      xlu_cfg_destroy(config);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d9874fb..5f9e728 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -379,21 +379,23 @@ static void hvm_wait_on_io(struct domain *d, ioreq_t *p)
>  void hvm_do_resume(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> +    struct list_head *entry, *next;
>  
>      check_wakeup_from_wait();
>  
>      if ( is_hvm_vcpu(v) )
>          pt_restore_timer(v);
>  
> -    s = v->arch.hvm_vcpu.ioreq_server;
> -    v->arch.hvm_vcpu.ioreq_server = NULL;
> -
> -    if ( s )
> +    list_for_each_safe ( entry, next, &v->arch.hvm_vcpu.ioreq_server_list )
>      {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                vcpu_list_entry[v->vcpu_id]);
>          ioreq_t *p = get_ioreq(s, v->vcpu_id);
>  
>          hvm_wait_on_io(d, p);
> +
> +        list_del_init(entry);
>      }
>  
>      /* Inject pending hw/sw trap */
> @@ -531,6 +533,83 @@ static int hvm_print_line(
>      return X86EMUL_OKAY;
>  }
>  
> +static int hvm_access_cf8(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *curr = current;
> +    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
> +    int rc;
> +
> +    BUG_ON(port < 0xcf8);
> +    port -= 0xcf8;
> +
> +    spin_lock(&hd->pci_lock);
> +
> +    if ( dir == IOREQ_WRITE )
> +    {
> +        switch ( bytes )
> +        {
> +        case 4:
> +            hd->pci_cf8 = *val;
> +            break;
> +
> +        case 2:
> +        {
> +            uint32_t mask = 0xffff << (port * 8);
> +            uint32_t subval = *val << (port * 8);
> +
> +            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
> +                          (subval & mask);
> +            break;
> +        }
> +            
> +        case 1:
> +        {
> +            uint32_t mask = 0xff << (port * 8);
> +            uint32_t subval = *val << (port * 8);
> +
> +            hd->pci_cf8 = (hd->pci_cf8 & ~mask) |
> +                          (subval & mask);
> +            break;
> +        }
> +
> +        default:
> +            break;
> +        }
> +
> +        /* We always need to fall through to the catch all emulator */
> +        rc = X86EMUL_UNHANDLEABLE;
> +    }
> +    else
> +    {
> +        switch ( bytes )
> +        {
> +        case 4:
> +            *val = hd->pci_cf8;
> +            rc = X86EMUL_OKAY;
> +            break;
> +
> +        case 2:
> +            *val = (hd->pci_cf8 >> (port * 8)) & 0xffff;
> +            rc = X86EMUL_OKAY;
> +            break;
> +            
> +        case 1:
> +            *val = (hd->pci_cf8 >> (port * 8)) & 0xff;
> +            rc = X86EMUL_OKAY;
> +            break;
> +
> +        default:
> +            rc = X86EMUL_UNHANDLEABLE;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&hd->pci_lock);
> +
> +    return rc;
> +}
> +
>  static int handle_pvh_io(
>      int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>  {
> @@ -590,6 +669,8 @@ done:
>  
>  static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu *v)
>  {
> +    list_del_init(&s->vcpu_list_entry[v->vcpu_id]);
> +
>      if ( v->vcpu_id == 0 )
>      {
>          if ( s->buf_ioreq_evtchn >= 0 )
> @@ -606,7 +687,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, struct vcpu
>      }
>  }
>  
> -static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> +static int hvm_create_ioreq_server(struct domain *d, ioservid_t id, domid_t domid)
>  {
>      struct hvm_ioreq_server *s;
>      int i;
> @@ -614,34 +695,47 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
>      struct vcpu *v;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -EEXIST;
> -    if ( d->arch.hvm_domain.ioreq_server != NULL )
> -        goto fail_exist;
> +    list_for_each_entry ( s, 
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto fail_exist;
> +    }
>  
> -    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
>  
>      rc = -ENOMEM;
>      s = xzalloc(struct hvm_ioreq_server);
>      if ( !s )
>          goto fail_alloc;
>  
> +    s->id = id;
>      s->domain = d;
>      s->domid = domid;
> +    INIT_LIST_HEAD(&s->domain_list_entry);
>  
>      for ( i = 0; i < MAX_HVM_VCPUS; i++ )
> +    {
>          s->ioreq_evtchn[i] = -1;
> +        INIT_LIST_HEAD(&s->vcpu_list_entry[i]);
> +    }
>      s->buf_ioreq_evtchn = -1;
>  
>      /* Initialize shared pages */
> -    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
>  
>      hvm_init_ioreq_page(s, 0);
>      if ( (rc = hvm_set_ioreq_page(s, 0, pfn)) < 0 )
>          goto fail_set_ioreq;
>  
> -    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> +    pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
>  
>      hvm_init_ioreq_page(s, 1);
>      if ( (rc = hvm_set_ioreq_page(s, 1, pfn)) < 0 )
> @@ -653,7 +747,8 @@ static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
>              goto fail_add_vcpu;
>      }
>  
> -    d->arch.hvm_domain.ioreq_server = s;
> +    list_add(&s->domain_list_entry,
> +             &d->arch.hvm_domain.ioreq_server_list);
>  
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> @@ -673,22 +768,30 @@ fail_exist:
>      return rc;
>  }
>  
> -static void hvm_destroy_ioreq_server(struct domain *d)
> +static void hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>  {
> -    struct hvm_ioreq_server *s;
> +    struct hvm_ioreq_server *s, *next;
>      struct vcpu *v;
>  
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, d->domain_id);
> +    list_for_each_entry_safe ( s,
> +                               next,
> +                               &d->arch.hvm_domain.ioreq_server_list,
> +                               domain_list_entry)
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> -    if ( !s )
> -        goto done;
> +    goto done;
> +
> +found:
> +    gdprintk(XENLOG_INFO, "%s: %d:%d\n", __func__, d->domain_id, id);
>  
>      domain_pause(d);
>  
> -    d->arch.hvm_domain.ioreq_server = NULL;
> +    list_del_init(&s->domain_list_entry);
>  
>      for_each_vcpu ( d, v )
>          hvm_ioreq_server_remove_vcpu(s, v);
> @@ -704,21 +807,186 @@ done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  }
>  
> -static int hvm_get_ioreq_server_buf_port(struct domain *d, evtchn_port_t *port)
> +static int hvm_get_ioreq_server_buf_port(struct domain *d, ioservid_t id, evtchn_port_t *port)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -ENOENT;
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( s->id == id )
> +        {
> +            *port = s->buf_ioreq_evtchn;
> +            rc = 0;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_get_ioreq_server_pfn(struct domain *d, ioservid_t id, int buf, xen_pfn_t *pfn)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    rc = -ENOENT;
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( s->id == id )
> +        {
> +            if ( buf )
> +                *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] - s->id;
> +            else
> +                *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN] - s->id;
> +
> +            rc = 0;
> +            break;
> +        }
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
> +                                            int is_mmio, uint64_t start, uint64_t end)
>  {
>      struct hvm_ioreq_server *s;
> +    struct hvm_io_range *x;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    x = xmalloc(struct hvm_io_range);
> +    if ( x == NULL )
> +        return -ENOMEM;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    rc = -ENOENT;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
> +
> +    goto fail;
> +
> +found:
> +    x->start = start;
> +    x->end = end;
> +
> +    if ( is_mmio )
> +    {
> +        x->next = s->mmio_range_list;
> +        s->mmio_range_list = x;
> +    }
> +    else
> +    {
> +        x->next = s->portio_range_list;
> +        s->portio_range_list = x;
> +    }
> +
> +    gdprintk(XENLOG_DEBUG, "%d:%d: +%s %"PRIX64" - %"PRIX64"\n",
> +             d->domain_id,
> +             s->id,
> +             ( is_mmio ) ? "MMIO" : "PORTIO",
> +             x->start,
> +             x->end);
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    xfree(x);
> +
> +    return rc;
> +}
> +
> +static int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
> +                                                int is_mmio, uint64_t start)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct hvm_io_range *x, **xp;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    *port = s->buf_ioreq_evtchn;
> -    rc = 0;
> +    goto done;
> +
> +found:
> +    if ( is_mmio )
> +    {
> +        x = s->mmio_range_list;
> +        xp = &s->mmio_range_list;
> +    }
> +    else
> +    {
> +        x = s->portio_range_list;
> +        xp = &s->portio_range_list;
> +    }
> +
> +    while ( (x != NULL) && (start != x->start) )
> +    {
> +        xp = &x->next;
> +        x = x->next;
> +    }
> +
> +    if ( (x != NULL) )
> +    {
> +        gdprintk(XENLOG_DEBUG, "%d:%d: -%s %"PRIX64" - %"PRIX64"\n",
> +                 d->domain_id,
> +                 s->id,
> +                 ( is_mmio ) ? "MMIO" : "PORTIO",
> +                 x->start,
> +                 x->end);
> +
> +        *xp = x->next;
> +        xfree(x);
> +        rc = 0;
> +    }
>  
>  done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> @@ -726,25 +994,98 @@ done:
>      return rc;
>  }
>  
> -static int hvm_get_ioreq_server_pfn(struct domain *d, int buf, xen_pfn_t *pfn)
> +static int hvm_map_pcidev_to_ioreq_server(struct domain *d, ioservid_t id,
> +                                          uint16_t bdf)
>  {
>      struct hvm_ioreq_server *s;
> +    struct hvm_pcidev *x;
>      int rc;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    x = xmalloc(struct hvm_pcidev);
> +    if ( x == NULL )
> +        return -ENOMEM;
> +
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    rc = -ENOENT;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
> +
> +    goto fail;
> +
> +found:
> +    x->bdf = bdf;
> +
> +    x->next = s->pcidev_list;
> +    s->pcidev_list = x;
> +
> +    gdprintk(XENLOG_DEBUG, "%d:%d: +PCIDEV %04X\n",
> +             d->domain_id,
> +             s->id,
> +             x->bdf);
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    xfree(x);
> +
> +    return rc;
> +}
> +
> +static int hvm_unmap_pcidev_from_ioreq_server(struct domain *d, ioservid_t id,
> +                                              uint16_t bdf)
> +{
> +    struct hvm_ioreq_server *s;
> +    struct hvm_pcidev *x, **xp;
> +    int rc;
> +
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
> -    if ( buf )
> -        *pfn = d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN];
> -    else
> -        *pfn = d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
> +    goto done;
>  
> -    rc = 0;
> +found:
> +    x = s->pcidev_list;
> +    xp = &s->pcidev_list;
> +
> +    while ( (x != NULL) && (bdf != x->bdf) )
> +    {
> +        xp = &x->next;
> +        x = x->next;
> +    }
> +    if ( (x != NULL) )
> +    {
> +        gdprintk(XENLOG_DEBUG, "%d:%d: -PCIDEV %04X\n",
> +                 d->domain_id,
> +                 s->id,
> +                 x->bdf);
> +
> +         *xp = x->next;
> +        xfree(x);
> +        rc = 0;
> +    }
>  
>  done:
>      spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> @@ -752,6 +1093,73 @@ done:
>      return rc;
>  }
>  
> +static int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
> +{
> +    struct list_head *entry;
> +    int rc;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        if ( (rc = hvm_ioreq_server_add_vcpu(s, v)) < 0 )
> +            goto fail;
> +    }
> +        
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return 0;
> +
> +fail:
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    return rc;
> +}
> +
> +static void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
> +{
> +    struct list_head *entry;
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        hvm_ioreq_server_remove_vcpu(s, v);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
> +
> +static void hvm_destroy_all_ioreq_servers(struct domain *d)
> +{
> +    ioservid_t id;
> +
> +    for ( id = 0;
> +          id < d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
> +          id++ )
> +        hvm_destroy_ioreq_server(d, id);
> +}
> +
>  static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>                                       int *p_port)
>  {
> @@ -767,21 +1175,30 @@ static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
>      return 0;
>  }
>  
> -static int hvm_set_ioreq_server_domid(struct domain *d, domid_t domid)
> +static int hvm_set_ioreq_server_domid(struct domain *d, ioservid_t id, domid_t domid)
>  {
>      struct hvm_ioreq_server *s;
>      struct vcpu *v;
>      int rc = 0;
>  
> +    if ( id >= d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS] )
> +        return -EINVAL;
> +
>      domain_pause(d);
>      spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    s = d->arch.hvm_domain.ioreq_server;
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == id )
> +            goto found;
> +    }
>  
>      rc = -ENOENT;
> -    if ( !s )
> -        goto done;
> +    goto done;
>  
> +found:
>      rc = 0;
>      if ( s->domid == domid )
>          goto done;
> @@ -838,7 +1255,9 @@ int hvm_domain_initialise(struct domain *d)
>  
>      }
>  
> +    INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server_list);
>      spin_lock_init(&d->arch.hvm_domain.ioreq_server_lock);
> +    spin_lock_init(&d->arch.hvm_domain.pci_lock);
>      spin_lock_init(&d->arch.hvm_domain.irq_lock);
>      spin_lock_init(&d->arch.hvm_domain.uc_lock);
>  
> @@ -880,6 +1299,7 @@ int hvm_domain_initialise(struct domain *d)
>      rtc_init(d);
>  
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
> +    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> @@ -910,7 +1330,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
>      if ( hvm_funcs.nhvm_domain_relinquish_resources )
>          hvm_funcs.nhvm_domain_relinquish_resources(d);
>  
> -    hvm_destroy_ioreq_server(d);
> +    hvm_destroy_all_ioreq_servers(d);
>  
>      msixtbl_pt_cleanup(d);
>  
> @@ -1422,13 +1842,14 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  {
>      int rc;
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
>  
>      hvm_asid_flush_vcpu(v);
>  
>      spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
>      INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
>  
> +    INIT_LIST_HEAD(&v->arch.hvm_vcpu.ioreq_server_list);
> +
>      rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
>      if ( rc != 0 )
>          goto fail1;
> @@ -1465,16 +1886,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
>           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>          goto fail5;
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> -    if ( s )
> -    {
> -        rc = hvm_ioreq_server_add_vcpu(s, v);
> -        if ( rc < 0 )
> -            goto fail6;
> -    }
> +    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
> +    if ( rc < 0 )
> +        goto fail6;
>  
>      if ( v->vcpu_id == 0 )
>      {
> @@ -1510,14 +1924,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
>  void hvm_vcpu_destroy(struct vcpu *v)
>  {
>      struct domain *d = v->domain;
> -    struct hvm_ioreq_server *s;
> -
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
>  
> -    if ( s )
> -        hvm_ioreq_server_remove_vcpu(s, v);
> +    hvm_all_ioreq_servers_remove_vcpu(d, v);
>  
>      nestedhvm_vcpu_destroy(v);
>  
> @@ -1556,6 +1964,101 @@ void hvm_vcpu_down(struct vcpu *v)
>      }
>  }
>  
> +static struct hvm_ioreq_server *hvm_select_ioreq_server(struct vcpu *v, ioreq_t *p)
> +{
> +#define BDF(cf8) (((cf8) & 0x00ffff00) >> 8)
> +
> +    struct domain *d = v->domain;
> +    struct hvm_ioreq_server *s;
> +    uint8_t type;
> +    uint64_t addr;
> +
> +    if ( p->type == IOREQ_TYPE_PIO &&
> +         (p->addr & ~3) == 0xcfc )
> +    { 
> +        /* PCI config data cycle */
> +        type = IOREQ_TYPE_PCI_CONFIG;
> +
> +        spin_lock(&d->arch.hvm_domain.pci_lock);
> +        addr = d->arch.hvm_domain.pci_cf8 + (p->addr & 3);
> +        spin_unlock(&d->arch.hvm_domain.pci_lock);
> +    }
> +    else
> +    {
> +        type = p->type;
> +        addr = p->addr;
> +    }
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    switch ( type )
> +    {
> +    case IOREQ_TYPE_COPY:
> +    case IOREQ_TYPE_PIO:
> +    case IOREQ_TYPE_PCI_CONFIG:
> +        break;
> +    default:
> +        goto done;
> +    }
> +
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        switch ( type )
> +        {
> +            case IOREQ_TYPE_COPY:
> +            case IOREQ_TYPE_PIO: {
> +                struct hvm_io_range *x;
> +
> +                x = (type == IOREQ_TYPE_COPY) ?
> +                    s->mmio_range_list :
> +                    s->portio_range_list;
> +
> +                for ( ; x; x = x->next )
> +                {
> +                    if ( (addr >= x->start) && (addr <= x->end) )
> +                        goto found;
> +                }
> +                break;
> +            }
> +            case IOREQ_TYPE_PCI_CONFIG: {
> +                struct hvm_pcidev *x;
> +
> +                x = s->pcidev_list;
> +
> +                for ( ; x; x = x->next )
> +                {
> +                    if ( BDF(addr) == x->bdf ) {
> +                        p->type = type;
> +                        p->addr = addr;
> +                        goto found;
> +                    }
> +                }
> +                break;
> +            }
> +        }
> +    }
> +
> +done:
> +    /* The catch-all server has id 0 */
> +    list_for_each_entry ( s,
> +                          &d->arch.hvm_domain.ioreq_server_list,
> +                          domain_list_entry )
> +    {
> +        if ( s->id == 0 )
> +            goto found;
> +    }
> +
> +    s = NULL;
> +
> +found:
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +    return s;
> +
> +#undef BDF
> +}
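For anyone following along: the CF8/CFC decode and BDF() extraction above can be modelled standalone. The BDF() macro itself is defined earlier in the patch (not quoted here), so the bit layout below is an assumption based on the standard PCI configuration mechanism #1 address format, not necessarily the patch's exact definition:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout of the latched CF8 value (standard config mechanism #1):
 * bit 31 = enable, bits 23:16 = bus, 15:11 = device, 10:8 = function,
 * 7:2 = register. BDF() here extracts bits 23:8, which is what the
 * patch's (elided) BDF() macro is presumed to do. */
#define BDF(cf8) ((uint16_t)(((cf8) >> 8) & 0xffff))

/* A data cycle at port 0xcfc..0xcff addresses the byte at
 * (latched CF8) + (port & 3), matching the code above. */
static uint32_t pci_config_addr(uint32_t cf8_latch, uint16_t port)
{
    return cf8_latch + (port & 3);
}
```

With that layout, a write to port 0xcfe while CF8 is latched to 0x80001008 lands two bytes into the 4-byte config window of bus 0, device 2, function 0, register offset 8.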
> +
>  int hvm_buffered_io_send(ioreq_t *p)
>  {
>      struct vcpu *v = current;
> @@ -1570,10 +2073,7 @@ int hvm_buffered_io_send(ioreq_t *p)
>      /* Ensure buffered_iopage fits in a page */
>      BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> +    s = hvm_select_ioreq_server(v, p);
>      if ( !s )
>          return 0;
>  
> @@ -1661,7 +2161,9 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
>          return 0;
>      }
>  
> -    v->arch.hvm_vcpu.ioreq_server = s;
> +    ASSERT(list_empty(&s->vcpu_list_entry[v->vcpu_id]));
> +    list_add(&s->vcpu_list_entry[v->vcpu_id],
> +             &v->arch.hvm_vcpu.ioreq_server_list); 
>  
>      p->dir = proto_p->dir;
>      p->data_is_ptr = proto_p->data_is_ptr;
> @@ -1686,24 +2188,42 @@ static bool_t hvm_send_assist_req_to_server(struct hvm_ioreq_server *s,
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p)
>  {
> -    struct domain *d = v->domain;
>      struct hvm_ioreq_server *s;
>  
> -    ASSERT(v->arch.hvm_vcpu.ioreq_server == NULL);
> +    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
>  
>      if ( unlikely(!vcpu_start_shutdown_deferral(v)) )
>          return 0;
>  
> -    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> -    s = d->arch.hvm_domain.ioreq_server;
> -    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> -
> +    s = hvm_select_ioreq_server(v, p);
>      if ( !s )
>          return 0;
>  
>      return hvm_send_assist_req_to_server(s, v, p);
>  }
>  
> +void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p)
> +{
> +    struct domain *d = v->domain;
> +    struct list_head *entry;
> +
> +    ASSERT(list_empty(&v->arch.hvm_vcpu.ioreq_server_list));
> +
> +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> +
> +    list_for_each ( entry,
> +                    &d->arch.hvm_domain.ioreq_server_list )
> +    {
> +        struct hvm_ioreq_server *s = list_entry(entry,
> +                                                struct hvm_ioreq_server,
> +                                                domain_list_entry);
> +
> +        (void) hvm_send_assist_req_to_server(s, v, p);
> +    }
> +
> +    spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
> +}
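The broadcast path above just walks every registered server and ignores the per-server result. A minimal standalone sketch of that pattern (all names here are illustrative, not the Xen ones):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for hvm_ioreq_server on an intrusive list. */
struct server {
    struct server *next;
    int requests_seen;
};

static int send_req(struct server *s, int req)
{
    (void)req;
    s->requests_seen++;
    return 1;
}

/* Deliver the request to every server, discarding the return value,
 * just as hvm_broadcast_assist_req() does with (void) above. */
static void broadcast_req(struct server *head, int req)
{
    for (struct server *s = head; s != NULL; s = s->next)
        (void)send_req(s, req);
}
```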
> +
>  void hvm_hlt(unsigned long rflags)
>  {
>      struct vcpu *curr = current;
> @@ -4286,6 +4806,215 @@ static int hvmop_flush_tlb_all(void)
>      return 0;
>  }
>  
> +static int hvmop_create_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
> +{
> +    struct domain *curr_d = current->domain;
> +    xen_hvm_create_ioreq_server_t op;
> +    struct domain *d;
> +    ioservid_t id;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = -ENOSPC;
> +    for ( id = 1;
> +          id <  d->arch.hvm_domain.params[HVM_PARAM_NR_IOREQ_SERVERS];
> +          id++ )
> +    {
> +        rc = hvm_create_ioreq_server(d, id, curr_d->domain_id);
> +        if ( rc == -EEXIST )
> +            continue;
> +
> +        break;
> +    }
> +
> +    if ( rc == -EEXIST )
> +        rc = -ENOSPC;
> +
> +    if ( rc < 0 )
> +        goto out;
> +
> +    op.id = id;
> +
> +    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
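The id-allocation loop above has a slightly subtle shape: it scans ids 1..N-1, treats -EEXIST as "keep looking", and converts a fully populated scan (or an empty range) into -ENOSPC. A standalone model of that logic, with an in_use[] array standing in for the real hvm_create_ioreq_server() collision check:

```c
#include <assert.h>
#include <stdbool.h>

#define EEXIST 17
#define ENOSPC 28

/* Model of hvmop_create_ioreq_server()'s id scan: try ids 1..nr-1,
 * skipping ids already in use; if every id is taken (or nr <= 1 so the
 * loop never runs), the caller sees -ENOSPC. */
static int alloc_server_id(const bool *in_use, unsigned int nr,
                           unsigned int *out_id)
{
    int rc = -ENOSPC;
    unsigned int id;

    for ( id = 1; id < nr; id++ )
    {
        rc = in_use[id] ? -EEXIST : 0;
        if ( rc == -EEXIST )
            continue;
        break;
    }

    if ( rc == -EEXIST )    /* ran off the end with every id taken */
        rc = -ENOSPC;
    if ( rc == 0 )
        *out_id = id;
    return rc;
}
```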
> +
> +static int hvmop_get_ioreq_server_info(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
> +{
> +    xen_hvm_get_ioreq_server_info_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 0, &op.pfn)) < 0 )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_pfn(d, op.id, 1, &op.buf_pfn)) < 0 )
> +        goto out;
> +
> +    if ( (rc = hvm_get_ioreq_server_buf_port(d, op.id, &op.buf_port)) < 0 )
> +        goto out;
> +
> +    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_map_io_range_to_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_io_range_to_ioreq_server_t) uop)
> +{
> +    xen_hvm_map_io_range_to_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.is_mmio,
> +                                          op.start, op.end);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_unmap_io_range_from_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_io_range_from_ioreq_server_t) uop)
> +{
> +    xen_hvm_unmap_io_range_from_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.is_mmio,
> +                                              op.start);
> +    
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_map_pcidev_to_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_pcidev_to_ioreq_server_t) uop)
> +{
> +    xen_hvm_map_pcidev_to_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_map_pcidev_to_ioreq_server(d, op.id, op.bdf);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_unmap_pcidev_from_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_unmap_pcidev_from_ioreq_server_t) uop)
> +{
> +    xen_hvm_unmap_pcidev_from_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    rc = hvm_unmap_pcidev_from_ioreq_server(d, op.id, op.bdf);
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
> +static int hvmop_destroy_ioreq_server(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
> +{
> +    xen_hvm_destroy_ioreq_server_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    hvm_destroy_ioreq_server(d, op.id);
> +    rc = 0;
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -4294,6 +5023,41 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      switch ( op )
>      {
> +    case HVMOP_create_ioreq_server:
> +        rc = hvmop_create_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_get_ioreq_server_info:
> +        rc = hvmop_get_ioreq_server_info(
> +            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
> +        break;
> +    
> +    case HVMOP_map_io_range_to_ioreq_server:
> +        rc = hvmop_map_io_range_to_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_map_io_range_to_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_unmap_io_range_from_ioreq_server:
> +        rc = hvmop_unmap_io_range_from_ioreq_server(

> +            guest_handle_cast(arg, xen_hvm_unmap_io_range_from_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_map_pcidev_to_ioreq_server:
> +        rc = hvmop_map_pcidev_to_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_map_pcidev_to_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_unmap_pcidev_from_ioreq_server:
> +        rc = hvmop_unmap_pcidev_from_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_unmap_pcidev_from_ioreq_server_t));
> +        break;
> +    
> +    case HVMOP_destroy_ioreq_server:
> +        rc = hvmop_destroy_ioreq_server(
> +            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
> +        break;
> +    
>      case HVMOP_set_param:
>      case HVMOP_get_param:
>      {
> @@ -4382,9 +5146,9 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value == DOMID_SELF )
>                      a.value = curr_d->domain_id;
>  
> -                rc = hvm_create_ioreq_server(d, a.value);
> +                rc = hvm_create_ioreq_server(d, 0, a.value);
>                  if ( rc == -EEXIST )
> -                    rc = hvm_set_ioreq_server_domid(d, a.value);
> +                    rc = hvm_set_ioreq_server_domid(d, 0, a.value);
>                  break;
>              case HVM_PARAM_ACPI_S_STATE:
>                  /* Not reflexive, as we must domain_pause(). */
> @@ -4449,6 +5213,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  if ( a.value > SHUTDOWN_MAX )
>                      rc = -EINVAL;
>                  break;
> +            case HVM_PARAM_NR_IOREQ_SERVERS:
> +                if ( d == current->domain )
> +                    rc = -EPERM;
> +                break;

Is this correct? Security-wise, it should be restricted more.

Having said that, I can't see anything good coming from being able to
change this value on the fly.  Could it be made a domain creation
parameter instead?

>              }
>  
>              if ( rc == 0 ) 
> @@ -4483,7 +5251,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              case HVM_PARAM_BUFIOREQ_PFN:
>              case HVM_PARAM_BUFIOREQ_EVTCHN:
>                  /* May need to create server */
> -                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> +                rc = hvm_create_ioreq_server(d, 0, curr_d->domain_id);
>                  if ( rc != 0 && rc != -EEXIST )
>                      goto param_fail;
>  
> @@ -4492,7 +5260,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_IOREQ_PFN: {
>                      xen_pfn_t pfn;
>  
> -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 0, &pfn)) < 0 )
>                          goto param_fail;
>  
>                      a.value = pfn;
> @@ -4501,7 +5269,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_BUFIOREQ_PFN: {
>                      xen_pfn_t pfn;
>  
> -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 1, &pfn)) < 0 )
>                          goto param_fail;
>  
>                      a.value = pfn;
> @@ -4510,7 +5278,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  case HVM_PARAM_BUFIOREQ_EVTCHN: {
>                      evtchn_port_t port;
>  
> -                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, 0, &port)) < 0 )
>                          goto param_fail;
>  
>                      a.value = port;
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index 576641c..a0d76b2 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -78,7 +78,7 @@ void send_invalidate_req(void)
>      p->dir = IOREQ_WRITE;
>      p->data = ~0UL; /* flush all */
>  
> -    (void)hvm_send_assist_req(v, p);
> +    hvm_broadcast_assist_req(v, p);
>  }
>  
>  int handle_mmio(void)
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index e750ef0..93dcec1 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -41,19 +41,38 @@ struct hvm_ioreq_page {
>      void *va;
>  };
>  
> +struct hvm_io_range {
> +    struct hvm_io_range *next;
> +    uint64_t            start, end;
> +};	
> +
> +struct hvm_pcidev {
> +    struct hvm_pcidev *next;
> +    uint16_t          bdf;
> +};	
> +
>  struct hvm_ioreq_server {
> +    struct list_head       domain_list_entry;
> +    struct list_head       vcpu_list_entry[MAX_HVM_VCPUS];

Given that this has to be initialised anyway, would it be better to have
it dynamically sized based on d->max_vcpus, which is almost always far
smaller?

~Andrew
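(One way to implement that suggestion is a C99 flexible array member sized from the domain's actual vcpu count at allocation time. Purely a hypothetical sketch with simplified types, not the patch's code:)

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for Xen's struct list_head. */
struct list_head { struct list_head *next, *prev; };

/* Hypothetical alternative to the fixed MAX_HVM_VCPUS array: size the
 * per-vcpu list heads with a flexible array member. */
struct ioreq_server {
    unsigned int nr_vcpus;
    struct list_head vcpu_list_entry[];  /* sized at allocation time */
};

static struct ioreq_server *server_alloc(unsigned int max_vcpus)
{
    struct ioreq_server *s =
        malloc(sizeof(*s) + max_vcpus * sizeof(s->vcpu_list_entry[0]));
    if ( !s )
        return NULL;
    s->nr_vcpus = max_vcpus;
    for ( unsigned int i = 0; i < max_vcpus; i++ )
    {
        /* equivalent of INIT_LIST_HEAD(): entry points at itself */
        s->vcpu_list_entry[i].next = &s->vcpu_list_entry[i];
        s->vcpu_list_entry[i].prev = &s->vcpu_list_entry[i];
    }
    return s;
}
```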

> +    ioservid_t             id;
>      struct domain          *domain;
>      domid_t                domid;
>      struct hvm_ioreq_page  ioreq;
>      int                    ioreq_evtchn[MAX_HVM_VCPUS];
>      struct hvm_ioreq_page  buf_ioreq;
>      int                    buf_ioreq_evtchn;
> +    struct hvm_io_range    *mmio_range_list;
> +    struct hvm_io_range    *portio_range_list;
> +    struct hvm_pcidev      *pcidev_list;
>  };
>  
>  struct hvm_domain {
> -    struct hvm_ioreq_server *ioreq_server;
> +    struct list_head        ioreq_server_list;
>      spinlock_t              ioreq_server_lock;
>  
> +    uint32_t                pci_cf8;
> +    spinlock_t              pci_lock;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 4e8fee8..1c3854f 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -225,6 +225,7 @@ int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
>  void destroy_ring_for_helper(void **_va, struct page_info *page);
>  
>  bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *p);
> +void hvm_broadcast_assist_req(struct vcpu *v, ioreq_t *p);
>  
>  void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat);
>  int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat);
> diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
> index 4c9d7ee..211ebfd 100644
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -138,7 +138,7 @@ struct hvm_vcpu {
>      spinlock_t          tm_lock;
>      struct list_head    tm_list;
>  
> -    struct hvm_ioreq_server *ioreq_server;
> +    struct list_head    ioreq_server_list;
>  
>      bool_t              flag_dr_dirty;
>      bool_t              debug_state_latch;
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index a9aab4b..6b31189 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -23,6 +23,7 @@
>  
>  #include "../xen.h"
>  #include "../trace.h"
> +#include "../event_channel.h"
>  
>  /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
>  #define HVMOP_set_param           0
> @@ -270,6 +271,75 @@ struct xen_hvm_inject_msi {
>  typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
>  
> +typedef uint32_t ioservid_t;
> +
> +DEFINE_XEN_GUEST_HANDLE(ioservid_t);
> +
> +#define HVMOP_create_ioreq_server 17
> +struct xen_hvm_create_ioreq_server {
> +    domid_t domid;  /* IN - domain to be serviced */
> +    ioservid_t id;  /* OUT - server id */
> +};
> +typedef struct xen_hvm_create_ioreq_server xen_hvm_create_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_create_ioreq_server_t);
> +
> +#define HVMOP_get_ioreq_server_info 18
> +struct xen_hvm_get_ioreq_server_info {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - server id */
> +    xen_pfn_t pfn;          /* OUT - ioreq pfn */
> +    xen_pfn_t buf_pfn;      /* OUT - buf ioreq pfn */
> +    evtchn_port_t buf_port; /* OUT - buf ioreq port */
> +};
> +typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
> +
> +#define HVMOP_map_io_range_to_ioreq_server 19
> +struct xen_hvm_map_io_range_to_ioreq_server {
> +    domid_t domid;                  /* IN - domain to be serviced */
> +    ioservid_t id;                  /* IN - handle from HVMOP_register_ioreq_server */
> +    int is_mmio;                    /* IN - MMIO or port IO? */
> +    uint64_aligned_t start, end;    /* IN - inclusive start and end of range */
> +};
> +typedef struct xen_hvm_map_io_range_to_ioreq_server xen_hvm_map_io_range_to_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_io_range_to_ioreq_server_t);
> +
> +#define HVMOP_unmap_io_range_from_ioreq_server 20
> +struct xen_hvm_unmap_io_range_from_ioreq_server {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - handle from HVMOP_register_ioreq_server */
> +    uint8_t is_mmio;        /* IN - MMIO or port IO? */
> +    uint64_aligned_t start; /* IN - start address of the range to remove */
> +};
> +typedef struct xen_hvm_unmap_io_range_from_ioreq_server xen_hvm_unmap_io_range_from_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_io_range_from_ioreq_server_t);
> +
> +#define HVMOP_map_pcidev_to_ioreq_server 21
> +struct xen_hvm_map_pcidev_to_ioreq_server {
> +    domid_t domid;      /* IN - domain to be serviced */
> +    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
> +    uint16_t bdf;       /* IN - PCI bus/dev/func */
> +};
> +typedef struct xen_hvm_map_pcidev_to_ioreq_server xen_hvm_map_pcidev_to_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_pcidev_to_ioreq_server_t);
> +
> +#define HVMOP_unmap_pcidev_from_ioreq_server 22
> +struct xen_hvm_unmap_pcidev_from_ioreq_server {
> +    domid_t domid;      /* IN - domain to be serviced */
> +    ioservid_t id;      /* IN - handle from HVMOP_register_ioreq_server */
> +    uint16_t bdf;       /* IN - PCI bus/dev/func */
> +};
> +typedef struct xen_hvm_unmap_pcidev_from_ioreq_server xen_hvm_unmap_pcidev_from_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_unmap_pcidev_from_ioreq_server_t);
> +
> +#define HVMOP_destroy_ioreq_server 23
> +struct xen_hvm_destroy_ioreq_server {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    ioservid_t id;          /* IN - server id */
> +};
> +typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
>  #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index f05d130..e84fa75 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -34,6 +34,7 @@
>  
>  #define IOREQ_TYPE_PIO          0 /* pio */
>  #define IOREQ_TYPE_COPY         1 /* mmio ops */
> +#define IOREQ_TYPE_PCI_CONFIG   2 /* pci config ops */
>  #define IOREQ_TYPE_TIMEOFFSET   7
>  #define IOREQ_TYPE_INVALIDATE   8 /* mapcache */
>  
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index 517a184..4109b11 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -145,6 +145,8 @@
>  /* SHUTDOWN_* action in case of a triple fault */
>  #define HVM_PARAM_TRIPLE_FAULT_REASON 31
>  
> -#define HVM_NR_PARAMS          32
> +#define HVM_PARAM_NR_IOREQ_SERVERS 32
> +
> +#define HVM_NR_PARAMS          33
>  
>  #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:56:06 2014
Message-ID: <52EA760E.4010400@citrix.com>
Date: Thu, 30 Jan 2014 15:55:58 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
 controller implementation into Xen

On 30/01/14 14:19, Paul Durrant wrote:
> Because we may now have more than one emulator, the implementation of the
> PCI hotplug controller needs to be done by Xen. Happily, the code is
> short and simple, and it also removes the need for a different ACPI DSDT
> when using different variants of QEMU.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  tools/firmware/hvmloader/acpi/mk_dsdt.c |  147 ++++------------------
>  tools/libxc/xc_domain.c                 |   46 +++++++
>  tools/libxc/xenctrl.h                   |   11 ++
>  tools/libxl/libxl_pci.c                 |   15 +++
>  xen/arch/x86/hvm/Makefile               |    1 +
>  xen/arch/x86/hvm/hotplug.c              |  207 +++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c                  |   40 +++++-
>  xen/include/asm-x86/hvm/domain.h        |   12 ++
>  xen/include/asm-x86/hvm/io.h            |    6 +
>  xen/include/public/hvm/hvm_op.h         |    9 ++
>  xen/include/public/hvm/ioreq.h          |    2 +
>  11 files changed, 373 insertions(+), 123 deletions(-)
>  create mode 100644 xen/arch/x86/hvm/hotplug.c
>
> diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> index a4b693b..6408b44 100644
> --- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
> +++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> @@ -58,28 +58,6 @@ static void pop_block(void)
>      printf("}\n");
>  }
>  
> -static void pci_hotplug_notify(unsigned int slt)
> -{
> -    stmt("Notify", "\\_SB.PCI0.S%02X, EVT", slt);
> -}
> -
> -static void decision_tree(
> -    unsigned int s, unsigned int e, char *var, void (*leaf)(unsigned int))
> -{
> -    if ( s == (e-1) )
> -    {
> -        (*leaf)(s);
> -        return;
> -    }
> -
> -    push_block("If", "And(%s, 0x%02x)", var, (e-s)/2);
> -    decision_tree((s+e)/2, e, var, leaf);
> -    pop_block();
> -    push_block("Else", NULL);
> -    decision_tree(s, (s+e)/2, var, leaf);
> -    pop_block();
> -}
> -
>  static struct option options[] = {
>      { "maxcpu", 1, 0, 'c' },
>      { "dm-version", 1, 0, 'q' },
> @@ -322,64 +300,21 @@ int main(int argc, char **argv)
>                     dev, intx, ((dev*4+dev/8+intx)&31)+16);
>      printf("})\n");
>  
> -    /*
> -     * Each PCI hotplug slot needs at least two methods to handle
> -     * the ACPI event:
> -     *  _EJ0: eject a device
> -     *  _STA: return a device's status, e.g. enabled or removed
> -     * 
> -     * Eject button would generate a general-purpose event, then the
> -     * control method for this event uses Notify() to inform OSPM which
> -     * action happened and on which device.
> -     *
> -     * Pls. refer "6.3 Device Insertion, Removal, and Status Objects"
> -     * in ACPI spec 3.0b for details.
> -     *
> -     * QEMU provides a simple hotplug controller with some I/O to handle
> -     * the hotplug action and status, which is beyond the ACPI scope.
> -     */
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        for ( slot = 0; slot < 0x100; slot++ )
> -        {
> -            push_block("Device", "S%02X", slot);
> -            /* _ADR == dev:fn (16:16) */
> -            stmt("Name", "_ADR, 0x%08x", ((slot & ~7) << 13) | (slot & 7));
> -            /* _SUN == dev */
> -            stmt("Name", "_SUN, 0x%08x", slot >> 3);
> -            push_block("Method", "_EJ0, 1");
> -            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
> -            stmt("Store", "0x88, \\_GPE.DPT2");
> -            stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
> -                 (slot & 1) ? 0x10 : 0x01, slot & ~1);
> -            pop_block();
> -            push_block("Method", "_STA, 0");
> -            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
> -            stmt("Store", "0x89, \\_GPE.DPT2");
> -            if ( slot & 1 )
> -                stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
> -            else
> -                stmt("And", "\\_GPE.PH%02X, 0x0f, Local1", slot & ~1);
> -            stmt("Return", "Local1"); /* IN status as the _STA */
> -            pop_block();
> -            pop_block();
> -        }
> -    } else {
> -        stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
> -        push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
> -        indent(); printf("B0EJ, 32,\n");
> -        pop_block();
> +    stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
> +    push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
> +    indent(); printf("B0EJ, 32,\n");
> +    pop_block();
>  
> -        /* hotplug_slot */
> -        for (slot = 1; slot <= 31; slot++) {
> -            push_block("Device", "S%i", slot); {
> -                stmt("Name", "_ADR, %#06x0000", slot);
> -                push_block("Method", "_EJ0,1"); {
> -                    stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
> -                    stmt("Return", "0x0");
> -                } pop_block();
> -                stmt("Name", "_SUN, %i", slot);
> +    /* hotplug_slot */
> +    for (slot = 1; slot <= 31; slot++) {
> +        push_block("Device", "S%i", slot); {
> +            stmt("Name", "_ADR, %#06x0000", slot);
> +            push_block("Method", "_EJ0,1"); {
> +                stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
> +                stmt("Return", "0x0");
>              } pop_block();
> -        }
> +            stmt("Name", "_SUN, %i", slot);
> +        } pop_block();
>      }
>  
>      pop_block();
> @@ -389,26 +324,11 @@ int main(int argc, char **argv)
>      /**** GPE start ****/
>      push_block("Scope", "\\_GPE");
>  
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        stmt("OperationRegion", "PHP, SystemIO, 0x10c0, 0x82");
> -
> -        push_block("Field", "PHP, ByteAcc, NoLock, Preserve");
> -        indent(); printf("PSTA, 8,\n"); /* hotplug controller event reg */
> -        indent(); printf("PSTB, 8,\n"); /* hotplug controller slot reg */
> -        for ( slot = 0; slot < 0x100; slot += 2 )
> -        {
> -            indent();
> -            /* Each hotplug control register manages a pair of pci functions. */
> -            printf("PH%02X, 8,\n", slot);
> -        }
> -        pop_block();
> -    } else {
> -        stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
> -        push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
> -        indent(); printf("PCIU, 32,\n");
> -        indent(); printf("PCID, 32,\n");
> -        pop_block();
> -    }
> +    stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
> +    push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
> +    indent(); printf("PCIU, 32,\n");
> +    indent(); printf("PCID, 32,\n");
> +    pop_block();
>  
>      stmt("OperationRegion", "DG1, SystemIO, 0xb044, 0x04");
>  
> @@ -416,33 +336,16 @@ int main(int argc, char **argv)
>      indent(); printf("DPT1, 8, DPT2, 8\n");
>      pop_block();
>  
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        push_block("Method", "_L03, 0, Serialized");
> -        /* Detect slot and event (remove/add). */
> -        stmt("Name", "SLT, 0x0");
> -        stmt("Name", "EVT, 0x0");
> -        stmt("Store", "PSTA, Local1");
> -        stmt("And", "Local1, 0xf, EVT");
> -        stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
> -        stmt("And", "Local1, 0xff, SLT");
> -        /* Debug */
> -        stmt("Store", "SLT, DPT1");
> -        stmt("Store", "EVT, DPT2");
> -        /* Decision tree */
> -        decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
> +    push_block("Method", "_E01");
> +    for (slot = 1; slot <= 31; slot++) {
> +        push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
> +        stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
>          pop_block();
> -    } else {
> -        push_block("Method", "_E01");
> -        for (slot = 1; slot <= 31; slot++) {
> -            push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
> -            stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
> -            pop_block();
> -            push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
> -            stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
> -            pop_block();
> -        }
> +        push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
> +        stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
>          pop_block();
>      }
> +    pop_block();
>  
>      pop_block();
>      /**** GPE end ****/
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c64d15a..c89068e 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -1421,6 +1421,52 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
>      return rc;
>  }
>  
> +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> +                              domid_t domid,
> +                              uint32_t slot)

Take 'enable' as a parameter and save having two almost identical functions?

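For what it's worth, a merged wrapper along those lines might look like the
sketch below. This is illustrative only: the hypercall plumbing
(do_xen_hypercall, the hypercall buffer macros) is replaced with plain
stand-ins so the example is self-contained, and the names are local to the
example rather than libxc's.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the libxc/public types -- hypothetical, for illustration. */
typedef uint16_t domid_t;
typedef struct {
    domid_t  domid;
    uint8_t  enable;
    uint32_t slot;
} xen_hvm_pci_hotplug_t;

/* Records what "the hypercall" saw, in place of do_xen_hypercall(). */
static xen_hvm_pci_hotplug_t last_arg;

static int mock_do_xen_hypercall(const xen_hvm_pci_hotplug_t *arg)
{
    last_arg = *arg;
    return 0;
}

/*
 * One function with 'enable' as a parameter, replacing the
 * xc_hvm_pci_hotplug_enable()/xc_hvm_pci_hotplug_disable() pair.
 */
static int xc_hvm_pci_hotplug(domid_t domid, uint32_t slot, int enable)
{
    xen_hvm_pci_hotplug_t arg;

    memset(&arg, 0, sizeof(arg));   /* mirrors the zeroed hypercall buffer */
    arg.domid = domid;
    arg.enable = !!enable;
    arg.slot = slot;

    return mock_do_xen_hypercall(&arg);
}
```

Callers then pass 1 or 0 at the two existing call sites, and the duplicated
buffer-allocation/teardown code exists only once.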
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_pci_hotplug;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->enable = 1;
> +    arg->slot = slot;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> +                               domid_t domid,
> +                               uint32_t slot)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_pci_hotplug;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->enable = 0;
> +    arg->slot = slot;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
>  int xc_domain_setdebugging(xc_interface *xch,
>                             uint32_t domid,
>                             unsigned int enable)
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 142aaea..c3e35a9 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1842,6 +1842,17 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
>  				domid_t domid,
>  				ioservid_t id);
>  
> +/*
> + * PCI hotplug API
> + */
> +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> +			      domid_t domid,
> +			      uint32_t slot);
> +
> +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> +			       domid_t domid,
> +			       uint32_t slot);
> +

Tabs/spaces: these new prototypes are indented with hard tabs, unlike the
surrounding code.

>  /* HVM guest pass-through */
>  int xc_assign_device(xc_interface *xch,
>                       uint32_t domid,
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index 2e52470..4176440 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
>          }
>          if ( rc )
>              return ERROR_FAIL;
> +
> +        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
> +        if (rc < 0) {
> +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
> +            return ERROR_FAIL;
> +        }
> +
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>      {
> @@ -1182,6 +1189,14 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
>                                           NULL, NULL, NULL) < 0)
>              goto out_fail;
>  
> +        rc = xc_hvm_pci_hotplug_disable(ctx->xch, domid, pcidev->dev);
> +        if (rc < 0) {
> +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
> +                             "Error: xc_hvm_pci_hotplug_disable failed");
> +            rc = ERROR_FAIL;
> +            goto out_fail;
> +        }
> +
>          switch (libxl__device_model_version_running(gc, domid)) {
>          case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>              rc = qemu_pci_remove_xenstore(gc, domid, pcidev, force);
> diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> index eea5555..48efddb 100644
> --- a/xen/arch/x86/hvm/Makefile
> +++ b/xen/arch/x86/hvm/Makefile
> @@ -3,6 +3,7 @@ subdir-y += vmx
>  
>  obj-y += asid.o
>  obj-y += emulate.o
> +obj-y += hotplug.o
>  obj-y += hpet.o
>  obj-y += hvm.o
>  obj-y += i8254.o
> diff --git a/xen/arch/x86/hvm/hotplug.c b/xen/arch/x86/hvm/hotplug.c
> new file mode 100644
> index 0000000..253d435
> --- /dev/null
> +++ b/xen/arch/x86/hvm/hotplug.c
> @@ -0,0 +1,207 @@
> +/*
> + * hvm/hotplug.c
> + *
> + * Copyright (c) 2013, Citrix Systems Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +
> +#include <xen/types.h>
> +#include <xen/spinlock.h>
> +#include <xen/xmalloc.h>
> +#include <asm/hvm/io.h>
> +#include <asm/hvm/support.h>
> +
> +#define SCI_IRQ 9
> +
> +#define GPE_BASE            (ACPI_GPE0_BLK_ADDRESS_V1)
> +#define GPE_LEN             (ACPI_GPE0_BLK_LEN_V1)
> +
> +#define GPE_PCI_HOTPLUG_STATUS  2
> +
> +#define PCI_HOTPLUG_BASE    (ACPI_PCI_HOTPLUG_ADDRESS_V1)
> +#define PCI_HOTPLUG_LEN     (ACPI_PCI_HOTPLUG_LEN_V1)
> +
> +#define PCI_UP      0
> +#define PCI_DOWN    4
> +#define PCI_EJECT   8
> +
> +static void gpe_update_sci(struct hvm_hotplug *hp)
> +{
> +    if ( (hp->gpe_sts[0] & hp->gpe_en[0]) & GPE_PCI_HOTPLUG_STATUS )
> +        hvm_isa_irq_assert(hp->domain, SCI_IRQ);
> +    else
> +        hvm_isa_irq_deassert(hp->domain, SCI_IRQ);
> +}
> +
> +static int handle_gpe_io(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    if ( bytes != 1 )
> +    {
> +        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
> +        goto done;
> +    }
> +
> +    port -= GPE_BASE;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        if ( port < GPE_LEN / 2 )
> +        {
> +            *val = hp->gpe_sts[port];
> +        }
> +        else
> +        {
> +            port -= GPE_LEN / 2;
> +            *val = hp->gpe_en[port];
> +        }
> +    } else {
> +        if ( port < GPE_LEN / 2 )
> +        {
> +            hp->gpe_sts[port] &= ~*val;
> +        }
> +        else
> +        {
> +            port -= GPE_LEN / 2;
> +            hp->gpe_en[port] = *val;
> +        }
> +
> +        gpe_update_sci(hp);
> +    }
> +
> +done:
> +    return X86EMUL_OKAY;
> +}
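
(An aside for readers following the register semantics here: the status half
is write-one-to-clear -- that is what the `gpe_sts[port] &= ~*val` above
implements -- while the enable half is a plain read/write register, and the
SCI is pending only while an enabled status bit is set. A minimal
self-contained sketch of that behaviour, with names local to the example:)

```c
#include <stdint.h>

/* One byte each of GPE status and enable, as in the 0x04-byte GPE0 block. */
static uint8_t gpe_sts, gpe_en;

/* Status register: writing 1 to a bit clears it (W1C). */
static void gpe_write_sts(uint8_t val) { gpe_sts &= ~val; }

/* Enable register: plain read/write. */
static void gpe_write_en(uint8_t val)  { gpe_en = val; }

/* The SCI line should be asserted only while an enabled status bit is set. */
static int sci_pending(void) { return (gpe_sts & gpe_en) != 0; }
```

So a guest acknowledges a GPE by writing the status bit back, which is why
the read path returns `gpe_sts[port]` unmodified but the write path masks it
off rather than assigning.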
> +
> +static void pci_hotplug_eject(struct hvm_hotplug *hp, uint32_t mask)
> +{
> +    int slot = ffs(mask) - 1;
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, slot);
> +
> +    hp->slot_down &= ~(1u  << slot);
> +    hp->slot_up &= ~(1u  << slot);
> +}
> +
> +static int handle_pci_hotplug_io(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    if ( bytes != 4 )
> +    {
> +        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
> +        goto done;
> +    }
> +
> +    port -= PCI_HOTPLUG_BASE;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        switch ( port )
> +        {
> +        case PCI_UP:
> +            *val = hp->slot_up;
> +            break;
> +        case PCI_DOWN:
> +            *val = hp->slot_down;
> +            break;
> +        default:
> +            break;
> +        }
> +    }
> +    else
> +    {   
> +        switch ( port )
> +        {
> +        case PCI_EJECT:
> +            pci_hotplug_eject(hp, *val);
> +            break;
> +        default:
> +            break;
> +        }
> +    }
> +
> +done:
> +    return X86EMUL_OKAY;
> +}
> +
> +void pci_hotplug(struct domain *d, int slot, bool_t enable)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    gdprintk(XENLOG_INFO, "%s: %s %d\n", __func__,
> +             ( enable ) ? "enable" : "disable", slot);
> +
> +    if ( enable )
> +        hp->slot_up |= (1u << slot);
> +    else
> +        hp->slot_down |= (1u << slot);
> +
> +    hp->gpe_sts[0] |= GPE_PCI_HOTPLUG_STATUS;
> +    gpe_update_sci(hp);
> +}
> +
> +int gpe_init(struct domain *d)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    hp->domain = d;
> +
> +    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);

This size is known at compile time. What about using fixed-size arrays
inside hvm_hotplug and forgoing these small memory allocations?

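Concretely, with the lengths used in this patch (GPE0 block length 0x04, so
two bytes each of status and enable), the fixed-array variant might look
like the sketch below, which makes gpe_init() infallible and removes
gpe_deinit()'s xfree() calls entirely. Struct and function names mirror the
patch but are local to this example:

```c
#include <stdint.h>
#include <string.h>

#define GPE_LEN 0x04    /* ACPI_GPE0_BLK_LEN_V1 in the patch */

/*
 * Fixed-size arrays instead of two xzalloc_array() calls: no allocation,
 * no -ENOMEM failure path, no teardown needed on domain destruction.
 */
struct hvm_hotplug {
    uint8_t  gpe_sts[GPE_LEN / 2];
    uint8_t  gpe_en[GPE_LEN / 2];
    uint32_t slot_up;
    uint32_t slot_down;
};

static void gpe_init(struct hvm_hotplug *hp)
{
    memset(hp, 0, sizeof(*hp));   /* replaces xzalloc_array()'s zeroing */
}
```

The cost is four bytes in struct hvm_domain instead of two pointers plus two
heap objects, which is a net saving.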
> +    if ( hp->gpe_sts == NULL )
> +        goto fail1;
> +
> +    hp->gpe_en = xzalloc_array(uint8_t, GPE_LEN / 2);
> +    if ( hp->gpe_en == NULL )
> +        goto fail2;
> +
> +    register_portio_handler(d, GPE_BASE, GPE_LEN, handle_gpe_io);
> +    register_portio_handler(d, PCI_HOTPLUG_BASE, PCI_HOTPLUG_LEN,
> +                            handle_pci_hotplug_io);
> +
> +    return 0;
> +
> +fail2:
> +    xfree(hp->gpe_sts);
> +
> +fail1:
> +    return -ENOMEM;
> +}
> +
> +void gpe_deinit(struct domain *d)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    xfree(hp->gpe_en);
> +    xfree(hp->gpe_sts);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * c-tab-always-indent: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5f9e728..ff7b259 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1298,15 +1298,21 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> +    rc = gpe_init(d);
> +    if ( rc != 0 )
> +        goto fail2;
> +
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>      register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail2;
> +        goto fail3;
>  
>      return 0;
>  
> + fail3:
> +    gpe_deinit(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -1352,6 +1358,7 @@ void hvm_domain_destroy(struct domain *d)
>          return;
>  
>      hvm_funcs.domain_destroy(d);
> +    gpe_deinit(d);
>      rtc_deinit(d);
>      stdvga_deinit(d);
>      vioapic_deinit(d);
> @@ -5015,6 +5022,32 @@ out:
>      return rc;
>  }
>  
> +static int hvmop_pci_hotplug(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_pci_hotplug_t) uop)
> +{
> +    xen_hvm_pci_hotplug_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    pci_hotplug(d, op.slot, op.enable);
> +    rc = 0;
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -5058,6 +5091,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
>          break;
>      
> +    case HVMOP_pci_hotplug:
> +        rc = hvmop_pci_hotplug(
> +            guest_handle_cast(arg, xen_hvm_pci_hotplug_t));
> +        break;
> +
>      case HVMOP_set_param:
>      case HVMOP_get_param:
>      {
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 93dcec1..13dd24d 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -66,6 +66,16 @@ struct hvm_ioreq_server {
>      struct hvm_pcidev      *pcidev_list;
>  };
>  
> +struct hvm_hotplug {
> +    struct domain   *domain;

This could instead be derived with container_of(), which would help keep
the size of struct domain down.

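For reference, the container_of() pattern being suggested, as a
self-contained demonstration (the struct names mirror the patch but this is
a cut-down local mock, not the hypervisor's definitions):

```c
#include <stddef.h>
#include <stdint.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Cut-down mirror of the patch's layout: hvm_hotplug embedded by value. */
struct hvm_domain {
    uint32_t pci_cf8;
    struct hvm_hotplug {
        uint8_t  gpe_sts[2];
        uint8_t  gpe_en[2];
        uint32_t slot_up;
        uint32_t slot_down;
    } hotplug;
};

/*
 * Recover the enclosing hvm_domain from a struct hvm_hotplug pointer,
 * removing the need to store a back-pointer in hvm_hotplug itself.
 */
static struct hvm_domain *hotplug_to_hvm_domain(struct hvm_hotplug *hp)
{
    return container_of(hp, struct hvm_domain, hotplug);
}
```

In the real code the chain would continue through the hvm_domain's enclosing
arch/domain structures, but the mechanism is the same: pointer arithmetic at
zero runtime storage cost.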
> +    uint8_t         *gpe_sts;
> +    uint8_t         *gpe_en;
> +
> +    /* PCI hotplug */
> +    uint32_t        slot_up;
> +    uint32_t        slot_down;
> +};
> +
>  struct hvm_domain {
>      struct list_head        ioreq_server_list;
>      spinlock_t              ioreq_server_lock;
> @@ -73,6 +83,8 @@ struct hvm_domain {
>      uint32_t                pci_cf8;
>      spinlock_t              pci_lock;
>  
> +    struct hvm_hotplug      hotplug;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
> index 86db58d..072bfe7 100644
> --- a/xen/include/asm-x86/hvm/io.h
> +++ b/xen/include/asm-x86/hvm/io.h
> @@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
>  void stdvga_deinit(struct domain *d);
>  
>  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> +
> +int gpe_init(struct domain *d);
> +void gpe_deinit(struct domain *d);
> +
> +void pci_hotplug(struct domain *d, int slot, bool_t enable);
> +
>  #endif /* __ASM_X86_HVM_IO_H__ */
>  
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index 6b31189..20a53ab 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
>  typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
>  
> +#define HVMOP_pci_hotplug 24
> +struct xen_hvm_pci_hotplug {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    uint8_t enable;         /* IN - enable or disable? */
> +    uint32_t slot;          /* IN - slot to enable/disable */

Reordering these two would avoid the internal padding byte in the structure.

~Andrew

> +};
> +typedef struct xen_hvm_pci_hotplug xen_hvm_pci_hotplug_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_pci_hotplug_t);
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
>  #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index e84fa75..40bfa61 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
>  #define ACPI_PM_TMR_BLK_ADDRESS_V1   (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
>  #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
>  #define ACPI_GPE0_BLK_LEN_V1         0x04
> +#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
> +#define ACPI_PCI_HOTPLUG_LEN_V1      0x10
>  
>  /* Compatibility definitions for the default location (version 0). */
>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:56:06 2014
Message-ID: <52EA760E.4010400@citrix.com>
Date: Thu, 30 Jan 2014 15:55:58 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Paul Durrant <paul.durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
 controller implementation into Xen

On 30/01/14 14:19, Paul Durrant wrote:
> Because we may now have more than one emulator, the implementation of the
> PCI hotplug controller needs to be done by Xen. Happily the code is very
> short and simple and it also removes the need for a different ACPI DSDT
> when using different variants of QEMU.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>  tools/firmware/hvmloader/acpi/mk_dsdt.c |  147 ++++------------------
>  tools/libxc/xc_domain.c                 |   46 +++++++
>  tools/libxc/xenctrl.h                   |   11 ++
>  tools/libxl/libxl_pci.c                 |   15 +++
>  xen/arch/x86/hvm/Makefile               |    1 +
>  xen/arch/x86/hvm/hotplug.c              |  207 +++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c                  |   40 +++++-
>  xen/include/asm-x86/hvm/domain.h        |   12 ++
>  xen/include/asm-x86/hvm/io.h            |    6 +
>  xen/include/public/hvm/hvm_op.h         |    9 ++
>  xen/include/public/hvm/ioreq.h          |    2 +
>  11 files changed, 373 insertions(+), 123 deletions(-)
>  create mode 100644 xen/arch/x86/hvm/hotplug.c
>
> diff --git a/tools/firmware/hvmloader/acpi/mk_dsdt.c b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> index a4b693b..6408b44 100644
> --- a/tools/firmware/hvmloader/acpi/mk_dsdt.c
> +++ b/tools/firmware/hvmloader/acpi/mk_dsdt.c
> @@ -58,28 +58,6 @@ static void pop_block(void)
>      printf("}\n");
>  }
>  
> -static void pci_hotplug_notify(unsigned int slt)
> -{
> -    stmt("Notify", "\\_SB.PCI0.S%02X, EVT", slt);
> -}
> -
> -static void decision_tree(
> -    unsigned int s, unsigned int e, char *var, void (*leaf)(unsigned int))
> -{
> -    if ( s == (e-1) )
> -    {
> -        (*leaf)(s);
> -        return;
> -    }
> -
> -    push_block("If", "And(%s, 0x%02x)", var, (e-s)/2);
> -    decision_tree((s+e)/2, e, var, leaf);
> -    pop_block();
> -    push_block("Else", NULL);
> -    decision_tree(s, (s+e)/2, var, leaf);
> -    pop_block();
> -}
> -
>  static struct option options[] = {
>      { "maxcpu", 1, 0, 'c' },
>      { "dm-version", 1, 0, 'q' },
> @@ -322,64 +300,21 @@ int main(int argc, char **argv)
>                     dev, intx, ((dev*4+dev/8+intx)&31)+16);
>      printf("})\n");
>  
> -    /*
> -     * Each PCI hotplug slot needs at least two methods to handle
> -     * the ACPI event:
> -     *  _EJ0: eject a device
> -     *  _STA: return a device's status, e.g. enabled or removed
> -     * 
> -     * Eject button would generate a general-purpose event, then the
> -     * control method for this event uses Notify() to inform OSPM which
> -     * action happened and on which device.
> -     *
> -     * Pls. refer "6.3 Device Insertion, Removal, and Status Objects"
> -     * in ACPI spec 3.0b for details.
> -     *
> -     * QEMU provides a simple hotplug controller with some I/O to handle
> -     * the hotplug action and status, which is beyond the ACPI scope.
> -     */
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        for ( slot = 0; slot < 0x100; slot++ )
> -        {
> -            push_block("Device", "S%02X", slot);
> -            /* _ADR == dev:fn (16:16) */
> -            stmt("Name", "_ADR, 0x%08x", ((slot & ~7) << 13) | (slot & 7));
> -            /* _SUN == dev */
> -            stmt("Name", "_SUN, 0x%08x", slot >> 3);
> -            push_block("Method", "_EJ0, 1");
> -            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
> -            stmt("Store", "0x88, \\_GPE.DPT2");
> -            stmt("Store", "0x%02x, \\_GPE.PH%02X", /* eject */
> -                 (slot & 1) ? 0x10 : 0x01, slot & ~1);
> -            pop_block();
> -            push_block("Method", "_STA, 0");
> -            stmt("Store", "0x%02x, \\_GPE.DPT1", slot);
> -            stmt("Store", "0x89, \\_GPE.DPT2");
> -            if ( slot & 1 )
> -                stmt("ShiftRight", "0x4, \\_GPE.PH%02X, Local1", slot & ~1);
> -            else
> -                stmt("And", "\\_GPE.PH%02X, 0x0f, Local1", slot & ~1);
> -            stmt("Return", "Local1"); /* IN status as the _STA */
> -            pop_block();
> -            pop_block();
> -        }
> -    } else {
> -        stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
> -        push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
> -        indent(); printf("B0EJ, 32,\n");
> -        pop_block();
> +    stmt("OperationRegion", "SEJ, SystemIO, 0xae08, 0x04");
> +    push_block("Field", "SEJ, DWordAcc, NoLock, WriteAsZeros");
> +    indent(); printf("B0EJ, 32,\n");
> +    pop_block();
>  
> -        /* hotplug_slot */
> -        for (slot = 1; slot <= 31; slot++) {
> -            push_block("Device", "S%i", slot); {
> -                stmt("Name", "_ADR, %#06x0000", slot);
> -                push_block("Method", "_EJ0,1"); {
> -                    stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
> -                    stmt("Return", "0x0");
> -                } pop_block();
> -                stmt("Name", "_SUN, %i", slot);
> +    /* hotplug_slot */
> +    for (slot = 1; slot <= 31; slot++) {
> +        push_block("Device", "S%i", slot); {
> +            stmt("Name", "_ADR, %#06x0000", slot);
> +            push_block("Method", "_EJ0,1"); {
> +                stmt("Store", "ShiftLeft(1, %#06x), B0EJ", slot);
> +                stmt("Return", "0x0");
>              } pop_block();
> -        }
> +            stmt("Name", "_SUN, %i", slot);
> +        } pop_block();
>      }
>  
>      pop_block();
> @@ -389,26 +324,11 @@ int main(int argc, char **argv)
>      /**** GPE start ****/
>      push_block("Scope", "\\_GPE");
>  
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        stmt("OperationRegion", "PHP, SystemIO, 0x10c0, 0x82");
> -
> -        push_block("Field", "PHP, ByteAcc, NoLock, Preserve");
> -        indent(); printf("PSTA, 8,\n"); /* hotplug controller event reg */
> -        indent(); printf("PSTB, 8,\n"); /* hotplug controller slot reg */
> -        for ( slot = 0; slot < 0x100; slot += 2 )
> -        {
> -            indent();
> -            /* Each hotplug control register manages a pair of pci functions. */
> -            printf("PH%02X, 8,\n", slot);
> -        }
> -        pop_block();
> -    } else {
> -        stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
> -        push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
> -        indent(); printf("PCIU, 32,\n");
> -        indent(); printf("PCID, 32,\n");
> -        pop_block();
> -    }
> +    stmt("OperationRegion", "PCST, SystemIO, 0xae00, 0x08");
> +    push_block("Field", "PCST, DWordAcc, NoLock, WriteAsZeros");
> +    indent(); printf("PCIU, 32,\n");
> +    indent(); printf("PCID, 32,\n");
> +    pop_block();
>  
>      stmt("OperationRegion", "DG1, SystemIO, 0xb044, 0x04");
>  
> @@ -416,33 +336,16 @@ int main(int argc, char **argv)
>      indent(); printf("DPT1, 8, DPT2, 8\n");
>      pop_block();
>  
> -    if (dm_version == QEMU_XEN_TRADITIONAL) {
> -        push_block("Method", "_L03, 0, Serialized");
> -        /* Detect slot and event (remove/add). */
> -        stmt("Name", "SLT, 0x0");
> -        stmt("Name", "EVT, 0x0");
> -        stmt("Store", "PSTA, Local1");
> -        stmt("And", "Local1, 0xf, EVT");
> -        stmt("Store", "PSTB, Local1"); /* XXX: Store (PSTB, SLT) ? */
> -        stmt("And", "Local1, 0xff, SLT");
> -        /* Debug */
> -        stmt("Store", "SLT, DPT1");
> -        stmt("Store", "EVT, DPT2");
> -        /* Decision tree */
> -        decision_tree(0x00, 0x100, "SLT", pci_hotplug_notify);
> +    push_block("Method", "_E01");
> +    for (slot = 1; slot <= 31; slot++) {
> +        push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
> +        stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
>          pop_block();
> -    } else {
> -        push_block("Method", "_E01");
> -        for (slot = 1; slot <= 31; slot++) {
> -            push_block("If", "And(PCIU, ShiftLeft(1, %i))", slot);
> -            stmt("Notify", "\\_SB.PCI0.S%i, 1", slot);
> -            pop_block();
> -            push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
> -            stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
> -            pop_block();
> -        }
> +        push_block("If", "And(PCID, ShiftLeft(1, %i))", slot);
> +        stmt("Notify", "\\_SB.PCI0.S%i, 3", slot);
>          pop_block();
>      }
> +    pop_block();
>  
>      pop_block();
>      /**** GPE end ****/
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c64d15a..c89068e 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -1421,6 +1421,52 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
>      return rc;
>  }
>  
> +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> +                              domid_t domid,
> +                              uint32_t slot)

Take 'enable' as a parameter and save having two almost identical functions?

> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_pci_hotplug;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->enable = 1;
> +    arg->slot = slot;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
> +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> +                               domid_t domid,
> +                               uint32_t slot)
> +{
> +    DECLARE_HYPERCALL;
> +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> +    int rc;
> +
> +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> +    if ( arg == NULL )
> +        return -1;
> +
> +    hypercall.op     = __HYPERVISOR_hvm_op;
> +    hypercall.arg[0] = HVMOP_pci_hotplug;
> +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> +    arg->domid = domid;
> +    arg->enable = 0;
> +    arg->slot = slot;
> +    rc = do_xen_hypercall(xch, &hypercall);
> +    xc_hypercall_buffer_free(xch, arg);
> +    return rc;
> +}
> +
>  int xc_domain_setdebugging(xc_interface *xch,
>                             uint32_t domid,
>                             unsigned int enable)
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 142aaea..c3e35a9 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1842,6 +1842,17 @@ int xc_hvm_destroy_ioreq_server(xc_interface *xch,
>  				domid_t domid,
>  				ioservid_t id);
>  
> +/*
> + * PCI hotplug API
> + */
> +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> +			      domid_t domid,
> +			      uint32_t slot);
> +
> +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> +			       domid_t domid,
> +			       uint32_t slot);
> +

Tabs/spaces: these new prototypes are indented with hard tabs, unlike the
surrounding code.

>  /* HVM guest pass-through */
>  int xc_assign_device(xc_interface *xch,
>                       uint32_t domid,
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> index 2e52470..4176440 100644
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
>          }
>          if ( rc )
>              return ERROR_FAIL;
> +
> +        rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
> +        if (rc < 0) {
> +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
> +            return ERROR_FAIL;
> +        }
> +
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>      {
> @@ -1182,6 +1189,14 @@ static int do_pci_remove(libxl__gc *gc, uint32_t domid,
>                                           NULL, NULL, NULL) < 0)
>              goto out_fail;
>  
> +        rc = xc_hvm_pci_hotplug_disable(ctx->xch, domid, pcidev->dev);
> +        if (rc < 0) {
> +            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
> +                             "Error: xc_hvm_pci_hotplug_disable failed");
> +            rc = ERROR_FAIL;
> +            goto out_fail;
> +        }
> +
>          switch (libxl__device_model_version_running(gc, domid)) {
>          case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>              rc = qemu_pci_remove_xenstore(gc, domid, pcidev, force);
> diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> index eea5555..48efddb 100644
> --- a/xen/arch/x86/hvm/Makefile
> +++ b/xen/arch/x86/hvm/Makefile
> @@ -3,6 +3,7 @@ subdir-y += vmx
>  
>  obj-y += asid.o
>  obj-y += emulate.o
> +obj-y += hotplug.o
>  obj-y += hpet.o
>  obj-y += hvm.o
>  obj-y += i8254.o
> diff --git a/xen/arch/x86/hvm/hotplug.c b/xen/arch/x86/hvm/hotplug.c
> new file mode 100644
> index 0000000..253d435
> --- /dev/null
> +++ b/xen/arch/x86/hvm/hotplug.c
> @@ -0,0 +1,207 @@
> +/*
> + * hvm/hotplug.c
> + *
> + * Copyright (c) 2013, Citrix Systems Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +
> +#include <xen/types.h>
> +#include <xen/spinlock.h>
> +#include <xen/xmalloc.h>
> +#include <asm/hvm/io.h>
> +#include <asm/hvm/support.h>
> +
> +#define SCI_IRQ 9
> +
> +#define GPE_BASE            (ACPI_GPE0_BLK_ADDRESS_V1)
> +#define GPE_LEN             (ACPI_GPE0_BLK_LEN_V1)
> +
> +#define GPE_PCI_HOTPLUG_STATUS  2
> +
> +#define PCI_HOTPLUG_BASE    (ACPI_PCI_HOTPLUG_ADDRESS_V1)
> +#define PCI_HOTPLUG_LEN     (ACPI_PCI_HOTPLUG_LEN_V1)
> +
> +#define PCI_UP      0
> +#define PCI_DOWN    4
> +#define PCI_EJECT   8
> +
> +static void gpe_update_sci(struct hvm_hotplug *hp)
> +{
> +    if ( (hp->gpe_sts[0] & hp->gpe_en[0]) & GPE_PCI_HOTPLUG_STATUS )
> +        hvm_isa_irq_assert(hp->domain, SCI_IRQ);
> +    else
> +        hvm_isa_irq_deassert(hp->domain, SCI_IRQ);
> +}
> +
> +static int handle_gpe_io(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    if ( bytes != 1 )
> +    {
> +        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
> +        goto done;
> +    }
> +
> +    port -= GPE_BASE;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        if ( port < GPE_LEN / 2 )
> +        {
> +            *val = hp->gpe_sts[port];
> +        }
> +        else
> +        {
> +            port -= GPE_LEN / 2;
> +            *val = hp->gpe_en[port];
> +        }
> +    }
> +    else
> +    {
> +        if ( port < GPE_LEN / 2 )
> +        {
> +            hp->gpe_sts[port] &= ~*val;
> +        }
> +        else
> +        {
> +            port -= GPE_LEN / 2;
> +            hp->gpe_en[port] = *val;
> +        }
> +
> +        gpe_update_sci(hp);
> +    }
> +
> +done:
> +    return X86EMUL_OKAY;
> +}
> +
> +static void pci_hotplug_eject(struct hvm_hotplug *hp, uint32_t mask)
> +{
> +    int slot = ffs(mask) - 1;
> +
> +    gdprintk(XENLOG_INFO, "%s: %d\n", __func__, slot);
> +
> +    hp->slot_down &= ~(1u << slot);
> +    hp->slot_up &= ~(1u << slot);
> +}
> +
> +static int handle_pci_hotplug_io(
> +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    if ( bytes != 4 )
> +    {
> +        gdprintk(XENLOG_WARNING, "%s: bad access\n", __func__);
> +        goto done;
> +    }
> +
> +    port -= PCI_HOTPLUG_BASE;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        switch ( port )
> +        {
> +        case PCI_UP:
> +            *val = hp->slot_up;
> +            break;
> +        case PCI_DOWN:
> +            *val = hp->slot_down;
> +            break;
> +        default:
> +            break;
> +        }
> +    }
> +    else
> +    {
> +        switch ( port )
> +        {
> +        case PCI_EJECT:
> +            pci_hotplug_eject(hp, *val);
> +            break;
> +        default:
> +            break;
> +        }
> +    }
> +
> +done:
> +    return X86EMUL_OKAY;
> +}
> +
> +void pci_hotplug(struct domain *d, int slot, bool_t enable)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    gdprintk(XENLOG_INFO, "%s: %s %d\n", __func__,
> +             ( enable ) ? "enable" : "disable", slot);
> +
> +    if ( enable )
> +        hp->slot_up |= (1u << slot);
> +    else
> +        hp->slot_down |= (1u << slot);
> +
> +    hp->gpe_sts[0] |= GPE_PCI_HOTPLUG_STATUS;
> +    gpe_update_sci(hp);
> +}
> +
> +int gpe_init(struct domain *d)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    hp->domain = d;
> +
> +    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);

This size is known at compile time - what about using fixed-size arrays
inside struct hvm_hotplug and forgoing the small memory allocations?

> +    if ( hp->gpe_sts == NULL )
> +        goto fail1;
> +
> +    hp->gpe_en = xzalloc_array(uint8_t, GPE_LEN / 2);
> +    if ( hp->gpe_en == NULL )
> +        goto fail2;
> +
> +    register_portio_handler(d, GPE_BASE, GPE_LEN, handle_gpe_io);
> +    register_portio_handler(d, PCI_HOTPLUG_BASE, PCI_HOTPLUG_LEN,
> +                            handle_pci_hotplug_io);
> +
> +    return 0;
> +
> +fail2:
> +    xfree(hp->gpe_sts);
> +
> +fail1:
> +    return -ENOMEM;
> +}
> +
> +void gpe_deinit(struct domain *d)
> +{
> +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> +
> +    xfree(hp->gpe_en);
> +    xfree(hp->gpe_sts);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * c-tab-always-indent: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5f9e728..ff7b259 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1298,15 +1298,21 @@ int hvm_domain_initialise(struct domain *d)
>  
>      rtc_init(d);
>  
> +    rc = gpe_init(d);
> +    if ( rc != 0 )
> +        goto fail2;
> +
>      register_portio_handler(d, 0xe9, 1, hvm_print_line);
>      register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>  
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail2;
> +        goto fail3;
>  
>      return 0;
>  
> + fail3:
> +    gpe_deinit(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -1352,6 +1358,7 @@ void hvm_domain_destroy(struct domain *d)
>          return;
>  
>      hvm_funcs.domain_destroy(d);
> +    gpe_deinit(d);
>      rtc_deinit(d);
>      stdvga_deinit(d);
>      vioapic_deinit(d);
> @@ -5015,6 +5022,32 @@ out:
>      return rc;
>  }
>  
> +static int hvmop_pci_hotplug(
> +    XEN_GUEST_HANDLE_PARAM(xen_hvm_pci_hotplug_t) uop)
> +{
> +    xen_hvm_pci_hotplug_t op;
> +    struct domain *d;
> +    int rc;
> +
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
> +    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = -EINVAL;
> +    if ( !is_hvm_domain(d) )
> +        goto out;
> +
> +    pci_hotplug(d, op.slot, op.enable);
> +    rc = 0;
> +
> +out:
> +    rcu_unlock_domain(d);
> +    return rc;
> +}
> +
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>  {
> @@ -5058,6 +5091,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
>          break;
>      
> +    case HVMOP_pci_hotplug:
> +        rc = hvmop_pci_hotplug(
> +            guest_handle_cast(arg, xen_hvm_pci_hotplug_t));
> +        break;
> +
>      case HVMOP_set_param:
>      case HVMOP_get_param:
>      {
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 93dcec1..13dd24d 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -66,6 +66,16 @@ struct hvm_ioreq_server {
>      struct hvm_pcidev      *pcidev_list;
>  };
>  
> +struct hvm_hotplug {
> +    struct domain   *domain;

This could instead be obtained with container_of(), which would help keep
the size of struct domain down.

> +    uint8_t         *gpe_sts;
> +    uint8_t         *gpe_en;
> +
> +    /* PCI hotplug */
> +    uint32_t        slot_up;
> +    uint32_t        slot_down;
> +};
> +
>  struct hvm_domain {
>      struct list_head        ioreq_server_list;
>      spinlock_t              ioreq_server_lock;
> @@ -73,6 +83,8 @@ struct hvm_domain {
>      uint32_t                pci_cf8;
>      spinlock_t              pci_lock;
>  
> +    struct hvm_hotplug      hotplug;
> +
>      struct pl_time         pl_time;
>  
>      struct hvm_io_handler *io_handler;
> diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
> index 86db58d..072bfe7 100644
> --- a/xen/include/asm-x86/hvm/io.h
> +++ b/xen/include/asm-x86/hvm/io.h
> @@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
>  void stdvga_deinit(struct domain *d);
>  
>  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> +
> +int gpe_init(struct domain *d);
> +void gpe_deinit(struct domain *d);
> +
> +void pci_hotplug(struct domain *d, int slot, bool_t enable);
> +
>  #endif /* __ASM_X86_HVM_IO_H__ */
>  
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index 6b31189..20a53ab 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
>  typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
>  
> +#define HVMOP_pci_hotplug 24
> +struct xen_hvm_pci_hotplug {
> +    domid_t domid;          /* IN - domain to be serviced */
> +    uint8_t enable;         /* IN - enable or disable? */
> +    uint32_t slot;          /* IN - slot to enable/disable */

Reordering these two will make the structure smaller.

~Andrew

> +};
> +typedef struct xen_hvm_pci_hotplug xen_hvm_pci_hotplug_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_hvm_pci_hotplug_t);
> +
>  #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>  
>  #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */
> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
> index e84fa75..40bfa61 100644
> --- a/xen/include/public/hvm/ioreq.h
> +++ b/xen/include/public/hvm/ioreq.h
> @@ -101,6 +101,8 @@ typedef struct buffered_iopage buffered_iopage_t;
>  #define ACPI_PM_TMR_BLK_ADDRESS_V1   (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x08)
>  #define ACPI_GPE0_BLK_ADDRESS_V1     0xafe0
>  #define ACPI_GPE0_BLK_LEN_V1         0x04
> +#define ACPI_PCI_HOTPLUG_ADDRESS_V1  0xae00
> +#define ACPI_PCI_HOTPLUG_LEN_V1      0x10
>  
>  /* Compatibility definitions for the default location (version 0). */
>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 15:56:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ty6-0000Tc-1h; Thu, 30 Jan 2014 15:55:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1W8ty4-0000TV-MO
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:55:52 +0000
Received: from [193.109.254.147:51449] by server-4.bemta-14.messagelabs.com id
	AF/BE-32066-8067AE25; Thu, 30 Jan 2014 15:55:52 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391097350!941148!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29848 invoked from network); 30 Jan 2014 15:55:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:55:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96178943"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:55:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 10:55:49 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1W8ty1-0007yg-2a; Thu, 30 Jan 2014 15:55:49 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1W8ty0-0001ue-S4; Thu, 30 Jan 2014
	15:55:48 +0000
Date: Thu, 30 Jan 2014 15:55:48 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: <xen-devel@lists.xen.org>
Message-ID: <20140130155536.GA7250@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v3] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
instead of linux and initrd. Due to this, a RHEL 7 (beta) guest failed
to boot after installation.

In addition, menuentry now takes options
(--class red, --class gnu, etc.).
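
The non-greedy `(.*?)` in the GrubConf.py hunk below matters precisely because
of those options: with the old greedy `(.*)`, a menuentry line containing a
second quoted string (the menuentry id) has its title capture run past the
first closing quote. A quick illustration (the sample line is abbreviated, not
the exact RHEL 7 entry):

```python
import re

line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-advanced' {")

greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy capture runs past the title, up to the last quote on the line.
print(greedy.group(1))
# Non-greedy capture stops at the first closing quote: the title alone.
print(lazy.group(1))
```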

Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

v2: Added RHEL 7 grub.cfg in pygrub/examples
v3: Tidied the commit message.

Kindly consider this patch for xen-4.4 as RHEL 7 (beta) fails
to boot on Xen.
---
 tools/pygrub/examples/rhel-7-beta.grub2 |  118 +++++++++++++++++++++++++++++++
 tools/pygrub/src/GrubConf.py            |    4 +-
 2 files changed, 121 insertions(+), 1 deletion(-)
 create mode 100644 tools/pygrub/examples/rhel-7-beta.grub2

diff --git a/tools/pygrub/examples/rhel-7-beta.grub2 b/tools/pygrub/examples/rhel-7-beta.grub2
new file mode 100644
index 0000000..88f0f99
--- /dev/null
+++ b/tools/pygrub/examples/rhel-7-beta.grub2
@@ -0,0 +1,118 @@
+#
+# DO NOT EDIT THIS FILE
+#
+# It is automatically generated by grub2-mkconfig using templates
+# from /etc/grub.d and settings from /etc/default/grub
+#
+
+### BEGIN /etc/grub.d/00_header ###
+set pager=1
+
+if [ -s $prefix/grubenv ]; then
+  load_env
+fi
+if [ "${next_entry}" ] ; then
+   set default="${next_entry}"
+   set next_entry=
+   save_env next_entry
+   set boot_once=true
+else
+   set default="${saved_entry}"
+fi
+
+if [ x"${feature_menuentry_id}" = xy ]; then
+  menuentry_id_option="--id"
+else
+  menuentry_id_option=""
+fi
+
+export menuentry_id_option
+
+if [ "${prev_saved_entry}" ]; then
+  set saved_entry="${prev_saved_entry}"
+  save_env saved_entry
+  set prev_saved_entry=
+  save_env prev_saved_entry
+  set boot_once=true
+fi
+
+function savedefault {
+  if [ -z "${boot_once}" ]; then
+    saved_entry="${chosen}"
+    save_env saved_entry
+  fi
+}
+
+function load_video {
+  if [ x$feature_all_video_module = xy ]; then
+    insmod all_video
+  else
+    insmod efi_gop
+    insmod efi_uga
+    insmod ieee1275_fb
+    insmod vbe
+    insmod vga
+    insmod video_bochs
+    insmod video_cirrus
+  fi
+}
+
+terminal_output console
+set timeout=5
+### END /etc/grub.d/00_header ###
+
+### BEGIN /etc/grub.d/10_linux ###
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	set gfxpayload=keep
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
+	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
+}
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
+	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
+}
+
+### END /etc/grub.d/10_linux ###
+
+### BEGIN /etc/grub.d/20_linux_xen ###
+### END /etc/grub.d/20_linux_xen ###
+
+### BEGIN /etc/grub.d/20_ppc_terminfo ###
+### END /etc/grub.d/20_ppc_terminfo ###
+
+### BEGIN /etc/grub.d/30_os-prober ###
+### END /etc/grub.d/30_os-prober ###
+
+### BEGIN /etc/grub.d/40_custom ###
+# This file provides an easy way to add custom menu entries.  Simply type the
+# menu entries you want to add after this comment.  Be careful not to change
+# the 'exec tail' line above.
+### END /etc/grub.d/40_custom ###
+
+### BEGIN /etc/grub.d/41_custom ###
+if [ -f  ${config_directory}/custom.cfg ]; then
+  source ${config_directory}/custom.cfg
+elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
+  source $prefix/custom.cfg;
+fi
+### END /etc/grub.d/41_custom ###
diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
index cb853c9..974cded 100644
--- a/tools/pygrub/src/GrubConf.py
+++ b/tools/pygrub/src/GrubConf.py
@@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
                 
     commands = {'set:root': 'root',
                 'linux': 'kernel',
+                'linux16': 'kernel',
                 'initrd': 'initrd',
+                'initrd16': 'initrd',
                 'echo': None,
                 'insmod': None,
                 'search': None}
@@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
                 continue
 
             # new image
-            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
+            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
             if title_match:
                 if img is not None:
                     raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Jan 30 15:57:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 15:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8tzS-0000dl-Fb; Thu, 30 Jan 2014 15:57:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8tzN-0000dB-TL
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 15:57:17 +0000
Received: from [193.109.254.147:19421] by server-9.bemta-14.messagelabs.com id
	DF/1B-24895-9567AE25; Thu, 30 Jan 2014 15:57:13 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391097430!940761!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17324 invoked from network); 30 Jan 2014 15:57:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 15:57:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96179562"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 15:57:10 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 10:57:10 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 16:56:47 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for
	multiple servers
Thread-Index: AQHPHcZa+PQmQQp8pU2tOslIFzmVTZqdWK8AgAARecA=
Date: Thu, 30 Jan 2014 15:56:47 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0217A2C@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-5-git-send-email-paul.durrant@citrix.com>
	<52EA73EE.90103@citrix.com>
In-Reply-To: <52EA73EE.90103@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 4/5] ioreq-server: add support for
 multiple servers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
[snip]
> > +
> > +    if ( max_emulators < 1 )
> > +        goto error_out;
> 
> Is there a sane upper bound for emulators?
> 

I imagine there probably needs to be. I haven't worked it out yet, but no doubt it will be the point at which the special pages start to run into something else.
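To make the bound concrete: each emulator consumes two extra ioreq special pages (synchronous + buffered) on top of the fixed set, so a cap on max_emulators falls out of whatever page budget the special region has. The page counts below are illustrative assumptions, not the real libxc layout:

```c
#include <assert.h>

/* Assumed, for illustration only: six fixed special pages (paging,
 * access, sharing, xenstore, ident_pt, console) plus two ioreq pages
 * per emulator, all fitting in an assumed 64-page special region. */
#define NR_FIXED_SPECIAL_PAGES 6
#define NR_SPECIAL_PAGES(n)    (NR_FIXED_SPECIAL_PAGES + 2 * (n))
#define SPECIAL_REGION_PAGES   64

static int max_emulators_bound(void)
{
    /* Largest n for which NR_SPECIAL_PAGES(n) still fits the region. */
    return (SPECIAL_REGION_PAGES - NR_FIXED_SPECIAL_PAGES) / 2;
}
```

With these assumed numbers, one emulator needs 8 special pages and the region tops out at 29 emulators; the real bound would depend on what actually sits below the special-page area in the guest layout.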

> >
> >      if ( nr_pages > target_pages )
> >          pod_mode = XENMEMF_populate_on_demand;
> > @@ -458,7 +463,8 @@ static int setup_guest(xc_interface *xch,
> >                xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
> >                HVM_INFO_PFN)) == NULL )
> >          goto error_out;
> > -    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
> > +    build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size,
> > +                   max_emulators);
> >      munmap(hvm_info_page, PAGE_SIZE);
> >
> >      /* Allocate and clear special pages. */
> > @@ -470,17 +476,18 @@ static int setup_guest(xc_interface *xch,
> >              "  STORE:     %"PRI_xen_pfn"\n"
> >              "  IDENT_PT:  %"PRI_xen_pfn"\n"
> >              "  CONSOLE:   %"PRI_xen_pfn"\n"
> > -            "  IOREQ:     %"PRI_xen_pfn"\n",
> > -            NR_SPECIAL_PAGES,
> > +            "  IOREQ(%02d): %"PRI_xen_pfn"\n",
> > +            NR_SPECIAL_PAGES(max_emulators),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_PAGING),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_ACCESS),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_SHARING),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_XENSTORE),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_IDENT_PT),
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_CONSOLE),
> > +            max_emulators * 2,
> >              (xen_pfn_t)special_pfn(SPECIALPAGE_IOREQ));
> >
> > -    for ( i = 0; i < NR_SPECIAL_PAGES; i++ )
> > +    for ( i = 0; i < NR_SPECIAL_PAGES(max_emulators); i++ )
> >      {
> >          xen_pfn_t pfn = special_pfn(i);
> >          rc = xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &pfn);
> > @@ -506,7 +513,9 @@ static int setup_guest(xc_interface *xch,
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
> >                       special_pfn(SPECIALPAGE_IOREQ));
> >      xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
> > -                     special_pfn(SPECIALPAGE_IOREQ) - 1);
> > +                     special_pfn(SPECIALPAGE_IOREQ) - max_emulators);
> > +    xc_set_hvm_param(xch, dom, HVM_PARAM_NR_IOREQ_SERVERS,
> > +                     max_emulators);
> >
> >      /*
> >       * Identity-map page table is required for running with CR0.PG=0 when
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index 13f816b..142aaea 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -1801,6 +1801,47 @@ void xc_clear_last_error(xc_interface *xch);
> >  int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param,
> unsigned long value);
> >  int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param,
> unsigned long *value);
> >
> > +/*
> > + * IOREQ server API
> > + */
> > +int xc_hvm_create_ioreq_server(xc_interface *xch,
> > +			       domid_t domid,
> > +			       ioservid_t *id);
> > +
> > +int xc_hvm_get_ioreq_server_info(xc_interface *xch,
> > +				 domid_t domid,
> > +				 ioservid_t id,
> > +				 xen_pfn_t *pfn,
> > +				 xen_pfn_t *buf_pfn,
> > +				 evtchn_port_t *buf_port);
> > +
> > +int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch,
> > +					domid_t domid,
> > +                                        ioservid_t id,
> > +					int is_mmio,
> > +                                        uint64_t start,
> > +					uint64_t end);
> > +
> > +int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch,
> > +					    domid_t domid,
> > +                                            ioservid_t id,
> > +					    int is_mmio,
> > +                                            uint64_t start);
> > +
> > +int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch,
> > +				      domid_t domid,
> > +                                      ioservid_t id,
> > +				      uint16_t bdf);
> > +
> > +int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch,
> > +					  domid_t domid,
> > +					  ioservid_t id,
> > +					  uint16_t bdf);
> > +
> > +int xc_hvm_destroy_ioreq_server(xc_interface *xch,
> > +				domid_t domid,
> > +				ioservid_t id);
> > +
> 
> There are tab/space issues in this hunk.
> 

So there are. Probably some missing emacs boilerplate.

[snip]
> > +            case HVM_PARAM_NR_IOREQ_SERVERS:
> > +                if ( d == current->domain )
> > +                    rc = -EPERM;
> > +                break;
> 
> Is this correct? Security-wise, it should be restricted more.
> 
> Having said that, I can't see anything good to come from being able to
> change this value on the fly.  Is it possible to make it a domain creation
> parameter?
> 

I don't know. Maybe we can have one-time settable params? The other 'legacy' ioreq params seem quite insecure too.
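One way to read "one-time settable" is write-once semantics: record that the value has been written and refuse any later write, rather than gating on who the caller is. A minimal sketch, with illustrative names that are not the actual Xen param plumbing:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical write-once parameter holder. */
struct once_param {
    uint64_t value;
    int      written;
};

static int set_param_once(struct once_param *p, uint64_t value)
{
    if ( p->written )
        return -EPERM;   /* immutable after the first successful write */
    p->value = value;
    p->written = 1;
    return 0;
}
```

The first write succeeds; every later write fails with -EPERM regardless of caller, which would also close the on-the-fly-change hole Andrew points out.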

> >              }
> >
> >              if ( rc == 0 )
> > @@ -4483,7 +5251,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >              case HVM_PARAM_BUFIOREQ_PFN:
> >              case HVM_PARAM_BUFIOREQ_EVTCHN:
> >                  /* May need to create server */
> > -                rc = hvm_create_ioreq_server(d, curr_d->domain_id);
> > +                rc = hvm_create_ioreq_server(d, 0, curr_d->domain_id);
> >                  if ( rc != 0 && rc != -EEXIST )
> >                      goto param_fail;
> >
> > @@ -4492,7 +5260,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  case HVM_PARAM_IOREQ_PFN: {
> >                      xen_pfn_t pfn;
> >
> > -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, &pfn)) < 0 )
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 0, &pfn)) < 0 )
> >                          goto param_fail;
> >
> >                      a.value = pfn;
> > @@ -4501,7 +5269,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  case HVM_PARAM_BUFIOREQ_PFN: {
> >                      xen_pfn_t pfn;
> >
> > -                    if ( (rc = hvm_get_ioreq_server_pfn(d, 1, &pfn)) < 0 )
> > +                    if ( (rc = hvm_get_ioreq_server_pfn(d, 0, 1, &pfn)) < 0 )
> >                          goto param_fail;
> >
> >                      a.value = pfn;
> > @@ -4510,7 +5278,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >                  case HVM_PARAM_BUFIOREQ_EVTCHN: {
> >                      evtchn_port_t port;
> >
> > -                    if ( (rc = hvm_get_ioreq_server_buf_port(d, &port)) < 0 )
> > +                    if ( (rc = hvm_get_ioreq_server_buf_port(d, 0, &port)) < 0 )
> >                          goto param_fail;
> >
> >                      a.value = port;
> > diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> > index 576641c..a0d76b2 100644
> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -78,7 +78,7 @@ void send_invalidate_req(void)
> >      p->dir = IOREQ_WRITE;
> >      p->data = ~0UL; /* flush all */
> >
> > -    (void)hvm_send_assist_req(v, p);
> > +    hvm_broadcast_assist_req(v, p);
> >  }
> >
> >  int handle_mmio(void)
> > diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-
> x86/hvm/domain.h
> > index e750ef0..93dcec1 100644
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -41,19 +41,38 @@ struct hvm_ioreq_page {
> >      void *va;
> >  };
> >
> > +struct hvm_io_range {
> > +    struct hvm_io_range *next;
> > +    uint64_t            start, end;
> > +};
> > +
> > +struct hvm_pcidev {
> > +    struct hvm_pcidev *next;
> > +    uint16_t          bdf;
> > +};
> > +
> >  struct hvm_ioreq_server {
> > +    struct list_head       domain_list_entry;
> > +    struct list_head       vcpu_list_entry[MAX_HVM_VCPUS];
> 
> Given that this has to be initialised anyway, would it be better to have
> it dynamically sized on d->max_cpus, which is almost always far
> smaller?
> 

Can vcpu ids be sparse? If not then that would seem fine.

  Paul

> ~Andrew
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Jan 30 16:07:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8u8l-0001gL-MO; Thu, 30 Jan 2014 16:06:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8u8k-0001gB-FU
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:06:54 +0000
Received: from [85.158.143.35:27352] by server-3.bemta-4.messagelabs.com id
	3D/67-11539-D987AE25; Thu, 30 Jan 2014 16:06:53 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391098011!1991080!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16257 invoked from network); 30 Jan 2014 16:06:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:06:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96184969"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:06:47 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 11:06:46 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 17:06:40 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI
	hotplug controller implementation into Xen
Thread-Index: AQHPHcZafuHlqvfYM0ajpdfz4KchSJqdWzcAgAARQVA=
Date: Thu, 30 Jan 2014 16:06:39 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0217A88@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
	<52EA760E.4010400@citrix.com>
In-Reply-To: <52EA760E.4010400@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
 controller implementation into Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
[snip]
> > +int xc_hvm_pci_hotplug_enable(xc_interface *xch,
> > +                              domid_t domid,
> > +                              uint32_t slot)
> 
> Take enable as a parameter and save having 2 almost identical functions?
> 

I was in two minds. Internally it's a single HVMOP with an enable/disable parameter (as we can see below), but I thought it was neater to keep the separation at the API level.
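The arrangement Paul describes can be sketched as one internal helper carrying the enable flag (mirroring the single HVMOP underneath), with two thin public wrappers preserving the separation. The helper here just echoes its arguments back instead of issuing a real hypercall; all names are illustrative, not the libxc ones:

```c
#include <assert.h>

/* Stand-in for the single HVMOP_pci_hotplug path: a real version
 * would marshal { domid, enable, slot } into a hypercall buffer and
 * call do_xen_hypercall(). */
static int pci_hotplug_op(int domid, unsigned int slot, int enable)
{
    (void)domid;
    return (int)((enable << 8) | slot);  /* echo args for illustration */
}

/* Thin wrappers: the enable/disable split lives only at the API level. */
int pci_hotplug_enable(int domid, unsigned int slot)
{
    return pci_hotplug_op(domid, slot, 1);
}

int pci_hotplug_disable(int domid, unsigned int slot)
{
    return pci_hotplug_op(domid, slot, 0);
}
```

This keeps the two quoted xc_hvm_pci_hotplug_* functions from duplicating the hypercall-buffer boilerplate while leaving callers with the same two entry points.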

> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_pci_hotplug;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +    arg->domid = domid;
> > +    arg->enable = 1;
> > +    arg->slot = slot;
> > +    rc = do_xen_hypercall(xch, &hypercall);
> > +    xc_hypercall_buffer_free(xch, arg);
> > +    return rc;
> > +}
> > +
> > +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> > +                               domid_t domid,
> > +                               uint32_t slot)
> > +{
> > +    DECLARE_HYPERCALL;
> > +    DECLARE_HYPERCALL_BUFFER(xen_hvm_pci_hotplug_t, arg);
> > +    int rc;
> > +
> > +    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > +    if ( arg == NULL )
> > +        return -1;
> > +
> > +    hypercall.op     = __HYPERVISOR_hvm_op;
> > +    hypercall.arg[0] = HVMOP_pci_hotplug;
> > +    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
> > +    arg->domid = domid;
> > +    arg->enable = 0;
> > +    arg->slot = slot;
> > +    rc = do_xen_hypercall(xch, &hypercall);
> > +    xc_hypercall_buffer_free(xch, arg);
> > +    return rc;
> > +}
> > +
[snip]
> > +int xc_hvm_pci_hotplug_disable(xc_interface *xch,
> > +			       domid_t domid,
> > +			       uint32_t slot);
> > +
> 
> tabs/spaces
> 

Yep.

[snip]
> > +int gpe_init(struct domain *d)
> > +{
> > +    struct hvm_hotplug  *hp = &d->arch.hvm_domain.hotplug;
> > +
> > +    hp->domain = d;
> > +
> > +    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);
> 
> This size is known at compile time - what about arrays inside
> hvm_hotplug and forgo the small memory allocations?
> 

Yes, that seems reasonable.
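Andrew's suggestion amounts to embedding the GPE status/enable blocks directly, since their size is a compile-time constant, and dropping the two small xzalloc_array() calls together with their failure paths. A sketch, with the GPE_LEN value assumed for illustration and the field layout following the quoted patch:

```c
#include <assert.h>
#include <stdint.h>

#define GPE_LEN 4   /* assumed value, for illustration only */

/* Hypothetical fixed-array variant of struct hvm_hotplug: no
 * allocations to fail or free, at the cost of a few bytes always
 * present in the domain structure. */
struct hvm_hotplug_fixed {
    uint8_t  gpe_sts[GPE_LEN / 2];
    uint8_t  gpe_en[GPE_LEN / 2];

    /* PCI hotplug */
    uint32_t slot_up;
    uint32_t slot_down;
};
```

gpe_init() would then reduce to zeroing the embedded arrays (or nothing at all, if the containing structure is already zero-initialised), and gpe_deinit() could go away entirely.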

[snip]
> > +struct hvm_hotplug {
> > +    struct domain   *domain;
> 
> This appears to be found by using container_of(), which will help keep
> the size of struct domain down.
> 

Sure.

> > +    uint8_t         *gpe_sts;
> > +    uint8_t         *gpe_en;
> > +
> > +    /* PCI hotplug */
> > +    uint32_t        slot_up;
> > +    uint32_t        slot_down;
> > +};
> > +
> >  struct hvm_domain {
> >      struct list_head        ioreq_server_list;
> >      spinlock_t              ioreq_server_lock;
> > @@ -73,6 +83,8 @@ struct hvm_domain {
> >      uint32_t                pci_cf8;
> >      spinlock_t              pci_lock;
> >
> > +    struct hvm_hotplug      hotplug;
> > +
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-
> x86/hvm/io.h
> > index 86db58d..072bfe7 100644
> > --- a/xen/include/asm-x86/hvm/io.h
> > +++ b/xen/include/asm-x86/hvm/io.h
> > @@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
> >  void stdvga_deinit(struct domain *d);
> >
> >  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> > +
> > +int gpe_init(struct domain *d);
> > +void gpe_deinit(struct domain *d);
> > +
> > +void pci_hotplug(struct domain *d, int slot, bool_t enable);
> > +
> >  #endif /* __ASM_X86_HVM_IO_H__ */
> >
> > diff --git a/xen/include/public/hvm/hvm_op.h
> b/xen/include/public/hvm/hvm_op.h
> > index 6b31189..20a53ab 100644
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
> >  typedef struct xen_hvm_destroy_ioreq_server
> xen_hvm_destroy_ioreq_server_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
> >
> > +#define HVMOP_pci_hotplug 24
> > +struct xen_hvm_pci_hotplug {
> > +    domid_t domid;          /* IN - domain to be serviced */
> > +    uint8_t enable;         /* IN - enable or disable? */
> > +    uint32_t slot;          /* IN - slot to enable/disable */
> 
> Reordering these two will make the structure smaller.
> 

It will indeed.

   Paul

> ~Andrew
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > +    hp->domain = d;
> > +
> > +    hp->gpe_sts = xzalloc_array(uint8_t, GPE_LEN / 2);
> 
> This size is known at compile time - what about arrays inside
> hvm_hotplug and forgo the small memory allocations?
> 

Yes, that seems reasonable.

[snip]
> > +struct hvm_hotplug {
> > +    struct domain   *domain;
> 
> This appears to be found by using container_of(), which will help keep
> the size of struct domain down.
> 

Sure.

> > +    uint8_t         *gpe_sts;
> > +    uint8_t         *gpe_en;
> > +
> > +    /* PCI hotplug */
> > +    uint32_t        slot_up;
> > +    uint32_t        slot_down;
> > +};
> > +
> >  struct hvm_domain {
> >      struct list_head        ioreq_server_list;
> >      spinlock_t              ioreq_server_lock;
> > @@ -73,6 +83,8 @@ struct hvm_domain {
> >      uint32_t                pci_cf8;
> >      spinlock_t              pci_lock;
> >
> > +    struct hvm_hotplug      hotplug;
> > +
> >      struct pl_time         pl_time;
> >
> >      struct hvm_io_handler *io_handler;
> > diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-
> x86/hvm/io.h
> > index 86db58d..072bfe7 100644
> > --- a/xen/include/asm-x86/hvm/io.h
> > +++ b/xen/include/asm-x86/hvm/io.h
> > @@ -142,5 +142,11 @@ void stdvga_init(struct domain *d);
> >  void stdvga_deinit(struct domain *d);
> >
> >  extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
> > +
> > +int gpe_init(struct domain *d);
> > +void gpe_deinit(struct domain *d);
> > +
> > +void pci_hotplug(struct domain *d, int slot, bool_t enable);
> > +
> >  #endif /* __ASM_X86_HVM_IO_H__ */
> >
> > diff --git a/xen/include/public/hvm/hvm_op.h
> b/xen/include/public/hvm/hvm_op.h
> > index 6b31189..20a53ab 100644
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -340,6 +340,15 @@ struct xen_hvm_destroy_ioreq_server {
> >  typedef struct xen_hvm_destroy_ioreq_server
> xen_hvm_destroy_ioreq_server_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
> >
> > +#define HVMOP_pci_hotplug 24
> > +struct xen_hvm_pci_hotplug {
> > +    domid_t domid;          /* IN - domain to be serviced */
> > +    uint8_t enable;         /* IN - enable or disable? */
> > +    uint32_t slot;          /* IN - slot to enable/disable */
> 
> Reordering these two will make the structure smaller.
> 

It will indeed.

   Paul

> ~Andrew
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uC3-00025M-CH; Thu, 30 Jan 2014 16:10:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8uC1-00025F-Ls
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:10:18 +0000
Received: from [85.158.139.211:51421] by server-16.bemta-5.messagelabs.com id
	53/B3-05060-8697AE25; Thu, 30 Jan 2014 16:10:16 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391098212!636127!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,HOT_NASTY,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20945 invoked from network); 30 Jan 2014 16:10:14 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 16:10:14 -0000
Received: from mail-vb0-f51.google.com ([209.85.212.51]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUup5ZOKRlDY8Vnlvbjn7JIofi46U4Jwz@postini.com;
	Thu, 30 Jan 2014 08:10:14 PST
Received: by mail-vb0-f51.google.com with SMTP id 11so2176444vbe.10
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 08:10:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4JurlZsfO8+oq1Nq+vOhJcjPUYlSocvLEtEMU/8bhAQ=;
	b=OscspmdEz3WoSDtoOYBc8ZhjVrQBPn5cLUuNBWw4yVcahu5vKxswsfD45COA68i7Z6
	nbljHcOR1JJ4USF5m2sZv3BT/P2UApp5esz83gsivJPZQN4X+0coRDeWMuxm019YKzLf
	z7x4K+Bk6IlEpV8M9XBoOacraEuBb/SUtPMTE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=4JurlZsfO8+oq1Nq+vOhJcjPUYlSocvLEtEMU/8bhAQ=;
	b=DQ6jrMhvCzb35gqu7tahTUDB3hTeQgTtEM7qvHFgX2pOJOTCcudSgJsjX+c+D2ev4e
	ierHWoh+MWGxXtAga73Dshowa6ALn65tJnn3ohfRkrMg/DY1HdtouDluGmXPiH1s6bcZ
	Z5gC884wn3p83uFP001mT8vlmgkYSV/5CPpA4KcuF+EEguKigRz47WCfLzHjKS4DwqfB
	SPeZBodWnIdQoMwF7ykLRCQNOoPYl0PjRT8dohT7ke2UOKuMvtuHpSggD7cdQOjJ6G1w
	s0pH+2jkiUvmlsPcK8NDQWWJOKZ/YCNduUeEcCEX1jG4h8rp5jIpsfMYTLWvjU8w0K5+
	lz8A==
X-Gm-Message-State: ALoCoQlhZsHGmlPzPeExMxmkEsM/Mmwa1cwJopj5MdefmWhSjAK4i7Qu4i6lHuntPPmHgsd0sQhYOzI6GITEyw2TdFXDWrPEQSvQXuCq8a5xq/DOzwSRHQWsFEf5evJ6q+RhmZFaElb9knEZ5Q1ikUABPEP5MNxcoW0EYQQc418jPpQwfJX8+so=
X-Received: by 10.53.13.44 with SMTP id ev12mr10451718vdd.17.1391098211858;
	Thu, 30 Jan 2014 08:10:11 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.53.13.44 with SMTP id ev12mr10451703vdd.17.1391098211620;
	Thu, 30 Jan 2014 08:10:11 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 08:10:11 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 18:10:11 +0200
Message-ID: <CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, all.

1. The "simultaneous cross-interrupts" issue only occurs if
gic_route_irq_to_guest() was called on CPU1 during boot.
This is possible in our case because, unlike upstream Xen,
gic_route_irq_to_guest() is called while both domains (dom0 and domU)
are being built.
So the "impropriety" is on our side.

The following change fixes the side effect (the deadlock):

while ( unlikely(!spin_trylock(&call_lock)) )
     smp_call_function_interrupt();

1.1 I have checked the patch "xen: arm: increase priority of SGIs used as IPIs".
In general it works (it does not introduce new issues), but it does not
fix the problem: I still see the "simultaneous cross-interrupts".

1.2 I have also checked the solution where the on_selected_cpus() call was
moved out of the interrupt handler. Unfortunately, it does not work.

I almost immediately see the following error:
(XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
(XEN) Xen BUG at gic.c:981
(XEN) CPU1: Unexpected Trap: Undefined Instruction
(XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
(XEN) CPU:    1
(XEN) PC:     00241ee0 __bug+0x2c/0x44
(XEN) CPSR:   2000015a MODE:Hypervisor
(XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
(XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
(XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
(XEN) HYP: SP: 40037eb4 LR: 00241ee0
(XEN)
(XEN)   VTCR_EL2: 80002558
(XEN)  VTTBR_EL2: 00010000deffc000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)    HCR_EL2: 0000000000002835
(XEN)  TTBR0_EL2: 00000000d2014000
(XEN)
(XEN)    ESR_EL2: 00000000
(XEN)  HPFAR_EL2: 0000000000482110
(XEN)      HDFAR: fa211f00
(XEN)      HIFAR: 00000000
(XEN)
(XEN) Xen stack trace from sp=40037eb4:
(XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
(XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
(XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
(XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
(XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
(XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
(XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
(XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
(XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
(XEN)    00000000 0c41e00c 450c2880
(XEN) Xen call trace:
(XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
(XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
(XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
(XEN)    [<00249068>] do_IRQ+0x138/0x198
(XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
(XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
(XEN)    [<00251a30>] return_from_trap+0/0x4
(XEN)


2. The "simultaneous cross-interrupts" issue does not occur if I use
the following solution:

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..af96a31 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
struct dt_irq *irq,

     level = dt_irq_is_level_triggered(irq);

-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);

     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {

As a result, I do not see the deadlock in on_selected_cpus().
But, rarely, I see deadlocks in other parts related to interrupt handling.
As Julien noted, I am using an old version of the interrupt patch series;
I completely agree.

We are based on the following Xen commit:
48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list

We also carry some patches that we cherry-picked when we urgently needed them:
6bba1a3 xen/arm: Keep count of inflight interrupts
33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ

I still have to apply the following patches and re-test with them:
88eb95e xen/arm: disable a physical IRQ when the guest disables the
corresponding IRQ
a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
d16d511 xen/arm: track the state of guest IRQs

I will report the results; I hope to do so today.

Many thanks to all.

On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> Given that we don't deactivate the interrupt (writing to GICC_DIR) until
> the guest EOIs it, I can't understand how you manage to get a second
> interrupt notification before the guest EOIs the first one.
>
> Do you set GICC_CTL_EOI in GICC_CTLR?
>
> On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
>>
>> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > Is it a level or an edge irq?
>> >
>> > On Wed, 29 Jan 2014, Julien Grall wrote:
>> >> Hi,
>> >>
>> >> It's weird, physical IRQ should not be injected twice ...
>> >> Were you able to print the IRQ number?
>> >>
>> >> In any case, you are using an old version of the interrupt patch series.
>> >> Your new error may come from a race condition in this code.
>> >>
>> >> Can you try a newer version?
>> >>
>> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>> >>       > difference for xen-unstable (it should make things clearer, if nothing
>> >>       > else) but it should fix things for Oleksandr.
>> >>
>> >>       Unfortunately, it is not enough for stable operation.
>> >>
>> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>> >>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>> >>       which causes the deadlock in the on_selected_cpus function (as expected).
>> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
>> >>       identified where), or I sometimes see traps like the following
>> >>       (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
>> >>
>> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> >>       (XEN) CPU:    1
>> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
>> >>       (XEN) CPSR:   200001da MODE:Hypervisor
>> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>> >>       (XEN)
>> >>       (XEN)   VTCR_EL2: 80002558
>> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
>> >>       (XEN)
>> >>       (XEN)  SCTLR_EL2: 30cd187f
>> >>       (XEN)    HCR_EL2: 00000000000028b5
>> >>       (XEN)  TTBR0_EL2: 00000000d2014000
>> >>       (XEN)
>> >>       (XEN)    ESR_EL2: 00000000
>> >>       (XEN)  HPFAR_EL2: 0000000000482110
>> >>       (XEN)      HDFAR: fa211190
>> >>       (XEN)      HIFAR: 00000000
>> >>       (XEN)
>> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
>> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
>> >>       (XEN) Xen call trace:
>> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
>> >>       (XEN)
>> >>
>> >>       Also I am posting maintenance_interrupt() from my tree:
>> >>
>> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
>> >>       cpu_user_regs *regs)
>> >>       {
>> >>           int i = 0, virq, pirq;
>> >>           uint32_t lr;
>> >>           struct vcpu *v = current;
>> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>> >>
>> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>> >>                                     64, i)) < 64) {
>> >>               struct pending_irq *p, *n;
>> >>               int cpu, eoi;
>> >>
>> >>               cpu = -1;
>> >>               eoi = 0;
>> >>
>> >>               spin_lock_irq(&gic.lock);
>> >>               lr = GICH[GICH_LR + i];
>> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
>> >>
>> >>               p = irq_to_pending(v, virq);
>> >>               if ( p->desc != NULL ) {
>> >>                   p->desc->status &= ~IRQ_INPROGRESS;
>> >>                   /* Assume only one pcpu needs to EOI the irq */
>> >>                   cpu = p->desc->arch.eoi_cpu;
>> >>                   eoi = 1;
>> >>                   pirq = p->desc->irq;
>> >>               }
>> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>> >>               {
>> >>                   /* Physical IRQ can't be reinject */
>> >>                   WARN_ON(p->desc != NULL);
>> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>> >>                   spin_unlock_irq(&gic.lock);
>> >>                   i++;
>> >>                   continue;
>> >>               }
>> >>
>> >>               GICH[GICH_LR + i] = 0;
>> >>               clear_bit(i, &this_cpu(lr_mask));
>> >>
>> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>> >>                   list_del_init(&n->lr_queue);
>> >>                   set_bit(i, &this_cpu(lr_mask));
>> >>               } else {
>> >>                   gic_inject_irq_stop();
>> >>               }
>> >>               spin_unlock_irq(&gic.lock);
>> >>
>> >>               spin_lock_irq(&v->arch.vgic.lock);
>> >>               list_del_init(&p->inflight);
>> >>               spin_unlock_irq(&v->arch.vgic.lock);
>> >>
>> >>               if ( eoi ) {
>> >>                   /* this is not racy because we can't receive another irq of the
>> >>                    * same type until we EOI it.  */
>> >>                   if ( cpu == smp_processor_id() )
>> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>> >>                   else
>> >>                       on_selected_cpus(cpumask_of(cpu),
>> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>> >>               }
>> >>
>> >>               i++;
>> >>           }
>> >>       }
>> >>
>> >>
>> >>       Oleksandr Tyshchenko | Embedded Developer
>> >>       GlobalLogic
>> >>
>> >>
>> >>
>>
>>
>>
>> --
>>
>> Name | Title
>> GlobalLogic
>> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
>> www.globallogic.com
>>
>> http://www.globallogic.com/email_disclaimer.txt
>>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uC3-00025M-CH; Thu, 30 Jan 2014 16:10:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8uC1-00025F-Ls
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:10:18 +0000
Received: from [85.158.139.211:51421] by server-16.bemta-5.messagelabs.com id
	53/B3-05060-8697AE25; Thu, 30 Jan 2014 16:10:16 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391098212!636127!1
X-Originating-IP: [64.18.0.22]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,HOT_NASTY,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20945 invoked from network); 30 Jan 2014 16:10:14 -0000
Received: from exprod5og111.obsmtp.com (HELO exprod5og111.obsmtp.com)
	(64.18.0.22)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 16:10:14 -0000
Received: from mail-vb0-f51.google.com ([209.85.212.51]) (using TLSv1) by
	exprod5ob111.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUup5ZOKRlDY8Vnlvbjn7JIofi46U4Jwz@postini.com;
	Thu, 30 Jan 2014 08:10:14 PST
Received: by mail-vb0-f51.google.com with SMTP id 11so2176444vbe.10
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 08:10:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4JurlZsfO8+oq1Nq+vOhJcjPUYlSocvLEtEMU/8bhAQ=;
	b=OscspmdEz3WoSDtoOYBc8ZhjVrQBPn5cLUuNBWw4yVcahu5vKxswsfD45COA68i7Z6
	nbljHcOR1JJ4USF5m2sZv3BT/P2UApp5esz83gsivJPZQN4X+0coRDeWMuxm019YKzLf
	z7x4K+Bk6IlEpV8M9XBoOacraEuBb/SUtPMTE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=4JurlZsfO8+oq1Nq+vOhJcjPUYlSocvLEtEMU/8bhAQ=;
	b=DQ6jrMhvCzb35gqu7tahTUDB3hTeQgTtEM7qvHFgX2pOJOTCcudSgJsjX+c+D2ev4e
	ierHWoh+MWGxXtAga73Dshowa6ALn65tJnn3ohfRkrMg/DY1HdtouDluGmXPiH1s6bcZ
	Z5gC884wn3p83uFP001mT8vlmgkYSV/5CPpA4KcuF+EEguKigRz47WCfLzHjKS4DwqfB
	SPeZBodWnIdQoMwF7ykLRCQNOoPYl0PjRT8dohT7ke2UOKuMvtuHpSggD7cdQOjJ6G1w
	s0pH+2jkiUvmlsPcK8NDQWWJOKZ/YCNduUeEcCEX1jG4h8rp5jIpsfMYTLWvjU8w0K5+
	lz8A==
X-Gm-Message-State: ALoCoQlhZsHGmlPzPeExMxmkEsM/Mmwa1cwJopj5MdefmWhSjAK4i7Qu4i6lHuntPPmHgsd0sQhYOzI6GITEyw2TdFXDWrPEQSvQXuCq8a5xq/DOzwSRHQWsFEf5evJ6q+RhmZFaElb9knEZ5Q1ikUABPEP5MNxcoW0EYQQc418jPpQwfJX8+so=
X-Received: by 10.53.13.44 with SMTP id ev12mr10451718vdd.17.1391098211858;
	Thu, 30 Jan 2014 08:10:11 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.53.13.44 with SMTP id ev12mr10451703vdd.17.1391098211620;
	Thu, 30 Jan 2014 08:10:11 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 08:10:11 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 18:10:11 +0200
Message-ID: <CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, all.

1. The "simultaneous cross-interrupts" issue only occurs if
gic_route_irq_to_guest() was called on CPU1 during boot.
it is possible in our case, since gic_route_irq_to_guest() is called
when the both domains (dom0 and domU) are building unlike Xen
upstream.
So, the "impropriety" on our side.

Next solution fixes side-effect (deadlock):

while ( unlikely(!spin_trylock(&call_lock)) )
     smp_call_function_interrupt();

1.1 I have checked patch "xen: arm: increase priority of SGIs used as IPIs".
In general it works (I mean that this patch doesn't cause to new
issues). But, it doesn't fix the issue.
(I still see "simultaneous cross-interrupts").

1.2 I also have checked solution where on_selected_cpus call was moved
out of the
interrupt handler. Unfortunately, it doesn't work.

I almost immediately see next error:
(XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
(XEN) Xen BUG at gic.c:981
(XEN) CPU1: Unexpected Trap: Undefined Instruction
(XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
(XEN) CPU:    1
(XEN) PC:     00241ee0 __bug+0x2c/0x44
(XEN) CPSR:   2000015a MODE:Hypervisor
(XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
(XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
(XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
(XEN) HYP: SP: 40037eb4 LR: 00241ee0
(XEN)
(XEN)   VTCR_EL2: 80002558
(XEN)  VTTBR_EL2: 00010000deffc000
(XEN)
(XEN)  SCTLR_EL2: 30cd187f
(XEN)    HCR_EL2: 0000000000002835
(XEN)  TTBR0_EL2: 00000000d2014000
(XEN)
(XEN)    ESR_EL2: 00000000
(XEN)  HPFAR_EL2: 0000000000482110
(XEN)      HDFAR: fa211f00
(XEN)      HIFAR: 00000000
(XEN)
(XEN) Xen stack trace from sp=40037eb4:
(XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
(XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
(XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
(XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
(XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
(XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
(XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
(XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
(XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
(XEN)    00000000 0c41e00c 450c2880
(XEN) Xen call trace:
(XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
(XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
(XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
(XEN)    [<00249068>] do_IRQ+0x138/0x198
(XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
(XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
(XEN)    [<00251a30>] return_from_trap+0/0x4
(XEN)


2. The "simultaneous cross-interrupts" issue doesn't occur if I use
next solution:
So, as result I don't see deadlock in on_selected_cpus()

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..af96a31 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
struct dt_irq *irq,

     level = dt_irq_is_level_triggered(irq);

-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);

     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {

So, as result I don't see deadlock in on_selected_cpus().
But, rarely, I see deadlocks in other parts related to interrupts handling.
As noted by Julien, I am using the old version of the interrupt patch series.
I completely agree.

We are based on next XEN commit:
48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list

Also we have some patches, which we cherry-picked when we urgently needed them:
6bba1a3 xen/arm: Keep count of inflight interrupts
33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ

I have to apply next patches and check with them:
88eb95e xen/arm: disable a physical IRQ when the guest disables the
corresponding IRQ
a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
d16d511 xen/arm: track the state of guest IRQs

I'll report back with the results, hopefully today.

Many thanks to all.

On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> Given that we don't deactivate the interrupt (writing to GICC_DIR) until
> the guest EOIs it, I can't understand how you manage to get a second
> interrupt notification before the guest EOIs the first one.
>
> Do you set GICC_CTL_EOI in GICC_CTLR?
>
> On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
>>
>> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > Is it a level or an edge irq?
>> >
>> > On Wed, 29 Jan 2014, Julien Grall wrote:
>> >> Hi,
>> >>
>> >> It's weird; a physical IRQ should not be injected twice...
>> >> Were you able to print the IRQ number?
>> >>
>> >> In any case, you are using the old version of the interrupt patch series.
>> >> Your new error may come from a race condition in this code.
>> >>
>> >> Can you try the newest version?
>> >>
>> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>> >>       > difference for xen-unstable (it should make things clearer, if nothing
>> >>       > else) but it should fix things for Oleksandr.
>> >>
>> >>       Unfortunately, it is not enough for stable operation.
>> >>
>> >>       I tried using cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>> >>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>> >>       that leads to the deadlock in on_selected_cpus() (as expected).
>> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
>> >>       identified where this happens), or I sometimes see traps like the
>> >>       one below (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt()
>> >>       triggers them):
>> >>
>> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> >>       (XEN) CPU:    1
>> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
>> >>       (XEN) CPSR:   200001da MODE:Hypervisor
>> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>> >>       (XEN)
>> >>       (XEN)   VTCR_EL2: 80002558
>> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
>> >>       (XEN)
>> >>       (XEN)  SCTLR_EL2: 30cd187f
>> >>       (XEN)    HCR_EL2: 00000000000028b5
>> >>       (XEN)  TTBR0_EL2: 00000000d2014000
>> >>       (XEN)
>> >>       (XEN)    ESR_EL2: 00000000
>> >>       (XEN)  HPFAR_EL2: 0000000000482110
>> >>       (XEN)      HDFAR: fa211190
>> >>       (XEN)      HIFAR: 00000000
>> >>       (XEN)
>> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
>> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
>> >>       (XEN) Xen call trace:
>> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
>> >>       (XEN)
>> >>
>> >>       Also I am posting maintenance_interrupt() from my tree:
>> >>
>> >>       static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>> >>       {
>> >>           int i = 0, virq, pirq;
>> >>           uint32_t lr;
>> >>           struct vcpu *v = current;
>> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>> >>
>> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>> >>                                     64, i)) < 64) {
>> >>               struct pending_irq *p, *n;
>> >>               int cpu, eoi;
>> >>
>> >>               cpu = -1;
>> >>               eoi = 0;
>> >>
>> >>               spin_lock_irq(&gic.lock);
>> >>               lr = GICH[GICH_LR + i];
>> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
>> >>
>> >>               p = irq_to_pending(v, virq);
>> >>               if ( p->desc != NULL ) {
>> >>                   p->desc->status &= ~IRQ_INPROGRESS;
>> >>                   /* Assume only one pcpu needs to EOI the irq */
>> >>                   cpu = p->desc->arch.eoi_cpu;
>> >>                   eoi = 1;
>> >>                   pirq = p->desc->irq;
>> >>               }
>> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>> >>               {
>> >>               /* A physical IRQ can't be reinjected */
>> >>                   WARN_ON(p->desc != NULL);
>> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>> >>                   spin_unlock_irq(&gic.lock);
>> >>                   i++;
>> >>                   continue;
>> >>               }
>> >>
>> >>               GICH[GICH_LR + i] = 0;
>> >>               clear_bit(i, &this_cpu(lr_mask));
>> >>
>> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>> >>                   list_del_init(&n->lr_queue);
>> >>                   set_bit(i, &this_cpu(lr_mask));
>> >>               } else {
>> >>                   gic_inject_irq_stop();
>> >>               }
>> >>               spin_unlock_irq(&gic.lock);
>> >>
>> >>               spin_lock_irq(&v->arch.vgic.lock);
>> >>               list_del_init(&p->inflight);
>> >>               spin_unlock_irq(&v->arch.vgic.lock);
>> >>
>> >>               if ( eoi ) {
>> >>                   /* this is not racy because we can't receive another irq of the
>> >>                    * same type until we EOI it.  */
>> >>                   if ( cpu == smp_processor_id() )
>> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>> >>                   else
>> >>                       on_selected_cpus(cpumask_of(cpu),
>> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>> >>               }
>> >>
>> >>               i++;
>> >>           }
>> >>       }
>> >>
>> >>
>> >>       Oleksandr Tyshchenko | Embedded Developer
>> >>       GlobalLogic
>> >>
>> >>
>> >>
>>
>>
>>
>> --
>>
>> Name | Title
>> GlobalLogic
>> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
>> www.globallogic.com
>>
>> http://www.globallogic.com/email_disclaimer.txt
>>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:12:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uE6-0002Gg-RR; Thu, 30 Jan 2014 16:12:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8uE5-0002GQ-5B; Thu, 30 Jan 2014 16:12:25 +0000
Received: from [85.158.139.211:16410] by server-8.bemta-5.messagelabs.com id
	28/79-05298-8E97AE25; Thu, 30 Jan 2014 16:12:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391098342!654871!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4086 invoked from network); 30 Jan 2014 16:12:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:12:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96187976"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:12:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 11:12:21 -0500
Message-ID: <1391098340.9495.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Yun Wang <bimingery@gmail.com>
Date: Thu, 30 Jan 2014 16:12:20 +0000
In-Reply-To: <CAL3hBVrJRhQvt7SH0i473Y5SiPLbUN8OLMuW7wXQwOtHHjGoAQ@mail.gmail.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
	<CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
	<1391096671.9495.10.camel@kazak.uk.xensource.com>
	<CAL3hBVrJRhQvt7SH0i473Y5SiPLbUN8OLMuW7wXQwOtHHjGoAQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 09:04 -0700, Yun Wang wrote:
> What disk option should I use for a PV guest? tap2:aio seems to have a
> similar issue with "xl vcpu-set"

To work around the issue I think you would need phy:, perhaps by setting
up a loopback device on the raw image by hand.
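
For reference, the phy: workaround would look something like this (the image
path and loop device are illustrative, a sketch rather than a tested recipe):

    # attach the raw image to a loop device by hand
    losetup /dev/loop0 /path/to/guest.img

    # then in the guest config, use phy: instead of tap2:aio
    disk = [ 'phy:/dev/loop0,xvda,w' ]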

Hopefully the underlying issue can get fixed though.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:13:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uFC-0002QW-U0; Thu, 30 Jan 2014 16:13:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>) id 1W8u6b-0001aZ-A6
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:04:41 +0000
Received: from [85.158.139.211:13879] by server-6.bemta-5.messagelabs.com id
	C5/F3-14342-8187AE25; Thu, 30 Jan 2014 16:04:40 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391097878!656753!1
X-Originating-IP: [209.85.219.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30638 invoked from network); 30 Jan 2014 16:04:39 -0000
Received: from mail-oa0-f51.google.com (HELO mail-oa0-f51.google.com)
	(209.85.219.51)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:04:39 -0000
Received: by mail-oa0-f51.google.com with SMTP id h16so3797189oag.24
	for <multiple recipients>; Thu, 30 Jan 2014 08:04:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=prTDmzFdg93ZmsfX14XasFzEAZqnT12o/kmRYm/hCHY=;
	b=aj61o0ZwxvkcbLn/h9J2X60M14Clow1LtFt0WNVDkmSbzataHkWqZdtpFQRRoDICGW
	zZnmYRLII61Nx4j8BxeMcp47KFQJuKBmalryF+o+SR/z0nm55XzkvGCzTVMvszYnyjiV
	CNAUy8xZSRbdlN28QO15DHvW+8RPdxQ4qbp1OYuNmsPTK9ZRl+ITXXBlnjddz/heuGlI
	sFeR/0BjRiTVs6TvruITtIpePCAx63XhhsI1efZsobKhNse8eB0nXwPvkdfuGt9Aowi1
	1PRpVS9jPIdxHoWrpeqcLWH5on5qUQyeBDef3BLETomTDdUquFE/mEkGqqyw8V3iRMo4
	7Ytg==
MIME-Version: 1.0
X-Received: by 10.60.146.235 with SMTP id tf11mr1587630oeb.63.1391097877934;
	Thu, 30 Jan 2014 08:04:37 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Thu, 30 Jan 2014 08:04:37 -0800 (PST)
In-Reply-To: <1391096671.9495.10.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
	<CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
	<1391096671.9495.10.camel@kazak.uk.xensource.com>
Date: Thu, 30 Jan 2014 09:04:37 -0700
Message-ID: <CAL3hBVrJRhQvt7SH0i473Y5SiPLbUN8OLMuW7wXQwOtHHjGoAQ@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Thu, 30 Jan 2014 16:13:34 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

What disk option should I use for a PV guest? tap2:aio seems to have a
similar issue with "xl vcpu-set".
Thanks,
Yun.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:14:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uFu-0002XP-Cb; Thu, 30 Jan 2014 16:14:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W8uFt-0002Wt-Gd
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:14:17 +0000
Received: from [85.158.143.35:10491] by server-3.bemta-4.messagelabs.com id
	1D/E4-11539-75A7AE25; Thu, 30 Jan 2014 16:14:15 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391098453!1976607!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17063 invoked from network); 30 Jan 2014 16:14:14 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 16:14:14 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 30 Jan 2014 09:14:02 -0700
Message-ID: <52EA7A49.2040105@suse.com>
Date: Thu, 30 Jan 2014 09:14:01 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: "Daniel P. Berrange" <berrange@redhat.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com> <20140130121805.GI3139@redhat.com>
In-Reply-To: <20140130121805.GI3139@redhat.com>
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
	flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel P. Berrange wrote:
> On Mon, Jan 27, 2014 at 06:39:36PM -0700, Jim Fehlig wrote:
>   
>> [Adding libvirt list...]
>>
>> Ian Jackson wrote:
>>     
>>> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>>>       
>
> BTW, I meant to ask before - what is the SIGCHLD reference about in the
> subject line ?
>   

This is related to child processes started by libxl, e.g. running a
bootloader when creating PV VMs, or running a save/restore helper when
saving/restoring a VM.

> libvirt drivers that live inside libvirtd should never use or rely on
> the SIGCHLD signal at all. All VM processes started by libvirtd ought
> to be fully daemonized so that their parent is pid 1 / init. This ensures
> that the libvirtd daemon can be restarted without all the VMs getting
> reaped. Once the VMs are reparented to init, then libvirt or library
> code it uses has no way of ever receiving SIGCHLD.
>   

Nod.  VMs are "daemonized" in the context of Xen.  Once running,
libvirtd can be restarted without reaping them.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:14:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uFu-0002XP-Cb; Thu, 30 Jan 2014 16:14:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W8uFt-0002Wt-Gd
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:14:17 +0000
Received: from [85.158.143.35:10491] by server-3.bemta-4.messagelabs.com id
	1D/E4-11539-75A7AE25; Thu, 30 Jan 2014 16:14:15 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391098453!1976607!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17063 invoked from network); 30 Jan 2014 16:14:14 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 16:14:14 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 30 Jan 2014 09:14:02 -0700
Message-ID: <52EA7A49.2040105@suse.com>
Date: Thu, 30 Jan 2014 09:14:01 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: "Daniel P. Berrange" <berrange@redhat.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com> <20140130121805.GI3139@redhat.com>
In-Reply-To: <20140130121805.GI3139@redhat.com>
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
	flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel P. Berrange wrote:
> On Mon, Jan 27, 2014 at 06:39:36PM -0700, Jim Fehlig wrote:
>   
>> [Adding libvirt list...]
>>
>> Ian Jackson wrote:
>>     
>>> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>>>       
>
> BTW, I meant to ask before - what is the SIGCHLD reference about in the
> subject line ?
>   

This is related to child processes started by libxl.  E.g. running a
bootloader when creating PV VMs, running a save/restore helper when
saving/restoring a VM, etc.

> libvirt drivers that live inside libvirtd should never use or rely on
> the SIGCHLD signal at all. All VM processes started by libvirtd ought
> to be fully daemonized so that their parent is pid 1 / init. This ensures
> that the libvirtd daemon can be restarted without all the VMs getting
> reaped. Once the VMs are reparented to init, then libvirt or library
> code it uses has no way of ever receiving SIGCHLD.
>   

Nod.  VMs are "daemonized" in the context of Xen.  Once running,
libvirtd can be restarted without reaping them.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:18:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uKB-0003Aq-6F; Thu, 30 Jan 2014 16:18:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1W8uK9-00037Q-P8
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:18:41 +0000
Received: from [85.158.139.211:36172] by server-10.bemta-5.messagelabs.com id
	D9/5B-08578-16B7AE25; Thu, 30 Jan 2014 16:18:41 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391098719!646563!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31774 invoked from network); 30 Jan 2014 16:18:40 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-206.messagelabs.com with SMTP;
	30 Jan 2014 16:18:40 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0UGHZKB000742
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 11:17:36 -0500
Received: from redhat.com (vpn1-7-27.ams2.redhat.com [10.36.7.27])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0UGHUNx012902
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Thu, 30 Jan 2014 11:17:32 -0500
Date: Thu, 30 Jan 2014 16:17:30 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Jim Fehlig <jfehlig@suse.com>
Message-ID: <20140130161730.GN3139@redhat.com>
References: <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com> <20140130121805.GI3139@redhat.com>
	<52EA7A49.2040105@suse.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EA7A49.2040105@suse.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH 00/12] libxl: fork: SIGCHLD
 flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 09:14:01AM -0700, Jim Fehlig wrote:
> Daniel P. Berrange wrote:
> > On Mon, Jan 27, 2014 at 06:39:36PM -0700, Jim Fehlig wrote:
> >   
> >> [Adding libvirt list...]
> >>
> >> Ian Jackson wrote:
> >>     
> >>> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> >>>       
> >
> > BTW, I meant to ask before - what is the SIGCHLD reference about in the
> > subject line ?
> >   
> 
> This is related to child processes started by libxl.  E.g. running a
> bootloader when creating PV VMs, running a save/restore helper when
> saving/restoring a VM, etc.
> 
> > libvirt drivers that live inside libvirtd should never use or rely on
> > the SIGCHLD signal at all. All VM processes started by libvirtd ought
> > to be fully daemonized so that their parent is pid 1 / init. This ensures
> > that the libvirtd daemon can be restarted without all the VMs getting
> > reaped. Once the VMs are reparented to init, then libvirt or library
> > code it uses has no way of ever receiving SIGCHLD.
> >   
> 
> Nod.  VMs are "daemonized", in the context of Xen.  Once running,
> libvirtd can be restarted without reaping them.

Great, thanks for clarifying, this sounds fine then.

Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:26:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uRG-0003OJ-2c; Thu, 30 Jan 2014 16:26:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8uRF-0003OC-Eu
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:26:01 +0000
Received: from [85.158.137.68:23910] by server-17.bemta-3.messagelabs.com id
	D1/19-22569-81D7AE25; Thu, 30 Jan 2014 16:26:00 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391099159!12339161!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2029 invoked from network); 30 Jan 2014 16:25:59 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 16:25:59 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391099159; l=4603;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=14NfsBxF9i0qJIG5Svsjgw7Z7SQ=;
	b=yUbD2VsJgCK1CHPCK/eM9rV1ri60QO1imtrWpH7W16+Whizh9apbJ/ycyJszhaZO7Sy
	Gn49elwpPwuDLmyFHYFTpWkDteMvukOv4vEBbHMbZZO5CG3QfcXqsPMJVMVxcpUf539bC
	Cj3dhSwjCvUGmdvyiIlymuIbwXAH92gNqUk=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id C03babq0UGPxjkZ
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 17:25:59 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id C82C950269; Thu, 30 Jan 2014 17:25:58 +0100 (CET)
Date: Thu, 30 Jan 2014 17:25:58 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140130162558.GA9033@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


And just for reference, this is a version for our 4.4:

....

This change does not break ABI. Instead of adding a new member ->discard_enable
to struct libxl_device_disk, the existing ->readwrite member is reused.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/misc/xl-disk-configuration.txt | 15 +++++++++++++++
 tools/libxl/libxl.c                 |  2 ++
 tools/libxl/libxl.h                 | 11 +++++++++++
 tools/libxl/libxlu_disk.c           |  3 +++
 tools/libxl/libxlu_disk_i.h         |  2 +-
 tools/libxl/libxlu_disk_l.l         |  4 ++++
 6 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 5bd456d..c9fd9bd 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -178,6 +178,21 @@ information to be interpreted by the executable program <script>,
 These scripts are normally called "block-<script>".
 
 
+discard=<boolean>
+-----------------
+
+Description:           Instruct backend to advertise discard support to frontend
+Supported values:      on, off, 0, 1
+Mandatory:             No
+Default value:         on, if available for that backend type
+
+This option is an advisory setting for the backend driver: depending on the
+value, the backend may advertise discard support (TRIM, UNMAP) to the frontend.
+The real benefit of this option is the ability to force it off rather than on.
+It allows disabling "hole punching" for file-based backends which were
+intentionally created non-sparse to avoid fragmentation of the file.
+
+
 
 ============================================
 DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..9ed5062 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
         flexarray_append(back, disk->readwrite ? "w" : "r");
         flexarray_append(back, "device-type");
         flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
+        if (disk->readwrite == LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC)
+            flexarray_append_pair(back, "discard-enable", "0");
 
         flexarray_append(front, "backend-id");
         flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..021f7e4 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -95,6 +95,17 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * The libxl_device_disk struct lacks a discard_enable field, but disabling
+ * discard is supported without breaking the ABI. This is done by overloading
+ * struct libxl_device_disk->readwrite:
+ * readwrite == 0: disk is readonly, no discard
+ * readwrite == 1: disk is readwrite, backend driver may enable discard
+ * readwrite == MAGIC: disk is readwrite, backend driver should not offer
+ * discard to the frontend driver.
+ */
+#define LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC 0xdcadU
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
index 18fe386..e596cb6 100644
--- a/tools/libxl/libxlu_disk.c
+++ b/tools/libxl/libxlu_disk.c
@@ -80,6 +80,9 @@ int xlu_disk_parse(XLU_Config *cfg,
             disk->format = LIBXL_DISK_FORMAT_EMPTY;
     }
 
+    if (disk->readwrite && dpc.disable_discard)
+        disk->readwrite = LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC;
+
     if (!disk->vdev) {
         xlu__disk_err(&dpc,0, "no vdev specified");
         goto x_err;
diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
index 4fccd4a..9db3002 100644
--- a/tools/libxl/libxlu_disk_i.h
+++ b/tools/libxl/libxlu_disk_i.h
@@ -10,7 +10,7 @@ typedef struct {
     void *scanner;
     YY_BUFFER_STATE buf;
     libxl_device_disk *disk;
-    int access_set, had_depr_prefix;
+    int access_set, disable_discard, had_depr_prefix;
     const char *spec;
 } DiskParseContext;
 
diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 7c4e7f1..ecc30ae 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 
 vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
+discard=on,?	{ DPC->disable_discard = 0; }
+discard=1,?	{ DPC->disable_discard = 0; }
+discard=off,?	{ DPC->disable_discard = 1; }
+discard=0,?	{ DPC->disable_discard = 1; }
 
  /* the target magic parameter, eats the rest of the string */
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:28:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:28:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uTU-0003Uy-M1; Thu, 30 Jan 2014 16:28:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8uTT-0003Uq-9f
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:28:19 +0000
Received: from [85.158.139.211:51145] by server-9.bemta-5.messagelabs.com id
	7D/5A-11237-2AD7AE25; Thu, 30 Jan 2014 16:28:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391099296!650331!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23985 invoked from network); 30 Jan 2014 16:28:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:28:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96196477"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:28:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 11:28:15 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8uTP-0001YQ-Bd;
	Thu, 30 Jan 2014 16:28:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8uTP-0000Qh-3f;
	Thu, 30 Jan 2014 16:28:15 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21226.32158.954708.954668@mariner.uk.xensource.com>
Date: Thu, 30 Jan 2014 16:28:14 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52E70A58.2060002@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> Looking at the libvirt code again, it seems a single thread services the
> event loop. See virNetServerRun() in src/util/virnetserver.c. Indeed, I
> see the same thread ID in all the timer and fd callbacks. One of the
> libvirt core devs can correct me if I'm wrong.

OK.  So just to recap where we stand:

 * I think libxl needs the SIGCHLD flexibility series.  I'll repost
   that (v3) but it's had hardly any changes.

 * You need to fix the timer deregistration arrangements in the
   libvirt/libxl driver to avoid the crash you identified the other day.

 * Something needs to be done about the 20ms slop in the libvirt event
   loop (as it could cause libxl to lock up).  If you can't get rid of
   it in the libvirt core, then adding 20ms to every requested
   callback time in the libvirt/libxl driver would work for now.

 * I think we can get away with not doing anything about the fd
   deregistration race in libvirt because both Linux and FreeBSD have
   behaviours that are tolerable.

 * libxl should have the fd deregistration race fixed in Xen 4.5.
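The 20ms workaround in the list above is simple enough to sketch. This is only an illustration of the idea, not the real libvirt/libxl driver code; the function name and the constant are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: pad every absolute timeout libxl requests by the
 * event loop's worst-case slop, so that by the time libvirt actually
 * fires the timer, the deadline libxl asked for has definitely passed
 * and libxl cannot be woken "early" and stall. */
#define EVENT_LOOP_SLOP_MS 20

static uint64_t pad_libxl_timeout(uint64_t abs_deadline_ms)
{
    return abs_deadline_ms + EVENT_LOOP_SLOP_MS;
}
```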

Have you managed to fix the timer deregistration crash, and retest ?

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:32:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uWu-0003sH-B1; Thu, 30 Jan 2014 16:31:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W8uWs-0003sA-PX
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:31:51 +0000
Received: from [85.158.137.68:52192] by server-11.bemta-3.messagelabs.com id
	84/3A-04255-67E7AE25; Thu, 30 Jan 2014 16:31:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391099507!12365593!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6149 invoked from network); 30 Jan 2014 16:31:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:31:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208,217";a="96198091"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:31:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 11:31:46 -0500
Message-ID: <1391099505.9495.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 30 Jan 2014 16:31:45 +0000
In-Reply-To: <20140130162558.GA9033@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: anthony.perard@citrix.com, George Dunlap <george.dunlap@eu.citrix.com>,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 17:25 +0100, Olaf Hering wrote:
> And just for reference, this is a version for our 4.4:
> 
> ....
> 
> This change does not break ABI. Instead of adding a new member ->discard_enable
> to struct libxl_device_disk the existing ->readwrite member is reused.

Looks like it changes the libxlu ABI though. Or maybe that's totally
internal?

TBH -- if you (==suse I guess?) are contemplating carrying this as a
backport even before 4.4 is out the door we should probably be at least
considering a freeze exception for 4.4. George CCd for input. (I
appreciate that "backport=>freeze exception" is a potentially slippery
slope/ripe for abuse...)

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  docs/misc/xl-disk-configuration.txt | 15 +++++++++++++++
>  tools/libxl/libxl.c                 |  2 ++
>  tools/libxl/libxl.h                 | 11 +++++++++++
>  tools/libxl/libxlu_disk.c           |  3 +++
>  tools/libxl/libxlu_disk_i.h         |  2 +-
>  tools/libxl/libxlu_disk_l.l         |  4 ++++
>  6 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
> index 5bd456d..c9fd9bd 100644
> --- a/docs/misc/xl-disk-configuration.txt
> +++ b/docs/misc/xl-disk-configuration.txt
> @@ -178,6 +178,21 @@ information to be interpreted by the executable program <script>,
>  These scripts are normally called "block-<script>".
>  
> 
> +discard=<boolean>
> +---------------
> +
> +Description:           Instruct backend to advertise discard support to frontend
> +Supported values:      on, off, 0, 1
> +Mandatory:             No
> +Default value:         on, if available for that backend type
> +
> +This option is an advisory setting for the backend driver: depending on
> +the value, it advertises discard support (TRIM, UNMAP) to the frontend.
> +The real benefit of this option is being able to force it off rather
> +than on. It allows disabling "hole punching" for file-based backends
> +which were intentionally created non-sparse to avoid fragmentation.
> +
> +
>  
>  ============================================
>  DEPRECATED PARAMETERS, PREFIXES AND SYNTAXES
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 2845ca4..9ed5062 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -2196,6 +2196,8 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
>          flexarray_append(back, disk->readwrite ? "w" : "r");
>          flexarray_append(back, "device-type");
>          flexarray_append(back, disk->is_cdrom ? "cdrom" : "disk");
> +        if (disk->readwrite == LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC)
> +            flexarray_append_pair(back, "discard-enable", "0");
>  
>          flexarray_append(front, "backend-id");
>          flexarray_append(front, libxl__sprintf(gc, "%d", disk->backend_domid));
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..021f7e4 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -95,6 +95,17 @@
>  #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
>  
>  /*
> + * The libxl_device_disk lacks discard_enable field, disabling discard
> + * is supported without breaking the ABI. This is done by overloading
> + * struct libxl_device_disk->readwrite:
> + * readwrite == 0: disk is readonly, no discard
> + * readwrite == 1: disk is readwrite, backend driver may enable discard
> + * readwrite == MAGIC: disk is readwrite, backend driver should not offer
> + * discard to the frontend driver.
> + */
> +#define LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC 0xdcadU
> +
> +/*
>   * libxl ABI compatibility
>   *
>   * The only guarantee which libxl makes regarding ABI compatibility
> diff --git a/tools/libxl/libxlu_disk.c b/tools/libxl/libxlu_disk.c
> index 18fe386..e596cb6 100644
> --- a/tools/libxl/libxlu_disk.c
> +++ b/tools/libxl/libxlu_disk.c
> @@ -80,6 +80,9 @@ int xlu_disk_parse(XLU_Config *cfg,
>              disk->format = LIBXL_DISK_FORMAT_EMPTY;
>      }
>  
> +    if (disk->readwrite && dpc.disable_discard)
> +        disk->readwrite = LIBXL_HAVE_LIBXL_DEVICE_DISK_DISCARD_DISABLE_MAGIC;
> +
>      if (!disk->vdev) {
>          xlu__disk_err(&dpc,0, "no vdev specified");
>          goto x_err;
> diff --git a/tools/libxl/libxlu_disk_i.h b/tools/libxl/libxlu_disk_i.h
> index 4fccd4a..9db3002 100644
> --- a/tools/libxl/libxlu_disk_i.h
> +++ b/tools/libxl/libxlu_disk_i.h
> @@ -10,7 +10,7 @@ typedef struct {
>      void *scanner;
>      YY_BUFFER_STATE buf;
>      libxl_device_disk *disk;
> -    int access_set, had_depr_prefix;
> +    int access_set, disable_discard, had_depr_prefix;
>      const char *spec;
>  } DiskParseContext;
>  
> diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
> index 7c4e7f1..ecc30ae 100644
> --- a/tools/libxl/libxlu_disk_l.l
> +++ b/tools/libxl/libxlu_disk_l.l
> @@ -173,6 +173,10 @@ backendtype=[^,]*,? { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
>  
>  vdev=[^,]*,?	{ STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
>  script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
> +discard=on,?	{ DPC->disable_discard = 0; }
> +discard=1,?	{ DPC->disable_discard = 0; }
> +discard=off,?	{ DPC->disable_discard = 1; }
> +discard=0,?	{ DPC->disable_discard = 1; }
>  
>   /* the target magic parameter, eats the rest of the string */
>  
> 
> 
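For reference, with the lexer rules quoted above the new key would presumably be usable in an xl disk specification along these lines (a sketch with a hypothetical image path, not an example taken from the patch; target= is placed last since the lexer's "target magic parameter" eats the rest of the string):

```
disk = [ 'format=raw, vdev=xvda, access=rw, discard=off, target=/var/lib/xen/images/guest.img' ]
```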



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:36:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ubf-00047J-9b; Thu, 30 Jan 2014 16:36:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W8ubd-000478-UA
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:36:46 +0000
Received: from [193.109.254.147:24527] by server-12.bemta-14.messagelabs.com
	id 96/EF-17220-D9F7AE25; Thu, 30 Jan 2014 16:36:45 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391099803!947619!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19189 invoked from network); 30 Jan 2014 16:36:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96200904"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:36:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 11:36:42 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W8uba-0000Cp-Ct;
	Thu, 30 Jan 2014 16:36:42 +0000
Message-ID: <1391099797.25756.19.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>, Ian Campbell <ian.campbell@citrix.com>
Date: Thu, 30 Jan 2014 16:36:37 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC] Changing migration protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,
  we have found a big problem with the migration protocol: the
protocol depends on the size of long, so if the hosts are built for
different architectures (for instance i686 and x86_64) machines cannot
migrate between them.

I think it is actually not difficult to detect the stream type and
change the code to handle both cases.

However it would be better to add a header to detect the architecture
type. This would also make it easy to detect ARM <-> Intel migrations.
David and I discussed starting the new stream format with 8 bytes full
of 0xff. This would cause old toolstacks to fail the migration (they
would try to allocate more memory than possible, failing before doing
any real work).
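The proposed marker is easy to check for on the receiving side. A minimal sketch (my code, assuming only the 8-bytes-of-0xff prefix described above):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A new-format stream would begin with 8 bytes of 0xff; an old
 * toolstack interpreting those bytes as a count would see an
 * absurdly large value and abort the migration early.  A new
 * toolstack can recognise the marker explicitly. */
static int stream_is_new_format(const uint8_t hdr[8])
{
    uint64_t v;
    memcpy(&v, hdr, sizeof v);
    return v == UINT64_MAX;
}
```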

The protocol is also currently a bit different between PV and HVM.

The current code is quite messy and not well documented or testable.
IMHO it would be better to try to come up with a better design.

Looking at the protocol (xc_domain_restore and callees) there are also
some minor issues that could be addressed with a better protocol design
or different code:
- the protocol requires that functions allocate two arrays of p2m_size
entries (roughly 12 bytes per guest page on 32 bit and 16 on 64 bit),
which means large memory usage for large guests (though I'm not sure
this can be fixed by a protocol change);
- some streams have a length followed by a blob without any size checks.
Given an untrusted source or corruption (unlikely but not impossible)
we could end up trying to allocate a huge amount of memory, causing
dom0 to run out of memory.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:36:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8ubf-00047J-9b; Thu, 30 Jan 2014 16:36:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W8ubd-000478-UA
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:36:46 +0000
Received: from [193.109.254.147:24527] by server-12.bemta-14.messagelabs.com
	id 96/EF-17220-D9F7AE25; Thu, 30 Jan 2014 16:36:45 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391099803!947619!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19189 invoked from network); 30 Jan 2014 16:36:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:36:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96200904"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:36:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 11:36:42 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W8uba-0000Cp-Ct;
	Thu, 30 Jan 2014 16:36:42 +0000
Message-ID: <1391099797.25756.19.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>, Ian Campbell <ian.campbell@citrix.com>
Date: Thu, 30 Jan 2014 16:36:37 +0000
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [RFC] Changing migration protocol
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,
  we have found a serious problem with the migration protocol: the
stream format depends on the size of long, so guests cannot migrate
between hosts built for different architectures (for instance i686 and
x86_64).

I think it would actually not be difficult to detect the stream type
and change the code to handle both cases.

However, it would be better to add a header that identifies the
architecture; this would also make ARM <-> Intel migrations easy to
detect. David and I discussed starting the new stream format with
8 bytes full of 0xff. This will cause old toolstacks to fail the
migration (they will try to allocate more memory than possible,
failing before even trying to do it).

The protocol also differs slightly between PV and HVM.

The current code is quite messy and not well documented or testable.
IMHO it would be better to try to come up with a cleaner design.

Looking at the protocol (xc_domain_restore and its callees) there are
also some minor issues that could be addressed with a better protocol
design or different code:
- the protocol requires functions to allocate two arrays of p2m_size
entries (roughly 12 bytes per guest page on 32-bit and 16 on 64-bit),
which means large memory usage for large guests (actually I'm not sure
this can be fixed by a protocol change);
- some streams carry a length followed by a blob without any size
checks. Given an untrusted source or corruption (quite unlikely but not
impossible) we could end up trying to allocate a huge amount of memory,
causing dom0 to run out of memory.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:38:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8udG-0004J2-Qd; Thu, 30 Jan 2014 16:38:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W8udF-0004Hz-SM
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:38:25 +0000
Received: from [85.158.137.68:14195] by server-2.bemta-3.messagelabs.com id
	71/6A-06531-1008AE25; Thu, 30 Jan 2014 16:38:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391099904!12406719!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12080 invoked from network); 30 Jan 2014 16:38:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 16:38:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 30 Jan 2014 16:38:23 +0000
Message-Id: <52EA8E1F0200007800118430@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 30 Jan 2014 16:38:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <Andrew.Cooper3@citrix.com>,
	"Paul Durrant" <Paul.Durrant@citrix.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
	<52EA760E.4010400@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0217A88@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0217A88@AMSPEX01CL01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
 controller implementation into Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.01.14 at 17:06, Paul Durrant <Paul.Durrant@citrix.com> wrote:
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> > +struct xen_hvm_pci_hotplug {
>> > +    domid_t domid;          /* IN - domain to be serviced */
>> > +    uint8_t enable;         /* IN - enable or disable? */
>> > +    uint32_t slot;          /* IN - slot to enable/disable */
>> 
>> Reordering these two will make the structure smaller.
>> 
> 
> It will indeed.

Now I'm confused: domid_t being 16 bits, afaict re-ordering would
make it larger (from 8 to 12 bytes) rather than smaller.

What I'd certainly recommend is filling the 1 byte that's currently
unused (either by widening "enabled" or with a padding field) such
that eventual future extensions (flags?) could be added (i.e. the
field would need to be checked to be zero).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:43:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:43:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uhf-0004lD-OB; Thu, 30 Jan 2014 16:42:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1W8uhe-0004l5-0U
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 16:42:58 +0000
Received: from [85.158.137.68:61958] by server-14.bemta-3.messagelabs.com id
	11/A5-08196-1118AE25; Thu, 30 Jan 2014 16:42:57 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391100174!12368352!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4639 invoked from network); 30 Jan 2014 16:42:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:42:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="96203396"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 16:42:54 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 30 Jan 2014 11:42:54 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 30 Jan 2014 17:42:53 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <Andrew.Cooper3@citrix.com>
Thread-Topic: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI
	hotplug controller implementation into Xen
Thread-Index: AQHPHcZafuHlqvfYM0ajpdfz4KchSJqdWzcAgAARQVD///qsgIAAEbBw
Date: Thu, 30 Jan 2014 16:42:52 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0217BF6@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-6-git-send-email-paul.durrant@citrix.com>
	<52EA760E.4010400@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0217A88@AMSPEX01CL01.citrite.net>
	<52EA8E1F0200007800118430@nat28.tlf.novell.com>
In-Reply-To: <52EA8E1F0200007800118430@nat28.tlf.novell.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
 controller implementation into Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 30 January 2014 16:39
> To: Andrew Cooper; Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 5/5] ioreq-server: bring the PCI hotplug
> controller implementation into Xen
> 
> >>> On 30.01.14 at 17:06, Paul Durrant <Paul.Durrant@citrix.com> wrote:
> >> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> >> > +struct xen_hvm_pci_hotplug {
> >> > +    domid_t domid;          /* IN - domain to be serviced */
> >> > +    uint8_t enable;         /* IN - enable or disable? */
> >> > +    uint32_t slot;          /* IN - slot to enable/disable */
> >>
> >> Reordering these two will make the structure smaller.
> >>
> >
> > It will indeed.
> 
> Now I'm confused: domid_t being 16 bits, afaict re-ordering would
> make it larger (from 8 to 12 bytes) rather than smaller.
> 

Sorry, I had it in my head that domids were 32-bits. You are correct... which is probably why I used that ordering in the first place ;-)

  Paul

> What I'd certainly recommend is filling the 1 byte that's currently
> unused (either by widening "enabled" or with a padding field) such
> that eventual future extensions (flags?) could be added (i.e. the
> field would need to be checked to be zero).
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uib-0004pm-6m; Thu, 30 Jan 2014 16:43:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1W8uiZ-0004pV-On; Thu, 30 Jan 2014 16:43:55 +0000
Received: from [193.109.254.147:55559] by server-15.bemta-14.messagelabs.com
	id 83/36-10839-A418AE25; Thu, 30 Jan 2014 16:43:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391100230!949493!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10402 invoked from network); 30 Jan 2014 16:43:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:43:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98170635"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 16:43:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 11:43:01 -0500
Message-ID: <1391100180.9495.28.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Yun Wang <bimingery@gmail.com>
Date: Thu, 30 Jan 2014 16:43:00 +0000
In-Reply-To: <CAL3hBVoWYadkguKZHwopMTWL7E3ygCJPJnqVbdmKBm5ADhzUXQ@mail.gmail.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
	<CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
	<1391096671.9495.10.camel@kazak.uk.xensource.com>
	<CAL3hBVrJRhQvt7SH0i473Y5SiPLbUN8OLMuW7wXQwOtHHjGoAQ@mail.gmail.com>
	<1391098340.9495.15.camel@kazak.uk.xensource.com>
	<CAL3hBVoWYadkguKZHwopMTWL7E3ygCJPJnqVbdmKBm5ADhzUXQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 09:35 -0700, Yun Wang wrote:
> See if I am doing it correctly.
> 
> 1. setup a loopback device
> losetup /dev/loop0 /vms/centos65_pv.img
> 
> 2. check the output of "losetup -a"
> /dev/loop0: [fb00]:926576 (/vms/centos65_pv.img)
> 
> 3. use the loopback device in the PV config
> disk = [ 'phy:/dev/loop0, xvda, w']
> 
> However, similar error still exists.

Oh, you'll also need to nix your vfb I'm afraid...

If you are comfortable patching C code then you could try modifying
tools/libxl/libxl.c:libxl_set_vcpuonline() to call libxl__domain_type
and, for PV guests, arrange to call libxl__set_vcpuonline_xenstore,
keeping the existing logic that chooses between _xenstore and _qmp for
HVM guests.

If you are comfortable doing that then please submit a patch.
http://wiki.xen.org/wiki/Submitting_Xen_Patches has some general
guidance.

Ian.

> 
> On Thu, Jan 30, 2014 at 9:12 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2014-01-30 at 09:04 -0700, Yun Wang wrote:
> >> What disk option should I use for a PV guest? tap2:aio seems to have a
> >> similar issue with "xl vcpu-set"
> >
> > To workaround the issue you would need phy: I think, perhaps by setting
> > up a loopback device on the raw image by hand.
> >
> > Hopefully the underlying issue can get fixed though.
> >
> > Ian.
> >



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:49:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8unb-0005Fu-RO; Thu, 30 Jan 2014 16:49:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8unY-0005Fa-JS
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:49:06 +0000
Received: from [85.158.137.68:23156] by server-4.bemta-3.messagelabs.com id
	F4/15-11750-F728AE25; Thu, 30 Jan 2014 16:49:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391100541!12409405!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8384 invoked from network); 30 Jan 2014 16:49:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:49:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,750,1384300800"; d="scan'208";a="98174237"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 16:49:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 11:49:00 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8unT-0001et-PN;
	Thu, 30 Jan 2014 16:48:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8unT-0004oR-Nn;
	Thu, 30 Jan 2014 16:48:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24641-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 16:48:59 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24641: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24641 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24641/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              4 kernel-build              fail REGR. vs. 24397
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24397
 build-amd64                   4 xen-build                 fail REGR. vs. 24397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 16:56:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 16:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8uuh-0005iN-Sc; Thu, 30 Jan 2014 16:56:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W8uuh-0005iI-3E
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 16:56:27 +0000
Received: from [193.109.254.147:32719] by server-16.bemta-14.messagelabs.com
	id CB/5B-21945-A348AE25; Thu, 30 Jan 2014 16:56:26 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391100983!949579!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19045 invoked from network); 30 Jan 2014 16:56:25 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 16:56:25 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 30 Jan 2014 09:56:17 -0700
Message-ID: <52EA8430.7010800@suse.com>
Date: Thu, 30 Jan 2014 09:56:16 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>	<52D9AECF.6050309@suse.com>	<52DD678F.3070504@suse.com>	<21214.37402.648941.864060@mariner.uk.xensource.com>	<52DF57E2.2090602@suse.com>	<52E09513.6060603@suse.com>	<21216.62800.746512.422459@mariner.uk.xensource.com>	<52E1EB97.4080007@suse.com>	<21218.24466.92095.134875@mariner.uk.xensource.com>	<52E70A58.2060002@suse.com>
	<21226.32158.954708.954668@mariner.uk.xensource.com>
In-Reply-To: <21226.32158.954708.954668@mariner.uk.xensource.com>
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
>   
>> Looking at the libvirt code again, it seems a single thread services the
>> event loop. See virNetServerRun() in src/util/virnetserver.c. Indeed, I
>> see the same thread ID in all the timer and fd callbacks. One of the
>> libvirt core devs can correct me if I'm wrong.
>>     
>
> OK.  So just to recap where we stand:
>
>  * I think libxl needs the SIGCHLD flexibility series.  I'll repost
>    that (v3) but it's had hardly any changes.
>   

Ok, thanks.  I'm currently testing on your git branch referenced earlier
in this thread

git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v2.1

>  * You need to fix the timer deregistration arrangements in the
>    libvirt/libxl driver to avoid the crash you identified the other day.
>   

Yes, I'm testing a fix now.

>  * Something needs to be done about the 20ms slop in the libvirt event
>    loop (as it could cause libxl to lock up).  If you can't get rid of
>    it in the libvirt core, then adding 20ms to every requested
>    callback time in the libvirt/libxl driver would work for now.
>   

The commit msg adding the fuzz says

    Fix event test timer checks on kernels with HZ=100
   
    On kernels with HZ=100, the resolution of sleeps in poll() is
    quite bad. Doing a precise check on the expiry time vs the
    current time will thus often think the timer has not expired
    even though we're within 10ms of the expected expiry time. This
    then causes another pointless sleep in poll() for <10ms. Timers
    do not need to have such precise expiration, so we treat a timer
    as expired if it is within 20ms of the expected expiry time. This
    also fixes the eventtest.c test suite on kernels with HZ=100
   
    * daemon/event.c: Add 20ms fuzz when checking for timer expiry


I could handle this in the libxl driver as you say, but doing so makes
me a bit nervous.  Potentially locking up libxl makes me nervous too :).
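
The fuzz the commit message describes amounts to a check like the
following (a standalone sketch; the names are illustrative and this is
not libvirt's actual code):

```c
#include <assert.h>

/* Treat a timer as expired if it is due within TIMER_FUZZ_MS of now,
 * so that coarse poll() wakeups on HZ=100 kernels do not cause an
 * extra sub-10ms sleep just to hit the exact deadline. */
#define TIMER_FUZZ_MS 20

static int timer_expired(long long expires_at_ms, long long now_ms)
{
    return expires_at_ms <= now_ms + TIMER_FUZZ_MS;
}
```

The concern in this thread is the flip side of that check: a callback
can fire up to 20ms early, so a driver that must never be called back
early would have to add the same 20ms to every deadline it registers.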

>  * I think we can get away with not doing anything about the fd
>    deregistration race in libvirt because both Linux and FreeBSD have
>    behaviours that are tolerable.
>
>  * libxl should have the fd deregistration race fixed in Xen 4.5.
>
> Have you managed to fix the timer deregistration crash, and retest ?
>   

Yes.  I've been running my tests for about 24 hours now with no problems
noted.  The tests include starting/stopping a persistent VM,
creating/stopping a transient VM, rebooting a persistent VM,
saving/restoring a transient VM, and getting info on all of these VMs.

I should probably add saving/restoring a persistent VM to the mix, since
the associated libxl_ctx is not freed in that case; it is only freed
when a persistent VM is undefined.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 17:13:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 17:13:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8vAZ-0006MG-V0; Thu, 30 Jan 2014 17:12:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8vAY-0006MB-6a
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 17:12:50 +0000
Received: from [193.109.254.147:10621] by server-16.bemta-14.messagelabs.com
	id 5B/D2-21945-1188AE25; Thu, 30 Jan 2014 17:12:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391101967!958401!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1511 invoked from network); 30 Jan 2014 17:12:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 17:12:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="98184889"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 30 Jan 2014 17:12:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 12:12:46 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8vAU-0001mO-G3;
	Thu, 30 Jan 2014 17:12:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W8vAU-0001FT-8w;
	Thu, 30 Jan 2014 17:12:46 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21226.34829.848029.137396@mariner.uk.xensource.com>
Date: Thu, 30 Jan 2014 17:12:45 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52EA8430.7010800@suse.com>
References: <1389975845-1195-1-git-send-email-ian.jackson@eu.citrix.com>
	<52D9AECF.6050309@suse.com> <52DD678F.3070504@suse.com>
	<21214.37402.648941.864060@mariner.uk.xensource.com>
	<52DF57E2.2090602@suse.com> <52E09513.6060603@suse.com>
	<21216.62800.746512.422459@mariner.uk.xensource.com>
	<52E1EB97.4080007@suse.com>
	<21218.24466.92095.134875@mariner.uk.xensource.com>
	<52E70A58.2060002@suse.com>
	<21226.32158.954708.954668@mariner.uk.xensource.com>
	<52EA8430.7010800@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: LibVir <libvir-list@redhat.com>, xen-devel@lists.xensource.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/12] libxl: fork: SIGCHLD flexibility"):
> Ok, thanks.  I'm currently testing on your git branch referenced earlier
> in this thread
> 
> git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v2.1

Great.  That's the one.  My current version is pretty much identical -
some unused variables deleted and comments edited.

> >  * You need to fix the timer deregistration arrangements in the
> >    libvirt/libxl driver to avoid the crash you identified the other day.
> 
> Yes, I'm testing a fix now.

Great.

> >  * Something needs to be done about the 20ms slop in the libvirt event
> >    loop (as it could cause libxl to lock up).  If you can't get rid of
> >    it in the libvirt core, then adding 20ms to every requested
> >    callback time in the libvirt/libxl driver would work for now.
> >   
> 
> The commit msg adding the fuzz says
> 
>     Fix event test timer checks on kernels with HZ=100
>    
>     On kernels with HZ=100, the resolution of sleeps in poll() is
>     quite bad. Doing a precise check on the expiry time vs the
>     current time will thus often think the timer has not expired
>     even though we're within 10ms of the expected expiry time. This
>     then causes another pointless sleep in poll() for <10ms. Timers
>     do not need to have such precise expiration, so we treat a timer
>     as expired if it is within 20ms of the expected expiry time. This
>     also fixes the eventtest.c test suite on kernels with HZ=100

I think this is a bug in the kernel.  poll() may sleep longer, but not
shorter, than expected.

>     * daemon/event.c: Add 20ms fuzz when checking for timer expiry
> 
> I could handle this in the libxl driver as you say, but doing so makes
> me a bit nervous.  Potentially locking up libxl makes me nervous too :).

I was going to say that the code in libxl_osevent_occurred_timeout
checked the time against the requested time and would ignore the event
(thinking it was stale) if it was too early.

But in fact now that I read the code this is not true.  In fact I
think it will work OK (modulo some things happening too soon).  So the
upshot is that I still think this is a bug in libvirt but I don't
think it's critical to fix it.

Sorry to cause undue alarm.
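For clarity, the 20ms fuzz under discussion is just an early-expiry test in the event loop, something like the following sketch (illustrative only, not the actual daemon/event.c code; timer_expired is a hypothetical name):

```c
/* Sketch of the fuzz described in the commit message: treat a timer as
 * expired once the current time is within 20ms of its deadline, so a
 * HZ=100 kernel's coarse poll() wakeups don't cause extra <10ms sleeps. */
static int timer_expired(long long now_ms, long long expires_ms)
{
    return now_ms >= expires_ms - 20;
}
```

The consequence discussed above is that a callback can run up to 20ms before the requested time, which a caller such as libxl has to tolerate.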

> Yes.  I've been running my tests for about 24 hours now with no problems
> noted.  The tests include starting/stopping a persistent VM,
> creating/stopping a transient VM, rebooting a persistent VM,
> saving/restoring a transient VM, and getting info on all of these VMs.
> 
> I should probably add saving/restoring a persistent VM to the mix since
> the associated libxl_ctx is never freed.  Only when a persistent VM is
> undefined is the libxl_ctx freed.

Right.  Great.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 17:14:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 17:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8vCY-0006Rd-Hb; Thu, 30 Jan 2014 17:14:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bimingery@gmail.com>)
	id 1W8uaS-00041n-CK; Thu, 30 Jan 2014 16:35:32 +0000
Received: from [85.158.137.68:31787] by server-3.bemta-3.messagelabs.com id
	6D/D3-14520-35F7AE25; Thu, 30 Jan 2014 16:35:31 +0000
X-Env-Sender: bimingery@gmail.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391099729!12377290!1
X-Originating-IP: [209.85.219.48]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28545 invoked from network); 30 Jan 2014 16:35:30 -0000
Received: from mail-oa0-f48.google.com (HELO mail-oa0-f48.google.com)
	(209.85.219.48)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 16:35:30 -0000
Received: by mail-oa0-f48.google.com with SMTP id l6so3834635oag.7
	for <multiple recipients>; Thu, 30 Jan 2014 08:35:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=gv4ra8R9qZ2Hkikgezb3rjejgvLmFLqwL9gypjwB6c8=;
	b=Rpi4gMJSKnkKmkD2JhLI2WaayPVKGv+qZAPtjGrEvIbUMUWoo0qzt8V9Dk2aTQ/hru
	EsTwPS5P86MDlvfOrD1j/bEg6qG5Ft1xyD0xMcyUhwH99grzqqdDT8vEjd4Jaw8dnxUN
	2hotRjhob55FDPPl38WS+ZlAss5akcW42byGdRsgreZZUrbjVsILuTNmReAVgF81WXcP
	6vmpJt5yZBLX/230EMAY0rkMIUe5jWPqlNA8e1Ge0/YMA+fi0ii5sW42orPyG7Cf+FfF
	mfDpPM2Fl3XTpWD7clUicMnHpN6NQgP25RSjEek0CgTv1l2VW1RQkXWle9An6TA8WB6n
	AAeA==
MIME-Version: 1.0
X-Received: by 10.60.135.130 with SMTP id ps2mr8211888oeb.46.1391099728979;
	Thu, 30 Jan 2014 08:35:28 -0800 (PST)
Received: by 10.60.29.70 with HTTP; Thu, 30 Jan 2014 08:35:28 -0800 (PST)
In-Reply-To: <1391098340.9495.15.camel@kazak.uk.xensource.com>
References: <CAL3hBVrWpw8c1DdG10B-S2TP-ntTJ1biVNjt5SvszD4hpWaQOA@mail.gmail.com>
	<1380619471.925.49.camel@kazak.uk.xensource.com>
	<CAL3hBVpJ5Tf32Vti+90Gu6QQoPgriMYcbtOxomzmcnpwtKYebA@mail.gmail.com>
	<1390990304.31814.50.camel@kazak.uk.xensource.com>
	<20140129121827.GA1775@perard.uk.xensource.com>
	<1390998137.31814.92.camel@kazak.uk.xensource.com>
	<20140129123648.GB1775@perard.uk.xensource.com>
	<1390999187.31814.97.camel@kazak.uk.xensource.com>
	<20140129130434.GD1775@perard.uk.xensource.com>
	<1391001261.31814.115.camel@kazak.uk.xensource.com>
	<CAL3hBVq6S_xBSwF7nj6Jmx+sbLOpcGgVP_oZ9tJ+v2pE1XXqhw@mail.gmail.com>
	<1391006755.31814.129.camel@kazak.uk.xensource.com>
	<CAL3hBVpg9WH+LDxjedzUcMKUMhxybWOcmNX1AxbRCmTKkYcjYQ@mail.gmail.com>
	<1391096671.9495.10.camel@kazak.uk.xensource.com>
	<CAL3hBVrJRhQvt7SH0i473Y5SiPLbUN8OLMuW7wXQwOtHHjGoAQ@mail.gmail.com>
	<1391098340.9495.15.camel@kazak.uk.xensource.com>
Date: Thu, 30 Jan 2014 09:35:28 -0700
Message-ID: <CAL3hBVoWYadkguKZHwopMTWL7E3ygCJPJnqVbdmKBm5ADhzUXQ@mail.gmail.com>
From: Yun Wang <bimingery@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailman-Approved-At: Thu, 30 Jan 2014 17:14:53 +0000
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-users@lists.xen.org,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-users] Issues with vcpu-set
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

See if I am doing it correctly.

1. setup a loopback device
losetup /dev/loop0 /vms/centos65_pv.img

2. check the output of "losetup -a"
/dev/loop0: [fb00]:926576 (/vms/centos65_pv.img)

3. use the loopback device in the PV config
disk = [ 'phy:/dev/loop0, xvda, w']

However, a similar error still exists.

On Thu, Jan 30, 2014 at 9:12 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-30 at 09:04 -0700, Yun Wang wrote:
>> What disk option should I use for a PV guest? tap2:aio seems to have a
>> similar issue with "xl vcpu-set"
>
> To work around the issue you would need phy:, I think, perhaps by setting
> up a loopback device on the raw image by hand.
>
> Hopefully the underlying issue can get fixed though.
>
> Ian.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 17:18:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 17:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8vGD-0006i0-6a; Thu, 30 Jan 2014 17:18:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8vGB-0006gS-A3
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 17:18:39 +0000
Received: from [85.158.139.211:10257] by server-2.bemta-5.messagelabs.com id
	41/37-23037-E698AE25; Thu, 30 Jan 2014 17:18:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391102316!676007!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14013 invoked from network); 30 Jan 2014 17:18:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 17:18:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96220090"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 17:18:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 12:18:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8vG6-00011K-K3;
	Thu, 30 Jan 2014 17:18:34 +0000
Date: Thu, 30 Jan 2014 17:18:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> 1.2 I have also checked a solution where the on_selected_cpus call was
> moved out of the
> interrupt handler. Unfortunately, it doesn't work.
> 
> I almost immediately see the following error:
> (XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
> (XEN) Xen BUG at gic.c:981
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> (XEN) CPU:    1
> (XEN) PC:     00241ee0 __bug+0x2c/0x44
> (XEN) CPSR:   2000015a MODE:Hypervisor
> (XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
> (XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
> (XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
> (XEN) HYP: SP: 40037eb4 LR: 00241ee0
> (XEN)
> (XEN)   VTCR_EL2: 80002558
> (XEN)  VTTBR_EL2: 00010000deffc000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd187f
> (XEN)    HCR_EL2: 0000000000002835
> (XEN)  TTBR0_EL2: 00000000d2014000
> (XEN)
> (XEN)    ESR_EL2: 00000000
> (XEN)  HPFAR_EL2: 0000000000482110
> (XEN)      HDFAR: fa211f00
> (XEN)      HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=40037eb4:
> (XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
> (XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
> (XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
> (XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
> (XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
> (XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
> (XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
> (XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
> (XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
> (XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
> (XEN)    00000000 0c41e00c 450c2880
> (XEN) Xen call trace:
> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
> (XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
> (XEN)    [<00249068>] do_IRQ+0x138/0x198
> (XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
> (XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
> (XEN)    [<00251a30>] return_from_trap+0/0x4
> (XEN)

Are you seeing more than one interrupt being EOI'ed with a single
maintenance interrupt?
I didn't think it could be possible in practice.
If so, we might have to turn eoi_irq into a list or an array.
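Turning eoi_irq into a small per-CPU array could be sketched along these lines; the names and the bound are purely illustrative, not the actual gic.c state:

```c
/* Illustrative sketch only: replace the single per-cpu eoi_irq pointer
 * with a small stack of IRQs awaiting EOI, so that one maintenance
 * interrupt can EOI several interrupts without tripping the assertion.
 * MAX_PENDING_EOI, struct pending_eoi, push_eoi and pop_eoi are all
 * hypothetical names. */
#define MAX_PENDING_EOI 4

struct pending_eoi {
    int irqs[MAX_PENDING_EOI];
    int count;
};

static int push_eoi(struct pending_eoi *p, int irq)
{
    if (p->count >= MAX_PENDING_EOI)
        return -1;                 /* would previously have hit the ASSERT */
    p->irqs[p->count++] = irq;
    return 0;
}

static int pop_eoi(struct pending_eoi *p)
{
    /* LIFO: returns the most recently queued IRQ, or -1 if none. */
    return p->count > 0 ? p->irqs[--p->count] : -1;
}
```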


> 2. The "simultaneous cross-interrupts" issue doesn't occur if I use
> the following solution:
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..af96a31 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> 
>      level = dt_irq_is_level_triggered(irq);
> 
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
> 
>      retval = __setup_irq(desc, irq->irq, action);
>      if (retval) {
> So, as a result, I don't see the deadlock in on_selected_cpus().

As I stated before I think this is a good change to have in 4.4.


> But, rarely, I see deadlocks in other parts related to interrupt handling.
> As noted by Julien, I am using the old version of the interrupt patch series.
> I completely agree.
> 
> We are based on the following Xen commit:
> 48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list
> 
> We also have some patches that we cherry-picked when we urgently needed them:
> 6bba1a3 xen/arm: Keep count of inflight interrupts
> 33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
> b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
> 5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
> 1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ
> 
> I have to apply the following patches and test with them:
> 88eb95e xen/arm: disable a physical IRQ when the guest disables the
> corresponding IRQ
> a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
> 1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
> d16d511 xen/arm: track the state of guest IRQs
> 
> I'll report about the results. I hope to do it today.

I am looking forward to reading your report.
Cheers,

Stefano

> A lot of thanks to all.
> 
> On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > Given that we don't deactivate the interrupt (writing to GICC_DIR) until
> > the guest EOIs it, I can't understand how you manage to get a second
> > interrupt notifications before the guest EOIs the first one.
> >
> > Do you set GICC_CTL_EOI in GICC_CTLR?
> >
> > On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> >> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
> >>
> >> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > Is it a level or an edge irq?
> >> >
> >> > On Wed, 29 Jan 2014, Julien Grall wrote:
> >> >> Hi,
> >> >>
> >> >> It's weird, physical IRQ should not be injected twice ...
> >> >> Were you able to print the IRQ number?
> >> >>
> >> >> In any case, you are using the old version of the interrupt patch series.
> >> >> Your new error may come of race condition in this code.
> >> >>
> >> >> Can you try to use a newest version?
> >> >>
> >> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
> >> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
> >> >>       > difference for xen-unstable (it should make things clearer, if nothing
> >> >>       > else) but it should fix things for Oleksandr.
> >> >>
> >> >>       Unfortunately, it is not enough for stable work.
> >> >>
> >> >>       I was tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
> >> >>       gic_route_irq_to_guest(). And as result, I don't see our situation
> >> >>       which cause to deadlock in on_selected_cpus function (expected).
> >> >>       But, hypervisor sometimes hangs somewhere else (I have not identified
> >> >>       yet where this is happening) or I sometimes see traps, like that:
> >> >>       ("WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them)
> >> >>
> >> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
> >> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> >> >>       (XEN) CPU:    1
> >> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
> >> >>       (XEN) CPSR:   200001da MODE:Hypervisor
> >> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> >> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> >> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> >> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> >> >>       (XEN)
> >> >>       (XEN)   VTCR_EL2: 80002558
> >> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
> >> >>       (XEN)
> >> >>       (XEN)  SCTLR_EL2: 30cd187f
> >> >>       (XEN)    HCR_EL2: 00000000000028b5
> >> >>       (XEN)  TTBR0_EL2: 00000000d2014000
> >> >>       (XEN)
> >> >>       (XEN)    ESR_EL2: 00000000
> >> >>       (XEN)  HPFAR_EL2: 0000000000482110
> >> >>       (XEN)      HDFAR: fa211190
> >> >>       (XEN)      HIFAR: 00000000
> >> >>       (XEN)
> >> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
> >> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
> >> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
> >> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
> >> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
> >> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
> >> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
> >> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
> >> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
> >> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
> >> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
> >> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
> >> >>       (XEN) Xen call trace:
> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> >> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> >> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> >> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> >> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> >> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
> >> >>       (XEN)
> >> >>
> >> >>       Also I am posting maintenance_interrupt() from my tree:
> >> >>
> >> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
> >> >>       cpu_user_regs *regs)
> >> >>       {
> >> >>           int i = 0, virq, pirq;
> >> >>           uint32_t lr;
> >> >>           struct vcpu *v = current;
> >> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >> >>
> >> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
> >> >>                                     64, i)) < 64) {
> >> >>               struct pending_irq *p, *n;
> >> >>               int cpu, eoi;
> >> >>
> >> >>               cpu = -1;
> >> >>               eoi = 0;
> >> >>
> >> >>               spin_lock_irq(&gic.lock);
> >> >>               lr = GICH[GICH_LR + i];
> >> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
> >> >>
> >> >>               p = irq_to_pending(v, virq);
> >> >>               if ( p->desc != NULL ) {
> >> >>                   p->desc->status &= ~IRQ_INPROGRESS;
> >> >>                   /* Assume only one pcpu needs to EOI the irq */
> >> >>                   cpu = p->desc->arch.eoi_cpu;
> >> >>                   eoi = 1;
> >> >>                   pirq = p->desc->irq;
> >> >>               }
> >> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
> >> >>               {
> >> >>                   /* Physical IRQ can't be reinject */
> >> >>                   WARN_ON(p->desc != NULL);
> >> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> >> >>                   spin_unlock_irq(&gic.lock);
> >> >>                   i++;
> >> >>                   continue;
> >> >>               }
> >> >>
> >> >>               GICH[GICH_LR + i] = 0;
> >> >>               clear_bit(i, &this_cpu(lr_mask));
> >> >>
> >> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
> >> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
> >> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
> >> >>                   list_del_init(&n->lr_queue);
> >> >>                   set_bit(i, &this_cpu(lr_mask));
> >> >>               } else {
> >> >>                   gic_inject_irq_stop();
> >> >>               }
> >> >>               spin_unlock_irq(&gic.lock);
> >> >>
> >> >>               spin_lock_irq(&v->arch.vgic.lock);
> >> >>               list_del_init(&p->inflight);
> >> >>               spin_unlock_irq(&v->arch.vgic.lock);
> >> >>
> >> >>               if ( eoi ) {
> >> >>                   /* this is not racy because we can't receive another irq of the
> >> >>                    * same type until we EOI it.  */
> >> >>                   if ( cpu == smp_processor_id() )
> >> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
> >> >>                   else
> >> >>                       on_selected_cpus(cpumask_of(cpu),
> >> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> >> >>               }
> >> >>
> >> >>               i++;
> >> >>           }
> >> >>       }
> >> >>
> >> >>
> >> >>       Oleksandr Tyshchenko | Embedded Developer
> >> >>       GlobalLogic
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
> >> --
> >>
> >> Name | Title
> >> GlobalLogic
> >> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> >> www.globallogic.com
> >>
> >> http://www.globallogic.com/email_disclaimer.txt
> >>
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 17:18:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 17:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8vGD-0006i0-6a; Thu, 30 Jan 2014 17:18:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8vGB-0006gS-A3
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 17:18:39 +0000
Received: from [85.158.139.211:10257] by server-2.bemta-5.messagelabs.com id
	41/37-23037-E698AE25; Thu, 30 Jan 2014 17:18:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391102316!676007!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14013 invoked from network); 30 Jan 2014 17:18:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 17:18:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96220090"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 17:18:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 12:18:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8vG6-00011K-K3;
	Thu, 30 Jan 2014 17:18:34 +0000
Date: Thu, 30 Jan 2014 17:18:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> 1.2 I also have checked solution where on_selected_cpus call was moved
> out of the
> interrupt handler. Unfortunately, it doesn't work.
> 
> I almost immediately see next error:
> (XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
> (XEN) Xen BUG at gic.c:981
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> (XEN) CPU:    1
> (XEN) PC:     00241ee0 __bug+0x2c/0x44
> (XEN) CPSR:   2000015a MODE:Hypervisor
> (XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
> (XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
> (XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
> (XEN) HYP: SP: 40037eb4 LR: 00241ee0
> (XEN)
> (XEN)   VTCR_EL2: 80002558
> (XEN)  VTTBR_EL2: 00010000deffc000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd187f
> (XEN)    HCR_EL2: 0000000000002835
> (XEN)  TTBR0_EL2: 00000000d2014000
> (XEN)
> (XEN)    ESR_EL2: 00000000
> (XEN)  HPFAR_EL2: 0000000000482110
> (XEN)      HDFAR: fa211f00
> (XEN)      HIFAR: 00000000
> (XEN)
> (XEN) Xen stack trace from sp=40037eb4:
> (XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
> (XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
> (XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
> (XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
> (XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
> (XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
> (XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
> (XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
> (XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
> (XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
> (XEN)    00000000 0c41e00c 450c2880
> (XEN) Xen call trace:
> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
> (XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
> (XEN)    [<00249068>] do_IRQ+0x138/0x198
> (XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
> (XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
> (XEN)    [<00251a30>] return_from_trap+0/0x4
> (XEN)

Are you seeing more than one interrupt being EOI'ed with a single
maintenance interrupt?
I didn't think that was possible in practice.
If so, we might have to turn eoi_irq into a list or an array.
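
For what it's worth, a minimal sketch of what "turning eoi_irq into an array"
could look like: a small bounded FIFO of IRQ numbers instead of a single
per-cpu slot. All names here (eoi_queue, eoi_push, eoi_pop, EOI_QUEUE_LEN)
are invented for illustration and do not correspond to Xen's actual per-cpu
machinery:

```c
#include <stddef.h>

/* Illustrative only: a bounded FIFO that could replace a single
 * per-cpu eoi_irq slot, so that several IRQs EOI'ed by one maintenance
 * interrupt can all be deferred.  These are made-up names, not Xen APIs. */
#define EOI_QUEUE_LEN 8

struct eoi_queue {
    int irqs[EOI_QUEUE_LEN];
    unsigned int head, tail;        /* head == tail means empty */
};

/* Returns 0 on success, -1 if the queue is full. */
static int eoi_push(struct eoi_queue *q, int irq)
{
    unsigned int next = (q->tail + 1) % EOI_QUEUE_LEN;

    if (next == q->head)
        return -1;                  /* full: caller must handle the EOI now */
    q->irqs[q->tail] = irq;
    q->tail = next;
    return 0;
}

/* Returns the oldest queued IRQ, or -1 if the queue is empty. */
static int eoi_pop(struct eoi_queue *q)
{
    int irq;

    if (q->head == q->tail)
        return -1;
    irq = q->irqs[q->head];
    q->head = (q->head + 1) % EOI_QUEUE_LEN;
    return irq;
}
```

The consumer would then drain the queue with eoi_pop() in a loop instead of
reading one variable.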


> 2. The "simultaneous cross-interrupts" issue doesn't occur if I use the
> following solution; as a result, I don't see the deadlock in on_selected_cpus():
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..af96a31 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
> struct dt_irq *irq,
> 
>      level = dt_irq_is_level_triggered(irq);
> 
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
> 
>      retval = __setup_irq(desc, irq->irq, action);
>      if (retval) {
> As a result, I don't see the deadlock in on_selected_cpus().
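
To make the deadlock this change avoids concrete: each CPU in
maintenance_interrupt() has interrupts masked while on_selected_cpus() waits
for the target CPU to run gic_irq_eoi. If two CPUs pick each other as EOI
targets at the same moment, neither IPI can be delivered. A toy wait-for
model of that condition (illustrative names only, no Xen code):

```c
/* Wait-for model for two CPUs: the argument for cpu i is the CPU that
 * cpu i is blocked on inside on_selected_cpus(), or -1 if it EOIs
 * locally.  A 2-cycle (0 waits on 1 while 1 waits on 0) is the deadlock:
 * both CPUs spin with interrupts masked and neither can take the IPI. */
static int cross_eoi_deadlock(int cpu0_waits_on, int cpu1_waits_on)
{
    return cpu0_waits_on == 1 && cpu1_waits_on == 0;
}
```

With cpumask_of(smp_processor_id()) routing, eoi_cpu can be either CPU, so
the (1, 0) case is reachable; with cpumask_of(0) every eoi_cpu is 0, CPU0
always EOIs locally, and the cycle cannot form.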

As I stated before, I think this is a good change to have in 4.4.


> But, rarely, I see deadlocks in other parts related to interrupt handling.
> As noted by Julien, I am using an old version of the interrupt patch series;
> I completely agree.
> 
> We are based on the following Xen commit:
> 48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list
> 
> We also have some patches that we cherry-picked when we urgently needed them:
> 6bba1a3 xen/arm: Keep count of inflight interrupts
> 33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
> b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
> 5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
> 1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ
> 
> I still have to apply the following patches and test with them:
> 88eb95e xen/arm: disable a physical IRQ when the guest disables the
> corresponding IRQ
> a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
> 1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
> d16d511 xen/arm: track the state of guest IRQs
> 
> I'll report the results; I hope to do so today.

I am looking forward to reading your report.
Cheers,

Stefano

> Many thanks to all.
> 
> On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > Given that we don't deactivate the interrupt (writing to GICC_DIR) until
> > the guest EOIs it, I can't understand how you manage to get a second
> > interrupt notification before the guest EOIs the first one.
> >
> > Do you set GICC_CTL_EOI in GICC_CTLR?
> >
> > On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
> >> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
> >>
> >> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
> >> <stefano.stabellini@eu.citrix.com> wrote:
> >> > Is it a level or an edge irq?
> >> >
> >> > On Wed, 29 Jan 2014, Julien Grall wrote:
> >> >> Hi,
> >> >>
> >> >> It's weird; a physical IRQ should not be injected twice ...
> >> >> Were you able to print the IRQ number?
> >> >>
> >> >> In any case, you are using the old version of the interrupt patch series.
> >> >> Your new error may come from a race condition in this code.
> >> >>
> >> >> Can you try a newer version?
> >> >>
> >> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
> >> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
> >> >>       > difference for xen-unstable (it should make things clearer, if nothing
> >> >>       > else) but it should fix things for Oleksandr.
> >> >>
> >> >>       Unfortunately, it is not enough for stable operation.
> >> >>
> >> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
> >> >>       gic_route_irq_to_guest(). As a result, I don't see the situation
> >> >>       which causes the deadlock in on_selected_cpus() (as expected).
> >> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
> >> >>       identified where), or I sometimes see traps like the following
> >> >>       (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
> >> >>
> >> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
> >> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
> >> >>       (XEN) CPU:    1
> >> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
> >> >>       (XEN) CPSR:   200001da MODE:Hypervisor
> >> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
> >> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
> >> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
> >> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
> >> >>       (XEN)
> >> >>       (XEN)   VTCR_EL2: 80002558
> >> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
> >> >>       (XEN)
> >> >>       (XEN)  SCTLR_EL2: 30cd187f
> >> >>       (XEN)    HCR_EL2: 00000000000028b5
> >> >>       (XEN)  TTBR0_EL2: 00000000d2014000
> >> >>       (XEN)
> >> >>       (XEN)    ESR_EL2: 00000000
> >> >>       (XEN)  HPFAR_EL2: 0000000000482110
> >> >>       (XEN)      HDFAR: fa211190
> >> >>       (XEN)      HIFAR: 00000000
> >> >>       (XEN)
> >> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
> >> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
> >> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
> >> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
> >> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
> >> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
> >> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
> >> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
> >> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
> >> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
> >> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
> >> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
> >> >>       (XEN) Xen call trace:
> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
> >> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
> >> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
> >> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
> >> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
> >> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
> >> >>       (XEN)
> >> >>
> >> >>       Also I am posting maintenance_interrupt() from my tree:
> >> >>
> >> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
> >> >>       cpu_user_regs *regs)
> >> >>       {
> >> >>           int i = 0, virq, pirq;
> >> >>           uint32_t lr;
> >> >>           struct vcpu *v = current;
> >> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >> >>
> >> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
> >> >>                                     64, i)) < 64) {
> >> >>               struct pending_irq *p, *n;
> >> >>               int cpu, eoi;
> >> >>
> >> >>               cpu = -1;
> >> >>               eoi = 0;
> >> >>
> >> >>               spin_lock_irq(&gic.lock);
> >> >>               lr = GICH[GICH_LR + i];
> >> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
> >> >>
> >> >>               p = irq_to_pending(v, virq);
> >> >>               if ( p->desc != NULL ) {
> >> >>                   p->desc->status &= ~IRQ_INPROGRESS;
> >> >>                   /* Assume only one pcpu needs to EOI the irq */
> >> >>                   cpu = p->desc->arch.eoi_cpu;
> >> >>                   eoi = 1;
> >> >>                   pirq = p->desc->irq;
> >> >>               }
> >> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
> >> >>               {
> >> >>                   /* A physical IRQ can't be reinjected */
> >> >>                   WARN_ON(p->desc != NULL);
> >> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> >> >>                   spin_unlock_irq(&gic.lock);
> >> >>                   i++;
> >> >>                   continue;
> >> >>               }
> >> >>
> >> >>               GICH[GICH_LR + i] = 0;
> >> >>               clear_bit(i, &this_cpu(lr_mask));
> >> >>
> >> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
> >> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
> >> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
> >> >>                   list_del_init(&n->lr_queue);
> >> >>                   set_bit(i, &this_cpu(lr_mask));
> >> >>               } else {
> >> >>                   gic_inject_irq_stop();
> >> >>               }
> >> >>               spin_unlock_irq(&gic.lock);
> >> >>
> >> >>               spin_lock_irq(&v->arch.vgic.lock);
> >> >>               list_del_init(&p->inflight);
> >> >>               spin_unlock_irq(&v->arch.vgic.lock);
> >> >>
> >> >>               if ( eoi ) {
> >> >>                   /* this is not racy because we can't receive another irq of the
> >> >>                    * same type until we EOI it.  */
> >> >>                   if ( cpu == smp_processor_id() )
> >> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
> >> >>                   else
> >> >>                       on_selected_cpus(cpumask_of(cpu),
> >> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> >> >>               }
> >> >>
> >> >>               i++;
> >> >>           }
> >> >>       }
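
As a side note on the loop above: GICH_EISR0/GICH_EISR1 are combined into a
64-bit image and every set bit names a list register the guest has EOI'ed,
so a single maintenance interrupt can indeed cover several LRs. A
self-contained sketch of that scan (register values are made up, and
find_next_bit is modelled with plain shifts):

```c
#include <stdint.h>

/* Return the index of the next set bit at or after 'start', or 64. */
static int eisr_next(uint64_t eisr, int start)
{
    int i;

    for (i = start; i < 64; i++)
        if (eisr & (1ULL << i))
            return i;
    return 64;
}

/* Combine the two 32-bit EISR registers and count the EOI'ed LRs,
 * mirroring the while/find_next_bit structure of the handler above. */
static int count_eoied_lrs(uint32_t eisr0, uint32_t eisr1)
{
    uint64_t eisr = (uint64_t)eisr0 | ((uint64_t)eisr1 << 32);
    int n = 0;
    int i = 0;

    while ((i = eisr_next(eisr, i)) < 64) {
        n++;        /* one list register handled per set bit */
        i++;
    }
    return n;
}
```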
> >> >>
> >> >>
> >> >>       Oleksandr Tyshchenko | Embedded Developer
> >> >>       GlobalLogic
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
> >> --
> >>
> >> Name | Title
> >> GlobalLogic
> >> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> >> www.globallogic.com
> >>
> >> http://www.globallogic.com/email_disclaimer.txt
> >>
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 17:31:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 17:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8vSB-0007AB-6H; Thu, 30 Jan 2014 17:31:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W8vS9-0007A6-Eg
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 17:31:01 +0000
Received: from [85.158.139.211:54887] by server-5.bemta-5.messagelabs.com id
	44/59-32749-45C8AE25; Thu, 30 Jan 2014 17:31:00 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391103059!663290!1
X-Originating-IP: [81.169.146.219]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5403 invoked from network); 30 Jan 2014 17:30:59 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.219)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 30 Jan 2014 17:30:59 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391103059; l=1158;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=MXZZngdhyHUUBLxE0Pqi+Z6j7RE=;
	b=gqV+poMEEyG/FutVg4N2+F6l9DFF8T4NYjnhln5I4aiyEOXgAD9L+I04IoK8lCCkbDO
	m4kYlu83Fi6niCSn0+xcoD9eA932iHNIKr+P+DEEiZkTcarqUNeImvgGKd047X+n2B62E
	4z03XT9VrWu43EDBu0MgZjqcjYV+ff6s4K8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.22 AUTH) with ESMTPSA id x043e3q0UHUxkWF
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 30 Jan 2014 18:30:59 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id E860F50269; Thu, 30 Jan 2014 18:30:58 +0100 (CET)
Date: Thu, 30 Jan 2014 18:30:58 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140130173058.GA12133@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391099505.9495.23.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, George Dunlap <george.dunlap@eu.citrix.com>,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, Ian Campbell wrote:

> On Thu, 2014-01-30 at 17:25 +0100, Olaf Hering wrote:
> > This change does not break ABI. Instead of adding a new member ->discard_enable
> > to struct libxl_device_disk the existing ->readwrite member is reused.
> Looks like it changes the libxlu ABI though. Or maybe that's totally
> internal?

DiskParseContext is used internally; there is no public header for it.

> TBH -- if you (==suse I guess?) are contemplating carrying this as a
> backport even before 4.4 is out the door we should probably be at least
> considering a freeze exception for 4.4. George CCd for input. (I
> appreciate that "backport=>freeze exception" is a potentially slippery
> slope/ripe for abuse...)

It would mean less work for SUSE if this change were incorporated into 4.4
and later replaced with the "final" version I sent out today.
However, it is small and will be easy to port forward to 4.4.x.

The risk of including such a change is small, as it requires a patched qemu
which actually does discard (1.7?) and a patched frontend driver (pvops
3.15?) before the code paths it enables are actually executed.

Olaf


Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, Ian Campbell wrote:

> On Thu, 2014-01-30 at 17:25 +0100, Olaf Hering wrote:
> > This change does not break ABI. Instead of adding a new member ->discard_enable
> > to struct libxl_device_disk the existing ->readwrite member is reused.
> Looks like it changes the libxlu ABI though. Or maybe that's totally
> internal?

DiskParseContext is only used internally; there is no public header for it.

> TBH -- if you (==suse I guess?) are contemplating carrying this as a
> backport even before 4.4 is out the door we should probably be at least
> considering a freeze exception for 4.4. George CCd for input. (I
> appreciate that "backport=>freeze exception" is a potentially slippery
> slope/ripe for abuse...)

It would make less work for SUSE if this change were incorporated
into 4.4 and later replaced with the "final" version I sent out today.
However, it's small and will be easy to port forward to 4.4.X.

The risk of including such a change is small, as it requires a patched
qemu which actually does discard (1.7?) and a patched frontend driver
(pvops 3.15?) before the codepaths it enables are actually executed.
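For context, an xl disk line exercising such an option might look like
the sketch below. This is an illustration only: the 'discard' keyword and
the paths are assumptions here, and the final syntax is whatever the patch
under discussion defines.

```
# Hypothetical xl disk configuration enabling discard on a qcow2 disk
# (keyword and paths are illustrative, not the committed syntax).
disk = [ 'format=qcow2, vdev=xvda, access=rw, discard, target=/var/lib/xen/images/guest.qcow2' ]
```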

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 18:25:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 18:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8wIZ-0008Pg-9v; Thu, 30 Jan 2014 18:25:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8wIU-0008PY-EH
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 18:25:10 +0000
Received: from [85.158.143.35:38384] by server-1.bemta-4.messagelabs.com id
	CC/9F-31661-1099AE25; Thu, 30 Jan 2014 18:25:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391106303!2009005!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20642 invoked from network); 30 Jan 2014 18:25:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 18:25:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96246946"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 18:25:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 13:25:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8wIQ-0002Co-AY;
	Thu, 30 Jan 2014 18:25:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8wIQ-0002iz-9S;
	Thu, 30 Jan 2014 18:25:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24643-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 18:25:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24643: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24643 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24643/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   4 xen-build                 fail REGR. vs. 24571
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24571
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100
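As a hedged, stubbed illustration of the indentation point above (this is
not Xen's actual code; copy_to_guest_stub() and page_offline_op() are
stand-ins): when the assignment is aligned under the call's arguments it
can be misread as part of the condition, so the arguments are lined up with
the opening parenthesis and the assignment clearly reads as the if-body.

```c
#include <stddef.h>
#include <string.h>

/* Stub standing in for Xen's copy_to_guest(): returns 0 on success,
 * nonzero if any bytes could not be copied. */
static int copy_to_guest_stub(void *dst, const void *src, size_t n)
{
    memcpy(dst, src, n);
    return 0;
}

/* Arguments indented to the opening parenthesis, so 'ret = -14' below
 * is unmistakably the body of the if, not another argument. */
int page_offline_op(unsigned int *guest_buf, const unsigned int *status,
                    size_t count)
{
    int ret = 0;

    if ( copy_to_guest_stub(guest_buf, status,
                            count * sizeof(*status)) )
        ret = -14; /* -EFAULT */

    return ret;
}
```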

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list, and
    T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
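The pattern the fix describes can be sketched with C11 atomics (a stand-in,
not the actual mctelem code; struct elem, list_head and list_push() are
hypothetical names): the CAS loop keeps the expected head in a local that
the CAS itself refreshes on failure, instead of re-reading the head from
memory and comparing by hand, so a concurrent consumer cannot make the
producer mis-detect a failed exchange.

```c
#include <stdatomic.h>
#include <stddef.h>

struct elem {
    struct elem *next;
};

static _Atomic(struct elem *) list_head;

/* Insert 'e' at the head of the list. 'old' plays the role of the
 * temporary variable for the prev pointer: on CAS failure it is refreshed
 * atomically by atomic_compare_exchange_weak(), so we never compare a
 * stale re-read of the head against the CAS result. Note that e->next
 * (the *linkp analogue) is linked before the CAS publishes the element. */
void list_push(struct elem *e)
{
    struct elem *old = atomic_load(&list_head);

    do {
        e->next = old;
    } while ( !atomic_compare_exchange_weak(&list_head, &old, e) );
}
```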

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
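The bug class can be sketched with a stubbed reference counter (this is
not Xen's p2m code; ref_depth, get_gfn_stub(), put_gfn_stub() and
dbg_rw_mem_sketch() are illustrative names): the reference taken on entry
must also be dropped on the early-error return, otherwise each failed
access leaves the count permanently elevated, much like the recurse_count
in the crash dump above.

```c
/* Stub refcount standing in for the p2m lock taken by get_gfn(). */
static int ref_depth;

static void get_gfn_stub(void) { ref_depth++; }
static void put_gfn_stub(void) { ref_depth--; }

/* Returns 0 on success, -1 if the frame is invalid. The put_gfn_stub()
 * call in the error path is the release the commit adds; without it a
 * failed access would never drop the reference taken on entry. */
int dbg_rw_mem_sketch(int frame_valid)
{
    get_gfn_stub();

    if ( !frame_valid )
    {
        put_gfn_stub();   /* previously missing on this path */
        return -1;
    }

    /* ... read or write the guest page here ... */

    put_gfn_stub();
    return 0;
}
```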
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 18:25:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 18:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8wIZ-0008Pg-9v; Thu, 30 Jan 2014 18:25:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8wIU-0008PY-EH
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 18:25:10 +0000
Received: from [85.158.143.35:38384] by server-1.bemta-4.messagelabs.com id
	CC/9F-31661-1099AE25; Thu, 30 Jan 2014 18:25:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391106303!2009005!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20642 invoked from network); 30 Jan 2014 18:25:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 18:25:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96246946"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 18:25:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 13:25:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8wIQ-0002Co-AY;
	Thu, 30 Jan 2014 18:25:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8wIQ-0002iz-9S;
	Thu, 30 Jan 2014 18:25:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24643-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 18:25:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24643: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24643 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24643/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   4 xen-build                 fail REGR. vs. 24571
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24571
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    If there are two threads, T1 inserting on committed list; and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race use temporary variable for prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first read worked because the p2m lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 18:29:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 18:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8wMk-0000HO-14; Thu, 30 Jan 2014 18:29:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W8wMi-0000HI-2J
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 18:29:28 +0000
Received: from [85.158.143.35:46948] by server-2.bemta-4.messagelabs.com id
	3E/1C-10891-70A9AE25; Thu, 30 Jan 2014 18:29:27 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391106565!2022596!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12718 invoked from network); 30 Jan 2014 18:29:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 18:29:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96248258"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 18:29:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 30 Jan 2014 13:29:24 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W8wMe-0002Gw-Hj;
	Thu, 30 Jan 2014 18:29:24 +0000
Date: Thu, 30 Jan 2014 18:29:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1401301828380.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, qemu-devel@nongnu.org,
	qemu-stable@nongnu.org, Anthony Liguori <anthony@codemonkey.ws>,
	Anthony.Perard@citrix.com, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PULL 0/1] xen-140130
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anthony,
I would appreciate it if you could pull this branch quickly, as I am
looking forward to backporting the patch to the qemu-xen tree for the
Xen 4.4 release.
Thanks,

Stefano

On Thu, 30 Jan 2014, Stefano Stabellini wrote:
> The following changes since commit 0169c511554cb0014a00290b0d3d26c31a49818f:
> 
>   Merge remote-tracking branch 'qemu-kvm/uq/master' into staging (2014-01-24 15:52:44 -0800)
> 
> are available in the git repository at:
> 
> 
>   git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-140130
> 
> for you to fetch changes up to 360e607b88a23d378f6efaa769c76d26f538234d:
> 
>   address_space_translate: do not cross page boundaries (2014-01-30 14:20:45 +0000)
> 
> ----------------------------------------------------------------
> Stefano Stabellini (1):
>       address_space_translate: do not cross page boundaries
> 
>  exec.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 19:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 19:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8x0d-0001HP-Pd; Thu, 30 Jan 2014 19:10:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@schaman.hu>) id 1W8wyJ-000151-3W
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 19:08:19 +0000
Received: from [85.158.139.211:3337] by server-6.bemta-5.messagelabs.com id
	9F/CC-14342-223AAE25; Thu, 30 Jan 2014 19:08:18 +0000
X-Env-Sender: zoltan.kiss@schaman.hu
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391108897!670342!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10151 invoked from network); 30 Jan 2014 19:08:17 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 19:08:17 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so7148366wgh.29
	for <xen-devel@lists.xenproject.org>;
	Thu, 30 Jan 2014 11:08:17 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=0P1MS+MG7ouHd/HzkaFBmpq/FvDmsYoMvjEVTPMs8XU=;
	b=fNVzW9bigKq/iOWTCJrEtrJAAZTS1ZIHdFObW3v1jCTyJ5dyUSlCU8D/KoFO0I5CaF
	reYtYQsZAHIwJqgXXxkWRi7iXkl1jB+REqb1eTq8nOkH5vgZHm/tR9Mad5EPhfZoVY6E
	F3jlUrXqmjTeaBCCO6eUAEBO4nvv8ras0q8kADRW+bv6Ie/Yr0vZSSP1b5u9Y+XBvg4Q
	hq87L1jGIbgosOj2t4swdt1fFwJmhevmpiQo7pSaEk6fwj2ijqijLg8gGyqMp4s3LMde
	FHnQ8MKYurVoPN2NSsvK82unp5OEtsztntJp7v7bAQAXL9lYg3i4R5X6/2R6DfqxIeNq
	1d3Q==
X-Gm-Message-State: ALoCoQnNd1SN3ip1yJwgBZE52IEWBsWh+hya3suuKgB6me99UG06co9tXK6KBI1/xo6+mXLlP662
X-Received: by 10.194.22.129 with SMTP id d1mr10585624wjf.22.1391108896868;
	Thu, 30 Jan 2014 11:08:16 -0800 (PST)
Received: from [10.32.6.139] ([37.205.63.157])
	by mx.google.com with ESMTPSA id po3sm14166484wjc.3.2014.01.30.11.08.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 30 Jan 2014 11:08:16 -0800 (PST)
Message-ID: <52EAA31B.1090606@schaman.hu>
Date: Thu, 30 Jan 2014 19:08:11 +0000
From: Zoltan Kiss <zoltan.kiss@schaman.hu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jeff Kirsher <jeffrey.t.kirsher@intel.com>, 
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>, 
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Don Skidmore <donald.c.skidmore@intel.com>, 
	Greg Rose <gregory.v.rose@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>, 
	Alex Duyck <alexander.h.duyck@intel.com>,
	John Ronciak <john.ronciak@intel.com>, 
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>, 
	"David S. Miller" <davem@davemloft.net>,
	e1000-devel@lists.sourceforge.net, 
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	linux-kernel@vger.kernel.org, Michael Chan <mchan@broadcom.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
X-Mailman-Approved-At: Thu, 30 Jan 2014 19:10:42 +0000
Subject: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue timed
 out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I've experienced the queue timeout problems mentioned in the subject 
with igb and bnx2 cards; I haven't seen them on other cards so far. I'm 
using XenServer with a 3.10 Dom0 kernel (though igb was already updated 
to the latest version), and there are Windows guests sending data 
through these cards. I noticed these problems in XenRT test runs. I know 
they usually indicate a lost interrupt or some other hardware error, but 
in my case they started to appear more often, and they are likely 
connected to my netback grant mapping patches, which cause skbs with 
huge (~64KB) linear buffers to appear more often.
The reason for that is an old problem in the ring protocol: originally 
the maximum number of slots was tied to MAX_SKB_FRAGS, as every slot 
ended up as a frag of the skb. When this value was changed, netback had 
to cope by coalescing the packets into fewer frags.
My patch series takes a different approach: the leftover slots (pages) 
are assigned to the frags of a new skb, and that skb is stashed on the 
frag_list of the first one. Then, before sending it off to the stack, 
netback calls skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC, 
__GFP_NOWARN), which basically creates a new skb and copies all the data 
into it. As far as I understand, it puts everything into the linear 
buffer, which can amount to 64KB at most. The original skb is then 
freed, and the new one is sent to the stack.
I suspect this is the problem, as it only happens when guests send too 
many slots. Has anyone familiar with these drivers seen such an issue 
before, where these kinds of skbs get stuck in the queue?

Regards,

Zoltan Kiss

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 19:29:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 19:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8xI7-0001o6-GI; Thu, 30 Jan 2014 19:28:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8xI6-0001o1-6N
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 19:28:46 +0000
Received: from [85.158.137.68:13096] by server-15.bemta-3.messagelabs.com id
	C8/A9-19263-DE7AAE25; Thu, 30 Jan 2014 19:28:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391110122!11234039!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2889 invoked from network); 30 Jan 2014 19:28:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 19:28:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96271251"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 19:28:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 14:28:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8xI1-0002WC-D3;
	Thu, 30 Jan 2014 19:28:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8xI1-0000SO-8Z;
	Thu, 30 Jan 2014 19:28:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 19:28:41 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24632: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24632 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl           9 guest-start               fail REGR. vs. 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24570

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24570
 test-amd64-i386-xl-qemuu-win7-amd64  8 guest-saverestore fail blocked in 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 19:29:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 19:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8xI7-0001o6-GI; Thu, 30 Jan 2014 19:28:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W8xI6-0001o1-6N
	for xen-devel@lists.xensource.com; Thu, 30 Jan 2014 19:28:46 +0000
Received: from [85.158.137.68:13096] by server-15.bemta-3.messagelabs.com id
	C8/A9-19263-DE7AAE25; Thu, 30 Jan 2014 19:28:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391110122!11234039!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2889 invoked from network); 30 Jan 2014 19:28:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 19:28:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,751,1384300800"; d="scan'208";a="96271251"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 30 Jan 2014 19:28:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 14:28:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W8xI1-0002WC-D3;
	Thu, 30 Jan 2014 19:28:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W8xI1-0000SO-8Z;
	Thu, 30 Jan 2014 19:28:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24632-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 30 Jan 2014 19:28:41 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24632: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24632 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl           9 guest-start               fail REGR. vs. 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24570

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24570
 test-amd64-i386-xl-qemuu-win7-amd64  8 guest-saverestore fail blocked in 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xen.org Thu Jan 30 19:54:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 19:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8xgr-0002NG-W0; Thu, 30 Jan 2014 19:54:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W8xgp-0002NB-MW
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 19:54:20 +0000
Received: from [85.158.137.68:53279] by server-8.bemta-3.messagelabs.com id
	67/20-16039-AEDAAE25; Thu, 30 Jan 2014 19:54:18 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391111655!12415683!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=2.0 required=7.0 tests=BODY_RANDOM_LONG,HOT_NASTY,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23245 invoked from network); 30 Jan 2014 19:54:16 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 19:54:16 -0000
Received: from mail-ve0-f178.google.com ([209.85.128.178]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUuqt5sdkaXpio8ZCWCfvQbmE+ILtgNNa@postini.com;
	Thu, 30 Jan 2014 11:54:16 PST
Received: by mail-ve0-f178.google.com with SMTP id oy12so2465133veb.23
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 11:54:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=cU6kstGiztcF/iuz9/ZVGYZUnqzb2uFPuv+qy6f/LuE=;
	b=MI5ai2Se+xs3EKTIfU16bdtoYNxyQSDxn0VnLYxmtFkLJf6yfF0BwwoOhavKfyvdw6
	eFtniFmX8IjLTECdjGyZmw96v73MKQGMShoRpcvYL2V7p/XYv+IjmLtNYqJeZUvxiCp7
	zS1o5qU0RlFKAPN6CDOBf5RCel3jQWVFN0KqU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=cU6kstGiztcF/iuz9/ZVGYZUnqzb2uFPuv+qy6f/LuE=;
	b=YRworNq/mPkgMmsilZtN85u7DvJtYdrldW5xsv6r6vVONm/UybdJSWgemX7uqpIcdG
	lmTOIILInoLbQTCM7tIFhgjwrb7zG74j2d8tK3qN/PqPCVZx7fLYaAmFV8ZvM61F/fo4
	p9peyNkPULXvJ4qx/oGDxlJHPoiKINUBLqUKDNQHttH9gYRJ4NOAzBPVlFovZGwbPFpM
	61dYYucXY0cY6wosZVggW3WnEPLoLaOMdmRjIOKXSG475zJ+235klNyy92Oc0pqhtRQr
	AWPuU2jUcD9DqgsydWQ9SHp6sAUX41dUMBx+wBvc1ogWd34xr3BGBvtTvptl0CVgrOB1
	PpFQ==
X-Gm-Message-State: ALoCoQkNjb3h76iwxHOPP/tKEZ2bwGeLWt6+DFanHdT7wFWmgGmAa7ryfxq8+/0jum47O9wGuYKbPBBjwY1+E5pLjw6zOo4+6byeqqLNcLW0ZJsj4d1jyXcxpnS3XPjWU4ZSE+RaMmlDsQflHgmp3g7bT/w/g0tWE0C+nbrZCDf/X80JaHLzpIY=
X-Received: by 10.52.61.168 with SMTP id q8mr1358703vdr.40.1391111654022;
	Thu, 30 Jan 2014 11:54:14 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.61.168 with SMTP id q8mr1358694vdr.40.1391111653926; Thu,
	30 Jan 2014 11:54:13 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 11:54:13 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
	<alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 21:54:13 +0200
Message-ID: <CAJEb2DG7OpB9Fs0h4SGU79PJnYUPvCun2Ei11BtRTqrK+7ErFA@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I moved to 4.4.0-rc1, which already has the necessary irq patches,
and applied only one patch, "cpumask_of(0) in gic_route_irq_to_guest".

I see that the hypervisor hangs very often. Unfortunately, I don't
currently have a debugger to localize the failing code, so I have to
use prints, and it may take some time :(

On Thu, Jan 30, 2014 at 7:18 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> 1.2 I have also checked a solution where the on_selected_cpus call was
>> moved out of the interrupt handler. Unfortunately, it doesn't work.
>>
>>
>> I almost immediately see the following error:
>> (XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
>> (XEN) Xen BUG at gic.c:981
>> (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> (XEN) CPU:    1
>> (XEN) PC:     00241ee0 __bug+0x2c/0x44
>> (XEN) CPSR:   2000015a MODE:Hypervisor
>> (XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
>> (XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
>> (XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
>> (XEN) HYP: SP: 40037eb4 LR: 00241ee0
>> (XEN)
>> (XEN)   VTCR_EL2: 80002558
>> (XEN)  VTTBR_EL2: 00010000deffc000
>> (XEN)
>> (XEN)  SCTLR_EL2: 30cd187f
>> (XEN)    HCR_EL2: 0000000000002835
>> (XEN)  TTBR0_EL2: 00000000d2014000
>> (XEN)
>> (XEN)    ESR_EL2: 00000000
>> (XEN)  HPFAR_EL2: 0000000000482110
>> (XEN)      HDFAR: fa211f00
>> (XEN)      HIFAR: 00000000
>> (XEN)
>> (XEN) Xen stack trace from sp=40037eb4:
>> (XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
>> (XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
>> (XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
>> (XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
>> (XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
>> (XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
>> (XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
>> (XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
>> (XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
>> (XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
>> (XEN)    00000000 0c41e00c 450c2880
>> (XEN) Xen call trace:
>> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
>> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
>> (XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
>> (XEN)    [<00249068>] do_IRQ+0x138/0x198
>> (XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
>> (XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
>> (XEN)    [<00251a30>] return_from_trap+0/0x4
>> (XEN)
>
> Are you seeing more than one interrupt being EOI'ed with a single
> maintenance interrupt?
> I didn't think it could be possible in practice.
> If so, we might have to turn eoi_irq into a list or an array.
>
>
>> 2. The "simultaneous cross-interrupts" issue doesn't occur if I use
>> the following solution:
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index e6257a7..af96a31 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
>> struct dt_irq *irq,
>>
>>      level = dt_irq_is_level_triggered(irq);
>>
>> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
>> -                           0xa0);
>> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>>
>>      retval = __setup_irq(desc, irq->irq, action);
>>      if (retval) {
>> As a result, I don't see the deadlock in on_selected_cpus().
>
> As I stated before I think this is a good change to have in 4.4.
>
>
>> But, rarely, I see deadlocks in other parts related to interrupt handling.
>> As Julien noted, I am using the old version of the interrupt patch series.
>> I completely agree.
>>
>> We are based on the following Xen commit:
>> 48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list
>>
>> Also we have some patches, which we cherry-picked when we urgently needed them:
>> 6bba1a3 xen/arm: Keep count of inflight interrupts
>> 33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
>> b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
>> 5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
>> 1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ
>>
>> I have to apply the following patches and check with them:
>> 88eb95e xen/arm: disable a physical IRQ when the guest disables the
>> corresponding IRQ
>> a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
>> 1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
>> d16d511 xen/arm: track the state of guest IRQs
>>
>> I'll report about the results. I hope to do it today.
>
> I am looking forward to reading your report.
> Cheers,
>
> Stefano
>
>> A lot of thanks to all.
>>
>> On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > Given that we don't deactivate the interrupt (writing to GICC_DIR) until
>> > the guest EOIs it, I can't understand how you manage to get a second
>> > interrupt notification before the guest EOIs the first one.
>> >
>> > Do you set GICC_CTL_EOI in GICC_CTLR?
>> >
>> > On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> >> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
>> >>
>> >> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
>> >> <stefano.stabellini@eu.citrix.com> wrote:
>> >> > Is it a level or an edge irq?
>> >> >
>> >> > On Wed, 29 Jan 2014, Julien Grall wrote:
>> >> >> Hi,
>> >> >>
>> >> >> It's weird, physical IRQ should not be injected twice ...
>> >> >> Were you able to print the IRQ number?
>> >> >>
>> >> >> In any case, you are using the old version of the interrupt patch series.
>> >> >> Your new error may come from a race condition in this code.
>> >> >>
>> >> >> Can you try to use the newest version?
>> >> >>
>> >> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>> >> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>> >> >>       > difference for xen-unstable (it should make things clearer, if nothing
>> >> >>       > else) but it should fix things for Oleksandr.
>> >> >>
>> >> >>       Unfortunately, it is not enough for stable work.
>> >> >>
>> >> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>> >> >>       gic_route_irq_to_guest(). As a result, I don't see the situation
>> >> >>       that caused the deadlock in the on_selected_cpus function (expected).
>> >> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
>> >> >>       identified where this happens), or I sometimes see traps like these
>> >> >>       ("WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
>> >> >>
>> >> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> >> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> >> >>       (XEN) CPU:    1
>> >> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
>> >> >>       (XEN) CPSR:   200001da MODE:Hypervisor
>> >> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>> >> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>> >> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>> >> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>> >> >>       (XEN)
>> >> >>       (XEN)   VTCR_EL2: 80002558
>> >> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
>> >> >>       (XEN)
>> >> >>       (XEN)  SCTLR_EL2: 30cd187f
>> >> >>       (XEN)    HCR_EL2: 00000000000028b5
>> >> >>       (XEN)  TTBR0_EL2: 00000000d2014000
>> >> >>       (XEN)
>> >> >>       (XEN)    ESR_EL2: 00000000
>> >> >>       (XEN)  HPFAR_EL2: 0000000000482110
>> >> >>       (XEN)      HDFAR: fa211190
>> >> >>       (XEN)      HIFAR: 00000000
>> >> >>       (XEN)
>> >> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
>> >> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>> >> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>> >> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>> >> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>> >> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>> >> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>> >> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>> >> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>> >> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>> >> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>> >> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
>> >> >>       (XEN) Xen call trace:
>> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>> >> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>> >> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>> >> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>> >> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>> >> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
>> >> >>       (XEN)
>> >> >>
>> >> >>       Also I am posting maintenance_interrupt() from my tree:
>> >> >>
>> >> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
>> >> >>       cpu_user_regs *regs)
>> >> >>       {
>> >> >>           int i = 0, virq, pirq;
>> >> >>           uint32_t lr;
>> >> >>           struct vcpu *v = current;
>> >> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>> >> >>
>> >> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>> >> >>                                     64, i)) < 64) {
>> >> >>               struct pending_irq *p, *n;
>> >> >>               int cpu, eoi;
>> >> >>
>> >> >>               cpu = -1;
>> >> >>               eoi = 0;
>> >> >>
>> >> >>               spin_lock_irq(&gic.lock);
>> >> >>               lr = GICH[GICH_LR + i];
>> >> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
>> >> >>
>> >> >>               p = irq_to_pending(v, virq);
>> >> >>               if ( p->desc != NULL ) {
>> >> >>                   p->desc->status &= ~IRQ_INPROGRESS;
>> >> >>                   /* Assume only one pcpu needs to EOI the irq */
>> >> >>                   cpu = p->desc->arch.eoi_cpu;
>> >> >>                   eoi = 1;
>> >> >>                   pirq = p->desc->irq;
>> >> >>               }
>> >> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>> >> >>               {
>> >> >>                   /* Physical IRQ can't be reinjected */
>> >> >>                   WARN_ON(p->desc != NULL);
>> >> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>> >> >>                   spin_unlock_irq(&gic.lock);
>> >> >>                   i++;
>> >> >>                   continue;
>> >> >>               }
>> >> >>
>> >> >>               GICH[GICH_LR + i] = 0;
>> >> >>               clear_bit(i, &this_cpu(lr_mask));
>> >> >>
>> >> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>> >> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>> >> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>> >> >>                   list_del_init(&n->lr_queue);
>> >> >>                   set_bit(i, &this_cpu(lr_mask));
>> >> >>               } else {
>> >> >>                   gic_inject_irq_stop();
>> >> >>               }
>> >> >>               spin_unlock_irq(&gic.lock);
>> >> >>
>> >> >>               spin_lock_irq(&v->arch.vgic.lock);
>> >> >>               list_del_init(&p->inflight);
>> >> >>               spin_unlock_irq(&v->arch.vgic.lock);
>> >> >>
>> >> >>               if ( eoi ) {
>> >> >>                   /* this is not racy because we can't receive another irq of the
>> >> >>                    * same type until we EOI it.  */
>> >> >>                   if ( cpu == smp_processor_id() )
>> >> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>> >> >>                   else
>> >> >>                       on_selected_cpus(cpumask_of(cpu),
>> >> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>> >> >>               }
>> >> >>
>> >> >>               i++;
>> >> >>           }
>> >> >>       }
>> >> >>
>> >> >>
>> >> >>       Oleksandr Tyshchenko | Embedded Developer
>> >> >>       GlobalLogic
>> >> >>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>> >>
>>
>>
>>
>>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt


	:message-id:subject:from:to:cc:content-type;
	bh=cU6kstGiztcF/iuz9/ZVGYZUnqzb2uFPuv+qy6f/LuE=;
	b=YRworNq/mPkgMmsilZtN85u7DvJtYdrldW5xsv6r6vVONm/UybdJSWgemX7uqpIcdG
	lmTOIILInoLbQTCM7tIFhgjwrb7zG74j2d8tK3qN/PqPCVZx7fLYaAmFV8ZvM61F/fo4
	p9peyNkPULXvJ4qx/oGDxlJHPoiKINUBLqUKDNQHttH9gYRJ4NOAzBPVlFovZGwbPFpM
	61dYYucXY0cY6wosZVggW3WnEPLoLaOMdmRjIOKXSG475zJ+235klNyy92Oc0pqhtRQr
	AWPuU2jUcD9DqgsydWQ9SHp6sAUX41dUMBx+wBvc1ogWd34xr3BGBvtTvptl0CVgrOB1
	PpFQ==
X-Gm-Message-State: ALoCoQkNjb3h76iwxHOPP/tKEZ2bwGeLWt6+DFanHdT7wFWmgGmAa7ryfxq8+/0jum47O9wGuYKbPBBjwY1+E5pLjw6zOo4+6byeqqLNcLW0ZJsj4d1jyXcxpnS3XPjWU4ZSE+RaMmlDsQflHgmp3g7bT/w/g0tWE0C+nbrZCDf/X80JaHLzpIY=
X-Received: by 10.52.61.168 with SMTP id q8mr1358703vdr.40.1391111654022;
	Thu, 30 Jan 2014 11:54:14 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.61.168 with SMTP id q8mr1358694vdr.40.1391111653926; Thu,
	30 Jan 2014 11:54:13 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 11:54:13 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
	<alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
Date: Thu, 30 Jan 2014 21:54:13 +0200
Message-ID: <CAJEb2DG7OpB9Fs0h4SGU79PJnYUPvCun2Ei11BtRTqrK+7ErFA@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I moved to 4.4.0-rc1, which already has the necessary IRQ patches,
and applied only one patch, "cpumask_of(0) in gic_route_irq_to_guest".
I see that the hypervisor hangs very often. Unfortunately, I don't
currently have a debugger to localize the failing code, so I have to
use prints, and that may take some time.

On Thu, Jan 30, 2014 at 7:18 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> 1.2 I have also checked the solution where the on_selected_cpus call
>> was moved out of the interrupt handler. Unfortunately, it doesn't work.
>>
>> I almost immediately see the following error:
>> (XEN) Assertion 'this_cpu(eoi_irq) == NULL' failed, line 981, file gic.c
>> (XEN) Xen BUG at gic.c:981
>> (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> (XEN) CPU:    1
>> (XEN) PC:     00241ee0 __bug+0x2c/0x44
>> (XEN) CPSR:   2000015a MODE:Hypervisor
>> (XEN)      R0: 0026770c R1: 00000000 R2: 3fd2fd00 R3: 00000fff
>> (XEN)      R4: 00263248 R5: 00264384 R6: 000003d5 R7: 4003d000
>> (XEN)      R8: 00000001 R9: 00000091 R10:00000000 R11:40037ebc R12:00000001
>> (XEN) HYP: SP: 40037eb4 LR: 00241ee0
>> (XEN)
>> (XEN)   VTCR_EL2: 80002558
>> (XEN)  VTTBR_EL2: 00010000deffc000
>> (XEN)
>> (XEN)  SCTLR_EL2: 30cd187f
>> (XEN)    HCR_EL2: 0000000000002835
>> (XEN)  TTBR0_EL2: 00000000d2014000
>> (XEN)
>> (XEN)    ESR_EL2: 00000000
>> (XEN)  HPFAR_EL2: 0000000000482110
>> (XEN)      HDFAR: fa211f00
>> (XEN)      HIFAR: 00000000
>> (XEN)
>> (XEN) Xen stack trace from sp=40037eb4:
>> (XEN)    00000000 40037efc 00247e1c 002e6610 002e6610 002e6608 002e6608 00000001
>> (XEN)    00000000 40015000 40017000 40005f60 40017014 40037f58 00000019 00000000
>> (XEN)    40005f60 40037f24 00249068 00000009 00000019 00404000 40037f58 00000000
>> (XEN)    00405000 00004680 002e7694 40037f4c 00248b80 00000000 c5b72000 00000091
>> (XEN)    00000000 c700d4e0 c008477c 000000f1 00000001 40037f54 0024f6c0 40037f58
>> (XEN)    00251a30 c700d4e0 00000001 c008477c 00000000 c5b72000 00000091 00000000
>> (XEN)    c700d4e0 c008477c 000000f1 00000001 00000001 c5b72000 ffffffff 0000a923
>> (XEN)    c0077ac4 60000193 00000000 b6eadaa0 c0578f40 c00138c0 c5b73f58 c036ab90
>> (XEN)    c0578f4c c00136a0 c0578f58 c0013920 00000000 00000000 00000000 00000000
>> (XEN)    00000000 00000000 00000000 80000010 60000193 a0000093 80000193 00000000
>> (XEN)    00000000 0c41e00c 450c2880
>> (XEN) Xen call trace:
>> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (PC)
>> (XEN)    [<00241ee0>] __bug+0x2c/0x44 (LR)
>> (XEN)    [<00247e1c>] maintenance_interrupt+0x2e8/0x328
>> (XEN)    [<00249068>] do_IRQ+0x138/0x198
>> (XEN)    [<00248b80>] gic_interrupt+0x58/0xc0
>> (XEN)    [<0024f6c0>] do_trap_irq+0x10/0x14
>> (XEN)    [<00251a30>] return_from_trap+0/0x4
>> (XEN)
>
> Are you seeing more than one interrupt being EOI'ed with a single
> maintenance interrupt?
> I didn't think it could be possible in practice.
> If so, we might have to turn eoi_irq into a list or an array.
>
>
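[Editor's note: the "list or an array" idea above can be bounded, because a single
maintenance interrupt can report at most one EOI-pending IRQ per GICH list
register. The sketch below models a per-CPU fixed-size EOI queue in plain
userspace C; NR_LRS, the struct, and the helper names are all hypothetical
illustrations, not Xen APIs.]

```c
/* Hypothetical replacement for the single per-CPU eoi_irq slot: a small
 * fixed-size queue, sized to the number of GICH list registers, so one
 * maintenance interrupt can record several IRQs awaiting EOI. */
#define NR_LRS 4

struct eoi_queue {
    int irq[NR_LRS];        /* physical IRQ numbers awaiting EOI */
    unsigned int count;     /* number of valid entries in irq[] */
};

/* Queue an IRQ for a deferred EOI; returns 0 on success, -1 if full
 * (the full case corresponds to the ASSERT firing in the current code). */
static int eoi_queue_push(struct eoi_queue *q, int irq)
{
    if (q->count >= NR_LRS)
        return -1;
    q->irq[q->count++] = irq;
    return 0;
}

/* Drain the queue into out[] (caller provides room for NR_LRS entries);
 * returns how many IRQs were pending EOI. */
static unsigned int eoi_queue_drain(struct eoi_queue *q, int *out)
{
    unsigned int n = q->count;
    for (unsigned int i = 0; i < n; i++)
        out[i] = q->irq[i];
    q->count = 0;
    return n;
}
```

Since the queue is per-CPU and only touched with interrupts disabled, no extra
locking would be needed in this scheme.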
>> 2. The "simultaneous cross-interrupts" issue doesn't occur if I use
>> the following solution:
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index e6257a7..af96a31 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const
>> struct dt_irq *irq,
>>
>>      level = dt_irq_is_level_triggered(irq);
>>
>> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
>> -                           0xa0);
>> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>>
>>      retval = __setup_irq(desc, irq->irq, action);
>>      if (retval) {
>> As a result, I don't see the deadlock in on_selected_cpus().
>
> As I stated before I think this is a good change to have in 4.4.
>
>
>> But, rarely, I see deadlocks in other parts related to interrupt handling.
>> As Julien noted, I am using an old version of the interrupt patch series;
>> I completely agree.
>>
>> We are based on the following Xen commit:
>> 48249a1 libxl: Avoid realloc(,0) when libxl__xs_directory returns empty list
>>
>> We also have some patches that we cherry-picked when we urgently needed them:
>> 6bba1a3 xen/arm: Keep count of inflight interrupts
>> 33a8aa9 xen/arm: Only enable physical IRQs when the guest asks
>> b6a4e65 xen/arm: Rename gic_irq_{startup, shutdown} to gic_irq_{mask, unmask}
>> 5dbe455 xen/arm: Don't reinject the IRQ if it's already in LRs
>> 1438f03 xen/arm: Physical IRQ is not always equal to virtual IRQ
>>
>> I have to apply the following patches and retest with them:
>> 88eb95e xen/arm: disable a physical IRQ when the guest disables the
>> corresponding IRQ
>> a660ee3 xen/arm: implement gic_irq_enable and gic_irq_disable
>> 1dc9556 xen/arm: do not add a second irq to the LRs if one is already present
>> d16d511 xen/arm: track the state of guest IRQs
>>
>> I'll report the results; I hope to do it today.
>
> I am looking forward to reading your report.
> Cheers,
>
> Stefano
>
>> A lot of thanks to all.
>>
>> On Thu, Jan 30, 2014 at 5:35 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>> > Given that we don't deactivate the interrupt (writing to GICC_DIR) until
>> > the guest EOIs it, I can't understand how you manage to get a second
>> > interrupt notification before the guest EOIs the first one.
>> >
>> > Do you set GICC_CTL_EOI in GICC_CTLR?
>> >
>> > On Thu, 30 Jan 2014, Oleksandr Tyshchenko wrote:
>> >> According to DT it is a level irq (DT_IRQ_TYPE_LEVEL_HIGH)
>> >>
>> >> On Thu, Jan 30, 2014 at 3:24 PM, Stefano Stabellini
>> >> <stefano.stabellini@eu.citrix.com> wrote:
>> >> > Is it a level or an edge irq?
>> >> >
>> >> > On Wed, 29 Jan 2014, Julien Grall wrote:
>> >> >> Hi,
>> >> >>
>> >> >> It's weird; a physical IRQ should not be injected twice...
>> >> >> Were you able to print the IRQ number?
>> >> >>
>> >> >> In any case, you are using an old version of the interrupt patch series.
>> >> >> Your new error may come from a race condition in this code.
>> >> >>
>> >> >> Can you try a newer version?
>> >> >>
>> >> >> On 29 Jan 2014 18:40, "Oleksandr Tyshchenko" <oleksandr.tyshchenko@globallogic.com> wrote:
>> >> >>       > Right, that's why changing it to cpumask_of(0) shouldn't make any
>> >> >>       > difference for xen-unstable (it should make things clearer, if nothing
>> >> >>       > else) but it should fix things for Oleksandr.
>> >> >>
>> >> >>       Unfortunately, it is not enough for stable operation.
>> >> >>
>> >> >>       I tried to use cpumask_of(smp_processor_id()) instead of cpumask_of(0) in
>> >> >>       gic_route_irq_to_guest(). As a result, I no longer see the situation
>> >> >>       that causes the deadlock in on_selected_cpus() (as expected).
>> >> >>       But the hypervisor sometimes hangs somewhere else (I have not yet
>> >> >>       identified where), or I sometimes see traps like the following
>> >> >>       (the "WARN_ON(p->desc != NULL)" in maintenance_interrupt() leads to them):
>> >> >>
>> >> >>       (XEN) CPU1: Unexpected Trap: Undefined Instruction
>> >> >>       (XEN) ----[ Xen-4.4-unstable  arm32  debug=y  Not tainted ]----
>> >> >>       (XEN) CPU:    1
>> >> >>       (XEN) PC:     00242c1c __warn+0x20/0x28
>> >> >>       (XEN) CPSR:   200001da MODE:Hypervisor
>> >> >>       (XEN)      R0: 0026770c R1: 00000001 R2: 3fd2fd00 R3: 00000fff
>> >> >>       (XEN)      R4: 00406100 R5: 40020ee0 R6: 00000000 R7: 4bfdf000
>> >> >>       (XEN)      R8: 00000001 R9: 4bfd7ed0 R10:00000001 R11:4bfd7ebc R12:00000002
>> >> >>       (XEN) HYP: SP: 4bfd7eb4 LR: 00242c1c
>> >> >>       (XEN)
>> >> >>       (XEN)   VTCR_EL2: 80002558
>> >> >>       (XEN)  VTTBR_EL2: 00020000dec6a000
>> >> >>       (XEN)
>> >> >>       (XEN)  SCTLR_EL2: 30cd187f
>> >> >>       (XEN)    HCR_EL2: 00000000000028b5
>> >> >>       (XEN)  TTBR0_EL2: 00000000d2014000
>> >> >>       (XEN)
>> >> >>       (XEN)    ESR_EL2: 00000000
>> >> >>       (XEN)  HPFAR_EL2: 0000000000482110
>> >> >>       (XEN)      HDFAR: fa211190
>> >> >>       (XEN)      HIFAR: 00000000
>> >> >>       (XEN)
>> >> >>       (XEN) Xen stack trace from sp=4bfd7eb4:
>> >> >>       (XEN)    0026431c 4bfd7efc 00247a54 00000024 002e6608 002e6608 00000097 00000001
>> >> >>       (XEN)    00000000 4bfd7f54 40017000 40005f60 40017014 4bfd7f58 00000019 00000000
>> >> >>       (XEN)    40005f60 4bfd7f24 00248e60 00000009 00000019 00404000 4bfd7f58 00000000
>> >> >>       (XEN)    00405000 000045f0 002e7694 4bfd7f4c 00248978 c0079a90 00000097 00000097
>> >> >>       (XEN)    00000000 fa212000 ea80c900 00000001 c05b8a60 4bfd7f54 0024f4b8 4bfd7f58
>> >> >>       (XEN)    00251830 ea80c950 00000000 00000001 c0079a90 00000097 00000097 00000000
>> >> >>       (XEN)    fa212000 ea80c900 00000001 c05b8a60 00000000 e9879e3c ffffffff b6efbca3
>> >> >>       (XEN)    c03b29fc 60000193 9fffffe7 b6c0bbf0 c0607500 c03b3140 e9879eb8 c007680c
>> >> >>       (XEN)    c060750c c03b32c0 c0607518 c03b3360 00000000 00000000 00000000 00000000
>> >> >>       (XEN)    00000000 00000000 3ff6bebf a0000113 800b0193 800b0093 40000193 00000000
>> >> >>       (XEN)    ffeffbfe fedeefff fffd5ffe
>> >> >>       (XEN) Xen call trace:
>> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (PC)
>> >> >>       (XEN)    [<00242c1c>] __warn+0x20/0x28 (LR)
>> >> >>       (XEN)    [<00247a54>] maintenance_interrupt+0xfc/0x2f4
>> >> >>       (XEN)    [<00248e60>] do_IRQ+0x138/0x198
>> >> >>       (XEN)    [<00248978>] gic_interrupt+0x58/0xc0
>> >> >>       (XEN)    [<0024f4b8>] do_trap_irq+0x10/0x14
>> >> >>       (XEN)    [<00251830>] return_from_trap+0/0x4
>> >> >>       (XEN)
>> >> >>
>> >> >>       Also I am posting maintenance_interrupt() from my tree:
>> >> >>
>> >> >>       static void maintenance_interrupt(int irq, void *dev_id, struct
>> >> >>       cpu_user_regs *regs)
>> >> >>       {
>> >> >>           int i = 0, virq, pirq;
>> >> >>           uint32_t lr;
>> >> >>           struct vcpu *v = current;
>> >> >>           uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
>> >> >>
>> >> >>           while ((i = find_next_bit((const long unsigned int *) &eisr,
>> >> >>                                     64, i)) < 64) {
>> >> >>               struct pending_irq *p, *n;
>> >> >>               int cpu, eoi;
>> >> >>
>> >> >>               cpu = -1;
>> >> >>               eoi = 0;
>> >> >>
>> >> >>               spin_lock_irq(&gic.lock);
>> >> >>               lr = GICH[GICH_LR + i];
>> >> >>               virq = lr & GICH_LR_VIRTUAL_MASK;
>> >> >>
>> >> >>               p = irq_to_pending(v, virq);
>> >> >>               if ( p->desc != NULL ) {
>> >> >>                   p->desc->status &= ~IRQ_INPROGRESS;
>> >> >>                   /* Assume only one pcpu needs to EOI the irq */
>> >> >>                   cpu = p->desc->arch.eoi_cpu;
>> >> >>                   eoi = 1;
>> >> >>                   pirq = p->desc->irq;
>> >> >>               }
>> >> >>               if ( !atomic_dec_and_test(&p->inflight_cnt) )
>> >> >>               {
>> >> >>                   /* A physical IRQ can't be reinjected */
>> >> >>                   WARN_ON(p->desc != NULL);
>> >> >>                   gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
>> >> >>                   spin_unlock_irq(&gic.lock);
>> >> >>                   i++;
>> >> >>                   continue;
>> >> >>               }
>> >> >>
>> >> >>               GICH[GICH_LR + i] = 0;
>> >> >>               clear_bit(i, &this_cpu(lr_mask));
>> >> >>
>> >> >>               if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>> >> >>                   n = list_entry(v->arch.vgic.lr_pending.next, typeof(*n), lr_queue);
>> >> >>                   gic_set_lr(i, n->irq, GICH_LR_PENDING, n->priority);
>> >> >>                   list_del_init(&n->lr_queue);
>> >> >>                   set_bit(i, &this_cpu(lr_mask));
>> >> >>               } else {
>> >> >>                   gic_inject_irq_stop();
>> >> >>               }
>> >> >>               spin_unlock_irq(&gic.lock);
>> >> >>
>> >> >>               spin_lock_irq(&v->arch.vgic.lock);
>> >> >>               list_del_init(&p->inflight);
>> >> >>               spin_unlock_irq(&v->arch.vgic.lock);
>> >> >>
>> >> >>               if ( eoi ) {
>> >> >>                   /* this is not racy because we can't receive another irq of the
>> >> >>                    * same type until we EOI it.  */
>> >> >>                   if ( cpu == smp_processor_id() )
>> >> >>                       gic_irq_eoi((void*)(uintptr_t)pirq);
>> >> >>                   else
>> >> >>                       on_selected_cpus(cpumask_of(cpu),
>> >> >>                                        gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
>> >> >>               }
>> >> >>
>> >> >>               i++;
>> >> >>           }
>> >> >>       }
>> >> >>
>> >> >>
>> >> >>       Oleksandr Tyshchenko | Embedded Developer
>> >> >>       GlobalLogic
>> >> >>
>> >> >>
>> >> >>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 20:36:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 20:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8yL1-0003Nd-SS; Thu, 30 Jan 2014 20:35:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mchan@broadcom.com>) id 1W8yJQ-0003NB-C1
	for xen-devel@lists.xenproject.org; Thu, 30 Jan 2014 20:34:12 +0000
Received: from [193.109.254.147:53923] by server-13.bemta-14.messagelabs.com
	id 39/51-01226-347BAE25; Thu, 30 Jan 2014 20:34:11 +0000
X-Env-Sender: mchan@broadcom.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391114050!984052!1
X-Originating-IP: [216.31.210.62]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23992 invoked from network); 30 Jan 2014 20:34:10 -0000
Received: from mail-gw1-out.broadcom.com (HELO mail-gw1-out.broadcom.com)
	(216.31.210.62) by server-7.tower-27.messagelabs.com with SMTP;
	30 Jan 2014 20:34:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,752,1384329600"; d="scan'208";a="12326108"
Received: from irvexchcas06.broadcom.com (HELO
	IRVEXCHCAS06.corp.ad.broadcom.com) ([10.9.208.53])
	by mail-gw1-out.broadcom.com with ESMTP; 30 Jan 2014 12:57:34 -0800
Received: from IRVEXCHSMTP2.corp.ad.broadcom.com (10.9.207.52) by
	IRVEXCHCAS06.corp.ad.broadcom.com (10.9.208.53) with Microsoft SMTP
	Server (TLS) id 14.1.438.0; Thu, 30 Jan 2014 12:34:09 -0800
Received: from mail-irva-13.broadcom.com (10.10.10.20) by
	IRVEXCHSMTP2.corp.ad.broadcom.com (10.9.207.52) with Microsoft SMTP
	Server id 14.1.438.0; Thu, 30 Jan 2014 12:34:09 -0800
Received: from [10.12.137.120] (dhcp-10-12-137-120.irv.broadcom.com
	[10.12.137.120])	by mail-irva-13.broadcom.com (Postfix) with ESMTP id
	F4047246A3;	Thu, 30 Jan 2014 12:34:08 -0800 (PST)
From: Michael Chan <mchan@broadcom.com>
To: Zoltan Kiss <zoltan.kiss@schaman.hu>
In-Reply-To: <52EAA31B.1090606@schaman.hu>
References: <52EAA31B.1090606@schaman.hu>
Date: Thu, 30 Jan 2014 12:34:08 -0800
Message-ID: <1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Thu, 30 Jan 2014 20:35:50 +0000
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 19:08 +0000, Zoltan Kiss wrote:
> I've experienced some queue timeout problems mentioned in the subject 
> with igb and bnx2 cards. 

Please provide the full tx timeout dmesg.  bnx2 dumps some diagnostic
information during tx timeout that may be useful.  Thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 21:35:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 21:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8zG5-0004j7-J7; Thu, 30 Jan 2014 21:34:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tamas.lengyel@zentific.com>) id 1W8zG3-0004j2-7J
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 21:34:47 +0000
Received: from [85.158.137.68:52727] by server-13.bemta-3.messagelabs.com id
	B4/B4-26923-675CAE25; Thu, 30 Jan 2014 21:34:46 +0000
X-Env-Sender: tamas.lengyel@zentific.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391117685!8763387!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6595 invoked from network); 30 Jan 2014 21:34:45 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 21:34:45 -0000
Received: by mail-ee0-f52.google.com with SMTP id e53so1879518eek.39
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 13:34:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=CoipnkvJzWMIB3ZoCT57fCfpfbqDUL1Ru5aUq8DcLZ0=;
	b=gYhDJw9ezoGxgGke6iAMoeOBERR0CV2hN48ENGOqKM2HrMbqOV5km0GnYBTE/UeJPx
	GZQORLiFC5rfczz/LexCyjowQK+wU6yoyTNbDL36uGaFwmUTTY98XEUnWHvUTBEC+ztx
	OQTcwVETQO2BFPGuzcHoL261aklNSPuMDWk/x8ItRC/M7J50nariTSYsgm56D32t0UFO
	BH8b2AFw3PJZ0pPHnb2tU8aBDytVP4uVVjw+faLNS6+1cXkY0hVIVJTLKQPzmRWclINk
	qyuxhD7JddN5dsSHiodxnzQHHmHws9+UgY3akx6C877cdoHZkZH3imog+vuXrNf+zH17
	CVnw==
X-Gm-Message-State: ALoCoQnxlpJexRWsrcyiRj+o2MBmFbEYbqWLM49DmFZmLV31BA9DbvNGlKjsycd5Q9YvuLJ5Z274
X-Received: by 10.14.209.129 with SMTP id s1mr20292545eeo.21.1391117685399;
	Thu, 30 Jan 2014 13:34:45 -0800 (PST)
Received: from localhost.localdomain (ppp-88-217-26-22.dynamic.mnet-online.de.
	[88.217.26.22])
	by mx.google.com with ESMTPSA id n7sm27636440eef.5.2014.01.30.13.34.44
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 30 Jan 2014 13:34:44 -0800 (PST)
From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xen.org
Date: Thu, 30 Jan 2014 22:34:16 +0100
Message-Id: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
X-Mailer: git-send-email 1.7.10.4
Cc: Tamas K Lengyel <tamas.lengyel@zentific.com>, keir@xen.org
Subject: [Xen-devel] [PATCH] mem_event: Return previous value of CR0/CR3/CR4
	on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch extends the information returned for CR0/CR3/CR4 register write
events with the previous value of the register. The old value was already
passed to the trap-processing function, just never placed into the returned
request. By returning this value, applications subscribed to the CR events
obtain additional context about the event.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

---
 xen/arch/x86/hvm/hvm.c         |    4 ++++
 xen/include/public/mem_event.h |    6 +++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..d46abf2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4682,6 +4682,10 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
         req.gla = gla;
         req.gla_valid = 1;
     }
+    else
+    {
+        req.gla = old;
+    }
     
     mem_event_put_request(d, &d->mem_event->access, &req);
     
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index c9ed546..3831b41 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -40,9 +40,9 @@
 /* Reasons for the memory event request */
 #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
 #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
-#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value */
-#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value */
-#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
+#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
+#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
+#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
 #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
 #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
 #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 21:48:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 21:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W8zSe-00053w-1h; Thu, 30 Jan 2014 21:47:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W8zSd-00053r-3h
	for xen-devel@lists.xen.org; Thu, 30 Jan 2014 21:47:47 +0000
Received: from [85.158.143.35:12008] by server-1.bemta-4.messagelabs.com id
	E2/B6-31661-288CAE25; Thu, 30 Jan 2014 21:47:46 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391118465!2049962!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16873 invoked from network); 30 Jan 2014 21:47:45 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	30 Jan 2014 21:47:45 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so1602204ead.24
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 13:47:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=l37BHTSaokDcnv7CQO51NsQQQYIAmOL4s7ix0i+WyYI=;
	b=BT/VZpzCUvX4bjzbDRUVrfU29O8xd6viHsyUkcIiaf4rZOUYQSsBNsxAW2Hgy1yWP4
	cdOaG2uqDZs+eakUDyQF3gNkBqO8UiSVC7lU8X/7Cy5lUJrt3j1Io15AzH0SPgx0KPrP
	DD+yeZ69FjmPMqMCEXhn5G4vUXcpgGJNHpab5GCcC6VpTbH5tOB3n1DoayeO1sYUqk81
	Jhu10IvPixDWJi0/STP3P8L14vEduu0EcZBlc8Xrl+4z+auAtM/ayoOhHv7bFQrhSPYI
	iq7avtkfixdV/BBPuAEpT4aLpKdwgNqW0HG+FlaXmxrPcwt2TZjG/igqrMNwmKbrV8tp
	ftTg==
X-Gm-Message-State: ALoCoQnhRZte1OPSrbKPkCnkFtHsEsLZd6EYLuu+N0S2YFBLpM0hqlMtjwk8qUOAyS09JVLjl4Bk
X-Received: by 10.14.216.193 with SMTP id g41mr20467134eep.13.1391118465368;
	Thu, 30 Jan 2014 13:47:45 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id m9sm14543158eeh.3.2014.01.30.13.47.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 30 Jan 2014 13:47:44 -0800 (PST)
Message-ID: <52EAC87E.1090601@linaro.org>
Date: Thu, 30 Jan 2014 21:47:42 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	<52E69CBC.3090207@linaro.org>	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>	<52E8FD02.2060601@linaro.org>	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>	<alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
	<CAJEb2DG7OpB9Fs0h4SGU79PJnYUPvCun2Ei11BtRTqrK+7ErFA@mail.gmail.com>
In-Reply-To: <CAJEb2DG7OpB9Fs0h4SGU79PJnYUPvCun2Ei11BtRTqrK+7ErFA@mail.gmail.com>
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On 30/01/14 19:54, Oleksandr Tyshchenko wrote:
> I moved to 4.4.0-rc1 which already has necessary irq patches.

Any specific reason to use 4.4.0-rc1 instead of 4.4.0-rc2? There are a 
bunch of fixes (which should not be related to your current bug), such as 
a TLB issue, foreign mappings, and a first attempt at fixing the guest 
cache issue.

> And applied only one patch, "cpumask_of(0) in gic_route_irq_to_guest".
> I see that the Hypervisor hangs very often. Unfortunately, I don't
> currently have a debugger to localize the part of code.
> So, I have to use prints and it may take some time(

What do you mean by hang? Do you have any output from Xen? What do you 
run? Dom0 and a DomU?

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Jan 30 23:32:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jan 2014 23:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W915N-0007jg-Sy; Thu, 30 Jan 2014 23:31:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W915M-0007jb-KC
	for Xen-devel@lists.xensource.com; Thu, 30 Jan 2014 23:31:52 +0000
Received: from [85.158.139.211:62831] by server-12.bemta-5.messagelabs.com id
	B5/40-15415-7E0EAE25; Thu, 30 Jan 2014 23:31:51 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391124709!703783!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29673 invoked from network); 30 Jan 2014 23:31:51 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 30 Jan 2014 23:31:51 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0UNVlVi030753
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 30 Jan 2014 23:31:48 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0UNVk3U004944
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 30 Jan 2014 23:31:46 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0UNVjX8004932; Thu, 30 Jan 2014 23:31:45 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 30 Jan 2014 15:31:45 -0800
Date: Thu, 30 Jan 2014 15:31:44 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Message-ID: <20140130153144.10d6f91e@mantra.us.oracle.com>
In-Reply-To: <52EA3A3C.4030209@citrix.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<52EA3A3C.4030209@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V0] linux PVH: Set CR4 flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 30 Jan 2014 11:40:44 +0000
Roger Pau Monné <roger.pau@citrix.com> wrote:

> On 30/01/14 00:15, Mukesh Rathor wrote:
> > Konrad,
> > 
> > The CR4 settings were dropped from my earlier patch because you
> > didn't wanna enable them. But since you do now, we need to set them
> > in the APs also. If you decide not too again, please apply my prev
> > patch "pvh: disable pse feature for now".
> 
> Hello Mukesh,
> 
> Could you push your CR related patches to a git repo branch? I'm
> currently having a bit of a mess in figuring out which ones should be
> applied and in which order.
> 
> Thanks, Roger.

Hey Roger,

Unfortunately, I don't have them in a tree because my first patch was
changed during merge, and also the tree was refreshed.  Basically, the end
result, we leave features enabled on linux side, thus setting not only
the cr0 bits, but also the cr4 PSE and PGE for APs (they were already
set for the BSP).

Konrad only merged the CR0 setting part of my first patch, hence this
patch to set the CR4 bits. Hope that makes sense. My latest tree is:

http://oss.us.oracle.com/git/mrathor/linux.git  muk2

thanks
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 00:10:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 00:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W91gM-0000hg-Ng; Fri, 31 Jan 2014 00:10:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W91gL-0000hW-4e
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 00:10:05 +0000
Received: from [85.158.143.35:45200] by server-2.bemta-4.messagelabs.com id
	E4/01-10891-CD9EAE25; Fri, 31 Jan 2014 00:10:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391127002!2051736!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Fri Jan 31 00:10:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 00:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W91gM-0000hg-Ng; Fri, 31 Jan 2014 00:10:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W91gL-0000hW-4e
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 00:10:05 +0000
Received: from [85.158.143.35:45200] by server-2.bemta-4.messagelabs.com id
	E4/01-10891-CD9EAE25; Fri, 31 Jan 2014 00:10:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391127002!2051736!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4081 invoked from network); 31 Jan 2014 00:10:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 00:10:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,753,1384300800"; d="scan'208";a="98324346"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 00:10:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 19:10:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W91gG-0003yM-Iu;
	Fri, 31 Jan 2014 00:10:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W91fV-0002Qg-9X;
	Fri, 31 Jan 2014 00:09:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24649-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 00:09:13 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24649: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24649 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24649/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
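
[Editor's note] The deferral this commit describes can be sketched as a pending-update flag that toggle_guest_mode() replays on the next switch to kernel mode. This is a minimal illustrative sketch, not Xen's actual code; the structure and names (vcpu_state, pending_runstate_update, write_runstate_area) are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical vCPU state: which page tables are active, and whether a
 * runstate-area update had to be deferred while in user mode. */
struct vcpu_state {
    bool in_kernel_mode;
    bool pending_runstate_update;
};

/* Stand-in for copying runstate data to the guest-registered kernel
 * address; only called when that address is actually mapped. */
static void write_runstate_area(struct vcpu_state *v)
{
    (void)v;
}

static void update_runstate(struct vcpu_state *v)
{
    if (!v->in_kernel_mode) {
        /* The kernel address is not accessible on the user-mode page
         * tables: record the update instead of dropping it. */
        v->pending_runstate_update = true;
        return;
    }
    write_runstate_area(v);
}

static void toggle_guest_mode(struct vcpu_state *v)
{
    v->in_kernel_mode = !v->in_kernel_mode;
    if (v->in_kernel_mode && v->pending_runstate_update) {
        v->pending_runstate_update = false;
        write_runstate_area(v);  /* replay the deferred update */
    }
}
```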

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and it may get X86EMUL_RETRY while handling that IO
    request. If at the same time a virtual vmexit is pending (for example,
    an interrupt to be injected into L1), the hypervisor will switch the
    vCPU context from L2 to L1. We are then in L1's context, but because of
    the X86EMUL_RETRY the hypervisor will retry the IO request later, and
    that retry will unfortunately happen in L1's context, which causes the
    problem. The fix is to allow no virtual vmexit/vmentry while an IO
    request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, the L2 guest accesses the physical device directly.
    For such accesses the shadow EPT table should point at the device's
    MMIO. But the current logic does not distinguish, when building the
    shadow EPT table, whether the MMIO belongs to qemu or to a physical
    device. This is wrong. This patch sets up the correct shadow EPT
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting onto the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg; the cmpxchg succeeded, but in the meantime
       prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
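
[Editor's note] The fixed exchange loop can be sketched with C11 atomics: read the head into a local once, link the new element to that local before the compare-and-swap, and take success or failure from the CAS itself rather than from a re-read of the head (the re-read is what allowed the double write in steps 1-5 above). This is an illustrative sketch, not Xen's mctelem code; the names (ent, xchg_head) are hypothetical.

```c
#include <stdatomic.h>
#include <stddef.h>

struct ent {
    struct ent *next;   /* the field *linkp points at in the commit text */
};

/* Push 'e' onto the lock-free list at 'headp'.  'old' is the temporary
 * for the prev pointer: e->next is set from it before the element is
 * published, and atomic_compare_exchange_weak refreshes 'old' on
 * failure, so the head is never re-read separately. */
static void xchg_head(_Atomic(struct ent *) *headp, struct ent *e)
{
    struct ent *old = atomic_load(headp);

    do {
        e->next = old;  /* link before the CAS publishes the element */
    } while (!atomic_compare_exchange_weak(headp, &old, e));
}
```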

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first read worked because the p2m.lock is recursive and the pCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 01:58:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 01:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W93Mk-0007vz-JY; Fri, 31 Jan 2014 01:57:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1W93Mi-0007vp-E9
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 01:57:56 +0000
Received: from [85.158.137.68:28603] by server-1.bemta-3.messagelabs.com id
	7B/F8-17293-3230BE25; Fri, 31 Jan 2014 01:57:55 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391133472!11274113!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=1.5 required=7.0 tests=HOT_NASTY,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2888 invoked from network); 31 Jan 2014 01:57:54 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 01:57:54 -0000
Received: from mail-ve0-f180.google.com ([209.85.128.180]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUusDIMGvFdUW4Ztd7DE70Xuhf/nNAVq9@postini.com;
	Thu, 30 Jan 2014 17:57:54 PST
Received: by mail-ve0-f180.google.com with SMTP id db12so2650397veb.25
	for <xen-devel@lists.xen.org>; Thu, 30 Jan 2014 17:57:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=SYIXYiARwysPY3s2/dy8fMfiopWLeSajV22Xfp+gOUg=;
	b=K+6nqmcHH9IC4S142ovnbZCSvyhO6Scg9AVo/MfE+2cT9BXoSZEIHxoTVvERGkw8T+
	dRQmCKdR74gvEps8Qfx1KOHARSpCsd6t2N+KVuWdkUgaBGFISXB3TkTN1kXwvRh/26r1
	yqjUUg/X9YxCKsyPysZnkdfaLVq/oh4jkQ3oA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=SYIXYiARwysPY3s2/dy8fMfiopWLeSajV22Xfp+gOUg=;
	b=X0RfW0LOnys0ma3mvvnvEE4htAZrSfW0CdUgfw6jQOAmD9JGs0pXPEvUI6+xf3h2FU
	TfW/KNSCftsxfPLooKcyK51AYB0dvNe0KSGIlPe7lrTTa6Fgey2W3ecUFFXpOtDdoOBj
	wt5RXy8mEG6wOkgNcUvWW13qbvAGmxyXa0C6Hg2rRJFS4XvgqzsRYwve2P8CM17CLdHu
	rFCxI/alhumeq3qc1WkycTSGvKVDNGOkmaBlMvMWRzf1ON0+isd1zVL0PSf3jzAGUmE2
	sTzUPwhqYuQh55nQswnf1TAJyTPa19CJ/JeCYe/qaQUNkZgMClNlzEbk4eJsphGd1Xb2
	yEug==
X-Gm-Message-State: ALoCoQkgB1XtPaJvg5DaOwCdzlt1EOkM6taIOIa8y0grOSx9XWv49PhekaeUEgif6SrYZGQsaacV6/cXJOCXOh0nCc/CbLNIYKWGhmC9Qdjh//ydZYMlkfuvdyu01b1qVKGz67GZMNVFKcJ5wenOIsVMpehMz2gXxNM13BdE3IApB9N7KMneeN0=
X-Received: by 10.58.235.129 with SMTP id um1mr14956771vec.17.1391133471898;
	Thu, 30 Jan 2014 17:57:51 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.58.235.129 with SMTP id um1mr14956764vec.17.1391133471809;
	Thu, 30 Jan 2014 17:57:51 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Thu, 30 Jan 2014 17:57:51 -0800 (PST)
In-Reply-To: <52EAC87E.1090601@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<52E8FD02.2060601@linaro.org>
	<alpine.DEB.2.02.1401291321360.4373@kaball.uk.xensource.com>
	<CAJEb2DHh08XEN5Q9kBupZQV-gX5Zx1PcMHt_YEe4pRrMxHY5+A@mail.gmail.com>
	<CAPnVf8wqZWk_tvr3BJbopX8bCBaH__rtWTA5JGD7T+C0_=k7wA@mail.gmail.com>
	<alpine.DEB.2.02.1401301324400.4373@kaball.uk.xensource.com>
	<CAJEb2DEr2pMim4G+6dkCojbtb-c3bEYA95vQLBW=--5cnXu4+w@mail.gmail.com>
	<alpine.DEB.2.02.1401301523530.4373@kaball.uk.xensource.com>
	<CAJEb2DGZRs4OkbFNrPMn+9jCGOu9O6ebEgtZ+NSgBkqBhWycug@mail.gmail.com>
	<alpine.DEB.2.02.1401301715340.4373@kaball.uk.xensource.com>
	<CAJEb2DG7OpB9Fs0h4SGU79PJnYUPvCun2Ei11BtRTqrK+7ErFA@mail.gmail.com>
	<52EAC87E.1090601@linaro.org>
Date: Fri, 31 Jan 2014 03:57:51 +0200
Message-ID: <CAJEb2DHin=Sj8GjyUkJ5NSphaYDCQMGg8Mp+AFjQBC8F9hN-8w@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: xen-devel@lists.xen.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
	fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is no specific reason) But, as I found out, we already have a
local tree based on 4.4.0-rc1.
All the work of porting our local patches on top of 4.4.0-rc1
(resolving conflicts, etc.) was done by my colleagues.
I saw that the range of commits you pointed to is present there,
so I just moved to it.

I mean that the hypervisor gets stuck somewhere in interrupt handling
(it enters an infinite loop trying to acquire a lock, or waits for an
event that never arrives). As a result nothing works. Of course we
don't have any output from it (the console is not working).
For example, this is what happened in on_selected_cpus().

I run a domU. We have an operating system with a UI in the domU. After
moving to 4.4.0-rc1, the hypervisor began to hang very often. I have
not yet identified where this is happening. These hangs occur when I
use the touchscreen (while the domU is running). It somehow depends on
the touchscreen IRQ, I would even say on the "touchscreen interrupt
rate". I tried changing the interrupt priority and some other things,
but without any positive results.
First I need to localize where the deadlock happens.

On Thu, Jan 30, 2014 at 11:47 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Hello,
>
>
> On 30/01/14 19:54, Oleksandr Tyshchenko wrote:
>>
>> I moved to 4.4.0-rc1 which already has necessary irq patches.
>
>
> Any specific reason to use 4.4.0-rc1 instead of 4.4.0-rc2? There is a bunch
> of fixes (which should not be related to your current bug) such as TLB
> issue, foreign mapping, and a first attempt to fix guest cache issue.
>
>
>> And applied only one patch "cpumask_of(0) in gic_route_irq_to_guest".
>> I see that Hypervisor hangs very often. Unfortunately, now I don't
>> have debugger to localize part of code.
>> So, I have to use prints and it may take some time(
>
>
> What do you mean by hang? Do you have any output from Xen? What do you run?
> Dom0 and a DomU?
>
> Sincerely yours,
>
> --
> Julien Grall



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 02:56:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 02:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W94H1-0001KK-NK; Fri, 31 Jan 2014 02:56:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W94Gz-0001KC-P6
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 02:56:06 +0000
Received: from [85.158.137.68:42873] by server-14.bemta-3.messagelabs.com id
	DB/EE-08196-4C01BE25; Fri, 31 Jan 2014 02:56:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391136962!12464439!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29449 invoked from network); 31 Jan 2014 02:56:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 02:56:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,754,1384300800"; d="scan'208";a="98353577"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 02:56:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 21:56:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W94Gv-0004my-7z;
	Fri, 31 Jan 2014 02:56:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W94Gu-0001EN-Pu;
	Fri, 31 Jan 2014 02:56:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24653-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 02:56:00 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24653: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24653 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24653/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24397
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail in 24613 REGR. vs. 24397

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24613

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail blocked in 24397
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24397
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24613 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24613 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24613 never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 04:20:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 04:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W95Zq-0003KD-Ll; Fri, 31 Jan 2014 04:19:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W95Zo-0003K8-TA
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 04:19:37 +0000
Received: from [85.158.139.211:38946] by server-17.bemta-5.messagelabs.com id
	EB/E5-31975-7542BE25; Fri, 31 Jan 2014 04:19:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391141973!736245!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15390 invoked from network); 31 Jan 2014 04:19:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 04:19:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,754,1384300800"; d="scan'208";a="96406329"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 04:19:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 30 Jan 2014 23:19:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W95Zj-0005CI-Mb;
	Fri, 31 Jan 2014 04:19:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W95Zj-0007jQ-Fr;
	Fri, 31 Jan 2014 04:19:31 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24660-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 04:19:31 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24660: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24660 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24660/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24571

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   10 guest-saverestore           fail pass in 24629
 test-amd64-amd64-xl-qemuu-winxpsp3  4 xen-install           fail pass in 24629
 test-amd64-amd64-xl-qemut-winxpsp3 7 windows-install fail in 24629 pass in 24660

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24629 never pass

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100
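
    [Editorial note: the indentation pitfall this commit fixes can be
    sketched as below.  This is a minimal illustration only; the
    copy_to_guest() stand-in and the function name are hypothetical,
    not the actual Xen source.]

```c
#include <stddef.h>
#include <string.h>

unsigned int g_out;   /* stand-in "guest" buffer for the demo */

/* Hypothetical stand-in for copy_to_guest(): nonzero on failure.
   The real Xen macro has a different signature. */
static int copy_to_guest(void *dst, const void *src, unsigned int n)
{
    if (dst == NULL)
        return 1;     /* simulate a faulting guest pointer */
    memcpy(dst, src, n);
    return 0;
}

/* With the arguments of the multi-line call aligned under the opening
   parenthesis, the statement below the if() reads unambiguously as the
   error assignment, not as part of the condition. */
long page_offline_sketch(void *guest_buf)
{
    long ret = 0;
    unsigned int status = 42;

    if ( copy_to_guest(guest_buf,
                       &status,
                       sizeof(status)) )
        ret = -14;    /* -EFAULT */

    return ret;
}
```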

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.

    Suppose there are two threads: T1 inserting on the committed list,
    and T2 trying to consume it.

    1. T1 starts inserting an element (A) and sets its prev pointer
       (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A, and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again, and compares it with
       the result of the cmpxchg; the cmpxchg succeeded, but in the
       meantime prev changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.

    To solve the race, use a temporary variable for the prev pointer.

    *linkp (which points to a field in the element) must be updated
    before the cmpxchg(), as after a successful cmpxchg the element
    might be immediately removed and reinitialized.

    The wmb() prior to the cmpxchgptr() call is not necessary since that
    call is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
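
    [Editorial note: the fixed pattern can be sketched in C11 atomics as
    below.  This is a single-threaded illustration with invented names,
    not the actual Xen mctelem code: the old head is read into a local
    once per iteration and the element's link is set before the
    compare-exchange, matching the fix described above.]

```c
#include <stdatomic.h>
#include <stddef.h>

struct mctelem_ent {
    struct mctelem_ent *mcte_prev;   /* link to the previous head */
};

/* Atomically push 'elem' onto the list headed at *headp.  The old head
   is captured in a local ('old'), so a concurrent change to the shared
   head between the load and the cmpxchg cannot be mistaken for a
   cmpxchg failure. */
static void xchg_head(struct mctelem_ent *_Atomic *headp,
                      struct mctelem_ent *elem)
{
    struct mctelem_ent *old;

    do {
        old = atomic_load(headp);
        /* Update the element's link BEFORE the cmpxchg: once the
           cmpxchg succeeds, the element may be consumed immediately. */
        elem->mcte_prev = old;
    } while (!atomic_compare_exchange_weak(headp, &old, elem));
}

/* Single-threaded demo: push two elements and check the links. */
int mctelem_demo(void)
{
    static struct mctelem_ent a, b;
    struct mctelem_ent *_Atomic head = NULL;

    xchg_head(&head, &a);
    xchg_head(&head, &b);

    return atomic_load(&head) == &b && b.mcte_prev == &a &&
           a.mcte_prev == NULL;
}
```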

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    The above will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build, you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
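
    [Editorial note: the get/put pairing this fix restores can be
    sketched as below.  All names here are hypothetical stand-ins, not
    the actual Xen implementation: the point is that put_gfn() must run
    on every exit path, including the error path, or a reference stays
    held and the '!preempt_count()' assertion trips later.]

```c
#include <stdint.h>

int gfn_refs;                        /* stand-in for the held p2m reference */

static int get_gfn(uint64_t gfn)  { (void)gfn; gfn_refs++; return 0; }
static void put_gfn(uint64_t gfn) { (void)gfn; gfn_refs--; }

/* Simulated translation: fails for one particular gfn. */
static int translate(uint64_t gfn, uint64_t *mfn)
{
    *mfn = 0;
    return gfn == 0x6ae91 ? -1 : 0;
}

/* Fixed shape: the buggy version returned on translation failure
   without calling put_gfn(), leaving gfn_refs nonzero. */
int dbg_rw_sketch(uint64_t gfn)
{
    uint64_t mfn;
    int rc;

    get_gfn(gfn);
    rc = translate(gfn, &mfn);
    put_gfn(gfn);                    /* the fix: also runs on error */
    return rc;
}
```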
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 06:02:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 06:02:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W97Ap-0005fA-8g; Fri, 31 Jan 2014 06:01:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W97An-0005f5-Qz
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 06:01:54 +0000
Received: from [193.109.254.147:31392] by server-10.bemta-14.messagelabs.com
	id E4/88-10711-15C3BE25; Fri, 31 Jan 2014 06:01:53 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391148110!1039330!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15440 invoked from network); 31 Jan 2014 06:01:52 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 06:01:52 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 30 Jan 2014 23:01:45 -0700
Message-ID: <52EB3C47.9050902@suse.com>
Date: Thu, 30 Jan 2014 23:01:43 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

I hit a libvirtd segfault after ~7000 iterations of my test scripts.
Oddly, after restarting libvirtd, I now see the segfault after only a
few iterations.  It seems to occur when shutting down a domain, and
always at the same spot:

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff74567c5 in virClassIsDerivedFrom (klass=0x4545454545454545, parent=0x5555558a1230)
    at util/virobject.c:166
166             if (klass->magic == parent->magic)
(gdb) bt
#0  0x00007ffff74567c5 in virClassIsDerivedFrom (klass=0x4545454545454545, parent=0x5555558a1230)
    at util/virobject.c:166
#1  0x00007ffff7456f0a in virObjectIsClass (anyobj=0x5555559e78a0, klass=0x5555558a1230)
    at util/virobject.c:362
#2  0x00007ffff7456d63 in virObjectLock (anyobj=0x5555559e78a0) at util/virobject.c:314
#3  0x00007fffe993d3ad in libxlDomainObjTimerCallback (timer=31, timer_info=0x5555559cbed0)
    at libxl/libxl_domain.c:214
#4  0x00007ffff742f5b3 in virEventPollDispatchTimeouts () at util/vireventpoll.c:451
#5  0x00007ffff7430125 in virEventPollRunOnce () at util/vireventpoll.c:644
#6  0x00007ffff742e061 in virEventRunDefaultImpl () at util/virevent.c:306
#7  0x00007ffff75b7531 in virNetServerRun (srv=0x555555896360) at rpc/virnetserver.c:1112
#8  0x000055555556b6f8 in main (argc=2, argv=0x7fffffffe2b8) at libvirtd.c:1517
(gdb) f 3
(gdb) p *info
$1 = {next = 0x0, priv = 0x5555559e78a0, xl_priv = 0x5555559de360, id = 31,
  in_callback = false, dereg = true}
(gdb) p *info->priv
$2 = {parent = {parent = {u = {dummy_align1 = 93824997010160, dummy_align2 = 0x5555559e2af0,
          s = {magic = 1436429040, refs = 21845}}, klass = 0x4545454545454545},
    lock = {lock = {__data = {__lock = 1162167621, __count = 1162167621,
          __owner = 1162167621, __nusers = 1162167621, __kind = 1162167621,
          __spins = 17733, __elision = 17733, __list = {__prev = 0x4545454545454545,
            __next = 0x4545454545454545}},
        __size = 'E' <repeats 40 times>, __align = 4991471925827290437}}},
  logger_file = 0x4545454545454545, logger = 0xcbababababababa, ctx = 0x21,
  devs = 0x5555559e2b40, deathW = 0x4545454545454545}

Its not clear to me how the for_app_registration blob is being
trampled.  I did notice that the timeout_modify hook is called twice for
some timeouts, once from afterpoll_internal and once from
libxl__ev_time_deregister.  Should libxl apps handle multiple calls to
timeout_modify for the same timer?

On the bright side, I seem to have the fd event handling issues sorted out.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 06:02:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 06:02:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W97Ap-0005fA-8g; Fri, 31 Jan 2014 06:01:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W97An-0005f5-Qz
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 06:01:54 +0000
Received: from [193.109.254.147:31392] by server-10.bemta-14.messagelabs.com
	id E4/88-10711-15C3BE25; Fri, 31 Jan 2014 06:01:53 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391148110!1039330!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15440 invoked from network); 31 Jan 2014 06:01:52 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 06:01:52 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 30 Jan 2014 23:01:45 -0700
Message-ID: <52EB3C47.9050902@suse.com>
Date: Thu, 30 Jan 2014 23:01:43 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

I hit a libvirtd segfault after ~7000 iterations of my test scripts. 
Oddly, after restarting libvirtd, I now see the segfault after only a
few iterations.  It seems to occur when shutting down a domain, and
always at the same spot:

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff74567c5 in virClassIsDerivedFrom (klass=0x4545454545454545,
parent=0x5555558a1230)
    at util/virobject.c:166
166             if (klass->magic == parent->magic)
(gdb) bt
#0  0x00007ffff74567c5 in virClassIsDerivedFrom
(klass=0x4545454545454545, parent=0x5555558a1230)
    at util/virobject.c:166
#1  0x00007ffff7456f0a in virObjectIsClass (anyobj=0x5555559e78a0,
klass=0x5555558a1230)
    at util/virobject.c:362
#2  0x00007ffff7456d63 in virObjectLock (anyobj=0x5555559e78a0) at
util/virobject.c:314
#3  0x00007fffe993d3ad in libxlDomainObjTimerCallback (timer=31,
timer_info=0x5555559cbed0)
    at libxl/libxl_domain.c:214
#4  0x00007ffff742f5b3 in virEventPollDispatchTimeouts () at
util/vireventpoll.c:451
#5  0x00007ffff7430125 in virEventPollRunOnce () at util/vireventpoll.c:644
#6  0x00007ffff742e061 in virEventRunDefaultImpl () at util/virevent.c:306
#7  0x00007ffff75b7531 in virNetServerRun (srv=0x555555896360) at
rpc/virnetserver.c:1112
#8  0x000055555556b6f8 in main (argc=2, argv=0x7fffffffe2b8) at
libvirtd.c:1517
(gdb) f 3
(gdb) p *info
$1 = {next = 0x0, priv = 0x5555559e78a0, xl_priv = 0x5555559de360, id =
31, in_callback = false,
  dereg = true}
(gdb) p *info->priv
$2 = {parent = {parent = {u = {dummy_align1 = 93824997010160,
dummy_align2 = 0x5555559e2af0, s = {
          magic = 1436429040, refs = 21845}}, klass =
0x4545454545454545}, lock = {lock = {__data = {
          __lock = 1162167621, __count = 1162167621, __owner =
1162167621, __nusers = 1162167621,
          __kind = 1162167621, __spins = 17733, __elision = 17733,
__list = {
            __prev = 0x4545454545454545, __next = 0x4545454545454545}},
        __size = 'E' <repeats 40 times>, __align = 4991471925827290437}}},
  logger_file = 0x4545454545454545, logger = 0xcbababababababa, ctx =
0x21, devs = 0x5555559e2b40,
  deathW = 0x4545454545454545}

It's not clear to me how the for_app_registration blob is being
trampled.  I did notice that the timeout_modify hook is called twice for
some timeouts, once from afterpoll_internal and once from
libxl__ev_time_deregister.  Should libxl apps handle multiple calls to
timeout_modify for the same timer?

On the bright side, I seem to have the fd event handling issues sorted out.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 07:28:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 07:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W98Wf-0007iC-QZ; Fri, 31 Jan 2014 07:28:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W98Wd-0007i7-RE
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 07:28:32 +0000
Received: from [85.158.139.211:8999] by server-2.bemta-5.messagelabs.com id
	9D/CA-23037-E905BE25; Fri, 31 Jan 2014 07:28:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391153308!759781!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17465 invoked from network); 31 Jan 2014 07:28:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 07:28:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,756,1384300800"; d="scan'208";a="98396904"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 07:28:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 02:28:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W98WZ-0006Ab-DG;
	Fri, 31 Jan 2014 07:28:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W98WZ-0004kG-1S;
	Fri, 31 Jan 2014 07:28:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24662-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 07:28:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24662: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24662 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24662/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail REGR. vs. 24570
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24632 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel  7 redhat-install      fail pass in 24632
 test-amd64-amd64-xl-sedf      9 guest-start                 fail pass in 24632
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24632
 test-amd64-amd64-xl           9 guest-start        fail in 24632 pass in 24662
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24632 pass in 24662

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24570
 test-amd64-i386-xl-win7-amd64  7 windows-install        fail like 24599-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 8 guest-saverestore fail in 24632 blocked in 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24632 never pass

version targeted for testing:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 09:30:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 09:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9AQP-0002HM-35; Fri, 31 Jan 2014 09:30:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9AQM-0002HH-ON
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 09:30:11 +0000
Received: from [85.158.143.35:51573] by server-3.bemta-4.messagelabs.com id
	61/F6-11539-12D6BE25; Fri, 31 Jan 2014 09:30:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391160607!2142080!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31303 invoked from network); 31 Jan 2014 09:30:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 09:30:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,756,1384300800"; d="scan'208";a="98420231"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 09:30:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 04:30:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9AQH-0006m0-Jt;
	Fri, 31 Jan 2014 09:30:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9AQH-0002Rz-Fc;
	Fri, 31 Jan 2014 09:30:05 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24668-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 09:30:05 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24668: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24668 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24668/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 24649
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail pass in 24649

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24649 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an I/O
    access directly, and may get X86EMUL_RETRY while handling that I/O
    request. If a virtual vmexit is pending at the same time (for example, an
    interrupt to inject into L1), the hypervisor switches the vCPU context
    from L2 to L1. The hypervisor will retry the I/O request later, but that
    retry then unfortunately happens in L1's context, which causes the
    problem. The fix: while an I/O request is pending, no virtual
    vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
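
    The rule described above (defer virtual vmexit/vmentry while an emulated
    I/O request is still in flight) can be sketched minimally as follows; all
    names are illustrative stand-ins, not Xen's actual fields or functions.

```c
#include <stdbool.h>

/* Hypothetical, simplified model of the guard described above. */
enum io_state { IOREQ_NONE, IOREQ_PENDING };

struct nested_vcpu {
    enum io_state io;   /* state of the emulated I/O request */
};

/* A virtual vmexit to L1 may proceed only when no I/O request is
 * pending; otherwise the X86EMUL_RETRY would later be handled in
 * L1's context instead of L2's. */
static bool can_virtual_vmexit(const struct nested_vcpu *v)
{
    return v->io != IOREQ_PENDING;
}
```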

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For such
    accesses the shadow EPT table should point at the device's MMIO, but the
    current logic in L0 does not distinguish MMIO backed by qemu from MMIO
    backed by a physical device when building the shadow EPT table, which is
    wrong. This patch sets up correct shadow EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2 trying
    to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev has
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
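
    The fix described above (take a local snapshot of the head, link the
    element before the cmpxchg, and let the cmpxchg itself report the old
    value) can be sketched with C11 atomics. The names and types here are
    simplified stand-ins, not the actual mctelem code.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Simplified model of a telemetry list element (mcte_prev analogue). */
struct elem {
    struct elem *prev;
};

/* Push 'e' onto the lock-free list at *headp.  The current head is
 * read into the local 'old' once, and the compare-exchange updates
 * 'old' on failure, so the code never re-reads *headp after a
 * successful CAS (the source of the race in the original). */
static void xchg_head(struct elem *_Atomic *headp, struct elem *e)
{
    struct elem *old = atomic_load(headp);
    do {
        e->prev = old;   /* link before the CAS: after a successful
                          * CAS the element may already be consumed
                          * and reinitialized by another thread */
    } while (!atomic_compare_exchange_weak(headp, &old, e));
}
```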

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
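
    The discipline this patch restores, every get_gfn() matched by a
    put_gfn() on all paths including errors, can be modeled with a simple
    counter. The helpers below are hypothetical stand-ins, not Xen's real
    API.

```c
/* The counter models the recursive p2m lock depth, which must return
 * to zero when the operation completes. */
static int p2m_depth;

static void get_gfn_model(void) { p2m_depth++; }
static void put_gfn_model(void) { p2m_depth--; }

/* Simplified dbg_rw_mem-like helper: the error path must drop the
 * reference it took, which is the call the patch adds. */
static int read_guest_page(int page_ok)
{
    get_gfn_model();
    if (!page_ok) {
        put_gfn_model();   /* the previously missing call */
        return -1;
    }
    /* ... copy data from the page ... */
    put_gfn_model();
    return 0;
}
```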
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 31 10:58:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 10:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Bnf-00045K-55; Fri, 31 Jan 2014 10:58:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1W9Bnd-00045A-22
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 10:58:17 +0000
Received: from [85.158.139.211:57383] by server-12.bemta-5.messagelabs.com id
	CC/57-15415-8C18BE25; Fri, 31 Jan 2014 10:58:16 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391165895!798722!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19763 invoked from network); 31 Jan 2014 10:58:15 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 10:58:15 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so8333179wgg.1
	for <xen-devel@lists.xen.org>; Fri, 31 Jan 2014 02:58:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=7fvdwXqV8l/ETClXrg7Iz2U/Ipr9EJ71/lsGGY2xQm0=;
	b=0mZ9E16RqBEikWdkn9TYmsmqHV88m8W8qkzFX7WNhDu4LiaTbpHrrpuanzWMHmebfu
	czi+AxjQksD8U3MYTCLQ0DylafJmhgUkw+3ojKtJ0N5dRE2TSxEhHrt3tHGhGaR5k58j
	QRbr0SGPYO/BaTIL133UCLi1S6wqGS5+ZituQjCq57jV4tzzjxRbk6gpMZOnO+78cwFa
	CIrC6Qb3cpTRz1BFeLUoRBPtPp7Ap+nocr6zN+ysYVTC5bcC4mAbshEZ+yK4gKPR0voI
	Ea7f9tQbNAxlz46wbT5QZt0YA8x+dHorGL7k7IfL8ThZlbGMJ2vV7Bxh8uWmT4fEN/RZ
	7kgQ==
X-Received: by 10.194.175.66 with SMTP id by2mr883617wjc.59.1391165895355;
	Fri, 31 Jan 2014 02:58:15 -0800 (PST)
Received: from [172.16.26.11] (217.64.249.130.mactelecom.net. [217.64.249.130])
	by mx.google.com with ESMTPSA id n3sm19123818wix.10.2014.01.31.02.58.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 31 Jan 2014 02:58:14 -0800 (PST)
Message-ID: <52EB81C4.1020609@xen.org>
Date: Fri, 31 Jan 2014 10:58:12 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	Russell Pavlicek <russell.pavlicek@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, 
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>
Subject: [Xen-devel] Complete Checklist for making Xen Releases (including
 PR) - important for Xen 4.4 release
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I have added a checklist for making Xen Hypervisor releases, as I am going 
to be on holiday (and uncontactable - remember, I tend to holiday in the 
middle of nowhere, such as the Amazon) when the release is most likely to 
be made.

See 
http://wiki.xenproject.org/wiki/Checklist/XenHypervisorReleaseWithMarketing

It is important that owners for these tasks are figured out upfront: there 
are quite a lot of small things that need doing. We also need to give 
people access rights to our systems as needed.

Best Regards
Lars

P.S.: Please contact me with any questions *before* I leave


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:04:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9CpC-0005m6-Fe; Fri, 31 Jan 2014 12:03:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W9CpB-0005m0-PE
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 12:03:57 +0000
Received: from [85.158.137.68:42065] by server-9.bemta-3.messagelabs.com id
	EF/65-10184-C219BE25; Fri, 31 Jan 2014 12:03:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391169836!12509340!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31074 invoked from network); 31 Jan 2014 12:03:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Jan 2014 12:03:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Jan 2014 12:03:55 +0000
Message-Id: <52EB9F450200007800118618@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 31 Jan 2014 12:04:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Roger Pau Monne" <roger.pau@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger,

so you introduced this, yet looking at it in closer detail I can't 
understand why: struct blkif_request_segment is identical in layout; 
the sole difference between the two is that in the new structure the 
padding field has a name, whereas in the old one it doesn't.

I'd really like to get rid of this redundant type again, unless there's a
reason for it to be there which I'm overlooking.
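
For illustration, the situation described above can be sketched as follows. 
This is a reconstruction from the description in this mail, not the 
authoritative Xen header; the exact field names and the `_pad` member are 
assumptions based on the public blkif interface.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t grant_ref_t;

/* Original structure: two trailing padding bytes exist, but they are
 * anonymous - the compiler inserts them so the struct's size is a
 * multiple of the 4-byte alignment of grant_ref_t. */
struct blkif_request_segment {
    grant_ref_t gref;        /* reference to I/O buffer frame */
    uint8_t     first_sect;  /* first sector in frame to transfer */
    uint8_t     last_sect;   /* last sector in frame to transfer */
};

/* Newer structure: identical layout; the only difference is that the
 * padding is spelled out as a named field. */
struct blkif_request_segment_aligned {
    grant_ref_t gref;
    uint8_t     first_sect;
    uint8_t     last_sect;
    uint16_t    _pad;        /* explicit padding; otherwise redundant */
};
```

Since both structures occupy the same number of bytes with fields at the 
same offsets, the guest-visible ABI is unchanged whichever type is used, 
which is the basis for calling the second type redundant.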

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:14:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:14:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Cz1-000679-Jj; Fri, 31 Jan 2014 12:14:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9Cz0-000674-Cj
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 12:14:06 +0000
Received: from [85.158.139.211:18509] by server-15.bemta-5.messagelabs.com id
	97/0C-24395-D839BE25; Fri, 31 Jan 2014 12:14:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391170443!834624!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1491 invoked from network); 31 Jan 2014 12:14:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:14:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="98456225"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 12:14:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:14:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim 4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id
	1W9Cyw-0007dG-6x	for
	xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 12:14:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W9Cyv-0003jz-U0	for
	xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 12:14:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21227.37769.599789.521146@mariner.uk.xensource.com>
Date: Fri, 31 Jan 2014 12:14:01 +0000
To: <xen-devel@lists.xenproject.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We've just tagged 4.4.0-rc3, please test and report bugs.

The tarball can be downloaded here:

http://bits.xensource.com/oss-xen/release/4.4.0-rc3/xen-4.4.0-rc3.tar.gz

Ian.

(PS: Due to an oversight by me, the version number in the xen/Makefile
is still "-rc2", so the message printed at startup by 4.4.0-rc3 claims
that it's "4.4.0-rc2".  Sorry about that.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:14:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9CzF-000680-1E; Fri, 31 Jan 2014 12:14:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9CzD-00067c-Dm
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 12:14:19 +0000
Received: from [85.158.143.35:58499] by server-1.bemta-4.messagelabs.com id
	93/78-31661-A939BE25; Fri, 31 Jan 2014 12:14:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391170456!2197798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30444 invoked from network); 31 Jan 2014 12:14:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:14:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="96499535"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 12:14:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:14:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Cz8-0007dR-Ts;
	Fri, 31 Jan 2014 12:14:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Cz8-0004rw-SY;
	Fri, 31 Jan 2014 12:14:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24669-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 12:14:14 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24669: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24669 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24669/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build     fail in 24653 REGR. vs. 24397

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  7 redhat-install              fail pass in 24653
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install       fail pass in 24653
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail in 24653 pass in 24613
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail in 24613 pass in 24673-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail blocked in 24397
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24397
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24653 blocked in 24397
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24653 like 24397
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24673 like 24387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 24653 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 24653 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 24653 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 24653 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 24653 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 24653 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24613 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24613 never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:14:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9CzF-000680-1E; Fri, 31 Jan 2014 12:14:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9CzD-00067c-Dm
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 12:14:19 +0000
Received: from [85.158.143.35:58499] by server-1.bemta-4.messagelabs.com id
	93/78-31661-A939BE25; Fri, 31 Jan 2014 12:14:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391170456!2197798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30444 invoked from network); 31 Jan 2014 12:14:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:14:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="96499535"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 12:14:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:14:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Cz8-0007dR-Ts;
	Fri, 31 Jan 2014 12:14:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Cz8-0004rw-SY;
	Fri, 31 Jan 2014 12:14:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24669-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 12:14:14 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24669: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24669 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24669/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             4 kernel-build     fail in 24653 REGR. vs. 24397

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-amd  7 redhat-install              fail pass in 24653
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install       fail pass in 24653
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail in 24653 pass in 24613
 test-amd64-i386-xl-winxpsp3-vcpus1 11 guest-localmigrate.2 fail in 24613 pass in 24673-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail blocked in 24397
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24397
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24653 blocked in 24397
 test-amd64-i386-xl-win7-amd64  7 windows-install      fail in 24653 like 24397
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24673 like 24387

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)     blocked in 24653 n/a
 test-amd64-amd64-xl           1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)      blocked in 24653 n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-pair         1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 xen-build-check(1) blocked in 24653 n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)        blocked in 24653 n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)  blocked in 24653 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 xen-build-check(1) blocked in 24653 n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)  blocked in 24653 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24613 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24613 never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 345 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:17:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9D2a-0006MD-Ic; Fri, 31 Jan 2014 12:17:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9D2Y-0006Lx-KU
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 12:17:46 +0000
Received: from [193.109.254.147:9063] by server-9.bemta-14.messagelabs.com id
	05/D4-24895-A649BE25; Fri, 31 Jan 2014 12:17:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391170664!1131325!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6616 invoked from network); 31 Jan 2014 12:17:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:17:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="98457571"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 12:17:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:17:43 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W9D2V-0007eT-5D;
	Fri, 31 Jan 2014 12:17:43 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W9D2U-0003kW-Sy;
	Fri, 31 Jan 2014 12:17:42 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21227.37990.762802.574421@mariner.uk.xensource.com>
Date: Fri, 31 Jan 2014 12:17:42 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52EB3C47.9050902@suse.com>
References: <52EB3C47.9050902@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("libvirt libxl timer handling issue"):
> I hit a libvirtd segfault after ~7000 iterations of my test scripts. 
> Oddly, after restarting libvirtd, I now see the segfault after only a
> few iterations.  It seems to occur when shutting down a domain, and
> always at the same spot
...
> It's not clear to me how the for_app_registration blob is being
> trampled.  I did notice that the timeout_modify hook is called twice for
> some timeouts, once from afterpoll_internal and once from
> libxl__ev_time_deregister.  Should libxl apps handle multiple calls to
> timeout_modify for the same timer?

Yes, multiple calls to timeout_modify are supposed to work.  Is that
possibly the root cause of your crash ?
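
Since libxl may invoke an application's timeout_modify hook more than once for the same timer (e.g. once from the poll path and once from deregistration), the hook has to be idempotent. Here is a minimal sketch of that contract; the `struct timer_reg` and its field names are hypothetical stand-ins for an application's registration blob, not libxl's actual types:

```c
#include <assert.h>

/* Hypothetical per-timer registration state a libxl application might
 * keep; names are illustrative only. */
struct timer_reg {
    int armed;          /* nonzero while the app-level timer is scheduled */
    long long abs_ms;   /* absolute deadline, -1 = disarmed */
};

/* timeout_modify-style hook: written so that calling it twice with the
 * same arguments (including two disarm calls in a row) is harmless. */
void app_timeout_modify(struct timer_reg *reg, long long abs_ms)
{
    if (abs_ms < 0) {
        /* Disarm: a repeated disarm must be a no-op, not a crash. */
        reg->armed = 0;
        reg->abs_ms = -1;
        return;
    }
    reg->armed = 1;
    reg->abs_ms = abs_ms;
}
```

The key point is that the hook only overwrites state it owns; it never frees or unlinks the registration blob, so a second call for an already-modified timer cannot trample it.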

> On the bright side, I seem to have the fd event handling issues sorted out.

Good, I guess.  Let me look at your crash stacktrace and the libxl
code in more detail...

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:39:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9DNW-0007No-Kc; Fri, 31 Jan 2014 12:39:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9DNU-0007Nj-Pm
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 12:39:25 +0000
Received: from [85.158.139.211:18733] by server-9.bemta-5.messagelabs.com id
	07/73-11237-B799BE25; Fri, 31 Jan 2014 12:39:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391171960!831157!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9715 invoked from network); 31 Jan 2014 12:39:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="96505003"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 12:39:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:39:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9DNP-0007lj-B7;
	Fri, 31 Jan 2014 12:39:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9DNP-0003k7-8D;
	Fri, 31 Jan 2014 12:39:19 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24671-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 12:39:19 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24671: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24671 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep           fail REGR. vs. 24571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1, inserting on the committed list, and T2,
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg; the cmpxchg succeeded, but in the meantime
       prev changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
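
The fixed exchange can be sketched as follows.  This is a simplified model using C11 atomics, not Xen's actual mctelem code: the structure and function names are illustrative.  The essential point from the commit message is that the old head is read into a local variable once, and the cmpxchg compares against that local copy rather than re-reading the element's prev field (which a consumer may have mutated in the meantime):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Simplified telemetry element; field names are illustrative. */
struct mcte {
    struct mcte *prev;
};

/* Push 'elem' onto the lock-free list at '*headp'.  'old' is the
 * temporary variable the fix introduces: the CAS compares against this
 * local snapshot, so a concurrent change to elem->prev cannot make a
 * failed CAS look like a success (or vice versa). */
static void xchg_head(_Atomic(struct mcte *) *headp, struct mcte *elem)
{
    struct mcte *old;

    do {
        old = atomic_load(headp);
        elem->prev = old;   /* link before the cmpxchg: on success the
                             * element may be consumed immediately */
    } while (!atomic_compare_exchange_weak(headp, &old, elem));
}
```

The buggy pattern the commit describes is equivalent to comparing the CAS result against a fresh read of `elem->prev`, which races with consumers; keeping the snapshot in `old` removes that window.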

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
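The leak the fix addresses can be sketched with illustrative mocks. The names and refcount below are hypothetical stand-ins, not Xen's real get_gfn()/put_gfn() implementation: the point is only that an early error return which skips the release leaves the lock's recurse count elevated, matching the `recurse_count = 1` in the crash dump.

```c
#include <assert.h>
#include <stdbool.h>

static int refcount;  /* stand-in for the p2m lock recurse count */

static bool get_gfn_mock(unsigned long gfn, bool valid)
{
    (void)gfn;
    refcount++;       /* taking the reference always succeeds */
    return valid;     /* ...but the lookup itself may fail */
}

static void put_gfn_mock(void)
{
    refcount--;
}

/* Buggy shape: the error path returns without dropping the reference. */
static int dbg_rw_buggy(unsigned long gfn, bool valid)
{
    if (!get_gfn_mock(gfn, valid))
        return -1;    /* BUG: put_gfn_mock() never called */
    put_gfn_mock();
    return 0;
}

/* Fixed shape: the reference is dropped on success and error alike. */
static int dbg_rw_fixed(unsigned long gfn, bool valid)
{
    bool ok = get_gfn_mock(gfn, valid);

    put_gfn_mock();   /* always balanced, even on failure */
    return ok ? 0 : -1;
}
```

With the buggy shape a failed lookup leaves the count at 1 forever; with the fixed shape it returns to 0 either way.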
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 12:39:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 12:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9DNW-0007No-Kc; Fri, 31 Jan 2014 12:39:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9DNU-0007Nj-Pm
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 12:39:25 +0000
Received: from [85.158.139.211:18733] by server-9.bemta-5.messagelabs.com id
	07/73-11237-B799BE25; Fri, 31 Jan 2014 12:39:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391171960!831157!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9715 invoked from network); 31 Jan 2014 12:39:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 12:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="96505003"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 12:39:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 07:39:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9DNP-0007lj-B7;
	Fri, 31 Jan 2014 12:39:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9DNP-0003k7-8D;
	Fri, 31 Jan 2014 12:39:19 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24671-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 12:39:19 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24671: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24671 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep           fail REGR. vs. 24571

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, re-reads the prev pointer and compares it with the
       result of the cmpxchg.  The cmpxchg succeeded, but in the
       meantime prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 13:29:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 13:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9E9i-000094-4X; Fri, 31 Jan 2014 13:29:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1W9E9g-0008WB-V0
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 13:29:13 +0000
Received: from [85.158.143.35:14882] by server-1.bemta-4.messagelabs.com id
	AA/ED-31661-625ABE25; Fri, 31 Jan 2014 13:29:10 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391174948!2204857!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3942 invoked from network); 31 Jan 2014 13:29:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 13:29:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="96519208"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 13:29:08 +0000
Received: from [10.68.14.36] (10.68.14.36) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 08:29:07 -0500
Message-ID: <52EBA51E.808@citrix.com>
Date: Fri, 31 Jan 2014 14:29:02 +0100
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Michael Chan <mchan@broadcom.com>, Zoltan Kiss <zoltan.kiss@schaman.hu>
References: <52EAA31B.1090606@schaman.hu>
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
In-Reply-To: <1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
X-Originating-IP: [10.68.14.36]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30/01/14 21:34, Michael Chan wrote:
> On Thu, 2014-01-30 at 19:08 +0000, Zoltan Kiss wrote:
>> I've experienced some queue timeout problems mentioned in the subject
>> with igb and bnx2 cards.
> Please provide the full tx timeout dmesg.  bnx2 dumps some diagnostic
> information during tx timeout that may be useful.  Thanks.
Hi,

Here is some:

[ 5417.275463] ------------[ cut here ]------------
[ 5417.275472] WARNING: at net/sched/sch_generic.c:255 dev_watchdog+0x156/0x1f0()
[ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
[ 5417.275476] Modules linked in: tun nfsv3 nfs_acl rpcsec_gss_krb5 
auth_rpcgss oid_registry nfsv4 nfs fscache lockd sunrpc ipv6 
openvswitch(O) ipt_REJECT nf_conntrack_ipv
rack xt_tcpudp iptable_filter ip_tables x_tables nls_utf8 isofs 
dm_multipath scsi_dh dm_mod dcdbas coretemp microcode psmouse serio_raw 
lpc_ich mfd_core hid_generic ehci_p
sg hed bnx2 usbhid hid sr_mod cdrom sd_mod pata_acpi ata_generic 
ata_piix libata uhci_hcd mptsas mptscsih mptbase scsi_transport_sas scsi_mod
[ 5417.275517] CPU: 0 PID: 3 Comm: ksoftirqd/0 Tainted: G O 3.10.11-0.xs1.8.50.170.377582 #1
[ 5417.275518] Hardware name: Dell Inc. PowerEdge R710/00W9X3, BIOS 1.2.6 07/17/2009
[ 5417.275520]  000000ff f008be08 c1488c53 f008be30 c1046664 c1658a88 
f008be5c 000000ff
[ 5417.275525]  c13fc146 c13fc146 ee96a000 00000002 00137d44 f008be48 
c1046723 00000009
[ 5417.275530]  f008be40 c1658a88 f008be5c f008be80 c13fc146 c16556e1 
000000ff c1658a88
[ 5417.275535] Call Trace:
[ 5417.275539]  [<c1488c53>] dump_stack+0x16/0x1b
[ 5417.275544]  [<c1046664>] warn_slowpath_common+0x64/0x80
[ 5417.275546]  [<c13fc146>] ? dev_watchdog+0x156/0x1f0
[ 5417.275549]  [<c13fc146>] ? dev_watchdog+0x156/0x1f0
[ 5417.275551]  [<c1046723>] warn_slowpath_fmt+0x33/0x40
[ 5417.275554]  [<c13fc146>] dev_watchdog+0x156/0x1f0
[ 5417.275559]  [<c10549ce>] call_timer_fn+0x3e/0xf0
[ 5417.275563]  [<c107293e>] ? finish_task_switch+0x4e/0xb0
[ 5417.275565]  [<c13fbff0>] ? __netdev_watchdog_up+0x60/0x60
[ 5417.275568]  [<c1055c1b>] run_timer_softirq+0x1ab/0x210
[ 5417.275571]  [<c13fbff0>] ? __netdev_watchdog_up+0x60/0x60
[ 5417.275574]  [<c104e3f4>] __do_softirq+0xc4/0x200
[ 5417.275577]  [<c1493547>] ? xen_do_upcall+0x7/0xc
[ 5417.275579]  [<c104e550>] run_ksoftirqd+0x20/0x50
[ 5417.275582]  [<c106f182>] smpboot_thread_fn+0x142/0x150
[ 5417.275586]  [<c1067a2b>] kthread+0x9b/0xa0
[ 5417.275589]  [<c106f040>] ? smpboot_create_threads+0x60/0x60
[ 5417.275591]  [<c1070000>] ? cpu_rt_runtime_read+0x40/0x80
[ 5417.275594]  [<c1492f77>] ret_from_kernel_thread+0x1b/0x28
[ 5417.275596]  [<c1067990>] ? kthread_freezable_should_stop+0x60/0x60
[ 5417.275599] ---[ end trace 691f572d388226ca ]---
[ 5417.275602] bnx2 0000:01:00.1 eth1: <--- start FTQ dump --->
[ 5417.275622] bnx2 0000:01:00.1 eth1: RV2P_PFTQ_CTL 00010000
[ 5417.275629] bnx2 0000:01:00.1 eth1: RV2P_TFTQ_CTL 00020000
[ 5417.275636] bnx2 0000:01:00.1 eth1: RV2P_MFTQ_CTL 00004000
[ 5417.275643] bnx2 0000:01:00.1 eth1: TBDR_FTQ_CTL 00004002
[ 5417.275650] bnx2 0000:01:00.1 eth1: TDMA_FTQ_CTL 00010002
[ 5417.275657] bnx2 0000:01:00.1 eth1: TXP_FTQ_CTL 00010000
[ 5417.275663] bnx2 0000:01:00.1 eth1: TXP_FTQ_CTL 00010000
[ 5417.275670] bnx2 0000:01:00.1 eth1: TPAT_FTQ_CTL 00010000
[ 5417.275677] bnx2 0000:01:00.1 eth1: RXP_CFTQ_CTL 00008000
[ 5417.275684] bnx2 0000:01:00.1 eth1: RXP_FTQ_CTL 00100000
[ 5417.275690] bnx2 0000:01:00.1 eth1: COM_COMXQ_FTQ_CTL 00010000
[ 5417.275698] bnx2 0000:01:00.1 eth1: COM_COMTQ_FTQ_CTL 00020000
[ 5417.275705] bnx2 0000:01:00.1 eth1: COM_COMQ_FTQ_CTL 00010000
[ 5417.275712] bnx2 0000:01:00.1 eth1: CP_CPQ_FTQ_CTL 00004000
[ 5417.275718] bnx2 0000:01:00.1 eth1: CPU states:
[ 5417.275730] bnx2 0000:01:00.1 eth1: 045000 mode b84c state 80001000 evt_mask 500 pc 8001284 pc 8001284 instr 1440fffc
[ 5417.275746] bnx2 0000:01:00.1 eth1: 085000 mode b84c state 80005000 evt_mask 500 pc 8000a54 pc 8000a5c instr 10400016
[ 5417.275785] bnx2 0000:01:00.1 eth1: 0c5000 mode b84c state 80001000 evt_mask 500 pc 8004c20 pc 8004c20 instr 32050003
[ 5417.275801] bnx2 0000:01:00.1 eth1: 105000 mode b8cc state 80000000 evt_mask 500 pc 8000a8c pc 8000a94 instr 8c420020
[ 5417.275817] bnx2 0000:01:00.1 eth1: 145000 mode b880 state 80000000 evt_mask 500 pc 8000ab0 pc 800d1e8 instr 27bd0020
[ 5417.275834] bnx2 0000:01:00.1 eth1: 185000 mode b8cc state 80000000 evt_mask 500 pc 8000cb0 pc 8000930 instr 8ce800e8
[ 5417.275845] bnx2 0000:01:00.1 eth1: <--- end FTQ dump --->
[ 5417.275851] bnx2 0000:01:00.1 eth1: <--- start TBDC dump --->
[ 5417.275858] bnx2 0000:01:00.1 eth1: TBDC free cnt: 32
[ 5417.275864] bnx2 0000:01:00.1 eth1: LINE     CID  BIDX   CMD VALIDS
[ 5417.275875] bnx2 0000:01:00.1 eth1: 00    001080  17c8   00 [0]
[ 5417.275886] bnx2 0000:01:00.1 eth1: 01    001080  17e0   00 [0]
[ 5417.275897] bnx2 0000:01:00.1 eth1: 02    001080  17e8   00 [0]
[ 5417.275907] bnx2 0000:01:00.1 eth1: 03    001080  17f8   00 [0]
[ 5417.275918] bnx2 0000:01:00.1 eth1: 04    001080  1800   00 [0]
[ 5417.275929] bnx2 0000:01:00.1 eth1: 05    001080  17d0   00 [0]
[ 5417.275940] bnx2 0000:01:00.1 eth1: 06    001080  17d8   00 [0]
[ 5417.275951] bnx2 0000:01:00.1 eth1: 07    001080  17f0   00 [0]
[ 5417.275961] bnx2 0000:01:00.1 eth1: 08    001080  1620   00 [0]
[ 5417.275972] bnx2 0000:01:00.1 eth1: 09    17de00  fbf8   78 [0]
[ 5417.275983] bnx2 0000:01:00.1 eth1: 0a    1bbf80  fef8   9f [0]
[ 5417.275994] bnx2 0000:01:00.1 eth1: 0b    1d2d80  f7f8   7f [0]
[ 5417.276005] bnx2 0000:01:00.1 eth1: 0c    148f00  f7b8   88 [0]
[ 5417.276016] bnx2 0000:01:00.1 eth1: 0d    16af80  f7d0   75 [0]
[ 5417.276026] bnx2 0000:01:00.1 eth1: 0e    1adf80  bfb0   26 [0]
[ 5417.276037] bnx2 0000:01:00.1 eth1: 0f    1ebf80  dd68   3c [0]
[ 5417.276048] bnx2 0000:01:00.1 eth1: 10    1cf700  d1f0   fc [0]
[ 5417.276059] bnx2 0000:01:00.1 eth1: 11    1cdc00  fbf0   7d [0]
[ 5417.276069] bnx2 0000:01:00.1 eth1: 12    15c900  f7f8   ef [0]
[ 5417.276081] bnx2 0000:01:00.1 eth1: 13    17cf00  d7d8   3f [0]
[ 5417.276093] bnx2 0000:01:00.1 eth1: 14    1ecf80  ffb0   b7 [0]
[ 5417.276107] bnx2 0000:01:00.1 eth1: 15    1cbd80  f3e8   bf [0]
[ 5417.276119] bnx2 0000:01:00.1 eth1: 16    179b80  d7f8   d7 [0]
[ 5417.276130] bnx2 0000:01:00.1 eth1: 17    1fdf00  f3e8   7e [0]
[ 5417.276141] bnx2 0000:01:00.1 eth1: 18    1f9780  b578   af [0]
[ 5417.276152] bnx2 0000:01:00.1 eth1: 19    1d7d80  fef0   ff [0]
[ 5417.276163] bnx2 0000:01:00.1 eth1: 1a    1d9e80  5fe8   d7 [0]
[ 5417.276174] bnx2 0000:01:00.1 eth1: 1b    1fff80  ebf8   f8 [0]
[ 5417.276186] bnx2 0000:01:00.1 eth1: 1c    1fbd80  f7d8   7f [0]
[ 5417.276200] bnx2 0000:01:00.1 eth1: 1d    16da80  2ef8   ff [0]
[ 5417.276211] bnx2 0000:01:00.1 eth1: 1e    1f9b80  bf50   8e [0]
[ 5417.276224] bnx2 0000:01:00.1 eth1: 1f    1bdf00  faf8   75 [0]
[ 5417.276231] bnx2 0000:01:00.1 eth1: <--- end TBDC dump --->
[ 5417.276246] bnx2 0000:01:00.1 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
[ 5417.276258] bnx2 0000:01:00.1 eth1: DEBUG: PCI_PM[19002008] PCI_MISC_CFG[92000088]
[ 5417.276269] bnx2 0000:01:00.1 eth1: DEBUG: EMAC_TX_STATUS[00000008] EMAC_RX_STATUS[00000000]
[ 5417.276280] bnx2 0000:01:00.1 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[ 5417.276288] bnx2 0000:01:00.1 eth1: DEBUG: HC_STATS_INTERRUPT_STATUS[01fb0004]
[ 5417.276298] bnx2 0000:01:00.1 eth1: DEBUG: PBA[00000000]
[ 5417.276304] bnx2 0000:01:00.1 eth1: <--- start MCP states dump --->
[ 5417.276314] bnx2 0000:01:00.1 eth1: DEBUG: MCP_STATE_P0[0003610e] MCP_STATE_P1[0003610e]
[ 5417.276326] bnx2 0000:01:00.1 eth1: DEBUG: MCP mode[0000b880] state[80000000] evt_mask[00000500]
[ 5417.276339] bnx2 0000:01:00.1 eth1: DEBUG: pc[0800d7b8] pc[08000cdc] instr[00041880]
[ 5417.276349] bnx2 0000:01:00.1 eth1: DEBUG: shmem states:
[ 5417.276358] bnx2 0000:01:00.1 eth1: DEBUG: drv_mb[0d000004] fw_mb[00000004] link_status[0000006f]
[ 5417.276369]  drv_pulse_mb[00001485]
[ 5417.276373] bnx2 0000:01:00.1 eth1: DEBUG: dev_info_signature[44564903] reset_type[01005254]
[ 5417.276383]  condition[0003610e]
[ 5417.276389] bnx2 0000:01:00.1 eth1: DEBUG: 000001c0: 01005254 42530000 0003610e 00000000
[ 5417.276402] bnx2 0000:01:00.1 eth1: DEBUG: 000003cc: 44444444 44444444 44444444 00000a28
[ 5417.276416] bnx2 0000:01:00.1 eth1: DEBUG: 000003dc: 0004ffff 00000000 00000000 00000000
[ 5417.276430] bnx2 0000:01:00.1 eth1: DEBUG: 000003ec: 00000000 00000000 00000000 00000000
[ 5417.276440] bnx2 0000:01:00.1 eth1: DEBUG: 0x3fc[0000ffff]


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> I've experienced some queue timeout problems mentioned in the subject
>> with igb and bnx2 cards.
> Please provide the full tx timeout dmesg.  bnx2 dumps some diagnostic
> information during tx timeout that may be useful.  Thanks.
Hi,

Here is some of it:

[ 5417.275463] ------------[ cut here ]------------
[ 5417.275472] WARNING: at net/sched/sch_generic.c:255 dev_watchdog+0x156/0x1f0()
[ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
[ 5417.275476] Modules linked in: tun nfsv3 nfs_acl rpcsec_gss_krb5 
auth_rpcgss oid_registry nfsv4 nfs fscache lockd sunrpc ipv6 
openvswitch(O) ipt_REJECT nf_conntrack_ipv
rack xt_tcpudp iptable_filter ip_tables x_tables nls_utf8 isofs 
dm_multipath scsi_dh dm_mod dcdbas coretemp microcode psmouse serio_raw 
lpc_ich mfd_core hid_generic ehci_p
sg hed bnx2 usbhid hid sr_mod cdrom sd_mod pata_acpi ata_generic 
ata_piix libata uhci_hcd mptsas mptscsih mptbase scsi_transport_sas scsi_mod
[ 5417.275517] CPU: 0 PID: 3 Comm: ksoftirqd/0 Tainted: G O 3.10.11-0.xs1.8.50.170.377582 #1
[ 5417.275518] Hardware name: Dell Inc. PowerEdge R710/00W9X3, BIOS 1.2.6 07/17/2009
[ 5417.275520]  000000ff f008be08 c1488c53 f008be30 c1046664 c1658a88 
f008be5c 000000ff
[ 5417.275525]  c13fc146 c13fc146 ee96a000 00000002 00137d44 f008be48 
c1046723 00000009
[ 5417.275530]  f008be40 c1658a88 f008be5c f008be80 c13fc146 c16556e1 
000000ff c1658a88
[ 5417.275535] Call Trace:
[ 5417.275539]  [<c1488c53>] dump_stack+0x16/0x1b
[ 5417.275544]  [<c1046664>] warn_slowpath_common+0x64/0x80
[ 5417.275546]  [<c13fc146>] ? dev_watchdog+0x156/0x1f0
[ 5417.275549]  [<c13fc146>] ? dev_watchdog+0x156/0x1f0
[ 5417.275551]  [<c1046723>] warn_slowpath_fmt+0x33/0x40
[ 5417.275554]  [<c13fc146>] dev_watchdog+0x156/0x1f0
[ 5417.275559]  [<c10549ce>] call_timer_fn+0x3e/0xf0
[ 5417.275563]  [<c107293e>] ? finish_task_switch+0x4e/0xb0
[ 5417.275565]  [<c13fbff0>] ? __netdev_watchdog_up+0x60/0x60
[ 5417.275568]  [<c1055c1b>] run_timer_softirq+0x1ab/0x210
[ 5417.275571]  [<c13fbff0>] ? __netdev_watchdog_up+0x60/0x60
[ 5417.275574]  [<c104e3f4>] __do_softirq+0xc4/0x200
[ 5417.275577]  [<c1493547>] ? xen_do_upcall+0x7/0xc
[ 5417.275579]  [<c104e550>] run_ksoftirqd+0x20/0x50
[ 5417.275582]  [<c106f182>] smpboot_thread_fn+0x142/0x150
[ 5417.275586]  [<c1067a2b>] kthread+0x9b/0xa0
[ 5417.275589]  [<c106f040>] ? smpboot_create_threads+0x60/0x60
[ 5417.275591]  [<c1070000>] ? cpu_rt_runtime_read+0x40/0x80
[ 5417.275594]  [<c1492f77>] ret_from_kernel_thread+0x1b/0x28
[ 5417.275596]  [<c1067990>] ? kthread_freezable_should_stop+0x60/0x60
[ 5417.275599] ---[ end trace 691f572d388226ca ]---
[ 5417.275602] bnx2 0000:01:00.1 eth1: <--- start FTQ dump --->
[ 5417.275622] bnx2 0000:01:00.1 eth1: RV2P_PFTQ_CTL 00010000
[ 5417.275629] bnx2 0000:01:00.1 eth1: RV2P_TFTQ_CTL 00020000
[ 5417.275636] bnx2 0000:01:00.1 eth1: RV2P_MFTQ_CTL 00004000
[ 5417.275643] bnx2 0000:01:00.1 eth1: TBDR_FTQ_CTL 00004002
[ 5417.275650] bnx2 0000:01:00.1 eth1: TDMA_FTQ_CTL 00010002
[ 5417.275657] bnx2 0000:01:00.1 eth1: TXP_FTQ_CTL 00010000
[ 5417.275663] bnx2 0000:01:00.1 eth1: TXP_FTQ_CTL 00010000
[ 5417.275670] bnx2 0000:01:00.1 eth1: TPAT_FTQ_CTL 00010000
[ 5417.275677] bnx2 0000:01:00.1 eth1: RXP_CFTQ_CTL 00008000
[ 5417.275684] bnx2 0000:01:00.1 eth1: RXP_FTQ_CTL 00100000
[ 5417.275690] bnx2 0000:01:00.1 eth1: COM_COMXQ_FTQ_CTL 00010000
[ 5417.275698] bnx2 0000:01:00.1 eth1: COM_COMTQ_FTQ_CTL 00020000
[ 5417.275705] bnx2 0000:01:00.1 eth1: COM_COMQ_FTQ_CTL 00010000
[ 5417.275712] bnx2 0000:01:00.1 eth1: CP_CPQ_FTQ_CTL 00004000
[ 5417.275718] bnx2 0000:01:00.1 eth1: CPU states:
[ 5417.275730] bnx2 0000:01:00.1 eth1: 045000 mode b84c state 80001000 evt_mask 500 pc 8001284 pc 8001284 instr 1440fffc
[ 5417.275746] bnx2 0000:01:00.1 eth1: 085000 mode b84c state 80005000 evt_mask 500 pc 8000a54 pc 8000a5c instr 10400016
[ 5417.275785] bnx2 0000:01:00.1 eth1: 0c5000 mode b84c state 80001000 evt_mask 500 pc 8004c20 pc 8004c20 instr 32050003
[ 5417.275801] bnx2 0000:01:00.1 eth1: 105000 mode b8cc state 80000000 evt_mask 500 pc 8000a8c pc 8000a94 instr 8c420020
[ 5417.275817] bnx2 0000:01:00.1 eth1: 145000 mode b880 state 80000000 evt_mask 500 pc 8000ab0 pc 800d1e8 instr 27bd0020
[ 5417.275834] bnx2 0000:01:00.1 eth1: 185000 mode b8cc state 80000000 evt_mask 500 pc 8000cb0 pc 8000930 instr 8ce800e8
[ 5417.275845] bnx2 0000:01:00.1 eth1: <--- end FTQ dump --->
[ 5417.275851] bnx2 0000:01:00.1 eth1: <--- start TBDC dump --->
[ 5417.275858] bnx2 0000:01:00.1 eth1: TBDC free cnt: 32
[ 5417.275864] bnx2 0000:01:00.1 eth1: LINE     CID  BIDX   CMD VALIDS
[ 5417.275875] bnx2 0000:01:00.1 eth1: 00    001080  17c8   00 [0]
[ 5417.275886] bnx2 0000:01:00.1 eth1: 01    001080  17e0   00 [0]
[ 5417.275897] bnx2 0000:01:00.1 eth1: 02    001080  17e8   00 [0]
[ 5417.275907] bnx2 0000:01:00.1 eth1: 03    001080  17f8   00 [0]
[ 5417.275918] bnx2 0000:01:00.1 eth1: 04    001080  1800   00 [0]
[ 5417.275929] bnx2 0000:01:00.1 eth1: 05    001080  17d0   00 [0]
[ 5417.275940] bnx2 0000:01:00.1 eth1: 06    001080  17d8   00 [0]
[ 5417.275951] bnx2 0000:01:00.1 eth1: 07    001080  17f0   00 [0]
[ 5417.275961] bnx2 0000:01:00.1 eth1: 08    001080  1620   00 [0]
[ 5417.275972] bnx2 0000:01:00.1 eth1: 09    17de00  fbf8   78 [0]
[ 5417.275983] bnx2 0000:01:00.1 eth1: 0a    1bbf80  fef8   9f [0]
[ 5417.275994] bnx2 0000:01:00.1 eth1: 0b    1d2d80  f7f8   7f [0]
[ 5417.276005] bnx2 0000:01:00.1 eth1: 0c    148f00  f7b8   88 [0]
[ 5417.276016] bnx2 0000:01:00.1 eth1: 0d    16af80  f7d0   75 [0]
[ 5417.276026] bnx2 0000:01:00.1 eth1: 0e    1adf80  bfb0   26 [0]
[ 5417.276037] bnx2 0000:01:00.1 eth1: 0f    1ebf80  dd68   3c [0]
[ 5417.276048] bnx2 0000:01:00.1 eth1: 10    1cf700  d1f0   fc [0]
[ 5417.276059] bnx2 0000:01:00.1 eth1: 11    1cdc00  fbf0   7d [0]
[ 5417.276069] bnx2 0000:01:00.1 eth1: 12    15c900  f7f8   ef [0]
[ 5417.276081] bnx2 0000:01:00.1 eth1: 13    17cf00  d7d8   3f [0]
[ 5417.276093] bnx2 0000:01:00.1 eth1: 14    1ecf80  ffb0   b7 [0]
[ 5417.276107] bnx2 0000:01:00.1 eth1: 15    1cbd80  f3e8   bf [0]
[ 5417.276119] bnx2 0000:01:00.1 eth1: 16    179b80  d7f8   d7 [0]
[ 5417.276130] bnx2 0000:01:00.1 eth1: 17    1fdf00  f3e8   7e [0]
[ 5417.276141] bnx2 0000:01:00.1 eth1: 18    1f9780  b578   af [0]
[ 5417.276152] bnx2 0000:01:00.1 eth1: 19    1d7d80  fef0   ff [0]
[ 5417.276163] bnx2 0000:01:00.1 eth1: 1a    1d9e80  5fe8   d7 [0]
[ 5417.276174] bnx2 0000:01:00.1 eth1: 1b    1fff80  ebf8   f8 [0]
[ 5417.276186] bnx2 0000:01:00.1 eth1: 1c    1fbd80  f7d8   7f [0]
[ 5417.276200] bnx2 0000:01:00.1 eth1: 1d    16da80  2ef8   ff [0]
[ 5417.276211] bnx2 0000:01:00.1 eth1: 1e    1f9b80  bf50   8e [0]
[ 5417.276224] bnx2 0000:01:00.1 eth1: 1f    1bdf00  faf8   75 [0]
[ 5417.276231] bnx2 0000:01:00.1 eth1: <--- end TBDC dump --->
[ 5417.276246] bnx2 0000:01:00.1 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
[ 5417.276258] bnx2 0000:01:00.1 eth1: DEBUG: PCI_PM[19002008] PCI_MISC_CFG[92000088]
[ 5417.276269] bnx2 0000:01:00.1 eth1: DEBUG: EMAC_TX_STATUS[00000008] EMAC_RX_STATUS[00000000]
[ 5417.276280] bnx2 0000:01:00.1 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[ 5417.276288] bnx2 0000:01:00.1 eth1: DEBUG: HC_STATS_INTERRUPT_STATUS[01fb0004]
[ 5417.276298] bnx2 0000:01:00.1 eth1: DEBUG: PBA[00000000]
[ 5417.276304] bnx2 0000:01:00.1 eth1: <--- start MCP states dump --->
[ 5417.276314] bnx2 0000:01:00.1 eth1: DEBUG: MCP_STATE_P0[0003610e] MCP_STATE_P1[0003610e]
[ 5417.276326] bnx2 0000:01:00.1 eth1: DEBUG: MCP mode[0000b880] state[80000000] evt_mask[00000500]
[ 5417.276339] bnx2 0000:01:00.1 eth1: DEBUG: pc[0800d7b8] pc[08000cdc] instr[00041880]
[ 5417.276349] bnx2 0000:01:00.1 eth1: DEBUG: shmem states:
[ 5417.276358] bnx2 0000:01:00.1 eth1: DEBUG: drv_mb[0d000004] fw_mb[00000004] link_status[0000006f]
[ 5417.276369]  drv_pulse_mb[00001485]
[ 5417.276373] bnx2 0000:01:00.1 eth1: DEBUG: dev_info_signature[44564903] reset_type[01005254]
[ 5417.276383]  condition[0003610e]
[ 5417.276389] bnx2 0000:01:00.1 eth1: DEBUG: 000001c0: 01005254 42530000 0003610e 00000000
[ 5417.276402] bnx2 0000:01:00.1 eth1: DEBUG: 000003cc: 44444444 44444444 44444444 00000a28
[ 5417.276416] bnx2 0000:01:00.1 eth1: DEBUG: 000003dc: 0004ffff 00000000 00000000 00000000
[ 5417.276430] bnx2 0000:01:00.1 eth1: DEBUG: 000003ec: 00000000 00000000 00000000 00000000
[ 5417.276440] bnx2 0000:01:00.1 eth1: DEBUG: 0x3fc[0000ffff]


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 13:37:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 13:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9EHf-0000LU-IE; Fri, 31 Jan 2014 13:37:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W9EHe-0000LP-FQ
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 13:37:26 +0000
Received: from [193.109.254.147:62759] by server-4.bemta-14.messagelabs.com id
	57/2B-32066-517ABE25; Fri, 31 Jan 2014 13:37:25 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391175440!1154621!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32622 invoked from network); 31 Jan 2014 13:37:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 13:37:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="98478195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 13:37:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 08:37:19 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W9EHX-0002Im-3i;
	Fri, 31 Jan 2014 13:37:19 +0000
Message-ID: <1391175434.11034.2.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Christoph Egger <chegger@amazon.de>, Liu Jinsong <jinsong.liu@intel.com>
Date: Fri, 31 Jan 2014 13:37:14 +0000
In-Reply-To: <1390411039.32296.8.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping

On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
> From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
> From: Frediano Ziglio <frediano.ziglio@citrix.com>
> Date: Wed, 22 Jan 2014 10:48:50 +0000
> Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> These lines (in mctelem_reserve)
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After newhead has been read, another flow (a thread or a
> recursive invocation) can rewrite the whole list while leaving the head
> at the same value. oldhead then still matches *freelp, so the cmpxchg
> succeeds, yet the head being installed may point to any element, even
> one already in use (the classic ABA problem).
> 
> This patch instead uses a bit array and atomic bit operations.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
>  1 file changed, 30 insertions(+), 51 deletions(-)
> 
> Changes from v1:
> - Use a bitmap to allow any number of items to be used;
> - Use a single bitmap to simplify the reserve loop;
> - Remove the HOME flags, as they are no longer used.
> 
> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
> index 895ce1a..ed8e8d2 100644
> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -37,24 +37,19 @@ struct mctelem_ent {
>  	void *mcte_data;		/* corresponding data payload */
>  };
>  
> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>  
> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>  				MCTE_F_STATE_UNCOMMITTED | \
>  				MCTE_F_STATE_COMMITTED | \
>  				MCTE_F_STATE_PROCESSING)
>  
> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
> -
>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>  #define	MCTE_SET_CLASS(tep, new) do { \
>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
> @@ -69,6 +64,8 @@ struct mctelem_ent {
>  #define	MC_URGENT_NENT		10
>  #define	MC_NONURGENT_NENT	20
>  
> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
> +
>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>  
>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> @@ -77,11 +74,9 @@ struct mctelem_ent {
>  static struct mc_telem_ctl {
>  	/* Linked lists that thread the array members together.
>  	 *
> -	 * The free lists are singly-linked via mcte_next, and we allocate
> -	 * from them by atomically unlinking an element from the head.
> -	 * Consumed entries are returned to the head of the free list.
> -	 * When an entry is reserved off the free list it is not linked
> -	 * on any list until it is committed or dismissed.
> +	 * The free list is a bitmap in which a set bit means the
> +	 * entry is free.  Since the number of entries is small, it
> +	 * is easy to allocate them atomically this way.
>  	 *
>  	 * The committed list grows at the head and we do not maintain a
>  	 * tail pointer; insertions are performed atomically.  The head
> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>  	 * we can lock it for updates.  The head of the processing list
>  	 * always has the oldest telemetry, and we append (as above)
>  	 * at the tail of the processing list. */
> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>   */
>  static void mctelem_free(struct mctelem_ent *tep)
>  {
> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
> -	    MC_URGENT : MC_NONURGENT;
> -
>  	BUG_ON(tep->mcte_refcnt != 0);
>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>  
>  	tep->mcte_prev = NULL;
> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> +	tep->mcte_next = NULL;
> +
> +	/* set free in array */
> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>  }
>  
>  /* Increment the reference count of an entry that is not linked on to
> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>  	}
>  
>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
> -	    datasz)) == NULL) {
> +	    MC_NENT)) == NULL ||
> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>  		if (mctctl.mctc_elems)
>  			xfree(mctctl.mctc_elems);
>  		printk("Allocations for MCA telemetry failed\n");
>  		return;
>  	}
>  
> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> -		struct mctelem_ent *tep, **tepp;
> +	for (i = 0; i < MC_NENT; i++) {
> +		struct mctelem_ent *tep;
>  
>  		tep = mctctl.mctc_elems + i;
>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>  		tep->mcte_refcnt = 0;
>  		tep->mcte_data = datarr + i * datasz;
>  
> -		if (i < MC_URGENT_NENT) {
> -			tepp = &mctctl.mctc_free[MC_URGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> -		} else {
> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> -		}
> -
> -		tep->mcte_next = *tepp;
> +		__set_bit(i, mctctl.mctc_free);
> +		tep->mcte_next = NULL;
>  		tep->mcte_prev = NULL;
> -		*tepp = tep;
>  	}
>  }
>  
> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>  
>  /* Reserve a telemetry entry, or return NULL if none available.
>   * If we return an entry then the caller must subsequently call exactly one of
> - * mctelem_unreserve or mctelem_commit for that entry.
> + * mctelem_dismiss or mctelem_commit for that entry.
>   */
>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>  {
> -	struct mctelem_ent **freelp;
> -	struct mctelem_ent *oldhead, *newhead;
> -	mctelem_class_t target = (which == MC_URGENT) ?
> -	    MC_URGENT : MC_NONURGENT;
> +	unsigned bit;
> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>  
> -	freelp = &mctctl.mctc_free[target];
>  	for (;;) {
> -		if ((oldhead = *freelp) == NULL) {
> -			if (which == MC_URGENT && target == MC_URGENT) {
> -				/* raid the non-urgent freelist */
> -				target = MC_NONURGENT;
> -				freelp = &mctctl.mctc_free[target];
> -				continue;
> -			} else {
> -				mctelem_drop_count++;
> -				return (NULL);
> -			}
> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
> +
> +		if (bit >= MC_NENT) {
> +			mctelem_drop_count++;
> +			return (NULL);
>  		}
>  
> -		newhead = oldhead->mcte_next;
> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> -			struct mctelem_ent *tep = oldhead;
> +		/* try to allocate, atomically clear free bit */
> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
> +			/* return element we got */
> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>  
>  			mctelem_hold(tep);
>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 13:37:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 13:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9EHf-0000LU-IE; Fri, 31 Jan 2014 13:37:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1W9EHe-0000LP-FQ
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 13:37:26 +0000
Received: from [193.109.254.147:62759] by server-4.bemta-14.messagelabs.com id
	57/2B-32066-517ABE25; Fri, 31 Jan 2014 13:37:25 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391175440!1154621!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32622 invoked from network); 31 Jan 2014 13:37:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 13:37:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,757,1384300800"; d="scan'208";a="98478195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 13:37:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 08:37:19 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1W9EHX-0002Im-3i;
	Fri, 31 Jan 2014 13:37:19 +0000
Message-ID: <1391175434.11034.2.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Christoph Egger <chegger@amazon.de>, Liu Jinsong <jinsong.liu@intel.com>
Date: Fri, 31 Jan 2014 13:37:14 +0000
In-Reply-To: <1390411039.32296.8.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping

On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
> From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
> From: Frediano Ziglio <frediano.ziglio@citrix.com>
> Date: Wed, 22 Jan 2014 10:48:50 +0000
> Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> These lines (in mctelem_reserve)
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After you read the newhead pointer it can happen that another
> flow (thread or recursive invocation) change all the list but set head
> with same value. So oldhead is the same as *freelp but you are setting
> a new head that could point to whatever element (even already used).
> 
> This patch use instead a bit array and atomic bit operations.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
>  1 file changed, 30 insertions(+), 51 deletions(-)
> 
> Changes from v1:
> - Use bitmap to allow any number of items to be used;
> - Use a single bitmap to simplify reserve loop;
> - Remove HOME flags as was not used anymore.
> 
> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
> index 895ce1a..ed8e8d2 100644
> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -37,24 +37,19 @@ struct mctelem_ent {
>  	void *mcte_data;		/* corresponding data payload */
>  };
>  
> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>  
> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>  				MCTE_F_STATE_UNCOMMITTED | \
>  				MCTE_F_STATE_COMMITTED | \
>  				MCTE_F_STATE_PROCESSING)
>  
> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
> -
>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>  #define	MCTE_SET_CLASS(tep, new) do { \
>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
> @@ -69,6 +64,8 @@ struct mctelem_ent {
>  #define	MC_URGENT_NENT		10
>  #define	MC_NONURGENT_NENT	20
>  
> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
> +
>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>  
>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> @@ -77,11 +74,9 @@ struct mctelem_ent {
>  static struct mc_telem_ctl {
>  	/* Linked lists that thread the array members together.
>  	 *
> -	 * The free lists are singly-linked via mcte_next, and we allocate
> -	 * from them by atomically unlinking an element from the head.
> -	 * Consumed entries are returned to the head of the free list.
> -	 * When an entry is reserved off the free list it is not linked
> -	 * on any list until it is committed or dismissed.
> +	 * The free lists is a bit array where bit 1 means free.
> +	 * This as element number is quite small and is easy to
> +	 * atomically allocate that way.
>  	 *
>  	 * The committed list grows at the head and we do not maintain a
>  	 * tail pointer; insertions are performed atomically.  The head
> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>  	 * we can lock it for updates.  The head of the processing list
>  	 * always has the oldest telemetry, and we append (as above)
>  	 * at the tail of the processing list. */
> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>   */
>  static void mctelem_free(struct mctelem_ent *tep)
>  {
> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
> -	    MC_URGENT : MC_NONURGENT;
> -
>  	BUG_ON(tep->mcte_refcnt != 0);
>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>  
>  	tep->mcte_prev = NULL;
> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> +	tep->mcte_next = NULL;
> +
> +	/* set free in array */
> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>  }
>  
>  /* Increment the reference count of an entry that is not linked on to
> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>  	}
>  
>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
> -	    datasz)) == NULL) {
> +	    MC_NENT)) == NULL ||
> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>  		if (mctctl.mctc_elems)
>  			xfree(mctctl.mctc_elems);
>  		printk("Allocations for MCA telemetry failed\n");
>  		return;
>  	}
>  
> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> -		struct mctelem_ent *tep, **tepp;
> +	for (i = 0; i < MC_NENT; i++) {
> +		struct mctelem_ent *tep;
>  
>  		tep = mctctl.mctc_elems + i;
>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>  		tep->mcte_refcnt = 0;
>  		tep->mcte_data = datarr + i * datasz;
>  
> -		if (i < MC_URGENT_NENT) {
> -			tepp = &mctctl.mctc_free[MC_URGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> -		} else {
> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> -		}
> -
> -		tep->mcte_next = *tepp;
> +		__set_bit(i, mctctl.mctc_free);
> +		tep->mcte_next = NULL;
>  		tep->mcte_prev = NULL;
> -		*tepp = tep;
>  	}
>  }
>  
> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>  
>  /* Reserve a telemetry entry, or return NULL if none available.
>   * If we return an entry then the caller must subsequently call exactly one of
> - * mctelem_unreserve or mctelem_commit for that entry.
> + * mctelem_dismiss or mctelem_commit for that entry.
>   */
>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>  {
> -	struct mctelem_ent **freelp;
> -	struct mctelem_ent *oldhead, *newhead;
> -	mctelem_class_t target = (which == MC_URGENT) ?
> -	    MC_URGENT : MC_NONURGENT;
> +	unsigned bit;
> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>  
> -	freelp = &mctctl.mctc_free[target];
>  	for (;;) {
> -		if ((oldhead = *freelp) == NULL) {
> -			if (which == MC_URGENT && target == MC_URGENT) {
> -				/* raid the non-urgent freelist */
> -				target = MC_NONURGENT;
> -				freelp = &mctctl.mctc_free[target];
> -				continue;
> -			} else {
> -				mctelem_drop_count++;
> -				return (NULL);
> -			}
> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
> +
> +		if (bit >= MC_NENT) {
> +			mctelem_drop_count++;
> +			return (NULL);
>  		}
>  
> -		newhead = oldhead->mcte_next;
> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> -			struct mctelem_ent *tep = oldhead;
> +		/* try to allocate, atomically clear free bit */
> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
> +			/* return element we got */
> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>  
>  			mctelem_hold(tep);
>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
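
For readers unfamiliar with the lock-free pattern this hunk introduces, here is a rough, self-contained userspace sketch (entry counts are assumed values; `find_next_free()` and `test_and_clear()` are stand-ins for Xen's `find_next_bit()` and `test_and_clear_bit()`). Urgent callers scan from bit 0 and so fall through into the non-urgent range once the urgent slots are exhausted, while non-urgent callers start at MC_URGENT_NENT and never raid the urgent slots:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define MC_URGENT_NENT    8   /* assumed pool sizes, for illustration */
#define MC_NONURGENT_NENT 8
#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS  ((MC_NENT + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long mctc_free[BITMAP_LONGS];

/* find_next_bit() analogue: first set bit at index >= start, or MC_NENT */
static unsigned find_next_free(unsigned start)
{
    for (unsigned i = start; i < MC_NENT; i++)
        if (mctc_free[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
            return i;
    return MC_NENT;
}

/* test_and_clear_bit() analogue, using a GCC atomic builtin */
static bool test_and_clear(unsigned bit)
{
    unsigned long mask = 1UL << (bit % BITS_PER_LONG);
    unsigned long old = __atomic_fetch_and(&mctc_free[bit / BITS_PER_LONG],
                                           ~mask, __ATOMIC_SEQ_CST);
    return (old & mask) != 0;
}

/* mctelem_reserve() analogue: returns a slot index, or -1 if exhausted */
static int reserve(bool urgent)
{
    unsigned start = urgent ? 0 : MC_URGENT_NENT;
    for (;;) {
        unsigned bit = find_next_free(start);
        if (bit >= MC_NENT)
            return -1;              /* pool exhausted */
        if (test_and_clear(bit))
            return (int)bit;        /* won the race for this slot */
        /* lost a race with another CPU: rescan */
    }
}

static void release(unsigned bit)
{
    unsigned long mask = 1UL << (bit % BITS_PER_LONG);
    __atomic_fetch_or(&mctc_free[bit / BITS_PER_LONG], mask, __ATOMIC_SEQ_CST);
}

static void init_pool(void)   /* mctelem_init() analogue */
{
    for (unsigned i = 0; i < MC_NENT; i++)
        release(i);
}
```

Note how the single shared bitmap replaces the two per-class freelists: the "raid the non-urgent freelist" special case in the old code becomes a plain consequence of where the scan starts.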



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 13:41:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 13:41:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9EL1-0000eI-7a; Fri, 31 Jan 2014 13:40:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W9EL0-0000eC-7A
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 13:40:54 +0000
Received: from [193.109.254.147:18001] by server-16.bemta-14.messagelabs.com
	id 96/56-21945-5E7ABE25; Fri, 31 Jan 2014 13:40:53 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391175652!1150126!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16618 invoked from network); 31 Jan 2014 13:40:52 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-27.messagelabs.com with SMTP;
	31 Jan 2014 13:40:52 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 31 Jan 2014 05:36:40 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,757,1384329600"; d="scan'208";a="447652152"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 31 Jan 2014 05:40:39 -0800
Received: from fmsmsx111.amr.corp.intel.com (10.18.116.5) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 31 Jan 2014 05:40:39 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	fmsmsx111.amr.corp.intel.com (10.18.116.5) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 31 Jan 2014 05:40:39 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Fri, 31 Jan 2014 21:40:38 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: SRIOV fail to work with Xen 4.4+qemu-xen-dir
Thread-Index: Ac8eiXeegua24nfWS36jTHK1Isj9mw==
Date: Fri, 31 Jan 2014 13:40:37 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CC60C@SHSMSX104.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [Xen-devel] SRIOV fail to work with Xen 4.4+qemu-xen-dir
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

I have seen that SR-IOV (82576 or 82599) NICs fail to work with a Windows guest on Xen 4.4 + qemu-xen-dir. After the Windows 2k8 guest boots up, the NIC cannot get an IP address. I only see this issue with qemu-xen-dir; qemu-traditional works well. Has anyone seen the same issue?

BTW: I tried to bisect which qemu commit introduced the regression, but it seems the issue has existed since the VT-d patches were first added to upstream QEMU.

best regards
yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 14:03:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 14:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9EgD-0001HF-9I; Fri, 31 Jan 2014 14:02:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sgruszka@redhat.com>) id 1W9EgB-0001HA-Ho
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 14:02:47 +0000
Received: from [85.158.143.35:24741] by server-3.bemta-4.messagelabs.com id
	7B/1A-11539-60DABE25; Fri, 31 Jan 2014 14:02:46 +0000
X-Env-Sender: sgruszka@redhat.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391176965!2228577!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25868 invoked from network); 31 Jan 2014 14:02:46 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	31 Jan 2014 14:02:46 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s0VE2ff7015612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 09:02:42 -0500
Received: from localhost (vpn1-6-56.ams2.redhat.com [10.36.6.56])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s0VE2e1A025600; Fri, 31 Jan 2014 09:02:41 -0500
Date: Fri, 31 Jan 2014 15:04:51 +0100
From: Stanislaw Gruszka <sgruszka@redhat.com>
To: David Rientjes <rientjes@google.com>
Message-ID: <20140131140451.GB7648@redhat.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140129082521.GA1362@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 09:25:21AM +0100, Stanislaw Gruszka wrote:
> > Looks incomplete, what about the kzalloc() in 
> > xen_upload_processor_pm_data() and kcalloc()s in check_acpi_ids()?
> 
> Indeed, and additionally from check_acpi_ids() we call
> acpi_walk_namespace(), which also takes mutexes. Hence, unfortunately,
> making xen_upload_processor_pm_data() atomic is not easy, but it could
> possibly be done by saving some data in memory after initialization.
> 
> Or perhaps this problem can be solved differently: instead of using
> syscore_ops->resume(), use some other resume callback from the core,
> one that is allowed to sleep. That may require registering a dummy
> device or sysfs class, but maybe there are simpler solutions.

Alternatively, a work_struct could be used and scheduled from the
->resume() callback, provided there is no dependency between uploading
the processor PM data to the hypervisor and the other actions performed
after syscore_ops->resume().
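
As a rough, single-threaded userspace illustration of the deferral being suggested (all names here are invented for the sketch; the kernel equivalents would be DECLARE_WORK()/schedule_work(), with the workqueue running the handler in process context where kzalloc() and mutexes are allowed):

```c
#include <assert.h>
#include <stddef.h>

typedef void (*work_fn)(void);

/* toy workqueue: schedule_work() analogue just records the item */
#define MAX_WORK 8
static work_fn work_queue[MAX_WORK];
static size_t work_count;

static int uploads_done;

/* stand-in for xen_upload_processor_pm_data(): may sleep, take mutexes */
static void upload_pm_data(void)
{
    uploads_done++;
}

/* safe from atomic context: only enqueues, never sleeps */
static void schedule_work_item(work_fn fn)
{
    if (work_count < MAX_WORK)
        work_queue[work_count++] = fn;
}

/* analogue of syscore_ops->resume(): must not sleep, so just schedule */
static void xen_resume_callback(void)
{
    schedule_work_item(upload_pm_data);
}

/* workqueue thread analogue: runs later, in sleepable context */
static void run_pending_work(void)
{
    for (size_t i = 0; i < work_count; i++)
        work_queue[i]();
    work_count = 0;
}
```

The point of the pattern is that the resume path itself does nothing that can sleep; the allocation- and mutex-heavy upload only runs once the deferred work executes.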

Thanks
Stanislaw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:12:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Fkz-0002oe-0G; Fri, 31 Jan 2014 15:11:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W9Fkx-0002oZ-CC
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 15:11:47 +0000
Received: from [85.158.143.35:16293] by server-3.bemta-4.messagelabs.com id
	6E/7B-11539-23DBBE25; Fri, 31 Jan 2014 15:11:46 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391181104!2236499!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7176 invoked from network); 31 Jan 2014 15:11:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 15:11:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="96558018"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 15:11:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 10:11:43 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W9Fbo-0003aH-O8;
	Fri, 31 Jan 2014 15:02:20 +0000
Date: Fri, 31 Jan 2014 15:02:20 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20140131150219.GE1775@perard.uk.xensource.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CC60C@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CC60C@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] SRIOV fail to work with Xen 4.4+qemu-xen-dir
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 31, 2014 at 01:40:37PM +0000, Zhang, Yang Z wrote:
> Hi all
> 
> I have seen that SR-IOV (82576 or 82599) NICs fail to work with a
> Windows guest on Xen 4.4 + qemu-xen-dir. After the Windows 2k8 guest
> boots up, the NIC cannot get an IP address. I only see this issue with
> qemu-xen-dir; qemu-traditional works well. Has anyone seen the same
> issue?
> 
> BTW: I tried to bisect which qemu commit introduced the regression, but
> it seems the issue has existed since the VT-d patches were first added
> to upstream QEMU.

My guess is that if it does not work with Xen 4.3 (with qemu-xen), then
it never worked.

Would this be related to bug #22 (xl does not support specifying virtual
function for passthrough device) http://bugs.xenproject.org/xen/bug/22 ?

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:18:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Frm-00037f-2U; Fri, 31 Jan 2014 15:18:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W9Frk-00035t-BY
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 15:18:48 +0000
Received: from [85.158.137.68:44512] by server-10.bemta-3.messagelabs.com id
	61/02-07302-7DEBBE25; Fri, 31 Jan 2014 15:18:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391181524!12588703!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2162 invoked from network); 31 Jan 2014 15:18:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 15:18:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="98518137"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 15:18:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 10:18:17 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W9FrF-0004G9-4F;
	Fri, 31 Jan 2014 15:18:17 +0000
Date: Fri, 31 Jan 2014 15:18:10 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Will Deacon <will.deacon@arm.com>
In-Reply-To: <20140121180750.GO30706@mudshark.cambridge.arm.com>
Message-ID: <alpine.DEB.2.02.1401311517260.4373@kaball.uk.xensource.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140121180750.GO30706@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Catalin Marinas <Catalin.Marinas@arm.com>,
	"jaccon.bastiaansen@gmail.com" <jaccon.bastiaansen@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Will Deacon wrote:
> On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> > Remove !GENERIC_ATOMIC64 build dependency:
> > - introduce xen_atomic64_xchg
> > - use it to implement xchg_xen_ulong
> > 
> > Remove !CPU_V6 build dependency:
> > - introduce __cmpxchg8 and __cmpxchg16, compiled even when
> >   CONFIG_CPU_V6 is defined
> > - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: arnd@arndb.de
> > CC: linux@arm.linux.org.uk
> > CC: will.deacon@arm.com
> > CC: catalin.marinas@arm.com
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: linux-kernel@vger.kernel.org
> > CC: xen-devel@lists.xenproject.org
> 
>   Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> Cheers Stefano,

Do you think it is acceptable to have this in 3.14?
Or should we aim for 3.15?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Will Deacon wrote:
> On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> > Remove !GENERIC_ATOMIC64 build dependency:
> > - introduce xen_atomic64_xchg
> > - use it to implement xchg_xen_ulong
> > 
> > Remove !CPU_V6 build dependency:
> > - introduce __cmpxchg8 and __cmpxchg16, compiled even when
> >   CONFIG_CPU_V6 is defined
> > - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: arnd@arndb.de
> > CC: linux@arm.linux.org.uk
> > CC: will.deacon@arm.com
> > CC: catalin.marinas@arm.com
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: linux-kernel@vger.kernel.org
> > CC: xen-devel@lists.xenproject.org
> 
>   Reviewed-by: Will Deacon <will.deacon@arm.com>
> 
> Cheers Stefano,

Do you think it is acceptable to have this in 3.14?
Maybe we should aim at 3.15?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:19:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Fs7-0003CA-Gd; Fri, 31 Jan 2014 15:19:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9Fs6-0003C1-T7
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 15:19:11 +0000
Received: from [85.158.139.211:29144] by server-11.bemta-5.messagelabs.com id
	0D/AF-23886-EEEBBE25; Fri, 31 Jan 2014 15:19:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391181548!875863!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8737 invoked from network); 31 Jan 2014 15:19:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 15:19:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="98518539"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 15:19:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 10:19:06 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W9Fs2-000061-JK;
	Fri, 31 Jan 2014 15:19:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1W9Fs2-0003zX-DC;
	Fri, 31 Jan 2014 15:19:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21227.48873.879761.827214@mariner.uk.xensource.com>
Date: Fri, 31 Jan 2014 15:19:05 +0000
To: Jim Fehlig <jfehlig@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
In-Reply-To: <21227.37990.762802.574421@mariner.uk.xensource.com>
References: <52EB3C47.9050902@suse.com>
	<21227.37990.762802.574421@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: libvirt libxl timer handling issue"):
> Jim Fehlig writes ("libvirt libxl timer handling issue"):
> > On the bright side, I seem to have the fd event handling issues sorted out.
> 
> Good, I guess.  Let me look at your crash stacktrace and the libxl
> code in more detail...

I think this is due to libxl_event.c not clearing the ->func member of
its timeout structs when the timeout occurs.  TBH it's surprising that
this hasn't caused more trouble; I haven't been able to test this,
so I'm not sure.
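The bug pattern being described can be sketched in plain C (hypothetical names, not libxl's actual code): when a callback pointer doubles as the "still registered" flag for a one-shot timeout, it has to be cleared when the timeout fires, before the callback runs.

```c
#include <stddef.h>

/* Sketch of the pattern under discussion (hypothetical names, not the
 * real libxl_event.c code).  func doubles as the "still registered"
 * flag, so it must be cleared when the timeout occurs. */
struct xtimeout {
    void (*func)(struct xtimeout *to);   /* NULL <=> not registered */
};

static int fired;
static void on_timeout(struct xtimeout *to) { (void)to; fired++; }

static void timeout_occurred(struct xtimeout *to)
{
    void (*func)(struct xtimeout *) = to->func;
    to->func = NULL;   /* mark deregistered first: the missing step */
    if (func)
        func(to);      /* then invoke the callback */
}
```

With this ordering, teardown code that tests to->func sees a consistent state even after the timeout has fired.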

But please take a look at
  git://xenbits.xen.org/people/iwj/xen.git#wip.timeout-func0

The top two patches there are new; the rest is my fork fixup branch.
Note once again that I have compiled but NOT EXECUTED these two
patches.  But since I'm about to be out of touch travelling and then
at FOSDEM I thought I'd send this to you right away.

Sorry if this too turns out to be my fault...

Regards,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:23:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Fwc-0003R8-7o; Fri, 31 Jan 2014 15:23:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1W9Fwa-0003R3-VF
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 15:23:49 +0000
Received: from [85.158.139.211:31059] by server-11.bemta-5.messagelabs.com id
	3C/0B-23886-400CBE25; Fri, 31 Jan 2014 15:23:48 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391181825!883173!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23631 invoked from network); 31 Jan 2014 15:23:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 15:23:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="98519910"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 15:23:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 10:23:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1W9Fw7-0004Kk-AK;
	Fri, 31 Jan 2014 15:23:19 +0000
Date: Fri, 31 Jan 2014 15:23:13 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9C8354@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1401311522560.4373@kaball.uk.xensource.com>
References: <20131204195147.GA3833@pegasus.dumpdata.com>
	<20131205121632.GO10855@perard.uk.xensource.com>
	<20131206144935.GA3603@pegasus.dumpdata.com>
	<20131206153503.GS10855@perard.uk.xensource.com>
	<20131206160018.GC4419@zion.uk.xensource.com>
	<20131206160310.GD4419@zion.uk.xensource.com>
	<20131216150816.GA14122@phenom.dumpdata.com>
	<20131218144823.GB6081@perard.uk.xensource.com>
	<20140108194451.GA15956@phenom.dumpdata.com>
	<20140109145624.GD1696@perard.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C6DDB@SHSMSX104.ccr.corp.intel.com>
	<alpine.DEB.2.02.1401271210050.4373@kaball.uk.xensource.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9C8354@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] qemu-xen-dir + PCI passthrough = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 27 Jan 2014, Zhang, Yang Z wrote:
> Stefano Stabellini wrote on 2014-01-27:
> > On Sun, 26 Jan 2014, Zhang, Yang Z wrote:
> >> Anthony PERARD wrote on 2014-01-09:
> >>> On Wed, Jan 08, 2014 at 02:44:51PM -0500, Konrad Rzeszutek Wilk wrote:
> >>>> On Wed, Dec 18, 2013 at 02:48:24PM +0000, Anthony PERARD wrote:
> >>>>> On Mon, Dec 16, 2013 at 10:08:16AM -0500, Konrad Rzeszutek Wilk
> > wrote:
> >>>>>> On Fri, Dec 06, 2013 at 04:03:10PM +0000, Wei Liu wrote:
> >>>>>>> On Fri, Dec 06, 2013 at 04:00:18PM +0000, Wei Liu wrote:
> >>>>>>> [...]
> >>>>>>>>> Does Xen report something like:
> >>>>>>>>> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 46: 131329 > 131328
> >>>>>>>>> (XEN) memory.c:132:d0 Could not allocate order=0 extent: id=46 memflags=0 (62 of 64)
> >>>>>>>>> 
> >>>>>>>>> ?
> >>>>>>>>> 
> >>>>>>>>> (I tried to reproduce the issue by simply adding many
> >>>>>>>>> emulated e1000 NICs in QEMU :) )
> >>>>>>>>> 
> >>>>> 
> >>>>>> -bash-4.1# lspci -s 01:00.0 -v
> >>>>>> 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
> >>>>>>         Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
> >>>>>>         Flags: fast devsel, IRQ 16
> >>>>>>         Memory at fbc20000 (32-bit, non-prefetchable) [disabled] [size=128K]
> >>>>>>         Memory at fb800000 (32-bit, non-prefetchable) [disabled] [size=4M]
> >>>>>>         I/O ports at e020 [disabled] [size=32]
> >>>>>>         Memory at fbc44000 (32-bit, non-prefetchable) [disabled] [size=16K]
> >>>>>>         Expansion ROM at fb400000 [disabled] [size=4M]
> >>>>> 
> >>>>> BTW, I think this is the issue: the Expansion ROM. qemu-xen will
> >>>>> allocate memory for it; we may have to find another way.
> >>>>> qemu-trad does not seem to allocate memory, but I haven't got
> >>>>> very far in trying to check that.
> >>>> 
> >>>> And indeed that is the case. The "Fix" below fixes it.
> >>>> 
> >>>> 
> >>>> Based on that and this guest config:
> >>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> >>>> memory = 2048
> >>>> boot="d"
> >>>> maxvcpus=32
> >>>> vcpus=1
> >>>> serial='pty'
> >>>> vnclisten="0.0.0.0"
> >>>> name="latest"
> >>>> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
> >>>> pci = ["01:00.0"]
> >>>> 
> >>>> I can boot the guest.
> >>> 
> >>> And can you access the ROM from the guest ?
> >>> 
> >>> 
> >>> Also, I have another patch; it initializes the PCI ROM BAR like
> >>> any other BAR. In this case, if qemu is involved in an access to
> >>> the ROM, it will print an error, as is the case for the other BARs.
> >>> 
> >>> I tried to test it, but it was with an embedded VGA card. When I dump
> >>> the ROM, I got the same one as the emulated card instead of the ROM
> >>> from the device.
> >>> 
> >>> 
> >>> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> >>> index 6dd7a68..2bbdb6d 100644
> >>> --- a/hw/xen/xen_pt.c
> >>> +++ b/hw/xen/xen_pt.c
> >>> @@ -440,8 +440,8 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
> >>> 
> >>>          s->bases[PCI_ROM_SLOT].access.maddr = d->rom.base_addr;
> >>> -        memory_region_init_rom_device(&s->rom, OBJECT(s), NULL, NULL,
> >>> -                                      "xen-pci-pt-rom", d->rom.size);
> >>> +        memory_region_init_io(&s->rom, OBJECT(s), &ops, &s->dev,
> >>> +                              "xen-pci-pt-rom", d->rom.size);
> >>>          pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_MEM_PREFETCH,
> >>>                           &s->rom);
> >> 
> >> Hi, Anthony,
> >> 
> >> Is your fix the final solution for this issue? If so, will you
> >> push it before the Xen 4.4 release?
> > 
> > I included this patch in the last pull request I sent to Anthony Liguori:
> > 
> > http://marc.info/?l=qemu-devel&m=138997319906095
> > 
> > It hasn't been pulled yet, but I would expect that it is going to be
> > upstream soon.
> > Regarding the 4.4 release, we are trying to fix a couple of other
> > serious bugs in the qemu-xen tree right now, but it is still
> > conceivable to have this fix backported to the tree in time for the release.
> 
> I hope it can make the 4.4 release.

The fix is in qemu-xen now.
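For context, the `&ops` in the quoted patch refers to a MemoryRegionOps-style table of trap callbacks. The general pattern is sketched below as a standalone imitation (these types and names only mimic QEMU's API; they are not the real MemoryRegionOps): back the BAR with read/write callbacks instead of RAM, so any access that reaches qemu is reported.

```c
#include <stdint.h>
#include <stdio.h>

/* Standalone imitation of the trap-on-access pattern the quoted patch
 * switches to.  Not QEMU's real API: a real MemoryRegionOps has more
 * fields and different registration machinery. */
struct mmio_ops {
    uint64_t (*read)(void *opaque, uint64_t addr, unsigned size);
    void (*write)(void *opaque, uint64_t addr, uint64_t val, unsigned size);
};

static uint64_t rom_read(void *opaque, uint64_t addr, unsigned size)
{
    (void)opaque;
    /* Report the stray access, like the error print described above. */
    fprintf(stderr, "unexpected ROM read at 0x%llx size %u\n",
            (unsigned long long)addr, size);
    return ~(uint64_t)0;              /* all-ones, like an unbacked device */
}

static void rom_write(void *opaque, uint64_t addr, uint64_t val, unsigned size)
{
    (void)opaque; (void)val;
    fprintf(stderr, "unexpected ROM write at 0x%llx size %u\n",
            (unsigned long long)addr, size);
}

static const struct mmio_ops rom_ops = { rom_read, rom_write };
```

The design point is that no guest-visible memory is allocated for the ROM at all; qemu only ever sees (and logs) accesses it was never meant to handle.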

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:40:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:40:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GCQ-0003xb-Sm; Fri, 31 Jan 2014 15:40:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9GCP-0003xW-8Y
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 15:40:09 +0000
Received: from [85.158.137.68:3685] by server-10.bemta-3.messagelabs.com id
	F9/C8-07302-8D3CBE25; Fri, 31 Jan 2014 15:40:08 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391182805!12561205!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2208 invoked from network); 31 Jan 2014 15:40:07 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 15:40:07 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VFd0Ww017030
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 15:39:01 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0VFcxTY004263
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Jan 2014 15:39:00 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0VFcxXj004232; Fri, 31 Jan 2014 15:38:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 07:38:58 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8FF1C1BFA73; Fri, 31 Jan 2014 10:38:57 -0500 (EST)
Date: Fri, 31 Jan 2014 10:38:57 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20140131153857.GD22527@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.14-rc0-late-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.14-rc0-late-tag

which has bug-fixes for the new features that were added during this
cycle. There are also two fixes for long-standing issues: the grant-table
operations were doing extra work that was not needed, causing performance
issues, and the self-balloon code was too aggressive, causing OOMs.

The signed tag has a good description of what these fixes are.

Please pull!


 arch/arm/Kconfig                    |  1 +
 arch/arm/xen/enlighten.c            | 77 ++++++++++++++++++------------
 arch/x86/include/asm/xen/page.h     |  5 +-
 arch/x86/xen/grant-table.c          |  3 +-
 arch/x86/xen/p2m.c                  | 17 +------
 drivers/block/xen-blkback/blkback.c | 15 +++---
 drivers/xen/gntdev.c                | 13 +++--
 drivers/xen/grant-table.c           | 95 ++++++++++++++++++++++++++++++-------
 drivers/xen/swiotlb-xen.c           | 22 ++++++++-
 drivers/xen/xen-selfballoon.c       | 22 +++++++++
 include/xen/grant_table.h           | 10 ++--
 11 files changed, 197 insertions(+), 83 deletions(-)


Bob Liu (1):
      drivers: xen: deaggressive selfballoon driver

Dave Jones (1):
      xen/pvh: Fix misplaced kfree from xlated_setup_gnttab_pages

Ian Campbell (1):
      xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)

Julien Grall (2):
      arm/xen: Initialize event channels earlier
      xen/gnttab: Use phys_addr_t to describe the grant frame base address

Zoltan Kiss (1):
      xen/grant-table: Avoid m2p_override during mapping


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:41:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GDk-00041s-CR; Fri, 31 Jan 2014 15:41:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9GDj-00041m-33
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 15:41:31 +0000
Received: from [85.158.143.35:35374] by server-2.bemta-4.messagelabs.com id
	CA/B9-10891-A24CBE25; Fri, 31 Jan 2014 15:41:30 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391182888!2259436!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4884 invoked from network); 31 Jan 2014 15:41:29 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 15:41:29 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VFeP9M009622
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 15:40:26 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VFeOfc021285
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Jan 2014 15:40:25 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s0VFeNGU007910; Fri, 31 Jan 2014 15:40:24 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 07:40:23 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8A0481BFA73; Fri, 31 Jan 2014 10:40:21 -0500 (EST)
Date: Fri, 31 Jan 2014 10:40:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20140131154021.GE22527@phenom.dumpdata.com>
References: <20140131153857.GD22527@phenom.dumpdata.com>
MIME-Version: 1.0
In-Reply-To: <20140131153857.GD22527@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.14-rc0-late-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3511070241536860655=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3511070241536860655==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="mxv5cy4qt+RJ9ypb"
Content-Disposition: inline


--mxv5cy4qt+RJ9ypb
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Jan 31, 2014 at 10:38:57AM -0500, Konrad Rzeszutek Wilk wrote:
> Hey Linus,
>=20
> Please git pull the following tag:
>=20
>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-lin=
us-3.14-rc0-late-tag
>=20
> which has bug-fixes for the new features that were added during this
> cycle. There are also two fixes for long-standing issues for which we have
> a solution: grant-table operations extra work that was not needed causing
> performance issues and the self balloon code was too aggressive causing
> OOMs.
>=20
> The signed tag has a good description of what these fixes are.

And I fat-fingered the GPG sign key - so here is my signed response to the
original GIT PULL.

>=20
> Please pull!
>=20
>=20
>  arch/arm/Kconfig                    |  1 +
>  arch/arm/xen/enlighten.c            | 77 ++++++++++++++++++------------
>  arch/x86/include/asm/xen/page.h     |  5 +-
>  arch/x86/xen/grant-table.c          |  3 +-
>  arch/x86/xen/p2m.c                  | 17 +------
>  drivers/block/xen-blkback/blkback.c | 15 +++---
>  drivers/xen/gntdev.c                | 13 +++--
>  drivers/xen/grant-table.c           | 95 ++++++++++++++++++++++++++++++-=
------
>  drivers/xen/swiotlb-xen.c           | 22 ++++++++-
>  drivers/xen/xen-selfballoon.c       | 22 +++++++++
>  include/xen/grant_table.h           | 10 ++--
>  11 files changed, 197 insertions(+), 83 deletions(-)
>=20
>=20
> Bob Liu (1):
>       drivers: xen: deaggressive selfballoon driver
>=20
> Dave Jones (1):
>       xen/pvh: Fix misplaced kfree from xlated_setup_gnttab_pages
>=20
> Ian Campbell (1):
>       xen: swiotlb: handle sizeof(dma_addr_t) !=3D sizeof(phys_addr_t)
>=20
> Julien Grall (2):
>       arm/xen: Initialize event channels earlier
>       xen/gnttab: Use phys_addr_t to describe the grant frame base address
>=20
> Zoltan Kiss (1):
>       xen/grant-table: Avoid m2p_override during mapping
>=20

--mxv5cy4qt+RJ9ypb
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS68PhAAoJEFjIrFwIi8fJBhoIAMR2YtLNOVJ1uZYbiVy27lDe
6+mxjO4rpHm9rT6qhSsUsdND76/TvWPV9TKIxfFWWKrgDbMNjjOUpkMrp0UR5lcF
rKz+gfuxaX+ZZg887EnTpsnixAnV8UHT66wnKg7c1pXujiDtD+Nv68DIGbvyeKMj
Ad5RgYRYqRBKUmauSNw5DtArl5mFVw0ShEc7uvvqACefD7nQ79lg7R0iL3Xr7AGX
iGMrBlLiz3S+hLxwiGMCy+7pX/pJhfGTWMlyxrd19ZwRRrgYHFRDtRmwF5BhttVO
IlfHWlHZPwOWRur6NN3Q8Riho3vTDZWfj78mMXU/MENL2sn7tzsZw4T0OAu4Pu0=
=O4+L
-----END PGP SIGNATURE-----

--mxv5cy4qt+RJ9ypb--


--===============3511070241536860655==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3511070241536860655==--


From xen-devel-bounces@lists.xen.org Fri Jan 31 15:48:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GJt-0004Eo-9K; Fri, 31 Jan 2014 15:47:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1W9GJr-0004Ej-Sm
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 15:47:52 +0000
Received: from [85.158.137.68:57478] by server-8.bemta-3.messagelabs.com id
	A1/12-16039-6A5CBE25; Fri, 31 Jan 2014 15:47:50 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391183268!8963814!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9590 invoked from network); 31 Jan 2014 15:47:49 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 15:47:49 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Fri, 31 Jan 2014 08:47:38 -0700
Message-ID: <52EBC598.6030600@suse.com>
Date: Fri, 31 Jan 2014 08:47:36 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <52EB3C47.9050902@suse.com>	<21227.37990.762802.574421@mariner.uk.xensource.com>
	<21227.48873.879761.827214@mariner.uk.xensource.com>
In-Reply-To: <21227.48873.879761.827214@mariner.uk.xensource.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Ian Jackson writes ("Re: libvirt libxl timer handling issue"):
>   
>> Jim Fehlig writes ("libvirt libxl timer handling issue"):
>>     
>>> On the bright side, I seem to have the fd event handling issues sorted out.
>>>       
>> Good, I guess.  Let me look at your crash stacktrace and the libxl
>> code in more detail...
>>     
>
> I think this is due to libxl_event.c not clearing the ->func member of
> its timeout structs when the timeout occurs.  TBH it's surprising that
> this hasn't caused more trouble and I haven't been able to test this
> so I'm not sure.
>
> But please take a look at
>   git://xenbits.xen.org/people/iwj/xen.git#wip.timeout-func0
>
> The top two patches there are new; the rest is my fork fixup branch.
> Note once again that I have compiled but NOT EXECUTED these two
> patches.  But since I'm about to be out of touch travelling and then
> at FOSDEM I thought I'd send this to you right away.
>   

Ok, thanks. I'll give this a try in a bit.

> Sorry if this too turns out to be my fault...
>   

Well, I should spend some time becoming familiar with this part of libxl
so I can help fix issues too, instead of whining about them :).

Enjoy FOSDEM.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:49:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GLh-0004Vc-Q5; Fri, 31 Jan 2014 15:49:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9GLf-0004VT-Jl
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 15:49:43 +0000
Received: from [85.158.137.68:25507] by server-6.bemta-3.messagelabs.com id
	80/AC-09180-616CBE25; Fri, 31 Jan 2014 15:49:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391183380!11445348!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1107 invoked from network); 31 Jan 2014 15:49:41 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 15:49:41 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VFnc31021584
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 15:49:39 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VFnbPO017686
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Jan 2014 15:49:38 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VFnbMM017675; Fri, 31 Jan 2014 15:49:37 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 07:49:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 762101BFA73; Fri, 31 Jan 2014 10:49:36 -0500 (EST)
Date: Fri, 31 Jan 2014 10:49:36 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20140131154936.GA23648@phenom.dumpdata.com>
References: <52DECA4E.4080004@citrix.com>
	<20140124183144.GE15785@phenom.dumpdata.com>
	<52E64BDF.8040105@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E64BDF.8040105@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH cpuid feature flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Jan 27, 2014 at 01:06:55PM +0100, Roger Pau Monné wrote:
> On 24/01/14 19:31, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jan 21, 2014 at 08:28:14PM +0100, Roger Pau Monné wrote:
> >> Hello,
> >>
> >> While doing some benchmarks on PV/PVH/PVHVM, I've realized that the
> >> cpuid feature flags exposed to PVH guests are kind of strange; this is
> >> the output of the feature flags as seen by an HVM domain:
> >>
> >
> > What about a PV guest? I presume if you ran a NetBSD PV guest it would
> > give a format similar to this?
>
> I guess so; the feature flags reported by NetBSD PV will probably be the
> same as the ones reported by FreeBSD PVH (unless NetBSD PV also does
> some kind of pre-filtering).
>
> >
> >> Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
> >> Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
> >> AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
> >> AMD Features2=0x1<LAHF>
> >>
> >> And this is what a PVH domain sees when running on the same hardware:
> >>
> >> Features=0x1fc98b75<FPU,DE,TSC,MSR,PAE,CX8,APIC,SEP,CMOV,PAT,CLFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT>
> >> Features2=0x80982201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,HV>
> >> AMD Features=0x20100800<SYSCALL,NX,LM>
> >> AMD Features2=0x1<LAHF>
> >>
> >> I would expect the feature flags to be quite similar between an HVM
> >> domain and a PVH domain (since they both run inside of an HVM container).
> >> AFAIK, there's no reason to disable PSE, PGE, PSE36 and RDTSCP for PVH
> >> guests. Also, is there any reason why PVH guests have the ACPI, SS and
> >> CLFLUSH feature flags but not HVM?
> >
> > S5?
>
> SS - CPU cache supports self-snoop.
>
> Not sure if that should be enabled or not for PVH.

Not even sure what that means for a guest. I thought all CPUs do snooping
except when they go into C states?
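As a reference point for the listings above: the Features=0x... words are just
the raw CPUID leaf-1 EDX values, and each printed flag name corresponds to an
architectural bit position (per the Intel SDM). A minimal standalone sketch of
checking them; the macro and function names here are illustrative helpers, not
Xen or FreeBSD code:

```c
#include <stdint.h>

/* A few CPUID leaf-1 EDX bit positions (architectural, per the Intel SDM). */
#define EDX_PSE   (1u << 3)   /* page size extensions */
#define EDX_MTRR  (1u << 12)
#define EDX_PGE   (1u << 13)  /* global pages */
#define EDX_SS    (1u << 27)  /* self-snoop */

/* Does the printed Features=0x... word advertise the given bit? */
static inline int has_feature(uint32_t features, uint32_t bit)
{
    return (features & bit) != 0;
}
```

With the values quoted above, the HVM word 0x1783fbff has PSE and MTRR set but
not SS, while the PVH word 0x1fc98b75 has SS set but PSE and MTRR clear,
matching the two flag lists.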

>
> >
> > ACPI is enabled for PV I think, but Linux PV guests disable it
> > as there are no ACPI tables in PV mode:
> >
> >  429         if (!xen_initial_domain())
> >  430                 cpuid_leaf1_edx_mask &=
> >  431                         ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */
> >
> > CLFLUSH - no idea why it would be disabled.
> >
> >
> > The RDTSCP flag should be enabled. In the past I think it was related to
> > the 'timer=' option. We would either trap it, emulate it with a constant
> > value, or pass it through. It should be passing it through, but maybe there
> > is a bug?
> >
> > PSE, PGE, PSE36, PG1GB, etc, should all be exposed. Actually PG1GB
> > is not exposed because of another bug:
> > http://www.gossamer-threads.com/lists/xen/devel/313596
>
> Think so, now that we run inside of an HVM container we should be able
> to make use of all those.

Agreed.
>
> >
> >>
> >> Most (if not all) of this probably comes from the fact that we are
> >> reporting the same feature flags as pure PV guests, but I see no reason
> >> to do that for PVH guests, we should decide what's supported on PVH and
> >> set the feature flags accordingly.
> >
> > Right, and we should also have a nice policy. The problem is that we set/unset
> > the cpuid flags in the toolstack (in two places, depending on the
> > architecture) and also in the hypervisor.
>
> Yes, all this cpuid flag stuff seems like a mess to me; there are so
> many places where we enable or blacklist certain cpu flags that it makes me
> wonder if it would be saner to define a set of flags that an HVM
> container supports and then blacklist some of them if they are not
> actually implemented/usable on the specific kind of guest.

So, first go through the HVM list and then follow with the PV one?

>
> > Anyhow, these I know we disable:
> >
> >  425         cpuid_leaf1_edx_mask =
> >  426                 ~((1 << X86_FEATURE_MTRR) |  /* disable MTRR */
> >  427                   (1 << X86_FEATURE_ACC));   /* thermal monitoring */

> >
> > And I think your patch to the Xen hypervisor does it too - it clears
> > the MTRR flag by default now. The ACC flag is (if my memory is correct) for
> > the Pentium 4 and such - we can disable it as well.
> >
> >  433         cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));

> >
> > And this we definitely need to disable. The x2APIC flag should not
> > be exposed, as we want to use Xen's version of the APIC ops. And
> > if the x2APIC bit is enabled in Linux, there is some other code
> > (the NMI handler) that will use it without going through the APIC ops.
> > And blow up :-(
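The masking pattern in the quoted enlighten.c lines can be sketched standalone
like this. Bit positions are per the Intel SDM (MTRR = EDX bit 12, ACPI = EDX
bit 22, ACC/TM = EDX bit 29, x2APIC = ECX bit 21); the function and macro names
are hypothetical, not the kernel's:

```c
#include <stdint.h>

#define L1_EDX_MTRR   (1u << 12)
#define L1_EDX_ACPI   (1u << 22)
#define L1_EDX_ACC    (1u << 29)  /* thermal monitor */
#define L1_ECX_X2APIC (1u << 21)

/* Clear MTRR and ACC always, and ACPI for non-dom0 guests, the way the
 * quoted PV code builds cpuid_leaf1_edx_mask. */
static uint32_t pv_filter_edx(uint32_t edx, int is_dom0)
{
    uint32_t mask = ~(L1_EDX_MTRR | L1_EDX_ACC);
    if (!is_dom0)
        mask &= ~L1_EDX_ACPI;
    return edx & mask;
}

/* x2APIC is hidden unconditionally so the guest sticks to Xen's APIC ops. */
static uint32_t pv_filter_ecx(uint32_t ecx)
{
    return ecx & ~L1_ECX_X2APIC;
}
```

Note how dom0 keeps the ACPI bit (it must parse the real tables) while MTRR,
ACC and x2APIC are filtered for every PV guest.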
> >
> >
> > Then there is MWAIT. Actually we can put that on the side.
> > I know it is important for dom0, but since we don't have those
> > patches in yet, we can ignore it. However, the hypervisor
> > (pv_cpuid) disables it.
> >
> >
> > There is also the XSAVE business:
> >
> >  439         xsave_mask =
> >  440                 (1 << (X86_FEATURE_XSAVE % 32)) |
> >  441                 (1 << (X86_FEATURE_OSXSAVE % 32));
> >  443         /* Xen will set CR4.OSXSAVE if supported and not disabled by force */
> >  444         if ((cx & xsave_mask) != xsave_mask)
> >  445                 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */
> >
> > Which I am not clear about.
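What the quoted lines do is an all-or-nothing check: XSAVE (ECX bit 26) and
OSXSAVE (ECX bit 27) are only advertised as a pair, since Xen sets CR4.OSXSAVE
(and hence the OSXSAVE bit) only when XSAVE is actually usable. A standalone
sketch under that assumption; the helper name is hypothetical:

```c
#include <stdint.h>

#define ECX_XSAVE   (1u << 26)
#define ECX_OSXSAVE (1u << 27)

/* Advertise XSAVE/OSXSAVE only as a pair: if the leaf-1 ECX value does
 * not have both bits set, hide both from the guest. */
static uint32_t filter_xsave(uint32_t ecx)
{
    const uint32_t xsave_mask = ECX_XSAVE | ECX_OSXSAVE;

    if ((ecx & xsave_mask) != xsave_mask)
        ecx &= ~xsave_mask;
    return ecx;
}
```

So a guest either sees both flags (and can use xsetbv/xgetbv normally) or
neither, never a half-enabled state.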
> >
> >
> > It looks like, to make 'cpuid' output uniform in the hypervisor,
> > we need to somehow glue hvm_cpuid and pv_cpuid together with some
> > form of tables/lookups.
> >
> > And make sure that the same logic is reflected in
> > xc_cpuid_x86.c (toolstack).
>
> Agree. On a slightly different topic, why do we enable the APIC flag for
> PV(H) guests?
>
> We certainly have no APIC, which makes me wonder if we should disable it
> now and enable it once we have hardware APIC virtualization in place.
> This would allow PVH to either use the traditional PV style, or a
> hardware virtualized APIC if we enable it for PVH guests (and make the
> guest aware of it by turning the flag on).

I thought it was off for PV guests (except dom0). That is my recollection
from when 'perf' starts and realizes it can't sample the APIC.

But when it comes to dom0, it needs that flag - otherwise it won't even parse
the ACPI MADT tables, and you need to parse those to get INT_SRC_OVR right.
The reason you need that is that once the ACPI AML code kicks in, it has to
figure out which GSI is the ACPI SCI, and INT_SRC_OVR might have an override
(like pin 9 might be hard-wired to pin 20 instead of 9).

In other words, dom0 needs this to do its work.
>
> Roger.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 15:59:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 15:59:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GV6-0004ui-5h; Fri, 31 Jan 2014 15:59:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9GV5-0004ud-96
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 15:59:27 +0000
Received: from [193.109.254.147:32006] by server-14.bemta-14.messagelabs.com
	id 84/CF-29228-E58CBE25; Fri, 31 Jan 2014 15:59:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391183964!1194133!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20182 invoked from network); 31 Jan 2014 15:59:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 15:59:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="96578425"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 15:59:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 10:59:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9GV1-0000KK-7u;
	Fri, 31 Jan 2014 15:59:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9GV0-00039K-Dr;
	Fri, 31 Jan 2014 15:59:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24674-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 15:59:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24674: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24674 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24674/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24632 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24662
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install       fail pass in 24662
 test-amd64-i386-qemut-rhel6hvm-intel 7 redhat-install fail in 24662 pass in 24674
 test-amd64-amd64-xl-sedf      9 guest-start        fail in 24662 pass in 24674
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24662 pass in 24674
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail in 24662 pass in 24632
 test-amd64-amd64-xl           9 guest-start        fail in 24632 pass in 24674
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24632 pass in 24674

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24594-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail blocked in 24570
 test-amd64-i386-xl-win7-amd64  7 windows-install        fail like 24599-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24662 like 24570
 test-amd64-i386-xl-qemuu-win7-amd64 8 guest-saverestore fail in 24632 blocked in 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24662 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24632 never pass

version targeted for testing:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24674-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 15:59:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24674: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24674 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24674/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24632 REGR. vs. 24570

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24662
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install       fail pass in 24662
 test-amd64-i386-qemut-rhel6hvm-intel 7 redhat-install fail in 24662 pass in 24674
 test-amd64-amd64-xl-sedf      9 guest-start        fail in 24662 pass in 24674
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24662 pass in 24674
 test-amd64-i386-xl-winxpsp3-vcpus1 9 guest-localmigrate fail in 24662 pass in 24632
 test-amd64-amd64-xl           9 guest-start        fail in 24632 pass in 24674
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24632 pass in 24674

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install   fail like 24594-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail blocked in 24570
 test-amd64-i386-xl-win7-amd64  7 windows-install        fail like 24599-bisect
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24662 like 24570
 test-amd64-i386-xl-qemuu-win7-amd64 8 guest-saverestore fail in 24632 blocked in 24570

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24662 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24632 never pass

version targeted for testing:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391
baseline version:
 xen                  3c80caee183124b2a0d1f7e0dae062fd794d6321

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:01:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9GXO-0005QL-QA; Fri, 31 Jan 2014 16:01:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9GXN-0005QF-6F
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 16:01:49 +0000
Received: from [193.109.254.147:61999] by server-2.bemta-14.messagelabs.com id
	A3/09-01236-CE8CBE25; Fri, 31 Jan 2014 16:01:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391184106!1193615!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25517 invoked from network); 31 Jan 2014 16:01:47 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 16:01:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VG1gfR005649
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 16:01:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VG1gOp005891
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Jan 2014 16:01:42 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VG1gST005884; Fri, 31 Jan 2014 16:01:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 08:01:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5464D1BFA73; Fri, 31 Jan 2014 11:01:40 -0500 (EST)
Date: Fri, 31 Jan 2014 11:01:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stanislaw Gruszka <sgruszka@redhat.com>
Message-ID: <20140131160140.GC23648@phenom.dumpdata.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140129082521.GA1362@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 09:25:22AM +0100, Stanislaw Gruszka wrote:
> (Cc: correct Rafael email)
> 
> On Tue, Jan 28, 2014 at 09:24:57PM -0800, David Rientjes wrote:
> > On Tue, 28 Jan 2014, Konrad Rzeszutek Wilk wrote:
> > 
> > > diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
> > > index 7231859..7602229 100644
> > > --- a/drivers/xen/xen-acpi-processor.c
> > > +++ b/drivers/xen/xen-acpi-processor.c
> > > @@ -46,7 +46,7 @@ module_param_named(off, no_hypercall, int, 0400);
> > >   */
> > >  static unsigned int nr_acpi_bits;
> > >  /* Mutex to protect the acpi_ids_done - for CPU hotplug use. */
> > > -static DEFINE_MUTEX(acpi_ids_mutex);
> > > +static DEFINE_SPINLOCK(acpi_ids_lock);
> > >  /* Which ACPI ID we have processed from 'struct acpi_processor'. */
> > >  static unsigned long *acpi_ids_done;
> > >  /* Which ACPI ID exist in the SSDT/DSDT processor definitions. */
> > > @@ -68,7 +68,7 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
> > >  	int ret = 0;
> > >  
> > >  	dst_cx_states = kcalloc(_pr->power.count,
> > > -				sizeof(struct xen_processor_cx), GFP_KERNEL);
> > > +				sizeof(struct xen_processor_cx), GFP_ATOMIC);
> > >  	if (!dst_cx_states)
> > >  		return -ENOMEM;
> > >  
> > > @@ -149,7 +149,7 @@ xen_copy_pss_data(struct acpi_processor *_pr,
> > >  		     sizeof(struct acpi_processor_px));
> > >  
> > >  	dst_states = kcalloc(_pr->performance->state_count,
> > > -			     sizeof(struct xen_processor_px), GFP_KERNEL);
> > > +			     sizeof(struct xen_processor_px), GFP_ATOMIC);
> > >  	if (!dst_states)
> > >  		return ERR_PTR(-ENOMEM);
> > >  
> > > @@ -275,9 +275,9 @@ static int upload_pm_data(struct acpi_processor *_pr)
> > >  {
> > >  	int err = 0;
> > >  
> > > -	mutex_lock(&acpi_ids_mutex);
> > > +	spin_lock(&acpi_ids_lock);
> > >  	if (__test_and_set_bit(_pr->acpi_id, acpi_ids_done)) {
> > > -		mutex_unlock(&acpi_ids_mutex);
> > > +		spin_unlock(&acpi_ids_lock);
> > >  		return -EBUSY;
> > >  	}
> > >  	if (_pr->flags.power)
> > > @@ -286,7 +286,7 @@ static int upload_pm_data(struct acpi_processor *_pr)
> > >  	if (_pr->performance && _pr->performance->states)
> > >  		err |= push_pxx_to_hypervisor(_pr);
> > >  
> > > -	mutex_unlock(&acpi_ids_mutex);
> > > +	spin_unlock(&acpi_ids_lock);
> > >  	return err;
> > >  }
> > >  static unsigned int __init get_max_acpi_id(void)
> > 
> > Looks incomplete, what about the kzalloc() in 
> > xen_upload_processor_pm_data() and kcalloc()s in check_acpi_ids()?
> 
> Indeed, and additionally from check_acpi_ids() we call
> acpi_walk_namespace(), which also takes mutexes. Hence, unfortunately,
> making xen_upload_processor_pm_data() atomic is not easy, but it could
> possibly be done by saving some data in memory after initialization.
> 
> Or perhaps this problem can be solved differently, by not using 
> syscore_ops->resume(), but some other resume callback from the core, which
> allows sleeping. That may require registering a dummy device or sysfs
> class, but maybe there are simpler solutions.

Perhaps by using 'subsys_system_register' and sticking it there?

Again, untested and uncompiled:


diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index 7231859..a2524f6 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -27,7 +27,6 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/types.h>
-#include <linux/syscore_ops.h>
 #include <linux/acpi.h>
 #include <acpi/processor.h>
 #include <xen/xen.h>
@@ -495,13 +494,15 @@ static int xen_upload_processor_pm_data(void)
 	return rc;
 }
 
-static void xen_acpi_processor_resume(void)
+static int xen_acpi_processor_resume(struct device *dev)
 {
 	bitmap_zero(acpi_ids_done, nr_acpi_bits);
-	xen_upload_processor_pm_data();
+	return xen_upload_processor_pm_data();
 }
 
-static struct syscore_ops xap_syscore_ops = {
+static struct bus_type xap_bus = {
+	.name	= "xen-acpi-processor",
+	.dev_name = "xen-acpi-processor",
 	.resume	= xen_acpi_processor_resume,
 };
 
@@ -555,7 +556,7 @@ static int __init xen_acpi_processor_init(void)
 	if (rc)
 		goto err_unregister;
 
-	register_syscore_ops(&xap_syscore_ops);
+	subsys_system_register(&xap_bus);
 
 	return 0;
 err_unregister:
@@ -574,7 +575,7 @@ static void __exit xen_acpi_processor_exit(void)
 {
 	int i;
 
-	unregister_syscore_ops(&xap_syscore_ops);
+	bus_unregister(&xap_bus);
 	kfree(acpi_ids_done);
 	kfree(acpi_id_present);
 	kfree(acpi_id_cst_present);
> 
> Thanks
> Stanislaw 
>  

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:36:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9H4g-0006E2-Fx; Fri, 31 Jan 2014 16:36:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1W9H4e-0006Dx-Br
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 16:36:12 +0000
Received: from [85.158.139.211:39158] by server-1.bemta-5.messagelabs.com id
	CF/C0-12859-BF0DBE25; Fri, 31 Jan 2014 16:36:11 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391186168!899145!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5523 invoked from network); 31 Jan 2014 16:36:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 16:36:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,758,1384300800"; d="scan'208";a="98551015"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 16:36:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 11:36:00 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1W9H4S-0005Kl-45;
	Fri, 31 Jan 2014 16:36:00 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Xen Devel <xen-devel@lists.xen.org>
Date: Fri, 31 Jan 2014 16:35:47 +0000
Message-ID: <1391186147-15191-1-git-send-email-anthony.perard@citrix.com>
X-Mailer: git-send-email 1.8.5.3
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Yun Wang <bimingery@gmail.com>
Subject: [Xen-devel] [PATCH] libxl: Fix vcpu-set for PV guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

vcpu-set will try to use the HVM path (through QEMU) instead of the PV
path (through xenstore) for a PV guest, if there is a QEMU running for
this domain. This patch checks which kind of guest is running before
making any call.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Yun, does this patch fix the issue with your PV guest?


 tools/libxl/libxl.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..c4fe6af 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -4692,12 +4692,21 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
 {
     GC_INIT(ctx);
     int rc;
-    switch (libxl__device_model_version_running(gc, domid)) {
-    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
-        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
+    switch (libxl__domain_type(gc, domid)) {
+    case LIBXL_DOMAIN_TYPE_HVM:
+        switch (libxl__device_model_version_running(gc, domid)) {
+        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+            rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
+            break;
+        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+            rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
+            break;
+        default:
+            rc = ERROR_INVAL;
+        }
         break;
-    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
+    case LIBXL_DOMAIN_TYPE_PV:
+        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
         break;
     default:
         rc = ERROR_INVAL;
-- 
Anthony PERARD


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9HGi-0006iQ-Qt; Fri, 31 Jan 2014 16:48:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1W9HGh-0006iL-HU
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 16:48:39 +0000
Received: from [85.158.139.211:62010] by server-7.bemta-5.messagelabs.com id
	D4/FE-14867-6E3DBE25; Fri, 31 Jan 2014 16:48:38 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391186917!904490!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20619 invoked from network); 31 Jan 2014 16:48:38 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 16:48:38 -0000
Received: by mail-lb0-f177.google.com with SMTP id z5so3622129lbh.36
	for <xen-devel@lists.xen.org>; Fri, 31 Jan 2014 08:48:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:to:mime-version;
	bh=OPL+lmKiCERBEa8EYgsEy96hivhPNwzDFruhQmgnWFY=;
	b=cdVdAiRHBfoduEDZg3NbHmH0Apl/xgUtmErPRXacAVem4exGoWfjzBJZC+xRz3kOX1
	TXs2BfdRMuU4ZS/LRtlE2m8D1V1iWc04wyma6FeYRZ0cIpKSzB7NirIUqunh7PizWBSq
	uCgc/RowQRVaFPlEb/66UVeHX/rC1TcQEvD8SJsQRI39SmXLy7E7dJbWUaCsU5uWsPhZ
	zI4S6D3jPM945p3vuYa03RMbOh6MqZ9CzvKwMIx6IOV5NLgXyUG85N56sZ5p6YbBH7lV
	iNGHpbjH7bfO77xOtTgEbiWdYy8Ps6jb9lFjnZs+3RJp7o7xAKAJg1bavXB/4X5Ep8yD
	+xNw==
X-Received: by 10.112.236.3 with SMTP id uq3mr2504603lbc.14.1391186917265;
	Fri, 31 Jan 2014 08:48:37 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id z3sm14974343lag.10.2014.01.31.08.48.36
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 31 Jan 2014 08:48:36 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Fri, 31 Jan 2014 20:48:34 +0400
Message-Id: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Subject: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

I am trying to load a PV guest with:

virt-install --debug --paravirt --name cos6x64 --ram 800 --network bridge  --disk path=/dev/zvol/dsk/rpool/xen/cos01,driver=phy --location http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/ --nographics

Could you please help me debug this issue with PV creation:

POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: (1, 'Internal error', 'panic: xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages 0x1000+0x1040 [mmap, errno=6 (No such device or address)]')")

I want to find out what I have to check/update: the Xen sources or the illumos sources.

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:57:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9HOf-0006sa-R8; Fri, 31 Jan 2014 16:56:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9HOd-0006sV-Ir
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 16:56:52 +0000
Received: from [85.158.137.68:9071] by server-17.bemta-3.messagelabs.com id
	3D/2C-22569-2D5DBE25; Fri, 31 Jan 2014 16:56:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391187407!8979626!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20395 invoked from network); 31 Jan 2014 16:56:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 16:56:49 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VGukFW012485
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 16:56:47 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s0VGujP7016433
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 31 Jan 2014 16:56:45 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VGuiov025487; Fri, 31 Jan 2014 16:56:45 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 08:56:44 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id AAE261BFA73; Fri, 31 Jan 2014 11:56:43 -0500 (EST)
Date: Fri, 31 Jan 2014 11:56:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140131165643.GE23648@phenom.dumpdata.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 03:33:28PM +0000, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to see what people think how useful this
> would be.
> 
> It being based on switching page tables (along with the two page
> tables we have right now - one containing user mappings only, the
> other containing both kernel and user mappings - a third category
> gets added containing kernel mappings only; Linux would have such
> a thing readily available and hence presumably would require not
> too intrusive changes) of course makes clear that this would come
> with quite a bit of a performance cost. Furthermore the state
> management obviously requires a couple of extra instructions to be
> added into reasonably hot hypervisor code paths.
> 
> Hence before going further with this approach (for now I only got
> it to the point that an un-patched Linux is unaffected, i.e. I didn't
> code up the Linux side yet) I would be interested to hear people's
> opinions on whether the performance cost is worth it, or whether
> instead we should consider PVH the one and only route towards
> gaining that extra level of security.

Would we get this feature for 'free' if we do PVH? Meaning there
is not much to modify in the Linux kernel to make it work in PVH mode?

> 
> And if considering it worthwhile, comments on the actual
> implementation (including the notes at the top of the attached
> patch) would of course be welcome too.
> 
> Jan
> 

> x86: PV SMAP for 64-bit guests
> 
> TODO? Apart from TOGGLE_MODE(), should we enforce SMAP mode for other
>       implied-supervisor guest memory accesses?
> TODO? MMUEXT_SET_SMAP_MODE may better be replaced by a standalone
>       hypercall (with just a single parameter); maybe by extending
>       fpu_taskswitch.
> 
> Note that the new state isn't being saved/restored. That's mainly
> because a capable kernel, when migrated from an incapable hypervisor to
> a capable one, would likely want to take advantage of the capability,
> and hence would need to set up all state anyway. This also implies that
> a capable kernel ought to be prepared to get migrated to an incapable
> hypervisor (as the loss of functionality isn't essential, it just
> results in security getting weakened).
> 
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1149,6 +1149,7 @@ static void load_segments(struct vcpu *n
>              (unsigned long *)regs->rsp :
>              (unsigned long *)pv->kernel_sp;
>          unsigned long cs_and_mask, rflags;
> +        int smap_mode = -1;
>  
>          if ( is_pv_32on64_domain(n->domain) )
>          {
> @@ -1199,9 +1200,17 @@ static void load_segments(struct vcpu *n
>          }
>  
>          if ( !(n->arch.flags & TF_kernel_mode) )
> +        {
> +            n->arch.flags |= TF_smap_mode;
>              toggle_guest_mode(n);
> +        }
>          else
> +        {
>              regs->cs &= ~3;
> +            smap_mode = guest_smap_mode(n);
> +            if ( !set_smap_mode(n, 1) )
> +                smap_mode = -1;
> +        }
>  
>          /* CS longword also contains full evtchn_upcall_mask. */
>          cs_and_mask = (unsigned long)regs->cs |
> @@ -1210,6 +1219,11 @@ static void load_segments(struct vcpu *n
>          /* Fold upcall mask into RFLAGS.IF. */
>          rflags  = regs->rflags & ~X86_EFLAGS_IF;
>          rflags |= !vcpu_info(n, evtchn_upcall_mask) << 9;
> +        if ( smap_mode >= 0 )
> +        {
> +            rflags &= ~X86_EFLAGS_AC;
> +            rflags |= !smap_mode << 18;
> +        }
>  
>          if ( put_user(regs->ss,            rsp- 1) |
>               put_user(regs->rsp,           rsp- 2) |
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -827,7 +827,7 @@ long arch_do_domctl(
>                  evc->sysenter_callback_eip     =
>                      v->arch.pv_vcpu.sysenter_callback_eip;
>                  evc->sysenter_disables_events  =
> -                    v->arch.pv_vcpu.sysenter_disables_events;
> +                    !!(v->arch.pv_vcpu.sysenter_tbf & TBF_INTERRUPT);
>                  evc->syscall32_callback_cs     =
>                      v->arch.pv_vcpu.syscall32_callback_cs;
>                  evc->syscall32_callback_eip    =
> @@ -863,8 +863,9 @@ long arch_do_domctl(
>                      evc->sysenter_callback_cs;
>                  v->arch.pv_vcpu.sysenter_callback_eip     =
>                      evc->sysenter_callback_eip;
> -                v->arch.pv_vcpu.sysenter_disables_events  =
> -                    evc->sysenter_disables_events;
> +                v->arch.pv_vcpu.sysenter_tbf              = 0;
> +                if ( evc->sysenter_disables_events )
> +                    v->arch.pv_vcpu.sysenter_tbf |= TBF_INTERRUPT;
>                  fixup_guest_code_selector(d, evc->syscall32_callback_cs);
>                  v->arch.pv_vcpu.syscall32_callback_cs     =
>                      evc->syscall32_callback_cs;
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -488,7 +488,7 @@ void write_ptbase(struct vcpu *v)
>  /*
>   * Should be called after CR3 is updated.
>   * 
> - * Uses values found in vcpu->arch.(guest_table and guest_table_user), and
> + * Uses values found in vcpu->arch.guest_table{,_user,_kernel}, and
>   * for HVM guests, arch.monitor_table and hvm's guest CR3.
>   *
>   * Update ref counts to shadow tables appropriately.
> @@ -505,8 +505,10 @@ void update_cr3(struct vcpu *v)
>  
>      if ( !(v->arch.flags & TF_kernel_mode) )
>          cr3_mfn = pagetable_get_pfn(v->arch.guest_table_user);
> -    else
> +    else if ( !guest_smap_mode(v) )
>          cr3_mfn = pagetable_get_pfn(v->arch.guest_table);
> +    else
> +        cr3_mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
>  
>      make_cr3(v, cr3_mfn);
>  }
> @@ -2687,7 +2689,22 @@ int vcpu_destroy_pagetables(struct vcpu 
>                  rc = put_page_and_type_preemptible(page);
>          }
>          if ( !rc )
> +        {
>              v->arch.guest_table_user = pagetable_null();
> +
> +            /* Drop ref to guest_table_smap (from MMUEXT_NEW_SMAP_BASEPTR). */
> +            mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
> +            if ( mfn )
> +            {
> +                page = mfn_to_page(mfn);
> +                if ( paging_mode_refcounts(v->domain) )
> +                    put_page(page);
> +                else
> +                    rc = put_page_and_type_preemptible(page);
> +            }
> +        }
> +        if ( !rc )
> +            v->arch.pv_vcpu.guest_table_smap = pagetable_null();
>      }
>  
>      v->arch.cr3 = 0;
> @@ -3086,7 +3103,11 @@ long do_mmuext_op(
>              }
>              break;
>  
> -        case MMUEXT_NEW_USER_BASEPTR: {
> +        case MMUEXT_NEW_USER_BASEPTR:
> +        case MMUEXT_NEW_SMAP_BASEPTR: {
> +            pagetable_t *ppt = op.cmd == MMUEXT_NEW_USER_BASEPTR
> +                               ? &curr->arch.guest_table_user
> +                               : &curr->arch.pv_vcpu.guest_table_smap;
>              unsigned long old_mfn;
>  
>              if ( paging_mode_translate(current->domain) )
> @@ -3095,7 +3116,7 @@ long do_mmuext_op(
>                  break;
>              }
>  
> -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
> +            old_mfn = pagetable_get_pfn(*ppt);
>              /*
>               * This is particularly important when getting restarted after the
>               * previous attempt got preempted in the put-old-MFN phase.
> @@ -3124,7 +3145,7 @@ long do_mmuext_op(
>                  }
>              }
>  
> -            curr->arch.guest_table_user = pagetable_from_pfn(op.arg1.mfn);
> +            *ppt = pagetable_from_pfn(op.arg1.mfn);
>  
>              if ( old_mfn != 0 )
>              {
> @@ -3249,6 +3270,15 @@ long do_mmuext_op(
>              break;
>          }
>  
> +        case MMUEXT_SET_SMAP_MODE:
> +            if ( unlikely(is_pv_32bit_domain(d)) )
> +                rc = -ENOSYS, okay = 0;
> +            else if ( unlikely(op.arg1.val & ~1) )
> +                okay = 0;
> +            else if ( unlikely(!set_smap_mode(curr, op.arg1.val)) )
> +                rc = -EOPNOTSUPP, okay = 0;
> +            break;
> +
>          case MMUEXT_CLEAR_PAGE: {
>              struct page_info *page;
>  
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -451,6 +451,8 @@ static void do_guest_trap(
>  
>      if ( TI_GET_IF(ti) )
>          tb->flags |= TBF_INTERRUPT;
> +    if ( !TI_GET_AC(ti) )
> +        tb->flags |= TBF_SMAP;
>  
>      if ( unlikely(null_trap_bounce(v, tb)) )
>          gdprintk(XENLOG_WARNING, "Unhandled %s fault/trap [#%d] "
> @@ -1089,6 +1091,8 @@ struct trap_bounce *propagate_page_fault
>          tb->eip        = ti->address;
>          if ( TI_GET_IF(ti) )
>              tb->flags |= TBF_INTERRUPT;
> +        if ( !TI_GET_AC(ti) )
> +            tb->flags |= TBF_SMAP;
>          return tb;
>      }
>  
> @@ -1109,6 +1113,8 @@ struct trap_bounce *propagate_page_fault
>      tb->eip        = ti->address;
>      if ( TI_GET_IF(ti) )
>          tb->flags |= TBF_INTERRUPT;
> +    if ( !TI_GET_AC(ti) )
> +        tb->flags |= TBF_SMAP;
>      if ( unlikely(null_trap_bounce(v, tb)) )
>      {
>          printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> @@ -1598,23 +1604,21 @@ static int guest_io_okay(
>      unsigned int port, unsigned int bytes,
>      struct vcpu *v, struct cpu_user_regs *regs)
>  {
> -    /* If in user mode, switch to kernel mode just to read I/O bitmap. */
> -    int user_mode = !(v->arch.flags & TF_kernel_mode);
> -#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
> -
>      if ( !vm86_mode(regs) &&
>           (v->arch.pv_vcpu.iopl >= (guest_kernel_mode(v, regs) ? 1 : 3)) )
>          return 1;
>  
>      if ( v->arch.pv_vcpu.iobmp_limit > (port + bytes) )
>      {
> +        unsigned int mode;
>          union { uint8_t bytes[2]; uint16_t mask; } x;
>  
>          /*
>           * Grab permission bytes from guest space. Inaccessible bytes are
>           * read as 0xff (no access allowed).
> +         * If in user mode, switch to kernel mode just to read I/O bitmap.
>           */
> -        TOGGLE_MODE();
> +        TOGGLE_MODE(v, mode, 1);
>          switch ( __copy_from_guest_offset(x.bytes, v->arch.pv_vcpu.iobmp,
>                                            port>>3, 2) )
>          {
> @@ -1622,7 +1626,7 @@ static int guest_io_okay(
>          case 1:  x.bytes[1] = ~0;
>          case 0:  break;
>          }
> -        TOGGLE_MODE();
> +        TOGGLE_MODE(v, mode, 0);
>  
>          if ( (x.mask & (((1<<bytes)-1) << (port&7))) == 0 )
>              return 1;
> @@ -2188,7 +2192,7 @@ static int emulate_privileged_op(struct 
>          goto fail;
>      switch ( opcode )
>      {
> -    case 0x1: /* RDTSCP and XSETBV */
> +    case 0x1: /* RDTSCP, XSETBV, CLAC, and STAC */
>          switch ( insn_fetch(u8, code_base, eip, code_limit) )
>          {
>          case 0xf9: /* RDTSCP */
> @@ -2216,6 +2220,20 @@ static int emulate_privileged_op(struct 
>  
>              break;
>          }
> +        case 0xcb: /* STAC */
> +            if ( unlikely(!guest_kernel_mode(v, regs)) ||
> +                 unlikely(is_pv_32bit_vcpu(v)) ||
> +                 unlikely(!guest_smap_enabled(v)) )
> +                goto fail;
> +            set_smap_mode(v, 0);
> +            break;
> +        case 0xca: /* CLAC */
> +            if ( unlikely(!guest_kernel_mode(v, regs)) ||
> +                 unlikely(is_pv_32bit_vcpu(v)) ||
> +                 unlikely(!guest_smap_enabled(v)) )
> +                goto fail;
> +            set_smap_mode(v, 1);
> +            break;
>          default:
>              goto fail;
>          }
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -80,8 +80,7 @@ void __dummy__(void)
>             arch.pv_vcpu.sysenter_callback_eip);
>      OFFSET(VCPU_sysenter_sel, struct vcpu,
>             arch.pv_vcpu.sysenter_callback_cs);
> -    OFFSET(VCPU_sysenter_disables_events, struct vcpu,
> -           arch.pv_vcpu.sysenter_disables_events);
> +    OFFSET(VCPU_sysenter_tbf, struct vcpu, arch.pv_vcpu.sysenter_tbf);
>      OFFSET(VCPU_trap_ctxt, struct vcpu, arch.pv_vcpu.trap_ctxt);
>      OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
>      OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
> @@ -95,6 +94,7 @@ void __dummy__(void)
>      DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
>      DEFINE(_VGCF_failsafe_disables_events, _VGCF_failsafe_disables_events);
>      DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
> +    DEFINE(_VGCF_syscall_clac,             _VGCF_syscall_clac);
>      BLANK();
>  
>      OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
> --- a/xen/arch/x86/x86_64/compat/traps.c
> +++ b/xen/arch/x86/x86_64/compat/traps.c
> @@ -205,8 +205,8 @@ static long compat_register_guest_callba
>      case CALLBACKTYPE_sysenter:
>          v->arch.pv_vcpu.sysenter_callback_cs     = reg->address.cs;
>          v->arch.pv_vcpu.sysenter_callback_eip    = reg->address.eip;
> -        v->arch.pv_vcpu.sysenter_disables_events =
> -            (reg->flags & CALLBACKF_mask_events) != 0;
> +        v->arch.pv_vcpu.sysenter_tbf =
> +            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0);
>          break;
>  
>      case CALLBACKTYPE_nmi:
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -28,7 +28,12 @@ switch_to_kernel:
>          /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
>          btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
>          setc  %cl
> +        /* TB_flags |= VGCF_syscall_clac ? TBF_SMAP : 0 */
> +        btl   $_VGCF_syscall_clac,VCPU_guest_context_flags(%rbx)
> +        setc  %al
>          leal  (,%rcx,TBF_INTERRUPT),%ecx
> +        leal  (,%rax,TBF_SMAP),%eax
> +        orl   %eax,%ecx
>          movb  %cl,TRAPBOUNCE_flags(%rdx)
>          call  create_bounce_frame
>          andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
> @@ -87,7 +92,7 @@ failsafe_callback:
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>          movq  VCPU_failsafe_addr(%rbx),%rax
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> -        movb  $TBF_FAILSAFE,TRAPBOUNCE_flags(%rdx)
> +        movb  $TBF_FAILSAFE|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
>          bt    $_VGCF_failsafe_disables_events,VCPU_guest_context_flags(%rbx)
>          jnc   1f
>          orb   $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
> @@ -215,7 +220,7 @@ test_guest_events:
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>          movq  VCPU_event_addr(%rbx),%rax
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> -        movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
> +        movb  $TBF_INTERRUPT|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
>          call  create_bounce_frame
>          jmp   test_all_events
>  
> @@ -278,9 +283,8 @@ GLOBAL(sysenter_eflags_saved)
>          pushq $0
>          SAVE_VOLATILE TRAP_syscall
>          GET_CURRENT(%rbx)
> -        cmpb  $0,VCPU_sysenter_disables_events(%rbx)
> +        movzbl VCPU_sysenter_tbf(%rbx),%ecx
>          movq  VCPU_sysenter_addr(%rbx),%rax
> -        setne %cl
>          testl $X86_EFLAGS_NT,UREGS_eflags(%rsp)
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>  UNLIKELY_START(nz, sysenter_nt_set)
> @@ -290,7 +294,6 @@ UNLIKELY_START(nz, sysenter_nt_set)
>          xorl  %eax,%eax
>  UNLIKELY_END(sysenter_nt_set)
>          testq %rax,%rax
> -        leal  (,%rcx,TBF_INTERRUPT),%ecx
>  UNLIKELY_START(z, sysenter_gpf)
>          movq  VCPU_trap_ctxt(%rbx),%rsi
>          SAVE_PRESERVED
> @@ -299,7 +302,11 @@ UNLIKELY_START(z, sysenter_gpf)
>          movq  TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_eip(%rsi),%rax
>          testb $4,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
>          setnz %cl
> +        testb $8,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
> +        setnz %sil
>          leal  TBF_EXCEPTION|TBF_EXCEPTION_ERRCODE(,%rcx,TBF_INTERRUPT),%ecx
> +        leal  (,%rsi,TBF_SMAP),%esi
> +        orl   %esi,%ecx
>  UNLIKELY_END(sysenter_gpf)
>          movq  VCPU_domain(%rbx),%rdi
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> @@ -351,19 +358,38 @@ int80_slow_path:
>  /* On return only %rbx and %rdx are guaranteed non-clobbered.            */
>  create_bounce_frame:
>          ASSERT_INTERRUPTS_ENABLED
> -        testb $TF_kernel_mode,VCPU_thread_flags(%rbx)
> -        jnz   1f
> -        /* Push new frame at registered guest-OS stack base. */
> +        xorl  %esi,%esi
> +        testb $TBF_SMAP,TRAPBOUNCE_flags(%rdx)
> +        movl  VCPU_thread_flags(%rbx),%eax
> +        setnz %sil
> +        testb $TF_kernel_mode,%al
>          pushq %rdx
>          movq  %rbx,%rdi
> +        jnz   1f
> +        /* Push new frame at registered guest-OS stack base. */
> +        andl  $~TF_smap_mode,VCPU_thread_flags(%rbx)
> +        shll  $_TF_smap_mode,%esi
> +        orl   %esi,VCPU_thread_flags(%rbx)
>          call  toggle_guest_mode
> -        popq  %rdx
>          movq  VCPU_kernel_sp(%rbx),%rsi
> +        movl  $~0,%edi
>          jmp   2f
>  1:      /* In kernel context already: push new frame at existing %rsp. */
> -        movq  UREGS_rsp+8(%rsp),%rsi
> -        andb  $0xfc,UREGS_cs+8(%rsp)    # Indicate kernel context to guest.
> +        pushq %rax
> +        call  set_smap_mode
> +        test  %al,%al
> +        movl  $~0,%edi
> +        popq  %rax                      # old VCPU_thread_flags(%rbx)
> +UNLIKELY_START(nz, cbf_smap)
> +        movl  $~X86_EFLAGS_AC,%edi
> +        testb $TF_smap_mode,%al
> +        UNLIKELY_DONE(nz, cbf_smap)
> +        btsq  $18+32,%rdi               # LOG2(X86_EFLAGS_AC)+32
> +UNLIKELY_END(cbf_smap)
> +        movq  UREGS_rsp+2*8(%rsp),%rsi
> +        andl  $~3,UREGS_cs+2*8(%rsp)    # Indicate kernel context to guest.
>  2:      andq  $~0xf,%rsi                # Stack frames are 16-byte aligned.
> +        popq  %rdx
>          movq  $HYPERVISOR_VIRT_START,%rax
>          cmpq  %rax,%rsi
>          movq  $HYPERVISOR_VIRT_END+60,%rax
> @@ -394,7 +420,10 @@ __UNLIKELY_END(create_bounce_frame_bad_s
>          setz  %ch                       # %ch == !saved_upcall_mask
>          movl  UREGS_eflags+8(%rsp),%eax
>          andl  $~X86_EFLAGS_IF,%eax
> +        andl  %edi,%eax                 # Clear EFLAGS.AC if needed
> +        shrq  $32,%rdi
>          addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
> +        orl   %edi,%eax                 # Set EFLAGS.AC if needed
>          orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
>  .Lft5:  movq  %rax,16(%rsi)             # RFLAGS
>          movq  UREGS_rip+8(%rsp),%rax
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -153,9 +153,11 @@ void vcpu_show_registers(const struct vc
>  
>      crs[0] = v->arch.pv_vcpu.ctrlreg[0];
>      crs[2] = arch_get_cr2(v);
> -    crs[3] = pagetable_get_paddr(guest_kernel_mode(v, regs) ?
> +    crs[3] = pagetable_get_paddr(!guest_kernel_mode(v, regs) ?
> +                                 v->arch.guest_table_user :
> +                                 !guest_smap_enabled(v) || !guest_smap_mode(v) ?
>                                   v->arch.guest_table :
> -                                 v->arch.guest_table_user);
> +                                 v->arch.pv_vcpu.guest_table_smap);
>      crs[4] = v->arch.pv_vcpu.ctrlreg[4];
>  
>      _show_registers(regs, crs, CTXT_pv_guest, v);
> @@ -258,14 +260,19 @@ void toggle_guest_mode(struct vcpu *v)
>      if ( is_pv_32bit_vcpu(v) )
>          return;
>      v->arch.flags ^= TF_kernel_mode;
> +    if ( !guest_smap_enabled(v) )
> +        v->arch.flags &= ~TF_smap_mode;
>      asm volatile ( "swapgs" );
>      update_cr3(v);
>  #ifdef USER_MAPPINGS_ARE_GLOBAL
> -    /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> -    asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> -#else
> -    write_ptbase(v);
> +    if ( !(v->arch.flags & TF_kernel_mode) || !guest_smap_mode(v) )
> +    {
> +        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> +        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> +    }
> +    else
>  #endif
> +        write_ptbase(v);
>  
>      if ( !(v->arch.flags & TF_kernel_mode) )
>          return;
> @@ -280,6 +287,35 @@ void toggle_guest_mode(struct vcpu *v)
>          v->arch.pv_vcpu.pending_system_time.version = 0;
>  }
>  
> +bool_t set_smap_mode(struct vcpu *v, bool_t on)
> +{
> +    ASSERT(!is_pv_32bit_vcpu(v));
> +    ASSERT(v->arch.flags & TF_kernel_mode);
> +
> +    if ( !guest_smap_enabled(v) )
> +        return 0;
> +    if ( !on == !guest_smap_mode(v) )
> +        return 1;
> +
> +    if ( on )
> +        v->arch.flags |= TF_smap_mode;
> +    else
> +        v->arch.flags &= ~TF_smap_mode;
> +
> +    update_cr3(v);
> +#ifdef USER_MAPPINGS_ARE_GLOBAL
> +    if ( !guest_smap_mode(v) )
> +    {
> +        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> +        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> +    }
> +    else
> +#endif
> +        write_ptbase(v);
> +
> +    return 1;
> +}
> +
>  unsigned long do_iret(void)
>  {
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> @@ -305,6 +341,8 @@ unsigned long do_iret(void)
>          }
>          toggle_guest_mode(v);
>      }
> +    else if ( set_smap_mode(v, !(iret_saved.rflags & X86_EFLAGS_AC)) )
> +        iret_saved.rflags &= ~X86_EFLAGS_AC;
>  
>      regs->rip    = iret_saved.rip;
>      regs->cs     = iret_saved.cs | 3; /* force guest privilege */
> @@ -480,6 +518,10 @@ static long register_guest_callback(stru
>          else
>              clear_bit(_VGCF_syscall_disables_events,
>                        &v->arch.vgc_flags);
> +        if ( reg->flags & CALLBACKF_clac )
> +            set_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
> +        else
> +            clear_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
>          break;
>  
>      case CALLBACKTYPE_syscall32:
> @@ -490,8 +532,9 @@ static long register_guest_callback(stru
>  
>      case CALLBACKTYPE_sysenter:
>          v->arch.pv_vcpu.sysenter_callback_eip = reg->address;
> -        v->arch.pv_vcpu.sysenter_disables_events =
> -            !!(reg->flags & CALLBACKF_mask_events);
> +        v->arch.pv_vcpu.sysenter_tbf =
> +            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0) |
> +            (reg->flags & CALLBACKF_clac ? TBF_SMAP : 0);
>          break;
>  
>      case CALLBACKTYPE_nmi:
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -75,6 +75,8 @@ void mapcache_override_current(struct vc
>  
>  /* x86/64: toggle guest between kernel and user modes. */
>  void toggle_guest_mode(struct vcpu *);
> +/* x86/64: switch guest between SMAP and "normal" modes. */
> +bool_t set_smap_mode(struct vcpu *, bool_t);
>  
>  /*
>   * Initialise a hypercall-transfer page. The given pointer must be mapped
> @@ -354,13 +356,16 @@ struct pv_vcpu
>      unsigned short syscall32_callback_cs;
>      unsigned short sysenter_callback_cs;
>      bool_t syscall32_disables_events;
> -    bool_t sysenter_disables_events;
> +    u8 sysenter_tbf;
>  
>      /* Segment base addresses. */
>      unsigned long fs_base;
>      unsigned long gs_base_kernel;
>      unsigned long gs_base_user;
>  
> +    /* x86/64 kernel-only (SMAP) pagetable */
> +    pagetable_t guest_table_smap;
> +
>      /* Bounce information for propagating an exception to guest OS. */
>      struct trap_bounce trap_bounce;
>      struct trap_bounce int80_bounce;
> @@ -471,6 +476,10 @@ unsigned long pv_guest_cr4_fixup(const s
>      ((c) & ~(X86_CR4_PGE | X86_CR4_PSE | X86_CR4_TSD |      \
>               X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE))
>  
> +#define guest_smap_enabled(v) \
> +    (!pagetable_is_null((v)->arch.pv_vcpu.guest_table_smap))
> +#define guest_smap_mode(v) ((v)->arch.flags & TF_smap_mode)
> +
>  void domain_cpuid(struct domain *d,
>                    unsigned int  input,
>                    unsigned int  sub_input,
> --- a/xen/include/asm-x86/paging.h
> +++ b/xen/include/asm-x86/paging.h
> @@ -405,17 +405,34 @@ guest_get_eff_l1e(struct vcpu *v, unsign
>      paging_get_hostmode(v)->guest_get_eff_l1e(v, addr, eff_l1e);
>  }
>  
> +#define TOGGLE_MODE(v, m, in) do { \
> +    if ( in ) \
> +        (m) = (v)->arch.flags; \
> +    if ( (m) & TF_kernel_mode ) \
> +    { \
> +        set_smap_mode(v, (in) || ((m) & TF_smap_mode) ); \
> +        break; \
> +    } \
> +    if ( in ) \
> +        (v)->arch.flags |= TF_smap_mode; \
> +    else \
> +    { \
> +        (v)->arch.flags &= ~TF_smap_mode; \
> +        (v)->arch.flags |= (m) & TF_smap_mode; \
> +    } \
> +    toggle_guest_mode(v); \
> +} while ( 0 )
> +
>  /* Read the guest's l1e that maps this address, from the kernel-mode
>   * pagetables. */
>  static inline void
>  guest_get_eff_kern_l1e(struct vcpu *v, unsigned long addr, void *eff_l1e)
>  {
> -    int user_mode = !(v->arch.flags & TF_kernel_mode);
> -#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
> +    unsigned int mode;
>  
> -    TOGGLE_MODE();
> +    TOGGLE_MODE(v, mode, 1);
>      guest_get_eff_l1e(v, addr, eff_l1e);
> -    TOGGLE_MODE();
> +    TOGGLE_MODE(v, mode, 0);
>  }
>  
>  
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -125,12 +125,15 @@
>  /* 'trap_bounce' flags values */
>  #define TBF_EXCEPTION          1
>  #define TBF_EXCEPTION_ERRCODE  2
> +#define TBF_SMAP               4
>  #define TBF_INTERRUPT          8
>  #define TBF_FAILSAFE          16
>  
>  /* 'arch_vcpu' flags values */
>  #define _TF_kernel_mode        0
>  #define TF_kernel_mode         (1<<_TF_kernel_mode)
> +#define _TF_smap_mode          1
> +#define TF_smap_mode           (1<<_TF_smap_mode)
>  
>  /* #PF error code values. */
>  #define PFEC_page_present   (1U<<0)
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -138,6 +138,7 @@ typedef unsigned long xen_ulong_t;
>   */
>  #define TI_GET_DPL(_ti)      ((_ti)->flags & 3)
>  #define TI_GET_IF(_ti)       ((_ti)->flags & 4)
> +#define TI_GET_AC(_ti)       ((_ti)->flags & 8)
>  #define TI_SET_DPL(_ti,_dpl) ((_ti)->flags |= (_dpl))
>  #define TI_SET_IF(_ti,_if)   ((_ti)->flags |= ((!!(_if))<<2))
>  struct trap_info {
> @@ -179,6 +180,8 @@ struct vcpu_guest_context {
>  #define VGCF_syscall_disables_events   (1<<_VGCF_syscall_disables_events)
>  #define _VGCF_online                   5
>  #define VGCF_online                    (1<<_VGCF_online)
> +#define _VGCF_syscall_clac             6
> +#define VGCF_syscall_clac              (1<<_VGCF_syscall_clac)
>      unsigned long flags;                    /* VGCF_* flags                 */
>      struct cpu_user_regs user_regs;         /* User-level CPU registers     */
>      struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
> --- a/xen/include/public/callback.h
> +++ b/xen/include/public/callback.h
> @@ -76,6 +76,13 @@
>   */
>  #define _CALLBACKF_mask_events             0
>  #define CALLBACKF_mask_events              (1U << _CALLBACKF_mask_events)
> +/*
> + * Effect CLAC upon callback entry? This flag is ignored for event,
> + * failsafe, and NMI callbacks: for those, user space gets hidden
> + * unconditionally if the respective functionality was enabled by the kernel.
> + */
> +#define _CALLBACKF_clac                    1
> +#define CALLBACKF_clac                     (1U << _CALLBACKF_clac)
>  
>  /*
>   * Register a callback.
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -341,6 +341,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>   * mfn: Machine frame number of new page-table base to install in MMU
>   *      when in user space.
>   *
> + * cmd: MMUEXT_NEW_SMAP_BASEPTR [x86/64 only]
> + * mfn: Machine frame number of new page-table base to install in MMU
> + *      when in kernel-only (SMAP) mode.
> + *
>   * cmd: MMUEXT_TLB_FLUSH_LOCAL
>   * No additional arguments. Flushes local TLB.
>   *
> @@ -371,6 +375,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>   * linear_addr: Linear address of LDT base (NB. must be page-aligned).
>   * nr_ents: Number of entries in LDT.
>   *
> + * cmd: MMUEXT_SET_SMAP_MODE
> + * val: 0 - disable, 1 - enable (other values reserved)
> + *
>   * cmd: MMUEXT_CLEAR_PAGE
>   * mfn: Machine frame number to be cleared.
>   *
> @@ -402,17 +409,21 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>  #define MMUEXT_FLUSH_CACHE_GLOBAL 18
>  #define MMUEXT_MARK_SUPER       19
>  #define MMUEXT_UNMARK_SUPER     20
> +#define MMUEXT_NEW_SMAP_BASEPTR 21
> +#define MMUEXT_SET_SMAP_MODE    22
>  /* ` } */
>  
>  #ifndef __ASSEMBLY__
>  struct mmuext_op {
>      unsigned int cmd; /* => enum mmuext_cmd */
>      union {
> -        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR
> +        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR, NEW_SMAP_BASEPTR
>           * CLEAR_PAGE, COPY_PAGE, [UN]MARK_SUPER */
>          xen_pfn_t     mfn;
>          /* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
>          unsigned long linear_addr;
> +        /* SET_SMAP_MODE */
> +        unsigned int  val;
>      } arg1;
>      union {
>          /* SET_LDT */

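The create_bounce_frame changes in the hunks above pack both possible EFLAGS.AC adjustments into a single 64-bit mask in %rdi: the low 32 bits get ANDed into the saved EFLAGS (clearing AC whenever guest SMAP is enabled), and the high 32 bits get ORed in (re-setting AC when the guest was not in SMAP mode, since in this PV scheme AC=1 means SMAP off). A C sketch of that encoding, under the editor's reading of the assembly (function names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define X86_EFLAGS_AC (1u << 18)

/*
 * Build the combined mask the way the patched assembly does:
 * - default (guest SMAP not enabled): low half all-ones, high half zero,
 *   i.e. the fixup is a no-op;
 * - guest SMAP enabled: clear AC via the low half, and re-set it via the
 *   high half if the guest was not in SMAP mode before the bounce.
 */
static uint64_t ac_fixup_mask(bool smap_enabled, bool was_smap_mode)
{
    uint64_t m = 0xffffffffu;               /* movl $~0,%edi */
    if ( smap_enabled )
    {
        m = (uint32_t)~X86_EFLAGS_AC;       /* clear AC by default */
        if ( !was_smap_mode )               /* btsq $18+32,%rdi */
            m |= (uint64_t)X86_EFLAGS_AC << 32;
    }
    return m;
}

/* Apply the mask to the EFLAGS image pushed into the bounce frame. */
static uint32_t apply_ac_fixup(uint32_t eflags, uint64_t mask)
{
    eflags &= (uint32_t)mask;               /* andl %edi,%eax */
    eflags |= (uint32_t)(mask >> 32);       /* shrq $32 + orl %edi,%eax */
    return eflags;
}
```

This keeps the hot path to one AND and one OR regardless of which of the three outcomes (no change, clear AC, set AC) applies.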
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Jan 31 16:57:03 2014
Date: Fri, 31 Jan 2014 11:56:43 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
Message-ID: <20140131165643.GE23648@phenom.dumpdata.com>
In-Reply-To: <52E92D580200007800117FC1@nat28.tlf.novell.com>

On Wed, Jan 29, 2014 at 03:33:28PM +0000, Jan Beulich wrote:
> Considering that SMAP (and SMEP) aren't usable for 64-bit PV guests
> (due to them running in ring 3), I drafted a mostly equivalent PV
> solution, at this point mainly to gauge how useful people think this
> would be.
> 
> Being based on switching page tables, the approach of course comes
> with quite a bit of a performance cost: alongside the two page
> tables we have right now - one containing user mappings only, the
> other containing both kernel and user mappings - a third category
> gets added, containing kernel mappings only. (Linux has such a
> table readily available, and hence would presumably require not
> too intrusive changes.) Furthermore the state management obviously
> requires a couple of extra instructions in reasonably hot
> hypervisor code paths.
> 
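To make the three-way selection concrete, here is a minimal C sketch (an editor's illustration, not Xen code; the type and function names are hypothetical) of how a CR3 value gets picked among the three page-table categories just described, mirroring the update_cr3() change in the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the three page-table bases. */
typedef struct {
    uint64_t user_only;    /* existing: user mappings only            */
    uint64_t kernel_user;  /* existing: kernel and user mappings      */
    uint64_t kernel_only;  /* proposed: kernel-only (SMAP) mappings   */
} guest_tables;

/*
 * Mirrors the selection logic the patch adds to update_cr3():
 * user mode keeps the user table; kernel mode uses the kernel-only
 * table while "SMAP mode" is active, else the combined table.
 */
static uint64_t pick_cr3(const guest_tables *t,
                         bool kernel_mode, bool smap_mode)
{
    if ( !kernel_mode )
        return t->user_only;
    return smap_mode ? t->kernel_only : t->kernel_user;
}
```

While the guest kernel runs in SMAP mode, user mappings simply are not present in the active page table, which is what makes the accidental-user-access protection work without ring-0 SMAP hardware support.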
> Hence before going further with this approach (for now I have only
> got it to the point that an unpatched Linux is unaffected, i.e. I
> didn't code up the Linux side yet), I would be interested to hear
> people's opinions on whether the performance cost is worth it, or
> whether instead we should consider PVH the one and only route
> towards gaining that extra level of security.

Would we get this feature for 'free' if we use PVH? That is, would
there be not much to modify in the Linux kernel to make it work in
PVH mode?

> 
> And if considering it worthwhile, comments on the actual
> implementation (including the notes at the top of the attached
> patch) would of course be welcome too.
> 
> Jan
> 

> x86: PV SMAP for 64-bit guests
> 
> TODO? Apart from TOGGLE_MODE(), should we enforce SMAP mode for other
>       implied-supervisor guest memory accesses?
> TODO? MMUEXT_SET_SMAP_MODE may better be replaced by a standalone
>       hypercall (with just a single parameter); maybe by extending
>       fpu_taskswitch.
> 
> Note that the new state isn't being saved/restored. That's mainly
> because a capable kernel, when migrated from an incapable hypervisor to
> a capable one, would likely want to take advantage of the capability,
> and hence would need to set up all state anyway. This also implies that
> a capable kernel ought to be prepared to get migrated to an incapable
> hypervisor (as the loss of functionality isn't essential, it just
> results in security getting weakened).
> 
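Since the Linux side is not written yet (per the note above), the guest-side use of the proposed interface can only be sketched. A hedged illustration: a kernel would register its kernel-only page table and then toggle the mode through two mmuext_op requests. The struct below is a trimmed, hypothetical stand-in for the public-header layout, and the actual HYPERVISOR_mmuext_op invocation is elided:

```c
#include <assert.h>
#include <stdint.h>

/* Command numbers as proposed in the patch's public/xen.h change. */
#define MMUEXT_NEW_SMAP_BASEPTR 21
#define MMUEXT_SET_SMAP_MODE    22

/* Trimmed stand-in for struct mmuext_op (illustration only). */
struct mmuext_op_sketch {
    unsigned int cmd;
    union {
        uint64_t     mfn;   /* NEW_SMAP_BASEPTR: kernel-only table base */
        unsigned int val;   /* SET_SMAP_MODE: 0 disable, 1 enable      */
    } arg1;
};

/* Build the request registering a kernel-only page table. */
static struct mmuext_op_sketch new_smap_baseptr(uint64_t mfn)
{
    struct mmuext_op_sketch op = { .cmd = MMUEXT_NEW_SMAP_BASEPTR };
    op.arg1.mfn = mfn;
    return op;
}

/* Build the request enabling (1) or disabling (0) SMAP mode. */
static struct mmuext_op_sketch set_smap_mode_op(unsigned int on)
{
    struct mmuext_op_sketch op = { .cmd = MMUEXT_SET_SMAP_MODE };
    op.arg1.val = on & 1;   /* other values are reserved */
    return op;
}
```

A guest would issue the first request once per vCPU (analogously to MMUEXT_NEW_USER_BASEPTR today) and the second around code paths that legitimately need user-space access, much as a native kernel brackets such code with STAC/CLAC.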
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1149,6 +1149,7 @@ static void load_segments(struct vcpu *n
>              (unsigned long *)regs->rsp :
>              (unsigned long *)pv->kernel_sp;
>          unsigned long cs_and_mask, rflags;
> +        int smap_mode = -1;
>  
>          if ( is_pv_32on64_domain(n->domain) )
>          {
> @@ -1199,9 +1200,17 @@ static void load_segments(struct vcpu *n
>          }
>  
>          if ( !(n->arch.flags & TF_kernel_mode) )
> +        {
> +            n->arch.flags |= TF_smap_mode;
>              toggle_guest_mode(n);
> +        }
>          else
> +        {
>              regs->cs &= ~3;
> +            smap_mode = guest_smap_mode(n);
> +            if ( !set_smap_mode(n, 1) )
> +                smap_mode = -1;
> +        }
>  
>          /* CS longword also contains full evtchn_upcall_mask. */
>          cs_and_mask = (unsigned long)regs->cs |
> @@ -1210,6 +1219,11 @@ static void load_segments(struct vcpu *n
>          /* Fold upcall mask into RFLAGS.IF. */
>          rflags  = regs->rflags & ~X86_EFLAGS_IF;
>          rflags |= !vcpu_info(n, evtchn_upcall_mask) << 9;
> +        if ( smap_mode >= 0 )
> +        {
> +            rflags &= ~X86_EFLAGS_AC;
> +            rflags |= !smap_mode << 18;
> +        }
>  
>          if ( put_user(regs->ss,            rsp- 1) |
>               put_user(regs->rsp,           rsp- 2) |
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -827,7 +827,7 @@ long arch_do_domctl(
>                  evc->sysenter_callback_eip     =
>                      v->arch.pv_vcpu.sysenter_callback_eip;
>                  evc->sysenter_disables_events  =
> -                    v->arch.pv_vcpu.sysenter_disables_events;
> +                    !!(v->arch.pv_vcpu.sysenter_tbf & TBF_INTERRUPT);
>                  evc->syscall32_callback_cs     =
>                      v->arch.pv_vcpu.syscall32_callback_cs;
>                  evc->syscall32_callback_eip    =
> @@ -863,8 +863,9 @@ long arch_do_domctl(
>                      evc->sysenter_callback_cs;
>                  v->arch.pv_vcpu.sysenter_callback_eip     =
>                      evc->sysenter_callback_eip;
> -                v->arch.pv_vcpu.sysenter_disables_events  =
> -                    evc->sysenter_disables_events;
> +                v->arch.pv_vcpu.sysenter_tbf              = 0;
> +                if ( evc->sysenter_disables_events )
> +                    v->arch.pv_vcpu.sysenter_tbf |= TBF_INTERRUPT;
>                  fixup_guest_code_selector(d, evc->syscall32_callback_cs);
>                  v->arch.pv_vcpu.syscall32_callback_cs     =
>                      evc->syscall32_callback_cs;
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -488,7 +488,7 @@ void write_ptbase(struct vcpu *v)
>  /*
>   * Should be called after CR3 is updated.
>   * 
> - * Uses values found in vcpu->arch.(guest_table and guest_table_user), and
> + * Uses values found in vcpu->arch.guest_table{,_user,_kernel}, and
>   * for HVM guests, arch.monitor_table and hvm's guest CR3.
>   *
>   * Update ref counts to shadow tables appropriately.
> @@ -505,8 +505,10 @@ void update_cr3(struct vcpu *v)
>  
>      if ( !(v->arch.flags & TF_kernel_mode) )
>          cr3_mfn = pagetable_get_pfn(v->arch.guest_table_user);
> -    else
> +    else if ( !guest_smap_mode(v) )
>          cr3_mfn = pagetable_get_pfn(v->arch.guest_table);
> +    else
> +        cr3_mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
>  
>      make_cr3(v, cr3_mfn);
>  }
> @@ -2687,7 +2689,22 @@ int vcpu_destroy_pagetables(struct vcpu 
>                  rc = put_page_and_type_preemptible(page);
>          }
>          if ( !rc )
> +        {
>              v->arch.guest_table_user = pagetable_null();
> +
> +            /* Drop ref to guest_table_smap (from MMUEXT_NEW_SMAP_BASEPTR). */
> +            mfn = pagetable_get_pfn(v->arch.pv_vcpu.guest_table_smap);
> +            if ( mfn )
> +            {
> +                page = mfn_to_page(mfn);
> +                if ( paging_mode_refcounts(v->domain) )
> +                    put_page(page);
> +                else
> +                    rc = put_page_and_type_preemptible(page);
> +            }
> +        }
> +        if ( !rc )
> +            v->arch.pv_vcpu.guest_table_smap = pagetable_null();
>      }
>  
>      v->arch.cr3 = 0;
> @@ -3086,7 +3103,11 @@ long do_mmuext_op(
>              }
>              break;
>  
> -        case MMUEXT_NEW_USER_BASEPTR: {
> +        case MMUEXT_NEW_USER_BASEPTR:
> +        case MMUEXT_NEW_SMAP_BASEPTR: {
> +            pagetable_t *ppt = op.cmd == MMUEXT_NEW_USER_BASEPTR
> +                               ? &curr->arch.guest_table_user
> +                               : &curr->arch.pv_vcpu.guest_table_smap;
>              unsigned long old_mfn;
>  
>              if ( paging_mode_translate(current->domain) )
> @@ -3095,7 +3116,7 @@ long do_mmuext_op(
>                  break;
>              }
>  
> -            old_mfn = pagetable_get_pfn(curr->arch.guest_table_user);
> +            old_mfn = pagetable_get_pfn(*ppt);
>              /*
>               * This is particularly important when getting restarted after the
>               * previous attempt got preempted in the put-old-MFN phase.
> @@ -3124,7 +3145,7 @@ long do_mmuext_op(
>                  }
>              }
>  
> -            curr->arch.guest_table_user = pagetable_from_pfn(op.arg1.mfn);
> +            *ppt = pagetable_from_pfn(op.arg1.mfn);
>  
>              if ( old_mfn != 0 )
>              {
> @@ -3249,6 +3270,15 @@ long do_mmuext_op(
>              break;
>          }
>  
> +        case MMUEXT_SET_SMAP_MODE:
> +            if ( unlikely(is_pv_32bit_domain(d)) )
> +                rc = -ENOSYS, okay = 0;
> +            else if ( unlikely(op.arg1.val & ~1) )
> +                okay = 0;
> +            else if ( unlikely(!set_smap_mode(curr, op.arg1.val)) )
> +                rc = -EOPNOTSUPP, okay = 0;
> +            break;
> +
>          case MMUEXT_CLEAR_PAGE: {
>              struct page_info *page;
>  
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -451,6 +451,8 @@ static void do_guest_trap(
>  
>      if ( TI_GET_IF(ti) )
>          tb->flags |= TBF_INTERRUPT;
> +    if ( !TI_GET_AC(ti) )
> +        tb->flags |= TBF_SMAP;
>  
>      if ( unlikely(null_trap_bounce(v, tb)) )
>          gdprintk(XENLOG_WARNING, "Unhandled %s fault/trap [#%d] "
> @@ -1089,6 +1091,8 @@ struct trap_bounce *propagate_page_fault
>          tb->eip        = ti->address;
>          if ( TI_GET_IF(ti) )
>              tb->flags |= TBF_INTERRUPT;
> +        if ( !TI_GET_AC(ti) )
> +            tb->flags |= TBF_SMAP;
>          return tb;
>      }
>  
> @@ -1109,6 +1113,8 @@ struct trap_bounce *propagate_page_fault
>      tb->eip        = ti->address;
>      if ( TI_GET_IF(ti) )
>          tb->flags |= TBF_INTERRUPT;
> +    if ( !TI_GET_AC(ti) )
> +        tb->flags |= TBF_SMAP;
>      if ( unlikely(null_trap_bounce(v, tb)) )
>      {
>          printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> @@ -1598,23 +1604,21 @@ static int guest_io_okay(
>      unsigned int port, unsigned int bytes,
>      struct vcpu *v, struct cpu_user_regs *regs)
>  {
> -    /* If in user mode, switch to kernel mode just to read I/O bitmap. */
> -    int user_mode = !(v->arch.flags & TF_kernel_mode);
> -#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
> -
>      if ( !vm86_mode(regs) &&
>           (v->arch.pv_vcpu.iopl >= (guest_kernel_mode(v, regs) ? 1 : 3)) )
>          return 1;
>  
>      if ( v->arch.pv_vcpu.iobmp_limit > (port + bytes) )
>      {
> +        unsigned int mode;
>          union { uint8_t bytes[2]; uint16_t mask; } x;
>  
>          /*
>           * Grab permission bytes from guest space. Inaccessible bytes are
>           * read as 0xff (no access allowed).
> +         * If in user mode, switch to kernel mode just to read I/O bitmap.
>           */
> -        TOGGLE_MODE();
> +        TOGGLE_MODE(v, mode, 1);
>          switch ( __copy_from_guest_offset(x.bytes, v->arch.pv_vcpu.iobmp,
>                                            port>>3, 2) )
>          {
> @@ -1622,7 +1626,7 @@ static int guest_io_okay(
>          case 1:  x.bytes[1] = ~0;
>          case 0:  break;
>          }
> -        TOGGLE_MODE();
> +        TOGGLE_MODE(v, mode, 0);
>  
>          if ( (x.mask & (((1<<bytes)-1) << (port&7))) == 0 )
>              return 1;
> @@ -2188,7 +2192,7 @@ static int emulate_privileged_op(struct 
>          goto fail;
>      switch ( opcode )
>      {
> -    case 0x1: /* RDTSCP and XSETBV */
> +    case 0x1: /* RDTSCP, XSETBV, CLAC, and STAC */
>          switch ( insn_fetch(u8, code_base, eip, code_limit) )
>          {
>          case 0xf9: /* RDTSCP */
> @@ -2216,6 +2220,20 @@ static int emulate_privileged_op(struct 
>  
>              break;
>          }
> +        case 0xcb: /* STAC */
> +            if ( unlikely(!guest_kernel_mode(v, regs)) ||
> +                 unlikely(is_pv_32bit_vcpu(v)) ||
> +                 unlikely(!guest_smap_enabled(v)) )
> +                goto fail;
> +            set_smap_mode(v, 0);
> +            break;
> +        case 0xca: /* CLAC */
> +            if ( unlikely(!guest_kernel_mode(v, regs)) ||
> +                 unlikely(is_pv_32bit_vcpu(v)) ||
> +                 unlikely(!guest_smap_enabled(v)) )
> +                goto fail;
> +            set_smap_mode(v, 1);
> +            break;
>          default:
>              goto fail;
>          }
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -80,8 +80,7 @@ void __dummy__(void)
>             arch.pv_vcpu.sysenter_callback_eip);
>      OFFSET(VCPU_sysenter_sel, struct vcpu,
>             arch.pv_vcpu.sysenter_callback_cs);
> -    OFFSET(VCPU_sysenter_disables_events, struct vcpu,
> -           arch.pv_vcpu.sysenter_disables_events);
> +    OFFSET(VCPU_sysenter_tbf, struct vcpu, arch.pv_vcpu.sysenter_tbf);
>      OFFSET(VCPU_trap_ctxt, struct vcpu, arch.pv_vcpu.trap_ctxt);
>      OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
>      OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
> @@ -95,6 +94,7 @@ void __dummy__(void)
>      DEFINE(VCPU_TRAP_MCE, VCPU_TRAP_MCE);
>      DEFINE(_VGCF_failsafe_disables_events, _VGCF_failsafe_disables_events);
>      DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
> +    DEFINE(_VGCF_syscall_clac,             _VGCF_syscall_clac);
>      BLANK();
>  
>      OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
> --- a/xen/arch/x86/x86_64/compat/traps.c
> +++ b/xen/arch/x86/x86_64/compat/traps.c
> @@ -205,8 +205,8 @@ static long compat_register_guest_callba
>      case CALLBACKTYPE_sysenter:
>          v->arch.pv_vcpu.sysenter_callback_cs     = reg->address.cs;
>          v->arch.pv_vcpu.sysenter_callback_eip    = reg->address.eip;
> -        v->arch.pv_vcpu.sysenter_disables_events =
> -            (reg->flags & CALLBACKF_mask_events) != 0;
> +        v->arch.pv_vcpu.sysenter_tbf =
> +            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0);
>          break;
>  
>      case CALLBACKTYPE_nmi:
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -28,7 +28,12 @@ switch_to_kernel:
>          /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
>          btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
>          setc  %cl
> +        /* TB_flags |= VGCF_syscall_clac ? TBF_SMAP : 0 */
> +        btl   $_VGCF_syscall_clac,VCPU_guest_context_flags(%rbx)
> +        setc  %al
>          leal  (,%rcx,TBF_INTERRUPT),%ecx
> +        leal  (,%rax,TBF_SMAP),%eax
> +        orl   %eax,%ecx
>          movb  %cl,TRAPBOUNCE_flags(%rdx)
>          call  create_bounce_frame
>          andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
> @@ -87,7 +92,7 @@ failsafe_callback:
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>          movq  VCPU_failsafe_addr(%rbx),%rax
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> -        movb  $TBF_FAILSAFE,TRAPBOUNCE_flags(%rdx)
> +        movb  $TBF_FAILSAFE|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
>          bt    $_VGCF_failsafe_disables_events,VCPU_guest_context_flags(%rbx)
>          jnc   1f
>          orb   $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
> @@ -215,7 +220,7 @@ test_guest_events:
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>          movq  VCPU_event_addr(%rbx),%rax
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> -        movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
> +        movb  $TBF_INTERRUPT|TBF_SMAP,TRAPBOUNCE_flags(%rdx)
>          call  create_bounce_frame
>          jmp   test_all_events
>  
> @@ -278,9 +283,8 @@ GLOBAL(sysenter_eflags_saved)
>          pushq $0
>          SAVE_VOLATILE TRAP_syscall
>          GET_CURRENT(%rbx)
> -        cmpb  $0,VCPU_sysenter_disables_events(%rbx)
> +        movzbl VCPU_sysenter_tbf(%rbx),%ecx
>          movq  VCPU_sysenter_addr(%rbx),%rax
> -        setne %cl
>          testl $X86_EFLAGS_NT,UREGS_eflags(%rsp)
>          leaq  VCPU_trap_bounce(%rbx),%rdx
>  UNLIKELY_START(nz, sysenter_nt_set)
> @@ -290,7 +294,6 @@ UNLIKELY_START(nz, sysenter_nt_set)
>          xorl  %eax,%eax
>  UNLIKELY_END(sysenter_nt_set)
>          testq %rax,%rax
> -        leal  (,%rcx,TBF_INTERRUPT),%ecx
>  UNLIKELY_START(z, sysenter_gpf)
>          movq  VCPU_trap_ctxt(%rbx),%rsi
>          SAVE_PRESERVED
> @@ -299,7 +302,11 @@ UNLIKELY_START(z, sysenter_gpf)
>          movq  TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_eip(%rsi),%rax
>          testb $4,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
>          setnz %cl
> +        testb $8,TRAP_gp_fault * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
> +        setnz %sil
>          leal  TBF_EXCEPTION|TBF_EXCEPTION_ERRCODE(,%rcx,TBF_INTERRUPT),%ecx
> +        leal  (,%rsi,TBF_SMAP),%esi
> +        orl   %esi,%ecx
>  UNLIKELY_END(sysenter_gpf)
>          movq  VCPU_domain(%rbx),%rdi
>          movq  %rax,TRAPBOUNCE_eip(%rdx)
> @@ -351,19 +358,38 @@ int80_slow_path:
>  /* On return only %rbx and %rdx are guaranteed non-clobbered.            */
>  create_bounce_frame:
>          ASSERT_INTERRUPTS_ENABLED
> -        testb $TF_kernel_mode,VCPU_thread_flags(%rbx)
> -        jnz   1f
> -        /* Push new frame at registered guest-OS stack base. */
> +        xorl  %esi,%esi
> +        testb $TBF_SMAP,TRAPBOUNCE_flags(%rdx)
> +        movl  VCPU_thread_flags(%rbx),%eax
> +        setnz %sil
> +        testb $TF_kernel_mode,%al
>          pushq %rdx
>          movq  %rbx,%rdi
> +        jnz   1f
> +        /* Push new frame at registered guest-OS stack base. */
> +        andl  $~TF_smap_mode,VCPU_thread_flags(%rbx)
> +        shll  $_TF_smap_mode,%esi
> +        orl   %esi,VCPU_thread_flags(%rbx)
>          call  toggle_guest_mode
> -        popq  %rdx
>          movq  VCPU_kernel_sp(%rbx),%rsi
> +        movl  $~0,%edi
>          jmp   2f
>  1:      /* In kernel context already: push new frame at existing %rsp. */
> -        movq  UREGS_rsp+8(%rsp),%rsi
> -        andb  $0xfc,UREGS_cs+8(%rsp)    # Indicate kernel context to guest.
> +        pushq %rax
> +        call  set_smap_mode
> +        test  %al,%al
> +        movl  $~0,%edi
> +        popq  %rax                      # old VCPU_thread_flags(%rbx)
> +UNLIKELY_START(nz, cbf_smap)
> +        movl  $~X86_EFLAGS_AC,%edi
> +        testb $TF_smap_mode,%al
> +        UNLIKELY_DONE(nz, cbf_smap)
> +        btsq  $18+32,%rdi               # LOG2(X86_EFLAGS_AC)+32
> +UNLIKELY_END(cbf_smap)
> +        movq  UREGS_rsp+2*8(%rsp),%rsi
> +        andl  $~3,UREGS_cs+2*8(%rsp)    # Indicate kernel context to guest.
>  2:      andq  $~0xf,%rsi                # Stack frames are 16-byte aligned.
> +        popq  %rdx
>          movq  $HYPERVISOR_VIRT_START,%rax
>          cmpq  %rax,%rsi
>          movq  $HYPERVISOR_VIRT_END+60,%rax
> @@ -394,7 +420,10 @@ __UNLIKELY_END(create_bounce_frame_bad_s
>          setz  %ch                       # %ch == !saved_upcall_mask
>          movl  UREGS_eflags+8(%rsp),%eax
>          andl  $~X86_EFLAGS_IF,%eax
> +        andl  %edi,%eax                 # Clear EFLAGS.AC if needed
> +        shrq  $32,%rdi
>          addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
> +        orl   %edi,%eax                 # Set EFLAGS.AC if needed
>          orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
>  .Lft5:  movq  %rax,16(%rsi)             # RFLAGS
>          movq  UREGS_rip+8(%rsp),%rax
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -153,9 +153,11 @@ void vcpu_show_registers(const struct vc
>  
>      crs[0] = v->arch.pv_vcpu.ctrlreg[0];
>      crs[2] = arch_get_cr2(v);
> -    crs[3] = pagetable_get_paddr(guest_kernel_mode(v, regs) ?
> +    crs[3] = pagetable_get_paddr(!guest_kernel_mode(v, regs) ?
> +                                 v->arch.guest_table_user :
> +                                 !guest_smap_enabled(v) || !guest_smap_mode(v) ?
>                                   v->arch.guest_table :
> -                                 v->arch.guest_table_user);
> +                                 v->arch.pv_vcpu.guest_table_smap);
>      crs[4] = v->arch.pv_vcpu.ctrlreg[4];
>  
>      _show_registers(regs, crs, CTXT_pv_guest, v);
> @@ -258,14 +260,19 @@ void toggle_guest_mode(struct vcpu *v)
>      if ( is_pv_32bit_vcpu(v) )
>          return;
>      v->arch.flags ^= TF_kernel_mode;
> +    if ( !guest_smap_enabled(v) )
> +        v->arch.flags &= ~TF_smap_mode;
>      asm volatile ( "swapgs" );
>      update_cr3(v);
>  #ifdef USER_MAPPINGS_ARE_GLOBAL
> -    /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> -    asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> -#else
> -    write_ptbase(v);
> +    if ( !(v->arch.flags & TF_kernel_mode) || !guest_smap_mode(v) )
> +    {
> +        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> +        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> +    }
> +    else
>  #endif
> +        write_ptbase(v);
>  
>      if ( !(v->arch.flags & TF_kernel_mode) )
>          return;
> @@ -280,6 +287,35 @@ void toggle_guest_mode(struct vcpu *v)
>          v->arch.pv_vcpu.pending_system_time.version = 0;
>  }
>  
> +bool_t set_smap_mode(struct vcpu *v, bool_t on)
> +{
> +    ASSERT(!is_pv_32bit_vcpu(v));
> +    ASSERT(v->arch.flags & TF_kernel_mode);
> +
> +    if ( !guest_smap_enabled(v) )
> +        return 0;
> +    if ( !on == !guest_smap_mode(v) )
> +        return 1;
> +
> +    if ( on )
> +        v->arch.flags |= TF_smap_mode;
> +    else
> +        v->arch.flags &= ~TF_smap_mode;
> +
> +    update_cr3(v);
> +#ifdef USER_MAPPINGS_ARE_GLOBAL
> +    if ( !guest_smap_mode(v) )
> +    {
> +        /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
> +        asm volatile ( "mov %0, %%cr3" : : "r" (v->arch.cr3) : "memory" );
> +    }
> +    else
> +#endif
> +        write_ptbase(v);
> +
> +    return 1;
> +}
> +
>  unsigned long do_iret(void)
>  {
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> @@ -305,6 +341,8 @@ unsigned long do_iret(void)
>          }
>          toggle_guest_mode(v);
>      }
> +    else if ( set_smap_mode(v, !(iret_saved.rflags & X86_EFLAGS_AC)) )
> +        iret_saved.rflags &= ~X86_EFLAGS_AC;
>  
>      regs->rip    = iret_saved.rip;
>      regs->cs     = iret_saved.cs | 3; /* force guest privilege */
> @@ -480,6 +518,10 @@ static long register_guest_callback(stru
>          else
>              clear_bit(_VGCF_syscall_disables_events,
>                        &v->arch.vgc_flags);
> +        if ( reg->flags & CALLBACKF_clac )
> +            set_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
> +        else
> +            clear_bit(_VGCF_syscall_clac, &v->arch.vgc_flags);
>          break;
>  
>      case CALLBACKTYPE_syscall32:
> @@ -490,8 +532,9 @@ static long register_guest_callback(stru
>  
>      case CALLBACKTYPE_sysenter:
>          v->arch.pv_vcpu.sysenter_callback_eip = reg->address;
> -        v->arch.pv_vcpu.sysenter_disables_events =
> -            !!(reg->flags & CALLBACKF_mask_events);
> +        v->arch.pv_vcpu.sysenter_tbf =
> +            (reg->flags & CALLBACKF_mask_events ? TBF_INTERRUPT : 0) |
> +            (reg->flags & CALLBACKF_clac ? TBF_SMAP : 0);
>          break;
>  
>      case CALLBACKTYPE_nmi:
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -75,6 +75,8 @@ void mapcache_override_current(struct vc
>  
>  /* x86/64: toggle guest between kernel and user modes. */
>  void toggle_guest_mode(struct vcpu *);
> +/* x86/64: switch guest between SMAP and "normal" modes. */
> +bool_t set_smap_mode(struct vcpu *, bool_t);
>  
>  /*
>   * Initialise a hypercall-transfer page. The given pointer must be mapped
> @@ -354,13 +356,16 @@ struct pv_vcpu
>      unsigned short syscall32_callback_cs;
>      unsigned short sysenter_callback_cs;
>      bool_t syscall32_disables_events;
> -    bool_t sysenter_disables_events;
> +    u8 sysenter_tbf;
>  
>      /* Segment base addresses. */
>      unsigned long fs_base;
>      unsigned long gs_base_kernel;
>      unsigned long gs_base_user;
>  
> +    /* x86/64 kernel-only (SMAP) pagetable */
> +    pagetable_t guest_table_smap;
> +
>      /* Bounce information for propagating an exception to guest OS. */
>      struct trap_bounce trap_bounce;
>      struct trap_bounce int80_bounce;
> @@ -471,6 +476,10 @@ unsigned long pv_guest_cr4_fixup(const s
>      ((c) & ~(X86_CR4_PGE | X86_CR4_PSE | X86_CR4_TSD |      \
>               X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE))
>  
> +#define guest_smap_enabled(v) \
> +    (!pagetable_is_null((v)->arch.pv_vcpu.guest_table_smap))
> +#define guest_smap_mode(v) ((v)->arch.flags & TF_smap_mode)
> +
>  void domain_cpuid(struct domain *d,
>                    unsigned int  input,
>                    unsigned int  sub_input,
> --- a/xen/include/asm-x86/paging.h
> +++ b/xen/include/asm-x86/paging.h
> @@ -405,17 +405,34 @@ guest_get_eff_l1e(struct vcpu *v, unsign
>      paging_get_hostmode(v)->guest_get_eff_l1e(v, addr, eff_l1e);
>  }
>  
> +#define TOGGLE_MODE(v, m, in) do { \
> +    if ( in ) \
> +        (m) = (v)->arch.flags; \
> +    if ( (m) & TF_kernel_mode ) \
> +    { \
> +        set_smap_mode(v, (in) || ((m) & TF_smap_mode) ); \
> +        break; \
> +    } \
> +    if ( in ) \
> +        (v)->arch.flags |= TF_smap_mode; \
> +    else \
> +    { \
> +        (v)->arch.flags &= ~TF_smap_mode; \
> +        (v)->arch.flags |= (m) & TF_smap_mode; \
> +    } \
> +    toggle_guest_mode(v); \
> +} while ( 0 )
> +
>  /* Read the guest's l1e that maps this address, from the kernel-mode
>   * pagetables. */
>  static inline void
>  guest_get_eff_kern_l1e(struct vcpu *v, unsigned long addr, void *eff_l1e)
>  {
> -    int user_mode = !(v->arch.flags & TF_kernel_mode);
> -#define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
> +    unsigned int mode;
>  
> -    TOGGLE_MODE();
> +    TOGGLE_MODE(v, mode, 1);
>      guest_get_eff_l1e(v, addr, eff_l1e);
> -    TOGGLE_MODE();
> +    TOGGLE_MODE(v, mode, 0);
>  }
>  
>  
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -125,12 +125,15 @@
>  /* 'trap_bounce' flags values */
>  #define TBF_EXCEPTION          1
>  #define TBF_EXCEPTION_ERRCODE  2
> +#define TBF_SMAP               4
>  #define TBF_INTERRUPT          8
>  #define TBF_FAILSAFE          16
>  
>  /* 'arch_vcpu' flags values */
>  #define _TF_kernel_mode        0
>  #define TF_kernel_mode         (1<<_TF_kernel_mode)
> +#define _TF_smap_mode          1
> +#define TF_smap_mode           (1<<_TF_smap_mode)
>  
>  /* #PF error code values. */
>  #define PFEC_page_present   (1U<<0)
> --- a/xen/include/public/arch-x86/xen.h
> +++ b/xen/include/public/arch-x86/xen.h
> @@ -138,6 +138,7 @@ typedef unsigned long xen_ulong_t;
>   */
>  #define TI_GET_DPL(_ti)      ((_ti)->flags & 3)
>  #define TI_GET_IF(_ti)       ((_ti)->flags & 4)
> +#define TI_GET_AC(_ti)       ((_ti)->flags & 8)
>  #define TI_SET_DPL(_ti,_dpl) ((_ti)->flags |= (_dpl))
>  #define TI_SET_IF(_ti,_if)   ((_ti)->flags |= ((!!(_if))<<2))
>  struct trap_info {
> @@ -179,6 +180,8 @@ struct vcpu_guest_context {
>  #define VGCF_syscall_disables_events   (1<<_VGCF_syscall_disables_events)
>  #define _VGCF_online                   5
>  #define VGCF_online                    (1<<_VGCF_online)
> +#define _VGCF_syscall_clac             6
> +#define VGCF_syscall_clac              (1<<_VGCF_syscall_clac)
>      unsigned long flags;                    /* VGCF_* flags                 */
>      struct cpu_user_regs user_regs;         /* User-level CPU registers     */
>      struct trap_info trap_ctxt[256];        /* Virtual IDT                  */
> --- a/xen/include/public/callback.h
> +++ b/xen/include/public/callback.h
> @@ -76,6 +76,13 @@
>   */
>  #define _CALLBACKF_mask_events             0
>  #define CALLBACKF_mask_events              (1U << _CALLBACKF_mask_events)
> +/*
> + * Effect CLAC upon callback entry? This flag is ignored for event,
> + * failsafe, and NMI callbacks: there, user space is unconditionally
> + * hidden if the respective functionality was enabled by the kernel.
> + */
> +#define _CALLBACKF_clac                    1
> +#define CALLBACKF_clac                     (1U << _CALLBACKF_clac)
>  
>  /*
>   * Register a callback.
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -341,6 +341,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>   * mfn: Machine frame number of new page-table base to install in MMU
>   *      when in user space.
>   *
> + * cmd: MMUEXT_NEW_SMAP_BASEPTR [x86/64 only]
> + * mfn: Machine frame number of new page-table base to install in MMU
> + *      when in kernel-only (SMAP) mode.
> + *
>   * cmd: MMUEXT_TLB_FLUSH_LOCAL
>   * No additional arguments. Flushes local TLB.
>   *
> @@ -371,6 +375,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>   * linear_addr: Linear address of LDT base (NB. must be page-aligned).
>   * nr_ents: Number of entries in LDT.
>   *
> + * cmd: MMUEXT_SET_SMAP_MODE
> + * val: 0 - disable, 1 - enable (other values reserved)
> + *
>   * cmd: MMUEXT_CLEAR_PAGE
>   * mfn: Machine frame number to be cleared.
>   *
> @@ -402,17 +409,21 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>  #define MMUEXT_FLUSH_CACHE_GLOBAL 18
>  #define MMUEXT_MARK_SUPER       19
>  #define MMUEXT_UNMARK_SUPER     20
> +#define MMUEXT_NEW_SMAP_BASEPTR 21
> +#define MMUEXT_SET_SMAP_MODE    22
>  /* ` } */
>  
>  #ifndef __ASSEMBLY__
>  struct mmuext_op {
>      unsigned int cmd; /* => enum mmuext_cmd */
>      union {
> -        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR
> +        /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR, NEW_SMAP_BASEPTR
>           * CLEAR_PAGE, COPY_PAGE, [UN]MARK_SUPER */
>          xen_pfn_t     mfn;
>          /* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */
>          unsigned long linear_addr;
> +        /* SET_SMAP_MODE */
> +        unsigned int  val;
>      } arg1;
>      union {
>          /* SET_LDT */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:57:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9HOu-0006uD-DL; Fri, 31 Jan 2014 16:57:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1W9HOt-0006tz-Hj
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 16:57:07 +0000
Received: from [85.158.143.35:56427] by server-3.bemta-4.messagelabs.com id
	96/A9-11539-2E5DBE25; Fri, 31 Jan 2014 16:57:06 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391187424!2274012!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13414 invoked from network); 31 Jan 2014 16:57:05 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 16:57:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s0VGv1GV012912
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 31 Jan 2014 16:57:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VGuuHj025915
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 31 Jan 2014 16:56:57 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s0VGuuZq006524; Fri, 31 Jan 2014 16:56:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 08:56:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C4D201BFA73; Fri, 31 Jan 2014 11:56:54 -0500 (EST)
Date: Fri, 31 Jan 2014 11:56:54 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: James Dingwall <james.dingwall@zynstra.com>, daniel.kiper@oracle.com
Message-ID: <20140131165654.GF23648@phenom.dumpdata.com>
References: <52C50661.7060900@oracle.com> <52CBC700.1060602@zynstra.com>
	<52CE7E67.5080708@oracle.com> <52D64B87.6000400@zynstra.com>
	<52D69E0B.5020006@oracle.com> <52D6B8B6.5070302@zynstra.com>
	<52D7346A.3000300@oracle.com> <52E7E594.2050104@zynstra.com>
	<52E911CA.9020700@oracle.com> <52E91404.30602@zynstra.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52E91404.30602@zynstra.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 02:45:24PM +0000, James Dingwall wrote:
> Bob Liu wrote:
> >On 01/29/2014 01:15 AM, James Dingwall wrote:
> >>Bob Liu wrote:
> >>>I have made a patch that reserves an extra 10% of the original total
> >>>memory; this way I think we can make the system much more reliable in
> >>>all cases.  Could you please give it a test? You don't need to set
> >>>selfballoon_reserved_mb yourself any more.
> >>I have to say that with this patch the situation has definitely
> >>improved.  I have been running it with 3.12.[78] and 3.13 and pushing it
> >>quite hard for the last 10 days or so.  Unfortunately yesterday I got an
> >Good news!
> >
> >>OOM during a compile (link) of webkit-gtk.  I think your patch is part
> >>of the solution but I'm not sure if the other bit is simply to be more
> >>generous with the guest memory allocation or something else.  Having
> >>tested with memory = 512  and no tmem I get an OOM with the same
> >>compile, with memory = 1024 and no tmem the compile completes ok (both
> >>cases without maxmem).  As my domains are usually started with memory =
> >>512 and maxmem = 1024 it seems that there should be sufficient memory with my
> >But I think the tmem/balloon driver has never been able to expand guest
> >memory from 'memory' to 'maxmem' automatically.
> I am carrying this patch for libxl (4.3.1) because maxmem wasn't
> being honoured.

Daniel,

Weren't you working on a similar patch? Do you recall what happened to it?

Thanks.
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 356f920..fb7965d 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -235,7 +235,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
> 
> -    xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
> +    xc_domain_setmaxmem(ctx->xch, domid, info->max_memkb + LIBXL_MAXMEM_CONSTANT);
>      xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
>      state->store_domid = xs_domid ? atoi(xs_domid) : 0;
>      free(xs_domid);
> 
> >
> >>default parameters. Also for an experiment I set memory=1024 and removed
> >>maxmem and when tmem is activated I see "[ 3393.884105] xen:balloon:
> >>reserve_additional_memory: add_memory() failed: -17" printed many times
> >>in the guest kernel log.
> >>
> >I'll take a look at it.
> It seems possible that this has the same cause as the message
> printed in dom0 which I reported in
> http://lists.xen.org/archives/html/xen-devel/2012-12/msg01607.html
> and for which no fix seems to have made it into the kernel.  I'm still
> working around this with:
> 
> #!/bin/sh
> 
> CURRENT_KB="/sys/bus/xen_memory/devices/xen_memory0/info/current_kb"
> TARGET_KB="/sys/bus/xen_memory/devices/xen_memory0/target_kb"
> 
> CKB=$(< "${CURRENT_KB}")
> TKB=$(< "${TARGET_KB}")
> 
> if [ "${TKB}" -gt "${CKB}" ] ; then
>         echo "Resizing dom0 memory balloon target to ${CKB}"
>         echo "${CKB}" > "${TARGET_KB}"
> fi
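[The clamp logic in the script above can be exercised on its own; this is a minimal sketch with hard-coded stand-in values. The numbers are hypothetical; on a real system CKB and TKB come from the sysfs files named in the script:]

```shell
#!/bin/sh
# Stand-ins for the sysfs reads in the script above:
#   CKB=$(< /sys/bus/xen_memory/devices/xen_memory0/info/current_kb)
#   TKB=$(< /sys/bus/xen_memory/devices/xen_memory0/target_kb)
CKB=524288    # hypothetical current_kb (512 MiB)
TKB=1048576   # hypothetical target_kb (1 GiB)

# If the balloon target exceeds what the domain currently holds,
# pull the target back down to the current size.
if [ "${TKB}" -gt "${CKB}" ]; then
    TKB="${CKB}"
fi
echo "effective target_kb: ${TKB}"
```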
> 
> Thanks,
> James
> 
> 
> -- 
> 
> *James Dingwall*
> 
> Script Monkey
> 
> 
> Zynstra is a private limited company registered in England and Wales
> (registered number 07864369).  Our registered office is 5 New Street
> Square, London, EC4A 3TW and our headquarters are at Bath Ventures,
> Broad Quay, Bath, BA1 1UD.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 16:58:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 16:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9HQ2-000733-Tv; Fri, 31 Jan 2014 16:58:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1W9HQ1-00072m-8w
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 16:58:17 +0000
Received: from [85.158.139.211:11713] by server-7.bemta-5.messagelabs.com id
	99/6C-14867-826DBE25; Fri, 31 Jan 2014 16:58:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391187494!889263!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6384 invoked from network); 31 Jan 2014 16:58:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 31 Jan 2014 16:58:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 31 Jan 2014 16:58:14 +0000
Message-Id: <52EBE4450200007800118747@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 31 Jan 2014 16:58:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-11-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-11-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 10/17] x86/VPMU: Initialize PMU for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> +static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
> +{
> +    struct vcpu *v;
> +    struct page_info *page;
> +    uint64_t gmfn = params->d.val;
> +
> +    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
> +        return -EINVAL;
> +
> +    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
> +    if ( !page )
> +        return -EINVAL;
> +
> +    v = d->vcpu[params->vcpu];
> +    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
> +    if ( !v->arch.vpmu.xenpmu_data )
> +    {
> +        put_page(page);
> +        return -EINVAL;
> +    }
> +
> +    vpmu_initialise(v);
> +
> +    return 0;
> +}

This being for a PV guest, you need to obtain a write type reference
to the page, or else you risk the guest re-using the page for
something that mustn't be written to in uncontrolled ways (like a
page table or descriptor table). See e.g. map_vcpu_info().
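For illustration, the map_vcpu_info() pattern being referred to takes a writable type reference in addition to the plain reference, so the page's type stays pinned while the hypervisor holds a writable mapping. This is a sketch against the quoted code, not a tested patch; map_vcpu_info() itself is the authoritative reference:

```c
    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
    if ( !page )
        return -EINVAL;

    /* Also take a writable type reference: this fails if the page is
     * currently in use as e.g. a page table or descriptor table, and
     * prevents the guest from converting it to such a use while the
     * hypervisor holds the mapping. */
    if ( !get_page_type(page, PGT_writable_page) )
    {
        put_page(page);
        return -EINVAL;
    }
```

The corresponding teardown path would then drop both references with put_page_and_type().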

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 18:49:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 18:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9J8m-0001J9-Qx; Fri, 31 Jan 2014 18:48:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9J8l-0001J4-2o
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 18:48:35 +0000
Received: from [85.158.137.68:60700] by server-7.bemta-3.messagelabs.com id
	EE/35-13775-100FBE25; Fri, 31 Jan 2014 18:48:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391194110!12539328!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19928 invoked from network); 31 Jan 2014 18:48:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 18:48:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,759,1384300800"; d="scan'208";a="98597496"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 18:48:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 13:48:29 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9J8f-0001AX-8h;
	Fri, 31 Jan 2014 18:48:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9J8e-0003LQ-Sj;
	Fri, 31 Jan 2014 18:48:29 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24676-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 18:48:28 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24676: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24676 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 24668 REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair        16 guest-start                 fail pass in 24668
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore  fail in 24668 pass in 24676
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24668 pass in 24676

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an
    IO access directly, and may get X86EMUL_RETRY while handling that IO
    request.  If at the same time a virtual vmexit is pending (for
    example, an interrupt to inject into L1), the hypervisor switches
    the vCPU context from L2 to L1.  The IO request is then retried
    later, but the retry now happens in L1's context, which causes the
    problem.  The fix is to allow no virtual vmexit/vmentry while an IO
    request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses the physical device directly,
    so the shadow EPT table should point at the device's MMIO.  In the
    current logic, however, L0 does not distinguish MMIO backed by qemu
    from MMIO backed by a physical device when building the shadow EPT
    table, which is wrong.  This patch sets up the correct shadow EPT
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated
    before the cmpxchg(), as after a successful cmpxchg the element
    might be immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it
    is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
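
The deferral described above (record the update while the guest is in user mode, replay it on the next switch to kernel mode) can be sketched in plain C. This is a hypothetical illustration of the pattern, not Xen's actual toggle_guest_mode() code; all names (vcpu_sketch, publish_runstate, guest_area) are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* While a 64-bit PV vCPU is in user mode, kernel-address areas may be
 * unmapped, so an update is recorded as pending instead of written. */
struct vcpu_sketch {
    bool in_kernel_mode;
    bool runstate_update_pending;
    uint64_t runstate_shadow;   /* latest value to publish */
    uint64_t guest_area;        /* stands in for the guest mapping */
};

static void publish_runstate(struct vcpu_sketch *v, uint64_t val)
{
    v->runstate_shadow = val;
    if (v->in_kernel_mode)
        v->guest_area = val;                /* mapping accessible now */
    else
        v->runstate_update_pending = true;  /* defer until kernel mode */
}

static void toggle_to_kernel_mode(struct vcpu_sketch *v)
{
    v->in_kernel_mode = true;
    if (v->runstate_update_pending) {       /* replay deferred update */
        v->guest_area = v->runstate_shadow;
        v->runstate_update_pending = false;
    }
}
```

The key property is that no update is ever silently dropped: it is either written immediately or replayed at the mode switch.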

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and it may get X86EMUL_RETRY while handling the IO
    request. If at the same time a virtual vmexit is pending (for example,
    an interrupt to inject into L1), the hypervisor will switch the vCPU
    context from L2 to L1. We are then in L1's context, but the
    X86EMUL_RETRY means the hypervisor will retry the IO request later,
    and that retry would unfortunately happen in L1's context, causing the
    problem. The fix is that while an IO request is pending, no virtual
    vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
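
The rule from this commit (refuse an L2 -> L1 context switch while an emulated IO request is still outstanding) can be sketched as follows. This is a hypothetical stand-in, not Xen's nested VMX code; the struct and function names are invented:

```c
#include <assert.h>
#include <stdbool.h>

enum ctx { L1_CTX, L2_CTX };

struct nested_vcpu {
    enum ctx ctx;
    bool io_req_pending;   /* an X86EMUL_RETRY-style retry outstanding */
};

/* A virtual vmexit (L2 -> L1 switch) is refused while an emulated IO
 * request is pending, so the retry always completes in L2's context. */
static bool try_virtual_vmexit(struct nested_vcpu *v)
{
    if (v->io_req_pending)
        return false;      /* defer the vmexit; retry must finish first */
    v->ctx = L1_CTX;
    return true;
}
```

Once the IO request completes and the pending flag is cleared, the deferred vmexit can proceed normally.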

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For
    such accesses, the shadow EPT table should point at the device's MMIO.
    But in the current logic, when building the shadow EPT table, L0
    doesn't distinguish MMIO backed by qemu from MMIO belonging to a
    physical device. This is wrong. This patch sets up correct shadow EPT
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting an element on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 concludes that the cmpxchg failed and goes around the loop
       again, linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
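
The fixed exchange can be sketched with plain C11 atomics. This is a hypothetical illustration of the pattern, not Xen's actual mctelem code; push/oldhead are invented names:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct elem { struct elem *next; int val; };

/* The head is (re-)read into a local (oldhead) on each iteration, and
 * the element's link field is written BEFORE the compare-exchange, so a
 * successful swap never publishes a half-linked element.  This mirrors
 * the "temporary variable for the prev pointer" fix described above. */
static void push(struct elem *_Atomic *head, struct elem *e)
{
    struct elem *oldhead = atomic_load(head);
    do {
        e->next = oldhead;              /* link before publication */
    } while (!atomic_compare_exchange_weak(head, &oldhead, e));
    /* on failure, the CAS refreshed oldhead; the loop re-links e */
}
```

Because the comparison uses the local copy rather than re-reading the shared prev pointer, a concurrent consumer changing memory between the CAS and the check can no longer make a successful CAS look like a failure.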

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
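
The get/put pairing this fix restores can be sketched as follows. These are hypothetical stand-ins, not Xen's real get_gfn/put_gfn API; the lock_depth counter models the p2m lock's recurse count:

```c
#include <assert.h>
#include <stdbool.h>

static int lock_depth;   /* models the p2m lock's recurse count */

/* get takes the lock (and may fail to find a valid mfn); put releases
 * it.  The bug above was an error path returning without the put,
 * leaving the lock held and tripping the !preempt_count() assertion. */
static bool get_gfn_sketch(bool mfn_valid) { lock_depth++; return mfn_valid; }
static void put_gfn_sketch(void)           { lock_depth--; }

/* Fixed shape: every exit path, including the error one, pairs the
 * get with a put. */
static int dbg_rw_sketch(bool mfn_valid)
{
    if (!get_gfn_sketch(mfn_valid)) {
        put_gfn_sketch();   /* the call missing from the error path */
        return -1;
    }
    /* ... access the page ... */
    put_gfn_sketch();
    return 0;
}
```

With the put on the error path, lock_depth returns to zero whether or not the gfn lookup succeeds.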
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 18:56:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 18:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9JGL-0001Rh-Qc; Fri, 31 Jan 2014 18:56:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1W9JGK-0001Rb-Uo
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 18:56:25 +0000
Received: from [85.158.139.211:35693] by server-17.bemta-5.messagelabs.com id
	31/21-31975-8D1FBE25; Fri, 31 Jan 2014 18:56:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391194580!907159!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11497 invoked from network); 31 Jan 2014 18:56:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 18:56:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,759,1384300800"; d="scan'208";a="96644967"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 18:56:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 13:56:19 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1W9JGF-0007H8-DY;
	Fri, 31 Jan 2014 18:56:19 +0000
Date: Fri, 31 Jan 2014 18:56:19 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@schaman.hu>
Message-ID: <20140131185619.GB27553@zion.uk.xensource.com>
References: <52EAA31B.1090606@schaman.hu>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EAA31B.1090606@schaman.hu>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Alex Duyck <alexander.h.duyck@intel.com>, wei.liu2@citrix.com,
	linux-kernel@vger.kernel.org, Michael Chan <mchan@broadcom.com>,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 07:08:11PM +0000, Zoltan Kiss wrote:
> Hi,
> 
> I've experienced some queue timeout problems mentioned in the
> subject with igb and bnx2 cards. I haven't seen them on other cards
> so far. I'm using XenServer with a 3.10 Dom0 kernel (although igb was
> already updated to the latest version), and there are Windows guests
> sending data through these cards. I noticed these problems in XenRT
> test runs, and I know that they usually mean some lost interrupt
> problem or other hardware error, but in my case they started to
> appear more often, and they are likely connected to my netback grant
> mapping patches. These patches cause skb's with huge (~64KB)
> linear buffers to appear more often.
> The reason for that is an old problem in the ring protocol:
> originally the maximum number of slots was tied to MAX_SKB_FRAGS,
> as every slot ended up as a frag of the skb. When this value was
> changed, netback had to cope with the situation by coalescing the
> packets into fewer frags.
> My patch series takes a different approach: the leftover slots
> (pages) are assigned to a new skb's frags, and that skb is
> stashed on the frag_list of the first one. Then, before sending it
> off to the stack, it calls skb = skb_copy_expand(skb, 0, 0,
> GFP_ATOMIC, __GFP_NOWARN), which basically creates a new skb and
> copies all the data into it. As far as I understand, it puts
> everything into the linear buffer, which can amount to 64KB at most.
> The original skb is then freed, and the new one is sent to the
> stack.

Just my two cents: if this is the case, you can try calling
skb_copy_expand on every SKB netback receives, to manually create SKBs
with ~64KB linear buffers, and see how it goes...

Wei.

> I suspect that this is the problem, as it only happens when guests
> send too many slots. Has anyone familiar with these drivers
> seen such an issue before (when these kinds of skb's get stuck in
> the queue)?
> 
> Regards,
> 
> Zoltan Kiss
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 22:23:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 22:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9MU8-0007Yk-6U; Fri, 31 Jan 2014 22:22:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W9MU7-0007Yf-Gj
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 22:22:51 +0000
Received: from [85.158.139.211:50241] by server-3.bemta-5.messagelabs.com id
	B2/C5-13671-A322CE25; Fri, 31 Jan 2014 22:22:50 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391206969!935817!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14445 invoked from network); 31 Jan 2014 22:22:50 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 22:22:50 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so2639868eak.14
	for <xen-devel@lists.xenproject.org>;
	Fri, 31 Jan 2014 14:22:49 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=En6uTOSo15mMn6PKBPZlgb4uQ0R1Fm8dtyevxTkKSys=;
	b=k/Iva6Qfd4mau5hB+l9JYSJmpe1AQB3JuP81c7FrLvZxSZd15MLUGPyLJXLnY4dVF0
	9A9nl50BxRM74hkmg2MYGQZpQUX8OBLz5LRfPWStQEWtuEaHc4ae73RGs8q0lWvL+PGJ
	ZHvhCch6B0oX4Qaay1qgUY5zvYLZ1uR27JwQd7kGXUZvD/kEnnlB3maUkz6/YMqdfuiq
	q/YXXbaCEK+sfvk9s/OIwRiEgIFf5bifH1lLXdgwgnuKuNLW5Z9xZsZI5B57P0g+lo3b
	c5aYb3cgAp9azcpBwPI5SiuwtGXFHXPxGQ73hA03V7xZjOigimeiymXhrVBRe4/sxW/4
	xwAQ==
X-Gm-Message-State: ALoCoQm/wPxFsrFM09FLOL+8vtWm4rPgH6N/ExpPulzxoKyJf+FUs78v47Bx5AwHHff2tPuDD6JI
X-Received: by 10.14.32.132 with SMTP id o4mr27718825eea.14.1391206969483;
	Fri, 31 Jan 2014 14:22:49 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k41sm41800544eey.0.2014.01.31.14.22.48
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 31 Jan 2014 14:22:48 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 31 Jan 2014 22:22:45 +0000
Message-Id: <1391206965-25727-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [PATCH] xen/arm: Directly return NULL if Xen fails to
	allocate domain struct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current implementation of alloc_domain_struct dereferences the newly
allocated pointer even if the allocation has failed.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

This is a bug fix for Xen 4.4. Without this patch, if Xen runs out of
memory it will crash by dereferencing a NULL pointer.
---
 xen/arch/arm/domain.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..c279a27 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -409,8 +409,10 @@ struct domain *alloc_domain_struct(void)
     struct domain *d;
     BUILD_BUG_ON(sizeof(*d) > PAGE_SIZE);
     d = alloc_xenheap_pages(0, 0);
-    if ( d != NULL )
-        clear_page(d);
+    if ( d == NULL )
+        return NULL;
+
+    clear_page(d);
     d->arch.grant_table_gpfn = xmalloc_array(xen_pfn_t, max_nr_grant_frames);
     return d;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 22:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 22:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9N2f-0000FA-32; Fri, 31 Jan 2014 22:58:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1W9N2d-0000F2-FG; Fri, 31 Jan 2014 22:58:31 +0000
Received: from [85.158.137.68:42281] by server-6.bemta-3.messagelabs.com id
	0B/87-09180-69A2CE25; Fri, 31 Jan 2014 22:58:30 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391209098!12667341!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjU3NzYgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15677 invoked from network); 31 Jan 2014 22:58:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 22:58:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; 
	d="asc'?scan'208";a="98680971"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 31 Jan 2014 22:58:18 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 31 Jan 2014 17:58:17 -0500
Message-ID: <1391209094.13572.50.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Eric Houby <ehouby@yahoo.com>
Date: Fri, 31 Jan 2014 23:58:14 +0100
In-Reply-To: <009c01cf1ece$2739a820$75acf860$@yahoo.com>
References: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
	<009c01cf1ece$2739a820$75acf860$@yahoo.com>
Organization: Citrix Ltd. UK
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-users@lists.xen.org, xen <xen@lists.fedoraproject.org>,
	M A Young <m.a.young@durham.ac.uk>,
	'Russ Pavlicek' <russell.pavlicek@xenproject.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] REMINDER: Feb 3 is Xen Project Test Day
	for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1128720736836542366=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1128720736836542366==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-87J3ms/PedHKjoZnE20P"

--=-87J3ms/PedHKjoZnE20P
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-01-31 at 14:48 -0700, Eric Houby wrote:
> > Next Monday, February 3, is the Test Day for Xen 4.4 Release Candidate 3.
> >=20
> > General Information about Test Days can be found here:
> > http://wiki.xenproject.org/wiki/Xen_Test_Days
> >=20
> > and specific instructions for this Test Day are located here:
> > http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
> >=20
>
> Russ,
>=20
> On the RC3 test instructions page, the link for the RC3 RPMs is pointing to
> what looks like the RC2 RPMs from 1/16.  Will there be updated RPMs for this
> test day?
>=20
Michael, what do you think? It's late I know... Sorry for that, but I'm
travelling and couldn't direct your attention to this before.

Thanks in any case and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-87J3ms/PedHKjoZnE20P
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLsKocACgkQk4XaBE3IOsT5rwCfcxhHJvyZnJN86p+RsmKbGSER
lR0Anjk4w/iGYHWKbycTVp39r94ixXlC
=wnYW
-----END PGP SIGNATURE-----

--=-87J3ms/PedHKjoZnE20P--


--===============1128720736836542366==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1128720736836542366==--


From xen-devel-bounces@lists.xen.org Fri Jan 31 23:05:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 23:05:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9N95-0000pP-Pq; Fri, 31 Jan 2014 23:05:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9N94-0000pH-Jr
	for xen-devel@lists.xensource.com; Fri, 31 Jan 2014 23:05:10 +0000
Received: from [193.109.254.147:14344] by server-13.bemta-14.messagelabs.com
	id F1/D6-01226-62C2CE25; Fri, 31 Jan 2014 23:05:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391209505!1226599!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15673 invoked from network); 31 Jan 2014 23:05:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	31 Jan 2014 23:05:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96726852"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 31 Jan 2014 23:05:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 18:05:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9N8y-0002RQ-AM;
	Fri, 31 Jan 2014 23:05:04 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9N8x-00032A-Q2;
	Fri, 31 Jan 2014 23:05:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24680-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 31 Jan 2014 23:05:03 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24680: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24680 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24680/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail blocked in 24397
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install   fail blocked in 24397

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501
baseline version:
 linux                4b9c8e9bd1f5c549fb581f7edae250d4d9ebc922

------------------------------------------------------------
People who touched revisions under test:
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  Ian Abbott <abbotti@mev.co.uk>
  Ingo Molnar <mingo@kernel.org>
  Jean Delvare <khali@linux-fr.org>
  Jianguo Wu <wujianguo@huawei.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Zijlstra <peterz@infradead.org>
  Robert Richter <rric@kernel.org>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Steven Rostedt <rostedt@goodmis.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=a13224074af5f2813d52d15e67fc97e4c5741501
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 a13224074af5f2813d52d15e67fc97e4c5741501
+ branch=linux-3.4
+ revision=a13224074af5f2813d52d15e67fc97e4c5741501
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git a13224074af5f2813d52d15e67fc97e4c5741501:tested/linux-3.4
Counting objects: 86, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (56/56), 17.41 KiB, done.
Total 56 (delta 42), reused 35 (delta 21)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   4b9c8e9..a132240  a13224074af5f2813d52d15e67fc97e4c5741501 -> tested/linux-3.4
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=a13224074af5f2813d52d15e67fc97e4c5741501
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 a13224074af5f2813d52d15e67fc97e4c5741501
+ branch=linux-3.4
+ revision=a13224074af5f2813d52d15e67fc97e4c5741501
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git a13224074af5f2813d52d15e67fc97e4c5741501:tested/linux-3.4
Counting objects: 86, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (56/56), 17.41 KiB, done.
Total 56 (delta 42), reused 35 (delta 21)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   4b9c8e9..a132240  a13224074af5f2813d52d15e67fc97e4c5741501 -> tested/linux-3.4
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Jan 31 23:07:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jan 2014 23:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9NB3-0000wz-AZ; Fri, 31 Jan 2014 23:07:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1W9NB2-0000wr-5J
	for xen-devel@lists.xenproject.org; Fri, 31 Jan 2014 23:07:12 +0000
Received: from [85.158.143.35:60324] by server-3.bemta-4.messagelabs.com id
	C7/DF-11539-F9C2CE25; Fri, 31 Jan 2014 23:07:11 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391209629!2310731!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23513 invoked from network); 31 Jan 2014 23:07:10 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 23:07:10 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 31 Jan 2014 23:07:09 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="643624147"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.175])
	by fldsmtpi03.verizon.com with ESMTP; 31 Jan 2014 23:07:09 +0000
Message-ID: <52EC2C9C.9090202@terremark.com>
Date: Fri, 31 Jan 2014 18:07:08 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, 
 xen-devel@lists.xenproject.org
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
In-Reply-To: <21227.37769.599789.521146@mariner.uk.xensource.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On CentOS release 5.10 (Final) I hit QEMU bug #1257099:

lt LINK libcacard.la
/usr/bin/ld: libcacard/.libs/vcard.o: relocation R_X86_64_PC32 against `vcard_buffer_response_delete' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status
make[3]: *** [libcacard.la] Error 1
make[3]: Leaving directory `/home/don/xen-4.4.0-rc3/tools/qemu-xen-dir'
make[2]: *** [subdir-all-qemu-xen-dir] Error 2
make[2]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
make: *** [install-tools] Error 2

See https://bugs.launchpad.net/bugs/1257099

Based on

https://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01826.html

it should make it into QEMU at some point.

So for now I can either change tools/Makefile to pass "--disable-smartcard-nss" to QEMU's configure, or use this patch:


 From c6ce0e32c09979ba5d7d0d416293fbc700372c61 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Fri, 31 Jan 2014 20:59:28 +0000
Subject: [PATCH] tools/Makefile: Change QEMU_XEN_ENABLE_DEBUG to an add to
  allow for additional QEMU options.

This is currently needed to work around QEMU bug #1257099 on CentOS 5.10,

i.e. via:

export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
  tools/Makefile | 4 +---
  1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 00c69ee..a3b8a7e 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -174,9 +174,7 @@ qemu-xen-dir-force-update:
         fi
  
  ifeq ($(debug),y)
-QEMU_XEN_ENABLE_DEBUG := --enable-debug --enable-trace-backend=stderr
-else
-QEMU_XEN_ENABLE_DEBUG :=
+QEMU_XEN_ENABLE_DEBUG += --enable-debug --enable-trace-backend=stderr
  endif
  
  subdir-all-qemu-xen-dir: qemu-xen-dir-find
-- 
1.8.2.1


and also:

export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss
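
The patch works because of how GNU make treats environment variables: an
assignment with ':=' inside a makefile clobbers a value exported from the
environment, while '+=' appends to it. A minimal sketch (the FLAGS variable
and /tmp/append.mk file are invented stand-ins for QEMU_XEN_ENABLE_DEBUG and
tools/Makefile):

```shell
# Build a tiny makefile that appends to FLAGS when debug=y, mirroring the
# patched ifeq block in tools/Makefile.
{
  printf 'debug := y\n'
  printf 'ifeq ($(debug),y)\n'
  printf 'FLAGS += --enable-debug\n'
  printf 'endif\n'
  printf 'all:\n\t@echo $(FLAGS)\n'
} > /tmp/append.mk

# With '+=' the environment-supplied option survives alongside the debug
# flags; with ':=' it would be silently discarded.
FLAGS=--disable-smartcard-nss make -s -f /tmp/append.mk
# prints: --disable-smartcard-nss --enable-debug
```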


This gets me to:

Parsing /home/don/xen/tools/ocaml/libs/xl/../../../../tools/libxl/libxl_types.idl
  MLDEP
make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
make[7]: Entering directory `/home/don/xen/tools/ocaml/libs/xl'
  MLC      xenlight.cmo
  MLA      xenlight.cma
  CC       xenlight_stubs.o
cc1: warnings being treated as errors
xenlight_stubs.c: In function 'Defbool_val':
xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
xenlight_stubs.c: In function 'String_option_val':
xenlight_stubs.c:379: error: expected expression before 'char'
xenlight_stubs.c: In function 'aohow_val':
xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
make[7]: *** [xenlight_stubs.o] Error 1
make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
make[6]: *** [subdir-install-xl] Error 2
make[6]: Leaving directory `/home/don/xen/tools/ocaml/libs'
make[5]: *** [subdirs-install] Error 2
make[5]: Leaving directory `/home/don/xen/tools/ocaml/libs'
make[4]: *** [subdir-install-libs] Error 2
make[4]: Leaving directory `/home/don/xen/tools/ocaml'
make[3]: *** [subdirs-install] Error 2
make[3]: Leaving directory `/home/don/xen/tools/ocaml'
make[2]: *** [subdir-install-ocaml] Error 2
make[2]: Leaving directory `/home/don/xen/tools'
make[1]: *** [subdirs-install] Error 2
make[1]: Leaving directory `/home/don/xen/tools'
make: *** [install-tools] Error 2


Not sure how to work around this.
     -Don Slutz



On 01/31/14 07:14, Ian Jackson wrote:
> We've just tagged 4.4.0-rc3, please test and report bugs.
>
> The tarball can be downloaded here:
>
> http://bits.xensource.com/oss-xen/release/4.4.0-rc3/xen-4.4.0-rc3.tar.gz
>
> Ian.
>
> (PS: Due to an oversight by me, the version number in the xen/Makefile
> is still "-rc2", so the message printed at startup by 4.4.0-rc3 claims
> that it's "4.4.0-rc2".  Sorry about that.)
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

